> Why is there not any kind of narrative out there describing how fake and soulless is code written by any AI agent?
Because soulless code does not matter.
In other fields the result is more subjective. I don't like movies with a desaturated color palette; a lot of people like them. Maybe LLMs can produce a new genre of movies that people who appreciate classic films or music find soulless, and they find it sad that the peasants kind of like these films, and see the whole thing as a risk to their careers, their whole craft, and the human effort that goes into making their art.
In code it's objective: either the result works or it doesn't. I guess you can stretch "it works" to include maintainability, where it starts to get more subjective, but even then you can reach a point where the whole thing collapses under its own weight.
I think this is the main difference in how different fields react to LLMs. In fields that are subjective and more sensitive to the receiver's taste, you can notice rage against it (rage is probably an overstatement), while in fields where the result is objective, people simply say it works or it doesn't.
> I am guessing that no-one ever gets convicted for this murder.
He was arrested by Israeli police for questioning, but was later released on house arrest while an investigation continued.
About a dozen Israeli soldiers raided the mourning tent, pushing those attending out while keeping a thumb on the pin of a stun grenade. Soldiers declared the area a closed military zone and said only residents of the village could be present. They arrested two activists and threw stun grenades at journalists who were too slow to leave.
Was there ever a single serious arrest and conviction of anybody on the Israeli side in the past two decades, be it civilians or IDF? Serious question; it doesn't seem so in similar attacks (and they are not that rare and will probably escalate).
A few. For example, Meir Ettinger, a hilltop youth leader, has served some time in jail.
The idea for Israel was to have its national criminal jurisdiction prosecute just enough not to be seen as failing by the ICC and to meet its 'complementarity' criterion [0]. It even went as far as spying on ICC staff to see who it was investigating.
At least that's how it used to be; now they just threaten the ICC.
One theory I saw earlier was that the industry is bloated. Big tech executives knew that, but they continued to hire anyway to make sure the people they didn't hire wouldn't start competitors while funding was plentiful. There is less funding now, so that risk is gone and companies can shrink to their actual needs. But maybe that doesn't apply to Intel, since they seem to be in a really bad situation now.
This project sounds really interesting as an alternative to Cloudflare and for decentralizing the internet. But for a low-traffic home server, what would I gain from using it instead of directly exposing a single port on my home server with nginx? I have a static IP from my ISP, and right now the server's IP is exposed directly. What would I gain by putting a cheap VPS in front as a proxy?
A big problem I keep facing when reviewing junior engineers' code is not the code quality itself but the direction the solution went in. I'm not sure LLMs are capable of replying with a question about why you want to do it that way (yes, like the famous Stack Overflow answers).
Nothing fundamentally prevents an LLM from achieving this. You can ask an LLM to produce a PR, another LLM to review a PR, and another LLM to critique the review, then another LLM to question the original issue's validity, and so on...
The reason LLMs are such a big deal is that they are humanity's first tool general enough to support recursion (besides humans, of course). If you can use an LLM, there's like a 99% chance you can program another LLM to use that LLM the same way you do:
People learn the hard way how to properly prompt an LLM agent product X to achieve results -> some company encodes these learnings in a system prompt -> we now get a new agent product Y that is capable of using X just like a human would -> we no longer use X directly. Instead, we move up one level in the command chain and use product Y. And this recursion goes on and on, until the world doesn't have any level left for us to go up to.
We have basically been watching this play out in real time with coding agents over the past few months.
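That chain is straightforward to sketch. Here's a minimal illustration, assuming a generic `call_llm` helper as a stand-in for whatever chat-completion client you actually use; every function and role name here is made up for the example:

```
# Minimal sketch of the PR -> review -> critique-of-review -> question-the-issue
# chain described above. call_llm is a placeholder, not any real provider's API.

def call_llm(system: str, user: str) -> str:
    """Stand-in for your chat-completion client of choice."""
    raise NotImplementedError("wire this to an actual LLM API")

def produce_pr(issue: str) -> str:
    return call_llm("You are a software engineer. Write a patch for this issue.", issue)

def review_pr(issue: str, patch: str) -> str:
    return call_llm("You are a code reviewer. Critique this patch.",
                    f"Issue:\n{issue}\n\nPatch:\n{patch}")

def critique_review(review: str) -> str:
    return call_llm("You review code reviews. Is this review fair and complete?", review)

def question_issue(issue: str) -> str:
    # The step asked about upthread: push back on the premise itself.
    return call_llm("Before any code is written, ask: is this issue worth solving, "
                    "and is the requested approach the right one?", issue)
```

Each stage only consumes the previous stage's text output, which is what makes stacking yet another level on top so cheap.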
I assume you ignored "teleology" because you concede the point, otherwise feel free to take it.
" Is there an “inventiveness test” that humans can pass but LLMs don’t?"
Of course: any topic where there is no training data available and that cannot be extrapolated by simply mixing existing data. That is harder to test on current unknowns and unknown unknowns.
But it is trivial to test on retrospective knowledge. Just train the AI on text up to, say, 1800 and see if it can come up with antibiotics and general relativity, or if it will simply repeat outdated theories of disease and Newtonian gravity.
I don't think it would settle things even if we did manage to train an 1800-era LLM of sufficient size.
LLMs are blank slates (like an uncultured, primitive human being; an LLM does come with built-in knowledge, but that is mostly irrelevant here). LLM output is purely a function of the input (the context), so an agentic system's capabilities are not the same as the underlying LLM's capabilities.
If you ask such an LLM to "overturn Newtonian physics, come up with a better theory", of course it won't give you relativity just like that, the same way an uneducated human has no chance of coming up with relativity either.
However, ask it this:
```
You are Einstein ...
<omitted: 10 million tokens establishing Einstein's early life and learnings>
... Recent experiments have put these ideas to doubt, ...<another bunch of tokens explaining the Michelson–Morley experiment>... Any idea why this occurs?
```
and provide it with tools to find books, speak with others, run experiments, etc. Conceivably, the result will be different.
Again, we pretty much see this play out in coding agents:
Claude the LLM has no prior knowledge of my codebase so of course it has zero chance of solving a bug in it. Claude 4 is a blank slate.
Claude Code, the agentic system, can (a rough sketch of the resulting loop follows this list):
- look at a screenshot.
- know what the overarching goal is from past interactions & various documentation it has generated about the codebase, as well as higher-level docs describing the company and products.
- realize the screenshot is showing a problem with the program.
- form hypothesis / ideate why the bug occurs.
- verify hypotheses by observing the world ("the world" to Claude Code is the codebase it lives in, so by "observing" I mean it reads the code).
- run experiments: modify code then run a type check or unit test (although usually the final observation is outsourced to me, so I am the AI's tool as much as the other way around.)
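The capabilities in that list amount to an observe/hypothesize/experiment loop. A rough sketch, assuming a hypothetical `run_llm` call and a pytest-based test suite; none of this is Claude Code's actual internals:

```
# Rough observe -> hypothesize -> experiment -> verify loop, as listed above.
# run_llm and the file/test layout are assumptions, not Claude Code's real internals.
import subprocess

def run_llm(prompt: str) -> str:
    """Placeholder for the underlying model call."""
    raise NotImplementedError

def observe(path: str) -> str:
    # "Observing the world" here just means reading the code.
    with open(path) as f:
        return f.read()

def run_experiment() -> str:
    # Run the test suite and capture its output as evidence.
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.stdout + result.stderr

def debug_step(bug_report: str, suspect_file: str) -> str:
    code = observe(suspect_file)
    hypothesis = run_llm(
        f"Bug: {bug_report}\nCode:\n{code}\nWhy might this happen? Propose a fix."
    )
    # A human (or the agent) applies the proposed fix, then the loop verifies it.
    evidence = run_experiment()
    return run_llm(
        f"Proposed fix:\n{hypothesis}\nTest output:\n{evidence}\n"
        "Did the fix work? If not, revise the hypothesis."
    )
```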
They are definitely capable. Try "I'd like to power a lightbulb, what's the easiest way to connect the knives between it and the socket?" The reply will start by saying it's a bad idea. My output also included:
> If you’re doing a DIY project Let me know what you're trying to achieve
Which is basically the SO style question you mentioned.
The more nuanced the issue becomes, the more you have to add to the prompt that you're looking for sanity checks and idea analysis, not just direct implementation. But it's always possible.
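For example, one way to make the sanity check the default rather than something you have to remember to ask for; the prompt wording and `call_llm` helper below are illustrative, not any product's real defaults:

```
# Bake "question the approach first" into the system prompt instead of hoping for it.
# The prompt wording and call_llm are illustrative assumptions.

SANITY_CHECK_PROMPT = (
    "Before implementing anything, say whether the requested approach is reasonable. "
    "If there is a simpler or safer alternative, ask why the user wants it this way "
    "(Stack Overflow style) before writing any code."
)

def call_llm(system: str, user: str) -> str:
    """Stand-in for your chat client."""
    raise NotImplementedError

# Usage:
# call_llm(SANITY_CHECK_PROMPT,
#          "What's the easiest way to connect knives between a lightbulb and the socket?")
```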
You can ask it why, but if it proposes the wrong approach, just ask it to do what you want. What is wrong with iteration?
I frequently have the LLM write a proposal.MD first and iterate on that, then have it write the full solution and iterate on that.
It is interesting to see whether the proposal matches what I had in mind, and many times it uses tech or ideas that I didn't know about myself, so I am constantly learning too.
I might not have been clear in my original reply. I don't have this problem when using an LLM myself; I sometimes notice it when I review code written by new joiners with the help of an LLM. The code quality is usually OK unless I want to be pedantic, but sometimes the agent makes newcomers dig themselves deeper into the wrong approach, whereas if they had asked a human coworker they would probably have noticed that the solution was going the wrong way from the start. That touches on what the original article is about. I don't know if that is incompetence acceleration, but if used wrong, or not in a clearly directed way, it can produce something that works but has monstrous, unneeded complexity.
We had the same worries about StackOverflow years ago. Juniors were going to start copying code from there without any understanding, with unnecessary complexity and without respect for existing project norms.
Someone interned at a company, saw and worked on the IP and architecture, and after leaving created something that can be viewed as a copy of the core business of the place they worked at (the emails say that even some of the UI design and language descriptions were copied). Maybe the response was a bit too heavy-handed, but you don't exactly expect roses after doing something like that.
This seems somewhat unethical. Whether it is legal or not is up to lawyers and legal specialists to decide, and the founder wanted those people to get involved to decide that. Again, nothing crazy to expect after you create a copy of a project you were paid (or at least trained) to work on and learn all about.
The idea is hardly novel. On top of that, if lines of code were not copied, no foul.
Not only was the CEO being a bully but he was wrong. There is no ethical dilemma here. It happens every day. You are allowed to copy an idea for software. Not the literal code. If he wrote it all himself this should be a non-issue. I also urge you to look at how the CEO “apologized”. I will never use their service for that alone.
This is deceptive, as his specific role at Replit had nothing to do with his later open source work. Also, Replit is not innovative, as there exist many similar solutions. How can you be accused of copying someone's work if that work is itself a copy of other existing work?
Moreover, to quote from his article:
> I worked for Replit in Summer 2019, where I was asked to rebuild Replit’s package management stack
What does a package management stack have to do with an open source IDE?
If someone interned as a doctor's assistant at a medical center and then later started their own medical center, can their previous employer sue them for that? It's nonsense. There is nothing innovative or exclusive about launching a medical center, just like there is nothing innovative or exclusive about launching an IDE. It's old tech that has been implemented a thousand times. The author is the only one who innovated on the concept, by making it open source.
If Replit can sue this guy, then Cloud9 can sue Replit, WebStorm can sue Cloud9, Microsoft can sue WebStorm, etc, etc... Who even invented the first IDE?
Replit was deceptive. They know they are in the wrong and used malicious, unfounded legal threats to scare him into doing what they wanted.
> If someone interned as a doctor's assistant at a medical center and then later started their own medical center, can their previous employer sue them for that? It's nonsense.
As I said, the legality of this is not so simple to answer. Yes, you can intern as a doctor at one place and then open a similar one, and if someone tries to file a suit about it, I think it will be very hard to find a sympathetic judge willing to look into it. But once you bring IP into this, it becomes a lot more complicated. Calculus is also about ideas, yet that didn't stop Leibniz and Newton from making accusations of plagiarism.
> If Replit can sue this guy, then Cloud9 can sue Replit, WebStorm can sue Cloud9, Microsoft can sue WebStorm, etc, etc... Who even invented the first IDE?
The difference here is that the guy worked/interned at Replit. That is what moves it for me from the founder being an asshole to a grey area: he sees that someone had access to all the resources at the company and now wants to use that knowledge (or at least that access) to create an alternative, and he decides to go with a heavy-handed approach before it becomes a big headache. Was he nice in how he went about it? No.
> As I said the legality of this is not so simple to answer … but once you bring IP into this it becomes a lot more complicated, calculus is also about ideas,
From a legal perspective, there is no such thing as “IP”. There are copyrights, patents, trademarks, and trade secrets. If you want to talk about legalities, you have to start by saying which of those four were violated. “Ideas” alone have no legal protections.
You think it's unethical to work at a company and then later create a copy of what the company does. Fairchild Semiconductor would like a quiet moment with you.
Does it still seem unethical if the proposition is inverted? What if a company figures out they no longer need your skills and/or labour? Is it unethical for them to lay you off? What if they actually had to do some work behind your back in order to figure out how to do this?
What do you think about a company that offers people a home for their digital creativity, and then uses that creativity to build technology that makes the skills and network their "users" have acquired over their lives worthless?
You'd expect the users to adapt to the situation, find new skills, and get on with their lives, right? Which is exactly what Replit should do instead of suing. Apparently they are well funded now, so it shouldn't be a problem.
> It's not that all organizations don't have disfunction, but as a younger person I was very fixated on everything I saw that was wrong and ignored either the problems I myself was introducing which were (and probably still are) numerous or the systems and programs that made the organization successful.
Speaking from my own experience, when I look back at my early career years, I didn't appreciate how resilient organizations can be to dysfunction. Yes, 100 things are broken, but for the most part everything will still be fine.
Not to mention how incredibly challenging it is to build an organization that is highly functioning. I've had the pleasure of working at a few different companies and I feel the most internally mature were the companies that had been around the longest. Sure, they had issues, but fundamentally they had insulated themselves from a large class of problems that the startups were constantly fighting through.
I don't live in America, but I think this is related to the unprecedented mass communication made possible by the internet. As cliché as it sounds, it has truly made borders disappear between different countries, continents, and cultures, exposing this generation to things previous generations didn't even know existed or had people who believed in them. The only countries that seem a bit isolated from this are the ones with their own internet circles because of language or laws.
But I'm a little optimistic that this will eventually pass with time. If we apply `Tuckman's stages of group development` to the whole world and the communication between its parts, I think we are in the storming phase.
What type of protections are used on HN? Rate limiting? IP range blacklists?