
I'm in the hard disagree camp. I'm heading towards late 60s now, and have been writing software for all of my working life.

I am wondering how your conclusions are so different from mine. One possibility is that you only write "in the small" [0]. LLMs are at least as good as a human at turning out "web grade" software in the small. The Claude CLI is as good an example of this sort of software as any, and every week or two I hit some small bug in it. This type of software doesn't need a "principal software engineer".

The second possibility is that you've never used an LLM to write software in the large. LLMs are amazing things, far better than humans at reading and untangling code. Give them some obfuscated JavaScript and they'll regurgitate commented code with self-explanatory variable and function names in a minute or two. Give them a task and they will happily spit out thousands of lines of code in ten minutes or so, which is astonishing.

Then you look closer, and it's spaghetti. The LLM has no trouble understanding the spaghetti of course, and if you are happy to trust your tests and let the LLM maintain the thing from then on, it's a workable approach.

Until, that is, it gets large enough that a few compile loops exceed the LLM's context window; then it turns to crap. At that point you have to decompose it into modules the LLM can handle. Turns out decomposition is something current LLMs (and junior devs) are absolutely hopeless at. But it's what a principal software engineer is paid to do.

The spaghetti code is a symptom of that same deficiency. If they decide they need code to do X while working on concept Y, they will drop the code for X right beside the code for Y, borrowing state from Y as needed. The result is a highly interconnected ball of mud, which the LLM will understand perfectly until it falls off the context window cliff, and then all hope is lost.
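To make the X-beside-Y pattern concrete, here's a toy Python sketch (all names are hypothetical, just to illustrate the shape of the problem): formatting logic (X) inlined into a report loop (Y) and borrowing its loop state, versus the same logic pulled out behind a narrow interface so each piece stands alone.

```python
# Ball of mud: while working on report generation (Y), currency
# formatting (X) is needed, so it gets dropped inline, reaching
# directly into Y's loop variables.
def generate_report(orders):
    lines = []
    total = 0
    for order in orders:
        total += order["amount"]
        # X inlined: formatting logic entangled with Y's iteration state
        cents = round(order["amount"] * 100)
        lines.append(f"{order['id']}: ${cents // 100}.{cents % 100:02d}")
    lines.append(f"total: ${total:.2f}")
    return "\n".join(lines)


# Decomposed: X is an isolated unit with a narrow interface, so
# either piece can be read (or fit in a context window) on its own.
def format_dollars(amount):
    cents = round(amount * 100)
    return f"${cents // 100}.{cents % 100:02d}"


def generate_report_decomposed(orders):
    lines = [f"{o['id']}: {format_dollars(o['amount'])}" for o in orders]
    lines.append(f"total: {format_dollars(sum(o['amount'] for o in orders))}")
    return "\n".join(lines)
```

Both versions produce identical output; the difference only shows up when the codebase grows and you need to understand, test, or regenerate one piece without holding the whole thing in your head at once.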

As long as LLMs remain unable to implement a complex request as simple, isolated parts, a principal engineer's job is safe. In fact, given LLMs are accelerating everything else, my guess is the demand will only grow. But I suspect the LLM developers are working hard at solving this limitation.

[0] https://en.wikipedia.org/wiki/Programming_in_the_large_and_p...
