I see comments like this a lot. In fact, I've run into it in my own side projects that I work on by myself -- what is this slop and how do I fix it? I only have myself to blame.
I can't speak to open source orgs like curl, but at least at the office, the company should invest time in educating engineers on how to use AI in a way that doesn't waste everyone's time. It could be introducing domain-specific skills, rules that ensure TDD is followed, ADRs are generated, work logs, etc.
I found that when I started implementing workflows like this, there was less slop, and if anyone wanted to know "why did we do it like X?" we could point to the ADR and show what assumptions were made. If an assumption was fundamentally wrong, we could tell the agent to correct the assumption and fix the issue (and of course leave a paper trail).
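For anyone unfamiliar with ADRs (Architecture Decision Records): they're short documents, one per decision, usually following the structure below. The headings match the common Nygard-style template; the filled-in content here is a made-up illustration, not from any real project.

```markdown
# ADR-0007: Use optimistic locking for inventory updates

## Status
Accepted

## Context
Concurrent inventory updates were causing lost writes. We assumed
write contention is rare (<1% of updates touch the same row).

## Decision
Add a `version` column and retry on conflict, rather than taking
row-level locks.

## Consequences
- Hot rows will see retries; if the contention assumption proves
  wrong, revisit with pessimistic locking (supersede this ADR).
```

The "Context" section is where you record assumptions explicitly, so that when one turns out wrong, the reviewer (or the agent) can point at it rather than re-litigating the whole design.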
Engineers who waste other engineers' time reviewing slop PRs should just be fired. AI is no excuse to start producing bad code. The engineer should still be responsible for the code they ship.
> Engineers who waste other engineers' time reviewing slop PRs should just be fired. AI is no excuse to start producing bad code. The engineer should still be responsible for the code they ship.
Yeah, this is the unfortunate truth about what's going on here, in my opinion. The underlying problem is that some workplaces just have bad culture or processes that don't do enough to prevent (or even actively encourage) being a bad teammate. AI isn't going to solve that, but it's also not really the cause, and at the end of the day you're going to have problems at a place like that regardless of whether AI is being used or not.