Hacker News

It’s funny to see where we are on model improvements.

Back when I was maintaining a coding harness around the time of Claude 3.5, we tried hash prefixes, line-number prefixes, and a lot of other approaches to make the model better at selecting edit blocks, and ultimately, at least back then, fuzzy string matching won out.
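A minimal sketch of what "fuzzy string matching won out" can look like in practice: slide a window over the file and score each candidate span against the model's search block with a similarity ratio, accepting the best match above a threshold. The function name, threshold, and example are illustrative, not the harness's actual code.

```python
import difflib

def find_fuzzy_match(haystack_lines, needle_lines, threshold=0.8):
    """Return the start index of the best fuzzy match for needle_lines
    inside haystack_lines, or None if nothing scores above threshold."""
    needle = "\n".join(needle_lines)
    window = len(needle_lines)
    best_score, best_start = 0.0, None
    for start in range(len(haystack_lines) - window + 1):
        candidate = "\n".join(haystack_lines[start:start + window])
        score = difflib.SequenceMatcher(None, needle, candidate).ratio()
        if score > best_score:
            best_score, best_start = score, start
    return best_start if best_score >= threshold else None

file_lines = ["def add(a, b):", "    return a+b", "", "def sub(a, b):", "    return a - b"]
# The model's search block has slightly different whitespace than the file:
search = ["def add(a, b):", "    return a + b"]
print(find_fuzzy_match(file_lines, search))  # matches at index 0 despite the drift
```

The appeal is that small whitespace or punctuation drift between the model's echo and the real file no longer causes a hard "edit block not found" failure.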



Yes, very similar results here (http://brokk.ai)

We got lines-with-anchors working fine as a replacement strategy; the problem was that when you don't make the model echo what it's replacing, it's literally dumber at writing the replacement. We lost more to test failures and retries than we gained in faster outputs.

Makes sense when you think about how powerful the "think before answering" principle is for LLMs, but it's still frustrating.
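The lines-with-anchors strategy described above can be sketched roughly like this: the model names a line range plus the lines it expects at the edges, and the harness applies the edit only if those anchors still match the file. All names and the example are illustrative assumptions, not Brokk's actual implementation.

```python
def apply_anchored_edit(lines, start, end, first_anchor, last_anchor, replacement):
    """Replace lines[start:end] (0-based, end exclusive) with replacement,
    but only if the anchors confirm the range hasn't drifted."""
    if lines[start] != first_anchor or lines[end - 1] != last_anchor:
        raise ValueError("anchor mismatch: file changed since the model read it")
    return lines[:start] + replacement + lines[end:]

src = ["a = 1", "b = 2", "c = 3"]
out = apply_anchored_edit(src, 1, 2, "b = 2", "b = 2", ["b = 20"])
print(out)  # ['a = 1', 'b = 20', 'c = 3']
```

Note that the model only has to emit the anchors and the new text, not a full echo of the replaced span, which is exactly the output savings, and apparently the quality cost, the comment describes.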



