Ask HN: Are past LLM models getting dumber?
4 points by hmate9 23 hours ago | 4 comments
I’m curious whether others have observed this or whether it’s just perception or confirmation bias on my part. I’ve seen discussion on X suggesting that older models (e.g., Claude 4.5) appear to degrade over time, possibly due to increased quantization, throttling, or other inference-cost optimizations after newer models are released. Is there any concrete evidence of this happening, or technical analysis that supports or disproves it? Or are we mostly seeing subjective impressions without controlled benchmarks?
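For concreteness, here is a minimal sketch of the kind of controlled benchmark I mean: a fixed prompt set, a pinned model identifier, temperature 0, and a timestamped score log, so the same run repeated weeks apart is directly comparable. The model name, prompt file, and the OpenAI-compatible client are placeholders for illustration, not anything specific referenced above.

    # Minimal sketch of a longitudinal regression benchmark. Assumptions:
    # the model name, prompt file, and an OpenAI-compatible chat endpoint
    # are placeholders, not taken from this thread.
    import json
    import datetime
    from openai import OpenAI  # assumes an OpenAI-compatible chat API

    MODEL = "claude-4-5"            # hypothetical pinned model identifier
    PROMPTS_PATH = "prompts.jsonl"  # each line: {"prompt": ..., "expected": ...}

    client = OpenAI()

    def run_once() -> float:
        """Exact-match accuracy over the fixed prompt set for one run."""
        correct = total = 0
        with open(PROMPTS_PATH) as f:
            for line in f:
                case = json.loads(line)
                resp = client.chat.completions.create(
                    model=MODEL,
                    temperature=0,  # fix decoding so runs are comparable
                    messages=[{"role": "user", "content": case["prompt"]}],
                )
                answer = resp.choices[0].message.content.strip()
                correct += int(answer == case["expected"])
                total += 1
        return correct / total

    if __name__ == "__main__":
        score = run_once()
        # Append a timestamped score so runs weeks apart can be compared.
        with open("scores.csv", "a") as out:
            out.write(f"{datetime.datetime.utcnow().isoformat()},{MODEL},{score:.4f}\n")

If the score on the same pinned model drifts downward across such runs, that would at least be evidence worth investigating; if it stays flat, the "getting dumber" impression is more likely perception.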

No technical analysis, but all models experience drift eventually.

No, you're just getting used to them.

Interesting.
