Hacker News

In my tests, GPT-OSS-120B Q8 was close to DeepSeek R1 671B Q16 in solving graduate-level math but much faster with way fewer thinking tokens.


Supporting TFA's thesis that it's trained to be good at benchmarks.


Is that bad? It was trained on synthetic data with an emphasis on coding and scientific reasoning. Good in my opinion: that's what it should be used for, not as a universal do-it-all model.





