
I don't agree. The average consumer application these days should be parallelized like any other, so more cores should be better. It's not last century anymore.


The time taken for any process is still limited by Amdahl's law. You can't make the inherently serial parts any faster by throwing more cores at them.
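To make the point concrete, here is a small sketch of Amdahl's law: the speedup on n cores when only a fraction p of the work is parallelizable (the p values below are illustrative, not measurements of any particular application).

```python
def amdahl_speedup(p: float, n: int) -> float:
    # Amdahl's law: serial fraction (1 - p) runs on one core
    # no matter how many cores you throw at the rest.
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallelizable, 8 cores give
# well under 8x, and the curve flattens fast:
for n in (2, 4, 8, 64):
    print(n, round(amdahl_speedup(0.9, n), 2))
# → 2 1.82 / 4 3.08 / 8 4.71 / 64 8.77
```

The serial 10% caps the speedup at 10x no matter how many cores you add.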


I disagree. Any CPU you get nowadays has a minimum of 4 cores, which is plenty for the average consumer application. The most basic things people do, such as email, browsing, and messaging, are still bottlenecked by single-threaded performance more than multi-threaded. Even most AAA games don't scale beyond 8 cores.


It’s not about a single application scaling to all cores; modern OSes do A LOT in the background.


Even on my decade-old CPU, all those background OS processes fit into a single core.


And those background tasks can be scheduled onto the slower cores.


Basic things like compression can be needed by a consumer application, and that's just one example.
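Compression is a good example because it parallelizes cleanly: split the input into chunks and compress them independently, the way pigz does. A minimal sketch (the function name `compress_chunks` is my own; zlib.compress releases the GIL, so plain threads scale across cores here):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunks(data: bytes, chunk_size: int = 1 << 20, workers: int = 4):
    # Split the input into fixed-size chunks and compress them in
    # parallel. Each chunk becomes an independent zlib stream, which
    # costs a little compression ratio but lets every core work.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, chunks))
```

Decompressing each chunk and concatenating the results recovers the original bytes.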

So I completely disagree with the "2 cores are enough for a consumer application" idea.

And it's even further from the truth for games. Last time I looked at something like Cyberpunk 2077 in the debugger, it had 81 threads. 81! Some of them were vkd3d-proton threads, but only a small fraction.

And it actually does load CPU pretty evenly if you monitor it, so I'd say it scales OK.


Cyberpunk may not be an extreme outlier, but it's definitely better than the average game at making effective use of more than a handful of CPU cores.


The data those threads are touching is just as important. Cache locality is hugely important to many game workloads.
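A rough sketch of why access pattern matters: the two sweeps below do identical work over the same flat buffer, differing only in traversal order. In Python the interpreter overhead masks most of the gap, but in a compiled language the row-major walk is dramatically faster because it touches memory sequentially.

```python
from array import array

N = 512
grid = array("d", range(N * N))  # flat, contiguous buffer of doubles

def sweep(row_major: bool = True) -> float:
    # Same arithmetic either way; only the memory-access pattern differs.
    total = 0.0
    for i in range(N):
        for j in range(N):
            # row-major walks the buffer sequentially (cache-friendly);
            # column-major jumps N * 8 bytes per access (cache-hostile).
            idx = i * N + j if row_major else j * N + i
            total += grid[idx]
    return total
```

Both orders compute the same total; the difference shows up purely in cache behavior, which is why engines lay game data out for the hot loop that consumes it.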


Not the case for gaming; single-core performance is still king for the average game.


For the average game from 10 years ago, maybe. Not for modern games: for them, GPU performance is king, not single-core CPU performance.



