> to handle the same number of requests/sec you need much more workers, which has a lot of overhead
When doing multiprocessing, it's advisable to fork() instead of creating fresh processes (or launching them in containers, etc.). As long as you preload the necessary libraries/modules in the parent process before the fork, children will reference the same memory in a copy-on-write fashion. That keeps overhead extremely low.
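A minimal sketch of the preload-then-fork pattern (POSIX only; the data structure and job function are illustrative, not from any particular framework). The parent builds large read-only state once; each forked child reads it through copy-on-write pages instead of rebuilding it:

```python
import os

# Heavy imports and large read-only data are set up in the parent BEFORE
# forking; children then share these pages copy-on-write instead of
# re-importing/rebuilding them. LOOKUP stands in for preloaded state.
LOOKUP = {i: i * i for i in range(100_000)}

def handle_job(job_id):
    # Reads don't copy pages; only a write would trigger a private copy.
    return LOOKUP[job_id]

pids = []
for job_id in (7, 42, 99):
    pid = os.fork()
    if pid == 0:                          # in the child
        ok = handle_job(job_id) == job_id * job_id
        os._exit(0 if ok else 1)          # exit without parent's cleanup
    pids.append(pid)                      # in the parent

# Parent reaps each child; wait status 0 means it exited cleanly.
statuses = [os.waitpid(p, 0)[1] for p in pids]
```

Note that a write to any shared structure (even a refcount bump in some runtimes) faults the page into a private copy, so the savings are best for genuinely read-only data.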
This [1] article is about the Unicorn (Ruby) webserver, but the way it explains forking, signals, and pipes for inter-process control is absolutely illuminating. These techniques are applicable to any Unix-based application, including PHP daemons.
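The master/worker control pattern the article walks through can be sketched in a few lines (this is my own hypothetical reduction, not code from the article): the master forks workers, a pipe handshake confirms each worker has installed its signal handler, and SIGTERM is the graceful-shutdown request.

```python
import os
import signal

def spawn_worker():
    r, w = os.pipe()                      # readiness handshake
    pid = os.fork()
    if pid == 0:                          # worker process
        os.close(r)
        # Exit cleanly when the master asks us to stop.
        signal.signal(signal.SIGTERM, lambda *_: os._exit(0))
        os.write(w, b"1")                 # tell the master we're ready
        os.close(w)
        while True:
            signal.pause()                # a real worker would serve requests
    os.close(w)                           # master process
    os.read(r, 1)                         # block until the worker is ready
    os.close(r)
    return pid

workers = [spawn_worker() for _ in range(2)]

statuses = []
for pid in workers:
    os.kill(pid, signal.SIGTERM)          # graceful-shutdown signal
    statuses.append(os.waitpid(pid, 0)[1])
```

The pipe is there to close the race where the master signals a child before the child has installed its handler; Unicorn uses the same fork/signal/pipe primitives for much richer control (reexec, worker timeouts, zero-downtime restarts).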
Thanks, that's a nice idea. Kind of hard to implement in the setup I'm thinking about right now, but, still, that never occurred to me, so this is something to consider.
But, of course, it doesn't replace proper async/await. First of all, there are other situations where this solution doesn't apply. And even in the described scenario it isn't ideal. For starters, preloading everything before the fork is easier said than done: quite often the data you need to fetch depends on the request, yet happens to be very reusable between multiple jobs. It also complicates the code quite a bit: it forces you to constantly keep in mind, across the whole codebase, things you wouldn't want developers to think much about in the first place. Additionally, it makes DevOps harder: people want to rely on LoadAvg (which is a very unreliable metric, but explaining that to people every time doesn't make for a friendly environment), and multiple jobs doing nothing but waiting for I/O to finish send it sky-high. Then some DevOps tools (which you don't necessarily know about, because of inter-department communication) try to be smart by relying on loadavg and start actually breaking things.
[1] https://thorstenball.com/blog/2014/11/20/unicorn-unix-magic-...