Without addressing deployment canaries, exponential deployments, exponential rollbacks, traffic steering, API load duplication to staging, IaaS/PaaS, 12factor apps, HA, scaling, fault injection, hot backups, cold (and tested) backups, DR/BCP, configuration management, CI/CD, monitoring, troubleshooting the entire stack, and SLAs, it doesn't seem to me like a professional-enough treatment to celebrate.
I got the impression that the book is oriented toward solo entrepreneurs building their own SaaS (perhaps I got that wrong). What you have described sure sounds like a good set of standards for an entire infrastructure team to follow, but a bit much for solo devs doing infra work.
This. The book was called Deployment for Makers before, but I am taking it in a more general direction now. I wrote about some of the things mentioned above, but I had to choose where to spend the time (it's already a long book).
There is a reason it's called "from Scratch": I cover starting out. Some of these things bootstrapping startups don't need (they are only nice-to-haves). I will update and improve the text based on feedback, though (including the comment above).
Ad 12-factor. Some makers are better off not following everything. For example, it's fine to use local disk space when you run a single server. So I don't want to include it as dogma.
Ad CI/CD. You either build it yourself (and the book gives you the technical knowledge to do it) or use a vendor (GitHub/GitLab/CircleCI). I actually want to do a bit more on CI/CD, but I'm still thinking about how to approach it.
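To make the vendor option concrete, here is a minimal sketch of a hosted pipeline using GitHub Actions; the `make test` target and `deploy.sh` script are assumptions standing in for whatever build and deploy commands your project actually uses:

```yaml
# .github/workflows/ci.yml — minimal test-then-deploy sketch
name: CI
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test          # assumed test entry point
  deploy:
    needs: test                 # only runs if tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh        # hypothetical script, e.g. rsync + service restart
```

The same shape (test job gating a deploy job) translates directly to GitLab CI stages or CircleCI workflows.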
I have not read the whole book yet, but it definitely talks about some of these things (12-factor apps, canaries, scaling, logging, etc.).
It is a review of the most fundamental tools and strategies in Red Hat-flavored Linux distros. Things like troubleshooting the entire stack would be far too advanced and specific.
I talk about some of it (like push vs. pull approaches to server configuration, containers or not, single server or not), but most likely not to the extent you are probably thinking of.
As for high availability, I discuss whether it's worth it or not in the Scaling chapter. I also show how to use NGINX as a load balancer. I currently don't have HA content written for PostgreSQL and Redis, but it is planned (as a free update).
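For anyone curious what NGINX-as-load-balancer looks like, here is a minimal sketch; the upstream addresses, ports, and domain are hypothetical placeholders, not anything from the book:

```nginx
# Pool of app servers behind one NGINX instance
upstream app_backend {
    least_conn;                     # send requests to the least-busy server
    server 10.0.0.11:3000;
    server 10.0.0.12:3000;
    server 10.0.0.13:3000 backup;   # only used when the others are down
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The nice part is that this is the same `proxy_pass` syntax as a single-server reverse proxy; switching from one backend to a pool is just adding the `upstream` block.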
2. It's included in the distribution and receives security fixes.
3. It's included in the distribution and ships with a good SELinux profile.
4. Since I use it to explain reverse proxying, I don't have to introduce new syntax (and people don't have to learn two different tools just for the sake of it).
Most people will prefer the reasons above to better performance in theory (in theory, because I doubt NGINX will be your bottleneck).
Interesting about the total requests, but the article doesn't actually test load balancing, does it? In a perfect world, 1 LB in front of 4 app servers should be able to serve 4x the requests of 1 LB in front of 1 app server.
So I'd imagine 4x nginx back-end servers, and a baseline hitting 1 nginx directly?
That, and the fact that the nginx config is shorter and the performance is not terrible (though admittedly terrible next to Envoy in this test), means nginx shouldn't be ruled out.
Also, for taking apps to production, there should definitely be a part on high availability.