
For a book on deployment, I feel there is too much content on setting up a server. Would have liked to see the pros and cons of different ways to deploy.

Also, for taking apps to production, there should definitely be a part on high availability.



Without addressing deployment canaries, exponential deployments, exponential rollbacks, traffic steering, API load duplication to staging, IaaS/PaaS, 12factor apps, HA, scaling, fault injection, hot backups, cold (and tested) backups, DR/BCP, configuration management, CI/CD, monitoring, troubleshooting the entire stack, and SLAs, it doesn't seem to me like a professional-enough treatment to celebrate.


I got the impression that the book is oriented toward solo entrepreneurs building their own SaaS (perhaps I got that wrong). What you have described sure sounds like good standards for an entire infrastructure team to follow, but a bit too much for solo devs doing infra stuff.


This. The book was called Deployment for Makers before, but I am taking it in a more general direction now. I wrote about some of the things mentioned above, but had to choose where I spend the time (it's already a long book).

There is a reason it's called "from Scratch": I cover starting out. Some things bootstrapping startups don't need (they are only nice-to-haves). I will update and improve the text based on feedback, though (including the comment above).


I dunno, I kinda side with the parent you replied to; it's not like 12-factor app principles or CI/CD are only for big teams.


I mention both, but don't go into detail.

Re 12-factor: some makers are better off not following everything. For example, it's fine to use server disk space for a single server. So I don't want to include it as dogma.

Re CI/CD: you either build it yourself (and the book gives you the technical knowledge to do it) or use a vendor (GitHub/GitLab/Circle CI). I actually want to do a bit more on CI/CD, but I'm still thinking about how to approach it.
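For the vendor route, a minimal GitHub Actions workflow that tests and then deploys over SSH could look roughly like this (a sketch; the script paths and secret name are assumptions, not from the book):

```yaml
# .github/workflows/deploy.yml
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: ./scripts/test.sh        # hypothetical test script
      - name: Deploy over SSH
        run: ./scripts/deploy.sh      # hypothetical push-based deploy script
        env:
          SSH_KEY: ${{ secrets.DEPLOY_SSH_KEY }}  # assumed repo secret
```

The self-built alternative is essentially the same two steps triggered by a git hook or a cron-polled repo instead of a hosted runner.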


I have not read all of the book yet, but it definitely talks about some of these things (12-factor apps, canaries, scaling, logging, etc.).

It is a review of the most fundamental tools and strategies in Red Hat-flavored Linux distros. Things like troubleshooting the entire stack would be far too advanced and specific.


Do you have an up-to-date reference that handles all those concerns in one place?


Also it would be nice if this book could do all the deploying for me.


I talk about some of it (like push vs. pull approaches to server configuration, containers or not, single server or not), but most likely not to the extent you are probably thinking of.

As for high availability, I discuss whether it's worth it or not in the Scaling chapter. I also show how to use NGINX as a load balancer. I currently don't have HA content written for PG and Redis, but they are planned (as a free update).
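For context, the NGINX-as-load-balancer setup discussed here is roughly this shape (a sketch with assumed backend addresses, not the book's exact config):

```nginx
upstream app_servers {
    # hypothetical backend addresses
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080 backup;  # only used when the others are down
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Note this only removes the app servers as single points of failure; the NGINX box itself still is one, which is what the SPOF discussion further down is about.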


[flagged]


Yes, because:

1. It's popular and well understood.

2. It's included in the distribution and enjoys security fixes.

3. It's included in the distribution and shipped with good SELinux profile.

4. Since I use it to explain reverse proxy, I don't have to go into explaining new syntax (and people don't have to learn two different tools just for the sake of it).

Most people will prefer the reasons above to theoretically better performance (theoretical, because I doubt NGINX will be your bottleneck).


Popularity != most suitable. Might as well advocate LAMP, Java, JS/Node, and Mongo too.

It will be a bottleneck for anything real.

Plus, it's old and doesn't do what Envoy does. Nginx isn't a full-featured, live-reconfigurable L7 ingress RP. It's a toy.


Interesting about the total requests, but the article does not test... balancing load? In a perfect world, one LB in front of 4 app servers should be able to serve 4x the requests of one LB in front of 1 app server.

So I'd imagine 4x nginx back-end servers, and a baseline hitting one nginx directly?

That, plus the fact that the nginx config is shorter and the performance is not terrible (though admittedly terrible next to Envoy in this test), does mean nginx shouldn't be ruled out.


It's not; it's still widely used, including in the Kubernetes world.

Though a lot of traction is going to apps written in Go, which supposedly makes them faster.


Language religion is irrelevant except to fanatics and haters. Performance is all that matters.


One LB would be a SPOF.

And you don't necessarily need Nginx between an app and the LB if the app speaks http/s. DSR is an option too.


Well sure. Maybe they should've benchmarked with a failover node. But I at least expect a benchmark of load balancers... to balance load?

In this setup both the "app" and the LB are SPOFs...



