Hacker News

Let me be real with you for a second, in one of those let's-pretend-we've-just-taken-a-bunch-of-psychedelics moments where all the structures that prop up us and our beliefs and ideals have collapsed: I really think you should reconsider your position on this. Containers are OK; they are not a threat. The kinds of people who are going to be self-hosting your stack are the same kinds of curious, investigative people who will debug when it all falls down. Those people do not need to be pushed through a manual series of commands as a hazing ritual.

I'll be just as real with you that I'm not going to be the one to maintain it. It's in the best interest of the platform's broader adoption that I make my plea. Because people like me will just flounder on the rocks of the GitHubs of the world. And that's not a future either of us really wants.



It's not a hazing ritual, it's a necessary series of learning steps which equips you to be a good sysadmin of your new instance. Whatever did we do before Docker? The world must have sucked back then!


Not to be snarky but Docker (+ Hub) is mostly just packaging with some isolation to avoid conflicts, right? So why would you recommend packages from "regular" package managers (as seen in the installation instructions) but not Docker? (This is a serious question, maybe I am missing something.)


Because Docker is a really shitty way to package applications. You don't package the application; you package an entire Linux distro, with its packages, plus your application, into a single opaque blob. Normal package managers just package your application in a simpler manner that works in conjunction with your OS, not on top of it.


It makes sense to package and isolate the base system, since it avoids any "hidden state" that could influence the behaviour of your program and introduce hard-to-debug errors.

And it's not packed into a single opaque blob: you can specify an image to derive your image from, and Docker will overlay-mount your layers on top of the base image, avoiding the need to store multiple copies of the base image.
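As a minimal sketch of that derivation (the base image and binary name here are illustrative assumptions, not taken from any particular project):

```dockerfile
# Derive from a shared base image; its layers are stored once
# and shared by every image built FROM it.
FROM debian:bookworm-slim

# Each instruction below adds only a new layer on top of the base,
# which Docker overlay-mounts at run time rather than copying.
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

You can inspect the resulting layers with `docker history <image>`; only the `COPY` layer is unique to this image.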


I can equip myself by reading the docs. I don't need to run through the Arch Linux or Gentoo Handbook because I want to run Arch or Gentoo. I can do those experiments in my own time, after I've delighted in what those distributions have to offer.

Besides, none of this addresses the laundry list of reasons why running the distributions that package your stuff is infeasible: a server administrator with a Synology NAS, an enterprise lackey with a limited environment, or one working with any of the containers-only enterprise distributions.

Guix System (another really cool FOSS thing) and sr.ht are beautiful projects from the outside, but they are hampered by their inflexibility, adherence to dogma, and resistance to critique. I once wanted to play with Guix System's new `deploy` command on a DO droplet, and nobody really wanted to consider that their newfangled command was entirely bungled. You can see this kind of treatment of potential contributors echoed in examples throughout this post's comments. It is endemic to FOSS communities. And it sucks.

Ideals are only realized in compromise. There is no example in history of anything else.



