Hacker News

While Raspberry Pis are awesome, and the power consumption is nothing to scoff at (when considering a cluster), you can accomplish this same thing for a lot less, and have quite a lot more compute power, by purchasing a used server or even something like an AMD 3600X...

A single 3600X will grossly outperform this cluster (and cost less) with fewer headaches (you don't have N physical machines) by using KVM to deploy a few virtual machines and using Kubernetes to orchestrate and allocate within those VMs. You'll also have a lot less latency between nodes running in VMs on the same physical host.
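As a rough sketch of what that single-box setup could look like (this assumes libvirt/KVM is installed; the node names and the base image path are placeholders):

```shell
# Create three small VMs on one host to act as cluster nodes.
# Assumes libvirt/KVM and a local cloud image; names are illustrative.
for i in 1 2 3; do
  virt-install \
    --name "k8s-node$i" \
    --memory 4096 --vcpus 2 \
    --disk size=20,backing_store=/var/lib/libvirt/images/ubuntu-base.qcow2 \
    --import --os-variant ubuntu22.04 \
    --network network=default \
    --noautoconsole
done

virsh list --all   # confirm the three nodes came up
```

From there you point Kubernetes (or k3s) at the three VMs exactly as you would at three physical boxes.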

Another thing that unfortunately sucks about Raspberry Pis (less with Pi 4, but still mostly applies) is really shitty I/O performance...

I spent a large amount of time over the past summer and fall trying out various ideas to have a "cluster" at home that was both practical and useful. While the Pis were nice, they never really amounted to much more than a demo. Latency and I/O become real problems for a lot of useful interconnected services and applications.

Honestly, if Ryzen 3000 hadn't come out, for cheaper cluster builds (~$300-400) I still think Pis would be a solid choice but... Ryzen 3000 is just so fucking fast with a lot of cores, it's truly hard to beat.

Addendum: to touch on used servers, yes your power bill will go way up, no joke, but for some applications like large storage arrays it's hands down the cheapest/easiest route. Search by case, not by processor. It sounds weird, but the case is likely the most valuable part of the old server (like ones with 20+ SAS2 slots for $500) or PCI-E slots that GPUs can fit into.



The other part of 'on your desk' is hearing damage.

Server hardware vendors have traditionally not given two shits about their servers being north of 90 decibels, and I'm pretty sure I've witnessed a few that were pushing 100.

That Raspberry Pi is probably going to absorb more noise than it makes.


You could also go down the path of buying one or more Dell Optiplex SFF or Micros (9020s or better). 8 threads and 32GB of RAM (or better) per box, and not a lot bigger than a Pi in a case.


I managed a few HP C7000 blade systems once in an inadequate environment. They have jet-like fans, and at the time a firmware bug meant the fans had two levels: 79% and 100%. After the firmware was fixed it stayed around 30% at full load and was at least bearable.


Oh, man...reminds me of the IBM BladeCenters. Maybe they had 2 fan settings...90% and 100% of a 737 at takeoff. They had an extra cost "office kit" that included baffles and other mods to (hahahahaha) quiet them down enough for an office environment. Maybe knocked it down from a 737 to a Gulfstream G4. Nice kit if buried in a datacenter someplace.


Is there any reason you need a kubernetes cluster literally on your desk?

Isn't the whole point of using these layers of abstraction over hardware that you pay someone else to manage it?

The only times I can imagine needing hardware on my desk are when latency is super important or when I constantly need to manipulate the hardware (change hardware, play in bios, etc). In either case, I would not use k8s to run my software.


It's fun to play around with and learn on. If you can build a cluster from scratch with Pi hardware, you get a lot of knowledge of how things work under the hood for a real cluster.


Because at the end of the day you are a mammal and pretending you don't have visceral reactions to sensory input is impoverishing your options, and honestly, robbing you of motivational tools.

Information radiators make no sense at all if you view them from a purely objective standpoint. Wouldn't it be faster to just open the web page on your computer? They only make sense because of the way humans interact with the world (and I suspect in particular, the doorway effect).

The last demo I saw for Pi clustering, the guy fiddled with the blink rate and color of an LED on the motherboards. Sounds boring, but as a demo it summed up a whole lot of crap into a simple visual.


> Isn't the whole point of using these layers of abstraction over hardware that you pay someone else to manage it?

That person is also needed in the future, and that person needs to start learning somewhere. A good start is messing around with it on a few Raspberry Pis.


I can't agree more.

I have had the same setup for over 10 years: a PC in the garage running Ubuntu with lots of RAM. I just upgrade it every so often with a last-gen CPU and motherboard and swap in the RAID controller.

I script all of my dev environments, if I need a K3S cluster it's literally 2 minutes (Bash, Python and some Ansible).

Want a set of RHEL machines? No problem, it's another script and 30 seconds.
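For flavor, the non-Ansible core of that kind of script can be very small. This is a hypothetical sketch using the official k3s install script; the server IP and agent addresses are placeholders:

```shell
#!/usr/bin/env bash
# Sketch: turn three freshly booted machines into a k3s cluster.
# SERVER and AGENTS are placeholder addresses.
SERVER=192.168.122.10
AGENTS="192.168.122.11 192.168.122.12"

# Install the k3s server on the first node
ssh "$SERVER" 'curl -sfL https://get.k3s.io | sh -'

# Grab the join token the server generated
TOKEN=$(ssh "$SERVER" 'sudo cat /var/lib/rancher/k3s/server/node-token')

# Join the remaining nodes as agents
for host in $AGENTS; do
  ssh "$host" "curl -sfL https://get.k3s.io | K3S_URL=https://$SERVER:6443 K3S_TOKEN=$TOKEN sh -"
done
```

Wrap that in a Makefile or Ansible play and "a K3S cluster in 2 minutes" is about right.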

For training you can't beat it. If I have to get out the "big guns" I have an old Dell tower server with 128GB of memory that I got cheap and has 12 cores / 24 threads. Sure it's a power-hog, but I use it surgically.

Pis are great and you get a nice, limited blast radius, but the SD cards go wrong, they are slow at single-thread and they don't really have enough memory (the Pi 4 4GB being an exception). You can't beat x64 for compatibility either.


Yes, but this looks super cool on your desk


As a project, I like the idea of having the nodes be separate, because endless layers of virtual things make it harder to grok what's truly going on, IMO.

Also, gotta factor in electricity costs.


You mean using a Ryzen with VMs (cluster on one Ryzen), I assume? Because the cost of a Ryzen per node would be pretty pricey compared to Pi 4s.


Yes, a single Ryzen node running multiple VMs.


So, you don't see a difference between clustering on physically separated nodes and VMs?


I see a difference, but it's not in favor of Pis just because they're physically separated.

I think it's far more realistic to choose a hypervisor (KVM, raise the roof), install and configure it, run N virtual machines on a single machine, provisioning and slicing the hardware, setting up and managing the network between them, having disks run in a RAID so the VMs don't starve for I/O and data is replicated, and then running Kubernetes (or whatever personal hell you want) within your "virtual cluster." That's the whole "running a cluster" part all of these Pi Kubernetes things miss because they're running on consumer-grade hardware.

Your hosting provider is surely not using thousands of small under-powered machines running bare metal installs of your applications.

Want to find out how resilient your cluster is? Just fucking kill one of the VMs. Boom. Instant feedback. Now try that with a Pi: you'll have to unplug it, and then wait five minutes for it to boot. It becomes tedious fast.
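With libvirt-based VMs that feedback loop really is one command. A sketch, assuming node names like `k8s-node2` from your own VM setup:

```shell
# Hard-kill one VM -- the virtual equivalent of pulling the plug
virsh destroy k8s-node2

# Watch Kubernetes notice the node going NotReady and reschedule pods
kubectl get nodes -w

# Bring the node back when you're done
virsh start k8s-node2
```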

Clustering and distributed systems are extremely complex and difficult... just having physical machines doesn't really even begin to scratch the surface of the problems you'll face, just my two cents.


Not in this case, no. The physically separate nodes are so low power and still share significant points of failure that it doesn't matter.



