
Any site with a CDN in front of it can do that.

Don’t get me wrong, this is an awesome project, but if you really care about this kind of thing in a production scenario and you’re serving mostly static content… just use a CDN. It’ll pretty much always outperform just about anything you write. It’s just boring.



Even caching is normally unnecessary.

Honestly, HN front page traffic isn’t much. For most, it probably peaks at about one page load¹ per second², and if your web server software can’t cope with that, it’s bad.

Even if your site uses PHP and MySQL and queries the database to handle every request, hopefully static resources bypass all that and are served straight from disk. CPU and memory usage will be negligible, and a 100Mbps uplink will handle it all easily. So then, hopefully you’re only left with one request that’s actually doing database work, and if it can’t answer in one whole, entire second, it’s bad.
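That split — static files straight from disk, only real page requests reaching PHP — is exactly what a stock web server config gives you. As a rough sketch (paths and extensions are illustrative, not from the comment above), an nginx setup might look like:

```nginx
# Hypothetical nginx server block: static assets never touch PHP or the
# database; only page requests do application work.
server {
    listen 80;
    root /var/www/site;

    # Static resources are served directly from disk.
    location ~* \.(css|js|png|jpg|svg|woff2)$ {
        expires 7d;
        try_files $uri =404;
    }

    # Everything else falls through to the application.
    location / {
        try_files $uri /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php-fpm.sock;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```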

(I’m talking about general web pages here, not web apps, which have a somewhat different balance; but still for most things HN traffic shouldn’t cause a sweat, even if you’ve completely ignored caching.)

Seriously, a not-too-awful WordPress installation on a Raspberry Pi could probably cope with HN traffic.

—⁂—

¹ Note this metric: page loads, not requests. Requests per second will be higher, scaling with the number of first-party requests each page makes.

² From a quick search, two sources from this year: https://marcotm.com/articles/stats-of-being-on-the-hacker-ne..., https://harrisonbroadbent.com/blog/hacker-news-traffic-spike.... Both use JS tracking, but even doubling the number to generously account for we sensible people who use content blockers has the hourly average under one load per second.


> and if your web server software can’t cope with that, it’s bad.

Well then sites on average are sadly "bad" by your standards. Lots of sites that get on the front page of HN go down.


There are a lot of bad sites, but it’s nowhere near average—it’s a small fraction that are bad in these ways. I visit many sites from HN, and encounter pages that are down or even struggling under the traffic significantly less than once a week. Admittedly most of the pages loaded are on well-established sites or static hosts, but there are plenty that are WordPress or similar.


> Any site with a CDN in front of it can do that.

You are vastly overestimating HN front page traffic. Any reasonable system on any reasonable machine with any reasonable link can do this. And I really do mean reasonable: I've served front-page traffic from a dedicated server in a DC, and from a small NUC in a closet at home, and both handled it completely fine.


This sort of trivializes the effort and the fun of a project like this, doesn't it? Yes, when you go to full production and you've reached full virality and your project is taking 5 million RPS globally, you'll want to get all your ducks in a row: offload onto a CDN, make sure your clients' requests are well respected in terms of cache control, make it secure, put requests through a WAF, and so on and so on. Yes, we know. Lighten up. The comment you're replying to was meant to be lighthearted.


Any site that consists of static files served by a professional-grade web server like nginx on a small VPS can also trivially do that.


If you're hosting static data, shouldn't HTTP cache flags be enough in most cases? Read-only cacheable data shouldn't be toppling even a modest server. Even without an explicit CDN, various nodes along the chain will be caching it.

(though I confess it's been some years since I've worked in this area)


That’s not the case these days. Due to TLS, there is very little caching in between you and the server you’re hitting.


There are no nodes between you and that server.


Pretty much anything that isn't WordPress is OK these days, I think.



