I think one of the bigger issues is tooling around creating web applications and related content. For now, webpack and other bundlers do a pretty decent job of grouping modules into bundles; however, the additional cost of handling routing or server-side rendering/caching in order to determine which bundles to push is still largely unexplored.
I think we will definitely get there in the near future. One can certainly put a well-crafted application together by hand, but until the tooling catches up, this will not be widely used.
One thing that could be done is a webserver that tracks which resources are most commonly requested within, say, 5 seconds of an initial resource; if more than N% of those sessions also request a given resource, it gets pushed by default. That could be a good starting point, but then comes the cost of keeping those relation tables and counts in memory without overloading the server.
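A minimal sketch of that heuristic (the class, method names, and threshold are my assumptions, not an existing server API):

```python
from collections import defaultdict

# Hypothetical sketch of the heuristic described above: track which
# resources are requested shortly after an initial page load, and mark
# a resource as a push candidate once it follows the page in more than
# THRESHOLD of the observed sessions.
THRESHOLD = 0.8  # the ">N%" from the comment; the value is an assumption

class PushCandidateTracker:
    def __init__(self):
        self.page_loads = defaultdict(int)  # page -> number of loads seen
        # page -> resource -> number of times requested within the window
        self.follow_ups = defaultdict(lambda: defaultdict(int))

    def record_page_load(self, page):
        self.page_loads[page] += 1

    def record_follow_up(self, page, resource):
        # Caller is responsible for only reporting requests that arrived
        # within the time window (e.g. 5 seconds) after the page load.
        self.follow_ups[page][resource] += 1

    def push_candidates(self, page):
        loads = self.page_loads[page]
        if loads == 0:
            return []
        return [res for res, n in self.follow_ups[page].items()
                if n / loads > THRESHOLD]
```

The memory-cost concern shows up directly here: the nested table grows with every distinct page/resource pair, so a real server would need eviction or sampling on top of this.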
Downsides:
* The TLS requirements are complicated, and hardly anyone knows how they work.
* Several agents support it only partially, so the overlapping feature set is close to useless. Trailers are an easy example.
* The implementation is more complex. You can't inspect the bytes on the wire without a program to decode the frames.
* Flow control is hard.
Upsides:
* Too numerous to count.
HTTP/2 fixes all the most painful parts of HTTP/1.1, and even provides backwards compatibility for HTTP/1.1 proxies. Aside from bugs (and lack of counterparty support), there is no reason to keep using HTTP/1.1.
HTTP/2 effectively forces TLS 1.2+, due to the limited cipher suites allowed by the RFC and by browsers' restrictions. For the sake of argument, treat TLS as a requirement. To this end:
* TLS requires ALPN. This would not be so bad, but many clients don't provide a way to set the ALPN string. Java is the prime example: in old versions there is no way to do so except to hack the JDK classpath. (Go users are feeling pretty smug right about now, but the rest of the world suffers.)
* Anything beyond basic TLS setup is barely documented. Custom hostname validation, SNI by IP, client-side certificates, or encrypted keys all leave people stuck, and there is very little help available. (How would you do any of these in Node.js, or Ruby, or Python, or C#, or PHP?)
* Modern cipher suites are not supported everywhere. For a long time, Java 8 had no hardware acceleration for AES-GCM, topping out at about 20 MB/s per core (on Intel, the max speed is closer to 3500 MB/s). This makes it prohibitively expensive on languages/libraries that don't accelerate it. (I'm looking at you, Android.)
* How do you test TLS? LetsEncrypt rate limits cert creation. For the purpose of CI, there isn't an easy way to get trust on both sides. Unit tests become a pain, so people just don't encrypt.
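For what it's worth, in stacks that do expose it, offering h2 via ALPN is nearly a one-liner. A sketch using Python's stdlib `ssl` module (the helper name is mine):

```python
import ssl

def h2_client_context() -> ssl.SSLContext:
    """Build a client-side TLS context that offers h2 via ALPN.

    Stdlib-only sketch: set_alpn_protocols has been in Python's ssl
    module since 3.5, minimum_version since 3.7.
    """
    ctx = ssl.create_default_context()
    # HTTP/2 requires TLS 1.2 or newer; enforce it explicitly.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Offer h2 first, with HTTP/1.1 as the fallback.
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    return ctx
```

After wrapping a socket with `ctx.wrap_socket(sock, server_hostname=...)`, `selected_alpn_protocol()` on the wrapped socket reports whether the server agreed to `"h2"`.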
On this point I have to agree fully. The requirements that get forced on folks just to adopt HTTP/2 are frustrating. As much as I also like PFS, tying HTTP/2 to it was silly, IMO. If the old ciphers are really that bad, I wish the vendors would just commit to that and put out a timeline for deprecating them. But this "they're only bad if you're using HTTP/2" stance seems like a nonsense carrot and stick to me.
> client side certificates, or encrypted keys
These aren't particular to HTTP/2, though; you can use them (or not) equally well with HTTP/1.x. (In particular, client certs aren't going to affect most people.)
> How do you test TLS? LetsEncrypt rate limits cert creation. For the purpose of CI, there isn't an easy way to get trust on both sides. Unit tests become a pain, so people just don't encrypt.
Testing your certificate-issuance machinery is orthogonal to HTTP/2 as well, is it not? Outside of ensuring you get the appropriate key-usage bits set on the certificate, I don't see why it wouldn't be perfectly feasible, for testing HTTP/2, to simply mock out LE with a self-signed certificate.
HTTP/2 in my opinion is simply a better protocol. It's more robust, and it has a binary framing layer which implements connection semantics separately from request/response headers. What I mean is that `content-length` or `transfer-encoding` no longer control how the protocol actually works (i.e. how many bytes of data are consumed). I think this is a great step forward: it removes ambiguity, simplifies the overall protocol, and makes it clear how extra features can be added in the future without impacting existing clients/servers - something HTTP/1 struggled with a bit.
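To illustrate the framing layer: every HTTP/2 frame starts with the same fixed 9-byte header (per RFC 7540), and decoding it takes only a few lines. A stdlib-only sketch:

```python
import struct
from typing import NamedTuple

class FrameHeader(NamedTuple):
    length: int     # 24-bit payload length
    type: int       # e.g. 0x0 DATA, 0x1 HEADERS, 0x5 PUSH_PROMISE
    flags: int      # type-specific flags, e.g. END_STREAM, END_HEADERS
    stream_id: int  # 31 bits; 0 refers to the connection itself

def parse_frame_header(buf: bytes) -> FrameHeader:
    """Decode the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1)."""
    if len(buf) < 9:
        raise ValueError("need at least 9 bytes")
    # 3-byte length, 1-byte type, 1-byte flags, 4-byte reserved bit + stream id.
    hi, lo, ftype, flags, stream = struct.unpack(">BHBBI", buf[:9])
    length = (hi << 16) | lo
    return FrameHeader(length, ftype, flags, stream & 0x7FFFFFFF)
```

This is exactly what the "you need a program to decode the frames" complaint is about - but as the sketch shows, that program is small, and the payload length lives in the frame itself rather than in `content-length` semantics.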
PUSH_PROMISE is great and all, but its use is limited. If the remote side already has the asset you push (because it's cached), you waste bandwidth and increase latency.
It can. But the server will only abort streaming once the cancel message reaches it. In the worst case it might already have written the whole stream flow-control window (64 KB by default) to the socket. So there is some truth to the comment.
It's especially bad with mobile phones and metered data plans. Speed can be high while latency is also high, so a lot of data may be pushed to the socket before the client can respond that it doesn't need it. And mobile internet often has very tight caps. HTTP/2 reuses the connection, so its throughput will be high and the TCP windows will be large. A practical example: my phone has a 4G connection with 40 Mb/s download speed and 280 ms latency to Seattle. That's 1.4 MB of data in flight during those 280 ms. I have a larger limit, but a lot of people use the cheapest internet plan with a 3 GB cap, so that's 0.05% of the limit for just one request; it'll add up over days pretty quickly.
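The arithmetic checks out; a quick back-of-the-envelope verification using the numbers from the example above:

```python
# Back-of-the-envelope check of the numbers above.
link_mbit_per_s = 40   # 4G downlink, megabits per second
latency_s = 0.280      # latency to the server
plan_bytes = 3e9       # a 3 GB monthly data plan

# Bytes the server can put on the wire before a cancel makes it back:
in_flight = link_mbit_per_s * 1e6 / 8 * latency_s
print(round(in_flight))        # 1400000 bytes, i.e. 1.4 MB

# Share of the monthly plan wasted by one unwanted push:
share_pct = in_flight / plan_bytes * 100
print(round(share_pct, 3))     # 0.047, i.e. roughly the 0.05% cited
```

Note this assumes the sender can actually fill the link for a full round trip, i.e. the TCP windows have already grown large, which is exactly the connection-reuse point made above.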
That's a very good remark! I'm currently in a roaming area with a very tight data limit, so this affects me too.
I think it might be helpful to have a setting that disables server push for those devices. However, the problem is that most users will neither discover nor understand such a setting. Maybe if ISPs signaled something about the data plan to the phone firmware, and the push setting could be toggled based on that, it would help, but that's unlikely to happen.
Another remedy could be for push implementations on the server to push only the headers and not begin streaming the body, so that the client can check (e.g. through cache and etag comparisons) whether it really wants it. But that's some kind of departure from generic HTTP/2: the client can't really signal that it wants the remaining body, since the only signal it has is the flow-control window increment. Clients would need to be modified to always send this when receiving push promises, and servers would need to understand that those increments should now trigger them to start streaming bodies. If not all of them do it, that seems like a recipe for stuck requests, and a nightmare for interoperability. So that's also unlikely to happen.
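The client-side half of that idea is easy to sketch. Assuming a hypothetical cache keyed by URL and stand-in functions for the actual frame senders (none of this is an existing API):

```python
# Hypothetical client-side logic for headers-only push, as described
# above: on receiving a PUSH_PROMISE plus response headers, compare
# the promised etag against the cache before asking for the body.

def handle_pushed_headers(url, etag, cache, open_window, reset_stream):
    """cache maps URL -> etag of the locally cached copy.

    open_window / reset_stream stand in for sending WINDOW_UPDATE and
    RST_STREAM frames; the names are assumptions, not a real API.
    """
    if cache.get(url) == etag:
        # Fresh copy already cached: decline the pushed body.
        reset_stream(url)
        return "declined"
    # Not cached, or stale: widen the flow-control window so the
    # server starts streaming the body.
    open_window(url)
    return "accepted"
```

The interoperability problem is visible even in this sketch: a server that doesn't treat the window update as "please start the body" would simply never stream it, and the request would hang.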