This is why Rust is going to eventually eat Javascript's lunch in the browser. I'm willing to bet on Long Bets that by 2025 we see a Rust framework as the top browser toolkit. I'd even be willing to bet that there are popular frameworks in use by major websites that discard the DOM altogether for something simpler and faster.
There's a reason why this has to happen, too.
My sincere hope is that Mozilla and the Rust/Browser luminaries will develop cross-platform APIs for full device hardware abstraction. If we can get Rust/WASM to be able to access the same system calls, hardware resources, payment APIs, native windowing routines, etc. as the iOS and Android SDKs, then we can potentially build a cross-platform native web that is just as snappy as the two shitty walled gardens we have today.
I really hope the performant web slays the app store. That's the dumbest trap we ever fell for. I'd love to visit a website and be playing native Minecraft in seconds. Sans Google Play or the App Store, with no bullshit monopoly tax.
That would address one of the major problems in our industry.
What do you mean by “browser toolkit”? If you mean “thing that people make web apps with” (like React or like you might imagine Qt being, though it’s not for the web platform), there is not a chance that this will happen by 2025. Zero.
For starters, Rust: Rust is just too complex to be the most popular language while something like JavaScript exists that is easier and faster to get started with, and generally good enough. It has niche usage and will steadily grow, but it's not going to take over the space any time soon, and probably never will. I say this as a Rust developer of seven years and a web developer of more than fifteen.
Then the other part, discarding the DOM altogether: there are way too many pieces of functionality that the browser provides that currently cannot be implemented in web user space, for this to become mainstream any time soon. For example: correct scrolling, standard keyboard and mouse event handling, and accessibility. The idea that you would go from quite a few pieces of major functionality completely non-existent in browsers and specs (most of these issues aren’t even being talked about) to mastery of the space within five years is unreasonable.
For further shared abstractions, many web APIs are developed and continue to be developed. Other experiments like WASI exist. WebAssembly has scope to be a solid foundation for this sort of thing. But applying all of the interesting web APIs to desktop and mobile platforms, that’s not really something that Mozilla, Rust or browser developers can or should do anything about. They have somewhere between no and extremely limited clout in such spaces.
Yew (and comparable Rust frameworks) does, however, have pretty slow compile times and, with that, a painful debug cycle. It's so bad that I generally opt for React-in-Yew for prototyping and only reimplement components with more complex state in Rust.
Not likely. The only thing that really matters to me is developer ergonomics. I've never seen a project struggle because it couldn't render a table with 1,000 rows fast enough. I have seen many projects struggle because they hired a bunch of junior engineers who had difficulty writing clean, maintainable JavaScript.
Rust and WASM will definitely take off for doing things like WebGL. I can't imagine Rust becoming the de facto solution for building UIs.
I don't think that's going to happen, and I hope it doesn't. The DOM provides a good abstraction of UI elements, which helps with accessibility, scrolling behaviour that feels like the OS's, and standard text utilities like selection and copy-paste. Using many incompatible libraries for all that would lead to a neo-Flash nightmare.
What I believe is going to happen is splitting more complex web applications into a frontend part, written in JS, and a backend + graphics part, provided as Wasm and possibly written in Rust.
The DOM fundamentally doesn't do what UIs need it to. That's why most web UI frameworks use a 'VDOM'. When a framework finds a clever way to avoid doing this (Svelte), we all applaud the accomplishment. I think that gives us our answer. </snark>
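To make the VDOM point concrete, here is a toy sketch of what that machinery amounts to: diff two trees and emit only the patches, so the real DOM is touched as little as possible. The `VNode`/`Patch` types are made up for illustration, not any real framework's API.

```rust
// Toy virtual DOM: diff two trees, emit minimal patches.
#[derive(Clone, PartialEq, Debug)]
enum VNode {
    Text(String),
    Element { tag: String, children: Vec<VNode> },
}

#[derive(Debug, PartialEq)]
enum Patch {
    // Path of child indices into the tree, plus the new subtree.
    Replace(Vec<usize>, VNode),
}

fn diff(old: &VNode, new: &VNode, path: Vec<usize>, patches: &mut Vec<Patch>) {
    match (old, new) {
        // Unchanged subtree: no patch, meaning no real-DOM work at all.
        _ if old == new => {}
        // Same tag and child count: recurse, touching only what changed.
        (
            VNode::Element { tag: t1, children: c1 },
            VNode::Element { tag: t2, children: c2 },
        ) if t1 == t2 && c1.len() == c2.len() => {
            for (i, (o, n)) in c1.iter().zip(c2.iter()).enumerate() {
                let mut p = path.clone();
                p.push(i);
                diff(o, n, p, patches);
            }
        }
        // Anything else: replace the whole subtree at this path.
        _ => patches.push(Patch::Replace(path, new.clone())),
    }
}

fn main() {
    let old = VNode::Element {
        tag: "ul".into(),
        children: vec![VNode::Text("a".into()), VNode::Text("b".into())],
    };
    let new = VNode::Element {
        tag: "ul".into(),
        children: vec![VNode::Text("a".into()), VNode::Text("c".into())],
    };
    let mut patches = Vec::new();
    diff(&old, &new, Vec::new(), &mut patches);
    // Only the one changed text node produces a patch.
    assert_eq!(patches, vec![Patch::Replace(vec![1], VNode::Text("c".into()))]);
    println!("patches: {:?}", patches);
}
```

The whole trick frameworks compete on is doing less work in (or skipping entirely) this diff step.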
> I really hope the performant web slays the app store. That's the dumbest trap we ever fell for. I'd love to visit a website and be playing native Minecraft in seconds. Sans Google Play or the App Store, with no bullshit monopoly tax.
I don't, unless certain criteria are met.
There is a consumer-friendly side of apps that is often overlooked in these types of conversations: generally, I get to decide when to update my apps, and the developer can't uninstall their app off my phone without my permission. Moving from apps to the web shifts the balance of power even further from user to developer.
There are some apps (those that require internet access) for which this is not true. For those, I share your sentiment; Spotify has no reason to be an app. It's also possible to implement this amount of user control on the web, by caching the full version of a page when you add it to your home screen (for example), and only replacing it after an explicit prompt (with auto-update as a setting).
Until that happens, though, I don't wish for the death of app stores. Especially, f-droid is a great ecosystem.
Even though I agree with the benefits of having control over updates, seeing how popular SaaS has become I find it quite likely this will happen to (many kinds of) apps as well.
Rust's most successful UI story appears to be "punt and delegate to JS via Electron, RPC, etc." There might be some client-side wasm Model written in Rust that the JS is calling into, but the making of widgets and whatnot is in JS.
Note that this parallels all the hipster languages of yesteryear, which invariably answered questions about the GUI story with one of "punt and delegate to C via a thin wrapper over WxWidgets or what have you" or "punt and delegate to the browser via some combination of JavaScript and HTTP."
The path for Rust to overtake JS in browser land is something like "compile an already-existing Qt-in-Rust to wasm with some browser-API-specific back end."
Not a chance. There are two main issues: technical and financial.
Technically, Rust is way too complex and therefore expensive to hire for. For the same reason, other languages like C++ that compile to Wasm won't succeed either.
Financially, Google and Apple won't let their digital stores lose market share.
But even if those reasons weren’t a thing, Wasm is not as fast as native code, and it would be a huge waste of bandwidth to download game resources (measured in gigabytes these days!) every time, even if you cache and allow huge offline storage.
In the end, you end up with a pseudo operating system, but just worse.
Web scripting trends are pretty hard to predict, so who knows. The basic obstacle is that Rust is much harder to pick up than other JSVM-targeting languages, and the "doing without GC in exchange for a hard-to-understand borrow checker" tradeoff isn't really valuable on the web.
I can't see a Rust framework becoming one of the top web frameworks, simply because Rust is more complex to learn and use than Javascript. However, I can see your second point coming true: a fully native framework that eschews the DOM by rendering directly to a canvas, just as a game compiled to WASM would. The prime example is Flutter Web, which has an experimental WASM renderer that draws onto a canvas. Of course, now they have to reimplement functionality such as scrolling, accessibility and so on. It also somewhat addresses your cross-platform hopes: you can compile to iOS and Android but also to the web, making a Flutter app a progressive web app.
I see this concern a lot with Flutter and similar WASM-to-canvas frameworks, and I think it's unfounded. Just because it's a canvas doesn't mean there's no accessibility. Indeed, even desktop apps are canvas-like, in that one cannot view their source, and they can render using frameworks similar to Skia, or even Skia itself, yet they still have accessibility.
Flutter's developers have specifically included accessibility facilities that devs should use, such as Semantics widgets, much like ARIA labels on the web. Flutter also constructs a separate, DOM-like semantics tree that screen readers hook into. So there are both built-in features and tools devs can use to make their projects more accessible.
TBH, I doubt it. Writing "high-level" Rust requires too big a "runtime portion" (in the same sense of "runtime" as the stdlib in C++, which also tends to bloat executables), while "embedded-style" minimal Rust without such a runtime is so low-level you could just as well write C code (which is also "memory safe" in that environment because of WASM).
And even in the best case, a WASM blob still has a hard time beating a bit of Javascript if it is just used to glue a couple of browser APIs together (and in such a situation, the good or bad qualities of that "glue language" don't matter much).
Hybrid JS/WASM applications make a lot more sense, where some small performance-critical parts are implemented in WASM in a "runtime-free language", glued together by Javascript (similar to how python is often used as glue between functionality implemented in native DLLs).
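As a sketch of what such a "runtime-free" kernel might look like: a single exported function over plain numeric parameters, pulling in no allocator and no high-level machinery. The function name `dot2` is made up for illustration; on the web it would be invoked from JS via something like `instance.exports.dot2(...)`.

```rust
// A small, allocation-free function of the kind worth putting in WASM:
// pure arithmetic over parameters, callable from JS glue code.
#[no_mangle]
pub extern "C" fn dot2(a: f32, b: f32, c: f32, d: f32) -> f32 {
    a * c + b * d
}

fn main() {
    // Natively we can exercise it directly; in the browser, JS would
    // call the export and keep all DOM/UI work on its own side.
    assert_eq!(dot2(1.0, 2.0, 3.0, 4.0), 11.0);
    println!("dot2(1,2,3,4) = {}", dot2(1.0, 2.0, 3.0, 4.0));
}
```

Everything stateful and DOM-facing stays in JavaScript; the WASM side remains a leaf that is cheap to ship and cheap to call.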
Are you in the habit of using Rust + WASM? From how you’re wording this, I suspect not. Rust really doesn’t require a large runtime (until you do something that requires Unicode tables, then you pay a certain tax), and the difference between high-level and embedded styles is normally small to nothing—the whole concept of zero-cost abstractions is normally talking about performance, but correlates extremely strongly with code size too; such abstraction layers typically compile out of existence.
Also there’s still a fairly big difference between a more frugal style of Rust and C. Just because C can’t cause arbitrary code execution and the like in that environment doesn’t make it safe to use.
Yes, when developing for the web people will hopefully pay a little more attention to binary size, because WASM does have a tendency to make big code blobs easier to produce, especially if you’re using generics liberally.
Finally, although at present the browser WASM story requires it to interact with the browser through JavaScript, that limitation is planned to be removed.
I admittedly don't have much experience with Rust+WASM (I have dabbled in Rust, though), but enough with C and C++ on top of WASM, and the situation is really quite similar to C++'s (IMHO, of course).
Theoretically, C++ also has no "runtime overhead", but start using high-level stdlib features, and suddenly there is a non-negligible overhead. It's possible to cut that down, but that requires a lot of "insider knowledge" of the compiler toolchain and the specific stdlib implementation. In the end it's always about deciding what C++ features to not use to cut the size down, and at the end of that process is a language that's essentially C (no generic containers, no automatically managed memory etc).
And even when using C it is non-trivial to cut down the overhead from the C runtime library functions so that it becomes "competitive" with minified Javascript doing the same thing.
Don't get me wrong, Rust's toolchain integration with WASM is excellent. But I wouldn't call Rust compiled to WASM a contender for killing Javascript.
> because WASM does have a tendency to make big code blobs easier to produce, especially if you’re using generics liberally.
WASM was designed to compile to very small binaries. The big code blobs you see are produced by the Rust compiler and are not a characteristic of WASM: just check the output of the Rust compiler for a hello-world project in WAT format and you'll see it consists almost entirely of the memory allocator included in the binary. If you write the same code directly in WAT, or in something like AssemblyScript[1], which is designed around WASM primitives, your typical WASM hello world is a few bytes.
I’m not talking about the constant overheads, but about how WASM grows as you scale your code base. My experience is that in practice it tends to grow a little faster than equivalent JavaScript, say 1.2–2× the size depending on what you’re doing. But I should definitely have clarified that I was speaking specifically in the context of Rust; my generics remark certainly only makes sense in that context.
It’s not fundamentally terribly much larger, and can definitely be smaller (again, depending on what you’re doing), but using it as a compilation target makes it much easier to produce binaries larger than you anticipated. In JavaScript you distribute what you wrote, so you know if you loaded a lot more code. Well, in theory. Code bundlers and liberal importing of third-party libraries kinda put paid to that.
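For what it's worth, the usual first step against oversized Rust-to-WASM blobs is the release profile. A hypothetical `Cargo.toml` fragment using well-known Cargo settings:

```toml
# Size-oriented release profile for a WASM target (standard Cargo options).
[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # cross-crate inlining lets dead code be dropped
codegen-units = 1   # slower builds, better whole-program optimization
panic = "abort"     # omit the unwinding machinery
```

Running the resulting `.wasm` through `wasm-opt -Oz` typically shaves off more; heavy use of generics still multiplies monomorphized copies, which is exactly the scaling effect described above.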
WASM provides enough memory safety that memory corruption errors in the WASM heap can't be used to escape the sandbox (assuming there's no exploitable bug in the sandbox itself, of course). It doesn't protect against memory corruption inside the WASM heap, though.
A fun exercise would be compiling the Heartbleed version of OpenSSL into WASM.
Exploits can escape the sandbox, via functions exported from WASM that change their behaviour due to internal memory corruption, or callbacks that get called with unexpected parameters from the WASM code due to the internal memory corruption that changed the execution logic.
For example, a security system that would grant higher credentials than it was supposed to.
Yes, it follows a Harvard architecture, but you can still produce exploits by corrupting memory. There's no need to rewrite code; it just becomes harder to pull them off.
Basically, by corrupting memory you can change the execution path: what was false is now true, influencing what gets called and with which parameters, without changing a single byte of code.
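A toy simulation of that point, with a `Vec<u8>` standing in for a module's linear memory (the layout and the `is_admin` flag are purely illustrative): the sandbox only bounds-checks against the whole linear memory, so an overflow past one allocation lands in the next, flipping program state without touching any code.

```rust
// Simulate WASM linear memory as a flat byte array. A "buffer" occupies
// bytes 0..8 and an authorization flag lives at byte 8 — adjacent
// allocations inside the same sandboxed heap.
fn main() {
    let mut linear_memory = vec![0u8; 16];
    const FLAG_ADDR: usize = 8;
    linear_memory[FLAG_ADDR] = 0; // is_admin = false

    // A C-style copy with an attacker-controlled length: 9 bytes into an
    // 8-byte buffer. The sandbox's bounds check covers the whole 16-byte
    // memory, so every write here succeeds.
    let attacker_input = [0x41u8; 9];
    for (i, &b) in attacker_input.iter().enumerate() {
        linear_memory[i] = b; // in-bounds for the sandbox, OOB for the buffer
    }

    // The flag is now nonzero: what was false is now true.
    let is_admin = linear_memory[FLAG_ADDR] != 0;
    println!("is_admin = {}", is_admin); // prints "is_admin = true"
}
```

No sandbox escape occurred, yet the module's own logic now grants credentials it shouldn't, which is exactly the exported-function/callback attack surface described above.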
I could really do with a diagram to explain how all the parts fit together. gfx-rs, Vulkan, wgpu-rs, wgpu-native, browser WebGPU, gfx-hal, Vulkan Portability bindings.
Here's what i think:
1. Vulkan is an API for doing graphics which is modern and standard
2. Direct3D (part of DirectX, the names are sometimes used interchangeably), Metal, and OpenGL are APIs for doing graphics which are modern or standard
3. Vulkan, Direct3D, Metal, and OpenGL are all implemented by graphics card drivers and operating systems working together - this is pretty much the bottom of the software stack
4. gfx-hal (along with gfx-backend-*) is a Rust library which abstracts over Vulkan, Direct3D, Metal, and OpenGL; its API is similar to Vulkan, but not identical
5. The Vulkan portability bindings implement Vulkan (the "universally portable subset") on top of gfx-hal (to an extent - it seems like there are lots of little bits that can't be implemented for each backend)
6. WebGPU is an API for doing graphics which is modern, standard, and easier than Vulkan
7. WebGPU is (or soon will be) implemented directly by browsers, on top of who knows what (Direct3D etc., I suppose)
8. wgpu-core is an implementation of (a Rust projection of) WebGPU on top of gfx-hal
9. wgpu-native is a C API on top of wgpu-core
10. wgpu-rs is a nice Rust API on top of WebGPU, which can (now!) use either a browser's WebGPU, or wgpu-native
I drew this on a whiteboard at Mozilla a few times, and I agree it would be nice to make it more accessible, hosted in the gfx-rs repository or on the blog.
There are a few scattered sub-diagrams on the topic, buried in the various slide decks I made:
I like the diagram that has "Google's Dawn" and "Apple's something" side by side. :)
Thanks for all your hard work on gfx-rs! I've been following its progress for a few years now, and despite the major architecture redesign, the project appears to be stronger than ever now. Bravo. Your work is much appreciated.
1. Vulkan, Direct3D, Metal, and OpenGL are all native 3D APIs on different platforms.
2. WebGPU is a new 3D API, designed to be used in browsers (but not limited to them) and implemented on top of one or more of the native APIs.
3. wgpu-native is Mozilla's implementation of WebGPU, built in Rust on top of gfx-hal, which is a Rust-specific abstraction on top of native APIs.
4. wgpu-rs is a convenience wrapper to make the API more Rust-like, which either uses wgpu-native directly, or when compiled for a browser, the browser's own WebGPU implementation.
5. Dawn is Google's implementation of WebGPU, in C++, which similarly can be used directly or through a browser.
The nice side effect here is that even if you are not targeting browsers, if you want a modern, fast, usable and cross-platform 3D API, there now is one, which hasn't been the case for a long time. And there are at least two different implementations of it.
Both wgpu and gfx APIs try to avoid any significant CPU usage in hot rendering/compute paths (e.g. by doing more work upfront instead of during rendering), so generally the performance cost should be fairly low for most applications.
The Vulkan API is basically 1:1 with gfx-hal so the performance cost should be negligible when gfx-hal is running on top of Vulkan. There is some overhead when running on top of other backends (such as Metal or DX12) versus implementing separate backends for wgpu, but the design of WebGPU avoids most places where the overhead would become significant anyway.
Similarly wgpu-rs only adds Rustier bindings to wgpu-core, so any performance cost should be negligible.
There is some amount of overhead in WebGPU in general (caused by automatic memory management, additional resource tracking, etc.) versus directly using raw Vulkan/DX12/Metal, but WebGPU tries to find a good balance here between security/portability/performance/usability.
It's really hard to estimate without implementing alternative paths that are more direct and yet fairly optimized. For example, since Intel and AMD have open-source drivers on Linux, somebody could write respective wgpu-core implementations directly on top of the drivers. That's a lot of work, and without a clear prospect of being much faster.
One of the things that use wgpu-rs is Iced[1], a cross-platform GUI library inspired by the Elm Architecture.
Although the project only started around May 2019 and is still highly experimental, they've made a lot of progress and things look promising! The interest seems incredible: 5.4K stars on GitHub (amazing for such a young project) and lots of impressive examples[2].
As someone who just learned about the project: hope this takes off!!! A lightweight, simple to use GUI toolkit that can run on the browser and desktop is really something developers desperately want.
This is very cool stuff. I'm waiting a bit for it to become more mature, then I'd like to use it as the basis for druid, a native Rust UI toolkit. As noted in another comment, iced is already doing that.
One of the reasons I'm excited is that I believe I can get very high-performance, high-quality 2D rendering using an evolution of the techniques explored in the piet-metal prototype. This work makes extensive use of GPU compute capabilities, which are not available in older versions of OpenGL or in WebGL.
Partial hash inversion is an embarrassingly parallel problem: computing the hash of one candidate is independent of every other candidate you want to try, and it is compute-bound rather than memory-bound. That means you can build a fairly efficient partial-inverse finder by running many fragment shaders ("pixels") in parallel and only drawing the one that finds a match. This is doable in WebGL today.
A key feature of WebGPU is compute shaders, which sound like they can compute stuff, so they should be better at hashing. But what compute shaders actually give is more flexibility, which makes it possible to handle problems that are somewhat less parallel, and they help optimize memory-bound programs (they don't help compute-bound ones). So compute shaders make a partial-inverse finder slightly less code, but not dramatically more efficient.
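The search loop being described can be sketched on the CPU; a GPU version simply runs one candidate per shader invocation. FNV-1a stands in here for whatever hash is actually being inverted, and `find_partial_preimage` is a made-up name for illustration.

```rust
// CPU sketch of partial hash inversion: find an input whose hash has a
// fixed number of trailing zero bits. Each candidate is independent, so
// a GPU can test one candidate per fragment/compute invocation.
fn fnv1a(data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325; // FNV offset basis
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3); // FNV prime
    }
    h
}

fn find_partial_preimage(difficulty_bits: u32) -> u64 {
    let mask = (1u64 << difficulty_bits) - 1;
    // Brute force: try nonces until the low bits of the hash are zero.
    (0u64..)
        .find(|nonce| fnv1a(&nonce.to_le_bytes()) & mask == 0)
        .unwrap()
}

fn main() {
    let nonce = find_partial_preimage(16); // ~65k candidates expected
    let h = fnv1a(&nonce.to_le_bytes());
    assert_eq!(h & 0xffff, 0);
    println!("nonce {} hashes to {:#018x}", nonce, h);
}
```

Nothing in the inner check needs compute-shader features: no shared memory, no cross-invocation communication, which is why WebGL fragment shaders already suffice.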
So to answer your implicit point: no WebGPU won't unlock capabilities that will lead to a rise in ads mining Bitcoin. (though it could maybe help for Ethereum mining)
This is really cool. I've been excitedly watching how wgpu is developing. I don't have much graphics experience outside of one ogl but it looks like it's turning into something seriously neat.
Definitely. The last time I did any truly serious graphics programming was in the early noughts and the fact that I can now toy around with a really modern graphics stack in a reasonably straightforward, cross-platform yet still performant way is really wonderful.
I've been enjoying my time with wgpu-rs so far and I intend to keep doing so.
I think that wasm potentially has a great future for cross-platform binaries. Eventually, hopefully, the same code that can run in a browser will be able to run natively on Linux or macOS without recompiling.
A major blocker for this is that wasm applications are slower than native ones. There are a few reasons for this, and some of them, like simd support, are slowly being fixed, but an important one in my opinion is that memory in wasm requires bounds checks.
Now, there are hacks to make that fast, but they work for wasm32 and not wasm64. I think architectures adding instructions for lightweight memory sandboxing would go a long way to alleviate this problem.
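Conceptually, the check in question looks like this (a toy `LinearMemory` type, not any real runtime's code):

```rust
// Sketch of the bounds check a WASM runtime must conceptually perform
// on every linear-memory access.
struct LinearMemory {
    bytes: Vec<u8>,
}

impl LinearMemory {
    fn load_u8(&self, addr: usize) -> Result<u8, &'static str> {
        // wasm32 runtimes often elide this branch by reserving a 4 GiB
        // guard region and letting the MMU trap out-of-range accesses —
        // the "hack" that doesn't carry over to wasm64's address space.
        self.bytes.get(addr).copied().ok_or("out-of-bounds access: trap")
    }
}

fn main() {
    let mem = LinearMemory { bytes: vec![7; 4] };
    assert_eq!(mem.load_u8(3), Ok(7));
    assert!(mem.load_u8(4).is_err());
    println!("in-bounds load ok, out-of-bounds load trapped");
}
```

Paying that branch (or its pipeline cost) on every load and store is where hardware-assisted sandboxing instructions could help.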