I'm assuming I'm not the only one who's afraid of Blink becoming the new Internet Explorer; instead of people following a web standard, they'll follow what's Chrome-compatible.
It's still better because Chromium is open-source, but I do worry that we're going to have a problem in the future with a lot of broken sites (YTMND-style).
> I'm assuming I'm not the only one who's afraid of Blink becoming the new Internet Explorer; instead of people following a web standard, they'll follow what's Chrome-compatible.
This is already the norm. I use Safari and find many sites have issues. Before that I used Firefox and many sites had issues.
Typically the JS works, but styling often looks different (and worse) on Safari and Firefox.
I can say wholeheartedly that it's Safari that's the problem, not Firefox or Chrome. I develop for Firefox first and have no issues with Chrome, but Safari repeatedly requires special tricks.
Do you have examples? I use Safari and don't find this to be true. Given Safari is what people use on iOS, I'd expect this to rarely be the case (most sites will test on an iPhone).
Same here; I don't know if I've ever seen a site that works in Chrome but not Safari (I don't doubt they exist, I just don't tend to come across them).
As a macOS Safari 12 user, I've recently come across several important sites where some functionality fails hard in Safari yet works in Chrome, e.g. barclays.co.uk and easyjet.com.
I don't have any proof, but my suspicion is that I've seen more sites failing in Safari since Mojave.
I really dislike (& distrust!) using Chrome now, especially for e-commerce sites or anything with SMS-based 2FA, as Safari's ability to pull codes automatically from Messages is a godsend.
I created an internal application for our company that is essentially a directory of staff members. I do most of my development in Chrome, but the bulk of the users are on iOS (Safari). I always find little nuances that I need to tweak for things to be "just right" on Safari. One example is that in all other browsers (including desktop Safari) an <input type="search"> will show a little clear button ("X") at the end of the input. Some bug removed it from iOS Safari and we're still waiting for them to resolve the issue. It was reported in 2016 and was acknowledged by Apple [1]...
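(For what it's worth, a minimal sketch of the kind of workaround this forces on you — a custom clear button wired up in JS. The markup/class names here are hypothetical, and the assumed bug is iOS Safari not rendering the native "X":)

```javascript
// Sketch of a custom clear button for <input type="search">, assuming the
// reported iOS Safari bug (native clear "X" not rendered). Selectors and
// class names are hypothetical examples, not from the actual app.

// Pure helper: should the custom clear button be visible for this value?
function shouldShowClear(value) {
  return typeof value === 'string' && value.length > 0;
}

// DOM wiring (browser only; guarded so the file also loads outside a browser).
if (typeof document !== 'undefined') {
  const input = document.querySelector('input[type="search"]');
  const clearBtn = document.querySelector('.search-clear'); // hypothetical
  input.addEventListener('input', () => {
    clearBtn.hidden = !shouldShowClear(input.value);
  });
  clearBtn.addEventListener('click', () => {
    input.value = '';
    clearBtn.hidden = true;
    input.focus();
  });
}
```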
Chrome also frequently attempts to cover up incorrect code with what it thinks the author probably meant. Firefox tends to be much better than Chrome at following the actual spec.
I tend to develop primarily with Chrome first for a few different reasons (the ability to disable CORS, support for self-signed certs with WebSockets, and its better debugging of WS frames), but I always make sure to test later with Firefox and Safari, and occasionally find code that worked in Chrome but doesn't elsewhere, and the reason is almost invariably that Chrome didn't follow the spec.
Fair enough about it not being in the spec. It does appear to be part of Apple's "Human Interface Guidelines"[1], although those guidelines are aimed at native applications. It was just an unexpected regression that occurred during the rollout of iOS 9, and is an example of one of the little nuances that caused me to do something differently. I've had some other issues regarding scrolling and focus, but this one seemed easier to talk about.
I have seen this many times during development. Noteworthy sites that cause problems are, well, Microsoft sites: portal.azure.com (problems downloading publish profiles) and SharePoint (too many to mention).
Can't come up with an example offhand, but I have fixed quite a few Safari issues as a web dev. Often they don't appear on iPhones because the media queries for mobile are tested on iPhones, but desktop layouts are tested on Chrome. Safari had a few flexbox quirks that caught me out most recently.
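(One commonly cited Safari flexbox quirk, as a hedged illustration rather than the specific bug I hit: images used as flex items can appear stretched in some Safari versions, because the default `align-items: stretch` overrides the image's intrinsic aspect ratio. Class names below are hypothetical:)

```css
/* Sketch of a common workaround for image stretching in flex containers,
   seen in some Safari versions. Opt the image out of stretching and
   shrinking so it keeps its intrinsic size. */
.row {
  display: flex;
}
.row img {
  align-self: flex-start; /* don't stretch to the container's cross size */
  flex-shrink: 0;         /* don't shrink below the intrinsic width */
}
```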
I run into the cause of this issue quite a bit when writing styling. The fact is that each major browser veers from the spec in its own unique ways, so if you're testing your page in a single browser (usually Chrome, though personally I use FF), the styling will often end up working differently in other browsers. As of now, the only real solution is to pick a set of "supported browsers" and to test your page in each of them before publishing, tweaking your styles to work around each engine's quirks.

The problem with a browser monoculture is that it encourages developers to only test against the dominant platform, forcing competitors to be bug-compatible in order to provide a non-broken experience. The advantage is that a true and reliable standard would emerge, making developers' lives easier. Of course, the optimal solution would be for browsers to properly conform to the existing standards so that testing across browsers wouldn't be necessary in the first place, as any deviation would be a bug that would hopefully be fixed before too long.
> instead of people following a web standard, they'll follow what's Chrome-compatible
The browser vendors taking the lead over standards committees is how we got this far into HTML5, especially including Apple's/Safari's decision to ditch flash, and the WHATWG actually moving us forward vs the W3C.
Unless Blink has a history or an expectation of deprecating serious functionality in the future, is it really so bad if websites follow what's Chrome-compatible instead of 'a web standard'?
W3C might be too slow, but eschewing even a rough consensus and instead just ramming through whatever you feel like is the other extreme end of the spectrum. And Google is no saint, so if it benefits them and only them, who cares: Chrome will support it, the community be damned. And this, of course, applies to deprecation of functionality as well. (Think about any web feature/standard as another Google property, like G+ ... scary? But that's where we're headed with the Blink hegemony.)
This is a good question. Let me give you an example.
Let's imagine for instance that Google develops something called "Google Pay". Let's now imagine that WHATWG works on something called "WebPay".
In a multiple-engines world, there would be pressure for Blink to support (and keep supporting) most major credit cards & payment mechanisms to let users pay using WebPay.
In a single-engine world, one single manager at Google will have sufficient power to decide that WebPay supports only Google Pay.
Feel free to replace Google Pay and WebPay with any other technology strategic to Google.
It's still true and always will be. Blink is not a browser any more than the Linux kernel is a standalone operating system by itself with no attached software or applications.
Because then any pull request introducing something into Chromium essentially becomes the web standard. For example, Google has a ton of influence on the project. If they want to, they can introduce something that benefits their services but eschews the web standard.
but if Microsoft is moving to Blink, wouldn't they, as well, have (as you say) influence on the project?
What, really, is a downside to having an open-source web engine that all browsers use? I'm failing to see one. The web would become less fragmented; the web standard would be (only slightly) irrelevant, and we would move forward without having to deal with browsers interpreting the spec/standard differently, which is what happens currently in a lot of cases.
well, if you look at it now, webkit is BY FAR not the dominant engine.
So, by your own logic, if Microsoft ends up in a disagreement with Google about the direction of Chromium, what's to stop them from just forking it and becoming the new dominant engine?
Chrome will only remain the dominant web browser as long as its users view it as worth the hassle. If Edge is built upon chromium in the future, I'm not going to sit here and say that Chrome will remain the dominant browser following that.
We can sit here and talk about 'what ifs' all day.
No, by my own logic Microsoft cannot win in the case of a fork, because Microsoft has been half-hearted about pumping resources into browsers, as history has shown us ever since Internet Explorer 6.
Their switching to Chromium is essentially admitting defeat and an inability to build a modern browser.
> Since Android 4.4 (KitKat), the WebView component is based on the Chromium open source project. (...) Webviews also share the same rendering engine as Chrome for Android. [0]
KitKat was released in 2013[1], the same year as Blink[2].
Apart from Blink being a WebKit fork, WebKit itself is not "the dominant engine" anymore at least since 2014.
Not since Android... 6? It defaults to the Chrome browser. You can choose the other Chrome channels too (Beta/Dev/Canary, if installed, via the developer settings), and the WebView might actually also be Chrome. WebKit on Android is from ages ago.
https://developer.chrome.com/multidevice/webview/overview Since 4.4 it is based on Chrome, not WebKit. Since, I think, version 6, all the Google phones default to using Chrome for WebView instead of the "WebView for Android" browser.
Our team is staffed by engineers. We respond well to discussions of tradeoffs. If you'd like to present a proposal for writing parts of Chromium in Rust and speak in depth to the exact costs and benefits, we'd absolutely consider it. I know this because we _have_ been doing some of this consideration; a number of people on the team have floated the idea of using Rust for some parts of Chromium.
But a real plan to do this requires a great deal of thought and serious consideration about how you get from here to there, whether there are long-term engineering velocity costs, etc. You don't just say "Rust is more memory-safe" and let that phrase alone mean "so obviously you're a dinosaur if you don't switch to it".
Even Mozilla is taking a lot of time and effort to introduce Rust-based components to Firefox. Making big changes to enormous projects used by huge numbers of people is not something you do lightly.
In late 2018, using Rust is a lot more serious of a proposal. But, in practice, I don't think the case for Rust in the browser would be nearly as compelling if we hadn't shown that it can be done. That's one strong reason for browser engine diversity: different engines can try different things.
I think it's fair to say that a Rust proposal would have been dead-on-arrival a couple of years ago. It'd be seen as far too risky. It would have remained so in a world where Blink was the only browser engine.
It's a classic innovator's dilemma: the fewer browser engines there are, the fewer risks the industry will take. More browser engines allow more seemingly-risky innovations (such as parallel styling/layout, or Rust) to break through.
A very nice property of open source software is that you can fork it, "show that it can be done" without having to start from scratch, and if the results are provably better, get your approach adopted.
The trickiest part is proving that the different technology is sufficiently better to warrant a switch. Very often, new approaches don't live up to expectations (not saying this is the case for the tech you're talking about).
1. This is actually discouraged for many projects (e.g. LLVM), because of the possibility that someone does a lot of work and then their results are not accepted.
2. The issues here don't have to do with whether the technology "works" (it does), but rather "developer velocity" and other more social/political concerns.
I understand your points, but I would suggest to consider a different point of view:
1. When you're experimenting with a seriously new technology or approach, the most likely outcome is that you'll fail, especially at the market-adoption level. Being able to conduct your experiment at a lower cost is still a net positive, except for one point: having invested less, you are more likely to abandon the experiment early, because the sunk cost fallacy exerts less pull. That doesn't necessarily have to be the case.
2. Developer velocity/productivity is something that you can demonstrate - as long as the difference is consistent, like, not 10% faster, but 80% faster. Other social/political concerns are a different thing, but really, gaining market adoption based only on those is VERY difficult - if that wasn't the case, I don't think we would be having this discussion at all, because Firefox would have a much higher penetration.
So, the point is, how is having a completely separate codebase going to help with having success? It could attract a higher number of idealistic developers, but the additional work required is very likely to negate that advantage.
Rust as a language was significantly less mature a few years ago; I think that's the bigger factor here.
It certainly doesn't hurt to do compelling things in Rust in a browser engine, but doing compelling things in Rust in some other project entirely would also be motivating.
To look at it from another angle, Mozilla itself had a reason to try to tackle some problems with Rust. It didn't need a competing browser engine using Rust in order to move forward with that plan. And the plan wasn't "well, we'll try this because we can somehow fall back on a competing engine if this doesn't work". So if your argument were true, I don't see how Mozilla could have moved on this either.
In the end people do things because the potential benefit justifies the costs and risks, and having a competitor do something is not the only way of determining potential benefit.
Would Google consider using Play Ready on Windows instead of Widevine?
Widevine performance is worse on Windows, because it's all software frame decoding, and because of that, it doesn't support 1080p or 4K Netflix videos on Windows.
The same way you would convince any other browser vendor to do the same thing. Good luck with that. But let's not forget that Firefox, Opera, et al, are not going away and no one is forcing you to use Chrome or chromium and there probably won't be any real downside either.
As a web developer, why would I even care about that? I care more about whether it adheres to web standards; or in the absence of a standard, whether it is in-line with other leading browsers.
If all of our leading browsers use the same engine, the answer to that is one and the same.
I mean, I get the point you're trying to make, but the argument doesn't really fit with what I'm saying. As far as I'm concerned go ahead and rewrite Blink in Rust or Go or Common Lisp. As long as it is the main engine, or adheres to the web standards, it really makes no difference to me.
I generally agree with your sentiment on this. The only real downside I can think of is similar to the issues that motivated the renewed investment in OpenBGPD recently: that standards and ecosystems are made stronger when there is some level of competition and diversity.
For example, a security bug in chromium becomes vastly more dangerous if everyone is working off that code. That said, hopefully there’ll be fewer of those because everyone is focused on the same codebase.
The questions seem to be “how many browser engine implementations is truly necessary for a healthy ecosystem?” And “has the spec gotten so bad that it’s not feasible for the ecosystem to support a sufficient number of independent implementations?”
Seems like <5, and maybe <3 is the answer to the first, and the answer to the second is we’ll see what happens to servo in 2019...
> The browser vendors taking the lead over standards committees is how we got this far into HTML5, especially including Apple's/Safari's decision to ditch flash, and the WHATWG actually moving us forward vs the W3C.
That takes a somewhat limited in scope view as to what a web standard is; by most measures, what the WHATWG produces is as much a standard as anything the W3C does, and is certainly comparably useful to other implementers.
> Unless Blink has a history or an expectation of deprecating serious functionality in the future, is it really so bad if websites follow what's Chrome-compatible instead of 'a web standard'?
If someone wants to build a new web browser, can they do it without having to spend huge amounts of money on reverse-engineering Chrome and being bug compatible with it?
I don't think you can conclude from "things got better when browser vendors took over" that it'd be the same if there was one vendor dominating. They keep each other honest at least to a degree, forcing that issues are discussed and things thought out instead of someone in a browser team coming up with an idea and shipping it the first way they can come up with.
This is not even close to comparable. I've said in other threads that the issue with IE, and really IE6, was that for multiple years Microsoft did zero development. The browser didn't patch zero-day exploits for months, or even a whole year. Fundamentally, IE6 was an extension of Windows via ActiveX controls. The layout bugs were permanent. Remember the zoom:1 hack to fix a layout issue, or just float:right, etc...
Blink is open source. You want to encourage innovation in the usability and utility of the browser, and let people compete on market share of their browser product, not on whether people's web apps that run on that platform work or don't work. The key is that it's open; we'll never have another IE6. Anyone can compile Blink. Look, Microsoft is focusing on getting it to work on ARM. Cool. This is the opposite of being locked to windoze only.
I can't find any current stats on IE6, but the delay in upgrading was a pain point. While I wanted to just blacklist the browser, with a gigantic banner, from the websites I was responsible for, I unfortunately didn't get that latitude. I just stopped caring whether it worked for them or not. If a user complained, I directed them to their tech department. At a certain point I'm not supporting your internal outdated website just because it requires an outdated browser.
* There are a number of implementations of the same platform.
* Those implementations are competing, and the market encourages them to be incompatible with each other.
You can avoid that bad state by having an explicit standard and applying pressure on all implementations to meet that standard. It usually comes at the expense of slower innovation.
But another viable way to avoid that bad state is to simply not have competing implementations. A single canonical implementation also solves the problem of ensuring all users get a compatible experience. It can come at the expense of evolving in a way that doesn't meet user needs because there isn't a competitive incentive to win users.
I don't think there are perfect solutions, but I also don't think it's the end of the world if this becomes a monoculture. There are tons of "platforms" that are effectively monocultures and seem to be OK. Every rechargeable tool company has its own battery pack form factor. Up until recently, each laptop company had a different port for the AC adapter.
What I think this is really showing is that the market size of the web is shrinking relative to the size of the browser standards. Mobile apps have eaten up so much user share and HTML+CSS+JS+etc. has gotten so big and complex that the desktop web market can't effectively support multiple independent browser implementations any more. It's just too much work for too little return.
There's maybe an interesting lesson here in not letting your platform get too complex. New features are always nice, but they have a cost. If you pile on too many of them, you may undermine your platform's ability to support multiple independent implementations.
I'm guessing this is a stressful day around the Mozilla watercooler. This is bad news for Mozilla. Web authors will target the behavior supported by a majority of user browsers. With many independent browsers, there is no implementation majority, just a plurality. Majority behavior only comes from a standard and all implementers are incentivized to work with that standard.
When there's only a few, it's possible for a single one to become the de facto "standard". The web is effectively moving to a first-past-the-post election. Because Microsoft is adopting Chromium, Mozilla's engine and Safari may very quickly become minority ones.
This might suck for users, but that's not entirely clear. Obviously, in a perfect world, there would be infinite engineers building infinite implementations of every platform. In reality, every engineer-hour spent working on, say, a new implementation of COBOL is an hour not spent on software that might impact more users' lives in more important ways.
Maybe a couple of commodity web browsers and engineers working on other more important stuff is better for society? How many implementations of CSS does the world really need? <shrug>
Either way, I'm not advocating anything. I'm more interested in understanding what are the causes and effects — both bad and good — of a platform going from three implementations to two. I'm approaching this as sociology, not as someone who has skin in the game.
I do work for Google and did work as part of the Chrome org, but I don't have enough expertise to make any claims about whether this is an overall good or bad thing for the world. I'm just interested in all of the consequences.
> Maybe a couple of commodity web browsers and engineers working on other more important stuff is better for society? How many implementations of CSS does the world really need?
Shouldn't users have the best possible CSS engine? If there's only one, and it's beholden to the reporting structure at Google, then disruptive innovations are less likely to happen.
People who don't work at Google should be allowed to develop the Web platform, without asking Google for permission.
> Shouldn't users have the best possible CSS engine?
Shouldn't they have the best possible COBOL compiler? The best possible VRML renderer? The best possible ICQ client?
Obviously CSS is way less dead than those, but there is always an opportunity cost. It's not a valid engineering argument to say "we should spend resources on X" without considering what else those resources could be spent on.
My initial comment is really just observing that maybe Microsoft's move actually does imply that they believe, no, the world doesn't need another CSS engine. I don't know if that's true or not, but I think it's interesting to ask the question.
We sort of a tacitly assume that the web will grow and grow forever and ever. But maybe that's just because we've only seen the first half of its life cycle. Maybe it is reaching a plateau and becoming a commodity. There are good questions about whether that's true and, if so, whether that's a good thing.
But I don't think it's illuminating to just assume the best way to improve the world is to have as many browser implementations as possible. Like an entire Dyson sphere populated solely by engineers each writing their own HTML parser.
> then disruptive innovations are less likely to happen.
Maybe the disruptive innovation has happened and the disruption was to move off the web. The Internet and HTTP is doing great. What mobile app isn't on the Internet? Maybe HTML+CSS+JS is no longer the optimal user interface language for it.
> People who don't work at Google should be allowed to develop the Web platform, without asking Google for permission.
I guess I don't see what you're getting at. Browsers don't write themselves, so if you don't want to ask one of a couple of giant rich corporations to do something, that means you better have deep pockets yourself.
Even if Microsoft kept their web engine, how does that help? Now instead of asking Google, Mozilla, and WebKit for permission, you have to ask Microsoft, Google, Mozilla, and WebKit for permission and then get them to all agree on it.
Your comment isn't too far away from accusing GP of shilling. I think probably it would be more productive to discuss any of the points he made about platform complexity or technology homo/heteroculture without going full Mozilla v. Google here.
That has no impact on what MS can do downstream as long as it's an open license.
If Microsoft, or Opera, or anyone else downstream wants to pull out parts of Chromium and replace them with Rust components (including a parallel renderer), they are free to do so. It increases their maintenance cost to maintain the unique components, but so would maintaining a whole separate from-scratch project instead of being downstream.
I mean, I don't know that I'm afraid of it, in the sense that it's clearly already been here for a few years now. Microsoft is one of the wealthiest companies on Earth and decided they didn't have the resources to compete with the web's Chromium monoculture.
Don't people write standards-compliant HTML/CSS/JS that any browser should render the same way? How often does a standards-compliant site work on Chrome but not on Firefox, assuming all the features you're targeting are supported?
In reality, none of the browsers are quite standards compliant, and developers don’t know the standards by heart, so they often go with what works on their setup, which usually means Chrome.
Any self-respecting, experienced developer worth their salt writes code toward the standards, not toward a browser. If you need to adjust your code because it doesn't work in a particular browser, then you're adjusting for the browser, not the standard.
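(In practice, "writing toward the standard" usually also means feature-detecting capabilities rather than sniffing for a specific browser. A minimal sketch; `IntersectionObserver` in the comment is a standard Web API used purely as an example, and its availability varies by browser and version:)

```javascript
// Feature detection: test for the capability, not the browser.
// Kept as a pure helper so the check itself can run anywhere.
function hasFeature(obj, prop) {
  return obj != null && prop in obj;
}

// Browser usage (illustrative only):
// if (hasFeature(window, 'IntersectionObserver')) {
//   // use the API directly
// } else {
//   // fall back or load a polyfill
// }
```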