In a certain way, this move is horizontally integrated. Intel vertically integrated its chip design and fab. This moves design and fab apart. One of the reasons that Intel is falling so far behind is that they can't keep up with TSMC (and maybe others as well) on the fab side.
Intel's vertical integration worked well for them for so many years. However, the crack has been around long enough for others to start muscling in. AWS can push Graviton because Intel has been stuck at 14nm for so long (yes, they have some 10nm parts now, but it's been limited). Apple can push a move to ARM desktop/laptops because Intel has stagnated on the fab side.
I wouldn't say this is a reversal of that revolution as much as a demonstration of the power and fragility of vertical integration. Intel's vertical integration of design and fab gave them a lot of power. Money from chip sales drove fab improvements for a long time and kept them well ahead of competitors. However, enough stumbling left them in a fragile place.
I think part of it is that ARM also has reference implementations that people can use as starting blocks. I don't know a lot about chip design myself, but it seems like it would be a lot easier to start off with a working processor and improve it than starting from scratch.
I think we're just seeing a dominant player that no one really likes stumble for long enough combined with people willing to target ARM. Whether I run my Python or C# or Java on ARM or Intel doesn't matter too much to me and if AWS can offer me ARM servers at a discount, I might as well take advantage of that. Intel pressed its fab advantage and the importance of the x86 instruction set against everyone. Now Intel has a worse fab and their instruction set isn't as important anymore. They've basically lost their two competitive advantages. I'm not arguing that Intel is dying, but they're certainly in a weaker position than they used to be.
I think a larger piece is that Intel was able to jump from desktops to laptops but not to cellphones. This meant TSMC simply had greater economies of scale to push fabs further.
I never understood why Intel gave up on smartphone chips. I remember thinking at the time how that would bite them in the next 10 years.
I had an ASUS ZenFone 2 powered by Intel. It was fantastic! I didn’t notice any problems or major slowness compared to a Qualcomm chip. To me it seemed like they had a competent product they could iterate on. And they just canceled the program, how short-sighted!
I mean, maybe I’m wrong here and there isn’t really money in that business.
>I never understood why Intel gave up on smartphone chips.
Margins. They were so focused on margin that they didn't realise their moat was cracked once they let go of it. A billion-plus smartphone/tablet SoCs, modems, and many other silicon pieces are now fabbed at TSMC. None of these existed 10 years ago. Just like the PC revolution: while Sun and IBM were enjoying the booming workstation and server markets, x86 took over the PC market segment and slowly, over the years, took over the server market.
The same could happen to Intel; this time it is ARM, and it will likely take 10-15 years.
And I keep seeing the same thing happen over and over again: companies so focused on their current market, short-term benefits, and margins that they fail to see the bigger picture. Microsoft and Intel are similar here. (And many other non-tech companies.)
That's a big list of those who missed the mobile boat:
Microsoft, Intel, RIM, Nokia, HP, MIPS, Sony, HTC
And those are just a few of the big players.
And even if Microsoft lost with Windows on phones, they still try to make apps. I'm currently using Edge on Android because its built-in ad blocker is quite good.
People also forget that Intel had a pretty decent line of ARM CPUs right before the time the iPhone was released (XScale). And they gave up on those too, just in time for the market for smartphone CPUs to explode.
Not only pretty decent — XScale was the best and fastest ARM for PDA-size devices.
The only reason Intel killed XScale was that it wasn't x86, it came through an acquisition, and they were afraid to cannibalize their own x86-based mobile plans.
Turns out it would have been far better to disrupt yourself rather than let others do it.
Their hearts were never really in XScale. Intel only ended up having that group due to a bizarre legal settlement with DEC, and pawned it off to Marvell in some misguided attempt at corporate streamlining.
Should also point out that the lead on the DEC StrongARM (which became Intel XScale) was Dan Dobberpuhl, who founded PASemi, which Apple bought to help their own chip development work. In between he also co-founded SiByte which made some of the best MIPS chips ever made.
I get thinking phones were too small a market or margins were too thin back then. But you would think that when the smartphone market got bigger and prices went up, someone would have re-evaluated that decision.
Executives are regularly incentivized to focus on short term profits over long term company survival. I call this process “bonuses today, layoffs tomorrow”.
Intel was actually subsidizing smartphone vendors who wanted to use Intel chips. Unfortunately, hardly any smartphone vendor responded to the offer (apart from a few small experiments like the aforementioned ASUS ZenFone 2).
Software developers?
A large percentage of those developers would not have known the difference since these phones ran Android and the VM abstracted this away.
Only for games and some other apps that use the NDK would this have made a difference.
I don't understand this comment. Who is talking about when ARM was established?
Android runs on the vast majority of the world's smartphones and smart devices TODAY, and it was compatible with Intel's x86 mobile processors. That would have been enough of a market.
edit:// I am commenting on the ASUS ZenFone 2, which has the Intel processor running Android, fyi.
> Whether I run my Python or C# or Java on ARM or Intel doesn't matter too much to me
I think you make a key point here. A whole lot of code now runs inside one runtime or another, and even outside of that, cross-architecture toolchains have gotten a lot better, partly thanks to LLVM.
The instruction set just doesn't matter even to most programmers these days.
> One of the reasons that Intel is falling so far behind is that they can't keep up with TSMC (and maybe others as well) on the fab side
Actually it's more that they bit off way more than they could chew when they started the original 10nm node, which would have been incredibly powerful if they had managed to pull it off. But they couldn't, and so they stagnated on 14nm and had to keep improving that node forever and ever. They also stagnated on the microarchitecture, because Skylake was amazing beyond all others (cutting corners on speculative execution, yes), so all the following Lakes were just rehashes of Skylake.
Those were bad decisions tied to Intel not solving the 10nm node (remember tick-tock? Which then became process-architecture-optimization? And then it was just tick-tock-tock-tock-tock forever and ever), and to insisting on a microarchitecture that, as time went by, started to show its age.
Meanwhile AMD was running from behind, but they had clearly identified their shortcomings and how to effectively tackle them. Having the option to manufacture with either GlobalFoundries or TSMC was just another good decision, but not really a game changer until TSMC showed that 7nm was not just a marketing fad but a clearly superior node to 14nm+++ (and a good competitor to 10nm+, which Intel is still ironing out).
That brings us to 2020, where AMD is about to beat them hard both on mobile (for the first time ever) and yet again on desktop, with "just" a new microarchitecture (Zen 3, coming late 2020). The fact that this new microarchitecture will be manufactured on 7nm+ is just icing on the cake; even if AMD stayed on the 7nm process, they'd still have a clear advantage over Zen 2 (their own, of course) and against anything Intel can place in front of them.
That brings us to Apple. Apple is choosing to build its own chips for notebooks not because there's no good x86 part, but because they can and want to. This is simply further vertical integration for them, and this way they can couple their A-series chips ever more tightly with their software and their needs. Not a bad thing per se, but it will separate Macs even further from a developer perspective.
And despite computer science having improved a lot in emulation, cross-compilers, and whatever clever tricks we can think of to get x86-on-ARM, I think in the end this move will hurt software that is developed cross-platform (that is, Mac/Windows/Linux; take two and ignore the other). We've seen a version of this debacle with consoles and PC games before.
The PC, the Xbox (can't remember which one), and the PS3 were three very different platforms back in 2005 or so. And while the PS3 had a monster processor that was indeed a "supercomputer on a chip" (for its time), it was extremely alien. Games developed to be multiplatform cost much more to develop, because they could not have an entirely shared code base. Remember Skyrim being optimized by a mod? That was because the PC version was based on the Xbox version, but they had to turn off all compiler optimizations to get it to compile. And that shipped, because it had to.
Now imagine Adobe shipping a non-optimized Mac-ARM version of their products because they had to turn off a lot of optimizations to get them to compile. Will people conclude that Adobe suddenly started making bad software, or that Adobe-on-Mac is now slow?
Maybe I got a little ranty here. In the end, I guess time will tell if this was a good or a bad move from Apple.
All current Macs include a T2 chip, which is a variant of the A10 chip that handles various tasks like controlling the SSD NAND, TouchID, Webcam DSP, various security tasks and more.
The scenario you mention — an upgraded "T3" chip based on a newer architecture that would act as a coprocessor used to execute ARM code natively on x86 machines — seems possible, but I don't know how likely it is.
Yeah, but what would be the rationale? They want to avoid x86 as the main CPU, so you'd essentially get an "x86 coprocessor to run Photoshop" (let's go with the PS example here).
Or you'd need fat binaries for x86/ARM execution, assuming the T3 chip would get the chance to run programs. Then either a program would have to be pinned to an x86 or ARM core at start (maybe some applications could set a preference, like always pinning PS to x86 cores), or you'd need the magical ability to migrate threads/processes from one arch to the other, on the fly, while keeping the state consistent... I don't think such a thing has ever even been dreamed of.
I don't think there's a chance to have ARM/x86 coexist as "main CPUs" in the same computer without it being extremely expensive, and even defeating the purpose of having a custom-made CPU to begin with.
An x86 coprocessor is not that outlandish. Sun offered this with some of their SPARC workstations multiple decades ago, IIRC.
Doing so definitely would be counterproductive for Apple in the short-term, but at the same time might be a reasonable long-term play to get people exposed to and programming against the ARM processor while still being able to use the x86 processor for tasks that haven't yet been ported. Eventually the x86 processor would get sunsetted (or perhaps relegated to an add-on card or somesuch).
Whether it's for performance, battery life, or cost reasons, it wouldn't really make sense:
a) performance-wise, the move would be driven by having a better-performing A chip
b) if they aimed at a 15W part, battery life would suffer, and 6W parts don't deliver good performance
c) for cost, they'd have to buy the Intel processor plus the infrastructure to support it (socket, chipset, heatsink, etc.)
Especially for (c), I don't think Intel would accept selling chips as coprocessors (it'd be like admitting their processors aren't good enough to be main processors), nor would Apple put itself in a position to adjust the internals of their computers just to accommodate something they're trying to get away from.
Apple probably doesn't need the integrated GPU, so an AMD-based coprocessor could trim that off for additional power savings (making room in the power budget to re-add hyperthreading or additional cores and/or to bump up the base or burst clock speeds).
> for cost, they'd have to buy the intel processor
Or AMD.
> and the infrastructure to support it (socket, chipset, heatsink, etc)
Laptops (at least the ones as thin as Macbooks) haven't used discrete "sockets"... ever, I'm pretty sure. The vast majority of the time the CPU is soldered directly to the motherboard, and indeed that seems to be the case for the above-linked APU. The heatsink is something that's already needed anyway, and these APUs don't typically need much of it. The chipset's definitely a valid point, but a lot of it can be shaved off by virtue of it being a coprocessor.
Most of it must be ARM compatible already, for the iPad version.
Also, Photoshop was first released in 1990 and has been through all the same CPU transitions as Apple (m68k/ppc/...), so presumably some architecture independence is baked in at some level.
NodeJS is in that category as well. If you avoid native modules, it's easy to cross-deploy on ARM. The workload also seems to translate well to the scale-out process model.
Really though, unless you're writing x86 assembly, any language should be just fine on ARM. The only potential holdup is if you rely on precompiled binaries at some point. Otherwise it should just be a matter of hitting the compile button again.
In C/C++ land, "just try it and see if it works" development is super common in proprietary software, leading to issues such as:
Using volatile for thread-safe code. ARM has a weaker memory model than x86, so it requires barriers. The C++ standard threading library handles this for you, but not everyone uses it.
Memory alignment. ARM tends to be stricter about it. While it's impossible for a well-formed C++ program to mess this up, it's quite common for people to go "hey, it's just a number" and go full YOLO with it. Because hey, it works on their machine.
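To make the volatile pitfall concrete, here's a minimal sketch (names hypothetical) of a flag-based handoff. With a `volatile bool` flag this often appears to work on x86's strong memory model, but on ARM the write to the payload can become visible after the flag unless barriers are emitted; `std::atomic` with release/acquire ordering makes the compiler emit them on every architecture:

```cpp
#include <atomic>
#include <thread>

// Hypothetical producer/consumer handoff.
int data = 0;                     // plain, non-atomic payload
std::atomic<bool> ready{false};   // publication flag

void producer() {
    data = 42;                                     // write the payload
    ready.store(true, std::memory_order_release);  // publish it: no earlier
                                                   // write may move past this
}

int consumer() {
    // The acquire load pairs with the release store, so once we see
    // ready == true, the write to `data` is guaranteed to be visible.
    while (!ready.load(std::memory_order_acquire)) { /* spin */ }
    return data;
}
```

A `volatile` flag would prevent the compiler from caching the load, but it inserts no hardware barriers, which is exactly what ARM's weaker model needs.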
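A hypothetical illustration of the alignment point: casting a byte pointer to `uint32_t*` is the "it's just a number" shortcut that happens to work on x86 but is undefined behavior and can trap on ARM cores that fault on unaligned loads. `memcpy` expresses the same load portably, and compilers turn it into a single instruction where the hardware allows it:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical helper: read a native-endian u32 from an arbitrary
// (possibly unaligned) offset in a byte buffer.
uint32_t read_u32(const unsigned char* buf, std::size_t offset) {
    // NOT this: *reinterpret_cast<const uint32_t*>(buf + offset);
    // That load is UB when buf + offset isn't 4-byte aligned.
    uint32_t v;
    std::memcpy(&v, buf + offset, sizeof v);  // safe at any alignment
    return v;
}
```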
In SQLite's case (and probably in the case of most reasonably popular image manipulation libraries), those binary blobs very likely already exist or can be readily recreated. ARM is not some newfangled obscure architecture; SQLite's been used to great success on far more exotic platforms than that (including, I'd imagine, on devices running iOS).
I seldom see issues porting C and C++ to ARM unless people do weird things with casts that are undefined in the language spec but happen to work on x86, or use x86-specific vector intrinsics. Most well-written C and C++ compiles out of the box for ARM and just works.
> Whether I run my Python or C# or Java on ARM or Intel doesn't matter too much to me and if AWS can offer me ARM servers at a discount, I might as well take advantage of that.
I think that's the fundamental mistake in reasoning:
If ARM is cheaper for AWS, then AWS has no reason at all to offer it to its customers at a discount, because customers won't move if no discount is offered. As long as there's no mass market for ARM PCs/servers that work with all modern software, which anyone can rack and sell a la ServInt/Erols/EV1 circa 1996, there won't be pricing pressure.
This has played out time and time again in transit pricing.