Apple To Develop Own GPU, Drop Imagination's GPUs From SoCs
by Ryan Smith on April 3, 2017 6:30 AM EST

We typically don’t write about what hardware vendors aren’t going to be doing, but then most things hardware vendors don’t do are internal and never make it to the public eye. When those things do make it to the public eye, however, they are often a big deal, and today’s press release from Imagination is especially so.
In a bombshell of a press release issued this morning, Imagination has announced that long-time partner Apple has informed them that Apple will be winding down its use of Imagination’s IP. Specifically, Apple expects that they will no longer be using Imagination’s IP for new products in 15 to 24 months. Furthermore, the GPU design that replaces Imagination’s designs will be, according to Imagination, “a separate, independent graphics design.” In other words, Apple is developing their own GPU, and when it is ready, they will be dropping Imagination’s GPU designs entirely.
This alone would be big news, however the story doesn’t stop there. As Apple’s long-time GPU partner and the provider of the basis for all of Apple’s SoCs going back to the very first iPhone, Imagination is also making a case to investors (and the public) that while Apple may be dropping Imagination’s GPU designs for a custom design, Apple can’t develop a new GPU in isolation – that any GPU developed by the company would still infringe on some of Imagination’s IP. As a result, the company is continuing to sit down with Apple to discuss alternative licensing arrangements, with the intent of defending their IP rights. Put another way, while any Apple-developed GPU will contain far less of Imagination’s IP than the current designs, Imagination believes that it will still have elements based on Imagination’s IP, and as a result Apple would need to make smaller, continuing royalty payments to Imagination for devices using the new GPU.
An Apple-Developed GPU?
From a consumer/enthusiast perspective, the big change here is of course that Apple is going their own way in developing GPUs. It’s no secret that the company has been stocking up on GPU engineers, and from a cost perspective money may as well be no object for the most valuable company in the world. However, this is the first confirmation that Apple has been putting those significant resources towards the development of a new GPU. Prior to this, what little we knew of Apple’s development process was that they were taking a sort of hybrid approach, designing GPUs based on Imagination’s core architecture but increasingly divergent/customized from Imagination’s own designs. The resulting GPUs weren’t just stock Imagination designs – and this is why we’ve stopped naming them as such – but to the best of our knowledge, they also weren’t new designs built from the ground up.
What’s interesting about this, besides confirming something I’ve long suspected (what else are you going to do with that many GPU engineers?), is that Apple’s trajectory on the GPU side very closely follows their trajectory on the CPU side. In the case of Apple’s CPUs, they first used more-or-less stock ARM CPU cores, started tweaking the layout with the early A-series SoCs, began developing their own CPU core with Swift (A6), and then dropped the hammer with Cyclone (A7). On the GPU side the path is much the same; after tweaking Imagination’s designs, Apple is now at the Swift portion of the program, developing their own GPU.
What this could amount to for Apple and their products could be immense, or it could be little more than a footnote in the history of Apple’s SoC designs. Will Apple develop a conventional GPU design? Will they try for something more radical? Will they build bigger discrete GPUs for their Mac products? On all of this, only time will tell.
Apple A10 SoC Die Shot (Courtesy TechInsights)
However, and these are words I may end up eating in 2018/2019, I would be very surprised if an Apple-developed GPU has the same market-shattering impact that their Cyclone CPU did. In the GPU space some designs are stronger than others, but A) there is no “common” GPU design like there was with ARM’s Cortex CPUs, and B) there isn’t an immediate and obvious problem with current GPUs that needs to be solved. What spurred the development of Cyclone and other Apple high-performance CPUs was that no one was making what Apple really wanted: an Intel Core-like CPU design for SoCs. Apple needed something bigger and more powerful than anyone else could offer, and they wanted to go in a direction that ARM was not by pursuing deep out-of-order execution and a wide issue width.
On the GPU side, however, GPUs are far more scalable. If Apple needs a more powerful GPU, Imagination’s IP can scale from a single cluster up to 16, and the forthcoming Furian can go even higher. And to be clear, unlike CPUs, adding more cores/clusters does help across the board, which is why NVIDIA is able to put the Pascal architecture in everything from a 250-watt card to an SoC. So whatever is driving Apple’s decision, it’s not just about raw performance.
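To see why adding clusters keeps helping GPUs in a way that adding cores doesn’t help CPUs, it’s worth contrasting near-perfect data-parallel scaling with Amdahl’s law. A toy sketch (the 5% serial fraction and the unit counts here are illustrative assumptions, not figures from Apple or Imagination):

```python
# Toy model: GPU work is almost perfectly data-parallel (each cluster
# shades its own pixels), so throughput scales ~linearly with clusters.
# CPU workloads carry a serial fraction, so Amdahl's law caps the gain.

def gpu_speedup(clusters):
    # Idealized data-parallel scaling: N clusters -> ~N x throughput.
    return float(clusters)

def cpu_speedup(cores, serial_fraction=0.05):
    # Amdahl's law: speedup = 1 / (s + (1 - s) / N).
    # The 5% serial fraction is an illustrative assumption.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} units: GPU ~{gpu_speedup(n):5.1f}x  CPU ~{cpu_speedup(n):5.2f}x")
```

At 16 units the toy GPU is ~16x faster while the toy CPU tops out near ~9x, which is the sense in which more clusters "help across the board" for graphics but not for general-purpose code.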
What is still left on the table is efficiency – both area and power – and cost. Apple may be going this route because they believe they can develop a more efficient GPU internally than they can following Imagination’s GPU architectures, which would be interesting to see as, to date, Imagination’s Rogue designs have done very well inside of Apple’s SoCs. Alternatively, Apple may just be tired of paying Imagination $75M+ a year in royalties, and wants to bring that spending in-house. But no matter what, all eyes will be on how Apple promotes their GPUs and their performance later this year.
Speaking of which, the timetable Imagination offers is quite interesting. According to Imagination’s press release, Apple has told the company that they will no longer be using Imagination’s IP for new products in 15 to 24 months. As Imagination is an IP company, this is a critical distinction: it doesn’t mean that Apple is going to launch their new GPU in 15 to 24 months, but rather that they will be done rolling out new products using Imagination’s IP altogether within the next two years.
Apple SoC History

| SoC | First Product | Discontinued |
|-----|---------------|--------------|
| A7 | iPhone 5s (2013) | iPad Mini 2 (2017) |
| A8 | iPhone 6 (2014) | Still In Use: iPad Mini 4, iPod Touch |
| A9 | iPhone 6s (2015) | Still In Use: iPad, iPhone SE |
| A10 | iPhone 7 (2016) | Still In Use |
And that, in turn, means that Apple’s new GPU could be launching sooner rather than later. I hesitate to read too much into this because there are so many other variables at play, but the obvious question is what this means for the (presumed) A11 SoC in this fall’s iPhone. Apple has tended to sell most of their SoCs for a few years – trickling down from iPhone and high-end iPad to their entry-level equivalents – so it could be that Apple needs to launch their new GPU in A11 in order to have it trickle down to lower-end products inside that 15 to 24 month window. On the other hand, Apple could go with Imagination in A11, and then just avoid doing trickle-down, using new SoC designs for entry-level devices instead. The only thing that’s safe to say right now is that with this revelation, an Imagination GPU design is no longer a lock for A11 – anything is possible.
But no matter what, this does make it very clear that Apple has passed on Imagination’s next-generation Furian GPU architecture. Furian won’t be ready in time for A11, and anything after that is guaranteed to be part of Apple’s GPU transition. So Rogue will be the final Imagination GPU architecture that Apple uses.
144 Comments
lilmoe - Monday, April 3, 2017 - link
In the short term? No. Eventually? Highly possible. Nothing's stopping them.

psychobriggsy - Monday, April 3, 2017 - link
RISC-V would be a potential free-to-license ISA that has had a lot of thought put into it. But maybe for now ARM is worth the license costs for Apple.
vFunct - Monday, April 3, 2017 - link
Thing is, ARM already has Apple origins, having been partly funded by Apple for their Newton. But, given the rumors of Apple buying Toshiba's NAND flash fabs, it seems more likely that Apple is going all-in on in-house manufacturing and development of everything, including ISA and fabs.
vladx - Monday, April 3, 2017 - link
Apple owning their own fabs? Seriously doubt it, the investment is not worth it for just in-house manufacturing.

Lolimaster - Monday, April 3, 2017 - link
And if your sales kind of plummet, the fab costs will make you sink.

FunBunny2 - Monday, April 3, 2017 - link
-- That's a moderately large undertaking.

that's kind of an understatement. the logic of the ALU, for instance, has been known for decades. ain't no one suggested an alternative. back in the good old days of IBM and the Seven Dwarves, there were different architectures (if one counts the RCA un-licenced 360 clone as "different") which amounted to stack vs. register vs. direct memory. not to mention all of the various mini designs from the Departed. logic is a universal thing, like maths: eventually, there's only one best way to do X. thus, the evil of patents on ideas.
Alexvrb - Monday, April 3, 2017 - link
The underlying design and the ISA don't have to be tightly coupled. Look at modern x86, they don't look much like oldschool CISC designs. If they're using a completely in-house design, there's no reason they couldn't start transitioning to MIPS64 or whatever at some point.

Anyway I'm sad to see Apple transitioning away from PowerVR designs. That was the main reason their GPUs were always good. Now there might not be a high-volume product with a Furian GPU. :(
FunBunny2 - Tuesday, April 4, 2017 - link
-- Look at modern x86, they don't look much like oldschool CISC designs.

don't conflate the RISC-on-the-hardware implementation with the ISA. except for 64 bit and some very CISC extended instructions, current Intel cpu isn't RISC or anything else but CISC to the C-level coder.
willis936 - Wednesday, April 5, 2017 - link
"Let's talk about the hardware. Now ignore the hardware."

name99 - Monday, April 3, 2017 - link
I think it's perhaps too soon to analyze THAT possibility (an Apple-specific ISA). Before that, we need to see how the GPU plays out. Specifically:

The various childish arguments being put forth about this are obviously a waste of time. This is not about Apple saving 30c per chip, and it's not about some ridiculous Apple plot to do something nefarious. What this IS about is the same thing as the A4 and A5, then the custom cores --- not exactly *control* so much as Apple having a certain vision and desire for where they want to go, and a willingness to pay for that, whereas their partners are unwilling to be that ambitious.
So what would ambition in the space of GPUs look like? A number of (not necessarily incompatible) possibilities spring to mind. One possibility is much tighter integration between the CPU and the GPU. Obviously computation can be shipped from the CPU to the GPU today, but it's slower than it should be because of getting the OS involved, having to copy data a long distance (even if HSA provides a common memory map and coherency). A model of the GPU as something like a sea of small, latency tolerant, AArch64 cores (ie the Larrabee model) is an interesting option. Obviously Intel could not make that work, but does that mean that the model is bad, that Intel is incompetent, that OpenGL (but not Metal) was a horrible target, that back then transistors weren't yet small enough?
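The "sea of small, latency-tolerant cores" idea rests on a Little's-law style argument: a throughput core hides memory latency by keeping enough threads in flight rather than by using big out-of-order machinery. A back-of-the-envelope sketch (the 400-cycle latency and 10 cycles of ALU work per access are illustrative assumptions, not numbers from any Apple or Imagination design):

```python
import math

def threads_needed(mem_latency_cycles, work_cycles_per_access):
    # Little's law: concurrency = latency x throughput. To keep a core
    # continuously busy while each memory request takes mem_latency_cycles,
    # it needs roughly latency/work interleaved threads, so one is always
    # ready to execute while the others wait on memory.
    return math.ceil(mem_latency_cycles / work_cycles_per_access)

# Illustrative numbers: 400-cycle DRAM latency, 10 cycles of ALU work
# between accesses -> ~40 resident threads per core hide the latency.
print(threads_needed(400, 10))  # -> 40
```

This is why such cores can stay small and in-order: the area that a latency core spends on reordering and speculation, a throughput core spends on extra thread contexts instead.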
With such a model Apple starts to play in a very different space, essentially offering not a CPU and a GPU but latency cores (the "CPU" cores, high power and low power) and throughput cores (the sea of small cores). This sort of model allows for moving code from one type of core to another as rapidly as code moves from one CPU to another on a multi-core SoC. It also allows for the latency tolerant core to perhaps be more general purpose than current GPUs, and so able to act as more generic "accelerators" (neuro, crypto, compression --- though perhaps dedicated HW remains a better choice for those?)
Point is, by seeing how Apple structure their GPU, we get a feeling for how large scale their ambitions are. Because if their goal is just to create a really good "standard" OoO CPU, plus standard GPU, then AArch64 is really about as good as it gets. I see absolutely nothing in RISC-V (or any other competitor) that justifies a switch.
But suppose they are willing to go way beyond a "standard" ISA? Possibilities could be VLIW done right (different model for the split between compiler and HW as to who tracks which dependencies) or use of relative rather than absolute register IDs (ie something like the Mill's "belt" concept). In THAT case a new instruction set would obviously be necessary.
I guess we don't need to start thinking about this until Apple makes bitcode submission mandatory for all App store submissions --- and we're not even yet at banning 32-bit code completely, so that'll be a few years. Before then, just how radical Apple are in their GPU design (ie apparently standard GPU vs sea of latency tolerant AArch-64-lite cores) will tell us something about how radical their longterm plans are.
And remember always, of course, this is NOT just about phones. Don't you think Apple desktop is as pissed off with the slow pace and lack of innovation of Intel? Don't you think their data-center guys are well aware of all that experimentation inside Google and MS with FPGAs and alternative cores and are designing their own optimized SoCs? At least one reason to bypass IMG is if IMG's architecture maxes out at a kick-ass iPad, whereas Apple wants an on-SoC GPU that, for some of their chips at least, is appropriate to a new ARM-based iMac 5K and Mac Pro.