I’m an Apple fanboy, having used and owned Macs since the late 1980s, but I’m not alone in thinking that Apple silicon processors have placed Apple in a unique position to embed AI within their product ranges. This is the story of how they got there, and of how their way was paved by a small British computer company called Acorn.
Acorn and the ARM architecture
Back in the late 70s, there was a fad for naming computer companies after things that fell from trees¹. Whilst Apple was a trendsetter in this regard, across the Atlantic another computer company was being formed in Cambridge, England. Allegedly inspired by Apple’s name and wanting to get ahead of it in the phonebook, they named their company Acorn Computers. In the UK, Acorn are best known for the BBC Micro, an 8-bit microcomputer commissioned by the BBC to support their TV series The Computer Programme. If you grew up here in the 80s, you would undoubtedly have come across BBC Micros at school, where they were ubiquitous. Acorn also tried, unsuccessfully, to sell the BBC Micro to the US market.
Perhaps less well known — but much more relevant to this story — is that Acorn also designed the ARM processor architecture, whose name originally stood for Acorn RISC Machine. In fact, the BBC Micro was used as the development platform for the first ARM prototypes. There’s a great account of this in the book Acorn: a world in pixels, but in essence ARM was the product of a very small team² working on a small budget — an impressive feat, given that its descendants now power the majority of high-end mobile devices.
A key design decision made by the ARM team was to use RISC, which stands for Reduced Instruction Set Computing. At the time, this was a relatively new approach to processor design, based around using simpler instructions (the individual actions that a processor carries out) that take less time to execute, require less silicon to implement, and generally make the operation of a processor easier to optimise. This, in turn, has benefits in terms of power consumption and resource usage — and these also proved to be key reasons for the eventual popularity of ARM.
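To make “simpler instructions” a little more concrete: ARM is a load/store architecture, meaning arithmetic instructions operate only on registers, and memory is touched solely through explicit load and store instructions. Here’s a trivial Swift function; the AArch64 assembly sketched in the comments is indicative of what a compiler produces, rather than exact output.

    // A trivial function, with indicative 64-bit ARM (AArch64) output in
    // the comments. On a RISC design the arguments arrive in registers and
    // the whole function boils down to a couple of fixed-length
    // instructions; any memory access would need explicit LDR/STR steps.
    func add(_ x: Int, _ y: Int) -> Int {
        // add x0, x0, x1   ; x arrives in register x0, y in x1, result in x0
        // ret              ; return to the caller
        return x + y
    }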
The first computer to use an ARM processor was the Acorn Archimedes³. As an Acorn fanboy (only later did I transfer my allegiance to Apple) I was rather excited by the Archimedes, and pored over images of its majestic GUI in my monthly copies of The Micro User. But, alas, it was far too expensive for the likes of me. Acorn went on to develop further ARM-based machines, notably the innovative RiscPC, but these didn’t achieve much commercial success, and Acorn (majority-owned by Olivetti since a 1985 rescue deal) gradually fizzled out of existence, finally being broken up in the late 1990s.
Apple and the ARM architecture
But fortunately for the rest of us, ARM was spun out into a separate company, Advanced RISC Machines Ltd (the forerunner of today’s Arm Holdings plc), well before the demise of Acorn. ARM is still headquartered in Cambridge, though it’s now owned by Japan’s SoftBank Group. Anyway, around this time — the early 1990s — is where Apple comes into the picture. Apple were looking for a processor for their Newton MessagePad series of handheld computers, and the ARM processor’s capacity to deliver decent computational power from a low-power device was exactly what they needed. This led to a considerable injection of money into the fledgling ARM, but it also led to a change of name for the ARM architecture, from Acorn RISC Machine to Advanced RISC Machine. After all, Apple didn’t want a rival company’s name to appear in their marketing.
The Newton MessagePad was not a commercial success. Launched in 1993, some 14 years before the iPhone, its hardware, software and battery capacity were not yet up to the needs of a handheld device, and the Newton product line soon withered. Nevertheless, Apple’s interest in the ARM architecture foreshadowed the need for capable low-power processors within mobile devices, and nowadays ARM processors are the dominant choice for anything that needs to be both fast and mobile.
Beyond the benefits of using RISC, the success of ARM also owes a lot to its licensing model. Rather than manufacturing integrated circuits and selling them to computer companies, ARM decided to license the architecture so that other companies could implement ARM processor cores within their own chips. First of all, this spared them the enormous expense of setting up their own fabrication facilities. Perhaps more importantly, it allowed third parties to customise ARM’s designs and integrate them with other components on a single chip — an approach known as system-on-chip, or SoC — resulting in faster internal communication and further reductions in power consumption.
Apple’s relationship with ARM didn’t resurface until the development of the iPhone in the mid-2000s. Even then, the relationship wasn’t technically with ARM but rather with Samsung, who supplied the ARM-based chip used within the first iPhone. This was a direct result of ARM’s licensing model, which meant that third parties could develop and sell chips containing ARM cores. However, relying on a chip produced by one of their main rivals in the mobile device space was not ideal, and in 2008 Apple acquired P. A. Semi, a semiconductor design company founded by veterans of low-power processor projects such as DEC’s StrongARM. This enabled Apple to design their own ARM processors in-house, with fabrication contracted out to specialist foundries.
One of the real benefits of this acquisition didn’t emerge until 2020, when Apple began transitioning the Mac range from Intel Core processors to their own M-series ARM processors⁴. Given Apple’s increasing focus on their mobile range of laptops, tablets and smartphones, it made sense to adopt a low-power ARM architecture as the standard processor technology across their portfolio. It didn’t hurt that Apple’s ARM chips had started to match or surpass Intel’s in outright speed, not just in power efficiency.
And this brings us to the present day, where all Apple devices now use Apple silicon. Importantly, Apple’s silicon doesn’t just contain ARM processor cores. It’s a system-on-chip, and includes a host of other bits and pieces that contribute to the speed and capabilities of Apple’s devices. And this really is at the centre of Apple’s claim to be a platform for AI. For example, in addition to containing up to 16 ARM cores, the Apple M3 range — used within Apple’s current MacBook line-up — has an on-chip GPU, unified memory integrated into the same package, and a 16-core Neural Engine⁵. The GPU and Neural Engine are both important for efficiently executing deep learning models, and the generous unified memory (up to 128GB in the M3 Max) means that it’s plausible to fit a whole LLM on-device, removing the need for an internet connection.
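To put that memory figure in perspective: a 70-billion-parameter model quantised to 4 bits per weight needs roughly 70 × 10⁹ × 0.5 bytes, or about 35GB, which sits comfortably within 128GB of unified memory. As for the Neural Engine, developers don’t program it directly; frameworks such as Core ML decide where a model runs. The sketch below shows the general shape of this in Swift — the model file name is a hypothetical placeholder, while MLModelConfiguration and MLComputeUnits are part of Core ML’s actual API.

    import CoreML
    import Foundation

    // A minimal sketch: load a compiled Core ML model and ask for the
    // Neural Engine to be used where possible. "SomeModel.mlmodelc" is a
    // hypothetical placeholder. (.cpuAndNeuralEngine needs macOS 13 or
    // iOS 16 and later.)
    func loadModel() throws -> MLModel {
        let config = MLModelConfiguration()
        // Prefer CPU + Neural Engine over the GPU; use .all to let
        // Core ML choose freely across CPU, GPU and Neural Engine.
        config.computeUnits = .cpuAndNeuralEngine
        let url = URL(fileURLWithPath: "SomeModel.mlmodelc")
        return try MLModel(contentsOf: url, configuration: config)
    }

In practice Core ML treats the computeUnits setting as a constraint rather than a command: operations the Neural Engine can’t handle simply fall back to the CPU.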
So, the modest investment that Apple made in ARM in the early days has ended up paying huge dividends. Apple is now in the fortunate position of having arguably the fastest, most capable mobile devices, and for this they owe quite a lot to a tiny group of people working at a small British company.
1. Talking of things falling off trees... there was also Apricot Computers.
2. Including Steve Furber, more recently a professor at the University of Manchester, where he developed the SpiNNaker hardware platform for spiking neural networks.
3. Try out an emulator if you want to get that full early-90s computing experience. At the time, the Archimedes’ GUI was a real beauty to behold. Nowadays it might look a bit dated!
4. For an account of Apple’s transition through various processor architectures, see Jacob Bartlett’s excellent article Through the Ages: Apple CPU Architecture.
5. This is somewhat veiled in Apple mystery, but is likely to be a neural network accelerator not dissimilar to Google’s TPU.