Why I’m still betting on Apple in the era of AI

One of my most annoying personality flaws is that I’m an Apple zealot.

I didn’t intend to be, but over the years I’ve become the bore who always wades in when someone asks “What’s the best laptop I can get for under £500?” My answer: “Spend twice as much and get a Mac.”

Though ideologically I’d quite like to be a right-on open-source bro, I’m fully immersed in Apple’s ecosystem, with an iPhone, iPad, Mac, Apple Watch, Apple TV and AirPods. I even use the objectively very stupid Apple Magic Mouse.

In other words, Apple’s marketing department would absolutely love me if I wasn’t an overweight, rapidly balding man who likes trains. I suspect this is not quite what its brand is going for.

Anyway, perhaps my Apple affinity shouldn’t be surprising. Since the launch of the iPhone in 2007, the company has spent a decade and a half slowly building a moat around its position as the most powerful of the big tech titans, making it ever harder for rivals to compete. It’s no wonder the iPhone is often cited as the most successful consumer product of all time.

However, that moat may not remain impenetrable forever.

The rise of generative AI we’ve witnessed over the last year, led by OpenAI’s GPT models, marks the first time since the iPhone’s inception that Apple’s dominance has been seriously challenged – to the point where we can squint and imagine a future where, just maybe, Apple isn’t the most important company in our digital lives.

The launch of ChatGPT felt as important as the launch of the iPhone, and the last time a moment like that upended the tech industry, it wasn’t great news for the previous incumbents – companies like BlackBerry and Nokia. Nor was it good for Microsoft, which had to essentially dump Windows as the centre of its universe and transform its entire corporate strategy to succeed in the new world.

So could Apple – as powerful as it is now – be about to go the same way? Could AI be a vulnerability in Apple’s battlements that lets the invaders finally take it down?

Despite it looking bad on paper, I’m not convinced. In fact, I think Apple could emerge even stronger in a world where AI is baked into every device we touch. Here’s why.

Apple’s AI weakness

First, it’s important to lay out why Apple could be vulnerable. The answer comes down to something that was previously one of the company’s biggest strengths: privacy.

One of the reasons I’m an Apple zealot is because I trust the company when it comes to security and privacy – at least, more so than its big tech rivals.

Apple has made privacy and security a core part of its brand – that’s why it shows a little Apple logo styled as a padlock on screen at its keynotes.

“We believe the customer should be in control of their own information,” Apple CEO Tim Cook told the audience at an Electronic Privacy Information Center event. “You might like these so-called free services, but we don’t think they’re worth having your email, your search history and now even your family photos data mined and sold off for God knows what advertising purpose. And we think someday, customers will see this for what it is.”

This is all music to my paranoid ears. And that brand commitment is one of the reasons why Apple has steadily rolled out ever more privacy-preserving features – it’s why such strict permission controls are now built into its operating systems, governing how apps interact with your data and your hardware. Just to give one example, if you’ve enabled the optional Advanced Data Protection feature on the most recent versions of iOS, Apple can’t even see your files and photos as they are stored in the cloud – only you, with your phone, can do that[1].

This is one of the reasons I’m happy to pay a premium for Apple products. But it isn’t just Apple doing it out of the goodness of Tim Cook’s heart.

The privacy protections are only credible, and indeed possible, because Apple’s primary business model is selling physical hardware with a thick profit margin on top – not selling advertising like Google and Meta, whose businesses necessarily require raking through mountains of data to target ads more effectively and learn customers’ habits.

For a long time, this approach has worked great for Apple. The company has stricter privacy and a classier brand thanks to its business model, and every time there’s a controversy like the one over Cambridge Analytica, Apple looks even better by comparison with its rivals.

However, throw the brave new AI world into the mix, and the privacy trade-offs start to look a little different.

Privacy in an AI world

Large language models (LLMs) are so powerful because of the vast amount of data they are trained on. ChatGPT feels like magic because millions of hours of GPU time were spent effectively crunching down the entire internet into the GPT-4 model. And it’s good at answering questions because your prompt is sent to that enormous model, hosted in the cloud, to divine an answer.
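To make that concrete, here’s a minimal sketch (in Python, purely for illustration) of what a ChatGPT-style exchange boils down to: your prompt travels over the network to a model running in someone else’s data centre, and the answer travels back. It uses OpenAI’s public chat completions endpoint; the API key is a placeholder.

```python
import requests

API_KEY = "sk-..."  # placeholder – substitute a real OpenAI API key

# Every question is a round trip to a model running in a distant server farm
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Why does the sky look blue?"}],
    },
    timeout=30,
)

# The cloud does all the heavy lifting; your device just displays the result
print(response.json()["choices"][0]["message"]["content"])
```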

And suddenly you can see why Apple may be at a disadvantage compared to Google and Meta, which have their tentacles in every corner of the web and have fewer inhibitions about analysing personal data in the cloud.

So it is Apple’s rivals who are better placed to amass the largest cache of training data – and it is they who will be more willing to build and release products that sniff through and join up our personal data and documents, instead of keeping them carefully separated and locked down.

And while it isn’t clear exactly how sophisticated Apple’s LLM efforts are (the company is thought to have a model of its own, but is notoriously secretive), given these misaligned incentives, you can easily imagine a day when Google’s, Meta’s and Microsoft’s AI capabilities accelerate well beyond what Apple is capable of developing in-house.

Apple’s hardware advantage

So why am I still so bullish about Apple’s future? The reason goes back to Apple’s other major competitive advantage, which I think will still be decisive, even in a world where we’re interacting with our devices using natural language and having AI tools play an even deeper role in our lives. I’m talking, of course, about hardware.

One of the reasons Apple has prospered in the mobile era is that it is vertically integrated to an incredible degree. Unlike the developers of Android (or Windows), the developers of iOS do not have to build it with the flexibility to work across thousands of different devices with just as many different hardware configurations. They just have to make sure iOS works on maybe ten different types of iPhone.

And the integration goes deeper with the adoption of Apple’s own-designed chips, from the iPhone 4 onwards. Unlike a maker of general-purpose chips such as Intel or Qualcomm, Apple knows exactly which devices it is designing chips for, so it can customise them for the job at hand, making everything work even more efficiently.

This efficiency has a number of downstream consequences: it means Apple can ship a phone with a slower processor or a smaller battery than a rival Android device on paper – but the phone will still effectively run just as fast or last just as long in real-world usage. Or perhaps it means Apple can build devices that run cooler – so they can run faster and crunch more data without overheating.

And because of Apple’s scale and its ownership of both the hardware and the software, it is uniquely difficult for fragmented rivals to match Apple’s level of integration. Combined, this all gives Apple a pretty extraordinary competitive advantage when it comes to on-device computing – or “local compute”, to use the jargon.

And this brings me back to AI and LLMs.

AI: moving from cloud to local

Obviously, it is GPT that has set the world on fire, and it is still by some distance the most sophisticated AI model available today. It works in the cloud – it has to, as the model is just too enormous, too power-hungry and too processor-intensive to run on a local device. It needs the advanced computing power of a server farm to generate the amazingly lifelike responses that it does.

But in the long term, inevitably, we’ll also be running sophisticated LLMs locally on our devices. The models are already being designed.

For example, Meta has its own model, Llama (Large Language Model Meta AI), which is designed and optimised to run on a GPU you could find in an ordinary desktop computer. Similarly, Google has announced Gemini Nano, a shrunk-down version of its Gemini AI model that is optimised for use on mobile devices.
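To give a flavour of what running a model locally looks like in practice, here’s a minimal sketch using the open-source llama-cpp-python bindings, which can run quantised Llama-family models on ordinary hardware. The model filename is a hypothetical placeholder for whichever weights you’ve downloaded.

```python
from llama_cpp import Llama

# Load quantised model weights straight from disk – no server involved
llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf")  # hypothetical local file

# Inference runs on this machine's own CPU/GPU; the prompt never leaves the device
output = llm(
    "Q: Why might on-device AI be better for privacy? A:",
    max_tokens=128,
    stop=["Q:"],
)

print(output["choices"][0]["text"])
```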

Compared to ChatGPT, these local models will probably generate inferior responses – but if the responses can be made good enough, then there are some big advantages to not doing all your computation in the cloud.

Take Google’s Gemini demo video[2], for example – specifically the moment about 2:30 in, where the model tracks a screwed-up ball of paper as it is moved around under overturned cups. For that to work as envisaged, the AI model would need to operate in more or less real time, analysing the pictures it sees and reacting to them. We’ll want our AI companions to already know the context for our questions when we talk to them – and that is going to be hard to do if everything is based in the cloud.

Of course, one way of doing this would be to send all of the data the camera is collecting up to the cloud, and process the visuals remotely. But now try to imagine what might happen outside of a tech demo and in the real world: your device might not have a connection, the connection might be slow, or it might be disrupted. There are bandwidth issues if everyone is streaming video over 5G. And there’s simply the lag between sending and receiving.
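To see why local compute wins on reliability, here’s a rough sketch of the obvious hybrid design: always compute an answer on-device, and treat the cloud as an optional upgrade that can fail without breaking anything. Both model functions are hypothetical stand-ins, not any real vendor’s API.

```python
import random
import socket


def run_local_model(prompt: str) -> str:
    """Hypothetical stand-in for a small on-device model (Gemini Nano class)."""
    return f"[local] quick answer to: {prompt!r}"


def run_cloud_model(prompt: str) -> str:
    """Hypothetical stand-in for a big cloud model; needs a working connection."""
    if random.random() < 0.3:  # simulate a flaky mobile connection
        raise socket.timeout("no route to the server farm")
    return f"[cloud] detailed answer to: {prompt!r}"


def answer(prompt: str) -> str:
    """Local-first: the on-device answer is guaranteed; the cloud is a bonus."""
    local_answer = run_local_model(prompt)  # no lag, no bandwidth, works offline
    try:
        return run_cloud_model(prompt)
    except (socket.timeout, OSError):
        return local_answer  # degrade gracefully instead of failing outright


print(answer("Which cup is the ball of paper under?"))
```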

In other words, the only way to make sure AI works reliably is to make sure that as much of the computation as possible is done locally.

Local computing, then, is still going to be important. And luckily for Apple, being the company with a structural advantage in making powerful hardware that out-computes rivals seems like a pretty good position to be in.

When the chips are down

Imagine a future where we all use something like the Humane AI Pin (I’m not entirely sold on the idea). In essence, you wear it like a Star Trek communicator badge and interact with it by voice, using natural language. And it has a camera, so it can see what you see.

Or imagine a future where you’re wearing a Vision Pro-style headset, with cameras facing out and observing the world around you. Or even just the most boring future, where the phone is still our primary device.

In all of those scenarios, the devices will be taking in as much information about the world around them as possible in order to seamlessly assist us as we go about our days. Our devices – whatever form they take – will be busily crunching through our data so that they can do the magical things they’ll do.

One way they could work is by streaming all of that image data to the cloud. In fact, that is how the designers of the Humane Pin (who are ex-Apple engineers) imagine it working: by maintaining a permanent 5G connection – skipping even Wi-Fi entirely.

But I think what is more likely is that any technology like that is going to rely on a significant amount of local computation, for the sake of bandwidth, speed and reliability.

This means that the company that wins isn’t just going to be the one with the most sophisticated model – it’s also going to be the one with the best hardware. And assuming Apple maintains its local-compute hardware advantage, it will still have a competitive edge over its rivals.

And that’s why, though the rise of AI might not appear to be good news for Apple, I’m not ready to renounce my zealotry just yet.




References

[1] Apple doesn’t even, for example, identify faces and tag them, or generate photo memories in the cloud. It can’t because it can’t see them. Instead, it’s actually your iPhone doing the hard work crunching through data as it is plugged in overnight.

[2] The video was edited to remove latency.

James O'Malley

James O'Malley is a freelance politics and technology writer, journalist, commentator and broadcaster. He has written for Wired, Politico and The New Statesman and was editor of Gizmodo UK. He has also written a number of articles about cutting-edge technology for TechFinitive.com.
