It’s strange how fast technology changes from something that feels magical to something utterly mundane.
Think about the last time you went on a plane. Did you take a moment to appreciate the extraordinary fact that somehow, a flying metal tube can propel you across the planet in a matter of hours – something that would have been unimaginable not so long ago? Or, perhaps more likely, did you grumble about delays, squeeze yourself into a cramped seat and bristle at the crying toddler behind you kicking your seat?
Of course, like everyone, you did the latter. And now this same transition is already happening with the jaw-dropping advances in generative AI we’ve witnessed over the last few months.
It was only last November that OpenAI launched ChatGPT. To me, it felt immediately clear that it marked a new technological epoch. An “iPhone moment” that changes everything that comes next.
Why? Because it feels like magic. GPT, the language model that underpins ChatGPT, makes the Turing Test feel almost irrelevant.
But this wasn’t enough for some people.
“It’s just fancy autocomplete,” is one complaint that has been made with surprising regularity on my Twitter feed of late, by commentators who are seemingly disappointed that OpenAI hasn’t invented Skynet or HAL 9000.
And this just seems bizarre to me. It’s like going to see a magician and being disappointed that he didn’t actually chop his assistant in half.
Because even though there’s a grain of truth in the criticism – that GPT, and other generative AI tools such as Midjourney, Bard and Stable Diffusion, are just illusions of intelligence that aren’t really ‘comprehending’ what we’re asking in any meaningful sense – it doesn’t matter. The illusion is good enough, and I think it’s obvious that these tools are going to change the world and the way we work profoundly.
Ten billion marginal gains
What’s striking about all of these new AI models is just how new they are – and yet we can already feel their impact.
ChatGPT isn’t really a product. It’s a toy for developers and curious nerds to play with. It doesn’t easily plug into anything else, it isn’t connected to the ‘live’ internet, nor can it access any of your personal data or documents. Yet despite these limitations, I have already seen in my own life and in the lives of people I know how it is being used not just as a plaything, but for serious business.
For example, a friend of mine who works outside the tech industry is already routinely using ChatGPT as part of his job. He runs communications and events for a small professional organisation, and over the last few months he has discovered organically that many of his regular tasks can be completed almost instantly with AI.
He’s used ChatGPT to write invitations to events, he’s used it to re-word speaker biographies to change them from the first person to the third, and to draft news items for his organisation’s website. He’s even used it to design Excel formulas by describing the outputs he wants, instead of relying on laborious trial and error or scouring Google for the solution.
I’ve also heard stories of ChatGPT being used by teachers to design worksheets for primary school children, or by social media managers to brainstorm post ideas. None of these tasks take ages when performed the traditional way, but they’re a lot quicker if you’re starting from something more than a blinking cursor on a blank document.
What’s clear to me from just this handful of examples is that we’re barely scratching the surface. There must be approximately ten billion other tiny tasks like this that AI can take care of that little bit faster.
I’m sure my friend isn’t the only person already using ChatGPT for real-world work too. Though few organisations will have integrated any generative AI tools officially into their workflow, I’d bet that hundreds of thousands of individual employees already work with a ChatGPT tab open, to assist with whatever tedious admin task they need to do next.
Don’t get me wrong: I’m not suggesting that AI will soon be writing dense scientific papers or accurately reporting the news, where accuracy and novel information are the most important qualities. But most work isn’t like this. If you need to take a pre-existing text or dataset and do something with it, GPT is almost self-evidently transformative, whether you’re asking it to extract the key quotes or analyse what the numbers mean.
So it’s no wonder that Microsoft and Google are speedily building these sorts of generative AI capabilities directly into Office and Drive. In a few months’ time, it’s very possible that Google could be auto-summarising large bodies of text in Docs, and that PowerPoint could be automatically spinning up a slide deck based on a report you’ve written in Word.
What’s important is that generative AI unlocks countless marginal productivity gains. Imagine shaving a few seconds from every boring admin activity you do – and now multiply that time saved across everyone on the planet. (And this is likely understating it – some of the tasks I describe above might typically take hours.)
Let’s narrow in on ChatGPT. If you’re actively seeking reasons to be underwhelmed by it specifically, you might point to the limitations of its dataset, or the limited chatbot-style interface. But here’s the thing: even if all AI development stopped right this very second, and ChatGPT with its clunky interface was all we had to show for it, it would still be revolutionary. Because generative AI is so obviously useful in so many different circumstances, even if no more deeply integrated platforms emerge (such as GPT built into Office), its usage would inevitably grow organically with its existing functionality – no Skynet required.
Faulty pattern recognition
So why are other people being such downers? Well, if ChatGPT can do a passable impression of a technology journalist, then you can forgive me for playing amateur psychologist.
I think the AI scepticism is driven by a few major factors.
The first is, ironically, like a mistake an AI would make: faulty pattern recognition. In this case, it’s because we’ve just lived through an enormous hype cycle, the crypto bubble, that turned into a bust.
For the past decade, we’ve been bombarded with overheated claims that the blockchain and cryptocurrencies are the next big thing. “Web3”, we were told, was going to rewrite the fundamental building blocks of how the internet works, and wide-eyed evangelists could bore on for ages in almost impenetrable jargon.
But despite the millions of words written about the technology, and the thousands of tonnes of additional greenhouse gas emissions, all we have to show for it are a bunch of scams and a handful of unfunny memes for Elon Musk to post on Twitter.
And it didn’t help that many evangelists then went on to hype the so-called Metaverse, a similarly speculative technology.
Given this recent experience, it’s not hard to imagine why people might assume “AI” is just the same old bullshit cycle all over again. After all, a lot of the breathless AI coverage makes eerily similar revolutionary claims about how this new technology that few people really understand is going to change the world.
However, the comparison doesn’t really work: unlike cryptocurrency, which for the last decade has remained the answer to a question that nobody asked, generative AI has, as I describe above, already demonstrated its obvious utility. ChatGPT alone has done that in a few short weeks, and this is before the technology has filtered down into every other piece of software in our lives.
Similarly, I think there are political reasons commentators may find it easier to doubt than to believe. In many cases, the people most excited about AI are the same wide-eyed “tech bros” who believed in crypto.
I also think there could be an in-group dynamic at play amongst commentators too: scepticism is a much more defensible position to take, and if you actually go out on a limb, there’s a risk that you might be wrong, and you might be laughed at by your professional peers on Twitter. How do I know this? Because in writing this essay, I’m a bit worried my professional peers will laugh at me if I’m proven wrong.
However, I think the most significant reason for scepticism is the most understandable of all: fear.
In the 17th century, the French philosopher Blaise Pascal set out what became known as “Pascal’s Wager” in discussions about the existence of God. He argued that any rational person should live their life as a pious religious believer, because the downside risk of not doing so is extremely bad: if you live like a believer, then you die and it turns out God doesn’t exist, so what? But if you live like a non-believer and he does… you’re in for a toasty time for the rest of eternity.
It’s an argument that is driven not by reason or logic, but by fear. And I think the same thing is going on with regards to AI. Either the sceptics are right that generative AI isn’t a big deal and the world goes on as normal. Or they’re wrong – and the profound consequences of generative AI are almost too mind-boggling and potentially terrifying to comprehend.
A world where generative AI does live up to the hype is one where millions of jobs are suddenly in doubt. Even knowledge workers will need to worry, given the profound capabilities of the new technology. It’s a world where the tools to create fake or misleading images, text and soon videos will be commoditised, so that anyone can make them, at enormous scale. And it’s a world where everything from legal proceedings to mortgage applications to what Deliveroo suggests you have for dinner could be determined by an impenetrable black box that you can’t open up to see why, specifically, any given determination was made.
Even if you’re slightly more optimistic about AI than the doomers, it is structurally true that an AI revolution injects a significant dose of uncertainty into the future. And as much as cold-hearted rationalists like me might argue that lost industries and jobs will be replaced by new ones, the reality is that transitions have winners and losers – and if you’ve already got a stake in the existing status quo (say, a good job or a mortgage to pay), then the idea of an unknown agent shaking up how society is organised is, understandably, scary.
(And this is before we even start to consider whether the existential risk people might have a point.)
Hell, having written those last few paragraphs, I’m even tempted to try and bury my head in the sand.
Ultimately though, we need to face reality. As the last few months have made clear, and as jaw-dropping new demos reveal on a near daily basis, AI and large language models are going to change the world. However we may choose to regulate them, and whatever their limitations now, they’re not going to be uninvented as they’re simply too useful.
My friend whom I describe above, who is already using ChatGPT in his job, says we’re currently at the Wright Brothers stage of development, and I think he’s right:
The plane might only stay airborne for a few seconds, but it is clear that it marks a new era of powered flight, which has profound implications for the way all of us live and work. Whether we like it or not.
 What have I been using ChatGPT for? Mostly writing new episodes of Star Trek: The Next Generation guest starring the Chuckle Brothers, and then trying to get around the AI censorship guardrails, and persuade it to let Hitler turn up on the starship too.
 Emphasis on “easily” – though OpenAI has recently launched plugin functionality and has an API, this is again only experimental. There’s no “ChatGPT” app that non-nerds can install on their computers, and Siri and Alexa can’t yet parse queries using GPT.
This article’s striking lead image was created by (the human) Eva Bee and licensed from Ikon Images.