ChatGPT isn’t Star Trek, so don’t let science fiction cloud your AI thinking

AI technology is a lot of things. ChatGPT, Google Bard, Windows Copilot and the current wave of buzz-worthy technology will be a workplace game-changer — or, depending on your viewpoint, an error-riddled, bias-plagued nightmare.

The one thing they’re not? They’re not really artificial intelligence — at least not in the way that science fiction has imagined AI for a hundred years.

That’s how long androids, talking computers and other fantastical forms of artificial intelligence have been a part of science fiction and fantasy. Take the term “robot”, which Czech playwright Karel Čapek introduced in his 1920 play R.U.R. So when you think of AI, you’d be forgiven for thinking in terms of fiction rather than the more recent and chaotic reality.

AI in science fiction

In movies, TV shows and sci-fi novels, AI comes in all shapes and sizes. 2001: A Space Odyssey’s sinister supercomputer HAL. Star Trek’s friendly Data. Westworld’s uncannily humanlike Hosts. The boxy droids of Star Wars.

But from clunky Robby the Robot to teary-eyed Haley Joel Osment feeling a devastating range of emotions in a movie Steven Spielberg literally called A.I. Artificial Intelligence, these fictional machines all have one thing in common: they think for themselves.

And the trend shows no sign of stopping. Exhibit A: the android kid in forthcoming humans-vs-robots action movie The Creator.

Back in the real world, we have ChatGPT, Bard and a whole wave of new technologies billed as “artificial intelligence”. They’ve advanced to the point that when you interact with them, you might think that they’re thinking.

But don’t be fooled.

AI in the real world

Large language models like ChatGPT are trained on vast amounts of text originally created by people, and they generate responses by predicting which words are statistically likely to come next. The AI we see in movies, by contrast, can think and reason, and knows why you cry. Those fictional machines exhibit common-sense reasoning and cognitive abilities that Bard and Bing could only dream of. If they could dream, which they can’t.
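To make that distinction concrete, here’s a toy sketch in Python. This is purely my own illustration, not how ChatGPT actually works under the hood (real models are incomparably more sophisticated), but the underlying principle is the same: the “model” learns only which word tends to follow which in its training text, then strings words together by replaying those statistics.

```python
import random
from collections import defaultdict

# Toy "language model": record which word follows which in the training
# text, then generate new text by sampling from those recorded statistics.
corpus = "the robot dreams of electric sheep and the robot wakes".split()

transitions = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    transitions[word].append(next_word)

def generate(start, length=8):
    """Build a sentence by repeatedly picking a statistically likely next word."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break  # no known continuation for this word
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # plausible-looking output, zero understanding
```

The output can read surprisingly naturally, but at no point does the program know what a robot or a sheep is. Scale that idea up by billions of parameters and you get something like ChatGPT: vastly more capable, but still predicting rather than comprehending.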

Technically speaking, we should describe the sentient computer systems of fiction as “strong AI” or “artificial general intelligence” (AGI). Today’s task-specific machine learning tools are better described as “narrow AI”. No matter how convincingly ChatGPT spits out C++ code or A+ college essays, the sci-fi genre’s human-level machine intelligence simply doesn’t exist.

Why is this a problem? Am I just splitting hairs, arguing over the semantics of arcane computer science jargon? Admittedly, I have skin in the game: as a writer, I’m torn between excitement at this groundbreaking technology and lying-awake-at-night terror about it taking my job.

But our perception of AI matters. It matters because this current crop of AI is approaching a tipping point of mainstream awareness, acceptance and usage. It matters what the people using and legislating AI believe about AI, consciously or subconsciously.

Clever vs intelligent

The current brand of AI is obviously very clever. But clever isn’t the same as intelligent. And if we confuse these terms, we run the risk of making decisions that aren’t very clever at all.

That image of living, thinking AI in our heads may affect the way we deal with the current crop of technology marketed to us as AI. The tech-savvy understand narrow AI’s potential and limitations. But users and lawmakers without a technical background could have their beliefs and assumptions about real-world technology coloured by the version of AI fed to them over decades in a diet of robots and replicants.


Today’s crop of AI tools has learned to talk like humans, and can therefore communicate like the chatty robots of sci-fi. It’s getting harder to spot that you’re talking to a computer. But ChatGPT’s natural, flowing conversational style can trigger our brain’s biases.

Concerns around bias come up a lot in conversations about AI, as machine learning systems hoover up data generated by humans, and our prejudices with it. But we also need to be careful about our own biases when we deal with AI. That human-like communication leaves us open to a whole grab-bag of skewed thought processes.

If Captain Picard told Data to scan for life forms or plot a course for home, he wouldn’t be impressed if the android officer just made something up, or steered the starship Enterprise smack into a black hole. But that’s where today’s error-prone AI is at. There’s even a cute little phrase for the oopsies made by ChatGPT and chums: they don’t make mistakes or make stuff up, they “hallucinate”.

This is a pretty clear example of why the language used around AI is important. If your business, your family or some day your entire planet depends on AI, it’s important to be realistic about its capabilities. 

Rights for robots

One of the earliest classic episodes of Star Trek: The Next Generation, 1989’s “The Measure of a Man”, debated whether positronic-brained android Data was actually a living being, deserving of the dignity and respect accorded all humans. That thread runs through sci-fi right up to the recent revival series Star Trek: Picard.

The thing is, these stories aren’t really about robot rights. Like most science fiction and fantasy, the metaphor at the heart of the story is about how we treat those who are different from us.

Specifically, fictional artificial beings are often servants: Blade Runner’s replicants, for example, are soldiers and sex workers (Karel Čapek derived the word “robot” from a Czech term for servitude and forced labour). Tales of futuristic labour-saving devices are meant to make us think about how we treat those who serve.

In allegorical terms, the genre’s supercomputers and androids are like aliens or any other fantastical creature: they’re beings that look, act and think differently to us. Stories revolving around these different beings, these others, push us to interrogate how we think and act towards those who are different to us here and now in the real world.

In other words, stories about robots are really stories about people.

Do devices deserve rights?

If your takeaway from Star Trek is that devices deserve rights, I’d argue you’re taking it too literally. Sure, if robots ever do achieve sentience — in other words, artificial general intelligence — then perhaps we can revisit the question of them having rights. And if Skynet reads this: I was only kidding, please don’t terminate me.

Perhaps someday Data, K-9 and Kryten will become icons to some future generation of living, thinking machines. In the meantime, the systems currently billed as AI aren’t there yet. We shouldn’t allow ourselves to confuse them with fictional artificial life-forms.

In other words, rather than having our thoughts about artificial intelligence clarified by fictional stories, they may be clouding our vision. We give today’s false-information-spewing plagiarism machines too much leeway if we conflate them with the sentient supercomputers of the silver screen. There’s nothing wrong with being excited about the potential of today’s AI tools — but don’t get carried away.

Just because they’re both dubbed “artificial intelligence”, you might think ChatGPT is as smart as Data. But that doesn’t make it so.



Richard Trenholm

Richard is a former CNET writer who had a ringside seat at the very first iPhone announcement, but soon found himself steeped in the world of cinema. He's now part of a two-person content agency, Rockstar Copy, and covers technology with a cinematic angle for TechFinitive.com
