Generative AI is talking, but is anyone actually listening?

We’ve all been in one of those meetings. 

Bryce from Sales walks in late, talking loudly into his phone and staking his claim at the head of the table with a half-finished protein shake. Bryce kills the call, slams his phone down and leaps right into a rundown of last quarter’s sales – a loud and incomprehensible screed of figures and generic team shout-outs – before promptly standing up, jumping onto another call and stomping off to his next meeting.

While you might think the Bryces of the world no longer exist (oh, rest assured, dear reader, they do), these walking walls of noise are no longer confined to the boardroom. They’re now starting to appear in automated form online: large language models spitting out endless generic slabs of text, generated by “creators” who want to make life easier on themselves, with little regard for who actually has to read the end result.


In the current hype cycle around artificial intelligence, we’ve heard plenty of breathless praise about generative AI. It will save time! It will create beautiful artwork with the click of a mouse! It will whip up a legal contract or write a news article in seconds, giving professionals the opportunity to do the kind of nuanced work only a human can do. And it will even send that all-staff email you’ve been dreading composing because you’re a CEO and you didn’t get here by reaching out to your employees with genuine shows of human connection! What is this, Thursday night book club?!

But amid all this buzz and boosterism, one question has yet to be answered: who is consuming all this AI-generated chum?

Because arguably, that’s what large language models (LLMs) like ChatGPT are producing: chum. In the words of linguist and AI ethicist Emily M. Bender, they are nothing more than “synthetic text extruding machines”.

Synthetic text extruding machines

These language generators are hyper-digital, hyper-online versions of the old factory production line: highly effective at quickly churning out cheap, semi-palatable excretions of text that look and taste close enough to the real thing and are far quicker for the factory owner to produce. 

Forget paying real human workers – the factory of the future can generate that human-made content with a simple prompt and the press of a button. Goodbye expensive journalist, lawyer or doctor, hello AI news article, Robo-Attorney and Dr ChatBot. 

For the companies rushing to push out this kind of content, the lure of spinning up an AI-generated product is strong. Producing content through ChatGPT is quick and easy and can be done by anyone in the team, all with just a few prompts. New website landing page? A month’s worth of email newsletters? Endless content at your fingertips? Presto!

But be warned: AI experts and ethicists say that generative AI in its current state is not fit for purpose.

“Synthetic media machines are designed to make things up,” says Emily M. Bender.


A professor of linguistics and the Director of the Computational Linguistics Laboratory at the University of Washington, Bender is a leading voice on the ethics and societal impacts of AI and large language models like ChatGPT. She says that, ultimately, LLMs are just systems that “parrot” text back to us.

“Their output isn’t grounded in any communicative intent, commitment to truth, or understanding of either the world or the user’s input,” Bender says. “No amount of additional data or fine-tuning can overcome this fundamental fact.”

Bender and her colleagues at organisations like the Distributed AI Research Institute (DAIR) warn that LLMs like ChatGPT are “the equivalent of enormous Magic 8 Balls”. As humans, we trust their output because the text sounds close enough to human writing, but they’re nothing more than toys “we can play with by framing the prompts we send them as questions, such that we can make sense of their output as answers”. 

Word by word

LLMs build sentences by choosing the most likely next word in a sequence. They’re trained on large data sets hoovered up from all across the internet. That source data might include websites like Reddit and Wikipedia, or it might include the intellectual property and creative work of uncredited artists and writers. It’s all grist for the mill. And as one might expect from anything trained on the wide, wild world of the internet, these LLMs readily generate inaccuracies and misinformation. According to Bender, their output often “amplifies the biases encoded in [their] training data”.
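To make that “most likely next word” idea concrete, here’s a minimal sketch in Python. Everything in it – the two-word context window, the toy vocabulary, the probabilities – is invented purely for illustration; a real LLM conditions on thousands of preceding tokens and scores a vocabulary of tens of thousands. But the core loop is the same: pick a plausible next word, append it, repeat.

```python
import random

# Toy next-word probabilities, invented purely for illustration.
# A real LLM computes a probability for every token in its vocabulary,
# conditioned on everything that came before.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "sang": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "roof": 0.3},
}

def generate(prompt: str, max_words: int = 6) -> str:
    words = prompt.split()
    for _ in range(max_words):
        context = tuple(words[-2:])  # last two words as context
        probs = NEXT_WORD_PROBS.get(context)
        if not probs:
            break
        # Sample the next word in proportion to its probability.
        # Note there is no notion of truth here, only likelihood.
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the cat"))  # e.g. "the cat sat on the mat"
```

Nothing in that loop ever checks whether the output is true; the only thing being rewarded is plausibility. That, in miniature, is Bender’s point.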

While producing work through generative AI is simple for the creator, it shows an inherent disregard for the consumer.

Would you sip your morning coffee while reading page after page of AI-generated news? Would you appreciate an email from your CEO if you knew it was written by a machine? Are you rushing out to get tickets to “Ultra Movie 2: Written by KlaxorBot”? (Sorry if that’s actually the name of a Marvel movie, I simply cannot keep up with that franchise anymore.) 

Related reading: AI for Hollywood: pay attention to the strike, because you’re next

Or what if the stakes were higher? If you needed a lawyer, would you trust a predictive text generator trained on the internet to help you? Would you be satisfied raising serious medical concerns with a chatbot instead of a doctor? In the future, if these tools are rolled out for the masses, is it likely that the most privileged people in society will be the ones using a robo doc? Or is it more likely that this tech will be foisted upon already disenfranchised communities who won’t get a say in the matter?

That’s a concern raised by AI researcher Sasha Luccioni, who recently dubbed the push for AI-powered medical chatbots for economically disadvantaged people “techno-saviourism”.

“Helping ‘economically disadvantaged’ people with AI is a myth,” she wrote. 

Welcome to the new underclass

Tech evangelists are quick to talk up the benefits of generative AI, but the focus is squarely on those who enjoy them: the ones saving time, the ones cutting back on staff in their writers’ rooms or saving money on production. It’s about those who get to hold the huge firehose of content with little care for who is on the receiving end of the spray.

And that’s the inherent contradiction around generative AI. It’s fine to create average-sounding text, potentially riddled with inaccuracies or bias, but only if it’s consumed by someone else: a faceless mass of readers that will blindly accept synthetic text extruded from the machine.

It’s the equivalent of “Bryce from Sales” bleating out a sales pitch to a room full of unlistening colleagues. It’s the karaoke singer who loudly belts out an atonal rendition of “Sweet Caroline” without caring that everyone in the bar is on their phones and not even backing up the “Bah, Bah Bahhhh” notes in the chorus (which is, of course, the best part of the song). It’s the factory owner looking down on row after row of conveyor belts, churning out cheap gruel, without ever knowing or caring whether someone will eat it up. 



Claire Reilly, Writer

Claire Reilly is an award-winning technology commentator and video host based in San Francisco. She worked for almost a decade at CNET, where she hosted the Webby Award-winning series "Hacking the Apocalypse" and "Making Space: The Female Frontier."
