AI copyright: should your business be worried?

Generative AI is rapidly becoming a business staple. Many businesses are already actively using AI to write marketing copy, generate images or create voiceovers. Millions of Windows 11 users now have AI tools right there on the desktop. But are these early AI adopters putting their businesses in legal peril by potentially breaching copyright?

AI is trained on all manner of materials, an unknown quantity of which is protected by copyright. If your business publishes AI-generated content that infringes copyright, it’s normally you – not the AI tool – that bears the legal responsibility. And given that there’s no surefire way to tell whether something is AI-generated, which business owner can truly be certain that content created by a member of staff is actually their own work?

The phrase ‘legal minefield’ is overused, but this is a precarious situation for business owners. Let’s explore the delicate issue of AI copyright and how best to protect your business from legal and reputational damage.

Generative AI art services can seem like magic: from nothing more than a text prompt, they spit out images that can be hard to tell apart from genuine photography. Yet there are telltale signs in the results from such services that what you’re getting back might not actually be a unique piece of digital imagination.

Take the image below that I generated using the Midjourney service. If you look closely in the bottom-right corner, you’ll notice what looks like a garbled watermark.

To be clear, this isn’t conclusive evidence that the generated image infringes copyright or directly imitates someone else’s work. Generative art services are trained on vast image libraries, some of which may include watermarked images; the AI apes the watermark because, as far as it’s concerned, that’s what images of a beach look like.

Still, it’s enough to set alarm bells ringing, as is the admission from Midjourney CEO David Holz, in an interview with Forbes.com last year, that the service was trained on images scraped from the internet, with no way of telling which were under copyright. “There isn’t really a way to get a hundred million images and know where they’re coming from,” he admitted. Midjourney is far from alone among AI services in training its models on data lifted from the public internet.

Legal experts say that brings an element of risk for businesses that use such services. “If an image is generated using an AI tool that has been trained using copyright works, then the question will be whether what comes out the other end, the output of the tool, is itself an infringement,” said Eddie Powell, a partner in the IP and commercial team at law firm Fladgate.

“That’s going to be a very detailed technical analysis of whether a substantial part of the original image has been used in what the AI generated,” Powell added. “It’s literally going to be on a case-by-case basis.”

That’s why some rivals are trying to distance themselves by assuring customers that their AI models haven’t been trained on copyrighted material. Adobe, for instance, claims its Firefly AI model “is trained on a dataset of Adobe Stock, along with openly licensed work and public domain content where copyright has expired”.

What’s more, the company says it will protect any corporate customers who are sued for copyright infringement on the back of using Firefly. “With Firefly, Adobe will also be offering enterprise customers an IP indemnity, which means that Adobe would protect customers from third-party IP claims about Firefly-generated outputs,” it claimed in a statement.

Powell said he would advise businesses using generative AI tools to “choose their vendor carefully”, but that the level of risk depends on what the generated images are used for. He said many copyright infringement claims would be “quite small”, but “if you’re mounting a massive advertising campaign, plastering the image on the web, so every social media feed sees it, the damages start to mount up”.

AI soundalikes

It’s not only a person’s artwork that can be mimicked by AI, but also their voice. There is no shortage of services that let you upload a recording or enter a passage of text and have it read in an AI-generated voice that sounds like someone famous. And if you can’t find a preset for the celebrity you want, you can upload samples of their voice to train an AI model.

For a business, the temptation is obvious. Save yourself the tens of thousands of pounds it would cost to hire a celebrity, say Stephen Fry, to voice your commercial; have an AI impersonate the actor instead.

But where do you stand legally if you have your commercials voiced by an AI that sounds just like a public figure? You’re in serious danger of infringing the star’s image rights, according to legal experts. “It’s the right not to be used in advertising in a way that suggests that they’re endorsing or promoting a product when they actually aren’t,” said Powell.

He said there have been clear-cut cases in the past where a celebrity’s photo has been used in advertising without their permission, leading to big payouts. However, it’s less clear cut when it comes to someone’s voice, said Powell, because – in the case of Stephen Fry – people will argue “is it stealing, or is it just someone who sounds a bit like him, with a posh English voice?”

If such a case went to court, it might hinge on whether people believe the voice is the actor’s. “It literally might come down to going out, playing the adverts to members of the public on the street and saying ‘who is saying these words?’” If the answer comes back as Stephen Fry, then “as a business, you could land yourself in hot water”, according to Powell.

Is AI worth the risk?

Of course, there are many more forms of AI content generation seeping into businesses. If you get ChatGPT to write copy for your website, can you be sure it hasn’t been lifted from somewhere else? If you ask it for a specific piece of code, is there any guarantee it’s original? Is any of this a risk worth taking?

Lydia Dettling, policy manager for Europe at tech advisory firm Access Partnership, believes the risk of individual businesses being sued for publishing AI-generated content is small. “I actually haven’t seen any cases yet taken on individuals who have published material,” she said. “The focus at the moment is on the developers.”

Dettling said it’s early days when it comes to the law getting to grips with AI and that matters should become clearer in the next couple of years. “There definitely will be guidance coming out soon,” she said. “The court cases which are ongoing should provide some legal clarity, but it will likely be another couple of years before there’s actual copyright reform. I know it’s something that the EU is considering at the moment, and the UK is considering as well.”

In the meantime, it’s down to businesses to be vigilant about the types of AI services that they and their employees are using. Dettling said companies should ensure their AI providers “don’t just scrape all the data on the internet, but actually ensure that they use either fully licensed material or that they have gotten the permissions where necessary”.

Do your research. Don’t rely on AI to do it for you!

Barry Collins

Barry has 20 years of experience working on national newspapers, websites and magazines. He was editor of PC Pro and is co-editor and co-owner of BigTechQuestion.com. He has published a number of articles on TechFinitive covering data, innovation and cybersecurity.
