AI hype: Why Microsoft Copilot is going to be the new Clippy

Misheard lyrics can be fun. Peter Kay has a whole routine on them, which you can view at the bottom of this article as your reward! But I came across one recently that entirely changed what I thought they were saying. 

Out on a training walk by myself, I asked a music service to play “Snap – Don’t Believe the Hype”. But it kept suggesting Public Enemy instead. I got annoyed, made a mental note that something was up, and made do with Grandaddy’s “He’s Simple, He’s Dumb, He’s the Pilot”, which is a great tune.

When I got home, I searched for “Snap! – Don’t believe the hype”. Now, for mumble years I had thought this was the title. I always thought it was good advice, too. So imagine my surprise when I found that the title is actually the exact opposite of what I believed. You see, the title is “Believe the Hype”, and the full hook is, “This one’s real so believe the hype (Don’t believe the hype is a sequel)”.

Microsoft Copilot: the new Clippy

Why am I mentioning this? Because as I write this, a new AI-fuelled hype bandwagon is in full flight. ChatGPT kicked things off, shortly followed by Microsoft adding even more advanced AI to Bing. And Microsoft is now saying AI will be added to a whole host of products, including the Office suite, via Copilot.

Google has created its own AI, Bard, which during a demo spouted things that a simple Google search would show are false. (The Microsoft demo of Bing did the same, but it seems no one noticed at the time.)

So here’s my cynical view: it is my firm opinion that AI in Office is going to be the new Clippy, which is now universally regarded with derision and laughter.

Really, don’t believe the AI hype

The tech industry has always lived in a flow of tech hype bubbles. Until recently, this was an insular world that “normal” people didn’t see, but with AI the bubbles have seeped into the consciousness of the public. 

The thing with hype is that, like Snap! said: “This one’s real so believe the hype.” So whatever you do, don’t mention Microsoft’s Tay, an AI chatbot that dates back to 2016 and lived on Twitter. It started life with “Hello World” and finished by condoning the Holocaust and praising Hitler. It was removed entirely after two days.

Also try not to remember cryptocurrencies, NFTs and the like, which were the previously hyped products.

The thing to remember is that we already use AI in day-to-day life. Your phone uses it for face identification, and it is thought that most job application CVs are now “read” by an AI, which then decides whether a human is going to read the CV at all.

The problem with these AI systems is that they are the equivalent of a black box. The system is given a load of input, told the desired outcome for that input, and then makes up its own rules to produce the required output. So you can see what goes in and you can see what comes out, but you are unable to see what happens in the black box bit. Think chickens going into the factory and chicken nuggets coming out: what happens in the factory? Well, do you want to know? Perhaps not, but it matters for AI as we start to rely on these systems.

AI hype in healthcare

For example, in healthcare it is hoped that AI will be better than humans at noticing issues with patients. An experimental model was built that was fed images of skin cancer and normal skin, and when tested the results showed that the AI was better than human doctors at picking up skin cancer. A great success, in terms of results. But in this case, the team then decided to take the “black box” apart and see how it was doing what it was doing.

This led to an unpleasant discovery. The system had in fact picked up that medical pictures of skin cancer have a ruler in them, so that the size of the cancer can be seen in the image. So what it was really telling you was whether an image included a ruler, rather than whether it showed skin cancer. If this had been used on real patients, the results would have been catastrophic.
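You can see the trap in miniature with a toy sketch. The data below is entirely made up for illustration (it is not from the real study): every “cancer” photo in the archive happens to contain a ruler, while the genuine medical cue is only right some of the time. Audit each feature on its own and the shortcut wins.

```python
# Toy, made-up archive. Each record is:
# (ruler_in_photo, lesion_looks_irregular, is_cancer)
archive = [
    (1, 1, 1), (1, 0, 1), (1, 1, 1), (1, 1, 1), (1, 0, 1),  # cancer photos: always have a ruler
    (0, 0, 0), (0, 1, 0), (0, 0, 0), (0, 0, 0), (0, 1, 0),  # healthy photos: never have a ruler
]

def accuracy(feature_index):
    """How often does this single feature, used alone, match the label?"""
    hits = sum(1 for record in archive if record[feature_index] == record[2])
    return hits / len(archive)

print("ruler as predictor:", accuracy(0))   # 1.0 - a perfect 'shortcut'
print("real lesion cue:   ", accuracy(1))   # 0.6 - genuine, but noisy
```

A training process that simply minimises error on this archive will lean on the ruler every time, and the mistake only shows up when the model meets a clinic photo with no ruler in shot.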

You don’t need to look far for other examples. We have put self-driving cars on the roads that have killed people, as the model wasn’t aware of things such as jaywalkers or cyclists. There are lots of unintended consequences to letting a random black box loose on data, and it’s something we wouldn’t do if the system were physical rather than software.

But the key point is this: we can’t check these systems for such consequences before they occur, as we can’t see what is happening internally.

Back to chat

The newer ChatGPT is fantastic at sounding like a human. No wonder people believe that AI is alive and ready to take our jobs. But ChatGPT and others, including Midjourney, don’t create anything: they are mechanical Turks and probability engines.

If you want a picture of a Storm Trooper vacuuming a beach, and who doesn’t, Midjourney 5 will “create” it, but it does so by taking already created images and manipulating them, just as ChatGPT is really a great search engine for text. But it will also write very convincingly about complete claptrap, or just make things up, so you can’t rely on it.

These systems have been trained on data from the internet, a chunk of which will be copyrighted material. Over the next few years, we can expect numerous lawsuits for copyright infringement as a result. And because it’s the internet, they’re also likely to have been trained on some bad content. As we in the IT world say, GIGO: Garbage In, Garbage Out.

This all comes back to one of my previous columns: my habit of saying no to user requests. I had a user ask if they could use ChatGPT to do some of their work, as it looked like it was going to be helpful, but we (IT, legal and so on) had to say no. ChatGPT and other LLMs (large language models) self-refer, which means the things you ask and the answers given can be absorbed back into the model. So, if you give it something private, the model could hand out that information as an answer to someone else, which is an obvious privacy concern.

What happens next

My view? AI is just the latest in a line of shiny, overhyped technologies created by their providers to increase shareholder value. They are as likely to take over the world and jobs as I am to turn up on stage with my hilarious Snap anecdotes. 

With one caveat: if we turn them from black boxes into transparent systems, then the game changes. But this will probably require a wave of new legislation, as these firms won’t want to make them transparent. In the meantime, as Grandaddy almost said, “AI’s so simple, AI’s so dumb, AI should not be the pilot.”

Read next for an opposing view: ChatGPT is the real deal – and it’s going to change the world

Michael Dear

Michael has worked for more than 20 years running IT departments, mainly for small to medium insurance firms. His primary interests are security and compliance.