We don’t know what AI will do next, says Adobe CTO
Adobe’s Chief Technology Officer has said he can’t confidently predict what AI will do from one version to the next.
Adobe is today announcing a series of new AI features in Photoshop and other Creative Cloud apps. Powered by the new Firefly Image 3 model, the features include the ability to generate images from scratch within Photoshop, replace backgrounds with AI-generated images, and use reference photos to prompt the AI.
Sam Altman, CEO of OpenAI – the company behind ChatGPT – has admitted in the past that the company often doesn’t know what to expect from future versions of its AI technology, because the AI is effectively training itself.
Ely Greenfield, CTO of Adobe’s Digital Media business, told me he’s in a similar situation. When asked whether he had any idea what he was going to get from one generation of Adobe AI to the next, he replied: “No. I’m glad Sam’s blazing that trail, it makes it very easy for me to agree.”
Firefly AI improvements
That’s not to say Adobe is entirely powerless when it comes to delivering specific improvements to the Firefly AI. When Firefly was first released a little over a year ago, for instance, it struggled with details such as people’s faces, hands and the rendering of text.
Greenfield said Adobe was able to improve the AI’s rendering of those details by increasing the amount of training data that contained them. The company has also focused on improving the AI’s ability to understand prompts.
Greenfield described how, when Firefly first emerged, it was a case of “prompt roulette”. “You type in a prompt, you get an image out, ‘cool’, but it’s not the image I want,” he said.
The company has since focused on handing more control to users, refining the AI’s ability to understand their intent and giving them more options to influence the output, such as allowing them to provide reference images for the AI to mimic. “If you want to put in a little bit of work to try and be more specific and more controlling about what you want, cool, you get a little more control,” he said.
He added the company was aiming to “give people the knobs and levers that are appropriate for different types of customers”, so that, for example, a “creative director can say ‘I want this, I want this and I want this’ and get the results out with a big return on investment”.
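To make the reference-image “knob” concrete, here is a minimal sketch of what such a request could look like from a developer’s side. The endpoint URL, field names and the styleReference parameter below are hypothetical stand-ins for illustration only, not Adobe’s documented Firefly API.

import requests

# Hypothetical endpoint and field names for illustration only --
# Adobe's actual Firefly API may differ.
FIREFLY_URL = "https://firefly-api.example.com/v3/images/generate"

def generate_with_reference(prompt: str, reference_image_id: str, api_key: str) -> bytes:
    """Request an image that follows `prompt` while mimicking the
    look of a previously uploaded reference image."""
    response = requests.post(
        FIREFLY_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "prompt": prompt,                      # what the user wants to see
            "styleReference": reference_image_id,  # the extra control Greenfield describes
            "numVariations": 1,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.content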
The expense of AI
Talking of return on investment, Adobe is doubtless racking up considerable cloud computing expenses with the addition of generative AI features across its various apps. The company revealed that generative AI has become one of the most used features in Photoshop, with more than 7 billion images generated in the past year alone.
Greenfield said the advent of generative AI convinced him to return to the company. “About two years ago, I was looking at some very early versions of the image generation technology, not at Adobe but just out in the research community, and I decided to come back to Adobe because I was like, ‘that’s going to change the way every image is made’,” he said.
“What was surprising was the ramp, how quickly the technology’s advancing and the adoption’s happened. But so far it’s still within our [financial] models and the budget that we predicted.”