What is Adobe Firefly and how do you use it?

Just as ChatGPT creates words from prompts, Adobe Firefly does the same for images and videos. In other words, it’s a generative AI going toe to toe with Midjourney.

It’s designed to help content creators improve and expand upon their creative process. While Firefly is still in beta, Adobe intends to integrate the service into the likes of Photoshop and Premiere Pro when it launches. Firefly is already part of the Adobe Express beta, which you can sign up for today.

Subscribers to Adobe’s Creative Cloud package can also download a beta that integrates the AI tool. Read our separate guide on how to get the best AI art from Adobe Firefly in Photoshop.

This level of integration immediately sets Adobe Firefly apart from Midjourney, which is a standalone image-generation product. But where Adobe also differs is its emphasis on business use. Only this week, it announced Firefly for Enterprise, which means “anyone in your organisation can create amazing on-brand assets in seconds using simple text prompts”.

That’s key because currently, you can’t put these AI-generated images to commercial use.

How does Firefly work in Adobe Express?

Firefly primarily uses a “text to image” approach: the user provides a description of the image they would like to create, and the software produces an image based on that text. In the example at the top of this article, we asked Firefly (via Adobe Express) to create a brightly coloured firefly staring at the camera.

Here’s what Firefly created when we asked it to “generate a picture of a student falling asleep in a lecture hall”.

Then we changed the content type to “art” (we could also have opted for “graphic” or “photo”).

We can then apply a style from a category such as “techniques”, which includes options such as painting, line drawing, acrylic and doodling. Here’s what happened when we applied the “pencil drawing” option.

If you don’t want that, you can undo the changes and apply a material instead. Options include origami, yarn, metal and fabric – but we naturally chose fur.

You can also apply colour themes, remove backgrounds and overlay your own images on the results. The power here is truly incredible. However, as the distorted face of the person on the far right of the above image shows, it has its limitations too.

Adobe Express also offers you the option to generate text effects. You can choose from various presets, or describe your own. We opted for raspberry jelly. Naturally, you can play around with sizes and fonts, but also shadows and decorative effects.

How does Adobe Firefly work in Photoshop?

Text prompts are also at the heart of Photoshop’s experimental features, such as Generative Fill. This feature allows users to add objects to pictures, or remove them, using text descriptions; for example, adding settings and props.

Adobe Firefly beta in Photoshop
Generative Fill in action, courtesy of Adobe employee Russell Preston Brown

Here’s how it works. Adobe is adding what it calls a “Generative Layer” for Photoshop users (but only in the beta for now). This allows editors to adapt their images using AI on a separate layer from the rest of their edits, so they can trial the vast capabilities of Firefly without erasing existing work.

Adobe is also adding a “Contextual Task Bar” with a Generative Fill button. This allows creators to generate or expand their images without text prompts. Again, the idea is to make it easier for users to develop and generate ideas faster.

Firefly generative fill
Here, Generative Fill extends the background in a near-miraculous way (image via Adobe Stock)

One option is extended cropping. This allows users to crop their images and let Adobe’s Generative Fill feature expand the background, creating a larger image. It does this by analysing the areas surrounding the part of the image a creator is editing, taking into account its lighting, reflections and shadows. The result, in theory, is a realistic depiction of the generated background.

Generative Fill doesn’t require a pre-existing image to start working. Users can create images in Photoshop using a text prompt; an image will generate and editing can begin from there.

Adobe also hopes the AI software will help new users get to grips with Photoshop faster and broaden Photoshop’s functions for experienced users.

So what does this mean if you don’t have Photoshop? You can download a trial of Photoshop or sign up to use Firefly, and the Generative Fill option will be readily available for you to try.

What will Adobe Firefly do in the future?

Adobe is also exploring other modes, including image generation from 3D models and text-to-brush and text-to-vector features.

Another interesting feature is generating an image from a sketch. A great option for artists, this even works with rough sketches. Once the generative AI has brought the sketch to life, it can be modified to suit the user’s requests. This is an obvious example of how Adobe Firefly has the potential to help artists and content creators rather than hinder them.

In addition, Adobe has promised that it will “include models that creators can train on their own personal style and brand language”. So, a company (or a creative freelancer) could have their own private AI helper, trained specifically for them.

Combine this with Adobe’s intention to integrate Firefly into existing workflows (as we have already seen with the Photoshop beta) and this could be huge news for creative companies.

The ethics and law surrounding AI in creative industries have gone hand in hand with the technology’s rise in prominence. Getty Images suing AI developer Stability AI for copyright infringement early this year is a key example, with Getty accusing Stability of using its image library to train the AI model.

So, how is Adobe training its Firefly model? To remain ethical, and no doubt avoid potential lawsuits, Adobe is using its own stock library to train its AI software.

This library includes over 200 million images and illustrations, so there is plenty of raw material. And Adobe is giving content producers who contribute to its stock library the choice to allow their work to be used in the training of its AI models. It’s opt-out rather than opt-in, though: Adobe has added a “Do Not Train” option that creators can select before they submit a picture.

AI art could prove to be a double-edged sword for human art producers. Tools such as Adobe Firefly can help them and maximise their productivity, but they also undermine the years of effort an artist has spent honing their craft, and potentially take away jobs and other financial opportunities for content creators.

How can you use Firefly now?

It’s important to emphasise that Adobe Firefly is currently in beta. You can instantly take advantage by using the free Adobe Express Beta or downloading the Photoshop beta.

To be clear, it doesn’t look like Adobe Firefly will ever be its own app, and we don’t yet have a release date for when it will come out of beta. However, at some point, we can expect to see it working across the whole Adobe Creative Cloud suite.

Although Firefly brings its own challenges for content creators, it seems that Adobe is doing its best to cater for both creators and consumers – and blurring the lines between the two. By helping art to evolve whilst steering clear of copyright problems, it should avoid the legal troubles of its rivals.

Firefly may only be in its beta phase, but it’s easy to see that Creative Cloud is about to get a whole lot more innovative. 

UPDATED 8 AUGUST 2023 With link to guide to getting the most out of Adobe’s Firefly tool in Photoshop.

Owen Lucy

Owen is a journalism student at Bournemouth University, with aspirations to become a sports commentator. He is currently on work experience with TechFinitive.com, looking for ways to squeeze either sport or gaming onto the site.