Could you shoot a movie in Midjourney?

Generative AI service Midjourney has announced ambitious plans to let people shoot movies and make video games in simulated 3D worlds.

Midjourney has made its name as one of the best AI services for generating regular 2D images, but the company is already developing its own 3D models. The plans were announced by Midjourney founder David Holz during one of the company’s weekly “Office Hours” meetings, where staff discuss what the company is working on.

“We’re really trying to get to the world simulation,” Holz said, according to a report on Tom’s Guide. “We’re building 3D Midjourney, video MJ and real-time MJ where things move really really fast.”

Holz reportedly said that the company was developing those three projects independently, but that they could be brought together to create a virtual world simulation, similar in concept to the much over-hyped metaverse.

“It will be more of a sandbox,” Holz said. “People will make video games in it, people will shoot movies in it, but the goal is to build the open world sandbox.”


The AI movie business

Midjourney’s announcement is just the latest indication that the AI industry is preparing a full-on assault on the movie industry.

Earlier this year, OpenAI announced Sora, a service that lets creators generate short video clips from text prompts. OpenAI has yet to release the service publicly, but a steady stream of impressive-looking Sora-generated videos has appeared in recent weeks, as a small band of beta testers experiments with the service.

The release of Sora’s demonstration videos sparked a debate among AI experts over exactly what it represented. Some argued Sora was a physics engine, similar in concept to the Unreal Engine used to make games such as Fortnite. Others believed it was more rudimentary than that, pointing to obvious physics glitches, such as animals walking through walls, that wouldn’t occur in a genuine physics engine.

Holz reportedly offered no timeline for when the company would make its 3D simulations available to customers. 3D rendering is hugely resource-intensive, especially with generative AI thrown into the mix. Reports from early Sora testers suggest that rendering one minute of video in the service can take more than an hour.

That would put enormous pressure on these companies’ cloud servers should video generation ever be made available to the general public, meaning it’s likely to be confined to business customers with bigger budgets in the near term.


Barry Collins

Barry has 20 years of experience working on national newspapers, websites and magazines. He was editor of PC Pro and is co-editor and co-owner of BigTechQuestion.com. He has published a number of articles on TechFinitive covering data, innovation and cybersecurity.
