Founder Insight

Why AI Video Generation Can't Replace Game Engines for Live Production

Joris Corthout, CEO at Prismax


If you’ve watched any AI video demo in the last year, you’ve probably thought: this changes everything for live visuals. Joris Corthout has thought about it too — and he’s not buying it yet.

Corthout is the CEO of Prismax, a Belgian production studio that has spent 20 years creating visuals for the world’s most demanding live events. They’re the exclusive visual operator for Tomorrowland — the 400,000-person electronic music festival — and last year they ran a real-time show inside the Las Vegas Sphere at 16K by 16K resolution, all rendered live in Unreal Engine. When it comes to visual production at scale, Prismax operates at the ceiling.

So when Corthout evaluates AI video tools, he’s not comparing them to a YouTube thumbnail. He’s comparing them to millions of synchronized pixels rendering in real time while a DJ plays to tens of thousands of people.

The Resolution Gap Nobody Talks About

Most AI video models output at HD resolution, sometimes approaching 4K. Prismax runs at 16K by 16K — a resolution so extreme that a single room installation can be thousands of pixels wide on each wall, with LED panels spaced 0.7 millimeters apart.

“What’s coming out now is HD, not even 4K sometimes,” Corthout explains. “And you could upscale it. There’s lots of AI upscaling tools. But they’re not doing the resolutions that we need.”

The upscaling problem compounds when the output is displayed on massive surfaces. At Tomorrowland, stages stretch 150 meters wide and 40 meters high. At the Sphere, the interior surface wraps around the entire audience. At those scales, every artifact that’s invisible on a phone screen becomes obvious.
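The arithmetic makes the gap concrete. Here is a rough calculation, reading "16K by 16K" as 16,384 pixels per side and assuming an illustrative 10 m by 5 m wall (not Prismax's actual specs):

```python
# Back-of-the-envelope pixel math. Values marked "assumed" are illustrative,
# not Prismax's actual specs.

PIXEL_PITCH_MM = 0.7                      # LED spacing cited in the article

# Assumed: a 10 m x 5 m wall in a room installation at that pitch.
wall_w_px = int(10_000 / PIXEL_PITCH_MM)  # ~14,285 pixels wide
wall_h_px = int(5_000 / PIXEL_PITCH_MM)   # ~7,142 pixels high

canvas_px = 16_384 * 16_384               # "16K by 16K" canvas, ~268M pixels
hd_px = 1_920 * 1_080                     # typical AI video output today

print(f"one wall: {wall_w_px:,} x {wall_h_px:,} px")
print(f"16K canvas holds {canvas_px / hd_px:.0f}x the pixels of an HD frame")
```

An HD frame carries roughly a hundredth of the pixels that canvas needs, which is why stretching it that far turns invisible artifacts into visible ones.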

Control Is the Real Dealbreaker

Resolution is a technical gap that might close with better models and more compute. But Corthout identifies a deeper problem: control.

In a game engine like Unreal, his team builds a 3D world and manipulates every element — camera position, lighting, textures, particle effects — in real time. A director says “make this forest lighter” and the change happens instantly. During a live show, data from the pyrotechnics crew triggers synchronized virtual flames on screen. The real and virtual worlds interact frame by frame.
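The pattern Corthout describes is event-driven: a physical cue arrives and the engine reacts within the same frame. A minimal sketch of that idea, assuming a plain UDP cue feed, made-up cue names, and a hypothetical trigger_effect() hook. Real pipelines typically use show-control protocols like OSC, DMX/Art-Net, or timecode, with the engine side living in Unreal:

```python
# Sketch: listen for show-control cues and map them to render-engine actions.
# Cue names, port, and trigger_effect() are hypothetical.

import socket

CUE_MAP = {
    b"PYRO_FLAME_01": "spawn_virtual_flames",   # sync screen flames to pyro
    b"FOREST_LIGHTER": "raise_scene_exposure",  # the director's live note
}

def trigger_effect(action: str) -> None:
    # Stand-in for a call into the real-time engine
    # (e.g. via Unreal's remote-control facilities).
    print(f"engine <- {action}")

def listen(port: int = 9000) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        cue, _addr = sock.recvfrom(64)
        action = CUE_MAP.get(cue.strip())
        if action:
            trigger_effect(action)  # must land within one frame's budget

if __name__ == "__main__":
    listen()
```

The key property is latency: the whole round trip from button press to on-screen change has to fit inside a single frame, something per-frame generative models cannot currently promise.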

“I have my 3D artists create a whole world and animate the trees or birds or certain things. We have complete control over that,” Corthout says. “And with AI, I am prompting. You don’t have complete control yet.”

This isn’t a minor inconvenience. In live production, the visuals respond to what’s happening on stage in real time — the DJ’s energy, the crowd’s reaction, the pyrotechnics operator pressing a button. A system that takes seconds to generate each frame can’t participate in that conversation.

The Energy Math That Makes It Worse

Even if AI models achieved the resolution and control Prismax needs, there’s a sustainability problem. Generating a single Midjourney image takes a few seconds and significant GPU power. A live show needs 25-30 frames per second, sustained for hours.

“If everybody in the world starts making videos for fun, you’re gonna need a few nuclear power plants built next to your company to have that amount of energy,” Corthout says. For a studio running multi-hour shows across massive surfaces, the compute cost of AI-generated frames versus a game engine rendering the same scene in real time isn’t even close.
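The rough numbers behind that quote, using assumed per-frame costs (a few GPU-seconds per generated frame is an illustrative figure; real costs vary widely by model and hardware):

```python
# Rough compute math. Per-frame costs are assumptions for illustration.

FPS = 30                       # live show target (25-30 fps per the article)
SHOW_HOURS = 4                 # assumed length of a multi-hour set

frames = FPS * 3600 * SHOW_HOURS          # 432,000 frames per show
ai_sec_per_frame = 5.0                    # assumed: a few GPU-seconds each

print(f"{frames:,} frames per show")
print(f"AI generation: ~{frames * ai_sec_per_frame / 3600:,.0f} GPU-hours")
print(f"game engine:   {SHOW_HOURS} GPU-hours (it renders as the show runs)")
```

Under those assumptions a single four-hour show costs hundreds of GPU-hours to generate but only four to render live, and that is before counting the 16K canvas.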

The pricing model matters too. Right now, AI platforms subsidize generation costs. “If the big companies keep on giving it away so cheap, it could be interesting for me if the quality gets better,” Corthout acknowledges. “But there has to be a point when these big companies start asking the actual money that it costs in energy to generate what you just made.”

Where AI Actually Fits Today

Prismax does use AI — just not in production. Midjourney handles pre-production: mood boards, storyboards, visual concepts for client pitches. What used to take a dedicated artist two weeks of hand-sketching now takes one or two days of prompting, with results Corthout says look ten times better.

For occasional 16:9 social media content — promotional clips, Instagram videos — the team experiments with combining multiple AI models. But for core production work, Corthout estimates AI handles about 0.05% of the pipeline.

The gap between “impressive demo” and “production-ready tool” is wider than most people realize. And the person running 16K real-time shows at the Sphere is probably the best judge of how wide it actually is.

FAQ

What resolution do live event visuals actually need?

Major live productions run at resolutions far beyond consumer standards. Prismax operates at 16K by 16K for venues like the Las Vegas Sphere, with LED panels spaced 0.7mm apart. Current AI video tools max out around HD to 4K, which visibly falls short when displayed across surfaces 150 meters wide.

Why can’t AI-generated visuals be used in live shows?

Live production requires real-time frame generation (25-30 fps), precise control over every visual element, and synchronization with physical effects like pyrotechnics and lighting rigs. AI video generation is too slow per frame, lacks fine-grained control, and can’t respond to live data inputs the way game engines can.

How does Prismax create real-time visuals for Tomorrowland?

Prismax builds 3D worlds in Unreal Engine and manipulates them live during performances. Operators push buttons to trigger visual effects in sync with the music and physical stage elements. Data from pyrotechnics and lighting systems feeds directly into the rendering pipeline, creating synchronized real-virtual experiences.

What is Unreal Engine used for in live events?

Game engines like Unreal Engine render complex 3D environments in real time — no pre-rendering needed. For live events, this means visual operators can adjust lighting, camera angles, textures, and effects instantly during a performance, responding to the DJ, crowd, and stage crew in the moment.

How does AI fit into Prismax’s current workflow?

AI handles roughly 0.05% of Prismax’s production pipeline, limited almost entirely to pre-production. Midjourney generates mood boards, storyboards, and concept art for client pitches — replacing weeks of hand-sketching with days of prompting. Core production still runs entirely on game engine technology.

What would it take for AI video to be production-ready for live events?

Two things: total control over every visual element (camera, lighting, textures, effects) and resolution high enough for massive LED surfaces. Current AI models lack both. Even if the quality gap closes, the energy cost of generating 25-30 AI frames per second for hours may make it economically unviable compared to game engines.

Can AI upscaling solve the resolution problem for live production?

Not yet. AI upscaling tools exist but introduce artifacts that become visible on large-format displays. At 0.7mm pixel pitch across room-sized LED walls, upscaled content looks noticeably different from natively rendered 3D — especially in fine details like foliage, reflections, and particle effects.

Why do production studios still use game engines instead of AI for visuals?

Game engines offer deterministic rendering — every leaf, every reflection stays exactly where it should be, frame after frame. AI-generated video lacks this consistency. As Corthout puts it, millions of leaves in a game engine scene stay fixed when the camera moves, but in AI generation, those leaves shift slightly between frames.

How much does energy consumption limit AI video generation at scale?

Significantly. Generating a single AI image takes seconds of GPU time. Live production needs 25-30 frames per second sustained over multi-hour shows. The compute and energy requirements scale to levels that likely require dedicated power infrastructure — a cost that game engine rendering avoids entirely.

Watch the full conversation

Hear Joris Corthout share the full story on Heroes Behind AI.
