Founder Insight

Why Perfectly Generated AI Shows Feel Fake — And What Live Audiences Actually Want

Joris Corthout, CEO at Prismax


There’s an assumption baked into most AI visual tools: more perfect is better. Higher fidelity, more consistent, more polished. Joris Corthout has spent 20 years testing that assumption against real audiences — and he thinks it’s wrong.

Corthout is the CEO of Prismax, the Belgian production studio behind the visuals for Tomorrowland and the Las Vegas Sphere. His team has run live shows for crowds of 400,000 people. And one of the core lessons from two decades of that work is that perfection is the enemy of a good show.

The Sinus Principle

Corthout describes the ideal show as a sinus wave — energy that rises, peaks, falls, and rises again. The audience needs tension and release. A DJ reads the crowd and adjusts. The visual operators do the same, pushing energy when the moment calls for it, pulling back when the crowd needs to breathe.

“A show should be like a sinus. People shouldn’t be dancing the whole time,” Corthout says. “If you would have a machine create a show, it would probably just give energy all the time. And humans can put a nice break and then it goes up and then goes down.”

This isn’t an aesthetic preference. It’s a functional requirement for holding attention over multi-hour performances. A show that runs at maximum intensity for four hours exhausts the audience. A show that modulates — building anticipation, delivering peaks, creating quiet moments — keeps them engaged.

The parallel to AI-generated content is direct. An AI optimizing for visual impressiveness would likely max out every frame. A human operator knows when to hold back.
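To make the contrast concrete, here is a minimal sketch, not taken from Prismax's tooling, that compares a machine-style "always maximum" energy profile with a modulated, sinus-like arc over a four-hour set. The roughly 45-minute tension-and-release cycle and the specific coefficients are illustrative assumptions, not figures from the article.

```python
import numpy as np

# Illustrative sketch (assumptions, not Prismax data): two energy profiles
# for a four-hour performance, sampled once per minute.
minutes = np.arange(0, 240)

# Machine-style profile: maximum intensity in every frame, the whole time.
machine_energy = np.ones_like(minutes, dtype=float)

# Human-style profile: a sinus-like swell (assumed ~45-minute tension/release
# cycle) plus a slow build toward the end of the set, clipped to [0, 1].
period = 45.0
human_energy = 0.55 + 0.35 * np.sin(2 * np.pi * minutes / period) + 0.10 * (minutes / 240)
human_energy = np.clip(human_energy, 0.0, 1.0)

# Same peak intensity, but the human arc spends far less time at the ceiling,
# so the peaks stand out instead of blurring together.
print(f"machine mean energy: {machine_energy.mean():.2f}")
print(f"human mean energy:   {human_energy.mean():.2f}, peak: {human_energy.max():.2f}")
```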

Why Imperfection Reads as Authentic

At Tomorrowland, almost nothing during the DJ performances is automated. Human operators push buttons, trigger effects, adjust lighting — all by hand, in real time. The result is slightly imperfect. And that imperfection is the point.

“If a show is too perfect, like you have these time-coded shows where everything is pre-programmed, it looks very nice, but it looks very fake because everything is too perfect,” Corthout explains. “If you have something that is being made by people in real time pushing buttons, then it looks and it feels much more human and not fabricated.”

The audience can’t articulate why a time-coded show feels different from a human-operated one. But they feel it. The small timing variations, the operator reacting half a beat late to a drop, the slightly off-sync transition that a human corrects on the next beat — these micro-imperfections signal authenticity.

Corthout sees this as a fundamental limit for AI-generated live visuals. “The audience are humans, and they’re looking at a show. And if it’s too mechanical, too much computer generated, it won’t work as well as when humans created it — humans that have a feeling for rhythm, have a feeling for color contrast, balance between video and lights.”

The Violin Test

Corthout draws the line sharply when it comes to human performance versus AI generation. He’s seen robotic orchestras and AI-trained instruments. His response is blunt.

“I prefer having a real person play a violin over a robot playing a violin, because the robot was trained in AI to play the violin and that person trained probably 25 years before the person could play the violin perfectly. I think some things shouldn’t be touched.”

This conviction shapes Prismax’s next product: an immersive classical music experience that combines real musicians — a 120-person orchestra — with 3D worlds rendered in real time through game engine technology. The musicians are real. The surrounding environment is machine-rendered. None of it is AI-generated.

“It’s still humans at the basis and technology enhancing what the humans are doing,” Corthout says. “Not everything is just generated by machines.”

What This Means for AI Visual Products

The gap Corthout identifies isn’t about technical capability. It’s about a design philosophy that most AI tools don’t account for: audiences respond to human presence in the creation process, even when they can’t see the creators.

A show where a human operator mistimes a transition by 200 milliseconds and corrects on the next beat feels alive. A show where every frame is computationally optimized for maximum impact feels sterile. The distinction matters most at scale — when 400,000 people are in a field together and the collective emotional response amplifies every moment.

For builders working on AI tools for live production, the insight is counterintuitive: the goal isn’t to remove the human operator. It’s to give the human operator better tools that preserve their ability to react, modulate, and be imperfect.

FAQ

Why do live audiences prefer human-operated shows over automated ones?

Human operators introduce micro-imperfections — slightly off-beat transitions, reactive adjustments, intuitive pacing — that signal authenticity. Time-coded shows with pre-programmed perfection look polished but feel mechanical. Audiences sense the difference even without being able to articulate it, and engagement drops when the show feels fabricated.

What is the sinus principle in live event production?

Shows should follow a sine wave pattern of energy — rising, peaking, falling, and rising again. Like a DJ reading the crowd, visual operators modulate intensity to avoid exhausting the audience. A machine optimizing for maximum visual impact would likely keep energy constant, which Corthout says produces worse shows.

How does Prismax operate visuals during live Tomorrowland shows?

Almost nothing during DJ performances is automated. Human operators push buttons to trigger visual effects, adjust lighting, and sync visuals with music in real time. Data from pyrotechnics and lighting rigs feeds into the system, but the creative decisions — when to build energy, when to pull back — are made by humans in the moment.

Can AI-generated visuals replicate the feel of human-operated live shows?

Not currently. AI systems lack the ability to read a crowd’s energy and modulate responses accordingly. They optimize for visual quality per frame rather than emotional arc across a multi-hour performance. The sinus pattern — knowing when to hold back intensity so peaks feel earned — requires judgment that current AI models don’t have.

Why does Prismax use real musicians instead of AI for their classical music project?

Prismax recorded a 120-person orchestra for their upcoming immersive classical experience. Technology renders the surrounding 3D worlds, but the music comes from real performers. Corthout believes certain art forms — violin, orchestral performance — carry meaning precisely because of the decades of human training behind them. AI replication strips that away.

What role should AI play in live event production tools?

The opportunity isn’t replacing human operators — it’s augmenting them. Tools that give operators better real-time control, faster asset creation, or smarter synchronization preserve the human element that audiences respond to. Removing the operator removes the imperfections that make shows feel alive.

How do time-coded shows differ from real-time operated shows?

Time-coded shows have every visual effect pre-programmed to fire at exact moments. Real-time operated shows have humans triggering effects live, reacting to what’s happening on stage. Time-coded shows are technically flawless but feel predictable. Real-time shows have small timing variations that create a sense of liveness and spontaneity.

What can AI builders learn from 20 years of live event production?

Perfection isn’t the goal — engagement is. Audiences are humans responding to human signals in content creation. Visual tools that optimize every frame for maximum impact miss the emotional architecture that makes live experiences work: tension, release, anticipation, and the occasional beautiful mistake.

Does audience size affect how much imperfection matters in live shows?

At scale, the effect amplifies. When 400,000 people react collectively to a visual moment, the difference between a mechanically perfect transition and a human-timed one becomes dramatic. Collective emotional responses in large crowds magnify authenticity signals that might be subtle in smaller settings.

Watch the full conversation

Hear Joris Corthout share the full story on Heroes Behind AI.

Watch on YouTube
