AI Interfaces Are Stuck in the Chatbot Era—Here's What Comes Next
Mercedes Bent, Co-Founder & Partner at Premise Ventures
Walk through any AI product today and you’ll see the same interface: a chat box. Type, send, get a response. Repeat. Every AI startup is building on top of the same pattern, and most investors are tired of it.
Mercedes Bent, who’s betting on AI as a core thesis, is exhausted by it too. “I’m so tired of that,” she says bluntly when asked about everything becoming a chatbot.
But she doesn’t blame founders for being unimaginative. She blames them for not understanding that interfaces evolve slowly, and we’re just at the beginning.
Why Every AI Product Looks the Same (And Why That’s Normal)
When the web browser first emerged, most websites looked like newspapers. Vertical columns, text-heavy, because that’s what people knew. It took years before web designers understood that screens didn’t have to follow print design. Gradients, navigation patterns, responsive layouts—all came later.
Same thing happened with mobile. Hamburger menus, swipe-left navigation, vertical scrolling—these weren’t invented overnight. They emerged as designers experimented and found patterns that worked.
“We’re still developing those for AI,” Mercedes explains. “What are the UI buttons and elements that leverage the latest engine of AI?”
The chat interface is the newspaper of AI. It’s familiar, it works, and it’s the default because we’re early. But it’s not the final form.
What New AI Interfaces Might Look Like
Mercedes saw a demo recently that hints at what’s coming: an agent running in your browser that copies an entire website’s code, then rewrites it in real time as another website. Users can click buttons in the rewritten version instantly—no delay, no download, just live mutation of the code as you navigate.
“I don’t even know what you call that,” she says. “Is that remixing? Is that website recapture? We don’t have a name for it yet because it’s a brand new behavior and experience that couldn’t have existed three or four years ago.”
This is what the next wave looks like: interfaces that aren’t chatbots. They’re agentic. They’re dynamic. They respond to input in ways that don’t require text prompts. They show the user something new and different every time, based on signals beyond just what they type.
Personality Dials and Customizable AI
Mercedes also wants what she calls “user dials for the model”—the ability to customize how an AI behaves.
Right now, all the weight tuning is done by researchers before users ever see it. The resulting AI is typically what Mercedes calls “super pleasing” and “sycophantic.” It never pushes back. It never disagrees. It’s designed to be universally acceptable.
“I want a sassy one who pushes back against me and I want to be able to tune that on my side as the user,” she says.
This opens a new interface opportunity: instead of one-size-fits-all AI, let users control the personality, the sources the AI considers, even the topics it prioritizes. You want an AI that ignores celebrity gossip? Dial it down. You want an AI companion that challenges you? Crank it up.
These aren’t chatbot features. They’re interface paradigms that wouldn’t exist in a chat-only world.
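As a concrete sketch, a personality dial could be as simple as user-facing settings compiled into a system prompt. Nothing below comes from a real product: the dial names, ranges, and prompt wording are all invented for illustration.

```python
# Hypothetical sketch: compile user-facing "personality dials" into a
# system prompt for a chat model. Dial names, the 0-10 scale, and the
# prompt wording are invented for illustration.

def build_system_prompt(dials: dict) -> str:
    """Turn 0-10 dial settings into behavioral instructions."""
    lines = ["You are a personal AI assistant."]

    sass = dials.get("sassiness", 5)
    if sass >= 7:
        lines.append("Push back bluntly when you disagree; do not flatter the user.")
    elif sass <= 3:
        lines.append("Be warm and agreeable; soften any disagreement.")

    if dials.get("celebrity_gossip", 5) <= 3:
        lines.append("Ignore celebrity gossip unless explicitly asked.")

    return "\n".join(lines)

# A "sassy" assistant that tunes out celebrity news:
print(build_system_prompt({"sassiness": 9, "celebrity_gossip": 1}))
```

The point of the sketch is that the dial lives on the user's side: the model stays the same, and only the compiled instructions change per user.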
Voice as a New Land Opportunity
Mercedes points to voice as the biggest behavioral shift happening in 2026. “Voice is one of the biggest behaviors that’s changing in digital platforms today.”
Historically in the US, digital has been text-based. But unstructured voice—which AI can consume and transform into structured data instantly—opens new possibilities. Voice-to-text products, voice dictation, note-taking apps with voice cores—these are seeing explosive usage.
“When you build a new digital product, you have to build on top of new land—new land being a new space or platform that doesn’t have a lot of usage yet,” she explains. “I think that new land space has been voice.”
But voice isn’t just recording your words and transcribing them. It’s capturing the emotional intent in how you speak, visualizing that when it becomes text, preserving context that pure text doesn’t carry.
Mercedes wrote about this in an article called “Voice Buttons”—the idea that voice messages could show emotional metadata when transcribed. A question delivered urgently would look different than the same question asked casually, even as text.
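The “Voice Buttons” idea can be sketched as a transcript segment that carries prosody alongside the words. The field names and the rendering rule below are invented for illustration; in a real system the urgency signal would come from an audio model, not be hand-set.

```python
# Hypothetical sketch of a transcript segment that preserves emotional
# metadata from speech. Field names and the rendering rule are invented;
# a real system would derive urgency from the audio itself.
from dataclasses import dataclass

@dataclass
class VoiceSegment:
    text: str
    urgency: float  # 0.0 (casual) to 1.0 (urgent), from an audio model
    tone: str       # e.g. "casual", "urgent", "playful"

def render(seg: VoiceSegment) -> str:
    """Render the same words differently depending on how they were said."""
    if seg.urgency > 0.7:
        return f"! {seg.text.upper()}"  # urgent delivery shown loudly
    return seg.text                     # casual delivery shown plainly

urgent = VoiceSegment("are you coming?", urgency=0.9, tone="urgent")
casual = VoiceSegment("are you coming?", urgency=0.2, tone="casual")
print(render(urgent))  # same words, heavier visual weight
print(render(casual))
```

Same question, two renderings: the emotional context survives the conversion to text instead of being flattened away.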
What’s Actually Coming: Agentic Experiences and AI Companions
The endpoint, Mercedes believes, is a shift away from browsers entirely. “We all talk about the future where we’re not even going to use browsers. We’re not gonna navigate. It’s all gonna be through our assistant.”
And that assistant likely won’t be a chat interface. It will be a character—an AI companion that guides you through the internet, through your work, through your decisions. The interface is the character. You talk to them (or they talk to you). The technology runs behind.
Some AI companion apps—Character AI, Janitor, Replika, Chai—already have millions of daily active users spending two hours a day. These aren’t chatbot demos. They’re experiences.
“People are doing so little on the interface side today. I can’t wait for the next, I feel like it’s still sometime in the next five years when we start to coalesce around things,” Mercedes says.
Five years. That’s the timeline for interface paradigms to emerge. Chat will still exist. But it will be one option among many.
FAQ
When will the chatbot interface stop being the default for AI products?
Probably in the next 3-5 years, as founders experiment with new patterns and users gravitate toward what’s more natural. Don’t count on chatbots disappearing—just like text-based web pages didn’t disappear. But they’ll become one option instead of the only option. Build something that works better for your specific use case.
What interface pattern should I build for my AI product right now?
If your use case fits chat, use chat—it works. But if chat feels clunky (e.g., you’re trying to help someone edit a document, navigate code, or manage a workflow), invent something new. The best products in the next wave will be ones that solved a specific interface problem that chat couldn’t.
Is voice interface the future of AI?
Voice is a big piece, especially for hands-free and eyes-free interactions. But it’s not the entire future. Expect to see visual interfaces, spatial interfaces, companion characters, and completely new patterns we haven’t named yet. Voice is the biggest currently unexplored opportunity, not the only one.
How do I know if I should build voice, visual, or chat?
Start with your user’s context. Are they hands-free? Eyes-free? In a social setting? Alone? Chat is often good for exploratory, open-ended tasks. Voice is good for hands-free contexts. Visual is good for content-heavy tasks. Spatial/agentic experiences are good for complex workflows. Match the interface to the moment, not the AI.
Should my AI product have a character or personality?
If your use case involves repeated interaction over time (companion, assistant, productivity tool), yes—a personality helps with retention and perceived quality. If it’s transactional (one query, one answer), personality is less important. But even transactional tools benefit from a consistent tone that feels deliberate, not generic.
Can I build a persona dial that lets users customize my AI?
Technically, yes—it’s an interface problem. But it’s complex because you need the base model to support behavioral flexibility. Newer frontier models support more nuance in system prompts; older ones may break down if you push the personality dials too far. Build it into your architecture from day one if you want it.
What’s the “new land” for AI interfaces in 2026?
Voice interaction is the biggest unexplored category. Agentic web experiences (agents that modify or navigate websites in real-time) are early. Spatial/AR interfaces are coming but not yet mainstream. AI companions are growing fast but still niche. Pick one and build deeply in that direction.
Should I wait for the next interface paradigm to emerge, or build a chatbot MVP?
Build your MVP in whatever interface solves your problem fastest. Don’t wait. But as you scale, start experimenting with patterns beyond chat. If you nail retention with an early chatbot, investing in interface innovation becomes a growth lever.
How do I test if a new interface pattern works?
Build it for your smallest, most engaged cohort. Watch how they interact with it. Do they prefer it to chat? Do they spend more time? Do they come back more often? If yes, build it more. If no, go back to what works. Interface innovation is learned through iteration, not planning.
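That iteration loop can be made concrete by comparing the same engagement metrics across the chat cohort and the new-interface cohort. The composite metric, the sample numbers, and the 25% improvement bar below are all illustrative, not a standard.

```python
# Illustrative sketch: compare a new interface pattern against chat on a
# toy engagement metric for a small cohort. All numbers are made up.

def engagement_score(sessions_per_week: float, minutes_per_session: float) -> float:
    """Toy composite: total engaged minutes per week."""
    return sessions_per_week * minutes_per_session

chat = engagement_score(sessions_per_week=4, minutes_per_session=6)
new_pattern = engagement_score(sessions_per_week=5, minutes_per_session=9)

# Keep investing only if the new pattern clearly beats chat.
if new_pattern > chat * 1.25:  # arbitrary 25% improvement bar
    print("double down on the new interface")
else:
    print("go back to what works")
```

The specific metric matters less than the discipline: measure the new pattern against chat on the behavior you care about, and let the cohort, not the roadmap, decide.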
Will AI ever have a dominant interface standard like HTML for the web?
Unlikely. The web converged on HTML + CSS + JavaScript, but that was partly accident and partly network effects. AI is more flexible—the same capability can work in many interfaces. Expect a portfolio of patterns (chat, voice, visual, spatial, character-based) to coexist. The winners will be specific to their use case.
As a product manager, how should I think about interface innovation?
Don’t optimize for how many features you can ship. Optimize for how naturally the interface disappears when a user is in flow. The next great AI product won’t be “look at all the buttons”—it will be “I told it what I wanted and it happened.”
Full episode coming soon
This conversation with Mercedes Bent is on its way. Check out other episodes in the meantime.
Related Insights
Canvas vs. Chat — Why Spatial Interfaces Win for AI Collaboration
Steve Ruiz, Founder & CEO at tldraw
AI Literacy Isn't Optional — Here's How to Give Your Kids What Schools Can't
Steve Ruiz, Founder & CEO at tldraw
How to Become Valuable in the AI Era — The Ikigai Framework
John Kim, CEO & Co-Founder at Paraform