ElevenLabs CEO: Voice is the next interface for AI | TechCrunch

"ElevenLabs co-founder and CEO Mati Staniszewski says voice is becoming the next major interface for AI - the way people will increasingly interact with machines as models move beyond text and screens. Speaking at Web Summit in Doha, Staniszewski told TechCrunch voice models like those developed by ElevenLabs have recently moved beyond simply mimicking human speech - including emotion and intonation - to working in tandem with the reasoning capabilities of large language models."
"That vision fueled ElevenLabs's $500 million raise this week at an $11 billion valuation, and it is increasingly shared across the AI industry. OpenAI and Google have both made voice a central focus of their next-generation models, while Apple appears to be quietly building voice-adjacent, always-on technologies through acquisitions like Q.ai. As AI spreads into wearables, cars, and other new hardware, control is becoming less about tapping screens and more about speaking, making voice a key battleground for the next phase of AI development."
"Iconiq Capital general partner Seth Pierrepont echoed that view onstage at Web Summit, arguing that while screens will continue to matter for gaming and entertainment, traditional input methods like keyboards are starting to feel "outdated." And as AI systems become more agentic, Pierrepont said, the interaction itself will also change, with models gaining guardrails, integrations, and context needed to respond with less explicit prompting from users."
Voice is emerging as the next major interface for AI as models move beyond text and screens to enable more natural human-machine interaction. Voice models now incorporate emotion and intonation and are being integrated with large language model reasoning to produce context-aware, expressive responses. The shift toward voice control supports immersive, hands-free use across phones, wearables, cars, and other hardware, reducing reliance on screens and keyboards. Major AI developers and startups are prioritizing voice capabilities and investing heavily. Increasing agentic behavior, persistent memory, and richer context will allow voice systems to act with fewer explicit prompts.
Read at TechCrunch