Razer unveils an "AI-native" headset with cameras that see what you see
Briefly

"The Motoko's dual first-person-view cameras are positioned at eye level to basically see what you see, enabling real-time object and text recognition - translating street signs, tracking gym reps, summarizing documents on the fly, all of that. There are also dual far and near-field mics, working together to capture voice commands and pick up dialogue within view."
"The headset "interprets and responds instantly, acting as a full-time AI assistant that adapts to schedules, preferences, and habits". The Motoko "connects effortlessly" with Grok, ChatGPT, and Gemini, whatever that means. As you may have guessed by now, this is a concept, "offering a glimpse into the future of AI-driven wearables", not an actual product."
Razer unveiled Project Motoko as an AI-native headset with integrated cameras and microphones designed for real-time assistance. Dual first-person-view cameras sit at eye level to enable object and text recognition for tasks like translating street signs, tracking gym reps, and summarizing documents. Dual far- and near-field microphones capture voice commands and nearby dialogue, enabling instant interpretation and responses. The headset functions as a persistent AI assistant that adapts to schedules, preferences, and habits, and it connects with Grok, ChatGPT, and Gemini. Project Motoko is presented as a concept prototype, not a consumer product.
Read at GSMArena.com