Wearables
from CGMagazine, 12 hours ago
BEACN Creates A Voice-First Headset, Releasing Spring 2026
BEACN announced a premium wireless headset focused on delivering high-quality voice and sound for online communication.
Qualcomm is helping address one of the auto industry's most pressing needs: scaling intelligent vehicle technology to meet growing consumer demand for vehicles that are automated, connected, and highly personalised.
Meta's AI-powered glasses are designed to enhance personal presence, allowing users to engage more fully with their surroundings while offering features that could assist those with vision impairments or hearing loss.
Following the acquisition, the Cinemersive Labs team will join SIE's Visual Computing Group (VCG) and contribute to our broader efforts in advancing state-of-the-art visual computing within games. This includes applying machine learning to enhance gameplay visuals, improve rendering techniques, and unlock new levels of visual fidelity for players.
Upload any picture or video, and Musubi uses artificial intelligence to extract the most important part and make it hover in space as a 3D image within the frame. That could be a video of a child's first steps or a snapshot of a birthday party. The image is displayed in 3D form, viewable in all its holographic glory across nearly 170 degrees.
The upper display renders 3D content without glasses, using Lenovo's PureSight Pro Tandem OLED technology to show depth and spatial volume directly on screen. A spacecraft that's been modeled in three dimensions appears to float, with genuine perceived distance between its front and rear planes, rather than sitting flat behind glass.
Laboratory safety goggles have finally joined the ranks of smart devices. That's the promise behind LabOS, an AI operating system for scientific laboratories built by the Stanford-Princeton AI Coscientist Team, a group led by Stanford University bioengineer Le Cong and Princeton University computer scientist Mengdi Wang, with founding partners that include NVIDIA. Powered by NVIDIA's vision-language models to process visual data, the system is designed to give AI real-time knowledge of lab work so it can determine what causes experiments to fail or succeed. It can also rapidly train new scientists to expert levels by guiding them through experimental protocols.
That's today's project. In this article, I'll show you how I started with a picture of me, used some intermediate AI, and turned it into a physical 3D plastic me figurine. Do I need a me figurine? No. Is it cool? Yeah. Does it show off another AI capability? Yep. I'll be honest. I didn't expect my editor to sign off on this pitch.
One of the big focuses of the new operating system version is what Pico calls PanoScreen, a feature that lets the wearer run multiple applications at once while also keeping a 360-degree view of the real-world space around them. Other users can pop into the space as 3D avatars while you spin around to see spreadsheets, browser tabs, design software, or whatever else you're working on.
"It's not an overstatement to declare another VR winter," said J.P. Gownder, vice president and principal analyst at Forrester. "I think we might even go as far as to say there's only a handful of successful scenarios where people are using VR." This assessment reflects the industry's struggle to find practical applications beyond niche markets.
The company is building directly on its major success supplying its waveguide technology to glasses and proving that geometric waveguides work at consumer scale with standard glass. At CES, Lumus showcased a ZOE prototype with a field of view of more than 70 degrees, an optimized Z-30 with 40% more brightness, and a Z-30 2.0 preview that's 40% thinner. David Goldman, VP of marketing, walked me through each demo with clear enthusiasm about the progress Lumus is making.