Following the acquisition, the Cinemersive Labs team will join SIE's Visual Computing Group (VCG) and contribute to our broader efforts in advancing state-of-the-art visual computing within games. This includes applying machine learning to enhance gameplay visuals, improve rendering techniques, and unlock new levels of visual fidelity for players.
Across various fields, from spatial mapping for reconnaissance and construction to facial recognition, virtual and augmented reality, and autonomous driving, accurate 3D representation of dynamically evolving environments is paramount for safe human-machine interaction. Therefore, substantial research efforts are directed towards developing a cost-effective, high-performance, and scalable 3D imaging sensor comparable to a CMOS camera.
Upload any picture or video, and Musubi uses artificial intelligence to extract the most important part and make it hover in space as a 3D image within the frame. That could be a video of a child's first steps or a snapshot of a birthday party. The image will be displayed in 3D form, viewable in all its holographic glory across nearly 170 degrees.
We look not just at the next five years; we look at the next 10, maybe 15 years. The focus is on advanced packaging and a possible expansion of the chip size that ASML's machines can handle, positioning the company to capitalize on the rapidly evolving AI chip manufacturing landscape.
Covert recording is a lot about power. So, I was worried from the very beginning when Meta announced they were going to revive the Google Glass idea. That may well be influenced by my area of study, but it might just as well be influenced by every report and story I've read on digital abuse and hate speech over the last twenty to thirty years.
According to the latest edition of Gurman's Power On newsletter, the Cupertino-based tech giant is working on its AI visual models to enable the Visual Intelligence features on the rumoured AI pendant, AI smart glasses, and AirPods model with cameras. This will enable the wearables to provide environment-based answers to users and take context-based actions. Gurman adds that Apple intends to make Visual Intelligence and visual models integral to its upcoming wearables.
AI is already doing really well in the digital world. What about the physical world? AI wearables and robotics need memories as well. ... Ultimately, you need AI to have visual memories. We believe in that future.
"It's not an overstatement to declare another VR winter," said J.P. Gownder, vice president and principal analyst at Forrester. "I think we might even go as far as to say there's only a handful of successful scenarios where people are using VR." This assessment reflects the industry's struggle to find practical applications beyond niche markets.
There are two types of grants that U.S.-based organizations can apply for: Accelerator Grants for those who are already leveraging our AI glasses to scale their impact, and Catalyst Grants for organizations proposing new, high-impact applications using our Device Access Toolkit. We will award 15 Accelerator Grants of $25,000 and 10 of $50,000 USD, depending on the scale of the project. We'll also award five Catalyst Grants of $200,000. In total, we'll grant nearly $2 million to more than 30 organizations and developers.
According to Digi Capital, augmented and virtual reality are about to explode as VCs and corporates get in on the act. Facebook's multi-billion-dollar acquisition of Oculus got everyone's attention early last year, but it's only really in the last 12 months that investments have accelerated, with more than $1bn pouring into the sector. Meanwhile, Mashable reports that Nokia's virtual reality camera is now available for pre-order for a cool $60,000.
Either way, I think the AI boom is alive and well. With much of the short-term hype fading away, the big question is whether the long-term trajectory is still intact, and whether it makes sense for investors to hit the buy button now that the near term is somewhat less hyped while the long term is as exciting as ever.
One of the big focuses of the new operating system version is what Pico calls PanoScreen, a feature that lets the wearer run multiple applications at once while also keeping a 360-degree view of the real-world space around them. Other users can pop into the space as 3D avatars while you spin around to see spreadsheets, browser tabs, design software, or whatever else you're working on.
The Motoko's dual first-person-view cameras are positioned at eye level to see essentially what you see, enabling real-time object and text recognition: translating street signs, tracking gym reps, summarizing documents on the fly, all of that. There are also dual far- and near-field mics, working together to capture voice commands and pick up dialogue within view.
Laboratory safety goggles have finally joined the ranks of smart devices. That's the promise behind LabOS, an AI operating system for scientific laboratories built by the Stanford-Princeton AI Coscientist Team, a group led by Stanford University bioengineer Le Cong and Princeton University computer scientist Mengdi Wang, with founding partners that include NVIDIA. Powered by NVIDIA's vision-language models to process visual data, the system is designed to provide AI with real-time knowledge of lab work so it can determine what causes experiments to fail or succeed and rapidly train new scientists to expert levels by guiding them through experimental protocols.
Smart glasses evangelists often tell me this fear is somewhat overblown. After all, the phone in your pocket also has a camera. The government already uses facial recognition tech, and CCTV feeds are everywhere. Anyone who's ever watched a true-crime documentary or an episode of Law & Order knows that these days, it's hard to step out in public and not be recorded.
The company is building directly on its major success supplying its waveguide technology to glasses, and proving that geometric waveguides work at consumer scale with standard glass. At CES, Lumus showcased a ZOE prototype with a field of view of more than 70 degrees, an optimized Z-30 with 40% more brightness, and a Z-30 2.0 preview that's 40% thinner. David Goldman, VP of marketing, walked me through each demo with clear enthusiasm about the progress Lumus is making.