Google and Samsung just launched the AI features Apple couldn't with Siri
"Sameer Samat, Google's president of Android, showed off a demo of how Gemini's new agentic features would work to help wrangle a pizza dinner order from his busy family group chat. Samat asks Gemini to look at the chat thread and figure out what to order, and then make the order with a delivery app. Onscreen - in a prerecorded video, it wasn't live - you can see Gemini figuring out what everyone wants from the group chat and showing that in a window."
"Google just recently added the ability for Gemini to auto-browse for users in Chrome, and being able to do something similar right inside of Android feels like a logical next step; Google clearly wants Gemini to be thought of as a helpful agent or productivity partner rather than just a chatbot or a series of AI models."
Google unveiled agentic capabilities for Gemini that let the AI handle multi-step tasks directly on Android phones, starting with the Pixel 10, Pixel 10 Pro, and Samsung Galaxy S26. During a demonstration, Google's Android president showed Gemini analyzing a family group chat to determine everyone's pizza preferences, then completing an order through GrubHub with user confirmation. The feature marks a significant advance in agentic AI, building on Google's recent addition of auto-browsing capabilities in Chrome. It also positions Google ahead of Apple, which announced similar Siri features at WWDC 2024 but, in March 2025, delayed their release. Google aims to establish Gemini as a productivity partner and helpful agent rather than merely a chatbot.
Read at The Verge