Google has started rolling out new artificial intelligence features for Gemini, its AI assistant, which can now interpret what you see through your smartphone's camera. The feature, reminiscent of ChatGPT's live video mode, can be accessed through Gemini Live to get real-time responses to requests of various kinds, including the option to use "Share screen with Live" as input. This update, the result of the "Project Astra" research project, is gradually rolling out in Italy among Android users, with particular attention to Google One AI Premium subscribers. The rollout is still in an initial phase and, at least for now, appears to be aimed first at users with a Google Pixel smartphone or one of the models in the Samsung Galaxy S25 family. Google has not yet specified when this technology will arrive on other Android smartphones and, possibly, on iPhone.
What Gemini Live can do: the new features included
The new features included in Gemini Live can certainly help improve the overall experience of using Google's AI. Gemini's ability to process visual information in real time can be useful in a variety of contexts: you can frame an object with your smartphone's camera and receive immediate explanations, practical suggestions, or answers to doubts and questions.
A practical example of how this feature can come in handy is well illustrated in a demo video published by Google a few weeks ago, in which a user frames a freshly glazed ceramic piece and asks for advice on which color to use to decorate it. This shows how AI can become an active support in daily life, simplifying decisions and providing ideas of all kinds.
Access to this technology takes place through the Gemini Live interface. When you activate it, a "Live" sharing screen appears that lets you start a real-time video stream, as shown in a post that recently appeared on Reddit, which we include below.
The visual processing capability Gemini is now equipped with is made possible by advances in so-called multimodal artificial intelligence, a type of AI capable of combining and understanding multiple types of input: not only text, but also audio, images, and video.
Who can use the new Gemini Live features
Adoption of this technology is not immediate for all users. For now, it seems to be active only on some Android devices, in particular those of users subscribed to the Google One AI Premium plan, which costs 21.99 euros per month. Google has announced that Pixel and Galaxy S25 devices will be among the first to receive these features, but has not clarified whether there are technical reasons for this choice. In theory, the system should be compatible with other Android devices and, potentially, even with iPhone, but at the moment there is no official information about possible support for Apple devices.