Nano Banana 2: what Google’s new AI model can do, how to use it, and why it’s faster than its predecessor

Nano Banana 2. Credit: Google.

Nano Banana 2 brings notable innovations to AI image generation. Where Nano Banana Pro saw the Mountain View giant focus on realism and flexibility, producing content with strong visual impact, this new version clearly prioritizes speed of execution and processing. With this model, technically called Gemini 3.1 Flash Image, we can obtain high-quality images without giving up response speed, a trade-off that until now forced a choice between “fast but simple” models and “slow but very accurate” ones. Let’s look concretely at what the new model can do, how it changes the user experience and how to use it in Gemini.

Nano Banana 2, even more powerful and faster: what it can do

From a functional point of view, Nano Banana 2 brings the “Flash” logic to image generation, i.e. the emphasis on high speed typical of the Gemini models oriented towards immediate response. This means it is possible to iterate much more quickly than with Nano Banana Pro, requesting an image, correcting a detail and so on, obtaining a new version of the content almost in real time. Naina Raisinghani, Product Manager at Google DeepMind, described Nano Banana 2 as «a cutting-edge image model», adding that thanks to it users will be able to «get the advanced knowledge about the world, quality and reasoning (available) in Nano Banana Pro, at the speed of light».

One of the model’s central innovations is precisely this so-called advanced knowledge of the world: the ability of Nano Banana 2 to draw on updated, contextual information, including images and data from the Web, to represent specific subjects more accurately. This is particularly useful for creating infographics, turning notes into diagrams, or generating data visualizations, i.e. graphical representations that help make sense of numbers and complex relationships.

Another key element is text rendering, i.e. how letters and words appear within an image. In previous models the text was sometimes imprecise or hard to read; here, instead, it is possible to obtain correctly rendered text, suitable for marketing mockups or materials such as cards and posters. Nano Banana 2 also lets you translate and localize text directly in the image, making it easier to adapt the same content to different languages and cultural contexts.

The image generated with Nano Banana 2 shows a graphic with legible writing that illustrates the various phases of the water cycle. Credit: Google.

On the creative control front, the model narrows the gap between speed and visual fidelity. Subject consistency lets you keep up to 5 characters and up to 14 objects recognizable across the same workflow, a fundamental feature for storyboards and accurate visual narratives. The ability to follow complex instructions has also improved: we can describe nuances, poses, environments or styles with greater precision and obtain results more in line with the request. Furthermore, support for multiple formats and resolutions, from 512 pixels up to 4K, makes Nano Banana 2 suitable both for vertical social media content and for widescreen backgrounds. The visual fidelity upgrade translates into more realistic lighting, richer textures and sharper details, all while maintaining fast generation times.

The image generated with Nano Banana 2 allows you to appreciate the coherence of the subjects represented in various scenes. Credit: Google.

Given these improvements, Nano Banana 2 is now the default model for generating images in the Gemini app, in Search via Lens and AI Mode, and in tools like Flow; it is also available to developers via API and in many other Google products, such as Ads. Google has indicated that Nano Banana 2 will replace Nano Banana Pro in the Fast, Thinking and Pro modes, although Google AI Pro and Ultra subscribers will retain access to Nano Banana Pro for specialized tasks. Nano Banana 2 is already available in 140 markets.

All images created with Nano Banana 2 include an invisible watermark called SynthID, designed to signal the artificial origin of the content. This technology is interoperable with C2PA credentials, a standard promoted by a consortium that includes companies such as Adobe, Microsoft, Google, OpenAI and Meta. This makes it possible to offer verification tools that help users and researchers understand whether, and how, an image was generated by AI.

How to use Nano Banana 2

If you want to start experimenting with Nano Banana 2, all you have to do is follow these steps.

  1. Log in to your Gemini account on the app or its web version.
  2. Tap or click the Tools button.
  3. Select Create Image.
  4. Describe in detail the result you want to achieve by typing the prompt in the Describe your image field.
  5. If necessary, select a style from those proposed and press Enter on the keyboard to start generating the content with Nano Banana 2.

How to use Nano Banana 2 from your web browser.