Videos generated by artificial intelligence showing Will Smith eating a plate of spaghetti have become, in the space of a few years, much more than a viral meme: today they serve as a simple but effective test of how much AI's ability to simulate human behavior has improved. This purely empirical experiment, known online as "Will Smith Eating Spaghetti," packs into a few seconds some of the hardest challenges of video generation: keeping the face consistent from frame to frame, making movements look natural, handling the interplay of hands, cutlery, and food, and even synchronizing audio with lip movement. By retracing its evolution, we can understand why a clip that in 2023 looked like a textbook example of AI slop has now reached near-cinematic quality, and what that tells us about the current state of generative AI research, without giving in to easy enthusiasm or, conversely, unjustified fears.
The evolution of the Will Smith eating spaghetti meme: 2023 vs 2026
The first video, which appeared in March 2023 on Reddit, showed an unrecognizable Will Smith, with constantly shifting facial features and mechanical gestures, far removed from any real experience. It was created with ModelScope, a text-to-video tool: the user enters a written description, called a prompt, and the model tries to turn it into moving images. The result was, to say the least, disturbing, precisely because the human brain is extremely sensitive to anomalies in faces and in everyday actions such as eating. It is no coincidence that the clip spread rapidly, provoking a mix of hilarity and unease in viewers and becoming raw material for parodies and discussions of all kinds.
From that moment, "Will Smith eating spaghetti" began to function as a sort of unofficial benchmark, a test used by the community to compare the progress of different models. By 2024, new iterations already showed improvement: the movements were more fluid and the scene more stable, but obvious errors persisted, such as deformed forks or spaghetti that seemed to ignore gravity.
The most interesting leap came in 2025, when tools like Google Veo 3 produced far more convincing versions of the test. Faces are more coherent, posture more believable, and the overall action more natural. Strange details remain, such as excessively "crunchy" chewing sounds, but these are subtle imperfections, no longer macroscopic errors. It is at this stage that the deepfake stops being a mere experimental curiosity and becomes a more mature technology, at least visually. To be clear, this visual maturity does not imply any real understanding of the action on the AI's part, only an increasingly refined simulation of how reality is represented in synthetic video.
In the latest developments, the test has evolved further thanks to generators such as Kling 3.0, developed by the Chinese company Kuaishou Technology. Here we no longer see just a man eating in front of the camera, but a fully staged scene: two characters sitting at a table (one of them Will Smith, naturally), complete with dialogue, camera cuts, and so on. The voices, also synthetic, are lip-synced, which is technically demanding because audio and video must be generated consistently with each other. Judge for yourself the progress made from 2023 to today.
The end of clips of Will Smith eating spaghetti?
Curiously, just as the quality of AI-generated videos grows, the spaghetti test is starting to hit limits. Companies like OpenAI and xAI (Elon Musk's company behind the controversial Grok) are adopting increasingly strict guardrails, with automatic rules that block the generation of images depicting real people or copyrighted material. This makes it ever harder to replicate the experiment with famous actors, especially in the United States, where the entertainment industry is particularly vigilant about protecting its intellectual property.