If you have tried AI-based image generators such as DALL·E 2, Stable Diffusion or Midjourney, you will have noticed how quickly they are improving and how impressive their results are. For those unfamiliar with them: they are platforms that create images (for free or for a fee) from a “prompt”, that is, a text written by the user. Not everyone knows, however, that a serious debate is underway about the legitimacy of this use of AI, because using and promoting these tools could mean being complicit in… a theft. The use of content protected by intellectual property to train AI is currently one of the most discussed problems in the public debate on large artificial intelligence models.
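To make the idea of a “prompt” concrete, here is a minimal sketch of prompt-to-image generation, assuming the open-source Stable Diffusion model accessed through the Hugging Face diffusers library; the checkpoint name and the prompt text are illustrative choices, not details taken from any specific platform.

```python
# A minimal sketch of prompt-to-image generation with the open-source
# Stable Diffusion model via the Hugging Face `diffusers` library.
# The checkpoint and prompt are illustrative, not from the article.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly released checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # generation is impractical without a GPU

# The "prompt" is plain text written by the user; the model turns it
# into a brand-new image.
image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```

Commercial services like DALL·E 2 and Midjourney hide these steps behind a web interface, but the principle is the same: text in, image out.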
The charges against AI for alleged image theft
Many artists claim, more or less explicitly, that artificial intelligence “steals” their works. In 2023, a lawsuit was filed (and later partially dismissed) by a group of artists against several companies in the sector, accused of training AI on their works (illustrations, comics and other images) without prior consent and therefore, of course, without any payment. Although many companies say their generative engines (i.e. their image generators) are modeled only on copyright-free images found on the internet, according to these artists the AIs actually read and analyze many copyrighted images in order to develop the ability to create new ones.
In response to the accusations, some have argued that art has always been “a copy” of something, and that artists have therefore always been copying in a certain sense. Here, however, it is not a matter of glancing at someone else's work and then making something similar: human artists cannot “dismember” images by reading their underlying data and then exploit that data to create new works through programs and services (including paid ones). Generative AI can do exactly that: the process, known as scraping, allows software to capture far more information than an artist's “simple” look at someone else's work.
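To see why scraping captures so much more than a glance, consider this toy sketch of how software can harvest every image on a page along with its metadata; the URL is a placeholder, and real training pipelines do this at the scale of billions of pages.

```python
# A toy illustration of image "scraping": programmatically harvesting
# every image linked from a page, together with its metadata.
# The URL below is a placeholder, not a real gallery.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

page_url = "https://example.com/gallery"
html = requests.get(page_url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

for img in soup.find_all("img"):
    src = urljoin(page_url, img.get("src", ""))
    alt = img.get("alt", "")  # alt text often ends up as a training caption
    print(src, "|", alt)
```

Run across the whole web, a loop like this yields not just pixels but captions, file names and context, exactly the kind of paired data a generative model learns from.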
To give an idea of the scale of the phenomenon, Stability AI's Stable Diffusion software may have been trained, without permission, on 5 billion copyrighted images (the case is currently ongoing). If true, this would cause two kinds of damage: first, it would amount in every respect to a theft of works of art; second, according to creatives, this software would be used to produce “fake” works, depriving the original authors of work and visibility. Even large companies like Getty Images have taken several generative AI companies to court for theft of art, in this case for the illicit use of copyrighted historical photographs.
A further, more recent accusation stems from the discovery of a spreadsheet (disclosed by the tech news site The Register) that appears to list thousands of artists whose styles could be “successfully imitated” by generative AI. The list, believed to have been compiled by Midjourney, catalogs almost five thousand names, some deceased and very famous, such as Andy Warhol, others still alive, like the cartoonist Sarah Andersen, ready to be “used”. But how? The software can provide users with an image in a very precise style, perhaps that of a particular author: it is enough to include the artist's full name in the prompt.
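As an illustration of how little effort this takes, here is how the earlier Stable Diffusion sketch would be adapted to imitate a style: only the prompt changes. The artist placeholder is deliberately generic.

```python
# Continuing the earlier sketch: imitating a specific artist requires
# nothing more than naming them in the prompt. The placeholder below
# is illustrative; the leaked list reportedly mapped real names.
prompt = "a rainy city street, in the style of <artist name>"
image = pipe(prompt).images[0]  # `pipe` as defined in the first sketch
image.save("imitation.png")
```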
How do artists defend themselves? Nightshade, the digital “poison”
In this situation, how do artists defend themselves? Partly through court cases, which, however, do not always succeed or move quickly, and partly with more or less effective do-it-yourself tools. One of these has received a great deal of attention: not logos, watermarks or other ways of fixing a signature onto images, but a more powerful and subtle instrument. A kind of digital poison.
This “digital poison”, called Nightshade and developed by computer scientists at the University of Chicago's Glaze Project under the leadership of Professor Ben Zhao, essentially works by pitting AI… against AI. It allows artists to add “invisible changes” to the pixels of their works before uploading them online so that, if “scraped” by the various image generators, they are “read” in the wrong way, damaging the model that copies them, sometimes permanently.
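Nightshade's actual optimization is far more elaborate than anything that fits here, but a toy sketch can illustrate the general idea of an “invisible change”: a pixel-level perturbation too small for a human viewer to notice, yet present in the raw data a model ingests. The random noise below merely stands in for Nightshade's targeted perturbation.

```python
# A conceptual toy, NOT Nightshade's algorithm: add a low-amplitude
# pixel perturbation that a human viewer cannot see but that alters
# the raw numbers a training pipeline reads. File names are examples.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("artwork.png"), dtype=np.int16)

# Random noise here; Nightshade instead computes a targeted
# perturbation designed to mislead a specific kind of model.
rng = np.random.default_rng(seed=0)
noise = rng.integers(-2, 3, size=img.shape, dtype=np.int16)
poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)

Image.fromarray(poisoned).save("artwork_protected.png")
```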
Using this type of tool to poison AI training data can damage learning models, leading them to confuse, for example, dogs with cats or the sun with rain. After a period of testing, Nightshade is now available online for anyone who wants to download it.