A Bitter Pill for GenAI as Nightshade Takes the Spotlight
With attention on software that can “poison” content to make it harder for AI models to interpret, the debate on fair use and copyright intensifies.
Litigation and regulation of AI are still in their early days, but some parties are already going on the offensive to protect materials they do not want scraped to train models that could emulate their work.
Momentum continues to build across social media and in news headlines for a heated discussion about software such as Nightshade, which introduces elements into content to disrupt AI's ability to understand what it is looking at. This free software has been made available for artists to use, offering them a way to "poison" their own work and discourage GenAI from training on their creations.
According to VentureBeat, this project out of the SAND Lab at the University of Chicago leverages its own AI to thwart GenAI by adding a tag to images that spoofs AI models into believing they see something different.
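To make the idea concrete, here is a minimal, purely illustrative sketch of image perturbation: nudging pixel values slightly so a picture looks unchanged to a person but reads differently to a model. This is a toy using random noise, not Nightshade's actual technique, which reportedly uses model-guided optimization; the function name and parameters are invented for illustration.

```python
import numpy as np

def perturb_image(pixels: np.ndarray, strength: float = 0.02, seed: int = 0) -> np.ndarray:
    """Toy illustration of pixel-level perturbation (NOT Nightshade's algorithm).

    Adds a small pseudo-random offset to each pixel of an image whose values
    lie in [0, 1]. The change is capped by `strength`, so the result is
    visually near-identical while the raw numbers a model sees differ.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-strength, strength, size=pixels.shape)
    # Clip so the output remains a valid image in [0, 1].
    return np.clip(pixels + noise, 0.0, 1.0)
```

The key property, and the reason such changes are hard for viewers to spot, is that every pixel moves by at most `strength`, a few percent of the full brightness range.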
Some supporters of content produced via AI naturally cried foul about what they view as harm to their space. Meanwhile, creatives who do not want their works mimicked by AI pointed out that they are making changes to their own materials; it is their choice to add such elements to their images.
Debate over the use of original works to train GenAI includes the recent lawsuit brought by The New York Times against OpenAI and Microsoft, where intellectual property and copyright law may be hashed out. There is speculation the lawsuit might stall, if not kill, OpenAI. In response, OpenAI wrote in a blog post that it supported journalism but did not believe the litigation had merit.
This episode of DOS Won’t Hunt takes a look at proactive efforts to interfere with AI, what is at stake for opposing parties, and how data privacy may also be part of the discussion.