
Why AI Model Collapse Due to Self-Training Is a Growing Concern

AI models can degrade themselves, turning original content into irredeemable gibberish over just a few generations, according to research published today in Nature.

The recent study highlights the increasing risk of AI model collapse due to self-training, emphasizing the need for original data sources and careful data filtering.

What kinds of AI are susceptible to model collapse?

Model collapse occurs when an artificial intelligence model trains on AI-generated data.

“Model collapse refers to a phenomenon where models break down due to indiscriminate training on synthetic data,” said Ilia Shumailov, a researcher at the University of Oxford and lead author of the paper, in an email to Gizmodo.

According to the new paper, generative AI tools such as large language models may overlook certain parts of a training dataset, so the model ends up learning from only a portion of the data.

Large language models (LLMs) are a type of AI model that trains on huge amounts of data, allowing them to interpret the information therein and apply it to a variety of use cases. LLMs are generally built to both comprehend and produce text, making them useful as chatbots and AI assistants. But overlooking swaths of the text it is purportedly reading and incorporating into its knowledge base can reduce an LLM to a shell of its former self relatively quickly, the research team found.

“In the early stage of model collapse first models lose variance, losing performance on minority data,” Shumailov said. “In the late stage of model collapse, [the] model breaks down fully.” So, as models continue to train on increasingly inaccurate and irrelevant text that they themselves have generated, this recursive loop causes them to degenerate.
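To see that variance-loss mechanism in miniature, here is a toy sketch of our own (it is not the paper's experimental setup): each "generation" fits a simple Gaussian to data produced by the previous generation, then generates the next generation's training data itself.

```python
# Toy illustration of recursive self-training (an assumption-laden sketch,
# not the authors' method): fit a Gaussian to the previous generation's
# output, then sample new "training data" from that fit. In expectation the
# fitted variance shrinks by a factor of (n - 1) / n each generation, so rare
# tail values -- the "minority data" -- gradually disappear.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 50

# Generation 0 trains on "real" data: mean 0, standard deviation 1.
data = rng.normal(loc=0.0, scale=1.0, size=n_samples)

for generation in range(1, 101):
    mu, sigma = data.mean(), data.std()            # "train" on the current data
    data = rng.normal(mu, sigma, size=n_samples)   # next generation sees only model output
    if generation % 20 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")
```

The fitted spread tends to narrow over successive generations, which is the toy analogue of a model forgetting the less-common parts of its original training distribution.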

A case study in model collapse: Churches and jackrabbits

The researchers provide an example in the paper using a text-generation model called OPT-125m, which performs similarly to OpenAI's GPT-3 but with a smaller carbon footprint, according to Hugging Face (training a moderately large model produces twice the CO2 emissions of an average American's lifetime).

The team fed the model text on the topic of designing 14th-century church towers; in the first generation of text output, the model was mostly on-target, discussing buildings constructed under various popes.

But by the ninth generation of text outputs, the model mainly discussed large populations of black, white, blue, red, and yellow-tailed jackrabbits (we should note that most of these are not actual species of jackrabbits).

Model collapse grows more critical as AI content saturates the web

A cluttered internet is nothing new; as the researchers point out in the paper, long before LLMs were a familiar topic to the public, content and troll farms produced material designed to trick search algorithms into prioritizing their websites for clicks. But AI-generated text can be churned out far faster than humans can write such filler, which raises the same concern on a much larger scale.

“Although the effects of an AI-generated Internet on humans remain to be seen, Shumailov et al. report that the proliferation of AI-generated content online could be devastating to the models themselves,” wrote Emily Wenger, a computer scientist at Duke University specializing in privacy and security, in an associated News & Views article.

“Among other things, model collapse poses challenges for fairness in generative AI. Collapsed models overlook less-common elements from their training data, and so fail to reflect the complexity and nuance of the world,” Wenger added. “This presents a risk that minority groups or viewpoints will be less represented, or potentially erased.”

Large tech companies are taking some actions to mitigate the amount of AI-generated content the typical internet surfer will see. In March, Google announced it would tweak its algorithm to deprioritize pages that seem designed for search engines instead of human searchers; that announcement came on the heels of a 404 Media report on Google News boosting AI-generated articles.

AI models can be unwieldy, and the recent study’s authors emphasize that access to the original data source and careful filtering of the data in recursively trained models can help keep the models on track.
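Continuing the toy sketch from earlier, here is one hedged illustration of that mitigation (again ours, not the authors' procedure): keep a share of the original human data in every generation's training mix instead of training purely on model output.

```python
# Toy sketch of retaining original data across generations (an illustrative
# assumption, not the paper's method): half of each generation's training set
# is drawn from the preserved "human" dataset, half from model output. Because
# the original data keeps re-entering the mix, the fitted spread no longer
# drifts toward zero the way it does in the purely recursive loop above.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 50
original = rng.normal(0.0, 1.0, size=n_samples)   # the preserved original dataset

data = original.copy()
for generation in range(1, 101):
    mu, sigma = data.mean(), data.std()                      # "train" on current mix
    synthetic = rng.normal(mu, sigma, size=n_samples)        # model-generated data
    # Next generation trains on half original data, half synthetic data.
    data = np.concatenate([original[: n_samples // 2],
                           synthetic[: n_samples // 2]])
    if generation % 20 == 0:
        print(f"generation {generation:3d}: std={sigma:.3f}")
```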

The team also suggested that coordination across the AI community involved in creating LLMs could be useful in tracing the provenance of information as it’s fed through the models. “Otherwise,” the team concluded, “it may become increasingly difficult to train newer versions of LLMs without access to data that were crawled from the Internet before the mass adoption of the technology or direct access to data generated by humans at scale.”

O brave new world, with such AI in it!


