There was a lot of press ~2 years ago about this paper, and the term “model collapse”:
“Training on generated data makes models forget”

There was concern that the AI models had already slurped up the Whole Internet and would need more data to get any smarter. Generated “synthetic data” was mooted as a possible solution. Add to that the fact that the Internet increasingly contains AI-generated content, and the worry practically wrote itself.

As so often happens (and happens fast in AI), research and industry move on, but the flashy news item sticks in people’s minds. To this day I see posts from people who misguidedly think this is still a problem (and, as such, one more reason the whole AI house of cards is about to fall).

In fact, the big frontier models today (GPT, Gemini, Llama, Phi, etc.) are all trained, in part, on synthetic data.

As it turns out, what really matters is the quality of the data, not whether it’s synthetic; see “Textbooks Are All You Need”.
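
To make that concrete, here is a minimal sketch of quality-first curation: score each document with a judge and train only on the keepers. Everything in it (curate, toy_score, the threshold) is a hypothetical stand-in; the phi-1 work used an LLM-based quality classifier, not this toy.

    from typing import Callable, Iterable

    def curate(docs: Iterable[str],
               score_fn: Callable[[str], float],
               threshold: float = 0.5) -> list[str]:
        """Keep only documents the judge scores at or above the threshold."""
        return [d for d in docs if score_fn(d) >= threshold]

    def toy_score(doc: str) -> float:
        # Toy judge: keyword-rich, "textbook-like" text scores higher.
        # A real pipeline would ask a model to rate educational value.
        keywords = ("theorem", "function", "example", "because")
        hits = sum(k in doc.lower() for k in keywords)
        return min(1.0, 0.2 + 0.2 * hits)

    corpus = [
        "lol same",
        "A function maps each input to one output; for example, f(x) = x + 1.",
    ]
    print(curate(corpus, toy_score))  # keeps only the textbook-ish sentence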

And then some folks figured out how to use an AI verifier to curate that quality data automatically: “Escaping Model Collapse via Synthetic Data Verification”.
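
The verification idea reduces to a simple loop: generate, verify, keep only what passes, so low-quality generations never enter the training set. A hedged sketch follows; generate and verify are hypothetical stand-ins (a real verifier might run unit tests, check answers, or score factuality with a separate model), and this is my paraphrase of the approach, not the paper’s code.

    import random

    def generate(n: int) -> list[str]:
        # Stand-in generator; a real pipeline samples from the current model.
        return [f"sample-{random.randint(0, 9)}" for _ in range(n)]

    def verify(sample: str) -> bool:
        # Stand-in verifier; toy rule: accept samples ending in an even digit.
        return int(sample[-1]) % 2 == 0

    def verified_synthetic_set(target: int, batch: int = 100) -> list[str]:
        """Accumulate generations, admitting only those the verifier accepts."""
        kept: list[str] = []
        while len(kept) < target:
            kept.extend(s for s in generate(batch) if verify(s))
        return kept[:target]

    print(len(verified_synthetic_set(10)))  # 10 verified samples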

And people used clever math to make the synthetic data really high quality: “How to Synthesize Text Data without Model Collapse?”
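
The “clever math” there is, as I read it, token-level editing: start from human text and resample only the tokens the model already finds highly predictable, so the training distribution stays anchored to human data instead of drifting toward pure model output. A toy sketch of that idea, with token_prob and resample as hypothetical stand-ins for a real language model:

    import random

    def token_prob(context: list[str], tok: str) -> float:
        # Stand-in for a language model's P(tok | context).
        return 0.999 if tok in ("the", "a", "of") else 0.3

    def resample(context: list[str]) -> str:
        # Stand-in for sampling a replacement token from the model.
        return random.choice(["this", "that", "its"])

    def token_edit(tokens: list[str], threshold: float = 0.99) -> list[str]:
        """Replace only highly predictable tokens; the rest stays human-written."""
        out: list[str] = []
        for tok in tokens:
            replace = token_prob(out, tok) >= threshold
            out.append(resample(out) if replace else tok)
        return out

    print(token_edit("the proof of the lemma uses induction".split()))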

Summary:
“Model collapse” from AI-generated content is largely a Solved Problem.

There may be reasons the whole AI thing will collapse, but this is not one of them.

  • JeeBaiChow@lemmy.world · 1 month ago

    Assuming the AIs have already sucked up a huge chunk of the data on the internet, new human data is produced much more slowly than the generated stuff. Meaning, it’s not a stretch to imagine the AIs spending more and more compute verifying and rejecting an ever-larger share of incoming data, while adding only a small chunk to the knowledge base. So: exponentially more power consumption for limited gains, the classic diminishing-returns conundrum?