Model collapse is already happening

(cacm.acm.org)

15 points | by zdw 1 hour ago

6 comments

  • chromacity 1 hour ago
    There's some comedy in this article having all the hallmarks of LLM writing.
    • justonceokay 1 hour ago
      Yeah, a typo in the subtitle does not especially inspire confidence
      • niccl 1 hour ago
        You've got me. What's the typo?
        • justonceokay 51 minutes ago
          It seems to me there is a word or two missing between “rich” and “slowly”. If I read the whole thing aloud I cannot parse it into a sentence. Or the word “rich” could be removed. That would be clunky but at least grammatically sensible.

          “Make data get smoothed out” is a very strange way of saying “smooths out data”

          • quantified 29 minutes ago
            It might be weird if you haven't read a lot of English. It's actually quite normal to say that process X is a way to make effect Y happen. "Makes your mouth water" is more effective than "waters your mouth". "Makes your breath fresh and tolerable" is better than "freshens and tolerablerizes your breath". Etc.

            Actually, what you are describing is what happens when LLM-generated prose cycles and then trains humans to use equally dull thinking.

  • SunshineTheCat 1 hour ago
    I always find articles like this very odd and nebulous because they act as though AI models are just Google.

    Type request, get info.

    But that's such a narrow/one-dimensional view of how LLMs are used. They can gather data or write an article, but that's probably a minority of use cases.

    People have casual conversations with them, get code written, hold brainstorming sessions, dictate voice-recorded notes, and the list goes on.

    While the data it's getting trained on is important, the supposition is that this data consists only of what sits out there on the interwebs.

    That's as opposed to user input/interaction, which, I'm guessing, plays a pretty large role in training models. Maybe even more so in some cases than AI-written blog spam.

  • kimi 1 hour ago
    I have a pet peeve with this. As a non-native English speaker, I find it very useful to dictate multiple notes, in different languages, and have the LLM produce clear English prose out of it. The prose may be LLM-generated, but I edit it when needed to make sure that the content is 100% mine.

    It's like dictating to a typist like they did in the '60s: the typist will make sure that your letter looks professional and will fix your grammar, but you will sign the letter. This is totally different from LLM spam, the kind that inflates a sentence into a three-page article full of nothing.

    So - is it a problem if the language reverts to the mean? That is the point of a shared language, right?

  • FeepingCreature 1 hour ago
    Source: a bad study from 2023 (a sketch of its setup follows the thread).
  • levocardia 1 hour ago
    Evidence: trust me bro. Really, where is the actual evidence that models are "collapsing" from too much AI-generated training material? Evals are up, subjective perception of model usefulness is up (for me, certainly), and if anything the slop levels are down, or at least stable. I find it hard to believe that seven-figure software engineers at top labs aren't being careful about how much post-ChatGPT-era internet content is going into their training data.
    • jrmg 1 hour ago
      > I find it hard to believe that seven-figure software engineers at top labs aren't being careful about how much post-ChatGPT-era internet content is going into their training data.

      I agree - but as the Internet descends into all-slop-all-the-time (seriously, just do a search for reviews or travel advice or technical questions - or most anything - to see it), where do you expect the high-quality training material on future things to come from? I have a hard time imagining it.

      • ctoth 1 hour ago
        Your Claude Code sessions. Every interaction. Every time the model is asked to do something and then gets feedback on that something ("this didn't work, I got this traceback").

        Textbooks, company wikis, news corpora, structured reports of all kinds from far more sources than what is available on the web.

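A note on the study FeepingCreature mentions: the 2023 paper usually cited for model collapse (presumably Shumailov et al.'s "The Curse of Recursion") demonstrates the effect by training models recursively on their own output. The statistical core of that setup can be sketched in a few lines of Python; the Gaussian fit below stands in for a language model, and every number here is an illustrative assumption, not anything taken from the paper.

    import random
    import statistics

    random.seed(0)

    N = 100           # training examples per generation (small on purpose)
    GENERATIONS = 200

    # Generation 0 trains on "real" data: a standard normal distribution.
    data = [random.gauss(0.0, 1.0) for _ in range(N)]

    for gen in range(1, GENERATIONS + 1):
        # "Train" the model: fit a Gaussian by maximum likelihood.
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)
        # The next generation sees only this model's samples, never the
        # original data.
        data = [random.gauss(mu, sigma) for _ in range(N)]
        if gen % 40 == 0:
            print(f"generation {gen:3d}: mean={mu:+.3f}  std={sigma:.3f}")

Each refit loses a little variance (finite-sample noise plus the small-sample bias of the variance estimate), and the losses compound: the standard deviation drifts toward zero and the tails vanish first, which is the smoothing-out the article's subtitle apparently gestures at. Whether frontier labs, which mix in fresh human data and curate their corpora, are actually subject to this dynamic is what levocardia and jrmg are debating above.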