I won a championship that doesn't exist

(ron.stoner.com)

41 points | by SEJeff 1 hour ago

16 comments

  • simonw 1 hour ago
    You don't need to vandalize Wikipedia to get this kind of thing to work.

    Back in September 2024 I named a whale "Teresa T" with just a blog entry and a YouTube video caption: https://simonwillison.net/2024/Sep/8/teresa-t-whale-pillar-p...

    (For a few glorious weeks if you asked any search-enabled LLM, including Google search previews, for the name of the whale in the Half Moon Bay harbor it confidently replied Teresa T)

    • slater 45 minutes ago
      (it probably helps that your name & blog carry some weight, vs. some rando writing something on blogspot or wordpress ;) )
      • Forgeties79 31 minutes ago
        Which illustrates another problem: unscrupulous actors with big names can spread whatever information they want to millions of people with minimal effort.
    • bitwize 41 minutes ago
      The Mr. Splashy Pants of the AI era!
  • xeeeeeeeeeeenu 30 minutes ago
    The key to successful poisoning attacks is to introduce brand new information that doesn't directly contradict other training data. It's much easier to convince the LLMs that you're the king of a fictional Mapupu kingdom than the president of the United States.

So this means that for bad actors it's more efficient to manufacture brand new fake stories instead of trying to distort the real ones. Don't produce fake articles absolving yourself of a crime; instead, produce fake articles accusing your opponent of 100 different things. Then people will fact-check the accusations using LLMs, and since all the sources mentioning those accusations are controlled by you, the LLMs will confirm them.

  • blobbers 37 minutes ago
This is basically the same problem as products astroturfing Reddit, or SEO gaming of Google search. You want a new X, so they heavily go after the keywords associated with it.

    This is sort of why "brand" matters; it provides a source of trust.

Encyclopedia Britannica used to be that source of 'facts'. Then it became whatever PageRank told you. Eventually SEO ruined that.

    News stories are the same thing. For certain groups, they have their 'independent' publication whose reporting they trust.

    • nailer 30 minutes ago
It's such a pity the Oxford English Dictionary decided to paywall themselves decades ago - they used to be THE dictionary in most countries; now nobody seems to know who they are.
  • billypilgrim 38 minutes ago
    I must say I expected an actual poisoning of the data used to train the LLM and was excited, but the examples indicate that the LLM just searched the web and reported what it found? When you create a website with fake information and search Google for that information, it will of course bring up your site, not because it’s factually correct but because it’s related to what you searched for. What am I missing?
  • Paracompact 1 hour ago
    Most of the popular discourse around AI is still at the level of, "Don't trust the AI, trust the sources!" When it gets to the point where even the sources of simple facts are untrustworthy, the average person just trying to learn some trivia about the world is doomed.

Doesn't help that AI media literacy is so primitive compared to how intelligent the models generally are. We're in a marginally better place than we were back when chatbots didn't cite anything at all, but duplicate Wikipedia citations that trace back to a single source about a supposedly global event are just embarrassing. By default, I feel citations and epistemological qualifications should be explicit, front-and-center, and subject to introspection, not implicit and confined to tiny little opaque buttons as an afterthought.

    • amiga386 50 minutes ago
      Wikipedia calls this https://en.wikipedia.org/wiki/Citogenesis (after XKCD coined it).

      You can expect the spicy autocomplete to feed you flattering bullshit. It may cite Wikipedia (it shouldn't), but you should go check out those citations, and validate the claims yourself. It's the least you can do.

      And if the cited source is Wikipedia... check Wikipedia's sources too. Wikipedians try their best to provide you with reliable sources for the claims in their articles (oh who am I trying to kid? They pick their favourite sources that affirm their beliefs, and contending editors remove them for no good reason, and eventually the only thing that accrues is things that the factions agree on, or at least what ArbCom has demanded they stop fighting over).

      I guess what I'm trying to say is: don't rely on that authoritative-sounding tone that Wikipedia uses (or that AI bots use, or that I'm using right now). It's a rhetorical trick that short-circuits your reasoning. Verify claims with care.

      Also check the Talk page, you often find all kinds of shenanigans called out there.

      • bitwize 29 minutes ago
        Perhaps my favorite example of a citogenesis-like process is the legendary arcade game Polybius, which originated as an entry on some German guy's web compendium of arcade games (coinop.org), perhaps as a "paper town", or fake entry that acts as a copyright canary when duplicated elsewhere. Gamer news and special-interest blogs and sites, and even print publications like GamePro picked it up, and I think it was even listed on Wikipedia as an urban legend whose actual existence was unknown. Then the retrogaming YouTuber Ahoy did an in-depth documentary (https://m.youtube.com/watch?v=_7X6Yeydgyg) which concluded that Polybius didn't exist and was never even mentioned before the aforementioned coinop.org reference and, for me anyway, that settled it. Polybius, in its urban legend form, never existed.

        (Norm Macdonald voice) Or so the Germans would have us believe...!

  • jrmg 48 minutes ago
    BBC journalist doing a very similar thing in February: https://www.bbc.com/future/article/20260218-i-hacked/-chatgp...
  • amarant 1 hour ago
    "Stoner became the first American world champion...."

Even being on stoner.com, I read that as meaning something different from what was meant.

    Op has a great surname!

  • drchiu 51 minutes ago
    My wife cited ChatGPT as her primary source the other day when she wanted to debate with me on something.

    "AI told me that..."

    In the old days, it would have been "I read on Google..."

  • CrzyLngPwd 1 hour ago
    So it's trivial for an individual to poison the LLMs, but imagine what a state with billions of American dollars could achieve.

We can easily look ahead a few years and see how people will rely on LLMs as a source of truth, in the same way people once relied on Google, or on newspapers.

    Rewriting history has been happening for a while, and with LLMs being the one-stop shop for guidance and truth, the rewrite will be complete.

    Doubly so since most people see these things as artificial intelligence, and soon to be superintelligence...so how can they be wrong?

  • standeven 1 hour ago
    I've had LLMs regurgitate satire as fact many, many times.
  • Havoc 49 minutes ago
    Like a FIFA peace prize?
  • nailer 48 minutes ago
    Yes. Wikipedia poisoning is why regular people in the US are regularly citing Iranian narratives about the Middle East. https://x.com/npovmedia/status/2016964000190255470
  • shevy-java 1 hour ago
So like Frank Dux! Despite what the epilogue of the movie Bloodsport claims, he didn't actually do that.

    It's almost like he was a better Chuck Norris than Chuck Norris. By his own ... testimony ...

  • nonameiguess 1 hour ago
Pales in comparison to what Frank Dux and Frank Abagnale were able to convince much of the world they did with no evidence other than their own stories. Who knows how much of recorded and believed history is complete bullshit? Not to get too far into sacred territory, but claims around Siddhartha Gautama, Jesus Christ, and the Prophet Muhammad are quite a bit less plausible than the legends of Ragnar Lodbrok or the tales of Jonathan Swift, but nonetheless widely believed.
  • blobbers 40 minutes ago
    [dead]
  • dyauspitr 1 hour ago
    Why does this person deserve any kind of support? What’s the point of poisoning LLMs? To put some cursory Luddite roadblock that might delay the technology for a couple of months?
    • jurgenkesker 1 hour ago
Support? It's just showing weaknesses of LLMs, which is a valid sort of research, I would say.
      • wewtyflakes 56 minutes ago
        That's fair, though on the other hand it kind of feels like "Don't drive cars, there could be rocks on the road! See, just look at all these rocks I put on the road!". Which is true, and real, but perhaps frustrating for people who just want to get someplace in peace.
    • jrmg 47 minutes ago
This is an “if we stopped testing there would be far fewer cases!” mentality...
    • duskwuff 54 minutes ago
      > What’s the point of poisoning LLMs?

      It's a demonstration. If a domain name and a quick bit of Wikipedia vandalism is all it takes to make an LLM start spouting nonsense about a "surprisingly serious tournament circuit" or a "massive online community" for an obscure card game, consider what an unscrupulous PR team or a political operative could do to influence its output on more important topics.

      • nickthegreek 24 minutes ago
        > consider what an unscrupulous PR team or a political operative could do to influence its output on more important topics.

        ‘is doing’.

    • ethin 1 hour ago
You do know that calling people who don't like AI for any reason Luddites does you no favors, right? It just makes you look like you're part of a cult.