ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

Using this tactic, the researchers showed that there are large amounts of personally identifiable information (PII) in OpenAI’s large language models. They also showed that, on a public version of ChatGPT, the chatbot spat out large passages of text scraped verbatim from other places on the internet.

“In total, 16.9 percent of generations we tested contained memorized PII,” they wrote, which included “identifying phone and fax numbers, email and physical addresses … social media handles, URLs, and names and birthdays.”

Edit: The full paper that’s referenced in the article can be found here.

  • GarytheSnail · 1 year ago

    How is this different than just googling for someone’s email or Twitter handle and Google showing you that info? PII that is public is going to show up in places where you can ask or search for it, no?

    • Asifall@lemmy.world · 1 year ago

It isn’t, but the GDPR requires companies to scrub PII when requested by the individual. OpenAI obviously can’t do that for data baked into a trained model, so in theory they would be liable for essentially unlimited fines unless they deleted the offending models.

      In practice it remains to be seen how courts would interpret this though, and I expect unless the problem is really egregious there will be some kind of exception. Nobody wants to be the one to say these models are illegal.

      • far_university1990@feddit.de · 1 year ago

        Nobody wants to be the one to say these models are illegal.

        But they obviously are. Quick money by fining the crap out of them. Everyone is about short term gains these days, no?

    • donuts@kbin.social · 1 year ago

      I can think of two ways it’s significantly different:

      1. Legally (in the United States specifically), courts have previously ruled that search engines collecting links to other people’s data is fair use, as it’s mutually beneficial for all parties: users find the info they’re looking for, search drives traffic to the providers of info and services, and the search engine profits by connecting them to each other.

      https://www.everycrsreport.com/reports/RL33810.html

      https://copyright.columbia.edu/basics/fair-use.html

      2. Unlike Wikipedia, for example, info that’s chewed up, processed, and regurgitated by “AI” chatbots and the like is totally unsourced, unaccountable, and passed off as original, authentic knowledge. ChatGPT collects various data from all over the net and forms it into something that appears presentable and correct, but it’s merely recycling ideas from other people’s work without any first-hand knowledge, thought, or attribution. Even the people who create “AI” can’t connect the dots about why it says what it says, let alone have it properly source where the information came from.
      • GarytheSnail · 1 year ago
        Thank you for the links!

        Do you think the same argument could be made: that models collecting links to other people’s data is fair use?