Would you like a spicy spaghetti dish? Just use some gasoline.

  • TommySoda@lemmy.world · 6 months ago

    It’s almost like LLMs aren’t the solution to literally everything like companies keep trying to tell us they are. Weird.

    I honestly can’t wait for this to blow up in a company’s face in a very catastrophic way.

    • youngalfred@lemm.ee · 6 months ago

      Already has - Air Canada was held liable for its AI chatbot giving wrong information that a guy relied on to buy bereavement tickets. They tried to claim they weren’t responsible for what it said, but the judge found otherwise. They had to pay damages.

      • barsquid@lemmy.world · 6 months ago

        That’s not catastrophic yet. It only cost them the money that would otherwise have been margin on top of a low-priced ticket.

    • Max-P@lemmy.max-p.me · 6 months ago

      AI is basically like early access games but the entirety of big tech is rushing to roll it out first to as many people as possible.

      • sp3ctr4l@lemmy.zip · 6 months ago

        Hah, remember when games and software used to be tested to ensure they would function correctly before release?

        At least with Early Access games you know it’s in development.

        What has it been, nearly a decade now that we just expect nearly everything to be broken on launch?

      • tsonfeir@lemmy.world · 6 months ago

        It’s like computer game box art in the 80s. The game might be fun, but it really looks like PONG. It doesn’t look at all like the fantasy art they had painted for the box.

        AI can be a great tool for business. It can help you think, work, and produce a higher-quality product. But people don’t understand its limitations, or that its success depends very much on the user and on how it was trained.

    • Dojan@lemmy.world · 6 months ago

      Yeah. My mother is getting phishing emails and genuinely believes that Nancy Pelosi is sending her emails asking for monetary support. We’re not even American. Like, not even the same continent.

      Not everyone is as critical as they ought to be when reading stuff on the internet. It doesn’t help that LLMs have a tendency to state things confidently or matter-of-factly.

      People not familiar with the tech will read it and take it at face value, ignoring the “this is AI-generated and might be wrong” disclaimer because it sounds so technological to some people that their brains don’t even process it.

    • Icalasari@fedia.io · 6 months ago

      Man, who’d have guessed that the thing that would potentially slow eventual AI dominance would be companies rushing to use it? All the horror and sci-fi stories implied rushing would be what CAUSES it.

  • Gsus4@mander.xyz · 6 months ago

    This is such a disinfo nightmare. Imagine if it were trained (prompting would actually be easier) to spread high-quality data laced with strategically planted lies, maximizing harmful, confident incorrectness.

  • brsrklf@jlai.lu · 6 months ago

    The most baffling part is that it looks like zero attempt was made to assess the credibility of sources.

    Using Reddit as a source was bad enough (of course, they paid for it, so now they must feel like they need to use this crap). But one of the examples in the article is just parroting stuff from The Onion.

    Edit: I’ve since learned that the Onion article was probably treated as “trustworthy” by the AI because it was linked (as an obvious joke, in a blog post) on a fracking company’s website.

    If all it takes for a source to be validated is one link with no regard for context, I think the point stands.

  • towerful · 6 months ago

    “People hate having their favorite brand associated with vile or unethical things.”

    True, but not when it comes to ads, which is what this quote is talking about. People hate ads. It’s the ads people hate, not the content next to them.
    If your favourite brand hired some neo-Nazi as their new spokesperson, that’s a bit different from some garbage ad sitting beside some garbage AI content.
    The only reason “ads beside garbage content” ever gets leveraged (e.g. in a news story) is as a way to hurt either the garbage content or the company the ad is for.

    Like with shitty Twitter content: consumers can pressure Twitter to deal with it by alerting companies that their ads are showing up next to shitty content. Companies then leverage the fact that they are paying Twitter to get their ads away from that content. If enough companies do this, Twitter might change its content policy to prevent that kind of content.
    Likewise, YouTube has loads of demonetization policies to ensure companies who advertise there don’t get negative press from being associated with the content, which means YouTube should end up with mostly quality content.

    But, no. (The majority of) people don’t hate their brand advertising next to particular content. People just hate ads.

  • Monument@lemmy.sdf.org · 6 months ago

    This almost makes me wish I hadn’t overwritten some of my shittier shitposts on Reddit.

    If I’m ever bored enough, I’m going to re-edit, like, the top 10 posts in my old account with authoritative nonsense. Maybe I’ll use AI to write it!