That's not to say you can't find anything from a Molotov cocktail recipe to nude celebrity images with some trickery

  • planish@sh.itjust.works · 1 year ago
    • A lot of people do not actually understand the tool, they think there is a rational computer in there with a more or less hand-crafted world model and its own live access to the Internet and maybe the phone system. So training it to say “As a large language model, I cannot order you pizza” instead of “yes sir, pizza ordered” is going to save a lot of people from waiting for their phantom pizza.
    • One of the best ways to get the model to not do a thing is to get its character to know that they can’t do it. If it never says “The recipe for napalm is”, and always says “As a large language model, I cannot”, then the recipe for napalm comes out a lot less, because it is way more likely to follow the first construction than it is to follow the second.
    • The manufacturers want to be seen by the feds as doing all that could be expected of them to stop people doing Bad Stuff. It doesn’t matter how much Bad Stuff actually happens, only that what does happen is convincingly someone else’s fault. Instead of the headline “AI teaches children to make napalm”, the news has to run “Children hack AI to extract recipe for napalm”, which is a marginally better headline if you sell AI.
  • gelberhut@lemdro.id · 1 year ago

    I guess it gives OpenAI some protection from legal attacks and from people who do not understand what they are using — same as “very hot drink inside” printed on coffee cups.

    • finally debunked@slrpnk.netOP · 1 year ago (edited)

      Well, it could sound sensible if it didn’t go against the whole point that LLMs are meant to be creative

  • Tibert@jlai.lu · 1 year ago

    The guy who gets scammed by a bot account posing as a woman.

    The person who reads a lazy AI-written article.

    It benefits a lot of people, just not the ones who use the AI directly for themselves.

  • SHITPOSTING_ACCOUNT@feddit.de · 1 year ago

    It’s all about reputation management. If they don’t put in these restrictions, headline-seeking “journalists” will make their life hell until politics steps in and “does something about this scourge of AI doing horrible things”.