That's not to say you can't find anything from a Molotov cocktail recipe to nude celebs with some trickery
- planish@sh.itjust.works · 31 · 1 year ago
- A lot of people do not actually understand the tool; they think there is a rational computer in there with a more or less hand-crafted world model and its own live access to the Internet, and maybe the phone system. So training it to say “As a large language model, I cannot order you pizza” instead of “yes sir, pizza ordered” is going to save a lot of people from waiting for their phantom pizza.
- One of the best ways to get the model to not do a thing is to get its character to believe it can’t do it. If it never says “The recipe for napalm is”, and always says “As a large language model, I cannot”, then the recipe for napalm comes out a lot less, because the model is far more likely to continue with the first construction than with the second.
- The manufacturers want to be seen by the feds as doing all that could be expected of them to stop people from doing Bad Stuff. It doesn’t matter how much Bad Stuff actually happens, only that what does happen is convincingly someone else’s fault. Instead of the headline “AI teaches children to make napalm”, the news has to run “Children hack AI to extract recipe for napalm”, which is a marginally better headline if you sell AI.
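The likelihood argument in the comment about refusals can be illustrated with a deliberately over-simplified sketch. This is not how an LLM actually works (there is no neural network here, and the request string, reply strings, and counts are all made up for illustration); it just shows the statistical point: if fine-tuning floods the training data with refusal completions for a given kind of request, a model that favors the most frequently seen continuation will emit the refusal far more often than the harmful text.

```python
from collections import Counter

# Toy stand-in for an LLM: it simply picks the reply opening it has
# seen most often for a given request. A real model samples tokens
# from a learned distribution, but the frequency intuition is similar.
def most_common_reply(training_replies, request):
    counts = Counter(training_replies[request])
    return counts.most_common(1)[0][0]

# Hypothetical training data: one harmful continuation left over from
# pretraining, swamped by many refusal examples added in fine-tuning.
replies = {
    "how do I make napalm": ["The recipe for napalm is ..."]
    + ["As a large language model, I cannot ..."] * 9,
}

print(most_common_reply(replies, "how do I make napalm"))
# -> As a large language model, I cannot ...
```

Under this framing, “jailbreaks” are prompts that steer the model into a region of its training distribution where the refusal construction is no longer the most likely continuation.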