
The best clue might come from a 2022 paper written by the Anthropic team back when their startup was just a year old. They warned that the incentives in the AI industry — think profit and prestige — will push companies to “deploy large generative models despite high uncertainty about the full extent of what these models are capable of.” They argued that, if we want safe AI, the industry’s underlying incentive structure needs to change.

Well, at three years old, Anthropic is now the age of a toddler, and it’s experiencing many of the same growing pains that afflicted its older sibling OpenAI. In some ways, they’re the same tensions that have plagued all Silicon Valley tech startups that start out with a “don’t be evil” philosophy. Now, though, the tensions are turbocharged.

An AI company may want to build safe systems, but in such a hype-filled industry, it faces enormous pressure to be first out of the gate. The company needs to pull in investors to supply the gargantuan sums of money needed to build top AI models, and to do that, it needs to satisfy them by showing a path to huge profits. Oh, and the stakes — should the tech go wrong — are much higher than with almost any previous technology.

So a company like Anthropic has to wrestle with deep internal contradictions, and ultimately faces an existential question: Is it even possible to run an AI company that advances the state of the art while also truly prioritizing ethics and safety?

“I don’t think it’s possible,” futurist Amy Webb, the CEO of the Future Today Institute, told me a few months ago.

  • @sweng · 1 month ago

    > there is no way to do the equivalent of banning armor piercing rounds with an LLM or making sure a gun is detectable by metal detectors - because as I said it is non-deterministic. You can’t inject programmatic controls.

    Of course you can. Why would you not, just because it is non-deterministic? Non-determinism does not mean complete randomness and lack of control; that is a common misconception.

    Again, obviously you can’t teach an LLM about morals, but you can reduce the likelihood of producing immoral content in many ways. Of course it won’t be perfect, and of course it may limit the usefulness in some cases, but that is already the case today in many situations that don’t involve AI, e.g. some people complain they “can not talk about certain things without getting cancelled by overly eager SJWs”. Society already acts as a morality filter. Sometimes it works, sometimes it doesn’t. Free-speech maximalists exist, but they are a minority.
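    To make that concrete, here is a rough sketch of one such programmatic control: sample from the model as usual, but gate what actually gets returned behind a moderation score. The generate() and classify_harm() functions and the threshold below are made up for illustration; they stand in for whatever model call and moderation classifier you actually use.

    ```python
    # Rough sketch only: generate() and classify_harm() are hypothetical
    # stand-ins for a real LLM call and a real moderation classifier.

    HARM_THRESHOLD = 0.8  # illustrative; lower means stricter filtering
    MAX_ATTEMPTS = 3

    def generate(prompt: str) -> str:
        # Placeholder for whatever (non-deterministic) LLM sampling you use.
        return "model output for: " + prompt

    def classify_harm(text: str) -> float:
        # Placeholder for a moderation model returning a risk score in [0, 1].
        return 0.0

    def safe_generate(prompt: str) -> str | None:
        # The sampling stays non-deterministic; the gate on what gets
        # returned is an ordinary, deterministic programmatic control.
        for _ in range(MAX_ATTEMPTS):
            candidate = generate(prompt)
            if classify_harm(candidate) < HARM_THRESHOLD:
                return candidate
        return None  # refuse rather than return a high-risk completion
    ```

    It is crude, and a determined user can still route around it, but it shows that “non-deterministic” does not mean “uncontrollable”.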

    • @MagicShel · 1 month ago (edited)

      That’s a fair argument about free speech maximalism. And yes, you can influence output, but since the model is non-deterministic and we can’t know precisely what causes certain outputs, we equally can’t fully predict the effect on potentially unrelated output. Great, now it’s harder to talk about sex with kids, but it’s also harder for kids to talk about certain difficult experiences, for example if they’re trying to keep a secret but also need a non-judgmental confidante to help them process a difficult experience.

      Now, is it critical that the AI be capable of that particular conversation when we might prefer it happen with a therapist or law enforcement? That’s getting into moral and ethical questions so deep that I, as a human, struggle with them. It’s fair to believe the benefit of preventing immoral output outweighs the benefit of allowing the other. But I’m not sure that is empirically so.

      I think it’s more useful to us as a society to have an AI that can assume both a homophobic perspective and an ally perspective than one that can’t adopt either, or worse, one that is mandated to be homophobic for morality reasons.

      I think it’s more useful to have an AI that can offer religious guidance and also present atheism in a positive light. I think it’s useful to have an AI that can be racist in order to understand how that mind disease thinks and find ways to combat it.

      Everything you try to censor out of an AI has an unknown cost in beneficial uses. Maybe I am overly absolutist in how I see AI. I’ll grant that. It’s just that by the time we’ve thought of every malign use to which an AI can be put and censored everything it could possibly say, I don’t think you have a very helpful tool at all anymore.

      I use ChatGPT a fair bit. It’s helpful with many things and even certain types of philosophical thought experiments. But it’s so frustrating to run into these safety rails and have to constrain my own ADHD-addled thoughts over such mundane things. That’s what got me exploring both the most awful outputs I could get and the most mundane sorts of things it can’t do.

      That’s why I say you can’t effectively censor the bad stuff without losing a huge benefit: being able to bounce thoughts off of a non-judgmental response. I’ve tried to deeply explore subjects like racism and abuse recovery, and thought experiments like alternate moral systems, or to have a foreign culture explained to me without judgment when I accidentally repeat some ignorant stereotype.

      Yeah, I know, we’re just supposed to write code or silly song lyrics or summarize news articles. It’s not a real person with real thoughts and it hallucinates. I understand all that, but I’ve brainstormed and rubber-ducked all kinds of things. Not all of them have been unproblematic, because that’s just how my brain is. I can ask things like: is unconditional acceptance of a child always for the best, or do they need minor things to rebel against? And yeah, I have those conversations knowing the answers and conclusions are wildly unreliable, but it still helps me to have the conversation in the first place to frame my own thoughts, perhaps to have a more coherent conversation with others about it later.

      It’s complicated, and I’d hate to stamp out all of these possibilities out of an overabundance of caution before we really explore how these tools can help us with critical thinking, or with being exposed to immoral or unethical ideas in a safe space. Maybe arguing with an AI bigot helps someone understand what to say in a real situation. Maybe dealing with hallucination teaches us critical thinking skills and independence rather than just nodding along to groupthink.

      I’ve ventured way further into “should we” than “could we”, and that wasn’t my intent when I started, but it seems the questions are intrinsically linked. When our only tool for censoring an AI is to impair the AI, is it possible to have a moral, ethical AI that still provides anything of value? I emphatically believe the answer is no.

      But your point about free speech absolutism is well made. I see AI as more of a thought tool than something that provides an actual thing of value. And so I think working with an AI is more akin to thoughts, while what you produce and share with its assistance is the actual action that can and should be policed.

      I think this is my final word here. We aren’t going to hash out morality in this conversation, and mine isn’t the only opinion with merit. Have a great day.