The best clue might come from a 2022 paper written by the Anthropic team back when their startup was just a year old. They warned that the incentives in the AI industry — think profit and prestige — will push companies to “deploy large generative models despite high uncertainty about the full extent of what these models are capable of.” They argued that, if we want safe AI, the industry’s underlying incentive structure needs to change.

Well, at three years old, Anthropic is now the age of a toddler, and it’s experiencing many of the same growing pains that afflicted its older sibling OpenAI. In some ways, they’re the same tensions that have plagued all Silicon Valley tech startups that start out with a “don’t be evil” philosophy. Now, though, the tensions are turbocharged.

An AI company may want to build safe systems, but in such a hype-filled industry, it faces enormous pressure to be first out of the gate. The company needs to pull in investors to supply the gargantuan sums of money needed to build top AI models, and to do that, it needs to satisfy them by showing a path to huge profits. Oh, and the stakes — should the tech go wrong — are much higher than with almost any previous technology.

So a company like Anthropic has to wrestle with deep internal contradictions, and ultimately faces an existential question: Is it even possible to run an AI company that advances the state of the art while also truly prioritizing ethics and safety?

“I don’t think it’s possible,” futurist Amy Webb, the CEO of the Future Today Institute, told me a few months ago.

  • t3rmit3@beehaw.org

    Not every safety control needs to solve every safety issue. Almost all safety controls are narrowly-tailored to one threat model. You’re essentially just arguing that if a safety control doesn’t solve everything, it’s not worth it.

    LLMs being a tool that is so widely available is precisely why they need more built-in safety. The more dangerous a tool is, the more likely it is to be restricted to only professional or otherwise licensed users or businesses. Arguing against safety controls being built into LLMs is just going to accelerate their regulation.

    Whether you agree with that mentality or not, we live in a Statist world, and protection of its constituent people from themselves and others is the (ostensible) primary function of a State.

    • MagicShel

      Not exactly. My argument is that the more safety controls you build into the model, the less useful the model is at anything. The more you bend the responses away from the truth (whatever that is), the less of a tool you have.

      > Whether you agree with that mentality or not, we live in a Statist world, and protection of its constituent people from themselves and others is the (ostensible) primary function of a State.

      Yeah, I agree with that, but I’m saying protect people from the misuse of the tool. Don’t break the tool to the point where it’s worthless.