Silicon Valley is bullish on AI agents. OpenAI CEO Sam Altman said agents will “join the workforce” this year. Microsoft CEO Satya Nadella predicted that agents will replace certain knowledge work. Salesforce CEO Marc Benioff said that Salesforce’s goal is to be “the number one provider of digital labor in the world” via the company’s various “agentic” services.

But no one can seem to agree on what an AI agent is, exactly.

In the last few years, the tech industry has boldly proclaimed that AI “agents” — the latest buzzword — are going to change everything. In the same way that AI chatbots like OpenAI’s ChatGPT gave us new ways to surface information, agents will fundamentally change how we approach work, claim CEOs like Altman and Nadella.

That may be true. But it also depends on how one defines “agents,” which is no easy task. Much like other AI-related jargon (e.g. “multimodal,” “AGI,” and “AI” itself), the terms “agent” and “agentic” are becoming diluted to the point of meaninglessness.

  • Lvxferre [he/him]@mander.xyz
    2 days ago

    Etymologically “agent” is just a fancy borrowed synonym for “doer”. So an AI agent is an AI that does. Yup, it’s that vague.

    You could instead restrict the definition further and say that an AI agent does things autonomously. Then the concept is mutually exclusive with “assistant”, since an assistant does nothing on its own; it’s only there to assist someone else. And yet look at what Pathak said - that she understood the two to be interchangeable.

    …so might as well say that “agent” is simply the next buzzword, since people aren’t so excited about the concept of artificial intelligence any more. They’ve used those dumb text gens, given them either a six-fingered thumbs up or a thumbs down, and they’re generally aware that these tools don’t do a fraction of what they were believed to.

    • Baldur Nil
      22 hours ago

      …so might as well say that “agent” is simply the next buzzword, since people aren’t so excited with the concept of artificial intelligence any more

      This is exactly the reason for the emphasis on it.

      The reality is that LLMs are impressive and nice to play with. But investors want to know where the big money will come from, and for companies, LLMs aren’t that useful in their current state. I think one of their biggest uses is extracting information from documents with lots of text.

      So “agents” are supposed to be LLMs executing actions instead of just outputting text (such as calling APIs). Which doesn’t seem like the best idea, considering they’re not great at all at making decisions—even though these companies try to paint them as capable of it.
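
      The loop the comment describes can be sketched in a few lines: the model’s output is parsed as a proposed action and executed, rather than shown to the user as text. This is a minimal illustrative sketch, not any vendor’s actual agent framework; the `fake_llm` stub, the tool names, and the JSON action format are all invented for the example.

      ```python
      import json

      # Hypothetical tool registry: names the "model" may ask to invoke.
      TOOLS = {
          "get_weather": lambda city: f"Sunny in {city}",
      }

      def fake_llm(prompt: str) -> str:
          # Stand-in for a real LLM call; a real model would generate this JSON.
          return json.dumps({"tool": "get_weather", "args": {"city": "Oslo"}})

      def run_agent(prompt: str) -> str:
          reply = fake_llm(prompt)
          try:
              action = json.loads(reply)       # parse the model's proposed action
              tool = TOOLS[action["tool"]]     # look up the requested tool
              return tool(**action["args"])    # execute it -- the "agentic" step
          except (json.JSONDecodeError, KeyError, TypeError):
              return reply                     # no valid action: fall back to plain text

      print(run_agent("What's the weather in Oslo?"))  # → Sunny in Oslo
      ```

      The decision-making risk the comment points at lives in that `tool(**action["args"])` line: whatever the model emits gets executed, so a wrong or hallucinated action has real side effects instead of just being bad text.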