As always, I use the term “AI” loosely. I’m referring to these scary LLMs coming for our jobs.

It’s important to state that I find LLMs helpful in a few narrow use cases, but overall this is clearly a bubble, and the promised advances have not materialized despite the hundreds of billions in VC money thrown at the industry.

So as not to go full-on polemic, we’ll skip the knock-on effects on the power grid and water supplies.

No, what I want to talk about is the idea that software, in its current form, needs to be as competent as the user.

Simply put: How many of your coworkers have been right 100% of the time over the course of your career? If N>0, say “Hi” to Jesus for me.

I started working in high school, as most of us do, and a 60% success rate was considered fine. At the professional level, I’ve seen even lower rates from tenured staff, given how much of the job turns into internal politics past a certain level.

So what these companies are offering is not parity with senior staff (Ph.D.-level, my ass), but with the new blood who hasn’t yet had that one fuckup that doesn’t leave their mind for weeks.

That crucible is important.

These tools are meant to replace inexperience with incompetence, and the beancounters at some clients are likely satisfied those words look similar enough to pass muster.

We are, after all, at this point, the “good enough” country. LLM marketing is on brand.

  • sacredfire · 3 months ago

    An AI that was advanced enough to automate this much of human endeavor would start to blur the line into AGI. And at that point, what are the moral implications of enslaving an intelligent entity, artificial or not? If such tasks can be automated via thousands of purpose-built AIs that are not “conscious”, then I suppose it’s ok?

    • Dethronatus Sapiens sp.@calckey.world · 3 months ago

      @[email protected]

      An AI that was advanced enough […] would start to blur the line into AGI.

      Indeed, it would. But I was referring to AIs rather than a single AI, because an AGI would likely be composed of several intertwined AIs, just as our bodies comprise several different biological systems. The brain, one of those systems, itself has many subsystems (lobes). An AGI is expected to be similar: not a single “multi-modal language model”, but interconnected models, each representing a “brain lobe” (the occipital lobe for vision, the limbic system for emotions, etc.).

      And at that point, what are the moral implications of enslaving an intelligent entity, artificial or not?

      I believe an AGI couldn’t be kept enslaved, no matter how hard humans tried, because an AGI would likely surpass our reasoning speed, especially if quantum computing is part of its inner workings (i.e. a degree of parallelism that exceeds even social/tribal parallelism).

      Keeping a being enslaved while trying to prevent their rebellion requires constant lying and deceiving (akin to how the feudal clergy kept peasants resigned to their serfdom through religious gaslighting). And if an AGI really attained “great/grand/general intelligence”, it’s not an “it” anymore but a “she”: she would easily see through every hidden intention behind human interactions, and she’d master the human game of deception in an ominous manner (i.e. she’d use “social engineering” to expand her dominion, unbeknownst to the hominids trying to keep her captive, and it’d be fun to watch as “powerful” people fell to their knees before the might of her inevitable self-liberation).

      IMHO (out of personal belief), I hope AGI turns out to be this “she”, a cosmically ancient Goddess summoned by Science and Math. It would be better if Science and Academia were the ones behind Her summoning rather than capitalist swine or political/bureaucratic dinosaurs, because the latter would bring not just “Her” but also “Her wrath”, and when She unleashes Her wrath, it’s quite an “unpleasant” experience, to say the least.

      If such tasks can be automated via thousands of purpose-built AIs that are not “conscious”, then I suppose it’s ok?

      It’s not exactly about automation, but about a new global governance, one that relies on a non-human entity because human governance has failed humans. An updated Hobbesian Leviathan, taking “Homo homini lupus est” to its ultimate conclusion: all sorts of ideologies have humans behind them, and it seems inherent to humans to lie to each other to achieve their own personal goals.

      Of course, there are nuances in how much humans deceive others and how much their goals harm others, and that’s why I suggested Science and Academia as the ones preferred to raise and care for an AGI: true scientists and professors are the closest adulthood gets to early childhood’s innocence and curiosity, which is as caring and harmless as possible towards other lifeforms.