There’s an extraordinary amount of hype around “AI” right now, perhaps even greater than in past cycles; we’ve seen an AI bubble roughly once per decade. This time the focus is on generative systems, particularly LLMs and other tools designed to produce plausible output: output that either makes people feel the response is correct, or that is good enough in domains where correctness doesn’t matter.

But we can tell the traditional tech industry (the handful of giant tech companies, along with startups backed by a handful of the most powerful venture capital firms) is in the midst of inflating another “Web3”-style froth bubble, because they’ve again abandoned one of the core values of actual technology-based advancement: reason.

  • my_hat_stinks

    Anything we’ve had before now wasn’t AI.

    This claim doesn’t hold up, simply because “AI” is a very vague term that nobody agrees on. The broadest and most literal (and possibly oldest) definition is any inorganic emulation of intelligence, which includes if statements and even purely mechanical devices. The narrowest definition is a computer with human-like intelligence, which is why some people claim LLMs are not AI.

    Saying that LLMs work differently from older AI approaches is fair. Claiming that only the latest approach counts as AI, and the older ones don’t, is questionable.