The problem is that today’s state of the art is far too good for low-hanging fruit. There isn’t a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn’t also fail, so you’re often left with weird ad hominems (“Forget what it can do and the results you see. It’s ‘just’ predicting the next token, so it means nothing”) or imaginary distinctions built on vague and ill-defined assertions (“It sure looks like reasoning, but I swear it isn’t real reasoning. What does ‘real reasoning’ even mean? Well idk, but just trust me bro”)

a bunch of posts on the orange site (including one in the linked thread with a bunch of mask-off slurs in it) are just this: techfash failing to make a convincing argument that GPT is smart. whenever it’s proven it isn’t, it’s actually that “a significant chunk of people” would make the same mistake, not the LLM they’ve bullshitted themselves into thinking is intelligent. it’s kind of amazing how often this pattern repeats in the linked thread: GPT’s perceived successes are puffed up to the highest extent possible, and its many (many, many) failings are automatically dismissed as something that only makes the model more human (even when the resulting output is unmistakably LLM bullshit)

This is quite unfair. The AI doesn’t have I/O other than what we force-feed it through an API. Who knows what will happen if we plug it into a body with senses, limbs, and reproductive capabilities? No doubt somebody is already building an MMORPG with human and AI characters to explore exactly this while we wait for cyborg part manufacturing to catch up.

drink! “what if we gave the chatbot a robot body” is my favorite promptfan cliche by far, and this one has it all! virtual reality, cyborgs, robot fucking, all my dumbass transhumanist favorites

There’s actually a cargo cult around downplaying AI.

The high-level characteristics of this AI are something we currently cannot understand.

The lack of objectivity, creativity, imagination, and outright denial you see on HN around this topic is staggering.

no, you’re all the cargo cult! I asked my cargo and it told me so

  • David Gerard@awful.systemsM

    “a significant chunk of people” would make the same mistake

    the same was literally true for ELIZA in 1964

  • unfaithful-functor@awful.systems

    There isn’t a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn’t also fail

Man, it’s so sad how this is so so so so close to the point: they could have correctly concluded that this means GI as a concept is meaningless. But no, they have to maintain their sci-fi web of belief, so they choose to believe LLMs Really Do Have A Cognitive Quality.

    • self@awful.systemsOP

      the concept of intelligence testing is so central to rats (and therefore to a big portion of HN’s poster base via cultural osmosis) that when folks like this lose their faith in GI, they tend to abandon the site as a whole

    • unfaithful-functor@awful.systems

      The next comment is so peak tech hubris to me.

      It’s “just” predicting the next token so it means nothing

      This form of argument should raise red flags for everyone. It is an argument against the possibility of emergence, that a sufficient number of simple systems cannot give rise to more complex ones. Human beings are “just” a collection of cells. Calculators are “just” a stupid electric circuit.

The fact is, putting basic components together is the only way we know how to make things. We can use those smaller components to make a more complex thing to accomplish a more complex task. And emergence is everywhere in nature as well.

This is the part of the AGI Discourse I hate: anyone can approach it with aesthetics and analogies from any field at all to make any argument about AI, and it’s just mind-grating.

      This form of argument should raise red flags for everyone. It is an argument against the possibility of emergence, that a sufficient number of simple systems cannot give rise to more complex ones. Human beings are “just” a collection of cells. Calculators are “just” a stupid electric circuit.

      I’ve never seen a non-sequitur more non. The argument is that predicting the next term is categorically not what language is. That is, it’s not that there is nothing emerging, but that what is emerging is just straight up not language.

The fact is, putting basic components together is the only way we know how to make things. We can use those smaller components to make a more complex thing to accomplish a more complex task. And emergence is everywhere in nature as well.

      “Look! This person thinks predicting the next token is not consciousness. I bet they must also not believe that humans are made of cells, or that many small things can make complex thing. I bet they also believe the soul exists and lives in the pineal gland just like old NON-SCIENCE PEOPLE.”

      • froztbyte@awful.systems

        This form of argument should raise red flags for everyone. It is an argument against the possibility of emergence, that a sufficient number of simple systems cannot give rise to more complex ones. Human beings are “just” a collection of cells. Calculators are “just” a stupid electric circuit.

        Over and above the non-sequitur already observed, this poster/posting is one of the most condensed examples of techbro Ignoring All Prior Knowledge In Related Fields Of Study that I’ve seen in a while

        must be doing heavy lines of pure uncut Innovation for this vivid a performance

        • PJ Coffey@mastodon.ie

          @froztbyte @jasperty

          Yeah, ignorance of history (things that happened more than 20 years ago) is strong in these people.

The basis of IQ tests is Spearman’s g, a “general intelligence” factor. Inventing a new branch of statistical analysis, Spearman exhaustively showed that scientists can ignore errors if they believe hard enough (The Mismeasure of Man, S.J. Gould).

As T. Gebru points out, how can you design or verify a system you can’t spec? There’s no definition of g and no evidence that it exists.

  • gerikson@awful.systems

I find it interesting that this was previously posted in May and dang has marked it as a “dupe”. An attempt to cool down the discussion? From what I can see, duplicates are OK on HN, and I’ve seldom seen one marked as such.

  • earthquake@lemm.ee

    If I posted more, I would try and get “AIcolyte” to compete with “promptfan” for dominance as the nom de sneer.

  • gerikson@awful.systems

    I’ve got the ACM piece in a tab, staring at me, challenging me not to nope out with a TL;DR. Is it worth getting into it? I’d love to have some ammo against promptfans of all stripes.

    • raktheundead@fedia.io

Basically: AI is (potentially?) useful, but LLMs require substantially more data than a human brain to do what they do, which is limited at best, and often less capable in generalised cases than a well-defined physics model. The ideas aren’t even new, having their roots in theoretical approaches from the 1940s and applied approaches from the 1980s; there’s just a lot more training data and processing power now, which makes it seem more impressive. Even if all the data in the universe were present, this would not lead to AGI, because LLMs can’t figure out the “why”.

But I don’t think there’s anything new asserted in that article if you’re familiar with the space, and the promptfans will dismiss it anyway.

    • corbin@awful.systems

      Yeah, it’s worth examining. I didn’t find any good takeaways, but I feel that they stated their case in a citation-supported manner; it looks like a decent article to throw at folks who claim that LLMs are intelligent.

    • self@awful.systemsOP

      to be honest I’m in the same boat. it’s tempting but I don’t know if I have the fortitude this week to actually engage with it