This is the potential development in AI I’m most interested in. So naturally, I tested this when I first used ChatGPT. In classic ChatGPT fashion, when asked to make a directed acyclic graph representing cause and effect, it could interpret the request well enough to make a simple graph…but got the cause-and-effect flow wrong for something as simple as lighting a fire. I haven’t tried it again with GPT-4, though.

  • 𝕊𝕚𝕤𝕪𝕡𝕙𝕖𝕒𝕟M

    This is a thought-provoking article; thank you for sharing it. One paragraph that particularly stood out to me discusses the limitations of AI in dealing with rare events:

    The ability to imagine different scenarios could also help to overcome some of the limitations of existing AI, such as the difficulty of reacting to rare events. By definition, Bengio says, rare events show up only sparsely, if at all, in the data that a system is trained on, so the AI can’t learn about them. A person driving a car can imagine an occurrence they’ve never seen, such as a small plane landing on the road, and use their understanding of how things work to devise potential strategies to deal with that specific eventuality. A self-driving car without the capability for causal reasoning, however, could at best default to a generic response for an object in the road. By using counterfactuals to learn rules for how things work, cars could be better prepared for rare events. Working from causal rules rather than a list of previous examples ultimately makes the system more versatile.
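
    To make that contrast concrete, here’s a toy sketch in Python (entirely my own illustration; the lookup table, policy names, and the plane-on-the-road case from the quote are stand-ins, not anything from the article or a real driving system):

    ```python
    # Example-based policy: can only respond to obstacles seen in training.
    training_examples = {
        "pedestrian": "brake",
        "deer": "brake",
        "traffic cone": "steer around",
    }

    def lookup_policy(obstacle: str) -> str:
        # Rare events aren't in the table, so we fall back to a generic response.
        return training_examples.get(obstacle, "generic response")

    # Rule-based policy: applies a causal rule ("an object blocking the lane
    # causes a collision if we keep going") to objects it has never seen.
    def rule_policy(obstacle_blocks_lane: bool) -> str:
        return "brake and steer clear" if obstacle_blocks_lane else "continue"

    print(lookup_policy("small plane"))            # -> generic response
    print(rule_policy(obstacle_blocks_lane=True))  # -> brake and steer clear
    ```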

    On a different note, I asked GPT-4 to visualize the cause and effect flow for lighting a fire. It isn’t super detailed, but it isn’t wrong either:

    [image: GPT-4’s cause-and-effect graph for lighting a fire]

    (Though I think being able to draw a graph like this correctly and actually understanding causality aren’t necessarily related.)
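
    For anyone who wants to poke at this in text form, here’s a minimal sketch (assuming the networkx library; the node labels are illustrative rather than an exact copy of the image) of how a cause-and-effect DAG like this can be encoded and sanity-checked:

    ```python
    import networkx as nx

    # Directed edges point from cause to effect.
    g = nx.DiGraph()
    g.add_edges_from([
        ("prepare tinder", "light fire"),
        ("prepare kindling", "light fire"),
        ("prepare fuel wood", "light fire"),
        ("light fire", "maintain fire"),
    ])

    # A causal diagram must be acyclic; a topological order gives one
    # valid sequence of actions.
    assert nx.is_directed_acyclic_graph(g)
    print(list(nx.topological_sort(g)))
    ```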

    If you tell me the original prompts you used, we can test them in GPT-4 and see how well it performs.

    • babelspace@kbin.social (OP)

      That’s definitely a better graph visually. (The image capability is cool; the graph I got on the earlier model was in text form.) But I think it is wrong: the “prepare tinder”, “prepare kindling”, and “prepare fuel wood” nodes are all redundant with each other. Plus there’s a direct link from “prepare tinder wood” to “maintain fire”; if this is a causal diagram indicating the sequence of actions a person needs to take, “prepare wood” should link to “light fire” instead. I don’t have a record of the exact prompts I was using, but I was working more with the fact that oxygen, fuel, and heat are all necessary but independent preconditions for a fire to start.
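
      A minimal sketch of that structure as a toy boolean model (the variable and function names are mine, not from the original prompts):

      ```python
      # Fire triangle: oxygen, fuel, and heat are independent preconditions,
      # and fire occurs only when all three hold.
      def fire(oxygen: bool, fuel: bool, heat: bool) -> bool:
          return oxygen and fuel and heat

      # Knock out each precondition in turn and confirm the fire goes out:
      # each one is necessary, and none alone is sufficient.
      baseline = dict(oxygen=True, fuel=True, heat=True)
      assert fire(**baseline)
      for cause in baseline:
          assert not fire(**{**baseline, cause: False}), f"{cause} is necessary"
      print("each of oxygen, fuel, and heat is independently necessary")
      ```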