This is the potential development in AI I’m most interested in. So naturally, I tested this when I first used ChatGPT. In classic ChatGPT fashion, when asked to make a directed acyclic graph representing cause and effect, it could interpret the request well enough to produce a simple graph…but got the cause-and-effect flow wrong for something as simple as lighting a fire. I haven’t tried it again with GPT-4, though.
This is a thought-provoking article; thank you for sharing it. One paragraph that particularly stood out to me discusses the limitations of AI in dealing with rare events:
On a different note, I asked GPT-4 to visualize the cause-and-effect flow for lighting a fire. It isn’t super detailed, but it isn’t wrong either:
(Though I think being able to draw a graph like this correctly and actually understanding causality aren’t necessarily related.)
If you tell me the original prompts you used, we can test them in GPT-4 and see how well it performs.
That’s definitely a better graph visually. (The image capability is cool; the graph I got on the earlier model was in text form.) But I think it is wrong: the “prepare (tinder, kindling, and fuel wood)” nodes are all redundant with each other. There’s also a direct link from “prepare tinder wood” to “maintain fire”; if this is a causal diagram indicating the sequence of actions a person needs to take, “prepare wood” should link to “light fire”. I don’t have a record of the exact prompts I was using, but I was working more with the fact that oxygen, fuel, and heat are all necessary but independent preconditions for a fire to start.
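For what it’s worth, here’s a minimal sketch of the structure I had in mind, encoded with Python’s networkx (my choice of tool, and the node names are just illustrative, not from any prompt I actually used): oxygen, fuel, and heat feed independently into “light fire”, which in turn feeds “maintain fire”.

```python
import networkx as nx

g = nx.DiGraph()

# Oxygen, fuel, and heat are independent preconditions: each is
# necessary for the fire to start, but none causes the others.
g.add_edges_from([
    ("oxygen present", "light fire"),
    ("prepare fuel (tinder, kindling, fuel wood)", "light fire"),
    ("apply heat (spark or match)", "light fire"),
    # Maintaining the fire depends on the fire existing,
    # not directly on any single preparation step.
    ("light fire", "maintain fire"),
])

# Sanity-check that the graph really is a DAG, then print one
# valid ordering of the steps.
assert nx.is_directed_acyclic_graph(g)
print(list(nx.topological_sort(g)))
```

Nothing deep here, just that in this encoding no “prepare” node links straight to “maintain fire”, which was my complaint about the generated graph.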