That doesn’t even begin to be a dichotomy. Unless you want to claim humans are more than Turing complete (hint: that’s not just physically but logically impossible), we can be expressed as algorithms. Including that fancy-pants feature of having an internal world, and more so being aware of having that world (a thermostat also has an internal world, but (a) it’s rather limited and (b) the thermostat doesn’t have a system to regulate its internal world; the outside world does that for it).
Wow, do you have any proof of this wild assertion? Has this ever been done before or is this simply conjecture?
a thermostat also has an internal world
No. A thermostat is an unthinking device. It has no thoughts or feelings and no “self.” In this regard it is the same as LLMs, which also have no thoughts, feelings, or “self.”
A thermostat executes actions when a human acts upon it. But it has no agency and does not think in any sense; it simply does what it was designed to do. LLMs are to language as thermostats are to controlling HVAC systems, and nothing more than that.
There is as much chance of your thermostat gaining sentience, if we give it more computing power, as there is of an LLM.
Wow, do you have any proof of this wild assertion? Has this ever been done before or is this simply conjecture?
A Turing machine can compute any computable function. For a thing to exist in the real world it has to be computable, otherwise you break cause and effect itself: the Church-Turing Thesis doesn’t really rely on anything but there being implication.
So, no, not proof. More an assertion of the type “Assuming the Universe is not dreamt up by a Boltzmann brain and causality continues to apply, …”.
A thermostat is an unthinking device.
That’s a fair assessment but beside the point: a thermostat has an internal state it can affect (the valve), which is under its control and not that of silly humans (that is, not directly), a.k.a. an internal world.
There is as much chance of your thermostat gaining sentience, if we give it more computing power, as there is of an LLM.
Also correct. But that’s because it’s a T1 system, not because the human mind can’t be expressed as an algorithm. Rocks are T0 systems and I think you’ll agree dumber than thermostats; most of what runs on our computers is a T1 system; ChatGPT and every AI we have is T2; the human mind is T3: our genes don’t merely come with instructions on how to learn (that’s ChatGPT’s training algorithm), but with instructions on learning how to learn. We’re as much more sophisticated than ChatGPT, for an appropriate notion of “sophisticated”, as thermostats are more sophisticated than rocks.
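If code helps, here’s a toy sketch of how I read that ladder. This is my own illustration, not the paper’s formal definition of an adaptive traverse, and every name in it is made up:

```python
# T1: a fixed rule acting on internal state it controls (the thermostat's valve).
class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.valve_open = False          # internal state under its own control

    def step(self, temperature: float) -> bool:
        self.valve_open = temperature < self.setpoint
        return self.valve_open


# T2: a fixed learning rule that adjusts the acting rule (roughly: a trained model).
class Learner:
    def __init__(self, learning_rate: float = 0.1):
        self.weight = 0.0
        self.learning_rate = learning_rate

    def step(self, x: float, target: float) -> float:
        error = target - self.weight * x
        self.weight += self.learning_rate * error * x    # gradient-style update
        return error


# T3: also adjusts *how* it learns (learning how to learn).
class MetaLearner(Learner):
    def __init__(self, learning_rate: float = 0.1):
        super().__init__(learning_rate)
        self.last_error = 0.0

    def step(self, x: float, target: float) -> float:
        error = super().step(x, target)
        # Crude meta-rule: a sign flip in the error means we overshot, so shrink
        # the learning rate; otherwise grow it a little.
        if error * self.last_error < 0:
            self.learning_rate *= 0.5
        else:
            self.learning_rate *= 1.1
        self.last_error = error
        return error
```

The only point of the toy is the shape of the ladder: the thermostat never changes its rule, the learner changes its rule by a fixed recipe, and the meta-learner also changes the recipe.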
That’s a fair assessment but beside the point: a thermostat has an internal state it can affect (the valve), which is under its control and not that of silly humans (that is, not directly), a.k.a. an internal world.
I apologize if I was unclear when I spoke of an internal world. I meant interior thoughts and feelings. I think most people would agree sentience is predicated on the idea that the sentient object has some combination of its own emotions, motivations, desires, and ability to experience the world.
LLMs have as much of that as a thermostat does; that is, zero. An LLM is a word-completion algorithm and nothing more.
Your paper doesn’t bother to define what these T-systems are so I can’t speak to your categorization. But I think rating the mental abilities of thermostats versus computers versus ChatGPT versus human minds is totally absurd. They aren’t on the same scale, they’re different kinds of things. Human minds have actual sentience. Everything else in that list is a device, created by humans, to do a specific task and nothing more. None of them are anything more than that.
Your paper doesn’t bother to define what these T-systems are
Have a look here. The key concept is the adaptive traverse; “Tn-system” then means “a system with that many traverses”. What I meant with my comparison there is simply that a rock has one traverse fewer than a thermostat, and ChatGPT has one traverse fewer than us.
They aren’t on the same scale, they’re different kinds of things.
Addition, multiplication, and exponentiation are all on the same scale, yet they’re different things. Regarding the number of traverses, it’s absolutely fair to say it’s a scale of quality, not quantity.
Human minds have actual sentience.
Sentience as in processing the environment while processing your processing of that environment? Yep, that sounds like a T3 system. Going out a bit on a limb: during deep sleep we regress to T2, while dreams are a funky “let’s pretend our conditioning/memory is the environment” state. Arachnids apparently can do it, and all mammals definitely can. Insects seem to be T2, from the POV of my non-biologist ass.
Everything else in that list is a device, created by humans, to do a specific task and nothing more.
You are a device created by evolution to figure out whether your genes are adaptive enough to their surroundings to reproduce.
I’m giving up here, but evolution did not “design” us. LLMs are designed, created with a purpose in mind, and they fulfill that purpose. Humans were not designed.
In cybernetics that’s irrelevant, as the purpose of a system is what it does. I can design an algorithm that plays pong, or I can write a program to evolve one; they might actually end up being identical and no one could tell.
It’s not at all irrelevant. Even if you create a program to evolve pong, that program was also designed by a human. As a computer programmer you should know that no computer program will just become pong; what an idiotic idea.
You just keep pivoting from how you were using words to having them mean something entirely different; this entire argument is worthless. At least LLMs don’t change the definitions of the words they use as they use them.
Playing pong: inputs are the ball (and possibly enemy) position; the output is paddle left or right. Something like NEAT will very quickly come up with the obvious “track the ball” approach, using just as many AST nodes as you would.
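For the code-minded, here’s roughly what I mean. This is a toy sketch of my own, not NEAT itself, and every name in it is made up: a hand-written controller, plus a dumb hill-climbing search over one-weight controllers that in practice lands on the same “track the ball” behaviour:

```python
import random

# Hand-written controller: move the paddle toward the ball.
# -1 = move left, +1 = move right, 0 = stay put.
def track_ball(ball_x: float, paddle_x: float) -> int:
    if ball_x < paddle_x:
        return -1
    if ball_x > paddle_x:
        return 1
    return 0

# Candidate "evolved" controller: one weight deciding how to react to the
# ball-paddle offset. Any positive weight gives the same behaviour as track_ball.
def make_policy(weight: float):
    def policy(ball_x: float, paddle_x: float) -> int:
        signal = weight * (ball_x - paddle_x)
        return (signal > 0) - (signal < 0)   # sign of the signal
    return policy

# Fitness: drop a ball at a random x, give the paddle 40 steps of 0.05 each
# to get under it, score a point if it ends up close enough.
def fitness(policy, trials: int = 200) -> int:
    caught = 0
    for _ in range(trials):
        ball_x, paddle_x = random.random(), random.random()
        for _ in range(40):
            paddle_x += 0.05 * policy(ball_x, paddle_x)
        caught += abs(ball_x - paddle_x) < 0.05
    return caught

# Crude hill climbing as a stand-in for NEAT: mutate the weight, keep improvements.
best = random.uniform(-1.0, 1.0)
best_score = fitness(make_policy(best))
for _ in range(50):
    candidate = best + random.gauss(0.0, 0.5)
    score = fitness(make_policy(candidate))
    if score > best_score:
        best, best_score = candidate, score

print(f"evolved weight {best:+.2f}, catches {best_score}/200")
```

Run it a few times: whatever positive weight the search settles on is behaviourally indistinguishable from the hand-written version, which is exactly the “purpose of a system is what it does” point.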