I don’t see why the example requiring training for humans to understand is unfortunate.
Humans aren’t innately good at math. I wouldn’t have been able to prove the statement without looking things up. I certainly would not be able to come up with the Peano Axioms, or anything comparable, on my own. Most people, even educated people, probably wouldn’t understand what there is to prove. Actually, I’m not sure if I do.
It’s not clear why such deficiencies among humans do not argue against human consciousness.
A leading AI has far more training than would ever be possible for any human, yet it still doesn’t grasp basic concepts, even though its knowledge is far greater than any human’s.
That’s dubious. LLMs are trained on more text than a human ever sees, but humans are trained on data from several senses. I guess it’s not entirely clear how much data that is, but it’s a lot and very high quality. Humans are trained on that sense data and not on text. Humans read text and may learn from it.
Being conscious is not just to know what the words mean, but to understand what they mean.
Just because you can’t make a mathematical proof doesn’t mean you don’t understand the very simple truth of the statement.
If I can’t prove it, I don’t know how I can claim to understand it.
It’s axiomatic that equality is symmetric. It’s also axiomatic that 1+1=2. There is not a whole lot to understand. I have memorized that. Actually, having now thought about this for a bit, I think I can prove it.
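For what it’s worth, here is roughly what such a proof looks like when written down formally: a minimal sketch in Lean 4, defining Peano-style naturals from scratch (the names PNat, add, one, and two are invented for this illustration, and the statement proved is 1+1=2 rather than whatever the original prompt asked).

```lean
-- Peano-style natural numbers: a zero, and a successor for every number.
inductive PNat where
  | zero : PNat
  | succ : PNat → PNat

open PNat

-- Addition by recursion on the second argument, mirroring the Peano axioms:
--   a + 0 = a        and        a + succ b = succ (a + b)
def add : PNat → PNat → PNat
  | a, zero   => a
  | a, succ b => succ (add a b)

def one : PNat := succ zero
def two : PNat := succ one

-- "1 + 1 = 2" then holds simply by unfolding the definitions.
example : add one one = two := rfl
```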
What makes the difference between a human learning these things and an AI being trained for them?
I think if I could describe that, I might actually have solved the problem of strong AI.
Then how will you know the difference between strong AI and not-strong AI?
I’ve already stated that that is a problem:
From a previous answer to you:
Obviously the Turing test doesn’t cut it, which I already suspected back then. And I’m sure that when we finally have a self-aware, conscious AI, it will be debated violently.
Because I don’t think we have a sure methodology.
“I think, therefore I am” is only good for the conscious mind itself.
I can’t prove that other people are conscious, although I’m 100% confident they are.
In exactly the same way we can’t prove when we have a conscious AI.
But we may be able to prove that it is NOT conscious, which I think is clearly the case with current-level AI. Although you don’t accept the example I provided, I believe it is clear evidence of a lack of consciousness behind the high level of intelligence it clearly has.
Strong AI or AGI, or whatever you will, is usually talked about in terms of intellectual ability. It’s not quite clear why this would require consciousness. Some tasks are aided by or maybe even necessitate self-awareness; for example, chatbots. But it seems to me that you could leave out such tasks and still have something quite impressive.
Then, of course, there is no agreed definition of consciousness. Many will argue that the self-awareness of chatbots is not consciousness.
I would say most people take strong AI and similar to mean an artificial person, for which they take consciousness as a necessary ingredient. Of course, it is impossible to engineer an artificial person. It is like creating a technology to turn a peasant into a king. It is a category error. A less kind take could be that stochastic parrots string words together based on superficial patterns without any understanding.
But we may be able to prove that it is NOT conscious, which I think is clearly the case with current-level AI. Although you don’t accept the example I provided, I believe it is clear evidence of a lack of consciousness behind the high level of intelligence it clearly has.
Indeed, I do not see the relation between consciousness and reasoning in this example.
Self-awareness means the ability to distinguish self from other, which implies computing from sensory data what is oneself and what is not. That could be said to be a form of reasoning. But I do not see such a relation for the example.
By that standard, are all humans conscious?
FWIW, I asked GPT-4o mini via DDG.
Screenshot
I don’t know if that means it understands. It’s how I would have done it (yesterday, after looking up Peano Axioms in Wikipedia), and I don’t know if I understand it.
You did it wrong: you provided the “answer” to the logic proposition and got the proof parroted back at you. Completely different situation.
The AI must be able to figure this out in responses that require this very basic understanding. I don’t recall the exact example, but here is a similar one: the AI fails to simply count the number of R’s in strawberry, claims there are only 2, refuses to accept that there are 3, and then, when it is explained that there is 1 in “straw” and 2 in “berry”, makes the very puzzling argument that counting the R in “straw” is some sort of clever trick.
This is fixed now; it had to do with the text being tokenized incorrectly. So you can’t “prove” this wrong by showing an example of a current AI that doesn’t make the mistake.
Unfortunately I can’t find a link to the original story, because I’m flooded with later results. But you can easily find the 2 R’s in strawberry problem.
Self-awareness means the ability to distinguish self from other, which implies computing from sensory data what is oneself and what is not.
Yes, but if you instruct a parrot or an LLM to say yes when asked if it is separate from its surroundings, it doesn’t mean it is just because it says so.
So we need to figure out whether it actually understands what that means. Self-awareness at the human level requires a high level of logical thought and abstract understanding. My example shows this level of understanding clearly isn’t there.
As I wrote earlier, we really can’t prove consciousness. The way around that is to figure out some of the mental abilities required for it; if those can be shown not to be present, we can conclude it’s probably not there.
When we have Strong AI, it may take a decade to be widely acknowledged. And this will stem from a failure to disprove it, rather than actual proof.
You never asked how I define intelligence, self-awareness, or consciousness; you asked how I operationally define it, and that’s a very different question.
An operational definition specifies concrete, replicable procedures designed to represent a construct.
I was a bit confused by that question, because consciousness is not a construct; the brain is, and consciousness is an emergent property of it.
Also:
An operation is the performance which we execute in order to make known a concept. For example, an operational definition of “fear” (the construct) often includes measurable physiologic responses that occur in response to a perceived threat.
It seems to me that being able to define that for consciousness would essentially mean possessing the knowledge necessary to replicate it.
Nobody on planet earth has that knowledge yet AFAIK.
You did it wrong: you provided the “answer” to the logic proposition and got the proof parroted back at you.
Well, that’s the same situation I was in and just what I did. For that matter, Peano was also in that situation.
This is fixed now; it had to do with the text being tokenized incorrectly.
Not quite. It’s a fundamental part of tokenization. The LLM does not “see” the individual letters. By, for example, adding spaces between the letters one could force a different tokenization and a correct count (I tried back then). It’s interesting that the LLM counted 2 "r"s, as that is phonetically correct. One wonders how it picks up on these things. It’s not really clear why it should be able to count at all.
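As a rough illustration, here is a small sketch using OpenAI’s open-source tiktoken tokenizer; the exact splits differ between models and vocabularies, so treat the output as indicative rather than as a claim about any particular chatbot.

```python
# pip install tiktoken
import tiktoken

# A GPT-style BPE vocabulary; other models use different ones.
enc = tiktoken.get_encoding("cl100k_base")

for text in ["strawberry", "s t r a w b e r r y"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode_single_token_bytes(t).decode("utf-8", "replace")
              for t in token_ids]
    print(f"{text!r} -> {pieces}")

# The plain word typically comes out as a couple of multi-letter chunks,
# while the spaced-out version is forced into (roughly) one token per letter.
# The model only ever sees the chunk IDs, never the individual characters,
# which is why spelling-level questions go wrong unless the spacing forces
# a character-by-character split.
```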
It’s possible to make an LLM work on individual letters, but that is computationally inefficient. A few months ago, researchers at Meta proposed a possible solution called the Byte Latent Transformer (BLT). We’ll see if anything comes of it.
In any case, I do not see the relation to consciousness. Certainly there are enough people who are not able to spell or count and one would not say that they lack consciousness, I assume.
Yes, but if you instruct a parrot or an LLM to say yes when asked if it is separate from its surroundings, it doesn’t mean it is just because it says so.
That’s true. We need to observe the LLM in its natural habitat. What an LLM typically does is continue a text. (It could also be used to work backwards or fill in the middle, but never mind.) A base model is no good as a chatbot. It has to be instruct-tuned. In operation, the tuned model is given a chat log containing a system prompt, text from the user, and text that it has previously generated. It will then add a reply and terminate the output. This text, the chat log, could be said to be the sum of its “sensory perceptions” as well as its “short-term memory”. Within this, it is able to distinguish its own replies, those of the user, and possibly other texts.
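To make that concrete, here is a toy sketch of what such a chat log looks like once it has been flattened into the single text stream the model actually continues. The role markers below are invented for the illustration; every model family uses its own template.

```python
# The whole conversation -- system prompt, user turns, and the model's own
# earlier replies -- arrives as one flat text that the model is asked to continue.
messages = [
    {"role": "system",    "content": "You are a helpful assistant."},
    {"role": "user",      "content": "Is 1 + 1 = 2?"},
    {"role": "assistant", "content": "Yes."},
    {"role": "user",      "content": "Can you prove it?"},
]

def render(messages):
    # <|role|> ... <|end|> are made-up marker tokens for this sketch.
    parts = [f"<|{m['role']}|>\n{m['content']}\n<|end|>" for m in messages]
    # The trailing open assistant tag is the cue to generate the next reply.
    return "\n".join(parts) + "\n<|assistant|>\n"

print(render(messages))
# The model appends text after the final <|assistant|> marker and stops by
# emitting <|end|>.  Everything before that marker -- including its own earlier
# replies -- is, in effect, its "perception" and short-term memory.
```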
My example shows this level of understanding clearly isn’t there.
Can you lay out what abilities are connected to consciousness? What tasks are diagnostic of consciousness? Could we use an IQ test to diagnose people as having or lacking consciousness?
I was a bit confused by that question, because consciousness is not a construct; the brain is, and consciousness is an emergent property of it.
The brain is a physical object. Consciousness is both an emergent property and a construct; like, say, temperature or IQ.
You are saying that there are different levels of consciousness. So, it must be something that is measurable and quantifiable. I assume a consciousness test would be similar to an IQ test in that it would contain selected “puzzles”.
We have to figure out how consciousness is different from IQ. What puzzles are diagnostic of consciousness and not of academic ability?
Can you lay out what abilities are connected to consciousness?
I probably can’t say much new, but it’s a combination of memory, learning, abstract thinking, and self-awareness.
I can also say that consciousness resides in a form of virtual reality in the brain, allowing us to manipulate reality in our minds to predict the outcomes of our actions.
At a more basic level it is memory, pattern recognition, prediction and manipulation.
The fact that our consciousness is a virtual construct also means it acts as a shim, distancing the mind from direct dependence on the underlying physical layer, although it still depends on that layer to work, of course.
So to make an artificial consciousness, you don’t need to create a brain; you can do it by recreating the functionality of the abstraction layer on other forms of hardware, which means a conscious AI is indeed possible.
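Read in its ordinary software sense, the “abstraction layer” point is the familiar idea that behaviour defined against an interface can run on whatever substrate implements that interface. A loose sketch of just that software idea, with all names invented for illustration:

```python
from abc import ABC, abstractmethod

class Substrate(ABC):
    """Whatever physically carries the computation: neurons, silicon, ..."""
    @abstractmethod
    def store(self, key: str, value: str) -> None: ...
    @abstractmethod
    def recall(self, key: str) -> str | None: ...

class SiliconSubstrate(Substrate):
    def __init__(self) -> None:
        self._memory: dict[str, str] = {}
    def store(self, key: str, value: str) -> None:
        self._memory[key] = value
    def recall(self, key: str) -> str | None:
        return self._memory.get(key)

class Mind:
    """The 'shim': written only against the Substrate interface, so the same
    behaviour runs unchanged on any hardware that implements it."""
    def __init__(self, substrate: Substrate) -> None:
        self.substrate = substrate
    def remember(self, observation: str) -> None:
        self.substrate.store("last", observation)
    def predict(self) -> str | None:
        # Toy "prediction": just replays the remembered observation.
        return self.substrate.recall("last")

mind = Mind(SiliconSubstrate())
mind.remember("rain means wet streets")
print(mind.predict())
```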
It is also this feature that allows us to have free will. Although that depends on the definition, I believe we do have free will in an absolutely meaningful sense, something that took me decades to realize was actually possible.
I don’t know if this makes any sense to you, but maybe you find it interesting?
You are saying that there are different levels of consciousness. So, it must be something that is measurable and quantifiable.
Yes, there are different levels, actually in two ways. There are different levels between the consciousness of a dolphin and a human: a dolphin is also self-aware and conscious, but it does not have the same level of consciousness we do, simply because it doesn’t possess the same level of intelligence.
But even within the human brain there are different levels of consciousness. The term “subconscious” is in common use, and with good reason. There are things that are hard to learn, and we need to concentrate and practice hard to learn them. But with enough practice we build routine, and at some point things become so routine that we can do them without thinking about them, while thinking about something else instead.
At that point you have trained a subconscious routine that is able to work almost independently, without guidance from your main consciousness. There are also functions that are “automatic”: when you listen to sounds, you can distinguish many separate sounds without a problem. We can somewhat mimic that in software today, separating different sounds; it’s extremely complex to do, and the mathematics involved is more than most can handle. Yet in our hearing we do it effortlessly. But there is obviously an intelligence at work in the brain that isn’t directly tied to our consciousness.
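For reference, the classic software counterpart of that “cocktail party” ability is blind source separation. A minimal sketch using independent component analysis (scikit-learn’s FastICA) on two synthetic signals mixed into two “ears”:

```python
# pip install numpy scikit-learn
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)

# Two independent "sounds": a sine tone and a square-ish wave, plus a little noise.
s1 = np.sin(2 * np.pi * 5 * t)
s2 = np.sign(np.sin(2 * np.pi * 3 * t))
S = np.c_[s1, s2] + 0.02 * rng.standard_normal((t.size, 2))

# Each "ear" hears a different mixture of both sources.
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
X = S @ A.T

# ICA recovers the original sources, up to ordering and scaling.
ica = FastICA(n_components=2, random_state=0)
S_estimated = ica.fit_transform(X)
print(S_estimated.shape)  # (2000, 2): two separated signals
```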
IDK if I’m explaining myself well here, but the subconscious is a very significant part of our consciousness.
So, it must be something that is measurable and quantifiable.
That is absolutely not a certainty. At least I don’t think we can measure it at this point in time, though in the future there may be better knowledge and better tools. But as it is, we have been hampered by wrong-headed thinking in these areas for centuries, quite the opposite of physics and mathematics, which have helped computing every step of the way.
The study of the mind has been hampered by prejudice: thinking that humans are not animals, thinking free will comes from god, nonsense terms like the id, and thinking we have a soul that is something separate from the body. Psychology basically started out as pseudoscience, and despite that it was a huge step forward!
I’ll stop here; these issues are very complex, and some of them have taken me decades to figure out. There is much dogma and even superstition surrounding them, so it used to be rare to find someone to read or listen to who made sense based on reality. It seems to me that it’s basically only in the past 15 years that the science of the mind has begun to catch up to reality.