Humans were always the weakest link in the security chain. Why? Because humans aren’t logical and can be tricked with words and ideas.
So we’ve developed this new type of computer program that “thinks” and speaks naturally like a human, right? Responding naturally to human conversation with it.
The issue is (as I’ve said before) that we’ve essentially created a computer program that is just as fallible as humans.
In other words, no shit simple prompt engineering works. There’s no way to “secure” a human brain from dripping out things it shouldn’t on accident, and by extension, there’s no way to “secure” an LLM “brain” because they operate in a somewhat similar manner (or at least appear to). Prompt engineering is just social engineering for computers. We’ve created a computer that can be tricked with words and ideas, just like a human.
Humans were the weakest link in security and we just made computers as weak of a link as humans. Who really thought making computers as bad at everything as humans was a good idea?
The issue is (as I’ve said before) that we’ve essentially created a computer program that is just as fallible as humans.
Id say it is worse, as we have more physical presence. We can think it rains, look outside and realize somebody is spraying water on the windows and we were wrong. The LLM can only react to input, and after a correction will apologize, and then you have a high chance it will still talk about how it rains.
We can also actually count and actually understand things, and not just predict what the next most likely word is.
But yes, I don’t get from a security perspective people include LLMs in things, also with the whole data flows back into the LLM thing for training a lot of the LLM providers are prob doing.
So, let’s go over this shall we.
Humans were always the weakest link in the security chain. Why? Because humans aren’t logical and can be tricked with words and ideas.
So we’ve developed this new type of computer program that “thinks” and speaks naturally like a human, right? Responding naturally to human conversation with it.
The issue is (as I’ve said before) that we’ve essentially created a computer program that is just as fallible as humans.
In other words, no shit simple prompt engineering works. There’s no way to “secure” a human brain from dripping out things it shouldn’t on accident, and by extension, there’s no way to “secure” an LLM “brain” because they operate in a somewhat similar manner (or at least appear to). Prompt engineering is just social engineering for computers. We’ve created a computer that can be tricked with words and ideas, just like a human.
Humans were the weakest link in security and we just made computers as weak of a link as humans. Who really thought making computers as bad at everything as humans was a good idea?
Id say it is worse, as we have more physical presence. We can think it rains, look outside and realize somebody is spraying water on the windows and we were wrong. The LLM can only react to input, and after a correction will apologize, and then you have a high chance it will still talk about how it rains.
We can also actually count and actually understand things, and not just predict what the next most likely word is.
But yes, I don’t get from a security perspective people include LLMs in things, also with the whole data flows back into the LLM thing for training a lot of the LLM providers are prob doing.