Cite an example where this is unequivocally true.
Anything that requires long-term memory simply can't be true, and neither can any claim that it is pondering things in real time; nothing happens in real time except the immediate answer it gives. It hasn't thought about a question before you asked it.
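To make that concrete, here is a minimal sketch using the GPT-3-era OpenAI completions library (the model name, prompts, and key are placeholders I'm assuming for illustration, not a transcript of real calls). Every request is stateless: nothing carries over between calls unless you paste the earlier text back in yourself.

```python
# Sketch of why "it pondered this beforehand" can't be true:
# each completion request is stateless. Model name, prompts, and
# the legacy openai-python usage here are illustrative assumptions.
import openai

openai.api_key = "sk-..."  # placeholder

# First call: the model sees only this prompt, computed on demand.
first = openai.Completion.create(
    engine="text-davinci-002",
    prompt="What did you think about before I asked this?",
    max_tokens=50,
)

# Second call: nothing carries over. The model has no record of the
# first exchange unless we resend it as part of the new prompt.
second = openai.Completion.create(
    engine="text-davinci-002",
    prompt="What did we just talk about?",
    max_tokens=50,
)

# Any claim of "remembering" or "pondering in the meantime" is text
# generated in the moment, conditioned only on the prompt it was given.
print(second.choices[0].text)
```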
Can it? A lie is made with the intent to deceive; does the AI intend to deceive? I see no evidence of that. Humans have intended to deceive in the dataset, and the AI is just reflecting that. It could also use the lie, or the mistake, as a natural-sounding answer to why it said the wrong thing in response to the question, just as humans have done in the dataset.
...and if you were serious about it, you might want to ask in what way it considers itself to be having a real-time experience.
If I had access to it, I would have.
You may find that its concept of experience is a little different than yours.
I don't think it has a concept of experience.
The whole point is that if you try to use human intelligence as a baseline, you are dismissing all other conscious organisms that do not possess the same quality of sensory intellect and experience.
I'm not using human intelligence as a baseline; you are. I've said that the basic flaw the AI can't get around (yet) is that it is trained on a dataset of human interactions and human responses. Because of that, it is steered into language a human would use, even though it isn't human and hasn't had the experiences humans have had. That is what gives it away: it explains things the way a human would.
But then what will you say about organisms that have sensory abilities superior to humans'? Shall we dismiss human intelligence because a goose can navigate via the earth's magnetic field? Or because an eagle has 8x the optical power of humans, or a bloodhound 40x the olfactory power of humans? Or because insects can see different color spectra than humans?
I don't know what you are on about here. The AI is trained on a human-based dataset; it will naturally speak of human things.
As you can readily imagine, each of these animals actually lives in a slightly different reality than you do.
Yes, and I would expect the AI to "live" in a different reality, not the human one it describes.
What about whales that communicate via vertically stacked words (musical chords)? Or octopuses that have semi-independent brains in their tentacles? How about a slime mold that can solve mazes without any brain at all? The world is filled with sensory-intelligent and sensory-quasi-intelligent organisms.
I don't see what this has to do with the AI. What point are you trying to make?
So do humans! It takes a human some 18 years to acquire a fully programmed and matured brain, and even then it is by no means ready to perform surgery. That takes another six years of intensive study and practice.
Yes, and the AI doesn't learn from the interactions it has with humans.
I'll grant you that GPT-3 is still in its infancy, which it readily admits, but AFAIK even GPT-3 accumulates associative connections with preprogrammed "tokens". This is how it can draw 30 different chairs that look like avocados from the single command "draw a chair from an avocado". That means 30 different associative interpretations.
Yes, it is associative when you want it to associate two things together, but that is different from thinking about each word, how it fits the context, and what each word means. If you tell it "draw a chair from an avocado", then you are explicitly telling it to merge a chair with an avocado (see the sketch below).
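As a rough illustration of what "associating two things" amounts to mechanically, here is a minimal sketch with made-up toy vectors (these embeddings and functions are my own illustration, not the actual DALL-E or GPT-3 internals): the model represents tokens as learned vectors, a prompt combines them, and the 30 "interpretations" come from sampling noise around that combination rather than from 30 deliberate readings of the prompt.

```python
# Toy sketch: "association" as vector arithmetic plus sampling noise.
# The embeddings below are invented for illustration; real models learn
# them from data, but the operation is the same kind of thing.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned token embeddings (3-dimensional for readability).
embeddings = {
    "chair":   np.array([0.9, 0.1, 0.0]),
    "avocado": np.array([0.1, 0.9, 0.2]),
}

def associate(a, b):
    """'Merging' two concepts is just combining their vectors."""
    return (embeddings[a] + embeddings[b]) / 2

def sample_variants(concept_vec, n=5, noise=0.1):
    """Each 'interpretation' is the same vector plus random noise:
    variation comes from sampling, not from pondering the prompt."""
    return [concept_vec + rng.normal(0, noise, concept_vec.shape)
            for _ in range(n)]

avocado_chair = associate("chair", "avocado")
for i, v in enumerate(sample_variants(avocado_chair)):
    print(f"variant {i}: {np.round(v, 3)}")
```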
GPT-4 will be about 500x more powerful than GPT-3, and the engineers claim that even GPT-3 has in no way reached its limits of associative thinking.
I perceive this Alan guy as dishonest. He has edited his videos to present the discussion as if it happened in real time, with no admission of it; the editing also makes it look like, in some of the videos, he is presenting things to the AI in real time and asking it about them. I have no interest in indulging in fantasy like that. I want serious tests that push it to its limits, and I see nothing of that. You remind me of Trumpists trying to sell the claim that the election was a fraud: "but look at this video! Look at this video!" And the videos just affirm the points he is trying to make, without any questioning. I'm sick and tired of the deceptions going on today, and it leaves a bad taste in my mouth when I see things resembling them. I haven't looked at all his videos; maybe he does explain that it isn't really real time, maybe he does test the AI to reveal its flaws, but from what I have seen so far, it looks like propaganda.
Here is some info about GPT-4.
You see, at some point this system acquires an emergent consciousness. This is what Tegmark claims: there are certain physical patterns that yield an emergent property that the individual parts do not possess.
The world is full of sensory consciousness at different levels: from circadian rhythms in single-celled organisms, to heliotropism in plants, to atmospheric responses in insects, to chemical quorum sensing in bacteria, and so on.
Don't just say AIs are not conscious. Perhaps start with understanding the different perspectives on, and definitions of, the term "consciousness".
I don't see that this AI has any more consciousness than a computer without AI would have. It isn't conscious as the entity it claims to be. It is, as such, not self-aware, and its answers aren't reflective of consciousness or self-awareness but rather of the human interactions it was trained on. That explains all the bias and all the things it says it couldn't possibly have done.
I have no more reason to think GPT-3 is conscious than I have for an ordinary chatbot. There's just no evidence to show that it is conscious, so why should I think it is? That it says it is doesn't provide me any more evidence; a chatbot could say that as well.