Discussion in 'Intelligence & Machines' started by GaiaGirl95, Feb 2, 2014.
There are robots which can recognize themselves in a mirror. What about digital entities?
What about you?
GaiaGirl95: Can you provide a link or citation for the following?
It is an extraordinary claim, and extraordinary claims require extraordinary supporting evidence.
BTW: Your remark implies that the robot mentioned is analog & very complex, as well as including a sophisticated visual system. This seems like a lot for an analog device, although the Franklin Institute once had a doll which could draw fairly complex sketches. It had no visual ability.
Video game characters don't really exist; they're an illusion created for us by code/a program.
Code can recognize code, it's what virus scanners do.
Programmes can also recognize when a copy of them is already installed, and I do not see why they could not detect any other part of themselves (other than Windows being a bitch), either on the hard drive or in RAM.
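That kind of self-detection really is trivial at the code level. Here is a minimal Python sketch (my own illustration, not from any actual scanner): it uses a whole-file hash as the "signature," which is the same basic trick signature-based virus scanners rely on.

```python
import hashlib
from pathlib import Path

def file_digest(path):
    """Return the SHA-256 digest of a file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def find_self_copies(own_path, search_dir):
    """Find files in search_dir whose bytes exactly match our own.

    'Recognition' here is nothing more than comparing signatures
    (a full-file hash) -- no awareness required.
    """
    own = file_digest(own_path)
    return sorted(
        str(p) for p in Path(search_dir).iterdir()
        if p.is_file() and str(p) != str(own_path) and file_digest(p) == own
    )
```

A real scanner matches short byte patterns rather than whole-file hashes, but the principle, mechanical pattern matching, is the same.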
That said, consciousness has to do with being self-aware, and programmes do not think for themselves, they think for us, because that's what we(well, programmers) programmed them for.
It is possible to write a computer program which could mimic consciousness, although I am not sure it would pass a Turing Test. Actual consciousness requires a level of complexity not currently available via a digital or analogue device. I do not think there will ever be an analog device as effective as the better (let alone, the best) digital devices.
An interesting analogy is comparison of a potential consciousness mimicking program with Deep Blue, the program which won against a Chess Grandmaster.
That program used several position evaluating algorithms & Von Neumann MiniMax Strategy. It did not come close to functioning like the brain of a chess master. It was based on number crunching.
The problem with mimicking consciousness is that we have no pertinent algorithms for use in constructing a program.
Note that IfThenElse & other simple logical constructs are impractical for either playing chess or mimicking consciousness: a program built from such constructs would take many thousands (if not millions) of man hours of programming effort.
The position evaluating algorithms used for the Chess Playing program provided a numerical value for any configuration of the pieces. There were several such algorithms, each used for a different stage of a game. Id est: One for the early game, one for middle game, & one for the end game. There was also a repertoire of standard Openings from chess books. These were developed from games played by Chess Masters & described in various books on Chess play.
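The evaluation-plus-MiniMax scheme described above can be sketched in a few lines. This toy Python version (my own illustration, nothing like Deep Blue's actual code) treats each leaf as a position already scored by an evaluation function, and simply crunches the numbers back up the game tree:

```python
def minimax(node, maximizing=True):
    """Depth-first minimax over a game tree.

    A node is either a number (a leaf position, pre-scored by some
    position-evaluation function) or a list of child nodes.  The
    maximizing player picks the largest child value, the minimizing
    player the smallest -- pure number crunching, no understanding.
    """
    if not isinstance(node, list):      # leaf: already-evaluated position
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)
```

For the tree `[[3, 5], [2, 9]]` the opponent would answer each of our two moves with the worse-for-us reply (3 and 2), so minimax picks the first move, value 3.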
There are no worthwhile sources of pertinent information relating to consciousness on which algorithms & a program could be based.
I'm not trying to come across as an instigator, but do you have any sources to demonstrate or prove that robots are capable of recognizing themselves in mirrors?
Yes, but not with today's level of technology.
So you are telling me that I am not really the Wasteland Wanderer from Vault 101?? Thanks for ruining my world.
The word 'consciousness' is still very poorly defined and understood, and is still a subject of heated debate in the philosophy of mind. Seemingly all of us (kinda) know (experientially) what consciousness is, but none of us (even the experts) can provide a fully satisfactory account of it. It's still very much a work-in-progress.
So answering your question kind of presupposes more information than any of us have.
A much easier question is whether it would be possible to make a videogame character intelligent. Intelligence is an easier idea to get our minds around than consciousness. Intelligence is more about objectively observable behavior and less about private subjective states.
My answer to that one is sure. Some videogame characters already have some small amount of AI built in, don't they?
Creating an intelligent agent that acts in a virtual videogame world would probably be a lot easier than creating a robot that behaves intelligently in the real world. That's because the virtual world in the videogame would necessarily be highly simplified, compared to the open-ended and effectively infinitely surprising real world.
An example of that is the chess-playing systems that Dinosaur was talking about in his post. These systems are highly intelligent indeed, perhaps more so than any human chess grand-master, but they are only intelligent when it comes to playing chess. Chess is the only thing that these systems can be aware of. The chessboard and the rules that govern it constitute their entire universe.
Intelligence is easier to generate when the universe of discourse is kept very small.
Don't hold your breath.
It doesn't sound all that difficult. Facial-recognition systems already exist that can recognize human beings. Presumably if you packed such a system into a robot, and if the robot had a distinctive appearance that could be recognized, then the robot might be capable of recognizing its own form in a mirror.
That leaves the whole annoying 'consciousness' conundrum unexplored, of course. But recognition doesn't necessarily imply very much about whether consciousness (whatever that vague and ambiguous word means) is present.
I agree. A lock may be said to "recognize" a certain key. An ATM may be said to "recognize" a certain PIN. But this doesn't entail consciousness, at least as we know it. Recognition, along with memory, computation, logic, language-use and probabilistic inference, is just one among several intelligent attributes that may operate independently of consciousness. We demonstrate such ourselves when we black out or sleepwalk, exhibiting all sorts of functions that are normally chalked up to consciousness but really just happen as if on autopilot.
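The lock-and-key sense of "recognition" is almost embarrassingly simple to implement. A minimal Python sketch (my own illustration; real ATMs are of course not this naive): a "lock" stores a fingerprint of the one input it accepts, and "recognizing" is bare comparison.

```python
import hashlib

def make_lock(secret):
    """A 'lock' is just a stored fingerprint of the thing it accepts."""
    return hashlib.sha256(secret.encode()).hexdigest()

def recognizes(lock, attempt):
    """'Recognition' here is bare pattern matching -- no awareness needed."""
    return hashlib.sha256(attempt.encode()).hexdigest() == lock
```

Nothing in these few lines is plausibly conscious, which is exactly the point: recognition alone tells us very little.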
Reminds me: didn't we have a thread in which some jerk tried to maintain that viruses must have intelligence because they could discriminate between proteins on a cell surface, or something?
I don't know. I guess it depends on how far you want to generalize the notion of intelligence.
Yazata: From your Post #9
Deep Blue (the first program to beat a Grand Master) is not an AI program. It is a number cruncher. As briefly described in my Post #5, it uses Position Evaluation Functions & MiniMax Theory.
To an observer of the games played by Deep Blue, it seems intelligent. To a competent programmer with some knowledge of its algorithms, it is just a brute force number cruncher.
To the best of my knowledge, there are no programs which deserve to be called AI. Most (if not all) such programs use some rule based algorithms applicable to certain restricted problems.
There are language translation programs which do a credible job. Such programs might seem to understand the languages involved, but they do not. Similarly, programs like Deep Blue & so-called AI programs are not really examples of programmed AI.
Hmm, but you are a Wasteland Wanderer from Vault 101, just not in your computer, only in your brain.
You mean human-level consciousness? You'd need the software equivalent and full complexity of an autonomous brain for each character, rather than their screen behaviors being guided by either a "simple" set of subroutines or heteronomously regulated by the instructions of the overall game program. That enhancement would be overkill, however, if the virtual environment provided and its situations were still a far cry from that of the "real" world -- i.e., if the amount of information the agent received did not warrant such a degree of intellective awareness for understanding and responding to it.
Also note this would merely be "consciousness" in terms of the outward behavior of the agent and internal electrical measurements or processing density of its underlying subprogram. It might not have phenomenal consciousness (manifested sensory content -- experiences which provided empirical evidence that it and its world even existed), since what specifically enables brute passage from matter that completely lacks an internal "presence of anything", to matter that does, remains unclear.
On the flip side, there's an embodied cognitive science view that might regard game characters as already being as "conscious" in some respects as they ever will be, and that real-life humans aren't much different. I.e., the latter don't require a manipulation of information and its conversions to and from internal models. No "mind" necessary*, just direct influences from / interaction with the environment. Brain serving no more purpose than a heavily wired junction box or hub of converging energy relations that afterwards trigger muscle activity.
Robert A. Wilson, Embodied Cognition, SEP: Amongst the most influential anti-representationalist views is dynamical systems theory. Dynamicists tend to minimize and sometimes even deny the need for a centralized representational processor. The notion of representation that these authors challenge is relatively specific: an internal model capable of reproducing the external environmental structure that is used by the cognitive agent to guide its behavior in relative independence from the world. Dynamical systems theory has proven to be popular in robotics and in work on artificial life, which has tried to explain adaptive behavior in terms of embodiment and embeddedness. As long as a situated creature can sense its world so as to allow its body to be directly influenced, abstract symbolic descriptions can be dispensed with.
Formulating an empirically adequate theory of intelligent behavior without appealing to representations at all, however, faces insuperable difficulties, and the idea that it is a relatively trivial matter to scale up from existing dynamic models to explain all of cognition remains wishful thinking and subject to just the problems that motivated the shift from behaviorism to cognitive science in the first place. For example, organism-environment interaction alone cannot account for anticipatory behavior, which involves internal factors beyond the immediate constraints of the environment to achieve or fulfill future needs, goals or conditions. Domains raising a representation-hungry problem are those involving reasoning about absent, non-existent or counterfactual states of affairs, planning, imaging and interacting.
Moreover, it is unclear why embodied cognitive science could not also be symbolic, representational, abstract, etc. Puzzlement here is magnified by the fact that many self-styled embodied approaches to cognition are symbolic, representational, abstract, etc. What they offer are specific views of what mental symbols and representations are, and how they function in our cognitive economy. Typically, they replace the conception of propositional encoding with one according to which symbols convey a modality-specific feature.
We are into "Matrix" territory here.
Yes, I think it would be possible. I can't see why not.
The character would need to have the computing power of a human brain, and would need to interact with its environment as though it were real. We don't know how the brain is configured to take information from our senses, but that would need to be in place for the videogame character too. The only difference would be that the sensory input was not real, but would seem real to the character.
Well if people can spontaneously burst into flames and breathing cold air implodes your chest and telekinesis is real, then having Gordon Freeman come to life ain't that much of a stretch.
So sure why not; we are discussing this in a freaking science forum for God's sake.