Discussion in 'Intelligence & Machines' started by Magical Realist, Sep 19, 2012.
With today's technology I seriously doubt it, unless a breakthrough of major proportions is made.
Requisites for consciousness?
1. Mass sensory input rather than pre-selected data
2. Processing based on previously successful processes rather than set program
3. Memory storage and deletion based on usage/usefulness of data
If you knew the exact constitution of a human being, at a set moment in time, plane by plane, molecule by molecule,
then using some advanced 3D printer you should be able to print off exact clones of that individual.
They would presumably then intellectually diverge.
4. Daily renewable power source that does not corrode/decay over a long time.
(and great for electric cars)
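Requisite 3 above (retention and deletion based on usage) resembles cache-eviction policies in computing. A minimal sketch, assuming a least-recently-used rule stands in for "usefulness" (the class name and capacity are illustrative, not a real AI design):

```python
from collections import OrderedDict

class UsageMemory:
    """Toy memory store: keeps the most recently used items,
    forgetting the least used once capacity is reached (LRU)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def remember(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)  # refreshed by use
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # forget the least used item

    def recall(self, key):
        if key not in self._store:
            return None  # forgotten, or never stored
        self._store.move_to_end(key)  # recalling counts as use
        return self._store[key]

mem = UsageMemory(capacity=2)
mem.remember("a", 1)
mem.remember("b", 2)
mem.recall("a")       # "a" is now more recently used than "b"
mem.remember("c", 3)  # evicts "b", the least recently used
```

Real usefulness would presumably weigh more than recency, but the point stands: usage-driven forgetting is a well-understood mechanism, not magic.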
Just a thought. In neurons there seems to be a natural capacity for recirculating pulses, by feeding the efferent axon back to the afferent dendrite through a delay (such as myelin), like an oscillator. Methinks this might be an exceptionally energy-efficient process, just contemplating the way dipoles flip as the action potential travels down the fiber. I mean, it's just a little friction and maybe a little attenuation to break in and out of the magnetic coupling like that. I have no idea whatsoever if this gives us memory, but anything that persists like that by recirculating has application in the body. For example, the heart circulates its own pulses with just a little nudge from the nervous system.
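The recirculating-pulse idea can be sketched as a discrete-time delay loop: a spike fed back through a fixed delay persists lap after lap until attenuation drops it below a firing threshold. A toy model only; the delay length, attenuation, and threshold here are illustrative numbers, not physiology:

```python
def recirculate(pulse, delay=5, attenuation=0.9, threshold=0.1, steps=50):
    """Toy delay-loop 'memory': a pulse re-enters the loop after
    `delay` steps, slightly attenuated each lap, and survives only
    as long as it stays at or above `threshold`."""
    line = [0.0] * delay  # the delay line (axon -> dendrite path)
    line[0] = pulse       # inject the initial spike
    history = []
    for _ in range(steps):
        out = line.pop()  # pulse arriving back at the cell
        fired = out if out >= threshold else 0.0
        line.insert(0, fired * attenuation)  # feed it back, attenuated
        history.append(fired)
    return history

h = recirculate(1.0)
# The pulse reappears every `delay` steps, fading each lap
# (1.0, 0.9, 0.81, ...) until it falls below threshold and the
# stored "memory" is lost.
```

With attenuation below 1.0 the loop always forgets eventually, which is why the energy question matters: something has to top the pulse back up, just as the text suggests for the heart.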
Of course, even if that's how memory works (and a semiconductor memory is probably more energy efficient anyway), the high-reliability battery is a must. A biological fuel cell of some kind might fit the bill. Inject a substrate into your scalp (hair isn't efficient anyway?), move south, stay in the sun, and keep your "thumb drive" powered up.
Some are, some aren't.
Some types of RAM must be continually refreshed.
Some types of EEPROM can hold their stored code for years without any externally applied power.
Fusible-link PROMs do not rely on stored charge at all.
I love this topic because it provides great insights into how others understand reality.
Daniel Dennett has taken up a safe position on the hard problem, which is really the crux of the question here.
The hard problem provides nothing to investigate scientifically, but it does allow one to properly recognise the gap between the conscious and the unconscious.
I really doubt we will ever make something separate from a biological system capable of consciousness.
I don't see how you can make that assumption. Assuming consciousness is purely a physical process without the need of something supernatural, it can be replicated, be it with artificial neurons on silicon or even simulated on a computer that is powerful enough. This also assumes that inputs and outputs which appear conscious are conscious; the whole Chinese room problem comes up, but I feel that is irrelevant because, fundamentally, I don't know if you're conscious, or vice versa. Sure, we appear to be, but you can't know for sure! There is no way to detect consciousness directly; we can only detect outputs that we assume could only be produced by consciousness.
I personally feel that consciousness is a virtual property, an emergent property which does not really exist but is a product of the whole automatically being greater than the sum of its parts, but that is just my personal philosophy. Time will tell if it's true. If it's true, AI will someday be made that at least appears conscious; if it's not true, if consciousness is more than a physical construct, not automatically but supernaturally, then we will never create AI that even appears to be conscious. And if we can make something that is not conscious appear to be conscious, the Chinese room, then we will never know whether we have created consciousness or not.
A very accurate machine simulation of a conscious biological system will be just as conscious as the biological system. It's a very messy and expensive way to do it though.
IMO, it is the mirror neural network in living things that allows for capacities such as learning, empathy, sympathy, love, hate, temptation, etc.
When we figure out how to assemble and store mass sensory input, and to compare (integrate) information from this mirror network with the functional neural network, we will have created an emergent sentience.
I don't agree with you two on this point. It could be that a very accurate machine simulation can result in consciousness, but that assumes all there is to consciousness lies within the limited scope of what we currently understand about brain function, and that consciousness can be duplicated via a machine simulation with its limited parts (meaning the atoms, and assortments of atoms, used in the machine's construction). I personally think that if we ever do understand the hard problem, we will find that there is more to consciousness than a mere machine simulation could replicate. The role of proteins and chemicals, for example; thus the need for a semi-biological system.
Another issue not being addressed, I fear, is how one would be able to actually tell that a machine is conscious or not.
After all, how can you tell that I am conscious? The only thing a person can be sure of in this regard is that they themselves are conscious. We only assume that others are, due to their biological similarity (and so a case of "If I am, so are they") and because their actions fit what we understand to be conscious behaviour.
But a machine could be very adept at mimicking those actions.
I posted on another thread a scary-looking artificial "intelligence": a quadruped robot that could maintain balance even when walking on ice, and even when given a significant push. Its behaviour while recovering its balance looks as though it is conscious. It is certainly aware of its surroundings - at least as far as its sensors allow - and also aware of its own orientation etc.
I'm not for one moment saying it is conscious - but it manages to mimic certain behaviour of what we see in conscious animals.
So if we saw a machine that mimicked much of the behaviour - could we even tell whether it was conscious or not... or would we fall back on the fact that it was clearly and evidently built and programmed by man, and conclude that it thus cannot be conscious?
It comes down to the question of what is consciousness... is it a mechanism / internal process or processes? Or is it merely the outward appearance and acceptance of being conscious by a human? E.g. if it looks like a duck, quacks like a duck, walks like a duck... who is to say it is not a duck, even if the internal workings are vastly different from a biological duck.
With current computing technology it's irrelevant what the machine is made of.
Today's machines basically rely on discrete voltage levels.
If you can simulate a system accurately it behaves the same as the system being simulated to whatever degree of accuracy you specify. To put it in more basic terms, the person being so simulated would argue he is conscious as vociferously as you would, and pass any test that you could pass to prove it.
You can simulate anything, including the chemistry of proteins. It's just hard and takes a long time.
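The "whatever degree of accuracy you specify" claim is easy to demonstrate on a simple system. A minimal sketch, simulating exponential decay dx/dt = -x with the forward Euler method: refining the step size brings the simulation arbitrarily close to the real system's behaviour (the step counts here are arbitrary illustrations):

```python
import math

def euler_decay(x0, t_end, steps):
    """Simulate dx/dt = -x with the forward Euler method."""
    dt = t_end / steps
    x = x0
    for _ in range(steps):
        x += dt * (-x)  # one Euler step
    return x

exact = math.exp(-1.0)  # true solution: x(1) = e^-1
coarse = abs(euler_decay(1.0, 1.0, 100) - exact)
fine = abs(euler_decay(1.0, 1.0, 1000) - exact)
# The finer simulation lands closer to the real system's value:
# accuracy is a dial you turn, not a fixed ceiling.
```

Of course, "hard and takes a long time" is the operative caveat: the cost grows with the accuracy demanded, and a brain has vastly more state than one decaying variable.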
A simulation is just a means to explore our abstract understandings. A machine can simulate but not replicate or duplicate. You can simulate movement, but all you've accomplished is modeling an abstraction of what movement is.
Only a little historical background below, for those unfamiliar with the criticism of first-generation cognitive science, as it pertains here to its influence on AI.
John R. Searle; Minds, Brains, and Programs (1980):
. . . imagine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes. Now where is the understanding in this system?
It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn't understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination. The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties.
[...] Precisely that feature of AI that seemed so appealing -- the distinction between the program and the realization -- proves fatal to the claim that simulation could be duplication. The distinction between the program and its realization in the hardware seems to be parallel to the distinction between the level of mental operations and the level of brain operations. And if we could describe the level of mental operations as a formal program, then it seems we could describe what was essential about the mind without doing either introspective psychology or neurophysiology of the brain. But the equation, "mind is to brain as program is to hardware" breaks down at several points among them the following three:
First, the distinction between program and realization has the consequence that the same program could have all sorts of crazy realizations that had no form of intentionality. Weizenbaum (1976, Ch. 2), for example, shows in detail how to construct a computer using a roll of toilet paper and a pile of small stones. Similarly, the Chinese story understanding program can be programmed into a sequence of water pipes, a set of wind machines, or a monolingual English speaker, none of which thereby acquires an understanding of Chinese. Stones, toilet paper, wind, and water pipes are the wrong kind of stuff to have intentionality in the first place -- only something that has the same causal powers as brains can have intentionality -- and though the English speaker has the right kind of stuff for intentionality you can easily see that he doesn't get any extra intentionality by memorizing the program, since memorizing it won't teach him Chinese.
Second, the program is purely formal, but the intentional states are not in that way formal. They are defined in terms of their content, not their form. The belief that it is raining, for example, is not defined as a certain formal shape, but as a certain mental content with conditions of satisfaction, a direction of fit (see Searle 1979), and the like. Indeed the belief as such hasn't even got a formal shape in this syntactic sense, since one and the same belief can be given an indefinite number of different syntactic expressions in different linguistic systems.
Third, as I mentioned before, mental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer.
"Well - if programs are in no way constitutive of mental processes, why have so many people believed the converse? That at least needs some explanation." I don't really know the answer to that one. The idea that computer simulations could be the real thing ought to have seemed suspicious in the first place because the computer isn't confined to simulating mental operations, by any means. No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything? It is sometimes said that it would be frightfully hard to get computers to feel pain or fall in love, but love and pain are neither harder nor easier than cognition or anything else.
For simulation, all you need is the right input and output and a program in the middle that transforms the former into the latter. That is all the computer has for anything it does. To confuse simulation with duplication is the same mistake, whether it is pain, love, cognition, fires, or rainstorms. Still, there are several reasons why AI must have seemed -- and to many people perhaps still does seem -- in some way to reproduce and thereby explain mental phenomena, and I believe we will not succeed in removing these illusions until we have fully exposed the reasons that give rise to them.
If we do understand exactly how consciousness arises in humans, then we will be able, with enough computing power, to simulate it: any understood physical process can be mathematically simulated. Moreover, there is no proof as of yet that alternative mechanisms for the "proteins and chemicals" that provide the analog, super-Turing operations of the brain can't be found: no evidence that we won't be able to emulate those without using proteins or wet chemistry. Lastly, we don't fully understand how we have consciousness, yet we manage to make more of us once every 15 seconds; it's thus possible we could create consciousness without knowing how it's conscious.
You don't even need to know how it arises.
I would say that if it can be defined, then it can be programmed.
That would explain being able to write unbeatable programs even though you are lousy at playing the games yourself.
I haven't read the link, but maybe they are looking in the wrong direction.
It's my opinion that an optical computer based on frequency instead of discrete voltage levels will increase computing power by at least 10 orders of magnitude.
If you simulate it so accurately you cannot tell it from the real thing, then it duplicates that thing.
An aircraft simulator that presents a pilot with exactly the same inputs, and responds exactly the same to _his_ inputs, is for all intents and purposes identical to a real airplane when it comes to learning to fly it. That's why so much of modern flight training uses simulators.
Likewise, if you can simulate a human being down to a given level of accuracy, the human being will exhibit all the traits that a non-simulated human being will.