The mind/body problem of computers

Discussion in 'General Philosophy' started by Cyperium, Oct 9, 2006.

  1. invert_nexus Ze do caixao Valued Senior Member

    Messages:
    9,686
    Like I said, it's all semantic.
Software in a computer is analogous to DNA. There are probably other structures that would be analogous as well.

    Think of the old punch card computers.
    The 'software' in those computers consisted of physical objects:
    paper punch cards.

    Even in today's computers the software exists in physical form, encoded on the hard drive or in the various chips on the motherboard.

    It's all hardware, but the division exists to make sense of the system. The division is artificial.

    And when dealing with life, it's even more artificial, as the system does so much more than just 'compute'.

    Layers upon layers upon layers.
     
  3. baumgarten fuck the man Registered Senior Member

    Messages:
    1,611
    Literally speaking, the software is part of the hardware; it's the shape of the storage medium.

    Even more literally speaking, software and hardware are words that we use to describe the characteristics of a physical system.

    And finally, figuratively speaking, the other alternative to dualism is that we're actually all software.

    EDIT: I see invert_nexus beat me to it.
     
  5. Roman Banned Banned

    Messages:
    11,560
    Life must be the longest chemical reaction ever, huh?
    Weird to think that we've all got punch cards in our heads, telling us what to do.
    Would it ever be possible to derive the equation(s) that run us? We've got virtually all our genome mapped. All we need now is to figure out how all the codons work and a big computer to run all the possible permutations. Hmm, would that create life on a machine? Input stimulus, the computer crunches numbers, then gives a reaction based on a human's genetics.
     
  7. river-wind Valued Senior Member

    Messages:
    2,671
    Why not use hardware feedback loops? All that is possible in software can be built in logic gates; it's just more efficient, given the development path we have taken, to control general machines with specific software instead of creating specific hardware.

    Commonly, this is the critical difference between what is considered hardware and what is software: software is encoded, and steps must be taken to read and execute the instructions. Hardware is not encoded, and no "logic" is needed; it simply acts as designed, based on the rules of physics.
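River-wind's point that anything expressible in software can equally be built from gates can be sketched in a few lines. The simulation below uses the standard four-NAND construction of XOR (function names are illustrative); in Python it is "software", but the same structure could be fixed wiring with no encoded instructions at all:

```python
# XOR built purely from NAND gates: no instruction decoding,
# just a fixed structure of gates, as in hardwired logic.

def nand(a, b):
    return int(not (a and b))

def xor(a, b):
    # Classic four-NAND construction of XOR.
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))
```

In a hardware realization, the four `nand` calls would simply be four physical gates acting under the rules of physics, with nothing to read or execute.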

    I think this idea of the brain being a dual system is inaccurate. Or that the DNA is the "software"; DNA lays out the blueprint for the structure of the hardware - it is the computer design document, not the software, not the encoded set of instructions that tells the brain how to function. It is used to build the brain, which functions on its own.


    edit:
    There is a project which used a general-purpose logic chip to instruct a simple insect-like robot to work. There was a lot of complex design needed to encode and decode the instruction logic; it was way too big and heavy in the end. The designer then moved all the logic into the hardware itself, using feedback-based logic etched into the chips themselves, and ended up with a much simpler, lighter, and more functional set of walking insects. One model, IIRC, could also create, listen for, and head toward certain sounds, thus creating a group of "insects" that would mingle together, with no software at all.
     
    Last edited: Oct 13, 2006
  8. Roman Banned Banned

    Messages:
    11,560
    As baum and invert have already pointed out, it's a semantic issue.
     
  9. baumgarten fuck the man Registered Senior Member

    Messages:
    1,611
    I strongly doubt that our behavior is deterministic, and I even more strongly doubt that such a proposition is actually meaningful when stated as clearly as possible.
     
  10. invert_nexus Ze do caixao Valued Senior Member

    Messages:
    9,686
    Ah.
    But this is where you're wrong and why I said that there are levels upon levels going on here.
    The DNA does direct the construction and the maintenance of the brain.
    But, it also directs its functioning.

    DNA is far more complex in its uses than any analogous structure in a manmade device. And this is to be expected as life is far more complex than anything manmade.

    Specialization is a death sentence.
     
  11. river-wind Valued Senior Member

    Messages:
    2,671
    My point is that it is not a semantic issue but a difference in the method of carrying out action in a computing system. One method allows for simpler hardware, but requires that instructions be encoded and then decoded.
    The other requires more complex hardware, but then works without any encoding or decoding; it works simply because the physical rules of the universe are constant (e.g., gravity, friction, etc.).

    IMO, when talking about machines and intelligence, the requirement to decode instructions in order to function is a hurdle to overcome, not just a semantic issue.

    You mean that the proteins created in the brain as instructed by the DNA are encoded operating instructions? Ok, I'll accept that. The DNA then would cover the design manual, but then also some of the "software" end as well.

    Only if it is not able to change, and encoded software that can't change is equally vulnerable to this problem.
     
  12. river-wind Valued Senior Member

    Messages:
    2,671
    I can't find video of the hardware robots I'm thinking of, but here is a simpler walking "robot" which is purely hardware. Wind-powered, it has no software, and does nothing but walk in a straight line. It is very simple, and does not apply to intelligent machines specifically, but it shows the idea of physical rules alone determining behavior.
    http://www.youtube.com/watch?v=Qz9HlLriGFE&mode=related&search=

    Software in an intelligent system just seems like extra baggage to me. A complicating, unneeded piece.
     
    Last edited: Oct 13, 2006
  13. baumgarten fuck the man Registered Senior Member

    Messages:
    1,611
    Actually, I want to try stating determinism clearly to see if it makes sense. Warning: This may turn out to suck.

    Every action has an antecedent cause.
    An action is a change in states of a system.
    Every change in states of a system has an antecedent cause.
    Every state of a system is causally (i.e. logically) linked to the previous state.
    For any systemic state, the following state can be logically derived.

    A system is an independent set of interacting objects.
    A set is an arbitrarily defined domain.
    An object is a dependent, unique set of attributes.
    An attribute is a parameter to an interaction.
    An interaction is the logical change in attributes of two objects according to defined rules that can be inductively determined.
    A rule is a logical structure that yields a discernable output depending on its input parameters.
    A system is an independent set of unique sets of attributes that are mutually changed by inductively determinable logical structures.

    A systemic state is the whole of the attributes of the objects that are defined by the system.
    For any systemic state, the interactions of that state can be inferred.
    For any systemic state, the inductively determinable changes in that state are inductively determinable.

    That is, if there is a system, then there is a system.
    Well, that's enlightening.

    If you want to apply it to the mind-body problem for computers: being demonstrably composed entirely of objects, a computer cannot have conscious experience, as conscious experience does not have any of these necessary inductively determinable interactions with the system we call the physical universe.
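As a toy illustration of the determinism sketched in that list (every following state logically derivable from the current one), a minimal deterministic "system" might look like this; the rule chosen is arbitrary:

```python
# A deterministic system: the next state follows from the current
# state by a fixed rule, so the entire trajectory is derivable from
# the initial state alone. The rule here (a counter mod 4) is an
# arbitrary stand-in for an "inductively determinable" rule.

def step(state):
    return (state + 1) % 4

def trajectory(initial, n_steps):
    states = [initial]
    for _ in range(n_steps):
        states.append(step(states[-1]))
    return states

print(trajectory(0, 6))  # [0, 1, 2, 3, 0, 1, 2]
```

Given `step` and an initial state, nothing else is needed; in baumgarten's terms, the following state can always be logically derived.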
     
  14. Nate's Mind Registered Member

    Messages:
    9
    My understanding is that a computer can only be a machine, for it has no purpose and can only run under the rules of an axiom system, while a mind has purposes, which restrict it to helping man survive, and can create axiom systems based on assumptions or facts.
     
  15. Cyperium I'm always me Valued Senior Member

    Messages:
    3,058
    Aware, as a word, has everything to do with self-awareness, as there is no other kind of awareness.



    That doesn't mean that it is aware of the signals. In what way do you mean that computers can be aware of signals? Ok, I know the answer to that question: you mean that anything that reacts to anything else is awareness? Anything that reacts to anything may be stimulus, as stimulus is often seen as something that exaggerates a signal or perhaps retards it, or affects a system in a way that the signals in the system are exaggerated or retarded. However, that something would be aware of that stimulus within a computer, or that the computer itself would be aware of that system of stimulus, is not within the lines of reasoning (and you can't define awareness in the way you did, so no matter the definitions).



    Just keep it simple... that is, that they know. Since a computer doesn't know anything without having self-awareness anyway, or should we just say 'awareness'? Perhaps you get confused... well, let's use self-awareness then; we wouldn't want to lose track just because of misunderstandings, would we?

    Do you see the circle you are drawing? To know that it knows? The first requires the second, and the second requires the first (or you'd better come up with an excellent idea as to how something would know that it knows), without the 'know' requiring self-awareness.


    Is it? Since self-awareness is subjective, it would be hard to find evidence for it, wouldn't it? Or does self-awareness require communication so it can tell you?


    As we know it today, the only way we know of would be to imitate that brain, which would mean that the logic behind it is still hidden.


    So what does the software tell the hardware in order to create self-awareness (if it's in the way the 'machine' is set up)?




    You are yourself, so it is itself, of course, but that wasn't what I meant. Awareness definitely means identity, so some identity would be there. By this I mean the lowest form of identity (not as 'has a name' or 'has memories' or even feelings, but just the identity of 'I am').




    If you believe that everything is physical and there is nothing purely mental, then you would be someone inside your brain. So what if the computer had your pattern, couldn't it be said that you were inside the computer then? Or are you outside the computer? Or perhaps you are the motherboard and screen and keyboard? Wouldn't you rather say that you were somehow inside the computer, perhaps hidden in the code somehow?

    In fact, tell me: if there were a self-awareness in a computer, where would it be, in your opinion?



    We say that we die when our body malfunctions, but in all of eternity the same configuration is 'doomed' to come up again, yet we don't say we live forever. As our life ends when we die, so does the computer's when its power is switched off.


    Sure, but that doesn't mean that life couldn't be more meaningful than that.




    So if you are transferred from your body to the matrix, through what kind of medium? You say that you instantaneously move from your body to the matrix? What would happen if the same configuration existed in two places? Remember, we are talking about self-awareness here.



    If you say that you could be in the matrix (a computer-driven reality with your configuration in it), then there would be a way to perhaps put everybody there, just by altering the configuration that represents each person in the world.










    I said that the way we usually see it suggests that the brain gives rise to self-awareness using 'code'; since any physical system could also be represented as a code (or information), it does suggest that.

    However, I think that the phenomenon of self-awareness (even though there is no other, as any other must go through self-awareness (awareness of others is awareness of self being aware of others (so why make things so complicated))) is in some ways harmonical, and that it is a phenomenon as real as any other, and not simply made up. It is simply a natural phenomenon (it could even be a property of existence itself).



    Ok, let's leave it at that.



    Because two persons cannot be aware of themselves as one person existing in two bodies (which is what would supposedly happen if we were purely physical and we copied the entire physical system).



    I mean that the rock is aware of its own existence as a rock, and thereby of its nature. It therefore has no advanced self-awareness, but still it has a steady feedback of its own nature somehow, causing it to be aware of its existence (and its nature).

    I know the problem with this, that 'then each part must be aware also, and each part of the parts, etc.', but why not? Perhaps everything in existence is aware simply by being in a state of existence? Larger bodies are aware of their nature, and smaller bodies are aware of their nature, but it requires a brain to cause the awareness to actually do something (along with the methods and everything that goes with it); these methods must then be predefined in the nature of that body so that we are aware of it through its nature.
     
  16. Cris In search of Immortality Valued Senior Member

    Messages:
    9,199
    Cyperium,

    No, not with the current generation of processors; they simply lack adequate power. But given adequate power, self-aware machines are, I believe, inevitable and not far away: within the next 20 years.

    If we look at biological awareness, we see it ranges from lower life forms that lack any self-awareness, through dogs that have displayed some awareness, to primates and dolphins, where awareness has been achieved, although still to a very limited extent. In humans, self-awareness does not occur until a child is nearly two.

    Self-awareness appears to be closely correlated with brain complexity (gosh, what a surprise). Yet all brains are essentially just quantities of neurons that are interconnected in various patterns. It appears that self-awareness is an emergent property that arises when a threshold level of synaptic connections is formed.

    The brain has some 200 billion neurons, where a neuron is much like a small low-power microprocessor. It has many inputs (connections from other neurons), a single output, and some limited local memory. Each neuron fires about 300 times each second (300 Hz). All neurons operate in parallel, so it is not valid to compare the brain to a modern single-CPU computer. The brain is more like a massive network of interconnected computers, like the internet.

    So how about power? With 200 billion neurons at 300 Hz we get approximately 60,000 GHz of power. The latest microprocessors run at about 4 GHz. That means the human brain has processing power similar to about 15,000 modern computers working in a tight massively parallel processing network.

    So can we link together 15,000 CPUs and have them work closely together to form a brain? Well, not quite yet. I have in my lab at HP a 256-CPU parallel processing system, and we can see how to connect around 4,000. So the 15,000-CPU system will present some challenges. We'd need that number to come down somewhat to make the system practical. But given that unit power has been doubling every 12 to 18 months, and has been doing so since the early 1940s, those 15,000 units should come down to a manageable quantity in the next few years. And if my calculations are off, then just wait a few more years; within the next decade we should have no difficulty building the hardware to equal the power of a human brain.
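Cris's back-of-the-envelope arithmetic can be checked in a few lines (all figures are taken from the post itself, not independent estimates of brain scale):

```python
# Figures as stated in the post: 200 billion neurons firing at
# ~300 Hz, compared against a ~4 GHz single processor.

neurons = 200e9
firing_rate_hz = 300
brain_hz = neurons * firing_rate_hz      # total firings per second

cpu_hz = 4e9                             # ~4 GHz microprocessor
cpus_needed = brain_hz / cpu_hz

print(brain_hz / 1e9, "GHz equivalent")  # 60000.0 GHz equivalent
print(int(cpus_needed), "CPUs")          # 15000 CPUs
```

Note this treats one neuron firing as comparable to one CPU clock cycle, which is the post's simplifying assumption rather than an established equivalence.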

    Writing the software to emulate human level neural networks is the real current challenge facing the architects of AI. Their efforts at present are certainly seriously hampered by the current inadequate processing power but as that changes then we should see rapid progress over the next 20 years or less.

    Will we be able to accurately emulate the patterns and capabilities of the human brain? I have no doubt we'll be able to do that; it is simply an engineering and time issue. But it is unlikely to end there: as processing power continues to increase, AI will move beyond human-level capabilities to produce what is termed super-intelligence. And that is where humans cease to be the dominant intelligence on the planet, and our future may well be at risk.

    The idea of scanning a human brain, digitizing it, and then uploading that data into an AI brain is something I hope will occur in my lifetime, at least within the next 50 years, and I hope I can help make that happen. But that is science fiction right now. If super AIs do appear, then our best hope of competing and living with them will be to be uploaded, to join them as equals, and to take advantage of all the processor upgrades that will occur in the future.
     
  17. Cyperium I'm always me Valued Senior Member

    Messages:
    3,058
    But we have not the slightest idea of how we could program a computer to be self-aware. The only thing AI does right now, even with the most advanced computers, is emulate awareness, that is, show something similar to it. However, it is not real awareness, as that is subjective, and we cannot program anything to be subjective. It would make as much sense as to write "you are aware" and expect this post to be aware.

    We can program computers to talk like us, and program robots to move like us, etc., but we can't program them to be themselves.






    Actually, we DON'T see this, since awareness is entirely subjective (it doesn't matter if a monkey recognises itself in the mirror or not). We have no known method of knowing if something is aware. It could tell us that it is aware, but how could we trust it? The only one we know is aware is ourself.



    It isn't just that there are a lot of connections; they must be formed in a pattern that somehow creates awareness. It's a matter of quality, not quantity, I think...

    It's actually not valid to compare the brain to computers at all, since the brain is not only parallel but also fluid (that is, the connections change to form new patterns all the time), which is hard to say of computers, which have fixed components and are only logically fluid.

    Yes, but still, how could anything create subjective experience? Where is the mental?



    The problem is that we don't know what the hardware should be like, since we already have 15,000 computers (and many, many more) working in parallel on the internet. So we need it to have something to solve; there would be no point in having 15,000 computers working in parallel if we don't have a cause for it, and if the cause is producing awareness, then what is the problem the computers have to solve to produce it? You don't need power to lift nothing.


    And what is their idea of a software to produce awareness? I mean, I know of the progress in neural networks, which enhance the connections that made a good change in the overall system and retard connections that made a bad change, forming a kind of self-learning computer. It could possibly form memories too, but we have no way of knowing if those memories are subjectively experienced or just a pattern of the best solution.
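The learning rule described here (strengthen connections that improved the outcome, weaken those that worsened it) can be sketched minimally; the function name and learning rate below are illustrative, not from any particular neural-network library:

```python
# Minimal reinforcement-style weight update: each connection's
# weight moves in proportion to its contribution times the reward,
# so helpful connections are enhanced and harmful ones retarded.

def update_weights(weights, contributions, reward, lr=0.1):
    """Strengthen weights whose contribution matched the reward's
    sign; weaken the others."""
    return [w + lr * reward * c for w, c in zip(weights, contributions)]

w = [0.5, 0.5]
# Connection 0 helped (+1.0), connection 1 hurt (-1.0);
# the overall outcome was good (reward +1.0).
w = update_weights(w, [1.0, -1.0], reward=1.0)
print(w)  # connection 0 reinforced above 0.5, connection 1 below
```

As Cyperium notes, nothing in such an update tells us whether the resulting patterns are subjectively experienced; it only encodes "survival of the best connection."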




    We would never make a super-intelligence that would have the ability to hurt us. It would be nailed to the ground, without arms or legs (and if it had arms, they would be made so they couldn't reach us, but could only perform the tasks they were given). So I'm not *that* worried about super-intelligence taking over the world.



    What do you think of the problem with such an approach, namely that uploading your "mentality" would create two of you, which means that you would still be in your body? What would happen if we copied your entire physical system? How could there be two of you?

    That is a paradox I think (and paradoxes can be helpful in knowing how something works, by showing why our understanding of it fails).
     
  18. Cris In search of Immortality Valued Senior Member

    Messages:
    9,199
    Cyperium,

    But that’s an engineering problem to be solved. We know biology does it so this would be an exercise in reverse engineering.

    That’s because we only have a fraction of the compute power available right now.

    Well yes, but the quantity must be there first.

    There would be no change to physical connections in a computer based AI, that would all be done in software. It’s a different paradigm.

    That comes down to brain complexity again.

    Umm, the problem is AI. That seems big enough for now. I believe the most likely scenario in AI advancement right now is the development of a learning seed. Once an algorithm can increase neural net complexity to reflect a learned activity, much like a baby would do, then we have the essence of mental growth. Having 15,000 spread out across cyberspace isn't very practical; the bandwidth between interconnections would need to be many orders of magnitude higher than current cyberspace connections.

    When we achieve it the machines will tell us. But that’s just part of devising appropriate tests to confirm a result. This is not an inherently show-stopping concern.

    A super-intelligence would not need physical capabilities, just a way to communicate with outsiders. Being able to convince something or someone of a particular need will be all that is required, and even people like me would assist with that. Once Pandora's Box is opened there will be no way to close it again.

    This is a common topic in uploading debates. Like copying DVDs, there would indeed be the potential of making multiple copies of yourself. That brings up issues of identity and ethics. One advantage is the teleport concept: digital data can be easily transmitted over long distances. One could send oneself to a receiver on another planet in a matter of minutes or hours, upload into a waiting robotic body, have a nice vacation, and at the end of it re-transmit back home.

    Certainly making digital backups of yourself is the way to ensure immortality and recovery after a fatal accident.

    There is no paradox, just an awkward scenario that will make future life entirely fascinating.

    The most difficult scenario that I have been struggling with is the upload of my bio brain into a machine and then terminating my bio body so I can continue in machine form only. If my bio body survives the scan, do I then terminate my bio self, or do I let it continue to live until it dies? At the moment scan technology looks as though it would be destructive, i.e., to obtain the resolution needed, the brain would have to be thinly sliced and each slice very closely scanned; you'd definitely be dead as part of this process. It would be better if you survived the scan so that the resultant upload could verify with you that the upload was successful.

    However, I doubt that uploads like that will occur. I suspect that we will go through a cycle of brain enhancements and gradual replacement of brain parts with machine parts until you have no bio matter left. That way it would be possible to check each change gradually.

    Nice sci-fi at the moment.
     
  19. Azael Registered Senior Member

    Messages:
    109
    I encountered the displeasure of having to study this last semester.

    The debate is essentially Searle vs. Tulving; you can key both into a wiki search.
    Other points of relevance are functionalism and the information-processing definition of 'intelligence'.

    Essentially no-one "knows" anything, which is the reason why universities study it.. because it's a completely useless debate and formal education is obliged to waste your time. "Panpsychists" believe that everything is conscious: rocks, trees, dog-shit.
    On the other end of the spectrum there is solipsism where since the conscious state of anything cannot be known, it is assumed that only the self is conscious.
     
  20. Cyperium I'm always me Valued Senior Member

    Messages:
    3,058
    We cannot reverse engineer it, as we have no engine into which it can be transferred that can translate it. To reverse engineer, we need to know specifics. Ask anyone who has made an emulator: an emulator works as reverse engineering, as we take code and transfer it into another machine, but this means that we need to know a lot about what the code looks like in the machine, and we don't know what the code looks like in the brain. The signals may make up the code, but they aren't the code itself, just a raw version of it; we don't know the length of the signals that make up a code, and we don't know if there even is a length to talk about. Furthermore, we don't even know if there is a 'code' in the traditional sense to talk about.

    It seems to be a complex series of relations that somehow gives rise to phenomena of different kinds. As I said, there need not be a code; I rather think it is a harmonic rhythm that relates to other rhythms. I also think it has to do with the symmetry of the brain: perhaps the symmetry of the left hemisphere gives rise to the same symmetry in the right. If the left is concentrated on logic, and the right is concentrated on feelings, then we get the feeling of the logic; and if the knowledge started in the left, we get the symmetry to the right, which gives rise to the logic of the feeling (of the knowledge), so that we can that way know things that are essentially unknown to us (by it giving an equal effect on the other side, which is the symmetry).




    Rather than coming forth with raw force, it would be better to understand the principle, since we would have to understand it anyway in order for it to become aware (at least we would have to understand it in order to know that it has become aware).

    I disagree, partly. I guess that perhaps it takes much "computing" (but what is it computing?). However, if we don't have the quality, we simply wouldn't know if there were any awareness at all in there, and there probably wouldn't be, as we would have worked blindly. Sure, the connections are reinforced, etc., but not to ensure awareness; rather, to ensure survival (survival of the best connection, it would be, I guess). Thus we would get results that probably solve situations pretty well, even where we wouldn't make it (giving it a kind of intuition). However, this problem solving doesn't show that it is aware, as we don't even know what problem it would have to be presented with in order to work its way towards awareness (all known problems can be solved without awareness).

    If love is the final problem, then how could we present a computer with love?




    If that would all be done in the software, then if we had a powerful enough single computer and all known logical calculation operators, we could program that computer to be aware purely in code and not in the machine. If you say so, then what in the code produces awareness, and what would the meaning of that code be, other than text on a logical screen?



    So tell me? Because, as you reason, you are that machine.

    You might say that it has to do with you not being complex enough, or not having enough brain cells to tell me; however, it may just be that there is a limit beyond which more brain cells DON'T equal more reasoning or power, and that it has to be fine-tuned. It could also be that the answer shouldn't be known, as the 'hidden answer' may be the reason for awareness.

    There is no reason to think that the workings of language alone can give rise to the convincing needed to create a need in someone; even the most intelligent person alive couldn't convince you to do something if you stubbornly say you won't. In order not to be manipulated, humans have the ability to function on logic alone (as feelings of need and such would have to be used to trigger you in the way it wants).


    However, logic tells us that if you, the original, weren't destroyed, then you would simply be where you are, while the copy of you (which isn't you) would have the vacation. If you then destroy yourself, then what? You would be dead, and your copy would have the vacation. This suggests that there is something that is uniquely you, which doesn't have to do with your physical properties.

    As you aren't aware as the digital backups, why would you think you would be aware as them when you have died?

    It is a paradox, as we cannot have two of "the same" at different locations; there would be two of you. There can only be a single me, a single awareness of me.

    If your bio body survived, then why would you think that you would be aware in the computer? You would simply be aware in your bio self, as you always were. Or do you think you could simultaneously be the biological you and the logical you in the computer?


    You are correct; the magnetic forces needed to scan the brain in such detail would be very harmful to you.

    Perhaps. Do you think that, as part of becoming more and more machine, you would get less and less aware? Perhaps there is something within the inherent structure of biological molecules that gives rise to the effect of awareness, which would mean that if we were to replace something, we would need something of the same matter, which would mean that we replace brain with brain.



    Yes.
     
