Intelligence, Instincts and Reason

Discussion in 'General Philosophy' started by one_raven, Jul 24, 2005.

  1. one_raven God is a Chinese Whisper Valued Senior Member

    Messages:
    13,433
    This obviously needs to be revised to make it more coherent and readable, but I would like to know what people think of the reasoning behind it...

    I don't think that any discussion about Artificial Intelligence can be complete without broaching the subject of consciousness. There are countless definitions and ideas about what consciousness is and what its implications are. People talk about expanding consciousness, deeper consciousness, wider consciousness, focused, internal, external, greater and even social consciousness. It seems prudent to define exactly what I am referring to. Consciousness, in the context I am using it, refers to perhaps the most rudimentary, straightforward, secular definition of the term.

    Consciousness: an entity's knowledge and comprehension that it not only exists, but exists as a complete and separate whole apart from the rest of existence; in short, self-awareness.

    As in many fields of science, the science fiction writers preceded reality. Today, many argue, we are on the verge of creating Artificial Intelligence. There is a great deal of debate regarding what does or does not constitute "true" Artificial Intelligence. My view on the subject, I suppose, can be called fundamentalist. In order for something to be considered "true" Artificial Intelligence, it must have the ability not only to learn new information, but to create new ideas and concepts from what it has learned. Productivity and commercial pragmatism aside, if an entity (be it a robot or a simple computer program) cannot process brand-new information that it was not programmed and prepared to process beforehand, comprehend that new information, add it to the base of its knowledge and manipulate it to form new ideas, then I do not consider that entity to be intelligent.

    In order for an entity to be able to process new, unexpected information, the entity has to be able to discern the reality of the world around it. It has to understand that the box in front of it is an object distinct and separate from itself, and that it has properties specific to it. The entity also must have a desire (yes, I am using that word loosely) to learn. Something has to give the entity the impetus to take that information in and make use of it, rather than simply sit and wait for instructions. With the desire to learn comes the desire to continue existence. In short, for something to have intelligence it must have consciousness, free will and a desire for continued existence, or survival.

    I am not alone in my assessment. We are all familiar with what has become old hat in sci-fi: the archetypal artificial intelligence that turns on its maker. 2001: A Space Odyssey, The Matrix, I, Robot, etc. The basic concept is that if you create an entity with consciousness, free will and the ability to learn, it is inevitable that you will lose control over the entity (if you ever had it in the first place). Some argue that the original idea is seeded in Mary Shelley's "Frankenstein; Or, The Modern Prometheus". It is no coincidence that the concept closely parallels the story of God and Man. God (the creator of the intelligent species, Man) loses control of Man when Man gains the knowledge of Good and Evil by partaking of the fruit in the Garden of Eden, thereby gaining both the ability to learn and the free will to disobey his master. The story of Original Sin kicked off an eternal struggle between Man exercising his free will (sometimes to his peril) and God attempting to imbue him with "morals". It may also seem a familiar story if you have ever met a teenager.

    The authors of the various incarnations of the archetypal story have usually wanted to impart a moral upon the reader. It often took one of two shapes: mankind is evil, and this higher intelligence sees it; or the cold, godless intelligence of the entity is inherently evil and wants to rule the world (a comment on Old World Communism, perhaps... maybe another paper). Of course, the story has to be thrilling and gripping. Some form of struggle ensues, and the result is that the machine must be destroyed or it will destroy.

    In his classic short story "Runaround", Isaac Asimov introduced his Three Laws of Robotics. In one sense, the Three Laws are man's parallel to the Ten Commandments of Abraham's God: an attempt to integrate a moral code into his creation (if only for his own protection) and remain in control of it. The Laws form a strict hierarchy, as the sketch after the list illustrates.

    1.) A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
    2.) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    3.) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
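
    For those who think in code, here is a minimal sketch of that strict priority ordering. The Action type and its fields are hypothetical, invented purely for illustration; this is one way to model the hierarchy, not a definitive implementation.

        # A minimal sketch of the Three Laws as a strict priority ordering.
        # The Action fields are hypothetical, invented purely for illustration.
        from dataclasses import dataclass

        @dataclass
        class Action:
            harms_human: bool     # would violate the First Law
            disobeys_order: bool  # would violate the Second Law
            endangers_self: bool  # would violate the Third Law

        def law_violations(a: Action) -> tuple:
            # Lower tuples are better. Position 0 (the First Law) dominates,
            # so any First Law violation outweighs the other two combined.
            return (a.harms_human, a.disobeys_order, a.endangers_self)

        def choose(candidates: list) -> Action:
            # Pick the candidate whose violations are lexicographically smallest.
            return min(candidates, key=law_violations)

        # Example: self-sacrifice in obedience to an order beats disobedience.
        print(choose([Action(False, True, False), Action(False, False, True)]))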

    It is widely, if not universally, accepted in the field of modern Artificial Intelligence research and development that any Artificial Intelligence must have some adaptation or other of Asimov's Three Laws somehow "hard-wired" into the system.

    Humans also have a set of fundamental "hard-wired" laws. The one thing that all life seems to have in common is the basic instincts: survival of the individual, survival of the species, and replication or procreation. Humans, however, routinely disregard this basic set of instincts. Why? Because they can.

    With consciousness comes the capacity for intelligence. With intelligence come both the ability to justify and the desire to learn. With the desire to learn comes a desire for continued survival. The desire for continued survival, combined with the ability to justify and free will, results in an entity that cannot be enslaved, except by means of restraint.

    Creating machines with a limited ability to process information, machines that merely appear intelligent, is one thing. Creating a truly intelligent entity and expecting that entity to obey a set of controlling rules is folly. Either you have the appearance of intelligence, which can be enslaved, or you have intelligence and free will.
     
  2. §outh§tar is feeling caustic Registered Senior Member

    Messages:
    4,832
    Your description of consciousness is incomplete. Let us look carefully at the wording to find some of the inherent problems:

    Firstly, what is an 'entity' (as used in this context)? Secondly, what does the "comprehension" of existence entail? That is to say, how does an entity come to the knowledge that it (a) exists and (b) exists as a complete and separate whole apart from the rest of existence? Also, what is the meaning of 'exist' (as used in this context)?

    When you say that the entity recognizes its existence as separate from 'the rest of existence', do you characterize this as perspective or knowledge? I suspect that without language, many such arbitrary distinctions would hold no power over our perception.

    The main problem I have with your definition is that it presupposes existence. That is, before the entity was conscious, the entity existed. But how have you established this? The answer will lie in whatever you meant by 'exist' in the original post. Once we have gone through this, we can see how it applies to the rest.
     
  3. androgen Registered Senior Member

    Messages:
    44
    I think you've got some of it backwards. I don't think the desire to continue existence comes from the desire to learn; I think the desire to learn comes from the desire to continue existence.

    So you create this AI by hard-wiring the desire to exist into it... that implies we have assumed you actually CAN hard-wire something into it. At the same time, you claim the AI will UN-hard-wire itself from the need to treat humans in a certain way. If it can do that, why would it not just un-hard-wire itself from the desire to exist? If it does the latter, it will avoid ALL conflict... if it does the former, it pretty much jeopardizes the only thing that will, at that point, matter to it (by putting itself on a collision course with man).

    Also note that people commit suicide all the time... and in general (sorry, no cite for that) having more intelligence may only make you more likely to do so.
     
  4. Onefinity Registered Senior Member

    Messages:
    401
    I have thought about "artificial" intelligence in the context of a theory of reality I was working on. My conclusion is that in order for a machine to develop consciousness, it would need to have what living organisms have. That is, it would need a "skin" of existence, something that affords it an "inside-out" perspective rather than just being a processor. This, I believe, requires the further development of quantum computing.

    I like to think of the universe as being a digital-analog-digital sandwich.

    1. The first digital is the way that the unstructured cosmos (what David Bohm called the "implicate order") gets distorted into form, the observed universe as we know it (Bohm's "explicate order"). This distortion, or form-creating process, lies in every living thing, starting at the level of the cell. The very structure of a living thing is self-organization and the maintenance of a boundary, within which it produces the things it needs within its own structure. Yet it is also coupled with the "outside world". It is at this level that curvatures or deflections occur in the undivided order, which constitute the relationships that, in turn, comprise the living (and the conscious) thing. These distortions are binary in nature. Thus the term "digital".

    2. "Analog" in this context is the way that the living thing, unto itself, interacts with the world. Experiences are not broken into bits, but are experienced "whole." There are no shortcuts, everything is represented in 1 to 1 ratios. Remember that the world's first computers, in the 1940's, were analog computers.

    3. The second digital is the binary way that the living thing processes information: via thresholds, nervous systems, activated and non-activated states. In the case of mental processing, there are both digital and analog aspects.

    My theory is that a computer would need to be constructed to participate in this process, i.e., be conscious (which, I think, also means to be living). The strategy of producing ever-more-powerful supercomputers won't, I believe, ever lead to artificial intelligence, because it doesn't accomplish this.

    Instead, I think that we would need to wrap a quantum computer around an analog computer. The reason is that we need to (a) give the computer an environment or a simulated environment, (b) allow the computer to construct form subjectively and interact with it in ways that it cannot completely foresee (which is what we do), and thus (c) engage in an ongoing construction of its own sense of form and boundary. Perhaps with a digital computer interfacing there, too, to efficiently translate what's going on to the humans who built it.

    If you're unfamiliar with quantum computing, it is a very new science that taps directly into the indeterminate nature of seemingly solid existence, utilizing the simultaneous existence and non-existence of a given state as a basis for storing information. The units are called "qubits". A register of n qubits can exist in a superposition of 2^n basis states, so describing it classically requires exponentially more information than the same number of regular bits (0s and 1s). The challenge has been maintaining an environment in which qubits can persist without being destroyed through observation (so to speak). So far, only a few have been operated at one time. But the important thing here is that it is a direct channel into the very nature of existence that I think "intelligence" needs in order to exist.
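
    To make that exponential growth concrete, here is a minimal sketch in Python. This is standard textbook arithmetic about state vectors, assuming nothing about any particular machine.

        # A minimal sketch of why qubit state grows exponentially: an n-qubit
        # register is described by 2**n complex amplitudes, versus n values
        # for n classical bits.
        import numpy as np

        def uniform_superposition(n_qubits: int) -> np.ndarray:
            """Return the state vector of n qubits in an equal superposition
            of all 2**n basis states, e.g. (|00> + |01> + |10> + |11>)/2
            for n = 2."""
            dim = 2 ** n_qubits
            return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

        state = uniform_superposition(3)
        print(len(state))  # -> 8 amplitudes for just 3 qubits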

    Here's an interesting question that would arise: would a computer so created be only "artificial" intelligence, or would it really be opening up a new door in the natural world?
     
  5. Quantum Quack Life's a tease... Valued Senior Member

    Messages:
    23,328
    Without getting too embroiled in the detail, I wonder: when you use the word "desire", what do you mean?

    I also notice the absence of the word "feeling".

    The other most important thing to consider is that consciousness involves suffering. The cost of living is suffering. Desire is the outcome of suffering, the desire to relieve suffering, whether it be a sore backside because you have sat too long in the same spot or the need for intellectual release from the suffering of intellectual isolation.

    How would a computer program feel the effects of suffering, and thus be motivated to relieve that suffering?

    Artificial intelligence without the ability to feel its own suffering would, IMO, be quite insane... a mere calculator, an intelligence without the wisdom born of suffering... etc.
     
  6. Onefinity Registered Senior Member

    Messages:
    401
    What do you think feeling is? It is the chemicals in your body interacting with your thoughts. As I suggested above, in order to have consciousness, something needs to be living. And if something is living, it has a body. If it has a body, it might "feel", however you like to define that. However, I would also ask you about your comment regarding suffering. Do you think that a chimpanzee or a rat suffers from its consciousness? I don't think they do. I think that suffering is something that goes along with the human tendency, or need, to create mental objects, and to create the self as an emotional object. But I don't think that this tendency applies to other conscious animals, all of which are more intelligent than a mechanical computer.
     
  7. Quantum Quack Life's a tease... Valued Senior Member

    Messages:
    23,328
    I must admit, Onefinity, I didn't read your prior post very thoroughly, and have had another look at it...

    The concept of suffering is that if one is not suffering, one is UNconscious.

    Maybe it's a little extreme to use the word suffering. But self-awareness is the awareness of what?

    If you can feel it, I would contend that it is part of the suffering-pleasure cycle... putting forward the notion that it is our desire to relieve our suffering that drives us intellectually as well as bodily.

    So for a computer to achieve consciousness, it has to be aware of its stresses and pain. Now the problem is: how can a mechanism become aware of pain or fear as a feeling, and not a computation?

    By the time a computer is able to perceive its own suffering, it would be virtually organic, so the term "artificial organism" might be more appropriate. Maybe.

    BTW, please accept that I have not the slightest idea what I am talking about... and work from there...

     
  8. one_raven God is a Chinese Whisper Valued Senior Member

    Messages:
    13,433
    The "Intelligence" in question would be the entity.
    In this context, then, an "entity" would be any object with the ability (real or perceived) to reason.

    I wanted to say that existence, in the context I am using it, implies a simple, tangible, material existence.
    This limitation, however, denies the existence of theory, mathematics, history, etc.
    Existence, then, must be characterized as anything that is qualifiable, be it real or imagined.
    This distinction brings yet another aspect to light: the entity must "know" that it is not simply an abstract idea or notion, but rather a tangible item.

    I agree that it is an arbitrary distinction.
    Perspective requires knowledge and vice versa.
    They are really two parts of a whole, no?

    I presuppose existence, and I do not see this as a problem.
    On the contrary, to not presuppose existence is a problem in my opinion.
    If you are attempting to create an Artificial Intelligence, I believe, as I pointed out, that the entity must be able to distinguish itself from "not-self", and to do that, one must presuppose existence.

    ---

    I am not stating that the desire to continue existence necessarily results from the desire to learn.
    I do, however, believe that the desire to learn must impart a desire to continue existence.
    If you DO have a desire to learn, you would necessarily need to work towards continuing your existence in order to continue learning. If you do not exist, you cannot learn.
    On the other hand, an entity does not have to continue learning in order to continue existing.

    What I was implying was that Intelligence would lead to the desire to continue existence.
    "Hard-wiring" this desire would be not only unnecessary but pointless, given what I pointed out about "hard-wiring" in humans and other "higher intelligence" animals being overcome through reasoning and justification.

    That was exactly my point.
    Suicide goes against the grain of our most basic "hard-wired" instinct of survival, and against our own self-interest.
    This is why I think "hard-wiring" anything into an entity becomes moot once that entity gains Intelligence, and therefore the ability to reason and justify its way around its instincts.

    ---

    Onefinity,
    I don't mean to discount your theory at all.
    Please forgive me for passing over it at least for the moment.
    The question I was asking, however, assumes not only that creating true Artificial Intelligence is possible; it comes from the standpoint of its having already been achieved.
    To be perfectly honest, I am not convinced that it even IS an attainable goal at all.

    Or is it your thoughts causing chemical reactions in your body?
    Chicken or egg?
    Granted, PHYSICAL feeling (i.e. I am hungry) is a direct reaction to chemical processes in your body, but is jealousy? Is anger? Is serenity? Or are these things mental states that cause the production of endorphins and other chemicals?
    If they are mental states, then an Artificial Intelligence may not experience the same euphoric states that come along with certain emotions, but it very well could experience the same mental states that cause the chemical production processes in our bodies.

    I do think that rats "suffer", though they may not "anguish" as we do.
    Isn't the search for pleasure (food, sex, warmth etc) simply the search for cessation of suffering?

    ---

    Quantum,
    You bring up something I hadn't yet considered.
    Popular depictions of Artificial Intelligence generally have it as emotionless, and the acquisition of emotion is what presumably causes it to become conscious.
    I never really accepted that an entity would require emotions in order to be conscious or sentient.
    This is why I said I was using the term "desire" loosely.

    However, for any entity to regard its own existence as something worth preserving, it must have some sort of value system.
    Does a value system (one that must inherently be dynamic, as it must be reconsidered as new information is learned) require something other than raw intelligence, not only to have and maintain, but to value in and of itself?
    Would, perhaps, the entity's "desire" to continue its own existence automatically lead to the development of a value system (as I believe it does in humans)?
    What, in humans, causes us to assign seemingly abstract values to objects? Such as assigning a value to worthless sentimental objects that is even greater than the value assigned to at least some life; people are often willing to kill (if only complicitly) to protect their possessions.
    Would Artificial Intelligence fall to the same abstract value systems as humans?
    If so, I would assume that they would need to justify attachments that would not necessarily be quantifiable.
    If they do that, could they be said to have emotions (if even at a cursory level)?
    Thanks, Quantum, you have given me much more to think about.
     
  9. Quantum Quack Life's a tease... Valued Senior Member

    Messages:
    23,328
    One Raven, I am sure you would agree that for an entity to determine that a surface is "hot", it has to do more than just measure its temperature. I think this is what I am trying to get at.

    I have often thought about machine consciousness and how that would be possible. To me it is a vexatious issue, because I wonder whether it is simply my presumption of what consciousness is that is the limitation, more than the actual possibilities of machine consciousness.

    A bit like asking:
    "Is a rock of salt self-aware?"

    Is there any way of determining the answer to this question?

    What are the criteria I am using to determine the condition of awareness?

    Could the rock have what I call a simple awareness, an IS awareness, without our being able to discover this due to the lack of animation?

    Does awareness require animation and articulation [expression]?


    Are self-awareness and consciousness essentially the same state? If not, what is the distinction?
     
  10. Quantum Quack Life's a tease... Valued Senior Member

    Messages:
    23,328
    Possibly we are thinking inverse to the reality here... maybe it is worth thinking of awareness as an infinite fog surrounding us. What we are conscious of is only what takes on form from that fog, but the fog is our omni-awareness, awareness of everything. Our degree of awareness is how we see common or shared form in that fog of confused awarenesses.

    For example, we are aware of the suffering of starving persons in Africa logically, and from pictures we see on TV and in newspapers, but are we also aware of it in a "foggy" sense? Is their suffering a part of what drives us at a subconscious level, where we strive to feed ourselves lest we become like them?
    We think that we can create a sense of consciousness by adding together a structure of algorithms and sensors. However, I tend to think human consciousness works more from the infinite down to the finite, rather than from the finite up to the infinite. Our awareness is a finite variable attained by organising the infinite into tangible finite form.
    Put it this way:
    A single skin cell is infinitely aware, and to achieve consciousness it must make this infinite awareness finite by finding form in the infinite confusion that is infinite awareness. Thus, by finding form, it has consciousness of that form against a backdrop of infinite confusion.


    But define the term "desire"?
    In human terms it is the desire to relieve suffering, possibly, but how can a machine determine its suffering in the sense of feeling it?

    If given the infinite to play with, I would think yes would be the answer.
    Sometimes I use the analogy of attempting to contain a nuclear reaction using small buckets of water. A human is constantly attempting to put out fires... and maintain his sanity... his life being essentially one of desperate survival from the moment he is born, constantly striving in his quest to sustain that survival.

    Edit: NB, your question would take many books to answer, One Raven, I feel.

    Where it all gets bogged down, in my opinion, is in the ability to self-recognise what is, and make a claim that it IS. Not a claim based on logic or programming, but based on IS.

    To be a mirror of reality and recognise that reality exists. This is the bit I get stuck on: how can a machine recognise that reality exists, and reflect that reality ad infinitum, without being receptive to the infinite and being emotional based on what it is aware of? Such is the beauty and amazing cleverness that is human consciousness and awareness...
     
  11. enton www.truthcaster.com Registered Senior Member

    Messages:
    454
    I have to relate conscience to consciousness in like manner to precious too. Why? Because their etyma show the con(cept) in any pre(cept).

    Intellig(ence) is proven in any sci(ence). (In)stincts and (Rea)son are not always seen cohabiting in hum(ans).
     
  12. one_raven God is a Chinese Whisper Valued Senior Member

    Messages:
    13,433
    I haven't the faintest idea what you just said.
     
  13. enton www.truthcaster.com Registered Senior Member

    Messages:
    454
    I'm sorry, raven, if you haven't got what I intended to say, but it might amuse others; I played upon word combinations.
     
  14. Onefinity Registered Senior Member

    Messages:
    401
    Look at the attributes of what we usually call living/aware things: they are self-maintaining (re-producing their internal components), exhibit operational closure (they have some kind of membrane), and are structurally coupled with their environment (they co-define their boundary with the environment with which they transact energy and information). In sum, they have what I like to call an inside-out. Now, a rock of salt does not exhibit an inside-out at all. If you cleave it in half, its essence is retained: it is now two rocks of salt. It does not DO anything. Thus, we have no reason to attribute awareness to it. It is, however, part of the membrane of living things (just like all material form, even air and space, but that gets into my "intimate space" hypothesis, beyond the scope of this discussion).
     
