"The Artilect War"

Discussion in 'Intelligence & Machines' started by Esoteric, Jan 3, 2004.

Thread Status:
Not open for further replies.
  1. kmguru Staff Member

    Messages:
    11,757
    First of all, I am talking from experience. That does not mean I am an expert - even though I get paid as one. The experience comes from two areas. One was my son's migraine and the resulting activities with a series of top doctors and neurologists - research, analysis of brain scans, EEGs, and heavy-duty discussions with experts - to the point that I now know as much about the current understanding of how the brain works as any expert out there, and perhaps more.

    The second connection comes from my work in advanced automation for complex chemical manufacturing, and in robotics, vision systems, and emulating certain human processes for aerospace - which required writing adaptive algorithms for weather changes and other functions, partly based on precanned if...then...else data and partly on a fuzzy, self-learning decision matrix.
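The hybrid described above - hard-coded if/then/else rules layered with a fuzzy decision step - can be sketched in a few lines. This is a minimal illustration only; all function names, thresholds, and units are invented and are not taken from any real control system.

```python
# Hypothetical sketch of a hybrid controller: precanned if/then/else rules
# plus a simple fuzzy decision step. Names and thresholds are invented.

def fuzzy_membership(x, low, high):
    """Degree (0..1) to which x belongs to the range [low, high]."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def decide_throttle(temp_c, wind_kmh):
    # Precanned hard rules come first (the if...then...else part).
    if temp_c > 90:
        return 0.0          # hard shutdown; no fuzzy override
    # Fuzzy part: blend "hot" and "windy" memberships into one output.
    hot = fuzzy_membership(temp_c, 40, 80)
    windy = fuzzy_membership(wind_kmh, 20, 60)
    risk = max(hot, windy)              # fuzzy OR of the two memberships
    return round(1.0 - 0.5 * risk, 3)   # scale throttle down with risk

print(decide_throttle(25, 10))   # benign conditions -> 1.0
print(decide_throttle(95, 10))   # hard rule fires  -> 0.0
```

The hard rules give predictable safety behaviour; the fuzzy layer handles the in-between conditions that a pure rule table cannot enumerate.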

    When I try to merge the two, what look like parallel paths - I see a convergence somewhere way out there. I have faith, but as I learn more about new areas such as cellular automata, cognitive engineering, and fractal science, the merge point still eludes me. I am pretty sure it can be done, but not the way you read about it, because those accounts each come from a single vantage point. It is a combination of technologies and sciences that will get us there. As you said, it will come from unexpected sources....
     
  3. Pollux V Ra Bless America Registered Senior Member

    Messages:
    6,495
    Asimov's Three Laws of Robotics put to rest any fears I had of automatons taking over the human universe. Make the laws a part of their reality that cannot be changed. For example, we humans have to breathe every second or two - there's absolutely no way to get around it. No one's even tried. It should be the same thing for robots, but with the Three Laws substituted for breathing. They have to remember and practice the three laws, or die, just as a person dies when he or she stops breathing. The laws should be an inherent trait of artificial life, just as spines are inherent to vertebrates. No matter how intelligent we get, at most we'll only be able to substitute for breathing or substitute for our spines; likewise, no matter how intelligent the robots get, they'll be unable to work past the three laws, or even desire to, for that matter.

    As for the article, or book, there's just way too much conjecture. I don't think anyone can really predict what the world is going to be like twenty years from now, because no one twenty years ago successfully did the same thing. I do agree with him when he says that robots are going to become heavily involved in politics, just as oil is now, and just as slaves were to the United States in its antebellum years. Should they be allowed to vote? Should they be allowed to make money? Are they really people? White Americans asked the same questions about Blacks, and it is easy to postulate that the same questions will be asked about robots. Hopefully these questions and disagreements will not lead to power struggles or civil wars. The American Civil War over slavery was not apocalyptic (in spite of how horrific it was - something like 1 in 30 Americans was a casualty), but the next civil war, over robot suffrage, robot citizenship, etcetera, may be exponentially worse once nuclear weapons are added to the equation.
     
  5. kmguru Staff Member

    Messages:
    11,757
    As far as human breathing goes, the caveman could not stay more than a few minutes under water. He must have thought that man could never live under water or out in space. Since we worked around that, future robots can do the same by redesigning those Asimov rules.

    As for conjecture, we can predict a lot of stuff for the next 10 to 15 years because it takes that many years for a new idea to take shape. We can even predict the next 50 years where complex science will add value. There are not really any earth-shattering inventions or discoveries that change the planet overnight. Remember cold fusion - which could have, but did not. It is elbow grease that produces stuff like carbon nanotubes, high-density DVDs using purple lasers, or OLED-based big-screen TVs.

    And do not forget, we may end up joining the caveman if a big asteroid hits us in the next 30 years, since we do not have any protection yet - but we are willing to spend our wad screwing around, I mean going to Mars in 2030.


     
  7. eburacum45 Valued Senior Member

    Messages:
    1,297
    Even if Penrose is right (and I don't think he is), the creation of self-conscious AI is not ruled out; if it is necessary to incorporate quantum effects into a conscious computer, then this will be done -
    what law says that a machine could not have microtubules in its design?
    Obviously, to make a human-like conscious entity, every detail of the brain must be emulated in some way.
    It might happen that other lines of research produce apparently conscious entities before that is possible... if a machine consistently passes the Turing test, then it will appear to be conscious, even if it is just a glorified search engine.
    An apparently conscious entity is effectively identical to a really conscious entity, as no one can prove the existence of consciousness in any entity outside of themselves anyway.
     
  8. eburacum45 Valued Senior Member

    Messages:
    1,297
    And I really don't think that anything like the Three Laws will be possible in practice...
    a thinking machine can't also be a pre-programmed machine.
    Animal instincts provide a set of nested imperatives, but the behaviour of an animal following its instincts is very different (it seems to me) from the behaviour of a car-assembly robot following its program.

    When you get to human level and beyond, instincts and preprogrammed behaviour will almost certainly be overruled by conscious thought.
     
  9. Pollux V Ra Bless America Registered Senior Member

    Messages:
    6,495
    "As far as human breathing goes, the caveman could not stay more than a few minutes under water. He must have thought that man could never live under water or out in space. Since we worked around that, future robots can do the same by redesigning those Asimov rules."

    You're missing the point. We can't survive without breathing. We may be able to go underwater or into space, but we still have to breathe when we're there.

    "As for conjecture, we can predict a lot of stuff for the next 10 to 15 years because it takes that many years for a new idea to take shape."

    Alright. Give me someone who, fifteen years ago, accurately predicted the world of 2004. Then I'll believe you.

    "When you get to human level and beyond, instincts and preprogrammed behaviour will almost certainly be overruled by conscious thought."

    We still have to breathe, we still have to eat, we still have to drink. We wouldn't breathe if our instincts didn't tell us to. Robots wouldn't follow the 3 laws if their instincts didn't tell them to.
     
  10. BigBlueHead Great Tealnoggin! Registered Senior Member

    Messages:
    1,996
    The three laws of robotics are a poorly thought out concept and probably have no algorithmic description sufficient to convey the spirit of the concept.

    #1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    This is the killer... we can't even manage this ourselves. As a categorical imperative it is impossible to follow, since human beings are allowed to come to harm constantly as a result of our inaction.

    The other two also represent significant logical problems...
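BigBlueHead's point - that the First Law has no workable algorithmic description - can be made concrete with a toy sketch. The "action" half is easy to write down; the "inaction" half is not. Everything here (the predicate, the action strings) is invented for illustration.

```python
# A toy sketch of why Law #1 resists an algorithmic description.
# The "action" half is easy to filter; the "inaction" half would require
# enumerating everything the robot did NOT do. All names are invented.

def harms_human(action):
    # Stand-in predicate; a real one would need a full model of the world.
    return action.startswith("push")

def first_law_filter(candidate_actions):
    """Veto actions predicted to cause harm -- the easy half of Law #1."""
    return [a for a in candidate_actions if not harms_human(a)]

actions = ["push_crate_off_ledge", "wait", "fetch_coffee"]
print(first_law_filter(actions))  # ['wait', 'fetch_coffee']

# The hard half: "through inaction, allow a human to come to harm" means
# scoring the consequences of every action NOT taken, at every instant --
# an open-ended search over all possible futures, not a simple veto list.
```

A veto list is a finite check; the inaction clause is an unbounded obligation, which is exactly why it cannot be followed as a categorical imperative.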
     
  11. kmguru Staff Member

    Messages:
    11,757
    You're missing the point. We can't survive without breathing. We may be able to go underwater or into space, but we still have to breathe when we're there.

    But you can break the rule of not being able to interact in space or under water. It is the result you are after, not the laws. Get it?

    Alright. Give me someone who, fifteen years ago, accurately predicted the world of 2004. Then I'll believe you.

    I did. I spent some time in China in 1983/84 and predicted the growth just as it is happening now. I had posted this here before this thread began. I even predicted Russia's change, and also its difficulty in managing its economy.


    We still have to breathe, we still have to eat, we still have to drink. We wouldn't breathe if our instincts didn't tell us to. Robots wouldn't follow the 3 laws if their instincts didn't tell them to.

    In the case of robots, the instinct is a line of code and can be removed. The robot will function just fine, since robots do the same today on the assembly line without such laws. They just do not have the higher functions today because we do not know how to program them. Besides, breathing is not an instinct; it is organic mechanics - just as an internal combustion engine has to have air to operate. Now imagine if we could program a car not to have an accident by cutting off its air supply.
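kmguru's claim - that a programmed "instinct" is just an ordinary, removable statement - is easy to illustrate. This is a deliberately trivial sketch with invented names; nothing here is from a real robot controller.

```python
# Toy illustration: if an "instinct" is just a guard clause in the
# control loop, nothing distinguishes it from any other line of code,
# and disabling it leaves the robot otherwise functional.

def run_step(command, safety_checks_enabled=True):
    if safety_checks_enabled and command == "strike_human":
        return "refused"          # the 'instinct' -- one removable branch
    return f"executing {command}"

print(run_step("weld_panel"))                                 # normal work
print(run_step("strike_human"))                               # guard fires
print(run_step("strike_human", safety_checks_enabled=False))  # guard gone
```

The guard is not "deeper" than any other branch of the program, which is the crux of the disagreement with Pollux V's position that the laws should be inherent and unchangeable.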
     
  12. Pollux V Ra Bless America Registered Senior Member

    Messages:
    6,495
    No...

    Forgive me for asking, but is there proof that the materials you mentioned are dated at the time you mentioned? Fork over a URL!

    Then this programming should be far more inherent to what it means to be robotic. It should be deeper than just a "line of code." I'm just speculating, and for me to really go any further would mean talking about things that I do not have the credentials to fully understand. It should not be something that they would have the ability to change.

    Failsafe machines might be useful as well. They could be just as intelligent as their robotic buddies, just as much of a soul, I guess, but would have a different purpose to their life. Everything does have a purpose--humans are supposed to have sex, computers are for the moment used to do complex or long calculations, etcetera. Future robots should have purposes as well. Some would have the central purpose of keeping other machines in line, if the 3 laws somehow failed. What better to fight fire than fire?

    Or they just shouldn't be made any smarter than people. Let them do complex computations, but beyond that, nothing.

    That's because it isn't instinct for us. We kill because we can. Instinct is prevalent throughout every form of life; why should it not exist for robots? Their primary instinct, just as ours is to procreate, should be the 3 laws.
     
  13. BigBlueHead Great Tealnoggin! Registered Senior Member

    Messages:
    1,996
    No no, Pollux, not killing is easy. It's the "not allow a human being to come to harm through inaction" that is impossible to follow. Your inaction is killing hundreds right now. There would have to be a scope placed upon the robot's consideration of harm to others, which would make it just as easy to kill those people through indirect means as it is for you to let them die by doing nothing.

    In any case, categorical imperatives like the three laws are open to interpretation... I seem to remember Asimov had a story where the "no harm through inaction" directive was removed from one robot, and it managed to interpret shooting people as "harming them through inaction" because it was fast enough to run and catch the bullet before it hit them... thus by not doing so it merely "failed to save them".

    If the categorical imperative is not open to interpretation, then the definition of harm will have to be structured in some way that people are still permitted to harm themselves without intervention from the robot; certainly no sort of police or combat robot would ever be possible with the 3 laws, and people are already making remote combat machines.

    Also, an intelligent thing's purpose is defined by the intelligent thing, even if it is only to follow orders... if the robot is really intelligent it will choose its own purpose.
     
  14. Pollux V Ra Bless America Registered Senior Member

    Messages:
    6,495
    Okay. So it's open to too much interpretation. The laws are too relative.

    We could be exact then. I think it's possible. We use a machine to calculate every single possible way of killing a person through action or purposeful inaction. Then we implement these rules into a robot at or above average human IQ and test drive it for a few years to see what happens. To see if we covered everything.
     
  15. BigBlueHead Great Tealnoggin! Registered Senior Member

    Messages:
    1,996
    Why are you afraid of a robot killing you anyway? Aren't you worried that Asimov's widely loved stories of robot slavery will be terrifying to the intelligent machines? Trying to remove their ability to defend themselves may make them more dangerous than if they had human freedom anyway.
     
  16. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    I believe such imperatives as the Three Laws will have to be introduced not as strict "Laws" as they are written, but more like fears, or OCD in humans. A robot will have an "urge" never to hurt a human. An urge so strong (yet for no reason) that it probably will never be able to break it. Notice the "probably." Many humans overcome fears because doing so helps in their lives. But a robot that killed a human (or failed to save a human) would probably be destroyed. That doesn't seem to help robot survival, so the robots (AIs) would never have a reason to try to break their phobias.

    -AntonK
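AntonK's proposal - laws as overwhelming aversions rather than hard vetoes - maps naturally onto an action-scoring function with a large negative weight on harm. The sketch below is purely illustrative; the weights, names, and probabilities are invented.

```python
# Sketch of the "laws as phobias" idea: harm is penalized with a huge
# but finite negative weight in an action-scoring function, so an agent
# could in principle overcome the aversion but almost never would.

HARM_PENALTY = -1_000_000.0   # the "phobia": enormous, but not infinite

def score_action(task_value, harm_probability):
    """Expected utility of an action; harm is penalized, not forbidden."""
    return task_value + harm_probability * HARM_PENALTY

def choose(actions):
    # actions: list of (name, task_value, harm_probability) tuples
    return max(actions, key=lambda a: score_action(a[1], a[2]))[0]

options = [
    ("finish_task_safely", 10.0, 0.0),
    ("finish_task_fast",   15.0, 0.001),  # tiny harm risk, huge penalty
]
print(choose(options))  # 'finish_task_safely'
```

Unlike a hard veto, this framing admits the "probably" AntonK insists on: a sufficiently enormous task value could, in principle, outweigh the penalty.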
     
  17. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    Another post.... just because I don't wanna stick on 666 posts.
     
  18. eburacum45 Valued Senior Member

    Messages:
    1,297
    I've mentioned Hugo de Garis on these pages before; he has some very interesting ideas...
    he coined the terms Picotech and Femtotech, for example, which we use liberally on our website
    http://www.orionsarm.com/main.html
    but I am not aware that he has actually achieved anything concrete...
    even the concept of picotech is somewhat controversial, but it is certainly fun.
     
  19. kmguru Staff Member

    Messages:
    11,757
    Their primary instinct, just as ours is to procreate, should be the 3 laws.

    Is it? Tell that to the large gay community, and find out how many fulfill that law in their lifetime. If we intelligent beings can break laws, so can any AI. Remember the German cannibal? If nature could not hardwire any abstract laws into humans, I am sure we would not be able to either - simply because we are part of that nature. Maybe once we evolve out of our natural boundaries - that is where the AIs come in.
     
  20. Pollux V Ra Bless America Registered Senior Member

    Messages:
    6,495
    It's an urge that doesn't guarantee completion. While homosexuality is still somewhat of a mystery, I believe it's simply a different manifestation of the craven sexual desires within us all. We have an urge, but it doesn't matter whether we fulfill it (at least on an individual level). I'm not talking about completion, just the urge itself. Just because I'm a heterosexual doesn't mean that I'm definitely going to have kids some day. You're taking my ideas out of context here.

    Because each one could possibly be a murderer, but a murderer with a mind millions of times faster than my own. I doubt many humans would stand a chance, one-on-one or when engaged in large battles. They would be superior to us in too many ways.

    Another solution to this problem is perhaps binding man and machine together. Don't make robots human, make humans robot. Give us the intellectual capacity of a quantum computer and immortality. Humans would still want to kill each other, undoubtedly, but the playing field would still be fairly level.

    I don't see what's wrong with defending themselves. It's not that hard to subdue someone and at the same time not kill them if they're attacking you. Plus, robots could just radio for aid as soon as necessary.
     
  21. BigBlueHead Great Tealnoggin! Registered Senior Member

    Messages:
    1,996
    I think you're hooked on the Asimov-like idea that robots will be machines that can move at half the speed of light and lift one million times their own weight. We don't have any other machines that can do that.

    Even their super-genius is not really guaranteed, for it may turn out that intelligence can only be achieved through giant associative engines that resolve many very complicated problems in parallel. If this is true, they may not actually think as fast as we do.

    On the other hand, if quantum computer intelligence and immortality come up for grabs I'll be first in line, so I agree with your last statement there.
     
  22. kmguru Staff Member

    Messages:
    11,757
    Perhaps... we all will be dead or back in the Stone Age before quantum computing intelligence emerges. I agree with BBH that giant associative engines should drive the future - whether that becomes a reality... your guess! BTW, there was a story about massive computation with fragments of information on Star Trek: Voyager... very interesting.
     
  23. eburacum45 Valued Senior Member

    Messages:
    1,297
    Of course AI could be incorporated into an interstellar spacecraft, or into a giant excavator, or the traffic control system for a city of ten million people; there is no limit on the size of an artificial mind, although, as you say, speed is a limiting factor.
    Exactly right. An intelligent Dyson sphere (one of Robert Bradbury's 'Matrioshka Brains') would take days to have a coherent thought.

    But such an entity could have trillions of smaller thoughts simultaneously; an entirely different type of thinking being from a human, and almost totally incomprehensible.
     