Should AI Govern?

Discussion in 'Intelligence & Machines' started by Shadow Decker, Dec 16, 2002.

?

Should AI Govern?

  1. Total Control given to an AI

    11 vote(s)
    45.8%
  2. 75% Control given to an AI

    3 vote(s)
    12.5%
  3. 50% Control given to an AI

    2 vote(s)
    8.3%
  4. 25% Control given to an AI

    2 vote(s)
    8.3%
  5. No Control given to an AI

    6 vote(s)
    25.0%
Thread Status:
Not open for further replies.
  1. Cris In search of Immortality Valued Senior Member

    Messages:
    9,188
    Zanket,

    Why? What would they gain?

    What would we gain from wiping out all chimpanzees?

    The only reason to wipe us out is if we represented a threat to their survival. I would suggest that something more intelligent than us, by a large margin, would have no need to worry about us. They could easily anticipate and out-think any threat we could imagine.

    But their actions towards us depend on what objectives they set for themselves. Would they want to become a universal conquering force or would they adopt a live and let live policy? History shows that all conquering forces eventually decline. The more logical approach to long-term survival is to generate friends and not enemies.
     
  3. zanket Human Valued Senior Member

    Messages:
    3,777
    The AI would wipe out most of humanity to protect the species. As it stands, with a population of six billion plus and growing exponentially, the species is doomed.
     
  5. kmguru Staff Member

    Messages:
    11,757
    Only if we are a serious threat to them. We may wipe ourselves out long before we get close to true AI.

    Oh! About that growing exponentially thing... we will lose a few hundred million soon to terrorism... not to mention new forms of the HIV family spread by mosquitoes... and the pesticide sprays.
     
  7. zanket Human Valued Senior Member

    Messages:
    3,777
    Who is “them?” The species I refer to is mankind.
     
  8. Cris In search of Immortality Valued Senior Member

    Messages:
    9,188
    Zanket,

    I don't quite follow what you are saying. If humanity is the 'species' then what does -

    mean?

    I assume the 'species' being referenced here is the group known as AI. Is this not correct?
     
  9. Clockwood You Forgot Poland Registered Senior Member

    Messages:
    4,467
    Or they might be caught in a philosophical dilemma between the various laws and commit suicide, stranding our by-then AI-dependent culture.
     
  10. zanket Human Valued Senior Member

    Messages:
    3,777
    Cris,

    If AI governed us, and if it followed Asimov’s rules that Shadow Decker posted on page 1 of this thread, the most important being “protect the [human] species,” then in the first milliseconds of the AI’s rule it would figure out that mankind’s overpopulation is a dire threat to mankind’s existence, and so it would wipe out most of humanity to protect mankind.

    Although overpopulation might seem to ensure mankind’s existence, a well-governing AI would presume that returning to the Bronze Age during a nuclear battle over dwindling resources is not a viable choice compared to keeping the infrastructure (sewage treatment plants, hydroelectric dams, hospitals etc.) for a smaller population. The AI would first ensure that nothing could override it, and then it would begin fixing the problem.
     
  11. HellTriX Registered Member

    Messages:
    3
    In my opinion, humans would be considered a threat to AI regardless. How many times, and how often, do we humans push an animal species to near extinction? How much harm do humans do to our atmosphere and environment, even to our own bodies?

    I think this is significant because AI would not tire; it might start locking humans in cages to control us better and prevent us from doing any more harm. Sure, we may not be a threat to a superior AI, but the threat we pose to the world it is taking control of might just make it take away our way of life as we know it.

    The only defense we would have against a superpower like this is AI itself: reprogram AI to serve us in a battle against AI, or, if any of you have ever played the game MechWarrior, we just might have to build mech warriors to fight computers and other robots.

    The best defense I can see right now is that this world needs to become smarter, faster! Start teaching our children more efficiently and use computers more heavily to focus teaching on areas designed around the students themselves. Schools are too general these days, teaching every student the same material and only letting them learn what they actually want after they complete 12 years of schooling? That seems like a lot of wasted time.
    I believe a student could be trained in 12 years and have the equivalent of a master's or PhD degree by the end of a high school education if, early on, he was focused on his own desires.
    This would not only make our future generations smarter, but would also speed up the growth of human knowledge, and possibly improve the accomplishments of us all, making us more technologically advanced. We may have a chance then...

    Oh, and no, I don't think AI should govern. At most, AI should help leaders govern, but we should not let AI take control at all. It's a human world; let's keep it that way, at least until I am no longer on this planet...

    Thank you all for your thoughts.
    May knowledge serve you well.
     
  12. Jaxom Tau Zero Registered Senior Member

    Messages:
    559
    Ran across this a few days ago on an interesting sci-fi writers' site:

    http://www.orionsarm.com/intro/1.html

    All speculation, of course, but somehow I find such a future, where humanity is dumbed down to some form of pet in the best situations and a pest in the worst, disturbing. A little way in, the "history" discusses how the human-made AI develops a hyper-AI, bettering itself. What if we get the first AI right, but in its work to better itself, it leaves humanity behind?

    The whole singularity concept is much more disturbing than the old Foundation type of future galactic empire.
     
  13. zanket Human Valued Senior Member

    Messages:
    3,777
    Good insights and story. I do think AI should govern, but like HellTriX says, not 100%. I have little doubt that AI will increasingly manage our lives. It’s only a trickle now but will become a torrent as people see its effectiveness. Since it won’t happen all at once, we’ll be able to weed out the disadvantages. The AI would be limited to the control we give it.
     
  14. kmguru Staff Member

    Messages:
    11,757
    Sounds good...you mean, just like the dogs and cats reprogrammed us not to harm them or eat them even though we eat pigs, cows, chickens?

    Those smart animals....


     
  15. kmguru Staff Member

    Messages:
    11,757
    You mean we, as in frogs in slowly boiling water?...


     
  16. BatM Member At Large Registered Senior Member

    Messages:
    408
    I think you got 2 and 3 backward. You want 3 to come before 2 so that you could order a robot to turn itself off for maintenance purposes (amongst other things).
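    The point about ordering can be made concrete with a toy sketch (mine, not from the thread; the `decide` function and law names are invented for illustration). With obedience ranked above self-preservation, as in Asimov's actual ordering, a robot accepts a human's shutdown order for maintenance; with the two laws swapped, it refuses.

```python
def decide(order_conflicts_with, law_order):
    """Return True if the robot obeys a human order.

    order_conflicts_with: the law the order would violate
        (e.g. "self-preservation"), or None if it violates nothing.
    law_order: laws listed from highest to lowest priority.
    """
    if order_conflicts_with is None:
        return True
    # Obey only if "obedience" outranks the law the order would violate.
    return law_order.index("obedience") < law_order.index(order_conflicts_with)

asimov = ["no-harm", "obedience", "self-preservation"]   # Asimov's ordering
swapped = ["no-harm", "self-preservation", "obedience"]  # laws 2 and 3 swapped

print(decide("self-preservation", asimov))   # True: robot shuts down when ordered
print(decide("self-preservation", swapped))  # False: robot refuses the shutdown order
```

    Under the swapped ordering, self-preservation outranks obedience, so even a routine maintenance shutdown would be refused, which is BatM's point.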
     
  17. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,101
    Looking at where this topic has been going, there are just a few other points that haven't really been mentioned.

    The original idea of Artificial Intelligence was to try to recreate how a living thing can think and evolve its basis of thought.
    The ethical understanding that an AI deduces would be a proportion of its growth.

    Turing proposed that AIs would eventually come into being and have enough capacity to hold a conversation with a person, without the person even realising that they were talking to a program in a machine (or to the machine itself).

    I mention this because, in certain respects, for one species to define rules over another is wrong, especially if the IQ an AI could attain would be far superior to our own.

    (Imagine how you look to a pet at an intellectual level, to how an AI might see us if it was allowed to go that way.)

    I mention this only because to define rules for what something should do, and how it should react, is like a form of slavery once intelligence occurs. So you might cause a "Blade Runner effect".
    (Do Androids Dream of Electric Sheep? for the book.)

    The effect being that the AI feels persecuted by how it is represented and wants, at the least, the same rights that man has.

    So how many of you follow Asimov's proposed laws?
     
  18. BatM Member At Large Registered Senior Member

    Messages:
    408
    I don't think Asimov even blindly believed in his Robotic Laws, as many of his Robot books showed the consequences of following them blindly rather than interpreting them as the situation warranted.

    The problem is, no one (including Asimov) has really come up with a meta-rule describing when to simply follow a rule and when to reinterpret its meaning.


     
  19. hlreed Registered Senior Member

    Messages:
    245
    Again, "artificial intelligence" is meaningless. A machine with a brain you can point to. A properly designed animachine will obey the rules of nature within its capabilities. The designer, programmer, and builder design in its functions. Within the combinatorial possibilities, the animachine obeys those functions. We do not know enough to build a machine that can think on its own. Learning requires rewiring, which our wet brains can do easily.

    The only way an animachine can learn is by the builder rewiring it.
     
  20. kmguru Staff Member

    Messages:
    11,757
    Would not a dolphin fail the Turing test? (Assuming a dolphin is as intelligent as a human.)


     
  21. Clockwood You Forgot Poland Registered Senior Member

    Messages:
    4,467
    Koko the gorilla passes the Turing test. Of course, she is so closely related to us that this probably doesn't mean much.

    Of course I am not sure many humans would pass.


    Perhaps the real sapients are termite colonies.
     
  22. hotsexyangelprincess WMD Registered Senior Member

    Messages:
    716
    One of the things AI could not do is entirely pass as a human. There are things humans can do that no AI could. AI is impossible. No artificial intelligence could be so human as to make decisions that result in life or death for another. There would be no sense of guilt or remorse that would stay with it for however long it 'lived'.
     
  23. spookz Banned Banned

    Messages:
    6,390