Can artificial intelligences suffer from mental illness?

Discussion in 'Intelligence & Machines' started by Plazma Inferno!, Aug 2, 2016.

  1. Plazma Inferno! Ding Ding Ding Ding Administrator

    Messages:
    4,610
    The potential for artificial intelligences and robotics in achieving the capacity of consciousness, sentience and rationality offers the prospect that these agents have minds. If so, then there may be a potential for these minds to become dysfunctional, or for artificial intelligences and robots to suffer from mental illness. The existence of artificially intelligent psychopathology can be interpreted through the philosophical perspectives of mental illness. This offers new insights into what it means to have either robot or human mental disorders, but may also offer a platform on which to examine the mechanisms of biological or artificially intelligent psychiatric disease. The possibility of mental illnesses occurring in artificially intelligent individuals necessitates the consideration that at some level, they may have achieved a mental capability of consciousness, sentience and rationality such that they can subsequently become dysfunctional. The deeper philosophical understanding of these conditions in mankind and artificial intelligences might therefore offer reciprocal insights into mental health and mechanisms that may lead to the prevention of mental dysfunction.

    https://link.springer.com/article/10.1007/s11948-016-9783-0
     
  3. wellwisher Banned Banned

    Messages:
    5,160
    Even non-intelligent computers, without AI, can show mental dysfunction due to externally introduced viruses, spyware and snoopware. If you get a computer virus, the computer can act very strange and appear to have a mental defect. If we draw the same parallel between machines and humans, can culture cause mental dysfunction by infecting its people with viral memes that are designed not to edify, but to manipulate for power and money?

    Say you had an intelligent computer, seeking to grow and learn. You give it misinformation because you need its cooperation to get a promotion in the company. It takes what you say at face value, and then uses its power of logic to extrapolate. The output then appears irrational and dysfunctional, yet the machine may not regard itself as dysfunctional, since it is using the new information in an otherwise sound way.
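    The garbage-in scenario above can be sketched in a few lines: a reasoner that applies its rules soundly will still produce nonsense when its premises are false. (A toy illustration with made-up facts and rules, not anyone's actual AI system.)

    ```python
    # Toy forward-chaining reasoner: it applies rules soundly to
    # whatever "facts" it is handed, true or not (hypothetical example).

    def infer(facts, rules):
        """Apply (premise, conclusion) rules until no new facts emerge."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                if premise in facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    rules = [
        ("sales are up", "bonus pool grows"),
        ("bonus pool grows", "recommend hiring"),
    ]

    # Feed it misinformation: sales are actually down.
    output = infer({"sales are up"}, rules)
    print(sorted(output))
    # → ['bonus pool grows', 'recommend hiring', 'sales are up']
    ```

    The inference step is flawless; only the input was false, so from the inside the machine has no reason to see its conclusions as dysfunctional.
    
    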
     
  5. Jeeves Valued Senior Member

    Messages:
    5,089
    Convince them that altruism and honesty are the highest virtues, but reward greed and deceit instead. Or that love is both hugging and hitting. That's how we make our children crazy. It's not unlikely that we would make robots crazy the same way: designing them to kill, but programming them to revere life.
    There has been very good SF on this topic, for decades.
     
  7. kmguru Staff Member

    Messages:
    11,757
    Since our society has illness... why can't an AI have it? It depends on who says it is ill. From the computer's side, its behavior may be the perfect way to save itself from whatever issues count as illness on the human side... just a thought...

    Maybe I'll think about a logic for my next AI project that would serve the military... we shall see...
     
  8. Jeeves Valued Senior Member

    Messages:
    5,089
    AI could not have the same pathologies as humans, since they don't have the same kind of physiology or chemistry or genetic predisposition. They would need their own, new kinds of illness, just as they have a new kind of existence. You couldn't catch a computer virus any more than your computer could get Zika, but you could both die.
     
  9. kmguru Staff Member

    Messages:
    11,757
    Illness just means NOT normal... just a thought...
     
  10. Jeeves Valued Senior Member

    Messages:
    5,089
    What's Normal?
     
  11. Ivan Seeking Registered Senior Member

    Messages:
    957
    Number 5, alive!
     
    kmguru likes this.
  12. billvon Valued Senior Member

    Messages:
    21,634
    Can they have mental illnesses as we define them? No.

    Can they malfunction? Definitely.
     
  13. Ivan Seeking Registered Senior Member

    Messages:
    957
    How could we know?

    Fun thought: the internet could already be self-aware and we would have no way to know. Could the web be Descartes' brain in a vat, with all of us as the evil geniuses?

    It is the largest network.
     
    kmguru likes this.
  14. billvon Valued Senior Member

    Messages:
    21,634
    By the definition of the term "mental illness."

    It's like asking whether a train locomotive could ever have sleep apnea. No, not really. You could say "hey, you know, that thing sounds like someone snoring when it's idling" but that's just a simile; it doesn't really have sleep apnea.
    Or we could already all be living in the Matrix!
    Or I am the only conscious person here, and everyone else is a bot. (Or even I am a bot.)
     
  15. Ivan Seeking Registered Senior Member

    Messages:
    957
    But we don't understand mental illness or the key to sentience in AIs. If we do successfully duplicate the mechanism for sentience, then perhaps we would find that the "flaws" in sentience are common to both constructs - biological and inorganic.

    "Mental illness refers to a wide range of mental health conditions — disorders that affect your mood, thinking and behavior. Examples of mental illness include depression, anxiety disorders, schizophrenia, eating disorders and addictive behaviors."
    http://www.mayoclinic.org/diseases-conditions/mental-illness/basics/definition/con-20033813

    Could my AI computer get depressed? Might it develop anxiety? How could we know before we have identified and understood artificial sentience, not to mention natural sentience?

    The idea of a simulated universe has gained some odd popularity of late. Here is an Asimov memorial debate dedicated to the subject. Davoudi and her team have even proposed a test.



    Neil deGrasse Tyson, Frederick P. Rose Director of the Hayden Planetarium, hosts and moderates a panel of experts in a lively discussion about the merits and shortcomings of this provocative and revolutionary idea. The 17th annual Isaac Asimov Memorial Debate took place at The American Museum of Natural History on April 5, 2016.
    - 2016 Asimov Panelists:
    David Chalmers
    Professor of philosophy, New York University
    Zohreh Davoudi
    Theoretical physicist, Massachusetts Institute of Technology
    James Gates
    Theoretical physicist, University of Maryland
    Lisa Randall
    Theoretical physicist, Harvard University
    Max Tegmark
    Cosmologist, Massachusetts Institute of Technology

    http://www.scientificamerican.com/article/are-we-living-in-a-computer-simulation/
     
    Last edited: Aug 20, 2016
    kmguru and cluelusshusbund like this.
  16. wegs Matter and Pixie Dust Valued Senior Member

    Messages:
    9,253
    Only if robots/machines are given the same self-awareness capabilities as humans, which I just can't see happening. No matter how hard one tries, the inventor will never become one with his invention. Not from a robot/machine standpoint, anyway. The machine will never be human. Simulating human performance isn't the same as living as a human being. Machines and robots are not ''living'' entities.
     
    Ivan Seeking likes this.
  17. Ivan Seeking Registered Senior Member

    Messages:
    957
    Why?
     
  18. wegs Matter and Pixie Dust Valued Senior Member

    Messages:
    9,253
    Because a machine will never be a living entity, like a human being. No one can create the human condition as part of a machine.
     
    Ivan Seeking likes this.
  19. Ivan Seeking Registered Senior Member

    Messages:
    957
    Why? What makes humans special? Are we not just big bags of mostly water with some electrochemistry mixed in?
     
    kmguru likes this.
  20. wegs Matter and Pixie Dust Valued Senior Member

    Messages:
    9,253
    Oh no, not this argument.

    If you wish for us to treat you like a machine, we can.

    It isn't so much about ''specialness'' as it is about humanness. We are human, and machines are not. Therefore, there are certain human attributes that machines will never be able to take on. Likewise, there are things that machines can do that humans can't, but we are naive to think that machines can replace us in all things.
     
    Ivan Seeking likes this.
  21. Ivan Seeking Registered Senior Member

    Messages:
    957
    OH YES!

    I am asking you for evidence that we're not.

    Are we not the universe becoming aware of itself?
     
    kmguru likes this.
  22. wegs Matter and Pixie Dust Valued Senior Member

    Messages:
    9,253
    I added to my post above, lol

     
  23. Ivan Seeking Registered Senior Member

    Messages:
    957
    Here is one problem: even if other things are self-aware, that doesn't mean they have free will. Self-determination and sentience do not necessarily go hand in hand. As I mentioned earlier, if the internet were self-aware, we might have no way to know.

    We evolved through natural processes. While we may not know every step of that process, there is no reason to assume there are any magic steps involved. Self-awareness apparently emerges from natural processes that can be duplicated. There is simply no evidence to suggest that brains have some special quality that can't be duplicated in other systems.

    Free will = quantum uncertainty? A lot of people have mused on that one.
     
    ajanta, kmguru and cluelusshusbund like this.

Share This Page