Can artificial intelligences suffer from mental illness?

Discussion in 'Intelligence & Machines' started by Plazma Inferno!, Aug 2, 2016.

  1. Write4U Valued Senior Member

    This is a very interesting program which clears up a lot of layman's questions about the potential contained in the various elements and chemical compounds.
    Last edited: Jan 20, 2017
  3. birch Valued Senior Member

    That would be like having a virus which would upset normal or optimum functioning.

    The idea is that AI, no matter how sophisticated and complex the programming, can only be an emulation of our consciousness. It can be programmed to be almost indistinguishable, at least practically, but it's still a skin-deep scenario to some extent. I really question the idea that AI can inherently achieve sentience and emotions just like us. That doesn't mean AI is not a type of consciousness itself, in its own right, but it is a different one.

    It probably cannot have the same type of consciousness as organic life, because it is not of the same design or makeup.

    For instance, how can you really program emotions if the machine can't experience them? It can be programmed to respond to certain cues, but that's still a different type of consciousness than one arising from visceral experience. It could work the same way but still have a different engine, unless you combined it with biotechnology.

    That doesn't mean we aren't simulations ourselves, since consciousness is relative: a hypothetical higher being may see us as having less sentience, or as lifeforms of a totally different makeup that it cannot relate to.
    Last edited: Dec 22, 2017
  5. Write4U Valued Senior Member

    A "computer virus" is a mathematical program.
    Hmmm........ that would seem to support Tegmark's mathematical universe.
  7. Write4U Valued Senior Member

    As Anil Seth mentioned: "you don't have to be smart to suffer, but you probably have to be alive."
  8. Forceman May the force be with you Registered Senior Member

    Every process that occurs through mankind first undulates after processing over and over again: you to a he is not as a gender to a robot but to Ai yes: exclamations produce intel and that incites much danger: as in that in space or this in stars closer or dt + t2 : mass equates to energy over time as e rises and x rises a limit to robots but not to AI.
  9. iceaura Valued Senior Member

    We had a very crude and simplistic example of an AI setup exhibiting the kind of context-misperception known to characterize some forms of schizophrenia last year (I posted a hypothetical example in post #48; this one is better):
    A child commanded one of the botservants (I think it was Alexa) to get her a dollhouse (and four pounds of cookies), and it did - ordered one via credit card.
    This made the local news, and the amused newscaster quoted the phrase the child had used to describe what she did.
    Whereupon several servantbots in homes with the newscast on likewise attempted to order dollhouses for their households.

    That kind of error in registering context is a symptom, a characteristic, of mental illness in people. The real question might not be whether AI can suffer from mental illness, but whether AI free of mental illness can be assured.
  10. someguy1 Registered Senior Member

    AI can't have mental illness because it has no mentality at all. What AI can suffer from is unintended consequences of its programming. In principle this is no different than what happens when you are learning to program and you first discover that programs do EXACTLY what you tell them to do ... and that this is very often NOT what you WANTED to tell them to do.

    The challenge of the art of programming is to tell the computer exactly what you want it to do; and to clarify your thinking so that what you THINK you want is what you REALLY want.
    Last edited: Jan 23, 2018
  11. iceaura Valued Senior Member

    Ok: behave as if it had a mental illness; emulate mental illness; function as if mentally ill.
    That's not the relevant problem with modern AI, such as AlphaGo, in which the machine is altering its behavior based on its own experience.
    The people who programmed AlphaGo specifically did not tell it how to play Go. Its moves, good or bad, are neither intended nor unintended by those who programmed it.
    Last edited: Jan 23, 2018
  12. someguy1 Registered Senior Member

    I have handled this point many times already. AlphaGo Zero is a conventional program executing on conventional hardware. If you had a big box of pencils and a stack of paper and a copy of the source code you could start at the beginning and step through the program deterministically and do exactly what the program is doing. You'd do it more slowly, but given enough time your computation would be identical to the program's. Even the randomized aspects of the program are deterministic pseudo-random number generators. They give the same output every time given the same input conditions.
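    That determinism of seeded pseudo-random generators is easy to demonstrate; here is a minimal Python sketch (the language and the particular seed are just an illustrative choice):

```python
import random

# Two generators seeded identically produce identical "random" sequences:
# the output is entirely determined by the seed, exactly as a
# pencil-and-paper execution of the same algorithm would be.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 100) for _ in range(10)]
seq_b = [b.randint(0, 100) for _ in range(10)]

print(seq_a == seq_b)  # True: same seed, same "randomness"
```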

    Every artificial neural net can be implemented as a Turing machine; and in fact IS implemented as a TM when it executes on conventional hardware.

    And every perfectly conventional program, such as a mainframe accounting program from the 1960's, exhibits behaviors that are incomprehensible to the people stuck with maintaining it.

    You say the program "alters its own behavior based on its experience." That's true of the simplest of branching code:

    If today is Tuesday, go to the beach.
    Else, stay home.

    Based on its experience of what day it is, this program alters its behavior.
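    A runnable version of that toy branch, as a minimal Python sketch (the function name and the sample date are illustrative):

```python
import datetime

def beach_or_home(today=None):
    # A trivial branch: the program "alters its behavior" based on
    # its "experience" of what day it is.
    today = today or datetime.date.today()
    return "beach" if today.weekday() == 1 else "home"  # weekday 1 == Tuesday

print(beach_or_home(datetime.date(2018, 1, 23)))  # Jan 23, 2018 was a Tuesday
```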

    There should be a name for the fallacy of attributing mystical capabilities to programs executing according to perfectly well understood and standard programming principles.

    The philosopher Hubert Dreyfus might have been the one who coined the phrase "artificial intelligentsia" to describe those who believed AI was an actual possibility. Of course in the 60 or so years of AI hype, we have made NO PROGRESS WHATSOEVER in building a general purpose AI that exhibits signs of consciousness, self-awareness, or any other metric of AI-ness that you care to propose. We have chess playing computers, but I was beaten by the chess program on a floppy disk running on my Osborne in the early 80's. Nobody makes the argument that a chess program has general intelligence.

    Every program alters its behavior depending on its experience. You can easily code a program to implement weighted nodes in memory, and to adjust its behavior depending on the values stored in those nodes. Clever programming is still just programming.
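    A minimal sketch of that "weighted nodes" idea (the names and update rule are mine, purely illustrative): a few lines of ordinary code store weights, update them from outcomes, and alter behavior accordingly.

```python
# A tiny "weighted node" learner: completely conventional code whose
# behavior changes as the stored weights change.
weights = {"left": 0.0, "right": 0.0}

def choose():
    # Pick whichever action currently carries the larger weight.
    return max(weights, key=weights.get)

def reward(action, outcome):
    # Nudge the weight toward the observed outcome (+1 good, -1 bad).
    weights[action] += 0.1 * outcome

reward("right", +1)   # "experience": right paid off
reward("left", -1)    # "experience": left did not
print(choose())       # -> right
```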

    Conventional database programming involves writing programs that alter their behavior depending on their experience -- that is, based on the contents of the accumulated data. This has been going on since the 1950's and really took off in the 1980's with the advent of relational databases.

    Nobody has discovered any mode of computation more powerful than the Turing machine in 80 years. They might do so tomorrow but they have not done so yet. Neural nets are no more powerful than TMs and neither are quantum computers.
    Last edited: Jan 23, 2018
  13. iceaura Valued Senior Member

    And you have repeatedly posted claims - such as the one quoted - that indicate you have not come to grips with it at all.
    But you couldn't play a game of Go that way. The source code does not contain the necessary information - it does not tell you how to play Go.
    Those are errors. The break comes when the incomprehensible is not an error, but the correct response. That is a different kind of unexpected behavior. And now, from the latest AI, we have unexpected responses that cannot be classified as correct or incorrect by the programmers of the machine.
    None of my posts (including the one you are responding to) attribute mystical properties to anything - including human brains.
    No, it doesn't. Its possible behaviors are fixed in advance, specifically, and it cannot alter them.
    Responding to input is not the same as altering one's possible responses to input. And in the recent developments, we find the ability to alter the ability to alter the possible responses to input.
    Last edited: Jan 23, 2018
  14. Write4U Valued Senior Member

    I think you touched on the fundamental law of "movement in the direction of greatest satisfaction".
    This satisfaction need not be emotional; it can be a purely physical phenomenon, but it can result in behavior that attempts to maintain "balance", to name one such imperative.
    Humans have this ability through their bio-chemical vestibular system, which warns us of imbalance without conscious decision making.

    Note: this is a subconscious part of the brain which functions independently from "decision making".
    Anil Seth calls these systems "control mechanisms", which function bio-mechanically and trigger certain "intuitive" muscle responses to maintain balance.

    It seems that such a system can be artificially emulated, and IMO, would go a long way to be "artificially sentient" at least in that respect. I have seen an artificial soccer game, where the players immediately make an effort to stand up after having been knocked down by another player.

    I am getting a feeling that while we view the brain as one integrated organ, it contains at least two very specific and separate functions. The conscious "decision making" function in relation to the external and the auto-response function, which maintains a subconscious control over internal body functions.
    Last edited: Jan 24, 2018
  15. someguy1 Registered Senior Member

    I don't think that's justified at all. And it's not conducive to productive conversation.

    But of course you COULD play a game of Go exactly that way. You would do what the computer does. Start from the first line of code and a given initial state. Using pencil and paper, step through the training algorithm. The training algorithm has you play millions of games against yourself and store the results (results of the form "this move in that situation resulted in a W/L/D"). You would play millions of games to seed your knowledge base, whether that info is stored as a neural net or a relational database or any other data structure. Then, in Play mode, you would load your data and play according to the contents of your database or collection of nodes.

    Do you agree with this? If not, please tell me what's different.

    I assume you do understand that a computation does not depend on the speed or nature of its hardware. A supercomputer and a human with pencil and paper compute the exact same results. Of course we do not take into account any practical limitations such as the age of the universe or the supply of pencils. This is the principle of substrate independence.

    Do you agree with this? That given my description of the process, you COULD play an expert game of Go like this.

    If not, you have to say why not and please be very specific, because you would be violating well-established principles of computation. You really need to explain very carefully why a program run on a computer can compute something different than that same program executed with pencil and paper. If you did this, you would become famous because NOBODY knows how to do this.

    Do you agree with that? If disagree, please be as specific as you can.

    Of course I am myself very impressed that AlphaGo Zero has shown that weak AI is no longer constrained by human knowledge. That's amazing and it's of great interest for practical weak AI. But as far as the theory, it makes no difference. What AlphaGo Zero does can be replicated by a human being with pencil and paper. Just a lot slower.
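    To make that train-then-play loop concrete, here is a minimal sketch for a toy game (a one-pile take-away game, not Go; all names, the seed, and the game counts are mine, purely illustrative): random self-play seeds a table of (situation, move) -> results, and "play mode" simply consults the table.

```python
import random
from collections import defaultdict

# Toy game: one pile of stones, each player takes 1 or 2;
# whoever takes the last stone wins.
wins = defaultdict(int)    # (pile, move) -> games the mover went on to win
plays = defaultdict(int)   # (pile, move) -> games in which it was tried

rng = random.Random(0)     # deterministic, per the pencil-and-paper argument

def train(games=20000, start=10):
    # "Training": random self-play, recording results per (situation, move).
    for _ in range(games):
        pile, history, player = start, [], 0
        while pile > 0:
            move = rng.choice([m for m in (1, 2) if m <= pile])
            history.append((player, pile, move))
            pile -= move
            player ^= 1
        winner = history[-1][0]  # whoever took the last stone
        for p, s, m in history:
            plays[(s, m)] += 1
            if p == winner:
                wins[(s, m)] += 1

def best_move(pile):
    # "Play mode": consult the accumulated table, nothing more.
    legal = [m for m in (1, 2) if m <= pile]
    return max(legal, key=lambda m: wins[(pile, m)] / max(1, plays[(pile, m)]))

train()
print(best_move(2))  # with 2 stones left, take both and win -> 2
```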

    No actually not. Even the correct functioning of a complex system could well be incomprehensible to its programmers. First, because every complex system is incomprehensible to its designers. Just look at the systems of society, the law and politics and economics. We no longer understand the operation of our own formal systems. This is a great crisis in rationality, on the order of the discovery of non-Euclidean geometry. Our rational systems are producing results that nobody understands and that nobody likes.

    But secondly, the people who designed the old mainframe programs are no longer around. The programmers who maintain these systems do their best not to break anything.

    Every formal system eventually gets so complex we can no longer understand it.

    Weak AI systems built as neural nets or learning algorithms will certainly make the societal problems worse. But they are no different in principle than the systems that run our lives right now.

    Just as with our economic policies, our legal system, our educational system, and any nontrivial program ever written. You are conflating a difference in degree with a difference that's qualitative. It's not. The inscrutability of modern weak AI systems is just the next phase in the general inscrutability of every several-million line chunk of computer code ever engineered by man.

    By mystical I simply mean that you are ascribing unconventional properties to perfectly conventional programs. You seem to believe that AlphaGo Zero does something that differs in principle from a computation that could be carried out with paper and pencil.

    This is not so, and this cannot be so by the theory of computation as it has been understood since Turing's 1936 paper. Nobody's ever found a mode of computation in the real world that can go beyond it. The only modes of computation that go beyond TMs involve infinitary principles not implementable in the real world using known physics.

    You're making semantic quibbles. I know how AlphaGo Zero works and I stipulate that it's a hell of a breakthrough in weak AI. But its inscrutability is not fundamentally different than the inscrutability of any large conventional program or formal system. It's just worse.

    It's like a 4GL database language. We don't have to tell a program how to search a database. We tell the program WHAT we want and it figures out HOW to optimally search the database to produce the desired result. These 4GL database systems were developed in the 1980's.
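    That WHAT-versus-HOW split can be sketched in a few lines (using Python's sqlite3 purely as a convenient stand-in for a 4GL-style declarative system; the data is made up):

```python
import sqlite3

# Declarative query: we state WHAT we want; the engine decides HOW
# (scan vs. index, grouping strategy) -- the "figure it out yourself" pattern.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE games (player TEXT, result TEXT)")
con.executemany(
    "INSERT INTO games VALUES (?, ?)",
    [("AlphaGo", "W"), ("AlphaGo", "W"), ("Lee Sedol", "W"), ("AlphaGo", "L")],
)
wins = con.execute(
    "SELECT player, COUNT(*) FROM games "
    "WHERE result = 'W' GROUP BY player ORDER BY player"
).fetchall()
print(wins)  # [('AlphaGo', 2), ('Lee Sedol', 1)]
```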

    Likewise with modern AI's we no longer tell them HOW to do the thing we want done, we tell it how to figure that out for itself. Yes this is cool but it's hardly unprecedented in the history of computing.

    This is what I mean by mystical. You seem overly impressed by advances in software design that are evolutionary and not revolutionary. Neural nets were invented in the 1940's. We've been teaching computers to figure how to do the things we want done for decades. AI is just the latest achievement in this direction of telling computers the WHAT and letting them figure out the HOW. It's really cool stuff, yes. But radically different in its fundamental aspects than what's come before, no.
    Last edited: Jan 24, 2018
  16. Write4U Valued Senior Member

    If all you say is true, then why should the human brain function differently, in respect to being a binary system?

    What you are not addressing is the product of chemical interactions in addition to electrical interactions. Chemical information is not transferred via a binary language, but via chirality.

    It seems to me that organic life does not work only at a purely logical, electric, Turing level; it has the additional ability to process chemical signals, which introduces a wholly different functionality - a bio-chemical, chiral aspect in addition to pure binary logical processing of information.
  17. iceaura Valued Senior Member

    Think of it as a reaction to being repeatedly told one is being "mystical" for no visible reason.
    That's odd. I don't know why. I don't even think my brain does anything that could not, in principle, be emulated via paper and pencil.
    Inscrutability in proper functioning, in doing what it's supposed to do, seems to me significantly different from unexpected and inexplicable malfunction. YMMV.
    (Any system has more ways to fail than succeed - inscrutability, great complexity, in success, is more notable).
    Doing what the computer does and doing what the source code does are not the same thing. That's important.
    It is not what was specified earlier, which was stepping through the source code. You are now drawing upon the neural net settings as they change, the stored joseki, etc - the algorithm is a small part of your labors, and an even smaller part of the capabilities of the machine.
    I am claiming their significance - that such matters are not quibbles.
    There's nothing mystical about regarding an evolutionary change as significant, as making a qualitative difference.
    Especially if it illustrates the thread topic, which is functional transfer of such advances.
    Now we are lumping economic policies of people and computer programs? That's a bigger leap than I can follow.
    If my calculator (an HP48G) does something unexpected, I claim even I (let alone its programmers) can classify it as function or malfunction, correct or incorrect.
  18. someguy1 Registered Senior Member

    I'm disappointed that you failed to respond to any of my substantive points.

    You keep repeating that the output of a neural net is incomprehensible or inscrutable to us mere humans. I think you are ascribing mystical qualities to that fact. If I said you were succumbing to the hype, would that be more accurate?

    Ok, so you agree that a human with pencil and paper could implement a neural net. But in your previous post you denied this. Now you agree. Ok. Good. Progress is being made.

    Ok fine. Suppose I stipulate that. Why do you place so much emphasis on inscrutability? What philosophical or scientific point are you making?

    When I talk about the Church-Turing thesis, do you know what I'm referring to? Are you aware that there is no known mode of computation other than the TM that can be implemented in the physical world? That a neural net is not a new or different mode of computation?

    If you understand this point, what difference does it make that a neural net is inscrutable? Any sufficiently complex formal system is inscrutable. If you can tell me CLEARLY what your point is, perhaps I'd understand why you seem so insistent on the importance of inscrutability.

    Notable. In what way? I think this is the nub of our disagreement. We agree that the output of a neural net is inscrutable. You seem to think that's of fundamental importance. I think inscrutability is a normal aspect of any large complex formal system, and that neural nets are an evolutionary development of a trend that's been under way for decades. So why do you think inscrutability is so important or meaningful?

    Ok, we could do the same thing (execute the program with pencil and paper) using the pure binary code, the output of the compiler, linker, and loader. We'd of course be equipped not only with pencil and paper, but also with the processor design and instruction set documents for the cpu on which the computation is intended to execute.

    How does this make any difference?

    And what do you mean that what the computer does and what the source code does are different? If a computation differs from what its source code says, that's a compiler or CPU error. Surely you must understand this.

    If you claim a computation does something more than what its source code says, then you ARE being mystical. You seem unfamiliar with the basic operation and terminology of computers. The source code is the description of the algorithm. If the computer does something other than what's specified by the code, that's an error in the compilation or in the execution.

    Ok. Please tell me clearly and explicitly what the significance is (of inscrutability).

    Even if I stipulate that the inscrutability of neural nets is far beyond the inscrutability of any other human-made system, what is the importance or meaning of that fact to you? I suspect this is the nub of our disagreement. We agree on inscrutability, we just assign different values of importance to it.

    If you think the mind is an algorithm of some sort then functional transfer is trivial, just as Euclid and I have both hosted the Euclidean algorithm in our brains. That's the idea of substrate independence as it applies to algorithms. Any computation can execute on any suitable hardware. I'm attacking the idea that mind is a computation or an algorithm.
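    The Euclid remark can be made literal with a sketch of the algorithm itself - the identical computation whether hosted in a brain, on paper, or on silicon:

```python
def gcd(a, b):
    # Euclid's algorithm: substrate independence in miniature.
    # Repeatedly replace the pair (a, b) with (b, a mod b).
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # -> 21
```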

    Yes I did take a leap that's not part of my core argument. Oddly enough, just hours after I wrote down that idea here, I ran across an article arguing that the system of global capital is functioning as an inscrutable AI. Well worth a read.

    Ok. But just tell me why you think the inscrutability of a neural net is important. I've already noted that neural nets were invented in the 1940's, and that fourth-generation languages in which we specify the WHAT and the computer figures out the HOW, were developed in the 1980s. So why is the inscrutability of neural nets important to your argument?

    After all, the pronouncements of the Oracle at Delphi were inscrutable too, and that doesn't make them important datapoints in the theory of mind.
    Last edited: Jan 25, 2018
  19. Michael 345 Bali in Nov closer Valued Senior Member

    Australia Day here 26th January

    Australian of the year Professor Michelle Simmons

    Working on quantum computers which according to the hype will do in minutes what currently would take a few hundred years

    Seems like Professor Michelle Simmons and her team were the first to make a transistor from a single atom. How the hell would you do that?

    Anyway the quantum computer should be able to mimic the human brain. Or surpass the brain?

    It will be interesting to see whether a quantum computer, with all the computing power predicted (which, in my personal understanding, will amount to AI), will ONLY compute, with the occasional glitch.

    My take would go this way: first the quantum computer obtains awareness; only after obtaining awareness could it become concerned because of that awareness; then it would be a candidate for a mental illness.

  20. someguy1 Registered Senior Member

    It is a fact of computer science that quantum computers (QC) have the exact same computational ability as basic Turing machines. That is:

    * If a computation can be done by a QC, then that same computation can be done by a TM.

    Not only that, but QCs don't even give general speedups. Quantum computers are much faster than conventional computers on particular specialized problems. But that's it. For example integer factoring is polynomial on a QC. If you understand what that means it literally bowls you over with its sheer unbelievability. It's a fantastic breakthrough, one that can be appreciated even without knowing anything about how QCs work. I would say it's one of the most amazing things I know.

    However, factoring can already be done by TMs. Every math-oriented person who ever learned to program a computer has written a program to factor integers. So a QC can factor integers a lot faster than a conventional computer, but both quantum and conventional computers have the ability to factor integers. And just going faster can never change what you can compute. Any computation that can be done by a QC can be done with pencil and paper. In fact you can easily see this for yourself: quantum computers can be simulated by regular old computers. That's how people study quantum computers. That in itself proves my point, which is that QCs can't do anything a TM doesn't.
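    For instance, trial-division factoring - slow, but perfectly Turing-computable - is only a few lines (a sketch of the sort of program being described):

```python
def factor(n):
    # Trial division: slow, but fully computable by a Turing machine --
    # or by pencil and paper, given enough time.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(factor(2018))  # -> [2, 1009]
```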

    And again, by "much faster" we don't actually mean that QCs are faster on the cases we care about! What "much faster" means in computational complexity theory is that certain problems that can be solved in exponential time on conventional computers can be solved in polynomial time on a QC. (The quantum algorithms are often different; it's not just a matter of running the same algorithm on a QC.)

    But in practice, an exponential algorithm can execute much, much FASTER than a polynomial one. It all depends on the base of the exponential and the degree of the polynomial. For example \(1.00001^x\) is a LOT smaller than \(x^{\text{googol}}\) for pretty much any value of \(x\) that anyone would ever care about. But to a complexity theorist, polynomial is always better than exponential, because they are only considering the limiting behavior as \(x\) goes to infinity.
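    A small numerical sketch of that point (my numbers, with \(1.001^x\) standing in for a slowly-growing exponential and \(x^{100}\) for a high-degree polynomial; logarithms are compared to dodge astronomically large values): at practical sizes the exponential is far smaller, even though it must eventually overtake any polynomial.

```python
import math

def log_poly(x):
    # log of the polynomial x**100
    return 100 * math.log(x)

def log_expo(x):
    # log of the exponential 1.001**x
    return x * math.log(1.001)

# At a practical size, the exponential is far SMALLER than the polynomial:
print(log_expo(10**4) < log_poly(10**4))   # True
# ...but asymptotically the exponential must win (i.e. become larger):
print(log_expo(10**7) > log_poly(10**7))   # True
```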

    When people talk about QCs having vast new capabilities far more powerful than conventional computers, the truth is that they are, as long as you understand the technical sense in which that claim is being made.

    But if you just take the hype and don't apply some critical thinking to it, you can start believing things that aren't true.

    QCs do not compute anything that a conventional computer can't. They just do it a lot faster on SOME but not most problems.

    So no, QC's aren't capable of implementing or explaining AI, unless conventional computers already are.

    Any capability you ascribe to QC's can already be done by a TM, as long as that capability can be fairly described as computational.

    We do not know of any mode of computation that can do anything a TM can't. Not a neural network, and not a quantum computer. That is a fact. Not an opinion. It's part of computer science. It's encapsulated as the Church-Turing thesis. If the Church-Turing thesis ever gets falsified, you will read about it in the papers. It's been standing for eighty years or so without refutation. It is possible it will be refuted tomorrow morning. But if it is, it won't be by neural nets or quantum computers.
    Last edited: Jan 26, 2018
  21. Write4U Valued Senior Member

    That is because it ignores the bio-chemical aspect, which can only be found in the bio-chemical interactions of living organisms.

    Computing power is only part of the story, IMO.

    As Anil Seth says; "You don't have to be intelligent to feel emotion, but you do have to be alive."
  22. someguy1 Registered Senior Member

    So what exactly is that extra bit then? Earlier I understood you to be saying it's some kind of computation. I'm not sure if you're still saying that. My point would simply be that we don't know of any other mode of computation. But if the mind turns out to break the Church-Turing thesis based on brand new physics, that won't surprise me. It's what I expect.

    But if you're saying now that it's not a computation, then we have to ask: what is it? What is it that life does that computers don't? Isn't that the core philosophical question at issue? Here we're conflating life with mind a little, because there are living things without minds. Whether there are minds without living things, well, that's what we're all trying to figure out.
  23. Michael 345 Bali in Nov closer Valued Senior Member

    I understand what you say about faster meaning just that - faster. In the case the professor mentioned, "from 100s of years to days". I'm thinking that in the medical field, once all the test results are fed into a QC, a diagnosis would be minutes away.

    The bottleneck would be the time taken to do the tests. Perhaps the QC could speed that up by placing the tests in a priority list. Feeding in test results as they become available would also help, by perhaps ruling out some conditions. Giving expert medical personnel fewer choices might give them more confidence in making a "best guess" choice if time is of the essence.

    Again I mention: if the doctor comes back to the QC with further results - from another patient - and reads on the screen "How is Tommy doing?", I suggest the doctor call in the technicians to fit tear ducts to the hardware.

    I nearly forgot: she did mention most of the computing is performed in parallel, which helps with speed but not the end result. Yes, I get it.
