Can artificial intelligences suffer from mental illness?

Who are you talking about?
can·tan·ker·ous, adjective

1. bad-tempered, argumentative, and uncooperative: "a crusty, cantankerous old man"

"Error, error, error, does not compute."
 
^^^
So he was illogically trying to completely repress half of himself.

I get it now. He was illogical because he was half human. But it was shown in Star Trek that Vulcans have emotions; they are trained from very early childhood to repress them.
Oddly, Vulcans seem to completely lose control over their emotions every seven years, during mating season.

But then "sex" seems to conquer all except for AI. Now that would be scary....:wink:
 
If you had read my other posts, you would know that I never signed on to such speculation.

In fact, I have been arguing that what we call sentience is probably a result of bio-chemical (organic) processes and not from a purely inorganic electrical process.
OK. I haven't personally delved deep into the qualia/consciousness issue. It currently remains a deeply mysterious arena, but I agree with someguy1 that there is no indication of a need to invoke a non-material essence, aka 'soul/spirit' dualism. As to whether there is a need to move beyond the Turing paradigm, I can't say either way.
 
I already showed you earlier that this is not the case. Yet you're trying out that same flawed claim on someone else. Not nice. Classical physics can not be described by propositional calculus. You can not use propositional calculus to solve the three body problem in Newtonian gravity.
I don't know why you still think the description of classical physics by propositional calculus is invalidated by an inability of mathematicians to solve the three body problem, but I thought I had answered that already at some length - did you miss that response?

The three body problem is completely describable by propositional calculus, and if it were not you would have no way of knowing whether it had been solved or whether any proposed solution were valid.

The point here would be that given the very simple features of mental illness already emulated in computers - context misperception, etc. - and the ability of computers to emulate any product of human logical reasoning, more complex forms of mental illness manifestation are already possible, without an obvious ceiling on that complexity.
 
No, not emulated. Implemented. Emulations don't help. If I emulate gravity, my program does not attract nearby bowling balls. If I emulate a brain, it will not have a mind.
Not implemented. You cannot implement electron flow through transistors with a pencil and paper.
If you emulate gravity something in your computer behaves relative to the rest of the behaving entities as a bowling ball behaves relative to its surroundings in a gravitational field.
If you emulate a brain, something in your emulation is behaving - relative to the rest of it - as mind behaves relative to its context. Otherwise you would not be emulating a brain - you'd be lacking its patterns of behavior, as if you had emulated a hummingbird but had no aspect of it flying.
 
I don't know why you still think the description of classical physics by propositional calculus is invalidated by an inability of mathematicians to solve the three body problem, but I thought I had answered that already at some length - did you miss that response?

It's quite possible, I'll go back and look. I've been very curious about this myself and have been wondering if I might be wrong on this point.

The three body problem is completely describable by propositional calculus, and if it were not you would have no way of knowing whether it had been solved or whether any proposed solution were valid.

I'm planning to look into this some more. I'm not sure how the chaotic effects relate to the fact that you can certainly compute the solution to any desired degree of approximation. If you can remind me what page your response is on that I missed that will help.
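Just so we're talking about the same thing, here's the sense in which I mean "compute the solution to any desired degree of approximation." This is my own minimal sketch (unit masses, G = 1, made-up initial conditions), not anything from the literature:

```python
# Minimal sketch: numerically approximating the three body problem.
# Unit masses, G = 1, hand-picked initial conditions -- all illustrative.
import numpy as np

def accelerations(pos):
    """Pairwise Newtonian gravitational acceleration on each body."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += r / np.linalg.norm(r) ** 3
    return acc

def integrate(pos, vel, dt, steps):
    """Velocity-Verlet integration; halving dt tightens the approximation."""
    acc = accelerations(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * acc * dt ** 2
        new_acc = accelerations(pos)
        vel = vel + 0.5 * (acc + new_acc) * dt
        acc = new_acc
    return pos

# Three bodies in the plane. No closed-form solution exists in general,
# but the trajectory over any finite interval can be pinned down as
# tightly as you like by shrinking dt.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vel = np.array([[0.0, 0.0], [0.0, 0.5], [-0.5, 0.0]])
print(integrate(pos, vel, dt=1e-4, steps=10_000))
```

No exact solution, but arbitrarily good approximations over any finite horizon. Whether that counts as "describable by propositional calculus" is exactly the point I want to nail down.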

The point here would be that given the very simple features of mental illness already emulated in computers - context misperception, etc. - and the ability of computers to emulate any product of human logical reasoning, more complex forms of mental illness manifestation are already possible, without an obvious ceiling on that complexity.

Ok. To avoid confusion I should make it clear that I'm ONLY talking about the fact that I don't think mind is an algorithm. I'm not even taking any position at all on the question of mental illness. I'm questioning whether you can apply the word "mental" to anything a computer does in the first place.

But ... computers don't have any such thing as "context misperception." That's an anthropomorphization that can only be used in a casual or metaphorical sense, never a literal one. Computers just flip bits. It's humans who can misperceive context. Computers don't know anything about context and they can't perceive OR misperceive it. If the computer fails to recognize the subject in a confusing background, that's a programming error, not a context misperception. Like when a Tesla slams into a semi-truck in broad daylight, as one did. That's not a confused mind. That's bad programming. It was a light-colored truck on a sunny day. Their contrast-detection algorithm was faulty.
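To make "bad programming, not a confused mind" concrete, here's a toy sketch of how a naive contrast-threshold detector fails in exactly that way. This is purely hypothetical on my part -- I have no idea what Tesla's actual code looks like:

```python
# Toy sketch of a naive contrast-based obstacle detector.
# Hypothetical illustration only -- not Tesla's actual algorithm.

def detects_obstacle(object_brightness, background_brightness, threshold=0.2):
    """Flag an obstacle only if it contrasts enough with the background."""
    return abs(object_brightness - background_brightness) > threshold

# Dark truck against a bright sky: plenty of contrast, detected.
print(detects_obstacle(0.1, 0.9))   # True

# Light-colored truck against a bright sky: contrast falls below the
# threshold, so nothing is "seen." No misperception anywhere -- just a
# threshold doing exactly what it was programmed to do.
print(detects_obstacle(0.85, 0.9))  # False
```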

Programmers metaphorically say, "The computer thinks such and so." We don't really mean that the computer thinks. We anthropomorphize it but we are aware that we are doing that.
 
Not implemented. You cannot implement electron flow through transistors with a pencil and paper.
If you emulate gravity something in your computer behaves relative to the rest of the behaving entities as a bowling ball behaves relative to its surroundings in a gravitational field.
If you emulate a brain, something in your emulation is behaving - relative to the rest of it - as mind behaves relative to its context. Otherwise you would not be emulating a brain - you'd be lacking its patterns of behavior, as if you had emulated a hummingbird but had no aspect of it flying.

Mentation is not a behavior.

Behavior is that which can be observed.

Mentation can not be observed.

Mentation is not a behavior.

Emulations emulate behavior.

It's possible that an emulation MIGHT POSSIBLY be, completely unknown to us, emulating mentation.

But you can never have proof that such a thing is possible.

ps -- Flying of course is an observable behavior. If I emulate a hummingbird it will fly. But I'll never know whether my emulation is having little hummingbird thoughts.
 
It's possible that an emulation MIGHT POSSIBLY be, completely unknown to us, emulating mentation.
But you can never have proof that such a thing is possible.
We are working on that.

The discovery of the "mirror neuron system" in other animals that must learn to survive is very promising.
THE MIRROR-NEURON SYSTEM
Key words: mirror neurons, action understanding, imitation, language, motor cognition
https://www.cin.ucsf.edu/~houde/sensorimotor_jc/GRizzolatti04a.pdf

Now, IMO, that would be a great asset to an AI.
 
We are working on that.

How so? Curious to know.

Here is the problem with determining if something is sentient. Since mentation refers to purely internal, subjective states of being, you can't observe it unless you're the one having the mentation. You can be sure, as Descartes was, that you doubt. But you can't be sure that your neighbor doubts.

How is anyone proposing to detect consciousness in others?
 
How so? Curious to know.

Here is the problem with determining if something is sentient. Since mentation refers to purely internal, subjective states of being, you can't observe it unless you're the one having the mentation. You can be sure, as Descartes was, that you doubt. But you can't be sure that your neighbor doubts.

How is anyone proposing to detect consciousness in others?
The "mirror neuron system" is the cognitive learning part of the brain. If we can install a learning mirror neuron system in an AI, then the AI may begin to understand subtle body language and the intent accompanying human actions, and will begin to physically imitate human behavior at very subtle levels.

Anticipation is the beginning of abstract thought.

But as I have said before, it takes the human brain some 16 years to fully develop and become capable of creating sophisticated mental abstractions. It would seem to me that an AI would require at least several years of learning (I subtract for the hard computational skills) to learn subtle human behavior patterns.

This phenomenon happens through all of nature. Patterns of social behaviors are mostly learned, including in some social insects.
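To sketch what I mean by learning through observation and imitation, here is a purely hypothetical toy (my own illustration, nothing to do with actual MNS research code): an agent that remembers observed context/action pairs and imitates the action seen in the most similar context.

```python
# Toy sketch of "learning by observation and imitation."
# Purely illustrative -- real mirror neuron models are far more involved.

class Imitator:
    def __init__(self):
        self.memory = []  # observed (context, action) pairs

    def observe(self, context, action):
        """Watch another agent act in some context and remember it."""
        self.memory.append((context, action))

    def imitate(self, context):
        """Act by recalling the action seen in the most similar context."""
        if not self.memory:
            return None
        _, action = min(self.memory, key=lambda pair: abs(pair[0] - context))
        return action

agent = Imitator()
agent.observe(context=0.1, action="smile")  # friendly setting, sees a smile
agent.observe(context=0.9, action="frown")  # tense setting, sees a frown
print(agent.imitate(0.2))  # -> "smile": it anticipates the fitting response
```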
 
But ... computers don't have any such thing as "context misperception."
But they behave as if they did.
Behavior is that which can be observed.
Mentation can not be observed.
Mentation is not a behavior.
That seems vanishingly unlikely, to me. In the first place, I am comfortable with the existence of unobservable behavior which is deduced. In the second, I don't see how anything that happens in the human brain escapes being a "behavior".
 
Ok. To avoid confusion I should make it clear that I'm ONLY talking about the fact that I don't think mind is an algorithm
I've never figured out whether the kinds of physical substrate modifications the brain employs continually, including while amid full scale pattern processing at the firing level, fall under the heading of "algorithm" or not. I am reasonably sure they can be emulated in principle on a Turing machine, despite the complexities introduced by the third dimension, though, so the question for me is moot.
It's possible that an emulation MIGHT POSSIBLY be, completely unknown to us, emulating mentation.
But you can never have proof that such a thing is possible.
You are betting heavily on observational technique and sophistication hitting limits in its ability to observe and interpret the firing-pattern behaviors of the human brain.
 
Imitating behavior is not the same thing as having a mind.
But imitating the behavior of the brain? All of it?

Perhaps we can agree to agree on what may be the essential matter, however: AlphaGo or no AlphaGo, algorithm or no algorithm, computers are a very long way from being able to emulate a human being's mental illness or any other whole brain behavior, and the AI folks have a tendency to oversell accomplishments that may be major in their field but cast very small shadows on the mountain they have set out to climb.

This guy has my take on the current state of the art (though I don't share his emotional investment):
https://www.theatlantic.com/technology/archive/2018/01/the-shallowness-of-google-translate/551570/
 
Imitating behavior is not the same thing as having a mind. For example my reflection in a mirror perfectly "mirrors" my behavior, but it does not have a mind.
False equivalence.
As you correctly indicated, a mirror does not have a mind, it merely reflects your image.

The question is, do you recognize your reflection when you look into the mirror? How do you think this is done? It's because you do have a brain which processes the incoming reflection and makes a best guess that you are looking at yourself. That's a function of the "mirror neuron system" of your brain.

IMO, the discovery of mirror neurons in the brain is a major breakthrough in the question of cognition.

Seriously, do read up on this all-important part of a cognitive brain.
Mirror neuron, a type of sensory-motor cell located in the brain that is activated when an individual performs an action or observes another individual performing the same action. Thus, the neurons “mirror” others’ actions. Mirror neurons are of interest in the study of certain social behaviours, such as empathy and imitation, and may provide a mechanistic explanation for social cognition.
https://www.britannica.com/science/mirror-neuron

IMO, we have only scratched the surface of the mirror neuron system and we will find that it is involved in all "cognitive" processes of the brain and may well be responsible for the phenomenon of the "mind" itself.

A great example is that people with autism lack (or have impaired) mirror neuron function, which effectively shuts down the ability to recognize, understand, and respond appropriately to events.
This is a real, documented finding, and it suggests the great importance of having a functional MNS.
 
But they behave as if they did.

By your emphasis on the primacy of behavior, you sound like one of those philosophers who deny that qualia even exist. That we don't even have subjective mental states. I confess I do not understand this point of view. And I'm not saying you are signing on to it. I'm only saying that you seem to be refusing to distinguish between an object's behavior, what you can observe; and its internal mental states, which you cannot observe.

If you deny that internal mental states exist, or hold that they are not relevant, or -- much stranger -- hold that internal mental states that can not be observed count as behavior, then I don't think we have common ground at all.

I find your claim that unobservable mental states are behavior to be entirely contrary to my understanding of the English language and my knowledge, superficial as it may be, of physical science and the contemporary philosophy of mind.

That seems vanishingly unlikely, to me. In the first place, I am comfortable with the existence of unobservable behavior which is deduced.

Please use a different word. You can't call it a fish because that word's taken. You can't call it a cat because that word's taken. You can't call something "behavior" and at the same time assert the unobservability of that something.

Behavior is that which is observable, by definition.

I don't mind your claiming that subjective mental states can be replicated by functionally replicating the brain at a sufficiently fine resolution (neuron by neuron, atom by atom, etc.). I might dispute the point or kick it around in my own mind or both. I just can't accept your calling subjective mental states behavior. Call them something else. Mentation, that's one word for it. Sentience, intentionality, and so forth. Those things are not behaviors.

In the second, I don't see how anything that happens in the human brain escapes being a "behavior".

Well yeah, I get that. But then what is your definition of behavior? You are absolutely misapplying a common English word as far as I'm concerned. You are making a semantic error, and this is leading you into a philosophical error: that of thinking mind is a behavior. I don't think you could find support for this viewpoint in the philosophical literature.
 
I've never figured out whether the kinds of physical substrate modifications the brain employs continually, including while amid full scale pattern processing at the firing level, fall under the heading of "algorithm" or not. I am reasonably sure they can be emulated in principle on a Turing machine, despite the complexities introduced by the third dimension, though, so the question for me is moot.

I can save you the trouble of further research. If a TM can do something, that something is an algorithm. Why? Because that's the definition of an algorithm. That's the judgment of the computer scientists, who are the people who study algorithms.

And if something can not be done by a TM, it's not an algorithm.

I'm merely asserting a particular technical definition. I'm not taking a radical philosophical stance. If the brain does something that's not a TM, then it's not an algorithm. That's a point of semantics, not philosophy.

You are betting heavily on observational technique and sophistication hitting limits in its ability to observe and interpret the firing-pattern behaviors of the human brain.

No not at all. I'm agnostic as to whether a sufficiently detailed emulation of the brain could produce a mind. I tend to think not, but I already know plenty of arguments against my point of view. I'm not taking sides in that debate at this moment. I'm only saying that in my opinion, whatever it is that mind is, it's not an algorithm. I'm a Searlean in that respect.

And for the record, algorithm = TM. They're taken to be synonymous until professor so-and-so in Helsinki breaks the Church-Turing thesis tomorrow morning. If it happens it will make the papers. Till it happens, algorithms = TMs.
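For anyone curious how little machinery the definition actually involves, here's a minimal Turing machine simulator -- my own toy sketch, equivalent to any textbook version. Whatever a loop like this can compute is, by definition, an algorithm; whatever it can't, isn't:

```python
# Minimal Turing machine simulator -- a toy sketch of the definition.
# A TM is nothing but a tape, a head, a state, and a transition table.

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions maps (state, symbol) -> (new_state, new_symbol, move)."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit on the tape, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "0110"))  # -> "1001_"
```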
 
But imitating the behavior of the brain? All of it?

It can be argued that emulating the behavior of a brain in every detail will not implement a mind. By analogy, a perfect computer model of gravity will not attract nearby bowling balls. Please note I'm not using gravity as an argument, only as an analogy.

Perhaps we can agree to agree on what may be the essential matter, however: AlphaGo or no AlphaGo, algorithm or no algorithm, computers are a very long way from being able to emulate a human being's mental illness or any other whole brain behavior

Yes absolutely. If I'm trying to say anything, it's that.

, and the AI folks have a tendency to oversell accomplishments

Oh God yes. The hype has been nonstop since the 1960s. And now the public is conditioned to use the term AI to mean basically datamining. I read an article today about how AI is being used in the courtroom to determine sentences. [Now that is frightening to me. How readily the general public and society at large accept technology without thinking of the consequences.] The system they described was pure datamining: using a computer to sift through a large body of data to find correlations we wouldn't have found with pencil and paper. It's not magic.
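To underline "it's not magic": stripped of the branding, that kind of system is doing something like the following. The numbers are made up by me, and this is obviously nothing like the real sentencing software, just the shape of the computation:

```python
# Toy sketch of "AI" as plain correlation mining over historical records.
# Made-up numbers -- not any real sentencing system.
import numpy as np

# Rows are past cases; columns are prior_offenses, age, sentence_months.
cases = np.array([
    [0, 25,  6],
    [2, 31, 18],
    [5, 40, 36],
    [1, 22, 12],
    [4, 35, 30],
])

# The "intelligence" is a correlation matrix over old data -- nothing more.
corr = np.corrcoef(cases, rowvar=False)
print("prior_offenses vs sentence:", corr[0, 2])
print("age vs sentence:           ", corr[1, 2])
```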

that may be major in their field but cast very small shadows on the mountain they have set out to climb.

It's much worse than that. The peak they desire to attain is not even at the top of the mountain they're climbing! That is, the great advances in weak AI -- playing chess, playing Go, driving cars, and the like -- are not the way to strong AI. Strong AI, if it comes about at all, will require completely different and unexpected means. You can't do it on a digital computer. I'm certain of that personally. Searle again.

This guy has my take on the current state of the art (though I don't share his emotional investment):
https://www.theatlantic.com/technology/archive/2018/01/the-shallowness-of-google-translate/551570/

Thanks for the article. As it happens I'm a frequent user of Google translate. I always joke to myself, "If this is AI, the human race has nothing to worry about." And, "Remember, these are the same people who make the self-driving cars." Yes I tell myself jokes. Do you think an AI tells itself jokes?

I think one of the attributes of a true AI would be that it would have a sense of humor. HAL, R2D2, and Robby the Robot walk into a bar ...
 
As you correctly indicated, a mirror does not have a mind, it merely reflects your image.

It precisely replicates your behavior down to the finest detail; yet it does not have a mind. It's a counterexample to iceaura's suggestion that mind is a behavior. A mirror image is the closest thing we have to a philosophical zombie.

The question is, do you recognize your reflection when you look into the mirror?

A better question is, does my reflection recognize me? If not -- and I trust we all agree that it doesn't -- then clearly duplicating observable behavior is not enough to ensure the existence of a mind. I know it's kind of a lame example, but then again ... is it? Why don't mirror images have minds, despite being behaviorally identical to human beings? My mirror image can do everything I can. Without exception. But it has no mind. I think someone owes me an explanation.

How do you think this is done? It's because you do have a brain which processes the incoming reflection and makes a best guess that you are looking at yourself. That's a function of the "mirror neuron system" of your brain.

I'll stipulate that neurobiology is not one of my hobbies; and that this is a weak area in my ability to discuss the philosophy of mind. I have no idea how it happens that I can distinguish my reflection in a mirror from my next door neighbor. I could say because he doesn't "look like me," but how do I know how I look except by looking in the mirror!! It's a bit of a puzzler.

IMO, the discovery of mirror neurons in the brain is a major breakthrough in the question of cognition.

I wouldn't doubt that, but it doesn't bear on the question of whether minds are algorithms, unless you think mirror neurons work by way of algorithms. I have no idea. I'm way out of my depth.

Seriously, do read up on this all-important part of a cognitive brain. https://www.britannica.com/science/mirror-neuron

Well I glanced at the Wiki page anyway. I don't see what this tells us about machine cognition.

(Edit) I read the Britannica article. Interesting. Reminds me a little of induced current. If you jiggle the electrons in a wire it causes the electrons to jiggle in a nearby wire. But the mirror neurons are very interesting. They fire when I do something or when I see someone else doing that same thing. Empathy, autism. All makes sense. The article did point out that this kind of speculation is preliminary.

IMO, we have only scratched the surface of the mirror neuron system and we will find that it is involved in all "cognitive" processes of the brain and may well be responsible for the phenomenon of the "mind" itself.

Ok.

A great example is that people with autism lack (or have impaired) mirror neuron function, which effectively shuts down the ability to recognize, understand, and respond appropriately to events.
This is a real, documented finding, and it suggests the great importance of having a functional MNS.

Ok. Just wondering how this relates to computers.

Again, the fact that human cognition works a certain way does not at all imply that we can use that same mechanism to create artificial cognition. The flying analogy again. Human flight works very differently than bird flight. We tried tying wings to our arms and flapping, and that turned out not to work.
 
I just ran across the most incredible article that this thread has led me to.

I've argued that since the three body problem is not solvable by computer (although it can be arbitrarily well approximated given sufficient computing resources), the universe is not computable by a TM. Since the universe clearly does solve the n-body problem for an unfathomably large (but finite) value of n, either the universe violates the Church-Turing thesis, by being a "computation" of a type we haven't yet conceived; or the universe doesn't operate by any laws or principles at all. (I'm swapping out universe for mind here, but it's the same argument either way).

I've been percolating on all this for a while, ever since I read Ivars Peterson's book, Newton's Clock: Chaos in the Solar System. It turns out not to be true that in Newtonian physics, "if you knew every particle and its motion you could predict the future perfectly." Once you realize that widely held belief is flat-out false, you begin to see the limits of computation itself. The universe is doing something that our idea of computation can not do. And this is important to understand.
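Here's the phenomenon in miniature. The logistic map is a stand-in I picked for chaotic dynamics (Peterson's book deals with the real celestial mechanics): two initial conditions agreeing to ten decimal places end up bearing no relation to each other.

```python
# Sensitive dependence on initial conditions in miniature:
# the logistic map x -> 4x(1 - x), a standard toy chaotic system.

def trajectory(x, steps=50):
    for _ in range(steps):
        x = 4 * x * (1 - x)
    return x

a = trajectory(0.2000000000)
b = trajectory(0.2000000001)  # differs only in the tenth decimal place
print(a, b, abs(a - b))
# After ~50 iterations the two trajectories are completely decorrelated.
# Perfect laws, near-perfect knowledge of the initial state -- and still
# no useful prediction. That is the limit I mean.
```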

Just now I ran across this article, which looks technical and will take a bit of a commitment beyond my usual short attention span. But finally I found someone who seems to be having the exact same thoughts. And a lot more. Here's the abstract.

Abstract of this paper said:
‘‘Church’s thesis’’ is at the foundation of computer science. We point out that with any particular set of physical laws, Church’s thesis need not merely be postulated, in fact it may be decidable. Trying to do so is valuable. In Newton’s laws of physics with point masses, we outline a proof that Church’s thesis is false; physics is unsimulable. But with certain more realistic laws of motion, incorporating some relativistic effects, the extended Church’s thesis is true. Along the way we prove a useful theorem: a wide class of ordinary differential equations may be integrated with ‘‘polynomial slowdown’’. Warning: we cannot give careful definitions and caveats in this abstract–you must read the full text—and interpreting our results is not trivial.

http://research.cs.queensu.ca/~akl/cisc879/papers/PAPERS_FROM_APPLIED_MATHEMATICS_AND_COMPUTATION/Special_Issue_on_Hypercomputation/smith[1].pdf

In particular this phrase made me feel so validated: In Newton’s laws of physics with point masses, we outline a proof that Church’s thesis is false; physics is unsimulable.

Yes! That's exactly my point. Substitute mind for universe and it's the same argument.

To go further in my understanding I have to fill in my quantum ignorance, which is vast. This paper looks perfect as a starting point.

I'm going to take a run at this and if I find anything interesting I'll report back. If I'm a little less active online it's because I've summoned the discipline to work through the paper.

ps -- It's 183 pages and very technically dense. I'll have to change my objective from working through it completely to getting in there and finding the answers to my questions: how quantum theory lets the universe "compute" the n-body problem, what "compute" means in this context, and how this relates to Church-Turing. If I can begin to understand the outlines I'll have a much better understanding of the computationalist position. I might even become a convert, although I'd hate for that to happen!
 