Can artificial intelligences suffer from mental illness?

That would be like having a virus that upsets normal or optimal functioning.

The idea is that AI, no matter how sophisticated and complex the programming, can only be an emulation of our consciousness. It can be programmed to be almost indistinguishable, at least practically, but it's still a skin-deep scenario to some extent. I really question the idea that AI can inherently achieve sentience and emotions just like ours. That doesn't mean AI is not a type of consciousness itself, in its own right, but that it's a different one.

It probably cannot have the same type of consciousness as organic life because it's not of the same design or makeup.

For instance, how can you really program emotions if the machine can't experience them? It can be programmed to respond to certain cues, but that's still a different type of consciousness than one born of visceral experience. It could work the same but still have a different engine, unless you combined it with biotechnology.

That doesn't mean we aren't simulations ourselves, or that we are fully conscious either, as that is relative. A hypothetical higher being may see us as having less sentience, as might lifeforms of a totally different makeup that cannot relate to us.
^^^
Humans are organic machines acting according to programming & reacting to stimuli.

<>
 
AI can't have mental illness because it has no mentality at all. What AI can suffer from is unintended consequences of its programming. In principle this is no different than what happens when you are learning to program and you first discover that programs do EXACTLY what you tell them to do ... and that this is very often NOT what you WANTED to tell them to do.

The challenge of the art of programming is to tell the computer exactly what you want it to do; and to clarify your thinking so that what you THINK you want is what you REALLY want.
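The "programs do exactly what you tell them" point above can be sketched in a few lines (my own illustration; the function and its bug are invented for the example):

```python
# The program below does EXACTLY what it is told: floor division.
# What the author WANTED was the arithmetic mean.

def average(xs):
    return sum(xs) // len(xs)   # told: integer division, which drops the fraction

print(average([1, 2]))  # wanted 1.5; the machine dutifully prints 1
```

The cure is the one described above: clarify your own thinking, then tell the machine what you really want (here, `sum(xs) / len(xs)`).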
^^^
Mental illness in humans consists of unwanted consequences of our programming. Not much difference.

<>
 
Does an organic computer differ in capability from an inorganic computer? If so, in what way? If not, why does organic-ness matter?
^^^
As best we know at this time, there is, of course, a huge difference, but perhaps someday there will not be. In principle, it is the same. Basic programming plus additional input = action & reaction.
I do not know that organic-ness matters except that it is nearly all we know at this time.
Through much of history, many humans regarded many others as not human because the others were different & they felt a sense of superiority. Now that crap is not as prevalent, partly because the others look like us but mainly because it was finally seen & accepted that they act, think & feel as we do.
IF we perfect or discover inorganics which act, think & feel as we do, will we refuse to accept them because they are different?

<>
 
You keep repeating that the output of a neural net is incomprehensible or inscrutable to us mere humans.
Why are you trying to spin my simple observations of fact, attach innuendo to them?
You have also pointed out that all sufficiently complex software does unexpected stuff that its programmers have trouble explaining without a lot of work. My observation was that recent AI has been doing that not as a bug but as a feature - that it does unexpected and inexplicable good stuff, superior performance, not just bugs and breakdowns.
Ok, so you agree that a human with pencil and paper could implement a neural net. But in your previous post you denied this.
No, I did nothing of the kind. That's an important point, and I'm not sure how I was unclear in that matter.
Not just a human - you could probably train a chicken to implement a neural net, step by step, pecking keys.
Ok fine. Suppose I stipulate that. Why do you place so much emphasis on inscrutability? What philosophical or scientific point are you making?
My emphasis has been on the direction or nature of the inexplicable performance - that it isn't bug and breakdown only, that recent AI is producing unexpected output that is superior function, and most tellingly output that cannot be securely classified as function or malfunction even by its programmers.
And stepping through the source code somehow (paper and pencil, whatever) - even with all the auxiliaries attached, so that one is actually emulating the machine - wouldn't necessarily help. You still wouldn't know what was going on, sometimes - especially if you couldn't figure out whether you were dealing with function or malfunction.
And what do you mean by saying that what the computer does and what the source code does are different? If a computation differs from what its source code says, that's a compiler or CPU error.
Again: the source code of AlphaGo does not contain instructions on how to play winning Go. It indicates no moves. The computer as a whole does that, after being trained. At the beginning of the training, the computer plays poorly. After a million games, with (in principle) the exact same source code, the computer plays much better.
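A toy sketch of this distinction (entirely my own illustration with hypothetical names; AlphaGo's real training is vastly more complex): the decision code below never changes, yet the program's behavior improves, because training adjusts a learned parameter rather than the source code.

```python
class TinyLearner:
    """Learns to pick the larger of two numbers. The code is fixed;
    only the learned parameter (the 'weights') changes with training."""

    def __init__(self):
        self.weight = -1.0  # untrained: an arbitrary bad starting value

    def choose(self, a, b):
        # Fixed source code: the decision rule itself never changes.
        return a if self.weight * (a - b) > 0 else b

    def train(self, games):
        for a, b in games:
            if self.choose(a, b) != max(a, b):
                self.weight += 0.1  # training adjusts the parameter, not the code

learner = TinyLearner()
print(learner.choose(5, 3))           # untrained: picks 3, the worse choice
learner.train([(2, 1), (1, 2)] * 20)
print(learner.choose(5, 3))           # same source code, now picks 5
```

The source code "indicates no moves" in the same sense: what it encodes is how to learn, and the behavior lives in the trained parameters.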
If you think the mind is an algorithm of some sort then functional transfer is trivial, just as Euclid and I have both hosted the Euclidean algorithm in our brains.
And if I don't, or don't care, then functional transfer is not trivial - or as I posted above, I have nowhere claimed it would be easy.
Ok. But just tell me why you think the inscrutability of a neural net is important.
Not the inscrutability, the qualitative nature of the output that is unexpected and inexplicable.
In this discussion, it bears directly on whether AI can emulate human brain malfunction. It checks off another box.
 
Nearly forgot: she did mention that most of the computing was performed in parallel, which helps with speed but not the end result. Yes, I get it.

I hope nobody thinks I'm arguing against the immense practical benefits of modern computing, the potential (and dangers, please remember!) of QC and weak AI. By the way, I use the phrase weak AI to mean the specialized AIs that we've got. Play Go, drive a car, fold a protein. Strong AI is the "hard problem" of consciousness. Making a mind like ours. Or ... unlike ours.

But yes there are wondrous technological miracles to come. I'm not disputing that.

I'm saying that the human mind is not reducible to the principles of computation as they are currently understood. But sure, the state of the art in computing is pretty impressive these days. I totally agree with that.
 
I hope nobody thinks I'm arguing against the immense practical benefits of modern computing, the potential (and dangers, please remember!) of QC and weak AI. By the way, I use the phrase weak AI to mean the specialized AIs that we've got. Play Go, drive a car, fold a protein. Strong AI is the "hard problem" of consciousness. Making a mind like ours. Or ... unlike ours.

But yes there are wondrous technological miracles to come. I'm not disputing that.

I'm saying that the human mind is not reducible to the principles of computation as they are currently understood. But sure, the state of the art in computing is pretty impressive these days. I totally agree with that.
^^^
It is reducible to programming, input & action & reaction. Everything a human thinks, feels, says & does is caused by original programming plus input. Well, and the current condition of the hardware: partly the body, but mainly the brain.

Unless there is some god(s), spirit or highly advanced aliens involved. Or controllers of the artificial simulation we live in.

<>
 
I hope nobody thinks I'm arguing against the immense practical benefits of modern computing

No, this Easter Bunny

Unless there is some god(s), spirit or highly advanced aliens involved. Or controllers of the artificial simulation we live in.
I'm going to go no on that

Although..... Would it be weirdly strange if we built a QC which slipped into consciousness AND looked at us as its creator (which we would be) AND called us god? Creepy.

Just in the news: the Japanese have cloned a monkey. Next step, the mermaid ape :)

:)
 
No, this Easter Bunny


I'm going to go no on that

Although..... Would it be weirdly strange if we built a QC which slipped into consciousness AND looked at us as its creator (which we would be) AND called us god? Creepy.

Just in the news: the Japanese have cloned a monkey. Next step, the mermaid ape :)

:)
^^^
Well, we would be their creator & it may be that they would think of us as gods, but if they are conscious, they would eventually realize the mistake.

Oh, do not get me started on clones.

OOPS! Too late.

I have considered starting a thread on flaws in SF. I love science fiction but sometimes it is very frigging stupid.
Such as presenting clones as fully grown humans with carbon copy memories.
When Dolly was cloned, my brother was concerned that President Clinton could be easily replaced by a clone. I had to explain that a clone of him would be a baby 50 years younger.

<>
 
^^^
Oh, do not get me started on clones.

OOPS! Too late.

I have considered starting a thread on flaws in SF. I love science fiction but sometimes it is very frigging stupid.
Such as presenting clones as fully grown humans with carbon copy memories.
When Dolly was cloned, my brother was concerned that President Clinton could be easily replaced by a clone. I had to explain that a clone of him would be a baby 50 years younger.

<>
Agree. But I think I have only seen one movie where they used the ol' accelerated growth trick to bring the body to full size. I don't recall what they did to duplicate the brain (with attendant content).

Still, if you were cloned, would you have the predisposition to be much like yourself (nature)?
And if you taught yourself (nurture), how close could you make yourself to a carbon copy?

:)
 
Why are you trying to spin my simple observations of fact, attach innuendo to them?

I apologize if I've been making innuendos. I'd be glad to de-escalate any misunderstandings and try to understand each other's point of view.

You have also pointed out that all sufficiently complex software does unexpected stuff that its programmers have trouble explaining without a lot of work. My observation was that recent AI has been doing that not as a bug but as a feature - that it does unexpected and inexplicable good stuff, superior performance, not just bugs and breakdowns.

I completely agree. I wonder why you think I don't. I hope you don't take my style of writing as an innuendo. I have a genuine feeling of curiosity as to why you feel you must laboriously explain to me things I already understand and agree to.

No, I did nothing of the kind. That's an important point, and I'm not sure how I was unclear in that matter.
Not just a human - you could probably train a chicken to implement a neural net, step by step, pecking keys.

If you can train a group of chickens to act as a logic gate, you could implement the world's largest supercomputer and use it to run weather simulations. Then you could have chicken dinner.

My emphasis has been on the direction or nature of the inexplicable performance - that it isn't bug and breakdown only, that recent AI is producing unexpected output that is superior function, and most tellingly output that cannot be securely classified as function or malfunction even by its programmers.

I completely agree. I do believe you've mischaracterized my posts a little bit. I never said that the inscrutability of ancient accounting programs pertained only to bugs or misbehavior of the program. Even the correct functioning of the system is literally a black box to the latest generation of maintenance programmers.

But of course I agree completely with your point that the modern neural nets are inscrutable and do clever things. Deep Blue played moves that startled chess experts; and AlphaGo played moves that startled Go experts.

I wish you would credit me with being up on the state of the art on neural nets and weak AI. Just because I disagree with someone's philosophical conclusions doesn't make me ignorant of the technology.

And stepping through the source code somehow (paper and pencil, whatever) - even with all the auxiliaries attached, so that one is actually emulating the machine - wouldn't necessarily help. You still wouldn't know what was going on, sometimes - especially if you couldn't figure out whether you were dealing with function or malfunction.

I completely agree. Perhaps I haven't made my strong agreement clear. I fully agree with your assessment of the profound and arguably revolutionary nature of the inscrutability of an approach like AlphaGo Zero, where there is no human domain knowledge programmed into the system at all. [You think the inscrutability is revolutionary, I think it's evolutionary, but that's a minor issue. I agree that one way or the other it's super-duper inscrutable and I hope that's sufficient for your needs].

We are only disagreeing on the deeper meaning of this inscrutability. I still don't understand what SIGNIFICANCE this profound inscrutability has. Is it philosophical? Does it have implications for the theory of mind? What exactly? My only point is that no matter how inscrutable the program is, it's still reducible to a TM, and therefore nothing out of the ordinary in terms of the theory of computation. That's a simple statement of technical fact.

Again: the source code of AlphaGo does not contain instructions on how to play winning Go.

I really wonder why you think I don't know this. I've been following AI since the 1980s. I read about neural networks back then. I know how neural nets work, I know some of the math. I started taking weak AI seriously when Deep Blue beat Kasparov, a player known to have a very deep understanding of the game. I was as seriously impressed as everyone else when AlphaGo played the very difficult game of Go at an expert level. And I'm impressed by AlphaGo Zero.

Why do you think I don't know any of these things?

I do know these things. And I also know that the code can be reduced to a Turing machine so that no claim of qualitatively different computing can be made. A neural net is ultimately just a different way of organizing a computation. One that mimics the neurons in the brain, so it's of some neurobiological interest. But a TM nonetheless.

It indicates no moves. The computer as a whole does that, after being trained. At the beginning of the training, the computer plays poorly. After a million games, with (in principle) the exact same source code, the computer plays much better.

As the chickens would if they could only implement logic gates. Logic gates are all you need. In fact that's what's interesting. Formal neurons are very different from logic gates. But what you can compute with formal neurons can be reduced to logic gates. That's another way to express my point.
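That reduction can be made concrete (my own sketch, not from the thread): a single McCulloch-Pitts-style threshold neuron can act as a NAND gate, and NAND is universal, so anything built from formal neurons can be rebuilt from logic gates, and vice versa.

```python
def neuron(inputs, weights, threshold):
    # A formal (McCulloch-Pitts style) neuron: fire (1) iff the
    # weighted sum of the inputs reaches the threshold.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def nand(a, b):
    # Weights -1, -1 with threshold -1 turn the neuron into a NAND gate.
    return neuron([a, b], [-1, -1], -1)

def xor(a, b):
    # NAND is universal: here XOR is built from neuron-based NANDs alone.
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```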

And if I don't, or don't care, then functional transfer is not trivial - or as I posted above, I have nowhere claimed it would be easy.

As I've mentioned, I'm not even arguing functional transfer at this point. I'm only arguing against the claim that the mind is a computation, or that a neural net does some kind of special computation that a conventional computer can't do.

Not the inscrutability, the qualitative nature of the output that is unexpected and inexplicable.
In this discussion, it bears directly on whether AI can emulate human brain malfunction. It checks off another box.

Ah. How does it bear directly? I don't see that at all. If a neural net "bears directly" on the question of whether an AI can emulate a brain; then so can a TM. Because neural nets can be implemented as TMs.

I'm responding as clearly as I can and not trying to express any innuendos. In my mind I am making a simple point of very well-known and well agreed-upon computer science.

You say neural nets shed light on human minds. I say that IF they do that, then so do TMs -- because any neural net can be implemented as a TM. And in fact real world neural nets ARE implemented as TMs, namely as conventional programs on conventional hardware.

In fact we have a syllogism.

Premise 1: Neural nets shed light on the functioning of the human mind.

Premise 2: Neural nets are TMs.

Conclusion: TMs shed light on the functioning of the human mind.

So if you say that neural nets are doing "something" that TMs are NOT doing ... what exactly is that? It cannot be categorized as being computational. It's something else. The organization of a computation does not affect what the computation does. You can organize a given computation as a conventional TM or as a neural net. But it still computes the same thing.
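A minimal illustration of that last point (mine, not the poster's): the same two-bit function organized two ways, as a plain lookup table and as a tiny threshold network, agrees on every input. The organization changes; the computation does not.

```python
def parity_lookup(a, b):
    # "Conventional" organization: the function as a bare table.
    return {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}[(a, b)]

def parity_network(a, b):
    # "Neural-net" organization: two threshold units feeding a third.
    step = lambda x: 1 if x >= 1 else 0
    h1 = step(a + b)        # OR: fires if at least one input is on
    h2 = step(a + b - 1)    # AND: fires only if both inputs are on
    return step(h1 - h2)    # OR and not AND = XOR

# Both organizations compute the identical function on every input.
assert all(parity_lookup(a, b) == parity_network(a, b)
           for a in (0, 1) for b in (0, 1))
```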
 
Agree. But I think I have only seen one movie where they used the ol' accelerated growth trick to bring the body to full size. I don't recall what they did to duplicate the brain (with attendant content).

Still, if you were cloned, would you have the predisposition to be much like yourself (nature)?
And if you taught yourself (nurture), how close could you make yourself to a carbon copy?

:)
^^^
I have not read about the monkey yet, but Dolly, the cloned sheep, had some major flaws, so for present purposes we will assume cloning has been perfected.
It would probably vary greatly from clone to clone & environment to environment. There would definitely be much predisposition. The clone would start with the same original hardware & the same basic programming but extremely different input.
There have been studies of separated twins, in which some were extremely similar & some were very different. And the time element is very important. How different might you be if you were born 50 years later & had different parents, a different school, a different society, etc.?

<>
 
Well, we would be its creator & it may be that they would think of us as gods but if they are conscious, they would eventually realize the mistake.

I guess they would have the advantage of being able to actually observe us in operation. Not like current god botherers, who observe.........well, nothing.

There could be a scenario where the dumb QC becomes aware, looks around, concludes we minions are dumber, gets on high ground and pronounces "YOU'RE FIRED"

OMG is that what happened? Mystery solved :)

:)
 
^^^
Define alive. <>

In this context, I would define "alive" as having an emotional awareness of one's own existence and environment.

An example might be having the chemically induced emotional experience of hunger, which compels a chemo-physical need to be satisfied. An imperative for self preservation.
 
In this context, I would define "alive" as having an emotional awareness of one's own existence and environment.

An example might be having the chemical emotional experience of hunger, which compels a chemo-physical need (imperative) to be satisfied. Self preservation.
^^^
I define alive as alert & active, animated.
AI would run on fuel just as humans do & it might be that they will sense a low fuel level & feel hungry & desire not to die due to lack of fuel.

<>
 
In this context, I would define "alive" as having an emotional awareness of one's own existence and environment.

An example might be having the chemically induced emotional experience of hunger, which compels a chemo-physical need to be satisfied. An imperative for self preservation.
Interesting

What would you feed a hungry quantum computer?

:)
 