Can artificial intelligences suffer from mental illness?

I am not nearly as sure that we need to get "beyond" Turing machines.

I don't believe mind is a TM. We can disagree on that. But a lot of your posts have indicated to me that you are unclear on what a TM is and what it can -- and cannot -- do.

1) They are measured attributes of physical phenomena in agreement with theory.
2) It wouldn't matter if they were mere speculation - only that they are plausible, actual non-aberrant possibilities. Your claim that it makes sense, as a thought experiment, to (for example) assign an exact position and velocity to a particle at an exact time, is far more speculative and notably less in agreement with observation and theory.

You're the one who said properties of objects are subjective. I suspect you are not clearly saying what you intend to. You didn't choose to engage with the question I asked you.

It will violate Bell's inequality, which no predetermined sequence of cause and effect can do as long as Relativity holds (and current formal logic, in which statements are true, false, or meaningless).

I'm afraid I don't see the complete chain of argument here. I'm not familiar enough with the consequences of Bell's inequality. If you can shed light on your remark, please do.

It is certainly the case that it could be - possibly - emulated by one. Nothing about it precludes the possibility.

Nothing precludes the possibility that Tinker Bell sprinkles fairy dust on the universe. What of it? Are you doing science or fairy tales?

What exactly do you mean by emulation in this context?

It's a direct demonstration that there are possible ways in which a universe which handles infinite length computations can nevertheless be emulated as a Turing machine taking steps.

I call absolute big time bullpucky. You made the statement: "Computability is preserved by countability."

Being very conversant with countability and somewhat conversant with computability, I said that this statement is meaningless. And that to the extent that it might be construed as meaningful, it's false.

I challenged you for a LINK or a PROOF that "countability preserves computability." And you respond by saying "It's a direct demonstration." WHAT is a direct demonstration? You did not supply a demonstration. You simply made a statement that shows you are making up words and hoping I won't notice. I don't mean to be rude here but you are bullshitting about this.

Proof or link. Or retraction.

I did point out that cardinal equivalence is a very weak metric of the similarity of sets. The integers and rationals are very different in terms of order and topology. In fact the subject of computability depends CRUCIALLY on the ordinal nature of the natural numbers: there's a first, and a second, and a third, and so forth. The rationals, under their usual ordering, have no such structure.
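
To make that concrete: the positive rationals can be listed one by one (the Calkin-Wilf enumeration is one well-known way to do it), but the listing order completely scrambles their usual numeric order. A minimal Python sketch, purely for illustration:

from fractions import Fraction
from itertools import islice

def calkin_wilf():
    # Enumerates every positive rational exactly once: 1, 1/2, 2, 1/3, 3/2, 2/3, 3, ...
    q = Fraction(1, 1)
    while True:
        yield q
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)

print(list(islice(calkin_wilf(), 10)))
# The "first, second, third..." positions bear no relation to which rational is numerically smaller.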

You chose to ignore my objections and just repeat your meaningless claim.

That doesn't make for interesting dialog.

That's all beside the point. We are talking about a computation that produces time - time is output, not input.

Computation produces time? What nonsense is that? Link, proof, or retraction. You can't make up your own science and change the subject when challenged.


Planck time would be a product of any computation that produced the universe.

Claim without evidence. Planck time is a feature of our current theory of physics. We have no idea if it's actually an aspect of the universe, or only of our state of knowledge of the universe.

In a finite number of steps.
Which leaves you in the odd position of being forced to acknowledge the possibility - the soundly based speculation - that a non-computable universe could nevertheless be emulated on a Turing machine.

You're being so disingenuous that at some point I have to stop replying to your posts. A TM can't compute something that's not computable. You're just making up words and typing them in, hoping I'll get bored and stop responding, leaving you with the last word. You may well win that game with your current line of argument.

You say a TM can compute something that's noncomputable? That's a square circle, a four-sided triangle. It's false by definition.

So: "A TM can compute something noncomputable." Proof, link, or retraction.


I'm fine with that. It's the possibility of emulation that I need - that's how one could get mental illness in an artificial intelligence.

You need it but you haven't defined it. Once you define emulation your idea will be more clear. If by emulation you mean approximation, I agree with your point. But approximation's useless in this context.


Our imperfect knowledge of the product of the computation (if any) does not tell us how it is done. That's hardly surprising.

Contemporary physics does not allow for the possibility of infinitary computation.

Contemporary physics says nothing much about the subject.

You're wrong. It does. We already have a well-developed theory, several theories in fact, of how infinitary and/or nondeterministic computation would work. What we DON'T have is any physical theory that would allow us to instantiate these ideas in the physical world.

If you claim otherwise: Link, proof, or retraction.

In what way did my likening velocities and positions to temperatures and densities lead you to believe any of those attributes were "artifacts of our minds"?

Because that's exactly what you said.

Of course in a sense all of observed reality is a mental creation,

And then just said again. You don't even seem to read your own posts.

but the subcategory we term "real" and not "artifact" is a very useful part of that creation, and densities, temperatures, velocities, etc., belong in it, separate from "artifact". IMHO.

If you had a humble opinion, we could debate it. You keep going back and forth, taking my quotes out of context and changing the subject. It's frustrating. My frustration is probably making me sound rude. I'd prefer to stop responding to your posts before this gets worse. This most recent post of yours, I found disingenuous in the extreme. Maybe it's just me.

No, I'm arguing nothing of the kind - closer to the opposite, if anything.

Well clearly I don't understand what you're saying. Perhaps it's all my fault.

I would just like link, proof, or retraction at the points I've indicated. Especially on your claim that "countability preserves computability."

Oh and "A TM can compute something noncomputable." Link, proof, or retraction on that one too. Those two claims. If nothing else.
 
These look like interesting links. I'll add them to my reading queue, which gets longer every day. However I don't need to read these articles in order to know that they can't possibly prove what you claim. That's because nobody can generate perfectly random numbers, for two fundamental reasons.

1) We don't even know, and can't ever prove, that there even are any truly random processes in the world.

2) And even if there are, any physical implementation would be subject to design bias and physical error.

So we could never have a mechanism that generates truly random numbers....
If 'true randomness' is so elusive and impossible to prove it even exists, no point arguing whether it can or has been implemented.
Don't know. Nobody knows. But it can't be computational. If the organization and speed of a computation make a difference, that difference is not computational. It must be something else. And exactly what is that something else?
Maybe if you chew over these two Wiki articles, something 'eureka' will emerge:
https://en.wikipedia.org/wiki/Artificial_consciousness
https://en.wikipedia.org/wiki/Noise-based_logic
I have been deeply impressed with Laszlo Kish's work in general. A true genius.
 
If 'true randomness' is so elusive and impossible to prove it even exists, no point arguing whether it can or has been implemented.

Maybe if you chew over these two Wiki articles, something 'eureka' will emerge:
https://en.wikipedia.org/wiki/Artificial_consciousness
https://en.wikipedia.org/wiki/Noise-based_logic
I have been deeply impressed with Laszlo Kish's work in general. A true genius.
Thanks for the links. I am particularly impressed with this quote from the Wiki Artificial_consciousness link:
Aleksander's impossible mind
Igor Aleksander, emeritus professor of Neural Systems Engineering at Imperial College, has extensively researched artificial neural networks and claims in his book Impossible Minds: My Neurons, My Consciousness that the principles for creating a conscious machine already exist but that it would take forty years to train such a machine to understand language. Whether this is true remains to be demonstrated and the basic principle stated in Impossible Minds—that the brain is a neural state machine—is open to doubt.
I have always maintained that an AI (or even Human Intelligence) cannot function just on its operating system.

In order to be able to make associative decisions, a period of "learning" (acquiring knowledge) is an absolute requirement. Verbal language cognition seems largely to have been solved, but the system must still be taught verbally by the user: voice recognition, accent, etc.

In humans the baby begins to learn its environment from the moment it is born and exposed to the environment. I believe the current estimate for a human brain to learn basic survival skills is about 16 years, after which it is assumed that sufficient knowledge has been gained to make associative decisions. Of course, some people never learn from experience, because they have not paid attention to causality.

IOW, any "computational operating system" without knowledge will be unable to make associative cognitive decisions. A learning period is an absolute requirement for sentience to become functional.

In commercial computers, certain types of learning are easy: just download the pertinent knowledge to the HD memory partition, provided the information is already symbolized, such as numbers, equations, fonts, etc. We even have spell checkers, which will suggest several alternative words if the user has made a mistake in spelling.
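
Once the knowledge is symbolized, that kind of "learning" really is just lookup plus comparison. A minimal Python sketch (the tiny word list here is just a made-up stand-in for a downloaded dictionary):

import difflib

# A toy "downloaded" vocabulary; a real spell checker would load a full dictionary.
vocabulary = ["symbol", "symbolic", "symbolize", "cymbal", "simple"]

def suggest(word, n=3):
    # Return up to n known words that look closest to the (possibly misspelled) input.
    return difflib.get_close_matches(word, vocabulary, n=n, cutoff=0.6)

print(suggest("symbolise"))   # e.g. ['symbolize', 'symbolic']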

But downloading emotions onto a computer, such as a "reward" system which provides motivation, would seem very difficult, or would require a long period of learning.
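
For what it's worth, in present-day AI a "reward" system is usually nothing more than a number fed into a learning rule. A toy Q-learning sketch (a made-up two-action example; no claim that this amounts to motivation or emotion):

import random

q_values = [0.0, 0.0]          # estimated value of action 0 and action 1
alpha, epsilon = 0.1, 0.2      # learning rate and exploration rate

def reward(action):
    # The hard-coded "motivation" signal: action 1 pays off, action 0 does not.
    return 1.0 if action == 1 else 0.0

for _ in range(1000):
    # Mostly exploit the best-known action, occasionally explore at random.
    action = random.randrange(2) if random.random() < epsilon else q_values.index(max(q_values))
    q_values[action] += alpha * (reward(action) - q_values[action])

print(q_values)   # action 1's estimate drifts toward 1.0; action 0's stays near 0.0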

OTOH, humans are born with emotional experiences such as hunger, pain, and discomfort, but do not have knowledge of the causality, which must be learned "on the fly", so to speak. As soon as a baby experiences the emotion of hunger it begins to cry, and mama will feed it and satisfy the hunger. Lesson learned: when you are hungry, eat something. "Potty training" may take weeks even in a one-year-old child to grasp the (dis)association between the potty and the diaper (or the floor).
Even dogs are able to learn, when they must "void", to warn their keeper that it's time to open the door and let them out into the yard so they can relieve their discomfort. These emotional experiences are difficult to build into an AI.
I believe this is due to chemical signals sent to the brain, rather than electric coding.

IMO, an AI does not experience these types of physical chemical discomfort so it must be trained to recognize these and other human symptoms, such as bleeding, broken bones, head trauma, etc. "on the fly" and from experience.

To humans those phenomena are clearly symbolic of injury. To an AI they are meaningless; it is not subject to such emotional experiences as pain, satisfaction, sadness, happiness, etc., so it cannot relate to those phenomena (empathy).

So there is a certain dichotomy between teaching HI and AI.
A human brain has the operational ability to experience emotion or to recognize someone else's discomfort, but must learn language, arithmetic, history, etc. which may take many years.

An AI brain can easily learn some of those symbolized areas of knowledge by downloading to its HD all symbols that require only purely logical processing, but it must learn to recognize more subtle symbolic emotional expressions, which may take many years of exposure to human interactions.

Once the AI learns that human "tears" can signify a range of emotions, it might be able to compare and associate that symbolic phenomenon with other environmental conditions, and identify the cause for the tears.
Which would be a representation of artificial empathy.

In the movie I, Robot, this dichotomy is clearly shown. Anyone who has seen the movie will remember the "wink", the meaning of which the robot had just learned.....;)

Using that "smiley" just reminded me of our keen symbolic associative powers. This type of downloadable symbolism might even prove useful in an AI.....:)
 
...So there is a certain dichotomy between teaching HI and AI .
A human brain has the operational ability to experience emotion or to recognize someone else's discomfort, but must learn language, arithmetic, history, etc. which may take many years.

An AI brain can easily learn those symbolized areas of knowledge by downloading all symbolisms which require purely logical processing to its HD, but must learn to recognize more subtle symbolic emotional expressions, which may take many years of exposure to human interactions.
Once the AI learns that human tears can signify a range of emotions, it might be able to compare and associate that symbolic phenomenon with other environmental conditions, and identify the cause.
Which would be a representation of artificial empathy...
Most everyone agrees AI must be trained initially. Anyway, along those lines above, here's another rather lengthy article that 'forks' more back to the OP query:
https://www.wired.com/story/how-to-build-a-self-conscious-ai-machine/
I've saved it to my HD as a handy go-to reference. Among other things, it's a nice cure for some optimists' view that AI will naturally converge to pure, neuroses-free benevolence.
 
You're the one who said properties of objects are subjective.
I said no such thing.
I said the category we label "objective" is very useful, and worth firmly separating from other categories of mental event.
I'm not familiar enough with the consequences of Bell's inequality.
https://faraday.physics.utoronto.ca/PVB/Harrison/BellsTheorem/BellsTheorem.html
Contemporary physics does not allow for the possibility of infinitary computation.
So?
You're wrong. It does. We already have a well-developed theory, several theories in fact, of how infinitary and/or nondeterministic computation would work. What we DON'T have is any physical theory that would allow us to instantiate these ideas in the physical world.
That's irrelevant, if you are talking about some hypothetical computation that produces the physical world in the first place.
You say a TM can compute something that's noncomputable? That's a square circle, a four-sided triangle. It's false by definition.
I think it can in principle emulate anything that operates on the same logic as a TM, to any degree of precision and accuracy required. Whether you wish to call what it is emulating a computation or not I don't care.
I would just like link, proof, or retraction at the points I've indicated. Especially on your claim that "countability preserves computability."

Oh and "A TM can compute something noncomputable." Link, proof, or retraction on that one too. Those two claims. If nothing else.
Countability is one way of preserving step by step logical progression, describable by propositional calculus, within an infinitely "dense" range of logical inferences or "steps".
As to the second, if you refrain from putting quote marks around your words as if they were mine the issue vanishes.
If by emulation you mean approximation, I agree with your point. But approximation's useless in this context.
Approximation gives you everything you need to emulate mental illness in a human mind or any other feature of the universe that operates by standard TM logic - in principle.
For all you know, whatever is producing the universe is doing the same thing - its output is approximations in the first place. That is after all what current physical theory seems to indicate.
 
I'll take the opportunity to ask if Pi is a random number?

No, Pi is not a random number.

Pi is not random because its digits can be computed by a program. Even though its digits are statistically random as far as we know, they're deterministic. You can write a program that cranks out as many digits as you want. The digits look random but they are not actually random.
Such a number is called a computable real number. A real number is computable if there is a program that, given n, outputs its n-th digit within finitely many steps. Since a program must be a finite string of symbols, that means we have a finite-length description of pi.

https://en.wikipedia.org/wiki/Computable_number

There are in fact lots of finite-length, closed-form expressions for pi. There are a bunch of them here.

http://mathworld.wolfram.com/PiFormulas.html

For each formula, someone could write a program and crank out as many digits of pi as they like, subject only to limitations of computing resources. If you theoretically assumed unlimited resources, then every digit could be generated; and each individual digit could be generated in a finite number of steps.

The moral of the story is that pi actually encodes only a finite amount of information. Once you have the finite-length program, you can get as many digits as you want.
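
For what it's worth, here is a minimal Python sketch of exactly that, using Gibbons' unbounded spigot algorithm: a finite program that will emit as many digits of pi as you let it run.

from itertools import islice

def pi_digits():
    # Gibbons' unbounded spigot algorithm: yields 3, 1, 4, 1, 5, 9, ...
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = 10 * q, 10 * (r - n * t), t, k, (10 * (3 * q + r)) // t - 10 * n, l
        else:
            q, r, t, k, n, l = q * k, (2 * q + r) * l, t * l, k + 1, (q * (7 * k + 2) + r * l) // (t * l), l + 2

print(list(islice(pi_digits(), 20)))   # the first 20 digits: 3, 1, 4, 1, 5, 9, 2, 6, ...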

It's true that pi is irrational, meaning that it's not the ratio of two integers like 2/3 for example. But some irrational numbers are computable and some aren't. So for purposes of discussing randomness, what's important about pi is that it's computable.

The noncomputable numbers are the real numbers that are truly random. They consist of infinitely many digits, and they're irrational, and there is no program or algorithm that cranks out their digits. To express a noncomputable number, you need to supply an actually infinite amount of information. You need to give every one of its decimal digits. There's no program that generates them.

The noncomputable numbers are what's left of the reals once you remove the computables; they include no rationals at all, only irrationals (most of them, in fact).

In fact the computable numbers are a subfield of the reals. That means the sum, difference, product, and (nonzero-denominator) quotient of two computable numbers are again computable.

The rationals are another familiar subfield. The subfield of computable numbers contains all the rationals, and then some but not all of the irrationals.

Just like the rationals, the set of computable numbers is countable. That's because there are only countably many Turing machines.
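
The countability is easy to see once you notice that every program is a finite string over a finite alphabet, so you can list them all in order of length. A minimal Python sketch of that listing, using a made-up three-symbol alphabet for illustration:

from itertools import count, islice, product

ALPHABET = "abc"   # stand-in for whatever finite symbol set a programming language uses

def all_finite_strings():
    # Shortlex enumeration: every length-1 string, then every length-2 string, and so on.
    # Any particular finite string (hence any program text) appears at some finite position.
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

print(list(islice(all_finite_strings(), 15)))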

It can only be approximated but always remains open ended ... .

Yes, pi can be approximated to arbitrary precision by an algorithm. That actually makes it special. That makes it computable. Most real numbers aren't computable. Those are the random real numbers.

Is that not why we call it an "irrational number"?

Yes, pi is irrational. But not all irrational numbers are computable. Most aren't. When we talk about whether a real number is random or not, we care about computability. An irrational could be computable or noncomputable.

An irrational number is one that can't be expressed as a ratio of integers, like 2/3. But if you think of an irrational number like sqrt(2), there's an algorithm that lets you determine each of its digits. So sqrt(2) is computable.
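
Concretely, a minimal Python sketch of that (it relies on math.isqrt, so Python 3.8 or later): the first n decimal digits of sqrt(2) fall straight out of an integer square root, so a finite program pins down every digit.

from math import isqrt

def sqrt2_digits(n):
    # floor(sqrt(2) * 10**n): the string is "1" followed by the first n decimal digits of sqrt(2).
    return str(isqrt(2 * 10 ** (2 * n)))

print(sqrt2_digits(30))   # 1414213562373095048801688724209...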

Strangely, it seems to pop up everywhere and is not necessarily associated with circles only.

Yes it's all over the place in math and physics. Pi is "out there" in the world in some way, waiting for us to discover it. What that means metaphysically, I don't know.
 
Those Lotto number generators which blow numbered balls around until one pops into the exit tube

Not random?

What would have to be the case in order for those lotto numbers to be random?

1) There would have to be randomness in the universe in the first place. This is something we currently do not know. And it's high on the list of things we may never know. So that's one problem.

2) Even if there is randomness in the universe, our physical mechanism is subject to mechanical imperfection and bias. Those modified popcorn poppers filled with pingpong balls, have they been certified purely free of bias? I hope you can see that the answer is no. Nothing we build can be perfectly free of bias.
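
Just to illustrate what hunting for such bias even looks like: a minimal Python sketch of a chi-square frequency test on simulated draws. The weights below are made up, and passing a test like this still wouldn't certify true randomness.

import random

def chi_square_uniform(draws, k):
    # Chi-square statistic against the hypothesis that all k outcomes are equally likely.
    expected = len(draws) / k
    counts = [draws.count(i) for i in range(k)]
    return sum((c - expected) ** 2 / expected for c in counts)

# Simulated lotto machine with a slight mechanical bias toward ball 0 (weights invented for the demo).
biased = random.choices(range(10), weights=[2] + [1] * 9, k=10000)
fair = [random.randrange(10) for _ in range(10000)]

print(chi_square_uniform(biased, 10))   # large value: the bias shows up in the frequencies
print(chi_square_uniform(fair, 10))     # typically somewhere near 9 (the degrees of freedom)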

In the 1970's some students built the world's first wearable computer and used it to win money by exploiting tiny imperfections in casino roulette wheels. Great story. Saw a documentary about them on cable tv once. One of the students got a bad burn when the computer shorted out against their skin. But the theory worked. Roulette wheels are not random.

https://en.wikipedia.org/wiki/Eudaemons
 
If 'true randomness' is so elusive and impossible to prove it even exists, no point arguing whether it can or has been implemented.

My point exactly. People write misleading articles claiming "true" random numbers can be based on cosmic rays or other quantum-mysterious processes. But they're not random. If it's a physical mechanism it can't be perfectly random. That's all I'm saying.
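
For comparison, the "random" numbers ordinary software hands out are openly deterministic: seed the generator the same way twice and you get the identical stream. A minimal Python sketch:

import random

def stream(seed, n=6):
    # A pseudo-random generator is a pure function of its seed: same seed, same output.
    rng = random.Random(seed)
    return [rng.randint(1, 49) for _ in range(n)]

print(stream(2024))                    # some sequence of "lotto" numbers
print(stream(2024))                    # exactly the same sequence again
print(stream(2024) == stream(2024))    # True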

Maybe if you chew over these two Wiki articles, something 'eureka' will emerge:
https://en.wikipedia.org/wiki/Artificial_consciousness
https://en.wikipedia.org/wiki/Noise-based_logic
I have been deeply impressed with Laszlo Kish's work in general. A true genius.

Thanks much. Looks interesting. Will read.
 
What would have to be the case in order for those lotto numbers to be random?

1) There would have to be randomness in the universe in the first place. This is something we currently do not know. And it's high on the list of things we may never know. So that's one problem.

2) Even if there is randomness in the universe, our physical mechanism is subject to mechanical imperfection and bias. Those modified popcorn poppers filled with pingpong balls, have they been certified purely free of bias? I hope you can see that the answer is no. Nothing we build can be perfectly free of bias.

In the 1970's some students built the world's first wearable computer and used it to win money by exploiting tiny imperfections in casino roulette wheels. Great story. Saw a documentary about them on cable tv once. One of the students got a bad burn when the computer shorted out against their skin. But the theory worked. Roulette wheels are not random.

https://en.wikipedia.org/wiki/Eudaemons

OK I can go with that

It would be a fairly pointless but interesting exercise if all Lotto winning number strings (say only those 20 digits long, or some other arbitrary length) were brought together to see if any were the same

My guess would be no

:)
 
My point exactly. People write misleading articles claiming "true" random numbers can be based on cosmic rays or other quantum-mysterious processes. But they're not random. If it's a physical mechanism it can't be perfectly random. That's all I'm saying.
Isn't that what Max Tegmark is saying also? A mathematical universe?
 
Isn't that what Max Tegmark is saying also? A mathematical universe?

As far as I know he's saying the universe is a mathematical structure. I don't know how that relates to what I was talking about. Tegmark's paper is on my queue. I really want to sit myself down and work through his paper on MUH line by line. It would probably raise my blood pressure but also make me smarter.
 
Isn't that what Max Tegmark is saying also? A mathematical universe?

What Universe wouldn't be mathematical? How could a physical Universe not have mathematics embedded in it?

The Universe is physical; it's its Nature. The Universe, by its very existence, as compared to nothing, must be.

There is no other possibility; existence will always be, for infinity. It has nothing to do with mathematics.

The existence of any, any, I mean ANY mathematics is ALL based on physical objects. There is no getting around this fact.
 
OK I can go with that

It would be a fairly pointless but interesting exercise if all Lotto winning number strings (say only those 20 digits long, or some other arbitrary length) were brought together to see if any were the same

My guess would be no

I was in the local convenience store the other day. A few people were standing around buying lottery tickets. Clerk tells me it's up to a couple hundred million. I buy a ticket. Didn't win.
 
I was in the local convenience store the other day. A few people were standing around buying lottery tickets. Clerk tells me it's up to a couple hundred million. I buy a ticket. Didn't win.

Millions of people didn't win.

Dam tough to get even 3 numbers, let alone 6 or 7.
 
I said no such thing.
I said the category we label "objective" is very useful, and worth firmly separating from other categories of mental event.

I may have misunderstood your remark. I don't think it's central to the discussion.


I'll take a run at the link. It's funny. Some things I have an affinity for, others make my eyes glaze. Articles about Bell's theorem are in the latter category. I know there's a thing called entanglement where two particles far apart have correlated states in the sense that when you measure one the other's state becomes determined. And that Chinese scientists recently demonstrated the effect from earth to space I think. I remember being amazed that they have a technique to identify particular photons. That's everything I know about it.


That was in response to my saying, "Contemporary physics does not allow for the possibility of infinitary computation." And I said that because you are claiming that infinitary computation might be the ultimate answer to our philosophical mysteries. SO ... if that is true, and I think it might well be, then we will need new physics. Because existing physics doesn't allow infinitary computation.

That's irrelevant, if you are talking about some hypothetical computation that produces the physical world in the first place.

I do NOT believe the world is a computation, as computation is currently understood.

I think it can in principle emulate anything that operates on the same logic as a TM, to any degree of precision and accuracy required. Whether you wish to call what it is emulating a computation or not I don't care.

A thing is computable if and only if it is the output of a TM that halts. That's the definition.

So it's not possible for a TM to compute something that's noncomputable. You're going against the standard definition. When you make up your own meanings for technical terms, it's not conducive to conversation. You claimed a TM can compute something that's not computable. That's absurd. Not because it's a fact of nature. But because it's a fact of definition.

Countability is one way of preserving step by step logical progression,

But that's mathematically false. Countability does not preserve order. Countability does NOT preserve step by step logical progression. That's basic math.

describable by propositional calculus, within an infinitely "dense" range of logical inferences or "steps".

You're dancing around the point where I challenged you to post a link, a proof, or a retraction for your meaningless claim that "countability preserves computability." It does not. When you talk about step-by-step anything, you are talking about ordinals. There are countable ordinals that are not computable; indeed, that do not even have notations. You are making claims about countable ordinals that are not true. https://en.wikipedia.org/wiki/Large_countable_ordinal

When you name-drop Bell's inequality I admit I am ignorant. Why don't you likewise admit that you're not up on ordinals and the theory of computation? You can't bluff your way through this.

An infinitely dense range of logical inferences? That's word salad.


As to the second, if you refrain from putting quote marks around your words as if they were mine the issue vanishes.

Because then you would not have to be reminded of the things you're saying?

Approximation gives you everything you need to emulate mental illness in a human mind or any other feature of the universe that operates by standard TM logic - in principle.
For all you know, whatever is producing the universe is doing the same thing - its output is approximations in the first place. That is after all what current physical theory seems to indicate.

I couldn't interpret this paragraph in the context of the conversation. But if the universe is only an approximation, what's it approximating? I don't follow this line of argument.
 
Millions of people didn't win.

Was my loss determined at the moment of the big bang? Or was it random? Or is there some intermediate state, not yet accessible to our understanding, between determinism and randomness?
 
Was my loss determined at the moment of the big bang? Or was it random? Or is there some intermediate state, not yet accessible to our understanding, between determinism and randomness?
Probability.
As long as the potential for something exists, it's just a matter of probability.

Life on earth itself is a perfect example. During the BB the potential for life was created, not as a deterministic imperative; but even within the chaotic randomness, the probability existed, given enough time and tries.

And here we are, some 10 billion years later, at the outskirts of a galaxy. But there may well be other planets where life developed (evolved) long before us, and I am certain that there are or will be planets which will eventually produce life also.

Life was not determined, nor was it random, it was probabilistic, which fortunately for us became fully explicated during the human epoch on earth after another 3 billion years, and after 2 trillion, quadrillion, quadrillion, quadrillion tries of chemical interactions, gradually evolving into greater complexity, such as forming the first self-replicating cell.
 
Was my loss determined at the moment of the big bang? Or was it random? Or is there some intermediate state, not yet accessible to our understanding, between determinism and randomness?

;) god didn't want you to win?

:)
 