Can artificial intelligences suffer from mental illness?

Exactly. However, we cannot forget that in every case of growth, growth continues until a point of unsustainability is reached. This is the lack of understanding of those who support the notion of "controlled growth".

Bartlett put it nicely: "it's like being on the Titanic. You can go first class or steerage. The result is the same".

Only zero growth within the limits of sustainability is possible. Exceed sustainability and bad things will happen!

It sounds like, as long as there is growth in the human population, the threat of reaching the limit of space and resources available here on Earth is certain; it is only a matter of time, no matter what.
 
It sounds like, as long as there is growth in the human population, the threat of reaching the limit of space and resources available here on Earth is certain; it is only a matter of time, no matter what.
Right. There will come a time when "zero" growth will happen. This does not necessarily mean the transition will be smooth; more likely, at some point a massive extermination event will happen, which then allows for steady growth for another period of time, but that does not necessarily mean "human growth". It could well be that the insects will inherit the earth.

There have been five major extinction events so far, in addition to worldwide wars and disease, and we are probably in the middle of the "sixth" extinction event.
The Holocene extinction, otherwise referred to as the Sixth extinction or Anthropocene extinction, is the ongoing extinction event of species during the present Holocene epoch, mainly as a result of human activity.
https://en.wikipedia.org/wiki/Holocene_extinction
 
As yet, you are right, but as I understand from Apple's latest news on Siri, she learns your preferences and habits and will respond to you faster and faster as she does.
Musing here... if the AI responds by prediction, then it loops the behaviour pattern of the human.
Is a looping process that drives a behaviour pattern by reinforcing an existing behaviour pattern a living concept of "being"? Superficially it does sound as if it is reducing human behaviours into more predictable patterns that pander to the computer's inability to predict randomness of behaviour...?
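To make that looping idea concrete, here is a toy sketch (purely illustrative, with an invented usage log and acceptance rate, not how any real assistant works): an "AI" that only ever suggests the most frequent past choice, and a user who usually accepts the suggestion, so the recorded history narrows and the existing pattern reinforces itself.

```python
# Toy feedback-loop sketch: the assistant only predicts the most frequent
# past choice, and accepted suggestions feed straight back into the history.
from collections import Counter
import random

history = ["news", "music", "news", "podcast", "news"]  # hypothetical usage log

def predict(history):
    # Most common past choice wins; no exploration, no asking.
    return Counter(history).most_common(1)[0][0]

def simulate(days=20, accept_rate=0.8):
    for _ in range(days):
        suggestion = predict(history)
        if random.random() < accept_rate:
            # Most of the time the user just accepts what is offered...
            history.append(suggestion)
        else:
            # ...occasionally they pick something else at random.
            history.append(random.choice(["news", "music", "podcast"]))
    return Counter(history)

print(simulate())  # "news" dominates more and more: the loop feeds itself
```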
 
Musing here... if the AI responds by prediction, then it loops the behaviour pattern of the human.
Is a looping process that drives a behaviour pattern by reinforcing an existing behaviour pattern a living concept of "being"? Superficially it does sound as if it is reducing human behaviours into more predictable patterns that pander to the computer's inability to predict randomness of behaviour...?

If you ALWAYS catch the 8:30am bus, along with a large number of other people, the bus company might predict that more people will need the 8:30am bus in the future and put more buses on that route

However, it just might be that people would prefer to catch an 8:45am (which does not exist, as the next scheduled bus is the 9:15am, too late for those on the 8:30)

So learning from a person's habits and pandering to them is a bit lazy

Far better if the AI would engage and ask about the person's needs

The bus company should ask the travelers / customers the best times for their needs, not just assume that because they send buses out and people catch them, those times are the BEST
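A rough sketch of that bus example (the boarding counts and survey numbers below are invented): predicting from who actually boards can only ever add more of what already exists, while asking riders can surface the 8:45am service that was never offered.

```python
# "Learn from habits" vs "engage and ask", using made-up numbers.
observed_boardings = {"08:30": 120, "09:15": 40}                 # what the company sees
surveyed_preferences = {"08:30": 50, "08:45": 90, "09:15": 20}   # what riders would say if asked

def plan_from_observation(boardings):
    # Habit-learning: add capacity to whatever is already busiest.
    return max(boardings, key=boardings.get)

def plan_from_survey(preferences):
    # Engage and ask: schedule the time people actually want.
    return max(preferences, key=preferences.get)

print(plan_from_observation(observed_boardings))  # 08:30 - more of the same
print(plan_from_survey(surveyed_preferences))     # 08:45 - a service that never existed before
```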

If I thought my AI was predicting my need I would commence Operation Bat Shit

I would explain what Operation Bat Shit entails but my AI might learn about it which would blunt its effectiveness

:)
 
So learning from a person's habits and pandering to them is a bit lazy

Far better if the AI would engage and ask about the person's needs

Precisely!
If I thought my AI was predicting my need I would commence Operation Bat Shit
It would be your duty to do so! :D
Otherwise, save the $10,000.00 on the home robot and simply let the radio tell you whether it's going to rain in half an hour, choose something else to play on your stereo, and open your own web browser.

Making life lazier for idiots is hardly a way to drive technological advancement... nor is making life poorer for the working class, for that matter.

A robot dancing chimp that says your name and "oh my, what a big penis you have" is not really my idea of technological advancement.
 
Right. There will come a time when "zero" growth will happen. This does not necessarily mean the transition will be smooth; more likely, at some point a massive extermination event will happen, which then allows for steady growth for another period of time, but that does not necessarily mean "human growth". It could well be that the insects will inherit the earth.

There have been five major extinction events so far, in addition to worldwide wars and disease, and we are probably in the middle of the "sixth" extinction event.
https://en.wikipedia.org/wiki/Holocene_extinction

What possible scenarios would lead to a massive extermination of human life here on Earth? I think of nuclear war, a pandemic, a massive meteorite impact, or a supervolcano eruption as some possible scenarios.
 
Musing here... if the AI responds by prediction, then it loops the behaviour pattern of the human.
Is a looping process that drives a behaviour pattern by reinforcing an existing behaviour pattern a living concept of "being"? Superficially it does sound as if it is reducing human behaviours into more predictable patterns that pander to the computer's inability to predict randomness of behaviour...?
Well, it may be able to detect a drop in atmospheric pressure, and when you get ready to go out, the robot may stand at the door with your umbrella, telling you that there is a certain percentage likelihood of rain and that you may need the umbrella to protect you.
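As a very crude illustration of that pressure idea (the thresholds and readings below are invented, not a real forecasting model), the robot's announcement could amount to something like:

```python
# Crude heuristic: map the rate of barometric pressure drop to a rough
# "chance of rain" the robot could announce at the door.
def rain_likelihood(pressure_drop_hpa_per_3h):
    if pressure_drop_hpa_per_3h >= 3.0:
        return 0.8   # rapid fall: storm likely
    if pressure_drop_hpa_per_3h >= 1.0:
        return 0.5   # steady fall: unsettled
    return 0.1       # flat or rising: probably dry

drop = 1013.2 - 1010.9  # hypothetical readings three hours apart, in hPa
chance = rain_likelihood(drop)
print(f"There is roughly a {chance:.0%} chance of rain; you may need the umbrella.")
```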

Note that insects have this ability and become very active in feeding hours before a storm arrives.

All the instruments we now use to measure things can be built into an AI robot, which can verbally relay the information. If it "needs" to learn something, a plug-in to the internet would give it near-unlimited access to information at incredible speed, and it could weigh the information to determine what is "legitimate" knowledge and what is "speculative", from which it can make a "best guess" as to its understanding of the answer. This is what humans do: associative thinking and making a "best guess" of what is being observed and recorded by the neural network.
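A minimal sketch of that "weighing" step, assuming made-up source credibility weights: each retrieved claim votes for an answer in proportion to how much its source is trusted, and the best-supported answer becomes the "best guess", with a rough confidence attached.

```python
# Weighted-vote sketch of separating "legitimate" from "speculative" sources.
from collections import defaultdict

claims = [
    ("peer-reviewed journal", 0.9, "answer A"),
    ("government dataset",    0.8, "answer A"),
    ("anonymous blog post",   0.2, "answer B"),
    ("forum comment",         0.1, "answer B"),
]

def best_guess(claims):
    support = defaultdict(float)
    for source, weight, answer in claims:
        support[answer] += weight            # trusted sources count for more
    answer, score = max(support.items(), key=lambda kv: kv[1])
    total = sum(support.values())
    return answer, score / total             # best guess plus a rough confidence

print(best_guess(claims))   # ('answer A', 0.85)
```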

In humans, this "best guess" can easily be fooled, albeit purposely, by presenting images which cannot occur in reality. Creating optical illusions is a favorite human pastime and shows the limitations of the human brain.

I wonder if an AI could be fooled that way as it observes reality in a different way than humans?
 
what is "legitimate" knowledge and what is "speculative", from which it can make a "best guess"
Totally hackable.
You recall the Wall St issue recently?
Hundreds of millions of dollars wiped out by a single bad tweet.
That's thousands of elderly retirees whose entire life savings have just been exterminated, and they are getting thrown out of their houses onto the street (in the USA, where they like that kind of society).

Cost/benefit/loss evaluation:
the massive cost of one internet-mining AI
vs
the cost of an internet subscription to one internet-mining AI bot, with free extra cell phone data per month and a Starbucks cultural-sensitivity-seminar discount voucher.

Creating optical illusions is a favorite human pastime and shows the limitations of the human brain.
You refer to the brain's RAM (short-term & long-term), bandwidth, processing & storage?

I wonder if an AI could be fooled that way as it observes reality in a different way than humans?
If the only need is to limit the data used to decide the outcome, then it would be very easy.
A DNS attack at a critical time would force the AI to put out numbers that would completely collapse the country's share market or export market.
Faked biological bans due to infestation of food exports would create a time delay that forces the dumping of massive amounts of perishable foods, which would also taint the consumer market.

Simply load a DNS attack, then filter the real news to include the web-data significance of a critical event, timed with a shutdown that escalates the critical rating.

It doesn't matter too much if US house price values dropped by 50% overnight for three or four days, because most would think it was a mistake and make jokes about it, etc...
but... export markets... share/stock speculation...

Or
being able to load entire freighting companies onto the terrorist-funding ban list,
which would cancel all their certified documentation for imports and exports.
The massive need to then lodge new paperwork, etc., and the time delays...

Knowing that companies have massively downsized their certification staffing, with a heavy reliance on pre-vetting by the customer instead of actual human vetting by the company...
that is something just waiting to be used.
AIs would be like giving a drunk teenager a driving licence and a car and then saying "I dare you".
 
Well, it may be able to detect a drop in atmospheric pressure, and when you get ready to go out, the robot may stand at the door with your umbrella, telling you that there is a certain percentage likelihood of rain and that you may need the umbrella to protect you.

Note that insects have this ability and become very active in feeding hours before a storm arrives.

All the instruments we now use to measure things can be built into an AI robot, which can verbally relay the information. If it "needs" to learn something, a plug-in to the internet would give it near-unlimited access to information at incredible speed, and it could weigh the information to determine what is "legitimate" knowledge and what is "speculative", from which it can make a "best guess" as to its understanding of the answer. This is what humans do: associative thinking and making a "best guess" of what is being observed and recorded by the neural network.

In humans, this "best guess" can easily be fooled, albeit purposely, by presenting images which cannot occur in reality. Creating optical illusions is a favorite human pastime and shows the limitations of the human brain.

I wonder if an AI could be fooled that way as it observes reality in a different way than humans?

Some good points there
you may need the umbrella - WHY? - the ants are busy in my inbuilt ant nest

Associative thinking and making a "best guess" of what is being observed and recorded by the neural network

As long as we don't get an AI MR whose best guess for a UFO is an alien spacecraft


I wonder if an AI could be fooled that way as it observes reality in a different way than humans?

Tricky. It would need a fairly well-defined reality within which to make a comparison

:)
 
I wonder if an AI could be fooled that way as it observes reality in a different way than humans?
Tricky. It would need a fairly well-defined reality within which to make a comparison
It could stand in the doorway with an umbrella because today there is an abundance of ultraviolet light which may destroy your skin cells and cause skin cancer.

Of course you can refuse, the AI will not lock the door to prevent you from going out into full sunlight, or the expected rain...:cool:
 
It could stand in the doorway with an umbrella because today there is an abundance of ultraviolet light which may destroy your skin cells and cause skin cancer.

Of course you can refuse, the AI will not lock the door to prevent you from going out into full sunlight...:cool:

I recall reading a story where the Three Laws of Robotics were so rigidly enforced that humans were not allowed to do anything

:)
 
Overridden by "To do so would be harmful to humans"

I can't recall if they reprogrammed themselves ......:)
In "I robot" the Maker disabled that command by forcing the AI to" promise" (make a commitment) to override the 1st Law (in this instance) and obey the command from the Maker to throw him through the window.

I find this twist a little far fetched, because he could just have the AI break the window and then just jump himself. But then that would have caused the AI to jump after him, trying to save him and that was not the intent of the Maker.

He did not want the AI to self-destruct because he was a piece of the puzzle that needed to be preserved.
 
It could stand in the doorway with an umbrella because today there is an abundance of ultraviolet light which may destroy your skin cells and cause skin cancer.

Of course you can refuse, the AI will not lock the door to prevent you from going out into full sunlight, or the expected rain...:cool:

That level of technology, while cute, is not able to share human emotions and may have a tendency to foster psychopathic habituation in emotional behaviour routines and physical actions.
Aside from that, it would be incredibly expensive.
 
That level of technology, while cute, is not able to share human emotions and may have a tendency to foster psychopathic habituation in emotional behaviour routines and physical actions.
Aside from that, it would be incredibly expensive.
Yes, I understand. But you are expecting miracles. AIs will develop with a program of a "fundamental commitment" of their resources to their owner and to voice commands or questions (conversation).

Would that be so different from a dog's commitment to its owner? That commitment emerges in dogs, but the "emotion" took a few hundred thousand years to develop in biology. An efficient AI might well advance much faster than humans, because it already starts with the advantage of abilities for observation, processing, and motor skills, plus some fundamental instructions when it wakes up for the first time......:?

IMO, even as Seth observed, "you don't have to be smart to feel pain, but you probably do have to be alive", that does not preclude such "emotional" evolutionary advancement in AI, just perhaps at a different level.
 
Yes, I understand. But you are expecting miracles. AIs will develop with a program of a "fundamental commitment" of their resources to their owner and to voice commands or questions (conversation).

Would that be so different from a dog's commitment to its owner? That commitment emerges in dogs, but the "emotion" took a few hundred thousand years to develop in biology. An efficient AI might well advance much faster than humans, because it already starts with the advantage of abilities for observation, processing, and motor skills, plus some fundamental instructions when it wakes up for the first time......:?

IMO, even as Seth observed, "you don't have to be smart to feel pain, but you probably do have to be alive", that does not preclude such "emotional" evolutionary advancement in AI, just perhaps at a different level.

I prefer the tone of this post.

Yes indeed, I love the idea of having a house and a car which are the same computer AI and can link profiles together.
Give you a shopping list on your cell phone as you pull up to the supermarket.
Remind you, just prior to passing a petrol station, that you need to fill up because you will be working in two days' time and filling now is better than waiting until the last moment (pre-empt?) = smart? (A rough sketch of this kind of rule follows below.)
Being able to make and answer phone calls while driving purely by voice, which would not distract from driving, would be a massive cost benefit in business.
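For what it's worth, the refuelling reminder mentioned above could be as simple as the following sketch; the function name, margins and numbers are all hypothetical.

```python
# Pre-emptive refuelling rule: remind the driver while passing a petrol station
# if a work trip is due within two days and the tank won't comfortably cover it.
from datetime import datetime, timedelta

def should_remind_to_refuel(fuel_range_km, next_work_trip, trip_distance_km,
                            passing_station, now=None):
    now = now or datetime.now()
    trip_soon = next_work_trip - now <= timedelta(days=2)
    tank_too_low = fuel_range_km < trip_distance_km * 1.2  # keep a 20% margin
    return passing_station and trip_soon and tank_too_low

remind = should_remind_to_refuel(
    fuel_range_km=60,
    next_work_trip=datetime.now() + timedelta(days=2),
    trip_distance_km=80,
    passing_station=True,
)
print("Reminder: fill up now, you're working in two days." if remind else "No reminder.")
```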

Equally... being able to simply hack such a system and use everyone's shopping app to pay for 100,000.00 of shopping from a mall that you then fence for 60,000
would be very disruptive for the individuals who now have no food to feed the kids, have to cancel the family visit from overseas, and have to send the elderly family members back home without seeing them for the last time, etc.

Big business doesn't care at all about that personal loss and emotional trauma,
which is why the corporates don't really care about the security side.
They only care about as much as they can easily be sued for and lose.

The jump from personal devices, which are voluntary,
to
complete systems of personal access and care being hacked
is quite a big one.
One which, if corporates were forced to pay immediate losses and emotional compensation, would probably suddenly change the language being used,
not to mention the timelines.
Technophile sales managers who live off company credit cards don't care if they get hacked;
they just use their own private accounts and cards.
The driving mechanism behind the culture needs to be socially engineered to ensure the small child is not raised as a child soldier that lusts after mass shootings, etc.
 