One brain in a vat... no... make it two

Quantum Quack

Was wondering what the long-term outcome would be of placing two AIs, independent of each other, in a chamber with learning capacity. Set them off and come back in, say, 12 months to assess the results...
Example:
Droid A teaching Droid B, which is teaching Droid A, with both droids teaching themselves from the output of the other droid. (Give each droid read-only access to the WWW.)
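
To make the setup concrete, here is a toy sketch of the loop I have in mind (Python; Droid, browse_web, WEB and so on are all made up for illustration, nothing like a real learning system):

Code:
import random

# Stand-in for read-only WWW access: a tiny table of "facts" per topic.
WEB = {
    "physics": ["f = ma", "e = mc^2", "entropy never decreases"],
    "maths": ["pi is irrational", "there are infinitely many primes", "0.999... = 1"],
}

class Droid:
    def __init__(self, name, topic):
        self.name = name
        self.topic = topic          # where this droid browses when curious
        self.knowledge = set()      # what this droid currently "knows"

    def browse_web(self):
        """Read-only lookup: pick up one random fact from its preferred topic."""
        self.knowledge.add(random.choice(WEB[self.topic]))

    def teach(self):
        """Emit one piece of knowledge as output for the other droid."""
        return random.choice(sorted(self.knowledge)) if self.knowledge else None

    def learn(self, lesson):
        """Absorb the other droid's output."""
        if lesson is not None:
            self.knowledge.add(lesson)

a = Droid("A", "physics")
b = Droid("B", "maths")

for month in range(12):             # "come back in 12 months"
    a.browse_web(); b.browse_web()
    lesson_a, lesson_b = a.teach(), b.teach()
    a.learn(lesson_b); b.learn(lesson_a)

print("A knows:", sorted(a.knowledge))
print("B knows:", sorted(b.knowledge))

The interesting question is what happens when the "knowledge" isn't a fixed list of facts but something the droids generate themselves.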
 

When the brains are "set off"... and they are identical, with identical environments... after A teaches B what it knows, and then B teaches A what it knows, etc., they would each take turns at having more knowledge than the other!

Next :)
 
Say... one was a specialist in physics and the other in mathematics.

A TOE in the making, perhaps :biggrin:
 

Two or more antagonistic learning systems reciprocally regulating each other (competing) may be what parsimoniously allows a solipsistic version of simulated reality to be possible in the future, where coherent environments can be created on the fly for observers. That is something the brain already does in dreaming, though it can be sloppy when it comes to internal consistency. (I.e., the brain could use an adversarial critic.)

Technological solipsism has to be more lawful and coherent than dreams, and more so than (what might be construed as) an early ancestor of it in the video below. But that's a matter of incremental improvement over the coming decades.

Old-fashioned biological dreams are still pretty impressive, though, from the standpoint of the brain generating and maintaining them improvisationally from a formula (vaguely echoing the forms of Plato?), rather than having to literally store and maintain a "whole" simulated world specifically for that "story", even when most of it would not be interacting with the roving POV (observer).

Generative adversarial network (GAN)
https://en.wikipedia.org/wiki/Generative_adversarial_network

GAN Theft Auto ..... LINK
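
For a concrete picture of the adversarial part, a minimal GAN-style training loop looks roughly like the sketch below (assuming PyTorch is available; the tiny networks and the 1-D Gaussian "world" are just illustrative stand-ins, nothing like the system in the video):

Code:
import torch
import torch.nn as nn

# Generator: noise in, fake "observation" out. Discriminator: the adversarial critic.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0          # the "real world": samples from N(4, 1.5)
    fake = G(torch.randn(64, 8))                   # the generator's current attempt

    # Critic learns to call real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to make the critic call its output real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())  # should drift toward 4 and 1.5

The point is only the structure: two systems improving by competing, which is the "reciprocal regulation" mentioned above, scaled down to a toy problem.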
 
I would suggest you watch the classic film Colossus: The Forbin Project, which gives a 1970s answer to just this question, albeit with a militaristic slant.
Great film.
 
Here's a thought: a VR headset that reads your thoughts and procedurally creates a virtual world in response.
It's AI, so you can train it through likes / dislikes of what you're seeing.
There may be a selection of preset world types (abstract, steampunk, medieval, contemporary urban, etc.), which really just give it some ground rules and starting ideas.
It doesn't create what you're thinking of, but somehow creates in response to what you're thinking.

Not sure if that would even work as an idea, though?
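
As a very rough sketch of just the like/dislike training part (Python; PRESETS, generate_scene, feedback and the whole weighting scheme are invented placeholders, and a real system would be vastly more involved):

Code:
import random

PRESETS = ["abstract", "steampunk", "medieval", "contemporary urban"]
style_weights = {p: 1.0 for p in PRESETS}    # the presets just seed the starting weights

def generate_scene():
    """Pick a world type in proportion to the current preference weights."""
    return random.choices(PRESETS, weights=[style_weights[p] for p in PRESETS])[0]

def feedback(scene, liked, rate=0.3):
    """Reinforce or dampen whichever preset produced the scene just shown."""
    style_weights[scene] *= (1 + rate) if liked else (1 - rate)

# Simulated session: this wearer happens to like steampunk scenes and nothing else.
for _ in range(200):
    scene = generate_scene()
    feedback(scene, liked=(scene == "steampunk"))

print(style_weights)   # steampunk's weight grows; the others shrink toward zero

The preset only seeds the weights; the wearer's reactions gradually steer what gets generated, rather than the wearer specifying it directly.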
 