If 'true randomness' is so elusive that it is impossible to prove it even exists, there is no point arguing whether it can be, or has been, implemented.
Maybe if you chew over these two Wiki articles, something 'eureka' will emerge:
https://en.wikipedia.org/wiki/Artificial_consciousness
https://en.wikipedia.org/wiki/Noise-based_logic
I have been deeply impressed with Laszlo Kish's work in general. A true genius.
Thanks for the links. I am particularly impressed with this quote from the Wiki Artificial_consciousness link:
Aleksander's impossible mind
Igor Aleksander, emeritus professor of Neural Systems Engineering at Imperial College, has extensively researched artificial neural networks and claims in his book Impossible Minds: My Neurons, My Consciousness that the principles for creating a conscious machine already exist but that it would take forty years to train such a machine to understand language. Whether this is true remains to be demonstrated and the basic principle stated in Impossible Minds—that the brain is a neural state machine—is open to doubt.
I have always maintained that an AI (or even Human Intelligence) cannot function on its operating system alone.
In order to be able to make associative decisions, a period of "learning" (acquiring knowledge) is an absolute requirement. Verbal language cognition seems largely to have been solved, but it must still be taught verbally by the user: voice recognition, accents, etc.
In humans the baby begins to learn its environment from the moment it is born and exposed to the world. I believe the current estimate for a human brain to learn basic survival skills is about 16 years, after which it is assumed that sufficient knowledge has been gained to make associative decisions. Of course, some people never learn from experience, because they have not paid attention to causality.
IOW, any "computational operating system" without knowledge will be unable to make associative cognitive decisions. A learning period is an absolute requirement for sentience to become functional.
In commercial computers, certain types of learning are easy: simply download pertinent knowledge to the HD memory partition if the information is already symbolized, such as numbers, equations, fonts, etc. We even have spell checkers, which will suggest several alternative words if the user has made a mistake in spelling, as the sketch below illustrates.
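To make that concrete, here is a minimal spell-check sketch in Python. It is only an illustration of that kind of symbolized lookup; the tiny word list and the edit-distance cutoff are my own placeholder assumptions, not how any actual product works:

```python
# Minimal spell-check sketch: suggest dictionary words closest to a typo.
# The tiny word list is a placeholder; a real checker would load a full
# dictionary from disk.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

WORDS = ["their", "there", "these", "thanks", "theme"]  # invented dictionary

def suggest(misspelled: str, max_dist: int = 2) -> list:
    """Return dictionary words within max_dist edits, closest first."""
    scored = sorted((edit_distance(misspelled, w), w) for w in WORDS)
    return [w for d, w in scored if d <= max_dist]

print(suggest("thansk"))  # ['thanks'] with this invented word list
```

Note that this is purely symbolic processing: the program has no idea what any of the words mean, which is rather the point of the paragraph above.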
But downloading emotions, such as a "reward" system which provides motivation, onto a computer would seem very difficult, or would require a long period of learning.
OTOH, humans are born with emotional experiences such as hunger, pain, and discomfort, but do not have knowledge of their causality, which must be learned "on the fly", so to speak. As soon as a baby experiences the emotion of hunger it begins to cry, and mama will feed it and satisfy the hunger. Lesson learned: when you are hungry, eat something. "Potty training" may take weeks even in a one-year-old child to grasp the (dis)association between the potty and the diaper (or the floor).
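Machine learning does have a loose analogue of such a built-in reward system: reinforcement learning, where a numeric reward signal shapes behavior over many trials. A toy sketch of that hungry-baby loop, in which the two states, two actions, and reward values are entirely my own invented assumptions:

```python
import random

# Toy reinforcement-learning sketch: a scalar 'reward' signal shapes
# behavior over repeated trials. The two-state world and the reward
# numbers below are invented purely for illustration.

STATES = ["hungry", "fed"]
ACTIONS = ["cry", "sleep"]

def step(state: str, action: str):
    """Hypothetical environment: crying while hungry gets you fed."""
    if state == "hungry" and action == "cry":
        return "fed", 1.0      # caregiver responds: positive reward
    if state == "hungry":
        return "hungry", -1.0  # still hungry: negative reward
    return "hungry", 0.0       # the fed baby eventually gets hungry again

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

state = "hungry"
for _ in range(1000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)            # explore
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
    nxt, reward = step(state, action)
    best_next = max(q[(nxt, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = nxt

print(max(ACTIONS, key=lambda a: q[("hungry", a)]))  # expected: 'cry'
```

Of course, a scalar reward is a very thin imitation of felt hunger or satisfaction; it motivates behavior without anything being experienced.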
Even dogs are able to learn, when they must "void", to warn their keeper that it is time to open the door and let the dog out into the yard, so that it can relieve itself of its discomfort. These emotional experiences are difficult to build into an AI.
I believe this is because they arise from chemical signals sent to the brain, rather than from electrical coding.
IMO, an AI does not experience these types of physical chemical discomfort, so it must be trained to recognize these and other human symptoms, such as bleeding, broken bones, head trauma, etc., "on the fly" and from experience.
To humans those phenomena are clearly symbolic of injury. To an AI they are meaningless; it is not subject to such emotional experiences as pain, satisfaction, sadness, or happiness, so it cannot relate to those phenomena (empathy).
So there is a certain dichotomy between teaching HI and teaching AI.
A human brain has the operational ability to experience emotion or to recognize someone else's discomfort, but must learn language, arithmetic, history, etc., which may take many years.
An AI brain can easily acquire some of those symbolized areas of knowledge by downloading to its HD all symbols that require only purely logical processing, but it must learn to recognize the more subtle symbolic emotional expressions, which may take many years of exposure to human interactions.
Once the AI learns that human "tears" can signify a range of emotions, it might be able to compare and associate that symbolic phenomenon with other environmental conditions and identify the cause of the tears.
That would be a representation of artificial empathy.
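A crude sketch of that kind of contextual association, simply counting which environmental conditions co-occur with the "tears" symbol and guessing the most frequent one as the cause. The observation data is entirely invented for illustration:

```python
from collections import Counter, defaultdict

# Contextual-association sketch: track which conditions co-occur with
# an observed symbol, then guess the most likely cause. The observation
# pairs below are invented examples.

observations = [
    ("tears", "chopping_onions"),
    ("tears", "funeral"),
    ("tears", "funeral"),
    ("tears", "wedding"),
    ("tears", "funeral"),
]

counts = defaultdict(Counter)
for symbol, context in observations:
    counts[symbol][context] += 1

def likely_cause(symbol: str) -> str:
    """Return the context most often seen alongside the symbol."""
    return counts[symbol].most_common(1)[0][0]

print(likely_cause("tears"))  # 'funeral' in this invented sample
```

Real empathy obviously involves far more than co-occurrence counting, but association of a symbol with its likely cause is the mechanical core of what the paragraph describes.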
In the movie I, Robot, this dichotomy is clearly shown. Anyone who has seen the movie will remember the "wink", the meaning of which the robot had just learned... ;)
Using that "smiley" just reminded me of our keen symbolic associative powers. This type of downloadable symbolism might even prove useful in an AI...
