What is consciousness? Can we create artificial consciousness?

I believe that consciousness and AI are not the same thing. What, then, is the difference?

Non-consciousness is the absence of everything. For instance, when a person is dead, there are no longer any experiences or manifestations of images, sounds, odors, tactile feelings, etc. Both the former personal thoughts and the sensory representations of the outer environment are gone after death.

So flip that around and you have the most primal attribute of consciousness: presentations of anything.

Additional features like identification and understanding of those introspective and extrospective sensations are dependent on having a memory system (among other items), and are thereby subsumed under the idea of an entity _X_ having some degree of intelligence.

Cognitive activities that occur in the dark (so to speak) -- IOW, have no experiences accompanying that information processing -- can certainly be replicated by the operations of machines.

But since there is no deep, sufficient "science" or agreed-upon theory explaining what the manifestations of consciousness arise from -- other than correlating them to applicable neural activity in a brain -- it is accordingly unknown whether artificial experiences are possible.

But they should be possible in theory, since it's an issue of discovering and replicating the equivalent procedures and functional structure in a device. An exception would be if the manifestations are dependent upon or inherent in certain kinds of electrochemical activity (of biology) that current technology might not be amenable to.

People who do not believe in dualism (i.e., who hold that a brain does not produce some utterly novel field, or whatever, that is undetectable to instruments) are often stuck with the pan-experiential or Russellian-monism type of view below. But even if all matter has primeval internal states that a brain organizes into the complex presentations of consciousness, still 99.9999... percent of the time (across the universe) there will be no cognitive system available to identify and understand those manifestations, or a capacity to validate that they are even present.

Lee Smolin: The problem of consciousness is an aspect of the question of what the world really is. We don't know what a rock really is, or an atom, or an electron. We can only observe how they interact with other things and thereby describe their relational properties. Perhaps everything has external and internal aspects. The external properties are those that science can capture and describe through interactions, in terms of relationships. The internal aspect is the intrinsic essence; it is the reality that is not expressible in the language of interactions and relations. Consciousness, whatever it is, is an aspect of the intrinsic essence of brains. --Time Reborn
 
This gets into David Chalmers and the "hard problem of consciousness." It seems very difficult to get at the intrinsic subjective aspects, or qualia, of a mind. Chalmers likes to use the thought experiment of the Philosophical Zombie, or P-Zed, to get at these difficulties. Thomas Nagel uses the conundrum of "what it's like to be a bat" in his seminal paper. As Smolin says in the snip #CC offered, these subjective "felt" aspects of a mind are not expressible in the language of interactions and relations. Some philosophers of mind, like John Searle, were (at least early in his career) dubbed "biochauvinists," because they argued that consciousness is something inherent in biological systems and the physical qualities that are unique to brains. Searle suggested AI could only simulate a consciousness but not be one. (I remember reading a paper of his long ago where he compared the problem to a computer that tries to perfectly simulate a torrential rainstorm: no matter how it performs the simulation, water will not gush out of the machinery -- a line of argument later shot down, deservedly I think.)

My take on this is that striving to develop AI that is conscious and possesses qualia is worth the trouble, because AI without consciousness is actually far more dangerous.
 
There can hardly be any doubt that current AIs are not conscious. They are simply predictive algorithms that use a very large dataset to "guess" what word comes next.
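
To make concrete what "guess what word comes next" means at its very crudest, here is a toy sketch in Python: a bigram counter that emits the most frequent follower of a word. Real large language models are vastly more sophisticated than this, and the tiny corpus and function names below are invented purely for illustration.

```python
# Toy illustration only: a bigram "next word" guesser built from raw counts.
# Real LLMs do NOT work this way internally, but the input/output contract
# ("given prior words, emit a likely next word") is the same in spirit.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, if any."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat' (ties broken by first occurrence)
print(predict_next("sat"))   # -> 'on'
```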

The key question, IMO, is: can an artificial system reach such a state in the future?

I think so. In principle, you should be able to literally duplicate the brain using circuits and components yet-to-be-invented that mimic neurons and synapses. If it models the brain in complexity and function, and has access to sufficient input, there's no reason why it wouldn't be just as conscious as a human.
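
For a rough sense of the simplest thing a "component that mimics neurons and synapses" might compute, here is a sketch of a leaky integrate-and-fire neuron in Python. It is a textbook toy model, not a claim about how such hardware would actually be built; the function name and all constants are arbitrary illustrative choices.

```python
# Minimal leaky integrate-and-fire neuron (a standard toy model, not a
# blueprint for real neuromorphic hardware). Constants are illustrative.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return spike times for a sequence of input-current samples."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # The membrane potential leaks toward rest while integrating input.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:              # threshold crossed: fire a spike
            spike_times.append(step * dt)
            v = v_reset                   # then reset the potential
    return spike_times

# A steady drive above threshold makes the toy neuron fire periodically.
print(simulate_lif([1.2] * 200))
```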
 
What exactly?

Given the success with starting your first thread here, you've already expanded beyond or outgrown the other one, anyway. (Which is to say, Pinball may have been referring to this thread.)

But if some unexpected problem pops up, the latter is still there to use for miscellaneous inquiries. And there's also the Site Feedback subforum.
 
There can hardly be any doubt that current AIs are not conscious. They are simply predictive algorithms that use a very large dataset to "guess" what word comes next.
I'm not so sure. Both things might be true at once.

Current AIs certainly come up with "what word comes next", but they do much better than we would expect from mere guesswork. It's not the case that the AIs contain some kind of large selection of chains of "words that come next", for instance. The fact is, nobody understands exactly how the "words that come next" end up encoded in the neural networks - just as nobody understands quite how they are encoded in human brains.

What test can we do to show that a current AI is conscious or not conscious? Do you have a foolproof test in mind? If so, what is it, and how do you know the result it gives is accurate?

I think so. In principle, you should be able to literally duplicate the brain using circuits and components yet-to-be-invented that mimic neurons and synapses.
Why do you think it is necessary to precisely mirror the particulars of the biological neurons and synapses? Are you sure that's necessary for consciousness? What makes you so sure?
 
What do you think: what is will? Is it compatible with determinism?
 
Try creating an AI that has no defined tasks. No meanings. Just feed it any information at all, everything indiscriminately. What do you think it will give you as output? Will that be a result one could call intelligent?
 
Any consciousness begins with self-awareness, with distinguishing oneself from the surrounding world. Does AI recognize itself, or is it just a particular algorithm of actions specified by someone?
 
And will it set meanings for itself? What main meaning, then, will an AI give itself?
 
What do you think: what is will? Is it compatible with determinism?

I personally believe that free will is compatible with determinism. But I don't define free will in a way where it is pre-arranged to conflict with itself or drop dead the moment it leaves the starting gate.

If free will is conceived as dependent upon randomness (as it often seems to have been in the past), then why in the world would anyone want to behave randomly, in contrast to adhering to reasoning and their personal preferences, with respect to their decision making?

For instance, an extremely mentally ill person is arguably an example of someone who is guided by randomness and massive unpredictability.

The emphasis should be placed on our capacity to reprogram ourselves if we believe we have free will (and if we need to do so because we have destructive or harmful habits), rather than on randomness.

Compatibilists point out that we actually want to adhere to our routines and personal inclinations if they are producing more or less desirable results (i.e., to hell with being unpredictable for the sake of being unpredictable).

But this should not mean that slight randomness is disruptive to free will. It's only the high levels that can invasively undermine the functioning of regulated decision making, which, again, the insane individual can figuratively exemplify.
 
What does free will consist of? Why do we make the decisions we do?
 
What does free will consist of?

A mundane definition is a starting point, as well as a potentially ensuing circus that might or might not illustrate a lack of consensus.

"The power of making free choices unconstrained by external agencies."

Why do we make the decisions we do?

The reason I would choose not to cross the path of a speeding truck is different from the reason I might choose to eat Neapolitan ice cream on a Saturday afternoon. IOW, there may be no universal reason that can address them all or avoid a staggeringly long list. (Even "survival" isn't applicable to the recreational acts.)
 
You did not cross the road in front of the truck because there is a meaning within you, which consists in the desire to survive. Where does this meaning in a living thing come from? Why not simply, indifferently and dispassionately, crumble into molecules, as a stone does, for example? If the instinct for self-preservation is a program created by evolution through random selection, then what does free will have to do with it? You are simply obeying the program.

Pleasures are interesting too: you want ice cream because you get pleasure from eating it. But try eating twenty at once, and the pleasure will turn into disgust, because that same instinct of self-preservation kicks in; otherwise you could simply make yourself ill from overeating. Are there any meanings in human existence that do not reduce to that main meaning, survival?
 
[...] You are simply obeying the program.

You are the "program" in a very general or liberal context, though -- the memories, preferences, and so forth are part of your identity. When referring to a "self" in some context beyond the brain/body, we are unintentionally (at least when there's no ulterior motive) smuggling in a supernatural tradition about a soul, or the contemporary version of our being in a simulated reality. And that's also setting aside how everyday language or conversation itself may have quirks about it that compel us to treat "self" as an abstraction distinct from the physical organism.

[...] Are there any meanings in human existence that do not reduce to that main meaning, survival?

Someone who chooses to commit suicide is going against survival. So is someone who chooses to sacrifice their life for another person, an idea or ideology, etc., in a situation where they know death will be the result.
 
People who commit suicide are considered mentally ill. It is a disease, a deviation, a malfunction in the program. That is the accepted view.
Self-sacrifice is a different program, one instilled by society and one's surroundings. Although religion places love of one's neighbor as the main meaning, above the primal meaning of survival. And what do you think: what makes a human being a human, and not just a higher primate?
 
People who commit suicide are considered mentally ill. It is a disease, a deviation, a malfunction in the program. That is the accepted view.

Most if not all of the LGBT+ population groups were once considered "mentally ill", too. Classifications are not discovered under a rock; they are invented. This is just a culture or society at particular points in time deciding (sometimes erroneously) what is best for itself.

In other words, the evolutionary process isn't dispensing judgements or following a set agenda or program or goal. What does not immediately have detrimental consequences to a species in its contingent environmental circumstances gets passed on (for better or worse down the road).

Self-sacrifice is a different program, one instilled by society and one's surroundings. Although religion places love of one's neighbor as the main meaning, above the primal meaning of survival. And what do you think: what makes a human being a human, and not just a higher primate?

I'm often apathetic about such questions, or just go with whatever "official" qualifications a nearby community or discipline outputs for that -- barring when civil rights are applied to plants, rocks, etc. Though I'm open to listening to any pragmatic justifications that can be submitted for what initially seems ridiculous.

Again, the non-intelligent natural world beyond the human sphere lacks such cognitive distinctions. Humans have no privileged position in its thoughts, because it has no thoughts.
 