Do you think that AI will ever feel emotions?

Discussion in 'Intelligence & Machines' started by wegs, Sep 10, 2019.

  1. river

    Messages:
    17,307
    Madness, because most people don't understand what you see. Understand what you understand.

    Sanity: being informed.
     
  3. Write4U Valued Senior Member

    Messages:
    20,072
    A new instruction series starting up
     
  5. Write4U Valued Senior Member

    Messages:
    20,072
    And an update on the state of the art.
     
  7. river

    Messages:
    17,307
    AI will never feel living, biological emotions, because AI is electronic.
     
  8. Write4U Valued Senior Member

    Messages:
    20,072
    But can it feel electrochemical emotions?
     
  9. river

    Messages:
    17,307


    Explain what you mean.
     
  10. Write4U Valued Senior Member

    Messages:
    20,072
    Human emotions are electrochemical experiences.
    Who is to say that electronics cannot create electronic experiences?
     
  11. river

    Messages:
    17,307
    Life is beyond the electrochemical. Deep in the Earth's crust, life exists.
     
  12. Write4U Valued Senior Member

    Messages:
    20,072
    Nonsense.
     
  13. river

    Messages:
    17,307
    [QUOTE="Write4U, post: 3680376, member: 261885"]Nonsense.[/QUOTE]

    What is nonsense ?
     
  14. Write4U Valued Senior Member

    Messages:
    20,072
    [QUOTE="Write4U, post: 3680376, member: 261885"]Nonsense.[/QUOTE]
    Electrochemistry does not exist in the Earth's crust? Where did you read that?
     
  15. river

    Messages:
    17,307
    Where did you read what you know? What books?
     
    Last edited: Jul 25, 2021
  16. Write4U Valued Senior Member

    Messages:
    20,072
    C'mon, you don't have access to the internet?

    Abstract
    ....more

    https://www.researchsquare.com/article/rs-106129/v1

    Who knows, maybe the earth itself is a living biome.
     
  17. James R Just this guy, you know? Staff Member

    Messages:
    39,421
    What's an electrochemical experience, and how does it differ from any other kind of experience? (Are there other kinds?)
     
  18. Write4U Valued Senior Member

    Messages:
    20,072
    I'm not sure. Some may argue that emotions are an emergent phenomenon, i.e. once removed from the actual electrochemical processes.

    Emotion as an emergent phenomenon of the neurocomputational energy regulation mechanism of a cognitive agent in a decision-making task.
    .....more
    https://journals.sagepub.com/doi/full/10.1177/1059712319880649#
     
  19. wegs Matter and Pixie Dust Valued Senior Member

    Messages:
    9,253
    The thing that would be concerning is this: if AI ''developed'' emotions, or a program was created for AI to seem like it feels emotions, would the tendency toward error increase? Human beings make a TON of errors (bad judgment calls), for example, because we sometimes act out of emotion. But emotions also motivate us to act. It could be a weird catch-22 if AI goes in this direction.
     
  20. C C Consular Corps - "the backbone of diplomacy" Valued Senior Member

    Messages:
    3,382
    In addition, the whole point of smart machines is that humans want a category of [faster processing] slaves that can't be undermined by moral concerns.

    So endowing AI with self-interest (a core part of the emotions package), and with the biased, motivated reasoning stemming from that, is detrimental to the above -- it creates an object receptive to consideration for rights.

    And the Jacobin-descended tradition would most certainly target sentient robots and tabletop AI for a new generation of its crusading, since by then it could be running out of conventional human population groups to paternalistically exploit. Although it's possible that the number of unemployed resulting from smart machines could rejuvenate its original interest in the non-aristocratic classes [proles] -- rather than, say, its contemporary passion for drag queens and ultra-fringe communities. (There's supposedly an Italian cartoon that sarcastically highlights that by borrowing the March of Progress evolutionary template.)
     
    Last edited: Aug 5, 2021
  21. wegs Matter and Pixie Dust Valued Senior Member

    Messages:
    9,253
    I think that was the case at first, but this near-obsession that many have with AI developing ''consciousness'' suggests that maybe we want to at least see what that might look like - but at best, wouldn't it just be ''artificial consciousness''?

    True, but maybe AI keeps its self-awareness ''capability'' to itself, as a means of self-preservation. lol

    Maybe it's not an either/or proposition, rather AI could merely develop (or be programmed to develop) varying layers of awareness, that would progress over time? The danger of it all comes down to speed - machines handle tasks faster than humans, so imagine how fast they'd ''evolve'' if they became aware of their actions? Sigh, sooo many lines of code, though.
     
  22. Write4U Valued Senior Member

    Messages:
    20,072
    AFAIK, GPT-3 does not use that much active raw-data coding, like a Turing machine. It has data translators that convert data into "tokens" (memories). When it receives a verbal command, it consults the tokens, and from the token base it "assembles" several scenarios in order of "priority".

    Just as in the human brain: as Anil Seth explains, when the brain receives external sensory data, it triggers a "recall" of similar stored data (human tokens) and forms an expectation in order of priority. We create a mental expectation of reality and compare it with incoming data. We create our reality as much from the inside out as from the outside in!

    And AFAIK, that is how GPT-3 works. It is a self-referential system that constantly monitors its position in relation to its external environment.

    It uses the internet to associate words with pictures. That is exactly how humans do a Google search: we type in a word and ask for a definition and an illustration, from which we form a mental token.

    Input: describe a cow by definition and with illustrations of different types (and related data) ---> enter

    GPT-3 will generate descriptions and illustrations:

    A brown Swiss Fleckvieh cow wearing a cowbell

    A Holstein cow with prominent udder and less muscle than is typical of beef breeds

    GPT-3 knows the meaning and context of every word (it has been trained to), and if it doesn't, it can learn it.

    So from the simple command to provide the definition and description of a cow, it can research the entire family Bovidae and everything that is known about cattle. Its strength lies in its ability to do autonomous research, just as humans do when googling a subject of which they have only superficial knowledge.

    It just occurred to me that GPT-3 is like a self-organizing puzzle assembler. It has, or has access to, all the individual pieces (tokens) on a subject, and all it needs to do is find the place for each specific token in order to assemble a complete definition and illustration of any object.
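    The "token base with priorities" picture above can be sketched in a few lines of Python. To be clear, this is only an illustrative toy (a bigram counter), not GPT-3's actual architecture; the corpus, the `tokenize` word-splitter, and the `rank_continuations` helper are all invented for the example.

```python
# Toy sketch of the token-priority idea: text becomes tokens, and candidate
# continuations are ranked by how often they followed the last token.
# NOT how GPT-3 works internally -- just an illustration of the analogy.
from collections import Counter, defaultdict

def tokenize(text):
    """Split raw text into simple word tokens (real models use subwords)."""
    return text.lower().split()

def train_bigrams(corpus):
    """Count which token tends to follow which -- a crude 'token base'."""
    follows = defaultdict(Counter)
    toks = tokenize(corpus)
    for a, b in zip(toks, toks[1:]):
        follows[a][b] += 1
    return follows

def rank_continuations(follows, prompt):
    """Return candidate next tokens in order of 'priority' (frequency)."""
    last = tokenize(prompt)[-1]
    return [tok for tok, _ in follows[last].most_common()]

corpus = "the cow eats grass . the cow gives milk . the dog eats meat ."
follows = train_bigrams(corpus)
print(rank_continuations(follows, "the"))  # → ['cow', 'dog']
```

    The point of the toy: given a prompt, the highest-priority continuation wins, which is the "assembles several scenarios in order of priority" step in miniature.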
     
    Last edited: Aug 5, 2021
    wegs likes this.
  23. river

    Messages:
    17,307
    To the OP: no.

    At least, no biological emotions.
     

Share This Page