Do you think that AI will ever feel emotions?

Discussion in 'Intelligence & Machines' started by wegs, Sep 10, 2019.

  1. Write4U Valued Senior Member

    Messages:
    20,069
Could the AI program be considered an artificial soul? By that I mean "character traits".
Can we build an AI that has a "good soul"?

I should like someone to ask her what she thinks when humans talk about her as if she weren't there.
How does she process the information she registers when she observes human interaction?
     
    Last edited: Feb 21, 2020
  3. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

While I agree that AI, in and of itself, will not have a sinister aspect, the programmer can put in sinister instructions

Agree: in and of itself, AI, no. Unless an extremely clever program gives it instructions for when certain criteria are met - dominate


     
  5. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

    Possible

I have had some dealings with mobile phone companies over the past few weeks

Sometimes I get the feeling I am chatting (like this) with an AI program. I get the feeling the answers are pre-programmed replies culled from previous questions asked and answered by humans
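That kind of canned-reply bot can be sketched in a few lines. This is a purely hypothetical illustration (the FAQ pairs and the word-overlap matching rule are invented here, not any phone company's actual system): it answers by reusing whichever stored reply belongs to the past question sharing the most words with the new one.

```python
# Toy retrieval-based support bot (hypothetical sketch): replies are
# "culled from previous questions and answers given by humans".

def tokenize(text):
    # Crude word split; a real system would normalize far more.
    return set(text.lower().split())

def best_reply(question, qa_pairs):
    # Score each stored question by word overlap, return its canned answer.
    q_words = tokenize(question)
    scored = [(len(q_words & tokenize(q)), a) for q, a in qa_pairs]
    return max(scored)[1]

# Canned question/answer pairs collected from earlier human exchanges.
FAQ = [
    ("how do I top up my credit", "You can top up in the app or at any kiosk."),
    ("why is my bill so high", "Extra charges usually come from roaming or data overage."),
    ("my phone has no signal", "Try restarting the phone or re-seating the SIM card."),
]

print(best_reply("There is no signal on my phone", FAQ))
```

No understanding is involved: the bot only measures surface similarity, which is exactly why such chats feel pre-programmed.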


     
  7. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

I think what you are defining as soul, “emotional or intellectual energy,” is a PROCESS

At death this process stops, so it is not immortal


     
  8. Write4U Valued Senior Member

Of course, but then it is the human programmer who is evil, not the AI.
Sure, that would not be difficult. Program in a few trigger words, just as humans respond to certain trigger words.
     
  9. Write4U Valued Senior Member

I agree; what we call a soul is a product of an individual's self-awareness, which stops at death.

But I am beginning to examine words that end in "...ness", which suggest a transcendent property. Of course, that would still stop when the person is dead and no longer projects an image of, say, self-awareness or happiness...


     
  10. river

How does something that has not evolved from millions of years of experience really understand much of anything,

other than what we program into it?

It can't.

AI cannot build itself without Life doing the building.

Think about this.
     
    Last edited: Apr 22, 2020
  11. Write4U Valued Senior Member

We are the life that's building it...! Just as bacteria are 90% of the life in the human biome, humans are the driving force in the evolution of AI. The method of evolution is of no consequence to the result. Some evolution is fast, some emerges slowly. In the end it's all a mathematical formation of increasing complexity, in all living things.

    Natural selection does not select for intelligence per se. It selects for that which can reproduce. And self-organizing complexity of self-referential patterns in AI is just a matter of time now.

    Don't forget it takes a human some 20 years to fully mature. Give a learning AI 20 years of access to knowledge in a cloud on the internet, able to quasi-intelligently control things. The thought alone makes me gasp.
     
    Last edited: Apr 22, 2020
  12. river

Mathematics in and of itself cannot produce anything physical without the physical pre-existing.
     
  13. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

Can the brain PROCESS of self-organizing complexity of self-referential patterns, running around in an electrical/biochemical continuous loop, embed (etch) a pattern on the network which translates to consciousness?

    Will tidy up thoughts on this with coffee


     
  14. Write4U Valued Senior Member

    We are the life-form that is the causal agent in the evolution of AI.
     
  15. river


Not a causal agent but THE agent of AI. Period.
     
  16. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

    Problem with such a thought is we will never know

    No dead theist comes back and says I told you so

No dead atheist comes back and says I told you so

    And, my take, we are not able to know when dead


     
  17. Write4U Valued Senior Member

You underestimate the sensory and reasoning abilities of AI. Deep Blue was a chess program able to calculate possible future moves many moves ahead. The necessity for controlled self-referential functions is very well established.

Sophia's algorithm seems well suited to processing information into a cohesive response. IMO, it is the biological material on which eukaryotic organisms' neural networks are founded that seems very responsive to electro-chemical stimulus, and it may well serve as the platform for an Orch OR or an IIT information-processing machine with an emergent self-awareness, i.e. "consciousness".

Let's pose the question of whether Orch OR and IIT processing patterns might actually work in AI...
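The "many moves ahead" idea can be illustrated with a toy minimax search. This is purely a sketch on a hand-built game tree, not Deep Blue's actual search (which used alpha-beta pruning and custom evaluation hardware): the program picks the move whose worst-case outcome, after the opponent's best reply, is highest.

```python
# Toy minimax lookahead: leaves hold scores from the maximizing
# player's point of view; inner nodes alternate between our choice
# (take the max) and the opponent's choice (take the min).

def minimax(node, maximizing):
    if isinstance(node, int):          # leaf: a position's score
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

# Two plies: our move, then the opponent's best (for them) reply.
tree = [
    [3, 12],   # move A: opponent replies leave us 3 or 12 -> they pick 3
    [8, 4],    # move B: opponent picks 4
    [2, 14],   # move C: opponent picks 2
]
print(minimax(tree, maximizing=True))  # move B is best: value 4
```

Deepening the tree is "calculating more moves ahead"; the combinatorial explosion is why real engines prune rather than examine every line.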


     
  18. river

I don't underestimate the Deep Blue chess program. But it is still an electronic program. Stop the programming, and you stop Deep Blue.
     
  19. Write4U Valued Senior Member

That's lazy thinking. The agent of a good AI is an open platform, able to make subtle, even abstract inferences from information perceived by its senses.

AI is already making physical macro copies of itself, an ability that not even a virus possesses.
The programmed behavior is the causal factor. In AI, the intellectual growth pattern is a digital OS and memory (HD), which allows it to learn and reliably respond to causal circumstances. In biological organisms, physical growth patterns are encoded in DNA, and the sensory information processes lie in the electro-chemical memory (microtubular pyramids).

Theoretically, there should be no non-trivial obstacle to either system developing (evolving) a continued increase in complexity, sophistication, and the accuracy of the measurements on which it performs calculations.

    Don't forget, the human brain can only make a "best guess", just like an AI makes a "best guess" of the sensory input.
     
  20. Write4U Valued Senior Member

Pulling the plug on an AI is no different from pulling the plug on a human.
     
  21. Michael 345 New year. PRESENT is 72 years oldl Valued Senior Member

    Oh there are soooooo many bad responses to that I can't use


     
  22. C C Consular Corps - "the backbone of diplomacy" Valued Senior Member

    "Machine learning involves computers discovering how they can perform tasks without being explicitly programmed to do so."

In the course of these processing devices exploring and developing their own sets of rules for accomplishing an assigned challenge, they output algorithms that the human researchers are often unable to analyze well enough to understand how or why they successfully work.

    Despite such arguably being explicit knowledge in terms of being stored and codified, such is also perversely akin to tacit knowledge via lacking sufficient, articulated explanation. (But that may be purely due to the time and difficulty required for studying the algorithms.)

In a sense, it's "a machine creating an understanding of how to achieve _X_" without understanding how that "understanding" does it, despite researchers knowing the language of instructions that the "understanding" is represented/expressed in. Perhaps this partially rubs against the territory of the symbol grounding problem.
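A minimal concrete example of "performing tasks without being explicitly programmed to do so": a perceptron is never told the OR rule; it adjusts its weights from labelled examples until its outputs match the data. This is only an illustrative sketch (far simpler than the systems the quote refers to), but the learned rule lives in numeric weights, not in any human-written instruction.

```python
# A perceptron learns the OR rule from examples; the rule itself is
# never written anywhere in the code, only the learning procedure is.

def predict(weights, bias, x):
    # Fire (output 1) when the weighted sum of inputs exceeds zero.
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

def train(examples, epochs=20, lr=0.1):
    # Classic perceptron update: nudge weights toward correct outputs.
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            error = target - predict(weights, bias, x)
            weights[0] += lr * error * x[0]
            weights[1] += lr * error * x[1]
            bias += lr * error
    return weights, bias

# Labelled OR data: the only "knowledge" the machine is given.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # outputs learned from data
```

Even here the "explanation" of the learned behavior is just three numbers; scale that up to millions of weights and you get the opacity the post describes.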
     
  23. wegs Matter and Pixie Dust Valued Senior Member

    I forgot that I posted this thread.
     
