AI is a ridiculous concept that many misinterpret.

Discussion in 'Intelligence & Machines' started by Bob-a-builder, Jun 15, 2019.

  1. Michael 345 New year. PRESENT is 72 years old Valued Senior Member

    Messages:
    13,077
    The something would be the compiler of the Rule Book

    Those scribbling down stuff to slip back under the door would be the equivalent of a ticker tape machine, able to produce an output without the slightest idea of its meaning

    Which would feed back to the compiler of the Rule Book


     
  3. Baldeee Valued Senior Member

    Messages:
    2,226
    Ah, thank you!
    Hadn’t come across it formally like that, just some example a friend gave me once, and I couldn’t remember all the detail.
    The key that turns the non-intelligent sub-parts into an appearance of intelligence would seem to be the ruleset.
    That ruleset was provided to those in the room.
    The room didn't learn it.
    It doesn’t understand anything and just follows a systematic process.
    But the people who put the ruleset together did understand (ultimately, in case someone wants to line up a column of turtles).
    They had the semantic understanding.

    And at what stage does “appearance of intelligence” actually equate to intelligence?
    Or is the requirement of a semantic component for intelligence putting an unnecessary constraint on what it means to be intelligent?

    As I think was asked for earlier, if the word “intelligence” is more adequately defined and agreed upon, a more precise answer could possibly be given.


     
  5. Write4U Valued Senior Member

    Messages:
    20,069
    First, I believe we should make a distinction between the three levels of consciousness. It would be negligent to overlook that detail when discussing "intelligence". It depends on what type of intelligent behavior you wish to emulate.

    It appears to me that the main difference between AI and OI (organic intelligence) is the biological aspect of living organisms: the entire biome is sentient (sensory-aware) because a biological organism is a compound living computer which actually experiences sensory input over its entire body.
    But not all sensory experience in humans is consciously experienced.


    Cytoskeleton
    Description
    A cytoskeleton is present in the cytoplasm of all cells, including bacteria, and archaea. It is a complex, dynamic network of interlinking protein filaments that extends from the cell nucleus to the cell membrane. The cytoskeletal systems of different organisms are composed of similar proteins. Wikipedia

    This allows for total sensory monitoring of our external envelope, as well as muscle and joint control.

    But there is a secondary secret intelligent network. A subconscious neural system which controls organ functions.

    An example is "interoception".
    Except this does not fall within the range of conscious processing of input. It is an autonomous regulatory system which operates independently of conscious thought. But it is what keeps us alive.
    It does not tell us how well we are doing; it only lets us know when something is wrong, like hunger, thirst, etc.

    Seems to me that some imitation of such sensory experiences can be employed in AI?

    Interestingly, anesthesia only affects one of the three levels of intelligence, the conscious level, but does not affect the other two, the ones that keep our bodies alive while our brain is unconscious.
     
    Last edited: Jun 26, 2019
  7. Michael 345 New year. PRESENT is 72 years old Valued Senior Member

    Messages:
    13,077
    Perhaps a temperature control

    CPU getting too hot - increase cooling fan speed or shut down the whole system
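The overheating reflex described above can be sketched in a few lines; `read_cpu_temp()` here is a hypothetical stand-in for a real sensor, and the thresholds are invented for illustration:

```python
def read_cpu_temp():
    # Hypothetical sensor read; a real system would query something
    # like /sys/class/thermal/thermal_zone0/temp on Linux.
    return 78.0

def regulate(temp_c, warn=75.0, critical=90.0):
    """Interoception-style monitor: report only when something is wrong."""
    if temp_c >= critical:
        return "shutdown"       # protect the whole system
    if temp_c >= warn:
        return "increase_fan"   # the machine equivalent of discomfort
    return "ok"                 # no news is good news
```

Like interoception, the monitor stays silent while things are fine and only signals distress.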


     
    Write4U likes this.
  8. Write4U Valued Senior Member

    Messages:
    20,069
    Can that pass for a sentient response, if not conscious?

    It just occurred to me that an AI does not need to feel "good". All it needs is the ability to tell when it feels bad (overheating, friction, a disconnected wire, too much moisture).

    Interoception, this form of unconscious monitoring of system health, is what keeps humans alive. Why should it not be sufficient to keep an AI running?

    Moreover, as humans make use of bacterial activity to function properly, why not have millions of little subcomputers running little maintenance programs, independent of conscious control and the main processing center? Octopi (nine brains) use this semi-independent processing ability to great advantage.
     
    Last edited: Jun 26, 2019
  9. billvon Valued Senior Member

    Messages:
    21,635
    I don't find that unusual. We have at least three - all separate, all evolved at different times, all performing very different functions.
     
  10. Write4U Valued Senior Member

    Messages:
    20,069
    Are we using this parsing of functions in AI yet?

    As far as I know, most AI use a single control center. If so, does that not place an unnecessary burden on the whole system?

    And if many small parts are used, would that not make a machine easier to service and replace failing parts?
     
    Last edited: Jun 26, 2019
  11. billvon Valued Senior Member

    Messages:
    21,635
    AIs are all over the place. Most deep learning systems use a multilevel network as the core of the inference engine, but there are a lot of things "hanging off" it - training algorithms, layer scramblers, minima exclusion systems (so the system doesn't get "stuck" in a local minimum), etc. So to use a brain analogy, they are a "big brain" with a lot of tiny brains hanging off it.
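One of those "tiny brains", a minima-exclusion helper, can be illustrated on a toy one-dimensional loss surface (the function and all the numbers below are invented for the sketch): plain gradient descent gets trapped in whichever basin it starts in, while a random-restart wrapper keeps the best result across basins.

```python
import random

def loss(x):
    # Toy surface: shallow local minimum near x = +1,
    # deeper global minimum near x = -1.
    return (x**2 - 1)**2 + 0.3 * x

def descend(x, lr=0.01, steps=500):
    # The "big brain": plain gradient descent with a numeric gradient.
    for _ in range(steps):
        eps = 1e-5
        grad = (loss(x + eps) - loss(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

def descend_with_restarts(n=10, seed=0):
    # The "tiny brain": restart from random points and keep the best,
    # so the optimizer isn't stuck with the first basin it falls into.
    rng = random.Random(seed)
    return min((descend(rng.uniform(-2, 2)) for _ in range(n)), key=loss)
```

Starting at x = 1.5, plain descent settles in the shallow right-hand basin; the restart wrapper finds the deeper left-hand one.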
     
    Write4U likes this.
  12. Write4U Valued Senior Member

    Messages:
    20,069
    The question then becomes whether such a hive-mind structure can spontaneously form an internal holographic image of its experiences.
    If not, what is it that sets the human brain apart from AI brains? Human brains work very much the same as those of all other animated life on earth. What is the common denominator that allows biological organisms to become consciously aware, rather than just chemically sensitive, as in the most primitive life-forms such as bacteria, or purely electromagnetically reactive, as in inanimate matter? Is this an evolutionary process? A matter of "refinement" in the brain's ability to measure, quantify, and qualify harvested external data?

    Due to its isolation from direct external stimulus and its reliance on translation of sensory input, the mind (brain) can only make a best guess (it cannot see) as to what is out there, comparing it to memory for experiential recognition and an imaginary representation of reality. Fine pattern recognition and memorization would seem to be very important assets!

    I also just ran across an interesting article about sound and how the brain measures and processes one of the most important natural phenomena, the wave function.
     
    Last edited: Jun 26, 2019
  13. billvon Valued Senior Member

    Messages:
    21,635
    Well, neural networks are inherently "holographic" - if you lose neurons they still work, just not as well.
    It's an emergent property of intelligence, as far as anyone can tell. (i.e. it's a side effect of several other aspects of our mind.)
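The graceful degradation described above can be toy-modeled as redundant, distributed storage (a deliberately crude stand-in for a real network, just to show the intuition): a value spread across many noisy units is read out as their average, so losing a chunk of units blurs the answer rather than erasing it.

```python
import random

def make_units(value=1.0, n=200, seed=1):
    # Store one value redundantly: each unit holds the value plus noise.
    rng = random.Random(seed)
    return [value + rng.gauss(0, 0.3) for _ in range(n)]

def recall(units):
    # Read-out is the average over all surviving units.
    return sum(units) / len(units)

units = make_units()
intact = recall(units)
lesioned = recall(units[:140])  # "lesion" 30% of the units
```

With 200 units the recalled value barely moves after the lesion; with only a handful of units it would not, which is why distributed representations degrade gracefully instead of failing outright.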
     
    Write4U likes this.
  14. Write4U Valued Senior Member

    Messages:
    20,069
    Fascinating......


    .....intelligence requires an ability to simultaneously receive inputs of various values, and process this input by what qualifies as a hive mind with sets of dedicated functions?

    Would conscious awareness not also emerge during this chronology?

    In a self-referential system, what affects one side of the equation automatically and equally affects the other side. Greater detail makes it necessary to evolve greater (sufficient) magnification and processing power.

    I find it interesting that the largest human chromosome is also the chromosome directly associated with the evolution of intelligence in humans. A cell growth template yielding greater "brain folding" (computing surface) leading to greater physical processing powers of greater detail ....and an ability to fashion an internal holographic representation of what our senses observe...


     
    Last edited: Jun 26, 2019
  15. Michael 345 New year. PRESENT is 72 years old Valued Senior Member

    Messages:
    13,077
    Don't think so, because sentience is considered an aspect of being alive and responsive

    Need another word for sentient if applied to AI???


     
  16. Write4U Valued Senior Member

    Messages:
    20,069
    Quasi-sentient.
    Quasi | Definition of Quasi at Dictionary.com

    https://www.dictionary.com/browse/quasi

    Just as AI has a quasi-intelligent processing center......


     
  17. Write4U Valued Senior Member

    Messages:
    20,069
    duplicate
     
  18. Michael 345 New year. PRESENT is 72 years old Valued Senior Member

    Messages:
    13,077
    Sure good - can go with that

    I did just have a brain fart. In order to fit in with the current political situation we could have FAKE Artificial Intelligence (or is that a double negative)

    And like ALTERNATIVE facts, we can have ALTERNATIVE (Artificial) Intelligence

    Does that make sense???

    Coffee moment


     
    Write4U likes this.
  19. Write4U Valued Senior Member

    Messages:
    20,069
    The difference between fake and quasi is that fake looks like the real thing but doesn't work, whereas quasi may not look like the thing but functions like it.....




    However, "alternative intelligence" is a very intriguing idea. Scientists say they are completely fascinated by the demonstrated intelligent behavior of a variety of very old species, such as octopi and cuttlefish. These creatures are capable of astoundingly intelligent behavior.

    Then there are the mathematical feats performed by slime mold, a single-celled organism with compound nuclei, which can design an efficient railway system as well as any competent civil engineer.

    Note: what you are looking at is a single cell and it is not some undefined blob of goo. This is a sophisticated organism so successful that it thrives all over the world and in different environments.



    Slime molds begin life as amoeba-like cells. These unicellular amoebae are commonly haploid, and feed on bacteria. These amoebae can mate if they encounter the correct mating type and form zygotes that then grow into plasmodia. These contain many nuclei without cell membranes between them, and can grow to be meters in size.

    The species Fuligo septica is often seen as a slimy yellow network in and on rotting logs. The amoebae and the plasmodia engulf microorganisms.[7] The plasmodium grows into an interconnected network of protoplasmic strands.

    https://en.wikipedia.org/wiki/Slime_mold

    All these expressions of alternative intelligence, producing quasi-intelligent behavior without a central processor or brain, would support the notion that mathematical functions are an inherent essence of spacetime structure and behavior, and that the evolution of various alternative sensory processing abilities produces its own form of natural mathematical quasi-intelligence (the way animate and dynamical things work).

    If that is the case, it should be possible to construct an efficient quasi-intelligent artificial (biological?) brain that can perform motivated analytical examination of input data.

    The one great difference is that the human body itself is a compound supercomputer.
    The cytoskeleton that provides structural stability to cell structures consists of microtubules, which also double as information processors and transmitters at the nano-scale.

    Unless we can construct a body which itself is a supercomputer, we're always going to be restricted from full "sentience".

    A human body has trillions of nano-scale computers which process dynamic electro/chemical information packets.



    THIS IS A COMPUTER!



    https://science.institut-curie.org/...clear-for-good-traffic-and-neuronal-survival/

    Until we can cram at least a decent percentage of neural networking into or attached to an AI, we'll be like kids building a lego set to simulate a brain.

    We don't need computing power, we need sensory abilities to produce a true AI, a quasi-intelligent self-motivating (curious) construct.
     
    Last edited: Jun 27, 2019
  20. Michael 345 New year. PRESENT is 72 years old Valued Senior Member

    Messages:
    13,077
    Sounds like we need

    SuperBillyGates

    Get Commissioner Gordon on the Signal Light and call our MR

    Our only hope he has kept up his ALIEN contact sim card in credit


     
  21. billvon Valued Senior Member

    Messages:
    21,635
    Most AIs today are neural networks.
     
  22. Bob-a-builder Registered Senior Member

    Messages:
    123
    @ Write4u

    I had pretty much given up on this thread when a member of sciforums staff with over 32 000 postings here stated,

    Given that the Google Home is not at all intelligent, how do you think it makes sense of your voice commands? Do you have any idea of how difficult a problem it is for a computer to make sense of your voice? That was a problem that computer scientists worked on cracking for over 50 years since the first digital computers were invented. Do you think that it's done through a series of if-then statements?

    It became obvious to me that some here are a tad slower than others. That statement does not speak well of the type of thinking prevalent here.


    I am aware of most of the arguments from other sources that computers have intelligence. I knew of the Chinese room thought experiment before it was discussed here, etc.

    Computers can only accept input, compare, and output. I have even tried to look at things from other points of view by suggesting that perhaps people can only do the same and thus also would not possess any actual intelligence. However, the evidence has been less than compelling. I myself suggested maybe our creativity is due to faulty (remembered) inputs, but that is just conjecture no matter how many Billvons try to make it seem like fact.

    These arguments have made me less confident in human intelligence rather than remotely making me think a computer can do anything except compare, to a degree.

    However,

    You made some good points and I would like to respond. You made some biological comparisons and seem to be suggesting computers may one day have biological aspects to them. That is something I contemplated many years ago.

    If you re-read the Opening Post of this thread I had stated ,



    That is from the Opening Post. It is something I considered.

    Many assume we can make "computer neural networks", personifying them as much as the words "artificial intelligence" do, but the truth is all any algorithm can do is manipulate and jog input with comparison after comparison.

    People can "ASSUME" we will one day understand all the processes of the brain and that it could be emulated in machines without biological processors, but that is not how science is supposed to work.

    I agree it would be much easier to grow an intelligence, but if we limit a consciousness to steered inputs only, that seems like cruelty and a form of slavery in itself. You would need to be generous with the dopamine.

    But as I said in my opening post. That would make it a life form and not a computer as typically defined.

    Steering algorithms and comparisons with any math, even random math, is still a program built upon comparison after comparison.

    Comparing is not thinking in any stretch of the imagination (although many here like to stretch).

    Kudos on the good postings and biological aspects. I had already mentioned them in the opening post though as being of a different nature than this discussion warrants. Perhaps that would be a good thread of its own. Biological computing.

    A computer cannot even choose a random number. Random numbers have been tricky to achieve in computing. They started by using outside inputs to seed random numbers, like looking at the seconds on a clock (simplified) to get a number from 1-60. Variations of that still exist, except the "clocks" (simplifying) are of created designs and do less time telling. Now there are chips designed for this purpose, but still based on similar thinking.
    For a more day-to-day example, the computer could rely on atmospheric noise or simply use the exact time you press keys on your keyboard as a source of unpredictable data, or entropy. For example, your computer might notice that you pressed a key at exactly 0.23423523 seconds after 2 p.m.. Grab enough of the specific times associated with these key presses and you’ll have a source of entropy you can use to generate a “true” random number.

    https://www.howtogeek.com/183051/htg-explains-how-computers-generate-random-numbers/
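The timing-as-entropy idea in the quoted article can be sketched as follows; the busy-loop jitter below is a stand-in for keypress timestamps, and SHA-256 "whitens" the raw timings into a uniform seed:

```python
import hashlib
import time

def collect_timings(n_events=8):
    # Stand-in for keypress timestamps: time small busy loops with the
    # high-resolution clock; scheduling jitter makes the durations hard
    # to predict. Real operating systems pool entropy from interrupts,
    # device timings, and dedicated hardware RNG chips.
    samples = []
    for _ in range(n_events):
        t0 = time.perf_counter_ns()
        for _ in range(1000):
            pass
        samples.append(time.perf_counter_ns() - t0)
    return samples

def seed_from(samples):
    # Hash the raw timings so that biased input still yields uniform bits.
    digest = hashlib.sha256(repr(samples).encode()).digest()
    return int.from_bytes(digest[:8], "big")

seed = seed_from(collect_timings())
```

The hashing step matters: raw timings cluster around typical values, but a cryptographic hash spreads whatever unpredictability they contain evenly across the output bits.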

    Cheers.
     
    Last edited: Jun 30, 2019
  23. Write4U Valued Senior Member

    Messages:
    20,069
    Question: Why would an intelligence want to generate a random number? I would rather see an ability for associative skills, which is the opposite of randomness: it tries to make sense of (create order from) randomness, rather than finding chaos in order.

    Do humans generate random numbers in their everyday lives? Humans are especially good at associative thinking. If "associative thinking" works for humans and to a degree in all "evolved" life, then it should work artificially, no?
    Forget the machine language. Concentrate on associative skills, flexible thinking is the key.
    IMO, a true AI should have the ability to assign and recognize "informal patterns" rather than specific data.

    How many versions of the letter R are there? In order to recognize a specific R a computer needs to be programmed with sets of "fonts" so that the computer can "recognize" a specific rendition of a letter. Humans don't need that. We can look at a thousand different renditions of the letter R and recognize it as an R. Can we build this ability into an AI?

    When we look at any version of a letter, we can generalize from certain key identifiers and extrapolate to any kind of rendition. IMO, this is the associative ability we should be looking at. It is what allows for abstract thinking and for making deductive sense of abstractions. This is how the human brain works, to a degree.
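The "key identifiers instead of exact fonts" idea can be toy-modeled as nearest-prototype matching on tiny bitmaps (a drastic simplification of both OCR and human vision, with the glyphs invented for the example): a distorted R still shares more cells with the stored R than with anything else.

```python
# 5x5 glyph prototypes; 'X' marks an inked cell.
PROTOTYPES = {
    "R": ["XXX..",
          "X..X.",
          "XXX..",
          "X.X..",
          "X..X."],
    "O": [".XX..",
          "X..X.",
          "X..X.",
          "X..X.",
          ".XX.."],
}

def similarity(a, b):
    # Count matching cells between two glyphs.
    return sum(ca == cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))

def classify(glyph):
    # Pick the prototype the glyph overlaps most -- not an exact match.
    return max(PROTOTYPES, key=lambda name: similarity(PROTOTYPES[name], glyph))

# A "different font": the R's leg is kicked out, yet it still reads as R.
variant_r = ["XXX..",
             "X..X.",
             "XXX..",
             "X.X..",
             "X...X"]
```

Adding more letters just means adding prototypes; the classifier never needs to see every possible rendition in advance.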

    Some interesting observations are made by Roger Antonsen in his lecture on the power of associative thinking. Instead of pure maths, which is easy, endow an AI with the ability to apply algebra-type generalizations to common patterns.



    I am not a programmer, so forgive any naive assumptions. Just voicing some "random" associative thoughts...
    a) an image: ... blue (water!)... spigot (adjustable valve!)... cup (half full!)... a source of accessible water for human consumption.

    b) humans get thirsty and use cups filled with water to drink.

    c) ask AI to get a cup of water. AI knows immediately what, where, and how.

    d) refine AI to extrapolate a cup of water from the previous sequence and then refine it to associate the entire sequence with getting "a cup of water" with just a verbal request for water.

    Is this how it works? Instill in AI an ability to "understand" (translate) human needs.
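Steps a) through d) above can be sketched as a hypothetical association table: the verbal request expands into the learned action sequence, and anything not yet associated yields no plan rather than a wrong one. (The table entries are invented for the example.)

```python
# Learned associations: a request keyword maps to an action sequence.
ASSOCIATIONS = {
    "water": ["find cup", "go to spigot", "open valve",
              "fill cup", "deliver cup"],
}

def fulfill(request):
    # Unknown requests produce an empty plan -- a prompt to learn more,
    # not a guess.
    return ASSOCIATIONS.get(request.lower(), [])
```

Step d), refining the sequence so a bare request for "water" triggers the whole chain, is exactly what the lookup does: the entire plan is associated with the single keyword.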
     
    Last edited: Jun 30, 2019
    Michael 345 likes this.

Share This Page