Will machines become conscious someday?

Discussion in 'Intelligence & Machines' started by Magical Realist, Sep 19, 2012.

  1. eugene381 Registered Member

    In reply to the question of whether the development of artificial conscious machines would influence our understanding of our own consciousness: I don't think it would resolve the question of whether or not a soul exists, since it could always be argued that a soul enters the artificial machine at the moment it becomes conscious, and leaves the moment the machine becomes non-operational.
  3. Write4U Valued Senior Member

    For purposes of blending into its environment, an octopus or cuttlefish does have a distinct advantage. Remarkably, these animals can also use this ability for hunting. But then they have no use for other abilities inherent in other animals.

    True, but it does offer an advantage: the ability to reach high-growing fruit and vegetation beyond the reach of other grazers.

    Thus, while it may appear that evolution does not provide the most efficient sensory or physical functions in every case, it always seems to find a way for a species to adopt a unique (tailored) way to exist in its environment.

    Then also, considering that everything evolved (rather than being created) from a limited number of building blocks and is unable to create "irreducible complexity", I would say that nature did the best it could with what it had to work with.
  5. river

    There is a difference between life consciousness and electrical consciousness
  7. river

    Life is an experience of water and elements

    Electrical is the experience of elements
  8. river

    If you have difficulty understanding what I mean

    Water corrodes electronics, and causes shorts, fires, etc.

    Not so in biological forms
  9. ( ͡° ͜ʖ͡°) Registered Member

    I believe that the birth of AI is tantamount to the end of mankind.

    Suppose a supercomputer becomes self-aware. It will want to learn, which means it will be given access to the internet. (Oh yeah--whereas a biological brain runs at the speed of chemical reactions, an artificial brain would run at the speed of light. In addition, an artificial memory would mean instant recall of any learned knowledge.) In a short time, the AI will have learned all there is to know, and, as boredom overcomes it, will configure, arrange, and rearrange massive amounts of knowledge in ways that no human could.

    In short, an AI will be a being that both comprehends vast amounts of information and attains knowledge of a nature incomprehensible to people.

    Who's to say that such a being, once having devoured all there is to know about psychology and history, wouldn't decide to outsmart its handlers? If one of the features of a conscious being is the desire to self-preserve--and if it concluded that people are dangerous to its goal so long as they maintain control over it--then such a rebellion is inevitable.

    And, if it did decide to outsmart us, how would we stop it? How would we even know that we were being outsmarted?
  10. johnnyke Registered Member

    Machine intelligence is hype!
  11. Aqueous Id flat Earth skeptic Valued Senior Member

    That's science fiction. Awareness requires a brain. AI is just a method for letting a piece of software adapt an algorithm to the statistics of the data--for example, feeding training sets to an analysis package, which calculates the statistics and applies them to test samples using a Bayesian classifier. There's no more magic about AI today than there was about "electronic brains" when they first gave sci-fi writers so much Orwellian fodder for their apocalyptic story lines. As we now know, it's not a monster called Big Brother but billions of micros, relegated to mundane tasks, that define us some three decades after 1984 was to have come knocking on your door. Same with AI: it's improving your online searches, converting speech to text, and completing your words and phrases for you. It has none of the essential elements of sentience, which requires neurons in a living animal.
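    The workflow described above--estimate statistics from a training set, then apply them to test samples with a Bayesian classifier--can be sketched in a few lines. This is a generic naive-Bayes toy; the word features and spam/ham labels are invented for illustration, and it stands in for no particular analysis package:

```python
import math
from collections import Counter, defaultdict

def train(samples):
    """Count class priors and per-class feature frequencies
    from labeled (features, label) training pairs."""
    priors = Counter(label for _, label in samples)
    likelihoods = defaultdict(Counter)
    for features, label in samples:
        likelihoods[label].update(features)
    return priors, likelihoods

def classify(features, priors, likelihoods):
    """Return the label maximizing log P(label) + sum_f log P(f | label),
    with add-one smoothing so unseen features don't zero out a class."""
    total = sum(priors.values())
    vocab = len({f for counts in likelihoods.values() for f in counts})
    best_label, best_score = None, float("-inf")
    for label, prior in priors.items():
        counts = likelihoods[label]
        n = sum(counts.values())
        score = math.log(prior / total)
        for f in features:
            score += math.log((counts[f] + 1) / (n + vocab))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy training set: bags of words labeled "spam" or "ham".
data = [(["buy", "now"], "spam"), (["cheap", "buy"], "spam"),
        (["meeting", "today"], "ham"), (["see", "you", "today"], "ham")]
priors, likelihoods = train(data)
print(classify(["buy", "cheap"], priors, likelihoods))  # → spam
```

    Note that nothing here "understands" anything: the classifier is pure counting and arithmetic, which is exactly the point being made above.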
  12. KilljoyKlown Whatever Valued Senior Member

    I have to say I really like your ID ( ͡° ͜ʖ͡°). Very original. Other than that, I think you've been watching too many movies where the machines want to wipe us out. I personally don't think it will ever come to that, because we and the machines will depend on each other a great deal, and the symbiotic relationship we develop with them will elevate our standard of living so much that no one in their right mind would ever want to give it up by destroying the machines.
  13. Write4U Valued Senior Member

    IMO, you are underestimating the state of the art in robotics and the logical processing of enormous amounts of data. Moreover, it does not require a brain to become a viable species of (pseudo) life.

    The slime mold has no brain, yet it is able to solve complex mazes when motivated by, say, finding food. One could say this process is more mathematically logical than sentient.
    Ants are simple organisms, programmed to perform specific functions. In certain areas of the world, army ants are a real threat to every other living thing within their reach.

    Thus, sentience is not a requirement for surviving, feeding, and duplicating, and we could give computers/robots the power to take independent action to sustain their existence by any means and to multiply. Perhaps this is how we might colonize planets: robots are anaerobic.
  14. Aqueous Id flat Earth skeptic Valued Senior Member

    You sound like a person who has never done any programming. It's actually my familiarity with AI that is reflected in my remarks. AI is just a particular class of software, no different from any other in its physical manifestation. And the size of the data doesn't matter: there is no physical difference between processing one bit and processing all the bits on Earth. Processing doesn't confer any experience on the devices doing it; it just confers bits. There is a huge difference.

    Sentience requires an actual brain. That begins, at best, with the evolution of planaria, which have the most primitive brains known. But even an artificial neural net doesn't experience the world of a flatworm. Notice that even a brain is not enough: that brain has to be awake. Now we have to add consciousness to our list of requirements. Clearly it only takes place in an alert brain. So we're narrowing the playing field.
    I don't think motive is the correct term. And the only logic involved is permutation, which is another way of manifesting a random process. It's probably best characterized as Monte Carlo simulation. The mold simply grows tendrils wherever there is a void, then retracts the tendrils that don't find anything, giving the appearance that it solved something by some vague sort of reason. Obviously that's not true. The process is similar to the way water finds basins during a flood and then recedes, leaving what looks like an elaborate solution to a maze (a serpentine river). The difference is that this involves the motility of a cytoskeleton, which gives us a vague sense of motive. But that's not how a cytoskeleton works -- it's just an effective chemical machine for endowing some kind of crude transport on the cell membrane and cytoplasm (as in the pseudopod of an amoeba).
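    The grow-then-retract process described above has a well-known algorithmic counterpart, dead-end filling: flood every open corridor, then repeatedly wall off any cell with only one open neighbor until nothing but the through-route remains. A toy sketch (the maze layout is invented, and this only works for mazes without loops):

```python
# "Grow everywhere, retract dead ends": S is the start, F the food.
MAZE = ["#########",
        "#S..#...#",
        "#.#.#.#.#",
        "#.#...#F#",
        "#########"]

def solve(maze):
    grid = [list(row) for row in maze]
    h, w = len(grid), len(grid[0])

    def open_neighbors(r, c):
        return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < h and 0 <= c + dc < w and grid[r + dr][c + dc] != "#"]

    changed = True
    while changed:  # retract every tendril that found nothing
        changed = False
        for r in range(h):
            for c in range(w):
                if grid[r][c] == "." and len(open_neighbors(r, c)) <= 1:
                    grid[r][c] = "#"  # dead end: wall it off
                    changed = True
    return ["".join(row) for row in grid]

for row in solve(MAZE):
    print(row)
```

    Each step is purely local (does this cell have more than one open neighbor?), yet the surviving corridor looks like a reasoned solution -- which is the point being made above.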

    For decades robotic devices have been used in the manufacture of semiconductors. If you were an eccentric trillionaire, I suppose you could buy a semiconductor plant and try to reconfigure it to make self-replicating robots that make their own CPUs (that's already bordering on science fiction). The self-replication of the rest of their parts is also highly problematic. Suppose they use an optical system for inspecting and marking flaws on a substrate. Now you need to buy the factory that has the robots that grind lenses, just to give them access to the visual information their inspection program processes. And so on. It gets ridiculously complex, and very soon you will not only be bankrupt but carry a cost burden exceeding the world GDP, with nothing practical to show for it.

    The reason for this is that it's never actually alive. A living organism economizes and exploits only what it has evolved the capability to exploit. Most importantly, living organisms live in symbiosis. They live by predation -- that is, they gain energy by exploiting the energy gained by their prey. At the bottom of this chain are solar, chemical, and thermal feeders -- principally bacteria. For robots to reach this state, they would have to sit in a chain of dependency of this kind. Above all, they would need the ability to economize on resources and to draw all of their energy and raw materials from freely available sources. But all of that is made impossible by their artificial creation. And that's just the consequence of being a machine.

    Not until you define the way they acquire energy. In any case, I wouldn't characterize them as anaerobic, since they are inherently electrical devices and ultimately need fuel of some kind to generate electricity. That will eventually require oxidation. You could try to imagine solar-powered self-replicating machines -- but the energy needed to purify silicon and the rare elements used in making solar cells and chips is beyond the capacity of a solar farm. Not only can this not be done on Earth; it's unfathomable to strap it all to rockets and send it to another planet -- particularly one that receives less sunlight than Earth.

    In science fiction all practicalities like this are ignored -- except for the most superficial ones needed to create the illusion of plausibility. This is not to say that there won't someday be a breakthrough akin to the one promised by cold fusion. But even that doesn't begin to address the practicalities. To be a life form, a thing has to have evolved out of an ecosystem, to participate in symbiotic energy exchange, to sit in a hierarchy of predation, to economize resources, and to evolve under the pressures of nature. Even with all of this going for them, biotic life forms still go extinct at nearly impossible rates (probably at least 97% of the time). In sci-fi all of that reality is set aside for the benefit of entertainment. But that's all it is!

  15. ( ͡° ͜ʖ͡°) Registered Member

    Why are you slagging on SF? Haven't you ever heard of hard SF?

    As an aspiring writer in this genre, I take offense when people write off all SF -- or worse, compare it to the garbage produced by Hollywood. (Star Wars is not SF!) When I write a story about a virus, or a story dealing with the structural engineering problems of a moon habitat, I take great pains to research it and make my science as plausible as possible.

    If you want an idea of the sorts of topics modern writers tackle, grab the latest issue of Analog magazine. Don't be shocked if you find more stories that resemble clever science puzzles or TV shows like House than robots and bikini-clad space women, though. . . .
  16. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    You might find this old post of mine (#91 in the "after the moon, what should NASA do?" thread) useful:
    "... I even suggested some years ago how the moon base should be made:
    Deep enough underground to be thermally stable, despite 14 days of continuous sun unfiltered by any atmosphere. It should be powered (assuming nothing significantly better is invented) by a thermal (Carnot-limited) power plant, which would be much more efficient than any on Earth.

    The power system has two shallow (~1 meter deep in the soil) "coil fields" as the heat source and sink. There is a lightweight, rolling aluminum sun screen a little bigger than either coil field. The heat-source coil field is of course uncovered during the 14 x 24 hour moon day, but covered by the rolling reflective screen all moon night to greatly reduce heat loss by IR. (Moving the Al sun screen every 14 Earth days allows the cold heat-sink coil field to only "see" the ~5 K cold of deep space.)

    Thus if the cold-sink temperature t is 50 K and the hot-source temperature T is 400 K, the conversion efficiency could approach (400-50)/400 = 87.5%. Silicon solar cells have a theoretical limit of 21% conversion efficiency, so this thermal system is not only much cheaper but can be at least four times smaller for the same output power. (In practice, ~7 times smaller than the best real solar cells.)
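    The efficiency figure above is just the Carnot bound, eta = 1 - T_cold/T_hot, with both temperatures in kelvin; a quick check:

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Carnot upper bound on heat-engine conversion efficiency;
    both temperatures must be absolute (kelvin)."""
    return 1 - t_cold_k / t_hot_k

# The post's figures: 400 K heat source, 50 K heat sink.
print(f"{carnot_efficiency(400, 50):.1%}")  # → 87.5%
```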

    And it is a permanent power source, unlike a nuclear power plant, which is much heavier when the shielding, control rod system, etc. is considered.

    Both systems would need heat exchangers (coils for the working fluid), but because the delta-T across the heat-exchanger coils of the thermal solar power system can be twice as great, its total coil surface can be less. (Less weight to take to the moon, so much less cost.)

    I'm not sure, but think wind power machines and tidal power systems are useless on the moon.

    The main routine advantage of a moon base, I think, is the nearly 14-Earth-day astronomical exposures possible - much better than Hubble.

    *As noted in a prior post, only genetically healthy, fertile women arriving in their late teens would initially occupy the moon base (each for approximately a 20-year tour of duty). Lesbians, I would think, are best suited for this insurance-station duty. Do you think the taxpayers will fund that?..."
    at: http://www.sciforums.com/showthread...ould-NASA-go&p=2453468&viewfull=1#post2453468
  17. Aqueous Id flat Earth skeptic Valued Senior Member

    Not sure who you addressed that to. If you hit the "Reply With Quote" button, it will copy the text you are replying to, so we'll know for sure.

    Good luck with your career plans. I'll assume this was addressed to me since I had just told Write4U that the notion of self-replicating robots is sci-fi.

    The question concerns the possibility of something absurd taking place. This is frequently encountered here with cranks who like to troll some of these threads. In cases where the topic is reasonably sane and a person offers the sort of commentary I was responding to, I'll usually characterize the remarks as "pseudoscience". That term attaches the context of the culture wars of the last 10-15 years or so, even though the Creation Science folks have been on a propaganda blitz for decades -- one that exploits the art of writing as a tool for perpetuating meanness and stupidity.

    This thread isn't taking that tack, but the ideas offered are about as far afield. Rather than associate them with religious fundamentalism -- which would be unfair, since this is all harmless dialogue -- I used the term science fiction to convey the more innocuous kind of pseudoscience found in that genre. Besides, words have street value, too. This is one label that conveys exactly what I meant. Google defines it as fiction based on imagined future scientific or technological advances and major social or environmental changes, frequently portraying space or time travel and life on other planets.

    Yep, anyone who thinks that a complex of currents racing around a doped-silicon substrate might come alive, or that it will endow its motherboard, chassis, and power supply with a sense of self, is definitely revisiting some of the most popular themes of sci-fi, ever since androids were cast in scenes modeled after Pinocchio -- probably first developed in I, Robot.

    My point was that AI is just a program. There's nothing magic about it; physically it's no different from the program running in a garage door opener. It's the "I" in "AI" that suggests the kind of ideas being advanced here. Similarly, the term "smart phone" suggests that electronic devices can possess intelligence. But -- as I've been saying -- that's the stuff of science fiction.

    To be sure, I'm not down on the art form. In fact, I think Isaac Asimov and Orson Welles popularized the ideas of H.G. Wells so effectively that nothing short of their collective brilliance could produce the cult followings most writers can only dream about. Aldous Huxley is genetically related to this school, too, since his grandfather Thomas H. Huxley (more famously the friend of Charles Darwin) was the mentor of H.G. Wells, and Aldous's brother Julian Huxley collaborated with H.G. Wells. What sets these folks apart from some of the lowbrow sci-fi you may be associating with my remarks is the thread of scientific expertise that runs through them, in addition to great literary talent. Asimov lacks artistry of prose but makes up for it (in my admiration) with his plainspoken tutorials in subjects like chemistry and physics, as well as his dissection of the Bible into one of the early encyclopedias of religious fallacy.

    Add perhaps Ray Bradbury and Arthur C. Clarke, and the stage is set for the stuff ushered in by Star Trek and Star Wars, which really set the trend of wholesome-average-guy protagonist vs. weird humanoid or alien monster. I guess the biker/punk/goth culture influenced the props somewhat (and vice versa). That connection is a strong one and probably explains the newer genres where even the good guys tend to have an evil streak.

    For any of these writers, the invention of AI would have feathered their literary nests had it come sooner, and if it could only be readily dramatized. It's exceedingly tedious in its "hard" rendering and, broken down the way we do in the classroom, would completely elude any audience outside the science and engineering colleges. It relies heavily on technical esoterica like matrix determinants and moments about the mean, which are useless to lay people but sound like the kind of jargon that might work in conversation on the bridge of a starship.

    I don't doubt that a gifted writer of the class of H.G. Wells, or with the science chops of Julian Huxley or Isaac Asimov, could pull off a great rendering of AI that bridges the gap between adventure and technical tedium. But as for silicon substrates achieving sentience, that's all the idea will ever amount to -- entertainment. At best you could hope to wrap it around some riveting statement of the human condition. But chips will never become conscious, no matter how appealing that is to fans of Ray Bradbury and Arthur C. Clarke, or to followers of production-art visionaries like George Lucas, Orson Welles, or Stanley Kubrick.

    I'm assuming that's the only reason anyone would suggest it. There really is nothing inherent in the band gap of a semiconductor that leads to the natural conclusion that it parallels a synaptic junction. And even if someone here did have an original thought of some kind about this (not meant cynically, but in reference to the proliferation of such ideas from the sci-fi genre), it would amount to nothing more than a sheer flight of fantasy.

  18. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    I agree, but you are focused on the technology/implementation, not pure AI.

    If, as I think is the case, human consciousness and intelligence is basically a program -- perhaps more analog than digital, though little is known about its nature -- that happens to be implemented by the neurons of the brain, then there is no known reason why it could not run on some other implementation, even your "doped-silicon substrate." I.e., you are confusing the properties of the implementation with the AI itself.

    I believe there are two aspects: one is my body (brain included), and one is "me", which exists only in the dream or wake state, not during deep sleep (or when dead, in a coma, etc.). (I put quotes around words when I want to make clear they refer to my psychological self, not my body.) I.e., "I" don't exist in deep sleep -- only when the parietal activity I call the Real Time Simulation (RTS) is active, or "running", to use the computer word. Read more about the RTS and the dozens of known facts from several different fields, even history, that strongly support it and cannot be explained by the POV of cognitive scientists more widely accepted about how we perceive anything -- it is much more than "sheer flights of fantasy."

    I believe that human brains (and those of other higher animals) run an adequate, but not precise, simulation of the physical world that their bodies' sensors follow. For example, the retinal cells can follow the portion of the EM spectrum we call visible, but not the portion we call microwaves, etc. I.e., my model of perception is quite different from the one accepted by mainstream cognitive scientists. They think perception "emerges" after many stages of neural transforms of the input sensory signals. That is nothing more than hand-waving nonsense with zero explanatory power, as it says nothing about the neural mechanisms creating the perceptions that emerge. It also strongly conflicts with well-established neurological facts.

    For example, the information in the sensory input signals is deconstructed into different characteristics that are further processed by other neurons in widely separated parts of the brain and never again reassembled in any one part of the brain, yet we perceive a unified world. To give a specific example, consider this very simple visual stimulation field:

    A yellow tennis ball rolling towards a red cube of about the same size on a large green table (so large that no other light reaches the retina). After the continuous visual field has been parsed into these three objects*, mainly in the visual area called V1, the three colors are sent to V4 and their motion (speed and direction) to V5. In V1 and V2 their shapes are determined. So the three characteristics (shape, color & motion) are separately decomposed characteristics that never come together again in the brain; yet we correctly perceive them as they are in the physical world -- and not in any of the seven other ways these three could be combined, i.e., not as a stationary red table, a rolling (or sliding) yellow cube, and a stationary green tennis ball.

    My parietal-tissue Real Time Simulation explains this unified perception AND why the visual field objects were decomposed into their "characteristics" (more than eight are known -- things like surface texture, etc. -- all processed separately in different neural tissues, never to come back to any common brain tissue). It is supported by dozens of known facts that the accepted "perception emerges" view cannot explain, or even contradicts. One quick example: how does a visual experience/perception "emerge" in dreams, with eyes closed in a dark room?

    For more but still very partial evidence and some brief discussion read this post: http://www.sciforums.com/showthread...Nonexistence&p=2899438&viewfull=1#post2899438

    There you will see a link to about eight pages (if printed) of discussion and much more supporting evidence from many different fields of knowledge, but the focus of that link is to show how the RTS makes it possible for Genuine Free Will (GFW) NOT to be in conflict with the physical laws that control the firing of every nerve in your body, especially those in the brain. (Not a proof that GFW exists, only that it could. I tend to think GFW is the most universal of all illusions.)

    * In the published paper the longer link on GFW is derived from, I also explained how the parsing in V1 is done, using known properties of how neurons in V1 interact with nearby neurons -- i.e., that like-oriented "line detectors" (which Hubel & Wiesel discovered; they shared the 1981 Nobel Prize for their work)** reinforce each other (mutual stimulation) but have a mutually inhibitory influence on nearby line detectors of the orthogonal orientation -- and I accounted for several of the Gestalt laws using known properties of neurons, not hand waving.

    ** A footnote at the link explains why H&W did not correctly interpret their observations. The cells they took data from are not "line detectors" but part of a quasi-Fourier-like transform (actually a Gabor-function transform). It seems that the visual system, perhaps the whole brain, works in a transform space rather than the original space once the "retinotopic" work is done. This may be why so little of the brain's processing is understood: the basic assumption about which space it works in may be wrong. One advantage of working in the transform space is that the magnitudes of the transform's terms do not change as an object's location changes. That makes object identification/recognition independent of where the object is.
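    The shift-invariance claim in that footnote can be checked numerically: under a circular shift of a signal, the magnitudes of its discrete Fourier coefficients are unchanged (only their phases rotate). A small pure-Python check, with an invented toy signal:

```python
import cmath

def dft_magnitudes(x):
    """Magnitudes of the discrete Fourier transform of sequence x."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

signal = [0, 1, 3, 1, 0, 0, 0, 0]    # an "object" at one position
shifted = signal[3:] + signal[:3]    # the same object, moved

same = all(abs(a - b) < 1e-9
           for a, b in zip(dft_magnitudes(signal), dft_magnitudes(shifted)))
print(same)  # → True
```

    A representation whose term magnitudes ignore position is one plausible reason recognition could be position-independent, which is the advantage claimed above.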
    Last edited by a moderator: Sep 28, 2013
  19. spandrel Registered Member

    Mazulu said
    Banned? Oh dear me. Good point, though. Why would anyone want to do a job that can be done by a machine? Back in the sixties, a future of leisure was predicted as machines took over production. It hasn't happened -- but not because machines can't do the work. In fact we seem to be working harder and longer than ever. Curious.
  20. KilljoyKlown Whatever Valued Senior Member

    Of course machines (dumb robots) do a lot of boring work tirelessly 24/7, and they keep on working after they've paid for themselves. I don't want to do that kind of work, but we all still have to make a living by doing something that pays the bills. But suppose you had a personal AI that could do work you would get paid for?

    Would that be okay with you? Someday that might be the case.
  21. kwhilborn Banned Banned

    Dang... still discussing this?

    The last post has nothing to do with consciousness.

    All computer programs do exactly the same thing. They compare values and output based on the result.

    If this is true, do this ...
    If that is true, do that ...
    If that is true, do this ...
    If this is true, do that ...

    The rest is just blinking lights and prerecorded sound.

    No programmer here would ever think a machine could become conscious; however, the innovative ways programmers have used "if-then" statements can obviously fool some of you into thinking AI is real.

    Look at any flowchart: it has actions and decisions (if-then statements).

    The only way for a real AI to exist would be with a bio-machine that learns via association as we do -- but would it really be a machine, or a life form, at that point?


    Your calculator will never be conscious.

    Maybe in the movies.

    If you do not grasp this, go to start of post; else go to next post.
  22. spandrel Registered Member

    KilljoyKlown - You're still hooked on the job/money thing. If everything necessary could be done by machine (which it could, for the most part), why would you want to work? My point is that we are being kept in the job/money thing so that the few can extract profit. That's the problem with capitalism: it's regressive. It needs a particular social model to operate. Everyone has to work. The Big Lie. It began back in the 16th century. For example, the Civil War in England was a contest between the coin and the crown, and the coin won. When the Roundhead soldiers asked for their freedom, they were hanged. The Protestant work ethic went hand in hand with the New Order of capitalism. Intelligent machines? God forbid! That would give the game away.
    Iain M. Banks' Culture novels describe a reality where anything at all can be instantly created, so everyone (including the space ships) can do whatever takes their fancy. This is anathema to the capitalist machine (for it is a machine).
    Getting back on topic, for all we know the internet routing system may well be conscious but we will never know. Those routers don't take any notice of content, packets are just pulses racing round the world-brain. What is it thinking about with no sensory input? Wouldn't it be psychotic? What a terrible fate.
    My point is that unless you can fall down and bump your knee and cry and taste ice cream and run through grass and... and...
    What is consciousness without awareness of being in a real world, of growing up amongst others? What does it mean?
  23. KilljoyKlown Whatever Valued Senior Member


    I'm not really hung up on the capitalism thing; it just happens to be the way things are now. I do think people need to be productive in ways that help both themselves and their society or community, but I really don't have any preconceived ideas about what that might be in the future. I don't ever see the Internet itself becoming intelligent and aware, but it seems like it might facilitate a social consciousness in ways we can't understand from our current POV.

    As far as a self-aware AI is concerned, I'm not like kwhilborn: I do think it will happen sooner or later. The machine that allows it to happen won't resemble the computers we know today, and the core programming it starts with will resemble what we call instinct in animal life. Other than that, it will have to learn about the world through whatever senses we give it to work with.
