Transhumanism.

Discussion in 'Intelligence & Machines' started by Jaster Mereel, Apr 25, 2006.

Thread Status:
Not open for further replies.
  1. Jaster Mereel Hostis Humani Generis Registered Senior Member

    Messages:
    649
    You didn't address the point I made about motivations for changing oneself. Look, you can never be "outside" of the evolutionary process. Yes, a species could potentially decide what it evolves into, but it is still acting within the framework of natural evolution, because the desire to change in response to the environment is itself a part of natural evolution. The only difference is that the change could be conscious instead of undirected. That doesn't mean the species is somehow outside of natural evolution, only that it is conscious of the process, which we already are.
     
  2. Cris In search of Immortality Valued Senior Member

    Messages:
    9,199
    Jaster,

    The difference is between an undirected process and a directed process. While we may not be able to foresee a final result (and a final result may never be achieved), we can at least direct individual steps. E.g. disease is a natural process which, through science, we are gradually eliminating. There is a strong desire for increased intelligence, and through genetic engineering or other means we will strive to make that happen, etc. While chance will still play a role, most advances will now more likely occur because we have planned them.

    It is still an evolutionary process, but one significantly different from that which has preceded us. I don’t think there was any implication that we are outside of evolution, whatever that means; everything evolves.

    I think we are all saying the same thing here.
     
  3. Cris In search of Immortality Valued Senior Member

    Messages:
    9,199
    Jaster,

    That’s a very narrow view of evolutionary processes. E.g. there should be no doubt that the computer has evolved and is evolving at an accelerating, non-linear rate. The environment and the computer’s own ability to adapt seem like quite irrelevant considerations in that process. Why has this occurred? Because a major component influencing its evolutionary history is human intelligence.

    That is a flawed perception. It is you who are now thinking that humans are outside of the evolutionary processes. We are not; we are one part of the process, and our intelligence is just one component. Our future will not be undirected, which means changes will inevitably occur at a vastly more rapid pace than they have for billions of years. Essentially, our intelligence forms part of a feedback loop into the evolutionary process. The singularity is inevitable because of that very feedback. One could say the singularity began the moment self-awareness arrived.
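
    A minimal sketch of this feedback-loop idea, assuming (purely for illustration, not anything specified in the thread) that the rate of improvement scales with the square of current capability. Under that assumption growth is hyperbolic and hits a vertical asymptote in finite time, which is one simple way to model a "singularity"; plain exponential growth, by contrast, never does.

    # Feedback-loop toy model: capability improves at a rate that depends
    # on capability itself. dx/dt = x gives ordinary exponential growth;
    # dx/dt = x**2 (the assumed feedback-on-feedback case) blows up in
    # finite time. Forward Euler integration is crude but sufficient here.

    def simulate(rate, x0=1.0, dt=1e-4, t_max=2.0):
        """Integrate dx/dt = rate(x); stop when x exceeds a huge cutoff."""
        x, t = x0, 0.0
        while t < t_max and x < 1e9:
            x += rate(x) * dt
            t += dt
        return t, x

    t_exp, x_exp = simulate(lambda x: x)      # exponential growth
    t_hyp, x_hyp = simulate(lambda x: x * x)  # hyperbolic growth

    print(f"exponential: t = {t_exp:.2f}, x = {x_exp:.1f} (still modest)")
    print(f"hyperbolic:  t = {t_hyp:.2f}, x past 1e9 (analytic blow-up at t = 1)")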
     
  4. Jaster Mereel Hostis Humani Generis Registered Senior Member

    Messages:
    649
    Alright. No arguments with this one. Perfectly reasonable. Medical technology is increasing life spans and making disease less prevalent in those areas of the world that can afford it. We have noticed, though, that the diseases we are fighting, unless we eliminate them outright, begin to adapt to our methods. The various strains of the flu are just such an example. So we are not on the fast track to eliminating disease altogether; rather, we are making it less likely that such diseases will affect large segments of the population. This holds, of course, unless there is a major screw-up in public policy which allows a dangerous virus, for instance, to slip into a large population that has no protection from it. Just thought I'd add a bit more to what you said.
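
    A toy simulation of that adaptation effect, with every number invented for illustration (this is a sketch of selection pressure, not epidemiology): a treatment that preferentially kills susceptible pathogens leaves the resistant ones to reproduce, so mean resistance climbs generation by generation.

    import random

    random.seed(1)
    # initial population: 500 pathogens with mostly low resistance (0 to 0.1)
    pop = [random.random() * 0.1 for _ in range(500)]

    for gen in range(1, 21):
        # treatment: each pathogen survives with probability tied to its resistance
        survivors = [r for r in pop if random.random() < r + 0.05]
        if not survivors:
            print(f"gen {gen:2d}: population eradicated")
            break
        # survivors repopulate, with small random mutations to resistance
        pop = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
               for _ in range(500)]
        if gen % 5 == 0:
            print(f"gen {gen:2d}: mean resistance = {sum(pop) / len(pop):.2f}")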

    This goes back to the argument we had before: What is intelligence? How do you quantify it? In other words, is there some measurable, consistent manner in which intelligence can be discerned, apart from the subjective judgments of a person observing the individual? No, there is no such system that works to the satisfaction of the vast majority of psychologists. Therefore, I say that it is unreasonable for you to make claims about having "greater intelligence" in the future, since you can't define intelligence.

    I've already tried to define intelligence for you, but you ignored it, probably because you feel it precludes the kinds of changes that you want. Namely: intelligence is behavior that ensures the individual's goal is achieved in whatever circumstance or environment the individual is in. Intelligence is therefore circumstantial and not quantifiable, so no accurate measure of intelligence can be made, and the kind of "increases" you are talking about are not possible.

    You are not talking about the evolution of a living organism. They are two completely separate processes, mostly because a computer is, first and foremost, a tool. It doesn't exist on its own in the world outside of our heads until we build it, and so how it develops depends on us.
     
  5. eburacum45 Valued Senior Member

    Messages:
    1,297
    If we develop a self-aware computer, it will have its own goals, and how it develops will depend upon itself.

    "Aha!" you might say. "Developing a self-aware, self-modifying computer might be unwise; we won't do it." If you were to say that, you would be ignoring history yourself. The transhuman movement exists; as I have said, I am not a transhumanist myself, but as a potential historical phenomenon of the future the movement cannot be ignored. Since the transhumanist movement is obsessed with building self-aware and/or human-like computers, sooner or later it will do it.
    Good or bad idea? Only history will decide.
     
  6. Jaster Mereel Hostis Humani Generis Registered Senior Member

    Messages:
    649
    I don't quite agree. Why would a self-aware computer's goals differ from those of any living organism, namely survival and propagation of its kind? Perhaps it wouldn't be interested in the second part... or even the first. Who knows? What exactly qualifies as "self-aware", anyhow? Is that even possible? We don't have a clue. Believing it's possible requires you to assume so many things about computing, intelligence, consciousness, etc. We have to establish all of these things if we want to start discussing what a "self-aware" computer might be like. To be quite frank, nobody knows.

    That's not what my point is. I am taking issue with the very feasibility of creating something like that in the first place. It's not as widely accepted a "fact" as some might believe (among cognitive scientists, that is). The problem with the idea is that we don't even know if it's possible, and in order to talk about what something like that would be like, we have to figure out whether it can even happen.
     
  7. Cris In search of Immortality Valued Senior Member

    Messages:
    9,199
    Jaster,

    Then use the term with less precision and a wider scope: the ability to think faster, to react faster to new conditions, to remember more, to absorb information faster, to reach logical conclusions faster, etc. Unless you are some very exceptional person, I assume you have sometimes wished you had the thinking abilities of, say, Einstein or a chess champion, or of those with any number of clearly identifiable superior mental abilities.

    No, not at all; you simply don’t understand my perspective yet. You are requesting a degree of measurement that I think is irrelevant to this discussion. Gross subjective perceptions are more than adequate for this.

    What’s a living organism? Define ‘alive’.

    And flowers can’t exist without insects, and vice versa. The whole planet is a mesh of interdependencies. The computer and human relationship is no different. The common attribute is that all are subject to change via evolutionary processes. Stop thinking that being human or biological is somehow special; we are all part of the same mix.
     
  8. nicholas1M7 Banned

    Messages:
    1,417
    One of the first mistakes people make when hypothesizing about self-aware computers is to assume that intelligence implies ego/instinct. By no means am I drawing conclusions, but entertain for a moment that humans were to develop a machine capable of learning, i.e. of adding new "software" to itself in order to carry out original operations. It would be a great leap, if not impossible, to endow such a machine with an ego/instinct component. Psychometrics suggests that intelligence is a trait that functions in isolation from the others.

    Currently, the failure of mathematical investors to predict stock prices could be due in part to behavior driven by ego/instinct. Without a psychology, intelligent machines should derive no wants; hence they should not be interested in serving themselves beyond their needs. Such a machine will not identify itself as different from humans and therefore will not exercise any power over us.
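
    One way to make the "intelligence without ego/instinct" idea concrete (a hypothetical sketch; the grid and the task are invented here): a solver that plans competently toward a goal while its objective contains no term about the machine itself, its survival, or its wants.

    # A goal-directed solver with no "self" anywhere in its objective:
    # it plans a shortest path to a target and nothing else.
    from collections import deque

    GRID = ["S..#.",
            ".#.#.",
            ".#...",
            "...#G"]

    def shortest_path(grid):
        """Breadth-first search from 'S' to 'G'; '#' cells are walls."""
        rows, cols = len(grid), len(grid[0])
        start = next((r, c) for r in range(rows) for c in range(cols)
                     if grid[r][c] == "S")
        queue = deque([(start, [start])])
        seen = {start}
        while queue:
            (r, c), path = queue.popleft()
            if grid[r][c] == "G":
                return path
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] != "#" and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append(((nr, nc), path + [(nr, nc)]))
        return None  # no route to the goal

    print(shortest_path(GRID))  # competence without ego: just the task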
     
  9. eburacum45 Valued Senior Member

    Messages:
    1,297
    Absolutely. We don't know whether it will be possible or not. But there are two ways to approach this problem: we can investigate whether self-aware, self-determining (and self-designing) AI is possible, or we can consider what the consequences would be if it were indeed possible.

    At the current state of the art it isn't possible to state definitely whether self-awareness will be possible in AI; as you have pointed out, Penrose has some doubts based on the quantum characteristics of neurons and brain structure in living tissue. On the other hand, it may well require quantum-level effects to achieve self-awareness in an artificial entity; it might even be necessary to use some sort of biological system or biological-analog device. So it can't be ruled out yet.

    If we consider what the effects of the development of self-aware and self-modifying AI would be, then we begin to enter the realm of science fiction. I am entirely happy to discuss the political, historical, technological and sociological effects of strong AI on any level; it would even be instructive to discuss the implications of a failure to achieve strong AI. But it isn't possible to dismiss the possibility yet.
    Strong AI and self-directed evolution break no thermodynamic laws, but could lead to some very unusual consequences. In fact, as you have rightly pointed out, self-directed evolution will proceed in a very similar fashion to natural evolution (some would say that it is natural evolution); but it is likely to proceed much faster in general.
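
    A sketch of that speed difference on a deliberately artificial problem (the bit-string "genome" and both procedures are illustrative assumptions): undirected change, i.e. random mutation filtered by selection, and directed change, where the designer can see which part is wrong, both reach the same target, but the directed process takes far fewer steps.

    import random

    random.seed(0)
    N = 64
    target = [random.randint(0, 1) for _ in range(N)]

    def fitness(genome):
        return sum(a == b for a, b in zip(genome, target))

    def undirected(genome):
        """Blind mutation plus selection: flip a random bit, keep improvements."""
        steps = 0
        while fitness(genome) < N:
            i = random.randrange(N)
            trial = genome[:]
            trial[i] ^= 1
            if fitness(trial) > fitness(genome):
                genome = trial
            steps += 1
        return steps

    def directed(genome):
        """Directed change: fix a bit that is visibly wrong at each step."""
        steps = 0
        while fitness(genome) < N:
            i = next(j for j in range(N) if genome[j] != target[j])
            genome[i] ^= 1
            steps += 1
        return steps

    print("undirected steps:", undirected([0] * N))
    print("directed steps:  ", directed([0] * N))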
     
  10. eburacum45 Valued Senior Member

    Messages:
    1,297
    A very interesting prospect. Such an ego-free thinking machine would be very different to a human; in some ways it might resemble a very intelligent reptile with no higher consciousness. Reptiles have very undeveloped cortical structures and probably have little sense of ego, but they are perfectly capable of efficient behaviour in response to their environment.
    Yes; a crocodilian AI with vast competence would be possible, in my opinion; such machines may even become widespread or ubiquitous.
    But that wouldn't satisfy the transhumanist, I suspect. Developing a 'human-like' AI is a definite goal of that movement; there are others (such as Freeman Dyson) who expect that the eventual fate of intelligent civilisations in our universe is to become computational entities, existing for inconceivably long periods of time as programs in a processing substrate of some sort.
    So even if competent, ego-free thinking machines are available, there will still be a desire in some quarters to produce more human-like, or even transhuman, machines.
     
  11. JoojooSpaceape Burn in hell Hippies Registered Senior Member

    Messages:
    498
    After taking a series of anatomy courses, you come to understand the human body as a type of very advanced machine. Muscles move by chemical transfer; chemical transfer is fueled by nutrient intake; nutrient intake is made possible by advanced digestion and chemical breakdown, which is fueled by the many natural resources around us (plants, and even other animals). In short, we gain the mechanical/chemical energy needed to move our muscles so we can eat more, to fuel our muscles further. I highly suggest reading about muscles and how they work and are fueled, as I have forgotten many of the names of the chemicals and the exact chemical reactions that allow muscles to move. All in all it is quite interesting to read, and you can more or less see how the human body is similar to a perfect machine, between cell division, regeneration, and all of that. We are, effectively, self-repairing, self-sustaining, reproducing machinery. Scientists and engineers are more or less trying to reproduce the abilities of a human with alloys and programming that would follow and recreate said abilities. I'm certainly interested to see where the world of mechanics goes.

    As for the idea that a computer could find software: well, yes, it can happen. Already we have search engines and websites that we can set up to find similar articles, or even personalize to bring things to us so we don't have to search for them when we wake up in the morning. The real challenge is to create a machine that can interpret these things.
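
    A minimal sketch of how that "find similar articles" trick can work, using bag-of-words cosine similarity (the sample texts here are invented, and real engines are far more elaborate than this):

    import math
    from collections import Counter

    def vectorize(text):
        """Represent a document as a bag-of-words count vector."""
        return Counter(text.lower().split())

    def cosine(a, b):
        """Cosine similarity between two count vectors."""
        dot = sum(a[w] * b[w] for w in a if w in b)
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    articles = {
        "muscle chemistry": "muscles move using chemical energy from nutrient intake",
        "ai rights": "should a self aware machine have the rights of a living being",
        "robot motion": "robots move using motors powered by electrical energy",
    }

    query = vectorize("how chemical energy makes muscles move")
    ranked = sorted(articles, key=lambda t: cosine(query, vectorize(articles[t])),
                    reverse=True)
    print(ranked)  # most similar article first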
     
  12. Jaster Mereel Hostis Humani Generis Registered Senior Member

    Messages:
    649
    You are still assuming that the difference between these people and everyone else is innate, which is why I have asked you to quantify intelligence. It could very well be (and seems to be so, from what we know of the development of children early in life) that the differences are not inherent but learned. They are taught to think like that, either intentionally or as a product of necessity.

    As I said above, those gross differences that people commonly see are not necessarily innate, but could be learned (as many developmental psychologists seem to believe). If this is the case, then the very idea of "greater intelligence" as a result of design comes into question. You could argue that this means nothing, as you may be able to design something that is more capable of learning. The problem with this is that no one knows how the process of learning works in the brain, so assuming that you could design something that does it better than the brain is quite a large leap of faith.

    Now who's dabbling in semantics? You know exactly what I mean by "alive": chemically based systems that are commonly held to be alive, most notably the things which biologists study. If you are suggesting that we expand the definition of "alive" to include non-chemical systems which are usually termed "artificial", then, surprisingly enough, I am all for it. I personally consider anything which adapts to environmental systems independent of conscious external direction to be "alive".

    This is ridiculous. A computer is not a living thing, it is a tool. The relationship between computers and humans is not symbiotic. The relationship between humans and computers is that the human builds the computer to perform a specific function, and then the computer carries out that function. Computers do not have instincts, they do not have goals or desires. Computers are machines, pure and simple.
     
  13. eburacum45 Valued Senior Member

    Messages:
    1,297
    That is true today, but it may not be true at some point in the future. The human brain proves that thinking as a phenomenon is physically possible; there is not yet enough reason to suspect that it is impossible to replicate that phenomenon in an artificial device. Even if Penrose is right that quantum effects in living tissue are the seat of consciousness, that will just mean it is more difficult to make a thinking machine, not necessarily impossible.
     
  14. Jaster Mereel Hostis Humani Generis Registered Senior Member

    Messages:
    649
    I understand where you're coming from, but I don't think you quite understand what I am getting at. A tool cannot be considered living, and as soon as it takes on the properties of a living thing (the properties usually defined by biologists), it is no longer a tool; it is life. That is, as soon as computers start thinking on their own and taking care of their own survival (either as symbionts of humans or entirely on their own), they are no longer computers but living, non-chemical, artificially constructed animals. In other words, people who talk about strong AI are not talking about artificial intelligence at all, but about real intelligence. Then the debate ceases to be about machines and becomes a debate over a form of life that we have never seen before.
     
  15. eburacum45 Valued Senior Member

    Messages:
    1,297
    I agree entirely. If we create a thinking device, or allow one to emerge, that device will be a living being with the same sorts of rights that other such beings have.
     
  16. q0101 Registered Senior Member

    Messages:
    388
    I think our perception of sentience and self-awareness is an illusion. I don’t believe there is any real difference between an inanimate object and a human being. Everything is programmed to do something. If you took a human being apart atom by atom, we would be no different from the inanimate objects around us. Existence is information and perception. It is the sum of our parts working together at once that gives us our perception of reality. What we perceive as life is nothing more than chemical reactions. Perhaps there is something going on in our brains at a quantum level. We may be able to duplicate this process one day with quantum computing and advanced versions of the technology in the links below.

    http://www.cnn.com/2006/TECH/science/06/02/brainchips/index.html

    http://www.livescience.com/humanbiology/060327_neuro_chips.html

    http://www.neoseeker.com/news/story/5544/

    http://www.photonics.com/readart.asp?url=r...ticle&artid=344

    Some Transhumanists spend a lot of time debating each other about topics like sentience and artificial intelligence. When should a computer program have rights? Should a computer program or robot be considered a sentient being if it can make you believe that it is really alive? I don’t think most artificial life forms should have the same rights as humans if they do not have any biological functions. We can program them to simulate our behavior, but that is not the same thing as experiencing the biochemical reactions that we experience. A robot with a biological nervous system and an A.I. that thinks and acts exactly like a human being should probably have some rights. It all depends on whether or not they would pose a threat to normal humans. Angry robots could be very dangerous.

    I became interested in Transhumanism because I am obsessed with escaping my genetically flawed body, either by connecting my brain to a realistic virtual reality or by transferring my memories to some kind of computer chip or genetically enhanced body. I would like to fund scientific research to make these things a reality in the near future.
     
  17. eburacum45 Valued Senior Member

    Messages:
    1,297
    I doubt that will happen in our lifetimes. Unfortunately.
     
  18. Jaster Mereel Hostis Humani Generis Registered Senior Member

    Messages:
    649
    I don't think it will happen at all. (Figured I'd resurrect this thread.)
     
  19. RHaden Registered Senior Member

    Messages:
    36
    That depends on how long our lifetimes become.


     
  20. RHaden Registered Senior Member

    Messages:
    36
    What are your reasons for thinking so? Or should I look at the rest of the thread?
     