Life inside a Computer

Discussion in 'Intelligence & Machines' started by kmguru, Sep 8, 2001.

  1. Rick Valued Senior Member

    It is interesting to note that whole brain emulation (WBE), or mind uploading, is a technological objective that has long been used as a motif in science fiction. For example, in 2001: A Space Odyssey (Del Rey, 1968), Arthur C. Clarke writes that in the future, intelligent life will give up biological bodies in favor of mechanical or electronic bodies.


    bye!
     
  3. Rick Valued Senior Member

    Dalai Lama On Mind Uploading.

    I took this from the website below; you might find it interesting.
    http://www.aleph.se/Trans/Global/Uploading/lama_upload.txt


    From: Michael LaTorra <mikel@accugraph.com>
    Subject: Buddhism, AI & Uploading
    To: omega-point-theory@world.std.com
    Date: Thu, 13 Jul 95 13:28:16 MDT

    Quoted from the book _GENTLE BRIDGES: Conversations with the Dalai Lama
    on the Sciences of Mind_ by Jeremy Hayward and Francisco Varela
    [Shambhala, 1992] pp. 152-153:


    DALAI LAMA: In terms of the actual substance of which computers are
    made, are they simply metal, plastic, circuits, and so forth?

    VARELA: Yes, but this again brings up the idea of the pattern, not the
    substance but the pattern.

    DALAI LAMA: It is very difficult to say that it's not a living being,
    that it doesn't have cognition, even from the Buddhist point of view.
    We maintain that there are certain types of births in which a
    preceding continuum of consciousness is the basis. The consciousness
    doesn't actually arise from the matter, but a continuum of consciousness
    might conceivably come into it.

    HAYWARD: Does Your Holiness regard it as a definite criterion that
    there must be continuity with some prior consciousness? That
    whenever there is a cognition, there must have been a stream of
    cognition going back to beginningless time?

    DALAI LAMA: There is no possibility for a new cognition, which has no
    relationship to a previous continuum, to arise at all. I can't totally
    rule out the possibility that, if all the external conditions and the
    karmic action were there, a stream of consciousness might actually enter into
    a computer.

    HAYWARD: A stream of consciousness?

    DALAI LAMA: Yes, that's right. [DALAI LAMA laughs.] There is a
    possibility that a scientist who is very much involved his whole life
    [with computers], then the next life . . . [he would be reborn in a
    computer], same process! [laughter] Then this machine which is
    half-human and half-machine has been reincarnated.

    VARELA: You wouldn't rule it out then? You wouldn't say this is
    impossible?

    DALAI LAMA: We can't rule it out.

    ROSCH: So if there's a great yogi who is dying and he is standing
    in front of the best computer there is, could he project his
    subtle consciousness into the computer?

    DALAI LAMA: If the physical basis of the computer acquires the
    potential or the ability to serve as a basis for a continuum of
    consciousness. I feel this question about computers will be resolved
    only by time. We just have to wait and see until it actually happens.

    .......................................................................

    This body is almost finished -- standby to upload!


    Live long & prosper,

    Michael LaTorra
    mikel@huey.accugraph.com

     
  5. Rick Valued Senior Member

    Here's another interesting article written by Toby Howard.

    Ghosts, computers, and Chinese Whispers
    Toby Howard
    This article first appeared in Personal Computer World magazine, November 1996.

    "CLICK HERE to upload your soul" was one of the tamer headlines seen recently in reports describing the "new research direction" of British Telecom's research labs at Martlesham Heath. BT, it was reported by such authorities as The Guardian, Reuters, Time, and the Electronic Telegraph, is embarking on a huge new research project. Funded to the tune of 25 million pounds, an eight-man team is developing a "Soul Catcher" memory chip. The chip will be implanted behind a person's eye, and will record all the thoughts and experiences of their lifetime.

    The idea that we might migrate minds from brains into silicon has been around for a while, and is usually referred to as "uploading". This might be achieved destructively, by visiting each neuron in turn, determining its particular characteristics and interconnections, and then replacing it with an equivalent microscopic computer based on nanotechnology; or, perhaps preferable from the patient's point of view, by non-destructively scanning the structure of the living brain and reproducing it elsewhere in silicon, like copying a complex drawing using a pantograph.

    The amount of data involved would be immense, since the hardware of our nervous system is believed to comprise around 10^12 neurons with 10^15 synaptic interconnections. But the capacity of silicon for storing information is increasing at an almost unbelievable rate. "Moore's Law", first expressed by Intel co-founder Gordon Moore in 1965, stated that the density of chips roughly doubles every year. Although the doubling period since the early 1970s is now more like 18 months, we are still seeing an explosive growth of chip-power. If the trend continues into the next century we might expect by 2025 to store ten million megabytes (ten terabytes, or 10^13 bytes) on a single chip. If so, we might hope to record information about a brain's physical structure in a few of these chips.
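    As a rough sanity check of that 2025 figure, the sketch below extrapolates an 18-month doubling period forward. The 1996 baseline of an 8 MB (64 Mbit) memory chip is my own assumption for illustration, not a number from the article.

        # Rough check of the article's 2025 storage figure (assumptions:
        # 18-month doubling, 8 MB chip as a mid-1990s baseline).
        baseline_year = 1996
        baseline_bytes = 8 * 10**6        # 64 Mbit = 8 MB chip (assumed)
        doubling_period = 1.5             # years per doubling

        target_year = 2025
        doublings = (target_year - baseline_year) / doubling_period
        capacity = baseline_bytes * 2**doublings
        print(f"{doublings:.1f} doublings -> {capacity:.1e} bytes per chip")
        # ~19.3 doublings -> ~5e12 bytes: the same order of magnitude as
        # the article's ten terabytes (10^13 bytes).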

    But to talk about uploading thoughts and memories is quite another matter. When we talk about brains and minds, we must confront the classical "mind-body problem". We know that our brain is a collection of biological structures of (so far, at least) unimaginable complexity. Outwardly this "brain-stuff", once described by computing pioneer Alan Turing as like a bowl of cold porridge, is unmistakably physical. Deeper, a stained microscopic section of brain matter reveals a riot of interconnected neurons that looks like squashed rhubarb. How can this biology conjure or host the mystery of human consciousness? How can our minds influence the matter in the universe? Imagine: you're in the pub, and you fancy a bottle of beer. Within moments the bartender obliges. Your mind has somehow caused uncountable billions of atoms in the universe -- atoms in your muscles, your throat, the air, the barman's ears, his brain, his muscles, the fridge, the bottle, the opener, the glass -- to directly respond to your will. Quite a trick for porridge and rhubarb.

    What is this "you" that has such power to disrupt the universe? Are "you" some ethereal entity -- a ghost, a soul -- operating the controls of the brain machine? Or is the machine itself just so complex that in our inability to understand its activity we seek refuge in the idea of a separate "soul"? For proponents of "Strong Artificial Intelligence", the answer is clear: human consciousness really is nothing more than the algorithmic bubbling of cold porridge, and in fact any sufficiently complex algorithm, running on any kind of machine, will lead inexorably to thinking and consciousness. Or so they say. A fierce debate rages over this claim.

    If the "Strong AI" researchers are right, then we should one day expect to see computers which behave as if they are conscious. (It's the "as if" here which so antagonises the philosophers). But the possibility of creating conscious machines raises serious ethical questions. What rights would a conscious machine possess? Would concerned individuals form a Royal Society for the Prevention of Cruelty to Machines? Would those of a religious bent demand that a conscious machine be taught how to worship God? What if conscious machines did not like us, and turned nasty?

    But we must, I regret, be sceptical futurists, and return to earth. Is BT really trying to cache our consciousnesses onto chips stuffed in our heads? "No!", says Chris Winter, the BT researcher quoted in the press as heading the "team of eight Soul Catcher scientists". "We are not building anything!", he told me. The whole story is a media invention, developed like a game of Chinese Whispers from its origins in an after-dinner press briefing Winter gave to local journalists, intending to enthuse them about the future-looking work at BT Labs. Winter's research group had simply undertaken a "technology trend analysis" to speculate on the future capacity of silicon, and using Moore's Law had estimated the 10 terabyte chip by 2025. To illustrate the immensity of such storage, Winter compared it with a back-of-an-envelope guestimate of the volume of data input through a person's sensory organs in an average 70-year lifespan -- 10 terabytes. The press took it from there.
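    Just to unpack Winter's guesstimate, the sketch below works out the average input rate that a 10-terabyte, 70-year lifetime implies; the 10-terabyte figure is his, the rest is simple arithmetic.

        # Average data rate implied by 10 terabytes over a 70-year lifetime.
        lifetime_seconds = 70 * 365.25 * 24 * 3600   # ~2.2e9 s
        total_bytes = 10 * 10**12                    # Winter's 10 TB estimate
        rate = total_bytes / lifetime_seconds
        print(f"~{rate:.0f} bytes/s on average")     # roughly 4,500 bytes/s
        # About 4.5 kB/s (~36 kbit/s), i.e. on the order of a compressed
        # audio stream -- which shows how rough such estimates are.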

    Since current research into neuro-computational cochlear implants for the deaf and retinal implants for the blind is proving successful, perhaps in the distant future something like the Soul Catcher will become a reality. However, the vision of Bob Hoskins saying "It's good to upload" makes me reach for another universe-changing beer.

    Toby Howard teaches at the University of Manchester.

    bye!
     
  7. Rick Valued Senior Member

    GRADUAL UPLOADING AS A COGNITION OF MIND

    by Algimantas Malickas

    Institute of Mathematics and Informatics, Akademijos 4, Vilnius, Lithuania. E-mail: malickas@pub.osf.lt

    Abstract


    Gradual uploading can be considered an alternative to neuron-by-neuron
    uploading. In this article we discuss some technological aspects of the
    problem, including computer-brain and brain-computer interfaces and the
    computational power needed for mind simulation. We interpret the uploading
    system as a fault-tolerant neural network (in which the death of the brain
    is the damage) and as a system that imitates a "black box" (the brain). We
    use a model of child mental development to describe the evolution of the
    uploading system, with the external world replaced by the brain's "world",
    the receptors by the brain-computer interface, and the motor functions by
    the computer-brain interface. Speech communication is discussed as a
    prototype of such an interface.

    Contents

    1. Introduction
    2. Functional uploading
    3. Interpretation of gradual uploading
    4. Brain-computer interface
    5. Computer-brain interface
    6. Computer
    7. The system of brain cognition
    7.1 The principle of world modeling
    7.2 Comparison between the real world and brain cognition
    7.3 Evolution of the external system
    8. Speech as an interface
    9. The problem of person identity
    10. Discussion
    11. Conclusions
    12. Acknowledgments
    13. References



    1. Introduction

    Gradual uploading of the mind (also known as gradual extension, metamorphosis
    [7], soft uploading, or brain enhancement) can be regarded as an alternative
    to atom-by-atom or neuron-by-neuron uploading.



    Gradual uploading requires attaching an artificial neural network (or some
    other AI system) to the brain. The brain and the external system would then
    operate as a single system. During this coexistence, memory and the other
    functions of the mind would gradually grow into the external system and
    would survive the death of the brain. The death of the brain would therefore
    not be fatal for the person, i.e. the main goal of uploading would be
    achieved.



    The goal of this article is to discuss some ideas about how this problem
    might be solved, what is currently happening in this field of research, and
    what remains to be done.
    2. Functional uploading

    Gradual uploading would be an uploading of the functions of the mind, not of
    the morphological architecture of the brain.
    First, the morphological structure of the brain is determined by more than
    functional necessity. Evolution is a highly stochastic process, and
    accidental factors have contributed a great deal to the structure of the
    brain.
    Second, some functions (homeostasis, most motor reflexes, etc.) are of no
    use to a cybernetic system whose "cyber-body" differs greatly from the human
    body. Moreover, such functions do not appear to be very important for
    self-perception.
    Brain mechanisms and functions that are unimportant for self-perception can
    be abolished, adapted, or changed. This standpoint could greatly simplify
    the uploading problem.

    3. Interpretation of gradual uploading

    The external neural network of the uploading system could be implemented in
    several ways, and the operation of this network can be interpreted in
    several ways as well:
    1. The functions of a neural network are distributed over the network (or at
    least over a large part of it). In some cases a neural network is
    fault-tolerant, i.e. after some neurons are removed, the surviving part of
    the network inherits almost the complete set of the parent network's
    functions. Under suitable conditions, a similar property could allow the
    external system to be separated when the biological brain dies.
    2. The brain can be interpreted as a "black box" with a complex internal
    structure and complex functional properties. If a sufficient quantity of
    information about these properties were obtained, imitating or
    reconstructing them would be possible.
    3. Some ideas about the organization and interpretation of the external
    neural network can be drawn from the existing example of a powerful neural
    system - the human brain. The evolution of the external system can then be
    interpreted as the development of a child, and models from cognitive
    science, psychology, and pedagogy can be used to describe it. We discuss
    this point of view in detail later.
    In any case, the problem of technically implementing gradual uploading can
    be divided into:
    the external system software problem,
    the external system hardware problem,
    the computer-brain interface problem,
    the brain-computer interface problem.
    4. Brain-computer interface

    There are several ways to create this interface. One is to implant
    electrodes or microchips in the brain; some examples of such interfaces
    already exist [9,10,11]. The main disadvantage of this method is the need
    for surgical intervention in the brain.
    Another way is to use multichannel electroencephalography (EEG). Some EEG
    brain-computer interfaces are also available today [1]. For EEG, the minimum
    distance between electrodes must exceed one centimeter, which limits the
    number of electrodes and hence the throughput of an EEG interface. EEG can
    represent brain information from the cortex only, and each EEG channel
    represents only the summed activity of the many neurons under its electrode.
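    To give a feel for how severe that limit is, the sketch below (not from the paper) estimates an upper bound on the number of EEG electrodes from the 1 cm minimum spacing; the assumed scalp area of roughly 700 cm^2 is my own ballpark figure.

        # Upper bound on EEG channel count given a 1 cm minimum electrode
        # spacing (scalp area of ~700 cm^2 is an assumed ballpark figure).
        scalp_area_cm2 = 700.0
        min_spacing_cm = 1.0
        max_electrodes = scalp_area_cm2 / min_spacing_cm**2
        print(f"at most ~{max_electrodes:.0f} electrodes")
        # A few hundred channels at best, each summing the activity of very
        # many cortical neurons -- far below what uploading would need.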
    Another possible interface technology is magnetoencephalography using a
    SQUID (Superconducting QUantum Interference Device) [13], currently the most
    sensitive technology for registering magnetic fields.
    For the purposes of gradual uploading, the brain-computer interface must
    have some specific features. One problem is how to make the necessary brain
    information appear in the cortical area under the EEG or MEG electrodes or
    near an implant. This problem can be partly solved by optimizing the
    placement of the electrodes according to our current knowledge of cortical
    structure.
    A more powerful approach would be adaptation by the brain itself. The
    brain-computer interface would have feedback coupling (the computer-brain
    interface), which would ensure good cooperation between the brain and the
    uploading system. Through this cooperation, the brain would adapt on its
    own. In addition, specific training methods could be used to improve this
    adaptation and to create the pathways needed to transfer information into
    the brain; one example would be human learning methods that relate
    artificially induced electrical activity in deeper brain structures to the
    activity in a fixed area of the cortex.
    Some examples of human brain adaptation (Braille reading, sign language,
    etc.) show that such adaptation is possible in principle, although the
    capacity for it diminishes over a person's lifetime.
    5. Computer-brain interface

    Brain implants could also be used to transmit information into the brain
    [9,10,11]. Another way is to use the sensory channels of the human body to
    carry information from the computer to the brain.
    The human body uses many nerves to carry information about the external
    world to the brain, and some of these nerves could carry information from
    the computer instead. One option is to use the tactile nerves. In this case
    the interface hardware would include a matrix of elements, each consisting
    of a small vibrator (like a small needle).
    Tactile input to the brain has been widely investigated as a way of
    compensating for hearing disorders through tactile sensation. A bibliography
    of these studies can be found in [2,3].
    Another way to transmit information is to use visual sensations. Such an
    interface would have better information throughput, but it might interfere
    with visual perception of the world.
    6. Computer

    Using a Connection Machine, processing speeds of up to 1.3 * 10^9 synapses
    per second have been achieved [6], and up to 2.7 * 10^9 synapses per second
    on a transputer system [8]. In principle, the most powerful current parallel
    hardware (1.8 * 10^12 FLOPS) [14] could implement 10^10 - 10^11
    synapses/second.
    Estimates of the computational power of the human brain are still very
    uncertain. Some estimates can be based on the number of synapses in the
    brain and the firing rate of its neurons. The human brain has about 10^11
    neurons, each with on average 10^2 - 10^4 synapses, and the average firing
    rate of brain neurons is about 100-1000 Hz.
    As a result, modeling the brain would require a computational power of
    10^11 neurons * (10^2 - 10^4 synapses/neuron) * (100 - 1000 Hz) =
    10^15 - 10^18 synapses/second.
    Other estimates [12] put the brain's power between 10^13 and 10^16
    synapses/second.
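    A minimal sketch (not part of the paper) of the estimate above and of the gap between it and the parallel hardware cited at the start of this section:

        # Required simulation power vs. the cited parallel hardware.
        neurons = 1e11
        synapses_per_neuron = (1e2, 1e4)
        firing_rate_hz = (1e2, 1e3)

        required = [neurons * s * f                  # synapse events/s
                    for s, f in zip(synapses_per_neuron, firing_rate_hz)]
        available = (1e10, 1e11)                     # synapses/s on ~1.8e12 FLOPS

        print(f"required : {required[0]:.0e} - {required[1]:.0e} synapses/s")
        print(f"available: {available[0]:.0e} - {available[1]:.0e} synapses/s")
        print(f"shortfall: {required[0]/available[1]:.0e} - "
              f"{required[1]/available[0]:.0e} x")
        # 10^15 - 10^18 required vs. 10^10 - 10^11 available: a gap of
        # roughly four to eight orders of magnitude with the cited hardware.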
    On the other hand, several factors could reduce the required computer power.
    Most artificial neural networks (backpropagation, Hopfield, Kohonen,
    Grossberg, etc.) represent neuron activity as an analog signal amplitude
    (a firing-rate code) rather than as individual spikes; such networks are
    simpler and faster to simulate.
    Converting the whole brain network to an amplitude-coded network could be
    problematic. For example, temporal patterns of integration and summation are
    awkward for amplitude-coded networks but natural for spiking neurons. Still,
    some systems (the spatial-processing parts of the visual system, for
    example) could be converted and simulated more efficiently.
    The brain also has many systems (sensory and motor channels, the homeostasis
    system) that are less important and could be replaced after uploading. Many
    sensory-motor tracts would be incompatible with the properties of the new
    body, many old reflexes would be useless, and so on. The large network that
    controls sound articulation could be replaced by a few transistors, for
    example. Similar changes could be made to most of the brain's sensory-motor
    systems, further reducing the required computer power.
    In addition, the brain's neural networks are redundant, and not all neurons
    are active at the same time. These facts could also reduce the required
    computer power.
    Finally, the evolution of the external system will take some time (a child's
    development takes years), and during that time computers will become more
    powerful (a child's brain also has relatively few synapses at birth, and
    their number increases during development).
    7. The system of brain cognition

    7.1 The principle of world modeling

    A newborn child has sensors, effectors, motivation, and a sufficiently
    powerful neural network (with the architecture needed for
    self-organization). Via its sensors and effectors, the child is in contact
    with its environment, both physical and social. During the child's
    development, the brain creates an internal model of the external world.
    Most of a person's intellectual functions can be interpreted as information
    operations within that model [4]. The same principle could be realized in a
    cybernetic system, provided the system has all the necessary attributes
    (contact with an environment, motivation, etc.). Some studies in the fields
    of neural networks and AI [15,16] show how such a model of the external
    world could be formed.
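    As a toy illustration of how an internal model of a "world" can be formed by self-organization (in the spirit of [15,16], though not taken from them or from the paper), the sketch below clusters a stream of simulated sensor readings into categories with a simple online k-means rule.

        import random

        # Toy "world model": form categories from a stream of 2-D sensor
        # readings using a simple online k-means update. Purely illustrative.
        random.seed(0)

        def sense():
            """Simulated sensor: readings cluster around two hidden objects."""
            cx, cy = random.choice([(0.0, 0.0), (5.0, 5.0)])
            return (cx + random.gauss(0, 0.5), cy + random.gauss(0, 0.5))

        categories = [(random.random(), random.random()) for _ in range(2)]
        rate = 0.1                                   # learning rate

        for _ in range(2000):
            x, y = sense()
            # find the nearest category and nudge it toward the new reading
            i = min(range(len(categories)),
                    key=lambda k: (categories[k][0] - x) ** 2
                                  + (categories[k][1] - y) ** 2)
            cx, cy = categories[i]
            categories[i] = (cx + rate * (x - cx), cy + rate * (y - cy))

        print(categories)   # ends up near (0, 0) and (5, 5): a crude "model"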

    7.2 Comparison between the real world and brain cognition

    The external world of the cyber system need not be the physical world. If
    the sensors and effectors of the cyber system were replaced by brain-computer
    and computer-brain interfaces (i.e. the system were connected to the "world"
    of a human brain), the system would create an internal model of that brain,
    just as it would create a model of the physical world.
    The "brain world" would not be very different from the real world. Some
    similarities:
    1. Cognition of the real world is implemented via sensations. Cognition of
    the brain would be implemented via the external system's sensations - the
    brain-computer interface.
    2. The brain has the ability to change the real world. The external system
    would use signals sent to the brain (feedback coupling) to change the brain.
    3. To describe the real world, the brain uses sensory patterns, symbolic
    categories, combinations of patterns and categories, and relationships
    between them. To describe the "brain world", the external system would use
    patterns from the brain-computer interface, symbolic categories,
    combinations of these, and relations between these patterns and categories.
    4. Many uncertainties exist in the information coming from the real world;
    many uncertainties likewise exist in the information coming from the brain.
    On the other hand, there are some differences in the case of the external
    system:
    1. There are only indirect ways to influence the brain's developmental
    process. In the case of the external system, it is possible to influence the
    evolution of every single neuron.
    2. Cognition of the real world is formed from two main parts: sensory
    cognition (unprocessed information) and symbolic cognition (information that
    has been processed during the evolution of society). Very roughly, this can
    be interpreted as supervised neural network learning: the sensory
    information would be the network input, and the symbolic information
    (notions, for example) the learning target.
    In the case of the external system, the "sensor" and "learning target"
    inputs would be received from the same environment - the human brain. The
    brain's information contains not only unorganized sensory information (as
    the physical world does), but also more organized, abstract information.
    This information could substitute for the symbolic information of the
    real-world case.
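    A minimal sketch of that supervised-learning analogy, with "sensor" vectors as network input and a "symbolic" label as the learning target, here using a single perceptron (an illustration of the analogy only, not anything proposed in the paper):

        # Toy version of "sensor input, symbolic target": a perceptron learns
        # to attach the symbolic label "bright" to raw 3-channel sensor data.
        samples = [
            ((0.9, 0.8, 0.7), 1),   # sensor reading, symbolic target: bright
            ((0.1, 0.2, 0.1), 0),   # dark
            ((0.8, 0.9, 0.9), 1),
            ((0.2, 0.1, 0.3), 0),
        ]
        weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.1

        for _ in range(20):                          # a few passes over the data
            for x, target in samples:
                out = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
                err = target - out
                weights = [w + lr * err * xi for w, xi in zip(weights, x)]
                bias += lr * err

        test = (0.7, 0.9, 0.8)
        label = 1 if sum(w * xi for w, xi in zip(weights, test)) + bias > 0 else 0
        print("bright" if label else "dark")         # -> "bright"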
    7.3 Evolution of the external system

    At the cognitive level, the evolution of this external system will depend on
    the initial and later motivations of the system. Control of this motivation
    would make it possible to control the evolution of the external system.
    Inborn motivation is very simple: stress or satisfaction. Genetics defines
    only which sensations count as stress and which as satisfaction. A child's
    development up to about two years of age can be described as a sensory
    stage, in which most sensory categories are merely memorized. The child's
    motivation refers to these newly created categories.
    After the sensory stage, semantic connections between categories are
    created. Through self-organizing processes and social influence (mainly
    speech), the abstraction of categories begins, and the child's later
    motivation refers to these new, more abstract categories.
    The external system could implement a similar model. Some initial
    ("genetic") motives are needed for the evolution of this system. Because the
    receptors of the system would be connected to the human brain, the initial
    motive could be defined as some particular state of the brain, for example
    satisfaction. This state must be defined before the system is initialized,
    i.e. it is necessary to know what brain output corresponds to satisfaction.
    This property could be defined subjectively or objectively, for example
    using psychological or biometric tests (like a lie detector).
    In the sensory stage, the external system would create categories of the
    "brain world" and use these categories to define further motives of the
    system. In this stage, the external system would have to respond clearly to
    concrete brain states.
    Later, the categories of the "brain world" would be related via semantic
    connections, and symbolic brain information would be used to abstract the
    categories. During these processes the motivation of the system could be
    refined using the new categories.
    This is only a very rough description; the real processes would be much more
    complicated. The external system's abstraction and rule-extraction
    functions, and, most importantly, the adaptation of the human brain, must
    still be investigated.
    The possibility that the external system could keep operating after brain
    death is supported by hemispherectomy - the removal of the cortex of one
    brain hemisphere in cases of tumor. After the removal, the brain's abilities
    survive, though some hemisphere-specific functions may be lost [5].
    It is also necessary to plan how the external system will contact the
    external world after the brain's death. This problem could be solved with
    artificial sensors and effectors, and artificial means of preliminary
    processing of sensory or motor information, which would be connected to the
    external system after the brain's death. Another way is to organize the
    sensory and motor channels during the system's evolution. In that case it is
    necessary to plan how the additional sensation of the external world would
    influence the evolution of the extension system and the brain.
    8. Speech as an interface

    A rudimentary example of gradual uploading might be speech communication
    between persons. Human speech can be interpreted as an interface between two
    brains, though it does not have the rate necessary for uploading.
    On average, a human being uses about 10^4 notions - the words of speech.
    The rate of speech is about 2 - 5 words per second, and the information
    capacity of a word is roughly 20 bits (choosing 1 word out of 10^4 is 13 -
    14 bits, plus about 6 bits of grammatical information). The information
    throughput of speech is therefore about 40 - 100 bits per second.
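    The arithmetic behind those figures, as a quick sketch (the vocabulary size, word rate, and bits-per-word values are the ones used above):

        import math

        vocabulary = 10**4                 # distinct words a speaker uses
        words_per_second = (2, 5)
        bits_per_word = math.log2(vocabulary) + 6   # word choice + ~6 bits of grammar

        low = words_per_second[0] * bits_per_word
        high = words_per_second[1] * bits_per_word
        print(f"~{bits_per_word:.1f} bits/word -> {low:.0f} - {high:.0f} bits/s")
        # log2(10^4) is about 13.3 bits, ~19 bits per word in total, giving
        # roughly 40 - 100 bits per second, as stated above.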
    In spite of this low speed, the results of such communication are very good.
    A large part of our knowledge is acquired through speech or reading.
    Sometimes, after long contact with another person, we can observe phenomena
    that could be interpreted as rudiments of uploading or merging.
    9. The problem of person identity

    A very important goal of gradual uploading is the person's subjective
    perception of continuity throughout the evolution of the whole system.
    It is possible that the brain and the external system could work as one
    system; a fighter pilot perceives the machine as part of his body. The
    external system would have more communication channels and longer contact
    with the brain (ideally, that contact would be unbroken). The subjective
    self-perception of the person could therefore be not only the brain's
    perception, but the common perception of the brain and the external system.
    During gradual uploading, the common brain - external system neural system
    would change dramatically, and these changes could be interpreted as changes
    of personal identity. But the changes would be evolutionary in character
    over the course of the gradual uploading procedure.
    Over a span of years, a person's neural system changes in any case (with or
    without gradual uploading): some new impressions, emotional traits, motor
    skills, etc. are acquired, and some are lost. Because these changes are
    evolutionary in character, the subjective perception of continuity of
    personal identity is preserved.
    The most drastic leap would be the death of part of the common neural system
    - the biological brain. Fortunately, the human brain has unconscious states,
    such as sleep, during which the external system could continue its
    activities. When the brain woke up, these impressions could be interpreted
    as "impressions after the brain's death". Such experience could reduce the
    psychological problems.
    Such a mechanism of person identity perception would be a unique property of
    gradual uploading. In the neuron-by-neuron uploading case, this problem
    arises much more sharply.
    10. Discussion

    It is necessary to mention some problems that must be solved to implement
    gradual uploading.
    1. Although some examples of world modeling already exist, many
    investigations are still necessary to create a cognition system with the
    power of the brain. The computer power necessary for mind simulation is also
    not yet available, though the gap is not very large.
    2. To create the external system, it is necessary to develop some conception
    of which brain functions should be uploaded to the external system, which
    functions could merely be imitated, and which functions are not necessary
    for personal identity and functioning at all. The same problem, however,
    would be faced in the atom-by-atom or neuron-by-neuron uploading case.
    3. A very important problem is the need to create channels for transmitting
    information into the brain. Methods for this adaptation still need to be
    developed.
    4. Another problem is the throughput of the brain-computer and
    computer-brain interfaces. Good cooperation between the brain and the
    external system requires a sufficient rate of information exchange. The
    questions of what rate is necessary, and what rate the interfaces can
    actually provide, could determine the usefulness of these interfaces.
    11. Conclusions

    Although the gradual uploading problem is very complicated, many components
    of a gradual uploading system may already be available. Some of them must be
    adapted for this purpose and some must be improved, but no qualitative
    technological leap is needed for its implementation.
    12. Acknowledgments

    I greatly appreciate very useful discussions with Anders Sandberg and Joseph
    Strout, which gave me a better understanding of this problem.
    13. References




    1. http://www.aleph.se/Trans/Global/Uploading/richard.seabrook.brain.computer.interface.txt
    2. http://mambo.ucsc.edu/psl/tactile_speech.txt
    3. http://mambo.ucsc.edu/psl/speechreading.txt
    4. Marvin L. Minsky, Matter, Mind and Models, in "Semantic Information Processing" (Marvin Minsky, Ed.), MIT Press, 1968. ftp://ftp.ai.mit.edu/pub/minsky/MatterMind&Models
    5. Floyd E. Bloom, Arlyne Lazerson, Laura Hofstadter, Brain, Mind and Behavior, New York, 1985.
    6. Alexander Singer, Exploiting the Inherent Parallelism of Artificial Neural Networks to Achieve 1300 Million Interconnects per Second, International Neural Networks Conference, Paris, July 9-13, 1990.
    7. Thomas Donaldson, Metamorphosis: An Alternative To Uploading, Cryonics, May 1990.
    8. H.P. Siemon, A. Ultsch, Kohonen Networks on Transputers: Implementation and Animation, International Neural Networks Conference, Paris, July 9-13, 1990.
    9. http://www.ee.surry.ac.uk/Personal/D.Banks/devel.html
    10. http://aramis.stanford.edu/cis/research/LabProject94/StanfordDVA.html
    11. http://atlas.axiom.net/cornucopia/chips.html
    12. http://www.merkle.com/brainLimits.html
    13. http://www.hypres.com/~masoud/digital.shtml
    14. http://www.ssd.intel.com/press/tflop.html
    15. Uwe R. Zimmer, Robust World-Modelling and Navigation in a Real World, NeuroComputing, Vol. 13, Nos. 2-4, 1996. gopher://ag-vp-gopher.informatic.uni-kl.de/44ftp:Public:Neural_Networks%3aReparts%3aZimmer.Robust.ps.gz
    16. http://isd.cme.nist.gov/brochur/SPWN.ps



    bye!
     
  8. Rick Valued Senior Member

    One of the best articles about uploading I have ever read. Worth reading at
    least once. I love this!

    The Molecular Repair of the Brain
    by
    Ralph C. Merkle
    Xerox PARC
    3333 Coyote Hill Road
    Palo Alto, CA 94304
    merkle@parc.xerox.com
    This article was published in two parts in Cryonics magazine, Vol. 15 No's 1 & 2, January and April 1994. Cryonics is a publication of the Alcor Life Extension Foundation, Scottsdale AZ, info@alcor.org, 800-367-2228.
    The URL for this article is: "http://www.merkle.com/cryo/techFeas.html".

    A short version of this paper, titled "The Technical Feasibility of Cryonics," appeared in Medical Hypotheses Vol. 39, 1992; 6-16.

    More recent information on both nanotechnology and potential medical applications of nanotechnology is available, as well as a page on cryonics.

    CONTENTS
    ABSTRACT
    INTRODUCTION
    NANOTECHNOLOGY
    DESCRIBING THE BRAIN AT THE MOLECULAR LEVEL
    CRITERIA OF DEATH
    FREEZING DAMAGE
    ISCHEMIC INJURY AND PRESUSPENSION INJURY
    MEMORY
    TECHNICAL OVERVIEW
    THE REPAIR PROCESS
    ALTERNATIVES TO REPAIR
    RESTORATION
    CONCLUSION
    APPENDIX
    ACKNOWLEDGMENTS
    REFERENCES
    NOTES
    ABSTRACT
    Cryonic suspension is a method of stabilizing the condition of someone who is terminally ill so that they can be transported to the medical care facilities that will be available in the late 21st or 22nd century. There is little dispute that the condition of a person stored at the temperature of liquid nitrogen is stable, but the process of freezing inflicts a level of damage which cannot be reversed by current medical technology. Whether or not the damage inflicted by current methods can ever be reversed depends both on the level of damage and the ultimate limits of future medical technology. The failure to reverse freezing injury with current methods does not imply that it can never be reversed in the future, just as the inability to build a personal computer in 1890 did not imply that such machines would never be economically built. This paper considers the limits of what medical technology should eventually be able to achieve (based on the currently understood laws of chemistry and physics) and the kinds of damage caused by current methods of freezing. It then considers whether methods of repairing the kinds of damage caused by current suspension techniques are likely to be achieved in the future.
    INTRODUCTION
    Tissue preserved in liquid nitrogen can survive centuries without deterioration [note 1]. This simple fact provides an imperfect time machine that can transport us almost unchanged from the present to the future: we need merely freeze ourselves in liquid nitrogen. If freezing damage can someday be cured, then a form of time travel to the era when the cure is available would be possible. While unappealing to the healthy, this possibility is more attractive to the terminally ill, whose options are somewhat limited. Far from being idle speculation, this option is available to anyone who so chooses. First seriously proposed in the 1960's by Ettinger[80], there are now three organizations in the U.S. that provide cryonic suspension services.
    Perhaps the most important question in evaluating this option is its technical feasibility: will it work?

    Given the remarkable progress of science during the past few centuries it is difficult to dismiss cryonics out of hand. The structure of DNA was unknown prior to 1953; the chemical (rather than "vitalistic") nature of living beings was not appreciated until early in the 20th century; it was not until 1864 that spontaneous generation was put to rest by Louis Pasteur, who demonstrated that no organisms emerged from heat-sterilized growth medium kept in sealed flasks; and Sir Isaac Newton's Principia established the laws of motion in 1687, just over 300 years ago. If progress of the same magnitude occurs in the next few centuries, then it becomes difficult to argue that the repair of frozen tissue is inherently and forever infeasible.

    Hesitation to dismiss cryonics is not a ringing endorsement and still leaves the basic question in considerable doubt. Perhaps a closer consideration of how future technologies might be applied to the repair of frozen tissue will let us draw stronger conclusions -- in one direction or the other. Ultimately, cryonics will either (a) work or (b) fail to work. It would seem useful to know in advance which of these two outcomes to expect. If it can be ruled out as infeasible, then we need not waste further time on it. If it seems likely that it will be technically feasible, then a number of nontechnical issues should be addressed in order to obtain a good probability of overall success.

    The reader interested in a general introduction to cryonics is referred to other sources[23, 24, 80]. Here, we focus on technical feasibility.

    While many isolated tissues (and a few particularly hardy organs) have been successfully cooled to the temperature of liquid nitrogen and rewarmed[59], further successes have proven elusive. While there is no particular reason to believe that a cure for freezing damage would violate any laws of physics (or is otherwise obviously infeasible), it is likely that the damage done by freezing is beyond the self-repair and recovery capabilities of the tissue itself. This does not imply that the damage cannot be repaired, only that significant elements of the repair process would have to be provided from an external source. In deciding whether such externally provided repair will (or will not) eventually prove feasible, we must keep in mind that such repair techniques can quite literally take advantage of scientific advances made during the next few centuries. Forecasting the capabilities of future technologies is therefore an integral component of determining the feasibility of cryonics. Such a forecast should, in principle, be feasible. The laws of physics and chemistry as they apply to biological structures are well understood and well defined. Whether the repair of frozen tissue will (or will not) eventually prove feasible within the framework defined by those laws is a question which we should be able to answer based on what is known today.

    Current research (outlined below) supports the idea that we will eventually be able to examine and manipulate structures molecule by molecule and even atom by atom. Such a technical capability has very clear implications for the kinds of damage that can (and cannot) be repaired. The most powerful repair capabilities that should eventually be possible can be defined with remarkable clarity. The question we wish to answer is conceptually straightforward: will the most powerful repair capability that is likely to be developed in the long run (perhaps over a few centuries) be adequate to repair tissue that is frozen using the best available current methods?[note 2] Eigler and Schweizer[49] have already developed the capability "... to fabricate rudimentary structures of our own design, atom by atom." Eigler said[129], "...by the time I'm ready to kick the bucket, we might be able to store enough information on my exact physical makeup that someday we'll be able to reassemble me, atom by atom."

    The general purpose ability to manipulate structures with atomic precision and low cost is often called nanotechnology (also called molecular engineering, molecular manufacturing, molecular nanotechnology, etc.). There is widespread belief that such a capability will eventually be developed [1, 2, 3, 4, 7, 8, 10, 19, 41, 47, 49, 83, 84, 85, 106, 107, 108, 116, 117, 118, 119, 121, 122], though exactly how long it will take is unclear. The long storage times possible with cryonic suspension make the precise development time of such technologies noncritical. Development any time during the next few centuries would be sufficient to save the lives of those suspended with current technology.

    In this paper, we give a brief introduction to nanotechnology and then clarify the technical issues involved in applying it in the conceptually simplest and most powerful fashion to the repair of frozen tissue.

    NANOTECHNOLOGY
    Broadly speaking, the central thesis of nanotechnology is that almost any structure consistent with the laws of chemistry and physics that can be specified can in fact be built. This possibility was first advanced by Richard Feynman in 1959 [4] when he said: "The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom." (Feynman won the 1965 Nobel prize in physics).
    This concept is receiving increasing attention in the research community. There have been two international research conferences directly on molecular manufacturing[83, 84, 116, 121] [this was written a few years ago. The Foresight Institute has continued to sponsor this conference series, see: http://www.foresight.org/Conferences/] as well as a broad range of conferences on related subjects. Science [47, page 26] said "The ability to design and manufacture devices that are only tens or hundreds of atoms across promises rich rewards in electronics, catalysis, and materials. The scientific rewards should be just as great, as researchers approach an ultimate level of control -- assembling matter one atom at a time." "Within the decade, [John] Foster [at IBM Almaden] or some other scientist is likely to learn how to piece together atoms and molecules one at a time using the STM [Scanning Tunneling Microscope]."

    Eigler and Schweizer[49] at IBM reported on "...the use of the STM at low temperatures (4 K) to position individual xenon atoms on a single-crystal nickel surface with atomic precision. This capacity has allowed us to fabricate rudimentary structures of our own design, atom by atom. The processes we describe are in principle applicable to molecules also. In view of the device-like characteristics reported for single atoms on surfaces [omitted references], the possibilities for perhaps the ultimate in device miniaturization are evident."

    J. A. Armstrong, IBM Chief Scientist and Vice President for Science and Technology[106] said "I believe that nanoscience and nanotechnology will be central to the next epoch of the information age, and will be as revolutionary as science and technology at the micron scale have been since the early '70's.... Indeed, we will have the ability to make electronic and mechanical devices atom-by-atom when that is appropriate to the job at hand."

    The New York Times said[107]: "Scientists are beginning to gain the ability to manipulate matter by its most basic components --- molecule by molecule and even atom by atom." "That ability, while now very crude, might one day allow people to build almost unimaginably small electronic circuits and machines, producing, for example, a super computer invisible to the naked eye. Some futurists even imagine building tiny robots that could travel through the body performing surgery on damaged cells."

    Drexler[1,10,19,41, 85] has proposed the assembler, a small device resembling an industrial robot which would be capable of holding and positioning reactive compounds in order to control the precise location at which chemical reactions take place. This general approach should allow the construction of large atomically precise objects by a sequence of precisely controlled chemical reactions.

    The best technical discussion of nanotechnology has recently been provided by Drexler[85].

    Ribosomes
    The plausibility of this approach can be illustrated by the ribosome. Ribosomes manufacture all the proteins used in all living things on this planet. A typical ribosome is relatively small (a few thousand cubic nanometers) and is capable of building almost any protein by stringing together amino acids (the building blocks of proteins) in a precise linear sequence. To do this, the ribosome has a means of grasping a specific amino acid (more precisely, it has a means of selectively grasping a specific transfer RNA, which in turn is chemically bonded by a specific enzyme to a specific amino acid), of grasping the growing polypeptide, and of causing the specific amino acid to react with and be added to the end of the polypeptide[14].
    The instructions that the ribosome follows in building a protein are provided by mRNA (messenger RNA). This is a polymer formed from the 4 bases adenine, cytosine, guanine, and uracil. A sequence of several hundred to a few thousand such bases codes for a specific protein. The ribosome "reads" this "control tape" sequentially, and acts on the directions it provides.
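    To make the "control tape" idea concrete, here is a toy translation loop that reads an mRNA string three bases (one codon) at a time and appends the corresponding amino acid; it uses only a handful of entries from the standard genetic code and is an illustration added here, not part of Merkle's text.

        # Toy illustration of a ribosome reading its mRNA "control tape":
        # one codon at a time, each appending one amino acid.
        codon_table = {
            "AUG": "Met",   # start codon
            "UUU": "Phe",
            "GGC": "Gly",
            "AAA": "Lys",
            "UAA": "STOP",
        }

        mrna = "AUGUUUGGCAAAUAA"
        protein = []
        for i in range(0, len(mrna), 3):
            amino_acid = codon_table[mrna[i:i + 3]]
            if amino_acid == "STOP":
                break
            protein.append(amino_acid)

        print("-".join(protein))   # Met-Phe-Gly-Lys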

    Assemblers
    In an analogous fashion, an assembler will build an arbitrary molecular structure following a sequence of instructions. The assembler, however, will provide three-dimensional positional and full orientational control over the molecular component (analogous to the individual amino acid) being added to a growing complex molecular structure (analogous to the growing polypeptide). In addition, the assembler will be able to form any one of several different kinds of chemical bonds, not just the single kind (the peptide bond) that the ribosome makes.
    Calculations indicate that an assembler need not inherently be very large. Enzymes "typically" weigh about 10^5 amu (atomic mass units [note 3]), while the ribosome itself is about 3 x 10^6 amu[14]. The smallest assembler might be a factor of ten or so larger than a ribosome. Current design ideas for an assembler are somewhat larger than this: cylindrical "arms" about 100 nanometers in length and 30 nanometers in diameter, rotary joints to allow arbitrary positioning of the tip of the arm, and a worst-case positional accuracy at the tip of perhaps 0.1 to 0.2 nanometers, even in the presence of thermal noise[ 85]. Even a solid block of diamond as large as such an arm weighs only sixteen million amu, so we can safely conclude that a hollow arm of such dimensions would weigh less. Six such arms would weigh less than 10^8 amu.

    Molecular Computers
    The assembler requires a detailed sequence of control signals, just as the ribosome requires mRNA to control its actions. Such detailed control signals can be provided by a computer. A feasible design for a molecular computer has been presented by Drexler[2, 19, 85]. This design is mechanical in nature, and is based on sliding rods that interact by blocking or unblocking each other at "locks." [note 4] This design has a size of about 5 cubic nanometers per "lock" (roughly equivalent to a single logic gate). Quadrupling this size to 20 cubic nanometers (to allow for power, interfaces, and the like) and assuming that we require a minimum of 10^4 "locks" to provide minimal control results in a volume of 2 x 10^5 cubic nanometers (.0002 cubic microns) for the computational element. This many gates is sufficient to build a simple 4-bit or 8-bit general purpose computer. For example, the 6502 8-bit microprocessor can be implemented in about 10,000 gates, while an individual 1-bit processor in the Connection Machine has about 3,000 gates. Assuming that each cubic nanometer is occupied by roughly 100 atoms of carbon, this 2 x 10^5 cubic nanometer computer will have a mass of about 2 x 10^8 amu.
    An assembler might have a kilobyte of high speed (rod-logic based) RAM, (similar to the amount of RAM used in a modern one-chip computer) and 100 kilobytes of slower but more dense "tape" storage -- this tape storage would have a mass of 10^8 amu or less (roughly 10 atoms per bit -- see below). Some additional mass will be used for communications (sending and receiving signals from other computers) and power. In addition, there will probably be a "toolkit" of interchangeable tips that can be placed at the ends of the assembler's arms. When everything is added up a small assembler, with arms, computer, "toolkit," etc. should weigh less than 10^9 amu.
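    The mass budget in the last two paragraphs can be reproduced with a few lines of arithmetic (a sketch added here, not from the original article; the atom counts and the ~12 amu carbon mass are the figures used in the text):

        # Mass of the rod-logic computer and its "tape" storage, using the
        # figures in the text (100 carbon atoms per cubic nanometer, ~12 amu
        # per carbon atom, 10 atoms per stored bit).
        locks = 10**4
        nm3_per_lock = 20                       # 5 nm^3 per lock, x4 for overhead
        volume_nm3 = locks * nm3_per_lock       # 2e5 nm^3
        computer_amu = volume_nm3 * 100 * 12    # ~2.4e8 amu, i.e. about 2e8

        tape_bits = 100 * 1024 * 8              # 100 kilobytes of tape storage
        tape_amu = tape_bits * 10 * 12          # ~1e8 amu

        print(f"computer ~{computer_amu:.0e} amu, tape ~{tape_amu:.0e} amu")
        # Both well under the ~1e9 amu budget for a complete assembler, and
        # far below the ~1e12 amu mass of an E. coli cell.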

    E. coli (a common bacterium) weighs about 10^12 amu[14, page 123]. Thus, an assembler should be much larger than a ribosome, but much smaller than a bacterium.

    Self Replicating Systems
    It is also interesting to compare Drexler's architecture for an assembler with the von Neumann architecture for a self replicating device. Von Neumann's "universal constructing automaton"[45] had both a universal Turing machine to control its functions and a "constructing arm" to build the "secondary automaton." The constructing arm can be positioned in a two-dimensional plane, and the "head" at the end of the constructing arm is used to build the desired structure. While von Neumann's construction was theoretical (existing in a two dimensional cellular automata world), it still embodied many of the critical elements that now appear in the assembler.
    Further work on self-replicating systems was done by NASA in 1980 in a report that considered the feasibility of implementing a self-replicating lunar manufacturing facility with conventional technology[48]. One of their conclusions was that "The theoretical concept of machine duplication is well developed. There are several alternative strategies by which machine self-replication can be carried out in a practical engineering setting." They estimated it would require 20 years (and many billions of dollars) to develop such a system. While they were considering the design of a macroscopic self-replicating system (the proposed "seed" was 100 tons) many of the concepts and problems involved in such systems are similar regardless of size.

    Positional Chemistry
    Chemists have been remarkably successful at synthesizing a wide range of compounds with atomic precision. Their successes, however, are usually small in size (with the notable exception of various polymers). Thus, we know that a wide range of atomically precise structures with perhaps a few hundreds of atoms in them are quite feasible. Larger atomically precise structures with complex three-dimensional shapes can be viewed as a connected sequence of small atomically precise structures. While chemists have the ability to precisely sculpt small collections of atoms there is currently no ability to extend this capability in a general way to structures of larger size. An obvious structure of considerable scientific and economic interest is the computer. The ability to manufacture a computer from atomically precise logic elements of molecular size, and to position those logic elements into a three-dimensional volume with a highly precise and intricate interconnection pattern would have revolutionary consequences for the computer industry.
    A large atomically precise structure, however, can be viewed as simply a collection of small atomically precise objects which are then linked together. To build a truly broad range of large atomically precise objects requires the ability to create highly specific positionally controlled bonds. A variety of highly flexible synthetic techniques have been considered by Drexler [ 85]. We shall describe two such methods here to give the reader a feeling for the kind of methods that will eventually be feasible.

    We assume that positional control is available and that all reactions take place in a hard vacuum. The use of a hard vacuum allows highly reactive intermediate structures to be used, e.g., a variety of radicals with one or more dangling bonds. Because the intermediates are in a vacuum, and because their position is controlled (as opposed to solutions, where the position and orientation of a molecule are largely random), such radicals will not react with the wrong thing for the very simple reason that they will not come into contact with the wrong thing.

    It is difficult to maintain biological structures in a hard vacuum at room temperature because of water vapor and the vapor of other small compounds. By sufficiently lowering the temperature, however, it is possible to reduce the vapor pressure to effectively 0.

    Normal solution-based chemistry offers a smaller range of controlled synthetic possibilities. For example, highly reactive compounds in solution will promptly react with the solution. In addition, because positional control is not provided, compounds randomly collide with other compounds. Any reactive compound will collide randomly and react randomly with anything available (including itself). Solution-based chemistry requires extremely careful selection of compounds that are reactive enough to participate in the desired reaction, but sufficiently non-reactive that they do not accidentally participate in undesired side reactions. Synthesis under these conditions is somewhat like placing the parts of a radio into a box, shaking, and pulling out an assembled radio. The ability of chemists to synthesize what they want under these conditions is amazing.

    Much of current solution-based chemical synthesis is devoted to preventing unwanted reactions. With assembler-based synthesis, such prevention is a virtually free by-product of positional control.

    To illustrate positional synthesis in vacuum somewhat more concretely, let us suppose we wish to bond two compounds, A and B. As a first step, we could utilize positional control to selectively abstract a specific hydrogen atom from compound A. To do this, we would employ a radical that had two spatially distinct regions: one region would have a high affinity for hydrogen while the other region could be built into a larger "tip" structure that would be subject to positional control. A simple example would be the 1-propynyl radical, which consists of three co-linear carbon atoms and three hydrogen atoms bonded to the sp3 carbon at the "base" end. The radical carbon at the radical end is triply bonded to the middle carbon, which in turn is singly bonded to the base carbon. In a real abstraction tool, the base carbon would be bonded to other carbon atoms in a larger diamondoid structure which would provide positional control, and the tip might be further stabilized by a surrounding "collar" of unreactive atoms attached near the base that would limit lateral motions of the reactive tip.

    The affinity of this structure for hydrogen is quite high. Propyne (the same structure but with a hydrogen atom bonded to the "radical" carbon) has a hydrogen-carbon bond dissociation energy in the vicinity of 132 kilocalories per mole. As a consequence, a hydrogen atom will prefer being bonded to the 1-propynyl hydrogen abstraction tool in preference to being bonded to almost any other structure. By positioning the hydrogen abstraction tool over a specific hydrogen atom on compound A, we can perform a site-specific hydrogen abstraction reaction. This requires positional accuracy of roughly a bond length (to prevent abstraction of an adjacent hydrogen). Quantum chemical analysis of this reaction by Musgrave et al.[108] shows that the activation energy for this reaction is low, and that for the abstraction of hydrogen from the hydrogenated diamond (111) surface (modeled by isobutane) the barrier is very likely zero.

    Having once abstracted a specific hydrogen atom from compound A, we can repeat the process for compound B. We can now join compound A to compound B by positioning the two compounds so that the two dangling bonds are adjacent to each other, and allowing them to bond.

    This illustrates a reaction using a single radical. With positional control, we could also use two radicals simultaneously to achieve a specific objective. Suppose, for example, that two atoms A1 and A2 which are part of some larger molecule are bonded to each other. If we were to position the two radicals X1 and X2 adjacent to A1 and A2, respectively, then a bonding structure of much lower free energy would be one in which the A1-A2 bond was broken, and two new bonds A1-X1 and A2-X2 were formed. Because this reaction involves breaking one bond and making two bonds (i.e., the reaction product is not a radical and is chemically stable) the exact nature of the radicals is not critical. Breaking one bond to form two bonds is a favored reaction for a wide range of cases. Thus, the positional control of two radicals can be used to break any of a wide range of bonds.

    A range of other reactions involving a variety of reactive intermediate compounds (carbenes are among the more interesting ones) are proposed in [85], along with the results of semi-empirical and ab initio quantum calculations and the available experimental evidence.

    Another general principle that can be employed with positional synthesis is the controlled use of force. Activation energy, normally provided by thermal energy in conventional chemistry, can also be provided by mechanical means. Pressures of 1.7 megabars have been achieved experimentally in macroscopic systems[30]. At the molecular level such pressure corresponds to forces that are a large fraction of the force required to break a chemical bond. A molecular vise made of hard diamond-like material with a cavity designed with the same precision as the reactive site of an enzyme can provide activation energy by the extremely precise application of force, thus causing a highly specific reaction between two compounds.
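    To get a feel for why pressures of this magnitude are chemically significant, the following rough Python sketch converts a 1.7 megabar pressure into the force exerted over a roughly atomic cross-section and compares it with the nanonewton-scale force typically needed to rupture a covalent bond. The 0.2 nm cross-section and the few-nanonewton rupture force are illustrative assumptions introduced here, not figures from the text.

        # Rough estimate: force on a single atomic cross-section at 1.7 megabars.
        # The cross-section width and bond rupture force below are assumed values.
        pressure_pa = 1.7e6 * 1e5        # 1.7 megabars; 1 bar = 1e5 pascals
        atom_width_m = 0.2e-9            # assumed atomic cross-section width (~0.2 nm)
        area_m2 = atom_width_m ** 2      # area over which the pressure acts
        force_n = pressure_pa * area_m2  # force = pressure * area
        bond_rupture_n = 5e-9            # assumed covalent bond rupture force (a few nN)
        print("force per atomic cross-section: %.1f nN" % (force_n * 1e9))
        print("ratio to assumed bond rupture force: %.1f" % (force_n / bond_rupture_n))

    With these assumptions the force comes out at several nanonewtons, i.e., on the same order as the force needed to break a covalent bond, which is the point being made above.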

    Reactions involving radicals have low activation energies and so require little applied force, allowing a wider range of reactions to be driven by simpler devices (e.g., devices able to generate only small forces). Further analysis is provided in [85].

    Feynman said: "The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed -- a development which I think cannot be avoided." Drexler has provided the substantive analysis required before this objective can be turned into a reality. We are nearing an era when we will be able to build virtually any structure that is specified in atomic detail and which is consistent with the laws of chemistry and physics. This has substantial implications for future medical technologies and capabilities.

    Repair Devices
    A repair device is an assembler which is specialized for the repair of tissue in general, and frozen tissue in particular. We assume that a repair device has a mass of between 10^9 and 10^10 amu (i.e., we assume that a repair device might be as much as a factor of 10 more complicated than a simple assembler). This provides ample margin for increasing the capabilities of the repair device if this should prove necessary.
    A single repair device of the kind described will not, by itself, have sufficient memory to store the programs required to perform all the repairs. However, if it is connected to a network (in the same way that current computers can be connected into a local area network) then a single large "file server" can provide the needed information for all the repair devices on the network. The file server can be dedicated to storing information: all the software and data that the repair devices will need. Almost the entire mass of the file server can be dedicated to storage; it can service many repair devices, and it can be many times the size of one device without greatly increasing overall system size. Combining these advantages implies that the file server will have ample storage to hold whatever programs might be required during the course of repair. In a similar fashion, if further computational resources are required they can be provided by "large" compute servers located on the network.

    Cost
    One consequence of the existence of assemblers is that they are cheap. Because an assembler can be programmed to build almost any structure, it can in particular be programmed to build another assembler. Thus, self-reproducing assemblers should be feasible, and in consequence the manufacturing costs of assemblers would be primarily the cost of the raw materials and energy required in their construction. Eventually (after amortization of possibly quite high development costs), the price of assemblers (and of the objects they build) should be no higher than the price of other complex structures made by self-replicating systems. Potatoes -- which have a staggering design complexity involving tens of thousands of different genes and different proteins directed by many megabits of genetic information -- cost well under a dollar per pound.
    DESCRIBING THE BRAIN AT THE MOLECULAR LEVEL
    In principle we need only repair the frozen brain, for the brain is the most critical and important structure in the body. Faithfully repairing the liver (or any other secondary tissue) molecule by molecule (or perhaps atom by atom) appears to offer no benefit over simpler techniques -- such as replacement. The calculations and discussions that follow are therefore based on the size and composition of the brain. It should be clear that if repair of the brain is feasible, then the methods employed could (if we wished) be extended in the obvious way to the rest of the body.
    The brain, like all the familiar matter in the world around us, is made of atoms. It is the spatial arrangement of these atoms that distinguishes an arm from a leg, the head from the heart, and sickness from health. This view of the brain is the framework for our problem, and it is within this framework that we must work. Our problem, broadly stated, is that the atoms in a frozen brain are in the wrong places. We must put them back where they belong (with perhaps some minor additions and removals, as well as just rearrangements) if we expect to restore the natural functions of this most wonderful organ.

    In principle, the most that we could usefully know about the frozen brain would be the coordinates of each and every atom in it (though see note 5). This knowledge would put us in the best possible position to determine where each and every atom should go. This knowledge, combined with a technology that allowed us to rearrange atomic structure in virtually any fashion consistent with the laws of chemistry and physics, would clearly let us restore the frozen structure to a fully functional and healthy state.

    In short, we must answer three questions:

    Where are the atoms?
    Where should they go?
    How do we move them from where they are to where they should be?
    Regardless of the specific technical details involved, any method of restoring a person in suspension must answer these three questions, if only implicitly. Current efforts to freeze and then thaw tissue (e.g., experimental work aimed at freezing and then reviving sperm, kidneys, etc.) answer these three questions indirectly and implicitly. Ultimately, technical advances should allow us to answer these questions in a direct and explicit fashion.
    Rather than confront these questions directly, we shall first consider a simpler problem: how would we go about describing the position of every atom if somehow this information were known to us? The answer to this question will let us better understand the harder questions.

    Other work has considered the information required to describe a human being[127, 128].

    How Many Bits to Describe One Atom
    Each atom has a location in three-space that we can represent with three coordinates: X, Y, and Z. Atoms are usually a few tenths of a nanometer apart. If we could record the position of each atom to within 0.01 nanometers, we would know its position accurately enough to know what chemicals it was a part of, what bonds it had formed, and so on. The brain is roughly 0.1 meters across, so 0.01 nanometers is about 1 part in 10^10. That is, we would have to know the position of the atom in each coordinate to within one part in ten billion. A number of this size can be represented with about 33 bits. There are three coordinates, X, Y, and Z, each of which requires 33 bits to represent, so the position of an atom can be represented in 99 bits. An additional few bits are needed to store the type of the atom (whether hydrogen, oxygen, carbon, etc.), bringing the total to slightly over 100 bits [note 5].
    Thus, if we could store 100 bits of information for every atom in the brain, we could fully describe its structure in as exacting and precise a manner as we could possibly need. (Dancoff and Quastler[128], using a somewhat better encoding scheme, say that 24.5 bits per atom should suffice). A memory device of this capacity should be quite literally possible. To quote Feynman[4]: "Suppose, to be conservative, that a bit of information is going to require a little cube of atoms 5 x 5 x 5 -- that is 125 atoms." This is indeed conservative. Single-stranded DNA already stores a single bit in about 16 atoms (excluding the water that it's in). It seems likely we can reduce this to only a few atoms[1]. The work at IBM[49] suggests a rather obvious way in which the presence or absence of a single atom could be used to encode a single bit of information (although some sort of structure for the atom to rest upon and some method of sensing the presence or absence of the atom will still be required, so we would actually need more than one atom per bit in this case). If we conservatively assume that the laws of chemistry inherently require 10 atoms to store a single bit of information, we still find that the 100 bits required to describe a single atom in the brain can be represented by about 1,000 atoms.

    Put another way, the location of every atom in a frozen structure is (in a sense) already encoded in that structure in an analog format. If we convert from this analog encoding to a digital encoding, we will increase the space required to store the same amount of information. That is, an atom in three-space encodes its own position in the analog value of its three spatial coordinates. If we convert this spatial information from its analog format to a digital format, we inflate the number of atoms we need by perhaps as much as 1,000. If we digitally encoded the location of every atom in the brain, we would need 1,000 times as many atoms to hold this encoded data as there are atoms in the brain. This means we would require roughly 1,000 times the volume. The brain is somewhat over one cubic decimeter, so it would require somewhat over one cubic meter of material to encode the location of each and every atom in the brain in a digital format suitable for examination and modification by a computer.
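    The arithmetic above can be restated compactly. The short Python sketch below simply reproduces the figures used in the text (a brain roughly 0.1 meters across, 0.01 nanometer resolution, roughly 100 bits per atom, and a conservative 10 storage atoms per bit); it is a restatement of the estimate rather than an independent calculation.

        import math

        brain_width_m = 0.1                    # approximate width of the brain
        resolution_m = 0.01e-9                 # desired positional resolution (0.01 nm)
        positions_per_axis = brain_width_m / resolution_m    # 1e10 distinct positions
        bits_per_coordinate = math.log2(positions_per_axis)  # about 33 bits
        bits_per_atom = 3 * round(bits_per_coordinate) + 3   # X, Y, Z plus a few type bits
        atoms_per_stored_bit = 10              # conservative storage density from the text
        storage_atoms = bits_per_atom * atoms_per_stored_bit
        print("bits per coordinate: %.1f" % bits_per_coordinate)       # ~33.2
        print("bits per atom: %d" % bits_per_atom)                     # ~102
        print("storage atoms per described atom: %d" % storage_atoms)  # ~1,000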

    While this much memory is remarkable by today's standards, its construction clearly does not violate any laws of physics or chemistry. That is, it should literally be possible to store a digital description of each and every atom in the brain in a memory device that we will eventually be able to build.

    How Many Bits to Describe a Molecule
    While such a feat is remarkable, it is also much more than we need. Chemists usually think of atoms in groups -- called molecules. For example, water is a molecule made of three atoms: an oxygen and two hydrogens. If we describe each atom separately, we will require 100 bits per atom, or 300 bits total. If, however, we give the position of the oxygen atom and give the orientation of the molecule, we need 99 bits for the location of the oxygen atom, 20 bits to describe the type of molecule ("water," in this case), and perhaps another 30 bits to give the orientation of the water molecule (10 bits for each of the three rotational axes). This means we can store the description of a water molecule in only about 150 bits, instead of the 300 bits required to describe the three atoms separately. (The 20 bits used to describe the type of the molecule can distinguish up to about 1,000,000 different kinds of molecules -- many more than are present in the brain).
    As the molecule we are describing gets larger and larger, the savings in storage gets bigger and bigger. A whole protein molecule will still require only 150 bits to describe, even though it is made of thousands of atoms. The canonical position of every atom in the molecule is specified once the type of the molecule (which occupies a mere 20 bits) is given. A large molecule might adopt many configurations, so it might at first seem that we'd require many more bits to describe it. However, biological macromolecules typically assume one favored configuration rather than a random configuration, and it is this favored configuration that we will describe [note 6].

    We can do even better: the molecules in the brain are packed in next to each other. Having once described the position of one, we can describe the position of the next molecule as being such-and-such a distance from the first. If we assume that two adjacent molecules are within 10 nanometers of each other (a reasonable assumption) then we need only store 10 bits of "delta X," 10 bits of "delta Y," and 10 bits of "delta Z" rather than 33 bits of X, 33 bits of Y, and 33 bits of Z. This means our molecule can be described in only 10+10+10+20+30 or 80 bits.
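    A brief sketch, again in Python, compares the three bit budgets discussed above: describing the three atoms of a water molecule separately, describing the molecule with absolute coordinates, and describing it with delta coordinates relative to a neighboring molecule. The numbers are taken directly from the text; treating water as the example molecule is the only assumption.

        bits_per_atom = 100        # ~99 bits of position plus a few bits of atom type
        atoms_in_water = 3
        per_atom_total = bits_per_atom * atoms_in_water        # 300 bits

        position_bits = 99         # absolute X, Y, Z at 33 bits each
        type_bits = 20             # distinguishes up to ~1,000,000 molecule types
        orientation_bits = 30      # 10 bits per rotational axis
        absolute_molecule = position_bits + type_bits + orientation_bits     # ~150 bits

        delta_position_bits = 30   # 10-bit deltas in X, Y, Z (neighbor within 10 nm)
        delta_molecule = delta_position_bits + type_bits + orientation_bits  # 80 bits

        print(per_atom_total, absolute_molecule, delta_molecule)  # 300 149 80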

    We can compress this further by using various other clever stratagems (50 bits or less is quite achievable), but the essential point should be clear. We are interested in molecules, and describing a molecule takes fewer bits than describing an atom.

    Do We Really Need to Describe Each Molecule?
    A further point will be obvious to any biologist. Describing the exact position and orientation of a hemoglobin molecule within a red blood cell is completely unnecessary. Each hemoglobin molecule bounces around within the red blood cell in a random fashion, and it really doesn't matter exactly where it is, nor exactly which way it's pointing. All we need do is say, "It's in that red blood cell!" So, too, for any other molecule that is floating at random in a "cellular compartment:" we need only say which compartment it's in. Many other molecules, even though they do not diffuse freely within a cellular compartment, are still able to diffuse fairly freely over a significant range. The description of their position can be appropriately compressed.
    While this reduces our storage requirements quite a bit, we could go much further. Instead of describing molecules, we could describe entire sub-cellular organelles. It seems excessive to describe a mitochondrion by describing each and every molecule in it. It would be sufficient simply to note the location and perhaps the size of the mitochondrion, for all mitochondria perform the same function: they produce energy for the cell. While there are indeed minor differences from mitochondrion to mitochondrion, these differences don't matter much and could reasonably be neglected.

    We could go still further, and describe an entire cell with only a general description of the function it performs: this nerve cell has synapses of a certain type with that other cell, it has a certain shape, and so on. We might even describe groups of cells in terms of their function: this group of cells in the retina performs a "center surround" computation, while that group of cells performs edge enhancement. Cherniak[115] said: "On the usual assumption that the synapse is the necessary substrate of memory, supposing very roughly that (given anatomical and physiological "noise") each synapse encodes about one binary bit of information, and a thousand synapses per neuron are available for this task: 10^10 cortical neurons x 10^3 synapses = 10^13 bits of arbitrary information (1.25 terabytes) that could be stored in the cerebral cortex."

    How Many Bits Do We Really Need?
    This kind of logic can be continued, but where does it stop? What is the most compact description which captures all the essential information? While many minor details of neural structure are irrelevant, our memories clearly matter. Any method of describing the human brain which resulted in loss of long term memory would rather clearly have gone too far. When we examine this quantitatively, we find that preserving the information in our long term memory might require as little as 10^9 bits (somewhat over 100 megabytes)[37]. We can say rather confidently that it will take at least this much information to adequately describe an individual brain. The gap between this lower bound and the molecule-by-molecule upper bound is rather large, and it is not immediately obvious where in this range the true answer falls. We shall not attempt to answer this question, but will instead (conservatively) simply adopt the upper bound.
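    To put the two bounds side by side, the following Python fragment restates Cherniak's cortical-synapse estimate (10^10 neurons times 10^3 synapses at about one bit each) and the 10^9-bit long term memory estimate cited above, converting both to bytes. Nothing here goes beyond the figures quoted in the text.

        cherniak_bits = 1e10 * 1e3 * 1   # 10^10 cortical neurons x 10^3 synapses x ~1 bit
        long_term_memory_bits = 1e9      # lower-bound estimate cited in the text [37]
        print("Cherniak estimate: %.2f terabytes" % (cherniak_bits / 8 / 1e12))         # 1.25
        print("long term memory:  %.0f megabytes" % (long_term_memory_bits / 8 / 1e6))  # 125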
    CRITERIA OF DEATH
    death \'deth\ n [ME deeth, fr. OE death; akin to ON dauthi death, deyja to die -- more at DIE] 1: a permanent cessation of all vital functions : the end of life
    Webster's New Collegiate Dictionary
    Determining when "a permanent cessation of all vital functions" has occurred is not easy. Historically, premature declarations of death and subsequent burial alive have been a major problem. In the first century, Celsus wrote "... Democritus, a man of well merited celebrity, has asserted that there are in reality, no characteristics of death sufficiently certain for physicians to rely upon."[87, page 166].


    Montgomery, reporting on the evacuation of the Fort Randall Cemetery, states that nearly two percent of those exhumed were buried alive[87].

    Many people in the nineteenth century, alarmed by the prevalence of premature burial, requested, as part of the last offices, that wounds or mutilations be made to assure that they would not awaken ... embalming received a considerable impetus from the fear of premature burial.
    New Criteria
    Current criteria of "death" are sufficient to insure that spontaneous recovery in the mortuary or later is a rare occurrence. When examined closely, however, such criteria are simply a codified summary of symptoms that have proven resistant to treatment by available techniques. Historically, they derive from the fear that the patient will spontaneously recover in the morgue or crypt. There is no underlying theoretical structure to support them, only a continued accumulation of ad hoc procedures supported by empirical evidence. To quote Robert Veatch[15]: "We are left with rather unsatisfying results. Most of the data do not quite show that persons meeting a given set of criteria have, in fact, irreversibly lost brain function. They show that patients lose heart function soon, or that they do not "recover." Autopsy data are probably the most convincing. Even more convincing, though, is that over the years not one patient who has met the various criteria and then been maintained, for whatever reason, has been documented as having recovered brain function. Although this is not an elegant argument, it is a reassuring one." In short, current criteria are adequate to determine when current medical technology will fail to revive the patient, but are silent on the capabilities of future medical technology.
    Each new medical advance forces a reexamination and possible change of the existing ad hoc criteria. The criteria used by the clinician today to determine "death" are dramatically different from the criteria used 100 years ago, and have changed more subtly but no less surely in the last decade [note 7]. It seems almost inevitable that the criteria used 100 years from now will differ dramatically from the criteria commonly employed today.

    These ever shifting criteria for "death" raise an obvious question: is there a definition which will not change with advances in technology? A definition which does have a theoretical underpinning and is not dependent on the technology of the day?

    The answer arises from the confluence and synthesis of many lines of work, ranging from information theory, neuroscience, physics, biochemistry and computer science to the philosophy of the mind and the evolving criteria historically used to define death.

    When someone has suffered a loss of memory or mental function, we often say they "aren't themselves." As the loss becomes more serious and all higher mental functions are lost, we begin to use terms like "persistent vegetative state." While we will often refrain from declaring such an individual "dead," this hesitation does not usually arise because we view their present state as "alive" but because there is still hope of recovery to a healthy state with memory and personality intact. From a physical point of view we believe there is a chance that their memories and personalities are still present within the physical structure of the brain, even though their behavior does not provide direct evidence for this. If we could reliably determine that the physical structures encoding memory and personality had in fact been destroyed, then we would abandon hope and declare the person dead.

    The Information Theoretic Criterion of Death
    Clearly, if we knew the coordinates of each and every atom in a person's brain then we would (at least in principle) be in a position to determine with absolute finality whether their memories and personality had been destroyed in the information theoretic sense, or whether their memories and personality were preserved but could not, for some reason, be expressed. If such final destruction had taken place, then there would be little reason for hope. If such destruction had not taken place, then it would in principle be possible for a sufficiently advanced technology to restore the person to a fully functional and healthy state with their memories and personality intact.
    Considerations like this lead to the information theoretic criterion of death [note 8]. A person is dead according to the information theoretic criterion if their memories, personality, hopes, dreams, etc. have been destroyed in the information theoretic sense. That is, if the structures in the brain that encode memory and personality have been so disrupted that it is no longer possible in principle to restore them to an appropriate functional state, then the person is dead. If the structures that encode memory and personality are sufficiently intact that inference of the memory and personality is feasible in principle, and therefore restoration to an appropriate functional state is likewise feasible in principle, then the person is not dead.

    A simple example from computer technology is in order. If a computer is fully functional then its memory and "personality" are completely intact. If it fell out of a seventh-floor window to the concrete below, it would rapidly cease to function. However, its memory and "personality" would still be present in the pattern of magnetizations on the disk. With sufficient effort, we could completely repair the computer with its memory and "personality" intact [note 9].

    In a similar fashion, as long as the structures that encode the memory and personality of a human being have not been irretrievably "erased" (to use computer jargon) then restoration to a fully functional state with memory and personality intact is in principle feasible. Any technology independent definition of "death" should conclude that such a person is not dead, for a sufficiently advanced technology could restore the person to a healthy state.

    On the flip side of the coin, if the structures encoding memory and personality have suffered sufficient damage to obliterate them beyond recognition, then death by the information theoretic criterion has occurred. An effective method of insuring such destruction is to burn the structure and stir the ashes. This is commonly employed to insure the destruction of classified documents. Under the name of "cremation" it is also employed on human beings and is sufficient to insure that death by the information theoretic criterion takes place.

    More Exotic Approaches
    It is not obvious that the preservation of life requires the physical repair or even the preservation of the brain[11,12]. Although the brain is made of neurons, synapses, protoplasm, DNA and the like, most modern philosophers of consciousness view these details as no more significant than hair color or clothing style. Three examples follow.
    The ethicist and prolific author Robert Veatch said, in Death, Dying, and the Biological Revolution, "An 'artificial brain' is not possible at present, but a walking, talking, thinking individual who had one would certainly be considered living."[15, page 23].

    The noted philosopher of consciousness Paul Churchland said, in Matter and Consciousness, "If machines do come to simulate all of our internal cognitive activities, to the last computational detail, to deny them the status of genuine persons would be nothing but a new form of racism."[12, page 120].

    Hans Moravec, renowned roboticist and Director of the Mobile Robot Lab at Carnegie Mellon said, "Body-identity assumes that a person is defined by the stuff of which a human body is made. Only by maintaining continuity of body stuff can we preserve an individual person. Pattern-identity, conversely, defines the essence of a person, say myself, as the pattern and the process going on in my head and body, not the machinery supporting that process. If the process is preserved, I am preserved. The rest is mere jelly."[50, page 117].

    We'll Use the Conservative Approach
    Restoration of the existing structure will be more difficult than building an artificial brain (particularly if the restoration is down to the molecular level). Despite this, we will examine the technically more exacting problem of restoration because it is more generally acceptable. Most people accept the idea that restoring the brain to a healthy state in a healthy body is a desirable objective. A range of increasingly less restrictive objectives (as described above) is possible. To the extent that more relaxed criteria are acceptable, the technical problems are much less demanding. By deliberately adopting such a conservative position, we lay ourselves open to the valid criticism that the methods described here are unlikely to prove necessary. Simpler techniques that relax to some degree the philosophical constraints we have imposed might well be adopted in practice. In this paper we will eschew the more exotic possibilities (without, however, adopting any position on their desirability).
    Another issue is not so much philosophical as emotional. Major surgery is not a pretty sight. There are few people who can watch a surgeon cut through living tissue with equanimity. In a heart transplant, for example, surgeons cut open the chest of a dying patient to rip out their dying heart, cut open a fresh cadaver to seize its still-beating heart, and then stitch the cadaver's heart into the dying patient's chest. Despite this (which would have been condemned in the Middle Ages as the blackest of black magic), we cheer the patient's return to health and are thankful that we live in an era when medicine can save lives that were formerly lost.

    The mechanics of examining and repairing the human brain, possibly down to the level of individual molecules, might not be the best topic for after-dinner conversation. While the details will vary depending on the specific method used, this process, too, could be described in lurid language that fails to capture the central issue: the restoration to full health of a human being.

    A final issue that should be addressed is that of changes introduced by the process of restoration itself. The exact nature and extent of these changes will vary with the specific method. Current surgical techniques, for example, result in substantial tissue changes. Scarring, permanent implants, prosthetics, etc. are among the more benign outcomes. In general, methods based on a sophisticated ability to rearrange atomic structure should result in minimal undesired alterations to the tissue.

    "Minimal changes" does not mean "no changes." A modest amount of change in molecular structure, whatever technique is used, is both unavoidable and insignificant. The molecular structure of the human brain is in a constant state of change during life -- molecules are synthesized, utilized, and catabolized in a continuous cycle. Cells continuously undergo slight changes in morphology. Cells also make small errors in building their own parts. For example, ribosomes make errors when they build proteins. About one amino acid in every 10,000 added to a growing polypeptide chain by a ribosome is incorrect[14, page 383]. Changes and errors of a similar magnitude introduced by the process of restoration can reasonably be neglected.

    Does the Information Theoretic Criterion Matter?
    It is normally a matter of small concern whether a physician of 2190 would or would not concur with the diagnosis of "death" applied by a contemporary physician to a specific patient in 1990. A physician of today who found himself in 1790 would be able to do little for a patient whose heart had stopped, even though he knew intellectually that an intensive care unit would likely be able to save the patient's life. Intensive care units were simply not available in 1790, no matter what the physician knew was possible. So, too, with the physician of today when informed that a physician 200 years hence could save the life of the patient that he has just pronounced "dead." There is nothing he can do, for he can only apply the technologies of today -- except in the case of cryonic suspension.
    In this one instance, we must ask not whether the person is dead by today's (clearly technology dependent) criteria, but whether the person is dead by all future criteria. In short, we must ask whether death by the information theoretic criterion has taken place. If it has not, then cryonic suspension is a reasonable (and indeed life saving) course of action.

    Experimental Proof or Disproof of Cryonics
    It is often said that "cryonics is freezing the dead." It is more accurate to say that "cryonics is freezing the terminally ill. Whether or not they are dead remains to be seen."
    The scientifically correct experiment to verify that cryonics works (or demonstrate that it does not work) is quite easy to describe:

    Select N experimental subjects.
    Freeze them.
    Wait 100 years.
    See if the technology available 100 years from now can (or cannot) cure them.
    The drawback of this experimental protocol is obvious: we can't get the results for 100 years. This problem is fundamental. The use of future technology is an inherent part of cryonics. Criticisms of cryonics based on the observation that freezing and thawing mammals with present technology does not work are irrelevant, for that is not what is being proposed.
    This kind of problem is not entirely unique to cryonics. A new AIDS treatment might undergo clinical trials lasting a few years. The ethical dilemma posed by the terminally ill AIDS patient who might be assisted by the experimental treatment is well known. If the AIDS patient is given the treatment prior to completion of the clinical trials, it is possible that his situation could be made significantly worse. On the other hand, to deny a potentially life saving treatment to someone who will soon die anyway is ethically untenable.

    In the case of cryonics this is not an interim dilemma pending the (near term) outcome of clinical trials. It is a dilemma inherent in the nature of the proposal. Clinical trials, the bulwark of modern medical practice, are useless in resolving the effectiveness of cryonics in a timely fashion.

    Further, cryonics (virtually by definition) is a procedure used only when the patient has exhausted all other available options. In current practice the patient is suspended after legal death: the fear that the treatment might prove worse than the disease is absent. Of course, suspension of the terminally ill patient somewhat before legal death has significant advantages. A patient suffering from a brain tumor might view suspension following the obliteration of his brain as significantly less desirable than suspension prior to such obliteration, even if the suspension occurred at a point in time when the patient was legally "alive."

    In such a case, it is inappropriate to disregard or override the patient's own wishes. To quote the American College of Physicians Ethics Manual, "Each patient is a free agent entitled to full explanation and full decision-making authority with regard to his medical care. John Stuart Mill expressed it as: `Over himself, his own body and mind, the individual is sovereign.' The legal counterpart of patient autonomy is self-determination. Both principles deny legitimacy to paternalism by stating unequivocally that, in the last analysis, the patient determines what is right for him." "If the [terminally ill] patient is a mentally competent adult, he has the legal right to accept or refuse any form of treatment, and his wishes must be recognized and honored by his physician."[92]

    If clinical trials cannot provide us with an answer, are there any other methods of evaluating the proposal? Can we do more than say that (a) cryonic suspension can do no harm (in keeping with the Hippocratic oath), and (b) it has some difficult-to-define chance of doing good?

    Failure Criteria
    Trying to prove something false is often the simplest method of clarifying exactly what is required to make it true. A consideration of the information theoretic criterion of death makes it clear that, from a technical point of view (ignoring various non-technical issues) there are two and only two ways in which cryonics can fail [note 10].
    Cryonics will fail if:

    Information theoretic death occurs prior to reaching liquid nitrogen temperature [note 11].
    Repair technology that is feasible in principle is never developed and applied in practice, even after the passage of centuries.
    The first failure criterion can only be considered against the background of current understanding of freezing damage, ischemic injury and mechanisms of memory and synaptic plasticity. Whether or not memory and personality are destroyed in the information theoretic sense by freezing and the ischemic injury that might precede it can only be answered by considering both the physical nature of memory and the nature of the damage to which the brain is subjected before reaching the stability provided by storage in liquid nitrogen. The following sections will therefore provide brief reviews of these subjects.
    The second failure criterion is considered in the later sections on technical issues, which discuss in more detail how future technologies might be applied to the repair of frozen tissue.

    As the reader will readily appreciate, the following reviews will consider only the most salient points that are of the greatest importance in determining overall feasibility. They are necessarily too short to consider the topics in anything like full detail, but should provide sufficient information to give the reader an overview of the relevant issues. References to further reading are provided throughout [note 12].

    FREEZING DAMAGE
    There is an extensive literature on the damage caused by both cooling and freezing to liquid nitrogen temperatures. Some reviews can be found in [5, 6, 68, 70]. Scientific American published a recent and quite accessible article[57]. In this section, we briefly review the nature of such damage and consider whether it is likely to cause information theoretic death. Damage, per se, is not meaningful except to the extent that it obscures or obliterates the nature of the original structure.
    While cooling tissue to around 0 degrees C creates a number of problems, the ability to cool mammals to this temperature or even slightly below (with no ice formation) using current methods, followed by complete recovery[61, 62], shows that this problem can be controlled and is unlikely to cause information theoretic death. We will, therefore, ignore the problems caused by such cooling. This problem is discussed in [5] and elsewhere.

    Further, some "freezing" damage in fact occurs upon re-warming. Current work supports this idea because the precise method used to re-warm tissue can strongly affect the success or failure of present experiments even when freezing conditions are identical[5, 6]. If we presume that future repair methods avoid the step of re-warming the tissue prior to analysis and instead analyze the tissue directly in the frozen state then this source of damage will be eliminated. Several current methods can be used to distinguish between damage that occurs during freezing and damage that occurs while thawing. At present, it seems likely that some damage occurs during each process. While significant damage does occur during slow freezing, it does not induce structural changes which obliterate the cell.

    Present Day Successes
    Many types of tissue including human embryos, sperm, skin, bone, red and white blood cells, bone marrow, and others [5, 6, 59] have been frozen in liquid nitrogen, thawed, and have recovered. This is not true of whole mammals [note 13]. The brain seems more resistant than most organs to freezing damage[58, 79]. Recovery of overall brain function following freezing to liquid nitrogen temperature has not been demonstrated, although recovery of unit level electrical activity following freezing to -60 degrees C has been demonstrated[79].
    Fractures
    Perhaps the most dramatic injury caused by freezing is macroscopic fractures[56]. Tissue becomes extremely brittle at or below the "glass transition temperature" at about 140 K. Continued cooling to 77 K (the temperature of liquid nitrogen) creates tensile stress in the glassy material. This is exacerbated by the skull, which inhibits shrinkage of the cranial contents. This stress causes readily evident macroscopic fractures in the tissue.
    Fractures that occur below the glass transition temperature result in very little information loss. While dramatic, this damage is unlikely to cause or contribute to information theoretic death.

    Ice
    The damage most commonly associated with freezing is that caused by ice. Contrary to common belief, freezing does not cause cells to burst open like water pipes on a cold winter's day. Quite the contrary, ice formation takes place outside the cells in the extracellular region. This is largely due to the presence of extracellular nucleating agents on which ice can form, and the comparative absence of intracellular nucleating agents. Consequently the intracellular liquid supercools.
    Extracellular ice formation causes an increase in the concentration of the extracellular solute, i.e., the chemicals in the extracellular liquid are increased in concentration by the decrease in available water. The immediate effect of this increased extracellular concentration is to draw water out of the cells by osmosis. Thus, freezing dehydrates cells.

    Damage can be caused by the extracellular ice, by the increased concentration of solute, or by the reduced temperature itself. All three mechanisms can play a role under appropriate conditions.

    The damage caused by extracellular ice formation depends largely on the fraction of the initial liquid volume that is converted to ice[6, 57]. (The initial liquid volume might include a significant amount of cryoprotectant as well as water). When the fraction of the liquid volume converted to ice is small, damage is often reversible even by current techniques. In many cases, conversion of significantly more than 40% of the liquid volume to ice is damaging[70, page 134; 71]. The brain is more resistant to such injury: conversion of up to 60% of the liquid volume in the brain to ice is associated with recovery of neuronal function[58, 62, 66, 82]. Storey and Storey said "If the cell volume falls below a critical minimum, then the bilayer of phospholipids in the membrane becomes so greatly compressed that its structure breaks down. Membrane transport functions cannot be maintained, and breaks in the membrane spill cell contents and provide a gate for ice to propagate into the cell. Most freeze-tolerant animals reach the critical minimum cell volume when about 65 percent of total body water is sequestered as ice."[57].

    Glycerol
    Appropriate treatment with cryoprotectants (in particular glycerol) prior to freezing will keep 40% or more of the liquid volume from being converted to ice even at liquid nitrogen temperatures.

    Fahy has said "All of the postulated problems in cryobiology -- cell packing [omitted reference], channel size constraints [omitted reference], optimal cooling rate differences for mixed cell populations [omitted reference], osmotically mediated injury[omitted references], and the rest -- can be solved in principle by the selection of a sufficiently high concentration of cryoprotectant prior to freezing. In the extreme case, all ice formation could be suppressed completely by using a concentration of agent sufficient to ensure vitrification of the biological system in question [omitted reference]"[73]. Unfortunately, a concentration of cryoprotectant sufficiently high to protect the system from all freezing injury would itself be injurious[73]. It should be possible to trade the mechanical injury caused by ice formation for the biochemical injury caused by the cryoprotectant, which is probably advantageous. Current suspension protocols at Alcor call for the introduction of greater than 6 molar glycerol. Both venous and arterial glycerol concentrations have exceeded 6 molar in several recent suspensions. If this concentration of cryoprotectant is also reaching the tissues, it should keep over 60% of the initial liquid volume from being converted to ice at liquid nitrogen temperatures [note 14].

    Concentration Effects
    "Dehydration and concentration of solutes past some critical level may disrupt metabolism and denature cell proteins and macromolecular complexes"[70, page 125]. The functional losses caused by this mechanism seem unlikely to result in significant information loss. One qualification to this conclusion is that cell membranes appear to be weakened by increased solute concentration[5, page 92]. To the extent that structural elements are weakened by increased solute concentrations the vulnerability of the cell to structural damage is increased.
    Denaturing
    Finally, denaturing of proteins might occur at low temperature. In this process the tertiary and perhaps even secondary structure of the protein might be disrupted leading to significant loss of protein function. However, the primary structure of the protein (the linear sequence of amino acids) is still intact and so inference of the correct functional state of the protein is in principle trivial. Further, the extent of protein denaturation caused by freezing must necessarily be limited given the relatively wide range of tissues that have been successfully frozen and thawed.
    Intracellular Freezing
    Intracellular freezing is another damaging event which might occur[6]. If cooling is slow enough to allow the removal of most of the water from the cell's interior by osmosis, then the high concentration of solute will prevent the small amount of remaining water from freezing. If cooling is too rapid, there will be insufficient time for the water within the cell to escape before it freezes. In the latter case, the intracellular contents are supercooled and freezing is abrupt (the cell "flashes"). While this correlates with a failure to recover function[5, 6, 68, 70] it is difficult to believe that rapid freezing results in significant loss of information.
    Intracellular freezing is largely irrelevant to cryonic suspensions because of the slow freezing rates dictated by the large mass of tissue being frozen. Such freezing rates are too slow for intracellular freezing to occur except when membrane rupture allows extracellular ice to penetrate the intracellular region. If the membrane does fail, one would expect the interior of the cell to "flash."

    Loss of Information versus Loss of Function
    Spontaneous recovery of function following freezing to liquid nitrogen temperatures using the best currently available techniques appears unlikely for mammalian organs, including the brain. Despite this, the level of structural preservation can be quite good. The complexity of the systems that have been successfully frozen and rewarmed is remarkable, and supports the claim that good structural preservation is often achieved. The mechanisms of damage that have been postulated in the literature are sufficiently subtle that information loss is likely to be small; that is, death by the information theoretic criterion is unlikely to have occurred. Further research aimed specifically at addressing this issue is needed.
    ISCHEMIC INJURY AND PRESUSPENSION INJURY
    Although modern cryonic suspensions can involve minimal delay [note 15] and future suspensions might eliminate delay entirely [note 16], delay is sometimes unavoidable [note 17]. The most significant type of damage that such delay causes is ischemic injury.
    Broadly speaking, the structure of the human brain remains intact for several hours or more following the cessation of blood flow, or ischemia. The tissue changes that occur subsequent to ischemia have been well studied. There have also been studies of the "postmortem" changes that occur in tissue. Perhaps the most interesting of these studies was conducted by Kalimo et al.[65].

    "Postmortem" Changes in the Human Brain
    Many researchers have examined postmortem changes in neuronal tissues. In "A Comparison of Methodologies for the Study of Functional Transmitter Neurochemistry in Human Brain" Dodd et al.[124] said
    Effects of postmortem delay. Some brain functions are damaged irreversibly within minutes of the cessation of blood flow to the tissue. This led to the widespread belief that it would be impossible to isolate metabolically active and responsive preparations very long after death and use them to study neurotransmission. However, this is a misconception; many groups have successfully obtained functional preparations from normal (Table 1) [not present in this article] and pathological (Table 2) [not present in this article] human brain tissue from autopsies carried out up to 24 h or more postmortem. This is perhaps less surprising when the stability of enzymes, receptors, and nucleic acids is taken into consideration (see Hardy and Dodd, 1983 [reference 123 in this article]). With very few exceptions, the brain retains the metabolic machinery to reconstitute tissue metabolite and neurotransmitter pools. It also appears that sufficient structural integrity is retained to allow the various tissue compartments to remain relatively intact and distinct.
    Experiments with both animal and human brain have shown that viable preparations can be isolated routinely up to at least 24 h postmortem, a time scale within which a sufficient number of autopsies is carried out to allow extensive neurochemical studies. When the human subject has died suddenly (see below) [not in this article], such preparations exhibit the same range of characteristics as preparations made from fresh animal tissue, or from fresh human tissue obtained at biopsy or neurosurgery. Thus incubated synaptosomes and brain slices from postmortem human brain respire, accumulate tissue potassium, maintain membrane potentials, release neurotransmitters in a calcium-dependent fashion, and possess active, sodium-dependent uptake systems (see Table 1 for references [not in this article]). Electron microscopic examination of synaptosome preparations from postmortem human brain showed them to be only slightly less pure than preparations from fresh tissue, although some degree of damage is evident (Hardy et al., 1982 [not in this article]).

    In order to study immediate "postmortem" changes, Kalimo et al. perfused the brains of 5 patients with aldehydes within half an hour of "clinical death". Subsequent examination of the preserved brain tissue with both light and electron microscopy showed the level of structural preservation. In two cases, the changes described were consistent with approximately one to two hours of ischemic injury. (Ischemic injury often begins prior to declaration of "clinical death", hence the apparently longer ischemic period compared with the interval following declaration of death and prior to perfusion of fixative). Physical preservation of cellular structure and ultrastructure was excellent. It is difficult to avoid the conclusion that information loss was negligible in these cases. In two other cases, elevated intraparenchymal pressure prevented perfusion with the preservative, thus preventing examination of the tissue. Without such an examination, it is difficult to draw conclusions about the extent of information loss. In the final case, "...the most obvious abnormality was the replacement of approximately four-fifths of the parenchyma of the brain by a fluid-containing cavity that was lined by what seemed to be very thin remnants of the cerebral cortex." Cryonic suspension in this last case would not be productive.
    As an aside, the vascular perfusion of chemical fixatives to improve stability of tissue structures prior to perfusion with cryoprotectants and subsequent storage in liquid nitrogen would seem to offer significant advantages. The main issue that would require resolution prior to such use is the risk that fixation might obstruct circulation, thus impeding subsequent perfusion with cryoprotectants. Other than this risk, the use of chemical fixatives (such as aldehydes and in particular glutaraldehyde) would reliably improve structural preservation and would be effective at halting almost all deterioration within minutes of perfusion[67]. The utility of chemical preservation has been discussed by Drexler[1] and by Olson[90], among others.

    Ischemia
    The events following ischemia have been reasonably well characterized. Following experimental induction of ischemia in cats, Kalimo et al.[74] found "The resulting cellular alterations were homogeneous and uniform throughout the entire brain: they included early chromatin clumping, gradually increasing electron lucency of the cell sap, distention of endoplasmic reticulum and Golgi cisternae, transient mitochondrial condensation followed by swelling and appearance of flocculent densities, and dispersion of ribosomal rosettes." Energy levels within the cell drop sharply within a few minutes of cessation of blood flow. The chromatin clumping is a reversible early change. The loss of energy results fairly quickly in failure to maintain transmembrane concentration gradients (for example the Na+K+ pump stops working, resulting in increased intracellular Na+ and increased extracellular K+). The uneven equilibration of concentration gradients results in changes in osmotic pressure with consequent flows of water. Swelling of mitochondria and other structures occurs. The appearance of "flocculent densities" in the mitochondria is thought to indicate severe internal membrane damage which is "irreversible."[note 18]
    Ischemic changes do not appear to result in any damage that would prevent repair (e.g., changes that would result in significant loss of information about structure) for at least a few hours. Temporary functional recovery has been demonstrated in optimal situations after as long as 60 minutes of total ischemia[93, 94, 95]. Hossmann, for example, reported results on 143 cats subjected to one hour of normothermic global brain ischemia[97]. "Body temperature was maintained at 36 degrees to 37 degrees C with a heating pad. ... Completeness of ischemia was tested by injecting 133Xe into the innominate artery immediately before vascular occlusion and monitoring the absence of decay of radioactivity from the head during ischemia, using external scintillation detectors. ... In 50% of the animals, even major spontaneous EEG activity returned after ischemia.... One cat survived for 1 yr after one hour of normothermic cerebrocirculatory arrest with no electrophysiologic deficit and with only minor neurologic and morphologic disturbances." Functional recovery is a more stringent criterion than the more relaxed information theoretic criterion, which merely requires adequate structural preservation to allow inference about the pre-existing structure. Reliable identification of the various cellular structures is possible hours (and sometimes even days) later. Detailed descriptions of ischemia and its time course[72, page 209 et sequitur] also clearly show that cooling substantially slows the rate of deterioration. Thus, even moderate cooling "postmortem" slows deterioration significantly.

    Lysosomes
    The theory that lysosomes ("suicide bags") rupture and release digestive enzymes into the cell, resulting in rapid deterioration of chemical structure, appears to be incorrect. More broadly, there is a body of work suggesting that structural deterioration does not take place rapidly.
    Kalimo et al.[74] said "It is noteworthy that after 120 min of complete blood deprivation we saw no evidence of membrane lysosomal breakdown, an observation which has also been reported in studies of in vitro lethal cell injury[omitted references], and in regional cerebral ischemia[omitted references]."

    Hawkins et al.[75] said "...lysosomes did not rupture for approximately 4 hours and in fact did not release the fluorescent dye until after reaching the postmortem necrotic phase of injury. ... The original suicide bag mechanism of cell damage thus is apparently not operative in the systems studied. Lysosomes appear to be relatively stable organelles...."

    Messenger RNA and Protein
    Morrison and Griffin[76] said "We find that both rat and human cerebellar mRNAs are surprisingly stable under a variety of postmortem conditions and that biologically active, high-molecular-weight mRNAs can be isolated from postmortem tissue. ... A comparison of RNA recoveries from fresh rat cerebella and cerebella exposed to different postmortem treatments showed that 83% of the total cytoplasmic RNAs present immediately postmortem was recovered when rat cerebella were left at room temperature for 16 h [hours] postmortem and that 90% was recovered when the cerebella were left at 4 degrees C for this length of time .... In neither case was RNA recovery decreased by storing the cerebella in liquid nitrogen prior to analysis. ... Control studies on protein stability in postmortem rat cerebella show that the spectrum of abundant proteins is also unchanged after up to 16 h [hours] at room temperature...." Johnson et al.[125], in "Extensive Postmortem Stability of RNA From Rat and Human Brain," found that postmortem delays of as long as 48 hours "...failed to reveal degradation of the specific rat brain mRNAs during the postmortem period." They also said "We find no effect of postmortem delay on RNA quality in both rat and human."
    20 Million Year Survival of DNA
    The ability of DNA to survive for long periods was dramatically illustrated by its recovery and sequencing from a 17 to 20 million year old magnolia leaf[81]. "Sediments and fossils seem to have accumulated in an anoxic lake bottom environment; they have remained unoxidized and water-saturated to the present day." "Most leaves are preserved as compression fossils, commonly retaining intact cellular tissue with considerable ultrastructural preservation, including cell walls, leaf phytoliths, and intracellular organelles, as well as many organic constituents such as flavonoids and steroids[omitted references]. There is little evidence of post-depositional (diagenetic) change in many of the leaf fossils."
    ADDENDUM: A 1997 paper[130] critical of earlier work which attempted to recover DNA from ancient sources said "Whereas ancient DNA sequences from specimens younger than 100 000 years old have now been replicated independently (Hagelberg et al. 1994; Hoss et al. 1994; Taylor 1996), we have singularly failed to recover authentic ancient DNA from amber fossils."
    For present purposes the distinction between 100,000 and 100,000,000 years is not critical: both are substantially longer than the time that a person might reasonably expect to stay in cryonic suspension.

    Cell Cultures taken after "Death"
    Gilden et al.[77] report that "...nearly two-thirds of all tissue acquired in less than six hours after death was successfully grown, whereas only one-third of all tissue acquired more than six hours after death was successfully grown in tissue culture." While it would be incorrect to conclude that widespread cellular survival occurred based on these findings, they do show that structural deterioration is insufficient to disrupt function in at least some cells. This supports the idea that structural deterioration in many other cells should not be extensive.
    Information Loss and Ischemia
    It is currently possible to initiate suspension immediately after legal death. In favorable circumstances legal death can be declared upon cessation of heartbeat in an otherwise revivable terminally ill patient who wishes to die a natural death and has refused artificial means of prolonging the dying process. In such cases, the ischemic interval can be short (two or three minutes). It is implausible that ischemic injury would cause information theoretic death in such a case.
    As the ischemic interval lengthens, the level of damage increases. It is not clear exactly when information loss begins or when information theoretic death occurs. Present evidence supports but does not prove the hypothesis that information theoretic death does not occur for at least a few hours following the onset of ischemia. Quite possibly many hours of ischemia can be tolerated. Freezing of tissue within that time frame followed by long term storage in liquid nitrogen should provide adequate preservation of structure to allow repair [note 19].

    MEMORY
    It is essential to ask whether the important structural elements underlying "behavioral plasticity" (human memory and human personality) are likely to be preserved by cryonic suspension. Clearly, if human memory is stored in a physical form which is obliterated by freezing, then cryonic suspension won't work. In this section we briefly consider a few major aspects of what is known about long term memory and whether known or probable mechanisms are likely to be preserved by freezing.
    It appears likely that short term memory, which can be disrupted by trauma or a number of other processes, will not be preserved by cryonic suspension. Consolidation of short term memory into long term memory is a process that takes several hours. We will focus attention exclusively on long term memory, for this is far more stable. While the retention of short term memory cannot be excluded (particularly if chemical preservation is used to provide rapid initial fixation), its greater fragility renders this significantly less likely.

    To see the Mona Lisa or Niagara Falls changes us, as does seeing a favorite television show or reading a good book. These changes are both figurative and literal, and it is the literal (or neuroscientific) changes that we are interested in: what are the physical alterations that underlie memory?

    Briefly, the available evidence supports the idea that memory and personality are stored in identifiable physical changes in the nerve cells, and that alterations in the synapses between nerve cells play a critical role.

    Shepherd in "Neurobiology"[38, page 547] said: "The concept that brain functions are mediated by cell assemblies and neuronal circuits has become widely accepted, as will be obvious to the reader of this book, and most neurobiologists believe that plastic changes at synapses are the underlying mechanisms of learning and memory."

    Kupfermann in "Principles of Neural Science"[13, page 1005] said: "Because of the enduring nature of memory, it seems reasonable to postulate that in some way the changes must be reflected in long-term alterations of the connections between neurons."

    Eric R. Kandel in "Principles of Neural Science"[13, page 1016] said: "Morphological changes seem to be a signature of the long-term process. These changes do not occur with short-term memory (Figure 65-6 [not reproduced here]). Moreover, the structural changes that occur with the long-term process are not restricted to the [sic] growth. Long-term habituation leads to the opposite change---a regression and pruning of synaptic connections. With long-term habituation, where the functional connections between the sensory neurons and motor neurons are inactivated (Figure 65-2[not reproduced]), the number of terminals per neuron is correspondingly reduced by one-third (Figure 65-6[not reproduced]) and the proportion of terminals with active zones is reduced from 40% to 10%."

    Squire in "Memory and Brain"[109, page 10] said: "The most prevalent view has been that the specificity of stored information is determined by the location of synaptic changes in the nervous system and by the pattern of altered neuronal interactions that these changes produce. This idea is largely accepted at the present time, and will be explored further in this and succeeding chapters in the light of current evidence."

    Lynch, in "Synapses, Circuits, and the Beginnings of Memory"[34, page 3] said: "The question of which components of the neuron are responsible for storage is vital to attempts to develop generalized hypotheses about how the brain encodes and makes use of memory. Since individual neurons receive and generate thousands of connections and hence participate in what must be a vast array of potential circuits, most theorists have postulated a central role for synaptic modifications in memory storage."

    Turner and Greenough said "Two non-mutually exclusive possible mechanisms of brain information storage have remained the leading theories since their introduction by Ramon y Cajal [omitted reference] and Tanzi [omitted reference]. The first hypothesis is that new synapse formation, or selected synapse retention, yields altered brain circuitry which encodes new information. The second is that altered synaptic efficacy brings about similar change."[22].

    Greenough and Bailey in "The anatomy of a memory: convergence of results across a diversity of tests"[39] say: "More recently it has become clear that the arrangement of synaptic connections in the mature nervous system can undergo striking changes even during normal functioning. As the diversity of species and plastic processes subjected to morphological scrutiny has increased, convergence upon a set of structurally detectable phenomena has begun to emerge. Although several aspects of synaptic structure appear to change with experience, the most consistent potential substrate for memory storage during behavioral modification is an alteration in the number and/or pattern of synaptic connections."

    It seems likely, therefore, that human long term memory is encoded by detectable physical changes in cell structure and in particular in synaptic structure.

    Plastic Changes in Model Systems
    What, exactly, might these changes be? Very strong statements are possible in simple "model systems". Bailey and Chen, for example, identified several specific changes in synaptic structure that encode learned memories in the sea slug (Aplysia californica) by direct examination of the changed synapses with an electron microscope[36].
    "Using horseradish peroxidase (HRP) to label the presynaptic terminals (varicosities) of sensory neurons and serial reconstruction to analyze synaptic contacts, we compared the fine structure of identified sensory neuron synapses in control and behaviorally modified animals. Our results indicate that learning can modulate long-term synaptic effectiveness by altering the number, size, and vesical complement of synaptic active zones."

    Examination of sections 100 nanometers thick (several hundred atomic diameters) by transmission electron microscopy in vacuum recovers little or no chemical information. Lateral resolution is at best a few nanometers (tens of atomic diameters), and depth information (within the 100 nanometer section) is entirely lost. Specimen preparation included removal and desheathing of the abdominal ganglion, which was then bathed in seawater for 30 minutes before impalement and intrasomatic pressure injection of HRP. Two hours later the ganglia were fixed, histochemically processed, and embedded. Following this treatment, Bailey and Chen concluded that "...clear structural changes accompany behavioral modification, and those changes can be detected at the level of identified synapses that are critically involved in learning."

    The following observations about this work seem in order. First, several different types of changes were present. This provides redundant evidence of synaptic alteration. Inability to detect one type of change, or obliteration of one specific type of change, would not be sufficient to prevent recovery of the "state" of the synapse. Second, examination by electron microscopy is much cruder than the techniques considered here which literally propose to analyze every molecule in the structure. Further alterations in synaptic chemistry will be detectable when the synapse is examined in more detail at the molecular level. Third, there is no reason to believe that freezing would obliterate the structure beyond recognition.

    Implications for Human Memory
    Such satisfying evidence is at present confined to "model systems"; what can we conclude about more complex systems, e.g., humans? Certainly, it seems safe to say that synaptic alterations are also used in the human memory system, that synaptic changes of various types take place when a synapse "remembers" something, that the changes involve alterations in at least many thousands of molecules, and that the mechanisms probably resemble those used in lower organisms (evolution is notoriously conservative).
    It seems likely that knowledge of the morphology and connectivity of nerve cells along with some specific knowledge of the biochemical state of the cells and synapses would be sufficient to determine memory and personality. Perhaps, however, some fundamentally different mechanism is present in humans? Even if this were to prove true, any such system would be sharply constrained by the available evidence. It would have to persist over the lifetime of a human being, and thus would have to be quite stable. It would have to tolerate the natural conditions encountered by humans and the experimental conditions to which primates have been subjected without loss of memory and personality (presuming that the primate brain is similar to the human brain). And finally, it would almost certainly involve changes in tens of thousands of molecules to store each bit of information. Functional studies of human long term memory suggest it has a capacity of only 10^9 bits (somewhat over 100 megabytes)[37] (though this did not consider motor memory, e.g., the information storage required when learning to ride a bicycle). Such a low memory capacity suggests that, independent of the specific mechanism, a great many molecules are required to remember each bit. It even suggests that many synapses are used to store each bit (recall there are perhaps 10^15 synapses -- which implies some 10^6 synapses per bit of information stored in long term memory).
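    For concreteness, the implied redundancy can be checked with a line of arithmetic. The following is a sketch in Python using the two estimates quoted above; the numbers are the text's estimates, not new measurements:
        # Implied redundancy of human long term memory storage.
        capacity_bits = 1e9     # functional estimate of long term memory capacity [37]
        synapses = 1e15         # rough count of synapses in the brain
        print(synapses / capacity_bits)   # ~1e6 synapses available per stored bit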

    Given that future technology will allow the molecule-by- molecule analysis of the structures that store memory, and given that such structures are large on the molecular scale (involving at least tens of thousands of molecules each) then it appears unlikely that such structures will survive the lifetime of the individual only to be obliterated beyond recognition by freezing. Freezing is unlikely to cause information theoretic death.

    TECHNICAL OVERVIEW
    Even if information theoretic death has not occurred, a frozen brain is not a healthy structure. While repair might be feasible in principle, it would be comforting to have at least some idea about how such repairs might be done in practice. As long as we assume that the laws of physics, chemistry, and biochemistry with which we are familiar today will still form the basic framework within which repair will take place in the future, we can draw well-founded conclusions about the capabilities and limits of any such repair technology.
    The Nature of This Proposal
    To decide whether or not to pursue cryonic suspension we must answer one question: will restoration of frozen tissue to a healthy and functional state ever prove feasible? If the answer is "yes," then cryonics will save lives. If the answer is "no," then it can be ignored. As discussed earlier, effectively the most that we can usefully learn about frozen tissue is the type, location and orientation of each molecule. If this information is sufficient to permit inference of the healthy state with memory and personality intact, then repair is in principle feasible. The most that future technology could offer, therefore, is the ability to restore the structure whenever such restoration was feasible in principle. We propose that just this limit will be closely approached by future advances in technology.
    It is unreasonable to think that the current proposal will in fact form the basis for future repair methods for two reasons:

    First, better technologies and approaches are likely to be developed. Necessarily, we must restrict ourselves to methods and techniques that can be analyzed and understood using the currently understood laws of physics and chemistry. Future scientific advances, not anticipated at this time, are likely to result in cheaper, simpler or more reliable methods. Given the history of science and technology to date, the probability of future unanticipated advances is good.

    Second, this proposal was selected because of its conceptual simplicity and its obvious power to restore virtually any structure where restoration is in principle feasible. These are unlikely to be design objectives of future systems. Conceptual simplicity is advantageous when the resources available for the design process are limited. Future design capabilities can reasonably be expected to outstrip current capabilities, and the efforts of a large group can reasonably be expected to allow analysis of much more complex proposals than considered here.

    Further, future systems will be designed to restore specific individuals suffering from specific types of damage, and can therefore use specific methods that are less general but which are more efficient or less costly for the particular type of damage involved. It is easier for a general-purpose proposal to rely on relatively simple and powerful methods, even if those methods are less efficient.

    Why, then, discuss a powerful, general purpose method that is inefficient, fails to take advantage of the specific types of damage involved, and which will almost certainly be superseded by future technology?

    The purpose of this paper is not to lay the groundwork for future systems, but to answer a question: will cryonics work? The value of cryonics is clearly and decisively based on technical capabilities that will not be developed for several decades (or longer). If some relatively simple proposal appears likely to work, then the value of cryonics is established. Whether or not that simple proposal is actually used is irrelevant. The fact that it could be used in the improbable case that all other technical progress and all other approaches fail is sufficient to let us decide today whether or not cryonic suspension is of value.

    The philosophical issues involved in this type of long range technical forecasting and the methodologies appropriate to this area are addressed by work in "exploratory engineering."[1, 85] The purpose of exploratory engineering is to provide lower bounds on future technical capabilities based on currently understood scientific principles. A successful example is Konstantin Tsiolkovsky's forecast around the turn of the century that multi-staged rockets could go to the moon. His forecast was based on well understood principles of Newtonian mechanics. While it did not predict when such flights would take place, nor who would develop the technology, nor the details of the Saturn V booster, it did predict that the technical capability was feasible and would eventually be developed. In a similar spirit, we will discuss the technical capabilities that should be feasible and what those capabilities should make possible.

    Conceptually, the approach that we will follow is simple:

    Determine the coordinates and orientations of all major molecules, and store this information in a data base.
    Analyze the information stored in the data base with a computer program which determines what changes in the existing structure should be made to restore it to a healthy and functional state.
    Take the original molecules and move them, one at a time, back to their correct locations.
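    The three steps above amount to a simple pipeline: analyze, plan, rebuild. The following Python sketch only illustrates that data flow; the function names and the tuple representation of a "molecule" are placeholders for exposition, not a proposal for an actual control system.
        # Minimal sketch of the three-step proposal. A "molecule" is just (kind, x, y, z).

        def analyze(frozen_tissue):
            # Step 1: build the structural data base -- type and coordinates of each molecule.
            return [(kind, (x, y, z)) for kind, x, y, z in frozen_tissue]

        def plan(data_base):
            # Step 2: computation only -- decide where each molecule belongs in the healthy state.
            # Here, trivially, every molecule is left where it was found.
            return list(data_base)

        def rebuild(blueprint):
            # Step 3: place the original molecules at their planned positions.
            return [(kind, position) for kind, position in blueprint]

        tissue = [("protein", 0.0, 0.0, 0.0), ("lipid", 1.0, 2.0, 3.0)]
        print(rebuild(plan(analyze(tissue))))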
    The reader will no doubt agree that this proposal is conceptually simple, but might be concerned about a number of technical issues. The major issues are addressed in the following analysis.
    An obvious inefficiency of this approach is that it will take apart, and then put back together again, structures and whole regions that are in fact functional or only slightly damaged. Simply leaving a functional region intact, or using relatively simple special-case repair methods for minor damage, would be faster and less costly. Despite these obvious drawbacks, the general purpose approach demonstrates the principles involved. As long as the inefficiencies are not so extreme that they make the approach infeasible or uneconomical in the long run, the simpler approach is also the easier one to evaluate.

    Overview of the Brain.
    The brain has a volume of 1350 cubic centimeters (about one and a half quarts) and a weight of slightly more than 1400 grams (about three pounds). The smallest normal human brain weighed 1100 grams, while the largest weighed 2050 grams [30, page 24]. It is almost 80% water by weight. The remaining 20% is slightly less than 40% protein, slightly over 50% lipids, and a few percent of other material[16, page 419]. Thus, an average brain has slightly over 100 grams of protein, about 175 grams of lipids, and some 30 to 40 grams of "other stuff".
    How Many Molecules
    If we are considering restoration down to the molecular level, an obvious question is: how many molecules are there? We can easily approximate the answer, starting with the proteins. An "average" protein molecule has a molecular weight of about 50,000 amu. One mole of "average" protein is 50,000 grams (by definition), so the 100 grams of protein in the brain is 100/50,000 or .002 moles. One mole is 6.02 x 10^23 molecules, so .002 moles is 1.2 x 10^21 molecules.
    We proceed in the same way for the lipids (lipids are most often used to make cell membranes) -- a "typical" lipid might have a molecular weight of 500 amu, which is 100 times less than the molecular weight of a protein. This implies the brain has about 175/500 x 6.02 x 10^23 or about 2 x 10^23 lipid molecules.

    Finally, water has a molecular weight of 18, so there will be about 1400 x 0.8/18 x 6.02 x 10^23 or about 4 x 10^25 water molecules in the brain. In many cases a substantial percentage of water will have been replaced with cryoprotectant during the process of suspension; glycerol at a concentration of 4 molar or more, for example. Both water and glycerol will be treated in bulk, and so the change from water molecules to glycerol (or other cryoprotectants) should not have a significant impact on the calculations that follow.
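    The counts above are simple mole arithmetic. A short Python sketch, using the same rounded masses and molecular weights as the text:
        # Approximate molecule counts in an average brain (rounded figures from the text).
        AVOGADRO = 6.02e23                             # molecules per mole

        protein_grams, protein_mw = 100.0, 50000.0     # "average" protein, ~50,000 amu
        lipid_grams, lipid_mw = 175.0, 500.0           # "typical" lipid, ~500 amu
        water_grams, water_mw = 1400.0 * 0.8, 18.0     # roughly 80% of 1400 grams is water

        proteins = protein_grams / protein_mw * AVOGADRO   # ~1.2e21
        lipids = lipid_grams / lipid_mw * AVOGADRO         # ~2.1e23
        water = water_grams / water_mw * AVOGADRO          # ~3.7e25
        print(f"{proteins:.1e} proteins, {lipids:.1e} lipids, {water:.1e} water molecules")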

    These numbers are fundamental. Repair of the brain down to the molecular level will require that we cope with them in some fashion.

    How Much Time
    Another parameter whose value we must decide is the amount of repair time per molecule. We assume that such repair time includes the time required to determine the location of the molecule in the frozen tissue and the time required to restore the molecule to its correct location, as well as the time to diagnose and repair any structural defects in the molecule. The computational power required to analyze larger-scale structural damage -- e.g., a mitochondrion has suffered damage to its internal membrane structure (so-called "flocculent densities") -- should be less than the power required to analyze each individual molecule. An analysis at the level of sub-cellular organelles involves several orders of magnitude fewer components and will therefore require correspondingly less computational power. Analysis at the cellular level involves even fewer components. We therefore neglect the time required for these additional computational burdens. The total time required for repair is just the sum over all molecules of the time required by one repair device to repair that molecule, divided by the number of repair devices. The more repair devices there are, the faster the repair will be. The more molecules there are, and the more time it takes to repair each molecule, the slower repair will be.
    The time required for a ribosome to manufacture a protein molecule of 400 amino acids is about 10 seconds[14, page 393], or about 25 milliseconds to add each amino acid. DNA polymerase III can add an additional base to a replicating DNA strand in about 7 milliseconds[14, page 289]. In both cases, synthesis takes place in solution and involves significant delays while the needed components diffuse to the reactive sites. The speed of assembler-directed reactions is likely to prove faster than current biological systems. The arm of an assembler should be capable of making a complete motion and causing a single chemical transformation in about a microsecond [85]. However, we will conservatively base our computations on the speed of synthesis already demonstrated by biological systems, and in particular on the slower speed of protein synthesis.

    We must do more than synthesize the required molecules -- we must analyze the existing molecules, possibly repair them, and also move them from their original location to the desired final location. Existing antibodies can identify specific molecular species by selectively binding to them, so identifying individual molecules is feasible in principle. Even assuming that the actual technology employed is different it seems unlikely that such analysis will require substantially longer than the synthesis time involved, so it seems reasonable to multiply the synthesis time by a factor of a few to provide an estimate of time spent per molecule. This should, in principle, allow time for the complete disassembly and reassembly of the selected molecule using methods no faster than those employed in biological systems. While the precise size of this multiplicative factor can reasonably be debated, a factor of 10 should be sufficient. The total time required to simply move a molecule from its original location to its correct final location in the repaired structure should be smaller than the time required to disassemble and reassemble it, so we will assume that the total time required for analysis, repair and movement is 100 seconds per protein molecule.

    Temperature of Analysis
    Warming the tissue before determining its molecular structure creates definite problems: everything will move around. A simple solution to this problem is to keep the tissue frozen until after all the desired structural information is recovered. In this case the analysis will take place at a low temperature. Whether or not subsequent operations should be performed at the same low temperature is left open. A later section considers the various approaches that can be taken to restore the structure after it has been analyzed.
    Repair or Replace?
    In practice, most molecules will probably be intact -- they would not have to be either disassembled or reassembled. This should greatly reduce repair time. On a more philosophical note, existing biological systems generally do not bother to repair macromolecules (a notable exception is DNA -- a host of molecular mechanisms for the repair of this molecule are used in most organisms). Most molecules are generally used for a period of time and then broken down and replaced. There is a slow and steady turnover of molecular structure -- the atoms in the roast beef sandwich eaten yesterday are used today to repair and replace muscles, skin, nerve cells, etc. If we adopted nature's philosophy we would simply discard and replace any damaged molecules, greatly simplifying molecular "repair".
    Carried to its logical conclusion, we would discard and replace all the molecules in the structure. Having once determined the type, location and orientation of a molecule in the original (frozen) structure, we would simply throw that molecule out without further examination and replace it. This requires only that we be able to identify the location and type of individual molecules. It would not be necessary to determine if the molecule was damaged, nor would it be necessary to correct any damage found. By definition, the replacement molecule would be taken from a stock-pile of structurally correct molecules that had been previously synthesized, in bulk, by the simplest and most economical method available.

    Discarding and replacing even a few atoms might disturb some people. This can be avoided by analyzing and repairing any damaged molecules. However, for those who view the simpler removal and replacement of damaged molecules as acceptable, the repair process can be significantly simplified. For purposes of this paper, however, we will continue to use the longer time estimate based on the premise that full repair of every molecule is required. This appears to be conservative. (Those who feel that replacing their atoms will change their identity should think carefully before eating their next meal!)

    Total Repair Machine Seconds
    We shall assume that the repair time for other molecules is similar per unit mass. That is, we shall assume that the repair time for the lipids (which each weigh about 500 amu, 100 times less than a protein) is about 100 times less than the repair time for a protein. The repair time for one lipid molecule is assumed to be 1 second. We will neglect water molecules in this analysis, assuming that they can be handled in bulk.
    We have assumed that the time required to analyze and synthesize an individual molecule will dominate the time required to determine its present location, the time required to determine the appropriate location it should occupy in the repaired structure, and the time required to put it in this position. These assumptions are plausible but will be considered further when the methods of gaining access to and of moving molecules during the repair process are considered.

    This analysis accounts for the bulk of the molecules -- it seems unlikely that other molecular species will add significant additional repair time.

    Based on these assumptions, we find that we require 100 seconds x 1.2 x 10^21 protein molecules + 1 second times 2 x 10^23 lipids, or 3.2 x 10^23 repair-machine-seconds. This number is not as fundamental as the number of molecules in the brain. It is based on the (probably conservative) assumption that repair of 50,000 amu requires 100 seconds. Faster repair would imply repair could be done with fewer repair machines, or in less time.
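    Combining the molecule counts with the (deliberately conservative) per-molecule times gives the total effort. A Python sketch of the same arithmetic:
        # Total repair effort under the stated assumptions.
        proteins, lipids = 1.2e21, 2e23   # molecule counts estimated earlier
        protein_seconds = 100.0           # 10 s ribosomal synthesis time times a factor of 10
        lipid_seconds = 1.0               # scaled by mass: 500 amu versus 50,000 amu

        total = proteins * protein_seconds + lipids * lipid_seconds
        print(f"{total:.1e} repair-machine-seconds")   # ~3.2e23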

    How Many Repair Machines
    If we now fix the total time required for repair, we can determine the number of repair devices that must function in parallel. We shall rather arbitrarily adopt 10^8 seconds, which is very close to three years, as the total time in which we wish to complete repairs.
    If the total repair time is 10^8 seconds, and we require 3.2 x 10^23 repair-machine-seconds, then we require 3.2 x 10^15 repair machines for complete repair of the brain. This corresponds to 3.2 x 10^15 / (6.02 x 10^23) or 5.3 x 10^-9 moles, or 5.3 nanomoles of repair machines. If each repair device weighs 10^10 to 10^11 amu, then the total weight of all the repair devices is 53 to 530 grams: a few ounces to just over a pound.
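    The same arithmetic as a Python sketch (the per-device mass range is the assumption stated above; lighter devices would only lower the total):
        # Number of repair machines needed to finish in about three years, and their mass.
        AVOGADRO = 6.02e23
        total_machine_seconds = 3.2e23
        repair_time_seconds = 1e8                 # roughly three years

        machines = total_machine_seconds / repair_time_seconds    # 3.2e15 devices
        moles = machines / AVOGADRO                                # ~5.3 nanomoles
        for device_amu in (1e10, 1e11):            # assumed mass of one repair device
            grams = moles * device_amu             # molar mass in g/mol equals mass in amu
            print(f"{device_amu:.0e} amu/device -> {grams:.0f} grams total")   # ~53 and ~530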

    Thus, the weight of repair devices required to repair each and every molecule in the brain, assuming the repair devices operate no faster than current biological methods, is about 4% to 40% of the total mass of the brain.

    By way of comparison, there are about 10^14 cells[44, page 3] in the human body and each cell has about 10^7 ribosomes[14, page 652] giving 10^21 ribosomes. Thus, there are five to six orders of magnitude more ribosomes in the human body than the number of repair machines we estimate are required to repair the human brain.

    It seems unlikely that either more or larger repair devices are inherently required. However, it is comforting to know that errors in these estimates of even several orders of magnitude can be easily tolerated. A requirement for 530 kilograms of repair devices (1,000 to 10,000 times more than we calculate is needed) would have little practical impact on feasibility. Although repair scenarios that involve deployment of the repair devices within the volume of the brain could not be used if we required 530 kilograms of repair devices, a number of other repair scenarios would still work -- one such approach is discussed in this paper. Given that nanotechnology is feasible, manufacturing costs for repair devices will be small. The cost of even 530 kilograms of repair devices should eventually be significantly less than a few hundred dollars. The feasibility of repair down to the molecular level is insensitive to even large errors in the projections given here.

    THE REPAIR PROCESS
    We now turn to the physical deployment of these repair devices. That is, although the raw number of repair devices is sufficient, we must devise an orderly method of deploying these repair devices so they can carry out the needed repairs.
    Other Proposals: On-board Repair
    We shall broadly divide repair scenarios into two classes: on-board and off-board. In the on-board scenarios, the repair devices are deployed within the volume of the brain. Existing structures are disassembled in place, their component molecules examined and repaired, and rebuilt on the spot. (We here class as "on-board" those scenarios in which the repair devices operate within the physical volume of the brain, even though there might be substantial off-board support. That is, there might be a very large computer outside the tissue directing the repair process, but we would still refer to the overall repair approach as "on-board"). The on-board repair scenario has been considered in some detail by Drexler[18]. We will give a brief outline of the on-board repair scenario here, but will not consider it in any depth. For various reasons, it is quite plausible that on-board repair scenarios will be developed before off-board repair scenarios.
    The first advantage of on-board repair is an easier evolutionary path from partial repair systems deployed in living human beings to the total repair systems required for repair of the more extensive damage found in the person who has been cryonically suspended. That is, a simple repair device for finding and removing fatty deposits blocking the circulatory system could be developed and deployed in living humans[2], and need not deal with all the problems involved in total repair. A more complex device, developed as an incremental improvement, might then repair more complex damage (perhaps identifying and killing cancer cells) again within a living human. Once developed, there will be continued pressure for evolutionary improvements in on-board repair capabilities which should ultimately lead to repair of virtually arbitrary damage. This evolutionary path should eventually produce a device capable of repairing frozen tissue.

    It is interesting to note that "At the end of this month [August 1990], MITI's Agency of Industrial Science and Technology (AIST) will submit a budget request for ¥30 million ($200,000) to launch a `microrobot' project next year, with the aim of developing tiny robots for the internal medical treatment and repair of human beings. ... MITI is planning to pour ¥25,000 million ($170 million) into the microrobot project over the next ten years..."[86]. Iwao Fujimasa said their objective is a robot less than .04 inches in size that will be able to travel through veins and inside organs[17, 20]. While substantially larger than the proposals considered here, the direction of future evolutionary improvements should be clear.

    A second advantage of on-board repair is emotional. In on-board repair, the original structure (you) is left intact at the macroscopic and even light microscopic level. The disassembly and reassembly of the component molecules is done at a level smaller than can be seen, and might therefore prove less troubling than other forms of repair in which the disassembly and reassembly processes are more visible. Ultimately, though, correct restoration of the structure is the overriding concern.

    A third advantage of on-board repair is the ability to leave functional structures intact. That is, in on-board repair we can focus on those structures that are damaged, while leaving working structures alone. If minor damage has occurred, then an on-board repair system need make only minor repairs.

    The major drawback of on-board repair is the increased complexity of the system. As discussed earlier, this is only a drawback when the design tools and the resources available for the design are limited. We can reasonably presume that future design tools and future resources will greatly exceed present ones. Developments in computer-aided design will put the design of remarkably complex systems within easy reach.

    In on-board repair, we might first logically partition the volume of the brain into a matrix of cubes, and then deploy each repair device in its own cube. Repair devices would first get as close as possible to their assigned cube by moving through the circulatory system (we presume it would be cleared out as a first step) and would then disassemble the tissue between them and their destination. Once in position, each repair device would analyze the tissue in its assigned volume and perform any repairs required.

    The Current Proposal: Off-Board Repair
    The second class of repair scenarios, the off-board scenarios, allow the total volume of repair devices to greatly exceed the volume of the human brain.
    The primary advantage of off-board repair is conceptual simplicity. It employs simple brute force to ensure that a solution is feasible and to avoid complex design issues. As discussed earlier, these are virtues in thinking about the problem today but are unlikely to carry much weight in the future when an actual system is being designed.

    The other advantages of this approach are fairly obvious. Lingering concerns about volume and heat dissipation can be eliminated. If a ton of repair devices should prove necessary, then a ton can be provided. Concerns about design complexity can be greatly reduced. Off-board repair scenarios do not require that the repair devices be mobile -- simplifying communications and power distribution, and eliminating the need for locomotor capabilities and navigational abilities. The only previous paper on off-board repair scenarios was by Merkle[101].

    Off-board repair scenarios can be naturally divided into three phases. In the first phase, we must analyze the structure to determine its state. The primary purpose of this phase is simply to gather information about the structure, although in the process the disassembly of the structure into its component molecules will also take place. Various methods of gaining access to and analyzing the overall structure are feasible -- in this paper we shall primarily consider one approach.

    We shall presume that the analysis phase takes place while the tissue is still frozen. While the exact temperature is left open, it seems preferable to perform analysis prior to warming. The thawing process itself causes damage and, once thawed, continued deterioration will proceed unchecked by the mechanisms present in healthy tissue. This cannot be tolerated during a repair time of several years. Either faster analysis or some means of blocking deterioration would have to be used if analysis were to take place after warming. We will not explore these possibilities here (although this appears worthwhile). The temperature at which the other phases take place is left open.

    The second phase of off-board repair is determination of the healthy state. In this phase, the structural information derived from the analysis phase is used to determine what the healthy state of the tissue had been prior to suspension and any preceding illness. This phase involves only computation based on the information provided by the analysis phase.

    The third phase is repair. In this phase, we must restore the structure in accordance with the blueprint provided by the second phase, the determination of the healthy state.

    Intermediate States During Off-Board Repair
    Repair methods in general start with frozen tissue, and end with healthy tissue. The nature of the intermediate states characterizes the different repair approaches. In off-board repair the tissue undergoing repair must pass through three highly characteristic states, described in the following three paragraphs.
    The first state is the starting state, prior to any repair efforts. The tissue is frozen (unrepaired).

    In the second state, immediately following the analysis phase, the tissue has been disassembled into its individual molecules. A detailed structural data base has been built which provides a description of the location, orientation, and type of each molecule, as discussed earlier. For those who are concerned that their identity or "self" is dependent in some fundamental way on the specific atoms which compose their molecules, the original molecules can be retained in a molecular "filing cabinet." While keeping physical track of the original molecules is more difficult technically, it is feasible and does not alter off-board repair in any fundamental fashion.

    In the third state, the tissue is restored and fully functional.

    By characterizing the intermediate state which must be achieved during the repair process, we reduce the problem from "Start with frozen tissue and generate healthy tissue" to "Start with frozen tissue and generate a structural data base and a molecular filing cabinet. Take the structural data base and the molecular filing cabinet and generate healthy tissue." It is characteristic of off-board repair that we disassemble the molecular structure into its component pieces prior to attempting repair.

    As an example, suppose we wish to repair a car. Rather than try to diagnose exactly what's wrong, we decide to take the car apart into its component pieces. Once the pieces are spread out in front of us, we can easily clean each piece, and then reassemble the car. Of course, we'll have to keep track of where all the pieces go so we can reassemble the structure, but in exchange for this bookkeeping task we gain a conceptually simple method of ensuring that we actually can get access to everything and repair it. While this is a rather extreme method of repairing a broken carburetor, it certainly is a good argument that we should be able to repair even rather badly damaged cars. So, too, with off-board repair. While it might be an extreme method of fixing any particular form of damage, it provides a good argument that damage can be repaired under a wide range of circumstances.

    Off-Board Repair is the Best that can be Achieved
    Regardless of the initial level of damage, regardless of the functional integrity or lack thereof of any or all of the frozen structure, regardless of whether easier and less exhaustive techniques might or might not work, we can take any frozen structure and convert it into the canonical state described above. Further, this is the best that we can do. Knowing the type, location and orientation of every molecule in the frozen structure under repair and retaining the actual physical molecules (thus avoiding any philosophical objections that replacing the original molecules might somehow diminish or negate the individuality of the person undergoing repair) is the best that we can hope to achieve. We have reached some sort of limit with this approach, a limit that will make repair feasible under circumstances which would astonish most people today.
    One particular approach to off-board repair is divide-and-conquer. This method is one of the technically simplest approaches. We discuss this method in the following section.

    Divide-and-Conquer
    Divide-and-conquer is a general purpose problem-solving method frequently used in computer science and elsewhere. In this method, if a problem proves too difficult to solve it is first divided into sub-problems, each of which is solved in turn. Should the sub-problems prove too difficult to solve, they are in turn divided into sub-sub-problems. This process is continued until the original problem is divided into pieces that are small enough to be solved by direct methods.
    If we apply divide-and-conquer to the analysis of a physical object -- such as the brain -- then we must be able to physically divide the object of analysis into two pieces and recursively apply the same method to the two pieces. This means that we must be able to divide a piece of frozen tissue, whether it be the entire brain or some smaller part, into roughly equal halves. Given that tissue at liquid nitrogen temperatures is already prone to fracturing, it should require only modest effort to deliberately induce a fracture that would divide such a piece into two roughly equal parts. Fractures made at low temperatures (when the material is below the glass transition temperature) are extremely clean, and result in little or no loss of structural information. Indeed, freeze fracture techniques are used for the study of synaptic structures. Hayat [40, page 398] says "Membranes split during freeze-fracturing along their central hydrophobic plane, exposing intramembranous surfaces. ... The fracture plane often follows the contours of membranes and leaves bumps or depressions where it passes around vesicles and other cell organelles. ... The fracturing process provides more accurate insight into the molecular architecture of membranes than any other ultrastructural method." It seems unlikely that the fracture itself will result in any significant loss of structural information.
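    As a purely illustrative sketch of the recursion in Python (the functions passed in stand for the physical operations of fracturing and direct analysis, and the numbers in the toy run are arbitrary):
        # Generic divide-and-conquer applied to a volume of frozen tissue (illustrative only).

        def analyze_by_division(piece, analyze, small_enough, fracture):
            # Recursively halve a piece until one repair device can analyze it directly.
            if small_enough(piece):
                return [analyze(piece)]
            left, right = fracture(piece)    # a clean low-temperature fracture into two halves
            return (analyze_by_division(left, analyze, small_enough, fracture) +
                    analyze_by_division(right, analyze, small_enough, fracture))

        # Toy run: split an 8-unit "volume" into 1-unit pieces. The actual case would need on
        # the order of 50 rounds of halving to reach the ~3.2e15 pieces discussed below.
        pieces = analyze_by_division(8.0, lambda v: v, lambda v: v <= 1.0,
                                     lambda v: (v / 2.0, v / 2.0))
        print(len(pieces))   # 8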

    The freshly exposed faces can now be analyzed by various surface analysis techniques. Work with STMs supports the idea that very high resolution is feasible[46]. For example, optical absorption microscopy "...generates an absorption spectrum of the surface with a resolution of 1 nanometer [a few atomic diameters]." Kumar Wickramasinghe of IBM's T. J. Watson Research Center said: "We should be able to record the spectrum of a single molecule" on a surface. Williams and Wickramasinghe said[51] "The ability to measure variations in chemical potential also allows the possibility of selectively identifying subunits of biological macromolecules either through a direct measurement of their chemical-potential gradients or by decorating them with different metals. This suggests a potentially simple method for sequencing DNA." While current devices are large, the fundamental physical principles on which they rely do not require large size. Many of the devices depend primarily on the interaction between a single atom at the tip of the STM probe and the atoms on the surface of the specimen under analysis. Clearly, substantial reductions in the size of such devices are feasible.

    High resolution optical techniques can also be employed. Near-field microscopy, employing light with a wavelength of hundreds of nanometers, has achieved a resolution of 12 nanometers (much smaller than a wavelength of light). To quote the abstract of a recent review article on the subject: "The near-field optical interaction between a sharp probe and a sample of interest can be exploited to image, spectroscopically probe, or modify surfaces at a resolution (down to ~12 nm) inaccessible by traditional far-field techniques. Many of the attractive features of conventional optics are retained, including noninvasiveness, reliability, and low cost. In addition, most optical contrast mechanisms can be extended to the near-field regime, resulting in a technique of considerable versatility. This versatility is demonstrated by several examples, such as the imaging of nanometric-scale features in mammalian tissue sections and the creation of ultrasmall, magneto-optic domains having implications for high-density data storage. Although the technique may find uses in many diverse fields, two of the most exciting possibilities are localized optical spectroscopy of semiconductors and the fluorescence imaging of living cells."[111]. Another article said: "Our signals are currently of such magnitude that almost any application originally conceived for far-field optics can now be extended to the near-field regime, including: dynamical studies at video rates and beyond; low noise, high resolution spectroscopy (also aided by the negligible auto-fluorescence of the probe); minute differential absorption measurements; magnetooptics; and superresolution lithography."[100] [note 20].

    How Small are the Pieces
    The division into halves continues until the pieces are small enough to allow direct analysis by repair devices. If we presume that division continues until each repair device is assigned its own piece to repair, then there will be 3.2 x 10^15 pieces, one for each of the 3.2 x 10^15 repair devices. If the 1350 cubic centimeter volume of the brain is divided into this many cubes, each cube would have a volume of about 0.4 cubic microns, making it roughly 0.75 microns (750 nanometers) on a side. Each cube could then be directly analyzed (disassembled into its component molecules) by a repair device during our three-year repair period.
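    A quick check of the piece size, in Python, using the volume and piece count from the text:
        # Size of each piece when the brain is divided among 3.2e15 repair devices.
        brain_cm3 = 1350.0
        pieces = 3.2e15
        volume_um3 = brain_cm3 * 1e12 / pieces       # 1 cm^3 = 1e12 cubic microns
        side_um = volume_um3 ** (1.0 / 3.0)
        print(f"{volume_um3:.2f} cubic microns, {side_um:.2f} microns on a side")   # ~0.42 and ~0.75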
    One might view these cubes as the pieces of a three-dimensional jig-saw puzzle, the only difference being that we have cheated and carefully recorded the position of each piece. Just as the picture on a jig-saw puzzle is clearly visible despite the fractures between the pieces, so too the three-dimensional "picture" of the brain is clearly visible despite its division into pieces [note 21].

    Moving Pieces
    Subsequent work on methods of assembling macroscopic objects from molecular components can be found in the article "Convergent assembly" at http://www.zyvex.com/nanotech/convergent.html.
    There are a great many possible methods of handling the mechanical problems involved in dividing and moving the pieces. It seems unlikely that mechanical movement of the pieces will prove an insurmountable impediment, and therefore we do not consider it in detail. However, for the sake of concreteness, we outline one possibility. Human arms are about 1 meter in length, and can easily handle objects from 1 to 10 centimeters in size (.01 to .1 times the length of the arm). It should be feasible, therefore, to construct a series of progressively shorter arms which handle pieces of progressively smaller size. If each set of arms were ten times shorter than the preceding set, then we would have devices with arms of: 1 meter, 1 decimeter, 1 centimeter, 1 millimeter, 100 microns, 10 microns, 1 micron, and finally .1 microns or 100 nanometers. (Note that an assembler has arms roughly 100 nanometers long). Thus, we would need to design 8 different sizes of manipulators. At each succeeding size the manipulators would be more numerous, and so would be able to deal with the many more pieces into which the original object was divided. Transport and mechanical manipulation of an object would be done by arms of the appropriate size. As objects were divided into smaller pieces that could no longer be handled by arms of a particular size, they would be handed to arms of a smaller size.
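    The ladder of arm sizes is just successive factors of ten; a two-line Python check that eight sizes span the range from one meter down to 100 nanometers:
        # Manipulator arm lengths from human scale down to assembler scale (factors of ten).
        arm_lengths_m = [10.0 ** -k for k in range(8)]   # 1 m, 0.1 m, ..., 1e-7 m (100 nm)
        print(len(arm_lengths_m), arm_lengths_m)          # 8 sizes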
    If it requires about three years to analyze each piece, then the time required both to divide the brain into pieces and to move each piece to an immobile repair device can reasonably be neglected. It seems unlikely that moving the pieces will take a significant fraction of three years.

    Memory Requirements
    The information storage requirements for a structural data base that holds the detailed description and location of each major molecule in the brain can be met by projected storage methods. DNA has an information storage density of about 10^21 bits/cubic centimeter. Conceptually similar but somewhat higher density molecular "tape" systems that store 10^22 bits/cubic centimeter[1] should be quite feasible. If we assume that every lipid molecule is "significant" but that water molecules, simple ions and the like are not, then the number of significant molecules is roughly the same as the number of lipid molecules [note 22] (the number of protein molecules is more than two orders of magnitude smaller, so we will neglect it in this estimate). The digital description of these 2 x 10^23 significant molecules requires 10^25 bits (assuming that 50 bits are required to encode the location and description of each molecule). This is about 1,000 cubic centimeters (1 liter, roughly a quart) of "tape" storage. If a storage system of such capacity strikes the reader as infeasible, consider that a human being has about 10^14 cells[44, page 3] and that each cell stores 10^10 bits in its DNA[14]. Thus, every human that you see is a device which (among other things) has a raw storage capacity of 10^24 bits -- and human beings are unlikely to be optimal information storage devices.
    A simple method of reducing storage requirements by several orders of magnitude would be to analyze and repair only a small amount of tissue at a time. This would eliminate the need to store the entire 10^25 bit description at one time. A smaller memory could hold the description of the tissue actually under repair, and this smaller memory could then be cleared and re-used during repair of the next section of tissue.
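    The storage estimate, and the saving from repairing the brain in sections, can be sketched as follows in Python (the 1,000-section figure is an illustrative choice, matching the partitioning used later):
        # Storage needed for the structural data base, and the effect of working in sections.
        significant_molecules = 2e23     # roughly the lipid count; proteins are ~100 times fewer
        bits_per_molecule = 50           # assumed encoding of type, location and orientation
        tape_bits_per_cm3 = 1e22         # projected molecular "tape" storage density[1]

        total_bits = significant_molecules * bits_per_molecule
        print(f"{total_bits:.0e} bits, {total_bits / tape_bits_per_cm3:.0f} cm^3 of tape")   # 1e25, ~1000
        sections = 1000                  # repair one section at a time, reusing the memory
        print(f"{total_bits / sections:.0e} bits held at any one time")                      # 1e22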

    Computational Requirements
    The computational power required to analyze a data base with 10^25 bits is well within known theoretical limits[9,25,32]. It has been seriously proposed that it might be possible to increase the total computational power achievable within the universe beyond any fixed bound in the distant future[52, page 658]. More conservative lower bounds to nearer-term future computational capabilities can be derived from the reversible rod-logic molecular model of computation, which dissipates about 10^-23 joules per gate operation with a switching time of 100 picoseconds at room temperature[85]. A wide range of other possibilities exists. Likharev proposed a computational element based on Josephson junctions which operates at 4 K and in which energy dissipation per switching operation is 10^-24 joules with a switching time of 10^-9 seconds[33, 43]. Continued evolutionary reductions in the size and energy dissipation of properly designed NMOS[113] and CMOS[112, 120] circuits should eventually produce logic elements that are both very small (though significantly larger than Drexler's mechanical proposals) and dissipate extraordinarily small amounts of energy per logic operation. Extrapolation of current trends suggests that energy dissipations in the 10^-23 joule range will be achieved before 2030[31, fig. 1]. There is no presently known reason to expect the trend to stop or even slow down at that time[9,32].
    Energy costs appear to be the limiting factor in rod logic (rather than the number of gates, or the speed of operation of the gates). Today, electric power costs about 10 cents per kilowatt hour. Future costs of power will almost certainly be much lower. Molecular manufacturing should eventually sharply reduce the cost of solar cells and increase their efficiency close to the theoretical limits. With a manufacturing cost of under 10 cents per kilogram[85], the cost of a one square meter solar cell will be less than a penny. As a consequence, the cost of solar power will be dominated by other costs, such as the cost of the land on which the solar cell is placed. While solar cells can be placed on the roofs of existing structures or in otherwise unused areas, we will simply use existing real estate prices to estimate costs.

    Low cost land in the desert southwestern United States can be purchased for less than $1,000 per acre. (This price corresponds to about 25 cents per square meter, significantly larger than the projected future manufacturing cost of a one square meter solar cell). Land elsewhere in the world (arid regions of the Australian outback, for example) is much cheaper. For simplicity and conservatism, though, we'll simply adopt the $1,000 per acre price for the following calculations. Renting an acre of land for a year at an annual price of 10% of the purchase price will cost $100. Incident sunlight provides a maximum of about 1,353 watts per square meter (the solar constant), or 5.5 x 10^6 watts per acre. Making allowances for inefficiencies in the solar cells, atmospheric losses, and losses caused by the angle of incidence of the incoming light reduces the actual average power production by perhaps a factor of 15 to about 3.5 x 10^5 watts. Over a year, this produces 1.1 x 10^13 joules or 3.1 x 10^6 kilowatt hours. The land cost $100, so the cost per joule is 0.9 nanocents and the cost per kilowatt hour is 3.3 millicents. Solar power, once we can make the solar cells cheaply enough, will be several thousand times cheaper than electric power is today. We'll be able to buy over 10^15 joules for under $10,000.
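    The land-and-sunlight arithmetic, as a Python sketch (the factor of 15 lumps together cell efficiency, atmospheric losses and angle of incidence, as in the text):
        # Rough cost of solar energy on cheap desert land, under the stated assumptions.
        acre_m2 = 4047.0                  # square meters per acre
        rent_dollars = 100.0              # per acre-year: 10% of a $1,000 purchase price
        solar_constant = 1353.0           # watts per square meter above the atmosphere
        derating = 15.0                   # losses from cells, atmosphere and angle of incidence

        avg_watts = acre_m2 * solar_constant / derating     # ~3.7e5 W per acre
        joules_per_year = avg_watts * 365.25 * 24 * 3600    # ~1.2e13 J
        kwh_per_year = joules_per_year / 3.6e6              # ~3.2e6 kWh
        print(f"{rent_dollars * 100 / joules_per_year * 1e9:.1f} nanocents per joule")   # ~0.9
        print(f"{rent_dollars * 100 / kwh_per_year * 1e3:.1f} millicents per kWh")       # ~3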

    While the energy dissipation per logic operation estimated by Drexler[85] is about 10^-23 joules, we'll content ourselves with the higher estimate of 10^-22 joules per logic operation. Our 10^15 joules will then power 10^37 gate operations: 10^12 gate operations for each bit in the structural data base or 5 x 10^13 gate operations for each of the 2 x 10^23 lipid molecules present in the brain.
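    In Python form (the 10^-22 joule figure is the deliberately relaxed estimate adopted above):
        # Gate operations purchased by the ~$10,000 energy budget.
        energy_joules = 1e15
        joules_per_gate_op = 1e-22        # ten times the rod-logic estimate, to be safe

        gate_ops = energy_joules / joules_per_gate_op
        print(f"{gate_ops:.0e} gate operations")             # 1e37
        print(f"{gate_ops / 1e25:.0e} per data base bit")    # 1e12
        print(f"{gate_ops / 2e23:.0e} per lipid molecule")   # 5e13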

    It should be emphasized that in off-board repair warming of the tissue is not an issue because the overwhelming bulk of the calculations and hence almost all of the energy dissipation takes place outside the tissue. Much of the computation takes place when the original structure has been entirely disassembled into its component molecules.

    How Much Is Enough?
    Is this enough computational power? We can get a rough idea of how much computer power might be required if we draw an analogy from image recognition. The human retina performs about 100 "operations" per pixel, and the human brain is perhaps 1,000 to 10,000 times larger than the retina. This implies that the human image recognition system can recognize an object after devoting some 10^5 to 10^6 "operations" per pixel. (This number is also in keeping with informal estimates made by individuals expert in computer image analysis). Allowing for the fact that such "retinal operations" are probably more complex than a single "gate operation" by a factor of 1000 to 10,000, we arrive at 10^8 to 10^10 gate operations per pixel -- which is well below our estimate of 10^12 operations per bit or 5 x 10^13 operations per molecule.
    To give a feeling for the computational power this represents, it is useful to compare it to estimates of the raw computational power of the human brain. The human brain has been variously estimated as being able to do 10^13[50], 10^15, or 10^16[114] operations a second (where "operation" has been variously defined but represents some relatively simple and basic action) [note 23]. The 10^37 total logic operations will support 10^29 logic operations per second for three years, which is the raw computational power of something like 10^13 human beings (even when we use the high end of the range for the computational power of the human brain). This is 10 trillion human beings, or some 2,000 times the present population of the earth. By present standards, this is a large amount of computational power. Viewed another way, if we were to divide the human brain into tiny cubes about 5 microns on a side (less than the volume of a typical cell), each such cube could receive the full and undivided attention of a dedicated human analyst for a full three years.
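    (A short sketch of this comparison; the ~1,400 cubic centimeter brain volume used below is the figure implied by the 1,000 pieces of 1.4 cubic centimeters mentioned later in this section.)

        # 1e37 operations expressed as "brain-equivalents" and as 5-micron cubes.
        total_ops = 1e37
        three_years = 3 * 3.15e7                   # seconds
        ops_per_second = total_ops / three_years   # ~1e29 operations per second
        brain_ops_per_second = 1e16                # high-end estimate for one human brain
        print(ops_per_second / brain_ops_per_second)   # ~1e13 brains working for 3 years

        brain_volume_um3 = 1400.0 * 1e12           # ~1,400 cm^3 in cubic microns
        cube_volume_um3 = 5.0 ** 3                 # a 5-micron cube
        print(brain_volume_um3 / cube_volume_um3)      # ~1.1e13 cubes, one per "analyst"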

    The next paragraph analyzes memory costs, and can be skipped without loss of continuity.

    This analysis neglects the memory required to store the complete state of these computations. Because this estimate of computational abilities and requirements depends on the capabilities of the human brain, we might require an amount of memory roughly similar to the amount of memory required by the human brain as it computes. This might require about 10^16 bits (10 bits per synapse) to store the "state" of the computation. (We assume that an exact representation of each synapse will not be necessary in providing capabilities that are similar to those of the human brain. At worst, the behavior of small groups of cells could be analyzed and implemented by the most efficient method, e.g., a "center-surround" operation in the retina could be implemented as efficiently as possible, and would not require detailed modeling of each neuron and synapse. In fact, it is likely that algorithms significantly different from those employed in the human brain will prove to be the most efficient for this rather specialized type of analysis, and so our use of estimates derived from low-level parts-counts from the human brain is likely to be conservative). For 10^13 programs each equivalent in analytical skills to a single human being, this would require 10^29 bits. At 100 cubic nanometers per bit, this gives 10,000 cubic meters. Using the cost estimates provided by Drexler[85] this would be an uncomfortable $1,000,000. We can, however, easily reduce this cost by partitioning the computation to reduce memory requirements. Instead of having 10^13 programs each able to "think" at about the same speed as a human being, we could have 10^10 programs each able to "think" at a speed 1,000 times faster than a human being. Instead of having 10 trillion dedicated human analysts working for 3 years each, we would have 10 billion dedicated human analysts working for 3,000 virtual years each. The project would still be completed in 3 calendar years, for each computer "analyst" would be a computer program running 1,000 times faster than an equally skilled human analyst. Instead of analyzing the entire brain at once, we would logically divide the brain into 1,000 pieces, each of about 1.4 cubic centimeters, and analyze each such piece fully before moving on to the next piece.

    This reduces our memory requirements by a factor of 1,000 and the cost of that memory to a manageable $1,000.
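    (The memory arithmetic of the preceding two paragraphs, as a sketch; the ~$100 per cubic meter figure is not stated directly but is implied by the quoted $1,000,000 for 10,000 cubic meters.)

        # Memory volume and cost, before and after partitioning the computation.
        bits_per_analyst = 1e16          # ~10 bits per synapse
        nm3_per_bit = 100.0
        dollars_per_m3 = 100.0           # implied by "$1,000,000 for 10,000 cubic meters"

        unpartitioned_bits = bits_per_analyst * 1e13       # 1e29 bits
        unpartitioned_m3 = unpartitioned_bits * nm3_per_bit * 1e-27
        print(unpartitioned_m3, unpartitioned_m3 * dollars_per_m3)   # 10000 m^3, $1,000,000

        partitioned_bits = bits_per_analyst * 1e10         # 1e26 bits (1/1,000 of the brain at a time)
        partitioned_m3 = partitioned_bits * nm3_per_bit * 1e-27
        print(partitioned_m3, partitioned_m3 * dollars_per_m3)       # 10 m^3, $1,000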

    It should be emphasized that the comparisons with human capabilities are used only to illustrate the immense capabilities of 10^37 logic operations. It should not be assumed that the software that will actually be used will have any resemblance to the behavior of the human brain.

    More Computer Power
    In the following paragraphs, we argue that even more computational power will in fact be available, and so our margins for error are much larger.
    Energy loss in rod logic, in Likharev's parametric quantron, in properly designed NMOS and CMOS circuits, and in many other proposed computational devices is related to the speed of operation. By slowing the switching time from 100 picoseconds to 100 nanoseconds or even 100 microseconds, we should achieve corresponding reductions in energy dissipation per gate operation. This allows substantial increases in computational power for a fixed amount of energy (10^15 joules): we can both decrease the energy dissipated per gate operation (by operating at a slower speed) and increase the total number of gate operations (by using more gates). Because the gates are very small to start with, increasing their number by a factor of as much as 10^10 (to approximately 10^27 gates) would still result in a total volume of only 100 cubic meters (recall that each gate plus overhead occupies about 100 cubic nanometers). This is a cube less than 5 meters on a side. Given that manufacturing costs will eventually reflect primarily material and energy costs, such a volume of slowly operating gates should be economical and would deliver substantially more computational power per joule.
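    (The volume claim is easy to verify:)

        # Volume of 1e27 slowly clocked gates at ~100 cubic nanometers per gate.
        gates = 1e27
        nm3_per_gate = 100.0
        volume_m3 = gates * nm3_per_gate * 1e-27
        print(volume_m3, volume_m3 ** (1.0 / 3.0))   # 100 m^3, a cube ~4.6 m on a side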

    We will not pursue this approach here for two main reasons. First, published analyses use the higher 100 picosecond speed of operation and 10^-22 joules of energy dissipation[85]. Second, operating at 10^-22 joules per operation at room temperature implies that most logic operations must be reversible and that less than one logic operation in 30 can be irreversible. Irreversible logic operations (which erase information) must inherently dissipate at least kT x ln(2) for fundamental thermodynamic reasons. The average thermal energy of a single atom or molecule at a temperature T (measured in kelvins) is approximately kT, where k is Boltzmann's constant. At room temperature, kT is about 4 x 10^-21 joules, so each irreversible operation must dissipate almost 3 x 10^-21 joules. The number of such operations must therefore be limited if we are to achieve an average energy dissipation of 10^-22 joules per logic operation.
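    (The "one in 30" figure follows directly from Boltzmann's constant; a sketch:)

        # Thermodynamic cost of erasing a bit at room temperature.
        import math
        k = 1.38e-23                    # Boltzmann's constant, J/K
        T = 300.0                       # room temperature, K
        erase_cost = k * T * math.log(2)    # ~2.9e-21 J per irreversible operation
        budget = 1e-22                  # target average energy per logic operation
        print(k * T)                    # ~4e-21 J
        print(erase_cost)               # ~2.9e-21 J
        print(erase_cost / budget)      # ~29: at most about 1 operation in 30 may be irreversible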

    While it should be feasible to perform computations in which virtually all logic operations are reversible (and hence need not dissipate any fixed amount of energy per logic operation)[9, 25, 32, 53, 112, 120], current computer architectures might require some modification before they could be adapted to this style of operation. Even without such modifications, however, it should be feasible to use current computer architectures while performing a major percentage (e.g., 99% or more) of their logic operations in a reversible fashion.

    Various electronic proposals show that almost all of the existing combinational logic in present computers can be replaced with reversible logic with no change in the instruction set that is executed[112, 113]. Further, while some instructions in current computers are irreversible and hence must dissipate at least kT x ln(2) joules for each bit of information erased, other instructions are reversible and need not dissipate any fixed amount of energy if implemented correctly. Optimizing compilers could then avoid using the irreversible machine instructions and favor the use of the reversible instructions. Thus, without modifying the instruction set of the computer, we can make most logic operations in the computer reversible.
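    (A toy illustration of the distinction, not a model of any real instruction set: an XOR update can be undone and so erases no information, while a plain overwrite destroys the old value and is therefore subject to the kT x ln(2) floor.)

        # Toy illustration only; the function names are invented for this example.
        def xor_update(x, y):
            # Reversible: applying the same update again restores the original x.
            return x ^ y

        def overwrite(x, y):
            # Irreversible: the previous value of x is erased.
            return y

        x, y = 0b1011, 0b0110
        updated = xor_update(x, y)
        assert xor_update(updated, y) == x     # the information in x was preserved
        clobbered = overwrite(x, y)            # nothing in (clobbered, y) recovers x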

    Further work on reversible computation can only lower the minimum energy expenditure per basic operation and increase the percentage of reversible logic operations. Much greater reductions in energy dissipation might be feasible[105]. While it is at present unclear how far the trend towards lower energy dissipation per logic operation can go, it is clear that we have not yet reached a limit and that no particular limit is yet visible.

    We can also expect further decreases in energy costs. By placing solar cells in space, the total incident sunlight per square meter can be greatly increased (particularly if the solar cell is located closer to the sun) while the total mass of the solar cell can be greatly decreased. Most of the mass in earth-bound structures is required not for functional reasons but simply to ensure structural integrity against the forces of gravity and the weather. In space, both of these problems are virtually eliminated. As a consequence, a very thin solar cell of relatively modest mass can have a huge surface area and provide immense power at much lower cost than estimated here.

    If we allow for the decreasing future cost of energy and the probability that future designs will dissipate less than 10^-22 joules per logic operation, it seems likely that we will have a great deal more computational power than required. Even ignoring these likely developments, we will have adequate computational power for repair of the brain down to the molecular level.

    Chemical Energy of the Brain
    Another issue is the energy involved in the complete disassembly and reassembly of every molecule in the brain. The total chemical energy stored in the proteins and lipids of the human brain is quite modest in comparison with 10^15 joules. When lipids are burned, they release about 9 kilocalories per gram. (Calorie-conscious dieters are actually counting "kilocalories" -- so a "300 Calorie Diet Dinner" really has 300,000 calories, or about 1,254,000 joules). When protein is burned, it releases about 4 kilocalories per gram. Given that there are about 100 grams of protein and 175 grams of lipid in the brain, this means there are almost 2,000 kilocalories of chemical energy stored in the structure of the brain, or about 8 x 10^6 joules. This is more than 10^8 times smaller than the 10^15 joules that one person can reasonably purchase in the future. It seems unlikely that the construction of the human brain must inherently require substantially more than 10^7 joules, and even more unlikely that it could require over 10^15 joules. The major energy cost in repair down to the molecular level appears to be in the computations required to "think" about each major molecule in the brain and the proper relationships among those molecules.
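    (The arithmetic:)

        # Chemical energy of the brain's protein and lipid, using the figures above.
        KCAL_TO_JOULES = 4184.0
        protein_kcal = 100.0 * 4.0      # 100 grams at 4 kcal/gram
        lipid_kcal = 175.0 * 9.0        # 175 grams at 9 kcal/gram

        total_kcal = protein_kcal + lipid_kcal
        total_joules = total_kcal * KCAL_TO_JOULES
        print(total_kcal, total_joules)     # ~1,975 kcal, ~8.3e6 J
        print(1e15 / total_joules)          # ~1.2e8: the energy budget dwarfs it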
    Determining the Healthy State
    In the second phase of the analysis, determination of the healthy state, we determine what the repaired (healthy) tissue should look like at the molecular level. That is, the initial structural data base produced by the analysis phase describes unhealthy (frozen) tissue. In determination of the healthy state, we must generate a revised structural data base that describes the corresponding healthy (functional) tissue. The generation of this revised data base requires a computer program that has an intimate understanding of what healthy tissue should look like, and the correspondence between unhealthy (frozen) tissue and the corresponding healthy tissue. As an example, this program would have to understand that healthy tissue does not have fractures in it, and that if any fractures are present in the initial data base (describing the frozen tissue) then the revised data base (describing the resulting healthy tissue) should be altered to remove them. Similarly, if the initial data base describes tissue with swollen or non-functional mitochondria, then the revised data base should be altered so that it describes fully functional mitochondria. If the initial data base describes tissue which is infected (viral or bacterial infestations) then the revised data base should be altered to remove the viral or bacterial components.
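    (A deliberately simplified sketch of this step follows. Every name in it -- the data base layout, the rule functions -- is invented purely for illustration; an actual program would embody far richer models of healthy tissue.)

        # Hypothetical sketch: revising a structural data base toward the healthy state.
        import copy

        def remove_fractures(db):
            db["fractures"] = []                  # healthy tissue has no fracture planes
            return db

        def restore_mitochondria(db):
            for m in db["mitochondria"]:
                m["state"] = "functional"         # swollen or non-functional -> functional
            return db

        def remove_pathogens(db):
            db["molecules"] = [mol for mol in db["molecules"]
                               if mol.get("origin") != "viral_or_bacterial"]
            return db

        RULES = [remove_fractures, restore_mitochondria, remove_pathogens]

        def determine_healthy_state(initial_db):
            revised = copy.deepcopy(initial_db)   # keep the initial data base intact
            for rule in RULES:
                revised = rule(revised)
            return revised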
    While the revised data base describes the healthy state of the tissue that we desire to achieve, it does not specify the method(s) to be used in restoring the healthy structure. There is in general no necessary implication that restoration will or will not be done at some specific temperature, or will or will not be done in any particular fashion. Any one of a wide variety of methods could be employed to actually restore the specified structure. Further, the actual restored structure might differ in minor details from the structure described by the revised data base.

    The complexity of the program that determines the healthy state will vary with the quality of the suspension and the level of damage prior to suspension. Clearly, if cryonic suspension "almost works", then the initial data base and the revised data base will not greatly differ. Cryonic suspension under favorable circumstances preserves the tissue with good fidelity down to the molecular level. If, however, there was significant pre-suspension injury, then deducing the correct (healthy) structural description is more complex. However, it should be feasible to deduce the correct structural description even in the face of significant damage. Only if the structure is obliterated beyond recognition will it be infeasible to deduce the undamaged state of the structure.

    ALTERNATIVES TO REPAIR
    A brief philosophical aside is in order. Once we have generated an acceptable revised structural data base, we can in fact pursue either of two distinctly different possibilities. The obvious path is to continue with the repair process, eventually producing healthy tissue. An alternative path is to use the description in the revised structural data base to guide the construction of a different but "equivalent" structure (e.g., an "artificial brain"). This possibility has been much discussed[11, 50], and has recently been called "uploading" (or "downloading")[26]. Whether or not such a process preserves what is essentially human is often hotly debated, but it has advantages wholly unrelated to personal survival. As an example, the knowledge and skills of an Einstein or Turing need not be lost: they could be preserved in a computational model. On a more commercial level, the creative skills of a Spielberg (whose movies have produced a combined revenue in the billions) could also be preserved. Whether or not the computational model was viewed as having the same essential character as the biological human after which it was patterned, it would indisputably preserve that person's mental abilities and talents.
    It seems likely that many people today will want complete physical restoration (despite the philosophical possibilities considered above) and will continue through the repair planning and repair phases.

    RESTORATION
    In the third phase of repair we start with an atomically precise description (the revised data base) of the structure that we wish to restore, and a filing cabinet holding the molecules that will be needed during restoration. Optionally, the molecules in the filing cabinet can be from the original structure. This deals with the concerns of those who want restoration with the original atoms. Our objective is to restore the original structure with a precision sufficient to support the original functional capabilities. Clearly, this would be achieved if we were to restore the structure with atomic precision. Before discussing this most technically exacting approach, we will briefly mention the other major approaches that might be employed.
    We know it is possible to make a human brain, for this has been done by traditional methods for many thousands of years. If we were to adopt a restoration method that was as close as possible to the traditional technique for building a brain, we might use a "guided growth" strategy. That is, in simple organisms the growth of every single cell and of every single synapse is determined genetically. "All the cell divisions, deaths, and migrations that generate the embryonic, then the larval, and finally the adult forms of the roundworm Caenorhabditis elegans have now been traced."[103]. "The embryonic lineage is highly invariant, as are the fates of the cells to which it gives rise"[102]. The appendix says: "Parts List: Caenorhabditis elegans (Bristol) Newly Hatched Larva. This index was prepared by condensing a list of all cells in the adult animal, then adding comments and references. A complete listing is available on request..." The adult organism has 959 cells in its body, 302 of which are nerve cells[104].

    Restoring a specific biological structure using this approach would require that we determine the total number and precise growth patterns of all the cells involved. The human brain has roughly 10^12 nerve cells, plus perhaps ten times as many glial cells and other support cells. While simply encoding so complex a structure into the genome of a single embryo might prove overly difficult, it would certainly be feasible to control critical cellular activities by the use of on-board nanocomputers. That is, each cell would be controlled by an on-board computer, and that computer would in turn have been programmed with a detailed description of the growth pattern and connections of that particular cell. While the cell would function normally in most respects, critical cellular activities, such as replication, motility, and synapse growth, would be under the direct control of the on-board computer. Thus, as in C. elegans but on a larger scale, the growth of the entire system would be "highly invariant." Once the correct final configuration had been achieved, the on-board nanocomputers would terminate their activities and be flushed from the system as waste.

    This approach might be criticized on the grounds that the resulting person was a "mere duplicate," and so "self" had not been preserved. Certainly, precise atomic control of the structure would appear to be difficult to achieve using guided growth, for biological systems do not normally control the precise placement of individual molecules. While the same atoms could be used as in the original, it would seem difficult to guarantee that they would be in the same places.

    Concerns of this sort lead to restoration methods that provide higher precision. In these methods, the desired structure is restored directly from molecular components by placing the molecular components in the desired locations. A problem with this approach is the stability of the structure during restoration. Molecules might drift away from their assigned locations, destroying the structure.

    An approach that we might call "minimal stabilization" would involve synthesis in liquid water, with mechanical stabilization of the various lipid membranes in the system. A three-dimensional grid or scaffolding would provide a framework that would hold membrane anchors in precise locations. The membranes themselves would thus be prevented from drifting too far from their assigned locations. To prevent chemical deterioration during restoration, it would be necessary to remove all reactive compounds (e.g., oxygen).

    In this scenario, once the initial membrane "framework" was in position and held there by the scaffolding, further molecules would be brought into the structure and put in the correct locations. In many instances, such molecules could be allowed to diffuse freely within the cellular compartment into which they had been introduced. In some instances, further control would be necessary. For example, a membrane-spanning channel protein might have to be confined to a specific region of a nerve cell membrane, and prevented from diffusing freely to other regions of the membrane. One method of achieving this limited kind of control over further diffusion would be to enclose a region of the membrane by a diffusion barrier (much like the spread of oil on water can be prevented by placing a floating barrier on the water).

    While it is likely that some further cases would arise where it was necessary to prevent or control diffusion, the emphasis in this method is in providing the minimal control over molecular position that is needed to restore the structure.

    While this approach does not achieve atomically precise restoration of the original structure, the kinds of changes that are introduced (diffusion of a molecule within a cellular compartment, diffusion of a membrane protein within the membrane) would be very similar to the kinds of diffusion that would take place in a normal biological system. Thus, the restored result would have the same molecules with the same atoms, and the molecules would be in similar (though not exactly the same) locations they had been in prior to restoration.

    To achieve even more precise control over the restored structure, we might adopt a "full stabilization" strategy. In this strategy, each major molecule would be anchored in place, either to the scaffolding or an adjacent molecule. This would require the design of a stabilizing molecule for each specific type of molecule found in the body. The stabilizing molecule would have a specific end attached to the specific molecule, and a general end attached either to the scaffolding or to another stabilizing molecule. Once restoration was complete, the stabilizing molecules would release the molecules that were being stabilized and normal function would resume. This release might be triggered by the simple diffusion of an enzyme that attacked and broke down the stabilizing molecules. This kind of approach was considered by Drexler[1].

    Low Temperature Restoration
    Finally, we might achieve stability of the intermediate structure by using low temperatures. If the structure were restored at a sufficiently low temperature, a molecule put in a certain place would simply not move. We might call this method "low temperature restoration."
    In this scenario, each new molecule would simply be stacked (at low temperature) in the right location. This can be roughly likened to stacking bricks to build a house. A hemoglobin molecule could simply be thrown into the middle of the half-restored red blood cell. Other molecules whose precise position was not critical could likewise be positioned rather inexactly. Lipids in the lipid bi-layer forming the cellular membrane would have to be placed more precisely (probably with an accuracy of several angstroms). An individual lipid molecule, having once been positioned more or less correctly on a lipid bi-layer under construction, would be held in place (at sufficiently low temperatures) by van der Waals forces. Membrane-bound proteins could also be "stacked" in their proper locations. Because biological systems make extensive use of self-assembly, it would not be necessary to achieve perfect accuracy in the restoration process. If a biological macromolecule is positioned with reasonable accuracy, it will automatically assume the correct position upon warming.

    Large polymers, used either for structural or other purposes, pose special problems. The monomeric units are covalently bonded to each other, and so simple "stacking" is inadequate. If such polymers cannot be added to the structure as entirely pre-formed units, then they could be incrementally restored during assembly from their individual monomers using the techniques discussed earlier involving positional synthesis using highly reactive intermediates. Addition of monomeric units to the polymer could then be done at the most convenient point during the restoration operation.

    The chemical operations required to make a polymer from its monomeric units at reduced temperatures are unlikely to use the same reaction pathways that are used by living systems. In particular, the activation energies of most reactions that take place at 310 K (98.6 degrees Fahrenheit) cannot be met at 77 K: most conventional compounds simply don't react at that temperature. However, as discussed earlier, assembler-based synthesis techniques using highly reactive intermediates in near-perfect vacuum, with mechanical force providing the activation energy, will continue to work quite well, even if we assume that thermal activation energy is entirely absent (e.g., that the system is close to 0 kelvin).
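    (An Arrhenius estimate makes the point quantitative. The 0.5 eV activation energy below is an assumed, merely typical value chosen for illustration; the qualitative conclusion is insensitive to the exact number.)

        # How much slower a thermally activated reaction runs at 77 K than at 310 K.
        import math
        K_BOLTZMANN_EV = 8.617e-5       # Boltzmann's constant in eV/K
        ACTIVATION_EV = 0.5             # assumed typical activation energy

        def arrhenius_factor(temperature_k):
            # Relative rate ~ exp(-Ea / kT); the prefactor cancels in the ratio.
            return math.exp(-ACTIVATION_EV / (K_BOLTZMANN_EV * temperature_k))

        slowdown = arrhenius_factor(310.0) / arrhenius_factor(77.0)
        print(slowdown)                 # ~1e24-1e25: thermal chemistry is effectively frozen out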

    An obvious problem with low temperature restoration is the need to re-warm the structure without incurring further damage. Much "freezing" injury takes place during rewarming, and this would have to be prevented. One solution is discussed in the next two paragraphs.

    Generally, the revised structural data base can be further altered to make restoration easier. While certain alterations to the structural data base must be banned (anything that might damage memory, for example) many alterations would be quite safe. One set of safe alterations would be those that correspond to real-world changes that are non-damaging. For example, moving sub-cellular organelles within a cell would be safe -- such motion occurs spontaneously in living tissue. Likewise, small changes in the precise physical location of cell structures that did not alter cellular topology would also be safe. Indeed, some operations that might at first appear dubious are almost certainly safe. For example, any alteration that produces damage that can be repaired by the tissue itself once it is restored to a functional state is in fact safe -- though we might well seek to avoid such alterations (and they do not appear necessary). While the exact range of alterations that can be safely applied to the structural data base is unclear, it is evident that the range is fairly wide.

    An obvious modification which would allow us to re-warm the structure safely would be to add cryoprotectants. Because we are restoring the frozen structure with atomic precision, we could use different concentrations and different types of cryoprotectants in different regions, thus matching the cryoprotectant requirements with exquisite accuracy to the tissue type. This is not feasible with present technology because cryoprotectants are introduced using simple diffusive techniques.

    Extremely precise control over the heating rate would also be feasible, as would very rapid heating[126]. Rapid heating would allow less time for damage to take place. Rapid heating, however, might introduce problems of stress and resulting fractures. Two approaches for eliminating this problem are (1) modifying the structure so that its coefficient of thermal expansion is very small and (2) increasing the strength of the structure.

    One simple method of ensuring that the volume occupied before and after warming was the same (i.e., of making a material with a very small thermal expansion coefficient) would be to disperse throughout the material many small regions with the opposite thermal expansion tendency. For example, if a volume tended to expand upon warming, the initial structure could include "nanovacuoles," or empty regions about a nanometer in diameter. Such regions would be stable at low temperatures but would collapse upon warming. By finely dispersing such nanovacuoles it would be possible to eliminate any tendency of even small regions to expand on heating. Most materials expand upon warming, and this tendency can be countered by the use of nanovacuoles.
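    (A rough estimate of the required vacuole fraction; the thermal expansion coefficient used below is an assumed, typical value for a solid and is included only for illustration.)

        # Volume fraction of nanovacuoles needed to cancel thermal expansion on warming.
        alpha_linear = 1e-5             # assumed linear expansion coefficient, 1/K
        delta_t = 300.0 - 77.0          # warming from 77 K to roughly room temperature

        volume_expansion_fraction = 3.0 * alpha_linear * delta_t
        print(volume_expansion_fraction)    # ~0.007: vacuoles occupying ~0.7% of the volume
                                            # (and collapsing on warming) would compensate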

    Of course, ice has a smaller volume after it melts. The introduction of nanovacuoles would only exacerbate its tendency to shrink upon melting. In this case we could use vitrified H2O rather than the usual crystalline variety. H2O in the vitreous state is disordered (as in the liquid state) even at low temperatures, and has a lower volume than crystalline ice. This eliminates and even reverses its tendency to contract on warming. Vitrified water at low temperature is denser than liquid water at room temperature.

    Increasing the strength of the material can be done in any of a variety of ways. A simple method would be to introduce long polymers into the frozen structure. Proteins are one class of strong polymers that could be incorporated into the structure with minimal tissue compatibility concerns. Any potential fracture plane would be criss-crossed by the newly added structural protein, and so fractures would be prevented. If an enzyme that degrades this artificially introduced structural protein were also included, the protein would be automatically and spontaneously digested immediately after warming. Very large increases in strength could be achieved by this method.

    By combining (1) rapid, highly controlled heating; (2) atomically precise introduction of cryoprotectants; (3) the controlled addition of small nanovacuoles and regions of vitrified H2O to reduce or eliminate thermal expansion and contraction; and (4) the addition of structural proteins to protect against any remaining thermally induced stresses, the damage that might otherwise occur during rewarming should be completely avoidable.

    The proposal that all four methods be used in combination is open to the valid criticism that simpler approaches are likely to suffice, e.g., precise distribution of cryoprotectants coupled with relatively slow warming[131]. The proposals advanced in this paper should not be taken as predictions about what will happen or what will be necessary, but as arguments that certain capabilities will be feasible. A belt is sufficient to hold up a pair of pants. When explaining to someone who has never seen or heard of clothing that it is possible to hold up a pair of pants, it is forgivable to point out that the combined use of both belt and suspenders should be effective in preventing the pants from falling down.

    CONCLUSION
    Cryonic suspension can transport a terminally ill patient to future medical technology. The damage done by current freezing methods is likely to be reversible at some point in the future. In general, for cryonics to fail, one of the following two "failure criteria" must be met:
    1. Pre-suspension and suspension injury would have to be sufficient to cause information theoretic death. In the case of the human brain, the damage would have to obliterate the structures encoding human memory and personality beyond recognition.
    2. Repair technologies that are clearly feasible in principle, based on our current understanding of physics and chemistry, would have to remain undeveloped in practice, even after several centuries.
    An examination of potential future technologies[85] supports the argument that unprecedented capabilities are likely to be developed. Restoration of the brain down to the molecular level should eventually prove technically feasible. Off-board repair utilizing divide-and-conquer is a particularly simple and powerful method which illustrates some of the principles that can be used by future technologies to restore tissue. Calculations support the idea that this method, if implemented, would be able to repair the human brain within about three years. For several reasons, better methods are likely to be developed and used in practice.
    Off-board repair consists of three major steps: (1) determine the coordinates and orientation of each major molecule; (2) determine a set of appropriate coordinates in the repaired structure for each major molecule; and (3) move each molecule from the former location to the latter. The various technical problems involved are likely to be met by future advances in technology. Because tissue can be stored in liquid nitrogen for several centuries, the development time of these technologies is not critical.

    A broad range of technical approaches to this problem is feasible. The particular form of off-board repair that uses divide-and-conquer requires only that (1) tissue can be divided by some means (such as fracturing) which does not itself cause significant loss of structural information; (2) the pieces into which the tissue is divided can be moved to appropriate destinations (for further division or for direct analysis); (3) a sufficiently small piece of tissue can be analyzed; (4) a program capable of determining the healthy state of tissue given the unhealthy state is feasible; (5) sufficient computational resources for execution of this program in a reasonable time frame are available; and (6) restoration of the original structure given a detailed description of that structure is feasible.

    It is impossible to conclude based on present evidence that either failure criterion is likely to be met.

    Further study of cryonics by the technical community is needed. At present, there is a remarkable paucity of technical papers on the subject [note 24]. As should be evident from this paper, multidisciplinary analysis is essential in evaluating its feasibility, for specialists in any single discipline have a background too narrow to encompass the whole. Given the life-saving nature of cryonics, it would be tragic if it were to prove feasible but was little used.

    bye!
     
  9. Rick Valued Senior Member

    Messages:
    3,336
    PS: I AM WAITING FOR THE DAY WHEN THE UPLOADING IN-CHARGE WILL ASK ME A QUESTION LIKE THIS:
    HI, I ASSUME YOU HAVE TAKEN AN INTEREST IN UPLOADING YOURSELF... OKAY, WHAT KIND OF A WORLD WOULD YOUR BRAIN ADJUST TO PROPERLY: DEVILISH, ALL GOOD, EARTH-LIKE, ENTERPRISE-LIKE LIFE, A TWIN-STAR SOLAR SYSTEM? MAYBE YOU'LL LIKE TO STAY AT A SIMPLE HOTEL...??!!



    BYE!
     
  10. kmguru Staff Member

    Messages:
    11,757
    Hey zion:

    Are you working for NSA or something?

    Thousands of years ago, the Tibetan people were exposed to Sanatana Dharma as well as the Chinese way of life, since there was co-mingling of people from India and China in that area. The place stayed fairly untouched by outside influence until Buddha's group reached the mountain top. It occurs to me that some of the teachings of Buddha were rolled into the old philosophy. The continuum of consciousness is in both Buddhism and Hinduism. If we focus on just one cycle of consciousness, even then it encompasses and recognizes a universal consciousness of which we are a part.

    While I previously thought that after the human level there is only one universal consciousness, I have since revised my thinking to include a multilayer approach where universal consciousness could be at the highest level.
     
  11. Rick Valued Senior Member

    Messages:
    3,336
    A JOKE THAT I"D LIKE TO SHARE(THIS MAY BE BAD

    Please Register or Log in to view the hidden image!

    )

    WHAT DOES NSA STAND FOR?

    ANSWER: NO SUCH AGENCY.

    BYE!
     
  12. kmguru Staff Member

    Messages:
    11,757
    Then you can confidently say that you work for the NSA?

    I was thinking of making a fake ID with NSA in big letters, spelling out No Such Agency, and posting it here, but decided not to try it. Some people may not have any sense of humor. Someone from Australia could do it.
     
  13. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,105
    I've been busy re-asserting my understanding of the whole universe, and have come to the conclusion that we are already inside a machine.

    I'll see if I can explain this in a way that everybody can understand; feel free to comment on it and/or disprove it if you will.

    Firstly, the state of technology means that certain projects are going ahead that involve "Quantum Compression"; using a principle of superstrings and quantum-jumping electrons, it's possible for a system to fold universes and create parallels.

    Not bad for something that just carries a free-floating electron charge via a photo-electric effect, considering all the information of the universe can be concealed in wires and circuitry.

    In a way it's odd, because it undermines and contradicts my understanding of how atomic particles are formed through photon acceleration (namely, that electrons, protons, and neutrons are just highly sped photons that cause their own gravitational shift).

    I mentioned previously in another post how it would be possible to create a machine, create a universe in it, and then exist in the universe you create, even though that universe supposedly existed before the machine.

    (It's just a matter of ratios: ratios of size, ratios of timing, even ratios of quanta.)

    I suppose I could quote something from John Lennon: "Nothing is real!" (I only know that quote from a John Gribbin book.)
     
  14. Rick Valued Senior Member

    Messages:
    3,336
    Seems like theory is catching up, Stryder. Why didn't you realise it back when I said: what if black holes may be hard lines to the real world, and the present world is just a facade or a dream?


    bye!
     
  15. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,105
    Well Zion, as I said the theory has been shifting/morphing and evolving as my understanding changes. (Plus I didn't spot your other post)

    I originally looked into this while discussing some metaphysical understanding of black holes with Reign_of_Error. My understanding of atomics at that time was, "An atom: it's made up of particles". My assumption was that particles were solid.

    Now understand that it was naivety that misled me. I stumbled across some information on "electron decay, when electrons are removed from their atomic state", along with something else I found, "neutron bombardment".

    Both were dealing with the decay of these "particles", although the explanation about the neutron bombardment was about "frequency", or a "wave of neutrons".

    I read up from that point on how scientists had argued for years over atomic quanta: particles or waveforms.
    After all, how could something be a particle and act like a wave, and vice versa?

    Well, my next thought was about how a particle could spin and cause a wave function behind it if moving at speed, but this wasn't the answer.

    My understanding came to me due to the photo-electric effect (when radiation bombards an atom and a rebound/reflex energy is released). I realised that the whole atom must be a waveform, and the particles too.

    For instance, if you were to take light and accelerate it, you would obtain electro-magnetic radiation. Electro-magnetic radiation can be perceived as giving off its own gravity (due to the magnetics).
    Taking a highly sped charge of electromagnetics and spinning it would, in theory, cause it to get caught in its own gravity.

    This, when looking at all that has been mentioned of classical and superstring physics, gave the impression that the whole phenomenon was made from accelerated light.

    Pretty much making us all holograms.

    (When you look deeper into how atomic matter is derived from the output of dying stars, you should realise that deep within the star itself occur the reactions that create the atoms that stream out. Since light is emitted from its proximity, it adds to the theory by suggesting that it's light that makes up the physical world.)
     
  16. itchy Registered Senior Member

    Messages:
    47
    (First I'd like to say, Stryder, I don't have a clue what you are talking about, and I don't think you have either. Protons and neutrons are not highly sped photons, light cannot be accelerated, and what the hell is gravitational shift? And why do you think we live in a simulated world? Doesn't that just add to the complexity? You know Occam's Razor?)

    But this is not what I was going to talk about. Earlier you talked about virtual worlds and how humans might transfer themselves into these in the future. But have you thought about the immense computational resources such a world would require? In order to simulate a city without any optimizations, you would need a computer larger than the city itself. With a reasonably correct lighting model you would have to raytrace all photons and do collision detection. Of course there are a lot of optimizations you can do, but it will probably be a very crude approximation of the real world. It doesn't seem very practical or pleasurable.

    And another issue touched on earlier but perhaps forgotten: do you think you will transfer yourself to the new world by scanning your brain? You are merely making a copy of yourself; what good is that to you? I think there is a flaw in your logic here.

    Another way to do it is to slowly merge yourself into the virtual world, gradually replacing your own body. Perhaps that is more logical.
     
  17. Rick Valued Senior Member

    Messages:
    3,336
    I don't understand what you mean. First of all, did you read the whole of the material presented here thoroughly?
    The mechanisms presently available for mind uploads, or WBE, I mean?

    Secondly, we have been talking about the physicalities of the procedure and the computer required to do it. Some of the topics have already been discussed here. If you need further clarification, please don't hesitate.


    bye!
     
  18. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,105
    Itchy,
    I think you'll find you're wrong. Alright, I might have said "accelerated", but what is an X-ray, itchy? It's still light, just faster than the range usually thought of.

    ####The following are examples to try and give a visual clue####

    In fact its speed gives it electro-magnetics that I can only hope to explain with a visual of something like Stargate's wormhole opening, where the accelerated light drops back to normal speed, causing an electromagnetic reaction.

    Another example would be to look at the aerodynamics of something like a bullet or other projectile moving along at its decreasing velocity; the airflow "whirls" behind it to cause it to travel forwards.

    #################################################

    I mention X-rays because radiation (in this case, the higher-speed portion of the spectrum) can pass through most mass.
    Mass is permeable.

    The original reasoning for this (which I'm sure is what you will consider) is the small "holes" between the atoms, the holes within the matrix (the matter's matrix).

    My understanding is that it isn't just holes; what gives it away is that a photo-electric effect caused by radiation outputs a "higher" photonic field than the matter would produce normally.

    In other words, the radiation joins with the mass's matrix and combines in an output of a higher quanta.
     
  19. Avatar smoking revolver Valued Senior Member

    Messages:
    19,083
    I'm not from Australia, but anywayz

     
  20. Adam §Þ@ç€ MØnk€¥ Registered Senior Member

    Messages:
    7,415
    Now that is cool.

     
  21. Rick Valued Senior Member

    Messages:
    3,336
    I am sure the NSA has its reasons for enveloping itself in mystery. Hackers, phreakers, and crackers like us are too curious sometimes...

    bye!
     
  22. Stryder Keeper of "good" ideas. Valued Senior Member

    Messages:
    13,105
    I've been told to state:
    We don't really exist
    We don't really know where you live
    We don't follow your every move through bank transactions, bar codes, and interfacing with telecommunications.
    There is no We, us or they.
    And we don't make people disappear into a pile of clothing on a beach.

    (all the above, is my sense of humour taking over)

    I'm flattered that I belong to "No Such Agency"; everybody else "shouldn't" join.
    (This is where you've been told you can't, and so you want to.)
     
  23. Adam §Þ@ç€ MØnk€¥ Registered Senior Member

    Messages:
    7,415
    Speaking of people disappearing and leaving only piles of clothing on the beach... you know about Australia's Harold Holt, right?
     