04-06-06, 03:28 AM #1
Can a machine know?
Umm, I'm doing an essay for Theory of Knowledge, and my topic is "Can a machine know?" I was wondering if anyone could help me with some ideas?
I have to discuss the problems of knowledge and ways of knowing. I might discuss things like what a machine is and what knowing is. So yeah, any ideas or comments would be greatly appreciated!
Thanks a lot
04-06-06, 06:10 AM #2
04-06-06, 10:57 AM #3
Originally Posted by roadblock
04-06-06, 11:39 AM #4
Good deal, roadblock. Google Ray Kurzweil, artificial intelligence, and cybernetics. You'd have to know the definition of "machine." In a few decades, many people will be enhanced by all sorts of mechanical goodies, in essence creating a "machine." You can also define "know" at your discretion.
Machines can know when you define the terms. Of course, in the conventional "Does this machine have self-awareness?" sense, I'd assume the answer is no nowadays. In less than 50 years, that question will be inevitably answered with "Duh, of course it does!"
04-06-06, 07:21 PM #5
Originally Posted by roadblock
Typical TOK question - you can write the essay on almost anything you want, depending on how you define the terminology in the first paragraph.
04-06-06, 07:23 PM #6
Thanks guys for the replies, really appreciated! If you think of any more ideas, can you let me know? Such as, what do you define a machine as? And how can you know? I'm doing a survey-type thing as a way of knowing, to find out what people think of these things. I need personal viewpoints so my essay isn't biased on my behalf (if that makes sense)
04-06-06, 07:34 PM #7
My definition of a machine is a mechanical device that converts energy into useful work.
04-06-06, 08:59 PM #8
Originally Posted by roadblock
If you really want it to be unbiased, make it everyone's essay! Wait for this thread to reach the word limit, then print it out and hand it in
04-06-06, 11:33 PM #9
No, a machine doesn't know anything.
It just has memory and performs the commands it is issued. A machine cannot break into the abstract from its primary programming; it must serve its function(s), and nothing more.
For a machine to be able to know something would mean it is aware and conscious, and if something is aware and conscious on that level, then it is not a machine anymore.
It has become a life form with free will and free imagination. A machine can never step over its boundaries as a functional machine, otherwise it cannot be classed as a machine anymore, as it does more than just perform commands.
It has become abstract to itself, separating from function.
Like a human: I need food, I need water, I need oxygen, but if I want to disobey my function and ignore what my system needs, I can.
I can choose to not eat, not breathe, and not drink, and make myself die. I can go against my function and needs, which in fact makes me not a machine anymore, because I can go against my programming. I don't actually have to eat or drink at all; I will just die if I don't. But if I die, I still died not serving my function, which means I had no need for it. The need would be linked to wanting to live, which some humans do not want to do. Some people want to die, which means they are not being machines, because life forms are programmed to reproduce and survive. Some people don't want to do either, and actually choose that fate and accomplish it, going against the machine.
04-07-06, 07:11 AM #10
Integrative Science: The Death Knell of Scientific Materialism www.designeduniverse.com/bb/integrativescience/
approx 19 scrolls to: [difference between machines and organisms] "One of the most spectacular differences between machines and living systems is that in the case of machines the source of the work is not related to any significant structural changes. The systemic forces of machines...work only if the constituents of the machine are taken into motion by energy sources which are outer to these constituents. The inner states of the constituents of a machine remain practically constant. The task of the constituents of a machine is to convert some kind of energy into work. In contrast, in living systems the energy of the internal build-up, of the structure of the living matter, is transformed into work. The energy of the food is not transformed into work, but into the maintenance and renewal of their internal structure and inner states. Therefore, living systems are not power machines." [ff]
04-07-06, 11:30 AM #11
Originally Posted by EmptyForceOfChi
A flip-flop (a basic computer circuit) remembers its last state.
this machine seems to know where its center of gravity is
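The flip-flop's one-bit "memory" can be sketched in software. This is a minimal behavioral model of a set-reset latch (the class name and structure are my own illustration, not gate-level hardware):

```python
class SRLatch:
    """Behavioral model of a set-reset latch: it holds its last asserted input."""

    def __init__(self):
        self.q = False  # the stored bit

    def set(self):
        self.q = True

    def reset(self):
        self.q = False

    @property
    def state(self):
        # Reading the state does not change it; the latch "remembers".
        return self.q


latch = SRLatch()
latch.set()
assert latch.state is True   # still True, however many reads later
latch.reset()
assert latch.state is False
```

Whether this kind of persistence counts as "knowing" is exactly what the thread is debating.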
04-19-06, 01:28 AM #12
Can the PC in front of you 'know you'?
The best thing it can do is reduce you to a digital pattern, like 10010001. Then it will compare this to a list of patterns, which must be programmed beforehand, and if a match is found the program can print 'You're my boss!'
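A toy version of that lookup might look like this in Python; the second pattern and the fallback message are illustrative additions of mine, not from any real system:

```python
# The machine's entire "knowledge": a table programmed beforehand.
KNOWN_PATTERNS = {
    "10010001": "You're my boss!",
    "01101100": "Hello, guest.",
}

def recognize(pattern: str) -> str:
    # Pure comparison against the stored list; nothing is learned or inferred.
    return KNOWN_PATTERNS.get(pattern, "No match found.")

print(recognize("10010001"))  # You're my boss!
print(recognize("11111111"))  # No match found.
```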
The brain is in the same dilemma, for it can only process electrochemical pulses sent by the sense organs. The very big difference here is that the brain also has a consciousness (CONS) module (don't ask me where!) and, after brain computation (which nobody knows about!), you'll have perceptions.
CONS is the hottest thing in science today. How do pulses sent by the optic nerves become shapes, colors, etc., in my CONS? Or, how do pulses sent by my auditory nerves become music within my CONS?
04-19-06, 03:09 PM #13
but if I die, I still died not serving my function,
04-19-06, 03:44 PM #14
Pretty sure machines/computers cannot do anything without instructions.
04-19-06, 04:04 PM #15
Originally Posted by Alejandro
04-19-06, 07:45 PM #16
Or your preprogrammed instincts?
Without the driving force of instincts/emotions, a person would cease all activity that required conscious effort, fall over, and die of thirst in a few days.
04-19-06, 09:27 PM #17
Hi yourself and welcome.
Originally Posted by SONDADAREAS
Detailed answer to the second question is in Ref 1 of the following, which I have placed in forums months before you joined sciforums, but never before in the computer-related ones, so here it is again; but be warned, it is long and has some introductory paragraphs about "free will" that apply equally well to "knowing." I.e. replace "free will" with "knowing" below.
Clearly there is no free will in determinism or deterministic devices, such as normal computers.
People have consciousness (hard to define, but most have experienced it). To a much greater extent than many wish to acknowledge, the reasons we do things are not the reasons that we accept as their cause in our consciousness. (I could cite several examples and psychological experiments that show this, but do not, as I would need to expend some effort to drag them up.) This fact opens up the possibility of illusory free will. I.e. we may do something for reasons that have nothing to do with the "choice" we made and think what we did is a reasonable selection among the alternatives we, with our "free will," have "chosen." This illusion of free will may be the result of a small conjunction of neural impulses on a particular brain cell that in turn, when it fires (or not), causes other action potentials etc. That is, we do things (or not) because of neurological activity in our brains (or, more generally, in our nervous system, as some things we do require no processing in the brain).
The true cause of these neurological patterns (making our "choices") is only partially controlled by our conscious state (also a pattern of neurological activity), which interacts with neural activity that is not accessible to consciousness; but "our" choices are also partially controlled by non-conscious neural activity. For this reason alone, we do not really make "choices," but we can at least influence them.
As most neural activity (some retinal neurons excepted) is "all or none" firing of action potentials, and this is controlled by the timing and quantity of the inhibitory and excitatory chemicals arriving at the nerve's dendrite tree (and slightly on the cell body), it is essentially random, or certainly unknowable.
Thus, randomness may (and quite possibly does) provide the illusion of free will, as we believe we are the agents making the choices, acting "of our own free will." We can define this to be "free will," but I prefer what I call Genuine Free Will, and that may be possible.
Genuine Free Will is Possible
Before the advent of Quantum Mechanics, the future appeared to Laplace to be exactly determined by the past state of the universe, even if it was clearly unpredictable. Chaos theory and measurement errors, plus ignorance about small asteroid orbits, rupture stresses in tectonic faults or vascular systems, etc., make Laplace's future unpredictable, perhaps fatally so in only a few seconds for some individuals. Quantum Mechanics destroyed Laplace's deterministic world. Thus, thanks to QM, a "probabilistic will" is at least possible. I.e. we can have the illusion of making "choices" that are actually made by the chance results of QM; however, Genuine Free Will, GFW, i.e. real choices made by one's self, still appears to be impossible without some violation of the physical laws that govern molecular interactions in our complex neuro-physiological processes.
If GFW does not exist, it is perhaps the most universal of all human illusions. This article
will show that GFW is physically possible, even probable, without any violation of
physics if one is willing to drastically revise the usual concept of one’s self. Furthermore,
it argues that the required revision is a natural consequence of a better understanding of how
the human visual system functions and the fact that we are highly visual creatures. The
possibility that GFW is only an illusion is not excluded, but is made less probable, by the
arguments presented in “Reality, Perception, and Simulation: A Plausible Theory”1. This text
extracts from that article some aspects related to the existence of GFW.
That article focused almost entirely on the human visual system. How the brain uses a 2-dimensional array of information (neural activity present in the retina) to form a 3-dimensional perception of the environment is the central mystery of vision. The most accepted general concept is that this 2D data array is “computationally transformed” in successive stages of neural processing until the 3D perception “emerges.” This processing begins in the retina itself, where data is compressed by almost 100-fold. (Retinal photodetectors greatly outnumber optic nerve fibers.) The first cortical processing area, V1,
extracts some “features,” (mainly line orientations and intensities) that are present in the
visual field. Color, motion, and other primary features are extracted later in entirely separate
regions of the brain. These separated features are processed further in other regions of the
brain, but no one knows how they are reassembled into the unified 3D perception we
experience. I concur with this feature extraction and segregation model, but do not think the
3D perception we experience is the “emerging result” of automatic computational transforms
of the retinal data array as the standard theory suggests.
I contend that the visual features extracted in separate regions of the brain are never
“reassembled” and do not “emerge” to form our unified visual experience. Instead, I believe
that currently available sensorial information and one’s memory are used to construct,
probably in the parietal region of the brain, a real-time simulation of the visual world. We
experience that simulated world, not the physical world. Evolutionary selection has forced
this simulation to be a nearly perfect model of our immediate physical world. (Excepting
electromagnetic waves and other features for which we lack neural sensors.) Thus, continuous
detailed guidance is required from the senses, but hallucinations and illusions can be, and
occasionally are, created in the simulation that conflict with the physical world. These
“errors” together with dreams and visual images formed with eyes closed are difficult for the
conventional view of vision to explain in terms of automatic transformations of retinal data.
Hallucinations, visual dreams, etc. are easily understood with the concept that what we experience is an internal simulation of the world, not an emerging transform of the retinal data.
Now I reproduce three of the several arguments I presented in the above-cited article to support this simulation concept and to demonstrate that the standard concept of 3D perception as the emerging end result of automatic neural computational transformations of retinal data is wrong.
1) Our visual experience is uniformly rich in details over a wide area. That is, we see /
experience the environment in front of us everywhere with high resolution, but the optical
system of the eyes has very low resolution, except for one extremely small (solid angle
slightly more than one degree) part, the fovea. How can high-resolution perceptual
experiences, spanning a large part of one hemisphere, emerge from such low resolution input
data? Clearly what we experience is derived from some internal construction, not from a computational transform of retinal data. I am referring to our visual experience, not our ability
computational transform of retinal data. I am referring to our visual experience, not our ability
to perceive fine details far from the point of fixation. Our perception of fine details is limited
to that part of the image falling on the fovea. If our visual experience emerged from
successive neural transforms of the retinal data, it would be like looking through a lightly
frosted sheet of glass, which had only one small spot completely clear (without frosting).
2) One’s perceptual experience when viewing a movie can also prove that the conventional
view of visual processes is simply wrong. Imagine that a motion picture camera, held by
someone seated in an extreme left seat of a theater, is filming actors on the stage and that
doors on the left and right sides of the stage are equally large. Now suppose that this film is
projected in a movie theater from a centrally located projection booth. The image projected on
the screen will have doors of different size. Assume the left one is 25% larger. If you are
seated on the right side of the movie theater in a location that is exactly symmetric to the
location of the camera that filmed the movie, then the two doors will form equally large
images on your retinas, but the standard theory’s “automatic computational transforms” will
compensate for the 25% greater distance to the screen image of the left door and you will
correctly perceive that the left door image is 25% larger. (This correction for distance is called “perceptual size constancy.”)2 Likewise, if a movie character who is 80% as tall as the real stage doors should enter the right door and exit the left one, his height should be perceived as changing by 25% as he walks from one door to the other, according to the conventional theory.
Regardless of where they are seated, people never perceive an actor’s height as changing as
his movie image moves from one side of the screen to the other. If one of the stage actors had
walked from the extreme right rear of the stage to the extreme front edge, directly towards the
camera making the movie film, the size of his film image could easily have increased by 50%.
When you see this walk in the movie, his image on the screen will grow larger by 50%, but its
distance from you remains constant so that the automatic computational transforms applied to
this retinal image also remain constant. (The convergence angles of your eyes, etc. do not
change during his walk.) Consequently, if the standard theory were correct, his perceived size
should increase by 50% as he walks, but his perceived size is constant regardless of where he
walked on the stage. Clearly the conventional view of how visual perceptions are produced
(emerging end result of automatic computational processes) is simply wrong and needs to be
replaced. My JHU/APL article gives several other reasons why I believe we experience the
results of an internal simulation of our environment, not automatic transforms of our retinal
images. It also explains, at the neural level, how we segregate objects (parse them) from the
continuous visual field and how we then identify these parsed objects, but these processes are
not discussed here because this article concerns only the existence, or not, of GFW.
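The door numbers in argument 2 can be checked with a few lines of arithmetic. The viewing distances below are invented for illustration; the calculation uses the small-angle approximation (retinal angle ≈ image size / distance), with the standard theory's size constancy modeled as angle × distance:

```python
def retinal_angle(image_size, distance):
    # Small-angle approximation: angular size of an image at the eye.
    return image_size / distance

def perceived_size(image_size, distance):
    # The standard theory's distance compensation ("perceptual size constancy").
    return retinal_angle(image_size, distance) * distance

# The left door's screen image is 25% larger, but the symmetric right-side
# seat is 25% farther from it, so the two retinal angles are equal:
assert abs(retinal_angle(1.25, 12.5) - retinal_angle(1.00, 10.0)) < 1e-12

# With distance compensation, the left door is correctly seen as 25% larger:
assert abs(perceived_size(1.25, 12.5) / perceived_size(1.00, 10.0) - 1.25) < 1e-9

# The walking actor: his screen image grows 50% while the viewer's distance to
# the screen is fixed, so the standard theory predicts 50% perceived growth --
# which viewers never report:
assert abs(perceived_size(1.50, 10.0) / perceived_size(1.00, 10.0) - 1.50) < 1e-9
```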
3) The primary task of living organisms is to stay alive, at least long enough to reproduce.
Neural computations require time. The world we would experience, if our experiences were
the emergent results of many successive stages of neural transformations would be delayed by
a significant fraction of a second. During our evolutionary history nothing truly discontinuous
ever happened in our visual environment. (The discontinuous changes in movie and TV
scenes did not exist.) Nonetheless, it was essential for our ancestors to have a real-time understanding of their surroundings despite nature’s temporal continuity and our neural delays. Try ducking a rock thrown towards your head if your only visual experience of it is a display projected into the eyes (electronic goggles) that delays the image by 0.1 seconds! A real-time simulation of the environment can be achieved by slightly projecting ahead the sensory information to compensate for neural processing delays.
A real-time simulation would have great survival value. Perhaps the Neanderthals still
experienced slightly delayed “emerging transforms” of retinal data when our smaller brained
and weaker ancestors perfected a real-time simulation of their environment. (Ecological
pressure from the larger and stronger Neanderthals would have accelerated the rate of
evolution in our ancestors.) Likewise, the “Out of Africa” mystery (why one branch of hominids expanded and dominated all others approximately 50,000 years ago), which is
often assumed to be related to the acquisition of “autonomous language” (no gestures required
- hands free and education facilitated), might better be explained by the development of the
real-time simulation of the environment.
Furthermore, I think everything we perceive as being “real” in our environment, including our
physical bodies, is a part of this same simulation, not an emerging result of neural
transformations of sensorial data from any of our neural transducers. That is, all of the senses
only guide the simulation, feature by feature, to keep it highly faithful to the current external
reality. When an abrupt external event unexpectedly occurs (hidden firecracker exploding,
etc.), it significantly conflicts with the events projected in our simulation for that moment. We
are startled and the simulation must be quickly revised to conform to the unanticipated
external reality. This revision requires approximately 0.3 seconds. I think it probable that the
simulation is paused while the revision is in progress, but we do not notice as we are also
“paused” during this brief interval, just as we are not aware of hours passing while we sleep. I
think the unusual electrical activity in the brain associated with the re-initiation of the
simulation produces the EEG signal commonly called “P300,” or the “startle spike.” P300 is
strongest over the parietal region.3
Why the continuous natural environment should be dissected into “features” and separately
processed as a means of achieving a unified perception of the world is a great and
unexplained mystery for most cognitive scientists, but easily understood if a simulation of the
world is constructed by the brain. The physically sensed world is dissected into “features” for
the same reason that a pilot uses a checklist before takeoff. Dividing a complex task into its
component details and separately checking each, item by item, feature by feature, improves
task performance accuracy. Thus, both the real-time simulation and the dissection of the
visual field into features have significant survival value and consequently are probable natural
developments in the evolution of creatures as complex as man.
In order to compare the features derived from retinal data with those derived from the
simulation, they must be brought to the same neural tissue. Clearly it would be advantageous to make this comparison as early as possible in the sequential stages of “computational transforms” of the retinal information. If the simulation is constructed in the parietal region of the brain, then one would expect that the number of neural fibers leaving the parietal cortex and returning to the visual cortex would at least equal the number coming there, via the LGN, from the eyes. In fact they are somewhat more numerous. They are called “retrograde fibers” and
no plausible reason for their existence has been suggested. Some of the comparison may be
made even earlier in the LGN, which is usually considered to be mainly a “relay station”
between the eyes and visual cortex. (Both areas have large projections into the parietal cortex,
so it can easily “know” when, where and what difference has been detected.) The quantity of
retrograde fibers from the visual striate cortex to the LGN slightly outnumbers the number of
fibers coming there from the eyes. About this second set of retrograde fibers, DeValois4
states: “It is by no means obvious what function is subserved by this feedback.” (from V1 to
LGN) About the retrograde set from the parietal to V1, they state: “Even less is understood (if
that is possible) about these feedback connections...” They also note that both sets are “strictly
retinotopic,” which is the neuro-physiologist’s way to compactly state that each small part of
the visual field is mapped in one-to-one correspondence with neural tissue. That is, the
retrograde fibers return to the same small area of processing cells that the prograde fibers
enter and these cells are concerned with only a small part of the image on the retina. This
approximately equal number of retinotopic retrograde fibers entering the visual cortex is not only explained by the theory I am suggesting; these fibers are required for the simulation to rapidly correct for unpredictable external events!
If a buzzer sounds while one is watching the steady predictable movement of a small light
spot and one is asked: “Where was the light spot when the buzzer first began to sound?” the
location indicated is later than the true location. Thanks to the predictive simulation, the
subject is continuously aware of the true location of the light in real-time but he only becomes
aware of the sound later after the simulation has been revised to include the sound of the
buzzer and he associates it with that later location of the light. Retrograde fibers project back
to early sensory processing stages for all of the senses to make correction of the simulation as
rapid as possible but perception of new events is still delayed enough to be easily
demonstrated in this type of psychological test. For example, a reasonably competent computer programmer can program his computer to move a light pixel across a stationary grid displayed on the monitor and to randomly make a brief sound. With a fine pointer, he quickly points to the location where the light was when the sound started. A few seconds later, the computer displays where the light pixel actually was when the sound started. Note that how quickly he moves the pointer (his reaction time) does not matter. The delay measured is
the time required to revise the simulation to include the new sound. This small revision will
not produce a “P300” EEG signal because the simulation is not paused while it is made. Only
major environmental discontinuities, usually sudden unexpected loud noises, pause our
existence (startle us).
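The measurement logic of the proposed test can be sketched without graphics. The spot speed, sound time, and reported position below are invented for illustration; the point is only that the gap between the reported and true positions, divided by the known speed, estimates the simulation-revision delay independently of the subject's reaction time:

```python
SPOT_SPEED = 200.0  # pixels per second; motion is steady and predictable

def true_position(t):
    # Where the light pixel actually is at time t (seconds).
    return SPOT_SPEED * t

def estimate_delay(sound_time, reported_position):
    # The subject reports the spot *later* along its path than it truly was
    # when the sound began; the positional gap converts to a time delay.
    actual = true_position(sound_time)
    return (reported_position - actual) / SPOT_SPEED

# Sound starts at t = 1.0 s (true position 200 px); the subject points at 260 px:
delay = estimate_delay(sound_time=1.0, reported_position=260.0)
print(f"estimated revision delay: {delay:.3f} s")  # 0.300 s
```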
Thus the only reality we directly experience is this simulation and we are part of it. That is,
we are an informational process in a simulation, not a physical body. When we are in deep
dreamless sleep the simulation is paused and we do not exist - only our physical bodies exist.
Our bodies are at all times completely governed by physical laws, like any other physical
object; but if we are only an informational process in a slightly imperfect simulation of the
physical world, then we need not be deterministic (or quantum mechanical) beings. That is,
we may not exactly follow physical laws just as the creatures modeled in modern computers
making movies, pixel by pixel, without actors or optical cameras do not exactly follow the
physical laws. The meaning of symbols manipulated in a computer does not depend upon the
physical construction or deterministic details of the computer. The human brain is a parallel
processing computer, much more advanced than any man has yet conceived, and is fully
capable of making a real-time simulation of the world we experience.
Other humans, some of the more advanced animals, and ourselves are modeled in this
simulation as having wishes and making choices, not as bio-mechanical creatures governed by
physics. That is, the simulation in which we live and exist assumes GFW exists for some of
the more advanced creatures. Thus, GFW does exist in the only world we exist in and directly
experience. Neither we, nor GFW exists in the physical world. From our direct experiences in
the world we exist in, the simulation, we infer (I think correctly) that the physical world does
exist, but as Bishop George Berkeley noted, the existence of a physical world may be only an
erroneous belief, commonly deduced from our direct experiences. That is, the directly
experienced GFW has a stronger claim to “reality” than the inferred physical world!
Summary: This definition of one’s self as an informational process in a simulation, not a
physical body, permits you to have GFW and make other violations of physical laws,
especially in your dreams, when sensory guidance of the simulation is weak or absent. For
example, some people sincerely report “out of body” experiences etc. These physically
impossible experiences and GFW are directly experienced and thus have a strong claim to
being “real.” In contrast, the existence of a physical world is only inferred from these direct
experiences. Bishop Berkeley argued consistently that it may not exist, but he required a God
to give him his experiences. My view is similar to his in some aspects (I do not exist as a
physical object in the material world), but it makes no reference to God. Instead, a brain-based simulation is creating both my experiences and me. Being non-physical is the price one
must pay for GFW if one rejects miracles that violate physics.
Are you a complex bio-mechanical machine without GFW, or only an informational process that has GFW in a simulated world? If you believe the former, and that belief is correct, you had no choice but to believe it. I.e. you can make no real choices without postulating a “soul” or other miracles that violate physics; but if the latter is correct, I can choose to believe it (or not) and
still be consistent with physics and logic. Some who believe they have free will and yet reject
the second alternative of the question may find in this dilemma a strong argument for the
existence of God and miracles, but if they do, their “free will” is not GFW. Instead it is the
potentially capricious and reversible gift of a greater being, whose postulated existence is not
supported by any physical evidence. In contrast, there is a large body of physical evidence
(some given above) supporting the simulated world in which I postulate we exist with GFW.
See the first reference for more of this evidence. Consider also how many of the strange
aspects of human psychology easily fit within the framework of a simulated world and being
(phantom limbs, multiple personalities, false memories, sincere denial, déjà vu, etc.).
References and Notes:
1) For reprint, contact the Johns Hopkins University / Applied Physics Laboratory (firstname.lastname@example.org). "Reality, Perception, and Simulation: A Plausible Theory"
appeared in the JHU/APL Technical Journal, volume 15, number 2 (1994) pages 154 - 163.
The last two pages (Philosophical Implications and Speculations) give the above solution to
the freewill vs. determinism problem.
2) For example, if a father is standing three times farther from you than his half grown son,
his image on your retina is smaller than that of his son, yet you perceive their relative sizes
correctly. Standard theory suggests that we automatically correct retinal image sizes to
compensate for distance. “Perceptual size constancy” is usually reasonably accurate. The most
notable natural exception is the moon illusion. The near horizon moon appears to be larger
than the overhead moon because humans conceive of the “sky dome”, on which the moon and
stars appear to move, as more distant near the horizon than at the zenith. This is partially caused by the slowing of the angular rate of movement of clouds, birds, etc., that we watch as they move towards the horizon.
3) There are many other reasons to suspect the simulation takes place in the parietal cortex, but I will only briefly mention two. First is the geometric efficiency of the brain’s structure for a parietal simulation. The simulation requires four main inputs: tactile sensory cortex contacts the anterior parietal; visual cortex contacts the posterior; auditory input contacts it laterally;
and the primary tissue associated with memory is directly below the parietal cortex. This
minimizes neural conduction delays and “white tissue” (nerve fibers) brain volume
requirements. Even stronger support is found in the sequela of parietal strokes, which result in
“unilateral neglect.” Victims of these strokes do not recognize the existence of the contralateral half of the physical world. They eat only the food on one side of their plate, etc. Their
visual system can be shown to continue functioning perfectly. For example, if one briefly
flashes a small light in that part of the world that does not exist (for them), and then demands
that they guess whether this non-existent light was red or green, they perform far above
chance, while complaining that it is silly to name the color of something that did not exist.
This proves their visual system is functioning well, at least through the stage where small
color features are extracted. I explain unilateral neglect sequela by postulating that the
undamaged side of the parietal brain is continuing to make a simulation, but only of its half of
the world. Because their personality is not drastically changed, I believe frontal cortex is
utilized to construct much of the “psychological self” included in the simulation, but their
physical body image is a parietal construct. - If they happen to turn their head and see their
leg, whose existence is no longer represented in the simulation as part of their body, they may
try to throw this “foreign leg” away – it is disgustingly close to them.
4) Page 101 of Spatial Vision, first edition, Oxford Psychology Series No. 14, by R.L. & K.K.
DeValois Oxford University Press (ISBN 0-19-505019-3)
Last edited by Billy T; 04-19-06 at 09:42 PM.
04-20-06, 02:47 AM #18
We need to separate a computer as it is when made from one which we might be using. The latter has been programmed with an operating system, programs, data etc. It has information and therefore knowledge. It knows things about things. But this was all put there by people. It did not exist within the machine itself. Likewise, whilst further information can be deduced from the base data, this again is only done by the actions of people. The computer will not, and cannot, spontaneously decide to produce a spreadsheet or a presentation. It is incapable of creative thought. It has no creativity. A computer is like a food processor. It does what someone externally decides to do with what someone externally has placed into it. Just as a food processor contains no food unless it is placed into it, so a computer per se has no intrinsic knowledge, only the capability of processing input information.
Creativity is one of the things that makes us humans rather special and not just walking computer processors. It enables us to make decisions which are not based simply on algorithms or past history or other data. We can be bold, reckless, brave, foolish, or act in many other emotional ways. This is impossible to program, as emotions are not physical entities; they do not lend themselves to mathematical analysis.
Yet emotions (past and present) are an essential part of knowledge. I know whether I am happy today or not and it probably affects what I do today more than the facts I actually know. Computers can never be happy or sad. They cannot love or grieve. They really have no more knowledge than a food processor, a cooker or a vacuum cleaner!
I hope this is of assistance.
all the best with the work!
04-20-06, 09:01 PM #19
I am sure that a machine can be made to replicate the act of knowing. I am sure that it can behave in ways that suggest empathy and fear, and other emotions including love and grief etc. But is this replication the same as being?
I am sure we could build a machine that would acknowledge pain and pleasure and give every indication that it is "feeling" these things, but again, is this the same thing as being?
A machine can certainly give every indication that it is alive, but does this mean it is?
04-20-06, 09:07 PM #20
Imagine we have a universe that is teeming with self-animated machines, intelligent and knowing. They have all the attributes of what we refer to as living organisms.
One day a machine scientist asks the question:
Can organisms ever be considered to be alive?
Can we create an organic arrangement or composition, or structure and consider it to be alive?
What makes an organic entity any more alive than a sophisticated machine that considers itself to be alive?