Robots: Are We Close Yet?

Discussion in 'Intelligence & Machines' started by Rick, Nov 19, 2001.

Not open for further replies.
1. Rick ॐ, Valued Senior Member

Messages:
3,336
GREAT

Very interesting idea...

Especially for swapping parts... I didn't think of this...

bye!

3. Rick ॐ, Valued Senior Member

Messages:
3,336
ROBOTS AS FROM ISAAC ASIMOV.

The following is an excerpt taken from the site <url>http://info.rutgers.edu/Library/Reference/Etext/Impact.of.Science.On.Society.hd/3/5</url>, where Isaac answers the questions given below.
Notice how relevant he is, even today.

Question: The first book of yours that I read was I, Robot. In
your opinion, how close are we today to the world you described in
that book?

Answer: Although the book was written in 1939, those robots were very
intelligent and human-like in their capacity. As yet, the robots we
use today are merely computerized arms that can do one specialized
job. So, we're not very close, but we're heading in the right
direction. Although I have never done any work on robots and know
almost nothing about the nuts and bolts, I think that I came close
enough that I am almost the patron saint of robotics. Most of the
people who work in robotics obtained at least some of their early
interest in the field by reading my books. I was the first person to
use the word robotics, and I spoke of the Handbook of Robotics, from
which I quoted my three laws. I said they were from the 56th edition,
in 2058 A.D. Now someone is actually in the process of putting out the
first edition of that book, and they've asked me to write the
introduction. I guess the people who are working in robotics see
themselves moving toward the world I described 40 years ago, and I'm
willing to accept their judgment.

Question: Why do you restrict yourself to looking for Earth-like
planets in the search for technological civilizations, why not
Jupiter-like planets, for instance, or Pluto-like planets?

Answer: If we assume that there can be life even under widely varying
conditions, we make the problem perhaps a little too easy. There is
also the chance that life evolving under such conditions might be so
different from human life in very basic ways that we will not be able
to detect it or to understand that it is a technological civilization
even if we encounter it. As our information and knowledge grow, we
might be able to widen our view to recognize life and civilizations of
other kinds; but for now, given our limitations and the fact that we
know so little, we are looking for
technological civilizations sufficiently like our own to be perhaps
recognizable. So at the start, but not necessarily forever, we
restrict ourselves to Earthlike planets.

Question: Do you think, because our bodies are fragile and we have
limited life spans, that what we now know as humanity would ever be
replaced by inorganic intelligence?

Answer: I believe that computers have a kind of intelligence which is
extremely different from our own. The computer can do things that we
are particularly ill adapted to do. Humans don't handle rapid
intricate calculations very well, and it's good to have computers do
them. On the other hand, we have the capacity for insight, intuition,
fantasy, imagination, and creativity, which we can't program into our
computers, and it is perhaps not even advisable to try because we do
it so well ourselves. I visualize a future in which we will have both
kinds of intelligence working in cooperation, in a symbiotic
relationship, moving forward faster than either could separately. The
fact that we are so fragile and short lived is an advantage in my way
of thinking. In Robots of Dawn, I compare two civilizations; one is
our own in which people are short lived, and the other is that of our
descendants in which they are long lived. I point out the disadvantage
to the species as a whole of being long lived. I won't repeat the
arguments, because if I don't you may storm the bookstores out of
sheer curiosity to see what I've said.

Question: One of the great themes of science fiction is the settlement
of other planets. Is there any place in this solar system or nearby
that might be habitable?

Answer: As far as we know, there is no world in our solar system that
is habitable by human beings without some form of artificial help. The
Moon and Mars, which come the closest to being tolerable, will require
us to build underground cities or dome cities, and if we venture on
the surface, we will have to wear space suits. This is not to say that
it will not be possible someday to terraform such worlds and to make
them habitable; but I honestly don't know if it will be worth it for
us to do so. As to planets circling other stars, we do not really know
of such planets in detail. We suspect their existence, and we figure
statistically that a certain number of them ought to be habitable, but
we have yet to observe any evidence of such a thing. It is still very
much in the realm of speculation.

Question: You made the analogy between the migration from Europe at
the turn of the century and possible future migrations to space
stations and other planets. It has been shown that as a result of our
technology, people in this country are taller, heavier, better built,
and able to set new records in endurance and physical capabilities.
Would you speculate about the effect that living in space stations
might have on the human body and its evolutionary potential?

Answer: It is hard to tell. I suspect that people will make the
environment of these space settlements as close to that of Earth as
possible. But in one respect, they will have problems; there is no way
that they can imitate Earth's gravitational field. They can produce a
substitute by making the space settlement rotate, so that the
centrifugal effect will force you against the inner surface and mimic
the effects of gravity. But it won't be a perfect imitation; there
will be a Coriolis effect and, as you approach the axis of
rotation, the gravitational effect will become weaker. The people who
will live in a space settlement will be exposed to variations in the
gravitational effect far greater than any you can possibly feel on the
surface of the Earth. This may give rise to all sorts of physiological
changes in human beings. I don't know what they will be; we can't know
until we actually try living in space. So far, people have been
subjected to essentially zero gravity for as long as 7 months at a
time without apparently permanent ill effects. But human beings have
never been born at zero gravity or under varying gravitational
conditions; they have never developed and grown up under such
conditions, and we can't be sure what the effects will be. From an
optimistic standpoint, I suppose that under such conditions human
beings will develop a greater tolerance of gravitational effects than
they now possess. This will further prepare them for life in the
universe, whereas we ourselves have been specifically evolved and
conditioned for life in one very specialized place in the galaxy. The
overall effect may be to strengthen the human species; at least, I'd
like to think so. The future will tell us if that is so.

Question: In your opinion, when will there be solar power stations in
orbit and manned ventures to Mars, considering the technological leaps
with the Space Shuttle and the Soviet's Salyut space stations?

Answer: It is hard to say when solar power stations in space will be
developed. It's up to the human governments that control the money and
the manpower. If we begin to cooperate and make a wholesale attempt,
we could have solar power stations in space before the 21st century
was very old. In other words, someone as young as the person who asked
me this question may see space stations by the time he is
middle-aged. But on the other hand, if we choose not to do it, then we
may never have these stations in space. The choice is ours. We can
choose to develop space or we can choose world destruction. I'm at a
loss to state in words how desirable the first alternative is and how
likely the second alternative is.

Question: What kind of timetable do you envision for humanity's
exploration of space, and what good or harm do you think is done by
prospace groups?

Answer: Well, we can't expect things to happen too quickly. The
region that we now call the United States was being settled for nearly
two centuries before this country came into existence. We've
celebrated our bicentennial as a nation, but in a little over 20 years
we're going to have to celebrate the tetracentennial of our existence
as a community on American soil, from the establishment of Jamestown
in 1607. If it took nearly two centuries to settle the United States
to nationhood, it might take that long to establish a space community
strong enough to be independent of the Earth. On the other hand,
things move more quickly now; we're more advanced. It may take less
than a century to do so if we really try hard. As for the effects that
prospace organizations might have, I'm not a sociologist so I just
don't know. I'm in favor of prospace organizations doing their best to
persuade human beings to support space exploration. I don't know how...

Question: Assuming that we do not annihilate ourselves, what is your
view of how life on Earth will evolve, both humans and other life
forms?

Answer: You must understand that evolution naturally is a very slow
process and human beings can well live for 100,000 years without many
serious changes. On the other hand, we are now developing methods of
genetic engineering which will, perhaps, be able to wipe out certain
inborn diseases, or correct them and improve various aspects of the
human condition. I don't know how we will develop or what we will
choose to do; it's impossible to predict.

Question: How long do you think it will be before people live in outer
space?

Answer: That's entirely up to us. In a way, we've had people living in
outer space already, ever since the first Russian cosmonaut spent 1
1/2 hours in space. We have now had people living in outer space for 7
months at a time; in fact, one Soviet cosmonaut lived in outer space
for 12 months over a period of 18 months. So we've had people living
in outer space already, and I'm sure we'll have more and more of them
for longer and longer periods of time.

U.S. GOVERNMENT PRINTING OFFICE: 1985-470-563

Library of Congress Cataloging in Publication Data

Burke, James, 1936-

The impact of science on society.

(NASA SP; 482)

Series of lectures given at a public lecture series sponsored by NASA
and the College of William and Mary in 1983.

1. Science - Social aspects - Addresses, essays, lectures. I. Bergman,
Jules. II. Asimov, Isaac, 1920- . III. United States. National
Aeronautics and Space Administration. IV. College of William and
Mary. V. Title. VI. Series.
Q175.55.B88 1985

303.4'83 84-14159

For sale by the Superintendent of Documents, U.S. Government Printing
Office Washington, D.C. 20402

Science and technology have had a major impact on society, and their
impact is growing. By drastically changing our means of communication,
the way we work, our housing, clothes, and food, our methods of
transportation, and, indeed, even the length and quality of life
itself, science has generated changes in the moral values and basic
philosophies of mankind.

Beginning with the plow, science has changed how we live and what we
believe. By making life easier, science has given man the chance to
pursue societal concerns such as ethics, aesthetics, education, and
justice; to create cultures; and to improve human conditions. But it
has also placed us in the unique position of being able to destroy
ourselves.

To celebrate the 25th anniversary of the National Aeronautics and
Space Administration (NASA) in 1983, NASA and The College of William
and Mary jointly sponsored a series of public lectures on the impact
of science on society. These lectures were delivered by British
historian James Burke, ABC TV science editor and reporter Jules
Bergman, and scientist and science fiction writer Dr. Isaac Asimov.
These authorities covered the impact of science on society from the
time of man's first significant scientific invention to that of
expected future scientific advances. The papers are edited transcripts
of these speeches. Since the talks were generally given
extemporaneously, the papers are necessarily informal and may,
therefore, differ in style from the authors' more formal works.

As the included audience questions illustrate, the topic raises
far-reaching issues and concerns serious aspects of our lives and
future.

Donald P. Hearth
Former Director
NASA Langley Research Center

bye!

5. kmguru, Staff Member

Messages:
11,757
A more recent extrapolation is given by Dr. Ray Kurzweil in his book "The Age of Spiritual Machines". We are only 40 years away if the present development continues and we have not reduced our civilization to dust.

7. wet1 (Wanderer), Registered Senior Member

Messages:
8,616
zion,

Thanks, I knew I could depend on you posting the first speech somewhere. (I read the second one first.) Isaac Asimov has always been a favorite of mine.

Crazy, is it not? The man goes about writing for a living and finds that he is someday to be immortalized in history, awarded an honorary membership in Mensa (plus chairman), and that now they are trying to somehow program into robots his 3 laws of robotics that he used to set up his stories.

Just goes to show that things are not always as they seem and no one can predict the future to any extent.

Thanks for the post.

8. Rick ॐ, Valued Senior Member

Messages:
3,336
Hey Wet1,
The man is simply brilliant and an all-time favorite of mine. I heard that in the Foundation novels he wrote, he fantasized being Hari Seldon...

His innovative ideas are astonishing for his own time. I mean, look at his time frame: 1939, and he is talking about computers, connectivity, etc. Talk about Nostradamus! He was the real one...

bye!

9. kmguru, Staff Member

Messages:
11,757
Several years ago, I heard, the Foundation series will be made into movies. I am still waiting....

PS: I have his book on the Bible. It is illuminating...

10. Rick ॐ, Valued Senior Member

Messages:
3,336
Hi KM,

In the Foundation series, specifically in Foundation's Edge, Isaac talks about the fusion of brain and computer. He demonstrates it in the beginning:
a sort of computer is there; he (I mean Golan Trevize) touches it, and a direct connection is made to his brain via his hands. The computer analyses the thoughts of the man and processes them.

I'll quote the exact part later; I don't have it now.
And the only guys capable of converting the Foundation series into movies, as far as I think, are WB. Let's hope they do that in the near future...

bye!

11. Rick ॐ, Valued Senior Member

Messages:
3,336
PS: What's this book like? I mean, a skeptical point of view, or informative and analytic (positive, I mean)?...

wondering...

bye!

12. kmguru, Staff Member

Messages:
11,757
Highly informative. That is my source when people discuss the king james version. He goes to the Hebrew source and explains the meaning of the words and how some words/meanings have changed over the years. Quite interesting if you really want to know for accuracy.

I am not a fan of any organized religion, for what they claim versus what they really are: social rules to live by.

13. Rick ॐ, Valued Senior Member

Messages:
3,336
Robotics FAQ.

Okay, let's start a little prematurely and take a plunge into what exactly robotics is, what its purpose is, etc.
1.1 What is the definition of a 'robot'?

"A reprogrammable, multifunctional manipulator designed to move material,
parts, tools, or specialized devices through various programmed motions for
the performance of a variety of tasks"
Robot Institute of America, 1979
Obviously, this was a committee-written definition. It's rather dry and
uninspiring. Better ones for 'robotics' might include:
Force through intelligence.
Where AI meets the real world.
Webster says: An automatic device that performs functions normally ascribed to
humans or a machine in the form of a human.

[1.2] Where did the word 'robot' come from?
The word 'robot' was coined by the Czech playwright Karel Capek (pronounced
"chop'ek") from the Czech word for forced labor or serf. Capek was reportedly
several times a candidate for the Nobel prize for his works and very influential
and prolific as a writer and playwright. Mercifully, he died before the Gestapo
got to him for his anti-Nazi sympathies in 1938.
The use of the word Robot was introduced into his play R.U.R. (Rossum's
Universal Robots) which opened in Prague in January 1921. The play was an
enormous success and productions soon opened throughout Europe and the US.
R.U.R's theme, in part, was the dehumanization of man in a technological
civilization. You may find it surprising that the robots were not mechanical in
nature but were created through chemical means. In fact, in an essay written in
1935, Capek strongly fought against the idea that this was at all possible and, writing in
the third person, said:
"It is with horror, frankly, that he rejects all responsibility for the idea
that metal contraptions could ever replace human beings, and that by means of
wires they could awaken something like life, love, or rebellion. He would deem
this dark prospect to be either an overestimation of machines, or a grave
offence against life."
[The Author of Robots Defends Himself - Karel Capek, Lidove noviny, June 9, 1935]
There is some evidence that the word robot was actually coined by Karel's brother
Josef, a writer in his own right. In a short letter, Capek writes that he asked
Josef what he should call the artificial workers in his new play. Karel suggests
Labori, which he thinks too 'bookish' and his brother mutters "then call them
Robots" and turns back to his work, and so from a curt response we have the word
robot.
R.U.R is found in most libraries. The most common English translation is that of
P. Selver from the 1920's which is not completely faithful to the original. A
more recent and accurate translation is in a collection of Capek's writings.
The term 'robotics' refers to the study and use of robots. The term was coined
and first used by the Russian-born American scientist and writer Isaac Asimov
(born Jan. 2, 1920, died Apr. 6, 1992). Asimov wrote prodigiously on a wide
variety of subjects. He was best known for his many works of science fiction.
The most famous include I, Robot (1950), The Foundation Trilogy (1951-52),
Foundation's Edge (1982), and The Gods Themselves (1972), which won both the
Hugo and Nebula awards.
The word 'robotics' was first used in Runaround, a short story published in
1942. I, Robot, a collection of several of these stories, was published in 1950.
Asimov also proposed his three "Laws of Robotics", and he later added a 'zeroth
law'.
Law Zero:
A robot may not injure humanity, or, through inaction, allow humanity to come
to harm.
Law One:
A robot may not injure a human being, or, through inaction, allow a human
being to come to harm, unless this would violate a higher order law.
Law Two:
A robot must obey orders given it by human beings, except where such orders
would conflict with a higher order law.
Law Three:
A robot must protect its own existence as long as such protection does not
conflict with a higher order law.
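To see how the four Laws interact, here is a toy sketch of my own (not from the FAQ, and certainly not from Asimov): candidate actions are filtered by the Laws in strict precedence order. All the predicates (`harms_human`, `disobeys_order`, and so on) are hypothetical stand-ins for judgments no real robot could compute this simply.

```python
# Illustrative only: the Laws as a strict precedence filter over
# candidate actions. Higher-numbered laws always yield to lower ones.

def permitted(action):
    """Return True if an action is allowed under the four Laws.

    `action` is a dict of boolean fields describing the action's
    predicted consequences (all hypothetical for this sketch).
    """
    if action["harms_humanity"]:           # Law Zero
        return False
    if action["harms_human"]:              # Law One
        return False
    if action["disobeys_order"] and not action["order_would_harm_human"]:
        return False                       # Law Two, yielding to Law One
    return True   # Law Three (self-preservation) never vetoes by itself

candidates = [
    {"name": "shut_down",  "harms_humanity": False, "harms_human": False,
     "disobeys_order": False, "order_would_harm_human": False},
    {"name": "refuse",     "harms_humanity": False, "harms_human": False,
     "disobeys_order": True,  "order_would_harm_human": True},
    {"name": "push_human", "harms_humanity": False, "harms_human": True,
     "disobeys_order": False, "order_would_harm_human": False},
]

allowed = [a["name"] for a in candidates if permitted(a)]
print(allowed)  # ['shut_down', 'refuse']
```

Note that "refuse" survives the filter: disobeying an order is permitted here precisely because obeying it would harm a human, which is the yield clause Law Two spells out.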
An interesting article on this subject:
Clarke, Roger, "Asimov's Laws for Robotics: Implications for Information
Technology", Part 1 and Part 2, Computer, December 1993, pp. 53-61 and Computer,
January 1994, pp.57-65.
The article is an interesting discussion of his Laws and how they came to be in
his books, and the implications for technology today and in the future.

[1.3] When did robots, as we know them today, come into existence?
The first industrial modern robots were the Unimates developed by George Devol
and Joe Engelberger in the late 50's and early 60's. The first patents were by
Devol for parts transfer machines. Engelberger formed Unimation and was the
first to market robots. As a result, Engelberger has been called the 'father of
robotics.'
Modern industrial arms have increased in capability and performance through
controller and language development, improved mechanisms, sensing, and drive
systems. In the early to mid 80's the robot industry grew very fast primarily
due to large investments by the automotive industry. The quick leap into the
factory of the future turned into a plunge when the integration and economic
viability of these efforts proved disastrous. The robot industry has only
recently recovered to mid-80's revenue levels. In the meantime there has been an
enormous shakeout in the robot industry. In the US, for example, only one US
company, Adept, remains in the production industrial robot arm business. Most of
the rest went under, consolidated, or were sold to European and Japanese
companies.
In the research community the first automata were probably Grey Walter's machina
(1940's) and the Johns Hopkins Beast. Teleoperated or remote controlled devices
had been built even earlier with at least the first radio controlled vehicles
built by Nikola Tesla in the 1890's. Tesla is better known as the inventor of
the induction motor, AC power transmission, and numerous other electrical
devices. Tesla had also envisioned smart mechanisms that were as capable as
humans. An excellent biography of Tesla is Margaret Cheney's Tesla: Man Out of
Time.
SRI's Shakey navigated highly structured indoor environments in the late 60's
and Moravec's Stanford Cart was the first to attempt natural outdoor scenes in
the late 70's. From that time there has been a proliferation of work in
autonomous driving machines that cruise at highway speeds and navigate outdoor
terrains in commercial applications.
Articles on the history of personal robots:
What ever happened to ... Personal Robots? by Stan Veit The Computer Shopper,
Nov 1992 v12 n11 p794(2)
What ever happened to ... Personal Robots? (part 2) by Stan Veit Computer
Shopper, April 1993 v13 n4 p702(2)
I have the text to these online but am trying to find out if I can include these
as part of the FAQ or as separate files that are ftpable.

14. Rick ॐ, Valued Senior Member

Messages:
3,336
This is a continuation of the above, as I thought the above reply was getting too large.
The following was compiled from Caltech servers. Interesting information, isn't it?
[9] What is a Robot Architecture?
=====================================================================
A robot 'architecture' primarily refers to the software and hardware framework
for controlling the robot. A VME board running C code to turn motors doesn't
really constitute an architecture by itself. The development of code modules and
the communication between them begins to define the architecture.
Robotic systems are complex and tend to be difficult to develop. They integrate
multiple sensors with effectors, have many degrees of freedom and must reconcile
hard real-time systems with systems which cannot meet real-time deadlines
[Jones93]. System developers have typically relied upon robotic architectures to
guide the construction of robotic devices and for providing computational
services (e.g., communications, processing, etc.) to subsystems and components.
These architectures, however, have tended thus far to be task and domain
specific and have lacked suitability to a broad range of applications. For
example, an architecture well suited for direct teleoperation tends not to be
amenable for supervisory control or for autonomous use.
One recent trend in robotic architectures has been a focus on behavior-based or
reactive systems. Behavior based refers to the fact that these systems exhibit
various behaviors, some of which are emergent [Man92]. These systems are
characterized by tight coupling between sensors and actuators, minimal
computation, and a task-achieving "behavior" problem decomposition.
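As a rough illustration of that behavior-based style (my own sketch; the behavior names and sensor format are invented, not taken from any system cited here), a subsumption-style arbiter couples sensors directly to actuators and lets higher-priority behaviors override lower ones:

```python
# Minimal subsumption-style arbiter (illustrative sketch). Each
# behavior maps raw sensor readings directly to a motor command, or
# None if it has nothing to say; the first non-None command wins.

def avoid(sensors):
    # Highest priority: tight sensor-actuator coupling, no world model.
    if sensors["front_dist"] < 0.3:
        return "turn_left"
    return None

def seek_light(sensors):
    # Middle priority: steer toward the brighter side.
    if sensors["light_right"] > sensors["light_left"]:
        return "turn_right"
    return None

def wander(sensors):
    return "forward"   # default behavior; always fires

BEHAVIORS = [avoid, seek_light, wander]   # priority order, high to low

def arbitrate(sensors):
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command

print(arbitrate({"front_dist": 0.1, "light_left": 0.5, "light_right": 0.9}))
# -> turn_left (obstacle avoidance subsumes light seeking)
print(arbitrate({"front_dist": 2.0, "light_left": 0.5, "light_right": 0.9}))
# -> turn_right
```

The "emergent" wall-following or phototaxis in real subsumption systems comes from many such layers interacting with the physical world, not from any explicit plan.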
The other leading architectural trend is typified by a mixture of asynchronous
and synchronous control and data flow. Asynchronous processes are characterized
as loosely coupled and event-driven without strict execution deadlines.
Synchronous processes, in contrast, are tightly coupled, utilize a common clock
and demand hard real-time execution.
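A minimal sketch of how the two styles can coexist in one program (my own illustration; the loop rates and names are invented): a synchronous control loop ticks on a fixed clock, while an asynchronous monitor is purely event-driven, with no execution deadline.

```python
# Illustrative mix of synchronous and asynchronous processes.
import queue
import threading
import time

events = queue.Queue()
ticks = []

def control_loop(period_s, n_ticks):
    # Synchronous: tied to a common clock, one deadline per tick.
    for i in range(n_ticks):
        ticks.append(i)          # read sensors / write actuators here
        time.sleep(period_s)

def event_monitor():
    # Asynchronous: loosely coupled, runs whenever an event arrives.
    while True:
        event = events.get()     # blocks until something happens
        if event == "stop":
            break

ctrl = threading.Thread(target=control_loop, args=(0.01, 5))
mon = threading.Thread(target=event_monitor)
ctrl.start()
mon.start()
ctrl.join()
events.put("stop")               # event-driven shutdown, no deadline
mon.join()
print(len(ticks))  # 5
```

A real system would of course need a real-time executive rather than `time.sleep`, which is exactly the "hard real-time" gap the FAQ text alludes to.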

Subsumption/reactive references
Arkin, R.C., Integrating Behavioral, Perceptual, and World Knowledge in Reactive
Navigation, Robotics & Autonomous Systems, 1990
Brooks, R.A., A Robust Layered Control System for a Mobile Robot, IEEE Journal
of Robotics and Automation, March 1986.
Brooks, R.A., A Robot that Walks; Emergent Behaviors from a Carefully Evolved
Network, Neural Computation 1(2) (Summer 1989)
Brooks, Rod, AI Memo 864: A Robust Layered Control System For a Mobile Robot.
Look in ftp://publications.ai.mit.edu/
Brooks, Rod, AI Memo 1227: The Behavior Language: User's Guide. look in
ftp://publications.ai.mit.edu/
Connell, J.H., A Colony Architecture for an Artificial Creature, MIT Ph. D.
Thesis in Electrical Engineering and Computer Science, 1989.
Erann Gat, et al, Behavior Control for Robotic Exploration of Planetary Surfaces
To be published in IEEE R &A. FTPable.
ftp://robotics.jpl.nasa.gov/pub/gat/bc4pe.rtf

Insect-based control schemes
Randall D. Beer, Roy E. Ritzmann, and Thomas McKenna, editors, Biological Neural
Networks in Invertebrate Neuroethology and Robotics, Academic Press, 1993.
Hillel J. Chiel, et al, Robustness of a Distributed Neural Network Controller
for Locomotion in a Hexapod Robot, IEEE Transactions on Robotics and Automation,
8(3):293-303, June, 1992.
Joseph Ayers and Jill Crisman, Biologically-Based Control of Omnidirectional Leg
Coordination, Proceedings of the 1992 IEEE/RSJ International Conference on
Intelligent Robots and Systems, pp. 574-581.

Asynchronous/synchronous
Amidi, O., Integrated Mobile Robot Control, CMU-RI-TR-90-17, Robotics Institute,
Carnegie Mellon University, 1990.
Albus, J.S., McCain, H.G., and Lumia, R., NASA/NBS Standard Reference Model for
Telerobot Control System Architecture (NASREM) NIST Technical Note 1235, NIST,
Gaithersburg, MD, July 1987.
Butler, P.L., and Jones, J.P., A Modular Control Architecture for Real-Time
Synchronous and Asynchronous Systems, Proceedings of SPIE
Fong, T.W., A Computational Architecture for Semi-autonomous Robotic Vehicles,
AIAA Computing in Aerospace conference, AIAA 93-4508, 1993.
Lin, L., Simmons, R., and Fedor, C., Experience with a Task Control Architecture
for Mobile Robots, CMU-RI-TR 89-29, Robotics Institute, Carnegie Mellon
University, December 1989.
Schneider, S.A., Ullman, M.A., and Chen, V.W., ControlShell: A Real-time
Software Framework, Real-Time Innovations, Inc., Sunnyvale, CA 1992.
Stewart, D.B., Real-Time Software Design and Analysis of Reconfigurable
Multi-Sensor Based Systems, Ph.D. Dissertation, 1994 Dept. of Electrical and
Computer Engineering, Carnegie Mellon University, Pittsburgh. Available online
at STEWART_PHD_1994.ps.Z It's 180+ pages.
Stewart, D.B., M. W. Gertz, and P. K. Khosla, Software Assembly for Real-Time
Applications Based on a Distributed Shared Memory Model, in Proc. of the 1994
Complex Systems Engineering Synthesis and Assessment Technology Workshop (CSESAW
'94), Silver Spring, MD, pp. 217-224, July 1994

More to follow...

bye!

15. Rick ॐ, Valued Senior Member

Messages:
3,336
Sensor Based Motion Planning Research
Sensor-based planning incorporates sensor information, reflecting the
current state of the environment, into a robot's planning process, as opposed to
classical planning, where full knowledge of the world's geometry is assumed to
be known prior to the planning event. Sensor based planning is important
because: (1) the robot often has no a priori knowledge of the world; (2) the
robot may have only a coarse knowledge of the world because of limited memory;
(3) the world model is bound to contain inaccuracies which can be overcome with
sensor based planning strategies; and (4) the world is subject to unexpected
occurrences or rapidly changing situations.
There already exists a large number of classical path planning methods. However,
many of these techniques are not amenable to sensor based interpretation. It is
not possible to simply add a step to acquire sensory information, and then
construct a plan from the acquired model using a classical technique, since the
robot needs a path planning strategy in the first place to acquire the world
model.
The first principal problem in sensor based motion planning is the find-goal
problem. In this problem, the robot seeks to use its on-board sensors to find a
collision free path from its current configuration to a goal configuration. In
the first variation of the find goal problem, which we term the absolute
find-goal problem, the absolute coordinates of the goal configuration are
assumed to be known. A second variation on this problem is described below.
The second principal problem in sensor based motion planning is sensor-based
exploration, in which a robot is not directed to seek a particular goal in an
unknown environment, but is instead directed to explore the a priori unknown
environment in such a way as to see all potentially important features. The
exploration problem can be motivated by the following application. Imagine that
a robot is to explore the interior of a collapsed building, which has crumbled
due to an earthquake, in order to search for human survivors. It is clearly
impossible to have knowledge of the building's interior geometry prior to the
exploration. Thus, the robot must be able to see, with its on-board sensors, all
points in the building's interior while following its exploration path. In this
way, no potential survivors will be missed by the exploring robot. Algorithms
that solve the find-goal problem are not useful for exploration because the
location of the 'goal' (a human survivor in our example) is not known. A
second variation on the find-goal problem that is motivated by this scenario and
which is an intermediary between the find-goal and exploration problems is the
recognizable find-goal problem. In this case, the absolute coordinates of the
goal are not known, but it is assumed that the robot can recognize the goal if
it comes within line of sight. The aim of the recognizable find-goal problem
is to explore an unknown environment so as to find a recognizable goal. If the
goal is reached before the entire environment is searched, then the search
procedure is terminated.
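The recognizable find-goal problem can be made concrete with a toy grid version (my own sketch, not one of the algorithms described here): the robot has no map and no goal coordinates, can only sense the four neighboring cells, and recognizes the goal on sight. The depth-first visit order below abstracts away the robot's actual motion between cells.

```python
# Toy "recognizable find-goal" search under purely local sensing.

GRID = [                      # '#' wall, '.' free, 'G' goal, 'S' start
    "S..#",
    ".#.#",
    "...G",
]

def neighbors(r, c):
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]):
            yield nr, nc

def find_goal(start):
    visited = {start}
    stack = [start]
    path = []
    while stack:
        r, c = stack.pop()
        path.append((r, c))
        for nr, nc in neighbors(r, c):    # "sense" adjacent cells only
            if GRID[nr][nc] == "G":
                return path + [(nr, nc)]  # goal recognized on sight: stop
            if GRID[nr][nc] == "." and (nr, nc) not in visited:
                visited.add((nr, nc))
                stack.append((nr, nc))
    return None                           # environment fully searched

route = find_goal((0, 0))
print(route[-1])  # (2, 3): the goal cell
```

Note the two termination conditions match the text: the search stops early if the goal comes into "sight", and only exhausts the environment otherwise.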
In prior work we developed a scheme to solve one type of exploration problem. As
a byproduct, the algorithm can then also solve both variations of the find-goal
problem. The algorithm is based on the Generalized Voronoi Graph (GVG), which is
a roadmap. We have developed an incremental approach to constructing the GVG of
an unknown environment strictly from sensor data. We only assume that the robot
has a dead reckoning system and on board sensors that measure distance and
direction to nearby obstacles.
In collaboration with JPL, we have been developing algorithms for the autonomous
navigation of future Mars Rover vehicles. These algorithms (the "WedgeBug" and
"RoverBug" algorithms) are the sensor-based analogies to the classical tangent
graph algorithm, but assume no a priori knowledge of the robot's environment, and
also take the limited field-of-view of the rover's cameras into account. See
this page for more detail and some fancy figures related to this project.
Our current research activities center around how to incorporate uncertainty
into sensor-based planning.

---------------------------------------------------------------------
Grasp Analysis and Planning Research
We are motivated by a class of important robotic planning problems which are not
handled by current motion-planning systems. Examples are a 'snake-like' robot
that crawls inside a tunnel by bracing against its sides, or a limbed robot
(analogous to a 'monkey') that climbs a truss structure by pushing and
pulling. In these examples, the robot is an articulated mechanism whose motions
must be planned so as to achieve high-level goals. However, the robot's motion
is generated by the reaction forces which arise from stably bracing and/or
pushing against the environment. These interaction forces must be planned and
controlled so as to achieve stability of the robot mechanism. It should be noted
that the practically important industrial work-holding or "fixturing" problem
is a special case of this class of problems. Multi-fingered grasping and
manipulation is also a related problem. For example, during finger gaiting, the
finger tip reaction forces are used to stably secure the grasped object.
In all of these cases, the interaction forces must be planned and controlled so
as to achieve stability of the robot mechanism. In this work, we are
primarily concerned with planning and maintaining quasistatic stability, that
is, motion in which the inertial effects due to the moving parts of the robot
are small relative to the forces and torques of interaction between the robot and
its environment. The quasistatic assumption is immediately applicable to
planning the "hand-hold" states (analogous to the hand-holds used by rock
climbers between dynamically moving states) where the grasped object, or the
robot mechanism in the dual case, is at a static equilibrium. Moreover, if the
mechanism's motion between these static states is "sufficiently slow," then
the quasistatic assumption will hold throughout.
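A back-of-the-envelope instance of such a quasistatic hand-hold (the numbers are assumptions, not from the text): a snake-like robot of weight W braces horizontally across a vertical tunnel, pressing two opposing contacts against the walls with normal force N. Friction at the two contacts must carry the weight, so static equilibrium requires 2*mu*N >= W.

```python
# Illustrative quasistatic hand-hold check: a snake-like robot of weight W
# braces across a vertical tunnel, pressing two opposing contacts against
# the walls with normal force N.  With friction coefficient mu at each
# contact, static equilibrium requires 2 * mu * N >= W.

def min_bracing_force(weight, mu):
    """Smallest normal force per contact that keeps the braced mechanism
    at static equilibrium against gravity (two opposing frictional contacts)."""
    return weight / (2.0 * mu)

W = 40.0    # N, robot weight (illustrative)
mu = 0.8    # wall friction coefficient (illustrative)
N = min_bracing_force(W, mu)
```

Planning a hand-hold then reduces to choosing contact locations and normal forces that satisfy such equilibrium conditions, which is exactly the kind of constraint the mobility theory above generalizes to many contacts with compliance and friction.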
Quasistatic motion planning problems are especially attractive for two reasons.
First, these problems are a natural middle ground between classical path
planning and tasks that involve the full dynamics of the robot and the objects
it manipulates, such as hopping, running, or juggling. Second, there is a vast
array of robotic tasks that fall within this category.
To date, our work has focused on developing a basic mobility theory to describe
the mobility of multiply contacting bodies. We have recently extended the theory
to include the effects of compliance, friction, and gravity. Our current efforts
are focused on using the basic methodology to develop quasi-static motion
planning techniques and algorithms for optimal grasp and fixture selection.

<color=blue>Hyper-Redundant Robotics Research</color>
=====================================================================
Medical Applications of Robots
The focus of our work is on the applications of robotics to minimally invasive
medical diagnosis and therapy. Minimally invasive medical techniques are aimed
at reducing the amount of extraneous tissue which must be damaged during
diagnostic or surgical procedures, thereby reducing patient recovery time,
discomfort, and deleterious side effects. Arthroscopic knee surgery is one of
the most widely known examples.
We are currently developing, in collaboration with Dr. Warren Grundfest at
Cedars Sinai Hospital, a miniature "snake-like" robot for minimally invasive
traversal of the human gastro-intestinal system. A television camera will allow
the physician to visually inspect the intestinal lining. Additional diagnostic
measurements, such as temperature, pressure, and acidity, can be made with a
variety of on-board micro-sensors. In addition to diagnostic applications, the
device may ultimately be capable of assisting in therapeutic procedures as well.
We also have recently initiated a collaboration with Dr. Michael Levy of
Children's Hospital (Los Angeles) to develop a new generation of articulated
endoscopes for brain surgery.

More to Follow...

bye!

16. RickॐValued Senior Member

Messages:
3,336
Modular Robotics Systems
============================================================
The kinematic performance of a conventional robotic mechanism is determined by
its kinematics parameters and its structural topology. For a given set of tasks,
the robot designer chooses these factors during the initial design phase so as
to satisfy the given task requirements. However, it is difficult or impossible
to design a single robot which can meet all task requirements in some
applications. For example, consider the robotic construction of a radio antenna
on the moon's surface. The robotic system must be able to excavate soil,
transport material, assemble parts, inspect constructed assemblies, etc. It is
difficult to design a single robot which is simultaneously strong enough, nimble
enough, and accurate enough for all of these tasks. In this kind of situation it
might be advantageous to deploy a modular robotic system which can be
reassembled into different configurations which are individually well suited to
the diverse task requirements. By a modular robotic system we mean one in which
various subassemblies, at the level of links and joints, can be easily separated
and reassembled into different configurations.
In the deployment of a modular system, one can imagine the module rearrangement
and reassembly process to occur in two ways. First, a human operator can
physically rearrange the modules, and human intuition can be used to determine
the best system configuration for a given task. However, for physically remote
applications, such as robotic lunar construction, the modular system must be
physically able to reconfigure itself. More importantly, there must be a
correspondingly automated way in which to determine a sufficient or optimal
arrangement of the modules to satisfy task criteria. Our work has been devoted
to this latter subject, which has not yet been well addressed in the literature.

To automatically determine a sufficient or optimal arrangement of the system
modules for a given task, one might try a "generate-and-test" procedure in
which all possible assembly configurations of the modular set are generated, and
then each assembly configuration is tested against the task requirements to
determine its sufficiency or optimality. However, due to symmetries in module
geometry and robot structural topology, many different assembly configurations
will have the same kinematic properties. Thus, a brute force enumeration of all
module assemblies will result in the generation of many functionally identical
candidate structures. This is undesirable from a computational complexity point
of view, as it leads to many unnecessary test procedures.
We have developed a systematic methodology to enumerate the unique, or
non-isomorphic, assembly configurations of a set of modules. This method is
based on the symmetry properties of the modules and a graph representation of
the robot's structural topology. We introduce an Assembly Incidence Matrix (AIM)
to represent a robot assembly configuration and its associated kinematic graph.
Equivalence relationships are defined on the AIMs using graph isomorphisms and
the symmetric rotation group of individual link modules. AIMs in the same
equivalence class represent isomorphic robots. This method is also useful when
designing a modular robotic system, as it can answer the important question:
"What is the set of uniquely different robots that I can construct from a given
set of modules?"
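A toy version of that enumeration (a drastic simplification of the AIM machinery, not the method described above): for a serial chain whose two ends are interchangeable, the only symmetry is reversal of the chain, so assembly configurations can be deduplicated by picking a canonical representative of each equivalence class.

```python
from itertools import product

def unique_chains(module_types, length):
    """Enumerate non-isomorphic serial-chain assemblies.  Toy stand-in for
    the AIM machinery: here the only symmetry is reversal of the chain
    (neither end is distinguished), so two assemblies are equivalent iff
    one is the reverse of the other."""
    classes = set()
    for chain in product(module_types, repeat=length):
        canonical = min(chain, tuple(reversed(chain)))  # class representative
        classes.add(canonical)
    return sorted(classes)

# 2^3 = 8 labelled chains collapse to 6 reversal-equivalence classes.
chains = unique_chains(["pivot", "prismatic"], 3)
```

The AIM framework plays the same role for general tree topologies and for the rotation symmetries of individual link modules, where the equivalence classes are defined by graph isomorphism rather than simple reversal.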

17. RickॐValued Senior Member

Messages:
3,336
Robotic Locomotion Research
==================================================================
Our current work is aimed at developing a more unified theory for the analysis
and control of robotic locomotion. Our investigation of a more unified approach
began with undulatory locomotion. Undulatory robotic locomotion is the process
of generating net displacements of a robotic mechanism via periodic internal
mechanism deformations that are coupled to continuous constraints between the
mechanism and its environment. Actuatable wheels, tracks, or legs are not
necessary. In general, undulatory locomotion is "snake-like" or "worm-like,"
and includes our study of hyper-redundant robotic systems. However, there are
examples, such as the Snakeboard, which do not have biological counterparts.
From a mechanics perspective, undulatory systems are often characterized as
Lagrangian systems which exhibit symmetries and which are subject to
nonholonomic kinematic constraints. The interplay between the conserved
quantities which would arise from the symmetries (in the absence of nonholonomic
constraints) and the constraints is fundamental to the locomotion process.
Toward this end, we have been developing a control theory for mechanical systems
with symmetries and constraints.
More recently, we have been extending our basic framework for undulatory
locomotion in two directions. First, the basic theory can be extended to systems
with discontinuous constraints (such as legged systems) by modeling such
systems on stratified sets (see the applied control theory section). Second,
preliminary work has shown that the mechanics of a number of aquatic locomotion
schemes also fit into the same framework.

18. RickॐValued Senior Member

Messages:
3,336
NASA Space Telerobotics Program

Sensor indicates where, and how firmly, a gripper has touched an object.
NASA's Jet Propulsion Laboratory, Pasadena, California

A touch sensor for robot hands provides information about the shape of a grasped object and the force exerted by the gripper on the object. Pins projecting from the sensor create electrical signals when pressed against the object. The tactile sensor (see figure) is packaged in a small, rugged box that fits on the gripper pad. The projecting pins are arranged in a regular matrix on one face of the box. The inner ends of the pins bear on individual circuit elements. An element may be a switch that turns on when a pin is pushed and makes contact with it, or it may be a variable resistor, the conductance of which increases with the force on the pin.

The prototype box is milled from a solid slab of aluminum. In it rests a printed-circuit board carrying the switch electrodes (or pressure-sensitive resistors) and the common electrode. Insulating gaps separate the electrodes from the surrounding electrode plane. Covering the printed-circuit board is a plastic insulating spacer, which confines the pins laterally. On the spacer is a rubber spring sheet. The pins pass through the rubber sheet, which restores the pins to their normal positions when a tactile force is removed. Since the holes in the spring sheet are smaller than the heads and feet of the pins, the sheet confines the pins axially.

The sensing pins are electrically and mechanically separated from each other. The circuit for each pin is well defined and independent of the circuits for other pins; crosstalk is thus reduced to a minimum. The rubber spring sheet provides an effective seal around each pin and around the box wall. The sensitive portions of the sensor are deep in the box, protected from the environment; grease, dirt, and fumes have little effect on these portions. Since the box bottom supports the printed-circuit board, the board and the pins are protected from damage by overpressure and overtravel.

Point of Contact:
Howard C. Primus
Jet Propulsion Laboratory
4800 Oak Grove Drive
818-248-2638

Touch Sensor Responds to Contact Pressure
===================================================================
A pressure sensor for a mechanical hand gives better feedback of the gripping force and more-sensitive indication of when the hand contacts an object. Optical fibers bring light into cells on the gripping surface. Light is reflected from a flexible covering into other fibers leading to detectors. Distortion due to tactile pressure changes the amount of reflected light. The new device is superior to previous sensors. For example, television or other direct-viewing systems are not sensitive to contact pressure, and the contact area is often hidden from view. Electrical sensors are subject to electrical noise, especially at the low signal levels associated with low contact pressure. Optical sensors have been used to detect proximity or contact but not contact pressure.

The new optical sensor is illustrated in the figure. The sensing surface of the hand is divided into cells by opaque partitions. An optical fiber brings light into each cell from a lamp, light-emitting diode, or other source. Another fiber carries light from the cell to a detector; for example, a photodiode or phototransistor. The cells are covered by an elastic material with a reflective interior surface. The rest of the cell is coated with a nonreflective material. As shown in the figure, pressure against a cell cover causes a distortion, which changes the internal reflection of light. The change is sensed by the detector, and the output signal informs the operator of contact.

The greater the pressure and distortion, the greater is the change in light reflection. Thus, grip pressure can be sensed using analog circuitry. If only a touch indication is desired, a threshold detector can be included in the electronics. In an automatic manipulator, the detector signal could control the manipulator movements. The cells can be arranged such that those in each row share one light source, while those in each column share one detector. This reduces the number of sources and detectors and facilitates scanning.
For example, a 10-by-10 matrix would have 100 sensing points while requiring only 10 sources and 10 detectors. The array can be scanned by sequentially pulsing sources and detectors.
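The row/column sharing scheme can be sketched as follows (an illustrative sketch, not flight code): an N x N array is read with only N sources and N detectors by pulsing the shared source of one row at a time and sampling every column detector while that row is lit.

```python
import numpy as np

# Illustrative 10x10 cell array with two cells under pressure.
pressure = np.zeros((10, 10))
pressure[3, 7] = 0.9
pressure[4, 2] = 0.4

def scan(cells):
    """Read an N x N sensing matrix with only N sources and N detectors:
    pulse the shared source of one row at a time, then sample every
    column detector while that row is lit."""
    n = cells.shape[0]
    image = np.zeros_like(cells)
    for row in range(n):          # pulse the source shared by this row
        for col in range(n):      # read each shared column detector
            image[row, col] = cells[row, col]
    return image

img = scan(pressure)
```

Because only one row is lit at a time, each detector reading is unambiguously attributable to a single cell, which is what lets 10 sources and 10 detectors cover 100 sensing points.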

Point of Contact:
Antal Bejczy
Mail Stop 198-219
Jet Propulsion Laboratory
4800 Oak Grove Drive
818-354-4568
bejczy@telerobotics.jpl.nasa.gov

19. RickॐValued Senior Member

Messages:
3,336
NASA Contd. (VISION SYSTEMS)

Position Estimation Using Visual Landmarks
Estimating position and orientation is a fundamental capability required for many mobile robot tasks and is critical for completing long-range traverses and accurate reconnaissance on planetary surfaces.
Simply put, the problem for robots, when on a mission or task, is to be able to estimate their position and orientation using visual landmarks and internal maps. This is the same problem that people face, for example, when competing in the sport of orienteering. Conventional landmark navigation fixes the position of the competitor with respect to known landmarks, at the intersection of several position surfaces. For mobile robots, the idea is to implement, as a supporting technology, the ability to estimate their position using visual landmarks. In an ideal situation, the robot would stop, look around, take in all the features of the landscape, and then calculate its position.

There are three basic techniques for finding one's location: measurement of two bearings, measurement of one bearing and one distance, and measurement of two distances. The first technique, measurement of two bearings, is the most attractive because it avoids the challenges of measuring depth or range, and it capitalizes on the relatively high angular resolution of the standard cameras and lenses used on the robots.

The two-bearing problem is formalized as follows:

Let (x, y) represent the location of the robot in a fixed, external reference frame W. Let p1, . . . , pn be points representing locations in W of the map landmark points. Let r1, . . . , rk be the rays emanating from (x, y) to the landmarks. A ray represents the direction in which a landmark feature is observed, but does not entail distance information.

The problem: given a set of n landmark points and k rays, find all of the poses Q such that each ray pierces at least one landmark point.

An algorithm that searches for pairings of rays and points solves this problem. The minimum number of pairings for a unique solution to exist is three. First, the search considers only cases of three rays to determine candidate solutions; then, additional rays are used to verify the candidates. The computational complexity of the algorithm is O(n^3).
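The pairing search itself is more involved, but the acceptance criterion is easy to sketch. The toy below (an assumption-laden stand-in, not the O(n^3) pairing algorithm; landmark positions and the true pose are invented) discretizes the pose space and keeps the candidate whose rays best line up with some landmark:

```python
import math

# Illustrative landmark map in the world frame W (four points, so an extra
# ray is available to verify a candidate, as described above).
landmarks = [(1.0, 4.0), (5.0, 1.0), (6.0, 5.0), (2.0, 0.5)]
true_pose = (3.0, 3.0, 0.5)        # x, y, heading -- unknown to the solver

def bearings(pose, pts):
    """Relative bearing of each point as seen from the given pose."""
    x, y, th = pose
    return [math.atan2(py - y, px - x) - th for px, py in pts]

rays = bearings(true_pose, landmarks)     # the measured ray directions

def residual(pose):
    """Sum over rays of the angular error to the best-matching landmark."""
    x, y, th = pose
    total = 0.0
    for b in rays:
        direction = th + b
        total += min(abs((math.atan2(py - y, px - x) - direction + math.pi)
                         % (2 * math.pi) - math.pi)
                     for px, py in landmarks)
    return total

# Brute-force search over a discretized pose space; the real algorithm
# searches ray/landmark pairings instead of discretizing poses.
candidates = [(0.2 * i, 0.2 * j, 2 * math.pi * k / 25)
              for i in range(36) for j in range(36) for k in range(25)]
best_pose = min(candidates, key=residual)
```

A pose "explains" the measurements when every ray pierces some landmark; the residual above simply turns that pierce test into a score so the grid search can rank candidates.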

This algorithm has been extended by introducing probability distributions on the rays and landmark points. In this statistical approach, inferences about position are obtained by maximization of the posterior distribution (that is, the probability of all possible explanations of what the robot has measured) for (x, y).

The introduction of probabilities allows the modeling and accommodation of various sources of noise and disturbances (noise and disturbances can be anything that results in error for the robot when estimating its position), but it also introduces some difficulties. Essentially, the posterior distribution of (x, y) involves a summation over all possible pairings of rays and landmarks. This implies an exponential effort in the determination of the best pose, that is, the pose that provides the most probable position for the robot. This complexity is addressed by the use of statistical tools, particularly significance tests. In this framework, not all rays are analyzed against all landmarks: positions that are highly unlikely are regarded as impossible. On the other hand, by introducing probability distributions, positions that are likely will be calculated. The resulting technique offers a sensible statistical version of the original localization algorithm while keeping the polynomial complexity.

The table below illustrates typical results from the position estimation algorithm using simulated landmarks and simulated rays. In this dataset the algorithm found four candidates (listed in the table below). The likelihood for the correct pose* is a hundred times larger than the others, demonstrating the efficacy of the method.

  x     y     likelihood
 3.8   1.5      0.04
 2.1   2.7      0.04
 4.0   6.2      0.03
 3.0   3.0      4.63 *

Likelihood computed for candidate poses

Point of Contact:
Eric Krotkov
Carnegie Mellon University
Pittsburgh, PA 15213
412-268-1970
epk@cs.cmu.edu

Machine-Vision for Surface Inspection
=================================================================
An automated system has been designed for performing visual surface inspection of remote space platforms. This system operates much like a mine detector, scanning across the surface of an object to detect flaws. A two-phased machine-vision approach is adopted, with the first phase focusing on the detection of regions of the image where change has occurred. This is then followed by an analysis phase to determine whether the change is due to a new flaw.
The system would be used to detect flaws on long duration orbiting space platforms. Such platforms require inspection for collisions with micro-meteorites and space debris; material degradation due to prolonged exposure to the harsh space environment; and geometrical mismatches at mechanical interfaces prior to assembly operations. Telerobotic operation of the automated inspection system would save considerable astronaut time and minimize EVA-associated risks.

In the absence of lighting variations, viewpoint differences, and sensor noise, the detection of change could be obtained by a process of simple differencing: subtracting an earlier reference image from a new inspection image. However, lighting variation due to orbital motion can cause surface appearance to change drastically. Lack of viewpoint repeatability caused by mechanical flexibility in the robot arm leads to mis-registration of reference and inspection images. Sensor noise is inevitable, and the resulting detection problems must be well characterized and managed.

In the automated inspection system, ambient light variability is compensated for by using compensated reference and inspection image data. Compensation requires two image data sets: the first is illuminated only with the ambient light, and the second with the ambient light plus an artificial illuminator. The first data set is subtracted from the second to give a compensated image that appears as if it were taken with the artificial illuminator alone. In order to have an adequate signal-to-noise ratio, the artificial illuminator must provide illumination comparable to (or greater than) the ambient light. This is difficult for a low-powered continuous illuminator. Instead, an electronic strobe unit is used to concentrate all of the artificial illumination into a very short time interval. When the electronic shutter in the camera is set to operate only over this short interval, the strobe-provided illumination is comparable to that provided by the ambient light. The strobe illumination also enables imaging from a moving platform, since it does not have the time lags associated with a continuous illuminator.
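The compensation step amounts to a per-pixel subtraction. A synthetic sketch (illustrative arrays, idealized noise-free strobe response) showing that the ambient term cancels and a new flaw survives simple differencing against the compensated reference:

```python
import numpy as np

rng = np.random.default_rng(0)

surface = np.full((64, 64), 80.0)       # strobe-lit response of a clean surface
flawed = surface.copy()
flawed[30:34, 30:34] = 20.0             # a new dark flaw (4x4 pixels)

# Reference pass: ambient-only frame subtracted from ambient + strobe frame.
ambient_ref = rng.uniform(50.0, 100.0, (64, 64))
reference = (ambient_ref + surface) - ambient_ref     # compensated reference

# Inspection pass under completely different ambient lighting.
ambient_ins = rng.uniform(10.0, 200.0, (64, 64))
inspection = (ambient_ins + flawed) - ambient_ins     # compensated inspection

# With ambient light cancelled, simple differencing isolates the flaw.
changed = np.abs(inspection - reference) > 10.0
```

In a real system the cancellation is imperfect (sensor noise, motion between frames), which is exactly why the registration and thresholding machinery described next is needed.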

Without registration error compensation, subtracting reference and inspection images results in a number of "false edges" in the differenced image. A Gauss-Newton iterative method is used by the automated inspection system to perform reference-to-inspection image registration prior to making the comparison. The residual sum-of-squares between the actual and an estimated picture is used as an evaluation function to indicate the degree of match between the inspection data and a transformed reference image. Mis-registration is corrected by finding a suitable transformation of the reference image so that the residual is close to zero.
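To show the flavor of the Gauss-Newton registration (reduced here to a 1-D, translation-only toy with synthetic signals, not the system's 2-D implementation), estimate the shift t that minimizes the residual sum-of-squares between a warped reference and the inspection signal:

```python
import numpy as np

def gauss_newton_shift(ref, ins, iterations=10):
    """Estimate the translation t with ins(x) ~ ref(x - t) by Gauss-Newton
    on the residual sum-of-squares (1-D, translation-only illustration)."""
    x = np.arange(len(ref), dtype=float)
    t = 0.0
    for _ in range(iterations):
        warped = np.interp(x - t, x, ref)            # reference shifted by t
        residual = warped - ins
        jac = -np.interp(x - t, x, np.gradient(ref)) # d(warped)/dt
        t -= (jac @ residual) / (jac @ jac)          # Gauss-Newton update
    return t

# Synthetic data: a smooth bump and the same bump shifted by 3.25 samples.
x = np.arange(100, dtype=float)
ref = np.exp(-0.5 * ((x - 50.0) / 6.0) ** 2)
ins = np.exp(-0.5 * ((x - 53.25) / 6.0) ** 2)
t_est = gauss_newton_shift(ref, ins)
```

Each iteration linearizes the warped reference about the current shift estimate and solves the resulting least-squares problem in closed form, which is the same structure the full 2-D transformation search follows.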

In addition, quantitative tools have been developed to allow an explicit tradeoff between detection probability and the false-error probability. Depending on the flaw model and noise parameters, detection thresholds can be chosen to achieve a given level of performance.

Flaw recognition is made computationally tractable by analyzing the images only in the regions where differences have been found, and only at the most appropriate scale of resolution. This "scale-space" technique maximizes flaw information while at the same time minimizing the amount of distracting information. In this solution, the optimum scale for analyzing a sensor-produced image is selected from prior knowledge of image texture/features. Following this, edge detection is performed at an optimum scale. Finally, pattern recognition is applied at different scales, followed by flaw classification. Examples of images from a laboratory mockup of space platform modules have been used to test the concept.

J. Balaram and S. Hayati, "Telerobotic Inspection For Remote Space Platforms", ESA/INRIA Workshop on Computer Vision for Space Applications, Antibes, France, September 1993.

J. Balaram and K.V. Prasad, "Automated Inspection For Remote Telerobotic Operations", IEEE Conference on Robotics & Automation, Atlanta, GA, May 1993.

Point of Contact:
J. Balaram,
Mail Stop 198-219
Jet Propulsion Laboratory
4800 Oak Grove Drive
818-354-6770
J.Balaran@jpl.nasa.gov

Perception for Rock Sampling
====================================================================
In autonomous manipulation research at Carnegie Mellon, a robot first perceives and then grasps objects with a gripper or hand-like tool. The technology has applications in both planetary exploration and excavation on Earth. To perform in these natural environments, the perception system must recognize the irregular geometry of rocks and also single out objects for manipulation whether those objects exist in cluttered or barren terrains. To this end, the perception and manipulation system picked up objects successfully in mockup test beds of sand and rock. By functioning autonomously, the perception and manipulation systems avoid the drawbacks of teleoperation (particularly for planetary exploration), where long-distance operation slows communication between the robot and its operators. The robot must view and lift objects with a minimal amount of remote human instruction.
For collecting rocks in planetary exploration, the robot has three main objectives: to sense the terrain and the objects in it, to choose the appropriate three-dimensional model with which to draw a computerized image of the terrain, and to grip the object with either a gripper or hand tool. The robot senses the terrain with a short-range sensor. By projecting light on the scene in a special sequence of patterns, the sensor computes (using triangulation) the positions of all points in the scene.

The perception system then uses three successive perception modules to build an image of the objects in the terrain. The first module, using sensor geometry, conducts a feature detection and shadow analysis of the terrain; this module produces an image of the terrain's shadows and object edges. To fill in the area of the objects, the perception system next chooses between two modules: the superquadric surfaces analysis, which represents the object with a three-dimensional mathematical equation, and the deformable surfaces analysis, which defines all points on the object's area. The superquadric surfaces analysis is a better representation of an object that is fairly isolated in the terrain; the deformable surfaces analysis provides a more accurate analysis of objects which are clustered together. Finally, the perception system merges images of the terrain taken from different viewpoints into one composite view. Through merging these images, the robot knows where objects are in relation to itself and to the rest of the terrain.

After choosing between the superquadric and deformable surfaces modules and merging all viewpoints, the perception system must decide which grasping tool to use: the basic gripper, which works best for lifting an isolated rock and for pulling out a rock that is partially buried; or the hand (with fingers) that can negotiate an individual or smaller rock out of a more cluttered area. To determine if the object can be lifted at all, a grasping algorithm matches the dimensions of the tool with the measurements of the rock.

Currently, the perception system relies on remote operators to decide which three-dimensional module (superquadric or deformable surfaces) or which tool (hand or gripper) to use to grasp the object. Carnegie Mellon researchers are further developing the perception and manipulation system in robots to choose autonomously which perception and grasping methods to use. Researchers will also automate the robot's ability to single out and grasp certain types of rocks.

Point of Contact:
Martial Hebert,
Katsushi Ikeuchi
Carnegie Mellon University
Field & Mobile Robotics Building
5000 Forbes Avenue
Pittsburgh, PA 15213-3890

STAR: Satellite Test Assistant Robot Infrared Thermal Imaging System
======================================================================
For the first time, spacecraft test engineers at JPL are able to evaluate flight hardware as it undergoes rigorous thermal/vacuum testing, using an advanced imaging system that provides mobile non-contact temperature measurements and high-resolution video.

The advanced imaging capability is part of the instrument payload of a newly developed Satellite Test Assistant Robot (STAR). STAR allows engineers to remotely position a multi-axis inspection robot inside the space-simulation chambers during spacecraft testing.

The imaging system consists of three vacuum-rated, high-resolution black-and-white CCD cameras, one of which is fitted with a remotely operated zoom lens, an advanced infrared thermal imaging radiometer (IR Camera), and a controlled lighting source. The video and thermal images can be viewed, captured, and processed remotely at an Operator Control Station (OCS).

The IR Camera is a commercially built unit adapted for use in hard-vacuum, low-temperature environments. The IR Camera is equipped with a broad-band scanner capable of imaging over the entire 3-12 um spectrum. It has an electric Stirling-cycle microcooler which eliminates the need for constant LN2 refilling of the IR detector. It also has an electro-optical zoom capability and high spatial resolution. There are several modes of operation that allow real-time image averaging, line scanning with variable time integration, variable-area temperature analysis, histograms, and variable ranges of dual isotherms. All the images are real-time and can be recorded on standard video tape or captured and stored in TIFF file format. The three B&W CCD video cameras are arranged to provide mono or stereoscopic (3-D) viewing with a scalable field of depth perception.

In September 1993, the imaging system was integrated with a three-axis version of STAR and tested in JPL's 10-Foot Thermal/Vacuum Test Chamber. Vacuum levels reached 6 x 10^-7 torr and cold-wall temperatures were at -190°C. The imaging system provided vivid images of a Cassini Spacecraft RTG, also under test, which reached temperatures exceeding +250°C.

STAR's advanced imaging system provides a completely new tool-set for evaluating and validating spacecraft and flight hardware prior to launch. Its thermal imaging camera allows engineers, for the first time, a non-contact method for determining temperatures on critical spacecraft surfaces such as solar panels, radiators, and antennas, and for thermally mapping the entire outside surface of a spacecraft. STAR will also aid in the calibration and maintenance of the test chambers. Its most significant contribution may be that it provides engineers with a means of addressing unforeseen anomalies that often occur during the complicated spacecraft testing process.

Point of Contact:
Charles Weisbin
Mail Stop 180-603
Jet Propulsion Laboratory
4800 Oak Grove Drive
818-354-2013
charles_r_weisbin@jpl.nasa.gov

Omniview: Electronic Aim and Zoom Camera
=====================================================================
Microprocessors select, correct, and orient portions of a hemispherical field of view. NASA Langley Research Center, Hampton, Virginia
A video camera pans, tilts, zooms, and provides rotations of images of objects in its field of view, all without moving parts. The camera can be used for surveillance in areas where movement of the camera would be conspicuous or constrained by obstructions. It can also be used for close-up tracking of multiple objects in the field of view, or to break an image into sectors for simultaneous viewing, thereby replacing several cameras.

The camera includes a fisheye lens, which creates a circular image of a hemispherical field of view on a charge-coupled device (CCD). The image data are stored briefly in an input image buffer for processing. High-speed x- and y-transform digital processors correct the barrel distortion introduced by the fisheye lens and thereby enable the reconstruction of undistorted views of portions of the scene. From a single camera, the system produces as many as four simultaneous different views (virtual cameras) of the scene on a standard RS-170 video monitor at 30 frames per second.

The associated electronic circuitry includes a 32-bit microprocessor with 80-bit floating-point arithmetic support for parametric calculations. It performs control interface functions and calculates the coefficients for the x- and y-transform processors, which are independent arithmetic devices. A human operator programs the microprocessor through a control panel, choosing the magnification, viewing direction, rotation, and offset of the selected portion of the image. Command parameters can also be selected via an RS-232 serial communications link.
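The pan/tilt/zoom-without-moving-parts trick is a pure image remapping. The sketch below (assuming an ideal equidistant fisheye model, r = f*theta; the real product uses a calibrated transform, and all parameters are illustrative) computes, for each pixel of a virtual camera, the source pixel in the fisheye image:

```python
import numpy as np

def dewarp_map(out_w, out_h, f_virt, pan, tilt, f_fish, cx, cy):
    """For each pixel of a virtual (undistorted) camera aimed at (pan, tilt),
    return the source pixel in the fisheye image.  Assumes an ideal
    equidistant fisheye, r = f * theta; a real unit would use a calibrated
    mapping, and all parameters here are illustrative."""
    u, v = np.meshgrid(np.arange(out_w) - out_w / 2,
                       np.arange(out_h) - out_h / 2)
    rays = np.stack([u, v, np.full(u.shape, f_virt)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Aim the virtual camera: tilt about x, then pan about y.
    cp, sp, ct, st = np.cos(pan), np.sin(pan), np.cos(tilt), np.sin(tilt)
    R = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]) @ \
        np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    rays = rays @ R.T

    theta = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))  # angle off axis
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    rho = f_fish * theta                                 # equidistant model
    return cx + rho * np.cos(phi), cy + rho * np.sin(phi)

sx, sy = dewarp_map(64, 48, 60.0, 0.0, 0.0, 120.0, 256.0, 256.0)
```

Changing pan, tilt, or the virtual focal length only changes this lookup table, which is why several independent "virtual cameras" can share one physical sensor.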

Point of Contact:
Dr. H. Lee Martin, President
TeleRobotics International, Inc.
7325 Oak Ridge Highway, Suite 104
Knoxville, TN 37931

Machine Vision Guidance for Automated Assembly
=====================================================
NASA Langley Research Center, Hampton, Virginia
The Automated Structures Assembly Laboratory (ASAL) has successfully assembled and disassembled a 102-member truss structure, including the placement of 12 hexagonal reflector-type panels on the top surface, using a semi-automated system which requires operator attention only when a problem is encountered that the automated system cannot resolve. The system software database has also been reconfigured so that the system assembles and disassembles a truss beam. The automated assembly system employs commercially available knowledge-based expert systems to plan the assembly, monitor its operation, and assist the operator during error recovery. It also uses expert-system tools to plan the sequence of assembly operations and collision-free paths for the robotic manipulator.

For system reliability, a machine vision guidance system has been implemented to locate and capture the truss nodes into which the struts are installed. The machine vision system uses small "lipstick" CCD cameras and uniquely patterned targets located on each node. The targets are fabricated from retro-reflective material, and auxiliary lighting is used to bring them into sharp contrast with the cluttered background, which includes glare from the truss structure. An image processing algorithm uses pattern matching techniques to identify each node. Since the nodes may be mounted on non-rigid portions of the truss structure, the vision guidance must be capable of locating two nodes and guiding the manipulator arm to capture first one and then the other, finally pulling both into position for strut installation. The machine vision guidance system proved to be very reliable and robust for the automated assembly operations.
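The pattern-matching step can be illustrated with a toy binary template matcher. This is a generic sketch of the technique, not Langley's algorithm, and the threshold is an assumed parameter:

```python
def match_target(image, template, threshold=0.9):
    """Locate a uniquely patterned target in a binary image by sliding a
    binary template over it and scoring the fraction of matching pixels;
    returns the best (row, col) if the score clears the threshold, else
    None.  A toy stand-in for the pattern-matching step described above."""
    th, tw = len(template), len(template[0])
    best, best_score = None, 0.0
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            hits = sum(image[r + i][c + j] == template[i][j]
                       for i in range(th) for j in range(tw))
            score = hits / (th * tw)
            if score > best_score:
                best, best_score = (r, c), score
    return best if best_score >= threshold else None
```

The retro-reflective targets and auxiliary lighting described above are what make such a binarized match practical: they give a near-binary image in which the target pattern stands out from the cluttered background.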

Technology areas:

Image processing for machine vision
Guidance algorithms for complex manipulation tasks

Point of Contact:
Ralph Will
Mail Stop 152D
NASA Langley Research Center
1 South Wright Street
Hampton, VA 23681-0001
804-864-6672
ralph.w.will@larc.nasa.gov

Range sensing from wide field-of-view stereo vision

====================================================================
Robotic vehicles have important applications in planetary exploration, hazardous waste handling, battlefield operations, and factory material transportation. To enable these applications, robotic vehicles must be equipped to automatically detect obstacles in their path. Obstacle detection can be achieved by using range sensors to observe the geometry of the environment, then by analyzing the geometry to find passable routes for the vehicle. However, range sensors have not been available that meet the cost and performance requirements of most applications. JPL has taken a major step forward in this area by demonstrating a practical range sensing system based on stereo vision.
The Wide Field-of-View (WFOV) stereo system, a JPL-based technology developed for the Department of Defense's Unmanned Ground Vehicles (UGV) Project, is a real-time system which produces dense range maps from a stereo pair of cameras mounted on a HMMWV ("Hum-Vee"), the military's modern-day Jeep equivalent. The range data are being used by higher-level vehicle-control systems for autonomously navigating around local obstacles encountered during battlefield maneuvers.

Stereo vision uses two cameras to observe the environment, finds the same object in each image, and measures range to the object by triangulation; that is, by intersecting the lines of sight from each camera to the object. Finding the same object in each image is called matching and is the fundamental computational task underlying stereo vision. Matching objects at each pixel in the image produces a range estimate at each pixel; together, these range estimates form a range image of the scene. Geometric analysis of the range image identifies passable routes. For robotic vehicle applications, the primary alternatives to stereo vision-based range estimation use acoustics, radar, or scanning lasers. Compared to these alternatives, stereo vision has the significant advantage that it achieves high resolution and simultaneous acquisition of the entire range image without energy emission or moving parts. The key issue in making stereo vision practical was to find a combination of algorithms and processors that led to reliable, real-time range estimation with a computer system small and inexpensive enough to use on robotic vehicles.
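The triangulation step reduces to a simple relation for parallel cameras: with baseline B and focal length f (expressed in pixels), the range is Z = f·B/d, where d is the measured disparity. A minimal sketch:

```python
def stereo_range(disparity_px, focal_px, baseline_m):
    """Range from stereo disparity by triangulation: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera separation in
    meters.  Zero disparity corresponds to an object at infinity (or a
    failed match)."""
    if disparity_px <= 0:
        return float('inf')
    return focal_px * baseline_m / disparity_px
```

Applying this relation at every matched pixel is what turns the disparity image into the dense range image discussed above.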

In a demonstration performed in 1990 for the NASA planetary rover program, JPL used a version of this vision system to show that a robotic vehicle could perform autonomous obstacle avoidance while traversing 100 meters of off-road terrain. This demonstration established the viability and practicality of stereo vision-based range imaging for robotic vehicle applications. The impact of this work is reflected by the adoption of similar approaches for subsequent NASA-funded robotic expeditions to volcanoes in Antarctica and Alaska, by the potential use of these algorithms in upcoming robotic missions to Mars, and by the transfer of this technology to military robotics programs funded by the Department of Defense.

Point of Contact:
L. Matthies,
Mail Stop 107-102
Jet Propulsion Laboratory
4800 Oak Grove Drive
818-354-3722
matthies@robotics.jpl.nasa.gov

====================================================================

Unified Approach to Control of Motions of Mobile Robots
Obstacle-detection systems are designed to make the most of limited data-processing resources.
A concept of perception control is guiding the continuing development of obstacle-detection systems for cross-country navigation of robotic vehicles equipped with stereoscopic machine-vision systems. Perception control consists of optimally tuning sensor or processing parameters to increase the efficiency of perception under design constraints and requirements while adjusting to the environment. This particular concept of perception control is oriented toward the need to maximize vehicular safety at a given speed or, conversely, to determine the maximum speed for a given level of safety.

An obstacle-detection system according to this concept uses computing resources efficiently, without resorting to "brute-force" obstacle-detection techniques that often involve more computation than is necessary. Such a system is designed to implement a focus-of-attention approach, in which data are processed from subwindows of the stereoscopic video images of the path ahead, instead of from the entire images, to reduce the computational cost of perception. The image data are processed in the following main steps:

1. Pyramids (in a symbolic sense) of versions of the stereoscopic images that are band-pass-filtered in a succession of spatial-frequency bands are constructed from the stereoscopic pairs of images.
2. Cross-correlations are computed on any single level of an image pyramid to estimate the stereoscopic disparity at every pixel of the pair of images.
3. The range (that is, the distance from the video cameras on the robotic vehicle) is computed from the disparity at every pixel.
4. An obstacle-detection algorithm is applied to the resulting range image.
The obstacle-detection algorithm assumes that an obstacle consists of a nearly vertical step displacement on an otherwise nearly flat ground plane. It detects such steps by comparing pairs of pixels in the same column of the range image (see figure). If the difference in height between the two pixels in any such pair exceeds a prescribed step height, an obstacle is deemed to exist at the affected location.
The obstacle-detection system manages the computing resources by adjusting variables at three levels: image resolution, subwindows of attention (steps 2, 3, and 4 can be performed in subwindows), and detection threshold (which can be varied over the scene). The system takes into account its current lookahead requirements and determines the part of the path ahead that must be examined for obstacles at each processing step. The system assumes that the vehicle must stop before colliding with an obstacle, and allowance must be made for image-acquisition and processing time, actuation delay before brakes can be applied, and the braking time to reduce the speed from its current value to zero.
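The column-wise step test can be sketched as follows, using adjacent pixel pairs for simplicity (the actual algorithm may compare pairs over a larger vertical span, and the threshold would be the prescribed step height):

```python
def detect_step_obstacles(height_img, step_threshold):
    """Scan each column of a height image (heights derived from the
    range image); flag any adjacent pixel pair whose height difference
    exceeds the prescribed step height as a nearly vertical step on an
    otherwise nearly flat ground plane.  Returns (row, col) locations."""
    obstacles = []
    rows, cols = len(height_img), len(height_img[0])
    for c in range(cols):
        for r in range(rows - 1):
            if abs(height_img[r + 1][c] - height_img[r][c]) > step_threshold:
                obstacles.append((r, c))
    return obstacles
```

Because the test is independent per column and per subwindow, it fits naturally with the focus-of-attention scheme: steps 2-4 can be run only on the subwindows the lookahead analysis selects.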

A velocity controller could be designed to operate in conjunction with an obstacle-detection system of this type. The velocity controller could be designed to try to reach the maximum allowable speed, provided that it always checked for the distance to the end of the last processed path segment and applied brakes if necessary (that is, if data on the next path segment did not come in time).
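The stopping constraint described above yields the maximum safe speed directly: the distance covered during the sensing-and-actuation delay plus the braking distance must not exceed the lookahead distance already verified obstacle-free. A sketch, with all parameter values illustrative:

```python
import math

def max_safe_speed(lookahead_m, delay_s, decel_mps2):
    """Largest speed v such that the vehicle can stop within the distance
    already verified obstacle-free:  v*delay + v^2 / (2*decel) <= lookahead.
    delay_s lumps image acquisition, processing, and brake-actuation time;
    decel_mps2 is the available braking deceleration."""
    # Positive root of v^2/(2a) + v*t - d = 0
    return decel_mps2 * (-delay_s +
                         math.sqrt(delay_s ** 2 + 2 * lookahead_m / decel_mps2))
```

A velocity controller of the kind described would hold the commanded speed at or below this value, braking whenever data on the next path segment fail to arrive in time.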

Point of Contact:
Larry Matthies
Mail Stop 107-102
Jet Propulsion Laboratory
4800 Oak Grove Drive
818-354-3722
Larry.H.Matthies@telerobotics.jpl.nasa.gov

20. RickॐValued Senior Member

Messages:
3,336
Laser Scanners.

Terrain Mapping Using Laser Rangefinders
Perception research in Carnegie Mellon's Planetary Rover program, out of which the walking robot Ambler was developed, enables walking robots to create computerized maps of unpredictable outdoor terrain. Natural terrain is particularly challenging to map because it does not contain the straight edges and constant lighting of indoor or industrial environments, areas for which perception and mapping technologies have already been developed. To ensure that a planetary rover's perception system will work in unstructured terrain, Carnegie Mellon researchers have tested the rover's perception system over hundreds of different terrains, through changes in lighting level, dust, temperature, and terrain texture. The perception system was able to navigate the Ambler through these terrains.
Sensing. The rover's perception system views terrain through a laser rangefinder mounted on top of the robot, scanning the terrain ahead of the rover within a 60-degree field of view. The rangefinder uses spinning and nodding mirrors to scan the laser beam over the terrain and produce range images. A range image directly measures the distance between the rover and visible points on the terrain ahead. Image preprocessing, which refines the laser image, compensates for "illusions" caused by lighting and land texture. Darker objects, for example, absorb more laser energy, produce a weaker return signal, and therefore appear farther away than they are. The image preprocessor detects and deletes those pixels that appear unusually far away.
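The deletion of unusually far pixels can be sketched as an outlier filter against a local median. The window size and rejection factor here are assumptions for illustration, not the actual preprocessor's parameters:

```python
def filter_dark_object_artifacts(range_row, window=5, jump_factor=2.0):
    """Delete (set to None) range pixels that appear implausibly far
    compared with a local median -- weak returns from dark objects
    inflate the measured range.  window and jump_factor are illustrative
    parameters, not the values used on Ambler."""
    out = list(range_row)
    half = window // 2
    for i in range(len(range_row)):
        lo, hi = max(0, i - half), min(len(range_row), i + half + 1)
        neighborhood = sorted(range_row[lo:hi])
        median = neighborhood[len(neighborhood) // 2]
        if range_row[i] > jump_factor * median:
            out[i] = None  # flagged as an illusion, not real geometry
    return out
```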

Constructing maps. From the range images, the rover's perception system produces elevation maps. Planetary rovers like Ambler use elevation maps both for locomotion-how and where the rover will place its next step-and for navigation-determining at any moment where the rover is on the landscape, planning a course, and commanding the rover where to go. Elevation maps like the one in Figure 1 allow the rover to plan where and how to place each step without colliding with obstacles in the terrain.

An algorithm in the map-building system marks as shadowed those regions that are obstructed by intervening terrain. The algorithm also finds and records those regions that lie outside of its field of vision. This way, when the robot's planner asks the perception system for a broader image, the algorithm can report that certain fields of view in the planner's request are not visible. The same algorithm produces elevation maps at whatever resolution the robot's planner asks for.

Compensating for errors. The robot's perception system detects unexpected elements in the terrain produced by the unpredictable effects of lighting, temperature, and texture on the terrain. The robot detects errors by constantly calibrating the difference between its internal terrain map and the height of each foot on the ground. Ultimately trusting what it feels over what it sees, the robot uses a feedback control loop to change its elevation map according to the difference between what it saw and what its legs currently feel from the terrain. By improving the accuracy of the robot's maps, elevation error compensation improves the robot's ability to walk.

Merging maps. The perception system also builds a larger map mosaic out of the smaller map images created from different viewpoints. Generally, the map mosaic algorithm pieces together the smaller maps from the newest to the oldest mapped image. Older map data is used when the navigation planner routes the rover through currently occluded areas that were previously visible. Figure 2 is a map mosaic created from 125 range images acquired at an outdoor site.

While the Ambler walked over kilometers of outdoor terrain across many days of operation, the perception system processed large amounts of terrain data, successfully producing image after image and map after map.

Point of Contact:
Eric Krotkov
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
412-268-3058
epk@cs.cmu.edu

21. RickॐValued Senior Member

Messages:
3,336
Proximity Sensor Systems

3-D Guidance System With Proximity Sensors For Shuttle RMS

A 3-D guidance system that utilizes four proximity sensors on a remotely controlled mechanical claw has been developed by the Jet Propulsion Laboratory. The sensors feed pitch and range information to a manned control station, indicating how the claw is oriented relative to a mating fixture it is about to grasp. The operator then aligns the claw so that the fixture is grasped correctly. This system, developed for coupling space vehicles, can be used in other remote manipulators.

The sensors are mounted on the center square frame of the end-effector of a four-claw grapple. Each sensor, consisting of an LED source and a photodetector, is aimed to sense the object parallel to the shaft (the roll axis) supporting the grapple. Thus four sensitive areas are established ahead of the claws. With the claws defining the four corners of a square, the sensors are mounted at the midpoints of the sides of the square; two orthogonal lines connecting opposite pairs of sensors therefore define the pitch and yaw axes of the system.

In the simplest arrangement, each sensor operates in a two-state binary sensing mode: when the system is in the vicinity of the target, a zero indicates a too-far state and a 1 indicates a too-close state. The detection distances of sensors B and D are somewhat shorter than those of A and C. A success state, with the claws aligned on the target, is thus defined by A signaling 1, B zero, C 1, and D zero. An all-zero state shows that the entire claw is too far from the target, and an all-1 state that it is too close. A total of 16 combinations (2^4) is possible, indicating various misalignments of the yaw and pitch axes with respect to the target. One of the 16 states, A and C zero with B and D 1, never occurs because of the detection-pattern setup.

A more precise alternative would involve three-state sensing: a signal value of 2 would indicate too close, 1 on target, and zero too far. Success would be defined by all 1's, and a total of 75 workable logic states would be possible, giving more accurate feedback.
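The binary sensing logic can be written out directly; a sketch of the 16-state interpretation described above:

```python
def grapple_state(a, b, c, d):
    """Interpret the four binary proximity sensors (1 = too close,
    0 = too far).  B and D trigger at a shorter distance than A and C,
    so (1, 0, 1, 0) means the claw is correctly aligned on the target."""
    if (a, b, c, d) == (1, 0, 1, 0):
        return "aligned"
    if (a, b, c, d) == (0, 0, 0, 0):
        return "too far"
    if (a, b, c, d) == (1, 1, 1, 1):
        return "too close"
    if (a, b, c, d) == (0, 1, 0, 1):
        return "impossible"   # excluded by the detection-pattern setup
    return "misaligned"       # one of the remaining pitch/yaw error states
```

The twelve remaining states each correspond to a particular pitch/yaw misalignment, which the operator (or a controller) would correct before closing the claws.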

Point of Contact:
Antal Bejczy
Mail Stop 198-219
Jet Propulsion Laboratory
4800 Oak Grove Drive
818-354-4568
bejczy@telerobotics.jpl.nasa.gov

22. RickॐValued Senior Member

Messages:
3,336
Planning and Reasoning.

Self-Motion Manifolds of Redundant Manipulators
A new perspective on redundancy can yield alternative control strategies. NASA's Jet Propulsion Laboratory, Pasadena, California
Self-motion manifolds are introduced in a new approach to the characterization of self-motions of a robotic manipulator that has redundant degrees of freedom. Self-motions, which are made possible by the redundancy, are those motions of the robot joints that leave the position of the end effector unchanged. In much of the previous research on redundant manipulators, the approach has been to resolve the redundancy by optimizing the redundant motions of the joints with respect to additional criterion functions while commanding the end effector to follow the desired trajectory. The previous approach has involved the use of a pseudoinverse of the Jacobian matrix (which consists of derivatives of the coordinates of the end effector with respect to the coordinates of the joints) in optimizing locally, that is, within a small range of redundant motions. In the alternative approach, the kinematics of the robot are reformulated via a manifold mapping that stresses global, rather than local, kinematic analysis. Within this theoretical framework, the infinite number of redundant solutions of the inverse kinematic problem (the problem of finding the trajectories of the joints as functions of the desired trajectory of the end effector) is naturally interpreted as a set of self-motion manifolds (see figure) rather than in terms of the Jacobian null space. This approach is useful in the study of redundant-manipulator kinematics. In addition, the problem of the resolution of redundancy can be posed equivalently in this approach as the problem of the control of self-motions, and the self-motion manifolds are useful in investigating, interpreting, and formulating both local and global techniques for the resolution of redundancy.
Redundancy can be resolved by direct control of a set of self-motion parameters, by direct control of a related set of kinematic functions defined by the user and the use of these functions to construct an augmented Jacobian, or by optimization with an objective function.
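As a concrete illustration (not taken from the cited work), consider a planar three-revolute arm whose end-effector position, but not orientation, is fixed: it has one redundant degree of freedom, so its self-motion is one-dimensional and can be parameterized by a single angle. A numerical sketch, with unit link lengths and the elbow-up branch assumed for illustration:

```python
import math

def fk(thetas, links=(1.0, 1.0, 1.0)):
    """Forward kinematics of a planar 3R arm (relative joint angles)."""
    x = y = a = 0.0
    for t, L in zip(thetas, links):
        a += t
        x += L * math.cos(a)
        y += L * math.sin(a)
    return x, y

def self_motion(target, phi, links=(1.0, 1.0, 1.0)):
    """One point on the self-motion manifold for end-effector position
    `target`: place the wrist on a circle of radius links[2] around the
    target at angle phi, then solve the planar 2R inverse kinematics to
    the wrist (one elbow branch).  phi parameterizes the self-motion;
    varying phi moves the joints while the end effector stays put."""
    tx, ty = target
    wx = tx - links[2] * math.cos(phi)
    wy = ty - links[2] * math.sin(phi)
    L1, L2 = links[0], links[1]
    # Standard planar 2R inverse kinematics to reach the wrist
    c2 = (wx * wx + wy * wy - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    t2 = math.acos(max(-1.0, min(1.0, c2)))
    t1 = math.atan2(wy, wx) - math.atan2(L2 * math.sin(t2),
                                         L1 + L2 * math.cos(t2))
    t3 = phi - t1 - t2  # absolute angle of the last link equals phi
    return (t1, t2, t3)
```

Sweeping phi traces out one self-motion manifold; resolving redundancy by "direct control of a set of self-motion parameters," as the text puts it, amounts to choosing phi(t) alongside the end-effector trajectory.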

More details can be found in:

Kreutz, K., Long, M., and Seraji, H.: "Kinematic analysis of 7 DOF manipulators," International Journal of Robotics Research, 1992, 11(5), pp. 469-481.

Point of Contact:
Joel Burdick,
Homayoun Seraji,
Mail Stop 198-219
Jet Propulsion Laboratory
4800 Oak Grove Drive
seraji@telerobotics.jpl.nasa.gov

Planning and Reasoning for a Telerobot
=====================================================================
Challenges in transferring technology from a testbed to an operational system are discussed.
NASA's Jet Propulsion Laboratory, Pasadena, California

A document discusses the state of research and development of the Telerobot Interactive Planning System (TIPS). The system, designed to provide planning and reasoning for telerobots, has been installed in the telerobot testbed at NASA's Jet Propulsion Laboratory. The planning-and-reasoning technology is to be transferred to Goddard Space Flight Center for use in NASA's first operational-flight telerobot. When fully developed, telerobots will have both robotic and teleoperator capabilities. The first telerobot, the Flight Telerobotic Servicer, will include a base site on the Space Shuttle or Space Station and a remote site at a workplace, which may be an external structure being built on the Space Station or a satellite that needs servicing. The goal in the development of the TIPS is to enable it to accept such instructions from an operator as the command to replace a given module, and then to command a run-time controller to execute operations that carry out the instructions. The run-time controller plans fine motions, grasps, compliant motions, and applications of force, and maintains a geometric database of objects in the workspace. If the run-time controller encounters problems, the TIPS must modify its commands to overcome them. When the TIPS fails, it must allow the operator to take over. The report presents the technical problems and describes the approaches that have been developed to solve them. It describes the prototype TIPS. Finally, it compares the prototype with the architecture it must support in the Flight Telerobotic Servicer.

Point of Contact:
Stephen F. Peters,
Mail Stop 301-250D
4800 Oak Grove Drive
818-354-0157
stevep@parsec.jpl.nasa.gov

David S. Mittman,
Mail Stop 230-200
4800 Oak Grove Drive
818-393-1091
dmittman@beowulf.jpl.nasa.gov

Mark J. Rokey
Mail Stop 301-235
4800 Oak Grove Drive
818-354-1138
mrokey@beowulf.jpl.nasa.gov

23. RickॐValued Senior Member

Messages:
3,336
Motion Planning.

Fast motion planners for finding collision-free robot paths
Two fast robot motion planners based on probabilistic motion planning concepts have been implemented. These path planners sacrifice completeness, namely the ability to find a path under all possible circumstances, in return for fast path determination most of the time.
Planner Using Conditional Probability Guided Search

The first path planning algorithm is motivated by the need to reduce the computational burden involved in searching a graph of a deterministic representation of the free configuration space available to a robot. This exact approach is practical only if the environment is static, there are only a few degrees of freedom, long motion-planning times are acceptable, and fast computing resources are available.

The fast motion planning approach instead computes the probability of a successful motion of the robot through a region of space. This probability calculation is obtained from a geometric model that captures the effects of objects in the region and the kinematics of the robot arm. The notion of a Motion Transition Probability is defined, which captures the conditional probability of the robot arm moving in a collision-free manner in the vicinity of a single obstacle in the environment, while ignoring the effect of all other obstacles. These conditional probabilities are then used to guide an on-line search and, in most cases, result in the successful determination of a path within a reasonably short time.
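A minimal sketch of such a probability-guided search, where `neighbors` and `transition_prob` are hypothetical interfaces standing in for the geometric model described above:

```python
import random

def probability_guided_search(start, goal, neighbors, transition_prob,
                              max_steps=1000):
    """Sketch of a search guided by Motion Transition Probabilities:
    at each configuration, candidate moves are sampled with probability
    proportional to their estimated chance of being collision-free.
    neighbors(q) and transition_prob(q, q2) are assumed callables
    supplied by the geometric model; this is an illustration, not JPL's
    implementation."""
    path, q = [start], start
    for _ in range(max_steps):
        if q == goal:
            return path
        candidates = neighbors(q)
        weights = [transition_prob(q, n) for n in candidates]
        q = random.choices(candidates, weights=weights)[0]
        path.append(q)
    return None  # fast but incomplete: may fail to find a path
```

The `return None` branch makes the trade-off explicit: the planner sacrifices completeness for speed, exactly as described above.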

The formal mathematics of this process is reminiscent of a diffusion process, and the algorithm itself can be viewed as a Monte Carlo approach to the solution of a diffusion equation.

Planner Using Massively Parallel Energy Computations

The second path planning algorithm is motivated by similar needs and is applicable to solve the path planning problem when the robot system has a large number of degrees-of-freedom. It utilizes a massively parallel computational method suitable for implementation on a hypercube type multi-processor computer.

Each obstacle in the environment is described as the intersection of a number of bounding hyperplanes. Each hyperplane of the obstacle contributes to a potential energy term with respect to a number of control points on the robot arm. This energy term is designed such that the peak value is reached when the point on the robot arm is inside the obstacle, and it tapers off in a sigmoidal manner as the point recedes from the interior of the hyperplane. A "temperature" parameter determines the rate of this decay, with a higher temperature indicating a slower decay. By taking the product of the energy terms defined for each bounding hyperplane, an overall collision-energy term can be defined with respect to the obstacle.

Energy terms are computed for an entire path, wherein the energy of the path is defined as the energy of the obstacle/control-point pair at each location along the path. These energy computations lend themselves to easy parallelization, since the energy associated with each obstacle/control-point/path-location set can be computed independently from parallel copies of a geometric world model resident in each of the multiprocessor nodes. A parallel gradient computation is then performed to determine repulsive forces along an initial candidate robot path. These forces push the path away from the obstacles toward collision-free configurations as they work to minimize the overall path energy. To prevent the iterative process from getting stuck in local energy minima, the initial iterations of the path-deformation algorithm are performed using a "high-temperature" model of the energy term. This has the effect of "smearing out" the energy landscape. The temperature parameter is then gradually cooled as the iterations proceed, by which time local minima are usually avoided.
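The per-hyperplane energy term can be sketched as a logistic (sigmoidal) function of the signed distance to the hyperplane, with the product over an obstacle's bounding hyperplanes peaking only when the control point lies inside all of them. The functional form here is an assumption consistent with the description, not the actual JPL formulation:

```python
import math

def hyperplane_energy(point, normal, offset, temperature):
    """Sigmoidal collision-energy term for one bounding hyperplane:
    approaches 1 when the control point lies inside the half-space
    n . x <= offset and decays sigmoidally outside it.  A higher
    temperature gives a slower decay (a 'smeared-out' landscape)."""
    signed_dist = sum(n * x for n, x in zip(normal, point)) - offset
    return 1.0 / (1.0 + math.exp(signed_dist / temperature))

def obstacle_energy(point, hyperplanes, temperature):
    """Product over the obstacle's bounding hyperplanes; near 1 only
    when the point is inside every half-space, i.e. inside the obstacle."""
    e = 1.0
    for normal, offset in hyperplanes:
        e *= hyperplane_energy(point, normal, offset, temperature)
    return e
```

Raising the temperature flattens this landscape, which is what lets the early, "high-temperature" iterations of the path deformation escape local minima before the parameter is cooled.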

The formal mathematics of this process is reminiscent of the simulated annealing method, and the energy term calculations can be thought of as mean-field approximations similar to those used in statistical mechanics.

References:
J. Balaram and H. Stone, "Automated Assembly in the JPL Telerobot Testbed," Intelligent Robotic Systems for Space Exploration, Chapter 8, Kluwer Academic Publishers, Norwell, MA, 1992.

Point of Contact:
J. Balaram
Mail Stop 198-219
Jet Propulsion Laboratory
4800 Oak Grove Drive