Robots: Are We Close Yet?

Discussion in 'Intelligence & Machines' started by Rick, Nov 19, 2001.

Thread Status:
Not open for further replies.
  1. Rick Valued Senior Member

    Messages:
    3,336
    Neural Network Classifies Teleoperation Data
    The neural network identifies phases of tasks.
    NASA's Jet Propulsion Laboratory, Pasadena, California

    A prototype artificial neural network, implemented in software, identifies phases of telemanipulator tasks in real time by analyzing feedback signals from force sensors on the manipulator hand. This prototype is an early subsystem-level product of a continuing effort to develop an automated system that assists in training and supervising the human control operator: the system would provide symbolic feedback (e.g., warnings of impending collisions or evaluations of performance) to the operator in real time during successive executions of the same task. Such an automated supervisory system could also simplify the transition between the teleoperation and autonomous modes of a telerobotic system.

    The prototype artificial neural network (see Figure 1) is based partly on the concept of the time-delay neural network, which involves preprocessing of a temporal sequence of input signals through a shift register to turn it into a temporal sequence of spatially arrayed input signals. A basic time-delay neural network contains only feedforward connections and does not exhibit adequate learning accuracy because it lacks an adequate temporal representation of the evolution of a task. To obtain a better representation of the evolution of a task, the network is made partially recurrent by adding some connections from output nodes to nodes called "context units" that are located in the input layer of neurons. The context units represent the previous state of the neural network, which, in turn, represents the task phase executed previously.

    The network was trained by use of a back-propagation algorithm and training data from experimental teleoperation tasks in which a remote manipulator with a hand instrumented to measure forces and torques was controlled by the human operator via a force-reflecting hand controller and remote video monitoring of the workspace. The tasks included insertion and removal of a peg into and from a hole, insertion and extraction of electrical connectors, and attachment of hook-and-pile pads. The network was then tested by using it to segment a force signal from the peg-in-hole task into task phases. As shown in Figure 2, the network performed the segmentation in real time, albeit with some lags and some errors. On the other hand, the network also exhibited an unexpected ability to recover after misidentifying some phases and to follow tasks whose phase sequences differed from those of the training tasks.
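    To make the classifier architecture concrete, here is a minimal sketch of such a partially recurrent network (Python/NumPy; layer sizes, the force-signal dimension, and the number of task phases are illustrative assumptions, and this is not the JPL code). A shift-register window of force samples is concatenated with context units that copy back the previous task-phase outputs, and the network is trained online with back-propagation.

    import numpy as np

    rng = np.random.default_rng(0)
    DELAYS, N_FORCE, N_HIDDEN, N_PHASES = 5, 6, 12, 4  # assumed sizes
    N_IN = DELAYS * N_FORCE + N_PHASES                 # shift register + context units

    W1 = rng.normal(0.0, 0.1, (N_HIDDEN, N_IN))
    W2 = rng.normal(0.0, 0.1, (N_PHASES, N_HIDDEN))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def step(force_window, context, target=None, lr=0.1):
        """Classify the current window of force samples; optionally learn online."""
        global W1, W2
        x = np.concatenate([force_window.ravel(), context])
        h = sigmoid(W1 @ x)
        y = sigmoid(W2 @ h)                  # task-phase activations
        if target is not None:               # one back-propagation update
            dy = (y - target) * y * (1 - y)
            dh = (W2.T @ dy) * h * (1 - h)
            W2 -= lr * np.outer(dy, h)
            W1 -= lr * np.outer(dh, x)
        return y                             # becomes the next call's context

    Each control cycle, the newest force sample is shifted into the window and the returned phase vector is passed back as the next context, so the previous phase estimate conditions the current one, as described above.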

    More details can be found in: P. Fiorini, A. Giancaspro, S. Losito, and G. Pasquariello, "Neural Networks for the Segmentation of Teleoperation Tasks," Presence, Vol. 2, No. 1, pp. 1-13, 1993.



    Point of Contact:
    Paolo Fiorini,
    Antonio Giancaspro,
    Sergio Losito,
    Guido Pasquariello
    Jet Propulsion Laboratory
    4800 Oak Grove Drive
    Pasadena, CA 91109
    charles_r_weisbin@jpl.nasa.gov
     
  3. Rick Valued Senior Member

    Messages:
    3,336
    Controlling.

    Decentralized Adaptive Control for Robots
    Precise knowledge of the dynamics would not be required.

    NASA's Jet Propulsion Laboratory, Pasadena, California

    A proposed scheme for the control of a multi-jointed robotic manipulator calls for an independent control subsystem for each joint, consisting of a proportional/integral/derivative feedback controller and a position/velocity/acceleration feedforward controller, both with adjustable gains. The independent joint controllers would compensate for unpredictable effects (e.g., friction, variations in payload, and imprecise knowledge of the dynamics of the manipulator), gravitation, and dynamic coupling between motions of joints, while forcing the joints to track reference trajectories. The scheme is amenable to parallel processing in a distributed computing system wherein each joint would be controlled by a relatively simple algorithm on a dedicated microprocessor.

    For the purpose of the scheme, it is convenient to view each joint as a subsystem of the entire manipulator system. The subsystems are considered to be interconnected by disturbance torques that represent the inertial coupling, Coriolis, centrifugal, frictional, and gravitational effects. The problem is to design the set of independent joint controllers in which the ith controller generates the joint torque Ti(t) (where t = time) by responding only to the actual joint-angle trajectory θi(t) and the reference joint-angle trajectory θdi(t), and makes θi(t) track θdi(t). The adaptive independent controller dedicated to the ith joint would be described by

    Ti(t) = fi(t) + [kpi(t)ei(t) + kvi(t)ėi(t)] + [ci(t)θdi(t) + bi(t)θ̇di(t) + ai(t)θ̈di(t)]

    as shown in Figure 1, where ei(t) = θdi(t) - θi(t) is the position-tracking error of joint i. The term fi(t) represents an auxiliary signal synthesized by the adaptation scheme to improve the tracking performance and partly compensate for the disturbance torques. The term in the first set of brackets represents the adaptive position/velocity feedback controller with the adjustable gains kpi(t) and kvi(t) acting on the position and velocity tracking errors ei(t) and ėi(t), respectively. The term in the second set of brackets represents the adaptive position/velocity/acceleration feedforward controller with the adjustable gains ci(t), bi(t), and ai(t) operating on the desired position θdi(t), velocity θ̇di(t), and acceleration θ̈di(t), respectively.

    A theorem derived via the theory of model-reference adaptive control provides the necessary controller-adaptation law in the form of specifications for the auxiliary signal, the feedback gains, and the feedforward gains. The resulting independent-joint-control law can be expressed as that of the combination of the proportional/integral/derivative feedback controller and the proportional/derivative/second-derivative feedforward controller illustrated in Figure 2. The controller-adaptation laws are simple and involve only a few arithmetic operations. The proportional-plus-integral adaptation laws give a large family of adaptation schemes, from which the most suitable scheme for a particular application can be selected. The use of proportional-plus-integral adaptation laws yields improved convergence and increased flexibility in comparison to the conventional integral adaptation laws.
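    A minimal sketch of one such independent joint controller follows (Python; the weighting factors and the proportional/integral adaptation rates are illustrative placeholders, not values from the cited theorem). Each joint runs its own instance, which is what makes the scheme decentralized and suited to one microprocessor per joint.

    class AdaptiveJointController:
        """One decentralized adaptive controller; instantiate one per joint."""

        def __init__(self, dt, w_p=1.0, w_v=0.1, rho=0.5, gamma=2.0):
            self.dt = dt
            self.w_p, self.w_v = w_p, w_v      # weights forming the adaptation error
            self.rho, self.gamma = rho, gamma  # proportional and integral rates (assumed)
            self.I = dict(f=0.0, kp=0.0, kv=0.0, c=0.0, b=0.0, a=0.0)

        def torque(self, th, thd, th_des, thd_des, thdd_des):
            e, ed = th_des - th, thd_des - thd   # position and velocity tracking errors
            r = self.w_p * e + self.w_v * ed     # weighted error driving adaptation
            sig = dict(f=1.0, kp=e, kv=ed, c=th_des, b=thd_des, a=thdd_des)
            g = {}
            for k, s in sig.items():
                self.I[k] += self.gamma * r * s * self.dt  # integral part of adaptation
                g[k] = self.I[k] + self.rho * r * s        # plus proportional part
            return (g["f"] + g["kp"] * e + g["kv"] * ed
                    + g["c"] * th_des + g["b"] * thd_des + g["a"] * thdd_des)

    Each control cycle, joint i computes its torque from its own measurements only; no joint needs the manipulator's dynamic model or the state of any other joint, which is the point of the scheme.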

    Point of Contact:
    Homayoun Seraji,
    Mail Stop 198-219
    Jet Propulsion Laboratory
    4800 Oak Grove Drive
    Pasadena, CA 91109
    seraji@telerobotics.jpl.nasa.gov


    Robot Arm Dynamic Control by Computer
    =====================================================================
    Feedforward and feedback schemes linearize responses to control inputs.
    NASA's Jet Propulsion Laboratory, Pasadena, California

    A method for the control of a robot arm is based on computed nonlinear feedback and state transformations to linearize the system and decouple the robot end-effector motions along each of the Cartesian axes in the workspace. The nonlinear feedback is augmented with an optimal scheme for the correction of errors in the workspace.

    The mathematical model of the robot arm is stated in homogeneous coordinates together with the Denavit-Hartenberg four-parameter representation of robot-arm kinematics. Using the Lagrangian formulation of mechanics, the dynamic behavior of the robot arm is expressed in matrix/vector form and manipulated to obtain expressions of the types previously found useful in nonlinear-control theory. The resulting dynamic-control mathematical model satisfies the necessary and sufficient conditions for external (or exact) linearization and simultaneous output decoupling. By using nonlinear feedback and a diffeomorphic transformation, the nonlinear system of dynamical equations is converted into a Brunovsky canonical form and simultaneously output-decoupled. The linearization accomplished here by nonlinear feedback is an "external linearization" as opposed to the conventional "internal linearization" (Taylor-series expansion). That is, the nonlinear character of the original system is not changed here by any approximation. Therefore, system linearization by nonlinear feedback can be called "exact linearization" in a control sense.

    The linearized system is unstable. To stabilize it, a linear feedback loop is added. As long as the feedback matrix is constant and block-diagonal, the system will remain an output-decoupled linear system. A major new feature of the control method is that the optimal error-correction loop operates directly on the task level and not on the joint-servo-control level. The task-level errors are then decomposed by the nonlinear-gain matrix into joint-force or joint-torque drive commands. The new control method performed well in computer simulations. The augmentation of nonlinear feedback with an optimal error-correcting control provides robust performance and assures acceptable tracking errors even when the dynamical parameters of the mathematical model of the robot arm are in error by as much as 30 percent.
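    For illustration, the inner linearizing loop can be sketched in the familiar computed-torque form (a minimal joint-space sketch assuming model functions M, C, and g are available as callables; the article's scheme additionally applies a diffeomorphic transformation so that the optimal error-correction loop operates on task-level coordinates):

    import numpy as np

    def exact_linearizing_torque(q, qd, q_des, qd_des, qdd_des, M, C, g,
                                 Kp=25.0, Kv=10.0):
        """tau = M(q) v + C(q, qd) + g(q): the nonlinear feedback cancels the
        dynamics, and v imposes stable, decoupled linear error dynamics."""
        e, ed = q_des - q, qd_des - qd
        v = qdd_des + Kv * ed + Kp * e      # per-axis scalar gains here; a constant
                                            # block-diagonal matrix in general
        return M(q) @ v + C(q, qd) + g(q)   # nonlinear ("exact") linearization term

    Because the nonlinear term cancels the dynamics exactly (to the accuracy of the model), the closed loop behaves as a set of decoupled double integrators stabilized by the constant linear feedback.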


    Point of Contact:
    Antal K. Bejczy
    Mail Stop 198-219
    Jet Propulsion Laboratory
    4800 Oak Grove Drive
    Pasadena CA 91109
    818-354-4568
    bejczy@telerobotics.jpl.nasa.gov
     
  5. Rick Valued Senior Member

    Messages:
    3,336
    Virtual Reality Calibration Technology.

    Virtual Reality Calibration Technology and TELEGRIP
    JPL recently developed a virtual reality calibration technique that enables reliable and accurate matching of a graphically simulated virtual environment in 3-D geometry and perspective with actual video camera views. This technique enables high-fidelity preview/predictive displays with calibrated graphic overlay on live video for telerobotic servicing applications. Its effectiveness was successfully demonstrated in a recent JPL/NASA-GSFC ORU (Orbital Replacement Unit) changeout remote servicing task.

    In September 1993, with NASA's recent thrust for industry collaborations, JPL and Deneb Robotics, Inc. established a technology cooperation agreement. In this JPL-Industry cooperative Deneb Commercialization Task, JPL transfers the virtual reality calibration software technology to Deneb, and Deneb inserts this software technology into its commercial product TELEGRIP. This joint technology collaboration thus enables Deneb to commercialize an upgraded industry product that will greatly benefit both space and terrestrial telerobotics applications.
    Key new features of the JPL calibration technique include:


    An operator-interactive method is adopted to obtain reliable correspondence data.
    A robot arm itself is used as a calibration fixture for camera calibration, eliminating a cumbersome procedure of using external calibration fixtures.
    The object localization procedure is added after the camera calibration as a new approach to obtain graphic overlay of both the robot arm and the object(s) on live video and enable effective use of the computer-generated trajectory mode in addition to the teleoperation mode.
    A projection-based linear least-squares algorithm is extended to handle multiple camera views for object localization (see the sketch after this list).
    Nonlinear least-squares algorithms combined with linear ones are employed for both camera calibration and object localization. Details of the algorithms and their software listings can be found in the recent JPL Document D-11593, which was prepared as part of this JPL-Industry cooperative task.
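    As an illustration of the projection-based least-squares idea extended to several views, here is a minimal DLT-style triangulation sketch (Python/NumPy; the actual JPL algorithms, including the nonlinear refinements mentioned above, are documented in D-11593):

    import numpy as np

    def localize_point(projections, pixels):
        """projections: list of 3x4 camera matrices; pixels: matching (u, v) pairs."""
        rows = []
        for P, (u, v) in zip(projections, pixels):
            rows.append(u * P[2] - P[0])   # u * (p3 . X) - (p1 . X) = 0
            rows.append(v * P[2] - P[1])   # v * (p3 . X) - (p2 . X) = 0
        A = np.vstack(rows)
        _, _, Vt = np.linalg.svd(A)        # homogeneous linear least squares
        X = Vt[-1]
        return X[:3] / X[3]                # dehomogenize to (x, y, z)

    With two or more calibrated views the stacked system is overdetermined, and the smallest-singular-vector solution is the least-squares estimate of the object point.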
    The JPL virtual reality calibration option is currently being implemented on Deneb's TELEGRIP, which is an open architecture based upon Dynamic Shared Objects (DSOs). The TELEGRIP video overlay implementation will be based upon an application programmer's interface (API) layer that insulates the overlay developer from the specifics of video hardware, thus enabling support for a wide range of video products. Graphic models can be overlaid in wire-frame or in solid-shaded polygonal rendering, with varying levels of transparency to produce different visual effects.
    The virtual reality calibration option implemented on TELEGRIP will provide 1) immediate benefits to NASA for ground-controlled telerobotics servicing in space, 2) immediate benefits to the National DOE (Department of Energy) Labs working on the disposal of nuclear waste, 3) significant enabling technology for the decontamination and decommissioning of commercial nuclear reactors, and 4) foreseeable applications in automotive, medical, and servicing industries.

    For more information see:

    W. S. Kim, Virtual Reality Calibration: Algorithms and Software Listings with an Application to Preview/Predictive Displays for Telerobotic Servicing, Jet Propulsion Laboratory Internal Document D-11593, Feb. 1994.

    W. S. Kim and A. K. Bejczy, "Demonstration of a High-Fidelity Predictive/Preview Display Technique for Telerobotic Servicing in Space," IEEE Trans. on Robotics and Automation, vol. 9, no. 5, pp. 698-702, 1993.

    W. S. Kim, P. S. Schenker, A. K. Bejczy, S. Leake, and S. Ollendorf, "An Advanced Operator Interface Design with Preview/Predictive Displays for Ground-Controlled Space Telerobotic Servicing," SPIE Conference 2057: Telemanipulator Technology and Space Telerobotics, pp. 96-107, Boston, MA, Sept. 1993.


    Point of Contact:
    Won Soo Kim,
    Mail Stop 198-219
    Jet Propulsion Laboratory
    4800 Oak Grove Drive
    Pasadena CA 91109
    818-354-5047
    kim@telerobotics.jpl.nasa.gov
     
  7. Rick Valued Senior Member

    Messages:
    3,336
    Assistant Form.

    Robotic Surgical Assistant for Brain Surgery
    ===================================================================
    Stereotactic brain surgery is a technique for guiding the tip of a probe or other delicate surgical instrument in the brain, through a small burr hole drilled in the skull and without direct view of the surgical site. Minimizing brain damage as the probe travels from the skull to the surgical target deep in the brain requires a straight-line trajectory that avoids such vital parts of the brain as the major blood vessels and the motor strip. A problem inherent to stereotactic procedures is that the surgeon cannot view the surgical site. Therefore, some 3-D localization of the target area is required: the surgeon must know through what angle and how deep the probe must be inserted in order to reach the target. This 3-D localization is usually accomplished by coupling the stereotactic frame with some X-ray device.

    The Long Beach Memorial Hospital of Long Beach, California, initiated an experiment to use a robot to provide localization to the surgeon by interfacing a CT image of the patient's brain to the robot's kinematic equations. The procedure consists of using a stereotactic frame affixed to the patient's head on the CT scanner couch. Three N-shaped locators on this frame provide a reference frame for computing the 3-D location of the target image. A robot bolted to the same CT scanner couch is used to provide the coordinates of the target relative to the stereotactic frame. The surgeon, based on observation of CT images, determines the entry point on the skull. The robot is programmed to align a guide, held by the robot's end-effector, with the target and the entry point. The surgeon then inserts instruments through the guide and the entry point, to a depth calculated by the robot.
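    The localization geometry the robot computes can be sketched briefly (a minimal illustration, assuming the 4x4 homogeneous transform from stereotactic-frame coordinates to the robot base has already been recovered from the N-shaped locators; this is not the hospital's or JPL's actual code):

    import numpy as np

    def plan_insertion(T_robot_frame, target_frame, entry_frame):
        """Map frame-coordinate points into robot coordinates; return the guide line."""
        to_robot = lambda p: (T_robot_frame @ np.append(p, 1.0))[:3]
        target, entry = to_robot(target_frame), to_robot(entry_frame)
        axis = target - entry                 # straight-line trajectory to the target
        depth = float(np.linalg.norm(axis))   # how deep to insert along the guide
        return entry, axis / depth, depth

    The robot aligns the guide with the returned entry point and direction, and the surgeon inserts the instrument to the returned depth.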
    One main obstacle to using this robot-assisted technique was the inaccuracy of the hospital's commercial robot, a Puma 200. Only large tumors close to the surface of the skull could be treated, requiring a large entry hole. A robot calibration technique developed by NASA's Jet Propulsion Laboratory was used to increase the pointing accuracy of the guide held by the robot, enabling application of the procedure to small tumors located in the interior portion of the brain. This improvement also decreased the necessary diameter of the drilled skull hole by 50%. Several successful operations were performed using this procedure at the Long Beach Memorial Hospital in the late 1980s.


    Point of Contact:
    Samad Hayati
    Mail Stop 198-219
    Jet Propulsion Laboratory
    4800 Oak Grove Drive
    Pasadena, CA 91109
    818-354-8273
    Samad.A.Hayati@jpl.nasa.gov




    --------------------------------------------------------------------------------
    Robot Assisted Micro-Surgery
    Building from its established NASA technology base in teleoperations and telerobotics, JPL is developing a new robotic microdexterity platform with important applications to medicine. Through a cooperative NASA-Industry effort, the Robot Assisted Microsurgery (RAMS) task is developing a dexterity-enhanced master-slave telemanipulator enabling breakthrough procedures in micro/minimally invasive surgery; the work proceeds under a cooperative commercial development agreement with MicroDexterity Systems, Inc. The applicable medical practice includes eye, ear, nose, throat, face, hand, and cranial surgeries. As part of planned task activities, the resulting NASA robot technologies will be benchmarked in actual operating-room procedures for vitreous retinal surgery.

    The primary objective of this task is to provide an integrated robotic platform for master-slave dual-arm manipulation, operational in a one-cubic-inch work volume at features in the 100-micron range (our goal is to extend these capabilities to features in the 20-micron range). The research is a natural evolution of our extensive experience in force-reflecting teleoperation with dissimilar master/slave systems. Capabilities will include force reflection and textural tactile feedback, and in situ multiple-imaging modalities for improved surgical visualization and tissue discrimination. Potential NASA applications may include EVA/IVA telescience, bioprocessing, materials processing and micro-mechanical assembly, small-instrument servicing, and terrestrial environmental testing in vacuum.



    Point of Contact:
    Paul Schenker
    Mail Stop 125-224
    Jet Propulsion Laboratory
    4800 Oak Grove Drive
    Pasadena, CA 91109
    818-354-2681
    schenker@jpl.nasa.gov
     
  8. Rick Valued Senior Member

    Messages:
    3,336
  9. kmguru Staff Member

    Messages:
    11,757
    Excellent posting.

    So, where are we trying to go with this?
     
  10. Rick Valued Senior Member

    Messages:
    3,336
    Actually, KM, when I started this thread I thought that I, along with everyone else, knew everything about Robotics and Robots, but going through all the compiled material I realised I didn't know a bit. An exhaustive but collective reference is therefore needed, for the sake of my own understanding if nothing else, and I thought it might be beneficial to all of us in understanding the subject thoroughly, along with its various basics.



    bye!
     
    Last edited: Mar 10, 2002
  11. kmguru Staff Member

    Messages:
    11,757
    Works for me. A suggestion: put the compiled version on a personal website and reference it in the sticky topic.

    Either way....excellent work.

    BTW, I am reading some stuff at http://www.ufoskeptic.org/secret.html, you might find interesting.
     
  12. Rick Valued Senior Member

    Messages:
    3,336
    Thanks for the suggestion... oh yeah! I'll check the site out... thanks again.

    bye!
     
  13. Rick Valued Senior Member

    Messages:
    3,336
    Real-Time Statistical Learning for Humanoid Robotics

    The following is an abstract of the seminar attended by my father.
    ====================================================================
    Real-Time Statistical Learning for Humanoid Robotics


    Stefan Schaal
    Computational Learning and Motor Control Laboratory
    USC
    ===================================================================
    Real-time modeling of complex nonlinear dynamic processes has become increasingly important in various areas of robotics and human computer interaction, including the on-line prediction of dynamic processes observed by visual surveillance, user modeling for advanced computer interfaces and game playing, and the learning of value functions, policies, and models for learning control, particularly in the context of high-dimensional movement systems like humans or humanoid robots. To address such problems, we have been developing special statistical learning methods that meet the demands of on-line learning, in particular the need for low computational complexity, rapid learning, and scalability to high-dimensional spaces. In this talk, we introduce a novel algorithm for regression learning that possesses all the necessary properties. The algorithm combines the benefits of nonparametric learning with local linear models with a new Expectation-Maximization algorithm for finding low-dimensional projections in high-dimensional spaces; it can be regarded as a nonlinear and probabilistic version of partial least squares regression. We demonstrate the applicability of our methods in synthetic examples that have thousands of dimensions and in various applications in humanoid robotics, including the on-line learning of a full-body inverse dynamics model, an inverse kinematics model, and skill learning.

    In order to speed up skill learning, we also investigated how imitation learning can contribute to teaching humanoid robots. A novel method to encode movement plans in terms of the attractor dynamics of nonlinear dynamical systems is suggested. The shape of the attractor landscapes can be learned, either from a demonstration or by reinforcement learning, using the statistical learning techniques above. Essentially, the suggested methods provide a control theoretically sound tool to acquire a repertoire of movement primitives for various motor tasks, where primitives can rapidly adapt to a dynamic environment. Video presentation will illustrate the outcome of our robot experiments.
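    For a flavor of the local-linear-model idea, here is a minimal batch sketch of locally weighted regression with a Gaussian kernel (Python/NumPy; it omits the incremental updates and the low-dimensional projections that distinguish the methods in the abstract, and the bandwidth is an illustrative parameter):

    import numpy as np

    def lwr_predict(X, y, x_query, bandwidth=1.0):
        """Fit a linear model weighted around x_query; predict its output."""
        d2 = np.sum((X - x_query) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))     # Gaussian receptive field
        Xb = np.hstack([X, np.ones((len(X), 1))])    # append a bias column
        WX = Xb * w[:, None]
        # weighted normal equations: (X^T W X) beta = X^T W y
        beta = np.linalg.lstsq(WX.T @ Xb, WX.T @ y, rcond=None)[0]
        return np.append(x_query, 1.0) @ beta

    Nearby training points dominate the fit, so the global function can be arbitrarily nonlinear while each local model stays linear and cheap to solve.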

    Interesting, isn't it?

    bye!
     
  14. kmguru Staff Member

    Messages:
    11,757
    Complex nonlinear dynamic processes: I did this type of work in manufacturing, multi-axis robotics, vision systems, and chemical/refinery safety processes. Now I am focusing the same approach on the business environment and economics. I have a long way to go. I have a hard time explaining this to MBA types. But it is fun....
     
  15. Fukushi -meta consciousness- Registered Senior Member

    Messages:
    1,231
    Zion,...

    Are you Hans Moravec?

    Who will benefit from these developments?... If today the world could be different,...

    but it's not.
     
    Last edited: Mar 11, 2002
  16. kmguru Staff Member

    Messages:
    11,757
    One possible future - written in 1993. That's a century ago in internet time. Great stuff. As long as we are not heading towards the "Terminator" story, we should be fine. All we need to do is modify our genetics to keep up with the machines.
     
  17. Rick Valued Senior Member

    Messages:
    3,336
    HAHAHAAAA! KID BROTHER BY ISAAC

    HAHAHAHAHAHAHAHAHAAHAHAHA... (SORRY FOR THE BIG LAUGH, BUT I AM SURE YOU'LL LIKE THE STORY)
    KM,
    I FELT LIKE SHARING THIS STORY OF ISAAC'S WITH YOU AND OTHERS WHO MAY COME HERE BY ACCIDENT.

    KID BROTHER
    ===============================================================
    OKAY, SO THERE'S A GUY WHO HAS A KID CALLED CHARLIE. HE'S A GREAT KID, OVERGROWN A LITTLE BIT, AND HE HAS A PROBLEM COMMUNICATING WITH HIS FELLOW PALS. THIS MAKES HIM A LONER. BUT HIS FATHER INSISTS THAT HE IS A LONER BECAUSE HE IS A LEADER, SO THERE IS NOTHING TO WORRY ABOUT. AT LAST JOSIE (CHARLIE'S MOM) TELLS HER HUSBAND TO GET HIM A BROTHER, A ROBOTIC BROTHER... HE HESITATES AT FIRST BUT THEN AGREES. THE "KID", AS THE FAMILY CALLS HIM, BECOMES A GREAT PART OF THE FAMILY. EVEN CHARLIE AND HIS FATHER BEGIN TO LIKE HIM. BUT JOSIE, OH! SHE TREATS HIM LIKE HER REAL SON. SHE ADORES HIM, LOVES HIM TO THE CORE. AFTER ALL, KID WAS DESIGNED TO BE A PERFECT BROTHER, A PERFECT SON. HE HELPS JOSIE GET VEGETABLES, MAKES FOOD, AND TALKS TO HER WHENEVER SHE REQUIRES EMOTIONAL SOOTHING. IN SHORT, HE BECOMES A PART OF JOSIE'S LIFE. EVEN CHARLIE LIKES HIM A LOT. KID MAKES HIM FEEL BIG, DOMINATING, A LEADER MOST OF ALL, SINCE HE'S PROGRAMMED FOR THAT ONLY.

    CHARLIE'S FATHER HAS TO GO AWAY FOR ONE DAY SINCE HIS BOSS HAS CALLED, BUT UNFORTUNATELY A FIRE BREAKS OUT IN THE HOUSE, AND THE WHOLE HOUSE IS WRECKED. MEANWHILE THE FATHER RETURNS TO SEE JOSIE. SHE WAS ON THE LAWN WHEN IT HAPPENED, THEY SAID (THE COPS). HE ASKS HER HOW CHARLIE IS AND TELLS HER NOT TO WORRY ABOUT KID, AS HE CAN BE BOUGHT AGAIN, BUT SHE KEEPS ON MURMURING "I HAD TO MAKE A CHOICE..." FINALLY HE SHOUTS AT HER, ONLY TO FIND THAT KID IS LYING ON THE LAWN AND CHARLIE WAS LEFT BY HER IN THE HOUSE...

    HE IS NUMBED.
    HE SAYS...
    YOU SACRIFICED MY SON'S LIFE FOR THAT PIECE-FOR THAT-FOR THAT-
    FOR THAT PIECE OF TITANIUM-FOR THAT--AAAAAAAAAAAAAAAAHHHHAHAHAAA...



    AMAZING AND AMUSING, ISN'T IT?
    PS: THIS WAS TAKEN FROM GOLD, ONE OF HIS LAST COLLECTIONS OF SCI-FI STORIES.

    HOPE YOU GUYS ENJOYED IT. I'LL COME UP WITH MORE IF I FIND ANY.
    BYE!
     
    Last edited: Mar 12, 2002
  18. Rick Valued Senior Member

    Messages:
    3,336
    New Developments Regarding Robotics.

    Visionary implant

    Artificial Retina
    =====================================================================
    Although it is still early days, the first attempts to make an artificial retina—to restore sight to the blind—look remarkably promising...
    ---------------------------------------------------------------------
    Bypassing a diseased retina to send images direct to the brain

    SPECTACLES to aid the blind might seem the stuff of “Star Trek”, but research is in the works that could bring the notion down to earth. A number of groups in America are trying to perfect an “electric retina”, a device that might one day restore vision to millions of people who have lost their sight. To do this, they are calling on the same tricks that were used to create a successful cochlear implant—a device which, in response to sound waves, uses electrical impulses to stimulate nerve cells in the inner ear. These nerve cells fire, sending (slightly imperfect) signals to the brain, where they are interpreted as sound.

    Even though researchers are not quite sure how the brain interprets the auditory signals as well as it does, they are now hoping that the same system will work for vision. In theory, an electric retina could function in much the same fashion as a cochlear implant. In healthy eyes, the retina is made up of cells that receive light and translate it into electrical impulses that are sent, via the optic nerve, to the brain. But in diseases such as macular degeneration (the leading cause of blindness in the western world) or retinitis pigmentosa (a hereditary disease causing blindness), the “rod” and “cone” cells that convert light into neural signals gradually degenerate.

    Hence the attempts to use a set of tiny electrodes that bypass these cells and send their electrical impulses direct to the ganglion cells—the layer of retinal cells behind the rods and cones. The electrical stimulation should then prompt the ganglion cells to perform the task they are meant for: to transmit information to the optic nerve.

    To do this, a tiny video camera, less than three centimetres square, is embedded in the frame of a pair of spectacles so as to capture the scene in front of the wearer. The camera's CCD (charge-coupled device) chip captures and digitises the images, which are then transmitted as radio waves to another, even smaller chip implanted inside the eye. The eye has a ribbon of 100 whispery electrodes attached to its rear. These transmit electrical impulses to the ganglion cells.
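    As a toy illustration of that signal path (block-averaging only; choosing stimulation levels that the brain interprets correctly is the hard neural-coding problem discussed below), a digitised frame could be reduced to one level per electrode of a 10x10 array like this:

    import numpy as np

    def frame_to_electrodes(frame, grid=(10, 10)):
        """Average image blocks down to one stimulation level per electrode."""
        gh, gw = grid
        h, w = frame.shape
        frame = frame[: h - h % gh, : w - w % gw]  # crop to a multiple of the grid
        blocks = frame.reshape(gh, frame.shape[0] // gh, gw, frame.shape[1] // gw)
        return blocks.mean(axis=(1, 3))            # gh x gw stimulation levels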

    That, at least, is the idea. The reality, however, poses a number of engineering challenges. Second Sight of Valencia, California, is trying to harness the best of the academic research to create a device that people can actually use. Its prototypes draw on work done both at the University of Southern California (USC) in Los Angeles and a collaborative group from Harvard University and the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts.

    One of the biggest problems is finding a way to attach something to the retina's surface. The membrane is no more substantial than a piece of wet tissue paper. And any electrical device left in bodily fluids for long periods tends to corrode. But the hardest task of all is working out how to translate the visual images into electrical impulses that the brain can interpret properly.

    As it turns out, the images that the camera records, even when broken down into picture elements (“pixels”), cannot be converted directly. Unfortunately, two electrodes stimulating two sites on a person's retina will not necessarily mean that the patient will see two spots. Deciphering this “neural coding” can be an arduous task. Mark Humayun, an ophthalmologist at USC, has shown that, with a short-term implant, a patient can recognise the letter “E” from five feet away. John Wyatt and Joseph Rizzo, co-directors of the Harvard/MIT retinal implant project, have managed to help a patient see a line of four dots.

    Hardly encouraging. But that they have been able to restore, even if only temporarily, a modicum of “sight” to people who had been blind for years is quite an achievement. Clearly, a better understanding will come once the group finds a way to keep an implant in the patient for weeks or even months, rather than just hours.



    bye!
     
  19. Rick Valued Senior Member

    Messages:
    3,336
    New Robot On the Dock.

    From: News and Views | BizNews |
    Friday, February 15, 2002

    Honda's Humanoid Robot
    Rings the NYSE's Bell


    By NANCY DILLON
    Daily News Business Writer

    Move over, C3PO: a new humanoid robot from Honda has learned to climb stairs, fetch a cold can of beer, even ring the opening bell at the New York Stock Exchange.

    "He's so cute, I want one," said Jeanne Dixon, the wife of a Maryland Honda dealer in town to see the robot unveiled. "I wonder if he can be programmed to take out the garbage?"

    No dice, Dixon. The robot can open and close doors on its way to the garbage, use dozens of sensors to spot refuse, even wave at it. But so far it can't lift more than a can of beer.

    For Honda, though, it's a two-legged brand-extension.

    "Honda has always focused on mobility," said Hiroyuki Yoshino, Honda Motor's CEO. "We didn't see a reason to limit mobility to things we ride or drive."

    So far, Honda has built just 20 units, called ASIMO, with one recently rented to a Japanese museum. Costing more than $1 million to produce and carrying a lease price of $150,000 a year, ASIMO is one expensive docent.

    Company officials said once ASIMO's target market is determined, the robot will go into mass production and hopefully see its price drop out of the stratosphere.

    Named ASIMO after the Japanese words for "leg" and "tomorrow," the remote-controlled robot is 4-feet tall, weighs 115 pounds, and wears a white plastic space suit.

    Unfortunately, ASIMO is most likely years away from any Hammacher Schlemmer-style catalogue listing.

    Leading ASIMO's first U.S. demonstration at Wall Street's Regent Hotel, Yoshino said Honda is still trying to figure out exactly who would buy the robot and why.

    "Perhaps ASIMO will assist the elderly and help with household chores," said Yoshino.

    How about washing windows? That's probably another three years out, one engineer said.

    And forget about artificial intelligence. This robot's claim to fame is its ability to climb stairs. It won't be saving us from alien invaders anytime soon.

    "I'm a bit jealous," said NYSE chairman Dick Grasso shortly after ASIMO became the first robot to ever ring the opening bell. "ASIMO's been able to do what I haven't — get the Dow above 10,000 again."



    bye!
     
  20. Rick Valued Senior Member

    Messages:
    3,336
    Long Distance Robots (An article From Sci-Am)

    The following is an article from Sci-Am.


    Long-Distance Robots
    The technology of telepresence makes the world even smaller

    by Mark Alpert



    A week after the World Trade Center disaster, I drove from New York City to Somerville, Mass., to visit the offices of iRobot, one of the country's leading robotics companies. I'd originally planned to fly there, but with the horrific terrorist attacks of September 11 fresh in my mind, I decided it would be prudent to rent a car. As I drove down the Massachusetts Turnpike, gazing at the American flags that hung from nearly every overpass, it seemed quite clear that traveling across the U.S., whether for business or for pleasure, would be more arduous and anxiety-provoking from now on. Coincidentally, this issue was related to the purpose of my trip: I was evaluating a new kind of robot that could allow a travel-weary executive to visit any office in the world without ever leaving his or her own desk.
    =====================================================================
    The technology is called telepresence, and it takes advantage of the vast information-carrying capacity of the Internet. A telepresence robot is typically equipped with a video camera, a microphone, and a wireless transmitter that enables it to send signals to an Internet connection. If a user at a remote location logs on to the right Web page, he or she can see what the robot sees and hear what the robot hears. What's more, the user can move the machine from place to place simply by clicking on the mouse. With the help of artificial-intelligence software and various sensors, telepresence robots can roam down hallways without bumping into walls and even climb flights of stairs.

    Until now, businesspeople have relied on techniques such as videoconferencing to participate in meetings that they can't attend. Anyone who's seen a videoconference, though, knows how frustrating the experience can be. Unless the participants are sitting right in front of the camera, it's often difficult to understand what they're saying. Researchers are developing new systems that may make videoconferences more realistic [see "Virtually There," by Jaron Lanier; Scientific American, April 2001]. But there's another problem with videoconferencing: the equipment isn't very mobile. In contrast, a telepresence robot can travel nearly anywhere and train its camera on whatever the user wishes to see. The robot would allow you to observe the activity in a company's warehouse, for example, or to inspect deliveries on the loading dock.

    The idea for iRobot's machines originated at the Massachusetts Institute of Technology's Artificial Intelligence Laboratory. Rodney Brooks, the lab's director, co-founded the company in 1990 with M.I.T. graduates Colin Angle and Helen Greiner. iRobot's offices are on the second floor of a nondescript strip mall, just above a store selling children's clothing. It's the kind of office that an eight-year-old would adore--machines that look like miniature tanks lurk in every corner, as if awaiting orders to attack. The robots are tested in a large, high-ceilinged room called the High Bay, which is where I encountered a telepresence robot named Cobalt 2. The machine resembles a futuristic wheeled animal with a long neck and a bubblelike head. When the robot raised its head to train its camera on me, it looked kind of cute, like a baby giraffe. Angle, who is iRobot's chief executive, says the company designed the machine to appear friendly and unthreatening. "We wanted to create a device that would be easy for people to interact with," he says. The robot rides on six wheels and has a pair of "flippers" that it can extend forward for climbing stairs. The antenna is fixed to the back of the machine like a short black tail.


    TELEPRESENCE ROBOT called the Packbot is designed to do reconnaissance in dangerous environments. iRobot, a company based in Somerville, Mass., has built other mobile machines that can transmit video over the Internet.

    After I finished admiring Cobalt 2, I turned to a nearby computer monitor that showed the robot's Web page. In the center of the screen was the video that the robot was transmitting over the Internet. The machine was still staring at me, so I had a nice view of my own backside. The video was grainy and jerky; because the system transmits data at about 300 kilobits per second, the user sees only five or six frames per second (television-quality video shows 30 frames per second). "You're trading off the frame-update rate for the ability to move and control the camera," Angle explains. Transmitting audio over the Internet is more troublesome because of time lags, but users can easily get around this problem by equipping the robot with a cellular phone.

    Now I was ready to give Cobalt 2 a road test. Using the mouse, I clicked on the area of the video screen where I wanted the robot to go. The machine's motors whirred loudly as they turned the wheels, first pointing the robot in the right direction and then driving it to the indicated spot. Then I devised a tougher challenge: I directed the machine to smash into the wall on the other side of the room. Fortunately for Cobalt 2, its compact torso is studded with sensors. The machine's acoustic sensor acts like a ship's sonar, detecting obstacles by sending out sound waves and listening to the echoes. Infrared sensors gauge the distance to the obstacles and can also warn the robot if it's heading toward a drop-off. Cobalt 2 stopped just shy of the wall, thwarting my destructive intentions.

    The machine that iRobot plans to sell to businesses looks a little different from Cobalt 2. Called the CoWorker, it resembles a small bulldozer--it actually has a shovel for pushing objects out of its path. "It's a robot with a hard hat," Angle says. In addition to a video camera, the machine has a laser pointer and a robotic arm that remote users can manipulate. iRobot has not set a price for the CoWorker yet, but it is already shipping prototype versions to businesses that want to evaluate the technology. The company also plans to introduce a telepresence robot for home use. Such a device could be a lifeline for senior citizens living alone; the robot would allow nurses and relatives to see whether an elderly person is ill or needs immediate help.

    Will these mechanical avatars soon be knocking on your door? The fundamental challenge of telepresence is not technological but psychological: I, for one, would have a lot of trouble keeping a straight face if a robot sat next to me at one of our magazine's staff meetings. And can you imagine how most senior citizens would react to the wheeled contraptions? Nevertheless, people may eventually accept the technology if the potential benefits are great enough. For example, an elderly person may decide to tolerate the intrusions of a camera-wielding robot if the only safe alternative is living in a nursing home.

    As I wandered through iRobot's offices, I got a glimpse of another telepresence robot called the Packbot. About the size of a small suitcase, this low-slung machine moves on caterpillar treads and, like Cobalt 2, has extendable flippers that allow it to climb over obstacles. The Defense Advanced Research Projects Agency (DARPA)--the U.S. military's research and development arm--is funding the development of the Packbot, which is designed to do reconnaissance and surveillance in environments where it would not be safe for humans to go.

    In the aftermath of the September 11 attacks, military officials recognized that telepresence robots could aid the search-and-rescue efforts. So the engineers at iRobot attached video and infrared cameras to the prototype Packbots and rushed them to New York. At the Somerville office I watched an engineer fasten two flashlights and a camera to a Packbot that would soon be taken to the World Trade Center site.

    Although the Packbots were too large to burrow into the wreckage, the iRobot engineers used one machine to search a parking garage. Smaller telepresence robots called the MicroTrac and the MicroVGTV--machines made by Inuktun, a Canadian company that sells robots for inspecting pipes and ducts--were able to crawl through the holes in the rubble. The machines found no survivors but located the bodies of several victims.

    This grim task was perhaps the best demonstration of the value of telepresence. As I drove back to New York, I felt a grudging respect for the robots--and for the men and women who'd built them.


    --------------------------------------------------------------------------------
    bye!
     
  21. Rick Valued Senior Member

    Messages:
    3,336
    Another Amusing article taken from the Guardian

    Scientists are experimenting with robots that will eventually be able to reproduce, writes Dylan Evans

    Thursday February 14, 2002
    The Guardian

    Last week, a bizarre two-year experiment stirred into life at the Magna Science Centre in Rotherham, south Yorkshire. Before the eyes of guests, professor Noel Sharkey of Sheffield University let loose a fleet of robotic predators and observed as they chased their equally mechanical prey.
    Unlike the prey-bots, which can gather energy from high-powered lamps through their solar panels, the predators can only replenish their batteries by sucking the life out of their victims. The artificial vampires do this by plunging sinister metal fangs into the heart of the smaller but more nimble prey and stealing their electricity. What makes this experiment more than just a jazzed-up version of Robot Wars is that Sharkey's machines are genuinely autonomous.

    They take their own decisions in real time. Human intervention is reduced still further by allowing the robots' brains to evolve by means of natural selection. Every so often, the virtual genes that encode the best robot control systems are used to create a new generation of predators and prey. The hope is that this process of artificial evolution will lead to the emergence of complex pack behaviour, similar to that which natural evolution has produced in species such as wolves and lions.
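    The loop just described follows the classic genetic-algorithm pattern; a minimal sketch (Python/NumPy, with placeholder genome size, selection, and mutation settings, and a fitness function standing in for the predator-prey trials):

    import numpy as np

    rng = np.random.default_rng(0)

    def evolve(fitness, n_genes=20, pop_size=30, generations=50, mut=0.1):
        """Return the best controller genome found by selection and variation."""
        pop = rng.normal(size=(pop_size, n_genes))             # random initial genomes
        for _ in range(generations):
            scores = np.array([fitness(g) for g in pop])
            elite = pop[np.argsort(scores)[-pop_size // 2:]]   # fittest half survives
            parents = elite[rng.integers(len(elite), size=(pop_size, 2))]
            cross = rng.random((pop_size, n_genes)) < 0.5      # uniform crossover
            pop = np.where(cross, parents[:, 0], parents[:, 1])
            pop += mut * rng.normal(size=pop.shape)            # Gaussian mutation
        return pop[np.argmax([fitness(g) for g in pop])]

    In the Magna experiment the "fitness function" is the robots' actual performance in the arena; everything else about the loop, scoring, selecting, recombining, and mutating virtual genes, is the same in spirit.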


    The idea of allowing robots to evolve has given rise to a new but rapidly expanding field of research known as evolutionary robotics. Although it shares many of the insights of artificial life, which pioneered the use of genetic algorithms in the 1970s and 1980s, evolutionary robotics is distinguished by its insistence on making the leap from 2D computer animations to 3D physically embodied machines.


    The aim is to remove the human being from the process of robot construction, so that robots can eventually reproduce and maintain themselves without help. In other words, the aim is to create a whole new species from scratch - a species of organism unlike any that has appeared on this planet, composed not of cells and DNA, but of metal and plastic. In the words of Steve Grand, whose groundbreaking computer game Creatures was among the first pieces of entertainment software to incorporate the principles of artificial life, the leading scientists in this field aspire to be nothing less than latter-day Baron Frankensteins.


    But what is the point of such an endeavour? Researchers argue that autonomous robots could prove useful in a range of fields, from clearing landmines to space exploration. But concerns have also been raised about the potential dangers. Kevin Warwick, professor of cybernetics at the University of Reading, foresees a time when intelligent robots may pose a threat to the survival of humanity. For the time being, however, such warnings seem premature; the robots professor Warwick put on display during his Royal Institution Christmas lectures in 2000 inspired more laughter than fear, thanks to their reassuring tendency to break down.


    Yet it is doubtful that evolutionary robotics will attain its objectives while it concentrates on the aggressive behaviours that have dominated both the research and the public imagination.

    As long as scientists focus exclusively on the evolution of robot predators, and TV coverage of robotics merely panders to our appetite for new forms of violence, robots will never get very far. Violent robots may evolve primitive emotions such as anger and fear, but if the history of life on earth is anything to go by, that will only take them to a level of complexity approaching that of insects. To evolve the greater levels of complexity we observe in higher animals such as ourselves, robots will have to acquire a broader repertoire of emotional capacities. It will not be enough for them to get scared and angry; they will need to be able to feel surprise, to experience joy - and even fall in love.


    Before you dismiss such talk as far-fetched, consider the following experiment in preparation at the University of Bath. A population of robots will be divided into two sexes. Each robot will try to reproduce by mating with others, but unlike the experiments conducted up to now, these robots will be fussy about who they mate with. The hope is that, by introducing mate choice into the process, artificial evolution will be accelerated, just as occurs with natural evolution. The technical name for this phenomenon is sexual selection, and its fruits are among the most eye-catching in the natural world - the peacock's tail, the bower bird's nest, the baboon's bottom.


    Perhaps the most unsettling thing about this development is the unpredictable nature of the outcome. Sexual selection is notoriously capricious, picking on very small features almost at random and taking them to extremes. But even this uncertainty is preferable to the more frightening predictability of the focus on aggressive robots. Who knows? Perhaps robosex will lead to the evolution of robots that care for their human ancestors rather than wishing to destroy them?



    bye!
     
  22. kmguru Staff Member

    Messages:
    11,757
    Today, we can make millions of robots that can be used to search out and destroy our enemies in caves. With a small fuel cell, a tiny robot can be powered for up to 7 days. They can carry plastic explosives and blow themselves up on contact with the enemy in the caves or in a specific designated area.

    I think it is a big mistake to give robots survival and reproduction instincts in the beginning. But by about 2030, they may be the ones managing our finances and food production. We are already using computers to produce goods, pick stocks, predict sales, find cures for cancer and so on....
     
  23. kmguru Staff Member

    Messages:
    11,757