Robots: Are We Close Yet?

Discussion in 'Intelligence & Machines' started by Rick, Nov 19, 2001.

Thread Status:
Not open for further replies.
  1. Rick Valued Senior Member

    I don't agree with him when he says that no one is trying seriously. I don't think most researchers in AI lack seriousness (after all those detailed studies).

  3. kmguru Staff Member

    More on AI:

    “Computer! Turn on the lights!” Rodney Brooks, director of MIT’s Artificial Intelligence Laboratory—the largest A.I. lab in the world—strides into his ninth-floor office in Cambridge, MA. Despite his demand, the room stays dark. “Computer!” he repeats, sitting down at the conference table.
    “I’m…already…listening,” comes a HAL-like voice from the wall. Brooks redirects his request toward a small microphone on the table, this time enunciating more clearly: “Turn on the lights!”

    A pleasant tweeting sound signals digital comprehension. The lights click on. Brooks grins, his longish, graying curls bouncing on either side of his face, and admits his entrance was a somewhat rough demonstration of “pervasive computing.” That’s a vision of a post-PC future in which sensors and microprocessors are wired into cars, offices and homes—and carried in shirt pockets—to retrieve information, communicate and do various tasks through speech and gesture interfaces. “My staff laughs at me,” says Brooks, noting he could have simply flicked the light switch, “but I have to live with my technology.”

    In the not-too-distant future, a lot more people may be living with technologies that Brooks’s lab is developing. To help make pervasive computing a reality, researchers in his lab and MIT’s Laboratory for Computer Science are developing—in an effort Brooks codirects called Project Oxygen—the requisite embeddable and wearable devices, interfaces and communications protocols. Others are building better vision systems that do things like interpret lip movements to increase the accuracy of speech recognition software.

    Brooks’s A.I. Lab is also a tinkerer’s paradise filled with robotic machines ranging from mechanical legs to “humanoids” that use humanlike expressions and gestures as intuitive human-robot interfaces—something Brooks believes will be critical to people accepting robots in their lives. The first generation of relatively mundane versions of these machines is already marching out of the lab. The robotics company Brooks cofounded—Somerville, MA-based iRobot—is one of many companies planning this year to launch new robot products, like autonomous floor cleaners and industrial tools built to take on dirty, dangerous work like inspecting oil wells.

    Of course, autonomous oil well inspectors aren’t as thrilling as the robotic servants earlier visionaries predicted we’d own by now. But as Brooks points out, robotics and artificial intelligence have indeed worked their way into everyday life, though in less dramatic ways (see “A.I. Reboots,” TR March 2002). In conversations with TR senior editor David Talbot, Brooks spoke (with occasional interruptions from his omnipresent computer) about what we can expect from robotics, A.I. and the faceless voice from the hidden speaker in his wall.

    TR: The military has long been the dominant funder of robotics and A.I. research. How have the September 11 terror attacks influenced these fields?
    BROOKS: There was an initial push to get robots out into the field quickly, and this started around 10 a.m. on September 11 when John Blitch [director of robotics technology for the National Institute for Urban Search and Rescue in Santa Barbara, CA] called iRobot, along with other companies, to get robots down to New York City and look for survivors in the rubble. That was just a start of a push to get things into service that were not quite ready—and weren’t necessarily meant for particular jobs. In general, there has been an urgency to getting things from a development stage into a deployed stage much more quickly than was assumed would be necessary before September 11. I think people saw there was a real role for robots to keep people out of harm’s way.

    TR: What else besides…

    COMPUTER: I’m…already…listening.

    BROOKS: Go to sleep. Go to sleep. Go…to…sleep.

    COMPUTER: Going…to…sleep.

    BROOKS: As long as we don’t say the “C” word now, we’ll be okay.

    TR: Did any other robots get called for active duty?

    BROOKS: Things that were in late research-and-development stages have been pushed through, like iRobot’s “Packbot” robots. These are robots that a soldier can carry and deploy. They roll on tracks through mud and water and send back video and other sensory information from remote locations without a soldier going into the line of fire. They can go into rubble; they can go where there are booby traps. Packbots were sent for search duty at the World Trade Center site and are moving into large-scale military deployment more quickly than expected. There is more pressure on developing mine-finding robots.

    TR: How are you balancing military and commercial robot research?

    BROOKS: When I became A.I. Lab director four and a half years ago, the Department of Defense was providing 95 percent of our research funding. I thought that was just too much, from any perspective. Now it’s at about 65 percent, with more corporate funding.

    TR: What’s the future of commercial robots?

    BROOKS: There has been a great deal of movement toward commercial robots. Last November, Electrolux started selling home-cleaning robots in Sweden. They have a plan to sell them under the Eureka brand in the U.S. There are a bunch of companies that plan to bring out home-cleaning robots later this year, including Dyson in the U.K., Kärcher in Germany and Procter and Gamble in the U.S. Another growing area is remote-presence robots; these are being investigated more closely, for example, to perform remote inspections above ground at oil drilling sites. Many companies are starting to invest in that area. IRobot just completed three years of testing on oil well robots that actually go underground; we’re now starting to manufacture the first batch of these.

    TR: How is that different from other industrial robots, like spot welders, that have been around for years?

    BROOKS: These robots act entirely autonomously. It’s impossible to communicate via radio with an underground robot, and extreme depths make even a lightweight fiber-optic tether impractical. If they get in trouble they need to reconfigure themselves and get back to the surface. They have a level of autonomy and intelligence not even matched by the Mars rover Sojourner, which could get instructions from Earth. You don’t need a crew of workers with tons of cable or tons of piping for underground inspections and maintenance. You take this robot—which weighs a few hundred pounds—program it with instructions, and it crawls down the well. You have bunches of sensors on there to find out flow rates, pressures, water levels, all sorts of things that tell you the health of the well and what to do to increase oil production. They will eventually open and close sleeves that let fluids feed into the main well pipe and make adjustments. But the first versions we’re selling this year will just do data collection.

    TR: The computer that turned on the lights is part of MIT’s Project Oxygen, which aims to enable a world of pervasive computing. As codirector, what are your objectives?

    BROOKS: With Project Oxygen, we’re mostly concentrating on getting pervasive computing working in an office environment. But the different companies investing in Project Oxygen obviously have different takes on it. Philips is much more interested in technologies to make information services more available within the home. Delta Electronics is interested in the future of large-screen displays—things that can be done if you have wall-sized displays you can sell to homeowners. Nokia is interested in selling information services. They call a cell phone a “terminal.” They want to deliver stuff to this terminal and find ways we can interact with this terminal. Already, Nokia has a service in Finland where you point the cell phone at a soda machine and it bills you for the soda. In Japan, 30 million people already browse the Web on their cell phones through NTT’s i-mode. All these technologies are providing services from computing in everyday environments. We are trying to identify the next things, to see how we can improve upon or go beyond what these companies are doing.

    TR: To that end, Project Oxygen is developing a handheld device called an “H21” and an embedded-sensor suite called an “E21.” But what, exactly, will we do with these tools—besides turn on the lights?

    BROOKS: The idea is that we should have all our information services always available, no matter what we are doing, and as unobtrusive as possible. If I pick up your cell phone today and make a call, it charges you, not me. With our prototype H21s, when you pick one up and use it, it recognizes your face and customizes itself to you—it knows your schedule and where you want to be. You can talk to it, ask it for directions or make calls from it. It provides you access to the Web under voice or stylus command. And it can answer your questions rather than just giving you Web pages that you have to crawl through.
    The E21s provide the same sorts of services in a pervasive environment. The walls become screens, and the system handles multiple people by tracking them and responding to each person individually. We are experimenting with new sorts of user interfaces much like current whiteboards, except with software systems understanding what you are saying to other people, what you are sketching or writing, and connecting you with, for instance, a mechanical-design system as you work. Instead of you being drawn solitarily into the computer’s virtual desktop as you work, it supports you as you work with other people in a more natural way.

    TR: How common will pervasive computing become in the next five to 10 years?

    BROOKS: First we have to overcome a major challenge—making these devices work anywhere. As you move around, your wireless environment changes drastically. There are campuswide networks, and cell phones in different places with different protocols. You want those protocols to change seamlessly. You want to have these handheld devices work independent of the service providers. Hari Balakrishnan [an assistant professor at MIT’s Laboratory for Computer Science] and students have demonstrated the capability—which has had great interest from the corporate partners—in having a totally roaming Internet, which we don’t have right now. That’s something I expect will be out there commercially in five years.

    TR: And in 10 years?

    BROOKS: In 10 years, we’ll see better vision systems in handheld units and in the wall units. This will be coupled with much better speech interfaces. In 10 years the commercial systems will be using computer vision to look at your face as you’re talking to improve recognition of what you are saying. In a few years, the cameras, the microphone arrays will be in the ceiling in your office and will be tracking people and discriminating who is speaking when, so that the office can understand who wants to do what and provide them with the appropriate information. We’re already demonstrating that in our Intelligent Room here in the A.I. Lab. I’ll be talking to you—then I’ll point, and up on the wall comes a Web page that relates to what I’m saying. It’s like Star Trek, in that the computer will always be available.

    TR: What is the state of A.I. research?

    BROOKS: There’s this stupid myth out there that A.I. has failed, but A.I. is everywhere around you every second of the day. People just don’t notice it. You’ve got A.I. systems in cars, tuning the parameters of the fuel injection systems. When you land in an airplane, your gate gets chosen by an A.I. scheduling system. Every time you use a piece of Microsoft software, you’ve got an A.I. system trying to figure out what you’re doing, like writing a letter, and it does a pretty damned good job. Every time you see a movie with computer-generated characters, they’re all little A.I. characters behaving as a group. Every time you play a video game, you’re playing against an A.I. system.

    TR: But a robotic lawn mower still can’t be relied upon to cut the grass as well as a person. What are the major problems that still need solving?

    BROOKS: Perception is still difficult. Indoors, cleaning robots can estimate where they are and which part of the floor they’re cleaning, but they still can’t do it as well as a person can do. Outdoors, where the ground isn’t flat and landmarks aren’t reliable, they can’t do it. Vision systems have gotten very good at detecting motion, tracking things and even picking out faces from other objects. But there’s no artificial-vision system that can say, “Oh, that’s a cell phone, that’s a small clock and that’s a piece of sushi.” We still don’t have general “object recognition.” Not only don’t we have it solved—I don’t think anyone has a clue. I don’t think you can even get funding to work on that, because it is just so far off. It’s waiting for an Einstein—or three—to come along with a different way of thinking about the problem. But meantime, there are a lot of robots that can do without it. The trick is finding places where robots can be useful, like oil wells, without being able to do visual object recognition.

    TR: Your new book Flesh and Machines: How Robots Will Change Us argues that the distinctions between man and machine will be irrelevant some day. What does that mean?

    BROOKS: Technologies are being developed that interface our nervous systems directly to silicon. For example, tens of thousands of people have cochlear implants where electrical signals stimulate neurons so they can hear again. Researchers at the A.I. Lab are experimenting with direct interfacing to nervous systems to build better prosthetic legs and bypass diseased parts of the brain. Over the next 30 years or so we are going to put more and more robotic technology into our bodies. We’ll start to merge with the silicon and steel of our robots. We’ll also start to build robots using biological materials. The material of us and the material of our robots will converge to be one and the same, and the sacred boundaries of our bodies will be breached. This is the crux of my argument.

    TR: What are some of the wilder long-term ideas your lab is working on or that you’ve been thinking about?

    BROOKS: Really long term—really way out—we’d like to hijack biology to build machines. We’ve got a project here where Tom Knight [senior research scientist at the A.I. Lab] and his students have engineered E. coli bacteria to do very simple computations and produce different proteins as a result. I think the really interesting stuff is a lot further down the line, where we’d have digital control over what is going on inside cells, so that they, as a group, can do different things. To give a theoretical example: 30 years from now, instead of growing a tree, cutting it down and building a table, we’d just grow a table. We’d change our industrial infrastructure so we can grow things instead of building them. We’re a long way away from this. But it would be almost like a free lunch. You feed them sugar and get them to do something useful!

    TR: Project Oxygen. Robots. Growing tables. What’s the common intellectual theme for you?

    BROOKS: It all started when I was 10 years old and built my first computer, in the early 1960s. I would switch it on and the lights flashed and it did stuff. That’s the common thread—the excitement of building something new that is able to do something that normally requires a creature, an intelligence of some level.

    TR: That excitement is still there?

    BROOKS: Oh yeah.

  5. kmguru Staff Member

    Consider this:

    BROOKS: Technologies are being developed that interface our nervous systems directly to silicon. For example, tens of thousands of people have cochlear implants where electrical signals stimulate neurons so they can hear again. Researchers at the A.I. Lab are experimenting with direct interfacing to nervous systems to build better prosthetic legs and bypass diseased parts of the brain. Over the next 30 years or so we are going to put more and more robotic technology into our bodies. We’ll start to merge with the silicon and steel of our robots. We’ll also start to build robots using biological materials. The material of us and the material of our robots will converge to be one and the same, and the sacred boundaries of our bodies will be breached. This is the crux of my argument.

    I am so excited. AI is here, AI is here. I guess I did not know that cochlear implants are AIs in disguise. The pacemakers too? Something new you learn every day....
  7. Rick Valued Senior Member

    Interesting, KM.
    Recently, while in Singapore, I went to a Discovery Channel convention. There were people from Hong Kong who were in charge of a robotics division and were monitoring the Aibo robot. They always referred to the AI programming as neural-net programming (which I know most of us disregard as an AI approach). They were giving lectures on how to build robots with neural nets so as to give them survival instincts as a learning mechanism. The first trial of such a procedure would essentially be a reptilian class of robotics, as Hans says.

    Okay, here's another one:
    there's a robot festival in Hong Kong. Are you going, KM?


    This will include lectures, etc.

  8. Rick Valued Senior Member

    Check this out...

    Robots--working for Japan's future?

    April 2, 2002, 11:10 AM PT

    By the end of the decade, the people who disarm bombs and search for survivors after a disaster may no longer need to put their lives on the line. A machine, possibly made in Japan, may be able to handle the dangerous stuff.
    That is one goal of the Japanese government's $37.7 million Humanoid Robotics Project (HRP), which aims to market within a few years robots that can operate power shovels, assist construction workers and care for the elderly.

    In the process, a new multibillion-dollar Japanese industry could be born.

    "Just as automobiles were the biggest product of the 20th century, people might eventually look back and say that robots were the big product of the 21st century," said Hirohisa Hirukawa, a researcher for the government-affiliated National Institute of Advanced Industrial Science and Technology.

    Hirukawa heads a group that helped to develop HRP-2, a silver and blue humanoid robot that stands 5 feet tall, weighs 128 pounds and looks a bit like a child wearing a spacesuit.

    The robot, co-developed with Kawada Industries, Yaskawa Electric and Shimizu, is the latest in a series of humanoid robots unveiled by Japanese researchers in the last few years.

    The government hopes their efforts will eventually enable robots to walk out of the factory--virtually their only domain at present--and into homes, offices, hospitals and any other place where humans toil.

    It also wants to capitalize on the technological edge of Japan, the global leader in robot production and home to more than half of the world's industrial robots.

    "We want to create a new market exploiting the technology Japan has accumulated, and to help strengthen the economy over the medium to long term," said Kenichiro Yoshida, deputy director of the Trade Ministry's industrial machinery division.

    The Japan Robot Association, an industry body, estimates that the robot industry could grow to $22.61 billion by 2010. The figure has hovered around $3.8 billion for the past few years.

    The group predicts the expansion will be led by robots that perform everyday tasks and believes that, while there are no such robots on the market now, they could be ringing up annual sales of $11.3 billion by 2010.

    "We want robots to be able to function around humans and be useful in areas other than entertainment," Yoshida said.

    For the industry to take off, however, technology must become far more advanced and, perhaps more critically, researchers will need to find useful roles that humanoid robots can play in society.

    The HRP-2 appeared before the public for the first time at Robodex 2002, a four-day exhibition at the end of March that featured various robots developed by Japanese corporations and universities.

    Visitors watched the blue-helmeted android help a human carry furniture about, and an older prototype drove a forklift.

    "There is demand for robots that can be used in dangerous places and disaster areas," Hirukawa said, noting that workers could, for example, operate construction machinery from a safe distance via a remote-controlled HRP-2.

    He hopes that perhaps 10 of the HRP-2 robots could be sold within five years of the state-run project ending in March 2003.

    "Once we can sell 1,000 robots, I think the state's role will end and we will enter a natural mass-production spiral," Hirukawa said. "But I can't see yet how that will happen."

    And Hirukawa says it will be a long time before humanoid robot technology is advanced enough to foster a major industry.

    "I think the earliest we will see robots doing household chores will be by 2025, or 2050 at the latest," he said.

    The Trade Ministry, however, wants to find a quicker way to build up a new robot industry and is beginning to examine options other than humanoid machines.

    The trade ministry's Yoshida said a new project aimed at developing household robots would not focus on humanoid robots, and the ministry was considering whether to continue the humanoid robot project after March 2003.

    "Robots don't have to be humanoid to be useful in homes," he said. "We want to make robots as quickly as possible that can be used in homes or for disaster areas."

    Interesting, isn't it?

  9. kmguru Staff Member

    These are the same Japanese who declared in the early '80s that they would build an AI using fifth-generation supercomputers within 10 years. Neural nets will work for specific activities - we shall see how it goes.

    People have a tendency to oversimplify issues and solutions. To give you an example: here in the US, certain government work needed a lot of people, like, yesterday, to redesign certain computer systems. Our company has been trying to supply those hard-to-find people for the last two months - there are so many roadblocks, you would not believe it. It is not as simple as it sounds...

    BTW, no. I will not be going to Hong Kong. I have so much stuff in my head, I need to unload it first on some fun projects. Then I can go - to refill my idea tank.
  10. Rick Valued Senior Member

    I think you're right about the over-hyping of AI. Although development was promised to follow Moore's law, it has not proceeded at even a snail's pace, since we were supposed to enter the 21st century with HAL-type walking machines. Android-type robots are a long way off, but developments are under way and must continue with positive, flexible approaches (that is, maturing, self-evolving and utilizing new ideas). I am sure we will reach our stereotypical sci-fi HALs, CALs, etc., who will evolve with us toward an astonishing and great future.

    Oh yeah! I almost forgot - I must check these interesting links...

  11. kmguru Staff Member

    Berry berry interesting....


  12. Rick Valued Senior Member

    I think the spacecraft self-healing concept can be used in robots as well...

    Check out the self-healing spacecraft thread in the Astronomy forums...


  13. allant Version 1.0 Registered Senior Member

    My 2 cents' worth on the bits and pieces needed to make a robot, and I am talking both software and hardware. We have enough hardware to make an intelligent robot - even though we are not sure how much hardware that is!

    In the software, though, we have at least one big step to go. We can program logical thinking, planning, imagination, emotions. What is missing is patterns. If we could program pattern recognition, we could build memory like a human's, see a cup in a picture, hear a word in the noise from a microphone.

    Yes, neural nets and brute-force approaches (see chess programs and current voice recognition programs) are a start at pattern recognition, but each has problems.

    There is a little math and logic that strongly suggests the problems of seeing a cup, hearing a word, and putting a spatial map into the "thinking" are all the same problem. Solve one and you solve them all. What we don't know is whether this will happen tomorrow or next century.

    When we have a solution, researchers will put that and the stuff we have together, and this will likely be an eye-opener. But we are not sure what else we may run into, or how easy the problems will be to solve after this is tried.
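    The point above about neural nets being a start at pattern recognition can be made concrete with a toy example: a single perceptron, the simplest neural net, learning to draw a line between two clusters of feature points. It also illustrates the limitation mentioned: a lone perceptron only handles linearly separable patterns, nowhere near recognizing a cup. All data, labels and parameters below are invented for illustration.

```python
# A toy sketch of neural-net pattern recognition: one perceptron learning
# to separate two clusters of 2-D feature points. Data and learning rate
# are made up for illustration.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches each label."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:  # misclassified: nudge the boundary toward the point
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def classify(w, b, point):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else -1

# Two linearly separable clusters standing in for "pattern" vs "not pattern"
samples = [(2.0, 2.0), (3.0, 3.0), (2.5, 3.5),
           (-2.0, -2.0), (-3.0, -1.0), (-1.5, -3.0)]
labels = [1, 1, 1, -1, -1, -1]
w, b = train_perceptron(samples, labels)
print(classify(w, b, (2.8, 2.1)))    # a point near the first cluster -> 1
print(classify(w, b, (-2.2, -2.5)))  # a point near the second cluster -> -1
```

    Swap the clusters for overlapping or concentric data and the same loop never converges - which is roughly where the "each has problems" caveat comes in.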

    One of the sayings AI people have:

    "The easier it is for a human to do the harder it is for a Computer/Robot and vice versa. "

    You can see a door without thinking; a computer really struggles with this. A human struggles to add up a million numbers, but a computer can add them up easily and get it right (unless, of course, it works for a bank, a tax department or Enron...)
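    The "vice versa" half of that saying fits in a few lines: summing a million numbers, tedious and error-prone for a person, is instant for a machine, and the result can even be checked against Gauss's closed form.

```python
# Summing a million numbers, with the result verified against n(n+1)/2
n = 1_000_000
total = sum(range(1, n + 1))
expected = n * (n + 1) // 2  # Gauss's closed form
print(total, total == expected)
```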


  14. kmguru Staff Member

    Ahh... patterns... we can learn a lot from how humans process patterns. Here is the abstract of a paper that discusses this:

    Talk about 'timing' (Synchronicity?) - I just read the abstract and was thinking about whether I should post it in sciforums. The opportunity presented itself when I saw the fresh posting on this topic....enjoy.

    Three-dimensional orientation tuning in macaque area V4

    David A. Hinkle & Charles E. Connor

    Department of Neuroscience, Johns Hopkins University School of Medicine and Zanvyl Krieger Mind/Brain Institute, 338 Krieger Hall, 3400 North Charles Street, Johns Hopkins University, Baltimore, Maryland 21218, USA

    Tuning for the orientation of elongated, linear image elements (edges, bars, gratings), first discovered by Hubel and Wiesel, is considered a key feature of visual processing in the brain. It has been studied extensively in two dimensions (2D) using frontoparallel stimuli, but in real life most lines, edges and contours are slanted with respect to the viewer. Here we report that neurons in macaque area V4, an intermediate stage in the ventral (object-related) pathway of visual cortex, were tuned for 3D orientation—that is, for specific slants as well as for 2D orientation. The tuning for 3D orientation was consistent across depth position (binocular disparity) and position within the 2D classical receptive field. The existence of 3D orientation signals in the ventral pathway suggests that the brain may use such information to interpret 3D shape.
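    Orientation tuning of the kind the abstract describes is commonly modeled with a bell-shaped curve that peaks at a neuron's preferred orientation. The sketch below uses a von Mises-style curve; the preferred orientation, tuning width and peak rate are illustrative assumptions, not values from the paper.

```python
import math

# Illustrative model of an orientation-tuned neuron's firing rate.
# preferred_deg, kappa and r_max are invented parameters, not data.
def tuning_response(theta_deg, preferred_deg=45.0, kappa=2.0, r_max=30.0):
    """Von Mises-style firing rate (spikes/s) peaking at preferred_deg.
    The factor of 2 makes the curve periodic over 180 degrees, since a
    bar rotated by 180 degrees has the same orientation."""
    delta = math.radians(2.0 * (theta_deg - preferred_deg))
    return r_max * math.exp(kappa * (math.cos(delta) - 1.0))

# Response falls off smoothly as the stimulus rotates away from 45 degrees
for theta in (45, 70, 95, 120):
    print(theta, round(tuning_response(theta), 2))
```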
  15. BatM Member At Large Registered Senior Member


    A very interesting thread in an area that I'm just beginning to dig into, but I haven't been able to wade through everything that zion has posted (there is a LOT there!). One area that I think you've been missing in this discussion of robots and where they will lead is nanotechnology. Consider the effect of "Utility Fog" on the direction that robotics might go in the future.

    Also, although you paint a "rosy" picture of where things might go in the future, have you speculated on the "dark" side of the technology?
  16. kmguru Staff Member

    dark side of the technology

    Hmmm... let me see... using an airplane as a weapon of mass destruction? Honestly, there are so many ways to use technology in a bad way that I do not want to discuss specifics someone might use for evil purposes. On the other hand, there are some areas we can discuss that have a long-term negative effect.

    For example - sometime soon, we will be able to map and understand every gene pair in a plant. We can design plants to fight disease, to be intelligent enough to adapt to harsh environments, and to produce apples and pears on the same tree. The long-term ramifications could be devastating if, after 50 years, those intelligent plants turn against humans, deciding we are also pests.

    I do keep track of nanotechnology. Nanotechnology by itself is an architecture, not a product. The products are computers, display units, robots, etc. - physical items that perform one or more tasks. So if we can focus on the tasks or objectives and work backwards to an architecture that can deliver them, then whether we use nanotechnology, material science, quantum structures, etc., we can get there.

    Anyway... yes, zion posted tons of stuff. Once you go through it all, you can become an expert... This is an open call to high-end information technology architects to contribute here, so that we can produce our own ideas to rival an IBM lab.
  17. BatM Member At Large Registered Senior Member

    Bah. Too complicated. I was thinking more simply. In the case of robotics, in the long run, we may develop a utopian society where robots perform almost every task (as some of what zion posted alluded to), but, in the short run, there is the issue of keeping an ever-growing human population productively employed until we sort out the structure of that utopian society (these things don't happen overnight). Mix in nanotechnology, and now you have a population of robots that can grow as fast as (or faster than) the human population. If they learn to "evolve" along the way... :bugeye:

    Have you read Bill Joy? There are many more scenarios where this could go awry. Remember what's happening today with the Internet and the growing legion of crackers that are finding ways to subvert the technology. The good guys need to be "right" all the time while the bad guys need to be "right" only once (a la 9/11).
  18. kmguru Staff Member

    Yes, I have read Bill Joy, and I like his thoughts. There is no such thing as good without evil; anything can be as evil as it is good. It is all in the balance, knowing the long-term cause and effect. It is that yin-yang thing. If one can look at the big picture and understand the cause and effect, then the unpleasant side effects can be minimized.

    I also look forward to the robotic and automated world where food and shelter come free to every newborn. On Maslow's hierarchy of needs, I would like to see human needs elevated to self-actualization.

    In India, two thousand years ago, a structure was in place where, for roughly the first 16 years, a person was pampered, nurtured and educated. Then that person entered society to work to feed and raise his family and support society. At around 60 years of age, the person usually left society and traveled, seeking self-actualization. His food and shelter were provided by churches and donation-type institutions (like the Salvation Army) along the way. Some wrote memoirs, and others told stories of their experiences, enriching society. It was done, and it proved very successful. History can repeat itself... this time using technology to overcome the population issue.
  19. BatM Member At Large Registered Senior Member

    Rose colored glasses..?

    There are many who view the future as a very nice place and, for the most part, I think I agree with them. The problem will be getting from here in the present to that future. There is going to be a LOT of shakeup before utopia arrives in the next century:

    • Advanced robotics will lead to massive unemployment as menial jobs are taken over by the robots.
    • This will lead to the rich getting much richer and the poor getting much poorer.
    • Which may lead to partial collapses of various economies as people struggle to find money for needs let alone wants.
    • Nanotech will enhance this effect if it happens at the same time as it will make it easier to assemble most anything thus idling the other producers of goods.

    Ultimately, I find the question to be, if these two technologies begin delivering on their promise, will the world know how to convert to times of "economic" plenty? Most people would jump on this question and say "of course" (as they jumped on Bill Joy), but think about it against all that you see going on in the world today. Many great things will undoubtedly happen, but will they all be good and, if not, how bad could they be?
  20. kmguru Staff Member

    As you know, we all comment and debate based on what we read as well as on hands-on experience. I usually put a little more emphasis on hands-on experience than on information gathered from books. For example, before I let a doctor do brain surgery on me, I would like to know how many he has done in the past and how successful they were... right?

    So, IMHO, my comments on robotics and automation are based on my hands-on experience as an architect, designer and troubleshooter for some of the best automation systems in the world since 1970 - in chemicals, refining, manufacturing, mining, etc. Every product you buy today is the result of partial automation (you still need some people to monitor and manage). In the early seventies, when the US was going through automation, productivity rose significantly. However, union people got scared that there would be massive unemployment. In 1983, I did a comparison, and US automation was such that you needed 12 times more people in China and 10 times more people in India to produce the same goods. Since then, other countries have started automating like crazy - and so far there has not been a major, non-recoverable unemployment situation anywhere due to automation. The same cannot be said for policy changes by stupid politicians - but that is a different story...

    So...we will find a balance ....
