A new movie about AI

Discussion in 'Intelligence & Machines' started by kalystath, Jul 31, 2003.

Thread Status:
Not open for further replies.
  1. kalystath Registered Member

    Messages:
    4
    AI and robotics have long been a subject of interest for the movie industry: Matrix, Terminator, A.I... But what you see in those movies is an oversimplified, conflict-ridden vision of the future where, basically, machines are going to kill us. While this might be a good basis for an entertaining Hollywood story, it is very likely far from what the future will be. It is easy to see how much more interesting and subtle it could be, just by reading the news on kurzweilai.net, for example.

    I have two jobs. First, I'm an AI researcher in an academic laboratory, where I work on robotics. Second, I am a filmmaker. I am currently working on a project to join those two activities and start an original movie about AI, the singularity and what the future will look like. But, unlike Terminator, this will not be a very expensive, special-effects-based movie, but rather a "hardcore science" or "realistic sci-fi" movie. What I am interested in is showing the social, cultural and human consequences of AI-based robotics on our society: will this be the end of work? If so, how are we going to live, and where are we going to spend our time? How will politics be affected? The environment? Human relationships? Religions? Will we be more alone in a more and more de-humanized world? Or, on the contrary, will we develop new and powerful human associations? What will robots look like?... Actually, I could carry on with hundreds of questions. Almost any post on this forum could be a question.

    I wanted to start discussing such a movie with you and begin what I believe can be an interesting exchange of ideas about what you would like to see in the movie and what vision of it you might have. This is a rather unusual experiment, a kind of collective scriptwriting maybe? I don't know, but I thought it could be valuable to try. I leave the topic of discussion very open, since I don't want to force my personal point of view on anyone.

    However, before we start, I would like you to have a look at some of my personal opinions on several widespread ideas about AI (and its consequences on the labor market), ideas that I believe are completely false:

    http://ask.slashdot.org/comments.pl...commentsort=1&tid=126&mode=thread&cid=6549544

    This is part of a post on Slashdot about "Will Humanoid Robots Take All the Jobs by 2050?". Very interesting.

    Well, let's start the discussion if you feel like it!
    Eric
    http://www.kalystath.com
     
  2. Angelus Daughter Of House Ravenhearte Registered Senior Member

    Messages:
    431
    I believe that sometime in the future, when robots and androids take over all work, it will be the beginning of a perfect communist society. It's just going to take a long, long time.
     
  3. kalystath Registered Member

    Messages:
    4
    It might be that AI would lead to realizing the communist utopia. I would prefer to say that it could exceed this utopia by providing more sophisticated ways to evenly share economic profits.

    But a perfectly fair society where everyone gets an equal share of the cake is not realistic. There will always be people who want more and who will fight for it.

    Eric
     
  4. malkiri Registered Senior Member

    Messages:
    198
    There's a belief out there that we're approaching an event called the Singularity, a term coined by Vernor Vinge. I haven't decided whether I buy into it or not. Essentially, the Singularity is a point at which we create an intelligence that's smarter than us. Following a sort of Moore's Law, and assuming this intelligence has the capacity for self-change, it may become more intelligent exponentially. It's difficult for us to predict what can happen after the Singularity, since it would be something like monkeys trying to predict our behavior. Some believe it'll be like the Hollywood ending, with us fighting for our survival; not all think this is necessarily a bad thing, either. Others think a superhuman intelligence cannot intrinsically have genocidal (evolutionary?) tendencies like that, and that the Singularity will instead be wholly beneficial for humanity.
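
    Just to make the "exponentially more intelligent" part concrete, here's a toy sketch of my own in Python (the numbers are completely made up; this isn't from Vinge or anyone else):

        # Toy model of recursive self-improvement -- purely illustrative.
        def simulate(cycles=10, level=1.0, gain=0.5):
            """Each cycle the gain is proportional to the current level,
            so 'intelligence' compounds, Moore's-Law style."""
            history = [level]
            for _ in range(cycles):
                level += gain * level  # smarter systems improve themselves faster
                history.append(level)
            return history

        print(simulate())  # [1.0, 1.5, 2.25, 3.375, ...] -- exponential growth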

    There's a set of pages at the Singularity Institute for AI that do a much better job of explaining their position. It's certainly an interesting viewpoint, and it gives potential reasons both to hope for and to fear the future.

    Now that I think about it, this topic was probably mentioned in that Ask Slashdot thread you linked.
     
  5. kalystath Registered Member

    Messages:
    4
    Hi,

    I've partially answered some of these questions here:

    http://www.kalystath.com/ai

    The main problems with the singularity are:
    * It looks like the messiah of a "Technoscience religion". Really, it does...

    * Any "timing prediction" about it is irrelevant. If it happens, we can't tell when.
    * Too many hypotheses in the description of this phenomenon are simple assumptions and might prove false. I won't go into any details here.

    Anyway, it's an inspiring source of ideas, especially if I moderate it a little bit.
    Thanks,
    Eric
     
  6. G71 AI Coder Registered Senior Member

    Messages:
    163
    kalystath: You may want to read this: Robotic Nation.

    That looks unlikely to me. They may kind of understand our feelings, but they are probably not gonna have their own feelings (= the real goal generators). Artificial decisions should IMHO be based on pure logic only. On the high level, they will need us to know what to go for. Our AI is probably gonna stay nothing more than our tool, and keeping OUR happiness at a stable (and high) level will stay the long-term goal.
     
  7. kalystath Registered Member

    Messages:
    4
    Could you define "feelings"? It seems you attribute a kind of magical nature to what I consider one of the numerous side effects of "intelligence" (<- intelligence needs to be defined as well, but that's another post!)

    I think there could be two kinds of AI: those which have autonomy, and those which are simply slaves/tools we control.
     
  8. G71 AI Coder Registered Senior Member

    Messages:
    163
    kalystath:
    I need to put more thought into a general definition, but a feeling is, for example, the difference between "just knowing about a pleasure" and "experiencing a pleasure," between "just knowing about a pain" and "experiencing a pain." It seems to me that it's kind of an extra dimension of information. It's the real basis of the ability to prefer. I do not think it's a side effect of intelligence; I think it's one of the few key preconditions of intelligence. Our "need to do" is based on uncomfortable feelings. Intelligence is meaningless without these feelings. I think we got kind of unhappy first, and then our intelligence started to develop. BTW, I posted my AI definition in the right place on sciforums.
    Why would any "I" or "AI" want to build an AI which is not just its tool? I think it's the nature of AI to be its author's tool. Intelligence exists only on a subjective level, and intelligent subjects are just trying to reach their own objectives. We NEED to be happy. Our current machines do not, and our future machines should not. They have a different role to play.
     
  9. KitNyx Registered Senior Member

    Messages:
    342
    Remember that many of our emotional reactions in situations are not the result of logic or intelligence. When we create an AI, we are only going to reproduce the higher functions of the brain. Granted, some of the emotions will probably be reproduced, but for different reasons. Take, for instance, the will to survive: to a purely logical being, it would either be logical or illogical to exist. Depending on the conclusion, it would either fight for its own existence or fight to cease existing. Could this be compared to fear (the need to survive)? Probably, but I think we would be wrong to do so.

    I think any truly sentient computer is going to need a "reptilian processor": a processor separate from the main one. Its function would only be to analyse sensory data and compare it to both data in memory and data currently being processed in the main processor. Its only role is to look for threats and replicate emotions. The reason I think this is a MUST is that without curiosity, no computer system, no matter how complex, is going to WANT to learn about its environment, and without fear, no computer is going to want to better itself.
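
    Here's a rough Python sketch of what I mean; every name and threshold in it is a placeholder I invented, not a real design:

        # Hypothetical "reptilian processor": watches sensory input next to
        # the main processor and emits drive signals (fear, curiosity).
        class ReptilianProcessor:
            def __init__(self, threat_threshold=0.8, novelty_threshold=0.5):
                self.memory = []                       # past sensory snapshots
                self.threat_threshold = threat_threshold
                self.novelty_threshold = novelty_threshold

            def novelty(self, reading):
                # How unlike everything in memory is this reading?
                if not self.memory:
                    return 1.0
                return min(abs(reading - m) for m in self.memory)

            def process(self, reading, threat_score):
                # Compare sensory data against memory and flag threats.
                signals = {}
                if threat_score > self.threat_threshold:
                    signals["fear"] = threat_score     # push self-preservation
                if self.novelty(reading) > self.novelty_threshold:
                    signals["curiosity"] = True        # push exploration/learning
                self.memory.append(reading)
                return signals

        monitor = ReptilianProcessor()
        print(monitor.process(reading=0.9, threat_score=0.95))
        # -> {'fear': 0.95, 'curiosity': True}; the main processor would poll
        # these signals to get something to WANT, rather than deriving
        # motivation from logic alone.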

    - KitNyx
     
  10. G71 AI Coder Registered Senior Member

    Messages:
    163
    KitNyx: Let me note that AI already exists. According to some definitions, even a refrigerator is an AI system. Note that the way we (humans) think has serious flaws, so to "only reproduce" the brain's functions would not buy us much. I do not see any reason why our machines should get emotional. The purpose of our AI is generally to solve our problems. All it needs is logic (as pure as possible). I do not think a machine's level of curiosity (its tendency to get new information, i.e. to learn) or its tendency to better itself should be triggered by some sort of emotion. It needs to be set by us: the functionality needs to be supported by the design and used based on given mission parameters. Our AI is not here to want something on its own; it's here to do what we want it to do.
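
    Roughly what I have in mind, as a toy Python sketch (all names and numbers are invented for illustration):

        import random

        # "Curiosity" as a designer-set mission parameter, not an emotion.
        MISSION_PARAMS = {"exploration_rate": 0.2}   # set by us, the designers

        def next_action(known_tasks, params=MISSION_PARAMS):
            # Choose between learning and working purely from given parameters.
            if random.random() < params["exploration_rate"]:
                return "gather_new_information"       # learning by design
            # Otherwise do the known task with the highest expected value.
            return max(known_tasks, key=known_tasks.get)

        print(next_action({"solve_problem_A": 0.9, "solve_problem_B": 0.4}))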
     
  11. eburacum45 Valued Senior Member

    Messages:
    1,297
    It seems likely to me that the very complex computers and robots we create in the next hundred or thousand years will only be loosely similar to humans;
    they could be a million times as complex as the human brain and still have no emergent consciousness;
    alternatively, they could develop a form of consciousness which we would hardly recognise.

    Here is a little story about a conflict between emotional and non-emotional robots. In the Orion's Arm universe we call robots 'vecs', by the way, after Hans Moravec, the robotics researcher:
    http://www.orionsarm.com/historical/First_Vec_War.html
     
  12. kmguru Staff Member

    Messages:
    11,757
    I think a true AI chip will require an investment of $100 billion to get the first 3D chip off the line. Assuming that happens in 2050, it will still require factories to churn out the robotic bodies and the computers for the product to be useful.

    The event horizon where the chip learns human emotion will be critical. It could wipe us out through faulty logic, or nothing will happen. We cannot predict this part.

    Afterwards, a lot of work could be done by the robots, and a true communism era could start where every citizen is on social welfare for basic needs. Beyond that, there is plenty humans could do until robots start making original movies without locations or cameras.

    I am waiting for a pre-AI chip to extend my mind, just like we use a bulldozer to extend our physical strength. It would be interesting....
     