What motivates an AI?

Discussion in 'Intelligence & Machines' started by c1earwater, Jul 31, 2002.

Thread Status:
Not open for further replies.
  1. c1earwater Registered Member

    Messages:
    25
    For years I've been thinking about writing a program that could learn from user-provided input and eventually hold some kind of (hopefully intelligent) conversation. But every time I get stuck right at the beginning: what would the goal of the program be?

    My reasoning is this: a human being has certain motivations for doing things, like hunger, cold, loneliness, etc. It starts out by crying and learning that when it cries it gets food, or a hug, or a blanket. People talk to it and it learns words. With those words it can specify that it didn't want the blanket, it wanted food, and specifically the kind it had yesterday (what was it called?).

    So it goes on learning, to satisfy its desires, its goals. The goals don't have to be completely clear to the subject. It might need a certain amount of sugar and another amount of water. It has learned that when it eats a chocolate ice cream it feels better, but it doesn't specifically know that that is because it has just (maybe partially) satisfied a goal.

    I want to take this analogy and put it in a computer program. But I don't know what kind of goals to use. (Computers don't need food). It has to be simple.

    Any ideas?


     
  3. allant Version 1.0 Registered Senior Member

    Messages:
    88
    I will give the standard answer to this problem, though I won't guarantee it is correct.

    The goal given in this type of AI is to maximise a variable called Reward/Punishment, or Happiness. You can simulate this with a single variable in your program! Now the hard part is figuring out what will add a bit to the Happiness value (reward), and what will take a bit off the value (punishment).

    For a physical turtle robot: sensors on the top of the turtle would give reward when caressed, and punishment when whacked. It would be punished when the battery fell to low values (hungry), rewarded when charging (feeding), and punished when charged and not moving (boredom).

    For your problem, I cannot think of a way to automate reward/punishment for improving language, apart from you hitting one key for "good PC" and another for "bad PC".
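
    A minimal sketch of this single-variable scheme, in Java since that's the planned language. All the names here (HappinessAgent, reward, punish) are invented for illustration; the two methods stand in for the "good PC" / "bad PC" keys.

```java
// Sketch: a single Happiness variable, driven by reward/punish keys.
public class HappinessAgent {
    private double happiness = 0.0;

    // "Good PC" key: add a bit to the Happiness value.
    public void reward(double amount) {
        happiness += amount;
    }

    // "Bad PC" key: take a bit off the value.
    public void punish(double amount) {
        happiness -= amount;
    }

    public double getHappiness() {
        return happiness;
    }

    public static void main(String[] args) {
        HappinessAgent agent = new HappinessAgent();
        agent.reward(1.0);  // caressed
        agent.punish(0.5);  // whacked
        System.out.println(agent.getHappiness()); // prints 0.5
    }
}
```

    The learning part would then consist of the program adjusting its behaviour to keep this number as high as possible.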
     
  5. c1earwater Registered Member

    Messages:
    25
    That's exactly where I get stuck. The happiness variable is one mechanism, although I don't think it's always necessary, but it still leaves me wanting a source of happiness/unhappiness.
    Pressing a reward/punishment button is possible, but I feel it's too slow.

    By the way, I don't just want the program to learn a language. I want it to want to talk, but the conversations should be interesting. Not just "hello", "hi", "hello", "you said that already", "hello",....

    Maybe one goal could be to learn as much as possible so it has to ask questions.
     
  7. BloodSuckingGerbile Master of Puppets Registered Senior Member

    Messages:
    440
    I have also tried to do something like that but, as you've probably figured out, it's too darn complex.

    I thought of sorting all the words in English into groups, like "big objects", "small objects", "good adjectives", "bad adjectives", etc., and placing some basic words in this table (which could be a database). Then, using the rules of linguistics, break every entered sentence into parts, remove words like "are", "am" and such and leave just the core, and then, according to the table, figure out what the sentence means. If there's a bad adjective in the sentence, for example, you can reduce the "happiness" variable and increase the "anger" variable, and the program might swear back at you or format your drive or something...

    But again, it's really complex and it's going to take an eternity to make.
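
    A rough sketch of that word-group table in Java (all names assumed; the groups and stop words are a tiny example, not a real linguistic rule set):

```java
import java.util.*;

public class SentenceMood {
    // Hypothetical word-group table; a real one would be a large database.
    static final Map<String, String> WORD_GROUPS = Map.of(
        "good", "good adjective",
        "great", "good adjective",
        "bad", "bad adjective",
        "stupid", "bad adjective");

    // Filler words to strip, leaving just the core of the sentence.
    static final Set<String> STOP_WORDS = Set.of("you", "are", "am", "is", "a", "the");

    // Break the sentence into parts, drop filler words, and score the rest:
    // +1 per good adjective (raise happiness), -1 per bad one (raise anger).
    public static int moodDelta(String sentence) {
        int delta = 0;
        for (String word : sentence.toLowerCase().split("\\W+")) {
            if (STOP_WORDS.contains(word)) continue;
            String group = WORD_GROUPS.get(word);
            if ("good adjective".equals(group)) delta++;
            else if ("bad adjective".equals(group)) delta--;
        }
        return delta;
    }

    public static void main(String[] args) {
        System.out.println(moodDelta("You are a stupid program")); // prints -1
        System.out.println(moodDelta("You are great"));            // prints 1
    }
}
```

    A real version would need far larger tables and actual grammar rules, which is where the "eternity to make" comes in.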

    On the other hand, if you succeed, you're going to get rich and be the next Bill Gates, and big companies will sue you and people will throw pies at you.

     
  8. kmguru Staff Member

    Messages:
    11,757
    If it were so easy to write a self-learning program, mice would be talking to you by now, or forming a union...

    The minimum basic hardware requirements are hefty. Start with a Sun E10K with 32 GB of memory, 8 processors, and 6 to 8 TB of SAN storage. Then dump in the entire encyclopedia, language, pictures, etc., worth 1 TB. Then you have to write a program that creates an associative database (not relational), and so on....

    Once the hardware and base software structure is complete, the learning process can start with a combination of genetic algorithms (GA), fuzzy logic, a neural network (NN) program, and a fractal program set, all interconnected.

    Then you have to create a GUI system to fine tune on the fly various parameters and weights and start teaching....

    There is more...but when you get there, call us....
     
  9. c1earwater Registered Member

    Messages:
    25
    I didn't say it would be easy, but I don't think stuffing it with facts and pictures will do the trick. Intelligence just doesn't work that way. The information we have stored in our brains is far from complete. Try this, for example: without looking at your watch, take a piece of paper and a pen or pencil and try to draw a picture of your watch. Then compare the two. I'd be surprised if you got it exactly right (and I don't mean the proportions). Our brains just don't store everything.

    Anyway, I don't expect to write a completely intelligent program (not the first time at least), I just want to see how far I can get. If I do manage to get it thinking, I'm sure you'll hear about it.
     
  10. kmguru Staff Member

    Messages:
    11,757
    I am sure of that...

     
  11. Cris In search of Immortality Valued Senior Member

    Messages:
    9,199
    C1earwater,

    The reward/punishment tactic is an abstraction taken from a larger perspective – survival. Most of the human activities you describe are all concerned with survival.

    If the AI learns that something dangerous could result in destruction and non-existence, then that ‘something’ should be avoided, and anything that helps ensure and improve the chances of survival would be encouraged. For example, increased knowledge of the surroundings and environment would help it avoid potential unknown dangers, and so on.

    This implies that an AI would have a constant thirst for knowledge and more importantly how to apply and interpret such knowledge.

    Cris
     
  12. c1earwater Registered Member

    Messages:
    25
    Cris,

    I don't see it that way. If people were motivated by survival they'd be dead: "Let's try not eating for a week... What do you know! I die. Better not do that again."

    Maybe the result is survival but I believe that's not what people "think" about on the most basic level.

    I do agree with your observation that an AI should have a thirst for knowledge. I'm thinking that the goal of an AI should be to get as complete a picture of the world as possible.
     
  13. BatM Member At Large Registered Senior Member

    Messages:
    408
    It's not that black and white. Remember the Happiness variable. If positive numbers represent happiness and negative numbers represent sadness, then the goal is to keep the number positive. Thus, if you stop eating, you're going to experience a number of things over a period of time that will begin lowering your happiness value. When it starts moving heavily into the negative zone, you're going to start doing things to raise it back up (i.e. survive).
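
    That feedback loop can be sketched in a few lines of Java (the numbers and the `simulate` method are arbitrary, made up for illustration):

```java
public class SurvivalLoop {
    // Simulate ticks of not eating; returns how many times the agent "ate".
    public static int simulate(int ticks) {
        double happiness = 5.0;
        int meals = 0;
        for (int t = 0; t < ticks; t++) {
            happiness -= 1.0;       // hunger lowers happiness each tick
            if (happiness < -3.0) { // heavily into the negative zone
                happiness += 10.0;  // act to raise it back up (eat, i.e. survive)
                meals++;
            }
        }
        return meals;
    }

    public static void main(String[] args) {
        System.out.println(simulate(20)); // prints 2: the agent eats twice
    }
}
```

    Nothing in the loop "thinks" about survival; the agent only chases the number, and surviving falls out as a side effect.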

    When it comes to "survival", does thought enter into it or is it just instinctual? In other words, our basic instincts teach us to survive, but we may occasionally override that through "thought".

    The problem is "pattern recognition". Ultimately, all learning comes down to recognizing the similarities between current circumstances and previous circumstances. There are many problems to solve along the way, though:

    • How do you create a "pattern" to capture the important information?
    • What are the "right" resulting actions to keep with the pattern?
    • How do you compare two patterns to determine if they are equal?
    • How do you rate the similarity of two (or more) patterns?
    • How do you combine patterns to "learn" new ideas from old ones?
    • How do you organize the patterns into an efficient storage order?
    • How do you "houseclean" old patterns of little value?

    If you think about it, you could probably add a lot to this list. This should tell you why there are no "self-aware" AIs out there yet.
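
    As one toy answer to the "rate the similarity" bullet, a pattern could be treated as a set of features and two patterns compared by their overlap (Jaccard similarity). The feature names below are invented for illustration:

```java
import java.util.*;

public class PatternMatch {
    // Rate similarity of two feature-set patterns as
    // |intersection| / |union|, giving a value from 0.0 to 1.0.
    public static double similarity(Set<String> a, Set<String> b) {
        if (a.isEmpty() && b.isEmpty()) return 1.0;
        Set<String> common = new HashSet<>(a);
        common.retainAll(b);                 // features both patterns share
        Set<String> all = new HashSet<>(a);
        all.addAll(b);                       // all features seen in either
        return (double) common.size() / all.size();
    }

    public static void main(String[] args) {
        Set<String> now = Set.of("dark", "cold", "hungry");
        Set<String> before = Set.of("dark", "warm", "hungry");
        System.out.println(similarity(now, before)); // prints 0.5
    }
}
```

    This only scratches the first and fourth bullets; representing actions, combining patterns, and housecleaning old ones are each harder problems again.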
     
  14. p_ete2001 Registered Senior Member

    Messages:
    355
    Can I ask what program you are using, c1earwater? Just out of interest.
     
  15. c1earwater Registered Member

    Messages:
    25
    I'm not using any program as yet, but I was planning on writing my own, probably in Java, maybe using MySQL and/or the Caucho Resin app server.
     
  16. Clockwood You Forgot Poland Registered Senior Member

    Messages:
    4,467
    I am sure an AI would crave information. It might also feel life is a game, working to gain points.

    You know the game DWTM: Die With The Most. The most money, karma, offspring, toy trains, whatever. It doesn't matter. Just the most.
     
  17. BatM Member At Large Registered Senior Member

    Messages:
    408
    V'ger

    Shades of Star Trek... :bugeye:

    DWTM = Know all that is knowable...
     