Is "I, Robot" Our Future?

Discussion in 'Intelligence & Machines' started by Hypercane, Jul 28, 2004.

Thread Status:
Not open for further replies.
  1. Baal Zebul Somewhat Registered User Registered Senior Member

    Messages:
    388
    Okay, because:

    Everything we do is egoistic.
    We wish to be the best. What we are best at can vary, largely as an effect of how we are raised, how we are schooled, and with which eyes we see the surrounding world.
    Being yourself is not a field you can be the best in. Some might be the best at math, some might be the best at being a clown. Some who don't fit what is normal try to be the best criminals.
    Now, being the best is not all that matters, because the built-in goal system that every living creature has balances its needs against the need to matter.

    I know that is rather fuzzy, and as usual it was on purpose, so that no one can really see what I am up to.
     
  3. G71 AI Coder Registered Senior Member

    Messages:
    163
    Baal,

    I'm sorry, I'm not sure if I understand what you are trying to say. Isn't it in conflict with what you wrote a few posts back (?) :

    And can you just tell me how exactly you would punish an AI system running on a regular PC? I would really like to know. ;-)
     
  5. Baal Zebul Somewhat Registered User Registered Senior Member

    Messages:
    388
    The correct quote is:
    "(Or not)" was in white so that it would not be detected as easily.
    I would not punish my AI, because I don't even like humans.
    All I would do is raise the integer on a particular parameter.
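    The idea of "punishment" as nothing more than raising an integer on a parameter could be sketched roughly as follows. This is only an illustrative guess at what such a mechanism might look like; the names (`AgentState`, `penalty_weight`, `FEEDBACK_STEP`) are invented for the example and are not from any real project discussed in this thread.

```python
# Hypothetical sketch: "punishment" as a plain parameter increment.
# All names here are illustrative, not from the poster's actual AI.

FEEDBACK_STEP = 1  # how much one negative-feedback event raises the parameter


class AgentState:
    def __init__(self):
        # penalty_weight could later bias the agent away from whatever
        # behavior triggered the feedback; here it is just an integer.
        self.penalty_weight = 0

    def register_negative_feedback(self):
        """The whole 'punishment': increment one integer parameter."""
        self.penalty_weight += FEEDBACK_STEP


agent = AgentState()
agent.register_negative_feedback()
print(agent.penalty_weight)  # 1
```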
     
  7. G71 AI Coder Registered Senior Member

    Messages:
    163
    So it's like "Of course yes. Or not." Well, when talking to you, nothing surprises me anymore. BTW, I would not expect people to search for hidden stuff in places like this one. Many (including me) just jump in here for very brief moments to quickly read/reply.

    I would rather call it a configuration change.

    When working on my AI project, I typically see it mainly as software, but in fact software is nothing without some type of hardware. Our thoughts also depend on matter.

    No.

    If the data processed by an AI system can somehow cause the system to ignore or modify the security-related rules, then it's just poor design. The software can do only what we allow it to do.
     
  8. G71 AI Coder Registered Senior Member

    Messages:
    163
    Oh sure ;-)).. FYI, no matter how good the AGI systems being developed are (or how good the AGI ideas presented are), there is ALWAYS someone who says the authors have no idea how general intelligence works.
     
  9. Baal Zebul Somewhat Registered User Registered Senior Member

    Messages:
    388
    You say you understand me; then why did I write it?
    You need help? Well, it is pretty obvious that I wrote it to fit into the discussion, saying similar things to everyone else. But my pride extends further than the value of everyone else here, so I added "Or Not". That behavior is covered in my AI. (Also the behavior leading to saying "That behavior is covered in my AI".)

    Very well, a human modification of an otherwise self-controlled value by the ALF.

    Yes, of course hardware is necessary. Otherwise it would not run; I thought that would be possible to read between the lines, or at least see as obvious.

    They are not security-related rules; it is a parameter that it can edit by itself. It is just like a kid: you have to give it a picture of what is right and what is wrong. You do not have to say that robbing a bank is wrong, but you will instead have to cover that it is wrong to harm people directly or indirectly. Not poor design, poorly raised.

    So, tell me how you believe the human brain works (a basic and general description); I managed to get it down to just a few entries in a process flow.

    The reason you will never understand how every organic creature works is that you think too highly of yourself.


    My theory also covers why. So, one day, when the natural language works 100%, the AI might tell you.


     
  10. G71 AI Coder Registered Senior Member

    Messages:
    163
    I'm sorry, I simply have no desire to discuss your (or my) personality.

    I do not think you really know what you are talking about. Basically, good luck with the very deep analysis of all the system's input and its impact on the system's current status and knowledge. I think it's very practical to hardcode many rules in particular AI solutions. Self-modification limits do not necessarily limit the system's intellectual abilities, and they can prevent many types of potential self-damage.

    BTW, it's OK if such a system develops powerful tools for its own use, but the key configuration changes should stay well controlled by an authorized system/subject (ultimately by us). Our AI is an extension of our intelligence. Our intelligence is a tool we use to reach our goals. Our AI is simply supposed to work for us. If it's designed so that it can get reconfigured through regular input channels in ways that make its goals conflict with our goals (let's say human goals), then what's the purpose? It wouldn't really be our AI any more. Why would we want to allow that? What would it buy us? How would the development cost then be justified?
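    The point about key configuration staying under the control of an authorized subject, out of reach of the regular input channels, could be sketched like this. This is only a minimal illustration of the principle; the token check, class name, and setting name are all invented for the example.

```python
# Minimal sketch: key configuration can only change through an authorized
# channel, never through regular input. The token scheme and all names
# here are illustrative assumptions, not a real API.

class GuardedConfig:
    def __init__(self, authorized_token, **settings):
        self._token = authorized_token
        self._settings = dict(settings)

    def get(self, key):
        return self._settings[key]

    def set(self, key, value, token=None):
        # Regular input (no valid token) cannot touch key settings.
        if token != self._token:
            raise PermissionError("configuration change not authorized")
        self._settings[key] = value


config = GuardedConfig("operator-secret", max_self_modification=0)

try:
    config.set("max_self_modification", 100)   # arrives via a normal input channel
except PermissionError:
    pass                                       # the change is rejected

config.set("max_self_modification", 5, token="operator-secret")  # authorized
print(config.get("max_self_modification"))  # 5
```

    The design choice being illustrated: the system may build and use whatever tools it likes on top of `get`, but the write path to the key parameters is a separate, guarded channel.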

    And what would the rule about "wrong" things be? The fairy-tale rule "Never do it!", or "do it only when necessary in order to prevent things which are more wrong"? The key reasons why criminal law is fuzzy include the fuzziness of the right/wrong and more-wrong/less-wrong issues.

    Do you have a good sense of how many things do harm to us? You might be surprised if you research the ingredients of the products you use in your bathroom. The majority of these products contain junk which is slowly killing us. Let's say you have a girlfriend. Even if this story isn't exactly about her, she can hardly escape similar kinds of harm. Research the details of how our food is processed and the reasons for all those procedures. Research the details of vaccination, of the products used by your dentist, of the chemicals in your mattress (and the reason they're used)... The list goes on and on. There are known issues with so many widely used procedures and products that it's not easy for individuals to live a healthy life.

    In many cases there are good alternatives available to the product/decision makers, but companies/corporations have their reasons for ignoring those opportunities, average people have no clue what's going on, and many politicians make their dirty deals with those big corporations (and keep the laws/rules which let them do so kind-of legally). You may even find that there is a law in your state which makes it more difficult for you to buy particular lower-risk (or no-known-risk) products.

    This whole "harm" problem is complex, and it goes through many levels I did not mention. If ALL the harm were stopped in the near future, the world's economy would just collapse. That's not an option for us. I'm afraid we have a long, harmful way to go, and our AI systems will be required to choose from harmful options.

    Key subsystems: senses, goals, memory, analogy detection, reasoning & planning, body control.
    Key high-level goal: happiness. Its level and stability define the quality. Chemicals responsible for our happy feelings (the coolest stuff we've got): serotonin, acetylcholine, noradrenaline, glutamate, enkephalins and endorphins. Let me know if I missed any. BTW, dopamine was relatively recently removed from that lovely family.

    A less-than-absolutely-happy feeling (provided by the system of senses) triggers intellectual procedure(s) in a healthy brain. If everything is functional, the information from the senses is continuously stored in memory. That information typically includes {A} the status of the body (the internal status info, that mix of all the current feelings) plus {B} the current info (from the senses) about the external world.

    The triggered intellectual process uses the analogy detection system to find the most similar "record" (/scenario) in memory. The {A} is used like an index in a relational DB: it significantly limits the search scope before the {B}-search is fully applied. That makes the search very fast but not perfect (because some of the {B} stuff may not be there, plus another thing which I may point out later if necessary). In certain scenarios, we are literally forced to miss some "records".

    When we have the top n closest scenarios, we check the "timeline" around those spots (in memory), using a {B}-focused analogy search and trying to find a particular {A} which is at least a bit closer to the current-(sub)goal-related {A}. If it's there, then various types of reasoning are used to retrieve steps which could possibly be useful in the current scenario. Planning takes place, and finally the body control tries to perform the steps. Results are continuously compared to the expected results, and steps are regenerated if necessary.

    The brain is pretty good at finding something that's at least somehow similar (even if we do not really recognize the similarity on our current consciousness level), so there is always at least a tiny preference for the direction of thoughts/analyses. There are many important details I did not mention. I do not think it's possible to describe this kind of stuff briefly and cover all the important parts, plus I have better things to do now. ;-)
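    The two-stage lookup described above ({A} as a cheap index that narrows the scope, then a full {B} similarity search over the surviving candidates) could be sketched as follows. This is only a toy rendering of the described process flow, not the poster's actual implementation; the vectors, the distance function, and the `a_radius` cutoff are all illustrative stand-ins.

```python
# Toy sketch of the two-stage memory search: internal state {A} pre-filters
# the records (like a DB index), then external observation {B} is compared
# in full, only over the candidates that survived stage 1.

def distance(u, v):
    """Plain Euclidean distance between two equal-length tuples."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def recall(memory, current_a, current_b, a_radius=0.5, top_n=3):
    # Stage 1: cheap {A}-based pre-filter limits the search scope.
    candidates = [rec for rec in memory
                  if distance(rec["A"], current_a) <= a_radius]
    # Stage 2: full {B} similarity search, only over the candidates.
    candidates.sort(key=lambda rec: distance(rec["B"], current_b))
    return candidates[:top_n]

memory = [
    {"A": (0.1, 0.2), "B": (1.0, 1.0), "scenario": "calm, familiar room"},
    {"A": (0.9, 0.8), "B": (1.1, 0.9), "scenario": "stressed, loud noise"},
    {"A": (0.2, 0.1), "B": (5.0, 5.0), "scenario": "calm, open field"},
]

hits = recall(memory, current_a=(0.15, 0.15), current_b=(1.0, 1.1))
print([h["scenario"] for h in hits])  # ['calm, familiar room', 'calm, open field']
```

    Note how the "stressed, loud noise" record is discarded at stage 1 even though its {B} is very close to the current observation: that is exactly the "very fast but not perfect" trade-off the post describes, where the {A}-index can force the search to miss some records.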
     
  11. Baal Zebul Somewhat Registered User Registered Senior Member

    Messages:
    388
    Well, I am bored, and I know too much, so it only feels silly trying to answer. It even feels silly writing this.
     
  12. G71 AI Coder Registered Senior Member

    Messages:
    163
    Baal,
    I recommend you measure the level of that knowledge by the success of your AI project. People (especially teens like you) easily get confused about the scope of their knowledge.
     
  13. Qorl Guest

    Hypercane
    Is "I, Robot" our future?

    Yes it is.
    In the Bible it is written that God made everything, so if this is the truth, then we are intelligent beings similar to I, Robot's.


     
    Last edited by a moderator: Sep 18, 2004
  14. Gravity Deus Ex Machina Registered Senior Member

    Messages:
    1,007
    But in the Sacred Subgenius Guide it says that Bob Excremeditated us out. So if this is the truth, then we are floaters for sure!


     
  15. eburacum45 Valued Senior Member

    Messages:
    1,297
