Can a machine know?

Discussion in 'Intelligence & Machines' started by roadblock, Apr 6, 2006.

Thread Status:
Not open for further replies.
  1. zenbabelfish autonomous hyperreal sophist Registered Senior Member

    Messages:
    961
    From a biological perspective humans are 'designed' as such by genes selected as advantageous in a given environment; machines also appear to undergo an evolution analogous to biological evolution. From stone tools to hand chisels to power tools to lasers - selection, deletion, drift, etc.

    If the machine's evolution is selected for (in part) by a human monitoring the machine's feedback loop, then we can know that the machine has a true justified belief by verifying the state of the system (1 or 0); however, if the machine evolves autonomously of the human (technological determinism), then the criteria of 'knowing' for autonomous machines must be determined before we can know whether the machine knows.
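    The human-monitored selection idea above can be sketched as a toy evolutionary loop (my own illustration, not from the thread): a human-chosen target acts as the monitored feedback, and 'verification' is just checking each bit's state (1 or 0).

```python
import random

# Hypothetical sketch: bit-string "machines" evolve under selection toward a
# human-defined TARGET; mutation stands in for drift. All names are illustrative.
TARGET = [1, 0, 1, 1, 0, 1, 0, 1]

def fitness(machine):
    # Verification step: count bits whose state matches the monitored target.
    return sum(int(m == t) for m, t in zip(machine, TARGET))

def evolve(generations=200, pop_size=20, mutation_rate=0.1):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # selection: fitter half kept unchanged
        children = []
        for parent in survivors:
            # mutation/drift: each bit may flip with small probability
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in parent]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

    Because the surviving parents are carried over unchanged, the best score never decreases, so the population reliably converges on the human-set target.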
     
  3. yahya1 Registered Member

    Messages:
    1
    This is really good stuff. I am also writing an essay on 'Can a machine know?', and I believe what you said is absolutely true. For reference purposes, is this your own work or did you read it somewhere? If you read it somewhere, can you cite the book or website, please?
    Can you reply as soon as possible to private messages?
    Thanks
     
  5. wesmorris Nerd Overlord - we(s):1 of N Valued Senior Member

    Messages:
    9,846
    To "know" humans are designed is to indulge in delusion.

    To "think" humans are designed is perfectly reasonable, though a bit presumptuous from my perspective.

    Either position is erroneous for the purposes of this discussion.
     
  7. mackmack Registered Senior Member

    Messages:
    123
    "i can choose to not eat not breathe and not drink, and make myself die, i can go against my function and needs, wich infact make me not a machine anymore, because i can go against my programming, i dont actually have to eat or drink atal, i will just die if i dont, but if i die i still died not"--empty

    That isn't entirely true. For one thing, humans have free will to do whatever they want. The innate instructions that our creator instilled in us are built to prevent us from harm and to keep us following our functions. This doesn't mean we will never do the opposite of our functions; it means we have a choice, but the decision-making process will favour the functions more than anything else. This is the essence of being 'self-aware': being able to make decisions independently. Those innate traits and the function of a human being are just there to help the individual make that decision.


    Look, I think people are just looking at this 'can machines know' question from a human perspective. Humans are machines too and are built a certain way. Our creator can design us any way he/she/they want.

    We humans could be attracted to a tree the way men are attracted to women. We could think pain is a good thing and not a bad one. Machines can be built to treat pain as a good thing and not a bad one. Machines can be built to eat bad food instead of good food. This isn't slavery but a direct command or function of that machine. The reverse of what we think of as pleasure is pain to the machines, and the reverse of what we think of as pain is pleasure to the machines.

    We can't look at this 'know' in the context of a human; we have to look at it from a neutral point of view.
     
    Last edited: Jan 21, 2007
  8. wesmorris Nerd Overlord - we(s):1 of N Valued Senior Member

    Messages:
    9,846
    You ARE it, so there is no other point of view to be had. Nothing you could imagine could escape a human's point of view, as you are a human and your imagination is part of your point of view.
     
  9. zenbabelfish autonomous hyperreal sophist Registered Senior Member

    Messages:
    961
    Another angle is to ask: 'Did consciousness exist before the human capacity to appreciate it?'
    In 'lower' animals consciousness occurs, but these animals are not aware that they are conscious (lacking the necessary wetware), a.k.a. 'blindsight'.

    So can a machine know (have a true justified belief) without being aware that it knows?
     
  10. mackmack Registered Senior Member

    Messages:
    123
    These animals or insects act on innate traits; in computer science this is called an expert system. They basically function exactly as the programmer designed them. Look at a roach, for example: that's not intelligence. They are driven by innate traits and they have only the most basic knowledge of their existence.

    They run away from the light because they experience it as pain. They eat because they were built to be hungry, and the only way to fulfil this hunger is by feeding. The roaches aren't running away because they see the human as a threat; they run because they were built to move when they sense a moving object around them. Even humans have these lower-level instincts that drive us to make decisions, but we also have a higher learned intelligence that overshadows the lower-level instincts.
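    The innate-rules behaviour described above can be sketched as a minimal rule-based policy (my own illustration, far simpler than a real expert-system shell): each rule is a fixed condition-action pair wired in by the designer, so the 'roach' never does anything beyond what it was built to do.

```python
# Hypothetical sketch of fixed condition->action rules; no learning, no
# representation of 'threat' -- just hard-wired triggers, checked in order.
def roach_policy(senses):
    rules = [
        (lambda s: s["light"], "flee_light"),         # light is wired in as pain
        (lambda s: s["movement_nearby"], "scatter"),  # reacts to motion, not to a 'threat'
        (lambda s: s["hungry"], "seek_food"),         # built to be hungry
    ]
    for condition, action in rules:
        if condition(senses):
            return action  # first matching rule fires
    return "idle"

print(roach_policy({"light": True, "movement_nearby": False, "hungry": True}))
# first matching rule fires: "flee_light"
```

    Note that the rule order is itself a design decision: the 'roach' flees light even when hungry because its programmer ranked the rules that way.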
     
  11. zenbabelfish autonomous hyperreal sophist Registered Senior Member

    Messages:
    961
    Thanks for bringing me up-to-date with the reference to expert systems - I will follow this up. I used the example of blindsight as analogous to a machine knowing (true justified belief) without being itself aware that it knows.
    One lecture hall reference is to the brain-damaged patient who was unable to see a cup on a table, yet when asked to pick the cup up could do so - even though he couldn't see it. Of course the patient could see the cup all along - he wasn't aware that he could see the cup as the nerves between his optic and awareness apparatus were damaged.

    This is how I imagine a machine could know (but not be aware it knows).
     
  12. draqon Banned Banned

    Messages:
    35,006
    A machine can know only if it has self-perception... and such is not the case.
     
  13. zenbabelfish autonomous hyperreal sophist Registered Senior Member

    Messages:
    961
    So would I be right in thinking that in order for a machine to know it must be able to make 'meta-representations' (i.e. a representation of a representation)?
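    The 'representation of a representation' idea can be made concrete with a toy data structure (a hypothetical illustration, not a claim about how real systems work): a first-order record of the world, plus a second-order record about that record, which the machine can inspect to report why it holds the belief.

```python
# First-order representation: a fact about the world.
world_fact = {"object": "cup", "location": "table"}

# Second-order (meta-)representation: a representation *about* the first one,
# recording its source and justification. Field names are my own invention.
meta = {
    "about": world_fact,
    "source": "camera",
    "justified": True,
}

def machine_knows(m):
    # "Knows" here = holds a belief it can point to, with a justification
    # it can inspect (a crude stand-in for 'true justified belief').
    return bool(m["justified"]) and m["about"] is not None

print(machine_knows(meta))  # True
```

    On this sketch, blindsight-style 'knowing without awareness' would be having the first-order record while lacking the meta-level one.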
     
  14. ScottMana Registered Senior Member

    Messages:
    159
    Wow, I have not read through all the comments on this post. There are a lot.

    I for one do not see AI ever becoming a true life form. If you were in a car, you could see the steering wheel turn, the pedals and gauges moving. But you would not see the life. You could make it automatic in ways that make it look like it is alive. But it will never be more than a complicated machine. It can be made to drive around town, look and act like life. However, it will always lack self-determinism.

    All this overlooks the fact that life had to make it, tell it what to do and when to do it. It can sure look like it's alive, but it will never be able to do anything beyond what life told it to do. Just like that car back when it was simple.
     
  15. zenbabelfish autonomous hyperreal sophist Registered Senior Member

    Messages:
    961
    Good point. We could say then that a car/machine appears to be subject to heavy determinism, whereas a human by comparison appears to have a high degree of free will.
    However, will this always be the status quo? In some ways technology already determines human behaviour (e.g. queuing at the checkout, typing posts for threads, unconscious reaction to surveillance, loss of physical community): if the exponential growth of technology continues unabated, will emergent properties further erode human free will?
     
  16. nicholas1M7 Banned Banned

    Messages:
    1,417
    Seems like a matter of semantics to me. We make calculations that lead directly to knowledge. We have something called memory that we use to store knowledge. In both these facets, computers are anything but deprived. So we can say that knowledge, in this sense, is an attribute that defines machines as much as us, if not more. What a machine does not have is a will. That's the one thing that separates us. This machine I type on as we speak would tell you what a silly conversation this is, but that would require a will. The machine operates according to input. But some might argue that it's more complicated than that and try to define will and the inanimate, even going so far as to argue that there are degrees of consciousness and so forth. I personally believe there are. If you don't, that's cool. You're still silly.
     
  17. zenbabelfish autonomous hyperreal sophist Registered Senior Member

    Messages:
    961
    Can we define "will" for the purpose of this thread please?
     
  18. nicholas1M7 Banned Banned

    Messages:
    1,417
    http://en.wikipedia.org/wiki/Will_(philosophy)
     
  19. zenbabelfish autonomous hyperreal sophist Registered Senior Member

    Messages:
    961
    I would argue that the human also operates according to input.
    Different levels of consciousness in a machine are analogous to the development of metarepresentation (and perception + cognition) in humans; and higher levels of consciousness in a machine are a matter of designing and programming the architectural conditions for emergent properties to develop. Consciousness is inevitable given the right architecture and environment.
     
  20. mackmack Registered Senior Member

    Messages:
    123
    "if the exponential growth of technology continues unabated, will emergent properties further erode human free will?"

    To be more intelligent means that you are in better control of your environment. We humans are the most dominant species on earth, which means that we can control the earth to a certain degree. The more intelligent we get, the better we control our environment.

    Then there is the question: if the machines get so intelligent that they surpass human intelligence, will they take over the human race, or destroy it?

    This is heavily debated, not just in computer science but in science in general, but the answer is no. You see, we designed the machines; we gave them life. Therefore we humans should mean, and be worth, something to the machines. Just as we humans pay our respects to our creator, the machines will pay respect to their creator. If the machines grow exponentially in intelligence, that means we benefit.
     
  21. zenbabelfish autonomous hyperreal sophist Registered Senior Member

    Messages:
    961
    I agree with the above, but a machine doesn't need to be more intelligent than humans to pose a threat to life or liberty. We humans, who manufacture/programme the machines, are not infallible, and it is important to guard against getting 'boxed in' by a technology which by its nature is an extension of ourselves... e.g. the atom bomb, speed cameras, VR addiction.
     
  22. mackmack Registered Senior Member

    Messages:
    123
    I see a bigger threat from people who test weapons of mass destruction than from robots that can think like humans. Imagine this: one detonation test and this earth is gone.
     
  23. zenbabelfish autonomous hyperreal sophist Registered Senior Member

    Messages:
    961
    Yes, we have enough problems dealing with our own human nature without psychopathic machines added to the equation. If we design machines in our own image, will they not inherit our flaws too?
     