Can a machine know?

Discussion in 'Intelligence & Machines' started by roadblock, Apr 6, 2006.

Thread Status:
Not open for further replies.
  1. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    How can you be sure of either of these statements? I think the second is totally false, and so is "not walking computers."

    Admittedly, brains are far more advanced parallel processors than any man has yet conceived of, but they are still just digital devices that operate by collective interactions (in parallel, not in series like most of man's computers). A very large number of "off/on" devices, which we call nerves, are used in the "brain computer" - a larger active-device count, I think, than the total of all man's computers put together.

    There must be a lot because they cannot cycle on/off/on in less than 0.001 seconds. In some sense they are kilohertz rather than gigahertz devices, but that (and their parallel architecture) is the only difference I know of.
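    The counts-versus-clock-rates point can be put in rough numbers. The figures in this sketch are ballpark assumptions (a commonly cited neuron count, an assumed 2006-era chip), not measurements:

```python
# Ballpark switching-throughput comparison: many slow "devices" (neurons)
# versus few fast ones (transistors). All figures are assumptions.

neurons = 8.6e10          # rough human neuron count (assumed)
neuron_rate_hz = 1e3      # ~1 ms minimum on/off cycle, per the post

transistors = 1e9         # assumed transistor count for a 2006-era chip
transistor_rate_hz = 3e9  # ~3 GHz clock (assumed)

brain_ops = neurons * neuron_rate_hz         # raw switch-events per second
chip_ops = transistors * transistor_rate_hz

print(f"brain: {brain_ops:.1e} switch-events/s")
print(f"chip:  {chip_ops:.1e} switch-events/s")
```

    On these assumed figures the chip's raw event rate is far higher, which is exactly why the interesting difference must lie in the parallel architecture rather than in device speed.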

    Please tell me why you think many fewer but faster transistors, also in a parallel architecture, could not do what these neurons do.

    Read my long post to see what and how I think humans operate as computers, possibly even “knowing ones” with “free will.”
    Last edited by a moderator: Apr 21, 2006
  3. Dan the Man Registered Member

    Hi guys, I, along with roadblock, am also doing an essay on whether a machine can know. As he has already stated, I would enjoy others' input. Thanks to all those who have already given it. Personally, I believe you should also break it down into the ways of knowing: language, reason, emotion, and perception. Language is a rather easy one to handle: you can make the claim that, with binary and a network of computers, machines are communicating with each other and sharing knowledge, which is a way of knowing. Emotion would be a bit more difficult, because some would argue that a machine can display some emotions, such as hate, but others claim it is simply imitating these emotions. Then there is reason. For reason, you can at least make part of the claim that machines can reason because they have a thought process just as you and I do, even though they did not develop this process on their own - they were given it when they were programmed. I have not really gone through all of the reasoning to claim that machines can, nor have I looked into perception.
  5. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    You are restricting your thoughts to programmed computers, I think. Certainly, "connection machines" (more often called "neural network" machines, though I do not like that name) develop the process on their own. Usually when it is done (when they have taught themselves how to do the job), it is difficult, if not impossible, for men to understand how they do the face recognition, etc., that they do. For example, if one gives the machine all the interview answers the bank collected when questioning prospective borrowers, together with their repayment records (or defaults, etc.) over a few years of data, the machine becomes better than a human at evaluating new loan applicants' prospects of repayment, yet no one understands why by looking at the connection strengths it creates through experience. Not only is there no programming - no programmer even understands it after the fact!
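    A minimal sketch of this kind of learner: a toy one-neuron "connection machine" that adjusts its connection strengths from repayment examples alone, with no decision rule written in by a programmer. The loan data and features below are invented for illustration:

```python
import math
import random

# Invented toy loan data: (income_in_10k, debt_ratio) -> repaid? (1 or 0).
data = [((6.0, 0.1), 1), ((5.0, 0.2), 1), ((7.0, 0.3), 1),
        ((2.0, 0.8), 0), ((1.5, 0.9), 0), ((3.0, 0.7), 0)]

random.seed(0)
w = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]  # connection strengths
b = 0.0

def predict(x):
    """Probability of repayment for applicant x."""
    z = w[0] * x[0] + w[1] * x[1] + b
    z = max(-60.0, min(60.0, z))        # guard against exp overflow
    return 1.0 / (1.0 + math.exp(-z))

# Learning: nudge the connection strengths to shrink the error on each example.
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= 0.1 * err * x[0]
        w[1] -= 0.1 * err * x[1]
        b -= 0.1 * err

print(predict((6.5, 0.2)))   # high for a strong applicant
print(predict((1.0, 0.95)))  # low for a weak one
```

    After training, the "knowledge" lives entirely in w and b - numbers no one wrote and which explain nothing by inspection, which is the point above.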
  7. Gordon Registered Senior Member


    The basis of my statements is my experience. I work for a systems company and produce logic and algorithms for computer programs.

    And this is the point. A computer works on programs. These are based on logical rules and an ordered, logical (although one might doubt that a little in the case of some!) language protocol. This implies that the machine can only work in accordance with a set of programmed rules or a learned, ordered pattern.

    In the first case the program can say, for example, 'if A or B then C; if not, D'. This is input by a programmer.
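    Such a programmed rule looks like this (A, B, C and D are placeholders, as in the post):

```python
# A hard-coded rule: every branch was decided in advance by the programmer.

def decide(a: bool, b: bool) -> str:
    if a or b:
        return "C"   # the rule fires
    return "D"       # the fallback the programmer chose

print(decide(True, False))   # C
print(decide(False, False))  # D
```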

    Alternatively in the case of more sophisticated systems (often erroneously called artificial intelligence) the system can store historical data and act in the new situation on the basis of the optimum of previous experience in similar circumstances (however 'optimum' is defined). Again the program to achieve this is written by a programmer and the rules for 'optimum' will be defined again by the programmer in the program.
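    One minimal way to sketch "act on the optimum of previous experience" is a nearest-case lookup: the system stores past cases and reuses the outcome of the most similar one, while the similarity measure - the definition of 'optimum' - is still chosen by the programmer. The cases below are invented:

```python
# Invented history of (situation, action-that-worked) cases.
history = [
    ({"temp": 90, "load": 0.8}, "throttle_down"),
    ({"temp": 40, "load": 0.2}, "run_normal"),
    ({"temp": 70, "load": 0.9}, "throttle_down"),
]

def distance(a, b):
    # Programmer-chosen similarity: sum of squared differences.
    return sum((a[k] - b[k]) ** 2 for k in a)

def act(situation):
    """Reuse the action from the closest past case."""
    _, action = min(history, key=lambda h: distance(situation, h[0]))
    return action

print(act({"temp": 85, "load": 0.7}))  # throttle_down: closest to the 90-degree case
```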

    Without some system of logic input by a programmer into the various programs, a computer system has no basis on which to make a decision (you could insert some artificial randomness but you could argue about whether that's a true 'basis' and even then the programmer would have needed to put in the program that a random function was to be performed).

    I repeat quite unashamedly that a computer cannot be creative in the same way a human being can. It cannot devise new programs for itself from nothing. It cannot produce true music or poetry. You can do a good pretence by writing a program for the system to output a form of music or poetry, but in essence it will be the product of the programmer and not the system.

    No computer can love or hate, because the hardware is totally physical and emotions are not. Whatever emotions, poetry, music, art, etc. are, they cannot be expressed in terms of atoms and energy - but that is all a computer has. We humans have to have something extra. You can argue about where it came from or what it is, but it is there and it is not physical.

    The extremely fervent British atheist, Richard Dawkins, has recognised this and has invented a non-physical 'cultural' version of the gene, which he has called a meme. You can check out his theories for yourself. I have to say that I find the hypothesis very unconvincing (I would, wouldn't I?!). That said, even Dawkins does not believe that man-made machines and systems have memes. So there is an intrinsic difference that will always remain between people and machines in regard to thinking, knowing and creating.

    As a side issue, rapid computations do not indicate a greater power of thought. You can train people to recite very large amounts of information or perform mental arithmetic very quickly but that does not make them greater thinkers than Aristotle, Plato, C.S. Lewis etc. who I doubt could perform such feats but could think abstractly about life in great depth.

    Kind regards to anyone reading this,

  8. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    If you wish you can define a "computer" this way but most people use a broader concept for that term. See my reply to Dan The Man, just on the other side of yours from this one.

    I looked into "neural networks" many years ago, when they were just beginning to show their power. (I heard a talk by Terry Sejnowski on his "NetTalk" computer, which demonstrated how it learned to read aloud from a text and a corresponding verbalization. After it trained itself*, it could read almost any** book in the same language, and never was there any programming. - The machine, like the brain, organized itself via trial and error on the original text and learned to generalize the production of audio from printed symbols.)

    NetTalk effectively looked ahead and behind several characters before deciding how to pronounce the one in the middle - how far it looked both ways may have been a decision Terry made (also by trial and error) and programmed in. There is some "programming" required to decide how long (how many inputs) the stretch of ASCII presented to the machine at any instant is. This sort of corresponds to the decision to make a 16- or 32-bit digital machine, but is not restricted to 2^n as in digital computers. I do not think you would call the 16-vs-32 choice "programming," so I do not count it when I state there was no programming in NetTalk. Like a human, NetTalk learned from having its prior errors corrected WITHOUT any programmer or program.
    *It was a very interesting demonstration lecture. NetTalk, when only partially able to read, made the same types of mistakes as a child does in his first year of learning to read, but with more practice (being corrected for its errors - no programming, just being told it did not say that text correctly) it read very well.
    **If in the "never before seen" book there was a word with an irregular pronunciation, or one it had rarely seen, it would make a "regularized" pronunciation of it, but still a very understandable one. Also, because there are some patterns even to the irregularities in languages, sometimes a word which fits one of the patterns of irregularity, but is not pronounced irregularly by most humans, would be pronounced as if the irregular form were the one humans use.
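    The look-ahead/look-behind scheme described above amounts to a sliding window over the character stream; NetTalk's published window was 7 characters. A minimal sketch of just the windowing (everything else about NetTalk is omitted):

```python
# Slide a fixed-width window over the text: the network sees a few
# characters of context on each side of the one it must pronounce.

def windows(text: str, half_width: int = 3):
    pad = " " * half_width
    padded = pad + text + pad
    for i in range(len(text)):
        # 2*half_width + 1 = 7 characters, centred on text[i]
        yield padded[i : i + 2 * half_width + 1]

for w in windows("read"):
    print(repr(w))   # '   read', '  read ', ' read  ', 'read   '
```

    Each window is one input pattern; the only "programmed" decision is the window width, just as the post says.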

    PS1 - Actually, NetTalk could not really read. It learned to make commands that drove a commercial voice synthesizer, and got an "error signal" from the difference between the correct commands and the ones it had just made when in "self training mode" on the sample text - if I recall correctly what Terry described.

    PS2 - NetTalk also had no eyes to actually read books. It was fed the ASCII character string codes, I think, that corresponded to the book.

    PS3 - Because there was no need for any programmer or programming, NetTalk could read in languages Terry could not read and did not understand.

    Anyway, your idea of what a "computer" is, is much too narrow and ARTIFICIALLY excludes the human brain.

    PS4 - I doubt that 10 programmers and 10 linguists all working together for a year with one of your "programmable" computers could do as well as NetTalk did by itself in about a month of learning and generalizing its experiences. It was significantly (more than a dozen times) faster than a human at learning from its experiences.
    Last edited by a moderator: Apr 26, 2006
  9. Tom2 Registered Senior Member

    I can't believe no one mentioned Searle's Chinese Room Argument. Whether one agrees with it or not, it is standard reading and your professor will undoubtedly think you didn't do your research if you don't at least mention it. Searle set this argument forth in 1980, and researchers in AI, epistemology, philosophy of mind, and cognitive science have been replying to it ever since.

    Long story short:

    * A man is in a room who interacts with the outside world only via a computer terminal.

    * The man is trained ('programmed') to consistently generate specific outputs for given inputs, all of which are in Chinese. This is designed to occur in such a way that the recipient thinks the input was understood, deliberated on, and replied to.

    * The man, in fact, does not understand a word of Chinese. He's just manipulating bits of information. Furthermore he cannot understand Chinese by merely manipulating bits of information.

    * All a computer can do is manipulate bits of information.

    * Therefore, a computer cannot achieve understanding.
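    The purely formal symbol manipulation the argument turns on fits in a few lines. The rule table below is invented; the point is that the program produces a fluent-looking reply with nothing that could be called understanding behind it:

```python
# A Chinese Room in miniature: a rulebook mapping input symbols to output
# symbols. Producing the "right" reply requires no grasp of what they mean.

rulebook = {
    "你好": "你好!",              # "hello" -> "hello!"
    "你是谁?": "我是一台机器。",  # "who are you?" -> "I am a machine."
}

def room(symbols: str) -> str:
    # Pure shape-matching lookup; the operator understands nothing.
    return rulebook.get(symbols, "请再说一遍。")  # default: "please repeat that"

print(room("你好"))  # 你好!
```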

    The Chinese Room Argument from the Internet Encyclopedia of Philosophy
    The Chinese Room Argument from the Stanford Encyclopedia of Philosophy

    There are other links in those pages.
  10. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    I am very familiar with Searle and his works. You and he did not mention Searle's "Nation of China" brain argument, or several others. I like Searle very much, as he is one of the few who challenge the prevailing view in cognitive science (see Patricia Churchland for a good statement of it). A few posts earlier I stated, in a very long post, why I think the brain is not simply transforming information that comes from the sensory transducers to produce our perception of the 3D world "out there." Searle is about as close as any to my position. When it comes to cognitive science, I am a crackpot. They are wrong! I offer three different proofs of this in my longer post below, and more in the published article it is extracted from.
    (It has been more than 10 years since I spent most of my waking time reading in this area. I finally solved the "free will" vs "physics and neurology" problem, at least to my own satisfaction, and ceased to study the area.)

    PS - Unfortunately, from my POV, Terry Sejnowski is firmly in Patricia's camp. In fact, after he left JHU (where I was also), I think he went out to Stanford or some place in CA to join forces with her. We need more Searles!!!! And, if it is not too immodest for me to say so, for someone to pay some attention to what this crackpot said / published 12 years ago. (The originally published article also explains AT THE NEURONAL LEVEL how objects are parsed from the continuous visual field and later identified. - It is much more about the mechanism of vision than "free will.")
    Last edited by a moderator: Apr 26, 2006
  11. Tom2 Registered Senior Member

    Sorry, I didn't mean to slight you. Truth be told, I only read the opening post. Then I did a "CTRL-F" search of both pages of this thread for the words "Chinese" and "Searle". Since the searches came up empty, I concluded that the argument hadn't been mentioned!


  12. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    I had not mentioned Searle, so no slight was given (or taken). I have considerable respect for you in physics (remember, you were not allowed to publish in the CRACKPOT journal, as you are "unqualified"). Thus:

    Rather than any apology, I would much rather you read my long post of 20 April in this thread - it should interest anyone who has ever wondered how free will can be real (it may not be), given what is certain about physics and neurology. What do you think of my crackpot position? (I know of one difficulty with it, but it is small, I think, compared to those of the standard cognitive science position.)
  13. Tom2 Registered Senior Member

    I'll print out your post and read it this weekend, when I can properly focus on it.
  14. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    Thanks - I look forward to your comment. I only listed the first few references.
  15. Cyperium I'm always me Valued Senior Member

    A machine cannot know anything; even if it "knew" something (written in data), it would have to reflect on it to really know. So how do you make a machine reflect?

    How could that be done? First you need the machine to realise it exists. Existence is the key.

    How can a machine realise? How can it know of its existence?

    Even if you took a complete structural image of the machine and let the machine somehow examine this image, it still wouldn't know that the image reflected the machine. It wouldn't know anything, since it still wouldn't know of its own existence - and even if it did, it still wouldn't exist, since that would also be just empty data.

    We will probably never get to the point that we understand the process causing awareness and consciousness.

    Even if we come to the point that we have a very plausible theory of it, it could never be proven, since we couldn't possibly know if the machine was aware or not.
  16. wesmorris Nerd Overlord - we(s):1 of N Valued Senior Member

    K, I'll go.

    No, it remembers nothing. It just does what it does, as it was designed to do. It certainly holds the charges placed upon it, but it doesn't "know" they are there. They just are.

    Again, this machine knows nothing. It just does what its program tells it to do. Via gyroscopes and whatever other sensors are put there by designers, the program running the machine does what it was designed to do. All in all, it's simply first principles being manipulated by humans to do what we want. The machine doesn't know its center of gravity. Its designers know how to make it react to its center of gravity.
  17. UNIVERSE TODAY Banned Banned

    Read about Daoism, an ancient Chinese religion which presupposes all matter to be living. Rocks and rivers. You name it. The spirit supposedly inhabits all things, like a ghostly second universe.

    What would a Rock think about?
  18. Nasor Valued Senior Member

    The question can't be answered unless you have a very specific definition of "know," which is something that I doubt you could get any two people to exactly agree upon.
    Does knowing imply conscious awareness? If so, then you need to define consciousness, which is even harder. Many microbes will swim to cooler areas if their water starts to get too warm. Do they "know" that they are getting hot and need to cool down?

    The question really should have included a definition of know, unless there's some sort of standard definition in your TOK class that I'm not aware of.
  20. Amy H Registered Member

    I am doing your TOK question as well. Plato had some good ideas on this topic, and believed that for a knowledge claim to be made:
    1) The knower must believe or realise that the claim is true.
    2) The claim must be true.
    3) The knower must be able to justify the claim.
    It is my opinion that machines cannot 'know' because they are unable to believe what they are doing. It is very possible that in the future there will be machines that are aware and able to make realisations based on their own thoughts; however, the technology today is not sufficient.
    Referencing movies such as Bicentennial Man and A.I., although they are fiction, is probably worth a mention.
    This site is about a company that is trying to create cognitive machines which can interact with people as we would between ourselves.
  21. przyk squishy Valued Senior Member

    The REAL question is: What's the point of arguing about whether computer data (and the processing thereof) should be called "knowledge" or not?
  22. gloo Registered Member

    Amy H, please do not post here if you do not know what you're talking about; it is quite obvious that these claims you are making are not your own. Also, if you wish to find a way to answer this TOK essay, please do not simply Google the answer - do some research.
  23. Fraggle Rocker Staff Member

    We're confusing sentience (knowing) with life (living). There are several attributes of life, depending on your paradigm of choice. They usually consist of things like feeding and reproduction, but awareness is not one of them. For starters, it would disqualify the entire plant kingdom.

    Amoebas live without knowing. What's to stop us from inventing a machine that knows without living? Or to stop a machine from inventing such a machine?

    Read my all-time favorite novel, Code of the Lifemaker by James P. Hogan, for a thought-provoking perspective on this. We discover a planet inhabited by highly evolved machines with not only self-awareness but civilization, and their tools and other artifacts are constructed of what we would call artificial organic tissue. Each side believes that the other is merely a remnant of an extinct civilization of its own kind, because clearly that form of "life" could not have arisen naturally, it had to be **ahem** created. Yet each side is at the same time presented with evidence that its own form of life clearly can be created.