Is it possible to make technology in the future to travel at warp speed with starships

Discussion in 'Intelligence & Machines' started by Gravage, Mar 13, 2003.

Thread Status:
Not open for further replies.
  1. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    That's not intelligence though. It is simply rules. They categorized objects based on certain characteristics. True AI would let me type in a description and then be able to form, mentally (or digitally of course), a picture of the object I am talking about. It could then reference that image, along with the specifics of the object, against past things it knows about and come up with possibilities.

    As for no human being able to do that, I disagree. Give me a printout of the database used to do that and I could do it. It would take me a WHOLE lot longer, but I could do it.

    Personally I think AI requires more research into the senses. The human mind runs off a huge amount of information in the form of the 5 (maybe more... who knows) senses. Right now we are trying to create most AI in a locked state of sense. They either have one or no senses. When humans are subjected to sensory deprivation they seem to go a tad insane, so maybe that wouldn't be a good place to start with a true AI.

    -AntonK
     
  2. zanket Human Valued Senior Member

    Messages:
    3,777
    OK. You have a higher standard for AI than I do. For me it’s just seeming intelligence, regardless of what’s under the hood. The twenty questions AI (20Q) passes that test for me.

    In a battle against 20Q, you wouldn’t be entitled to a printout of its database if it created the database itself. I think that’s how the 20Q works. If it loses a game it asks what you were thinking of, then adds that to its database for future reference, like a human would.
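    That learning loop can be sketched as the classic "animal game": a binary decision tree that splits a leaf whenever it guesses wrong. This is only an illustration of the idea described above; the real 20Q engine is more sophisticated, and the class and function names here are made up.

```python
# Minimal sketch of a 20Q-style learner: a binary decision tree that,
# after a wrong guess, asks for the real object and a distinguishing
# question, then grows a new node. Illustrative only; not the actual
# 20Q implementation.

class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text  # a question at internal nodes, a guess at leaves
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None

def play(node, oracle):
    """Walk the tree using oracle(question) -> bool; return the leaf reached."""
    while not node.is_leaf():
        node = node.yes if oracle(node.text) else node.no
    return node

def learn(leaf, actual, question, answer_for_actual):
    """After a wrong guess, split the leaf with a new distinguishing question."""
    old = Node(leaf.text)
    new = Node(actual)
    leaf.text = question
    if answer_for_actual:
        leaf.yes, leaf.no = new, old
    else:
        leaf.yes, leaf.no = old, new

root = Node("a cat")                     # starts knowing a single object
guess = play(root, lambda q: False)      # think of "a dog"; no questions exist yet
if guess.text != "a dog":                # wrong guess: ask and remember
    learn(guess, "a dog", "Does it meow?", answer_for_actual=False)

# Next game, the tree distinguishes the two objects on its own.
assert play(root, lambda q: q == "Does it meow?").text == "a cat"
assert play(root, lambda q: False).text == "a dog"
```

    Each lost game adds one question and one object, so the "database" is something the program builds itself, which is the point above: no one handed it a printout.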

    If you stick to your standard then no AI would be good enough, since presumably it will always be an algorithm a human could follow if given enough time. For example, if AI created popular artwork or music, I wouldn’t call it unintelligent just because I could duplicate a piece of its work if given ten years.
     
  3. AntonK Technomage Registered Senior Member

    Messages:
    1,083
    Well... there are already sufficiently complex neural networks that prevent anyone from tracing "why" the computer made the decision that it did. You can, for instance, actually trace the electrical impulses from one neuron to another and see the path made for the given input and output. But without knowing all the previous experience that AI has had, you can't possibly know why the neurons have the weights that they do. It's like our brain: you can say you know why you made a decision, but when it comes down to it, do you really know why? Or are you guessing? It seems to me you're really just tracing back the pathways and coming up with previous things that influenced the decision, but you can never know for sure.

    This is the reason we aren't using neural networks where I work. We design idiot AIs for military simulations (I say idiot because the military doesn't want them too smart... turns out the best soldiers aren't smart, they just follow orders), and we can't use neural networks because it is too difficult to trace the decision pathways of a neural network in an After Action Review.
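    The distinction above can be shown in a few lines: you can log every activation on the path from input to output, yet the log says nothing about why the weights hold the values they do. Plain Python, no frameworks; the two-layer net and its weight values are made up for illustration.

```python
# Tiny feedforward net: the forward pass records every "impulse"
# (pre-activation and activation), but the weights are just numbers
# left over from some unknown training history.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Weights after some hypothetical training run; nothing in these
# numbers explains *why* they ended up this way.
W1 = [[0.8, -0.4], [0.3, 0.9]]   # input -> hidden
W2 = [0.7, -1.1]                 # hidden -> output

def forward(inputs, trace):
    hidden = []
    for j, row in enumerate(W1):
        z = sum(w * x for w, x in zip(row, inputs))
        a = sigmoid(z)
        trace.append(("hidden", j, z, a))   # every impulse is observable
        hidden.append(a)
    z = sum(w * h for w, h in zip(W2, hidden))
    out = sigmoid(z)
    trace.append(("output", 0, z, out))
    return out

trace = []
y = forward([1.0, 0.0], trace)
# The full pathway is recorded, but the record can't answer "why these weights?"
for layer, idx, z, a in trace:
    print(layer, idx, round(z, 3), round(a, 3))
```

    This is the After Action Review problem in miniature: the trace tells you what fired, not what past experience made it fire that way.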

    -AntonK
     
  4. zanket Human Valued Senior Member

    Messages:
    3,777
    Interesting. Is it the same with genetic algorithms where you work? Since they have an initial random state, it seems you couldn’t trace them back either.
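    A minimal genetic-algorithm sketch makes the point concrete: the final individual depends on the random initial population and on every random mutation along the way, so two runs of the same algorithm have entirely different histories to trace. The fitness function here (count of 1-bits) is a toy assumption for illustration.

```python
# Toy genetic algorithm: evolve a bitstring toward all ones.
# Different seeds give different initial populations and mutation
# histories, so the "decision pathway" differs run to run even when
# both runs succeed.

import random

def evolve(seed, length=16, pop_size=20, generations=60):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)      # fitness = number of 1s
        survivors = pop[: pop_size // 2]     # keep the best half unchanged
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] ^= 1   # random point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=sum)

# Same algorithm, different seeds: different evolutionary histories.
best_a = evolve(seed=1)
best_b = evolve(seed=2)
print(sum(best_a), sum(best_b))  # fitness of each run; the paths differ
```

    Both runs tend toward high fitness, but reconstructing *why* a particular bit ended up set would mean replaying the whole random history, which matches the untraceability point above.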
     
  5. sargentlard Save the whales motherfucker Valued Senior Member

    Messages:
    6,697


    Please explain further... I am really interested in what you posted.
     