That's not intelligence though. It is simply rules. They categorized objects based on certain characteristics. True AI would allow me to type in a description, and be able to draw mentally (or digitally of course) a picture of the object I am talking about. It could then reference that image, along with the specifics about the object against past things it knows about and come up with possibilities.
As for no human being able to do that, I disagree. Give me a printout of the database used to do it and I could do it. It would take me a WHOLE lot longer, but I could do it.
Personally I think AI requires more research into the senses. The human mind runs off a huge amount of information in the form of the 5 (maybe more... who knows) senses. Right now we are trying to create most AI in a locked state of sense: they either have one sense or none. When humans are subjected to sensory deprivation they seem to go a tad insane, so maybe that wouldn't be a good place to start with a true AI.
OK. You have a higher standard for AI than I do. For me it’s just seeming intelligence, regardless of what’s under the hood. The twenty questions AI (20Q) passes that test for me.
In a battle against 20Q, you wouldn’t be entitled to a printout of its database if it created the database itself. I think that’s how the 20Q works. If it loses a game it asks what you were thinking of, then adds that to its database for future reference, like a human would.
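The lose-and-learn behavior described above can be sketched as a binary question tree that splices in a new question whenever it guesses wrong. This is my illustration of the general technique (the classic "animal guessing game"), not 20Q's actual code; all names and the starting objects are made up.

```python
# A 20Q-style guesser that grows its own "database" (a question tree)
# when it loses. Hypothetical sketch, not 20Q's real implementation.

class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text          # a question, or a guess if it's a leaf
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None

def play(root, answer_fn):
    """Walk the tree; answer_fn(question) -> bool.
    Returns (guess_leaf, parent_of_leaf, branch_taken)."""
    node, parent, branch = root, None, None
    while not node.is_leaf():
        parent, branch = node, answer_fn(node.text)
        node = node.yes if branch else node.no
    return node, parent, branch

def learn(leaf, parent, branch, new_object, question, q_answer_for_new):
    """After a wrong guess, splice in a question that distinguishes the
    new object from the old guess. Returns a new root if the tree had
    only one node, else None (tree modified in place)."""
    new_leaf = Node(new_object)
    old_leaf = Node(leaf.text)
    q = Node(question,
             yes=new_leaf if q_answer_for_new else old_leaf,
             no=old_leaf if q_answer_for_new else new_leaf)
    if parent is None:
        return q
    if branch:
        parent.yes = q
    else:
        parent.no = q
    return None
```

For example, a tree that only knows "a dog" loses against "a cat", asks for a distinguishing question ("Does it meow?"), and wins the rematch — exactly the add-it-to-the-database-for-next-time behavior described above.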
If you stick to your standard then no AI would be good enough, since presumably it will always be an algorithm a human could follow if given enough time. For example, if AI created popular artwork or music, I wouldn’t call it unintelligent just because I could duplicate a piece of its work if given ten years.
Well... there are already sufficiently complex neural networks that prevent someone from tracing "why" the computer made the decision it did. You can, for instance, actually trace the electrical impulses from one neuron to another and see the path taken for a given input and output. But without knowing all the previous experience that AI has had, you can't possibly know why the neurons have the weights they do. It's like our own brain: you can say you know why you made a decision, but when it comes down to it, do you really know why? Or are you guessing? Seems to me you're more trying to trace back the pathways and come up with previous things that influenced that decision, but you can never know for sure.

This is the reason we aren't using neural networks where I work. We design idiot AIs for military simulations (I say "idiot" because the military doesn't want them too smart... turns out the best soldiers aren't smart, they just follow orders), and we can't use neural networks because it is too difficult to trace the decision pathways of a neural network in an After Action Review.
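The distinction above (you can trace the path, but not why the weights are what they are) can be shown with a toy network. Every number below is invented for illustration: the forward pass prints exactly which signal flowed where, yet the weights are just leftover numbers that record nothing about which past training examples produced them.

```python
# Tiny hand-initialized feed-forward net: 2 inputs -> 2 hidden -> 1 output.
# The trace shows the decision pathway for one input, but the weights
# themselves carry no record of the training history that set them.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Weights as if left behind by some unknown training run (made up here)
W_hidden = [[0.9, -0.4], [0.3, 0.8]]   # W_hidden[j][i]: input i -> hidden j
W_out = [1.2, -0.7]                    # hidden j -> output

def forward(x, trace=False):
    hidden = []
    for j, row in enumerate(W_hidden):
        z = sum(w * xi for w, xi in zip(row, x))
        h = sigmoid(z)
        hidden.append(h)
        if trace:
            print(f"hidden[{j}]: weighted sum {z:+.3f} -> activation {h:.3f}")
    y = sigmoid(sum(w * h for w, h in zip(W_out, hidden)))
    if trace:
        print(f"output: {y:.3f}")
    return y
```

Running `forward([1.0, 0.0], trace=True)` is the After Action Review problem in miniature: the pathway is fully visible, but nothing in it answers "why 0.9 and not 0.4?".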
Interesting. Is it the same with genetic algorithms where you work? Since they have an initial random state, it seems you couldn’t trace them back either.
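The point about the random initial state can be made concrete with a minimal genetic algorithm (the standard OneMax toy problem: evolve a bit string toward all 1s). All parameters here are arbitrary choices for the sketch; the takeaway is that two runs can reach similar answers along completely different, untraceable lineages because the starting population is random.

```python
# Minimal genetic algorithm for OneMax (fitness = count of 1 bits).
# Illustrative sketch; population size, mutation rate, etc. are arbitrary.
import random

def evolve(bits=20, pop_size=30, generations=100, seed=None):
    rng = random.Random(seed)
    # Random initial population -- the untraceable starting state
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)     # rank by fitness
        parents = pop[: pop_size // 2]      # keep the fitter half (elitism)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, bits)
            child = a[:cut] + b[cut:]       # one-point crossover
            if rng.random() < 0.1:          # occasional bit-flip mutation
                child[rng.randrange(bits)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=sum)
```

With a fixed seed the run is reproducible, but given only the final genome there is no way to recover the crossovers and mutations that produced it — the same traceability problem, one step worse.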
Originally posted by zanket
Science believes this is impossible, but acknowledges that it is effectively possible. The catch is that when you get back to your base after such a voyage, everyone you knew there would have died long ago.
You may have read that it is possible in principle to travel to the Andromeda galaxy and back, some 4 million light years round trip, within a human lifetime. Say the trip takes 40 years by your watch, that’s 100,000 times the speed of light! Or so it would seem; yet nobody including yourself measures you exceed the speed of light. If you left in 2003, when you got back to Earth the year would be past 4,000,000.
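The arithmetic behind that claim can be checked directly for the simplified case of a constant cruising speed (ignoring acceleration phases), using the post's round numbers. In units where c = 1, the distance d, proper time tau, and Lorentz factor gamma are related by beta * gamma = d / tau:

```python
# Worked arithmetic for the Andromeda round trip described above,
# assuming constant speed and the post's round numbers (not precise
# astronomy): d = v*t and tau = t/gamma imply beta*gamma = d/tau.
import math

d = 4.0e6      # round-trip distance, light-years
tau = 40.0     # proper time aboard the ship, years

bg = d / tau                     # beta*gamma = 100,000
gamma = math.sqrt(1.0 + bg**2)   # Lorentz factor, ~100,000
beta = bg / gamma                # ship speed as a fraction of c, always < 1

t_earth = gamma * tau            # time elapsed on Earth, ~4,000,000 years

print(f"gamma      = {gamma:,.0f}")
print(f"beta       = {beta:.12f} c")
print(f"Earth time = {t_earth:,.0f} years")
```

So the ship covers 100,000 light-years per year of *ship* time while never being measured to exceed c: its speed is a hair under light speed, and the factor-of-100,000 time dilation is what makes Earth's calendar read past the year 4,000,000 on return.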
Please explain further... I am really interested in what you posted.