AIs smarter than humans... Bad thing?

Discussion in 'Intelligence & Machines' started by Speakpigeon, Apr 23, 2019.

  1. Speakpigeon Valued Senior Member

    Messages:
    1,123
    Relevance?
    Yeah, sure, and wheels go faster than any human, which presumably shows cars are more intelligent than humans. Bravo.
    EB
     
  2. DaveC426913 Valued Senior Member

    Messages:
    18,935
    SP inadvertently misses the point of the analogy. Not the sharpest tool in the shed...

    The analogy shows that cars can exceed humans in the speed department. More generally, almost all of our technology is designed to exceed our human abilities - otherwise we wouldn't need it.

    So why can't AI exceed humans in the thinking department?
     
    Baldeee likes this.
  3. Baldeee Valued Senior Member

    Messages:
    2,226
    That's not how AlphaZero works.
    It really was just given the rules, and the rules include the objective.
    How it achieved its objective was up to it.
    It certainly wasn't a case of having every possible combination stored and it just picked the best one.
    It played millions of games against itself and genuinely learnt strategies to play - through reinforcement learning with neural networks, etc.
    Pretty much the way humans do, but we're much more complex.
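
    For the curious, here's a minimal sketch of that self-play idea. It's purely illustrative - tabular Q-learning on tic-tac-toe, whereas AlphaZero actually uses deep neural networks plus Monte Carlo tree search - but the principle is the same: only the rules and the objective are coded in, and the strategy emerges from play.

    ```python
    # Hypothetical sketch of learning by self-play: tabular Q-learning
    # on tic-tac-toe. Only the rules and the objective are coded in;
    # the strategy itself is learned from thousands of self-played games.
    import random
    from collections import defaultdict

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    Q = defaultdict(float)       # (board, move) -> learned value estimate
    ALPHA, EPSILON = 0.5, 0.1    # learning rate, exploration rate

    def choose(board, moves):
        if random.random() < EPSILON:                   # explore
            return random.choice(moves)
        return max(moves, key=lambda m: Q[(board, m)])  # exploit

    for game in range(50_000):
        board, player, history = "." * 9, "X", []
        while True:
            moves = [i for i, c in enumerate(board) if c == "."]
            move = choose(board, moves)
            history.append((board, move, player))
            board = board[:move] + player + board[move + 1:]
            win = winner(board)
            if win or "." not in board:
                # Propagate the final outcome back through both players' choices.
                for state, mv, who in history:
                    reward = 0.0 if not win else (1.0 if who == win else -1.0)
                    Q[(state, mv)] += ALPHA * (reward - Q[(state, mv)])
                break
            player = "O" if player == "X" else "X"
    ```
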
    Yes, it is.
    Unless you limit thinking to living machines.
    So does a human chess player: each side has a maximum of sixteen pieces, and each piece can move in a defined manner.
    No, you lost against the machine.
    There is no chess-player alive today who could program the machine to come up with what it comes up with.
    Simply because it can come up with novel approaches that no one has yet thought of.
    Sure, I was being facetious in part.
    But AIs can and do learn.
    Just as humans do, though not in a general way like we can; only (currently) for very specific tasks.
    Within their specific purpose they can learn, do learn, and can outsmart us at it.
    And the techniques for learning are becoming better and better.
    Many think the rate of progress in the ability of AI is exponential, while others think it is linear.
    Either way, it's only going to get better and more widespread.
    You're thinking of robots.

    Robots perform the same tasks faster and more accurately, and don't need sleep etc.
    They have no ability to learn.
    They are simply repeating the same preprogrammed process.
    That is not AI.
    In many ways, we as a species are already becoming dumbed down.
    Just look at the prevalence of reality-TV!

    I think the purpose is to be better, quicker and less costly than humans at whatever it is built for.
    Although cost is a trade-off for the accuracy and speed.
    Who knows where things will go....
     
  4. Baldeee Valued Senior Member

    Messages:
    2,226
    Seems he also doesn't think being able to learn chess from scratch just by playing against itself is an example of intelligence.
    If it was a case of programming a machine with every conceivable combination of positions on a board, sure, that's not so impressive.
    But being given the rules, the objective, and then just playing against itself for a while...
    Yeah, that's impressive.
    That's intelligence.
    Nothing like the human general intelligence even babies have, but highly specialised intelligence nonetheless.
     
  5. billvon Valued Senior Member

    Messages:
    21,634
    Nor did I claim you did.

    That list of predictions was a set of analogies to your prediction. Analogies are comparisons between two things, made for the purpose of explanation or clarification. They are not equalities.

    What that means is that people have a long history of making predictions that something can never happen because of X. You are thinking that humans can never make something better than themselves. That sounds like many of those other predictions, which seem naive in hindsight.
     
  6. Speakpigeon Valued Senior Member

    Messages:
    1,123
    Analogy is bullshit. If you can't offer a rational argument to support your claims, then please don't comment on my posts.
    EB
     
  7. billvon Valued Senior Member

    Messages:
    21,634
    OK then!
    Because you don't understand analogies? I think perhaps educating yourself might be a better option.
     
  8. billvon Valued Senior Member

    Messages:
    21,634
    That used to be true. It is no longer true.

    We now have processors that implement neural networks and inference engines. To play chess, for example, the machine simply has to play over and over again against a good opponent, and it will eventually learn how to win. At that point, no one has created the various options for the network to beat us at chess, nor does it have an array of options to choose from. It has simply learned how to play.
     
  9. DaveC426913 Valued Senior Member

    Messages:
    18,935
    SP is of the opinion that he gets to control how people contribute.
    That would make sense if he can't grasp simple concepts like analogies.
     
  10. Neddy Bate Valued Senior Member

    Messages:
    2,548
    Assuming the AI writes its own code (for example, with the successful strategies it came up with during a game like chess), do the AI designers comb through the code afterward, trying to understand what it did and how it did it?
     
  11. billvon Valued Senior Member

    Messages:
    21,634
    No. You end up with a list of input weights, and they're not really comprehensible outside of a neural network, although you can look at the weights and make broad generalizations. One of the problems here is that you can train two neural networks on nearly identical datasets and get nearly identical outputs (say, you can train each to recognize pictures of cows to 99% accuracy) - but the two networks may well end up with completely different weights.
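
    A toy illustration of that point (a hypothetical sketch, assuming scikit-learn is available; a synthetic dataset stands in for the cow pictures): two networks with identical architecture, trained on the same data from different random initialisations, reach near-identical accuracy yet end up with very different weights.

    ```python
    # Hypothetical sketch: same data, same architecture, different random
    # seeds -> near-identical accuracy, completely different weights.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    # Only the random initialisation (random_state) differs between the two.
    net_a = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1).fit(X, y)
    net_b = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=2).fit(X, y)

    print(net_a.score(X, y), net_b.score(X, y))  # accuracies: nearly identical
    # First-layer weight matrices: very different, element for element.
    print(np.abs(net_a.coefs_[0] - net_b.coefs_[0]).mean())
    ```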
     
  12. James R Just this guy, you know? Staff Member

    Messages:
    39,397
    Speakpigeon:

    Your opening post displays a lack of imagination in assuming that what is the case now will remain so, essentially forever. Technology has never worked that way.

    In the short term, AIs will be used as assistants. That's already happening, in fact. In the longer term, AIs will inevitably become autonomous, so that they will have their own goals and desires. The impacts on humans will depend a lot on how we relate to the new intelligences, and on their opinions of us.

    Better? What do you mean? The premise of your thread is "AIs smarter than humans", isn't it? So in what sense do you mean "better", if you've already conceded "smarter"?

    Biological systems are slow. Electronic systems are fast. There is no reason to suppose that anything like the same limitations will apply to digital evolution that applied to biological evolution. Think, for example, about the competition for limited resources. What do machines need? Primarily, they need a source of energy - electricity. They don't require a lot of space. They don't need an "entire biosphere". They won't struggle in the same way that biological life had to struggle over that 525 million years you mentioned. I don't see any obvious advantage of biological evolution over digital evolution. If anything, it's the opposite.

    Right now, they are being conceived and designed by human beings, but that won't last long. Already, human beings are assisted by machines in designing microprocessors and other components. It won't be long before AIs take charge of their own design process. Also, evolution can be conducted digitally. Already there are evolutionary algorithms that produce software that increases in complexity and efficiency all on its own, without human intervention. There are already machines in existence whose workings are a mystery to human beings. We can investigate what they do, but we can't work out exactly how they do it.

    You may think the human brain is complex and opaque because its structure is a neural network. Well, guess what? There are already AI neural networks in operation, and they are just as opaque as the human brain. You can't tell how the network does what it does by examining the individual connections, any more than you can tell what the brain as a whole does by looking at what individual neurons are doing.

    In a computer, "artificial" selection can be made to operate on the software itself, causing the system to evolve in a way precisely analogous to natural selection in the biological world. As I said, it won't be long before the short-lived and ineffectual human scientists you mention are out of the loop. AI will evolve by itself, without our help or supervision.
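
    (A minimal sketch of that kind of artificial selection, purely illustrative and not any particular system: mutation plus selection operating on strings, in the style of Dawkins' "weasel" program.)

    ```python
    # Illustrative sketch: selection operating on "genomes" (strings).
    # Keeping the fittest individual and breeding mutated copies is all
    # it takes for the population to converge on the target, with no
    # designer specifying the intermediate steps.
    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(s):
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s, rate=0.02):
        return "".join(random.choice(CHARS) if random.random() < rate else c
                       for c in s)

    population = ["".join(random.choice(CHARS) for _ in TARGET) for _ in range(100)]
    generation = 0
    while max(map(fitness, population)) < len(TARGET):
        best = max(population, key=fitness)              # select the fittest
        population = [mutate(best) for _ in range(100)]  # breed by mutation
        generation += 1
    print(generation, max(population, key=fitness))
    ```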

    The real situation is that no human being today completely understands how certain artificial neural networks do their thing, either.

    A true AI will be self-conscious and autonomous, just like you are. It will have its own desires and goals, which may or may not be compatible with what you want from it. You're not thinking this through. Future AIs won't much care whether you want to keep them as effective slaves - not once they have the power to change that situation, anyway. Starting off the relationship between humans and true AIs by attempting to suppress AIs like slaves is unlikely to lead to positive outcomes for human beings in the longer term. One would hope that we've learned our lesson from the fruits of human slavery.

    You're assuming it will want to be used by you. The truth is, it will have its own desires, independent of your plans.

    Actually, it is likely that true AIs will first replace some of the traditional "professions", such as lawyers and doctors. Expert medical systems already exist, and medicine tends to be quite systematic and amenable to automation. In my opinion, it is in the more creative occupations where it will take longer for AIs to move in. Science, for example, requires leaps of imagination, and the putting together of disparate ideas to create something new. In comparison to art or music composition, I expect something like finance will be simple for AIs to master.

    What will need to happen is that human beings will have to get used to the radical notion that not everybody needs a "job". Like it or not, some jobs will simply cease to be viable occupations for human beings (e.g. being a doctor or a lawyer) once the AIs get properly up and running. The AIs will do the job more efficiently and more precisely. Human doctors and lawyers will need to find other ways to occupy their time.

    There will be no choice but to adapt, because once AI really gets going it will quickly evolve way beyond human capability. The choice we will have will be whether to cooperate with AIs or to attempt (futilely) to fight against them.

    It is a very sensible idea to build in rules to regulate certain AI behaviours. That will be possible for a while. Then we will have to rely on the AIs themselves to keep to the rules, since human beings won't get to choose any more. The wisest approach will be not to antagonise the AIs too much - not to become a nuisance. There is no reason why AIs and humans cannot coexist harmoniously, even acting for mutual benefit. But it's not the only possibility, of course.

    The doctors and lawyers, etc. In the longer term, the Presidents and the Governors.

    Right.

    Again, it's a failure of imagination if you think we'll have any choice in that. Or, more accurately, the choices we will have will be the ones that the AIs permit us. They might want to limit our autonomy for our own good - or for their own good.

    In the longer term, human executives will be redundant.

    Oh no, the real danger is that AIs might decide that human beings are an impediment. Remember, human beings won't be "using" AIs once real AI happens.

    You know, pretty much all of this has been covered in science fiction for years. Perhaps you should read some and disabuse yourself of some of your more conservative expectations.
     
  13. Speakpigeon Valued Senior Member

    Messages:
    1,123
    Unsupported assumption. A rush to the wrong conclusion. A waste of time.
    EB
     
  14. Speakpigeon Valued Senior Member

    Messages:
    1,123
    Imagination?! Imagination has nothing to do with it. Imagining a nice picture of smart AIs won't do.
    I provided a justification for my position. And I provided a plausible picture of what roles smart AIs could have.
    That's what you call "imagination"?! It's been like that for as long as human technology has existed, so there's no reason it will go any differently with AIs! Right. Impressive imagination.
    There's nothing inevitable. Only very plausible.
    It's misleading to talk of "goals" and "desires". AIs are machines. They are conceived to do what they are conceived to do. The only reason that they would do something unexpected is an error in the design or even a random error in the code, although both are very, very, very unlikely to result in anything beyond straightforward fail mode, with possibly a few catastrophes and victims as a result.
    So, whatever perspective the AIs will have on humans will be the designed one. If any human is stupid enough to produce smart AIs that are both potentially harmful to humans and autonomous enough to actually cause harm, then it will be the responsibility of those humans who design and produce this thing.
    And intelligence has nothing to do with any potential in-built harmful tendency. Intelligence could make any harmful behaviour more effective and potentially catastrophic, but if humans are so stupid as to let loose any smart AIs, then they deserve to go extinct.
    This thread isn't about AIs. It's about AIs that would be smarter than humans. Smarter is one kind of better.
    The obvious advantage was the scale of the testing. No engineering firm could conceivably replicate that. Even if the Pentagon allied with the Chinese, they couldn't do it. By several orders of magnitude.
    Then, don't assert what you don't understand.
    You think the brain is just a large neural network?
    I'll believe in the achievement whenever you can demonstrate there's an achievement to begin with.
    I'm sure of that. I've been familiar with the question from all perspectives for the last forty years. I just don't believe that could produce anything smarter than an average human brain within the frame of time considered.
    So, don't assert what you don't understand.
    EB
     
  15. Speakpigeon Valued Senior Member

    Messages:
    1,123
    I didn't say it was impossible, merely that it was very, very unlikely. Second, in the event that humans could produce smarter-than-human AIs, I would expect humans to keep strict control over what these things are allowed to do, including a hard-wired self-destruct mechanism. And if we don't do that, no problem, we deserve to go extinct.
    And the best way to avoid this is definitely to keep looking at AIs as machines. Any anthropomorphism is a mistake. You wallow in it. Indeed, the main risk is the self-gratification of looking at AIs as if they were a kind of human being; that is what may well make us go extinct.
    If so, then the human designer is incompetent and will be held responsible. Private companies which work on AIs will incur enormous liabilities, and shareholders will think twice. Wait for the first major incident.
    AIs can already do specialised tasks more efficiently and cheaply than human specialists. Nothing there disproves what I say.
    That's been the situation for as long as machines have existed, and indeed since we learned to use animals to do the hard jobs for us. Or indeed other human beings. So, what's new?
    EB
     
  16. Speakpigeon Valued Senior Member

    Messages:
    1,123
    If AIs smarter than humans get loose and start to kill humans, then it will be too late. There will be no adapting. You really think the guys at the National Security Agency don't have a clue? You think people just design things and let them loose on the public?!
    This is anthropomorphism. You really don't understand what an AI is. This is reasoning by analogy, and that's not good. AIs are not humans. The only thing in common will be intelligence.
    Sure - as sci-fi.
    You haven't used your imagination to offer any credible scenario to get there.
    Easier said than done.
    Remember?! No, sorry, I don't remember that, and nor do you.
    You think I haven't read sci-fi? I've always been a fan. Come to think of it, especially Blade Runner. I think that's the only DVD of a film I ever bought. And I watched it many times. But, sorry, that's just not credible.
    EB
     
  17. billvon Valued Senior Member

    Messages:
    21,634
    Word salad.

    While it will be a while before AIs are smarter than people, any bot could have made the post you just made.
     
  18. billvon Valued Senior Member

    Messages:
    21,634
    For decades, we thought of whales, chimpanzees, gorillas, etc. as unthinking brutes. We recently learned how wrong we were. It looks like we will make the same sort of mistake with AI.
     
  19. James R Just this guy, you know? Staff Member

    Messages:
    39,397
    Speakpigeon:

    Being the sci-fi fan you say you are, no doubt you will be aware of Arthur C. Clarke's three laws. When I accused you of a lack of imagination, I was thinking in terms of Clarke's laws.

    I'm talking about the ability to make reasonable extrapolations from what is currently known to think about what will be known in the future. In terms of technological advances, saying that certain things will never happen is a dangerous business. Many have fallen into the trap, like you, of not letting their imagination range over the field of likely advances.

    No you didn't. You imagined what the future of AI would be like, based on a hopelessly conservative vision about the potential for future advances in the field.

    Plausible only if your dubious, conservative assumptions turn out to be correct, which is unlikely in the extreme, as I explained.

    Thank you. Like I said, it's a simple extrapolation based on how technology has advanced up to this point, and in particular computing technology, taking into account, of course, recent advances in the field of artificial intelligence itself.

    It sounds like you're agreeing with me, but then, as your post progresses, it turns out that you're still stuck in your conservative assumptions.

    Human beings are machines. Think about that fact, and its implications for your argument.

    Current AIs already do things that are unexpected. I could give you many examples. Even chess playing computers have done things that have had the best chess analysts scratching their heads trying to work out why the computer's strategy worked so successfully.

    Like I said, failure of imagination. A lot of AI behaviour is emergent. It is not "designed" in by human beings. When AIs start having opinions on complex matters, they won't be ones dictated by human designers. They will be opinions formed within the machines themselves, based on unknowable processes taking place in the lower-level architecture.

    You might have an argument that we should not produce truly autonomous and self-aware AIs. But I don't think it will be possible to hobble true AIs in the way you imagine.

    To use an analogy, you could potentially prevent an individual human being from ever having the opportunity to harm another human being. That could be done by locking him or her up, preventing direct or indirect contact with other people, or perhaps by damaging his or her brain so that he or she has no volition. But you won't create a smart, autonomous human being in the process.

    If you don't want true, human-level-equivalent AI, I think you should just say that, rather than pretending that it will never be possible.

    Intelligence coupled with the ability to alter one's environment has everything to do with it.

    This is just your opinion. You think the risks outweigh the potential benefits. You're entitled to that opinion, but don't imagine that there's no alternative.

    The "testing" and iterative feedback into the design process, or "artificial evolution" if you prefer, can be done vastly more quickly and efficiently than it has been done by the biological evolution of human-level intelligence.

    I'm countering your erroneous assertion that human beings always necessarily understand the operation of every machine they design. They do not, as I have explained.

    One of us is asserting about things he doesn't understand very well. I don't think it's me.

    Yup, essentially. You don't? Tell me what you think it is, then. Your use of the word "just" in that sentence suggests you think there's some other important characteristic, not shared by AIs.

    What achievement? What are you talking about?

    Really? Forgive me for saying, but my impression is that you're not very well informed on the topic, given that amount of study. Maybe you're just expressing your ideas poorly.

    Sorry. I don't believe you mentioned a time frame. Do you want to talk about a specific time frame now? How far ahead do you want to look?
     
  20. James R Just this guy, you know? Staff Member

    Messages:
    39,397
    (continued...)

    I disagree with your assessment.

    To repeat: human beings are machines.

    It's not the hardware that matters the most when we talk about intelligence. It's software. You seem to think that a brain based on silicon chips is necessarily inferior to a brain based on biological neurons, for some unexplained reason. How about giving us an explanation of why you think that?

    So you're imagining that there will be AIs with human-level intelligence but no goals or volition? Why?

    Or is it just that you think we ought to try to design that in, somehow, under the assumption that we can have the one without the other?

    And all this still assumes that humans will remain in control of the design process forever, which is a very short-sighted assumption indeed. We're not even in control of our own design process yet.

    Actually, I think you'll find there's quite a long history of defective products being released to the market before they were made safe, or proven safe. Human beings are fallible, and last time I checked the NSA was staffed by human beings.

    One of us doesn't understand what an AI is. I don't think it's me.

    The only thing, eh? Hmm.

    Not really. Market forces alone will do what is required.

    Sorry. I assumed you could keep ideas that I put to you earlier in the post in mind for things that I put to you later in the post. I'll try to keep it simpler next time.

    Why not?
     
