AI beats top human player in Go

Discussion in 'Intelligence & Machines' started by Plazma Inferno!, Jan 28, 2016.

  1. Plazma Inferno! Ding Ding Ding Ding Administrator

    Another step forward for AI. A computing system developed by Google researchers in Great Britain has beaten a top human player at the game of Go.
    Machines have topped the best humans at most games held up as measures of human intellect, including chess, Scrabble, Othello, even Jeopardy!. But with Go—a 2,500-year-old game that’s exponentially more complex than chess—human grandmasters have maintained an edge over even the most agile computing systems.
    Edont Knoff likes this.
  3. Edont Knoff Registered Senior Member

    Cool stuff. Twenty years ago I was given Go as an example of a game at which computers were notoriously bad, worse even than human beginners.

    The point must be stressed: this is probably a bigger breakthrough than chess-playing computers, because Go is far more complicated to compute than chess.
  5. iceaura Valued Senior Member

    They're still cheating, in a sense - at least if the focus is "intelligence" - by giving the machine access to a comprehensive and searchable store of the standard joseki, endgame, and other pre-evaluated move combinations.

    Give Fan Hui access to a data bank like that during the game - one that is separate from the "thinking" part of the machine - and see what happens.

    But aside from the human ego that's a minor point. The accomplishment is wonderful, and deep. It's reassuring to see how they did it - by giving up control of what's going on. The machine is off on its own, and since it can't explain itself there is no real-time way to know what it's doing any more. I'd be curious to know if game decisions can even be reconstructed afterwards - is it possible to dissect the process and figure out exactly what happened?
  7. Billy T Use Sugar Cane Alcohol car Fuel Valued Senior Member

    That probably would make him a less skilled player. Too much data for a human to process.

    Go is all about pattern evaluation - the pattern changes with each new "stone" placed on the board. The new AI program is better at this evaluation than most humans, if not all. It got that way by "studying" 30 million patterns that occurred in well-played games. Then it advanced its skills by playing modified versions of itself; i.e., it learns, in a very limited domain, more than any human can.
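    The self-play idea can be illustrated on a toy scale. The sketch below is NOT AlphaGo's architecture (no neural network, no tree search, and all parameters are made up for illustration); it is a minimal tabular learner that improves its value estimates for one-heap Nim purely by playing against itself and observing who wins:

```python
import random

random.seed(0)

# Toy illustration of learning from self-play (not AlphaGo's method;
# the parameters below are arbitrary). Game: one heap of stones, players
# alternate taking 1-3 stones, and whoever takes the last stone wins.
def self_play_train(heap_size=10, episodes=30000, eps=0.1, lr=0.05):
    # value[n]: estimated chance that the player to move wins with n stones
    value = {n: 0.5 for n in range(1, heap_size + 1)}
    for _ in range(episodes):
        n, history = heap_size, []
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            if random.random() < eps:   # occasional exploration
                m = random.choice(moves)
            else:                       # leave the opponent the worst spot
                m = min(moves, key=lambda m: value.get(n - m, 0.0))
            history.append(n)
            n -= m
        # the player who took the last stone won; credit positions backwards
        result = 1.0
        for pos in reversed(history):
            value[pos] += lr * (result - value[pos])
            result = 1.0 - result       # winner and loser alternate
    return value

values = self_play_train()
# With perfect play, multiples of 4 are lost for the player to move.
print(sorted(n for n, v in values.items() if v < 0.35))
```

    With these settings the learned values push positions 4 and 8, the game-theoretic losses for the player to move, well below the rest, without anyone encoding that rule. AlphaGo does the analogous thing at vastly larger scale, with deep networks standing in for the lookup table.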

    There are no instructions on how to play well - no describable rules for the better placement of the next stone. In chess it is feasible to process all possible moves (and those possible after each of those moves), but in Go the possibilities in this potential chain of events outnumber the atoms in the universe - far beyond the computational power of any foreseeable computer to consider exhaustively. That exhaustive approach was possible in chess, and it is how a computer (Deep Blue) beat the world champion.
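    The chess-versus-Go gap can be sanity-checked with a back-of-the-envelope calculation. The branching factors and game lengths below are rough, commonly quoted estimates, not exact figures:

```python
import math

# Rough, commonly quoted estimates (assumptions, not exact figures):
#   chess: ~35 legal moves per position, typical game ~80 plies
#   Go:    ~250 legal moves per position, typical game ~150 plies
def log10_tree_size(branching: int, plies: int) -> float:
    """log10 of branching**plies, a crude game-tree size estimate."""
    return plies * math.log10(branching)

chess = log10_tree_size(35, 80)    # on the order of 10^123
go = log10_tree_size(250, 150)     # on the order of 10^359
atoms = 80                         # observable universe: ~10^80 atoms

print(f"chess ~ 10^{chess:.0f}, Go ~ 10^{go:.0f}, atoms ~ 10^{atoms}")
```

    Even these crude numbers show why exhaustive search worked for chess but not for Go: the Go tree is hundreds of orders of magnitude larger than the chess tree, which itself already dwarfs the atom count.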

    Go is without doubt the most elegant game. Only two or three rules, identical for all pieces (stones); once played, they don't move except when captured and removed from the board. Anyone can learn to play it in less than five minutes, but a lifetime of daily play is required to play at the highest level.

    One nice thing about Go is that unequal players can have an equal chance of winning: the weaker one is given a few initial stones on the board to start the game. That does not change the nature of the game significantly, as it does in chess, where the better player has one piece removed at the start - that is no longer chess, but a chess-like game.

    As a graduate student I played with a six-stone initial advantage against a much better player during our "brown bag" lunches. If I got 7 stones I had a non-zero chance to win; with only 5 stones at the start I was sure to lose, as I normally did even with six. As I would usually concede after a couple of dozen stones had been played, we could get more than one game in during the lunch hour when I had 6 stones at the start. He was hoping, in a year or so, to become officially ranked, at the bottom of a very long ranking ladder.
  8. iceaura Valued Senior Member

    He wouldn't be fool enough to inundate himself - just let him search the standard library as he wants to, as the machine can.

    Then see what happens.

    I don't want to denigrate the accomplishment here, just put it in perspective with regard to "thinking".

    And I am honestly curious about whether or not the machine can explain itself - tell us what it is doing.
  9. Spellbound Banned Valued Senior Member

    You are a paragon of the human species.
  10. Plazma Inferno! Ding Ding Ding Ding Administrator

    Like beating Fan Hui wasn't enough. AlphaGo, a program developed by Google's DeepMind unit, has defeated another legendary Go player, Lee Se-dol, in the first of five historic matches being held in Seoul, South Korea. Lee resigned after about three and a half hours, with 28 minutes and 28 seconds remaining on his clock, saying he was very surprised and didn't expect to lose. The series is the first time a professional 9-dan Go player has taken on a computer, and Lee is competing for a $1 million prize.
  11. Plazma Inferno! Ding Ding Ding Ding Administrator

    What could Google's DeepMind do next, after beating two top Go players? A paper from two UCL researchers suggests one future project: playing poker. And unlike Go, victory in that field could probably fund itself - at least until humans stopped playing against the robot.
    In their paper, titled "Deep Reinforcement Learning from Self-Play in Imperfect-Information Games", the authors detail their attempts to teach a computer how to play two types of poker: Leduc, an ultra-simplified version of poker using a deck of just six cards; and Texas Hold'em, the most popular variant of the game in the world.
  12. Beer w/Straw Transcendental Ignorance! Valued Senior Member

    I dunno about "In a Huge Breakthrough, Google's AI Beats a Top Player at the Game of Go". I'm not familiar with the game, nor am I a programmer. I did challenge my computer (Mathematica) once to calculating the first million digits of Pi... I forfeited from the beginning.
  13. iceaura Valued Senior Member

    The better the computers get at making "intuitive" judgments, the less their programmers know about how these judgments are being made.

    It will be interesting to see if the programmers can figure out how AlphaGo made its decisions in the various games against Lee Sedol. Apparently the ability to explain itself is not a feature of its software.
  14. mackmack Registered Senior Member

    Plazma Inferno, what people don't understand about AlphaGo is that it's only a tape recorder and it really doesn't have "intelligence". It can't add, subtract, do calculus problems, or generate common-sense knowledge. It was able to play simple Atari games like Donkey Kong, Centipede, or Asteroids, but unable to play complex games like Call of Duty or Zelda. If they want AlphaGo, which is called deep Q network, to play Call of Duty, it needs to add and subtract and generate common-sense knowledge like a sixth grader. For example, if Google's AI program is playing Call of Duty, it has to know how to add. If a player was surrounded by 5 enemy soldiers and the player (AI program) has 2 bullets, it has to do addition and subtraction: 5 - 2 is 3. That means the robot has to find 3 more bullets in order to take out the 5 enemy soldiers. If the player (AI program) is on a rooftop, he has to know about gravity. He must know that jumping off a roof will lead to death, and death is a bad thing.

    AlphaGo has a ton of problems it must solve in order to play Call of Duty. For example, the fact that they are using a deep neural net is a stupid idea, because it's very hard to change data in a neural network. If you change one set of data here, then you have to change lots of other data (the algorithm is integrated and dependent). Furthermore, the input and output layers are very hard to scale. This is the reason their software can't play games with more than 60x60 pixels, like Nintendo or Xbox. The processing demand rises exponentially as the input or output data increases.

    These are just some of the reasons I don't like using a deep neural network to build human-like robots. A deep neural network is good for expert systems like IBM Watson. And I have looked at Google's patent on the deep Q network, and they stated the predictive model is where they will add "stuff" like object identification, common-sense generation, and decision making. That's not a very good idea.
  15. rpenner Fully Wired Staff Member

    You are off-topic.

    AlphaGo is distinct from the Q-learning software DQN 3.0, which is Google DeepMind's video-game self-teaching program documented here:
  16. mackmack Registered Senior Member

    So what are you trying to say? They come from the same fundamental data structure: an agent program using a deep neural network.

    When they were explaining AlphaGo, they were also explaining the deep Q network.

    You need to explain yourself.
