Anyone who watched Darren Aronofsky's surreal film Pi from start to finish
probably walked away with two lasting recollections: a man using a power drill to
relieve his cluster headaches, and a romantic description of
the ancient Chinese game Go. Unlike many other games of strategy, the Go
board represents--as described by the character Sol Robeson--"an extremely
complex and chaotic universe...and that is the truth of our world [...] it
can't be easily summed up with math...there is no simple pattern."
Thanks to these attributes, Go has always been a major
stumbling block for artificial intelligence. Late last year, however, Google
DeepMind's AlphaGo program made a major breakthrough by defeating a highly
ranked professional Go player without a handicap for the first time. More
recently, in a five-game match held from March 9-15, 2016, the program won four
games against Lee Se-dol, one of the best players in the world. AI enthusiasts
consider these matches major victories for the discipline, as many had assumed
that a competent Go program was still at least five years away.
Go's enduring international appeal stems from the fact that, despite having
only two essential rules, it allows a vast number of possible moves and unique
games. The two players, one black and one white, take turns
placing their stones on the game board's intersections, typically forming
chains. A chain which is completely surrounded by opposing stones is captured
and removed from the board. A game may be scored by either the number of empty
spaces a player's stones surround or the number of stones plus surrounded
intersections. The only other rule, known as ko, is that a player may not make
a move that returns the board to its position on the previous turn.
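The capture rule lends itself to a short flood-fill check. The sketch below is my own illustration rather than code from any Go library (the board encoding and function names are assumptions): a chain's liberties are the empty points adjacent to it, and a chain whose liberties drop to zero is captured.

```python
# Minimal sketch of Go's capture rule, under an assumed board encoding:
# '.' = empty, 'B' = black, 'W' = white, on an N x N grid of lists.

def chain_and_liberties(board, row, col):
    """Flood-fill from (row, col) to collect the connected chain of
    same-colored stones and the set of that chain's liberties."""
    color = board[row][col]
    n = len(board)
    chain, liberties = set(), set()
    stack = [(row, col)]
    while stack:
        r, c = stack.pop()
        if (r, c) in chain:
            continue
        chain.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < n and 0 <= nc < n:
                if board[nr][nc] == '.':
                    liberties.add((nr, nc))   # adjacent empty point
                elif board[nr][nc] == color:
                    stack.append((nr, nc))    # same-colored neighbor
    return chain, liberties

# A 5x5 position where the white stone at (1, 1) is fully surrounded:
board = [list(r) for r in [".B...",
                           "BWB..",
                           ".B...",
                           ".....",
                           "....."]]
chain, libs = chain_and_liberties(board, 1, 1)
print(len(chain), len(libs))  # prints "1 0": one stone, no liberties, so captured
```

In a full engine the same flood fill also enforces ko and suicide checks; here it only demonstrates the capture condition.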
While the image on this page shows a 9x9 beginner board, professional games like those won by AlphaGo are played on
a 19x19 board. AI expert Victor Allis estimates that a typical 19x19 expert
game lasts 150 moves or so, with about 250 choices per move. These figures
result in a game-tree complexity--a common measure of game complexity used in
combinatorial game theory--of 10^360. Compare this with tic-tac-toe's 26,830 or
chess's 10^120, and you start to see why Go is so difficult for the
automated mind. (For those with enough time, patience, and interest, DeepMind's
YouTube channel features all five March 2016 matches move-by-move.)
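Allis's figures make the 10^360 estimate easy to verify: a game tree with branching factor b and depth d has roughly b^d leaves, so its order of magnitude is d * log10(b). A quick sanity check (the helper name is mine):

```python
import math

# Order of magnitude of b^d: for branching factor b and game length d,
# log10(b^d) = d * log10(b).
def log10_game_tree(branching, depth):
    return depth * math.log10(branching)

print(round(log10_game_tree(250, 150)))  # 360, i.e. roughly 10^360 for Go
```

The same formula with chess's commonly cited figures (about 35 moves per position over about 80 plies) lands near the 10^120 quoted above.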
AlphaGo cracked the Go problem not by building a program
adept at playing Go, like IBM's chess-playing Deep Blue, but by using a
combination of more general machine learning and tree search algorithms to
create a program adept at learning any game it practices and experiences. This
step toward artificial general intelligence has met with differing reactions
within the AI community. Many commentators suggest that now is a good time to
discuss the social and cultural impact of such general intelligence; a year ago Stephen
Hawking went so far as to suggest the possibility of a smart computer
takeover. In response to the AlphaGo victories, Murray Campbell, an IBM
scientist who worked on Deep Blue, more
or less proclaimed the victorious end of AI board game experiments.
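For readers curious about the tree-search half of that combination, here is a deliberately tiny sketch of Monte Carlo tree search with UCT selection, applied to a toy Nim-like game. This is not AlphaGo's implementation: AlphaGo guides this kind of search with learned policy and value networks, whereas the sketch below uses plain random rollouts, and every name in it is my own.

```python
import math
import random

# Toy game: 21 counters, each player removes 1-3, taking the last one wins.

def moves(state):               # state = counters remaining
    return [m for m in (1, 2, 3) if m <= state]

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}      # move -> Node
        self.visits = self.wins = 0   # wins from the mover-into-node's view

def uct_child(node, c=1.4):
    """Pick the child maximizing win rate plus an exploration bonus."""
    return max(node.children.items(),
               key=lambda kv: kv[1].wins / kv[1].visits +
                              c * math.sqrt(math.log(node.visits) / kv[1].visits))

def rollout(state):
    """Random playout; True if the player to move from `state` wins."""
    to_move = True
    while True:
        state -= random.choice(moves(state))
        if state == 0:
            return to_move
        to_move = not to_move

def mcts(root_state, iters=5000):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while node.state > 0 and len(node.children) == len(moves(node.state)):
            _, node = uct_child(node)
        # 2. Expansion: add one untried move.
        if node.state > 0:
            m = random.choice([m for m in moves(node.state)
                               if m not in node.children])
            node.children[m] = Node(node.state - m, node)
            node = node.children[m]
        # 3. Simulation: did the player who just moved into `node` win?
        won = node.state == 0 or not rollout(node.state)
        # 4. Backpropagation, flipping the winner's perspective each ply.
        while node:
            node.visits += 1
            node.wins += won
            won = not won
            node = node.parent
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print(mcts(21))  # with enough iterations this tends toward the optimal move, 1
```

AlphaGo's innovation, roughly speaking, was to replace the random rollout and uniform expansion above with neural networks trained on expert games and self-play, which is what makes the search tractable at Go's branching factor.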
He might be right, but in my opinion there's no currently
feasible leap from games to "real-life" general AI solutions, and computers
obviously have a long way to go before they understand human behavior beyond
mere emulation, if they ever do. Carnegie Mellon's Claudico,
a Texas hold 'em program that uses non-game-specific algorithms similar to
AlphaGo's, lost a 2015 poker event against four top human players by over
700,000 chips. That program struggled with risky bets and bluffing, two
behaviors difficult for a ruthlessly rational program to comprehend.
Much like Deep Blue in 1997 and Watson in 2011, AlphaGo's
victory is one more AI milestone on the road to who knows where. But to quote
Lee Se-dol after his historic defeat: "robots will never understand the beauty
of the game the same way that we humans do."
Image credit: Jarrod Trainque / CC BY 2.0