Dear Commons Community,
Over this past weekend, artificial intelligence took a big step forward when a computer program, AlphaGo, beat the world champion Go player, Lee Sedol. As reported in The Chronicle of Higher Education:
“The Google-owned computer program that beat Mr. Sedol, AlphaGo, is a cutting-edge example of programs that mirror humanlike brain structures, and its success against Mr. Sedol in Seoul is now indisputable. The computer won the first three games in the match before Mr. Sedol finally captured one, on Sunday. The fifth and final game will be played on Tuesday.
The game of Go, in which Mr. Sedol is an 18-time world champion, dates back more than 2,500 years to China, where it was considered an essential spiritual art. It involves alternating placements of black and white stones on a square grid, aimed at capturing territory. The full-sized version of the game has 19 lines on each side, meaning hourslong contests with hundreds of possible moves each turn.
Even for a computer, that’s too much complexity for a “brute force” calculation of the outcomes of all possible moves over several player turns. Human experts such as Mr. Sedol typically describe their play as involving large amounts of intuition.
AlphaGo works largely by combining versions of two established computer processing techniques. One is known as an artificial neural network, which is a vast array of data-processing points that make individual contributions toward the goal of identifying a complex overall pattern. The other is known as Monte Carlo tree search, which involves logical chains of how one action leads to another, driven by feedback on outcomes and the quick elimination of nonviable pathways.
Like past instances of computers that beat top human competitors — such as Deep Blue against the chess grandmaster Garry Kasparov in 1997, and Watson against the Jeopardy! champ Ken Jennings in 2011 — AlphaGo was specifically trained for its game.
But experts credit AlphaGo with much higher expectations for wider applicability, given the need in Go to make decisions from an array of choices far more numerous than it can actually calculate. “This is much closer to the way animals do it, including us,” said the neuroscientist Christof Koch, a former professor at the California Institute of Technology. “It’s a big deal.”
Others are somewhat less certain. The software that beat Mr. Sedol is impressive, said Miles Brundage, a doctoral student at Arizona State University who has been studying AlphaGo. But Google also “threw a lot of hardware at it,” Mr. Brundage said.
That suspicion is shared by Mark O. Riedl, an associate professor of interactive computing at the Georgia Institute of Technology. The AlphaGo victory may reflect improvements in computer processing speeds as much as software innovation, Mr. Riedl said, and once-rapid advances in the operating speed of computer chips have slowed over the past decade.
And while a Go board presents a forbidding number of choices, it is still a constrained world, not a full replication of the number of options faced in many real-world environments, said Bart Selman, a professor of computer science at Cornell University.
But even with all of those uncertainties about the importance of AlphaGo, the current rapid progress in artificial intelligence “is absolutely genuine,” said Edward M. Geist, a research fellow at the Center for International Security and Cooperation at Stanford University.”
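To make the second technique mentioned in the excerpt concrete, here is a deliberately tiny sketch of Monte Carlo tree search (the common UCT variant) applied to tic-tac-toe, a game vastly simpler than Go. The board encoding and every function name below are my own illustrative scaffolding, not anything from AlphaGo itself, and AlphaGo's real search is additionally guided by its neural networks, which this toy omits entirely.

```python
import math
import random

# Toy Monte Carlo tree search (UCT) on tic-tac-toe.
# Boards are 9-character strings: "X", "O", or "." per cell.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None  # no winner yet, or a draw

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

class Node:
    def __init__(self, board, player, parent=None, move=None):
        self.board, self.player = board, player  # player = side to move
        self.parent, self.move = parent, move
        self.children = []
        self.untried = legal_moves(board)
        self.wins, self.visits = 0.0, 0

    def best_child(self):
        # UCB1 score: balance win rate (exploitation) against
        # under-visited branches (exploration).
        return max(self.children,
                   key=lambda c: c.wins / c.visits
                   + math.sqrt(2 * math.log(self.visits) / c.visits))

def rollout(board, player):
    # Play uniformly random moves to the end of the game; over many
    # playouts this supplies the "feedback on outcomes" that steers
    # the search away from nonviable pathways.
    while winner(board) is None and legal_moves(board):
        m = random.choice(legal_moves(board))
        board = board[:m] + player + board[m + 1:]
        player = "O" if player == "X" else "X"
    return winner(board)

def mcts(board, player, iterations=3000):
    root = Node(board, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes.
        while not node.untried and node.children:
            node = node.best_child()
        # 2. Expansion: add one untried move as a new child.
        if node.untried and winner(node.board) is None:
            m = node.untried.pop()
            next_board = node.board[:m] + node.player + node.board[m + 1:]
            child = Node(next_board, "O" if node.player == "X" else "X",
                         parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new position.
        result = rollout(node.board, node.player)
        # 4. Backpropagation: credit each node the winner moved into.
        while node is not None:
            node.visits += 1
            mover = "O" if node.player == "X" else "X"
            if result == mover:
                node.wins += 1.0
            elif result is None:  # draw
                node.wins += 0.5
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

random.seed(1)
# X to move; completing the top row at index 2 wins immediately.
print(mcts("XX.OO....", "X"))
```

Where this toy uses random playouts to evaluate positions, AlphaGo substitutes neural networks trained on expert games to propose moves and judge board positions, which is what lets it cope with Go's far larger search space.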
The potential of artificial intelligence has been promoted for several decades now. However, digital technology is only now reaching speeds at which breakthroughs such as AlphaGo become possible. The next generation of digital design, based on quantum computing, will dwarf the speeds and capacities of current technology and will usher in a plethora of artificial intelligence applications, probably within the next 20 years or so.
More on AlphaGo!
Dear Commons Community,
As a follow-up to this posting, Andrew McAfee, a principal research scientist at M.I.T., and Erik Brynjolfsson, a professor of management, the co-founders of the M.I.T. Initiative on the Digital Economy, had an op-ed piece in today’s New York Times (see: http://www.nytimes.com/2016/03/16/opinion/where-computers-defeat-humans-and-where-they-cant.html) commenting on the significance of Google’s AlphaGo program. Here is an excerpt:
“The AlphaGo victories vividly illustrate the power of a new approach in which instead of trying to program smart strategies into a computer, we instead build systems that can learn winning strategies almost entirely on their own, by seeing examples of successes and failures.
Since these systems don’t rely on human knowledge about the task at hand, they’re not limited by the fact that we know more than we can tell.
AlphaGo does use simulations and traditional search algorithms to help it decide on some moves, but its real breakthrough is its ability to overcome Polanyi’s Paradox. It did this by figuring out winning strategies for itself, both by example and from experience. The examples came from huge libraries of Go matches between top players amassed over the game’s 2,500-year history. To understand the strategies that led to victory in these games, the system made use of an approach known as deep learning, which has demonstrated remarkable abilities to tease out patterns and understand what’s important in large pools of information.”
In a sense, AlphaGo is not simply programmed to play Go but is learning how to play Go.
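That distinction — learning a rule from examples rather than being handed the rule — can be illustrated with a deliberately tiny sketch: a single artificial "neuron" that is never told why its training examples are labeled good or bad, yet recovers the hidden rule from the labels alone. The 3x3 pattern task, the labels, and every name below are invented for illustration; AlphaGo's deep networks are vastly larger and are trained on real expert games, not toy patterns.

```python
import math
import random

# A single logistic "neuron" learns a hidden rule purely from
# labelled examples: a random 3x3 pattern is "good" (label 1)
# exactly when its centre cell is filled. The learner is never
# told this rule; it must infer it from successes and failures.

random.seed(0)

def make_example():
    cells = [random.choice([0, 1]) for _ in range(9)]
    return cells, cells[4]  # hidden rule: the centre cell decides

weights = [0.0] * 9
bias = 0.0
learning_rate = 0.5

def predict(cells):
    s = bias + sum(w * c for w, c in zip(weights, cells))
    return 1.0 / (1.0 + math.exp(-s))  # probability pattern is "good"

# Plain stochastic gradient descent over labelled examples.
for _ in range(5000):
    cells, label = make_example()
    error = predict(cells) - label
    bias -= learning_rate * error
    for i in range(9):
        weights[i] -= learning_rate * error * cells[i]

# After training, the weight on the centre cell dominates: the
# neuron has discovered the rule rather than being given it.
print([round(w, 2) for w in weights])
```

Nothing in the training loop mentions the centre cell; the rule emerges from feedback on outcomes alone, which is the essence of the "learning winning strategies by seeing examples of successes and failures" that McAfee and Brynjolfsson describe, scaled down to a single neuron.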