Go Figure! Game Victory Seen as Artificial Intelligence Milestone


FILE - For the first time, a computer defeated a professional human player at Go, a complex game using black and white pieces on a square grid.

You can chalk it up as another victory for the machines.

In what they called a milestone achievement for artificial intelligence, scientists said Wednesday they have created a computer program that beat a professional human player at the complex board game called Go, which originated in ancient China.

The feat recalled IBM supercomputer Deep Blue's 1997 match victory over chess world champion Garry Kasparov. But Go, a strategy board game most popular in places like China, South Korea and Japan, is vastly more complicated than chess.

"Go is considered to be the pinnacle of game AI research," said artificial intelligence researcher Demis Hassabis of Google DeepMind, the British company that developed the AlphaGo program. "It's been the grand challenge, or holy grail if you like, of AI since Deep Blue beat Kasparov at chess."

DeepMind was acquired in 2014 by Google.

FILE - World chess champion Garry Kasparov rests his head in his hands as he is seen on a monitor during game six of the chess match against IBM supercomputer Deep Blue, May 11, 1997.

AlphaGo swept a five-game match against three-time European Go champion and Chinese professional Fan Hui. Until now, the best computer Go programs had played only at the level of human amateurs.

In Go, also called Igo, Weiqi and Baduk, two players place black and white pieces on a square grid, aiming to take more territory than their adversary.

"It's a very beautiful game with extremely simple rules that lead to profound complexity. In fact, Go is probably the most complex game ever devised by humans," said Hassabis, a former child chess prodigy.

Future applications

Scientists have made strides in artificial intelligence in recent years, enabling computers to think and learn more as people do.

Hassabis acknowledged some people might worry about the increasing capabilities of artificial intelligence after the Go accomplishment, but added, "We're still talking about a game here."

FILE - Students play the board game Go during a competition to mark the 100-day countdown to the opening of the Beijing Olympics at a primary school in Suzhou, Jiangsu province, April 30, 2008.

While AlphaGo learns in a more human-like way, it still needs far more practice than a human expert does to become good at Go, playing millions of games rather than thousands, Hassabis said.

The scientists foresee future applications for such AI programs, including improving smartphone assistants such as Apple's Siri, aiding medical diagnostics and eventually collaborating with human scientists in research.

Hassabis said South Korea's Lee Sedol, the world's top Go player, has agreed to play AlphaGo in a five-game match in Seoul in March.

Lee said in a statement, "I heard Google DeepMind's AI is surprisingly strong and getting stronger, but I am confident that I can win, at least this time."

The findings were published in the journal Nature.

Reuters