In October 2015, the computer algorithm AlphaGo, created by the Google-owned company DeepMind, beat European champion Fan Hui by 5 games to 0. It was the first time a computer program had defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away. In March 2016, AlphaGo made history again, beating world champion Lee Sedol by 4 games to 1. A reported 100 million people around the world watched the 6-day tournament in Seoul, a historic contest between human and artificial intelligence.
For many years, computer scientists believed that designing a program capable of beating top human players at Go was out of reach.
IBM’s Deep Blue, the program that beat world chess champion Garry Kasparov back in 1997, was programmed by chess experts with a library of potential chess moves that it could draw on during the match.
The game of Go originated in China more than 2,500 years ago. It is a board game in which two players compete to control the most territory on the board. A handful of classic board games have long served as benchmarks for progress in the field of artificial intelligence, and Go has been viewed as the most challenging of them, owing to its complexity and the intuitive nature of the game.
“AlphaGo uses a brain-inspired architecture known as a neural network, in which connections between layers of simulated neurons strengthen on the basis of experience. It learned by first studying 30 million Go positions from human games and then improving by playing itself over and over again, a technique known as reinforcement learning from games of self-play. The more AlphaGo plays, the better it gets,” explains a social scientist. (Floro Mercene)
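To make the self-play idea concrete, here is a minimal, illustrative Python sketch: a program learns tic-tac-toe purely by playing games against itself and nudging a table of position values toward each game's final outcome. This is not DeepMind's code; AlphaGo replaces the lookup table with deep policy and value networks and adds Monte Carlo tree search, but the feedback loop sketched here (play yourself, then reinforce positions that led to wins) is the same in spirit. All names below are invented for this example.

```python
import random
from collections import defaultdict

# The eight winning lines of a 3x3 tic-tac-toe board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)  # estimated value of each position, from X's view
ALPHA, EPSILON = 0.2, 0.1    # learning rate and exploration probability

def choose(board, player):
    """Pick a move: usually the best-valued one, sometimes a random one."""
    moves = [i for i, s in enumerate(board) if s == "."]
    if random.random() < EPSILON:
        return random.choice(moves)  # explore, so play stays varied
    sign = 1 if player == "X" else -1  # O prefers positions bad for X
    return max(moves, key=lambda m: sign *
               values["".join(board[:m] + [player] + board[m + 1:])])

def self_play_game():
    """Play one game of the program against itself, then learn from it."""
    board, player, history = ["."] * 9, "X", []
    while winner(board) is None and "." in board:
        board[choose(board, player)] = player
        history.append("".join(board))
        player = "O" if player == "X" else "X"
    w = winner(board)
    reward = 1.0 if w == "X" else (-1.0 if w == "O" else 0.0)
    # Reinforcement step: move every visited position's value
    # toward the final outcome of this game.
    for state in history:
        values[state] += ALPHA * (reward - values[state])
    return w

random.seed(0)
results = [self_play_game() for _ in range(5000)]
```

After a few thousand games the value table covers the positions that actually arise in play, and the greedy move choice steers both "copies" of the program toward them: the quote's "the more it plays, the better it gets," in miniature.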