It has been announced that Google DeepMind's AlphaGo has beaten Go master Lee Se-dol 3-0.
We know computers can beat humans at lots of games. Deep Blue managed (just) to beat chess world champion Garry Kasparov in the late 1990s.
So why is this latest victory significant?
Deep Blue, like many other game-playing computers, relied mainly on brute force to win against humans. By performing millions of calculations to assess almost every possible move and counter-move, a computer can pick the best option each turn and simply wait for a human to make a mistake.
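The brute-force idea can be sketched in a few lines. This is not Deep Blue's actual algorithm, just a minimal illustration of exhaustive game-tree search on a toy take-away game (a hypothetical example: players alternately remove one or two stones, and whoever takes the last stone wins):

```python
from functools import lru_cache

# Exhaustive search of a toy take-away game: players alternately
# remove 1 or 2 stones; whoever takes the last stone wins.
@lru_cache(maxsize=None)
def best_outcome(stones):
    """+1 if the player to move can force a win, -1 otherwise."""
    for take in (1, 2):
        if take > stones:
            continue
        if take == stones:
            return 1                      # taking the last stone wins
        if best_outcome(stones - take) == -1:
            return 1                      # opponent is left in a losing spot
    return -1

# Piles that are multiples of 3 are lost for the player to move.
print([best_outcome(n) for n in range(1, 7)])   # → [1, 1, -1, 1, 1, -1]
```

Because the game is tiny, every position can be examined, so the program plays perfectly by pure calculation. That is the approach that works for chess but, as the next paragraph explains, breaks down for Go.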
Go is a different game altogether. Players place stones to try to surround sections of the board. Because the number of possible moves and strategies is so vast, it is impossible to use brute force to "solve" a game of Go. Go-playing computers have to take a different approach.
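A rough back-of-the-envelope calculation shows why. Using commonly cited averages (roughly 35 legal moves per turn over about 80 turns in chess, versus roughly 250 moves over about 150 turns in Go — approximate figures, not exact counts), the game trees differ by hundreds of orders of magnitude:

```python
# Rough game-tree sizes from commonly cited average branching
# factors and game lengths (approximate, for illustration only).
chess_tree = 35 ** 80      # ~35 moves per turn, ~80 turns
go_tree = 250 ** 150       # ~250 moves per turn, ~150 turns

print(f"chess ~10^{len(str(chess_tree)) - 1}")   # ~10^123
print(f"go    ~10^{len(str(go_tree)) - 1}")      # ~10^359
```

Even 10^123 positions is far beyond any computer, and Go's tree is vaster still, which is why exhaustive search is a non-starter.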
AlphaGo uses a deep neural network, which functions much more like a human brain than brute-force game-playing software does. A neural network is trained, much as we learn a new skill through practice. AlphaGo was taught to play using data from expert human players and then learnt to get better by playing different versions of itself. By learning which moves and strategies worked over millions of games, AlphaGo became an expert player.
These machine-learning approaches open up artificial-intelligence solutions to problems that computers previously just couldn't solve. The same techniques can be applied in all sorts of fields. We already have software to help diagnose disease. Imagine how much faster research and development of medicine might be in the future, when we can use these techniques to supplement human ingenuity.
In a way, computer programs are becoming more like us. They are starting to truly learn and we are getting closer to genuinely intelligent artificial intelligence.