From Alan Turing's first chess program, written entirely on paper in 1951, to Garry Kasparov's famous loss at the proverbial hand of IBM's Deep Blue supercomputer in 1997, chess has long been used as an indicator of progress for computers. Today, artificial intelligence systems are so advanced that humans barely have a chance at beating them. Google's AlphaZero is a prime example; it started out knowing only the rules of chess and nothing more: no opening book, no endgame theory, no libraries of past games, nada. In a matter of hours, it had already played more games against itself than have ever been recorded in human chess history.
In a new study, researchers in artificial intelligence at University College London have yet again turned to chess. Only this time, their machine learning program didn't practice millions of games to master chess but instead analyzed the language of expert commentators. Someday, the researchers say, a similar approach could allow machines to decipher emotional language and acquire skills that would otherwise be inaccessible through 'brute force'.
First, the researchers gathered the commentaries from 2,700 chess games, which were pruned so that ambiguous or uninteresting moves were removed. They then employed a recurrent neural network (a type of neural network in which the output of the previous step is fed back as input to the current step) and a mathematical technique called word embeddings, which maps words to numerical vectors, to parse the language of the commentators.
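To make the idea concrete, here is a toy sketch (not the authors' code) of how word embeddings and a recurrent cell fit together: each word is looked up as a small vector, and the recurrent step folds the previous hidden state back into the current one. The vocabulary, weights, and two-dimensional embeddings are all invented for illustration.

```python
import math

# Hypothetical 2-d embeddings for a tiny vocabulary (invented values).
EMBED = {
    "brilliant": [0.9, 0.1],
    "blunder":   [-0.8, 0.2],
    "move":      [0.0, 0.1],
}

W_X = [[0.5, 0.0], [0.0, 0.5]]  # input-to-hidden weights (invented)
W_H = [[0.1, 0.0], [0.0, 0.1]]  # hidden-to-hidden (recurrent) weights

def rnn_step(h, x):
    # h_t = tanh(W_h @ h_{t-1} + W_x @ x_t): the previous output
    # is fed back in as input to the current step.
    return [math.tanh(sum(wh * hv for wh, hv in zip(W_H[i], h)) +
                      sum(wx * xv for wx, xv in zip(W_X[i], x)))
            for i in range(2)]

def sentiment(tokens):
    # Run the recurrence over a tokenized commentary string and
    # read off the first hidden unit as a crude polarity score.
    h = [0.0, 0.0]
    for tok in tokens:
        h = rnn_step(h, EMBED.get(tok, [0.0, 0.0]))
    return h[0]

positive = sentiment(["brilliant", "move"])
negative = sentiment(["blunder", "move"])
```

With these made-up weights, commentary containing "brilliant" scores higher than commentary containing "blunder"; a real model would learn the embeddings and weights from the pruned commentary corpus instead of hard-coding them.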
The algorithm, called SentiMATE, worked out the basic rules of chess as well as several key strategies, including forking and castling, all by itself. That said, it played rather poorly, at least compared with a grandmaster-level AI.
“We present SentiMATE, a novel end-to-end Deep Learning model for Chess, employing Natural Language Processing that aims to learn an effective evaluation function assessing move quality. This function is pre-trained on the sentiment of commentary associated with the training moves and is used to guide and optimize the agent’s game-playing decision making. The contributions of this research are three-fold: we build and put forward both a classifier which extracts commentary describing the quality of Chess moves in vast commentary datasets, and a Sentiment Analysis model trained on Chess commentary to accurately predict the quality of said moves, to then use those predictions to evaluate the optimal next move of a Chess agent,” the authors wrote.
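The three contributions the authors describe amount to a pipeline: filter the commentary down to move-quality judgments, score each judgment's sentiment, and use those scores to pick the next move. The sketch below mimics that flow with deliberately crude stand-ins; the function names, keyword lists, and example commentary are all hypothetical, not taken from the paper.

```python
def is_quality_comment(text):
    # Stage 1 stand-in for the classifier: keep only commentary
    # that actually judges the quality of a move.
    return any(w in text.lower() for w in ("good", "bad", "brilliant", "blunder"))

def sentiment_score(text):
    # Stage 2 stand-in for the sentiment model: a crude
    # lexicon-based polarity count instead of a trained network.
    words = text.lower()
    pos = sum(w in words for w in ("good", "brilliant"))
    neg = sum(w in words for w in ("bad", "blunder"))
    return pos - neg

def pick_move(candidates):
    # Stage 3: evaluate each candidate move by the sentiment of its
    # associated commentary and choose the highest-scoring one.
    scored = {move: sentiment_score(comment)
              for move, comment in candidates.items()
              if is_quality_comment(comment)}
    return max(scored, key=scored.get)

moves = {
    "Nf3": "A good developing move.",
    "Qh5": "A typical beginner blunder.",
}
best = pick_move(moves)  # → "Nf3"
```

In SentiMATE proper, the keyword heuristics above are replaced by trained models, but the shape of the decision (commentary in, move ranking out) is the same.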
High-level performance was not its objective, though. Where SentiMATE shines is in its ability to use language to acquire a skill instead of practicing it, thus requiring less data and computing power than conventional approaches. AlphaZero, for instance, needs thousands of "little brains" (specialized chips called Tensor Processing Units, or TPUs) and millions of practice games to master games such as chess, shogi, or Go.
In a world with millions of books, blogs, and studies, machines like SentiMATE could find many practical applications. Such a machine could, for instance, learn to predict financial activity or write better stories simply by tapping into the sum of human knowledge.
SentiMATE was described in a paper published on the pre-print server arXiv.