Application and analysis of deep neural networks and tree search on Othello (a.k.a. Reversi)
Inspired by AlphaGo, this project explores the potential of applying deep neural networks (DNNs) and Monte Carlo tree search (MCTS) to the game of Othello, and describes the design, implementation, and evaluation of our program. We also introduce the method we use to measure the strength of the evaluation function. If the effectiveness of DNNs in game AI can be demonstrated, even within a limited scope, the way game AIs are built may change substantially. By comparing DNN-based AIs against AIs built with other methods, we verify the applicability of DNNs for evaluation. This finding may have an enormous impact on game AI design.
A CNN is a kind of feed-forward artificial neural network whose artificial neurons respond to patterns within their receptive fields. A CNN model consists of one or more convolutional layers and fully-connected layers, and may also include pooling layers and shared weights. This structure enables a CNN to exploit the 2-dimensional structure of its input. As a result, compared to other deep learning models, CNNs tend to give better results on 2-dimensional data, such as images and game boards.
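To make the idea of exploiting 2-dimensional structure concrete, the sketch below hand-rolls a single convolution of a learned-style 3x3 filter over an 8x8 Othello board. The board encoding (+1 black, -1 white, 0 empty) and the specific kernel are illustrative assumptions, not the architecture used in this project; a real CNN layer would learn many such filters.

```python
import numpy as np

def conv2d(board, kernel):
    """Valid 2-D convolution (cross-correlation) of a board with one kernel."""
    kh, kw = kernel.shape
    h, w = board.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output cell summarizes a local 3x3 neighbourhood,
            # so spatial relationships between discs are preserved.
            out[i, j] = np.sum(board[i:i + kh, j:j + kw] * kernel)
    return out

# 8x8 Othello starting position: +1 black, -1 white, 0 empty (assumed encoding)
board = np.zeros((8, 8))
board[3, 3] = board[4, 4] = -1.0
board[3, 4] = board[4, 3] = 1.0

# One hypothetical 3x3 filter; a conv layer would learn many of these
kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]], dtype=float)

features = conv2d(board, kernel)
print(features.shape)  # (6, 6): a feature map that keeps the board's layout
```

The key point is that the output is itself a 2-D map, so stacked convolutional layers can build up increasingly abstract spatial features of the position.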
The value network v_z(s) (z for WZebra) was trained by supervised learning. The training data came from self-play games of another Othello program, WZebra, one of the strongest Othello AIs in the world, which provides configurable search depth and an evaluation score for each move. We generated training games at a search depth of six, balancing search strength against generation efficiency. So far, over 4000 self-play games, with an evaluation score for each move, have been recorded as the training set. The scores provided by WZebra generally lie in the range -64 to +64 (reflecting the maximum possible final disc difference).
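One common preprocessing step when training a value network on such scores is to rescale them into a bounded target range. The snippet below is a minimal sketch of that idea, assuming the paper's [-64, +64] score range and a tanh-style value head with outputs in [-1, 1]; the normalization scheme itself is our assumption, not something the paper specifies.

```python
def normalize_score(score, max_disc_diff=64.0):
    """Map a WZebra evaluation score in [-64, +64] to a value target in
    [-1, 1], clamping any out-of-range scores (assumed preprocessing)."""
    return max(-1.0, min(1.0, score / max_disc_diff))

# Hypothetical (board, score) training pairs; boards omitted for brevity
samples = [(None, 12.0), (None, -64.0), (None, 3.5)]
targets = [normalize_score(score) for _, score in samples]
print(targets)  # [0.1875, -1.0, 0.0546875]
```

Bounding the targets this way keeps the regression loss well-scaled regardless of how lopsided individual training positions are.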
Policy networks were constructed with an architecture similar to the value networks, but the output is categorical, representing the 60 possible move squares in Othello (the four centre squares are occupied from the start). Given a board configuration as input, a policy network directly outputs a probability for each square on the board.
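Turning the 60 raw network outputs into a probability distribution is typically done with a softmax, usually combined with masking of squares that are not legal moves in the current position. The sketch below shows this step; the masking approach and the toy numbers are our assumptions for illustration.

```python
import math

def masked_softmax(logits, legal):
    """Convert 60 raw policy outputs into move probabilities,
    zeroing out squares that are not legal moves (assumed masking scheme)."""
    exps = [math.exp(l) if ok else 0.0 for l, ok in zip(logits, legal)]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical position: 60 logits, only three squares are legal moves
logits = [0.0] * 60
logits[0], logits[1], logits[2] = 2.0, 1.0, 1.0
legal = [False] * 60
legal[0] = legal[1] = legal[2] = True

probs = masked_softmax(logits, legal)
print(round(sum(probs), 6))  # 1.0: a valid distribution over legal moves
```

Masking before normalizing guarantees the network never assigns probability mass to illegal squares, which simplifies the downstream search.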
The policy network p_SL(s) (SL for supervised learning) was also trained by supervised learning, on the same training set as the value network, but using the move played as the data label instead of the score provided by WZebra. Since these 4000-plus game transcripts were generated by WZebra with human-style randomness, the training samples show some variety in playing style, which benefits neural network training.
MCTS is a heuristic search algorithm that chooses moves based on the results of a large number of simulated self-play games. AlphaGo implemented an asynchronous policy and value MCTS algorithm, which combined both the policy network and the value network inside MCTS. Based on this idea, we constructed a similar MCTS algorithm using the policy network p_SL(s) and the value network v_p(s). v_p(s) is a CNN model with the same structure shown in Figure 5, trained with labels set to the action value Q of each move, as generated by a policy-only MCTS that applies p_SL(s) alone.
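The core of an AlphaGo-style MCTS is the selection rule that blends the policy prior with accumulated value estimates. The sketch below shows the PUCT rule AlphaGo uses; the exploration constant, node representation, and toy statistics are assumptions for illustration, not the exact implementation of this project.

```python
import math

def puct_select(children, c_puct=1.5):
    """Select the child maximizing Q + U: Q is the mean action value
    (backed up from v_p(s)), U is an exploration bonus weighted by the
    policy prior P from p_SL(s). c_puct is an assumed constant."""
    total_visits = sum(ch["N"] for ch in children)

    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] > 0 else 0.0  # mean action value Q
        u = c_puct * ch["P"] * math.sqrt(total_visits) / (1 + ch["N"])
        return q + u

    return max(children, key=score)

# Hypothetical node: P = prior from p_SL(s), N = visits, W = total value
children = [
    {"move": "d3", "P": 0.6, "N": 10, "W": 4.0},
    {"move": "c4", "P": 0.3, "N": 2,  "W": 1.5},
    {"move": "e6", "P": 0.1, "N": 0,  "W": 0.0},
]
best = puct_select(children)
print(best["move"])
```

Notice how the rule trades off exploitation and exploration: a heavily visited child needs a high mean value Q to keep winning the selection, while a rarely visited child with a strong prior still gets explored.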