
leela-chess


Introduction

This is an adaptation of GCP's Leela Zero repository to chess, using Stockfish's position representation and move generation. (No heuristics or prior knowledge are carried over from Stockfish.)

The goal is to build a strong UCT chess AI following the same type of techniques as AlphaZero, as described in Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm.

Because this requires a huge amount of computation, we will need to run it as a distributed project.

Please visit the LCZero forum (https://groups.google.com/forum/#!forum/lczero) or the GitHub issues to discuss.

Contributing

For precompiled binaries, see:

For live status: http://lczero.org

The rest of this page is for users who want to compile the code themselves. Of course, we also appreciate code reviews, pull requests and Windows testers!

Compiling

Requirements

  • GCC, Clang or MSVC, any C++14 compiler
  • boost 1.54.x or later (libboost-all-dev on Debian/Ubuntu)
  • BLAS Library: OpenBLAS (libopenblas-dev) or (optionally) Intel MKL
  • zlib library (zlib1g & zlib1g-dev on Debian/Ubuntu)
  • Standard OpenCL C headers (opencl-headers on Debian/Ubuntu, or at https://github.com/KhronosGroup/OpenCL-Headers/tree/master/opencl22/)
  • OpenCL ICD loader (ocl-icd-libopencl1 on Debian/Ubuntu, or reference implementation at https://github.com/KhronosGroup/OpenCL-ICD-Loader)
  • An OpenCL-capable device with recent drivers, preferably a very fast GPU, is strongly recommended but not required. (OpenCL 1.2 support should be enough; even OpenCL 1.1 might work.)
  • Tensorflow 1.4 or higher (for training)
  • The program has been tested on Linux.

Example of compiling - Ubuntu 16.04

# Install dependencies
sudo apt install cmake g++ git libboost-all-dev libopenblas-dev opencl-headers ocl-icd-libopencl1 ocl-icd-opencl-dev zlib1g-dev

# Test for OpenCL support & compatibility
sudo apt install clinfo && clinfo

# Clone git repo
git clone https://github.com/glinscott/leela-chess.git
cd leela-chess
git submodule update --init --recursive
mkdir build && cd build

# Configure
cmake ..

# Or configure without GPU support
cmake -DFEATURE_USE_CPU_ONLY=1 ..

# Build and run tests
make
./tests

Compiling Client

See https://github.com/glinscott/leela-chess/tree/master/go/src/client/README.md. This client produces self-play games and uploads them to http://lczero.org. A central server uses these games as input to the training process.

Weights

The weights from the distributed training are downloadable from http://lczero.org/networks. The best network is generally the top one that has had some games played with it.

Weights that we trained to prove the engine was solid are at https://github.com/glinscott/lczero-weights. The best weights were obtained through supervised learning on a dataset of human games with Elo ratings above 2000.

Training

The training pipeline resides in training/tf. It requires TensorFlow running on Linux (Ubuntu 16.04 in this case).

Data preparation

In order to start a training session, you first need to download training data from http://lczero.org/training_data. This data is packed in tar.gz archives, each containing 10,000 games, or chunks as we call them. Preparing the data requires the following steps:

tar -xzf games11160000.tar.gz
ls training.* | parallel gzip {}

This repacks each chunk into a gzipped file ready to be parsed by the training pipeline. Note that the parallel command uses all your cores and can be installed with apt-get install parallel.
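If GNU parallel is not available, the same repacking can be done with a few lines of Python; a minimal sketch, assuming the extracted chunks match the training.* pattern used above:

import glob
import gzip
import os
import shutil

# Gzip every extracted chunk, mirroring `ls training.* | parallel gzip {}`.
for path in glob.glob('training.*'):
    if path.endswith('.gz'):
        continue  # already compressed
    with open(path, 'rb') as src, gzip.open(path + '.gz', 'wb') as dst:
        shutil.copyfileobj(src, dst)
    os.remove(path)  # gzip(1) also removes the original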

Training pipeline

Now that the data is in the right format, one can configure a training pipeline. This configuration is done through a yaml file; see training/tf/configs/example.yaml:

%YAML 1.2
---
name: 'kb1-64x6'                       # ideally no spaces
gpu: 0                                 # gpu id to process on

dataset: 
  num_chunks: 100000                   # newest nof chunks to parse
  train_ratio: 0.90                    # trainingset ratio
  input: '/path/to/chunks/*/draw/'     # supports glob

training:
    batch_size: 2048                   # training batch
    total_steps: 140000                # terminate after these steps
    shuffle_size: 524288               # size of the shuffle buffer
    lr_values:                         # list of learning rates
        - 0.02
        - 0.002
        - 0.0005
    lr_boundaries:                     # list of boundaries
        - 100000
        - 130000
    policy_loss_weight: 1.0            # weight of policy loss
    value_loss_weight: 1.0             # weight of value loss
    path: '/path/to/store/networks'    # network storage dir

model:
  filters: 64
  residual_blocks: 6
...

The configuration is pretty self-explanatory; if you're new to training, I suggest looking at Google's machine learning glossary. Now you can invoke training with the following command:

./train.py --cfg configs/example.yaml --output /tmp/mymodel.txt

This will initialize the pipeline and start training a new neural network. You can view progress by invoking tensorboard:

tensorboard --logdir leelalogs

If you now point your browser at localhost:6006, you'll see the training progress as the training steps pass by. Have fun!

Restoring models

The training pipeline will automatically restore from a previous model if one exists under the path key of the training: section in your yaml config. To initialize from a raw weights.txt file, you can use training/tf/net_to_model.py, which will create a checkpoint for you.
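Under the hood this is the usual TF 1.x restore-if-exists pattern; the sketch below is illustrative only (not the pipeline's actual code), with the directory being whatever your training: path points to:

import tensorflow as tf  # TF 1.x, as used by this pipeline

# ... build the model graph first ...
ckpt = tf.train.latest_checkpoint('/path/to/store/networks')
saver = tf.train.Saver()
with tf.Session() as sess:
    if ckpt:
        saver.restore(sess, ckpt)                    # resume previous model
    else:
        sess.run(tf.global_variables_initializer())  # start from scratch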

Supervised training

Generating training data from PGN files is currently broken and has low priority; feel free to create a PR.

Other projects

License

The code is released under the GPLv3 or later, except for ThreadPool.h, cl2.hpp and the clblast_level3 subdir, which have specific licenses (compatible with GPLv3) mentioned in those files.

Issues

Port NNCache over?

This was a significant speed win for Leela Zero. I haven't measured how many duplicate eval calls we make, though.
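For reference, the idea is a bounded map from position hash to network output; a minimal Python sketch with hypothetical names (the real port would be C++, as in Leela Zero):

from collections import OrderedDict

class NNCache:
    """LRU cache from a position's hash to its (policy, value) evaluation."""
    def __init__(self, max_size=100000):
        self._cache = OrderedDict()
        self._max_size = max_size

    def lookup(self, pos_hash):
        result = self._cache.get(pos_hash)
        if result is not None:
            self._cache.move_to_end(pos_hash)  # mark as recently used
        return result

    def insert(self, pos_hash, policy, value):
        self._cache[pos_hash] = (policy, value)
        self._cache.move_to_end(pos_hash)
        if len(self._cache) > self._max_size:
            self._cache.popitem(last=False)    # evict least recently used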

Incorrect sampling from ChunkFiles

num_train = int(cfg.num_samples * cfg.train_ratio)
num_test = cfg.num_samples - num_train
print("Generating {0} training-, {1} testing-samples".format(num_train, num_test))
if cfg.output != ".":
    os.makedirs(cfg.output)
with open('{}/train.bin'.format(cfg.output), 'wb') as f:
    for _ in range(num_train):
        data = next(gen)
        f.write(data)
with open('{}/test.bin'.format(cfg.output), 'wb') as f:
    for _ in range(num_test):
        data = next(gen)
        f.write(data)

Currently, consecutive positions from a game are gathered into the dataset via successive next() calls. Instead, random sampling should be performed.
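One cheap fix is to pass the generator through a shuffle buffer so that consecutive positions from the same game are broken up; a sketch, assuming a buffer of this size fits in memory:

import random

def shuffled(gen, buffer_size=65536):
    """Yield items from gen in approximately random order via a shuffle buffer."""
    buf = []
    for item in gen:
        buf.append(item)
        if len(buf) >= buffer_size:
            i = random.randrange(len(buf))
            buf[i], buf[-1] = buf[-1], buf[i]  # swap a random item to the end
            yield buf.pop()
    random.shuffle(buf)  # drain the remainder
    yield from buf

Drawing the num_train and num_test samples through shuffled(gen) instead of gen would then decorrelate neighbouring samples.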

Compile on Windows?

Has anyone been able to get this working on Windows? I spent all day trying to compile it in MSYS2 but to no avail.

Tensorflow training refactor

Refactor the training to support multiple GPUs (each training a different model for now) and an automatically decaying learning rate. All main() entry points should take a yaml config file as an argument that carries the relevant information, e.g.:

name: 'My Network'
gpu: 0
dataset: '/path/to/dataset'
training:
    batch_size: 1024
    learning_rate: 0.1
    decay_rate: 0.1
    decay_step: 100000
    policy_loss_weight: 1.0
    value_loss_weight: 0.01
model:
   filters: 64
   residual_blocks: 5

It would then be useful to add the yaml file to the weights.txt.gz at this repository.

Remove network history

The current NN keeps a history of the last 7 board states. We could safely remove it to speed up the network.

Transposition table and learning

The transposition tables keep no record of the "repeatability state" of the position, nor of its history. It's very unlikely that two transposed nodes with different histories will receive the same evaluation from the network, so returning the value of another node introduces a little inconsistency.
For instance, assume A and B are two nodes that transpose. If we want to evaluate the root, taking the path that evaluates A first will give a different result than taking the path that evaluates B first.
Another example, maybe worse: A and B transpose, and just after B the position could repeat and draw, so B's evaluation should be closer to 0.5 than A's.
I believe this effect is very small and it is worth using transposition tables during play, but I suggest removing them during self-play, because I fear this little inconsistency could perturb the learning process.

Implement AlphaZero

Train the network continuously on the last n games. Check for new network weights after each game of self play.

Port CPU verification from leela-zero/next

GCP wrote a really nice CPU verification implementation. With verification today, the CUDA kernels for convolution are not working at all: they give different data back each time, so there is some uninitialized memory access going on.

Initial server implementation

Probably a basic Python web server. I have an Ubuntu 16.04 server set up; I just need to deploy it and figure out what we want the API to look like.

Data augmentation/board flipping

Continued from #20 (comment) to avoid derailing that thread.

Shouldn't the data augmentation and board flipping be mutually exclusive? In the current code we already flip the board to get side-to-move on the bottom, so a data augmentation would amount to flipping the color bit, making it pure noise.

PGN Parser

This will be necessary to implement @gcp's excellent idea to validate the UCT framework. The PGN parser will then need to dump out training data to allow us to try supervised learning from current expert AIs (SF in this case).

Game result statistics

From the 311 self-play games I have generated so far, I counted 157 draws, 139 white wins, and 15 black wins. This strikes me as odd, considering that the first-move advantage should be worth very little for a basically unskilled network, and even for a skilled one it should not be nearly this large.

Is it possible the search code has some sort of bug giving white an advantage over black?

Edit: From simply looking at the games, I would say white tends to move pawns a lot more than black, and due to the chaotic nature of the games those pawns regularly get promoted.
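For what it's worth, this imbalance is far outside what chance allows; a quick back-of-envelope check (Python) of 139 white wins in 154 decisive games under a fair coin:

from math import comb

decisive, white_wins = 154, 139
# P(X >= 139) for X ~ Binomial(154, 0.5)
p = sum(comb(decisive, k) for k in range(white_wins, decisive + 1)) / 2 ** decisive
print(p)  # vanishingly small: the bias is systematic, not statistical noise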

3-folds

I just had a game where Leela was not able to win with KQ vs K (against a relatively weak engine), but instead ended the game by allowing a 3-fold repetition. This was with the best network so far and 2000 playouts. I don't have the PGN anymore.

Is it possible to train the value output in a supervised (or even self-play) manner, as the AZ paper explains?

I've been working for a while on this kind of AZ project. I even started one myself some months ago: https://github.com/Zeta36/chess-alpha-zero

And I have a doubt I'd like to ask you about: the training of the value output. If I always backprop the value output with an integer (-1, 0, or 1), the NN should quickly get stuck in a local minimum, ignoring the input and always returning the mean of these 3 values (in this case 0). I mean, as soon as the NN learns to always return near-zero values ignoring the input planes, there will be no more improvement, since it will have a high accuracy (>25%) almost immediately after some steps.

In fact, I did a toy experiment to confirm this. As I mentioned, the NN was unable to improve after reaching 33% accuracy (~0.65 mean squared loss). And this makes sense if the NN is always returning 0 (values very near zero). Imagine we introduce a dataset of 150 games: ~50 are -1, ~50 are 0 and ~50 are 1. If the NN learns to always output near 0, we get an instant loss (MSE) of 100/150 ~ 0.66 and an accuracy of ~33% (1/3).
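A tiny check of that arithmetic (illustrative only):

targets = [-1] * 50 + [0] * 50 + [1] * 50   # ~150 games as in the example
mse = sum((0 - t) ** 2 for t in targets) / len(targets)
acc = sum(t == 0 for t in targets) / len(targets)
print(mse, acc)  # 0.666..., 0.333...: a constant-zero net looks "okay" by these metrics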

How on earth did DeepMind manage to train the value network with just 3 integer values to backpropagate?
I thought the tournament selection (the evaluation worker) helped to overcome this local minimum by stabilizing the training, but in their latest paper they say they removed the eval process (??)... so I don't really know what to think.

I don't know either whether self-play can help with this issue. In the end, we are still backpropagating an integer from a domain of just 3 values.

Btw, you can see in our project at https://github.com/Zeta36/chess-alpha-zero that we got some "good" results (in a supervised way), but I suspect that was all thanks to the policy network guiding the MCTS exploration (with a value function always returning near-zero values).

What do you think about this?

Once weights are calculated, how long should a leela-zero move take to calculate?

The Silver et al. paper says that they allowed one minute (of TPU?) per chess move, which equated to about 80k positions being evaluated per second. So: once a good set of Leela-chess weights is in place, how long should a 2018 desktop PC (say, 16 cores @ 4GHz) take to evaluate 80k x 60 = 4.8M positions? Is this figure already known?

And roughly how long would (say) an NVidia Tesla V100 take to do the same thing? (NVidia sometimes claims "30x" faster for this kind of task, but real world problems sometimes don't conform to expectations.)

Thanks! :-)

Various comments

Hi glinscott,

This seems interesting. I was working on my own version using caffe, and I'm also well aware of leela-zero, but maybe it's better to work together. Anyway, I have two questions:

  1. How are you getting to a policy vector size of 1924? The paper uses a policy vector of 8x8x73 = 4672, or more specifically 8*8 * ({1,...,7}x{N,NE,E,SE,S,SW,W,NW} + {n0,n1,...,n7} + {KNIGHT,BISHOP,ROOK}x{NW,N,NE}). (See the quick check below.)

Kind regards,
Error323
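For reference, the 4,672 figure in the question decomposes as 8x8 from-squares times 73 move types; a quick check of the breakdown quoted above:

queen_moves = 7 * 8        # {1,...,7} squares in each of 8 directions
knight_moves = 8           # {n0,...,n7}
underpromotions = 3 * 3    # {KNIGHT,BISHOP,ROOK} x {NW,N,NE}
move_types = queen_moves + knight_moves + underpromotions  # 73
print(8 * 8 * move_types)  # 4672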

Hanging in parse.py?

Trying to run tf/parse.py, it seems to get stuck at:

    dataset = tf.data.Dataset.from_generator(
        parser.parse_chunk, output_types=(tf.string))

i.e. no progress after a few minutes. The output looks reasonable up to that point:

2018-01-13 21:05:36.231270: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0 
2018-01-13 21:05:36.231274: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   Y 
2018-01-13 21:05:36.231645: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: ...)
Test parse passes

Also, if I enable the parser's benchmark, the numbers look reasonable:

996.8451211746881 pos/sec 10.031648635864258 secs

TF is able to run its own examples (e.g. tutorials/image/mnist/convolutional.py runs fine).

Any ideas?

OpenCL calculation error

Of 1912 games played, 607 had an error > 5% in the OpenCL calculation, and the average OpenCL error was 26.5%. Do others experience this too?

grep 'error=' training.out | cut -d' ' -f11 | cut -d'=' -f2 | cut -d'%' -f1 > errors.txt

>> load errors.txt
>> mean(errors)
ans =  26.452
>> std(errors)
ans =  56.250
>> min(errors)
ans =  5.0110
>> max(errors)
ans =  632.42

System configuration:

Ubuntu 16.04
Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz
Platform version: OpenCL 1.2 CUDA 9.1.98
Platform profile: FULL_PROFILE
Platform name: NVIDIA CUDA
Platform vendor: NVIDIA Corporation
Device ID: 0
Device name: GeForce GTX 1080 Ti
Device type: GPU
Device vendor: NVIDIA Corporation
Device driver: 387.34
Device speed: 1657 MHz
Device cores: 28 CU
Device score: 1112
Device ID: 1
Device name: GeForce GTX 1080 Ti
Device type: GPU
Device vendor: NVIDIA Corporation
Device driver: 387.34
Device speed: 1657 MHz
Device cores: 28 CU
Device score: 1112
Selected platform: NVIDIA CUDA
Selected device: GeForce GTX 1080 Ti
with OpenCL 1.2 capability
Wavefront/Warp size: 32
Max workgroup size: 1024
Max workgroup dimensions: 1024 1024 64

Full temperature setting

The temperature parameter for move selection, as utilised in AlphaGo Zero and AlphaZero, is already partially implemented in LCZero. However, it may be useful to fully implement it as described in the AlphaGo Zero paper:

They generalise the move probability as the exponentiated visit count, p(n) ~ N^(1/τ), with a temperature parameter τ. For τ=1 this simplifies to move probability proportional to visit count, while an infinitesimal temperature τ→0 means greedy selection, i.e. the move with the highest visit count is always chosen. The two cases τ=1 and τ→0 are currently implemented in both Leela Zero and Leela Chess.

However, there is a good reason for the generalised implementation: τ→0 produces the strongest play, but makes the code deterministic if no other random factors such as Dirichlet noise or symmetries are introduced. For Leela Zero this is not a problem, since the random symmetry applied to every neural net evaluation provides sufficient randomness, but for LCZero this is not an option (yet?). Deterministic code makes evaluation matches impossible since they will end up repeating the same game.

On the other hand, τ=1 has a lot of randomness, but vastly lowers playing strength since even moves that only got a single visit can be selected in the end, usually ending up in blunders that easily turn the game result around.

The advantage of a temperature setting in between, for example τ=0.25, is that it provides non-deterministic behaviour while affecting playing strength much less. Let's assume a scenario where the search found a move A with 400 visits, B with 350, and moves C and D with 40 and 10 visits respectively, and let's say C and D are blunders. With τ=1, A and B are picked with 50% and 44% probability, while a blunder move will be picked with a chance of 6%. If this visit distribution is typical, a blunder will happen at least once per game on average. With τ=0.25 on the other hand, the chances of selecting A, B, C, and D are 63%, 37%, 0.006% and 0.00002%. With τ→0, only A can ever be selected.

So a small but not infinitesimal temperature will frequently select moves with only slightly fewer visits than the most visited one (which provides the needed randomness), but won't select moves with low visit counts. This would (in my opinion) make it a good choice for evaluation games in Leela Chess, at least until other methods of providing randomness are found and tested.

The implementation is very easy: in UCTNode::randomize_first_proportionally(), accum has to be changed to a double-precision float, and accum += child->get_visits() replaced by accum += std::pow(child->get_visits(), 1.0 / tau), with tau a parameter specified when enabling cfg_randomize.
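A small Python sketch of the resulting selection probabilities, reproducing the numbers from the example above (the actual change would be the C++ edit just described):

def move_probs(visits, tau):
    """p(n) ~ N^(1/tau): exponentiated visit counts, normalized."""
    weights = [n ** (1.0 / tau) for n in visits]
    total = sum(weights)
    return [w / total for w in weights]

visits = [400, 350, 40, 10]          # moves A, B, C, D from the example
print(move_probs(visits, tau=1.0))   # ~[0.50, 0.44, 0.05, 0.0125]
print(move_probs(visits, tau=0.25))  # ~[0.63, 0.37, 6e-5, 2.5e-7]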

Any thoughts?

When are we gonna start an RL training session?

I was just wondering: what would be a good milestone for starting to train a small network tabula rasa? How big would it be? 64x5? I would like to work towards that goal!

  • Obviously we need a distributed management server/client system.
  • Will we go for an Alpha Zero structure or AlphaGo Zero? (I think the former would be better, but requires some kind of adaptive throughput on the training end.)
  • A webpage showing results

I'm gathering every GPU system I can find 😏 I can't wait! Perhaps we can ask for sponsors through GM players?

Little Elo Progress

This has been a rather small effort so far, but after the initial training on fewer than 50 games, and then 100 (base), I generated 600 new games and optimized again (leelaz-model-154000.txt). The results are below, with almost no improvement. Perhaps this is expected, but I was hoping for a little bump. Maybe the incremental improvement is lost in the noise at the current settings.

Score of lc_new vs lc_base: 22 - 20 - 358 [0.502] 400
Elo difference: 1.74 +/- 11.02

Project lacks software license

There's no COPYING file to say what OSS license Leela Zero Chess is under. Files copied from Leela Zero (e.g. https://github.com/glinscott/leela-chess/blob/master/src/config.h) and from Stockfish (e.g. https://github.com/glinscott/leela-chess/blob/master/src/Bitboard.cpp) have a license comment, but, for example, https://github.com/glinscott/leela-chess/blob/master/src/pgn.cpp lacks one.

You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.

Implement bench

The goal is that, with a given set of network weights, we explore the exact same tree with one thread. This would allow us to refactor the search/eval code without fear. The same concept exists in SF and is extremely useful.

We would specify a default set of weights, either random or perhaps the supervised network, and then a series of PGNs for which the evaluation should stay consistent.

leela-chess or leela-zero-chess

Perhaps somewhat pedantic, but if the goal here is minimal chess-specific info, shouldn't the term "zero" be in the name? Leela by itself is quite Go-centric, whereas leela-zero is the minimalist AlphaGo-Zero approach, I think.

Client should pass gpu parameter to lczero

The client should be able to pass the --gpu parameter to lczero for those who have multiple GPUs. For now I just built a second client that uses --gpu 1 to run on my second GPU.

I also changed it to pid := 2 so that the two clients write to different data directories.

PgnViewer Error

While attempting to view one of the games (http://162.217.248.187/game/2151), I got this popping up:

PgnViewer: Error parsing Bg8Kxg8, Error: unhandled from chars Bg8K

It seems like a space is missing between Bg8 and Kxg8, most likely due to a faulty line concatenation or something. See PR 66.

Temperature parameter

Maybe a stupid and useless question, but what's the point of the temperature parameter, since we always choose the best move by visit count?
Moreover, if we decide to choose the move randomly according to the policy's probability distribution, shouldn't the temperature decrease with the length of the game?
I understand that https://github.com/Zeta36/chess-alpha-zero does this.

Thank you everyone for contributing to this amazing and dynamic project! Can't wait to see it play!

Getting started

It looks like quite a few people had no problem getting started, but I beg to differ :)

Right now, the train.sh file starts two processes. Is this simply to "multithread" the code, i.e. to make sure we max out CPU and GPU, or is there some other point? I also thought it might be there to help test distributed code, with such "data" directories spread across nodes, for example. Anyway, this leaves us with directories build/data-1 etc.

Then we jump to TF training, which suggests, visually at least, that ../src/data/training is a directory, but once I get to creating/linking such directories under src, I find out that parse.py was looking for a .yaml. Now this is one hell of a "missing link"! Shouldn't this part be a bit more detailed?

Thanks!

Policy and value heads are from AlphaGo Zero, not Alpha Zero


The structure of these heads matches Leela Zero and the AlphaGo Zero paper, not the AlphaZero paper.

The policy head convolves the last residual output (say 64 x 8 x 8) with a 1 x 1 convolution into 2 x 8 x 8 outputs, and then converts those with an FC layer into 1924 discrete outputs.
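For illustration, a sketch of that structure in TF 1.x-style Python (names and shapes are illustrative, not the project's actual code):

import tensorflow as tf  # TF 1.x style

def policy_head(residual_out):
    """residual_out: [batch, 8, 8, 64] output of the residual tower (NHWC)."""
    # 1x1 convolution down to 2 planes -> [batch, 8, 8, 2]
    conv = tf.layers.conv2d(residual_out, filters=2, kernel_size=1,
                            activation=tf.nn.relu)
    # Flatten the 2 x 8 x 8 = 128 values and map them to 1924 move logits.
    flat = tf.reshape(conv, [-1, 2 * 8 * 8])
    return tf.layers.dense(flat, units=1924)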

Given that 2 x 8 x 8 only has 128 possible elements that can fire, this seems like a catastrophic loss of information. I think it can really only represent one from-square and one to-square, so only the best move will be correct (accuracy will look good, but not loss, and it can't reasonably represent MC probabilities over many moves).

In the AGZ paper they say: "We represent the policy π(a|s) by a 8 × 8 × 73 stack of planes encoding a probability distribution over 4,672 possible moves." Which is quite different.

They also say: "We also tried using a flat distribution over moves for chess and shogi; the final result was almost identical although training was slightly slower."

But note that, for the above-mentioned reason, it is almost certainly very suboptimal to construct the flat output from only 2 x 8 x 8 inputs. This works fine for Go, because moves only have a to-square, but chess also has from-squares. 64 x 8 x 8 may be reasonable, if we forget about underpromotion (we probably can).

The value head has a similar problem: it convolves down to a single 8 x 8 output, and then uses an FC layer to transform those 64 outputs into... 256 outputs. This does not really work either.

The value head isn't precisely described in the AZ paper, and a single 1 x 8 x 8 plane is probably good enough, but the 256 outputs in the FC layer then make no sense. The problems the value layer has right now may have a lot to do with the fact that the input to the policy head is broken, so the residual stack has to try to compensate for this.

-t 1 --seed N not deterministic

The problem is this code: seed = cfg_rng_seed ^ (std::uint64_t)thread_id. We need to not mix in the thread_id, at least for this case. I'm not sure exactly how this is all supposed to fit together; @Error323, can you look at this?

Exploit left-right symmetry?

Would it be worth applying the same kind of symmetry exploitation that goes on in AlphaGo (Zero), with respect to the king- and queenside? Castling would not be an issue, because it can still be considered a symmetric move in the sense that it involves an unmoved king moving two squares in one direction with the corresponding rook ending up behind it (although this is admittedly different from how it works in Chess960). Just don't forget to mirror the castling rights when mirroring a game state.

README.md build example doesn't run

The README.md example for cloning and building the project doesn't work as written. You need to tell git to pull the submodules too on the clone (GoogleTest); otherwise you get a cmake error saying the directory gtest can't be excluded because it doesn't contain a CMakeLists.txt. The patch is:

diff --git a/README.md b/README.md
index e38d897..fb78cd4 100644
--- a/README.md
+++ b/README.md
@@ -102,7 +102,7 @@ This runs an evaluation match using cutechess-cli.
     sudo apt install clinfo && clinfo

     # Clone github repo
-    git clone git@github.com:glinscott/leela-chess.git
+    git clone --recurse-submodules git@github.com:glinscott/leela-chess.git
     cd leela-chess
     mkdir build && cd build
     cmake ..

Cannot make

I can compile and make Leela Zero (the Go program). When trying chess, I get this:

brian@Tinker2Ubuntu:~/leela-chess/src$ make
Detected OS: Linux
make \
	CXXFLAGS='-I/usr/include/openblas -I. -Wall -Wextra -pipe -O3 -g -ffast-math -flto -march=native -std=c++14 -DNDEBUG'  \
	LDFLAGS=' -flto -g' \
	lczero
make[1]: Entering directory '/home/brian/leela-chess/src'
g++ -I/usr/include/openblas -I. -Wall -Wextra -pipe -O3 -g -ffast-math -flto -march=native -std=c++14 -DNDEBUG -MD -MP -c -o Network.o Network.cpp
In file included from /usr/include/c++/5/bits/hashtable.h:35:0,
                 from /usr/include/c++/5/unordered_map:47,
                 from Network.h:28,
                 from Network.cpp:52:
/usr/include/c++/5/bits/hashtable_policy.h: In instantiation of ‘struct std::__detail::__is_noexcept_hash<Move, std::hash<Move> >’:
/usr/include/c++/5/type_traits:137:12:   required from ‘struct std::__and_<std::__is_fast_hash<std::hash<Move> >, std::__detail::__is_noexcept_hash<Move, std::hash<Move> > >’
/usr/include/c++/5/type_traits:148:38:   required from ‘struct std::__not_<std::__and_<std::__is_fast_hash<std::hash<Move> >, std::__detail::__is_noexcept_hash<Move, std::hash<Move> > > >’
/usr/include/c++/5/bits/unordered_map.h:100:66:   required from ‘class std::unordered_map<Move, int>’
Network.cpp:67:40:   required from here
/usr/include/c++/5/bits/hashtable_policy.h:85:34: error: no match for call to ‘(const std::hash<Move>) (const Move&)’
  noexcept(declval<const _Hash&>()(declval<const _Key&>()))>
                                  ^
