
skpokereval's Introduction

SKPokerEval

A fast and lightweight 32-bit Texas Hold'em 7-card hand evaluator written in C++.


How do I use it?

#include <iostream>
#include "SevenEval.h"

int main() {
  // Get the rank of the seven-card spade flush, ace high.
  std::cout << SevenEval::GetRank(0, 4, 8, 12, 16, 20, 24) << std::endl;
  return 0;
}

Because the implementation is immutable it is inherently thread-safe, and there is no initialisation time.

How does it work?

We exploit a key scheme that gives us just enough uniqueness to correctly identify the integral rank of any 7-card hand: the greater this rank, the better the hand, and two hands of the same rank always draw. We require a memory footprint of just less than 135kB and typically six additions to rank a hand.

To start with we computed by brute force the first thirteen non-negative integers such that the formal sum of exactly seven with each taken at most four times is unique among all such sums: 0, 1, 5, 22, 98, 453, 2031, 8698, 22854, 83661, 262349, 636345 and 1479181. A valid sum might be 0+0+1+1+1+1+5 = 9 or 0+98+98+453+98+98+1 = 846, but invalid sum expressions include 0+262349+0+0+0+1 (too few summands), 1+1+5+22+98+453+2031+8698 (too many summands), 0+1+5+22+98+453+2031+8698 (again too many summands, although 1+5+22+98+453+2031+8698 is a legitimate expression) and 1+1+1+1+1+98+98 (too many 1's). We assign these integers as the card face values and add these together to generate a key for any non-flush 7-card hand. The largest non-flush key we see is 7825759, corresponding to any of the four quad-of-aces-full-of-kings.

Similarly, we assign the integer values 0, 1, 8 and 57 for spade, heart, diamond and club respectively. Any sum of exactly seven values taken from {0, 1, 8, 57} is unique among all such sums. We add up the suits of a 7-card hand to produce a "flush check" key and use this to look up the flush suit value if any. The largest flush key we see is 7999, corresponding to any of the four 7-card straight flushes with ace high, and the largest suit key is 399.

The extraordinarily lucky aspect of this is that the maximum non-flush key we have, 7825759, is a 23-bit integer (note 1<<23 = 8388608) and the largest suit key we find, 57*7 = 399, is a 9-bit integer (note 1<<9 = 512). If we bit-shift each card's flush check and add to this its non-flush face value to make a card key in advance, when we aggregate the resulting card keys over a given 7-card hand we generate a 23+9 = 32-bit integer key for the whole hand. This integer key can only just be accommodated by a standard 32-bit int type and yet still carries enough information to decide if we're looking at a flush and if not to then look up the rank of the hand.

How has the project evolved?

Taking v1.1 as the baseline, the sampled relative throughput of random SevenEval calls has changed as follows (a higher multiple is better).

Version  Relative throughput  Reason
1.1      1.00                 (baseline)
1.4.2    1.18                 Hashing.
1.6      1.50                 Remove branching from flush case.
1.7      1.53                 Reduce the hash table.
1.7.1    1.57                 Reduce the rank hash table.
1.8      1.93                 Index cards by bytes.
1.8.1    2.04                 Simplify flush key. Smaller offset table.
1.9      2.04                 Reduce the hash table.

At some point the cost of the sample for-loop iteration becomes relatively significant.

I want to contribute, how might I profile my change?

The project contains a profiler which might be used to help benchmark your changes.

g++ -c -std=c++11 -O3 Profiler.cpp
g++ -o profile Profiler.o
./profile

An optimisation starts to be compelling when it shows consistent gains of, say, 30% or more.

skpokereval's People

Contributors

kennethshackleton, vpatov


skpokereval's Issues

Tests Slow

Some of the tests run slowly on my machine; is this normal?

Running main() from gtest_main.cc
[==========] Running 19 tests from 2 test cases.
[----------] Global test environment set-up.
[----------] 18 tests from FiveEvalTest
[ RUN ] FiveEvalTest.HighCard
[ OK ] FiveEvalTest.HighCard (1 ms)
[ RUN ] FiveEvalTest.WorstPairBeatsBestHighCard
[ OK ] FiveEvalTest.WorstPairBeatsBestHighCard (0 ms)
[ RUN ] FiveEvalTest.Pair
[ OK ] FiveEvalTest.Pair (0 ms)
[ RUN ] FiveEvalTest.WorstTwoPairBeatsBestPair
[ OK ] FiveEvalTest.WorstTwoPairBeatsBestPair (0 ms)
[ RUN ] FiveEvalTest.TwoPair
[ OK ] FiveEvalTest.TwoPair (0 ms)
[ RUN ] FiveEvalTest.WorstTripleBeatsBestTwoPair
[ OK ] FiveEvalTest.WorstTripleBeatsBestTwoPair (0 ms)
[ RUN ] FiveEvalTest.Triple
[ OK ] FiveEvalTest.Triple (0 ms)
[ RUN ] FiveEvalTest.WorstStraightBeatsBestTriple
[ OK ] FiveEvalTest.WorstStraightBeatsBestTriple (0 ms)
[ RUN ] FiveEvalTest.Straight
[ OK ] FiveEvalTest.Straight (0 ms)
[ RUN ] FiveEvalTest.WorstFlushBeatsBestStraight
[ OK ] FiveEvalTest.WorstFlushBeatsBestStraight (0 ms)
[ RUN ] FiveEvalTest.Flush
[ OK ] FiveEvalTest.Flush (0 ms)
[ RUN ] FiveEvalTest.WorstFullHouseBeatsBestFlush
[ OK ] FiveEvalTest.WorstFullHouseBeatsBestFlush (0 ms)
[ RUN ] FiveEvalTest.FullHouse
[ OK ] FiveEvalTest.FullHouse (1 ms)
[ RUN ] FiveEvalTest.WorstQuadBeatsBestFullHouse
[ OK ] FiveEvalTest.WorstQuadBeatsBestFullHouse (0 ms)
[ RUN ] FiveEvalTest.Quad
[ OK ] FiveEvalTest.Quad (0 ms)
[ RUN ] FiveEvalTest.WorstStraightFlushBeatsBestQuad
[ OK ] FiveEvalTest.WorstStraightFlushBeatsBestQuad (0 ms)
[ RUN ] FiveEvalTest.StraightFlush
[ OK ] FiveEvalTest.StraightFlush (0 ms)
[ RUN ] FiveEvalTest.SevenCardHand
[ OK ] FiveEvalTest.SevenCardHand (116314 ms)
[----------] 18 tests from FiveEvalTest (116316 ms total)

[----------] 1 test from SevenEvalTest
[ RUN ] SevenEvalTest.CompareWithFiveEval
[ OK ] SevenEvalTest.CompareWithFiveEval (72018 ms)
[----------] 1 test from SevenEvalTest (72018 ms total)

[----------] Global test environment tear-down
[==========] 19 tests from 2 test cases ran. (188334 ms total)
[ PASSED ] 19 tests.

Should include rank ranges in the readme

Gaps in the rank values introduce entropy for deep-learning systems; I found these useful while building one-hot encodings. These are sparse ranges:

0 == error
High Card: 49 - 1277
Pair: 1296 - 4137
Two Pair: 4141 - 4995
Three of a Kind: 5004 - 5853
Straight: 5854 - 5863
Flush: 5864 - 7140 (flushes appear uniform across suits; two equal flushes score the same)
Full House: 7141 - 7296
Four of a Kind: 7297 - 7452
Straight Flush: 7453 - 7461
Royal Flush: 7462 (scored as a straight flush)

Total distinct ranks possible:
High Cards: 407
Pair: 1470
Two Pair: 763
Three of a Kind: 575
Straight: 10
Flush: 1277
Full House: 156
Four of a Kind: 156
Straight Flush: 10

6 card eval

Hi, my tests show this evaluator is blisteringly fast. Phenomenal work.

Any chance you can add a 6 card version?

The rank hash table could be further reduced (~7%)

Hi, first of all, thank you for sharing this awesome project. I was looking for a card-evaluation algorithm for a custom project in Java and yours is clearly excellent (my Java implementation evaluates a random hand in ~8ns, while generating a random hand takes ~16ns).

Since I wanted to understand how it worked rather than copy magic numbers from your source code, I had a look at the hash-table generator script (perfect_hash.py) and enhanced it so that it runs fast enough to brute-force which parameters produce the smallest hash table. The original script takes about an hour to compute a hash table; mine does the computation in roughly one second.

Currently, the best values I have are M = 2369371 and power = 9 (so side = 512), which produce a rank array of 38995 elements, down from 42077 in your source code (a ~7% decrease on that table).

FYI, here are the changes I made to the algorithm:

  • Rewrote it in Java; Python is far too slow for such a brute-force task.
  • Removed "diffused_keys": it is only used for a sanity check that is redundant with the check on "square".
  • Swapped the two indexes on "square" so that the inner loop iterates over the second index: at least in Java, iteration then touches elements that are adjacent in memory, which is more CPU-cache friendly.
  • Used arrays of 64-bit ints as bitsets to locate potential collisions faster: instead of iterating over every value of square/ranks, I maintain a bitset indicating where actual values are stored, so with bitwise operators I only visit indexes where I know both arrays hold something other than NOT_A_VALUE. This trick gave a 30x speedup (but unfortunately I don't know how to implement it in Python to provide a PR).

Moreover, there is a mismatch between perfect_hash.py and the current hash table: the script uses M = 31, which computes a hash table of size 44969 (the hash table used in an earlier version of the C code).

FiveEval large and invalid ranks

I see that FiveEval was moved from the main repo into tests, which suggests it is no longer supported, but I have a use case where I would like to evaluate the strength of five-card poker hands. I have noticed that when evaluating a large number of hands in a row, FiveEval sometimes starts returning about 50% of its results as large invalid scores. Do you have any idea what could be causing this? I have verified that the hands I pass to FiveEval are valid five-card poker hands (no duplicates).

dll with exported functions

Not so much an issue as a request..

Is there any way to get your code compiled into a DLL with the functions exported? I've written a GUI in Clarion :) which shuffles and deals cards for Texas Hold'em (for my own use, of course). I would like to simply call your function to compare the cards for two players: two pocket cards each plus the five board cards, so seven cards per player. A value is returned and the winner is the one with the highest/lowest value. Ideally I could also get the hand type, like flush, straight, etc. Is that doable at all?

thanks!
W

This is not an issue

Sorry for abusing your issues list but I don't know where to write.

I'd like to just thank you for your amazing tool.

I wrote something similar (but based on bitboards) and SKPokerEval has been really useful to fix a couple of subtle bugs.

Now that my scoring comparison matches SKPokerEval's, I feel confident I have fully validated my little tool. Thanks again and keep up the good work!
