This is the README file for the Poker AI project. To start, learn the basic features of GitHub: commits, issues, pull requests, and branches. The first week will only be about gathering material. Try to collect some theses from other universities. We can't understand them anyway, because if we could, maybe we should just graduate.
This repository requires Python 3.11 or newer. Required package:
- bext
- Install the code from the repo
- Run the code
- Change the constant values in poker_ai/poker/play.py to configure the game
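As an illustration, the tunable constants in poker_ai/poker/play.py might look something like this. The names and values below are hypothetical examples, not the actual constants in the file, so check the file itself for the real ones.

```python
# Hypothetical configuration constants (illustrative names only;
# see poker_ai/poker/play.py for the real ones).
NUM_OPPONENTS = 3        # how many AI opponents to seat at the table
STARTING_CHIPS = 1000    # chip stack each player begins with
SMALL_BLIND = 5          # small blind size; big blind is typically double
AI_TYPE = "monte-carlo"  # which AI to play against, e.g. "monte-carlo", "mcts"
```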
Project roadmap:
- A simple warmup: learn to use GitHub
- Find some material
- Implement the basic UI and game structure
- Implement the AI:
- Implement the evaluate function using simple Monte-Carlo simulations
- Implement the Monte-Carlo simulation/rule-based and enumeration/rule-based AI
- Implement the Monte-Carlo tree search-based AI
- Implement supervised learning for opponent modelling for all of the AIs:
For the simulation/rule-based AI:
- Improve the simulations so that they are not just straight all-in simulations.
- Use opponent modelling to implement enumeration weighting, improving CALL_CONFIDENT and the simulation itself (2.5.2.4, 2.6)
- After enumeration weighting, use selective sampling to simulate only the cases whose weights make them likely to be relevant.
- Implement adaptive sampling based on the current game state, opponents' behaviors, or other relevant factors.
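The evaluate function, combined with the enumeration-weighting and selective-sampling ideas above, could be sketched like this. Everything here is an assumption for illustration: it plays a toy high-card showdown instead of ranking real poker hands, and `estimate_equity`, `opponent_weights`, and the card encoding are all made up.

```python
import random

def estimate_equity(hole, opponent_weights, trials=2000, seed=0):
    """Monte-Carlo equity estimate for a toy high-card showdown.

    `opponent_weights` maps candidate opponent hole cards (tuples of
    (rank, suit) cards) to a relative weight from the opponent model.
    Sampling proportionally to these weights is the enumeration
    weighting / selective sampling idea from the roadmap; a real
    version would also deal community cards and rank full hands.
    """
    rng = random.Random(seed)
    hands = list(opponent_weights)
    weights = [opponent_weights[h] for h in hands]
    wins = 0.0
    for _ in range(trials):
        opp = rng.choices(hands, weights=weights)[0]
        mine = max(rank for rank, _suit in hole)
        theirs = max(rank for rank, _suit in opp)
        if mine > theirs:
            wins += 1.0   # we win the toy showdown
        elif mine == theirs:
            wins += 0.5   # split pot on a tie
    return wins / trials
```

For example, ace-king against a model that puts all its weight on queen-jack returns equity 1.0 in this toy game, because the ace always wins the high-card comparison.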
For the enumeration/rule-based AI:
-
For the MCTS AI:
- Use the opponent model as the main selection policy for the AI.
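One way the opponent model can drive selection is a PUCT-style rule, where the model's probability for each action acts as a prior that biases exploration. The sketch below is an assumption, not the project's actual design: the `Node` class, `c_puct` value, priors, and payoffs are all made up for the demo.

```python
import math
import random

class Node:
    """One child edge in the search tree: an action we can take."""
    def __init__(self, prior):
        self.prior = prior        # opponent-model probability of this action
        self.visits = 0
        self.value_sum = 0.0

    @property
    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select(children, c_puct=1.5):
    """PUCT-style selection: the opponent model's prior biases exploration."""
    total = sum(node.visits for node in children.values())
    def score(item):
        _action, node = item
        explore = c_puct * node.prior * math.sqrt(total + 1) / (1 + node.visits)
        return node.value + explore
    return max(children.items(), key=score)[0]

# Toy demo: two actions whose priors come from a (made-up) opponent model.
rng = random.Random(0)
children = {"call": Node(prior=0.8), "raise": Node(prior=0.2)}
true_payoff = {"call": 0.6, "raise": 0.4}   # hidden values, for the demo only
for _ in range(200):
    action = select(children)
    node = children[action]
    node.visits += 1
    node.value_sum += true_payoff[action] + rng.uniform(-0.1, 0.1)

# With these priors and payoffs, "call" accumulates the most visits.
best = max(children, key=lambda a: children[a].visits)
```

In a full MCTS this `select` call would run at every tree level during the selection phase, with the priors refreshed from the opponent model at each node.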
- Implement supervised learning to calculate every constant that is relevant in the two AIs above. We will almost certainly use linear regression, because the target is a polynomial function.
- Implement a deep reinforcement learning AI
- Implement a performance evaluation
- Drink some water
- Touch the grass
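The linear-regression item above (fitting a relevant constant as a function of some game feature) can be sketched with closed-form least squares. The feature, target, and data points below are made up for illustration; for higher-degree polynomials you would add x^2, x^3, ... columns and solve the normal equations instead of this single-variable line fit.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form).

    Roadmap idea: tune an AI constant (y) as a function of a game
    feature (x); plain linear regression is enough when the target
    family is polynomial in the features.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

# Made-up data: some feature vs. the constant value we want to learn.
xs = [0.1, 0.2, 0.3, 0.4]
ys = [0.32, 0.44, 0.56, 0.68]   # lies exactly on y = 1.2*x + 0.2
a, b = fit_line(xs, ys)
```

Because the sample data lies exactly on a line, the fit recovers the slope 1.2 and intercept 0.2 (up to floating-point error).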
Try your best.