hangman's Introduction

Hangman

This repository explores an automated, n-gram-based strategy to play hangman on an unseen corpus of words.

My Hangman algorithm is implemented as follows:

  1. For all words in the training corpus, count the number of occurrences of each n-gram for n = 1, ..., 5, where an n-gram is a substring of n consecutive letters.
  2. At the beginning of each game, compute the percentage of letters that are vowels for each training word whose length is less than or equal to that of the word we are guessing. Then compute the 75th percentile of these values and use it as a cutoff: if the percentage of known vowels in the word we are trying to guess ever exceeds this threshold, we remove all remaining unguessed vowels from the set of candidate letters.
  3. Now, for each round of each game, we guess the next letter by calculating the expected number of occurrences of each unguessed letter and selecting the letter with the maximum such value. We calculate this expectation as the sum of the probabilities of the letter occurring at each unfilled space, and each of these individual probabilities as follows (a Python sketch of all three steps follows this list):
     a. First, we find the most “informative” substring of consecutive known letters immediately surrounding the empty space. We define “informative” using two criteria:
        1. Length: longer substrings of known letters provide more information about the distribution of the letter in the empty space. We select the longest possible substring up to a length of five letters (including the empty space).
        2. Centeredness: if there are multiple substrings of this maximal length, we select the one that is most centered around the empty space, i.e., the one with the most similar number of known letters on either side of the empty space.
     b. Once we have selected this conditioning substring, we consider the subset of n-grams that match the known letters and have an unguessed letter in the empty space. If there are no known letters immediately adjacent to the empty space, we fall back to the unigrams for each unguessed letter. Thus, there is always exactly one associated n-gram for each unguessed letter.
     c. Then, for each unguessed letter, we calculate its probability of occurring in the blank space as the number of occurrences of its specific n-gram in the training corpus divided by the total number of occurrences of all n-grams in this subset.
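
The following is a minimal Python sketch of steps 1–3, not the repository's actual notebook code: the function names, the pattern representation (with "_" marking unknown letters), and the fallback behaviour when no matching n-gram exists in the corpus are assumptions made for illustration.

```python
import collections
import statistics
import string

VOWELS = set("aeiou")

def build_ngram_counts(words, max_n=5):
    """Step 1: count every substring of length 1..max_n across the training corpus."""
    counts = collections.Counter()
    for w in words:
        for n in range(1, max_n + 1):
            for i in range(len(w) - n + 1):
                counts[w[i:i + n]] += 1
    return counts

def vowel_cutoff(words, target_len):
    """Step 2: 75th percentile of the vowel fraction over corpus words of length <= target_len."""
    fractions = [sum(ch in VOWELS for ch in w) / len(w) for w in words if len(w) <= target_len]
    # statistics.quantiles with n=4 returns the three quartiles; the last one is the 75th percentile.
    return statistics.quantiles(fractions, n=4, method="inclusive")[-1]

def best_window(pattern, i, max_n=5):
    """Step 3a: longest window (up to max_n letters, including the blank at index i) whose
    other positions are all known letters; ties are broken by how centered the blank is.
    Returns (start, end) indices, or None if no known letter is adjacent to the blank."""
    for length in range(max_n, 1, -1):
        candidates = []
        for start in range(max(0, i - length + 1), min(i, len(pattern) - length) + 1):
            window = pattern[start:start + length]
            # every position in the window except the blank itself must be a known letter
            if all(ch != "_" for j, ch in enumerate(window) if start + j != i):
                balance = abs((i - start) - (start + length - 1 - i))
                candidates.append((balance, start))
        if candidates:
            _, start = min(candidates)
            return start, start + length
    return None

def guess_letter(pattern, guessed, ngram_counts):
    """Steps 3b-3c: pick the unguessed letter with the highest expected number of occurrences,
    where the expectation sums, over blank positions, the letter's conditional probability."""
    unguessed = [c for c in string.ascii_lowercase if c not in guessed]
    expected = collections.Counter()
    for i, ch in enumerate(pattern):
        if ch != "_":
            continue
        window = best_window(pattern, i)
        if window is None:
            # no adjacent known letter: fall back to unigram counts
            counts = {c: ngram_counts.get(c, 0) for c in unguessed}
        else:
            start, end = window
            counts = {c: ngram_counts.get(pattern[start:i] + c + pattern[i + 1:end], 0)
                      for c in unguessed}
        total = sum(counts.values())
        if total == 0:
            continue  # assumption: skip positions with no matching n-grams in the corpus
        for c in unguessed:
            expected[c] += counts[c] / total
    if not expected:
        # assumption: if no position gave any evidence, fall back to raw unigram frequency
        return max(unguessed, key=lambda c: ngram_counts.get(c, 0))
    return expected.most_common(1)[0][0]
```

As a usage sketch, build_ngram_counts(corpus) is computed once, vowel_cutoff(corpus, len(pattern)) gives the step-2 threshold for a given game, and guess_letter(pattern, guessed, counts) is called each round; once the fraction of revealed vowels exceeds the threshold, the unguessed vowels can be added to a local copy of guessed so they are no longer considered.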

hangman's Issues

Over-estimated

Hi, thank you very much for your strategy and code. Unfortunately, your score is over-estimated, i.e., higher than the correct value. In your notebook, the line "guessed.update(set(vowels))" (cell "In [21]") mutates the "guessed" set inside the game object. So even if you never actually guess the remaining vowels, they are added to the game object's "guessed" set as soon as the vowel ratio exceeds the threshold, and this change is not reverted if the vowel ratio later drops. You should therefore copy the "guessed" value when you retrieve it from the game object.
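
A minimal sketch of the fix the issue describes, assuming a hypothetical Game class in place of the notebook's actual game object (the class and attribute names are illustrative assumptions, not the repository's API):

```python
# Hypothetical stand-in for the notebook's game object (illustrative only).
class Game:
    def __init__(self):
        self.guessed = set()   # letters the player has actually guessed

game = Game()
vowels = set("aeiou")

# Buggy pattern described in the issue: taking a reference and updating it
# in place pollutes game.guessed with vowels that were never guessed.
#   guessed = game.guessed
#   guessed.update(set(vowels))

# Fix: work on a copy, so the game object's record of guesses stays accurate.
guessed = set(game.guessed)
guessed.update(vowels)

assert game.guessed == set()   # the game object is unchanged
```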
