smilli / kneser-ney
Kneser-Ney implementation in Python
I changed pad_symbol to left_pad_symbol and right_pad_symbol, and added start_pad_symbol in KneserNeyLM, but there is still another error. We seem to be calling the log function with a non-positive value, but why is it non-positive?
code:

from nltk.corpus import gutenberg
from nltk.util import ngrams
from kneser_ney import KneserNeyLM

gut_ngrams = (
    ngram for sent in gutenberg.sents() for ngram in ngrams(
        sent, 3, pad_left=True, pad_right=True,
        right_pad_symbol='<s>', left_pad_symbol='<s>'))
lm = KneserNeyLM(3, gut_ngrams, start_pad_symbol='<s>', end_pad_symbol='<s>')
lm.score_sent(('This', 'is', 'a', 'sample', 'sentence', '.'))
lm.generate_sentence()
ValueError Traceback (most recent call last)
in ()
2 ngram for sent in gutenberg.sents() for ngram in ngrams(sent, 3,
3 pad_left=True, pad_right=True, right_pad_symbol='<s>',left_pad_symbol='<s>'))
----> 4 lm = KneserNeyLM(3, gut_ngrams,start_pad_symbol='<s>', end_pad_symbol='<s>')
5 lm.score_sent(('This', 'is', 'a', 'sample', 'sentence', '.'))
6 lm.generate_sentence()
in __init__(self, highest_order, ngrams, start_pad_symbol, end_pad_symbol)
21 self.start_pad_symbol = start_pad_symbol
22 self.end_pad_symbol = end_pad_symbol
---> 23 self.lm = self.train(ngrams)
24
25 def train(self, ngrams):
in train(self, ngrams)
30 """
31 kgram_counts = self._calc_adj_counts(Counter(ngrams))
---> 32 probs = self._calc_probs(kgram_counts)
33 return probs
34
in _calc_probs(self, orders)
62 backoffs = []
63 for order in orders[:-1]:
---> 64 backoff = self._calc_order_backoff_probs(order)
65 backoffs.append(backoff)
66 orders[-1] = self._calc_unigram_probs(orders[-1])
in _calc_order_backoff_probs(self, order)
89 for key in order.keys():
90 prefix = key[:-1]
---> 91 order[key] = math.log(order[key]/prefix_sums[prefix])
92 for prefix in backoffs.keys():
93 backoffs[prefix] = math.log(backoffs[prefix]/prefix_sums[prefix])
ValueError: math domain error
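For what it's worth, math.log raises exactly this "math domain error" for any argument that is zero or negative. In _calc_order_backoff_probs the argument is order[key] / prefix_sums[prefix], and prefix_sums is a sum of counts, so the ratio can only go non-positive if a discounted count itself is <= 0. A minimal sketch of how that can happen (the numbers below are made up; the real counts depend on the corpus and the discount used):

```python
import math

# math.log raises ValueError("math domain error") for any non-positive input:
try:
    math.log(0.0)
except ValueError as e:
    print(e)  # math domain error

# With absolute discounting, an adjusted count that equals the discount d
# is driven to exactly zero after subtraction, and log(0 / anything) blows up.
d = 0.75
adjusted_count = 0.75          # hypothetical k-gram with a single continuation
discounted = adjusted_count - d
print(discounted)              # 0.0
```

So the negative-or-zero value is most likely a discounted count hitting zero (or going negative), possibly because using the same '<s>' symbol for both pads creates pad-only k-grams with degenerate counts.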
Environment:
Python 3.5.2
nltk 3.2.1
Steps to reproduce:
python3 example.py
error message:
Traceback (most recent call last):
File "example.py", line 8, in <module>
lm = KneserNeyLM(3, gut_ngrams, end_pad_symbol='<s>')
File "/Users/username/kneser-ney/kneser_ney.py", line 24, in __init__
self.lm = self.train(ngrams)
File "/Users/username/kneser-ney/kneser_ney.py", line 33, in train
kgram_counts = self._calc_adj_counts(Counter(ngrams))
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/collections/__init__.py", line 530, in __init__
self.update(*args, **kwds)
File "/usr/local/Cellar/python3/3.5.2_3/Frameworks/Python.framework/Versions/3.5/lib/python3.5/collections/__init__.py", line 617, in update
_count_elements(self, iterable)
File "example.py", line 7, in <genexpr>
pad_left=True, pad_right=True, pad_symbol='<s>'))
TypeError: ngrams() got an unexpected keyword argument 'pad_symbol'
If this language model is trained on one corpus (e.g. Gutenberg) and applied to another (e.g. Brown), it is very likely to encounter out-of-vocabulary words or unseen ngrams. Then this happens:
TypeError: unsupported operand type(s) for +=: 'float' and 'NoneType'
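The crash presumably comes from a dictionary lookup returning None for an unseen ngram and that None then being added to a float. One defensive pattern is to fall back to a floor log-probability (or an UNK estimate) instead of propagating None. The helper below is a hypothetical sketch, assuming the model's probabilities are stored in a dict mapping ngram tuples to log-probabilities; it is not the library's actual API:

```python
import math

def safe_logprob(lm_probs, ngram, floor=-float('inf')):
    """Look up a log-probability, returning a floor value instead of
    None for unseen ngrams (hypothetical helper; lm_probs is assumed
    to be a dict of ngram tuple -> log-probability)."""
    p = lm_probs.get(ngram)
    return floor if p is None else p

# Toy model: only one trigram was ever seen.
probs = {('a', 'b', 'c'): math.log(0.5)}
print(safe_logprob(probs, ('a', 'b', 'c')))                      # -0.693...
print(safe_logprob(probs, ('x', 'y', 'z'), floor=math.log(1e-10)))
```

A more principled fix would reserve probability mass for an UNK token at training time, but the floor keeps scoring from raising a TypeError.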
Maybe this is trivial and I am wrong.
From the paper, I think the count of a k-gram "word" should be its number of occurrences in the corpus data, not the number of distinct higher-order gram types it appears in. If that is the case, this line is the problem:
Line 58 in 2740fba
new_order[suffix] += last_order[ngram]
But even this is troublesome. For example, ('?', '<s>', '<s>') in the last_order will add its suffix ('<s>', '<s>') to the new_order, yet I think two pad symbols are not valid in a bigram model.
Therefore, I think a better way to get the k-gram counts is to compute each order independently and directly from the corpus data.
Accordingly, the KneserNeyLM class definition, which takes only the highest_order ngrams as its argument, and the use of gut_ngrams in example.py would need to be revised as well.
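The suggestion above, counting each order directly from the corpus rather than deriving lower orders from higher-order types, can be sketched like this (plain Python so it runs standalone; the padding scheme of k-1 symbols per side is an assumption, chosen so a bigram model never sees two pad symbols in one bigram):

```python
from collections import Counter

def count_orders(sents, highest_order, pad='<s>'):
    """Count k-grams for k = 1..highest_order directly from the corpus.
    Each sentence is padded with k-1 pad symbols per side, so an order-k
    model never produces a k-gram made entirely of pad symbols."""
    orders = []
    for k in range(1, highest_order + 1):
        counts = Counter()
        for sent in sents:
            padded = [pad] * (k - 1) + list(sent) + [pad] * (k - 1)
            for i in range(len(padded) - k + 1):
                counts[tuple(padded[i:i + k])] += 1
        orders.append(counts)
    return orders

sents = [['the', 'cat', 'sat'], ['the', 'dog', 'sat']]
unigrams, bigrams, trigrams = count_orders(sents, 3)
print(bigrams[('the', 'cat')])   # 1
print(unigrams[('the',)])        # 2
print(bigrams[('<s>', '<s>')])   # 0
```

This keeps raw corpus counts at every order; Kneser-Ney's continuation counts would then be computed from these rather than substituted for them.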
You pad the incoming sequence (https://github.com/smilli/kneser-ney/blob/master/kneser_ney.py#L147), but then use the original (unpadded) tuple for scoring.
Just a question about the log-probability calculation in the probability interpolation.
https://github.com/smilli/kneser-ney/blob/master/kneser_ney.py#L132
order[kgram] += last_order[suffix] + backoff[prefix]
The original interpolation is defined on normal probabilities, not log probabilities, so the log-based interpolation above doesn't seem to make sense: adding log values multiplies the underlying probabilities instead of summing them.
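To make the concern concrete: interpolated Kneser-Ney combines probabilities as P(w|h) = P_high(w|h) + lambda(h) * P_low(w|h'). If the stored values are log-probabilities, the sum has to happen after exponentiating (or via log-sum-exp); simply adding the three log terms computes the product instead. A minimal sketch, with made-up numbers:

```python
import math

def interpolate(log_p_high, log_backoff, log_p_low):
    """Interpolation done in probability space,
    P = P_high + lambda * P_low, returned as a log-probability."""
    return math.log(math.exp(log_p_high)
                    + math.exp(log_backoff) * math.exp(log_p_low))

log_p_high, log_backoff, log_p_low = math.log(0.2), math.log(0.5), math.log(0.1)

# Adding logs (as in order[kgram] += last_order[suffix] + backoff[prefix])
# multiplies the probabilities: 0.2 * 0.5 * 0.1 = 0.01
wrong = log_p_high + log_backoff + log_p_low
print(round(math.exp(wrong), 4))    # 0.01

# Probability-space interpolation gives 0.2 + 0.5 * 0.1 = 0.25
print(round(math.exp(interpolate(log_p_high, log_backoff, log_p_low)), 4))  # 0.25
```

The direct exp/log round-trip can underflow for very long contexts; a log-sum-exp formulation avoids that, but the shape of the computation is the same.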