eli-data-mining-group / pelitk
Pitt English Language Institute ToolKit
License: GNU General Public License v3.0
We should add unit tests and set up continuous integration via https://travis-ci.org/ so the test suite runs on pull requests and on commits to the master branch. As things stand, it is easy to break the code when committing directly to master; for example, the master branch recently contained a typo, which was fixed in d69abdf. Having tests and making changes via pull requests should help prevent that.
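A minimal Travis CI configuration along these lines could look like the sketch below (the Python versions and the choice of pytest as the runner are assumptions, not project decisions):

```yaml
language: python
python:
  - "3.6"
  - "3.7"
install:
  - pip install -e .
  - pip install pytest
script:
  - pytest
```

Once the repository is enabled on travis-ci.org, this would run the suite on every pull request and push.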
We should properly manage our package versioning and package up releases (e.g. 0.1.0, 0.2.0, etc.) and publish them to PyPI so people can pip install from there, instead of installing directly from our master branch, which may or may not be in a broken state. Each PyPI release should include either bug fixes or new features, and we should make sure whatever we publish actually works (adding more tests will help ensure this).
I think it would be reasonable to make a 0.1.0 release and publish it to PyPI after we resolve #3 and (hopefully) remove the bloated nltk dependency.
Just ran the lex.adv_guiraud help function and realized it needs updating.
Also, the 'PET' list should be named 'PVL' (PET Vocabulary List) to be consistent with the published work.
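For reference when updating the help text: Advanced Guiraud is commonly computed as the number of advanced types divided by the square root of the total token count. A minimal sketch (the function name and wordlist argument here are illustrative, not pelitk's actual signature):

```python
import math

def advanced_guiraud(tokens, advanced_words):
    """Return advanced types / sqrt(total tokens) for a tokenized text."""
    # Count each advanced word once (types, not tokens).
    advanced_types = {t for t in tokens if t in advanced_words}
    return len(advanced_types) / math.sqrt(len(tokens))

# advanced_guiraud(["the", "cat", "ubiquitous", "cat"], {"ubiquitous"})
# -> 1 / sqrt(4) = 0.5
```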
It would be nice to write our documentation entirely in the docstrings of the code, generate it with sphinx-autoapi or similar, and host it at https://readthedocs.org/, e.g. https://docs.scrapy.org/en/latest/
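For sphinx-autoapi, the Sphinx `conf.py` would only need a few lines; the paths and theme below are assumptions about the repo layout:

```python
# Minimal Sphinx conf.py sketch using sphinx-autoapi.
project = "pelitk"
extensions = ["autoapi.extension"]   # sphinx-autoapi's extension name
autoapi_dirs = ["../pelitk"]         # assumed location of the package source
html_theme = "sphinx_rtd_theme"      # the default Read the Docs theme
```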
In adv_guiraud, it seems we currently support three different text input types. I am sure this was once useful, but I am not sure it makes sense to support all three. The code is confusing to read, and it would be simpler and cleaner to support just one input type, perhaps a list of tokens. The user would then either call re_tokenize on raw text beforehand or pass their own tokenized list to the function.
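The simplified interface could be as strict as the following sketch; the validation message and the placeholder body are hypothetical, not pelitk's current behavior:

```python
import math

def adv_guiraud(tokens):
    # Hypothetical single-input version: accept only a pre-tokenized list
    # and push tokenization to the caller (e.g. via re_tokenize on raw text).
    if not isinstance(tokens, list):
        raise TypeError(
            "adv_guiraud expects a list of tokens; "
            "tokenize raw text first (e.g. with re_tokenize)"
        )
    # Placeholder body (plain Guiraud index) just to make the sketch runnable.
    return len(set(tokens)) / math.sqrt(len(tokens))
```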
In adv_guiraud(), we spell check against enable1 (plus 'i' and 'a'), but elsewhere we spell check by looking for lemmas in wordnet.synsets(). I don't recall a specific reason for doing it this way originally, and it seems a bit odd, because the same spellcheck=True argument behaves differently across functions. I think we should keep everything consistent. Any thoughts? Also, if we remove the wordnet.synsets() usage and switch everything to enable1, I think we can remove the nltk dependency.
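A single enable1-based check could look like the sketch below; the function names are illustrative, and the 'i'/'a' additions follow what adv_guiraud already does (ENABLE1 omits single-letter words):

```python
def load_wordlist(lines):
    # Build a lookup set from an ENABLE1-style word list (one word per line),
    # adding 'i' and 'a' since ENABLE1 omits single-letter words.
    words = {line.strip().lower() for line in lines if line.strip()}
    return words | {"i", "a"}

def is_real_word(token, wordlist):
    # Case-insensitive membership test against the word list.
    return token.lower() in wordlist
```

With one helper like this behind spellcheck=True everywhere, the wordnet.synsets() calls (and the nltk import) could go away.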