Comments (12)
There's still the branch with abstracting the input vocab #13. I haven't looked into that since the last time we discussed it, so probably two months ago? I doubt we'll figure that one out in a timely manner, unless you have a suggestion.
The only other thing in the pipeline is the post-training filtering approach I thought about the other day.
Storing frequencies for the tokens might be nice, too. But that'd entail adding a chunk to the finalfusion format.
I think the norms storage change is pretty important. The earlier we push 0.6.0 out, the better, since it reduces the number of embeddings in the wild that do not have norms.
I was thinking the same; for my project I need to train some models with norms.
from finalfrontier.
Ideally this would be a finalfusion-utils utility. At least I think it would be nice if it were decoupled from training, so that you can decide to use a different cut-off later. But at that point we don't have access to the counts anymore. Still, I could see a ff-filter-vocab utility to which you provide a list of tokens to retain. Then you could implement the same functionality with an external counts list and some UNIX-fu. The benefit of this approach is that people could also filter the vocab on other criteria.
Yes and yes. I am not sure if we want more chunks ;).
Since we have the information, I'm not sure why we should discard it. If someone downloads one of our pretrained models, they might be interested in the distribution of tokens in the training corpus and how often specific tokens showed up. Right now they'd only know that a token showed up at least min_count times.
I think having both would be nice. The ability to filter based on a word-list and the ability to filter based on known frequencies. Then it'd be possible to restrict the vocab based on the number of occurrences in the corpus and based on external resources.
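The counts-based half of that filtering could be sketched roughly like this. This is a minimal illustration, not finalfusion-utils code: the function name `retained_tokens` and the whitespace-separated `token count` input format are assumptions for the example.

```rust
use std::collections::HashSet;
use std::io::BufRead;

// Hypothetical helper: read a `token count` list and keep every token
// whose count reaches the cut-off. A ff-filter-vocab utility could then
// restrict the embedding vocabulary to the returned set.
fn retained_tokens<R: BufRead>(counts: R, cutoff: u64) -> HashSet<String> {
    counts
        .lines()
        .filter_map(Result::ok)
        .filter_map(|line| {
            let mut parts = line.split_whitespace();
            let token = parts.next()?.to_owned();
            let count: u64 = parts.next()?.parse().ok()?;
            if count >= cutoff {
                Some(token)
            } else {
                None
            }
        })
        .collect()
}
```

Filtering on other criteria (a stop-word list, a POS-filtered lexicon) would then just be a different way of producing the token set.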
I think we should postpone it for now. It will take some time to settle, even if we have an elegant solution.
I think I figured out a way that's not too hacky. If we include @NianhengWu's implementation of the directional skipgram, there might be time to polish my code to a good level. I should be able to push out the changes some time tomorrow; it's already compiling, I just want to go through all the places where things changed before letting someone look at it.
I've added the dirgram model into finalfrontier, testing it now.
I recall we briefly discussed whether we are going to change the default context window size from 5 to 10 since in most tests it performs better, but we didn't officially confirm it. So are we changing it?
Created a PR for this: #48
Ok, it seems that most things are done. Maybe we should do a small test run (one epoch of every model). There are no changes that affect the model scores; this is just to ensure that all the command-line handling changes are ok.
I also didn't test (and forgot) whether the current version of finalfusion-utils, which is still pre-norms chunk, handles the presence of the norms chunk correctly. If not, we should release a new finalfusion-utils beforehand. I think the only blocking change is 'unnormalizing' the vectors in the word2vec/GloVe writers.
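The 'unnormalizing' step is just scaling each vector back by its stored norm. A minimal sketch, assuming the embeddings are stored l2-normalized with the original norms kept in a separate chunk (the function is illustrative, not the actual writer code):

```rust
// Illustrative sketch, not the actual finalfusion writer code:
// since vectors are stored l2-normalized and their original norms live
// in a separate chunk, the word2vec/GloVe writers have to multiply each
// vector by its norm to restore the original magnitudes on export.
fn unnormalize(vector: &mut [f32], norm: f32) {
    for component in vector.iter_mut() {
        *component *= norm;
    }
}
```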
Doing the test right now :)
I trained the three models with 1 epoch and tested with ff-analogy, ff-compute-accuracy, and ff-similarity. All works fine, and the results also seem fine to me:
skipgram:
Thanks for testing!
Also confirmed that finalfusion 0.5 is happy with the changed file format.
Released in 684334f.