yumeng5 / spherical-text-embedding
[NeurIPS 2019] Spherical Text Embedding
License: Apache License 2.0
Thanks a lot for this work.
According to the function ReadWord
(https://github.com/yumeng5/Spherical-Text-Embedding/blob/master/jose.c#L60),
a word is defined as a sequence of characters separated by some delimiter (tab, space, etc.). Is it possible to customize this approach with subwords, as in fastText (https://github.com/facebookresearch/fastText/blob/master/src/dictionary.cc#L172), or with some other approach like BPE? SentencePiece could be a way (https://github.com/google/sentencepiece).
In this last case it would mean that we replace each word (or rather each BPE subword) with a unique index (BPE id), so we need an encoding phase and later a decoding phase.
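For context, here is a minimal, simplified sketch of the delimiter-based reading that ReadWord performs (the real jose.c also maps '\n' to a special boundary token, which this sketch omits). Since anything whitespace-separated becomes one vocabulary entry, one way to get subwords without touching this function is to pre-encode the corpus into space-separated subword ids (e.g. with SentencePiece's spm_encode --output_format=id) and run spm_decode afterwards for the decoding phase:

#include <stdio.h>

#define MAX_STRING 100

/* Read one whitespace-delimited token from fin into word.
 * Overlong tokens are silently truncated to MAX_STRING - 1 chars. */
void read_word(char *word, FILE *fin) {
  int len = 0, ch;
  while ((ch = fgetc(fin)) != EOF) {
    if (ch == ' ' || ch == '\t' || ch == '\n') {
      if (len > 0) break;   /* end of the current token */
      continue;             /* skip leading delimiters */
    }
    if (len < MAX_STRING - 1) word[len++] = (char)ch;
  }
  word[len] = 0;
}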
Hi yumeng, thanks for your interesting work!
After running your code, I seem to have got a special word in the generated vocabulary whose frequency is equal to the number of my documents. Does this mean '\n' or something else? Does this influence the training? Thank you in advance.
Adding a new word to the vocabulary can cause a segmentation fault as the vocabulary grows. The reason is that only the vocab array is reallocated; the vocab_hash array remains untouched.
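A hedged sketch of one possible fix, assuming word2vec-style globals (vocab, vocab_size, vocab_hash, vocab_hash_size; in the upstream word2vec code the hash table size is a fixed constant, so treating it as mutable is an assumption): when the vocabulary outgrows the table, double vocab_hash and re-insert every word instead of leaving the old table in place.

#include <stdlib.h>

struct vocab_word { long long cn; char *word; };

/* Names mirror word2vec-style globals (an assumption about jose.c): */
extern struct vocab_word *vocab;  /* grown with realloc as words are added */
extern long long vocab_size;
extern int *vocab_hash;
extern long long vocab_hash_size;

static long long word_hash(const char *word) {
  unsigned long long h = 0;
  while (*word) h = h * 257 + (unsigned char)*word++;
  return (long long)(h % (unsigned long long)vocab_hash_size);
}

/* Double the hash table and re-insert every word, so that lookups stay
 * valid after vocab has grown past what the old table can address. */
void grow_vocab_hash(void) {
  vocab_hash_size *= 2;
  vocab_hash = (int *)realloc(vocab_hash, vocab_hash_size * sizeof(int));
  for (long long a = 0; a < vocab_hash_size; a++) vocab_hash[a] = -1;
  for (long long a = 0; a < vocab_size; a++) {
    long long h = word_hash(vocab[a].word);
    while (vocab_hash[h] != -1) h = (h + 1) % vocab_hash_size;
    vocab_hash[h] = (int)a;
  }
}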
Hi,
Is this applicable to other datasets such as Reuters and WebKB?
If so, I wonder whether the parameter settings are the same.
If I need to set other parameters, I would appreciate it if you could let me know how to set them.
Thank you!
Very interesting work. Can you provide the pretrained word embeddings (as GloVe does), so that readers can use them without re-training?
Hi, thanks for publishing the code for this paper.
I'm trying to understand how lines 514 to 597 map onto the update rule you laid out in Figure (7) of the paper. Is there any further explanation you could offer as to how the variables match up? In particular, what do the variables f and h represent, and how are the cosine calculations being made?
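Not an answer from the authors, but for orientation: the paper optimizes embeddings on the unit sphere, and a generic Riemannian SGD step there looks like the sketch below (variable names are illustrative, not the actual f and h in jose.c). The Euclidean gradient is projected onto the tangent space at the current point, a step is taken, and the result is retracted back onto the sphere by renormalizing.

#include <math.h>

/* One Riemannian gradient step on the unit sphere: project the Euclidean
 * gradient onto the tangent space at vec, take a step, then retract back
 * onto the sphere by renormalizing. */
void sphere_sgd_step(float *vec, const float *grad, int dim, float lr) {
  float dot = 0.0f, norm = 0.0f;
  for (int i = 0; i < dim; i++) dot += grad[i] * vec[i];
  for (int i = 0; i < dim; i++) {
    float tangent = grad[i] - dot * vec[i];  /* remove radial component */
    vec[i] -= lr * tangent;
  }
  for (int i = 0; i < dim; i++) norm += vec[i] * vec[i];
  norm = sqrtf(norm);
  for (int i = 0; i < dim; i++) vec[i] /= norm;  /* retraction */
}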
In src/jose.c:
if ((i = ArgPos((char *) "-train", argc, argv)) > 0) strcpy(train_file, argv[i + 1]);
if ((i = ArgPos((char *) "-save-vocab", argc, argv)) > 0) strcpy(save_vocab_file, argv[i + 1]);
if ((i = ArgPos((char *) "-read-vocab", argc, argv)) > 0) strcpy(read_vocab_file, argv[i + 1]);
if ((i = ArgPos((char *) "-load-emb", argc, argv)) > 0) strcpy(load_emb_file, argv[i + 1]);
if ((i = ArgPos((char *) "-debug", argc, argv)) > 0) debug_mode = atoi(argv[i + 1]);
if ((i = ArgPos((char *) "-alpha", argc, argv)) > 0) alpha = atof(argv[i + 1]);
if ((i = ArgPos((char *) "-word-output", argc, argv)) > 0) strcpy(word_emb, argv[i + 1]);
if ((i = ArgPos((char *) "-context-output", argc, argv)) > 0) strcpy(context_emb, argv[i + 1]);
if ((i = ArgPos((char *) "-doc-output", argc, argv)) > 0) strcpy(doc_output, argv[i + 1]);
Usage of strcpy here with input from argv might overflow the destination buffer... perhaps switch to strncpy?
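A minimal sketch of the suggested hardening, assuming the destination buffers are MAX_STRING bytes as in word2vec-style code; snprintf is used instead of a bare strncpy because it always NUL-terminates:

#include <stdio.h>

#define MAX_STRING 100  /* assumed size of train_file etc. */

/* Bounded copy: truncates instead of overflowing the destination. */
static void copy_arg(char *dst, const char *src) {
  snprintf(dst, MAX_STRING, "%s", src);
}

/* e.g.: if ((i = ArgPos((char *)"-train", argc, argv)) > 0) copy_arg(train_file, argv[i + 1]); */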
Hi!
Would you be interested in packaging your code into Python bindings and making it public via pip? :) I would love to contribute, but first I want to know whether it's something you would like to pursue.
Cheers!
Hi,
I'm trying to train new embeddings with your code on a corpus of approximately 4B tokens, but the code gives me a segmentation fault right after reading the corpus and reporting the number of tokens. I'm using ~200G of RAM. Do I need more memory, or could it be another issue? For reference, word2vec and fastText trained just fine on this corpus.
Thanks in advance!
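One quick sanity check (all counts below are placeholders, not measurements from this corpus): jose.c writes word, context, and document embeddings, so resident memory is roughly the sum of those three tables, and with billions of tokens a 32-bit counter somewhere can also overflow. A back-of-the-envelope estimate:

#include <stdio.h>

int main(void) {
  long long vocab = 5000000;    /* placeholder vocabulary size */
  long long docs  = 100000000;  /* placeholder document count */
  long long dim   = 100;        /* embedding dimensionality */
  /* word + context tables over the vocabulary, one vector per document */
  long long bytes = (2 * vocab + docs) * dim * (long long)sizeof(float);
  printf("approx. embedding memory: %.1f GB\n", bytes / 1e9);
  return 0;
}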
Can this method produce sentence vectors for two semantically similar sentences that are closer to each other than those obtained with other methods?
And how do you get the sentence embedding?
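For what it's worth, the repo's evaluation (see get_avg_emb mentioned in the next issue) averages word vectors; a minimal sketch of that, with a renormalization step back onto the sphere (the renormalization is my assumption, since cosine similarity on unit vectors reduces to a dot product):

#include <math.h>

/* Average the vectors of a sentence's words and renormalize onto the unit
 * sphere. emb is a flat [vocab x dim] array; ids are the sentence's word
 * indices. */
void avg_sentence_emb(const float *emb, const int *ids, int n, int dim,
                      float *out) {
  for (int d = 0; d < dim; d++) out[d] = 0.0f;
  for (int i = 0; i < n; i++)
    for (int d = 0; d < dim; d++) out[d] += emb[(long long)ids[i] * dim + d];
  float norm = 0.0f;
  for (int d = 0; d < dim; d++) norm += out[d] * out[d];
  norm = sqrtf(norm);
  if (norm > 0) for (int d = 0; d < dim; d++) out[d] /= norm;
}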
Hi,
I ran run.sh and get_avg_emb to get the embedded 20newsgroups file, but when I ran clustering on the embeddings, it did not perform as well as the paper reports.
How can I get an embedded file with that performance?
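For unit-norm embeddings, clustering is commonly done with spherical k-means (maximizing dot product, i.e. cosine similarity); a sketch of one iteration is below. This is a common choice for embeddings on the sphere, not necessarily the exact evaluation script used for the paper.

#include <math.h>
#include <string.h>

/* One iteration of spherical k-means over unit-norm vectors: assign each
 * point to the centroid with the largest dot product, then set each
 * centroid to the renormalized mean of its points. */
void spherical_kmeans_iter(const float *x, int n, int dim,
                           float *cent, int k, int *assign) {
  for (int i = 0; i < n; i++) {
    int best = 0; float best_dot = -2.0f;
    for (int c = 0; c < k; c++) {
      float dot = 0.0f;
      for (int d = 0; d < dim; d++)
        dot += x[(long long)i * dim + d] * cent[(long long)c * dim + d];
      if (dot > best_dot) { best_dot = dot; best = c; }
    }
    assign[i] = best;
  }
  memset(cent, 0, (size_t)k * dim * sizeof(float));
  for (int i = 0; i < n; i++)
    for (int d = 0; d < dim; d++)
      cent[(long long)assign[i] * dim + d] += x[(long long)i * dim + d];
  for (int c = 0; c < k; c++) {
    float norm = 0.0f;
    for (int d = 0; d < dim; d++)
      norm += cent[(long long)c * dim + d] * cent[(long long)c * dim + d];
    norm = sqrtf(norm);
    if (norm > 0)
      for (int d = 0; d < dim; d++) cent[(long long)c * dim + d] /= norm;
  }
}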
Hi there,
Is there any suggestion for solving the OOV problem? Should I use a random value for it?
I didn't find a tag like '<unk>' in the vocab list, but there is an 'unk' word.
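One simple fallback, offered as an assumption rather than the authors' recommendation: draw a random Gaussian vector and renormalize it onto the unit sphere, so the OOV vector at least lives in the same space as the trained embeddings.

#include <math.h>
#include <stdlib.h>

/* Fill v with a random point on the unit sphere (Box-Muller for
 * approximately Gaussian components, then renormalize). */
void random_unit_vec(float *v, int dim) {
  float norm = 0.0f;
  for (int d = 0; d < dim; d++) {
    float u1 = (rand() + 1.0f) / ((float)RAND_MAX + 2.0f);
    float u2 = (rand() + 1.0f) / ((float)RAND_MAX + 2.0f);
    v[d] = sqrtf(-2.0f * logf(u1)) * cosf(6.28318530718f * u2);
    norm += v[d] * v[d];
  }
  norm = sqrtf(norm);
  for (int d = 0; d < dim; d++) v[d] /= norm;
}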
I noticed that you calculate sentence embeddings as an average of the individual word vectors when performing clustering, etc. Did you happen to evaluate whether SIF or uSIF would be advantageous over plain averaging?
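For reference, the SIF weighting the question refers to (Arora et al., 2017) replaces the uniform average with weights a / (a + p(w)), where p(w) is the word's unigram probability and a is around 1e-3; the full method then removes the projection onto the corpus's first principal component, which this sketch omits:

/* Weighted average of word vectors with SIF weights a / (a + p(w)).
 * emb is a flat [vocab x dim] array, p holds unigram probabilities. */
void sif_sentence_emb(const float *emb, const int *ids, const float *p,
                      int n, int dim, float a, float *out) {
  for (int d = 0; d < dim; d++) out[d] = 0.0f;
  for (int i = 0; i < n; i++) {
    float w = a / (a + p[ids[i]]);
    for (int d = 0; d < dim; d++)
      out[d] += w * emb[(long long)ids[i] * dim + d];
  }
  for (int d = 0; d < dim; d++) out[d] /= (float)n;
}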
The source code in src/jose.c looks pretty similar to the original word2vec implementation by Mikolov. Since the word2vec sources are distributed under Apache-2.0 terms, it would be great to satisfy the redistribution conditions.
Hello, thanks for your interesting paper. I used your released embeddings in our standard NER architecture to see if they are useful for an NER downstream task. You can see the comparison against other approaches on CoNLL-03. As the table shows, F1 score increases with more dimensions to the point of being comparable to vanilla GloVe. It would be interesting to see if this trend continues with dimensions higher than 300D and if at higher dimensionality it beats vanilla GloVe.
Would you be interested in also releasing spherical embeddings trained over the same data / parameters with 400D and 500D? If so, I'll test and report the numbers back.