Comments (6)
Yes, "precision at k = 1" (P@1) is the performance of the model on the word translation task. The task simply represents the accuracy of the model to find properly the translation of a given source word. The evaluation is based on a test dictionary.
Hi @glample, excuse me please. After 30 iterations, the results below do not change anymore, and I cannot figure out why. Please help me, thank you!
Here is the command:
python supervised.py --src_lang zh --tgt_lang latin --src_emb ~/vecs/zhwiki-latin_15-100.zh.vec --tgt_emb ~/vecs/zhwiki-latin_15-100.latin.vec --dico_train ~/MUSE/data/crosslingual/dictionaries/zh-latin.5000-6500.txt --n_iters 120
nn
path:data/crosslingual/dictionaries/zh-latin.5000-6500.txt
INFO - 01/23/18 07:10:58 - 0:18:22 - Found 4582 pairs of words in the dictionary (4317 unique). 18956 other pairs contained at least one unknown word (7211 in lang1, 15965 in lang2)
INFO - 01/23/18 07:10:59 - 0:18:22 - 4317 source words - nn - Precision at k = 1: 0.532777
INFO - 01/23/18 07:10:59 - 0:18:22 - 4317 source words - nn - Precision at k = 5: 0.648599
INFO - 01/23/18 07:10:59 - 0:18:22 - 4317 source words - nn - Precision at k = 10: 0.694927
csls_knn_10
path:data/crosslingual/dictionaries/zh-latin.5000-6500.txt
INFO - 01/23/18 07:10:59 - 0:18:22 - Found 4582 pairs of words in the dictionary (4317 unique). 18956 other pairs contained at least one unknown word (7211 in lang1, 15965 in lang2)
INFO - 01/23/18 07:11:01 - 0:18:25 - 4317 source words - csls_knn_10 - Precision at k = 1: 0.532777
INFO - 01/23/18 07:11:01 - 0:18:25 - 4317 source words - csls_knn_10 - Precision at k = 5: 0.602270
INFO - 01/23/18 07:11:01 - 0:18:25 - 4317 source words - csls_knn_10 - Precision at k = 10: 0.764420
INFO - 01/23/18 07:11:06 - 0:18:30 - Building the train dictionary ...
INFO - 01/23/18 07:11:06 - 0:18:30 - New train dictionary of 5461 pairs.
INFO - 01/23/18 07:11:06 - 0:18:30 - Mean cosine (nn method, S2T build, 10000 max size): 0.45140
INFO - 01/23/18 07:11:20 - 0:18:44 - Building the train dictionary ...
INFO - 01/23/18 07:11:20 - 0:18:44 - New train dictionary of 4685 pairs.
INFO - 01/23/18 07:11:20 - 0:18:44 - Mean cosine (csls_knn_10 method, S2T build, 10000 max size): 0.45934
INFO - 01/23/18 07:11:20 - 0:18:44 - log:{"n_iter": 30, "precision_at_1-nn": 0.5327773917072041, "precision_at_5-nn": 0.6485985638174658, "precision_at_10-nn": 0.6949270326615705, "precision_at_1-csls_knn_10": 0.5327773917072041, "precision_at_5-csls_knn_10": 0.6022700949733611, "precision_at_10-csls_knn_10": 0.7644197359277276, "mean_cosine-nn-S2T-10000": 0.45140331983566284, "mean_cosine-csls_knn_10-S2T-10000": 0.45934170484542847}
INFO - 01/23/18 07:11:20 - 0:18:44 - * Best value for "mean_cosine-csls_knn_10-S2T-10000": 0.45934
INFO - 01/23/18 07:11:20 - 0:18:44 - * Saving the mapping to /home/jack/codes/MUSE/dumped/1n0hf1tyx9/best_mapping.t7 ...
INFO - 01/23/18 07:11:20 - 0:18:44 - End of refinement iteration 30.
Hi,
It is possible (and we have observed this in many cases) that the algorithm converges after a given number of iterations. This happens when the dictionary built at an iteration is the same as in the previous one: Procrustes then optimizes the same objective, and the results won't change.
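For intuition, the refinement step solves orthogonal Procrustes in closed form, so identical dictionaries give identical inputs and hence an identical mapping. A minimal sketch of that step (illustrative, not MUSE's actual code):

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal Procrustes: the orthogonal W minimizing ||X @ W - Y||_F,
    where row i of X and Y holds the source/target vectors of the i-th
    dictionary pair."""
    # Closed-form solution via SVD: same (X, Y) -> same SVD -> same W.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt
```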
These results are pretty low though. Are you sure about the quality of your embeddings and of your dictionaries?
Thank you, @glample
The embeddings were trained with fastText. For the monolingual embeddings, the top-n most similar words returned for a given word look good.
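For reference, this kind of sanity check can be run on the .vec files with gensim (assuming gensim is installed; the query word is just an example):

```python
from gensim.models import KeyedVectors

# fastText .vec files use the word2vec text format.
vectors = KeyedVectors.load_word2vec_format("zhwiki-latin_15-100.zh.vec")
for word, score in vectors.most_similar("中国", topn=10):
    print(f"{word}\t{score:.3f}")
```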
The corpus used to train the fastText word embeddings is 1.1 GB for Chinese and 80 MB for the other language (Inner Mongolian words represented in Latin characters).
The dictionary was built manually and contains 18,440 word pairs such as "啊 ARAB".
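Since the log above reports thousands of pairs containing at least one unknown word, it may be worth checking how much of the dictionary both embedding vocabularies actually cover; a small sketch, assuming one "source target" pair per line:

```python
def dictionary_coverage(dico_path, src_vocab, tgt_vocab):
    """Count pairs fully covered by both vocabularies vs. pairs with
    at least one unknown word, mirroring the MUSE log line above."""
    found, unknown = 0, 0
    with open(dico_path, encoding="utf-8") as f:
        for line in f:
            src_word, tgt_word = line.rstrip("\n").split(None, 1)
            if src_word in src_vocab and tgt_word in tgt_vocab:
                found += 1
            else:
                unknown += 1
    return found, unknown
```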
So, since the Mongolian words are written in Latin characters, it is effectively as if I were training Chinese-English cross-lingual word embeddings with MUSE.
Could the problem be the small size of the corpus used to train the word embeddings?
Yes, 80 MB seems a bit small to train embeddings. It might be enough, though, if your monolingual corpora are of really good quality and not noisy at all.
Can you try to see if you have the same issue on another language with a bigger monolingual corpus and better embeddings?
Also, a better way to start would be to align your embeddings in a supervised way and see whether a good mapping exists, for instance as sketched below. If a good mapping exists, the unsupervised approach should be able to find it; if not, it is unlikely to work.
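Concretely, such a supervised sanity check can reuse the flags from the command earlier in the thread (the paths below are the ones posted above; adjust to your setup) and then look at the reported P@1:

```
python supervised.py --src_lang zh --tgt_lang latin \
    --src_emb ~/vecs/zhwiki-latin_15-100.zh.vec \
    --tgt_emb ~/vecs/zhwiki-latin_15-100.latin.vec \
    --dico_train ~/MUSE/data/crosslingual/dictionaries/zh-latin.5000-6500.txt
```

If even the supervised P@1 stays low, the embeddings themselves are probably the bottleneck.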
I ran python unsupervised.py --src_lang en --tgt_lang zh --src_emb data/wiki.en.vec --tgt_emb data/wiki.zh.vec --n_refinement 5, and it turns out that:
"n_iter": 4,
"src_ws_monolingual_scores": 0.6510844479463435,
"src_EN_SEMEVAL17": 0.7215995249434355,
"src_EN_RG-65": 0.7974341920962535,
"src_EN_MEN-TR-3k": 0.7636758545973762,
"src_EN_YP-130": 0.5332791408572926,
"src_EN_VERB-143": 0.39730727587681697,
"src_EN_SIMLEX-999": 0.3822891920029773,
"src_EN_RW-STANFORD": 0.5080076589673553,
"src_EN_MTurk-771": 0.6689253094490287,
"src_EN_MC-30": 0.8122844560731877,
"src_EN_WS-353-REL": 0.6820189571624744,
"src_EN_MTurk-287": 0.6773484753662096,
"src_EN_WS-353-SIM": 0.7811195898733946,
"src_EN_WS-353-ALL": 0.7388081960366618,
"precision_at_1-nn": 0.0,
"precision_at_5-nn": 0.06666666666666667,
"precision_at_10-nn": 0.06666666666666667,
"precision_at_1-csls_knn_10": 0.0,
"precision_at_5-csls_knn_10": 0.0,
"precision_at_10-csls_knn_10": 0.0,
"mean_cosine-nn-S2T-10000": 0.5639050602912903,
"mean_cosine-csls_knn_10-S2T-10000": 0.6014427542686462
I don't understand some of the other outputs, such as src_ws_monolingual_scores, src_EN_SEMEVAL17, or mean_cosine. Could anyone give me some tips? Thanks in advance :)
What I know is that src_EN_SEMEVAL17 is a filename, which may stand for the score on that file.
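For what it's worth, the src_EN_* entries look like monolingual word-similarity benchmarks (SEMEVAL17, WS-353, SIMLEX-999, etc.), which are conventionally scored by the Spearman correlation between human similarity ratings and embedding cosine similarities; mean_cosine appears to be the average cosine similarity over the dictionary built between the two spaces, which the logs above use to select the best mapping. A sketch of the conventional word-similarity scoring (illustrative, not MUSE's code):

```python
import numpy as np
from scipy.stats import spearmanr

def word_similarity_score(pairs, human_scores, vectors):
    """pairs: list of (word1, word2); human_scores: gold similarity ratings;
    vectors: dict mapping word -> embedding. Returns the Spearman rho."""
    model_scores = []
    for w1, w2 in pairs:
        v1, v2 = vectors[w1], vectors[w2]
        cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        model_scores.append(cos)
    return spearmanr(model_scores, human_scores).correlation
```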