StarSpace

StarSpace is a general-purpose neural model for efficient learning of entity embeddings for solving a wide variety of problems:

  • Learning word, sentence or document level embeddings.
  • Information retrieval: ranking of sets of entities/documents or objects, e.g. ranking web documents.
  • Text classification, or any other labeling task.
  • Metric/similarity learning, e.g. learning sentence or document similarity.
  • Content-based or Collaborative filtering-based Recommendation, e.g. recommending music or videos.
  • Embedding graphs, e.g. multi-relational graphs such as Freebase.
  • Image classification, ranking or retrieval (e.g. by using existing ResNet features).

In the general case, it learns to represent objects of different types into a common vectorial embedding space, hence the star ('*', wildcard) and space in the name, and in that space compares them against each other. It learns to rank a set of entities/documents or objects given a query entity/document or object, which is not necessarily the same type as the items in the set.

See the paper for more details on how it works.

News

  • StarSpace is available in Python: check out the Building StarSpace section for details.
  • Support for reading from compressed files: check out the Compressed File section for more details.
  • New license and patents: StarSpace is now under the MIT license. Check out LICENSE for details.
  • StarSpace training is much faster now with mini-batch training (set the batch size with the "-batchSize" argument). Details in #190.
  • We added support for real-valued input and label weights: check out the File Format and ImageSpace sections for more details on how to use weights in inputs and labels.

Requirements

StarSpace builds on modern Mac OS, Windows, and Linux distributions. Since it uses C++11 features, it requires a compiler with good C++11 support. These include:

  • (gcc-4.6.3 or newer), (Visual Studio 2015), or (clang-3.3 or newer)

Compilation is carried out using a Makefile, so you will need to have a working make.

You need to install the Boost library and specify its path in the makefile in order to run StarSpace. Basically:

$wget https://dl.bintray.com/boostorg/release/1.63.0/source/boost_1_63_0.zip
$unzip boost_1_63_0.zip
$sudo mv boost_1_63_0 /usr/local/bin

Optional: if one wishes to run the unit tests in the src directory, Google Test is required and its path needs to be specified in 'TEST_INCLUDES' in the makefile.

Building StarSpace

In order to build StarSpace on Mac OS or Linux, use the following:

git clone https://github.com/facebookresearch/Starspace.git
cd Starspace
make

In order to build StarSpace on Windows, open the following in Visual Studio:

MVS\StarSpace.sln

In order to build the StarSpace Python wrapper, please refer to the README inside the python directory.

File Format

StarSpace takes input files of the following format. Each line is one input example; in the simplest case the input has k words, and each of the labels 1..r is a single word:

word_1 word_2 ... word_k __label__1 ... __label__r

This file format is the same as in fastText. By default, labels are assumed to be words prefixed by the string __label__; the prefix string can be set by the "-label" argument.
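As a minimal Python sketch (not part of StarSpace itself), a training file in this format could be produced like so; the prefix must match whatever is passed to "-label":

```python
# Illustrative helper: join k input words and r labels into one
# fastText-style training line, "word_1 ... word_k __label__1 ... __label__r".
def to_line(words, labels, prefix="__label__"):
    return " ".join(words + [prefix + l for l in labels])

examples = [
    (["restaurant", "has", "great", "food"], ["yum", "restaurant"]),
    (["i", "love", "cats"], ["pets"]),
]

with open("data.txt", "w", encoding="utf-8") as f:
    for words, labels in examples:
        f.write(to_line(words, labels) + "\n")
```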

In order to learn the embeddings, do:

$./starspace train -trainFile data.txt -model modelSaveFile

where data.txt is a training file containing utf-8 encoded text. At the end of optimization the program will save two files: modelSaveFile and modelSaveFile.tsv. modelSaveFile.tsv is a standard tsv format file containing the entity embedding vectors, one per line. modelSaveFile is a binary file containing the parameters of the model along with the dictionary and all hyperparameters. The binary file can be used later to compute entity embedding vectors or to run evaluation tasks.

In the more general case, each label also consists of words:

word_1 word_2 ... word_k <tab> label_1_word_1 label_1_word_2 ... <tab> label_r_word_1 .. 

Embedding vectors will be learned for each word and label to group similar inputs and labels together.

In order to learn the embeddings in the more general case where each label consists of words, one needs to specify the -fileFormat flag to be 'labelDoc', as follows:

$./starspace train -trainFile data.txt -model modelSaveFile -fileFormat labelDoc

We also extend the file format to support real-valued weights (in both the input and label space) by setting the argument "-useWeight" to true (default is false). If "-useWeight" is true, weights are supported in the following format:

word_1:wt_1 word_2:wt_2 ... word_k:wt_k __label__1:lwt_1 ...    __label__r:lwt_r

e.g.,

dog:0.1 cat:0.5 ...

The default weight is 1 for any word / label that does not contain weights.
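The weighted format above can be sketched in Python (an illustration, not StarSpace code):

```python
# Format (word, weight) and (label, weight) pairs as "token:weight" entries
# for use with "-useWeight true". Tokens without a weight default to 1.
def weighted_line(word_weights, label_weights, prefix="__label__"):
    toks = ["%s:%g" % (w, wt) for w, wt in word_weights]
    toks += ["%s%s:%g" % (prefix, l, wt) for l, wt in label_weights]
    return " ".join(toks)

print(weighted_line([("dog", 0.1), ("cat", 0.5)], [("pet", 1.0)]))
# dog:0.1 cat:0.5 __label__pet:1
```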

Compressed File

StarSpace can also read from compressed files (currently only gzip is supported). You can skip this part if you do not plan to use compressed input files. To run StarSpace with compressed input, first compile StarSpace using makefile_compress instead of makefile:

make -f makefile_compress

Then in the train config, specify

./starspace -trainFile input -compressFile gzip -numGzFile 10 ...

It assumes that there are input files with names

input00.gz, input01.gz, ..., input09.gz 

and reads from those files.

To prepare data in this format, one can use the standard 'split' command to first split the input file into multiple chunks, then compress them. For instance:

split -d -l xxx original_input.txt input && gzip input*
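As an alternative to split + gzip, the same chunk layout can be produced with Python's gzip module. This is only a sketch of the naming scheme (input00.gz ... input09.gz for "-trainFile input -numGzFile 10"); the chunking strategy here (round-robin) is an arbitrary choice:

```python
import gzip

# Write `lines` into num_files gzip chunks named <stem>00.gz ... <stem>NN.gz,
# matching the file names StarSpace expects for compressed input.
def write_gz_chunks(lines, num_files=10, stem="input"):
    chunks = [[] for _ in range(num_files)]
    for i, line in enumerate(lines):
        chunks[i % num_files].append(line)
    for j, chunk in enumerate(chunks):
        with gzip.open("%s%02d.gz" % (stem, j), "wt", encoding="utf-8") as f:
            f.writelines(l + "\n" for l in chunk)

write_gz_chunks(["line %d" % i for i in range(100)])
```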

Training Mode

To explain how StarSpace works in different training modes, we refer to the input of a particular example as the "LHS" (left-hand side) and the label as the "RHS" (right-hand side). StarSpace supports the following training modes (the default is the first one):

  • trainMode = 0:
    • Each example contains both input and labels.
    • If fileFormat is 'fastText' then the labels are individual features/words (e.g. with a label prefix, see file format above).
    • Use case: classification tasks, see tagspace example below.
    • If fileFormat is 'labelDoc' then the labels are bags of features, and one of those bags is selected (see file format, above).
    • Use case: retrieval/search tasks, each example consists of a query followed by a set of relevant documents.
  • trainMode = 1:
    • Each example contains a collection of labels. At training time, one label from the collection is randomly picked as the RHS, and the rest of the labels in the collection become the LHS.
    • Use case: content-based or collaborative filtering-based recommendation, see pagespace example below.
  • trainMode = 2:
    • Each example contains a collection of labels. At training time, one label from the collection is randomly picked as the LHS, and the rest of the labels in the collection become the RHS.
    • Use case: learning a mapping from an object to a set of objects of which it is a part, e.g. sentence (from within document) to document.
  • trainMode = 3:
    • Each example contains a collection of labels. At training time, two labels from the collection are randomly picked as the LHS and RHS.
    • Use case: learn pairwise similarity from collections of similar objects, e.g. sentence similarity.
  • trainMode = 4:
    • Each example contains two labels. At training time, the first label is picked as the LHS and the second label as the RHS.
    • Use case: learning from multi-relational graphs.
  • trainMode = 5:
    • Each example contains only input. At training time, it generates multiple training examples: each feature from the input is picked as the RHS, and the other features surrounding it (up to distance ws) are picked as the LHS.
    • Use case: learn word embeddings in an unsupervised way.
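As a rough illustration of trainMode 5 (not the actual C++ implementation), generating (LHS, RHS) pairs from a tokenized sentence with window size ws might look like:

```python
# For each token, take it as the RHS and the surrounding tokens within
# distance ws (on each side) as the LHS, mimicking word2vec-style training.
def word_pairs(tokens, ws=5):
    pairs = []
    for i, rhs in enumerate(tokens):
        lhs = tokens[max(0, i - ws):i] + tokens[i + 1:i + 1 + ws]
        pairs.append((lhs, rhs))
    return pairs

print(word_pairs(["the", "cat", "sat"], ws=1))
# [(['cat'], 'the'), (['the', 'sat'], 'cat'), (['cat'], 'sat')]
```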

Example use cases

TagSpace word / tag embeddings

Setting: Learning the mapping from a short text to relevant hashtags, e.g. as in this paper. This is a classical classification setting.

Model: the mapping learnt goes from bags of words to bags of tags, by learning an embedding of both. For instance, the input “restaurant has great food <tab> #restaurant <tab> #yum” will be translated into the following graph. (Nodes in the graph are entities for which embeddings will be learned, and edges in the graph are relationships between the entities).

(Figure: word-tag graph)

Input file format:

restaurant has great food #yum #restaurant

Command:

$./starspace train -trainFile input.txt -model tagspace -label '#'

Example scripts:

We apply the model to the problem of text classification on AG's News Topic Classification Dataset. Here our tags are news article categories, and we use the hits@1 metric to measure classification accuracy. This example script, located under the examples directory, downloads the data and runs the StarSpace model on it:

$bash examples/classification_ag_news.sh

PageSpace user / page embeddings

Setting: On Facebook, users can fan (follow) public pages they're interested in. When a user fans a page, the user receives everything the page posts on Facebook. We want to learn page embeddings based on users' fanning data, and use them to recommend new pages users might be interested in fanning (following). This setting can be generalized to other recommendation problems: for instance, embedding and recommending movies to users based on movies watched in the past; embedding and recommending restaurants to users based on the restaurants they checked into in the past, etc.

Model: Users are represented as the bag of pages that they follow (fan). That is, we do not learn a direct embedding of users; instead, each user has an embedding which is the average embedding of the pages fanned by the user. Pages are embedded directly (with a unique feature in the dictionary). This setup can work better when the number of users is larger than the number of pages, and the number of pages fanned by each user is small on average (i.e. the edges between users and pages are relatively sparse). It also generalizes to new users without retraining. However, the more traditional recommendation setting can also be used.

(Figure: user-page graph)

Each user is represented by the bag-of-pages fanned by the user, and each training example is a single user.

Input file format:

page_1 page_2 ... page_M

At training time, at each step for each example (user), one random page is selected as the label and the rest of the bag of pages is selected as input. This can be achieved by setting the flag -trainMode to 1.
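The random label selection described above can be sketched in Python (an illustration of the sampling, not the C++ implementation):

```python
import random

# trainMode 1 sketch: for a user represented as a bag of pages, one random
# page becomes the label (RHS) and the remaining pages become the input (LHS).
def split_example(pages, rng=random):
    i = rng.randrange(len(pages))
    return pages[:i] + pages[i + 1:], pages[i]  # (LHS, RHS)

lhs, rhs = split_example(["page_1", "page_2", "page_3"])
```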

Command:

$./starspace train -trainFile input.txt -model pagespace -label 'page' -trainMode 1

Example scripts:

To provide an example script, we choose the Last.FM (http://www.lastfm.com) dataset from HetRec 2011 and model it similarly to the PageSpace setting: each user is represented by the bag of artists the user has listened to.

$bash examples/recomm_user_artists.sh

DocSpace document recommendation

Setting: We want to embed and recommend web documents for users based on their historical likes/click data.

Model: Each document is represented by a bag-of-words of the document. Each user is represented as a (bag of) the documents that they liked/clicked in the past. At training time, at each step one random document is selected as the label and the rest of the bag of documents are selected as input.

(Figure: user-document graph)

Input file format:

roger federer loses <tab> venus williams wins <tab> world series ended
i love cats <tab> funny lolcat links <tab> how to be a petsitter  

Each line is a user, and the tab-separated documents are ones that they liked. So the first user likes sports, and the second is interested in pets in this case.

Command:

./starspace train -trainFile input.txt -model docspace -trainMode 1 -fileFormat labelDoc

GraphSpace: Link Prediction in Knowledge Bases

Setting: Learning the mapping between entities and relations in Freebase. In Freebase, data comes in the format

(head_entity, relation_type, tail_entity)

Performing link prediction can be formalized as filling in incomplete triples like

(head_entity, relation_type, ?) or (?, relation_type, tail_entity)

Model: We learn the embeddings of all entities and relation types. For each relation_type, we learn two embeddings: one for predicting tail_entity given head_entity, one for predicting head_entity given tail_entity.

(Figure: multi-relational graph)

Example scripts:

This example script downloads the Freebase15k data from here and runs the StarSpace model on it:

$bash examples/multi_relation_example.sh

SentenceSpace: Learning Sentence Embeddings

Setting: Learning the mapping between sentences. Given the embedding of one sentence, one can find semantically similar/relevant sentences.

Model: Each example is a collection of sentences which are semantically related. Two are picked at random using trainMode 3: one as the input and one as the label; other sentences are picked as random negatives. One easy way to obtain semantically related sentences without labeling is to consider all sentences in the same document to be related, and then train on those documents.

(Figure: sentence collections)

Example scripts:

This example script downloads data where each example is a set of sentences from the same Wikipedia page and runs the StarSpace model on it:

$bash examples/wikipedia_sentence_matching.sh

To run the full experiment on Wikipedia Sentence Matching presented in this paper, use this script (warning: it takes a long time to download data and train the model):

$bash examples/wikipedia_sentence_matching_full.sh

ArticleSpace: Learning Sentence and Article Embeddings

Setting: Learning the mapping between sentences and articles. Given the embedding of one sentence, one can find the most relevant articles.

Model: Each example is an article which contains multiple sentences. At training time, one sentence is picked at random as the input, the remaining sentences in the article become the label, and other articles are picked as random negatives (trainMode 2).

Example scripts:

This example script downloads data where each example is a Wikipedia article and runs the StarSpace model on it:

$bash examples/wikipedia_article_search.sh

To run the full experiment on Wikipedia Article Search presented in this paper, use this script (warning: it takes a long time to download data and train the model):

$bash examples/wikipedia_article_search_full.sh

ImageSpace: Learning Image and Label Embeddings

With the most recent update, StarSpace can also be used to learn joint embeddings with images and other entities. For instance, one can use ResNet features (the last layer of a pre-trained ResNet model) to represent an image, and embed images with other entities (words, hashtags, etc.). Just like other entities in StarSpace, images can be either on the input or the label side, depending on your task.

Here we give an example using CIFAR-10 to illustrate how we train images with other entities (in this example, image classes): we train a ResNeXt model on CIFAR-10 which achieves 96.34% accuracy on the test dataset, and use the last layer of ResNeXt as the features for each image. We embed the 10 image classes together with the image features in the same space using StarSpace. For an example image from class 1 with last layer (0.8, 0.5, ..., 1.2), we convert it to the following format:

d0:0.8  d1:0.5   ...    d1023:1.2   __label__1
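This conversion can be sketched in Python (an illustration of the format, not the script used in the example):

```python
# Turn a feature vector (e.g. the last ResNeXt layer) and a class id into
# the "d0:v0 d1:v1 ... __label__c" line shown above. The "d<i>" feature
# names mirror the example; any unique names would work.
def image_line(features, label_id):
    toks = ["d%d:%g" % (i, v) for i, v in enumerate(features)]
    toks.append("__label__%d" % label_id)
    return " ".join(toks)

print(image_line([0.8, 0.5, 1.2], 1))
# d0:0.8 d1:0.5 d2:1.2 __label__1
```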

After converting train and test examples of CIFAR-10 to the above format, we ran this example script:

$bash examples/image_feature_example_cifar10.sh

and achieved 96.40% accuracy on an average of 5 runs.

Full Documentation of Parameters

Run "starspace train ..."  or "starspace test ..."

The following arguments are mandatory for train: 
  -trainFile       training file path
  -model           output model file path

The following arguments are mandatory for test: 
  -testFile        test file path
  -model           model file path

The following arguments for the dictionary are optional:
  -minCount        minimal number of word occurrences [1]
  -minCountLabel   minimal number of label occurrences [1]
  -ngrams          max length of word ngram [1]
  -bucket          number of buckets [2000000]
  -label           labels prefix [__label__]. See file format section.

The following arguments for training are optional:
  -initModel       if not empty, it loads a previously trained model from -initModel and carries on training.
  -trainMode       takes value in [0, 1, 2, 3, 4, 5], see Training Mode Section. [0]
  -fileFormat      currently support 'fastText' and 'labelDoc', see File Format Section. [fastText]
  -validationFile  validation file path
  -validationPatience    number of validation steps with no improvement before we stop training [10]
  -saveEveryEpoch  save intermediate models after each epoch [false]
  -saveTempModel   save intermediate models after each epoch with a unique name including the epoch number [false]
  -lr              learning rate [0.01]
  -dim             size of embedding vectors [100]
  -epoch           number of epochs [5]
  -maxTrainTime    max train time (secs) [8640000]
  -negSearchLimit  number of negatives sampled [50]
  -maxNegSamples   max number of negatives in a batch update [10]
  -loss            loss function {hinge, softmax} [hinge]
  -margin          margin parameter in hinge loss. It's only effective if hinge loss is used. [0.05]
  -similarity      takes value in [cosine, dot]. Whether to use cosine or dot product as similarity function in  hinge loss.
                   It's only effective if hinge loss is used. [cosine]
  -p               normalization parameter: we normalize the sum of embeddings by dividing by Size^p; when p=1, it's equivalent to taking the average of embeddings; when p=0, it's equivalent to taking the sum of embeddings. [0.5]
  -adagrad         whether to use adagrad in training [1]
  -shareEmb        whether to use the same embedding matrix for LHS and RHS. [1]
  -ws              only used in trainMode 5, the size of the context window for word level training. [5]
  -dropoutLHS      dropout probability for LHS features. [0]
  -dropoutRHS      dropout probability for RHS features. [0]
  -initRandSd      initial values of embeddings are randomly generated from normal distribution with mean=0, standard deviation=initRandSd. [0.001]
  -trainWord       whether to train word level together with other tasks (for multi-tasking). [0]
  -wordWeight      if trainWord is true, wordWeight specifies example weight for word level training examples. [0.5]
  -batchSize       size of mini batch in training. [5]
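The -p normalization above can be illustrated with a short Python sketch (our reading of the parameter description, not the actual C++ code):

```python
# Combine k embedding vectors by summing them and dividing by k^p.
# p=1 -> average, p=0 -> plain sum, p=0.5 (the default) sits in between.
def normalize(vectors, p=0.5):
    k = len(vectors)
    dim = len(vectors[0])
    sums = [sum(v[d] for v in vectors) for d in range(dim)]
    return [x / (k ** p) for x in sums]

print(normalize([[1.0, 2.0], [3.0, 4.0]], p=1))  # [2.0, 3.0]
```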

The following arguments for test are optional:
  -basedoc         file path for a set of labels to compare against the true label. It is required when -fileFormat='labelDoc'.
                   In the case -fileFormat='fastText' and -basedoc is not provided, we compare the true label with all other labels in the dictionary.
  -predictionFile  file path for saving predictions. If not empty, the top K predictions for each example will be saved.
  -K               if -predictionFile is not empty, top K predictions for each example will be saved.
  -excludeLHS      exclude elements in the LHS from predictions

The following arguments are optional:
  -normalizeText   whether to run basic text preprocessing on input files [0]
  -useWeight       whether input file contains weights [0]
  -verbose         verbosity level [0]
  -debug           whether it's in debug mode [0]
  -thread          number of threads [10]

Note: We use the same implementation of word n-grams for words as in fastText. When "-ngrams" is set to be larger than 1, a hashing map of size specified by the "-bucket" argument is used for n-grams; when "-ngrams" is set to 1, no hash map is used, and the dictionary contains all words within the minCount and minCountLabel constraints.

Utility Functions

We also provide a few utility functions for StarSpace:

Show Predictions for Queries

A simple way to check the quality of a trained embedding model is to inspect the predictions when typing in an input. To build and use this utility function, run the following commands:

make query_predict
./query_predict <model> k [basedocs]

where "<model>" specifies a trained StarSpace model and the optional k specifies how many of the top predictions to show (top ranked first). "basedocs" points to the file of documents to rank; see also the argument of the same name in the main starspace binary above. If "basedocs" is not provided, the labels in the dictionary are used instead.

After loading the model, it reads a line of entities (can be either a single word or a sentence / document), and outputs the predictions.

Nearest Neighbor Queries

Another simple way to check the quality of a trained embedding model is to inspect nearest neighbors of entities. To build and use this utility function, run the following commands:

make query_nn
./query_nn <model> [k]

where "<model>" specifies a trained StarSpace model and the optional k (default value is 5) specifies how many nearest neighbors to search for.

After loading the model, it reads a line of entities (either a single word or a sentence / document), and outputs the nearest entities in embedding space.

Print Ngrams

As the n-grams used in the model are not saved in the tsv file, we also provide a separate function to output n-gram embeddings from the model. To use it, run the following commands:

make print_ngrams
./print_ngrams <model>

where "<model>" specifies a trained StarSpace model with argument -ngrams > 1.

Print Sentence / Document Embedding

Sometimes it is useful to print out sentence / document embeddings from a trained model. To do so, run the following commands:

make embed_doc
./embed_doc <model> [filename]

where "<model>" specifies a trained StarSpace model. If filename is provided, it reads each sentence / document from file, line by line, and outputs vector embeddings accordingly. If the filename is not provided, it reads each sentence / document from stdin.

Citation

Please cite the arXiv paper if you use StarSpace in your work:

@article{wu2017starspace,
  title={StarSpace: Embed All The Things!},
  author = {{Wu}, L. and {Fisch}, A. and {Chopra}, S. and {Adams}, K. and {Bordes}, A. and {Weston}, J.},
  journal={arXiv preprint arXiv:{1709.03856}},
  year={2017}
}

Contact


starspace's Issues

Multi-task learning examples

Hi @ledw ,

The paper mentions that multi-task learning is doable.
How do we implement that? Is there any example script showing how to do that?

Thanks

'Assertion `example.RHSTokens.size() > 1' failed' at test time and trainMode = 5

hello,

whenever testing with trainMode = 5 on an unlabelled corpus with default parameters, I get the following error in void InternDataHandler::convert():

starspace: src/data.cpp:90: virtual void starspace::InternDataHandler::convert(const starspace::ParseResults&, starspace::ParseResults&) const: Assertion `example.RHSTokens.size() > 1' failed.
Aborted

as a result, there's no predictionFile output.

Minor: No sanity check for train/test size == 0

Instead, you get other somewhat less clear error messages.

Empty trainFile => Empty vocabulary. Try a smaller -minCount.

Empty testFile => starspace: src/starspace.cpp:311: void starspace::StarSpace::evaluate(): Assertion numPerThread > 0' failed.`

Empty basedoc => hit@1: 1 hit@10: 1 hit@20: 1 hit@50: 1 mean ranks : 1

Floating point exception in trainMode=5

As I understand it, in trainMode=5 labels are unnecessary, and StarSpace learns word vectors like word2vec does.

But if there are no labels in the train corpus, StarSpace crashes with a division by zero here: https://github.com/facebookresearch/Starspace/blob/master/src/data.cpp#L208

Here is the simplest example:

$ echo "hello world" >train.txt
$ ./starspace train -trainFile train.txt -model model -trainMode 5
Arguments: 
lr: 0.01
dim: 10
epoch: 5
maxTrainTime: 8640000
saveEveryEpoch: 0
loss: hinge
margin: 0.05
similarity: cosine
maxNegSamples: 10
negSearchLimit: 50
thread: 10
minCount: 1
minCountLabel: 1
label: __label__
ngrams: 1
bucket: 2000000
adagrad: 1
trainMode: 5
fileFormat: fastText
normalizeText: 1
dropoutLHS: 0
dropoutRHS: 0
Start to initialize starspace model.
Build dict from input file : train.txt
Read 0M words
Number of words in dictionary:  2
Number of labels in dictionary: 0
Loading data from file : train.txt
Total number of examples loaded : 1
Training epoch 0: 0.01 0.002
Floating point exception

Feature Request: Training mode Docs Improvements

Considering that StarSpace has several training modes, solves different tasks, and has different dataset file formats and model types to deal with, I have tried to summarize/recap everything in the google sheet here

This table looks like the following

(screenshot of the summary table, 2017-09-28)

Of course there could be errors due to my understanding, but something like this could help a lot in choosing the right model/dataset/training mode.

Hopefully it helps!

Is this model suitable for Chinese?

My corpus is organized like docs in Wikipedia. In an article, sentences are divided by a tab character and words are separated by a space. I trained my model with trainMode 3, but when I test, it gives information like the following:

hit@1: 0 hit@10: 0 hit@20: 0 hit@50: 0 mean ranks : 7.42448e+06 Total examples : 1000

Does this mean that my model is poor, or is there something else that matters, such as my corpus being organized wrongly?

Evaluation of unsupervised embedding (trainMode 5)

Which possibilities exist (out of the box) for evaluating/testing the unsupervised model (unsupervised word embeddings, trainMode 5)? When I try to use the unsupervised model as input for test -testFile (supervised classification), I only receive the error "Test is undefined in trainMode5. Please use other trainMode for testing".

It would have been nice if we could use supervised classification (trainMode 0) as surrogate for testing the performance of the unsupervised learner (trainMode 5). That is, provide the trainMode 5 model (representation of the input) as input for a supervised classifier performing some task related to the domain the data is from.

Is this somehow possible, or something you plan to implement? It would then be possible to measure the performance of StarSpace (and the potential improvements it brings) more directly against other supervised classifiers using standard performance metrics.

Confusing parameters when calling ./starspace train

Hello,

I pass -normalizeText 1 and -adagrad 1 to ./starspace train, but in the list of arguments on the stdout, they tend to show values 0 and 0 respectively.
Do you have any idea what could be the reason why the args change?

Thanks,
Oliver

Non-descriptive error message for trainMode=0 with a trainFile containing zero labels

I've noticed that when you try to train StarSpace with trainMode = 0 on a (FastText formatted) trainFile that contains zero labels, the error message is of the form:

ERROR: File 'train.txt' is empty.

The file isn't actually empty (if it was, an exception would be thrown earlier and look like: "ERROR: Empty file.") and StarSpace knows that because it still prints out the number of words read in the dictionary:

Number of words in dictionary:  6
Number of labels in dictionary: 0

I suggest the error in src/data.cpp  at line 65 should be updated to something of the form:

ERROR: File 'train.txt' contains zero labels. Try trainMode = 5 or changing the -label argument.

Thanks!

Example:

echo "this is an example sentence" > ex_unlabeled.txt
echo "this is another sentence" >> ex_unlabeled.txt

./starspace train -trainMode 0 -trainFile ex_unlabeled.txt -model ex_unlabeled_model

Output:

Arguments: 
...
trainMode: 0
fileFormat: fastText
...
Start to initialize starspace model.
Build dict from input file : ex_unlabeled.txt
Read 0M words
Number of words in dictionary:  6
Number of labels in dictionary: 0
Loading data from file : ex_unlabeled.txt
Total number of examples loaded : 0
ERROR: File 'ex_unlabeled.txt' is empty.

Save embedding

Is it possible to save the embedding as a text file and then use it with other technologies (e.g python) ?

Any way to reduce data file size?

My input space has ~1000 features, and my training data files are topping 5GB just for 200,000 samples. Since the training data are provided as the -trainFile argument, rather than read from stdin, I'm not sure how I can get StarSpace to read compressed files. Is there anything I'm missing, there? Any support for file compression or more compact formats?

Compiling issue

This issue is thrown when I compile Starspace with a macos having a g++:

  • Apple LLVM version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn)

The trace of the error is:
MacBook-Pro-di-Alberto:StarSpace albertomariopirovano$ make
g++ -pthread -std=gnu++11 -O3 -funroll-loops -g -c src/data.cpp -o data.o
In file included from src/data.cpp:12:
src/utils/utils.h:138:42: error: variable 'partitions' with variably modified type cannot be captured in a lambda expression
threads.emplace_back([i, f, &fname, &partitions] {
^
src/utils/utils.h:120:9: note: 'partitions' declared here
off_t partitions[numThreads + 1];
^
src/utils/utils.h:142:18: error: variable 'partitions' with variably modified type cannot be captured in a lambda expression
ifs2.seekg(partitions[i]);
^
src/utils/utils.h:120:9: note: 'partitions' declared here
off_t partitions[numThreads + 1];
^
src/utils/utils.h:144:28: error: variable 'partitions' with variably modified type cannot be captured in a lambda expression
while (tellg(ifs2) < partitions[i + 1] && getline(ifs2, line)) {
^
src/utils/utils.h:120:9: note: 'partitions' declared here
off_t partitions[numThreads + 1];
^
src/utils/utils.h:142:18: error: variable 'partitions' with variably modified type cannot be captured in a lambda expression
ifs2.seekg(partitions[i]);
^
src/data.cpp:42:3: note: in instantiation of function template specialization 'starspace::foreach_line<std::__1::basic_string,
(lambda at src/data.cpp:44:5)>' requested here
foreach_line(
^
src/utils/utils.h:120:9: note: 'partitions' declared here
off_t partitions[numThreads + 1];
^
src/utils/utils.h:144:28: error: variable 'partitions' with variably modified type cannot be captured in a lambda expression
while (tellg(ifs2) < partitions[i + 1] && getline(ifs2, line)) {
^
src/utils/utils.h:120:9: note: 'partitions' declared here
off_t partitions[numThreads + 1];
^
5 errors generated.
make: *** [data.o] Error 1

What is the error about?

SentenceSpace: Train error in wikipedia_sentence_matching.sh

I get an error while training the SentenceSpace example wikipedia_sentence_matching.sh. The line reported was Epoch 0 Train error : 0.02799942 +++--- ☃ for every epoch starting from 0.

Full logs:

# bash examples/wikipedia_sentence_matching.sh &> /root/starspace_train.logs &
[1] 368
# tail -f /root/starspace_train.logs
190100K .......... .......... .......... .......... .......... 90% 64.2M 0s
211050K .......... .......... .......... .......... ........  100% 83.2M=3.5s

2017-09-27 09:49:30 (58.2 MB/s) - '/root/starspace/data/wikipedia_train.tar.gz' saved [216165342/216165342]

tar: Ignoring unknown extended header keyword 'LIBARCHIVE.creationtime'
tar: Ignoring unknown extended header keyword 'SCHILY.dev'
tar: Ignoring unknown extended header keyword 'SCHILY.ino'
tar: Ignoring unknown extended header keyword 'SCHILY.nlink'
wikipedia_train250k.txt
--2017-09-27 09:49:35--  https://s3.amazonaws.com/fair-data/starspace/wikipedia_devtst.tgz
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.227.219
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.227.219|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 34419187 (33M) [application/gzip]
Saving to: '/root/starspace/data/wikipedia_test.tar.gz'

     0K .......... .......... .......... .......... ..........  0% 18.3M 2s
 33600K .......... ..                                         100%  218M=0.5s

2017-09-27 09:49:36 (68.1 MB/s) - '/root/starspace/data/wikipedia_test.tar.gz' saved [34419187/34419187]

tar: Ignoring unknown extended header keyword 'LIBARCHIVE.creationtime'
tar: Ignoring unknown extended header keyword 'SCHILY.dev'
tar: Ignoring unknown extended header keyword 'SCHILY.ino'
tar: Ignoring unknown extended header keyword 'SCHILY.nlink'
wikipedia_dev10k.txt
tar: Ignoring unknown extended header keyword 'LIBARCHIVE.creationtime'
tar: Ignoring unknown extended header keyword 'SCHILY.dev'
tar: Ignoring unknown extended header keyword 'SCHILY.ino'
tar: Ignoring unknown extended header keyword 'SCHILY.nlink'
wikipedia_dev_basedocs.txt
tar: Ignoring unknown extended header keyword 'LIBARCHIVE.creationtime'
tar: Ignoring unknown extended header keyword 'SCHILY.dev'
tar: Ignoring unknown extended header keyword 'SCHILY.ino'
tar: Ignoring unknown extended header keyword 'SCHILY.nlink'
wikipedia_test_basedocs.txt
tar: Ignoring unknown extended header keyword 'LIBARCHIVE.creationtime'
tar: Ignoring unknown extended header keyword 'SCHILY.dev'
tar: Ignoring unknown extended header keyword 'SCHILY.ino'
tar: Ignoring unknown extended header keyword 'SCHILY.nlink'
wikipedia_test10k.txt
Compiling StarSpace
make: Nothing to be done for 'opt'.
Start to train on wikipedia data (small training set example version, not the same as the paper which takes longer to run on a bigger training set):
Arguments: 
lr: 0.05
dim: 100
epoch: 5
maxTrainTime: 8640000
saveEveryEpoch: 0
loss: hinge
margin: 0.05
similarity: cosine
maxNegSamples: 10
negSearchLimit: 10
thread: 20
minCount: 5
minCountLabel: 1
label: __label__
ngrams: 1
bucket: 2000000
adagrad: 1
trainMode: 3
fileFormat: labelDoc
normalizeText: 1
dropoutLHS: 0
dropoutRHS: 0
Start to initialize starspace model.
Build dict from input file : /root/starspace/data/wikipedia_train250k.txt
Read 104M words
Number of words in dictionary:  267523
Number of labels in dictionary: 0
Loading data from file : /root/starspace/data/wikipedia_train250k.txt
Total number of examples loaded : 241744
Initialized model weights. Model size :
matrix : 267523 100
Training epoch 0: 0.05 0.01
Epoch: 100.0%  lr: 0.040000  loss: 0.028662  eta: 0h16m  tot: 0h4m1s  (20.0%)
 ---+++                Epoch    0 Train error : 0.02799942 +++--- ☃
Training epoch 1: 0.04 0.01
Epoch: 100.0%  lr: 0.030166  loss: 0.016099  eta: 0h10m  tot: 0h7m33s  (40.0%)
 ---+++                Epoch    1 Train error : 0.01565443 +++--- ☃
Training epoch 2: 0.03 0.01
Epoch: 100.0%  lr: 0.020083  loss: 0.012366  eta: 0h7m  tot: 0h11m8s  (60.0%)
 ---+++                Epoch    2 Train error : 0.01268082 +++--- ☃
Training epoch 3: 0.02 0.01
Epoch: 100.0%  lr: 0.010000  loss: 0.011268  eta: 0h3m  tot: 0h14m52s  (80.0%)
 ---+++                Epoch    3 Train error : 0.01142365 +++--- ☃
Training epoch 4: 0.01 0.01
Epoch: 100.0%  lr: -0.000000  loss: 0.010595  eta: <1min   tot: 0h18m35s  (100.0%)
 ---+++                Epoch    4 Train error : 0.01067582 +++--- ☃

Despite these errors, the training task completes and the testing starts:

Saving model to file : /root/starspace/models/wikipedia_sentence_match
Saving model in tsv format : /root/starspace/models/wikipedia_sentence_match.tsv
Start to evaluate trained model:
Arguments: 
lr: 0.01
dim: 10
epoch: 5
maxTrainTime: 8640000
saveEveryEpoch: 0
loss: hinge
margin: 0.05
similarity: cosine
maxNegSamples: 10
negSearchLimit: 50
thread: 20
minCount: 1
minCountLabel: 1
label: __label__
ngrams: 1
bucket: 2000000
adagrad: 1
trainMode: 3
fileFormat: fastText
normalizeText: 1
dropoutLHS: 0
dropoutRHS: 0
Start to load a trained starspace model.
STARSPACE-2017-1
Initialized model weights. Model size :
matrix : 267523 100
Model loaded.
Loading data from file : /root/starspace/data/wikipedia_test10k.txt
Total number of examples loaded : 9654
------Loaded model args:
Arguments: 
lr: 0.01
dim: 100
epoch: 5
maxTrainTime: 8640000
saveEveryEpoch: 0
loss: hinge
margin: 0.05
similarity: cosine
maxNegSamples: 10
negSearchLimit: 10
thread: 20
minCount: 5
minCountLabel: 1
label: __label__
ngrams: 1
bucket: 2000000
adagrad: 1
trainMode: 3
fileFormat: labelDoc
normalizeText: 1
dropoutLHS: 0
dropoutRHS: 0
Finished loading base docs.
Evaluation Metrics : 
hit@1: 0.0989227 hit@10: 0.174643 hit@20: 0.215144 hit@50: 0.283302 mean ranks : 1099.95 Total examples : 9654

NOTE 1. I'm running in a Docker image here.

NOTE 2. What's that snowman ☃?

Script used for preprocessing Wikipedia data

Thanks for a great release! I am wondering if you can open-source the code used to preprocess the Wikipedia data into wikipedia_shuf_train5M.txt? I'd like to replicate it as closely as possible on a novel dataset and use it alongside the Wikipedia data if possible.

Recommend Different Boost Install Location

On newer versions of Mac OS X, the directory /usr/bin is read-only due to SIP. I recommend changing the Boost install directory to /usr/local/bin so users do not need to disable SIP to compile. See this StackOverflow answer for more info.

I installed to /usr/local/bin, changed the makefile and it compiled correctly. I don't think this will cause any conflict on other operating systems.

Crash on failed assertion pair.first < this->numRows()

Hi, I consistently get a crash when using starspace in test mode. In training mode I consistently get a segfault right after it writes the model and tsv.

The command and output during testing are as follows:

/starspace/Starspace/starspace test -testFile /src/experiment_2_tno_data/manual_test.txt -model tagspace2 -label '__label__' -predictionFile 'predictions_manual_test2.txt'

Arguments: 
lr: 0.01
dim: 10
epoch: 5
maxTrainTime: 8640000
saveEveryEpoch: 0
loss: hinge
margin: 0.05
similarity: cosine
maxNegSamples: 10
negSearchLimit: 50
thread: 10
minCount: 1
minCountLabel: 1
label: __label__
ngrams: 1
bucket: 2000000
adagrad: 1
trainMode: 0
fileFormat: fastText
normalizeText: 0
dropoutLHS: 0
dropoutRHS: 0
Start to load a trained starspace model.
STARSPACE-2017-2
Model loaded.
Loading data from file : /src/experiment_2_tno_data/manual_test.txt
Total number of examples loaded : 1
------Loaded model args:
Arguments: 
lr: 0.01
dim: 10
epoch: 5
maxTrainTime: 8640000
saveEveryEpoch: 0
loss: hinge
margin: 0.05
similarity: cosine
maxNegSamples: 10
negSearchLimit: 50
thread: 10
minCount: 1
minCountLabel: 1
label: __label__
ngrams: 1
bucket: 2000000
adagrad: 1
trainMode: 0
fileFormat: fastText
normalizeText: 0
dropoutLHS: 0
dropoutRHS: 0
starspace: src/proj.h:57: void starspace::SparseLinear<Real>::forward(const std::vector<std::pair<int, Real> >&, starspace::Matrix<Real>&) [with Real = float]: Assertion `pair.first < this->numRows()' failed.
./test.sh: line 2:   173 Aborted                 (core dumped) /starspace/Starspace/starspace test -testFile /src/experiment_2_tno_data/manual_test.txt -model tagspace2 -label '__label__' -predictionFile 'predictions_manual_test2.txt'

Can you provide an example of document ranking for unsupervised data?

I have a bunch of PDFs from which I can extract the text and convert it into documents, where each document has 3 to 5 sentences. Now I want to use StarSpace to rank these documents for user queries.

How can I use this unsupervised data in StarSpace for my use case? Correct me if my understanding of this usage of StarSpace is wrong.
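One common unsupervised approach, sketched below with toy word vectors standing in for a trained model's TSV output: embed each document as the mean of its word vectors and rank documents by cosine similarity against the embedded query. This is a sketch of the general technique, not StarSpace's exact scoring.

```python
# Sketch: rank documents against a query by cosine similarity of
# averaged word embeddings. The word vectors here are toy values.
import math

word_vecs = {
    "dog": [1.0, 0.0], "cat": [0.9, 0.1],
    "car": [0.0, 1.0], "road": [0.1, 0.9],
}

def embed(text):
    # Average the vectors of known words; zero vector if none match.
    vecs = [word_vecs[w] for w in text.split() if w in word_vecs]
    return [sum(c) / len(vecs) for c in zip(*vecs)] if vecs else [0.0, 0.0]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return num / den if den else 0.0

docs = ["dog cat", "car road"]
query = "dog"
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
print(ranked[0])  # dog cat
```

With a real model, the toy `word_vecs` dict would be replaced by the vectors loaded from the model's .tsv file.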

Install error: cblas.h file not found

Hi, I'm getting an installation error with the most recent update (it didn't happen before). The error message is below:

In file included from src/proj.cpp:11:
src/proj.h:22:10: fatal error: 'cblas.h' file not found
#include <cblas.h>
^
1 error generated.
make: *** [proj.o] Error 1

Link broken for Freebase15k dataset

It seems like the server for the Freebase15k dataset is down (this link: https://everest.hds.utc.fr/lib/exe/fetch.php?media=en:fb15k.tgz in the examples/multi_relation_example.sh script). Could you fix this or provide an alternative download source?

Thanks, Johannes

cc @ben0it8

Core dumped at testing part of news dataset example

I was running the example for news classification, and after the model was built, starspace test core dumped with the following stack trace:

(gdb) bt
#0  0x00007fd6d8e1cc37 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007fd6d8e20028 in __GI_abort () at abort.c:89
#2  0x00007fd6d8e636ed in __malloc_assert (
assertion=assertion@entry=0x7fd6d8f67be8 "(old_top == (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) (old_size) >= (unsigned long)((((__builtin_offs"..., file=file@entry=0x7fd6d8f63768 "malloc.c", line=line@entry=2372, 
function=function@entry=0x7fd6d8f63ae6 <__func__.11321> "sysmalloc") at malloc.c:293
#3  0x00007fd6d8e66b58 in sysmalloc (av=0x7fd6d91a4760 <main_arena>, nb=112) at malloc.c:2369
#4  _int_malloc (av=0x7fd6d91a4760 <main_arena>, bytes=96) at malloc.c:3800
#5  0x00007fd6d8e686c0 in __GI___libc_malloc (bytes=96) at malloc.c:2891
#6  0x00007fd6d9943dad in operator new(unsigned long) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#7  0x000000000040c029 in allocate (this=0x19c25e8, __n=4) at /usr/include/c++/4.8/ext/new_allocator.h:104
#8  _M_allocate (this=0x19c25e8, __n=<optimized out>) at /usr/include/c++/4.8/bits/stl_vector.h:168
#9  std::vector<starspace::entry, std::allocator<starspace::entry> >::_M_emplace_back_aux<starspace::entry const&> (
this=this@entry=0x19c25e8) at /usr/include/c++/4.8/bits/vector.tcc:404
#10 0x000000000040906c in push_back (__x=..., this=<optimized out>) at /usr/include/c++/4.8/bits/stl_vector.h:911
#11 starspace::Dictionary::load (this=0x19c25d8, in=...) at src/dict.cpp:123
#12 0x0000000000436c68 in starspace::StarSpace::initFromSavedModel (this=this@entry=0x7fff54ac7070, 
filename="/tmp/starspace/models/ag_news") at src/starspace.cpp:121
#13 0x0000000000404543 in main (argc=<optimized out>, argv=0x7fff54ac7218) at src/main.cpp:42

Ubuntu 14.04 with updates, GCC 4.8.4, Boost 1.54

Embedding initialization?

Hi, thanks for a great release! I have a question regarding the '-initModel' option: is it possible to initialize a model with a trained set of embeddings, not necessarily from the same training mode?
I tried to initialize a trainMode = 3 model with a trainMode = 1 model and got "training file empty" errors. Is there any other way to initialize embeddings with pre-trained ones? It can improve results a lot in certain cases.

Thanks

Remove the Boost Lib Dependency

Would it be possible to remove the Boost dependency? Is matrix.hpp the only place a numeric library is needed? If so, it could be embedded.

By the way, I have upgraded Boost on macOS:

[loretoparisi@:mbploreto Starspace]$ brew upgrade boost
Updating Homebrew...
==> Upgrading 1 outdated package, with result:
boost 1.65.1
==> Upgrading boost 
==> Downloading https://homebrew.bintray.com/bottles/boost-1.65.1.sierra.bottle.tar.gz
######################################################################## 100,0%
==> Pouring boost-1.65.1.sierra.bottle.tar.gz
🍺  /usr/local/Cellar/boost/1.65.1: 12,679 files, 401.1MB

and after setting BOOST_DIR = /usr/local/Cellar/boost/1.65.1/ in the makefile I get:

In file included from src/proj.cpp:11:
In file included from src/proj.h:14:
src/matrix.h:26:10: fatal error: 'boost/numeric/ublas/matrix.hpp' file not found
#include <boost/numeric/ublas/matrix.hpp>
         ^
1 error generated.
make: *** [proj.o] Error 

Per-epoch model files overwrite each other

With the -saveEveryEpoch argument, the model file specified in -model gets saved after every epoch, so that each per-epoch model overwrites the last. I'd like to have all of those per-epoch models available when training is done, for running on validation data and picking the best epoch. Would it be possible to (maybe optionally) save those files to, e.g., model.1, model.2, model.3...?

Multilabel training Failed

Hello,

I have a dataset of 90k sentences with 7 tags, and I am trying to multi-label classify them. However, it doesn't work.

The file format is:

Some sample text #tag1 #tag2
Another sample text #tag5 #tag3
..

The command for training is:
./starspace train -trainFile starspace_input.txt -model tagspace -label '#' -trainMode 0
output of training:

Epoch: 100.0%  lr: 0.008000  loss: 0.003489  eta: 0h1m  tot: 0h0m20s  (20.0%)
 ---+++                Epoch    0 Train error : 0.00351631 +++--- ☃
Training epoch 1: 0.008 0.002
Epoch: 100.0%  lr: 0.006000  loss: 0.002564  eta: <1min   tot: 0h0m40s  (40.0%)
 ---+++                Epoch    1 Train error : 0.00240930 +++--- ☃
Training epoch 2: 0.006 0.002
Epoch: 100.0%  lr: 0.004000  loss: 0.002277  eta: <1min   tot: 0h0m59s  (60.0%)
 ---+++                Epoch    2 Train error : 0.00222635 +++--- ☃
Training epoch 3: 0.004 0.002
Epoch: 100.0%  lr: 0.002021  loss: 0.002060  eta: <1min   tot: 0h1m20s  (80.0%)
 ---+++                Epoch    3 Train error : 0.00206080 +++--- ☃
Training epoch 4: 0.002 0.002
Epoch: 100.0%  lr: -0.000000  loss: 0.001748  eta: <1min   tot: 0h1m40s  (100.0%)
 ---+++                Epoch    4 Train error : 0.00182090 +++--- ☃

The command for testing:
./starspace test -testFile starspace_input.txt -model tagspace -predictionFile tags_output.txt
The out file is:

Example 0:
LHS:
Some sample text
RHS:
#tag1
Predictions:
(++) [-0.405902]        #tag1
(--) [-0.424607]        #tag2
(--) [-0.424702]        #tag3
(--) [-0.426362]        #tag4
(--) [-0.431557]        #tag5
(--) [-0.435428]        #tag6
(--) [-0.437012]        #tag7 

I have two issues:
1. Why does the model mark only one tag, and not all tags, for each sentence?
2. Why are the predictions totally wrong despite the program reporting a training error of 0.00182090? I have checked them manually and the probabilities are extremely wrong.
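For reference, the training-file format used above (sentence text followed by its tags, each prefixed with the custom label marker matching -label '#') can be generated with a short sketch like this; the `make_line` helper and the examples are hypothetical:

```python
# Sketch: emit one StarSpace training line per (text, tags) pair,
# using '#' as the label prefix to match `-label '#'`.
def make_line(text, tags, label_prefix="#"):
    return text + " " + " ".join(label_prefix + t for t in tags)

examples = [
    ("Some sample text", ["tag1", "tag2"]),
    ("Another sample text", ["tag5", "tag3"]),
]
lines = [make_line(t, tags) for t, tags in examples]
print(lines[0])  # Some sample text #tag1 #tag2
```

Writing `"\n".join(lines)` to a file then yields input in the shape shown in the issue.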

Installation error: isnan, isinf was not declared in this scope

I am having a hard time installing on Ubuntu with gcc-5.4. I also tried gcc-7 and clang and still get the same error.

g++ --version
g++ (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
g++ -pthread -std=gnu++11 -O3 -funroll-loops -g -c src/utils/normalize.cpp
g++ -pthread -std=gnu++11 -O3 -funroll-loops -g -c src/dict.cpp
g++ -pthread -std=gnu++11 -O3 -funroll-loops -g -c src/utils/args.cpp
g++ -pthread -std=gnu++11 -O3 -funroll-loops -I/usr/bin/boost_1_63_0/ -g -c src/proj.cpp
g++ -pthread -std=gnu++11 -O3 -funroll-loops -I/usr/bin/boost_1_63_0/ -g -c src/parser.cpp -o parser.o
g++ -pthread -std=gnu++11 -O3 -funroll-loops -g -c src/data.cpp -o data.o
g++ -pthread -std=gnu++11 -O3 -funroll-loops -I/usr/bin/boost_1_63_0/ -g -c src/model.cpp
In file included from src/proj.h:19:0,
                 from src/model.h:13,
                 from src/model.cpp:10:
src/model.h: In static member function ‘static void starspace::EmbedModel::check(const boost::numeric::ublas::matrix<float, boost::numeric::ublas::basic_row_major<>, boost::numeric::ublas::unbounded_array<float, std::allocator<float> > >&)’:
src/model.h:180:30: error: ‘isnan’ was not declared in this scope
         assert(!isnan(m(i, j)));
                              ^
src/model.h:180:30: note: suggested alternative:
In file included from /usr/include/c++/5/random:38:0,
                 from src/matrix.h:21,
                 from src/model.h:12,
                 from src/model.cpp:10:
/usr/include/c++/5/cmath:641:5: note:   ‘std::isnan’
     isnan(_Tp __x)
     ^
In file included from src/proj.h:19:0,
                 from src/model.h:13,
                 from src/model.cpp:10:
src/model.h:181:30: error: ‘isinf’ was not declared in this scope
         assert(!isinf(m(i, j)));
                              ^
src/model.h:181:30: note: suggested alternative:
In file included from /usr/include/c++/5/random:38:0,
                 from src/matrix.h:21,
                 from src/model.h:12,
                 from src/model.cpp:10:
/usr/include/c++/5/cmath:621:5: note:   ‘std::isinf’
     isinf(_Tp __x)
     ^
In file included from /usr/include/c++/5/cassert:43:0,
                 from /usr/bin/boost_1_63_0/boost/numeric/ublas/detail/config.hpp:16,
                 from /usr/bin/boost_1_63_0/boost/numeric/ublas/exception.hpp:19,
                 from /usr/bin/boost_1_63_0/boost/numeric/ublas/storage.hpp:25,
                 from /usr/bin/boost_1_63_0/boost/numeric/ublas/vector.hpp:21,
                 from /usr/bin/boost_1_63_0/boost/numeric/ublas/matrix.hpp:18,
                 from src/matrix.h:26,
                 from src/model.h:12,
                 from src/model.cpp:10:
src/matrix.h: In instantiation of ‘starspace::Matrix<Real>::sanityCheck() const::<lambda(Real, size_t, size_t)> [with Real = float; size_t = long unsigned int]’:
src/matrix.h:132:19:   required from ‘struct starspace::Matrix<Real>::sanityCheck() const [with Real = float]::<lambda(float, size_t, size_t)>’
src/matrix.h:132:16:   required from ‘void starspace::Matrix<Real>::sanityCheck() const [with Real = float]’
src/model.h:173:19:   required from here
src/matrix.h:133:20: error: ‘isnan’ was not declared in this scope, and no declarations were found by argument-dependent lookup at the point of instantiation [-fpermissive]
       assert(!isnan(r));
                    ^
In file included from /usr/include/c++/5/random:38:0,
                 from src/matrix.h:21,
                 from src/model.h:12,
                 from src/model.cpp:10:
/usr/include/c++/5/cmath:641:5: note: ‘template<class _Tp> constexpr typename __gnu_cxx::__enable_if<std::__is_integer<_Tp>::__value, bool>::__type std::isnan(_Tp)’ declared here, later in the translation unit
     isnan(_Tp __x)
     ^
In file included from /usr/include/c++/5/cassert:43:0,
                 from /usr/bin/boost_1_63_0/boost/numeric/ublas/detail/config.hpp:16,
                 from /usr/bin/boost_1_63_0/boost/numeric/ublas/exception.hpp:19,
                 from /usr/bin/boost_1_63_0/boost/numeric/ublas/storage.hpp:25,
                 from /usr/bin/boost_1_63_0/boost/numeric/ublas/vector.hpp:21,
                 from /usr/bin/boost_1_63_0/boost/numeric/ublas/matrix.hpp:18,
                 from src/matrix.h:26,
                 from src/model.h:12,
                 from src/model.cpp:10:
src/matrix.h:134:20: error: ‘isinf’ was not declared in this scope, and no declarations were found by argument-dependent lookup at the point of instantiation [-fpermissive]
       assert(!isinf(r));
                    ^
In file included from /usr/include/c++/5/random:38:0,
                 from src/matrix.h:21,
                 from src/model.h:12,
                 from src/model.cpp:10:
/usr/include/c++/5/cmath:621:5: note: ‘template<class _Tp> constexpr typename __gnu_cxx::__enable_if<std::__is_integer<_Tp>::__value, bool>::__type std::isinf(_Tp)’ declared here, later in the translation unit
     isinf(_Tp __x)
     ^
In file included from src/matrix.h:20:0,
                 from src/model.h:12,
                 from src/model.cpp:10:
/usr/include/c++/5/functional: At global scope:
/usr/include/c++/5/functional:2246:7: error: ‘std::function<_Res(_ArgTypes ...)>::function(_Functor) [with _Functor = starspace::Matrix<Real>::sanityCheck() const [with Real = float]::<lambda(float, size_t, size_t)>; <template-parameter-2-2> = void; <template-parameter-2-3> = void; _Res = void; _ArgTypes = {float, long unsigned int, long unsigned int}]’, declared using local type ‘starspace::Matrix<Real>::sanityCheck() const [with Real = float]::<lambda(float, size_t, size_t)>’, is used but never defined [-fpermissive]
       function<_Res(_ArgTypes...)>::
       ^
makefile:69: recipe for target 'model.o' failed
make: *** [model.o] Error 1

Assertion `id < size_' failed in dict.cpp when using ngrams > 1 in test

If I train with trainMode = 0 and ngrams > 1, I get the following error during the test call:

starspace: src/dict.cpp:59: const string& starspace::Dictionary::getSymbol(int32_t) const: Assertion `id < size_' failed.
Evaluation Metrics :
hit@1: 0.336957 hit@10: 0.5 hit@20: 0.641304 hit@50: 0.826087 mean ranks : 28.1957 Total examples : 92
Aborted

This results in an empty predictionFile.

Using PageSpace, the embeddings seem small and centered around 0

command:
./starspace train -trainFile pagespace.train -model pagespace.model -label 'n' -trainMode 1 -dim 128 -minCount 5 -minCountLabel 5 -epoch 8 -saveTempModel true -thread 16

Data scale: 30,000,000 users × 500,000 pages, dim=128

I computed a histogram of the first dimension, shown below; the values seem very small and centered around 0.
I'm not sure whether the result is correct.

-0.0457 - -0.0373 [ 4]:
-0.0373 - -0.0290 [ 18]:
-0.0290 - -0.0206 [ 87]: *
-0.0206 - -0.0122 [ 453]: ******
-0.0122 - -0.0038 [ 2346]: **********************************
-0.0038 - 0.0045 [ 5215]: ***************************************************************************
0.0045 - 0.0129 [ 1533]: **********************
0.0129 - 0.0213 [ 259]: ***
0.0213 - 0.0297 [ 77]: *
0.0297 - 0.0380 [ 8]:

Add weight importance to file format

In many cases, it makes sense to say that an observation is more important than another one.
Right now, there is no obvious way to provide this information.
Would it be possible to add such a feature?
It would look like an optional key word to add to each sample (for instance __weight__:2.5).
The effect would be to increase the loss (and the correction) for that sample.

What do you think?
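StarSpace has since added real-valued input and label weights (see the File Format section referenced in the News above), attached per token as token:weight. Per-example importance could be approximated by scaling every token's weight on a line; a hypothetical sketch:

```python
# Sketch: emit a line in StarSpace's token:weight format, scaling all
# tokens and labels on the line by one importance weight. Illustrative
# only; per-example weighting is approximated via per-token weights.
def weighted_line(tokens, labels, weight=1.0):
    return " ".join("%s:%s" % (t, weight) for t in tokens + labels)

print(weighted_line(["dog", "cat"], ["__label__1"]))
# dog:1.0 cat:1.0 __label__1:1.0
```

Training with such files requires the -useWeight flag.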

Crash with initModel

When initializing with a fastText model pretrained by the Facebook team, I always get this error:

Arguments: 
lr: 0.05
dim: 300
epoch: 5
maxTrainTime: 8640000
saveEveryEpoch: 0
loss: hinge
margin: 0.05
similarity: cosine
maxNegSamples: 10
negSearchLimit: 50
thread: 11
minCount: 30
minCountLabel: 1
label: __label__
ngrams: 1
bucket: 2000000
adagrad: 1
trainMode: 3
fileFormat: labelDoc
normalizeText: 0
dropoutLHS: 0
dropoutRHS: 0
Start to load a trained starspace model.
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
./learn.sh : ligne 21 :  2695 Abandon                 (core dumped) ./starspace train -trainFile ./data/q_starspace.txt -model ./data/q_learned -trainMode 3 -initRandSd 0.01 -adagrad true -ngrams 1 -lr 0.05 -margin 0.05 -epoch 5 -thread 11 -dim 300 -negSearchLimit 50 -fileFormat labelDoc -similarity "cosine" -minCount 30 -verbose true -normalizeText 0 -initModel ./data/wiki.fr.head.50k.vec

wiki.fr.head.50k.vec is the first 50K lines of the pretrained vectors (I also tried the full file and the first 10K).
I have 128 GB of RAM.
Training works when I do not initialize the model from the pretrained file.

What can I try in order to initialize the model and avoid this error?

MultiRelationExample not working

I guess it uses the wrong trainMode, the wrong fileFormat, and is missing a baseDoc.
After changing trainMode to 4 and fileFormat to labelDoc it trained, but I couldn't test it because I didn't specify a baseDoc:

Must provide base labels when label is featured.

Crash when -useWeight option used

Hello,
I have tried the current version of StarSpace with the -useWeight option and an empty input file, but it crashes:

StarSpace$ cat input
StarSpace$ ./starspace train -trainFile input -model output -useWeight
terminate called after throwing an instance of 'std::logic_error'
  what():  basic_string::_S_construct null not valid
Aborted

It also crashes when the input file is not empty; for example, it crashes with the same error on this input file:

dog:1.0 cat:1.0 __label__1:1.0

How to embed a document space

Could you please provide advice on how to embed a document space, like Doc2vec?
I want to embed a document space and find which document is most similar to a given document.
I trained on my text data formatted for fastText. Can I use the trained model like a Doc2vec model?
( xxxxxx.mostsimilar(["I like dog"]) )

What happens with unlabeled documents with trainMode=0?

The README for trainMode=0 states "Each example contains both input and labels." But, StarSpace executes perfectly for a mixed case where some documents are labeled and others are not.
 
Example with trainMode=0:

echo "__label__1 this is a labeled test sentence" > ex_mixed.txt
echo "__label__0 a second labeled example" >> ex_mixed.txt
echo "this is an example sentence" >> ex_mixed.txt
echo "this is another sentence" >> ex_mixed.txt
 
./starspace train -trainMode 0 -trainFile ex_mixed.txt -model ex_mixed_model_0

Output:

Arguments: 
...
trainMode: 0
...
Start to initialize starspace model.
Build dict from input file : ex_mixed.txt
Read 0M words
Number of words in dictionary:  10
Number of labels in dictionary: 2
Loading data from file : ex_mixed.txt
Total number of examples loaded : 2
Training epoch 0: 0.01 0.002
...
Saving model to file : ex_mixed_model_0
Saving model in tsv format : ex_mixed_model_0.tsv

i.e. it works.
Similarly, with trainMode=5:

./starspace train -trainMode 5 -trainFile ex_mixed.txt -model ex_mixed_model_5

Output:

Arguments: 
...
trainMode: 5
...
Start to initialize starspace model.
Build dict from input file : ex_mixed.txt
Read 0M words
Number of words in dictionary:  10
Number of labels in dictionary: 2
Loading data from file : ex_mixed.txt
Total number of examples loaded : 4
Training epoch 0: 0.01 0.002
...
Saving model to file : ex_mixed_model_5
Saving model in tsv format : ex_mixed_model_5.tsv

Again, it works.
The dictionaries are identical (suggesting that despite Total number of examples loaded : 2, the first trainMode=0 case is not throwing out the unlabeled documents) but the vectors are different. So, what is going on? How does StarSpace treat these unlabeled documents?
 
I suggest that the README be updated to note that:

  • Training with trainMode=0 does not require every document to have a label.
      - (But, there must be at least two labels.)
      - The unlabeled documents are treated ___.

This example also shows that training with trainMode=5 on a file containing labeled documents, treats each label as a word/token in the dictionary and learns their embeddings. Perhaps a warning in the README/code for this case is warranted too?
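The counts in the logs above (2 examples loaded with trainMode=0, 4 with trainMode=5, identical dictionaries) are consistent with label-less lines being skipped as examples in trainMode=0 while still contributing words to the dictionary. A hypothetical sketch of that filtering, inferred from the logs rather than from the source:

```python
# Hypothetical sketch of the behavior the logs suggest: in trainMode=0,
# only lines containing a label token count as training examples,
# although every line still feeds the dictionary.
lines = [
    "__label__1 this is a labeled test sentence",
    "__label__0 a second labeled example",
    "this is an example sentence",
    "this is another sentence",
]
labeled = [l for l in lines if "__label__" in l]
print(len(labeled), len(lines))  # 2 4
```

This matches the 2-vs-4 example counts observed for trainMode=0 and trainMode=5 above.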

Gitter/Slack

Hi Jason & co! Would you be open to creating a room on some community platform for discussing StarSpace in the open? I really like its mission and would love to simply follow along with its development and see how people are using it.

Nick

No Response

I trained a model using my own labeled data in a short time, but when I used the model to test, there was no response. It showed that it had loaded the test file and model; I waited a relatively long time but nothing happened, and no error was raised.

Crashing makefile

Good day, I followed your instructions to build the model, but I hit a problem at the "make" step, after git clone and cd Starspace:

makefile:84: recipe for target 'data.o' failed
make: *** [data.o] Error 1

I use Ubuntu 16.04 LTS.
How can I solve this problem?
Thank you.

Error compiling tests

I could not get the tests to compile with make test until I modified the #includes in test/*.cpp to point up a directory.

For example, in proj_test.cpp: #include "../proj.h"

Can I write R wrapper starspace?

Can I write an R wrapper for StarSpace?
The package name would be "StarspaceR".

Are there any license issues, or a guide for using the source?

Thanks in advance.

Starspace crashes while testing model

While running ./examples/classification_ag_news.sh and ./examples/wikipedia_article_search.sh, I find StarSpace crashes during testing.

My system details -

sakets@mobile-graphics-ml:/mnt/hdd/sakets/facebook/Starspace$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.4 LTS"
sakets@mobile-graphics-ml:/mnt/hdd/sakets/facebook/Starspace$ uname -a
Linux mobile-graphics-ml 4.2.0-27-generic #32~14.04.1-Ubuntu SMP Fri Jan 22 15:32:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Attaching the log file, which shows the error:

starspace: malloc.c:2372: sysmalloc: Assertion `(old_top == (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) (old_size) >= (unsigned long)((((__builtin_offsetof (struct malloc_chunk, fd_nextsize))+((2 *(sizeof(size_t))) - 1)) & ~((2 *(sizeof(size_t))) - 1))) && ((old_top)->size & 0x1) && ((unsigned long) old_end & pagemask) == 0)' failed.
./examples/classification_ag_news.sh: line 75: 29692 Aborted (core dumped) ./starspace test -model "${MODELDIR}"/ag_news -testFile "${DATADIR}"/ag_news.test -ngrams 1 -dim 10 -label "label" -thread 10 -similarity "dot" -trainMode 0 -verbose true

error.zip
