
MMvec

Neural networks for estimating microbe-metabolite interactions through their co-occurrence probabilities.

Installation

MMvec can be installed via PyPI as follows

pip install mmvec

If you are planning on using GPUs, be sure to pip install "tensorflow-gpu<=1.14.0".

MMvec can also be installed via conda as follows

conda install mmvec -c conda-forge

Warning : Note that this option may not work in cluster environments; it may be worthwhile to pip install within a virtual environment. It is possible to pip install mmvec within a conda environment, including qiime2 conda environments. However, pip and conda are known to have compatibility issues, so proceed with caution.

Update : conda has not aged very well since this package was released. Below are updated install instructions using mamba (without qiime2)

conda create -n mmvec_env mamba python=3.7 -c conda-forge
conda activate mmvec_env
mamba install mmvec -c conda-forge

Finally, MMvec is only compatible with qiime2 environments 2020.6 or earlier. Stay tuned for future updates.

Input data

The two basic tables required to run mmvec are:

  • Metabolite counts (.biom): A table with metabolites in rows and samples in columns.
  • Microbe abundance (.biom): A relative abundance table with microbial species in rows and samples in columns.
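
For reference, here is a minimal Python sketch for loading and sanity-checking these two tables with the biom-format package (the file names are the example files shipped in examples/cf; adjust to your own data):

import biom

# Load the paired tables (example file names from examples/cf).
microbes = biom.load_table('examples/cf/otus_nt.biom')
metabolites = biom.load_table('examples/cf/lcms_nt.biom')

# MMvec pairs observations by sample ID, so the two tables should share samples.
shared = set(microbes.ids(axis='sample')) & set(metabolites.ids(axis='sample'))
print(len(shared), 'shared samples')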

Getting started

To get started, you can run a quick example as follows. This will learn microbe-metabolite vectors (mmvec), which can be used to estimate microbe-metabolite conditional probabilities that are accurate up to rank.

mmvec paired-omics \
        --microbe-file examples/cf/otus_nt.biom \
        --metabolite-file examples/cf/lcms_nt.biom \
        --summary-dir summary

While this is running, you can open up another session and run tensorboard --logdir . for diagnostics; see the FAQs below for more details.

If you investigate the summary folder, you will notice that there are a number of files deposited.

See the following URL for a more complete tutorial with real datasets.

https://github.com/knightlab-analyses/multiomic-cooccurences

More information can be found under mmvec --help

Qiime2 plugin

If you want to run this in a qiime environment, install this in your qiime2 conda environment (see qiime2 installation instructions here) and run the following

pip install git+https://github.com/biocore/mmvec.git
qiime dev refresh-cache

This should allow your q2 environment to recognize mmvec. Before we test the qiime2 plugin, go to the examples/cf folder and run the following commands to import an example dataset

qiime tools import \
        --input-path otus_nt.biom \
        --output-path otus_nt.qza \
        --type FeatureTable[Frequency]

qiime tools import \
        --input-path lcms_nt.biom \
        --output-path lcms_nt.qza \
        --type FeatureTable[Frequency]

Then you can run mmvec

qiime mmvec paired-omics \
        --i-microbes otus_nt.qza \
        --i-metabolites lcms_nt.qza \
        --p-summary-interval 1 \
        --output-dir model_summary

In the results, there are three files, namely model_summary/conditional_biplot.qza, model_summary/conditionals.qza and model_summary/model_stats.qza. The conditional biplot is a biplot representation of the conditional probability matrix, so that you can visualize these microbe-metabolite interactions in an exploratory manner. This can be directly visualized in Emperor as shown below. We also have the estimated conditional probability matrix given in model_summary/conditionals.qza, which can be unzipped to yield a tab-delimited table via unzip model_summary/conditionals.qza. Each row can be ranked, so the top co-occurring metabolites for a given microbe can be obtained by identifying the highest co-occurrence probabilities for each microbe.
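
As a sketch of that ranking step (assuming you have extracted the conditionals to a tab-delimited file, e.g. conditionals.tsv, and that microbes are on the rows as described above; transpose first if your table is oriented the other way):

import pandas as pd

ranks = pd.read_csv('conditionals.tsv', sep='\t', index_col=0)

# For one microbe, the most strongly co-occurring metabolites are the
# largest entries in its row of log conditional probabilities.
microbe_id = ranks.index[0]  # placeholder: substitute a real feature ID
print(ranks.loc[microbe_id].sort_values(ascending=False).head(10))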

These log conditional probabilities can also be viewed directly with qiime metadata tabulate. The visualization can be created as follows

qiime metadata tabulate \
        --m-input-file model_summary/conditionals.qza \
        --o-visualization conditionals-viz.qzv

Then you can run the following to generate an Emperor biplot.

qiime emperor biplot \
        --i-biplot model_summary/conditional_biplot.qza \
        --m-sample-metadata-file metabolite-metadata.txt \
        --m-feature-metadata-file taxonomy.tsv \
        --o-visualization emperor.qzv

The resulting biplot should look something like the following

biplot

Here, the metabolites are represented as points and the microbes as arrows. Points close together indicate metabolites that frequently co-occur with each other. Likewise, arrows with a small angle between them indicate microbes that co-occur with each other. Arrows that point in the same direction as a set of points indicate microbe-metabolite co-occurrence. In the biplot above, the red arrows correspond to Pseudomonas aeruginosa, and the red points correspond to rhamnolipids that are likely produced by Pseudomonas aeruginosa.

Another way to examine these associations is to build heatmaps of the log conditional probabilities between observations, using the heatmap action:

qiime mmvec heatmap \
  --i-ranks ranks.qza \
  --m-microbe-metadata-file taxonomy.tsv \
  --m-microbe-metadata-column Taxon \
  --m-metabolite-metadata-file metabolite-metadata.txt \
  --m-metabolite-metadata-column Compound_Source \
  --p-level 5 \
  --o-visualization ranks-heatmap.qzv

This action generates a clustered heatmap displaying the log conditional probabilities between microbes and metabolites. Larger positive log conditional probabilities indicate a stronger likelihood of co-occurrence. Low and negative values indicate no relationship, not necessarily a negative correlation. Rows (microbial features) can be annotated according to feature metadata, as shown in this example; we provide a taxonomic classification file and the semicolon-delimited taxonomic rank (level) that should be displayed in the color-coded margin annotation. Set level to -1 to display the full annotation (including non-delimited feature metadata). Separate parameters are available to annotate the x-axis (metabolites) in a similar fashion. Row and column clustering can be adjusted using the method and metric parameters. This action will generate a heatmap that looks similar to this:

heatmap

Biplots and heatmaps give a great overview of co-occurrence associations, but do not provide information about the abundances of these co-occurring features in each sample. This can be obtained with the paired-heatmap action:

qiime mmvec paired-heatmap \
  --i-ranks ranks.qza \
  --i-microbes-table otus_nt.qza \
  --i-metabolites-table lcms_nt.qza \
  --m-microbe-metadata-file taxonomy.tsv \
  --m-microbe-metadata-column Taxon \
  --p-features TACGAAGGGTGCAAGCGTTAATCGGAATTACTGGGCGTAAAGCGCGCGTAGGTGGTTCAGCAAGTTGGATGTGAAATCCCCGGGCTCAACCTGGGAACTGCATCCAAAACTACTGAGCTAGAGTACGGTAGAGGGTGGTGGAATTTCCTG \
  --p-features TACGTAGGTCCCGAGCGTTGTCCGGATTTATTGGGCGTAAAGCGAGCGCAGGCGGTTAGATAAGTCTGAAGTTAAAGGCTGTGGCTTAACCATAGTAGGCTTTGGAAACTGTTTAACTTGAGTGCAAGAGGGGAGAGTGGAATTCCATGT \
  --p-top-k-microbes 0 \
  --p-normalize rel_row \
  --p-top-k-metabolites 100 \
  --p-level 6 \
  --o-visualization paired-heatmap-top2.qzv

This action generates paired heatmaps that are aligned on the y-axis (sample IDs): the left panel displays the abundances of each selected microbial feature in each sample, and the right panel displays the abundances of the top k metabolite features associated with each of these microbes in each sample. Microbes can be selected automatically using the top-k-microbes parameter (which selects the microbes with the top k highest relative abundances) or they can be selected by name using the features parameter (if using the QIIME 2 plugin command-line interface as shown in this example, multiple features are selected by passing this parameter multiple times, e.g., --p-features feature1 --p-features feature2; for python interfaces, pass a list of features: features=[feature1, feature2]). As with the heatmap action, microbial features can be annotated by passing in microbe-metadata and specifying a taxonomic level to display. The output looks something like this:

paired-heatmap
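
For users of the QIIME 2 Artifact API mentioned above, a rough equivalent of the command might look like the following. This is a sketch, not verbatim from the mmvec docs: the action and parameter names are inferred from the CLI flags (q2cli derives --i-/--p-/--m- flags from the underscored Python names), and the output name is assumed to be visualization, so double-check against qiime mmvec --help in your environment.

import qiime2
from qiime2.plugins import mmvec

ranks = qiime2.Artifact.load('ranks.qza')
microbes = qiime2.Artifact.load('otus_nt.qza')
metabolites = qiime2.Artifact.load('lcms_nt.qza')
taxonomy = qiime2.Metadata.load('taxonomy.tsv')

# Note: features takes a Python list here, instead of repeating --p-features.
results = mmvec.actions.paired_heatmap(
    ranks=ranks,
    microbes_table=microbes,
    metabolites_table=metabolites,
    microbe_metadata=taxonomy.get_column('Taxon'),
    features=['feature1', 'feature2'],  # placeholder feature IDs
    top_k_metabolites=100,
    level=6,
)
results.visualization.save('paired-heatmap-top2.qzv')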

More information behind the actions and parameters can found under qiime mmvec --help

Model diagnostics

QIIME2 Convergence Summaries

If you are using the qiime2 interface, there won't be a tensorboard interface. But there will still be training loss curves and cross-validation statistics reported, which are currently not available in the tensorboard interface. To run this with a single model, run the following

qiime mmvec summarize-single \
        --i-model-stats model_summary/model_stats.qza \
        --o-visualization model-summary.qzv

An example of what this will look like is given below.

single_summary

Null models and QIIME 2 + MMvec

If you're running mmvec through QIIME 2, the qiime mmvec summarize-paired command allows you to view two sets of diagnostic plots at once as follows:

# Null model with only biases
qiime mmvec paired-omics \
        --i-microbes otus_nt.qza \
        --i-metabolites lcms_nt.qza \
        --p-latent-dim 0 \
        --p-summary-interval 1 \
        --output-dir null_summary

qiime mmvec summarize-paired \
        --i-model-stats model_summary/model_stats.qza \
        --i-baseline-stats null_summary/model_stats.qza \
        --o-visualization paired-summary.qzv

An example of what this will look like is given below.

paired_summary

It is important to note here that the null model has a worse cross-validation error than the first MMvec model we trained. However, to make the models exactly comparable, the same samples must be used for training and cross-validation. See the --p-training-column option to manually specify samples for training and testing.

These summaries can also be extended to analyze any two models of interest. This can help with picking optimal hyper-parameters.

Interpreting Q2 values

The Q2 score is adapted from the partial least squares literature. Here it is given by Q^2 = 1 - m1/m2, where m1 indicates the average absolute model error and m2 indicates the average absolute null or baseline model error. If Q2 is close to 1, that indicates high predictive accuracy on the cross-validation samples. If Q2 is low or below zero, that indicates poor predictive accuracy, suggesting possible overfitting. This statistic behaves similarly to the R2 classically used in an ordinary linear regression if --p-formula is "1" in the m2 model.
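
A quick worked example of the formula (toy numbers, not from a real run):

# Toy mean absolute errors on the held-out (cross-validation) samples.
model_mae = 0.2     # m1: average |error| of the fitted MMvec model
baseline_mae = 0.5  # m2: average |error| of the null / baseline model
q2 = 1 - model_mae / baseline_mae
print(q2)  # 0.6 -> the model substantially beats the baseline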

If the Q2 score is extremely close to 0 (or negative), this indicates that the model is overfit or that the metadata supplied to the model are not predictive of microbial composition across samples. You can think about this in terms of "how does using the metadata columns in my formula improve a model?" If there isn't really an improvement, then you may want to reconsider your formula.

... But as long as your Q2 score is above zero, your model is learning something useful.

FAQs

Q: Looks like there are two different commands, a standalone script and a qiime2 interface. Which one should I use?!?

A: It'll depend on how deep in the weeds you want to get. For most analyses, the qiime2 interface will be more practical. That said, there are 3 major reasons why the standalone script may be preferable to the qiime2 interface, namely

  1. Customized acceleration : If you want to bring down your runtime from a few days to a few hours, you may need to compile Tensorflow to handle hardware-specific instructions (i.e. GPUs / SIMD instructions). It probably is possible to enable GPU compatibility within a conda environment with some effort, but since conda packages binaries, SIMD instructions will not work out of the box.

  2. Checkpoints : If you are not sure how long your analysis should run, the standalone script allows you to record checkpoints, which let you recover your model parameters. This enables you to investigate your model while it is still training.

  3. More model parameters : The standalone script will return the bias parameters learned for each dataset (i.e. microbe and metabolite abundances). These are stored in the summary directory (specified by --summary-dir) under the name embeddings.csv. This file holds the coordinates for the microbes and metabolites, along with biases. There are 4 columns in this file, namely feature_id, axis, embed_type and values. feature_id is the name of the feature, whether it be a microbe name or a metabolite feature id. axis is the name of the axis, either a PC axis or bias. embed_type denotes whether the coordinate corresponds to a microbe or metabolite. values is the coordinate value for the given axis, embed_type and feature_id. This can be useful for accessing the raw parameters and building custom biplots / ranks visualizations (see the sketch below) - this also has the advantage of requiring much less memory to manipulate.
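
As a sketch of parsing embeddings.csv into a feature-by-axis coordinate matrix (the embed_type label 'microbe' below is an assumption; check the actual values in your file):

import pandas as pd

emb = pd.read_csv('summary/embeddings.csv')

# Keep microbe coordinates, dropping the bias terms, then pivot into a
# feature_id x axis matrix suitable for custom biplots.
microbes = emb[(emb.embed_type == 'microbe') & (emb.axis != 'bias')]
coords = microbes.pivot(index='feature_id', columns='axis', values='values')
print(coords.head())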

It is also important to note that you don't have to explicitly choose - it is very doable to run the standalone version first, then import those output files into qiime2. Importing can be done as follows

qiime tools import --input-path <your ranks file> --output-path conditionals.qza --type FeatureData[Conditional]

qiime tools import --input-path <your ordination file> --output-path ordination.qza --type 'PCoAResults % Properties("biplot")'

Q : You mentioned that you can use GPUs. How can you do that??

A : This can be done by running pip install tensorflow-gpu in your environment. See details here.

At the moment, these capabilities are only available for the standalone CLI due to complications of installation. See the --arm-the-gpu option in the standalone interface.

Q : Neural networks scare me - don't they overfit the crap out of your data?

A : Here, we are using shallow neural networks (only two layers), which fall under the same regime as PCA and SVD. But just as you can overfit PCA/SVD, you can also overfit mmvec, which is why we have Tensorboard enabled for diagnostics. You can visualize the cv_rmse to gauge whether there is overfitting -- if your cv_rmse curve is strictly decreasing, that is a sign that you are probably not overfitting. But that is not necessarily indicative that you have reached the optimum -- you also want to check that the logloss has reached a plateau as shown above.

Q : I'm confused, what is Tensorboard?

A : Tensorboard is a diagnostic tool that runs in a web browser - note that this is only explicitly supported in the standalone version of mmvec. To open tensorboard, make sure you’re in the mmvec environment and cd into the folder you are running the script above from. Then run:

tensorboard --logdir .

The returned line will look something like:

TensorBoard 1.9.0 at http://Lisas-MacBook-Pro-2.local:6006 (Press CTRL+C to quit)

Open the website (highlighted in red) in a browser. (Hint: if that doesn't work, try using just localhost plus the port number, e.g. localhost:6006.) Leave this tab alone. Now any mmvec output directories that you add to the folder that tensorboard is running in will be added to the webpage.

If working properly, it will look something like this:

tensorboard

The FIRST graph in Tensorboard shows 'Prediction accuracy' and is labelled cv_rmse.

This is a graph of the prediction accuracy of the model; the model will try to guess the metabolite intensity values for the testing samples that were set aside in the script above, using only the microbe counts in the testing samples. Then it looks at the real values and sees how close it was.

The second graph is the likelihood - if your likelihood values have plateaued, that is a sign that you have converged and reached a local minimum.

The x-axis is the number of iterations (i.e. the number of times the model has trained across the entire dataset). Every time you iterate across the training samples, you also run the test samples, and the averaged results are plotted on the y-axis.

The y-axis is the average number of counts each feature is off by. The model predicts the sequence counts for each feature in the samples that were set aside for testing, then compares against the real values. So the graph above means that, on average, the model is off by ~0.75 intensity units, which is low. However, this is ABSOLUTE error, not relative error (unfortunately we don't know how to compute relative errors because of the sparsity in these datasets).

You can also compare multiple runs with different parameters to see which run performed the best. Useful parameters to note are --epochs and --batch-size. If you are committed to fine-tuning parameters, be sure to look at the training-column example to make the testing samples consistent across runs.

Q : What's up with the --training-column argument?

A : That is used for cross-validation if you have a specific reproducibility question that you are interested in answering. It can also make it easier to compare cross-validation results across runs. If this is specified, only samples labeled "Train" under this column will be used for building the model, and samples labeled "Test" will be used for cross-validation. In other words, the model will attempt to predict the metabolite abundances for the "Test" samples. The resulting prediction accuracy is used to evaluate the generalizability of the model, in order to determine whether the model is overfitting or not. If this argument is not specified, then 10 random samples will be chosen for the test dataset. If you want to allocate more random samples for cross-validation, the num-random-test-examples argument can be specified.
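
As a sketch of building such a column (file names here are hypothetical; the column name is whatever you pass to --p-training-column):

import numpy as np
import pandas as pd

metadata = pd.read_csv('sample-metadata.txt', sep='\t', index_col=0)

# Hold out 10 random samples as "Test"; everything else is "Train".
rng = np.random.default_rng(42)
test_ids = rng.choice(metadata.index, size=10, replace=False)
metadata['Testing'] = np.where(metadata.index.isin(test_ids), 'Test', 'Train')
metadata.to_csv('sample-metadata-with-split.txt', sep='\t')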

Q : What sort of parameters should I focus on when picking a good model?

A : There are 3 parameters to focus on: --input-prior, --output-prior and --latent-dim

The --input-prior and --output-prior options specify the width of the prior distribution of the coefficients, where the --input-prior is typically specific to microbes and the --output-prior is specific to metabolites. For a prior of 1, this means 99% of entries in the embeddings are within -3 and +3 log fold change. A prior of 0.1 would impose the constraint that 99% of the embeddings are within -0.3 and +0.3 log fold change. The higher --input-prior and --output-prior are, the more parameters can have large changes, so you want to keep these relatively small for small experimental studies, particularly if there are fewer than 20 samples (we have not been able to run MMvec on a study with fewer than 12 samples without overfitting). If you see overfitting (accuracy and fit increasing over iterations in tensorboard), you may consider reducing --input-prior and --output-prior in order to reduce the parameter space.
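
To see where the +/- 3 figure comes from, note that a Normal(0, sigma) prior places roughly 99.7% of its mass within three standard deviations of zero:

from scipy.stats import norm

# ~99.7% interval of a Normal(0, sigma) prior for two prior widths.
for sigma in (1.0, 0.1):
    lo, hi = norm.interval(0.997, loc=0, scale=sigma)
    print(sigma, round(lo, 2), round(hi, 2))
# sigma=1.0 -> roughly (-3, 3); sigma=0.1 -> roughly (-0.3, 0.3)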

Another parameter worth thinking about is --latent-dim, which controls the number of dimensions used to approximate the conditional probability matrix. This also specifies the dimensions of the microbe/metabolite embeddings that are stored in the biplot file. The more dimensions this has, the more accurate the embeddings can be -- but the higher the chance of overfitting. The rule of thumb is that in order to fit these models, you need at least 10 times as many samples as there are latent dimensions (following a similar rule of thumb for fitting straight lines). So if you have 100 samples, you should definitely not have a latent dimension of more than 10. Furthermore, you can still overfit certain microbes and metabolites. For example, if you are fitting a model with those 100 samples and just 1 latent dimension, you can still easily overfit microbes and metabolites that appear in fewer than 10 samples -- so even fitting models with just 1 latent dimension will require microbes and metabolites that appear in fewer than 10 samples to be filtered out.

Q : What does a good model fit look like??

A : Again, the numbers vary greatly by dataset. But you want to see both the logloss and cv_rmse curves decaying, and plateauing as close to zero as possible.

Q : Should we filter low abundance microbes and metabolites?

A : A rule of thumb we recommend is to filter out microbes and metabolites that appear in fewer than 10 samples. The rationale is that it isn't practical to fit a line with fewer than 10 points. By default we filter out microbes that appear in fewer than 10 samples; this can be controlled with the --min-feature-count option.
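
A sketch of applying this filter directly to a .biom table with the biom-format package (mirroring the --min-feature-count behavior; the threshold and file name are illustrative):

import biom

table = biom.load_table('otus_nt.biom')

# Count, per feature, the number of samples in which it is present,
# and keep only features observed in at least 10 samples.
presence = table.pa(inplace=False).sum(axis='observation')
keep = {i for i, n in zip(table.ids(axis='observation'), presence) if n >= 10}
table.filter(keep, axis='observation')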

Q : How long should I expect this program to run?

A : Both epochs and batch-size contribute to determining how long the algorithm will run, namely

Number of iterations = (# of epochs) × (total # of microbial reads / batch size)
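
For example, with toy numbers:

epochs = 100
total_reads = 1_000_000  # total # of microbial reads in the table
batch_size = 500
iterations = epochs * (total_reads // batch_size)
print(iterations)  # 200000 iterations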

This also depends on whether your program will converge. The learning-rate specifies the step size: a smaller step size gives finer resolution, which can increase accuracy but may take longer to converge. You may need to consult Tensorboard to make sure that your model fit is sane. See this paper for more details on gradient descent: https://arxiv.org/abs/1609.04747

If you are running this on a CPU with 16 cores, a run that reaches convergence should take about 1 day. If you have a GPU, you may be able to get this down to a few hours. However, some fine-tuning of the batch-size parameter may be required -- instead of having a small batch-size < 100, you'll want to bump up the batch-size to between 1000 and 10000 to fully leverage the speedups available on the GPU.

As a good reference, the cystic fibrosis dataset can be processed within 10 minutes on a single CPU and within 1 minute on a GPU.

Q : Can I run the standalone version of mmvec and import those outputs to visualize in qiime2?

A : Yes you can! If you ran the standalone mmvec paired-omics command and you specified your ranks and ordination to be stored under conditionals.tsv and ordination.txt, you can import those as qiime2 Artifacts as follows.

qiime tools import --input-path conditionals.tsv --output-path ranks.qza --type "FeatureData[Conditional]"
qiime tools import --input-path ordination.txt --output-path biplot.qza --type "PCoAResults % Properties('biplot')"

Q : Can MMvec handle small sample studies?

A : We have run MMvec on published studies with as few as 19 samples. However, running MMvec in these small-sample regimes requires careful tuning of --latent-dim in addition to the --input-prior and --output-prior options. The desert biocrust experiment may be a good dataset to refer to when analyzing these sorts of datasets. It is important to note that we have not been able to run MMvec on fewer than 12 samples.

Credits to Lisa Marotz (@lisa55asil), Yoshiki Vazquez-Baeza (@ElDeveloper), Julia Gauglitz (@jgauglitz) and Nickolas Bokulich (@nbokulich) for their README contributions.

Q : You mentioned that MMvec learns co-occurrence probabilities. How can I extract these probabilities?

A : MMvec will output a file of co-occurrence probabilities, where the rows are metabolites and columns are microbes. You can extract the co-occurrence probabilities by applying a softmax transform along the columns. In python, this is done as follows

import pandas as pd
from skbio.stats.composition import clr_inv as softmax

# Load the tab-delimited table of log conditional probabilities.
ranks = pd.read_table('ranks.txt', index_col=0)
# The inverse clr transform is a softmax; apply() runs it column-wise by default.
probs = ranks.apply(softmax)
probs.to_csv('conditional_probs.txt', sep='\t')

Citation

If you found this tool useful please cite us at

@article{morton2019learning,
  title={Learning representations of microbe--metabolite interactions},
  author={Morton, James T and Aksenov, Alexander A and Nothias, Louis Felix and Foulds, James R and Quinn, Robert A and Badri, Michelle H and Swenson, Tami L and Van Goethem, Marc W and Northen, Trent R and Vazquez-Baeza, Yoshiki and others},
  journal={Nature methods},
  volume={16},
  number={12},
  pages={1306--1314},
  year={2019},
  publisher={Nature Publishing Group}
}


mmvec's Issues

Tutorial for viewing the ranks

When most people view these ranks, they will want to load them into Excel. However, users will first have to transpose the ranks, since Excel cannot sort along rows.

For the sake of usability, we may want to transpose the ranks by default, and add a tutorial on how to sort the columns to view the top metabolites (a minimal sketch is given below).
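
In the meantime, a one-off transpose is easy in pandas (assuming the ranks were exported as a tab-delimited file named ranks.tsv):

import pandas as pd

ranks = pd.read_csv('ranks.tsv', sep='\t', index_col=0)
ranks.T.to_csv('ranks-transposed.tsv', sep='\t')  # now sortable column-wise in Excel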

Nailing down terminology

This is the most immediate struggle

Right now, --i-microbes and --i-metabolites may be a bit too restrictive, since it suggests that the tool is limited to only microbes and metabolites.

We will want to make this more generalizable. I've thought about renaming to input_abundances and output_abundances, but that is quite confusing given that the q2cli flags would look like --i-input-abundances and --i-output-abundances (the "output abundances" input looks like a contradiction).

I'm going to push ahead with --i-microbes and --i-metabolites for now. But feedback on appropriate names would be great! Maybe --i-explanatory vs --i-response or something like that??

Update the tutorial

Need to have a complete dataset + tutorial on how to run rhapsody

idea : have a tutorial on rhapsody + songbird on a trimmed dataset of the cancer HFD study

Numerical stability

Right now, to convert between clr and alr coordinates, the alr coordinates are first converted to proportions, which are then clr transformed.

This is known to be numerically unstable (because proportions lose precision).
The workaround is to set the corresponding alr reference coordinate to zero, then mean-center those quantities -- that will give you the clr-transformed coordinates (see the sketch below).
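
A minimal numpy sketch of that workaround:

import numpy as np

def alr_to_clr(alr):
    # Append a zero for the implicit ALR reference coordinate, then
    # mean-center; this yields the CLR coordinates without the numerically
    # unstable round-trip through proportions.
    full = np.append(alr, 0.0)
    return full - full.mean()

print(alr_to_clr(np.array([1.0, 2.0, 3.0])))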

GPU Memory Allocation Limits

There could be a command line option to set the max GPU memory allocation. By default tensorflow pre-allocates 100% of the GPU memory. If it's not needed, then enabling the user to set an appropriate limit could help with GPU sharing. Example code for tensorflow is the following:

# Extra imports to set GPU options
import tensorflow as tf
from keras import backend as k

###################################
# TensorFlow wizardry
config = tf.ConfigProto()

# Don't pre-allocate memory; allocate as-needed
config.gpu_options.allow_growth = True

# Only allow a fraction (here 10%) of the GPU memory to be allocated
config.gpu_options.per_process_gpu_memory_fraction = 0.1

# Create a session with the above options specified.
k.tensorflow_backend.set_session(tf.Session(config=config))
###################################

Proportion explained is out of order: axis 1 < axis 2 < axis 3

When visualizing biplots, the proportion explained for axis 1 is less than the proportion explained for axis 2, which is less than the proportion explained for axis 3. An example of this can be found in the screenshot below.

Typically, users of EMPeror are accustomed to interpreting axis 1 as explaining the most variation in the data. At present, this ordering reflects an inversion which may catch users off guard.

image-1

Rename commands

autoencoder is outdated here, since we have changed the training scheme.

This should be referred to as a neural network or something instead

Clean up model diagnostics

Try to make the model diagnostics returned by the standalone CLI more clean.

In other words, add labels to the output matrices.

make cli more consistent with q2

Need to output tsv for ranks instead of csv

Otherwise, there will be import issues when you import the ranks as a qiime2 object

rank dimension

Need to be careful about the rank dimensions when latent_dim = 1

Alpha release checklist

  • Travis setup
  • Update README with examples + tutorial
  • setup.py corrections
  • License
  • Changelog
  • Pip install
  • Conda install
  • Basic summary command
  • q2 rhapsody plugin

Update qiime2/GNPS tutorials

The following needs to be included in the tutorials

  • qiime2 import
  • GNPS links / download and unzip
  • Docker installation
  • Heatmap / paired heatmap tutorial

Unable to install both in Qiime2 environments and as standalone

I've tried installing within qiime2-2019.7 conda environments both on a cluster (barnacle) and on my local machine; I get errors when I try to refresh the cache. I've also tried installing mmvec alone on barnacle and got similar errors. Throwing them here -- they are all related to tensorflow.

Local Machine Error (qiime2 conda-env):
"Traceback (most recent call last):
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/python/pywrap_tensorflow.py", line 58, in
from tensorflow.python.pywrap_tensorflow_internal import *
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py", line 28, in
_pywrap_tensorflow_internal = swig_import_helper()
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: dlopen(/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so, 6): Symbol not found: _SecKeyCopyExternalRepresentation
Referenced from: /Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/python/../libtensorflow_framework.2.dylib
Expected in: /System/Library/Frameworks/Security.framework/Versions/A/Security
in /Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/python/../libtensorflow_framework.2.dylib

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/bin/qiime", line 11, in
sys.exit(qiime())
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/click/core.py", line 764, in call
return self.main(*args, **kwargs)
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/click/core.py", line 1132, in invoke
cmd_name, cmd, args = self.resolve_command(ctx, args)
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/click/core.py", line 1171, in resolve_command
cmd = self.get_command(ctx, cmd_name)
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/q2cli/commands.py", line 101, in get_command
plugin = self._plugin_lookup[name]
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/q2cli/commands.py", line 77, in _plugin_lookup
import q2cli.core.cache
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/q2cli/core/cache.py", line 403, in
CACHE = DeploymentCache()
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/q2cli/core/cache.py", line 61, in init
self._state = self._get_cached_state(refresh=refresh)
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/q2cli/core/cache.py", line 107, in _get_cached_state
self._cache_current_state(current_requirements)
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/q2cli/core/cache.py", line 200, in _cache_current_state
state = self._get_current_state()
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/q2cli/core/cache.py", line 238, in _get_current_state
plugin_manager = qiime2.sdk.PluginManager()
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/qiime2/sdk/plugin_manager.py", line 44, in new
self._init()
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/qiime2/sdk/plugin_manager.py", line 59, in _init
plugin = entry_point.load()
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/pkg_resources/init.py", line 2434, in load
return self.resolve()
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/pkg_resources/init.py", line 2440, in resolve
module = import(self.module_name, fromlist=['name'], level=0)
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/mmvec/q2/init.py", line 2, in
from ._method import paired_omics
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/mmvec/q2/_method.py", line 4, in
import tensorflow as tf
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow/init.py", line 98, in
from tensorflow_core import *
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/init.py", line 40, in
from tensorflow.python.tools import module_util as _module_util
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow/init.py", line 50, in getattr
module = self._load()
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow/init.py", line 44, in _load
module = _importlib.import_module(self.name)
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/importlib/init.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/python/init.py", line 49, in
from tensorflow.python import pywrap_tensorflow
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/python/pywrap_tensorflow.py", line 74, in
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/python/pywrap_tensorflow.py", line 58, in
from tensorflow.python.pywrap_tensorflow_internal import *
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py", line 28, in
_pywrap_tensorflow_internal = swig_import_helper()
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: dlopen(/Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so, 6): Symbol not found: _SecKeyCopyExternalRepresentation
Referenced from: /Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/python/../libtensorflow_framework.2.dylib
Expected in: /System/Library/Frameworks/Security.framework/Versions/A/Security
in /Users/avrbanac/miniconda3/envs/qiime2-2019.7/lib/python3.6/site-packages/tensorflow_core/python/../libtensorflow_framework.2.dylib

Failed to load the native TensorFlow runtime. "

Barnacle Error (qiime2 conda):
"Traceback (most recent call last):
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/bin/qiime", line 11, in
sys.exit(qiime())
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/click/core.py", line 764, in call
return self.main(*args, **kwargs)
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/q2cli/builtin/dev.py", line 31, in refresh_cache
import q2cli.core.cache
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/q2cli/core/cache.py", line 403, in
CACHE = DeploymentCache()
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/q2cli/core/cache.py", line 61, in init
self._state = self._get_cached_state(refresh=refresh)
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/q2cli/core/cache.py", line 107, in _get_cached_state
self._cache_current_state(current_requirements)
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/q2cli/core/cache.py", line 200, in _cache_current_state
state = self._get_current_state()
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/q2cli/core/cache.py", line 238, in _get_current_state
plugin_manager = qiime2.sdk.PluginManager()
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/qiime2/sdk/plugin_manager.py", line 44, in new
self._init()
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/qiime2/sdk/plugin_manager.py", line 59, in _init
plugin = entry_point.load()
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/pkg_resources/init.py", line 2434, in load
return self.resolve()
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/pkg_resources/init.py", line 2440, in resolve
module = import(self.module_name, fromlist=['name'], level=0)
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/mmvec/q2/init.py", line 2, in
from ._method import paired_omics
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/mmvec/q2/_method.py", line 7, in
from mmvec.multimodal import MMvec
File "/home/avrbanac/miniconda2/envs/qiime2-2019.7/lib/python3.6/site-packages/mmvec/multimodal.py", line 6, in
from tensorflow.contrib.distributions import Multinomial, Normal
ModuleNotFoundError: No module named 'tensorflow.contrib'"

Barnacle Error (standalone mmvec):
"Traceback (most recent call last):
File "/home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in
from tensorflow.python.pywrap_tensorflow_internal import *
File "/home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in
_pywrap_tensorflow_internal = swig_import_helper()
File "/home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: /lib64/libc.so.6: version `GLIBC_2.17' not found (required by /home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/avrbanac/miniconda2/envs/mmvec/bin/mmvec", line 17, in
import tensorflow as tf
File "/home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/site-packages/tensorflow/init.py", line 22, in
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "/home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/site-packages/tensorflow/python/init.py", line 49, in
from tensorflow.python import pywrap_tensorflow
File "/home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in
from tensorflow.python.pywrap_tensorflow_internal import *
File "/home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in
_pywrap_tensorflow_internal = swig_import_helper()
File "/home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: /lib64/libc.so.6: version `GLIBC_2.17' not found (required by /home/avrbanac/miniconda2/envs/mmvec/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so)

Failed to load the native TensorFlow runtime."

Error running on GPU: device renaming issue?

Hi,

So here's a command run on a GPU node in an interactive slurm srun session:

$ rhapsody mmvec \
   --microbe-file A.biom \
   --metabolite-file B.biom  \
   --min-feature-count 5  \
   --epochs 20000 \
   --batch-size 1000  \
   --latent-dim 3  \
   --input-prior 1  \
   --learning-rate 1e-4  \
   --beta1 0.85 \
   --beta2 0.90  \
   --checkpoint-interval 60  \
   --summary-interval 60 \
   --arm-the-gpu  \
   --summary-dir gpu_1000_1e-4_20000  \
   --ranks-file gpu_1000_1e-4_20000/ranks.csv

The (long) error (sorry):


WARNING: Logging before flag parsing goes to stderr.
W0828 12:38:30.259999 140077172123456 deprecation_wrapper.py:119] From /home/flejzerowicz/rhapsody_ve_new/bin/rhapsody:156: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

W0828 12:38:30.262325 140077172123456 deprecation_wrapper.py:119] From /home/flejzerowicz/rhapsody_ve_new/bin/rhapsody:157: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2019-08-28 12:38:30.262596: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2019-08-28 12:38:30.273506: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2019-08-28 12:38:32.273961: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560b1e030b60 executing computations on platform CUDA. Devices:
2019-08-28 12:38:32.274039: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Tesla V100-PCIE-32GB, Compute Capability 7.0
2019-08-28 12:38:32.291287: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2100000000 Hz
2019-08-28 12:38:32.294314: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560b1d6caf10 executing computations on platform Host. Devices:
2019-08-28 12:38:32.294405: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2019-08-28 12:38:32.297357: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: 
name: Tesla V100-PCIE-32GB major: 7 minor: 0 memoryClockRate(GHz): 1.38
pciBusID: 0000:5e:00.0
2019-08-28 12:38:32.298520: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/slurm-18.08.0/lib::/home/flejzerowicz/local/lib:/home/flejzerowicz/local/lib64:/home/flejzerowicz/openssl/lib:/home/flejzerowicz/usr/lib/lib/:/home/flejzerowicz/local/lib:/home/flejzerowicz/local/lib64
2019-08-28 12:38:32.299494: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcublas.so.10.0'; dlerror: libcublas.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/slurm-18.08.0/lib::/home/flejzerowicz/local/lib:/home/flejzerowicz/local/lib64:/home/flejzerowicz/openssl/lib:/home/flejzerowicz/usr/lib/lib/:/home/flejzerowicz/local/lib:/home/flejzerowicz/local/lib64
2019-08-28 12:38:32.300329: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcufft.so.10.0'; dlerror: libcufft.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/slurm-18.08.0/lib::/home/flejzerowicz/local/lib:/home/flejzerowicz/local/lib64:/home/flejzerowicz/openssl/lib:/home/flejzerowicz/usr/lib/lib/:/home/flejzerowicz/local/lib:/home/flejzerowicz/local/lib64
2019-08-28 12:38:32.301209: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcurand.so.10.0'; dlerror: libcurand.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/slurm-18.08.0/lib::/home/flejzerowicz/local/lib:/home/flejzerowicz/local/lib64:/home/flejzerowicz/openssl/lib:/home/flejzerowicz/usr/lib/lib/:/home/flejzerowicz/local/lib:/home/flejzerowicz/local/lib64
2019-08-28 12:38:32.302105: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcusolver.so.10.0'; dlerror: libcusolver.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/slurm-18.08.0/lib::/home/flejzerowicz/local/lib:/home/flejzerowicz/local/lib64:/home/flejzerowicz/openssl/lib:/home/flejzerowicz/usr/lib/lib/:/home/flejzerowicz/local/lib:/home/flejzerowicz/local/lib64
2019-08-28 12:38:32.302962: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcusparse.so.10.0'; dlerror: libcusparse.so.10.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/slurm-18.08.0/lib::/home/flejzerowicz/local/lib:/home/flejzerowicz/local/lib64:/home/flejzerowicz/openssl/lib:/home/flejzerowicz/usr/lib/lib/:/home/flejzerowicz/local/lib:/home/flejzerowicz/local/lib64
2019-08-28 12:38:32.304020: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcudnn.so.7'; dlerror: libcudnn.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/slurm-18.08.0/lib::/home/flejzerowicz/local/lib:/home/flejzerowicz/local/lib64:/home/flejzerowicz/openssl/lib:/home/flejzerowicz/usr/lib/lib/:/home/flejzerowicz/local/lib:/home/flejzerowicz/local/lib64
2019-08-28 12:38:32.304122: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1663] Cannot dlopen some GPU libraries. Skipping registering GPU devices...
2019-08-28 12:38:32.304182: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-28 12:38:32.304231: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0 
2019-08-28 12:38:32.304265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N 
W0828 12:38:32.641206 140077172123456 deprecation_wrapper.py:119] From /home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/rhapsody/multimodal.py:94: The name tf.log is deprecated. Please use tf.math.log instead.

W0828 12:38:32.643565 140077172123456 deprecation.py:323] From /home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/rhapsody/multimodal.py:95: multinomial (from tensorflow.python.ops.random_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.random.categorical` instead.
W0828 12:38:32.655179 140077172123456 deprecation_wrapper.py:119] From /home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/rhapsody/multimodal.py:106: The name tf.random_normal is deprecated. Please use tf.random.normal instead.

W0828 12:38:32.694295 140077172123456 deprecation.py:323] From /home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/rhapsody/multimodal.py:122: Normal.__init__ (from tensorflow.python.ops.distributions.normal) is deprecated and will be removed after 2019-01-01.
Instructions for updating:
The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.
W0828 12:38:32.695811 140077172123456 deprecation.py:323] From /home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/tensorflow/python/ops/distributions/normal.py:160: Distribution.__init__ (from tensorflow.python.ops.distributions.distribution) is deprecated and will be removed after 2019-01-01.
Instructions for updating:
The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.
W0828 12:38:32.724381 140077172123456 deprecation.py:323] From /home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/rhapsody/multimodal.py:139: Multinomial.__init__ (from tensorflow.python.ops.distributions.multinomial) is deprecated and will be removed after 2019-01-01.
Instructions for updating:
The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use `tfp.distributions` instead of `tf.distributions`.
W0828 12:38:32.802299 140077172123456 deprecation_wrapper.py:119] From /home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/rhapsody/multimodal.py:187: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.

W0828 12:38:32.805364 140077172123456 deprecation_wrapper.py:119] From /home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/rhapsody/multimodal.py:189: The name tf.summary.histogram is deprecated. Please use tf.compat.v1.summary.histogram instead.

W0828 12:38:32.810857 140077172123456 deprecation_wrapper.py:119] From /home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/rhapsody/multimodal.py:193: The name tf.summary.merge_all is deprecated. Please use tf.compat.v1.summary.merge_all instead.

W0828 12:38:32.812450 140077172123456 deprecation_wrapper.py:119] From /home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/rhapsody/multimodal.py:195: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.

W0828 12:38:32.851014 140077172123456 deprecation_wrapper.py:119] From /home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/rhapsody/multimodal.py:200: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.

W0828 12:38:33.204426 140077172123456 deprecation.py:323] From /home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/tensorflow/python/ops/clip_ops.py:286: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
W0828 12:38:33.331943 140077172123456 deprecation_wrapper.py:119] From /home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/rhapsody/multimodal.py:210: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.

Traceback (most recent call last):
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
    return fn(*args)
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1339, in _run_fn
    self._extend_graph()
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1374, in _extend_graph
    tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation random_normal/RandomStandardNormal: {{node random_normal/RandomStandardNormal}}was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:XLA_CPU:0, /job:localhost/replica:0/task:0/device:XLA_GPU:0 ]. Make sure the device specification refers to a valid device.
	 [[random_normal/RandomStandardNormal]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/flejzerowicz/rhapsody_ve_new/bin/rhapsody", line 221, in <module>
    rhapsody()
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/flejzerowicz/rhapsody_ve_new/bin/rhapsody", line 168, in mmvec
    test_microbes_coo, test_metabolites_df.values)
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/rhapsody/multimodal.py", line 210, in __call__
    tf.global_variables_initializer().run()
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2679, in run
    _run_using_default_session(self, feed_dict, self.graph, session)
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 5614, in _run_using_default_session
    session.run(operation, feed_dict)
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1173, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1350, in _do_run
    run_metadata)
  File "/home/flejzerowicz/rhapsody_ve_new/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1370, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation random_normal/RandomStandardNormal: node random_normal/RandomStandardNormal (defined at /lib/python3.6/site-packages/rhapsody/multimodal.py:106) was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:XLA_CPU:0, /job:localhost/replica:0/task:0/device:XLA_GPU:0 ]. Make sure the device specification refers to a valid device.
	 [[random_normal/RandomStandardNormal]]

Note the maybe-relevant sinfo output:

$ sinfo -p gpu -N -o "%c %D %G %m %P"

CPUS NODES GRES MEMORY PARTITION
32 1 gpu:1 94208 gpu
32 1 gpu:1 94208 gpu

Any help greatly appreciated :)
Thanks!
Franck

Enabling the download of raw conditional probabilities

This came from a discussion with @wasade. Right now, all of the conditional probabilities are log transformed.

It could be more intuitive for users to have access to the raw conditional probabilities.
We could have them downloadable in the rank-heatmap or the paired-heatmap commands.
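
As a rough sketch of what that could look like (assuming the unzipped conditionals artifact is a tab-delimited table of log conditional probabilities with microbes as rows; the file names here are illustrative):

import numpy as np
import pandas as pd

# Exponentiate the log conditional probabilities and renormalize each
# row so that the probabilities for a microbe sum to one.
log_probs = pd.read_csv("conditionals.tsv", sep="\t", index_col=0)
probs = np.exp(log_probs)
probs = probs.div(probs.sum(axis=1), axis=0)
probs.to_csv("raw_conditionals.tsv", sep="\t")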

ImportError: google/protobuf/pyext/_message.cpython

Any ideas on why the miniconda install failed?

rhapsody  -h
Traceback (most recent call last):
  File "/home/linuxbrew/miniconda3/envs/rhapsody/bin/rhapsody", line 15, in <module>
    import tensorflow as tf
  File "/home/linuxbrew/miniconda3/envs/rhapsody/lib/python3.5/site-packages/tensorflow/__init__.py", line 22, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/home/linuxbrew/miniconda3/envs/rhapsody/lib/python3.5/site-packages/tensorflow/python/__init__.py", line 52, in <module>
    from tensorflow.core.framework.graph_pb2 import *
  File "/home/linuxbrew/miniconda3/envs/rhapsody/lib/python3.5/site-packages/tensorflow/core/framework/graph_pb2.py", line 6, in <module>
    from google.protobuf import descriptor as _descriptor
  File "/home/linuxbrew/miniconda3/envs/rhapsody/lib/python3.5/site-packages/google/protobuf/descriptor.py", line 47, in <module>
    from google.protobuf.pyext import _message
ImportError: /home/linuxbrew/miniconda3/envs/rhapsody/lib/python3.5/site-packages/google/protobuf/pyext/_message.cpython-35m-x86_64-linux-gnu.so: undefined symbol: _ZNK6google8protobuf10TextFormat17FieldValuePrinter9PrintBoolEb

Add in rank heatmap

This can allow for diagnostics of the posterior distributions of the resulting ranks.

Better summaries

In the old version of mmvec, we stored histograms of the parameters - this has not been added to the pytorch version yet.

I'm thinking this may be a good time to refine what sort of summaries should be stored.
We can easily enable anything listed on this tutorial, for example (a minimal sketch follows this list):

  • Histograms of all model parameters
  • Histograms of all gradients
  • Tensorboard embeddings - this would be a cheap way to enable searchability across points / metadata (see here: biocore/emperor#710 @ElDeveloper )
  • Display images of molecular structures - should be possible to tack on top of embeddings, provided we have a mapping from images to MS2 IDs @lfnothias @mwang87
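
Here is a minimal sketch of parameter and gradient histograms using torch.utils.tensorboard; the toy linear model, optimizer, and data below are illustrative stand-ins, not mmvec's actual model.

import torch
from torch.utils.tensorboard import SummaryWriter

# Illustrative only: a toy model standing in for the mmvec model.
model = torch.nn.Linear(10, 3)
opt = torch.optim.Adam(model.parameters())
writer = SummaryWriter(log_dir="summary")
x, y = torch.randn(32, 10), torch.randn(32, 3)
for step in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    writer.add_scalar("loss", loss.item(), step)
    for name, param in model.named_parameters():
        writer.add_histogram(name, param, step)                 # parameter histograms
        writer.add_histogram(name + "/grad", param.grad, step)  # gradient histograms
writer.close()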

Upcoming commands

Since we are going to be doing a refactor of the CLI anyway, it may be worthwhile to think about what commands to have.

Right now, we have

rhapsody mmvec --help

Which is ok if mmvec is the only command - however, in the near future we'll be adding multiple additional commands, namely:

  • seq2seq: to explicitly handle sequence count inputs and sequence count outputs (the current default). We may want to add overdispersion later.
  • seq2spectra: to handle count inputs and continuous (lognormal) outputs.

If we go this route, it also makes sense to ditch the wrapper name and call this whole package mmvec. So this would now look like

mmvec seq2seq --help
mmvec seq2spectra --help

Linked scatter plots through vega?

Relevant to #75

One question is: how can one choose the right microbes? Right now, the only way to do this is to select microbes by pointing and clicking through Emperor.

A better way would be to first rank microbes by the number of samples in which they are most abundant (i.e., their maximal samples), and then see how their balances are correlated.

For instance, we could have something like this (where microbes are points)

[image: scatter plot where microbes are points]

And selecting 2 points will inform something like this

[image: linked plot for the selected points]

It'll make this process a little more streamlined (it's quite manual in notebooks at the moment). @fedarko, would this be something appropriate for qurro? If so, we may be able to contribute something.
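
For what it's worth, the linked-selection behavior can be prototyped in a few lines with Altair, a Python wrapper around Vega-Lite; the data frame and column names below are made up for illustration.

import altair as alt
import pandas as pd

# Illustrative data: each row is a microbe with ordination coordinates,
# its maximal sample, and a balance value.
df = pd.DataFrame({
    "axis1": [0.1, -0.3, 0.5], "axis2": [0.2, 0.4, -0.1],
    "sample": ["s1", "s2", "s3"], "balance": [1.2, -0.7, 0.3],
})
brush = alt.selection_interval()

# Scatter plot of microbes; dragging a box selects points.
microbes = alt.Chart(df).mark_point().encode(
    x="axis1", y="axis2",
    color=alt.condition(brush, alt.value("steelblue"), alt.value("lightgray")),
).add_selection(brush)

# Linked view: only the balances of the selected microbes are shown.
balances = alt.Chart(df).mark_bar().encode(x="sample", y="balance").transform_filter(brush)

(microbes | balances).save("linked.html")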

Low level optimizations

Right now, our optimization scheme is pretty dumb - it's just iterating through the microbial reads one at a time.

This can be drastically sped up (likely 100-1000x faster) if all of the reads for a single microbe were considered in a single step. This should yield identical results, since the sum of Gamma/Poisson RVs is still Gamma/Poisson.
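
A quick numerical sanity check of that identity (illustrative numbers): summing k independent Poisson(λ) draws matches a single Poisson(kλ) draw in mean and variance, which is what justifies collapsing per-read updates into per-microbe updates.

import numpy as np

rng = np.random.default_rng(0)
k, lam, n = 10, 3.0, 100_000

# Per-read: k separate Poisson draws summed per trial.
per_read = rng.poisson(lam, size=(n, k)).sum(axis=1)
# Batched: one Poisson draw with the aggregated rate.
batched = rng.poisson(k * lam, size=n)

print(per_read.mean(), batched.mean())  # both approximately 30
print(per_read.var(), batched.var())    # both approximately 30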

Early stopping

It may be advantageous to have a way to stop fitting after the model has reached convergence.
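
A minimal sketch of what that could look like (the function name, patience, and tolerance are illustrative, not an existing mmvec API):

def should_stop(history, patience=10, tol=1e-4):
    """Stop when the last `patience` losses fail to improve on the
    best earlier loss by more than `tol`."""
    if len(history) <= patience:
        return False
    best_earlier = min(history[:-patience])
    return min(history[-patience:]) > best_earlier - tol

# Example: a loss curve that plateaus triggers the stop.
losses = [1.0, 0.5, 0.3, 0.25] + [0.249] * 12
print(should_stop(losses))  # True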

Switch to pytorch

PyTorch is proving to be much, much easier to debug.

This will be key, especially when enabling Bayesian inference via variational inference.

missing index.html

Using the current conda installation, I get the following from the heatmap function:

FileNotFoundError: [Errno 2] No such file or directory: '/Users/ ... /assets/index.html'

I then downloaded and added the asset + index file and everything ran fine.

It seems like, for some reason, it was just not packaged with the newest conda install.

Unable to install through conda in Qiime environment - "ImportError: No module named 'minstrel.q2'"

I am trying to install through conda in a Qiime environment, but I am receiving an error when calling the plugin. Conda says the package is installed correctly, but calling qiime minstrel --help returns

QIIME is caching your current deployment for improved performance. This may take a few moments and should only happen once per deployment.

Traceback (most recent call last):
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/bin/qiime", line 11, in <module>
    sys.exit(qiime())
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/click/core.py", line 1132, in invoke
    cmd_name, cmd, args = self.resolve_command(ctx, args)
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/click/core.py", line 1171, in resolve_command
    cmd = self.get_command(ctx, cmd_name)
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/q2cli/commands.py", line 99, in get_command
    plugin = self._plugin_lookup[name]
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/q2cli/commands.py", line 75, in _plugin_lookup
    import q2cli.cache
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/q2cli/cache.py", line 301, in <module>
    CACHE = DeploymentCache()
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/q2cli/cache.py", line 61, in __init__
    self._state = self._get_cached_state(refresh=refresh)
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/q2cli/cache.py", line 107, in _get_cached_state
    self._cache_current_state(current_requirements)
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/q2cli/cache.py", line 200, in _cache_current_state
    state = self._get_current_state()
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/q2cli/cache.py", line 238, in _get_current_state
    plugin_manager = qiime2.sdk.PluginManager()
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/qiime2/sdk/plugin_manager.py", line 44, in __new__
    self._init()
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/qiime2/sdk/plugin_manager.py", line 59, in _init
    plugin = entry_point.load()
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/pkg_resources/__init__.py", line 2332, in load
    return self.resolve()
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/pkg_resources/__init__.py", line 2338, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
ImportError: No module named 'minstrel.q2'

I also tried qiime dev refresh-cache and received a similar error:

QIIME is caching your current deployment for improved performance. This may take a few moments and should only happen once per deployment.

Traceback (most recent call last):
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/bin/qiime", line 11, in <module>
    sys.exit(qiime())
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/q2cli/dev.py", line 27, in refresh_cache
    import q2cli.cache
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/q2cli/cache.py", line 301, in <module>
    CACHE = DeploymentCache()
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/q2cli/cache.py", line 61, in __init__
    self._state = self._get_cached_state(refresh=refresh)
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/q2cli/cache.py", line 107, in _get_cached_state
    self._cache_current_state(current_requirements)
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/q2cli/cache.py", line 200, in _cache_current_state
    state = self._get_current_state()
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/q2cli/cache.py", line 238, in _get_current_state
    plugin_manager = qiime2.sdk.PluginManager()
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/qiime2/sdk/plugin_manager.py", line 44, in __new__
    self._init()
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/qiime2/sdk/plugin_manager.py", line 59, in _init
    plugin = entry_point.load()
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/pkg_resources/__init__.py", line 2332, in load
    return self.resolve()
  File "/Users/gibraanrahman/miniconda3/envs/qiime2/lib/python3.5/site-packages/pkg_resources/__init__.py", line 2338, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
ImportError: No module named 'minstrel.q2'

q2 version and sample identifiers

Hi Jamie,
After a couple of months working on other projects, I've come back to our project where we want to look at correlations between the taxa in the oral microbiome and cytokine ("metabolite") concentrations (you helped Anna and me with this project using Songbird back in the fall). I did a fresh install of rhapsody as of today. I had an issue with the standalone version not responding after the progress bar was full, which I'll post as a separate issue. I tried out the QIIME2 version with my data and actually got biplot.qza and ranks.qza output files, but when I tried to visualize the biplot, I got the following error:

Plugin error from emperor:

None of the sample identifiers match between the metadata and the coordinates. Verify that you are using metadata and coordinates corresponding to the same dataset.

Debug info has been saved to /var/folders/rr/2hbbkrqs5mn86rnt2s4yywsr0003kj/T/qiime2-q2cli-err-hdt_bqk5.log

My sample IDs definitely match the metadata file; I tested them by running DEICODE and the subsequent Emperor plot with the same microbes.qza and metadata file, and it worked fine.

To try and debug this myself, I downloaded your CF dataset in knightlab-analyses/multiomic-cooccurences. Same as with my data, I was able to get the biplot and ranks qza files, but I got the same error when trying to make the emperor plot using your metadata files:

qiime emperor biplot \
        --i-biplot biplot.qza \
        --m-sample-metadata-file sample-metadata.txt \
        --m-feature-metadata-file validated-molecules.txt \
        --o-visualization emperor.qzv

It looks like all the sample and feature IDs match between the metadata and tables... any help you could provide would be greatly appreciated. I see you're presenting on this at Rob and Pieter's meeting in a couple weeks, are you actually going to be in town?
Best,

Jon

Obvious instability warnings

The produced biplots can sometimes have surprising proportions explained, such that the proportion explained increases across axes (axis 1 < axis 2 < axis 3) or exceeds 100% for a single axis. It would be nice if mmvec generated a warning when these scenarios are observed in the results and suggested that the user may not want to use the output.
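
A hedged sketch of such a check (the function name and message are illustrative):

import warnings
import numpy as np

def check_proportions(proportion_explained):
    """Warn when proportions explained increase across axes or exceed 1."""
    p = np.asarray(proportion_explained)
    if np.any(p > 1) or np.any(np.diff(p) > 0):
        warnings.warn(
            "Proportion explained is non-monotone or exceeds 100%; "
            "the model may be unstable and the output may not be reliable."
        )

check_proportions([0.2, 0.5, 1.3])  # triggers the warning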

Clean up branches

There is a version1 branch on mmvec, but that is now a misnomer, since the version1 release will be with TensorFlow.

We will need to rename this branch to pytorch.

Need warning for cross validation

There is a scenario where, if you specify your samples for training and testing and the testing samples get filtered out, you may get the error below. It basically says that your testing samples were filtered out, but a clearer message is needed.

(from tensorflow.python.ops.random_ops) is deprecated and will be removed in a future version.
...skipping...
 accuracy/Log (defined at /lib/python3.6/site-packages/mmvec/multimodal.py:159)

Input Source operations connected to node accuracy/multinomial/Multinomial:
 accuracy/Log (defined at /lib/python3.6/site-packages/mmvec/multimodal.py:159)

Original stack trace for 'accuracy/multinomial/Multinomial':
  File "/bin/mmvec", line 225, in <module>
    mmvec()
  File "/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/bin/mmvec", line 168, in paired_omics
    test_microbes_coo, test_metabolites_df.values)
  File "/lib/python3.6/site-packages/mmvec/multimodal.py", line 160, in __call__
    self.cv_size)
  File "/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/lib/python3.6/site-packages/tensorflow/python/ops/random_ops.py", line 357, in multinomial
    return multinomial_categorical_impl(logits, num_samples, output_dtype, seed)
  File "/lib/python3.6/site-packages/tensorflow/python/ops/random_ops.py", line 393, in multinomial_categorical_impl
    logits, num_samples, seed=seed1, seed2=seed2, output_dtype=dtype)
  File "/lib/python3.6/site-packages/tensorflow/python/ops/gen_random_ops.py", line 85, in multinomial
    seed2=seed2, output_dtype=output_dtype, name=name)
  File "/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
    op_def=op_def)
  File "/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2005, in __init__
    self._traceback = tf_stack.extract_stack()
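
One possible shape for the clearer check, sketched under the assumption that we know the requested test IDs and the IDs that survive filtering (the names are illustrative, not an existing mmvec API):

def validate_split(test_ids, surviving_ids):
    """Fail early, with a readable message, when filtering removes
    every requested testing sample."""
    remaining = set(test_ids) & set(surviving_ids)
    if not remaining:
        raise ValueError(
            "All testing samples were filtered out of the input tables; "
            "check your train/test assignments and filtering thresholds."
        )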

Simplify the standalone CLI interface

There are currently a bunch of files deposited, and it isn't clear how these files can be used.

It'll be easier for the standalone CLI to deposit an OrdinationResults object, as is done in the qiime2 interface. This will be especially important when enabling GPU support.
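
A rough sketch of depositing such an object with scikit-bio (the eigenvalues and coordinates below are dummy placeholders):

import pandas as pd
from skbio import OrdinationResults

axes = ["PC1", "PC2", "PC3"]
eigvals = pd.Series([0.6, 0.3, 0.1], index=axes)
samples = pd.DataFrame([[0.1, 0.2, 0.0]], index=["microbe1"], columns=axes)
features = pd.DataFrame([[0.4, -0.1, 0.2]], index=["metabolite1"], columns=axes)

# Package the fitted embeddings as an ordination and write it to disk.
ord_res = OrdinationResults(
    "mmvec", "mmvec biplot", eigvals, samples,
    features=features, proportion_explained=eigvals / eigvals.sum(),
)
ord_res.write("ordination.txt")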

Enable simple hypothesis tests

This comes from another discussion with @wasade. It could be useful to enable simple hypothesis tests for microbe-metabolite interactions.

As shown in the CF study, we can perform a t-test on the log ratios to see if there is a clear separation between specified microbes and their corresponding metabolites.
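
A minimal sketch of that kind of test with scipy (the log-ratio arrays are simulated placeholders, not CF data):

import numpy as np
from scipy.stats import ttest_ind

# Simulated per-sample log ratios for two groups of interest.
rng = np.random.default_rng(42)
group_a = rng.normal(0.5, 1.0, size=30)
group_b = rng.normal(-0.2, 1.0, size=30)

t, p = ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.3g}")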

Ordination results as a separate command

It may make sense to decouple the creation of the OrdinationResults object from the model fitting. This is advantageous for the following reasons:

  1. It helps with software modularity.
  2. It is easier to debug (it is not exactly clear what the best centering procedure would be).
  3. Bootstrapping may be advantageous -- with a posterior distribution, we can have jack-knifed PCoA biplots.

The creation of this procedure will live in a separate CLI command.
