
CANDIA: Canonical Decomposition of Data-Independent-Acquired Spectra


CANDIA is a GPU-powered unsupervised multiway factor analysis framework that deconvolves multispectral scans to individual analyte spectra, chromatographic profiles, and sample abundances, using the PARAFAC (or canonical decomposition) method. The deconvolved spectra can be annotated with traditional database search engines or used as a high-quality input for de novo sequencing methods.

Parallel Factor Analysis Enables Quantification and Identification of Highly Convolved Data-Independent-Acquired Protein Spectra
Filip Buric, Jan Zrimec, Aleksej Zelezniak. Patterns (Cell Press), 2020; DOI: https://doi.org/10.1016/j.patter.2020.100137

Note

This repository holds the CANDIA scripts that produced the results in the submitted article (in the revision tagged "submission"). Restructuring the pipeline and making it easier to use is a work in progress. The repository will be updated, but the "submission" revision remains available as a snapshot of the source code at the time of manuscript submission.

Hardware Requirements

For the decomposition, an NVIDIA GPU is required. Models known to perform well: K80, GP100, and V100. Minimum GPU RAM: 8 GB. Recommended: >= 16 GB.

The preprocessing and downstream stages should perform acceptably with as few as 8 CPUs at 2.5 GHz and 16 GB RAM. Recommended: >= 16 CPUs at >= 3 GHz, and >= 32 GB RAM.
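To check which GPU model and how much GPU memory is available on your machine, the standard NVIDIA driver utility can be used (this assumes the NVIDIA driver is installed on the host; the command is not part of CANDIA):

nvidia-smi --query-gpu=name,memory.total --format=csv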

User guide

Installation

The current CANDIA distribution is split into two components:

  • the scripts that make up the pipeline (clone this repo to fetch them)
  • a Singularity container which includes all dependencies for running CANDIA (Python libraries and other necessary third-party software)

We recommend using the Singularity container provided on Singularity Hub (built with Singularity version 3.4.2). The container should be placed inside the cloned CANDIA repo. To use the container:

  1. Install Singularity 3.x on your system
  2. Pull the container with the command singularity pull candia.sif shub://fburic/candia:def. This will download the container candia.sif (4.4 GB) into the current working directory. The main candia script assumes candia.sif is located in the CANDIA repo.

Alternatively, the container may be built from the supplied candia.def file (requires root permissions, see instructions here).
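Once the container is pulled or built, a quick sanity check is to run a trivial command inside it, for instance (a minimal example; it assumes candia.sif sits in the current directory):

singularity exec candia.sif python --version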

The bulk of dependencies is managed through the conda packaging system. The container simply encapsulates such an environment to ensure portability, but a conda environment can also be built from scratch on the user's system with the provided candia_env.yaml specification file. Here, commands will be shown using the Singularity container.
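If you prefer a native conda environment over the container, a sketch of creating it from the provided specification is shown below. The environment name is whatever candia_env.yaml defines; "candia" is only an illustrative guess.

conda env create -f candia_env.yaml
conda activate candia    # replace "candia" with the environment name from candia_env.yaml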

CANDIA was developed and used on POSIX systems: Ubuntu 18.04 and CentOS 7.8 Linux, as well as macOS 10.13 (High Sierra), 10.14 (Mojave), 10.15 (Catalina). By using the Singularity container, CANDIA should be runnable on any OS that supports Singularity.

Third-party software

The Singularity container (and associated conda environment) currently includes:

  • Crux 3.2 (package crux-toolkit, bioconda channel)
  • TPP 5.0.0 (package tpp, bioconda channel)
  • msproteomicstools 0.11.0 (bioconda channel)

The software below is either not distributed through package management archives, or the packaged versions did not work with this setup.

MS-GF+

MS-GF+ comes as a Java JAR package and only needs to be unzipped into a directory (e.g. /home/$USER/software). Installation instructions: https://github.com/MSGFPlus/msgfplus

The environment variable MSGF_JAR_PATH needs to be set to inform the pipeline of the .jar location. Add it to your .bashrc or .profile. E.g.

export MSGF_JAR_PATH="$HOME/software/MSGFPlus/MSGFPlus.jar"

DIA-NN

To perform quantification with the generated CANDIA library, we provide a wrapper script for the Linux version of DIA-NN. The user may install diann-linux from https://github.com/vdemichev/DiaNN (included in the "source code" package of each release).

CANDIA is known to work with DIA-NN 1.7.4.

The wrapper scripts/quantification/diann.Snakefile is simply a convenience script that supplies all necessary parameters and resumes the run on error. Once the library is created, the user may run DIA-NN independently of CANDIA or the Singularity workflow.

Mayu

Note: The tpp version 5.0.0-0 bioconda package, which also includes Mayu, may give errors about missing libraries. The user can install a stand-alone version as described here.

Mayu is distributed as a set of Perl scripts, so it only needs to be unzipped to be used. http://proteomics.ethz.ch/muellelu/web/LukasReiter/Mayu/

Mayu was designed to be executed from its installation directory (e.g. /home/$USER/software/Mayu), so the environment variable MAYU_STANDALONE_PATH needs to point at this location for the pipeline to be able to run it from anywhere.

Add it to your .bashrc or .profile. E.g.

export MAYU_STANDALONE_PATH="$HOME/software/Mayu"

Usage

The pipeline currently consists of a collection of scripts for the different stages of processing. The shell script candia is provided to run all pipeline steps from preprocessing up to and including quantification library creation. However, this script is still a work in progress and is currently mainly used to test that the pipeline can execute on the user's system. This can be done by running the following from CANDIA's top-level directory:

./candia test/test_experiment/config/candia.yaml

The general syntax of this script is: candia EXPERIMENT_CONFIG_YAML

Note The test experiment will only run properly up to and including the decomposition step. As this is the most complex part of the pipeline, a successful run should guarantee that your installation works for real data. A more complete test suite (more realistic data and unit tests) is planned.

Currently, it is recommended to run each stage one at a time, using the corresponding scripts. Step-by-step instructions are listed here using the toy data files provided in this repo. These data files are primarily meant to test that the pipeline is working properly.

Expected running times are given as a very rough estimate on 9 real DIA scan files, to indicate the time scale of the steps. These times will of course vary depending on the available resources. Actual running times for the test experiment should be exceedingly short.

We recommend running the pipeline on a workstation with multiple cores or a high-performance computing (HPC) environment. Currently, the HPC manager slurm is supported. A conda environment is also assumed to be present on the cluster. A Singularity recipe to encapsulate the environment is provided.

An NVIDIA GPU card is required for the decomposition. Partial support for CPU-only execution is implemented, but running times then become prohibitive.

0. Configure CANDIA execution

The execution of the pipeline is configured through a YAML file (e.g. test_experiment/config/candia.yaml). This configuration file specifies the location of input and intermediate files, as well as algorithm parameters. The paths are interpreted as relative to the experiment directory root_dir.

Good practice

It is recommended to create separate configuration files for each parameter variation to be included in final results, rather than changing a single configuration file. Different configuration files may also be created for each stage of the pipeline. The important thing is to supply the relevant parameters to each script.

Please read through the next steps to see which parameters are relevant at each step. A full configuration file is provided in the test experiment. More information may be found in the CANDIA article Supplemental Information.

The scripts are expected to be run from inside the CANDIA repo (thus first cd candia after cloning).
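As an orientation, a trimmed-down configuration sketch is shown below, written out with a shell heredoc. The keys are those referenced throughout this guide; all values and paths are illustrative placeholders, and the full, authoritative example is test/test_experiment/config/candia.yaml.

mkdir -p my_experiment/config
cat > my_experiment/config/candia.yaml <<'EOF'
# Illustrative values only -- see test/test_experiment/config/candia.yaml for a complete example
root_dir: my_experiment                      # all other paths are interpreted relative to this
samples_mzml: samples/mzml                   # input DIA scans
samples_csv: samples/scans_csv               # converted CSV scans
swath_windows: config/swath_windows.tsv      # precursor isolation windows
samples_adjusted_swaths: samples/scans_csv_adjusted
swath_windows_adjusted: config/swath_windows_adjusted.intervals
min_scan_intensity: 100
slices_location: samples/scans_csv_slices
window_size_sec: 100                         # RT slice width
mass_tol_ppm: 20
parafac_min_comp: 2                          # range of numbers of components F
parafac_max_comp: 12
analysis_pipeline: crux                      # or msgf+
database: database/targets.fasta
decoy_database: database/decoys.fasta
decoy_prefix: decoy_
EOF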

1. Convert DIA scan files from mzML to CSV

Expected running time: 3-5 minutes per file

The basename of the input mzML files will be kept downstream, with only the extension changed.

Relevant pipeline config values:

  • samples_mzml - location of input mzML files
  • samples_csv - location of output CSV files
  • swath_windows - file listing precursor isolation windows
  • min_scan_intensity - all values below this are dropped

Note: To make Snakemake use N cores, pass it the --jobs N argument. Otherwise, it should automatically use all available cores.

Commands:

configfile='test/test_experiment/config/candia.yaml'

singularity exec candia.sif \
    snakemake -s scripts/util/mzml2csv.Snakefile --configfile ${configfile}
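For example, to run the conversion on 8 cores (the core count here is purely illustrative):

singularity exec candia.sif \
    snakemake --jobs 8 -s scripts/util/mzml2csv.Snakefile --configfile ${configfile}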

2. Adjust precursor isolation windows

Overlapping isolation windows cannot be handled, so the window bounds are adjusted.

Expected running time: 2 min per file

Relevant pipeline config values:

  • samples_csv - location of CSV scan files
  • samples_adjusted_swaths - location of output adjusted CSV files
  • swath_windows_adjusted - file listing adjusted precursor isolation windows

Command:

singularity exec candia.sif \
    snakemake --forceall -p -s scripts/util/adjust_swaths.Snakefile --configfile  ${configfile}

Save the adjusted intervals for downstream tasks:

EXP_DIR=$(grep "root_dir:" ${configfile} | cut -d " " -f 2 | tr -d \")
cp $(find ${EXP_DIR} -name "*.intervals" | head -n 1) \
   ${EXP_DIR}/$(grep "swath_windows_adjusted:" ${configfile} | cut -d " " -f 2 | tr -d \")

3. Split samples into slices

The input scans are partitioned into swaths and RT windows of width window_size_sec. Note that this will create many small CSV files in slices_location/swath=value/rt_window=value subdirectories.

Expected running time: 30 min

Relevant pipeline config values:

  • samples_adjusted_swaths - location of the adjusted CSV files (input to this step)
  • window_size_sec - the width of the RT windows
  • slices_location - location of output slices

Command:

singularity exec candia.sif \
    python scripts/util/split_csv_maps_to_slices.py --config ${configfile}
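To verify that the slices were written, the expected RT-window subdirectories can be listed (a sketch; EXP_DIR is the experiment root extracted in step 2, and the directory layout follows the slices_location structure described above):

find "${EXP_DIR}" -type d -name "rt_window=*" | head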

4. Generate tensor files for all slices

The slices are converted into NumPy (swath, rt, sample) tensors stored as .npy files. The Snakefile will start the conversions for each slice in parallel, and each such job may be submitted independently as an HPC cluster job.

Expected running time: 10 - 40 min (depending on available resources)

Relevant pipeline config values:

  • mass_tol_ppm - The mass tolerance (in PPM) of the scan acquisition

The m/z precision is 10 decimals, and m/z partitions with fewer than 5 time points in any sample are filtered out.

singularity exec candia.sif \
    snakemake --jobs 4 -s scripts/util/generate_slice_tensors.Snakefile \
    -R generate_slice_tensor --forceall --rerun-incomplete --configfile ${configfile} 

On a slurm-managed cluster, this would be:

NJOBS=200
ACCOUNT_NUMBER=ABC123
singularity exec candia.sif \
    snakemake --jobs ${NJOBS} -s scripts/util/generate_slice_tensors.Snakefile \
    -R generate_slice_tensor --forceall --rerun-incomplete --configfile ${configfile} \
    --cluster "sbatch -A ${ACCOUNT_NUMBER} -t 06:00:00 --ntasks 6"

5. Run PARAFAC decomposition

A decomposition is performed on each slice tensor, for all numbers of components F in the configured range. Multiple tensors are decomposed in parallel on each available GPU card. If multiple GPUs are available, the set of input tensors is partitioned evenly between the cards.

Note that while the scripts below require a GPU, the decomposition script itself, scripts/parafac/decompose_parafac.py, may be run on CPUs if executed without the --use_gpu flag.
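To see the decomposition script's full argument list before running it outside the wrapper scripts, you can try the usual --help option (an assumption on our part; the flag is not documented here):

singularity exec candia.sif \
    python scripts/parafac/decompose_parafac.py --help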

Relevant pipeline config values:

  • parafac_* - decomposition parameters
  • parafac_min_comp - Lower bound of the number of components F to decompose for
  • parafac_max_comp - Upper bound of the number of components F to decompose for

Expected running time: 6 - 12 hours (depending on the GPU model)

Workstation

The workstation command has the syntax:

scripts/parafac/decompose_workstation.sh EXPERIMENT_CONFIG_FILE N_PARALLEL_DECOMP_PER_GPU [SNAKEMAKE_OPTIONS]

Example running with Singularity (with 2 parallel decompositions per GPU):

scripts/parafac/decompose_workstation.sh ${configfile} 2

HPC cluster

The cluster command has the syntax:

scripts/parafac/decompose_cluster.sh EXPERIMENT_CONFIG_FILE N_PARALLEL_DECOMP_PER_GPU

It is designed to be run as an array job on slurm. Example (with 6 parallel decompositions per GPU):

sbatch --array=0-1 --gres=gpu:1 --time 24:00:00 \
    scripts/parafac/decompose_cluster.sh "${configfile}" 6

If Singularity is not available, you can use the decompose_cluster_no_singularity.sh script.

6. Index all PARAFAC models and components

All PARAFAC models and components are indexed with a unique ID, as support for downstream tasks. These IDs, along with model filenames, are saved in two database (or "index") files, in Apache Feather format.

Expected running time: a few sec

Relevant pipeline config values:

  • model_index - Model ID index filename (Apache Feather format)
  • spectrum_index - PARAFAC component ID index filename (Apache Feather format)

Command:

singularity exec candia.sif \
    python scripts/parafac/models.py -c ${configfile}

7. Select Best Models

Measure the unimodality of all PARAFAC components (for all models), then create a list of the best PARAFAC models, according to the unimodality criterion.

Expected running time: 10 sec

Relevant pipeline config values:

  • avg_peak_fwhm_sec - The expected peak FWHM, to inform the peak finding procedure.
  • window_size_sec - Width of the RT windows (in seconds).
  • time_modes_values - Tabular file in Feather format that collects peak counts for all time modes (in all components), for all models generated by CANDIA across all slices.
  • best_models - CSV file where the best model parameter values are stored

First, collect the time modes and measure unimodality fractions.

singularity exec candia.sif \
    python scripts/parafac/collect_time_mode_values.py -c ${configfile}

Then, select models:

singularity exec candia.sif \
    Rscript scripts/parafac/select_best_models.R -c ${configfile}

8. (Optional) Collect the abundance (sample mode value) of decomposed spectra

Should it be relevant for downstream analyses, the sample mode values (relative abundances) of each of the decomposed spectra can be collected with the script below.

This will write the following files to the experiment directory:

  • sample_modes_best_models.feather : The sample modes of the best models, as a tabular file
  • A CSV file, specified in the pipeline config file as spectra_with_sample_abundance_file, containing the spectrum ID - sample abundance correspondence. The spectrum ID is the same as the <scan> attribute ID in the CANDIA output mzXML file, so it can be used to match against output from downstream programs.

Command:

singularity exec candia.sif \
  python scripts/quantification/collect_sample_modes.py --config ${configfile}

Developer note: The feather format (a table) can be read with the Python pyarrow library or the R arrow library.
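For example, the sample-mode table written in step 8 can be inspected from inside the container with pandas (a minimal sketch; the path assumes the test experiment layout):

singularity exec candia.sif python -c \
    "import pandas as pd; print(pd.read_feather('test/test_experiment/sample_modes_best_models.feather').head())"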

9. Identify proteins using Crux or MS-GF+

Configure which tool to use by setting the analysis_pipeline variable to either "crux" or "msgf+" in the configuration file. For example, in the provided test experiment, this is set to "crux".

A separate decoy sequence database must be provided besides the targets.

Expected running time: 30 min with Crux, 1 h with MS-GF+ (depends on the set number of modifications)

Relevant pipeline config values:

  • best_models_mzxml - MzXML file where the spectra from the best models are concatenated
  • best_models_crux_out_dir - The directory for Crux results (Crux outputs multiple files)
  • database - FASTA protein sequence database
  • decoy_database - FASTA decoy sequence database
  • decoy_prefix - Decoy sequence ID prefix, which should include separator (e.g. decoy_)
  • msgf_modifications - MS-GF+ modifications configuration file, if any
  • msgf_threads - The number of MS-GF+ computation threads to use

Crux version 3.2 is included in the Singularity container (and associated conda environment)

singularity exec candia.sif \
    python scripts/identification/id_models_concat.py -c ${configfile}

If running CANDIA with MS-GF+, the container may be executed with the MSGF_JAR_PATH environment variable supplied, specifying the location of the MS-GF+ JAR.

MSGF_JAR_PATH="$HOME/software/MSGFPlus/MSGFPlus.jar" singularity exec candia.sif \
    python scripts/identification/id_models_concat.py -c ${configfile}

10. Build library

Expected running time: 2-5 min

The Schubert et al. (2015) protocol has been implemented and can be run as below. For a more detailed description, please see the Supplemental Information of the CANDIA paper. The relevant pipeline config values are also described there.

A mixed target-decoy database is required. It is specified through the mixed_database parameter.

singularity exec candia.sif \
    snakemake -s scripts/quantification/build_library.Snakefile --configfile ${configfile}

Note This protocol will fail for the supplied toy test data.

11. Quantify proteins with DIA-NN

To quantify proteins using the generated CANDIA library, a wrapper script is provided for running DIA-NN. The path to the diann-linux binary should be supplied to the Singularity container (see command below).

Relevant pipeline config values:

  • samples_mzml - The DIA scan files
  • quant_library - The library file created by CANDIA
  • database - The target protein sequence FASTA database
  • diann_out_dir - Directory for DIA-NN output
  • diann_report - The filename of the TSV report produced by the DIA-NN run

Expected running time: 2-5 min per scan file

SINGULARITYENV_PREPEND_PATH=$HOME/software/diann singularity exec candia.sif \
    snakemake -p -s scripts/quantification/diann.Snakefile --configfile ${configfile}

12. De novo sequencing with Novor and DeepNovo

Configure which tool to use through the configuration file.

Expected running time: 25 min with Novor, 10 min with DeepNovo (depends on GPU parameters)

TODO: expand, clarify

snakemake -s scripts/denovo/sequence_best_models.Snakefile --configfile ${configfile}


candia's Issues

Diann wrapper Snakefile fails on missing irrelevant parameter

The scripts/quantification/diann.Snakefile script tries to read the diann_library config parameter, even when it is run in normal mode (the parameter is only required for a DIA-NN library-free search). The script crashes if this parameter is not specified in the config file.

Workaround:

Add diann_library: "results/diann/dummy.tsv" to prevent the script from crashing. (The dummy file won't be created.)

Installation problems and execution errors

Hello,

We are very excited about CANDIA's capabilities and are interested in testing it for our data analysis pipelines. I wanted to share my current experience with the installation and execution tests and maybe ask some advice on how to overcome some issues that we found.

  1. We managed to install it on our Linux server (Ubuntu 20.04.1 LTS), after installing Singularity, but none of the commands suggested for this in the README file actually worked. We needed to use:
singularity pull shub://fburic/candia:def

instead of the suggested

singularity pull shub://fburic/candia
  2. It wasn't clear what CANDIA's top-level directory is for running the shell script candia to test proper installation... So we cloned the GitHub repository into a new folder candia, and transferred the candia_def.sif file into it. The candia_def.sif is the one created in the folder where Singularity was installed.

Is there a better way to get into the container directory? I might be missing something.

  3. From the top level of this directory, it was possible to run the command,
./candia test/test_experiment/config/candia.yaml

but the pipeline only runs until the second step. Then it throws this error that seems like a missing dependency.

~/software/candia$ ./candia test/test_experiment/config/candia.yaml
Converting DIA scan files from mzML to CSV...
Building DAG of jobs...
Nothing to be done.
Complete log: /home/schilling/software/candia/.snakemake/log/2021-01-26T171607.531930.snakemake.log
done.
Adjusting precursor isolation windows...
Building DAG of jobs...
Using shell: /bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
       count   jobs
       2       adjust_file
       1       all
       3

[Tue Jan 26 17:16:08 2021]
rule adjust_file:
   input: test/test_experiment/samples/scans_csv/scan1.csv
   output: test/test_experiment/samples/scans_csv_adjusted/scan1_adjusted.csv
   jobid: 2
   wildcards: sample=scan1

Job counts:
       count   jobs
       1       adjust_file
       1
Rscript scripts/util/adjust_swaths.R -i test/test_experiment/samples/scans_csv/scan1.csv -o test/test_experiment/samples/scans_csv_adjusted/scan1_adjusted.csv
/bin/bash: /opt/conda/lib/libtinfo.so.6: no version information available (required by /bin/bash)
Error: package or namespace load failed for ‘tidyverse’ in loadNamespace(j <- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]):
namespace ‘tidyselect’ 0.2.5 is already loaded, but >= 1.1.0 is required
In addition: Warning message:
package ‘tidyverse’ was built under R version 3.6.3
Execution halted
[Tue Jan 26 17:16:10 2021]
Error in rule adjust_file:
   jobid: 0
   output: test/test_experiment/samples/scans_csv_adjusted/scan1_adjusted.csv

RuleException:
CalledProcessError in line 25 of /home/schilling/software/candia/scripts/util/adjust_swaths.Snakefile:
Command 'set -euo pipefail;  Rscript scripts/util/adjust_swaths.R -i test/test_experiment/samples/scans_csv/scan1.csv -o test/test_experiment/samples/scans_csv_adjusted/scan1_adjusted.csv' returned non-zero exit status 1.
 File "/home/schilling/software/candia/scripts/util/adjust_swaths.Snakefile", line 25, in __rule_adjust_file
 File "/opt/conda/lib/python3.6/concurrent/futures/thread.py", line 56, in run
Exiting because a job execution failed. Look above for error message
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /home/schilling/software/candia/.snakemake/log/2021-01-26T171608.586104.snakemake.log

Many thanks in advance for taking the time to read this report. I would be very glad to receive some input on how to get CANDIA up and running.

Best wishes,
Miguel

Add unit tests

To support future development, a set of unit tests should be available, at the very least for sanity checking.
The repo already provides a toy dataset and script to process it, but this should be improved.

Error running PARAFAC decomposition

Hello Filip,

We managed to install the CANDIA Singularity container on a DENBI Ubuntu server with 2 CUDA-capable GPUs.

We are still not able to make the test command ./candia test/test_experiment/config/candia.yaml run through completely.

Actually, it throws an error at the PARAFAC decomposition stage. The previous processing steps seem to run through. The error persists even when I execute the commands separately for each stage.

Something like:

Running PARAFAC decomposition...
CANDIA: 2 GPUs found. Dividing input slices into 2 partitions.
CANDIA: Output saved to test/test_experiment/logs/decompose_partition_0_20210302172404.log
CANDIA: Output saved to test/test_experiment/logs/decompose_partition_1_20210302172404.log
done.
Indexing all PARAFAC models and components...
scripts/parafac/models.py:123: YAMLLoadWarning:

calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.

[2021-03-02 17:24:07] [PID 93949] INFO: models.py:main():54:    Wrote model index
[2021-03-02 17:24:07] [PID 93949] INFO: models.py:main():58:    Wrote spectrum index
done.
Selecting best models
[2021-03-02 17:24:12] [PID 94478] WARNING:      collect_time_mode_values.py:get_model_time_mode_peak_counts():60:      Could not load model test/test_experiment/samples/scans_csv_slices/swath_lower_adjusted=623.00/rt_window=0.0/parafac_model_F12.pt
[2021-03-02 17:24:12] [PID 94477] WARNING:      collect_time_mode_values.py:get_model_time_mode_peak_counts():60:      Could not load model test/test_experiment/samples/scans_csv_slices/swath_lower_adjusted=623.00/rt_window=0.0/parafac_model_F10.pt
...
Traceback (most recent call last):
  File "scripts/parafac/collect_time_mode_values.py", line 113, in <module>
    main()
  File "scripts/parafac/collect_time_mode_values.py", line 45, in main
    model_peak_count = pd.concat(model_peak_count, ignore_index=True)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 284, in concat
    sort=sort,
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 331, in __init__
    raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate

I am attaching here the log of the execution including the results of the previous steps and the complete error.

error_report_CANDIA.txt

Do you think there's something we might be missing in the installation? What would you suggest for troubleshooting this?

Many thanks in advance for taking a look into this.

Best wishes,
Miguel

Identification and quantitation per sample

Hello Filip,

I wanted to open a different issue for this question.

We are very happy with candia's capabilities so far and would probably be testing it soon with a bigger cohort of samples.

What I find interesting about this spectral decomposition is the ability to run 'classical' searches on the decomposed spectra.

One of the things I am interested in is the detection of sequence variants, either by database search or by detecting non-annotated point mutations via xtandem or similar.

Since CANDIA's output is a single mzXML file, how do you think it would be possible to assign the identification (and therefore, potential quantitation) of a peptide to a particular sample/condition based on this single mzXML file?

I understand that the quantitation can be performed via DIANN, but probably my ignorance regarding its use (I have not used it extensively) is not allowing me to understand how to differentiate identification between samples after having the decomposed spectra.

Would you have any ideas on how to go from CANDIA's decomposed spectra into xtandem for the identification of point mutations not found in the FASTA file, and then use this information for differential quantitation between samples?

As always, many thanks for your input.

Best wishes,
Miguel

Create development documentation

To support extending or adapting CANDIA functionality, the code base needs to be documented for developers. Ideally, most technical information should be covered there rather than in the user docs.

Implement better main script

The pipeline was developed as a collection of separate small workflows, to allow inspecting intermediate results and iterative/modular development. A natural way to run the pipeline is through a main control script but this is currently missing.

More comprehensive user documentation

The current README aims to provide sufficient information for a rather savvy person to get CANDIA running. One would like simpler documentation for everyday use. How much simpler it can be depends on the functionality of the main script and the pipeline configuration process.

Depends on: #2

Split decomposed mzXML output to separate step

To allow for flexibility in using the output mzXML file (containing decomposed spectra) with various search engines, this export should be split from the current identification script, which is built to use either Crux or MS-GF+

Improve logging and error reporting

Logs should clearly highlight the current stage and results of the pipeline.
Inconsequential warnings should not be shown to reduce clutter.

It should be made clear what stage failed through messages and, if possible, the cause of the failure.
