
directLFQ

directLFQ is an open-source Python package for quantifying protein intensities based on peptide intensities or fragment-ion intensities measured with Mass Spectrometry-based proteomics. It preserves peptide ratios, achieves very accurate quantification and has a robust normalization approach. Furthermore, it allows fast processing even of very large sample cohorts, as runtime increases linearly with sample number. It is part of the AlphaPept ecosystem from the Mann Labs at the Max Planck Institute of Biochemistry and the University of Copenhagen.

You can process DIA and DDA data analyzed by AlphaPept, MaxQuant, FragPipe, Spectronaut and DIA-NN, as well as generic formats, using a Graphical User Interface (GUI) or the Python package.


About

Generating protein intensities from Mass Spectrometry proteomics data comes with a variety of challenges. Differing peptides that belong to the same protein can have strongly differing intensities, for example due to differing ionization efficiencies. Missing values (i.e. peptides that have been detected in one run but not in another) make simple summarization of peptide intensities to protein intensities problematic. Differences in sample loading can introduce systematic biases into the analysis. With directLFQ, we provide a novel algorithm for addressing these challenges in an efficient and accurate manner. directLFQ retains peptide ratios, uses them to infer protein ratios, and relies on the concept of intensity traces for its main processing steps. For further details on the algorithm, please refer to the preprint.
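As a toy numerical illustration (not the actual implementation), the core idea behind intensity traces can be sketched as follows: peptide traces that differ by a constant factor (e.g. due to ionization efficiency) are shifted onto each other in log space, so their ratio structure across samples is preserved.

```python
import numpy as np

# Two peptide intensity traces with the same profile across three samples,
# but a constant 10x offset (e.g. different ionization efficiencies).
trace_a = np.log2([1.0e6, 2.0e6, 4.0e6])
trace_b = np.log2([1.0e5, 2.0e5, 4.0e5])

# Shifting trace_b by the median log-space difference aligns it with trace_a
# while leaving the between-sample ratios of both traces untouched.
shift = np.nanmedian(trace_a - trace_b)
aligned_b = trace_b + shift

assert np.allclose(trace_a, aligned_b)
```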


Installation

directLFQ can be installed and used on all major operating systems (Windows, macOS and Linux). There are currently three different types of installation possible:

  • One-click GUI installer: Choose this installation if you only want to use the GUI and do not need to interact with Python or CLI tools.

  • Pip installer: Choose this installation if you want to use directlfq as a Python package in an existing Python environment.

  • Developer installer: Choose this installation if you are familiar with CLI tools, conda and Python. This installation allows access to all available features of directlfq and even allows you to modify its source code directly. Generally, the developer version of directlfq outperforms the precompiled versions, which makes this the installation of choice for high-throughput experiments.

One-click GUI

The GUI of directlfq is a completely stand-alone tool that requires no knowledge of Python or CLI tools. Download the latest release for your operating system from the release page.

Older releases remain available on the release page, but no backwards compatibility is guaranteed.

Pip

directLFQ can be installed in an existing Python 3.8 environment with a single bash command.

pip install directlfq

This installs the core directLFQ without graphical user interface (GUI). If you want to install with additional dependencies for GUI support, you can do this with:

pip install "directlfq[gui]"

For installation with stable dependencies, use:

pip install "directlfq[stable]"

NOTE: You might need to run pip install pip==21.0 before installing directlfq like this. Also note the double quotes (").

For those who are really adventurous, it is also possible to directly install any branch (e.g. @development) with any extras (e.g. #egg=directlfq[stable,development-stable]) from GitHub with e.g.

pip install "git+https://github.com/MannLabs/directlfq.git@development#egg=directlfq[stable,development-stable]"

Developer

directlfq can also be installed in editable (i.e. developer) mode with a few bash commands. This allows you to fully customize the software and even modify the source code to your specific needs. When an editable Python package is installed, its source code is stored in a transparent location of your choice. While optional, it is advised to first (create and) navigate to e.g. a general software folder:

mkdir ~/folder/where/to/install/software
cd ~/folder/where/to/install/software

The following commands assume you do not perform any additional cd commands anymore.

Next, download the directlfq repository from GitHub either directly or with a git command. This creates a new directlfq subfolder in your current directory.

git clone https://github.com/MannLabs/directlfq.git

For any Python package, it is highly recommended to use a separate conda virtual environment, as otherwise dependency conflicts can occur with already existing packages.

conda create --name directlfq python=3.8 -y
conda activate directlfq

Finally, directlfq and all its dependencies need to be installed. To take advantage of all features and allow development (with the -e flag), this is best done by also installing the development dependencies instead of only the core dependencies:

pip install -e "./directlfq[development,gui]"

By default this installs loose dependencies (no explicit versioning), although it is also possible to use stable dependencies (e.g. pip install -e "./directlfq[stable,development-stable]").

By using the editable flag -e, all modifications to the directlfq source code folder are directly reflected when running directlfq. Note that the directlfq folder cannot be moved and/or renamed if an editable version is installed. In case of confusion, you can always retrieve the location of any Python module with e.g. the command import module followed by module.__file__.
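For example, using a standard-library module as a stand-in (substitute import directlfq in your own environment):

```python
# Retrieve the filesystem location of any installed Python module via its
# __file__ attribute; "json" stands in here for "directlfq".
import json

print(json.__file__)  # e.g. .../lib/python3.x/json/__init__.py
```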


Running directLFQ

There are three ways to use directlfq:

NOTE: The first time you use a fresh installation of directlfq, it is often quite slow because some functions might still need compilation on your local operating system and architecture. Subsequent use should be a lot faster.

GUI

If you have installed directlfq with the one-click GUI installer, you can run the GUI simply by clicking the directLFQ icon on your desktop/applications folder.

If the GUI was not installed through a one-click GUI installer, it can be activated with the following bash command:

directlfq gui

Note that this command needs to be prepended with a ! when run from within a Jupyter notebook. When running it directly from the command line, make sure you use the right environment (activate it with e.g. conda activate directlfq, or set an alias to the binary executable, whose location can be obtained with where directlfq on Windows or which directlfq on macOS/Linux).

CLI

The CLI can be run with the following command (after activating the conda environment with conda activate directlfq or if an alias was set to the directlfq executable):

directlfq -h

It is possible to get help about each function and their (required) parameters by using the -h flag.

Python and Jupyter notebooks

directLFQ can be imported as a Python package into any Python script or notebook with the command import directlfq. Running the standard analysis (with plots) can be done via the command:

import directlfq.lfq_manager as lfq_manager

example_input_file_diann = "/path/to/example_input_file_diann.tsv"

lfq_manager.run_lfq(example_input_file_diann)

Several use cases for applying directLFQ can be found as Jupyter Notebooks in the tests folder. See for example the quicktests notebook.

Note that the nbdev_nbs folder contains the source code as Jupyter notebooks. These notebooks are automatically converted to Python scripts using the nbdev package and stored in the directlfq folder. The notebooks contain additional documentation and comments as well as unit tests that can be executed directly from the notebooks themselves.


Troubleshooting

In case of issues, check out the following:

  • Issues: Try a few different search terms to find out if a similar problem has been encountered before
  • Discussions: Check if your problem or feature request has been discussed before.

Citations

If directLFQ is useful to you, please consider supporting us by citing the paper:

Ammar, C., Schessner, J.P., Willems, S., Michaelis, A.C., and Mann, M. (2023). Accurate label-free quantification by directLFQ to compare unlimited numbers of proteomes. Molecular & Cellular Proteomics, 100581.


How to contribute

If you like this software, you can give us a star to boost our visibility! All direct contributions are also welcome. Feel free to post a new issue or clone the repository and create a pull request with a new branch. For an even more interactive participation, check out the discussions and the Contributors License Agreement.


License

directLFQ was developed by the Mann Labs at the Max Planck Institute of Biochemistry and the University of Copenhagen and is freely available with an Apache License. External Python packages (available in the requirements folder) have their own licenses, which can be consulted on their respective websites.


directLFQ commands

directLFQ is started internally via the directlfq.lfq_manager.run_lfq() command. In principle, and for most use cases, you only need to provide the path to the AlphaPept/MaxQuant/DIA-NN etc. file of interest. However, there are several other options that can be used to customize the analysis. The following parameters are available:

  • input_file: The input file containing the ion intensities. Usually the output of a search engine.
  • columns_to_add: Names of columns present in the input table that you want to keep in the directLFQ output file, separated by semicolons. Note that some basic additional columns such as gene names are always added to the output table by default. WARNING: Take care that the columns you add are not ambiguous. For example, adding the peptide sequence column will not work, because there are multiple peptide sequences per protein.
  • selected_proteins_file: If you want to perform normalization only on a subset of proteins, you can pass a .txt file containing the protein IDs, separated by line breaks. No header is expected.
  • mq_protein_groups_txt: When using MaxQuant data, the proteinGroups.txt table can be provided in order to map IDs analogously to MaxQuant. Adding this table improves protein mapping, but is not necessary.
  • min_nonan: Min number of ion intensities necessary in order to derive a protein intensity. Increasing the number results in more reliable protein quantification at the cost of losing IDs.
  • input_type_to_use: The type of input file to use. This is used to determine the column names of the input file. Only change this if you want to use non-default settings.
  • maximum_number_of_quadratic_ions_to_use_per_protein: How many ions are used to create the anchor intensity trace (see paper). Increasing might marginally increase performance at the cost of runtime.
  • number_of_quadratic_samples: How many samples are used to create the anchor intensity trace (see paper). Increasing might marginally increase performance at the cost of runtime.
  • num_cores: The number of cores to use (default is to use multiprocessing).
  • filename_suffix: Suffix to append to the output files.
  • deactivate_normalization: Set to true if no between-sample normalization should be performed before processing.
  • filter_dict: In case you want to define specific filters in addition to the standard filters, you can add a yaml file where the filters are defined (see example here). In the Python API you can also directly put in the dictionary instead of the .yaml file.
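A customized call combining several of the options above might look as follows (a sketch only: the keyword names mirror the parameter list, while the input path and chosen values are placeholders):

```python
# Collect customized options for run_lfq; the names follow the documented
# parameter list, the values are illustrative placeholders.
kwargs = dict(
    min_nonan=2,                   # require at least 2 ion intensities per protein
    num_cores=4,                   # number of processes for multiprocessing
    filename_suffix="_batch1",     # appended to the output file names
    deactivate_normalization=False,
)

# With directlfq installed, the call would then be:
# import directlfq.lfq_manager as lfq_manager
# lfq_manager.run_lfq("/path/to/report.tsv", **kwargs)
```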

directLFQ output

directLFQ writes three output files into the directory of the input file:

  1. The main output file ends with .protein_intensities.tsv and contains the estimated LFQ protein intensities.
  2. The second output file ends with .ion_intensities.tsv and contains the aligned intensity traces of all ions. This allows you to compare the profiles of different ions. In particular, if you run directLFQ with peptide-level quantification, you can use this file to compare the intensity traces of different peptides of the same protein.
  3. The third output file ends with .aq_reformat.tsv and contains the reformatted input data in matrix format (ions are rows, samples are columns). The values are identical to the values of the original input file, just the format is different.

Preparing input files

Spectronaut

directLFQ takes a Spectronaut .tsv table as input. When exporting from Spectronaut, the correct columns need to be selected. These can be obtained by downloading one of the export schemes available below. We provide one export scheme for precursor quantification and one for fragment ion quantification. Fragment ion quantification is slightly more accurate, but the files are around 10 times larger.

An export scheme can then simply be loaded into Spectronaut as follows:

Go to the "Report" perspective in Spectronaut, click "Import Schema" and provide the file.

The data needs to be exported in the normal long format as a .tsv file.

Download Spectronaut export scheme for precursor quantification

Download Spectronaut export scheme for fragment ion quantification

DIA-NN

Provide the path to the DIA-NN "report.tsv" output table.

MaxQuant

Provide the path to the MaxQuant "peptides.txt" output table or the MaxQuant evidence.txt output table. Additionally and if possible, provide the path to the corresponding "proteinGroups.txt" file.

FragPipe

Provide the path to the "combined_ion.tsv" output table.

Generic input format

If you are working with a search engine that is not supported by directLFQ, you can use the generic input format. This format is a tab-separated quantity matrix file with the following columns: "protein", "ion", "run_id1", "run_id2", ..., "run_idN". Each row therefore contains all the ion intensities that were measured for one ion in each run (see examples below). The ion identifier only needs to be unique for each ion and can be at whichever level you want (peptide, charged peptide, or fragment ion). After reformatting your file into this format, save it with the ending ".aq_reformat.tsv". You can then simply give this file as input to directLFQ and it will automatically detect the generic input format.
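A minimal sketch of building such a file with pandas (the protein names, ion identifiers and run names are placeholders):

```python
import pandas as pd

# Generic wide-format input: one row per ion, one column per run.
df = pd.DataFrame({
    "protein": ["P1", "P1", "P2"],
    "ion":     ["PEPTIDEA_2", "PEPTIDEK_2", "PEPTIDER_3"],
    "run_1":   [1.2e6, 3.4e5, 7.8e5],
    "run_2":   [1.1e6, float("nan"), 8.0e5],   # missing values stay as NaN
})

# The ".aq_reformat.tsv" ending lets directLFQ auto-detect the generic format.
df.to_csv("example.aq_reformat.tsv", sep="\t", index=False)
```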


Reproducing data analyses from the paper

If you want to reproduce data analyses presented in the manuscript, you can first download the data by executing

python tests/download_testfiles.py all_tests

This will download the underlying datasets into the appropriate location. The notebooks carrying out the analyses themselves are located in the tests directory in the respective subfolders ratio_tests, normalization_tests, runtime_tests and organellar_maps.

directlfq's People

Contributors

ammarcsj, georgwa, sander-willems-bruker

directlfq's Issues

Missing row intensities and entries

Describe the bug
Some row entries are 0 for all replicates in the CustomDf.aq_reformat.ion_intensities output even though there are valid values in the original CustomDf.aq_reformat input dataframe. Furthermore, there are row entries missing when comparing both .tsv files, which I did not expect to happen.

To Reproduce
see #16 for input and output files.

Expected behavior

  1. Intensities curated via directLFQ should not be 0 for all replicates and
  2. "CustomDf.aq_reformat.tsv.ion_intensities.tsv" row entry number is the same as from the input data "CustomDf.aq_reformat.tsv"

Version (please complete the following information):

  • 0.2.11

DIA-NN proteotypic peptide filtering

Hello,

I have noticed that directLFQ does not filter out shared peptides from the DIA-NN output so the quant can be highly misleading for certain protein families. I can use the generic input format with the precursors table as a work-around, but it would be great to filter for proteotypic peptides only by default or as an optional parameter if possible.

Thanks!
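The work-around described above could be sketched with pandas: drop shared precursors before quantification, assuming a DIA-NN-style "Proteotypic" column (1 = peptide maps to a single protein group). The toy frame below stands in for report.tsv.

```python
import pandas as pd

# Toy stand-in for a DIA-NN report; column names follow DIA-NN conventions.
report = pd.DataFrame({
    "Protein.Group": ["P1", "P1;P2", "P2"],
    "Precursor.Id":  ["AAA2", "BBB2", "CCC3"],
    "Proteotypic":   [1, 0, 1],
})

# Keep only proteotypic precursors before handing the table to directLFQ.
proteotypic = report[report["Proteotypic"] == 1]
```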

Minimum number for peptides for LFQ

Hi @ammarcsj ,

I am back with an additional question :)
Is there a way to specify the minimum number of peptides a protein should have to get an LFQ value?
Neither in the GUI nor in the code do I see a filter on this; am I wrong?
It would also be nice to get in the output table which are the peptides that were used for the LFQ calculation.

Python version > 3.8

Hi, I wanted to ask if you would at some point also support Python versions > 3.8.

directLFQ fails to apply grouping in v0.2.17

Describe the bug
It looks like directLFQ ignores the grouping variable in the most recent version. Instead of 10,330 protein groups, all 174,000 fragments are handled like individual proteins and no output is generated.

Broken output 0.2.17

2024-02-15 22:21:38> ================ Protein FDR =================
2024-02-15 22:21:38> Unique protein groups in output
2024-02-15 22:21:38>   1% protein FDR: 10,330
2024-02-15 22:21:38> 
2024-02-15 22:21:38> Unique precursor in output
2024-02-15 22:21:38>   1% protein FDR: 112,094
2024-02-15 22:21:38> ================================================
2024-02-15 22:21:38> Building search statistics
2024-02-15 22:21:40> Writing stat output to disk
2024-02-15 22:21:40> Performing label free quantification
2024-02-15 22:21:41> Accumulating fragment data
2024-02-15 22:21:41> reading frag file for 20231212_OA1_MCT_SA_M768_AD02_HYE_200ng_quadPolON_sample3_01
...
2024-02-15 22:22:13> reading frag file for 20231212_OA1_MCT_SA_M768_AD02_HYE_200ng_quadPolON_sample4_01
2024-02-15 22:22:16> Performing label free quantification on the pg level
2024-02-15 22:22:16> Filtering fragments by quality
2024-02-15 22:22:16> Performing label-free quantification using directLFQ
2024-02-15 22:22:18> 10330 lfq-groups total
2024-02-15 22:22:39> using 8 processes
2024-02-15 22:22:43> lfq-object 0
2024-02-15 22:22:43> lfq-object 100
2024-02-15 22:22:43> lfq-object 200
2024-02-15 22:22:43> lfq-object 300
2024-02-15 22:22:43> lfq-object 400
2024-02-15 22:22:43> lfq-object 500
...
2024-02-15 22:24:08> lfq-object 173800
2024-02-15 22:24:08> lfq-object 173900
2024-02-15 22:24:08> lfq-object 174000
2024-02-15 22:25:17> Writing pg output to disk
2024-02-15 22:25:19> Writing psm output to disk

Correct output 0.2.14

2024-02-15 22:33:11> ================ Protein FDR =================
2024-02-15 22:33:11> Unique protein groups in output
2024-02-15 22:33:11>   1% protein FDR: 10,330
2024-02-15 22:33:11> 
2024-02-15 22:33:11> Unique precursor in output
2024-02-15 22:33:11>   1% protein FDR: 112,094
2024-02-15 22:33:11> ================================================
2024-02-15 22:33:11> Building search statistics
2024-02-15 22:33:13> Writing stat output to disk
2024-02-15 22:33:13> Performing label free quantification
2024-02-15 22:33:13> Accumulating fragment data
2024-02-15 22:33:13> reading frag file for 20231212_OA1_MCT_SA_M768_AD02_HYE_200ng_quadPolON_sample3_01
...
2024-02-15 22:33:46> reading frag file for 20231212_OA1_MCT_SA_M768_AD02_HYE_200ng_quadPolON_sample4_01
2024-02-15 22:33:48> Performing label free quantification on the pg level
2024-02-15 22:33:48> Filtering fragments by quality
2024-02-15 22:33:49> Performing label-free quantification using directLFQ
2024-02-15 22:33:51> 10330 prots total
2024-02-15 22:33:51> using 8 processes
2024-02-15 22:33:52> prot 0
2024-02-15 22:33:53> prot 1300
2024-02-15 22:33:53> prot 700
2024-02-15 22:33:53> prot 1000
2024-02-15 22:33:53> prot 1700
2024-02-15 22:33:53> prot 2300
2024-02-15 22:33:53> prot 400
...
2024-02-15 22:33:57> prot 9900
2024-02-15 22:33:57> prot 10200
2024-02-15 22:33:57> prot 10000
2024-02-15 22:33:57> prot 10300
2024-02-15 22:34:07> Writing pg output to disk
2024-02-15 22:34:08> Writing psm output to disk

Feature request: parameter for output folder

It would be very useful to be able to specify where the output should go.

Folders with the original data might be read-only, subject to file-watchers, or generally required to remain 'clean' e.g. for uncomplicated PRIDE uploads.

Execution time

I wanted to ask how long the expected execution time for e.g. 1333 Proteins across e.g. 3 replicates is to be expected.

directlfq gui windows won't start (version 0.2.13 and 0.2.14)

First of all,
thanks a lot for that great tool, very useful.
Unfortunately, the latest versions 0.2.13 and 0.2.14 won't start (Windows GUI). Version 0.2.11 seems to be the last working version, although it comes with a warning (below) that does not seem to affect processing.
thanks again for your help


WARNING:param.TextInput: Providing a width-responsive sizing_mode ('stretch_width') and a fixed width is not supported. Converting fixed width to min_width. If you intended the component to be fully width-responsive remove the heightsetting, otherwise change it to min_height. To error on the incorrect specification disable the config.layout_compatibility option.
WARNING:param.TextInput: Providing a width-responsive sizing_mode ('stretch_width') and a fixed width is not supported. Converting fixed width to min_width. If you intended the component to be fully width-responsive remove the heightsetting, otherwise change it to min_height. To error on the incorrect specification disable the config.layout_compatibility option.
WARNING:param.(optional) If you are using MaxQuant evidence.txt or peptides.txt files, you can add the link to the corresponding proteinGroups.txt file (will improve peptide-to-protein mapping): Setting non-parameter attribute default=None using a mechanism intended only for parameters
WARNING:param.: Setting non-parameter attribute default=None using a mechanism intended only for parameters
WARNING:param.TextInput: Providing a width-responsive sizing_mode ('stretch_width') and a fixed width is not supported. Converting fixed width to min_width. If you intended the component to be fully width-responsive remove the heightsetting, otherwise change it to min_height. To error on the incorrect specification disable the config.layout_compatibility option.
WARNING:param.: Setting non-parameter attribute default=None using a mechanism intended only for parameters
WARNING:param.TextInput: Providing a width-responsive sizing_mode ('stretch_width') and a fixed width is not supported. Converting fixed width to min_width. If you intended the component to be fully width-responsive remove the heightsetting, otherwise change it to min_height. To error on the incorrect specification disable the config.layout_compatibility option.
WARNING:param.: Setting non-parameter attribute default=None using a mechanism intended only for parameters
panel\util\warnings.py:26: PanelDeprecationWarning: "Row(..., background='#eaeaea')" is deprecated and will be removed in version 1.3, use "Row(..., styles={'background': '#eaeaea'})" instead.
warnings.warn(message, category, stacklevel=stacklevel)
Launching server at http://localhost:51170
WARNING:bokeh.core.validation.check:W-1005 (FIXED_SIZING_MODE): 'fixed' sizing mode requires width and height to be set: Progress(id='p1152', ...)


Issue warning if quant_id is not unique key

Describe the bug
If we pass a dataframe with duplicates in the quant_id column to lfqnorm.NormalizationManagerSamplesOnSelectedProteins() it results in a rather strange numba error.

A more informative error message or a check on the column might be useful.
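Such a pre-flight check could look like this minimal sketch (the function name is hypothetical, not part of directLFQ):

```python
import pandas as pd

def assert_unique_quant_ids(df: pd.DataFrame, column: str = "quant_id") -> None:
    """Fail with a clear message if `column` is not a unique key."""
    dupes = df[column][df[column].duplicated()].unique()
    if len(dupes) > 0:
        raise ValueError(f"{column} is not a unique key; duplicates: {list(dupes)}")

# Passes silently for a frame with unique quant_ids:
assert_unique_quant_ids(pd.DataFrame({"quant_id": ["a", "b"]}))
```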

Logs

../alphadia/outputtransform.py:705: in build_lfq_tables
    lfq_df = qb.lfq(
../alphadia/outputtransform.py:284: in lfq
    protein_df, _ = lfqprot_estimation.estimate_protein_intensities(
/usr/local/miniconda/envs/alphadia/lib/python3.9/site-packages/directlfq/protein_intensity_estimation.py:44: in estimate_protein_intensities
    list_of_tuple_w_protein_profiles_and_shifted_peptides = get_list_of_tuple_w_protein_profiles_and_shifted_peptides(normed_df, num_samples_quadratic, min_nonan, num_cores)
/usr/local/miniconda/envs/alphadia/lib/python3.9/site-packages/directlfq/protein_intensity_estimation.py:60: in get_list_of_tuple_w_protein_profiles_and_shifted_peptides
    list_of_tuple_w_protein_profiles_and_shifted_peptides = get_list_with_multiprocessing(input_specification_tuplelist_idx__df__num_samples_quadratic__min_nonan, num_cores)
/usr/local/miniconda/envs/alphadia/lib/python3.9/site-packages/directlfq/protein_intensity_estimation.py:107: in get_list_with_multiprocessing
    list_of_tuple_w_protein_profiles_and_shifted_peptides = pool.starmap(calculate_peptide_and_protein_intensities, input_specification_tuplelist_idx__df__num_samples_quadratic__min_nonan)
/usr/local/miniconda/envs/alphadia/lib/python3.9/site-packages/multiprocess/pool.py:372: in starmap
    return self._map_async(func, iterable, starmapstar, chunksize).get()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <multiprocess.pool.MapResult object at 0x7fe2f7857a90>, timeout = None

    def get(self, timeout=None):
        self.wait(timeout)
        if not self.ready():
            raise TimeoutError
        if self._success:
            return self._value
        else:
>           raise self._value
E           numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
E           No implementation of function Function(<built-in function iadd>) found for signature:
E            
E            >>> iadd(Literal[int](0), array(bool, 1d, A))
E            
E           There are 18 candidate implementations:
E             - Of which 16 did not match due to:
E             Overload of function 'iadd': File: <numerous>: Line N/A.
E               With argument(s): '(int64, array(bool, 1d, A))':
E              No match.
E             - Of which 2 did not match due to:
E             Operator Overload in function 'iadd': File: unknown: Line unknown.
E               With argument(s): '(int64, array(bool, 1d, A))':
E              No match for registered cases:
E               * (int64, int64) -> int64
E               * (int64, uint64) -> int64
E               * (uint64, int64) -> int64
E               * (uint64, uint64) -> uint64
E               * (float32, float32) -> float32
E               * (float64, float64) -> float64
E               * (complex64, complex64) -> complex64
E               * (complex128, complex128) -> complex128
E           
E           During: typing of intrinsic-call at /usr/local/miniconda/envs/alphadia/lib/python3.9/site-packages/directlfq/normalization.py (304)
E           
E           File "../../../../../../usr/local/miniconda/envs/alphadia/lib/python3.9/site-packages/directlfq/normalization.py", line 304:
E               def _get_num_nas_in_row(row):
E                   <source elided>
E                   for is_nan in isnans:
E                       sum+=is_nan
E                       ^

/usr/local/miniconda/envs/alphadia/lib/python3.9/site-packages/multiprocess/pool.py:771: TypingError

Query/feature request: batch correction

I'm working with the CLI and am hoping to build in a batch-correction step for large datasets. My initial thought would be to set deactivate_normalization to TRUE and supply peptide intensities that have already been normalized and batch-corrected externally to directLFQ. Is there a better approach, and if not, is there a recommended batch correction approach?

Many thanks

Normalisation values are extremly high

Describe the bug

It's not really a bug, but I noticed that the values of the normalisation output are overall very high (around 1e15 to 1e16).
I am wondering why this is. Processing was done with the Python version on Windows.

To Reproduce

simply run lfq_manager.run_lfq("CustomDf.aq_reformat.txt")

Spectronaut Report Schema

Hi, great work and thanks for sharing!

While playing around, I could not access your preferred export schema for Spectronaut's report files.
It doesn't seem to exist (404), neither for precursor nor for fragment ion quan.

Is there an update going on?

Best, Karl

DIA-NN and FDR filtering

First of all great work! :)

Then a quick question: does directlfq perform any sort of FDR filtering (as iq does) for DIA-NN data?
If not, I guess one could do this beforehand, i.e. by filtering the report.tsv, but it would be quite useful to have it done on the fly by directlfq.
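The beforehand filtering mentioned above could be sketched as follows, assuming a DIA-NN-style "Q.Value" column (the toy frame stands in for report.tsv):

```python
import pandas as pd

# Toy stand-in for a DIA-NN report with run-level q-values.
report = pd.DataFrame({
    "Precursor.Id": ["AAA2", "BBB2", "CCC3"],
    "Q.Value":      [0.001, 0.05, 0.009],
})

# Apply a 1% FDR cut before passing the table to directLFQ.
filtered = report[report["Q.Value"] <= 0.01]
```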

directLFQ 0.2.16 fails with IndexError: list index out of range

Describe the bug
The most recent version of directLFQ fails with IndexError: list index out of range during the alphaDIA testcase.

To Reproduce
Steps to reproduce the behavior:

  1. run the test case test_output_transform() in alphadia/tests/unit_tests/test_outputtransform.py


Logs

================================================================================================= test session starts =================================================================================================
platform darwin -- Python 3.9.18, pytest-7.4.3, pluggy-1.3.0
rootdir: /Users/georgwallmann/Documents/git/alphadia
collected 59 items / 5 deselected / 54 selected                                                                                                                                                                       

tests/unit_tests/test_calibration.py ....                                                                                                                                                                       [  7%]
tests/unit_tests/test_data.py ..                                                                                                                                                                                [ 11%]
tests/unit_tests/test_fdr.py .....                                                                                                                                                                              [ 20%]
tests/unit_tests/test_fragcomp.py ...                                                                                                                                                                           [ 25%]
tests/unit_tests/test_grouping.py .........                                                                                                                                                                     [ 42%]
tests/unit_tests/test_libtransform.py .                                                                                                                                                                         [ 44%]
tests/unit_tests/test_numba.py ....                                                                                                                                                                             [ 51%]
tests/unit_tests/test_outputtransform.py F                                                                                                                                                                      [ 53%]
tests/unit_tests/test_plexscoring.py .                                                                                                                                                                          [ 55%]
tests/unit_tests/test_plotting.py ..                                                                                                                                                                            [ 59%]
tests/unit_tests/test_quadrupole.py ...                                                                                                                                                                         [ 64%]
tests/unit_tests/test_reporting.py ......                                                                                                                                                                       [ 75%]
tests/unit_tests/test_utils.py ....                                                                                                                                                                             [ 83%]
tests/unit_tests/test_workflow.py .........                                                                                                                                                                     [100%]

====================================================================================================== FAILURES =======================================================================================================
________________________________________________________________________________________________ test_output_transform ________________________________________________________________________________________________

    def test_output_transform():
        run_columns = ["run_0", "run_1", "run_2"]
    
        config = {
            "general": {
                "thread_count": 8,
            },
            "fdr": {
                "fdr": 0.01,
                "inference_strategy": "heuristic",
                "group_level": "proteins",
                "keep_decoys": False,
            },
            "search_output": {
                "min_k_fragments": 3,
                "min_correlation": 0.25,
                "num_samples_quadratic": 50,
                "min_nonnan": 1,
                "normalize_lfq": True,
                "peptide_level_lfq": False,
                "precursor_level_lfq": False,
            },
        }
    
        temp_folder = os.path.join(tempfile.gettempdir(), "alphadia")
        os.makedirs(temp_folder, exist_ok=True)
    
        progress_folder = os.path.join(temp_folder, "progress")
        os.makedirs(progress_folder, exist_ok=True)
    
        # setup raw folders
        raw_folders = [os.path.join(progress_folder, run) for run in run_columns]
    
        psm_base_df = _mock_precursor_df(n_precursor=100)
        fragment_base_df = _mock_fragment_df(n_precursor=200)
    
        for raw_folder in raw_folders:
            os.makedirs(raw_folder, exist_ok=True)
    
            psm_df = psm_base_df.sample(50)
            psm_df["run"] = os.path.basename(raw_folder)
            frag_df = fragment_base_df[
                fragment_base_df["precursor_idx"].isin(psm_df["precursor_idx"])
            ]
    
            frag_df.to_csv(os.path.join(raw_folder, "frag.tsv"), sep="\t", index=False)
            psm_df.to_csv(os.path.join(raw_folder, "psm.tsv"), sep="\t", index=False)
    
        output = outputtransform.SearchPlanOutput(config, temp_folder)
        _ = output.build_precursor_table(raw_folders, save=True)
        _ = output.build_stat_df(raw_folders, save=True)
>       _ = output.build_lfq_tables(raw_folders, save=True)

tests/unit_tests/test_outputtransform.py:169: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
alphadia/outputtransform.py:645: in build_lfq_tables
    lfq_df = qb.lfq(
alphadia/outputtransform.py:276: in lfq
    protein_df, _ = lfqprot_estimation.estimate_protein_intensities(
../../../miniconda3/envs/alpha/lib/python3.9/site-packages/directlfq/protein_intensity_estimation.py:37: in estimate_protein_intensities
    ion_df = get_ion_intensity_dataframe_from_list_of_shifted_peptides(list_of_tuple_w_protein_profiles_and_shifted_peptides, allprots)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

list_of_tuple_w_protein_profiles_and_shifted_peptides = [(array([9.24812545, 9.24812545,        nan]),                               0         1   2
pg    ion                ...6417  1.6417  1.6417
      695990860382217  1.6417  1.6417  1.6417
      695995155349513  1.6417  1.6417  1.6417), ...]
allprots = ['EPROT', 'VPROT', 'ZPROT', 'LPROT', 'FPROT', 'SPROT', ...]

    def get_ion_intensity_dataframe_from_list_of_shifted_peptides(list_of_tuple_w_protein_profiles_and_shifted_peptides, allprots):
        ion_names = []
        ion_vals = []
        protein_names = []
        column_names = list_of_tuple_w_protein_profiles_and_shifted_peptides[0][1].columns.tolist()
        for idx in range(len(list_of_tuple_w_protein_profiles_and_shifted_peptides)):
>           protein_name = allprots[idx]
E           IndexError: list index out of range

../../../miniconda3/envs/alpha/lib/python3.9/site-packages/directlfq/protein_intensity_estimation.py:206: IndexError
------------------------------------------------------------------------------------------------ Captured stdout call -------------------------------------------------------------------------------------------------
2024-01-24 12:08:13> Performing protein grouping and FDR
2024-01-24 12:08:13> Building output for run_0
2024-01-24 12:08:13> Building output for run_1
2024-01-24 12:08:13> Building output for run_2
2024-01-24 12:08:13> Building combined output
2024-01-24 12:08:13> Performing protein inference
2024-01-24 12:08:13> Inference strategy: heuristic. Using maximum parsimony with grouping for protein inference
2024-01-24 12:08:13> Performing protein FDR
2024-01-24 12:08:13> Test AUC: 1.000
2024-01-24 12:08:13> Train AUC: 1.000
2024-01-24 12:08:13> AUC difference: 0.00%
2024-01-24 12:08:13> ================ Protein FDR =================
2024-01-24 12:08:13> Unique protein groups in output
2024-01-24 12:08:13>   1% protein FDR: 24
2024-01-24 12:08:13> 
2024-01-24 12:08:13> Unique precursor in output
2024-01-24 12:08:13>   1% protein FDR: 42
2024-01-24 12:08:13> ================================================
2024-01-24 12:08:13> Writing precursor output to disk
2024-01-24 12:08:13> Building search statistics
2024-01-24 12:08:13> Reading precursors.tsv file
2024-01-24 12:08:13> Writing stat output to disk
2024-01-24 12:08:13> Performing label free quantification
2024-01-24 12:08:13> Reading precursors.tsv file
2024-01-24 12:08:13> Accumulating fragment data
2024-01-24 12:08:13> reading frag file for run_0
2024-01-24 12:08:13> reading frag file for run_1
2024-01-24 12:08:13> reading frag file for run_2
2024-01-24 12:08:13> Performing label free quantification on the pg level
2024-01-24 12:08:13> Filtering fragments by quality
2024-01-24 12:08:13> Performing label-free quantification using directLFQ
2024-01-24 12:08:13> to few values for normalization without missing values. Including missing values
2024-01-24 12:08:13> 24 lfq-groups total
2024-01-24 12:08:13> using 8 processes
2024-01-24 12:08:13> lfq-object 0
-------------------------------------------------------------------------------------------------- Captured log call --------------------------------------------------------------------------------------------------
PROGRESS root:outputtransform.py:419 Performing protein grouping and FDR
INFO     root:outputtransform.py:427 Building output for run_0
INFO     root:outputtransform.py:427 Building output for run_1
INFO     root:outputtransform.py:427 Building output for run_2
INFO     root:outputtransform.py:446 Building combined output
INFO     root:outputtransform.py:456 Performing protein inference
INFO     root:outputtransform.py:488 Inference strategy: heuristic. Using maximum parsimony with grouping for protein inference
INFO     root:outputtransform.py:501 Performing protein FDR
INFO     root:fdr.py:355 Test AUC: 1.000
INFO     root:fdr.py:356 Train AUC: 1.000
INFO     root:fdr.py:359 AUC difference: 0.00%
PROGRESS root:outputtransform.py:508 ================ Protein FDR =================
PROGRESS root:outputtransform.py:511 Unique protein groups in output
PROGRESS root:outputtransform.py:512   1% protein FDR: 24
PROGRESS root:outputtransform.py:513 
PROGRESS root:outputtransform.py:514 Unique precursor in output
PROGRESS root:outputtransform.py:515   1% protein FDR: 42
PROGRESS root:outputtransform.py:516 ================================================
INFO     root:outputtransform.py:524 Writing precursor output to disk
PROGRESS root:outputtransform.py:560 Building search statistics
INFO     root:outputtransform.py:390 Reading precursors.tsv file
INFO     root:outputtransform.py:576 Writing stat output to disk
PROGRESS root:outputtransform.py:607 Performing label free quantification
INFO     root:outputtransform.py:390 Reading precursors.tsv file
INFO     root:outputtransform.py:123 Accumulating fragment data
INFO     root:outputtransform.py:58 reading frag file for run_0
INFO     root:outputtransform.py:58 reading frag file for run_1
INFO     root:outputtransform.py:58 reading frag file for run_2
PROGRESS root:outputtransform.py:633 Performing label free quantification on the pg level
INFO     root:outputtransform.py:208 Filtering fragments by quality
INFO     root:outputtransform.py:255 Performing label-free quantification using directLFQ
INFO     directlfq.normalization:normalization.py:239 to few values for normalization without missing values. Including missing values
INFO     directlfq.protein_intensity_estimation:protein_intensity_estimation.py:32 24 lfq-groups total
INFO     directlfq.protein_intensity_estimation:protein_intensity_estimation.py:107 using 8 processes
================================================================================================== warnings summary ===================================================================================================
tests/unit_tests/test_fragcomp.py::test_fragment_competition
  /Users/georgwallmann/Documents/git/alphadia/alphadia/fragcomp.py:189: FutureWarning: The provided callable <built-in function min> is currently using SeriesGroupBy.min. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "min" instead.
    index_df = frag_df.groupby("_candidate_idx", as_index=False).agg(

tests/unit_tests/test_fragcomp.py::test_fragment_competition
  /Users/georgwallmann/Documents/git/alphadia/alphadia/fragcomp.py:189: FutureWarning: The provided callable <built-in function max> is currently using SeriesGroupBy.max. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "max" instead.
    index_df = frag_df.groupby("_candidate_idx", as_index=False).agg(

tests/unit_tests/test_fragcomp.py::test_fragment_competition
  /Users/georgwallmann/Documents/git/alphadia/alphadia/fragcomp.py:247: FutureWarning: The provided callable <built-in function min> is currently using SeriesGroupBy.min. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "min" instead.
    index_df = psm_df.groupby("window_idx", as_index=False).agg(

tests/unit_tests/test_fragcomp.py::test_fragment_competition
  /Users/georgwallmann/Documents/git/alphadia/alphadia/fragcomp.py:247: FutureWarning: The provided callable <built-in function max> is currently using SeriesGroupBy.max. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "max" instead.
    index_df = psm_df.groupby("window_idx", as_index=False).agg(

tests/unit_tests/test_outputtransform.py::test_output_transform
  /Users/georgwallmann/Documents/git/alphadia/alphadia/outputtransform.py:458: FutureWarning: Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas. Value '' has dtype incompatible with float64, please explicitly cast to a compatible dtype first.
    psm_df["mods"].fillna("", inplace=True)

tests/unit_tests/test_outputtransform.py::test_output_transform
  /Users/georgwallmann/Documents/git/alphadia/alphadia/outputtransform.py:461: FutureWarning: Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas. Value '' has dtype incompatible with float64, please explicitly cast to a compatible dtype first.
    psm_df["mod_sites"].fillna("", inplace=True)

tests/unit_tests/test_outputtransform.py::test_output_transform
  /Users/georgwallmann/miniconda3/envs/alpha/lib/python3.9/site-packages/sklearn/neural_network/_multilayer_perceptron.py:691: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.
    warnings.warn(

tests/unit_tests/test_outputtransform.py::test_output_transform
  /Users/georgwallmann/Documents/git/alphadia/alphadia/fdr.py:403: UserWarning: FigureCanvasAgg is non-interactive, and thus cannot be shown
    plt.show()

tests/unit_tests/test_plotting.py::test_plot_cycle
  /Users/georgwallmann/Documents/git/alphadia/alphadia/plotting/cycle.py:189: MatplotlibDeprecationWarning: The get_cmap function was deprecated in Matplotlib 3.7 and will be removed two minor releases later. Use ``matplotlib.colormaps[name]`` or ``matplotlib.colormaps.get_cmap(obj)`` instead.
    cmap = cm.get_cmap(cmap_name)

tests/unit_tests/test_plotting.py::test_plot_cycle
  /Users/georgwallmann/Documents/git/alphadia/alphadia/plotting/cycle.py:46: MatplotlibDeprecationWarning: The get_cmap function was deprecated in Matplotlib 3.7 and will be removed two minor releases later. Use ``matplotlib.colormaps[name]`` or ``matplotlib.colormaps.get_cmap(obj)`` instead.
    cmap = cm.get_cmap(cmap_name)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=============================================================================================== short test summary info ===============================================================================================
FAILED tests/unit_tests/test_outputtransform.py::test_output_transform - IndexError: list index out of range
============================================================================== 1 failed, 53 passed, 5 deselected, 10 warnings in 34.87s ===============================================================================

First QC results indicate reduced quantitative accuracy of directLFQ vs MaxLFQ

Thanks for this very interesting and easily accessible work!

Unfortunately, my first attempt to reprocess a mixed proteome standard (Human/E. coli, 1:1 vs 1:3), processed via DIA-NN with default options only, resulted in clearly reduced quantitative accuracy for directLFQ.

Is this actually to be expected?

Kind regards
Michael


directlfq lfq not working as expected

Describe the bug
The CLI command gives a TypeError when trying to analyse a DIA-NN main report; it works when using the GUI.

To Reproduce

directlfq lfq -i /Users/tobiasko/Downloads/2541046/out-2024-07-11/WU305537_report.tsv -it diann_fragion_isotopes


***********************************************

     _ _               _   _     ______ _____
    | (_)             | | | |    |  ___|  _  |
  __| |_ _ __ ___  ___| |_| |    | |_  | | | |
 / _` | | '__/ _ \/ __| __| |    |  _| | | | |
| (_| | | | |  __/ (__| |_| |____| |   \ \/' /
 \__,_|_|_|  \___|\___|\__\_____/\_|    \_/\_


             * directLFQ 0.2.19 *
***********************************************


starting directLFQ
2024-07-31 15:50:21,099 - directlfq.lfq_manager - INFO - Starting directLFQ analysis.
Traceback (most recent call last):
  File "/Users/tobiasko/.directlfq/bin/directlfq", line 8, in <module>
    sys.exit(run())
  File "/Users/tobiasko/.directlfq/lib/python3.8/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/Users/tobiasko/.directlfq/lib/python3.8/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/Users/tobiasko/.directlfq/lib/python3.8/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/tobiasko/.directlfq/lib/python3.8/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/tobiasko/.directlfq/lib/python3.8/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/Users/tobiasko/.directlfq/lib/python3.8/site-packages/directlfq/cli.py", line 205, in run_directlfq
    directlfq.lfq_manager.run_lfq(**kwargs)
  File "/Users/tobiasko/.directlfq/lib/python3.8/site-packages/directlfq/lfq_manager.py", line 48, in run_lfq
    input_df = lfqutils.import_data(input_file=input_file, input_type_to_use=input_type_to_use, filter_dict=filter_dict)
  File "/Users/tobiasko/.directlfq/lib/python3.8/site-packages/directlfq/utils.py", line 783, in import_data
    file_to_read = reformat_and_save_input_file(input_file=input_file, input_type_to_use=input_type_to_use, filter_dict=filter_dict)
  File "/Users/tobiasko/.directlfq/lib/python3.8/site-packages/directlfq/utils.py", line 803, in reformat_and_save_input_file
    config_dict_for_type['filters']=  dict(config_dict_for_type.get('filters', {}),**filter_dict)
TypeError: type object argument after ** must be a mapping, not bool
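The failing line merges the per-format filter config with the CLI-supplied `filter_dict`. A minimal sketch of that pattern, detached from directlfq, shows why a boolean reaching the `**` expansion (apparently what the CLI passes through here) triggers exactly this TypeError:

```python
def merge_filters(config_filters, filter_dict):
    # Same pattern as in directlfq.utils.reformat_and_save_input_file:
    # merge the format's default filters with the user-supplied ones.
    return dict(config_filters, **filter_dict)

# Works when filter_dict is a mapping:
merged = merge_filters({"min_intensity": 0}, {"q_value": 0.01})

# Fails with "argument after ** must be a mapping, not bool" when a
# boolean slips through instead of a dict:
try:
    merge_filters({"min_intensity": 0}, False)
    raised = False
except TypeError:
    raised = True
```

A guard such as `filter_dict = filter_dict or {}` before the merge would avoid the crash, though the real fix is presumably in how the CLI parses the option.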

Version (please complete the following information):

  • Installation type: pip
  • Platform: macOS Sonoma 14.21.

Additional context
Could you please add a minimal CLI example to your documentation? It would also help to mention that parameters/options are only explained when calling directlfq lfq, and to list them in the documentation as well.

Maxquant_evidence Config TypeError

Describe the bug
Running on a MaxQuant evidence file from the CLI raises TypeError: format not specified in intable_config.yaml!

To Reproduce
Steps to reproduce the behavior:

directlfq lfq -i "yaddayadda\txt\evidence.txt" -it maxquant_evidence

directLFQ console output

Starting directLFQ analysis
You provided a MaxQuant peptide or evidence file as input. To have the identical ProteinGroups as in the MaxQuant analysis, please provide the ProteinGroups.txt file as well.
Traceback (most recent call last):
  File "D:\pipenvs\directlfq\Scripts\directlfq-script.py", line 33, in <module>
    sys.exit(load_entry_point('directlfq==0.2.3', 'console_scripts', 'directlfq')())
  File "d:\pipenvs\directlfq\lib\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "d:\pipenvs\directlfq\lib\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "d:\pipenvs\directlfq\lib\site-packages\click\core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "d:\pipenvs\directlfq\lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "d:\pipenvs\directlfq\lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "d:\pipenvs\directlfq\lib\site-packages\directlfq\cli.py", line 202, in run_directlfq
    directlfq.lfq_manager.run_lfq(**kwargs)
  File "d:\pipenvs\directlfq\lib\site-packages\directlfq\lfq_manager.py", line 36, in run_lfq
    input_df = lfqutils.import_data(input_file=input_file, input_type_to_use=input_type_to_use)
  File "d:\pipenvs\directlfq\lib\site-packages\directlfq\utils.py", line 783, in import_data
    file_to_read = reformat_and_save_input_file(input_file=input_file, input_type_to_use=input_type_to_use)
  File "d:\pipenvs\directlfq\lib\site-packages\directlfq\utils.py", line 792, in reformat_and_save_input_file
    input_type, config_dict_for_type, sep = get_input_type_and_config_dict(input_file, input_type_to_use)
  File "d:\pipenvs\directlfq\lib\site-packages\directlfq\utils.py", line 856, in get_input_type_and_config_dict
    raise TypeError("format not specified in intable_config.yaml!")
TypeError: format not specified in intable_config.yaml!

Version (please complete the following information):

  • Native python 3.8.10 venv + pip [stable, developer-stable]
  • Windows 10
  • directlfq 0.2.3

Additional context
I already checked on GitHub and in my local clone: maxquant_evidence is in the YAML file. I tried this on both DDA and DIA output, and both raise the same error.
Using the GUI from the same environment starts fine, but it autodetects the input as maxquant_evidence_leading_razor_protein.

AlphaPept input files

I found an error while processing an AlphaPept input file. According to intable_config.yaml, the results_peptides.csv file should have these columns: ['protein_group', 'decoy', 'ms1_int_sum', 'charge', 'shortname', 'sequence']. However, the column 'ms1_int_sum' is not present in the results_peptides.csv generated by AlphaPept. Instead, the AlphaPept-generated file has the columns 'ms1_int_sum_apex', 'ms1_int_sum_area', and 'ms1_int_sum_apex_dn'. Therefore, when we try to process the AlphaPept-generated results_peptides.csv file, the program terminates with an error.

TypeError: format not specified in intable_config.yaml!

It would be helpful if this could be clarified. Also, I didn't find any sample data file/example for AlphaPept that can be processed with directlfq.

Lastly, I am also unsure about one point: the results_peptides.csv file is generated in the quantification step of AlphaPept. Could we instead take the required columns after the protein grouping step from results.hdf (dataset = 'protein_fdr')? The 'protein_fdr' table is saved in results.hdf after the protein grouping step.
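Until the column mapping is clarified upstream, a possible stopgap is to rename one of the AlphaPept intensity columns to the name intable_config.yaml expects before running directlfq. This is only a sketch with made-up values; whether the apex or the area column is the right quantity is exactly the open question here:

```python
import pandas as pd

# Mock of the relevant AlphaPept results_peptides.csv columns
# (column names taken from the report above; values are invented):
df = pd.DataFrame({
    "protein_group": ["P1", "P1"],
    "decoy": [False, False],
    "ms1_int_sum_apex": [1.0e6, 2.5e6],
    "ms1_int_sum_area": [2.0e6, 5.0e6],
    "charge": [2, 3],
    "shortname": ["run_0", "run_0"],
    "sequence": ["PEPTIDEK", "ANOTHERK"],
})

# Map one AlphaPept intensity column onto the expected 'ms1_int_sum' name.
# Choosing apex here is an assumption, not an endorsed mapping.
df = df.rename(columns={"ms1_int_sum_apex": "ms1_int_sum"})
```

The renamed frame could then be written back to CSV and fed to directlfq with the AlphaPept input type.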

Default config.QUANT_ID and config.PROTEIN_ID not used on import

Describe the bug
Somehow the default values defined for config.QUANT_ID and config.PROTEIN_ID are not set when importing directlfq as a module.

The result is a rather confusing pandas error for users: None of [None, None] are in the columns.
It can be mitigated by calling lfqconfig.set_global_protein_and_ion_id(protein_id='protein', quant_id='ion') beforehand.

To Reproduce
Steps to reproduce the behavior:

  1. Import directlfq as a module.
  2. Call directLFQ on a dataframe with the default columns protein and ion.

Logs

Traceback (most recent call last):
  File "d:\alphadia\alphadia\planning.py", line 274, in run
    output.build(workflow_folder_list, base_spec_lib)
  File "d:\alphadia\alphadia\outputtransform.py", line 365, in build
    _ = self.build_protein_table(folder_list, psm_df=psm_df, save=True)
  File "d:\alphadia\alphadia\outputtransform.py", line 571, in build_protein_table
    protein_df = qb.lfq(
  File "d:\alphadia\alphadia\outputtransform.py", line 269, in lfq
    lfq_df = lfqutils.index_and_log_transform_input_df(intensity_df)
  File "D:\Maria\anaconda\envs\alpha\lib\site-packages\directlfq\utils.py", line 323, in index_and_log_transform_input_df
    data_df = data_df.set_index([config.PROTEIN_ID, config.QUANT_ID])
  File "D:\Maria\anaconda\envs\alpha\lib\site-packages\pandas\core\frame.py", line 5859, in set_index
    raise KeyError(f"None of {missing} are in the columns")
KeyError: 'None of [None, None] are in the columns'
0:00:57.756267 ERROR: Output failed with error 'None of [None, None] are in the columns'
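The KeyError can be reproduced without directlfq at all: pandas raises it whenever set_index receives keys that are not columns, which is what happens while the two config globals are still None. A minimal sketch (the None defaults below only simulate the reported state, they are not directlfq code):

```python
import pandas as pd

# Simulate the unset module-level defaults described in the report:
PROTEIN_ID = None
QUANT_ID = None

df = pd.DataFrame({
    "protein": ["P1", "P1"],
    "ion": ["ion_a", "ion_b"],
    "run_0": [1.0e6, 2.0e6],
})

# With the IDs unset, set_index fails with the confusing
# "None of [None, None] are in the columns" KeyError:
try:
    df.set_index([PROTEIN_ID, QUANT_ID])
    failed = False
except KeyError:
    failed = True

# After the IDs are set (which is presumably what
# lfqconfig.set_global_protein_and_ion_id does internally), indexing works:
PROTEIN_ID, QUANT_ID = "protein", "ion"
indexed = df.set_index([PROTEIN_ID, QUANT_ID])
```

This matches the reported workaround of calling lfqconfig.set_global_protein_and_ion_id before invoking directlfq.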

Version (please complete the following information):

  • Installation Type Pip on windows
  • directLFQ 0.2.13
