
ecgtools's Introduction

ecgtools


Motivation

A data catalog is the critical requirement for using intake-esm. This package builds data catalogs that can be read by intake-esm, letting users easily search, discover, and access the datasets they are interested in.

See the documentation for more information.

Installation

ecgtools can be installed from PyPI with pip:

python -m pip install ecgtools

It is also available from conda-forge for conda installations:

conda install -c conda-forge ecgtools

ecgtools's People

Contributors

andersy005, dependabot[bot], dougiesquire, jsta, mgrover1, pre-commit-ci[bot], sherimickelson, thomas-moore-creative


ecgtools's Issues

[Bug]: RootDirectory.walk fails with fsspec>=2023.6.0

What happened?

A change introduced in fsspec 2023.6.0 to the way filesystems are walked breaks the walk method on the ecgtools.builder.RootDirectory class. See below for a traceback; this is why the ecgtools CI tests are currently failing.

This is the fsspec change that caused the issue: fsspec/filesystem_spec#1263

What did you expect to happen?

RootDirectory can be walked without error

Minimal Complete Verifiable Example

# See also the failing tests in the CI workflow
from ecgtools import RootDirectory, glob_to_regex

path = "~"
depth = 1
include_patterns = ["*"]
exclude_patterns = []

include_regex, exclude_regex = glob_to_regex(
    include_patterns=include_patterns,
    exclude_patterns=exclude_patterns,
)

directory = RootDirectory(
    path=path,
    depth=depth,
    include_regex=include_regex,
    exclude_regex=exclude_regex,
)
directory.walk()

Relevant log output

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[13], line 20
      9 include_regex, exclude_regex = glob_to_regex(
     10     include_patterns=include_patterns, 
     11     exclude_patterns=exclude_patterns
     12 )
     14 directory = RootDirectory(
     15     path=path,
     16     depth=depth,
     17     include_regex=include_regex,
     18     exclude_regex=exclude_regex
     19 )
---> 20 directory.walk()

File ~/miniconda3/envs/test/lib/python3.11/site-packages/ecgtools/builder.py:59, in RootDirectory.walk(self)
     57 def walk(self):
     58     all_assets = []
---> 59     for root, dirs, files in self.mapper.fs.walk(self.raw_path, maxdepth=self.depth + 1):
     60         # exclude dirs
     61         dirs[:] = [os.path.join(root, directory) for directory in dirs]
     62         dirs[:] = [
     63             directory for directory in dirs if not re.match(self.exclude_regex, directory)
     64         ]

File ~/miniconda3/envs/test/lib/python3.11/site-packages/fsspec/spec.py:452, in AbstractFileSystem.walk(self, path, maxdepth, topdown, **kwargs)
    448         return
    450 for d in dirs:
    451     yield from self.walk(
--> 452         full_dirs[d],
    453         maxdepth=maxdepth,
    454         detail=detail,
    455         topdown=topdown,
    456         **kwargs,
    457     )
    459 if not topdown:
    460     # Yield after recursion if walking bottom up
    461     yield path, dirs, files

KeyError: <the first path encountered by walk>

Anything else we need to know?

I can take a stab at a PR if wanted.
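For reference, here is a minimal sketch of one possible fix, assuming (per fsspec/filesystem_spec#1263) that fsspec now keys its recursion off the bare directory names yielded by walk, so the in-place rewrite of dirs to full paths breaks the full_dirs[d] lookup. The idea is to filter dirs without renaming its entries; the include-pattern filtering at the end is inferred from the surrounding builder code:

import os
import re

def walk(self):
    # Sketch of a patched RootDirectory.walk: prune excluded directories
    # in place, but keep the bare names that fsspec expects when it recurses.
    all_assets = []
    for root, dirs, files in self.mapper.fs.walk(self.raw_path, maxdepth=self.depth + 1):
        dirs[:] = [
            directory
            for directory in dirs
            if not re.match(self.exclude_regex, os.path.join(root, directory))
        ]
        for file in files:
            filepath = os.path.join(root, file)
            if re.match(self.include_regex, filepath):
                all_assets.append(filepath)
    return all_assets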

Tool to combine catalogs into an archive

I think a YAML file like

experiment1:
  case1:
    catalog_path: [path to catalog]
    member_id: ###
  case2:
    catalog_path: [path to catalog]
    member_id: ###
experiment2:
  case3:
    catalog_path: [path to catalog]
    member_id: ###

would define the ensemble well enough. Each individual catalog would be read in, the experiment and member_id columns would be added, and the ctrl_case column would be replaced with ctrl_experiment and ctrl_member_id (assuming ctrl_case is also a member of the ensemble). Then all the individual catalogs would be concatenated into one giant file (we'd need to replace relative path names with absolute ones at this stage).
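A hedged sketch of what such a tool could look like, assuming the YAML layout above and per-case catalogs stored as CSV files pandas can read; combine_catalogs is an illustrative name, not an existing ecgtools API:

import pandas as pd
import yaml

def combine_catalogs(spec_path):
    """Concatenate per-case catalogs described by a YAML spec (sketch)."""
    with open(spec_path) as f:
        spec = yaml.safe_load(f)
    frames = []
    for experiment, cases in spec.items():
        for case, info in cases.items():
            df = pd.read_csv(info['catalog_path'])
            df['experiment'] = experiment
            df['member_id'] = info['member_id']
            frames.append(df)
    # Remapping ctrl_case -> ctrl_experiment/ctrl_member_id and converting
    # relative paths to absolute ones are omitted here.
    return pd.concat(frames, ignore_index=True)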

how to combine/concatenate catalogs?

Is there a way to combine catalogs? For example, let's say I have N cases:

cat_list = []
for case, info in case_dict.items(): 
    b = Builder(**info)
    cat_list.append(b)

master_cat = ecgtools.concatenate(cat_list)

Possibly related to #15, though that seems somewhat out of date.
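In the meantime, a hedged workaround sketch, assuming each built Builder exposes its entries as a pandas DataFrame via the df attribute (as released versions do); parse_func stands in for whatever parsing function each case uses:

import pandas as pd
from ecgtools import Builder

dfs = []
for case, info in case_dict.items():
    # The parsing_func keyword matches recent ecgtools versions; older
    # releases took it positionally.
    b = Builder(**info).build(parsing_func=parse_func)
    dfs.append(b.df)

master_df = pd.concat(dfs, ignore_index=True)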

[Bug]: Updates needed for pydantic v2

What happened?

Pydantic v2 was released on July 1. Ecgtools uses pydantic, but does not work with pydantic v2. Simply importing ecgtools with pydantic=v2.0 results in an error.

What did you expect to happen?

Able to use ecgtools with pydantic v2

Minimal Complete Verifiable Example

# With pydantic=v2.0
import ecgtools

Relevant log output

~/miniconda3/envs/test/lib/python3.11/site-packages/pydantic/_internal/_config.py:257: UserWarning: Valid config keys have changed in V2:
* 'validate_all' has been renamed to 'validate_default'
  warnings.warn(message, UserWarning)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "~/miniconda3/envs/test/lib/python3.11/site-packages/ecgtools/__init__.py", line 6, in <module>
    from .builder import Builder, RootDirectory, glob_to_regex
  File "~/miniconda3/envs/test/lib/python3.11/site-packages/ecgtools/builder.py", line 13, in <module>
    from intake_esm.cat import (
  File "~/miniconda3/envs/test/lib/python3.11/site-packages/intake_esm/__init__.py", line 9, in <module>
    from .core import esm_datastore
  File "~/miniconda3/envs/test/lib/python3.11/site-packages/intake_esm/core.py", line 21, in <module>
    from .cat import ESMCatalogModel
  File "~/miniconda3/envs/test/lib/python3.11/site-packages/intake_esm/cat.py", line 68, in <module>
    class Assets(pydantic.BaseModel):
  File "~/miniconda3/envs/test/lib/python3.11/site-packages/intake_esm/cat.py", line 77, in Assets
    @pydantic.root_validator
     ^^^^^^^^^^^^^^^^^^^^^^^
  File "~/miniconda3/envs/test/lib/python3.11/site-packages/pydantic/deprecated/class_validators.py", line 222, in root_validator
    return root_validator()(*__args)  # type: ignore
           ^^^^^^^^^^^^^^^^
  File "~/miniconda3/envs/test/lib/python3.11/site-packages/pydantic/deprecated/class_validators.py", line 228, in root_validator
    raise PydanticUserError(
pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.

For further information visit https://errors.pydantic.dev/2.0/u/root-validator-pre-skip

Anything else we need to know?

This should be a pretty straightforward update. I'm happy to submit a PR if wanted.
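For anyone hitting this, a minimal sketch of the migration the error message points to, using a toy model rather than intake-esm's actual Assets class:

from typing import Optional

import pydantic

class Toy(pydantic.BaseModel):
    column_name: str
    format: Optional[str] = None

    # pydantic v1 style, deprecated in v2:
    #
    #     @pydantic.root_validator
    #     def _check(cls, values): ...
    #
    # pydantic v2 replacement:
    @pydantic.model_validator(mode='after')
    def _check(self):
        if self.format is None:
            raise ValueError('format must be set')
        return self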

Visualize case + ensemble members

It would be neat to have something similar to a Dask task graph [image omitted] for the catalog building, helping to visualize experiments and their branches. An example would be the CESM-LE, where we could show the experiment, the number of ensemble members, the components within each, and the streams. This would be helpful for visualizing everything that is in the catalog.

Reading in CESM history file catalog using intake-esm

When reading back in the catalog created from the cesm_history_file parser, using the following arguments:

b.save(
    # File path - could save as .csv (uncompressed csv) or .csv.gz (compressed csv)
    "/glade/work/mgrover/cesm-hist-test.csv",
    
    # Column name including filepath
    path_column='path',
    
    # Column name including variables
    variable_column='variables',
    
    # Data file format - could be netcdf or zarr (in this case, netcdf)
    data_format="netcdf",
    
    # Which attributes to groupby when reading in variables using intake-esm
    groupby_attrs=["component", "stream", "case"],
    
    # Aggregations which are fed into xarray when reading in data using intake
    aggregations=[
        {
            "type": "join_existing",
            "attribute_name": "date",
            "options": {"dim": "time", "coords": "minimal", "compat": "override"},
        }
    ],
)

And read in/subset using the following:

# Open catalog
col = intake.open_esm_datastore(
    "/glade/work/mgrover/cesm-hist-test.json", 
    csv_kwargs={"converters": {"variables": ast.literal_eval}}, sep="/"
)

# Search for the variable TEMP
cat = col.search(variables='TEMP', 
                 case='b.e20.b1850.f19_g17.test')

# Use to_dataset_dict to read in the data
dsets = cat.to_dataset_dict(cdf_kwargs={'use_cftime': True, 'chunks': {'time':10}})

The dictionary of datasets is constructed using component/stream/case/date/frequency/variables/path, which is not what was specified by the groupby/aggregation settings above. Any idea what could be going wrong here, @andersy005?

We need to update our documentation for the path argument

What happened?

As of the latest versions of ecgtools, the input for path is a list of strings. This is not clear in the docs.

What did you expect to happen?

We should have examples that line up with the validation.

Minimal Complete Verifiable Example

ecgtools.Builder(paths=['some_path'])

Relevant log output

No response

Anything else we need to know?

No response

Add column for grid variables for CESM parsers

It would be helpful to have a list of grid variable names, so that one could easily create a dataset of only the grid variables, or append them to the single variable (or variables) in a data catalog search.

confusing comment in parsers/cesm.py and duplicated code

The following comment occurs twice in parsers/cesm.py:

# Make sure to sort the streams in reverse, the reverse=True is required so as
# not to confuse `pop.h.ecosys.nday1` and `pop.h.ecosys.nday1` when looping over
# the list of streams in the parsing function

I'm guessing that the stream names should differ, but it's not clear what is intended.

Also, the code block that this comment is in appears to be duplicated. Is there a reason to do this that I'm missing?

pass list as root_dir doesn't work

I am running 'v2021.9.23' which should include #83; however, when I try to pass a list in as root_dir, I get the following error.

---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
/glade/scratch/mclong/tmp/ipykernel_239530/4167871574.py in <module>
      3     depth=1,
      4     exclude_patterns=["*/rest/*", "*/logs/*", "*/proc/*"],
----> 5     njobs=12,
      6 )
      7 b = b.build( 

~/codes/o-nets/o-nets/ecgtools/builder.py in __init__(self, root_path, extension, depth, exclude_patterns, njobs)

/glade/work/mclong/miniconda3/envs/onets/lib/python3.7/site-packages/pydantic/dataclasses.cpython-37m-x86_64-linux-gnu.so in pydantic.dataclasses._generate_pydantic_post_init._pydantic_post_init()

ValidationError: 1 validation error for Builder
root_path
  value is not a valid path (type=type_error.path)

⚠️ Nightly upstream-dev CI failed ⚠️

Workflow Run URL

Python 3.10 Test Summary
tests/test_builder.py: ImportError: cannot import name 'PydanticUndefined' from 'pydantic_core' (unknown location)
tests/parsers/test_cesm.py: ImportError: cannot import name 'PydanticUndefined' from 'pydantic_core' (unknown location)
tests/parsers/test_cmip.py: ImportError: cannot import name 'PydanticUndefined' from 'pydantic_core' (unknown location)
tests/parsers/test_obs.py: ImportError: cannot import name 'PydanticUndefined' from 'pydantic_core' (unknown location)

[FEATURE]: Make intake-esm a soft dependency

Is your feature request related to a problem?

Currently, intake-esm is set as a hard dependency due to these imports:

from intake_esm.cat import (
    Aggregation,
    AggregationControl,
    Assets,
    Attribute,
    DataFormat,
    # ... (truncated in the original report)
)

These imports are only used when saving the catalog, so they should be done inside the .save() method, eliminating the need for intake-esm when one doesn't want the .save() functionality.
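A minimal sketch of the proposed change, assuming the save logic lives on the Builder class; only the import placement matters here:

class Builder:
    ...

    def save(self, *args, **kwargs):
        try:
            # Imported lazily so intake-esm is only needed when saving.
            from intake_esm.cat import (
                Aggregation,
                AggregationControl,
                Assets,
                Attribute,
                DataFormat,
            )
        except ImportError as exc:
            raise ImportError(
                'intake-esm is required for Builder.save(); '
                'install it to enable catalog saving.'
            ) from exc
        ...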

Describe the solution you'd like

No response

Describe alternatives you've considered

No response

Additional context

No response

support for cpl.hi files

I'm inspecting high-frequency output from a CESM experiment that @mvertens ran.
There is output every timestep from different components of CESM.
I decided to try ecgtools and intake-esm to get multi-file Datasets for each component.
I'm getting tripped up on cpl.hi files.

The experiment was run without short-term archiving. So I soft-linked the history files to another directory, and ran Builder on that directory. (This was to avoid dealing with non-history files in the run directory.)

The directory where I soft-linked the history files is
/glade/scratch/klindsay/SMS_Vmct_Ln6.f19_g17.1850_CAM60_CLM50%BGC-CROP_CICE_POP2%ECO_MOSART_SGLC_WW3_BGC%BPRP.cheyenne_intel.validate_beta03/hist

The command I'm using for Builder is

CASE = "SMS_Vmct_Ln6.f19_g17.1850_CAM60_CLM50%BGC-CROP_CICE_POP2%ECO_MOSART_SGLC_WW3_BGC%BPRP.cheyenne_intel.validate_beta03"
USER = "klindsay"
DIR = f"/glade/scratch/{USER}/{CASE}/hist"
b = Builder(
    # Directory with the output
    f"/glade/scratch/{USER}/{CASE}/hist",
    # Depth of 1 since all of the history files are in a flat directory
    depth=1,
    # Use the parse_cesm_history parsing function
    parsing_func=parse_cesm_history,
)
print(b)
b = b.build()
print(b)

I'm getting invalid assets for the cpl.hi files. An example traceback for the cpl.hi files is

Traceback (most recent call last):
  File "/glade/work/klindsay/analysis/notebooks/ecgtools/ecgtools/parsers/cesm.py", line 143, in parse_cesm_history
    if info['stream'] is None:
KeyError: 'stream'

What needs to be done to get support for these files into ecgtools? I'm guessing it's easy for someone who understands the code in ecgtools, but I don't fall into that category.

Add ability to build catalogs on the cloud (ex. AWS s3)

A feature that would be great to have is the ability to use fsspec to crawl cloud object stores and build catalogs. Currently, we are using pathlib, which works fine for "regular" POSIX filesystems but would not work with AWS S3 buckets.

This is a feature that will be required once we start to build catalogs for the AWS Silverlinings Project
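A hedged sketch of what fsspec-based traversal could look like; the bucket below is illustrative, and the s3 protocol requires s3fs to be installed:

import fsspec

fs = fsspec.filesystem('s3', anon=True)
for root, dirs, files in fs.walk('s3://some-bucket/some-prefix', maxdepth=2):
    for file in files:
        if file.endswith('.nc'):
            print(f'{root}/{file}')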

Add ability to catalog grid variables

In most of our parsers, we check whether variables are time variant, excluding variables that aren't. Many of those variables relate to the grid and would be useful to include in the data catalogs. See the code snippet below, which excludes the time-invariant variables:

if time in da.dims and v not in {time, time_bounds}

We could tag these variables as "static" (or some better name)... the case this would especially help with is MOM data, which stores the grid variables in a separate static file that often has its own stream.
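A hedged sketch of keeping, rather than dropping, the time-invariant variables; ds, time, and time_bounds follow the snippet above, and static_variables is a hypothetical new column name:

variables = [
    v for v, da in ds.variables.items()
    if time in da.dims and v not in {time, time_bounds}
]
# Time-invariant (grid/static) variables, currently discarded:
static_variables = [v for v, da in ds.variables.items() if time not in da.dims]
info['variables'] = variables
info['static_variables'] = static_variables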

Improve handling of zarr files

Currently, the parser does not handle zarr stores well when building the file list, since zarr stores are categorized as directories rather than individual files.
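A hedged sketch of one way to treat .zarr directories as assets during traversal, assuming an fsspec-style walk like the builder already uses:

import fsspec

fs = fsspec.filesystem('file')
assets = []
for root, dirs, files in fs.walk('/path/to/data'):
    # Collect zarr stores as single assets and do not descend into them.
    assets.extend(f'{root}/{d}' for d in dirs if d.endswith('.zarr'))
    dirs[:] = [d for d in dirs if not d.endswith('.zarr')]
    assets.extend(f'{root}/{f}' for f in files)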

Missing time range in CMIP6 parser

One of the aggregations within intake-esm for the CMIP6 dataset is time_range. This variable is missing from the table generated by the catalog builder.

⚠️ Nightly upstream-dev CI failed ⚠️

Workflow Run URL

Python 3.10 Test Summary
tests/test_builder.py: pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.

For further information visit https://errors.pydantic.dev/2.0.1/u/root-validator-pre-skip
tests/test_builder.py: pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.

For further information visit https://errors.pydantic.dev/2.0.1/u/root-validator-pre-skip
tests/parsers/test_cesm.py: pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.

For further information visit https://errors.pydantic.dev/2.0.1/u/root-validator-pre-skip
tests/parsers/test_cesm.py: pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.

For further information visit https://errors.pydantic.dev/2.0.1/u/root-validator-pre-skip
tests/parsers/test_cmip.py: pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.

For further information visit https://errors.pydantic.dev/2.0.1/u/root-validator-pre-skip
tests/parsers/test_cmip.py: pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.

For further information visit https://errors.pydantic.dev/2.0.1/u/root-validator-pre-skip
tests/parsers/test_obs.py: pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.

For further information visit https://errors.pydantic.dev/2.0.1/u/root-validator-pre-skip
tests/parsers/test_obs.py: pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.

For further information visit https://errors.pydantic.dev/2.0.1/u/root-validator-pre-skip

Tool to generate intake-catalog for CESM runs

Python script that is given a CESM caseroot and generates a pandas DataFrame containing info on the netCDF files in DOUT_S_ROOT. (A rough sketch follows the column list below.)

Desired columns:

  • case [strictly speaking this isn't needed, but it could prove useful]
  • path (determined by crawling the archive directory)
  • component (determined by which subdirectory of DOUT_S_ROOT contains the file)
  • stream (from filename)
  • variable (from filename)
  • date_range (from filename)
  • ctrl_branch_year (RUN_REFDATE, assuming GET_REFCASE is TRUE)
  • ctrl_case (RUN_REFCASE, assuming GET_REFCASE is TRUE)
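A rough, hypothetical sketch of the requested script; the filename parsing is deliberately simplified, and reading RUN_REFCASE/RUN_REFDATE from the case XML files is omitted:

import pathlib

import pandas as pd

def catalog_case(dout_s_root, case):
    """Crawl a CESM archive directory into a catalog DataFrame (sketch)."""
    root = pathlib.Path(dout_s_root)
    rows = []
    for path in root.rglob('*.nc'):
        rows.append(
            {
                'case': case,
                'path': str(path),
                # component: which subdirectory of DOUT_S_ROOT holds the file
                'component': path.relative_to(root).parts[0],
                # Real CESM filenames need careful splitting into stream,
                # variable, and date_range; keep the stem whole here.
                'stream': path.stem,
            }
        )
    return pd.DataFrame(rows)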

"depth" argument in Builder yields unexpected behavior

As a user, I would expect that setting depth to some large maximum value would perform the most comprehensive directory search possible. Instead, I find that this search

b = Builder(
    "/glade/campaign/cesm/development/espwg/SMYLE/initial_conditions/SMYLE-FOSI/",
    depth=5,
    exclude_patterns=["*/rest/*", "*/logs/*", "*/proc/*"],
    njobs=5,
)
b = b.build( 
    parse_cesm_history,
)

yields no files, while this search

b = Builder(
    "/glade/campaign/cesm/development/espwg/SMYLE/initial_conditions/SMYLE-FOSI/",
    depth=1,
    exclude_patterns=["*/rest/*", "*/logs/*", "*/proc/*"],
    njobs=5,
)
b = b.build( 
    parse_cesm_history,
)

does.

parsing history files: assume same variables override?

When parsing multi-variable files, does the builder have to open each one to get a list of variables? Could the build process be accelerated by assuming that the same variables are present in all the files? Such behavior could be triggered by a user-supplied input parameter.
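A hedged sketch of what such an opt-in could look like; assume_same_variables and stream_from_filename are hypothetical names, with the cache keyed per stream:

import xarray as xr

_variable_cache = {}

def parse_history_file(file, assume_same_variables=False):
    stream = stream_from_filename(file)  # hypothetical helper
    if assume_same_variables and stream in _variable_cache:
        variables = _variable_cache[stream]
    else:
        # Opening the file is the expensive step this option would skip.
        with xr.open_dataset(file, chunks={}) as ds:
            variables = list(ds.data_vars)
        _variable_cache[stream] = variables
    return {'path': file, 'stream': stream, 'variables': variables}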

Release v2023.7.??

A few changes have been made recently to deal with updates in dependencies (see #162, #164). We should do a release that includes these. This should be done at the same time as a new Intake-ESM release, since ecgtools and Intake-ESM depend on one another and both have unreleased changes to support pydantic v2.

⚠️ Nightly upstream-dev CI failed ⚠️

Workflow Run URL

Python 3.10 Test Summary
tests/test_builder.py::test_directory[s3://ncar-cesm-lens/atm/monthly-0-storage_options2-include_patterns2-exclude_patterns2-75]: AttributeError: module 'aiobotocore' has no attribute 'AioSession'
tests/test_builder.py::test_builder_init[paths1-0-storage_options1-include_patterns1-exclude_patterns1-78]: AttributeError: module 'aiobotocore' has no attribute 'AioSession'
tests/test_builder.py::test_builder_build[paths1-0-storage_options1-include_patterns1-exclude_patterns1-4]: AttributeError: module 'aiobotocore' has no attribute 'AioSession'

Add an "include_patterns" arguement

When building intake-esm catalogs from multiple directories (different cases), it would be helpful to be able to specify which cases to include... we could do this by adding an include_patterns argument, as sketched below.
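A sketch of the requested interface; include_patterns is the proposed argument, not something available when this was filed:

from ecgtools import Builder

b = Builder(
    '/path/to/cases',  # illustrative root directory
    depth=2,
    include_patterns=['*caseA*', '*caseB*'],  # proposed: only these cases
    exclude_patterns=['*/rest/*', '*/logs/*'],
)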

[Bug]: `pkg_resources` is deprecated and removed in Python 3.12

What happened?

Importing ecgtools with Python 3.12 gives

ModuleNotFoundError: No module named 'pkg_resources'

since pkg_resources is deprecated and no longer available by default in Python 3.12 environments (setuptools is no longer pre-installed)

Anything else we need to know?

This should be easy to fix by using importlib.metadata instead to get the version in __init__.py.
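A minimal sketch of the replacement in __init__.py:

from importlib.metadata import PackageNotFoundError, version

try:
    __version__ = version('ecgtools')
except PackageNotFoundError:
    # Package is not installed, e.g. running from a source checkout
    __version__ = 'unknown'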

CESM History File Parser Issue - CISM component

When the file parser attempts to read in the CISM component of CESM, xarray returns the following error when reading the file:

unable to decode time units 'common_year since 1-1-1 0:0:0' with "calendar 'noleap'".

Using decode_times=False in xarray works, but the time values are then raw floats in units of 'common_year since 0000-01-01 0:0:0'.

Would it make sense to add a try/except block and allow the parser to fall back to decode_times=False? I am wondering whether this error would propagate downstream in the analysis...
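A hedged sketch of the suggested fallback, assuming the parser opens files with xarray.open_dataset:

import xarray as xr

try:
    ds = xr.open_dataset(path, chunks={})
except ValueError:
    # CISM writes time units xarray cannot decode ('common_year since ...');
    # fall back to raw, undecoded time values.
    ds = xr.open_dataset(path, chunks={}, decode_times=False)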

[Bug]: pydantic causing simple parse script to fail on build

What happened?

When attempting to run Builder.build(custom_parser), I get a pydantic error that kills the catalog build, despite following the tutorial and successfully running the parser on test files.

What did you expect to happen?

A catalog to build successfully

Minimal Complete Verifiable Example

I am trying to build a simple custom parser. I am following [this guide](https://ecgtools.readthedocs.io/en/latest/how-to/use-a-custom-parser.html).

Imports used throughout the example:

import pathlib
import traceback

import xarray as xr

from ecgtools import Builder
from ecgtools.builder import INVALID_ASSET, TRACEBACK

0. Make some fake data.

for var in ['pr', 'tas', 'tasmax']:
    ds = xr.DataArray([1, 2, 3], dims='time').rename(var).to_dataset()
    ds.to_netcdf(f"./data/{var}_daily_EC-Earth3_historical.nc")

1. Put together a list of paths.

root_path = pathlib.Path("./data/")
files = list(root_path.glob("*.nc"))
# Convert to string paths since the builder only takes string paths.
files = [str(f) for f in files]

2. I built a simple custom parser to test this out, following the guide.

def parse_dummy(file):
    fpath = pathlib.Path(file)
    info = {}

    try:
        # Just extracting metadata from the filename in this case. The same
        # errors occur when loading into a dataset is included.
        variable, temporal_resolution, model, scenario = fpath.stem.split('_')
        info = {
            'variable': variable,
            'temporal': temporal_resolution,
            'source': model,
            'path': str(file),
        }

        return info

    except Exception:
        return {INVALID_ASSET: file, TRACEBACK: traceback.format_exc()}

3. I tested this on a single file and it was successful.

parse_dummy(files[0])
{'variable': 'pr',
 'temporal': 'daily',
 'source': 'EC-Earth3',
 'path': './data/pr_daily_EC-Earth3_historical.nc'}

4. Now I make a builder object. The object successfully returns the expected list of files.

# Tried the pathlib objects here following the demo but got an error that
# pydantic wanted strings only.
cat_builder = Builder(files)
>>> Builder(paths=['data/pr_daily_EC-Earth3_historical.nc', 'data/tasmax_daily_EC-Earth3_historical.nc', 'data/tas_daily_EC-Earth3_historical.nc'], storage_options={}, depth=0, exclude_patterns=[], include_patterns=[], joblib_parallel_kwargs={})

5. Build the catalog with the parse script.

cat_builder.build(parse_dummy)

Relevant log output

---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
Input In [84], in <cell line: 1>()
----> 1 cat_builder.build(parse_fake)

File ~/miniconda3/envs/analysis_py39/lib/python3.9/site-packages/pydantic/decorator.py:40, in pydantic.decorator.validate_arguments.validate.wrapper_function()

File ~/miniconda3/envs/analysis_py39/lib/python3.9/site-packages/pydantic/decorator.py:133, in pydantic.decorator.ValidatedFunction.call()

File ~/miniconda3/envs/analysis_py39/lib/python3.9/site-packages/pydantic/decorator.py:130, in pydantic.decorator.ValidatedFunction.init_model_instance()

File ~/miniconda3/envs/analysis_py39/lib/python3.9/site-packages/pydantic/main.py:342, in pydantic.main.BaseModel.__init__()

ValidationError: 2 validation errors for Build
parsing_func
  field required (type=value_error.missing)
args
  1 positional arguments expected but 2 given (type=type_error)

Anything else we need to know?

pydantic version: '1.10.2'
ecgtools version: '2022.10.7'
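The ValidationError ("parsing_func field required" plus the extra positional argument) suggests that, in this version, build accepts parsing_func only as a keyword argument; if so, the following call form should work:

cat_builder.build(parsing_func=parse_dummy)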
