
erddapy's Introduction

erddapy: ERDDAP + Python.




Overview

Easier access to scientific data.

erddapy takes advantage of ERDDAP's RESTful web services and creates the ERDDAP URL for any request, like searching for datasets, acquiring metadata, downloading the data, etc.

What is ERDDAP? ERDDAP unifies different types of data servers and offers a consistent way to get data in multiple formats. For more information on ERDDAP servers, please see https://coastwatch.pfeg.noaa.gov/erddap/index.html.

Documentation and code

The documentation is hosted at https://ioos.github.io/erddapy.

The code is hosted at https://github.com/ioos/erddapy.

Installation

If you use conda, you can

conda install --channel conda-forge erddapy

or, if you are a pip user,

python -m pip install erddapy

Note that, if you are installing requirements-dev.txt, the iris package is named scitools-iris on PyPI, so pip users must install it under that name.
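A sketch of one way to handle the rename before installing (the sample requirements file created here is a stand-in for illustration only):

```shell
# Stand-in requirements file for illustration; use the real requirements-dev.txt.
printf 'numpy\niris\n' > requirements-dev.txt

# Rewrite the "iris" entry to its PyPI distribution name, scitools-iris.
sed 's/^iris$/scitools-iris/' requirements-dev.txt > requirements-dev-pypi.txt

# Then install the rewritten file:
# python -m pip install -r requirements-dev-pypi.txt
```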

Example

from erddapy import ERDDAP


e = ERDDAP(
  server="https://gliders.ioos.us/erddap",
  protocol="tabledap",
)

e.response = "csv"
e.dataset_id = "whoi_406-20160902T1700"
e.constraints = {
    "time>=": "2016-07-10T00:00:00Z",
    "time<=": "2017-02-10T00:00:00Z",
    "latitude>=": 38.0,
    "latitude<=": 41.0,
    "longitude>=": -72.0,
    "longitude<=": -69.0,
}
e.variables = [
    "depth",
    "latitude",
    "longitude",
    "salinity",
    "temperature",
    "time",
]

df = e.to_pandas()

Get in touch

Report bugs, suggest features or view the source code on GitHub.

Projects using erddapy

Similar projects

  • rerddap implements a nice client for R that performs searches on a curated set of servers instead of a query per server like erddapy.

  • erddap-python provides 99% of the same functionality as erddapy but with a JavaScript-like API.

License and copyright

Erddapy is licensed under the BSD 3-Clause "New" or "Revised" License (BSD-3-Clause).

Development occurs on GitHub at https://github.com/ioos/erddapy.

erddapy's People

Contributors

abkfenris, bobfrat, callumrollo, circularpenguin, dependabot[bot], douglasnehme, jmunroe, jojobozz, kstonekuan, kthyng, ocefpaf, pmav99, pre-commit-ci[bot], rsignell-usgs, thogar-computer, vinisalazar, xeulha


erddapy's Issues

Allow server-side functions within time parameter

Afternoon,
currently, if a user wants to call a URL like the following from erddapy:

http://erddap.sensors.ioos.us/erddap/tabledap/uk_gov_metoffice_62103.htmlTable?time,latitude,longitude,wind_speed,wind_from_direction&time=max(time)+0minutes

It will fail on line 50 of erddapy.py:

parse_date_time = parse_time_string(date_time)[0]

Even though this can be done from within pandas, these types of server-side functions greatly reduce the amount of data transmitted.

As a starting point, we could check whether the value starts with max/min. I am not totally sure of the full list of functions...

If agreed, I am happy to pick this up :)
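A possible starting point, sketched below. This is not erddapy's implementation; the prefix list is a guess, and the real set of server-side functions would need checking against the ERDDAP docs.

```python
import pandas as pd

# Hypothetical guard: values that look like server-side expressions are
# passed through verbatim so ERDDAP can evaluate them itself.
SERVER_SIDE_PREFIXES = ("max(", "min(", "now")

def parse_time_constraint(value):
    """Return a POSIX timestamp, or the raw string for server-side expressions."""
    if isinstance(value, str) and value.startswith(SERVER_SIDE_PREFIXES):
        return value  # let ERDDAP evaluate it server-side
    return pd.to_datetime(value, utc=True).timestamp()
```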

Get the attributes of a dataset

Hello erddapy developers,

Is it possible to get the attributes of a dataset in a list or dictionary or JSON?

In the examples, I only find information on how to get the attributes with the 'get_info_url' method and then display them on screen in an iframe. I need the attribute information in a variable.

Thanks!
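One way to do this is to request the info page as CSV and reshape it; a sketch below. The column names follow ERDDAP's standard info response, and the commented calls use the documented `get_info_url` method.

```python
import pandas as pd

def attrs_from_info_csv(df):
    """Turn ERDDAP's info CSV (columns: Row Type, Variable Name,
    Attribute Name, Data Type, Value) into a dict of global attributes."""
    return (
        df[df["Variable Name"] == "NC_GLOBAL"]
        .set_index("Attribute Name")["Value"]
        .to_dict()
    )

# With erddapy, the CSV comes from the documented get_info_url method:
#   e = ERDDAP(server=..., protocol="tabledap")
#   e.dataset_id = "whoi_406-20160902T1700"
#   df = pd.read_csv(e.get_info_url(response="csv"))
#   attrs = attrs_from_info_csv(df)
```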

Add links to the documentation in the README

There are a few spots (e.g. at the top, in the examples, etc) where you could direct users to your Sphinx documentation for more in-depth usage information. I think this would help direct people who may not know to look for the documentation link in the top of the GitHub page.

handle case where `standard_name` attribute is not lowercase

It's frustrating that here we find a dataset with the requested standard_name, but then when we try to get the variable associated with that standard_name, we fail.

The problem is that in this dataset the standard_name is not lowercase: it's Sea_Water_Pressure instead of sea_water_pressure, so our test against sea_water_pressure fails.

I'm not sure how to best solve this in ERDDAPY. We could just do string.lower() but that would probably cause failures for attributes other than standard_name.

Or @BobSimons, perhaps we could fix this on the ERDDAP end?
Maybe ERDDAP could make all standard_name attributes lower case?

Notebook here:
https://gist.github.com/rsignell-usgs/c6647e8debdce4654a90f8321389bd18
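A sketch of the narrow fix discussed above: lowercase only when comparing standard_name values, so attributes where case is meaningful are left alone (the helper name and the dict shape are illustrative, not erddapy's API):

```python
def find_variables_by_standard_name(variables, target):
    """Given a {variable: standard_name} mapping, return the variables
    whose standard_name matches target, ignoring case."""
    return [
        var for var, name in variables.items()
        if name.lower() == target.lower()
    ]
```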

Factor the time zone before converting to timestamp

PR #17 changed from netcdftime to a .timestamp() method call, but that can lead to wrong timestamps because 1970-01-01 maps to 0 only when UTC is specified. Naive datetime objects produce wrong timestamps:

>>> datetime(1970, 1, 1, 0, 0).timestamp()
10800.0

>>> datetime(1970, 1, 1, 0, 0, tzinfo=pytz.utc).timestamp()
0.0

(The 10800.0 comes from the local timezone, here UTC+3, being applied to the naive datetime.)
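The fix is to make the datetime timezone-aware before calling .timestamp(); a minimal sketch with the standard library (assuming naive datetimes should be read as UTC):

```python
from datetime import datetime, timezone

def epoch_seconds(dt):
    """Convert a datetime to seconds since 1970-01-01T00:00:00Z,
    treating naive datetimes as UTC rather than local time."""
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.timestamp()
```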

TypeError with .to_xarray()

In 0.7.0 or 0.7.1 I'm getting a TypeError: a bytes-like object is required, not '_io.BytesIO' when calling .to_xarray() in some cases, from _tempnc():

@contextmanager
def _tempnc(data: BinaryIO) -> Generator[str, None, None]:
    """Creates a temporary netcdf file."""
    from tempfile import NamedTemporaryFile

    tmp = None
    try:
        tmp = NamedTemporaryFile(suffix=".nc", prefix="erddapy_")
        tmp.write(data)
        tmp.flush()
        yield tmp.name
    finally:
        if tmp is not None:
            tmp.close()

I'm trying to figure out a minimal example, but so far I have a gulfofmaine/buoy_barn test that throws.

I initially thought that it was because I'm using VCRpy to mock out traffic to various servers, but I'm having it occur outside of testing also.

Here's a Sentry issue with the exception.

GSoC 2022 ideas

Some users are looking for tools to help them assemble ERDDAP URLs for use in their own workflows, while others would prefer to work at a higher, more opinionated level. I believe we can separate functionality more cleanly to better support the spectrum of erddapy users.

Originally erddapy was meant to be a URL builder only. We added the main class later and stopped in between, trying to support many different usage patterns via the single primary class, ERDDAP.

Issues to address

  • Users in interactive workflows must transform ERDDAP objects by hand whenever they want to connect to a new dataset, for example when moving from searching a server to visiting one of its datasets.
  • Adding constraints to an ERDDAP object is a stateful, in-place change, whereas most interactive users are used to NumPy/Pandas/xarray-style workflows where changes are returned and can be chained together.
  • Switching out IO is currently non-trivial because URL generation and data transformation are tightly coupled to IO.

Proposed Solution

I propose that we separate erddapy into more functional layers, roughly following the SQLAlchemy core/ORM model.

Core Layer

The core layer would contain two primary components: URL generation and data transformation. This layer makes no choices or assumptions about IO, allowing it to be reused easily.

  • URL generation - Functions to generate valid URLs from bare components, such as a dataset name, format, and dictionary of constraints to tabledap/M01_met_all.csv?time%2Cair_temperature%26air_temperature_qc%3D0%26time>%3D"2020-12-09T15%3A25%3A00.000Z"
  • Data transformation - Functions to convert a raw response (.csv, .nc, ...) into Pandas DataFrames, xarray Datasets...
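A rough sketch of what an IO-free URL-generation function in the core layer could look like (the function name, signature, and quoting rules are hypothetical):

```python
from urllib.parse import quote

def download_url_segment(dataset_id, file_type, variables, constraints):
    """Build the server-relative part of a tabledap download URL
    from bare components, with no IO involved."""
    query = ",".join(variables)
    for name, value in constraints.items():
        # Keep comparison operators readable; percent-encode the value.
        query += "&" + quote(str(name), safe="<>=!") + quote(str(value), safe="")
    return f"tabledap/{dataset_id}.{file_type}?{query}"
```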

Object (or opinionated) Layer

The object (or opinionated) layer would present higher level objects for searching servers and accessing datasets with a Pandas or xarray like returning or chainable API compared to the transformational API of the current ERDDAP class.
This layer uses much of the core functionality and presents it in easy to use ways with an opinion as to the access method.

Additionally if possible these objects should be serializable, so they can be pickled and passed to other processes/machines (Dask/Dagster/Prefect).

  • class ERDDAPConnection
    • While most ERDDAP servers allow connections via a bare url, some servers may require authentication to access data.
    • .get(url_part: str) -> bytes or str
      • Method that actually requests the data.
      • Uses requests by default similar to most of the current erddapy data fetching functionality.
      • Can be overridden to use httpx, and potentially aiohttp or other async functionality, which could hopefully make anything else async compatible. (investigate await_me_maybe)
    • .open(url_part: str) -> fp
      • Yields a file-like object for access (probably use fsspec.open under the hood) for file types/tools that don't enjoy getting passed a string.
    • @property(server) -> ERDDAPConnection
      • Return a new ERDDAPConnection if trying to set a new server, or change other attributes rather than changing it in place.

For all of the remaining classes, either an ERDDAPConnection or a bare ERDDAP server url that will be transformed into an ERDDAPConnection can be passed in.

  • class ERDDAPServer

    • .__init__(connection: str | ERDDAPConnection)
    • .full_text_search(query: str) -> dict[str, ERDDAPDataset]
      • Use the native ERDDAP full text search capabilities
      • Returns a dictionary of search results with dataset ids as keys and ERDDAPDataset values.
    • .search(query: str) -> dict[str, ERDDAPDataset]
      • Points to .full_text_search
    • advanced_search(**kwargs) -> dict[str, ERDDAPDataset]
      • Uses ERDDAP's advanced search capabilities (may return pre-filtered datasets)
  • class ERDDAPDataset

    Base class for more focused table or grid datasets.

    • @property(connection)
      • Underlying ERDDAPConnection
    • .get(file_type: str) -> bytes or str
      • Requests the data using the .connection.get() method.
    • .open(file_type: str) -> fp
      • Yields a file-like object for access.
    • .get_meta()
      • Pulls the dataset info and caches it on the _meta attribute.
    • ._meta
      • Set by .get_meta()
      • Passed when a setter returns a subclass.
      • .attrs -> pd.DataFrame- Dataframe of dataset attributes.
      • .variables -> dict - Dictionary of variables as keys, and maximum extent of constraints as values.
    • @property(meta)
      • Returns the ._meta values, and will call .get_meta() if they are not already cached.
    • @property(variables)
      • Lists the variables currently requested from the dataset.
      • Setting variables returns a new ERDDAPDataset subclass.
      • If _meta is cached and an invalid variable is set, throw a KeyError instead of returning.
    • @property(constraints)
      • Returns the current constraints on the dataset.
      • Setting constraints returns a new ERDDAPDataset subclass.
      • If _meta is cached and an invalid constraint is set, throw a KeyError instead of returning.
    • .url_segment(file_type: str) -> str
      • Everything but the base section of the url (http://neracoos.org/erddap/), so tabledap/A01_met.csv....
    • .url(file_type: str) -> str
      • Returns a URL constructed using the underlying ERDDAPConnection base class server info, the dataset ID, access method (tabledap/griddap), file type, variables, and constraints.
      • This allows ERDDAPDataset subclasses to be used as more opinionated URL constructors while still not tying users to a specific IO method.
      • Not guaranteed to capture all the specifics of formatting a request, such as if a server requires specific auth or headers.
    • .to_dataset() - Open the dataset as an xarray dataset by downloading a subset NetCDF.
    • .opendap_dataset() - Open the full dataset in xarray via OpenDAP.
  • class TableDataset(ERDDAPDataset)

    • .to_dataframe() - Open the dataset as a Pandas DataFrame.
  • class GridDataset(ERDDAPDataset)
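A minimal skeleton of the proposed layering (the class names come from the proposal above; the bodies are placeholders sketching the immutable, chainable style, not an implementation):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ERDDAPConnection:
    """Holds server info and would own IO; frozen so 'setters' return new objects."""
    server: str

@dataclass(frozen=True)
class ERDDAPDataset:
    connection: ERDDAPConnection
    dataset_id: str
    variables: tuple = ()
    constraints: tuple = ()  # (name, value) pairs, hashable so it pickles cleanly

    def with_variables(self, *variables):
        # Chainable: returns a new dataset instead of mutating in place.
        return replace(self, variables=tuple(variables))

class TableDataset(ERDDAPDataset):
    pass

class GridDataset(ERDDAPDataset):
    pass
```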

In Practice

So how do these work in practice?
Let's look at a few different scenarios.

Interactive Search

Let's say a user wants to find and query all datasets on a server that contain sea_water_temperature data.

First they initialize their server object.
This can be done by passing in the server URL, the short name of the server, or an ERDDAPConnection object if authentication or IO methods need to be overridden.

[1] from erddapy import ERDDAPServer

[2] server = ERDDAPServer("neracoos")

Then they can use the native ERDDAP full text search to find datasets.

[3] water_temp_datasets = server.search("sea_water_temperature")
    water_temp_datasets

[3] {"nefsc_emolt_erddap": <TableDataset ...>, "UCONN_ARTG_WQ_BTM": <TableDataset...>, ...}

From there the user can access datasets a variety of ways depending on their needs.

[4] for dataset_id, dataset in water_temp_datasets.items():
        df = dataset.to_dataframe()
        # Whatever esoteric things fisheries people do with their dataframes

Need to quote strings in queries

We need to quote strings when used in ERDDAP expressions.

This code:

    kwargs = {
        'minTime<=': stop_time.value,
        'maxTime>=': start_time.value,
        'cdm_data_type=': cdm_data_type
    }

    variables = ['datasetID', 'minLongitude', 'minLatitude']
    url = e.get_download_url(dataset_id='allDatasets', variables=variables, response='csv', **kwargs)

produces this URL
https://erddap-uncabled.oceanobservatories.org/uncabled/erddap/tabledap/allDatasets.csv?datasetID,minLongitude,minLatitude&minTime%3C=2018-01-23T12:52:10Z&maxTime%3E=2018-01-16T12:52:10Z&cdm_data_type=TimeSeries

which gives the response:

Query error: For constraints of String variables, the right-hand-side value must be surrounded by double quotes. Bad constraint: cdm_data_type=TimeSeries
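A sketch of the quoting rule ERDDAP expects: string right-hand sides get double quotes, while numeric values are left alone (erddapy's actual helper may differ; in erddapy, time values are converted to timestamps before this step, so any remaining strings are safe to quote):

```python
def quote_string_constraints(constraints):
    """Surround string constraint values with double quotes,
    leaving numeric values untouched, as ERDDAP requires."""
    return {
        k: f'"{v}"' if isinstance(v, str) else v
        for k, v in constraints.items()
    }
```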

Support ERDDAP "distinct()" server-side function

erddapy doesn't currently support ERDDAP "Server-side functions". The only such functions I'm aware of are distinct() and various ordering functions. The latter don't really seem necessary in the context of erddapy and pandas, b/c sorting can be done client-side via Pandas. But distinct() does make a difference, as it can make the response much smaller, depending on the data.

It's actually very easy to use distinct with erddapy as-is. Once I have the request ready to go, I pass this url to pandas read_csv:

e.get_download_url() + '&distinct()'

But figuring out how to do this takes a fair bit of work. It'd save users some effort if this was built into erddapy. I could be persuaded to try to add it myself and submit a PR, but not this month ...

Include a link to download the notebooks in the documentation

The quick / longer introduction are both quite useful demonstrations of erddapy! It would be helpful to include a link to the notebooks at the top of the example, e.g. something like

"This example is written in a Jupyter Notebook - click here to download the notebook so you can run it locally"

(another option would be to use something like sphinx-gallery to generate the examples, but I think that's a longer fix that isn't strictly necessary)

failed to install package erddapy - pycharm

Error occurred:
pipenv.patched.notpip._internal.exceptions.InstallationError: Command "python setup.py egg_info" failed with error code 1 in /tmp/tmplpc0lktbbuild/erddapy/

Command output:
Installing erddapy…


Installing...
Adding erddapy to Pipfile's [packages]…
✔ Installation Succeeded
Pipfile.lock (411ba7) out of date, updating to (61b1fb)…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…

✘ Locking Failed!
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/pipenv/resolver.py", line 126, in <module>
    main()
  File "/usr/local/lib/python3.5/dist-packages/pipenv/resolver.py", line 119, in main
    parsed.requirements_dir, parsed.packages)
  File "/usr/local/lib/python3.5/dist-packages/pipenv/resolver.py", line 85, in _main
    requirements_dir=requirements_dir,
  File "/usr/local/lib/python3.5/dist-packages/pipenv/resolver.py", line 69, in resolve
    req_dir=requirements_dir
  File "/usr/local/lib/python3.5/dist-packages/pipenv/utils.py", line 726, in resolve_deps
    req_dir=req_dir,
  File "/usr/local/lib/python3.5/dist-packages/pipenv/utils.py", line 480, in actually_resolve_deps
    resolved_tree = resolver.resolve()
  File "/usr/local/lib/python3.5/dist-packages/pipenv/utils.py", line 385, in resolve
    results = self.resolver.resolve(max_rounds=environments.PIPENV_MAX_ROUNDS)
  File "/usr/local/lib/python3.5/dist-packages/pipenv/patched/piptools/resolver.py", line 102, in resolve
    has_changed, best_matches = self._resolve_one_round()
  File "/usr/local/lib/python3.5/dist-packages/pipenv/patched/piptools/resolver.py", line 206, in _resolve_one_round
    for dep in self._iter_dependencies(best_match):
  File "/usr/local/lib/python3.5/dist-packages/pipenv/patched/piptools/resolver.py", line 301, in _iter_dependencies
    dependencies = self.repository.get_dependencies(ireq)
  File "/usr/local/lib/python3.5/dist-packages/pipenv/patched/piptools/repositories/pypi.py", line 234, in get_dependencies
    legacy_results = self.get_legacy_dependencies(ireq)
  File "/usr/local/lib/python3.5/dist-packages/pipenv/patched/piptools/repositories/pypi.py", line 426, in get_legacy_dependencies
    results, ireq = self.resolve_reqs(download_dir, ireq, wheel_cache)
  File "/usr/local/lib/python3.5/dist-packages/pipenv/patched/piptools/repositories/pypi.py", line 297, in resolve_reqs
    results = resolver._resolve_one(reqset, ireq)
  File "/usr/local/lib/python3.5/dist-packages/pipenv/patched/notpip/_internal/resolve.py", line 260, in _resolve_one
    abstract_dist = self._get_abstract_dist_for(req_to_install)
  File "/usr/local/lib/python3.5/dist-packages/pipenv/patched/notpip/_internal/resolve.py", line 213, in _get_abstract_dist_for
    self.require_hashes
  File "/usr/local/lib/python3.5/dist-packages/pipenv/patched/notpip/_internal/operations/prepare.py", line 294, in prepare_linked_requirement
    abstract_dist.prep_for_dist(finder, self.build_isolation)
  File "/usr/local/lib/python3.5/dist-packages/pipenv/patched/notpip/_internal/operations/prepare.py", line 127, in prep_for_dist
    self.req.run_egg_info()
  File "/usr/local/lib/python3.5/dist-packages/pipenv/patched/notpip/_internal/req/req_install.py", line 474, in run_egg_info
    command_desc='python setup.py egg_info')
  File "/usr/local/lib/python3.5/dist-packages/pipenv/patched/notpip/_internal/utils/misc.py", line 705, in call_subprocess
    % (command_desc, proc.returncode, cwd))
pipenv.patched.notpip._internal.exceptions.InstallationError: Command "python setup.py egg_info" failed with error code 1 in /tmp/tmplpc0lktbbuild/erddapy/

Make the URL check optional

Bob brought this to my attention via e-mail. While those who use erddapy as a URL constructor may need the check, those who use it as a data-access library are fine waiting for the failure until the data is actually requested.

Add allDatasets query method

In my ERDDAP time series explorer, I used the existing erddapy method for advanced search, but I also needed an allDatasets query to return the lon/lat values of all stations. I did this with some ugly ERDDAP URL string formatting:
https://github.com/reproducible-notebooks/ERDDAP_timeseries_explorer/blob/master/ERDDAP_timeseries_explorer.py#L197-L200

Seems like the allDatasets query might warrant a method, since it's a bit different from a regular datasets query.
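For reference, allDatasets is itself a regular tabledap dataset (one row per dataset), so the URL has the usual shape; a hand-rolled sketch (the helper name is hypothetical, and the column names follow ERDDAP's allDatasets table):

```python
def all_datasets_url(server, variables, response="csv"):
    """Build an allDatasets query URL by hand. With erddapy one would
    instead pass dataset_id="allDatasets" to get_download_url."""
    return f"{server}/tabledap/allDatasets.{response}?" + ",".join(variables)
```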

erddapy with griddap .to_xarray() .to_netcdf() response forces .ncCF (not a griddap option)

I know griddap isn't officially supported, but it previously worked for my purposes... it seems the implementation of the .ncCF response may be throwing it off (the URL building now forces .ncCF as the response type, but .ncCF is not a valid griddap response).

If I change the URL to the .nc response, I get the data I expect.

gist of a simple example showing the failure in trying to load - https://gist.github.com/shaunwbell/b11e4ae63638010866b6ae9d64261004

And here is my ad hoc solution (which is purpose-built, not holistic, or I would open a pull request):

    
def to_xarray(self, **kw):
    """Load the data request into an xarray.Dataset.

    Accepts any `xr.open_dataset` keyword arguments.
    """
    import xarray as xr

    url = self.get_download_url(response="nc")
    nc = _nc_dataset(url, auth=self.auth, **self.requests_kwargs)
    return xr.open_dataset(xr.backends.NetCDF4DataStore(nc), **kw)

Am I missing a part of the API to set this explicitly, or is it hardcoded?

thanks

erddapy = 0.7.0
python 3.8.5

GSoC ideas

What is erddapy?

erddapy takes advantage of ERDDAP's RESTful web services to create the ERDDAP URLs.
The users can create virtually any request like,
searching for datasets, acquiring metadata, downloading data, etc.

What is ERDDAP?

ERDDAP is a data server that provides a consistent way to download subsets of scientific datasets.

There are many scientific data servers available,
like OPeNDAP, WCS, SOS, OBIS, etc.
They all have their advantages and disadvantages;
ERDDAP's goal is to fill the gaps and unify most of the advantages in a single service.
The main advantages of ERDDAP are:

  • offers an easy-to-use, consistent way to request data via OPeNDAP;
  • returns data in the common file format of your choice,
    .html table, ESRI .asc and .csv, Google Earth .kml, OPeNDAP binary, .mat, .nc, ODV .txt, .csv, .tsv, .json, and .xhtml,
    and even some image formats (.png and .pdf);
  • standardizes the date-times where string times are always ISO 8601:2004(E) and
    numeric times are always "seconds since 1970-01-01T00:00:00Z";
  • acts as a middleman by reformatting the request into the format required by the remote server,
    and reformats the data into the format that you requested.

See https://coastwatch.pfeg.noaa.gov/erddap/index.html for more information.

Ideas for GSoC

  • idea 0: Support ERDDAP's griddap

erddapy supports only the tabledap protocol, but ERDDAP also provides a griddap protocol that serves gridded data such as model output and satellite images. One could extend erddapy's API to support griddap and expand the amount of data Python users can access with this library.

Issue: #32

  • Idea 1: Unify constraints

The library has two types of constraints: "regular" ones, which are parsed as Python users expect, and relative_constraints, which are passed directly to ERDDAP without any pre-processing. Ideally the API should be unified to avoid confusing users.

Issue: #164

  • Idea 2: High level data queries

The current API is a bit too verbose, for example:

e.dataset_id = "whoi_406-20160902T1700"

e.variables = [
    "depth",
    "latitude",
    "longitude",
    "salinity",
    "temperature",
    "time",
]

e.constraints = {
    "time>=": "2016-09-03T00:00:00Z",
    "time<=": "2016-09-04T00:00:00Z",
    "latitude>=": 38.0,
    "latitude<=": 41.0,
    "longitude>=": -72.0,
    "longitude<=": -69.0,
}

while that makes a good base for other libraries built on top of erddapy, like argopy and gliderpy, it makes it hard to use high-level objects as constraints, for example a shapefile or any GIS/WKT-like object.

The idea would be to create a higher-level API that could consume these objects and output a query like the one described above.

Issue: #96
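For example, a hypothetical higher-level helper could expand a plain bounding box into the constraint dictionary shown above (the name and signature are illustrative only):

```python
def bbox_constraints(lon_min, lat_min, lon_max, lat_max, start, stop):
    """Expand a bounding box and a time window into erddapy-style constraints."""
    return {
        "time>=": start,
        "time<=": stop,
        "latitude>=": lat_min,
        "latitude<=": lat_max,
        "longitude>=": lon_min,
        "longitude<=": lon_max,
    }
```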

Griddap params

Hey there,

I'm hoping to use your nifty package but am running into an issue. You should note I don't have a climate science background.

When I run the following I receive an error; it seems the generated URL conforms to the tabledap documentation instead of griddap.

from erddapy import ERDDAP

e = ERDDAP(
    server='http://oos.soest.hawaii.edu/erddap',
    protocol='griddap',
    response='csv',
    dataset_id='NCEP_Global_Best',
    constraints={
        'time>=': '2018-02-08',
    },
    variables=['time', 'longitude', 'latitude'],
)

# http://oos.soest.hawaii.edu/erddap/griddap/NCEP_Global_Best.csv?time,longitude,latitude&time%3E=1518048000.0
print(e.get_download_url())

Outputs:

Traceback (most recent call last):
  File "/tmp/a.py", line 14, in <module>
    print(e.get_download_url())
  File "/home/danny/.virtualenvs/cwwed-env/lib/python3.6/site-packages/erddapy/erddapy.py", line 143, in get_download_url
    constraints=constraints,
  File "/home/danny/.virtualenvs/cwwed-env/lib/python3.6/site-packages/erddapy/url_builder.py", line 183, in download_url
    return _check_url_response(url)
  File "/home/danny/.virtualenvs/cwwed-env/lib/python3.6/site-packages/erddapy/utilities.py", line 101, in _check_url_response
    r.raise_for_status()
  File "/home/danny/.virtualenvs/cwwed-env/lib/python3.6/site-packages/requests/models.py", line 935, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http://oos.soest.hawaii.edu/erddap/griddap/NCEP_Global_Best.csv?time,longitude,latitude&time%3E=1518048000.0

If I use the Data Access Form and only add the time constraint, it generates a url like:

http://oos.soest.hawaii.edu/erddap/griddap/NCEP_Global_Best.csv?time[(2018-02-01):1:(2018-02-13T12:00:00Z)] which works.

If you browse to the malformed URL it warns about "graphical" params.


I must be doing something wrong but I can't identify what it is. Any help would be appreciated.

Constraining a dataset when an erddap variable name starts with "time" but it isn't a time

So I use the CF DSG datatypes (point, profile, timeseries, trajectory) as my dataset id identifiers with my erddap datasets (which i then point to and set as my cf_role) - these variables are usually populated with string names.

For example: i concatenate multiple instruments on a fixed mooring within a single erddap dataset, each instrument gets a timeseries_id that is fixed for the instrument... say mymooring_myinstrument_morerelevantdetails. And if I wanted to retrieve all data from just a single instrument I can constrain on that value...

See the example description in the attached screenshot.

However, and herein is my problem: erddapy assumes any constraint whose name starts with 'time' holds a datetime value, but there are cases where that is not true.

The relevant code, for reference:

    385             for k, v in _constraints.items():
    386                 if k.startswith("time"):
    387                     _constraints.update({k: parse_dates(v)})
    388             _constraints = _quote_string_constraints(_constraints)
    389             _constraints_url = _format_constraints_url(_constraints)

Clearly the simple answer is for me to rename my variable so it doesn't start with "time"... but that may just obscure the issue for another user whose "time"-prefixed variable holds data that cannot be parsed as a datetime.
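A more robust sketch: attempt the parse and fall back to the raw value when it is not actually a datetime, rather than trusting the "time" prefix alone (this is illustrative, not erddapy's fix):

```python
import pandas as pd

def maybe_parse_time(key, value):
    """Parse values of time-like constraint keys into timestamps, but leave
    values that are not datetimes (e.g. a timeseries_id string) untouched."""
    if not key.startswith("time"):
        return value
    try:
        return pd.to_datetime(value, utc=True).timestamp()
    except (ValueError, TypeError):
        return value  # not a datetime after all; pass it through
```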

HTTP Error with erddap V2.10

Seems a little something has changed and is throwing an error. I've not looked into the erddapy code, but here is what I did (both on the SWFSC ERDDAP server and my own):

from erddapy import ERDDAP
import pandas as pd

e = ERDDAP(server="https://coastwatch.pfeg.noaa.gov/erddap")

url = e.get_search_url(search_for="whoi", response="csv")
print(url)

df = pd.read_csv(url)
print(
    f'We have {len(set(df["tabledap"].dropna()))} '
    f'tabledap, {len(set(df["griddap"].dropna()))} '
    f'griddap, and {len(set(df["wms"].dropna()))} wms endpoints.'
)

This throws an HTTP 500 error, and following the generated link gives this explicit error:

Error {
    code=500;
    message="Internal Server Error: ERROR in parseISODateTime: for first character of dateTime='(ANY)' isn't a digit!";
}

using 'now' as a time constraint

When trying to build a request using a time constraint with ERDDAP's now functionality:

from datetime import date

from erddapy import ERDDAP

server = "http://osmc.noaa.gov/erddap"
e = ERDDAP(server=server, protocol="tabledap")

e.dataset_id = "ioos_obs_counts"
e.variables = ["time", "locationID", "region", "sponsor", "met", "wave"]
e.constraints = {
    "time>=": "now-18months",
}
print(e.get_download_url())

I receive the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
pandas\_libs\tslibs\parsing.pyx in pandas._libs.tslibs.parsing.parse_datetime_string_with_reso()

pandas\_libs\tslibs\parsing.pyx in pandas._libs.tslibs.parsing.dateutil_parse()

ValueError: Unknown datetime string format, unable to parse: now-18months

During handling of the above exception, another exception occurred:

DateParseError                            Traceback (most recent call last)
<ipython-input-16-a4bef5750bff> in <module>
     11     "time>=": "now-18months",
     12 }
---> 13 print(e.get_download_url())

~\programs\Anaconda3\envs\IOOS\lib\site-packages\erddapy\erddapy.py in get_download_url(self, dataset_id, protocol, variables, response, constraints, **kwargs)
    330             for k, v in _constraints.items():
    331                 if k.startswith("time"):
--> 332                     _constraints.update({k: parse_dates(v)})
    333             _constraints = quote_string_constraints(_constraints)
    334             _constraints = "".join([f"&{k}{v}" for k, v in _constraints.items()])

~\programs\Anaconda3\envs\IOOS\lib\site-packages\erddapy\utilities.py in parse_dates(date_time)
    100         # pandas returns a tuple with datetime, dateutil, and string representation.
    101         # we want only the datetime obj.
--> 102         parse_date_time = parse_time_string(date_time)[0]
    103     else:
    104         parse_date_time = date_time

pandas\_libs\tslibs\parsing.pyx in pandas._libs.tslibs.parsing.parse_time_string()

pandas\_libs\tslibs\parsing.pyx in pandas._libs.tslibs.parsing.parse_datetime_string_with_reso()

DateParseError: Unknown datetime string format, unable to parse: now-18months

It looks like pandas tries to parse the dates a user supplies in the constraints field, even before the URL gets built. Is there a way to bypass that, or was this intentional?
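One possible fix is for the client-side parser to pass server-side relative times straight through. A sketch only: erddapy's real parse_dates returns a numeric timestamp, whereas this stand-in returns an ISO string just to show the pass-through idea.

```python
from datetime import datetime

def parse_dates(value):
    """Pass ERDDAP server-side relative times such as 'now-18months'
    through untouched; parse everything else client-side."""
    if isinstance(value, str) and value.lower().startswith("now"):
        return value
    return datetime.fromisoformat(value).isoformat()
```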

Expected result from get_search_url() for protocol=?

Given a valid ERDDAP url and:

e = ERDDAP(
    server=url,
    protocol="tabledap",
    response="csv",
)

kw = {
    "search_for": "all",
    "min_lon": -125.32011299999999,
    "max_lon": -122.320202,
    "min_lat": 49.042069,
    "max_lat": 49.96947800000001,
    "min_time": "2019-06-19T00:00:00Z",
    "max_time": "2019-12-31T00:00:00Z",
    "standard_name": "sea_water_practical_salinity",
}

df = pd.read_csv(e.get_search_url(**kw))

I would expect the search URL to specify protocol=tabledap, but this is what we get instead:

{url}/advanced.csv?page=1&itemsPerPage=1000&protocol=(ANY)&cdm_data_type=(ANY)&institution=(ANY)&ioos_category=(ANY)&keywords=(ANY)&long_name=(ANY)&standard_name=sea_water_practical_salinity&variableName=(ANY)&minLon=-125.32011299999999&maxLon=-122.320202&minLat=49.042069&maxLat=49.96947800000001&minTime=1560902400.0&maxTime=1577750400.0&searchFor=all

I may be reading the docs wrong, but if this isn't a bug, I think the docs should make clearer how to generate the protocol=tabledap filter in a search request. It is mentioned as a parameter there, but I would expect the higher-level protocol value set on e to carry over to the get_search_url() call.
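The behaviour I expected can be sketched as follows: the search query should inherit a client-level protocol rather than defaulting to (ANY). The builder below is illustrative only, not erddapy's implementation:

```python
from urllib.parse import urlencode

def build_search_query(protocol="(ANY)", **kw):
    """Build an advanced-search query string, letting a client-level
    protocol (e.g. 'tabledap') override the '(ANY)' default."""
    return urlencode({"protocol": protocol, **kw})
```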

get_search_url with empty time bounds forms an inappropriate URL

Expected behaviour:

I can make an advanced dataset search without supplying time bounds

Actual behaviour

If time bounds are not provided to get_search_url, erddapy generates the URI string &minTime=(ANY)&maxTime=(ANY) which returns an error from the server.

Reproducible example

from erddapy import ERDDAP
import pandas as pd

e = ERDDAP(server="https://gliders.ioos.us/erddap")

kw = {
    "standard_name": "sea_water_temperature",
    "min_lon": -72.0,
    "max_lon": -69.0,
    "min_lat": 38.0,
    "max_lat": 41.0,
    "cdm_data_type": "trajectoryprofile"
}

search_url = e.get_search_url(response="csv", **kw)
pd.read_csv(search_url)

Suggested fix

If time bounds are empty, instead of generating &minTime=(ANY)&maxTime=(ANY) erddapy should generate &minTime=&maxTime=
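The suggested fix can be sketched like this (the function name is hypothetical, not erddapy's internals):

```python
def format_time_bounds(min_time=None, max_time=None):
    """Emit empty minTime/maxTime values instead of '(ANY)' when no
    time bounds are supplied, which the server accepts."""
    def fmt(t):
        return "" if t is None else str(t)
    return f"&minTime={fmt(min_time)}&maxTime={fmt(max_time)}"
```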

Versions

erddapy == 1.0.0
python == 3.6.10

_check_url_response() should pass requests_kwargs

While trying to connect to an ERDDAP server that had basic authentication set up, I found that _check_url_response() was not passing any previously supplied requests_kwargs, which are needed to pass along the authentication information.

This is already being done for calls to urlopen().
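A sketch of the requested behaviour, with the HTTP getter injected so the kwargs forwarding is visible (the names are illustrative, not erddapy's internals; in erddapy the getter would be requests.get):

```python
def check_url_response(url, getter, **requests_kwargs):
    """Forward the user's requests kwargs (auth, timeout, ...) to the
    HTTP call used for the URL sanity check, instead of dropping them."""
    response = getter(url, **requests_kwargs)
    response.raise_for_status()
    return url
```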

ImportError: cannot import name 'parse_time_string' from 'pandas.core.tools.datetimes'

I just built a conda environment today with erddapy and pandas, among other things. No version pinning at all. Python 3.8.5, pandas 1.1.0, erddapy 0.5.3. Just importing erddapy produces the pandas error below.

Same issue when I built the conda env while pinning Python to 3.7.

Any suggestions for pinning a specific version of Pandas, erddapy or both to get past this issue? Thanks!

~/miniconda/envs/nvsgliderapp/lib/python3.8/site-packages/erddapy/__init__.py in <module>
----> 1 from erddapy.erddapy import ERDDAP, servers
      2 
      3 __all__ = ["ERDDAP", "servers"]
      4 
      5 from ._version import get_versions

~/miniconda/envs/nvsgliderapp/lib/python3.8/site-packages/erddapy/erddapy.py in <module>
     13 import pandas as pd
     14 
---> 15 from erddapy.utilities import (
     16     _check_url_response,
     17     _tempnc,

~/miniconda/envs/nvsgliderapp/lib/python3.8/site-packages/erddapy/utilities.py in <module>
     11 import pytz
     12 import requests
---> 13 from pandas.core.tools.datetimes import parse_time_string
     14 
     15 _server = namedtuple("server", ["description", "url"])

ImportError: cannot import name 'parse_time_string' from 'pandas.core.tools.datetimes' (/home/mayorga/miniconda/envs/nvsgliderapp/lib/python3.8/site-packages/pandas/core/tools/datetimes.py)
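Pinning pandas below 1.1 (or upgrading erddapy past the release that drops this private import) is the straightforward route. Alternatively, a small stand-in for the removed private helper works if only the basic parsing behaviour is needed; this shim is my own sketch, not an official fix:

```python
from datetime import datetime

def parse_time_string(value):
    """Minimal stand-in for the removed pandas private helper: return a
    tuple whose first element is the parsed datetime, mirroring how
    erddapy uses parse_time_string(...)[0]."""
    return (datetime.fromisoformat(value.replace("Z", "+00:00")),)
```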

Re-include `requests_kwargs` to control connections to ERDDAP server

Before a13ee24, ERDDAP.requests_kwargs was a dict that could be set to pass keyword arguments to requests.get() when retrieving data. It is still documented as an attribute.

Now attempting to set ERDDAP.requests_kwargs throws an AttributeError.

For example, this allows setting a timeout argument that causes requests to raise requests.Timeout if retrieving information takes too long (without resorting to something like signals to do the same).
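The documented behaviour, as I understand it, amounts to the following (the Client class is a toy illustration of the attribute's role, not erddapy's implementation):

```python
class Client:
    """Toy model: a requests_kwargs dict whose entries are forwarded to
    every requests.get() call the client makes."""

    def __init__(self, server):
        self.server = server
        self.requests_kwargs = {}

    def request_args(self, path):
        # what would be handed to requests.get(url, **kwargs)
        return f"{self.server}/{path}", dict(self.requests_kwargs)

client = Client("https://example.invalid/erddap")
client.requests_kwargs = {"timeout": 30}
```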

Problem with 00-quick_intro.ipynb

I'm a newbie. I installed erddapy per the "pip install erddapy" instructions.
I get a full page of exceptions starting with:
AttributeError Traceback (most recent call last)
Input In [4], in <cell line: 1>()
----> 1 from erddapy import ERDDAP
4 e = ERDDAP(
5 server="UAF", # NOAA UAF (Unified Access Framework)
6 protocol="tabledap",
7 response="csv",
8 )

File ~\Anaconda3\envs\tide\lib\site-packages\erddapy\__init__.py:3, in <module>
      1 """Easier access to scientific data."""
----> 3 from erddapy.erddapy import ERDDAP
      4 from erddapy.servers import servers
      6 __all__ = ["ERDDAP", "servers"]
.
.
.

Ending in AttributeError: module 'brotli' has no attribute 'error' along with a full page of exceptions.

Are there release requirements? I've always updated Anaconda etc.

test_to_iris_tabledap in test_to_objects.py fails for newer versions of iris

It seems that newer versions of iris (installed with pip3 install scitools-iris) no longer support the extract_strict() method on the CubeList object.

The latest documentation for iris CubeList can be found here: https://scitools-iris.readthedocs.io/en/stable/generated/api/iris/cube.html#iris.cube.CubeList.extract

Documentation for an older version with extract_strict: https://scitools.org.uk/iris/docs/v2.4.0/iris/iris/cube.html?highlight=cubelist#iris.cube.CubeList.extract_strict

This leads to the following error when using pytest:

―――――――――――――――――――――――――――――――――――――――――――――――― test_to_iris_tabledap ―――――――――――――――――――――――――――――――――――――――――――――――――

taodata = <erddapy.erddapy.ERDDAP object at 0x7fc80e26f400>

    @pytest.mark.web
    @pytest.mark.vcr()
    def test_to_iris_tabledap(taodata):
        cubes = taodata.to_iris()

        assert isinstance(cubes, iris.cube.CubeList)
>       assert isinstance(cubes.extract_strict("depth"), iris.cube.Cube)
E       AttributeError: 'CubeList' object has no attribute 'extract_strict'

tests/test_to_objects.py:80: AttributeError

Will there be any issue with just using the extract() method?
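Plain extract() returns a CubeList of all matches, so the strictness of the old call would need to be re-imposed by hand. Roughly (this helper is my own sketch, not the iris API; I believe newer iris also offers extract_cube() for the same purpose):

```python
def extract_one(cubes, name):
    """Mimic the old extract_strict(): require exactly one cube whose
    name() matches, else raise."""
    matches = [cube for cube in cubes if cube.name() == name]
    if len(matches) != 1:
        raise ValueError(f"expected exactly one cube named {name!r}, got {len(matches)}")
    return matches[0]
```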

Roadmap to v1.0.0

  • add the kwargs options to the __init__ constructor;
  • validate the URL;
  • add .to_pandas() and .to_xarray() methods;
  • pass requests options along so we can use private servers.

Simplify the API

Right now we can pass the dataset_id, constraints, and variables to the constructor, or set them later as properties. That can be confusing because most of the time users do not know those values in advance; some searching or data crunching is needed to figure them out.

@rsignell-usgs and @kwilcox suggested to remove those from the constructor and enforce them to be passed as properties only.

Loading erddapy when offline raises an error

Loading erddapy while offline raises an error (see below) because it cannot initialize the servers list.

A direct use of erddapy while offline may not be relevant, but in the case where erddapy is part of a larger package that does not entirely rely on online activities, should we expect loading erddapy to fail silently?


gaierror Traceback (most recent call last)
~/anaconda/envs/obidam36/lib/python3.6/urllib/request.py in do_open(self, http_class, req, **http_conn_args)
1317 h.request(req.get_method(), req.selector, req.data, headers,
-> 1318 encode_chunked=req.has_header('Transfer-encoding'))
1319 except OSError as err: # timeout error

~/anaconda/envs/obidam36/lib/python3.6/http/client.py in request(self, method, url, body, headers, encode_chunked)
1261 """Send a complete request to the server."""
-> 1262 self._send_request(method, url, body, headers, encode_chunked)
1263

~/anaconda/envs/obidam36/lib/python3.6/http/client.py in _send_request(self, method, url, body, headers, encode_chunked)
1307 body = _encode(body, 'body')
-> 1308 self.endheaders(body, encode_chunked=encode_chunked)
1309

~/anaconda/envs/obidam36/lib/python3.6/http/client.py in endheaders(self, message_body, encode_chunked)
1256 raise CannotSendHeader()
-> 1257 self._send_output(message_body, encode_chunked=encode_chunked)
1258

~/anaconda/envs/obidam36/lib/python3.6/http/client.py in _send_output(self, message_body, encode_chunked)
1035 del self._buffer[:]
-> 1036 self.send(msg)
1037

~/anaconda/envs/obidam36/lib/python3.6/http/client.py in send(self, data)
973 if self.auto_open:
--> 974 self.connect()
975 else:

~/anaconda/envs/obidam36/lib/python3.6/http/client.py in connect(self)
1414
-> 1415 super().connect()
1416

~/anaconda/envs/obidam36/lib/python3.6/http/client.py in connect(self)
945 self.sock = self._create_connection(
--> 946 (self.host,self.port), self.timeout, self.source_address)
947 self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

~/anaconda/envs/obidam36/lib/python3.6/socket.py in create_connection(address, timeout, source_address)
703 err = None
--> 704 for res in getaddrinfo(host, port, 0, SOCK_STREAM):
705 af, socktype, proto, canonname, sa = res

~/anaconda/envs/obidam36/lib/python3.6/socket.py in getaddrinfo(host, port, family, type, proto, flags)
744 addrlist = []
--> 745 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
746 af, socktype, proto, canonname, sa = res

gaierror: [Errno 8] nodename nor servname provided, or not known

During handling of the above exception, another exception occurred:

URLError Traceback (most recent call last)
in
1 # from argopy.data_fetchers import erddap_data as Erddap_Fetchers
----> 2 import erddapy

~/anaconda/envs/obidam36/lib/python3.6/site-packages/erddapy/__init__.py in
3 """
4
----> 5 from erddapy.erddapy import ERDDAP, servers
6
7

~/anaconda/envs/obidam36/lib/python3.6/site-packages/erddapy/erddapy.py in
12 import pandas as pd
13
---> 14 from erddapy.utilities import (
15 _nc_dataset,
16 _tempnc,

~/anaconda/envs/obidam36/lib/python3.6/site-packages/erddapy/utilities.py in
51
52
---> 53 servers = servers_list()
54
55

~/anaconda/envs/obidam36/lib/python3.6/site-packages/erddapy/utilities.py in servers_list()
42 def servers_list():
43 url = "https://raw.githubusercontent.com/IrishMarineInstitute/awesome-erddap/master/erddaps.json"
---> 44 df = pd.read_json(url)
45 _server = namedtuple("server", ["description", "url"])
46 return {

~/anaconda/envs/obidam36/lib/python3.6/site-packages/pandas/util/_decorators.py in wrapper(*args, **kwargs)
197 else:
198 kwargs[new_arg_name] = new_arg_value
--> 199 return func(*args, **kwargs)
200
201 return cast(F, wrapper)

~/anaconda/envs/obidam36/lib/python3.6/site-packages/pandas/util/_decorators.py in wrapper(*args, **kwargs)
294 )
295 warnings.warn(msg, FutureWarning, stacklevel=stacklevel)
--> 296 return func(*args, **kwargs)
297
298 return wrapper

~/anaconda/envs/obidam36/lib/python3.6/site-packages/pandas/io/json/_json.py in read_json(path_or_buf, orient, typ, dtype, convert_axes, convert_dates, keep_default_dates, numpy, precise_float, date_unit, encoding, lines, chunksize, compression, nrows)
592 compression = infer_compression(path_or_buf, compression)
593 filepath_or_buffer, _, compression, should_close = get_filepath_or_buffer(
--> 594 path_or_buf, encoding=encoding, compression=compression
595 )
596

~/anaconda/envs/obidam36/lib/python3.6/site-packages/pandas/io/common.py in get_filepath_or_buffer(filepath_or_buffer, encoding, compression, mode, storage_options)
181 if isinstance(filepath_or_buffer, str) and is_url(filepath_or_buffer):
182 # TODO: fsspec can also handle HTTP via requests, but leaving this unchanged
--> 183 req = urlopen(filepath_or_buffer)
184 content_encoding = req.headers.get("Content-Encoding", None)
185 if content_encoding == "gzip":

~/anaconda/envs/obidam36/lib/python3.6/site-packages/pandas/io/common.py in urlopen(*args, **kwargs)
135 import urllib.request
136
--> 137 return urllib.request.urlopen(*args, **kwargs)
138
139

~/anaconda/envs/obidam36/lib/python3.6/urllib/request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
221 else:
222 opener = _opener
--> 223 return opener.open(url, data, timeout)
224
225 def install_opener(opener):

~/anaconda/envs/obidam36/lib/python3.6/urllib/request.py in open(self, fullurl, data, timeout)
524 req = meth(req)
525
--> 526 response = self._open(req, data)
527
528 # post-process response

~/anaconda/envs/obidam36/lib/python3.6/urllib/request.py in _open(self, req, data)
542 protocol = req.type
543 result = self._call_chain(self.handle_open, protocol, protocol +
--> 544 '_open', req)
545 if result:
546 return result

~/anaconda/envs/obidam36/lib/python3.6/urllib/request.py in _call_chain(self, chain, kind, meth_name, *args)
502 for handler in handlers:
503 func = getattr(handler, meth_name)
--> 504 result = func(*args)
505 if result is not None:
506 return result

~/anaconda/envs/obidam36/lib/python3.6/urllib/request.py in https_open(self, req)
1359 def https_open(self, req):
1360 return self.do_open(http.client.HTTPSConnection, req,
-> 1361 context=self._context, check_hostname=self.check_hostname)
1362
1363 https_request = AbstractHTTPHandler.do_request

~/anaconda/envs/obidam36/lib/python3.6/urllib/request.py in do_open(self, http_class, req, **http_conn_args)
1318 encode_chunked=req.has_header('Transfer-encoding'))
1319 except OSError as err: # timeout error
-> 1320 raise URLError(err)
1321 r = h.getresponse()
1322 except:

URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known>
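A failure-tolerant loader would let the import succeed offline. A minimal sketch (the fall-back-to-empty behaviour is my suggestion, not current erddapy code):

```python
def load_servers(fetch):
    """Build the servers dict at import time, but degrade to an empty
    mapping instead of raising when the network is unavailable."""
    try:
        return fetch()
    except OSError:  # URLError and gaierror are OSError subclasses
        return {}
```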

griddap documentation

Where should we document the griddap capability? The "longer introduction" notebook? A notebook of its own?

query masked by WKT area

A common task is fetching data masked by an arbitrary region-of-interest (RoI) polygon.
Ideally the OPeNDAP standard and ERDDAP would allow submission of a WKT, but masking could also be handled on the client side.

@ocefpaf : what are the first steps towards implementing RoI masking in erddapy?

Additional thoughts:

  1. Should efforts to implement RoI queries be focused on the server or is this client workaround worthwhile?
  2. Is this feature creep? Perhaps masking is a secondary task which should be kept separate from the core functionality of the API.
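For the client-side route, masking after download only needs a point-in-polygon test per row. A stdlib-only ray-casting sketch (a real implementation would likely use shapely and parse the WKT instead):

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting point-in-polygon test that could filter downloaded
    rows; polygon is a list of (lon, lat) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):  # edge straddles the ray
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside
```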

ImportError: No module named tslibs.parsing

Hi, when importing erddapy, I get this:

from erddapy import ERDDAP
...
...
ImportError: No module named tslibs.parsing

Is this a pandas problem? Here's the complete traceback:

ImportErrorTraceback (most recent call last)
<ipython-input-3-20c6c8f35979> in <module>()
----> 1 from erddapy import ERDDAP

/home/bblanton/.local/lib/python2.7/site-packages/erddapy/__init__.py in <module>()
      1 from __future__ import (absolute_import, division, print_function)
      2 
----> 3 from erddapy.erddapy import ERDDAP, servers
      4 
      5 __all__ = [

/home/bblanton/.local/lib/python2.7/site-packages/erddapy/erddapy.py in <module>()
      8 from __future__ import (absolute_import, division, print_function)
      9 
---> 10 from erddapy.url_builder import (
     11     download_url,
     12     info_url,

/home/bblanton/.local/lib/python2.7/site-packages/erddapy/url_builder.py in <module>()
     18     from urllib import quote_plus
     19 
---> 20 from erddapy.utilities import (_check_url_response, parse_dates, quote_string_constraints)
     21 
     22 

/home/bblanton/.local/lib/python2.7/site-packages/erddapy/utilities.py in <module>()
     10 from collections import namedtuple
     11 
---> 12 from pandas._libs.tslibs.parsing import parse_time_string
     13 
     14 import pytz

ImportError: No module named tslibs.parsing

improving Requirements-dev.txt

While running pip3 install -r requirements-dev.txt, I get an error during the cartopy installation.


To overcome the error while installing cartopy, I have included the cython library in the requirements file, and I had to run several commands before installing requirements-dev.txt to resolve the errors while installing dependencies.

For Linux users:

1-> sudo apt-get install libproj-dev proj-data proj-bin
2-> sudo apt-get install libgeos-dev
3-> pip3 install -r requirements-dev.txt

I have attached the updated requirements file

For details, refer to PR #173.
