Open Topo Data

Documentation: www.opentopodata.org

Open Topo Data is a REST API server for your elevation data.

curl http://localhost:5000/v1/test-dataset?locations=56,123
{
    "results": [{
        "elevation": 815.0,
        "location": {
            "lat": 56.0,
            "lng": 123.0
        },
        "dataset": "test-dataset"
    }],
    "status": "OK"
}

You can self-host with your own dataset, or use the free public API, which is preconfigured with a number of open elevation datasets. The API is largely compatible with the Google Maps Elevation API.

Installation

Install Docker and Git, then run:

git clone https://github.com/ajnisbet/opentopodata.git
cd opentopodata
make build
make run

This will start an Open Topo Data server on http://localhost:5000/. Some extra steps might be needed for Windows, M1/Apple Silicon, and Kubernetes.

Open Topo Data supports a wide range of raster file formats and tiling schemes, including most of those used by popular open elevation datasets. See the server docs for more about configuration and adding datasets.
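As a rough illustration of how a dataset is registered (the dataset name and path below are placeholders; see the server docs for the authoritative schema and options), a minimal config.yaml entry looks roughly like:

```yaml
# Minimal illustrative config.yaml entry (placeholder name and path):
datasets:
- name: test-dataset
  path: data/test-dataset/
```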

Usage

Open Topo Data has a single endpoint: a point query that returns the elevation at one or more locations.

curl http://localhost:5000/v1/test-dataset?locations=56,123
{
    "results": [{
        "elevation": 815.0,
        "location": {
            "lat": 56.0,
            "lng": 123.0
        },
        "dataset": "test-dataset"
    }],
    "status": "OK"
}

The interpolation algorithm used can be configured as a request parameter, multiple locations can be given in a single request, and locations can also be provided in Google Polyline format.
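As a client-side illustration (a sketch of my own, not part of Open Topo Data; the helper name and defaults are invented), a multi-location request URL with an interpolation parameter could be assembled like this:

```python
# Sketch: build a point-query URL for several locations at once.
# The pipe-separated "lat,lng|lat,lng" locations format follows the
# Google-Maps-style API described above.
from urllib.parse import urlencode

def build_query(base_url, dataset, points, interpolation="bilinear"):
    """Return a request URL for a list of (lat, lng) pairs."""
    locations = "|".join(f"{lat},{lng}" for lat, lng in points)
    params = urlencode({"locations": locations, "interpolation": interpolation})
    return f"{base_url}/v1/{dataset}?{params}"

url = build_query("http://localhost:5000", "test-dataset", [(56, 123), (57.7, 11.9)])
```

The resulting URL can then be fetched with curl or any HTTP client.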

See the API docs for more about request and response formats.

Public API

I'm hosting a free public API at api.opentopodata.org.

curl https://api.opentopodata.org/v1/srtm30m?locations=57.688709,11.976404
{
  "results": [
    {
      "elevation": 55.0,
      "location": {
        "lat": 57.688709,
        "lng": 11.976404
      },
      "dataset": "srtm30m"
    }
  ],
  "status": "OK"
}

A number of open elevation datasets are available on the public API; see www.opentopodata.org for the full list.

License

MIT

Support

Need help getting Open Topo Data running? Send me an email at [email protected] or open an issue!

Paid hosting

If you'd like me to host and manage an elevation API for your business, email me at [email protected] or check out my sister project GPXZ.

opentopodata's People

Contributors: ajnisbet, arnesetzer, hugoheneault, janusw, kant, khendrickx, khintz, meierbenjamin, ntdt

opentopodata's Issues

API blocking fetch requests

I'm trying to implement the API by fetching the JSON via JavaScript's fetch API or jQuery, but I'm getting an error that access to fetch the API has been "blocked by CORS policy," which usually indicates that the end site is blocking fetch requests.

Am I doing something wrong or is this as intended?

Server fails to handle requests with large number of locations

I am trying to retrieve elevations in large batches.

Since max_locations_per_request is not very large by default (100), I increased it to 1000 on my server. This helped a bit, but I still could not go all the way up to 1000 locations. Up to 400 or so it works well, but then I start getting errors like: http.client.RemoteDisconnected: Remote end closed connection without response

Unfortunately this error message is not very useful. It occurs in a python script that I use for testing, and these requests go to the server through a load balancer.

The first thing I tried was increasing the timeout in the python script, and even disabling it completely, but that did not help.

Then I went on to debug it more locally, without going through the LB, and finally got responses like: 414 Request-URI Too Large. The errors seem to come from nginx, so apparently this is an nginx limitation.
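Until the server-side limits are raised, a common client-side workaround is to split a large batch across several requests. A sketch of my own (the chunk size here just mirrors the default max_locations_per_request of 100):

```python
# Sketch: split a large list of (lat, lng) points into request-sized chunks
# so each request stays under max_locations_per_request (default 100) and
# under nginx's request-URI length limit.
def chunked(points, size=100):
    """Yield successive chunks of at most `size` points."""
    for i in range(0, len(points), size):
        yield points[i:i + size]

batches = list(chunked([(50 + i * 0.01, 8.0) for i in range(250)]))
```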

API: add support for querying combinations of data sets

Currently the opentopodata API can only query elevation data from a single, specific data set.

It would be nice if there was a way to query elevation data from a combination of data sets. E.g. a user might be interested in saying:

  1. "Give me the elevation data for a particular point. I don't care about the data set, just give me the 'best' one that you have for this location." or:
  2. "Give me the elevation data for a particular point. I'd prefer to get it from data set A, but if this one does not have it, data set B is also fine. If all fails, data set C is ok as well."

For the first case, the API could just be something like /v1/best-data?locations=lat,lon. Then it would be up to the server to determine which data set is most suitable. A priority ordering of the data sets could be given by the ordering in the config.yaml file. The data sets with the highest quality / resolution would come first, and the later ones would only be used if the highest-priority data sets return a null/nodata value.

For the second case, the API call could be /v1/any-data?locations=lat,lon&data=A,B,C. Here the data priority is not given by the server setup, but specified by the user. This is more dynamic, and possibly more performant than the first option, but less convenient.

One could also unify these two cases, together with the current single-data queries, in a new API v2, where the data set is given as an argument, e.g. /v2/elevation?locations=lat,lon&data=something. That would include:

  • single-data queries (data=A), corresponding to the status quo (/v1/A?locations=...)
  • multi-data queries (data=A,B,C), where the desired data sets are given in order of priority (as in option 2 above)
  • queries without specifying a data set, where the server would choose the most appropriate one (as in option 1 above)

As an alternative, multi-data queries could be interpreted to return the elevation values from all three data sets at once, but I think returning only one elevation value from a priority list of data sets is the more important use case.
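The priority-list behaviour proposed above could be sketched as follows (illustrative only; this is not implemented in Open Topo Data, and the function name is my own invention):

```python
# Sketch: given per-dataset results in priority order, return the first
# dataset that has a non-null elevation for the location.
def first_valid(results):
    """results: list of (dataset_name, elevation or None) in priority order."""
    for name, elevation in results:
        if elevation is not None:
            return name, elevation
    return None, None  # every dataset returned nodata
```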

GCS Support

It would be nice if we could mount data from Google Cloud Storage and Amazon S3, especially when using opentopodata as a container.

Right now, to use opentopodata as a Docker container, we either have to mount the data inside a Kubernetes deployment (which means we need to keep the data on Persistent Disks) or build the data directly into the Docker image. This can really slow things down (especially deploying new nodes, and the build process itself) when there's a lot of data.

With GCS / S3 support, we could simply tell opentopodata which GCS or S3 bucket contains our data, build a container with that configuration and then deploy it.

There are alternatives like gcsfuse, but they rely on permissions not normally given to Docker containers and definitely not available on cloud providers.

Add BKG data (Germany)

One of the strengths of OpenTopoData is its ability to handle various data sources (and the documentation on how to do it). I'm interested in adding yet another data source, namely the DGM data from the German 'Bundesamt für Kartographie und Geodäsie' (BKG).

They have data in different resolutions. The free version has a grid size of 200m, see: https://gdz.bkg.bund.de/index.php/default/digitale-geodaten/digitale-gelandemodelle/digitales-gelandemodell-gitterweite-200-m-dgm200.html (unfortunately it seems like this page is only available in German).

Of course this 200m data as such is not very interesting (since EU-DEM covers Germany with 25m resolution), but BKG also offers higher-resolution data commercially (down to 5m at least). Therefore it would be nice if one could make the 200m data work with OpenTopoData as a proof of concept at least.

The data is available in XYZ-ASCII or GRID-ASCII (ARC/INFO) format, both of which seem to be supported by GDAL (via the XYZ and AAIGrid drivers), so hopefully it will be no problem to use them with OpenTopoData.

Feedback welcome ...

Non-Docker hosting server on Windows

Hi Andrew,

Do you have any documentation for running the hosting server on a Windows machine? I'm having problems installing pylibmc. Also, because the Windows server machine is virtualized, we can't host it as a Docker app.

Thanks

Show unexpected extensions in config error message

Open Topo Data will complain with a ConfigError if an unknown raster file extension is found. This is a pretty common issue with dataset metadata (.txt, .pdf), or download files (uncompressed archives, .crdownload).

The config error should report the unknown extension.
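The requested behaviour could look something like this sketch (my own code; RASTER_EXTENSIONS here is an illustrative subset, not the server's actual list):

```python
# Sketch: scan a dataset folder's filenames and report any unexpected
# extensions in the error message, instead of a bare ConfigError.
import pathlib

RASTER_EXTENSIONS = {".tif", ".tiff", ".hgt", ".bil", ".asc"}

def check_extensions(filenames):
    unexpected = sorted(
        {pathlib.Path(f).suffix.lower() for f in filenames} - RASTER_EXTENSIONS
    )
    if unexpected:
        raise ValueError(
            "Unexpected file extensions in dataset folder: " + ", ".join(unexpected)
        )
```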

Support samples along a path

The Google Maps API lets you sample along a path. I don't have plans to support this any time soon in Open Topo Data, but a user submitted a patch for v1.5.0 to add this functionality!

Update 2021-09-04: The patch below has some issues; here's a better method that I'm adding: Sampling points along a lat,lon path

Old patch
--- opentopodata.orig/api.py	2021-03-10 16:36:58.761755648 +0100
+++ opentopodata/api.py	2021-03-21 09:43:48.369460334 +0100
@@ -190,6 +190,44 @@
         return _parse_polyline_locations(locations, max_n_locations)
 
 
+def _create_path_with_samples(lats, lons, samples):
+    """Create a path with the requested samples starting from given path
+
+
+    Args:
+        lats: The latitudes
+        lons: The longitudes
+        samples: Number of points in the segments of the path.
+
+    Returns:
+        lats: List of latitude floats.
+        lons: List of longitude floats.
+    """
+
+    if(len(lats) < 2):
+        return lats, lons
+    if(int(samples) < 2):
+        samples = 2;
+    lats1 = []
+    lons1 = []
+    lat1 = lats[0]
+    lon1 = lons[0]
+    for i in range(1, len(lats)):
+        lat2 = lats[i]
+        lon2 = lons[i]
+        dlat = (lat2 - lat1) / (int(samples) - 1)
+        dlon = (lon2 - lon1) / (int(samples) - 1)
+        lat = lat1
+        lon = lon1
+        for i in range(0, int(samples)):
+            lats1.append(lat)
+            lons1.append(lon)
+            lat += dlat
+            lon += dlon
+
+    return lats1, lons1
+
+
 def _parse_polyline_locations(locations, max_n_locations):
     """Parse and validate locations in Google polyline format.
 
@@ -399,6 +437,11 @@
             request.args.get("locations"), _load_config()["max_locations_per_request"]
         )
 
+        samples = request.args.get("samples")
+        if not samples:
+            samples = 2
+        lats, lons = _create_path_with_samples(lats, lons, samples)
+
         # Get the z values.
         datasets = _get_datasets(dataset_name)
         elevations, dataset_names = backend.get_elevation(
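For comparison, here is a simpler sketch of the same idea (my own illustrative code, not the patch above). It interpolates `samples` points evenly spaced in path parameter; note that straight-line interpolation in lat/lng ignores geodesic curvature, which matters for long segments.

```python
# Sketch: return `samples` points along a lat/lng path, linearly
# interpolated within each segment. Points are evenly spaced in the
# path parameter (evenly spaced in distance only for equal segments).
def sample_path(lats, lons, samples):
    if len(lats) < 2 or samples < 2:
        return list(lats), list(lons)
    out_lats, out_lons = [], []
    n_segments = len(lats) - 1
    for k in range(samples):
        t = k / (samples - 1) * n_segments  # position along the whole path
        i = min(int(t), n_segments - 1)     # containing segment index
        frac = t - i                        # fraction within that segment
        out_lats.append(lats[i] + (lats[i + 1] - lats[i]) * frac)
        out_lons.append(lons[i] + (lons[i + 1] - lons[i]) * frac)
    return out_lats, out_lons
```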

How can I call opentopodata from javascript?

I try to call the api with

function onGet(url) {
    var headers = {};

    fetch(url, {
        method: "GET",
        mode: 'no-cors',
        headers: headers
    })
        .then((response) => {
            if (!response.ok) {
                throw new Error(response.error);
            }
            return response.json();
        })
        .then(data => {
            dat = data.messages;
        })
        .catch(function (error) {
            err = error.toString();
            var test = 1;
            ee = test;
        });
}

but I receive an error. Is it possible to call https://api.opentopodata.org/v1/eudem25m?locations= from JavaScript?

WARNING:root:Failed to get options via gdal-config (M1 mac)

Hi there,

I'm running make build on macOS (m1) 12.0.1 and encounter this issue:

#11 6.510 Collecting rasterio==1.2.9
#11 6.553   Downloading rasterio-1.2.9.tar.gz (2.3 MB)
#11 7.162   Installing build dependencies: started
#11 13.24   Installing build dependencies: finished with status 'done'
#11 13.24   Getting requirements to build wheel: started
#11 13.45   Getting requirements to build wheel: finished with status 'error'
#11 13.45   ERROR: Command errored out with exit status 1:
#11 13.45    command: /usr/local/bin/python /usr/local/lib/python3.7/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /tmp/tmp_b8p17k_
#11 13.45        cwd: /tmp/pip-install-rusm7ufj/rasterio_d6e3703bd238492fb428a60f0b4d1a47
#11 13.45   Complete output (2 lines):
#11 13.45   WARNING:root:Failed to get options via gdal-config: [Errno 2] No such file or directory: 'gdal-config': 'gdal-config'
#11 13.45   ERROR: A GDAL API version must be specified. Provide a path to gdal-config using a GDAL_CONFIG environment variable or use a GDAL_VERSION environment variable.
#11 13.45   ----------------------------------------
#11 13.45 WARNING: Discarding https://files.pythonhosted.org/packages/1f/70/4e7a789f4988955e4c381de80923e184f912683bbe6fc4a3a00c91efdf59/rasterio-1.2.9.tar.gz#sha256=012a4964d8db365be4fae0af9cbeba00e683e5904d5031e8ba42ccb6040cc887 (from https://pypi.org/simple/rasterio/) (requires-python:>=3.6). Command errored out with exit status 1: /usr/local/bin/python /usr/local/lib/python3.7/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /tmp/tmp_b8p17k_ Check the logs for full command output.
#11 13.45 ERROR: Could not find a version that satisfies the requirement rasterio==1.2.9 (from versions: 0.1, 0.2, 0.3, 0.4, 0.5, 0.5.1, 0.6, 0.7, 0.7.1, 0.7.2, 0.7.3, 0.8, 0.9, 0.10, 0.10.1, 0.11, 0.11.1, 0.12, 0.12.1, 0.13, 0.13.1, 0.13.2, 0.14, 0.14.1, 0.15, 0.15.1, 0.16, 0.17, 0.17.1, 0.18, 0.19.0, 0.19.1, 0.20.0, 0.21.0, 0.22.0, 0.23.0, 0.24.0, 0.24.1, 0.25.0, 0.26.0, 0.27.0, 0.28.0, 0.29.0, 0.30.0, 0.31.0, 0.32.0, 0.32.0.post1, 0.33.0, 0.34.0, 0.35.0, 0.35.0.post1, 0.35.1, 0.36.0, 1.0a1, 1.0a2, 1.0a3, 1.0a4, 1.0a6, 1.0a7, 1.0a8, 1.0a9, 1.0a10, 1.0a11, 1.0a12, 1.0b1, 1.0b2, 1.0b3, 1.0b4, 1.0rc1, 1.0rc2, 1.0rc3, 1.0rc4, 1.0rc5, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.3.post1, 1.0.4, 1.0.5, 1.0.6, 1.0.7, 1.0.8, 1.0.9, 1.0.10, 1.0.11, 1.0.12, 1.0.13, 1.0.14, 1.0.15, 1.0.16, 1.0.17, 1.0.18, 1.0.20, 1.0.21, 1.0.22, 1.0.23, 1.0.24, 1.0.25, 1.0.26, 1.0.27, 1.0.28, 1.1b1, 1.1b2, 1.1b3, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.1.6, 1.1.7, 1.1.8, 1.2a1, 1.2b1, 1.2b2, 1.2b3, 1.2b4, 1.2.0, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.2.8, 1.2.9, 1.2.10, 1.3a1, 1.3a2)
#11 13.46 ERROR: No matching distribution found for rasterio==1.2.9
------
executor failed running [/bin/sh -c pip install --no-index --no-cache-dir --disable-pip-version-check --find-links=/root/wheels uwsgi regex && pip install --no-cache-dir --disable-pip-version-check -r /app/requirements.txt && rm -rf /root/.cache/pip/* && rm root/wheels/* && rm /app/requirements.txt]: exit code: 1
make: *** [build] Error 1
Full logs
$ make build
docker build --tag opentopodata:`cat VERSION` --file docker/Dockerfile .
[+] Building 18.1s (11/14)
 => [internal] load build definition from Dockerfile  0.0s
 => => transferring dockerfile: 37B  0.0s
 => [internal] load .dockerignore  0.0s
 => => transferring context: 34B  0.0s
 => [internal] load metadata for docker.io/library/python:3.7.12-slim-buster  4.3s
 => [internal] load build context  0.0s
 => => transferring context: 5.39kB  0.0s
 => [builder 1/3] FROM docker.io/library/python:3.7.12-slim-buster@sha256:866b51113519f9022e2f432869d3e7d97b090b9853f4765dbf46394ac28d5a03  0.0s
 => CACHED [stage-1 2/8] RUN set -e && apt-get update && apt-get install -y --no-install-recommends nginx memcached supervisor && rm -rf /var/lib/apt/lists/*  0.0s
 => CACHED [builder 2/3] RUN set -e && apt-get update && apt-get install -y --no-install-recommends gcc python3.7-dev  0.0s
 => CACHED [builder 3/3] RUN pip wheel --wheel-dir=/root/wheels uwsgi==2.0.19.1 && pip wheel --wheel-dir=/root/wheels regex==2021.9.30  0.0s
 => CACHED [stage-1 3/8] COPY --from=builder /root/wheels /root/wheels  0.0s
 => CACHED [stage-1 4/8] COPY requirements.txt /app/requirements.txt  0.0s
 => ERROR [stage-1 5/8] RUN pip install --no-index --no-cache-dir --disable-pip-version-check --find-links=/root/wheels uwsgi regex && pip install --no-cache-dir --disable-pip-version-check -r /app/requirements.txt && rm -rf /root/.cache/pip/* && rm root/wheels/* && rm /app/requirements.txt  13.6s
------
 > [stage-1 5/8] RUN pip install --no-index --no-cache-dir --disable-pip-version-check --find-links=/root/wheels uwsgi regex && pip install --no-cache-dir --disable-pip-version-check -r /app/requirements.txt && rm -rf /root/.cache/pip/* && rm root/wheels/* && rm /app/requirements.txt:
#11 1.033 Looking in links: /root/wheels
#11 1.040 Processing /root/wheels/uWSGI-2.0.19.1-cp37-cp37m-linux_aarch64.whl
#11 1.044 Processing /root/wheels/regex-2021.9.30-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
#11 1.066 Installing collected packages: uwsgi, regex
#11 1.157 Successfully installed regex-2021.9.30 uwsgi-2.0.19.1
#11 1.157 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
#11 1.536 Collecting affine==2.3.0
#11 1.706   Downloading affine-2.3.0-py2.py3-none-any.whl (15 kB)
#11 1.746 Collecting attrs==21.2.0
#11 1.773   Downloading attrs-21.2.0-py2.py3-none-any.whl (53 kB)
#11 1.839 Collecting black==21.9b0
#11 1.868   Downloading black-21.9b0-py3-none-any.whl (148 kB)
#11 1.944 Collecting certifi==2021.5.30
#11 1.980   Downloading certifi-2021.5.30-py2.py3-none-any.whl (145 kB)
#11 2.051 Collecting charset-normalizer==2.0.6
#11 2.082   Downloading charset_normalizer-2.0.6-py3-none-any.whl (37 kB)
#11 2.121 Collecting click==8.0.1
#11 2.147   Downloading click-8.0.1-py3-none-any.whl (97 kB)
#11 2.189 Collecting click-plugins==1.1.1
#11 2.215   Downloading click_plugins-1.1.1-py2.py3-none-any.whl (7.5 kB)
#11 2.266 Collecting cligj==0.7.2
#11 2.290   Downloading cligj-0.7.2-py3-none-any.whl (7.1 kB)
#11 2.632 Collecting coverage[toml]==6.0.1
#11 2.661   Downloading coverage-6.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (243 kB)
#11 2.726 Collecting flask==2.0.2
#11 2.764   Downloading Flask-2.0.2-py3-none-any.whl (95 kB)
#11 2.811 Collecting flask-caching==1.10.1
#11 2.836   Downloading Flask_Caching-1.10.1-py3-none-any.whl (34 kB)
#11 2.873 Collecting geographiclib==1.52
#11 2.898   Downloading geographiclib-1.52-py3-none-any.whl (38 kB)
#11 2.938 Collecting idna==3.2
#11 2.964   Downloading idna-3.2-py3-none-any.whl (59 kB)
#11 3.027 Collecting importlib-metadata==4.8.1
#11 3.052   Downloading importlib_metadata-4.8.1-py3-none-any.whl (17 kB)
#11 3.083 Collecting iniconfig==1.1.1
#11 3.128   Downloading iniconfig-1.1.1-py2.py3-none-any.whl (5.0 kB)
#11 3.160 Collecting itsdangerous==2.0.1
#11 3.200   Downloading itsdangerous-2.0.1-py3-none-any.whl (18 kB)
#11 3.241 Collecting jinja2==3.0.2
#11 3.282   Downloading Jinja2-3.0.2-py3-none-any.whl (133 kB)
#11 3.361 Collecting markupsafe==2.0.1
#11 3.384   Downloading MarkupSafe-2.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (26 kB)
#11 3.409 Collecting mypy-extensions==0.4.3
#11 3.441   Downloading mypy_extensions-0.4.3-py2.py3-none-any.whl (4.5 kB)
#11 3.736 Collecting numpy==1.21.2
#11 3.770   Downloading numpy-1.21.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (13.0 MB)
#11 4.474 Collecting packaging==21.0
#11 4.500   Downloading packaging-21.0-py3-none-any.whl (40 kB)
#11 4.544 Collecting pathspec==0.9.0
#11 4.575   Downloading pathspec-0.9.0-py2.py3-none-any.whl (31 kB)
#11 4.606 Collecting pep517==0.11.0
#11 4.640   Downloading pep517-0.11.0-py2.py3-none-any.whl (19 kB)
#11 4.702 Collecting pip-tools==6.3.0
#11 4.727   Downloading pip_tools-6.3.0-py3-none-any.whl (47 kB)
#11 4.762 Collecting platformdirs==2.4.0
#11 4.793   Downloading platformdirs-2.4.0-py3-none-any.whl (14 kB)
#11 4.843 Collecting pluggy==1.0.0
#11 4.871   Downloading pluggy-1.0.0-py2.py3-none-any.whl (13 kB)
#11 4.898 Collecting polyline==1.4.0
#11 4.934   Downloading polyline-1.4.0-py2.py3-none-any.whl (4.4 kB)
#11 4.986 Collecting py==1.10.0
#11 5.015   Downloading py-1.10.0-py2.py3-none-any.whl (97 kB)
#11 5.055 Collecting pylibmc==1.6.1
#11 5.081   Downloading pylibmc-1.6.1.tar.gz (64 kB)
#11 5.651 Collecting pyparsing==2.4.7
#11 5.684   Downloading pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
#11 5.785 Collecting pyproj==3.2.1
#11 5.822   Downloading pyproj-3.2.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (6.1 MB)
#11 6.103 Collecting pytest==6.2.5
#11 6.146   Downloading pytest-6.2.5-py3-none-any.whl (280 kB)
#11 6.205 Collecting pytest-cov==3.0.0
#11 6.243   Downloading pytest_cov-3.0.0-py3-none-any.whl (20 kB)
#11 6.313 Collecting pyyaml==5.4.1
#11 6.347   Downloading PyYAML-5.4.1-cp37-cp37m-manylinux2014_aarch64.whl (716 kB)
#11 6.510 Collecting rasterio==1.2.9
#11 6.553   Downloading rasterio-1.2.9.tar.gz (2.3 MB)
#11 7.162   Installing build dependencies: started
#11 13.24   Installing build dependencies: finished with status 'done'
#11 13.24   Getting requirements to build wheel: started
#11 13.45   Getting requirements to build wheel: finished with status 'error'
#11 13.45   ERROR: Command errored out with exit status 1:
#11 13.45    command: /usr/local/bin/python /usr/local/lib/python3.7/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /tmp/tmp_b8p17k_
#11 13.45        cwd: /tmp/pip-install-rusm7ufj/rasterio_d6e3703bd238492fb428a60f0b4d1a47
#11 13.45   Complete output (2 lines):
#11 13.45   WARNING:root:Failed to get options via gdal-config: [Errno 2] No such file or directory: 'gdal-config': 'gdal-config'
#11 13.45   ERROR: A GDAL API version must be specified. Provide a path to gdal-config using a GDAL_CONFIG environment variable or use a GDAL_VERSION environment variable.
#11 13.45   ----------------------------------------
#11 13.45 WARNING: Discarding https://files.pythonhosted.org/packages/1f/70/4e7a789f4988955e4c381de80923e184f912683bbe6fc4a3a00c91efdf59/rasterio-1.2.9.tar.gz#sha256=012a4964d8db365be4fae0af9cbeba00e683e5904d5031e8ba42ccb6040cc887 (from https://pypi.org/simple/rasterio/) (requires-python:>=3.6). Command errored out with exit status 1: /usr/local/bin/python /usr/local/lib/python3.7/site-packages/pip/_vendor/pep517/in_process/_in_process.py get_requires_for_build_wheel /tmp/tmp_b8p17k_ Check the logs for full command output.
#11 13.45 ERROR: Could not find a version that satisfies the requirement rasterio==1.2.9 (from versions: 0.1, 0.2, 0.3, 0.4, 0.5, 0.5.1, 0.6, 0.7, 0.7.1, 0.7.2, 0.7.3, 0.8, 0.9, 0.10, 0.10.1, 0.11, 0.11.1, 0.12, 0.12.1, 0.13, 0.13.1, 0.13.2, 0.14, 0.14.1, 0.15, 0.15.1, 0.16, 0.17, 0.17.1, 0.18, 0.19.0, 0.19.1, 0.20.0, 0.21.0, 0.22.0, 0.23.0, 0.24.0, 0.24.1, 0.25.0, 0.26.0, 0.27.0, 0.28.0, 0.29.0, 0.30.0, 0.31.0, 0.32.0, 0.32.0.post1, 0.33.0, 0.34.0, 0.35.0, 0.35.0.post1, 0.35.1, 0.36.0, 1.0a1, 1.0a2, 1.0a3, 1.0a4, 1.0a6, 1.0a7, 1.0a8, 1.0a9, 1.0a10, 1.0a11, 1.0a12, 1.0b1, 1.0b2, 1.0b3, 1.0b4, 1.0rc1, 1.0rc2, 1.0rc3, 1.0rc4, 1.0rc5, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.3.post1, 1.0.4, 1.0.5, 1.0.6, 1.0.7, 1.0.8, 1.0.9, 1.0.10, 1.0.11, 1.0.12, 1.0.13, 1.0.14, 1.0.15, 1.0.16, 1.0.17, 1.0.18, 1.0.20, 1.0.21, 1.0.22, 1.0.23, 1.0.24, 1.0.25, 1.0.26, 1.0.27, 1.0.28, 1.1b1, 1.1b2, 1.1b3, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.1.6, 1.1.7, 1.1.8, 1.2a1, 1.2b1, 1.2b2, 1.2b3, 1.2b4, 1.2.0, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.2.8, 1.2.9, 1.2.10, 1.3a1, 1.3a2)
#11 13.46 ERROR: No matching distribution found for rasterio==1.2.9
------
executor failed running [/bin/sh -c pip install --no-index --no-cache-dir --disable-pip-version-check --find-links=/root/wheels uwsgi regex && pip install --no-cache-dir --disable-pip-version-check -r /app/requirements.txt && rm -rf /root/.cache/pip/* && rm root/wheels/* && rm /app/requirements.txt]: exit code: 1
make: *** [build] Error 1

Any idea of what's going on? How could I help you help me fix this? ;-)

Thanks!

Adding Data / Mapping Directories

I'm having a problem adding datasets or mapping data directories. This is a fresh install (on linux) using:
git clone https://github.com/ajnisbet/opentopodata.git

After installing there is a "data" directory in the root directory, so I created a new directory inside it and added some GeoTIFFs (N32W092.tiff, N32W093.tiff, etc).

After configuring config.yaml and doing "make build", when I do "make run" I get this:
ERROR:root:Invalid config: Dataset folder 'data/test/' seems to be empty.

If config.yaml isn't pointed correctly I get:
ERROR:root:Invalid config: No dataset folder found at location 'test/'

So I know it's finding the directory, but isn't seeing the files for some reason.

If I add the same folder to the "tests/data/datasets/" it works, but then "make build" takes MUCH longer, presumably because the data is getting built into the docker image (which won't scale nicely with the 3 TB of data I have).

As an alternate test I moved the data outside of opentopodata altogether (at /test/test) and manually ran docker:
docker run --rm -it --volume /test/test:/app/data:ro -p 5000:5000 opentopodata:`cat VERSION`

Again I get the "seems to be empty" error, so it's seeing the folder but not the files in the folder (or something like that).

How can I map existing directories to add large amounts of data without making the docker image massive?

What am I missing? Why is the "tests/data/datasets/" directory structure working when "data/" and external directories don't?

Thanks!

API fails when dataset doesn't have an EPSG code.

When using a raster that has a projection without an EPSG code set, the API fails with '<=' not supported between instances of 'int' and 'NoneType'. These files aren't uncommon: EU-DEM comes this way.

Open Topo Data should support any raster file as long as it has a projection set in some format.
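The reported error can be reproduced in plain Python. As an assumption about where the bug lives, the `epsg_code` value below stands in for something like rasterio's `CRS.to_epsg()`, which returns `None` for projections that have no matching EPSG code:

```python
# epsg_code stands in for a value like rasterio's CRS.to_epsg(), which
# returns None when the projection has no EPSG code (as with EU-DEM).
epsg_code = None

try:
    # Any range check against the missing code raises immediately.
    in_valid_range = 1024 <= epsg_code <= 32767
except TypeError as e:
    print(e)  # '<=' not supported between instances of 'int' and 'NoneType'
```

So any code path that compares an integer bound against the EPSG code without first checking for `None` will fail exactly as described.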

Usage in Commercial Applications

Thank you for this outstanding library.

I'd be interested to learn about using the different elevation sources in commercial applications. Is there a compiled list somewhere you could recommend that would help me on this topic? Whilst the Mapzen source looks quite promising, it feels a bit risky to just plug and play without doing some research first.

Thanks a bunch and cheers!

Better error messages for invalid datasets

Currently if a dataset in config.yaml is in an invalid format, the server keeps running and the API returns either a generic server error or a null elevation.

What should happen is either that the server fails with a descriptive error message, or the API returns a helpful error message.

Bounds error for tiled datasets without overlap

With my own server (running the current version 1.4.1 of opentopodata), I see the following problem regarding the EU-DEM dataset:

curl "https://myserver/v1/eudem25m?locations=50.100,8.387"
{
  "results": [
    {
      "elevation": 362.4136962890625, 
      "location": {
        "lat": 50.1, 
        "lng": 8.387
      }
    }
  ], 
  "status": "OK"
}

curl "https://myserver/v1/eudem25m?locations=50.101,8.387"
{
  "error": "Location '50.101,8.387' has latitude outside of raster bounds", 
  "status": "INVALID_REQUEST"
}

curl "https://myserver/v1/eudem25m?locations=50.102,8.387"
{
  "results": [
    {
      "elevation": 362.61474609375, 
      "location": {
        "lat": 50.102, 
        "lng": 8.387
      }
    }
  ], 
  "status": "OK"
}

For some reason, the location 50.101,8.387 seems to trigger this bogus error about the latitude being outside of the raster bounds, although it's clearly inside of the EU-DEM bounds (and for the two neighboring locations everything seems to be fine).

The error occurs every time I query this location on my own server, but interestingly it does not occur with the public server:

curl "https://api.opentopodata.org/v1/eudem25m?locations=50.101,8.387"             
{
  "results": [
    {
      "elevation": 360.98956298828125, 
      "location": {
        "lat": 50.101, 
        "lng": 8.387
      }
    }
  ], 
  "status": "OK"
}

I assume the difference might be due to a slightly different data setup?

And then, independent of this specific problem, I noticed another small issue that occurs when querying two locations at once:

curl "https://myserver/v1/eudem25m?locations=50.101,8.387|50.102,8.387"  
{
  "error": "Location '50.102,8.387' has latitude outside of raster bounds", 
  "status": "INVALID_REQUEST"
}

As shown above, the latitude 50.101 is the problematic one, but here the error message mentions the other one (50.102). This might be an off-by-one issue in the error diagnostics.

Can't build on aarch64 / arm64

Hi, I'm trying to run "make build" on a Raspberry Pi 4 with Raspberry Pi OS 64-bit (aarch64/arm64), and as an alternative I tried "docker buildx --platform linux/arm64" on a Windows 10 64-bit machine. Both fail with an error when installing requirements.txt.
As far as I understand it, rasterio causes the problem, because there is no arm64 wheel for it.
Could you provide a workaround or installation instructions without docker (maybe I can install and compile rasterio locally)?

Many `null` elevations returned

Certain queries to the API return a lot of null elevations, e.g.:

https://api.opentopodata.org/v1/eudem25m?locations=37.91004381769315,20.418209430289558|37.91004381769315,20.42820943028956|37.91004381769315,20.43820943028956|37.91004381769315,20.448209430289563|37.91004381769315,20.458209430289564|37.91004381769315,20.468209430289566|37.91004381769315,20.478209430289567|37.91004381769315,20.48820943028957|37.91004381769315,20.49820943028957|37.91004381769315,20.508209430289572|37.91004381769315,20.518209430289573|37.91004381769315,20.528209430289575|37.91004381769315,20.538209430289577|37.91004381769315,20.548209430289578|37.91004381769315,20.55820943028958|37.91004381769315,20.56820943028958|37.91004381769315,20.578209430289583|37.91004381769315,20.588209430289584|37.91004381769315,20.598209430289586|37.91004381769315,20.608209430289588|37.91004381769315,20.61820943028959|37.91004381769315,20.62820943028959|37.91004381769315,20.638209430289592|37.91004381769315,20.648209430289594|37.91004381769315,20.658209430289595|37.91004381769315,20.668209430289597|37.91004381769315,20.6782094302896|37.91004381769315,20.6882094302896|37.91004381769315,20.6982094302896|37.91004381769315,20.708209430289603|37.91004381769315,20.718209430289605|37.91004381769315,20.728209430289606|37.91004381769315,20.738209430289608|37.91004381769315,20.74820943028961|37.91004381769315,20.75820943028961|37.91004381769315,20.768209430289613|37.91004381769315,20.778209430289614|37.91004381769315,20.788209430289616|37.91004381769315,20.798209430289617|37.91004381769315,20.80820943028962|37.91004381769315,20.81820943028962|37.91004381769315,20.828209430289622|37.91004381769315,20.838209430289623|37.91004381769315,20.848209430289625|37.91004381769315,20.858209430289627|37.91004381769315,20.868209430289628|37.91004381769315,20.87820943028963|37.91004381769315,20.88820943028963|37.91004381769315,20.898209430289633|37.92004381769315,20.408209430289556|37.92004381769315,20.418209430289558|37.92004381769315,20.42820943028956|37.92004381769315,20.43820943028956|37.92004381769315,20.448209430289563|37.92004381769315,20.458209430289564|37.92004381769315,20.468209430289566|37.92004381769315,20.478209430289567|37.92004381769315,20.48820943028957|37.92004381769315,20.49820943028957|37.92004381769315,20.508209430289572|37.92004381769315,20.518209430289573|37.92004381769315,20.528209430289575|37.92004381769315,20.538209430289577|37.92004381769315,20.548209430289578|37.92004381769315,20.55820943028958|37.92004381769315,20.56820943028958|37.92004381769315,20.578209430289583|37.92004381769315,20.588209430289584|37.92004381769315,20.598209430289586|37.92004381769315,20.608209430289588|37.92004381769315,20.61820943028959|37.92004381769315,20.62820943028959|37.92004381769315,20.638209430289592|37.92004381769315,20.648209430289594|37.92004381769315,20.658209430289595|37.92004381769315,20.668209430289597|37.92004381769315,20.6782094302896|37.92004381769315,20.6882094302896|37.92004381769315,20.6982094302896|37.92004381769315,20.708209430289603|37.92004381769315,20.718209430289605|37.92004381769315,20.728209430289606|37.92004381769315,20.738209430289608|37.92004381769315,20.74820943028961|37.92004381769315,20.75820943028961|37.92004381769315,20.768209430289613|37.92004381769315,20.778209430289614|37.92004381769315,20.788209430289616|37.92004381769315,20.798209430289617|37.92004381769315,20.80820943028962|37.92004381769315,20.81820943028962|37.92004381769315,20.828209430289622|37.92004381769315,20.838209430289623|37.92004381769315,20.848209430289625|37.92004381769315,20.858209430289627|37.92004381769315,20.868209430289628|37.92004381769315,20.87820943028963|37.92004381769315,20.88820943028963|37.92004381769315,20.898209430289633|37.93004381769315,20.408209430289556

Why is that?

Thanks a lot in advance, really loving this API!

Make CONFIG_PATH configurable via environment variable

Hi,
I'm deploying opentopodata to k8s and I have to do workaround to make the config.yaml file available to the pod.

  • config.yaml is stored in a configmap
  • mounting the configmap volume at /app (to make it available as /app/config.yaml) would override all other files in the /app folder
  • so my workaround is to mount the configmap at /config and run the pod with this command:
cp /config/config.yaml /app/config.yaml && chmod +x /app/docker/run.sh && /app/docker/run.sh

My suggestion is to change the code so CONFIG_PATH can be read from an environment variable, letting me set CONFIG_PATH=/config/config.yaml.
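The suggested change could be as small as the sketch below. (The variable name follows this issue; the fallback path is an assumption for illustration, not taken from the actual source.)

```python
import os

# Fall back to the bundled config when the variable isn't set; the default
# path here is an assumption, not the project's actual constant.
CONFIG_PATH = os.environ.get("CONFIG_PATH", "config.yaml")
```

The pod spec could then just set CONFIG_PATH=/config/config.yaml, with no copy step needed at startup.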

Thanks.

Recommended server configuration

Hey there,

We're wondering about the recommended server specs (CPUs, memory, SSD size) and there doesn't seem to be a word about it in the doc. What would you recommend?

Thanks!

Trouble building/using the EMOD dataset

Hello,

I'm having some trouble building/using the EMOD dataset. I downloaded the 2020 version of the dataset from the website referenced in the docs and managed to build a single .vrt file and link it in config.yaml. However, gdalbuildvrt doesn't recognize -co as an option, so I tried setting VRT_SHARED_SOURCE=0 as an environment variable instead (perhaps this is wrong? I'm not that familiar with the GDAL toolchain and geospatial data in general, unfortunately). Linking the dataset in config.yaml seems to work, but when I try to query a location, I get the following error message:

{
  "error": "Dataset has no coordinate reference system. Check the file 'data/emod2020-vrt/emod2020.vrt' is a geo raster. Otherwise you'll have to add the crs manually with a tool like gdaltranslate.", 
  "status": "INVALID_REQUEST"
}

I've also tried using the option -oo, but that results in the same error.

Any pointers here would be greatly appreciated.

about the project

Hi,

Is it possible to make requests using axios, or do you need to pay for an API key for that?

Incorrect example in docs

Hello and thank you for that wonderful project!

I found a mistake in your documentation where you describe how to define a multi dataset in config.yaml file. https://www.opentopodata.org/notes/multiple-datasets/

This didn't work for me:

# Hi-res New Zealand.
- name: nzdem8m
  path: data/nzdem8m/
  filename_tile_size: 65536
  filename_epsg: 2193

# Mapzen global.
- name: mapzen
  path: data/mapzen/


# NZ with mapzen fallback.
-name: nz-global
  - child_datasets:
    - nzdem8m
    - mapzen

I had to change the nz-global configuration part to:

# NZ with mapzen fallback.
- name: nz-global
  child_datasets:
    - nzdem8m
    - mapzen

There was a space missing between the first dash and name (-name: nz-global => - name: nz-global), and I had to remove the dash before child_datasets.

make run errors

Hello,

I tried to build/run the project on my AWS EC2 server and got these error messages:

docker run --rm -it --volume "/home/jeremiah/mushroom/opentopodata/data:/app/data:ro" -p 5000:5000 opentopodata:`cat VERSION` 
2023-05-09 07:50:29,418 INFO Set uid to user 0 succeeded
2023-05-09 07:50:29,420 INFO supervisord started with pid 1
2023-05-09 07:50:30,423 INFO spawned: 'memcached' with pid 10
2023-05-09 07:50:30,425 INFO spawned: 'nginx' with pid 11
2023-05-09 07:50:30,427 INFO spawned: 'uwsgi' with pid 12
2023-05-09 07:50:30,432 INFO spawned: 'warm_cache' with pid 13
[uWSGI] getting INI configuration from /app/docker/uwsgi.ini
*** Starting uWSGI 2.0.21 (64bit) on [Tue May  9 07:50:30 2023] ***
compiled with version: 10.2.1 20210110 on 08 May 2023 16:41:29
os: Linux-5.19.0-1022-aws #23~22.04.1-Ubuntu SMP Fri Mar 17 15:38:24 UTC 2023
nodename: 6cde6eb06964
machine: x86_64
clock source: unix
detected number of CPU cores: 2
current working directory: /app
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
your memory page size is 4096 bytes
detected max file descriptor number: 1048576
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
setgid() to 33
setuid() to 33
Python version: 3.9.16 (main, May  3 2023, 09:54:39)  [GCC 10.2.1 20210110]
python: can't open file '/app/docker/warm_cache.py': [Errno 13] Permission denied
*** Python threads support is disabled. You can enable it with --enable-threads ***
2023-05-09 07:50:30,487 INFO exited: warm_cache (exit status 2; not expected)
Python main interpreter initialized at 0x563990143740
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 20 seconds
mapped 403029 bytes (393 KB) for 2 cores
*** Operational MODE: preforking ***
failed to open python file /app/opentopodata/api.py
unable to load app 0 (mountpoint='') (callable not found or import error)
*** no app loaded. GAME OVER ***
VACUUM: unix socket /tmp/uwsgi.sock removed.
2023-05-09 07:50:31,491 INFO success: memcached entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-05-09 07:50:31,491 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-05-09 07:50:31,491 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
[... supervisord respawns uwsgi and warm_cache several more times; each attempt fails with the same "Permission denied" on /app/docker/warm_cache.py and "failed to open python file /app/opentopodata/api.py" errors ...]
2023-05-09 07:50:37,741 INFO gave up: warm_cache entered FATAL state, too many start retries too quickly
2023-05-09 07:50:39,803 INFO gave up: uwsgi entered FATAL state, too many start retries too quickly

I tried various things, but none of them worked:

  • Changing permissions on api.py and warm_cache.py (chmod 777)
  • Editing uwsgi.ini to change www-data to my username
  • Reinstalling uwsgi (because of "!!! no internal routing support, rebuild with pcre support !!!")

Any ideas?
Thanks in advance

Feature request: MOLA/DTM dataset support

I'm not sure if this has been requested before, but it would be super useful if the Open Topo Data API server were extended to include elevation data for Mars. The two datasets I'm aware of are described in this SE answer:

DTM coverage is small, but the resulting topo models are awesomely detailed - they are good for assessing landing sites and building hi-fi virtual terrains at great locations (Marineris, Olympus etc.)

For the things I'm working on, the MOLA dataset would suffice to support my experiments, as it covers the entire Martian surface. Perhaps this might help others working in fields like astrogeology, space exploration and more!


How challenging would you say it would be to integrate these datasets into the current API? Do you think the available dataset format varies significantly from Earth-based data, or is it just as easy to implement?

Cheers!

Using the emod2018 dataset, opentopodata.org's API returns invalid JSON

Hi :-)

Whilst working on a geotagging program using opentopodata.org's API, I noticed an error when parsing the server's response into a QJsonDocument: it yielded QJsonParseError::IllegalNumber. After taking a closer look, this happens when using the emod2018 dataset.

Example: Requesting https://api.opentopodata.org/v1/emod2018?locations=50,11 returns

{
"results": [
    {
    "elevation": NaN, 
    "location": {
        "lat": 50.0, 
        "lng": 11.0
    }
    }
], 
"status": "OK"
}

The elevation key's value is NaN (instead of null, as the other datasets return when a coordinate is out of bounds). This is, per RFC 4627, not permitted (cf. https://tools.ietf.org/html/rfc4627#section-2.4):

Numeric values that cannot be represented as sequences of digits
(such as Infinity and NaN) are not permitted.

Most probably, this is what QJsonDocument::fromJson complains about.
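For reference, Python's standard json module reproduces this behavior: by default it emits the non-standard NaN token, and the usual fix is to convert NaN to None (which serializes as null) before encoding. A small illustration:

```python
import json
import math

# By default, json.dumps emits the JavaScript-style NaN token, which is
# not valid JSON per RFC 4627:
print(json.dumps({"elevation": float("nan")}))  # {"elevation": NaN}

# With allow_nan=False the problem surfaces as a ValueError instead:
try:
    json.dumps({"elevation": float("nan")}, allow_nan=False)
except ValueError as e:
    print(e)

# The usual fix is converting NaN to None, which serializes as null:
elevation = float("nan")
safe = {"elevation": None if math.isnan(elevation) else elevation}
print(json.dumps(safe))  # {"elevation": null}
```

So a server-side NaN-to-None conversion before serialization would make the emod2018 responses consistent with the other datasets.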

Anyway, thanks about this great piece of software and the great service you provide :-)

Severe memory leak

Thanks for your work on this awesome project.

We upgraded from 1.7.1 to 1.8.2 and we're now seeing an unbounded memory leak until the server dies. It is very painful.

We're using the stock repo config with ASTER 30m and with listen = 1024 added to uwsgi.ini.

[screenshot: graph of server memory usage]

By the time I finished writing this, memory usage had already gone up to 3.5 GB.

We can work around/fix it by adding reload-on-as = 512 to uwsgi.ini.

ERROR in api: Input is not a transformation

So, I'm trying to add yet another dataset (non-public data), like this:

  • Got a bunch of .asc files.
  • Converted them to .tif.
  • Renamed them to follow the SRTM naming conventions, ending up with names like N5220000E580000.tif etc
  • Added a config.yaml with the proper filename_epsg and filename_tile_size.

Unfortunately I only get errors in the end:

{
  "error": "Server error, please retry request.",
  "status": "SERVER_ERROR"
}

The other datasets on the server can be queried alright, only the new one throws this error, and I don't see what's wrong.

What is the best way to debug this? Is there a logfile somewhere with further details? (I didn't find any in the docker container.)

Docker container run: Error: could not find config file /app/docker/supervisord.conf

I am building the docker image using the provided Dockerfile like so
docker build --tag opentopodata --file .\docker\Dockerfile .

When trying to run the container with the docker run command:
docker run --rm -it --volume /opentopodata/data:/app/data:ro -p 5000:5000 opentopodata

I get the following error:

Error: could not find config file /app/docker/supervisord.conf
For help, use /usr/bin/supervisord -h

I tried different commands in the run.sh file, but it always complains about missing files. I checked in the container and they are definitely there.

Modifying the docker run command in such a way starts up the server successfully:
docker run --rm -it --volume /opentopodata/data:/app/data:ro -p 5000:5000 opentopodata sh -c "exec env N_UWSGI_THREADS=2 /usr/bin/supervisord -c /app/docker/supervisord.conf"

Any idea what could be wrong?

"Config Error: No dataset folder found at location 'data/etopo1/'"

Hello,

I am trying to make etopo1 following this tutorial ETOPO1, but I am getting this error:
{ "error": "Config Error: No dataset folder found at location 'data/etopo1/'", "status": "SERVER_ERROR" }

My system is Windows 10. I set it through docker.

Path of the GeoTIFF (converted with GDAL):
opentopodata/data/etopo1/ETOPO1.tiff

config.yaml:

# An example of a config.yaml file showing all possible options. If no
# config.yaml file exists, opentopodata will load example-config.yaml instead.


# 400 error will be thrown above this limit.
max_locations_per_request: 100

# CORS header. Should be None for no CORS, '*' for all domains, or a url with
# protocol, domain, and port ('https://api.example.com/'). Default is null.
access_control_allow_origin: null

datasets:
- name: etopo1
  path: data/etopo1/

# Example config for 90 metre SRTM.
# - name: srtm90m
#   path: data/srtm-90m-nasa-v3.0/
#   filename_epsg: 4326  # This is the default value.
#   filename_tile_size: 1  # This is the default value.

Enabling CORS on api.opentopodata.org

Hiya, thanks so much for this resource, it's great. Would you consider turning on CORS headers for the public API? I was poking around and I can see there's support in the code for this already (otherwise I'd submit a PR to add it). If not, no worries, and thanks again!

Suggestion of code to bulk download DEM data more easily from USGS

I had trouble downloading the USGS data, even with the correct login information and after following Earthdata's advice.

But once I was correctly authenticated to the Earthdata site with my account and could download any file manually, this code worked well for me.

Just create a text file from srtm30m_urls.txt (for example) and run the code below with your own paths for the Chrome executable and data_urls.txt.

import time
import webbrowser

# Windows
chrome_path = "C:/Program Files/Google/Chrome/Application/chrome.exe %s"
# macOS
# chrome_path = "open -a /Applications/Google\\ Chrome.app %s"
# Linux
# chrome_path = "/usr/bin/google-chrome %s"

# Skip blank lines so no empty tabs get opened.
with open("data_urls.txt") as f:
    url_list = [line for line in f.read().splitlines() if line.strip()]

browser = webbrowser.get(chrome_path)
for i, url in enumerate(url_list, start=1):
    browser.open_new_tab(url)
    if i % 100 == 0:
        time.sleep(5)  # pause 5 s every 100 URLs

Problem with ned10m dataset

I've tried to set up a server to get data out of the ned10m dataset.

When I try to run it, I get this log :

*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 12)
spawned uWSGI worker 1 (pid: 37, cores: 1)
spawned uWSGI worker 2 (pid: 38, cores: 1)
2020-08-05 15:11:34,862 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-08-05 15:11:34,862 INFO success: memcached entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-08-05 15:11:34,862 INFO success: warm_cache entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2020-08-05 15:11:34,862 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
ERROR:root:Invalid config: Unknown dataset type for 'ned10m'.
ERROR:root:Unable to warm cache. This probably means Open Topo Data isn't working.
2020-08-05 15:11:34,940 INFO exited: warm_cache (exit status 1; not expected)

It seems that my dataset is not properly configured, but my config.yaml is exactly the same as the one in the docs for the ned10m dataset. I've only downloaded part of the ned10m dataset from USGS. Here is my folder structure for the dataset:
data/ned10m:
USGS_13_n05e162.tif USGS_13_n06e152.tif USGS_13_n09e138.tif USGS_13_n17w066.tif USGS_13_n18w156.tif USGS_13_n21w158.tif USGS_13_n24w081.tif USGS_13_n25w098.tif USGS_13_n26w099.tif
USGS_13_n05e163.tif USGS_13_n06e158.tif USGS_13_n13e144.tif USGS_13_n17w067.tif USGS_13_n19w155.tif USGS_13_n21w159.tif USGS_13_n24w082.tif USGS_13_n25w099.tif USGS_13_n26w100.tif
USGS_13_n06e134.tif USGS_13_n07e152.tif USGS_13_n14e145.tif USGS_13_n17w068.tif USGS_13_n19w157.tif USGS_13_n21w160.tif USGS_13_n24w083.tif USGS_13_n26w081.tif USGS_13_n27w081.tif
USGS_13_n06e151.tif USGS_13_n08e134.tif USGS_13_n17w065.tif USGS_13_n18w066.tif USGS_13_n20w156.tif USGS_13_n21w161.tif USGS_13_n25w082.tif USGS_13_n26w083.tif

Config reload on non-docker version

Hello,

The memcached server must be flushed (restarted) in the uwsgi+memcached (non-docker) setup to reload config.yaml.

Could you please update the docs accordingly?

Best,

gRPC support

Would it be possible to build in support for gRPC? I'm building a location service that uses a lat,lng coordinate to get the elevation from an opentopodata instance. When importing updates of places we sometimes need to ask for elevation 100,000 times within a short duration. I'm wondering if using gRPC instead of HTTP would speed up this process?
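Independent of gRPC, the existing HTTP API already amortizes per-request overhead by accepting many pipe-separated locations in a single call (capped by the server's max_locations_per_request setting). A sketch, where the dataset name and server URL are placeholders:

```python
import json
import urllib.request

# Batch many points into one request; the server's
# max_locations_per_request setting caps how many fit in one call.
points = [(56.0, 123.0), (56.01, 123.0), (56.02, 123.0)]
locations = "|".join(f"{lat},{lng}" for lat, lng in points)
url = f"http://localhost:5000/v1/test-dataset?locations={locations}"

# Uncomment against a running server:
# elevations = [r["elevation"] for r in
#               json.load(urllib.request.urlopen(url))["results"]]
```

Chunking the 100,000 points into such batches may already deliver most of the speedup being asked about.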

Healthy endpoint

When deploying behind an ELB on AWS, one needs to configure a health check endpoint.
/ and /v1 return 404, so you must configure a health check URL like localhost:5000/v1/ned10m?locations=38.7181772,-76.5999447 that actually returns data. However, this only works once you've configured a dataset.

It would be helpful to have an endpoint, e.g. /healthy, that returns 200 when the server is up.
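A minimal sketch of the requested behavior using only the Python standard library. (Open Topo Data itself serves Flask behind nginx/uwsgi, so the real change would live there; the /healthy path and response body are this issue's suggestion, not an existing API.)

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Answers 200 on /healthy whenever the process is up."""

    def do_GET(self):
        if self.path == "/healthy":
            body = b'{"status": "OK"}'
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging in the demo
        pass

# To run standalone:
# HTTPServer(("127.0.0.1", 5001), HealthHandler).serve_forever()
```

The key property for an ELB health check is that the endpoint succeeds independently of any dataset being configured.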

Document storage-size requirements of each data set

It's great that Open Topo Data supports such a wide variety of data sets. However, for someone wanting to host their own server (with one or several of these data sets), a limiting factor might be the storage size needed for the data. It would therefore be great if the documentation included the storage-size requirements for each data set.

This information could either be included on the pages that describe the data sets in detail (maybe it is already there for some of them?) and/or it could be included in the summary table on the introduction page.

Since https://www.opentopodata.org/notes/cloud-storage/ recommends converting the data to cloud-optimized GeoTIFF (COG) format, it might be useful to document the storage size both in the original format and as COG.
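For anyone measuring this themselves: GDAL can do the COG conversion directly. A sketch assuming GDAL ≥ 3.1 (which ships the COG driver) and placeholder filenames:

```shell
# Convert a plain GeoTIFF to a cloud-optimized GeoTIFF; output size depends
# heavily on the compression chosen, so measure after converting.
gdal_translate -of COG -co COMPRESS=DEFLATE input.tif output_cog.tif
du -sh input.tif output_cog.tif
```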

API endpoint for querying server version

Minor feature request: It would be nice to have a way to query the server version via the API. AFAICS this is not possible right now, or is it?

One could either add a new endpoint (/version?), which simply returns something like:

{
    "version": "1.5.1"
}

Alternatively, one could also include it into the /health endpoint, so that it returns:

{
    "status": "OK",
    "version": "1.5.1"
}

(I'd slightly prefer the second option here.)

For the version string, I can also see two different options:

  • Either just use the content of the VERSION file (e.g. 1.5.0 or 1.5.1).
  • Or use the output of git describe --tags --always. For current master this yields v1.5.1-2-gde6948d, but for the current dev branch it just gives the git hash e72a26c.

The second variant gives a unique version string for every commit, while the first is simpler (but less exact, and not necessarily unique). Both would be an improvement over the status quo. One could also imagine outputting both in separate JSON fields.

API endpoint for querying the available datasets

Carrying over a point from #47 (comment):

Btw, another useful addition for the /health endpoint might be to list the available datasets (which, apart from the version, can be another major difference in how different instances of OTD behave).

And yes, the datasets are listed in config.yaml, so they are obvious to the server admin, but generally not to end users.

This information could be included in the /health endpoint or it could go into a new endpoint like /datasets or similar.
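A sketch combining both suggestions: /health reporting status, version, and the configured dataset names. It assumes a config structure mirroring config.yaml, where datasets are a list of entries with a `name` key; `health_with_datasets` is a hypothetical helper:

```python
def health_with_datasets(config, version="1.5.1"):
    """Extend the /health payload with the configured dataset names."""
    names = [d["name"] for d in config.get("datasets", [])]
    return {"status": "OK", "version": version, "datasets": names}

# Example with a config dict shaped like a parsed config.yaml:
payload = health_with_datasets(
    {"datasets": [{"name": "test-dataset"}, {"name": "srtm30m"}]}
)
```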

(Windows) '%(ENV_N_UWSGI_THREADS)s' cannot be expanded.

I get this error when running docker run --rm -it --volume C:/path/to/opentopodata/data:/app/data:ro -p 5000:5000 opentopodata sh -c "/usr/bin/supervisord -c /app/docker/supervisord.conf":

Error: Format string '/usr/local/bin/uwsgi --ini /app/docker/uwsgi.ini --processes %(ENV_N_UWSGI_THREADS)s' for 'program:uwsgi.command' contains names ('ENV_N_UWSGI_THREADS') which cannot be expanded. Available names: ENV_CURL_CA_BUNDLE, ENV_GPG_KEY, ENV_HOME, ENV_HOSTNAME, ENV_LANG, ENV_PATH, ENV_PWD, ENV_PYTHON_GET_PIP_SHA256, ENV_PYTHON_GET_PIP_URL, ENV_PYTHON_PIP_VERSION, ENV_PYTHON_VERSION, ENV_TERM, group_name, here, host_node_name, process_num, program_name in section 'program:uwsgi' (file: '/app/docker/supervisord.conf') For help, use /usr/bin/supervisord -h
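The error message hints at the cause: supervisord can only expand `%(ENV_X)s` names for variables that are actually set in the container's environment, and N_UWSGI_THREADS is not in the "Available names" list. A likely workaround (untested sketch) is to pass the variable explicitly when invoking docker run by hand:

```shell
# Provide N_UWSGI_THREADS so supervisord can expand %(ENV_N_UWSGI_THREADS)s;
# the thread count here is an arbitrary example value.
docker run --rm -it -e N_UWSGI_THREADS=4 \
    --volume C:/path/to/opentopodata/data:/app/data:ro -p 5000:5000 \
    opentopodata sh -c "/usr/bin/supervisord -c /app/docker/supervisord.conf"
```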

auditwheel repair requires patchelf >= 0.14.

Get:63 http://deb.debian.org/debian bullseye/main amd64 patchelf amd64 0.12-1 [61.9 kB]

⬆️ This seems to be the issue. Is this on my end?


I then ran docker system prune and retried, but got the same outcome:

tim@virtual:~/rails/opentopodata$ docker system prune
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all dangling images
  - all dangling build cache
Are you sure you want to continue? [y/N] y
Deleted Containers:
36f0bbd34ec78607c80abc07158cf202ce1864a700c3355c186d84c9c8ecd716
f4cadf00732312bcff43e87481892fc64edb31b1cc18f7e8fd84ff10423d4984
94d6c5b56165081eaaffce56ad4f2e5bf79c82fe93cec8503271adc2fe460035
Deleted Images:
deleted: sha256:a929305d17fb4dad2cb010811e8fa8e00e63a71be0fc5632313060c74dc50650
deleted: sha256:509c766c41029b1e1dd193a9037e642acc31ed027dcbc7792bebd698d547c0be
Total reclaimed space: 363.5MB

tim@virtual:~/rails/opentopodata$ make build
docker build --tag opentopodata:`cat VERSION` --file docker/Dockerfile .
Sending build context to Docker daemon  33.02MB
Step 1/17 : FROM python:3.9.13-slim-bullseye as builder
 ---> 5da6ce3c33c6
Step 2/17 : RUN set -e &&     apt-get update &&     apt-get install -y --no-install-recommends         build-essential         gcc         libmemcached-dev         patchelf         python3.9-dev
 ---> Running in bd556d275925
...
Get:63 http://deb.debian.org/debian bullseye/main amd64 patchelf amd64 0.12-1 [61.9 kB]
...
Setting up patchelf (0.12-1) ...
...
Removing intermediate container bd556d275925
 ---> 59a80e612765
Step 3/17 : RUN pip config set global.disable-pip-version-check true &&     pip wheel --wheel-dir=/root/wheels uwsgi==2.0.19.1 &&     pip wheel --wheel-dir=/root/wheels regex==2021.11.10 &&     pip wheel --wheel-dir=/tmp/wheels pylibmc==1.6.1 &&     pip install --no-cache-dir auditwheel &&     auditwheel repair /tmp/wheels/pylibmc-*.whl -w /root/wheels --plat manylinux_2_27_x86_64
 ---> Running in 0b873bbcaeb0
...
Successfully built uwsgi
...
Successfully built pylibmc
...
Successfully installed auditwheel-5.2.1 pyelftools-0.29
INFO:auditwheel.main_repair:Repairing pylibmc-1.6.1-cp39-cp39-linux_x86_64.whl
Traceback (most recent call last):
  File "/usr/local/bin/auditwheel", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.9/site-packages/auditwheel/main.py", line 59, in main
    rval = args.func(args, p)
  File "/usr/local/lib/python3.9/site-packages/auditwheel/main_repair.py", line 172, in execute
    patcher = Patchelf()
  File "/usr/local/lib/python3.9/site-packages/auditwheel/patcher.py", line 44, in __init__
    _verify_patchelf()
  File "/usr/local/lib/python3.9/site-packages/auditwheel/patcher.py", line 37, in _verify_patchelf
    raise ValueError(
ValueError: patchelf patchelf 0.12
 found. auditwheel repair requires patchelf >= 0.14.
The command '/bin/sh -c pip config set global.disable-pip-version-check true &&     pip wheel --wheel-dir=/root/wheels uwsgi==2.0.19.1 &&     pip wheel --wheel-dir=/root/wheels regex==2021.11.10 &&     pip wheel --wheel-dir=/tmp/wheels pylibmc==1.6.1 &&     pip install --no-cache-dir auditwheel &&     auditwheel repair /tmp/wheels/pylibmc-*.whl -w /root/wheels --plat manylinux_2_27_x86_64' returned a non-zero code: 1

make: *** [Makefile:5: build] Error 1
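The traceback says the build fails because the installed patchelf (0.12, which is the version Debian bullseye packages) is older than the 0.14 that auditwheel requires. One possible workaround, sketched here as an assumption rather than a confirmed fix for this Dockerfile, is to install the statically linked patchelf wheel from PyPI before running the repair step, so a new-enough binary is on PATH:

```dockerfile
# Hypothetical fragment: install patchelf >= 0.14 from PyPI (the "patchelf"
# package ships prebuilt binaries) before auditwheel repair runs.
RUN pip install --no-cache-dir "patchelf>=0.14" auditwheel \
    && patchelf --version \
    && auditwheel repair /tmp/wheels/pylibmc-*.whl -w /root/wheels --plat manylinux_2_27_x86_64
```

Building patchelf from source or pulling it from a newer Debian release would serve the same purpose.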

Diagnostic issues with locations outside of raster bounds

If I query a location outside of the region covered by the EU-DEM data on my own server, I get:

curl "https://myserver/v1/eudem25m?locations=0,0"       
{
  "error": "Location '0.0,0.0' has latitude outside of raster bounds", 
  "status": "INVALID_REQUEST"
}

If I do the same on opentopodata.org, the reply is different:

curl "https://api.opentopodata.org/v1/ned10m?locations=0,0"       
{
  "results": [
    {
      "elevation": null, 
      "location": {
        "lat": 0.0, 
        "lng": 0.0
      }
    }
  ], 
  "status": "OK"
}

So my first question is: where does this difference come from? It seems to be another undocumented configuration aspect of the EU-DEM dataset (and it apparently affects other datasets as well).
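Until the cause is clear, a client has to cope with both reply shapes. A minimal sketch of a normalizing helper (the function name and error handling are my own choices, built only on the two responses shown above):

```python
def extract_elevations(response):
    """Normalize both observed reply shapes into a list of elevations.

    Self-hosted EU-DEM style: an out-of-bounds point fails the whole
    request with status INVALID_REQUEST, which we surface as an exception.
    Public API style: out-of-coverage points come back with elevation null,
    which we pass through as None.
    """
    if response.get("status") != "OK":
        raise ValueError(response.get("error", "unknown error"))
    return [result["elevation"] for result in response["results"]]
```

For example, the public-API reply above yields `[None]`, while the self-hosted reply raises with the server's error message.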

And then there is an additional problem with the error message above. If I combine a valid and an invalid point, the error message always refers to the valid one:

curl "https://myserver/v1/eudem25m?locations=0,0|50,10"
{
  "error": "Location '50.0,10.0' has latitude outside of raster bounds", 
  "status": "INVALID_REQUEST"
}

This was already noticed in #24, where I suspected this was simply an off-by-one error. However, no matter how many invalid locations I add, the error message always mentions the only valid point in the request:

curl "https://myserver/v1/eudem25m?locations=0,0|0,0|50,10|0,0|0,0|0,0"
{
  "error": "Location '50.0,10.0' has latitude outside of raster bounds", 
  "status": "INVALID_REQUEST"
}

This behavior is very weird! Unfortunately, I was not able to reproduce it on the server at opentopodata.org. (In fact, I was not able to trigger this error message there at all, as explained above.)
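For what it's worth, the symptom (the error always names the one valid point, regardless of how many invalid points surround it) looks less like an off-by-one and more like an inverted bounds mask. A purely hypothetical sketch, not taken from the Open Topo Data source, that reproduces the observed behavior:

```python
# EU-DEM covers roughly latitudes 34..72 (an assumption for illustration).
LAT_MIN, LAT_MAX = 34.0, 72.0

def report_out_of_bounds(lats):
    """Return the latitude that a buggy validator would blame, or None."""
    inside = [LAT_MIN <= lat <= LAT_MAX for lat in lats]
    if all(inside):
        return None  # every point is valid, nothing to report
    # Bug: selecting the reported point with the *valid* mask instead of
    # its inverse always names an in-bounds location, no matter how many
    # out-of-bounds points the request contains.
    return next(lat for lat, ok in zip(lats, inside) if ok)
```

With `lats = [0, 0, 50, 0, 0, 0]` this blames latitude 50.0, matching the responses above; whether the actual validation code does something equivalent would need checking against the source.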
