asf-stac's Introduction

HyP3 documentation

HyP3 documentation is built using MkDocs and the ASF Theme.

How to

Setting up a development environment

To automatically document some of our APIs, we use a conda environment with our APIs installed. You can get Miniconda (recommended) here:

https://docs.conda.io/en/latest/miniconda.html

Once conda is installed, from the repository root, you can create and activate a conda environment with all the necessary dependencies:

conda env create -f environment.yml
conda activate hyp3-docs

Later, you can update the environment's dependencies with:

conda env update -f environment.yml

Build and view the documentation site

With the hyp3-docs conda environment activated, run

mkdocs serve

to generate the documentation. This will allow you to view it at http://127.0.0.1:8000/. MkDocs will automatically watch for new/changed files in this directory and rebuild the website so you can see your changes live (just refresh the webpage!).

Note: mkdocs serve captures your terminal; use ctrl+c to exit. It is recommended you use a second, dedicated terminal so you can keep this command running.

Deploy

This documentation site is deployed as a GitHub Organization website with a CNAME so that it's viewable at https://hyp3-docs.asf.alaska.edu/. The website is served out of the special https://github.com/ASFHyP3/ASFHyP3.github.io repository. Deployment is handled automatically with the .github/workflows/deploy_to_github_io.yml GitHub Action for any merge to main.

There is also a test site deployed to https://hyp3-docs.asf.alaska.edu/hyp3-docs, which tracks the develop branch of this repo and is served out of the gh-pages branch of this repo.

Enable or disable the announcement banner

We can display a site-wide banner for important announcements. The content of this banner is specified in overrides/main.html, which should contain the following placeholder text when the banner is not in use:

{% extends "partials/main.html" %}

{# Uncomment this block to enable the announcement banner:
{% block announce %}
<div id="announcement-content">
    ⚠️ TODO: Your announcement here.<br />
    <a class="announcement-link" href="TODO">Read the full announcement.</a>
</div>
{% endblock %}
#}

To enable the banner, uncomment the announce block and fill in the TODOs. Below is an example of an enabled announcement banner (taken from here):

{% extends "partials/main.html" %}

{% block announce %}
<div id="announcement-content">
    ⚠️ Monthly processing quotas were replaced by a credit system on April 1st.<br />
    <a class="announcement-link" href="/using/credits">Read the full announcement.</a>
</div>
{% endblock %}

When the announcement is no longer needed, restore the file to the placeholder text in order to disable the banner.

If you are building and viewing the site locally, you will need to exit with ctrl+c and re-run mkdocs serve to re-render any changes you make to this file.

Markdown formatting

MkDocs and GitHub parse markdown documents slightly differently. Some compatibility tips:

  • Raw links should be wrapped in angle brackets: <https://example.com>

  • MkDocs is pickier about whitespace between element types (e.g., headers, paragraphs, lists) and seems to expect indents to be 4 spaces. So to get a rendered representation like:

    • A list item

      A sub list heading
      • A sub-list item

    in MkDocs, you'll want to write it like:

    Good

    - A list item

        ##### A sub list heading
        - A sub-list item

    Bad

    - A list item
      ##### A sub list heading
      - A sub-list item

    - A list item
        ##### A sub list heading
        - A sub-list item

    - A list item

      ##### A sub list heading
      - A sub-list item

asf-stac's People

Contributors

asjohnston-asf, cirrusasf, dependabot[bot], forrestfwilliams, jhkennedy, jtherrmann

Forkers

asfadmin

asf-stac's Issues

Deploy the STAC API to AWS

  • #7
  • #6
  • #10
  • #11
  • #28
  • #60
  • Create sandbox CICD pipeline with GitHub Actions
  • Allow deploying to test and prod
  • Deploy the database passwords to AWS Secrets Manager and document how to access them (in README)

Should we set Postgres `search_path` for the admin user?

As explained by https://stac-utils.github.io/pgstac/pgstac/#pgstac-search-path, the Postgres search_path needs to be set to allow the user access to the pgstac schema. This does not apply to our API, because the provided pgstac app sets the search path at the bottom of this file (see "search_path": "pgstac,public"). The search path is also already set correctly when logging in manually as one of the provided pgstac users. However, it is not set when we log in to the database manually as our admin user (username postgres), so we can't query the pgstac schema in this case, which can hinder debugging. We may want to consider setting the search path for our admin user in our database configuration script.

The search path can be displayed with the SHOW search_path; query.
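
If we decide to do this, a minimal sketch of the change, assuming the admin username is postgres as described above (the setting persists for the role and applies to new sessions):

ALTER ROLE postgres SET search_path TO pgstac, public;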

PySTAC `Catalog.add_child()` takes progressively longer as more children are added

I'm attempting to generate the STAC catalog for the entire coherence dataset on this branch. When calling Catalog.add_child() to add the sub-collections, each call takes progressively longer as more children are added. When trying to add the ~25,000 sub-collections, I killed the process after it had run for 1.5 hours. I tried using the Catalog.add_children() method instead, but got the same results. This line is where we're calling the offending method.
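
For reference, a minimal self-contained sketch of the pattern in question (the collections here are dummies standing in for the ~25,000 coherence sub-collections; this is an illustration, not our actual script):

import pystac

# stand-in sub-collections with a dummy global, open-ended extent
extent = pystac.Extent(
    spatial=pystac.SpatialExtent([[-180, -90, 180, 90]]),
    temporal=pystac.TemporalExtent([[None, None]]),
)
sub_collections = [
    pystac.Collection(id=f'tile-{i}', description='dummy', extent=extent)
    for i in range(1000)
]

catalog = pystac.Catalog(id='coherence', description='Global coherence dataset')

# each add_child() call takes progressively longer as children accumulate,
# which is what makes this loop impractical at ~25,000 sub-collections
for collection in sub_collections:
    catalog.add_child(collection)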

Ingest the partial coherence dataset into the sandbox API

Overview

We have yet to successfully ingest the partial coherence dataset into the sandbox API. During the last attempt it failed (after successfully ingesting several items) with a 500 Internal Server Error. The Lambda function log reports something about too many connections. Maybe we should debug that particular issue further? (Could it have resulted from my running the local API while the ingest script was running?)

Anyway, the ingest script should handle internal server errors by retrying the request, and it should also allow updating or skipping existing STAC items so that we can re-run the ingest script if it fails.

Edit: Maybe not necessary to retry failures due to internal server errors, as long as we can easily re-run the script by skipping existing objects.
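
For illustration, a hedged sketch of the skip-and-retry behavior (the function name is hypothetical, and the 409 status for an existing item is an assumption about the API's behavior, not confirmed):

import requests

def ingest_item(api_url: str, collection_id: str, item: dict) -> None:
    url = f'{api_url}/collections/{collection_id}/items'
    for _ in range(3):
        response = requests.post(url, json=item)
        if response.status_code == 409:
            return  # assumed: item already exists, so skip it and keep going
        if response.status_code < 500:
            response.raise_for_status()  # raises on 4xx errors, returns on success
            return
        # retry on 5xx responses like the Internal Server Error described above
    response.raise_for_status()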

Tasks

  • Ingest script should skip existing STAC objects
  • Ingest the dataset into the sandbox API

Deploy the STAC API frontend

Overview

Deploy the stac-fastapi container via AWS ECS Fargate.

Edit: I'm running into some sort of dependency bug (ImportError: cannot import name 'as_shape' from 'pygeoif.geometry') when attempting to build an image from their Dockerfile, but the solution is just to pin the relevant package to the latest version (pygeofilter==0.2.0). I've added a make target to allow running the API locally. I think the next step is to write our own Dockerfile and build our image from that.

Edit 2: We'll deploy a serverless API via AWS API Gateway.

Tasks

  • Deploy a STAC API via CloudFormation.
  • Troubleshoot why the Swagger UI isn't working (try to get it working in the stac-fastapi Docker container), or document workaround (paste API URL into existing Swagger interface)

Try using the Bulk Transactions extension for ingest

Overview

The ingest script currently adds one item at a time using the Transactions extension, and performance worsens as more items are inserted. We could try using the Bulk Transactions extension to add multiple items per request. The endpoint is documented here.

I tried adding two items but got a 404:

payload = json.loads('{"items": {"S78W161_winter_hh_tau": {"type": "Feature", "stac_version": "1.0.0", "id": "S78W161_winter_hh_tau", "properties": {"tile": "S78W161", "sar:instrument_mode": "IW", "sar:frequency_band": "C", "sar:product_type": "tau", "start_datetime": "2019-12-01T00:00:00Z", "end_datetime": "2020-02-28T00:00:00Z", "season": "winter", "datetime": "2020-01-14T12:00:00Z", "sar:polarizations": ["HH"]}, "geometry": {"type": "Polygon", "coordinates": [[[-160.0, -79.0], [-160.0, -78.0], [-161.0, -78.0], [-161.0, -79.0], [-160.0, -79.0]]]}, "assets": {"data": {"href": "https://sentinel-1-global-coherence-earthbigdata.s3.us-west-2.amazonaws.com/data/tiles/S78W161/S78W161_winter_hh_tau.tif", "type": "image/tiff; application=geotiff"}}, "bbox": [-161.0, -79.0, -160.0, -78.0], "stac_extensions": ["https://stac-extensions.github.io/sar/v1.0.0/schema.json"], "collection": "sentinel-1-global-coherence"}, "S78W160_069D_inc": {"type": "Feature", "stac_version": "1.0.0", "id": "S78W160_069D_inc", "properties": {"tile": "S78W160", "sar:instrument_mode": "IW", "sar:frequency_band": "C", "sar:product_type": "inc", "start_datetime": "2019-12-01T00:00:00Z", "end_datetime": "2020-11-30T00:00:00Z"}, "geometry": {"type": "Polygon", "coordinates": [[[-159.0, -79.0], [-159.0, -78.0], [-160.0, -78.0], [-160.0, -79.0], [-159.0, -79.0]]]}, "assets": {"data": {"href": "https://sentinel-1-global-coherence-earthbigdata.s3.us-west-2.amazonaws.com/data/tiles/S78W160/S78W160_069D_inc.tif", "type": "image/tiff; application=geotiff"}}, "bbox": [-160.0, -79.0, -159.0, -78.0], "stac_extensions": ["https://stac-extensions.github.io/sar/v1.0.0/schema.json"], "collection": "sentinel-1-global-coherence"}}}')
r = requests.post('http://0.0.0.0:8000/collections/sentinel-1-global-coherence/bulk_items', json=payload)
r.status_code  # gives 404

I'm not sure why the endpoint would not be registered with the app.

Before troubleshooting this further, perhaps we should read the Bulk Transactions code to confirm that it looks like it might solve our ingest performance problems.

Tasks

  • Troubleshoot the 404 for the Bulk Transactions endpoint
  • Confirm that the Bulk Transactions endpoint may help our ingest performance.

Deployment of `stac-test` is failing during database migration

Jira: https://asfdaac.atlassian.net/browse/TOOL-2047

Runs of deploy-stac-test are failing. The CodeBuild logs for the most recent run show that the failure was during the database migration step. I recall that @asjohnston-asf already discovered this issue but I don't remember if any further progress was made.

TODO after resolving this issue:

Remove `pygeofilter==0.2.0` pin after next `stac-fastapi` release

The next stac-fastapi release should no longer pin pygeofilter to an older version, so we'll be able to remove the pygeofilter==0.2.0 pin in the API requirements. We'll want to confirm that pygeofilter is getting installed to the latest version after we remove the pin. See #6 and #61 for context.
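
After removing the pin, a quick check along these lines should confirm the resolved version (the requirements file name is an assumption):

pip install --upgrade -r requirements.txt
pip show pygeofilter  # confirm a version newer than 0.2.0 was installed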

Duplicate files for N71E028 in data provider S3 bucket

Tile N71E028 is problematically organized in the sentinel-1-global-coherence-earthbigdata S3 bucket. It's both in the root data/tiles/ directory and in its own data/tiles/N71E028/ directory. The rho, rmse, tau, and all but one of the inc and lsmap files are only in the root, not in the tile directory.

$ aws s3 ls s3://sentinel-1-global-coherence-earthbigdata/data/tiles/ | grep tif
2021-09-08 06:22:24     100053 N71E028_022D_inc.tif
2021-09-08 06:22:24      34101 N71E028_022D_lsmap.tif
2021-09-08 06:22:24     613616 N71E028_051D_inc.tif
2021-09-08 06:22:24      30337 N71E028_051D_lsmap.tif
2021-09-08 06:22:24     120809 N71E028_080D_inc.tif
2021-09-08 06:22:24      36845 N71E028_080D_lsmap.tif
2021-09-08 06:22:24     612434 N71E028_124D_inc.tif
2021-09-08 06:22:24      31051 N71E028_124D_lsmap.tif
2021-09-08 06:22:24     615102 N71E028_153D_inc.tif
2021-09-08 06:22:24      29966 N71E028_153D_lsmap.tif
2021-09-08 06:22:24    2647980 N71E028_fall_vh_AMP.tif
2021-09-08 06:22:24    3149033 N71E028_fall_vv_AMP.tif
2021-09-08 06:22:24     784242 N71E028_fall_vv_COH06.tif
2021-09-08 06:22:24     765260 N71E028_fall_vv_COH12.tif
2021-09-08 06:22:24     744049 N71E028_fall_vv_COH18.tif
2021-09-08 06:22:24     751727 N71E028_fall_vv_COH24.tif
2021-09-08 06:22:24     751020 N71E028_fall_vv_COH36.tif
2021-09-08 06:22:24     740526 N71E028_fall_vv_COH48.tif
2021-09-08 06:22:24    1957831 N71E028_fall_vv_rho.tif
2021-09-08 06:22:24    1649028 N71E028_fall_vv_rmse.tif
2021-09-08 06:22:24    3084897 N71E028_fall_vv_tau.tif
2021-09-08 06:22:24    2593894 N71E028_spring_vh_AMP.tif
2021-09-08 06:22:24    3096284 N71E028_spring_vv_AMP.tif
2021-09-08 06:22:24     626721 N71E028_spring_vv_COH06.tif
2021-09-08 06:22:24     511655 N71E028_spring_vv_COH12.tif
2021-09-08 06:22:24     444280 N71E028_spring_vv_COH18.tif
2021-09-08 06:22:24     400694 N71E028_spring_vv_COH24.tif
2021-09-08 06:22:24     366048 N71E028_spring_vv_COH36.tif
2021-09-08 06:22:24     348031 N71E028_spring_vv_COH48.tif
2021-09-08 06:22:24    1323006 N71E028_spring_vv_rho.tif
2021-09-08 06:22:24    1435329 N71E028_spring_vv_rmse.tif
2021-09-08 06:22:24    2639341 N71E028_spring_vv_tau.tif
2021-09-08 06:22:24    2686252 N71E028_summer_vh_AMP.tif
2021-09-08 06:22:24    3166636 N71E028_summer_vv_AMP.tif
2021-09-08 06:22:24     777836 N71E028_summer_vv_COH06.tif
2021-09-08 06:22:24     787949 N71E028_summer_vv_COH12.tif
2021-09-08 06:22:24     783556 N71E028_summer_vv_COH18.tif
2021-09-08 06:22:24     788411 N71E028_summer_vv_COH24.tif
2021-09-08 06:22:25     792655 N71E028_summer_vv_COH36.tif
2021-09-08 06:22:24     802653 N71E028_summer_vv_COH48.tif
2021-09-08 06:22:24    2022146 N71E028_summer_vv_rho.tif
2021-09-08 06:22:25    1609180 N71E028_summer_vv_rmse.tif
2021-09-08 06:22:24    3162197 N71E028_summer_vv_tau.tif
2021-09-08 06:22:25    2651958 N71E028_winter_vh_AMP.tif
2021-09-08 06:22:25    3148272 N71E028_winter_vv_AMP.tif
2021-09-08 06:22:25     687252 N71E028_winter_vv_COH06.tif
2021-09-08 06:22:25     594746 N71E028_winter_vv_COH12.tif
2021-09-08 06:22:25     493297 N71E028_winter_vv_COH18.tif
2021-09-08 06:22:25     437508 N71E028_winter_vv_COH24.tif
2021-09-08 06:22:25     395822 N71E028_winter_vv_COH36.tif
2021-09-08 06:22:25     387693 N71E028_winter_vv_COH48.tif
2021-09-08 06:22:25    1444375 N71E028_winter_vv_rho.tif
2021-09-08 06:22:25    1406377 N71E028_winter_vv_rmse.tif
2021-09-08 06:22:25    3017052 N71E028_winter_vv_tau.tif

$ aws s3 ls s3://sentinel-1-global-coherence-earthbigdata/data/tiles/N71E028/
2021-09-08 06:22:23     120809 N71E028_080D_inc.tif
2021-09-08 06:22:23      36845 N71E028_080D_lsmap.tif
2021-09-08 06:22:23     426670 N71E028_fall_vh_AMP.tif
2021-09-08 06:22:23     472484 N71E028_fall_vv_AMP.tif
2021-09-08 06:22:23     150801 N71E028_fall_vv_COH06.tif
2021-09-08 06:22:23     146893 N71E028_fall_vv_COH12.tif
2021-09-08 06:22:23     146769 N71E028_fall_vv_COH18.tif
2021-09-08 06:22:23     148793 N71E028_fall_vv_COH24.tif
2021-09-08 06:22:23     139001 N71E028_fall_vv_COH36.tif
2021-09-08 06:22:23     141937 N71E028_fall_vv_COH48.tif
2021-09-08 06:22:23     413629 N71E028_spring_vh_AMP.tif
2021-09-08 06:22:23     463236 N71E028_spring_vv_AMP.tif
2021-09-08 06:22:23     140888 N71E028_spring_vv_COH06.tif
2021-09-08 06:22:23     121610 N71E028_spring_vv_COH12.tif
2021-09-08 06:22:23     113026 N71E028_spring_vv_COH18.tif
2021-09-08 06:22:23     104311 N71E028_spring_vv_COH24.tif
2021-09-08 06:22:23      98578 N71E028_spring_vv_COH36.tif
2021-09-08 06:22:23      95895 N71E028_spring_vv_COH48.tif
2021-09-08 06:22:23     436409 N71E028_summer_vh_AMP.tif
2021-09-08 06:22:23     474504 N71E028_summer_vv_AMP.tif
2021-09-08 06:22:23     153240 N71E028_summer_vv_COH06.tif
2021-09-08 06:22:23     158306 N71E028_summer_vv_COH12.tif
2021-09-08 06:22:23     156795 N71E028_summer_vv_COH18.tif
2021-09-08 06:22:23     159293 N71E028_summer_vv_COH24.tif
2021-09-08 06:22:23     164560 N71E028_summer_vv_COH36.tif
2021-09-08 06:22:23     162738 N71E028_summer_vv_COH48.tif
2021-09-08 06:22:23     419982 N71E028_winter_vh_AMP.tif
2021-09-08 06:22:23     468394 N71E028_winter_vv_AMP.tif
2021-09-08 06:22:24     143910 N71E028_winter_vv_COH06.tif
2021-09-08 06:22:24     124657 N71E028_winter_vv_COH12.tif
2021-09-08 06:22:24     114489 N71E028_winter_vv_COH18.tif
2021-09-08 06:22:24     110737 N71E028_winter_vv_COH24.tif
2021-09-08 06:22:24     102512 N71E028_winter_vv_COH36.tif
2021-09-08 06:22:24     103574 N71E028_winter_vv_COH48.tif

Restrict access to the Transaction endpoints

Overview

The Transaction extension supports creating, updating, and deleting items. I assume we want to restrict these endpoints to the Tools team?

Edit: Although we've restricted access at the database layer, it would still be nice to remove those endpoints from the API itself, both for extra security and so as not to confuse users who see those endpoints available in the Swagger UI. The problem is that currently we're using the app provided here, and there doesn't seem to be an easy way to just disable particular extensions from an already-created app. One option is to do all of that configuration in our lambda handler, but if the stac-fastapi developers make significant changes to how they're configuring the app, we won't know about it. Another option is to fork their repo and remove the Transaction extension in our fork, so that we could easily merge upstream changes. Another option is to find some gross non-standard way of disabling the extension in the imported module, such as mocking out the extension.

I assume the stac-fastapi devs intend for people to configure and create their own apps as needed, using the modules documented here, so maybe that's the best way to go.

Edit 2: I've opened stac-utils/stac-fastapi#492 to ask the stac-fastapi devs for their recommended approach. We could also potentially fork their repo, implement an option to disable the extension, and open a PR to merge the option upstream, so that we wouldn't have to rely on our fork forever.

Tasks

  • The AWS-hosted API should log in to the database with read-only permissions
  • The make run-api command should still log in to the database with read-write permissions
  • See https://stac-utils.github.io/pgstac/pgstac/ for documentation on roles
  • Document in the README how to run the API locally for accessing the Transaction endpoints.

Move database configuration to AWS resource running in VPC

Overview

The database configuration commands in our deployment workflow no longer work, because the database only accepts connections from within the ASF network or the VPC. We need to run the database configuration commands from an AWS resource running within our VPC. We're currently exploring CodeBuild as a solution.

Tasks

Automate installing/upgrading PostGIS and PgSTAC

Overview

We should automate installing and upgrading both PostGIS and PgSTAC in the STAC API database.

To install PostGIS for the first time, run the following Postgres command:

CREATE EXTENSION postgis;

See here and here for instructions for upgrading PostGIS. We'll want to upgrade to a specific version (rather than just upgrading to the latest version).
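
A hedged sketch of the Postgres commands involved in an upgrade (the version number here is illustrative, not the version we target):

-- upgrade the installed extension to a specific pinned version
ALTER EXTENSION postgis UPDATE TO '3.3.2';
-- confirm the running version afterwards
SELECT PostGIS_Full_Version();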

To install PgSTAC into the database:

make migrate db_host=<host> db_password=<password>

After upgrading pypgstac, you can re-run the above command to upgrade the version of PgSTAC installed in the database. So for the CICD pipeline, it should be sufficient to install requirements at their pinned versions (including pypgstac) and then run the migrate command, regardless of whether it's a first-time deployment.

For more information about migrations, see https://stac-utils.github.io/pgstac/pypgstac/.

Tasks

  • Automate installation and upgrade of PostGIS
  • Automate installation and upgrade of PgSTAC
  • Test Postgres version upgrade by deploying a stack with older Postgres (and the appropriate version of PostGIS), then upgrading to newer Postgres and PostGIS according to README instructions

Get bucket location without configuring AWS credentials

The STAC item creation scripts have to get the S3 bucket location for inclusion in the URL for each STAC item. It's slightly annoying to have to configure AWS credentials (e.g. when running the creation and ingest scripts on an EC2 instance) just for this operation. We should be able to send an unsigned request since the dataset buckets allow public access to the GetBucketLocation operation, but there seems to be a boto3 bug preventing unsigned requests for this operation: boto/boto3#3522

We could wait until the bug is fixed and then implement unsigned requests, or we could just hard-code the bucket location, though this risks the location changing at some point and invalidating our STAC item URLs.
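
For reference, this is roughly what the unsigned request would look like once the bug is fixed (a sketch, not working code today; see boto/boto3#3522):

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# an unsigned client needs no configured AWS credentials
s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))
response = s3.get_bucket_location(Bucket='sentinel-1-global-coherence-earthbigdata')
region = response['LocationConstraint'] or 'us-east-1'  # None means us-east-1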

Add summary for tile attribute to sentinel-1-global-coherence collection metadata

tile is an attribute users may find useful to search on, so we should advertise it in the collection summaries at https://github.com/ASFHyP3/asf-stac/blob/develop/collections/sentinel-1-global-coherence/sentinel-1-global-coherence.json#L32

It's impractical to list all possible values, so we'll likely provide a jsonschema with a regular expression and an example, per https://github.com/radiantearth/stac-spec/blob/master/collection-spec/collection-spec.md#summaries

"tile": {
    "type": "string",
    "pattern": "....",
    "example": "N01W001"
}
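
Based on the tile IDs that appear elsewhere in this repo (e.g., N71E028, N01W001), a plausible pattern is "^[NS]\\d{2}[EW]\\d{3}$", but that's an assumption to verify against the full tile list before publishing.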

Troubleshoot slow API responses

The API seems slow: https://0em7kb6wob.execute-api.us-west-2.amazonaws.com/

I wonder if any of these would help:

  • Try querying the API and running the ingest script from an EC2 instance
  • Move the Lambda function and the database to the same VPC (this would not help: the API-database connection is not the bottleneck)
  • Allocate more resources for the Lambda function and/or API Gateway
  • Check other relevant settings for those resources (see the CloudFormation templates)

Notes:

  • Querying a local API (connected to the RDS database) is almost instantaneous.
  • Querying the API from an EC2 instance is almost as slow as from my laptop.

So it seems that there is a high latency between the client and the API, regardless of the client's internet speed.
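
For reference, a simple way to measure the end-to-end latency from any client:

curl -s -o /dev/null -w '%{time_total}\n' https://0em7kb6wob.execute-api.us-west-2.amazonaws.com/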

Remove Transaction endpoints from public API after next `stac-fastapi` release

Overview

Split from #2.

We've already disabled access to the Transaction endpoints at the database layer in the public API, but we still want to remove those endpoints from the API itself, both for extra security and so as not to confuse users who see those endpoints available in the Swagger UI.

I opened stac-utils/stac-fastapi#492 to ask how to do this, and then opened stac-utils/stac-fastapi#495 based on the feedback to that issue.

As soon as that PR gets merged and there's another stac-fastapi release, we can modify the API handler to populate the environment variable with the list of extensions (all of them except the two Transaction extensions) before importing the pgstac app handler.
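
A hedged sketch of what the handler change might look like (the ENABLED_EXTENSIONS variable name and the extension names are assumptions based on the upstream PR and must be confirmed against the actual release):

import os

# hypothetical list: everything except transaction and bulk_transactions
os.environ['ENABLED_EXTENSIONS'] = 'query,sort,fields,filter,pagination,context'

# the import must come after the environment variable is set, because the
# pgstac app wires up its extensions at import time
from stac_fastapi.pgstac.app import handler  # noqa: E402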

Tasks

  • After the next stac-fastapi release, specify the list of extensions so as to exclude the Transaction extensions.

Locally run a STAC API implementation and load our catalog into the backend

We cloned https://github.com/stac-utils/stac-fastapi and ran the following command, which successfully starts two containers, stac-fastapi (for the API) and pgstac (for the Postgres database):

docker compose up app-pgstac

We were able to connect to the database with the following command (see the database service):

PGUSER=username PGPASSWORD=password PGHOST=localhost PGDATABASE=postgis PGPORT=5439 psql

We observed that the database contains three tables: geography_columns, geometry_columns, and spatial_ref_sys.

We also confirmed that the pgstac container stores the database in a Docker volume, so that it persists across container restarts.

We're currently attempting to load the demo dataset into our database using the loadjoplin-pgstac service.

TODO:

  • Load the demo dataset into our database.
  • Query our STAC API endpoints against the demo data (see https://github.com/radiantearth/stac-api-spec)
  • Determine how to load our catalog into the database via API endpoints.
  • Determine why the spatial_ref_sys table contains 8500 rows by default, and whether that's some sort of demo data or if it's actually required for the STAC API to function.
  • Determine where the STAC data gets stored when you input it via the API (as the demo ingest script does). We're not seeing obvious changes to the database (the spatial_ref_sys table still shows 8500 rows before and after the ingest), but the data could be stored elsewhere in the Docker volume, since it appears to persist across container restarts. I'm really curious where the data is actually stored; if it's not in the database, then what is the purpose of the database? (See the query sketch after this list.)
  • Merge #3
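
On the storage question: the spatial_ref_sys table with ~8500 rows is standard PostGIS spatial-reference data, not demo data, and in pgstac the STAC records live in tables under the pgstac schema rather than in those PostGIS tables. A hedged check, assuming the pgstac migrations have run, using the same connection settings as the psql command above:

-- counts should increase after an ingest if items are landing in the database
SELECT count(*) FROM pgstac.collections;
SELECT count(*) FROM pgstac.items;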

Restrict incoming database connections

Overview

We want our database to only allow incoming connections from within the ASF network and from the STAC API Lambda function. This article may be useful: Amazon Virtual Private Cloud VPCs and Amazon RDS

Verify

  • Lambda function can communicate with database
  • EC2 instances without the client security group cannot connect to the database
  • EC2 instances with the client security group can connect to the database
  • Clients outside of the ASF network cannot connect to the database
  • Clients within the ASF network can connect to the database

Deploy the STAC API backend database

Overview

Deploy the pgstac database via AWS RDS.

Tasks

  • Determine how to adapt pgstac to run in AWS RDS.
  • Deploy a Postgres database to AWS RDS via CloudFormation
  • #9

Use the `pgstac_ingest` database role for the Transactions API when the `pgstac` permissions bug is resolved

There is a pgstac bug in which the pgstac_ingest database role does not have sufficient permissions, reported here: stac-utils/pgstac#147

As a workaround, we use the database admin user postgres to run the API with the Transaction extension enabled (currently just when we run the API locally, though we may deploy a Transactions API to AWS at some point). When the pgstac bug is resolved, we can start using the pgstac_ingest user to run the API with the Transaction extension, as described in the docs:

The pgstac_ingest role has read/write privileges on all tables and should be used for data ingest or if using the transactions extension with stac-fastapi-pgstac.

Remove permissions fix for `pgstac_read` database role when the `pgstac` bug is resolved

There is a pgstac bug in which the pgstac_read database role does not have sufficient permissions, reported here: stac-utils/pgstac#146

We grant the required permissions in the configure-database-roles.sql script. We can remove this fix when the pgstac bug is resolved.

From the docs:

The pgstac_read role has read only access to the items and collections, but will still be able to write to the logging tables.

Fix database permissions for the Filter extension

With the Filter extension enabled in our public API, an attempt to query the GET /queryables endpoint via the Swagger UI results in a 500 Internal Server Error:

{
  "code": "InsufficientPrivilegeError",
  "description": "permission denied for table queryables"
}

We should fix the database permissions and then re-add filter to the list of enabled extensions.
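
A hedged sketch of the likely shape of the fix (the role and table names follow the pgstac docs; confirm against the actual schema before applying):

-- let the read-only role read the queryables table used by the Filter extension
GRANT SELECT ON pgstac.queryables TO pgstac_read;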
