api's Introduction

API

Overview

Serverless APIs for www.analogstudios.net (and friends) using arc.codes.

Local Setup

Credentials

Assumes valid AWS credentials are either exported as environment variables or configured in ~/.aws/credentials.

Additionally, the following credentials files are required to run this project for the various services and APIs exposed.

preferences.arc

For running the various APIs using the Architect sandbox.

Name                              Service      Role
AWS_ACCESS_KEY_ID                 AWS          Publish API
AWS_SECRET_ACCESS_KEY             AWS          Publish API
AWS_CLOUDFRONT_ID                 AWS          Publish API
CONTENTFUL_ACCESS_TOKEN           Contentful   Events API
CONTENTFUL_SPACE                  Contentful   Events API
CONTENTFUL_WEBHOOK_ACCESS_TOKEN   Contentful   Publish API
DATABASE_URL                      Turso        All APIs except Events
DATABASE_TOKEN                    Turso        All APIs except Events
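
A minimal sketch of what this file might look like, using Architect's @env pragma (all values here are placeholders):

  @env
  testing
    AWS_ACCESS_KEY_ID xxx
    AWS_SECRET_ACCESS_KEY xxx
    AWS_CLOUDFRONT_ID xxx
    CONTENTFUL_ACCESS_TOKEN xxx
    CONTENTFUL_SPACE xxx
    CONTENTFUL_WEBHOOK_ACCESS_TOKEN xxx
    DATABASE_URL xxx
    DATABASE_TOKEN xxx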

.env

For running Prisma Studio.

Name           Service        Role
DATABASE_URL   Local SQLite   All APIs except Events
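
A minimal sketch of this file, assuming Prisma's file: connection string syntax for the local SQLite database (the path is an assumption):

  # hypothetical path to the local SQLite file
  DATABASE_URL="file:./prisma/dev.db"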

Install

  1. Clone the repo
  2. Run npm ci
  3. Run npm run arc env
  4. Run npm start to use the local Architect sandbox for development
  5. Make a copy of .env.local and rename it to .env
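
Taken together, a typical first run might look like this (the clone URL is inferred from the repo's GitHub Actions link further down; npm start is shown last since it runs the sandbox in the foreground):

  git clone https://github.com/AnalogStudiosRI/api.git
  cd api
  npm ci

  # pull down environment variables and set up local overrides
  npm run arc env
  cp .env.local .env

  # start the local Architect sandbox
  npm start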

To use Prisma Studio, run npm run studio

Supported APIs

Albums

Data sourced from Turso for the Album resource type. Available at /albums internally and publicly as /api/v2/albums.

Options:

  • ?id=xxx - Filter by the id of the album
  • ?artistId=xxx - Filter by the id of an artist

You can only pass one or the other.
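
For example, a request against the public endpoint might look like this (the artistId value is a placeholder):

  curl "https://www.analogstudios.net/api/v2/albums?artistId=1"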

Artists

Data sourced from Turso for the Artist resource type. Available at /artists internally and publicly as /api/v2/artists.

Options:

  • ?id=xxx - Filter by the id of the artist

Events

Structured events content sourced from Contentful for the Event resource type. Available at /events internally and publicly as /api/v2/events.

Options:

  • ?id=xxx - Filter by the id of the event
  • ?tag=xxx - Filter by tags of the event

You can only pass one or the other.
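
For example (the tag value is a placeholder):

  curl "https://www.analogstudios.net/api/v2/events?tag=tt"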

Posts

Data sourced from Turso for the Post resource type. Available at /posts internally and publicly as /api/v2/posts.

Options:

  • ?id=xxx - Filter by the id of a post

api's Issues

add support for unit testing

after #14 / #15 / #16, and now that multiple projects are starting to rely on this API, we should probably get some test cases worked into this project. Just something straightforward like mocha and a couple of sinon mocks, to prevent silly stuff from happening.

For anyone interested in contributing, I can help by getting the testing tooling set up so the focus can just be on writing tests.
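
As a starting point, a minimal sketch of such a test with mocha (the handler path, request shape, and response shape are all assumptions; sinon stubs could later replace the data layer):

  import assert from 'node:assert';
  import { handler } from '../src/http/get-albums/index.mjs'; // hypothetical path

  describe('GET /albums', () => {
    it('should respond with a 200 status and a JSON body', async () => {
      // Architect-style request object with no query params
      const response = await handler({ queryStringParameters: {} });

      assert.strictEqual(response.statusCode, 200);
      assert.ok(JSON.parse(response.body));
    });
  });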

Migrate Albums resource endpoint

Need to migrate the /api/albums endpoint.

  1. Create /api/v2/albums function
  2. "Reflect" existing API
  3. Support filtering by ?id query param
  4. Support filtering by ?artistId query param
  5. Document in README.md (move to own README doc?)
  6. Test with downstream project(s) / validate all content

apply wildcard (`*`) cache busting

Not sure if this is the cause of the inconsistent data we have been seeing on the frontend side of the project, but I wonder if the issue is how we are submitting our cache invalidation?

  const params = {
    DistributionId: CONFIG.distributionId,
    InvalidationBatch: {
      CallerReference: new Date().getTime(),
      Paths: {
        Quantity: 1,
        Items: [
          `/api/v2/${entity}s`
        ]
      }
    }
  };

I've always used a wildcard in these types of scenarios, so I'm thinking the code should be updated accordingly:

  const params = {
    DistributionId: CONFIG.distributionId,
    InvalidationBatch: {
      CallerReference: new Date().getTime(),
      Paths: {
        Quantity: 1,
        Items: [
          `/api/v2/${entity}s/*`
        ]
      }
    }
  };
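
For reference, a sketch of submitting that invalidation with the AWS SDK v3 CloudFront client (CONFIG and entity are the same values as in the snippets above; note that CallerReference must be a unique string, so the timestamp is stringified here):

  import { CloudFrontClient, CreateInvalidationCommand } from '@aws-sdk/client-cloudfront';

  const client = new CloudFrontClient({});

  async function invalidateEntityCache(entity) {
    const command = new CreateInvalidationCommand({
      DistributionId: CONFIG.distributionId,
      InvalidationBatch: {
        // must be a unique string per invalidation request
        CallerReference: `${new Date().getTime()}`,
        Paths: {
          Quantity: 1,
          Items: [
            `/api/v2/${entity}s/*` // wildcard catches nested paths and variants
          ]
        }
      }
    });

    return await client.send(command);
  }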

decommission database server

after #33 completes, we should decommission the EC2 database server while doing #28

Will probably want to take a backup first and maybe just stop the instance for now instead of deleting it?

Migrate Artists resource endpoint

Need to migrate the /api/artists endpoint.

  1. Create /api/v2/artists function
  2. "Reflect" existing API
  3. Support filtering by ?id query param
  4. Document in README.md (move to own README doc?)
  5. Test with downstream project(s) / validate all content

Migrate database server (EC2) to hosted database service (that is MySQL compatible)

As part of #28, it would be worth considering moving off the currently running EC2 instance to something more managed, like PlanetScale or similar. Assuming pricing is at least comparable, this would help ensure the current server doesn't ever age out or crash.

Something like this would also provide a nice browser-based way to manage data updates, instead of having to build a custom dashboard.

This would also be a good way to test Greenwood against a managed database when combined with #32.

  1. Migrate Posts w/ #25
  2. Migrate Albums w/ #26
  3. Migrate Artists w/ #27
  4. Set up webhooks for automated cache busting (this was always manual, so it sounds like a new standalone feature?) - deferred to #34

add support for tag based (pre) filtering for API endpoints

In support of the new Tuesday's Tunes website project, it would be nice if the API could return only a subset of the content, to help pre-filter results when requesting data.

Ideally, it would be great to

  1. Add a Tags content type in Contentful and pre-fill it with some values like tt for Tuesday's Tunes
  2. Return the tags in the response
  3. Enable the API to accept that filter via a query param and pre-filter the results (see the sketch below)
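
A rough sketch of item 3, assuming the Events function already has the Contentful entries in hand (the handler shape and the getEventsFromContentful helper are hypothetical):

  // hypothetical pre-filtering of Contentful entries by a ?tag query param
  export async function handler(req) {
    const { tag } = req.queryStringParameters ?? {};
    const events = await getEventsFromContentful(); // hypothetical data fetch

    const filtered = tag
      ? events.filter((event) => (event.tags ?? []).includes(tag))
      : events;

    return {
      statusCode: 200,
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify(filtered)
    };
  }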

Contentful webhooks are returning 500 and not triggering cache busting

Probably related to the v2 migration, but it looks like the webhooks in Contentful are busted :/
(screenshots: failing Contentful webhook deliveries, 2023-12-09)

Response headers:

  {
    "date": "Sat, 09 Dec 2023 15:11:22 GMT",
    "content-type": "application/json",
    "content-length": "35",
    "connection": "close",
    "apigw-requestid": "PrmAGgPfIAMEY0A="
  }

Response body:

  {
    "message": "Internal Server Error"
  }

mutation workflow for database content

Overview

As an outcome of #53 and migrating to Turso / SQLite, a SQLite file was generated and committed to the repository for local development, which should now at least support #46 without the need to provide any direct access to the database.

However, although tables can be edited directly in Turso's UI, or locally using Prisma Studio against the .db file, one thing we did lose was PlanetScale's branching feature and its nice workflow for doing migrations and data mutations.

Outcomes

  1. Confirm if Turso has a branching-like feature, as was available in PlanetScale
  2. Figure out if there's a nice way to edit locally, but push remotely
  3. Document in the README

add security headers check to `POST` Publish endpoint

It would be smart to "lock down" the POST Publish endpoint handling Contentful webhooks with a secret access token, to ensure that only authorized calls to invalidate the cache are made, and that they are known to come from trusted actors.
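
A minimal sketch of such a guard at the top of the POST handler, reusing the CONTENTFUL_WEBHOOK_ACCESS_TOKEN already listed in preferences.arc (the header name here is an assumption):

  export async function handler(req) {
    // hypothetical header name carrying the shared secret
    const token = req.headers['x-contentful-webhook-token'];

    if (token !== process.env.CONTENTFUL_WEBHOOK_ACCESS_TOKEN) {
      return {
        statusCode: 401,
        body: JSON.stringify({ message: 'Unauthorized' })
      };
    }

    // ...proceed with the cache invalidation
  }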

Migrate Posts resource endpoint

Need to migrate the /api/posts/ endpoint.

  1. Create /api/v2/posts function
  2. "Reflect" existing API
  3. Support filtering by Post ID ?id query param
  4. Test with downstream project(s) / validate all content
  5. Document in README.md

Migrate to Greenwood + AWS Adapter

Since Greenwood can easily serve exclusively API endpoints, and with ProjectEvergreen/greenwood#1142 soon to come, it would be nice to invert the development model here to be standards based instead of owned by Architect.


As its own ticket, perhaps the Greenwood frontend could act as a local-only admin dashboard? And then maybe all content could be hosted in PlanetScale, including Contentful?

AWS SDK version warning

Seeing this in the GitHub Actions log:

  ⚠️ Warning: Found possible AWS SDK version mismatches in Lambda handler code
  The following function requires or imports one or more '@aws-sdk/*' (v3) modules, which is not built into your Lambda's runtime:
  - '@http post /publish' (runtime: 'nodejs16.x')
  Architect does not manage AWS SDK, thus this code may be broken when deployed. See more at: https://arc.codes/aws-sdk-versions

automate and document caching strategy

Since most of this data doesn't change very often, we can aggressively cache API endpoints in CloudFront. It would be good to automate this from the PlanetScale side, though, so that whenever we update the DB, we send an invalidation request to CloudFront (like we do when content in Contentful changes).

Bonus points for being able to tell which table was updated, and update that resource specifically. Worst case scenario would be to just invalidate everything. (which may be desirable since a lot of the content could / might be relational)


Should also document this somewhere.

local database for testing and local development

As the project requires connecting to a live database, it would be nice to use backup data as a way to seed the database for local development. Thinking we could just create a basic MySQL Docker container that could be spun up, or maybe just use SQLite?


Adding Prisma Studio in #52; wondering if that could be useful, either through MySQL or shadow DBs? Basically, could someone develop without needing DB access?
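
For the Docker route, a one-liner sketch (the container name and password are placeholders):

  # spin up a throwaway MySQL 8 instance, to be seeded from backup data
  docker run --name api-local-db -e MYSQL_ROOT_PASSWORD=localdev -p 3306:3306 -d mysql:8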

deprecate API server instance

Once all APIs are migrated and all projects are running on /api/v2 endpoints (#31), we should be able to decommission the currently running PHP server.

Would also be nice to publish and archive the (now) old API repo to this GitHub organization from Bitbucket. (will also want to rotate database keys as a precaution)


Maybe just stop the instance for now in case there is something worth saving?

main GitHub action not deploying to production

I think due to #30, our GitHub Action for the main workflow seems to be deploying to stage (the default), not production. I noticed this after #25, when I didn't see any new Lambda functions spinning up; looking at the GitHub Actions logs, main was building for staging.
https://github.com/AnalogStudiosRI/api/actions/runs/6983626787/job/19005072194

⚬ Deploy Initializing deployment
  | Stack ... ApiStaging
  | Bucket .. api-cfn-deployments-e9ce0

I think this is due to needing an extra -- when passing args through npm:

# before (current)
$ npm run arc deploy --production

# after
$ npm run arc deploy -- --production

add proper caching and cache busting strategy

Right now all the data is set to not cache, which means all hits through CloudFront will be a miss. What I would like is to:

  1. Have long-lived cache at the origin layer
  2. Upon publish in Contentful, clear that cache programmatically in CloudFront
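
For item 1, a sketch of what a long-lived cache header from an endpoint handler might look like (the max-age value is arbitrary):

  export async function handler() {
    const data = []; // hypothetical resource data fetched above

    return {
      statusCode: 200,
      headers: {
        'content-type': 'application/json',
        // one week; CloudFront serves hits until a publish invalidates the path
        'cache-control': 'public, max-age=604800'
      },
      body: JSON.stringify(data)
    };
  }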

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. 📊📈🎉

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google ❤️ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.