s3_storer's Introduction

S3 storer

Node app that receives a set of keys and URLs, stores the contents of the URLs on S3, and returns the set of keys with S3 (or CloudFront) URLs.

Codeship Status for inviso-org/s3_storer

API Usage

The API will return 200 OK, but errors may still occur during the request. The reason is that we start sending data to the client right away, to keep the connection open and stop Heroku from killing the request. Whether any of the URLs fail is only known at a later point in time, and that status is serialized in the JSON response. It will be either "ok", "error", or "timeout".

API request sanity validations will return 422, as they happen at the very beginning of each request.

A request to the API behaves in a transactional manner: either all given URLs are successfully uploaded, or none will be stored on S3. If some files fail, we will try to clean up any files already uploaded to S3.

Important - security of the API

In production all requests must be sent over HTTPS, since credentials are passed around. Please see the ENV variable REQUIRE_SSL, which should be true in production, and BEHIND_PROXY if you are for instance deploying on Heroku. You should also set BASIC_AUTH_USER and BASIC_AUTH_PASSWORD to restrict access to your API.
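
For example, if you deploy to Heroku, a configuration along these lines should work (the user name and password values below are placeholders, not defaults):

# placeholder credentials - choose your own
heroku config:set REQUIRE_SSL=true BEHIND_PROXY=true
heroku config:set BASIC_AUTH_USER=storer BASIC_AUTH_PASSWORD=some-long-random-secret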

POST to /store

  • Give key-value pairs of URLs to download, store on S3 and return URLs for.
  • Available options
    • awsAccessKeyId AWS access key
    • awsSecretAccessKey AWS access secret
    • s3Bucket AWS bucket you want files uploaded to
    • s3Region AWS region you want files uploaded to
    • cloudfrontHost AWS CloudFront host, if any. Optional.
  • Available HTTP headers
    • Tag-Logs-With A string you want this request to be tagged with. For instance iweb prod asset-123 will log as [iweb] [prod] [asset-123]
{
  "urls": {
    "thumb": "http://www.filepicker.com/api/XXX/convert/thumb",
    "monitor": "http://www.filepicker.com/api/XXX/convert/monitor"
  },
  "options": {
    "awsAccessKeyId": "xxx",
    "awsSecretAccessKey": "xxx",
    "s3Bucket": "xxx",
    "s3Region": "xxx",
    "cloudfrontHost": "xxx" # Optional
  }
}
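
As a sketch, a request could be sent with curl roughly like this (the host name is a placeholder, and basic auth plus the Tag-Logs-With header are only needed if you have configured them):

# replace the host with wherever you deployed s3_storer
curl -sS -X POST https://your-s3-storer.example.com/store \
  -u "$BASIC_AUTH_USER:$BASIC_AUTH_PASSWORD" \
  -H "Content-Type: application/json" \
  -H "Tag-Logs-With: iweb prod asset-123" \
  -d '{
        "urls": { "thumb": "http://www.filepicker.com/api/XXX/convert/thumb" },
        "options": {
          "awsAccessKeyId": "xxx",
          "awsSecretAccessKey": "xxx",
          "s3Bucket": "xxx",
          "s3Region": "xxx"
        }
      }'

Remember that the 200 status arrives right away, while the final JSON with the real status arrives once all URLs have been processed, so read the body until the connection closes.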

RESPONSE - success

  • Status is ok
  • All URLs are swapped out for stored URLs.
{
  "status": "ok",
  "urls": {
    "thumb": "http://s3.com/sha1-of-thumb-url",
    "monitor": "http://s3.com/sha1-of-monitor-url"
  }
}

RESPONSE - failure from server we GET data from

  • Status is error
  • Keys with a null value were OK, but have been cleaned from S3 because other URLs failed.
  • Keys with an object include information about the failed response.
{
  "status": "error",
  "urls": {
    "thumb": null,
    "monitor": {
      "downloadResponse": {
        "status": 502,
        "body": "Bad Gateway"
      }
    }
  }
}

RESPONSE - failure from S3

  • Status is error
  • Keys with a null value were OK, but have been cleaned from S3 because other URLs failed.
  • Keys with an object include information about the S3 error.
{
  "status": "error",
  "urls": {
    "thumb": null,
    "monitor": {
      "s3": "Some message or object(!) from s3 when we tried to upload this file"
    }
  }
}

RESPONSE - failure timeout

  • Status is timeout due to the max keep-alive time being exceeded. See lib/middleware/keep_alive.coffee and the ENV variables KEEP_ALIVE_WAIT_SECONDS and KEEP_ALIVE_MAX_ITERATIONS.
  • Any uploads we have already done to S3 will be cleaned up.
{
  "status": "timeout"
}

DELETE to /delete

  • The /delete action is more of a convenience action. Your application's language most likely has an AWS SDK available, and you could use that directly. If you feel it is just as easy to make a DELETE call to the S3 Storer API, feel free to do so.
  • Give array of URLs to delete.
  • Available options
    • awsAccessKeyId AWS access key
    • awsSecretAccessKey AWS access secret
    • s3Bucket AWS bucket you want files deleted from
    • s3Region AWS region of that bucket
  • Available HTTP headers
    • Tag-Logs-With A string you want this request to be tagged with. For instance iweb prod asset-123 will log as [iweb] [prod] [asset-123]
{
  "urls": [
    "http://file.in.your.s3.bucket.com/object1",
    "http://file.in.your.s3.bucket.com/object2"
  ],
  "options": {
    "awsAccessKeyId": "xxx",
    "awsSecretAccessKey": "xxx",
    "s3Bucket": "xxx",
    "s3Region": "xxx",
  }
}
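
Correspondingly, a delete request might be sent roughly like this (again, the host name is a placeholder and basic auth is only needed if configured):

# replace the host with wherever you deployed s3_storer
curl -sS -X DELETE https://your-s3-storer.example.com/delete \
  -u "$BASIC_AUTH_USER:$BASIC_AUTH_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{
        "urls": [
          "http://file.in.your.s3.bucket.com/object1",
          "http://file.in.your.s3.bucket.com/object2"
        ],
        "options": {
          "awsAccessKeyId": "xxx",
          "awsSecretAccessKey": "xxx",
          "s3Bucket": "xxx",
          "s3Region": "xxx"
        }
      }'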

RESPONSE - success

  • Status is ok
{
  "status": "ok"
}

RESPONSE - failure

  • Status is error
{
  "status": "error",
  "description": "Some explantion of the error."
}

Errors aren't likely to happen, as we do not actually check whether the bucket contains the given URLs / objects. We just make a deleteObjects call to S3 and expect S3 to remove the given object keys.

Development

npm install
nodemon --exec coffee bin/www
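
If nodemon and the CoffeeScript compiler are not already available on your machine, installing them globally first is probably the easiest route (assuming the older coffee-script package name; adjust if the project has moved to coffeescript):

# assumption: the legacy coffee-script package; swap for coffeescript if needed
npm install -g nodemon coffee-script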

Tests

Tests are written using Mocha and Chai expect syntax style. We use Sinon for test utilities and SuperTest for integration tests.

Run npm test when you want to run all tests. Run npm run test-unit to only run the unit tests, and npm run test-integration to only run the integration tests. You can also run mocha path/to/test if you want to run a specific test.

Some ENV variables are important in our tests. They all start with TEST_ and you will find examples in .envrc.example. You need to create and configure your own bucket for integration testing.
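
The real variable names live in .envrc.example; purely as a hypothetical illustration, a local .envrc could look something like this (names and values are invented for the example):

# hypothetical names - copy the real ones from .envrc.example
export TEST_AWS_ACCESS_KEY_ID=xxx
export TEST_AWS_SECRET_ACCESS_KEY=xxx
export TEST_S3_BUCKET=my-s3-storer-test-bucket
export TEST_S3_REGION=eu-west-1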

Deployment

It may be deployed on Heroku. Do the normal git push heroku master, or deploy to any other server you feel comfortable with.


s3_storer's Issues

Add delete endpoint for deleting a set of URLs

Clients of this API may use an S3 SDK directly in their native language to delete objects from S3 when the record the objects belong to is removed. This /delete endpoint (or whatever we end up calling it) will just be a convenience endpoint where clients can make a DELETE request with a set of URLs and a bucket.

Maybe something like this:

{
  "urls": [
    "http://s3.com/api/XXX/sha1ofsomefile",
    "http://or.cloudfront.com/sha1ofsomeotherfile"
  ],
  "options": {
    "awsAccessKeyId": "xxx",
    "awsSecretAccessKey": "xxx",
    "s3Bucket": "xxx",
    "s3Region": "xxx"
  }
}

This will simply delegate to S3Client#deleteUrls, which supports deleting an arbitrary number of URLs.
