
aws-lite's Introduction



Build ultra-scalable, database-backed web apps on AWS serverless infrastructure with full local, offline workflows, and more. Full documentation can be found at https://arc.codes

Requirements

Node.js version 14 or later.

Installation

Open your terminal to install arc:

npm i @architect/architect --save-dev

Check the version:

npx arc version

Protip: run arc with no arguments to get help

Work locally

Create a new app:

mkdir testapp
cd testapp
npx arc init

Kick up the local dev server:

npx arc sandbox

Cmd / Ctrl + c exits the sandbox

Deploy to AWS

Deploy the staging stack:

npx arc deploy

Protip: create additional staging stacks with --name

Ship to a production stack:

npx arc deploy --production

Add Architect syntax to your text editor

VS Code

Sublime Text

Vim

Learn more

Head to https://arc.codes to learn more!


Founding team

Amber Costley, Angelina Fabbro, Brian LeRoux, Jen Fong-Adwent, Kristofer Joseph, Kris Borchers, Ryan Block, Spencer Kelley

Special thanks

Pinyao Guo for the Architect GitHub name

aws-lite's People

Contributors

andybee, aspilchen, fedellen, filmaj, metadaddy, ryanblock, tbeseda


aws-lite's Issues

Add support for nodejs streams to relevant S3 operations

Describe the problem underlying the enhancement request

In my app, I upload and download large-ish files (~60MB) to/from S3 regularly from Lambda. Currently, aws-lite returns the entire contents of objects downloaded using GetObject as a Buffer in the response under the Body property. Similarly, when uploading files to S3 using PutObject, currently the only way to do so in aws-lite is to provide a File property pointing to a file path.

Describe the solution you'd like (if any)

It would be rad to support streams in GetObject and PutObject to avoid having to hold the entire S3 object contents in memory. This is especially useful in Lambda, where RAM allocations to Lambda have a direct impact on $$$ spent.

Describe alternative solutions you've considered

I tried using an intermediary file as is supported in aws-lite today for uploading, but it caused an increase in runtime in my lambda as well as a huge uptick in consumed RAM (see this Discord message for details)

Additional context or notes

Here is a gist of my current, AWS JS SDK v3-based S3 download/upload code, which does support streams: https://gist.github.com/filmaj/f4a5f398b9701b34878503051efa8e34
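For reference, here's a minimal sketch of that SDK v3 streaming pattern (bucket, key, and file paths are hypothetical): GetObject's Body is a readable stream that can be piped to disk, and @aws-sdk/lib-storage's Upload accepts a stream for uploads:

import { createReadStream, createWriteStream } from 'node:fs'
import { pipeline } from 'node:stream/promises'
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3'
import { Upload } from '@aws-sdk/lib-storage'

const s3 = new S3Client({})

// Download: pipe the response Body stream straight to disk,
// never holding the whole object in memory
const { Body } = await s3.send(new GetObjectCommand({ Bucket: 'my-bucket', Key: 'big-file.bin' }))
await pipeline(Body, createWriteStream('/tmp/big-file.bin'))

// Upload: stream a file from disk; Upload handles multipart chunking
await new Upload({
  client: s3,
  params: { Bucket: 'my-bucket', Key: 'big-file.bin', Body: createReadStream('/tmp/big-file.bin') },
}).done()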

Ensure `@aws-lite/client` + `@aws-lite/s3` work with S3-compatible APIs

As requested in aws-lite Discord by @metadaddy: @aws-lite/client + @aws-lite/s3 should just work with S3-compatible APIs, including Backblaze B2 (among others).

This will require some refactoring. I'm sure other stuff will arise during discovery, but it's a start!

RFC: compatibility with AWS SDK v2 / v3 client APIs

We've heard folks requesting aws-lite compatibility with AWS SDK v2 and v3 client APIs (example), which I wanted to discuss here in an RFC context. First, what that means in practice (with examples):

aws-lite client API:

import awsLite from '@aws-lite/client'
const aws = await awsLite(options)
await aws.DynamoDB.GetItem(params)

aws-sdk v2 client API:

import AWS from 'aws-sdk'
const dynamo = new AWS.DynamoDB(options)
await dynamo.getItem(params).promise()
// legacy callback style
dynamo.getItem(params, callback)

@aws-sdk/* v3 client API:

import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb"
const client = new DynamoDBClient(options)
const command = new GetItemCommand(params)
await client.send(command)

So the thinking here would be that to ease the transition from SDK v2 or v3 to aws-lite, we should enable something that looks a bit like this:

import awsLite from '@aws-lite/client'
const aws = await awsLite(options)

// v2-style async / await
await aws.DynamoDB.GetItem(params).promise() // Note the method casing!
// v2-style callback
aws.DynamoDB.GetItem(params, callback)

// v3-style
const client = new aws.DynamoDBClient(options)
const command = new aws.GetItemCommand(params)
await client.send(command)

Current state of AWS SDK compatibility / commonality

As we implement various AWS service APIs, our general approach with aws-lite is to use AWS's method and property names (and their casing) in priority of: service API, AWS SDK v3, then SDK v2. However, these semantics may differ based on the various scenarios we see in method naming, request shapes, response shapes, and errors.

Errors are a bit out of scope for the topic at hand (especially as they differ wildly between SDKs v2 and v3), but I'll cover each of the others below.

Method names

For method names, we generally follow the service API style, which is broadly Pascal-cased in AWS-land. For example, for the S3 list objects v2 method, aws-lite uses ListObjectsV2.

Request shapes

aws-lite request shapes generally follow the service API style as well, which, as it happens, is usually the same as what's found in AWS SDKs v2 and v3. For example, let's look at the main properties passed to S3's get object method:

Note: even though SDK v2 uses camel-cased listObjectsV2, it still Pascal-cases Bucket and Key.

Sometimes aws-lite uses different request properties when we've found a usability improvement that can be made. For example, some methods allow passing a boolean pagination (lowercased) property to enable recursive pagination of results – this is specific to aws-lite.

Another example is the addition of the File property in S3's put object method, allowing authors to specify a file on disk to be published to S3 instead of having to load a buffer themselves. This is not functionality found in the AWS SDKs.
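As a hedged illustration of that File property (bucket, key, and path are hypothetical):

import awsLite from '@aws-lite/client'

const aws = await awsLite({ plugins: [ '@aws-lite/s3' ] })

// aws-lite-specific: point at a file on disk rather than loading a buffer yourself
await aws.S3.PutObject({
  Bucket: 'my-bucket',
  Key: 'logo.png',
  File: './assets/logo.png',
})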

Response shapes

aws-lite response shapes generally adhere to AWS SDK v3's interpolations and mutations. (Related: v3 generally hews quite close to v2 in my experience, so that's good.) We stick close to v3 specifically here in part to aid in interop moving into the future, in part to draft on v3's types (see more about aws-lite types), and in part because v3's changes are usually pretty sane – but not always. For example, let's look again at the response of S3's get object method:

  • S3 API: file size in bytes is sent in the Content-Length header, while the object contents are sent in the http body (reference)
  • SDK v3: file size in bytes is interpolated into the ContentLength property, while the object contents are a Body property, which is a StreamingBlobPayloadOutput (something a lot of folks really dislike, reference)
  • SDK v2: file size in bytes is interpolated into the ContentLength property, while the object contents are a Body property, which is a standard JS buffer

In this case aws-lite does the same Content-Length header to ContentLength property interpolation, while (currently) opting to pass through the Body property as a standard JS buffer (unless the object is JSON or XML, in which case it is automatically parsed). We opted to differ from v2 / 3 because that's what we thought the best user experience would be. SDKs are, ultimately, for humans.
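Concretely, a sketch of what that looks like to an aws-lite caller (bucket and key names are hypothetical):

import awsLite from '@aws-lite/client'
const aws = await awsLite({ plugins: [ '@aws-lite/s3' ] })

// ContentLength is interpolated from the Content-Length header;
// Body arrives as a standard JS Buffer (JSON / XML objects are parsed instead)
const { ContentLength, Body } = await aws.S3.GetObject({ Bucket: 'my-bucket', Key: 'report.bin' })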

To shim or not to shim

So now we're at the crux of the discussion: do we shim v2 and/or v3 compatibility as in the example above, or not?

It wouldn't necessarily be hard to do; allow folks to opt into a .promise() method and some callback support (v2), or to append each method with Command to some redirected intermediate method (v3). But the work isn't done there for anyone who uses the shim.

Often, your migration will have gone smoothly, but not necessarily! Perhaps the method casing changed (as in listObjectsV2 → ListObjectsV2). Or the request params or response shape may be slightly different (aws-lite provides a JS buffer body, SDK v3 provides a readable stream). Moreover, none of this accounts for the various irreconcilable differences in error shapes AWS made between v2 / 3; aws-lite does its best to bridge the two, but the responsible move will always be to run this new code in staging before shipping to production.

All this is to say: it would be irresponsible to assume aws-lite will flawlessly emulate SDK v2 and v3 semantics 100% of the time. The only responsible approach is to implement, then validate your changes.

Our guess is that when doing so you'll probably prefer working with aws-lite's much simpler client API. So while it may be straightforward to shim the basic API semantics, I'd bring this back around and ask: given the above realities, is there meaningful value in shimming v2 / 3 client APIs for aws-lite?

`removeUndefinedVariables` is not passed to AWS vendor'ed marshaller

Describe the issue

If you pass an item to the DynamoDB client where a property value is undefined, the following error is thrown:

@aws-lite/client: DynamoDB.BatchWriteItem: Pass options.removeUndefinedValues=true to remove undefined values from map/array/set.

However, this option doesn't appear to be documented anywhere. In the docs for arc.tables(), an awsjsonMarshall option is documented, which traces through to the aws-lite client constructor, but it doesn't appear to have any effect.

Expected behavior

Setting up aws-lite as follows:

import awsLite from '@aws-lite/client'

let aws = await awsLite({
  plugins: [ '@aws-lite/dynamodb' ],
  awsjsonMarshall: {
    removeUndefinedValues: true,
  },
});

would allow:

aws.DynamoDB.BatchWriteItem({
  RequestItems: {
    foo: [
      {
        PutRequest: {
          Item: {
            PK: 1,
            foo: undefined,
          },
        },
      },
    ],
  }
});

without throwing an error about undefined values.
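Until an option like this is plumbed through, one hedged workaround is stripping undefined values before calling the client; for plain data, a JSON round-trip does this (note it turns undefined array elements into null):

// JSON.stringify omits object properties whose value is undefined
const clean = obj => JSON.parse(JSON.stringify(obj))

aws.DynamoDB.BatchWriteItem({
  RequestItems: {
    foo: [ { PutRequest: { Item: clean({ PK: 1, foo: undefined }) } } ],
  },
})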

Steps to reproduce

  1. Attempt to define removeUndefinedValues AWS JSON marshaller option on aws-lite
  2. Attempt to put an item to DynamoDB with an undefined value with aws-lite

Platform / version

  • Node version: 20.14.0
  • Package manager version: 10.7.0

How urgent do you feel this bug is?

P2

Additional context

No response

License field is missing on `@aws-lite/*-types` packages

Describe the issue

The main package is licensed under the Apache-2.0 license:

"license": "Apache-2.0",

As are the plugins, for example this one:

"license": "Apache-2.0",

However, the *-types packages do not have the license annotation: https://github.com/architect/aws-lite/blob/6031039be3cfb2085ebc389a1a61453b30702f35/plugins/apigateway/types/package.json

I guess it is Apache-2.0, but that is not reflected in the npm metadata.


Expected behavior

All packages have license information

Steps to reproduce

n/a

Platform / version

n/a

How urgent do you feel this bug is?

Not sure in general, but missing license information prevents using this package in projects with strict license linters.

Additional context

No response

Improve typings comparing to the original SDK version

Describe the problem underlying the enhancement request

There is a known issue on AWS SDK v3 typings: aws/aws-sdk-js-v3#1613

Marking everything as possibly undefined undermines the whole point of typings in the first place.

Since this package tries to reuse the upstream type definitions, it suffers from the same issue.

Describe the solution you'd like (if any)

The aws-lite project could provide enhanced typings for the most common use-cases

Describe alternative solutions you've considered

For now I am just using non-null assertion ! everywhere

Additional context or notes

This is not so much a feature request as a suggestion to take advantage of an opportunity.

0.21.6: The "chunk" argument must be of type string or an instance of Buffer or Uint8Array. Received an instance of Object

Describe the issue

We wanted to try the new aws-lite version for #127. However, we can no longer arc deploy. The stacktrace:

⚬ Hydrate Hydrating dependencies in 1 path
✓ Hydrate Hydrated src/http/get-index/
  | npm i --omit=dev: added 20 packages, and audited 21 packages in 607ms
  | npm i --omit=dev: 3 packages are looking for funding
  | npm i --omit=dev: run `npm fund` for details
  | npm i --omit=dev: found 0 vulnerabilities
✓ Hydrate Successfully hydrated dependencies
⚬ Deploy Initializing deployment
  | Stack ... <redacted>Staging
  | Bucket .. <redacted>
deploy failed! The "chunk" argument must be of type string or an instance of Buffer or Uint8Array. Received an instance of Object
TypeError [ERR_INVALID_ARG_TYPE]: The "chunk" argument must be of type string or an instance of Buffer or Uint8Array. Received an instance of Object
    at new NodeError (node:internal/errors:406:5)
    at write_ (node:_http_outgoing:879:11)
    at ClientRequest.end (node:_http_outgoing:1030:5)
    at <redacted>/node_modules/@aws-lite/client/src/request/request.js:262:14
    at new Promise (<anonymous>)
    at call (<redacted>/node_modules/@aws-lite/client/src/request/request.js:71:10)
    at request (<redacted>/node_modules/@aws-lite/client/src/request/request.js:33:26)
    at makeRequest (<redacted>/node_modules/@aws-lite/client/src/request/index.js:117:16)
    at _request (<redacted>/node_modules/@aws-lite/client/src/request/index.js:12:16)
    at Object.PutObject (<redacted>/node_modules/@aws-lite/client/src/client-factory.js:144:34)

Expected behavior

No error. ;-)

Steps to reproduce

  1. arc deploy with aws-lite @0.21.6

Platform / version

  • @architect/architect: 11.0.12
  • @architect/functions: 8.1.5
  • @aws-lite/client: 0.21.6

How urgent do you feel this bug is?

P0 (emergency)

Additional context

No response

DynamoDB.Query: socket hang up

Describe the issue

Once deployed to AWS, I will get sporadic errors. Looking at the logs, I see something like this every time:

{
    "errorType": "Error",
    "errorMessage": "@aws-lite/client: DynamoDB.Query: socket hang up",
    "message": "@aws-lite/client: DynamoDB.Query: socket hang up",
    "service": "dynamodb",
    "property": "DynamoDB",
    "awsDoc": "https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html",
    "readme": "https://aws-lite.org/services/dynamodb#query",
    "rawStack": "Error: socket hang up\n    at connResetException (node:internal/errors:720:14)\n    at TLSSocket.socketCloseListener (node:_http_client:474:25)\n    at TLSSocket.emit (node:events:529:35)\n    at node:net:350:12\n    at TCP.done (node:_tls_wrap:657:7)",
    "host": "dynamodb.us-west-2.amazonaws.com",
    "protocol": "https",
    "time": "2024-05-07T17:24:28.784Z",
    "stack": [
        "Error: @aws-lite/client: DynamoDB.Query: socket hang up",
        "    at errorHandler (/var/task/node_modules/@aws-lite/client/src/error.js:13:46)",
        "    at Query (/var/task/node_modules/@aws-lite/client/src/client-factory.js:167:17)"
    ]
}

Expected behavior

I would expect it not to error. :)

Steps to reproduce

Ok, I know this is useless, but it's so sporadic I've been having zero success finding out how to reliably reproduce. It happens only occasionally, just enough to be annoying.

I'm posting this in the hopes that it's something simple or something others may have come across too.

Platform / version

  • Node version: nodejs18.x

How urgent do you feel this bug is?

P2

Additional context

No response

Strange CloudFront behavior

Describe the issue

When making subsequent CloudFront requests, I see very strange behavior that I can't easily trace to aws-lite. Here are two scenarios outlined:

  • Creating two distributions back to back
  • Creating a new distribution after invalidating another distribution

Expected behavior

I expect to be able to run any number of valid CloudFront requests serially, so long as they are not causing rate limiting issues (see https://docs.aws.amazon.com/cloudfront/latest/APIReference/CommonErrors.html error 400).

Steps to reproduce

Creating two distributions back to back

Repro steps:

  • Call Cloudfront.CreateDistribution() with a valid distribution configuration
  • Call it a second time immediately after with a separate, but also valid distribution configuration
  • Usually this produces a 408 error as below:
[aws-lite] Request: {
  time: '2024-01-22T19:09:06.756Z',
  service: 'cloudfront',
  method: 'POST',
  url: 'https://cloudfront.amazonaws.com/2020-05-31/distribution',
  headers: {
    'content-type': 'application/xml',
    Host: 'cloudfront.amazonaws.com',
    'Content-Length': 2437,
    'X-Amz-Date': '20240122T190905Z',
    Authorization: 'AWS4-HMAC-SHA256 Credential=AKIAIAE...'
  },
  body: '...'
} 

[aws-lite] Response: {
  time: '2024-01-22T19:11:16.920Z',
  statusCode: 408,
  headers: {
    'content-length': '0',
    date: 'Mon, 22 Jan 2024 19:10:11 GMT',
    connection: 'close'
  },
  body: '<no body>'
} 

Error: @aws-lite/client: CloudFront.CreateDistribution: unknown error
    at errorHandler (/project-path/node_modules/@aws-lite/client/src/error.js:13:46)
    at Object.CreateDistribution (/project-path/node_modules/@aws-lite/client/src/client-factory.js:207:15)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  statusCode: 408,
  headers: {
    'content-length': '0',
    date: 'Mon, 22 Jan 2024 19:10:11 GMT',
    connection: 'close'
  },
  service: 'cloudfront',
  property: 'CloudFront',
  awsDoc: 'https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateDistribution.html',
  readme: 'https://aws-lite.org/services/cloudfront#createdistribution'
}

Other times, we'll just get a raw HTTP error:

[aws-lite] Request: {
  time: '2024-01-22T19:18:15.109Z',
  service: 'cloudfront',
  method: 'POST',
  url: 'https://cloudfront.amazonaws.com/2020-05-31/distribution',
  headers: {
    'content-type': 'application/xml',
    Host: 'cloudfront.amazonaws.com',
    'Content-Length': 2437,
    'X-Amz-Date': '20240122T191744Z',
    Authorization: 'AWS4-HMAC-SHA256 Credential=AKIAIAE...'
  },
  body: '...'
} 

[aws-lite] HTTP error: Error: read ECONNRESET
    at TLSWrap.onStreamRead (node:internal/stream_base_commons:217:20) {
  errno: -54,
  code: 'ECONNRESET',
  syscall: 'read'
}
Error: @aws-lite/client: CloudFront.CreateDistribution: read ECONNRESET
    at errorHandler (/project-path/node_modules/@aws-lite/client/src/error.js:13:46)
    at Object.CreateDistribution (/project-path/node_modules/@aws-lite/client/src/client-factory.js:207:15)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  service: 'cloudfront',
  property: 'CloudFront',
  awsDoc: 'https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateDistribution.html',
  readme: 'https://aws-lite.org/services/cloudfront#createdistribution',
  rawStack: 'Error: read ECONNRESET\n' +
    '    at TLSWrap.onStreamRead (node:internal/stream_base_commons:217:20)',
  host: 'cloudfront.amazonaws.com',
  protocol: 'https'
}

When spaced out enough, this does not appear to be an issue.

Creating a new distribution after invalidating another distribution

Repro steps:

  • Call Cloudfront.CreateInvalidation() with a valid invalidation payload
  • Call Cloudfront.CreateDistribution() with a valid distribution configuration right after
  • This produces a signature error(!?) as below:
@aws-lite/client: CloudFront.CreateDistribution: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
    at errorHandler (/project-path/node_modules/@aws-lite/client/src/error.js:13:46)
    at Object.CreateDistribution (/project-path/node_modules/@aws-lite/client/src/client-factory.js:207:15)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  statusCode: 403,
  headers: {
    'x-amzn-requestid': 'fa2230ee-e308-4f43-880e-64e47d74dfeb',
    'content-type': 'text/xml',
    'content-length': '433',
    date: 'Mon, 22 Jan 2024 19:47:29 GMT'
  },
  Type: 'Sender',
  code: 'SignatureDoesNotMatch',
  service: 'cloudfront',
  property: 'CloudFront',
  awsDoc: 'https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateDistribution.html',
  readme: 'https://aws-lite.org/services/cloudfront#createdistribution'
}

This does not occur when CreateDistribution is called without calling CreateInvalidation prior; similarly, calling CreateInvalidation multiple times in a row does not present this issue.

Platform / version

  • OS + version: macOS 14.2.1
  • Node version: Node.js 20.10
  • aws-lite version: 0.14.1
  • CloudFront plugin version: 0.0.6

How urgent do you feel this bug is?

P3

Additional context

No response

aws-lite/client may not discover all plugins

Describe the issue

When the dependency graph is broad and many packages use the same @aws-lite/* plugins, one instance may not pick up all installed plugins.
This is a side effect of the Node.js dependency resolution algorithm and the dedupe strategy.

Expected behavior

Loading @aws-lite/client should include all plugins specified in that package's manifest, even when the plugins are deduped across modules.

Steps to reproduce

It's hard to give discrete steps to repro.

Here's some terminal output running Arc v11@RC-5 when I deploy a large Arc + Enhance application:

> arc deploy --production

         App ⌁ tbeseda-com
      Region ⌁ us-east-1
     Profile ⌁ Not configured / default
     Version ⌁ Architect 11.0.0-RC.5
         cwd ⌁ /Users/tbeseda/dev/tbeseda/tbeseda-com

deploy failed! Cannot read properties of undefined (reading 'DescribeStackResources')
TypeError: Cannot read properties of undefined (reading 'DescribeStackResources')
    at getResources (~/tbeseda-com/node_modules/@architect/deploy/src/sam/compat/index.js:77:22)
    at Array.getApiType (~/tbeseda-com/node_modules/@architect/deploy/src/sam/compat/index.js:19:9)
    at runSeries (~/tbeseda-com/node_modules/run-series/index.js:23:33)
    at compat (~/tbeseda-com/node_modules/@architect/deploy/src/sam/compat/index.js:11:3)
    at compatCheck (~/tbeseda-com/node_modules/@architect/deploy/src/sam/index.js:86:7)
    at each (~/tbeseda-com/node_modules/run-waterfall/index.js:22:22)
    at Array.createFiles (~/tbeseda-com/node_modules/@architect/deploy/src/sam/index.js:81:12)
    at runWaterfall (~/tbeseda-com/node_modules/run-waterfall/index.js:27:13)
    at samDeploy (~/tbeseda-com/node_modules/@architect/deploy/src/sam/index.js:73:3)
    at ~/tbeseda-com/node_modules/@architect/deploy/index.js:67:11

In this case, aws.cloudformation is undefined in @architect/deploy.

But @aws-lite/cloudformation is definitely in @architect/deploy's package.json. However, when npm install ran, it deduped the cloudformation plugin, so it doesn't actually live in node_modules under the @architect/deploy directory in my project's node_modules.

If I patch { ... plugins: ["@aws-lite/cloudformation"]} into the @aws-lite/client in deploy, I get another error related to ssm being undefined.

So I used npm ls to see where @aws-lite/ssm is:
(so you don't need to parse this list: ssm is in @architect/functions and listed as deduped in @architect/deploy)

npm ls @aws-lite/ssm
[email protected] /Users/tbeseda/dev/tbeseda/tbeseda-com
├─┬ @architect/[email protected]
│ ├─┬ @architect/[email protected]
│ │ └─┬ @architect/[email protected]
│ │   └── @aws-lite/[email protected] deduped
│ ├─┬ @architect/[email protected]
│ │ ├─┬ @architect/[email protected]
│ │ │ └── @aws-lite/[email protected] deduped
│ │ ├─┬ @architect/[email protected]
│ │ │ └─┬ @architect/[email protected]
│ │ │   └── @aws-lite/[email protected] deduped
│ │ └── @aws-lite/[email protected] deduped
│ ├─┬ @architect/[email protected]
│ │ ├─┬ @architect/[email protected]
│ │ │ └── @aws-lite/[email protected] deduped
│ │ └── @aws-lite/[email protected] deduped
│ ├─┬ @architect/[email protected]
│ │ ├─┬ @architect/[email protected]
│ │ │ └── @aws-lite/[email protected] deduped
│ │ └── @aws-lite/[email protected] deduped
│ ├─┬ @architect/[email protected]
│ │ └─┬ @architect/[email protected]
│ │   └── @aws-lite/[email protected] deduped
│ ├─┬ @architect/[email protected]
│ │ └── @aws-lite/[email protected] deduped
│ ├─┬ @architect/[email protected]
│ │ └─┬ @architect/[email protected]
│ │   └── @aws-lite/[email protected] deduped
│ └─┬ @architect/[email protected]
│   └─┬ @architect/[email protected]
│     └── @aws-lite/[email protected] deduped
├─┬ @architect/[email protected]
│ └── @aws-lite/[email protected]
└─┬ @enhance/[email protected]
  └─┬ @begin/[email protected]
    └── @aws-lite/[email protected] deduped

This is why @architect/deploy's instance of @aws-lite/client never loads any plugins :|

Platform / version

  • OS + version: macOS
  • Node version: v18 and v20
  • Package manager version: npm v10.2.3

How urgent do you feel this bug is?

P1

Additional context

Looks like the @aws-lite/client's clientFactory reads the parent package's node_modules

let mods = await readdir(nodeModulesDir)

It then searches for official ^@aws-lite/ plugins and third-party ones, excluding -types$ packages, to load.

Maybe reading the parent's package.json's "dependencies": {...} in a similar way would be safer?
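A rough sketch of what that might look like (parentDir is hypothetical here, standing in for wherever the client resolves the consuming package):

import { readFile } from 'node:fs/promises'
import { join } from 'node:path'

const parentDir = process.cwd() // hypothetical: the consuming package's root

// Derive plugin candidates from declared dependencies
// rather than scanning whatever node_modules happens to be on disk
const pkg = JSON.parse(await readFile(join(parentDir, 'package.json'), 'utf8'))
const plugins = Object.keys(pkg.dependencies || {})
  .filter(name => /^(@aws-lite\/|aws-lite-plugin-)/.test(name) && !/-types$/.test(name))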

(Mostly) conform to WHATWG URL standard component names

aws-lite affords a lot of configurability in how requests are directed. Per #82, the addition of a new endpoint configuration property can confuse a few bits, especially since we already have a configuration property called endpointPrefix that is entirely different.

The intended target for all this configuration is Node.js HTTP request params, which largely conform to the terminology outlined in the WHATWG URL spec. Moreover, I'll be using the WHATWG URL parser for the new endpoint property, so it feels appropriate to use this opportunity to clean up any nomenclature confusion.

As such, moving forward I'm going to shift the project to mostly (see below) conform to the WHATWG URL standard terminology (see also: https://url.spec.whatwg.org/#example-url-components).

This will represent a breaking change, so folks should be aware and adjust their plugins as necessary when installing aws-lite >= 0.15.

Changes will be denoted in bold, with notes below.

URL-related client configuration options

Current         Closest WHATWG name   New
endpointPrefix  path                  pathPrefix
host            host                  host
port            port                  port
protocol        protocol, scheme      protocol
(none)          url                   endpoint (new)
(none)          url                   url (alias to endpoint)

Technically pathPrefix is just part of the overall path, however it serves to prefix request paths; while it's not a URL-spec term, I feel it's accurately descriptive for how this project functions.

We will add endpoint, a non-standard URL-spec term for improved interop with AWS SDK; in the spirit of this issue, endpoint will also have the alias url, since that's what it is.

In URL spec land, for https://aws-lite.org, https is the scheme, while https: is the protocol (i.e. the protocol is the scheme followed by U+003A, :). This distinction feels a bit precious for our purposes, so in the interest of simplicity and adhering to Postel's principle, I'm going to continue to call this parameter protocol, and allow it to accept any of the following four forms: http, http:, https, https:.

URL-related request params

Current   Closest WHATWG name   New
endpoint  path                  path
query     query                 query

For pretty obvious reasons, the endpoint request parameter no longer makes sense given the above scope of work, and will now be superseded by the path parameter.

As is currently the case, individual requests can accept overriding configuration options. Thus, this example will remain the expected behavior:

const aws = await awsLite({ endpoint: 'foo.bar' })
await aws.DynamoDB.GetItem({ ...params, endpoint: 'fiz.buz' })
// Request will be made to `fiz.buz`

Is there a way to mock requests based on their incoming params, rather than their order?

Describe the problem underlying the enhancement request

I'm using aws-lite's testing utilities to mock requests in my jest test environment. I have code that is requesting s3 using aws-lite across a few different callsites, and I want to build a robust simulation of an s3 backend without needing to think about the order of the s3 calls being made by the library.

Currently, it seems like the testing utilities only support either a single mocked response for S3.GetObject, or an array of responses which will return mocks in the order requests are made: awsLite.testing.mock('S3.GetObject', [{ Body: 'data' }, { Body: 'data2' }]).

This means that my tests are coupled to the precise order that my code hits s3, which makes it difficult to write reliable non-brittle tests.

Describe the solution you'd like (if any)

Libraries like aws-sdk-mock and nock allow introspection of the incoming params like Key and Bucket which allows for a fully-simulated bucket and isn't tied to request order. This means that tests are much cleaner to read and write because the intent is clear. It's also easier to throw an error when a request is made in test that is not expected, since we can match requests with mocked responses.
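One hypothetical shape for this – not the current aws-lite testing API – would be letting mock accept a responder function over the incoming params:

import awsLite from '@aws-lite/client'

// Hypothetical API sketch: respond based on params, not call order
awsLite.testing.mock('S3.GetObject', ({ Bucket, Key }) => {
  if (Bucket === 'my-bucket' && Key === 'data.txt') return { Body: 'data' }
  throw new Error(`Unexpected request: ${Bucket}/${Key}`)
})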

Alternatively, I'd be ok if aws-lite allowed aws-sdk-mock to take control if it's used. Currently it seems like I'm unable to use aws-lite alongside aws-sdk-mock for these use cases, so I'm forced to switch away from aws-lite entirely :(

Describe alternative solutions you've considered

The only alternatives I can see right now would be avoiding aws-lite, or writing tests in a way that feels brittle.

Additional context or notes

No response

RFC: explicit plugin loading by default

Early on in aws-lite I opted to pursue a path where, by default, we'd attempt to automatically load all available plugins. This was in large part due to the overwrought semantics of SDK v3, and wanting to provide a tool that, among other things, just works.

Under the hood, aws-lite makes a best-effort search for any installed @aws-lite/* or aws-lite-plugin-* modules in the most relevant possible node_modules dir the client is aware of. While this has largely been a simple and reliable way to use aws-lite, it has some drawbacks.

First, as we've seen, if your package manager doesn't properly flatten the dependency tree, plugins may go missing from the autoloader's search. While I think we've got that issue cornered, this approach also creates a specific challenge for Architect's Lambda dependency treeshaker:

Assume an Architect project generating Lambdas with the following root package.json file:

{
  "dependencies": {
    "@aws-lite/client": "latest",
    "@aws-lite/dynamodb": "latest"
  }
}

Now assume the following Lambda handler:

import awsLite from '@aws-lite/client'
export default async function handler () {
  const aws = await awsLite()
  await aws.DynamoDB.GetItem({ params })
}

In the above example, aws-lite would successfully autoload the DynamoDB plugin when running locally, but Arc's treeshaker would not know that that handler needs @aws-lite/dynamodb, so the code would be broken in production.

It could be argued this is an Architect problem, per se, and not an aws-lite problem – but the above scenario would presumably also hold if one bundled the handler code example.

So, while autoloading plugins is a nice and ergonomic way to get started with aws-lite, in production it may present significant issues that we need to endeavor to resolve.

Assuming we make autoloading plugins an opt-in and not an opt-out, as things stand today there are a handful of potential implementation approaches we can take:

1. Register plugins via import / require

Upon import or require, each plugin proactively registers itself with @aws-lite/client behind the scenes:

import awsLite from '@aws-lite/client'
import '@aws-lite/dynamodb' // DynamoDB is now registered with the client
const aws = await awsLite()
await aws.DynamoDB.GetItem({ params })

This approach could get a bit funky if you wanted to instantiate a second client with different credentials and different plugins, as operations in global scope that occur upon import would not re-run upon a second import. (In my prototyping, I couldn't get auto-registration working when running a second import statement, but perhaps there are ways to do that that I'm not currently aware of.) Example:

import awsLite from '@aws-lite/client'

// First client
import '@aws-lite/dynamodb' // DynamoDB is now registered with the first client
import '@aws-lite/lambda' // Lambda is now registered with the first client
const clientOne = await awsLite({ credentials: $foo })

// Second client
import '@aws-lite/dynamodb' // Depending on implementation: either nothing is registered with the second client, or both DynamoDB and Lambda are registered; neither scenario is desired
const clientTwo = await awsLite({ credentials: $bar })

2. Pass imported plugins

Pretty explicit, traditional, and verbose – probably not very surprising, either:

import awsLite from '@aws-lite/client'
import dynamodb from '@aws-lite/dynamodb'
const aws = await awsLite({ plugins: [ dynamodb ] })
await aws.DynamoDB.GetItem({ params })

And if you want a second client with a second set of plugins, just pass them along:

import awsLite from '@aws-lite/client'
import dynamodb from '@aws-lite/dynamodb'
import lambda from '@aws-lite/lambda'

const clientOne = await awsLite({ plugins: [ dynamodb ] })
const clientTwo = await awsLite({ plugins: [ dynamodb, lambda ] })

3. Pass strings

How the plugins param currently works! It's nice and light, but sadly it doesn't work well with Lambda treeshaking, so I'm probably willing to eliminate it on that basis alone. Presented here just to cover the example:

import awsLite from '@aws-lite/client'
const aws = await awsLite({ plugins: [ '@aws-lite/dynamodb' ] })

If you have ideas for other approaches, please let me know – otherwise, looking forward to your thoughts and feedback!

Add pagination support for `cursor` / `token` arrays for APIs that paginate with multiple tokens

Some AWS APIs, such as Route 53's ListResourceRecordSets, paginate with multiple cursors and tokens. We need to add support for cursor / token arrays to support such APIs. An example:

const ListResourceRecordSetsPaginator = {
  cursor: [ 'name', 'type' ],
  token: [ 'NextRecordName', 'NextRecordType' ],
  accumulator: 'ResourceRecordSets.ResourceRecordSet', // XML arcana
  type: 'query',
}

These cursor / token arrays must behave as ordered, corresponding tuples, and must have the same number of values.
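For illustration, a small sketch of how those ordered tuples would drive the next page's request (using the property names from the example above):

// Build the next request's cursor params from the corresponding
// token values in the previous response (index i of cursor pairs
// with index i of token)
function nextCursorParams (response, { cursor, token }) {
  const params = {}
  cursor.forEach((cursorKey, i) => {
    if (response[token[i]] !== undefined) params[cursorKey] = response[token[i]]
  })
  return params // e.g. { name: NextRecordName, type: NextRecordType }
}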

[RFC] Add support for AWS Standard Credential Provider Chain

Recommend implementing the standard credential provider chain as per AWS SDK standards. The providers have an order of precedence, and support refresh tokens for federation, operating in containers, and EC2 assume role.

Adding these would provide consistent experiences across runtime environments, and provide the ability to leverage aws-lite for specific parts of an application (strangler pattern) without having to change or amend the application's credential provider.

https://docs.aws.amazon.com/sdkref/latest/guide/standardized-credentials.html
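As a hedged sketch of the precedence idea (provider names and shapes here are illustrative, not aws-lite's internals):

// Try providers in order of precedence; each returns creds or a falsy value
const providers = [
  async params => params.accessKeyId && params, // 1. explicit params
  async () => process.env.AWS_ACCESS_KEY_ID && { // 2. environment variables
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    sessionToken: process.env.AWS_SESSION_TOKEN,
  },
  // 3. shared config / credentials files, 4. container creds, 5. EC2 IMDS...
]

async function resolveCredentials (params = {}) {
  for (const provider of providers) {
    const creds = await provider(params)
    if (creds) return creds
  }
  throw new ReferenceError('Unable to resolve AWS credentials')
}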

Expose S3 User Defined Metadata

Describe the problem underlying the enhancement request

aws-lite doesn't currently forward user-defined S3 metadata from x-amz-meta-* headers in results. For our system, this would be specifically useful on HeadObject, as we use this user-defined metadata to determine GetObject offsets.

Describe the solution you'd like (if any)

This user-defined metadata should be available in the headers at x-amz-meta-*. I left a proposed solution at PR #145 – but I'm not entirely convinced it would do the trick, or that it's what you're looking for there.

Describe alternative solutions you've considered

We can currently head check the bucket directly to get this information as a workaround

Additional context or notes

No response

RFC: plugin semantics

Today we released @aws-lite/client 0.6.0 and the first official service plugin (@aws-lite/dynamodb). I selected Dynamo as a starting point because of its wide variety of methods, highly complex semantics (including using deeply nested AWS-flavored JSON), and ease of testing. This resulted in a bunch of changes to the plugin API thus far, including a new response() lifecycle hook, passing some utilities, and a bunch of other changes and fixes.

Now, I'd like to say up front that the aws-lite plugin API is still very much open to feedback – I am absolutely more than happy to reauthor @aws-lite/dynamodb against any great suggestions that show up here.

With all that out of the way, let me copy/paste in the new plugin documentation and contributing guidelines now in the readme:

Plugins

Out of the box, @aws-lite/client is a full-featured AWS API client that you can use to interact with any AWS service that makes use of authentication via AWS signature v4 (which should be just about all of them).

@aws-lite/client can be extended with plugins to more easily interact with AWS services. A bit more about how plugins work:

  • Plugins can be authored in ESM or CJS
  • Plugins can be dependencies downloaded from npm, or also live locally in your codebase
  • In conjunction with the open source community, aws-lite publishes service plugins under the @aws-lite/$service namespace that conform to aws-lite standards
  • @aws-lite/* plugins, and packages published to npm with the aws-lite-plugin-* prefix, are automatically loaded by the @aws-lite/client upon instantiation

Thus, to make use of the @aws-lite/dynamodb plugin, this is what your code would look like:

npm i @aws-lite/client @aws-lite/dynamodb
import awsLite from '@aws-lite/client'
const aws = await awsLite() // @aws-lite/dynamodb is now loaded
aws.dynamodb.PutItem({ TableName: 'my-table', Key: { id: 'hello' } })

Plugin API

The aws-lite plugin API is lightweight and simple to learn. It makes use of four optional lifecycle hooks:

  • validate [optional] - an object of property names and types to validate inputs with pre-request
  • request() [optional] - an async function that enables mutation of inputs to the final service API request
  • response() [optional] - an async function that enables mutation of service API responses before they are returned
  • error() [optional] - an async function that enables mutation of service API errors before they are returned

The above four lifecycle hooks must be exported as an object named methods, along with a valid AWS service code property named service, like so:

// A simple plugin for validating input
export default {
  service: 'dynamodb',
  methods: {
    PutItem: {
      validate: {
        TableName: { type: 'string', required: true }
      }
    }
  }
}
// Using the above plugin
aws.dynamodb.PutItem({ TableName: 12345 }) // Throws validation error

Example plugins can be found below, in plugins/ dir (containing @aws-lite/* plugins), and in tests.

validate

The validate lifecycle hook is an optional object containing (case-sensitive) input property names, with a corresponding object that denotes their type and whether they're required.

Types are as follows: array boolean number object string; additionally, a types array may be supplied. An example validate plugin:

// Validate inputs for a single DynamoDB method (`CreateTable`)
export default {
  service: 'dynamodb',
  methods: {
    CreateTable: {
      validate: {
        TableName:                  { type: 'string', required: true },
        AttributeDefinitions:       { type: 'array', required: true },
        KeySchema:                  { type: 'array', required: true },
        BillingMode:                { type: 'string' },
        DeletionProtectionEnabled:  { type: 'boolean' },
        GlobalSecondaryIndexes:     { type: 'array' },
        LocalSecondaryIndexes:      { type: 'array' },
        ProvisionedThroughput:      { type: 'object' },
        SSESpecification:           { type: 'object' },
        StreamSpecification:        { type: 'object' },
        TableClass:                 { type: 'string' },
        Tags:                       { type: 'array' },
      }
    }
  }
}

request()

The request() lifecycle hook is an optional async function that enables mutation of inputs to the final service API request.

request() is executed with two positional arguments:

  • params (object)
    • The method's input parameters
  • utils (object)
    • Helper utilities for (de)serializing AWS-flavored JSON (awsjsonMarshall, awsjsonUnmarshall), config, creds, etc.

The request() method may return nothing, or a valid client request. An example:

// Automatically serialize input to AWS-flavored JSON
export default {
  service: 'dynamodb',
  methods: {
    PutItem: {
      validate: { Item: { type: 'object', required: true } },
      request: async (params, utils) => {
        params.Item = utils.awsjsonMarshall(params.Item)
        return {
          headers: { 'X-Amz-Target': `DynamoDB_20120810.PutItem` },
          payload: params
        }
      }
    }
  }
}

response()

The response() lifecycle hook is an async function that enables mutation of service API responses before they are returned.

response() is executed with two positional arguments:

  • response (object)
    • The response object contains the following properties:
      • headers (object) - the raw response headers from the service API
      • statusCode (number or undefined) - resulting status code of the API response; if an HTTP connection error occurred, no statusCode will be present
      • payload (any) - response payload
        • If the entire payload is JSON, AWS-flavored JSON, or XML, aws-lite will attempt to parse it prior to executing response(). Responses that are primarily JSON, but with nested AWS-flavored JSON, will be parsed only as JSON and may require additional deserialization with the awsjsonUnmarshall utility
  • utils (object)
    • Helper utilities for (de)serializing AWS-flavored JSON (awsjsonMarshall, awsjsonUnmarshall), config, creds, etc.

The response() method may return nothing, or anything (object, string, etc.). If returning an object, you may return the optional awsjson property (that behaves the same as in client requests). An example:

// Automatically deserialize AWS-flavored JSON
export default {
  service: 'dynamodb',
  methods: {
    GetItem: {
      // Successful responses always have an AWS-flavored JSON `Item` property
      response: async ({ payload }, utils) => {
        return { awsjson: [ 'Item' ], payload }
      }
    }
  }
}

error()

The error() lifecycle hook is an async function that enables mutation of service API errors before they are returned.

error() is executed with two positional arguments:

  • error (object)
    • An object containing the following properties:
      • error (object or string) - the raw error from the service API
        • If the entire error is JSON, AWS-flavored JSON, or XML, aws-lite will attempt to parse it prior to executing error(). Errors that are primarily JSON, but with nested AWS-flavored JSON, will be parsed only as JSON and may require additional deserialization with the awsjsonUnmarshall utility
      • headers (object) - the raw response headers from the service API
      • metadata (object) - aws-lite error metadata; to improve the quality of the errors presented by aws-lite, please only append to this object
      • statusCode (number or undefined) - resulting status code of the API response; if an HTTP connection error occurred, no statusCode will be present
  • utils (object)
    • Helper utilities for (de)serializing AWS-flavored JSON (awsjsonMarshall, awsjsonUnmarshall), config, creds, etc.

The error() method may return nothing, a new or mutated version of the error payload it was passed, a string, an object, or a JS error. An example:

// Improve clarity of error output
export default {
  service: 'lambda',
  methods: {
    GetFunctionConfiguration: {
      error: async (err, utils) => {
        if (err.statusCode === 400 &&
            err?.error?.message?.match(/validation/)) {
          // Append a property to be clearly displayed along with the other error data
          err.metadata.type = 'Validation error'
        }
        return err
      }
    }
  }
}

Authoring @aws-lite/* plugins

Similar to the Definitely Typed (@types) model, aws-lite releases packages maintained by third parties under the @aws-lite/* namespace.

Plugins released within the @aws-lite/* namespace are expected to conform to the following standards:

  • @aws-lite/* plugins should read more or less like the others, and broadly adhere to the following style:
    • Plugins should be authored in ESM, be functional (read: no classes), and avoid globals / closures, etc. wherever possible
    • Plugins should be authored in JavaScript; those that require transpilation (e.g. TypeScript) will not be accepted
  • Plugins should cover all documented methods for a given service, and include links for each method within the plugin
  • Each plugin is singular for a given service
    • Example: we will not ship @aws-lite/lambda, @aws-lite/lambda-1, @aws-lite/lambda-new, etc.
    • With permission of the current maintainer(s), you may become a maintainer of an existing plugin
  • To maintain the speed, security, and lightweight size of the aws-lite project, plugins should ideally have zero external dependencies
    • If external dependencies are absolutely necessary, they should be justifiable; expect their inclusion to be heavily audited
  • Ideally (but not necessarily), each plugin should include its own tests
    • Tests should follow the project's testing methodology, utilizing tape as the runner and tap-arc as the output parser
    • Tests should not rely on interfacing with live AWS services
  • Wherever possible, plugin maintainers should attempt to employ manual verification of their plugins during development
  • By opting to author a plugin, you are opting to provide reasonably prompt bug fixes, updates, etc. for the community
    • If you are not willing to make that kind of commitment but still want to publish your plugins publicly, please feel free to do so outside this repo with an aws-lite-plugin- package prefix

Support Path-Style S3 URLs

Describe the issue

When you have an S3 bucket with periods in its name (in the style of a domain name), all requests from @aws-lite/s3 fail with an error like the following (where [BUCKET] is something like www.example.com and [REGION] is something like us-east-1):

Error: @aws-lite/client: S3.GetObject: Hostname/IP does not match certificate's altnames: Host: [BUCKET].s3.[REGION].amazonaws.com. is not in the cert's altnames: DNS:s3.amazonaws.com, DNS:*.s3.amazonaws.com, DNS:*.s3.dualstack.us-east-1.amazonaws.com, DNS:s3.dualstack.us-east-1.amazonaws.com, DNS:*.s3.us-east-1.amazonaws.com, DNS:s3.us-east-1.amazonaws.com, DNS:*.s3-control.us-east-1.amazonaws.com, DNS:s3-control.us-east-1.amazonaws.com, DNS:*.s3-control.dualstack.us-east-1.amazonaws.com, DNS:s3-control.dualstack.us-east-1.amazonaws.com, DNS:*.s3-accesspoint.us-east-1.amazonaws.com, DNS:*.s3-accesspoint.dualstack.us-east-1.amazonaws.com, DNS:*.s3-deprecated.us-east-1.amazonaws.com, DNS:s3-deprecated.us-east-1.amazonaws.com, DNS:s3-external-1.amazonaws.com, DNS:*.s3-external-1.amazonaws.com, DNS:s3-external-2.amazonaws.com, DNS:*.s3-external-2.amazonaws.com

Expected behavior

Commands like GetObject, PutObject, etc. should not fail.

Steps to reproduce

import aws from "@aws-lite/client";

const a = await aws({ plugins: ["@aws-lite/s3"], region: "us-east-1" });

const data = await a.S3.GetObject({
  Bucket: "www.example.com",
  Key: "myfile.jpg",
});

Platform / version

  • OS + version: macOS 12.6
  • Node version: 21.6.0
  • Package manager version: Yarn 4.0.2
  • Browser: N/A

How urgent do you feel this bug is?

P2

Additional context

No response

Deno + JSR support

Did some investigating into Deno + JSR support. The Node.js compatibility is pretty excellent right now, although testing the module required some workarounds. (We now have a related open issue with Deno.)

Checklist so far:

  • Port test suite to ESM (d697aba)
  • Add node: specifiers to require / import calls (e0545c0)
  • Implement node_modules workarounds to get Deno running tests (b016692)
    • Some tests are running, but will require a variety of small conditional changes
  • Update tests to run against Deno
  • Investigation into HTTP error states to ensure retries work properly
  • Add JSR entry file
  • Implement jsr.json / deno.json for root package; handle import maps, etc.
  • Implement jsr.json / deno.json for all plugins
  • Update plugin generation scripts to include jsr.json / deno.json files
  • Update CI to publish client to JSR
  • Update plugin publisher to publish to JSR

A couple (internal) notes:

JSR entry file (something like this):

import aws from 'npm:@aws-lite/[email protected]'
export let awsLite = aws

We'll need to add some occasional environment switching like so:

function isNode () {
  try { return !(Deno.env) }
  catch { return true }
}

Various tests will wind up needing environment-specific client instantiation like so:

test('Set up env', async t => {
  t.plan(1)
  if (isNode()) {
    let cwd = process.cwd()
    let sut = 'file://' + join(cwd, 'src', 'index.js')
    client = (await import(sut)).default
  }
  else {
    client = (await import('npm:@aws-lite/client')).default
  }
  t.ok(client, 'aws-lite client is present')
})

AWS test lib will need some updates as well, like:

let awsEnvVars = [
  'AMAZON_REGION',
  'AWS_ACCESS_KEY',
  'AWS_ACCESS_KEY_ID',
  'AWS_CONFIG_FILE',
  'AWS_DEFAULT_REGION',
  'AWS_ENDPOINT_URL',
  'AWS_LAMBDA_FUNCTION_NAME',
  'AWS_PROFILE',
  'AWS_REGION',
  'AWS_SDK_LOAD_CONFIG',
  'AWS_SECRET_ACCESS_KEY',
  'AWS_SECRET_KEY',
  'AWS_SESSION_TOKEN',
  'AWS_SHARED_CREDENTIALS_FILE',
]
function resetAWSEnvVars () {
  awsEnvVars.forEach(envVar => {
    if (isNode()) delete process.env[envVar]
    else Deno.env.delete(envVar)
  })
}
Tests checking Node's ECONNREFUSED get replaced with /\@aws-lite\/client: lambda: error sending request/

`getCredentials` Doesn't Consider Environments That Don't Need Them

Describe the issue

The getCredentials method

module.exports = async function getCreds (params) {
  let paramsCreds = validate(params)
  if (paramsCreds) return paramsCreds
  let envCreds = getCredsFromEnv()
  if (envCreds) return envCreds
  let isInLambda = process.env.AWS_LAMBDA_FUNCTION_NAME
  if (!isInLambda) {
    let credsFileCreds = await getCredsFromFile(params)
    if (credsFileCreds) return credsFileCreds
  }
  throw ReferenceError('You must supply AWS credentials via params, environment variables, or credentials file')
}
doesn't take into account various AWS environments that don't require credentials.

In this case, we're trying to use aws-lite within a docker container for CodeBuild which already has the appropriate roles and doesn't need to explicitly set credentials.
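For reference, a hedged sketch of the lookup such environments support: ECS and CodeBuild containers expose AWS_CONTAINER_CREDENTIALS_RELATIVE_URI, which points at a local metadata endpoint serving temporary credentials (requires Node 18+ for global fetch):

// Fetch container credentials from the local metadata endpoint, if present
async function getContainerCreds () {
  const relativeUri = process.env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
  if (!relativeUri) return undefined
  const res = await fetch(`http://169.254.170.2${relativeUri}`)
  const creds = await res.json()
  return {
    accessKeyId: creds.AccessKeyId,
    secretAccessKey: creds.SecretAccessKey,
    sessionToken: creds.Token,
  }
}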

Expected behavior

Allow pass-through for CodeBuild

Steps to reproduce

This is fairly difficult to present succinctly. It would involve creating a CodeBuild project with a docker container containing a distribution or bundle that includes aws-lite, using one of the cached AWS images such as:

FROM public.ecr.aws/codebuild/amazonlinux2-aarch64-standard:3.0
WORKDIR /usr/app
COPY dist ./

Add a role with appropriate permissions (like PutObject to S3), then run the project.

Platform / version

  • OS + version: (e.g. iOS 17.2)
  • Node version: (e.g. Node.js 20.5)
  • Package manager version: (e.g. npm 10.2)
  • Browser: (e.g. Chrome 70.5, if applicable)

How urgent do you feel this bug is?

P1

Additional context

I chose P1 because it's blocking us from using aws-lite in the context we were hoping to.

DynamoDb: Response from `Query` command does not `unmarshall` `LastEvaluatedKey`attribute

Describe the issue

When running a Query command that returns more than 100 Items, the response contains a LastEvaluatedKey attribute to let you know that not all Items matching the query were returned.

When using @architect/[email protected], the value of LastEvaluatedKey is in the JSON format. When using @architect/[email protected], the value is in the DynamoDB JSON (marshalled JSON) format.

Expected behavior

LastEvaluatedKey is in the JSON format

Steps to reproduce

  1. Run a query that returns more than 100 Items
  2. The response contains a LastEvaluatedKey attribute, whose value is marshalled and needs to be unmarshalled by the library.

Platform / version

No response

How urgent do you feel this bug is?

P2

Additional context

No response

Improve .ini reads

Describe the problem underlying the enhancement request

Two main things going on with our .ini (~/.aws/credentials) support:

  1. We rely on a relatively large dependency (relatively speaking): ini, which is about 20KB. I know, that's not bad, but the whole core codebase (without vendored code) is less than 100KB.
  2. It's been a bit buggy when there are comments in the credentials file

Describe the solution you'd like (if any)

I think we should roll our own, probably based on what @mhart did here: https://github.com/mhart/awscred/blob/master/index.js#L320
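A hedged sketch of what a hand-rolled parser could look like (handling sections and comment lines; real-world edge cases would need the care the awscred link above takes):

// Minimal INI parse: sections, key = value pairs, # / ; comment lines
function parseIni (text) {
  const result = {}
  let section = 'default'
  for (let line of text.split(/\r?\n/)) {
    line = line.trim()
    if (!line || line[0] === '#' || line[0] === ';') continue
    const header = line.match(/^\[\s*(?:profile\s+)?(.+?)\s*\]$/)
    if (header) { section = header[1]; continue }
    const eq = line.indexOf('=')
    if (eq === -1) continue
    const key = line.slice(0, eq).trim()
    result[section] = result[section] || {}
    result[section][key] = line.slice(eq + 1).trim()
  }
  return result
}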

Describe alternative solutions you've considered

That's kind of it!

Additional context or notes

No response

RFC: integrated system for using `aws-lite` in tests

Not so long ago, when we needed to mock AWS SDK v2 responses in our test suites, we'd use aws-sdk-mock – a pretty impressive and elegant tool that uses sinon to monkey patch the SDK. Overall it was fast, surprisingly reliable and generally quite nice to work with.

These days, AWS SDK v3 has another 3rd party companion mocking lib called aws-sdk-client-mock; I haven't used it, but it looks nice enough. It also looks fittingly complex for the SDK it's mocking.

In transitioning Architect to aws-lite, it's become pretty clear to me that providing an elegant, built-in solution for test mocking should be a first-class concern. Here are a few ideas of what I'm personally looking for:

  • Simple, familiar API - should feel aws-lite-y, or at very least aws-sdk-mock-y
  • Request getters (spies) - capture and expose requests to designated methods
  • Response mocks - issue one or more (sequential) mock response payloads
  • Entirely local / offline
  • Tools like proxyquire should not be necessary to achieve behavior mutation in tests calling aws-lite
  • Similarly, no monkey-patching; this is a first-class concern that should be supported by aws-lite
  • As always: lightweight (minimal or no dependencies), super fast, and effectively zero impact on aws-lite performance

How this might work:

Option 1: instantiate a client, use plugin-level testing methods

// Load the client (and relevant plugins) to establish test mocks from within your tests
import awsLite from '@aws-lite/client'
import test from 'tape'
// Assume systemUnderTest runs aws.S3.GetObject and returns the result
import systemUnderTest from '../some/path/to/system-under-test.js'

// Turn on testing; would rely on module globals
awsLite.enableTesting()

test('Test things', async t => {
  // Instantiate a client to start adding things via plugins
  const aws = await awsLite({ plugins: [ import('@aws-lite/s3') ] })

  // Add a mock for all aws-lite clients (not just this instance)
  aws.S3.GetObject.mock({
    statusCode: 200,
    headers: { ... },
    payload: Buffer.from('hi'),
  })

  t.equal((await systemUnderTest()).toString(), 'hi')

  aws.S3.GetObject.getLastRequest()
  // {
  //   Bucket: 'foo',
  //   Key: 'bar',
  // }
  aws.S3.GetObject.getAllRequests() // Assuming it ran more than once:
  // [
  //   {
  //     Bucket: 'foo',
  //     Key: 'bar',
  //   },
  //   {
  //     Bucket: 'fiz',
  //     Key: 'buz',
  //   },
  // ]

  // Disable testing
  awsLite.disableTesting()
})

Pros: clean, aws-lite-y semantics
Cons: you have to instantiate a client to start mocking; perhaps it's confusing to add .mock() methods to plugin methods

Option 2: built-in testing methods

import awsLite from '@aws-lite/client'
import test from 'tape'
// Assume systemUnderTest runs aws.S3.GetObject and returns the result
import systemUnderTest from '../some/path/to/system-under-test.js'

test('Test things', async t => {
  // Turn on testing; would rely on module globals
  awsLite.enableTesting()

  // Params: service, method, response(s)
  awsLite.mock('S3', 'GetObject', {
    statusCode: 200,
    headers: { ... },
    payload: Buffer.from('hi'),
  })

  t.equal(systemUnderTest().toString(), 'hi')

  awsLite.getLastRequest('S3', 'GetObject')
  // {
  //   Bucket: 'foo',
  //   Key: 'bar',
  // }
  awsLite.getAllRequests('S3', 'GetObject') // Assuming it ran more than once:
  // [
  //   {
  //     Bucket: 'foo',
  //     Key: 'bar',
  //   },
  //   {
  //     Bucket: 'fiz',
  //     Key: 'buz',
  //   },
  // ]
})

Pros: more like aws-sdk-mock, lifts testing-related methods (like getLastRequest()) higher up
Cons: subjective, but those higher-level methods make for kind of uglier semantics (see: getLastRequest('S3', 'GetObject') vs. S3.GetObject.getLastRequest())

Also totally open to other ideas of how this might work, this is an open invitation for all feedback, large or small!

`AwsLiteClient` is missing type of `DynamoDB`

Describe the issue

I'm trying to use aws-lite for DynamoDB, but I get the type error Property 'DynamoDB' does not exist on type AwsLiteClient

My code

const aws = await AwsLite({
    region: 'us-west-1',
    plugins: ['@aws-lite/dynamodb', '@aws-lite/dynamodb-types'],
    autoloadPlugins: false,
    debug: true,
    keepAlive: false,
  });

  // Easily interact with the AWS services your application relies on
  await aws.DynamoDB.PutItem({
    TableName: '$table-name',
    Item: {
      // AWS-lite automatically de/serializes DynamoDB JSON
      pk: '$item-key',
      data: {
        ok: true,
        hi: 'friends',
      },
    },
  });

Dependencies:

   "@aws-lite/client": "^0.14.0",
   "@aws-lite/dynamodb": "^0.3.1",
 "@aws-lite/dynamodb-types": "^0.3.1"

Expected behavior

The DynamoDB property should exist on the AwsLiteClient interface.

Steps to reproduce

Platform / version

  • OS + version: Mac 14.2.1 (23C71)
  • Node version: v20.10.0
  • Package manager version: npm 10.2.3
  • Browser: (e.g. Chrome 70.5, if applicable)

How urgent do you feel this bug is?

P2

Additional context

No response

Add retries

It's time to add retry logic to aws-lite! Here's some initial scope and research for discussion:

Retry attempts

  • Default: 5 (AWS default is 3)
  • Maximum retry attempt settings:
    • Param: maxAttempts
    • Config file: max_attempts
    • Env var: AWS_MAX_ATTEMPTS
  • Maximum backoff: 20 seconds; will not be initially configurable

Retry modes

  • Default: backoff with jitter, including a retry latency floor (i.e. must always wait at least n ms before the next retry)
    • AWS suggests the following jitter calculation: min($random0-1 * 2^$seq, $max), which I think would be expressed in JS as Math.min(Math.random() * Math.pow(2, $seq), 20) * 1000; in my limited testing, this seems to produce some extremely long backoffs low in the sequence
    • AWS also suggests some other jitter algorithms:
      • "Full jitter": random_between(0, min($max, $base * 2 ** $seq))
      • "Decorrelated jitter": min($max, random_Between($base, sleep * 3))
      • "Equal jitter" (not recommended): temp = min($max, $base * 2 ** $seq); temp / 2 + random_between(0, temp / 2)
    • aws4fetch uses: Math.random() * $initRetryMs * Math.pow(2, i) (ref)
  • Adding an option to emulate AWS's token bucket-based adaptive option will not be initially supported
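
Here's a sketch of the "full jitter" variant in JS; the base and cap constants below are illustrative, not settled aws-lite defaults:

const max = 20 * 1000 // Maximum backoff: 20 seconds
const base = 100      // Retry latency floor: 100ms
function backoff (seq) {
  const ceiling = Math.min(max, base * 2 ** seq)
  // Jitter within the ceiling, but never dip below the floor
  return Math.max(base, Math.random() * ceiling)
}
// Usage between attempts: await new Promise(res => setTimeout(res, backoff(attempt)))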

Retryability

  • From my initial research, most services use a standard retry behavior, while some services customize retryability based on response data and error codes
  • To start, we will not make use of per-service retryability heuristics

Resources

S3 HeadObject response should include ContentLength

Currently, the response from HeadObject does not include ContentLength. For example,

{
  AcceptRanges: 'bytes',
  LastModified: 2023-12-11T21:00:23.000Z,
  ETag: 'f0b0de38617496a96511a3353781515e',
  VersionId: '4_z834de8717f3677488b980413_f40189138ebbcef16_d20231211_m210023_c004_v0402012_t0025_u01702328423101',
  ContentType: 'application/octet-stream'
}

ContentLength is in the response from S3, but it is ignored by parseHeadersToResults. The HeadObject API Response Elements doc lists Content-Length, so it should be included.
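
The gist of the fix (a sketch only; parseHeadersToResults is internal, so the names and shape here are assumptions) is just mapping the header through and casting it to a number:

// Assuming `headers` holds the raw, lowercased response headers
const contentLength = headers['content-length']
if (contentLength !== undefined) result.ContentLength = Number(contentLength)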

[PR to closely follow]

Add options for (un)marshalling AWS-flavored JSON

aws-lite relies on @aws-sdk/util-dynamodb for (un)marshalling AWS-flavored JSON. It accepts the following options, notably: convertClassInstanceToMap, convertEmptyValues, and removeUndefinedValues for marshalling, and wrapNumbers for unmarshalling.

We use the defaults, but support passing these options through. As discussed on Discord, I identified a few paths for solving for custom AWS JSON marshalling. The option we'll move forward with will be to pass these options through during client instantiation, like so:

let aws = await awsLite({ 
  awsjsonMarshall: {
    convertClassInstanceToMap: true,
  },
  awsjsonUnmarshall: {
    wrapNumbers: true
  },
})
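
For a sense of why one might reach for these flags, here's a hedged example: by default, util-dynamodb's marshall refuses class instances, so (assuming that behavior) a client configured as above, with the DynamoDB plugin loaded, could persist one as a plain map:

class Widget {
  constructor () {
    this.ok = true
    this.hi = 'friends'
  }
}
// Without convertClassInstanceToMap, marshalling `new Widget()` throws;
// with it, the instance is converted to a plain DynamoDB map attribute
await aws.DynamoDB.PutItem({
  TableName: '$table-name',
  Item: { pk: '$item-key', data: new Widget() },
})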

Add type definitions for main client

Right now it's looking like this:

/**
 * @param {string} [options.host] Manually specify a hostname
 * @param {string} [options.endpoint=/] API endpoint path; should not include domain
 * @param {string} [options.region] Change the AWS service region for a single request
 * @param {string|object|array} [options.payload] JSON-serialized data, or object or array to be JSON-serialized
 * @param {string|object|array} [options.json] Alias of options.payload
 * @param {string|object|array} [options.body] Alias of options.payload
 * @param {string|object|array} [options.data] Alias of options.payload
 * @param {object} [options.headers] Request headers to overlay
 * 
 * @returns {Promise<any>} Result or error
 */

"Package" is reserved word

Describe the issue

I'm trying to get my first application up and running, so please excuse me if the bug report is misplaced. Anyhow, I get the following error:

node_modules/@aws-lite/client/src/get-plugins.js:66:10: ERROR: "package" is a reserved word and cannot be used with the "esm" output format due to strict mode [plugin css-bundle-plugin]

Here's how I got to this point:

  1. I installed Remix Grunge template: npx create-remix@latest --template remix-run/grunge-stack
  2. Bumped all the versions in the package.json to latest. Out of all the software installed, I think these versions might be of importance: @architect/architect to 11.0.3 and @architect/functions to 8.0.2.
  3. Ran npm install && npm run dev
  4. The error appeared
  5. Made sure that the latest version is used by running: npm install @aws-lite/client --save-exact. It installed version 0.17.1.

Expected behavior

No error.

Steps to reproduce

Please see the description.

Platform / version

  • OS + version: macOS 14.2.1
  • Node version: 21.5
  • Package manager version: npm 10.2.4
  • @aws-lite/client: 0.17.1

How urgent do you feel this bug is?

None

Additional context

No response

Add XML namespace support

Some AWS APIs make use of XML namespaces.

While S3 documents xmlns in its request shape, in the methods I've implemented thus far, it doesn't seem to care if it's not present.

Route 53, on the other hand, very much cares if the correct xmlns property isn't in the opening XML tag, and will not accept requests without it. Thus, aws-lite must support that.

I think we'll do so with an xmlns request object property, like so:

await aws({
  service: 'route53',
  method: 'POST',
  path: `/2013-04-01/hostedzone/${HostedZoneId}/rrset`,
  payload: { foo: { ... } },
  xmlns: 'https://route53.amazonaws.com/doc/2013-04-01/',
})
// POST body starts with: <foo xmlns="https://route53.amazonaws.com/doc/2013-04-01/">...

add endpoint param

Describe the problem underlying the enhancement request

aws-sdk (v2) and @aws-sdk (v3) both support 'endpoint', so we should too

Describe the solution you'd like (if any)

This will mean parsing out protocol, host, port, and endpointPrefix from the 'endpoint' param; a sketch of that parsing follows.
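
A minimal sketch using Node's built-in WHATWG URL API; how the pieces map onto existing client config properties is illustrative only:

const url = new URL('http://localhost:4566/_custom/prefix')
const config = {
  protocol: url.protocol.replace(':', ''),       // 'http'
  host: url.hostname,                            // 'localhost'
  port: url.port ? Number(url.port) : undefined, // 4566
  pathPrefix: url.pathname,                      // '/_custom/prefix'
}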

Describe alternative solutions you've considered

living in a hollowed out log is sounding better day by day tbh

Additional context or notes

No response

XML parsing / building

As I've worked through some of the early plugins, it's becoming increasingly clear we can't get away with all these various AWS APIs returning unparsed XML strings. I'm already a bit annoyed that nearly half of our business logic's size (about 35KB of about 70KB) is dedicated only to vendoring utilities for marshalling and unmarshalling AWS-flavored JSON, so this stings. But we're building an AWS client here, and AWS uses a lot of XML, so them's the breaks.

I will be evaluating XML parsing / building solutions on:

  • Hard requirement: no native bindings
  • Whether it has dependencies (ideally none!), and whether it's actively maintained
  • Time to require/import
  • Footprint on disk
  • Processing speed for parsing reasonably sized XML responses

I'll post results here once I've got them. A few solutions I'll be evaluating, in no particular order (and please do suggest more!):

Weird error from @aws-lite/client: SQS.SendMessage

Describe the issue

    "errorType": "TypeError",
    "errorMessage": "@aws-lite/client: SQS.SendMessage: Cannot read properties of undefined (reading 'x-amzn-requestid')",
    "service": "sqs",
    "property": "SQS",
    "time": "2024-03-23T08:01:47.140Z",
    "stack": [
        "TypeError: @aws-lite/client: SQS.SendMessage: Cannot read properties of undefined (reading 'x-amzn-requestid')",
        "    at Object.yB (/node_modules/.pnpm/@[email protected]/node_modules/@aws-lite/sqs/src/index.mjs:30:16)",
        "    at Object.SendMessage (/node_modules/.pnpm/@[email protected]/node_modules/@aws-lite/client/src/client-factory.js:160:47)",
        "    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"
    ]
}

Expected behavior

In sqs/src/index.mjs, line 33:

  if (error && headers['x-amzn-requestid']) error.requestId = headers['x-amzn-requestid']

I'm not sure why headers would be undefined, but it seems like we should add an extra check so it doesn't crash.
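
For example, a minimal defensive version using optional chaining (assuming headers can legitimately be absent on some error paths):

  if (error && headers?.['x-amzn-requestid']) error.requestId = headers['x-amzn-requestid']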

Steps to reproduce

arc.queues.publish

Platform / version

  • Node version: 20.11
  • Package manager version: pnpm 8.5

How urgent do you feel this bug is?

P1

Additional context

No response

Bug with TS & DynamoDB plugin with TS option `strict` is enabled

Describe the issue

When using the plugin with TS, I got the error:

error TS7016: Could not find a declaration file for module '@aws-lite/dynamodb'. '/..../node_modules/@aws-lite/dynamodb/src/index.mjs' implicitly has an 'any' type.
Try npm i --save-dev @types/aws-lite__dynamodb if it exists or add a new declaration (.d.ts) file containing declare module '@aws-lite/dynamodb';

2 import dynamodb from '@aws-lite/dynamodb'

I've properly updated my tsconfig.json.
The bug appears because I have these options defined in the config file:

 "strict": true,
 "noImplicitAny": true,

Enabling at least one of those options raises the above error when running yarn tsc.

Might be the same as #70, but perhaps without the same options in the tsconfig file.

Expected behavior

It compiles without error.

Steps to reproduce

  1. Install both the client & dynamo plugin
  2. Enable TS config with option strict or noImplicitAny set to true
  3. Setup awsLite to use the Dynamo plugin
  4. Launch yarn tsc

Platform / version

  • OS + version: (e.g. iOS 17.2) unimportant
  • Node version: (e.g. Node.js 20.5) v20.10.0
  • Package manager version: (e.g. npm 10.2) 1.22.21
  • Browser: (e.g. Chrome 70.5, if applicable) unimportant
  • TS 4

How urgent do you feel this bug is?

P2

Additional context

No response

RFC: waiters

AWS SDK v2 / 3 have the concept of waiters, which are custom implementations that poll an endpoint looking for a specific change. Examples (from their blog, edited for brevity):

const Bucket = 'my-bucket'

// v2
import aws from 'aws-sdk'
const v2Client = new aws.S3(options)
await v2Client.createBucket({ Bucket }).promise()
await v2Client.waitFor('bucketExists', { Bucket }).promise()

// v3
import {
  S3Client, CreateBucketCommand, waitUntilBucketExists
} from '@aws-sdk/client-s3'
const v3Client = new S3Client(options)
const command = new CreateBucketCommand({ Bucket })
await v3Client.send(command)
await waitUntilBucketExists({ client: v3Client, maxWaitTime: 60 }, { Bucket })

These are nice, but perhaps limiting, and (in my personal opinion) a bit of a funky API. I can imagine an alternative approach consisting of:

  • Any method to wait for + parameters
  • Conditions to match
  • Related options (timeout, etc.)

This approach should in theory allow simple, customizable waiters. In the S3 bucket creation example, maybe it's something like this:

import awsLite from '@aws-lite/client'
const aws = await awsLite(options)
await aws.wait(
  // Method to poll
  aws.S3.HeadBucket,

  // Invocation params
  { Bucket: 'my-bucket' },
  
  // Conditions to match via statusCode, headers, payload properties
  { statusCode: 200 },
  
  // Waiter options
  {
    timeout: 30, // Fail if not completed by timeout in seconds, default 30, max 3600
    frequency: 5, // Polling frequency in seconds, default 5, min 0.1, max 10
  },
)
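
For clarity, here's a naive sketch of the polling loop such a waiter implies; it handles only the statusCode condition, assumes the polled method surfaces statusCode on its result, and all names are illustrative:

async function wait (method, params, conditions, { timeout = 30, frequency = 5 } = {}) {
  const deadline = Date.now() + timeout * 1000
  while (Date.now() <= deadline) {
    try {
      const result = await method(params)
      // Match the (sole) supported condition; more property checks would go here
      if (!conditions.statusCode || result.statusCode === conditions.statusCode) return result
    }
    catch { /* e.g. 404 from HeadBucket: not there yet, keep polling */ }
    await new Promise(res => setTimeout(res, frequency * 1000))
  }
  throw Error(`Waiter timed out after ${timeout}s`)
}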

Then again, perhaps folks would rather just have a pre-built waiter (e.g. waitFor('bucketExists', { Bucket })) instead of specifying their own methods, criteria to match, etc.

Stoked to hear some thoughts on this!

Support SSO

Describe the problem underlying the enhancement request

My profiles are set up via aws configure sso.
So I tried using the profile option:

const aws = await awsLite({
  region: 'eu-central-1',
  profile: 'my-profile',
});

But I got TypeError: Profile not found: my-profile.

The profiles are stored in ~/.aws/config.
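
For reference, such a profile looks roughly like this (all values below are placeholders):

[profile my-profile]
sso_start_url = https://my-org.awsapps.com/start
sso_region = eu-central-1
sso_account_id = 111122223333
sso_role_name = MyRoleName
region = eu-central-1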

Describe the solution you'd like (if any)

When a profile is specified, the client should resolve it correctly regardless of whether it was configured via SSO.

Describe alternative solutions you've considered

The AWS JS SDK v3 is able to detect that using the @smithy/node-config-provider.

Additional context or notes

No response

First set of plugins

As requested by @andybee in the Arc / #aws-lite Discord, here's the Architect-centric list of plugins I'm looking at picking off on the way to 1.0. These assume, of course, that we're reasonably happy with the plugin API (see: #14).

Moreover, this list doesn't intend to imply that other plugins folks intend to be published under @aws-lite/* won't be considered or accepted, just that these are the ones we, the maintainers, are likely ourselves to prioritize (namely because they help us migrate away before the Mar 2024 aws-sdk (v2) Lambda deprecation deadline).

Requirements:

  • API Gateway v2 (@architect/deploy, @architect/functions)
  • DynamoDB (@architect/functions, @architect/sandbox)
  • SNS (@architect/functions)
  • SQS (@architect/functions)
  • SSM (@architect/functions, @architect/deploy, @architect/destroy, @architect/env, @architect/inventory)
    • This will probably be a rare partial implementation to start, as Architect's usage is limited to the Parameter Store

Nice to have:

  • CloudFormation (@architect/deploy, @architect/destroy, @architect/logs)
  • CloudFront (@architect/deploy)
  • CloudWatch Logs (@architect/destroy, @architect/logs)
  • Lambda (@architect/deploy)
  • S3 (@architect/asap, @architect/deploy, @architect/destroy)
    • This one will likely need some outside help; it's chonky! 😅

Intermittent InvalidSignatureException with DynamoDB

Describe the issue

Intermittently we see requests to DynamoDB fail with InvalidSignatureException:

 InvalidSignatureException: @aws-lite/client: DynamoDB.Query: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
    at errorHandler (/var/task/node_modules/@aws-lite/client/src/error.js:13:46)
    at Query (/var/task/node_modules/@aws-lite/client/src/client-factory.js:187:17)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async file:///var/task/index.mjs:14:20
    at async lambda (/var/task/node_modules/@architect/functions/src/http/index.js:37:22) 

Expected behavior

No InvalidSignatureException. ;-)

Steps to reproduce

Difficult to reproduce.
We have only been able to provoke the problem by creating some load.
However, we have seen instances of this error with hardly any load (single user).

Platform / version

How urgent do you feel this bug is?

4/5

We reverted to @architect/[email protected] (aws-sdk) and won’t use aws-lite until this is solved.

Additional context

  1. The signing of AWS requests seems to happen in @aws-lite/client. This suggests that other AWS services might be affected, too. However, so far, we have only observed the exception when using DynamoDB and, more specifically, Query. To be fair, this reflects our usage of AWS services, so maybe selection bias.

  2. In most cases, the InvalidSignatureException occurred on the first and only retry, after the 1st call did not seem to receive any response at all. See screenshots. However, we consider this circumstance secondary, as we have cases of the InvalidSignatureException occurring on the first try.

Call 1:
InvalidSignatureException with retry, call 1

Call 2:
InvalidSignatureException with retry, call 2

  3. This old bug in aws-sdk-go seems related: aws/aws-sdk-go#2598 (comment)
