
k6-jslib-aws

A library enabling users to interact with AWS resources within k6 scripts.

This AWS client library for k6 facilitates interactions with a subset of AWS services in the context of k6 load testing scripts.

Extensive documentation and examples for each of these clients can be found in the k6 documentation. Please refer to the documentation for detailed information on how to use the library.

Supported services and features

  • EventBridge: allows you to put events to AWS EventBridge.
  • Kinesis: allows you to list and create streams, put records, list shards, get shard iterators, and get records from AWS Kinesis.
  • KMS: allows you to list KMS keys and generate a unique symmetric data key for use outside of AWS KMS.
  • Lambda: allows you to invoke functions in AWS Lambda.
  • S3Client: allows you to list buckets and a bucket's objects, as well as upload, download, and delete objects.
  • SecretsManager: allows you to list, get, create, update, and delete secrets in the AWS Secrets Manager service.
  • SQS: allows you to list queues and send messages to AWS SQS.
  • SSM: allows you to retrieve parameters from AWS Systems Manager.
  • V4 signature: allows you to sign requests to Amazon AWS services.

Demo

import { check } from 'k6'
import exec from 'k6/execution'
import http from 'k6/http'

import { AWSConfig, S3Client } from 'https://jslib.k6.io/aws/0.12.3/s3.js'

const awsConfig = new AWSConfig({
    region: __ENV.AWS_REGION,
    accessKeyId: __ENV.AWS_ACCESS_KEY_ID,
    secretAccessKey: __ENV.AWS_SECRET_ACCESS_KEY,
})

const s3 = new S3Client(awsConfig)
const testBucketName = 'test-jslib-aws'
const testInputFileKey = 'productIDs.json'
const testOutputFileKey = `results-${Date.now()}.json`

export async function setup() {
    // If our test bucket does not exist, abort the execution.
    const buckets = await s3.listBuckets()
    if (buckets.filter((b) => b.name === testBucketName).length == 0) {
        exec.test.abort()
    }

    // If our test object does not exist, abort the execution.
    const objects = await s3.listObjects(testBucketName)
    if (objects.filter((o) => o.key === testInputFileKey).length == 0) {
        exec.test.abort()
    }

    // Download the S3 object containing our test data
    const inputObject = await s3.getObject(testBucketName, testInputFileKey)

    // Let's return the downloaded S3 object's data from the
    // setup function to allow the default function to use it.
    return {
        productIDs: JSON.parse(inputObject.data),
    }
}

export default async function (data) {
    // Pick a random product ID from our test data
    const randomProductID = data.productIDs[Math.floor(Math.random() * data.productIDs.length)]

    // Query our ecommerce website's product page using the ID
    const res = await http.asyncRequest(
        'GET',
        `http://your.website.com/product/${randomProductID}/`
    )
    check(res, { 'is status 200': res.status === 200 })
}

export async function handleSummary(data) {
    // Once the load test is over, let's upload the results to our
    // S3 bucket. This is executed after teardown.
    await s3.putObject(testBucketName, testOutputFileKey, JSON.stringify(data))
}

Want to contribute?

The scope of this library is intentionally minimal, focusing on the use cases needed by us and our clients. If the library doesn't yet meet your needs, feel free to extend it and open a pull request. Contributions are welcome.

Build

# Install local dependencies
npm install

# Bundle the library in preparation for publication
npm run webpack

# Run the tests
npm test

For more details, refer to CONTRIBUTING.md.


k6-jslib-aws's Issues

Support for SQS doesn't seem to work

I'm trying to use the SQS client in my k6 load test as described in the docs, but the test fails with this error:

Remote resolution error: "not found: https://jslib.k6.io/aws/0.7.1/sqs.js"

The link to the dedicated k6 documentation page is also broken in the docs.

Is the SQS jslib still supported?

In my import of the sqs jslib, I've tried both 0.7.0 and 0.7.1.
I'm currently using k6 0.36.0.

How do we use Instance Profile as a credential source?

I want to run k6 tests in our CI/CD pipelines. The pipeline runners are on AWS and use AWS instance profiles. How can I create an AWSConfig object if I don't have access keys and secrets to pass to it?

I was really surprised that jslib-aws is not based on the AWS SDK, which automatically handles authentication for you. https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-credentials-node.html

If more than one credential source is available to the SDK, the default precedence of selection is as follows:

  1. Credentials that are explicitly set through the service-client constructor
  2. Environment variables
  3. The shared credentials file
  4. Credentials loaded from the ECS credentials provider (if applicable)
  5. Credentials that are obtained by using a credential process specified in the shared AWS config file or the shared credentials file. For more information, see Loading Credentials in Node.js using a Configured Credential Process.
  6. Credentials loaded from AWS IAM using the credentials provider of the Amazon EC2 instance (if configured in the instance metadata)

I'm using number 6.
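For what it's worth, since the library takes explicit keys, one workaround is to fetch temporary credentials from the EC2 instance metadata service yourself. Below is a rough, untested sketch assuming IMDSv1 is enabled and a single role is attached to the instance (IMDSv2 would additionally require fetching a token via a PUT to /latest/api/token and passing it as a request header):

import http from 'k6/http'
import { AWSConfig, S3Client } from 'https://jslib.k6.io/aws/0.12.3/s3.js'

const IMDS_CREDS = 'http://169.254.169.254/latest/meta-data/iam/security-credentials/'

export function setup() {
    // The bare path lists the role(s) attached to the instance;
    // appending the role name returns temporary credentials for it.
    const role = http.get(IMDS_CREDS).body.trim()
    const creds = JSON.parse(http.get(IMDS_CREDS + role).body)
    return {
        accessKeyId: creds.AccessKeyId,
        secretAccessKey: creds.SecretAccessKey,
        sessionToken: creds.Token,
    }
}

export default function (creds) {
    const awsConfig = new AWSConfig({
        region: __ENV.AWS_REGION,
        accessKeyId: creds.accessKeyId,
        secretAccessKey: creds.secretAccessKey,
        sessionToken: creds.sessionToken,
    })
    const s3 = new S3Client(awsConfig)
    // ... use the client as usual
}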

Support AWS_SESSION_TOKEN in ssm.js

Per issue #5 (Support AWS_SESSION_TOKEN authentication), support for AWS_SESSION_TOKEN has been available since 0.5.0.

I have tried to get a parameter from SSM using the ssm getParameter function: await systemsManager.getParameter(testParameterName). I got an error saying 'Uncaught (in promise) SystemsManagerServiceError: The security token included in the request is invalid'. We log into AWS via SSO. s3.listBuckets() works fine; this issue only happens with SSM.

If I read the code correctly, I think the fix could be as simple as adding sessionToken: this.awsConfig.sessionToken when constructing the SignatureV4 on line 39:

this.signature = new SignatureV4({
    service: this.serviceName,
    region: awsConfig.region,
    credentials: {
        accessKeyId: awsConfig.accessKeyId,
        secretAccessKey: awsConfig.secretAccessKey,
        sessionToken: this.awsConfig.sessionToken,
    },
    uriEscapePath: true,
    applyChecksum: false,
})

I would appreciate it if you could take a look at this issue. Thanks!

Oh, by the way, kms.js does not have sessionToken passed in either; I bet it has the same issue.

RFE: ability to add custom cert or disable cert validation in S3 endpoint

Hi.

When using the s3.js functions, if the S3 endpoint is an internal one with custom certs, it throws an error:

level=warning msg="Request Failed" error="Put \"https://s3.internal.com:443/k6-s3/results-1-0.json\": tls: failed to verify certificate: x509: certificate signed by unknown authority"

This is a common situation when using MinIO, Ceph RGW, etc. We need an option to disable cert validation (for non-sensitive environments) or to add custom certs so they can be validated correctly.

Support AWS_SESSION_TOKEN authentication

Rationale

It has been brought to our attention that the library doesn't cater to some of its users' use cases, as it doesn't allow them to use session-token-based authentication just yet.

Context

This authentication method is rather common in contexts where our users log into AWS via SSO. The request for this feature popped up in a support forum topic.

Feasibility and Scope

We believe this would imply some additions and modifications to this library's authentication and signature code. The feasibility is rather on the 👍🏻 side, but the scope is unclear.

Definition of done

The definition of done for session-token-based authentication is that users should be able to pass an AWS_SESSION_TOKEN option to our client classes, such as S3Client, and successfully use the SDK with this authentication method from then on:

const awsConfig = new AWSConfig(
  __ENV.AWS_REGION,
  __ENV.AWS_ACCESS_KEY_ID,
  __ENV.AWS_SECRET_ACCESS_KEY,
  __ENV.AWS_SESSION_TOKEN
);

const s3 = new S3Client(awsConfig);

support for path-style requests on s3.createMultipartUpload

AWS currently supports both path-style and virtual-hosted-style URLs, and even though path-style is slated for deprecation, other S3-compatible providers have incomplete support for virtual-hosted-style URLs. See https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#path-style-access

In the lib code, the usage is mixed: methods like putObject use path-style, while others like createMultipartUpload use virtual-hosted-style.

The Amazon CLI, when provided with an --endpoint argument, uses path-style for its own requests 🤣 For example:

aws s3api --endpoint <endpoint url> create-multipart-upload --bucket <bucket name> --key <object key> --debug

shows:

...
POST
/<bucket name>/<object key>
uploads=
...

Should this be a config option of the S3Client? Should we use path-style by default on createMultipartUpload and follow the aws-cli implementation? (The sketch below contrasts the two URL forms.)
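For reference, a small sketch of the two URL forms for the same object (bucket, endpoint, and key are placeholders):

// Virtual-hosted-style: the bucket is part of the hostname.
//   https://my-bucket.s3.us-east-1.amazonaws.com/my-object-key
// Path-style: the bucket is the first segment of the path.
//   https://s3.us-east-1.amazonaws.com/my-bucket/my-object-key

// A hypothetical helper illustrating what a forcePathStyle-like option would change:
function objectUrl(endpoint, bucket, key, pathStyle) {
    return pathStyle
        ? `https://${endpoint}/${bucket}/${key}`
        : `https://${bucket}.${endpoint}/${key}`
}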

open() performance

Hi Team,

The example code uses const testFile = open('./bonjour.txt', 'r'), i.e. open(). But the documentation suggests using SharedArray instead.

Could you please clarify which is ideal for AWS? Thanks!

Make the library asynchronous and compatible with the `async/await` syntax

Recently, k6 received support for the async/await syntax. Considering that this JavaScript library is currently blocking and offers functionality that would benefit from being asynchronous, we should indeed consider making its APIs asynchronous.

This would benefit users, as it would allow them to avoid blocking the execution of their scripts on heavy operations such as downloading a file from S3. It would also benefit the overall performance of scripts relying on AWS operations, as it would allow the runtime to offload costly operations such as cryptographic signing.

The target API should lead to the following kind of code:

export default async function () {
    // List the buckets the AWS authentication configuration
    // gives us access to.
    const buckets = await s3.listBuckets()

    // If our test bucket does not exist, abort the execution.
    if (buckets.filter((b) => b.name === testBucketName).length == 0) {
        exec.test.abort()
    }

    // Let's upload our test file to the bucket
    await s3.putObject(testBucketName, testFileKey, testFile)

    // Let's list the test bucket objects
    const objects = await s3.listObjects(testBucketName)

    // And verify it does contain our test object
    if (objects.filter((o) => o.key === testFileKey).length == 0) {
        exec.test.abort()
    }

    // Let's re-download it, verify it's correct, and delete it
    const obj = await s3.getObject(testBucketName, testFileKey)
    await s3.deleteObject(testBucketName, testFileKey)
}


Add signature tests validating the behavior of sessions tokens

#7 recently added support for AWS session-token-based authentication. We forgot to amend the signature tests in the process. We should write tests asserting that, when used, the X-AMZ-SECURITY-TOKEN header is added to the request's headers and is included in the signature process.

Depends on #10

Support for headers like Content-Disposition or Content-Type with putObject

As suggested in https://community.k6.io/t/s3serviceerror-using-s3client/5586/4, it would be nice to be able to add headers like Content-Disposition or Content-Type when using AWS PutObject.

Looking at the code, it seems we always pass empty headers; maybe we could add a parameter to the putObject() function to optionally pass additional headers (see the sketch after the snippet below). I might be wrong and there may already be an option to do that.

    /**
     * Adds an object to a bucket.
     *
     * You must have WRITE permissions on a bucket to add an object to it.
     *
     * @param  {string} bucketName - The bucket name containing the object.
     * @param  {string} objectKey - Key of the object to put.
     * @param  {string | ArrayBuffer} data - the content of the S3 Object to upload.
     * @throws  {S3ServiceError}
     * @throws  {InvalidSignatureError}
     */
    putObject(bucketName: string, objectKey: string, data: string | ArrayBuffer) {
        // Prepare request
        const method = 'PUT'
        const host = `${bucketName}.${this.host}`

        const signedRequest = this.signature.sign(
            {
                method: method,
                protocol: 'https',
                hostname: host,
                path: `/${objectKey}`,
                headers: {},
                body: data,
            },
            {}
        )

        const res = http.request(method, signedRequest.url, signedRequest.body, {
            headers: signedRequest.headers,
        })
        this._handle_error('PutObject', res)
    }
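A minimal sketch of what such an optional parameter could look like, mirroring the snippet above (this is a proposal, not the current API):

    putObject(
        bucketName: string,
        objectKey: string,
        data: string | ArrayBuffer,
        additionalHeaders: object = {} // e.g. { 'Content-Type': 'application/json' }
    ) {
        // Prepare request
        const method = 'PUT'
        const host = `${bucketName}.${this.host}`

        const signedRequest = this.signature.sign(
            {
                method: method,
                protocol: 'https',
                hostname: host,
                path: `/${objectKey}`,
                // Forward the caller-provided headers so they get signed too.
                headers: additionalHeaders,
                body: data,
            },
            {}
        )

        const res = http.request(method, signedRequest.url, signedRequest.body, {
            headers: signedRequest.headers,
        })
        this._handle_error('PutObject', res)
    }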

Load credentials from credentials file?

A user reported on the forum a scenario in which they would not have access to credentials via environment variables. There seemed to be an assumption that k6 would behave the same way as the AWS CLI on that front, and would alternatively fetch the credentials from $HOME/.aws/credentials.

I believe it might turn out tricky to do this in a jslib, but we could probably support reading credentials from a file indeed. To be explored! 🚀 👀
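A rough sketch of how a script could already do this with open(), assuming the standard INI layout and only the [default] profile (the parsing here is deliberately naive):

import { AWSConfig, S3Client } from 'https://jslib.k6.io/aws/0.12.3/s3.js'

// open() is restricted to the init context, which is where this runs.
const credentialsFile = open(`${__ENV.HOME}/.aws/credentials`)

function iniValue(contents, key) {
    const match = contents.match(new RegExp(key + '\\s*=\\s*(.+)'))
    if (!match) throw new Error(key + ' not found in credentials file')
    return match[1].trim()
}

const awsConfig = new AWSConfig({
    region: __ENV.AWS_REGION,
    accessKeyId: iniValue(credentialsFile, 'aws_access_key_id'),
    secretAccessKey: iniValue(credentialsFile, 'aws_secret_access_key'),
})

const s3 = new S3Client(awsConfig)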

Make SignatureV4 code async

We should make the signature code asynchronous, that is, usable with async/await. To provide a real benefit, we should make sure it switches to the k6/experimental/webcrypto module instead of the current crypto module, which is synchronous.
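For illustration, a minimal sketch of an async HMAC-SHA256 signing step on top of the webcrypto API, assuming the module's HMAC support (the key and payload are placeholders):

import { crypto } from 'k6/experimental/webcrypto'

// Encode an ASCII string as an ArrayBuffer without relying on TextEncoder.
function toBytes(s) {
    const buf = new Uint8Array(s.length)
    for (let i = 0; i < s.length; i++) buf[i] = s.charCodeAt(i)
    return buf.buffer
}

export default async function () {
    const key = await crypto.subtle.importKey(
        'raw',
        toBytes('placeholder-signing-key'),
        { name: 'HMAC', hash: 'SHA-256' },
        false,
        ['sign']
    )
    const signature = await crypto.subtle.sign('HMAC', key, toBytes('string-to-sign'))
    console.log(new Uint8Array(signature).byteLength) // 32 bytes for SHA-256
}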

Fix endpoint argument in `signature.sign` examples

We recently changed how the library handles endpoints to support third-party S3-compatible providers, as per #21, #57, etc.

We have since noticed that the examples don't reflect the usage of the new endpoint argument of the signature.sign and signature.presign methods. We should update those, as well as the documentation, to reflect it 👍🏻

The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.

I have the following configuration while attempting to use the AWS REST API to invoke a Lambda.

import http from "k6/http";
import {check, group, sleep} from "k6";
import {randomIntBetween} from "https://jslib.k6.io/k6-utils/1.4.0/index.js"
import {AWSConfig, SignatureV4} from "https://jslib.k6.io/aws/0.9.0/aws.js"

const awsConfig = new AWSConfig({
    region: "eu-west-1",
    accessKeyId: __ENV.AWS_ACCESS_KEY_ID,
    secretAccessKey: __ENV.AWS_SECRET_ACCESS_KEY,
    sessionToken: __ENV.AWS_SESSION_TOKEN,
})

const lambdaInvoke = () => {
    const signer = new SignatureV4({
        service: 'lambda',
        region: awsConfig.region,
        credentials: {
            accessKeyId: awsConfig.accessKeyId,
            secretAccessKey: awsConfig.secretAccessKey,
            sessionToken: awsConfig.sessionToken,
        }
    })

    const signedRequest = signer.sign({
        method: 'POST',
        protocol: 'https',
        hostname: 'lambda.eu-west-1.amazonaws.com',
        path: '/2015-03-31/functions/FUNCTION-NAME/invocations',
        headers: {
            'content-type': 'application/json'
        },
        applyChecksum: true,
        uriEscapePath: true
    })
    const data = {
        "hi": 'K6'
    }
    return http.post(signedRequest.url, data, {headers: signedRequest.headers});
};

export default () => {
    group("Spike Test Set", () => {
        const startDataRes = lambdaInvoke();
        console.log(startDataRes)
        sleep(randomIntBetween(0, 1));

        check([startDataRes], {
            "status is 200": (r) => r.every((res) => res.status === 200),
        });
    });
};

It fails to generate the proper signature. The provided access keys and session tokens are valid. Is there some missing configuration that I have missed?

Sending event with eventbridge client returns undefined when both successful or failure

While using the EventBridgeClient to put events, the response from the request is undefined, which makes it impossible to know whether or not the event was successfully sent without having a CloudWatch log group set up. The same can be said about testing the sending of events with localstack.

The output below is what is received after using the AWS SDK to send an event; using the AWS CLI, the output is a JSON object of the entries.

Events sent successfully: {
  '$metadata': {
    httpStatusCode: 200,
    requestId: '7d7d7230-2ba9-47df-9365-e9c5fe559c5b',
    extendedRequestId: undefined,
    cfId: undefined,
    attempts: 1,
    totalRetryDelay: 0
  },
  Entries: [ { EventId: '7b8f902c-b455-cd10-ca02-a70759febd56' } ],
  FailedEntryCount: 0
}

Are there any ways of returning the response rather than undefined?

lowercase Kinesis interfaces and classes property names

As we ported the initial pull request implementing support for Kinesis, we completely forgot to adjust the naming in the various interfaces and classes to our pre-existing convention: lowercase first letter.

We should make sure to change that for consistency purposes, and also because it results in an API that "feels" strange. (See the illustration below.)
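To illustrate the convention, a hypothetical response type before and after the change (the field names are examples, not the actual Kinesis types):

// Ported as-is, with AWS-style uppercase-first property names:
interface StreamDescriptionPorted {
    StreamName: string
    StreamStatus: string
}

// Aligned with the library's lowercase-first convention:
interface StreamDescription {
    streamName: string
    streamStatus: string
}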

Unable to get example to work

I'm trying to get a simple example working that uploads data to an S3 bucket, but it is not working. It doesn't give any error message.

import http from "k6/http";
import { check } from "k6";
import { AWSConfig, S3Client } from "https://jslib.k6.io/aws/0.3.0/s3.js";

const awsConfig = new AWSConfig(
  __ENV.AWS_REGION,
  __ENV.AWS_ACCESS_KEY_ID,
  __ENV.AWS_SECRET_ACCESS_KEY
);

const s3 = new S3Client(awsConfig);

export default function () {
  let res = http.get("https://test-api.k6.io");
  check(res, { "is status 200": (r) => r.status === 200 });
}

export function handleSummary(data) {
  s3.putObject("myBucket", "myResultsKey", JSON.stringify(data));
}

When I run the above, the output is as follows:

$ k6 run s3_test.js

  [k6 ASCII banner]

  execution: local
  script: s3_test.js
  output: -

  scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
           * default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)

running (00m00.1s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs  00m00.1s/10m0s  1/1 iters, 1 per VU

Create bucket?

Does it make sense to have a method for creating an S3 bucket? Considering we have a method for listing them...

ssm error: Object has no member 'json'

I found an issue when trying to call the getParameter method in ssm.js. It says Uncaught (in promise) TypeError: Object has no member 'json'. Any help is really appreciated.

I guess the culprit is this (taken from the bundled https://jslib.k6.io/aws/0.11.0/ssm.js):

value: (t = Oe().mark((function e(t) {
                    var r, n, o, i = arguments;
                    return Oe().wrap((function(e) {
                        for (;;) switch (e.prev = e.next) {
                            case 0:
                                return r = i.length > 1 && void 0 !== i[1] && i[1], n = this.signature.sign({
                                    method: this.method,
                                    endpoint: this.endpoint,
                                    path: "/",
                                    headers: je(je({}, this.commonHeaders), {}, Re({}, C, "AmazonSSM.GetParameter")),
                                    body: JSON.stringify({
                                        Name: t,
                                        WithDecryption: r
                                    })
                                }, {}), e.next = 4, de().asyncRequest(this.method, n.url, n.body, {
                                    headers: n.headers
                                });
                            case 4:
                                return o = e.sent, this._handle_error(He.GetParameter, o), e.abrupt("return", Ne.fromJSON(o.json())); //this is the culprit
                            case 7:
                            case "end":
                                return e.stop()
                        }
                    }), e, this)
                }))

Issue with AWS API Gateway signed requests in k6 tests

Dear Team,

We are currently working on integrating k6 tests with our private AWS API Gateway. While doing so, we encountered an issue with performing signed requests using k6 internal libraries.

After some investigation, we found a workaround by modifying the signature implementation in the signature.ts file. Specifically, we changed the request header to accommodate a different hostname value compared to the host value in the header.

The original line in signature.ts:

request.headers[constants.HOST_HEADER] = request.endpoint.hostname

was replaced with:

request.headers[constants.HOST_HEADER] = request.headers.host

This change allowed us to successfully perform requests from the open-source k6 tool to our API. Currently, we have forked the k6-jslib-aws repository. However, we would like to seek clarification on whether this modification indicates an issue in the implementation. Should the change be made on your end, or is there something we may be doing incorrectly on our side?

Your guidance on this matter would be greatly appreciated.

Ensure CI runs on each PR

At the moment, the GitHub action workflow building the project and running tests is not triggered by PRs. We should make sure it is. The draft PR #75 should address it.

TypeError: Value is not an object: undefined

I'm trying to post to SNS and thought that your SignatureV4 example might work. However, even while using the example provided, I get the error TypeError: Value is not an object: undefined from the line below.

const signer = new SignatureV4({
        service: 'sns',
        region: awsConfig.region,
        credentials: {
            accessKeyId: awsConfig.accessKeyId,
            secretAccessKey: awsConfig.secretAccessKey,
            sessionToken: awsConfig.sessionToken,
        },
    })

I've literally just copied the example, except dropping the first .js from the import. Does this work for SNS, or what am I missing?
As I said, I'm just using the defaults and haven't changed it to POST etc. yet.

import http from 'k6/http'

import { AWSConfig, SignatureV4 } from 'https://jslib.k6.io/aws/0.7.1/kms.js'

const awsConfig = new AWSConfig({
  region: "eu-west-1",
  accessKeyId: "example",
  secretAccessKey: "example",
  sessionToken: "example",
})

export default function () {
    /**
     * In order to be able to sign an HTTP request's,
     * we need to instantiate a SignatureV4 object.
     */
    const signer = new SignatureV4({
        service: 'sns',
        region: awsConfig.region,
        credentials: {
            accessKeyId: awsConfig.accessKeyId,
            secretAccessKey: awsConfig.secretAccessKey,
            sessionToken: awsConfig.sessionToken,
        },
    })

    /**
     * The sign operation will return a new HTTP request with the
     * AWS signature v4 protocol headers added. It returns an Object
     * implementing the SignedHTTPRequest interface, holding a `url` and a `headers`
     * properties, ready to use in the context of k6's http call.
     */
    const signedRequest = signer.sign(
        /**
         * HTTP request description
         */
        {
            /**
             * The HTTP method we will use in the request.
             */
            method: 'GET',

            /**
             * The network protocol we will use to make the request.
             */
            protocol: 'https',

            /**
             * The hostname of the service we will be making the request to.
             */
            hostname: 'test-jslib-aws.s3.us-east-1.amazonaws.com',

            /**
             * The path of the request.
             */
            path: '/bonjour.txt',

            /**
             * The headers we will be sending in the request.
             */
            headers: {},

            /**
             * Whether the URI should be escaped or not.
             */
            uriEscapePath: false,

            /**
             * Whether or not the body's hash should be calculated and included
             * in the request.
             */
            applyChecksum: false,
        }
    )

    http.get(signedRequest.url, { headers: signedRequest.headers })
}

Lib does not honor scheme key in AWSConfig Object

I'm trying to test k6 locally with a MinIO standalone server installed on the same machine.

The AWSConfig object supports the normal S3 scheme key, which can be set to 'http' or 'https':

    /**
     * The HTTP scheme to use when connecting to AWS.
     *
     * @type {HTTPScheme} ['https']
     */
    scheme: HTTPScheme = 'https'

Here is the type declaration:

/**
 * Type representing HTTP schemes
 */
export type HTTPScheme = 'http' | 'https'

However, it appears that the functions do not honor this setting. For example:

    listBuckets(): Array<S3Bucket> {
        const method = 'GET'

        const signedRequest: SignedHTTPRequest = this.signature.sign(
            {
                method: 'GET',
                protocol: 'https',  <============= Hard Coded
                hostname: this.host,
                path: '/',
                headers: {},
            },
            {}
        )

I have tried setting MinIO up with SSL and a self-signed cert, but the lib reports a TLS error (pretty normal).

Is there a way to achieve this? I would expect this to be a normal dev scenario, and I am hoping for a way to make this work.

Many thanks

Error: some headers are not signed with version 0.7.0

As reported in https://community.k6.io/t/s3serviceerror-using-s3client/5586, version 0.7.0 seems to have introduced an error.

    ERRO[0000] S3ServiceError: There were headers present in the request which were not signed
    running at value (webpack://k6-jslib-aws/./src/internal/s3.ts:289:16(55))
    default at value (webpack://k6-jslib-aws/./src/internal/s3.ts:62:27(40))
        at file:///Users/immavalls/Documents/grafana/github/k6-jslib-aws/examples/s3.js:23:20(3)
        at native  executor=per-vu-iterations scenario=default source=stacktrace

When following the s3 example, with the right credentials, it fails with the previous error.

The same example importing version 0.6.0 (before the refactoring to support the AWS signature v4 sign and presign procedures) works:

import { AWSConfig, S3Client } from 'https://jslib.k6.io/aws/0.6.0/s3.js';

After some digging, it seems that we need to add the host header in https://github.com/grafana/k6-jslib-aws/blob/main/src/internal/signature.ts#L102.

And we need to fix the test https://github.com/grafana/k6-jslib-aws/blob/main/tests/internal/new_signature.js#L82 by adding delete request.hostname. Otherwise, the request adds the host to the headers.

I've also spotted that we have accessKeyId: this.awsConfig.accessKeyID in https://github.com/grafana/k6-jslib-aws/blob/main/src/internal/s3.ts#L27, and it should be accessKeyId: this.awsConfig.accessKeyId. Otherwise, the AWS4-HMAC-SHA256 Credential is missing the accessKeyId.

Document and improve SQS errors

While documenting it, I noticed that the SQS service still needs to standardize its error handling. Let's try to improve that and align it with what the other services are doing (see the sketch below).

While we're at it, it would be welcome to document the client's methods in the same fashion as with the other services.
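For illustration, a minimal sketch of an error type aligned with that pattern, modeled on the S3ServiceError and SystemsManagerServiceError names seen elsewhere in this document (the constructor shape is an assumption, not the actual code):

// Hypothetical error type mirroring the other services' pattern.
export class SQSServiceError extends Error {
    code: string
    operation: string

    constructor(message: string, code: string, operation: string) {
        super(message)
        this.name = 'SQSServiceError'
        this.code = code
        this.operation = operation
    }
}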

Use path-style URL for AWS Config

Hi,

I'm trying to use this library to connect to my self-hosted MinIO object storage, which is S3-compatible. I connected to my server successfully, but my server uses path-style URLs, so I can't get the buckets on my server. I haven't found any way to configure this. Is there any way to do it? For example, in the AWS S3 JavaScript SDK, this field is named forcePathStyle.

Thanks.

Add support for DynamoDB

I have a working DynamoDB client. If you find it useful, I can open an MR. I used it to clean up resources created by the load test and to prepare test data.

Module incompatible with S3-compatible services

So far I'm not having much luck getting this module working with S3-compatible APIs such as Cloudflare R2. I notice the constructor for the config does not support the traditional endpoint param that you would see in other APIs. Is this intentional?

Make SQS data structure more reusable

At the moment, the SQSClient implementation has dedicated types for each response of each of the service's endpoints. Ideally, we should align with what's been done in the other service clients and replace those specific types with more general ones representing the concepts at play, such as a Queue or a Message (see the sketch below).
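For illustration, a sketch of what such shared types could look like (the field names are hypothetical, loosely modeled on the SQS API rather than the library's actual types):

// A queue, reusable across list and create operations.
export interface Queue {
    url: string
    name: string
}

// A message, reusable across send and potential future receive operations.
export interface Message {
    id: string
    body: string
    bodyMD5?: string
}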

Omitting the `path` property in the `sign` method's first argument should use a default value instead

As reported by a user, when one omits the path property of the sign method's first argument, it leads to an error:

ERRO[0000] TypeError: Cannot read property 'split' of undefined or null
        at value (webpack://k6-jslib-aws/./src/internal/signature.ts:404:37(12))
        at value (webpack://k6-jslib-aws/./src/internal/signature.ts:300:40(32))
        at value (webpack://k6-jslib-aws/./src/internal/signature.ts:146:40(213))
        at file:///.../k6-script.js:207:2(53)  executor=per-vu-iterations scenario=default source=stacktrace

We should probably make this property optional and use a default value instead, as it's quite common for requests not to need to override the path (see the sketch below).
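A minimal sketch of the kind of defaulting that could be applied at the top of sign (hypothetical; the actual fix would live in signature.ts):

// Fall back to the root path when the caller omits it, so the later
// path normalization code never receives undefined.
request.path = request.path || '/'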

add ability to poll SQS

If possible, it would be fantastic to add the ability for the AWS library to poll SQS messages from a given queue, rather than just send them.

Prefix tests names with their service name

At the moment, some end-to-end tests fail to report the component or service they test. This makes interpreting the test results harder. Let's add the component or service as a prefix to their output and, for instance, go from:

get secret parameter

to:

[ssm] get secret parameter

Support for IRSA

Hi,

We are currently trying to upload our HTML reports to S3 from our Kubernetes cluster, but it looks like IRSA (IAM Roles for Service Accounts) is not supported right now.

It would be great if a Kubernetes-native tool supported common authentication methods.

Thanks

S3Client returns null when `K6_DISCARD_RESPONSE_BODIES` is set to true

A user recently reported the following issue in the k6 open-source support forum:

k6's S3Client returns null with K6_DISCARD_RESPONSE_BODIES=true.
Use case:
I don't want to discard the body of the S3 client responses I use to create test data, but for the real load test responses, just checking the HTTP status is sufficient.

We should investigate this and figure out whether this is the intended behavior, or whether we need to adjust the behavior of the jslib (one possible direction is sketched below).
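One direction worth checking: k6 lets individual requests opt back into response bodies via the responseType parameter, which overrides the global discardResponseBodies setting. A sketch of the idea in a plain script (whether the jslib's internal requests should set this is exactly what needs investigating):

import http from 'k6/http'

export const options = {
    // Discard bodies globally for the load-test traffic...
    discardResponseBodies: true,
}

export default function () {
    // ...but explicitly request this body, as the S3Client's
    // internal requests could do for their own calls.
    const res = http.get('https://test.k6.io/', { responseType: 'text' })
    console.log(res.body !== null) // true despite the global setting
}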

Is the AWS S3 library S3-compatible?

I was looking to load test S3-compatible APIs, but I think the library may be locked into AWS proper, as I can't seem to configure the respective endpoint correctly.

A configuration like the following:

const awsConfig = new AWSConfig({
  region: __ENV.REGION,
  endpoint: `https://${__ENV.REGION}.digitaloceanspaces.com`,
  accessKeyId: __ENV.ACCESS_KEY_ID,
  secretAccessKey: __ENV.SECRET_ACCESS_KEY,
});

Produces output like:

WARN[0004] Request Failed                                error="Get \"https://[bucket].s3.sfo2.https://sfo2.digitaloceanspaces.com/?list-type=2&prefix=files%2FFirst_Look%2Fpremium%2Fcurriculum%2FAugust%202023\": lookup [bucket].s3.sfo2.https: no such host"

I can change the endpoint to just .digitaloceanspaces.com, but that produces a URL like https://[bucket].s3.sfo2.digitaloceanspaces.com/, which is not correct. It produces an SSL certificate error, as the wildcard cert doesn't apply at that extra level, I believe (it would need to be *.*.sfo2).

Other S3-compatible stores like Minio wouldn't necessarily have the s3 prefix but if I controlled the domain, I could make the resolution work. I can't control Digital Ocean's domains and something like a CNAME would work in theory but it'd introduce an SSL problem that puts me back to square one.

I think my biggest hurdle to self-discovery is that this is also bundled, so I can't easily dig into the source to see if I could propose a change or find the incantation I need to alter the outbound URL. Nothing jumps out of the official documentation.

It's also possible this isn't or wouldn't be supported due to other technical reasons that I can't see.

`S3Client.listObjects` always returns an empty array

What

It's been reported that S3Client.listObjects returns an empty array. We were able to reproduce the issue and should work towards a better understanding of what happens, and a fix.

const s3 = new S3Client(awsConfig)
const objects = s3.listObjects(existingBucketNameWithExistingObjects)
console.log(objects) // []

Considerations

Ideally, we should consider introducing end-to-end tests to assert we don't break these kinds of operations in the future. localstack has been mentioned many times and seems a good option. As the signature process has unit tests asserting its correctness, we can probably afford a solution that doesn't require any authentication.

How

Needs investigation

Definition of done

For a given bucket, containing N files, a user with the appropriate permissions should be able to retrieve an array of N objects by calling S3Client.listObjects on it.

Using k6-jslib-aws

Hi, is it possible to use this library to upload or download files to/from MinIO (hosted on our own server)? Going by the examples, I am not able to see any way to use MinIO.

AWS SignatureV4 got broken after 0.9.0 version - even example in documentation produces an error

Hi,

I am trying to adopt k6 and call API Gateway APIs that need their calls signed. I immediately stumbled upon problems, and it seems that what is documented and described in the examples is no longer valid.

When I use the latest version, 0.11.0, as described in the documentation (https://grafana.com/docs/k6/latest/javascript-api/jslib/aws/signaturev4/), I get an error:

ERRO[0001] TypeError: Cannot read property 'hostname' of undefined running at value (webpack://k6-jslib-aws/./src/internal/signature.ts:102:66(87))

Minimal example code:

import http from 'k6/http';
import { sleep } from 'k6';
import { AWSConfig, SignatureV4 } from 'https://jslib.k6.io/aws/0.11.0/aws.js'

const AWS_ACCESS_KEY_ID = __ENV.AWS_ACCESS_KEY_ID;
const AWS_SECRET_ACCESS_KEY = __ENV.AWS_SECRET_ACCESS_KEY;
const AWS_SESSION_TOKEN = __ENV.AWS_SESSION_TOKEN;
const AWS_REGION = "us-east-1"

const awsConfig = new AWSConfig({
    region: AWS_REGION,
    accessKeyId: AWS_ACCESS_KEY_ID,
    secretAccessKey: AWS_SECRET_ACCESS_KEY,
    sessionToken: AWS_SESSION_TOKEN,
})

export const options = {
    vus: 1,
    duration: '5s',
};

export default function () {
    const signer = new SignatureV4({
        region: awsConfig.region,
        service: 's3',
        credentials: {
            accessKeyId: awsConfig.accessKeyId,
            secretAccessKey: awsConfig.secretAccessKey,
            sessionToken: awsConfig.sessionToken,
        },
        uriEscapePath: true,
        applyChecksum: true,
    })

    const req = {
        method: 'GET',
        protocol: 'https',
        hostname: 'mybucket.s3.us-east-1.amazonaws.com',
        path: '/myfile.txt',
        headers: {},
    }

    const signedRequest = signer.sign(req)
    const res = http.get(signedRequest.url, { headers: signedRequest.headers })

    console.log(res.body)
}

So the sign function itself seems to fail.

The same script with version 0.9.0 does not produce that error, so clearly something has changed, and the existing documentation/examples no longer work out of the box.

Tested on Windows under WSL with k6 v0.50.0 (commit/f18209a5e3, go1.21.8, linux/amd64), and with the same k6 version on OS X from Homebrew (go1.22.1, darwin/amd64).

make signature tests 🍏

In the process of changing the way the lib is bundled and moving to TypeScript, we broke the signature tests. They are currently failing at importing signature.js from src, because it... doesn't exist anymore.

We should instead import signature.js from the build, and make sure that npm test runs the webpack command before actually running the tests (see the note below).
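One way to wire that up, assuming the existing webpack npm script: npm automatically runs a script named pretest before test, so adding "pretest": "npm run webpack" to the scripts section of package.json would rebuild the bundle before every test run.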
