s3-lite-client

This is a lightweight S3 client for Deno and other modern JavaScript runtimes. It is designed to offer all the key features you may need, with no dependencies outside of the Deno standard library. It does not use any Deno-specific features, so it should work with any runtime that supports the fetch API, web streams API, and ES modules (ESM).

This client is 100% MIT licensed, and is derived from the excellent MinIO JavaScript Client.

Supported functionality:

  • Authenticated or unauthenticated requests
  • List objects: for await (const object of client.listObjects(options)) { ... }
    • Handles pagination transparently
    • Supports filtering using a prefix
    • Supports grouping using a delimiter (use client.listObjectsGrouped(...); see the sketch after this list)
  • Check if an object exists: client.exists("key")
  • Get metadata about an object: client.statObject("key")
  • Download an object: client.getObject("key", options)
    • This just returns a standard HTTP Response object, so for large files, you can opt to consume the data as a stream (use the .body property).
  • Download a partial object: client.getPartialObject("key", options)
    • Like getObject, this also supports streaming the response if you want to.
  • Upload an object: client.putObject("key", streamOrData, options)
    • Can upload from a string, Uint8Array, or ReadableStream
    • Can split large uploads into multiple parts and upload the parts in parallel.
    • Can set custom headers, ACLs, and other metadata on the new object (example below).
  • Copy an object: client.copyObject({ sourceKey: "source" }, "dest", options)
    • Can copy between different buckets.
  • Delete an object: client.deleteObject("key")
  • Create pre-signed URLs: client.presignedGetObject("key", options) or client.getPresignedUrl(method, "key", options)
  • Check if a bucket exists: client.bucketExists("bucketName")
  • Create a new bucket: client.makeBucket("bucketName")
  • Remove a bucket: client.removeBucket("bucketName")
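
For example, here is a minimal sketch of grouped listing, which yields common prefixes ("folders") alongside objects. The CommonPrefix entry shape (type and prefix fields) is an assumption modelled on S3's ListObjectsV2 response; verify it against the docs:

// List the top-level "folders" and objects in the bucket, grouped by "/":
for await (const entry of client.listObjectsGrouped({ delimiter: "/" })) {
  if (entry.type === "CommonPrefix") {
    console.log("Folder:", entry.prefix); // entry shape assumed from S3's ListObjectsV2 response
  } else {
    console.log("Object:", entry.key, entry.size);
  }
}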

Installation

  • Deno: deno add @bradenmacdonald/s3-lite-client
  • Deno (no install): import { S3Client } from "jsr:@bradenmacdonald/s3-lite-client";
  • NPM: npx jsr add @bradenmacdonald/s3-lite-client
  • Yarn: yarn dlx jsr add @bradenmacdonald/s3-lite-client
  • pnpm: pnpm dlx jsr add @bradenmacdonald/s3-lite-client
  • Bun: bunx jsr add @bradenmacdonald/s3-lite-client
  • Browser:
    <script type="module">
      import { S3Client } from "https://esm.sh/jsr/@bradenmacdonald/s3-lite-client";
      // Or:
      const { S3Client } = await import('https://esm.sh/jsr/@bradenmacdonald/s3-lite-client');
    </script>

Note: if you're using Node.js, this only works on Node 19+.

Usage Examples (Quickstart)

List data files from a public data set on Amazon S3:

import { S3Client } from "@bradenmacdonald/s3-lite-client";

const s3client = new S3Client({
  endPoint: "s3.us-east-1.amazonaws.com",
  port: 443,
  useSSL: true,
  region: "us-east-1",
  bucket: "openalex",
  pathStyle: false,
});

// Log data about each object found under the 'data/concepts/' prefix:
for await (const obj of s3client.listObjects({ prefix: "data/concepts/" })) {
  console.log(obj);
}
// {
//   type: "Object",
//   key: "data/concepts/updated_date=2024-01-25/part_000.gz",
//   etag: "2c9b2843c8d2e9057656e1af1c2a92ad",
//   size: 44105,
//   lastModified: 2024-01-25T22:57:43.000Z
// },
// ...

// Or, to get all the keys (paths) as an array:
const keys = await Array.fromAsync(s3client.listObjects(), (entry) => entry.key);
// keys = [
//  "data/authors/manifest",
//  "data/authors/updated_date=2023-06-08/part_000.gz",
//  ...
// ]

Uploading and downloading a file using a local MinIO server:

import { S3Client } from "@bradenmacdonald/s3-lite-client";

// Connecting to a local MinIO server:
const s3client = new S3Client({
  endPoint: "localhost",
  port: 9000,
  useSSL: false,
  region: "dev-region",
  accessKey: "AKIA_DEV",
  secretKey: "secretkey",
  bucket: "dev-bucket",
  pathStyle: true,
});

// Upload a file:
await s3client.putObject("test.txt", "This is the contents of the file.");

// Now download it
const result = await s3client.getObject("test.txt");
// and stream the results to a local file:
const localOutFile = await Deno.open("test-out.txt", { write: true, createNew: true });
await result.body!.pipeTo(localOutFile.writable);
// or instead of streaming, you can consume the whole file into memory by awaiting
// result.text(), result.blob(), result.arrayBuffer(), or result.json()
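
getPartialObject works the same way for byte ranges; a minimal sketch, assuming offset and length are the range options:

// Download only the first 100 bytes of the object:
const partial = await s3client.getPartialObject("test.txt", { offset: 0, length: 100 });
console.log(await partial.text()); // or stream partial.body, exactly as with getObject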

Set ACLs, Content-Type, custom metadata, etc. during upload:

await s3client.putObject("key", streamOrData, {
  metadata: {
    "x-amz-acl": "public-read",
    "x-amz-meta-custom": "value",
  },
})
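
For large uploads from a ReadableStream, the feature list above mentions parallel multipart uploads. A minimal sketch of streaming a local file, where the size and partSize option names are assumptions to verify against the docs:

// Stream a large local file to the bucket in 10 MiB parts:
const file = await Deno.open("big-video.mp4", { read: true });
const fileInfo = await file.stat();
await s3client.putObject("videos/big-video.mp4", file.readable, {
  size: fileInfo.size, // knowing the total size up front lets the client plan the parts
  partSize: 10 * 1024 * 1024, // 10 MiB per part (S3 requires parts of at least 5 MiB)
});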

For more examples, check out the tests in integration.ts

Developer notes

To run the tests, please use:

deno lint && deno test

To format the code, use:

deno fmt

To run the integration tests, first start MinIO with this command:

docker run --rm -e MINIO_ROOT_USER=AKIA_DEV -e MINIO_ROOT_PASSWORD=secretkey -e MINIO_REGION_NAME=dev-region -p 9000:9000 -p 9001:9001 --entrypoint /bin/sh minio/minio:RELEASE.2021-10-23T03-28-24Z -c 'mkdir -p /data/dev-bucket && minio server --console-address ":9001" /data'

Then while MinIO is running, run

deno test --allow-net integration.ts

If you encounter issues and need to debug what MinIO is seeing, run these two commands:

mc alias set localdebug http://localhost:9000 AKIA_DEV secretkey
mc admin trace --verbose --all localdebug

s3-lite-client's People

Contributors

bradenmacdonald, chromakode, karfau, nestarz, thepocp, wyozi


s3-lite-client's Issues

Bad x-amz-content-sha256 on large upload

Hello! Thanks for your work!

Do you have any idea why I get this error on large files (> 100 MB)? I have tried multiple partSize values and set the size field to the exact content length of the uploaded file. I am streaming the file using a ReadableStream, which works fine for small files.

error: Uncaught (in promise) Error: The provided 'x-amz-content-sha256' header does not match what was computed.
    return new ServerError(response.status, code, message, { key, bucketName, resource, region });
           ^
    at Module.parseServerError (https://raw.githubusercontent.com/nestarz/deno-s3-lite-client/patch-1/errors.ts:85:12)
    at eventLoopTick (ext:core/01_core.js:165:11)
    at async Client.makeRequest (https://raw.githubusercontent.com/nestarz/deno-s3-lite-client/patch-1/client.ts:281:23)

On a side note, I also have issues with filenames that contain spaces or characters like parentheses; if I slugify them, it works well.

Can this lib be published to npm as well ?

I use this lib in a Deno function and wanted to use the same in Node code, but it's not available there. If I just copy the code it works, so I suppose you could publish it as well.

Bucket creation functionality

I'm reaching out to suggest adding a makeBucket function to the library. As it stands, I couldn't find this feature. If this function is already available and I've overlooked it, I'd appreciate guidance on its usage. If not, its inclusion could greatly benefit developers who need to manage buckets programmatically.

I'm grateful for your efforts in developing this library, and I'm willing to contribute to implementing this feature.
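
Bucket creation has since been added (see the feature list above). A minimal usage sketch:

// Create the bucket only if it doesn't already exist:
if (!(await s3client.bucketExists("my-new-bucket"))) {
  await s3client.makeBucket("my-new-bucket");
}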

Abort the request with AbortSignal

I want to pass an AbortSignal into s3.putObject() and s3.deleteObject(), e.g. s3.deleteObject('myObj', { signal }).

That way I can cancel the upload if the user cancels the request:

s3.putObject('myObj', bytes, { signal: c.req.signal })

Or I can cancel the upload after a certain period of time:

s3.putObject('myObj', bytes, { signal: AbortSignal.timeout(5000) })

I took a stab at making the change myself, but it would require changing too many files I'm not familiar with. This is a start:

diff --git a/client.ts b/client.ts
index 2e950b7..358aed1 100644
--- a/client.ts
+++ b/client.ts
@@ -212,7 +212,7 @@ export class Client {
   /**
    * Make a single request to S3
    */
-  public async makeRequest({ method, payload, ...options }: {
+  public async makeRequest({ method, payload, signal, ...options }: {
     method: "POST" | "GET" | "PUT" | "DELETE" | string;
     headers?: Headers;
     query?: string | Record<string, string>;
@@ -229,6 +229,8 @@ export class Client {
      * the caller can read it.
      */
     returnBody?: boolean;
+    /** Signal to abort the request. */
+    signal?: AbortSignal;
   }): Promise<Response> {
     const date = new Date();
     const { headers, host, path } = this.buildRequestOptions(options);
@@ -273,6 +275,7 @@ export class Client {
       method,
       headers,
       body: payload,
+      signal,
     });
 
     if (response.status !== statusCode) {

To complete this, all functions that call makeRequest() need to accept a signal, and then pass it down.
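
If the signal option were threaded through as proposed, calling code could combine a timeout with manual cancellation. A sketch of the intended usage (the signal option on putObject is the proposed change, not the current API, and AbortSignal.any requires a recent runtime):

// Proposed usage: cancel the upload on user action or after 5 seconds, whichever comes first.
const controller = new AbortController();
const signal = AbortSignal.any([controller.signal, AbortSignal.timeout(5000)]);

try {
  await s3.putObject("myObj", bytes, { signal });
} catch (err) {
  if (err instanceof DOMException && (err.name === "AbortError" || err.name === "TimeoutError")) {
    console.log("Upload aborted or timed out");
  } else {
    throw err;
  }
}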

putObject() response is not consumed, leaking resources

error: Leaking resources:
  - A fetch response body (rid 15) was created during the test, but not consumed during the test. Consume or close the response body `ReadableStream`, e.g `await resp.text()` or `await resp.body.cancel()`.

Strip https on endPoint?

Hi, thanks for this neat little package. I was using it with Cloudflare R2 (the S3-compatible storage offered by Cloudflare). By default, the URL given by R2 includes https://, which makes the s3client throw an error on this line: https://github.com/bradenmacdonald/s3-lite-client/blob/main/client.ts#L164

The shape of an R2 URL looks like this: https://<ACCOUNT_ID>.r2.cloudflarestorage.com

I removed the https:// from my URL and everything works fine. Would it be a good idea to strip https:// (or maybe even http://) before doing the indexOf check?
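
A sketch of the suggested normalization, stripping the scheme before the hostname check and inferring useSSL from it (normalizeEndPoint is a hypothetical helper, not part of the library):

// Hypothetical helper: strip an http(s):// scheme from the endpoint before validation.
function normalizeEndPoint(endPoint: string): { endPoint: string; useSSL?: boolean } {
  const match = endPoint.match(/^(https?):\/\//);
  if (!match) return { endPoint };
  return {
    endPoint: endPoint.slice(match[0].length),
    useSSL: match[1] === "https", // an https:// prefix implies useSSL: true
  };
}

normalizeEndPoint("https://<ACCOUNT_ID>.r2.cloudflarestorage.com");
// → { endPoint: "<ACCOUNT_ID>.r2.cloudflarestorage.com", useSSL: true }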

Make it possible to getObject encrypted objects

Hi, I would like to specify the following parameters in order to GetObject an encrypted object.

{
  "x-amz-server-side-encryption-customer-algorithm": "AES256",
  "x-amz-server-side-encryption-customer-key": cryptoKey,
}
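
Note that S3's SSE-C protocol also requires a third header carrying the MD5 of the key, so the full header set looks like the sketch below; the two key constants are placeholders for values you must supply, and how these headers would be passed through getObject is exactly what this issue is asking for:

// Placeholders: base64 of the raw AES-256 key, and base64 of its MD5 digest.
declare const cryptoKeyBase64: string;
declare const cryptoKeyMD5: string;

const sseHeaders = {
  "x-amz-server-side-encryption-customer-algorithm": "AES256",
  "x-amz-server-side-encryption-customer-key": cryptoKeyBase64,
  "x-amz-server-side-encryption-customer-key-MD5": cryptoKeyMD5,
};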

requestResource in canonical string generation should be urlencoded

Trying to retrieve a resource from S3, e.g. /my/folder/työnantaja.pdf, fails because the ö character is not URL-encoded properly during canonical string generation.

Passing it through encodeURI (i.e. canonical.push(encodeURI(requestResource));) makes the above request work.

Don't ask why paths like this are stored in our s3 bucket 😬
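
For illustration, encodeURI percent-encodes the non-ASCII character while leaving the path separators intact:

console.log(encodeURI("/my/folder/työnantaja.pdf"));
// → "/my/folder/ty%C3%B6nantaja.pdf" (ö becomes its UTF-8 byte sequence %C3%B6)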

Presigned URL?

Awesome project, glad to see more Deno specific modules, especially something so tiny!

I'm in need of pre-signed URLs here too; all the building blocks seem to be in the signing code already.

Is this something you can add?
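
Pre-signed URLs have since been added (see the feature list above). A usage sketch, where the expirySeconds option name is an assumption to verify against the docs:

// A URL that lets anyone GET the object for the next hour:
const downloadUrl = await s3client.presignedGetObject("test.txt", { expirySeconds: 3600 });

// The generic form also covers other methods, e.g. a pre-signed upload:
const uploadUrl = await s3client.getPresignedUrl("PUT", "test.txt", { expirySeconds: 3600 });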

putObject strange result

I tested TEXT: .putObject("test.txt", 'ccc', { "Content-Type": "text/plain" }) works fine,

but when I tested an IMAGE, the result on the browser page (or the downloaded file) does not show the image. Are there any required arguments that I am missing? Below is sample code with working image data.

I tried combinations with and without the 'data:image/png;base64,.....' prefix at the beginning of the data argument; also no luck.

There's no error; a success response is returned with an etag.

.putObject({test.png,'iVBORw0KGgoAAAANSUhEUgAAAQAAAAEACAIAAADTED8xAAADMElEQVR4nOzVwQnAIBQFQYXff81RUkQCOyDj1YOPnbXWPmeTRef+/3O/OyBjzh3CD95BfqICMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMK0CMO0TAAD//2Anhf4QtqobAAAAAElFTkSuQmCC',{"Content-Type":"image/png"}})
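
The likely cause is that putObject stores exactly the bytes it is given, so uploading a base64 string stores text rather than a PNG. Also note that putObject takes three positional arguments, and the Content-Type likely belongs under a metadata key, following the upload example earlier in this README. A sketch of decoding first:

// Decode the base64 payload (no data: prefix) into raw bytes before uploading:
const pngBase64 = "iVBORw0KGgoAAAANSUhEUgAA....."; // the image data from above, truncated here
const bytes = Uint8Array.from(atob(pngBase64), (c) => c.charCodeAt(0));

await s3client.putObject("test.png", bytes, {
  metadata: { "Content-Type": "image/png" },
});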

copyObject

Thank you for adding copyObject. Could it please have the ability to set the metadata options?

Amazon and redirects

When I used an EU region for Amazon, it returned a 301 redirect response, which Deno followed automatically. But since you only accept a 200 response, the request fails; I'm not sure if this should be a setting or not?

443 port breaks canonical string

If port 443 is provided in ClientOptions, it is added to the canonical request and signed as such. However, S3's canonical request doesn't include the port in this case, so the signatures don't match.

S3's expected canonical request (given in the S3 error):

<CanonicalRequest>GET
/test/example.pdf

host:s3.eu-north-1.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20230303T114916Z

host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855</CanonicalRequest>

Library-generated canonical request:

GET
/test/example.pdf

host:s3.eu-north-1.amazonaws.com:443
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20230303T114916Z

host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
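
A sketch of the usual fix, omitting the port from the signed Host header when it is the default for the scheme (this mirrors how HTTP clients form Host, not this library's exact code; hostHeader is a hypothetical helper):

// Omit default ports (443 for HTTPS, 80 for HTTP) from the signed Host header:
function hostHeader(endPoint: string, port: number, useSSL: boolean): string {
  const isDefaultPort = (useSSL && port === 443) || (!useSSL && port === 80);
  return isDefaultPort ? endPoint : `${endPoint}:${port}`;
}

hostHeader("s3.eu-north-1.amazonaws.com", 443, true); // "s3.eu-north-1.amazonaws.com"
hostHeader("localhost", 9000, false); // "localhost:9000"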

Can we set ACLs?

Previously I was doing the following using the AWS SDK:

const uploadParams = {
  Body: createReadStream(),
  Key: `images/${user.id}-${Date.now()}${extention}`,
  Bucket: "beep",
  ACL: "public-read"
};

const result = await s3.upload(uploadParams).promise();

Can I set an ACL using this package?
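
Yes: as the upload example earlier in this README shows, an ACL can be set through the x-amz-acl metadata header. An equivalent of the AWS SDK call above, where fileBytes stands in for the file contents, since this client accepts a string, Uint8Array, or ReadableStream rather than a Node read stream:

await s3client.putObject(`images/${user.id}-${Date.now()}${extention}`, fileBytes, {
  metadata: { "x-amz-acl": "public-read" },
});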
