gofakes3's Introduction

AWS (GOFAKE)S3

AWS S3 fake server and testing library for extensive S3 test integrations: run it as a standalone test server (for example, to test AWS Lambda functions that access S3), or embed it as a simple, convenient S3 mock and test server.

What to use it for?

We're using it for local development of S3-dependent Lambda functions, to test AWS S3 Go implementations and access patterns, and to test browser-based direct uploads to S3 locally.

What not to use it for?

Please don't use gofakes3 as a production service. The intended use case for gofakes3 is currently to facilitate testing; it's not meant for safe, persistent access to production data at the moment.

There's no reason we couldn't set that as a stretch goal at a later date, but it's a long way down the road, especially while we have so much of the API left to implement; breaking changes are inevitable.

In the meantime, there are more battle-hardened solutions for production workloads out there, some of which are listed in the "Similar Notable Projects" section below.

How to use it?

Example (aws-sdk-go version 1)

// fake s3
backend := s3mem.New()
faker := gofakes3.New(backend)
ts := httptest.NewServer(faker.Server())
defer ts.Close()

// configure S3 client
s3Config := &aws.Config{
	Credentials:      credentials.NewStaticCredentials("YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", ""),
	Endpoint:         aws.String(ts.URL),
	Region:           aws.String("eu-central-1"),
	DisableSSL:       aws.Bool(true),
	S3ForcePathStyle: aws.Bool(true),
}
newSession := session.New(s3Config)

s3Client := s3.New(newSession)
cparams := &s3.CreateBucketInput{
	Bucket: aws.String("newbucket"),
}

// Create a new bucket using the CreateBucket call.
_, err := s3Client.CreateBucket(cparams)
if err != nil {
	// Message from an error.
	fmt.Println(err.Error())
	return
}

// Upload a new object "test.txt" with a small JSON payload to our "newbucket".
_, err = s3Client.PutObject(&s3.PutObjectInput{
	Body:   strings.NewReader(`{"configuration": {"main_color": "#333"}, "screens": []}`),
	Bucket: aws.String("newbucket"),
	Key:    aws.String("test.txt"),
})
if err != nil {
	fmt.Println(err.Error())
	return
}

// ... accessing test.txt through any S3 client is now possible
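For example, the object can be read back through the same client; a minimal sketch (add "io" to the imports):

obj, err := s3Client.GetObject(&s3.GetObjectInput{
	Bucket: aws.String("newbucket"),
	Key:    aws.String("test.txt"),
})
if err != nil {
	fmt.Println(err.Error())
	return
}
defer obj.Body.Close()

// Read the object body back out and print it.
contents, _ := io.ReadAll(obj.Body)
fmt.Println(string(contents))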

Example for V2 (aws-sdk-go-v2)

backend := s3mem.New()
faker := gofakes3.New(backend)
ts := httptest.NewServer(faker.Server())
defer ts.Close()

// The difference lies in configuring the client.

// Set up a new config
cfg, err := config.LoadDefaultConfig(
	context.TODO(),
	config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider("KEY", "SECRET", "SESSION")),
	config.WithHTTPClient(&http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}),
	config.WithEndpointResolverWithOptions(
		aws.EndpointResolverWithOptionsFunc(func(_, _ string, _ ...interface{}) (aws.Endpoint, error) {
			return aws.Endpoint{URL: ts.URL}, nil
		}),
	),
)
if err != nil {
	log.Fatal(err)
}

// Create an Amazon S3 v2 client, important to use o.UsePathStyle
// alternatively change local DNS settings, e.g., in /etc/hosts
// to support requests to http://<bucketname>.127.0.0.1:32947/...
client := s3.NewFromConfig(cfg, func(o *s3.Options) {
	o.UsePathStyle = true
})
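The client can then be exercised exactly as in the v1 example; a minimal sketch (bucket and key names are arbitrary, and the strings import is assumed):

_, err = client.CreateBucket(context.TODO(), &s3.CreateBucketInput{
	Bucket: aws.String("newbucket"),
})
if err != nil {
	log.Fatal(err)
}

// Upload an object through the path-style endpoint of the fake server.
_, err = client.PutObject(context.TODO(), &s3.PutObjectInput{
	Bucket: aws.String("newbucket"),
	Key:    aws.String("test.txt"),
	Body:   strings.NewReader("Hello World!"),
})
if err != nil {
	log.Fatal(err)
}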

Please feel free to check it out and to provide useful feedback (using GitHub issues), but be aware that this software is used internally and for local development only. Thus, it makes no guarantees about correctness, performance, or security.

There are two ways to run locally: using DNS, or using S3 path mode.

S3 path mode is the most flexible and least restrictive, but it does require that you are able to modify your client code. In Go, the modification looks like this:

config := aws.Config{}
config.WithS3ForcePathStyle(true)

S3 path mode works over the network by default for all bucket names.

If you are unable to modify the code, DNS mode can be used, but it comes with further restrictions and requires you to be able to modify your local DNS resolution.

If using localhost as your endpoint, you will need to add the following to /etc/hosts for every bucket you want to fake:

127.0.0.1 <bucket-name>.localhost

It is trickier if you want other machines to be able to use your fake S3 server as you need to be able to modify their DNS resolution as well.

Exemplary usage

Lambda Example

var AWS = require('aws-sdk');

var ep = new AWS.Endpoint('http://localhost:9000');
var s3 = new AWS.S3({endpoint: ep});

exports.handle = function (e, ctx) {
  s3.createBucket({
    Bucket: '<bucket-name>',
  }, function(err, data) {
    if (err) return console.log(err, err.stack);
    ctx.succeed(data)
  });
}

Upload Example

<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
  </head>
  <body>

  <form action="http://localhost:9000/<bucket-name>/" method="post" enctype="multipart/form-data">
    Key to upload:
    <input type="input"  name="key" value="user/user1/test/<filename>" /><br />
    <input type="hidden" name="acl" value="public-read" />
    <input type="hidden" name="x-amz-meta-uuid" value="14365123651274" />
    <input type="hidden" name="x-amz-server-side-encryption" value="AES256" />
    <input type="text"   name="X-Amz-Credential" value="AKIAIOSFODNN7EXAMPLE/20151229/us-east-1/s3/aws4_request" />
    <input type="text"   name="X-Amz-Algorithm" value="AWS4-HMAC-SHA256" />
    <input type="text"   name="X-Amz-Date" value="20151229T000000Z" />

    Tags for File:
    <input type="input"  name="x-amz-meta-tag" value="" /><br />
    <input type="hidden" name="Policy" value='<Base64-encoded policy string>' />
    <input type="hidden" name="X-Amz-Signature" value="<signature-value>" />
    File:
    <input type="file"   name="file" /> <br />
    <!-- The elements after this will be ignored -->
    <input type="submit" name="submit" value="Upload to Amazon S3" />
  </form>
</html>

Similar notable projects

Contributors

A big thank you to all the contributors, especially Blake @shabbyrobe, who pushed this little project to the next level!

Help wanted


gofakes3's Issues

sftpgo s3afero compatibility

Hello,

sftpgo supports using an S3 backend, but I've come across some compatibility issues with gofakes3 when using the s3afero backend.

I've managed to resolve the following issues through a couple of changes:

  • To create a directory, sftpgo writes a 0-byte blob with content type application/x-directory and key directory_name/. This is the way AWS recommends creating folders, but it doesn't work with a fs backend. To counter this, if we notice that the file being written is 0 bytes, we delete it and create a directory in its place (see the sketch after this list).

  • The other issues concern how sftpgo interprets folders. When changing directory, it first runs a head lookup on the key without the trailing "/"; if that fails, it checks whether the key is a folder. So we want to return key-missing when it does a head lookup on a directory without the trailing "/". It checks differently when running a recursive get: it first lists out all the folders, then marks entries as directories if they have the trailing "/".
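A minimal sketch of the first workaround, assuming a hypothetical hook in the backend's object-write path (names are illustrative, not gofakes3's actual API):

// Treat a zero-byte blob whose key ends in "/" as a directory-creation
// request instead of storing an empty file.
if size == 0 && strings.HasSuffix(key, "/") {
	return fs.MkdirAll(strings.TrimSuffix(key, "/"), 0o750)
}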

I've made the changes here; they add functionality for mkdir, changing directories, recursive get, and recursive put. I've only made them for the multibucket option, but they could be extended to the other options.

I'm interested to hear your thoughts on potentially getting this merged into master. Are these changes you'd be happy to merge, and if so, would you prefer I open a PR to work on getting them merged?

Amazon have deprecated Path-style URL access to S3 buckets

An unfortunate announcement on the AWS Developer Forums started doing the rounds today:

Amazon S3 currently supports two request URI styles in all regions: path-style (also known as V1) that includes the bucket name in the path of the URI (example: //s3.amazonaws.com/<bucketname>/key), and virtual-hosted style (also known as V2) which uses the bucket name as part of the domain name (example: //<bucketname>.s3.amazonaws.com/key). In our effort to continuously improve customer experience, the path-style naming convention is being retired in favor of virtual-hosted style request format. Customers should update their applications to use the virtual-hosted style request format when making S3 API requests before September 30th, 2020 to avoid any service disruptions. Customers using the AWS SDK can upgrade to the most recent version of the SDK to ensure their applications are using the virtual-hosted style request format.

Virtual-hosted style requests are supported for all S3 endpoints in all AWS regions. S3 will stop accepting requests made using the path-style request format in all regions starting September 30th, 2020. Any requests using the path-style request format made after this time will fail.

I guess that means that the "easy mode" path-based workflow I've been incrementally adding support for is on the way out the door, so we should probably shift back to improving the experience for DNS-based workflows.

Is there any way we can emulate the virtual-host style thing in a way that is turn-key for devs (i.e. not requiring them to futz around with their DNS setup manually)?

Incorrect handling of ranges

First off, thanks for such an awesome project; it has been super useful.

I've been getting weird failures with s3manager.Downloader. Attempting to download a small file (700-odd bytes) always returns 0-length content and no error.

This occurs with both the directfs and memory backends (the only two that I tested).

It turns out that func (g *GoFakeS3) getObject() always returns an ErrInvalidRange. Unfortunately, this error appears to be silently ignored by s3manager, so it returns 0-length files.

Tracing through the code, it appears the problem is triggered by this logic, because s3manager.Downloader defaults to using a PartSize of 5M.

Ignoring this unexpected behaviour of s3manager, I think the logic should, rather than being:

	if start < 0 || length < 0 || start > size || start+length > size {
		return nil, ErrInvalidRange
	}

	return &ObjectRange{Start: start, Length: length}, nil

probably be something along the lines of:

	if start < 0 || length < 0 || start > size {
		return nil, ErrInvalidRange
	}

	if start+length > size {
		return &ObjectRange{Start: start, Length: size - start}, nil
	}

	return &ObjectRange{Start: start, Length: length}, nil

If you're okay with this I can raise a PR.

Thanks!

How many features are missing from the S3 API?

I need a Go library to proxy S3 API requests to my own internal storage API. I found this project and am trying to understand how suitable it is for my use case.
Can you provide some info on how much needs to be done to make it fully S3 API compatible?
P.S. I can provide a PR if I decide to use this project =)

ListObject startAfter not working when key does not exist (s3mem)

If you have objects named "1" and "3" in a bucket and then request "startAfter=2", the iterator will point to "3" after

iter.Next() // Move to the next item after the Marker

which will then be skipped past at

for iter.Next() {

I believe that line 95 should only move past the result if it is equal to the marker (which appears to be a no-op directly after a seek).
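A minimal sketch of the proposed fix, using a bolt-style cursor for illustration (the s3mem iterator's actual API differs):

// Seek positions the cursor at the first key >= startAfter; only skip
// forward when that key is the marker itself.
k, _ := cur.Seek([]byte(startAfter))
if string(k) == startAfter {
	k, _ = cur.Next()
}
for ; k != nil; k, _ = cur.Next() {
	// ... emit the object stored under key k ...
}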

PutObjectTagging overwrites metadata

The following test shows that Metadata is set after upload. After executing PutObjectTagging, it's gone.

func TestTagging(t *testing.T) {
	ts := httptest.NewServer(gofakes3.New(s3mem.New(), gofakes3.WithGlobalLog()).Server())
	defer ts.Close()

	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx,
		config.WithRegion("us-east-1"),
		config.WithEndpointResolverWithOptions(aws.EndpointResolverWithOptionsFunc(func(_, _ string, _ ...interface{}) (aws.Endpoint, error) {
			return aws.Endpoint{
				PartitionID:       "aws",
				URL:               ts.URL,
				SigningRegion:     "us-east-1",
				HostnameImmutable: true,
			}, nil
		})),
		config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider("dummy", "dummy", "dummy")),
	)
	if err != nil {
		t.Fatal(err)
	}

	client := s3.NewFromConfig(cfg)
	_, err = client.CreateBucket(ctx, &s3.CreateBucketInput{
		Bucket: aws.String("bucket"),
	})
	if err != nil {
		t.Fatal(err)
	}

	_, err = manager.NewUploader(client).Upload(ctx, &s3.PutObjectInput{
		Bucket: aws.String("bucket"),
		Key:    aws.String("TEST"),
		Body:   strings.NewReader("TEST"),
		Metadata: map[string]string{
			"test": "test",
		},
	})
	if err != nil {
		t.Fatal(err)
	}

	head, err := client.HeadObject(ctx, &s3.HeadObjectInput{Bucket: aws.String("bucket"), Key: aws.String("TEST")})
	if err != nil {
		t.Fatal(err)
	}
	if head.Metadata["test"] == "" {
		t.Fatal("no metadata")
	}

	_, err = client.PutObjectTagging(ctx, &s3.PutObjectTaggingInput{
		Bucket: aws.String("bucket"),
		Key:    aws.String("TEST"),
		Tagging: &types.Tagging{
			TagSet: []types.Tag{
				{Key: aws.String("tag-test"), Value: aws.String("test")},
			},
		},
	})
	if err != nil {
		t.Fatal(err)
	}

	head, err = client.HeadObject(ctx, &s3.HeadObjectInput{Bucket: aws.String("bucket"), Key: aws.String("TEST")})
	if err != nil {
		t.Fatal(err)
	}
	if head.Metadata["test"] == "" {
		t.Fatal("no metadata")
	}
}

From the output I would guess that gofakes3 creates a completely new object instead of updating the existing one:

2022/05/10 09:31:17 INFO CREATE BUCKET: bucket
2022/05/10 09:31:17 INFO CREATE OBJECT: bucket TEST
2022/05/10 09:31:17 INFO HEAD OBJECT
2022/05/10 09:31:17 INFO Bucket: bucket
2022/05/10 09:31:17 INFO └── Object: TEST
2022/05/10 09:31:17 INFO CREATE OBJECT: bucket TEST
2022/05/10 09:31:17 INFO HEAD OBJECT
2022/05/10 09:31:17 INFO Bucket: bucket
2022/05/10 09:31:17 INFO └── Object: TEST

S3 Select mocking support

Hello,

I am wondering if it's possible to handle code that uses the "S3 Select" feature.

S3 Select is an S3 feature for retrieving content with an SQL query. Testing SQL-like queries is, in general, a tough challenge with a great payoff, since such queries tend to become complex, and S3 Select is especially difficult to test: it is used through a large aws-sdk interface, and its backend cannot be run locally.

My questions:

  • Is it within gofakes3's scope of concern to support the Select feature?
  • If so, how should the implementation look?

I am not even sure whether it is a good idea to use another project's SQL parser for this purpose.

Dockerfile

It would be nice to offer this as a Docker image so folks can test against S3 implementations while in Docker, without having to actually reach out to S3.
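Until an official image exists, wrapping gofakes3 in one is mostly a matter of building a tiny entrypoint; a minimal sketch of such a standalone server (port and backend choice are assumptions):

package main

import (
	"log"
	"net/http"

	"github.com/johannesboyne/gofakes3"
	"github.com/johannesboyne/gofakes3/backend/s3mem"
)

func main() {
	// Serve an in-memory fake S3 on port 9000; a Dockerfile would just
	// build and run this binary.
	faker := gofakes3.New(s3mem.New())
	log.Println("fake S3 listening on :9000")
	log.Fatal(http.ListenAndServe(":9000", faker.Server()))
}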

Persist multipart uploads in backend

Hey,

I think it would be a great idea to persist multipart uploads into the backend. (I kinda need that for a project I am working on.) I'm currently implementing the following (but it's not PR-worthy yet):

Make the backends store chunks of the content, rather than the whole file (as an optional interface, maybe?). This way we can just keep adding chunks to the backend object until we are done (maybe also have a ready/finished flag on the object; not sure yet).

Then, when a download is requested, we can use an io.MultiReader to read out the chunks as we see fit (maybe with some clever seeking technique to avoid wasting resources/file handles).
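On the read side, io.MultiReader already covers most of this; a minimal sketch, assuming the backend can hand back its stored chunks as byte slices (the chunks variable is hypothetical):

// Stitch the stored chunks back into a single object stream on download.
readers := make([]io.Reader, 0, len(chunks))
for _, chunk := range chunks {
	readers = append(readers, bytes.NewReader(chunk))
}
var body io.Reader = io.MultiReader(readers...)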

panic using unaligned 64-bit atomics on 32-bit architectures

Thanks for this super-handy project!

I found a bug on a Raspberry Pi, where tests fail like this:

pi@raspberrypi:~/inst/gofakes3 $ go test -run=TestCreateObjectWithMissingContentLength
log output redirected to "/tmp/gofakes3-639390565.log"
--- FAIL: TestCreateObjectWithMissingContentLength (0.01s)
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x3f2dac]

goroutine 6 [running]:
testing.tRunner.func1.1(0x465c10, 0x7c6ad0)
	/home/pi/sdk/go1.15.7/src/testing/testing.go:1072 +0x264
testing.tRunner.func1(0x2401500)
	/home/pi/sdk/go1.15.7/src/testing/testing.go:1075 +0x364
panic(0x465c10, 0x7c6ad0)
	/home/pi/sdk/go1.15.7/src/runtime/panic.go:969 +0x158
github.com/johannesboyne/gofakes3_test.TestCreateObjectWithMissingContentLength(0x2401500)
	/home/pi/inst/gofakes3/gofakes3_test.go:197 +0x228
testing.tRunner(0x2401500, 0x4f4298)
	/home/pi/sdk/go1.15.7/src/testing/testing.go:1123 +0xbc
created by testing.(*T).Run
	/home/pi/sdk/go1.15.7/src/testing/testing.go:1168 +0x220
exit status 2
FAIL	github.com/johannesboyne/gofakes3	0.029s

Per the bug note in the sync/atomic documentation, 64-bit atomic variables must be 64-bit aligned, but to embed such a field in a struct on most 32-bit architectures, the field must be placed at the beginning of the struct to guarantee 64-bit alignment. Moving the requestID field to the beginning of the GoFakeS3 struct fixes this.
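For reference, the shape of the fix (field name per the issue; other fields elided):

// 64-bit atomics must be 64-bit aligned, and on 32-bit platforms Go only
// guarantees that for the first word of an allocated struct, so the
// atomically-updated counter has to come first.
type GoFakeS3 struct {
	requestID uint64 // accessed via atomic.AddUint64; must stay first
	// ... remaining fields ...
}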

CommonPrefixes logic does not work on non-terminated prefix (without ending '/')

Instead of providing a list of the directory's contents, the fake S3 server just repeats the name of the higher-level directory for a request with a prefix.

This is true at least for the s3afero backend in single-bucket mode.
The backend works well for 'root'-level directories, when ListObjectsV2 is called with an empty Prefix input option, but starts to just repeat the prefix when it is not empty.

For example, if we have a bucket root with the following data inside:

level-1/level-2-1/emptyFile
level-1/level-2-2/emptyFile

the s3afero backend will return level-1/ in CommonPrefixes for a call with an empty prefix (as it should), and level-1/level-1/ for a call with prefix level-1 (without the final /). Instead, it should return just level-1/, as the real S3 API does.

Here is an example test that shows the problem (written for the s3afero package in backend/s3afero):

package s3afero

import (
	"context"
	"net/http/httptest"
	"os"
	"path/filepath"
	"testing"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/johannesboyne/gofakes3"
	"github.com/spf13/afero"
	"github.com/stretchr/testify/require"
)

func TestCommonPrefixes_Nested(t *testing.T) {
	bucketRoot := t.TempDir()
	bucketName := "my-test-bucket"
	delimiter := string(filepath.Separator)

	createFakeS3 := func() (endpoint string) {
		bucketFS := afero.NewBasePathFs(afero.NewOsFs(), bucketRoot)

		fsBackend, err := SingleBucket(bucketName, bucketFS, afero.NewMemMapFs())
		require.NoError(t, err, "failed to create S3 backend with FS data storage")

		fakeS3 := gofakes3.New(fsBackend)
		s3Server := httptest.NewServer(fakeS3.Server())
		t.Cleanup(s3Server.Close)

		return s3Server.URL
	}

	testDir := "level1"
	testRoot := filepath.Join(bucketRoot, testDir)
	require.NoError(t, os.Mkdir(testRoot, 0o750), "failed to create test dir inside bucket root")

	toCreate := []string{
		"level2-1" + delimiter,
		"level2-2" + delimiter,
	}

	for i := range toCreate {
		dirPath := filepath.Join(testRoot, toCreate[i])
		filePath := filepath.Join(dirPath, "emptyFile")

		require.NoError(t, os.Mkdir(dirPath, 0o750), "failed to create directory in test bucket root")
		require.NoError(t, os.WriteFile(filePath, nil, 0o640), "failed to create empty file in test bucket")
	}

	endpoint := createFakeS3()

	s3Cfg, err := config.LoadDefaultConfig(context.Background(),
		config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider("test", "test", "test")),
	)
	require.NoError(t, err, "cannot init S3 client")

	client := s3.NewFromConfig(s3Cfg, func(o *s3.Options) {
		o.BaseEndpoint = &endpoint
		o.UsePathStyle = true
	})

	resp, err := client.ListObjectsV2(context.Background(), &s3.ListObjectsV2Input{
		Bucket:    &bucketName,
		Delimiter: &delimiter,
		Prefix:    &testDir,
	})
	require.NoError(t, err, "failed to list bucket objects")

	actualPrefixes := make([]string, 0, len(toCreate))
	for i := range resp.CommonPrefixes {
		actualPrefixes = append(actualPrefixes, *resp.CommonPrefixes[i].Prefix)
	}

	require.Equal(t, []string{"level1/"}, actualPrefixes, "wrong common prefixes returned from S3 service")

	prefix2 := testDir + "/"
	resp, err = client.ListObjectsV2(context.Background(), &s3.ListObjectsV2Input{
		Bucket:    &bucketName,
		Delimiter: &delimiter,
		Prefix:    &prefix2,
	})
	require.NoError(t, err, "failed to list bucket objects")

	actualPrefixes = make([]string, 0, len(toCreate))
	for i := range resp.CommonPrefixes {
		actualPrefixes = append(actualPrefixes, *resp.CommonPrefixes[i].Prefix)
	}

	require.Equal(t, []string{"level1/level2-1/", "level1/level2-2/"}, actualPrefixes, "wrong common prefixes returned from S3 service")

	resp, err = client.ListObjectsV2(context.Background(), &s3.ListObjectsV2Input{
		Bucket:    &bucketName,
		Delimiter: &delimiter,
		// Prefix:    new(string),
	})
	require.NoError(t, err, "failed to list bucket objects")

	actualPrefixes = make([]string, 0, len(toCreate))
	for i := range resp.CommonPrefixes {
		actualPrefixes = append(actualPrefixes, *resp.CommonPrefixes[i].Prefix)
	}

	require.Equal(t, []string{"level1/"}, actualPrefixes, "wrong common prefixes returned from S3 service")

	prefix3 := testDir + "/" + "level2"
	resp, err = client.ListObjectsV2(context.Background(), &s3.ListObjectsV2Input{
		Bucket:    &bucketName,
		Delimiter: &delimiter,
		Prefix:    &prefix3,
	})
	require.NoError(t, err, "failed to list bucket objects")

	actualPrefixes = make([]string, 0, len(toCreate))
	for i := range resp.CommonPrefixes {
		actualPrefixes = append(actualPrefixes, *resp.CommonPrefixes[i].Prefix)
	}

	require.Equal(t, []string{"level1/level2-1/", "level1/level2-2/"}, actualPrefixes, "wrong common prefixes returned from S3 service")
}

Multipart uploads fail with minio client due to mismatched etags

I was trying to use gofakes3 to test the use of https://pkg.go.dev/github.com/minio/minio-go/v7#Client in a package. However, I've found that multipart uploads fail due to a failing check here.

The condition fails because inPart.ETag (from the xml doc) is not quoted and upPart.ETag (from the header) is.

Even though RFC 7232 specifies that ETags are always quoted, the AWS S3 docs are not clear on whether the ETag should be quoted in the XML document.
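A quote-insensitive comparison would sidestep the ambiguity; a minimal sketch:

// Compare ETags while ignoring surrounding quotes, since clients disagree
// on whether the XML part list carries them.
func etagEqual(a, b string) bool {
	return strings.Trim(a, `"`) == strings.Trim(b, `"`)
}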

Switch to active boltdb fork

The s5cmd test suite uses gofakes3 extensively while interacting with the S3 API. Thank you very much for this project; it's been very useful.

We're using boltdb as the S3 backend. One of our tests uses the S3 CopyObject API, but when we run it along with other tests in parallel, we see random crashes. I don't have a minimal repro, unfortunately, but I can reproduce the crash like below:

git clone git@github.com:peak/s5cmd.git
cd s5cmd && git checkout gofakes3-bolt-crash
go test -parallel=2 -count=1 -v -run='(TestCopySingleS3ObjectToLocal$|TestCopyMultipleS3ObjectsToS3_Issue70$)' ./e2e

Crash log is here: https://gist.github.com/igungor/70239af4df7b75132e84ac1b48a550c7
Also I created a new branch in s5cmd to demonstrate the problem. You could checkout the logs here: https://github.com/peak/s5cmd/runs/2341425465

Then I switched the boltdb that gofakes3 uses to the active fork maintained by the etcd team, to test if the problem is still there. Fortunately, it's been solved: there are no random crashes.

What do you think about changing boltdb to https://github.com/etcd-io/bbolt? If it sounds reasonable to you, I can create a PR for it.
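Since bbolt preserves the original bolt API, the switch amounts to an import-path change:

import (
	// bolt "github.com/boltdb/bolt" // before
	bolt "go.etcd.io/bbolt" // after: API-compatible fork maintained by etcd
)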

ListObjects: IsTruncated omitted when its value is false

Issue

When listing objects, if the number of objects is less than MaxKeys, there is no IsTruncated field in the response.

Detail

There were two objects in the bucket. Here is the return value of listObjects:

{
  Contents: [{
      ETag: "\"d084225e6144cafa320aa8454349094d\"",
      Key: "test1.txt",
      LastModified: 2019-12-26 08:11:40.466 +0000 UTC,
      Size: 56
    },{
      ETag: "\"d084225e6144cafa320aa8454349094d\"",
      Key: "test2.txt",
      LastModified: 2019-12-26 08:11:40.467 +0000 UTC,
      Size: 56
    }],
  Marker: "",
  MaxKeys: 3,
  Name: "newbucket",
  Prefix: ""
}

Expected return:

{
  Contents: [{
      ETag: "\"d084225e6144cafa320aa8454349094d\"",
      Key: "test1.txt",
      LastModified: 2019-12-26 08:44:01.079 +0000 UTC,
      Size: 56
    },{
      ETag: "\"d084225e6144cafa320aa8454349094d\"",
      Key: "test2.txt",
      LastModified: 2019-12-26 08:44:01.079 +0000 UTC,
      Size: 56
    }],
  IsTruncated: false,
  Marker: "",
  MaxKeys: 3,
  Name: "newbucket",
  Prefix: ""
}
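One plausible cause (an assumption, not verified against gofakes3's source): if the XML response struct marshals the field with omitempty, encoding/xml drops a false boolean entirely; removing omitempty forces the field to always serialize.

type ListBucketResult struct {
	// With `xml:"IsTruncated,omitempty"` the element disappears when false;
	// without omitempty, <IsTruncated>false</IsTruncated> is always emitted.
	IsTruncated bool `xml:"IsTruncated"`
	// ... other fields ...
}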
