paolobarbolini / rusty-s3
Simple pure Rust AWS S3 Client following a Sans-IO approach
License: BSD 2-Clause "Simplified" License
Calling Bucket::new with static strings for name and region results in an allocation, because they are cloned as String. Storing the strings as Cow<'static, str> would allow borrowed strings without unneeded allocations.
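A minimal sketch of the proposed change (simplified; the real Bucket has more fields and a constructor with a different signature). Accepting Cow<'static, str> lets callers pass a &'static str with no allocation, while still accepting an owned String:

```rust
use std::borrow::Cow;

// Simplified stand-in for the real Bucket type, to illustrate the idea only.
struct Bucket {
    name: Cow<'static, str>,
    region: Cow<'static, str>,
}

impl Bucket {
    fn new(
        name: impl Into<Cow<'static, str>>,
        region: impl Into<Cow<'static, str>>,
    ) -> Self {
        Self {
            name: name.into(),
            region: region.into(),
        }
    }
}

fn main() {
    let borrowed = Bucket::new("my-bucket", "eu-west-1"); // &'static str: no allocation
    let owned = Bucket::new(String::from("runtime-name"), "us-east-1"); // String: still works
    println!("{} {}", borrowed.name, owned.name);
}
```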
I'm experimenting with the library, and I get error responses from Amazon S3 in certain circumstances. The error response text is like this:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>PermanentRedirect</Code><Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message><Endpoint>s3.amazonaws.com</Endpoint><Bucket>...</Bucket><RequestId>...</RequestId><HostId>...</HostId></Error>
Now, I'm using examples/list_objects.rs as a basis for my code. The response status here is 301, so error_for_status() doesn't catch it, and the subsequent ListObjectsV2::parse_response() call on the response text produces an unhelpful error: Custom("missing field `MaxKeys`").
Apparently I need to do more error handling: maybe make sure that the status code is always in the 2xx range, or catch parse errors and try to interpret the response as an error response. In any case, I need some way to parse error response XML like the one cited above. Is there any facility in rusty-s3 that would help me with it? I couldn't find any.
I think the library should have an error response type like ErrorResponse and a function to parse it, like parse_error_response. Maybe request objects should also have methods like parse_response_or_error that would automatically recognize, parse, and return error responses.
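As a rough illustration of what such a parse_error_response could extract, here is a hand-rolled sketch (not the library's API; ErrorResponse and parse_error_response are hypothetical names, and a real implementation would use a proper XML parser rather than string search):

```rust
// Hypothetical error type holding the two most useful fields of an S3 error body.
#[derive(Debug, PartialEq)]
struct ErrorResponse {
    code: String,
    message: String,
}

// Naive extraction of the text between <tag> and </tag>; illustration only.
fn extract(xml: &str, tag: &str) -> Option<String> {
    let open = format!("<{tag}>");
    let close = format!("</{tag}>");
    let start = xml.find(&open)? + open.len();
    let end = xml[start..].find(&close)? + start;
    Some(xml[start..end].to_string())
}

fn parse_error_response(xml: &str) -> Option<ErrorResponse> {
    Some(ErrorResponse {
        code: extract(xml, "Code")?,
        message: extract(xml, "Message")?,
    })
}

fn main() {
    let body = r#"<?xml version="1.0" encoding="UTF-8"?><Error><Code>PermanentRedirect</Code><Message>The bucket you are attempting to access must be addressed using the specified endpoint.</Message></Error>"#;
    let err = parse_error_response(body).unwrap();
    println!("{}: {}", err.code, err.message);
}
```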
For an empty bucket, ListObjectsV2 fails to parse with the error missing field `Contents`. Here is a response from MinIO:
<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Name>some-bucket</Name>
<Prefix></Prefix>
<KeyCount>0</KeyCount>
<MaxKeys>4500</MaxKeys>
<Delimiter></Delimiter>
<IsTruncated>false</IsTruncated>
<EncodingType>url</EncodingType>
</ListBucketResult>
I think that <Contents/> is optional if the bucket is empty, but I'm not sure whether this is MinIO-specific.
Heya! Love the library.
Any reason BucketError isn't, or can't be, made public? Right now it's private, which makes wrapping errors from the bucket new method a bit wonky.
This will allow customizing the query parameters and headers of any action.
This is a breaking change...
All structs that implement S3Action currently have a sign_with_time method which takes the time at which the URL should be signed. We use that method for tests. This proposes moving that method to the S3Action trait.
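A sketch of the proposal (simplified: the real sign_with_time takes an OffsetDateTime from the time crate; stdlib SystemTime stands in here, and DummyAction is a placeholder so the sketch runs):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

trait S3Action {
    fn sign_with_time(&self, expires_in: Duration, time: SystemTime) -> String;

    // With sign_with_time on the trait, sign() can become a provided method
    // that simply defaults to the current time.
    fn sign(&self, expires_in: Duration) -> String {
        self.sign_with_time(expires_in, SystemTime::now())
    }
}

// Placeholder action so the sketch compiles and runs; it emits a fake URL
// rather than a real SigV4 signature.
struct DummyAction;

impl S3Action for DummyAction {
    fn sign_with_time(&self, expires_in: Duration, time: SystemTime) -> String {
        let ts = time.duration_since(UNIX_EPOCH).unwrap().as_secs();
        format!(
            "https://example.invalid/?X-Amz-Date={ts}&X-Amz-Expires={}",
            expires_in.as_secs()
        )
    }
}

fn main() {
    let url = DummyAction.sign(Duration::from_secs(3600));
    println!("{url}");
}
```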
See C-CUSTOM-TYPE from the API guidelines.
Something like this would be great:
enum UrlStyle {
    // This variant could be marked as #[deprecated]
    Path,
    VirtualHost,
}
Currently the examples use the standard S3 endpoints, which are IPv4-only. As the world slowly moves toward IPv6, we should do our part toward that goal and update our examples and doc tests to use dualstack endpoints.
More info:
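For reference, AWS dualstack endpoints follow the s3.dualstack.<region>.amazonaws.com naming scheme; a tiny sketch of building one (helper name is made up for illustration):

```rust
// Build a dualstack (IPv4 + IPv6) S3 endpoint URL for a given region.
fn dualstack_endpoint(region: &str) -> String {
    format!("https://s3.dualstack.{region}.amazonaws.com")
}

fn main() {
    println!("{}", dualstack_endpoint("eu-west-1"));
}
```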
Would be nice to have a typed HeadObject struct with all the well-known fields and something like:
impl HeadObject {
    fn from_pairs(pairs: impl Iterator<Item = (String, String)>) -> Self { ... }
}
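A fleshed-out sketch of the idea (the field set here is assumed for illustration; a real HeadObject would cover more well-known headers such as Last-Modified):

```rust
#[derive(Debug, Default)]
struct HeadObject {
    content_length: Option<u64>,
    etag: Option<String>,
    content_type: Option<String>,
}

impl HeadObject {
    fn from_pairs(pairs: impl Iterator<Item = (String, String)>) -> Self {
        let mut out = Self::default();
        for (key, value) in pairs {
            // Header names are case-insensitive, so normalize before matching.
            match key.to_ascii_lowercase().as_str() {
                "content-length" => out.content_length = value.parse().ok(),
                "etag" => out.etag = Some(value),
                "content-type" => out.content_type = Some(value),
                _ => {} // unknown headers are ignored in this sketch
            }
        }
        out
    }
}

fn main() {
    let head = HeadObject::from_pairs(
        vec![
            ("Content-Length".to_string(), "42".to_string()),
            ("ETag".to_string(), "\"abc\"".to_string()),
        ]
        .into_iter(),
    );
    println!("{head:?}");
}
```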
The Credentials secret should be zeroed on drop. The secrecy crate provides a SecretString type that takes care of this.
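A minimal illustration of the zero-on-drop pattern. Note this is only a sketch of the idea: the secrecy/zeroize crates do this properly, with volatile writes the optimizer cannot elide, plus a redacted Debug impl.

```rust
// Wrapper that overwrites the secret bytes before the allocation is freed.
struct SecretBytes(Vec<u8>);

impl Drop for SecretBytes {
    fn drop(&mut self) {
        // Zero the buffer on drop. A plain loop like this may be optimized
        // away; zeroize exists precisely to prevent that.
        for byte in self.0.iter_mut() {
            *byte = 0;
        }
    }
}

fn main() {
    let secret = SecretBytes(b"hunter2".to_vec());
    println!("secret is {} bytes", secret.0.len());
} // buffer zeroed here, before the Vec is freed
```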
Hello,
I can't make multipart uploads work with multiple parts. It works with one part as shown in your example, but as soon as I introduce a second part I get a 400 (on MinIO at least).
I updated your multipart example to send two parts; you can test it here: https://github.com/irevoire/rusty-s3/blob/7d2e185d83095283f5afa091d0eb9443dfdb98c3/examples/multipart_upload.rs
And here’s the result:
multipart upload created - upload id: OGNjZDE0MTgtYzE1NS00ZWViLWFiYTMtZjhjZGRjNDM0MjA0LmNmMzM3MTY3LWU5MzEtNDVmNy1hOGIyLWFmYjM2NGMwZDljZg
etag: "25f9e794323b453885f5181f1b624d0b"
multipart upload created - upload id: OGNjZDE0MTgtYzE1NS00ZWViLWFiYTMtZjhjZGRjNDM0MjA0LmNmMzM3MTY3LWU5MzEtNDVmNy1hOGIyLWFmYjM2NGMwZDljZg
etag: "a7152fed4a9b8d6ffea3e05cd9f7b36b"
Error: reqwest::Error { kind: Status(400), url: Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Domain("localhost")), port: Some(9000), path: "/test/idk.txt", query: Some("X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=minioadmin%2F20230927%2Fminio%2Fs3%2Faws4_request&X-Amz-Date=20230927T091303Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&uploadId=OGNjZDE0MTgtYzE1NS00ZWViLWFiYTMtZjhjZGRjNDM0MjA0LmNmMzM3MTY3LWU5MzEtNDVmNy1hOGIyLWFmYjM2NGMwZDljZg&X-Amz-Signature=1f7db60b5bba6cbc305b3c15fb43a9cf309a1925b818ed97b59004c0fa496364"), fragment: None } }
I am using this as a wasm module in a JS server runtime. I think changing the HMAC signing to a trait-based approach would allow wasm binaries to leverage window.crypto for HMAC signing instead of the hmac crate. This would reduce bundle size and increase compatibility with wasm.
Will make a PR for this.
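A sketch of what such a pluggable signer trait could look like (the trait name and shape are assumed here, not the current rusty-s3 API): native builds could implement it with the hmac crate, while wasm builds could call window.crypto.subtle via wasm-bindgen.

```rust
// Hypothetical abstraction point for the SigV4 HMAC-SHA256 step.
trait HmacSha256 {
    fn mac(&self, key: &[u8], data: &[u8]) -> Vec<u8>;
}

// Placeholder implementation so the sketch compiles and runs; it just
// concatenates its inputs and computes no real MAC.
struct NoopMac;

impl HmacSha256 for NoopMac {
    fn mac(&self, key: &[u8], data: &[u8]) -> Vec<u8> {
        let mut out = Vec::with_capacity(key.len() + data.len());
        out.extend_from_slice(key);
        out.extend_from_slice(data);
        out
    }
}

fn main() {
    let tag = NoopMac.mac(b"key", b"data");
    println!("{} bytes", tag.len());
}
```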
Using Google Cloud Storage, the response to ListObjectsV2 sometimes does not contain the StorageClass field. This causes a failure to parse the XML response into the ListObjectsV2Response struct.
Error: ListObjectsV2: Failed parsing response
Caused by:
missing field `StorageClass`
This can easily be solved by marking said field of ListObjectsContent as an Option:
#[derive(Debug, Clone, Deserialize)]
pub struct ListObjectsContent {
    // ...
    // ...
    pub storage_class: Option<String>,
}
Bucket::new may fail if the endpoint scheme is unsupported or if the URL doesn't have a host. This should return a Result; an Option doesn't reflect that something failed and can be silently ignored.
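A sketch of what the error side of that Result could look like (BucketError variants and the validation helper are hypothetical, not the actual rusty-s3 API); the point is that the caller learns why construction failed instead of getting a bare None:

```rust
#[derive(Debug, PartialEq)]
enum BucketError {
    UnsupportedScheme,
    MissingHost,
}

// Toy endpoint validation: accept only http(s) URLs with a non-empty host.
fn validate_endpoint(endpoint: &str) -> Result<(), BucketError> {
    let rest = endpoint
        .strip_prefix("https://")
        .or_else(|| endpoint.strip_prefix("http://"))
        .ok_or(BucketError::UnsupportedScheme)?;
    if rest.is_empty() {
        return Err(BucketError::MissingHost);
    }
    Ok(())
}

fn main() {
    println!("{:?}", validate_endpoint("ftp://example.com"));
    println!("{:?}", validate_endpoint("https://s3.amazonaws.com"));
}
```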
For S3, I found I couldn't use CreateBucket without also providing a CreateBucketConfiguration with a LocationConstraint if the region wasn't the default region of us-east-1.
E.g.:
let action = CreateBucket::new(&self.bucket, &self.credentials);
let body = vec![
    r#"<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">"#,
    &format!(
        "<LocationConstraint>{}</LocationConstraint>",
        self.bucket.region()
    ),
    "</CreateBucketConfiguration>",
]
.join("");

self.client
    .put(action.sign(ONE_HOUR))
    .body(Body::from(body))
    .send()?
    .error_for_status()?;
Recommend adding a .body() function to CreateBucket, similar to how CompleteMultipartUpload provides a .body().
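A hypothetical shape for that helper (struct and method are simplified stand-ins, not the current rusty-s3 API), generating the CreateBucketConfiguration XML from the bucket's region:

```rust
// Simplified stand-in for the real CreateBucket action.
struct CreateBucket<'a> {
    region: &'a str,
}

impl CreateBucket<'_> {
    // Proposed helper: build the LocationConstraint body for non-us-east-1 regions.
    fn body(&self) -> String {
        format!(
            r#"<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><LocationConstraint>{}</LocationConstraint></CreateBucketConfiguration>"#,
            self.region
        )
    }
}

fn main() {
    let action = CreateBucket { region: "eu-west-1" };
    println!("{}", action.body());
}
```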
P.S. Thank you for this library. Apart from this, it works perfectly, and it was the first non-async S3 library I could get working.
Hello, do you think it would be possible to get a new release with all the latest changes?