
aws-sdk-net's Issues

S3 EncryptionUtils encryption IV storage

The CreateInstructionFileRequest and UpdateMetadataWithEncryptionInstructions methods store the encryption IV in clear text. Although guessing the actual key would be very difficult, since it is system-generated rather than based on human input (and therefore not susceptible to dictionary attacks), I recommend adding an encrypted IV property to EncryptionInstructions and encrypting the IV with the EncryptionUtils.EncryptEnvelopeKey function that is already used to encrypt the key.

internal byte[] EncryptedInitializationVector { get; private set; }

The following functions would need updating:

Amazon.S3.Encryption.EncryptionInstructions.EncryptionInstructions(Dictionary<string, string> materialsDescription, byte[] envelopeKey, byte[] encryptedKey, byte[] iv, byte[] encryptedIV)

Amazon.S3.Encryption.AmazonS3EncryptionClient.ProcessResponseHandlers(AmazonWebServiceResponse response, IRequest request, IWebResponseData webResponseData)

Amazon.S3.Encryption.EncryptionUtils.GenerateInstructions(EncryptionMaterials materials) - add: byte[] encryptedIV = EncryptEnvelopeKey(AesObject.IV, materials);

Amazon.S3.Encryption.EncryptionUtils.BuildInstructionsFromObjectMetadata(GetObjectResponse response, EncryptionMaterials materials) - add: byte[] decryptedIV = DecryptEnvelopeKey(encryptedIV, materials);

Amazon.S3.Encryption.EncryptionUtils.BuildInstructionsUsingInstructionFile(GetObjectResponse response, EncryptionMaterials materials) - add: byte[] decryptedIV = DecryptEnvelopeKey(encryptedIV, materials);

Amazon.S3.Encryption.EncryptionUtils.UpdateMetadataWithEncryptionInstructions(AmazonWebServiceRequest request, EncryptionInstructions instructions) - change: byte[] IVToStoreInMetadata = instructions.EncryptedInitializationVector;

Amazon.S3.Encryption.EncryptionUtils.CreateInstructionFileRequest(AmazonWebServiceRequest request, EncryptionInstructions instructions) - change: byte[] IVToStoreInInstructionFile = instructions.EncryptedInitializationVector;

There are other housekeeping items, such as using jsonData["EncryptedIV"] instead of jsonData["IV"], that should be updated too.
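
To illustrate, a rough sketch of how GenerateInstructions might look with the change (this mirrors the proposed constructor signature above; it is not the SDK's actual implementation, and the helper shapes are assumed):

    // Inside EncryptionUtils.GenerateInstructions (sketch only):
    byte[] encryptedKey = EncryptEnvelopeKey(aesObject.Key, materials);
    byte[] encryptedIV = EncryptEnvelopeKey(aesObject.IV, materials);   // proposed addition

    // The instructions would then carry both the plain IV (for immediate use) and the
    // encrypted IV (for storage), per the proposed constructor above:
    return new EncryptionInstructions(materialsDescription, aesObject.Key, encryptedKey, aesObject.IV, encryptedIV);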

Setting DynamoDBTable name per environment

We currently have DynamoDB tables per environment. We separate these tables by prefixing them with the environment, e.g. Test_XXX, Staging_XXX, Live_XXX.

We solved this nicely with 1.X version of the API using an inherited DynamoDBTableAttribute:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Struct, Inherited = true, AllowMultiple = false)]
public class EnvDynamoDBTableAttribute : DynamoDBTableAttribute
{
    public EnvDynamoDBTableAttribute(string tableName) : base(GetTableNameWithEnvironmentPrefixed(tableName))
    {
    }

    private static string GetTableNameWithEnvironmentPrefixed(string tableName)
    {
        return string.Format("{0}_{1}", ConfigSettings.Environment, tableName);
    }
}

In v2.x DynamoDBTable is now sealed.

Is it possible to get this unsealed again or provide a way to give the table name on the fly?
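
As a possible interim workaround (my suggestion only, not an official recommendation), the table name can also be supplied per call through DynamoDBOperationConfig.OverrideTableName; MyItem and the hash key below are placeholders:

    // Resolve the environment-prefixed table name at call time instead of via an attribute.
    var operationConfig = new DynamoDBOperationConfig
    {
        OverrideTableName = string.Format("{0}_{1}", ConfigSettings.Environment, "XXX")
    };

    using (var context = new DynamoDBContext(new AmazonDynamoDBClient()))
    {
        var item = context.Load<MyItem>("some-hash-key", operationConfig);
    }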

Custom metadata not handled correctly in AmazonS3Client.CopyObject

-- CopyObjectRequestMarshaller does not copy Metadata values from CopyObjectRequest to the IRequest Headers collection. Adding the custom metadata to the headers collection is a workaround, however...

-- S3Signer buildCanonicalizedHeaders does not order header values the same way as the S3 server.

x-amz-metadata-directive is sorted before custom headers, e.g. x-amz-meta-whatever. The signature fails validation on the server, as the S3 server appears to sort x-amz-meta-whatever before x-amz-metadata-directive.

Adding a string comparer to the OrderBy on line 122, e.g. headers.Keys.OrderBy(x => x.ToLower(CultureInfo.InvariantCulture), StringComparer.Ordinal), should fix that...
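
For reference, a standalone illustration of the ordering difference (not SDK code; assumes using System, System.Globalization and System.Linq):

    static void Main()
    {
        var headers = new[] { "x-amz-metadata-directive", "x-amz-meta-whatever" };

        // Default, culture-aware comparison: as described above, it sorts
        // x-amz-metadata-directive before x-amz-meta-whatever.
        var cultureOrder = headers.OrderBy(h => h.ToLower(CultureInfo.InvariantCulture));

        // Ordinal comparison: '-' (0x2D) sorts before 'd' (0x64), so
        // x-amz-meta-whatever comes first - matching what S3 signs against.
        var ordinalOrder = headers.OrderBy(h => h.ToLower(CultureInfo.InvariantCulture), StringComparer.Ordinal);

        Console.WriteLine(string.Join(", ", cultureOrder));
        Console.WriteLine(string.Join(", ", ordinalOrder));
    }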

S3FileInfo.Directory always returns the root of the bucket

Unless I am mistaken, the check for (index < -1) in this property is incorrect.

When the object key includes a directory path, key.LastIndexOf('\') will return a positive value, in which case we want to flow into the if statement and return the substring directory name, don't we? Since LastIndexOf never returns a value less than -1, the (index < -1) branch can never be taken, so the property always falls through to the bucket root.
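
For illustration, the behaviour I'd expect from that property looks something like this (a sketch under the assumption that S3FileInfo keys use '\' as their separator; this is not the SDK's actual code):

    // Sketch: the parent directory is everything up to the last separator in the key.
    private static string GetParentDirectoryKey(string key)
    {
        int index = key.LastIndexOf('\\');
        // A check written as (index < -1) can never be true, since LastIndexOf never
        // returns less than -1 - which is why the bucket root is always returned today.
        return index >= 0 ? key.Substring(0, index) : string.Empty;
    }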

Write-S3Object fails with null timeout

When it's called for batch upload as

Write-S3Object -BucketName $bucketName -Folder $filesFolder -KeyPrefix '/'

The stack trace looks like this:

Parameter name: timeout ---> System.ArgumentNullException: Value cannot be null.
Parameter name: timeout
at Amazon.Runtime.ClientConfig.ValidateTimeout(Nullable`1 timeout)
at Amazon.S3.Transfer.Internal.UploadDirectoryCommand.Execute()
at Amazon.PowerShell.Cmdlets.S3.WriteS3ObjectCmdlet.UploadFolderToS3

It seems to be BaseUploadRequest.Timeout, which obviously can be set from raw C# code, but it's unclear how to set it from AWS PowerShell.

Bug in MultiTableBatchWrite with Object Persistence Framework (.Net)

I have previously reported a bug in DynamoDBv2 MultiTableBatchWrite while using the Object Persistence Framework, and it was patched in Build 1.5.30. (reference: https://forums.aws.amazon.com/thread.jspa?messageID=479072&#479072)

However, the bug seems to still exist in the current version of the 2.0 Build (v2.0.0.5).

Description:
Attempting to write to two tables with identical but swapped HashKey and RangeKey names results in a System.ArgumentException.

Environment:
.Net 4.5
Amazon.DynamoDBv2 classes
AWS SDK Build 2.0.0.5

Reproduction:
Create two tables, each with Hash + RangeKey, switched
Table1: FieldA (HashKey), FieldB (RangeKey)
Table2: FieldB (HashKey), FieldA (RangeKey)

Create two classes, each representing one of the tables, and decorate them using the DynamoDBTable, DynamoDBHashKey and DynamoDBRangeKey attributes.

Create two CreateBatchWrite batches, each with one AddPutItem added (i.e. write one row to each Dynamo table)

Create new MultiTableBatchWrite, combining Batch1 and Batch2.

Execute the MultiTableBatchWrite

Result:
MultiTableBatchWrite.Execute() will fail with "An item with the same key has already been added." (System.ArgumentException).
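
A condensed repro sketch following the steps above (class, table, and key values are placeholders I've chosen, not taken from the original report):

    [DynamoDBTable("Table1")]
    public class Table1Item
    {
        [DynamoDBHashKey]  public string FieldA { get; set; }
        [DynamoDBRangeKey] public string FieldB { get; set; }
    }

    [DynamoDBTable("Table2")]
    public class Table2Item
    {
        [DynamoDBHashKey]  public string FieldB { get; set; }
        [DynamoDBRangeKey] public string FieldA { get; set; }
    }

    // ...
    var context = new DynamoDBContext(new AmazonDynamoDBClient());

    var batch1 = context.CreateBatchWrite<Table1Item>();
    batch1.AddPutItem(new Table1Item { FieldA = "a", FieldB = "b" });

    var batch2 = context.CreateBatchWrite<Table2Item>();
    batch2.AddPutItem(new Table2Item { FieldA = "a", FieldB = "b" });

    // Throws System.ArgumentException: "An item with the same key has already been added."
    var multiWrite = new MultiTableBatchWrite(batch1, batch2);
    multiWrite.Execute();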

AWS SDK 2.0 isn't FIPS compliant anymore

Hi,

Our application is deployed to a FIPS compliant environment.
When we upgrade from AWS SDK 1.5.39 to the latest version (2.0.2.3) of the AWS SDK, we get exceptions (our code remains the same).

Are you going to make the AWS SDK 2.x FIPS compliant?

Thanks in advance!
Kind regards,
Niels

Our code:

    public void Upload(CloudRequest request)
    {
        using (var utility = new TransferUtility(CreateClient(request)))
        {
            utility.Upload(request.FilePath, request.Bucket, request.Key);
        }
    }

    private AmazonS3Client CreateClient(CloudRequestBase request)
    {
        return new AmazonS3Client(request.AccessKey, request.SecretKey, RegionEndpoint.GetBySystemName(request.Region));
    }

The exception:

System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.InvalidOperationException: This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.
at System.Security.Cryptography.MD5CryptoServiceProvider..ctor()
--- End of inner exception stack trace ---
at System.RuntimeMethodHandle._InvokeConstructor(IRuntimeMethodInfo method, Object[] args, SignatureStruct& signature, RuntimeType declaringType)
at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at System.Security.Cryptography.CryptoConfig.CreateFromName(String name, Object[] args)
at Amazon.Runtime.Internal.Util.HashingWrapper.Init(String algorithmName) in c:\Jenkins\workspace\release-v2-sdk\src\AWSSDK_DotNet35\Amazon.Runtime\Internal\Util\HashingWrapper.bcl.cs:line 35
--- End of inner exception stack trace ---
at System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandleInternal& ctor, Boolean& bNeedSecurityCheck)
at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache)
at System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean skipCheckThis, Boolean fillCache)
at System.Activator.CreateInstance[T]()
at Amazon.Runtime.Internal.Util.HashStream`1..ctor(Stream baseStream, Byte[] expectedHash, Int64 expectedLength) in c:\Jenkins\workspace\release-v2-sdk\src\AWSSDK_DotNet35\Amazon.Runtime\Internal\Util\HashStream.cs:line 347
at Amazon.S3.Model.Internal.MarshallTransformations.PutObjectRequestMarshaller.Marshall(PutObjectRequest putObjectRequest) in c:\Jenkins\workspace\release-v2-sdk\src\AWSSDK_DotNet35\Amazon.S3\Model\Internal\MarshallTransformations\PutObjectRequestMarshaller.cs:line 94
at Amazon.Runtime.AmazonWebServiceClient.Invoke[T,R](R request, AsyncCallback callback, Object state, Boolean synchronized, IMarshaller`2 marshaller, ResponseUnmarshaller unmarshaller, AbstractAWSSigner signer) in c:\Jenkins\workspace\release-v2-sdk\src\AWSSDK_DotNet35\Amazon.Runtime\AmazonWebServiceClient.cs:line 73
at Amazon.S3.AmazonS3Client.PutObject(PutObjectRequest putObjectRequest) in c:\Jenkins\workspace\release-v2-sdk\src\AWSSDK_DotNet35\Amazon.S3\AmazonS3Client.cs:line 349

Logging details missing in v2-preview

Logging doesn't seem to work correctly in the v2 preview. For example, when doing a DynamoDB PutItem call with v1.5, here is what I see:
2013-07-31 08:54:12,604 [13] INFO Amazon.DynamoDBv2.AmazonDynamoDBClient - Request metrics: ServiceName = AmazonDynamoDBv2; ServiceEndpoint = https://dynamodb.us-east-1.amazonaws.com/; MethodName = PutItemRequest; AsyncCall = False; RequestSize = 145; StatusCode = OK, OK; AWSRequestID = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx; BytesProcessed = 2; CredentialsRequestTime = 00:00:00.0015275; RequestSigningTime = 00:00:00.0118151; HttpRequestTime = 00:00:00.1251406; ResponseUnmarshallTime = 00:00:00.0053254; ResponseProcessingTime = 00:00:00.0080513; ClientExecuteTime = 00:00:02.4289566;

With the v2 preview I only get the following 3 entries:
2013-07-31 08:51:28,093 [12] DEBUG Amazon.DynamoDBv2.AmazonDynamoDBClient - Starting request PutItemRequest at
2013-07-31 08:51:28,106 [12] DEBUG Amazon.DynamoDBv2.AmazonDynamoDBClient - Request body's content size 145
2013-07-31 08:51:28,107 [12] DEBUG Amazon.DynamoDBv2.AmazonDynamoDBClient - Request body's content size 145

Dynamodb QueryAsync to List performance issues

After upgrading, first to 2.0.2.2 and then to 2.0.2.3, the performance of enumerating a query into a list of 25k items went from roughly 4 seconds to almost 19 seconds. The query itself takes less than a second, which is what you'd expect and is just fine.

We reverted back to 1.5.37.0 and the issue is gone. I can't really explain why, but perhaps the single-core small and medium instances aren't fond of the context/thread switching. The CPU spikes to 100% after retrieving the JSON data from DynamoDB, rendering the instance virtually unusable for the duration.

I boiled down the issue to a single query in a VS unit test, which confirmed where the issue lies. Basically it looks like:

var asyncSearch = Context.QueryAsync<MyItemType>(HashKey);
var searchResult = (await asyncSearch.GetRemainingAsync()).ToList();

Compared to

var result = await Task.FromResult(Context.Query<MyItemType>(HashKey));

Did we misunderstand the usage?

It's worth mentioning that some of the data is binary (in effect, it gets base64-decoded), but it's rarely above 64 bytes. The rest is numbers, with a hash key and a range key.

ListMultipartUploads uses wrong parameter name for upload id marker.

When listing multipart uploads, if your listing ends on a key with multiple upload IDs, you cannot get all of the upload IDs for that key. This is because the parameter passed for listing the upload ID marker is incorrect. The code has "upload-idmarker", but it should be "upload-id-marker" per the documentation:

Code:
https://github.com/aws/aws-sdk-net/blob/master/AWSSDK/Amazon.S3/AmazonS3Client.cs#L5352

Specification:
http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html
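
For context, this is how the marker is meant to be used when paginating (an illustrative sketch; property names are as I recall them from the v1 client). Because the query parameter is misspelled, the UploadIdMarker value is effectively ignored by the service:

    var listRequest = new ListMultipartUploadsRequest { BucketName = bucketName };
    ListMultipartUploadsResponse listResponse;
    do
    {
        listResponse = client.ListMultipartUploads(listRequest);
        // ... inspect listResponse.MultipartUploads ...
        listRequest.KeyMarker = listResponse.NextKeyMarker;
        listRequest.UploadIdMarker = listResponse.NextUploadIdMarker;  // sent as "upload-idmarker" today
    } while (listResponse.IsTruncated);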

Bug in ClientConfig.cs causes ServiceURL to be always HTTPS whenever RegionEndpoint is set

Original code:

    internal string DetermineServiceURL()
    {
        string url;
        if (this.RegionEndpoint == null)
        {
            url = this.ServiceURL;
        }
        else
        {
            var endpoint = this.RegionEndpoint.GetEndpointForService(this.RegionEndpointServiceName);
            string protocol = endpoint.HTTPS ? "https://" : "http://";
            url = new Uri(string.Format("{0}{1}", protocol, endpoint.Hostname)).AbsoluteUri;
        }

        return url;
    }

Suggested patch:

    internal string DetermineServiceURL()
    {
        string url;
        if (this.RegionEndpoint == null)
        {
            url = this.ServiceURL;
        }
        else
        {
            var endpoint = this.RegionEndpoint.GetEndpointForService(this.RegionEndpointServiceName);
            string protocol = this.ServiceURL.ToLower().StartsWith("http:") ? "http://" : "https://";
            url = new Uri(string.Format("{0}{1}", protocol, endpoint.Hostname)).AbsoluteUri;
        }

        return url;
    }

AmazonS3Config.BufferSize not respected

Version: AWS SDK for .NET 2.0.0.3-beta, C#

Upload performance has regressed for my use case (~100MB multipart uploads) with the 2.0 preview. I'm investigating why, and one suspicion is that the buffer size seems to be only 4K. If true, this can have a significant impact on upload speed.

Here is my circumstantial evidence:

  • Inspecting the code, AmazonS3Config.BufferSize seems unused by the uploading part
  • UploadProgressEventArgs are reported back every 4K exactly

[s3]: Correctly handle revalidation/NotModified response. (breaking)

This would be a (minor) breaking change for v1, but v2 is a good chance to make S3GetObjectRequest.NotModifiedSince and ETagToNotMatch useful.

The bug is that S3Request.IsRedirect assumes all 3xx codes are redirects, which is technically correct in the wording of the HTTP spec, but not in terms of a redirect that should be followed: only 301, 302, 303, 305 and 307 should be, from my reading of RFC 2616.

Currently our fork at skilitix/aws-sdk-net just uses a bool S3GetObjectResponse.NotModified, but that could be a dangerous API: possibly it should also throw from the ResponseStream getter and Write..()?

MultiBatchWrite.Execute fails if multiple batch writes with same table

I realize that this is not the expected usage, but if I add two batch writes that target the same table to a MultiBatchWrite instance in the DynamoDB document model, the Execute() method fails with a dictionary insert exception. Maybe this is not a supported usage, but it should fail with a better error message. The best outcome would be if it just worked.

BucketName is no longer a property of S3Object

When calling ListObjects on S3, the BucketName is not included in the S3Objects returned. Is this by design, or is it something I can add back? I am trying to upgrade from 1.x and this is blocking me.

Unmarshaller bug resulting in properties not being set

The method that unmarshalls the JSON response has a bug which can result in existing properties not being unmarshalled if the response from the API contains new properties that are not part of the SDK.

The cause is documented in this PR #50

Since that PR was created, more work has been done on the JsonUnmarshallerContext, so the PR became outdated and was closed; however, the issue is still present in the latest version of the SDK.

I have created a simplified test case here: https://github.com/Webinfinity/UnmarshallerBug

DotNet45 async code written inefficiently

There are a significant number of functions marked async that shouldn't be, as this just creates additional task state-machine logic and context switching that isn't necessary.

When a function awaits a single Task only to return its result, there is no reason to make the function async or to await the Task. It should just return the Task and let the caller decide how it wants to handle the async behavior.

For example

    public async Task<DeleteBucketPolicyResponse> DeleteBucketPolicyAsync(DeleteBucketPolicyRequest request, CancellationToken cancellationToken = default(CancellationToken))
    {
        var marshaller = new DeleteBucketPolicyRequestMarshaller();
        var unmarshaller = DeleteBucketPolicyResponseUnmarshaller.GetInstance();
        var response = await Invoke<IRequest, DeleteBucketPolicyRequest, DeleteBucketPolicyResponse>(request, marshaller, unmarshaller, signer, cancellationToken)
            .ConfigureAwait(continueOnCapturedContext: false);
        return response;
    }

should be changed to


    public Task<DeleteBucketPolicyResponse> DeleteBucketPolicyAsync(DeleteBucketPolicyRequest request, CancellationToken cancellationToken = default(CancellationToken))
    {
        var marshaller = new DeleteBucketPolicyRequestMarshaller();
        var unmarshaller = DeleteBucketPolicyResponseUnmarshaller.GetInstance();
        return Invoke<IRequest, DeleteBucketPolicyRequest, DeleteBucketPolicyResponse>(request, marshaller, unmarshaller, signer, cancellationToken);
    }


NullReference using AmazonS3Client child class.

The exception occurs in the transform method, at the assembly.GetManifestResourceStream(resourceName) call. To duplicate the issue, just create a child class (e.g. class Foo : AmazonS3Client {}) and try to submit any request (mine was ListObjects).

class Foo : AmazonS3Client
{
   public Foo(string accessKey, string secretKey, AmazonS3Config config)
      : base (accessKey, secretKey, config) {}
}

var client = new Foo(accessKey, secretKey, config);
var request = new ListObjectsRequest().WithBucketName(bucketName);
var response = client.ListObjects(request);

I am not sure whether I am using this class incorrectly (in which case it should be sealed) or whether the type should just be set explicitly to the parent class at compile time. This is my workaround for now:

var type = typeof(AmazonS3Client);
type.GetField("myType", BindingFlags.NonPublic | BindingFlags.Instance).SetValue(this, type);

DateTime checks against null ... does this work better?

    public static string BuildPolicyForSignedUrl(string resourcePath,
                                                 DateTime epochDateLessThan,
                                                 string limitToIpAddressCIDR,
                                                 DateTime epochDateGreaterThan)
    {
        if (epochDateLessThan == DateTime.MaxValue ||  epochDateLessThan == DateTime.MinValue)
        {
            throw new AmazonClientException("epochDateLessThan must be provided to sign CloudFront URLs");
        }
        if (resourcePath == null)
        {
            resourcePath = "*";
        }
        string ipAddress = (limitToIpAddressCIDR == null
            ? "0.0.0.0/0"   // no IP restriction
            : limitToIpAddressCIDR);

        string policy = "{\"Statement\": [{"
                + "\"Resource\":\""
                + resourcePath
                + "\""
                + ",\"Condition\":{"
                + "\"DateLessThan\":{\"AWS:EpochTime\":"
                + AWSSDKUtils.ConvertToUnixEpochSeconds(epochDateLessThan.ToUniversalTime())
                + "}"
                + ",\"IpAddress\":{\"AWS:SourceIp\":\""
                + ipAddress
                + "\"}"
                + (epochDateGreaterThan == DateTime.MaxValue ||  epochDateGreaterThan == DateTime.MinValue ? "" : ",\"DateGreaterThan\":{\"AWS:EpochTime\":"
                        + AWSSDKUtils.ConvertToUnixEpochSeconds(epochDateGreaterThan.ToUniversalTime()) + "}") + "}}]}";
        return policy;
    }

Adding virtual + override support in AmazonS3Client

I'm thinking about subclassing options for AmazonS3Client - it's not marked as sealed, which opens the door - but none of the methods are marked virtual, making subclassing less valuable.

The specific scenario I had in mind was retrospectively adding caching support by enabling effectively "drop-in replacements" wherever AmazonS3Client is used. (We currently have a factory object that instantiates AmazonS3Clients; I could imagine it issuing CachingAmazonS3Clients instead without changing the rest of the code.)

I wondered how far from the project's intent this would be.

This could be achieved by encapsulating AmazonS3Client within another class - the work required to implement a subclass versus a wrapper plus factory is quite similar - but it implies another round of refactoring to introduce the factory object.

Flicking through AmazonS3Client, there is of course a lot of implementation detail in these members, so you'd almost certainly need to call base at all times to get the core functionality.

I suspect there are other places where this would be useful too.

Adding Begin and End method pairs for AmazonEC2Client

Whilst there are Begin and End method pairs on most clients, I noticed that these are missing from the AmazonEC2Client class. Is this something that can be added in a future version?

With these Begin and End method pairs, we could at least make async extension methods ourselves until they're officially supported, along the lines of the sketch below.
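
For what it's worth, the wrapper we'd write is the standard Task.Factory.FromAsync pattern. The EC2 method names below are hypothetical - they are exactly the Begin/End pair this request is asking for:

    // Hypothetical: BeginDescribeInstances/EndDescribeInstances do not exist yet;
    // they stand in for whichever Begin/End pairs get added.
    public static class AmazonEC2ClientExtensions
    {
        public static Task<DescribeInstancesResponse> DescribeInstancesAsync(
            this AmazonEC2Client client, DescribeInstancesRequest request)
        {
            return Task<DescribeInstancesResponse>.Factory.FromAsync(
                client.BeginDescribeInstances,
                client.EndDescribeInstances,
                request,
                null /* state */);
        }
    }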

Thanks,

Exception when using 4.5 client with config.UseHttp=true while running Fiddler.

Using the AWS SDK for .NET 4.5, I wanted to see how AWS requests and responses look on the wire. To do that I set config.UseHttp=true to run over clear HTTP instead of SSL, and ran Fiddler. This produced the exception below. If Fiddler is stopped or exited, or if HTTPS is used, the problem does not occur. The .NET 3.5 client does not have this problem either - its traffic shows in Fiddler just fine.

The exception is thrown in the AmazonWebServiceClient class, in the method
private async Task InvokeConfiguredRequest<T>(WebRequestState state, CancellationToken cancellationToken) where T : AmazonWebServiceResponse
at the line
responseMessage = await httpClient.SendAsync(requestMessage, cancellationToken).ConfigureAwait(continueOnCapturedContext: false);

A similar problem is reported at http://social.msdn.microsoft.com/Forums/vstudio/en-US/6a745a51-8a7b-4a5c-9f07-be02cc01278c/httpwebrequest-in-c-do-not-work-with-net-45?prof=required.

Exceptions chain:

at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
at Amazon.Runtime.AmazonWebServiceClient.<InvokeConfiguredRequest>d__b1.MoveNext() in c:\!Projects\Fabric.WebServices\aws-sdk-net\AWSSDK_DotNet45\Amazon.Runtime\AmazonWebServiceClient.cs:line 143

Inner exception:
at System.Net.HttpWebRequest.EndGetRequestStream(IAsyncResult asyncResult, TransportContext& context)
at System.Net.Http.HttpClientHandler.GetRequestStreamCallback(IAsyncResult ar)

The innermost exception is:
at System.DomainNameHelper.IdnEquivalent(String hostname)
at System.Uri.get_IdnHost()
at System.Net.HttpWebRequest.GetSafeHostAndPort(Uri sourceUri, Boolean addDefaultPort, Boolean forcePunycode)
at System.Net.HttpWebRequest.GenerateProxyRequestLine(Int32 headersSize)
at System.Net.HttpWebRequest.SerializeHeaders()
at System.Net.HttpWebRequest.EndSubmitRequest()
at System.Net.HttpWebRequest.SetRequestSubmitDone(ConnectStream submitStream)
at System.Net.Connection.CompleteConnection(Boolean async, HttpWebRequest request)

Amazon.EC2.Model.Vpc incomplete

The Amazon.EC2.Model.Vpc data returned in the Amazon.EC2.Model.DescribeVpcsResponse type does not contain all the attributes of the VPC. The bool values EnableDnsHostnames and EnableDnsSupport need to be retrieved using two separate calls to DescribeVpcAttribute(), as sketched after the list below.

Why doesn't Amazon.EC2.Model.Vpc, which already includes the following items, include these two bool values?
Amazon.EC2.Model.Vpc.CidrBlock
Amazon.EC2.Model.Vpc.DhcpOptionsId
Amazon.EC2.Model.Vpc.InstanceTenancy
Amazon.EC2.Model.Vpc.IsDefault
Amazon.EC2.Model.Vpc.State
Amazon.EC2.Model.Vpc.Tags;
Amazon.EC2.Model.Vpc.VpcId;
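
For reference, the two extra round trips this currently requires look roughly like this (an illustrative sketch; the attribute names follow the EC2 API documentation, and the exact request property names are an assumption on my part):

    // Hypothetical sketch: fetch the two DNS attributes that DescribeVpcs omits.
    var dnsSupport = ec2Client.DescribeVpcAttribute(new DescribeVpcAttributeRequest
    {
        VpcId = vpc.VpcId,
        Attribute = "enableDnsSupport"
    });

    var dnsHostnames = ec2Client.DescribeVpcAttribute(new DescribeVpcAttributeRequest
    {
        VpcId = vpc.VpcId,
        Attribute = "enableDnsHostnames"
    });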

Possible error sending key in GetObjectMetadataRequest / GetObjectRequest

I am using the following code to check whether a file exists.

Version: AWS SDK for .NET 2.0.0.2, C#

try
{
    GetObjectMetadataRequest request = new GetObjectMetadataRequest();
    request.BucketName = bucketName;
    request.Key = AmazonS3Util.UrlEncode(path, true);
    //request.Key = path;

    GetObjectMetadataResponse response = client.GetObjectMetadata(request);

    return HttpStatusCode.Found;
}
catch (AmazonS3Exception amazonS3Exception)
{
    if (amazonS3Exception.StatusCode == HttpStatusCode.NotFound)
        return HttpStatusCode.NotFound;

    if (amazonS3Exception.StatusCode == HttpStatusCode.Forbidden)
        return HttpStatusCode.Forbidden;

    throw;
}

There are several files that contain special characters.

When checking some files, passing the path without encoding, I noticed that some files are not found even though they exist in S3.

So I tried encoding the file path with AmazonS3Util.UrlEncode, and the files that were not found before are now found. But other files that were found without encoding are no longer found when the URL encoding is used. There does not appear to be one consistent way the key should be sent. I am trying to locate where this discrepancy occurs, or whether I have something configured incorrectly.

Note: This same error was occurring in version 1.5.7

PutObject fails in version 2-preview (2.0.0.1) to set Content-Encoding header

I have new issues with PutObject (and PutObjectAsync) after upgrading to the v2 preview: it seems that if you set the "Content-Encoding" header on the request, the corresponding file in S3 for some reason does not have that header.

I created a repository on GitHub so the issue can be reproduced (it contains very basic console apps):
https://github.com/evereq/AWSAsyncTest

It contains 2 solutions, each using a different version of the AWS SDK: the one without the Old prefix uses the new version 2.0.0.1, and the one with the Old prefix uses the stable version 1.5.28.1.

You also need to add your own configuration inside the AWSConfig.cs file: AWS key, secret, S3 folder, etc.

After you configure them, try to run both apps:

  • When you run the Old console application, it puts a local file into the S3 folder, succeeds, and the resulting file has "Content-Encoding" set to "gzip". So the old SDK 1.5.28.1 works well with such headers.
  • When you run the latest console application (without Old), it puts the local file with the same "Content-Encoding" set to "gzip"; however, if you check S3 you will see that the header is missing.
  • Note also that other headers are set correctly by both SDK versions; for example, "Cache-Control" is set to "max-age=31536000, public".
  • Also note that for v2 of the SDK, I tried to execute putObjectRequest.Headers.ContentEncoding = "gzip"
    instead of
    putObjectRequest.Headers["Content-Encoding"] = "gzip"
    however, it does not make any difference - the file in S3 still lacks that header!

Please note that there are basically no differences between the two solutions, except the version of the AWS SDK.

Any help really appreciated!
Without that I can't upload Gziped files to S3 and can't save some trees in the world! ;-)

ArgumentNullException in DynamoDBContext.Query

I get an ArgumentNullException in DynamoDBContext.Query. I assume that I have configured the model incorrectly, but it is quite difficult to find out what is wrong from that exception. Please add a more descriptive exception. See also https://forums.aws.amazon.com/thread.jspa?messageID=515973&#515973

Stacktrace:
at System.Collections.Generic.Dictionary`2.FindEntry(TKey key)
at System.Collections.Generic.Dictionary`2.TryGetValue(TKey key, TValue& value)
at Amazon.DynamoDBv2.DataModel.ItemStorageConfig.GetPropertyStorage(String propertyName)
at Amazon.DynamoDBv2.DataModel.DynamoDBContext.ComposeQueryFilterHelper(Document hashKey, IEnumerable`1 conditions, ItemStorageConfig storageConfig, DynamoDBFlatConfig currentConfig, List`1& indexNames)
at Amazon.DynamoDBv2.DataModel.DynamoDBContext.ComposeQueryFilter(DynamoDBFlatConfig currentConfig, Object hashKeyValue, IEnumerable`1 conditions, ItemStorageConfig storageConfig, List`1& indexNames)
at Amazon.DynamoDBv2.DataModel.DynamoDBContext.ConvertQueryByValue[T](Object hashKeyValue, IEnumerable`1 conditions, DynamoDBOperationConfig operationConfig, ItemStorageConfig storageConfig)
at Amazon.DynamoDBv2.DataModel.DynamoDBContext.ConvertQueryByValue[T](Object hashKeyValue, QueryOperator op, IEnumerable`1 values, DynamoDBOperationConfig operationConfig)
at Amazon.DynamoDBv2.DataModel.DynamoDBContext.Query[T](Object hashKeyValue, QueryOperator op, IEnumerable`1 values, DynamoDBOperationConfig operationConfig)
at Amazon.DynamoDBv2.DataModel.DynamoDBContext.Query[T](Object hashKeyValue, QueryOperator op, Object[] values)

Query:

this.DynamoDBContext.Query<QueueIdPersistModel>(
                    "somekey",
                    QueryOperator.GreaterThanOrEqual,
                    new[] {DateTime.UtcNow},
                    new DynamoDBOperationConfig()
                        {
                            IndexName = "LastRedirectAttempt-index",
                            OverrideTableName = "TableName"
                        });

Model:

  [DynamoDBTable("QueueNumbers")]
  public class QueueIdPersistModel
  {
    public void SetQueueNumber(int queueNumber);
    public static string GenerateKey(Guid queueId);
    public static string GenerateQueueNumberIndexKey(string customerId, string eventId, int queueNumber);
    [DynamoDBHashKey]
    public string Key { get; set; }
    [DynamoDBProperty(Converter = typeof (DynamoDBGuidConverter))]
    public Guid QueueId { get; set; }
    [DynamoDBProperty(Converter = typeof (DynamoDBISODateConverter))]
    public DateTime InQueueTime { get; set; }
    [DynamoDBProperty(Converter = typeof (DynamoDBGuidConverter))]
    public Guid UserId { get; set; }
    [DynamoDBProperty(Converter = typeof (DynamoDBISODateConverter))]
    public DateTime QueueItemCreated { get; set; }
    public string TargetUrl { get; set; }
    public string Language { get; set; }
    public string Layout { get; set; }
    public string CustomerId { get; set; }
    public string EventId { get; set; }
    [DynamoDBProperty(Converter = typeof (DynamoDBISODateConverter))]
    public DateTime? LastModified { get; set; }
    [DynamoDBGlobalSecondaryIndexHashKey("LastRedirectAttempt-index")]
    public string CustomerEventId { get; set; }
    [DynamoDBProperty(Converter = typeof (DynamoDBISODateListConverter))]
    public List<DateTime> RedirectAttempts { get; set; }
    public int? MaxRedirects { get; set; }
    [DynamoDBProperty(Converter = typeof (DynamoDBISODateConverter))]
    public DateTime? RedirectEndTime { get; set; }
    [DynamoDBProperty(Converter = typeof (DynamoDBISODateConverter))]
    public DateTime? CancelledTime { get; set; }
    [DynamoDBProperty(Converter = typeof (DynamoDBISODateConverter))]
    public DateTime? ExitTime { get; set; }
    public int? QueueNumber { get; set; }
    [DynamoDBGlobalSecondaryIndexHashKey("QueueNumber-index")]
    public string CustomerEventQueueNumber { get; set; }
    [DynamoDBGlobalSecondaryIndexRangeKey("LastRedirectAttempt-index", Converter = typeof (DynamoDBISODateConverter))]
    public DateTime? LastRedirectAttempt { get; set; }
  }

There is still a bug in S3FileStream.PopulateData(): getObjectResponse might be disposed while getObjectResponse.ResponseStream is still being used

S3FileStream.cs, Ln. 405.

The usage of getObjectResponse.ResponseStream is wrapped in a using(), while GetObjectResponse is itself IDisposable, and moreover:

  • its Dispose() implementation closes the underlying ResponseStream,
  • its Dispose(false) implementation is called from its finalizer (destructor): S3Response.cs, ln. 94.

In version 1.5.6 there was no reference left to the ResponseStream instance at all, and the bug was clearly visible because the ResponseStream instance was claimed by the GC almost immediately. Then the bug was "fixed": now the reference is stored in a local variable. But in Release mode the compiler is free to treat unused local variables as dead, making the object eligible for GC even before the method ends. So the bug is now reproducible only in Release mode, and that's what we're experiencing in production now.

In general, disposing the underlying objects in the owner's finalizer is strictly against best practices in .NET.

First, because it's not necessary: the finalizer is supposed to release only unmanaged resources held by the object itself. All managed objects are managed and released automatically by .NET; that's why they're called managed. If and only if an underlying managed object owns unmanaged resources, it should free them in its own finalizer, which will certainly be called by the GC.
Second, because it's dangerous: the order of finalizer execution is undefined in .NET, so the owner's finalizer might try to release an underlying object when it has already been finalized.
And third, because it leads to bugs like this one.
This rule should not be confused with the rule for implementing IDisposable: there the owner should call the underlying object's Dispose(), if it has one.
See e.g. http://msdn.microsoft.com/en-us/library/b1yfkh5e(v=vs.110).aspx for more explanation.

P.S. There's also a Stream.CopyTo() method, so you don't need the tempBuffer and while(...) loop.
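
In other words (a sketch of the suggested shape, not the SDK's actual code), the response should stay alive for the whole copy, and CopyTo can replace the manual loop:

    // Keep the GetObjectResponse (the stream's owner) undisposed until the copy finishes.
    using (GetObjectResponse getObjectResponse = client.GetObject(getObjectRequest))
    using (Stream responseStream = getObjectResponse.ResponseStream)
    {
        responseStream.CopyTo(destinationStream);   // replaces the tempBuffer/while(...) loop
    }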

UploadPartRequest.StreamTransferProgress Reports Incorrectly

It appears that UploadPartRequest.StreamTransferProgress is extremely inaccurate. Take my test scenario:

(1-part MPU @ 30MB chunk upper limit)
1Mbps max upload speed
5.6MB file

Theoretically speaking, this should take ~45 seconds to upload.

The API reports back a 100% state for this part almost instantly, and stepping through the code it accumulates from 1 to 100 across the callbacks.

So let's put the theory aside and talk about what's actually happening. After the API reports a 100% progress state, the program itself still detects that the thread is doing work and has not yet completed, and Windows continues to show network traffic at its maximum for 40+ seconds. It appears that the larger the file and the faster the connection, the less noticeable the problem becomes; however, it seems to take roughly 90MB before the SDK starts reporting appropriate progress. To clarify: I've noticed that the first 90MB are false progress, and thereafter things slow down and progress appears to be reported correctly.

PutObject fails in version 2-preview (2.0.0.1) in console app

I have issues with PutObject (and PutObjectAsync) after upgrading to the v2 preview (latest version 2.0.0.1, with the fix for async deadlocks).

The biggest issue is that even if I try to use the old non-async methods, they still fail (seemingly because such methods are now just wrappers around the async methods).

I get an AmazonClientException with the message "Expected hash not equal to calculated hash" (it happens during the 'await s3Client.PutObjectAsync' call, and it can take a long time for the exception to be thrown, even for a small file).

Please note that I get this exception ONLY in a console application. For some reason (and that's really strange), my huge ASP.NET MVC4 application works fine and executes PutObject requests successfully.

I created a repository on GitHub so the issue can be reproduced (it contains very basic console apps):
https://github.com/evereq/AWSAsyncTest

It contains 2 solutions, each using a different version of the AWS SDK: the one without the Old prefix uses the new version 2.0.0.1, and the one with the Old prefix uses the stable version 1.5.28.1.

You also need to add your own configuration inside the AWSConfig.cs file: AWS key, secret, S3 folder, etc.

After you configure them, try to run both apps:

  • When you run the Old console application, it puts a local file into the S3 folder and succeeds. That means the old SDK 1.5.28.1 works well.
  • When you run the latest console application (without Old), it first attempts to put the local file using the normal (non-async) approach and FAILS. Next it attempts to put the same file using the async approach and FAILS too. That means it is currently impossible to use the latest AWS SDK (2.0.0.1) to put files into an S3 folder.

Please note that there are basically no differences between the two solutions, except the version of the AWS SDK and an additional test for the async workflow (but again, even the non-async workflow fails).

If it's possible to get a solution or an idea as soon as possible, it will really help me!
I use the console application to deploy changes in my project to live (i.e. continuous deployment to www.evereq.com), and after the SDK upgrade I just can't do it anymore :(
... and I really don't want to revert to v1.5, because I have already migrated tons of code to v2 and can see that using async improves performance (when it works :D)

Any help really appreciated!

TransferUtilityUploadDirectoryRequest.UploadDirectoryProgressEvent doesn't fire

Version: AWS SDK for .NET 2.0.2, C#

The event doesn't fire. The same code works with AWS SDK 1.5.37:

    using (var directoryTransferUtility = new TransferUtility(new AmazonS3Client(amazonAccessKeyId, amazonSecretAccessKey, new AmazonS3Config { MaxErrorRetry = 200 })))
    {
        var transferRequest = new TransferUtilityUploadDirectoryRequest
        {
            BucketName = S3ContentBucket,
            Directory = directoryPath,
            KeyPrefix = objectKey,
            SearchOption = SearchOption.AllDirectories,
            SearchPattern = "*.*",
            CannedACL = S3CannedACL.PublicRead
        };
        transferRequest.UploadDirectoryProgressEvent += TransferRequestOnUploadDirectoryProgressEvent;
        directoryTransferUtility.UploadDirectory(transferRequest);
    }

MultiBatchWrite fails creating request including items with the same hash key

We have experienced this limitation a couple of times and were wondering whether there are any plans to address it.

Here is a sample stack trace of the exception:
System.ArgumentException: An item with the same key has already been added.
at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add)
at Amazon.DynamoDBv2.DocumentModel.MultiBatchWrite.ConstructRequest(Dictionary`2 writeItems, Table targetTable, Dictionary`2& documentMap, Boolean isAsync)
at Amazon.DynamoDBv2.DocumentModel.MultiBatchWrite.SendSet(Dictionary`2 set, Table targetTable, Boolean isAsync)
at Amazon.DynamoDBv2.DocumentModel.MultiBatchWrite.WriteItemsHelper(List`1 batches, Boolean isAsync)
at Amazon.DynamoDBv2.DocumentModel.MultiBatchWrite.WriteItems(Boolean isAsync)
at Amazon.DynamoDBv2.DocumentModel.MultiTableDocumentBatchWrite.ExecuteHelper(Boolean isAsync)
at Amazon.DynamoDBv2.DataModel.MultiTableBatchWrite.ExecuteHelper(Boolean isAsync)
at Amazon.DynamoDBv2.DataModel.MultiTableBatchWrite.b__0()
at Amazon.DynamoDBv2.DynamoDBAsyncExecutor.Execute(AsyncCall call, DynamoDBAsyncResult result)
at Amazon.DynamoDBv2.DynamoDBAsyncExecutor.EndOperation(IAsyncResult result)
at Amazon.DynamoDBv2.DataModel.MultiTableBatchWrite.EndExecute(IAsyncResult asyncResult)
at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)

Concurrency issues using SES on mono

The listing below shows a simple program which will spawn a number of threads, each responsible for sending a single email via Amazon SES. After delivering its mail message, each thread will print the number of milliseconds elapsed since application start.

var stopwatch = new Stopwatch();
stopwatch.Start();

var mailer = new AmazonSimpleEmailServiceClient(KEY, SECRET);

for (var i = 0; i < THREAD_COUNT; i++)
{
    var thread = new Thread(() =>
        {
            mailer.SendEmail(new SendEmailRequest()
                 .WithSource(FROM_ADDRESS)
                 .WithDestination(
                     new Destination()
                         .WithToAddresses(TO_ADDRESS))
                 .WithMessage(
                     new Message()
                         .WithSubject(new Content("Mono test"))
                         .WithBody(new Body(new Content("Testing 1 2 3")))
                 ));

            Console.WriteLine(stopwatch.ElapsedMilliseconds);
        });

    thread.Start();
}

Console.ReadLine();

Using Microsoft.NET on my Windows machine, it takes around 900 ms to run this program with 1 thread, around 1100 ms with 10 threads, 1400 ms with 50 threads. That's good – it scales great with the number of threads, which is expected due to the amount of waiting involved in calling a web service.

However, when I execute the exact same program on Mono on the same Windows machine, the results are very different. 1 thread: 2 s, 10 threads: 11 s, 50 threads: 62 s. This initially looks like there is a lock() or similar around the HTTP request, effectively forcing sending to be single threaded. Looking closer at the numbers written from the threads, however, all of the threads seem to complete at almost exactly the same time (e.g. with 50 threads they all complete between seconds 60 and 62). This seems to suggest some sort of locking behavior, where all of the threads have to wait on the last to finish – weird.

Any idea what this is about? I have a feeling this might be Mono bug, rather than an AWSSDK bug, but I have not been able to reproduce it with "raw" HttpWebRequest instances.

TransferUtilityUploadRequest.UploadProgressEvent reports uploads prematurely

Version: AWS SDK for .NET 2.0.0.2-beta, C#

Say I'm uploading a 1MB file on a 100kB/s connection. I'm seeing reported upload rates of ~1MB/s for the first second, then 0bytes/s for several seconds.

It seems the event is fired with data too soon - perhaps the data is reported as uploaded as soon as it's been consumed into some buffer, rather than when it has actually been uploaded?

Task canceled exception

During a test we started to receive TaskCanceledException exceptions from the .NET 4.5 API. We are not using the async methods from the API (as you can see from the stack trace), so why are we getting TaskCanceledExceptions? We received the exceptions over 2 days on 2 AWS instances, on February 1st and 2nd. On 2/2, between approximately 10:10 and 10:30, it appears that no calls were successful. The service was an ASP.NET application running in IIS.

Exception:

  • Message = A task was canceled.
  • Type = System.Threading.Tasks.TaskCanceledException
  • StackTrace = at Amazon.DynamoDBv2.AmazonDynamoDBClient.GetItem(GetItemRequest request)
    at Amazon.DynamoDBv2.DocumentModel.Table.GetItemHelper(Key key, GetItemOperationConfig config, Boolean isAsync)
    at Amazon.DynamoDBv2.DataModel.DynamoDBContext.LoadHelper[T](Key key, DynamoDBOperationConfig operationConfig, ItemStorageConfig storageConfig, Boolean isAsync)
    at Amazon.DynamoDBv2.DataModel.DynamoDBContext.Load[T](Object hashKey, DynamoDBOperationConfig operationConfig)

We were getting the exception from various method calls, including (but not limited to):

  • Amazon.DynamoDBv2.AmazonDynamoDBClient.GetItem(GetItemRequest request)
  • Amazon.SimpleDB.AmazonSimpleDBClient.Select(SelectRequest request)
  • Amazon.SimpleNotificationService.AmazonSimpleNotificationServiceClient.ConfirmSubscription(ConfirmSubscriptionRequest request)

There is no inner exception.

Firstly, this exception should obviously be caught and wrapped in an AWS exception with more information about the root cause.

Secondly, what may be the cause of the exception?

Create data model from attribute map

Hi

I have a feature request. We are using both the DynamoDBContext for simple tasks and the DynamoDBClient for more advanced operations. I have a use case where I am using the DynamoDBClient UpdateItem setting the ReturnValues to ALL_NEW. I would then like to take the response and turn it into a Data Model object and return that from my method.

I would like to do something like:

Document doc = Document.FromAttributeMap(response.Attributes); //already possible
MyModel model = DynamoDBContext.FromDocument<MyModel>(doc); //feature request

This is not possible as that implementation is not public, which is a bit odd since the Document type has the FromAttributeMap method. Would you consider adding that?

Please keep the symbols server up to date with new releases.

Debugging open source code is much easier if the debug symbols are available in a convenient location. Looks like it happened once for this project:

http://www.symbolsource.org/Public/Metadata/NuGet/Project/AWSSDK

But it hasn't happened since version 1.3.16.0. Please consider adding symbol publishing to your release process, so that end users don't have to compile their own sources for debugging.

If there's another location where symbols are available, please document it.

S3DirectoryInfo.GetFiles returns exception "key is a directory name"

The following simple test returns the above exception:

S3DirectoryInfo test = new S3DirectoryInfo(client, "bucketname", "testdir/test");
test.Create();
var files = test.GetFiles("*", SearchOption.AllDirectories);

The reason is that S3DirectoryInfo.Create is creating an object called "testdir/test/" (note the trailing slash).

S3DirectoryInfo.EnumerateFiles checks each returned s3Object, skipping those that EndsWith("\") - I'm taking a guess here, but perhaps the s3Object key is returned with a "/" slash, which doesn't satisfy this check?

The end result is passing an object key with a trailing slash to the S3FileInfo constructor, which then throws the exception.

EMR StepFactory NewRunHiveScriptStep using wrong default version of Hive

I am using AWS .NET SDK v1.5.24.1. I'm running an EMR job using the StepFactory.NewRunHiveScriptStep method. My jobs are failing with essentially the same error as described in https://forums.aws.amazon.com/thread.jspa?messageID=350496.

In log/hadoop/steps/2/stderr I get:
sh: /home/hadoop/.versions/hive-0.7.1/bin/hive: No such file or directory
Command exiting with ret '255'

It looks like the StepFactory.NewRunHiveScriptStep code needs to be updated to accept the hive-versions argument, as described in rslifka/elasticity#36.

PutObject throws NotSupportedException with streams of no length, even when overridden.

It is currently impossible to use client.PutObject with a stream that does not present a length, regardless of whether you override this or not. See below for an example that causes the issue.

Please note I've intentionally condensed the code to jump to the important parts. I am also aware that you can copy from bucket to bucket with other methods, but for reasons I won't go into, I'm not doing it that way.

private static String _bucketName = "__REPLACE_ME__";
private static String _objectToCopy = "__REPLACE_ME__";
private static String _objectDest = "__REPLACE_ME__";

static void Main(string[] args)
{

    Console.WriteLine("Starting...");
    //Test Case

    //simply get the metadata for the object to copy, this will contain the length.
    long contentLength = (new AmazonS3Client(Amazon.RegionEndpoint.USEast1)).GetObjectMetadata(new GetObjectMetadataRequest {
        BucketName = _bucketName, Key = _objectToCopy
    }).ContentLength;

    //get the stream for reading, this is out read only stream without a length...
    Stream inputStream = (new AmazonS3Client(Amazon.RegionEndpoint.USEast1)).GetObject(new GetObjectRequest {
        BucketName = _bucketName, Key = _objectToCopy
    }).ResponseStream;


    //Begin to copy it to new bucket.
    var request = new PutObjectRequest
    {
        BucketName = _bucketName,
        Key = _objectDest,
        InputStream = inputStream
    };

    //set the "hintLength"
    request.Headers.ContentLength = contentLength;


    //Throws a NotSupportedException - always.
    (new AmazonS3Client(Amazon.RegionEndpoint.USEast1)).PutObject(request);

    //Problem explanation:
    //Look at PutObjectRequestMarshaller.cs, starting at line 71.
    //The line above is where we override the content length; it is used when the exception at line 106 of PutObjectRequestMarshaller.cs is hit.
    //Line 116 then wraps the stream inside PartialReadOnlyWrapperStream, and that object is returned to line 71.
    //Finally, line 74 (the immediately following line to execute) attempts to access .Position, which is set to throw an exception.

    Console.WriteLine("Done...");
    Console.ReadKey();
}

So what went wrong?
Let's look at PutObjectRequestMarshaller.cs, starting at line 71. This enters a GetStreamWithLength method, which will either return a wrapped stream or the existing stream. On line 106 we get the exception telling us that there is no length associated with the stream, which sets up the rest of the flow to return the wrapped object. Line 116 wraps the stream inside PartialReadOnlyWrapperStream, and we then exit this method back to line 71. We then proceed to the next line, line 74, which attempts to access the .Position property of the wrapped stream, which currently does nothing but throw an exception.

S3 invokePutObject can create FileStream that isn't disposed

If the PutObjectRequest contains a file path, then invokePutObject will allocate a new file stream to read its content. However, this stream is never disposed. This causes problems if the file exists in a temporary directory that needs to be deleted after the request is done; attempting to do so will throw an exception because the file handle may still be open (if the finalizer hasn't run).

To work around this issue, the caller must call Dispose() on the InputStream property after calling PutObject (or in the callback for BeginPutObject). This would be expected if the caller had set this property, but when using a file path this should not be necessary, as it is an implementation detail leaking out to the caller. AmazonS3Client should either track its own stream allocations and dispose of them when it's done, or provide some other mechanism to inform the caller that this step is needed.

An acceptable compromise might be to have the S3Response object own the created stream and dispose it when the response is disposed.
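
Spelled out, the current workaround looks like this (bucket, key, and the temp paths are placeholders):

    // The SDK opens a FileStream internally for FilePath but never disposes it.
    var putRequest = new PutObjectRequest
    {
        BucketName = bucketName,
        Key = key,
        FilePath = tempFilePath
    };

    client.PutObject(putRequest);

    // Workaround: dispose the stream the SDK created, otherwise the file handle
    // can still be open when the temporary directory is deleted.
    if (putRequest.InputStream != null)
        putRequest.InputStream.Dispose();

    Directory.Delete(tempDirectory, true);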
