
AWS SDK for Java

The AWS SDK for Java enables Java developers to easily work with Amazon Web Services and build scalable solutions with Amazon S3, Amazon DynamoDB, Amazon Glacier, and more. You can get started in minutes using Maven or by downloading a single zip file.

End-of-Support Announcement

We announced the upcoming end-of-support for AWS SDK for Java (v1). We recommend that you migrate to AWS SDK for Java v2. For dates, additional details, and information on how to migrate, please refer to the linked announcement.

Release Notes

Changes to the SDK beginning with version 1.12.1 (June 2021) are tracked in CHANGELOG.md.

Changes in the retired 1.11.x series of the SDK, beginning with version 1.11.82, are listed in the CHANGELOG-1.11.x.md file.

Getting Started

Sign up for AWS

Before you begin, you need an AWS account. Please see the Sign Up for AWS section of the developer guide for information about how to create an AWS account and retrieve your AWS credentials.

Minimum requirements

To run the SDK you will need Java 1.7+. For more information about the requirements and optimum settings for the SDK, please see the Installing a Java Development Environment section of the developer guide.

Install the SDK

The recommended way to use the AWS SDK for Java in your project is to consume it from Maven. Import the aws-java-sdk-bom and specify the SDK Maven modules that your project needs in the dependencies.

Importing the BOM
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-bom</artifactId>
      <version>1.12.715</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
Using the SDK Maven modules
<dependencies>
  <dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-ec2</artifactId>
  </dependency>
  <dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-s3</artifactId>
  </dependency>
  <dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-dynamodb</artifactId>
  </dependency>
</dependencies>

See the Set up the AWS SDK for Java section of the developer guide for more information about installing the SDK through other means.

Features

  • Provides easy-to-use HTTP clients for all supported AWS services, regions, and authentication protocols.

  • Client-Side Data Encryption for Amazon S3 - Helps improve the security of storing application data in Amazon S3.

  • Amazon DynamoDB Object Mapper - Uses Plain Old Java Objects (POJOs) to store and retrieve Amazon DynamoDB data.

  • Amazon S3 Transfer Manager - With a simple API, achieve enhanced throughput, performance, and reliability by using multi-threaded Amazon S3 multipart calls.

  • Amazon SQS Client-Side Buffering - Collect and send SQS requests in asynchronous batches, improving application and network performance.

  • Automatically uses IAM Instance Profile Credentials on configured Amazon EC2 instances.

  • And more!

Building From Source

Once you check out the code from GitHub, you can build it using Maven. To disable GPG signing in the build, use:

mvn clean install -Dgpg.skip=true

Getting Help

GitHub Issues is the preferred channel for interacting with our team. Also check these community resources for getting help:

  • Ask a question on StackOverflow and tag it with aws-java-sdk
  • Articulate your feature request or upvote existing ones on our Issues page
  • Take a look at the blog for plenty of helpful walkthroughs and tips
  • Open a case via the AWS Support Center in the AWS console
  • If it turns out that you may have found a bug, please open an issue

Maintenance and Support for SDK Major Versions

For information about maintenance and support for SDK major versions and their underlying dependencies, see the AWS SDKs and Tools Reference Guide.

Supported Minor Versions

  • 1.12.x - Recommended.

  • 1.11.x - No longer supported, but migration to 1.12.x should require no code changes.

AWS SDK for Java 2.x

Version 2.x of the SDK is generally available. It is a major rewrite of the 1.x code base, built on Java 8+, and adds several frequently requested features, including support for non-blocking I/O, improved start-up performance, automatic iteration over paginated responses, and the ability to plug in a different HTTP implementation at run time.

For more information, see the AWS SDK for Java 2.x Developer Guide or check the project repository at https://github.com/aws/aws-sdk-java-v2.

Maintenance and Support for Java Versions

The AWS Java SDK version 1 (v1) supports Java versions from 7 to 17, but may not be updated to support future Java versions. To ensure long-term compatibility with the latest JDK versions, we recommend that you migrate to AWS SDK for Java 2.x.


aws-sdk-java's Issues

Possible concurrency issues on InstanceProfileCredentialsProvider

There might be a few concurrency issues on the class InstanceProfileCredentialsProvider. I think the following analysis is correct, but please double-check it.

The field credentialsExpiration is written to inside the method loadCredentials(), which is synchronized. However, it is read from inside multiple methods which are not synchronized.

  1. First issue related to credentialsExpiration

The thread T1 calls getCredentials() on an instance of InstanceProfileCredentialsProvider that has just been created. It will call needsToLoadCredentials(), which will see that credentials == null and return true. Therefore, getCredentials() will call loadCredentials().

The method loadCredentials() will set the field credentials to an instance of BasicSessionCredentials or BasicAWSCredentials (which are both thread-safe since they are both immutable -- they contain only final fields). It might also set the field credentialsExpiration to an instance of Date, which is not thread-safe.

The execution then returns back to getCredentials(). Suppose expired() returns false. The method will then simply return the newly created credentials object referenced by the field credentials.

Now, suppose a thread T2 calls getCredentials(). It will call needsToLoadCredentials() as before. Here the code is non-deterministic due to the lack of synchronization -- which is not necessarily a bug per se, but in this case it is. Suppose T2 sees a non-null credentials field. It will then proceed to the next check, credentialsExpiration != null. It is possible that this thread sees a non-null credentialsExpiration, which will make it enter the body of the if statement, where it calls credentialsExpiration.getTime(). Since the Date object referenced by credentialsExpiration is mutable and not thread-safe, it is possible that T2 gets the value 0, which will make the method incorrectly return true. This causes an unnecessary -- but most probably harmless -- call to loadCredentials(), which could be avoided.

  2. Second issue related to credentialsExpiration

Let's consider the case that the credentials expire.

Suppose two threads, T1 and T2, share an instance of InstanceProfileCredentialsProvider. Suppose T1 is responsible for periodically calling refresh(), so that all threads sharing this instance of InstanceProfileCredentialsProvider will always have fresh credentials.

Now, suppose the credentials are about to expire. T1 will call refresh(), which in turn simply calls loadCredentials(). The method will then obtain new credentials and put them on the credentials field, and update the credentialsExpiration field.

Now, suppose T2 calls getCredentials(). It is possible for this thread to see the old versions of credentials and credentialsExpiration, the new versions of the fields, or the new version of one of them and the old version of the other, and produce unexpected results -- and provide T2 with expired credentials!


One solution to these issues would be to make the fields volatile. There might be other, better solutions, but this simple fix should work.
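The suggested fix can be sketched in isolation (the class and field names below are illustrative stand-ins, not the SDK's actual code): declaring the shared fields volatile guarantees that values written under the synchronized loadCredentials() are visible, fully constructed, to unsynchronized readers.

```java
import java.util.Date;

// Hypothetical sketch of the volatile-field fix. A String stands in for
// the AWSCredentials object; the structure mirrors the issue description.
class SketchCredentialsProvider {
    private volatile String credentials;          // safely published
    private volatile Date credentialsExpiration;  // safely published

    public String getCredentials() {
        if (needsToLoadCredentials()) {
            loadCredentials();
        }
        return credentials;
    }

    private boolean needsToLoadCredentials() {
        if (credentials == null) return true;
        // Read the volatile reference once into a local so we never check
        // one reference for null and then call getTime() on another.
        Date exp = credentialsExpiration;
        return exp != null && exp.getTime() <= System.currentTimeMillis();
    }

    private synchronized void loadCredentials() {
        // In the real provider this fetches from the instance metadata service.
        credentials = "fresh-token";
        credentialsExpiration = new Date(System.currentTimeMillis() + 3_600_000L);
    }
}
```

With volatile fields, a reader sees either the old or the new Date reference in full; the remaining race (a reader observing a new credentials value with an old expiration) would still need full synchronization or a single immutable holder object to eliminate, as the report notes.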

CountingInputStream doesn't take into account 'skip'

Compare the AWS implementation [1] with Guava's [2]

[1] https://github.com/aws/aws-sdk-java/blob/master/src/main/java/com/amazonaws/util/CountingInputStream.java
[2] https://code.google.com/p/guava-libraries/source/browse/guava/src/com/google/common/io/CountingInputStream.java

I actually ran into this when trying to use the count as a condition and hit a subtle/spurious error that took many hours to debug :-(

The Guava implementation is also more robust in terms of handing mark/reset.
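A skip-aware counter can be sketched as follows (a hypothetical stand-alone class, not the SDK's; Guava counts skipped bytes the same way):

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: a counting stream that also counts bytes consumed via skip().
class SkipAwareCountingInputStream extends FilterInputStream {
    private long count;

    SkipAwareCountingInputStream(InputStream in) { super(in); }

    long getCount() { return count; }

    @Override
    public int read() throws IOException {
        int b = in.read();
        if (b != -1) count++;
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = in.read(buf, off, len);
        if (n != -1) count += n;
        return n;
    }

    @Override
    public long skip(long n) throws IOException {
        long skipped = in.skip(n);
        count += skipped; // the missing piece in the reported implementation
        return skipped;
    }
}
```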

Why doesn't DynamoDBMapper.save() append to arrays?

I have an array I want to append to in my Dynamo table; currently I can't do this with DynamoDBMapper.save(). I first have to read the row out of Dynamo, append to the array in memory, and then save it. I would like to just save my object and, if the row exists, have the array appended when my config is UPDATE and overwritten when my config is CLOBBER.

Currently, using an UpdateItemRequest, I can do this in one call.

Is this something that will be possible with DynamoDBMapper in the future?

Parsing date fails when using different endpoint

Hi, I'm writing a patch for a Clojure S3 client. I'm using the latest aws-sdk-java (1.4.0.1) and a local OpenStack S3 installation for testing. Everything works fine, but when I request object data I get a failure: date parsing fails, as you can see in the attached screenshot:

[Screenshot attachment: Screen Shot 2013-03-17 at 5.09.09 PM]

AWSOpsWorksClient.describeElasticIps() throws "InvalidParameterValue" -- bad "Timestamp" param

I'm using an AWSOpsWorksClient (version 1.4.4.2) to call describeElasticIps() and I'm getting the following error (really two, but one is causing the other) because "Timestamp", which is set internally, is not in true ISO8601 format. On top of that, the JSON being returned looks malformed -- but I'd settle for "Timestamp" being created in the proper format (i.e. 2013-05-24T03:02:02Z) before being sent to the AWS endpoint.

Caused by: com.amazonaws.AmazonClientException: Unable to parse error response: 'InvalidParameterValueValue (20130524T030202Z) for parameter Timestamp is invalid. Must be in ISO8601 format.4d3e8d6a-ce9e-4b75-b27d-85111ae9b9ca'
at com.amazonaws.http.JsonErrorResponseHandler.handle(JsonErrorResponseHandler.java:49)
at com.amazonaws.http.JsonErrorResponseHandler.handle(JsonErrorResponseHandler.java:1)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:619)
... 7 more
Caused by: com.amazonaws.util.json.JSONException: A JSONObject text must begin with '{' at 1 character 2 line 1
at com.amazonaws.util.json.JSONTokener.syntaxError(JSONTokener.java:422)
at com.amazonaws.util.json.JSONObject.(JSONObject.java:183)
at com.amazonaws.util.json.JSONObject.(JSONObject.java:310)
at com.amazonaws.http.JsonErrorResponseHandler.handle(JsonErrorResponseHandler.java:47)

Here's the code that throws the AmazonClientException, if it matters. Assuming 'awsOpsWorks' is a valid, authenticated AWSOpsWorksClient instance, just pass in some valid Elastic IP address strings in an ArrayList.

public List<ElasticIp> describeElasticIps(List<String> elasticIps) throws AmazonClientException {
  try {
    DescribeElasticIpsRequest request = new DescribeElasticIpsRequest();
    request.setIps(elasticIps);
    DescribeElasticIpsResult result = awsOpsWorks.describeElasticIps(request);
    return result.getElasticIps();
  } catch (AmazonClientException e) {
    handleAceException(e);
  }
  return new ArrayList<ElasticIp>();
}

Does anyone know how to find the routine that sets that Timestamp param -- and why this Timestamp param fails while all sorts of other API calls work? It must be shared code, right? I dug for a while, but had no luck finding the code that sets the "Timestamp" param -- or even the param itself, for that matter.

please include the cause in exception chain for HttpClientFactory

One frequent frustration we see is that the cause exception (and its stack trace) isn't included in the exception chain. As a result, we lose very valuable troubleshooting information for SSL failures, e.g.:

    } catch (NoSuchAlgorithmException e) {
        throw new AmazonClientException("Unable to access default SSL context");
    }

    } catch (Exception e) {
            throw new IOException(e.getMessage());
    }
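The requested change is just a matter of using the two-argument exception constructors. A self-contained sketch (with a plain RuntimeException standing in for AmazonClientException so the example compiles on its own):

```java
import java.security.NoSuchAlgorithmException;

// Sketch of the requested fix: pass the caught exception as the cause
// instead of dropping it, so the original stack trace survives.
class SslContextSketch {
    static RuntimeException wrap(NoSuchAlgorithmException e) {
        // before: new AmazonClientException("Unable to access default SSL context");
        // after:  the cause 'e' rides along in the chain
        return new RuntimeException("Unable to access default SSL context", e);
    }
}
```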

Http client config issue

Hi,

We are currently load testing and profiling our code with the AWS SDK. We notice a few inefficient Apache HttpClient usages in the SDK. For example, stale connection checking is hard-coded in the SDK and can't be configured.

HttpConnectionParams.setStaleCheckingEnabled(httpClientParams, true);

With this on, there is significant overhead, and it has been suggested to turn it off: http://hc.apache.org/httpcomponents-core-ga/httpcore/apidocs/org/apache/http/params/CoreConnectionPNames.html

Also it would be great to have its retry with dynamodb retry.

ThreadSafeClientConnManager, which is deprecated because of capacity constraints, is also still used for the HTTP client in the AWS SDK.

http://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/impl/conn/tsccm/ThreadSafeClientConnManager.html

Thanks

Ke

Putting files with multiple consecutive "/" in the key fails

If you try to put a file with multiple consecutive "/" characters in the key (for example "bla//bla"), the request fails. I tried the 1.4.7 release as well, whose release notes suggest that an issue with keys was fixed, but it fails too. This worked with 1.3.5.

The following is a test which fails

    @Test
    public void test() throws FileNotFoundException, IOException {
        String bucket = "some-bucket";
        String key = "some//path";
        File target = new File("someFile.txt");
        assertTrue(target.exists());
        ObjectMetadata meta = null;
        try{
            meta = s3Client.getObjectMetadata(bucket, key);
        }catch (Exception e) {
            meta = new ObjectMetadata();
        }
        PutObjectRequest putRequest = new PutObjectRequest(bucket, key, target);
        putRequest.setMetadata(meta);
        s3Client.putObject(putRequest);
    }

The exception message is: A com.amazonaws.services.s3.model.AmazonS3Exception has been caught, The request signature we calculated does not match the signature you provided. Check your key and signing method.

The "relevant" stacktrace:

AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: ####, AWS Error Code: SignatureDoesNotMatch, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method., S3 Extended Request ID: ###
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:659)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:347)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:199)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:2994)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1174)
    at my.Test.test(my.Test.java:60)

If you change the key from "some//path" to "some/path", the test works. If you add more slashes, for example "some///path", it fails as well.

Hope this is detailed enough.

SecurityGroup permissions not created

I have the following code

List<IpPermission> ipPermissions = new ArrayList<IpPermission>();
ipPermissions.add(new IpPermission().withIpProtocol("tcp").withFromPort(8080).withToPort(8080));

String ec2SecurityGroup = "testGroup";
amazonEC2.createSecurityGroup(new CreateSecurityGroupRequest(ec2SecurityGroup, ec2SecurityGroup));
amazonEC2.authorizeSecurityGroupIngress(new AuthorizeSecurityGroupIngressRequest(ec2SecurityGroup, ipPermissions));

The security group gets created fine. However, the permissions are missing, which renders it completely useless.

Can no longer use disableCertChecking property with AmazonS3Client

In version 1.4.1, the property "com.amazonaws.sdk.disableCertChecking", used in the HttpClientFactory::createHttpClient() method, does not persist through execution.

I have debugged through the code and the problem is in the AmazonS3Client constructor. An init() function was added there; it calls client.disableStrictHostnameVerification(), which reconstructs the https scheme and thus removes the https scheme established when the disableCertChecking property was set.

S3 put object gives NPE because ETag header is case sensitive

Using version 1.4.7, HTTP headers are treated in a case-sensitive way.

For example, when doing a put object without an MD5, the ETag from the response is checked, but the lookup is just a map.get(). According to the HTTP spec (RFC 2616, section 4.2), header names are case-insensitive.

Stack trace when the server returns a header "Etag" instead of the expected "ETag":

java.lang.NullPointerException
    at com.amazonaws.util.BinaryUtils.fromHex(BinaryUtils.java:72)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1191)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1039)
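One way to sketch the fix is to back the response-header map with a case-insensitive comparator, so a lookup for "ETag" tolerates "Etag", "ETAG", and so on (a stand-alone illustration, not the SDK's actual header handling):

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch: headers stored with a case-insensitive comparator, per
// RFC 2616 section 4.2 (header field names are case-insensitive).
class CaseInsensitiveHeaders {
    private final Map<String, String> headers =
            new TreeMap<>(String.CASE_INSENSITIVE_ORDER);

    void put(String name, String value) { headers.put(name, value); }

    String get(String name) { return headers.get(name); }
}
```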

[flow framework] Need more type info in DataConverter interface

I'm finding it difficult to write a DataConverter -- it seems like there's not really any type information passed to it for deserialization.

fromData takes a Class valueType parameter, but the flow framework seems to call this method with Object[].class -- effectively giving no type information at all (e.g. see POJOWorkflowDefinition lines 74 and 111).

Since the framework expects arrays, maybe it would make more sense for fromData to take an array of Type information, to better match and give the implementer more information on which to base deserialization decisions?

thanks.

New Region.createClient() method accepts wrong credentials provider class

The new Region.createClient() method accepts a credentials provider object of type org.apache.http.client.CredentialsProvider, but I'm pretty sure this is supposed to be com.amazonaws.auth.AWSCredentialsProvider.

You would expect a compile-time error if so, but this method uses the argument only via reflection, passing it to constructors. Those constructors take AWSCredentialsProvider objects, so this will fail at run time.

S3 request signing Content-Type is not "canonicalized" on the clients the same as the server

Content-Types with folded lines are inserted directly into the signing string (newlines intact), but on the server side some sort of unspecified canonicalization happens.

An example of a folded content-type is:

Content-Type: application/octetstream;
    filename=file.txt

Since the specification is not explicit here, I've also opened a thread on AWS forums because this could be a server-side issue.

In any case, the result is a Signature failure and a client-side exception. I would produce a pull request, but I haven't worked out exactly what the server-side is doing to normalize the string; and even if I had, it's not clear the behavior is specified or happening on purpose.

UploadImpl.waitForUploadResult() infinite loop when upload is already done

The code should check whether monitor.isDone() is already true, so it can just call Future.get().

public UploadResult waitForUploadResult() 
        throws AmazonClientException, AmazonServiceException, InterruptedException {
    try {
        UploadResult result = null;
        while (!monitor.isDone() || result == null) {
            Future<?> f = monitor.getFuture();
            result = (UploadResult)f.get();
        }
        return result;
    } catch (ExecutionException e) {
        rethrowExecutionException(e);
        return null;
    }
}

Also happens on void com.amazonaws.services.s3.transfer.internal.AbstractTransfer.waitForCompletion() throws AmazonClientException, AmazonServiceException, InterruptedException
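The shape of the suggested fix can be sketched with a FutureTask standing in for the transfer monitor's future: Future.get() blocks until completion and returns immediately when the task is already done, so no loop that re-tests isDone() is required. (This is an illustrative stand-alone sketch, not the SDK's code.)

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

// Sketch: rely on Future.get()'s own blocking semantics instead of a
// busy loop around isDone().
class WaitForResultSketch {
    static String waitForResult(FutureTask<String> future)
            throws InterruptedException, ExecutionException {
        // Returns at once when the task already finished; otherwise blocks
        // until it does -- no re-check of isDone() needed.
        return future.get();
    }
}
```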

please add support for cr1.8xlarge

My patch to do this is:

$ git diff

diff --git a/src/main/java/com/amazonaws/services/ec2/model/InstanceType.java b/src/main/java/com/amazonaws/services/ec2/model/InstanceType.java
index 12f1b6d..87ed07f 100644
--- a/src/main/java/com/amazonaws/services/ec2/model/InstanceType.java
+++ b/src/main/java/com/amazonaws/services/ec2/model/InstanceType.java
@@ -35,7 +35,8 @@ public enum InstanceType {
     Hs18xlarge("hs1.8xlarge"),
     Cc14xlarge("cc1.4xlarge"),
     Cc28xlarge("cc2.8xlarge"),
-    Cg14xlarge("cg1.4xlarge");
+    Cg14xlarge("cg1.4xlarge"),
+    Cr18xlarge("cr1.8xlarge");
 
     private String value;
 
@@ -93,6 +94,8 @@ public enum InstanceType {
             return InstanceType.Cc28xlarge;
         } else if ("cg1.4xlarge".equals(value)) {
             return InstanceType.Cg14xlarge;
+        } else if ("cr1.8xlarge".equals(value)) {
+            return InstanceType.Cr18xlarge;
         } else {
             throw new IllegalArgumentException("Cannot create enum from " + value + " value!");
         }
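Beyond the patch above, a static lookup map would avoid growing the if/else chain every time a new instance type appears. A stand-alone sketch (mirroring, not reproducing, the SDK's enum):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: map-based lookup so adding a constant (e.g. cr1.8xlarge)
// requires no change to the conversion logic.
enum InstanceTypeSketch {
    Cg14xlarge("cg1.4xlarge"),
    Cr18xlarge("cr1.8xlarge");

    private static final Map<String, InstanceTypeSketch> BY_VALUE = new HashMap<>();
    static {
        for (InstanceTypeSketch t : values()) BY_VALUE.put(t.value, t);
    }

    private final String value;

    InstanceTypeSketch(String value) { this.value = value; }

    static InstanceTypeSketch fromValue(String value) {
        InstanceTypeSketch t = BY_VALUE.get(value);
        if (t == null) {
            throw new IllegalArgumentException("Cannot create enum from " + value + " value!");
        }
        return t;
    }
}
```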

TransferManager.uploadDirectory doesn't support Server-Side Encryption with Amazon-managed Keys

TransferManager.uploadDirectory is a very efficient and simple way to achieve great upload throughput with the AWS SDK for Java.

It is not possible, however, to combine that with encryption for data at rest with Amazon-managed keys, as it's not possible to specify the object metadata or, specifically, the server-side encryption flag.

In a normal put request, one invokes server-side encryption by adding:

objectMetadata.setServerSideEncryption(
ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
putRequest.setMetadata(objectMetadata);

Client-side encryption works because the AmazonS3EncryptionClient is used instead of AmazonS3Client, but this isn't usable by folks not wanting the complexity of managing keys for data-at-rest protection.

Some ideas as to how to address this:

  1. Allow TransferManagerConfiguration to specify whether server-side encryption is to be used across the board (not very granular)
  2. Allow uploadDirectory to receive a flag, true for server-side encryption (but what about other attributes, like Expires, etc.?)
  3. Have uploadDirectory issue an optional callback for each object, giving the app the opportunity to create an instance of ObjectMetadata for each file being uploaded. (this would make the capabilities of uploadDirectory similar to those already provided by upload)
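Idea 3 could be sketched as a small callback interface (all names here are hypothetical, not SDK API; a minimal class stands in for ObjectMetadata):

```java
import java.io.File;

// Sketch of idea 3: uploadDirectory would invoke this callback once per
// file, letting the application set per-object metadata such as the
// server-side encryption flag.
interface FileMetadataProvider {
    SketchMetadata metadataFor(File file);
}

// Minimal stand-in for ObjectMetadata in this sketch.
class SketchMetadata {
    boolean serverSideEncryption;
}
```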

OpsWorks describeStacks for all stacks throws validation exception

A DescribeStacksRequest describeStacks call with no IDs specified throws a validation error. Per the [API](http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/opsworks/model/DescribeStacksRequest.html#withStackIds(java.lang.String...)), omitting the withStackIds() parameter should return a description of all stacks.

List<Stack> result = opsWorksClient.describeStacks(new DescribeStacksRequest()).getStacks();

Exception in thread "main" ValidationException: Status Code: 400, AWS Service: AWSOpsWorks, AWS Request ID: 45957802-c956-11e2-bd64-95797b8784fd, AWS Error Code: ValidationException, AWS Error Message: Please provide no arguments or, one or more stack IDs
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:653)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:347)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:199)
at com.amazonaws.services.opsworks.AWSOpsWorksClient.invoke(AWSOpsWorksClient.java:2039)
at com.amazonaws.services.opsworks.AWSOpsWorksClient.describeStacks(AWSOpsWorksClient.java:1686)

And:

List<Stack> result = opsWorksClient.describeStacks(new DescribeStacksRequest().withStackIds()).getStacks();

Exception in thread "main" ValidationException: Status Code: 400, AWS Service: AWSOpsWorks, AWS Request ID: 1e5777d7-c958-11e2-9f24-eb40b9de1b03, AWS Error Code: ValidationException, AWS Error Message: Please provide no arguments or, one or more stack IDs
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:653)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:347)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:199)
at com.amazonaws.services.opsworks.AWSOpsWorksClient.invoke(AWSOpsWorksClient.java:2039)
at com.amazonaws.services.opsworks.AWSOpsWorksClient.describeStacks(AWSOpsWorksClient.java:1686)

S3 Upload does not report errorCode or statusCode, just opaque TransferState.Failed

Thorough discussion and my specific issue is here:

http://stackoverflow.com/questions/15033064/aws-s3-java-sdk-detect-time-clock-skew-programmatically/15035392#15035392

There are various scenarios where an S3 transfer fails - some transient, some connectivity related, some related to fundamental client issues (like clock skew). The service accurately reports important diagnostic info back to the client, but the SDK swallows this information and just returns an opaque TransferState.Failed.

The SDK should report errorCode and statusCode appropriately when interacting with S3.

DefaultErrorResponseHandler masks true exception when service returns invalid XML

We are seeing an issue where calls to AmazonElasticMapReduceClient#runJobFlow intermittently fail with the following stack trace:

Caused by: com.amazonaws.AmazonClientException: Unable to unmarshall error response (Premature end of file.)
        at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:548)
        at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:290)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:170)
        at com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClient.invoke(AmazonElasticMapReduceClient.java:574)
        at com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClient.runJobFlow(AmazonElasticMapReduceClient.java:432)
        at 
<snip/>
        ... 3 more
Caused by: org.xml.sax.SAXParseException: Premature end of file.
        at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
        at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
        at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:124)
        at com.amazonaws.util.XpathUtils.documentFrom(XpathUtils.java:67)
        at com.amazonaws.http.DefaultErrorResponseHandler.handle(DefaultErrorResponseHandler.java:65)
        at com.amazonaws.http.DefaultErrorResponseHandler.handle(DefaultErrorResponseHandler.java:36)
        at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:528)
        ... 14 more

However, because DefaultErrorResponseHandler is failing to parse the (presumably invalid) XML returned from the service, we cannot determine the root cause of the issue. Perhaps in these cases the DefaultErrorResponseHandler should trap the SAXParseException thrown by XpathUtils#documentFrom and either log the raw HttpResponse content or at the very least throw an AmazonServiceException containing that HTTP status code to provide a hint as to the real issue.

No warnings when attempting to overwrite immutable SWF activity registration parameters (e.g. defaultTaskList)


An example of a problem that this may cause is the following: if taskList configurations are changed in code but the version is not changed, the code will deploy just fine, yet the changes to the settings will not be propagated, and no warnings will be given. This is a frequently occurring case and is quite difficult to discover/debug.

This issue is briefly discussed in this AWS Developer forum post.

Upgrade httpcore and httpclient to 4.2.4+

Java applications that interact with AWS are limited to the outdated versions of httpcore and httpclient that the AWS Java SDK depends on (4.1.1 as of 12-Apr-2013). Apache provides better documentation, support and bug fixes for the latest version which is currently 4.2.4.

I've tried using 4.2.4 in Asgard in order to make use of the newest HTTP improvements for non-Amazon interactions. Unfortunately, this causes new errors in some Amazon calls.

There is no simple way to decouple the AWS Java SDK's use of an old httpclient from our application's use of a newer httpclient.

If Amazon can't or won't upgrade the outdated libraries, then our relatively expensive choices are:

  1. Stick to the httpclient 4.1.1 API for non-Amazon calls, for which it is harder to find support
  2. Choose a less popular http Java library because Amazon got to the popular one first
  3. Write a new http library from scratch

MultipleFileUpload addProgressListener Throws Nullpointer Exception

I am getting null pointer exceptions when I add a progress listener to a multi-file upload with the transfer manager in version 1.5.3. The transfer works otherwise. This seems to be a bug in the SDK. Example:

    final MultipleFileUpload upload = tx.uploadDirectory(bucket, key, new File(directory), true);
    upload.addProgressListener(new ProgressListener(){

        @Override
        public void progressChanged(ProgressEvent progressEvent) {
            System.out.println(upload.getProgress().getPercentTransferred());
        }
    });

SQS long polling is not interruptible

It would be good if threads that are blocked reading from SQS could wake up when the thread is interrupted. Since reading from SQS is idempotent, this should be safe to do.

Intermittent "signature we calculated does not match" errors

We (and other users) have been seeing weird intermittent signing errors -- on a given day, many thousands of requests succeed, but about a dozen fail (v1.4.4.1 & v1.4.5):

com.amazonaws.AmazonServiceException: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.

The Canonical String for this request should have been
'POST
/

content-length:222
content-type:application/x-amz-json-1.0
host:dynamodb.eu-west-1.amazonaws.com
user-agent:aws-sdk-java/1.4.5 Linux/3.4.43-43.43.amzn1.x86_64 Java_HotSpot(TM)_64-Bit_Server_VM/17.0-b16
x-amz-date:20130605T125514Z
x-amz-target:DynamoDB_20120810.Query

content-length;content-type;host;user-agent;x-amz-date;x-amz-target
87fa64cc8ba1483914fbde19afaf5a727957ed72845e4c1039320495fe46acb1'

The String-to-Sign should have been
'AWS4-HMAC-SHA256
20130605T125514Z
20130605/eu-west-1/dynamodb/aws4_request
f53aaa5fe95ed6c0e9f34f8bfd630cb87a189ed2be914e402842fe104e7cdfdf'

        at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:653) ~[ios_purchases_api.jar:na]
        at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:347) ~[ios_purchases_api.jar:na]
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:199) ~[ios_purchases_api.jar:na]
        at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1245)

We're currently getting a dozen or so errors like this a day in Production, and they seem bizarre, given that we are making lots of successful requests with the exact same credentials.

We use a single instance of the com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient class, shared between threads, which lives for the life of the JVM. Is any of that incorrect use?

cc'ing @fulghum because of the big changes to com.amazonaws.auth.AWS4Signer in v1.4.4 c2f2e516#diff-4

(initially reported at https://forums.aws.amazon.com/thread.jspa?messageID=457465)

DynamoDB Update/Clobber issue

When a user wants to update only specific attributes of an item, they set those attribute values on the entity and create a config with save behavior UPDATE. The expected behavior is that the SDK updates the provided values and leaves the remaining attributes unchanged. Instead, it deletes the values of the attributes that are not specified in the entity. A change appears to be needed in the save() method of the DynamoDBMapper class:

    else if (config.getSaveBehavior() != SaveBehavior.CLOBBER) {
        updateValues.put(attributeName, new AttributeValueUpdate().withAction("DELETE"));
    }

This should instead be:

    else if (config.getSaveBehavior() != SaveBehavior.UPDATE) {
        updateValues.put(attributeName, new AttributeValueUpdate().withAction("DELETE"));
    }

Access request ID when response is successful

Hi,

I had a quick look at the source code and noticed there is no way to access the request ID from the API when the response is successful. However, the request ID is required by Amazon support and would be very useful for investigating issues like random high latency on responses, random high latency on write propagation, and random data loss after writes. For those intermittent production issues, it's impossible to keep logging at debug level all the time. I find it hard to make the request ID accessible without changing the API. It would also be useful to expose AWSRequestMetrics from the ExecutionContext in the API. Any suggestions?

Thanks,

Ke

Allow looking up Regions by name (say eu-west-1)

Currently, in order to obtain a Regions object for a given region name, one must do this:

for (Regions region : Regions.values()) {
    if (regionName.equals(region.getName())) {
        return Region.getRegion(region);
    }
}

This code could easily be included as a Regions.fromName(String regionName) method to make life easier for everyone.

How to get tags from RDS DBInstance?

I have a problem when I try to get tags from RDS DBInstances in Java.

AmazonRDS rds = new AmazonRDSClient(credentialsProvider);
// Note: no resource name (ARN) is set on the request here.
ListTagsForResourceRequest request = new ListTagsForResourceRequest();
ListTagsForResourceResult resultedTags = rds.listTagsForResource(request);

Status Code: 500, AWS Service: AmazonRDS, AWS Request ID: ..., AWS Error Code: InternalFailure, AWS Error Message: An internal error has occurred. Please try your query again at a later time.
Stack Trace:
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:614)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:312)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:165)
at com.amazonaws.services.rds.AmazonRDSClient.invoke(AmazonRDSClient.java:1992)
at com.amazonaws.services.rds.AmazonRDSClient.listTagsForResource(AmazonRDSClient.java:746)

Thank you in advance.

Bug in DynamoDBMapper.writeOneBatch?

I suspect there is a subtle bug in DynamoDBMapper.writeOneBatch. In the line

divideBatch(batch, firstHalfBatch, secondHalfBatch);

DynamoDBMapper.java#L905

I think this should be:

divideBatch(failedBatch, firstHalfBatch, secondHalfBatch);

If batch is not equal to failedBatch, i.e. callUntilCompletion(batch) was a partial success, then this code will rewrite the successful portion of batch.
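The consequence is easy to see with a stand-in for the (private) divideBatch helper. In this sketch, the batch-splitting behavior is an assumption based on the method name; the point is that the items passed to it are the ones that get retried:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DivideBatch {
    // Stand-in for DynamoDBMapper's private divideBatch: splits a batch
    // into two halves so each half can be retried separately.
    static <T> void divideBatch(List<T> batch, List<T> firstHalf, List<T> secondHalf) {
        int mid = batch.size() / 2;
        firstHalf.addAll(batch.subList(0, mid));
        secondHalf.addAll(batch.subList(mid, batch.size()));
    }

    public static void main(String[] args) {
        // Suppose items 1-5 failed out of a larger batch. Passing the
        // original batch here (the suspected bug) would re-split and
        // rewrite items that already succeeded; passing failedBatch
        // retries only what actually failed.
        List<Integer> failedBatch = Arrays.asList(1, 2, 3, 4, 5);
        List<Integer> firstHalf = new ArrayList<>();
        List<Integer> secondHalf = new ArrayList<>();
        divideBatch(failedBatch, firstHalf, secondHalf);
        System.out.println(firstHalf + " " + secondHalf); // [1, 2] [3, 4, 5]
    }
}
```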

Provide a securityGroupExists(String name) method similar to the S3 bucketExists() one

Currently, to check whether a security group exists, one must catch an exception:

try {
    amazonEC2.describeSecurityGroups(new DescribeSecurityGroupsRequest().withGroupNames(ec2SecurityGroup));
    return; //Success, it already exists.
} catch (AmazonServiceException e) {
    //The security group doesn't exist.
}

Having a dedicated method would make this much better.
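The shape such a helper could take is sketched below without the SDK. `ServiceException` is a stand-in for `com.amazonaws.AmazonServiceException`, and the `"InvalidGroup.NotFound"` error code is my assumption about what EC2 returns for a missing group; a real helper would wrap `describeSecurityGroups` in the `Callable`:

```java
import java.util.concurrent.Callable;

public class ExistenceCheck {
    // Stand-in for com.amazonaws.AmazonServiceException.
    static class ServiceException extends RuntimeException {
        final String errorCode;
        ServiceException(String errorCode) { this.errorCode = errorCode; }
    }

    // Run the describe call and translate a "not found" error into false;
    // any other service error is rethrown rather than swallowed.
    static boolean exists(Callable<?> describeCall) {
        try {
            describeCall.call();
            return true;
        } catch (ServiceException e) {
            if ("InvalidGroup.NotFound".equals(e.errorCode)) {
                return false;
            }
            throw e;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(exists(() -> "group-description")); // true
        System.out.println(exists(() -> {
            throw new ServiceException("InvalidGroup.NotFound");
        })); // false
    }
}
```

Distinguishing "not found" from other failures by error code is the important part; the blanket catch in the workaround above also hides throttling and auth errors.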

Modularize SDK

The current SDK is huge (10 MB) and yet most people only deal with a tiny fraction of it.

Why not offer the SDK in modules, say aws-s3, aws-ec2, ... ?

Each of these could then depend on a common module containing the shared stuff (Regions, ..).

The current distribution could then be offered as aws-all.

Upgrade to Jackson 2.2 (or even better: include your own JSON converter classes)

It would be nice if the SDK could generally limit its dependencies to almost zero beyond the JDK.

If this isn't doable, then upgrading the larger dependencies (like Jackson) to newer versions would at least make it possible to avoid including two versions in an application: the old one for the AWS SDK and the new one for the rest of the app.

Interrupt flag is sometimes cleared when receiving SQS messages

I don't have a small enough test case to submit yet, but my code looks something like this:

executorService.submit(new Runnable() {
    @Override public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            doWork();
        }
    }
});

Sometimes, when I call Future.cancel(true), the while loop does not terminate. I'm not sure whether this bug is in the AWS SDK 1.3.30 or in Apache HttpClient (I've tried the latest stable version, 4.2.3, with no luck either).

I've mentioned this on the Apache HttpClient mailing list too: http://mail-archives.apache.org/mod_mbox/hc-httpclient-users/201301.mbox/%3CCAA7ZWuUPb_7ztCw1-5pyJ2%2BByWVk7McsvtGewa9ahKjXRtOzMA%40mail.gmail.com%3E

com.amazonaws.regions.Region.getRegion(Regions) doesn't seem to use proxy settings

Hello all,

It doesn't seem as though the Region.getRegion call makes use of a proxy.

AmazonCloudWatchClient cloudWatchClient = new AmazonCloudWatchClient();
cloudWatchClient.setConfiguration(new ClientConfiguration().withProxyHost("proxy.corp.com").withProxyPort(8080));
cloudWatchClient.setRegion(Region.getRegion(Regions.EU_WEST_1));
// hangs here, until it times out connecting to aws-sdk-configurations.amazonwebservices.com

It also doesn't seem to work if I set -Dhttp.proxyHost and -Dhttp.proxyPort on the command line either.

Please upgrade org.apache.httpcomponents:httpclient

Many other open-source libraries now run against org.apache.httpcomponents:httpclient:4.2.x (e.g. Selenium), which seems to be incompatible with the org.apache.httpcomponents:httpclient:4.1 that aws-java-sdk depends on.

If I manually upgrade to org.apache.httpcomponents:httpclient:4.2.2 the AWS client throws java.lang.ClassNotFoundException: org.apache.http.HttpEntityEnclosingRequest during initialisation.

To work around this we're going to have to turn off Firefox auto-updates and pin FF at an old version so that Selenium works. Not really ideal for a web app dev team...

QueueDoesNotExistException not thrown

When an action is performed on a nonexistent queue, I would expect to receive a QueueDoesNotExistException, but instead I receive an AmazonServiceException. I believe this is because the code checks for the error code "AWS.SimpleQueueService.QueueDoesNotExist" when it should check for "AWS.SimpleQueueService.NonExistentQueue".

Disable/Enable IdleConnectionReaper through ClientConfiguration options

In restricted environments it is very rare that a codebase has the modifyThread and modifyThreadGroup permissions. To accommodate that, it would be nice if the IdleConnectionReaper constructor took the ClientConfiguration options and avoided marking its thread as a daemon thread.

Correct access/secret key env variable names in EnvironmentVariableCredentialsProvider?

In com.amazonaws.auth.EnvironmentVariableCredentialsProvider, the environment variables "AWS_ACCESS_KEY_ID" and "AWS_SECRET_KEY" are used for access and secret keys.

However in documentation like at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/SettingUp_CommandLine.html#set-aws-credentials_ the names are "AWS_ACCESS_KEY" and "AWS_SECRET_KEY".

Elsewhere, like at http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html , the values seem to be "AWS_ACCESS_KEY_ID" and "AWS_SECRET_ACCESS_KEY".

This is perhaps more of a question, but I had understood the second set of names to be correct.
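A hypothetical lookup that tolerates both documented spellings is sketched below. To be clear, this is NOT the SDK's actual behavior (EnvironmentVariableCredentialsProvider reads only the names listed above); the Map parameter stands in for System.getenv() so the logic is testable:

```java
import java.util.HashMap;
import java.util.Map;

public class EnvCredentials {
    // Hypothetical: prefer AWS_ACCESS_KEY_ID, fall back to the older
    // AWS_ACCESS_KEY spelling seen in some documentation.
    static String accessKey(Map<String, String> env) {
        String key = env.get("AWS_ACCESS_KEY_ID");
        return key != null ? key : env.get("AWS_ACCESS_KEY");
    }

    public static void main(String[] args) {
        Map<String, String> env = new HashMap<>();
        env.put("AWS_ACCESS_KEY", "AKIAEXAMPLE");
        System.out.println(accessKey(env)); // falls back to AWS_ACCESS_KEY
    }
}
```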

NumberFormatException in AbstractS3ResponseHandler::populateObjectMetadata with GET Object request

For AWS SDK for Java version 1.4.1:

The documentation shows that the response from GET Object has an Expires response header of the form:

Expires: Thu, 01 Dec 1994 16:00:00 GMT

This matches what the HTTP RFC2616 specifies for this field.

However, the routine AbstractS3ResponseHandler::populateObjectMetadata expects the Expires value to be an integer, and therefore logs a NumberFormatException warning.

The following are the current code lines from that module:

    } else if (key.equals(Headers.EXPIRES)) {
        try {
            metadata.setExpirationTime(new Date(Long.parseLong(header.getValue())));
        } catch (NumberFormatException pe) {
            log.warn("Unable to parse expiration time: " + header.getValue(), pe);
        }
Based on the API documentation and the HTTP standard, the code should really be:

    } else if (key.equals(Headers.EXPIRES)) {
        try {
            SimpleDateFormat df = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz");
            metadata.setExpirationTime(df.parse(header.getValue()));
        } catch (ParseException pe) {
            log.warn("Unable to parse expiration time: " + header.getValue(), pe);
        }
