boto / botocore
The low-level, core functionality of boto3 and the AWS CLI.
License: Apache License 2.0
We want to use OrderedDict to preserve the order of the keys in the JSON files, but Python 2.6 doesn't have it in the stdlib. We need to work around that, and also use simplejson on 2.6 to get the object_pairs_hook argument, which the stdlib json module in 2.6 does not support.
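The workaround can be sketched as follows; on 2.7+ the stdlib json module accepts the hook directly, and on 2.6 the same call would go through simplejson instead:

```python
import json
from collections import OrderedDict

# On Python >= 2.7 the stdlib json module accepts object_pairs_hook;
# on 2.6 you would `import simplejson as json` to get the same argument.
data = json.loads('{"zebra": 1, "apple": 2}', object_pairs_hook=OrderedDict)
print(list(data))  # keys come back in document order, not sorted
```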
It seems like the file botocore/data/aws/s3.json is missing. When trying to instantiate the s3 service I get:
botocore.exceptions.DataNotFoundError: Unable to load data for: aws/s3
Hi all,
AWS has enabled the "AWS CloudTrail" service, which can capture AWS API activity. In its output, the "userAgent" is also recorded.
So I suggest that if we could customize the user-agent name in botocore, I could easily initialize each of my modules with a specific user-agent name. Then, when reading the output, it would be easier for me to know which module made the API call. Most of the time, the instance and API call name are not enough to know everything.
I wish you could add this as an enhancement.
Thanks,
Henry
When I use the awscli tool to run the describe-domains command, I always get this error:
$ aws cloudsearch describe-domains --domain-name imdb-movies --region us-west-2
'Service' object has no attribute 'xmlnamespace'
I think the problem is that the cloudsearch.json file is missing the xmlnamespace field. When I add this field, the aws cloudsearch describe-domains command works better:
diff --git a/botocore/data/aws/cloudsearch.json b/botocore/data/aws/cloudsearch.json
index 971f437..a299ee6 100644
--- a/botocore/data/aws/cloudsearch.json
+++ b/botocore/data/aws/cloudsearch.json
@@ -5,6 +5,7 @@
"signature_version": "v2",
"service_full_name": "Amazon CloudSearch",
"endpoint_prefix": "cloudsearch",
"operations": {
"CreateDomain": {
The content type is triggering the response to be associated with the json response class.
Repro:
aws s3 cp foo.json s3://bucket --content-type application/json
# No response is printed.
aws s3api head-object --bucket bucket --key foo.json
Greetings. I'm trying to request a spot instance with ephemeral disks mapped:
import botocore.session
session = botocore.session.get_session()
session.set_debug_logger()
ec2 = session.get_service('ec2')
operation = ec2.get_operation('RequestSpotInstances')
endpoint = ec2.get_endpoint('us-east-1')
response = operation.call(
endpoint,
spot_price='1.00',
instance_count=1,
launch_specification={
'image_id': 'ami-33ec795a',
'instance_type': 'cc2.8xlarge',
'block_device_mappings': [
{"device_name": "/dev/sdb", "virtual_name": "ephemeral0"},
{"device_name": "/dev/sdc", "virtual_name": "ephemeral1"},
{"device_name": "/dev/sdd", "virtual_name": "ephemeral2"},
{"device_name": "/dev/sde", "virtual_name": "ephemeral3"}
]
},
)
And I'm getting the following error:
DEBUG - <?xml version="1.0" encoding="UTF-8"?><Response><Errors><Error><Code>UnknownParameter</Code><Message>The parameter BlockDeviceMappings is not recognized</Message></Error></Errors><RequestID>6536e1a9-1314-4e04-af89-9f554ec79237</RequestID></Response>
Digging into the debug output, I see:
DEBUG - label=
DEBUG - label=LaunchSpecification
DEBUG - label=LaunchSpecification.ImageId
DEBUG - label=BlockDeviceMappings.Item.1
DEBUG - label=BlockDeviceMappings.Item.1.DeviceName
DEBUG - label=BlockDeviceMappings.Item.1.VirtualName
DEBUG - label=BlockDeviceMappings.Item.2
DEBUG - label=BlockDeviceMappings.Item.2.DeviceName
DEBUG - label=BlockDeviceMappings.Item.2.VirtualName
DEBUG - label=BlockDeviceMappings.Item.3
DEBUG - label=BlockDeviceMappings.Item.3.DeviceName
DEBUG - label=BlockDeviceMappings.Item.3.VirtualName
DEBUG - label=BlockDeviceMappings.Item.4
DEBUG - label=BlockDeviceMappings.Item.4.DeviceName
DEBUG - label=BlockDeviceMappings.Item.4.VirtualName
DEBUG - label=LaunchSpecification.InstanceType
DEBUG - label=
This doesn't look right. The labels should be, e.g.,
LaunchSpecification.BlockDeviceMappings.1.DeviceName
rather than
BlockDeviceMappings.Item.1.DeviceName
So I made the following simple change:
diff --git a/botocore/parameters.py b/botocore/parameters.py
index c5e7e72..5c8598b 100644
--- a/botocore/parameters.py
+++ b/botocore/parameters.py
@@ -224,7 +224,8 @@ class ListParameter(Parameter):
member_name = member_type.xmlname
else:
member_name = 'member'
- label = '%s.%s' % (self.name, member_name)
+ if member_name != 'Item':
+ label = '%s.%s' % (self.name, member_name)
for i, v in enumerate(value, 1):
member_type.build_parameter_query(v, built_params,
'%s.%d' % (label, i),
and now I see
DEBUG - label=
DEBUG - label=LaunchSpecification
DEBUG - label=LaunchSpecification.ImageId
DEBUG - label=LaunchSpecification.BlockDeviceMapping.1
DEBUG - label=LaunchSpecification.BlockDeviceMapping.1.DeviceName
DEBUG - label=LaunchSpecification.BlockDeviceMapping.1.VirtualName
DEBUG - label=LaunchSpecification.BlockDeviceMapping.2
DEBUG - label=LaunchSpecification.BlockDeviceMapping.2.DeviceName
DEBUG - label=LaunchSpecification.BlockDeviceMapping.2.VirtualName
DEBUG - label=LaunchSpecification.BlockDeviceMapping.3
DEBUG - label=LaunchSpecification.BlockDeviceMapping.3.DeviceName
DEBUG - label=LaunchSpecification.BlockDeviceMapping.3.VirtualName
DEBUG - label=LaunchSpecification.BlockDeviceMapping.4
DEBUG - label=LaunchSpecification.BlockDeviceMapping.4.DeviceName
DEBUG - label=LaunchSpecification.BlockDeviceMapping.4.VirtualName
DEBUG - label=LaunchSpecification.InstanceType
DEBUG - label=
and the request succeeds. Not sure if this is the right fix, though. Thoughts?
Thanks!
Not sure if this is specific to ListMultipartUploads or more general, but given a response like this (I added the indentation to show the CommonPrefixes part more clearly):
<?xml version="1.0" encoding="UTF-8"?>
<ListMultipartUploadsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Bucket>botocoretest1374528673-218</Bucket><KeyMarker></KeyMarker><UploadIdMarker></UploadIdMarker><NextKeyMarker></NextKeyMarker><NextUploadIdMarker></NextUploadIdMarker><Delimiter>/</Delimiter><Prefix>foo</Prefix><MaxUploads>1000</MaxUploads><IsTruncated>false</IsTruncated>
<CommonPrefixes><Prefix>foo/</Prefix></CommonPrefixes>
<CommonPrefixes><Prefix>foobar/</Prefix></CommonPrefixes>
</ListMultipartUploadsResult>
I get a parsed response like this:
{
"UploadIdMarker": null,
"CommonPrefixes": {
"Prefix": "foobar/"
},
"NextKeyMarker": null,
"Bucket": "botocoretest1374528673-218",
"Prefix": "foo",
"NextUploadIdMarker": null,
"Delimiter": "/",
"Uploads": [],
"KeyMarker": null,
"MaxUploads": 1000,
"IsTruncated": false
}
I noticed this while working on some of the paginator code. In this scenario I had a number of multipart uploads under foo/<keys> and foobar/<keys>.
We are also not using SigV4. For example, all directconnect requests are failing; upon inspection, no authorization headers are being added to the request.
When you use a file-like object as the body param to s3.PutObject for a bucket outside of US Standard, you'll get a 307 followed by a 400 Bad Request and eventually a socket timeout. To repro:
op.call(endpoint, body=open('/some/file', 'rb'), ...)
You'll see a 307, then a 400. I think this is because requests is configured to follow redirects, but there's no Location header, hence the bad request.
I think the fact that this is a file-like object exposes the bug, because we send two packets: one for the request plus headers, then one for the body. When the body is a string (like what we use in the integration tests), httplib will do msg += message_body and send it as a single chunk, which succeeds.
This might also be related to the virtual host handler we have. I noticed this in the log messages:
botocore.handlers: DEBUG: Checking for DNS compatible bucket for: https://s3-us-west-2.amazonaws.com/botocoretest1374195914-317
botocore.handlers: DEBUG: URI updated to: https://botocoretest1374195914-317.s3.amazonaws.com
Would be good to get an integration test written for this also.
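The "DNS compatible bucket" check from the log messages can be approximated like this (a rough sketch only; the real rules also cover dots for virtual hosting and IP-like names, and live in the S3 docs and botocore's handlers):

```python
import re

def dns_compatible(bucket):
    # Rough approximation of S3 virtual-host naming rules: 3-63 chars,
    # lowercase letters, digits, and hyphens, starting and ending with
    # a letter or digit.
    return bool(re.match(r"^[a-z0-9][a-z0-9\-]{1,61}[a-z0-9]$", bucket))

print(dns_compatible("botocoretest1374195914-317"))  # True
print(dns_compatible("My_Bucket"))                   # False
```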
See description here: aws/aws-cli#52
Problems dealing with new header format.
If we move the call to get_auth one level up into the get_endpoint function, this would allow people to pass in whatever auth handler they wanted when creating Endpoints manually.
It would also make it easier to write unit tests.
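The suggestion could look something like this sketch (the names are illustrative, not botocore's actual API):

```python
class Endpoint:
    # Accept the auth handler as a constructor argument instead of
    # resolving it internally, so callers building Endpoints by hand
    # (or unit tests) can inject whatever handler they want.
    def __init__(self, host, auth=None):
        self.host = host
        self.auth = auth

def get_endpoint(service_name, region_name, auth=None):
    # The call to get_auth would live here (one level up), with a
    # passed-in handler taking precedence over the default.
    host = "%s.%s.amazonaws.com" % (service_name, region_name)
    return Endpoint(host, auth=auth)

endpoint = get_endpoint("ec2", "us-east-1", auth="my-custom-auth-handler")
```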
Trying to access the metadata service gives a traceback of:
Traceback (most recent call last):
File "c:\temp\aws-cli\awscli\clidriver.py", line 169, in _call
endpoint_url=self.main_parser.args.endpoint_url)
File "C:\Temp\venv33\lib\site-packages\botocore-0.10.0-py3.3.egg\botocore\service.py", line 113, in get_endpoint
return get_endpoint(self, region_name, endpoint_url)
File "C:\Temp\venv33\lib\site-packages\botocore-0.10.0-py3.3.egg\botocore\endpoint.py", line 223, in get_endpoint
credentials=service.session.get_credentials(),
File "C:\Temp\venv33\lib\site-packages\botocore-0.10.0-py3.3.egg\botocore\session.py", line 302, in get_credentials
metadata)
File "C:\Temp\venv33\lib\site-packages\botocore-0.10.0-py3.3.egg\botocore\credentials.py", line 188, in get_credentials
metadata=metadata)
File "C:\Temp\venv33\lib\site-packages\botocore-0.10.0-py3.3.egg\botocore\credentials.py", line 80, in search_iam_role
metadata = _search_md()
File "C:\Temp\venv33\lib\site-packages\botocore-0.10.0-py3.3.egg\botocore\credentials.py", line 58, in _search_md
fields = r.content.split('\n')
TypeError: Type str doesn't support the buffer API
I happen to be on a Windows machine, but I think this is just a Python 3 str vs. bytes issue.
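A minimal illustration of the problem: on Python 3, r.content is bytes, and splitting bytes on a str raises the buffer-API TypeError quoted above. Decoding first (or splitting on bytes) fixes it:

```python
content = b"AccessKeyId\nSecretAccessKey\nToken"  # what r.content holds on py3

# content.split('\n') fails on Python 3 because '\n' is str, not bytes.
# Decode to text before splitting (or split on b'\n'):
fields = content.decode("utf-8").split("\n")
print(fields)
```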
The current response looks like:
[
"Path",
"CreateDate",
"GroupId",
"Arn",
"GroupName"
]
Which is incorrect.
It is impossible to customize the class structure without monkey-patching modules. For example, to replace the Service class with a subclass (MockService), I have to somehow inject a function into the botocore.service module. Hackish; not good.
I would expect a factory method (as the pattern defines) inside the Session class, so it could be overridden in a Session subclass. As it stands, if I need to use a subclass of Operation, I also need to touch Session, Service, and Endpoint. A factory method seems very straightforward and explicit to me.
Maybe this suggestion will cause some redesign. I can make a pull request if you are OK with my idea.
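A sketch of the factory-method idea (class names are illustrative, not botocore's real classes):

```python
class Service:
    def __init__(self, name):
        self.name = name

class MockService(Service):
    pass

class Session:
    # Factory method: subclasses override create_service (or just the
    # service_class attribute) instead of monkey-patching botocore.service.
    service_class = Service

    def create_service(self, name):
        return self.service_class(name)

class MockSession(Session):
    service_class = MockService

service = MockSession().create_service("ec2")
```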
Looks like when I refactored the auth module for the sigv4 test suite, I broke the s3 auth class.
Fixing now.
The wrong header name is being used. It should be x-amz-security-token.
Python Interpreter Version: 2.7.5
Botocore Version: 18.0
The SigV4Auth class generates a signature timestamp in its __init__ and uses that timestamp for signing all requests for the endpoint. This means that an endpoint can only generate correctly signed requests for 5 minutes before the requests are rejected by AWS services as having an expired timestamp.
For example:
Signature expired: 20131008T001036Z is now earlier than 20131008T001112Z (20131008T001612Z - 5 min.)
Ran into this problem when using botocore to construct complex CloudFormation stacks that took extended periods of time to complete. I was getting signature expired errors from CloudFormation when polling for completion of the stack construction after 5 minutes because it was using the same endpoint object to poll as it used to issue the initial CreateStack.
session = botocore.session.get_session()
service = session.get_service('ec2')
endpoint = service.get_endpoint('us-east-1')
<wait more than 5 minutes>
operation = service.get_operation(<method>)
operation.call(endpoint, ...)
This will result in a 403 error with a signature expired error like this:
Signature expired: 20131008T001036Z is now earlier than 20131008T001112Z (20131008T001612Z - 5 min.)
Instead of generating a cached signature timestamp in __init__ for SigV4Auth, it should be generated dynamically for each signature operation.
The issue can be worked around by calling get_endpoint() for the service before each operation call, which causes a new signature timestamp to be generated for requests on the new endpoint and thus avoids the expired-timestamp issue.
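The proposed fix would look roughly like this (a sketch under assumed names, not botocore's actual SigV4Auth code):

```python
import datetime

class Signer:
    # Compute the timestamp at signing time rather than caching it in
    # __init__, so an endpoint held for more than 5 minutes still
    # produces signatures AWS will accept.
    def _now(self):
        return datetime.datetime.now(datetime.timezone.utc).strftime(
            "%Y%m%dT%H%M%SZ")

    def add_auth(self, headers):
        headers["X-Amz-Date"] = self._now()  # fresh per request
        return headers

headers = Signer().add_auth({})
```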
The response parser incorrectly duplicates the one returned value into all possible attributes creating a response like this:
$ aws --region us-east-1 ec2 describe-instance-attribute --instance `wget -q -O- http://169.254.169.254/latest/meta-data/instance-id` --attribute rootDeviceName
{
"UserData": {
"Value": "/dev/sda1"
},
"ProductCodes": [],
"InstanceId": "i-7a81571c",
"InstanceInitiatedShutdownBehavior": {
"Value": "/dev/sda1"
},
"RootDeviceName": {
"Value": "/dev/sda1"
},
"EbsOptimized": {
"Value": false
},
"BlockDeviceMappings": [],
"KernelId": {
"Value": "/dev/sda1"
},
"RamdiskId": {
"Value": "/dev/sda1"
},
"DisableApiTermination": {
"Value": false
},
"InstanceType": {
"Value": "/dev/sda1"
}
}
It appears that the data model used for DescribeNetworkInterfaceAttribute does not line up with the API description: http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-DescribeNetworkInterfaceAttribute.html
Based on how DescribeInstanceAttribute is set up, I am assuming that the model should be an enum that accepts the following attributes:
description
sourceDestCheck
groupSet
attachment
In its current state, Amazon appears to reject any request regardless of the string value you pass in. Please let me know if you need any additional information, and thanks in advance for taking a look :).
For example, list-pipelines works:
$ aws elastictranscoder list-pipelines
{
"Pipelines": []
}
But with an arg:
$ aws elastictranscoder list-jobs-by-pipeline --pipeline-id foo --debug
2013-04-26 12:15:21,021 - botocore.base - DEBUG - Attempting to Load: aws/_services/elastictranscoder
2013-04-26 12:15:21,021 - botocore.base - DEBUG - Attempting to Load: aws
2013-04-26 12:15:21,021 - botocore.base - DEBUG - Attempting to Load: aws/_services
2013-04-26 12:15:21,022 - botocore.base - DEBUG - Found data file: /usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/botocore/data/aws/_services.json
2013-04-26 12:15:21,022 - botocore.base - DEBUG - Attempting to Load: aws/_services/elastictranscoder
2013-04-26 12:15:21,022 - botocore.base - DEBUG - Attempting to Load: aws/elastictranscoder
2013-04-26 12:15:21,029 - botocore.base - DEBUG - Found data file: /usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/botocore/data/aws/elastictranscoder.json
2013-04-26 12:15:21,036 - botocore.credentials - INFO - Found credentials in boto config file
2013-04-26 12:15:21,036 - botocore.operation - DEBUG - {u'pipeline_id': 'foo'}
2013-04-26 12:15:21,036 - botocore.operation - DEBUG - {u'pipeline_id': 'foo'}
2013-04-26 12:15:21,036 - botocore.endpoint - DEBUG - {'headers': {}, 'uri_params': {}, 'payload': None}
2013-04-26 12:15:21,037 - botocore.endpoint - DEBUG - SSL Verify: True
2013-04-26 12:15:21,037 - botocore.endpoint - DEBUG - path: /2012-09-25/jobsByPipeline/{PipelineId}
2013-04-26 12:15:21,037 - botocore.endpoint - DEBUG - query_params: Ascending={Ascending}&PageToken={PageToken}
Traceback (most recent call last):
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/awscli/clidriver.py", line 289, in call
**params)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/botocore/operation.py", line 53, in call
return endpoint.make_request(self, params)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/botocore/endpoint.py", line 184, in make_request
uri = self.build_uri(operation, params)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/botocore/endpoint.py", line 150, in build_uri
pc = pc.format(**params['uri_params'])
KeyError: u'PipelineId'
$ aws --debug s3 create-multipart-upload --bucket bucket --key key
Traceback (most recent call last):
File "aws-cli/awscli/clidriver.py", line 289, in call
**params)
File "botocore/botocore/operation.py", line 53, in call
return endpoint.make_request(self, params)
File "botocore/botocore/endpoint.py", line 189, in make_request
prepared_request = self.prepare_request(request)
File "botocore/botocore/endpoint.py", line 69, in prepare_request
self.auth.add_auth(request=request)
File "botocore/botocore/auth.py", line 386, in add_auth
request.headers)
File "botocore/botocore/auth.py", line 378, in get_signature
headers)
File "botocore/botocore/auth.py", line 369, in canonical_string
cs += self.canonical_resource(split)
File "botocore/botocore/auth.py", line 356, in canonical_resource
qsa.sort(cmp=lambda x, y:cmp(x[0], y[0]))
TypeError: 'cmp' is an invalid keyword argument for this function
The cmp arg to sort() was removed in Python 3. It'd be great to get some unit tests around this as well.
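The portable fix is to sort with key instead of cmp (functools.cmp_to_key also works if an existing comparison function must be kept):

```python
from functools import cmp_to_key

qsa = [("uploads", ""), ("acl", "")]

# Python 2 only -- the cmp keyword was removed in Python 3:
#   qsa.sort(cmp=lambda x, y: cmp(x[0], y[0]))

# Works on both Python 2 and 3:
qsa.sort(key=lambda pair: pair[0])

# Or, if an existing cmp-style function must be preserved:
#   qsa.sort(key=cmp_to_key(my_cmp))
```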
Repro steps: run ListObjects on the bucket (or use the aws s3 list-objects --bucket <name> command). Traceback:
Traceback (most recent call last):
File "aws-cli/awscli/clidriver.py", line 175, in main
return command_table[parsed_args.command](remaining, parsed_args)
File "aws-cli/awscli/clidriver.py", line 268, in __call__
return command_table[parsed_args.operation](remaining, parsed_globals)
File "aws-cli/awscli/clidriver.py", line 703, in __call__
self._operation_object, call_parameters, parsed_globals)
File "aws-cli/awscli/clidriver.py", line 775, in invoke
parsed_globals)
File "aws-cli/awscli/clidriver.py", line 789, in _display_response
formatter(operation, response)
File "aws-cli/awscli/formatter.py", line 52, in __call__
response_data = response.build_full_result()
File "botocore/botocore/paginate.py", line 256, in build_full_result
for vals in zip_longest(*iterators):
File "botocore/botocore/paginate.py", line 291, in __iter__
for _, page in self._pages_iterator:
File "botocore/botocore/paginate.py", line 144, in __iter__
**current_kwargs)
File "botocore/botocore/operation.py", line 63, in call
response = endpoint.make_request(self, params)
File "botocore/botocore/endpoint.py", line 72, in make_request
return self._send_request(prepared_request, operation)
File "botocore/botocore/endpoint.py", line 89, in _send_request
response, exception = self._get_response(request, operation, attempts)
File "botocore/botocore/endpoint.py", line 107, in _get_response
http_response), None)
File "botocore/botocore/response.py", line 395, in get_response
xml_response.parse(body, encoding)
File "botocore/botocore/response.py", line 70, in parse
parser.feed(s)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 8475-8477: ordinal not in range(128)
It would be better to add route53 request limits here:
https://github.com/boto/botocore/blob/develop/botocore/data/aws/_retry.json
Details can be found in the issue #1618 in boto:
boto/boto#1618
I attempted to set BOTO_DATA_PATH to experiment with a non-AWS provider, and this code goes into an infinite loop on my system:
paths = os.environ['BOTO_DATA_PATH'].split(':')
for path in paths:
    path = os.path.expandvars(path)
    path = os.path.expanduser(path)
    paths.append(path)
The loop appends to paths regardless of whether expandvars and expanduser produce a different path, so the list being iterated over grows forever.
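A fix is to build a new list instead of appending to the one being iterated (sketch):

```python
import os

os.environ["BOTO_DATA_PATH"] = "/opt/data:$HOME/data:~/more-data"

# Iterating over one list while building another terminates, even when
# expandvars/expanduser return the path unchanged.
paths = [
    os.path.expanduser(os.path.expandvars(p))
    for p in os.environ["BOTO_DATA_PATH"].split(":")
]
print(paths)
```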
I had a ~/.boto config file that had a [Credentials] section with commented-out keys, and instead of moving on to the next potential provider, I get:
# aws iam list-users
No option 'aws_access_key_id' in section: 'Credentials'
In boto, it moves on to the IAM role. I think it makes sense to update botocore to do the same thing.
The response is coming back as:
{
"ResponseMetadata": {
"RequestId": "17f44a0c-cf13-11e2-aaf0-a16b2a7ddbf0"
},
"VirtualMFADevice": {
"Base32StringSeed": null,
"SerialNumber": "arn:aws:iam::336924118301:mfa/ExampleMFADevice",
"QRCodePNG": null
}
}
The Base32StringSeed and QRCodePNG values are of type blob, which is not being handled by the current response parser.
Attempting an aws s3 copy-object command results in a "501 Not Implemented" error. This is caused by the Transfer-Encoding: chunked header that requests is adding to the request.
I need it to be described in JSON so that I can presign a request for upload from the browser (and from clients without PUT method support).
PS: It would also be great to have a presigning API method.
Marshaling of parameters for rest-json services like ElasticTranscoder is broken.
The README for aws-cli says you can have a profile like this:
[profile myprofile]
aws_access_key_id = ...
But if you try to use this profile by specifying --profile myprofile, you will get:
NoRegionError: You must specify a region or set the AWS_DEFAULT_REGION environment variable.
If, however, you use:
[profile "myprofile"]
It works fine. There is either a bug in the docs or a bug in the code.
The session object defines an environment variable that can be used to define a default region, but that environment variable is not being used by get_endpoint.
The data search path is constructed using the __file__ attribute of the base.py module, but it doesn't take into account the drive letter on Windows installations. So installing on C: and then running from D: will cause an error:
botocore.exceptions.DataNotFoundError: Unable to load data for: cli
see aws/aws-cli#54
If I have credentials in the default location, four of the unit tests in test_credentials.py fail because the tests find those credentials instead of the ones we want to use for the tests.
I'm attempting to use botocore entirely standalone - no reliance on configuration files, environment variables or iam roles. It looks to me like this ought to work.
If I initialize a boto session like so:
session = botocore.session.get_session({
'access_key': "myawskeyid",
'secret_key': "myawssecret",
'region': "ap-southeast-1",
})
I get an exception when attempting to load credentials elsewhere:
Traceback (most recent call last):
...
File "/usr/local/lib/python2.7/dist-packages/botocore/service.py", line 90, in get_endpoint
return get_endpoint(self, region_name, endpoint_url)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 117, in get_endpoint
return QueryEndpoint(service, region_name, endpoint_url)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 51, in __init__
credentials=self.session.get_credentials(),
File "/usr/local/lib/python2.7/dist-packages/botocore/session.py", line 144, in get_credentials
metadata)
File "/usr/local/lib/python2.7/dist-packages/botocore/credentials.py", line 189, in get_credentials
metadata=metadata)
File "/usr/local/lib/python2.7/dist-packages/botocore/credentials.py", line 82, in search_iam_role
metadata = metadata['security-credentials']
KeyError: 'security-credentials'
Because we use izip_longest, you can get a response like this:
{"CommonPrefixes": [null, null, null, null],
 "Content": [{...}, {...}, {...}, {...}]
}
Really, when the value is null we shouldn't add it to the list. Our response should then look like:
{"CommonPrefixes": [],
 "Content": [{...}, {...}, {...}, {...}]
}
Hey, could you please update botocore to use Requests 2.0 now that it's released?
http://docs.python-requests.org/en/latest/
It might be nice to allow the use of keyring (https://pypi.python.org/pypi/keyring) for securely storing credentials, similar to what we did with boto.
Having great success working with botocore on lots of fronts with the supported APIs.
However, the lack of support for SimpleDB is a hole that forces me to fall back to old school boto when I need to use it. Is there a plan for SimpleDB support in botocore?
SSL certificate validation is failing for the Support service. This is because the commonName in the certificate is support.us-east-1.amazonaws.com rather than support.amazonaws.com, which is the hostname we are using.
This relates to the same bug we have encountered before in all versions of Python < 2.7.3. The fix is to change our code to use support.us-east-1.amazonaws.com as the hostname.
Filing this so I don't forget:
>>> import botocore.session
>>> botocore.session.get_session().get_service('badservice')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "botocore/botocore/session.py", line 333, in get_service
return botocore.service.get_service(self, service_name, provider_name)
File "botocore/botocore/service.py", line 141, in get_service
return Service(session, provider_name, service_name)
File "botocore/botocore/service.py", line 46, in __init__
self.__dict__.update(sdata)
ValueError: dictionary update sequence element #0 has length 11; 2 is required
I think the root cause might be related to the fact that get_data('aws/badservice') returns a list of available services, whereas get_data('aws/ec2') returns the model:
>>> botocore.session.get_session().get_data('aws/badservice')
['autoscaling', 'cloudformation', ...]
See aws/aws-cli#24
It's desirable to be able to use botocore entirely standalone - no reliance on configuration files, environment variables or IAM roles.
Currently, it's necessary to do something hacky like this:
session = botocore.session.get_session()
service = session.get_service('ec2')
# HACK manually set the botocore credentials object
session._credentials = botocore.credentials.Credentials(
access_key=__opts__['AWS.id'],
secret_key=__opts__['AWS.key'],
)
endpoint = service.get_endpoint(region)
Here's a traceback on DescribeSnapshots. The issue seems to be that a unicode object is passed through when the code thinks it is an encoded string (around line 70 of response.py), and it breaks when the content is not ASCII. The trivial fix is to detect the object type and call encode(encoding) before passing it through to the XML parser.
File "/home/ubuntu/zephyr/local/lib/python2.7/site-packages/botocore/operation.py", line 62, in call
response = endpoint.make_request(self, params)
File "/home/ubuntu/zephyr/local/lib/python2.7/site-packages/botocore/endpoint.py", line 105, in make_request
http_response)
File "/home/ubuntu/zephyr/local/lib/python2.7/site-packages/botocore/response.py", line 380, in get_response
xml_response.parse(body, encoding)
File "/home/ubuntu/zephyr/local/lib/python2.7/site-packages/botocore/response.py", line 70, in parse
parser.feed(s)
UnicodeEncodeError: 'ascii' codec can't encode character u'\ufeff' in position 17793: ordinal not in range(128)
/home/ubuntu/zephyr/local/lib/python2.7/site-packages/botocore/response.py(70)parse()
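The "trivial fix" described above could be sketched like this (a hypothetical helper, not botocore's actual code):

```python
def ensure_encoded(body, encoding="utf-8"):
    # The XML parser chokes when fed a unicode object containing
    # non-ASCII characters (e.g. a BOM), so encode it first and feed
    # the parser bytes instead.
    if isinstance(body, str):  # `unicode` on Python 2
        return body.encode(encoding)
    return body

parsed = ensure_encoded(u"\ufeff<DescribeSnapshotsResponse/>")
```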