aws-beam / aws-elixir
AWS clients for Elixir
License: Apache License 2.0
Hey,
Running Elixir 1.12.3, OTP 24, MacOS 11.2.1 (Big Sur)
I've tried things first with v0.9.0, later changed deps in mix.exs to use master branch from git directly.
I can create a client with
iex(1)>client = AWS.Client.create(System.get_env("AWS_ACCESS_KEY_ID"), System.get_env("AWS_SECRET_ACCESS_KEY"), System.get_env("AWS_SESSION_TOKEN"), "my region")
%AWS.Client{
access_key_id: "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
endpoint: nil,
http_client: {AWS.HTTPClient, []},
json_module: {AWS.JSON, []},
port: 443,
proto: "https",
region: "us-east-1",
secret_access_key: "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
service: nil,
session_token: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
xml_module: {AWS.XML, []}
}
iex(2)>AWS.S3.list_objects(client, "name of bucket")
All of this works just fine: I get back the results I want. But then I try to "describe" my AWS IoT Thing by its name, and this is the most frustrating part for me: some things work and some don't, even though my client appears to be working as it should. My first worry was that this is because I am using AWS SSO, but the session token (see above) is picked up correctly and the client works; listing the S3 bucket succeeds.
By default the endpoint is set to "amazonaws.com". Is this something I need to modify for further queries against AWS IoT?
AWS.IoT.describe_thing(client, "thing name")
I get back the error below, which I don't really understand.
iex(x)>AWS.IoT.describe_thing(client, "thing name")
{:error,
{:unexpected_response,
%{
body: "{\"message\":\"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\"}",
headers: [
{"Date", "Tue, 19 Oct 2021 12:34:06 GMT"},
{"Content-Type", "application/json"},
{"Content-Length", "192"},
{"Connection", "close"},
{"x-amzn-RequestId", "1f67a902-3ca7-4aa1-XXXX-ceXXXXXXXbf6"},
{"Access-Control-Allow-Origin", "*"},
{"x-amzn-ErrorType", "InvalidSignatureException"},
{"x-amz-apigw-id", "HdIx2FRxFvTg="},
{"X-Amzn-Trace-Id", "Root=1-616ebb3e-6badXXXXXXXXXXXX4b"}
],
status_code: 403
}}}
I've seen a few similar issues, but none of them really helps.
I would really appreciate some direction on how to approach this, or on what I'm doing wrong.
Thanks in advance.
Best
Hi,
I am having issues connecting to Kinesis. It looks like the default endpoint is wrong; the Boto library had the same problem. amazonaws.com is wrong, but I can't find a way to override this endpoint at client initialisation.
client = %AWS.Client{access_key_id: "secret",
secret_access_key: "secret",
region: "eu-west-1",
endpoint: "kinesis.eu-west-1.amazonaws.com"}
AWS.Kinesis.list_streams(client, %{})
Returns:
{:error, %HTTPoison.Error{id: nil, reason: :nxdomain}}
Best,
Tomaz
Hello
I think I've found an issue with the way the APIGatewayV2 import/reimport code has been generated (tested with the current hex release 0.8.0):
I expect that AWS.ApiGatewayV2.reimport_api(client, rest_api_id, Jason.encode!(openapi_spec))
would import the openapi spec.
However, AWS returns an HTTP 415 (unsupported media type). I think the issue is related to the enforced use of send_body_as_binary?. I am able to successfully call the API if I disable send_body_as_binary? and change the expected return code to 200. However, in that case one has to provide a somewhat unnatural input, see below:
AWS.ApiGatewayV2.reimport_api(client, rest_api_id, %{"body" => Jason.encode!(openapi_spec)})
Note: in this case body doesn't start with a capital B (see https://docs.aws.amazon.com/apigatewayv2/latest/api-reference/apis-apiid.html#apis-apiid-schemas).
Interestingly, importing an OpenAPI spec works using the V1 API, where the generated code doesn't use send_body_as_binary?.
Please let me know if I can assist you in any way.
Best,
Andre
The title should be self-explanatory: when running mix deps.get on a project that uses :aws ~> 0.5.0 as a dependency, the old jkakar/aws-elixir is fetched instead of aws-beam/aws-elixir.
This may cause dependency conflicts. For instance, I need httpoison ~> 1.5, but jkakar/aws-elixir specifies older dependency versions (in this case httpoison ~> 0.11.1).
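If it helps anyone hitting the same conflict: one workaround, sketched here with illustrative version constraints, is to point mix at the aws-beam repository directly so the stale jkakar package is never resolved:

```elixir
# mix.exs deps — a sketch; the source and versions are illustrative, not prescriptive
defp deps do
  [
    # Fetch aws-beam/aws-elixir from GitHub instead of the old :aws hex package
    {:aws, github: "aws-beam/aws-elixir"},
    {:httpoison, "~> 1.5"}
  ]
end
```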
Hey,
I'm using the library to implement file upload and trying to figure out how to generate the form signature. The AWS example uses a hand-written signature generator.
Does the library have support for something similar?
When I try to get an object that was encrypted with SSE-C using AWS.S3.get_object/22, I get a mismatched-signature error.
def get_object(client, bucket, filename, encryption_key, opts \\ []) do
sse_customer_algorithm = "AES256"
sse_customer_key = Base.encode64(encryption_key)
sse_customer_key_md5 = Crypto.hash(encryption_key, :md5) |> Base.encode64()
range =
case opts[:range] do
nil -> nil
{from, to} -> "bytes=#{from}-#{to}"
end
AWS.S3.get_object(
client,
bucket,
filename,
_part_number = nil,
_response_cache_control = nil,
_response_content_disposition = nil,
_response_content_encoding = nil,
_response_content_language = nil,
_response_content_type = nil,
_response_expires = nil,
_version_id = nil,
_expected_bucket_owner = nil,
_if_match = nil,
_if_modified_since = nil,
_if_none_match = nil,
_if_unmodified_since = nil,
range,
_request_payer = nil,
sse_customer_algorithm,
sse_customer_key,
sse_customer_key_md5,
_options = []
)
end
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<AWSAccessKeyId>***redacted***</AWSAccessKeyId>
<StringToSign>AWS4-HMAC-SHA256
20210403T192431Z
20210403/ca-central-1/s3/aws4_request
***redacted***</StringToSign>
<SignatureProvided>***redacted***</SignatureProvided>
<StringToSignBytes>***redacted***</StringToSignBytes>
<CanonicalRequest>GET
/my-bucket/09b6f2f2-020f-40d6-b66f-32cdb5561b95.zip
content-type:text/xml
host:s3.ca-central-1.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20210403T192431Z
x-amz-server-side-encryption-customer-algorithm:AES256
x-amz-server-side-encryption-customer-key:***redacted***
x-amz-server-side-encryption-customer-key-md5:***redacted***
content-type;host;x-amz-content-sha256;x-amz-date;x-amz-server-side-encryption-customer-algorithm;x-amz-server-side-encryption-customer-key;x-amz-server-side-encryption-customer-key-md5
e3b0c442***redacted***b855</CanonicalRequest>
<CanonicalRequestBytes>...(truncated)
I have verified that I am using the same key as the upload.
I'm stumped...
Is a layer planned for implementing some of this:
https://docs.aws.amazon.com/sdk-for-go/api/aws/session/
For example, parsing ~/.aws/credentials to avoid duplicate config files between the aws-cli and this lib?
load order:
* Environment Variables (edit: supported in Client.create/0 and Client.create/1 )
* Shared Credentials file
* Shared Configuration file
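The shared credentials file is a small INI document, so the "Shared Credentials file" step of the load order above is straightforward to sketch. The module below is purely illustrative (it is not aws-elixir API) and ignores edge cases like comments and quoted values:

```elixir
# Illustrative only; not aws-elixir API. Parses the INI-style shared
# credentials file into %{profile => %{key => value}}.
defmodule SharedCredentials do
  def parse(contents) do
    contents
    |> String.split("\n", trim: true)
    |> Enum.map(&String.trim/1)
    |> Enum.reduce({nil, %{}}, &parse_line/2)
    |> elem(1)
  end

  # "[profile]" opens a new section
  defp parse_line("[" <> _ = line, {_section, acc}) do
    {line |> String.trim_leading("[") |> String.trim_trailing("]"), acc}
  end

  # "key = value" inside a section
  defp parse_line(line, {section, acc}) when not is_nil(section) do
    case String.split(line, "=", parts: 2) do
      [k, v] ->
        {section, put_in(acc, [Access.key(section, %{}), String.trim(k)], String.trim(v))}

      _ ->
        {section, acc}
    end
  end

  defp parse_line(_line, state), do: state
end

SharedCredentials.parse("""
[default]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = example-secret
""")
# => %{"default" => %{"aws_access_key_id" => "AKIAEXAMPLEKEY",
#      "aws_secret_access_key" => "example-secret"}}
```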
Hi, I see the session_token code on master and I've confirmed that it works, and that I'm unable to use the client without this token.
Is an updated tag available for this?
I'm trying to download a file from S3 using AWS.S3.get_object(client, bucket, path)
but the function fails with the error
** (FunctionClauseError) no function clause matching in :lists.prefix/2
The following arguments were given to :lists.prefix/2:
# 1
'<'
# 2
{:error, [],
<<137, 80, 78, 71, 13, 10, 26, 10, 0, 0, 0, 13, 73, 72, 68, 82, 0, 0, 9, 225,
0, 0, 6, 156, 8, 6, 0, 0, 0, 145, 146, 199, 218, 0, 0, 12, 23, 105, 67, 67,
80, 73, 67, 67, 32, 80, 114, ...>>}
(stdlib 3.13) lists.erl:192: :lists.prefix/2
(xmerl 1.3.25) xmerl_scan.erl:3910: :xmerl_scan.scan_mandatory/5
(xmerl 1.3.25) xmerl_scan.erl:572: :xmerl_scan.scan_document/2
(xmerl 1.3.25) xmerl_scan.erl:291: :xmerl_scan.string/2
(aws 0.7.0) lib/aws/xml.ex:42: AWS.XML.decode!/2
(aws 0.7.0) lib/aws/request.ex:104: AWS.Request.request_rest/9
From what I see, the S3 module uses rest-xml as the default protocol [1], which causes the response to be interpreted as XML and decoded [2].
Is this the right way to download files, or should another function be used?
[1] https://github.com/aws-beam/aws-elixir/blob/master/lib/aws/generated/s3.ex#L16
[2] https://github.com/aws-beam/aws-elixir/blob/master/lib/aws/request.ex#L104
Hey people,
I am very happy with your package, but something weird happens when I try to reach MTurk:
iex(4)> AWS.MTurk.get_account_balance(client, %{})
** (ArgumentError) argument error
(crypto 4.8.2) crypto.erl:932: :crypto.hmac/3
(aws 0.7.0) lib/aws/request.ex:438: AWS.Request.Internal.signing_key/4
(aws 0.7.0) lib/aws/request.ex:232: AWS.Request.sign_v4/6
(aws 0.7.0) lib/aws/request.ex:40: AWS.Request.request_post/5
When something is wrong with roles or authentication I've seen different errors; this one is raised by Erlang's crypto module, and I haven't got the foggiest idea what I have done wrong. I am using plug_crypto 1.2.2.
Could you please help me out here
thanks!
Cspr
AWS.HTTPClient, AWS.JSON and AWS.XML. Each of them with their respective docs.
/cc @vrcca
I'm on Elixir 1.6.4, using 0.5.0 of AWS.
I'm trying to use the Lambda portion of this library, and on invoking my function, I'm getting the following error:
Access calls for keywords expect the key to be an atom, got: "X-Amz-Function-Error"
It appears to be happening on line 299 here:
Lines 297 to 309 in 7d543d7
Headers are coming back like this:
[
{"Date", "Fri, 20 Apr 2018 01:12:50 GMT"},
{"Content-Type", "application/json"},
{"Content-Length", "6403"},
{"Connection", "keep-alive"},
{"x-amzn-RequestId", "f13a4d8f-4437-11e8-934b-0d4ffde93ad3"},
{"x-amzn-Remapped-Content-Length", "0"},
{"X-Amz-Executed-Version", "$LATEST"},
{"X-Amzn-Trace-Id", "root=1-5ad93e91-a1770ed8f4eed468e857e018;sampled=0"}
]
Access expects the first item in each tuple to be an atom and won't work w/strings.
The function is being invoked correctly, and the results I expect are coming back from the invocation, but then the function is dying on that error.
Let me know if I can provide any more information/help.
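Since response headers arrive as {string, string} tuples, List.keyfind/3 (which matches any term as key) is a drop-in alternative to Access here; a minimal sketch with made-up header values:

```elixir
headers = [
  {"Content-Type", "application/json"},
  {"X-Amz-Function-Error", "Unhandled"}
]

# Access (headers["X-Amz-Function-Error"]) raises for keyword-style lists with
# string keys; List.keyfind/3 matches on any tuple element instead.
get_header = fn headers, name ->
  case List.keyfind(headers, name, 0) do
    {^name, value} -> value
    nil -> nil
  end
end

get_header.(headers, "X-Amz-Function-Error")  # => "Unhandled"
get_header.(headers, "X-Missing")             # => nil
```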
When we try to invoke a Lambda, we get a 403 error:
client = %AWS.Client{
access_key_id: Keyword.fetch!(config, :access_key_id),
secret_access_key: Keyword.fetch!(config, :secret_access_key),
region: Keyword.fetch!(config, :region)
}
function_name = "arn:aws:lambda:ca-central-1:3456789:function:foo_bar"
AWS.Lambda.invoke_async(client, function_name, %{foo: "bar"})
{:error,
{:unexpected_response,
%{
body: "{\"message\":\"The request signature we calculated does not match the signature you provided.
Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\"}",
headers: [
{"Date", "Tue, 15 Jun 2021 06:08:39 GMT"},
{"Content-Type", "application/json"},
{"Content-Length", "192"},
{"Connection", "keep-alive"},
{"x-amzn-RequestId", "a1e8e46f-8024-40e9-9c48-5091cb6c4ea1"},
{"x-amzn-ErrorType", "InvalidSignatureException"}
],
status_code: 403
}}}
I can't understand what I could possibly mess up here, since there is not a lot of complexity in the invoke_async/3 call.
Any ideas?
Good lib! Thanks for that.
I've got a problem and am wondering how to go about debugging it (for all I know this could be on AWS' side).
def client() do
AWS.Client.create(@access_key_id, @secret_access_key, @region)
end
def receive_message(queue_url \\ @queue_url) do
client()
|> AWS.SQS.receive_message(%{"QueueUrl" => queue_url}, %{})
end
Calling AwsUtils.Worker.receive_message()
works most times but, from time to time, I get this error:
12:16:31.250 [error] 3432- fatal: {:error, {:wfc_Legal_Character, {:error, {:bad_character, 339}}}}
** (exit) {:fatal, {{:error, {:wfc_Legal_Character, {:error, {:bad_character, 339}}}}, {:file, :file_name_unknown}, {:line, 1}, {:col, 13520}}}
(xmerl 1.3.28) xmerl_scan.erl:4127: :xmerl_scan.fatal/2
(xmerl 1.3.28) xmerl_scan.erl:2721: :xmerl_scan.scan_char_data/5
(xmerl 1.3.28) xmerl_scan.erl:2633: :xmerl_scan.scan_content/11
(xmerl 1.3.28) xmerl_scan.erl:2136: :xmerl_scan.scan_element/12
(xmerl 1.3.28) xmerl_scan.erl:2608: :xmerl_scan.scan_content/11
(xmerl 1.3.28) xmerl_scan.erl:2136: :xmerl_scan.scan_element/12
(xmerl 1.3.28) xmerl_scan.erl:2608: :xmerl_scan.scan_content/11
(xmerl 1.3.28) xmerl_scan.erl:2136: :xmerl_scan.scan_element/12
Has anyone ever seen this?
Thanks!
The error messages in mturk are always nil. Looking at the actual HTTPoison response from create_additional_assignments_for_h_i_t/3:
%HTTPoison.Response{
body:
"{\"__type\":\"ParameterValidationError\",\"Message\":\"The value 0 is invalid for MaxAssignmentIncrement. Valid values range from 1 to 1000000000. (1523017679385 s)\",\"Parameter\":\"MaxAssignmentIncrement\",\"TurkErrorCode\":\"AWS.ParameterOutOfRange\"}",
headers: [
{"x-amzn-RequestId", "fd299fbe-5dc7-4f08-8ed3-1c79b545725a"},
{"Content-Type", "application/x-amz-json-1.1"},
{"Content-Length", "238"},
{"Date", "Fri, 06 Apr 2018 12:27:59 GMT"},
{"Cneonction", "close"}
],
status_code: 400
}
and looking at the code it seems that the error message field is wrong since it should start with a capital:
{:ok, response=%HTTPoison.Response{body: body}} ->
error = Poison.Parser.parse!(body)
exception = error["__type"]
message = error["message"]
{:error, {exception, message}}
It should be message = error["Message"] instead.
Hi,
I am trying to get through the configuration of this package, to the point where I can get and update a device's shadow through the REST API. For easier development, I gave myself a permission with access to all iot:* resources.
In my config/dev.exs I have a configuration like the one in the Readme.md:
iex> client = %AWS.Client{access_key_id: "<access-key-id>",
secret_access_key: "<secret-access-key>",
region: "us-east-1",
endpoint: "amazonaws.com"}
Then, if I initialize this client, I get back a response like:
%AWS.Client{access_key_id: "<secret>", endpoint: "amazonaws.com",
port: "443", proto: "https", region: "eu-west-1",
secret_access_key: "<secret>", service: nil}
I can even issue a call to list all the things. I get back the result with all the things listed.
BUT: When I want to get back the shadow of one of the devices, I get back the error.
I call the shadow with
AWS.IoT.DataPlane.get_thing_shadow(Shadow.Client.init_client(), "<thingName>")
init_client is just my function... nothing special.
And I get back {:error, "Not Found"}, even though if I search for it in the web console I can see it there.
I've tried to put an mp3 file with the code shown in the example, so something like:
iex> client = AWS.Client.create(..., ..., ...)
iex> file = File.read!("path_to_mp3_file")
iex> md5 = :crypto.hash(:md5, file) |> Base.encode64()
iex> AWS.S3.put_object(AWSUtils.client(), bucket, path, %{
"Body" => file,
"ContentMD5" => md5
})
The response was with status code 200:
%{
body: "",
headers: [...],
status_code: 200
}
However, when I request the expected URL, I get an error from S3:
This page contains the following errors:
error on line 1 at column 1: Document is empty
error on line 1 at column 1: Encoding error
P.S. I tried to use ExAws.S3.put_object and everything worked fine.
Currently the endpoint prefix is hardcoded for mturk:
host = get_host("mturk-requester", client)
...
defp get_host(endpoint_prefix, client) do
if client.region == "local" do
"localhost"
else
"#{endpoint_prefix}.#{client.region}.#{client.endpoint}"
end
end
Since there is a sandbox development endpoint, it would be nice to support the "mturk-requester-sandbox" prefix as well.
The following lines allow passing the prefix through the options:
defp request(client, action, input, options) do
prefix = Keyword.get(options, :endpoint_prefix, "mturk-requester")
options = Keyword.delete(options, :endpoint_prefix)
client = %{client | service: "mturk-requester"}
host = get_host(prefix, client)
...
end
Then, one can pass the prefix like:
AWS.MechanicalTurk.list_h_i_ts(client, %{}, endpoint_prefix: "mturk-requester-sandbox")
Not sure if this is the best approach or if it would make more sense to pass this through the client though.
The code for the prefix fix: https://github.com/4knahs/aws-elixir/blob/mturk-sandbox/lib/aws/mechanical_turk.ex#L577
When providing multiple querystring parameters (e.g. to AWS.S3.list_objects/8), the response from AWS is the error The request signature we calculated does not match the signature you provided. Check your key and signing method.
Run the following with valid values for bucket and prefix:
AWS.S3.list_objects(AWS.Client.create, "bucket", "/", nil, nil, nil, "prefix")
The result is an error specifying that The request signature we calculated does not match the signature you provided. Check your key and signing method.
The querystring parameters are sorted in reverse order. The request sends ?prefix=prefix&delimiter=%2F, but the expected querystring as reported by the error returned from AWS is ?delimiter=%2F&prefix=prefix.
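For context, Signature Version 4 requires the canonical query string to be sorted by parameter name, which is why the reverse-sorted ?prefix=...&delimiter=... fails to verify. A minimal sketch of building the sorted canonical form (not the library's actual signing code):

```elixir
params = [{"prefix", "prefix"}, {"delimiter", "/"}]

# SigV4 canonical query strings are sorted by name, then percent-encoded.
canonical =
  params
  |> Enum.sort()
  |> Enum.map_join("&", fn {k, v} ->
    URI.encode_www_form(k) <> "=" <> URI.encode_www_form(v)
  end)

canonical  # => "delimiter=%2F&prefix=prefix"
```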
Add a build pipeline using GitHub actions.
To reproduce:
client = AWS.Client.create(...)
client |> AWS.MediaLive.list_channels()
This is because lib/aws/request.ex always sends a payload, even for :get requests:
payload =
if send_body_as_binary? do
Map.fetch!(input, "Body")
else
encode!(client, metadata.protocol, input)
end
This causes an empty JSON object {} to be sent in the HTTP request body, which gets rejected by the AWS API with a 403. Changing the payload to an empty string for :get requests fixes the issue.
Happy to raise a PR to fix this, just not sure exactly where in the code would be most suited to add this condition.
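The conditional described above could look roughly like the sketch below; encode! here is a stand-in for the library's protocol-specific encoder, not its real signature:

```elixir
# Stand-in encoder for illustration only.
encode! = fn input -> inspect(input) end

# Proposed behaviour: :get requests carry an empty body instead of "{}".
build_payload = fn method, input ->
  if method == :get do
    ""
  else
    encode!.(input)
  end
end

build_payload.(:get, %{})      # => ""
build_payload.(:post, %{a: 1}) # => non-empty encoded body
```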
There are quite a few changes since the last release, notably the fix for #71 which is required to support OTP 24. Could we have a new release published soon, please?
select_object_content builds a query string like this:
select&select-type=2
normalize_query then expects query parameters to contain =, but this one doesn't.
This causes the API call to fail, so the function needs to be updated to support this edge case.
I'll take a look at creating a PR for this.
The library currently does not support the EC2 API.
Related: aws-beam/aws-codegen#24
We use Linode's Object Storage, which is fully compatible with S3. When we try to use it with aws-elixir, we get the following error.
iex(1)> client = AWS.Client.create("Access-Key", "Secret-Key", "us-east-1") |> Map.put(:endpoint, "us-east-1.linodeobjects.com")
%AWS.Client{
access_key_id: "access-key",
endpoint: "us-east-1.linodeobjects.com",
http_client: {AWS.HTTPClient, []},
json_module: {AWS.JSON, []},
port: 443,
proto: "https",
region: "us-east-1",
secret_access_key: "secret-key",
service: nil,
session_token: nil,
xml_module: {AWS.XML, []}
}
iex(2)> AWS.S3.list_buckets(client)
[info] TLS :client: In state :certify at ssl_handshake.erl:1901 generated CLIENT ALERT: Fatal - Handshake Failure
- {:bad_cert, :hostname_check_failed}
{:error,
{:tls_alert,
{:handshake_failure,
'TLS client: In state certify at ssl_handshake.erl:1901 generated CLIENT ALERT: Fatal - Handshake Failure\n {bad_cert,hostname_check_failed}'}}}
S3cmd and other tools work fine with S3-related operations against Linode's storage: Docs.
Would this issue be in scope for this project?
Tested on multiple versions of macOS; the same error message below is printed constantly. The strange thing is that it works fine on other OSes. It does not seem to be an issue on a dev build.
(no logger present) unexpected logger message: {log,error,"~s~n",["beam/beam_load.c(1428): Error loading module 'Elixir.AWS.DataPipeline':\n module name in object code is Elixir.AWS.Datapipeline\n"],#{error_logger=>#{emulator=>true,tag=>error},gl=><0.0.0>,pid=><0.2049.0>,time=>1630404393472978}}
The AWS.HTTPClient logic doesn't handle HEAD HTTP calls (in this case using AWS.S3.head_object/4):
** (exit) an exception was raised:
** (CaseClauseError) no case clause matching: {:ok, 200, [REDACTED]}
(aws 0.7.0) lib/aws/request.ex:98: AWS.Request.request_rest/9
This is due to this logic in AWS.HTTPClient:
case :hackney.request(method, url, headers, body, options) do
{:ok, status_code, response_headers, body} ->
{:ok, %{status_code: status_code, headers: response_headers, body: body}}
error ->
error
end
A better approach would be to explicitly listen for the error tuples to prevent unexpected return values:
case :hackney.request(method, url, headers, body, options) do
{:ok, status_code, response_headers, body} ->
{:ok, %{status_code: status_code, headers: response_headers, body: body}}
{:error, reason} ->
{:error, reason}
end
The spec for the HTTP client expects the body to be a binary, but I think the body should either be optional or permit a nil value:
case :hackney.request(method, url, headers, body, options) do
{:ok, status_code, response_headers, body} ->
{:ok, %{status_code: status_code, headers: response_headers, body: body}}
{:ok, status_code, response_headers} ->
{:ok, %{status_code: status_code, headers: response_headers}}
# {:ok, %{status_code: status_code, headers: response_headers, body: nil}}
{:error, reason} ->
{:error, reason}
end
On December 1st 2021 Amazon announced a new feature for their Textract service: Analyze ID. This allows extracting data from government-issued IDs (e.g. a driver's license). SDKs in multiple languages support this new feature; the aws-elixir Textract module does not.
Please add support for this.
Currently there is no .formatter.exs. It would be great to add one, format the whole project, and validate in CI that new additions are formatted (see mix format --check-formatted).
Depends on #44.
Is there a way to do this? I'm not seeing anything in the docs, so just wondering if it is possible.
The idea is to replicate the fix of this issue: aws-beam/aws-codegen#76
Hi all, just wanted to raise an issue: a potentially breaking change is not documented in CHANGELOG.md.
In short, we found out that the error response of v0.5.0 is different from that of v0.9.2 when attempting to update our dependencies:
# v0.5.0
{:error, {exception, message}}
# v0.9.2
{:error, {:unexpected_response, response}}
After some deep diving, here's the related file change:
Line 51 in 0ee85f3
It seems the change happened after v0.7.0, according to this commit.
Am I right that this is a breaking change? If so, would it be good to update CHANGELOG.md to document it?
Happy to send a PR in, and thanks for the amazing work as well!
Hi! I'm a newbie in Elixir, so maybe this is an easy problem. I'm trying to create a policy in IoT Core.
I'm trying to use the function create_policy(client, "policyName", input), but I haven't found an example showing how input has to be written. I get {:error, nil}, and apart from the input format I don't know what the problem could be, because get_policy(client, "policyName") works.
So if someone could send me an example of create_policy, that would be great!
Thanks and have a nice day.
I'm attempting to use this library to send queries to Timestream. Timestream has a bit of a quirk in that queries against it must be executed on a separate endpoint that is looked up via a describe API:
Now, the AWS.TimestreamQuery.describe_endpoints method does exist and I can pull the right info from there, but I can't quite figure out how to insert that back into the client for use in the actual query request. The crux of the issue seems to be AWS.Request.build_host, which performs one of two functions:
Neither of these works for me. The metadata generated by AWS.TimestreamQuery doesn't refer to the discovered endpoint (which I need to provide) but rather to a fixed point in AWS.
Is there a good way around this? Or, alternatively, a good path forward that I could implement? Perhaps something could be added to the supplied options to let me specify an alternative endpoint?
Is there any thought on how to integrate instance roles with the rest of the module? How about adding an additional attribute on the %AWS.Client{} struct and handling it in the AWS.Request module?
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
Hi Team,
I was trying to follow the generated docs for the Rekognition service, but I keep getting this error.
AWS.Rekognition.detect_text(client, "s3://s3_bucket/public/cococola/c_1001.jpg")
** (FunctionClauseError) no function clause matching in AWS.Request.encode!/3
The following arguments were given to AWS.Request.encode!/3:
# 1
#AWS.Client<
endpoint: nil,
http_client: {AWS.HTTPClient, []},
json_module: {AWS.JSON, []},
port: 443,
proto: "https",
region: "us-east-2",
service: "rekognition",
session_token: nil,
xml_module: {AWS.XML, []},
...
>
# 2
"json"
# 3
"s3://cookstro/public/cococola/c_1001.jpg"
Attempted function clauses (showing 1 out of 1):
defp encode!(%AWS.Client{} = client, protocol, payload) when protocol === "query" or protocol === "json" or protocol === "rest-json" or protocol === "rest-xml" and is_map(payload)
(aws 0.10.0) lib/aws/request.ex:233: AWS.Request.encode!/3
(aws 0.10.0) lib/aws/request.ex:42: AWS.Request.request_post/5
We tried with `File.read!()` as well with no success.
Hey, I was trying to use the lib to return a presigned URL to access a file inside an S3 bucket.
I was able to accomplish the task by following these docs: https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html#query-string-auth-v4-signing-example
However, I couldn't use AWS.Signature.sign_v4_query/6 because it always tries to hash the payload; for requests without a payload, or when you don't know which payload will be used, you have to include UNSIGNED-PAYLOAD in the canonical request instead.
Here's how I did it https://gist.github.com/ceolinrenato/cc7f036ef7867c4ccd08ddfd932d1520
So, Poison and HTTPoison are dependencies; any thoughts on making those adaptable with behaviours? I noticed that the library still uses a pre-1.0 version of httpoison, and I couldn't help but wonder whether the library really needs to depend on it at all.
There's a lot of options for JSON decoders, Poison, Jason, Jiffy, etc. They all basically have the same interface, making that adaptable could be as simple as plugging in a module name as part of client configuration.
I think it'd be simple enough to do the same for HTTP functionality - a bit more complicated to write an adapter, though. But probably still fairly straightforward with everything being generated.
The biggest issue would be that all HTTP requests return the HTTPoison structs, rather than say, an AWS.Response
struct - changing that means a breaking change. That's... a harder pill to swallow.
How can I specify my queue name in the URL of the request? I couldn't find a parameter that accepts the queue name in the method.
https://hexdocs.pm/aws/AWS.SQS.html#send_message/3
send_message(client, input, options \\ [])
client = AWS.Client.create("access_key", "client_secret", "us-east-1")
msg = %{
"MessageBody" => Jason.encode!(%{some_key: "message"})
}
# Where to put queue_name?
client |> AWS.SQS.send_message(msg)
It's sending a POST to https://sqs.us-east-1.amazonaws.com:443/. Instead we need to send it to the queue URL:
https://sqs.us-east-1.amazonaws.com:443/account_number/queue_name
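For what it's worth, in the SQS wire protocol QueueUrl is itself a request parameter of SendMessage, so if aws-elixir follows that model (an assumption on my part), the queue would be named in the input map rather than in the request URL; the queue URL below is hypothetical:

```elixir
# Assumption: QueueUrl travels in the input map, not in the request path.
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # hypothetical

input = %{
  "QueueUrl" => queue_url,
  "MessageBody" => ~s({"some_key":"message"})
}

# client |> AWS.SQS.send_message(input)  # actual call needs a configured client
```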
Hello! Thank you for this lib. I was looking to migrate over from ex_aws but had problems getting S3 working. It wasn't on 0.6.0 so I used the main branch and am getting this error:
** (FunctionClauseError) no function clause matching in AWS.Request.build_params/2
(aws 0.6.0) lib/aws/request.ex:85: AWS.Request.build_params([{"ACL", "x-amz-acl"}, {"CacheControl", "Cache-Control"}, {"ContentDisposition", "Content-Disposition"}, {"ContentEncoding", "Content-Encoding"}, {"ContentLanguage", "Content-Language"}, {"ContentLength", "Content-Length"}, {"ContentMD5", "Content-MD5"}, {"ContentType", "Content-Type"}, {"Expires", "Expires"}, {"GrantFullControl", "x-amz-grant-full-control"}, {"GrantRead", "x-amz-grant-read"}, {"GrantReadACP", "x-amz-grant-read-acp"}, {"GrantWriteACP", "x-amz-grant-write-acp"}, {"ObjectLockLegalHoldStatus", "x-amz-object-lock-legal-hold"}, {"ObjectLockMode", "x-amz-object-lock-mode"}, {"ObjectLockRetainUntilDate", "x-amz-object-lock-retain-until-date"}, {"RequestPayer", "x-amz-request-payer"}, {"SSECustomerAlgorithm", "x-amz-server-side-encryption-customer-algorithm"}, {"SSECustomerKey", "x-amz-server-side-encryption-customer-key"}, {"SSECustomerKeyMD5", "x-amz-server-side-encryption-customer-key-MD5"}, {"SSEKMSEncryptionContext", "x-amz-server-side-encryption-context"}, {"SSEKMSKeyId", "x-amz-server-side-encryption-aws-kms-key-id"}, {"ServerSideEncryption", "x-amz-server-side-encryption"}, {"StorageClass", "x-amz-storage-class"}, {"Tagging", "x-amz-tagging"}, {"WebsiteRedirectLocation", "x-amz-website-redirect-location"}], <<255, 216, 255, 224, 0, 16, 74, 70, 73, 70, 0, 1, 1, 1, 0, 75, 0, 75, 0, 0, 255, 219, 0, 67, 0, 16, 11, 12, 14, 12, 10, 16, 14, 13, 14, 18, 17, 16, 19, 24, 40, 26, 24, 22, 22, 24, 49, 35, 37, 29, ...>>)
(aws 0.6.0) lib/aws/s3.ex:4515: AWS.S3.put_object/5
Let me know if you need anything else. Thank you!
The endpoint prefix works for all service discovery operations except:
Which for some reason uses data-discovery
as the endpoint prefix.
This library fails to work on Erlang 24.
Erlang 24 has been released and removes crypto:hmac, which this project uses. This project should update to the new crypto API so that projects depending on it don't break.
Here are the notes from Erlang 24: http://erlang.org/doc/general_info/scheduled_for_removal.html#otp-24
This is how plug_crypto handled this situation: https://github.com/elixir-plug/plug_crypto/pull/20/files
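For reference, the replacement that has been available since OTP 22.1 is crypto:mac/4; from Elixir the migration looks like this:

```elixir
key = "signing-key"
data = "string-to-sign"

# Old API, removed in OTP 24:
#   :crypto.hmac(:sha256, key, data)
# New API (OTP 22.1+), with :hmac as the MAC type:
mac = :crypto.mac(:hmac, :sha256, key, data)

byte_size(mac)  # => 32, the SHA-256 digest length
```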
Hello folks,
Nice lib! :-)
I had an error when trying the S3 multipart upload. The code:
# ...
{:ok, %{
"InitiateMultipartUploadResult" => %{
"UploadId" => key
}}, _} = AWS.S3.create_multipart_upload(client, bucket, path, %{})
Stream.concat([head], rest)
|> Stream.chunk_every(chunk_size)
|> Enum.map(fn chunk ->
chunk_s =
chunk
|> Enum.join("\r\n")
size = :erlang.byte_size(chunk_s)
IO.inspect( size / @megabyte)
AWS.S3.upload_part(client, bucket, key, %{"Body" => chunk_s, "ContentMD5" => :crypto.hash(:md5, chunk_s) |> Base.encode64()})
|> IO.inspect()
end)
AWS.S3.complete_multipart_upload(client, bucket, key, %{})
The error:
{:error,
{:unexpected_response,
%{
body: "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>MethodNotAllowed</Code><Message>The specified method is not allowed against this resource.</Message><Method>POST</Method><ResourceType>OBJECT</ResourceType><RequestId>EDCQXBK7XAH5FRKH</RequestId><HostId>bD0jm08lboSt4yjEOsNxXTrxGb/i3rBdNATN+uD+mgTZ+F0w64aFddXj6Vthcrc4ClD8DBX0buo=</HostId></Error>",
headers: [
{"x-amz-request-id", "EDCQXBK7XAH5FRKH"},
{"x-amz-id-2",
"bD0jm08lboSt4yjEOsNxXTrxGb/i3rBdNATN+uD+mgTZ+F0w64aFddXj6Vthcrc4ClD8DBX0buo="},
{"allow", "HEAD, DELETE, GET, PUT"},
{"content-type", "application/xml"},
{"transfer-encoding", "chunked"},
{"date", "Mon, 26 Jul 2021 21:52:17 GMT"},
{"server", "AmazonS3"}
],
status_code: 405
}}}
** (Protocol.UndefinedError) protocol Enumerable not implemented for {:error, {:unexpected_response, %{body: "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>MethodNotAllowed</Code><Message>The specified method is not allowed against this resource.</Message><Method>POST</Method><ResourceType>OBJECT</ResourceType><RequestId>EDCQXBK7XAH5FRKH</RequestId><HostId>bD0jm08lboSt4yjEOsNxXTrxGb/i3rBdNATN+uD+mgTZ+F0w64aFddXj6Vthcrc4ClD8DBX0buo=</HostId></Error>", headers: [{"x-amz-request-id", "EDCQXBK7XAH5FRKH"}, {"x-amz-id-2", "bD0jm08lboSt4yjEOsNxXTrxGb/i3rBdNATN+uD+mgTZ+F0w64aFddXj6Vthcrc4ClD8DBX0buo="}, {"allow", "HEAD, DELETE, GET, PUT"}, {"content-type", "application/xml"}, {"transfer-encoding", "chunked"}, {"date", "Mon, 26 Jul 2021 21:52:17 GMT"}, {"server", "AmazonS3"}], status_code: 405}}} of type Tuple. This protocol is implemented for the following type(s): Function, MapSet, List, Stream, HashDict, GenEvent.Stream, Map, Date.Range, Range, File.Stream, IO.Stream, HashSet
(elixir 1.12.0) lib/enum.ex:1: Enumerable.impl_for!/1
(elixir 1.12.0) lib/enum.ex:141: Enumerable.reduce/3
(elixir 1.12.0) lib/stream.ex:649: Stream.run/1
Any gotchas here?
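One possible gotcha, offered as a guess: the 405 response lists Allow: HEAD, DELETE, GET, PUT, which is what S3 returns when a POST hits a plain object URL without the uploadId query parameter. In the snippet above, the UploadId is passed where the object key belongs, and no PartNumber is given. Under the assumption that the generated upload_part/complete_multipart_upload take the object key as the third argument and carry "UploadId"/"PartNumber" in the input map, the calls would look like:

```elixir
# All values below are illustrative; upload_id comes from the
# create_multipart_upload response, and each ETag from an upload_part response.
upload_id = "hypothetical-upload-id"
path = "some/object/key"

part_input = %{
  "Body" => "chunk bytes",
  "PartNumber" => 1,
  "UploadId" => upload_id
}

complete_input = %{
  "UploadId" => upload_id,
  "CompleteMultipartUpload" => %{
    "Part" => [%{"PartNumber" => 1, "ETag" => "\"etag-from-upload-part\""}]
  }
}

# AWS.S3.upload_part(client, bucket, path, part_input)
# AWS.S3.complete_multipart_upload(client, bucket, path, complete_input)
```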
In aws-erlang myself and a colleague implemented automatic retries as part of aws-beam/aws-erlang#57. This may be a feature of interest for aws-elixir as well but seeing as I'm not super-duper-familiar with this codebase (nor Elixir for that matter) I'm opening an issue instead in case anybody else feels like picking this up :-) It could be a nice beginners task.
Example implementation: onno-vos-dev/aws-erlang@7078533
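For anyone wanting to experiment before a built-in feature lands, a retry wrapper can live entirely outside the library; a minimal exponential-backoff sketch (names and defaults are made up, not aws-elixir API):

```elixir
defmodule Retry do
  # Retries request_fun (which returns {:ok, _} or {:error, _}) up to
  # max_attempts times, doubling the sleep between attempts.
  def with_backoff(request_fun, max_attempts \\ 3, base_ms \\ 100) do
    attempt(request_fun, 1, max_attempts, base_ms)
  end

  defp attempt(fun, n, max, base) do
    case fun.() do
      {:ok, _} = ok -> ok
      {:error, _} = err when n >= max -> err
      {:error, _} ->
        # delays: base, 2*base, 4*base, ...
        Process.sleep(base * Integer.pow(2, n - 1))
        attempt(fun, n + 1, max, base)
    end
  end
end

# Demo: fails twice, succeeds on the third attempt.
{:ok, counter} = Agent.start_link(fn -> 0 end)

flaky = fn ->
  n = Agent.get_and_update(counter, fn c -> {c + 1, c + 1} end)
  if n < 3, do: {:error, :throttled}, else: {:ok, :done}
end

Retry.with_backoff(flaky, 5, 1)  # => {:ok, :done}
```

A production version would also inspect the error (retrying only throttling/5xx-style failures) and add jitter to the delay.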
Where can I find online documentation for it?
Thank You,
Tiago.
MIX_ENV=docs mix docs
** (Mix.Config.LoadError) could not load config config/docs.exs
** (Mix) The task "docs" could not be found. Did you mean "do"?