
aws-es-proxy


aws-es-proxy is a small web server application that sits between your HTTP client (browser, curl, etc.) and the Amazon Elasticsearch Service. It signs your requests using the latest AWS Signature Version 4 before sending them to Amazon Elasticsearch, and relays the response from Amazon Elasticsearch back to your HTTP client.

Kibana requests are also signed automatically.

Installation

Download binary executable

aws-es-proxy ships standalone executable binaries for Linux, macOS, and Windows.

Download the latest aws-es-proxy release.

Docker

There is an official Docker image available for aws-es-proxy. To run the image:

# v0.9 and newer (latest always points to the latest release):

docker run --rm -v ~/.aws:/root/.aws -p 9200:9200 abutaha/aws-es-proxy:v1.0 -endpoint https://dummy-host.ap-southeast-2.es.amazonaws.com -listen 0.0.0.0:9200

v0.8:

docker run --rm -it abutaha/aws-es-proxy ./aws-es-proxy -endpoint https://dummy-host.ap-southeast-2.es.amazonaws.com

To expose a port other than the default 9200, pass the PORT_NUM environment variable to Docker with the port number you wish to expose for your service.

Via homebrew

brew install aws-es-proxy

Build from Source

Dependencies:

  • go1.14+

# requires go1.14+
go build github.com/abutaha/aws-es-proxy

Configuring Credentials

Before using aws-es-proxy, ensure that you've configured your AWS IAM user credentials. The best way to configure credentials on a development machine is to use the ~/.aws/credentials file, which might look like:

[default]
aws_access_key_id = AKID1234567890
aws_secret_access_key = MY-SECRET-KEY

Alternatively, you can set the following environment variables:

export AWS_ACCESS_KEY_ID=AKID1234567890
export AWS_SECRET_ACCESS_KEY=MY-SECRET-KEY

aws-es-proxy also supports IAM roles. To use IAM roles, you need to modify your Amazon Elasticsearch access policy to allow access from that role. Below is an Amazon Elasticsearch access policy example allowing access from any EC2 instance with an IAM role called ec2-aws-elasticsearch.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::012345678910:role/ec2-aws-elasticsearch"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:eu-west-1:012345678910:domain/test-es-domain/*"
    }
  ]
}

Usage example:

You can use either the -endpoint argument or the ENDPOINT environment variable to specify the Amazon Elasticsearch endpoint.

./aws-es-proxy -endpoint https://test-es-somerandomvalue.eu-west-1.es.amazonaws.com
Listening on 127.0.0.1:9200

export ENDPOINT=https://test-es-somerandomvalue.eu-west-1.es.amazonaws.com
./aws-es-proxy -listen 10.0.0.1:9200 -verbose
Listening on 10.0.0.1:9200

aws-es-proxy listens on 127.0.0.1:9200 if no additional argument is provided. You can change the IP and port with the -listen argument:

./aws-es-proxy -listen :8080 -endpoint ...
./aws-es-proxy -listen 10.0.0.1:9200 -endpoint ...

By default, aws-es-proxy does not print any messages to the console. However, it can print the requests being sent to Amazon Elasticsearch and how long each one takes to complete. Enable this with the -verbose option:

./aws-es-proxy -verbose ...
Listening on 127.0.0.1:9200
2016/10/31 19:48:23  -> GET / 200 1.054s
2016/10/31 19:48:30  -> GET /_cat/indices?v 200 0.199s
2016/10/31 19:48:37  -> GET /_cat/shards?v 200 0.196s
2016/10/31 19:48:49  -> GET /_cat/allocation?v 200 0.179s
2016/10/31 19:49:10  -> PUT /my-test-index 200 0.347s

For a full list of available options, use -h:

./aws-es-proxy -h
Usage of ./aws-es-proxy:
  -auth
        Require HTTP Basic Auth
  -debug
        Print debug messages
  -endpoint string
        Amazon ElasticSearch Endpoint (e.g: https://dummy-host.eu-west-1.es.amazonaws.com)
  -listen string
        Local TCP port to listen on (default "127.0.0.1:9200")
  -log-to-file
        Log user requests and ElasticSearch responses to files
  -no-sign-reqs
        Disable AWS Signature v4
  -password string
        HTTP Basic Auth Password
  -pretty
        Prettify verbose and file output
  -realm string
        Authentication Required
  -remote-terminate
        Allow HTTP remote termination
  -timeout int
        Set a request timeout to ES. Specify in seconds, defaults to 15 (default 15)
  -username string
        HTTP Basic Auth Username
  -verbose
        Print user requests
  -version
        Print aws-es-proxy version

Using HTTP Clients

After you run aws-es-proxy, open your web browser at http://localhost:9200. Everything should work as if you had your own instance of Elasticsearch running on port 9200.

To access Kibana, use http://localhost:9200/_plugin/kibana/app/kibana


aws-es-proxy's Issues

Installing Binaries

Hello,
I'm having trouble installing the executable binaries. There is no file extension so I'm unsure how to install. Thanks for your help.

POST with url parameters fails

example: query with routing

curl -X POST http://localhost:8222/index/type/_count?routing=123 -d '{
  "query": {
    "term": {
      "user": 123
    }
  }
}'

aws-es-proxy verbose log prints:

2018/08/12 08:18:32 Generated fresh AWS Credentials object

and the curl returns empty response

< HTTP/1.1 200 OK
< Date: Sun, 12 Aug 2018 08:18:32 GMT
< Content-Length: 0
< Content-Type: text/plain; charset=utf-8

workaround: send a GET instead of a POST, but this does not work with _delete_by_query, which requires POST

Kibana blank page

Hello, is this project still maintained?
Kibana page stays as blank when loaded by the proxy.
Do you plan to release v.1.0 soon or are there any hacks to fix this.
Thanks,
My ES specs are:

{
  "name" : "xxxx",
  "cluster_name" : "xxxx",
  "cluster_uuid" : "xxxx",
  "version" : {
    "number" : "7.1.1",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "7a013de",
    "build_date" : "2019-09-05T07:25:23.525600Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Kibana blank page - Affected ES versions 6.0+

Hi, I need help finding out why Kibana is blank in ES versions 6.0+. I suspect the issue is related to HTTP headers, but I couldn't find anything yet. If anyone can give a hand here, I will be grateful.

--header "Content-Type: application/json" doesn't work

I am using ES 6.0.

when I make a request
curl --header "Content-Type: application/json" -XPUT http://localhost:9200/myindex/ -d'{"settings":{"index.mapping.ignore_malformed": true}}'

It responses
{"error":"Content-Type header is missing","status":406}

Is there a way to pass the header through? ES 6.0 mandates the content type header for any RESTful request with a JSON body.

Thanks!

Please add more running examples

Hello,

I'd like to be able to pre-sign requests for Kibana using your tool. We're using AWS's VPC-supported Elasticsearch Service. We have an IAM user "ec2-aws-es-user" allowed to access the resource "test-es-domain/*".

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::012345678910:user/ec2-aws-es-user"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:012345678910:domain/test-es-domain/*"
    }
  ]
}

We have installed aws-es-proxy on an EC2 instance within our same VPC (say its ip is 10.20.30.40) and the sign in information for user "ec2-aws-es-user" is in ~/.aws/credentials already.

Say our test-es-domain's VPC end-point is https://vpc-test-es-domain-abcdefg.us-east-1.es.amazonaws.com. Then I run aws-es-proxy as:

./aws-es-proxy -endpoint https://vpc-test-es-domain-abcdefg.us-east-1.es.amazonaws.com -listen 10.20.30.40:8080

Now does that mean I can curl at my test-es-domain by this?
curl -GET 'http://10.20.30.40:8080/_cat/indices?v'

aws-es-proxy gave no response and after a while it timed out. What am I doing wrong?
Also, what are the settings in /etc/kibana/kibana.yml supposed to be?

server.port: 5601 ?
server.host: "10.20.30.40" ?
server.basePath: "" ?
elasticsearch.url: "http://localhost:8080" ?

I'm running all of this inside our company's firewall. My 10.20.30.40 EC2's security group allows me to SSH and HTTP to my instance. But when I open my browser for Kibana, I need to specify 10.20.30.40. That's why I put 10.20.30.40 as server.host in my kibana.yml. How would that change with the use of aws-es-proxy?

Thanks a lot for your clarification.

uuid.go broke build

Received the following errors because uuid.NewV4() now returns the UUID and an error. Fixed locally with u, err := uuid.NewV4() and then converting u to a string.

./aws-es-proxy.go:226:44: multiple-value uuid.NewV4() in single-value context
./aws-es-proxy.go:329:59: multiple-value uuid.NewV4() in single-value context
./aws-es-proxy.go:330:61: multiple-value uuid.NewV4() in single-value context
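For reference, the two-value pattern the fix requires is sketched below with a stdlib-only, hand-rolled version-4 UUID generator (the actual fix simply changes the call to u, err := uuid.NewV4() and handles the error; newV4 here is a hypothetical stand-in, not the library's API):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newV4 returns a random RFC 4122 version-4 UUID string, mirroring the
// (value, error) return shape that newer satori/go.uuid releases use.
func newV4() (string, error) {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		return "", err
	}
	b[6] = (b[6] & 0x0f) | 0x40 // set version 4
	b[8] = (b[8] & 0x3f) | 0x80 // set RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16]), nil
}

func main() {
	// Handle both return values instead of the old single-value call.
	u, err := newV4()
	if err != nil {
		panic(err)
	}
	fmt.Println(u)
}
```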

build on EC2 failed with error

[ec2-user@ip-172-16-2-169 aws-es-proxy]$ go version
warning: GOPATH set to GOROOT (/usr/lib/golang) has no effect
go version go1.9.4 linux/amd64

[ec2-user@ip-172-16-2-169 aws-es-proxy]$ glide -v
glide version v0.13.3

[ec2-user@ip-172-16-2-169 aws-es-proxy]$ glide install
[INFO]	Lock file (glide.lock) does not exist. Performing update.
[INFO]	Downloading dependencies. Please wait...
[INFO]	--> Fetching updates for github.com/aws/aws-sdk-go
[INFO]	--> Fetching updates for github.com/satori/go.uuid
[INFO]	--> Setting version for github.com/satori/go.uuid to 36e9d2ebbde5e3f13ab2e25625fd453271d6522e.
[INFO]	--> Detected semantic version. Setting version for github.com/aws/aws-sdk-go to v1.12.79
[INFO]	Resolving imports
[INFO]	--> Fetching bytes
[WARN]	Unable to checkout bytes
[ERROR]	Error looking for bytes: Cannot detect VCS
[INFO]	--> Fetching encoding/json
[WARN]	Unable to checkout encoding/json
[ERROR]	Error looking for encoding/json: Cannot detect VCS
[INFO]	--> Fetching flag
[WARN]	Unable to checkout flag
[ERROR]	Error looking for flag: Cannot detect VCS
[INFO]	--> Fetching fmt
[WARN]	Unable to checkout fmt
[ERROR]	Error looking for fmt: Cannot detect VCS

credentials_process is unsupported

I have an ~/.aws/credentials file with the following content:

[default]
credential_process = get-toke --role my-role
region = us-west-2

When running aws-es-proxy I expect the program to execute the credential_process program to retrieve AWS credentials. However, this does not seem to be supported. I also notice that there is a request to support multiple AWS profiles (#34).

I was able to resolve my issue, and support multiple profiles by simply bumping the AWS SDK version.

$ git diff glide.yaml 
diff --git a/glide.yaml b/glide.yaml
index 9282e93..c48fa3a 100644
--- a/glide.yaml
+++ b/glide.yaml
@@ -5,7 +5,7 @@ owners:
   email: [email protected]
 import:
 - package: github.com/aws/aws-sdk-go
-  version: ~v1.12.61
+  version: ~v1.18.1
   subpackages:
   - aws/credentials
   - aws/session

This also allowed me to choose the profile I wanted to use with aws-es-proxy as follows:

AWS_DEFAULT_PROFILE=my-profile aws-es-proxy ...

Issue while connecting to Kibana

Hello ,

I followed your steps, and in the browser I am not able to access the Kibana URL. All traffic to port 9200 is allowed in the security group.

Here I am running an EC2 instance with an instance profile whose associated role is ES-FULL-ROLE.

My Resource Based Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/ES-FULL-ROLE"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:111111111111:domain/rkteam1/*"
    }
  ]
}

My IAM Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "es:*"
      ],
      "Resource": "arn:aws:es:us-east-1:111111111111:domain/*",
      "Effect": "Allow"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::111111111111:role/ES-FULL-ROLE"
    }
  ]
}

OUTPUT :

[root@ip-172-31-65-132 aws-es-proxy]# ./aws-es-proxy -verbose -listen 127.0.0.1:9200 -endpoint https://search-rkteam1-3lpw44lsn6ez4hccim6w66pj3e.us-east-1.es.amazonaws.com ; tail -f /var/log/messages
2018/01/19 08:30:53 Listening on 127.0.0.1:9200...

Regards,
RK

Issues on ES 6.2 to 7.1

We're currently using aws-es-proxy version 0.9 on Windows and Elasticsearch 6.2. We're planning to upgrade to Elasticsearch 7.1. After checking the proper configuration and changing the AWS credentials, we are encountering this message: "Generated fresh AWS Credentials object"

Do you have any inputs about this? I'm also new to this matter so it's still hard to work on. Thank you!

SigV4 failing for Kibana Shared Short links (/goto/)

Using the AWS Elasticsearch 5.5.2 hosted service and accessing hosted Kibana via aws-es-proxy, which works great. I created a Shared Short URL; accessing it results in an AWS SigV4 canonicalization error.

https://kibana.mydomain.com/_plugin/kibana/goto/ce613a4285eeaa035468432fac197ce3

The error points to a missing "/app/kibana/" after "/_plugin/kibana/", though adding it to the goto link doesn't work either. Not sure if this is a problem with Kibana not generating a usable goto link for AWS, or with aws-es-proxy.

The Canonical String for this request should have been\n' GET\n/_plugin/kibana/app/kibana\n\

(partially anonymized error message)
{"message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\n\nThe Canonical String for this request should have been\n'GET\n/_plugin/kibana/app/kibana\n\nhost:search-ELIDED.us-west-2.es.amazonaws.com\nx-amz-date:20171208T145130Z\n\nhost;x-amz-date\ne3b0c44298fc1c149afbf4c89s6fb92427ae41e4649b934ca495991b7852b855'\n\nThe String-to-Sign should have been\n'AWS4-HMAC-SHA256\n20171208T145130Z\n20171208/us-west-2/es/aws4_request\n65dab3f58aa19eab18e4d2c479c3a956sd38a4c0399d35d1c71b9960eba46104'\n"}

Using aws-es-proxy in conjunction with elasticdump

I have multiple projects that use the proxy to gain access to es clusters within aws (moving millions of documents per week) and after bumping the version to either v1.0 or v1.1 the upload stage fails with message:

  • Client.Timeout exceeded while awaiting headers

The only thing that helps is rolling back to version 0.9 of the proxy.

Problems with Kibana 6.0.0

I cloned out v0.6.0 today and tested it against AWS ES v6.

Calls to ES are going through fine, but Kibana 6 is having problems loading correctly.

[screenshot: Kibana error dialog, 2017-12-11]

The content of the error is printed below for copy-pasting purposes.

Error: Content-Type header is missing
    at respond (http://localhost:9200/_plugin/kibana/bundles/kibana.bundle.js?v=16070:13:2730)
    at checkRespForFailure (http://localhost:9200/_plugin/kibana/bundles/kibana.bundle.js?v=16070:13:1959)
    at http://localhost:9200/_plugin/kibana/bundles/kibana.bundle.js?v=16070:2:5100
    at processQueue (http://localhost:9200/_plugin/kibana/bundles/commons.bundle.js?v=16070:39:9912)
    at http://localhost:9200/_plugin/kibana/bundles/commons.bundle.js?v=16070:39:10805
    at Scope.$digest (http://localhost:9200/_plugin/kibana/bundles/commons.bundle.js?v=16070:39:21741)
    at Scope.$apply (http://localhost:9200/_plugin/kibana/bundles/commons.bundle.js?v=16070:39:24520)
    at done (http://localhost:9200/_plugin/kibana/bundles/commons.bundle.js?v=16070:38:9495)
    at completeRequest (http://localhost:9200/_plugin/kibana/bundles/commons.bundle.js?v=16070:38:13952)
    at XMLHttpRequest.xhr.onload (http://localhost:9200/_plugin/kibana/bundles/commons.bundle.js?v=16070:38:14690)

I guess this issue is not isolated to me, so I'm posting it here so others know they are not alone if they face it. Hopefully a fix will show up at some point.

API request to Kibana not including kbn-xsrf header

Hi, I'm trying to make an API request similar to the one described in this answer about updating index patterns. My proxy is running on port 9200 and then I try something like this

curl 'http://localhost:9200/_plugin/kibana/api/saved_objects/index-pattern/INDEX_ID' -X PUT -H 'Content-Type: application/json' -H "kbn-xsrf: true" --data-binary '{"attributes":{"title":"INDEX_NAME","fields":"[ESCAPED_JSON_LIST_OF_FIELDS]"}}'

However, I get this error

{"statusCode":400,"error":"Bad Request","message":"Request must contain a kbn-xsrf header."}

Even though that header is included in the request, I think it is not being passed through the proxy correctly. Can anyone help provide some insight here? Am I missing something?

Received 403 from AWSAuth, invalidating credentials for retrial. ES inside VPC with open access

I have set up the ES inside of a VPC. The access policy is set to open. I'm running this proxy on an instance inside the same VPC with public IP. And then trying to access the /_cat/indices endpoint.
Getting a 403 on the curl and seeing following error in the proxy logs,

root@some-host:~# docker run --rm -p 9200:9200 abutaha/aws-es-proxy:v1.0 -endpoint https://vpc-mydomain-randomcharacters.us-east-1.es.amazonaws.com -listen 0.0.0.0:9200 -verbose
time="2020-08-04 09:33:15" level=info msg="Listening on 0.0.0.0:9200...\n"
time="2020-08-04 09:33:21" level=info msg="Generated fresh AWS Credentials object"
time="2020-08-04 09:33:24" level=error msg="Received 403 from AWSAuth, invalidating credentials for retrial"
2020/04/08 09:33:24  -> GET; 69.31.114.42:61611; /_cat/indices; ; 403; 3.268s

Not necessarily saying that there's a problem with the proxy.
Just need help in identifying the problem here.

The request signature we calculated does not match the signature you provided

Hi,

I'm getting the following when trying to use the proxy

The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details

I'm running it with aws-es-proxy-0.3-linux-amd64 -listen 0.0.0.0:9200 -endpoint https://search-elk-test-s5ksigjmxywxvb33qyhbhejrfy.eu-west-1.es.amazonaws.com

Any suggestions to what it could be?

Thanks

Docker Compose file

Can you please post a sample docker-compose.yml file? The entrypoint aws-es-proxy is not giving any response. Here is what I am doing:

awsesproxy:
  image: abutaha/aws-es-proxy:latest
  container_name: "${PROJECT_NAME}_awsesproxy"
  command: "-endpoint https://elastic-search-path"
  environment:
    AWS_ACCESS_KEY_ID: "KEY"
    AWS_SECRET_ACCESS_KEY: "PASS"
    PORT_NUM: 9243

[feat req] allow passing the Elasticsearch endpoint via environment vars

Currently it takes some nonstandard leaps to point the proxy to a host in an environment variable. The following is sort of 'pseudocode' for docker config.

doesn't work:

environment: ["ES_URL": "http://blah:9200"]
command: ["-endpoint","${ES_URL}"]

current workarounds:

environment: ["ES_URL": "http://blah:9200"]
command: ["sh", "-c", "aws-es-proxy -endpoint ${ES_URL}"]
command: "aws-es-proxy -endpoint ${ES_URL}"

it'd be nice to do:

environment: ["ES_URL": "http://blah:9200"]
command: [""]

It might be handy for other parameters, but this is the one that tends to vary the most; for instance, in ECS, I can look it up by storing it in SSM and then injecting it into ES_URL via the secrets section.

This is beyond my golang abilities, which is why I'm only putting it as a feature request, not a PR :/

Log JWT token in headers returned from AWS Cognito

Hello,

I've been using this signing proxy in our workflow successfully, which looks something like this:
AWS ALB --> SAML authentication via AWS Cognito --> aws-es-proxy --> AWS ElasticSearch / Kibana

I wish to log exactly who has made the request.

Therefore, I've turned on verbose logging, but want to access the headers of the request, especially X-Amzn-Oidc-Data [1], as this is a JWT token that includes information from the person that issued the request, i.e. name, email, etc. in the JWT payload.

From [1],

The JWT payload is a JSON object that contains the user claims received from the IdP user info endpoint.

{
   "sub": "1234567890",
   "name": "name",
   "email": "[email protected]",
   ...
}

Is this possible to do in this proxy, given you already read the headers here?
https://github.com/abutaha/aws-es-proxy/blob/master/aws-es-proxy.go#L284

[1] https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-authenticate-users.html#user-claims-encoding

Thanks :)

403 response from AWS is translated to 200 response

Hi there!

Thanks for aws-es-proxy, it's proven to be very useful to me :-)

While using it, I noticed some incoherent results (e.g. missing documents) in AWS ElasticSearch and a lot of Generated fresh AWS Credentials object log messages.

This message is emitted in lines 159-164:

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// ...
	}
	if !p.nosignreq {
		// AWS credentials expired, need to generate fresh ones
		if resp.StatusCode == 403 {
			p.credentials = nil
			return
		}
	}
	defer resp.Body.Close()

Based on some experiments, I noticed that this will return an HTTP 200 response to the caller (see https://play.golang.org/p/pNKInme_sHz). I would either expect a retry or forwarding the 403 response back to the caller.

I guess we might want to remove the return statement on this line?

aws-es with vpc

Hello,

I just want to know if this proxy works with aws-es 6.2 inside a VPC? We've tried it on our machine but unfortunately we can't run it successfully.

Thank you.

Empty response to HTTP redirect for Short URL

When using Kibana's Share Snapshot feature, Kibana generates a 'Short URL' like:

http://localhost:9200/_plugin/kibana/goto/2f781386f3554d41d41468fad20f24c9

aws-es-proxy responds to a request for this URL with an empty HTTP 200 response like:

HTTP/1.1 200 OK
Date: Tue, 08 May 2018 15:15:48 GMT
Content-Length: 0
Content-Type: text/plain; charset=utf-8

when the expected response would be an HTTP 302 redirect to the saved snapshot like:

HTTP/1.1 302 Found
Date: Tue, 08 May 2018 15:15:48 GMT
Content-Length: 0
Content-Type: application/json
Location: /_plugin/kibana/app/kibana#/discover?_g=(time:(from:now-24h,mode:quick,to:now))&_a=(query:(language:lucene,query:%27helloworld%27))

301 Error on Kibana Load

Environment:
-Elasticsearch 2.3 requiring https and restricting access to an IAM user.
-Windows 10 running via the exe

Error:
-On loading http://localhost:9200/_plugin/kibana/, the app crashes stating "301 response missing Location header"
-Cloudwatch logs show 0 http 300 responses and 1 InvalidHostHeaderRequest, which is unlikely to be related because it was 30 minutes ago and I've received this error many times since. This leads me to believe the issue is with aws-es-proxy mishandling something
-On loading http://localhost:9200/_plugin/kibana/app/kibana there is no crash, but I receive a blank page and a 404 error.

Invalid signature with ill-formatted URL

I am facing some issues when using aws-es-proxy with a client that generates ill-formatted URLs. The signature is generated based on the original URL, but the Go HTTP client then sanitizes the URL before sending the request to AWS Elasticsearch.

More specifically I am using aws-es-proxy to authenticate calls from Jaeger to Elasticsearch. The dependency job (https://github.com/jaegertracing/spark-dependencies) makes requests to Elasticsearch with a double / in the path.

E.g. This request from the job would fail:

curl -v localhost:9200/master:jaeger-span-2018-10-30//_mapping
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 9200 (#0)
> GET /master:jaeger-span-2018-10-30//_mapping HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.54.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Fri, 02 Nov 2018 20:44:32 GMT
< Content-Length: 0
< 
* Connection #0 to host localhost left intact

But after correcting the URL it works just fine:

curl -v localhost:9200/master:jaeger-span-2018-10-30/_mapping
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 9200 (#0)
> GET /master:jaeger-span-2018-10-30/_mapping HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.54.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Access-Control-Allow-Origin: *
< Content-Length: 49
< Content-Type: application/json; charset=UTF-8
< Date: Fri, 02 Nov 2018 20:42:58 GMT
< 
* Connection #0 to host localhost left intact

I believe I am also running into #27, which swallows the original error and returns a 200 instead.

An easy fix would be to clean the path before signing, the same way the HTTP client will. E.g.:

ep := *r.URL
ep.Host = p.host
ep.Scheme = p.scheme
ep.Path = path.Clean(ep.Path)

I tested that locally and it seems to do the job. Let me know if you would like me to submit a PR.

i/o Timeout to _bulk

Hi, I'm receiving an i/o timeout and then a container crash/restart when attempting to use the aws-es-proxy container in my EKS deployment (v1.11.5). I have a fluent-bit configuration pointing to the aws-es-proxy service (defined in the YAML below). Any idea why I would be receiving the timeout? Perhaps you can see an obvious misconfiguration. Thank you in advance for any help you may provide!

Output

2018/12/19 07:46:52 Listening on 0.0.0.0:9200...
2018/12/19 07:46:55 Generated fresh AWS Credentials object
2018/12/19 07:47:25 Post https://vpc-xxxxxxxxxxxxx-abcdefghijklmnopqrstuvwxyz.us-east-1.es.amazonaws.com/_bulk: dial tcp 10.1.2.3:443: i/o timeout

AWS ES Proxy Service

apiVersion: v1
kind: Service
metadata:
  name: aws-es-proxy
  namespace: logging
  labels:
    app: aws-es-proxy
spec:
  ports:
    - port: 9200
      name: aws-es-proxy
  selector:
    app: aws-es-proxy
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: aws-es-proxy
  namespace: logging
  labels:
    app: aws-es-proxy
spec:
  selector:
    matchLabels:
      app: aws-es-proxy
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        app: aws-es-proxy
    spec:
      volumes:
      - name: varlog
        emptyDir: {}
      containers:
      - image: abutaha/aws-es-proxy:0.9
        name: aws-es-proxy
        args:
          - -listen=0.0.0.0:9200
          - -endpoint=https://vpc-xxxxxxxxxxxxx-abcdefghijklmnopqrstuvwxyz.us-east-1.es.amazonaws.com
          - -verbose
        env:
        - name: AWS_ACCESS_KEY_ID
          value: "xxxxxxxxxxxxxxxxx"
        - name: AWS_SECRET_ACCESS_KEY
          value: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        ports:
          - containerPort: 9200
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        resources:
          requests:
            memory: "256Mi"
            cpu: "256m"
          limits:
            memory: "512Mi"
            cpu: "1G"

Fluent Bit DS

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
  labels:
    k8s-app: fluent-bit-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: fluent-bit-logging
        version: v1
        kubernetes.io/cluster-service: "true"
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "2020"
        prometheus.io/path: /api/v1/metrics/prometheus
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:0.14.9
        imagePullPolicy: Always
        ports:
          - containerPort: 2020
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "aws-es-proxy"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc/
      terminationGracePeriodSeconds: 10
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluent-bit-config
        configMap:
          name: fluent-bit-config
      serviceAccountName: fluent-bit
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule

aws-es-proxy within a EC2

I would like to start aws-es-proxy inside an EC2 instance. What configuration should I add to the EC2 role so that aws-es-proxy correctly assumes the ES role and is able to authenticate?

I'm getting the error:
ERRO[2020-13-05 19:44:56] Received 403 from AWSAuth, invalidating credentials for retrial

Update docker image?

The latest official docker image (0.9) appears to be a year old. Since then, the aws-sdk-go has added support for assuming roles via webtokens. This is necessary in order to run aws-es-proxy on kubernetes via AWS EKS using the native support for IAM roles.

Building against a newer version of aws-sdk-go should be enough to support this, I think, so creating a new official image could be a good idea?

-profile option missing

I would like to be able to select the AWS profile to use when starting the proxy.
Currently the profile used is 'default'.
If one uses several profiles for the same account, the proxy may fail to start with the right credentials.
As with the awscli, it would be nice to be able to select the profile on the command line.

Crashes with negative timestamp

aws-es-proxy is almost working great for me. I can get it started up and make a few requests, but it consistently crashes after about a minute.

$ ./aws-es-proxy-0.3-mac-amd64 -endpoint https://REDACTED.ca-central-1.es.amazonaws.com -verbose
Listening on 127.0.0.1:9200
2017/09/20 21:22:55  -> GET / 200 0.460s
2017/09/20 21:22:55  -> GET /_cluster/health 200 0.461s
2017/09/20 21:22:55  -> GET /_cluster/health 200 0.460s
2017/09/20 21:22:55  -> GET / 200 0.462s
2017/09/20 21:22:55  -> GET /_cat/nodes?format=json 200 0.465s
[... 110 lines skipped ...]
2017/09/20 21:23:54  -> GET /_aliases 200 0.379s
2017/09/20 21:23:54  -> GET /_nodes/_all/os,jvm?human=true 200 0.379s
2017/09/20 21:23:54  -> GET /_nodes/stats/jvm,fs,os,process?human=true 200 0.385s
1969/12/31 16:00:00  -> GET /_stats/docs,store 200 -1505967819.950s
fatal error: unexpected signal during runtime execution
[signal 0xb code=0x1 addr=0xb01dfacedebac1e pc=0x166280]

Notice that the last request has a timestamp at the beginning of the epoch, and the request duration is similarly computed as a difference between current time and epoch.

Here's the stack trace:

goroutine 34 [running]:
runtime.throw(0x5841e0, 0x2a)
	/usr/local/go/src/runtime/panic.go:547 +0x90 fp=0xc820571718 sp=0xc820571700
runtime.sigpanic()
	/usr/local/go/src/runtime/sigpanic_unix.go:12 +0x5a fp=0xc820571768 sp=0xc820571718
sync.(*Pool).Get(0x73b310, 0x0, 0x0)
	/usr/local/go/src/sync/pool.go:102 +0x40 fp=0xc8205717b8 sp=0xc820571768
fmt.newPrinter(0x1)
	/usr/local/go/src/fmt/print.go:133 +0x27 fp=0xc8205717f8 sp=0xc8205717b8
fmt.Sprintf(0x5363b0, 0x13, 0xc820571ab0, 0x4, 0x4, 0x0, 0x0)
	/usr/local/go/src/fmt/print.go:202 +0x2b fp=0xc820571848 sp=0xc8205717f8
log.Printf(0x5363b0, 0x13, 0xc820571ab0, 0x4, 0x4)
	/usr/local/go/src/log/log.go:289 +0x49 fp=0xc820571898 sp=0xc820571848
main.(*proxy).ServeHTTP(0xc820092910, 0x8becc8, 0xc82025a000, 0xc8202f8b60)
	/Users/abutaham/go-libs/src/github.com/abutaha/aws-es-proxy/aws-es-proxy.go:138 +0xc3b fp=0xc820571b70 sp=0xc820571898
net/http.serverHandler.ServeHTTP(0xc820094380, 0x8becc8, 0xc82025a000, 0xc8202f8b60)
	/usr/local/go/src/net/http/server.go:2081 +0x19e fp=0xc820571bd0 sp=0xc820571b70
net/http.(*conn).serve(0xc820172000)
	/usr/local/go/src/net/http/server.go:1472 +0xf2e fp=0xc820571f98 sp=0xc820571bd0
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1998 +0x1 fp=0xc820571fa0 sp=0xc820571f98
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2137 +0x44e

goroutine 1 [IO wait]:
net.runtime_pollWait(0x8bea78, 0x72, 0x8b9028)
	/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc820124370, 0x72, 0x0, 0x0)
	/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc820124370, 0x0, 0x0)
	/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).accept(0xc820124310, 0x0, 0x8beb70, 0xc8200d9dc0)
	/usr/local/go/src/net/fd_unix.go:426 +0x27c
net.(*TCPListener).AcceptTCP(0xc82008e078, 0x54780, 0x0, 0x0)
	/usr/local/go/src/net/tcpsock_posix.go:254 +0x4d
net/http.tcpKeepAliveListener.Accept(0xc82008e078, 0x0, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/http/server.go:2427 +0x41
net/http.(*Server).Serve(0xc820094380, 0x8beb38, 0xc82008e078, 0x0, 0x0)
	/usr/local/go/src/net/http/server.go:2117 +0x129
net/http.(*Server).ListenAndServe(0xc820094380, 0x0, 0x0)
	/usr/local/go/src/net/http/server.go:2098 +0x136
net/http.ListenAndServe(0x4f7ac0, 0xe, 0x8bda78, 0xc820092910, 0x0, 0x0)
	/usr/local/go/src/net/http/server.go:2195 +0x98
main.main()
	/Users/abutaham/go-libs/src/github.com/abutaha/aws-es-proxy/aws-es-proxy.go:170 +0x63e

Steps to reproduce

I'm only doing one thing that might be unusual -- I'm trying to run the cerebro Elasticsearch web management GUI through aws-es-proxy.

  1. Start aws-es-proxy in verbose mode against AWS ES endpoint in ca-central-1 using ~/.aws/credentials
  2. Start cerebro on port 9000
  3. Log into cerebro web UI and connect to http://127.0.0.1:9000/
  4. Wait about one minute for aws-es-proxy to crash

Other details

  • aws-es-proxy version 0.3
  • Operating system: macOS 10.12.6

Cannot work for China region

aws-es-proxy-0.8-linux-amd64 -endpoint https://search-real-time-bushfire-hql7siqa3m4sz6altvkp3t76e4.cn-north-1.es.amazonaws.com.cn

2019/08/14 05:50:24 error: submitted endpoint is not a valid Amazon ElasticSearch Endpoint

The code

if len(parts) == 5 {
	p.region, p.service = parts[1], parts[2]
} else {
	return fmt.Errorf("error: submitted endpoint is not a valid Amazon ElasticSearch Endpoint")
}

should also accept the China region URLs:

*.cn-north-1.es.amazonaws.com.cn
*.cn-northwest-1.es.amazonaws.com.cn
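One minimal way the check could handle the China partition is to normalize the ".com.cn" suffix before splitting, so both partitions produce five parts. A sketch against the snippet above (parseEndpoint is an illustrative helper, not the project's actual function):

```go
package main

import (
	"fmt"
	"strings"
)

// parseEndpoint extracts region and service from an Amazon ES hostname.
// Trimming a trailing ".cn" makes China endpoints such as
// search-foo.cn-north-1.es.amazonaws.com.cn split the same way as
// search-foo.us-east-1.es.amazonaws.com.
func parseEndpoint(host string) (region, service string, err error) {
	host = strings.TrimSuffix(host, ".cn")
	parts := strings.Split(host, ".")
	if len(parts) != 5 {
		return "", "", fmt.Errorf("error: submitted endpoint is not a valid Amazon ElasticSearch Endpoint")
	}
	return parts[1], parts[2], nil
}

func main() {
	region, service, err := parseEndpoint("search-foo.cn-north-1.es.amazonaws.com.cn")
	fmt.Println(region, service, err) // cn-north-1 es <nil>
}
```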

Support multiple profiles for AWS

Hi,
On my machine I use other AWS services that require different credentials.
The credentials are stored in ~/.aws/credentials, which supports multiple profiles, but the tool cannot take a profile name (it just uses the default one).

Is it possible to add a new parameter for the profile name, or to provide credentials as an argument?
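For illustration, the lookup a -profile flag would need is just selecting one named section of the INI-style credentials file; the real fix would more likely use aws-sdk-go's shared-credentials provider, which already accepts a profile name. A stdlib-only sketch (profileKeys is a hypothetical helper):

```go
package main

import (
	"fmt"
	"strings"
)

// profileKeys picks the key/value pairs of one named profile out of an
// INI-style credentials file, mirroring what "-profile staging" would
// have to resolve from ~/.aws/credentials.
func profileKeys(credFile, profile string) (map[string]string, bool) {
	keys := map[string]string{}
	in := false
	for _, line := range strings.Split(credFile, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "[") && strings.HasSuffix(line, "]"):
			in = line == "["+profile+"]" // entering (or leaving) the wanted section
		case in && strings.Contains(line, "="):
			kv := strings.SplitN(line, "=", 2)
			keys[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
		}
	}
	return keys, len(keys) > 0
}

func main() {
	creds := `[default]
aws_access_key_id = AKID1
[staging]
aws_access_key_id = AKID2`
	keys, ok := profileKeys(creds, "staging")
	fmt.Println(ok, keys["aws_access_key_id"]) // true AKID2
}
```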

README for docker accessibility

In order to access the proxy from a Docker network, or when testing it out locally, it's necessary to pass -listen 0.0.0.0:9200 instead of the default 127.0.0.1 (when running in Docker only).
Just thought that this could be a nice addition to the README in case people do use Docker.

Tagged docker images

Currently there is only a latest tag available on Docker Hub. This can cause issues for people, as they could be automatically upgraded to a new release unexpectedly.

It would be a good idea to tag the docker image with the corresponding release, for example abutaha/aws-es-proxy:v0.8.

We are having a performance issue in Elasticsearch/Kibana after migrating to another server host.

Hello,
We are having an issue with Elasticsearch/Kibana.
We have aws-es-proxy and Kibana 7.1 set up on the same server. After migrating both to another server host and making it live in the load balancer, we are facing request timeouts when accessing our Kibana application.
(screenshot attached)

Though when using the old server back in the load balancer, it works perfectly fine.
Does anyone know why this is happening? Any related information would be appreciated. Thank you!

PS: Both servers have the same specification, except that the new one is Windows 2016 and the old one is Windows 2012.

By the way, here are the settings that we have:
AWS Elasticsearch Proxy
aws-es-proxy-0.9-windows-amd64.exe -endpoint "https://vpc_endpoint" -listen "127.0.0.1:9230"

Kibana.yml
server.port: 5609
server.host: "x.x.x.x"
server.basePath: "/7.1/kibana"
elasticsearch.hosts: "http://127.0.0.1:9230/"
elasticsearch.ssl.verificationMode: none
elasticsearch.requestTimeout: 300000

Its Cluster Health status is OK.
(screenshot attached)

US-specific endpoint processing...

My endpoint is es.us-west-2.amazonaws.com; the tool expects it to be of the form XXX.us-west-2.es.XXX. Can we make it work for this US endpoint as well? Thanks

Kibana won't load

Hello,

I built your aws-es-proxy from source on CentOS with success. It runs, and I can get to the Elasticsearch root that shows the version, but when I try to go to Kibana at /_plugin/kibana/ I get a blank page. My AWS Elasticsearch cluster is set up in a VPC. My access policy is wide open:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-west-2:xxxxxxxxxxxxx:domain/{cluster_name}/*"
    }
  ]
}

The verbose logs from aws-es-proxy:

2019/08/26 19:43:33 Listening on 127.0.0.1:8080...
2019/08/26 19:46:31 Generated fresh AWS Credentials object

========================
2019/08/26 19:46:31
Remote Address: 127.0.0.1:55130
Request URI: /
Method: GET
Status: 200
Took: 0.072s
Body:

========================
2019/08/26 19:46:31
Remote Address: 127.0.0.1:55138
Request URI: /favicon.ico
Method: GET
Status: 200
Took: 0.044s
Body:

2019/08/26 19:46:39 Generated fresh AWS Credentials object

========================
2019/08/26 19:46:39
Remote Address: 127.0.0.1:55162
Request URI: /favicon.ico
Method: GET
Status: 200
Took: 0.005s
Body:

I would really like to use your application. Please help. Thank you.

Error in logs, nothing gets to ES

I am running version 0.9 hitting an AWS ES Domain.

Here is an error from the logs:

2019/01/07 22:01:01 Generated fresh AWS Credentials object
2019/01/07 22:01:02 Generated fresh AWS Credentials object
2019/01/07 22:01:02 Generated fresh AWS Credentials object
2019/01/07 22:01:03 Generated fresh AWS Credentials object
2019/01/07 22:01:04 Generated fresh AWS Credentials object
2019/01/07 22:01:05 Generated fresh AWS Credentials object
2019/01/07 22:01:05 Generated fresh AWS Credentials object
2019/01/07 22:01:05 http: panic serving 192.168.80.137:34394: runtime error: invalid memory address or nil pointer dereference
goroutine 3552 [running]:
net/http.(*conn).serve.func1(0xc4203a2780)
	/usr/local/go/src/net/http/server.go:1697 +0xd0
panic(0x747160, 0x98fe40)
	/usr/local/go/src/runtime/panic.go:491 +0x283
github.com/abutaha/aws-es-proxy/vendor/github.com/aws/aws-sdk-go/aws/credentials.(*Credentials).Get(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/abutaha/aws-es-proxy/vendor/github.com/aws/aws-sdk-go/aws/credentials/credentials.go:204 +0x6d
github.com/abutaha/aws-es-proxy/vendor/github.com/aws/aws-sdk-go/aws/signer/v4.Signer.signWithBody(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc42045e500, 0x963760, 0xc4201f18f0, ...)
	/go/src/github.com/abutaha/aws-es-proxy/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:337 +0x270
github.com/abutaha/aws-es-proxy/vendor/github.com/aws/aws-sdk-go/aws/signer/v4.Signer.Sign(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc42045e500, 0x963760, 0xc4201f18f0, ...)
	/go/src/github.com/abutaha/aws-es-proxy/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go:271 +0xf9
main.(*proxy).ServeHTTP(0xc420154000, 0x965de0, 0xc42038e380, 0xc42045e100)
	/go/src/github.com/abutaha/aws-es-proxy/aws-es-proxy.go:151 +0x199d
net/http.serverHandler.ServeHTTP(0xc42013eea0, 0x965de0, 0xc42038e380, 0xc42045e100)
	/usr/local/go/src/net/http/server.go:2619 +0xb4
net/http.(*conn).serve(0xc4203a2780, 0x966360, 0xc4202c7b00)
	/usr/local/go/src/net/http/server.go:1801 +0x71d
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2720 +0x288
2019/01/07 22:01:05 Generated fresh AWS Credentials object
2019/01/07 22:01:06 Generated fresh AWS Credentials object
2019/01/07 22:01:07 Generated fresh AWS Credentials object
2019/01/07 22:01:08 Generated fresh AWS Credentials object

I created a custom Dockerfile with ENTRYPOINT [] so I could override the CMD; I was just trying to get verbose logging while tracking down where things fail.

I'd suggest not setting an ENTRYPOINT in the upstream Dockerfile so it is easier to override.

A couple of thoughts:

  1. Generating new credentials every second seems excessive.
  2. Deploying it inside the same pod as fluent-bit would avoid funneling all log traffic through one proxy machine; within the pod, it could address localhost.

NEST Elasticsearch 6.2 using wildcard in indices

Hi, we use aws-es-proxy. I'd like to ask if there is a way to use a wildcard query on indices with NEST?

We have indices like:
index-2018.08
index-2018.09

and we only want to use those indices in one query in C#.
We used index-* but it returns an invalid response. Elastic says the issue is somewhere in the proxy layer.

Maybe this will help,

<!DOCTYPE html>
<html>
    <head>
        <title>Runtime Error</title>
        <meta name="viewport" content="width=device-width" />
        <style>
         body {font-family:"Verdana";font-weight:normal;font-size: .7em;color:black;}
         p {font-family:"Verdana";font-weight:normal;color:black;margin-top: -5px}
         b {font-family:"Verdana";font-weight:bold;color:black;margin-top: -5px}
         H1 { font-family:"Verdana";font-weight:normal;font-size:18pt;color:red }
         H2 { font-family:"Verdana";font-weight:normal;font-size:14pt;color:maroon }
         pre {font-family:"Consolas","Lucida Console",Monospace;font-size:11pt;margin:0;padding:0.5em;line-height:14pt}
         .marker {font-weight: bold; color: black;text-decoration: none;}
         .version {color: gray;}
         .error {margin-bottom: 10px;}
         .expandable { text-decoration:underline; font-weight:bold; color:navy; cursor:hand; }
         @media screen and (max-width: 639px) {
          pre { width: 440px; overflow: auto; white-space: pre-wrap; word-wrap: break-word; }
         }
         @media screen and (max-width: 479px) {
          pre { width: 280px; }
         }
        </style>
    </head>

    <body bgcolor="white">

            <span><H1>Server Error in '/6.2/elasticsearch' Application.<hr width=100% size=1 color=silver></H1>

            <h2> <i>Runtime Error</i> </h2></span>

            <font face="Arial, Helvetica, Geneva, SunSans-Regular, sans-serif ">

            <b> Description: </b>An application error occurred on the server. The current custom error settings for this application prevent the details of the application error from being viewed remotely (for security reasons). It could, however, be viewed by browsers running on the local server machine.
            <br><br>

            <b>Details:</b> To enable the details of this specific error message to be viewable on remote machines, please create a &lt;customErrors&gt; tag within a &quot;web.config&quot; configuration file located in the root directory of the current web application. This &lt;customErrors&gt; tag should then have its &quot;mode&quot; attribute set to &quot;Off&quot;.<br><br>

            <table width=100% bgcolor="#ffffcc">
               <tr>
                  <td>
                      <code><pre>

&lt;!-- Web.Config Configuration File --&gt;

&lt;configuration&gt;
    &lt;system.web&gt;
        &lt;customErrors mode=&quot;Off&quot;/&gt;
    &lt;/system.web&gt;
&lt;/configuration&gt;</pre></code>

                  </td>
               </tr>
            </table>

            <br>

            <b>Notes:</b> The current error page you are seeing can be replaced by a custom error page by modifying the &quot;defaultRedirect&quot; attribute of the application&#39;s &lt;customErrors&gt; configuration tag to point to a custom error page URL.<br><br>

            <table width=100% bgcolor="#ffffcc">
               <tr>
                  <td>
                      <code><pre>

&lt;!-- Web.Config Configuration File --&gt;

&lt;configuration&gt;
    &lt;system.web&gt;
        &lt;customErrors mode=&quot;RemoteOnly&quot; defaultRedirect=&quot;mycustompage.htm&quot;/&gt;
    &lt;/system.web&gt;
&lt;/configuration&gt;</pre></code>

                  </td>
               </tr>
            </table>

            <br>

    </body>
</html>

Getting net/http: TLS handshake timeout while accessing AWS managed Elasticsearch service

Created a container with image abutaha/aws-es-proxy:v1.1 with below args

Args:
      -endpoint=https://vpc-XXXX-logging-tracing-XXXXXXXXXX.eu-west-1.es.amazonaws.com
      -listen=:9200
      -pretty
      -verbose
      -debug

Created another container in the same k8s cluster with "curl" in it and ran:
curl http://<aws-es-proxy container IP>:9200/_app/kibana

Instead of an HTTP success, I get this error:
Get "https://vpc-XXXX-logging-tracing-XXXXXXXXXX.eu-west-1.es.amazonaws.com/_app/kibana": net/http: TLS handshake timeout

If I run curl -k https://vpc-vijay05-logging-tracing-vcyvcp7ulgghot4gu45wgjlm4i.eu-west-1.es.amazonaws.com from the same container, I get a response.

{
  "name" : "15f4f1f450sf34545420e318bd0fe",
  "cluster_name" : "123456789:\<ES-domain-name\>",
  "cluster_uuid" : "aPeEUUdag3565yjFisHCsA",
  "version" : {
    "number" : "7.4.2",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "unknown",
    "build_date" : "2020-06-22T06:09:23.151801Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

VPC based Elasticsearch access policy is open and the aws-es-proxy container is created in same VPC.

Env :-
K8S: v1.16.8-eks-e16311
docker: 19.3.6
OS: Amazon Linux 2

please advise.

Method for terminating the proxy remotely

Hi. We've had an issue when using the es-proxy with cronjobs in Kubernetes. The main container finishes its work and terminates, but the es-proxy sidecar continues to run, so the pod itself never finishes. We thus need to manually clean these up periodically.

So, I'm hoping to add a patch for the proxy to accept an HTTP POST that gracefully exits, which our main container can invoke upon completion.

Confused, kibana.

Hello, we are trying to get Kibana running with this tool,
but I am confused because according to this: http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-troubleshooting.html#aes-troubleshooting-kibana-configure-anonymous-access
you do not need to sign requests to Kibana.
We have your proxy, plus another Okta proxy in front for authentication, and your app has a role with access to Elasticsearch. We are confused because different places say different things.
Could you please give advice? Or did AWS change this recently?
Thanks in advance.

Does not work with ES 6.0

It seems to be creating AWS credentials on each request to the proxy when connecting to AWS ES 6.0.1:

2018/01/23 12:34:37 Listening on 0.0.0.0:9200...
2018/01/23 12:34:52 Generated fresh AWS Credentials object
2018/01/23 12:34:53 Generated fresh AWS Credentials object
2018/01/23 12:34:56 Generated fresh AWS Credentials object
