pires / kubernetes-elasticsearch-cluster
Elasticsearch cluster on top of Kubernetes made easy.
License: Apache License 2.0
Hey,
I'm trying to set up an ES cluster using this repo, though I made some tweaks: an es-master deployment and an es-data deployment. I created 1 instance of master and 1 instance of data.
Both pods became Running and ES recognized both nodes in the cluster.
Once I tried to shut down one of the nodes, the whole thing broke.
This is what I see:
es-data-... 1/1 Running 6 17m
es-master-... 0/1 CrashLoopBackOff 6 1h
After a while both become running:
es-data-... 1/1 Running 6 20m
es-master-... 1/1 Running 7 1h
Then, a few seconds later, the data pod crashes:
notebook:kub me$ kubectl get pods
NAME READY STATUS RESTARTS AGE
es-data-... 0/1 CrashLoopBackOff 6 20m
es-master-... 1/1 Running 7 1h
The most telling error I can see when running kubectl describe pod es-data...
is this one:
19m 6s 30 {kubelet 172.17.8.102} spec.containers{es-data} Warning BackOff Back-off restarting failed docker container
48s 6s 4 {kubelet 172.17.8.102} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "es-data" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=es-data pod=es-data-2599720219-u9axv_default(330fb913-5f22-11e6-be3c-080027e9ede8)"
Or on the master (seems to be the same):
5m 10s 22 {kubelet 172.17.8.102} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "es-master" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=es-master pod=es-master-2585506078-as9kp_default(7b7095c8-5f1b-11e6-be3c-080027e9ede8)"
First, I'm not sure if it is OK to make the master node also act as a client and a data node, plus make the data node a client as well. Should this work at all? I'd like to start with as small a cluster as possible and scale over time.
If so, any clue why it is acting that way?
Thanks,
Asaf
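For context on the roles question: in these images, node roles are driven by environment variables. NODE_MASTER and HTTP_ENABLE appear in the manifests quoted later in this thread; NODE_DATA is my assumption from the image's documentation, not something shown here. A single all-roles node would look roughly like:

```yaml
# Hypothetical env fragment for a node acting as master + data + client.
# NODE_DATA is assumed from the image docs, not from the manifests
# pasted in this thread.
env:
- name: NODE_MASTER
  value: "true"
- name: NODE_DATA
  value: "true"
- name: HTTP_ENABLE     # enables the HTTP/client role
  value: "true"
```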
I have started an Elasticsearch cluster on my Kubernetes with the files here, and everything works fine except the master/lb/data pods: the discovery plugin got an HTTP 401 Unauthorized error from the kube API server. The reason is an incorrect service account property in the pod spec. I'm not sure if it changed from a previous version of Kubernetes, but currently the correct name in spec v1 should be serviceAccountName, not serviceAccount.
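For anyone hitting the same 401, the relevant fragment would look roughly like this (field name per the Kubernetes v1 PodSpec; "elasticsearch" is the service account name this repo uses):

```yaml
# Pod spec fragment (v1 API): the field is serviceAccountName,
# not serviceAccount.
spec:
  serviceAccountName: elasticsearch
  containers:
  - name: es-master
    image: quay.io/pires/docker-elasticsearch-kubernetes:2.3.2
```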
On other pods in my cluster I have no trouble with DNS:
nslookup kubernetes
Server: 192.168.4.53
Address: 192.168.4.53#53
Name: kubernetes.default.svc.cluster.local
Address: 192.168.4.1
However, when following the instructions from the README, with the "vanilla" settings, from the master:
nslookup kubernetes
Server: (null)
Address 1: ::1 localhost
Address 2: 127.0.0.1 localhost
nslookup: can't resolve 'kubernetes': Try again
Why is the DNS server null from the master container?
The resolv.conf is the same on both hosts.
nameserver 192.168.4.53
nameserver 10.5.6.5
search default.svc.cluster.local svc.cluster.local cluster.local prod.local comp.prod.local user.prod.local
options ndots:5
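As a side note on that resolv.conf: with options ndots:5, any name containing fewer than five dots is first tried with each search domain appended, which is why a bare "kubernetes" can resolve at all. A small illustrative sketch of that ordering (my own simplification, not the actual resolver code):

```python
def candidate_names(name, search_domains, ndots=5):
    """Order in which a stub resolver tries names.

    Names with fewer than `ndots` dots get the search domains
    appended first and the literal name tried last; names with
    >= ndots dots are tried literally first.
    """
    literal = [name]
    searched = [f"{name}.{d}" for d in search_domains]
    if name.count(".") < ndots:
        return searched + literal
    return literal + searched

search = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]
print(candidate_names("kubernetes", search)[0])
# first candidate: kubernetes.default.svc.cluster.local
```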
This is on CoreOS 835.1 with Kubernetes 1.0.6, and certificates generated with this guide:
https://coreos.com/kubernetes/docs/latest/openssl.html
[2016-01-15 01:32:11,745][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Patsy Walker] Exception caught during discovery: Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/elasticsearch-discovery. Cause: Hostname kubernetes.default.svc not verified:
certificate: sha1/Uuk4Na4fbmdrEmsCoaUsax0RkJw=
DN: CN=kube-apiserver
subjectAltNames: [192.168.4.1, 10.51.29.211, kubernetes, kubernetes.default, densicoreos002.prod.local]
io.fabric8.kubernetes.client.KubernetesClientException: Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/elasticsearch-discovery. Cause: Hostname kubernetes.default.svc not verified:
certificate: sha1/Uuk4Na4fbmdrEmsCoaUsax0RkJw=
DN: CN=kube-apiserver
subjectAltNames: [192.168.4.1, 10.51.29.211, kubernetes, kubernetes.default, densicoreos002.prod.local]
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestException(OperationSupport.java:245)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:182)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:173)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:472)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:105)
at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:99)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.ping(UnicastZenPing.java:240)
at org.elasticsearch.discovery.zen.ping.ZenPingService.ping(ZenPingService.java:106)
at org.elasticsearch.discovery.zen.ping.ZenPingService.pingAndWait(ZenPingService.java:84)
at org.elasticsearch.discovery.zen.ZenDiscovery.findMaster(ZenDiscovery.java:879)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:335)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$5000(ZenDiscovery.java:75)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1236)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLPeerUnverifiedException: Hostname kubernetes.default.svc not verified:
certificate: sha1/Uuk4Na4fbmdrEmsCoaUsax0RkJw=
DN: CN=kube-apiserver
subjectAltNames: [192.168.4.1, 10.51.29.211, kubernetes, kubernetes.default, densicoreos002.prod.local]
at com.squareup.okhttp.Connection.connectTls(Connection.java:244)
at com.squareup.okhttp.Connection.connectSocket(Connection.java:199)
at com.squareup.okhttp.Connection.connect(Connection.java:172)
at com.squareup.okhttp.Connection.connectAndSetOwner(Connection.java:367)
at com.squareup.okhttp.OkHttpClient$1.connectAndSetOwner(OkHttpClient.java:128)
at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:328)
at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:245)
at com.squareup.okhttp.Call.getResponse(Call.java:267)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:224)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$3.intercept(HttpClientUtils.java:107)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:221)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:195)
at com.squareup.okhttp.Call.execute(Call.java:79)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:180)
... 16 more
[2016-01-15 01:32:13,345][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Patsy Walker] Exception caught during discovery: Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/elasticsearch-discovery. Cause: Hostname kubernetes.default.svc not verified:
certificate: sha1/Uuk4Na4fbmdrEmsCoaUsax0RkJw=
DN: CN=kube-apiserver
subjectAltNames: [192.168.4.1, 10.51.29.211, kubernetes, kubernetes.default, densicoreos002.prod.local]
io.fabric8.kubernetes.client.KubernetesClientException: Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/elasticsearch-discovery. Cause: Hostname kubernetes.default.svc not verified:
certificate: sha1/Uuk4Na4fbmdrEmsCoaUsax0RkJw=
DN: CN=kube-apiserver
subjectAltNames: [192.168.4.1, 10.51.29.211, kubernetes, kubernetes.default, densicoreos002.prod.local]
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestException(OperationSupport.java:245)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:182)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:173)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:472)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:105)
at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:99)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$2.doRun(UnicastZenPing.java:249)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLPeerUnverifiedException: Hostname kubernetes.default.svc not verified:
certificate: sha1/Uuk4Na4fbmdrEmsCoaUsax0RkJw=
DN: CN=kube-apiserver
subjectAltNames: [192.168.4.1, 10.51.29.211, kubernetes, kubernetes.default, densicoreos002.prod.local]
at com.squareup.okhttp.Connection.connectTls(Connection.java:244)
at com.squareup.okhttp.Connection.connectSocket(Connection.java:199)
at com.squareup.okhttp.Connection.connect(Connection.java:172)
at com.squareup.okhttp.Connection.connectAndSetOwner(Connection.java:367)
at com.squareup.okhttp.OkHttpClient$1.connectAndSetOwner(OkHttpClient.java:128)
at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:328)
at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:245)
at com.squareup.okhttp.Call.getResponse(Call.java:267)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:224)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$3.intercept(HttpClientUtils.java:107)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:221)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:195)
at com.squareup.okhttp.Call.execute(Call.java:79)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:180)
... 11 more
[2016-01-15 01:32:14,926][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Patsy Walker] Exception caught during discovery: Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/elasticsearch-discovery. Cause: Hostname kubernetes.default.svc not verified:
certificate: sha1/Uuk4Na4fbmdrEmsCoaUsax0RkJw=
DN: CN=kube-apiserver
subjectAltNames: [192.168.4.1, 10.51.29.211, kubernetes, kubernetes.default, densicoreos002.prod.local]
io.fabric8.kubernetes.client.KubernetesClientException: Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/elasticsearch-discovery. Cause: Hostname kubernetes.default.svc not verified:
certificate: sha1/Uuk4Na4fbmdrEmsCoaUsax0RkJw=
DN: CN=kube-apiserver
subjectAltNames: [192.168.4.1, 10.51.29.211, kubernetes, kubernetes.default, densicoreos002.prod.local]
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestException(OperationSupport.java:245)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:182)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:173)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:472)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:105)
at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:99)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$2$1.doRun(UnicastZenPing.java:253)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLPeerUnverifiedException: Hostname kubernetes.default.svc not verified:
certificate: sha1/Uuk4Na4fbmdrEmsCoaUsax0RkJw=
DN: CN=kube-apiserver
subjectAltNames: [192.168.4.1, 10.51.29.211, kubernetes, kubernetes.default, densicoreos002.prod.local]
at com.squareup.okhttp.Connection.connectTls(Connection.java:244)
at com.squareup.okhttp.Connection.connectSocket(Connection.java:199)
at com.squareup.okhttp.Connection.connect(Connection.java:172)
at com.squareup.okhttp.Connection.connectAndSetOwner(Connection.java:367)
at com.squareup.okhttp.OkHttpClient$1.connectAndSetOwner(OkHttpClient.java:128)
at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:328)
at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:245)
at com.squareup.okhttp.Call.getResponse(Call.java:267)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:224)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$3.intercept(HttpClientUtils.java:107)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:221)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:195)
at com.squareup.okhttp.Call.execute(Call.java:79)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:180)
... 11 more
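The traceback points at the API server certificate: its subjectAltNames cover the service IPs, kubernetes, and kubernetes.default, but not kubernetes.default.svc, which is the hostname the fabric8 client dials. Regenerating the cert with that name added should fix verification; a hedged sketch of the relevant openssl.cnf section (DNS names and IPs taken from the log above, your values will differ):

```ini
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 192.168.4.1
IP.2 = 10.51.29.211
```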
Hello again.
ES 2.2.1 is out. Can you please upgrade the images/yamls?
P. S. I'm very grateful that you are maintaining this. Thanks again for your work!
How would I password-protect access to the es-client?
I know that Elastic offers a plugin called "Shield", but how would I proceed to get it running on the cluster?
Any help would be greatly appreciated.
When I scale up to 2 client and 3 master pods and then curl the cluster, I get status "yellow":
curl http://ip_address:9200/_cluster/health?pretty
{
"cluster_name" : "myesdb",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 6,
"number_of_data_nodes" : 1,
"active_primary_shards" : 5,
"active_shards" : 5,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 5,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0
}
Any idea what is going wrong?
Thanks
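For what it's worth, yellow here usually just means replica shards can't be assigned: the health output shows a single data node, and a replica cannot live on the same node as its primary, so all 5 replicas stay unassigned. A quick sanity check of that output (field names exactly as in the JSON above):

```python
import json

# The relevant fields from the pasted _cluster/health response.
health = json.loads("""
{ "status": "yellow", "number_of_data_nodes": 1,
  "active_primary_shards": 5, "active_shards": 5,
  "unassigned_shards": 5 }
""")

# With one data node, replicas of the 5 primaries have nowhere to go,
# so they stay unassigned and the cluster reports yellow.
replicas_stranded = (health["number_of_data_nodes"] == 1
                     and health["unassigned_shards"] > 0)
print(replicas_stranded)  # True
```

Scaling the data nodes (or setting index replicas to 0) should turn the status green.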
[vagrant@localhost origin]$ kubectl --kubeconfig=kubeconfig version
Client Version: version.Info{Major:"1", Minor:"4+", GitVersion:"v1.4.0-alpha.0.181+77419c48fd7d7e", GitCommit:"77419c48fd7d7e674b1947325f33716bb3671fbe", GitTreeState:"clean", BuildDate:"2016-06-15T01:42:25Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3+", GitVersion:"v1.3.0-alpha.2.0+6a87dba0b8a50d", GitCommit:"6a87dba0b8a50dccaddb67a4c7748696db1918ec", GitTreeState:"clean", BuildDate:"2016-04-11T19:42:40Z", GoVersion:"go1.6", Compiler:"gc", Platform:"linux/amd64"}
[vagrant@localhost origin]$ kubectl --kubeconfig=kubeconfig logs es-master-6q2ny
[2016-06-15 04:57:14,245][INFO ][node ] [Black Dragon] version[2.3.3], pid[12], build[218bdf1/2016-05-17T15:40:04Z]
[2016-06-15 04:57:14,267][INFO ][node ] [Black Dragon] initializing ...
[2016-06-15 04:57:16,240][INFO ][plugins ] [Black Dragon] modules [reindex, lang-expression, lang-groovy], plugins [cloud-kubernetes], sites []
[2016-06-15 04:57:16,350][INFO ][env ] [Black Dragon] using [1] data paths, mounts [[/data (/dev/mapper/vg_vagrant-lv_root)]], net usable_space [18gb], net total_space [37.7gb], spins? [possibly], types [ext4]
[2016-06-15 04:57:16,353][INFO ][env ] [Black Dragon] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-06-15 04:57:25,015][INFO ][node ] [Black Dragon] initialized
[2016-06-15 04:57:25,018][INFO ][node ] [Black Dragon] starting ...
[2016-06-15 04:57:25,300][INFO ][transport ] [Black Dragon] publish_address {172.17.0.13:9300}, bound_addresses {172.17.0.13:9300}
[2016-06-15 04:57:25,322][INFO ][discovery ] [Black Dragon] myesdb/ccLWmZ6SRKOVyzEkPaHzKw
[2016-06-15 04:57:27,179][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Black Dragon] Exception caught during discovery: An error has occurred.
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:57)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:125)
at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.readNodes(KubernetesUnicastHostsProvider.java:112)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.lambda$buildDynamicNodes$0(KubernetesUnicastHostsProvider.java:80)
at java.security.AccessController.doPrivileged(Native Method)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:79)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.ping(UnicastZenPing.java:240)
at org.elasticsearch.discovery.zen.ping.ZenPingService.ping(ZenPingService.java:106)
at org.elasticsearch.discovery.zen.ping.ZenPingService.pingAndWait(ZenPingService.java:84)
at org.elasticsearch.discovery.zen.ZenDiscovery.findMaster(ZenDiscovery.java:886)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:350)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4800(ZenDiscovery.java:91)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1237)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
...
I use a data node with gcePersistentDisk as storage. I can rolling-update es-master and es-client without problems, but kubectl gets stuck when doing a rolling update on es-data.
Can you tell me how to do a rolling update on the data node?
Thanks
Hi!
Great work on this cluster, as well as supporting the latest versions. Thanks :)
I am trying to get this to run in a nice & k8s-supported way, at scale, with a storage backend such as Flocker.
Note: I tried with EBS backed volumes, and k8s fails to free the resources in time, so the volume doesn't follow the pod as needed and it's a fail.
So for that, I first need to create as many flocker volumes as I initially want, say es-data-{1, 2, 3}
Then I'd replace the emptyDir volume with a Flocker volume, and spin up the first one. Alright, I get my first data node up & running.
But now, when I need to scale, I can't use the same RC definition, so I need a second one, with a different selector (at least I think).
Therefore, I slightly modified the template you provide, to the below example, and I create a RC per data node, with an "Id" selector.
apiVersion: v1
kind: ReplicationController
metadata:
  name: es-data-VOLUME_ID
  labels:
    component: elasticsearch
    role: data
    id: "VOLUME_ID"
spec:
  replicas: 1
  selector:
    component: elasticsearch
    role: data
    id: "VOLUME_ID"
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
        id: "VOLUME_ID"
    spec:
      serviceAccount: elasticsearch
      containers:
      - name: es-data
        securityContext:
          privileged: true
          capabilities:
            add:
            - IPC_LOCK
        image: quay.io/pires/docker-elasticsearch-kubernetes:2.3.2
        imagePullPolicy: Always
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "CLUSTER_NAME"
          value: "myesdb"
        - name: NODE_MASTER
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: storage
      volumes:
      - name: "storage"
        flocker:
          datasetName: es-data-VOLUME_ID
Then I deploy 1 RC per data node.
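In case it helps others, the VOLUME_ID stamping can be scripted; a minimal sketch (the template file name is my choice, and a one-line stand-in template is written inline here; in practice the full RC above would be the template):

```shell
# Sketch: stamp one RC manifest per Flocker volume from a template
# containing the literal placeholder VOLUME_ID.
printf 'name: es-data-VOLUME_ID\n' > es-data.yaml.tpl
for id in 1 2 3; do
  sed "s/VOLUME_ID/${id}/g" es-data.yaml.tpl > "es-data-${id}.yaml"
done
cat es-data-2.yaml   # name: es-data-2
```

Each stamped file would then be created with kubectl create -f.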
This sort of works, but it is rather cumbersome. Without additional logic I can't autoscale, I have too many RCs, I can't update easily... Not what I would expect.
Also, I find it very unstable. I get a LOT of failures with the below log. This is not linked to pods being collocated, or anything else. Even a DaemonSet would die like that.
$ kubectl logs es-data-2-e7dfl
$ kubectl logs es-data-2-7029y
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
time="2016-06-13T08:01:10Z" level=info msg="Starting go-dnsmasq server 1.0.5"
time="2016-06-13T08:01:10Z" level=info msg="Upstream nameservers: [10.3.0.10:53]"
time="2016-06-13T08:01:10Z" level=info msg="Search domains: [default.svc.cluster.local. svc.cluster.local. cluster.local. us-west-2.compute.internal.]"
time="2016-06-13T08:01:10Z" level=info msg="Ready for queries on tcp://127.0.0.1:53"
time="2016-06-13T08:01:10Z" level=info msg="Ready for queries on udp://127.0.0.1:53"
[services.d] done.
[2016-06-13 08:01:22,603][INFO ][node ] [Spyne] version[2.3.2], pid[172], build[b9e4a6a/2016-04-21T16:03:47Z]
[2016-06-13 08:01:22,619][INFO ][node ] [Spyne] initializing ...
[2016-06-13 08:01:23,833][INFO ][plugins ] [Spyne] modules [reindex, lang-expression, lang-groovy], plugins [cloud-kubernetes], sites []
[2016-06-13 08:01:23,929][INFO ][env ] [Spyne] using [1] data paths, mounts [[/data (/dev/xvdf)]], net usable_space [27.8gb], net total_space [29.4gb], spins? [no], types [ext4]
[2016-06-13 08:01:23,930][INFO ][env ] [Spyne] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-06-13 08:01:30,546][INFO ][node ] [Spyne] initialized
[2016-06-13 08:01:30,549][INFO ][node ] [Spyne] starting ...
[2016-06-13 08:01:30,849][INFO ][transport ] [Spyne] publish_address {10.2.5.4:9300}, bound_addresses {10.2.5.4:9300}
[2016-06-13 08:01:30,885][INFO ][discovery ] [Spyne] myesdb/RYjiHf5WS32Myz_e8cYDGg
[2016-06-13 08:01:40,308][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.24.4, transport_address 10.2.24.4:9300
[2016-06-13 08:01:40,314][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.45.4, transport_address 10.2.45.4:9300
[2016-06-13 08:01:40,315][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.5.2, transport_address 10.2.5.2:9300
[2016-06-13 08:01:42,637][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.24.4, transport_address 10.2.24.4:9300
[2016-06-13 08:01:42,640][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.45.4, transport_address 10.2.45.4:9300
[2016-06-13 08:01:42,640][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.5.2, transport_address 10.2.5.2:9300
[2016-06-13 08:01:44,159][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.24.4, transport_address 10.2.24.4:9300
[2016-06-13 08:01:44,160][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.45.4, transport_address 10.2.45.4:9300
[2016-06-13 08:01:44,160][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.5.2, transport_address 10.2.5.2:9300
[2016-06-13 08:01:44,364][INFO ][cluster.service ] [Spyne] detected_master {Louise Mason}{fTgSKXb6T22QeEmsXHtX9Q}{10.2.45.4}{10.2.45.4:9300}{data=false, master=true}, added {{Angelo Unuscione}{2Y5T8qLgTKakOrHDAczbgw}{10.2.5.2}{10.2.5.2:9300}{data=false, master=true},{Louise Mason}{fTgSKXb6T22QeEmsXHtX9Q}{10.2.45.4}{10.2.45.4:9300}{data=false, master=true},{Anomaloco}{mSR9pYPiSkSI0hjMPjGClA}{10.2.24.3}{10.2.24.3:9300}{master=false},{Shockwave}{pGeRMjjnTuKnCZarHpiPpQ}{10.2.24.4}{10.2.24.4:9300}{data=false, master=true},{Prime Mover}{X6TIn4HFQmS_dPHTLEIThQ}{10.2.24.5}{10.2.24.5:9300}{data=false, master=false},}, reason: zen-disco-receive(from master [{Louise Mason}{fTgSKXb6T22QeEmsXHtX9Q}{10.2.45.4}{10.2.45.4:9300}{data=false, master=true}])
[2016-06-13 08:01:44,763][INFO ][node ] [Spyne] started
Killed
/run.sh exited 137
time="2016-06-13T08:01:59Z" level=info msg="Application exit requested by signal: terminated"
time="2016-06-13T08:01:59Z" level=info msg="Restoring /etc/resolv.conf"
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
Any idea of a best practice for this? How do you operate at scale?
Many thanks,
How would one go about using attached data volumes instead of storing data on the instance?
As soon as Deployments are GA.
I've just set this up with a ConfigMap, so as to be able to add more settings than permitted in the default configuration.
Would you like a PR to add an example of this to the repo?
Why can we not create our own Docker image for this?
"Providing your own version of the images automatically built from this repository will not be supported."
Thanks!
Drew
I'm running your elasticsearch cluster project on top of your kubernetes coreos vagrant project.
First of all, thanks for both of these, they work great and have saved me a lot of work.
However, it looks like if I deploy the elasticsearch cluster on top of 1 master and 4 minions, and then kill one of the machines running an elasticsearch node, kubernetes does not migrate the pod to another machine. In fact, it never even seems to notice that the machine is dead. If I run kubectl get pods
it still shows the pod as "running", even though the machine is gone.
I've tried updating to kubernetes 0.12.1, and tried killing different elasticsearch machines, and it doesn't seem to make a difference.
Also, if I just kill the docker container on the machine, but leave the machine up, kubernetes will notice and move the pod as expected.
Any insight into why this is happening? Am I missing something?
Or should I take this up with the kubernetes team?
Hi,
Sorry for bugging you again, but ES 2.2.0 is out. It's required by Kibana 4.4 (freshly released as well), which brings cool features like custom chart colors.
Can you please update the dockers/yamls?
Thanks,
Zaar
As suggested in the README, the --privileged flag is required, allowing the containerised Elasticsearch to use mlockall and lock the process address space into RAM, preventing any Elasticsearch memory from being swapped out.
Is it possible to use more fine-grained dropping/adding of capabilities, down to a minimum set, to ensure the cluster stays secure?
This may be related to: fabric8io/elasticsearch-cloud-kubernetes#21
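For reference, a sketch of what a narrower securityContext could look like, keeping only the capability mlockall needs. Whether this image actually boots without privileged is exactly what this issue asks, so treat it as untested:

```yaml
# Sketch: drop privileged, grant only the capability needed by
# mlockall. Untested against this image.
securityContext:
  privileged: false
  capabilities:
    add:
    - IPC_LOCK      # lets mlockall pin the heap in RAM
```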
More of a question than an issue: es-svc.yaml defines a load balancer service, which in Google Cloud automatically gets assigned a public address and is opened to incoming traffic from the internet. Is there a way to avoid exposing it to the internet?
This is probably simple, but I've spun up a 5-node Kubernetes cluster by following the AWS tutorial, and was able to follow your instructions to create the Elasticsearch service on the cluster.
So all of that is wonderful, but here's my question: can I create multiple elasticsearch instances (a set of master, data, and load balancers) that are independent of one another on the same cluster?
I'd like to have multiple, independent elasticsearch clusters running on the same 5-node (or whatever physical count) AWS kubernetes cluster. I have many different organizations and would like each to have a dedicated elasticsearch instance, and want all of it to run on the same kubernetes cluster. I'm sure this is feasible, but I'm curious if just re-running the kubectl create commands would suffice?
Hope that makes sense! Thank you so much for your help.
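Not from the repo docs, but the usual answer here is one namespace per organization: resource names only need to be unique within a namespace, so re-running the same kubectl create commands with --namespace set per org (and a distinct CLUSTER_NAME per org, so the ES clusters don't discover each other) should yield independent clusters. A sketch:

```yaml
# One namespace per tenant; "org-a" is a placeholder name.
apiVersion: v1
kind: Namespace
metadata:
  name: org-a
```

Then, for example, kubectl create -f es-master.yaml --namespace=org-a, and likewise for each manifest and each org.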
The Kubernetes volumes docs have information about the currently supported source types:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md#gcepersistentdisk
What we want is the Elasticsearch cluster to have a data node that has lots of disk space. We need a persistent disk for this. The only one supported right now is persistentDisk. But there is a limitation with it, right now:
avoid creating multiple pods that use the same Volume
if multiple pods refer to the same Volume and both are scheduled on the same machine, regardless of whether they are read-only or read-write, then the second pod scheduled will fail.
Replication controllers can only be created for pods that use read-only mounts.
This means that your pattern of a data node behind a replicationController would fail, because by design it tries to replicate multiple pods sharing a disk. That works with the current emptyDir setup, but that same setup fails the use-case of needing a persistent and large disk.
My suggestion right now is to limit the replicationController replica count to 1 and switch over to use a GCE PD.
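Concretely, that switch could look like the fragment below. Field names follow the later v1 API (gcePersistentDisk; the docs of this era call it persistentDisk), and "es-data-disk" is a placeholder for a disk created beforehand with gcloud:

```yaml
# Sketch: single-replica RC spec with a GCE persistent disk
# instead of emptyDir. "es-data-disk" is a placeholder.
spec:
  replicas: 1                # a PD mounts read-write on one pod only
  template:
    spec:
      containers:
      - name: es-data
        volumeMounts:
        - name: storage
          mountPath: /data
      volumes:
      - name: storage
        gcePersistentDisk:
          pdName: es-data-disk
          fsType: ext4
```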
I am seeing this error in the logs:
2015-02-03T15:44:28.211477409Z [2015-02-03 15:44:28,211][WARN ][io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider] [Stallior] Exception caught during discovery javax.ws.rs.ProcessingException : java.net.SocketException: SocketException invoking http://10.23.243.201:443/api/v1beta1/pods: Connection reset
2015-02-03T15:44:29.784310388Z [2015-02-03 15:44:29,783][WARN ][org.apache.cxf.phase.PhaseInterceptorChain] Interceptor for {http://10.23.243.201:443}WebClient has thrown exception, unwinding now
2015-02-03T15:44:29.784310388Z org.apache.cxf.interceptor.Fault: Could not send Message.
2015-02-03T15:44:29.784310388Z at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:64)
2015-02-03T15:44:29.784310388Z at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
2015-02-03T15:44:29.784310388Z at org.apache.cxf.jaxrs.client.AbstractClient.doRunInterceptorChain(AbstractClient.java:619)
2015-02-03T15:44:29.784310388Z at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:674)
2015-02-03T15:44:29.784310388Z at org.apache.cxf.jaxrs.client.ClientProxyImpl.invoke(ClientProxyImpl.java:224)
2015-02-03T15:44:29.784310388Z at com.sun.proxy.$Proxy28.getPods(Unknown Source)
2015-02-03T15:44:29.784310388Z at io.fabric8.kubernetes.api.KubernetesHelper.getPodMap(KubernetesHelper.java:310)
2015-02-03T15:44:29.784310388Z at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.buildDynamicNodes(K8sUnicastHostsProvider.java:101)
2015-02-03T15:44:29.784310388Z at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:316)
2015-02-03T15:44:29.784310388Z at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$2.run(UnicastZenPing.java:234)
2015-02-03T15:44:29.784310388Z at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2015-02-03T15:44:29.784310388Z at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2015-02-03T15:44:29.784310388Z at java.lang.Thread.run(Thread.java:745)
2015-02-03T15:44:29.784310388Z Caused by: java.net.SocketException: SocketException invoking http://10.23.243.201:443/api/v1beta1/pods: Connection reset
2015-02-03T15:44:29.784310388Z at sun.reflect.GeneratedConstructorAccessor9.newInstance(Unknown Source)
2015-02-03T15:44:29.784310388Z at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
2015-02-03T15:44:29.784310388Z at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
2015-02-03T15:44:29.784310388Z at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.mapException(HTTPConduit.java:1359)
2015-02-03T15:44:29.784310388Z at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1343)
2015-02-03T15:44:29.784310388Z at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:56)
2015-02-03T15:44:29.784310388Z at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:638)
2015-02-03T15:44:29.784310388Z at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
2015-02-03T15:44:29.784310388Z ... 12 more
2015-02-03T15:44:29.784310388Z Caused by: java.net.SocketException: Connection reset
2015-02-03T15:44:29.784310388Z at java.net.SocketInputStream.read(SocketInputStream.java:189)
2015-02-03T15:44:29.784310388Z at java.net.SocketInputStream.read(SocketInputStream.java:121)
2015-02-03T15:44:29.784310388Z at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
2015-02-03T15:44:29.784310388Z at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
2015-02-03T15:44:29.784310388Z at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
2015-02-03T15:44:29.784310388Z at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:703)
2015-02-03T15:44:29.784310388Z at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
2015-02-03T15:44:29.784310388Z at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:674)
2015-02-03T15:44:29.784310388Z at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1535)
2015-02-03T15:44:29.784310388Z at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
2015-02-03T15:44:29.784310388Z at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
2015-02-03T15:44:29.784310388Z at org.apache.cxf.transport.http.URLConnectionHTTPConduit$URLConnectionWrappedOutputStream.getResponseCode(URLConnectionHTTPConduit.java:266)
2015-02-03T15:44:29.784310388Z at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:1557)
2015-02-03T15:44:29.784310388Z at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:1527)
2015-02-03T15:44:29.784310388Z at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1330)
2015-02-03T15:44:29.784310388Z ... 15 more
2015-02-03T15:44:29.784310388Z [2015-02-03 15:44:29,784][WARN ][io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider] [Stallior] Exception caught during discovery javax.ws.rs.ProcessingException : java.net.SocketException: SocketException invoking http://10.23.243.201:443/api/v1beta1/pods: Connection reset
Given this Pod that I launched (a variation on yours that uses a PD):
id: elasticsearch-data
kind: Pod
apiVersion: v1beta1
labels:
  component: elasticsearch
  role: data
desiredState:
  manifest:
    version: v1beta1
    id: elasticsearch-data
    volumes:
      - name: es-persistent-storage
        source:
          persistentDisk:
            pdName: elasticsearch-data
            fsType: ext4
    containers:
      - name: elasticsearch-data
        image: pires/elasticsearch:data
        ports:
          - name: transport
            containerPort: 9300
        volumeMounts:
          - name: es-persistent-storage
            mountPath: /data
It seems excessive that each client, master, and data node requires an ES_HEAP_SIZE
of 1GB or more of memory.
Is it only the client and master nodes that require so much memory, or just the clients?
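If heap size per role is the concern, it can be tuned through the container env; an illustrative fragment (the values are guesses, not recommendations from this repo — ES_HEAP_SIZE is the variable the images read):

```yaml
# Illustrative only: a dedicated master node can usually run with a much
# smaller heap than a data node.
env:
  - name: ES_HEAP_SIZE
    value: "256m"   # vs. e.g. "1g" or more on data nodes
```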
Hi Pires,
I have created the secrets in the K8s environment and added the ca.crt to the kube-controller-manager start parameters, but an exception shows up during SSL verification.
Caused by: javax.net.ssl.SSLPeerUnverifiedException: Hostname kubernetes.default.svc not verified:
certificate: sha1/4xQd+1eSU89fBE1j3SolDM+61v8=
DN: CN=host-172-216-0-17
subjectAltNames: []
at com.squareup.okhttp.internal.io.RealConnection.connectTls(RealConnection.java:197)
at com.squareup.okhttp.internal.io.RealConnection.connectSocket(RealConnection.java:145)
at com.squareup.okhttp.internal.io.RealConnection.connect(RealConnection.java:108)
at com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:184)
at com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
at com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:281)
at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
at com.squareup.okhttp.Call.getResponse(Call.java:286)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$3.intercept(HttpClientUtils.java:110)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:232)
Would you please give me some tips on how to handle this problem? Thank you.
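The SSLPeerUnverifiedException above shows the API server certificate has CN=host-172-216-0-17 and an empty subjectAltNames list, so the client can never match kubernetes.default.svc. The certificate has to be reissued with the in-cluster names as SANs; a minimal sketch with a throwaway self-signed cert (paths, names, and the service IP are assumptions for illustration):

```shell
# Issue a cert whose SAN list covers the in-cluster API server names
# (requires OpenSSL 1.1.1+ for -addext; adjust names/IP for your cluster).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/apiserver.key -out /tmp/apiserver.crt -days 1 \
  -subj "/CN=host-172-216-0-17" \
  -addext "subjectAltName=DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,IP:10.254.0.1"

# Confirm the SANs actually made it into the certificate.
openssl x509 -in /tmp/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"
```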
I noticed you are using tags to differentiate the parts of the Elasticsearch images (data, master, API).
Would it not be better to reserve the tags for the semver
version info? Right now I don't have a choice as to which version of Elasticsearch I want to use.
Heya all,
I'm trying to run ES master on my k8s cluster (hosted on AWS) and getting an error:
admin@ip-172-20-55-229:~$ kubectl logs es-master-2201531720-neui0
[2016-09-03 05:46:00,466][INFO ][node ] [Darkstar] version[2.3.4], pid[11], build[e455fd0/2016-06-30T11:24:31Z]
[2016-09-03 05:46:00,467][INFO ][node ] [Darkstar] initializing ...
[2016-09-03 05:46:01,235][INFO ][plugins ] [Darkstar] modules [reindex, lang-expression, lang-groovy], plugins [cloud-kubernetes], sites []
[2016-09-03 05:46:01,267][INFO ][env ] [Darkstar] using [1] data paths, mounts [[/data (/dev/xvdbe)]], net usable_space [46.5gb], net total_space [49gb], spins? [no], types [ext4]
[2016-09-03 05:46:01,267][INFO ][env ] [Darkstar] heap size [247.5mb], compressed ordinary object pointers [true]
[2016-09-03 05:46:03,890][INFO ][node ] [Darkstar] initialized
[2016-09-03 05:46:03,898][INFO ][node ] [Darkstar] starting ...
Exception in thread "main" java.lang.IllegalArgumentException: No up-and-running site-local (private) addresses found, got [name:lo (lo), name:eth0 (eth0)]
at org.elasticsearch.common.network.NetworkUtils.getSiteLocalAddresses(NetworkUtils.java:186)
at org.elasticsearch.common.network.NetworkService.resolveInternal(NetworkService.java:233)
at org.elasticsearch.common.network.NetworkService.resolveInetAddresses(NetworkService.java:209)
at org.elasticsearch.common.network.NetworkService.resolveBindHostAddresses(NetworkService.java:122)
at org.elasticsearch.transport.netty.NettyTransport.bindServerBootstrap(NettyTransport.java:424)
at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:321)
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:68)
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:182)
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:68)
at org.elasticsearch.node.Node.start(Node.java:278)
at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:206)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:272)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
Not sure this entirely makes sense (I'm new to Kubernetes and somewhat new to ES), but it'd be cool if you could add the Sense plugin and use kubectl proxy
to live-query production data.
Thoughts?
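A sketch of that idea (the service name matches this repo; the port and URL form are the v1 API-server service-proxy conventions and would need to be checked against your cluster version):

```shell
# Open an authenticated tunnel to the API server on localhost.
kubectl proxy --port=8001 &

# Query the elasticsearch service through the proxy; a site plugin such as
# Sense would be reachable the same way through a browser.
curl http://localhost:8001/api/v1/proxy/namespaces/default/services/elasticsearch:9200/_cluster/health
```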
Caused by: java.net.SocketException: SocketException invoking http://10.100.0.1:443/api/v1/namespaces/default/endpoints/elasticsearch-discovery: Unexpected end of file from server
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.mapException(HTTPConduit.java:1364)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1348)
at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:56)
at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:651)
at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
at org.apache.cxf.jaxrs.client.AbstractClient.doRunInterceptorChain(AbstractClient.java:624)
at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:674)
... 10 more
Caused by: java.net.SocketException: Unexpected end of file from server
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:792)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:789)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1535)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at org.apache.cxf.transport.http.URLConnectionHTTPConduit$URLConnectionWrappedOutputStream.getResponseCode(URLConnectionHTTPConduit.java:275)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:1563)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:1533)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1335)
[2016-09-14 08:28:18,410][WARN ][org.apache.cxf.phase.PhaseInterceptorChain] Interceptor for {https://10.254.0.1:443}WebClient has thrown exception, unwinding now
org.apache.cxf.interceptor.Fault: **Could not send Message**.
at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:64)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
at org.apache.cxf.jaxrs.client.AbstractClient.doRunInterceptorChain(AbstractClient.java:624)
at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:674)
at org.apache.cxf.jaxrs.client.ClientProxyImpl.invoke(ClientProxyImpl.java:224)
at com.sun.proxy.$Proxy29.endpointsForService(Unknown Source)
at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.getNodesFromKubernetesSelector(K8sUnicastHostsProvider.java:123)
at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.buildDynamicNodes(K8sUnicastHostsProvider.java:106)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:313)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.ping(UnicastZenPing.java:219)
at org.elasticsearch.discovery.zen.ping.ZenPingService.ping(ZenPingService.java:146)
at org.elasticsearch.discovery.zen.ping.ZenPingService.pingAndWait(ZenPingService.java:124)
at org.elasticsearch.discovery.zen.ZenDiscovery.findMaster(ZenDiscovery.java:1007)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:361)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$6100(ZenDiscovery.java:86)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1384)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: IOException invoking https://10.254.0.1:443/api/v1/namespaces/default/endpoints/elasticsearch-discovery: **HTTPS hostname wrong: should be <10.254.0.1>**
ES 2.1.1 is out. Please upgrade.
Thank you very much for doing this work.
Hi Paulo,
Can you please update to ES 2.3.3? (It fixes support for http compression).
Thank you as always!
I have not built my own image, just copied/pasted the "Deploy" section of the readme.
kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}
Everything was OK until:
kubectl get svc,rc,pods
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch 10.0.0.143 9200/TCP 20m
elasticsearch-discovery 10.0.0.27 9300/TCP 20m
kubernetes 10.0.0.1 443/TCP 11d
NAME DESIRED CURRENT AGE
es-client 1 1 18m
es-data 1 1 18m
es-master 1 1 20m
NAME READY STATUS RESTARTS AGE
es-client-yduhr 1/1 Running 0 18m
es-data-950tk 1/1 Running 0 18m
es-master-h0ble 1/1 Running 0 20m
k8s-etcd-127.0.0.1 0/1 CrashLoopBackOff 9 11d
k8s-master-127.0.0.1 4/4 Running 2 47m
k8s-proxy-127.0.0.1 1/1 Running 0 46m
But kubectl logs es-master-h0ble
(output attached) shows some errors.
What can I do?
After deploying an app with a 2GB memory limit, ES cannot be resized or deployed in k8s.
I get the following error when spinning up the ES cluster:
2015-04-25T00:05:01.856141447Z Caused by: com.fasterxml.jackson.databind.JsonMappingException: Numeric value (2147483648) out of range of int
2015-04-25T00:05:01.856141447Z at [Source: sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@511e7433; line: 2062, column: 35] (through reference chain: io.fabric8.kubernetes.api.model.PodList["items"]->io.fabric8.kubernetes.api.model.Pod["desiredState"]->io.fabric8.kubernetes.api.model.PodState["manifest"]->io.fabric8.kubernetes.api.model.ContainerManifest["containers"]->io.fabric8.kubernetes.api.model.Container["memory"])
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:232)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:197)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.wrapAndThrow(BeanDeserializerBase.java:1415)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:244)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:232)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:206)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:25)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:242)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:242)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:242)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:232)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:206)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:25)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:242)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.ObjectReader._bind(ObjectReader.java:1232)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:676)
2015-04-25T00:05:01.856141447Z at com.fasterxml.jackson.jaxrs.base.ProviderBase.readFrom(ProviderBase.java:800)
2015-04-25T00:05:01.856141447Z at org.apache.cxf.jaxrs.utils.JAXRSUtils.readFromMessageBodyReader(JAXRSUtils.java:1322)
2015-04-25T00:05:01.856141447Z at org.apache.cxf.jaxrs.impl.ResponseImpl.doReadEntity(ResponseImpl.java:369)
2015-04-25T00:05:01.856141447Z ... 15 more
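For context, 2GB expressed in bytes is exactly one more than the largest signed 32-bit integer, which is why the int-typed `memory` field in the fabric8 model overflows during deserialization:

```shell
# 2GB in bytes vs. the signed 32-bit ceiling the "memory" field is parsed into.
echo $(( 2 * 1024 * 1024 * 1024 ))   # 2147483648
echo $(( (1 << 31) - 1 ))            # 2147483647
```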
I filed an issue with fabric8io too: fabric8io/elasticsearch-cloud-kubernetes#9
Hi pires,
I am running the cluster on AWS, but I get some errors:
[2016-09-13 06:51:41,980][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Answer] Exception caught during discovery: An error has occurred.
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:57)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:125)
at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.readNodes(KubernetesUnicastHostsProvider.java:112)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.lambda$buildDynamicNodes$0(KubernetesUnicastHostsProvider.java:80)
at java.security.AccessController.doPrivileged(Native Method)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:79)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.ping(UnicastZenPing.java:240)
at org.elasticsearch.discovery.zen.ping.ZenPingService.ping(ZenPingService.java:106)
at org.elasticsearch.discovery.zen.ping.ZenPingService.pingAndWait(ZenPingService.java:84)
at org.elasticsearch.discovery.zen.ZenDiscovery.findMaster(ZenDiscovery.java:886)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:350)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4800(ZenDiscovery.java:91)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1237)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
at com.squareup.okhttp.internal.io.RealConnection.connectTls(RealConnection.java:188)
at com.squareup.okhttp.internal.io.RealConnection.connectSocket(RealConnection.java:145)
at com.squareup.okhttp.internal.io.RealConnection.connect(RealConnection.java:108)
at com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:184)
at com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
at com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:281)
at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
at com.squareup.okhttp.Call.getResponse(Call.java:286)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
at com.squareup.okhttp.Call.execute(Call.java:80)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:210)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:205)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:510)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:118)
... 16 more
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
at sun.security.validator.Validator.validate(Validator.java:260)
at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1491)
... 39 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:382)
... 45 more
[2016-09-13 06:51:43,484][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Answer] Exception caught during discovery: An error has occurred.
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:57)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:125)
at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.readNodes(KubernetesUnicastHostsProvider.java:112)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.lambda$buildDynamicNodes$0(KubernetesUnicastHostsProvider.java:80)
at java.security.AccessController.doPrivileged(Native Method)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:79)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$2.doRun(UnicastZenPing.java:249)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: kubernetes.default.svc: unknown error
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at com.squareup.okhttp.Dns$1.lookup(Dns.java:39)
at com.squareup.okhttp.internal.http.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:175)
at com.squareup.okhttp.internal.http.RouteSelector.nextProxy(RouteSelector.java:141)
at com.squareup.okhttp.internal.http.RouteSelector.next(RouteSelector.java:83)
at com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:174)
at com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
at com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:281)
at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
at com.squareup.okhttp.Call.getResponse(Call.java:286)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
at com.squareup.okhttp.Call.execute(Call.java:80)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:210)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:205)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:510)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:118)
... 11 more
Hi there,
It's this time of the year again - ES 2.3.0 is out and it will be absolutely awesome to have all its goodness in k8s.
Thanks again for your work.
Hey, great work on this.
Kubernetes v0.19.0 removed the v1beta2 API which the discovery plugin uses (if used with selector).
[2015-06-13 15:00:58,776][WARN ][io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider] [Timeslip] Exception caught during discovery javax.ws.rs.ProcessingException : java.net.ConnectException: ConnectException invoking http://localhost:8080/api/v1beta2/pods?namespace: Connection refused
javax.ws.rs.ProcessingException: java.net.ConnectException: ConnectException invoking http://localhost:8080/api/v1beta2/pods?namespace: Connection refused
at org.apache.cxf.jaxrs.client.AbstractClient.checkClientException(AbstractClient.java:557)
at org.apache.cxf.jaxrs.client.AbstractClient.preProcessResult(AbstractClient.java:539)
at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:676)
at org.apache.cxf.jaxrs.client.ClientProxyImpl.invoke(ClientProxyImpl.java:224)
at com.sun.proxy.$Proxy30.getPods(Unknown Source)
at io.fabric8.kubernetes.api.KubernetesHelper.getFilteredPodMap(KubernetesHelper.java:446)
at io.fabric8.kubernetes.api.KubernetesHelper.getSelectedPodMap(KubernetesHelper.java:438)
at io.fabric8.kubernetes.api.KubernetesHelper.getSelectedPodMap(KubernetesHelper.java:433)
at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.getNodesFromKubernetesSelector(K8sUnicastHostsProvider.java:122)
at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.buildDynamicNodes(K8sUnicastHostsProvider.java:106)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:313)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$2.doRun(UnicastZenPing.java:228)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: ConnectException invoking http://localhost:8080/api/v1beta2/pods?namespace: Connection refused
at sun.reflect.GeneratedConstructorAccessor10.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.mapException(HTTPConduit.java:1364)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1348)
at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:56)
at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:651)
at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
at org.apache.cxf.jaxrs.client.AbstractClient.doRunInterceptorChain(AbstractClient.java:624)
at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:674)
... 13 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1168)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1104)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:998)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:932)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1512)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at org.apache.cxf.transport.http.URLConnectionHTTPConduit$URLConnectionWrappedOutputStream.getResponseCode(URLConnectionHTTPConduit.java:275)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:1563)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:1533)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1335)
... 19 more
If I understand it correctly, it seems to be fixed upstream with this commit:
fabric8io/fabric8@f233163
And the following one for the plugin:
fabric8io/elasticsearch-cloud-kubernetes@65bbe5a
Any chance you could bump the elasticsearch-cloud-kubernetes plugin to 1.2.0?
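For reference, bumping the plugin would likely amount to changing one install line in the image build. A minimal sketch, assuming the ES 1.x plugin CLI and the fabric8 Maven coordinates (the install path is a guess based on common image layouts):

```shell
# Hypothetical build-time snippet: install the fixed plugin version.
# Exact binary path and CLI flags depend on the Elasticsearch version
# baked into the image (ES 1.x uses --install, ES 2.x uses `plugin install`).
/elasticsearch/bin/plugin --install io.fabric8/elasticsearch-cloud-kubernetes/1.2.0
```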
Hi,
I'm not sure if it's a bug or not, but following the README gives me an error:
[2015-09-29 15:50:57,570][ERROR][io.fabric8.kubernetes.api.KubernetesFactory] Specified CA certificate file /var/run/secrets/kubernetes.io/serviceaccount/ca.crt does not exist or is not readable
Followed by lot of
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
What did I forget to do to get this certificate working?
Thanks for your help,
Alex
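An error like that usually means the service-account secret was never mounted into the pod. A quick diagnostic sketch, assuming a running cluster (`<es-pod>` is a placeholder for the actual pod name):

```shell
# Check that the service-account token and CA cert are mounted in the pod.
# <es-pod> is a placeholder; expect to see ca.crt, namespace, and token.
kubectl exec <es-pod> -- ls -l /var/run/secrets/kubernetes.io/serviceaccount/

# If the directory is empty or missing, the token controller may not be
# issuing secrets; that can happen when the apiserver lacks
# --service-account-key-file or the controller-manager lacks
# --service-account-private-key-file / --root-ca-file.
```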
Move from Docker Hub to Quay.io.
I am getting this error when I start up my first master.
[2016-03-11 20:52:43,835][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Fight-Man] Exception caught during discovery: Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/elasticsearch-discovery. Cause: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
This is on Kube 1.0.3 and a CentOS 7 Cluster.
kubectl get serviceaccounts --namespace=default
NAME SECRETS
default 1
elasticsearch 1
[root@TSTMGTINF01 kubernetes-elasticsearch-cluster]# kubectl log es-master-uvoi6
W0713 15:37:49.279352 8342 cmd.go:219] log is DEPRECATED and will be removed in a future version. Use logs instead.
Error from server: the server could not find the requested resource ( pods/log es-master-uvoi6)
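As the warning above says, `log` is deprecated in favor of `logs`; a sketch of the replacement invocation (pod name taken from the output above):

```shell
# Use the plural form. Note that on very old servers (this report is on
# Kube 1.0.x) the pods/log subresource may itself be unavailable, in which
# case `docker logs <container>` on the node is the fallback.
kubectl logs es-master-uvoi6
```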
Have you considered securing communication between ES nodes? Search Guard is an open source alternative to Elastic's Shield product that would enable this.
See #25.
I'm wondering whether you have run this in production with some serious data to stress-test it?
I'm thinking about doing the same thing, but I'm worried about nodes going up and down and causing Elasticsearch to move shards around.
Kubernetes is designed for microservices that can die at any time, but for Elasticsearch clusters holding a large amount of data, that could be disastrous.
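One mitigation worth noting: Elasticsearch can delay reallocating shards away from a node that dropped out, which helps when Kubernetes restarts a pod quickly enough for it to rejoin. A sketch, assuming an ES version that supports `index.unassigned.node_left.delayed_timeout` (1.7+); the host is a placeholder:

```shell
# Delay shard reallocation for 5 minutes after a node leaves, giving a
# restarted pod time to rejoin before the cluster starts copying shards.
# <es-client> is a placeholder for a reachable client/LB node.
curl -XPUT 'http://<es-client>:9200/_all/_settings' -d '{
  "settings": { "index.unassigned.node_left.delayed_timeout": "5m" }
}'
```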
In the /etc/systemd/system/kube-kubelet.service config we set the kubelet's domain with `--cluster_domain=vungle.local`.
I believe the default is `cluster.local`,
and the domain doesn't seem to be configurable in these manifests?
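If discovery resolves the service by its fully-qualified name, a non-default cluster domain would change that name. A diagnostic sketch, assuming the default namespace and the repo's `elasticsearch-discovery` service (`<es-pod>` is a placeholder):

```shell
# With --cluster_domain=vungle.local, the discovery service's FQDN becomes:
#   elasticsearch-discovery.default.svc.vungle.local
# Verify that in-pod DNS resolves it under the custom domain:
kubectl exec <es-pod> -- nslookup elasticsearch-discovery.default.svc.vungle.local

# The short name 'elasticsearch-discovery' should still resolve via the
# pod's DNS search path regardless of the domain setting.
```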