This project is no longer maintained

As of November 7th, 2018, I've decided to end my commitment to maintaining this repo and related projects.

It's been 3 years since I last used Elasticsearch, so I no longer have the motivation it takes to maintain and evolve this project. Also, other projects need all the attention I can give.

It was a great run, thank you all.

kubernetes-elasticsearch-cluster

Elasticsearch cluster on top of Kubernetes made easy.

Abstract

Elasticsearch best practices recommend separating nodes into three roles:

  • Master nodes - intended for cluster management only, no data, no HTTP API
  • Data nodes - intended for client usage and data
  • Ingest nodes - intended for document pre-processing during ingestion

Given this, I'm going to demonstrate how to provision a production-grade scenario consisting of 3 master, 2 data, and 2 ingest nodes.

  • Elasticsearch pods need an init container to run in privileged mode, so it can set some VM options. For that to happen, the kubelet should be running with args --allow-privileged, otherwise the init container will fail to run (see the sketch after this list).

  • By default, ES_JAVA_OPTS is set to -Xms256m -Xmx256m. This is a very low value, but many users, e.g. minikube users, were having issues with pods getting killed because hosts ran out of memory. One can change this in the deployment descriptors available in this repository.

  • At the moment, the Kubernetes pod descriptors use an emptyDir for storing data in each data node container. This is for the sake of simplicity and should be adapted according to one's storage needs.

  • The stateful directory contains an example which deploys the data pods as a StatefulSet. It uses volumeClaimTemplates to provision persistent storage for each pod.

  • By default, PROCESSORS is set to 1. This may not be enough for some deployments, especially at startup time. Adjust resources.limits.cpu and/or livenessProbe accordingly if required. Note that resources.limits.cpu must be an integer.
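
For illustration, here is a minimal, hypothetical sketch of the privileged init container and ES_JAVA_OPTS override described in the bullets above; the image, names, and values are assumptions, and the actual descriptors in this repository may differ:

initContainers:
- name: init-sysctl
  image: busybox
  # Elasticsearch needs vm.max_map_count raised for mmapfs, hence privileged mode.
  command: ["sysctl", "-w", "vm.max_map_count=262144"]
  securityContext:
    privileged: true
containers:
- name: es-master
  env:
  # Raise the heap from the default -Xms256m -Xmx256m if the host has memory to spare.
  - name: "ES_JAVA_OPTS"
    value: "-Xms512m -Xmx512m"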

Pre-requisites

  • Kubernetes 1.11.x (tested with v1.11.2 on top of Vagrant + CoreOS).
  • kubectl configured to access the Kubernetes API.

Building one's own version of the images from this repository is an optional step, and providing such images will not be supported. One has been warned.

Test

Deploy

kubectl create -f es-discovery-svc.yaml
kubectl create -f es-svc.yaml
kubectl create -f es-master.yaml
kubectl rollout status -f es-master.yaml

kubectl create -f es-ingest-svc.yaml
kubectl create -f es-ingest.yaml
kubectl rollout status -f es-ingest.yaml

kubectl create -f es-data.yaml
kubectl rollout status -f es-data.yaml

Let's check if everything is working properly:

kubectl get svc,deployment,pods -l component=elasticsearch
NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/elasticsearch             ClusterIP   10.100.243.196   <none>        9200/TCP   3m
service/elasticsearch-discovery   ClusterIP   None             <none>        9300/TCP   3m
service/elasticsearch-ingest      ClusterIP   10.100.76.74     <none>        9200/TCP   2m

NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/es-data     2         2         2            2           1m
deployment.extensions/es-ingest   2         2         2            2           2m
deployment.extensions/es-master   3         3         3            3           3m

NAME                             READY     STATUS    RESTARTS   AGE
pod/es-data-56f8ff8c97-642bq     1/1       Running   0          1m
pod/es-data-56f8ff8c97-h6hpc     1/1       Running   0          1m
pod/es-ingest-6ddd5fc689-b4s94   1/1       Running   0          2m
pod/es-ingest-6ddd5fc689-d8rtj   1/1       Running   0          2m
pod/es-master-68bf8f86c4-bsfrx   1/1       Running   0          3m
pod/es-master-68bf8f86c4-g8nph   1/1       Running   0          3m
pod/es-master-68bf8f86c4-q5khn   1/1       Running   0          3m

As one can see, the cluster seems to be up and running. Easy, wasn't it?

Access the service

Don't forget that services in Kubernetes are only accessible from containers in the cluster. For different behavior, one should configure the creation of an external load-balancer. While it's supported within this example service descriptor, its usage is out of scope of this document, for now.

Note: if you are using one of the cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service. You can uncomment the field in es-svc.yaml.
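
For instance, here is a hedged sketch of es-svc.yaml with the type field uncommented; the metadata and selector shown are assumptions based on the labels used elsewhere in this document:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
spec:
  # Uncommenting this line asks the cloud provider for an external load balancer.
  type: LoadBalancer
  selector:
    component: elasticsearch
    role: data
  ports:
  - name: http
    port: 9200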

kubectl get svc elasticsearch
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
elasticsearch   ClusterIP   10.100.243.196   <none>        9200/TCP   3m

From any host on the Kubernetes cluster (that's running kube-proxy or similar), run:

curl http://10.100.243.196:9200

One should see something similar to the following:

{
  "name" : "es-data-56f8ff8c97-642bq",
  "cluster_name" : "myesdb",
  "cluster_uuid" : "RkRkTl26TDOE7o0FhCcW_g",
  "version" : {
    "number" : "6.3.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "053779d",
    "build_date" : "2018-07-20T05:20:23.451332Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Or if one wants to see cluster information:

curl http://10.100.243.196:9200/_cluster/health?pretty

One should see something similar to the following:

{
  "cluster_name" : "myesdb",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 7,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

One of the main advantages of running Elasticsearch on top of Kubernetes is how resilient the cluster becomes, particularly during node restarts. However, if all data pods are scheduled onto the same node(s), this advantage decreases significantly and may even result in no data pods being available.

It is then highly recommended, in the context of the solution described in this repository, that one adopts pod anti-affinity in order to guarantee that two data pods will never run on the same node.

Here's an example:

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: component
              operator: In
              values:
              - elasticsearch
            - key: role
              operator: In
              values:
              - data
          topologyKey: kubernetes.io/hostname
  containers:
  - (...)

If one wants to ensure that no more than n Elasticsearch nodes will be unavailable at a time, one can optionally (change and) apply the following manifests:

kubectl create -f es-master-pdb.yaml
kubectl create -f es-data-pdb.yaml

Note: This is an advanced subject and one should only put it into practice if one clearly understands what it means in both the Kubernetes and Elasticsearch contexts. For more information, please consult Pod Disruptions.
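
For reference, a hypothetical sketch of what es-data-pdb.yaml could look like, tolerating at most one unavailable data pod at a time; the labels are assumptions:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: es-data-pdb
spec:
  # Never allow voluntary disruptions to take down more than one data pod.
  maxUnavailable: 1
  selector:
    matchLabels:
      component: elasticsearch
      role: data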

WARNING: The Helm chart is maintained by someone else in the community and may not be up to date with this repo.

Helm charts for a basic (non-stateful) Elasticsearch deployment are maintained at https://github.com/clockworksoul/helm-elasticsearch. With Helm properly installed and configured, standing up a complete cluster is almost trivial:

git clone https://github.com/clockworksoul/helm-elasticsearch.git
helm install helm-elasticsearch

Various parameters of the cluster, including replica count and memory allocations, can be adjusted by editing the helm-elasticsearch/values.yaml file. For information about Helm, please consult the complete Helm documentation.

The image used in this repo is very minimalist. However, one can install additional plug-ins at will by simply specifying the ES_PLUGINS_INSTALL environment variable in the desired pod descriptors. For instance, to install the Google Cloud Storage and S3 plug-ins it would look as follows:

- name: "ES_PLUGINS_INSTALL"
  value: "repository-gcs,repository-s3"

Note: The X-Pack plugin does not currently work with the quay.io/pires/docker-elasticsearch-kubernetes image. See Issue #102

Additionally, one can run a CronJob that will periodically run Curator to clean up indices (or perform other actions on the Elasticsearch cluster).

kubectl create -f es-curator-config.yaml
kubectl create -f es-curator.yaml

Please confirm the job has been created.

kubectl get cronjobs
NAME      SCHEDULE    SUSPEND   ACTIVE    LAST-SCHEDULE
curator   1 0 * * *   False     0         <none>

The job is configured to run once a day at 1 minute past midnight and delete indices that are older than 3 days.

Notes

  • One can change the schedule by editing the cron notation in es-curator.yaml.
  • One can change the action (e.g. delete older than 3 days) by editing the es-curator-config.yaml.
  • The definition of the action_file.yaml is quite self-explanatory for simple set-ups. For more advanced configuration options, please consult the Curator Documentation. (A sketch follows this list.)
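
As an illustration, here is a hedged sketch of an action_file.yaml that deletes indices older than 3 days, following Curator's documented action format; the actual content of es-curator-config.yaml may differ:

actions:
  1:
    action: delete_indices
    description: "Delete indices older than 3 days"
    options:
      # Don't fail the run when no indices match the filters.
      ignore_empty_list: True
    filters:
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 3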

If one wants to remove the curator job, just run:

kubectl delete cronjob curator
kubectl delete configmap curator-config

WARNING: The Kibana section is maintained by someone else in the community and may not be up to date with this repo.

Deploy

If Kibana defaults are not enough, one may want to customize kibana.yaml through a ConfigMap. Please refer to Configuring Kibana for all available attributes.
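
As an example, a minimal, hypothetical ConfigMap carrying a kibana.yml; the keys shown are standard Kibana settings, but the actual kibana-cm.yaml may differ:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana
data:
  kibana.yml: |
    # Point Kibana at the in-cluster Elasticsearch service.
    server.name: kibana
    elasticsearch.url: http://elasticsearch:9200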

kubectl create -f kibana-cm.yaml
kubectl create -f kibana-svc.yaml
kubectl create -f kibana.yaml

Kibana will become available through service kibana, and one will be able to access it from within the cluster, or proxy it through the Kubernetes API as follows:

curl https://<API_SERVER_URL>/api/v1/namespaces/default/services/kibana:http/proxy

One can also create an Ingress to expose the service publicly, or simply use the service nodeport. In case one proceeds to do so, one must change the SERVER_BASEPATH environment variable to match their environment.

FAQ

Why does NUMBER_OF_MASTERS differ from number of master-replicas?

The default value for this environment variable is 2, meaning a cluster will need a minimum of 2 master nodes to operate. If a cluster has 3 masters and one dies, the cluster still works. The minimum number of master nodes is usually n/2 + 1, where n is the number of master nodes in a cluster. If a cluster has 5 master nodes, one should have a minimum of 3; fewer than that and the cluster stops. If one scales the number of masters, make sure to update the minimum number of master nodes through the Elasticsearch API, as setting the environment variable only works at cluster setup. More info: https://www.elastic.co/guide/en/elasticsearch/guide/1.x/_important_configuration_changes.html#_minimum_master_nodes
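
For example, one could update the setting at runtime with a call like the following sketch (substitute the actual service address; the value 3 assumes a 5-master cluster):

curl -XPUT http://10.100.243.196:9200/_cluster/settings -H 'Content-Type: application/json' -d '
{
  "persistent": {
    "discovery.zen.minimum_master_nodes": 3
  }
}'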

How can I customize elasticsearch.yaml?

Read a different config file by setting the env var ES_PATH_CONF=/path/to/my/config/ (see the Elasticsearch docs for more). Another option would be to build one's own image from this repository.
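
For instance, a hypothetical env entry for a pod descriptor; the path is an assumption and must point at a config directory mounted into the container (mount not shown):

- name: "ES_PATH_CONF"
  value: "/elasticsearch/config"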

Troubleshooting

No up-and-running site-local

One of the errors one may come across when running this setup is the following:

[2016-11-29T01:28:36,515][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: No up-and-running site-local (private) addresses found, got [name:lo (lo), name:eth0 (eth0)]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:116) ~[elasticsearch-5.0.1.jar:5.0.1]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:103) ~[elasticsearch-5.0.1.jar:5.0.1]
	at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) ~[elasticsearch-5.0.1.jar:5.0.1]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:96) ~[elasticsearch-5.0.1.jar:5.0.1]
	at org.elasticsearch.cli.Command.main(Command.java:62) ~[elasticsearch-5.0.1.jar:5.0.1]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80) ~[elasticsearch-5.0.1.jar:5.0.1]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:73) ~[elasticsearch-5.0.1.jar:5.0.1]
Caused by: java.lang.IllegalArgumentException: No up-and-running site-local (private) addresses found, got [name:lo (lo), name:eth0 (eth0)]
	at org.elasticsearch.common.network.NetworkUtils.getSiteLocalAddresses(NetworkUtils.java:187) ~[elasticsearch-5.0.1.jar:5.0.1]
	at org.elasticsearch.common.network.NetworkService.resolveInternal(NetworkService.java:246) ~[elasticsearch-5.0.1.jar:5.0.1]
 	at org.elasticsearch.common.network.NetworkService.resolveInetAddresses(NetworkService.java:220) ~[elasticsearch-5.0.1.jar:5.0.1]
 	at org.elasticsearch.common.network.NetworkService.resolveBindHostAddresses(NetworkService.java:130) ~[elasticsearch-5.0.1.jar:5.0.1]
 	at org.elasticsearch.transport.TcpTransport.bindServer(TcpTransport.java:575) ~[elasticsearch-5.0.1.jar:5.0.1]
 	at org.elasticsearch.transport.netty4.Netty4Transport.doStart(Netty4Transport.java:182) ~[?:?]
 	at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:68) ~[elasticsearch-5.0.1.jar:5.0.1]
 	at org.elasticsearch.transport.TransportService.doStart(TransportService.java:182) ~[elasticsearch-5.0.1.jar:5.0.1]
 	at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:68) ~[elasticsearch-5.0.1.jar:5.0.1]
 	at org.elasticsearch.node.Node.start(Node.java:525) ~[elasticsearch-5.0.1.jar:5.0.1]
 	at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:211) ~[elasticsearch-5.0.1.jar:5.0.1]
 	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:288) ~[elasticsearch-5.0.1.jar:5.0.1]
 	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:112) ~[elasticsearch-5.0.1.jar:5.0.1]
 	... 6 more
[2016-11-29T01:28:37,448][INFO ][o.e.n.Node               ] [kIEYQSE] stopping ...
[2016-11-29T01:28:37,451][INFO ][o.e.n.Node               ] [kIEYQSE] stopped
[2016-11-29T01:28:37,452][INFO ][o.e.n.Node               ] [kIEYQSE] closing ...
[2016-11-29T01:28:37,464][INFO ][o.e.n.Node               ] [kIEYQSE] closed

This is related to how the container binds to network ports (defaults to _local_). It will need to match the actual node network interface name, which depends on the OS and infrastructure provider one uses. For instance, if the primary interface on the node is p1p1, then that is the value that needs to be set for the NETWORK_HOST environment variable. Please see the documentation for a reference of the available options.

In order to work around this, set the NETWORK_HOST environment variable in the pod descriptors as follows:

- name: "NETWORK_HOST"
  value: "_eth0_" #_p1p1_ if interface name is p1p1, _ens4_ if interface name is ens4, and so on.

(IPv6) org.elasticsearch.bootstrap.StartupException: BindTransportException

Intermittent failures occur when the local network interface has both IPv4 and IPv6 addresses, and Elasticsearch tries to bind to the IPv6 address first. If the IPv4 address is chosen first, Elasticsearch starts correctly.

In order to work around this, set the NETWORK_HOST environment variable in the pod descriptors as follows:

- name: "NETWORK_HOST"
  value: "_eth0:ipv4_" #_p1p1:ipv4_ if interface name is p1p1, _ens4:ipv4_ if interface name is ens4, and so on.


kubernetes-elasticsearch-cluster's Issues

Hostname kubernetes.default.svc not verified in es-master node

Hi Pires,
I have created the secrets in the K8S environment and added the ca.crt to the kube-controller-manager start parameters, but an exception shows up during SSL verification.

Caused by: javax.net.ssl.SSLPeerUnverifiedException: Hostname kubernetes.default.svc not verified:
certificate: sha1/4xQd+1eSU89fBE1j3SolDM+61v8=
DN: CN=host-172-216-0-17
subjectAltNames: []
at com.squareup.okhttp.internal.io.RealConnection.connectTls(RealConnection.java:197)
at com.squareup.okhttp.internal.io.RealConnection.connectSocket(RealConnection.java:145)
at com.squareup.okhttp.internal.io.RealConnection.connect(RealConnection.java:108)
at com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:184)
at com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
at com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:281)
at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
at com.squareup.okhttp.Call.getResponse(Call.java:286)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$3.intercept(HttpClientUtils.java:110)
at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:232)

Would you please give me some tips on how to handle this problem? Thank you.

Pods go into CrashLoopBackOff status

Hey,

I'm trying to set up an ES cluster using this repo,

I made some tweaks though

  1. I activated data/client in the es-master deployment
  2. I activated client in the es-data deployment

I created 1 instance of master and 1 instance of data,

Both pods became Running and ES recognized both nodes in the cluster,

Once I tried to shut down one of the nodes, the whole thing broke,

This is what I see:

es-data-...     1/1       Running            6          17m
es-master-...   0/1       CrashLoopBackOff   6          1h

After a while, both become Running again:

es-data-...    1/1       Running   6          20m
es-master-...   1/1       Running   7          1h

Then, a few seconds later, the data pod crashes:

notebook:kub me$ kubectl get pods
NAME                         READY     STATUS             RESTARTS   AGE
es-data-...     0/1       CrashLoopBackOff   6          20m
es-master-...   1/1       Running            7          1h

The best error I can see when running kubectl describe pod es-data... is this one:

  19m   6s  30  {kubelet 172.17.8.102}  spec.containers{es-data}    Warning BackOff     Back-off restarting failed docker container
  48s   6s  4   {kubelet 172.17.8.102}                  Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "es-data" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=es-data pod=es-data-2599720219-u9axv_default(330fb913-5f22-11e6-be3c-080027e9ede8)"

Or on the master (seems to be the same):

  5m    10s 22  {kubelet 172.17.8.102}                  Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "es-master" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=es-master pod=es-master-2585506078-as9kp_default(7b7095c8-5f1b-11e6-be3c-080027e9ede8)"

First, I'm not sure if it's OK to make the master node also a client and a data node, plus make the data node a client as well. Should this work at all? I'd like to start with as small a cluster as possible and scale over time; should it work?

If so, any clue why it is acting that way?

Thanks,

Asaf

Add Sense UI?

Not entirely sure this makes sense (new to Kubernetes and somewhat new to ES), but it'd be cool if you could add the Sense plugin and use kubectl proxy to be able to live-query production data.

Thoughts?

SSL support

Have you considered securing communication between ES nodes? Search Guard is an open source alternative to Elastic's Shield product that would enable this.

How can I spin up multiple, independent elasticsearch instances on the same kubernetes cluster?

This is probably simple, but I've spun up a 5-node Kubernetes cluster by following the AWS tutorial, and was able to follow your instructions to create the elasticsearch service on the cluster.

So all of that is wonderful, but here's my question: can I create multiple elasticsearch instances (a set of master, data, and load balancers) that are independent of one another on the same cluster?

I'd like to have multiple, independent elasticsearch clusters running on the same 5-node (or whatever physical count) AWS kubernetes cluster. I have many different organizations and would like each to have a dedicated elasticsearch instance, and want all of it to run on the same kubernetes cluster. I'm sure this is feasible, but I'm curious if just re-running the kubectl create commands would suffice?

Hope that makes sense! Thank you so much for your help.

I wish I could use my own Docker image

Why can we not create our own Docker image for this?

"Providing your own version of the images automatically built from this repository will not be supported."

Thanks!
Drew

Status "yellow"

When I scale up to 2 client and 3 master pods and then do a curl on the cluster, I get status "yellow":

curl http://ip_address:9200/_cluster/health?pretty
{
  "cluster_name" : "myesdb",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 5,
  "active_shards" : 5,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 5,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}

Any idea what is going wrong?

Thanks

error when starting master

Caused by: java.net.SocketException: SocketException invoking http://10.100.0.1:443/api/v1/namespaces/default/endpoints/elasticsearch-discovery: Unexpected end of file from server
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.mapException(HTTPConduit.java:1364)
    at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1348)
    at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:56)
    at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:651)
    at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
    at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
    at org.apache.cxf.jaxrs.client.AbstractClient.doRunInterceptorChain(AbstractClient.java:624)
    at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:674)
    ... 10 more
Caused by: java.net.SocketException: Unexpected end of file from server
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:792)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:789)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1535)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
    at org.apache.cxf.transport.http.URLConnectionHTTPConduit$URLConnectionWrappedOutputStream.getResponseCode(URLConnectionHTTPConduit.java:275)
    at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:1563)
    at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:1533)
    at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1335)

discovery warning

Hi, pires.
I am running the cluster on AWS, but I get some errors:

[2016-09-13 06:51:41,980][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Answer] Exception caught during discovery: An error has occurred.
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
    at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:57)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:125)
    at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
    at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.readNodes(KubernetesUnicastHostsProvider.java:112)
    at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.lambda$buildDynamicNodes$0(KubernetesUnicastHostsProvider.java:80)
    at java.security.AccessController.doPrivileged(Native Method)
    at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:79)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.ping(UnicastZenPing.java:240)
    at org.elasticsearch.discovery.zen.ping.ZenPingService.ping(ZenPingService.java:106)
    at org.elasticsearch.discovery.zen.ping.ZenPingService.pingAndWait(ZenPingService.java:84)
    at org.elasticsearch.discovery.zen.ZenDiscovery.findMaster(ZenDiscovery.java:886)
    at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:350)
    at org.elasticsearch.discovery.zen.ZenDiscovery.access$4800(ZenDiscovery.java:91)
    at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1237)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
    at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
    at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
    at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
    at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
    at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
    at sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
    at sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
    at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
    at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
    at com.squareup.okhttp.internal.io.RealConnection.connectTls(RealConnection.java:188)
    at com.squareup.okhttp.internal.io.RealConnection.connectSocket(RealConnection.java:145)
    at com.squareup.okhttp.internal.io.RealConnection.connect(RealConnection.java:108)
    at com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:184)
    at com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
    at com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
    at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:281)
    at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
    at com.squareup.okhttp.Call.getResponse(Call.java:286)
    at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
    at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
    at com.squareup.okhttp.Call.execute(Call.java:80)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:210)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:205)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:510)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:118)
    ... 16 more
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
    at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
    at sun.security.validator.Validator.validate(Validator.java:260)
    at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
    at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
    at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
    at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1491)
    ... 39 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
    at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
    at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
    at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:382)
    ... 45 more
[2016-09-13 06:51:43,484][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Answer] Exception caught during discovery: An error has occurred.
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
    at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:57)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:125)
    at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
    at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.readNodes(KubernetesUnicastHostsProvider.java:112)
    at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.lambda$buildDynamicNodes$0(KubernetesUnicastHostsProvider.java:80)
    at java.security.AccessController.doPrivileged(Native Method)
    at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:79)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$2.doRun(UnicastZenPing.java:249)
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: kubernetes.default.svc: unknown error
    at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
    at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
    at java.net.InetAddress.getAllByName(InetAddress.java:1192)
    at java.net.InetAddress.getAllByName(InetAddress.java:1126)
    at com.squareup.okhttp.Dns$1.lookup(Dns.java:39)
    at com.squareup.okhttp.internal.http.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:175)
    at com.squareup.okhttp.internal.http.RouteSelector.nextProxy(RouteSelector.java:141)
    at com.squareup.okhttp.internal.http.RouteSelector.next(RouteSelector.java:83)
    at com.squareup.okhttp.internal.http.StreamAllocation.findConnection(StreamAllocation.java:174)
    at com.squareup.okhttp.internal.http.StreamAllocation.findHealthyConnection(StreamAllocation.java:126)
    at com.squareup.okhttp.internal.http.StreamAllocation.newStream(StreamAllocation.java:95)
    at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:281)
    at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:224)
    at com.squareup.okhttp.Call.getResponse(Call.java:286)
    at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:243)
    at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:205)
    at com.squareup.okhttp.Call.execute(Call.java:80)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:210)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:205)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:510)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:118)
    ... 11 more

io.fabric8.kubernetes.client.KubernetesClientException occurred under Kubernetes v1.3.0-alpha

[vagrant@localhost origin]$ kubectl --kubeconfig=kubeconfig version
Client Version: version.Info{Major:"1", Minor:"4+", GitVersion:"v1.4.0-alpha.0.181+77419c48fd7d7e", GitCommit:"77419c48fd7d7e674b1947325f33716bb3671fbe", GitTreeState:"clean", BuildDate:"2016-06-15T01:42:25Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3+", GitVersion:"v1.3.0-alpha.2.0+6a87dba0b8a50d", GitCommit:"6a87dba0b8a50dccaddb67a4c7748696db1918ec", GitTreeState:"clean", BuildDate:"2016-04-11T19:42:40Z", GoVersion:"go1.6", Compiler:"gc", Platform:"linux/amd64"}

[vagrant@localhost origin]$ kubectl --kubeconfig=kubeconfig logs es-master-6q2ny
[2016-06-15 04:57:14,245][INFO ][node ] [Black Dragon] version[2.3.3], pid[12], build[218bdf1/2016-05-17T15:40:04Z]
[2016-06-15 04:57:14,267][INFO ][node ] [Black Dragon] initializing ...
[2016-06-15 04:57:16,240][INFO ][plugins ] [Black Dragon] modules [reindex, lang-expression, lang-groovy], plugins [cloud-kubernetes], sites []
[2016-06-15 04:57:16,350][INFO ][env ] [Black Dragon] using [1] data paths, mounts [[/data (/dev/mapper/vg_vagrant-lv_root)]], net usable_space [18gb], net total_space [37.7gb], spins? [possibly], types [ext4]
[2016-06-15 04:57:16,353][INFO ][env ] [Black Dragon] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-06-15 04:57:25,015][INFO ][node ] [Black Dragon] initialized
[2016-06-15 04:57:25,018][INFO ][node ] [Black Dragon] starting ...
[2016-06-15 04:57:25,300][INFO ][transport ] [Black Dragon] publish_address {172.17.0.13:9300}, bound_addresses {172.17.0.13:9300}
[2016-06-15 04:57:25,322][INFO ][discovery ] [Black Dragon] myesdb/ccLWmZ6SRKOVyzEkPaHzKw
[2016-06-15 04:57:27,179][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Black Dragon] Exception caught during discovery: An error has occurred.
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:57)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:125)
at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.readNodes(KubernetesUnicastHostsProvider.java:112)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.lambda$buildDynamicNodes$0(KubernetesUnicastHostsProvider.java:80)
at java.security.AccessController.doPrivileged(Native Method)
at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:79)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.ping(UnicastZenPing.java:240)
at org.elasticsearch.discovery.zen.ping.ZenPingService.ping(ZenPingService.java:106)
at org.elasticsearch.discovery.zen.ping.ZenPingService.pingAndWait(ZenPingService.java:84)
at org.elasticsearch.discovery.zen.ZenDiscovery.findMaster(ZenDiscovery.java:886)
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:350)
at org.elasticsearch.discovery.zen.ZenDiscovery.access$4800(ZenDiscovery.java:91)
at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1237)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
...

Cannot redeploy cluster in k8s

Following the deployment of an app with 2GB of memory usage, ES cannot be resized or deployed in k8s.

Following error when spinning up ES cluster in k8s:

2015-04-25T00:05:01.856141447Z Caused by: com.fasterxml.jackson.databind.JsonMappingException: Numeric value (2147483648) out of range of int
2015-04-25T00:05:01.856141447Z  at [Source: sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@511e7433; line: 2062, column: 35] (through reference chain: io.fabric8.kubernetes.api.model.PodList["items"]->io.fabric8.kubernetes.api.model.Pod["desiredState"]->io.fabric8.kubernetes.api.model.PodState["manifest"]->io.fabric8.kubernetes.api.model.ContainerManifest["containers"]->io.fabric8.kubernetes.api.model.Container["memory"])
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:232)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:197)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.wrapAndThrow(BeanDeserializerBase.java:1415)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:244)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:232)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:206)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:25)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:242)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:242)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:242)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:232)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:206)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:25)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:242)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.ObjectReader._bind(ObjectReader.java:1232)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:676)
2015-04-25T00:05:01.856141447Z  at com.fasterxml.jackson.jaxrs.base.ProviderBase.readFrom(ProviderBase.java:800)
2015-04-25T00:05:01.856141447Z  at org.apache.cxf.jaxrs.utils.JAXRSUtils.readFromMessageBodyReader(JAXRSUtils.java:1322)
2015-04-25T00:05:01.856141447Z  at org.apache.cxf.jaxrs.impl.ResponseImpl.doReadEntity(ResponseImpl.java:369)
2015-04-25T00:05:01.856141447Z  ... 15 more

filed issue with fabric8io too: fabric8io/elasticsearch-cloud-kubernetes#9

kubectl logs errors

I have not built my own image, just copied/pasted the "Deploy" section of the readme.

kubectl version

Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}

Everything was OK until:
kubectl get svc,rc,pods

NAME                      CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
elasticsearch             10.0.0.143                 9200/TCP   20m
elasticsearch-discovery   10.0.0.27                  9300/TCP   20m
kubernetes                10.0.0.1                   443/TCP    11d
NAME        DESIRED   CURRENT   AGE
es-client   1         1         18m
es-data     1         1         18m
es-master   1         1         20m
NAME                   READY   STATUS             RESTARTS   AGE
es-client-yduhr        1/1     Running            0          18m
es-data-950tk          1/1     Running            0          18m
es-master-h0ble        1/1     Running            0          20m
k8s-etcd-127.0.0.1     0/1     CrashLoopBackOff   9          11d
k8s-master-127.0.0.1   4/4     Running            2          47m
k8s-proxy-127.0.0.1    1/1     Running            0          46m

But kubectl logs es-master-h0ble (output attached) shows some errors.

What could I do?

kubectl_logs.txt

Certificates error

Hi,

I'm not sure if it's a bug or not, but following the readme doc gives me an error:
[2015-09-29 15:50:57,570][ERROR][io.fabric8.kubernetes.api.KubernetesFactory] Specified CA certificate file /var/run/secrets/kubernetes.io/serviceaccount/ca.crt does not exist or is not readable

Followed by lots of:
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

What did I forget to get this certificate working?

Thanks for your help

Alex

Any stress tests?

I'm wondering whether you have run this in production with some serious data to stress test it?

I'm thinking about doing the same thing, but I'm worried about nodes going up and down and causing Elasticsearch to move shards around.

Kubernetes is designed for microservices which can die at any time, but for Elasticsearch clusters which hold a big amount of data, that would be disastrous.

Configurable Cluster Domain

In the /etc/systemd/system/kube-kubelet.service config we set the domain to --cluster_domain=vungle.local

I believe the default is cluster.local and this doesn't seem to be configurable?

serviceAccount should be serviceAccountName

I have started an Elasticsearch cluster on my Kubernetes with the files here, and everything works fine except the master/lb/data pods: the discovery plugin got an HTTP 401 Unauthorized error from the kube API server. The reason for that is the incorrect service account property in the pod. I'm not sure if it changed from a previous version of Kubernetes, but currently the correct name should be serviceAccountName in spec v1, not serviceAccount. (See the sketch below.)
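
For clarity, a minimal sketch of the corrected pod spec field; the service account name is an assumption:

spec:
  # serviceAccountName is the field recognized in spec v1; serviceAccount is not.
  serviceAccountName: elasticsearch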

ES 2.3.0

Hi there,

It's this time of the year again - ES 2.3.0 is out and it will be absolutely awesome to have all its goodness in k8s.

Thanks again for your work.

Deprecated API in Kubernetes v0.19.0

Hey, great work in this.

Kubernetes v0.19.0 removed the v1beta2 API which the discovery plugin uses (if used with selector).

[2015-06-13 15:00:58,776][WARN ][io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider] [Timeslip] Exception caught during discovery javax.ws.rs.ProcessingException : java.net.ConnectException: ConnectException invoking http://localhost:8080/api/v1beta2/pods?namespace: Connection refused
javax.ws.rs.ProcessingException: java.net.ConnectException: ConnectException invoking http://localhost:8080/api/v1beta2/pods?namespace: Connection refused
        at org.apache.cxf.jaxrs.client.AbstractClient.checkClientException(AbstractClient.java:557)
        at org.apache.cxf.jaxrs.client.AbstractClient.preProcessResult(AbstractClient.java:539)
        at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:676)
        at org.apache.cxf.jaxrs.client.ClientProxyImpl.invoke(ClientProxyImpl.java:224)
        at com.sun.proxy.$Proxy30.getPods(Unknown Source)
        at io.fabric8.kubernetes.api.KubernetesHelper.getFilteredPodMap(KubernetesHelper.java:446)
        at io.fabric8.kubernetes.api.KubernetesHelper.getSelectedPodMap(KubernetesHelper.java:438)
        at io.fabric8.kubernetes.api.KubernetesHelper.getSelectedPodMap(KubernetesHelper.java:433)
        at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.getNodesFromKubernetesSelector(K8sUnicastHostsProvider.java:122)
        at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.buildDynamicNodes(K8sUnicastHostsProvider.java:106)
        at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:313)
        at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$2.doRun(UnicastZenPing.java:228)
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: ConnectException invoking http://localhost:8080/api/v1beta2/pods?namespace: Connection refused
        at sun.reflect.GeneratedConstructorAccessor10.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.mapException(HTTPConduit.java:1364)
        at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1348)
        at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:56)
        at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:651)
        at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
        at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
        at org.apache.cxf.jaxrs.client.AbstractClient.doRunInterceptorChain(AbstractClient.java:624)
        at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:674)
        ... 13 more
Caused by: java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
        at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
        at sun.net.www.http.HttpClient.New(HttpClient.java:308)
        at sun.net.www.http.HttpClient.New(HttpClient.java:326)
        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1168)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1104)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:998)
        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:932)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1512)
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
        at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
        at org.apache.cxf.transport.http.URLConnectionHTTPConduit$URLConnectionWrappedOutputStream.getResponseCode(URLConnectionHTTPConduit.java:275)
        at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:1563)
        at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:1533)
        at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1335)
        ... 19 more

If I understand it correctly, it seems to be fixed upstream with this commit:
fabric8io/fabric8@f233163

And the following one for the plugin:
fabric8io/elasticsearch-cloud-kubernetes@65bbe5a

Any chance you could bump the elasticsearch-cloud-kubernetes plugin to 1.2.0?

Abusing image tags?

I noticed you are using tags to differentiate the parts of the Elasticsearch images (data, master, API).

Would it not be better to reserve the tags for the semver version info? Right now I don't have a choice as to which version of Elasticsearch I want to use.

ConfigMap example

I've just set this up with a ConfigMap so as to be able to add more settings than permitted in the default configuration.
Would you like a PR to add an example of this to the repo?

unable to find valid certification path to requested target es-master

I am getting this error when I start up my first master.

[2016-03-11 20:52:43,835][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Fight-Man] Exception caught during discovery: Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/elasticsearch-discovery. Cause: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

This is on Kube 1.0.3 and a CentOS 7 Cluster.

kubectl get serviceaccounts --namespace=default
NAME            SECRETS
default         1
elasticsearch   1

Please upgrade to ES 2.2.0

Hi,

Sorry for bugging you again, but ES 2.2.0 is out. It's required by Kibana 4.4 (freshly released as well), which brings cool features like custom chart colors.

Can you please update the dockers/yamls?

Thanks,
Zaar

ES Security

How would I password-protect access to the es-client?
I know that Elastic offers a plugin called "shield", but how would I proceed to get it running on the cluster?

Any help would be greatly appreciated.

Master cannot connect to api-server from default project

On other pods in my cluster I have no trouble with DNS:

nslookup kubernetes   
Server:     192.168.4.53
Address:    192.168.4.53#53

Name:   kubernetes.default.svc.cluster.local
Address: 192.168.4.1

However, when following the instructions from the readme, with the "vanilla" settings, from the master:

nslookup kubernetes
Server:    (null)
Address 1: ::1 localhost
Address 2: 127.0.0.1 localhost

nslookup: can't resolve 'kubernetes': Try again

Why is the DNS server null from the master container?

The resolv.conf is the same on both hosts.

nameserver 192.168.4.53
nameserver 10.5.6.5
search default.svc.cluster.local svc.cluster.local cluster.local prod.local comp.prod.local user.prod.local
options ndots:5

This is on CoreOS 835.1 with Kubernetes 1.0.6, and certificates generated with this guide:

https://coreos.com/kubernetes/docs/latest/openssl.html

[2016-01-15 01:32:11,745][WARN ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Patsy Walker] Exception caught during discovery: Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/elasticsearch-discovery. Cause: Hostname kubernetes.default.svc not verified:
    certificate: sha1/Uuk4Na4fbmdrEmsCoaUsax0RkJw=
    DN: CN=kube-apiserver
    subjectAltNames: [192.168.4.1, 10.51.29.211, kubernetes, kubernetes.default, densicoreos002.prod.local]
io.fabric8.kubernetes.client.KubernetesClientException: Error executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/elasticsearch-discovery. Cause: Hostname kubernetes.default.svc not verified:
    certificate: sha1/Uuk4Na4fbmdrEmsCoaUsax0RkJw=
    DN: CN=kube-apiserver
    subjectAltNames: [192.168.4.1, 10.51.29.211, kubernetes, kubernetes.default, densicoreos002.prod.local]
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestException(OperationSupport.java:245)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:182)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:173)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:472)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:105)
    at io.fabric8.elasticsearch.cloud.kubernetes.KubernetesAPIServiceImpl.endpoints(KubernetesAPIServiceImpl.java:35)
    at io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider.buildDynamicNodes(KubernetesUnicastHostsProvider.java:99)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:335)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.ping(UnicastZenPing.java:240)
    at org.elasticsearch.discovery.zen.ping.ZenPingService.ping(ZenPingService.java:106)
    at org.elasticsearch.discovery.zen.ping.ZenPingService.pingAndWait(ZenPingService.java:84)
    at org.elasticsearch.discovery.zen.ZenDiscovery.findMaster(ZenDiscovery.java:879)
    at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:335)
    at org.elasticsearch.discovery.zen.ZenDiscovery.access$5000(ZenDiscovery.java:75)
    at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1236)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLPeerUnverifiedException: Hostname kubernetes.default.svc not verified:
    certificate: sha1/Uuk4Na4fbmdrEmsCoaUsax0RkJw=
    DN: CN=kube-apiserver
    subjectAltNames: [192.168.4.1, 10.51.29.211, kubernetes, kubernetes.default, densicoreos002.prod.local]
    at com.squareup.okhttp.Connection.connectTls(Connection.java:244)
    at com.squareup.okhttp.Connection.connectSocket(Connection.java:199)
    at com.squareup.okhttp.Connection.connect(Connection.java:172)
    at com.squareup.okhttp.Connection.connectAndSetOwner(Connection.java:367)
    at com.squareup.okhttp.OkHttpClient$1.connectAndSetOwner(OkHttpClient.java:128)
    at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:328)
    at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:245)
    at com.squareup.okhttp.Call.getResponse(Call.java:267)
    at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:224)
    at io.fabric8.kubernetes.client.utils.HttpClientUtils$3.intercept(HttpClientUtils.java:107)
    at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:221)
    at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:195)
    at com.squareup.okhttp.Call.execute(Call.java:79)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:180)
    ... 16 more
(the same exception, with an identical stack trace, is logged again on each subsequent discovery attempt)

Please upgrade to ES 2.2.1

Hello again.

ES 2.2.1 is out. Can you please upgrade the images/yamls?

P.S. I'm very grateful that you are maintaining this. Thanks again for your work!

How to prevent ES from being exposed to the internet?

More of a question than an issue: es-svc.yaml defines a LoadBalancer service, which on Google Cloud automatically gets assigned a public address and is opened to incoming traffic from the internet. Is there a way to avoid exposing it to the internet?
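One option is to change the service type from LoadBalancer to ClusterIP, so no external address is allocated and the service is only reachable from inside the cluster. A minimal sketch (labels and selector mirror the repo's conventions, but verify against your copy of es-svc.yaml):

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
spec:
  type: ClusterIP
  selector:
    component: elasticsearch
  ports:
  - name: http
    port: 9200
    protocol: TCP

Anything outside the cluster would then have to go through an ingress, a bastion host or kubectl port-forward.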

Error with data node

I am seeing this error in the logs:

2015-02-03T15:44:28.211477409Z [2015-02-03 15:44:28,211][WARN ][io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider] [Stallior] Exception caught during discovery javax.ws.rs.ProcessingException : java.net.SocketException: SocketException invoking http://10.23.243.201:443/api/v1beta1/pods: Connection reset
2015-02-03T15:44:29.784310388Z [2015-02-03 15:44:29,783][WARN ][org.apache.cxf.phase.PhaseInterceptorChain] Interceptor for {http://10.23.243.201:443}WebClient has thrown exception, unwinding now
2015-02-03T15:44:29.784310388Z org.apache.cxf.interceptor.Fault: Could not send Message.
2015-02-03T15:44:29.784310388Z  at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:64)
2015-02-03T15:44:29.784310388Z  at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
2015-02-03T15:44:29.784310388Z  at org.apache.cxf.jaxrs.client.AbstractClient.doRunInterceptorChain(AbstractClient.java:619)
2015-02-03T15:44:29.784310388Z  at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:674)
2015-02-03T15:44:29.784310388Z  at org.apache.cxf.jaxrs.client.ClientProxyImpl.invoke(ClientProxyImpl.java:224)
2015-02-03T15:44:29.784310388Z  at com.sun.proxy.$Proxy28.getPods(Unknown Source)
2015-02-03T15:44:29.784310388Z  at io.fabric8.kubernetes.api.KubernetesHelper.getPodMap(KubernetesHelper.java:310)
2015-02-03T15:44:29.784310388Z  at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.buildDynamicNodes(K8sUnicastHostsProvider.java:101)
2015-02-03T15:44:29.784310388Z  at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:316)
2015-02-03T15:44:29.784310388Z  at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$2.run(UnicastZenPing.java:234)
2015-02-03T15:44:29.784310388Z  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2015-02-03T15:44:29.784310388Z  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2015-02-03T15:44:29.784310388Z  at java.lang.Thread.run(Thread.java:745)
2015-02-03T15:44:29.784310388Z Caused by: java.net.SocketException: SocketException invoking http://10.23.243.201:443/api/v1beta1/pods: Connection reset
2015-02-03T15:44:29.784310388Z  at sun.reflect.GeneratedConstructorAccessor9.newInstance(Unknown Source)
2015-02-03T15:44:29.784310388Z  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
2015-02-03T15:44:29.784310388Z  at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
2015-02-03T15:44:29.784310388Z  at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.mapException(HTTPConduit.java:1359)
2015-02-03T15:44:29.784310388Z  at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1343)
2015-02-03T15:44:29.784310388Z  at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:56)
2015-02-03T15:44:29.784310388Z  at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:638)
2015-02-03T15:44:29.784310388Z  at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
2015-02-03T15:44:29.784310388Z  ... 12 more
2015-02-03T15:44:29.784310388Z Caused by: java.net.SocketException: Connection reset
2015-02-03T15:44:29.784310388Z  at java.net.SocketInputStream.read(SocketInputStream.java:189)
2015-02-03T15:44:29.784310388Z  at java.net.SocketInputStream.read(SocketInputStream.java:121)
2015-02-03T15:44:29.784310388Z  at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
2015-02-03T15:44:29.784310388Z  at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
2015-02-03T15:44:29.784310388Z  at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
2015-02-03T15:44:29.784310388Z  at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:703)
2015-02-03T15:44:29.784310388Z  at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
2015-02-03T15:44:29.784310388Z  at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:674)
2015-02-03T15:44:29.784310388Z  at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1535)
2015-02-03T15:44:29.784310388Z  at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
2015-02-03T15:44:29.784310388Z  at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
2015-02-03T15:44:29.784310388Z  at org.apache.cxf.transport.http.URLConnectionHTTPConduit$URLConnectionWrappedOutputStream.getResponseCode(URLConnectionHTTPConduit.java:266)
2015-02-03T15:44:29.784310388Z  at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:1557)
2015-02-03T15:44:29.784310388Z  at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:1527)
2015-02-03T15:44:29.784310388Z  at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1330)
2015-02-03T15:44:29.784310388Z  ... 15 more
2015-02-03T15:44:29.784310388Z [2015-02-03 15:44:29,784][WARN ][io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider] [Stallior] Exception caught during discovery javax.ws.rs.ProcessingException : java.net.SocketException: SocketException invoking http://10.23.243.201:443/api/v1beta1/pods: Connection reset

Given this Pod that I launched (a variation on yours that uses a PD):

id: elasticsearch-data
kind: Pod
apiVersion: v1beta1
labels:
  component: elasticsearch
  role: data
desiredState:
  manifest:
    version: v1beta1
    id: elasticsearch-data
    volumes:
      - name: es-persistent-storage
        source:
          persistentDisk:
            pdName: elasticsearch-data
            fsType: ext4
    containers:
      - name: elasticsearch-data
        image: pires/elasticsearch:data
        ports:
          - name: transport
            containerPort: 9300
        volumeMounts:
          - name: es-persistent-storage
            mountPath: /data

How to manage scale out storage-backed data nodes

Hi!
Great work on this cluster, as well as supporting the latest versions. Thanks :)

I am trying to get this to run in a nice, Kubernetes-supported way, at scale, with a storage backend such as Flocker.
Note: I tried with EBS-backed volumes, but k8s fails to free the resources in time, so the volume doesn't follow the pod as needed and it fails.

So for that, I first need to create as many Flocker volumes as I initially want, say es-data-{1, 2, 3}.

Then I'd replace the emptyDir with a Flocker volume and spin up the first one. Alright, I get my first data node up & running.

But now, when I need to scale, I can't use the same RC definition, so I need a second one with a different selector (at least I think so).
Therefore, I slightly modified the template you provide into the example below, and I create one RC per data node, with an "id" selector.

apiVersion: v1
kind: ReplicationController
metadata:
  name: es-data-VOLUME_ID
  labels:
    component: elasticsearch
    role: data
    id: "VOLUME_ID"
spec:
  replicas: 1
  selector:
    component: elasticsearch
    role: data
    id: "VOLUME_ID"
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
        id: "VOLUME_ID"
    spec:
      serviceAccount: elasticsearch
      containers:
      - name: es-data
        securityContext:
          privileged: true
          capabilities:
            add:
              - IPC_LOCK
        image: quay.io/pires/docker-elasticsearch-kubernetes:2.3.2
        imagePullPolicy: Always
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "CLUSTER_NAME"
          value: "myesdb"
        - name: NODE_MASTER
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: storage
      volumes:
        - name: "storage"
          flocker:
            datasetName: es-data-VOLUME_ID

Then I deploy 1 RC per data node.

This sort of works, but it is somewhat cumbersome. Without additional logic I can't autoscale, I have too many RCs, and I can't update easily... Not what I would expect.

Also, I find it very unstable. I get a LOT of failures with the log below. This is not linked to pods being collocated, or anything else. Even a DaemonSet would die like that.

$ kubectl logs es-data-2-e7dfl
$ kubectl logs es-data-2-7029y
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
time="2016-06-13T08:01:10Z" level=info msg="Starting go-dnsmasq server 1.0.5" 
time="2016-06-13T08:01:10Z" level=info msg="Upstream nameservers: [10.3.0.10:53]" 
time="2016-06-13T08:01:10Z" level=info msg="Search domains: [default.svc.cluster.local. svc.cluster.local. cluster.local. us-west-2.compute.internal.]" 
time="2016-06-13T08:01:10Z" level=info msg="Ready for queries on tcp://127.0.0.1:53" 
time="2016-06-13T08:01:10Z" level=info msg="Ready for queries on udp://127.0.0.1:53" 
[services.d] done.
[2016-06-13 08:01:22,603][INFO ][node                     ] [Spyne] version[2.3.2], pid[172], build[b9e4a6a/2016-04-21T16:03:47Z]
[2016-06-13 08:01:22,619][INFO ][node                     ] [Spyne] initializing ...
[2016-06-13 08:01:23,833][INFO ][plugins                  ] [Spyne] modules [reindex, lang-expression, lang-groovy], plugins [cloud-kubernetes], sites []
[2016-06-13 08:01:23,929][INFO ][env                      ] [Spyne] using [1] data paths, mounts [[/data (/dev/xvdf)]], net usable_space [27.8gb], net total_space [29.4gb], spins? [no], types [ext4]
[2016-06-13 08:01:23,930][INFO ][env                      ] [Spyne] heap size [1015.6mb], compressed ordinary object pointers [true]
[2016-06-13 08:01:30,546][INFO ][node                     ] [Spyne] initialized
[2016-06-13 08:01:30,549][INFO ][node                     ] [Spyne] starting ...
[2016-06-13 08:01:30,849][INFO ][transport                ] [Spyne] publish_address {10.2.5.4:9300}, bound_addresses {10.2.5.4:9300}
[2016-06-13 08:01:30,885][INFO ][discovery                ] [Spyne] myesdb/RYjiHf5WS32Myz_e8cYDGg
[2016-06-13 08:01:40,308][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.24.4, transport_address 10.2.24.4:9300
[2016-06-13 08:01:40,314][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.45.4, transport_address 10.2.45.4:9300
[2016-06-13 08:01:40,315][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.5.2, transport_address 10.2.5.2:9300
[2016-06-13 08:01:42,637][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.24.4, transport_address 10.2.24.4:9300
[2016-06-13 08:01:42,640][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.45.4, transport_address 10.2.45.4:9300
[2016-06-13 08:01:42,640][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.5.2, transport_address 10.2.5.2:9300
[2016-06-13 08:01:44,159][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.24.4, transport_address 10.2.24.4:9300
[2016-06-13 08:01:44,160][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.45.4, transport_address 10.2.45.4:9300
[2016-06-13 08:01:44,160][INFO ][io.fabric8.elasticsearch.discovery.kubernetes.KubernetesUnicastHostsProvider] [Spyne] adding endpoint /10.2.5.2, transport_address 10.2.5.2:9300
[2016-06-13 08:01:44,364][INFO ][cluster.service          ] [Spyne] detected_master {Louise Mason}{fTgSKXb6T22QeEmsXHtX9Q}{10.2.45.4}{10.2.45.4:9300}{data=false, master=true}, added {{Angelo Unuscione}{2Y5T8qLgTKakOrHDAczbgw}{10.2.5.2}{10.2.5.2:9300}{data=false, master=true},{Louise Mason}{fTgSKXb6T22QeEmsXHtX9Q}{10.2.45.4}{10.2.45.4:9300}{data=false, master=true},{Anomaloco}{mSR9pYPiSkSI0hjMPjGClA}{10.2.24.3}{10.2.24.3:9300}{master=false},{Shockwave}{pGeRMjjnTuKnCZarHpiPpQ}{10.2.24.4}{10.2.24.4:9300}{data=false, master=true},{Prime Mover}{X6TIn4HFQmS_dPHTLEIThQ}{10.2.24.5}{10.2.24.5:9300}{data=false, master=false},}, reason: zen-disco-receive(from master [{Louise Mason}{fTgSKXb6T22QeEmsXHtX9Q}{10.2.45.4}{10.2.45.4:9300}{data=false, master=true}])
[2016-06-13 08:01:44,763][INFO ][node                     ] [Spyne] started
Killed
/run.sh exited 137
time="2016-06-13T08:01:59Z" level=info msg="Application exit requested by signal: terminated" 
time="2016-06-13T08:01:59Z" level=info msg="Restoring /etc/resolv.conf" 
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

Any idea of a best practice for this? How do you operate at scale?

Many thanks,
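One way out of the one-RC-per-volume pattern is a StatefulSet with volumeClaimTemplates, which provisions one PersistentVolumeClaim per replica from a single controller. A minimal sketch, assuming a recent-enough Kubernetes and a storage class capable of dynamic provisioning (names, image tag and sizes are illustrative):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-data
spec:
  serviceName: elasticsearch-data   # headless service that must already exist
  replicas: 3
  selector:
    matchLabels:
      component: elasticsearch
      role: data
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
    spec:
      containers:
      - name: es-data
        image: quay.io/pires/docker-elasticsearch-kubernetes:2.3.2
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: storage
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 30Gi

Each replica (es-data-0, es-data-1, ...) gets its own claim, and the claim follows the pod name across reschedules.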

How would we use persistentDisk?

The Kubernetes volumes docs have information about the currently supported source types:

https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md#gcepersistentdisk

What we want is for the Elasticsearch cluster to have a data node with lots of disk space. We need a persistent disk for this, and the only source type supported right now is persistentDisk. It currently comes with a limitation, though:

- avoid creating multiple pods that use the same Volume;
- if multiple pods refer to the same Volume and both are scheduled on the same machine, regardless of whether they are read-only or read-write, then the second pod scheduled will fail;
- replication controllers can only be created for pods that use read-only mounts.

This means that your pattern of a data node behind a replicationController would fail because by design it tries to replicate multiple pods sharing a disk. That works with the current emptyDir setup but that same setup fails the use-case of needing a persistent and large disk.

My suggestion right now is to limit the replicationController replica count to 1 and switch over to use a GCE PD.
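Concretely, that suggestion might look like the fragment below (the disk must be created beforehand, e.g. with gcloud, and its name here is hypothetical):

# es-data.yaml (fragment)
spec:
  replicas: 1   # a GCE PD can be mounted read-write by at most one pod
  template:
    spec:
      containers:
      - name: es-data
        # ... rest of the container spec as in es-data.yaml ...
        volumeMounts:
        - name: storage
          mountPath: /data
      volumes:
      - name: storage
        gcePersistentDisk:
          pdName: elasticsearch-data
          fsType: ext4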

Rolling update in data node

I'm using a data node with gcePersistentDisk as storage.
I can rolling-update es-master and es-client without problems, but kubectl gets stuck when rolling-updating es-data.
Can you tell me how to do a rolling update on the data nodes?

Thanks

Killing machine doesn't migrate the pod

I'm running your Elasticsearch cluster project on top of your Kubernetes CoreOS Vagrant project.

First of all, thanks for both of these, they work great and have saved me a lot of work.

However, it looks like if I deploy the elasticsearch cluster on top of 1 master and 4 minions, and then kill one of the machines running an elasticsearch node, kubernetes does not migrate the pod to another machine. In fact, it never even seems to notice that the machine is dead. If I run kubectl get pods it still shows the pod as "running", even though the machine is gone.

I've tried updating to kubernetes 0.12.1, and tried killing different elasticsearch machines, and it doesn't seem to make a difference.

Also, if I just kill the docker container on the machine, but leave the machine up, kubernetes will notice and move the pod as expected.

Any insight into why this is happening? Am I missing something?

Or should I take this up with the kubernetes team?

HTTPS hostname wrong?

es-master logs

[2016-09-14 08:28:18,410][WARN ][org.apache.cxf.phase.PhaseInterceptorChain] Interceptor for {https://10.254.0.1:443}WebClient has thrown exception, unwinding now
org.apache.cxf.interceptor.Fault: Could not send Message.
        at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:64)
        at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:307)
        at org.apache.cxf.jaxrs.client.AbstractClient.doRunInterceptorChain(AbstractClient.java:624)
        at org.apache.cxf.jaxrs.client.ClientProxyImpl.doChainedInvocation(ClientProxyImpl.java:674)
        at org.apache.cxf.jaxrs.client.ClientProxyImpl.invoke(ClientProxyImpl.java:224)
        at com.sun.proxy.$Proxy29.endpointsForService(Unknown Source)
        at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.getNodesFromKubernetesSelector(K8sUnicastHostsProvider.java:123)
        at io.fabric8.elasticsearch.discovery.k8s.K8sUnicastHostsProvider.buildDynamicNodes(K8sUnicastHostsProvider.java:106)
        at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:313)
        at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.ping(UnicastZenPing.java:219)
        at org.elasticsearch.discovery.zen.ping.ZenPingService.ping(ZenPingService.java:146)
        at org.elasticsearch.discovery.zen.ping.ZenPingService.pingAndWait(ZenPingService.java:124)
        at org.elasticsearch.discovery.zen.ZenDiscovery.findMaster(ZenDiscovery.java:1007)
        at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:361)
        at org.elasticsearch.discovery.zen.ZenDiscovery.access$6100(ZenDiscovery.java:86)
        at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1384)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: IOException invoking https://10.254.0.1:443/api/v1/namespaces/default/endpoints/elasticsearch-discovery: HTTPS hostname wrong: should be <10.254.0.1>

container won't work on kubernetes 1.3.5 (docker 1.11.2 build b9f10c9)

Heya all,

I'm trying to run ES master on my k8s cluster (hosted on AWS) and getting an error:

admin@ip-172-20-55-229:~$ kubectl logs es-master-2201531720-neui0
[2016-09-03 05:46:00,466][INFO ][node                     ] [Darkstar] version[2.3.4], pid[11], build[e455fd0/2016-06-30T11:24:31Z]
[2016-09-03 05:46:00,467][INFO ][node                     ] [Darkstar] initializing ...
[2016-09-03 05:46:01,235][INFO ][plugins                  ] [Darkstar] modules [reindex, lang-expression, lang-groovy], plugins [cloud-kubernetes], sites []
[2016-09-03 05:46:01,267][INFO ][env                      ] [Darkstar] using [1] data paths, mounts [[/data (/dev/xvdbe)]], net usable_space [46.5gb], net total_space [49gb], spins? [no], types [ext4]
[2016-09-03 05:46:01,267][INFO ][env                      ] [Darkstar] heap size [247.5mb], compressed ordinary object pointers [true]
[2016-09-03 05:46:03,890][INFO ][node                     ] [Darkstar] initialized
[2016-09-03 05:46:03,898][INFO ][node                     ] [Darkstar] starting ...
Exception in thread "main" java.lang.IllegalArgumentException: No up-and-running site-local (private) addresses found, got [name:lo (lo), name:eth0 (eth0)]
    at org.elasticsearch.common.network.NetworkUtils.getSiteLocalAddresses(NetworkUtils.java:186)
    at org.elasticsearch.common.network.NetworkService.resolveInternal(NetworkService.java:233)
    at org.elasticsearch.common.network.NetworkService.resolveInetAddresses(NetworkService.java:209)
    at org.elasticsearch.common.network.NetworkService.resolveBindHostAddresses(NetworkService.java:122)
    at org.elasticsearch.transport.netty.NettyTransport.bindServerBootstrap(NettyTransport.java:424)
    at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:321)
    at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:68)
    at org.elasticsearch.transport.TransportService.doStart(TransportService.java:182)
    at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:68)
    at org.elasticsearch.node.Node.start(Node.java:278)
    at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:206)
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:272)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
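The exception means Elasticsearch resolved its bind address via the site-local (_site_) special value and found no private address on lo/eth0, which can happen on AWS when the pod CIDR falls outside the RFC 1918 ranges. Assuming the image exposes a NETWORK_HOST environment variable mapped onto network.host (check the image's run script), a workaround sketch for the pod template is:

        env:
        - name: NETWORK_HOST
          value: "_eth0:ipv4_"   # or "0.0.0.0"; bind to the pod's interface instead of _site_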
