confluentinc / cp-helm-charts

The Confluent Platform Helm charts enable you to deploy Confluent Platform services on Kubernetes for development, test, and proof of concept environments.

Home Page: https://cnfl.io/getting-started-kafka-kubernetes

License: Apache License 2.0

kubernetes statefulsets zookeeper apache-kafka helm-charts helm kafka-connect kafka-rest-proxy ksql ksql-server

cp-helm-charts's Introduction

Confluent Platform Helm Charts [Deprecated]

Deprecated: cp-helm-charts is deprecated in favor of Confluent for Kubernetes (https://docs.confluent.io/operator/current/overview.html).

You can use the Helm charts to deploy services on Kubernetes for development, test, and proof of concept environments.

Caution
Open Source Helm charts are not supported by Confluent.

If you want to use Confluent Platform on Kubernetes in a test or production environment, follow these instructions to install Confluent Operator.

The Confluent Platform Helm Charts enable you to deploy Confluent Platform components on Kubernetes for development, test, and proof of concept environments.

Installation

Installing the Helm chart
helm repo add confluentinc https://confluentinc.github.io/cp-helm-charts/   #(1)
helm repo update    #(2)
helm install confluentinc/cp-helm-charts --name my-confluent --version 0.6.0    #(3)
  1. Add the confluentinc Helm charts repo

  2. Update repo information

  3. Install Confluent Platform with release name my-confluent and version 0.6.0

Contributing

We welcome any contributions:

Note
This is not an officially supported repo, hence support is on a "best effort" basis.
  • Report all enhancements, bugs, and tasks as GitHub issues

  • Provide fixes or enhancements by opening pull requests in GitHub

Documentation

Helm is an open-source packaging tool that helps you install applications and services on Kubernetes. Helm uses a packaging format called charts. Charts are a collection of YAML templates that describe a related set of Kubernetes resources.
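For orientation, a chart's on-disk layout follows Helm's standard conventions; a sketch for one of the sub-charts here (file roles annotated, contents abbreviated):

cp-kafka/
  Chart.yaml          # chart metadata: name, version, description
  values.yaml         # default configuration values, overridable at install time
  templates/          # templates rendered with values into Kubernetes YAML
    statefulset.yaml
    service.yaml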

This repository provides Helm charts for the following Confluent Platform services:

  • ZooKeeper

  • Kafka brokers

  • Kafka Connect

  • Confluent Schema Registry

  • Confluent REST Proxy

  • ksqlDB

  • Confluent Control Center

Environment Preparation

You must have a Kubernetes cluster that has Helm configured.

Tested Software

These Helm charts have been tested with the following software versions:

Warning
This guide assumes that you're using Helm 2 (tested with Helm 2.16). You can follow up on Helm 3 issues in #480.

For local Kubernetes installation with Minikube, see Install Minikube and Drivers.

Install Helm on Kubernetes

Follow the directions to install and deploy Helm to the Kubernetes cluster.

View a list of all deployed releases in the local installation.

helm init
helm repo update
helm list
Important
For Helm versions prior to 2.9.1, you may see "connect: connection refused", and will need to fix up the deployment before proceeding.
kubectl delete --namespace kube-system svc tiller-deploy
kubectl delete --namespace kube-system deploy tiller-deploy
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade

Persistence

The ZooKeeper and Kafka clusters are deployed with StatefulSets that have a volumeClaimTemplate, which provides a persistent volume for each replica. You can define the size of the volumes by changing dataDirSize and dataLogDirSize under cp-zookeeper, and size under cp-kafka, in values.yaml.

You can also use your cloud provider's volumes by specifying a StorageClass. For example, if you are on AWS, your storage class will look like this:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
Note
To adapt this example to your needs, read the Kubernetes StorageClass documentation.

The StorageClass that was created can be specified in dataLogDirStorageClass and dataDirStorageClass under cp-zookeeper, and in storageClass under cp-kafka, in values.yaml.

To deploy non-persistent Kafka and ZooKeeper clusters, set persistence.enabled to false under cp-kafka and cp-zookeeper in values.yaml.

Warning
These types of clusters are suitable strictly for development and testing purposes. The StatefulSets will use emptyDir volumes, which means their contents are tied to the pod life cycle and are deleted when the pod goes down.
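Putting these options together, a values.yaml excerpt for persistent clusters might look like the following (a sketch: the sizes and the ssd class name are illustrative, and the keys mirror the options described above):

cp-zookeeper:
  persistence:
    enabled: true
    dataDirSize: 10Gi
    dataDirStorageClass: ssd
    dataLogDirSize: 10Gi
    dataLogDirStorageClass: ssd
cp-kafka:
  persistence:
    enabled: true
    size: 50Gi
    storageClass: ssd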

Install Confluent Platform Charts

Add the Confluent Helm chart repo:

> helm repo add confluentinc https://confluentinc.github.io/cp-helm-charts/
"confluentinc" has been added to your repositories

> helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "confluentinc" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!

Install a 3-node ZooKeeper ensemble, a Kafka cluster of 3 brokers, 1 Confluent Schema Registry instance, 1 REST Proxy instance, 1 Kafka Connect worker, and 1 ksqlDB server in your Kubernetes environment.

Note
Naming the release --name my-confluent-oss is optional, but we assume this is the name in the remainder of the documentation. Otherwise, Helm will generate a release name.
helm install confluentinc/cp-helm-charts --name my-confluent-oss

If you want to install without the Confluent Schema Registry instance, the REST Proxy instance, and the Kafka Connect worker:

helm install --set cp-schema-registry.enabled=false,cp-kafka-rest.enabled=false,cp-kafka-connect.enabled=false confluentinc/cp-helm-charts

View the installed Helm releases:

helm list
NAME                REVISION    UPDATED                     STATUS      CHART                   NAMESPACE
my-confluent-oss    1           Tue Jun 12 16:56:39 2018    DEPLOYED    cp-helm-charts-0.1.0    default

Verify Installation

Using Helm

Note
This step is optional
Run the embedded test pod in each sub-chart to verify installation
helm test my-confluent-oss

Verify Kafka cluster

Note
This step is optional - to verify that Kafka is working as expected, connect to one of the Kafka pods and produce some messages to a Kafka topic.
List your pods and wait until they are all in Running state.
kubectl get pods
Connect to the container cp-kafka-broker in a Kafka broker pod to produce messages to a Kafka topic.

If you specified a different release name, substitute my-confluent-oss with whatever you named your release.

kubectl exec -c cp-kafka-broker -it my-confluent-oss-cp-kafka-0 -- /bin/bash /usr/bin/kafka-console-producer --broker-list localhost:9092 --topic test

Wait for a > prompt, and enter some text.

m1
m2

Press Ctrl+C to close the producer session.

Consume the messages from the same Kafka topic as above.

kubectl exec -c cp-kafka-broker -it my-confluent-oss-cp-kafka-0 -- /bin/bash  /usr/bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic test --from-beginning

You should see the messages which were published from the console producer. Press Ctrl+C to stop consuming.

Manual Test

ZooKeeper
git clone https://github.com/confluentinc/cp-helm-charts.git        #(1)
kubectl apply -f cp-helm-charts/examples/zookeeper-client.yaml      #(2)
...
kubectl exec -it zookeeper-client -- /bin/bash zookeeper-shell <zookeeper service>:<port> ls /brokers/ids       #(3)
kubectl exec -it zookeeper-client -- /bin/bash zookeeper-shell <zookeeper service>:<port> get /brokers/ids/0
kubectl exec -it zookeeper-client -- /bin/bash zookeeper-shell <zookeeper service>:<port> ls /brokers/topics    #(4)
  1. Clone the Helm Charts git repository

  2. Deploy a client pod.

  3. Connect to the client pod and use the zookeeper-shell command to explore brokers...

  4. ...topics, etc.

Kafka
Validate Kafka installation
kubectl apply -f cp-helm-charts/examples/kafka-client.yaml #(1)
kubectl exec -it kafka-client -- /bin/bash      #(2)
  1. Deploy a Kafka client pod.

  2. Log into the Pod

From within the kafka-client pod, explore with kafka commands:
## Setup
export RELEASE_NAME=<release name>
export ZOOKEEPERS=${RELEASE_NAME}-cp-zookeeper:2181
export KAFKAS=${RELEASE_NAME}-cp-kafka-headless:9092

## Create Topic
kafka-topics --zookeeper $ZOOKEEPERS --create --topic test-rep-one --partitions 6 --replication-factor 1

## Producer
kafka-run-class org.apache.kafka.tools.ProducerPerformance --print-metrics --topic test-rep-one --num-records 6000000 --throughput 100000 --record-size 100 --producer-props bootstrap.servers=$KAFKAS buffer.memory=67108864 batch.size=8196

## Consumer
kafka-consumer-perf-test --broker-list $KAFKAS --messages 6000000 --threads 1 --topic test-rep-one --print-metrics

Run A Streams Application

ksqlDB is the streaming SQL engine that enables real-time data processing against Apache Kafka. Now that you have it running in your Kubernetes cluster, you can run a ksqlDB example.
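As a minimal sketch (assuming the release name my-confluent-oss, the chart's default service naming, and ksqlDB's default port 8088; the image tag is illustrative), you can start an interactive ksql CLI pod against the deployed server:

kubectl run ksql-cli --rm -i --tty --image confluentinc/cp-ksql-cli:5.4.1 -- \
  http://my-confluent-oss-cp-ksql-server:8088

From the ksql> prompt, a statement such as SHOW TOPICS; confirms connectivity to the cluster.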

Operations

Scaling Zookeeper

Tip
All scaling operations should be done offline with no producer or consumer connections. The number of nodes should always be an odd number.

Install cp-helm-charts with the default 3-node ensemble:

helm install cp-helm-charts

To scale up to 5 nodes, change servers under cp-zookeeper to 5 in values.yaml, then upgrade:

helm upgrade <release name> cp-helm-charts

To scale back down to 3 nodes, change servers under cp-zookeeper to 3 in values.yaml, then upgrade:

helm upgrade <release name> cp-helm-charts
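Instead of editing values.yaml, the same override can be passed inline (a sketch using the servers key described above):

helm upgrade <release name> cp-helm-charts --set cp-zookeeper.servers=5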

Scaling Kafka

Important
Scaling Kafka brokers without doing Partition Reassignment will cause data loss. You must reassign partitions correctly before scaling the Kafka cluster.
Install cp-helm-charts with the default 3-broker Kafka cluster:
helm install cp-helm-charts

To scale up to 5 brokers, change brokers under cp-kafka to 5 in values.yaml, then upgrade:

helm upgrade <release name> cp-helm-charts

To scale back down to 3 brokers, change brokers under cp-kafka to 3 in values.yaml, then upgrade:

helm upgrade <release name> cp-helm-charts
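As with ZooKeeper, the broker count can also be overridden inline instead of editing values.yaml (sketch):

helm upgrade <release name> cp-helm-charts --set cp-kafka.brokers=5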

Monitoring

JMX metrics are enabled by default for all components; the Prometheus JMX Exporter is installed as a sidecar container in each pod.

  1. Install Prometheus and Grafana in the same Kubernetes cluster using Helm:

    helm install stable/prometheus
    helm install stable/grafana
  2. Add Prometheus as a Data Source in Grafana; the URL should be something like: http://illmannered-marmot-prometheus-server:9090

  3. Import the dashboards under grafana-dashboard into Grafana: Kafka Dashboard and ZooKeeper Dashboard.
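To spot-check that an exporter sidecar is serving metrics before wiring up Grafana, you can port-forward to it directly (a sketch: 5556 is assumed to be the exporter port, and the pod name assumes the my-confluent-oss release):

kubectl port-forward my-confluent-oss-cp-kafka-0 5556:5556 &
curl -s http://localhost:5556/metrics | head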

Teardown

To remove the pods, list the pods with kubectl get pods and then delete the pods by name.

kubectl get pods
kubectl delete pod <podname>

To delete the Helm release, find the Helm release name with helm list and delete it with helm delete. You may also need to clean up leftover StatefulSets, since helm delete can leave them behind. Finally, clean up all persistent volume claims (PVCs) created by this release.

helm list
helm delete <release name>
kubectl delete statefulset <release name>-cp-kafka <release name>-cp-zookeeper
kubectl delete pvc --selector=release=<release name>

Appendix: Create a Local Kubernetes Cluster

There are many deployment options to get set up with a Kubernetes cluster, and this document provides instructions for using Minikube to set up a local Kubernetes cluster. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop.

You may alternatively set up a Kubernetes cluster in the cloud using other providers such as Google Kubernetes Engine (GKE).

Install Minikube and Drivers

Minikube version 0.23.0 or higher is required for the Docker server fix (moby/moby#31352), which adds support for using ARG in FROM in your Dockerfile.

First follow the basic Minikube installation instructions.

Then install the Minikube drivers. Minikube uses Docker Machine to manage the Kubernetes VM so it benefits from the driver plugin architecture that Docker Machine uses to provide a consistent way to manage various VM providers. Minikube embeds VirtualBox and VMware Fusion drivers so there are no additional steps to use them. However, other drivers require an extra binary to be present in the host PATH.

Important
If you are running on macOS, make sure to install the hyperkit driver for the native macOS hypervisor:
brew install hyperkit
minikube config set driver hyperkit     #(1)
  1. Use the hyperkit driver by default

Start Minikube

Tip
The following command increases the memory to 6096 MB and uses the xhyve driver. On newer Minikube versions, use the hyperkit driver configured above instead.
  1. Start Minikube.

    minikube start --kubernetes-version v1.9.4 --cpus 4 --memory 6096 --vm-driver=xhyve --v=8
  2. Continue to check the status of your local Kubernetes cluster until both Minikube and the cluster are in the Running state:

    ❯ minikube status
    m01
    host: Running
    kubelet: Running
    apiserver: Running
    kubeconfig: Configured
  3. Work around Minikube issue #1568.

    minikube ssh -- sudo ip link set docker0 promisc on
  4. Set the context.

    eval $(minikube docker-env)
    
    kubectl config set-context minikube.internal --cluster=minikube --user=minikube
    Context "minikube.internal" modified.
    
    kubectl config use-context minikube.internal
    Switched to context "minikube.internal".

Verify Minikube Local Kubernetes Environment

kubectl config current-context
minikube.internal

kubectl cluster-info
Kubernetes master is running at https://192.168.99.106:8443
KubeDNS is running at https://192.168.99.106:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy


cp-helm-charts's Issues

I get an "already exists" error when running the umbrella chart

Chens-MacBook-Pro-676:~ gwen$ helm install --debug cp-helm-charts/
[debug] Created tunnel using local port: '60028'

[debug] SERVER: "127.0.0.1:60028"

[debug] Original chart version: ""
[debug] CHART PATH: /Users/gwen/cp-helm-charts

Error: release terrific-waterbuffalo failed: services "terrific-waterbuffalo-cp-zookeeper-headless" already exists

But when I list the services, they are all created...

NAME                                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
kubernetes                                    ClusterIP   10.11.240.1     <none>        443/TCP             2h
terrific-waterbuffalo-cp-kafka                ClusterIP   10.11.247.241   <none>        9092/TCP            9m
terrific-waterbuffalo-cp-kafka-headless       ClusterIP   None            <none>        9092/TCP            9m
terrific-waterbuffalo-cp-kafka-rest           ClusterIP   10.11.245.62    <none>        8082/TCP            9m
terrific-waterbuffalo-cp-schema-registry      ClusterIP   10.11.254.86    <none>        8081/TCP            9m
terrific-waterbuffalo-cp-zookeeper            ClusterIP   10.11.244.176   <none>        2181/TCP            9m
terrific-waterbuffalo-cp-zookeeper-headless   ClusterIP   None            <none>        2888/TCP,3888/TCP   9m

Liveness / readiness for our components

I see only a single liveness probe, for ZK.
Shouldn't we add liveness and readiness probes for all components?

Example for Kafka:

        livenessProbe:
          exec:
            command:
              - sh
              - -ec
              - /usr/bin/jps | /bin/grep -q SupportedKafka
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          tcpSocket:
            port: 9092
          initialDelaySeconds: 30
          timeoutSeconds: 5

For ZooKeeper:

        livenessProbe:
          exec:
            command: ['/bin/bash', '-c', 'echo "ruok" | nc -w 2 -q 2 localhost 2181 | grep imok']
          initialDelaySeconds: 1
          timeoutSeconds: 3
        readinessProbe:
          tcpSocket:
            port: 2181
          initialDelaySeconds: 30
          timeoutSeconds: 5

Helm Chart configmap is too large

When installing the chart via Helm the following error is received:

$ helm install cp-helm-charts
NAME:   wandering-arachnid
Error: getting deployed release "wandering-arachnid": release: "wandering-arachnid" not found

This somewhat obscure error is caused by a known issue helm/helm#1996 where the configmap that is created for the Helm release is greater than 1MB.

You also need to manually clean up all resources as follows:

$ kubectl delete poddisruptionbudgets.policy -l "release=wandering-arachnid"
$ kubectl delete configmaps -l "release=wandering-arachnid"
$ kubectl delete all -l "release=wandering-arachnid"

I found that adding the following to the .helmignore file fixes the issue:

...
...
*.tgz

Likely commit 362dc39 has caused the issue.

Add PodDisruptionBudget to cp-zookeeper charts

An Application Owner can create a PodDisruptionBudget object (PDB) for each application. A PDB limits the number of pods of a replicated application that are down simultaneously from voluntary disruptions.

For ZooKeeper:
Concern: Do not reduce number of instances below quorum, otherwise writes fail.
Possible Solution 1: set maxUnavailable to 1 (works with varying scale of application).
Possible Solution 2: set minAvailable to quorum-size (e.g. 3 when scale is 5). (Allows more disruptions at once).
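A minimal manifest for Possible Solution 1 might look like this (a sketch: the label selector is illustrative and must match the chart's pod labels; policy/v1beta1 was the PDB API version at the time):

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: cp-zookeeper-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: cp-zookeeper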

Monitoring Solution for cp-kafka and cp-zookeeper

Problem Statement

As an operator, after provisioning a Kafka cluster, I need to know what is happening to Kafka (metrics) and also be able to review events after the fact (logs).

Potential Solution

  • Metrics
  1. Prometheus JMX exporter + external Prometheus - all components exist, need a supported docker image for JMX exporter + definitions/transformation of all JMX metrics of interest.
  2. Prometheus Kafka exporter + external Prometheus - OSS vs Confluent version of Kafka exporter?
  3. Maybe Confluent Control Center?
  • Logs
  1. Filebeat vs Logstash vs Fluentd, extract, transform and load to Elasticsearch
  2. Confluent log extractor/transformer?

@mikkin

Unable to install new connector plugins in kafka connect

Hi,

I successfully deployed kafka confluent platform into my minikube cluster using this helm chart and I successfully added new connector configuration using the JdbcSourceConnector with a postgresql configuration.

I am now struggling to add a new connector configuration using the JdbcSourceConnector for an SQL Server database. I already downloaded the mssql driver (mssql-jdbc-7.0.0.jre8.jar) and placed it into the /usr/share/java/kafka-connect-jdbc directory, but I keep getting the following error:

"Invalid value java.sql.SQLException: No suitable driver found for jdbc:sqlserver://mssql-dev.mydomain.pt:1433;database=mydb;user=user_dev;password=user_pass for configuration Couldn't open connection to jdbc:sqlserver://mssql-dev.mydomain.pt:1433;database=mydb;user=user_dev;password=user_pass"

I also tried to install a third party plugin using the command confluent-hub install debezium/debezium-connector-mysql:0.8.1 however the plugin seems not to be loaded by connect because it is not listed when requesting the /connector-plugins API.

Considering that the kafka connect process is launched when the container is started (/etc/confluent/docker/run) I was wondering if the third party plugins and jdbc drivers are supposed to be present at launch. If this is the case, how can I do that?

Any help would be appreciated.

Not really an issue :)

No complaints, but I couldn't find any other way to say how excited I am to see this taking shape.

Document process for resource requirements

Kubernetes and Helm allow users to require and/or limit resources for each component. For example, we recommend 4GB RAM for Kafka Broker, so it will be good to let Kubernetes know, so brokers can be allocated on nodes with enough resources.

We don't need to have these by default (so our charts will be useful for tiny deployments), but we want to let people know they can and should do it for production, and how to do it.
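For example, the 4GB broker recommendation could be surfaced as a standard Kubernetes resource spec on the broker container (a sketch; whether to also pin limits to requests is a deployment choice):

resources:
  requests:
    memory: 4Gi
  limits:
    memory: 4Gi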

Zookeeper Optional Configurations

As part of #27 and #26
In cp-zookeeper image, there are some optional configurations:
https://github.com/confluentinc/cp-docker-images/blob/master/debian/zookeeper/include/etc/confluent/docker/zookeeper.properties.template#L8-L29
Right now we only surfaced 2 config parameters: https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-zookeeper/templates/statefulset.yaml#L96-L99

Can we take a look at the parameters and decide which of them should be surfaced, and what the recommended values are?

Ingresses for services

Please create an ingress template and configuration for:

  • kafka-rest
  • kafka-connect
  • schema-registry
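For reference, an Ingress for the REST Proxy might look like this (a sketch: the host is a placeholder, the service name and port 8082 follow the chart's naming shown elsewhere on this page, and extensions/v1beta1 was the Ingress API version at the time):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cp-kafka-rest
spec:
  rules:
  - host: rest.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-confluent-oss-cp-kafka-rest
          servicePort: 8082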

Add to documentation

Got some feedback from internal testing of the Helm charts.

  • Add validation step - how to run a simple client to test that Kafka brokers are set up properly
  • We should probably say in the README that the charts provide persistent storage (and not ephemeral storage)
  • Helm setup/install troubleshooting info
  • Explain: What are Helm charts? How are the helm templates better than vanilla templates?

Question: What about SSL and SASL?

Hello,

Looking at the chart I did not find any possible configuration for TLS encryption and SASL support.

Did you leave this out intentionally or do I miss something due to lack of experience?

Kind regards,
Marcus

Error: configmaps is forbidden

I set up a new GCP K8s cluster, followed all steps to install Helm, and cloned this repo. I followed the steps from the readme (without the minikube steps, of course).

Error: configmaps is forbidden...

See attached file for the steps I did...

Maybe GCP K8s needs other steps than minikube? Then the docs should point this out (and describe the steps).

commands.txt

Consider adding heapOptions for kafka-connect, etc.

Just an ease of use thing, not really a bug, I've already worked around it for myself:

Kafka & Zookeeper have pretty optimized heap settings by default (512M). Kafka-connect, on the other hand, is gobbling up quite a bit on startup, at least 1G, which was enough to cause issues in my (very small) test cluster.

May save a lot of confusion and support requests to throw a heapOptions option in for other java portions of the app as well, and default it to a lower value.
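Something along these lines in values.yaml would do it (a sketch: the heapOptions key under cp-kafka-connect is the proposal here, not necessarily an existing chart option):

cp-kafka-connect:
  heapOptions: "-Xms512M -Xmx512M"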

Generic environmental options for every deployment

Please add this section to every deployment:

          {{- range $key, $value := .Values.env }}
          - name: "{{ $key }}"
            value: "{{ $value }}"
          {{- end }}

This will make it possible to configure any environment variable.
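With that template in place, arbitrary variables could then be supplied from values.yaml, for example (variable names illustrative):

env:
  KAFKA_HEAP_OPTS: "-Xms512M -Xmx512M"
  CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"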

Helm 2.9.1 also refusing connection with minikube...

I followed all steps, but "helm list" is still refusing the connection, even though the readme says these issues only exist before 2.9.1. The documented workaround does not work, unfortunately.

Here are my steps (I tried it twice):

kai.waehner@Kais-MacBook-12:~|⇒ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
kai.waehner@Kais-MacBook-12:~|⇒ clear
kai.waehner@Kais-MacBook-12:~|⇒ helm init
$HELM_HOME has been configured at /Users/kai.waehner/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
kai.waehner@Kais-MacBook-12:~|⇒ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "coreos" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
kai.waehner@Kais-MacBook-12:~|⇒ helm list
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout
kai.waehner@Kais-MacBook-12:~|⇒ kubectl delete --namespace kube-system svc tiller-deploy
service "tiller-deploy" deleted
kai.waehner@Kais-MacBook-12:~|⇒ kubectl delete --namespace kube-system deploy tiller-deploy
deployment "tiller-deploy" deleted
kai.waehner@Kais-MacBook-12:~|⇒ kubectl create serviceaccount --namespace kube-system tiller
Error from server (AlreadyExists): serviceaccounts "tiller" already exists
kai.waehner@Kais-MacBook-12:~|⇒ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "tiller-cluster-rule" already exists
kai.waehner@Kais-MacBook-12:~|⇒ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
Error from server (NotFound): deployments.extensions "tiller-deploy" not found
kai.waehner@Kais-MacBook-12:~|⇒ helm init --service-account tiller --upgrade
$HELM_HOME has been configured at /Users/kai.waehner/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
kai.waehner@Kais-MacBook-12:~|⇒ helm install --name my-confluent-oss cp-helm-charts
Error: failed to download "cp-helm-charts" (hint: running helm repo update may help)
kai.waehner@Kais-MacBook-12:~|⇒ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "coreos" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
kai.waehner@Kais-MacBook-12:~|⇒ helm install --name my-confluent-oss cp-helm-charts
Error: failed to download "cp-helm-charts" (hint: running helm repo update may help)
kai.waehner@Kais-MacBook-12:~|⇒ helm list
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout
kai.waehner@Kais-MacBook-12:~|⇒ helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

cp-kafka depends on cp-zookeeper

dependencies:
- name: cp-zookeeper
- name: cp-schema-registry
- name: cp-kafka-rest
- name: cp-kafka-connect

Which is not true:

  • The only dependency for cp-kafka is cp-zookeeper.
  • For cp-schema-registry: cp-kafka and (transitively) cp-zookeeper.
  • For cp-kafka-rest: Schema Registry (not always), Kafka, and ZooKeeper.
  • For cp-kafka-connect: Schema Registry, Kafka, and ZooKeeper.
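A per-chart requirements.yaml reflecting the actual dependency would then look something like this (a sketch; the version and repository values are placeholders):

# cp-kafka/requirements.yaml
dependencies:
  - name: cp-zookeeper
    version: "0.1.0"
    repository: "file://../cp-zookeeper"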
