
charts's Introduction

YugabyteDB can be deployed in various Kubernetes configurations (including single zone, multi-zone and multi-cluster) using this Helm Chart. Detailed documentation is available in YugabyteDB Docs for Kubernetes Deployments.


charts's People

Contributors

adityakaushik99-yb, amannijhawan, anmalysh-yb, arnav15, artem-mindrov, baba230896, bhavin192, charleswang234, daniel-yb, deepti-yb, govardhanjalla, hkandala, isignal, jharveysmith, jvigil-yugabyte, kevincox, kv83821-yb, mchiddy, nkhogen, sb-yb, schoudhury, shristy, skorobogatydmitry, subramanian-neelakantan, svarnau, vars-07, vpatibandla-yb, wesleyw, ybnelson, yugabyte-ci


charts's Issues

PostGIS Installation Not Working

Hi,

I have tried installing PostGIS with a custom container built from the following Dockerfile:

FROM yugabytedb/yugabyte:2.7.1.1-b1

RUN rpm -Uvh https://yum.postgresql.org/11/redhat/rhel-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
RUN yum install -y postgresql11-server postgis31_11

RUN cp -v "$(/usr/pgsql-11/bin/pg_config --pkglibdir)"/*postgis*.so "$(/home/yugabyte/postgres/bin/pg_config --pkglibdir)"
RUN cp -v "$(/usr/pgsql-11/bin/pg_config --sharedir)"/extension/*postgis*.sql "$(/home/yugabyte/postgres/bin/pg_config --sharedir)"/extension
RUN cp -v "$(/usr/pgsql-11/bin/pg_config --sharedir)"/extension/*postgis*.control "$(/home/yugabyte/postgres/bin/pg_config --sharedir)"/extension

I have then used this container in the image value of the chart.

But when I try to run

create extension postgis;

I get the following error:

SQL Error [XX000]: ERROR: could not load library "/home/yugabyte/postgres/lib/postgis-3.so": libtiff.so.5: cannot open shared object file: No such file or directory
  ERROR: could not load library "/home/yugabyte/postgres/lib/postgis-3.so": libtiff.so.5: cannot open shared object file: No such file or directory

I have checked, and it is available at /usr/lib64/libtiff.so.5 and /lib64/libtiff.so.5.

ldd /home/yugabyte/postgres/lib/postgis-3.so | grep libtiff.so.5

It looks like it is pointing to the correct place:

libtiff.so.5 => /lib64/libtiff.so.5 (0x00007f599d706000)

Question: How to change flags in helm charts

Hi,

I wanted to change the flag async_replication_polling_delay_ms for the tserver instances.
How can that be done? I guess I have to fork the chart and add the flag manually, right?

Thanks
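
For reference, the chart exposes a gflags section in values.yaml for both server types, so forking shouldn't be necessary. A minimal sketch of an override file; the flag value here is only an example:

gflags:
  tserver:
    async_replication_polling_delay_ms: "500"   # example value only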

Can't access the UI when defining it on a different port

I'm trying to configure the UI on a different port than 7000 with something like:

serviceEndpoints:
  - name: "yb-master-ui"
    type: LoadBalancer
    app: "yb-master"
    ports:
      http-ui: "80"

I can install the chart and I see the services are using the new port. When I connect to it in the browser, it immediately fails with a "connection refused" error.

With this patch it works properly, but I'm not 100% sure whether anything else is needed.

--- ../../yugabyte/templates/service.yaml	2021-08-28 14:52:55.000000000 +0200
+++ service.yaml	2021-10-25 16:19:37.000000000 +0200
@@ -95,6 +95,7 @@
     {{- range $label, $port := $endpoint.ports }}
     - name: {{ $label | quote }}
       port: {{ $port }}
+      targetPort: 7000
     {{- end}}
   selector:
     {{- include "yugabyte.appselector" ($appLabelArgs) | indent 4 }}

Allow existingSecret for authCredentials in values.yaml

Hi,

It would be nice if we could specify an existing Secret for authCredentials.ysql/ycql.password to avoid having a clear-text password in the values.yaml.

Ideally I'd also like custom volume and volumeMount support for the setup-credentials-job, to allow mounting secrets with Vault CSI Provider.
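
Something along these lines is what I have in mind; the existingSecret keys are purely hypothetical until the chart supports them:

authCredentials:
  ysql:
    user: yugabyte
    # hypothetical option: reference a pre-created Secret instead of a clear-text password
    existingSecret: yugabyte-ysql-credentials
  ycql:
    existingSecret: yugabyte-ycql-credentials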

num_cpus=0 when using millicores for CPU resources

How I installed the chart:

cat <<EOF | helm upgrade --install yugabyte yugabyte --repo https://charts.yugabyte.com --values -
replicas:
  master: 3
  tserver: 3
  ## Used to set replication factor when isMultiAz is set to true
  totalMasters: 3
resource:
  master:
    requests:
      cpu: 100m
      memory: 100Mi
    limits:
      cpu: 500m
      memory: 500Mi
  tserver:
    requests:
      cpu: 100m
      memory: 100Mi
    limits:
      cpu: 500m
      memory: 500Mi
EOF

You'll see --num_cpus=0 among the flags of the running processes in one of the tserver pods. Disregard qemu-x86_64; I'm using Kind on an M1 Mac.


USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0 152484  5276 ?        Ssl  01:19   0:00 /usr/bin/qemu-x86_64 /sbin/tini /sbin/tini -- /bin/bash -c if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then   /home/yugabyte/tools/k8s_preflight.py all fi && \ touch "/mnt/disk0/disk.check" "/mnt/disk1/disk.check" && \ if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then   PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \     dnscheck \     --addr="yb-tserver-0.yb-tservers.default.svc.cluster.local" \     --port="9100" fi && \ if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then   PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \     dnscheck \     --addr="yb-tserver-0.yb-tservers.default.svc.cluster.local:9100" \     --port="9100" fi && \ if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then   PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \     dnscheck \     --addr="0.0.0.0" \     --port="9000" fi && \ if [[ -f /home/yugabyte/tools/k8s_parent.py ]]; then   k8s_parent="/home/yugabyte/tools/k8s_parent.py" else   k8s_parent="" fi && \ if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then   PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \     dnscheck \     --addr="yb-tserver-0.yb-tservers.default.svc.cluster.local" \     --port="9042" fi && \ if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then   PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \     dnscheck \     --addr="0.0.0.0:5433" \     --port="5433" fi && \ exec ${k8s_parent} /home/yugabyte/bin/yb-tserver \   --fs_data_dirs=/mnt/disk0,/mnt/disk1 \   --tserver_master_addrs=yb-master-0.yb-masters.default.svc.cluster.local:7100,yb-master-1.yb-masters.default.svc.cluster.local:7100,yb-master-2.yb-masters.default.svc.cluster.local:7100 \   --metric_node_name=yb-tserver-0 \   --memory_limit_hard_bytes=456130560000 \   --stderrthreshold=0 \   --num_cpus=0 \   --undefok=num_cpus,enable_ysql \   --use_node_hostname_for_local_tserver=true \   --rpc_bind_addresses=yb-tserver-0.yb-tservers.default.svc.cluster.local \   --server_broadcast_addresses=yb-tserver-0.yb-tservers.default.svc.cluster.local:9100 \   --webserver_interface=0.0.0.0 \   --enable_ysql=true \   --pgsql_proxy_bind_address=0.0.0.0:5433 \   --cql_proxy_bind_address=yb-tserver-0.yb-tservers.default.svc.cluster.local 
root          11  0.0  0.1 194416 21884 ?        Sl   01:19   0:00 /usr/bin/qemu-x86_64 /usr/bin/python python /home/yugabyte/tools/k8s_parent.py /home/yugabyte/bin/yb-tserver --fs_data_dirs=/mnt/disk0,/mnt/disk1 --tserver_master_addrs=yb-master-0.yb-masters.default.svc.cluster.local:7100,yb-master-1.yb-masters.default.svc.cluster.local:7100,yb-master-2.yb-masters.default.svc.cluster.local:7100 --metric_node_name=yb-tserver-0 --memory_limit_hard_bytes=456130560000 --stderrthreshold=0 --num_cpus=0 --undefok=num_cpus,enable_ysql --use_node_hostname_for_local_tserver=true --rpc_bind_addresses=yb-tserver-0.yb-tservers.default.svc.cluster.local --server_broadcast_addresses=yb-tserver-0.yb-tservers.default.svc.cluster.local:9100 --webserver_interface=0.0.0.0 --enable_ysql=true --pgsql_proxy_bind_address=0.0.0.0:5433 --cql_proxy_bind_address=yb-tserver-0.yb-tservers.default.svc.cluster.local
root          59  5.4  0.6 1642568 106544 ?      Sl   01:19   0:33 /usr/bin/qemu-x86_64 /home/yugabyte/bin/yb-tserver /home/yugabyte/bin/yb-tserver --fs_data_dirs=/mnt/disk0,/mnt/disk1 --tserver_master_addrs=yb-master-0.yb-masters.default.svc.cluster.local:7100,yb-master-1.yb-masters.default.svc.cluster.local:7100,yb-master-2.yb-masters.default.svc.cluster.local:7100 --metric_node_name=yb-tserver-0 --memory_limit_hard_bytes=456130560000 --stderrthreshold=0 --num_cpus=0 --undefok=num_cpus,enable_ysql --use_node_hostname_for_local_tserver=true --rpc_bind_addresses=yb-tserver-0.yb-tservers.default.svc.cluster.local --server_broadcast_addresses=yb-tserver-0.yb-tservers.default.svc.cluster.local:9100 --webserver_interface=0.0.0.0 --enable_ysql=true --pgsql_proxy_bind_address=0.0.0.0:5433 --cql_proxy_bind_address=yb-tserver-0.yb-tservers.default.svc.cluster.local
root         263  0.0  0.5 603052 95280 ?        Sl   01:20   0:00 /usr/bin/qemu-x86_64 /home/yugabyte/postgres/bin/postgres /home/yugabyte/postgres/bin/postgres -D /mnt/disk0/pg_data -p 5433 -h 0.0.0.0 -k /tmp/.yb.0.0.0.0:5433 -c unix_socket_permissions=0700 -c logging_collector=on -c log_directory=/mnt/disk0/yb-data/tserver/logs -c yb_pg_metrics.node_name=yb-tserver-0 -c yb_pg_metrics.port=13000 -c config_file=/mnt/disk0/pg_data/ysql_pg.conf -c hba_file=/mnt/disk0/pg_data/ysql_hba.conf
root         294  0.0  0.3 447040 49364 ?        Ssl  01:20   0:00 /usr/bin/qemu-x86_64 /home/yugabyte/postgres/bin/postgres /home/yugabyte/postgres/bin/postgres -D /mnt/disk0/pg_data -p 5433 -h 0.0.0.0 -k /tmp/.yb.0.0.0.0:5433 -c unix_socket_permissions=0700 -c logging_collector=on -c log_directory=/mnt/disk0/yb-data/tserver/logs -c yb_pg_metrics.node_name=yb-tserver-0 -c yb_pg_metrics.port=13000 -c config_file=/mnt/disk0/pg_data/ysql_pg.conf -c hba_file=/mnt/disk0/pg_data/ysql_hba.conf
root         300  0.0  0.3 612776 58164 ?        Ssl  01:20   0:00 /usr/bin/qemu-x86_64 /home/yugabyte/postgres/bin/postgres /home/yugabyte/postgres/bin/postgres -D /mnt/disk0/pg_data -p 5433 -h 0.0.0.0 -k /tmp/.yb.0.0.0.0:5433 -c unix_socket_permissions=0700 -c logging_collector=on -c log_directory=/mnt/disk0/yb-data/tserver/logs -c yb_pg_metrics.node_name=yb-tserver-0 -c yb_pg_metrics.port=13000 -c config_file=/mnt/disk0/pg_data/ysql_pg.conf -c hba_file=/mnt/disk0/pg_data/ysql_hba.conf
root         302  0.0  0.3 604216 52964 ?        Ssl  01:20   0:00 /usr/bin/qemu-x86_64 /home/yugabyte/postgres/bin/postgres /home/yugabyte/postgres/bin/postgres -D /mnt/disk0/pg_data -p 5433 -h 0.0.0.0 -k /tmp/.yb.0.0.0.0:5433 -c unix_socket_permissions=0700 -c logging_collector=on -c log_directory=/mnt/disk0/yb-data/tserver/logs -c yb_pg_metrics.node_name=yb-tserver-0 -c yb_pg_metrics.port=13000 -c config_file=/mnt/disk0/pg_data/ysql_pg.conf -c hba_file=/mnt/disk0/pg_data/ysql_hba.conf
root         304  0.0  0.3 447040 49740 ?        Ssl  01:20   0:00 /usr/bin/qemu-x86_64 /home/yugabyte/postgres/bin/postgres /home/yugabyte/postgres/bin/postgres -D /mnt/disk0/pg_data -p 5433 -h 0.0.0.0 -k /tmp/.yb.0.0.0.0:5433 -c unix_socket_permissions=0700 -c logging_collector=on -c log_directory=/mnt/disk0/yb-data/tserver/logs -c yb_pg_metrics.node_name=yb-tserver-0 -c yb_pg_metrics.port=13000 -c config_file=/mnt/disk0/pg_data/ysql_pg.conf -c hba_file=/mnt/disk0/pg_data/ysql_hba.conf
root         812  0.0  0.0 160340  9108 pts/0    Ssl+ 01:29   0:00 /usr/bin/qemu-x86_64 /usr/bin/bash bash -c ps aux|cat
root         819  0.0  0.0 200852 10100 ?        Rl+  Feb02   0:00 ps aux
root         821  0.0  0.0 152512  5436 pts/0    Sl+  01:29   0:00 /usr/bin/qemu-x86_64 /usr/bin/cat cat
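
A workaround, assuming --num_cpus is derived by truncating the CPU request, is to use whole-core requests so the value doesn't round down to 0. A minimal sketch:

resource:
  tserver:
    requests:
      cpu: 1        # whole cores avoid --num_cpus=0
      memory: 1Gi
    limits:
      cpu: 1
      memory: 1Gi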

Tolerations not added to setup-credentials-job

The setup-credentials-job does not currently support tolerations or nodeSelectors.

Proposal 1:
Inherit tolerations and nodeSelectors from the top-level master values, e.g. .Values.master.tolerations

Proposal 2:
Define job-specific tolerations and nodeSelectors for authCredentials, e.g. .Values.authCredentials.tolerations

Proposal 2 seems more consistent with the current configuration, since support for both of these fields already exists at both the master and tserver levels.

If desired, willing to contribute changes upstream.
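
A sketch of what proposal 2 could look like in values.yaml; these keys are hypothetical until implemented:

authCredentials:
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "yugabyte"
      effect: "NoSchedule"
  nodeSelector:
    kubernetes.io/os: linux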

Add Ingress and ServiceMonitor helm chart support

Add option to disable PDB entirely within the helm chart or to set maxUnavailable manually

Hi,

we're using Yugabyte in a DEV environment with one replica each for master and tserver.
Due to this setting, the PDBs are automatically generated with a maxUnavailable setting of 0 for both StatefulSets.
Unfortunately this blocks automatic cluster node updates.

Would it be possible to either:

  1. Add an option to the chart values to enable/disable pod disruption budgets entirely

or

  2. Set the values for the PDBs manually within the values file.

I do understand that this might not be wanted for most situations (where you would have at least 2 replicas of each), but it would be beneficial to have this option for scenarios like ours.

Cheers,
@cberge908
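
For illustration, either option could be exposed in values.yaml roughly like this; both keys are hypothetical:

podDisruptionBudget:
  enabled: false        # option 1: do not render the PDBs at all
  # maxUnavailable: 1   # option 2: override the computed value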

Image pull secret not added for setup credentials job

The setup credentials job is missing the image pull secret set here.

If the user sets a custom image repository here and their registry requires authentication, the setup credentials job will hit an ImagePullBackOff and not be able to start.

Wrong certificate name when using cert manager and Istio

When using cert-manager to provision node and client certificates, the nodes try to use node.0.0.0.0:7200 as their certificate names, but they should be using the node name instead of the RPC address. This happens only when Istio is enabled.

if [[ $sameRootCA -eq 0 ]]; then
            echo "Refreshing tls certs at /opt/certs/yugabyte/";
            cp /home/yugabyte/cert-manager/tls.crt /opt/certs/yugabyte/node.0.0.0.0:7100.crt;
            cp /home/yugabyte/cert-manager/tls.key /opt/certs/yugabyte/node.0.0.0.0:7100.key;

This could be used instead, and it would resolve the problem:
$(HOSTNAME).yugabyte-yb-masters.$(NAMESPACE).svc.cluster.local:7100.crt/key

To fix this issue for now, I had to add the following to the values:

gflags:
  master:
    cert_node_filename: 0.0.0.0:7100
  tserver:
    cert_node_filename: 0.0.0.0:7100

Annotations for master/tserver PVCs

We use https://github.com/topolvm/pvc-autoresizer to autoresize volumes in our projects.
To make this work for Yugabyte PVCs as well, we would need to be able to set annotations at the PersistentVolumeClaim level.
It would be nice to have this option through e.g. "Values.storage.[tserver|master].annotations". We usually generate 2-3 annotations, but it can be even more.
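
For example (the chart keys are hypothetical, matching the naming suggested above; the annotations shown are the ones pvc-autoresizer typically reads):

storage:
  tserver:
    annotations:
      resize.topolvm.io/threshold: "20%"
      resize.topolvm.io/increase: "20Gi"
  master:
    annotations:
      resize.topolvm.io/threshold: "20%"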

/sbin/tini not installed in default image 2.5.1.0-b153

The commit that introduced tini breaks the default installation for me, probably because the image yugabytedb/yugabyte:2.5.1.0-b153 that is used as the default in values.yaml doesn't include it.

Error Message:
Error: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/sbin/tini": stat /sbin/tini: no such file or directory: unknown

This error does not appear when I override it with a more recent image.
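
As a workaround until the default is bumped, the image can be overridden in values.yaml; a sketch using a newer tag mentioned elsewhere in this document (any image that actually ships /sbin/tini should do):

Image:
  repository: yugabytedb/yugabyte
  tag: 2.7.1.1-b1    # example only; pick any tag that contains /sbin/tini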

Missing clusters.cluster.certificate-authority-data on Windows OS

The script generate_kubeconfig.py does not populate the clusters.cluster.certificate-authority-data value in the generated kubeconfig file. The issue is related to tempfile: on Windows, it seems the file cannot be read before it is closed, even after calling flush(). Hence the ca_crt file is empty when the first config command is called.

Upgrade Chart In Repo to 2.1.6

Hi,

I've just installed YugabyteDB from the repo and only see 2.1.5 as a version. Running helm search repo yugabytedb/yugabyte gives:

NAME               	CHART VERSION	APP VERSION	DESCRIPTION                                       
yugabytedb/yugabyte	2.1.5        	2.1.5.0-b17	YugabyteDB is the high-performance distributed ...

[BUG] Flag "temp_file_limit" not recognized

Hi,

we're using chart version 2.17.1 and want to set the temp_file_limit flag to -1.
Setting this flag to any value results in an error:

DNS addr resolve: yb-tserver-0.yb-tservers.yugabyte.svc.cluster.local
DNS addr resolve success.
Bind ipv4: 10.244.2.47 port 9100
Bind success.
DNS addr resolve: yb-tserver-0.yb-tservers.yugabyte.svc.cluster.local
DNS addr resolve success.
Bind ipv4: 10.244.2.47 port 9100
Bind success.
DNS addr resolve: 0.0.0.0
DNS addr resolve success.
Bind ipv4: 0.0.0.0 port 9000
Bind success.
DNS addr resolve: yb-tserver-0.yb-tservers.yugabyte.svc.cluster.local
DNS addr resolve success.
Bind ipv4: 10.244.2.47 port 9042
Bind success.
DNS addr resolve: 0.0.0.0
DNS addr resolve success.
Bind ipv4: 0.0.0.0 port 5433
Bind success.
2023-02-21 10:41:41,857 [INFO] k8s_parent.py: Core files will be copied to '/mnt/disk0/cores'
2023-02-21 10:41:41,857 [INFO] k8s_parent.py: core_pattern is: core
ERROR: unknown command line flag 'temp_file_limit'
2023-02-21 10:41:41,872 [INFO] k8s_parent.py: core_pattern is: core
2023-02-21 10:41:41,872 [INFO] k8s_parent.py: Skipping copy of core files: '/mnt/disk0/cores' and '/mnt/disk0/cores' are the same directories
2023-02-21 10:41:41,872 [INFO] k8s_parent.py: Copied 0 core files to '/mnt/disk0/cores'
Stream closed EOF for yugabyte/yb-tserver-0 (yb-tserver)

This seems to be a bug; setting other flags seems to work fine.

Cheers,
cberge908
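
For what it's worth, temp_file_limit is a PostgreSQL setting rather than a yb-tserver gflag, so it likely has to be passed through the ysql_pg_conf_csv gflag instead of directly. A sketch, not verified against this chart version:

gflags:
  tserver:
    ysql_pg_conf_csv: "temp_file_limit=-1"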

Refactorings

Can you please refactor this chart?

  • Separate the single service.yaml template (wut?) into separate files per manifest
  • Fix value naming. Can you explain why the values file has some PascalCase value names? (Component, Image, PodManagementPolicy, Services)

Helm template render fails with addition of secretEnv under master/tserver

If we define a secretEnv section in values.yaml for master or tserver, the Helm chart deployment fails, and so does the 'helm template' command.
The following section is added:
master:
  affinity:
    podAntiAffinity: {}
  extraEnv: []
  extraVolumeMounts: []
  extraVolumes: []
  podAnnotations: {}
  podLabels: {}
  secretEnv:
    - name: YSQL_PASSWORD
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: password

The helm template command fails with:
Error: template: yugabyte/templates/service.yaml:385:12: executing "yugabyte/templates/service.yaml" at <include "yugabyte.addenvsecrets" $data>: error calling include: template: yugabyte/templates/_helpers.tpl:119:15: executing "yugabyte.addenvsecrets" at <$v.valueFrom.secretKeyRef.namespace>: nil pointer evaluating interface {}.secretKeyRef
Thanks
Rohan Roy Sharma

Deprecated "failure-domain" Label in Chart

failure-domain.beta.kubernetes.io/zone was deprecated in Kubernetes v1.17; the chart should instead use topology.kubernetes.io/zone.

https://kubernetes.io/docs/reference/labels-annotations-taints/#failure-domainbetakubernetesioregion

nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
      - matchExpressions:
          - key: failure-domain.beta.kubernetes.io/zone
            operator: In
            values:
              - {{ $root.Values.AZ }}

Saving core dumps to PersistentVolumes

Usually kernel.core_pattern is set to a relative pattern, something like core.%e.%p.%t. In such scenarios the core dumps are saved in the process's current working directory (i.e. /proc/$pid/cwd).

We can change the working directory of the yb-master and yb-tserver processes by setting the container's workingDir in the pod spec. Pointing it at a path on a PV makes it possible to persist core dumps across container restarts.

https://v1-16.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#container-v1-core

workingDir: Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.
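
A minimal sketch of the idea at the pod-spec level; the paths are illustrative and the chart would need to template this into both StatefulSets:

containers:
  - name: yb-tserver
    image: yugabytedb/yugabyte:2.7.1.1-b1
    # With a relative core_pattern such as core.%e.%p.%t, dumps land in the
    # working directory, so pointing it at a PV mount persists them across restarts.
    workingDir: /mnt/disk0/cores
    volumeMounts:
      - name: datadir0
        mountPath: /mnt/disk0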

Disable `callhome`

Hi,

Thanks for maintaining this repo.

  • Is there any documentation regarding the type of data that is collected? I tried a Google search plus a search on the docs website, but I don't think I found anything.
  • Is there a way to disable callhome via the helm chart?

Cheers
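
On the second point, the servers expose a callhome_enabled gflag, so it can presumably be turned off through the chart's gflags section. A sketch, not verified:

gflags:
  master:
    callhome_enabled: "false"
  tserver:
    callhome_enabled: "false"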

Error when enabling TLS + Nodeport

Hello,

I deployed a yugabyte cluster using the helm chart v2.7.1.

I added NodePort services so that I can connect clients using node_ip:nodePort.
Everything worked fine until I tried to deploy with TLS; I get errors when trying to write data (on a completely new deployment).

tls:
    # Set to true to enable the TLS.
    enabled: true
    nodeToNode: true
    clientToServer: true
    # Set to false to disallow any service with unencrypted communication from joining this cluster
    insecure: false
    rootCA:
      cert: "......"
      key: "....."

  serviceEndpoints:
    - name: "yb-master-ui"
      type: NodePort
      app: "yb-master"
      ports:
        http-ui: "7000"

    - name: "yb-tserver-service"
      type: NodePort
      app: "yb-tserver"
      ports:
        tcp-yql-port: "9042"
        tcp-yedis-port: "6379"
        tcp-ysql-port: "5433"

And I am also using the image:

Component: "yugabytedb"
  Image:
    tag: 2.7.1.1-b1

When I try to connect with Python psycopg2, following the doc here:
https://docs.yugabyte.com/latest/quick-start/build-apps/python/ysql-psycopg2/

I can successfully create the employee table, but it is impossible to insert any data; I get an error:
InternalError: Network error: Handshake failed: Network error (yb/rpc/secure_stream.cc:1108): Endpoint does not match, address: 172.23.171.61, hostname: 172.23.171.61
The IP displayed here changes.

Apparently this is a pod IP.

Any ideas?

Deployment fails for TLS certificates via cert-manager and ACME

Using a ClusterIssuer for Let's Encrypt over ACME fails. Let's Encrypt requires that if the common name is set, the same value must also be listed as a SAN. The yugabyte Helm chart hard-codes the common name. In addition, the TLS secret that is created does not contain the CA, while the StatefulSet expects to mount the CA from the TLS secret as a volume.

I believe these issues are not unique to Let's Encrypt; corporate PKI solutions will have the same CSR requirements and will likewise not return the CA within the TLS secret that cert-manager generates.
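
For comparison, a Certificate that ACME issuers accept typically repeats the common name in dnsNames. A generic cert-manager example, not the chart's current template; names are illustrative:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: yb-tserver-tls
spec:
  secretName: yb-tserver-tls
  commonName: yb-tserver-0.yb-tservers.default.svc.cluster.local
  dnsNames:
    # Let's Encrypt requires the CN to also appear as a SAN
    - yb-tserver-0.yb-tservers.default.svc.cluster.local
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer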

Missing Namespace in PodDisruptionBudget

Namespace is missing in metadata of PodDisruptionBudget.

Current situation:

{{- if eq $root.Values.isMultiAz false }}
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: {{ $root.Values.oldNamingStyle | ternary (printf "%s-pdb" .label) (printf "%s-%s-pdb" (include "yugabyte.fullname" $root) .name) }}
spec:
  maxUnavailable: {{ template "yugabyte.max_unavailable_for_quorum" $root }}
  selector:
    matchLabels:
      {{- include "yugabyte.appselector" ($appLabelArgs) | indent 6 }}
{{- end }}
{{- end }}

Expected situation:

{{- if eq $root.Values.isMultiAz false }}
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: {{ $root.Values.oldNamingStyle | ternary (printf "%s-pdb" .label) (printf "%s-%s-pdb" (include "yugabyte.fullname" $root) .name) }}
  namespace: "{{ $root.Release.Namespace }}"
spec:
  maxUnavailable: {{ template "yugabyte.max_unavailable_for_quorum" $root }}
  selector:
    matchLabels:
      {{- include "yugabyte.appselector" ($appLabelArgs) | indent 6 }}
{{- end }}
{{- end }}

Unable to connect via `ysqlsh`

After following the instructions here, I have what appears to be a functioning Yugabyte cluster, but I'm not able to connect using the command kubectl exec --namespace yb-demo -it yb-tserver-0 -- /home/yugabyte/bin/ysqlsh -h yb-tserver-0.yb-tservers.yb-demo.

If I try it, I get this error:

Screenshot 2023-02-09 at 4 32 05 PM

Here are the running pods:

Screenshot 2023-02-09 at 4 34 07 PM

Here's the output of kubectl describe pod yb-tserver-0 -n yb-demo:

Name:             yb-tserver-0
Namespace:        yb-demo
Priority:         0
Service Account:  default
Node:             b8615d15-a0af-4d0f-9c60-455880de8e76/192.168.222.167
Start Time:       Thu, 09 Feb 2023 22:14:16 +0000
Labels:           app=yb-tserver
                  chart=yugabyte
                  component=yugabytedb
                  controller-revision-hash=yb-tserver-69d95c5685
                  heritage=Helm
                  release=yb-demo
                  statefulset.kubernetes.io/pod-name=yb-tserver-0
Annotations:      cni.projectcalico.org/containerID: f893b830e319234281af0296b433c4af7e29f1ddb367d9cdd23971824d786243
                  cni.projectcalico.org/podIP: 10.244.38.135/32
                  cni.projectcalico.org/podIPs: 10.244.38.135/32
Status:           Running
IP:               10.244.38.135
IPs:
  IP:           10.244.38.135
Controlled By:  StatefulSet/yb-tserver
Containers:
  yb-tserver:
    Container ID:  containerd://d8694c5d623ba3539ad0ab0513e2ab8c643d580f36c7ae3b1f4c7e7ace5d7f8b
    Image:         yugabytedb/yugabyte:2.17.1.0-b439
    Image ID:      docker.io/yugabytedb/yugabyte@sha256:ed09f14588cb8cda772ec344dc1e5beb19aeed710fca41600f3a79a2c19773c0
    Ports:         9000/TCP, 12000/TCP, 11000/TCP, 13000/TCP, 9100/TCP, 6379/TCP, 9042/TCP, 5433/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Command:
      /sbin/tini
      --
    Args:
      /bin/bash
      -c
      touch "/mnt/disk0/disk.check" "/mnt/disk1/disk.check" && \
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="$(HOSTNAME).yb-tservers.$(NAMESPACE).svc.cluster.local" \
          --port="9100"
      fi && \
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="$(HOSTNAME).yb-tservers.$(NAMESPACE).svc.cluster.local:9100" \
          --port="9100"
      fi && \
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="0.0.0.0" \
          --port="9000"
      fi && \
      if [[ -f /home/yugabyte/tools/k8s_parent.py ]]; then
        k8s_parent="/home/yugabyte/tools/k8s_parent.py"
      else
        k8s_parent=""
      fi && \
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="$(HOSTNAME).yb-tservers.$(NAMESPACE).svc.cluster.local" \
          --port="9042"
      fi && \
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="0.0.0.0:5433" \
          --port="5433"
      fi && \
      exec ${k8s_parent} /home/yugabyte/bin/yb-tserver \
        --fs_data_dirs=/mnt/disk0,/mnt/disk1 \
        --tserver_master_addrs=yb-master-0.yb-masters.$(NAMESPACE).svc.cluster.local:7100,yb-master-1.yb-masters.$(NAMESPACE).svc.cluster.local:7100,yb-master-2.yb-masters.$(NAMESPACE).svc.cluster.local:7100 \
        --metric_node_name=$(HOSTNAME) \
        --memory_limit_hard_bytes=3649044480 \
        --stderrthreshold=0 \
        --num_cpus=2 \
        --undefok=num_cpus,enable_ysql \
        --use_node_hostname_for_local_tserver=true \
        --rpc_bind_addresses=$(HOSTNAME).yb-tservers.$(NAMESPACE).svc.cluster.local \
        --server_broadcast_addresses=$(HOSTNAME).yb-tservers.$(NAMESPACE).svc.cluster.local:9100 \
        --webserver_interface=0.0.0.0 \
        --enable_ysql=true \
        --pgsql_proxy_bind_address=0.0.0.0:5433 \
        --cql_proxy_bind_address=$(HOSTNAME).yb-tservers.$(NAMESPACE).svc.cluster.local
      
    State:          Running
      Started:      Thu, 09 Feb 2023 22:14:45 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  4Gi
    Requests:
      cpu:     2
      memory:  4Gi
    Liveness:  exec [bash -c touch "/mnt/disk0/disk.check" "/mnt/disk1/disk.check"] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_IP:                  (v1:status.podIP)
      HOSTNAME:               yb-tserver-0 (v1:metadata.name)
      NAMESPACE:              yb-demo (v1:metadata.namespace)
      YBDEVOPS_CORECOPY_DIR:  /mnt/disk0/cores
    Mounts:
      /mnt/disk0 from datadir0 (rw)
      /mnt/disk1 from datadir1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6cmn (ro)
  yb-cleanup:
    Container ID:  containerd://c43c77eefdc845884aac8bb560bf1ea180ebf932b50ec6d57856e66c806642d3
    Image:         yugabytedb/yugabyte:2.17.1.0-b439
    Image ID:      docker.io/yugabytedb/yugabyte@sha256:ed09f14588cb8cda772ec344dc1e5beb19aeed710fca41600f3a79a2c19773c0
    Port:          <none>
    Host Port:     <none>
    Command:
      /sbin/tini
      --
    Args:
      /bin/bash
      -c
      while true; do
        sleep 3600;
        /home/yugabyte/scripts/log_cleanup.sh;
      done
      
    State:          Running
      Started:      Thu, 09 Feb 2023 22:14:45 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      USER:  yugabyte
    Mounts:
      /home/yugabyte/ from datadir0 (rw,path="yb-data")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6cmn (ro)
      /var/yugabyte/cores from datadir0 (rw,path="cores")
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  datadir1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir1-yb-tserver-0
    ReadOnly:   false
  datadir0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir0-yb-tserver-0
    ReadOnly:   false
  kube-api-access-x6cmn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  23m   default-scheduler  Successfully assigned yb-demo/yb-tserver-0 to b8615d15-a0af-4d0f-9c60-455880de8e76
  Normal  Pulling    23m   kubelet            Pulling image "yugabytedb/yugabyte:2.17.1.0-b439"
  Normal  Pulled     22m   kubelet            Successfully pulled image "yugabytedb/yugabyte:2.17.1.0-b439" in 25.578555117s (25.578633289s including waiting)
  Normal  Created    22m   kubelet            Created container yb-tserver
  Normal  Started    22m   kubelet            Started container yb-tserver
  Normal  Pulled     22m   kubelet            Container image "yugabytedb/yugabyte:2.17.1.0-b439" already present on machine
  Normal  Created    22m   kubelet            Created container yb-cleanup
  Normal  Started    22m   kubelet            Started container yb-cleanup

FYI, I installed the cluster using k0s with defaults, except I'm using Calico for networking.

TLS does not seem to be working

Perhaps I'm doing something wrong, but if TLS is enabled in the chart and I try to connect to it like the following:

ysqlsh "sslmode=require host=XXX.XXX.X.XXX dbname=yugabyte"

I get the following error:

ysqlsh: server does not support SSL, but SSL was required

Add readiness/liveness probes

Readiness probes determine whether or not the given pod receives traffic as part of a Kubernetes Service.

Liveness probes cause the container to restart after a failure threshold.
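
As a starting point, something like this could be added to the tserver container in the StatefulSet template; the ports and paths match the chart's defaults shown in other issues here, and the thresholds are illustrative:

readinessProbe:
  tcpSocket:
    port: 9042          # YCQL port; traffic is routed only once the server accepts connections
  initialDelaySeconds: 30
  periodSeconds: 10
livenessProbe:
  exec:
    command: ["bash", "-c", "touch /mnt/disk0/disk.check"]
  periodSeconds: 10
  failureThreshold: 3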

Missing Namespace in Service resources

The namespace that can be passed as a Helm value isn't picked up in the Service resources of the chart:

apiVersion: v1
kind: Service
metadata:
  name: {{ $root.Values.oldNamingStyle | ternary $endpoint.name (printf "%s-%s" (include "yugabyte.fullname" $root) $endpoint.name) | quote }}
  annotations:
...

=>

apiVersion: v1
kind: Service
metadata:
  name: {{ $root.Values.oldNamingStyle | ternary $endpoint.name (printf "%s-%s" (include "yugabyte.fullname" $root) $endpoint.name) | quote }}
  namespace: "{{ $root.Release.Namespace }}"
  annotations:
...

I was expecting all resources to be part of the namespace I'm specifying in the chart.

Similar to #64

Can't enable TLS, clientToServer seems to be ignored

Hello,

I am trying to enable TLS using the following conf.yaml file:

[...]
tls:
  # Set to true to enable the TLS.
  enabled: true
  nodeToNode: true
  clientToServer: true
  # Set to false to disallow any service with unencrypted communication from joining this cluster
  insecure: false
  # Set enabled to true to use cert-manager instead of providing your own rootCA
  certManager:
    enabled: false
    # Will create own ca certificate and issuer when set to true
    bootstrapSelfsigned: true
    # Use ClusterIssuer when set to true, otherwise use Issuer
    useClusterIssuer: false
    # Name of ClusterIssuer to use when useClusterIssuer is true
    clusterIssuer: cluster-ca
    # Name of Issuer to use when useClusterIssuer is false
    issuer: yugabyte-ca
    certificates:
      # The lifetime before cert-manager will issue a new certificate.
      # The re-issued certificates will not be automatically reloaded by the service.
      # It is necessary to provide some external means of restarting the pods.
      duration: 2160h # 90d
      renewBefore: 360h # 15d
      algorithm: ECDSA # ECDSA or RSA
      # Can be 2048, 4096 or 8192 for RSA
      # Or 256, 384 or 521 for ECDSA
      keySize: 521

  ## When certManager.enabled=false, rootCA.cert and rootCA.key are used to generate TLS certs.
  ## When certManager.enabled=true and boostrapSelfsigned=true, rootCA is ignored.
  ## When certManager.enabled=true and bootstrapSelfsigned=false, only rootCA.cert is used
  ## to verify TLS certs generated and signed by the external provider.
  rootCA:
    cert: "..."
    key: "..."
  ## When tls.certManager.enabled=false
  ## nodeCert and clientCert will be used only when rootCA.key is empty.
  ## Will be ignored and genSignedCert will be used to generate
  ## node and client certs if rootCA.key is provided.
  ## cert and key are base64 encoded content of certificate and key.
  nodeCert:
    cert: ""
    key: ""
  clientCert:
    cert: ""
    key: ""
[...]

Everything is starting fine, but TLS encryption doesn't seem to be enabled. I see the following information in the web UI:
(screenshot)

If I look at the TLS Settings page in the web UI, I notice that client-to-server encryption is not enabled (is it the same information as in the previous screenshot?):
(screenshot)

So, the clientToServer parameter in yaml seems to be ignored.

Maybe I misunderstand something...

Thank you for reading; any help would be appreciated!

New Chart release

Hi guys,

This PR fixed a problem with the busybox image being hardcoded:
#54

But when I pull the chart from the Helm repo, I still get the old busybox image.

Is it possible to create a new release of the chart? When will it be available?

If I create a PR to update the version in Chart.yaml to 2.7.1, will it automatically create a new release?

Thank you.

TLS + authCredentials leads to yugabyte-setup-credentials job failure

Versions

  • helm chart: 1.18.0
  • yugabyte: 2.18.0.1-b4

Details

The following configuration prevents the yugabyte-setup-credentials job from finishing successfully:

authCredentials:
  ycql:
    password: xxx
    user: yugabyte
tls:
  enabled: true
  nodeToNode: true
  clientToServer: true
  insecure: false
  certManager:
    enabled: true

I have confirmed that enabling only TLS or only authCredentials does work, but the combination of the two together fails.
