openebs / charts
OpenEBS Helm Charts and other utilities
License: Apache License 2.0
The imageTag override for the jiva subchart should not use an older version than what is defined in
https://github.com/openebs/jiva-operator/blob/v3.6.0/deploy/helm/charts/values.yaml#L6
According to the jiva chart, it is currently using version 3.4.0 (ref:
https://github.com/openebs/charts/blob/openebs-3.10.0/charts/openebs/values.yaml#L644)
At the moment, the lint-and-test and chart-releaser workflows run in parallel once the PR is merged. This should ideally change so that the chart releaser runs only after lint and test have passed.
Hi,
Is it possible to add an allowVolumeExpansion helm parameter for the device storage class?
https://github.com/openebs/device-localpv
Thanks :)
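For reference, until the chart exposes such a parameter, the StorageClass can be created by hand; a minimal sketch of what the flag would look like (the devname parameter and the driver name are taken from my reading of the device-localpv repo and should be double-checked there):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device-sc
allowVolumeExpansion: true      # the flag this issue asks the chart to expose
parameters:
  devname: "test-device"        # illustrative device name
provisioner: device.csi.openebs.io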
We have this for the config map right now:
filterconfigs:
  - key: os-disk-exclude-filter
    name: os disk exclude filter
    state: {{ .Values.ndm.filters.enableOsDiskExcludeFilter }}
    exclude: "/,/etc/hosts,/boot"
Shouldn't {{ .Values.ndm.filters.enableOsDiskExcludeFilter }} go to exclude and not to state? Something like:
filterconfigs:
  - key: os-disk-exclude-filter
    name: os disk exclude filter
    state: {{ .Values.ndm.filters.enableOsDiskState }}   # or not updatable at all
    exclude: {{ .Values.ndm.filters.enableOsDiskExcludeFilter }}   # ???
The helm chart should support deployment in a completely offline environment with manually loaded docker images.
However, some imagePullPolicy fields are hardcoded to 'Always' and will not work in an environment without internet access.
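For illustration, in an air-gapped setup one would want every component to honour something like the following (the key names below are illustrative; the point of this issue is that not all of them exist or are respected today):
ndm:
  imagePullPolicy: IfNotPresent        # illustrative key, assuming the subchart exposes it
localprovisioner:
  imagePullPolicy: IfNotPresent        # illustrative key, assuming the chart exposes it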
With the default values, simply enabling the exporter (ndmExporter.enabled, which defaults to false) results in the following in the openebs-ndm-cluster-exporter deployment:
chart: openebs-3.5.0
heritage: Helm
openebs.io/version: "3.5.0"
app: openebs-ndm-cluster-exporter
release: openebs
component: openebs-ndm-cluster-exporter
name: openebs-ndm-node-exporter
openebs.io/component-name: openebs-ndm-cluster-exporter
name: openebs-ndm-cluster-exporter
which includes the name field twice, causing issues such as the Flux helm-controller failing to install the chart.
I imported the latest Grafana dashboards and updated the datasource to Prometheus.
I get this error in the localpv-dashboard:
http://10.1.7.114:30006/d/2e59785a-af05-465e-b9a3-fca65a0e8572/localpv-dashboard?orgId=1
The volume-info panel renders its raw HTML/JavaScript (window.onload = load('','','','',''); with getCell(...) building the PVC / Namespace / CAS Type / Storage Class table) instead of the volume details.
In the storage-class-volume-dashboard
http://10.1.7.114:30006/d/2795676f-d98f-4ba5-b424-b8b0e98ad7bd/storage-class-volume-dashboard?orgId=1&refresh=1m
there are no storage classes in the dropdown, although I have several in my cluster:
root@test-pcl114:~# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
openebs-cstor-sparse openebs.io/provisioner-iscsi Delete Immediate false 27d
openebs-device openebs.io/local Delete WaitForFirstConsumer false 88d
openebs-hostpath openebs.io/local Delete WaitForFirstConsumer false 88d
openebs-jiva-default openebs.io/provisioner-iscsi Delete Immediate false 88d
openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter Delete Immediate false 88d
sc-iep-localpv cstor.csi.openebs.io Delete Immediate true 83d
sc-iep-mirror cstor.csi.openebs.io Delete Immediate true 87d
sc-minio cstor.csi.openebs.io Delete WaitForFirstConsumer true 87d
sc-rocketchat cstor.csi.openebs.io Delete Immediate true 86d
root@test-pcl114:~#
In storage-pool-claim-dashboard
http://10.1.7.114:30006/d/23c14553-6c62-41b6-9eb3-c0e75531e617/storage-pool-claim-dashboard?orgId=1
root@test-pcl114:~# kubectl get cspc -n openebs
NAME HEALTHYINSTANCES PROVISIONEDINSTANCES DESIREDINSTANCES AGE
cspc-iep-localpv 1 1 1 83d
cspc-iep-mirror 1 1 1 87d
cspc-minio 1 1 1 87d
cspc-rocketchat 1 1 1 87d
root@test-pcl114:~#
In storage-pool-dashboard
http://10.1.7.114:30006/d/5d86b1fd-b2e7-4bb2-befa-4dae5b6167d6/storage-pool-dashboard?orgId=1
the pool-info panel likewise renders its raw HTML/JavaScript (window.onload = load('', '',''); getCell(...)) instead of the pool details.
In the volume-tiled-view-dashboard
http://10.1.7.114:30006/d/f46a2d03-c2af-4954-b2b3-307c4d77bcaf/volume-tiled-view-dashboard?orgId=1&refresh=1m
the tile panel also renders raw HTML/JavaScript (window.onload = load('', '', '', ' 1', ' 1143714'); getCell(...)) instead of the tiled volume view.
Error Message:
create Pod nginx-0 in StatefulSet nginx failed error: failed to create PVC www-nginx-0: Internal error occurred: failed calling webhook "admission-webhook.openebs.io": Post "https://admission-server-svc.openebs.svc:443/validate?timeout=5s": context deadline exceeded
Env: Rancher k8s 1.19.8
Hi, we were using https://openebs.github.io/dynamic-nfs-provisioner as the chart repo and that stopped working this morning.
We figured out we need to update to https://openebs-archive.github.io/dynamic-nfs-provisioner
I don't know if GH Pages supports redirects or anything that might have made this easier?
Parts of the docs suggest mayastor should now be considered beta, and for newcomers wishing to try it out with all the integrated features that the helm chart brings, not having helm support for mayastor adds significant challenges. Is there a timeline for introducing mayastor support, or is the answer simply "when it's GA"?
To install Local PV hostpath and device, we need to run a command like:
helm install openebs openebs/openebs --namespace openebs --create-namespace \
--set localprovisioner.enabled=false \
--set ndm.enabled=false \
--set ndmOperator.enabled=false \
--set openebs-ndm.enabled=true \
--set legacy.enabled=false \
--set localpv-provisioner.enabled=true
Can the above be enhanced to handle the mutually exclusive options internally? For example:
If openebs-ndm.enabled=true, then automatically treat ndm.enabled and ndmOperator.enabled as false.
If localpv-provisioner.enabled=true, then treat localprovisioner.enabled as false.
It sounds like we need to install the legacy ndm components only if ndm.enabled=true and openebs-ndm.enabled=false.
By taking care of the mutually exclusive flags (see the sketch after the command below), the command to install Local PV becomes:
helm install openebs openebs/openebs --namespace openebs --create-namespace \
--set legacy.enabled=false \
--set openebs-ndm.enabled=true \
--set localpv-provisioner.enabled=true
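One way the chart could resolve this internally is to derive the legacy flags from the newer subchart flags in its templates; a rough sketch (the helper name and exact logic are hypothetical, this is not the chart's current code):
{{/* _helpers.tpl (hypothetical): only render the legacy NDM resources when the new openebs-ndm subchart is off */}}
{{- define "openebs.legacyNdmEnabled" -}}
{{- if and .Values.ndm.enabled (not (index .Values "openebs-ndm" "enabled")) -}}true{{- else -}}false{{- end -}}
{{- end -}}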
This configuration file was previously applied on one side of the application, but it was recently found to be invalid. Where can I find it again?
The new cleanup-webhook.yaml has an issue when the chart is deployed with .Values.webhook.tolerations defined. The chart uses the tolerations via:
{{- if .Values.webhook.tolerations }}
tolerations:
{{ toYaml .Values.webhook.tolerations | indent 8 }}
{{- end }}
When rendered, this indents the tolerations incorrectly due to the spaces at the start of the 'toYaml' line (see deployment-admission-server.yaml for how this should be used).
However, that said, does this actually need to pull in tolerations? This is a template for a Job; it does not have a nodeSelector that might need accompanying tolerations, so I suspect the four lines above could simply be removed.
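If the tolerations are kept, a common way to avoid the indentation problem is to let nindent handle it; a sketch of what the fixed block could look like (the exact indent depth depends on where the block sits in the Job's pod spec):
      {{- with .Values.webhook.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}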
Is there a way to do this with the chart?
https://docs.openebs.io/docs/next/uglocalpv-hostpath.html#create-storageclass
It would be nice to be able to customize the StorageClass, or at least to have a way to add a NodeAffinityLabel.
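For context, what is being asked for is roughly the following StorageClass, which today has to be created outside the chart (a sketch; the NodeAffinityLabel key name and value format come from the request and the linked docs and may differ by provisioner version):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath-custom
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/local-hostpath
      - name: NodeAffinityLabel          # the option this issue wants the chart to expose
        value: "openebs.io/nodeid"       # illustrative label key
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer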
Please publish OpenEBS to a Helm OCI repository as well.
This is slowly becoming the new and faster standard as of 2024.
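For context, consuming an OCI-published chart would look roughly like this (the registry path is purely illustrative; no such location exists yet):
helm install openebs oci://registry.example.com/openebs/openebs --version 3.10.0 \
  --namespace openebs --create-namespace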
Normal error 89s (x8 over 2m33s) helm-controller reconciliation failed: Helm upgrade failed: error while running post render on files: map[string]interface {}(nil): yaml: unmarshal errors:
I get this when I enable:
ndmExporter:
enabled: true
There is an option to change the default StorageClass in a Kubernetes cluster, and it is as easy as adding a specific annotation to the StorageClass; see the link below 👇:
👀 https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/
We (w/ @Dentrax @eminaktas @erkanzileli) thought that it would be nice to have an option to enable this support via the localpvprovisioner section in values.yaml.
👀 https://github.com/openebs/charts/blob/main/charts/openebs/values.yaml#L95
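For reference, the change only needs the standard annotation on the rendered StorageClass, so the proposed option would boil down to something like this (the values key is the proposal, not an existing flag):
localprovisioner:
  hostpathClass:
    isDefaultClass: true        # proposed flag, does not exist in the chart yet
which would render on the hostpath StorageClass as:
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"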
Hi Community!
Our team just found a possible security issue while reading the code. We have reported it to the maintainers listed in MAINTAINERS via email, but have not received a reply yet. The title of the email is the same as the title of this issue, and at the end of the email we attached detailed information about the issue.
Looking forward to your reply.
Kind regards,
Seona
Hi
Not sure if this is the right place, but I'll try it anyway. What do I lose if I deactivate the admission server? I understand what an admission controller does generally, which is to "validate" k8s objects, but I can't find information in the documentation about what exactly this admission controller does.
Thanks for your time.
PS: The chart and OpenEBS are very cool projects, thanks!
@prateekpandey14 you need to add allowed_topologies as an env for LVM operator yaml. https://github.com/openebs/lvm-localpv/blob/master/deploy/lvm-operator.yaml#L1374.
Originally posted by @pawanpraka1 in #231 (comment)
Hi,
I am trying to install the openebs chart (version 3.1.0) on a specific node of my cluster.
The options for tolerations and affinity are not available for openebs-ndm:ndmExporter.
Every other component I am using has these options available.
It would be great if this could be accommodated in the next release.
Thanks,
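For illustration, what is missing is something along these lines under the exporter values (a hypothetical layout mirroring the other components; these keys do not exist today):
ndmExporter:
  tolerations:                           # hypothetical key
    - key: node-role.kubernetes.io/storage
      operator: Exists
      effect: NoSchedule
  affinity: {}                           # hypothetical key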
Would be really nice to have it. Details here: https://coreos.com/blog/the-prometheus-operator.html
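Presumably this means shipping Prometheus Operator objects (ServiceMonitor/PodMonitor) with the chart; a rough sketch of what a ServiceMonitor for one of the exporters could look like (the name, label and port name are assumptions):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: openebs-ndm-node-exporter        # assumed name
  namespace: openebs
spec:
  selector:
    matchLabels:
      name: openebs-ndm-node-exporter    # assumed service label
  endpoints:
    - port: metrics                      # assumed port name
      interval: 30s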
Hello,
the helm chart tries to reference node-disk-manager:v0.9.0, but one gets a report that it is unauthorized.
If one changes the image to node-disk-manager-amd64, it gets pulled.
I started with a new cluster following these steps:
https://github.com/jadsy2107/k8s-ha-cluster
from: https://openebs.github.io/cstor-operators/
I installed the openebs-cstor/cstor helm chart.
The pods would not connect to their PVCs until I found someone's comment that it was fixed in 3.4.0:
openebs-archive/cstor-operators#435
https://openebs.github.io/charts/cstor-operator.yaml comes with 3.3.0
I did a find-and-replace on every Deployment and DaemonSet in the namespace.
Working now!
helm template openebs openebs/openebs --version 3.9.0 --set localprovisioner.basePath="/opt/custom/path" | grep -A 1 'name: OPENEBS_IO_BASE_PATH'
Output
- name: OPENEBS_IO_BASE_PATH
value: "/opt/custom/path"
helm template openebs openebs/openebs --version 3.9.0 --set localprovisioner.basePath="/opt/custom/path" --set mayastor.enabled=true | grep -A 1 'name: OPENEBS_IO_BASE_PATH'
Output
- name: OPENEBS_IO_BASE_PATH
value: "/var/openebs/local"
Expected output:
- name: OPENEBS_IO_BASE_PATH
value: "/opt/custom/path"
What steps did you take and what happened:
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
on an 8-node Raspberry Pi 4 8GB (arm64) microk8s cluster.
What did you expect to happen:
#
$ kubectl get pods -n openebs -o wide
NAME READY STATUS RESTARTS AGE
openebs-ndm-x6q6g 1/1 Running 0 18h
openebs-ndm-zl9jc 1/1 Running 0 18h
openebs-ndm-operator-7d69c98987-lz2cc 1/1 Running 0 18h
openebs-provisioner-74b57cbdbd-g4tws 1/1 Running 0 18h
openebs-snapshot-operator-5b9dfd4fcd-hnbnn 2/2 Running 0 18h
openebs-admission-server-789b9d6dbd-t2hkt 1/1 Running 1 18h
openebs-localpv-provisioner-776b54f698-flfh4 1/1 Running 0 18h
maya-apiserver-6f79bb87bd-58kp7 0/1 Running 0 15h
The output of the following commands will help us better understand what's going on:
# Describe pod so we can see the reason (timeout on liveness probe)
$ kubectl describe pod -n openebs maya-apiserver-6f79bb87bd-58kp7
Name: maya-apiserver-6f79bb87bd-58kp7
Namespace: openebs
Priority: 0
Node: node-08/10.0.19.18
Start Time: Wed, 03 Mar 2021 19:25:00 +0000
Labels: name=maya-apiserver
openebs.io/component-name=maya-apiserver
openebs.io/version=2.6.0
pod-template-hash=6f79bb87bd
Annotations: cni.projectcalico.org/podIP: 10.1.251.215/32
cni.projectcalico.org/podIPs: 10.1.251.215/32
Status: Running
IP: 10.1.251.215
IPs:
IP: 10.1.251.215
Controlled By: ReplicaSet/maya-apiserver-6f79bb87bd
Containers:
maya-apiserver:
Container ID: containerd://aacdb0533b88a39d3ca1d06a4be1fcf7e06b9c2c7a605874758d5131118e4b04
Image: registry.tekqube.lan/openebs/m-apiserver:2.6.0
Image ID: registry.tekqube.lan/openebs/m-apiserver@sha256:16f2a6d8a20d28d1326bae83e7adac2db05b5388b6a10c43e22d93153b963b9d
Port: 5656/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 03 Mar 2021 19:25:16 +0000
Ready: False
Restart Count: 0
Liveness: exec [/usr/local/bin/mayactl version] delay=30s timeout=1s period=60s #success=1 #failure=3
Readiness: exec [/usr/local/bin/mayactl version] delay=30s timeout=1s period=60s #success=1 #failure=3
Environment:
OPENEBS_NAMESPACE: openebs (v1:metadata.namespace)
OPENEBS_SERVICE_ACCOUNT: (v1:spec.serviceAccountName)
OPENEBS_MAYA_POD_NAME: maya-apiserver-6f79bb87bd-58kp7 (v1:metadata.name)
OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG: true
OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL: false
OPENEBS_IO_JIVA_CONTROLLER_IMAGE: openebs/jiva:2.6.0
OPENEBS_IO_JIVA_REPLICA_IMAGE: openebs/jiva:2.6.0
OPENEBS_IO_JIVA_REPLICA_COUNT: 3
OPENEBS_IO_CSTOR_TARGET_IMAGE: openebs/cstor-istgt:2.6.0
OPENEBS_IO_CSTOR_POOL_IMAGE: openebs/cstor-pool:2.6.0
OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE: openebs/cstor-pool-mgmt:2.6.0
OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE: openebs/cstor-volume-mgmt:2.6.0
OPENEBS_IO_VOLUME_MONITOR_IMAGE: openebs/m-exporter:2.6.0
OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE: openebs/m-exporter:2.6.0
OPENEBS_IO_HELPER_IMAGE: openebs/linux-utils:2.6.0
OPENEBS_IO_ENABLE_ANALYTICS: false
OPENEBS_IO_INSTALLER_TYPE: openebs-operator
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from openebs-maya-operator-token-bt5rb (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
openebs-maya-operator-token-bt5rb:
Type: Secret (a volume populated by a Secret)
SecretName: openebs-maya-operator-token-bt5rb
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 3m47s (x910 over 15h) kubelet Liveness probe errored: rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 1s exceeded: context deadline exceeded
# Get logs for startup api server
$ kubectl logs -n openebs maya-apiserver-6f79bb87bd-58kp7
+ MAYA_API_SERVER_NETWORK=eth0
+ ip -4 addr show scope global dev eth0
+ grep inet
+ awk '{print $2}'
+ cut -d / -f 1
+ CONTAINER_IP_ADDR=10.1.251.215
+ exec /usr/local/bin/maya-apiserver start '--bind=10.1.251.215'
I0303 19:25:16.338105 1 start.go:148] Initializing maya-apiserver...
I0303 19:25:16.529286 1 start.go:279] Starting maya api server ...
I0303 19:25:20.610703 1 start.go:288] resources applied successfully by installer
I0303 19:25:20.704653 1 start.go:193] Maya api server configuration:
I0303 19:25:20.704744 1 start.go:195] Log Level: INFO
I0303 19:25:20.704782 1 start.go:195] Region: global (DC: dc1)
I0303 19:25:20.704816 1 start.go:195] Version: 2.6.0-released
I0303 19:25:20.704846 1 start.go:201]
I0303 19:25:20.704876 1 start.go:204] Maya api server started! Log data will stream in below:
I0303 19:25:20.713460 1 runner.go:37] Starting SPC controller
I0303 19:25:20.713523 1 runner.go:40] Waiting for informer caches to sync
I0303 19:25:20.913834 1 runner.go:45] Checking for preupgrade tasks
I0303 19:25:20.948354 1 runner.go:51] Starting SPC workers
I0303 19:25:20.948478 1 runner.go:58] Started SPC workers
Anything else you would like to add:
When attaching with a shell to the running pod, I can confirm the liveness probe fails since the command takes around 12 seconds (!) to finish:
$ time mayactl version
Version: 2.6.0-released
Git commit: 519dc0e567d77f3573e4e5b8096f1450e8928f54
GO Version: go1.14.7
GO ARCH: arm64
GO OS: linux
m-apiserver url: http://10.1.251.215:5656
m-apiserver status: running
real 0m12.066s
user 0m0.012s
sys 0m0.046s
When I run the same command, but specify the server and port using parameters, the command finishes within milliseconds:
$ time mayactl -m 10.1.251.215 -p 5656 version
Version: 2.6.0-released
Git commit: 519dc0e567d77f3573e4e5b8096f1450e8928f54
GO Version: go1.14.7
GO ARCH: arm64
GO OS: linux
m-apiserver url: http://10.1.251.215:5656
m-apiserver status: running
real 0m0.037s
user 0m0.016s
sys 0m0.023s
I assume the CLI uses some of the environment variables to connect to its status endpoint; I'm not sure what has been set incorrectly. I have included the environment below:
$ env
KUBERNETES_SERVICE_PORT_HTTPS=443
OPENEBS_NAMESPACE=openebs
KUBERNETES_SERVICE_PORT=443
OPENEBS_IO_ENABLE_ANALYTICS=false
HOSTNAME=maya-apiserver-6f79bb87bd-58kp7
OPENEBS_IO_CSTOR_TARGET_IMAGE=openebs/cstor-istgt:2.6.0
OPENEBS_MAYA_POD_NAME=maya-apiserver-6f79bb87bd-58kp7
ADMISSION_SERVER_SVC_PORT_443_TCP=tcp://10.152.183.18:443
ADMISSION_SERVER_SVC_SERVICE_HOST=10.152.183.18
MAYA_APISERVER_SERVICE_SERVICE_HOST=10.152.183.216
ADMISSION_SERVER_SVC_PORT=tcp://10.152.183.18:443
PWD=/
MAYA_APISERVER_SERVICE_PORT_5656_TCP_PROTO=tcp
OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE=openebs/cstor-pool-mgmt:2.6.0
OPENEBS_IO_HELPER_IMAGE=openebs/linux-utils:2.6.0
OPENEBS_SERVICE_ACCOUNT=openebs-maya-operator
HOME=/root
OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE=openebs/cstor-volume-mgmt:2.6.0
OPENEBS_IO_JIVA_CONTROLLER_IMAGE=openebs/jiva:2.6.0
KUBERNETES_PORT_443_TCP=tcp://10.152.183.1:443
MAYA_APISERVER_SERVICE_SERVICE_PORT=5656
MAYA_APISERVER_SERVICE_PORT_5656_TCP_ADDR=10.152.183.216
OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE=openebs/m-exporter:2.6.0
MAYA_APISERVER_SERVICE_PORT_5656_TCP=tcp://10.152.183.216:5656
MAYA_APISERVER_SERVICE_SERVICE_PORT_API=5656
OPENEBS_IO_CSTOR_POOL_IMAGE=openebs/cstor-pool:2.6.0
TERM=xterm
ADMISSION_SERVER_SVC_SERVICE_PORT=443
SHLVL=1
ADMISSION_SERVER_SVC_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
ADMISSION_SERVER_SVC_PORT_443_TCP_ADDR=10.152.183.18
KUBERNETES_PORT_443_TCP_ADDR=10.152.183.1
OPENEBS_IO_VOLUME_MONITOR_IMAGE=openebs/m-exporter:2.6.0
OPENEBS_IO_JIVA_REPLICA_COUNT=3
KUBERNETES_SERVICE_HOST=10.152.183.1
MAYA_APISERVER_SERVICE_PORT_5656_TCP_PORT=5656
KUBERNETES_PORT=tcp://10.152.183.1:443
KUBERNETES_PORT_443_TCP_PORT=443
ADMISSION_SERVER_SVC_PORT_443_TCP_PROTO=tcp
MAYA_API_SERVER_NETWORK=eth0
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL=false
MAYA_APISERVER_SERVICE_PORT=tcp://10.152.183.216:5656
OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG=true
OPENEBS_IO_JIVA_REPLICA_IMAGE=openebs/jiva:2.6.0
OPENEBS_IO_INSTALLER_TYPE=openebs-operator
_=/usr/bin/env
Environment:
Maya version: 2.6.0-released
OpenEBS version: 2.6.0
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.7-34+fa60fe11bf77d0", GitCommit:"fa60fe11bf77d0c591abbc397e178efe296f83f9", GitTreeState:"clean", BuildDate:"2021-02-11T20:46:36Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/arm64"}
Kubernetes installer & version: microk8s 1.19/stable
Cloud provider or hardware configuration: Raspberry Pi 4 8Gb arm64
OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
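Since the probes allow only 1s while the bare mayactl version call takes around 12s, one workaround is to raise the probe timeouts on the deployment; a sketch (the field paths follow the pod spec above, the 15s value is arbitrary):
kubectl -n openebs patch deployment maya-apiserver --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/timeoutSeconds", "value": 15},
  {"op": "replace", "path": "/spec/template/spec/containers/0/readinessProbe/timeoutSeconds", "value": 15}
]'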
The values for the deployment strategy are not exposed in the chart here.
In particular, setting maxUnavailable would be useful, since Helm otherwise considers the deployment done early (Ref). When installing a new cluster with the local-pv-provisioner alongside other charts, this causes those charts to be installed before openebs is actually available. If those charts need volumes, their PVCs get stuck in Pending because the request is not reprocessed after openebs finishes installing.
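For illustration, exposing the strategy could look like this in values.yaml (the key is the proposal, not an existing option; maxUnavailable: 0 makes Helm wait for every replica to be ready):
localpv-provisioner:
  deploymentStrategy:            # proposed key
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1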
Hi,
It seems that the helm chart configuration is not fully functional on PSP-enabled clusters (tested with rke2, CIS 1.5/1.6 profiles enabled):
values.yaml
rbac:
  pspEnabled: true
jiva:
  enabled: true
  rbac:
    pspEnabled: true
  openebsLocalpv:
    enabled: true
  localpv-provisioner:
    openebsNDM:
      enabled: true
cstor:
  enabled: true
  rbac:
    pspEnabled: true
  openebsNDM:
    enabled: true
openebs-ndm:
  enabled: true
localpv-provisioner:
  enabled: true
  rbac:
    pspEnabled: true
  openebsNDM:
    enabled: true
kubectl get pods -o wide
openebs openebs-admission-server-7dd88c6b6-46knl 1/1 Running 0 11m 10.42.0.30 node-01 <none> <none>
openebs openebs-apiserver-7bd8cc7c5c-rfbqb 1/1 Running 0 11m 10.42.0.31 node-01 <none> <none>
openebs openebs-cstor-admission-server-66b7895495-d66r7 0/1 CreateContainerConfigError 0 11m 10.42.0.27 node-01 <none> <none>
openebs openebs-cstor-csi-controller-0 0/6 CreateContainerConfigError 0 10m 10.42.0.33 node-01 <none> <none>
openebs openebs-cstor-csi-node-d87nx 2/2 Running 0 10m 10.0.0.1 node-01 <none> <none>
openebs openebs-cstor-csi-node-l5zrq 2/2 Running 0 10m 10.0.0.2 node-02 <none> <none>
openebs openebs-cstor-cspc-operator-759cf9cb8c-twtfg 0/1 CreateContainerConfigError 0 11m 10.42.1.15 node-02 <none> <none>
openebs openebs-cstor-cvc-operator-fb68fbdb6-nfmtq 0/1 CreateContainerConfigError 0 11m 10.42.0.28 node-01 <none> <none>
openebs openebs-jiva-csi-controller-0 0/5 CreateContainerConfigError 0 10m 10.42.1.18 node-02 <none> <none>
openebs openebs-jiva-operator-6d44c67df9-645cg 1/1 Running 0 11m 10.42.1.16 node-02 <none> <none>
openebs openebs-localpv-provisioner-7bf58d4896-jbc7h 1/1 Running 0 10m 10.42.0.34 node-01 <none> <none>
openebs openebs-ndm-operator-7d6955f6f5-55vzz 0/1 CreateContainerConfigError 0 11m 10.42.0.26 node-01 <none> <none>
openebs openebs-provisioner-554d6bb8db-hqgsr 1/1 Running 0 11m 10.42.1.17 node-02 <none> <none>
openebs openebs-snapshot-operator-b76fb87b8-n9jnf 2/2 Running 0 11m 10.42.0.32 node-01 <none> <none>
kubectl get events -w
LAST SEEN TYPE REASON OBJECT MESSAGE
4m Normal Scheduled pod/openebs-admission-server-7dd88c6b6-46knl Successfully assigned openebs/openebs-admission-server-7dd88c6b6-46knl to node-01
3m59s Normal Pulling pod/openebs-admission-server-7dd88c6b6-46knl Pulling image "openebs/admission-server:2.12.1"
3m52s Normal Pulled pod/openebs-admission-server-7dd88c6b6-46knl Successfully pulled image "openebs/admission-server:2.12.1" in 6.850287365s
3m52s Normal Created pod/openebs-admission-server-7dd88c6b6-46knl Created container admission-webhook
3m52s Normal Started pod/openebs-admission-server-7dd88c6b6-46knl Started container admission-webhook
4m Normal SuccessfulCreate replicaset/openebs-admission-server-7dd88c6b6 Created pod: openebs-admission-server-7dd88c6b6-46knl
4m Normal ScalingReplicaSet deployment/openebs-admission-server Scaled up replica set openebs-admission-server-7dd88c6b6 to 1
4m Normal Scheduled pod/openebs-apiserver-7bd8cc7c5c-rfbqb Successfully assigned openebs/openebs-apiserver-7bd8cc7c5c-rfbqb to node-01
3m59s Normal Pulling pod/openebs-apiserver-7bd8cc7c5c-rfbqb Pulling image "openebs/m-apiserver:2.12.1"
3m54s Normal Pulled pod/openebs-apiserver-7bd8cc7c5c-rfbqb Successfully pulled image "openebs/m-apiserver:2.12.1" in 4.938507915s
3m54s Normal Created pod/openebs-apiserver-7bd8cc7c5c-rfbqb Created container openebs-apiserver
3m54s Normal Started pod/openebs-apiserver-7bd8cc7c5c-rfbqb Started container openebs-apiserver
4m Normal SuccessfulCreate replicaset/openebs-apiserver-7bd8cc7c5c Created pod: openebs-apiserver-7bd8cc7c5c-rfbqb
4m Normal ScalingReplicaSet deployment/openebs-apiserver Scaled up replica set openebs-apiserver-7bd8cc7c5c to 1
4m1s Normal Scheduled pod/openebs-cstor-admission-server-66b7895495-d66r7 Successfully assigned openebs/openebs-cstor-admission-server-66b7895495-d66r7 to node-01
4m1s Normal Pulling pod/openebs-cstor-admission-server-66b7895495-d66r7 Pulling image "openebs/cstor-webhook:2.12.0"
3m55s Normal Pulled pod/openebs-cstor-admission-server-66b7895495-d66r7 Successfully pulled image "openebs/cstor-webhook:2.12.0" in 5.85522673s
100s Warning Failed pod/openebs-cstor-admission-server-66b7895495-d66r7 Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-admission-server-66b7895495-d66r7_openebs(4b0f1ae2-47a3-44d3-a5fe-2c8fe34c5df7)", container: openebs-cstor-admission-webhook)
100s Normal Pulled pod/openebs-cstor-admission-server-66b7895495-d66r7 Container image "openebs/cstor-webhook:2.12.0" already present on machine
4m1s Normal SuccessfulCreate replicaset/openebs-cstor-admission-server-66b7895495 Created pod: openebs-cstor-admission-server-66b7895495-d66r7
4m1s Normal ScalingReplicaSet deployment/openebs-cstor-admission-server Scaled up replica set openebs-cstor-admission-server-66b7895495 to 1
3m59s Normal Scheduled pod/openebs-cstor-csi-controller-0 Successfully assigned openebs/openebs-cstor-csi-controller-0 to node-01
3m50s Normal Pulled pod/openebs-cstor-csi-controller-0 Container image "k8s.gcr.io/sig-storage/csi-resizer:v1.2.0" already present on machine
3m50s Warning Failed pod/openebs-cstor-csi-controller-0 Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-csi-controller-0_openebs(846ad592-109c-4974-bf6c-be0df89751fa)", container: csi-resizer)
3m50s Normal Pulled pod/openebs-cstor-csi-controller-0 Container image "k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3" already present on machine
3m50s Warning Failed pod/openebs-cstor-csi-controller-0 Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-csi-controller-0_openebs(846ad592-109c-4974-bf6c-be0df89751fa)", container: csi-snapshotter)
3m58s Normal Pulling pod/openebs-cstor-csi-controller-0 Pulling image "k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3"
3m57s Normal Pulled pod/openebs-cstor-csi-controller-0 Successfully pulled image "k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3" in 1.620208975s
3m50s Warning Failed pod/openebs-cstor-csi-controller-0 Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-csi-controller-0_openebs(846ad592-109c-4974-bf6c-be0df89751fa)", container: snapshot-controller)
3m57s Normal Pulling pod/openebs-cstor-csi-controller-0 Pulling image "k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0"
3m55s Normal Pulled pod/openebs-cstor-csi-controller-0 Successfully pulled image "k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0" in 1.793285999s
3m50s Warning Failed pod/openebs-cstor-csi-controller-0 Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-csi-controller-0_openebs(846ad592-109c-4974-bf6c-be0df89751fa)", container: csi-provisioner)
3m55s Normal Pulling pod/openebs-cstor-csi-controller-0 Pulling image "k8s.gcr.io/sig-storage/csi-attacher:v3.1.0"
3m53s Normal Pulled pod/openebs-cstor-csi-controller-0 Successfully pulled image "k8s.gcr.io/sig-storage/csi-attacher:v3.1.0" in 1.686984922s
3m53s Warning Failed pod/openebs-cstor-csi-controller-0 Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-csi-controller-0_openebs(846ad592-109c-4974-bf6c-be0df89751fa)", container: csi-attacher)
3m53s Normal Pulling pod/openebs-cstor-csi-controller-0 Pulling image "openebs/cstor-csi-driver:2.12.0"
3m50s Normal Pulled pod/openebs-cstor-csi-controller-0 Successfully pulled image "openebs/cstor-csi-driver:2.12.0" in 2.658414557s
3m50s Warning Failed pod/openebs-cstor-csi-controller-0 Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-csi-controller-0_openebs(846ad592-109c-4974-bf6c-be0df89751fa)", container: cstor-csi-plugin)
3m50s Normal Pulled pod/openebs-cstor-csi-controller-0 Container image "k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3" already present on machine
3m50s Normal Pulled pod/openebs-cstor-csi-controller-0 Container image "k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0" already present on machine
3m50s Normal Pulled pod/openebs-cstor-csi-controller-0 Container image "k8s.gcr.io/sig-storage/csi-attacher:v3.1.0" already present on machine
3m59s Warning FailedCreate statefulset/openebs-cstor-csi-controller create Pod openebs-cstor-csi-controller-0 in StatefulSet openebs-cstor-csi-controller failed error: pods "openebs-cstor-csi-controller-0" is forbidden: no PriorityClass with name openebs-cstor-csi-controller-critical was found
3m59s Normal SuccessfulCreate statefulset/openebs-cstor-csi-controller create Pod openebs-cstor-csi-controller-0 in StatefulSet openebs-cstor-csi-controller successful
3m57s Normal Scheduled pod/openebs-cstor-csi-node-d87nx Successfully assigned openebs/openebs-cstor-csi-node-d87nx to node-01
3m56s Normal Pulled pod/openebs-cstor-csi-node-d87nx Container image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0" already present on machine
3m56s Normal Created pod/openebs-cstor-csi-node-d87nx Created container csi-node-driver-registrar
3m56s Normal Started pod/openebs-cstor-csi-node-d87nx Started container csi-node-driver-registrar
3m56s Normal Pulling pod/openebs-cstor-csi-node-d87nx Pulling image "openebs/cstor-csi-driver:2.12.0"
3m51s Normal Pulled pod/openebs-cstor-csi-node-d87nx Successfully pulled image "openebs/cstor-csi-driver:2.12.0" in 5.070374526s
3m50s Normal Created pod/openebs-cstor-csi-node-d87nx Created container cstor-csi-plugin
3m50s Normal Started pod/openebs-cstor-csi-node-d87nx Started container cstor-csi-plugin
3m56s Normal Scheduled pod/openebs-cstor-csi-node-l5zrq Successfully assigned openebs/openebs-cstor-csi-node-l5zrq to node-02
3m56s Normal Pulled pod/openebs-cstor-csi-node-l5zrq Container image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0" already present on machine
3m56s Normal Created pod/openebs-cstor-csi-node-l5zrq Created container csi-node-driver-registrar
3m56s Normal Started pod/openebs-cstor-csi-node-l5zrq Started container csi-node-driver-registrar
3m56s Normal Pulling pod/openebs-cstor-csi-node-l5zrq Pulling image "openebs/cstor-csi-driver:2.12.0"
3m50s Normal Pulled pod/openebs-cstor-csi-node-l5zrq Successfully pulled image "openebs/cstor-csi-driver:2.12.0" in 5.707887193s
3m50s Normal Created pod/openebs-cstor-csi-node-l5zrq Created container cstor-csi-plugin
3m50s Normal Started pod/openebs-cstor-csi-node-l5zrq Started container cstor-csi-plugin
3m59s Warning FailedCreate daemonset/openebs-cstor-csi-node Error creating: pods "openebs-cstor-csi-node-" is forbidden: no PriorityClass with name openebs-cstor-csi-node-critical was found
3m57s Normal SuccessfulCreate daemonset/openebs-cstor-csi-node Created pod: openebs-cstor-csi-node-d87nx
3m57s Normal SuccessfulCreate daemonset/openebs-cstor-csi-node Created pod: openebs-cstor-csi-node-l5zrq
4m1s Normal Scheduled pod/openebs-cstor-cspc-operator-759cf9cb8c-twtfg Successfully assigned openebs/openebs-cstor-cspc-operator-759cf9cb8c-twtfg to node-02
4m1s Normal Pulling pod/openebs-cstor-cspc-operator-759cf9cb8c-twtfg Pulling image "openebs/cspc-operator:2.12.0"
3m57s Normal Pulled pod/openebs-cstor-cspc-operator-759cf9cb8c-twtfg Successfully pulled image "openebs/cspc-operator:2.12.0" in 3.738448341s
109s Warning Failed pod/openebs-cstor-cspc-operator-759cf9cb8c-twtfg Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-cspc-operator-759cf9cb8c-twtfg_openebs(cc4ceaea-1999-4164-85ce-e758259b01ee)", container: openebs-cstor-cspc-operator)
109s Normal Pulled pod/openebs-cstor-cspc-operator-759cf9cb8c-twtfg Container image "openebs/cspc-operator:2.12.0" already present on machine
4m1s Normal SuccessfulCreate replicaset/openebs-cstor-cspc-operator-759cf9cb8c Created pod: openebs-cstor-cspc-operator-759cf9cb8c-twtfg
4m1s Normal ScalingReplicaSet deployment/openebs-cstor-cspc-operator Scaled up replica set openebs-cstor-cspc-operator-759cf9cb8c to 1
4m1s Normal Scheduled pod/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq Successfully assigned openebs/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq to node-01
4m1s Normal Pulling pod/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq Pulling image "openebs/cvc-operator:2.12.0"
3m57s Normal Pulled pod/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq Successfully pulled image "openebs/cvc-operator:2.12.0" in 3.960324508s
102s Warning Failed pod/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-cvc-operator-fb68fbdb6-nfmtq_openebs(c31f3560-9e0c-47e6-85cf-91f8a7557bab)", container: openebs-cstor-cvc-operator)
102s Normal Pulled pod/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq Container image "openebs/cvc-operator:2.12.0" already present on machine
4m1s Normal SuccessfulCreate replicaset/openebs-cstor-cvc-operator-fb68fbdb6 Created pod: openebs-cstor-cvc-operator-fb68fbdb6-nfmtq
4m1s Normal ScalingReplicaSet deployment/openebs-cstor-cvc-operator Scaled up replica set openebs-cstor-cvc-operator-fb68fbdb6 to 1
3m59s Normal Scheduled pod/openebs-jiva-csi-controller-0 Successfully assigned openebs/openebs-jiva-csi-controller-0 to node-02
3m33s Normal Pulled pod/openebs-jiva-csi-controller-0 Container image "k8s.gcr.io/sig-storage/csi-resizer:v1.2.0" already present on machine
3m45s Warning Failed pod/openebs-jiva-csi-controller-0 Error: container has runAsNonRoot and image will run as root (pod: "openebs-jiva-csi-controller-0_openebs(b6a6fa3b-b4c8-47fe-8cdb-6ebf0f446715)", container: csi-resizer)
3m58s Normal Pulling pod/openebs-jiva-csi-controller-0 Pulling image "k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0"
3m56s Normal Pulled pod/openebs-jiva-csi-controller-0 Successfully pulled image "k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0" in 2.036265238s
3m45s Warning Failed pod/openebs-jiva-csi-controller-0 Error: container has runAsNonRoot and image will run as root (pod: "openebs-jiva-csi-controller-0_openebs(b6a6fa3b-b4c8-47fe-8cdb-6ebf0f446715)", container: csi-provisioner)
3m56s Normal Pulling pod/openebs-jiva-csi-controller-0 Pulling image "k8s.gcr.io/sig-storage/csi-attacher:v3.1.0"
3m55s Normal Pulled pod/openebs-jiva-csi-controller-0 Successfully pulled image "k8s.gcr.io/sig-storage/csi-attacher:v3.1.0" in 1.779400382s
3m45s Warning Failed pod/openebs-jiva-csi-controller-0 Error: container has runAsNonRoot and image will run as root (pod: "openebs-jiva-csi-controller-0_openebs(b6a6fa3b-b4c8-47fe-8cdb-6ebf0f446715)", container: csi-attacher)
3m55s Normal Pulling pod/openebs-jiva-csi-controller-0 Pulling image "openebs/jiva-csi:2.12.2"
3m47s Normal Pulled pod/openebs-jiva-csi-controller-0 Successfully pulled image "openebs/jiva-csi:2.12.2" in 7.507682819s
3m45s Warning Failed pod/openebs-jiva-csi-controller-0 Error: container has runAsNonRoot and image will run as root (pod: "openebs-jiva-csi-controller-0_openebs(b6a6fa3b-b4c8-47fe-8cdb-6ebf0f446715)", container: jiva-csi-plugin)
3m47s Normal Pulling pod/openebs-jiva-csi-controller-0 Pulling image "k8s.gcr.io/sig-storage/livenessprobe:v2.3.0"
3m46s Normal Pulled pod/openebs-jiva-csi-controller-0 Successfully pulled image "k8s.gcr.io/sig-storage/livenessprobe:v2.3.0" in 1.105187761s
3m45s Warning Failed pod/openebs-jiva-csi-controller-0 Error: container has runAsNonRoot and image will run as root (pod: "openebs-jiva-csi-controller-0_openebs(b6a6fa3b-b4c8-47fe-8cdb-6ebf0f446715)", container: liveness-probe)
3m45s Normal Pulled pod/openebs-jiva-csi-controller-0 Container image "k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0" already present on machine
3m45s Normal Pulled pod/openebs-jiva-csi-controller-0 Container image "k8s.gcr.io/sig-storage/csi-attacher:v3.1.0" already present on machine
3m45s Normal Pulled pod/openebs-jiva-csi-controller-0 Container image "openebs/jiva-csi:2.12.2" already present on machine
3m45s Normal Pulled pod/openebs-jiva-csi-controller-0 Container image "k8s.gcr.io/sig-storage/livenessprobe:v2.3.0" already present on machine
3m59s Warning FailedCreate statefulset/openebs-jiva-csi-controller create Pod openebs-jiva-csi-controller-0 in StatefulSet openebs-jiva-csi-controller failed error: pods "openebs-jiva-csi-controller-0" is forbidden: no PriorityClass with name openebs-jiva-csi-controller-critical was found
3m59s Normal SuccessfulCreate statefulset/openebs-jiva-csi-controller create Pod openebs-jiva-csi-controller-0 in StatefulSet openebs-jiva-csi-controller successful
78s Warning FailedCreate daemonset/openebs-jiva-csi-node Error creating: pods "openebs-jiva-csi-node-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[5]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[1].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.containers[1].securityContext.allowPrivilegeEscalation: Invalid value: true: Allowing privilege escalation for containers is not allowed]
4m Normal Scheduled pod/openebs-jiva-operator-6d44c67df9-645cg Successfully assigned openebs/openebs-jiva-operator-6d44c67df9-645cg to node-02
4m Normal Pulling pod/openebs-jiva-operator-6d44c67df9-645cg Pulling image "openebs/jiva-operator:2.12.2"
3m55s Normal Pulled pod/openebs-jiva-operator-6d44c67df9-645cg Successfully pulled image "openebs/jiva-operator:2.12.2" in 4.969210006s
3m55s Normal Created pod/openebs-jiva-operator-6d44c67df9-645cg Created container openebs-jiva-operator
3m55s Normal Started pod/openebs-jiva-operator-6d44c67df9-645cg Started container openebs-jiva-operator
4m1s Normal SuccessfulCreate replicaset/openebs-jiva-operator-6d44c67df9 Created pod: openebs-jiva-operator-6d44c67df9-645cg
4m1s Normal ScalingReplicaSet deployment/openebs-jiva-operator Scaled up replica set openebs-jiva-operator-6d44c67df9 to 1
4m1s Normal Scheduled pod/openebs-localpv-provisioner-6f9c7d84-bm5qk Successfully assigned openebs/openebs-localpv-provisioner-6f9c7d84-bm5qk to node-01
4m Normal Pulling pod/openebs-localpv-provisioner-6f9c7d84-bm5qk Pulling image "openebs/provisioner-localpv:2.12.0"
3m57s Normal Pulled pod/openebs-localpv-provisioner-6f9c7d84-bm5qk Successfully pulled image "openebs/provisioner-localpv:2.12.0" in 3.794212396s
3m57s Warning Failed pod/openebs-localpv-provisioner-6f9c7d84-bm5qk Error: cannot find volume "kube-api-access-26hfj" to mount into container "openebs-localpv-provisioner"
4m1s Normal SuccessfulCreate replicaset/openebs-localpv-provisioner-6f9c7d84 Created pod: openebs-localpv-provisioner-6f9c7d84-bm5qk
4m1s Normal SuccessfulDelete replicaset/openebs-localpv-provisioner-6f9c7d84 Deleted pod: openebs-localpv-provisioner-6f9c7d84-bm5qk
3m51s Normal Scheduled pod/openebs-localpv-provisioner-7bf58d4896-jbc7h Successfully assigned openebs/openebs-localpv-provisioner-7bf58d4896-jbc7h to node-01
3m51s Normal Pulling pod/openebs-localpv-provisioner-7bf58d4896-jbc7h Pulling image "openebs/provisioner-localpv:2.12.1"
3m47s Normal Pulled pod/openebs-localpv-provisioner-7bf58d4896-jbc7h Successfully pulled image "openebs/provisioner-localpv:2.12.1" in 3.915593629s
3m47s Normal Created pod/openebs-localpv-provisioner-7bf58d4896-jbc7h Created container openebs-localpv-provisioner
3m47s Normal Started pod/openebs-localpv-provisioner-7bf58d4896-jbc7h Started container openebs-localpv-provisioner
3m52s Normal SuccessfulCreate replicaset/openebs-localpv-provisioner-7bf58d4896 Created pod: openebs-localpv-provisioner-7bf58d4896-jbc7h
4m1s Normal ScalingReplicaSet deployment/openebs-localpv-provisioner Scaled up replica set openebs-localpv-provisioner-6f9c7d84 to 1
4m1s Normal ScalingReplicaSet deployment/openebs-localpv-provisioner Scaled down replica set openebs-localpv-provisioner-6f9c7d84 to 0
3m52s Normal ScalingReplicaSet deployment/openebs-localpv-provisioner Scaled up replica set openebs-localpv-provisioner-7bf58d4896 to 1
4m1s Normal Scheduled pod/openebs-ndm-operator-7d6955f6f5-55vzz Successfully assigned openebs/openebs-ndm-operator-7d6955f6f5-55vzz to node-01
4m1s Normal Pulling pod/openebs-ndm-operator-7d6955f6f5-55vzz Pulling image "openebs/node-disk-operator:1.6.1"
3m57s Normal Pulled pod/openebs-ndm-operator-7d6955f6f5-55vzz Successfully pulled image "openebs/node-disk-operator:1.6.1" in 3.574298184s
108s Warning Failed pod/openebs-ndm-operator-7d6955f6f5-55vzz Error: container has runAsNonRoot and image will run as root (pod: "openebs-ndm-operator-7d6955f6f5-55vzz_openebs(d2d69836-4321-4c28-b52e-3fe1a17a8367)", container: openebs-ndm-operator)
108s Normal Pulled pod/openebs-ndm-operator-7d6955f6f5-55vzz Container image "openebs/node-disk-operator:1.6.1" already present on machine
4m1s Normal SuccessfulCreate replicaset/openebs-ndm-operator-7d6955f6f5 Created pod: openebs-ndm-operator-7d6955f6f5-55vzz
4m1s Normal ScalingReplicaSet deployment/openebs-ndm-operator Scaled up replica set openebs-ndm-operator-7d6955f6f5 to 1
78s Warning FailedCreate daemonset/openebs-ndm Error creating: pods "openebs-ndm-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[4]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[5]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
4m Normal Scheduled pod/openebs-provisioner-554d6bb8db-hqgsr Successfully assigned openebs/openebs-provisioner-554d6bb8db-hqgsr to node-02
4m Normal Pulling pod/openebs-provisioner-554d6bb8db-hqgsr Pulling image "openebs/openebs-k8s-provisioner:2.12.1"
3m53s Normal Pulled pod/openebs-provisioner-554d6bb8db-hqgsr Successfully pulled image "openebs/openebs-k8s-provisioner:2.12.1" in 6.433119662s
3m53s Normal Created pod/openebs-provisioner-554d6bb8db-hqgsr Created container openebs-provisioner
3m53s Normal Started pod/openebs-provisioner-554d6bb8db-hqgsr Started container openebs-provisioner
4m Normal SuccessfulCreate replicaset/openebs-provisioner-554d6bb8db Created pod: openebs-provisioner-554d6bb8db-hqgsr
4m Normal ScalingReplicaSet deployment/openebs-provisioner Scaled up replica set openebs-provisioner-554d6bb8db to 1
4m Normal Scheduled pod/openebs-snapshot-operator-b76fb87b8-n9jnf Successfully assigned openebs/openebs-snapshot-operator-b76fb87b8-n9jnf to node-01
3m59s Normal Pulling pod/openebs-snapshot-operator-b76fb87b8-n9jnf Pulling image "openebs/snapshot-controller:2.12.1"
3m55s Normal Pulled pod/openebs-snapshot-operator-b76fb87b8-n9jnf Successfully pulled image "openebs/snapshot-controller:2.12.1" in 3.938680494s
3m55s Normal Created pod/openebs-snapshot-operator-b76fb87b8-n9jnf Created container openebs-snapshot-controller
3m54s Normal Started pod/openebs-snapshot-operator-b76fb87b8-n9jnf Started container openebs-snapshot-controller
3m54s Normal Pulling pod/openebs-snapshot-operator-b76fb87b8-n9jnf Pulling image "openebs/snapshot-provisioner:2.12.1"
3m50s Normal Pulled pod/openebs-snapshot-operator-b76fb87b8-n9jnf Successfully pulled image "openebs/snapshot-provisioner:2.12.1" in 4.064379059s
3m50s Normal Created pod/openebs-snapshot-operator-b76fb87b8-n9jnf Created container openebs-snapshot-provisioner
3m50s Normal Started pod/openebs-snapshot-operator-b76fb87b8-n9jnf Started container openebs-snapshot-provisioner
4m Normal SuccessfulCreate replicaset/openebs-snapshot-operator-b76fb87b8 Created pod: openebs-snapshot-operator-b76fb87b8-n9jnf
4m Normal ScalingReplicaSet deployment/openebs-snapshot-operator Scaled up replica set openebs-snapshot-operator-b76fb87b8 to 1
3m47s Normal LeaderElection endpoints/openebs.io-local openebs-localpv-provisioner-7bf58d4896-jbc7h_94ff8c3c-2ba8-48bb-a423-d883ad84f3a5 became leader
3m53s Normal LeaderElection endpoints/openebs.io-provisioner-iscsi openebs-provisioner-554d6bb8db-hqgsr_70df65fb-a720-4963-94ee-2b51f826bf54 became leader
3m50s Normal LeaderElection endpoints/volumesnapshot.external-storage.k8s.io-snapshot-promoter openebs-snapshot-operator-b76fb87b8-n9jnf_f0cd6250-7560-4a09-9ef9-73d1b51fc6c9 became leader
0s Normal Pulled pod/openebs-cstor-cspc-operator-759cf9cb8c-twtfg Container image "openebs/cspc-operator:2.12.0" already present on machine
0s Normal Pulled pod/openebs-jiva-csi-controller-0 Container image "k8s.gcr.io/sig-storage/csi-resizer:v1.2.0" already present on machine
0s Normal Pulled pod/openebs-ndm-operator-7d6955f6f5-55vzz Container image "openebs/node-disk-operator:1.6.1" already present on machine
0s Normal Pulled pod/openebs-cstor-admission-server-66b7895495-d66r7 Container image "openebs/cstor-webhook:2.12.0" already present on machine
0s Normal Pulled pod/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq Container image "openebs/cvc-operator:2.12.0" already present on machine
0s Normal Pulled pod/openebs-cstor-csi-controller-0 Container image "k8s.gcr.io/sig-storage/csi-resizer:v1.2.0" already present on machine
0s Warning FailedCreate daemonset/openebs-ndm Error creating: pods "openebs-ndm-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[4]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[5]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
0s Warning FailedCreate daemonset/openebs-jiva-csi-node Error creating: pods "openebs-jiva-csi-node-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[5]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[1].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.containers[1].securityContext.allowPrivilegeEscalation: Invalid value: true: Allowing privilege escalation for containers is not allowed]
0s Normal Pulled pod/openebs-ndm-operator-7d6955f6f5-55vzz Container image "openebs/node-disk-operator:1.6.1" already present on machine
0s Normal Pulled pod/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq Container image "openebs/cvc-operator:2.12.0" already present on machine
0s Normal Pulled pod/openebs-cstor-csi-controller-0 Container image "k8s.gcr.io/sig-storage/csi-resizer:v1.2.0" already present on machine
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node-01 Ready control-plane,etcd,master 28m v1.21.4+rke2r3 10.0.0.1 XX.XX.XX.XX Ubuntu 20.04.3 LTS 5.4.0-84-generic containerd://1.4.8-k3s1
node-02 Ready <none> 26m v1.21.4+rke2r3 10.0.0.2 YY.YY.YY.YY Ubuntu 20.04.3 LTS 5.4.0-84-generic containerd://1.4.8-k3s1
kubectl get psp global-restricted-psp -o yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    psp.rke2.io/global-restricted: resolved
  creationTimestamp: "2021-09-12T22:41:56Z"
  name: global-restricted-psp
  resourceVersion: "207"
  uid: d73c27e8-cb5b-44de-8f08-3d4eef7a1468
spec:
  allowPrivilegeEscalation: false
  fsGroup:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim
I am trying to install ZFS and LVM charts using helm on GKE 1.19, using commands like this:
helm install openebs --namespace openebs openebs/openebs --create-namespace --set zfs-localpv.enabled=true
I see the following objects installed:
$ kubectl get ClusterRole | grep zfs
openebs-zfs-driver-registrar-role 2021-07-18T05:35:21Z
openebs-zfs-provisioner-role 2021-07-18T05:35:21Z
openebs-zfs-snapshotter-role 2021-07-18T05:35:21Z
I am seeing this error for the ZFS node DaemonSet:
$ kubectl describe ds -n openebs openebs-zfs-localpv-node
...
...
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 114s (x16 over 4m39s) daemonset-controller Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Similar errors in ZFS controller.
$ kubectl describe sts -n openebs openebs-zfs-localpv-controller
...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 70s (x17 over 6m39s) statefulset-controller create Pod openebs-zfs-localpv-controller-0 in StatefulSet openebs-zfs-localpv-controller failed error: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
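On GKE this error usually means the namespace has no ResourceQuota covering the system-* priority classes; a quota along these lines (a sketch for the openebs namespace) typically unblocks the pods:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: openebs-critical-pods
  namespace: openebs
spec:
  scopeSelector:
    matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values:
          - system-node-critical
          - system-cluster-critical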
As far as I can tell, there's no way via this Helm chart to support NDM's metaconfigs.
When I install the latest helm chart version 3.3.0, the images included are tagged 3.2.0. I can work around this by explicitly setting the correct tags in values.yaml, but I would expect the default images to be of the correct version. Thanks.
$ helm template --namespace openebs openebs/openebs --version 3.3.0 | grep '3\.[0-9]\.[0-9]'
chart: openebs-3.3.0
chart: openebs-3.3.0
chart: openebs-3.3.0
chart: openebs-3.3.0
chart: openebs-3.3.0
openebs.io/version: 3.2.0
openebs.io/version: 3.2.0
chart: openebs-3.3.0
openebs.io/version: 3.2.0
openebs.io/version: 3.2.0
image: "openebs/provisioner-localpv:3.2.0"
value: "openebs/linux-utils:3.2.0"
chart: openebs-3.3.0
openebs.io/version: 3.2.0
openebs.io/version: 3.2.0
value: "openebs/linux-utils:3.2.0"
Add integration tests beyond the lint test for the helm chart changes.
So if you set it wrong the first time, you can't change it, not even by uninstalling the chart and reinstalling it.
I am using 2.7.0
When I try to upgrade, I get:
# helm -n openebs upgrade openebs openebs/openebs --reuse-values --version 3.8.0
Error: UPGRADE FAILED: Unable to continue with update: CustomResourceDefinition "volumesnapshotclasses.snapshot.storage.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "openebs"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "openebs"
Actually, I have one volumesnapshotclass:
# kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
NAME DRIVER DELETIONPOLICY AGE
longhorn-snapshot-vsc driver.longhorn.io Delete 132d
# kubectl get volumesnapshotclasses.snapshot.storage.k8s.io longhorn-snapshot-vsc -o yaml
apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: Delete
driver: driver.longhorn.io
kind: VolumeSnapshotClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"snapshot.storage.k8s.io/v1","deletionPolicy":"Delete","driver":"driver.longhorn.io","kind":"VolumeSnapshotClass","metadata":{"annotations":{},"labels":{"velero.io/csi-volumesnapshot-class":"true"},"name":"longhorn-snapshot-vsc"},"parameters":{"type":"snap"}}
  creationTimestamp: "2023-05-23T08:24:38Z"
  generation: 2
  labels:
    velero.io/csi-volumesnapshot-class: "true"
  managedFields:
  - apiVersion: snapshot.storage.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:deletionPolicy: {}
      f:driver: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:velero.io/csi-volumesnapshot-class: {}
      f:parameters:
        .: {}
        f:type: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2023-06-28T12:59:12Z"
  name: longhorn-snapshot-vsc
  resourceVersion: "105590151"
  uid: b4206632-3c4b-459f-a0ff-d5324b9a2b0f
parameters:
  type: snap
But this is from longhorn, not openebs.
How can I get past this?
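One way past the ownership check is to attach the Helm ownership metadata the error asks for to the existing CRD (note that this makes the openebs release claim a CRD Longhorn also relies on, so weigh that before doing it); a sketch:
kubectl label crd volumesnapshotclasses.snapshot.storage.k8s.io app.kubernetes.io/managed-by=Helm
kubectl annotate crd volumesnapshotclasses.snapshot.storage.k8s.io \
  meta.helm.sh/release-name=openebs \
  meta.helm.sh/release-namespace=openebs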
It would be great if the helm chart let us specify registry credentials, so we could avoid Docker Hub's rate limits for free accounts!
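For illustration, this usually comes down to letting values render imagePullSecrets onto the workloads; a hypothetical layout (the key does not exist in the chart today) together with the secret it would reference:
imagePullSecrets:                 # hypothetical chart-wide key
  - name: dockerhub-creds
kubectl -n openebs create secret docker-registry dockerhub-creds \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> --docker-password=<token>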
Hi,
On our cluster (microk8s), pods running with a privileged security context are disallowed by the API server configuration. It now seems that the localpv-provisioner starts an init container that runs with a privileged security context, hence the error below (from the output of kubectl describe pvc ...). I created a custom storage class to set the base path as described here.
Is there a functional reason why the container needs these privileges, or is the security context configurable somewhere? Thanks!
Warning ProvisioningFailed 20s (x2 over 35s) openebs.io/local_openebs-localpv-provisioner-695c5f756-cfptz_03c93f6f-84bf-4dc1-a06e-da2b9c7adfdc failed to provision volume with StorageClass "custom-storage-class": Pod "init-pvc-76e5fa24-8c03-4f7c-b8bd-04f09c114175" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy
Hi,
it seems that most of the matchLabels are incorrect?
Take openebs-snapshot-operator: it selects on
matchLabels:
  app: openebs
  release: openebs
but this targets all pods instead of just the snapshot pods.
However, if you look at the direct YAML deployment version, the matchLabels are
matchLabels:
  name: openebs-snapshot-operator
  openebs.io/component-name: openebs-snapshot-operator
which is correct and targets only the required pods!
Attempting to use the "latest" version of the chart fails since the Chart.yaml file references 3.4.0 but the last released tar.gz is 3.3.1
openebs upgrade from 3.2.0 to 3.3.0 failed with
% kubectl get nodes
NAME STATUS ROLES AGE VERSION
srvl012 Ready controlplane,etcd,worker 2y30d v1.23.8
srvl013 Ready controlplane,etcd,worker 2y30d v1.23.8
srvl033a Ready worker 2y30d v1.23.8
srvl033b Ready worker 2y30d v1.23.8
srvl062 Ready controlplane,etcd,worker 424d v1.23.8
% helm list -n openebs
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
openebs openebs 18 2022-06-16 09:42:58.892047791 +0200 CEST deployed openebs-3.2.0 3.2.0
% helm get values openebs -n openebs
USER-SUPPLIED VALUES:
cstor:
enabled: true
openebs-ndm:
enabled: true
% helm upgrade openebs openebs/openebs --namespace openebs --reuse-values
Error: UPGRADE FAILED: template: openebs/templates/localprovisioner/hostpath-class.yaml:32:14: executing "openebs/templates/localprovisioner/hostpath-class.yaml" at <.Values.localprovisioner.hostpathClass.ext4Quota.enabled>: nil pointer evaluating interface {}.enabled
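The nil pointer comes from the template reading localprovisioner.hostpathClass.ext4Quota.enabled while --reuse-values keeps only the old release values and ignores the new chart defaults; explicitly supplying the key is one possible workaround (the value shown is just the default-off case):
helm upgrade openebs openebs/openebs --namespace openebs --reuse-values \
  --set localprovisioner.hostpathClass.ext4Quota.enabled=false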
When the 'openebs' helm chart (v3.0.0) is deployed with legacy or cStor-CSI enabled, it creates a validatingwebhookconfiguration in each case. When the legacy or cStor-CSI option is disabled using helm upgrade, it removes all control-plane components except for the validatingwebhookconfiguration object.
Sample command:
helm upgrade openebs -n openebs openebs/openebs --set cstor.enabled=false --reuse-values
The webhook cleanup job is only triggered on uninstall.
The current chart will break clusters with:
Error from server (InternalError): Internal error occurred: failed calling webhook "admission-webhook.openebs.io": Post "https://admission-server-svc.openebs.svc:443/validate?timeout=5s": service "admission-server-svc" not found
because the chart does not create a service for the admission-server, and hence the webhook fails.
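Until the chart also runs its cleanup when a component is disabled, the orphaned object can be removed by hand; a sketch (list first, then delete whatever openebs webhook configuration shows up, since the exact object name can differ by release):
kubectl get validatingwebhookconfigurations | grep openebs
kubectl delete validatingwebhookconfiguration <name-from-the-listing-above>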
Using the Helm chart, I would like to have the option to set resource limits on pods like the apiserver, admission-server and provisioner.
Most helm charts support this via values of the following form:
apiserver:
  resources:
    requests:
      cpu: 1
      memory: "1Gi"
    limits:
      cpu: 2
      memory: "2Gi"
At https://openebs.github.io/charts/ the install command is given as:
helm install --name openebs --namespace openebs openebs/openebs --create-namespace
This doesn't work, as --name is not an option for helm. It should be:
helm install openebs --namespace openebs openebs/openebs --create-namespace
For some reason I can't seem to install the jiva engine AFTER I have already installed the defaults with helm.
The docs say that Jiva is installed by default, when in fact it is no longer installed by default through helm:
https://openebs.io/docs/user-guides/installation#installation-through-helm
helm install openebs openebs/openebs -n openebs --create-namespace
So then afterwards I try upgrading to install Jiva but get an error:
helm upgrade openebs openebs/openebs -n openebs --set jiva.enabled=true --reuse-values
Error: UPGRADE FAILED: resource mapping not found for name: "openebs-jiva-default-policy" namespace: "" from "": no matches for kind "JivaVolumePolicy" in version "openebs.io/v1alpha1"
ensure CRDs are installed first
The only way to fix this is to uninstall the release and then install it again fresh with the values set:
helm uninstall openebs -n openebs && helm install openebs openebs/openebs -n openebs --set jiva.enabled=true
EDIT: this issue seems to be related to the CRDs not being installed openebs-archive/jiva-operator#187 openebs-archive/jiva-operator#189
To disable the deprecated components, multiple values need to be specified, like:
--set webhook.enabled=false \
--set snapshotOperator.enabled=false \
--set provisioner.enabled=false \
--set apiserver.enabled=false \
It would be nice to add a single variable that replaces all of the above. For all of the above flags, use this new flag with default=true:
--set legacy.enabled=true
The workflow to migrate from legacy to new components can be as follows:
helm upgrade ... --reuse-values --set cstor.enabled=true
helm upgrade ... --reuse-values --set legacy.enabled=false