
charts's Introduction

Welcome to OpenEBS

OpenEBS Welcome Banner

OpenEBS is a modern Block-Mode storage platform, a Hyper-Converged software Storage System and virtual NVMe-oF SAN (vSAN) Fabric that natively integrates into the core of Kubernetes.

Try our Slack channel
If you have questions about using OpenEBS, please use the CNCF Kubernetes OpenEBS Slack channel; it is open for anyone to ask a question.

Important

OpenEBS provides...

  • Stateful, persistent, dynamically provisioned storage volumes for Kubernetes
  • High Performance NVMe-oF & NVMe/RDMA storage transport optimized for All-Flash Solid State storage media
  • Block devices, LVM, ZFS, ext2/ext3/ext4, XFS, BTRFS...and more
  • 100% Cloud-Native K8s declarative storage platform
  • A cluster-wide vSAN block-mode fabric that provides containers/Pods with HA resilient access to storage across the entire cluster.
  • Node local K8s PVs and n-way Replicated K8s PVs
  • Deployable On-premise & in-cloud: (AWS EC2/EKS, Google GCP/GKE, Azure VM/AKS, Oracle OCI, IBM/RedHat OpenShift, Civo Cloud, Hetzner Cloud... and more)
  • Enterprise Grade data management capabilities such as snapshots, clones, replicated volumes, DiskGroups, Volume Groups, Aggregates, RAID

OpenEBS has 2 Editions:

1. STANDARD ✔️ > Ready Player 1
2. LEGACY ⚠️ Game Over

Within STANDARD, you have a choice of 2 Types of K8s Storage Services. Replicated PV and Local PV.


Replicated PV provides replicated data volumes in a cluster-wide vSAN block-mode fabric. Local PV provides non-replicated, node-local data volumes and comes in multiple variants, listed below.

| Type | Storage Engine | Type of data services | Status | In OSS ver |
| --- | --- | --- | --- | --- |
| Replicated PV | Mayastor | High Availability deployments, distributing & replicating volumes across the cluster | Stable, deployable in PROD (Releases) | v4.0.1 |
| Local PV | Local PV Hostpath | Integration with a local node hostpath (e.g. /mnt/fs1) | Stable, deployable in PROD (Releases) | v4.0.1 |
| Local PV | Local PV ZFS | Integration with local ZFS storage deployments | Stable, deployable in PROD (Releases) | v4.0.1 |
| Local PV | Local PV LVM2 | Integration with local LVM2 storage deployments | Stable, deployable in PROD (Releases) | v4.0.1 |
| Local PV | Local PV Rawfile | Integration with a loop-mounted raw device-file filesystem | Stable, deployable in PROD; undergoing evaluation & integration (release: v0.70) | v4.0.1 |

STANDARD is optimized for NVMe and SSD Flash storage media, and integrates cutting-edge, high-performance storage technologies at its core...

☑️   It uses the high-performance SPDK storage stack - (SPDK is an open-source NVMe project initiated by Intel)
☑️   The modern io_uring Linux kernel async polling-mode I/O interface - (the fastest kernel I/O mode possible)
☑️   Native abilities for RDMA and Zero-Copy I/O
☑️   NVMe-oF TCP Block storage Hyper-converged data fabric
☑️   Block layer volume replication
☑️   Logical volumes and Diskpool based data management
☑️   a Native high performance Blobstore
☑️   Native Block layer Thin provisioning
☑️   Native Block layer Snapshots and Clones

Get in touch with our team.

Vishnu Attur :octocat: @avishnu Admin, Maintainer
Abhinandan Purkait 😎 @Abhinandan-Purkait Maintainer
Niladri Halder 🚀 @niladrih Maintainer
Ed Robinson 🐶 @edrob999   CNCF Primary Liaison
Special Maintainer
Tiago Castro @tiagolobocastro   Admin, Maintainer
David Brace @orville-wright     Admin, Maintainer

Activity dashboard


Current status

Release Support Twitter/X Contrib License status CI Status
Releases Slack channel #openebs Twitter PRs Welcome FOSSA Status CII Best Practices

Read this in 🇩🇪 🇷🇺 🇹🇷 🇺🇦 🇨🇳 🇫🇷 🇧🇷 🇪🇸 🇵🇱 🇰🇷 other languages.

Deployment

  • In-cloud: (AWS EC2/EKS, Google GCP/GKE, Azure VM/AKS, Oracle OCI, IBM/RedHat OpenShift, Civo Cloud, Hetzner Cloud... and more)
  • On-Premise: Bare Metal, Virtualized Hypervisor infra using VMware ESXi, KVM/QEMU (K8s KubeVirt), Proxmox
  • Deployed as native K8s elements: Deployments, Containers, Services, StatefulSets, CRDs, Sidecars, Jobs and Binaries, all on K8s worker nodes.
  • Runs 100% in K8s userspace, so it's highly portable and runs across many OSes & platforms.

Roadmap (as of June 2024)


OpenEBS Welcome Banner

QUICKSTART : Installation

NOTE: Depending on which of the 5 storage engines you choose to deploy, there are prerequisites that must be met. See the detailed quickstart docs...
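For example, the Replicated Mayastor engine expects 2 MiB HugePages and the nvme-tcp kernel module on every storage node; a minimal pre-flight check on a node (a sketch only - consult the quickstart docs for the authoritative list) could look like:

# Check that at least 1024 x 2MiB HugePages are reserved
grep HugePages /proc/meminfo

# Reserve them if needed (non-persistent; add vm.nr_hugepages to sysctl.d to persist)
sudo sysctl -w vm.nr_hugepages=1024

# Ensure the NVMe-oF TCP initiator module is loaded
sudo modprobe nvme_tcp
lsmod | grep nvme_tcp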


  1. Setup helm repository.
# helm repo add openebs https://openebs.github.io/openebs
# helm repo update

2a. Install the Full OpenEBS helm chart with default values.

  • This installs ALL OpenEBS Storage Engines* into the openebs namespace, with the chart name openebs:
    Local PV Hostpath, Local PV LVM, Local PV ZFS, Replicated Mayastor
# helm install openebs --namespace openebs openebs/openebs --create-namespace

2b. To install the OpenEBS chart without the Replicated Mayastor Storage Engine, use the following command:

# helm install openebs --namespace openebs openebs/openebs --set engines.replicated.mayastor.enabled=false --create-namespace
  3. To view the chart
# helm ls -n openebs

Output:
NAME     NAMESPACE   REVISION  UPDATED                                   STATUS     CHART           APP VERSION
openebs  openebs     1         2024-03-25 09:13:00.903321318 +0000 UTC   deployed   openebs-4.0.1   4.0.1
  4. Verify installation
    • List the pods in namespace
    • Verify StorageClasses
# kubectl get pods -n openebs

Example Output:
NAME                                              READY   STATUS    RESTARTS   AGE
openebs-agent-core-674f784df5-7szbm               2/2     Running   0          11m
openebs-agent-ha-node-nnkmv                       1/1     Running   0          11m
openebs-agent-ha-node-pvcrr                       1/1     Running   0          11m
openebs-agent-ha-node-rqkkk                       1/1     Running   0          11m
openebs-api-rest-79556897c8-b824j                 1/1     Running   0          11m
openebs-csi-controller-b5c47d49-5t5zd             6/6     Running   0          11m
openebs-csi-node-flq49                            2/2     Running   0          11m
openebs-csi-node-k8d7h                            2/2     Running   0          11m
openebs-csi-node-v7jfh                            2/2     Running   0          11m
openebs-etcd-0                                    1/1     Running   0          11m
openebs-etcd-1                                    1/1     Running   0          11m
openebs-etcd-2                                    1/1     Running   0          11m
...
openebs-localpv-provisioner-6ddf7c7978-jsstg      1/1     Running   0          3m9s
openebs-lvm-localpv-controller-7b6d6b4665-wfw64   5/5     Running   0          3m9s
openebs-lvm-localpv-node-62lnq                    2/2     Running   0          3m9s
openebs-lvm-localpv-node-lhndx                    2/2     Running   0          3m9s
openebs-lvm-localpv-node-tlcqv                    2/2     Running   0          3m9s
openebs-zfs-localpv-controller-f78f7467c-k7ldb    5/5     Running   0          3m9s
...

# kubectl get sc

The output lists the StorageClasses created by the chart (for example, openebs-hostpath).

For more details, please refer to OpenEBS Documentation.

OpenEBS is a CNCF project and DataCore, Inc. is a CNCF Silver member. DataCore supports the CNCF extensively and has funded OpenEBS's participation in every KubeCon event since 2020. Our project team is managed under the CNCF Storage Landscape, and we contribute to the CNCF CSI and TAG Storage project initiatives. We proudly support CNCF Cloud Native Community Groups initiatives.

For project updates, subscribe to OpenEBS Announcements.
To interact with other OpenEBS users, subscribe to OpenEBS Users.


Container Storage Interface group Storage Technical Advisory Group     Cloud Native Community Groups

Commercial Offerings

Commercially supported deployments of OpenEBS are available via key companies. (Some provide services, funding, technology, infrastructure, and resources to the OpenEBS project.)

(OpenEBS OSS is a CNCF project. CNCF does not endorse any specific company.)

charts's People

Contributors

akhilerm, bobek, chandansagar, choilmto, davidkarlsen, eripa, fossabot, gprasath, gtb3nw, invidian, inyee786, kmova, maskys, mosibi, muratkars, niladrih, nivedita-coder, nsathyaseelan, payes, pchandra19, prateekpandey14, radicand, ranjithwingrider, shovanmaity, shubham14bajpai, stncrn, survivant, umamukkara, vishnuitta, wangzihao3


charts's Issues

upgrading to install jiva causes resource mapping not found

For some reason I can't seem to install the Jiva engine AFTER I already installed the defaults with helm.

The docs say that Jiva is installed by default when, in fact, it is no longer installed by default through helm:
https://openebs.io/docs/user-guides/installation#installation-through-helm
helm install openebs openebs/openebs -n openebs --create-namespace

So then afterwards I try upgrading to install Jiva, but get an error:
helm upgrade openebs openebs/openebs -n openebs --set jiva.enabled=true --reuse-values

Error: UPGRADE FAILED: resource mapping not found for name: "openebs-jiva-default-policy" namespace: "" from "": no matches for kind "JivaVolumePolicy" in version "openebs.io/v1alpha1"
ensure CRDs are installed first

The only way to fix this is to uninstall the release and then install again fresh with the values set:
helm uninstall openebs -n openebs && helm install openebs openebs/openebs -n openebs --set jiva.enabled=true

EDIT: this issue seems to be related to the CRDs not being installed openebs-archive/jiva-operator#187 openebs-archive/jiva-operator#189

Errors in Grafana Dashboards

I imported the latest Grafana dashboard and updated the datasource to Prometheus.

I have this error in : localpv-dashboard

http://10.1.7.114:30006/d/2e59785a-af05-465e-b9a3-fca65a0e8572/localpv-dashboard?orgId=1

(The panel renders its raw HTML/JavaScript template with empty values instead of the volume info table.)

in : storage-class-volume-dashboard
http://10.1.7.114:30006/d/2795676f-d98f-4ba5-b424-b8b0e98ad7bd/storage-class-volume-dashboard?orgId=1&refresh=1m


there are no storage classes in the dropdown, but I have a few in my cluster:

root@test-pcl114:~# kubectl get sc
NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-cstor-sparse        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  27d
openebs-device              openebs.io/local                                           Delete          WaitForFirstConsumer   false                  88d
openebs-hostpath            openebs.io/local                                           Delete          WaitForFirstConsumer   false                  88d
openebs-jiva-default        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  88d
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  88d
sc-iep-localpv              cstor.csi.openebs.io                                       Delete          Immediate              true                   83d
sc-iep-mirror               cstor.csi.openebs.io                                       Delete          Immediate              true                   87d
sc-minio                    cstor.csi.openebs.io                                       Delete          WaitForFirstConsumer   true                   87d
sc-rocketchat               cstor.csi.openebs.io                                       Delete          Immediate              true                   86d
root@test-pcl114:~#

In storage-pool-claim-dashboard
http://10.1.7.114:30006/d/23c14553-6c62-41b6-9eb3-c0e75531e617/storage-pool-claim-dashboard?orgId=1

root@test-pcl114:~# kubectl get cspc -n openebs
NAME               HEALTHYINSTANCES   PROVISIONEDINSTANCES   DESIREDINSTANCES   AGE
cspc-iep-localpv   1                  1                      1                  83d
cspc-iep-mirror    1                  1                      1                  87d
cspc-minio         1                  1                      1                  87d
cspc-rocketchat    1                  1                      1                  87d
root@test-pcl114:~#

In storage-pool-dashboard
http://10.1.7.114:30006/d/5d86b1fd-b2e7-4bb2-befa-4dae5b6167d6/storage-pool-dashboard?orgId=1


(Again, the panel renders its raw HTML/JavaScript template instead of the pool info.)

in volume-tiled-view-dashboard
http://10.1.7.114:30006/d/f46a2d03-c2af-4954-b2b3-307c4d77bcaf/volume-tiled-view-dashboard?orgId=1&refresh=1m

(Again, the panel renders its raw HTML/JavaScript template instead of the volume info.)

mayastor support

Parts of the docs suggest mayastor should now be considered beta, and for newcomers wishing to try it out with all the integrated features that the helm chart brings, not having helm support for mayastor adds significant challenges. Is there a timeline for introducing mayastor support, or is the reply just "when it's GA"?

ValidatingWebhookConfiguration object left behind after disabling cStor/legacy

When the 'openebs' helm chart (v3.0.0) is deployed with legacy or cStor-CSI enabled, it creates a validatingwebhookconfiguration in each case. When the legacy or cStor-CSI option is disabled using helm upgrade, it removes all control-plane components except for the validatingwebhookconfiguration object.

Sample command:

helm upgrade openebs -n openebs openebs/openebs --set cstor.enabled=false --reuse-values

The webhook cleanup job is only triggered on uninstall.
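A manual workaround (a sketch; the exact object name can differ between chart versions, so check the grep output first) is to delete the leftover object by hand:

# Find webhook configurations left behind by the chart
kubectl get validatingwebhookconfigurations | grep openebs

# Delete the leftover object (the name below is illustrative)
kubectl delete validatingwebhookconfiguration openebs-cstor-validation-webhook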

node selection not available for openebs-ndm:ndmExporter

Hi,

I am trying to install openebs chart (version: "3.1.0") on my cluster's specific node.
The options for tolerations and affinity are not available for openebs-ndm:ndmExporter.

Every other component which I am using has this option available.

It would be great if this could be accommodated in the next release.

Thanks,

Unable to install zfs and lvm driver using openebs charts

I am trying to install ZFS and LVM charts using helm on GKE 1.19, using commands like this:

helm install openebs --namespace openebs openebs/openebs --create-namespace --set zfs-localpv.enabled=true

I see the following objects installed:

$kubectl get ClusterRole  | grep zfs

openebs-zfs-driver-registrar-role                                      2021-07-18T05:35:21Z
openebs-zfs-provisioner-role                                           2021-07-18T05:35:21Z
openebs-zfs-snapshotter-role                                           2021-07-18T05:35:21Z

Seeing this error for ZFS Node Daemonset

$kubectl describe ds -n openebs openebs-zfs-localpv-node
...
...
 Type     Reason        Age                    From                  Message
  ----     ------        ----                   ----                  -------
  Warning  FailedCreate  114s (x16 over 4m39s)  daemonset-controller  Error creating: insufficient quota to match these scopes: [{PriorityClass In [s
ystem-node-critical system-cluster-critical]}]

Similar errors in ZFS controller.

$kubectl describe sts -n openebs openebs-zfs-localpv-controller
...
...
Events:
  Type     Reason        Age                   From                    Message
  ----     ------        ----                  ----                    -------
  Warning  FailedCreate  70s (x17 over 6m39s)  statefulset-controller  create Pod openebs-zfs-localpv-controller-0 in StatefulSet openebs-zfs-localpv
-controller failed error: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
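This error usually means the cluster only admits pods using the system-node-critical/system-cluster-critical priority classes in namespaces that carry a matching ResourceQuota (see the Kubernetes docs on limiting PriorityClass consumption). A possible workaround, sketched below under that assumption, is to add such a quota to the openebs namespace:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: openebs-critical-pods
  namespace: openebs
spec:
  hard:
    pods: "100"
  scopeSelector:
    matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["system-node-critical", "system-cluster-critical"]
EOF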

remove the need to provide mutually exclusive flags - that is error-prone.

To install Local PV hostpath and device

We need to run a command like

helm install openebs openebs/openebs --namespace openebs --create-namespace \
--set localprovisioner.enabled=false \
--set ndm.enabled=false \
--set ndmOperator.enabled=false \
--set openebs-ndm.enabled=true \
--set legacy.enabled=false \
--set localpv-provisioner.enabled=true

Can the above be enhanced to handle the mutually exclusive options internally? For example:

  • If openebs-ndm.enabled=true then automatically consider ndm.enabled and ndmOperator.enabled as false.
  • If localpv-provisioner.enabled=true then consider localprovisioner.enabled=false

Sounds like we need to install ndm components if
ndm.enabled=true and openebs-ndm.enabled=false

By taking care of the mutually exclusive flags, the command to install Local PV becomes:

helm install openebs openebs/openebs --namespace openebs --create-namespace \
--set legacy.enabled=false \
--set localpv-provisioner.enabled=true \
--set openebs-ndm.enabled=true

allow marking localpv-provisioner as the default StorageClass

There is an option to change the default StorageClass in Kubernetes clusters, and it is as easy as adding a specific annotation to the StorageClass; see the link below 👇:

👀 https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/

We (w/@Dentrax @eminaktas @erkanzileli) thought that it would be nice to have an option to enable this support via the localpvprovisioner section in values.yaml.

👀 https://github.com/openebs/charts/blob/main/charts/openebs/values.yaml#L95
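Until such a chart option exists, one workaround (a sketch following the Kubernetes docs linked above, assuming the chart's openebs-hostpath StorageClass is the one you want as default) is to set the annotation directly:

kubectl patch storageclass openebs-hostpath \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'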

missing service for the admission server

Current chart will break clusters with:

Error from server (InternalError): Internal error occurred: failed calling webhook "admission-webhook.openebs.io": Post "https://admission-server-svc.openebs.svc:443/validate?timeout=5s": service "admission-server-svc" not found

because the chart does not create a service for the admission-server, and hence the webhook fails.
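Until the chart adds the Service, a stop-gap (a rough sketch only - the selector labels and target port below are assumptions and must be matched against the actual admission-server Deployment) would be to create it by hand:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: admission-server-svc
  namespace: openebs
spec:
  selector:
    app: admission-webhook   # assumption: must match the admission-server pod labels
  ports:
    - port: 443
      targetPort: 8443       # assumption: the port the admission server actually listens on
EOF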

images for node disk manager are wrong

Hello,

The helm chart tries to reference node-disk-manager:v0.9.0, but one gets a report that it is unauthorized.
If one changes the image to node-disk-manager-amd64, it gets pulled.

Incorrect OPENEBS_IO_BASE_PATH when mayastor enabled

Description

helm template openebs openebs/openebs --version 3.9.0 --set localprovisioner.basePath="/opt/custom/path" | grep -A 1 'name: OPENEBS_IO_BASE_PATH' 

Output

 - name: OPENEBS_IO_BASE_PATH
 value: "/opt/custom/path"

Enable mayastor

helm template openebs openebs/openebs --version 3.9.0 --set localprovisioner.basePath="/opt/custom/path" --set mayastor.enabled=true | grep -A 1 'name: OPENEBS_IO_BASE_PATH' 

Output

- name: OPENEBS_IO_BASE_PATH
value: "/var/openebs/local"

Expected Behavior

Output

 - name: OPENEBS_IO_BASE_PATH
 value: "/opt/custom/path"

Health fails for pod maya-apiserver

What steps did you take and what happened:

  • Deploy openebs 2.6.0 using kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml on a 8 node Raspberry Pi 4 8Gb (arm64) microk8s cluster

What did you expect to happen:

  • All pods to become healthy after a period of time; instead, the maya-apiserver pod never becomes ready:
# 
$ kubectl get pods -n openebs -o wide
NAME                                           READY   STATUS      RESTARTS   AGE
openebs-ndm-x6q6g                              1/1     Running     0          18h
openebs-ndm-zl9jc                              1/1     Running     0          18h
openebs-ndm-operator-7d69c98987-lz2cc          1/1     Running     0          18h
openebs-provisioner-74b57cbdbd-g4tws           1/1     Running     0          18h
openebs-snapshot-operator-5b9dfd4fcd-hnbnn     2/2     Running     0          18h
openebs-admission-server-789b9d6dbd-t2hkt      1/1     Running     1          18h
openebs-localpv-provisioner-776b54f698-flfh4   1/1     Running     0          18h
maya-apiserver-6f79bb87bd-58kp7                0/1     Running     0          15h

The output of the following commands will help us better understand what's going on:

# Describe pod so we can see the reason (timeout on liveness probe)
$ kubectl describe pod -n openebs maya-apiserver-6f79bb87bd-58kp7
Name:         maya-apiserver-6f79bb87bd-58kp7
Namespace:    openebs
Priority:     0
Node:         node-08/10.0.19.18
Start Time:   Wed, 03 Mar 2021 19:25:00 +0000
Labels:       name=maya-apiserver
              openebs.io/component-name=maya-apiserver
              openebs.io/version=2.6.0
              pod-template-hash=6f79bb87bd
Annotations:  cni.projectcalico.org/podIP: 10.1.251.215/32
              cni.projectcalico.org/podIPs: 10.1.251.215/32
Status:       Running
IP:           10.1.251.215
IPs:
  IP:           10.1.251.215
Controlled By:  ReplicaSet/maya-apiserver-6f79bb87bd
Containers:
  maya-apiserver:
    Container ID:   containerd://aacdb0533b88a39d3ca1d06a4be1fcf7e06b9c2c7a605874758d5131118e4b04
    Image:          registry.tekqube.lan/openebs/m-apiserver:2.6.0
    Image ID:       registry.tekqube.lan/openebs/m-apiserver@sha256:16f2a6d8a20d28d1326bae83e7adac2db05b5388b6a10c43e22d93153b963b9d
    Port:           5656/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 03 Mar 2021 19:25:16 +0000
    Ready:          False
    Restart Count:  0
    Liveness:       exec [/usr/local/bin/mayactl version] delay=30s timeout=1s period=60s #success=1 #failure=3
    Readiness:      exec [/usr/local/bin/mayactl version] delay=30s timeout=1s period=60s #success=1 #failure=3
    Environment:
      OPENEBS_NAMESPACE:                             openebs (v1:metadata.namespace)
      OPENEBS_SERVICE_ACCOUNT:                        (v1:spec.serviceAccountName)
      OPENEBS_MAYA_POD_NAME:                         maya-apiserver-6f79bb87bd-58kp7 (v1:metadata.name)
      OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG:      true
      OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL:  false
      OPENEBS_IO_JIVA_CONTROLLER_IMAGE:              openebs/jiva:2.6.0
      OPENEBS_IO_JIVA_REPLICA_IMAGE:                 openebs/jiva:2.6.0
      OPENEBS_IO_JIVA_REPLICA_COUNT:                 3
      OPENEBS_IO_CSTOR_TARGET_IMAGE:                 openebs/cstor-istgt:2.6.0
      OPENEBS_IO_CSTOR_POOL_IMAGE:                   openebs/cstor-pool:2.6.0
      OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE:              openebs/cstor-pool-mgmt:2.6.0
      OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE:            openebs/cstor-volume-mgmt:2.6.0
      OPENEBS_IO_VOLUME_MONITOR_IMAGE:               openebs/m-exporter:2.6.0
      OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE:          openebs/m-exporter:2.6.0
      OPENEBS_IO_HELPER_IMAGE:                       openebs/linux-utils:2.6.0
      OPENEBS_IO_ENABLE_ANALYTICS:                   false
      OPENEBS_IO_INSTALLER_TYPE:                     openebs-operator
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from openebs-maya-operator-token-bt5rb (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  openebs-maya-operator-token-bt5rb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  openebs-maya-operator-token-bt5rb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From     Message
  ----     ------     ----                   ----     -------
  Warning  Unhealthy  3m47s (x910 over 15h)  kubelet  Liveness probe errored: rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 1s exceeded: context deadline exceeded
# Get logs for startup api server
$ kubectl logs -n openebs maya-apiserver-6f79bb87bd-58kp7
+ MAYA_API_SERVER_NETWORK=eth0
+ ip -4 addr show scope global dev eth0
+ grep inet
+ awk '{print $2}'
+ cut -d / -f 1
+ CONTAINER_IP_ADDR=10.1.251.215
+ exec /usr/local/bin/maya-apiserver start '--bind=10.1.251.215'
I0303 19:25:16.338105       1 start.go:148] Initializing maya-apiserver...
I0303 19:25:16.529286       1 start.go:279] Starting maya api server ...
I0303 19:25:20.610703       1 start.go:288] resources applied successfully by installer
I0303 19:25:20.704653       1 start.go:193] Maya api server configuration:
I0303 19:25:20.704744       1 start.go:195]          Log Level: INFO
I0303 19:25:20.704782       1 start.go:195]             Region: global (DC: dc1)
I0303 19:25:20.704816       1 start.go:195]            Version: 2.6.0-released
I0303 19:25:20.704846       1 start.go:201] 
I0303 19:25:20.704876       1 start.go:204] Maya api server started! Log data will stream in below:
I0303 19:25:20.713460       1 runner.go:37] Starting SPC controller
I0303 19:25:20.713523       1 runner.go:40] Waiting for informer caches to sync
I0303 19:25:20.913834       1 runner.go:45] Checking for preupgrade tasks
I0303 19:25:20.948354       1 runner.go:51] Starting SPC workers
I0303 19:25:20.948478       1 runner.go:58] Started SPC workers

Anything else you would like to add:

When attaching with a shell to the running pod, I can confirm the liveness probe fails since the command takes around 12 seconds (!) to finish:

$ time mayactl version
Version: 2.6.0-released
Git commit: 519dc0e567d77f3573e4e5b8096f1450e8928f54
GO Version: go1.14.7
GO ARCH: arm64
GO OS: linux
m-apiserver url:  http://10.1.251.215:5656
m-apiserver status:  running

real    0m12.066s
user    0m0.012s
sys     0m0.046s

When I run the same command, but specify the server and port using parameters, the command finishes within milliseconds:

$ time mayactl -m 10.1.251.215 -p 5656 version
Version: 2.6.0-released
Git commit: 519dc0e567d77f3573e4e5b8096f1450e8928f54
GO Version: go1.14.7
GO ARCH: arm64
GO OS: linux
m-apiserver url:  http://10.1.251.215:5656
m-apiserver status:  running

real    0m0.037s
user    0m0.016s
sys     0m0.023s

I assume the CLI uses some of the environment variables to connect to its status endpoint; I'm not sure what has been set incorrectly. I have included the environment below:

$ env 
KUBERNETES_SERVICE_PORT_HTTPS=443
OPENEBS_NAMESPACE=openebs
KUBERNETES_SERVICE_PORT=443
OPENEBS_IO_ENABLE_ANALYTICS=false
HOSTNAME=maya-apiserver-6f79bb87bd-58kp7
OPENEBS_IO_CSTOR_TARGET_IMAGE=openebs/cstor-istgt:2.6.0
OPENEBS_MAYA_POD_NAME=maya-apiserver-6f79bb87bd-58kp7
ADMISSION_SERVER_SVC_PORT_443_TCP=tcp://10.152.183.18:443
ADMISSION_SERVER_SVC_SERVICE_HOST=10.152.183.18
MAYA_APISERVER_SERVICE_SERVICE_HOST=10.152.183.216
ADMISSION_SERVER_SVC_PORT=tcp://10.152.183.18:443
PWD=/
MAYA_APISERVER_SERVICE_PORT_5656_TCP_PROTO=tcp
OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE=openebs/cstor-pool-mgmt:2.6.0
OPENEBS_IO_HELPER_IMAGE=openebs/linux-utils:2.6.0
OPENEBS_SERVICE_ACCOUNT=openebs-maya-operator
HOME=/root
OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE=openebs/cstor-volume-mgmt:2.6.0
OPENEBS_IO_JIVA_CONTROLLER_IMAGE=openebs/jiva:2.6.0
KUBERNETES_PORT_443_TCP=tcp://10.152.183.1:443
MAYA_APISERVER_SERVICE_SERVICE_PORT=5656
MAYA_APISERVER_SERVICE_PORT_5656_TCP_ADDR=10.152.183.216
OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE=openebs/m-exporter:2.6.0
MAYA_APISERVER_SERVICE_PORT_5656_TCP=tcp://10.152.183.216:5656
MAYA_APISERVER_SERVICE_SERVICE_PORT_API=5656
OPENEBS_IO_CSTOR_POOL_IMAGE=openebs/cstor-pool:2.6.0
TERM=xterm
ADMISSION_SERVER_SVC_SERVICE_PORT=443
SHLVL=1
ADMISSION_SERVER_SVC_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
ADMISSION_SERVER_SVC_PORT_443_TCP_ADDR=10.152.183.18
KUBERNETES_PORT_443_TCP_ADDR=10.152.183.1
OPENEBS_IO_VOLUME_MONITOR_IMAGE=openebs/m-exporter:2.6.0
OPENEBS_IO_JIVA_REPLICA_COUNT=3
KUBERNETES_SERVICE_HOST=10.152.183.1
MAYA_APISERVER_SERVICE_PORT_5656_TCP_PORT=5656
KUBERNETES_PORT=tcp://10.152.183.1:443
KUBERNETES_PORT_443_TCP_PORT=443
ADMISSION_SERVER_SVC_PORT_443_TCP_PROTO=tcp
MAYA_API_SERVER_NETWORK=eth0
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL=false
MAYA_APISERVER_SERVICE_PORT=tcp://10.152.183.216:5656
OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG=true
OPENEBS_IO_JIVA_REPLICA_IMAGE=openebs/jiva:2.6.0
OPENEBS_IO_INSTALLER_TYPE=openebs-operator
_=/usr/bin/env

Environment:

  • Maya version: 2.6.0-released

  • OpenEBS version: 2.6.0

  • Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.7-34+fa60fe11bf77d0", GitCommit:"fa60fe11bf77d0c591abbc397e178efe296f83f9", GitTreeState:"clean", BuildDate:"2021-02-11T20:46:36Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/arm64"}
  • Kubernetes installer & version: microk8s 1.19/stable

  • Cloud provider or hardware configuration: Raspberry Pi 4 8Gb arm64

  • OS (e.g. from /etc/os-release):

NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
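Given that the probe command itself takes ~12 seconds, one stop-gap (a sketch; it assumes the maya-apiserver container is at index 0 of the pod spec) is to raise the probe timeouts on the deployment until the underlying slowness is resolved:

kubectl -n openebs patch deployment maya-apiserver --type=json -p '[
  {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/timeoutSeconds", "value": 15},
  {"op": "replace", "path": "/spec/template/spec/containers/0/readinessProbe/timeoutSeconds", "value": 15}
]'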

templating issue in cleanup-webhook.yaml

The new cleanup-webhook.yaml has an issue when deploying the chart with a .Values.webhook.tolerations defined. The chart uses the tolerations via:

  {{- if .Values.webhook.tolerations }}
  tolerations:
  {{ toYaml .Values.webhook.tolerations | indent 8 }}
  {{- end }}

When rendered, this indents the tolerations incorrectly due to the spaces at the start of the 'toYaml' line (see deployment-admission-server.yaml for how this should be used).

However, that said, does this actually need to pull in tolerations? This is a template for a Job, it does not have a nodeSelector that may need accompanying tolerations - I suspect the four lines above could just be removed.
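For reference, a corrected form of the snippet (a sketch using the usual nindent pattern; the indent width must match the surrounding pod spec in cleanup-webhook.yaml) would be:

  {{- if .Values.webhook.tolerations }}
  tolerations: {{- toYaml .Values.webhook.tolerations | nindent 8 }}
  {{- end }}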

Publish to OCI

Please publish OpenEBS to a Helm OCI repository as well.
This is slowly becoming the new and faster standard as of 2024.
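For illustration, consumers could then pull the chart straight from a registry (the registry path below is purely hypothetical):

helm install openebs oci://registry.example.com/openebs/openebs --namespace openebs --create-namespace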

Enabling ndmExporter results in duplicate "name" keys

With default values, simply enabling the exporter (flipping the setting below from its default of false to true)

    ndmExporter:
      enabled: false

Results in the following (in openebs-ndm-cluster-exporter deployment):

        chart: openebs-3.5.0
        heritage: Helm
        openebs.io/version: "3.5.0"
        app: openebs-ndm-cluster-exporter
        release: openebs
        component: openebs-ndm-cluster-exporter
        name: openebs-ndm-node-exporter
        openebs.io/component-name: openebs-ndm-cluster-exporter
        name: openebs-ndm-cluster-exporter

which includes the "name" field twice, causing issues such as the Flux helm-controller failing to install the chart.

Upgrade 3.7.0->3.8.0 fails when there exists a volumesnapshotclass

When I try to upgrade, I get:

# helm -n openebs upgrade openebs openebs/openebs --reuse-values --version 3.8.0 
false
Error: UPGRADE FAILED: Unable to continue with update: CustomResourceDefinition "volumesnapshotclasses.snapshot.storage.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "openebs"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "openebs"

Actually I have one volumesnapshotclass:

# kubectl get volumesnapshotclasses.snapshot.storage.k8s.io 
NAME                    DRIVER               DELETIONPOLICY   AGE
longhorn-snapshot-vsc   driver.longhorn.io   Delete           132d
# kubectl get volumesnapshotclasses.snapshot.storage.k8s.io longhorn-snapshot-vsc -o yaml
apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: Delete
driver: driver.longhorn.io
kind: VolumeSnapshotClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"snapshot.storage.k8s.io/v1","deletionPolicy":"Delete","driver":"driver.longhorn.io","kind":"VolumeSnapshotClass","metadata":{"annotations":{},"labels":{"velero.io/csi-volumesnapshot-class":"true"},"name":"longhorn-snapshot-vsc"},"parameters":{"type":"snap"}}
  creationTimestamp: "2023-05-23T08:24:38Z"
  generation: 2
  labels:
    velero.io/csi-volumesnapshot-class: "true"
  managedFields:
  - apiVersion: snapshot.storage.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:deletionPolicy: {}
      f:driver: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:velero.io/csi-volumesnapshot-class: {}
      f:parameters:
        .: {}
        f:type: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2023-06-28T12:59:12Z"
  name: longhorn-snapshot-vsc
  resourceVersion: "105590151"
  uid: b4206632-3c4b-459f-a0ff-d5324b9a2b0f
parameters:
  type: snap

But this is from Longhorn, not OpenEBS.

How can I get over this?
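One way around the ownership check (a sketch derived from the error message; it tells Helm to adopt the existing CRD into the openebs release) is to add the expected labels and annotations to the CRD before upgrading:

kubectl label crd volumesnapshotclasses.snapshot.storage.k8s.io app.kubernetes.io/managed-by=Helm
kubectl annotate crd volumesnapshotclasses.snapshot.storage.k8s.io \
  meta.helm.sh/release-name=openebs \
  meta.helm.sh/release-namespace=openebs

Note that this hands management of the shared CRD to the openebs release (uninstalling it could then affect the Longhorn snapshot class), so disabling the chart's snapshot CRD installation, if the chart exposes a flag for that, may be the safer route.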

Enhance the github action workflow

At the moment, the lint and test and chart releaser are running in parallel once the PR is merged. This should ideally change to:

  • On PR, Run chart lint and test (this workflow needs to be modified to run only on PR and not on merge)
  • On merge, Run chart lint and test, followed by release charts. The chart should not be released if the previous step fails.

Installing the latest helm chart release 3.3.0 includes images tagged 3.2.0

When I install the latest helm chart version 3.3.0 the images included are tagged 3.2.0. I can work around this by explicitly setting the correct tags in the values.yaml but I expect the default images to be of the correct version. Thanks

$ helm template --namespace openebs openebs/openebs --version 3.3.0 | grep '3\.[0-9]\.[0-9]'
    chart: openebs-3.3.0
    chart: openebs-3.3.0
    chart: openebs-3.3.0
    chart: openebs-3.3.0
    chart: openebs-3.3.0
    openebs.io/version: 3.2.0
        openebs.io/version: 3.2.0
    chart: openebs-3.3.0
    openebs.io/version: 3.2.0
        openebs.io/version: 3.2.0
        image: "openebs/provisioner-localpv:3.2.0"
          value: "openebs/linux-utils:3.2.0"
    chart: openebs-3.3.0
    openebs.io/version: 3.2.0
        openebs.io/version: 3.2.0
          value: "openebs/linux-utils:3.2.0"

Chart: Allow configuring of deployment strategy

The values for the strategy are not exposed in the chart here.

In particular, setting maxUnavailable would be useful, since Helm will consider the deployment done early (Ref). When installing a new cluster with the local-pv-provisioner along with other charts, this will cause additional charts to be installed before openebs is available. If those charts need volumes, the PVC will be stuck in Pending because the request won't be processed once openebs has finished installing.
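For context, the request amounts to exposing the standard Deployment strategy block through values; e.g. (plain Kubernetes Deployment fields shown here, not existing chart values):

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
    maxSurge: 1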

helm upgrade failed with "nil pointer evaluating interface {}.enabled"

OpenEBS upgrade from 3.2.0 to 3.3.0 failed with:

% kubectl get nodes
NAME       STATUS   ROLES                      AGE     VERSION
srvl012    Ready    controlplane,etcd,worker   2y30d   v1.23.8
srvl013    Ready    controlplane,etcd,worker   2y30d   v1.23.8
srvl033a   Ready    worker                     2y30d   v1.23.8
srvl033b   Ready    worker                     2y30d   v1.23.8
srvl062    Ready    controlplane,etcd,worker   424d    v1.23.8

% helm list -n openebs
NAME    NAMESPACE       REVISION        UPDATED                                         STATUS          CHART           APP VERSION
openebs openebs         18              2022-06-16 09:42:58.892047791 +0200 CEST        deployed        openebs-3.2.0   3.2.0      

% helm get values openebs -n openebs
USER-SUPPLIED VALUES:
cstor:
  enabled: true
openebs-ndm:
  enabled: true

% helm upgrade openebs openebs/openebs --namespace openebs --reuse-values
Error: UPGRADE FAILED: template: openebs/templates/localprovisioner/hostpath-class.yaml:32:14: executing "openebs/templates/localprovisioner/hostpath-class.yaml" at <.Values.localprovisioner.hostpathClass.ext4Quota.enabled>: nil pointer evaluating interface {}.enabled
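A likely workaround (a sketch based on the value path named in the error; --reuse-values keeps only the old values, so the key introduced in the newer chart stays nil) is to supply the missing value explicitly during the upgrade:

% helm upgrade openebs openebs/openebs --namespace openebs --reuse-values \
    --set localprovisioner.hostpathClass.ext4Quota.enabled=false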

Add a new flag to charts that will disable legacy components

To disable deprecated components, multiple values need to be specified like

--set webhook.enabled=false \
--set snapshotOperator.enabled=false \
--set provisioner.enabled=false \
--set apiserver.enabled=false \

It would be nice to add a new variable that can replace all of the above. For all the above flags, use this new flag with default=true, ANDed with the individual flags:

--set legacy.enabled=true 

The workflow to migrate from legacy to new components can be as follows:

  • Enable new driver:
    helm upgrade ... --reuse-values --set cstor.enabled=true
  • Migrate older pools and volumes to new drivers
  • Disable legacy components
    helm upgrade ... --reuse-values --set legacy.enabled=false

What does admission-server do?

Hi
Not sure if this is the right place, but I'll try it anyway. What do I lose if I deactivate the admission server? I understand what an admission controller does generally, which is to "validate" k8s objects. But I can't find information in the documentation about what exactly this admission controller does.
Thanks for your time.
PD: The chart and openebs are very cool projects, thanks !

param : os-disk-exclude-filter.exclude should be updatable

We have this for the config map right now

filterconfigs:
      - key: os-disk-exclude-filter
        name: os disk exclude filter
        state: {{ .Values.ndm.filters.enableOsDiskExcludeFilter }}
        exclude: "/,/etc/hosts,/boot"

... shouldn't {{ .Values.ndm.filters.enableOsDiskExcludeFilter }} go to exclude and not to state?

filterconfigs:
      - key: os-disk-exclude-filter
        name: os disk exclude filter
        state: {{ .Values.ndm.filters.enableOsDiskState }}  // or not updatable at all
        exclude: {{ .Values.ndm.filters.enableOsDiskExcludeFilter }}  ///????
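In other words, the suggestion is to keep a boolean for state and add a separate value for the exclude paths; a sketch with a hypothetical new value (ndm.filters.osDiskExcludePaths does not exist in the chart today):

filterconfigs:
      - key: os-disk-exclude-filter
        name: os disk exclude filter
        state: {{ .Values.ndm.filters.enableOsDiskExcludeFilter }}
        exclude: {{ .Values.ndm.filters.osDiskExcludePaths | default "/,/etc/hosts,/boot" | quote }}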

Missing Chart for nfs provisioner

Hi, we were using https://openebs.github.io/dynamic-nfs-provisioner as the chart repo and that stopped working this morning.

We figured out we need to update to https://openebs-archive.github.io/dynamic-nfs-provisioner

I don't know if GH Pages supports redirects or anything that might have made this easier?
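For anyone hitting the same thing, switching to the new location looks roughly like this (the repo alias openebs-nfs is just whatever name was used originally):

helm repo remove openebs-nfs
helm repo add openebs-nfs https://openebs-archive.github.io/dynamic-nfs-provisioner
helm repo update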

init-pvc pod runs with privileged security context

Hi,

On our cluster (microk8s), pods running with privileged security context are disallowed by configuration of the API server. It seems now that the localpv-provisioner starts an init container that runs with a privileged security context, hence causing the error below (from the output of kubectl describe pvc ...). I created a custom storage class to set the base path as described here.

Is there a functional reason why the container needs the privileges or is the security context configurable somewhere? Thanks!

Warning ProvisioningFailed 20s (x2 over 35s) openebs.io/local_openebs-localpv-provisioner-695c5f756-cfptz_03c93f6f-84bf-4dc1-a06e-da2b9c7adfdc failed to provision volume with StorageClass "custom-storage-class": Pod "init-pvc-76e5fa24-8c03-4f7c-b8bd-04f09c114175" is invalid: spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy

matchLabels for charts are wrong

Hi,
it seems most of the matchLabels are incorrect?
take openebs-snapshot-operator: it focuses on

matchLabels:
  app: openebs
  release: openebs

but this targets all pods instead of just the snapshot pods

however if you look at the direct yaml deployment version, the matchLabels show

matchLabels:
  name: openebs-snapshot-operator
  openebs.io/component-name: openebs-snapshot-operator

which is correct and targets only the required pods!

Issues with PSP (cis-1.5/1.6) enabled cluster

Hi,

It seems that the configuration of the helm charts is not fully functional on PSP-enabled clusters (tested with rke2 - cis-1.5/1.6 enabled):

values.yaml

rbac:
  pspEnabled: true

jiva:
  enabled: true
  rbac:
    pspEnabled: true
  openebsLocalpv:
    enabled: true
  localpv-provisioner:
    openebsNDM:
      enabled: true

cstor:
  enabled: true
  rbac:
    pspEnabled: true
  openebsNDM:
    enabled: true

openebs-ndm:
  enabled: true

localpv-provisioner:
  enabled: true
  rbac:
    pspEnabled: true
  openebsNDM:
    enabled: true

kubectl get pods -o wide

openebs           openebs-admission-server-7dd88c6b6-46knl                1/1     Running                      0          11m   10.42.0.30   node-01   <none>           <none>
openebs           openebs-apiserver-7bd8cc7c5c-rfbqb                      1/1     Running                      0          11m   10.42.0.31   node-01   <none>           <none>
openebs           openebs-cstor-admission-server-66b7895495-d66r7         0/1     CreateContainerConfigError   0          11m   10.42.0.27   node-01   <none>           <none>
openebs           openebs-cstor-csi-controller-0                          0/6     CreateContainerConfigError   0          10m   10.42.0.33   node-01   <none>           <none>
openebs           openebs-cstor-csi-node-d87nx                            2/2     Running                      0          10m   10.0.0.1     node-01   <none>           <none>
openebs           openebs-cstor-csi-node-l5zrq                            2/2     Running                      0          10m   10.0.0.2     node-02   <none>           <none>
openebs           openebs-cstor-cspc-operator-759cf9cb8c-twtfg            0/1     CreateContainerConfigError   0          11m   10.42.1.15   node-02   <none>           <none>
openebs           openebs-cstor-cvc-operator-fb68fbdb6-nfmtq              0/1     CreateContainerConfigError   0          11m   10.42.0.28   node-01   <none>           <none>
openebs           openebs-jiva-csi-controller-0                           0/5     CreateContainerConfigError   0          10m   10.42.1.18   node-02   <none>           <none>
openebs           openebs-jiva-operator-6d44c67df9-645cg                  1/1     Running                      0          11m   10.42.1.16   node-02   <none>           <none>
openebs           openebs-localpv-provisioner-7bf58d4896-jbc7h            1/1     Running                      0          10m   10.42.0.34   node-01   <none>           <none>
openebs           openebs-ndm-operator-7d6955f6f5-55vzz                   0/1     CreateContainerConfigError   0          11m   10.42.0.26   node-01   <none>           <none>
openebs           openebs-provisioner-554d6bb8db-hqgsr                    1/1     Running                      0          11m   10.42.1.17   node-02   <none>           <none>
openebs           openebs-snapshot-operator-b76fb87b8-n9jnf               2/2     Running                      0          11m   10.42.0.32   node-01   <none>           <none>

kubectl get events -w

LAST SEEN   TYPE      REASON              OBJECT                                                               MESSAGE
4m          Normal    Scheduled           pod/openebs-admission-server-7dd88c6b6-46knl                         Successfully assigned openebs/openebs-admission-server-7dd88c6b6-46knl to node-01
3m59s       Normal    Pulling             pod/openebs-admission-server-7dd88c6b6-46knl                         Pulling image "openebs/admission-server:2.12.1"
3m52s       Normal    Pulled              pod/openebs-admission-server-7dd88c6b6-46knl                         Successfully pulled image "openebs/admission-server:2.12.1" in 6.850287365s
3m52s       Normal    Created             pod/openebs-admission-server-7dd88c6b6-46knl                         Created container admission-webhook
3m52s       Normal    Started             pod/openebs-admission-server-7dd88c6b6-46knl                         Started container admission-webhook
4m          Normal    SuccessfulCreate    replicaset/openebs-admission-server-7dd88c6b6                        Created pod: openebs-admission-server-7dd88c6b6-46knl
4m          Normal    ScalingReplicaSet   deployment/openebs-admission-server                                  Scaled up replica set openebs-admission-server-7dd88c6b6 to 1
4m          Normal    Scheduled           pod/openebs-apiserver-7bd8cc7c5c-rfbqb                               Successfully assigned openebs/openebs-apiserver-7bd8cc7c5c-rfbqb to node-01
3m59s       Normal    Pulling             pod/openebs-apiserver-7bd8cc7c5c-rfbqb                               Pulling image "openebs/m-apiserver:2.12.1"
3m54s       Normal    Pulled              pod/openebs-apiserver-7bd8cc7c5c-rfbqb                               Successfully pulled image "openebs/m-apiserver:2.12.1" in 4.938507915s
3m54s       Normal    Created             pod/openebs-apiserver-7bd8cc7c5c-rfbqb                               Created container openebs-apiserver
3m54s       Normal    Started             pod/openebs-apiserver-7bd8cc7c5c-rfbqb                               Started container openebs-apiserver
4m          Normal    SuccessfulCreate    replicaset/openebs-apiserver-7bd8cc7c5c                              Created pod: openebs-apiserver-7bd8cc7c5c-rfbqb
4m          Normal    ScalingReplicaSet   deployment/openebs-apiserver                                         Scaled up replica set openebs-apiserver-7bd8cc7c5c to 1
4m1s        Normal    Scheduled           pod/openebs-cstor-admission-server-66b7895495-d66r7                  Successfully assigned openebs/openebs-cstor-admission-server-66b7895495-d66r7 to node-01
4m1s        Normal    Pulling             pod/openebs-cstor-admission-server-66b7895495-d66r7                  Pulling image "openebs/cstor-webhook:2.12.0"
3m55s       Normal    Pulled              pod/openebs-cstor-admission-server-66b7895495-d66r7                  Successfully pulled image "openebs/cstor-webhook:2.12.0" in 5.85522673s
100s        Warning   Failed              pod/openebs-cstor-admission-server-66b7895495-d66r7                  Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-admission-server-66b7895495-d66r7_openebs(4b0f1ae2-47a3-44d3-a5fe-2c8fe34c5df7)", container: openebs-cstor-admission-webhook)
100s        Normal    Pulled              pod/openebs-cstor-admission-server-66b7895495-d66r7                  Container image "openebs/cstor-webhook:2.12.0" already present on machine
4m1s        Normal    SuccessfulCreate    replicaset/openebs-cstor-admission-server-66b7895495                 Created pod: openebs-cstor-admission-server-66b7895495-d66r7
4m1s        Normal    ScalingReplicaSet   deployment/openebs-cstor-admission-server                            Scaled up replica set openebs-cstor-admission-server-66b7895495 to 1
3m59s       Normal    Scheduled           pod/openebs-cstor-csi-controller-0                                   Successfully assigned openebs/openebs-cstor-csi-controller-0 to node-01
3m50s       Normal    Pulled              pod/openebs-cstor-csi-controller-0                                   Container image "k8s.gcr.io/sig-storage/csi-resizer:v1.2.0" already present on machine
3m50s       Warning   Failed              pod/openebs-cstor-csi-controller-0                                   Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-csi-controller-0_openebs(846ad592-109c-4974-bf6c-be0df89751fa)", container: csi-resizer)
3m50s       Normal    Pulled              pod/openebs-cstor-csi-controller-0                                   Container image "k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3" already present on machine
3m50s       Warning   Failed              pod/openebs-cstor-csi-controller-0                                   Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-csi-controller-0_openebs(846ad592-109c-4974-bf6c-be0df89751fa)", container: csi-snapshotter)
3m58s       Normal    Pulling             pod/openebs-cstor-csi-controller-0                                   Pulling image "k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3"
3m57s       Normal    Pulled              pod/openebs-cstor-csi-controller-0                                   Successfully pulled image "k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3" in 1.620208975s
3m50s       Warning   Failed              pod/openebs-cstor-csi-controller-0                                   Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-csi-controller-0_openebs(846ad592-109c-4974-bf6c-be0df89751fa)", container: snapshot-controller)
3m57s       Normal    Pulling             pod/openebs-cstor-csi-controller-0                                   Pulling image "k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0"
3m55s       Normal    Pulled              pod/openebs-cstor-csi-controller-0                                   Successfully pulled image "k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0" in 1.793285999s
3m50s       Warning   Failed              pod/openebs-cstor-csi-controller-0                                   Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-csi-controller-0_openebs(846ad592-109c-4974-bf6c-be0df89751fa)", container: csi-provisioner)
3m55s       Normal    Pulling             pod/openebs-cstor-csi-controller-0                                   Pulling image "k8s.gcr.io/sig-storage/csi-attacher:v3.1.0"
3m53s       Normal    Pulled              pod/openebs-cstor-csi-controller-0                                   Successfully pulled image "k8s.gcr.io/sig-storage/csi-attacher:v3.1.0" in 1.686984922s
3m53s       Warning   Failed              pod/openebs-cstor-csi-controller-0                                   Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-csi-controller-0_openebs(846ad592-109c-4974-bf6c-be0df89751fa)", container: csi-attacher)
3m53s       Normal    Pulling             pod/openebs-cstor-csi-controller-0                                   Pulling image "openebs/cstor-csi-driver:2.12.0"
3m50s       Normal    Pulled              pod/openebs-cstor-csi-controller-0                                   Successfully pulled image "openebs/cstor-csi-driver:2.12.0" in 2.658414557s
3m50s       Warning   Failed              pod/openebs-cstor-csi-controller-0                                   Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-csi-controller-0_openebs(846ad592-109c-4974-bf6c-be0df89751fa)", container: cstor-csi-plugin)
3m50s       Normal    Pulled              pod/openebs-cstor-csi-controller-0                                   Container image "k8s.gcr.io/sig-storage/snapshot-controller:v3.0.3" already present on machine
3m50s       Normal    Pulled              pod/openebs-cstor-csi-controller-0                                   Container image "k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0" already present on machine
3m50s       Normal    Pulled              pod/openebs-cstor-csi-controller-0                                   Container image "k8s.gcr.io/sig-storage/csi-attacher:v3.1.0" already present on machine
3m59s       Warning   FailedCreate        statefulset/openebs-cstor-csi-controller                             create Pod openebs-cstor-csi-controller-0 in StatefulSet openebs-cstor-csi-controller failed error: pods "openebs-cstor-csi-controller-0" is forbidden: no PriorityClass with name openebs-cstor-csi-controller-critical was found
3m59s       Normal    SuccessfulCreate    statefulset/openebs-cstor-csi-controller                             create Pod openebs-cstor-csi-controller-0 in StatefulSet openebs-cstor-csi-controller successful
3m57s       Normal    Scheduled           pod/openebs-cstor-csi-node-d87nx                                     Successfully assigned openebs/openebs-cstor-csi-node-d87nx to node-01
3m56s       Normal    Pulled              pod/openebs-cstor-csi-node-d87nx                                     Container image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0" already present on machine
3m56s       Normal    Created             pod/openebs-cstor-csi-node-d87nx                                     Created container csi-node-driver-registrar
3m56s       Normal    Started             pod/openebs-cstor-csi-node-d87nx                                     Started container csi-node-driver-registrar
3m56s       Normal    Pulling             pod/openebs-cstor-csi-node-d87nx                                     Pulling image "openebs/cstor-csi-driver:2.12.0"
3m51s       Normal    Pulled              pod/openebs-cstor-csi-node-d87nx                                     Successfully pulled image "openebs/cstor-csi-driver:2.12.0" in 5.070374526s
3m50s       Normal    Created             pod/openebs-cstor-csi-node-d87nx                                     Created container cstor-csi-plugin
3m50s       Normal    Started             pod/openebs-cstor-csi-node-d87nx                                     Started container cstor-csi-plugin
3m56s       Normal    Scheduled           pod/openebs-cstor-csi-node-l5zrq                                     Successfully assigned openebs/openebs-cstor-csi-node-l5zrq to node-02
3m56s       Normal    Pulled              pod/openebs-cstor-csi-node-l5zrq                                     Container image "k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0" already present on machine
3m56s       Normal    Created             pod/openebs-cstor-csi-node-l5zrq                                     Created container csi-node-driver-registrar
3m56s       Normal    Started             pod/openebs-cstor-csi-node-l5zrq                                     Started container csi-node-driver-registrar
3m56s       Normal    Pulling             pod/openebs-cstor-csi-node-l5zrq                                     Pulling image "openebs/cstor-csi-driver:2.12.0"
3m50s       Normal    Pulled              pod/openebs-cstor-csi-node-l5zrq                                     Successfully pulled image "openebs/cstor-csi-driver:2.12.0" in 5.707887193s
3m50s       Normal    Created             pod/openebs-cstor-csi-node-l5zrq                                     Created container cstor-csi-plugin
3m50s       Normal    Started             pod/openebs-cstor-csi-node-l5zrq                                     Started container cstor-csi-plugin
3m59s       Warning   FailedCreate        daemonset/openebs-cstor-csi-node                                     Error creating: pods "openebs-cstor-csi-node-" is forbidden: no PriorityClass with name openebs-cstor-csi-node-critical was found
3m57s       Normal    SuccessfulCreate    daemonset/openebs-cstor-csi-node                                     Created pod: openebs-cstor-csi-node-d87nx
3m57s       Normal    SuccessfulCreate    daemonset/openebs-cstor-csi-node                                     Created pod: openebs-cstor-csi-node-l5zrq
4m1s        Normal    Scheduled           pod/openebs-cstor-cspc-operator-759cf9cb8c-twtfg                     Successfully assigned openebs/openebs-cstor-cspc-operator-759cf9cb8c-twtfg to node-02
4m1s        Normal    Pulling             pod/openebs-cstor-cspc-operator-759cf9cb8c-twtfg                     Pulling image "openebs/cspc-operator:2.12.0"
3m57s       Normal    Pulled              pod/openebs-cstor-cspc-operator-759cf9cb8c-twtfg                     Successfully pulled image "openebs/cspc-operator:2.12.0" in 3.738448341s
109s        Warning   Failed              pod/openebs-cstor-cspc-operator-759cf9cb8c-twtfg                     Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-cspc-operator-759cf9cb8c-twtfg_openebs(cc4ceaea-1999-4164-85ce-e758259b01ee)", container: openebs-cstor-cspc-operator)
109s        Normal    Pulled              pod/openebs-cstor-cspc-operator-759cf9cb8c-twtfg                     Container image "openebs/cspc-operator:2.12.0" already present on machine
4m1s        Normal    SuccessfulCreate    replicaset/openebs-cstor-cspc-operator-759cf9cb8c                    Created pod: openebs-cstor-cspc-operator-759cf9cb8c-twtfg
4m1s        Normal    ScalingReplicaSet   deployment/openebs-cstor-cspc-operator                               Scaled up replica set openebs-cstor-cspc-operator-759cf9cb8c to 1
4m1s        Normal    Scheduled           pod/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq                       Successfully assigned openebs/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq to node-01
4m1s        Normal    Pulling             pod/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq                       Pulling image "openebs/cvc-operator:2.12.0"
3m57s       Normal    Pulled              pod/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq                       Successfully pulled image "openebs/cvc-operator:2.12.0" in 3.960324508s
102s        Warning   Failed              pod/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq                       Error: container has runAsNonRoot and image will run as root (pod: "openebs-cstor-cvc-operator-fb68fbdb6-nfmtq_openebs(c31f3560-9e0c-47e6-85cf-91f8a7557bab)", container: openebs-cstor-cvc-operator)
102s        Normal    Pulled              pod/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq                       Container image "openebs/cvc-operator:2.12.0" already present on machine
4m1s        Normal    SuccessfulCreate    replicaset/openebs-cstor-cvc-operator-fb68fbdb6                      Created pod: openebs-cstor-cvc-operator-fb68fbdb6-nfmtq
4m1s        Normal    ScalingReplicaSet   deployment/openebs-cstor-cvc-operator                                Scaled up replica set openebs-cstor-cvc-operator-fb68fbdb6 to 1
3m59s       Normal    Scheduled           pod/openebs-jiva-csi-controller-0                                    Successfully assigned openebs/openebs-jiva-csi-controller-0 to node-02
3m33s       Normal    Pulled              pod/openebs-jiva-csi-controller-0                                    Container image "k8s.gcr.io/sig-storage/csi-resizer:v1.2.0" already present on machine
3m45s       Warning   Failed              pod/openebs-jiva-csi-controller-0                                    Error: container has runAsNonRoot and image will run as root (pod: "openebs-jiva-csi-controller-0_openebs(b6a6fa3b-b4c8-47fe-8cdb-6ebf0f446715)", container: csi-resizer)
3m58s       Normal    Pulling             pod/openebs-jiva-csi-controller-0                                    Pulling image "k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0"
3m56s       Normal    Pulled              pod/openebs-jiva-csi-controller-0                                    Successfully pulled image "k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0" in 2.036265238s
3m45s       Warning   Failed              pod/openebs-jiva-csi-controller-0                                    Error: container has runAsNonRoot and image will run as root (pod: "openebs-jiva-csi-controller-0_openebs(b6a6fa3b-b4c8-47fe-8cdb-6ebf0f446715)", container: csi-provisioner)
3m56s       Normal    Pulling             pod/openebs-jiva-csi-controller-0                                    Pulling image "k8s.gcr.io/sig-storage/csi-attacher:v3.1.0"
3m55s       Normal    Pulled              pod/openebs-jiva-csi-controller-0                                    Successfully pulled image "k8s.gcr.io/sig-storage/csi-attacher:v3.1.0" in 1.779400382s
3m45s       Warning   Failed              pod/openebs-jiva-csi-controller-0                                    Error: container has runAsNonRoot and image will run as root (pod: "openebs-jiva-csi-controller-0_openebs(b6a6fa3b-b4c8-47fe-8cdb-6ebf0f446715)", container: csi-attacher)
3m55s       Normal    Pulling             pod/openebs-jiva-csi-controller-0                                    Pulling image "openebs/jiva-csi:2.12.2"
3m47s       Normal    Pulled              pod/openebs-jiva-csi-controller-0                                    Successfully pulled image "openebs/jiva-csi:2.12.2" in 7.507682819s
3m45s       Warning   Failed              pod/openebs-jiva-csi-controller-0                                    Error: container has runAsNonRoot and image will run as root (pod: "openebs-jiva-csi-controller-0_openebs(b6a6fa3b-b4c8-47fe-8cdb-6ebf0f446715)", container: jiva-csi-plugin)
3m47s       Normal    Pulling             pod/openebs-jiva-csi-controller-0                                    Pulling image "k8s.gcr.io/sig-storage/livenessprobe:v2.3.0"
3m46s       Normal    Pulled              pod/openebs-jiva-csi-controller-0                                    Successfully pulled image "k8s.gcr.io/sig-storage/livenessprobe:v2.3.0" in 1.105187761s
3m45s       Warning   Failed              pod/openebs-jiva-csi-controller-0                                    Error: container has runAsNonRoot and image will run as root (pod: "openebs-jiva-csi-controller-0_openebs(b6a6fa3b-b4c8-47fe-8cdb-6ebf0f446715)", container: liveness-probe)
3m45s       Normal    Pulled              pod/openebs-jiva-csi-controller-0                                    Container image "k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0" already present on machine
3m45s       Normal    Pulled              pod/openebs-jiva-csi-controller-0                                    Container image "k8s.gcr.io/sig-storage/csi-attacher:v3.1.0" already present on machine
3m45s       Normal    Pulled              pod/openebs-jiva-csi-controller-0                                    Container image "openebs/jiva-csi:2.12.2" already present on machine
3m45s       Normal    Pulled              pod/openebs-jiva-csi-controller-0                                    Container image "k8s.gcr.io/sig-storage/livenessprobe:v2.3.0" already present on machine
3m59s       Warning   FailedCreate        statefulset/openebs-jiva-csi-controller                              create Pod openebs-jiva-csi-controller-0 in StatefulSet openebs-jiva-csi-controller failed error: pods "openebs-jiva-csi-controller-0" is forbidden: no PriorityClass with name openebs-jiva-csi-controller-critical was found
3m59s       Normal    SuccessfulCreate    statefulset/openebs-jiva-csi-controller                              create Pod openebs-jiva-csi-controller-0 in StatefulSet openebs-jiva-csi-controller successful
78s         Warning   FailedCreate        daemonset/openebs-jiva-csi-node                                      Error creating: pods "openebs-jiva-csi-node-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[5]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[1].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.containers[1].securityContext.allowPrivilegeEscalation: Invalid value: true: Allowing privilege escalation for containers is not allowed]
4m          Normal    Scheduled           pod/openebs-jiva-operator-6d44c67df9-645cg                           Successfully assigned openebs/openebs-jiva-operator-6d44c67df9-645cg to node-02
4m          Normal    Pulling             pod/openebs-jiva-operator-6d44c67df9-645cg                           Pulling image "openebs/jiva-operator:2.12.2"
3m55s       Normal    Pulled              pod/openebs-jiva-operator-6d44c67df9-645cg                           Successfully pulled image "openebs/jiva-operator:2.12.2" in 4.969210006s
3m55s       Normal    Created             pod/openebs-jiva-operator-6d44c67df9-645cg                           Created container openebs-jiva-operator
3m55s       Normal    Started             pod/openebs-jiva-operator-6d44c67df9-645cg                           Started container openebs-jiva-operator
4m1s        Normal    SuccessfulCreate    replicaset/openebs-jiva-operator-6d44c67df9                          Created pod: openebs-jiva-operator-6d44c67df9-645cg
4m1s        Normal    ScalingReplicaSet   deployment/openebs-jiva-operator                                     Scaled up replica set openebs-jiva-operator-6d44c67df9 to 1
4m1s        Normal    Scheduled           pod/openebs-localpv-provisioner-6f9c7d84-bm5qk                       Successfully assigned openebs/openebs-localpv-provisioner-6f9c7d84-bm5qk to node-01
4m          Normal    Pulling             pod/openebs-localpv-provisioner-6f9c7d84-bm5qk                       Pulling image "openebs/provisioner-localpv:2.12.0"
3m57s       Normal    Pulled              pod/openebs-localpv-provisioner-6f9c7d84-bm5qk                       Successfully pulled image "openebs/provisioner-localpv:2.12.0" in 3.794212396s
3m57s       Warning   Failed              pod/openebs-localpv-provisioner-6f9c7d84-bm5qk                       Error: cannot find volume "kube-api-access-26hfj" to mount into container "openebs-localpv-provisioner"
4m1s        Normal    SuccessfulCreate    replicaset/openebs-localpv-provisioner-6f9c7d84                      Created pod: openebs-localpv-provisioner-6f9c7d84-bm5qk
4m1s        Normal    SuccessfulDelete    replicaset/openebs-localpv-provisioner-6f9c7d84                      Deleted pod: openebs-localpv-provisioner-6f9c7d84-bm5qk
3m51s       Normal    Scheduled           pod/openebs-localpv-provisioner-7bf58d4896-jbc7h                     Successfully assigned openebs/openebs-localpv-provisioner-7bf58d4896-jbc7h to node-01
3m51s       Normal    Pulling             pod/openebs-localpv-provisioner-7bf58d4896-jbc7h                     Pulling image "openebs/provisioner-localpv:2.12.1"
3m47s       Normal    Pulled              pod/openebs-localpv-provisioner-7bf58d4896-jbc7h                     Successfully pulled image "openebs/provisioner-localpv:2.12.1" in 3.915593629s
3m47s       Normal    Created             pod/openebs-localpv-provisioner-7bf58d4896-jbc7h                     Created container openebs-localpv-provisioner
3m47s       Normal    Started             pod/openebs-localpv-provisioner-7bf58d4896-jbc7h                     Started container openebs-localpv-provisioner
3m52s       Normal    SuccessfulCreate    replicaset/openebs-localpv-provisioner-7bf58d4896                    Created pod: openebs-localpv-provisioner-7bf58d4896-jbc7h
4m1s        Normal    ScalingReplicaSet   deployment/openebs-localpv-provisioner                               Scaled up replica set openebs-localpv-provisioner-6f9c7d84 to 1
4m1s        Normal    ScalingReplicaSet   deployment/openebs-localpv-provisioner                               Scaled down replica set openebs-localpv-provisioner-6f9c7d84 to 0
3m52s       Normal    ScalingReplicaSet   deployment/openebs-localpv-provisioner                               Scaled up replica set openebs-localpv-provisioner-7bf58d4896 to 1
4m1s        Normal    Scheduled           pod/openebs-ndm-operator-7d6955f6f5-55vzz                            Successfully assigned openebs/openebs-ndm-operator-7d6955f6f5-55vzz to node-01
4m1s        Normal    Pulling             pod/openebs-ndm-operator-7d6955f6f5-55vzz                            Pulling image "openebs/node-disk-operator:1.6.1"
3m57s       Normal    Pulled              pod/openebs-ndm-operator-7d6955f6f5-55vzz                            Successfully pulled image "openebs/node-disk-operator:1.6.1" in 3.574298184s
108s        Warning   Failed              pod/openebs-ndm-operator-7d6955f6f5-55vzz                            Error: container has runAsNonRoot and image will run as root (pod: "openebs-ndm-operator-7d6955f6f5-55vzz_openebs(d2d69836-4321-4c28-b52e-3fe1a17a8367)", container: openebs-ndm-operator)
108s        Normal    Pulled              pod/openebs-ndm-operator-7d6955f6f5-55vzz                            Container image "openebs/node-disk-operator:1.6.1" already present on machine
4m1s        Normal    SuccessfulCreate    replicaset/openebs-ndm-operator-7d6955f6f5                           Created pod: openebs-ndm-operator-7d6955f6f5-55vzz
4m1s        Normal    ScalingReplicaSet   deployment/openebs-ndm-operator                                      Scaled up replica set openebs-ndm-operator-7d6955f6f5 to 1
78s         Warning   FailedCreate        daemonset/openebs-ndm                                                Error creating: pods "openebs-ndm-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[4]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[5]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
4m          Normal    Scheduled           pod/openebs-provisioner-554d6bb8db-hqgsr                             Successfully assigned openebs/openebs-provisioner-554d6bb8db-hqgsr to node-02
4m          Normal    Pulling             pod/openebs-provisioner-554d6bb8db-hqgsr                             Pulling image "openebs/openebs-k8s-provisioner:2.12.1"
3m53s       Normal    Pulled              pod/openebs-provisioner-554d6bb8db-hqgsr                             Successfully pulled image "openebs/openebs-k8s-provisioner:2.12.1" in 6.433119662s
3m53s       Normal    Created             pod/openebs-provisioner-554d6bb8db-hqgsr                             Created container openebs-provisioner
3m53s       Normal    Started             pod/openebs-provisioner-554d6bb8db-hqgsr                             Started container openebs-provisioner
4m          Normal    SuccessfulCreate    replicaset/openebs-provisioner-554d6bb8db                            Created pod: openebs-provisioner-554d6bb8db-hqgsr
4m          Normal    ScalingReplicaSet   deployment/openebs-provisioner                                       Scaled up replica set openebs-provisioner-554d6bb8db to 1
4m          Normal    Scheduled           pod/openebs-snapshot-operator-b76fb87b8-n9jnf                        Successfully assigned openebs/openebs-snapshot-operator-b76fb87b8-n9jnf to node-01
3m59s       Normal    Pulling             pod/openebs-snapshot-operator-b76fb87b8-n9jnf                        Pulling image "openebs/snapshot-controller:2.12.1"
3m55s       Normal    Pulled              pod/openebs-snapshot-operator-b76fb87b8-n9jnf                        Successfully pulled image "openebs/snapshot-controller:2.12.1" in 3.938680494s
3m55s       Normal    Created             pod/openebs-snapshot-operator-b76fb87b8-n9jnf                        Created container openebs-snapshot-controller
3m54s       Normal    Started             pod/openebs-snapshot-operator-b76fb87b8-n9jnf                        Started container openebs-snapshot-controller
3m54s       Normal    Pulling             pod/openebs-snapshot-operator-b76fb87b8-n9jnf                        Pulling image "openebs/snapshot-provisioner:2.12.1"
3m50s       Normal    Pulled              pod/openebs-snapshot-operator-b76fb87b8-n9jnf                        Successfully pulled image "openebs/snapshot-provisioner:2.12.1" in 4.064379059s
3m50s       Normal    Created             pod/openebs-snapshot-operator-b76fb87b8-n9jnf                        Created container openebs-snapshot-provisioner
3m50s       Normal    Started             pod/openebs-snapshot-operator-b76fb87b8-n9jnf                        Started container openebs-snapshot-provisioner
4m          Normal    SuccessfulCreate    replicaset/openebs-snapshot-operator-b76fb87b8                       Created pod: openebs-snapshot-operator-b76fb87b8-n9jnf
4m          Normal    ScalingReplicaSet   deployment/openebs-snapshot-operator                                 Scaled up replica set openebs-snapshot-operator-b76fb87b8 to 1
3m47s       Normal    LeaderElection      endpoints/openebs.io-local                                           openebs-localpv-provisioner-7bf58d4896-jbc7h_94ff8c3c-2ba8-48bb-a423-d883ad84f3a5 became leader
3m53s       Normal    LeaderElection      endpoints/openebs.io-provisioner-iscsi                               openebs-provisioner-554d6bb8db-hqgsr_70df65fb-a720-4963-94ee-2b51f826bf54 became leader
3m50s       Normal    LeaderElection      endpoints/volumesnapshot.external-storage.k8s.io-snapshot-promoter   openebs-snapshot-operator-b76fb87b8-n9jnf_f0cd6250-7560-4a09-9ef9-73d1b51fc6c9 became leader
0s          Normal    Pulled              pod/openebs-cstor-cspc-operator-759cf9cb8c-twtfg                     Container image "openebs/cspc-operator:2.12.0" already present on machine
0s          Normal    Pulled              pod/openebs-jiva-csi-controller-0                                    Container image "k8s.gcr.io/sig-storage/csi-resizer:v1.2.0" already present on machine
0s          Normal    Pulled              pod/openebs-ndm-operator-7d6955f6f5-55vzz                            Container image "openebs/node-disk-operator:1.6.1" already present on machine
0s          Normal    Pulled              pod/openebs-cstor-admission-server-66b7895495-d66r7                  Container image "openebs/cstor-webhook:2.12.0" already present on machine
0s          Normal    Pulled              pod/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq                       Container image "openebs/cvc-operator:2.12.0" already present on machine
0s          Normal    Pulled              pod/openebs-cstor-csi-controller-0                                   Container image "k8s.gcr.io/sig-storage/csi-resizer:v1.2.0" already present on machine
0s          Warning   FailedCreate        daemonset/openebs-ndm                                                Error creating: pods "openebs-ndm-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[4]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[5]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
0s          Warning   FailedCreate        daemonset/openebs-jiva-csi-node                                      Error creating: pods "openebs-jiva-csi-node-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[5]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[1].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.containers[1].securityContext.allowPrivilegeEscalation: Invalid value: true: Allowing privilege escalation for containers is not allowed]
0s          Normal    Pulled              pod/openebs-ndm-operator-7d6955f6f5-55vzz                            Container image "openebs/node-disk-operator:1.6.1" already present on machine
0s          Normal    Pulled              pod/openebs-cstor-cvc-operator-fb68fbdb6-nfmtq                       Container image "openebs/cvc-operator:2.12.0" already present on machine
0s          Normal    Pulled              pod/openebs-cstor-csi-controller-0                                   Container image "k8s.gcr.io/sig-storage/csi-resizer:v1.2.0" already present on machine
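
The events above show two separate admission problems. First, the cStor and Jiva CSI controller StatefulSets and the cStor csi-node DaemonSet reference PriorityClasses (openebs-cstor-csi-controller-critical, openebs-cstor-csi-node-critical, openebs-jiva-csi-controller-critical) that were not yet present when the pods were first created; the SuccessfulCreate events logged a couple of seconds later suggest those errors were transient. Second, the openebs-ndm and openebs-jiva-csi-node DaemonSets and several operator pods are rejected by the cluster's restrictive PodSecurityPolicy (hostNetwork, hostPath volumes, privileged containers and root users are all forbidden). If a PriorityClass really is missing, it can be created manually; the manifest below is only a sketch, the name is taken from the FailedCreate events, and the value is an assumption (any value high enough for cluster add-ons works):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: openebs-cstor-csi-controller-critical   # name copied from the FailedCreate event
value: 900000000                                # assumed value, chosen for a critical add-on
globalDefault: false
description: "Manually created priority class for the OpenEBS cStor CSI controller (sketch)"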

kubectl get nodes -o wide

NAME      STATUS   ROLES                       AGE   VERSION          INTERNAL-IP   EXTERNAL-IP      OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
node-01   Ready    control-plane,etcd,master   28m   v1.21.4+rke2r3   10.0.0.1      XX.XX.XX.XX   Ubuntu 20.04.3 LTS   5.4.0-84-generic   containerd://1.4.8-k3s1
node-02   Ready    <none>                      26m   v1.21.4+rke2r3   10.0.0.2      YY.YY.YY.YY   Ubuntu 20.04.3 LTS   5.4.0-84-generic   containerd://1.4.8-k3s1

kubectl get psp global-restricted-psp -o yaml

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    psp.rke2.io/global-restricted: resolved
  creationTimestamp: "2021-09-12T22:41:56Z"
  name: global-restricted-psp
  resourceVersion: "207"
  uid: d73c27e8-cb5b-44de-8f08-3d4eef7a1468
spec:
  allowPrivilegeEscalation: false
  fsGroup:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    ranges:
    - max: 65535
      min: 1
    rule: MustRunAs
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim

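The global-restricted-psp above explains the remaining failures: runAsUser is MustRunAsNonRoot (so the cspc-operator, cvc-operator and ndm-operator images, which run as root, are rejected), hostPath is not in the allowed volumes list, and privileged containers and privilege escalation are disallowed, which blocks the openebs-ndm and openebs-jiva-csi-node DaemonSets. One way out, sketched below under the assumption that relaxing the policy for the openebs namespace is acceptable, is a dedicated permissive PodSecurityPolicy; the policy name and field values here are assumptions, not an official OpenEBS manifest:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: openebs-privileged            # assumed name
spec:
  privileged: true                    # openebs-ndm and openebs-jiva-csi-node run privileged containers
  allowPrivilegeEscalation: true
  hostNetwork: true                   # both DaemonSets request host networking
  hostPorts:
  - min: 0
    max: 65535
  runAsUser:
    rule: RunAsAny                    # several OpenEBS images run as root
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'                               # or list hostPath alongside the default volume types explicitly

A PodSecurityPolicy only takes effect if the pod's service account is allowed to use it, so a Role granting the use verb on this policy plus a RoleBinding to the OpenEBS service accounts in the openebs namespace is also required.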