helm's Issues

Request support for RKE

Hi everyone!

I was fighting to deploy the Nutanix CSI driver on RKE, and I had to make a lot of modifications to my RKE environment:

  • Use the Ubuntu console
  • Install open-iscsi inside the Ubuntu console
  • Create an rc.local script to enable the iscsid service
  • Edit the bind mounts of my containerized kubelet:
    • '/lib/modules:/lib/modules'
    • '/etc/iscsi:/etc/iscsi'
    • '/sbin/iscsiadm:/sbin/iscsiadm'

The last step was to modify some parameters of the csi-node-ntnx-plugin DaemonSet by hand:

....
    env:
       - name: DRIVER_REG_SOCK_PATH
          value: /opt/rke/var/lib/kubelet/plugins/com.nutanix.csi/csi.sock
....
    volumes:
      - hostPath:
          path: /opt/rke/var/lib/kubelet/plugins_registry/
          type: Directory
        name: registration-dir
      - hostPath:
          path: /opt/rke/var/lib/kubelet/plugins/com.nutanix.csi/
          type: DirectoryOrCreate
        name: plugin-dir
      - hostPath:
          path: /opt/rke/var/lib/kubelet/
          type: Directory
        name: pods-mount-dir
....

So my request is to add an option to automate these modifications.

P.S.:
Since I am still using Ubuntu as my console, I also need to create a bind mount for:

      - hostPath:
          path: /lib/x86_64-linux-gnu/libisns-nocrypto.so.0
          type: File
        name: libisns

Thanks to sike from the Rancher forums.
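For reference, one way such an option could look is a single chart value for the kubelet root directory, from which the templates could derive DRIVER_REG_SOCK_PATH and the registration/plugin/pods hostPath volumes shown above. A minimal sketch, assuming a hypothetical kubeletDir values key (verify against the chart's actual values.yaml):

    # Sketch only: 'kubeletDir' is a hypothetical values key for the kubelet root directory.
    # On RKE the kubelet root lives under /opt/rke/var/lib/kubelet instead of /var/lib/kubelet.
    kubeletDir: /opt/rke/var/lib/kubelet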

CSI volume driver support for `fsGroupPolicy: File`

I need to manage the default ownership/permissions of a mountpoint created by a PVC with ReadWriteMany (RWX) on Red Hat OpenShift 4.13, which is currently not supported by the Nutanix CSI driver.

According to Red Hat, with the current setting of fsGroupPolicy: ReadWriteOnceWithFSType, the fsGroup is respected by the driver only if the volume's access mode contains ReadWriteOnce (RWO).
According to the link below, after changing the fsGroupPolicy parameter to 'File', Kubernetes may use fsGroup to change permissions and ownership of the volume to match the user-requested fsGroup in the pod's security context, regardless of fstype or access mode.

https://kubernetes-csi.github.io/docs/support-fsgroup.html
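For context, the requested change corresponds to the fsGroupPolicy field on the CSIDriver object (storage.k8s.io/v1). A minimal sketch, where the driver name and the other spec fields are assumptions about the existing deployment and should be copied from the currently installed CSIDriver:

    apiVersion: storage.k8s.io/v1
    kind: CSIDriver
    metadata:
      name: csi.nutanix.com       # assumed driver name
    spec:
      attachRequired: true        # assumed; keep whatever the current object sets
      podInfoOnMount: true        # assumed; keep whatever the current object sets
      fsGroupPolicy: File         # instead of ReadWriteOnceWithFSType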

Microk8s: Nutanix CSI fails to mount NFS share (pvc directory not created)

When we try to use the Nutanix CSI driver on our Kubernetes cluster, we get the following error while trying to create a pod that uses a PVC bound to Nutanix Files:
2022-03-01T15:16:43.272Z nodeserver.go:108: [INFO] NutanixFiles: successfully whitelisted 10.128.114.115 on NFS share pvc-fd14e158-1524-45a8-ae1e-b7d97df2ed16
E0301 15:16:43.363377       1 mount_linux.go:175] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs bas-k8s-storage.mydom.tld:pvc-fd14e158-1524-45a8-ae1e-b7d97df2ed16 /var/snap/microk8s/common/var/lib/kubelet/pods/c9e749d5-cce8-42ad-84d7-a9273966c6cc/volumes/kubernetes.io~csi/pvc-fd14e158-1524-45a8-ae1e-b7d97df2ed16/mount
Output: mount.nfs: mount point /var/snap/microk8s/common/var/lib/kubelet/pods/c9e749d5-cce8-42ad-84d7-a9273966c6cc/volumes/kubernetes.io~csi/pvc-fd14e158-1524-45a8-ae1e-b7d97df2ed16/mount does not exist
 
2022-03-01T15:16:43.363Z node.go:62: [INFO] NodePublishVolume returned error rpc error: code = Internal desc = rpc error: code = Internal desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs bas-k8s-storage.mydom.tld:pvc-fd14e158-1524-45a8-ae1e-b7d97df2ed16 /var/snap/microk8s/common/var/lib/kubelet/pods/c9e749d5-cce8-42ad-84d7-a9273966c6cc/volumes/kubernetes.io~csi/pvc-fd14e158-1524-45a8-ae1e-b7d97df2ed16/mount
Output: mount.nfs: mount point /var/snap/microk8s/common/var/lib/kubelet/pods/c9e749d5-cce8-42ad-84d7-a9273966c6cc/volumes/kubernetes.io~csi/pvc-fd14e158-1524-45a8-ae1e-b7d97df2ed16/mount does not exist
E0301 15:16:43.363464       1 utils.go:31] GRPC error: rpc error: code = Internal desc = rpc error: code = Internal desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs bas-k8s-storage.mydom.tld:pvc-fd14e158-1524-45a8-ae1e-b7d97df2ed16 /var/snap/microk8s/common/var/lib/kubelet/pods/c9e749d5-cce8-42ad-84d7-a9273966c6cc/volumes/kubernetes.io~csi/pvc-fd14e158-1524-45a8-ae1e-b7d97df2ed16/mount
Output: mount.nfs: mount point /var/snap/microk8s/common/var/lib/kubelet/pods/c9e749d5-cce8-42ad-84d7-a9273966c6cc/volumes/kubernetes.io~csi/pvc-fd14e158-1524-45a8-ae1e-b7d97df2ed16/mount does not exist

We are using the Helm chart provided at https://github.com/nutanix/helm/tree/nutanix-csi-storage-2.5.1/charts/nutanix-csi-storage#installing-the-chart, version 2.5.1, with the following values file:

# disable unused pv mechanisms
volumeClass: false
fileClass: false
 
# dynfile mechanism
dynamicFileClass: true
defaultStorageClass: dynfile
 
# what file server to use
fileServerName: bas-k8s-storage
 
# prism element ip, username and password
prismEndPoint: 10.128.4.201
username: mycoolworkingusernameinprism
password: some_cool_password
secretName: ntxn-secret
createSecret: true

The CSI driver can create the share; we can see it in the Files explorer. Also, as you can see in the logs, whitelisting works (and if I try to mount the share manually on that server, it works fine).

Here are the versions we are using:

Ubuntu 20.04
MicroK8S v1.22.6-3+7ab10db7034594
Nutanix Files v 4.0.2

Having troubleshot this a bit, it appears that the directory named after the PVC is not created in the pod's volumes directory.
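The mount target in the logs lives under the MicroK8s kubelet root (/var/snap/microk8s/common/var/lib/kubelet), so one thing worth checking is whether the chart was told about that non-default kubelet directory. A sketch, assuming the chart exposes a kubeletDir value and using a placeholder release name (verify the key against the chart's values.yaml):

    # Sketch only: 'kubeletDir' is assumed to be the chart value for the kubelet root directory.
    helm upgrade nutanix-csi nutanix/nutanix-csi-storage -n ntnx-system \
      --reuse-values \
      --set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet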

Install without Internet.

Hello, Nutanixers.

We are planning to upgrade the CSI Driver from 2.3 to the latest version.
Our clusters run in a dark site, so we cannot get the Helm charts from the internet.

How can we pull the CSI images from the repository?
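A common approach for dark sites is to download the chart and mirror its images from an internet-connected host, then point the chart at an internal registry. A sketch under those assumptions, where registry.example.local, the image references and the value keys are placeholders (take the real image names and tags from the values.yaml of your target chart version):

    # On a machine with internet access:
    helm repo add nutanix https://nutanix.github.io/helm/
    helm pull nutanix/nutanix-csi-storage --version <target-version>   # produces a .tgz to copy to the dark site

    # Mirror every image referenced in the chart's values.yaml (placeholders below):
    docker pull <upstream-image>:<tag>
    docker tag  <upstream-image>:<tag> registry.example.local/<image>:<tag>
    docker push registry.example.local/<image>:<tag>

    # On the dark site, install from the local .tgz and override the image values
    # (the exact value keys depend on the chart version):
    helm install nutanix-storage ./nutanix-csi-storage-<target-version>.tgz -n ntnx-system --create-namespace \
      --set <image-value-key>=registry.example.local/<image>:<tag>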

Ability to specify resource requests and limits in the Deployment and DaemonSet

Currently

https://github.com/nutanix/helm/blob/master/charts/nutanix-csi-storage/templates/ntnx-csi-node-ds.yaml
https://github.com/nutanix/helm/blob/master/charts/nutanix-csi-storage/templates/ntnx-csi-controller-deployment.yaml

have Guaranteed QoS with fixed resource requests.
In small dev clusters I notice that 0.5 CPU and 800 MB of RAM are reserved, while only 0.0005 CPU and 200 MB are actually used.

Would it be possible to parametrize the resources via Helm values?

E.g., replacing in the templates:

      {{- with .Values.controller.resources }}
      resources:
        {{- toYaml . | nindent 12 }}
      {{- end }}

instead of :

      resources:
        requests:
          cpu: 100m
          memory: 200Mi
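The corresponding values section could then look something like this; the controller/node split and the numbers are only an illustration of the proposed structure, not existing chart keys:

    controller:
      resources:
        requests:
          cpu: 10m
          memory: 64Mi
        limits:
          cpu: 100m
          memory: 200Mi
    node:
      resources:
        requests:
          cpu: 10m
          memory: 64Mi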

Problems assigning the storage class with Helm for the CSI storage driver

Hello, I am trying to assign a storage class to a Rancher cluster. I followed this process:
https://portal.nutanix.com/page/documents/details?targetId=CSI-Volume-Driver-v2_5:CSI-Volume-Driver-v2_5

I saw that there were some fields that I needed to edit:
https://github.com/nutanix/helm/blob/nutanix-csi-storage-2.5.0/charts/nutanix-csi-storage/values.yaml

so I downloaded the file and filled in the requested fields:
prismEndPoint:

username:
password:

The containers start:
kubectl get pods -A | grep csi
ntnx-system csi-node-ntnx-plugin-ckh2l 3/3 Running 0 12m
ntnx-system csi-node-ntnx-plugin-dn5br 3/3 Running 0 12m
ntnx-system csi-node-ntnx-plugin-h4s9c 3/3 Running 0 12m
ntnx-system csi-node-ntnx-plugin-kzhn7 3/3 Running 0 12m
ntnx-system csi-node-ntnx-plugin-slzzw 3/3 Running 0 12m
ntnx-system csi-node-ntnx-plugin-wftpg 3/3 Running 0 12m
ntnx-system csi-provisioner-ntnx-plugin-0 5/5 Running 0 12m

but when I check the logs I get this error for all the containers:
error: a container name must be specified for pod csi-provisioner-ntnx-plugin-0, choose one of: [csi-provisioner csi-resizer csi-snapshotter ntnx-csi-plugin liveness-probe]

Not sure how to fix that or whether I am missing anything in the installation.

Thanks
Francisco Yanez
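For reference, that message comes from kubectl rather than from the CSI driver: when a pod runs several containers, kubectl logs needs an explicit container name via -c, for example (pod, namespace and container names taken from the output above):

    kubectl logs -n ntnx-system csi-provisioner-ntnx-plugin-0 -c ntnx-csi-plugin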

Use existing secret / disable secret generation

Hello,

Would it be possible to add something like an "existingSecret" field to the values file?

We apply secrets manually to the cluster to keep the user/password information out of our pipelines and code repos. It would be great if we could just specify the name of an existing secret (or indicate that a secret named ntnx-secret does not need to be created), so that the secret is never created or edited through the Helm chart.
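The values file shown in the MicroK8s issue above already exposes secretName and createSecret keys, so the requested behaviour would amount to something like the following; whether the chart version in use honours this combination is an assumption to verify:

    # Sketch only: use a pre-created secret and skip secret creation in the chart.
    createSecret: false
    secretName: my-existing-ntnx-secret   # placeholder name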

upgrade from beta to 1.x?

Hi - when upgrading a running cluster from ntnx/ntnx-csi:beta2 (released at the end of 2018) to ntnx/ntnx-csi:v1.1.1 deployed with the current chart, the StorageClass (named acs-abs at the time) cannot be kept, as the csi-provisioner-ntnx-plugin-0 pod gets stuck in a crash loop. Setting it up with a different name (nutanix-volume by default) does work, but the old volumes are now orphaned. Is there a workaround to import the existing volumes into the new driver?

Add some parameters to change `volumeNamePrefix` with `csi-external-provisioner`

Is there any method to change the volumeNamePrefix used for new PV names?
If multiple Nutanix Volumes backends are in use, every PV name gets the same prefix pvc- by default.

Maybe adding some args for csi-external-provisioner would be a good idea, like below:
https://github.com/nutanix/helm/blob/master/charts/nutanix-csi-storage/templates/ntnx-csi-controller-deployment.yaml#L44

And the default parameter used for volumeNamePrefix is here:
https://github.com/nutanix/csi-external-provisioner/blob/master/cmd/csi-provisioner/csi-provisioner.go#L62
https://github.com/nutanix/csi-external-provisioner/blob/master/pkg/controller/controller.go#L558
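A sketch of what that could look like in the controller deployment template; --volume-name-prefix is the upstream external-provisioner flag referenced above, while .Values.volumeNamePrefix is a hypothetical chart value:

        - name: csi-provisioner
          args:
            # ...existing args kept as-is...
            - --volume-name-prefix={{ .Values.volumeNamePrefix | default "pvc" }}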

Unable to deploy nutanix-csi-snapshot on OCP 4.11

Authenticated to the OpenShift cluster as cluster-admin

$ helm install nutanix-csi-snapshot nutanix/nutanix-csi-snapshot -n ntnx-system --create-namespace

Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "volumesnapshotclasses.snapshot.storage.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "nutanix-csi-snapshot"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "ntnx-system"
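One way past this, derived directly from the metadata the error asks for, is to let Helm adopt the pre-existing CRDs by adding the expected ownership label and annotations before re-running the install (a sketch; repeat for each snapshot CRD the chart ships, and consider whether adopting CRDs that OpenShift manages is appropriate in your environment):

    kubectl label crd volumesnapshotclasses.snapshot.storage.k8s.io app.kubernetes.io/managed-by=Helm
    kubectl annotate crd volumesnapshotclasses.snapshot.storage.k8s.io \
      meta.helm.sh/release-name=nutanix-csi-snapshot \
      meta.helm.sh/release-namespace=ntnx-system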

Question about dataServiceEndPoint setting in csi

Suppose I have three AHV hosts; each host has two network cards: a 1 Gbit card used for the management network (192.168.10.0/20) and a 10 Gbit card used for the storage network (10.0.10.0/24).

My VMs are only allocated a network card on the management network (192.168.10.0/20) by default. I would like dataServiceEndPoint to be set on the storage network so that IO traffic does not go over the management network card. Assuming the dataServiceEndPoint is 10.0.0.100, does the VM need an additional network card on the storage network to reach the dataServiceEndPoint? If it does, is there a risk that the broadcast domain becomes too large and affects the storage network's services when the number of VMs grows?

On the other hand, fileHost only supports configuring one IP address, but the file server does not have a VIP. How can high availability be ensured?

Thanks very much!

README documentation for deploying the CSI driver needs to be clearer about the prismEndPoint parameter

Coming from a support case logged recently: while deploying the chart with the following command,

helm install nutanix-storage nutanix/nutanix-csi-storage -n ntnx-system --create-namespace --set volumeClass=true --set prismEndPoint=X.X.X.X --set username=admin --set password=xxxxxxxxx --set storageContainer=container_name --set fsType=xfs

there is confusion around the prismEndPoint parameter: users mistakenly set it to the Prism Central IP address, which causes PVC creation to fail. The README needs a note explaining that this should be the virtual IP address of the Prism Element cluster where the Kubernetes VMs are running.

NTNX case 00575424 : use correct version numbers

There is an issue with the current chart. Please change the images to v1.0.1. Some images in the values.yaml file are at 1.0.2 and some at 1.0.0, causing intermittent VG discovery and mount issues:

#csi-image

attacherImageFinal: quay.io/k8scsi/csi-attacher:v1.0.1
attacherImageBeta: quay.io/k8scsi/csi-attacher:v0.4.2

nodeImageFinal: quay.io/k8scsi/csi-node-driver-registrar:v1.0.1
nodeImageBeta: quay.io/k8scsi/driver-registrar:v0.4.2

provisionerImageFinal: quay.io/k8scsi/csi-provisioner:v1.0.1
provisionerImageBeta: quay.io/k8scsi/csi-provisioner:v0.4.2

#csi-ntnx

ntnxImageFinal: ntnx/ntnx-csi:v1.0.1
ntnxImageBeta: ntnx/ntnx-csi:beta2

MountVolume.SetUp failed for volume - the given Volume ID NutanixVolumes-***** already exists

Hi Gents,

I'm using the latest Rancher version and the latest Nutanix CSI snapshot and CSI storage charts.
All nodes meet the requirements as described.
I can create a PVC; it gets status Bound and shows as online in Rancher and in Nutanix Prism Element.

When creating a new pod with the PVC, I get an error saying that the given volume already exists.

"MountVolume.SetUp failed for volume "pvc-xxxxx-xxxxx-xxxxxx-xxxxx" : rpc error: code = Unavailable desc = An operation with the given Volume ID NutanixVolumes-xxxxx-xxxxxx-xxxxxx-xxxxxxx already exists"

Any ideas on how to solve this?

Thanks and BR
Andre

CSI Snapshot chart to check if CRD exists

It would be nice for the CSI Snapshot chart to check whether the snapshot CRDs already exist in the cluster. I was reusing my automation to install the chart on several different distributions, and it failed for some of them because the CRDs were already installed.

The documentation mentions that installing this chart is only needed if the CRDs don't exist, but the check would be a nice addition 😄
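One possible approach is a sketch based on Helm 3's lookup function (note that lookup returns nothing during helm template or --dry-run, so this is not a complete solution on its own):

    {{- if not (lookup "apiextensions.k8s.io/v1" "CustomResourceDefinition" "" "volumesnapshotclasses.snapshot.storage.k8s.io") }}
    # ... render the snapshot CRDs here ...
    {{- end }}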

Feature request: Support CSI Storage Driver to be installed multiple times

I noticed that you already have support for a legacy flag which can be used to switch between two different driver names.
With that, it is possible (in theory at least) to install the CSI Storage Driver twice on the same cluster.

However, it would be useful to be able to customize the driver name. Then it would be possible to create a Kubernetes cluster that is stretched across three AHV clusters located in different datacenters and run, for example, a three-node RabbitMQ cluster where each node stores its data on its local AHV cluster. That RabbitMQ cluster would then have zero downtime even during a complete datacenter failure.

Other parameters like the storage class and Prism Element connectivity details are already parameterized, so only driver name customization is really missing.

I can also create a pull request for this if it is agreed that it is something that will get merged here.
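For illustration only, the per-datacenter installs could then use values like the following, where driverName is the proposed (hypothetical) new key and prismEndPoint already exists in the chart:

    # Release for datacenter A (sketch only)
    driverName: csi.dc-a.nutanix.com
    prismEndPoint: <dc-a-prism-element-vip>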

Can't mount NutanixVolume in Container

Hi,

we have installed the nutanix-csi and nutanix-csi-snapshot:

NAME                	NAMESPACE  	REVISION	UPDATED                                 	STATUS  	CHART                     	APP VERSION
nutanix-csi         	ntnx-system	1       	2022-08-25 13:13:35.439657844 +0200 CEST	deployed	nutanix-csi-storage-2.5.4 	2.5.1      
nutanix-csi-snapshot	ntnx-system	1       	2022-08-25 13:13:12.811057445 +0200 CEST	deployed	nutanix-csi-snapshot-6.0.1	6.0.1

We can create PVCs fine, PVs get created automatically as expected.

k get pvc
NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
data-wordpress-15-1661426949-mariadb-0   Bound    pvc-f9546b9d-e060-4e24-832e-c6d1cd7e3afe   8Gi        RWO            nutanix-volume   27m
data-wordpress-15-1661427441-mariadb-0   Bound    pvc-d211c320-a276-441d-b70b-092b386b74a6   8Gi        RWO            nutanix-volume   19m
wordpress-15-1661427441                  Bound    pvc-a1f77322-153f-422d-8a15-e94f230bd461   10Gi       RWO            nutanix-volume   19m
k get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                            STORAGECLASS     REASON   AGE
pvc-a1f77322-153f-422d-8a15-e94f230bd461   10Gi       RWO            Delete           Bound    default/wordpress-15-1661427441                  nutanix-volume            19m
pvc-d211c320-a276-441d-b70b-092b386b74a6   8Gi        RWO            Delete           Bound    default/data-wordpress-15-1661427441-mariadb-0   nutanix-volume            19m
pvc-f9546b9d-e060-4e24-832e-c6d1cd7e3afe   8Gi        RWO            Delete           Bound    default/data-wordpress-15-1661426949-mariadb-0   nutanix-volume            27m

The problem, though, is that the containers can't mount the volume itself (open-iscsi is installed). We have tried several different applications and they all have the same issue.

k get pods | grep wordpress
wordpress-15-1661427441-7574bc4579-q4jlz   0/1     ContainerCreating   0          21m
wordpress-15-1661427441-mariadb-0          0/1     ContainerCreating   0          21m
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  21m                 default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  21m                 default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         21m                 default-scheduler  Successfully assigned default/wordpress-15-1661427441-7574bc4579-q4jlz to ntnx-csiw1
  Warning  FailedMount       20m                 kubelet            MountVolume.SetUp failed for volume "pvc-a1f77322-153f-422d-8a15-e94f230bd461" : rpc error: code = Internal desc = Operation timed out: Failed to update VG: [PUT /volume_groups/{uuid}][400] PutVolumeGroupsUUID default  &{APIVersion:3.1 Code:400 Kind: MessageList:[0xc0001225b8] State:ERROR}
  Warning  FailedMount       19m                 kubelet            MountVolume.SetUp failed for volume "pvc-a1f77322-153f-422d-8a15-e94f230bd461" : rpc error: code = Internal desc = Operation timed out: Failed to update VG: [PUT /volume_groups/{uuid}][400] PutVolumeGroupsUUID default  &{APIVersion:3.1 Code:400 Kind: MessageList:[0xc0001220d8] State:ERROR}
  Warning  FailedMount       18m                 kubelet            MountVolume.SetUp failed for volume "pvc-a1f77322-153f-422d-8a15-e94f230bd461" : rpc error: code = Internal desc = Operation timed out: Failed to update VG: [PUT /volume_groups/{uuid}][400] PutVolumeGroupsUUID default  &{APIVersion:3.1 Code:400 Kind: MessageList:[0xc000123338] State:ERROR}
  Warning  FailedMount       17m                 kubelet            MountVolume.SetUp failed for volume "pvc-a1f77322-153f-422d-8a15-e94f230bd461" : rpc error: code = Internal desc = Operation timed out: Failed to update VG: [PUT /volume_groups/{uuid}][400] PutVolumeGroupsUUID default  &{APIVersion:3.1 Code:400 Kind: MessageList:[0xc00000efa8] State:ERROR}
  Warning  FailedMount       16m                 kubelet            MountVolume.SetUp failed for volume "pvc-a1f77322-153f-422d-8a15-e94f230bd461" : rpc error: code = Internal desc = Operation timed out: Failed to update VG: [PUT /volume_groups/{uuid}][400] PutVolumeGroupsUUID default  &{APIVersion:3.1 Code:400 Kind: MessageList:[0xc000123260] State:ERROR}
  Warning  FailedMount       15m                 kubelet            MountVolume.SetUp failed for volume "pvc-a1f77322-153f-422d-8a15-e94f230bd461" : rpc error: code = Internal desc = Operation timed out: Failed to update VG: [PUT /volume_groups/{uuid}][400] PutVolumeGroupsUUID default  &{APIVersion:3.1 Code:400 Kind: MessageList:[0xc0005184f8] State:ERROR}
  Warning  FailedMount       13m                 kubelet            MountVolume.SetUp failed for volume "pvc-a1f77322-153f-422d-8a15-e94f230bd461" : rpc error: code = Internal desc = Operation timed out: Failed to update VG: [PUT /volume_groups/{uuid}][400] PutVolumeGroupsUUID default  &{APIVersion:3.1 Code:400 Kind: MessageList:[0xc000518798] State:ERROR}
  Warning  FailedMount       8m1s (x2 over 17m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[wordpress-data], unattached volumes=[kube-api-access-fxs6w wordpress-data]: timed out waiting for the condition
  Warning  FailedMount       76s (x7 over 19m)   kubelet            Unable to attach or mount volumes: unmounted volumes=[wordpress-data], unattached volumes=[wordpress-data kube-api-access-fxs6w]: timed out waiting for the condition
  Warning  FailedMount       60s (x5 over 12m)   kubelet            (combined from similar events): MountVolume.SetUp failed for volume "pvc-a1f77322-153f-422d-8a15-e94f230bd461" : rpc error: code = Internal desc = Operation timed out: Failed to update VG: [PUT /volume_groups/{uuid}][400] PutVolumeGroupsUUID default  &{APIVersion:3.1 Code:400 Kind: MessageList:[0xc000518df8] State:ERROR}

We are using k8s v1.23.8:

k get nodes
NAME         STATUS   ROLES               AGE   VERSION
ntnx-csim1   Ready    controlplane,etcd   57m   v1.23.8
ntnx-csiw1   Ready    worker              54m   v1.23.8
ntnx-csiw2   Ready    worker              55m   v1.23.8

Any idea why this is happening? This is a major blocker for one of our customers at the moment.

Thanks,
Stefan

[Storage] talos support and dependencies on host OS

I'm hitting a roadblock trying to run the Nutanix CSI driver on a Talos cluster: the CSI node pod is unable to run either mkfs.ext4 or mkfs.xfs, as they are not provided by Talos. Would it be possible to bundle some of these tools in the container image?

sh-4.4# ls -l  /usr/sbin | grep chroot
lrwxrwxrwx 1 root root      23 Nov  4 14:53 free -> /chroot-host-wrapper.sh
lrwxrwxrwx 1 root root      23 Nov  4 14:53 lsscsi -> /chroot-host-wrapper.sh
lrwxrwxrwx 1 root root      23 Nov  4 14:53 mkfs.ext3 -> /chroot-host-wrapper.sh
lrwxrwxrwx 1 root root      23 Nov  4 14:53 mkfs.ext4 -> /chroot-host-wrapper.sh
lrwxrwxrwx 1 root root      23 Nov  4 14:53 mount -> /chroot-host-wrapper.sh
lrwxrwxrwx 1 root root      23 Nov  4 14:53 mount.nfs -> /chroot-host-wrapper.sh
lrwxrwxrwx 1 root root      23 Nov  4 14:53 multipath -> /chroot-host-wrapper.sh
lrwxrwxrwx 1 root root      23 Nov  4 14:53 multipathd -> /chroot-host-wrapper.sh
lrwxrwxrwx 1 root root      23 Nov  4 14:53 pgrep -> /chroot-host-wrapper.sh
lrwxrwxrwx 1 root root      23 Nov  4 14:53 resize2fs -> /chroot-host-wrapper.sh
lrwxrwxrwx 1 root root      23 Nov  4 14:53 umount -> /chroot-host-wrapper.sh
lrwxrwxrwx 1 root root      23 Nov  4 14:53 xfs_growfs -> /chroot-host-wrapper.sh

Ability to set ImagePullSecrets for service accounts

Currently, when deploying the nutanix-csi-storage chart in an environment that pulls from a private registry, the containers hit image pull errors because the chart doesn't provide a value for adding imagePullSecrets to the service accounts it creates.

The chart should allow optionally configuring imagePullSecrets for the service accounts that it creates to support pulling from private image registries.

Example of how rook handles this: https://github.com/rook/rook/blob/master/deploy/charts/library/templates/_imagepullsecret.tpl
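A sketch of what that could look like in a service account template, where .Values.imagePullSecrets is a hypothetical chart value and the account name is a placeholder:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: csi-provisioner          # placeholder name
      namespace: {{ .Release.Namespace }}
    {{- with .Values.imagePullSecrets }}
    imagePullSecrets:
      {{- toYaml . | nindent 2 }}
    {{- end }}

The matching values entry would then be a list such as imagePullSecrets: [{name: my-registry-secret}].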

Default to reclaimPolicy: Delete ?

Hi!

Looking through the Helm chart, it seems like the default is to set the reclaimPolicy to Delete, with no way to set it to Retain.

Is that intentional, or would you accept a PR to implement that switch?
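For anyone needing this in the meantime, reclaimPolicy is a standard StorageClass field, so a manually created class can already set it. A minimal sketch, where the provisioner name and parameters are assumptions that must be copied from the StorageClass the chart generates:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: nutanix-volume-retain
    provisioner: csi.nutanix.com       # assumed driver name; copy from the chart-generated class
    reclaimPolicy: Retain
    parameters: {}                     # copy the parameters block from the chart-generated class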

nutanix/csi-plugin repository is archived, preventing discussion or adding issues regarding the Nutanix CSI Driver

In the README.md you state the following:

Please file any issues, questions or feature requests you may have here for the Nutanix CSI Driver or here for the Helm chart.

The problem is that the nutanix/csi-plugin repository has been archived, which makes it impossible to submit new issues or comment on existing ones.

It would be nice if we could submit issues regarding the Nutanix CSI Driver again, because some issues still exist.
