
charts's Introduction

democratic-csi

democratic-csi implements the CSI (Container Storage Interface) spec to facilitate stateful workloads.

Currently democratic-csi integrates with the following storage systems:

  • TrueNAS
  • ZFS on Linux (ZoL, i.e. a generic Ubuntu server)
  • Synology
  • generic nfs, smb, and iscsi servers
  • local storage directly on nodes

usage

helm repo add democratic-csi https://democratic-csi.github.io/charts/
helm repo update

If using k8s < 1.17, please use chart version 0.5.0.
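
Once the repo is added, a release is installed by pointing at a values file for your chosen driver; a minimal sketch (release name, namespace, and values file are placeholders):

helm upgrade --install zfs-nfs democratic-csi/democratic-csi --namespace democratic-csi --create-namespace --values freenas-nfs.yaml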



charts's Issues

Weird behavior on directory creation for generic NFS mounts

Hello there, I have detected some weird behavior. Let me show my values.yaml first (Chart version 0.14.2, Kubernetes k0s 1.27)

.global:
  shareHost: &globalShareHost storage-01.internal.place
  shareBasePath: &globalShareBasePath "/mnt/pool0/shared/kubernetes.nfs"
  controllerBasePath: &globalControllerBasePath "/mnt/kubernetes.nfs"

# Ref: https://github.com/democratic-csi/charts/blob/master/stable/democratic-csi/values.yaml
# Ref: https://kubesec.io/basics/
# Ref: https://github.com/democratic-csi/charts/blob/master/stable/democratic-csi/examples/nfs-client.yaml
democratic-csi:

  csiDriver:

    # Globally unique name for a given cluster
    name: "org.democratic-csi.nfs"

  storageClasses:
    - name: standard-nfs
      defaultClass: true
      reclaimPolicy: Delete
      volumeBindingMode: Immediate
      allowVolumeExpansion: false
      parameters:
        csi.storage.k8s.io/fstype: nfs

      mountOptions:
        - nfsvers=4.2
        - nconnect=8
        - hard

        # Completely disable access time updates for any file (this implies 'nodiratime')
        - noatime

#      secrets:
#        provisioner-secret:
#        controller-publish-secret:
#        node-stage-secret:
#        node-publish-secret:
#        controller-expand-secret:

  driver:
    config:
      # https://github.com/democratic-csi/democratic-csi/tree/master/examples
      driver: nfs-client
      instance_id:
      nfs:
        shareHost: *globalShareHost
        shareBasePath: *globalShareBasePath
        # (shareHost:shareBasePath) should be mounted at this location in the controller container
        controllerBasePath: *globalControllerBasePath
        dirPermissionsMode: "0777"
        # Requires the UID, not the name: (dirPermissionsUser: root) = (dirPermissionsUser: 0)
        dirPermissionsUser: 0
        # Requires the GID, not the name: (dirPermissionsGroup: wheel) = (dirPermissionsGroup: 0)
        dirPermissionsGroup: 0

  node:
    # Ref: https://github.com/democratic-csi/democratic-csi/tree/master#a-note-on-non-standard-kubelet-paths
    kubeletHostPath: /var/lib/k0s/kubelet

  # Run the controller service separated from the node service, mount the base share into the controller pod at run time
  controller:

    externalResizer:
      enabled: false

    # Use the host’s network
    # Sharing the host’s network namespace permits processes in the pod
    # to communicate with processes bound to the host’s loopback adapter
    hostNetwork: true

    # Use the host’s ipc namespace
    # Sharing the host’s IPC namespace allows container processes
    # to communicate with processes on the host
    hostIPC: true

    # Controller driver needs to mount the NFS shared directory to be able to create
    # the PVC base subdirectories (pvc-xxx-something) in remote NFS server as nodes can not do it on their own.
    # If this section is not properly configured, everything
    # works fine, but directories with the PVC name on NFS server must be manually created
    driver:

      extraEnv:
        - name: SHARE_HOST
          value: *globalShareHost
        - name: SHARE_BASE_PATH
          value: *globalShareBasePath
        - name: CONTROLLER_BASE_PATH
          value: *globalControllerBasePath

      securityContext:
        allowPrivilegeEscalation: true
        capabilities:
          add:
          - SYS_ADMIN
        privileged: true

      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "mkdir -p $CONTROLLER_BASE_PATH; mount $SHARE_HOST:$SHARE_BASE_PATH $CONTROLLER_BASE_PATH"]
        preStop:
          exec:
            command: ["/bin/sh","-c","umount $CONTROLLER_BASE_PATH"]

I have found that it's mandatory to have the whole controller driver section shown above in my values.yaml. Without it, the controller was not able to create the directory <basePath>/v/<pvc-something> on the remote NFS server: it failed with a "directory not found" error, which only went away once I created the directory manually.

The point is that the chart has a dedicated section for this kind of data, and I was expecting that config to be injected and used by the controller to do exactly this under the hood, without me mounting anything manually. Am I missing something?

The section I'm talking about is .Values.driver. But in reality the extra controller section was needed in my case with a generic NFS server. Once added, the directories are managed as expected.

WDYT? :)

K3OS /usr/share/zoneinfo/ not a directory

I get this error after I run:
helm install freenas democratic-csi/democratic-csi --dry-run

Error: execution error at (democratic-csi/templates/required.yaml:1:5): csiDriver.name is required

I suspect I need to pass in some values but I am unsure how to add these in.
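
I'm guessing something like this is what's needed (a sketch based on the chart's examples; values are placeholders, passed with -f):

# values.yaml (sketch)
csiDriver:
  # must be globally unique within the cluster
  name: "org.democratic-csi.nfs"
driver:
  config:
    driver: freenas-nfs
    # ... remainder of the driver config copied inline from the examples directory

helm install freenas democratic-csi/democratic-csi --dry-run -f values.yaml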

K3OS NFS test pod failed with error

The Claim was successful and created the datasets
https://pastebin.com/3mz5hs9x
There was a small issue though: it didn't like claims under 1Gi. FreeNAS reported an error on the created dataset: "Quota size is too small, enter a value of 1 GiB or larger."

Then I deployed the Pod
https://pastebin.com/m3GCGt1N
kubectl describe pod freenas-test-pod

Normal Scheduled default-scheduler Successfully assigned default/freenas-test-pod to k3s3497f6566641
Warning FailedMount 114s kubelet, k3s3497f6566641 Unable to attach or mount volumes: unmounted volumes=[freenas-test-volume], unattached volumes=[freenas-test-volume default-token-x5t2j]: timed out waiting for the condition
Warning FailedMount 18s (x9 over 3m47s) kubelet, k3s3497f6566641 MountVolume.MountDevice failed for volume "pvc-0490758b-a3b4-460d-9cc6-b9ab4521dc6a" : rpc error: code = Internal desc = [object Object]

unable to upgrade to 0.12.0

Hi Travis,
Sorry for bothering you again with some maybe stupid questions, but I am having trouble understanding why I cannot upgrade to version 0.12.0 of the helm chart.
I am currently running on 0.11.2. I recently upgraded my cluster to 1.24.0.
Yesterday I migrated from TrueNAS CORE to TrueNAS SCALE. The upgrade went fine, but since I had to update the root group in the driver config from "wheel" to "root", I decided to also upgrade to the latest version of the Helm chart for the democratic-csi driver.
Unfortunately the upgrade didn't go as smoothly as usual. I got the following errors:

client.go:521: [debug] Patch DaemonSet "zfs-iscsi-democratic-csi-node" in namespace democratic-csi
client.go:261: [debug] error updating the resource "zfs-iscsi-democratic-csi-node":
         cannot patch "zfs-iscsi-democratic-csi-node" with kind DaemonSet: DaemonSet.apps "zfs-iscsi-democratic-csi-node" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/csi-role":"node", "app.kubernetes.io/instance":"zfs-iscsi", "app.kubernetes.io/name":"democratic-csi"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
client.go:521: [debug] Patch Deployment "zfs-iscsi-democratic-csi-controller" in namespace democratic-csi
client.go:261: [debug] error updating the resource "zfs-iscsi-democratic-csi-controller":
         cannot patch "zfs-iscsi-democratic-csi-controller" with kind Deployment: Deployment.apps "zfs-iscsi-democratic-csi-controller" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/csi-role":"controller", "app.kubernetes.io/instance":"zfs-iscsi", "app.kubernetes.io/name":"democratic-csi"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
client.go:512: [debug] Looks like there are no changes for CSIDriver "org.democratic-csi.iscsi"
client.go:521: [debug] Patch VolumeSnapshotClass "freenas-iscsi-csi" in namespace 
upgrade.go:431: [debug] warning: Upgrade "zfs-iscsi" failed: cannot patch "zfs-iscsi-democratic-csi-node" with kind DaemonSet: DaemonSet.apps "zfs-iscsi-democratic-csi-node" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/csi-role":"node", "app.kubernetes.io/instance":"zfs-iscsi", "app.kubernetes.io/name":"democratic-csi"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && cannot patch "zfs-iscsi-democratic-csi-controller" with kind Deployment: Deployment.apps "zfs-iscsi-democratic-csi-controller" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/csi-role":"controller", "app.kubernetes.io/instance":"zfs-iscsi", "app.kubernetes.io/name":"democratic-csi"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
Error: UPGRADE FAILED: cannot patch "zfs-iscsi-democratic-csi-node" with kind DaemonSet: DaemonSet.apps "zfs-iscsi-democratic-csi-node" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/csi-role":"node", "app.kubernetes.io/instance":"zfs-iscsi", "app.kubernetes.io/name":"democratic-csi"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && cannot patch "zfs-iscsi-democratic-csi-controller" with kind Deployment: Deployment.apps "zfs-iscsi-democratic-csi-controller" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/csi-role":"controller", "app.kubernetes.io/instance":"zfs-iscsi", "app.kubernetes.io/name":"democratic-csi"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
helm.go:84: [debug] cannot patch "zfs-iscsi-democratic-csi-node" with kind DaemonSet: DaemonSet.apps "zfs-iscsi-democratic-csi-node" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/csi-role":"node", "app.kubernetes.io/instance":"zfs-iscsi", "app.kubernetes.io/name":"democratic-csi"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && cannot patch "zfs-iscsi-democratic-csi-controller" with kind Deployment: Deployment.apps "zfs-iscsi-democratic-csi-controller" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/csi-role":"controller", "app.kubernetes.io/instance":"zfs-iscsi", "app.kubernetes.io/name":"democratic-csi"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

I see that it complains about some immutable field, but I have no idea which one. I didn't change anything in my config besides the root group, and I always thought label fields were not immutable. Do you have any idea what I am doing wrong?

Thanks!

Breaking changes: CustomResourceDefinition & field is immutable

Overview

Releases v0.12.0 and (potentially) v0.13.1 are breaking releases. If following SemVer, these should have been v1.0.0 and v2.0.0 respectively. Typically a CHANGELOG would exist with Upgrade notes.

Ideally, to resolve this issue, a CHANGELOG.md should exist that clearly shows changes made and potential breaking changes, along with special upgrade notes.

Upgrading to v0.12.0:

This is the same issue as #28, but I wanted to add more details. When upgrading to v0.12.0, you'll likely face the following errors:

Error: cannot patch "<release-name>-democratic-csi-node" with kind DaemonSet: DaemonSet.apps "<release-name>-democratic-csi-node" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"node-linux", "app.kubernetes.io/csi-role":"node", "app.kubernetes.io/instance":"<release-name>", "app.kubernetes.io/managed-by":"Helm", "app.kubernetes.io/name":"democratic-csi"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && cannot patch "<release-name>-democratic-csi-controller" with kind Deployment: Deployment.apps "<release-name>-democratic-csi-controller" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"controller-linux", "app.kubernetes.io/csi-role":"controller", "app.kubernetes.io/instance":"<release-name>", "app.kubernetes.io/managed-by":"Helm", "app.kubernetes.io/name":"democratic-csi"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

and

Error: cannot patch "<release-name>-democratic-csi-controller" with kind Deployment: Deployment.apps "<release-name>-democratic-csi-controller" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"controller-linux", "app.kubernetes.io/csi-role":"controller", "app.kubernetes.io/instance":"<release-name>", "app.kubernetes.io/managed-by":"Helm", "app.kubernetes.io/name":"democratic-csi"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

To resolve, you need to delete the DaemonSet and Deployment for the respective pods. To do this, run:

kubectl delete deployment --namespace <your_namespace> <release-name>-democratic-csi-controller
kubectl delete ds --namespace <your_namespace> <release-name>-democratic-csi-node

Now you should be able to successfully run your helm upgrade command.

Upgrading to v0.13.1

This one isn't exactly the fault of the Helm chart but more so K8s. In v0.13.1, the CRD was removed (issue #18 & #24) due to its incompatibility with K8s >v1.21. But CRDs are not automatically removed (or updated) and manual intervention is required. As such, you may run into the following error:

Error: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"

The easiest way to resolve this issue is by using the Helm MapKubeAPIs plugin.

helm mapkubeapis --namespace <your_namespace> <release-name>
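
If the plugin is not already installed, it can typically be added first with:

helm plugin install https://github.com/helm/helm-mapkubeapis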

Conclusion

I hope this helps other folks who ran into these issues, and that notes like the above end up documented somewhere (CHANGELOG, README, etc.).

MountVolume.SetUp failed for volume "registration-dir"

I don't have this /var/lib/kubelet/plugins_registry directory on my k8s nodes. As I mentioned before, I'm running RancherOS, currently v1.15.10. Do I just need to create this directory, or am I missing some assumption in my setup?

Warning | FailedMount | Unable to mount volumes for pod "democratic-csi-1584812152-node-svm87_default(7934f18a-df44-4fbc-a335-460cf95a5dc0)": timeout expired waiting for volumes to attach or mount for pod "default"/"democratic-csi-1584812152-node-svm87". list of unmounted volumes=[registration-dir]. list of unattached volumes=[socket-dir plugins-dir registration-dir mountpoint-dir iscsi-dir dev-dir modules-dir localtime udev-data iscsi-info sys-dir host-dir config democratic-csi-1584812152-node-sa-token-k95m4] | a few seconds ago
Warning | FailedMount | MountVolume.SetUp failed for volume "registration-dir" : hostPath type check failed: /var/lib/kubelet/plugins_registry is not a directory | a minute ago

This is all I have:

[rancher@k8s-node-2 ~]$ ls /var/lib/kubelet/
volumeplugins

Looking at templates/node.yaml, I'm missing several other paths, including:

/var/lib/kubelet/plugins
/var/lib/kubelet/plugins_registry
/etc/iscsi
/var/lib/iscsi
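
For what it's worth, the chart does expose the kubelet base path (the node.kubeletHostPath value used elsewhere in these examples), so on distros with non-standard layouts something along these lines may be needed as well (the path is a placeholder for wherever RancherOS keeps its kubelet data):

node:
  kubeletHostPath: /path/to/kubelet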

how can I replicate data in 3 different truenas?

I've been using this chart for quite some time and it works very well, but I have a specific question about it:

I have a Kubernetes cluster with 3 zones and a TrueNAS used in each zone.

How can I have a StatefulSet with 3 replicas, one replica in each zone? The pod is correctly deployed in each zone using node affinity, but all PVCs automatically get the same storage class, which means all PVs are created on the same TrueNAS.

I have tried to work with allowedTopologies https://github.com/democratic-csi/charts/blob/master/stable/democratic-csi/templates/storage-classes.yaml#L32 but I can't make it work. The same storage class gets picked up by all PVCs.
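
For reference, the kind of configuration I have been attempting looks roughly like this (a sketch with placeholder names; it assumes one chart release per zone, that nodes carry the standard topology.kubernetes.io/zone label, and that the chart passes allowedTopologies through to the StorageClass):

storageClasses:
  - name: truenas-zone-a
    defaultClass: false
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer
    allowedTopologies:
      - matchLabelExpressions:
          - key: topology.kubernetes.io/zone
            values:
              - zone-a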

missing {} on emptyDir directive in controller.yaml

Can you please add {} to the emptyDir directive on line 92 of controller.yaml

My argocd instance keeps complaining that it's out of sync.
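
If it helps explain the drift: a bare emptyDir: renders as null in the manifest, while the API server stores it as an empty object, so ArgoCD sees a permanent diff. The requested change is simply:

emptyDir: {}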

I believe you could also leave out the emptyDir directive altogether since it's the default

Thanks,
Roelof

Helm error if defaultClass is set to true

Hello,

I used the following value file to deploy democratic-csi in my cluster.

If I set defaultClass (line 8) to true, Helm will throw Error: unable to build kubernetes objects from release manifest: unable to decode "": json: cannot unmarshal bool into Go struct field ObjectMeta.metadata.annotations of type string.
If I reset the setting to false, the chart works correctly.
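
Looking at the error, my guess is that the boolean is rendered directly into the storageclass.kubernetes.io/is-default-class annotation, whose value must be a string. A hypothetical template-side fix (I have not checked the chart's actual template) would be to quote the rendered value, roughly:

metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: {{ .defaultClass | quote }}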

setup :

  • K3S v1.28.5+k3s1
  • Helm version.BuildInfo{Version:"v3.13.3", GitCommit:"c8b948945e52abba22ff885446a1486cb5fd3474", GitTreeState:"clean", GoVersion:"go1.20.11"}
csiDriver:
  # should be globally unique for a given cluster
  name: "org.democratic-csi.iscsi"

# add note here about volume expansion requirements
storageClasses:
- name: freenas-iscsi-csi
  defaultClass: false  # Setting this to true will throw an error
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  parameters:
    # for block-based storage can be ext3, ext4, xfs
    # for nfs should be nfs
    fsType: ext4
      
    # if true, volumes created from other snapshots will be
    # zfs send/received instead of zfs cloned
    # detachedVolumesFromSnapshots: "false"
    
    # if true, volumes created from other volumes will be
    # zfs send/received instead of zfs cloned
    # detachedVolumesFromVolumes: "false"

  mountOptions: []
  secrets:
    provisioner-secret:
    controller-publish-secret:
    node-stage-secret:
#      # any arbitrary iscsiadm entries can be add by creating keys starting with node-db.<entry.name>
#      # if doing CHAP
#      node-db.node.session.auth.authmethod: CHAP
#      node-db.node.session.auth.username: foo
#      node-db.node.session.auth.password: bar
#
#      # if doing mutual CHAP
#      node-db.node.session.auth.username_in: baz
#      node-db.node.session.auth.password_in: bar
    node-publish-secret:
    controller-expand-secret:

# if your cluster supports snapshots you may enable below
volumeSnapshotClasses: []
#- name: freenas-iscsi-csi
#  parameters:
#  # if true, snapshots will be created with zfs send/receive
#  # detachedSnapshots: "false"
#  secrets:
#    snapshotter-secret:

driver:
  config:
    # please see the most up-to-date example of the corresponding config here:
    # https://github.com/democratic-csi/democratic-csi/tree/master/examples
    # YOU MUST COPY THE DATA HERE INLINE!
    driver: freenas-iscsi
    instance_id:
    httpConnection:
      protocol: https
      host: 192.168.0.22
      port: 443
      # use only 1 of apiKey or username/password
      # if both are present, apiKey is preferred
      # apiKey is only available starting in TrueNAS-12
      apiKey: redacted
      # username: root
      # password:
      allowInsecure: true
      # use apiVersion 2 for TrueNAS-12 and up (will work on 11.x in some scenarios as well)
      # leave unset for auto-detection
      #apiVersion: 2
    sshConnection:
      host: 192.168.0.22
      port: 22
      username: csi
      # use either password or key
      # password: ""
      privateKey: |
        -----BEGIN OPENSSH PRIVATE KEY-----
        redacted
        -----END OPENSSH PRIVATE KEY-----
    zfs:
      # can be used to override defaults if necessary
      # the example below is useful for TrueNAS 12
      cli:
        sudoEnabled: true
      #
      #  leave paths unset for auto-detection
      # paths:
      #    zfs: /usr/local/sbin/zfs
      #    zpool: /usr/local/sbin/zpool
      #    sudo: /usr/local/bin/sudo
      #    chroot: /usr/sbin/chroot

      # can be used to set arbitrary values on the dataset/zvol
      # can use handlebars templates with the parameters from the storage class/CO
      #datasetProperties:
      #  "org.freenas:description": "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      #  "org.freenas:test": "{{ parameters.foo }}"
      #  "org.freenas:test2": "some value"
      
      # total volume name (zvol/<datasetParentName>/<pvc name>) length cannot exceed 63 chars
      # https://www.ixsystems.com/documentation/freenas/11.2-U5/storage.html#zfs-zvol-config-opts-tab
      # standard volume naming overhead is 46 chars
      # datasetParentName should therefore be 17 chars or less when using TrueNAS 12 or below
      datasetParentName: M2 Pool/k3s/volumes
      # do NOT make datasetParentName and detachedSnapshotsDatasetParentName overlap
      # they may be siblings, but neither should be nested in the other 
      # do NOT comment this option out even if you don't plan to use snapshots, just leave it with dummy value
      detachedSnapshotsDatasetParentName: M2 Pool/k3s/snapshots
      # "" (inherit), lz4, gzip-9, etc
      zvolCompression: 
      # "" (inherit), on, off, verify
      zvolDedup: 
      zvolEnableReservation: false
      # 512, 1K, 2K, 4K, 8K, 16K, 64K, 128K default is 16K
      zvolBlocksize:
    iscsi:
      targetPortal: 192.168.0.22
      # for multipath
      targetPortals: [] # [ "server[:port]", "server[:port]", ... ]
      # leave empty to omit usage of -I with iscsiadm
      interface:

      # MUST ensure uniqueness
      # full iqn limit is 223 bytes, plan accordingly
      # default is "{{ name }}"
      #nameTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      namePrefix: csi-
      nameSuffix: 

      # add as many as needed
      targetGroups:
        # get the correct ID from the "portal" section in the UI
        - targetGroupPortalGroup: 1
          # get the correct ID from the "initiators" section in the UI
          targetGroupInitiatorGroup: 3
          # None, CHAP, or CHAP Mutual
          targetGroupAuthType: None
          # get the correct ID from the "Authorized Access" section of the UI
          # only required if using CHAP
          targetGroupAuthGroup:

      #extentCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      extentInsecureTpc: true
      extentXenCompat: false
      extentDisablePhysicalBlocksize: true
      # 512, 1024, 2048, or 4096,
      extentBlocksize: 512
      # "" (let FreeNAS decide, currently defaults to SSD), Unknown, SSD, 5400, 7200, 10000, 15000
      extentRpm: "SSD"
      # 0-100 (0 == ignore)
      extentAvailThreshold: 0

Issue mount pv to pod

I am able to install the freenas-nfs-csi storage class. A test claim created a PV on the NFS server.

However, when trying to mount the PVC to a pod, it times out.

Here is the describe pv result
λ kubectl describe pv pvc-a7b69c78-7875-4e2b-b4c4-4e7d909a9362
Name: pvc-a7b69c78-7875-4e2b-b4c4-4e7d909a9362
Labels:
Annotations: pv.kubernetes.io/provisioned-by: org.democratic-csi.nfs
Finalizers: [kubernetes.io/pv-protection]
StorageClass: freenas-nfs-csi
Status: Bound
Claim: default/mongodb
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 8Gi
Node Affinity:
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: org.democratic-csi.nfs
VolumeHandle: pvc-a7b69c78-7875-4e2b-b4c4-4e7d909a9362
ReadOnly: false
VolumeAttributes: node_attach_driver=nfs
provisioner_driver=freenas-nfs
server=10.64.43.20
share=/mnt/pool0/k8s-dev/a/vols/pvc-a7b69c78-7875-4e2b-b4c4-4e7d909a9362
storage.kubernetes.io/csiProvisionerIdentity=1595572865963-8081-org.democratic-csi.nfs
Events:

Describing the pod that mounts it shows the following events:
Events:
Type Reason Age From Message


Warning FailedScheduling default-scheduler error while running "VolumeBinding" filter plugin for pod "mongodb-7749784fc4-6t2s5": pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling default-scheduler error while running "VolumeBinding" filter plugin for pod "mongodb-7749784fc4-6t2s5": pod has unbound immediate PersistentVolumeClaims

Should this run on arm64 please?

I have a cluster of Pi4s that I am trying to implement this on.

I just get crash loop backoff for all the pods.

The logs seem to indicate a problem with the driver-registrar container and maybe the cleanup container. I'm very, very new to all of this, so I'm unsure.

kubectl logs democratic-csi-1591404995-node-w2tsd csi-driver
info: initializing csi driver: freenas-iscsi
info: starting csi server - name: org.democratic-csi.iscsi, version: 0.1.0, driver: freenas-iscsi, mode: node, csi version: 1.1.0, address: , socket: unix:///csi-data/csi.sock

kubectl logs democratic-csi-1591404995-node-w2tsd driver-registrar
standard_init_linux.go:211: exec user process caused "exec format error"

kubectl logs democratic-csi-1591404995-node-w2tsd cleanup
standard_init_linux.go:211: exec user process caused "exec format error"

TIA
Daz

Use explicit image registry

Setups which do not use Docker Hub as the default registry fail. The registry needs to be included as an explicit portion of the image references to prevent failures in such a scenario.
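
For example, instead of an implicit Docker Hub reference like

democraticcsi/democratic-csi:latest

the chart should reference

docker.io/democraticcsi/democratic-csi:latest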

one deployment with iSCSI and NFS

Hello again, it's been working perfectly with iSCSI, but I need to add some NFS storage for ReadWriteMany. Is it possible to have it in the same deployment, or do I need to create a new one?

Question regarding dataset creation

Two questions:

  1. Do we need to enable SSH access for this to work?

  2. Do we need to create the dataset beforehand, or can it create a new dataset under the pool automatically?

Thanks

CA_BUNDLE preventing snapshot controller from being installed

I'm having trouble with the provided snapshot-controller Helm chart, specifically with the ValidatingWebhookConfiguration having webhooks.clientConfig.caBundle set to ${CA_BUNDLE}. I'm using ArgoCD to apply this chart, which effectively runs a helm template ... | kubectl apply. As a result, I'm getting the following error:

error decoding from json: illegal base64 data at input byte 0

I'm wondering if Helm is expected to pull the value of CA_BUNDLE from the environment when helm install is used, which doesn't happen when helm template is run. Would it make sense to provide a way to explicitly set an empty string or null in values.yaml to support this flow?

This is the ValidatingWebhookConfiguration spec I'm attempting to apply. When I comment out CA_BUNDLE, all works as expected

---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: "validation-webhook.snapshot.storage.k8s.io"
  labels:
    app.kubernetes.io/name: snapshot-controller
    helm.sh/chart: snapshot-controller-0.2.4
    app.kubernetes.io/instance: snapshot-controller
    app.kubernetes.io/managed-by: Helm
webhooks:
- name: "validation-webhook.snapshot.storage.k8s.io"
  rules:
  - apiGroups:   ["snapshot.storage.k8s.io"]
    apiVersions: ["v1", "v1beta1"]
    operations:  ["CREATE", "UPDATE"]
    resources:   ["volumesnapshots", "volumesnapshotcontents"]
    scope:       "*"
  clientConfig:
    service:
      namespace: "snapshot-controller"
      name: "snapshot-validation-service"
      path: "/volumesnapshot"
      caBundle: ${CA_BUNDLE}
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Ignore # We recommend switching to Fail only after successful installation of the webhook server and webhook.
  timeoutSeconds: 2 # This will affect the latency and performance. Finetune this value based on your application's tolerance.

Way to Move TrueNAS Secrets out of freenas-iscsi.yaml (values.yaml)?

Hi @travisghansen

I'm deploying democratic-csi via ArgoCD and everything appears to be working fine. While I'm using a private repository, I'm still looking for a way to move some items, such as host, apiKey, username, and privateKey, out of my values.yaml (freenas-iscsi.yaml is essentially my values.yaml) so they are not checked into GitHub (or I'll store them in a Sealed Secret).

Seems like I should be able to create a secret with these values prior to deployment and reference these within the values.yaml.

There is a secrets section:

  secrets:
    provisioner-secret:
    controller-publish-secret:
    node-stage-secret:
#      # any arbitrary iscsiadm entries can be add by creating keys starting with node-db.<entry.name>
#      # if doing CHAP
#      node-db.node.session.auth.authmethod: CHAP
#      node-db.node.session.auth.username: foo
#      node-db.node.session.auth.password: bar
#
#      # if doing mutual CHAP
#      node-db.node.session.auth.username_in: baz
#      node-db.node.session.auth.password_in: bar
    node-publish-secret:
    controller-expand-secret:

But it's not clear how something like driver.config.httpConnection.apiKey maps to any of the entries above.

I can't find a reference example of how this would work.
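
What I was hoping for is something along these lines: create a Secret holding the full driver config up front and reference it from values. The key below is my assumption, not confirmed chart API, so it would need to be verified against the chart's values.yaml:

driver:
  # hypothetical key - point the chart at a pre-created secret containing the driver config
  existingConfigSecret: democratic-csi-driver-config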

Initial setup assist, installed truenas core last week

Summary

My focus is Kubernetes. When I saw TrueNAS had democratic-csi, I made the leap. I installed TrueNAS Core last week and have created two pools, each with a single dataset; each dataset has been shared out via SMB with an ACL granting read/write to an AD group. I've joined TrueNAS to AD and am able to access the two shares using an AD user. I've followed the prep steps for SMB and iSCSI on the nodes of my cluster. My clusters were installed using kubeadm, Kubernetes version 1.24.0.

To reproduce / Steps taken

Now I'm working to get democratic-csi installed and working, first with smb and then with iscsi.

  1. copied https://github.com/democratic-csi/charts/blob/master/stable/democratic-csi/examples/freenas-smb.yaml to be used with helm
  2. copied https://github.com/democratic-csi/democratic-csi/blob/master/examples/freenas-smb.yaml and used sed to append to values.yaml
sed 's/^/      /g' freenas-smb.yaml >> values.yaml
  3. created a secret with credentials to access truenas:
$ cat templates/node-stage-secret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: node-stage-secret
data:
  username: root
  password: AasdfafdASDFs=
  4. made some changes to the values.yaml, not very confident on this step:
## 1) commented out the username and password lines cause I want to use the secret
    mountOptions:
    # can put these here or can be added to the node-stage-secret below
#    - username=foo
#    - password=bar

## 2) didn't do anything here, do i need to in order to get the node-stage-secret to work?
    secrets:
      provisioner-secret:
      controller-publish-secret:
      node-stage-secret:
      #  mount_flags: "username=foo,password=bar"

## 3) will this also use the node-stage-secret or do I need to put the password here in plain text? (but I don't want to use a password over port 80)
      httpConnection:
        protocol: http
        host: truenas
        port: 80
        #apiKey:
        username: root
        password:

## 4) will this also use the node-stage-secret or do I need to put the password here in plain text?
      sshConnection:
        host: truenas
        port: 22
        username: root
        password:

      smb:
        shareHost: truenas
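
Regarding 2): my best guess, based on the commented mount_flags example, is that the SMB credentials go under node-stage-secret roughly like below (whether the chart materializes these keys into the CSI node-stage secret is an assumption on my part):

    secrets:
      node-stage-secret:
        mount_flags: "username=myaduser,password=mypassword"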

Some output

$ k get pods
NAME                                         READY   STATUS             RESTARTS       AGE
democratic-csi-controller-5f7d465d85-qxc6b   1/5     CrashLoopBackOff   15 (20s ago)   2m6s
democratic-csi-node-5h77h                    4/4     Running            0              63s
democratic-csi-node-fp77s
$ k logs -f democratic-csi-controller-5f7d465d85-qxc6b
I0618 04:04:02.315029       1 feature_gate.go:245] feature gates: &{map[]}
I0618 04:04:02.315098       1 csi-provisioner.go:139] Version: v3.1.0
I0618 04:04:02.315104       1 csi-provisioner.go:162] Building kube configs for running in cluster...
I0618 04:04:02.315783       1 connection.go:154] Connecting to unix:///csi-data/csi.sock
I0618 04:04:02.316526       1 common.go:111] Probing CSI driver for readiness
I0618 04:04:02.316538       1 connection.go:183] GRPC call: /csi.v1.Identity/Probe
I0618 04:04:02.316541       1 connection.go:184] GRPC request: {}
I0618 04:04:05.356081       1 connection.go:186] GRPC response: {}
I0618 04:04:05.356162       1 connection.go:187] GRPC error: rpc error: code = Unavailable desc = unexpected HTTP status code received from server: 502 (Bad Gateway); malformed header: missing HTTP content-type
E0618 04:04:05.356203       1 csi-provisioner.go:197] CSI driver probe failed: rpc error: code = Unavailable desc = unexpected HTTP status code received from server: 502 (Bad Gateway); malformed header: missing HTTP content-type

Democratic-csi container is not versioned

Within the chart, all of the images except for democratic-csi use an explicitly defined version. Is that an intentional design decision? I was running a version of the democratic-csi chart from a year ago. Nodes that pulled democratic-csi:latest at that time still work, but newer nodes have run into an issue: csi-driver - "BREAKING CHANGE since v1.5.3! datasetPermissionsUser must be numeric: root is invalid". Is the expectation here that regular updates should be made, or should the democratic-csi image use tagged versions?

Mirror this Helm repo to ArtifactHub

Short version

Please would you consider creating a free account on ArtifactHub and mirroring your Helm repo at https://democratic-csi.github.io/charts/ to ArtifactHub? Once set up, ArtifactHub will periodically scrape your Helm repo and requires no further effort.

Long version

I have started using Fairwinds Nova to check whether my Helm deployments are up to date. It checks ArtifactHub to discover the latest versions of the Helm chart. As the democratic-csi Helm releases are not on ArtifactHub, it is unable to determine whether I am running the latest version in my cluster:

[jonathan@poseidon-gazeley-uk ~]$ nova find -a
Release Name                 Installed    Latest    Old      Deprecated
============                 =========    ======    ===      ==========
hammond                      0.1.2        0.1.3     true     false    
smtp                         0.3.0        0.3.0     false    false    
gemini                       1.0.0        1.0.0     false    false    
camerahub                    0.9.10       0.9.10    false    false    
owncloud                     0.2.2        0.2.2     false    false    
joplin-server                5.2.0        5.2.0     false    false    
about                        7.2.2        7.2.2     false    false    
influxdb                     2.1.0        2.1.0     false    false    
homer                        7.2.2        7.2.2     false    false    
cert-manager                 v1.9.0       1.9.0     false    false    
mariadb                      11.1.1       11.1.1    false    false    
bookstack                    5.0.0        5.0.0     false    false    
webtrees                     2.0.0        2.0.0     false    false    
nextcloud                    3.0.3        3.0.3     false    false    
node-problem-detector        2.2.2        2.2.2     false    false    
navidrome-jon                6.3.2        6.3.2     false    false    
prometheus-stack             38.0.2       38.0.2    false    false    
paperless                    9.0.0        9.0.0     false    false    
zfs-iscsi                    0.11.2                 false    false    
zfs-nfs                      0.13.1                 false    false    
graphite-exporter            0.1.0        0.1.0     false    false  

I set this up for my own Helm chart repo:

CRDs are too old and do not work on 1.20+ clusters

Hi Travis,
I just wanted to let you know that the CRDs referenced in the values.yaml file are too old and do not work with newer clusters.
The link https://raw.githubusercontent.com/kubernetes/csi-api/master/pkg/crd/manifests/csidriver.yaml points to a 3-year-old file, and trying to enable CRD installation through Helm returns the following error:

Error: UPGRADE FAILED: error validating "": error validating data: [ValidationError(CustomResourceDefinition.spec): unknown field "validation" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): unknown field "version" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec, ValidationError(CustomResourceDefinition.spec): missing required field "versions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec]

Not a big deal as everything is working without this. However I guess it could be a problem for new installations.

Best Regards,
Borislav

Regardless of chart version, latest democratic-csi image is always pulled and deployed

Goal

Pin democratic-csi to chart version 0.8.3 (thus representing our Kubernetes cluster democratic-csi state back in November 2021).

Expected behavior

Upon reinstalling the Kubernetes cluster from scratch and installing democratic-csi helm chart version 0.8.3, all democratic-csi components, including the main docker.io/democraticcsi/democratic-csi container, are pulled and deployed similar to their state back in November 2021.

Current behavior

The main democratic-csi container, docker.io/democraticcsi/democratic-csi, is always pulled at the ":latest" tag and deployed on the cluster instead. This happens regardless of the pinned democratic-csi chart version.

From charts/stable/democratic-csi/values.yaml (L164), it is hard-coded that the :latest version of democratic-csi is always pulled and deployed. This is the only exception, as all other containers in that file have their major/minor versions properly pinned.
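
Until the chart pins it, a workaround sketch is to override the image reference in your own values; the exact keys below are my assumption from the chart layout and should be verified against values.yaml:

controller:
  driver:
    image: docker.io/democraticcsi/democratic-csi:<pinned-tag>
node:
  driver:
    image: docker.io/democraticcsi/democratic-csi:<pinned-tag>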

Ability to set pods priority

AFAIK, there is currently no way to set the priority (priorityClass) of democratic-csi's pods.
Pods run with the default priority class, so they can be preempted/evicted, which seems like a bad idea.

It would be nice to have an option to set the priority of the deployed pods. Another option is to hardcode a high priority class, such as system-node-critical.
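
As an illustration of what I have in mind (hypothetical values keys, these do not exist in the chart today):

controller:
  priorityClassName: system-cluster-critical
node:
  priorityClassName: system-node-critical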

Questions about `datasetPermissions*` and `shareMaproot*` fields

I'm running TrueNAS Core 13 and democratic-csi Helm chart v0.13.7 (scroll down to see my Helm chart's values file).

In my TrueNAS, I've created a k8s-nfs group (GID: 1001) with a k8s-nfs user (UID: 1007) as a group member.

I also have followed the Server Prep section based on the democratic-csi's README, specifically this section:

  • Ensure that user has passwordless sudo privileges:

    csi ALL=(ALL) NOPASSWD:ALL
    
    # if on CORE 12.0-u3+ you should be able to do the following
    # which will ensure it does not get reset during reboots etc
    # at the command prompt
    cli
    
    # after you enter the truenas cli and are at that prompt
    account user query select=id,username,uid,sudo_nopasswd
    
    # find the `id` of the user you want to update (note, this is distinct from the `uid`)
    account user update id=<id> sudo=true
    account user update id=<id> sudo_nopasswd=true
    # optional if you want to disable password
    #account user update id=<id> password_disabled=true
    
    # exit cli by hitting ctrl-d
    
    # confirm sudoers file is appropriate
    cat /usr/local/etc/sudoers
    

    (note this can get reset by FreeNAS if you alter the user via the
    GUI later)

FYI, I've replaced the sample user (csi) above with my user (k8s-nfs) when executing the commands.

I noticed that my PVCs failed to provision when I set these fields:

  • driver.config.zfs.datasetPermissionsUser to 1007
  • driver.config.zfs.datasetPermissionsGroup to 1001
  • driver.config.nfs.shareMaprootUser to 1007
  • driver.config.nfs.shareMaprootGroup to 1001

But when I did not set them, as shown below, my PVCs were happy.

csiDriver:
  name: "org.democratic-csi.nfs"

storageClasses:
- name: freenas-nfs-csi
  defaultClass: false
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  parameters:
    fsType: nfs

  mountOptions:
  - noatime
  - nfsvers=4
  secrets:
    provisioner-secret:
    controller-publish-secret:
    node-stage-secret:
    node-publish-secret:
    controller-expand-secret:

driver:
  config:
    driver: freenas-nfs
    instance_id:
    httpConnection:
      protocol: http
      host: <my-truenas-core-ip>
      port: 80
      apiVersion: 2
      apiKey: redacted
      allowInsecure: true
    sshConnection:
      host: <my-truenas-core-ip>
      port: 22
      username: k8s-nfs
      privateKey: |
        -----BEGIN OPENSSH PRIVATE KEY-----
        redacted
        -----END OPENSSH PRIVATE KEY-----
    zfs:
      cli:
        sudoEnabled: true
      datasetParentName: foo/k8s/nfs/vols
      detachedSnapshotsDatasetParentName: foo/k8s/nfs/snaps
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      # datasetPermissionsUser: 1007 # <------------------- THIS
      # datasetPermissionsGroup: 1001 # <------------------ THIS
    nfs:
      shareHost: <my-truenas-core-ip>
      shareAlldirs: false
      shareAllowedHosts: []
      shareAllowedNetworks: []
      # shareMaprootUser: 1007 # <------------------------- THIS
      # shareMaprootGroup: 1001 # <------------------------ THIS
      shareMapallUser: ""
      shareMapallGroup: ""

Questions

  1. What is the purpose of the driver.config.zfs.datasetPermissionsUser, driver.config.zfs.datasetPermissionsGroup, driver.config.nfs.shareMaprootUser and driver.config.nfs.shareMaprootGroup fields?
  2. In what scenario(s) do we need to set these fields?
    • Is it a good idea not to set them, as above?
    • What are the security risks if we do not set them?
  3. Not related to questions 1 and 2 (just out of curiosity): is there a list/table somewhere that shows which operations democratic-csi performs via HTTP vs. SSH? I noticed that both the driver.config.httpConnection and driver.config.sshConnection blocks need to co-exist; my PVCs did not get created properly if I didn't specify both.

Proposal: Add helm-docs support on charts

Good day,

I would like to suggest adding helm-docs to your charts and automatically documenting them in README.md.

What helm-docs does is generate the variable list in your README, with all the details pulled from values.yaml.

It should be relatively easy to transition to this. I am also happy to volunteer to create an initial change with workflows, pre-commit config, and the values.yaml itself.

examples:
https://github.com/k8sonlab/publiccharts/blob/main/charts/zwave-js-ui/values.yaml
https://github.com/k8sonlab/publiccharts/blob/main/charts/zwave-js-ui/README.md
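
To illustrate, helm-docs picks up comments that start with # -- placed directly above a value in values.yaml and renders them into a table in the README; a minimal sketch:

csiDriver:
  # -- Globally unique name of the CSI driver for this cluster
  name: ""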

V

Question about the grpc-proxy

I noticed that the grpc-proxy is listed as a temporary workaround, and that the image it pulls is quite old (alpine 3.15). Is this still required for current versions of Kubernetes, or can I just disable that at this point?

Resource limits

I am currently on v1.5.4 due to this issue, and I am seeing the iSCSI controller specifically increase its memory usage slowly over time. Currently some pods are over 3.5GB of memory (nearly 25% of the node's memory). I would like to limit the memory (and other resources) used by the containers if possible.
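
What I would like to be able to set is something like the following (hypothetical keys; I have not confirmed the chart exposes per-container resources):

controller:
  driver:
    resources:
      requests:
        cpu: 50m
        memory: 128Mi
      limits:
        memory: 512Mi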

Create a second StorageClass

I successfully installed this into my cluster using Helm, and it works great! Thank you.

I would now like to create a second StorageClass which points to a different FreeNAS pool on the same server. Can you give some pointers on how to achieve that?
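
In case it helps frame the question: what I imagine is either a second entry under storageClasses, or (more likely, since datasetParentName lives under driver.config) a whole second release of the chart with a values file whose driver config points at the other pool, e.g. (names are placeholders):

helm install freenas-pool2 democratic-csi/democratic-csi --values freenas-nfs-pool2.yaml --namespace democratic-csi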

Split image and tag into different values in the helm chart

To make it simpler to use a local image repository for the images in the helm chart, please split the image values into separate image and tag values. For example, where there is currently

controller:
  externalAttacher:
    image: registry.k8s.io/sig-storage/csi-attacher:v4.4.0

there would be

controller:
  externalAttacher:
    image: registry.k8s.io/sig-storage/csi-attacher
    tag: v4.4.0

This would allow the image to be redirected to a different repo without having to override the tag as well.

Allow for annotations for volumesnapshotclasses via values file

My apologies if I am just not understanding this well enough, but it seems any annotations I add to my values.yaml are ignored, and I don't see annotations handled in snapshot-classes.yaml; would it be possible to add support for them? I need annotations for Kasten K10 to allow snapshots. I have added the annotation manually for now, but I assume that if I upgrade I will need to add it again.

k10.kasten.io/is-snapshot-class: "true"
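
For reference, what I would like to be able to express in values.yaml is roughly the following (annotations support here is the hypothetical part):

volumeSnapshotClasses:
  - name: freenas-nfs-csi
    annotations:
      k10.kasten.io/is-snapshot-class: "true"
    parameters:
    secrets:
      snapshotter-secret: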
