
k8s-csi-s3's Introduction

CSI for S3

This is a Container Storage Interface (CSI) driver for S3 (or S3-compatible) storage. It can dynamically allocate buckets and mount them via a FUSE mount into any container.

Kubernetes installation

Requirements

  • Kubernetes 1.17+
  • Kubernetes has to allow privileged containers
  • Docker daemon must allow shared mounts (systemd flag MountFlags=shared)

Helm chart

The Helm chart is published at https://yandex-cloud.github.io/k8s-csi-s3:

helm repo add yandex-s3 https://yandex-cloud.github.io/k8s-csi-s3/charts

helm install csi-s3 yandex-s3/csi-s3
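
If you prefer to manage the S3 credentials through Helm, the chart can typically create the secret for you from values. The value names below (secret.accessKey, secret.secretKey, secret.endpoint) are assumptions based on a common chart layout, not confirmed here — check the chart's values.yaml for the authoritative keys:

helm upgrade --install csi-s3 yandex-s3/csi-s3 \
  --namespace kube-system \
  --set secret.accessKey=<YOUR_ACCESS_KEY_ID> \
  --set secret.secretKey=<YOUR_SECRET_ACCESS_KEY> \
  --set secret.endpoint=https://storage.yandexcloud.net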

Manual installation

1. Create a secret with your S3 credentials

apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
  # Namespace depends on the configuration in the storageclass.yaml
  namespace: kube-system
stringData:
  accessKeyID: <YOUR_ACCESS_KEY_ID>
  secretAccessKey: <YOUR_SECRET_ACCESS_KEY>
  # For AWS set it to "https://s3.<region>.amazonaws.com", for example https://s3.eu-central-1.amazonaws.com
  endpoint: https://storage.yandexcloud.net
  # For AWS set it to AWS region
  #region: ""

The region can be left empty if you are using some other S3-compatible storage.
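
Save the manifest to a file and apply it before deploying the driver; the repository also ships a ready-made example (referenced elsewhere in this document as deploy/kubernetes/examples/secret.yaml) that you can adapt:

kubectl create -f deploy/kubernetes/examples/secret.yaml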

2. Deploy the driver

cd deploy/kubernetes
kubectl create -f provisioner.yaml
kubectl create -f driver.yaml
kubectl create -f csi-s3.yaml

Upgrading

If you're upgrading from version 0.35.5 or older, delete all resources from attacher.yaml:

wget https://raw.githubusercontent.com/yandex-cloud/k8s-csi-s3/v0.35.5/deploy/kubernetes/attacher.yaml
kubectl delete -f attacher.yaml

If you're upgrading from version 0.40.6 or older, delete all resources from the old provisioner.yaml:

wget -O old-provisioner.yaml https://raw.githubusercontent.com/yandex-cloud/k8s-csi-s3/v0.40.6/deploy/kubernetes/provisioner.yaml
kubectl delete -f old-provisioner.yaml

Then reapply csi-s3.yaml, driver.yaml and provisioner.yaml:

cd deploy/kubernetes
kubectl apply -f provisioner.yaml
kubectl apply -f driver.yaml
kubectl apply -f csi-s3.yaml

3. Create the storage class

kubectl create -f examples/storageclass.yaml
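
For reference, examples/storageclass.yaml looks roughly like the sketch below (a user-modified copy of the file is also quoted in full in one of the issues further down); the bucket parameter is optional and the options can be adjusted to your needs:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
provisioner: ru.yandex.s3.csi
parameters:
  mounter: geesefs
  # you can set mount options here, for example limit memory cache size (recommended)
  options: "--memory-limit 1000 --dir-mode 0777 --file-mode 0666"
  # to use an existing bucket, specify it here:
  # bucket: some-existing-bucket-name
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system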

4. Test the S3 driver

  1. Create a pvc using the new storage class:

    kubectl create -f examples/pvc.yaml
  2. Check if the PVC has been bound:

    $ kubectl get pvc csi-s3-pvc
    NAME         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    csi-s3-pvc   Bound     pvc-c5d4634f-8507-11e8-9f33-0e243832354b   5Gi        RWO            csi-s3         9s
  3. Create a test pod which mounts your volume:

    kubectl create -f examples/pod.yaml

    If the pod can start, everything should be working.

  4. Test the mount

    $ kubectl exec -ti csi-s3-test-nginx bash
    $ mount | grep fuse
    pvc-035763df-0488-4941-9a34-f637292eb95c: on /usr/share/nginx/html/s3 type fuse.geesefs (rw,nosuid,nodev,relatime,user_id=65534,group_id=0,default_permissions,allow_other)
    $ touch /usr/share/nginx/html/s3/hello_world

If something does not work as expected, check the troubleshooting section below.
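
For orientation, the examples/pvc.yaml and examples/pod.yaml used above boil down to something like the following sketch (sizes, access mode and names are illustrative; the files in deploy/kubernetes/examples are authoritative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc
spec:
  storageClassName: csi-s3
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-s3-test-nginx
spec:
  containers:
    - name: csi-s3-test-nginx
      image: nginx
      volumeMounts:
        - name: webroot
          mountPath: /usr/share/nginx/html/s3
  volumes:
    - name: webroot
      persistentVolumeClaim:
        claimName: csi-s3-pvc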

Additional configuration

Bucket

By default, csi-s3 will create a new bucket per volume. The bucket name will match the volume ID. If you want your volumes to live in a precreated bucket, you can simply specify the bucket in the storage class parameters:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3-existing-bucket
provisioner: ru.yandex.s3.csi
parameters:
  mounter: geesefs
  options: "--memory-limit 1000 --dir-mode 0777 --file-mode 0666"
  bucket: some-existing-bucket-name

If a bucket is specified, it will still be created if it does not exist on the backend. Every volume will get its own prefix within the bucket, matching the volume ID. When a volume is deleted, only its prefix is deleted.
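
In practice this means the mounter is pointed at <bucket>:<prefix> rather than at the bucket root. For example, for an illustrative volume ID pvc-123 provisioned with the class above, the objects end up under:

some-existing-bucket-name/pvc-123/...    # object keys for this volume
some-existing-bucket-name:pvc-123        # corresponding geesefs mount spec (<bucket>:<prefix>)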

Static Provisioning

If you want to mount a pre-existing bucket, or a prefix within a pre-existing bucket, and don't want csi-s3 to delete it when the PV is deleted, you can use static provisioning.

To do that, omit storageClassName in the PersistentVolumeClaim and manually create a PersistentVolume with a matching claimRef, as in the following example: deploy/kubernetes/examples/pvc-manual.yaml.
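
A condensed sketch of that example follows (a user-modified copy is also quoted verbatim in one of the issues below); volumeHandle is the existing bucket, or bucket/prefix, to mount as-is, and the value shown here is illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: manualbucket-with-path
spec:
  storageClassName: csi-s3
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  claimRef:
    namespace: default
    name: csi-s3-manual-pvc
  csi:
    driver: ru.yandex.s3.csi
    controllerPublishSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    nodePublishSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    nodeStageSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    volumeAttributes:
      capacity: 10Gi
      mounter: geesefs
    volumeHandle: manualbucket/path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-manual-pvc
spec:
  # Empty storage class disables dynamic provisioning
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi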

Mounter

We strongly recommend using the default mounter, which is GeeseFS.

However, there is also support for two other backends: s3fs and rclone.

The mounter can be set as a parameter in the storage class. You can also create multiple storage classes, one for each mounter, if you like.
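
For example, a second storage class selecting the s3fs mounter could look like this sketch (reuse the same csi.storage.k8s.io secret parameters as in examples/storageclass.yaml):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3-s3fs
provisioner: ru.yandex.s3.csi
parameters:
  mounter: s3fs
  # plus the same csi.storage.k8s.io/*-secret-name / *-secret-namespace
  # parameters as in the geesefs storage class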

As S3 is not a real file system, there are some limitations to consider here. Depending on which mounter you are using, you will have different levels of POSIX compatibility. Also, depending on which S3 storage backend you are using, there are not always consistency guarantees.

You can check POSIX compatibility matrix here: https://github.com/yandex-cloud/geesefs#posix-compatibility-matrix.

GeeseFS

  • Almost full POSIX compatibility
  • Good performance for both small and big files
  • Does not store file permissions and custom modification times
  • By default runs outside of the csi-s3 container using systemd, so that mountpoints don't break with "Transport endpoint is not connected" when csi-s3 is upgraded or restarted. Add --no-systemd to parameters.options of the StorageClass to disable this behaviour.
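
A sketch of the corresponding StorageClass parameters with --no-systemd appended to the mount options used elsewhere in this README:

parameters:
  mounter: geesefs
  options: "--memory-limit 1000 --dir-mode 0777 --file-mode 0666 --no-systemd"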

s3fs

  • Almost full POSIX compatibility
  • Good performance for big files, poor performance for small files
  • Very slow for directories with a large number of files

rclone

  • Poor POSIX compatibility
  • Bad performance for big files, okayish performance for small files
  • Doesn't create directory objects like s3fs or GeeseFS
  • May hang :-)

Troubleshooting

Issues while creating PVC

Check the logs of the provisioner:

kubectl logs -l app=csi-provisioner-s3 -c csi-s3

Issues creating containers

  1. Ensure feature gate MountPropagation is not set to false
  2. Check the logs of the s3-driver:
kubectl logs -l app=csi-s3 -c csi-s3

Development

This project can be built like any other Go application.

go get -u github.com/yandex-cloud/k8s-csi-s3

Build executable

make build

Tests

Currently the driver is tested by the CSI Sanity Tester. As end-to-end tests require S3 storage and a mounter like s3fs, this is best done in a Docker container. A Dockerfile and the test script are in the test directory. The easiest way to run the tests is to just use the make command:

make test


k8s-csi-s3's Issues

Problem with "--list-type 2" option

Hello!
I'm trying to use the --list-type 2 option, but it doesn't work.

To create the PV, I use the YAML from the examples (pvc-manual.yaml); I just add an additional options field:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: manualbucket-with-path
spec:
  storageClassName: csi-s3
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  claimRef:
    namespace: default
    name: csi-s3-manual-pvc
  csi:
    driver: ru.yandex.s3.csi
    controllerPublishSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    nodePublishSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    nodeStageSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    volumeAttributes:
      capacity: 20Gi
      mounter: geesefs
      options: "--list-type 2"
    volumeHandle: data/k8s-minio-poc

If I remove the options field, everything works. However, I need the --list-type 2 option.

kubectl logs -n kube-system csi-s3-XXXXX csi-s3 output:

I0111 11:26:27.056570       1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0111 11:26:27.072227       1 utils.go:97] GRPC call: /csi.v1.Node/NodePublishVolume
I0111 11:26:27.072453       1 nodeserver.go:100] target /var/lib/kubelet/pods/fb897854-ae11-4bb4-b44a-0776464eba1b/volumes/kubernetes.io~csi/manualbucket-with-path/mount
readonly false
volumeId data/k8s-minio-poc
attributes map[capacity:20Gi mounter:geesefs options:--list-type 2]
mountflags []
I0111 11:26:27.072878       1 mounter.go:63] Mounting fuse with command: geesefs and args: [--endpoint http://minio-host.xxx.ru:9000 -o allow_other --log-file /dev/stderr --list-type 2 data:k8s-minio-poc /var/lib/kubelet/pods/fb897854-ae11-4bb4-b44a-0776464eba1b/volumes/kubernetes.io~csi/manualbucket-with-path/mount]
2022/01/11 11:26:27.196202 main.INFO File system has been successfully mounted.
I0111 11:26:28.199218       1 nodeserver.go:117] s3: volume data/k8s-minio-poc successfully mounted to /var/lib/kubelet/pods/fb897854-ae11-4bb4-b44a-0776464eba1b/volumes/kubernetes.io~csi/manualbucket-with-path/mount
I0111 11:26:28.246738       1 utils.go:97] GRPC call: /csi.v1.Node/NodeUnpublishVolume
E0111 11:26:28.250496       1 utils.go:101] GRPC error: rpc error: code = Internal desc = Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/efdb1f7a-af89-4435-a49e-028311b27718/volumes/kubernetes.io~csi/data-k8s-minio-poc/mount
Output: umount: /var/lib/kubelet/pods/efdb1f7a-af89-4435-a49e-028311b27718/volumes/kubernetes.io~csi/data-k8s-minio-poc/mount: not mounted.

I0111 11:26:28.853419       1 utils.go:97] GRPC call: /csi.v1.Node/NodeUnpublishVolume
E0111 11:26:28.857244       1 utils.go:101] GRPC error: rpc error: code = Internal desc = Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/efdb1f7a-af89-4435-a49e-028311b27718/volumes/kubernetes.io~csi/data-k8s-minio-poc/mount
Output: umount: /var/lib/kubelet/pods/efdb1f7a-af89-4435-a49e-028311b27718/volumes/kubernetes.io~csi/data-k8s-minio-poc/mount: not mounted.

I0111 11:26:29.865091       1 utils.go:97] GRPC call: /csi.v1.Node/NodeUnpublishVolume
E0111 11:26:29.868959       1 utils.go:101] GRPC error: rpc error: code = Internal desc = Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/efdb1f7a-af89-4435-a49e-028311b27718/volumes/kubernetes.io~csi/data-k8s-minio-poc/mount
Output: umount: /var/lib/kubelet/pods/efdb1f7a-af89-4435-a49e-028311b27718/volumes/kubernetes.io~csi/data-k8s-minio-poc/mount: not mounted.

I0111 11:26:31.883634       1 utils.go:97] GRPC call: /csi.v1.Node/NodeUnpublishVolume
E0111 11:26:31.887480       1 utils.go:101] GRPC error: rpc error: code = Internal desc = Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/efdb1f7a-af89-4435-a49e-028311b27718/volumes/kubernetes.io~csi/data-k8s-minio-poc/mount
Output: umount: /var/lib/kubelet/pods/efdb1f7a-af89-4435-a49e-028311b27718/volumes/kubernetes.io~csi/data-k8s-minio-poc/mount: not mounted.

I0111 11:26:35.922016       1 utils.go:97] GRPC call: /csi.v1.Node/NodeUnpublishVolume
E0111 11:26:35.925620       1 utils.go:101] GRPC error: rpc error: code = Internal desc = Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/efdb1f7a-af89-4435-a49e-028311b27718/volumes/kubernetes.io~csi/data-k8s-minio-poc/mount
Output: umount: /var/lib/kubelet/pods/efdb1f7a-af89-4435-a49e-028311b27718/volumes/kubernetes.io~csi/data-k8s-minio-poc/mount: not mounted.

2022/01/11 11:26:40.378180 s3.ERROR ListObjects &{0xc000458210 <nil> <nil> 0xc000458220 <nil>} = NotImplemented: A header you provided implies functionality that is not implemented
        status code: 501, request id: 16C933B708C3F225, host id:
2022/01/11 11:26:40.378244 s3.ERROR http=501 A header you provided implies functionality that is not implemented s3=NotImplemented request=16C933B708C3F225

2022/01/11 11:26:40.378345 fuse.ERROR *fuseops.ReadDirOp error: NotImplemented: A header you provided implies functionality that is not implemented
2022/01/11 11:26:40.378366 fuse.ERROR   status code: 501, request id: 16C933B708C3F225, host id:
I0111 11:26:44.002131       1 utils.go:97] GRPC call: /csi.v1.Node/NodeUnpublishVolume
E0111 11:26:44.007667       1 utils.go:101] GRPC error: rpc error: code = Internal desc = Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/efdb1f7a-af89-4435-a49e-028311b27718/volumes/kubernetes.io~csi/data-k8s-minio-poc/mount
Output: umount: /var/lib/kubelet/pods/efdb1f7a-af89-4435-a49e-028311b27718/volumes/kubernetes.io~csi/data-k8s-minio-poc/mount: not mounted.

Name for new bucket

Can you add an option to name created buckets with something other than the PVC UID, e.g. {Namespace}, {PVC-name}, or a combination?

issue when mounting Static Provisioning PV to multi pods

I created two static-provisioning PVs with the same prefix in the same S3 bucket (mybucket/myprefix) and mounted them into two pods: only one pod can access S3, and the other loses the connection.

I also tried adding --no-systemd to parameters.options, but it didn't help.

So what should I do to allow multiple pods to access the same prefix? Thanks.

how to upgrade the mount driver?

Hello, the readme.md is clear on how to install the CSI driver, but there are no tips on how to upgrade it.

Should I disable scheduling on the k8s nodes and delete all the pods which are using the S3 PVs before I update the daemonset?

FailedMount

Hello!
Sometimes I have a problem with mounting a bucket into pods.
From k8s I get FailedMount with the message: Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[kube-api-qjr9f data]: timed out waiting for the condition

After restarting the csi pods (csi-provisioner-s3-0, csi-attacher-s3-0, csi-s3-xxxxx) it works for a while and then fails again with the FailedMount error.

kubectl logs -n kube-system csi-s3-XXXXX csi-s3 output:

I0119 11:31:53.254133       1 mounter.go:122] Found matching pid 39 on path /var/lib/kubelet/pods/92f3b65c-8bf1-4514-ba2d-63b12b0c56d5/volumes/kubernetes.io~csi/data-bucket/mount
I0119 11:31:53.254178       1 mounter.go:87] Found fuse pid 39 of mount /var/lib/kubelet/pods/92f3b65c-8bf1-4514-ba2d-63b12b0c56d5/volumes/kubernetes.io~csi/data-bucket/mount, checking if it still runs
I0119 11:31:53.254226       1 mounter.go:147] Fuse process with PID 39 still active, waiting...
I0119 11:31:53.354755       1 mounter.go:147] Fuse process with PID 39 still active, waiting...
I0119 11:31:53.505635       1 mounter.go:147] Fuse process with PID 39 still active, waiting...
I0119 11:31:53.731230       1 mounter.go:147] Fuse process with PID 39 still active, waiting...
I0119 11:31:54.069091       1 mounter.go:147] Fuse process with PID 39 still active, waiting...
W0119 11:31:54.575966       1 mounter.go:138] Fuse process seems dead, returning
I0119 11:31:54.576087       1 nodeserver.go:137] s3: volume data has been unmounted.
I0119 11:31:58.582661       1 utils.go:97] GRPC call: /csi.v1.Node/NodeUnpublishVolume
E0119 11:31:58.586674       1 utils.go:101] GRPC error: rpc error: code = Internal desc = Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/d496285b-ca1e-4082-a849-77336431ae82/volumes/kubernetes.io~csi/data-bucket/mount
Output: umount: /var/lib/kubelet/pods/d496285b-ca1e-4082-a849-77336431ae82/volumes/kubernetes.io~csi/data-bucket/mount: not mounted.

I0119 11:32:05.659607       1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0119 11:32:05.675449       1 utils.go:97] GRPC call: /csi.v1.Node/NodePublishVolume
I0119 11:32:05.675714       1 nodeserver.go:100] target /var/lib/kubelet/pods/95568661-488f-4b8c-af1c-dbbc21d590dd/volumes/kubernetes.io~csi/data-bucket/mount
readonly false
volumeId data
attributes map[capacity:500Gi mounter:geesefs options:--list-type 2]
mountflags []
I0119 11:32:05.676084       1 mounter.go:63] Mounting fuse with command: geesefs and args: [--endpoint http://minio-host:9000 -o allow_other --log-file /dev/stderr --list-type 2 data: /var/lib/kubelet/pods/95568661-488f-4b8c-af1c-dbbc21d590dd/volumes/kubernetes.io~csi/data-bucket/mount]
2022/01/19 11:32:05.741157 main.INFO File system has been successfully mounted.
I0119 11:32:06.744437       1 nodeserver.go:117] s3: volume data successfully mounted to /var/lib/kubelet/pods/95568661-488f-4b8c-af1c-dbbc21d590dd/volumes/kubernetes.io~csi/data-bucket/mount
I0119 11:32:06.865094       1 utils.go:97] GRPC call: /csi.v1.Node/NodeUnpublishVolume
E0119 11:32:06.870086       1 utils.go:101] GRPC error: rpc error: code = Internal desc = Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/d496285b-ca1e-4082-a849-77336431ae82/volumes/kubernetes.io~csi/data-bucket/mount
Output: umount: /var/lib/kubelet/pods/d496285b-ca1e-4082-a849-77336431ae82/volumes/kubernetes.io~csi/data-bucket/mount: not mounted.

I0119 11:32:07.470276       1 utils.go:97] GRPC call: /csi.v1.Node/NodeUnpublishVolume
E0119 11:32:07.474229       1 utils.go:101] GRPC error: rpc error: code = Internal desc = Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/d496285b-ca1e-4082-a849-77336431ae82/volumes/kubernetes.io~csi/data-bucket/mount
Output: umount: /var/lib/kubelet/pods/d496285b-ca1e-4082-a849-77336431ae82/volumes/kubernetes.io~csi/data-bucket/mount: not mounted.

I0119 11:32:08.483579       1 utils.go:97] GRPC call: /csi.v1.Node/NodeUnpublishVolume
E0119 11:32:08.487524       1 utils.go:101] GRPC error: rpc error: code = Internal desc = Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/d496285b-ca1e-4082-a849-77336431ae82/volumes/kubernetes.io~csi/data-bucket/mount
Output: umount: /var/lib/kubelet/pods/d496285b-ca1e-4082-a849-77336431ae82/volumes/kubernetes.io~csi/data-bucket/mount: not mounted.

I0119 11:32:10.504619       1 utils.go:97] GRPC call: /csi.v1.Node/NodeUnpublishVolume
E0119 11:32:10.507909       1 utils.go:101] GRPC error: rpc error: code = Internal desc = Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/d496285b-ca1e-4082-a849-77336431ae82/volumes/kubernetes.io~csi/data-bucket/mount
Output: umount: /var/lib/kubelet/pods/d496285b-ca1e-4082-a849-77336431ae82/volumes/kubernetes.io~csi/data-bucket/mount: not mounted.

Set resource requests

If the default configuration of geesefs expects 1 GiB of memory, then it would be smart to include that in resource requests.

k3s not supported?

Hi!

I installed csi-s3 in k3s using the manifests and I am not able to start the pod.

Information on this situation:

  • k3s version:
$ k3s -v
k3s version v1.26.4+k3s1 (8d0255af)
go version go1.19.8
  • get pv,pvc,pod:
$ kubectl get pv,pvc,pod -A
NAME                                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
persistentvolume/manualbucket-with-path   10Gi       RWX            Retain           Bound    default/csi-s3-manual-pvc   csi-s3                  50s

NAMESPACE   NAME                                      STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     persistentvolumeclaim/csi-s3-manual-pvc   Bound    manualbucket-with-path   10Gi       RWX                           49s

NAMESPACE     NAME                                          READY   STATUS              RESTARTS      AGE
kube-system   pod/helm-install-traefik-crd-l9cd4            0/1     Completed           0             2d
kube-system   pod/helm-install-traefik-j8ddh                0/1     Completed           2             2d
kube-system   pod/svclb-traefik-dc14317e-4v24b              2/2     Running             0             45h
kube-system   pod/local-path-provisioner-76d776f6f9-tzqh9   1/1     Running             1 (19h ago)   2d
kube-system   pod/svclb-traefik-dc14317e-4hws8              2/2     Running             2 (19h ago)   45h
kube-system   pod/traefik-56b8c5fb5c-c29h9                  1/1     Running             1 (19h ago)   2d
kube-system   pod/metrics-server-7b67f64457-m4qts           1/1     Running             1 (19h ago)   2d
kube-system   pod/coredns-59b4f5bbd5-rq95k                  1/1     Running             1 (19h ago)   2d
kube-system   pod/csi-provisioner-s3-0                      2/2     Running             0             2m25s
kube-system   pod/csi-attacher-s3-0                         1/1     Running             0             2m18s
default       pod/csi-s3-test-nginx                         0/1     ContainerCreating   0             34s
  • Message in journald:
May 27 06:36:56 ru-00 k3s[705]: E0527 06:36:56.126079     705 kubelet.go:1821] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-2jw4l]: timed out waiting for the condition" pod="default/csi-s3-test-nginx"
May 27 06:36:56 ru-00 k3s[705]: E0527 06:36:56.126131     705 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-2jw4l]: timed out waiting for the condition" pod="default/csi-s3-test-nginx" podUID=7ae574b9-cea8-4197-95ca-4814181f0d6e
May 27 06:39:11 ru-00 k3s[705]: E0527 06:39:11.592634     705 kubelet.go:1821] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-2jw4l]: timed out waiting for the condition" pod="default/csi-s3-test-nginx"
May 27 06:39:11 ru-00 k3s[705]: E0527 06:39:11.592691     705 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-2jw4l]: timed out waiting for the condition" pod="default/csi-s3-test-nginx" podUID=7ae574b9-cea8-4197-95ca-4814181f0d6e
May 27 06:41:25 ru-00 k3s[705]: E0527 06:41:25.593695     705 kubelet.go:1821] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[webroot], unattached volumes=[kube-api-access-2jw4l webroot]: timed out waiting for the condition" pod="default/csi-s3-test-nginx"
May 27 06:41:25 ru-00 k3s[705]: E0527 06:41:25.593771     705 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[webroot], unattached volumes=[kube-api-access-2jw4l webroot]: timed out waiting for the condition" pod="default/csi-s3-test-nginx" podUID=7ae574b9-cea8-4197-95ca-4814181f0d6e
May 27 06:43:41 ru-00 k3s[705]: E0527 06:43:41.594270     705 kubelet.go:1821] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-2jw4l]: timed out waiting for the condition" pod="default/csi-s3-test-nginx"
May 27 06:43:41 ru-00 k3s[705]: E0527 06:43:41.594345     705 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-2jw4l]: timed out waiting for the condition" pod="default/csi-s3-test-nginx" podUID=7ae574b9-cea8-4197-95ca-4814181f0d6e
May 27 06:45:56 ru-00 k3s[705]: E0527 06:45:56.593376     705 kubelet.go:1821] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-2jw4l]: timed out waiting for the condition" pod="default/csi-s3-test-nginx"
May 27 06:45:56 ru-00 k3s[705]: E0527 06:45:56.593450     705 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-2jw4l]: timed out waiting for the condition" pod="default/csi-s3-test-nginx" podUID=7ae574b9-cea8-4197-95ca-4814181f0d6e
  • log csi-attacher-s3:
I0527 06:33:02.839570       1 driver.go:73] Driver: ru.yandex.s3.csi 
I0527 06:33:02.843348       1 driver.go:74] Version: v1.34.7 
I0527 06:33:02.843381       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0527 06:33:02.843386       1 driver.go:93] Enabling volume access mode: MULTI_NODE_MULTI_WRITER
I0527 06:33:02.844131       1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"//var/lib/kubelet/plugins/ru.yandex.s3.csi/csi.sock", Net:"unix"}
I0527 06:33:02.854693       1 utils.go:97] GRPC call: /csi.v1.Identity/Probe
I0527 06:33:02.855769       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginInfo
I0527 06:33:02.856671       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I0527 06:33:02.857578       1 utils.go:97] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
  • kubectl describe pod csi-s3-test-nginx
Name:             csi-s3-test-nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             ru-00/10.42.2.1
Start Time:       Sat, 27 May 2023 14:34:53 +0800
Labels:           <none>
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  csi-s3-test-nginx:
    Container ID:   
    Image:          nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html/s3 from webroot (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2jw4l (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  webroot:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  csi-s3-manual-pvc
    ReadOnly:   false
  kube-api-access-2jw4l:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
                             worker:NoSchedule op=Exists
Events:
  Type     Reason              Age                  From                     Message
  ----     ------              ----                 ----                     -------
  Normal   Scheduled           22m                  default-scheduler        Successfully assigned default/csi-s3-test-nginx to ru-00
  Warning  FailedMount         4m52s (x2 over 16m)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[kube-api-access-2jw4l webroot]: timed out waiting for the condition
  Warning  FailedAttachVolume  2m37s (x9 over 20m)  attachdetach-controller  AttachVolume.Attach failed for volume "manualbucket-with-path" : timed out waiting for external-attacher of ru.yandex.s3.csi CSI driver to attach volume myuniqnamebacket
  Warning  FailedMount         20s (x8 over 20m)    kubelet                  Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-2jw4l]: timed out waiting for the condition

How to deploy

  • commands
kubectl create -f yandex-sci.yaml
kubectl create -f k8s-csi-s3/deploy/kubernetes/provisioner.yaml
kubectl create -f k8s-csi-s3/deploy/kubernetes/attacher.yaml
kubectl create -f k8s-csi-s3/deploy/kubernetes/csi-s3.yaml
kubectl create -f k8s-csi-s3/deploy/kubernetes/examples/pvc-manual.yaml
kubectl create -f k8s-csi-s3/deploy/kubernetes/examples/pod.yaml
  • cat yandex-sci.yaml
---
apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: csi-s3-secret
stringData:
  accessKeyID: "***"
  secretAccessKey: "***"
  endpoint: https://storage.yandexcloud.net
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
  #namespace: main
provisioner: ru.yandex.s3.csi
parameters:
  mounter: geesefs
  options: "--dir-mode=0777 --file-mode=0666"
  # bucket: <optional: name of an existing bucket>
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  • cat k8s-csi-s3/deploy/kubernetes/examples/pvc-manual.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manualbucket-with-path
spec:
  storageClassName: csi-s3
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  claimRef:
    namespace: default
    name: csi-s3-manual-pvc
  csi:
    driver: ru.yandex.s3.csi
    controllerPublishSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    nodePublishSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    nodeStageSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    volumeAttributes:
      capacity: 10Gi
      mounter: geesefs
    volumeHandle: myuniqnamebacket
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-manual-pvc
spec:
  # Empty storage class disables dynamic provisioning
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

Permission in object storage

I created a service account in YC (permission: storage.editor) and created a static key. I created a bucket, myuniqnamebacket, with full permissions for the service account. I also checked that I could mount this bucket and write to it; I found no problems.

Please help me get this running in k3s.

Cannot attach csi-s3-pvc to the nginx pod

I created the PVC csi-s3-pvc:

kubectl get pvc  |grep csi-s3-pvc
csi-s3-pvc   Bound    pvc-54b00cfe-4a8e-46d2-a4f9-0d7096274953   5Gi        RWX            csi-s3         97m

And I can see a folder pvc-54b00cfe-4a8e-46d2-a4f9-0d7096274953 under my bucket.

I created the pod with kubectl create -f examples/pod.yaml, but got an error:

kubectl describe pods csi-s3-test-nginx

Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Normal   Scheduled               11m                   default-scheduler        Successfully assigned default/csi-s3-test-nginx to node4
  Normal   SuccessfulAttachVolume  11m                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-54b00cfe-4a8e-46d2-a4f9-0d7096274953"
  Warning  FailedMount             3m1s (x4 over 9m50s)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[webroot default-token-58xtj]: timed out waiting for the condition
  Warning  FailedMount             81s (x13 over 11m)    kubelet                  MountVolume.SetUp failed for volume "pvc-54b00cfe-4a8e-46d2-a4f9-0d7096274953" : rpc error: code = Unknown desc = Error fuseMount command: geesefs
args: [--endpoint https://oss-cn-hongkong-internal.aliyuncs.com -o allow_other --log-file /dev/stderr --memory-limit 1000 --dir-mode 0777 --file-mode 0666 pamys-oss-hk:pvc-54b00cfe-4a8e-46d2-a4f9-0d7096274953 /var/lib/kubelet/pods/ceee53d6-70a7-4f3a-b9c8-068b2f7eca9e/volumes/kubernetes.io~csi/pvc-54b00cfe-4a8e-46d2-a4f9-0d7096274953/mount]
output:
  Warning  FailedMount  46s  kubelet  Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[default-token-58xtj webroot]: timed out waiting for the condition
kubectl logs csi-s3-ws7tl -n kube-system --all-containers

I0720 09:35:46.754621       1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0720 09:35:46.761659       1 utils.go:97] GRPC call: /csi.v1.Node/NodePublishVolume
I0720 09:35:46.761789       1 nodeserver.go:100] target /var/lib/kubelet/pods/ceee53d6-70a7-4f3a-b9c8-068b2f7eca9e/volumes/kubernetes.io~csi/pvc-54b00cfe-4a8e-46d2-a4f9-0d7096274953/mount
readonly true
volumeId pamys-oss-hk/pvc-54b00cfe-4a8e-46d2-a4f9-0d7096274953
attributes map[bucket:pamys-oss-hk capacity:5368709120 mounter:geesefs options:--memory-limit 1000 --dir-mode 0777 --file-mode 0666 storage.kubernetes.io/csiProvisionerIdentity:1658303331554-8081-ru.yandex.s3.csi]
mountflags []
I0720 09:35:46.762002       1 mounter.go:63] Mounting fuse with command: geesefs and args: [--endpoint https://oss-cn-hongkong-internal.aliyuncs.com -o allow_other --log-file /dev/stderr --memory-limit 1000 --dir-mode 0777 --file-mode 0666 pamys-oss-hk:pvc-54b00cfe-4a8e-46d2-a4f9-0d7096274953 /var/lib/kubelet/pods/ceee53d6-70a7-4f3a-b9c8-068b2f7eca9e/volumes/kubernetes.io~csi/pvc-54b00cfe-4a8e-46d2-a4f9-0d7096274953/mount]
2022/07/20 09:35:46.859414 s3.INFO Falling back to v2 signer
2022/07/20 09:35:46.861777 main.ERROR Unable to access 'pamys-oss-hk': Forbidden: Forbidden
        status code: 403, request id: 62D7CC7257A4D33032575C4F, host id: 
2022/07/20 09:35:46.861955 main.FATAL Mounting file system: Mount: initialization failed
2022/07/20 09:35:47.862615 main.FATAL Unable to mount file system, see syslog for details
E0720 09:35:47.863778       1 utils.go:101] GRPC error: Error fuseMount command: geesefs
args: [--endpoint https://oss-cn-hongkong-internal.aliyuncs.com -o allow_other --log-file /dev/stderr --memory-limit 1000 --dir-mode 0777 --file-mode 0666 pamys-oss-hk:pvc-54b00cfe-4a8e-46d2-a4f9-0d7096274953 /var/lib/kubelet/pods/ceee53d6-70a7-4f3a-b9c8-068b2f7eca9e/volumes/kubernetes.io~csi/pvc-54b00cfe-4a8e-46d2-a4f9-0d7096274953/mount]
more examples/storageclass.yaml 
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
provisioner: ru.yandex.s3.csi
parameters:
  mounter: geesefs
  # you can set mount options here, for example limit memory cache size (recommended)
  options: "--memory-limit 1000 --dir-mode 0777 --file-mode 0666"
  # to use an existing bucket, specify it here:
  bucket: pamys-oss-hk
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system

The links to the built Docker images in the manifests are not publicly available or are invalid

Issue: I am trying to evaluate this operator on k8s. However, I can't seem to pull the image.
When using the CLI:

$ docker pull cr.yandex/crp9ftr22d26age3hulg/yandex-csi-s3:0.30.4

Error response from daemon: repository cr.yandex/crp9ftr22d26age3hulg/yandex-csi-s3 not found: name unknown: Repository crp9ftr22d26age3hulg/yandex-csi-s3 not found ; requestId = a66d8c9a-a81d-4d4d-9350-1f6e8723d831

I imagine that there may currently be some restrictions on who can pull the images. As the project now seems public, it would be nice to have some publicly accessible images for these containers.

CSI-S3 fails after a few hours of inactivity

Hello. We are trying to use CSI-S3 with geesefs as the storage backend for Elasticsearch. We are using this Elasticsearch as a snapshot checker; most of the time it is idle and not processing any data. We noticed that after a few hours of inactivity all IO operations in the Elasticsearch pod fail with the following log lines in kube-system/csi-s3-XXX:

E0329 12:21:59.786708      1 utils.go:101] GRPC error: rpc error: code = Internal desc = Unmount failed: exit status 32
Unmounting arguments: /var/lib/kubelet/pods/44d8a275-2b1d-4236-8d6a-ba6f4d709b60/volumes/kubernetes.io~csi/pvc-69e61d54-8b2a-420d-b1b3-0260b790d33e/mount
Output: umount: /var/lib/kubelet/pods/44d8a275-2b1d-4236-8d6a-ba6f4d709b60/volumes/kubernetes.io~csi/pvc-69e61d54-8b2a-420d-b1b3-0260b790d33e/mount: not mounted

After we manually restarted this pod everything was fine again. We suspect that the problem could be caused by a network disruption that terminates the TCP connection, which is then not reestablished after the network problem is gone.

How do we prevent this behavior of CSI-S3?

Thank you.

Helm chart uses an outdated template file

Hello!

After comparing the output of helm template for the project's Helm chart with the fixed Kubernetes manifests, it seems that the changes brought by commits ecf1031 and bfba087 are not reflected in the csi-s3.yaml template file.
If that is not on purpose and was simply overlooked, I can throw in a PR for that.

By the way, is there any particular reason the Helm chart default values use the cr.yandex repo for container images while the Kubernetes manifests use the quay.io repo? I suppose those images are just mirrored. Also, for the main csi-s3 image, the cr.yandex repo is used in both cases, but the path still differs a little, which creates minor extra complexity:

image: cr.yandex/crp9ftr22d26age3hulg/csi-s3:0.35.5

vs
csi: cr.yandex/crp9ftr22d26age3hulg/yandex-cloud/csi-s3/csi-s3-driver:0.35.5

Mounter not using cluster DNS

Hi,

I'm trying to use an in-cluster object storage solution, and I'm unable to mount the volumes even though they are provisioned correctly. This is because the mounter does not use the cluster DNS. Can this be adjusted?

Example of error:
2023/06/05 09:52:41.159356 s3.ERROR code=RequestError msg=send request failed, err=Head "http://***.***.svc.cluster.local/cluster-volumes/pvc-7f661960-57c2-4e13-a9b5-9b393cf778f6/tlr9mntsgezlnlizpcjh4mkwka7wslev": dial tcp: lookup **.**.svc.cluster.local on 1.1.1.1:53: no such host

On another note, the Helm charts on cr.yandex are a version behind.

Thank you.

Unwanted default region

Version 0.30.9 seems to set a default region when no region is specified for custom s3 providers.

Could this be reversed or an option added to keep the region empty?

Bucket folder fails to unmount when deleting pod

Using a manual PVC, when I delete the pod it's mounted on and recreate it, I get this error:

MountVolume.MountDevice failed for volume "manualbucket-with-path" : kubernetes.io/csi: attacher.MountDevice failed to create dir "/var/lib/kubelet/plugins/kubernetes.io/csi/ru.yandex.s3.csi/5e0c8b151eaa5ff5fba3d9eb96cf3f82d1b7ccc6703ef5c673ca7f59cafff43b/globalmount": mkdir /var/lib/kubelet/plugins/kubernetes.io/csi/ru.yandex.s3.csi/5e0c8b151eaa5ff5fba3d9eb96cf3f82d1b7ccc6703ef5c673ca7f59cafff43b/globalmount: file exists

I can run umount on that directory and then rm -rf it, but that seems like kind of a shoddy solution. I could be missing something simple here!

PVC and bucket size limit exhaustion

Hello,

Could you please explain how the storage size limiting works with this provisioner?
I've tested it by creating a storage class in GeeseFS mode and setting limits on both the PVC and the bucket itself, but I was able to write much more data than allowed by either the PVC size or the bucket size.

It looks like the PVC size is completely ignored and the bucket's current utilization is updated with a significant delay.
Could you please explain whether this is expected behaviour or whether it can be configured somehow?

Thanks!

Issue getting this set up!

Hi! I've been trying for a few days to get this working, and aside from probably being an idiot, I suspect I'm doing something wrong. My use case is that I have an existing bucket that I want to attach pods to (in an indexed job) to use the workflow files. What I've tried:

  • I started with the static example, but providing an empty storage class doesn't work: the error message says it doesn't know which class to use.
  • I then thought I was getting close, but it could not find a mounter named geesefs. I can't find instructions anywhere on how to install / set that up; I assumed it came with the CSI here.
  • I tried changing to the other mounter type; that failed even earlier because the PVC couldn't be created: "DeadlineExceeded desc = context deadline exceeded"

I just want to mount my bucket on S3. Please help.

on host: Cannot set property ExecStopPost, or unknown property.

kubectl logs --tail=200 -f -n kube-system csi-s3-b6vlp -c csi-s3

GRPC error: Error starting systemd unit geesefs-*.service on host: Cannot set property ExecStopPost, or unknown property.

docker engine: containerd
eks: 1.23
image: cr.yandex/crp9ftr22d26age3hulg/csi-s3:0.35.5

csi-s3-test-nginx AttachVolume.Attach failed for volume

Kubernetes v1.23.0
csi-provisioner log
I0517 20:09:35.241760 1 controller.go:1335] provision "default/csi-s3-pvc" class "csi-s3": started
I0517 20:09:35.241983 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"csi-s3-pvc", UID:"1083ef7e-3d80-4708-81df-cdfefea6d9fc", APIVersion:"v1", ResourceVersion:"56383159", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/csi-s3-pvc"
I0517 20:09:35.252140 1 controller.go:762] create volume rep: {CapacityBytes:5368709120 VolumeId:pvc-1083ef7e-3d80-4708-81df-cdfefea6d9fc VolumeContext:map[capacity:5368709120 mounter:geesefs options:--memory-limit 1000 --dir-mode 0777 --file-mode 0666] ContentSource:<nil> AccessibleTopology:[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0517 20:09:35.252216 1 controller.go:838] successfully created PV pvc-1083ef7e-3d80-4708-81df-cdfefea6d9fc for PVC csi-s3-pvc and csi volume name pvc-1083ef7e-3d80-4708-81df-cdfefea6d9fc
I0517 20:09:35.252246 1 controller.go:1442] provision "default/csi-s3-pvc" class "csi-s3": volume "pvc-1083ef7e-3d80-4708-81df-cdfefea6d9fc" provisioned
I0517 20:09:35.252289 1 controller.go:1459] provision "default/csi-s3-pvc" class "csi-s3": succeeded
I0517 20:09:35.257532 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"csi-s3-pvc", UID:"1083ef7e-3d80-4708-81df-cdfefea6d9fc", APIVersion:"v1", ResourceVersion:"56383159", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-1083ef7e-3d80-4708-81df-cdfefea6d9fc
I0517 20:12:31.275352 1 reflector.go:530] sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:872: Watch close - *v1.PersistentVolume total 9 items received
I0517 20:12:35.269276 1 reflector.go:530] sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:875: Watch close - *v1.StorageClass total 0 items received
I0517 20:13:43.177116 1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 11 items received
I0517 20:15:06.175241 1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 0 items received
I0517 20:17:54.277344 1 reflector.go:530] sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:875: Watch close - *v1.StorageClass total 0 items received
I0517 20:19:29.205444 1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 0 items received
I0517 20:20:24.175715 1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0517 20:20:24.273645 1 reflector.go:381] sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:872: forcing resync
I0517 20:20:35.191250 1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 3 items received
I0517 20:20:57.280450 1 reflector.go:530] sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:872: Watch close - *v1.PersistentVolume total 0 items received
I0517 20:24:24.282455 1 reflector.go:530] sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:875: Watch close - *v1.StorageClass total 6 items received
csi-s3 log

    I0517 20:06:11.488371 1 geesefs.go:142] Starting geesefs using systemd: /var/lib/kubelet/plugins/ru.yandex.s3.csi/geesefs -f -o allow_other --endpoint http://10.244.1.42:9000/minio --setuid 65534 --setgid 65534 --memory-limit 1000 --dir-mode 0777 --file-mode 0666 pvc-1d0004dd-6763-4cc6-9abb-5b32e74e7b2c: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1d0004dd-6763-4cc6-9abb-5b32e74e7b2c/globalmount
    E0517 20:06:23.033948 1 utils.go:101] GRPC error: Timeout waiting for mount
    I0517 20:07:27.124221 1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities
    I0517 20:07:27.130493 1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities
    I0517 20:07:27.131927 1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities
    I0517 20:07:27.132819 1 utils.go:97] GRPC call: /csi.v1.Node/NodeStageVolume
    I0517 20:07:27.138459 1 geesefs.go:142] Starting geesefs using systemd: /var/lib/kubelet/plugins/ru.yandex.s3.csi/geesefs -f -o allow_other --endpoint http://10.244.1.42:9000/minio --setuid 65534 --setgid 65534 --memory-limit 1000 --dir-mode 0777 --file-mode 0666 pvc-1d0004dd-6763-4cc6-9abb-5b32e74e7b2c: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-1d0004dd-6763-4cc6-9abb-5b32e74e7b2c/globalmount
    E0517 20:07:38.696702 1 utils.go:101] GRPC error: Timeout waiting for mount
    I0517 20:09:40.715093 1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities

ignoring certificate check

I have an internal MinIO server with a certificate issued by cert-manager. It's a self-signed certificate. The CSI mounter with geesefs fails to validate the certificate.

Is there an option to either use http instead of https, or, if it's https, to skip the verification?

[Feature request] Allow to create a pvc with a RO bucket

We have a read-only S3 bucket, provided by a cloud provider. We would like to be able to mount this bucket in pods.

Currently, the docs state:

If the bucket is specified, it will still be created if it does not exist on the backend. Every volume will get its own prefix within the bucket which matches the volume ID. When deleting a volume, also just the prefix will be deleted.

This means that the csi-provisioner tries to create a directory in the already existing bucket. This is problematic for us, as the bucket we would like to use is read-only, and thus we get an Access Denied when trying to create the PVC.

An option to use the existing bucket as-is, without creating a subdirectory by default, would solve this issue.

csi-s3 daemonset pod restart causes mounted pvc not accessible

Problem

If the csi-s3 daemonset pod restarts for some reason, any pod that mounts an s3-based PVC will no longer be able to access the PVC and reports "Transport endpoint is not connected".

Reproduce

  • Deploy k8s-csi-s3 following README
cd deploy/kubernetes
kubectl create -f examples/secret.yaml
kubectl create -f provisioner.yaml
kubectl create -f attacher.yaml
kubectl create -f csi-s3.yaml
kubectl create -f examples/storageclass.yaml
  • Create pvc and pod
kubectl create -f examples/pvc.yaml
kubectl create -f examples/pod.yaml
  • Find which node the test pod is on and delete corresponding daemonset pod
# the test pod
kubectl get pod -o wide
# the daemonset pods
kubectl get pod -n kube-system -o wide
# delete corresponding daemonset pod to restart it
kubectl delete pod -n kube-system csi-s3-ccx2k
  • Try to access pvc from pod
$ kubectl exec -ti csi-s3-test-nginx bash
$ ls /usr/share/nginx/html/s3
ls: cannot access '/usr/share/nginx/html/s3': Transport endpoint is not connected

I found nothing special in the csi pod logs.

Related

This issue describes the same problem and its maintainer suggests that LIST_VOLUMES and LIST_VOLUMES_PUBLISHED_NODES should be implemented. Would you please have a look?

Examples for s3fs and rclone

The examples dir contains examples only for the default mounter, i.e. geesefs. There should also be examples for rclone and s3fs, since they are supported.

Is it correct to use emptyDir for socket-path in provisioner.yaml?

Hi.
I just noticed that in provisioner.yaml you use emptyDir inside the pod for the path /var/lib/kubelet/plugins/ru.yandex.s3.csi/, where (I guess) csi.sock should be located.

And in attacher.yaml you are using hostPath.

Actually, I installed this csi-s3 plugin a couple of weeks ago, created pods with PV(C)s using S3, and everything was fine. Today something happened and my k8s could not detach and reattach the PV for a recreated pod (created from a StatefulSet, if it matters).
The logs of csi-attacher-s3-0 are filled with a lot of

W0705 15:40:59.256683 1 connection.go:173] Still connecting to unix:///var/lib/kubelet/plugins/ru.yandex.s3.csi/csi.sock

I've checked /var/lib/kubelet/plugins/ru.yandex.s3.csi/ on the master node, where csi-provisioner-s3-0 and csi-attacher-s3-0 are running, and there is no csi.sock.

Upd: I also checked the path /var/lib/kubelet/plugins/ru.yandex.s3.csi/ in the csi-provisioner-s3-0 pod and csi.sock exists there. The same path inside the csi-attacher-s3-0 pod is empty.

I applied the pod.yaml file but the pod is stuck in the ContainerCreating state

The pod is stuck in the ContainerCreating state and I am getting this error:

Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    42s                default-scheduler  Successfully assigned default/csi-s3-test-nginx to 10.0.10.21
  Warning  FailedMount  10s (x7 over 42s)  kubelet            MountVolume.MountDevice failed for volume "pvc-f92e57af-8a49-4b4b-8910-c301013e0767" : rpc error: code = Unknown desc = Error starting systemd unit geesefs-pvc_2df92e57af_2d8a49_2d4b4b_2d8910_2dc301013e0767.service on host: Cannot set property ExecStopPost, or unknown property.

PVC error 'Cannot set property ExecStopPost, or unknown property'

When following the tutorial to create a test pod which mounts my volume, the pod got stuck in the ContainerCreating status.

Running kubectl logs -l app=csi-s3 -c csi-s3 gives the logs below:

I0823 07:18:58.868856       1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0823 07:18:58.871259       1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0823 07:18:58.871626       1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0823 07:18:58.872031       1 utils.go:97] GRPC call: /csi.v1.Node/NodeStageVolume
I0823 07:18:58.873757       1 geesefs.go:150] Starting geesefs using systemd: /var/lib/kubelet/plugins/ru.yandex.s3.csi/geesefs -f -o allow_other --endpoint http://192.168.80.80:9000 --setuid 65534 --setgid 65534 --memory-limit 1000 --dir-mode 0777 --file-mode 0666 persistent-storage:pvc-62a54ef5-3ef6-42e8-9786-009600435302 /var/lib/kubelet/plugins/kubernetes.io/csi/ru.yandex.s3.csi/3c9ab471f9539ae64ab85831ce0cd5457c795efa42a6458cc5815a28a73b34d1/globalmount
E0823 07:18:58.875577       1 utils.go:101] GRPC error: Error starting systemd unit geesefs-persistent_2dstorage_2fpvc_2d62a54ef5_2d3ef6_2d42e8_2d9786_2d009600435302.service on host: Cannot set property ExecStopPost, or unknown property.

The StorageClass is bound and I can see that a new bucket has been created in the MinIO UI.

systemctl --version
systemd 219
+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN

Please help.

Support Kubernetes 1.22: storage.k8s.io/v1beta1 is no longer available

The storage.k8s.io/v1beta1 API version of CSIDriver, CSINode, StorageClass, and VolumeAttachment is no longer served as of v1.22.
https://kubernetes.io/docs/reference/using-api/deprecation-guide/#storage-resources-v122

E0217 10:12:58.218960 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource
I0217 10:12:59.219153 1 reflector.go:188] Listing and watching *v1beta1.VolumeAttachment from k8s.io/client-go/informers/factory.go:135

how to customize helm chart if kubelet path is not in /var/lib/kubelet?

How can I customize the helm chart if the kubelet path is not /var/lib/kubelet?

My path is /data/kubelet, not /var/lib/kubelet. How do I customize the helm chart for this?

kubectl logs csi-attacher-s3-0  -n kube-system 
I0905 03:23:17.697030       1 main.go:91] Version: v3.0.1-0-g4074360a
I0905 03:23:17.699004       1 connection.go:153] Connecting to unix:///data/kubelet/plugins/ru.yandex.s3.csi/csi.sock
W0905 03:23:27.699192       1 connection.go:172] Still connecting to unix:///data/kubelet/plugins/ru.yandex.s3.csi/csi.sock
W0905 03:23:37.699265       1 connection.go:172] Still connecting to unix:///data/kubelet/plugins/ru.yandex.s3.csi/csi.sock
W0905 03:23:47.699209       1 connection.go:172] Still connecting to unix:///data/kubelet/plugins/ru.yandex.s3.csi/csi.sock
W0905 03:23:57.699184       1 connection.go:172] Still connecting to unix:///data/kubelet/plugins/ru.yandex.s3.csi/csi.sock
W0905 03:24:07.699185       1 connection.go:172] Still connecting to unix:///data/kubelet/plugins/ru.yandex.s3.csi/csi.sock
W0905 03:24:17.699226       1 connection.go:172] Still connecting to unix:///data/kubelet/plugins/ru.yandex.s3.csi/csi.sock

"Input/output error" when attempting to list directory contents of directory mounted in a single bucket with geesefs.

I am testing the helm chart on my Kubernetes cluster with MinIO right now, and I ran into this strange issue when using the geesefs mounter.
If I don't assign a value to singleBucket, everything works fine.
However, if I assign a value to singleBucket I get the following message when attempting to run ls.

ls: reading directory '/mnt/test/': Input/output error

If I know the exact path of a file, I am able to read it. I am also able to write new files. I just cannot list them.
This issue does not occur with the s3fs mounter.

s3fs and rclone support

In the official documentation it is mentioned that s3fs and rclone are supported in addition to geesefs. However, this seems not to be the case. I'm not an expert in CSI and driver implementation, but it seems that the CSI functionality is provided by the image defined in cmd/s3driver/Dockerfile.
This image only contains the installation for geesefs.

Solution: either add support for rclone and s3fs to the CSI image, or remove the mentions of them from the docs.

Support S3 credentials from IAM Role

I can't really use IAM user credentials in my environment, only IAM roles.

Would you take a patch to support interacting with S3 via an AWS IAM role?

encounter problem

From the kubelet:
MountVolume.SetUp failed for volume "pvc-c3df7c98-874e-406b-9957-c7548a93d6ba" : rpc error: code = Internal desc = stat /var/lib/kubelet/pods/7ea6c6d4-699f-4d11-99cd-7d372ff203b0/volumes/kubernetes.io~csi/pvc-c3df7c98-874e-406b-9957-c7548a93d6ba/mount: connection refused
This happens when using csi-s3-test-nginx.

Add configurable tolerations to `csi-s3`

csi-s3 does not start on tainted nodes, and can't be configured to do so from Helm values.

The ebs-csi helm chart has:

      tolerations:
        {{- if .Values.node.tolerateAllTaints }}
        - operator: Exists
        {{- else }}
        - key: CriticalAddonsOnly
          operator: Exists
        - operator: Exists
          effect: NoExecute
          tolerationSeconds: 300
        {{- end }}
        {{- with .Values.node.tolerations }}
        {{- toYaml . | nindent 8 }}
        {{- end }}

although after deployment it actually has:

  tolerations:
    - key: CriticalAddonsOnly
      operator: Exists
    - operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
    - key: node.kubernetes.io/disk-pressure
      operator: Exists
      effect: NoSchedule
    - key: node.kubernetes.io/memory-pressure
      operator: Exists
      effect: NoSchedule
    - key: node.kubernetes.io/pid-pressure
      operator: Exists
      effect: NoSchedule
    - key: node.kubernetes.io/unschedulable
      operator: Exists
      effect: NoSchedule

I'm not sure where the others come from.

Outdated instructions in README

README section "Test the S3 driver" contains instructions as follows:

$ kubectl exec -ti csi-s3-test-nginx bash
$ mount | grep fuse
s3fs on /var/lib/www/html type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
$ touch /var/lib/www/html/hello_world

The path to the mounted volume has changed from /var/lib/www/html to /usr/share/nginx/html/s3.

GRPC error: failed to check if bucket <bucketName>/volume exists

I've deployed the attacher, csi-s3 and the provisioner as per the instructions, then deployed a StorageClass with bucket: my-bucket-name.
But when deploying the PVC, I get the following error:

Warning ProvisioningFailed 0s (x2 over 1s) ru.yandex.s3.csi_csi-provisioner-s3-0_05d4e125-e140-4b37-b8da-2e1ba7d8c75b failed to provision volume with StorageClass "csi-s3": rpc error: code = Unknown desc = failed to check if bucket my-bucket-name/pvc-93dbe1c0-223c-414a-9b65-03f8e74bc084 exists: 400 Bad Request

Has anyone faced a similar issue?

Provide mounting for a specific folder in bucket

s3fs supports mounting a specific folder inside a bucket; for example, you can mount bucket_name:/folder.
Does geesefs support this functionality? If not, could it be implemented?
When I define an existing bucket in the storageClass manifest in the way mentioned above, I see the error Bucket name contains invalid characters in the PVC describe output.

Caching support

Hi,

I'm trying to enable caching in the CSI driver. I've passed extra mountOptions as follows:

mountOptions: "--memory-limit 4000 --dir-mode 0777 --file-mode 0666 --cache /tmp --debug --debug_fuse --stat-cache-ttl 9m0s --cache-to-disk-hits 1"

and they are being passed in correctly according to the logs:

I0622 15:05:55.915639 1 mounter.go:65] Mounting fuse with command: geesefs and args: [--endpoint https://s3.ap-southeast-2.amazonaws.com -o allow_other --log-file /dev/stderr --memory-limit 4000 --dir-mode 0777 --file-mode 0666 --cache /tmp --debug --debug_fuse --stat-cache-ttl 9m0s --cache-to-disk-hits 1 biorefdata:galaxy/v1/data.galaxyproject.org /var/lib/kubelet/pods/9d508976-732c-4a3f-8bf6-89bd097e831b/volumes/kubernetes.io~csi/pvc-6a8c3758-8784-4fcc-9311-4305b3cce8e4/mount]

However, the /tmp directory remains empty. Am I doing something wrong?

Also, with multiple pods mounting the same PVC, would the cache work correctly? I can see that there are multiple geesefs processes running, all pointing to the same cache path.

Finally, we want to use this with long-lived, entirely read-only data (these are reference genomes and associated read-only data). This is why I set cache-to-disk-hits to 1, assuming that causes a file to be cached on the very first read. Could you please recommend the best settings for very aggressive caching? I've noticed a lot of S3 calls being made for the same path even though that path has already been checked recently.

Error fuseMount command: geesefs

Hello!
I used the default examples and got this log in managed Kubernetes in Yandex Cloud:

MountVolume.SetUp failed for volume "pvc-9aa1554b-77f7-4cba-a9ee-e9cce6c9be3b" : rpc error: code = Unknown desc = Error fuseMount command: geesefs
args: [--endpoint https://storage.yandexcloud.net --region -o allow_other --memory-limit 1000 --dir-mode 0777 --file-mode 0666 k8s-csi-s3:pvc-9aa1554b-77f7-4cba-a9ee-e9cce6c9be3b /var/lib/kubelet/pods/e4594844-fbd2-4c8c-874c-46b23bc3f6cb/volumes/kubernetes.io~csi/pvc-9aa1554b-77f7-4cba-a9ee-e9cce6c9be3b/mount]
output: 2021/09/06 11:06:59.226379 main.FATAL Unable to mount file system, see syslog for details

Support prefix

We were hoping to use an existing bucket with geesefs, with a large amount of data living under an existing prefix. It looks like the csi now only supports dynamically provisioned prefixes. Would it be possible to add back usePrefix and prefix support? I'd be happy to try and make a PR for it.

pod keeps pending

k8s version: 1.22.0
runtime: containerd
os version: centos 7.4
k8s-csi-s3 version: 0.30.9

Hi. The problem I encountered is that when I create a pod, the PVC is successfully created in MinIO, but the pod is always in the Pending state. The csi-s3 logs are as follows:

I0615 16:32:03.752720 1 main.go:110] Version: v1.2.0-0-g6ef000ae
I0615 16:32:03.752795 1 main.go:120] Attempting to open a gRPC connection with: "/csi/csi.sock"
I0615 16:32:03.752828 1 connection.go:151] Connecting to unix:///csi/csi.sock
I0615 16:32:05.797278 1 main.go:127] Calling CSI driver to discover driver name
I0615 16:32:05.798941 1 main.go:137] CSI driver name: "ru.yandex.s3.csi"
I0615 16:32:05.799050 1 node_register.go:58] Starting Registration Server at: /registration/ru.yandex.s3.csi-reg.sock
I0615 16:32:05.799233 1 node_register.go:67] Registration Server started at: /registration/ru.yandex.s3.csi-reg.sock
I0615 16:32:07.265819 1 main.go:77] Received GetInfo call: &InfoRequest{}
I0615 16:32:07.344763 1 main.go:87] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}
