Comments (18)

kevinjantw commented on June 17, 2024

The /sbin/iscsid was already installed; I resolved the issue by killing the host's iscsid processes.
Now the nginx pod runs normally and reaches Running status.
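
For anyone hitting the same error, a minimal sketch of that workaround (the service names are assumptions for a systemd-based Ubuntu host):

pgrep -af iscsid                         # show any iscsid processes on the host
sudo systemctl stop iscsid open-iscsi    # or simply: sudo pkill iscsid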

wisererik commented on June 17, 2024

Hi Kevin, thanks for reporting the issue. From the message you provided, one possible problem is that the CSI plugin was not configured correctly. I assume you have changed the host_ip (such as 192.168.22.97) in the installer configuration, so please update the configmap like the following:

kind: ConfigMap
apiVersion: v1
metadata:
  name: csi-configmap-opensdsplugin
data:
  opensdsendpoint: http://192.168.22.97:50040 
  opensdsauthstrategy: keystone
  opensdsstoragetype: block
  osauthurl: http://192.168.22.97/identity
  osusername: admin
  ospassword: opensds@123
  ostenantname: admin
  osprojectname: admin
  osuserdomainid: default
  passwordencrypter: aes
  enableEncrypted: F

Then re-install the CSI plugin using kubectl delete/create -f *****, as sketched below.
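
For concreteness, a sketch of that re-install cycle (the manifest directory is an assumption, based on the nbp repo layout that appears later in this thread):

# Tear down and re-create the CSI plugin objects after editing the configmap
kubectl delete -f nbp/csi/server/deploy/kubernetes/
kubectl create -f nbp/csi/server/deploy/kubernetes/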

kevinjantw commented on June 17, 2024

Thanks Erik,
but the HOST_IP matches the IP configuration in the configmap file.
export HOST_IP=127.0.0.1 was configured before running the opensds-ansible playbook deployment.
After the opensds deployment, 127.0.0.1 was also applied as {your_real_host_ip} in the opensds CLI tool configuration, and all osdsctl command tests pass.

wisererik commented on June 17, 2024

OK, I think I see the issue now. As a container, the CSI plugin cannot reach the opensds hotpot service if we use 127.0.0.1, so we should change the HOST_IP.
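
One quick way to confirm this kind of reachability problem (a sketch; the busybox image and the endpoint value are assumptions, the IP comes from the example configmap above) is to call the opensds endpoint from a throwaway pod on the cluster network:

# With HOST_IP=127.0.0.1 the endpoint points at the pod itself and the request
# fails; with a real host IP it should get a response from the hotpot API.
kubectl run nettest --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://192.168.22.97:50040/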

kevinjantw commented on June 17, 2024

The HOST_IP has been changed from 127.0.0.1 to 10.0.2.15,
but the issue still exists. The installation and test details are as follows.

Kubernetes installation and test:

 ENABLE_DAEMON=true ALLOW_PRIVILEGED=true \
 FEATURE_GATES=VolumeSnapshotDataSource=true \
 RUNTIME_CONFIG="storage.k8s.io/v1alpha1=true" LOG_LEVEL=5 hack/local-up-cluster.sh -O

 kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
127.0.0.1   Ready    <none>   51m   v1.14.0

OpenSDS installation and osdsctl test:

group_vars/common.yml
host_ip: 10.0.2.15

group_vars/sushi.yml:
nbp_plugin_type: csi

export HOST_IP=10.0.2.15

osdsctl profile create '{"name": "default", "description": "default policy", "storageType": "block"}'
osdsctl volume create 1 --name=test-001
osdsctl volume list
Id                                    Name      Description  GroupId  Size  Status     ProfileId
13311bbe-a46c-4403-acfd-0589f449cd21  test-001                        1     available  d10ec339-3357-43ff-8626-4ccdb854af3d

OpenSDS CSI Plugin test:

cat csi-configmap-opensdsplugin.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: csi-configmap-opensdsplugin
data:
  opensdsendpoint: http://10.0.2.15:50040
  opensdsauthstrategy: keystone
  opensdsstoragetype: block
  osauthurl: http://10.0.2.15/identity
  osusername: admin
  ospassword: opensds@123
  ostenantname: admin
  osprojectname: admin
  osuserdomainid: default
  passwordencrypter: aes
  enableEncrypted: F

kubectl create -f nginx.yaml
kubectl get pods

NAME READY STATUS RESTARTS AGE
csi-attacher-opensdsplugin-0 3/3 Running 0 34m
csi-nodeplugin-opensdsplugin-gd9j2 2/2 Running 0 34m
csi-provisioner-opensdsplugin-0 2/2 Running 0 34m
csi-snapshotter-opensdsplugin-0 2/2 Running 0 34m
nginx 0/1 Pending 0 27m

kubectl get pvc

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-pvc-opensdsplugin Pending csi-sc-opensdsplugin 30m

kubectl describe pvc csi-pvc-opensdsplugin

Name: csi-pvc-opensdsplugin
Namespace: default
StorageClass: csi-sc-opensdsplugin
Status: Pending
Volume:
Labels:
Annotations: volume.beta.kubernetes.io/storage-provisioner: csi-opensdsplugin
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:

Type Reason Age From Message
Normal ExternalProvisioning 4m46s (x121 over 34m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "csi-opensdsplugin" or manually created by system administrator
Normal Provisioning 73s (x15 over 34m) csi-opensdsplugin_csi-provisioner-opensdsplugin-0_93c03670-ade6-11e9-a4aa-0242ac110003 External provisioner is provisioning volume for claim "default/csi-pvc-opensdsplugin"
Warning ProvisioningFailed 73s (x15 over 34m) csi-opensdsplugin_csi-provisioner-opensdsplugin-0_93c03670-ade6-11e9-a4aa-0242ac110003 failed to provision volume with StorageClass "csi-sc-opensdsplugin": rpc error: code = InvalidArgument desc = get profile abc failed

wisererik commented on June 17, 2024

Thanks Kevin, that's very detailed. The profile needs to be updated in nginx.yaml; I'm sorry the wiki didn't mention it.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sc-opensdsplugin
provisioner: csi-opensdsplugin
parameters:
  attachMode: rw
  profile: d10ec339-3357-43ff-8626-4ccdb854af3d
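
To look up the profile ID in your own environment, listing profiles should work (an assumption that osdsctl mirrors the volume list subcommand shown earlier in this thread):

osdsctl profile list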

wisererik commented on June 17, 2024

BTW, I updated the CSI WIKI:
https://github.com/opensds/opensds/wiki/OpenSDS-Integration-with-Kubernetes-CSI

Some of us are working on container storage; you can join opensds.slack.com to get quick feedback. :)

kevinjantw commented on June 17, 2024

Thanks for the invitation; I have joined your Slack channels.
The updated profile released the pending persistent volume claim, but the nginx pod is still stuck in ContainerCreating status.

kubectl get nodes

NAME        STATUS   ROLES    AGE   VERSION
127.0.0.1   Ready    <none>   29h   v1.14.0

kubectl create -f nginx.yaml
kubectl get pvc

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-pvc-opensdsplugin Bound pvc-5c00e92f-addb-11e9-aae2-0800275197c7 1Gi RWX csi-sc-opensdsplugin 21s

kubectl get pod

NAME READY STATUS RESTARTS AGE
csi-attacher-opensdsplugin-0 3/3 Running 1 8m51s
csi-nodeplugin-opensdsplugin-n7nlp 2/2 Running 0 8m50s
csi-provisioner-opensdsplugin-0 2/2 Running 0 8m50s
csi-snapshotter-opensdsplugin-0 2/2 Running 0 8m50s
nginx 0/1 ContainerCreating 0 31s

kubectl describe pod nginx

    Name: nginx
    Namespace: default
    Priority: 0
    PriorityClassName: <none>
    Node: 127.0.0.1/127.0.0.1
    Start Time: Wed, 24 Jul 2019 14:22:11 +0800
    Labels: <none>
    Annotations: <none>
    Status: Pending
    IP:                 
    Containers:
      nginx:
        Container ID:   
        Image: nginx
        Image ID:       
        Port: 80/TCP
        Host Port: 0/TCP
        State: Waiting
          Reason: ContainerCreating
        Ready: False
        Restart Count: 0
        Environment: <none>
        Mounts:
          /var/lib/www/html from csi-data-opensdsplugin (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-z7vs2 (ro)
    Conditions:
      Type Status
      Initialized True 
      Ready False 
      ContainersReady False 
      PodScheduled True 
    Volumes:
      csi-data-opensdsplugin:
        Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName: csi-pvc-opensdsplugin
        ReadOnly: false
      default-token-z7vs2:
        Type: Secret (a volume populated by a Secret)
        SecretName: default-token-z7vs2
        Optional: false
    QoS Class: BestEffort
    Node-Selectors: <none>
    Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
Type Reason Age From Message
Warning FailedScheduling 46s (x3 over 48s) default-scheduler pod has unbound immediate PersistentVolumeClaims
Normal Scheduled 43s default-scheduler Successfully assigned default/nginx to 127.0.0.1
Normal SuccessfulAttachVolume 43s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-5c00e92f-addb-11e9-aae2-0800275197c7"
Warning FailedMount 13s (x6 over 34s) kubelet, 127.0.0.1 MountVolume.MountDevice failed for volume "pvc-5c00e92f-addb-11e9-aae2-0800275197c7" : rpc error: code = FailedPrecondition desc = failed to find device: Please stop the iscsi process outside the container first: exit status 1

wisererik commented on June 17, 2024

Can you please check whether /sbin/iscsid exists? If it doesn't, you can install open-iscsi manually.
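
A sketch of that check and install (Debian/Ubuntu package name; adjust for your distribution):

ls -l /sbin/iscsid                  # verify the initiator daemon binary exists
sudo apt-get install -y open-iscsi  # install it if missing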

wisererik commented on June 17, 2024

My system works fine, but I will check the mount function to find the reason. Thanks, Kevin.

himanshuvar commented on June 17, 2024

@wisererik @kevinjantw If the example test is working fine, can we close this issue?

jmjoo commented on June 17, 2024

The /sbin/iscsid was already installed; I resolved the issue by killing the host's iscsid processes.
Now the nginx pod runs normally and reaches Running status.

Hello~ I've hit a similar issue on my system. Can you tell me how you resolved it? Did you kill the iscsid process?


To explain my issue: I installed it in a Kubernetes cluster, so I edited the configmap as below.

root@opensds:~/nbp/csi/server/deploy/kubernetes# cat csi-configmap-opensdsplugin.yaml 

kind: ConfigMap
apiVersion: v1
metadata:
  name: csi-configmap-opensdsplugin
data:
  opensdsendpoint: http://apiserver.opensds.svc.cluster.local:50040
  opensdsauthstrategy: keystone
  opensdsstoragetype: block
  osauthurl: http://authchecker.opensds.svc.cluster.local/identity
  osusername: admin
  ospassword: opensds@123
  ostenantname: admin
  osprojectname: admin
  osuserdomainid: default
  passwordencrypter: aes
  enableEncrypted: F

root@opensds:~/nbp/csi/server/deploy/kubernetes# kubectl get pod

NAME READY STATUS RESTARTS AGE
csi-attacher-opensdsplugin-0 3/3 Running 4 76m
csi-nodeplugin-opensdsplugin-jpr6w 0/2 Error 13 56m
csi-provisioner-opensdsplugin-0 2/2 Running 2 76m
csi-snapshotter-opensdsplugin-0 2/2 Running 2 76m
nginx 0/1 Pending 0 3h22m
root@opensds:~# kubectl describe pod/csi-nodeplugin-opensdsplugin-jpr6w 
Name:           csi-nodeplugin-opensdsplugin-jpr6w
Namespace:      default
Priority:       0
Node:           opensds/192.168.0.86
Start Time:     Fri, 06 Sep 2019 13:25:46 +0900
Labels:         app=csi-nodeplugin-opensdsplugin
                controller-revision-hash=6c8f796f7
                pod-template-generation=1
Annotations:    <none>
Status:         Running
IP:             192.168.0.86
Controlled By:  DaemonSet/csi-nodeplugin-opensdsplugin
Containers:
  node-driver-registrar:
    Container ID:  docker://5d3ccd2360fd8f902c5a5563b7a25fa974fd5adaa095a08b84617c3894327226
    Image:         quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
    Image ID:      docker-pullable://quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=/csi/csi.sock
      --kubelet-registration-path=$(ADDRESS)
    State:          Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Fri, 06 Sep 2019 13:25:47 +0900
      Finished:     Fri, 06 Sep 2019 14:07:19 +0900
    Ready:          False
    Restart Count:  0
    Environment:
      ADDRESS:         /var/lib/kubelet/plugins/csi-opensdsplugin/csi.sock
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from socket-dir (rw)
      /registration from registration-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from csi-nodeplugin-token-kbjlh (ro)
  opensds:
    Container ID:  docker://bc319042c34dbeab6a35aa13c62301ec612a6e223f642c865d77de1688edcfd2
    Image:         opensdsio/csiplugin:latest
    Image ID:      docker-pullable://opensdsio/csiplugin@sha256:82d230c88bdba074d4cb8a8dbe6ca1b693fad8a76a10ac63ab6f169c16a774d4
    Port:          <none>
    Host Port:     <none>
    Args:
      --csiEndpoint=$(CSI_ENDPOINT)
      --opensdsEndpoint=$(OPENSDS_ENDPOINT)
      --opensdsAuthStrategy=$(OPENSDS_AUTH_STRATEGY)
      --storageType=$(OPENSDS_STORAGE_TYPE)
      --v=8
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 06 Sep 2019 14:07:05 +0900
      Finished:     Fri, 06 Sep 2019 14:07:05 +0900
    Ready:          False
    Restart Count:  13
    Environment:
      CSI_ENDPOINT:           unix://var/lib/kubelet/plugins/csi-opensdsplugin/csi.sock
      OPENSDS_ENDPOINT:       <set to the key 'opensdsendpoint' of config map 'csi-configmap-opensdsplugin'>      Optional: false
      OPENSDS_STORAGE_TYPE:   <set to the key 'opensdsstoragetype' of config map 'csi-configmap-opensdsplugin'>   Optional: false
      OPENSDS_AUTH_STRATEGY:  <set to the key 'opensdsauthstrategy' of config map 'csi-configmap-opensdsplugin'>  Optional: false
      OS_AUTH_URL:            <set to the key 'osauthurl' of config map 'csi-configmap-opensdsplugin'>            Optional: false
      OS_USERNAME:            <set to the key 'osusername' of config map 'csi-configmap-opensdsplugin'>           Optional: false
      OS_PASSWORD:            <set to the key 'ospassword' of config map 'csi-configmap-opensdsplugin'>           Optional: false
      PASSWORD_ENCRYPTER:     <set to the key 'passwordencrypter' of config map 'csi-configmap-opensdsplugin'>    Optional: false
      ENABLE_ENCRYPTED:       <set to the key 'enableEncrypted' of config map 'csi-configmap-opensdsplugin'>      Optional: false
      OS_TENANT_NAME:         <set to the key 'ostenantname' of config map 'csi-configmap-opensdsplugin'>         Optional: false
      PASSWORD_ENCRYPTER:     <set to the key 'passwordencrypter' of config map 'csi-configmap-opensdsplugin'>    Optional: false
      ENABLE_ENCRYPTED:       <set to the key 'enableEncrypted' of config map 'csi-configmap-opensdsplugin'>      Optional: false
      OS_PROJECT_NAME:        <set to the key 'osprojectname' of config map 'csi-configmap-opensdsplugin'>        Optional: false
      OS_USER_DOMAIN_ID:      <set to the key 'osuserdomainid' of config map 'csi-configmap-opensdsplugin'>       Optional: false
    Mounts:
      /dev from pods-probe-dir (rw)
      /etc from hosts (rw)
      /etc/ceph/ from ceph-dir (rw)
      /etc/iscsi/ from iscsi-dir (rw)
      /opt/opensds-security from certificate-path (rw)
      /var/lib/kubelet/plugins/csi-opensdsplugin from socket-dir (rw)
      /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices from volume-devices-dir (rw)
      /var/lib/kubelet/pods from pods-mount-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from csi-nodeplugin-token-kbjlh (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/csi-opensdsplugin
    HostPathType:  DirectoryOrCreate
  volume-devices-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices
    HostPathType:  DirectoryOrCreate
  pods-mount-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:  Directory
  pods-probe-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /dev
    HostPathType:  Directory
  iscsi-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/iscsi/
    HostPathType:  Directory
  ceph-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ceph/
    HostPathType:  DirectoryOrCreate
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry
    HostPathType:  DirectoryOrCreate
  certificate-path:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/opensds-security
    HostPathType:  DirectoryOrCreate
  hosts:
    Type:          HostPath (bare host directory volume)
    Path:          /etc
    HostPathType:  Directory
  csi-nodeplugin-token-kbjlh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  csi-nodeplugin-token-kbjlh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason       Age                   From               Message
  ----     ------       ----                  ----               -------
  Normal   Scheduled    52m                   default-scheduler  Successfully assigned default/csi-nodeplugin-opensdsplugin-jpr6w to opensds
  Normal   Pulled       52m                   kubelet, opensds   Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.1.0" already present on machine
  Normal   Created      52m                   kubelet, opensds   Created container node-driver-registrar
  Normal   Started      52m                   kubelet, opensds   Started container node-driver-registrar
  Normal   Started      52m (x3 over 52m)     kubelet, opensds   Started container opensds
  Warning  FailedMount  51m (x7 over 52m)     kubelet, opensds   MountVolume.SetUp failed for volume "csi-nodeplugin-token-kbjlh" : secret "csi-nodeplugin-token-kbjlh" not found
  Normal   Pulled       51m (x4 over 52m)     kubelet, opensds   Container image "opensdsio/csiplugin:latest" already present on machine
  Normal   Created      51m (x4 over 52m)     kubelet, opensds   Created container opensds
  Warning  BackOff      12m (x188 over 52m)   kubelet, opensds   Back-off restarting failed container
  Warning  FailedMount  9m28s                 kubelet, opensds   MountVolume.SetUp failed for volume "csi-nodeplugin-token-kbjlh" : couldn't propagate object cache: timed out waiting for the condition
  Warning  FailedMount  74s (x11 over 9m28s)  kubelet, opensds   MountVolume.SetUp failed for volume "csi-nodeplugin-token-kbjlh" : secret "csi-nodeplugin-token-kbjlh" not found
  Warning  FailedMount  50s (x4 over 7m1s)    kubelet, opensds   Unable to mount volumes for pod "csi-nodeplugin-opensdsplugin-jpr6w_default(482ad595-951f-455a-b928-c271e6b20424)": timeout expired waiting for volumes to attach or mount for pod "default"/"csi-nodeplugin-opensdsplugin-jpr6w". list of unmounted volumes=[csi-nodeplugin-token-kbjlh]. list of unattached volumes=[socket-dir volume-devices-dir pods-mount-dir pods-probe-dir iscsi-dir ceph-dir registration-dir certificate-path hosts csi-nodeplugin-token-kbjlh]
root@opensds:~/nbp/csi/server/deploy/kubernetes# kubectl get pvc
NAME                    STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS           AGE
csi-pvc-opensdsplugin   Pending                                      csi-sc-opensdsplugin   3h24m

root@opensds:~/nbp/csi/server/deploy/kubernetes# kubectl get sc
NAME                   PROVISIONER         AGE
csi-sc-opensdsplugin   csi-opensdsplugin   3h24m

wisererik commented on June 17, 2024

@himanshuvar please have a look at it again, thanks~

jmjoo commented on June 17, 2024

I changed the configMap from the domain name to the cluster IP of the Kubernetes service,
and it starts to work, but I failed to apply "nginx.yaml".
Please check the syslog below; let me know if you need more information.

kind: ConfigMap
apiVersion: v1
metadata:
  name: csi-configmap-opensdsplugin
data:
  opensdsendpoint: http://apiserver.opensds.svc.cluster.local:50040
  opensdsauthstrategy: keystone
  opensdsstoragetype: block
  osauthurl: http://authchecker.opensds.svc.cluster.local/identity
  osusername: admin
  ospassword: opensds@123
  ostenantname: admin
  osprojectname: admin
  osuserdomainid: default
  passwordencrypter: aes
  enableEncrypted: F
root@opensds:~/nbp/csi/server/deploy/kubernetes# kubectl get all
NAME                                     READY   STATUS    RESTARTS   AGE
pod/csi-attacher-opensdsplugin-0         3/3     Running   0          6m22s
pod/csi-nodeplugin-opensdsplugin-9sztb   2/2     Running   0          6m21s
pod/csi-provisioner-opensdsplugin-0      2/2     Running   0          6m20s
pod/csi-snapshotter-opensdsplugin-0      2/2     Running   0          6m20s
root@opensds:~# cat nginx.yaml 
# This YAML file contains nginx & csi opensds driver objects,
# which are necessary to run nginx with csi opensds driver.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sc-opensdsplugin
provisioner: csi-opensdsplugin
parameters:
  attachMode: rw
  profile: 477132ef-d9b6-4a2f-a0a0-a0468be08d90
allowedTopologies:
- matchLabelExpressions:
  - key: topology.csi-opensdsplugin/zone
    values:
    - default

root@opensds:~# tailf /var/log/syslog

Sep  6 20:48:39 localhost kubelet[1109]: E0906 20:48:39.722420    1109 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/csi/csi-opensdsplugin^47d2555a-ba5c-4f47-a22c-d8f0d2be0189\"" failed. No retries permitted until 2019-09-06 20:49:43.722403621 +0900 KST m=+24114.113890425 (durationBeforeRetry 1m4s). Error: "MountVolume.MountDevice failed for volume \"pvc-2398729a-6daf-4835-a215-f69d18e5c37f\" (UniqueName: \"kubernetes.io/csi/csi-opensdsplugin^47d2555a-ba5c-4f47-a22c-d8f0d2be0189\") pod \"nginx\" (UID: \"41d344d7-ab0b-4359-b983-cb1c690575b1\") : rpc error: code = FailedPrecondition desc = failed to find device: exit status 21"
Sep  6 20:48:52 localhost kubelet[1109]: I0906 20:48:52.848147    1109 reconciler.go:177] operationExecutor.UnmountVolume started for volume "default-token-t4wqc" (UniqueName: "kubernetes.io/secret/41d344d7-ab0b-4359-b983-cb1c690575b1-default-token-t4wqc") pod "41d344d7-ab0b-4359-b983-cb1c690575b1" (UID: "41d344d7-ab0b-4359-b983-cb1c690575b1")
Sep  6 20:48:52 localhost kubelet[1109]: I0906 20:48:52.862561    1109 operation_generator.go:860] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41d344d7-ab0b-4359-b983-cb1c690575b1-default-token-t4wqc" (OuterVolumeSpecName: "default-token-t4wqc") pod "41d344d7-ab0b-4359-b983-cb1c690575b1" (UID: "41d344d7-ab0b-4359-b983-cb1c690575b1"). InnerVolumeSpecName "default-token-t4wqc". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep  6 20:48:52 localhost kubelet[1109]: I0906 20:48:52.948748    1109 reconciler.go:297] Volume detached for volume "default-token-t4wqc" (UniqueName: "kubernetes.io/secret/41d344d7-ab0b-4359-b983-cb1c690575b1-default-token-t4wqc") on node "opensds" DevicePath ""
Sep  6 20:49:02 localhost kubelet[1109]: I0906 20:49:02.873967    1109 reconciler.go:297] Volume detached for volume "pvc-2398729a-6daf-4835-a215-f69d18e5c37f" (UniqueName: "kubernetes.io/csi/csi-opensdsplugin^47d2555a-ba5c-4f47-a22c-d8f0d2be0189") on node "opensds" DevicePath ""
root@opensds:~# kubectl logs -f pod/csi-attacher-opensdsplugin-0 -c csi-attacher

I0908 08:54:08.879524       1 reflector.go:161] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
E0908 08:54:08.881967       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:serviceaccount:default:csi-attacher" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
(repeatedly)
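
Side note: the csi-attacher error above is an RBAC gap; the service account default:csi-attacher lacks list rights on csinodes in the storage.k8s.io API group. A minimal sketch of a grant that would unblock it (the ClusterRole/ClusterRoleBinding names here are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csi-attacher-csinodes
rules:
- apiGroups: ["storage.k8s.io"]
  resources: ["csinodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: csi-attacher-csinodes
subjects:
- kind: ServiceAccount
  name: csi-attacher
  namespace: default
roleRef:
  kind: ClusterRole
  name: csi-attacher-csinodes
  apiGroup: rbac.authorization.k8s.io
EOF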

root@opensds:/var/lib/kubelet/pods# kubectl get CustomResourceDefinition
NAME                                             CREATED AT
bgpconfigurations.crd.projectcalico.org          2019-08-13T04:32:24Z
bgppeers.crd.projectcalico.org                   2019-08-13T04:32:24Z
blockaffinities.crd.projectcalico.org            2019-08-13T04:32:24Z
clusterinformations.crd.projectcalico.org        2019-08-13T04:32:24Z
csidrivers.csi.storage.k8s.io                    2019-09-06T01:45:51Z
felixconfigurations.crd.projectcalico.org        2019-08-13T04:32:24Z
globalnetworkpolicies.crd.projectcalico.org      2019-08-13T04:32:24Z
globalnetworksets.crd.projectcalico.org          2019-08-13T04:32:24Z
hostendpoints.crd.projectcalico.org              2019-08-13T04:32:24Z
ipamblocks.crd.projectcalico.org                 2019-08-13T04:32:24Z
ipamconfigs.crd.projectcalico.org                2019-08-13T04:32:24Z
ipamhandles.crd.projectcalico.org                2019-08-13T04:32:24Z
ippools.crd.projectcalico.org                    2019-08-13T04:32:24Z
networkpolicies.crd.projectcalico.org            2019-08-13T04:32:24Z
networksets.crd.projectcalico.org                2019-08-13T04:32:24Z
volumesnapshotclasses.snapshot.storage.k8s.io    2019-09-06T01:45:53Z
volumesnapshotcontents.snapshot.storage.k8s.io   2019-09-06T01:45:53Z
volumesnapshots.snapshot.storage.k8s.io          2019-09-06T01:45:53Z
root@opensds:~# kubectl version

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:15:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

himanshuvar commented on June 17, 2024

@jaemin-joo Could you please try a Kubernetes v1.14.0 cluster?
Here is the link for OpenSDS integration with Kubernetes CSI.
https://github.com/opensds/opensds/wiki/OpenSDS-Integration-with-Kubernetes-CSI
Pods are deployed successfully for me, except for the example test issue reported by @kevinjantw: the application is stuck in the ContainerCreating state.

jmjoo commented on June 17, 2024

@himanshuvar I tried to test in v1.14.0.

root@opensds:~# kubectl version

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

Unfortunately, the same issue occurs.

root@opensds:~# tailf /var/log/syslog

Sep  9 21:48:09 localhost kubelet[22078]: I0909 21:48:09.585249   22078 clientconn.go:440] parsed scheme: ""
Sep  9 21:48:09 localhost kubelet[22078]: I0909 21:48:09.585385   22078 clientconn.go:440] scheme "" not registered, fallback to default scheme
Sep  9 21:48:09 localhost kubelet[22078]: I0909 21:48:09.585528   22078 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/lib/kubelet/plugins/csi-opensdsplugin/csi.sock 0  <nil>}]
Sep  9 21:48:09 localhost kubelet[22078]: I0909 21:48:09.585651   22078 clientconn.go:796] ClientConn switching balancer to "pick_first"
Sep  9 21:48:09 localhost kubelet[22078]: I0909 21:48:09.585789   22078 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0017e9050, CONNECTING
Sep  9 21:48:09 localhost kubelet[22078]: I0909 21:48:09.585907   22078 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
Sep  9 21:48:09 localhost kubelet[22078]: I0909 21:48:09.586327   22078 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0017e9050, READY
Sep  9 21:48:09 localhost kubelet[22078]: I0909 21:48:09.588215   22078 clientconn.go:440] parsed scheme: ""
Sep  9 21:48:09 localhost kubelet[22078]: I0909 21:48:09.588458   22078 clientconn.go:440] scheme "" not registered, fallback to default scheme
Sep  9 21:48:09 localhost kubelet[22078]: I0909 21:48:09.588613   22078 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/lib/kubelet/plugins/csi-opensdsplugin/csi.sock 0  <nil>}]
Sep  9 21:48:09 localhost kubelet[22078]: I0909 21:48:09.588763   22078 clientconn.go:796] ClientConn switching balancer to "pick_first"
Sep  9 21:48:09 localhost kubelet[22078]: I0909 21:48:09.588904   22078 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc002232f90, CONNECTING
Sep  9 21:48:09 localhost kubelet[22078]: I0909 21:48:09.589259   22078 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc002232f90, READY
Sep  9 21:48:09 localhost kubelet[22078]: I0909 21:48:09.589023   22078 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
Sep  9 21:48:10 localhost kubelet[22078]: E0909 21:48:10.640630   22078 csi_attacher.go:320] kubernetes.io/csi: attacher.MountDevice failed: rpc error: code = FailedPrecondition desc = failed to find device: exit status 21
Sep  9 21:48:10 localhost kubelet[22078]: E0909 21:48:10.640989   22078 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/csi/csi-opensdsplugin^212678a9-ef31-4857-b11b-29a39e063985\"" failed. No retries permitted until 2019-09-09 21:48:42.640968851 +0900 KST m=+1843.958924218 (durationBeforeRetry 32s). Error: "MountVolume.MountDevice failed for volume \"pvc-f8effaf8-d2ff-11e9-808c-005056abe34e\" (UniqueName: \"kubernetes.io/csi/csi-opensdsplugin^212678a9-ef31-4857-b11b-29a39e063985\") pod \"nginx\" (UID: \"f8f3e594-d2ff-11e9-808c-005056abe34e\") : rpc error: code = FailedPrecondition desc = failed to find device: exit status 21"
root@opensds:~# kubectl describe pod/nginx
Name:               nginx
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               opensds/192.168.0.86
Start Time:         Mon, 09 Sep 2019 21:47:29 +0900
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"containers":[{"image":"nginx",...
Status:             Pending
IP:                 
Containers:
  nginx:
    Container ID:   
    Image:          nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/lib/www/html from csi-data-opensdsplugin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-t4wqc (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  csi-data-opensdsplugin:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  csi-pvc-opensdsplugin
    ReadOnly:   false
  default-token-t4wqc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-t4wqc
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Warning  FailedScheduling        5m1s (x3 over 5m3s)   default-scheduler        pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled               4m58s                 default-scheduler        Successfully assigned default/nginx to opensds
  Normal   SuccessfulAttachVolume  4m58s                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-f8effaf8-d2ff-11e9-808c-005056abe34e"
  Warning  FailedMount             40s (x2 over 2m55s)   kubelet, opensds         Unable to mount volumes for pod "nginx_default(f8f3e594-d2ff-11e9-808c-005056abe34e)": timeout expired waiting for volumes to attach or mount for pod "default"/"nginx". list of unmounted volumes=[csi-data-opensdsplugin]. list of unattached volumes=[csi-data-opensdsplugin default-token-t4wqc]
  Warning  FailedMount             35s (x10 over 4m56s)  kubelet, opensds         MountVolume.MountDevice failed for volume "pvc-f8effaf8-d2ff-11e9-808c-005056abe34e" : rpc error: code = FailedPrecondition desc = failed to find device: exit status 21
kubectl logs -f pod/csi-nodeplugin-opensdsplugin-hdfsv -c opensds

I0910 16:24:07.696145       1 node.go:219] start to node get capabilities
I0910 16:24:07.696158       1 node.go:222] end to node get capabilities
I0910 16:24:07.698021       1 node.go:68] start to node stage volume, Volume_id: fd913944-ceb5-4eef-85f3-d3a7a753e78e, staging_target_path: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-935fe5c4-d391-11e9-9776-005056abe34e/globalmount
I0910 16:24:07.713345       1 logs.go:40] 2019/09/10 16:24:07 receiver.go:137: 
StatusCode: 200 OK
Response Body:
[]
I0910 16:24:07.727130       1 logs.go:40] 2019/09/10 16:24:07 receiver.go:137: 
StatusCode: 200 OK
Response Body:
{"id":"fd913944-ceb5-4eef-85f3-d3a7a753e78e","createdAt":"2019-09-10T06:09:40","updatedAt":"2019-09-10T06:09:45","tenantId":"94b280022d0c4401bcf3b0ea85870519","userId":"558057c4256545bd8a307c37464003c9","name":"pvc-935fe5c4-d391-11e9-9776-005056abe34e","size":1,"availabilityZone":"default","status":"inUse","poolId":"4e79aeee-eca9-5608-a4d4-27c407b568c1","profileId":"2ce80f1a-d1de-4374-a172-96f2eccd27bb","metadata":{"lvPath":"/dev/lb001/volume-fd913944-ceb5-4eef-85f3-d3a7a753e78e"},"AttachStatus":""}
I0910 16:24:07.741934       1 logs.go:40] 2019/09/10 16:24:07 receiver.go:137: 
StatusCode: 200 OK
Response Body:
{"id":"6617f96f-97df-498f-932b-6048a1990887","createdAt":"2019-09-10T06:09:45","updatedAt":"2019-09-10T06:09:45","tenantId":"94b280022d0c4401bcf3b0ea85870519","volumeId":"fd913944-ceb5-4eef-85f3-d3a7a753e78e","status":"available","metadata":{"attachMode":"rw","availabilityZone":"default","lvPath":"/dev/lb001/volume-fd913944-ceb5-4eef-85f3-d3a7a753e78e","name":"pvc-935fe5c4-d391-11e9-9776-005056abe34e","poolId":"4e79aeee-eca9-5608-a4d4-27c407b568c1","profileId":"2ce80f1a-d1de-4374-a172-96f2eccd27bb","status":"available","storage.kubernetes.io/csiProvisionerIdentity":"1568095192089-8081-csi-opensdsplugin"},"hostInfo":{"platform":"amd64","osType":"linux","ip":"192.168.0.86","host":"opensds","initiator":"iqn.1993-08.org.debian:01:4bef7e834721"},"connectionInfo":{"driverVolumeType":"iscsi","data":{"discard":false,"targetDiscovered":true,"targetIQN":["iqn.2017-10.io.opensds:fd913944-ceb5-4eef-85f3-d3a7a753e78e"],"targetLun":1,"targetPortal":["127.0.0.1:3260"]}},"accessProtocol":"iscsi","attachMode":"rw"}
iscsi target portal: [127.0.0.1:3260], target iqn: [iqn.2017-10.io.opensds:fd913944-ceb5-4eef-85f3-d3a7a753e78e], target lun: 1
I0910 16:24:07.742191       1 logs.go:40] 2019/09/10 16:24:07 helper.go:209: TgtPortal:[127.0.0.1:3260]
I0910 16:24:07.742230       1 logs.go:40] 2019/09/10 16:24:07 common.go:27: Command: /bin/bash -c ping -c 2 127.0.0.1:
I0910 16:24:08.743240       1 logs.go:40] 2019/09/10 16:24:08 helper.go:215: ping result:PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.029 ms

--- 127.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.029/0.030/0.031/0.001 ms

I0910 16:24:08.743411       1 logs.go:40] 2019/09/10 16:24:08 helper.go:236: connmap info:  map[discard:false targetDiscovered:true targetIQN:[iqn.2017-10.io.opensds:fd913944-ceb5-4eef-85f3-d3a7a753e78e] targetLun:1 targetPortal:[127.0.0.1:3260]]
I0910 16:24:08.743478       1 logs.go:40] 2019/09/10 16:24:08 helper.go:237: conn info is:  &{    true [iqn.2017-10.io.opensds:fd913944-ceb5-4eef-85f3-d3a7a753e78e] [127.0.0.1:3260]  1 false}
I0910 16:24:08.743525       1 logs.go:40] 2019/09/10 16:24:08 common.go:27: Command: /bin/bash -c pgrep -f /sbin/iscsid:
I0910 16:24:08.745602       1 logs.go:40] 2019/09/10 16:24:08 helper.go:262: Connect portal: 127.0.0.1:3260 targetiqn: iqn.2017-10.io.opensds:fd913944-ceb5-4eef-85f3-d3a7a753e78e targetlun: 1
I0910 16:24:08.745666       1 logs.go:40] 2019/09/10 16:24:08 helper.go:271: devicepath is  /dev/disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.2017-10.io.opensds:fd913944-ceb5-4eef-85f3-d3a7a753e78e-lun-1
I0910 16:24:08.745717       1 logs.go:40] 2019/09/10 16:24:08 helper.go:124: Discovery portal: 127.0.0.1:3260
I0910 16:24:08.745835       1 logs.go:40] 2019/09/10 16:24:08 common.go:27: Command: iscsiadm -m discovery -t sendtargets -p 127.0.0.1:3260:
I0910 16:24:08.747542       1 logs.go:40] 2019/09/10 16:24:08 helper.go:127: Error encountered in sendtargets: exit status 21
E0910 16:24:08.747605       1 volume.go:569] failed to find device: exit status 21
I0910 16:24:08.747644       1 node.go:79] end to node stage volume
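
For context, iscsiadm exit status 21 is ISCSI_ERR_NO_OBJS_FOUND, i.e. the discovery found no targets. In the log above the dock advertised the portal as 127.0.0.1:3260, so discovery on the node has nothing to find. The failure can be reproduced by hand with the same command the plugin ran (a sketch):

iscsiadm -m discovery -t sendtargets -p 127.0.0.1:3260
echo $?   # 21 when no target is exported on that portal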

Can I use DNS names in the configMap file, like below (e.g. http://apiserver.opensds.svc.cluster.local:50040)?
When I use DNS names, the csi-nodeplugin pod fails, and when I use the cluster IPs of apiserver & authchecker, it works.
The other pods (csi-attacher, csi-provisioner, csi-snapshotter) work with either DNS or cluster IP.
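
A plausible explanation (not confirmed in this thread): the node plugin pod reported the node's own IP (192.168.0.86) earlier, which suggests it runs with hostNetwork: true, and host-network pods use the host's resolver, so they cannot resolve *.svc.cluster.local names unless the pod spec sets dnsPolicy: ClusterFirstWithHostNet. Both fields can be checked directly (pod name taken from the listing above):

# Print hostNetwork and dnsPolicy for the node plugin pod
kubectl get pod csi-nodeplugin-opensdsplugin-9sztb \
  -o jsonpath='{.spec.hostNetwork} {.spec.dnsPolicy}{"\n"}'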

kind: ConfigMap
apiVersion: v1
metadata:
  name: csi-configmap-opensdsplugin
data:
  opensdsendpoint: http://apiserver.opensds.svc.cluster.local:50040
  opensdsauthstrategy: keystone
  opensdsstoragetype: block
  osauthurl: http://authchecker.opensds.svc.cluster.local/identity
  osusername: admin
  ospassword: opensds@123
  ostenantname: admin
  osprojectname: admin
  osuserdomainid: default
  passwordencrypter: aes
  enableEncrypted: F

jmjoo commented on June 17, 2024

When I checked the configuration file, I found the issue is related to the IP. It works when I change tgtBindIp to the host IP in /etc/opensds/driver/lvm.yaml.

root@opensds-srv01:~# vi /etc/opensds/driver/lvm.yaml

tgtBindIp: 192.168.0.86 # This section!!
tgtConfDir: /etc/tgt/conf.d
pool:
  lb001:
    storageType: block
    availabilityZone: default
    extras:
      dataStorage:
        provisioningPolicy: Thin
        isSpaceEfficient: false
      ioConnectivity:
        accessProtocol: iscsi
        maxIOPS: 7000000
        maxBWS: 600
      advanced:
        diskType: SSD
        latency: 5ms
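
After changing tgtBindIp and redeploying, the fix can be verified with the same discovery command the plugin runs, now pointed at the host portal (a sketch):

iscsiadm -m discovery -t sendtargets -p 192.168.0.86:3260   # should list the target IQNs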

But the dock can move to another node if a node goes down, so this doesn't fully solve the issue.

jmjoo commented on June 17, 2024

Can I ask why the node's iscsid process is checked? In my case, I attached the iSCSI volume to the node where the dock and the csiplugin are placed, and I still hit these errors when running the nginx example for CSI:

root@opensds-srv01:~# kubectl logs -f pod/csi-nodeplugin-opensdsplugin-4dk5j -c opensds

I0925 17:25:35.358873       1 logs.go:40] 2019/09/25 17:25:35 helper.go:236: connmap info:  map[discard:false targetDiscovered:true targetIQN:[iqn.2017-10.io.opensds:cdda11b8-e04f-4f49-8c16-0f88d761859e] targetLun:1 targetPortal:[192.168.0.11:3260]]
I0925 17:25:35.358885       1 logs.go:40] 2019/09/25 17:25:35 helper.go:237: conn info is:  &{    true [iqn.2017-10.io.opensds:cdda11b8-e04f-4f49-8c16-0f88d761859e] [192.168.0.11:3260]  1 false}
I0925 17:25:35.358892       1 logs.go:40] 2019/09/25 17:25:35 common.go:27: Command: /bin/bash -c pgrep -f /sbin/iscsid:
I0925 17:25:35.360933       1 logs.go:40] 2019/09/25 17:25:35 common.go:27: Command: /bin/bash -c /sbin/iscsid:
E0925 17:25:35.362435       1 volume.go:569] failed to find device: Please stop the iscsi process outside the container first: exit status 1
I0925 17:25:35.362444       1 node.go:79] end to node stage volume
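
Judging from the commands in the log, the plugin first checks for a host-side daemon with pgrep -f /sbin/iscsid and then tries to start /sbin/iscsid itself inside the container, so it refuses to stage while one is already running outside. The precondition it trips on can be reproduced on the node:

pgrep -f /sbin/iscsid   # non-empty output means NodeStageVolume will fail with
                        # "Please stop the iscsi process outside the container first"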
