
Comments (16)

wkrapohl commented on August 27, 2024

If this is repeatable, you may need to get common-services involved.

The install process has been working fine without common-services; it may just be your particular scenario of installing Common Services that is breaking. I don't install Common Services.

I just finished using the play to install onto a fyre cluster and was able to create a PVC without problems.
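As a quick sanity check, a minimal claim like this (the name and namespace are arbitrary examples) should go Bound within a minute or so on a healthy install:

cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: smoke-test-claim
  namespace: rook-ceph
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs
EOF

oc get pvc smoke-test-claim -n rook-ceph -w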

I am not a rook/ceph expert; I just wrapped an ansible play around the example install process the rook/ceph team provides in their repo.

General rook/ceph problems would have to go to those supporting the repo: https://github.com/rook/rook.


ianbaldwin2 commented on August 27, 2024

I have just created a NEW Fyre cluster and used the play to install.

I tried to create 3 PVCs, one for each storage class. They stay in pending.

[screenshot: the three PVCs stuck in Pending]

@wkrapohl would you be able to log in to my cluster to check it is as you expect?

[screenshots of the storage classes and PVC details]
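The events on a pending claim usually carry the provisioner's error, so something like this (claim name taken from the screenshots) should surface the root cause:

oc describe pvc my-ceph-block-claim -n rook-ceph
oc get events -n rook-ceph --sort-by='.lastTimestamp' | tail -20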


wkrapohl commented on August 27, 2024

Are you simply asking for too much storage? By default you only get 600G.
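If the rook toolbox is deployed (the deployment is named rook-ceph-tools in the stock rook examples; it may not be present on your cluster), ceph df will show how much raw capacity actually exists:

oc -n rook-ceph exec deploy/rook-ceph-tools -- ceph df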


ianbaldwin2 commented on August 27, 2024

I asked for 1Gi each for the PVCs:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
  selfLink: /api/v1/namespaces/rook-ceph/persistentvolumeclaims/my-ceph-block-claim
  resourceVersion: '56770'
  name: my-ceph-block-claim
  uid: b8435cc3-54b5-4e82-b0ab-6b9fccb480a6
  creationTimestamp: '2020-12-09T16:28:53Z'
  managedFields:
    - manager: Mozilla
      operation: Update
      apiVersion: v1
      time: '2020-12-09T16:28:53Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:spec':
          'f:accessModes': {}
          'f:resources':
            'f:requests':
              .: {}
              'f:storage': {}
          'f:storageClassName': {}
          'f:volumeMode': {}
        'f:status':
          'f:phase': {}
    - manager: kube-controller-manager
      operation: Update
      apiVersion: v1
      time: '2020-12-09T16:28:53Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:volume.beta.kubernetes.io/storage-provisioner': {}
  namespace: rook-ceph
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
  volumeMode: Filesystem
status:
  phase: Pending
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
  selfLink: /api/v1/namespaces/rook-ceph/persistentvolumeclaims/my-csi-cephfs-claim
  resourceVersion: '56615'
  name: my-csi-cephfs-claim
  uid: 27f739b6-f5ab-4830-9f79-90a97cdaa304
  creationTimestamp: '2020-12-09T16:28:31Z'
  managedFields:
    - manager: Mozilla
      operation: Update
      apiVersion: v1
      time: '2020-12-09T16:28:31Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:spec':
          'f:accessModes': {}
          'f:resources':
            'f:requests':
              .: {}
              'f:storage': {}
          'f:storageClassName': {}
          'f:volumeMode': {}
        'f:status':
          'f:phase': {}
    - manager: kube-controller-manager
      operation: Update
      apiVersion: v1
      time: '2020-12-09T16:28:31Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:volume.beta.kubernetes.io/storage-provisioner': {}
  namespace: rook-ceph
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs
  volumeMode: Filesystem
status:
  phase: Pending
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
  selfLink: /api/v1/namespaces/rook-ceph/persistentvolumeclaims/my-rook-cephfs-claim
  resourceVersion: '56415'
  name: my-rook-cephfs-claim
  uid: 88ab91e7-90c8-4177-8d31-0c64e9cb4e4d
  creationTimestamp: '2020-12-09T16:28:04Z'
  managedFields:
    - manager: Mozilla
      operation: Update
      apiVersion: v1
      time: '2020-12-09T16:28:04Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:spec':
          'f:accessModes': {}
          'f:resources':
            'f:requests':
              .: {}
              'f:storage': {}
          'f:storageClassName': {}
          'f:volumeMode': {}
        'f:status':
          'f:phase': {}
    - manager: kube-controller-manager
      operation: Update
      apiVersion: v1
      time: '2020-12-09T16:28:04Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:volume.beta.kubernetes.io/storage-provisioner': {}
  namespace: rook-ceph
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
  volumeMode: Filesystem
status:
  phase: Pending


wkrapohl commented on August 27, 2024

Where is the managed-nfs-storage sc coming from? Is it causing a problem? I do not have that on my clusters.


ianbaldwin2 commented on August 27, 2024

We have used the managed-nfs-storage sc successfully in the past before trying to use the csi sc.

I have removed the managed-nfs-storage sc, but they still stay in Pending.


ianbaldwin2 commented on August 27, 2024

I will rebuild the cluster without the managed-nfs-storage sc and see what happens.


wkrapohl commented on August 27, 2024

Just installed csi-cephfs onto my new 4.7 nightly fyre cluster. All pods are running:

[root@waltx47test-inf ~]# oc get po -n rook-ceph
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-dv4jt                                            3/3     Running     0          8m53s
csi-cephfsplugin-k9zkb                                            3/3     Running     0          8m53s
csi-cephfsplugin-provisioner-5c65b94c8d-plx66                     6/6     Running     3          8m53s
csi-cephfsplugin-provisioner-5c65b94c8d-rdvbw                     6/6     Running     0          8m53s
csi-cephfsplugin-wdfk5                                            3/3     Running     0          8m53s
csi-rbdplugin-jlsck                                               3/3     Running     0          8m54s
csi-rbdplugin-km6gb                                               3/3     Running     0          8m54s
csi-rbdplugin-provisioner-569c75558-ggtcl                         6/6     Running     3          8m54s
csi-rbdplugin-provisioner-569c75558-l5txz                         6/6     Running     0          8m54s
csi-rbdplugin-r9zw5                                               3/3     Running     0          8m54s
rook-ceph-crashcollector-worker0.waltx47test.cp.fyre.ibm.c9g95b   1/1     Running     0          6m52s
rook-ceph-crashcollector-worker1.waltx47test.cp.fyre.ibm.cblcvd   1/1     Running     0          8m41s
rook-ceph-crashcollector-worker2.waltx47test.cp.fyre.ibm.crs2bk   1/1     Running     0          6m53s
rook-ceph-mds-myfs-a-5b75847949-gpc4h                             1/1     Running     0          6m54s
rook-ceph-mds-myfs-b-6c66d447c5-jq6pv                             1/1     Running     0          6m53s
rook-ceph-mgr-a-5fbf7b484f-q8thh                                  1/1     Running     0          7m26s
rook-ceph-mon-a-69dcd46b9b-fr5sv                                  1/1     Running     0          8m41s
rook-ceph-mon-b-6dd7885c4f-q9fdx                                  1/1     Running     0          8m32s
rook-ceph-mon-c-78944d795-7wpjx                                   1/1     Running     0          7m57s
rook-ceph-operator-59cbfb7c7c-z27qz                               1/1     Running     0          10m
rook-ceph-osd-0-5479587b8d-8hn7z                                  1/1     Running     0          7m16s
rook-ceph-osd-1-76d7df897b-pjn2c                                  1/1     Running     0          7m15s
rook-ceph-osd-2-b68c78cb9-bsd6z                                   1/1     Running     0          7m15s
rook-ceph-osd-prepare-worker0.waltx47test.cp.fyre.ibm.com-gvdm8   0/1     Completed   0          7m24s
rook-ceph-osd-prepare-worker1.waltx47test.cp.fyre.ibm.com-4jz8z   0/1     Completed   0          7m24s
rook-ceph-osd-prepare-worker2.waltx47test.cp.fyre.ibm.com-58cdc   0/1     Completed   0          7m24s
rook-discover-6dt7s                                               1/1     Running     0          9m48s
rook-discover-qsz6k                                               1/1     Running     0          9m48s
rook-discover-rxpzw                                               1/1     Running     0          9m48s
[root@waltx47test-inf ~]# 

Storage classes are:

[root@waltx47test-inf ~]# oc get sc
NAME                   PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-cephfs (default)   rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   11m
rook-ceph-block        rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   11m
rook-cephfs            rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   11m

PVCs created are:

[root@waltx47test-inf ~]# oc get pvc -n rook-ceph
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
walt-ceph    Bound    pvc-7fbcc445-5f45-4cdf-8be5-37afbbcc7b14   1Gi        RWO            csi-cephfs        5m18s
walt-ceph2   Bound    pvc-eae95c01-e1eb-4714-96b8-ec60a12f14e4   1Gi        RWX            csi-cephfs        5m
walt-ceph3   Bound    pvc-d885802a-a54d-4f4d-9d32-3405db2cb5f2   1Gi        RWO            rook-ceph-block   4m35s
walt-ceph4   Bound    pvc-997f829f-624a-4802-a239-2e68424926ef   1Gi        RWX            rook-cephfs       4m13s


ianbaldwin2 commented on August 27, 2024

Thanks for the output. I will try to replicate.


ianbaldwin2 commented on August 27, 2024

Still have the same problem.

I'm using OCP 4.6.6:

[root@ib-sc-class-inf ~]# oc get po -n rook-ceph
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-2d4z7                                            3/3     Running     0          5m5s
csi-cephfsplugin-5rxgl                                            3/3     Running     0          5m5s
csi-cephfsplugin-bln9s                                            3/3     Running     0          5m5s
csi-cephfsplugin-dxx48                                            3/3     Running     0          5m5s
csi-cephfsplugin-grqzc                                            3/3     Running     0          5m5s
csi-cephfsplugin-n4rdx                                            3/3     Running     0          5m5s
csi-cephfsplugin-pbjr7                                            3/3     Running     0          5m5s
csi-cephfsplugin-provisioner-5c65b94c8d-p7p92                     6/6     Running     0          5m5s
csi-cephfsplugin-provisioner-5c65b94c8d-pnmwl                     6/6     Running     0          5m5s
csi-rbdplugin-2zzkh                                               3/3     Running     0          5m6s
csi-rbdplugin-48gxq                                               3/3     Running     0          5m6s
csi-rbdplugin-7nzcm                                               3/3     Running     0          5m6s
csi-rbdplugin-8nc85                                               3/3     Running     0          5m6s
csi-rbdplugin-jtfrh                                               3/3     Running     0          5m6s
csi-rbdplugin-provisioner-569c75558-7mbdp                         6/6     Running     0          5m6s
csi-rbdplugin-provisioner-569c75558-prxl8                         6/6     Running     0          5m6s
csi-rbdplugin-q7snz                                               3/3     Running     0          5m6s
csi-rbdplugin-rs5lv                                               3/3     Running     0          5m6s
rook-ceph-crashcollector-worker1.ib-sc-class.cp.fyre.ibm.c7dnzm   1/1     Running     0          2m4s
rook-ceph-crashcollector-worker4.ib-sc-class.cp.fyre.ibm.c2rls2   1/1     Running     0          2m3s
rook-ceph-crashcollector-worker6.ib-sc-class.cp.fyre.ibm.c8tbth   1/1     Running     0          2m43s
rook-ceph-mds-myfs-a-5bd85bc445-qsp6j                             1/1     Running     0          2m4s
rook-ceph-mds-myfs-b-56b9c448c9-52vrd                             1/1     Running     0          2m3s
rook-ceph-mgr-a-55c6859669-zglzv                                  1/1     Running     0          2m43s
rook-ceph-mon-a-777f788b8c-6t2jn                                  1/1     Running     0          4m11s
rook-ceph-mon-b-6877d999c4-lbvq7                                  1/1     Running     0          3m45s
rook-ceph-mon-c-544bc95fdf-blwh5                                  1/1     Running     0          3m14s
rook-ceph-operator-59cbfb7c7c-dwg9h                               1/1     Running     0          6m22s
rook-ceph-osd-prepare-worker0.ib-sc-class.cp.fyre.ibm.com-xwkxh   0/1     Completed   0          2m42s
rook-ceph-osd-prepare-worker1.ib-sc-class.cp.fyre.ibm.com-c89m6   0/1     Completed   0          2m41s
rook-ceph-osd-prepare-worker2.ib-sc-class.cp.fyre.ibm.com-b2gjq   0/1     Completed   0          2m41s
rook-ceph-osd-prepare-worker3.ib-sc-class.cp.fyre.ibm.com-r8wwl   0/1     Completed   0          2m41s
rook-ceph-osd-prepare-worker4.ib-sc-class.cp.fyre.ibm.com-rxrp8   0/1     Completed   0          2m40s
rook-ceph-osd-prepare-worker5.ib-sc-class.cp.fyre.ibm.com-n94km   0/1     Completed   0          2m40s
rook-ceph-osd-prepare-worker6.ib-sc-class.cp.fyre.ibm.com-p9rl5   0/1     Completed   0          2m39s
rook-discover-4hqg8                                               1/1     Running     0          5m58s
rook-discover-949f4                                               1/1     Running     0          5m58s
rook-discover-9qrbn                                               1/1     Running     0          5m58s
rook-discover-dld2z                                               1/1     Running     0          5m58s
rook-discover-xkp8z                                               1/1     Running     0          5m58s
rook-discover-xlffq                                               1/1     Running     0          5m58s
rook-discover-xm85s                                               1/1     Running     0          5m58s
[root@ib-sc-class-inf ~]# oc get sc
NAME                   PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-cephfs (default)   rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   38m
rook-ceph-block        rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   38m
rook-cephfs            rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   38m
[root@ib-sc-class-inf ~]# oc get pvc -n rook-ceph
NAME   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph   Pending                                      csi-cephfs     3m23s

The error in the events is:

failed to provision volume with StorageClass "csi-cephfs": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-ea08f962-2c43-4cc9-bb47-9a3d6397d7fc already exists
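Note that the pod listing above shows the osd-prepare jobs completing but no rook-ceph-osd-* pods at all, so Ceph may have no storage to provision from. One way to dig further (the deployment and container names below are the rook defaults and may differ on other installs):

oc -n rook-ceph logs deploy/csi-cephfsplugin-provisioner -c csi-provisioner --tail=50
oc -n rook-ceph exec deploy/rook-ceph-tools -- ceph status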


wkrapohl commented on August 27, 2024

I assume you are using a fyre API of some sort to create a cluster with 7 workers. Did you have a worker stanza that included an additional disk? The vdb additional disk is what csi-ceph uses by default:

"worker": [
  {
    "count": "7",
    "cpu": "8",
    "memory": "32",
    "additional_disk": [
      "300"
    ]
  }
]
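To confirm the additional disk actually made it onto a worker, one option is an oc debug session; vdb should show up in the lsblk output (the node name below is just an example):

oc debug node/worker0.example.cp.fyre.ibm.com -- chroot /host lsblk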


ianbaldwin2 commented on August 27, 2024

Yes, we are using ansible to deploy into fyre.

We have this:

"worker": [
    {
      "count": "{{ fyre_worker_quantity }}",
      "cpu": "{{ fyre_worker_cpu }}",
      "memory": "{{ fyre_worker_memory }}",
      "additional_disk": [
      ]
    }

For the additional disk, does that need a value?


wkrapohl commented on August 27, 2024

Yes, you need an additional disk on all your worker nodes. As I stated, the vdb drive that gets created when you specify an additional disk with a value is what csi-cephfs allocates storage against. The sum of all the vdb drive sizes across all workers is the amount of storage you have for csi-ceph, and you must not go beyond that amount. For example, with an additional_disk of size 200 and seven workers, your total csi-ceph storage available is 1.4T.
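For reference, a sketch of the corrected stanza from the earlier comment, assuming the 300G size the community play requests:

"worker": [
  {
    "count": "{{ fyre_worker_quantity }}",
    "cpu": "{{ fyre_worker_cpu }}",
    "memory": "{{ fyre_worker_memory }}",
    "additional_disk": [
      "300"
    ]
  }
]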

Are you using the community ansible play to create a fyre instance (https://github.com/IBM/community-automation/tree/master/ansible/request-ocp-fyre-play)? The community play creates a 300G additional drive on every worker of every fyre cluster.


ianbaldwin2 commented on August 27, 2024

Thank you for confirming.
We have our own automation to deploy into fyre at the moment. I will see if adding the 300G value resolves the issue.
If not, we can look at using the community automation play to deploy.


wkrapohl commented on August 27, 2024

We also have a play that will do it all: 1) create a fyre cluster, 2) install csi-ceph, 3) install Common Services. It can be configured in many ways, and we can work with you to customize it for your needs. Here is the play: https://github.com/IBM/community-automation/tree/master/ansible/request-ocp-cs-install-fyre-play.

We would really like to get teams using our https://github.com/IBM/community-automation for their automation, so we have one place to go for automation and not every team writing their own.


ianbaldwin2 commented on August 27, 2024

Thanks for your help.

The PVCs are now working.

[root@ib-sc-vdb-inf ~]# oc get po -n rook-ceph
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-75ntr                                            3/3     Running     0          11m
csi-cephfsplugin-bw5qq                                            3/3     Running     0          11m
csi-cephfsplugin-provisioner-5c65b94c8d-dfbp5                     6/6     Running     0          11m
csi-cephfsplugin-provisioner-5c65b94c8d-jd869                     6/6     Running     0          11m
csi-cephfsplugin-r4vzl                                            3/3     Running     0          11m
csi-rbdplugin-j8jjr                                               3/3     Running     0          11m
csi-rbdplugin-km95j                                               3/3     Running     0          11m
csi-rbdplugin-provisioner-569c75558-msgx6                         6/6     Running     0          11m
csi-rbdplugin-provisioner-569c75558-mvtpx                         6/6     Running     0          11m
csi-rbdplugin-pslsf                                               3/3     Running     0          11m
rook-ceph-crashcollector-worker0.ib-sc-vdb.cp.fyre.ibm.com6sh6b   1/1     Running     0          10m
rook-ceph-crashcollector-worker1.ib-sc-vdb.cp.fyre.ibm.com8sxqh   1/1     Running     0          9m35s
rook-ceph-crashcollector-worker2.ib-sc-vdb.cp.fyre.ibm.comgwxkv   1/1     Running     0          9m36s
rook-ceph-mds-myfs-a-6f8f656d97-z6b62                             1/1     Running     0          9m36s
rook-ceph-mds-myfs-b-8584f8ff9f-5rtp4                             1/1     Running     0          9m35s
rook-ceph-mgr-a-6fb5b5fc65-6dncd                                  1/1     Running     0          9m59s
rook-ceph-mon-a-586874dbf-l86bs                                   1/1     Running     0          11m
rook-ceph-mon-b-77c44b7c88-7kkhr                                  1/1     Running     0          11m
rook-ceph-mon-c-59f65b646f-qf75l                                  1/1     Running     0          10m
rook-ceph-operator-59cbfb7c7c-wxr6h                               1/1     Running     0          12m
rook-ceph-osd-0-b58fddcb4-zqk2q                                   1/1     Running     0          9m50s
rook-ceph-osd-1-7754fb8d97-st55w                                  1/1     Running     0          9m47s
rook-ceph-osd-2-9c757f4cd-ck5vt                                   1/1     Running     0          9m48s
rook-ceph-osd-prepare-worker0.ib-sc-vdb.cp.fyre.ibm.com-ndtkz     0/1     Completed   0          9m58s
rook-ceph-osd-prepare-worker1.ib-sc-vdb.cp.fyre.ibm.com-2zq9b     0/1     Completed   0          9m57s
rook-ceph-osd-prepare-worker2.ib-sc-vdb.cp.fyre.ibm.com-wxxq2     0/1     Completed   0          9m57s
rook-discover-bgtmq                                               1/1     Running     0          12m
rook-discover-cqd8r                                               1/1     Running     0          12m
rook-discover-gdqf6                                               1/1     Running     0          12m
[root@ib-sc-vdb-inf ~]# 
[root@ib-sc-vdb-inf ~]# oc get pvc -n rook-ceph
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
my-ceph    Bound    pvc-f3bfa3c4-1dc4-496e-81dd-21dd4319ddfd   1Gi        RWO            csi-cephfs        74s
my-ceph2   Bound    pvc-86c6911c-96ce-46e3-ba3b-96d0e94a2f6b   1Gi        RWX            csi-cephfs        51s
my-ceph3   Bound    pvc-51d5eed4-0ed2-4394-b229-91fa42e2a9a1   1Gi        RWO            rook-ceph-block   31s
my-ceph4   Bound    pvc-dc1f0ebe-e1e9-4624-8a88-2e495658c24d   1Gi        RWX            rook-cephfs       12s

