sodafoundation / nbp
NorthBound Plugins for platforms and clients to connect to the SODA Data Framework
License: Apache License 2.0
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened: NBP needs a package management tool to manage library references. Godep or Glide is a good and popular tool for that.
What you expected to happen: Adopt Godep or Glide as the dependency management tool.
How to reproduce it (as minimally and precisely as possible): N/A
Anything else we need to know?: N/A
Environment:
uname -a
): Linux

Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind feature
What happened:
Currently the connector and client are in project opensds, and nbp needs to import them as third-party open source, which currently sits under the nbp vendor directory, resulting in serious coupling.
What you expected to happen:
Move the client and connector from project opensds to project nbp, aiming to decouple opensds and nbp.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind feature
What happened:
NBP does not support Keystone.
What you expected to happen:
NBP should support Keystone.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
None
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
New Feature (For CSI compatibility)
Uncomment only one, leave it on its own line:
/kind bug
/kind feature
What happened:
Currently the NBP CSI Plugin does not support the volume expansion feature.
What you expected to happen:
The NBP CSI Plugin should start supporting volume expansion and, once support is in place,
start reporting the OPTIONAL capability EXPAND_VOLUME.
It involves 2 cases of expansion:
ONLINE: the volume is currently published or available on a node.
OFFLINE: the volume is currently not published or available on a node.
Reference link : https://kubernetes-csi.github.io/docs/volume-expansion.html
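The expansion flow described above can be sketched as follows. This is a minimal illustration, not the plugin's real implementation: the request/response structs stand in for the generated CSI protobuf messages, and the volume struct is a hypothetical view of an OpenSDS volume.

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the CSI ControllerExpandVolume messages;
// the real types live in the container-storage-interface spec.
type ExpandVolumeRequest struct {
	VolumeID      string
	RequiredBytes int64
}

type ExpandVolumeResponse struct {
	CapacityBytes         int64
	NodeExpansionRequired bool
}

// volume is a hypothetical view of an OpenSDS volume for illustration.
type volume struct {
	sizeBytes int64
	published bool // true => ONLINE expansion, false => OFFLINE
}

// expandVolume sketches an idempotent expansion flow: shrinking is rejected,
// an already-large-enough volume is a no-op, and a published (online) volume
// additionally needs the node plugin to grow the filesystem.
func expandVolume(vol *volume, req ExpandVolumeRequest) (*ExpandVolumeResponse, error) {
	if req.RequiredBytes <= 0 {
		return nil, errors.New("invalid requested size")
	}
	if req.RequiredBytes < vol.sizeBytes {
		return nil, fmt.Errorf("shrinking from %d to %d is not supported", vol.sizeBytes, req.RequiredBytes)
	}
	if req.RequiredBytes > vol.sizeBytes {
		// The real plugin would call the OpenSDS volume-extend API here.
		vol.sizeBytes = req.RequiredBytes
	}
	return &ExpandVolumeResponse{
		CapacityBytes:         vol.sizeBytes,
		NodeExpansionRequired: vol.published, // online expansion also resizes the fs on the node
	}, nil
}

func main() {
	v := &volume{sizeBytes: 1 << 30, published: true}
	resp, err := expandVolume(v, ExpandVolumeRequest{VolumeID: "vol-1", RequiredBytes: 2 << 30})
	fmt.Println(resp, err)
}
```

Whether NodeExpansionRequired is set is what distinguishes the ONLINE case: the controller grows the backing volume in both cases, but only a published volume needs a follow-up filesystem resize on the node.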
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
Currently the NBP CSI Plugin does not support the volume clone feature.
What you expected to happen:
The NBP CSI Plugin should start supporting volume cloning and, once support is in place,
start reporting the OPTIONAL capability CLONE_VOLUME.
Reference link : https://kubernetes-csi.github.io/docs/volume-cloning.html
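The validation a clone-capable CreateVolume needs can be sketched as below. The cloneRequest struct stands in for the real CSI CreateVolume request with a VolumeContentSource, and the in-memory catalogue is a hypothetical substitute for the OpenSDS volume API, used only for illustration.

```go
package main

import (
	"errors"
	"fmt"
)

// cloneRequest is a simplified stand-in for a CreateVolume request whose
// VolumeContentSource names an existing volume.
type cloneRequest struct {
	Name          string
	SourceVolume  string // ID of the volume to clone from
	CapacityBytes int64
}

type volumeInfo struct {
	ID        string
	SizeBytes int64
}

// catalogue is a hypothetical in-memory volume store for illustration.
type catalogue struct{ volumes map[string]volumeInfo }

// createClone sketches the checks the clone path needs: the source must
// exist and, per the CSI spec, the clone must be at least as large as the
// source volume.
func (c *catalogue) createClone(req cloneRequest) (volumeInfo, error) {
	src, ok := c.volumes[req.SourceVolume]
	if !ok {
		return volumeInfo{}, errors.New("source volume not found")
	}
	if req.CapacityBytes < src.SizeBytes {
		return volumeInfo{}, fmt.Errorf("requested size %d is smaller than source size %d", req.CapacityBytes, src.SizeBytes)
	}
	// The real plugin would call the OpenSDS API here with the source volume ID.
	vol := volumeInfo{ID: "clone-of-" + src.ID, SizeBytes: req.CapacityBytes}
	c.volumes[req.Name] = vol
	return vol, nil
}

func main() {
	c := &catalogue{volumes: map[string]volumeInfo{"vol-1": {ID: "vol-1", SizeBytes: 1 << 30}}}
	v, err := c.createClone(cloneRequest{Name: "pvc-clone", SourceVolume: "vol-1", CapacityBytes: 1 << 30})
	fmt.Println(v.ID, err)
}
```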
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
Currently NBP CSI has an issue with provisioning block and file storage types simultaneously for Kubernetes workloads:
"opensdsstoragetype" needs to be changed to "block" or "file" whenever a different storage type is to be provisioned.
Reference link : https://github.com/opensds/nbp/blob/master/csi/server/deploy/kubernetes/csi-configmap-opensdsplugin.yaml
What you expected to happen:
The usability needs to improve, which requires some code refactoring of the file and block processing flows.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
What happened:
When you start the opensds CSI server on a new host on which Ceph has not been deployed, you will find that you cannot start these two pods: csi-attacher-opensdsplugin and csi-nodeplugin-opensdsplugin.
root@ecs-74b4-0001:~/gopath/src/github.com/opensds/nbp/csi/server/deploy/kubernetes# kubectl get pod
NAME READY STATUS RESTARTS AGE
csi-attacher-opensdsplugin-0 0/2 ContainerCreating 0 116s
csi-nodeplugin-opensdsplugin-vhklm 0/2 ContainerCreating 0 116s
csi-provisioner-opensdsplugin-0 2/2 Running 0 116s
csi-snapshotter-opensdsplugin-0 2/2 Running 0 116s
Then, describing one of the pods, you will find that the pod cannot find the directory "/etc/ceph".
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 111s default-scheduler Successfully assigned default/csi-attacher-opensdsplugin-0 to 127.0.0.1
Warning FailedMount 47s (x8 over 111s) kubelet, 127.0.0.1 MountVolume.SetUp failed for volume "ceph-dir" : hostPath type check failed: /etc/ceph/ is not a directory
I just created the directory "/etc/ceph" to fix it.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
Maybe a bug, maybe not
What happened:
I followed the two guides below to install Kubernetes & OpenSDS successfully and tested them.
Everything was OK, but the nginx CSI example test failed.
https://github.com/opensds/opensds/wiki/OpenSDS-Integration-with-Kubernetes-CSI
https://github.com/opensds/opensds/wiki/OpenSDS-Cluster-Installation-through-Ansible
What you expected to happen:
kubectl create -f nginx.yaml
passed, but
kubectl get pod nginx
showed the nginx pod in Pending status.
I expected nginx to be in Running status.
How to reproduce it (as minimally and precisely as possible):
(1) Install Kubernetes & OpenSDS on one Virtualbox VM
(2) The used csi-configmap-opensdsplugin.yaml during OpenSDS deployment
kind: ConfigMap
apiVersion: v1
metadata:
  name: csi-configmap-opensdsplugin
data:
  opensdsendpoint: http://127.0.0.1:50040
  opensdsauthstrategy: noauth
  opensdsstoragetype: block
  osauthurl: http://127.0.0.1/identity
  osusername: admin
  ospassword: opensds@123
  ostenantname: admin
  osprojectname: admin
  osuserdomainid: default
  passwordencrypter: aes
  enableEncrypted: F
  osuserdomainid: default
(3) After finishing the OpenSDS deployment, run kubectl get pod
to check that the four pods csi-attacher-opensdsplugin, csi-nodeplugin-opensdsplugin,
csi-provisioner-opensdsplugin and csi-snapshotter-opensdsplugin were Running normally.
(4) Run kubectl create -f nginx.yaml
and kubectl get pod nginx
Anything else we need to know?:
kubectl get pvc
showed csi-pvc-opensdsplugin was in Pending status, and
kubectl describe pvc csi-pvc-opensdsplugin
showed a failure message:
"failed to provision volume with StorageClass "csi-sc-opensdsplugin": rpc error: code =
InvalidArgument desc = get profile abc failed"
Detailed message snapshot:
https://1drv.ms/u/s!AtJnzpDx-p62uDRM8QWCFBBL00-E
Environment:
uname -a
): 4.15.0-45-generic

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
When we create a fileshare and use it for a Kubernetes pod, the pod does not come up because attaching the volume fails. The reason for the attach failure was found to be an issue with the IP address extraction from the node ID during controller publish fileshare.
What you expected to happen:
Attaching the volume should succeed and the pod should come to Running state.
How to reproduce it (as minimally and precisely as possible):
Always reproducible with the following steps:
1) Create a fileshare
2) Create a pod using the above fileshare
3) Check the state of the pod and describe the pod to find out the reason
Anything else we need to know?:
A similar issue exists in the controller unpublish fileshare flow as well.
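One robust way to fix such extraction bugs is to look for an IP-shaped field rather than trusting a fixed position. A minimal sketch; the comma-separated node-ID layout used here is an assumption for illustration, not the confirmed NBP format:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// extractNodeIP scans a comma-separated node ID (assumed layout, e.g.
// "hostname,192.168.1.175,iqn...") and returns the first field that parses
// as an IP address, rather than taking a field at a fixed position.
func extractNodeIP(nodeID string) (string, error) {
	for _, field := range strings.Split(nodeID, ",") {
		field = strings.TrimSpace(field)
		if net.ParseIP(field) != nil {
			return field, nil
		}
	}
	return "", fmt.Errorf("no IP address found in node id %q", nodeID)
}

func main() {
	ip, err := extractNodeIP("ubuntu,192.168.1.175,iqn.1993-08.org.debian:01:c6961059376e")
	fmt.Println(ip, err)
}
```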
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
This bug happened in the development branch with K8S v1.12.0.
I created a PVC using opensds/nbp/csi/server/examples/kubernetes/CreateVolumefromSnapshot/pvc_sc.yaml.
Then I created an nginx app using this PVC. The nginx app started successfully and the OpenSDS volume was in-use. This is the LVM iSCSI backend.
After I deleted the nginx pod, however, the OpenSDS volume attachment was still in available state.
osdsctl volume attachment list
+--------------------------------------+----------------------------------+--------+--------------------------------------+------------+-----------+----------------+
| Id | TenantId | UserId | VolumeId | Mountpoint | Status | AccessProtocol |
+--------------------------------------+----------------------------------+--------+--------------------------------------+------------+-----------+----------------+
| 8e69cc7d-fae2-4d48-a18c-67d7904cc96a | edb007a77b4041a7b38d2fbc5995622c | | 515706c5-77d6-459e-abdb-b7eeb2cbfff2 | - | available | iscsi |
+--------------------------------------+----------------------------------+--------+--------------------------------------+------------+-----------+----------------+
Logs in node opensds plugin:
I1122 03:12:45.302726 1 node.go:397] NodePublishVolume success
I1122 03:12:45.302739 1 node.go:398] end to NodePublishVolume
I1122 03:13:07.470291 1 node.go:409] start to NodeUnpublishVolume, Volume_id: 515706c5-77d6-459e-abdb-b7eeb2cbfff2, target_path: /var/lib/kubelet/pods/77e8683a-ee04-11e8-a98c-000c29e70439/volumes/kubernetes.io~csi/pvc-689429fe-ee04-11e8-a98c-000c29e70439/mount
I1122 03:13:07.470327 1 logs.go:40] 2018/11/22 03:13:07 common.go:116: Umount mountpoint: /var/lib/kubelet/pods/77e8683a-ee04-11e8-a98c-000c29e70439/volumes/kubernetes.io
I1122 03:13:07.470341 1 logs.go:40] 2018/11/22 03:13:07 common.go:32: Command: umount /var/lib/kubelet/pods/77e8683a-ee04-11e8-a98c-000c29e70439/volumes/kubernetes.io~csi/pvc-689429fe-ee04-11e8-a98c-000c29e70439/mount:
I1122 03:13:07.560334 1 logs.go:40] 2018/11/22 03:13:07 common.go:32: Command: hostname :
I1122 03:13:07.561117 1 logs.go:40] 2018/11/22 03:13:07 common.go:151: GetHostName result: ubuntu
I1122 03:13:07.561128 1 node.go:157] No more targetPath
I1122 03:13:07.580922 1 node.go:430] NodeUnpublishVolume success
I1122 03:13:07.580936 1 node.go:431] end to NodeUnpublishVolume
I1122 03:13:07.672498 1 node.go:532] start to NodeGetCapabilities
I1122 03:13:07.672525 1 node.go:535] end to NodeGetCapabilities
I1122 03:13:07.676620 1 node.go:309] start to NodeUnstageVolume, Volume_id: 515706c5-77d6-459e-abdb-b7eeb2cbfff2, staging_target_path: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-689429fe-ee04-11e8-a98c-000c29e70439/globalmount
I1122 03:13:07.676709 1 logs.go:40] 2018/11/22 03:13:07 common.go:116: Umount mountpoint: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-689429fe-ee04-11e8-a98c-000c29e70439/globalmount
I1122 03:13:07.676725 1 logs.go:40] 2018/11/22 03:13:07 common.go:32: Command: umount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-689429fe-ee04-11e8-a98c-000c29e70439/globalmount:
I1122 03:13:07.777311 1 logs.go:40] 2018/11/22 03:13:07 common.go:32: Command: hostname :
I1122 03:13:07.778011 1 logs.go:40] 2018/11/22 03:13:07 common.go:151: GetHostName result: ubuntu
I1122 03:13:07.778021 1 node.go:157] No more stagingTargetPath
I1122 03:13:07.778053 1 logs.go:40] 2018/11/22 03:13:07 helper.go:245: Disconnect portal: 192.168.1.175:3260 targetiqn: iqn.2017-10.io.opensds:515706c5-77d6-459e-abdb-b7eeb2cbfff2
I1122 03:13:07.778059 1 logs.go:40] 2018/11/22 03:13:07 common.go:32: Command: iscsiadm -m session -s:
I1122 03:13:07.780431 1 logs.go:40] 2018/11/22 03:13:07 common.go:32: Command: iscsiadm -m node -p 192.168.1.175:3260 -T iqn.2017-10.io.opensds:515706c5-77d6-459e-abdb-b7eeb2cbfff2 --logout:
I1122 03:13:08.313674 1 logs.go:40] 2018/11/22 03:13:08 common.go:32: Command: iscsiadm -m node -o show -T iqn.2017-10.io.opensds:515706c5-77d6-459e-abdb-b7eeb2cbfff2 -p 192.168.1.175:3260:
I1122 03:13:08.315719 1 logs.go:40] 2018/11/22 03:13:08 common.go:32: Command: iscsiadm -m node -o delete -T iqn.2017-10.io.opensds:515706c5-77d6-459e-abdb-b7eeb2cbfff2:
I1122 03:13:08.365060 1 node.go:336] NodeUnstageVolume success
I1122 03:13:08.365088 1 node.go:337] end to NodeUnstageVolume
Logs in attacher opensds plugin:
I1122 03:12:39.433802 1 controller.go:283] start to ControllerPublishVolume
I1122 03:12:39.482226 1 controller.go:412] nodeId: ubuntu,iqn:iqn.1993-08.org.debian:01:c6961059376e
I1122 03:12:39.709996 1 controller.go:403] end to ControllerPublishVolume
I1122 03:13:11.622518 1 controller.go:468] start to ControllerUnpublishVolume
I1122 03:13:11.686107 1 controller.go:506] end to ControllerUnpublishVolume
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
When creating a snapshot, only one VolumeSnapshot object was created in K8S; however, 2 snapshots were created in OpenSDS. CreateSnapshot in the OpenSDS CSI plugin should only create 1 snapshot even if the CreateSnapshot RPC is called multiple times with the same input parameters. A check should be added to see whether the same snapshot already exists, and return success if it does.
kubectl describe volumesnapshot
Name: new-snapshot-demo
Namespace: default
Labels:
Annotations:
API Version: snapshot.storage.k8s.io/v1alpha1
Kind: VolumeSnapshot
Metadata:
Creation Timestamp: 2018-11-26T02:50:03Z
Generation: 4
Resource Version: 3604
Self Link: /apis/snapshot.storage.k8s.io/v1alpha1/namespaces/default/volumesnapshots/new-snapshot-demo
UID: f980a869-f125-11e8-ba35-000c29e70439
Spec:
Snapshot Class Name: csi-opensds-snapclass
Snapshot Content Name: snapcontent-f980a869-f125-11e8-ba35-000c29e70439
Source:
API Group:
Kind: PersistentVolumeClaim
Name: opensdspvc
Status:
Creation Time: 2018-11-25T18:50:03Z
Ready: true
Restore Size: 1Gi
Events:
kubectl get volumesnapshotcontent
NAME AGE
snapcontent-f980a869-f125-11e8-ba35-000c29e70439 5m
osdsctl volume list
+--------------------------------------+------------------------------------------+-------------+---------+------+------------------+-----------+--------------------------------------+--------------------------------------+
| Id | Name | Description | GroupId | Size | AvailabilityZone | Status | PoolId | ProfileId |
+--------------------------------------+------------------------------------------+-------------+---------+------+------------------+-----------+--------------------------------------+--------------------------------------+
| 57b506f9-df1c-41fb-9907-40ef3da20836 | pvc-83ca8ca8-f124-11e8-ba35-000c29e70439 | | | 1 | default | available | 3757513f-20b0-5735-9f51-4fa8e133bc5a | b87d1abf-0335-4cae-ac2c-c43d5a8bad5f |
+--------------------------------------+------------------------------------------+-------------+---------+------+------------------+-----------+--------------------------------------+--------------------------------------+
osdsctl volume snapshot list
+--------------------------------------+-----------------------------------------------+-------------+------+-----------+--------------------------------------+
| Id | Name | Description | Size | Status | VolumeId |
+--------------------------------------+-----------------------------------------------+-------------+------+-----------+--------------------------------------+
| e0eab59b-804c-45f7-a85d-de17b0fea65a | snapshot-f980a869-f125-11e8-ba35-000c29e70439 | | 1 | available | 57b506f9-df1c-41fb-9907-40ef3da20836 |
| 87d290f0-c202-490d-89d5-38c7566f2126 | snapshot-f980a869-f125-11e8-ba35-000c29e70439 | | 1 | available | 57b506f9-df1c-41fb-9907-40ef3da20836 |
+--------------------------------------+-----------------------------------------------+-------------+------+-----------+--------------------------------------+
kubectl logs csi-snapshotter-opensdsplugin-0 opensds
2018/11/26 02:38:42 current OpenSDS Client endpoint: http://192.168.1.175:50040
2018/11/26 02:38:42 current OpenSDS Client auth strategy: keystone
2018/11/26 02:38:42 current OpenSDS Client endpoint: http://192.168.1.175:50040
2018/11/26 02:38:42 current OpenSDS Client auth strategy: keystone
I1126 02:38:42.521891 1 logs.go:40] 2018/11/26 02:38:42 util.go:71: proto: unix addr: csi/csi.sock
I1126 02:38:42.522018 1 logs.go:40] 2018/11/26 02:38:42 util.go:75: remove sock file: csi/csi.sock
I1126 02:38:42.522805 1 main.go:133] start to serve: csi/csi.sock
I1126 02:38:43.188504 1 identity.go:23] start to Probe
I1126 02:38:43.188534 1 identity.go:28] end to Probe
I1126 02:38:43.189111 1 controller.go:601] start to ControllerGetCapabilities
I1126 02:38:43.189142 1 controller.go:604] end to ControllerGetCapabilities
I1126 02:50:03.698954 1 identity.go:43] start to GetPluginInfo
I1126 02:50:03.698967 1 identity.go:46] end to GetPluginInfo
I1126 02:50:03.699333 1 controller.go:659] start to CreateSnapshot, Name: snapshot-f980a869-f125-11e8-ba35-000c29e70439, SourceVolumeId: 57b506f9-df1c-41fb-9907-40ef3da20836, CreateSnapshotSecrets: map[], parameters: map[]!
I1126 02:50:03.699376 1 controller.go:682] snapshot response:&{ snapshot-f980a869-f125-11e8-ba35-000c29e70439 0 57b506f9-df1c-41fb-9907-40ef3da20836 map[]}
I1126 02:50:03.699413 1 logs.go:40] 2018/11/26 02:50:03 receiver.go:75: POST http://192.168.1.175:50040/v1beta/0d101f665e65492c890e699928351948/block/snapshots
I1126 02:50:03.699546 1 logs.go:40] 2018/11/26 02:50:03 receiver.go:81: Request body:
{
"name": "snapshot-f980a869-f125-11e8-ba35-000c29e70439",
"volumeId": "57b506f9-df1c-41fb-9907-40ef3da20836"
}
I1126 02:50:03.866439 1 logs.go:40] 2018/11/26 02:50:03 receiver.go:101:
StatusCode: 202 Accepted
Response Body:
{"id":"e0eab59b-804c-45f7-a85d-de17b0fea65a","createdAt":"2018-11-25T18:50:03","updatedAt":"","tenantId":"0d101f665e65492c890e699928351948","name":"snapshot-f980a869-f125-11e8-ba35-000c29e70439","size":1,"status":"creating","volumeId":"57b506f9-df1c-41fb-9907-40ef3da20836","metadata":{"lvPath":"/dev/opensds-volumes/volume-57b506f9-df1c-41fb-9907-40ef3da20836"}}
I1126 02:50:03.866519 1 controller.go:694] end to CreateSnapshot
I1126 02:50:03.884952 1 identity.go:43] start to GetPluginInfo
I1126 02:50:03.884967 1 identity.go:46] end to GetPluginInfo
I1126 02:50:03.885215 1 controller.go:659] start to CreateSnapshot, Name: snapshot-f980a869-f125-11e8-ba35-000c29e70439, SourceVolumeId: 57b506f9-df1c-41fb-9907-40ef3da20836, CreateSnapshotSecrets: map[], parameters: map[]!
I1126 02:50:03.885258 1 controller.go:682] snapshot response:&{ snapshot-f980a869-f125-11e8-ba35-000c29e70439 0 57b506f9-df1c-41fb-9907-40ef3da20836 map[]}
I1126 02:50:03.885319 1 logs.go:40] 2018/11/26 02:50:03 receiver.go:75: POST http://192.168.1.175:50040/v1beta/0d101f665e65492c890e699928351948/block/snapshots
I1126 02:50:03.885335 1 logs.go:40] 2018/11/26 02:50:03 receiver.go:81: Request body:
{
"name": "snapshot-f980a869-f125-11e8-ba35-000c29e70439",
"volumeId": "57b506f9-df1c-41fb-9907-40ef3da20836"
}
I1126 02:50:04.015287 1 logs.go:40] 2018/11/26 02:50:04 receiver.go:101:
StatusCode: 202 Accepted
Response Body:
{"id":"87d290f0-c202-490d-89d5-38c7566f2126","createdAt":"2018-11-25T18:50:03","updatedAt":"","tenantId":"0d101f665e65492c890e699928351948","name":"snapshot-f980a869-f125-11e8-ba35-000c29e70439","size":1,"status":"creating","volumeId":"57b506f9-df1c-41fb-9907-40ef3da20836","metadata":{"lvPath":"/dev/opensds-volumes/volume-57b506f9-df1c-41fb-9907-40ef3da20836"}}
I1126 02:50:04.015334 1 controller.go:694] end to CreateSnapshot
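The idempotency check the report asks for can be sketched as follows, with a hypothetical in-memory catalogue standing in for the OpenSDS snapshot API:

```go
package main

import (
	"errors"
	"fmt"
)

// snapshot is a simplified stand-in for an OpenSDS snapshot record.
type snapshot struct {
	ID       string
	Name     string
	VolumeID string
}

// snapshotStore is a hypothetical in-memory catalogue for illustration.
type snapshotStore struct {
	byName map[string]snapshot
	nextID int
}

// createSnapshot is idempotent: a retry with the same name and source volume
// returns the existing snapshot instead of creating a second one; the same
// name against a different volume is an error (ALREADY_EXISTS in CSI terms).
func (s *snapshotStore) createSnapshot(name, volumeID string) (snapshot, error) {
	if existing, ok := s.byName[name]; ok {
		if existing.VolumeID == volumeID {
			return existing, nil // idempotent success, no duplicate created
		}
		return snapshot{}, errors.New("snapshot name already in use for a different volume")
	}
	s.nextID++
	snap := snapshot{ID: fmt.Sprintf("snap-%d", s.nextID), Name: name, VolumeID: volumeID}
	s.byName[name] = snap
	return snap, nil
}

func main() {
	s := &snapshotStore{byName: map[string]snapshot{}}
	a, _ := s.createSnapshot("snapshot-demo", "vol-1")
	b, _ := s.createSnapshot("snapshot-demo", "vol-1")
	fmt.Println(a.ID == b.ID) // the retry returns the same snapshot
}
```

In the real plugin the lookup would query OpenSDS for a snapshot with the requested name before issuing the POST seen twice in the logs above.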
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
/kind feature
What happened:
ListSnapshots is not implemented according to the CSI spec. It needs to take the input parameters into account:
message ListSnapshotsRequest {
  int32 max_entries = 1;
  string starting_token = 2;
  string source_volume_id = 3;
  string snapshot_id = 4;
}
It also needs to set next_token in the response:
message ListSnapshotsResponse {
  message Entry {
    Snapshot snapshot = 1;
  }
  repeated Entry entries = 1;
  string next_token = 2;
}
Take a look at this driver as an example: https://github.com/kubernetes-csi/drivers/blob/master/pkg/hostpath/controllerserver.go#L266
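The required filtering and paging semantics can be sketched as below; the snap type and in-memory slice are illustrative stand-ins for the plugin's real snapshot listing, and interpreting starting_token as a numeric offset mirrors the hostpath example:

```go
package main

import (
	"fmt"
	"strconv"
)

// snap is a simplified stand-in for a CSI Snapshot entry.
type snap struct{ ID, SourceVolumeID string }

// listSnapshots applies the ListSnapshotsRequest semantics: snapshot_id and
// source_volume_id act as filters, starting_token is a numeric offset into
// the filtered list, and next_token is set only when more entries remain.
func listSnapshots(all []snap, maxEntries int, startingToken, sourceVolumeID, snapshotID string) ([]snap, string, error) {
	var filtered []snap
	for _, s := range all {
		if snapshotID != "" && s.ID != snapshotID {
			continue
		}
		if sourceVolumeID != "" && s.SourceVolumeID != sourceVolumeID {
			continue
		}
		filtered = append(filtered, s)
	}
	start := 0
	if startingToken != "" {
		n, err := strconv.Atoi(startingToken)
		if err != nil || n < 0 || n > len(filtered) {
			return nil, "", fmt.Errorf("invalid starting_token %q", startingToken)
		}
		start = n
	}
	end := len(filtered)
	if maxEntries > 0 && start+maxEntries < end {
		end = start + maxEntries
	}
	next := ""
	if end < len(filtered) {
		next = strconv.Itoa(end)
	}
	return filtered[start:end], next, nil
}

func main() {
	all := []snap{{"s1", "v1"}, {"s2", "v1"}, {"s3", "v2"}}
	page, next, _ := listSnapshots(all, 2, "", "", "")
	fmt.Println(len(page), next)
}
```

Callers page through by feeding next_token back as starting_token until it comes back empty.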
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
/kind feature
What happened:
After creating an nginx app with an OpenSDS volume, two volume attachments were created for the same volume. Replication is not enabled. This problem may be related to the replication code.
root@ubuntu:~/gopath/src/github.com/opensds/nbp# osdsctl volume attachment list
+--------------------------------------+----------------------------------+--------+--------------------------------------+-----------------------------------------------------------------------------------------------------------------+-----------+----------------+
| Id | TenantId | UserId | VolumeId | Mountpoint | Status | AccessProtocol |
+--------------------------------------+----------------------------------+--------+--------------------------------------+-----------------------------------------------------------------------------------------------------------------+-----------+----------------+
| 1ac2fe2d-edc7-4828-a09b-5dc812f11905 | b1be463e196a4c87a5ae76256c7c4a3d | | cce23903-e48f-4dc4-b42b-fbaba41479bd | | available | iscsi |
| 03941288-56a3-4aae-b097-14feae572dc4 | b1be463e196a4c87a5ae76256c7c4a3d | | cce23903-e48f-4dc4-b42b-fbaba41479bd | /dev/disk/by-path/ip-192.168.1.175:3260-iscsi-iqn.2017-10.io.opensds:cce23903-e48f-4dc4-b42b-fbaba41479bd-lun-1 | available | iscsi |
+--------------------------------------+----------------------------------+--------+--------------------------------------+-----------------------------------------------------------------------------------------------------------------+-----------+----------------+
root@ubuntu:~/gopath/src/github.com/opensds/nbp# osdsctl volume list
+--------------------------------------+----------------------+-------------+---------+------+------------------+--------+--------------------------------------+--------------------------------------+
| Id | Name | Description | GroupId | Size | AvailabilityZone | Status | PoolId | ProfileId |
+--------------------------------------+----------------------+-------------+---------+------+------------------+--------+--------------------------------------+--------------------------------------+
| cce23903-e48f-4dc4-b42b-fbaba41479bd | pvc-99849616ba2811e8 | | | 1 | default | inUse | 9fcb495c-4a84-5e74-8a6c-0794f44c7471 | 837740f2-458e-42fa-9bcd-6050d5decb8c |
+--------------------------------------+----------------------+-------------+---------+------+------------------+--------+--------------------------------------+--------------------------------------+
root@ubuntu:~/gopath/src/github.com/opensds/nbp# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
opensdspvc Bound pvc-99849616ba2811e8 1Gi RWO csi-sc-opensdsplugin 33m
What you expected to happen:
Only 1 volume attachment should be created.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened: We need a CLI to test the CSI Plugin Server, which will be based on the client for the CSI plugin.
What you expected to happen: A CLI for testing the CSI Plugin Server.
How to reproduce it (as minimally and precisely as possible): N/A
Anything else we need to know?: N/A
Environment:
uname -a
): Linux

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
The OpenSDS storage platform is not getting removed for VMware.
What you expected to happen:
The remove operation on the storage platform should succeed.
How to reproduce it (as minimally and precisely as possible):
Step 1: Open the OpenSDS Storage plugin from Home/Administration.
Step 2: Select the storage platform you want to remove.
It shows "no actions available".
Anything else we need to know?:
It's not consistent. It happens for the 192.168.20.125 opensds storage.
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened: We need a CSI Plugin Common Library, on which the CSI Plugin will be based.
What you expected to happen: A CSI Plugin Common Library.
How to reproduce it (as minimally and precisely as possible): N/A
Anything else we need to know?: N/A
Environment:
uname -a
): Linux

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
When giving correct credentials for the storage platform while modifying the configuration,
it gives the error "Config storage failed".
What you expected to happen:
It should modify successfully.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind feature
What happened: NodeGetInfo was added in CSI v0.3 and is required. NodeGetId is deprecated, but we still need to keep NodeGetId until v1.0.
What you expected to happen: Implement NodeGetInfo to support v0.3 and keep NodeGetId until v1.0.
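During the deprecation window both calls can be served from one source of truth, as in this minimal sketch; the response structs are simplified stand-ins for the generated CSI v0.3 types, and the node ID value is hypothetical:

```go
package main

import "fmt"

// Simplified stand-ins for the CSI v0.3 responses; the real types are
// generated from the CSI protobuf spec.
type NodeGetIdResponse struct{ NodeId string }
type NodeGetInfoResponse struct {
	NodeId            string
	MaxVolumesPerNode int64
}

// nodeID would be discovered from the host (hostname/iqn) in the real plugin;
// the value here is a placeholder.
func nodeID() string { return "example-node" }

// NodeGetId stays for pre-v1.0 callers; NodeGetInfo is the v0.3 replacement.
// Both must report the same node identifier so attach/detach keeps working
// regardless of which RPC the CO uses.
func NodeGetId() *NodeGetIdResponse     { return &NodeGetIdResponse{NodeId: nodeID()} }
func NodeGetInfo() *NodeGetInfoResponse { return &NodeGetInfoResponse{NodeId: nodeID()} }

func main() {
	fmt.Println(NodeGetId().NodeId == NodeGetInfo().NodeId)
}
```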
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
What happened:
nbp\opensds-provisioner\pkg\client\client.go does not import github.com/opensds/opensds/pkg/utils/constants
What you expected to happen:
nbp\opensds-provisioner\pkg\client\client.go should import github.com/opensds/opensds/pkg/utils/constants
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
None
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
/kind feature
What happened:
While testing the latest code in the nbp development branch, I ran into the following bug. I created a PVC, and it shows that it was bound to a PV. However, when checking the volume in the OpenSDS dashboard, this volume was actually in error status.
Searching the osdslet logs showed that the problem was that no default profile had been created.
This may be a bug in Hotpot, but since I found it by running the CSI plugin, I opened the issue here.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
kubectl create -f examples/kubernetes/pvc_sc.yaml
root@ubuntu:~/gopath/src/github.com/opensds/nbp/csi/server/examples/kubernetes# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
opensdspvc Bound pvc-1ab00c49b60411e8 1Gi RWO csi-sc-opensdsplugin 5s
root@ubuntu:~/gopath/src/github.com/opensds/nbp/csi/server/examples/kubernetes# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-1ab00c49b60411e8 1Gi RWO Delete Bound default/opensdspvc csi-sc-opensdsplugin 49s
vi osdslet.ubuntu.root.log.ERROR.20180908-175346.20945
E0908 17:53:46.809628 20945 controller.go:103] Get profile failed: No default profile in db.
E0908 17:53:46.810523 20945 volume.go:88] Marshal volume created result failed: No default profile in db.
E0908 17:53:47.022767 20945 db.go:123] Only the status of volume is available, attachment can be created
E0908 17:53:47.022835 20945 volume.go:329] Create volume attachment failed: Only the status of volume is available, attachment can be created
Anything else we need to know?:
Environment:
uname -a
):

/kind bug
What happened:
We need to have logging in the Storage Adapters.
More error handling is required in the plugin/adapter code.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
I hit this when I installed the NBP project.
What you expected to happen:
The NBP project should install normally based on the wiki info, but some image paths are wrong in these files:
csi-attacher-opensdsplugin.yaml: image: docker.io/k8scsi/opensdsplugin
csi-nodeplugin-opensdsplugin.yaml: image: docker.io/k8scsi/opensdsplugin
csi-provisioner-opensdsplugin.yaml: image: docker.io/k8scsi/opensdsplugin
In these files, the image path should be changed to:
image: docker.io/opensdsio/csiplugin
How to reproduce it (as minimally and precisely as possible):
100% reproducible
Anything else we need to know?: NA
Environment:
uname -a
):

/kind bug
What happened:
vmware/ngc/NGC-Register/src/main/resources/application.properties:
pwd etc. are stored in plain text.
What you expected to happen:
Please encrypt the security properties.
How to reproduce it (as minimally and precisely as possible):
Code walk-through
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened: The CSI Plugin Server will be implemented according to the CSI spec.
https://github.com/container-storage-interface/spec
We need to make Unit Test for all interfaces.
CSI includes the following interfaces:
GetSupportedVersions
GetPluginInfo
CreateVolume
DeleteVolume
ControllerPublishVolume
ControllerUnpublishVolume
ValidateVolumeCapabilities
ListVolumes
GetCapacity
ControllerGetCapabilities
NodePublishVolume
NodeUnpublishVolume
GetNodeID
ProbeNode
NodeGetCapabilities
What you expected to happen: Unit Test for CSI Plugin Server Implementations.
How to reproduce it (as minimally and precisely as possible): N/A
Anything else we need to know?: N/A
Environment:
uname -a
): Linux

Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
nbp always creates a new client every time it calls opensds, which results in inefficiency.
What you expected to happen:
Use a cache to speed up program response.
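One way to avoid rebuilding the client per call is a small per-endpoint cache; a minimal sketch, with the client struct standing in for the real OpenSDS client constructor:

```go
package main

import (
	"fmt"
	"sync"
)

// client stands in for the OpenSDS API client that nbp constructs today on
// every call.
type client struct{ endpoint string }

func newClient(endpoint string) *client { return &client{endpoint: endpoint} }

// clientCache hands out one client per endpoint instead of building a new
// one per request; the mutex keeps it safe for concurrent CSI calls.
type clientCache struct {
	mu      sync.Mutex
	clients map[string]*client
}

func (c *clientCache) get(endpoint string) *client {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.clients == nil {
		c.clients = make(map[string]*client)
	}
	if cl, ok := c.clients[endpoint]; ok {
		return cl
	}
	cl := newClient(endpoint)
	c.clients[endpoint] = cl
	return cl
}

func main() {
	cache := &clientCache{}
	a := cache.get("http://127.0.0.1:50040")
	b := cache.get("http://127.0.0.1:50040")
	fmt.Println(a == b) // same cached instance is reused
}
```

If the client carries auth state (e.g. a Keystone token), the cached entry would also need an expiry/refresh check before reuse.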
Environment:
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
CSI v1.1 supports the volume expand capability.
What you expected to happen:
CSI plugin should support this capability.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
BUG REPORT
What happened:
See sodafoundation/installer#27 for problem description.
What you expected to happen:
Install open-iscsi
package in Dockerfile.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened: We need to release the NBP plugins as Docker images, so we need to build, deploy and test the Docker images.
What you expected to happen: Release the NBP plugins as Docker images.
How to reproduce it (as minimally and precisely as possible): N/A
Anything else we need to know?: N/A
Environment:
uname -a
): Linux

Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind feature
What happened: Failed to start CSI plugin pods on my setup.
What you expected to happen: After "kubectl create -f opensds/nbp/csi/server/deploy/kubernetes", I expect 3 CSI plugin pods up and running, but they failed to start.
How to reproduce it (as minimally and precisely as possible):
Deployed using opensds-installer ansible in csi mode. CSI plugin pods failed to start.
xyang@ubuntu:~/go/src/github.com/opensds/nbp/csi/server/deploy$ kubectl get pod
NAME READY STATUS RESTARTS AGE
csi-attacher-opensdsplugin-0 0/2 CrashLoopBackOff 12 6m
csi-nodeplugin-opensdsplugin-9drgf 0/2 CrashLoopBackOff 12 6m
csi-provisioner-opensdsplugin-0 0/2 CrashLoopBackOff 12 6m
$ kubectl describe pod csi-provisioner-opensdsplugin-0
Events:
Type Reason Age From Message
Normal Scheduled 4m default-scheduler Successfully assigned csi-provisioner-opensdsplugin-0 to 127.0.0.1
Normal SuccessfulMountVolume 4m kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "socket-dir"
Normal SuccessfulMountVolume 4m kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "csi-provisioner-token-ds6vq"
Normal Created 4m (x3 over 4m) kubelet, 127.0.0.1 Created container
Warning Failed 4m (x3 over 4m) kubelet, 127.0.0.1 Error: failed to start container "csi-provisioner": Error response from daemon: linux mounts: Could not find source mount of /var/lib/kubelet/pods/af360ec3-6cbc-11e8-95d2-000c29f57be0/volumes/kubernetes.ioempty-dir/socket-dirempty-dir/socket-dir
Normal Pulled 4m (x3 over 4m) kubelet, 127.0.0.1 Container image "opensdsio/csiplugin:latest" already present on machine
Normal Created 4m (x3 over 4m) kubelet, 127.0.0.1 Created container
Warning Failed 4m (x3 over 4m) kubelet, 127.0.0.1 Error: failed to start container "opensds": Error response from daemon: linux mounts: Could not find source mount of /var/lib/kubelet/pods/af360ec3-6cbc-11e8-95d2-000c29f57be0/volumes/kubernetes.io
Warning BackOff 4m (x2 over 4m) kubelet, 127.0.0.1 Back-off restarting failed container
Warning BackOff 4m (x2 over 4m) kubelet, 127.0.0.1 Back-off restarting failed container
Normal Pulled 4m (x4 over 4m) kubelet, 127.0.0.1 Container image "quay.io/k8scsi/csi-provisioner:v0.2.0" already present on machine
kubectl describe pod csi-nodeplugin-opensdsplugin-9drgf
Events:
Type Reason Age From Message
Normal SuccessfulMountVolume 9m kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "pods-mount-dir"
Normal SuccessfulMountVolume 9m kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "pods-probe-dir"
Normal SuccessfulMountVolume 9m kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "iscsi-dir"
Normal SuccessfulMountVolume 9m kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "ceph-dir"
Normal SuccessfulMountVolume 9m kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "socket-dir"
Normal SuccessfulMountVolume 9m kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "csi-nodeplugin-token-dnff5"
Warning Failed 9m kubelet, 127.0.0.1 Error: failed to start container "opensds": Error response from daemon: linux mounts: Could not find source mount of /etc/ceph
Normal Created 9m (x3 over 9m) kubelet, 127.0.0.1 Created container
Warning Failed 9m (x3 over 9m) kubelet, 127.0.0.1 Error: failed to start container "driver-registrar": Error response from daemon: linux mounts: Could not find source mount of /var/lib/kubelet/plugins/csi-opensdsplugin
Normal Pulled 9m (x3 over 9m) kubelet, 127.0.0.1 Container image "opensdsio/csiplugin:latest" already present on machine
Normal Created 9m (x3 over 9m) kubelet, 127.0.0.1 Created container
Normal Pulled 9m (x3 over 9m) kubelet, 127.0.0.1 Container image "quay.io/k8scsi/driver-registrar:v0.2.0" already present on machine
Warning Failed 9m (x2 over 9m) kubelet, 127.0.0.1 Error: failed to start container "opensds": Error response from daemon: linux mounts: Could not find source mount of /etc/iscsi
Warning BackOff 4m (x23 over 9m) kubelet, 127.0.0.1 Back-off restarting failed container
kubectl describe pod csi-attacher-opensdsplugin-0
Events:
Type Reason Age From Message
Normal Scheduled 9m default-scheduler Successfully assigned csi-attacher-opensdsplugin-0 to 127.0.0.1
Normal SuccessfulMountVolume 9m kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "iscsi-dir"
Normal SuccessfulMountVolume 9m kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "ceph-dir"
Normal SuccessfulMountVolume 9m kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "socket-dir"
Normal SuccessfulMountVolume 9m kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "csi-attacher-token-vn2kt"
Normal Created 9m (x3 over 9m) kubelet, 127.0.0.1 Created container
Normal Pulled 9m (x3 over 9m) kubelet, 127.0.0.1 Container image "quay.io/k8scsi/csi-attacher:v0.2.0" already present on machine
Warning Failed 9m (x3 over 9m) kubelet, 127.0.0.1 Error: failed to start container "csi-attacher": Error response from daemon: linux mounts: Could not find source mount of /var/lib/kubelet/pods/af319f82-6cbc-11e8-95d2-000c29f57be0/volumes/kubernetes.ioempty-dir/socket-dirempty-dir/socket-dir
Normal Pulled 9m (x3 over 9m) kubelet, 127.0.0.1 Container image "opensdsio/csiplugin:latest" already present on machine
Normal Created 9m (x3 over 9m) kubelet, 127.0.0.1 Created container
Warning Failed 9m (x3 over 9m) kubelet, 127.0.0.1 Error: failed to start container "opensds": Error response from daemon: linux mounts: Could not find source mount of /var/lib/kubelet/pods/af319f82-6cbc-11e8-95d2-000c29f57be0/volumes/kubernetes.io
Warning BackOff 9m (x2 over 9m) kubelet, 127.0.0.1 Back-off restarting failed container
Warning BackOff 4m (x24 over 9m) kubelet, 127.0.0.1 Back-off restarting failed container
But socket-dir is available:
root@ubuntu:/var/lib/kubelet/pods/af360ec3-6cbc-11e8-95d2-000c29f57be0/volumes/kubernetes.io~empty-dir# ls -l
total 4
drwxrwxrwx 2 root root 4096 Jun 10 07:43 socket-dir
After reading this issue kubernetes-retired/kubernetes-anywhere#88, I did a bind mount:
sudo mount -o bind /var/lib/kubelet /var/lib/kubelet
sudo mount --make-shared /var/lib/kubelet
sudo mount -o bind /etc/iscsi /etc/iscsi
sudo mount --make-shared /etc/iscsi
sudo mount -o bind /etc/ceph /etc/ceph
sudo mount --make-shared /etc/ceph
Deleted the pods and recreated them. After that, the provisioner and attacher pods started successfully:
xyang@ubuntu:~/go/src/github.com/opensds/nbp/csi/server/deploy$ kubectl get pod
NAME READY STATUS RESTARTS AGE
csi-attacher-opensdsplugin-0 2/2 Running 0 4m
csi-nodeplugin-opensdsplugin-sdhs2 1/2 CrashLoopBackOff 5 4m
csi-provisioner-opensdsplugin-0 2/2 Running 0 4m
NodePublishVolume is the place to do the "bind mount", while NodeStageVolume is the place to mount at a "global" directory. We should implement NodeStageVolume and do the bind mount in NodePublishVolume.
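The split described above can be sketched as follows; the helper names are hypothetical, and the real plugin would invoke them from NodeStageVolume and NodePublishVolume respectively:

```go
package main

import (
	"fmt"
	"os/exec"
)

// stageMountArgs builds the mount arguments for NodeStageVolume:
// mount the attached device at the global staging path.
func stageMountArgs(device, stagingPath, fsType string) []string {
	return []string{"-t", fsType, device, stagingPath}
}

// publishMountArgs builds the mount arguments for NodePublishVolume:
// bind-mount the staging path into the pod's target path.
func publishMountArgs(stagingPath, targetPath string) []string {
	return []string{"--bind", stagingPath, targetPath}
}

// runMount executes mount(8) with the given arguments (requires root).
func runMount(args []string) error {
	out, err := exec.Command("mount", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("mount %v failed: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	fmt.Println(stageMountArgs("/dev/sdb", "/var/lib/kubelet/plugins/staging/vol1", "ext4"))
	fmt.Println(publishMountArgs("/var/lib/kubelet/plugins/staging/vol1",
		"/var/lib/kubelet/pods/p1/volumes/vol1"))
}
```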
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
This is the parent issue to track all child tasks
What you expected to happen:
child issues created and closed.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind feature
Creating this issue as a reminder that before we release the OpenSDS Bali version, we should first update the opensds dependency in nbp.
What happened:
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened: We need a client to test the CSI Plugin server.
What you expected to happen: A client for testing the CSI Plugin server.
How to reproduce it (as minimally and precisely as possible): N/A
Anything else we need to know?: N/A
Environment:
uname -a
): Linux
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
I built Kubernetes and ran the service catalog.
I also ran the service broker with OpenSDS.
But the created service instances always have status false.
What you expected to happen:
Kubernetes can use service instances and service bindings.
How to reproduce it (as minimally and precisely as possible):
I have a Kubernetes cluster: 1 master, 2 nodes, running on KVM.
The platform is Kubic, which is distributed by openSUSE for building Kubernetes.
I have one LVM volume on the master.
OpenSDS runs on the master, with a dock set up on top of that LVM. (OpenSDS recognized the LVM as a pool.)
When I created a service instance, OpenSDS created a volume for the PVC,
but that is all.
I referenced at https://github.com/opensds/opensds/wiki/OpenSDS-Installation-with-Kubernetes-Service-Catalog
I adjusted:
service catalog:
I turned off the health check, because it failed often.
service broker:
I replaced spec.spec.volumes.hostpath.path with /var/lib/ca-certificates in broker-deployment.yaml
Anything else we need to know?:
Environment:
NBP version: installed via helm a few days ago.
OS (e.g. from /etc/os-release):
openSUSE Tumbleweed Kubic
Kernel (e.g. uname -a
):
Linux linux-36zj 5.0.5-1-default #1 SMP Wed Mar 27 11:22:35 UTC 2019 (0fb0b14) x86_64 x86_64 x86_64 GNU/Linux
Install tools:
Others:
log attached.
catalog-catalog-apiserver.log
catalog-catalog-controller-manager.log
service-broker-service-broker.log
What am I doing wrong?
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
In the CreateVolume method, the CSI plugin gets fs_type from the parameters of CreateVolumeRequest; this does not follow the CSI specification.
What you expected to happen:
fs_type should be retrieved from the VolumeCapabilities of CreateVolumeRequest.
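A sketch of the expected lookup, using minimal stand-in structs for the CSI message types (the real plugin would use the generated csi protobuf types instead):

```go
package main

import "fmt"

// MountVolume and VolumeCapability are simplified stand-ins for the
// CSI protobuf types of the same names.
type MountVolume struct {
	FsType string
}

type VolumeCapability struct {
	Mount *MountVolume
}

// fsTypeFromCapabilities returns the fs_type carried in the request's
// volume_capabilities, falling back to a default when none is set.
func fsTypeFromCapabilities(caps []*VolumeCapability, def string) string {
	for _, c := range caps {
		if c != nil && c.Mount != nil && c.Mount.FsType != "" {
			return c.Mount.FsType
		}
	}
	return def
}

func main() {
	caps := []*VolumeCapability{{Mount: &MountVolume{FsType: "xfs"}}}
	fmt.Println(fsTypeFromCapabilities(caps, "ext4")) // xfs
	fmt.Println(fsTypeFromCapabilities(nil, "ext4"))  // ext4
}
```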
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened: The CSI Plugin server will be implemented according to the CSI specification.
https://github.com/container-storage-interface/spec
CSI includes the following interfaces:
GetSupportedVersions
GetPluginInfo
CreateVolume
DeleteVolume
ControllerPublishVolume
ControllerUnpublishVolume
ValidateVolumeCapabilities
ListVolumes
GetCapacity
ControllerGetCapabilities
NodePublishVolume
NodeUnpublishVolume
GetNodeID
ProbeNode
NodeGetCapabilities
What you expected to happen: CSI Plugin Server Implementations.
How to reproduce it (as minimally and precisely as possible): N/A
Anything else we need to know?: N/A
Environment:
uname -a
): Linux
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
Need to add logging in Storage Adapters.
Add more Error Handling for the Adapters.
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
What happened:
Deployed using opensds-installer in csi mode with LVM backend. Created a volume with "kubectl create -f nginx.yml". Deleted the volume with "kubectl delete -f nginx.yml".
The volume status went from "available" to "deleting" to "errorDeleting". It was still under /dev/opensds-volumes.
After I restarted the OpenSDS CSI plugin pods and created another volume using "kubectl create -f nginx.yml", the first volume disappeared from /dev/opensds-volumes and from the db as well. (Note: I tried again, but this time the volume remained in errorDeleting and was not deleted from /dev/opensds-volumes either.)
osdsctl volume list
+--------------------------------------+------------------------------------------+-------------+---------+------+------------------+-----------+--------------------------------------+--------------------------------------+
| Id | Name | Description | GroupId | Size | AvailabilityZone | Status | PoolId | ProfileId |
+--------------------------------------+------------------------------------------+-------------+---------+------+------------------+-----------+--------------------------------------+--------------------------------------+
| f4301a4f-2738-428a-88ff-38ec3d846fbb | pvc-03b24bbb-6cd0-11e8-95d2-000c29f57be0 | | | 1 | default | available | 53b6de1a-df3c-50bd-af38-9cb5d4cf6db9 | faa9fe88-91c4-4a70-8121-525b1a055d74 |
+--------------------------------------+------------------------------------------+-------------+---------+------+------------------+-----------+--------------------------------------+--------------------------------------+
osdsctl volume list
+--------------------------------------+------------------------------------------+-------------+---------+------+------------------+----------+--------------------------------------+--------------------------------------+
| Id | Name | Description | GroupId | Size | AvailabilityZone | Status | PoolId | ProfileId |
+--------------------------------------+------------------------------------------+-------------+---------+------+------------------+----------+--------------------------------------+--------------------------------------+
| f4301a4f-2738-428a-88ff-38ec3d846fbb | pvc-03b24bbb-6cd0-11e8-95d2-000c29f57be0 | | | 1 | default | deleting | 53b6de1a-df3c-50bd-af38-9cb5d4cf6db9 | faa9fe88-91c4-4a70-8121-525b1a055d74 |
+--------------------------------------+------------------------------------------+-------------+---------+------+------------------+----------+--------------------------------------+--------------------------------------+
osdsctl volume list
+--------------------------------------+------------------------------------------+-------------+---------+------+------------------+---------------+--------------------------------------+--------------------------------------+
| Id | Name | Description | GroupId | Size | AvailabilityZone | Status | PoolId | ProfileId |
+--------------------------------------+------------------------------------------+-------------+---------+------+------------------+---------------+--------------------------------------+--------------------------------------+
| f4301a4f-2738-428a-88ff-38ec3d846fbb | pvc-03b24bbb-6cd0-11e8-95d2-000c29f57be0 | | | 1 | default | errorDeleting | 53b6de1a-df3c-50bd-af38-9cb5d4cf6db9 | faa9fe88-91c4-4a70-8121-525b1a055d74 |
+--------------------------------------+------------------------------------------+-------------+---------+------+------------------+---------------+--------------------------------------+--------------------------------------+
I suspect this was because we didn't log out of the iSCSI connection after the volume was "deleted" from K8S. I don't know if detach was called, but it should have been.
We should do the iSCSI login in NodePublishVolume and the iSCSI logout in NodeUnpublishVolume so that login/logout is performed only once.
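A sketch of the corresponding iscsiadm invocations; the target IQN and portal would come from the connection info returned by OpenSDS, and the helper names are hypothetical:

```go
package main

import "fmt"

// loginArgs builds the iscsiadm arguments to log in to the target
// (to be run once, in NodePublishVolume).
func loginArgs(targetIQN, portal string) []string {
	return []string{"-m", "node", "-T", targetIQN, "-p", portal, "--login"}
}

// logoutArgs builds the iscsiadm arguments to log out of the target
// (to be run once, in NodeUnpublishVolume).
func logoutArgs(targetIQN, portal string) []string {
	return []string{"-m", "node", "-T", targetIQN, "-p", portal, "--logout"}
}

func main() {
	fmt.Println("iscsiadm", loginArgs("iqn.2017-10.io.opensds:vol1", "192.168.0.2:3260"))
	fmt.Println("iscsiadm", logoutArgs("iqn.2017-10.io.opensds:vol1", "192.168.0.2:3260"))
}
```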
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened: When I began to develop the NBP plugin for CSI, I noticed that we need to implement NodePublishVolume and NodeUnpublishVolume as CSI defines them. NodePublishVolume and NodeUnpublishVolume actually include volume mount/unmount and volume attach/detach.
But in the NBP project, different OpenSDS controller backends have different attach and detach processes,
and NBP does not know which kind of OpenSDS controller backend created the current volume, so it is not possible to process NodePublishVolume and NodeUnpublishVolume.
What you expected to happen: A solution for attach and detach.
How to reproduce it (as minimally and precisely as possible): N/A
Anything else we need to know?: N/A
Environment:
uname -a
): Linux
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
The CSI Specification defines error codes for each of the scenarios in which the CSI interfaces are invoked.
Currently the NBP CSI Plugin does not handle all the error cases and does not report the correct error code for some of the error flows.
Reference link: https://github.com/container-storage-interface/spec/blob/master/spec.md
What you expected to happen:
Error handling and error-code support need to be added on the CSI plugin side.
With this fix, the plugin can be compliant with CSI Specification v1.1.0 with respect to error-code handling.
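A sketch of the kind of mapping involved: the code names are gRPC status codes the CSI spec prescribes, while the condition names here are hypothetical internal error conditions invented for illustration:

```go
package main

import "fmt"

// Condition enumerates hypothetical internal error conditions in the plugin.
type Condition int

const (
	VolumeNotFound Condition = iota
	VolumeAlreadyExistsIncompatible
	MissingRequiredField
	BackendFailure
)

// grpcCodeFor maps an internal condition to the gRPC status code name
// that the CSI specification prescribes for that scenario.
func grpcCodeFor(c Condition) string {
	switch c {
	case VolumeNotFound:
		return "NOT_FOUND"
	case VolumeAlreadyExistsIncompatible:
		return "ALREADY_EXISTS"
	case MissingRequiredField:
		return "INVALID_ARGUMENT"
	default:
		return "INTERNAL"
	}
}

func main() {
	fmt.Println(grpcCodeFor(VolumeNotFound))        // NOT_FOUND
	fmt.Println(grpcCodeFor(MissingRequiredField))  // INVALID_ARGUMENT
}
```

In the real plugin these would be returned as grpc status errors (e.g. via google.golang.org/grpc/status) rather than strings.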
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
/kind feature
Add support for the rbd-nbd feature-rich client to improve support for Ceph RBD.
The CSI plugin offers support for Ceph RBD based on the krbd kernel client. Unfortunately, krbd can't use the librbd user-space library that gets most of the development focus. This causes feature-gap problems and can cause OpenSDS volumes to fail to mount.
Besides the feature gap, krbd exhibits an additional drawback: being entirely kernel-space impacts fault-tolerance, as any kernel panic affects the whole node, not only a single Pod using RBD storage.
Those issues can be addressed by employing rbd-nbd, a thin adapter between the NBD subsystem of the Linux kernel and librbd.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
root@ecs-f386-0002:~/gopath/src/github.com/opensds/nbp# kubectl describe pod csi-nodeplugin-opensdsplugin-k8pfd
Name: csi-nodeplugin-opensdsplugin-k8pfd
Namespace: default
Node: 127.0.0.1/127.0.0.1
Start Time: Thu, 19 Apr 2018 10:37:38 +0800
Labels: app=csi-nodeplugin-opensdsplugin
controller-revision-hash=4198876669
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 127.0.0.1
Controlled By: DaemonSet/csi-nodeplugin-opensdsplugin
Containers:
driver-registrar:
Container ID: docker://570db7e9b56fb31a9792ea3d2ca29c94f40663382dda3ae3f9052a823b77dd50
Image: quay.io/k8scsi/driver-registrar:v0.2.0
Image ID: docker-pullable://quay.io/k8scsi/driver-registrar@sha256:9a84ec490b5ff5390b12be21acf707273781cd0911cc597712a254bc1862f220
Port: <none>
Host Port: <none>
Args:
--v=5
--csi-address=$(ADDRESS)
State: Running
Started: Thu, 19 Apr 2018 10:37:39 +0800
Ready: True
Restart Count: 0
Environment:
ADDRESS: /csi/csi.sock
KUBE_NODE_NAME: (v1:spec.nodeName)
Mounts:
/csi from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from csi-nodeplugin-token-b97jm (ro)
opensds:
Container ID: docker://ece89a9dee9830012f1d126007921b7f39c0f7674f005aa081332e3431dc537b
Image: opensdsio/csiplugin:latest
Image ID: docker://sha256:98509ae7a22ec946ec2a1acd03447d84e183ccf7658305b2386cb939bf57a694
Port: <none>
Host Port: <none>
Args:
--csiEndpoint=$(CSI_ENDPOINT)
--opensdsEndpoint=$(OPENSDS_ENDPOINT)
State: Waiting
Reason: RunContainerError
Last State: Terminated
Reason: ContainerCannotRun
Message: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/a03527d4-437a-11e8-88cf-fa163ee32c02/containers/opensds/6f8cebbc\\\" to rootfs \\\"/var/lib/docker/overlay2/4eec1c13c42dc43c2e6b127834064d1aa517c75b1348401321df3221db55aae9/merged\\\" at \\\"/var/lib/docker/overlay2/4eec1c13c42dc43c2e6b127834064d1aa517c75b1348401321df3221db55aae9/merged/dev/termination-log\\\" caused \\\"no such file or directory\\\"\"": unknown
Exit Code: 128
Started: Thu, 19 Apr 2018 10:37:40 +0800
Finished: Thu, 19 Apr 2018 10:37:40 +0800
Ready: False
Restart Count: 1
Environment:
CSI_ENDPOINT: unix://csi/csi.sock
OPENSDS_ENDPOINT: <set to the key 'opensdsendpoint' of config map 'csi-configmap-opensdsplugin'> Optional: false
Mounts:
/csi from socket-dir (rw)
/dev from host-dev (rw)
/etc/ceph/ from ceph-dir (rw)
/etc/iscsi/ from iscsi-dir (rw)
/var/lib/kubelet/pods from pods-mount-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from csi-nodeplugin-token-b97jm (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
socket-dir:
Type: HostPath (bare host directory volume)
Path: /csi
HostPathType: DirectoryOrCreate
pods-mount-dir:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/pods
HostPathType: Directory
host-dev:
Type: HostPath (bare host directory volume)
Path: /dev
HostPathType: Directory
iscsi-dir:
Type: HostPath (bare host directory volume)
Path: /etc/iscsi/
HostPathType: Directory
ceph-dir:
Type: HostPath (bare host directory volume)
Path: /etc/ceph/
HostPathType: Directory
csi-nodeplugin-token-b97jm:
Type: Secret (a volume populated by a Secret)
SecretName: csi-nodeplugin-token-b97jm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 17s kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "ceph-dir"
Normal SuccessfulMountVolume 17s kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "iscsi-dir"
Normal SuccessfulMountVolume 17s kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "socket-dir"
Normal SuccessfulMountVolume 17s kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "pods-mount-dir"
Normal SuccessfulMountVolume 17s kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "host-dev"
Normal Created 16s kubelet, 127.0.0.1 Created container
Normal Pulled 16s kubelet, 127.0.0.1 Container image "quay.io/k8scsi/driver-registrar:v0.2.0" already present on machine
Normal SuccessfulMountVolume 16s kubelet, 127.0.0.1 MountVolume.SetUp succeeded for volume "csi-nodeplugin-token-b97jm"
Normal Started 16s kubelet, 127.0.0.1 Started container
Warning Failed 16s kubelet, 127.0.0.1 Error: failed to start container "opensds": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/a03527d4-437a-11e8-88cf-fa163ee32c02/containers/opensds/4db4b97a\\\" to rootfs \\\"/var/lib/docker/overlay2/5fa1a979e27cb9794608f46b836d4fbea08de0334ad7fdb99084b3b1bf325a5a/merged\\\" at \\\"/var/lib/docker/overlay2/5fa1a979e27cb9794608f46b836d4fbea08de0334ad7fdb99084b3b1bf325a5a/merged/dev/termination-log\\\" caused \\\"no such file or directory\\\"\"": unknown
Normal Created 15s (x2 over 16s) kubelet, 127.0.0.1 Created container
Normal Pulled 15s (x2 over 16s) kubelet, 127.0.0.1 Container image "opensdsio/csiplugin:latest" already present on machine
Warning Failed 15s kubelet, 127.0.0.1 Error: failed to start container "opensds": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/a03527d4-437a-11e8-88cf-fa163ee32c02/containers/opensds/6f8cebbc\\\" to rootfs \\\"/var/lib/docker/overlay2/4eec1c13c42dc43c2e6b127834064d1aa517c75b1348401321df3221db55aae9/merged\\\" at \\\"/var/lib/docker/overlay2/4eec1c13c42dc43c2e6b127834064d1aa517c75b1348401321df3221db55aae9/merged/dev/termination-log\\\" caused \\\"no such file or directory\\\"\"": unknown
Warning BackOff 14s kubelet, 127.0.0.1 Back-off restarting failed container
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Start the kubernetes local cluster, in the csi directory of nbp:
Anything else we need to know?:
Environment:
uname -a
): 4.4.0-119-generic
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Concurrency safety issues occur when multiple goroutines create hotpot clients.
What you expected to happen:
The initialization of the hotpot client should work correctly when multiple goroutines access hotpot.
Anything else we need to know?:
The scenario of multiple goroutines accessing the hotpot needs to be tested.
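A common Go fix for this kind of race is to guard client construction with sync.Once; a minimal sketch, with HotpotClient as a hypothetical stand-in for the real client type:

```go
package main

import (
	"fmt"
	"sync"
)

// HotpotClient is a hypothetical stand-in for the hotpot client type.
type HotpotClient struct{}

var (
	once   sync.Once
	client *HotpotClient
)

// GetHotpotClient initializes the client exactly once, even when called
// concurrently from many goroutines.
func GetHotpotClient() *HotpotClient {
	once.Do(func() {
		client = &HotpotClient{}
	})
	return client
}

func main() {
	var wg sync.WaitGroup
	results := make([]*HotpotClient, 8)
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = GetHotpotClient() // concurrent access is safe
		}(i)
	}
	wg.Wait()
	for _, c := range results {
		if c != results[0] {
			panic("client initialized more than once")
		}
	}
	fmt.Println("single shared client")
}
```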
Environment:
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
Analysis and feasibility study of a vROps plugin in OpenSDS NBP.
What you expected to happen:
Create an analysis document which contains
Purpose
Scope
Components
Architecture Details
Dependency
Challenges
How to reproduce it (as minimally and precisely as possible):
This is to understand the vROps plugin.
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
In the Hotpot project we enabled the Travis CI service to build, deploy, and test that project. I think the Sushi project should also enable this service to test the code automatically.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
/kind feature
What happened: CSI v0.2.0 has been cut.
What you expected to happen: CSI plugin should be updated to support 0.2.0.
How to reproduce it (as minimally and precisely as possible): Test with CSI 0.2.0 and latest K8S code (1.10).
Anything else we need to know?: See this PR for example: kubernetes-retired/drivers#56
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened: The CSI Plugin server will be implemented according to the CSI specification.
https://github.com/container-storage-interface/spec
We need to create an E2E test for every interface.
CSI includes the following interfaces:
GetSupportedVersions
GetPluginInfo
CreateVolume
DeleteVolume
ControllerPublishVolume
ControllerUnpublishVolume
ValidateVolumeCapabilities
ListVolumes
GetCapacity
ControllerGetCapabilities
NodePublishVolume
NodeUnpublishVolume
GetNodeID
ProbeNode
NodeGetCapabilities
What you expected to happen: E2E Test for CSI Plugin Server Implementations.
How to reproduce it (as minimally and precisely as possible): N/A
Anything else we need to know?: N/A
Environment:
uname -a
): Linux
Is this a BUG REPORT or FEATURE REQUEST?:
Uncomment only one, leave it on its own line:
/kind bug
/kind feature
What happened:
I created a PVC and then created an nginx app. After deleting the nginx app, the PVC got deleted but the OpenSDS volume was not. It was left in inUse status.
In NodePublishVolume, the volume status is set to inUse.
However in NodeUnpublishVolume, it exited earlier (at the first exit) and therefore didn’t reach the code at the bottom of the function that sets the volume status to Available.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
I found there is a csi-test project located in the kubernetes-csi
code repo. I think we should import that project into the csi module and trigger it in the CI environment.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
@xing-yang Please take a look at that project.
Environment:
uname -a
):
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened: NBP is the north-bound interface for OpenSDS; we need to build a complete project structure so that we can integrate all the plugins for OpenSDS.
What you expected to happen: Build the NBP project structure.
How to reproduce it (as minimally and precisely as possible): N/A
Anything else we need to know?: N/A
Environment:
uname -a
): Linux