
csi-driver's Introduction

This repository is inactive, please refer to: https://github.com/openshift/ovirt-csi-driver

oVirt CSI driver

Docker Repository on Quay

Implementation of a CSI driver for oVirt.

This work is a continuation of the work done in github.com/ovirt/ovirt-openshift-extensions and is where future development of a storage driver for oVirt takes place.

This repo also contains an operator to deploy the driver on OpenShift or Kubernetes; most of its code is based on openshift/csi-operator.

Prerequisites

Before installation, please ensure that you are running a Kubernetes cluster that supports the CSI implementation, such as OpenShift.

Installation

OpenShift/OKD 4:

  • Clone this repository
  • Ensure that you have oc installed locally
  • Run:
    export KUBECONFIG=</path/to/your>/auth/kubeconfig
    oc create -f deploy/csi-driver
    
  • Validate:
    export KUBECONFIG=</path/to/your>/auth/kubeconfig
    oc get pods -n ovirt-csi-driver
    oc new-project zzz-test
    oc create -f deploy/csi-driver/example
    oc get pods -n zzz-test
    

Examples for StorageClass, PVC and Pod:

StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ovirt-csi-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.ovirt.org
parameters:
  # the name of the oVirt storage domain. "nfs" is just an example.
  storageDomainName: "nfs"
  thinProvisioning: "true"

PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: 1g-ovirt-cow-disk
  annotations:
    volume.beta.kubernetes.io/storage-class: ovirt-csi-sc
spec:
  storageClassName: ovirt-csi-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Pod:

apiVersion: v1 
kind: Pod 
metadata:
  name: testpodwithcsi
spec:
  containers:
  - image: busybox
    name: testpodwithcsi
    command: ["sh", "-c", "while true; do ls -la /opt; echo this file system was made availble using ovirt csi driver; sleep 1m; done"]
    imagePullPolicy: Always
    volumeMounts:
    - name: pv0002
      mountPath: "/opt"
  volumes:
  - name: pv0002
    persistentVolumeClaim:
      claimName: 1g-ovirt-cow-disk

Kubernetes:

  • tbc

Development

Deploy

  • operator
  • dev/test

OpenShift vs Kubernetes

  • Credential requests (CredentialsRequest objects) require the OpenShift Cloud Credential Operator in order to be provisioned. You will need to either deploy that operator and create the ovirt-credentials secret in the kube-system namespace, or create the ovirt-credentials secret yourself in the ovirt-csi-driver namespace (a sketch of such a secret follows).
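
If you provision the secret yourself, a minimal sketch of its expected shape is shown below. The key names match the managed ovirt-credentials secret shown in the issues section further down this page; every value here is a placeholder that you must replace with your own engine details (stringData is used to avoid hand-encoding base64):

apiVersion: v1
kind: Secret
metadata:
  name: ovirt-credentials
  namespace: ovirt-csi-driver
type: Opaque
stringData:
  # placeholder engine API endpoint
  ovirt_url: https://engine.example.com/ovirt-engine/api
  ovirt_username: admin@internal
  ovirt_password: changeme
  # PEM bundle of the engine CA certificate
  ovirt_ca_bundle: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  ovirt_cafile: ""
  ovirt_insecure: "false"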

Troubleshooting

Get all the objects for the CSI driver

$ oc get all -n ovirt-csi-driver
NAME                       READY   STATUS            RESTARTS   AGE
pod/ovirt-csi-node-2nptq   0/2     PodInitializing   0          2d23h
pod/ovirt-csi-node-7t266   2/2     Running           0          15m
pod/ovirt-csi-plugin-0     0/3     PodInitializing   0          2d23h

NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/ovirt-csi-node   1         1         1       1            1           <none>          2d23h

NAME                                READY   AGE
statefulset.apps/ovirt-csi-plugin   0/1     2d23h

ovirt-csi-plugin is a pod, part of the StatefulSet, that runs the controller logic (create volume, delete volume, attach volume and more). It runs the following containers: csi-external-attacher (triggers ControllerPublish/UnpublishVolume), csi-external-provisioner (mainly for Create/DeleteVolume) and ovirt-csi-driver.

ovirt-csi-node is a DaemonSet running the csi-driver-registrar (provides information about the driver via GetPluginInfo and GetPluginCapabilities) and ovirt-csi-driver.

The sidecar containers (csi-external-attacher, csi-external-provisioner, csi-driver-registrar) run alongside ovirt-csi-driver and call into it via gRPC.
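
To confirm which containers actually run in each workload on your cluster (assuming the workload names shown above), the container names can be listed with jsonpath:

oc -n ovirt-csi-driver get statefulset/ovirt-csi-plugin -o jsonpath='{.spec.template.spec.containers[*].name}{"\n"}'
oc -n ovirt-csi-driver get daemonset/ovirt-csi-node -o jsonpath='{.spec.template.spec.containers[*].name}{"\n"}'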

Get inside the pod's containers:

oc -n ovirt-csi-driver rsh -c <container name> pod/ovirt-csi-node-2nptq

Watch logs:

oc logs pods/ovirt-csi-node-2nptq -n ovirt-csi-driver -c <container name> | less

csi-driver's Issues

Truncated disk ID

I am running the csi-driver without openshift but have placed the secret in the namespace with the driver.

When attempting to provision the example I am seeing the following issue.
The drive is provisioned in oVirt and attached to the correct worker (the one the pod is scheduled on), but the ovirt-csi-driver container on that worker shows the following.

I0508 05:53:22.835356 1 node.go:134] Extracting pvc volume name aefb7ca6-ee05-485e-b186-9d93dd22f33a
I0508 05:53:22.868735 1 node.go:141] Extracted pvc volume name aefb7ca6-ee05-485e-b
E0508 05:53:22.869221 1 server.go:125] /csi.v1.Node/NodeStageVolume returned with error: lstat /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_aefb7ca6-ee05-485e-b: no such file or directory
I0508 05:55:24.908890 1 node.go:40] Staging volume aefb7ca6-ee05-485e-b186-9d93dd22f33a with volume_id:"aefb7ca6-ee05-485e-b186-9d93dd22f33a" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4c5f76b8-7b72-460c-b61a-5e6df5247298/globalmount" volume_capability:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1588915773341-8081-csi.ovirt.org" >

A shell on the ovirt-csi-driver container on that worker can see the following
[root@ovirt-csi-node-4qcxg /]# ls /dev/disk/by-id/ | grep aefb7ca6
scsi-0QEMU_QEMU_HARDDISK_aefb7ca6-ee05-485e-b186-9d93dd22f33a

It looks like the driver is truncating the volume name while extracting it and then failing to find the device under the truncated name.
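
For context, here is a minimal, purely illustrative sketch of the by-id lookup involved, with a prefix-glob fallback that would still find the device when the ID in hand is truncated (as in the log above, where the extracted name stops at 20 characters). The function name and path prefix below are assumptions for illustration, not the driver's actual code:

package node

import (
  "fmt"
  "os"
  "path/filepath"
)

// deviceForVolume resolves an oVirt disk ID to its udev by-id symlink.
// The exact path is tried first; if the ID in hand is truncated, a prefix
// glob still matches the full symlink seen on the worker.
func deviceForVolume(volumeID string) (string, error) {
  exact := "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_" + volumeID
  if _, err := os.Lstat(exact); err == nil {
    return exact, nil
  }
  matches, err := filepath.Glob(exact + "*")
  if err != nil || len(matches) == 0 {
    return "", fmt.Errorf("no by-id device found for volume %s", volumeID)
  }
  return matches[0], nil
}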

[OCPRHV-78] Functionality is lost when hosted-engine becomes unresponsive

Okay so the title is a bit misleading. I am seeing behavior where the ovirt hosted-engine hangs on a regular basis due to garbage collection not being quick enough to handle the number of new sessions. This is being tracked as okd-project/okd#110.

This issue brings out a behaviour where, if any requests to attach/detach/create/remove disks are made via the oVirt CSI plugin while the hosted-engine is unresponsive, the hosted-engine is marked internally as unavailable until the ovirt-csi-driver and ovirt-csi-node pods are all restarted.

Volume creations time out with an API error (I will reproduce and paste the error in shortly).
PVC event log:

PersistentVolumeClaim mysqltest, Namespace aaa-test, Mar 26, 4:43 pm
Generated from persistentvolume-controller (3 times in the last few seconds):
waiting for a volume to be created, either by external provisioner "csi.ovirt.org" or manually created by system administrator

PersistentVolumeClaim mysqltest, Namespace aaa-test, Mar 26, 4:43 pm
Generated from csi.ovirt.org_ovirt-csi-plugin-0_5414b70e-3116-45db-978a-b0db3dff57ed (5 times in the last few seconds):
External provisioner is provisioning volume for claim "aaa-test/mysqltest"

PersistentVolumeClaim mysqltest, Namespace aaa-test, Mar 26, 4:43 pm
Generated from csi.ovirt.org_ovirt-csi-plugin-0_5414b70e-3116-45db-978a-b0db3dff57ed (5 times in the last few seconds):
failed to provision volume with StorageClass "ovirt-csi-sc": rpc error: code = Unknown desc = Tag not matched: expect <fault> but got <html>

Volume attach reports MountVolume.MountDevice failed for volume "pvc-9c80df7d-8f33-4cfc-8810-ea12b4006772" : rpc error: code = Unknown desc = exit status 32lsblk failed with lsblk: .: not a block device
It's worth noting that this behaviour starts when the hosted-engine becomes unavailable and continues until all the pods in ovirt-csi-driver are restarted.

csi-external-attacher log:

I0326 16:38:09.492850       1 csi_handler.go:111] "csi-61ca10562a339825f41f6dfe5550ff89848b3f42b0f704e9b51b9abcba3aa112" is already attached
I0326 16:38:09.493331       1 csi_handler.go:105] CSIHandler: finished processing "csi-61ca10562a339825f41f6dfe5550ff89848b3f42b0f704e9b51b9abcba3aa112"
I0326 16:38:09.492524       1 csi_handler.go:105] CSIHandler: finished processing "csi-04b75219a12949c66395fa4b85c742e25e028d7c40d9552cc7cec228f0312017"
I0326 16:38:09.524874       1 csi_handler.go:428] Saving detach error to "csi-7b64936be16eb0077d5933a80e0cd4b129f32c62a6d3d7ca8cd95753618260b5"
I0326 16:38:09.524917       1 csi_handler.go:428] Saving detach error to "csi-a1c3862511e8044fa46773f3499d70ba1c3f6f4a35ba1f1958b0cdbd0570d0df"
I0326 16:38:09.524874       1 csi_handler.go:428] Saving detach error to "csi-a833360681cc09e5eb167799f0f92176f7914aaf52f6bfe1241af4cc00ef9aa7"
I0326 16:38:09.609236       1 csi_handler.go:439] Saved detach error to "csi-7b64936be16eb0077d5933a80e0cd4b129f32c62a6d3d7ca8cd95753618260b5"
I0326 16:38:09.609336       1 csi_handler.go:99] Error processing "csi-7b64936be16eb0077d5933a80e0cd4b129f32c62a6d3d7ca8cd95753618260b5": failed to detach: rpc error: code = Unknown desc = Tag not matched: expect <fault> but got <html>
I0326 16:38:09.609925       1 controller.go:141] Ignoring VolumeAttachment "csi-a1c3862511e8044fa46773f3499d70ba1c3f6f4a35ba1f1958b0cdbd0570d0df" change
I0326 16:38:09.610107       1 controller.go:141] Ignoring VolumeAttachment "csi-7b64936be16eb0077d5933a80e0cd4b129f32c62a6d3d7ca8cd95753618260b5" change
I0326 16:38:09.610248       1 controller.go:141] Ignoring VolumeAttachment "csi-a833360681cc09e5eb167799f0f92176f7914aaf52f6bfe1241af4cc00ef9aa7" change
I0326 16:38:09.620577       1 csi_handler.go:439] Saved detach error to "csi-a833360681cc09e5eb167799f0f92176f7914aaf52f6bfe1241af4cc00ef9aa7"
I0326 16:38:09.620671       1 csi_handler.go:99] Error processing "csi-a833360681cc09e5eb167799f0f92176f7914aaf52f6bfe1241af4cc00ef9aa7": failed to detach: rpc error: code = Unknown desc = Tag not matched: expect <fault> but got <html>
I0326 16:38:09.621063       1 csi_handler.go:439] Saved detach error to "csi-a1c3862511e8044fa46773f3499d70ba1c3f6f4a35ba1f1958b0cdbd0570d0df"
I0326 16:38:09.621110       1 csi_handler.go:99] Error processing "csi-a1c3862511e8044fa46773f3499d70ba1c3f6f4a35ba1f1958b0cdbd0570d0df": failed to detach: rpc error: code = Unknown desc = Tag not matched: expect <fault> but got <html>
I0326 16:38:57.504856       1 reflector.go:370] k8s.io/client-go/informers/factory.go:133: Watch close - *v1beta1.VolumeAttachment total 6 items received

csi-plugin log:

E0326 16:36:42.106809       1 server.go:125] /csi.v1.Controller/ControllerUnpublishVolume returned with error: failed to find attachment by disk a772eca5-8fac-44c4-bbb3-b2fdd2f1ada7 for VM 3e2def01-8fd5-4b8f-8ddb-b208245360bf
E0326 16:36:42.108682       1 server.go:125] /csi.v1.Controller/ControllerUnpublishVolume returned with error: failed to find attachment by disk 7323fa9a-bd56-428a-af5c-fd6219363910 for VM 3e2def01-8fd5-4b8f-8ddb-b208245360bf
E0326 16:36:42.113144       1 server.go:125] /csi.v1.Controller/ControllerUnpublishVolume returned with error: failed to find attachment by disk 7323fa9a-bd56-428a-af5c-fd6219363910 for VM cd2f5c14-359c-4351-97ce-b52070603e61
I0326 16:38:09.497430       1 controller.go:136] Detaching Disk 7323fa9a-bd56-428a-af5c-fd6219363910 from VM 3e2def01-8fd5-4b8f-8ddb-b208245360bf
I0326 16:38:09.497431       1 controller.go:136] Detaching Disk 7323fa9a-bd56-428a-af5c-fd6219363910 from VM cd2f5c14-359c-4351-97ce-b52070603e61
I0326 16:38:09.497430       1 controller.go:136] Detaching Disk a772eca5-8fac-44c4-bbb3-b2fdd2f1ada7 from VM 3e2def01-8fd5-4b8f-8ddb-b208245360bf
E0326 16:38:09.523549       1 server.go:125] /csi.v1.Controller/ControllerUnpublishVolume returned with error: Tag not matched: expect <fault> but got <html>
E0326 16:38:09.524180       1 server.go:125] /csi.v1.Controller/ControllerUnpublishVolume returned with error: Tag not matched: expect <fault> but got <html>
E0326 16:38:09.524287       1 server.go:125] /csi.v1.Controller/ControllerUnpublishVolume returned with error: Tag not matched: expect <fault> but got <html>

ovirt-node log:

I0326 16:38:15.959322       1 node.go:89] Unmounting /var/lib/kubelet/pods/6c09c82b-5383-42d2-9950-eb4904efd5fc/volumes/kubernetes.io~csi/pvc-9c80df7d-8f33-4cfc-8810-ea12b4006772/mount
I0326 16:38:16.165727       1 node.go:40] Staging volume e19b402e-5783-4179-9f65-4bc289f44f62 with volume_id:"e19b402e-5783-4179-9f65-4bc289f44f62" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c80df7d-8f33-4cfc-8810-ea12b4006772/globalmount" volume_capability:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1584894930586-8081-csi.ovirt.org" > 
E0326 16:38:16.191257       1 server.go:125] /csi.v1.Node/NodeStageVolume returned with error: exit status 32lsblk failed with lsblk: .: not a block device
I0326 16:38:16.760689       1 node.go:40] Staging volume e19b402e-5783-4179-9f65-4bc289f44f62 with volume_id:"e19b402e-5783-4179-9f65-4bc289f44f62" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c80df7d-8f33-4cfc-8810-ea12b4006772/globalmount" volume_capability:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1584894930586-8081-csi.ovirt.org" > 
E0326 16:38:16.781994       1 server.go:125] /csi.v1.Node/NodeStageVolume returned with error: exit status 32lsblk failed with lsblk: .: not a block device
I0326 16:38:17.874876       1 node.go:40] Staging volume e19b402e-5783-4179-9f65-4bc289f44f62 with volume_id:"e19b402e-5783-4179-9f65-4bc289f44f62" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c80df7d-8f33-4cfc-8810-ea12b4006772/globalmount" volume_capability:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1584894930586-8081-csi.ovirt.org" > 
E0326 16:38:17.897662       1 server.go:125] /csi.v1.Node/NodeStageVolume returned with error: exit status 32lsblk failed with lsblk: .: not a block device
I0326 16:38:20.012277       1 node.go:40] Staging volume e19b402e-5783-4179-9f65-4bc289f44f62 with volume_id:"e19b402e-5783-4179-9f65-4bc289f44f62" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c80df7d-8f33-4cfc-8810-ea12b4006772/globalmount" volume_capability:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1584894930586-8081-csi.ovirt.org" > 
E0326 16:38:20.035193       1 server.go:125] /csi.v1.Node/NodeStageVolume returned with error: exit status 32lsblk failed with lsblk: .: not a block device
I0326 16:38:24.117533       1 node.go:40] Staging volume e19b402e-5783-4179-9f65-4bc289f44f62 with volume_id:"e19b402e-5783-4179-9f65-4bc289f44f62" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c80df7d-8f33-4cfc-8810-ea12b4006772/globalmount" volume_capability:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1584894930586-8081-csi.ovirt.org" > 
E0326 16:38:24.140299       1 server.go:125] /csi.v1.Node/NodeStageVolume returned with error: exit status 32lsblk failed with lsblk: .: not a block device
I0326 16:38:32.225132       1 node.go:40] Staging volume e19b402e-5783-4179-9f65-4bc289f44f62 with volume_id:"e19b402e-5783-4179-9f65-4bc289f44f62" staging_target_path:"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c80df7d-8f33-4cfc-8810-ea12b4006772/globalmount" volume_capability:<mount:<fs_type:"ext4" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_context:<key:"storage.kubernetes.io/csiProvisionerIdentity" value:"1584894930586-8081-csi.ovirt.org" > 
E0326 16:38:32.250849       1 server.go:125] /csi.v1.Node/NodeStageVolume returned with error: exit status 32lsblk failed with lsblk: .: not a block device

I would have expected that the service would continue to retry the hosted-engine until it becomes available again and then continue to function.
Hope this helps
Cheers
Craig

OKD 4.4 on oVirt 4.3 - oVirt CSI driver - PVC bound Successfully - MountVolume.MountDevice failed for volume "pvc-620d2db6-d845-4b4c-9220-d7e372d30397" : rpc error: code = Unknown desc = exit status 32lsblk failed with lsblk: .: not a block device

When trying to deploy Wordpress from

https://raw.githubusercontent.com/openshift-evangelists/wordpress-quickstart/master/templates/classic-standalone.json

on OKD 4.4 on oVirt 4.3 - oVirt CSI driver

getting the following error after PVC bound Successfully

MountVolume.MountDevice failed for volume "pvc-620d2db6-d845-4b4c-9220-d7e372d30397" : rpc error: code = Unknown desc = exit status 32lsblk failed with lsblk: .: not a block device

readme deploy instructions: namespaces "ovirt-csi-driver" not found

On a newly installed OKD 4.4 cluster on RHV 4.3.9, oc create -f deploy/csi-driver yields:

csidriver.storage.k8s.io/csi.ovirt.org created
namespace/openshift-ovirt-csi-operator created
clusterrolebinding.rbac.authorization.k8s.io/ovirt-csi-controller-provisioner-binding created
clusterrolebinding.rbac.authorization.k8s.io/ovirt-csi-controller-attacher-binding created
clusterrole.rbac.authorization.k8s.io/ovirt-csi-controller-cr created
clusterrole.rbac.authorization.k8s.io/ovirt-csi-node-cr created
clusterrole.rbac.authorization.k8s.io/openshift:csi-driver-controller-leader-election created
clusterrolebinding.rbac.authorization.k8s.io/ovirt-csi-controller-binding created
clusterrolebinding.rbac.authorization.k8s.io/ovirt-csi-leader-binding created
clusterrolebinding.rbac.authorization.k8s.io/ovirt-csi-node-binding created
clusterrolebinding.rbac.authorization.k8s.io/ovirt-csi-node-leader-binding created
credentialsrequest.cloudcredential.openshift.io/ovirt-csi-driver created
Error from server (NotFound): error when creating "deploy/csi-driver/020-autorization.yaml": namespaces "ovirt-csi-driver" not found
Error from server (NotFound): error when creating "deploy/csi-driver/020-autorization.yaml": namespaces "ovirt-csi-driver" not found
Error from server (NotFound): error when creating "deploy/csi-driver/030-node.yaml": namespaces "ovirt-csi-driver" not found
Error from server (NotFound): error when creating "deploy/csi-driver/040-controller.yaml": namespaces "ovirt-csi-driver" not found
Error from server (AlreadyExists): error when creating "deploy/csi-driver/060-credential-request.yaml": credentialsrequests.cloudcredential.openshift.io "ovirt-csi-driver" already exists

Should the manifests be adapted to create the ovirt-csi-driver namespace? Or should instances of that namespace be changed to openshift-ovirt-csi-operator? Also, I don't see an operator installed / created as a result of this command. Is that the expected behavior?

RWX support

It looks like the current version doesn't support RWX. The default storage class should be defined so that RWX requests are rejected with an error.
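
For illustration, a claim like the following (ReadWriteMany against the ovirt-csi-sc class from the examples above) is the kind of request that should be rejected with a clear error rather than silently provisioned:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rwx-should-fail
spec:
  storageClassName: ovirt-csi-sc
  accessModes:
    - ReadWriteMany   # not supported by this driver; expected to fail with an error
  resources:
    requests:
      storage: 1Gi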

Disks appear to be formatted each time they are mounted

Hey everyone,

So I've had a couple of times when I have restarted pods and thought I had been going crazy when the pod comes back up but the volume is empty. I had previously chalked it up to buggy containers or human error. I've just been playing with deploying something and have caught the csi-node pod formatting (or at least attempting to format) the PVC multiple times, with the mkfs failing. The net result for this specific PV is that the data is wiped.

Logs: ovirt-csi-node-qh4gg-ovirt-csi-driver.log

In order to repeat the behaviour:
0) have an OKD 4.4-beta4 cluster running on oVirt 4.3.9 (I don't think the oVirt version matters here)

  1. deploy the ovirt-csi-driver
  2. create a pvc and attach it to a container through deployment config (don't know if this matters for this behaviour, but it is what I am doing)
  3. put some dummy data in the pvc from within the pod
  4. restart your pod
  5. data is gone and csi-node log shows mkfs was attempted and failed.

I hope that I am just being an idiot and doing something wrong, but I can't for the life of me spot it. Nothing on the cluster differs from the defaults for the SC or PVC, and I have seen this behavior on multiple clusters.

Cheers
Craig
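
For reference, CSI node plugins usually avoid this by probing the device for an existing filesystem before running mkfs. A minimal sketch of that guard, using k8s.io/mount-utils (whose FormatAndMount only formats when no filesystem is detected), is shown below; it illustrates the expected behaviour and is not the driver's actual code:

package node

import (
  mount "k8s.io/mount-utils"
  utilexec "k8s.io/utils/exec"
)

// stageDevice mounts the device at target, formatting it with fsType only
// if no existing filesystem is detected on it.
func stageDevice(device, target, fsType string) error {
  mounter := &mount.SafeFormatAndMount{
    Interface: mount.New(""),
    Exec:      utilexec.New(),
  }
  return mounter.FormatAndMount(device, target, fsType, nil)
}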

Drives are not detached from ovirt nodes before machines are removed

When scaling down a cluster by reducing the MachineSet replica count (e.g. from 6 to 3), any disks still attached to the host at the time are deleted as part of removing the VM within oVirt. I suspect this is caused by a delay between the workload being de-scheduled and the machine removal process. I would expect disks to be detached before the machine is removed.

ovirt-csi-driver container reports OS mismatch when pulled on okd 4.4.0

Hi all,

Sorry if this isn't the right place for this bug ticket. I am using @rgolangh's quay.io ovirt-csi-driver image, which was updated today. Part of the update seems to have stopped the ovirt-csi-driver container from running in the pod, with the error:
Failed to pull image "quay.io/rgolangh/ovirt-csi-driver:latest": rpc error: code = Unknown desc = Image operating system mismatch: image uses "", expecting "linux"

Many thanks
Craig

ovirt authentication ignoring "ovirt_ca_bundle" parameter

In internal/ovirt/ovirt.go the connection is built this way:

func newOvirtConnection() (*ovirtsdk.Connection, error) {
  ovirtConfig, err := GetOvirtConfig()
  if err != nil {
    return nil, err
  }
  connection, err := ovirtsdk.NewConnectionBuilder().
    URL(ovirtConfig.URL).
    Username(ovirtConfig.Username).
    Password(ovirtConfig.Password).
    CAFile(ovirtConfig.CAFile).
    Insecure(ovirtConfig.Insecure).
    Build()
  if err != nil {
    return nil, err
  }

  return connection, nil

}

A valid managed ovirt-secret has the following components:
$ oc get -o yaml secret ovirt-credentials

apiVersion: v1
data:
  ovirt_ca_bundle: LS0tLS1...
  ovirt_cafile: ""
  ovirt_insecure: Zm...
  ovirt_password: cm....
  ovirt_url: aHR....
  ovirt_username: b2N...

This fails because the CA embedded in ovirt_ca_bundle is never used: only ovirt_cafile, which is empty, is passed to the connection builder.
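
A sketch of the kind of change that would honour ovirt_ca_bundle is shown below. It assumes the SDK's ConnectionBuilder exposes a CACert([]byte) setter and that the parsed config carries the bundle in a CABundle field; both names are assumptions for illustration, not a confirmed fix:

func newOvirtConnection() (*ovirtsdk.Connection, error) {
  ovirtConfig, err := GetOvirtConfig()
  if err != nil {
    return nil, err
  }
  builder := ovirtsdk.NewConnectionBuilder().
    URL(ovirtConfig.URL).
    Username(ovirtConfig.Username).
    Password(ovirtConfig.Password).
    Insecure(ovirtConfig.Insecure)
  // Prefer an explicit CA file; otherwise fall back to the PEM bundle
  // carried in ovirt_ca_bundle (CABundle is an assumed field name).
  if ovirtConfig.CAFile != "" {
    builder = builder.CAFile(ovirtConfig.CAFile)
  } else if ovirtConfig.CABundle != "" {
    builder = builder.CACert([]byte(ovirtConfig.CABundle))
  }
  return builder.Build()
}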
