
community-operators-prod's People

Contributors

2uasimojo, ack-bot, allda, aneeshkp, awgreene, chanwit, che-incubator-bot, dmesser, esara, f41gh7, github-actions[bot], gl-distribution-oc, gregsheremeta, j0zi, jmazzitelli, jwendell, maistra-bot, mkuznyetsov, mvalahtv, mvalarh, nicolaferraro, openshift-edge-bot, quay-devel, raffaelespazzoli, rh-operator-bundle-bot, robszumski, samisousa, scholzj, ssimk0, vmuzikar


community-operators-prod's Issues

Error: API rate limit exceeded for 3.230.25.92

In #102 (comment) I encountered the following error:

 Checking PR 102 on redhat-openshift-ecosystem/community-operators-prod
{"message":"API rate limit exceeded for 3.230.25.92. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)","documentation_url":"https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting"}
REPO_FULL=null
BRANCH=null
COMMIT=null
REPO=
QUAY_HASH=null
OPRT_SHA=null
OPRT values set [OK]
Going to clone null
fatal: repository 'null' does not exist
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   222    0     0  100   222      0   4188 --:--:-- --:--:-- --:--:--  4188
curl: (22) The requested URL returned error: 404 
{"component":"entrypoint","error":"wrapped process failed: exit status 1","file":"prow/entrypoint/run.go:80","func":"k8s.io/test-infra/prow/entrypoint.Options.Run","level":"error","msg":"Error executing test process","severity":"error","time":"2021-08-12T16:02:18Z"}
error: failed to execute wrapped command: exit status 1 

-- https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/redhat-openshift-ecosystem_community-operators-prod/102/pull-ci-redhat-openshift-ecosystem-community-operators-prod-main-4.7-deploy-operator-on-openshift/1425839988112625664#1:build-log.txt%3A50

It seems like this all stems from a failure to get the PR branch info from the GitHub API.
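
For reference, the same lookup succeeds with an authenticated request, which gets the higher rate limit the error message mentions. A minimal sketch (assuming the job has a GITHUB_TOKEN available, and that REPO_FULL, BRANCH, and COMMIT map to the PR's head fields):

# Hypothetical sketch: fetch the PR head info with an authenticated call
curl -s -H "Authorization: token ${GITHUB_TOKEN}" \
  "https://api.github.com/repos/redhat-openshift-ecosystem/community-operators-prod/pulls/102" \
  | jq -r '.head.repo.full_name, .head.ref, .head.sha'   # REPO_FULL, BRANCH, COMMIT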

/kind bug

Example to prune catalogs with FBC

From @madhukirans in #512 (comment):

Would it be possible to get an example workflow of pruning the upstream catalog index down to a specified set of operators? Is all I have to do to get this working again replacing the 'opm index prune' command with 'rm' commands? The current sqlite workflow I have is:

echo "- pruning catalog index"
  MIRROR_CATALOG_LINE="./opm index prune -f ${OCP_OPERATORS_UPSTREAM_CATALOG} -p `echo $OCP_OPERATORS_KEEP` -t ${BOOTSTRAP_REGISTRY}/catalog-x86-disconnected:latest"

  echo $MIRROR_CATALOG_LINE
  eval $MIRROR_CATALOG_LINE
  
  echo "- pushing pruned catalog index locally"
  MIRROR_CATALOG_PUSH_LINE="podman push ${BOOTSTRAP_REGISTRY}/catalog-x86-disconnected:latest"
  echo $MIRROR_CATALOG_PUSH_LINE
  eval $MIRROR_CATALOG_PUSH_LINE
  
  echo "- sync'ing catalog locally"
  rm -rf manifests-catalog-x86-disconnected
  MIRROR_CATALOG_IMAGES="oc adm catalog mirror ${BOOTSTRAP_REGISTRY}/catalog-x86-disconnected:latest ${BOOTSTRAP_REGISTRY} --to-manifests=manifests-catalog-x86-disconnected --index-filter-by-os='linux/amd64' --manifests-only -a ${PULL_SECRET_JSON_FILE}"
  echo $MIRROR_CATALOG_IMAGES
  eval $MIRROR_CATALOG_IMAGES
  
  cd manifests-catalog-x86-disconnected
  cat mapping.txt |grep -v -e "^${BOOTSTRAP_REGISTRY}" > mappings.txt
  MIRROR_IMAGES_LINE="oc image mirror --skip-multiple-scopes=true -a ${PULL_SECRET_JSON_FILE} --filter-by-os='linux/amd64' -f mappings.txt"
  echo $MIRROR_IMAGES_LINE
  eval $MIRROR_IMAGES_LINE

A working example would be a big help for folks with 'disconnected' requirements.
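
Not an official answer, but a rough FBC equivalent of the 'opm index prune' step might look like the sketch below (untested; assumes a recent opm with FBC support plus jq, and that OCP_OPERATORS_KEEP is a comma-separated package list, as with 'opm index prune -p'):

# Render the old index to FBC JSON, keeping only the packages to retain
mkdir -p pruned-catalog/configs
opm render "${OCP_OPERATORS_UPSTREAM_CATALOG}" \
  | jq --arg keep "${OCP_OPERATORS_KEEP}" \
      '($keep | split(",")) as $pkgs | select((.package // .name) as $p | $pkgs | index($p))' \
  > pruned-catalog/configs/index.json

# Validate, generate an FBC Dockerfile, then build and push as before
cd pruned-catalog
opm validate configs
opm generate dockerfile configs
podman build -t "${BOOTSTRAP_REGISTRY}/catalog-x86-disconnected:latest" -f configs.Dockerfile .
podman push "${BOOTSTRAP_REGISTRY}/catalog-x86-disconnected:latest"

The rest of the workflow (oc adm catalog mirror, oc image mirror) should be able to stay the same.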

Failed to pull kind-registry:5000/test-operator/catalog:v4.10s

Hi, we are getting the following failure with our PR:
#1088

test orange / Deploy o7t (v4.10-db)

fatal: [localhost]: FAILED! => changed=true 
  cmd: podman pull kind-registry:5000/test-operator/catalog:v4.10s
  delta: '0:00:00.121946'
  end: '2022-04-14 05:56:25.753183'
  msg: non-zero return code
  rc: 125
  start: '2022-04-14 05:56:25.631237'
  stderr: |-
    Trying to pull kind-registry:5000/test-operator/catalog:v4.10s...
    Error: initializing source docker://kind-registry:5000/test-operator/catalog:v4.10s: reading manifest v4.10s in kind-registry:5000/test-operator/catalog: manifest unknown: manifest unknown
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>

PLAY RECAP *********************************************************************
localhost                  : ok=211  changed=27   unreachable=0    failed=1    skipped=278  rescued=0    ignored=1   

Error: Process completed with exit code 2.

Update the README to clarify things for users

We need to add to the README:

Failed to pull /kind-registry:5000/test-operator/catalog:v4.6s:

The operator is only for OCP 4.8 and above. Why is it trying to build/pull for 4.6?

#1125

orange / Deploy o7t (v4.6-db)

0.0.5/annotations.yaml has

com.redhat.openshift.versions: v4.8

TASK [index_audit : Pull index image 'kind-registry:5000/test-operator/catalog:v4.6s'] ***
fatal: [localhost]: FAILED! => changed=true 
  cmd: podman pull kind-registry:5000/test-operator/catalog:v4.6s
  delta: '0:00:00.161059'
  end: '2022-04-26 09:59:46.281812'
  msg: non-zero return code
  rc: 125
  start: '2022-04-26 09:59:46.120753'
  stderr: |-
    Trying to pull kind-registry:5000/test-operator/catalog:v4.6s...
    Error: initializing source docker://kind-registry:5000/test-operator/catalog:v4.6s: reading manifest v4.6s in kind-registry:5000/test-operator/catalog: manifest unknown: manifest unknown
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
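
For reference, the com.redhat.openshift.versions annotation is what should keep the bundle out of older indexes. A quick check, with the range syntaxes from the OpenShift docs shown as comments (a sketch; the standard bundle layout is assumed):

grep -r 'com.redhat.openshift.versions' 0.0.5/
# com.redhat.openshift.versions: v4.8           -> 4.8 and everything newer
# com.redhat.openshift.versions: "=v4.8"        -> exactly 4.8
# com.redhat.openshift.versions: "v4.8-v4.10"   -> 4.8 through 4.10 inclusive

Given the v4.8 value above, the v4.6 index job should be skipping this bundle rather than trying to pull it.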

Issue with test suite lemon and orange

I am running the test suite locally. Kiwi test succeeds but lemon and orange fail.

Error encountered

TASK [operator_index : Set versions and versions_bt] **********************************************************************************
task path: /playbooks/upstream/roles/operator_index/tasks/op_index.yml:80
fatal: [localhost]: FAILED! => 
  msg: |-
    The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'latest'
  
    The error appears to be in '/playbooks/upstream/roles/operator_index/tasks/op_index.yml': line 80, column 3, but may
    be elsewhere in the file depending on the exact syntax problem.
  
    The offending line appears to be:
  
  
    - name: "Set versions and versions_bt"
      ^ here

Additional details
In the 'Set versions' task, it tries to access the index that maps OCP version to operator version and looks for a 'latest' tag that doesn't exist, hence the 'dict object' has no attribute error.

Steps to reproduce:

$ cd community-operators-prod
$ OPP_AUTO_PACKAGEMANIFEST_CLUSTER_VERSION_LABEL=1 OPP_PRODUCTION_TYPE=ocp \
bash <(curl -sL https://raw.githubusercontent.com/redhat-openshift-ecosystem/community-operators-pipeline/ci/latest/ci/scripts/opp.sh) \
lemon \
operators/kiali/1.9.1

Test suite issues with podman

I opened a similar issue a while back with the old repos, and it seems I'm having similar issues with the new repos/test scripts.

Old issue: operator-framework/community-operators#2115

Here's what I've tried:

# Tell kind to use podman instead of docker
$ export KIND_EXPERIMENTAL_PROVIDER=podman

# Prove that kind works in my podman environment on its own
$ sudo kind create cluster
enabling experimental podman provider
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.19.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂

$ sudo kind delete cluster
enabling experimental podman provider
Deleting cluster "kind" ...

# Tell opp script to use podman
$ export OPP_CONTAINER_TOOL=podman

# Run test suite
$ bash <(curl -sL https://raw.githubusercontent.com/redhat-openshift-ecosystem/community-operators-pipeline/ci/latest/ci/scripts/opp.sh)   kiwi,lemon,orange operators/aqua/1.0.2
Info: No labels defined
debug=0
Using ansible 2.9.20 on host ...

One can do 'tail -f /tmp/op-test/log.out' from second console to see full logs

Checking for kind binary ...
::set-output name=opp_uncomplete_operators::
Using Varialble : OPP_FORCE_OPERATORS_TMP=OPP_FORCE_OPERATORS_kiwi () -> OPP_FORCE_OPERATORS=
Test 'kiwi' for 'operators aqua 1.0.2' ...
[kiwi] Reseting kind cluster ...

Failed with rc=1 !!!
Logs are in '/tmp/op-test/log.out'.

# See that the test suite fails because it's still trying to use docker with kind
$ cat /tmp/op-test/log.out
...
TASK [reset_kind : Configuring Kind with registry] ***********************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/tmp/kind-with-registry.sh", "delta": "0:00:00.003532", "end": "2021-09-15 11:27:26.792060", "msg": "non-zero return code", "rc": 127, "start": "2021-09-15 11:27:26.788528", "stderr": "/tmp/kind-with-registry.sh: line 14: docker: command not found\n/tmp/kind-with-registry.sh: line 36: docker: command not found", "stderr_lines": ["/tmp/kind-with-registry.sh: line 14: docker: command not found", "/tmp/kind-with-registry.sh: line 36: docker: command not found"], "stdout": "", "stdout_lines": []}
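
A possible workaround until the script honors OPP_CONTAINER_TOOL everywhere: put a podman shim named docker on the PATH so the hardcoded calls in /tmp/kind-with-registry.sh resolve. Untested sketch:

# Shim 'docker' to podman for scripts that hardcode the docker CLI
mkdir -p ~/bin
printf '#!/bin/sh\nexec podman "$@"\n' > ~/bin/docker
chmod +x ~/bin/docker
export PATH=~/bin:$PATH

# Re-run with both kind and the opp script pointed at podman
export KIND_EXPERIMENTAL_PROVIDER=podman OPP_CONTAINER_TOOL=podman
bash <(curl -sL https://raw.githubusercontent.com/redhat-openshift-ecosystem/community-operators-pipeline/ci/latest/ci/scripts/opp.sh) kiwi operators/aqua/1.0.2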

Inconsistent CI results: Operator test / orange / Deploy o7t (v4.10-db) failed while others (v4.9-db) passed

Operator test / orange / Deploy o7t (v4.10-db) (pull_request_target) failed

       quay.io/openshift-community-operators/cert-manager@sha256:d3afb2f010b20a739ecef272c44bc06325b256c6a0751d709e2fe24ef7b69950 kind-registry:5000/test-operator/cert-manager@sha256:d77395f0eef4dd553625403dcf629dc78a1dcdd8cc4e9292ae96d80e0683618c]" error="error loading bundle into db: FOREIGN KEY constraint failed"'
    - 'Error: error loading bundle into db: FOREIGN KEY constraint failed'

It could be this bug:

But it's not clear to me why Operator test / orange / Deploy o7t (v4.9-db) (pull_request_target) passed;
I assume it performs the same checks for the OpenShift 4.9 catalog.
And the same new bundle passes the checks in:

Is this a bug / inconsistency in the CI?

/cc @mvalarh

I removed the default channel in the bundle and then all the tests passed, as per operator-framework/operator-registry#330 (comment)

the defaultChannel is taken from the defaultChannel value from the highest semver bundle
(and since it's optional, it's really from the highest semver bundle that has a defaultChannel value)

Originally posted by @wallrj in #827 (comment)
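
For anyone hitting the same thing, a sketch of how to see which defaultChannel the catalog actually ends up carrying (assumes opm and jq are installed, and that the tag below is the index under test):

opm render kind-registry:5000/test-operator/catalog:v4.10 \
  | jq -r 'select(.schema == "olm.package") | "\(.name) defaultChannel=\(.defaultChannel)"'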

Error creating VPC: VpcLimitExceeded: The maximum number of VPCs has been reached in pull-ci-redhat-openshift-ecosystem-community-operators-prod-main-4.9-deploy-operator-on-openshift

In https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/redhat-openshift-ecosystem_community-operators-prod/1023/pull-ci-redhat-openshift-ecosystem-community-operators-prod-main-4.9-deploy-operator-on-openshift/1508802581541949440

level=error
level=error msg=Error: Error creating VPC: VpcLimitExceeded: The maximum number of VPCs has been reached.
level=error msg=	status code: 400, request id: d2bcbc24-eb34-4ecf-ba1b-704aa18d5888
level=error
level=error msg=  on ../tmp/openshift-install-cluster-352765466/vpc/vpc.tf line 6, in resource "aws_vpc" "new_vpc":
level=error msg=   6: resource "aws_vpc" "new_vpc" {
level=error
level=error
level=fatal msg=failed to fetch Cluster: failed to generate asset "Cluster": failed to create cluster: failed to apply Terraform: failed to complete the change 

Originally posted by @wallrj in #1023 (comment)

ci/prow tests are failing for dell-csm-operator in all versions of OCP with error "dict object has no attribute stderr"

ci/prow tests are failing for all versions of OCP with the error "dict object has no attribute stderr". Below is a snippet of the error from the logs:

The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stderr'

The error appears to be in '/tmp/playbooks2/operator-test-playbooks/upstream/roles/deploy_olm_operator_openshift_upstream/tasks/main.yml': line 571, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:


- name: "Append error output to operator pod log"
  ^ here

I can also see that the operator pods are healthy after startup. Below is the summary of the operator installation in logs

prow_summary:

  • Catalog source is up and READY [OK]
  • Operatorgroup is present [OK]
  • Subscribed [OK]
  • Operator is in packagemanifests [OK]
  • Operator startup [OK]
  • Operator stayed healthy after startup [OK]

Please let us know if there is something missing on the operator end that needs to be fixed. As per the logs, the error is in the Ansible playbook, and the summary of the operator installation is as expected.
PR link: #996

V4.9 is out

Dear OpenShift Operator Community,

We are glad to announce that our pipeline is producing a v4.9 index starting today.

Kind Regards,
The community-operators maintainers team

mentions:
@AdheipSingh, @ArangoGutierrez, @ArbelNathan, @AstroProfundis, @Avni-Sharma, @Carrefour-Group, @DTMad, @EnterpriseDB, @Flagsmith, @Fryguy, @HubertStefanski, @J0zi, @Kaitou786, @Kong, @LCaparelli, @LaVLaS, @Listson, @LorbusChris, @MarcinGinszt, @OchiengEd, @Project-Herophilus, @Rajpratik71, @RakhithaRR, @SDBrett, @Simonzhaohui, @SteveMattar, @Tatsinnit, @TheWeatherCompany, @aamirqs, @abaiken, @akhil-rane, @alien-ike, @aliok, @andreaskaris, @antonlisovenko, @apeschel, @aravindhp, @ashenoy-hedvig, @aslakknutsen, @astefanutti, @avalluri, @babbageclunk, @bart0sh, @bcrochet, @bigkevmcd, @blaqkube, @blublinsky, @bo0ts, @brian-avery, @bznein, @camilamacedo86, @cap1984, @chanwit, @chatton, @chbatey, @che-bot, @che-incubator, @che-incubator-bot, @chetan-rns, @christophd, @clamoriniere, @cliveseldon, @cloudant, @couchbase-partners, @cschwede, @ctron, @dabeeeenster, @dagrayvid, @danielpacak, @dannyzaken, @darkowlzz, @david-kow, @deeghuge, @deekshahegde86, @devOpsHelm, @dgoodwin, @dinhxuanvu, @djzager, @dlbock, @dmesser, @dragonly, @dtrawins, @dymurray, @ecordell, @eguzki, @eresende-nuodb, @erikerlandson, @esara, @evan-hataishi, @f41gh7, @fao89, @fbladilo, @fcanovai, @ferranbt, @fjammes, @flaper87, @frant-hartm, @gallettilance, @gautambaghel, @germanodasilva, @gngeorgiev, @gregsheremeta, @gunjan5, @gurushantj, @guyyakir, @gyliu513, @haibinxie, @hasancelik, @hasheddan, @hco-bot, @himanshug, @houshengbo, @husky-parul, @iamabhishek-dubey, @ibuziuk, @idanmo, @instana, @irajdeep, @ivanstanev, @ivanvtimofeev, @jeesmon, @jianzhangbjz, @jitendar-singh, @jkatz, @jkhelil, @jmazzitelli, @jmccormick2001, @jmeis, @jmesnil, @joelddiaz, @jogetworkflow, @jomeier, @jomkz, @jonathanvila, @jpkrohling, @jsenko, @juljog, @kaiso, @kerenlahav, @kerrygunn, @khaledsulayman, @kingdonb, @knrc, @ksatchit, @kshithijiyer, @kubemod, @kubernetes-projects, @kulkarnicr, @lbroudoux, @little-guy-lxr, @lrgar, @lsst, @madhukirans, @madorn, @maistra, @maskarb, @matzew, @max3903, @mdonkers, @mflendrich, @microcks, @miguelsorianod, @mkuznyetsov, @mnencia, @mrethers, @mrizzi, @msherif1234, @mtyazici, @muvaf, @mvalahtv, @mvalarh, @n1r1, @nickboldt, @nicolaferraro, @nikhil-thomas, @olukas, @open-cluster-management, @operator-framework, @oranichu, @orenc1, @oribon, @oriyarde, @owais, @pavelmaliy, @pebrc, @pedjak, @phantomjinx, @piyush-nimbalkar, @portworx, @prft-rh, @pweil-, @radtriste, @raffaelespazzoli, @rainest, @rajivnathan, @raunakkumar, @rayfordj, @redhat-cop, @rensyct, @renuka-fernando, @rgolangh, @rhm-samples, @rhrazdil, @ricardozanini, @rigazilla, @rishabh-shah12, @rmr-silicom, @robshelly, @rodrigovalin, @rohanjayaraj, @rojkov, @rubenvp8510, @rvansa, @ryanemerson, @saada, @sabinaaledort, @sabre1041, @sbose78, @scholzj, @sebsoto, @secondsun, @selvamt94, @shubham-pampattiwar, @slaskawi, @slopezz, @snyk, @snyksec, @spolti, @spron-in, @squids-io, @startxfr, @sunsingerus, @svallala, @sxd, @tahmmee, @teseraio, @thbkrkr, @tibulca, @tolusha, @tomashibm, @tomgeorge, @tphee, @tplavcic, @tumido, @twiest, @ursais, @vaibhavjainwiz, @vassilvk, @vboulineau, @vkvamsiopsmx, @vmturbo, @vmuzikar, @wallrj, @waveywaves, @waynesun09, @weii666, @welshDoug, @wiggzz, @willholley, @windup, @wmellouli, @wtam2018, @wtrocki, @xiangjingli, @yaacov, @zhiweiyin318, @zingero, @zregvart, @zroubalik

Fatal error in CI: failed to fetch Cluster: failed to generate asset "Cluster": failed to create cluster: failed to apply Terraform: failed to complete the change

In #827 (comment) you can see two of the tests failing.

level=error
level=error msg=Error: Provider produced inconsistent result after apply
level=error
level=error msg=When applying changes to module.vpc.aws_route_table.private_routes[1],
level=error msg=provider "registry.terraform.io/-/aws" produced an unexpected new value for
level=error msg=was present, but now absent.
level=error
level=error msg=This is a bug in the provider, which should be reported in the provider's own
level=error msg=issue tracker.
level=error
level=fatal msg=failed to fetch Cluster: failed to generate asset "Cluster": failed to create cluster: failed to apply Terraform: failed to complete the change 

https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/redhat-openshift-ecosystem_community-operators-prod/827/pull-ci-redhat-openshift-ecosystem-community-operators-prod-main-4.8-deploy-operator-on-openshift/1493985591560245248

Startup failure in openebs-operator causes cert-manager tests to fail

Wait for the operator openebs-operator pod to start up
...
Operator startup                          [FAIL]

Full error report


{  "deploy-operator-on-openshift" pod "deploy-operator-on-openshift-deploy-operator" failed: the pod ci-op-0927kssp/deploy-operator-on-openshift-deploy-operator failed after 27m44s (failed containers: test): ContainerFailed one or more containers exited

Container test exited with code 1, reason Error
---
ce.
    See 'Wait for the operator openebs-operator pod to start up' step above.
    Also check, that the operator is not listed in running containers in 'Check all containers' step.
    In some cases step 'Output debug - catalog operator' can be helpful. Also check other debug outputs.
    -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

TASK [deploy_olm_operator_openshift_upstream : Report short prow summary] ******
ok: [localhost] => 
  prow_summary:
  - Catalog source is up and READY            [OK]
  - Operatorgroup is present                  [OK]
  - Subscribed                                [OK]
  - Operator is in packagemanifests           [OK]
  - Operator startup                          [FAIL]

TASK [deploy_olm_operator_openshift_upstream : Debug] **************************
ok: [localhost] => 
  operator_upgrade_testing_disabled: 'VARIABLE IS NOT DEFINED!: ''operator_upgrade_testing_disabled'' is undefined'

TASK [deploy_olm_operator_openshift_upstream : Debug] **************************
ok: [localhost] => 
  operator_upgrade_testing_on_openshift_disabled: 'true'

TASK [deploy_olm_operator_openshift_upstream : Fail when Operator deployment with OLM failed] ***
fatal: [localhost]: FAILED! => changed=false 
  msg: Operator deployment with OLM failed, expand log and check logs from the bottom to the top ^

PLAY RECAP *********************************************************************
localhost                  : ok=175  changed=82   unreachable=0    failed=1    skipped=90   rescued=0    ignored=9   

Reporting ...
Variable summary:
OP_NAME=cert-manager
OP_VER=1.10.0-rc1
Ansible failed, see output above
{"component":"entrypoint","error":"wrapped process failed: exit status 1","file":"k8s.io/test-infra/prow/entrypoint/run.go:79","func":"k8s.io/test-infra/prow/entrypoint.Options.Run","level":"error","msg":"Error executing test process","severity":"error","time":"2022-10-25T12:43:19Z"}
error: failed to execute wrapped command: exit status 1
---
Link to step on registry info site: https://steps.ci.openshift.org/reference/deploy-operator
Link to job on registry info site: https://steps.ci.openshift.org/job?org=redhat-openshift-ecosystem&repo=community-operators-prod&branch=main&test=deploy-operator-on-openshift&variant=4.9}

-- https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/redhat-openshift-ecosystem_community-operators-prod/1777/pull-ci-redhat-openshift-ecosystem-community-operators-prod-main-4.9-deploy-operator-on-openshift/1584872808293339136

Originally posted by @wallrj in #1777 (comment)

OCP 4.7 community-operators missing external-secrets-operator v0.5.1

We are using different OCP versions (4.7, 4.8, 4.9), and since #1060 was merged, external-secrets-operator has been upgraded to v0.5.1 on OCP 4.8 and OCP 4.9, but clusters on OCP 4.7 are stuck at v0.5.0.

We tried recreating the community-operators catalog pod, without success.

Could you please publish external-secrets-operator v0.5.1 to the OCP 4.7 community-operators catalog?

Thanks in advance
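
For cross-checking, a sketch of listing which CSV each channel currently points at, run against the 4.7 cluster (the PackageManifest API is served by the marketplace):

oc get packagemanifest external-secrets-operator -n openshift-marketplace \
  -o jsonpath='{range .status.channels[*]}{.name}{"\t"}{.currentCSV}{"\n"}{end}'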

PR testing failing on AWS LimitExceeded

Example PR: #1020
Example failure:
https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/redhat-openshift-ecosystem_community-operators-prod/1020/pull-ci-redhat-openshift-ecosystem-community-operators-prod-main-4.6-deploy-operator-on-openshift/1508735887767244800

level=error msg="Error: Error creating IAM instance profile ci-op-42r2swbg-96ace-47qjt-bootstrap-profile: LimitExceeded: Cannot exceed quota for InstanceProfilesPerAccount: 1000"
level=error msg="\tstatus code: 409, request id: a5f9b283-4ba6-4c0c-852f-750e014db50d"

Remove the 'olm.maxOpenShiftVersion' from etcd operator

Initial issue:

Hi, I checked the etcd operator and found that the olm.maxOpenShiftVersion=4.8 property was added:

[cloud-user@preserve-olm-env jian]$ oc get csv etcdoperator.v0.9.4 -o yaml|grep "maxOpenShiftVersion"
olm.properties: '[{"type": "olm.maxOpenShiftVersion", "value": "4.8"}]'

But I have changed its apiextensions.k8s.io API version from v1beta1 to v1, see: #127

So, can we remove the 'olm.maxOpenShiftVersion' from it? Thanks!

Based on my understanding, only the operators which use the deprecated API should have olm.maxOpenShiftVersion=4.8 added, not all of them. Correct me if I'm wrong, thanks!

Initial answer

If you have fixed API, you can definitely remove 'olm.maxOpenShiftVersion', fully agree.

Follow up question:

Hi, sorry, I'm confused. How can I fix it on my side? Thanks! I guess this 'olm.maxOpenShiftVersion' property is added when creating the bundle in your script or code.
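
For completeness, a sketch of removing the property on the bundle side. The oc output above shows the annotation holds only this one property, so the whole annotation can go; the CSV filename below is hypothetical, and yq here means the mikefarah v4 tool:

# Confirm what the annotation currently holds
oc get csv etcdoperator.v0.9.4 -o jsonpath='{.metadata.annotations.olm\.properties}'

# Drop it from the bundle's CSV before rebuilding (hypothetical filename)
yq -i 'del(.metadata.annotations."olm.properties")' etcdoperator.v0.9.4.clusterserviceversion.yaml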

CI: Deployment tests against OCP passing when the Operator deployment is not successfully deployed.

See the PR: #1287. The CI test passed against OCP < 4.11 when it should fail since the deployment will not succeed.

The deployment of the Pod fails with:

    - lastTransitionTime: '2022-06-12T21:36:53Z'
      lastUpdateTime: '2022-06-12T21:36:53Z'
      message: >-
        install failed: deployment memcached-operator-controller-manager not
        ready before timeout: deployment "memcached-operator-controller-manager"
        exceeded its progress deadline
      phase: Failed
      reason: InstallCheckFailed

We should be checking that the installation and deployment of the Operator completed successfully, i.e. that the CSV reports:

      message: install strategy completed with no errors
      phase: Succeeded
      reason: InstallSucceeded
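
A sketch of a gate the CI could add after the Subscription settles (the CSV name below is hypothetical for the PR under test; NAMESPACE is whatever namespace the test installs into):

csv=memcached-operator.v0.0.1   # hypothetical CSV name
phase=$(oc get csv "$csv" -n "$NAMESPACE" -o jsonpath='{.status.phase}')
[ "$phase" = "Succeeded" ] || { echo "CSV phase is ${phase:-unknown}"; exit 1; }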

Community operators fail to install with "unpack job not completed: "

Description of problem:
Installing a community operator from OperatorHub fails.
Examples include etcd and spark-gcp.

Version-Release number of selected component (if applicable):
sparkoperator.v2.4.0

oc version
Client Version: 4.9.0-202109071344.p0.git.96e95ce.assembly.stream-96e95ce
Server Version: 4.9.0-0.nightly-2021-09-07-201519
Kubernetes Version: v1.22.0-rc.0+75ee307

How reproducible:
Always

Steps to Reproduce:

  1. Install spark-gcp from OperatorHub
  2. wait for timeout

Actual results:
Bundle unpacking failed. Reason: DeadlineExceeded, and Message: Job was active longer than specified deadline

oc get ip

apiVersion: operators.coreos.com/v1alpha1
kind: InstallPlan
...
status:
  bundleLookups:
  - catalogSourceRef:
      name: community-operators
      namespace: openshift-marketplace
    conditions:
    - message: bundle contents have not yet been persisted to installplan status
      reason: BundleNotUnpacked
      status: "True"
      type: BundleLookupNotPersisted
    - message: unpack job not yet started
      reason: JobNotStarted
      status: "True"
      type: BundleLookupPending
    - lastTransitionTime: "2021-09-08T13:07:21Z"
      message: Job was active longer than specified deadline
      reason: DeadlineExceeded
      status: "True"
      type: BundleLookupFailed
    identifier: etcdoperator.v0.9.4
    path: quay.io/openshift-community-operators/etcd@sha256:6334a904a229eb1d51a6708dd774e23bf6d04ab20cc5e9efffa363791dce90a8
    properties: '{"properties":[{"type":"olm.gvk","value":{"group":"etcd.database.coreos.com","kind":"EtcdBackup","version":"v1beta2"}},{"type":"olm.gvk","value":{"group":"etcd.database.coreos.com","kind":"EtcdCluster","version":"v1beta2"}},{"type":"olm.gvk","value":{"group":"etcd.database.coreos.com","kind":"EtcdRestore","version":"v1beta2"}},{"type":"olm.maxOpenShiftVersion","value":"4.8"},{"type":"olm.package","value":{"packageName":"etcd","version":"0.9.4"}}]}'
    replaces: etcdoperator.v0.9.2
  - catalogSourceRef:
      name: community-operators
      namespace: openshift-marketplace
    conditions:
    - message: bundle contents have not yet been persisted to installplan status
      reason: BundleNotUnpacked
      status: "True"
      type: BundleLookupNotPersisted
    - lastTransitionTime: "2021-09-08T13:07:21Z"
      message: 'unpack job not completed: Unpack pod(openshift-marketplace/914f29919f7da5cc1b0d0985fe5f15776ed019b010a1eb8793e54d--1-4qgzj)
        container(util) is pending. Reason: PodInitializing, Message:  | Unpack pod(openshift-marketplace/914f29919f7da5cc1b0d0985fe5f15776ed019b010a1eb8793e54d--1-4qgzj)
        container(pull) is pending. Reason: PodInitializing, Message: '
      reason: JobIncomplete
      status: "True"
      type: BundleLookupPending
    identifier: sparkoperator.v2.4.0
    path: quay.io/openshift-community-operators/spark-gcp@sha256:b0384a8fc5f25ba34988c62b53b3c5df23b0dc42f64635480b13497f9a9eabc3
    properties: '{"properties":[{"type":"olm.gvk","value":{"group":"sparkoperator.k8s.io","kind":"ScheduledSparkApplication","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"sparkoperator.k8s.io","kind":"SparkApplication","version":"v1beta1"}},{"type":"olm.maxOpenShiftVersion","value":"4.8"},{"type":"olm.package","value":{"packageName":"spark-gcp","version":"2.4.0"}}]}'
    replaces: ""
  catalogSources: []
  conditions:
  - lastTransitionTime: "2021-09-08T13:07:22Z"
    lastUpdateTime: "2021-09-08T13:07:22Z"
    message: 'Bundle unpacking failed. Reason: DeadlineExceeded, and Message: Job
      was active longer than specified deadline'
    reason: InstallCheckFailed
    status: "False"
    type: Installed
  phase: Failed

Additional info:
etcd fails in the same way
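
A debugging sketch for the unpack timeout: the unpack Jobs and Pods run in openshift-marketplace, and the container names below come from the condition message above. Pods stuck in PodInitializing usually point at a slow or failing bundle image pull:

oc -n openshift-marketplace get jobs,pods
oc -n openshift-marketplace describe pod <unpack-pod-name>   # check Events for pull errors
oc -n openshift-marketplace logs <unpack-pod-name> -c pull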

full workflow to migrate to file based catalogs for disconnected install

I've tried leveraging the example from #793 to replicate the pre-4.11 workflow for mirroring a given set of operators, but without success.

Fixing some of the typos mentioned at the end of this issue thread, my procedure is the following:


mkdir -p pruned-catalog/configs
opm render registry.redhat.io/redhat/redhat-operator-index:v4.11 | jq 'select( .package == "multicluster-engine" or .name == "lib-bucket-provisioner")'> pruned-catalog/configs/index.json
cd pruned-catalog/configs
opm alpha bundle generate -d . -c . -p . -u .
podman build -t 2620-52-0-1302--a6d9.sslip.io:5000/olm-index/redhat-operator-index:v4.11 -f bundle.Dockerfile .
podman push 2620-52-0-1302--a6d9.sslip.io:5000/olm-index/redhat-operator-index:v4.11

but when I try to mirror this content, I get the following error:

oc adm catalog mirror 2620-52-0-1302--a6d9.sslip.io:5000/olm-index/redhat-operator-index:v4.11 2620-52-0-1302--a6d9.sslip.io:5000/olm --registry-config=openshift_pull.json  --max-per-registry=100
using index path mapping: /:/tmp/709081102
wrote database to /tmp/709081102
errors during mirroring. the full contents of the catalog may not have been mirrored: extract catalog files: no database file found in /tmp/709081102
deleted dir /tmp/709081102
error: no mapping found for index image

Can you indicate what's wrong here and what the correct workflow should be?
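
Not authoritative, but one likely problem: 'opm alpha bundle generate' builds operator bundles, not catalogs, so the resulting image carries neither a sqlite database nor the FBC label that the mirroring tooling looks for. A sketch of an FBC-style build over the same pruned configs (untested; assumes a recent opm):

opm validate pruned-catalog/configs
cd pruned-catalog
opm generate dockerfile configs   # emits configs.Dockerfile with the FBC label set
podman build -t 2620-52-0-1302--a6d9.sslip.io:5000/olm-index/redhat-operator-index:v4.11 -f configs.Dockerfile .
podman push 2620-52-0-1302--a6d9.sslip.io:5000/olm-index/redhat-operator-index:v4.11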

The bundle ember-csi-community-operator (0.9.1) cannot be loaded by the API

The bundle ember-csi-community-operator (0.9.1) cannot be loaded by the API: error unmarshaling JSON: while decoding JSON: Object 'Kind' is missing in:

# Error faced: Unable to load the bundle: error loading objs in directory: unable to decode object: error unmarshaling JSON: while decoding JSON: Object 'Kind' is missing in '{"config":{"type":"string"},"config.X_CSI_BACKEND_CONFIG.multipath":{"type":"string"},"config.driverImage":{"type":"string"},"config.envVars.X_CSI_ABORT_DUPLICATES":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__ACCESSIscsi__vrts_lun_sparse":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__ACCESSIscsi__vrts_target_config":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__AS13000__as13000_ipsan_pools__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__AS13000__as13000_meta_pool":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__AS13000__as13000_token_available_time":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__FJDXFC__cinder_eternus_config_file":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__FJDXISCSI__cinder_eternus_config_file":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__FlashSystemFC__flashsystem_connection_protocol":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__FlashSystemFC__flashsystem_multihostmap_enabled":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__FlashSystemISCSI__flashsystem_connection_protocol":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__FlashSystemISCSI__flashsystem_multihostmap_enabled":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFSRemote__gpfs_hosts__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFSRemote__gpfs_hosts_key_file":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFSRemote__gpfs_images_dir":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFSRemote__gpfs_images_share_mode__transform_empty_none":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFSRemote__gpfs_max_clone_depth":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFSRemote__gpfs_mount_point_base":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFSRemote__gpfs_private_key":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFSRemote__gpfs_sparse_volumes":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFSRemote__gpfs_ssh_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFSRemote__gpfs_storage_pool":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFSRemote__gpfs_strict_host_key_policy":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFSRemote__gpfs_user_login":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFSRemote__gpfs_user_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFS__gpfs_images_dir":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFS__gpfs_images_share_mode__transform_empty_none":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFS__gpfs_max_clone_depth":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFS__gpfs_mount_point_base":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFS__gpfs_sparse_volumes":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__GPFS__gpfs_storage_pool":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__hpe3par_api_url":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__hpe3par_cpg_
_transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__hpe3par_cpg_snap":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__hpe3par_debug":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__hpe3par_iscsi_chap_enabled":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__hpe3par_iscsi_ips__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__hpe3par_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__hpe3par_snapshot_expiration":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__hpe3par_snapshot_retention":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__hpe3par_target_nsp":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__hpe3par_username":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__san_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__san_login":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__san_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__san_private_key":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__san_ssh_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__ssh_conn_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__target_ip_address":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARFC__target_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__hpe3par_api_url":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__hpe3par_cpg__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__hpe3par_cpg_snap":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__hpe3par_debug":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__hpe3par_iscsi_chap_enabled":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__hpe3par_iscsi_ips__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__hpe3par_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__hpe3par_snapshot_expiration":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__hpe3par_snapshot_retention":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__hpe3par_target_nsp":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__hpe3par_username":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__san_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__san_login":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__san_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__san_private_key":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__san_ssh_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__ssh_conn_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__target_ip_address":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPE3PARISCSI__target_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPELeftHandISCSI__hpelefthand_api_url":{"type":"string"},"config.e
nvVars.X_CSI_BACKEND_CONFIG.driver__HPELeftHandISCSI__hpelefthand_clustername":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPELeftHandISCSI__hpelefthand_debug":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPELeftHandISCSI__hpelefthand_iscsi_chap_enabled":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPELeftHandISCSI__hpelefthand_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPELeftHandISCSI__hpelefthand_ssh_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPELeftHandISCSI__hpelefthand_username":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPMSAFC__hpmsa_pool_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPMSAFC__hpmsa_pool_type":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPMSAISCSI__hpmsa_iscsi_ips__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPMSAISCSI__hpmsa_pool_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HPMSAISCSI__hpmsa_pool_type":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HuaweiFC__cinder_huawei_conf_file":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HuaweiFC__hypermetro_devices":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HuaweiFC__metro_domain_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HuaweiFC__metro_san_address":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HuaweiFC__metro_san_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HuaweiFC__metro_san_user":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HuaweiFC__metro_storage_pools":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HuaweiISCSI__cinder_huawei_conf_file":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HuaweiISCSI__hypermetro_devices":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HuaweiISCSI__metro_domain_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HuaweiISCSI__metro_san_address":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HuaweiISCSI__metro_san_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HuaweiISCSI__metro_san_user":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__HuaweiISCSI__metro_storage_pools":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__IBMStorage__chap":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__IBMStorage__connection_type":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__IBMStorage__management_ips":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__IBMStorage__proxy":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSFC__instorage_mcs_allow_tenant_qos":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSFC__instorage_mcs_localcopy_rate":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSFC__instorage_mcs_localcopy_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSFC__instorage_mcs_vol_autoexpand":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSFC__instorage_mcs_vol_compression":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSFC__instorage_mcs_vol_grainsize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSFC__instorage_mcs_vol_intier":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONF
IG.driver__InStorageMCSFC__instorage_mcs_vol_iogrp":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSFC__instorage_mcs_vol_rsize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSFC__instorage_mcs_vol_warning":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSFC__instorage_mcs_volpool_name__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSFC__instorage_san_secondary_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSISCSI__instorage_mcs_allow_tenant_qos":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSISCSI__instorage_mcs_localcopy_rate":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSISCSI__instorage_mcs_localcopy_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSISCSI__instorage_mcs_vol_autoexpand":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSISCSI__instorage_mcs_vol_compression":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSISCSI__instorage_mcs_vol_grainsize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSISCSI__instorage_mcs_vol_intier":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSISCSI__instorage_mcs_vol_iogrp":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSISCSI__instorage_mcs_vol_rsize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSISCSI__instorage_mcs_vol_warning":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSISCSI__instorage_mcs_volpool_name__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InStorageMCSISCSI__instorage_san_secondary_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIFC__infortrend_cli_cache":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIFC__infortrend_cli_max_retries":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIFC__infortrend_cli_path":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIFC__infortrend_cli_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIFC__infortrend_iqn_prefix":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIFC__infortrend_pools_name__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIFC__infortrend_slots_a_channels_id__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIFC__infortrend_slots_b_channels_id__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIFC__java_path":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIISCSI__infortrend_cli_cache":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIISCSI__infortrend_cli_max_retries":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIISCSI__infortrend_cli_path":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIISCSI__infortrend_cli_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIISCSI__infortrend_iqn_prefix":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIISCSI__infortrend_pools_name__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIISCSI__inf
ortrend_slots_a_channels_id__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIISCSI__infortrend_slots_b_channels_id__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__InfortrendCLIISCSI__java_path":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__KaminarioISCSI__disable_discovery":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__KaminarioISCSI__san_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__KaminarioISCSI__san_login":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__KaminarioISCSI__san_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__KaminarioISCSI__unique_fqdn_network":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__KaminarioISCSI__volume_dd_blocksize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__iet_conf":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__iscsi_iotype":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__iscsi_secondary_ip_addresses__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__iscsi_target_flags":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__iscsi_write_cache":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__lvm_conf_file":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__lvm_mirrors":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__lvm_suppress_fd_warnings":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__lvm_type":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__nvmet_port_id":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__scst_target_driver":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__scst_target_iqn_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__spdk_max_queue_depth":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__spdk_rpc_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__spdk_rpc_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__spdk_rpc_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__spdk_rpc_username":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__target_helper":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__target_ip_address":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__target_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__target_prefix":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__target_protocol":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__volume_clear":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__volume_clear_size":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__volume_dd_blocksize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__volume_group":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LVMVolume__volumes_dir":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LenovoFC__lenovo_pool_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LenovoFC__lenovo_pool_type":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver_
_LenovoISCSI__lenovo_iscsi_ips__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LenovoISCSI__lenovo_pool_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LenovoISCSI__lenovo_pool_type":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LinstorDrbd__linstor_autoplace_count":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LinstorDrbd__linstor_controller_diskless":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LinstorDrbd__linstor_default_blocksize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LinstorDrbd__linstor_default_storage_pool_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LinstorDrbd__linstor_default_uri":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LinstorDrbd__linstor_default_volume_group_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LinstorDrbd__linstor_volume_downsize_factor__transform_string_float":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LinstorIscsi__linstor_autoplace_count":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LinstorIscsi__linstor_controller_diskless":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LinstorIscsi__linstor_default_blocksize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LinstorIscsi__linstor_default_storage_pool_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LinstorIscsi__linstor_default_uri":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LinstorIscsi__linstor_default_volume_group_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__LinstorIscsi__linstor_volume_downsize_factor__transform_string_float":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_actual_free_capacity":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_auto_accesscontrol":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_backend_max_ld_count":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_backup_ldname_format":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_backup_pools__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_cv_ldname_format":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_diskarray_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_ismcli_fip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_ismcli_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_ismcli_privkey":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_ismcli_user":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_ismview_alloptimize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_ismview_dir":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_ldname_format":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_ldset":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_pools__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_queryconfig_view":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageFC__nec_ssh_pool_port_number":{"type":"string"},"config.envVars.X_CSI_BACKEN
D_CONFIG.driver__MStorageFC__nec_unpairthread_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_actual_free_capacity":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_auto_accesscontrol":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_backend_max_ld_count":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_backup_ldname_format":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_backup_pools__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_cv_ldname_format":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_diskarray_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_ismcli_fip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_ismcli_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_ismcli_privkey":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_ismcli_user":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_ismview_alloptimize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_ismview_dir":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_ldname_format":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_ldset":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_pools__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_queryconfig_view":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_ssh_pool_port_number":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MStorageISCSI__nec_unpairthread_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__backend_availability_zone":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__chap_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__chap_username":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__chiscsi_conf":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__driver_client_cert":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__driver_client_cert_key":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__driver_data_namespace":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__driver_ssl_cert_path":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__driver_ssl_cert_verify":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__driver_use_ssl":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__enable_unsupported_driver":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__filter_function":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__goodness_function":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__iet_conf":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__iscsi_iotype":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__iscsi_secondary_ip_addresses__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__
MacroSANFC__iscsi_target_flags":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__iscsi_write_cache":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__num_shell_tries":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__num_volume_device_scan_tries":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__report_discard_supported":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__storage_protocol":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__target_helper":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__target_ip_address":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__target_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__target_prefix":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__target_protocol":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__trace_flags__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__use_chap_auth":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__volume_backend_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__volume_clear":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__volume_clear_ionice":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__volume_clear_size":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__volume_copy_blkio_cgroup_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__volume_copy_bps_limit":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__volume_dd_blocksize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANFC__volumes_dir":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__backend_availability_zone":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__chap_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__chap_username":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__chiscsi_conf":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__driver_client_cert":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__driver_client_cert_key":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__driver_data_namespace":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__driver_ssl_cert_path":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__driver_ssl_cert_verify":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__driver_use_ssl":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__enable_unsupported_driver":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__filter_function":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__goodness_function":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__iet_conf":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__iscsi_iotype":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__iscsi_secondary_ip_addresses__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_
CONFIG.driver__MacroSANISCSI__iscsi_target_flags":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__iscsi_write_cache":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__num_shell_tries":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__num_volume_device_scan_tries":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__report_discard_supported":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__storage_protocol":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__target_helper":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__target_ip_address":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__target_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__target_prefix":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__target_protocol":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__trace_flags__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__use_chap_auth":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__volume_backend_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__volume_clear":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__volume_clear_ionice":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__volume_clear_size":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__volume_copy_blkio_cgroup_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__volume_copy_bps_limit":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__volume_dd_blocksize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__MacroSANISCSI__volumes_dir":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NetAppCmodeFibreChannel__netapp_vserver":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NetAppCmodeISCSI__netapp_vserver":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_blocksize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_dataset_compression":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_dataset_dedup":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_dataset_description":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_folder":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_group_snapshot_template":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_host":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_host_group_prefix":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_iscsi_target_host_group":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_iscsi_target_portal_groups":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_iscsi_target_portal_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_iscsi_target_portals":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_lu_writebackcache
_disabled":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_luns_per_target":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_ns5_blocksize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_origin_snapshot_template":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_rest_backoff_factor__transform_string_float":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_rest_connect_timeout__transform_string_float":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_rest_protocol":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_rest_read_timeout__transform_string_float":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_rest_retry_count":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_sparse":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_target_group_prefix":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_target_prefix":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_use_https":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_volume":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__NexentaISCSI__nexenta_volume_group":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PSSeriesISCSI__eqlx_cli_max_retries":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PSSeriesISCSI__eqlx_group_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PSSeriesISCSI__eqlx_pool":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__driver_ssl_cert_verify":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__initiator_check":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__interval":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__powermax_array":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__powermax_port_groups__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__powermax_service_level":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__powermax_snapvx_unlink_limit":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__powermax_srp":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__retries":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__san_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__san_login":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__san_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__u4p_failover_autofailback":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__u4p_failover_backoff_factor":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__u4p_failover_retries":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__u4p_failover_target__transform_csv_kvs":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__u4p_failover_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxFC__vmax_workload":{"type":"string"},"config.envVars.X_CSI_BACKEND_C
ONFIG.driver__PowerMaxISCSI__chap_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__chap_username":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__driver_ssl_cert_verify":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__initiator_check":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__interval":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__powermax_array":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__powermax_port_groups__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__powermax_service_level":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__powermax_snapvx_unlink_limit":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__powermax_srp":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__retries":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__san_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__san_login":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__san_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__u4p_failover_autofailback":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__u4p_failover_backoff_factor":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__u4p_failover_retries":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__u4p_failover_target__transform_csv_kvs":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__u4p_failover_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__use_chap_auth":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PowerMaxISCSI__vmax_workload":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__driver_ssl_cert_path":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__driver_ssl_cert_verify":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__pure_api_token":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__pure_automatic_max_oversubscription_ratio":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__pure_eradicate_on_delete":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__pure_host_personality":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__pure_iscsi_cidr":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__pure_replica_interval_default":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__pure_replica_retention_long_term_default":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__pure_replica_retention_long_term_per_day_default":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__pure_replica_retention_short_term_default":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__pure_replication_pg_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__pure_replication_pod_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__san_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureFC__use_chap_auth":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__
driver_ssl_cert_path":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__driver_ssl_cert_verify":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__pure_api_token":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__pure_automatic_max_oversubscription_ratio":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__pure_eradicate_on_delete":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__pure_host_personality":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__pure_iscsi_cidr":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__pure_replica_interval_default":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__pure_replica_retention_long_term_default":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__pure_replica_retention_long_term_per_day_default":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__pure_replica_retention_short_term_default":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__pure_replication_pg_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__pure_replication_pod_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__san_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__PureISCSI__use_chap_auth":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__QnapISCSI__chap_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__QnapISCSI__chap_username":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__QnapISCSI__driver_ssl_cert_verify":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__QnapISCSI__qnap_management_url":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__QnapISCSI__qnap_poolname":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__QnapISCSI__qnap_storage_protocol":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__QnapISCSI__san_login":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__QnapISCSI__san_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__QnapISCSI__target_ip_address":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__QnapISCSI__use_chap_auth":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__Quobyte__quobyte_client_cfg":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__Quobyte__quobyte_mount_point_base":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__Quobyte__quobyte_overlay_volumes":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__Quobyte__quobyte_qcow2_volumes":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__Quobyte__quobyte_sparsed_volumes":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__Quobyte__quobyte_volume_from_snapshot_cache":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__Quobyte__quobyte_volume_url":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__deferred_deletion_delay":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__deferred_deletion_purge_interval":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__enable_deferred_deletion":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__rados_connect_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__rados_connection_interval":{"type":"string"},"config.envVars.X_CSI_
BACKEND_CONFIG.driver__RBD__rados_connection_retries":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__rbd_ceph_conf":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__rbd_cluster_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__rbd_exclusive_cinder_pool":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__rbd_flatten_volume_from_snapshot":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__rbd_max_clone_depth":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__rbd_pool":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__rbd_secret_uuid":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__rbd_store_chunk_size":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__rbd_user":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RBD__report_dynamic_total_capacity":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RSD__podm_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RSD__podm_url":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__RSD__podm_username":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCFC__dell_api_async_rest_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCFC__dell_api_sync_rest_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCFC__dell_sc_api_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCFC__dell_sc_server_folder":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCFC__dell_sc_ssn":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCFC__dell_sc_verify_cert":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCFC__dell_sc_volume_folder":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCFC__dell_server_os":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCFC__excluded_domain_ips__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCFC__secondary_san_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCFC__secondary_san_login":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCFC__secondary_san_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCFC__secondary_sc_api_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCISCSI__dell_api_async_rest_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCISCSI__dell_api_sync_rest_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCISCSI__dell_sc_api_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCISCSI__dell_sc_server_folder":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCISCSI__dell_sc_ssn":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCISCSI__dell_sc_verify_cert":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCISCSI__dell_sc_volume_folder":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCISCSI__dell_server_os":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCISCSI__excluded_domain_ips__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCISCSI__secondary_san_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCISCSI__secondary_san_login":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SCISCSI__secondary_san_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_
CONFIG.driver__SCISCSI__secondary_sc_api_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__Sheepdog__sheepdog_store_address":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__Sheepdog__sheepdog_store_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SolidFire__driver_ssl_cert_verify":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SolidFire__san_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SolidFire__san_login":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SolidFire__san_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SolidFire__sf_account_prefix":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SolidFire__sf_allow_tenant_qos":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SolidFire__sf_api_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SolidFire__sf_emulate_512":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SolidFire__sf_enable_vag":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SolidFire__sf_provisioning_calc":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SolidFire__sf_svip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SolidFire__sf_volume_prefix":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorPool__storpool_replication":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorPool__storpool_template":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__cycle_period_seconds":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_peer_pool":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_preferred_host_site__transform_csv_kvs":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_san_secondary_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_allow_tenant_qos":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_flashcopy_rate":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_flashcopy_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_mirror_pool":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_multipath_enabled":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_stretched_cluster_partner":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_vol_autoexpand":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_vol_compression":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_vol_easytier":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_vol_grainsize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_vol_iogrp":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_vol_nofmtdisk":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_vol_rsize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_vol_warning":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCFC__storwize_svc_volpool_name__transform_csv":{"ty
pe":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__cycle_period_seconds":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_peer_pool":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_preferred_host_site__transform_csv_kvs":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_san_secondary_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_allow_tenant_qos":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_flashcopy_rate":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_flashcopy_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_iscsi_chap_enabled":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_mirror_pool":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_stretched_cluster_partner":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_vol_autoexpand":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_vol_compression":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_vol_easytier":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_vol_grainsize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_vol_iogrp":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_vol_nofmtdisk":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_vol_rsize":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_vol_warning":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__StorwizeSVCISCSI__storwize_svc_volpool_name__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__chap_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__chap_username":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__driver_use_ssl":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__iscsi_secondary_ip_addresses__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__synology_admin_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__synology_device_id":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__synology_one_time_pass":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__synology_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__synology_pool_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__synology_ssl_verify":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__synology_username":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__target_ip_address":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__target_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__target_prefix":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__SynoISCSI__target_protocol":{"type":"string"},"config.envVars.X_CSI_BACK
END_CONFIG.driver__SynoISCSI__use_chap_auth":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__Unity__remove_empty_host":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__Unity__unity_io_ports__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__Unity__unity_storage_pool_names__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VNX__default_timeout":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VNX__destroy_empty_storage_group":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VNX__force_delete_lun_in_storagegroup":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VNX__ignore_pool_full_threshold":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VNX__initiator_auto_deregistration":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VNX__initiator_auto_registration":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VNX__io_port_list__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VNX__iscsi_initiators":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VNX__max_luns_per_storage_group":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VNX__naviseccli_path":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VNX__storage_vnx_authentication_type":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VNX__storage_vnx_pool_names__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VNX__storage_vnx_security_file_dir":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VNX__vnx_async_migrate":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VZStorage__vzstorage_default_volume_format":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VZStorage__vzstorage_mount_options__transform_csv":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VZStorage__vzstorage_mount_point_base":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VZStorage__vzstorage_shares_config":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VZStorage__vzstorage_sparsed_volumes":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VZStorage__vzstorage_used_ratio__transform_string_float":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VxFlexOS__vxflexos_allow_non_padded_volumes":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VxFlexOS__vxflexos_max_over_subscription_ratio__transform_string_float":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VxFlexOS__vxflexos_rest_server_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VxFlexOS__vxflexos_round_volume_capacity":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VxFlexOS__vxflexos_server_api_version":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VxFlexOS__vxflexos_storage_pools":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__VxFlexOS__vxflexos_unmap_volume_before_deletion":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__WindowsISCSI__windows_iscsi_lun_path":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__WindowsSmbfs__smbfs_default_volume_format":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__WindowsSmbfs__smbfs_mount_point_base":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__WindowsSmbfs__smbfs_pool_mappings__transform_csv_kvs":{"type":"string"},"config.envVars.X_CSI_BACKEND_
CONFIG.driver__WindowsSmbfs__smbfs_shares_config":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOFC__driver_ssl_cert_path":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOFC__driver_ssl_cert_verify":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOFC__san_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOFC__san_login":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOFC__san_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOFC__xtremio_array_busy_retry_count":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOFC__xtremio_array_busy_retry_interval":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOFC__xtremio_clean_unused_ig":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOFC__xtremio_cluster_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOISCSI__driver_ssl_cert_path":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOISCSI__driver_ssl_cert_verify":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOISCSI__san_ip":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOISCSI__san_login":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOISCSI__san_password":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOISCSI__xtremio_array_busy_retry_count":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOISCSI__xtremio_array_busy_retry_interval":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOISCSI__xtremio_clean_unused_ig":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__XtremIOISCSI__xtremio_cluster_name":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__ZadaraVPSAISCSI__zadara_access_key":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__ZadaraVPSAISCSI__zadara_default_snap_policy":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__ZadaraVPSAISCSI__zadara_ssl_cert_verify":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__ZadaraVPSAISCSI__zadara_use_iser":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__ZadaraVPSAISCSI__zadara_vol_encrypt":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__ZadaraVPSAISCSI__zadara_vol_name_template":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__ZadaraVPSAISCSI__zadara_vpsa_host":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__ZadaraVPSAISCSI__zadara_vpsa_poolname":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__ZadaraVPSAISCSI__zadara_vpsa_port":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.driver__ZadaraVPSAISCSI__zadara_vpsa_use_ssl":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.enable_unsupported_driver":{"type":"string"},"config.envVars.X_CSI_BACKEND_CONFIG.name":{"type":"string"},"config.envVars.X_CSI_DEBUG_MODE":{"type":"string"},"config.envVars.X_CSI_DEFAULT_MOUNT_FS":{"type":"string"},"config.envVars.X_CSI_EMBER_CONFIG.debug":{"type":"string"},"config.envVars.X_CSI_EMBER_CONFIG.disable_logs":{"type":"string"},"config.envVars.X_CSI_EMBER_CONFIG.disabled__transform_csv":{"type":"string"},"config.envVars.X_CSI_EMBER_CONFIG.enable_probe":{"type":"string"},"config.envVars.X_CSI_EMBER_CONFIG.grpc_workers":{"type":"string"},"config.envVars.X_CSI_EMBER_CONFIG.plugin_name":{"type":"string"},"config.envVars.X_CSI_EMBER_CONFIG.project_id":{"type":"string"},"confi
g.envVars.X_CSI_EMBER_CONFIG.slow_operations":{"type":"string"},"config.envVars.X_CSI_EMBER_CONFIG.user_id":{"type":"string"},"config.envVars.X_CSI_PERSISTENCE_CONFIG":{"type":"string"},"config.sysFiles.name":{"type":"string"}}'

See that the examples in the CSV are not valid.

Github workflow step should verify the status of community signing pipeline trigger

(Please let me know if a Jira is preferred and which project to file it)
Currently, the community signing pipeline is triggered by a curl command within the GitHub workflow, but the result is not verified. If the webhook fails or malfunctions, new community-operator-index images would be left unsigned and become unpullable, without anyone knowing. The "Sign index" workflow step should check the return code of the webhook query and fail the step if a non-2xx code is returned. Optionally, it would be good if the step could then retry the query a few times until it succeeds.
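
For illustration, a minimal sketch of such a check (the step name, secret, and payload variable are hypothetical placeholders); curl's --fail flag turns non-2xx responses into a non-zero exit code, and --retry re-attempts transient failures:

- name: Sign index
  run: |
    # SIGNING_WEBHOOK_URL and PAYLOAD are hypothetical placeholders
    curl --fail --silent --show-error \
      --retry 3 --retry-delay 30 \
      -X POST "${{ secrets.SIGNING_WEBHOOK_URL }}" \
      -H "Content-Type: application/json" \
      -d "$PAYLOAD"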

postgresql bundles cannot be loaded by the API

The PR: a794483 broke the following distributions:

  • /operators/postgresql/4.0.1
  • /operators/postgresql/4.1.0
  • /operators/postgresql/4.2.0
  • /operators/postgresql/4.2.1
  • /operators/postgresql/4.2.2
  • /operators/postgresql/4.3.2
  • /operators/postgresql/4.4.0
  • /operators/postgresql/4.4.1
  • /operators/postgresql/4.5.0
  • /operators/postgresql/4.5.1
  • /operators/postgresql/4.6.0
  • /operators/postgresql/4.6.1
  • /operators/postgresql/4.6.2
  • /operators/postgresql/4.7.0

Why?

The manifests are duplicated and live outside of the manifests directory.
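
For reference, the expected layout keeps a single copy of the manifests under the manifests/ subdirectory of each bundle (a sketch; the file names are hypothetical):

operators/postgresql/4.7.0/
  manifests/
    postgresql.v4.7.0.clusterserviceversion.yaml
    postgresql.crd.yaml
  metadata/
    annotations.yaml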

Is there a reason the prometheus operator is limited to Single Namespace and OwnNamespace mode?

Hey All,

Just wondering if there was any history here, or if people would be open to enabling the MultiNamespace and AllNamespaces capabilities, which would make this operator far more powerful/capable. I've used the underlying operator enough to know that it's fully capable of being used in this way; just wondering if there was concern about conflicts with the openshift-monitoring infrastructure, or some other reason this isn't allowed.
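
For context, the supported modes are declared in the CSV's spec.installModes; enabling all of them would look roughly like this (a sketch of the relevant stanza only):

installModes:
  - supported: true
    type: OwnNamespace
  - supported: true
    type: SingleNamespace
  - supported: true
    type: MultiNamespace
  - supported: true
    type: AllNamespaces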

The `ember-csi-community-operator.v0.9.1` has deprecated APIs and is without the max ocp version set

All OpenShift tests are failing with all versions.

#1141

Tests are failing with the following error:

Temp index not found. Are your commits squashed? If so, please check logs https://github.com/redhat-openshift-ecosystem/community-operators-prod/actions?query=workflow%3Aprepare-test-index
{"component":"entrypoint","error":"wrapped process failed: exit status 1","file":"k8s.io/test-infra/prow/entrypoint/run.go:80","func":"k8s.io/test-infra/prow/entrypoint.Options.Run","level":"error","msg":"Error executing test process","severity":"error","time":"2022-04-27T15:20:05Z"}
error: failed to execute wrapped command: exit status 1 
INFO[2022-04-27T15:20:14Z] Step deploy-operator-on-openshift-deploy-operator failed after 19m50s. 
INFO[2022-04-27T15:20:14Z] Step phase test failed after 19m50s.         
INFO[2022-04-27T15:20:14Z] Running multi-stage phase post               
INFO[2022-04-27T15:20:14Z] Running step deploy-operator-on-openshift-gather-aws-console. 
INFO[2022-04-27T15:20:34Z] Step deploy-operator-on-openshift-gather-aws-console succeeded after 20s. 
INFO[2022-04-27T15:20:34Z] Running step deploy-operator-on-openshift-gather-must-gather. 
INFO[2022-04-27T15:21:44Z] Step deploy-operator-on-openshift-gather-must-gather succeeded after 1m10s. 
45 skipped lines...
Temp index not found. Are your commits squashed? If so, please check logs https://github.com/redhat-openshift-ecosystem/community-operators-prod/actions?query=workflow%3Aprepare-test-index
{"component":"entrypoint","error":"wrapped process failed: exit status 1","file":"k8s.io/test-infra/prow/entrypoint/run.go:80","func":"k8s.io/test-infra/prow/entrypoint.Options.Run","level":"error","msg":"Error executing test process","severity":"error","time":"2022-04-27T15:20:05Z"}
error: failed to execute wrapped command: exit status 1

Flakey test: Operator test / kiwi / Full operator test : The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'latest'

Flakey test? The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'latest'

This test passed previously with exactly the same code, so I assume it's a bug in the test:

Operator test / kiwi / Full operator test (pull_request_target) Failing after 3m — kiwi / Full operator test

TASK [operator_index : Set versions and versions_bt] ***************************
fatal: [localhost]: FAILED! => 
  msg: |-
    The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'latest'
  
    The error appears to be in '/playbooks/upstream/roles/operator_index/tasks/op_index.yml': line 75, column 3, but may
    be elsewhere in the file depending on the exact syntax problem.
  
    The offending line appears to be:
  
  
    - name: "Set versions and versions_bt"
      ^ here

PLAY RECAP *********************************************************************
localhost                  : ok=186  changed=47   unreachable=0    failed=1    skipped=190  rescued=0    ignored=0   

Originally posted by @wallrj in #1023 (comment)

The community index image missed some etcd operator versions

As you can see from the output below, only the v0.6.1/v0.9.4 versions are in this registry.redhat.io/redhat/community-operator-index:v4.9 index image. But, in fact, there is also the v0.9.2 version in the https://github.com/redhat-openshift-ecosystem/community-operators-prod/tree/main/operators/etcd source code. I'd suggest that we add the missing versions to this community index image so that it aligns with the source code, thanks!

[cloud-user@preserve-olm-env jian]$ oc port-forward community-operators-89rpn 50051
Forwarding from 127.0.0.1:50051 -> 50051
Forwarding from [::1]:50051 -> 50051
Handling connection for 50051
...
[cloud-user@preserve-olm-env jian]$ grpcurl -plaintext -d '{"name":"etcd"}' localhost:50051 api.Registry/GetPackage
{
  "name": "etcd",
  "channels": [
    {
      "name": "alpha",
      "csvName": "etcdoperator.v0.6.1"
    },
    {
      "name": "clusterwide-alpha",
      "csvName": "etcdoperator.v0.9.4-clusterwide"
    },
    {
      "name": "singlenamespace-alpha",
      "csvName": "etcdoperator.v0.9.4"
    }
  ],
  "defaultChannelName": "singlenamespace-alpha"
}

Help with PR

Can someone please help me with #426 ?

I have no idea how to fix the tests. Thanks.

`opm alpha bundle generate` does not target manifests directories

/tmp/opm alpha bundle generate --directory operators/$OP_NAME/$OP_VER/ -u operators/$OP_NAME --package $OP_NAME

Shouldn't this line target the manifests subdirectory of the operator bundle? The example formats given in most of the operator-registry documentation and in the opm alpha bundle generate help text would seem to indicate this.

-d, --directory string The directory where bundle manifests for a specific version are located.
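
If so, the corrected invocation would presumably point at the manifests subdirectory (a sketch, assuming the conventional bundle layout):

/tmp/opm alpha bundle generate --directory operators/$OP_NAME/$OP_VER/manifests -u operators/$OP_NAME --package $OP_NAME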

@mvalarh This may be the issue with my PR.

infinispan is using removed apis and is without max ocp version = 4.8

infinispan is using removed apis and is without max ocp version = 4.8

See:

All the following versions ought to have the max OCP version set (see the annotation sketch after this list) or be upgraded to no longer use CRD v1beta1:

  • infinispan-operator.v0.2.1
  • infinispan-operator.v0.3.0
  • infinispan-operator.v0.3.1
  • infinispan-operator.v0.3.2
  • infinispan-operator.v1.0.0
  • infinispan-operator.v1.0.1
  • infinispan-operator.v1.1.0
  • infinispan-operator.v1.1.1
  • infinispan-operator.v1.1.2
  • infinispan-operator.v2.0.0
  • infinispan-operator.v2.0.1
  • infinispan-operator.v2.0.2
  • infinispan-operator.v2.0.3
  • infinispan-operator.v2.0.4
  • infinispan-operator.v2.0.5
  • infinispan-operator.v2.0.6
  • infinispan-operator.v2.1.0
  • infinispan-operator.v2.1.1
  • infinispan-operator.v2.1.2
  • infinispan-operator.v2.1.3
  • infinispan-operator.v2.1.4
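
For reference, a minimal sketch of setting the max OCP version on a CSV, assuming the olm.maxOpenShiftVersion property is what the pipeline checks:

metadata:
  annotations:
    olm.properties: '[{"type": "olm.maxOpenShiftVersion", "value": "4.8"}]'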

Document where to get the source code for the GitHub Action: operator-framework/community-operators

The documentation refers to a GitHub Action but it does not link to the source code for the action:

https://github.com/redhat-openshift-ecosystem/community-operators-prod/blob/gh-pages/action/index.html
https://github.com/k8s-operatorhub/community-operators/blob/gh-pages/action/index.html

I eventually found it here: https://github.com/operator-framework/community-operators/tree/action/stable in a branch of the archived repository.

I'd create a PR, but it's also not clear where the source markdown now lives.
All I could find was https://github.com/operator-framework/community-operators/blob/master/docs/action.md in the archived repository.

/cc @mvalarh

Digest change breaks catalog snapshots

Hi!
I am developing some content for RHPDS, using catalog snapshots to pin the images of the operators that are used. This is standard procedure to make sure a deployment workload will work exactly the same way throughout the lab lifecycle.
Twice already in the past months I have been impacted by digests changing in Quay. As a catalog, and therefore its snapshot, uses SHAs to refer to images, deployments fail because the images can no longer be found once the digest changes.
For example, I'm using a community-operator snapshot from 2021_07_01 (so quite recent). The subscription creates an install plan with this reference for the ODH operator image: quay.io/openshift-community-operators/opendatahub-operator@sha256:7cfa3371c22fb360793ea1e6a1e596fa9e8493300e509d353125f27be850c3d3
That image no longer exists, as all the images were apparently recreated 9 days ago due to this change: operator-framework/community-operators#4172
So, is it standard procedure to completely rebuild images in Quay from time to time, even when there is no change at all to the operator image itself? If that's the case, is there a way to avoid the issue I am facing?
I know I could bundle the operator image itself with the catalog as part of my workload, but that would become really huge if I have to do this for all operators I'm deploying as part of this workload.
Thanks!

We are disabling `v4.6` and `v4.7`

Dear OpenShift Operator Community,
from Oct 27, 2022 we are disabling tests and updates on the v4.6 and v4.7 indexes. Despite this, the indexes will still be present and ready for use as static indexes. No action is needed on your side.

Kind Regards,
Community operators team

mentions:
@AdheipSingh, @ArangoGutierrez, @ArbelNathan, @Avni-Sharma, @Carrefour-Group, @DTMad, @EnterpriseDB, @Flagsmith, @Fryguy, @HubertStefanski, @Kaitou786, @Kong, @LCaparelli, @LaVLaS, @MarcinGinszt, @OchiengEd, @Project-Herophilus, @Rajpratik71, @SDBrett, @SteveMattar, @TheWeatherCompany, @aamirqs, @abaiken, @akhil-rane, @alien-ike, @aliok, @andreaskaris, @antonlisovenko, @apeschel, @aravindhp, @ashenoy-hedvig, @aslakknutsen, @astefanutti, @avalluri, @babbageclunk, @bart0sh, @bcrochet, @blaqkube, @blublinsky, @bo0ts, @bznein, @camilamacedo86, @cap1984, @chanwit, @chatton, @che-bot, @che-incubator, @che-incubator-bot, @chetan-rns, @christophd, @clamoriniere, @cliveseldon, @cloudant, @couchbase-partners, @cschwede, @dabeeeenster, @dagrayvid, @danielpacak, @darkowlzz, @david-kow, @deeghuge, @deekshahegde86, @devOpsHelm, @dgoodwin, @dinhxuanvu, @djzager, @dlbock, @dtrawins, @dymurray, @ecordell, @eresende-nuodb, @erikerlandson, @esara, @f41gh7, @fao89, @fbladilo, @fcanovai, @ferranbt, @fjammes, @flaper87, @frant-hartm, @gallettilance, @gautambaghel, @germanodasilva, @gngeorgiev, @gregsheremeta, @gurushantj, @guyyakir, @gyliu513, @haibinxie, @hasheddan, @hco-bot, @himanshug, @houshengbo, @iamabhishek-dubey, @ibuziuk, @idanmo, @instana, @irajdeep, @ivanstanev, @ivanvtimofeev, @jeesmon, @jianzhangbjz, @jkatz, @jkhelil, @jmazzitelli, @jmccormick2001, @jmesnil, @joelddiaz, @jogetworkflow, @jomeier, @jomkz, @jonathanvila, @jpkrohling, @jsenko, @juljog, @kaiso, @kerenlahav, @kerrygunn, @khaledsulayman, @kingdonb, @knrc, @kshithijiyer, @kubernetes-projects, @kulkarnicr, @lbroudoux, @little-guy-lxr, @lrgar, @lsst, @madhukirans, @madorn, @maistra, @maskarb, @max3903, @mdonkers, @microcks, @miguelsorianod, @mkuznyetsov, @mnencia, @mrethers, @mrizzi, @msherif1234, @mtyazici, @muvaf, @n1r1, @nickboldt, @nicolaferraro, @nikhil-thomas, @olukas, @operator-framework, @oranichu, @orenc1, @oribon, @oriyarde, @owais, @pavelmaliy, @pebrc, @pedjak, @phantomjinx, @piyush-nimbalkar, @portworx, @prft-rh, @pweil-, @raffaelespazzoli, @rainest, @rajivnathan, @raunakkumar, @rayfordj, @redhat-cop, @rensyct, @renuka-fernando, @rgolangh, @rhm-samples, @rhrazdil, @rigazilla, @rishabh-shah12, @rmr-silicom, @robshelly, @rodrigovalin, @rohanjayaraj, @rubenvp8510, @rvansa, @ryanemerson, @saada, @sabinaaledort, @sabre1041, @sbose78, @scholzj, @sebsoto, @secondsun, @selvamt94, @shubham-pampattiwar, @slaskawi, @slopezz, @snyk, @snyksec, @spolti, @spron-in, @startxfr, @stolostron, @sunsingerus, @sxd, @tahmmee, @teseraio, @thbkrkr, @tibulca, @tomashibm, @tomgeorge, @tphee, @tumido, @twiest, @ursais, @vaibhavjainwiz, @vboulineau, @vkvamsiopsmx, @vmturbo, @vmuzikar, @wallrj, @waynesun09, @weii666, @welshDoug, @windup, @wmellouli, @wtam2018, @wtrocki, @xiangjingli, @yaacov, @zhiweiyin318, @zingero, @zregvart, @zroubalik

Deprecated API check in production

Dear OpenShift Operator Community,

As of September 21, 2021 we would like to start failing new PRs on bundles which are supposed to run on OpenShift v4.9 and are using deprecated APIs. You can either

  1. update APIs or
  2. add a label to annotations.yaml which prevents publishing to v4.9 (see the sketch just below the list):
com.redhat.openshift.versions: "v4.6-v4.8"
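
For context, the label sits under the top-level annotations key of the bundle's metadata/annotations.yaml (a minimal sketch; the package name is a hypothetical placeholder and other annotations are elided):

annotations:
  operators.operatorframework.io.bundle.package.v1: my-operator
  com.redhat.openshift.versions: "v4.6-v4.8"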

For those of you who use the packagemanifest format in your PR, the pipeline will ask you for confirmation before adding the label on the fly to the final published bundle. So, after opening a PR containing a packagemanifest format with the wrong API, one manual intervention is needed: you will be asked to reply to the post, and the label v4.6-v4.8 will then be applied to the final bundle automatically.

We will let you know here, once features are in place.

Kind Regards,
The community-operators maintainers team

Mentions:
@AdheipSingh, @ArangoGutierrez, @ArbelNathan, @AstroProfundis, @Avni-Sharma, @Carrefour-Group, @DTMad, @EnterpriseDB, @Flagsmith, @Fryguy, @HubertStefanski, @J0zi, @Kaitou786, @Kong, @LCaparelli, @LaVLaS, @Listson, @LorbusChris, @MarcinGinszt, @OchiengEd, @Project-Herophilus, @Rajpratik71, @RakhithaRR, @SDBrett, @Simonzhaohui, @SteveMattar, @Tatsinnit, @TheWeatherCompany, @aamirqs, @abaiken, @akhil-rane, @alien-ike, @aliok, @andreaskaris, @antonlisovenko, @apeschel, @aravindhp, @ashenoy-hedvig, @aslakknutsen, @astefanutti, @avalluri, @babbageclunk, @bart0sh, @bcrochet, @bigkevmcd, @blaqkube, @blublinsky, @bo0ts, @brian-avery, @bznein, @camilamacedo86, @cap1984, @chanwit, @chatton, @chbatey, @che-bot, @che-incubator, @che-incubator-bot, @chetan-rns, @christophd, @clamoriniere, @cliveseldon, @cloudant, @couchbase-partners, @cschwede, @ctron, @dabeeeenster, @dagrayvid, @danielpacak, @dannyzaken, @darkowlzz, @david-kow, @deeghuge, @deekshahegde86, @devOpsHelm, @dgoodwin, @dinhxuanvu, @djzager, @dlbock, @dmesser, @dragonly, @dtrawins, @dymurray, @ecordell, @eguzki, @eresende-nuodb, @erikerlandson, @esara, @evan-hataishi, @f41gh7, @fao89, @fbladilo, @fcanovai, @ferranbt, @fjammes, @flaper87, @frant-hartm, @gallettilance, @gautambaghel, @germanodasilva, @gngeorgiev, @gregsheremeta, @gunjan5, @gurushantj, @guyyakir, @gyliu513, @haibinxie, @hasancelik, @hasheddan, @hco-bot, @himanshug, @houshengbo, @husky-parul, @iamabhishek-dubey, @ibuziuk, @idanmo, @instana, @irajdeep, @ivanstanev, @ivanvtimofeev, @jeesmon, @jianzhangbjz, @jitendar-singh, @jkatz, @jkhelil, @jmazzitelli, @jmccormick2001, @jmeis, @jmesnil, @joelddiaz, @jogetworkflow, @jomeier, @jomkz, @jonathanvila, @jpkrohling, @jsenko, @juljog, @kaiso, @kerenlahav, @kerrygunn, @khaledsulayman, @kingdonb, @knrc, @ksatchit, @kshithijiyer, @kubemod, @kubernetes-projects, @kulkarnicr, @lbroudoux, @little-guy-lxr, @lrgar, @lsst, @madhukirans, @madorn, @maistra, @maskarb, @matzew, @max3903, @mdonkers, @mflendrich, @microcks, @miguelsorianod, @mkuznyetsov, @mnencia, @mrethers, @mrizzi, @msherif1234, @mtyazici, @muvaf, @mvalahtv, @mvalarh, @n1r1, @nickboldt, @nicolaferraro, @nikhil-thomas, @olukas, @open-cluster-management, @operator-framework, @oranichu, @orenc1, @oribon, @oriyarde, @owais, @pavelmaliy, @pebrc, @pedjak, @phantomjinx, @piyush-nimbalkar, @portworx, @prft-rh, @pweil-, @radtriste, @raffaelespazzoli, @rainest, @rajivnathan, @raunakkumar, @rayfordj, @redhat-cop, @rensyct, @renuka-fernando, @rgolangh, @rhm-samples, @rhrazdil, @ricardozanini, @rigazilla, @rishabh-shah12, @rmr-silicom, @robshelly, @rodrigovalin, @rohanjayaraj, @rojkov, @rubenvp8510, @rvansa, @ryanemerson, @saada, @sabinaaledort, @sabre1041, @sbose78, @scholzj, @sebsoto, @secondsun, @selvamt94, @shubham-pampattiwar, @slaskawi, @slopezz, @snyk, @snyksec, @spolti, @spron-in, @squids-io, @startxfr, @sunsingerus, @svallala, @sxd, @tahmmee, @teseraio, @thbkrkr, @tibulca, @tolusha, @tomashibm, @tomgeorge, @tphee, @tplavcic, @tumido, @twiest, @ursais, @vaibhavjainwiz, @vassilvk, @vboulineau, @vkvamsiopsmx, @vmturbo, @vmuzikar, @wallrj, @waveywaves, @waynesun09, @weii666, @welshDoug, @wiggzz, @willholley, @windup, @wmellouli, @wtam2018, @wtrocki, @xiangjingli, @yaacov, @zhiweiyin318, @zingero, @zregvart, @zroubalik

Memory consumption of community-operators pod

Discussed in https://github.com/redhat-openshift-ecosystem/community-operators-prod/discussions/1253

Originally posted by TiloGit May 30, 2022
Hi,

I noticed that the memory consumption of the community-operators pod is ~2 GB, while the request is 50Mi.
I see this on various systems.

  containers:
    - resources:
        requests:
          cpu: 10m
          memory: 50Mi

Can the community-operators pod really run with that little memory, or should the request be adjusted to a more realistic value? Or does this come from OCP/k8s as a default?
Similar question for CPU, but there the difference is not as large as for memory.

Btw, the other CatalogSource pods seem to do the same (50Mi request by default, using 500M to 2.5 GB).

[screenshot: memory/CPU usage of the CatalogSource pods, 2022-05-30 12:38:59]
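
For reference, actual usage can be compared against the requests with something like the following (requires cluster metrics to be available):

oc adm top pods -n openshift-marketplace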

Release 5.1.0: community-windows-machine-config-operator - Can't pull image

The image cannot be pulled. In the file community-windows-machine-config-operator.5.1.0.clusterserviceversion.yaml only the image tag is given, not the complete path.

Line 279: image: community-4.10-43fec34

Should be: image: quay.io/openshift-windows/community-windows-machine-config-operator:community-4.10-43fec34

Failed to pull the etcd image: "quay.io/openshift-community-operators/etcd@sha256:6334a904a229eb1d51a6708dd774e23bf6d04ab20cc5e9efffa363791dce90a8"

[cloud-user@preserve-olm-env jian]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2021-09-08-162532   True        False         16m     Cluster version is 4.9.0-0.nightly-2021-09-08-162532

[cloud-user@preserve-olm-env jian]$ oc get catalogsource community-operators -o yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  annotations:
    operatorframework.io/managed-by: marketplace-operator
    operatorframework.io/priorityclass: system-cluster-critical
    target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
  creationTimestamp: "2021-09-09T02:29:33Z"
  generation: 1
  name: community-operators
  namespace: openshift-marketplace
  resourceVersion: "29664"
  uid: 0dacf10c-0f31-4bb1-b257-0429e475663a
spec:
  displayName: Community Operators
  icon:
    base64data: ""
    mediatype: ""
  image: registry.redhat.io/redhat/community-operator-index:v4.9
  priority: -400
  publisher: Red Hat
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 10m0s

[cloud-user@preserve-olm-env jian]$ oc get pods
NAME                                                              READY   STATUS                  RESTARTS   AGE
9b59f03f8e8ea2f818061847881908aae51cf41836e4a3b822dcc6--1-pv2l5   0/1     Init:ImagePullBackOff   0          3m48s


[cloud-user@preserve-olm-env jian]$ oc describe pods 9b59f03f8e8ea2f818061847881908aae51cf41836e4a3b822dcc6--1-pv2l5 
Name:         9b59f03f8e8ea2f818061847881908aae51cf41836e4a3b822dcc6--1-pv2l5
Namespace:    openshift-marketplace
Priority:     0
...
...
  Warning  Failed          <invalid> (x2 over <invalid>)  kubelet            Failed to pull image "quay.io/openshift-community-operators/etcd@sha256:6334a904a229eb1d51a6708dd774e23bf6d04ab20cc5e9efffa363791dce90a8": rpc error: code = Unknown desc = reading manifest sha256:6334a904a229eb1d51a6708dd774e23bf6d04ab20cc5e9efffa363791dce90a8 in quay.io/openshift-community-operators/etcd: manifest unknown: manifest unknown
  Warning  Failed          <invalid> (x2 over <invalid>)  kubelet            Error: ErrImagePull
  Normal   BackOff         <invalid> (x2 over <invalid>)  kubelet            Back-off pulling image "quay.io/openshift-community-operators/etcd@sha256:6334a904a229eb1d51a6708dd774e23bf6d04ab20cc5e9efffa363791dce90a8"
  Warning  Failed          <invalid> (x2 over <invalid>)  kubelet            Error: ImagePullBackOff
  Normal   Pulling         <invalid> (x3 over <invalid>)  kubelet            Pulling image "quay.io/openshift-community-operators/etcd@sha256:6334a904a229eb1d51a6708dd774e23bf6d04ab20cc5e9efffa363791dce90a8"


[cloud-user@preserve-olm-env jian]$ oc port-forward community-operators-zxtdh 50051
Forwarding from 127.0.0.1:50051 -> 50051
Forwarding from [::1]:50051 -> 50051
Handling connection for 50051
Handling connection for 50051
Handling connection for 50051

Another terminal:
[cloud-user@preserve-olm-env jian]$ grpcurl -plaintext -d '{"name":"etcd"}' localhost:50051 api.Registry/GetPackage
{
  "name": "etcd",
  "channels": [
    {
      "name": "alpha",
      "csvName": "etcdoperator.v0.6.1"
    },
    {
      "name": "clusterwide-alpha",
      "csvName": "etcdoperator.v0.9.4-clusterwide"
    },
    {
      "name": "singlenamespace-alpha",
      "csvName": "etcdoperator.v0.9.4"
    }
  ],
  "defaultChannelName": "singlenamespace-alpha"
}

Something seems wrong with the index image, because there are 6 versions in the source code (https://github.com/redhat-openshift-ecosystem/community-operators-prod/tree/main/operators/etcd), but only 3 versions here.

[cloud-user@preserve-olm-env jian]$ grpcurl -plaintext -d '{"pkgName":"etcd","channelName":"singlenamespace-alpha"}' localhost:50051 api.Registry/GetBundleForChannel
{
  "csvName": "etcdoperator.v0.9.4",
  "packageName": "etcd",
  "channelName": "singlenamespace-alpha",
...
  "bundlePath": "quay.io/openshift-community-operators/etcd@sha256:6334a904a229eb1d51a6708dd774e23bf6d04ab20cc5e9efffa363791dce90a8",
  "providedApis": [

The bundle image failed to pull.

Projects shipped with invalid examples

Note that a new check was added to help us validate the CSV definition by ensuring that the alm-examples JSON is parsable, see:

After the next SDK release (1.17.0) we can start to check this in the ci/pipeline and disallow this scenario. However, the idea of this task is to let the authors know that their CSVs contain invalid examples and ask them to fix that.

The following packages and distributions have invalid examples:

infinispan:

  • Distribution: infinispan-operator.v2.1.3

Error: Value invalid character at 1260 [ { "apiVersion": "infinispan.org/v1", "kind": "Infinispan", "metadata": { "name": "example-infinispan" }, "spec": { "replicas": 1 } }, { "apiVersion": "infinispan.org/v2alpha1", "kind": "Backup", "metadata": { "name": "example-backup" }, "spec": { "cluster": "example-infinispan", "container": { "cpu": "1000m", "extraJvmOpts": "-Djava.property=me", "memory": "1Gi" }, "path": "asdasd" } }, { "apiVersion": "infinispan.org/v2alpha1", "kind": "Cache", "metadata": { "name": "example-cache" }, "spec": { "adminAuth": { "secretName": "basic-auth" }, "clusterName": "example-infinispan", "name": "mycache" } }, { "apiVersion": "infinispan.org/v2alpha1", "kind": "Restore", "metadata": { "name": "example-restore" }, "spec": { "cluster": "example-infinispan", "container": { "cpu": "1000m", "extraJvmOpts": "-Djava.property=me", "memory": "1Gi" }, "path": "asdasd" } }, { "apiVersion": "infinispan.org/v2alpha1", "kind": "Batch", "metadata": { "name": "example-batch", }<--(see the invalid character): invalid example

pulp-operator

  • pulp-operator.v0.4.0

Error: Value invalid character at 359 [ { "apiVersion": "pulp.pulpproject.org/v1beta1", "kind": "Pulp", "metadata": { "name": "example-pulp" }, "spec": { "storage_type": "File", "file_storage_size": "50Gi", "file_storage_access_mode": "ReadWriteMany", "image": "quay.io/pulp/pulp:3.16.0", "image_web": "quay.io/pulp/pulp-web:3.16.0", }<--(see the invalid character): invalid example

  • pulp-operator.v0.3.0

Error: Value invalid character at 322 [ { "apiVersion": "pulp.pulpproject.org/v1beta1", "kind": "Pulp", "metadata": { "name": "example-pulp", "namespace": "example-pulp" }, "spec": { "tag": "0.3.0", "storage_type": "File", "file_storage_size": "50Gi", "file_storage_access_mode": "ReadWriteMany", }<--(see the invalid character): invalid example

tf-operator

  • tf-operator.latest

Error: Value invalid character at 1464 [ { "apiVersion": "tf.tungsten.io/v1alpha1", "kind": "Manager", "metadata": { "name": "cluster1", "namespace": "tf" }, "spec": { "commonConfiguration": { "authParameters": { "authMode": "noauth" }, "hostNetwork": true, "replicas": 3, "useKubeadmConfig": true }, "services": { "analytics": { "metadata": { "labels": { "tf_cluster": "cluster1" }, "name": "analytics1" }, "spec": { "commonConfiguration": { "nodeSelector": { "node-role.kubernetes.io/master": "" } }, "serviceConfiguration": { "containers": [ { "image": "docker.io/tungstenfabric/contrail-analytics-api:latest", "name": "analyticsapi" }, { "image": "docker.io/tungstenfabric/contrail-analytics-collector:latest", "name": "collector" }, { "image": "docker.io/tungstenfabric/contrail-nodemgr:latest", "name": "nodemanager" }, { "image": "docker.io/tungstenfabric/contrail-provisioner:latest", "name": "provisioner" } ], }<--(see the invalid character): invalid example
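In every case above, the "invalid character" is a trailing comma before a closing brace, which strict JSON forbids. A minimal illustration of the failing pattern and its fix:

{ "metadata": { "name": "example-batch", } }   <-- invalid: trailing comma
{ "metadata": { "name": "example-batch" } }    <-- valid JSON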

CI artifacts should include deployment conditions

The CI artifacts include a description of all the Deployment resources for the operator-under-test,
but the description does not include Deployment.Status.Conditions, which contains the error messages linked to the ReplicaSet resources of the Deployment, because the output is generated using oc describe.

The artifact should instead be generated using oc get deployment -o yaml, which includes the full Deployment status.
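For example (a sketch; the deployment and namespace names are hypothetical):

oc get deployment my-operator -n my-namespace -o yaml
# or only the conditions:
oc get deployment my-operator -n my-namespace -o jsonpath='{.status.conditions}'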

Literally, it is failing to create the first replica in the namespace, so there are no logs from the non-existent Pod. ReplicaFailure True FailedCreate

Sure. But I was looking for a log of all the Kubernetes Events, or something else that explains why it failed to create the Pod.

Originally posted by @wallrj in #1777 (comment)
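For reference, the Events could be collected with something like the following (a sketch; the namespace name is hypothetical):

oc get events -n my-namespace --sort-by=.lastTimestamp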

Operator PR passed, but not updated in the indexes

We created a PR for the sosivio operator, which passed the CI: #1392
It was merged to main and passed there as well, but the operator was never updated in OperatorHub in OCP (the same happened on OperatorHub.io).

info:

  • we use semver mode (in ci.yml)
  • we updated from 1.4.1 to 1.4.1-1
  • link to operatorhub

(no link to openshift for obvious reasons :) )

Can't run `orange / Deploy o7t` tests in fork repository

Hi,
I want to test my operator in a fork of this repository, to see whether it passes your tests.
The tests fail because the secret IIB_INPUT_REGISTRY_TOKEN is not set in my repository, and I can't set it since I don't know the password for the mvalahtv user.
Can you change the workflow so that it also works with forked repositories?
Thanks
