
cilium-olm's Introduction



Cilium is a networking, observability, and security solution with an eBPF-based dataplane. It provides a simple flat Layer 3 network with the ability to span multiple clusters in either a native routing or overlay mode. It is L7-protocol aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing.

Cilium implements distributed load balancing for traffic between pods and to external services, and is able to fully replace kube-proxy, using efficient hash tables in eBPF allowing for almost unlimited scale. It also supports advanced functionality like integrated ingress and egress gateway, bandwidth management and service mesh, and provides deep network and security visibility and monitoring.

A new Linux kernel technology called eBPF is at the foundation of Cilium. It supports dynamic insertion of eBPF bytecode into the Linux kernel at various integration points such as: network IO, application sockets, and tracepoints to implement security, networking and visibility logic. eBPF is highly efficient and flexible. To learn more about eBPF, visit eBPF.io.

Overview of Cilium features for networking, observability, service mesh, and runtime security

Stable Releases

The Cilium community maintains stable releases for the last three minor Cilium versions. Minor releases older than that are considered EOL.

For upgrades to new minor releases please consult the Cilium Upgrade Guide.

Listed below are the actively maintained release branches along with their latest patch release, corresponding image pull tags and their release notes:

v1.15 2024-07-11 quay.io/cilium/cilium:v1.15.7 Release Notes
v1.14 2024-07-11 quay.io/cilium/cilium:v1.14.13 Release Notes
v1.13 2024-07-11 quay.io/cilium/cilium:v1.13.18 Release Notes

Architectures

Cilium images are distributed for AMD64 and AArch64 architectures.

Software Bill of Materials

Starting with Cilium version 1.13.0, all images include a Software Bill of Materials (SBOM). The SBOM is generated in SPDX format. More information on this is available on Cilium SBOM.

Development

For development and testing purposes, the Cilium community publishes snapshots, early release candidates (RC), and CI container images built from the main branch. These images are not for use in production.

For testing upgrades to new development releases please consult the latest development build of the Cilium Upgrade Guide.

Listed below are branches for testing along with their snapshots or RC releases, corresponding image pull tags and their release notes where applicable:

main daily quay.io/cilium/cilium-ci:latest N/A
v1.16.0-rc.2 2024-07-15 quay.io/cilium/cilium:v1.16.0-rc.2 Release Candidate Notes

Functionality Overview

Protect and secure APIs transparently

Cilium can secure modern application protocols such as REST/HTTP, gRPC, and Kafka. Traditional firewalls operate at Layer 3 and 4: a protocol running on a particular port is either completely trusted or blocked entirely. Cilium provides the ability to filter on individual application-protocol requests such as:

  • Allow all HTTP requests with method GET and path /public/.*. Deny all other requests.
  • Allow service1 to produce on Kafka topic topic1 and service2 to consume on topic1. Reject all other Kafka messages.
  • Require the HTTP header X-Token: [0-9]+ to be present in all REST calls.

See the section Layer 7 Policy in our documentation for the latest list of supported protocols and examples on how to use it.
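As a sketch of what the first rule above might look like in practice, a CiliumNetworkPolicy can carry HTTP-level rules. The `app` labels, policy name, and port below are hypothetical, and the exact schema should be checked against the Layer 7 Policy documentation for your Cilium version:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-public-get        # hypothetical policy name
spec:
  endpointSelector:
    matchLabels:
      app: my-service           # hypothetical workload label
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: my-client          # hypothetical client label
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/public/.*"    # requests not matching this rule are denied
```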

Secure service to service communication based on identities

Modern distributed applications rely on technologies such as application containers to facilitate agility in deployment and scale out on demand. This results in a large number of application containers being started in a short period of time. Typical container firewalls secure workloads by filtering on source IP addresses and destination ports. This concept requires the firewalls on all servers to be manipulated whenever a container is started anywhere in the cluster.

In order to avoid this situation, which limits scale, Cilium assigns a security identity to groups of application containers that share identical security policies. The identity is then associated with all network packets emitted by the application containers, allowing the receiving node to validate the identity. Security identity management is performed using a key-value store.

Secure access to and from external services

Label-based security is the tool of choice for cluster-internal access control. In order to secure access to and from external services, traditional CIDR-based security policies for both ingress and egress are supported. This allows limiting access to and from application containers to particular IP ranges.
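A CIDR-based egress rule of this kind might be sketched as follows (the label and IP range are hypothetical; consult the Cilium policy documentation for the authoritative schema):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-external-range    # hypothetical policy name
spec:
  endpointSelector:
    matchLabels:
      app: backend              # hypothetical workload label
  egress:
  - toCIDR:
    - 192.0.2.0/24              # example external IP range (TEST-NET-1)
```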

Simple Networking

A simple flat Layer 3 network with the ability to span multiple clusters connects all application containers. IP allocation is kept simple by using host scope allocators. This means that each host can allocate IPs without any coordination between hosts.

The following multi node networking models are supported:

  • Overlay: Encapsulation-based virtual network spanning all hosts. Currently, VXLAN and Geneve are baked in but all encapsulation formats supported by Linux can be enabled.

    When to use this mode: This mode has minimal infrastructure and integration requirements. It works on almost any network infrastructure as the only requirement is IP connectivity between hosts which is typically already given.

  • Native Routing: Use of the regular routing table of the Linux host. The network is required to be capable of routing the IP addresses of the application containers.

    When to use this mode: This mode is for advanced users and requires some awareness of the underlying networking infrastructure. This mode works well with:

    • Native IPv6 networks
    • Cloud network routers
    • Environments already running routing daemons
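As a rough sketch, the choice between the two modes maps onto Cilium's Helm values along these lines (value names follow recent Cilium releases and should be verified against the documentation for your version; the CIDR is a placeholder):

```yaml
# Overlay mode: encapsulate pod traffic between hosts
routingMode: tunnel
tunnelProtocol: vxlan                 # or "geneve"

# Native routing mode: the underlying network routes pod IPs itself
# routingMode: native
# ipv4NativeRoutingCIDR: 10.0.0.0/8   # example pod CIDR, hypothetical
```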

Load Balancing

Cilium implements distributed load balancing for traffic between application containers and to external services and is able to fully replace components such as kube-proxy. The load balancing is implemented in eBPF using efficient hashtables allowing for almost unlimited scale.

For north-south type load balancing, Cilium's eBPF implementation is optimized for maximum performance, can be attached to XDP (eXpress Data Path), and supports direct server return (DSR) as well as Maglev consistent hashing if the load balancing operation is not performed on the source host.

For east-west type load balancing, Cilium performs efficient service-to-backend translation right in the Linux kernel's socket layer (e.g. at TCP connect time) such that per-packet NAT operations overhead can be avoided in lower layers.
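These load-balancing options are typically toggled through Helm values; a hedged sketch, with names as in recent Cilium releases (check your version's documentation before relying on them):

```yaml
kubeProxyReplacement: true     # let Cilium take over kube-proxy's role
loadBalancer:
  algorithm: maglev            # Maglev consistent hashing
  mode: dsr                    # direct server return for north-south traffic
  acceleration: native         # attach to XDP where the NIC driver supports it
```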

Bandwidth Management

Cilium implements bandwidth management through efficient EDT-based (Earliest Departure Time) rate-limiting with eBPF for container traffic that is egressing a node. This significantly reduces transmission tail latencies for applications and avoids locking under multi-queue NICs, compared to traditional approaches such as HTB (Hierarchy Token Bucket) or TBF (Token Bucket Filter) as used in the bandwidth CNI plugin, for example.
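When the bandwidth manager is enabled, egress limits are requested per pod via the standard Kubernetes annotation; a minimal sketch (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rate-limited-app                     # hypothetical pod
  annotations:
    kubernetes.io/egress-bandwidth: "10M"    # enforced by Cilium's EDT-based limiter
spec:
  containers:
  - name: app
    image: nginx                             # placeholder image
```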

Monitoring and Troubleshooting

The ability to gain visibility and troubleshoot issues is fundamental to the operation of any distributed system. While we learned to love tools like tcpdump and ping and while they will always find a special place in our hearts, we strive to provide better tooling for troubleshooting. This includes tooling to provide:

  • Event monitoring with metadata: When a packet is dropped, the tool doesn't just report the source and destination IP of the packet; it also provides the full label information of both the sender and the receiver, among a lot of other information.
  • Metrics export via Prometheus: Key metrics are exported via Prometheus for integration with your existing dashboards.
  • Hubble: An observability platform specifically written for Cilium. It provides service dependency maps, operational monitoring and alerting, and application and security visibility based on flow logs.

Getting Started

What is eBPF and XDP?

Berkeley Packet Filter (BPF) is a Linux kernel bytecode interpreter originally introduced to filter network packets, e.g. for tcpdump and socket filters. The BPF instruction set and surrounding architecture have recently been significantly reworked with additional data structures such as hash tables and arrays for keeping state as well as additional actions to support packet mangling, forwarding, encapsulation, etc. Furthermore, a compiler back end for LLVM allows for programs to be written in C and compiled into BPF instructions. An in-kernel verifier ensures that BPF programs are safe to run and a JIT compiler converts the BPF bytecode to CPU architecture-specific instructions for native execution efficiency. BPF programs can be run at various hooking points in the kernel such as for incoming packets, outgoing packets, system calls, kprobes, uprobes, tracepoints, etc.

BPF continues to evolve and gain additional capabilities with each new Linux release. Cilium leverages BPF to perform core data path filtering, mangling, monitoring and redirection, and requires BPF capabilities that are in any Linux kernel version 4.8.0 or newer (the latest current stable Linux kernel is 4.14.x).

Many Linux distributions including CoreOS, Debian, Docker's LinuxKit, Fedora, openSUSE, and Ubuntu already ship kernel versions >= 4.8.x. You can check your Linux kernel version by running uname -a. If you are not yet running a recent enough kernel, check the documentation of your Linux distribution on how to run Linux kernel 4.9.x or later.
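A minimal POSIX shell sketch of such a check, comparing the major.minor part of `uname -r` against the 4.9 baseline mentioned above:

```shell
# Extract major.minor from the running kernel version and compare it
# against the minimum discussed above (4.9).
required_major=4
required_minor=9

kernel=$(uname -r)            # e.g. "5.15.0-91-generic"
major=${kernel%%.*}           # text before the first dot
rest=${kernel#*.}             # everything after the first dot
minor=${rest%%[!0-9]*}        # leading digits of the second component

if [ "$major" -gt "$required_major" ] || {
     [ "$major" -eq "$required_major" ] &&
     [ "$minor" -ge "$required_minor" ]; }; then
  echo "kernel $major.$minor: recent enough for Cilium"
else
  echo "kernel $major.$minor: too old, need >= $required_major.$required_minor"
fi
```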

To read up on the necessary kernel versions to run the BPF runtime, see the section Prerequisites.

https://cdn.jsdelivr.net/gh/cilium/cilium@main/Documentation/images/bpf-overview.png

XDP is a further step in evolution and enables running a specific flavor of BPF programs from the network driver with direct access to the packet's DMA buffer. This is, by definition, the earliest possible point in the software stack where programs can be attached, allowing for a programmable, high-performance packet processor in the Linux kernel networking data path.

Further information about BPF and XDP targeted for developers can be found in the BPF and XDP Reference Guide.

To learn more about Cilium, its extensions, and use cases around Cilium and BPF, take a look at the Further Readings section.

Community

Slack

Join the Cilium Slack channel to chat with Cilium developers and other Cilium users. This is a good place to learn about Cilium, ask questions, and share your experiences.

Special Interest Groups (SIG)

See Special Interest groups for a list of all SIGs and their meeting times.

Developer meetings

The Cilium developer community hangs out on Zoom to chat. Everybody is welcome.

eBPF & Cilium Office Hours livestream

We host a weekly community YouTube livestream called eCHO which (very loosely!) stands for eBPF & Cilium Office Hours. Join us live, catch up with past episodes, or head over to the eCHO repo and let us know your ideas for topics we should cover.

Governance

The Cilium project is governed by a group of Maintainers and Committers. How they are selected and govern is outlined in our governance document.

Adopters

A list of adopters of the Cilium project who are deploying it in production, and of their use cases, can be found in file USERS.md.

Roadmap

Cilium maintains a public roadmap. It gives a high-level view of the main priorities for the project, the maturity of different features and projects, and how to influence the project direction.

License

The Cilium user space components are licensed under the Apache License, Version 2.0. The BPF code templates are dual-licensed under the General Public License, Version 2.0 (only) and the 2-Clause BSD License (you can use the terms of either license, at your option).

cilium-olm's People

Contributors

christarazi, errordeveloper, michi-covalent, nathanjsweet, nbusseneau, priyasharma9, sayboras, ungureanuvladvictor


cilium-olm's Issues

test installation and upgrade

Since we have manifests for 1.8 and a few 1.9 releases, operator upgrades should be easily testable, e.g. install with 1.8.5 from manifests, use OLM to upgrade to 1.9.

scan failure: operator-metadata-preparation-bundle-image

This happens with the 1.10.1 bundle:

===== Test: operator-metadata-preparation-bundle-image =====



Failed to parse and prepare operator metadata:
java.lang.Exception: No valid OCP versions found for `com.redhat.openshift.versions` label: v4.5,v4.6,v4.7 organization: certified-operators

Output of the `extract-operator-bundle.yml` Ansible role:

2021-06-25 12:57:33,685 p=23 u=default n=ansible | Using /etc/ansible/ansible.cfg as config file
2021-06-25 12:57:33,897 p=23 u=default n=ansible | PLAY [Extract and parse the operator bundle image for testing usage] ***********
2021-06-25 12:57:33,910 p=23 u=default n=ansible | TASK [Extract the operator bundle image into files needed to run the tests] ****
2021-06-25 12:57:33,952 p=23 u=default n=ansible | TASK [extract_operator_bundle : Ensure that the operator bundle directory is empty] ***
2021-06-25 12:57:34,451 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python3"}, "changed": false, "path": "/tmp/operator-bundle", "state": "absent"}
2021-06-25 12:57:34,453 p=23 u=default n=ansible | TASK [extract_operator_bundle : Ensure that the operator testing directory exists and is empty] ***
2021-06-25 12:57:34,733 p=23 u=default n=ansible | ok: [localhost] => (item=absent) => {"ansible_loop_var": "item", "changed": false, "item": "absent", "path": "/home/jenkins/agent/test-operator", "state": "absent"}
2021-06-25 12:57:34,899 p=23 u=default n=ansible | changed: [localhost] => (item=directory) => {"ansible_loop_var": "item", "changed": true, "gid": 1001680000, "group": "1001680000", "item": "directory", "mode": "02777", "owner": "default", "path": "/home/jenkins/agent/test-operator", "size": 6, "state": "directory", "uid": 1001680000}
2021-06-25 12:57:34,901 p=23 u=default n=ansible | TASK [extract_operator_bundle : Copy the bundle image layers into a local directory using skopeo] ***
2021-06-25 12:57:37,142 p=23 u=default n=ansible | changed: [localhost] => {"changed": true, "cmd": "skopeo copy docker://registry-proxy.engineering.redhat.com/rh-osbs/iib:85288 oci:/tmp/operator-bundle:latest", "delta": "0:00:01.855013", "end": "2021-06-25 12:57:37.120945", "rc": 0, "start": "2021-06-25 12:57:35.265932", "stderr": "", "stderr_lines": [], "stdout": "Getting image source signatures\nCopying blob sha256:04096fcfd395ec19238b78033c1f187ad2dea9ad043de46af5d2fbbb5c2408ac\nCopying blob sha256:cf462a34353a199b63e5a317a3dddbdbee5d6b2915798806e86469e8549b4907\nCopying blob sha256:8383384b6e405a28ad70017140ef4e99a3a33e713b39b2291442714cda849831\nCopying blob sha256:a9fe1fbb05f00be2b6dfb5116457839ea3bdd38e6a5facd43555c0f921f93e9a\nCopying config sha256:6d1571eaff425c3fa6eda782e014347669690b1176733065d562509c44103ca9\nWriting manifest to image destination\nStoring signatures", "stdout_lines": ["Getting image source signatures", "Copying blob sha256:04096fcfd395ec19238b78033c1f187ad2dea9ad043de46af5d2fbbb5c2408ac", "Copying blob sha256:cf462a34353a199b63e5a317a3dddbdbee5d6b2915798806e86469e8549b4907", "Copying blob sha256:8383384b6e405a28ad70017140ef4e99a3a33e713b39b2291442714cda849831", "Copying blob sha256:a9fe1fbb05f00be2b6dfb5116457839ea3bdd38e6a5facd43555c0f921f93e9a", "Copying config sha256:6d1571eaff425c3fa6eda782e014347669690b1176733065d562509c44103ca9", "Writing manifest to image destination", "Storing signatures"]}
2021-06-25 12:57:37,144 p=23 u=default n=ansible | TASK [extract_operator_bundle : Inspect the copied image directory] ************
2021-06-25 12:57:37,478 p=23 u=default n=ansible | changed: [localhost] => {"changed": true, "cmd": "skopeo inspect --raw oci:///tmp/operator-bundle", "delta": "0:00:00.037190", "end": "2021-06-25 12:57:37.458496", "rc": 0, "start": "2021-06-25 12:57:37.421306", "stderr": "", "stderr_lines": [], "stdout": "{\"schemaVersion\":2,\"config\":{\"mediaType\":\"application/vnd.oci.image.config.v1+json\",\"digest\":\"sha256:6d1571eaff425c3fa6eda782e014347669690b1176733065d562509c44103ca9\",\"size\":4370},\"layers\":[{\"mediaType\":\"application/vnd.oci.image.layer.v1.tar+gzip\",\"digest\":\"sha256:04096fcfd395ec19238b78033c1f187ad2dea9ad043de46af5d2fbbb5c2408ac\",\"size\":7674},{\"mediaType\":\"application/vnd.oci.image.layer.v1.tar+gzip\",\"digest\":\"sha256:cf462a34353a199b63e5a317a3dddbdbee5d6b2915798806e86469e8549b4907\",\"size\":366},{\"mediaType\":\"application/vnd.oci.image.layer.v1.tar+gzip\",\"digest\":\"sha256:8383384b6e405a28ad70017140ef4e99a3a33e713b39b2291442714cda849831\",\"size\":433},{\"mediaType\":\"application/vnd.oci.image.layer.v1.tar+gzip\",\"digest\":\"sha256:a9fe1fbb05f00be2b6dfb5116457839ea3bdd38e6a5facd43555c0f921f93e9a\",\"size\":7039}]}", "stdout_lines": 
["{\"schemaVersion\":2,\"config\":{\"mediaType\":\"application/vnd.oci.image.config.v1+json\",\"digest\":\"sha256:6d1571eaff425c3fa6eda782e014347669690b1176733065d562509c44103ca9\",\"size\":4370},\"layers\":[{\"mediaType\":\"application/vnd.oci.image.layer.v1.tar+gzip\",\"digest\":\"sha256:04096fcfd395ec19238b78033c1f187ad2dea9ad043de46af5d2fbbb5c2408ac\",\"size\":7674},{\"mediaType\":\"application/vnd.oci.image.layer.v1.tar+gzip\",\"digest\":\"sha256:cf462a34353a199b63e5a317a3dddbdbee5d6b2915798806e86469e8549b4907\",\"size\":366},{\"mediaType\":\"application/vnd.oci.image.layer.v1.tar+gzip\",\"digest\":\"sha256:8383384b6e405a28ad70017140ef4e99a3a33e713b39b2291442714cda849831\",\"size\":433},{\"mediaType\":\"application/vnd.oci.image.layer.v1.tar+gzip\",\"digest\":\"sha256:a9fe1fbb05f00be2b6dfb5116457839ea3bdd38e6a5facd43555c0f921f93e9a\",\"size\":7039}]}"]}
2021-06-25 12:57:37,481 p=23 u=default n=ansible | TASK [extract_operator_bundle : Parse the image manifest json retrieved by skopeo] ***
2021-06-25 12:57:37,608 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"image_manifest": {"config": {"digest": "sha256:6d1571eaff425c3fa6eda782e014347669690b1176733065d562509c44103ca9", "mediaType": "application/vnd.oci.image.config.v1+json", "size": 4370}, "layers": [{"digest": "sha256:04096fcfd395ec19238b78033c1f187ad2dea9ad043de46af5d2fbbb5c2408ac", "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip", "size": 7674}, {"digest": "sha256:cf462a34353a199b63e5a317a3dddbdbee5d6b2915798806e86469e8549b4907", "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip", "size": 366}, {"digest": "sha256:8383384b6e405a28ad70017140ef4e99a3a33e713b39b2291442714cda849831", "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip", "size": 433}, {"digest": "sha256:a9fe1fbb05f00be2b6dfb5116457839ea3bdd38e6a5facd43555c0f921f93e9a", "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip", "size": 7039}], "schemaVersion": 2}}, "changed": false}
2021-06-25 12:57:37,610 p=23 u=default n=ansible | TASK [extract_operator_bundle : debug] *****************************************
2021-06-25 12:57:37,736 p=23 u=default n=ansible | ok: [localhost] => {
    "image_manifest": {
        "config": {
            "digest": "sha256:6d1571eaff425c3fa6eda782e014347669690b1176733065d562509c44103ca9",
            "mediaType": "application/vnd.oci.image.config.v1+json",
            "size": 4370
        },
        "layers": [
            {
                "digest": "sha256:04096fcfd395ec19238b78033c1f187ad2dea9ad043de46af5d2fbbb5c2408ac",
                "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
                "size": 7674
            },
            {
                "digest": "sha256:cf462a34353a199b63e5a317a3dddbdbee5d6b2915798806e86469e8549b4907",
                "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
                "size": 366
            },
            {
                "digest": "sha256:8383384b6e405a28ad70017140ef4e99a3a33e713b39b2291442714cda849831",
                "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
                "size": 433
            },
            {
                "digest": "sha256:a9fe1fbb05f00be2b6dfb5116457839ea3bdd38e6a5facd43555c0f921f93e9a",
                "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
                "size": 7039
            }
        ],
        "schemaVersion": 2
    }
}
2021-06-25 12:57:37,739 p=23 u=default n=ansible | TASK [extract_operator_bundle : Unpack the image layers using the umoci tool] ***
2021-06-25 12:57:38,049 p=23 u=default n=ansible | changed: [localhost] => {"changed": true, "cmd": "/usr/local/bin/umoci unpack --rootless --image /tmp/operator-bundle:latest /tmp/operator-bundle/data", "delta": "0:00:00.017238", "end": "2021-06-25 12:57:38.030055", "rc": 0, "start": "2021-06-25 12:57:38.012817", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
2021-06-25 12:57:38,052 p=23 u=default n=ansible | TASK [extract_operator_bundle : Copy the rootfs of the unpacked data into the operator work directory] ***
2021-06-25 12:57:38,355 p=23 u=default n=ansible | changed: [localhost] => {"changed": true, "cmd": "cp -r /tmp/operator-bundle/data/rootfs/* /home/jenkins/agent/test-operator", "delta": "0:00:00.004747", "end": "2021-06-25 12:57:38.335496", "rc": 0, "start": "2021-06-25 12:57:38.330749", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
2021-06-25 12:57:38,357 p=23 u=default n=ansible | TASK [Parse the operator bundle image, manifest and metadata] ******************
2021-06-25 12:57:38,391 p=23 u=default n=ansible | TASK [parse_operator_bundle : Inspect the bundle image with skopeo] ************
2021-06-25 12:57:38,732 p=23 u=default n=ansible | changed: [localhost] => {"changed": true, "cmd": "skopeo inspect oci:/tmp/operator-bundle:latest", "delta": "0:00:00.039334", "end": "2021-06-25 12:57:38.710159", "rc": 0, "start": "2021-06-25 12:57:38.670825", "stderr": "", "stderr_lines": [], "stdout": "{\n    \"Digest\": \"sha256:b281990102cb80fdc40f1f7c1bed3f8267671701dd94e4a4508e3ffa8b371928\",\n    \"RepoTags\": [],\n    \"Created\": \"2021-06-25T12:55:48.69285054Z\",\n    \"DockerVersion\": \"\",\n    \"Labels\": {\n        \"com.redhat.delivery.backport\": \"true\",\n        \"com.redhat.delivery.operator.bundle\": \"true\",\n        \"com.redhat.iib.pinned\": \"true\",\n        \"com.redhat.openshift.versions\": \"v4.5,v4.6,v4.7\",\n        \"io.buildah.version\": \"1.16.7\",\n        \"operators.operatorframework.io.bundle.channel.default.v1\": \"stable\",\n        \"operators.operatorframework.io.bundle.channels.v1\": \"stable\",\n        \"operators.operatorframework.io.bundle.manifests.v1\": \"manifests/\",\n        \"operators.operatorframework.io.bundle.mediatype.v1\": \"registry+v1\",\n        \"operators.operatorframework.io.bundle.metadata.v1\": \"metadata/\",\n        \"operators.operatorframework.io.bundle.package.v1\": \"cilium\",\n        \"operators.operatorframework.io.metrics.builder\": \"operator-sdk-v1.0.1\",\n        \"operators.operatorframework.io.metrics.mediatype.v1\": \"metrics+v1\",\n        \"operators.operatorframework.io.metrics.project_layout\": \"helm.sdk.operatorframework.io/v1\"\n    },\n    \"Architecture\": \"amd64\",\n    \"Os\": \"linux\",\n    \"Layers\": [\n        \"sha256:04096fcfd395ec19238b78033c1f187ad2dea9ad043de46af5d2fbbb5c2408ac\",\n        \"sha256:cf462a34353a199b63e5a317a3dddbdbee5d6b2915798806e86469e8549b4907\",\n        \"sha256:8383384b6e405a28ad70017140ef4e99a3a33e713b39b2291442714cda849831\",\n        \"sha256:a9fe1fbb05f00be2b6dfb5116457839ea3bdd38e6a5facd43555c0f921f93e9a\"\n    ],\n    \"Env\": [\n  
      \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"\n    ]\n}", "stdout_lines": ["{", "    \"Digest\": \"sha256:b281990102cb80fdc40f1f7c1bed3f8267671701dd94e4a4508e3ffa8b371928\",", "    \"RepoTags\": [],", "    \"Created\": \"2021-06-25T12:55:48.69285054Z\",", "    \"DockerVersion\": \"\",", "    \"Labels\": {", "        \"com.redhat.delivery.backport\": \"true\",", "        \"com.redhat.delivery.operator.bundle\": \"true\",", "        \"com.redhat.iib.pinned\": \"true\",", "        \"com.redhat.openshift.versions\": \"v4.5,v4.6,v4.7\",", "        \"io.buildah.version\": \"1.16.7\",", "        \"operators.operatorframework.io.bundle.channel.default.v1\": \"stable\",", "        \"operators.operatorframework.io.bundle.channels.v1\": \"stable\",", "        \"operators.operatorframework.io.bundle.manifests.v1\": \"manifests/\",", "        \"operators.operatorframework.io.bundle.mediatype.v1\": \"registry+v1\",", "        \"operators.operatorframework.io.bundle.metadata.v1\": \"metadata/\",", "        \"operators.operatorframework.io.bundle.package.v1\": \"cilium\",", "        \"operators.operatorframework.io.metrics.builder\": \"operator-sdk-v1.0.1\",", "        \"operators.operatorframework.io.metrics.mediatype.v1\": \"metrics+v1\",", "        \"operators.operatorframework.io.metrics.project_layout\": \"helm.sdk.operatorframework.io/v1\"", "    },", "    \"Architecture\": \"amd64\",", "    \"Os\": \"linux\",", "    \"Layers\": [", "        \"sha256:04096fcfd395ec19238b78033c1f187ad2dea9ad043de46af5d2fbbb5c2408ac\",", "        \"sha256:cf462a34353a199b63e5a317a3dddbdbee5d6b2915798806e86469e8549b4907\",", "        \"sha256:8383384b6e405a28ad70017140ef4e99a3a33e713b39b2291442714cda849831\",", "        \"sha256:a9fe1fbb05f00be2b6dfb5116457839ea3bdd38e6a5facd43555c0f921f93e9a\"", "    ],", "    \"Env\": [", "        \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"", "    ]", "}"]}
2021-06-25 12:57:38,735 p=23 u=default n=ansible | TASK [parse_operator_bundle : Save the skopeo inspect output to a log file] ****
2021-06-25 12:57:39,460 p=23 u=default n=ansible | changed: [localhost] => {"changed": true, "checksum": "310c712a169b0533c744c10f9da0c99e51cc2544", "dest": "/home/jenkins/agent/workspace/cvp-isv-operator-bundle-image-validation-test/bundle-skopeo-inspect.json", "gid": 0, "group": "root", "md5sum": "793c083ef28e9a4e5033bd5007f5c178", "mode": "0644", "owner": "default", "size": 1652, "src": "/home/jenkins/agent/.ansible/tmp/ansible-tmp-1624625858.8761046-231-40274241097795/source", "state": "file", "uid": 1001680000}
2021-06-25 12:57:39,463 p=23 u=default n=ansible | TASK [parse_operator_bundle : Include the skopeo inspect result vars] **********
2021-06-25 12:57:39,592 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"skopeo_inspect_json": {"Architecture": "amd64", "Created": "2021-06-25T12:55:48.69285054Z", "Digest": "sha256:b281990102cb80fdc40f1f7c1bed3f8267671701dd94e4a4508e3ffa8b371928", "DockerVersion": "", "Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"], "Labels": {"com.redhat.delivery.backport": "true", "com.redhat.delivery.operator.bundle": "true", "com.redhat.iib.pinned": "true", "com.redhat.openshift.versions": "v4.5,v4.6,v4.7", "io.buildah.version": "1.16.7", "operators.operatorframework.io.bundle.channel.default.v1": "stable", "operators.operatorframework.io.bundle.channels.v1": "stable", "operators.operatorframework.io.bundle.manifests.v1": "manifests/", "operators.operatorframework.io.bundle.mediatype.v1": "registry+v1", "operators.operatorframework.io.bundle.metadata.v1": "metadata/", "operators.operatorframework.io.bundle.package.v1": "cilium", "operators.operatorframework.io.metrics.builder": "operator-sdk-v1.0.1", "operators.operatorframework.io.metrics.mediatype.v1": "metrics+v1", "operators.operatorframework.io.metrics.project_layout": "helm.sdk.operatorframework.io/v1"}, "Layers": ["sha256:04096fcfd395ec19238b78033c1f187ad2dea9ad043de46af5d2fbbb5c2408ac", "sha256:cf462a34353a199b63e5a317a3dddbdbee5d6b2915798806e86469e8549b4907", "sha256:8383384b6e405a28ad70017140ef4e99a3a33e713b39b2291442714cda849831", "sha256:a9fe1fbb05f00be2b6dfb5116457839ea3bdd38e6a5facd43555c0f921f93e9a"], "Os": "linux", "RepoTags": []}}, "changed": false}
2021-06-25 12:57:39,595 p=23 u=default n=ansible | TASK [parse_operator_bundle : Set the failed labels to an empty string and is_bundle_image to false] ***
2021-06-25 12:57:39,604 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"failed_labels": "", "is_bundle_image": false}, "changed": false}
2021-06-25 12:57:39,606 p=23 u=default n=ansible | TASK [parse_operator_bundle : If the required label is not found among the image labels, add it to the failed_labels] ***
2021-06-25 12:57:39,737 p=23 u=default n=ansible | skipping: [localhost] => (item=operators.operatorframework.io.bundle.package.v1)  => {"ansible_loop_var": "item", "changed": false, "item": "operators.operatorframework.io.bundle.package.v1", "skip_reason": "Conditional result was False"}
2021-06-25 12:57:39,740 p=23 u=default n=ansible | skipping: [localhost] => (item=operators.operatorframework.io.bundle.channels.v1)  => {"ansible_loop_var": "item", "changed": false, "item": "operators.operatorframework.io.bundle.channels.v1", "skip_reason": "Conditional result was False"}
2021-06-25 12:57:39,743 p=23 u=default n=ansible | skipping: [localhost] => (item=com.redhat.openshift.versions)  => {"ansible_loop_var": "item", "changed": false, "item": "com.redhat.openshift.versions", "skip_reason": "Conditional result was False"}
2021-06-25 12:57:39,745 p=23 u=default n=ansible | TASK [parse_operator_bundle : Fail if any of the required operator bundle image label(s) are not found] ***
2021-06-25 12:57:39,874 p=23 u=default n=ansible | skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}
2021-06-25 12:57:39,876 p=23 u=default n=ansible | TASK [parse_operator_bundle : If the image contains all of the required labels for a bundle image, set the is_bundle_image to true] ***
2021-06-25 12:57:39,886 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"is_bundle_image": true}, "changed": false}
2021-06-25 12:57:39,888 p=23 u=default n=ansible | TASK [parse_operator_bundle : Set the main operator bundle info as parsed from the bundle image labels] ***
2021-06-25 12:57:40,019 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"channels": ["stable"], "default_channel": "", "is_backport": "", "ocp_versions": "v4.5,v4.6,v4.7", "package_name": "cilium"}, "changed": false}
2021-06-25 12:57:40,021 p=23 u=default n=ansible | TASK [parse_operator_bundle : Set the current channel to the first value from the operators.operatorframework.io.bundle.channels.v1 label] ***
2021-06-25 12:57:40,151 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"current_channel": "stable"}, "changed": false}
2021-06-25 12:57:40,153 p=23 u=default n=ansible | TASK [parse_operator_bundle : Set is_backport according to the com.redhat.delivery.backport label, if it's missing, an empty is_backport means not set.] ***
2021-06-25 12:57:40,284 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"is_backport": "true"}, "changed": false}
2021-06-25 12:57:40,287 p=23 u=default n=ansible | TASK [parse_operator_bundle : Set default_channel according to the operators.operatorframework.io.bundle.channel.default.v1 label if present.] ***
2021-06-25 12:57:40,420 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"default_channel": "stable"}, "changed": false}
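The label-parsing tasks above reduce to a few string operations on the bundle image labels. A minimal Python sketch of that logic (the `parse_bundle_labels` helper is hypothetical; the label names are the ones shown in the log and the standard operatorframework bundle labels):

```python
# Hypothetical sketch of the fact-setting done by the parse_operator_bundle
# tasks above: split the channels label, take the first value as the current
# channel, and leave optional labels empty when absent.
def parse_bundle_labels(labels: dict) -> dict:
    channels = labels.get(
        "operators.operatorframework.io.bundle.channels.v1", ""
    ).split(",")
    return {
        "package_name": labels.get(
            "operators.operatorframework.io.bundle.package.v1", ""
        ),
        "channels": channels,
        # current_channel is the first value from the channels label
        "current_channel": channels[0] if channels and channels[0] else "",
        # default_channel only if the optional default-channel label is present
        "default_channel": labels.get(
            "operators.operatorframework.io.bundle.channel.default.v1", ""
        ),
        # an empty is_backport means the backport label was not set
        "is_backport": labels.get("com.redhat.delivery.backport", ""),
        "ocp_versions": labels.get("com.redhat.openshift.versions", ""),
    }

# Label values taken from the run above (default-channel label omitted here).
facts = parse_bundle_labels({
    "operators.operatorframework.io.bundle.package.v1": "cilium",
    "operators.operatorframework.io.bundle.channels.v1": "stable",
    "com.redhat.delivery.backport": "true",
    "com.redhat.openshift.versions": "v4.5,v4.6,v4.7",
})
```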
2021-06-25 12:57:40,423 p=23 u=default n=ansible | TASK [parse_operator_bundle : Determine paths with kind ClusterServiceVersion] ***
2021-06-25 12:57:40,880 p=23 u=default n=ansible | ok: [localhost] => {"changed": false, "examined": 8, "files": [{"atime": 1624625858.333047, "ctime": 1624625858.333047, "dev": 2065, "gid": 1001680000, "gr_name": "", "inode": 1154915079, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1624625858.333047, "nlink": 1, "path": "/home/jenkins/agent/test-operator/manifests/cilium-olm.csv.yaml", "pw_name": "default", "rgrp": true, "roth": true, "rusr": true, "size": 18736, "uid": 1001680000, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}], "matched": 1, "msg": ""}
2021-06-25 12:57:40,883 p=23 u=default n=ansible | TASK [parse_operator_bundle : Set the first location found as the CSV path since the operator bundles only support having one CSV] ***
2021-06-25 12:57:41,019 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"csv_path": "/home/jenkins/agent/test-operator/manifests/cilium-olm.csv.yaml"}, "changed": false}
2021-06-25 12:57:41,022 p=23 u=default n=ansible | TASK [parse_operator_bundle : Set variables for csv_path and current_csv_dir] ***
2021-06-25 12:57:41,156 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"current_csv_dir": "/home/jenkins/agent/test-operator/manifests"}, "changed": false}
2021-06-25 12:57:41,159 p=23 u=default n=ansible | TASK [parse_operator_bundle : Determine the CRD paths in the operator metadata directory] ***
2021-06-25 12:57:41,471 p=23 u=default n=ansible | ok: [localhost] => {"changed": false, "examined": 2, "files": [{"atime": 1624625860.8581235, "ctime": 1624625858.3340468, "dev": 2065, "gid": 1001680000, "gr_name": "", "inode": 1154915080, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1624625858.3340468, "nlink": 1, "path": "/home/jenkins/agent/test-operator/manifests/ciliumconfigs.crd.yaml", "pw_name": "default", "rgrp": true, "roth": true, "rusr": true, "size": 1716, "uid": 1001680000, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}], "matched": 1, "msg": ""}
2021-06-25 12:57:41,474 p=23 u=default n=ansible | TASK [parse_operator_bundle : Set crd_paths to collect crd_paths] **************
2021-06-25 12:57:41,484 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"crd_paths": []}, "changed": false}
2021-06-25 12:57:41,486 p=23 u=default n=ansible | TASK [parse_operator_bundle : Get paths from crd_paths_result] *****************
2021-06-25 12:57:41,622 p=23 u=default n=ansible | ok: [localhost] => (item={'path': '/home/jenkins/agent/test-operator/manifests/ciliumconfigs.crd.yaml', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 1001680000, 'gid': 1001680000, 'size': 1716, 'inode': 1154915080, 'dev': 2065, 'nlink': 1, 'atime': 1624625860.8581235, 'mtime': 1624625858.3340468, 'ctime': 1624625858.3340468, 'gr_name': '', 'pw_name': 'default', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) => {"ansible_facts": {"crd_paths": ["/home/jenkins/agent/test-operator/manifests/ciliumconfigs.crd.yaml"]}, "ansible_loop_var": "item", "changed": false, "item": {"atime": 1624625860.8581235, "ctime": 1624625858.3340468, "dev": 2065, "gid": 1001680000, "gr_name": "", "inode": 1154915080, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mode": "0644", "mtime": 1624625858.3340468, "nlink": 1, "path": "/home/jenkins/agent/test-operator/manifests/ciliumconfigs.crd.yaml", "pw_name": "default", "rgrp": true, "roth": true, "rusr": true, "size": 1716, "uid": 1001680000, "wgrp": false, "woth": false, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
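The two `find`/`set_fact` sequences above (locate the one CSV, then collect CRD paths) amount to scanning the manifests directory for YAML files of a given kind. A rough Python sketch, assuming a simple top-level `kind:` line match (the `find_paths_by_kind` helper and the sample files are hypothetical):

```python
import pathlib
import tempfile

# Hypothetical sketch of the "determine paths with kind X" tasks above:
# keep manifest YAML files whose kind line matches the requested kind.
def find_paths_by_kind(manifest_dir: str, kind: str) -> list:
    matches = []
    for path in sorted(pathlib.Path(manifest_dir).glob("*.yaml")):
        lines = path.read_text().splitlines()
        if any(line.strip() == f"kind: {kind}" for line in lines):
            matches.append(str(path))
    return matches

# Demo on a throwaway manifests directory mirroring the log's file names.
with tempfile.TemporaryDirectory() as d:
    base = pathlib.Path(d)
    (base / "cilium-olm.csv.yaml").write_text(
        "apiVersion: operators.coreos.com/v1alpha1\n"
        "kind: ClusterServiceVersion\n"
    )
    (base / "ciliumconfigs.crd.yaml").write_text(
        "apiVersion: apiextensions.k8s.io/v1\n"
        "kind: CustomResourceDefinition\n"
    )
    csv_paths = find_paths_by_kind(d, "ClusterServiceVersion")
    crd_paths = find_paths_by_kind(d, "CustomResourceDefinition")
    # operator bundles only support having one CSV, so take the first match
    csv_path = csv_paths[0]
    current_csv_dir = str(pathlib.Path(csv_path).parent)
```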
2021-06-25 12:57:41,625 p=23 u=default n=ansible | TASK [parse_operator_bundle : Read csv.yaml file] ******************************
2021-06-25 12:57:41,936 p=23 u=default n=ansible | changed: [localhost] => {"changed": true, "cmd": "cat /home/jenkins/agent/test-operator/manifests/cilium-olm.csv.yaml", "delta": "0:00:00.003739", "end": "2021-06-25 12:57:41.912348", "rc": 0, "start": "2021-06-25 12:57:41.908609", "stderr": "", "stderr_lines": [], "stdout": "apiVersion: operators.coreos.com/v1alpha1\nkind: ClusterServiceVersion\nmetadata:\n  annotations:\n    alm-examples: '[{\"apiVersion\":\"cilium.io/v1alpha1\",\"kind\":\"CiliumConfig\",\"metadata\":{\"name\":\"cilium-openshift-default\",\"namespace\":\"placeholder\"},\"spec\":{\"nativeRoutingCIDR\":\"10.128.0.0/14\",\"endpointRoutes\":{\"enabled\":true},\"kubeProxyReplacement\":\"probe\",\"cni\":{\"binPath\":\"/var/lib/cni/bin\",\"confPath\":\"/var/run/multus/cni/net.d\"},\"ipam\":{\"operator\":{\"clusterPoolIPv4PodCIDR\":\"10.128.0.0/14\",\"clusterPoolIPv4MaskSize\":\"23\"},\"mode\":\"cluster-pool\"},\"prometheus\":{\"serviceMonitor\":{\"enabled\":false}},\"hubble\":{\"tls\":{\"enabled\":false}}}}]'\n    alm-examples-metadata: '{\"cilium-openshift-default\":{\"description\":\"Default CiliumConfig CR for OpenShift\"}}'\n    capabilities: Basic Install\n    categories: Networking,Security\n    repository: http://github.com/cilium/cilium\n    support: [email protected]\n  name: cilium.v1.10.1-xdcd9835\n  namespace: placeholder\nspec:\n  apiservicedefinitions: {}\n  customresourcedefinitions:\n    owned:\n    - kind: CiliumConfig\n      name: ciliumconfigs.cilium.io\n      resources:\n      - kind: DaemonSet\n        name: cilium\n        version: v1\n      - kind: Deployment\n        name: cilium-operator\n        version: v1\n      - kind: ConfigMap\n        name: cilium-config\n        version: v1\n      statusDescriptors:\n      - description: Helm release conditions\n        displayName: Conditions\n        path: conditions\n      - description: Name of deployed Helm release\n        displayName: Deployed release\n        path: 
deployedRelease\n      version: v1alpha1\n  description: Cilium - eBPF-based Networking, Security, and Observability\n  displayName: Cilium\n  icon:\n  - base64data: <svg width="119" height="35" viewBox="0 0 119 35" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M29.3361 18.8075H24.2368L21.6571 23.3262L24.2368 27.7838H29.3361L31.9157 23.3262L29.3361 18.8075Z" fill="#8061A9"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M29.3361 6.83905H24.2368L21.6571 11.3577L24.2368 15.8153H29.3361L31.9157 11.3577L29.3361 6.83905Z" fill="#F17323"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M19.0774 1.13983H13.9781L11.3984 5.65852L13.9781 10.1161H19.0774L21.6571 5.65852L19.0774 1.13983Z" fill="#F8C517"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M8.81889 6.83905H3.71959L1.13989 11.3577L3.71959 15.8153H8.81889L11.3985 11.3577L8.81889 6.83905Z" fill="#CADD72"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M19.0774 12.5383H13.9781L11.3984 17.057L13.9781 21.5146H19.0774L21.6571 17.057L19.0774 12.5383Z" fill="#E82629"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M8.81889 18.8075H3.71959L1.13989 23.3262L3.71959 27.7838H8.81889L11.3985 23.3262L8.81889 18.8075Z" fill="#98C93E"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M19.0774 24.5067H13.9781L11.3984 29.0254L13.9781 33.483H19.0774L21.6571 29.0254L19.0774 24.5067Z" fill="#628AC6"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M18.8181 20.7783H14.2377L11.9205 16.8397L14.2377 12.8471H18.8181L21.1352 16.8397L18.8181 20.7783ZM19.6441 11.3984H13.3933L10.2587 16.831L13.3933 22.227H19.6441L22.797 16.831L19.6441 11.3984Z" fill="#363736"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M13.3932 23.3669L10.2587 28.7995L13.3932 34.1954H19.6441L22.797 28.7995L19.6441 23.3669H13.3932ZM11.9204 28.8082L14.2376 24.8156H18.818L21.1352 28.8082L18.818 32.7468H14.2376L11.9204 28.8082Z" fill="#363736"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M13.3932 0L10.2587 5.43263L13.3932 10.8285H19.6441L22.797 5.43263L19.6441 0H13.3932ZM11.9204 5.4412L14.2376 1.4487H18.818L21.1352 5.4412L18.818 9.37985H14.2376L11.9204 5.4412Z" fill="#363736"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M23.6518 17.6676L20.5172 23.1002L23.6518 28.4961H29.9026L33.0555 23.1002L29.9026 17.6676H23.6518ZM22.1791 23.1088L24.4962 19.1162H29.0766L31.3937 23.1088L29.0766 27.0475H24.4962L22.1791 23.1088Z" fill="#363736"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M23.6518 5.69922L20.5172 11.1319L23.6518 16.5278H29.9026L33.0555 11.1319L29.9026 5.69922H23.6518ZM22.1791 11.1405L24.4962 7.14791H29.0766L31.3937 11.1405L29.0766 15.0791H24.4962L22.1791 11.1405Z" fill="#363736"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M3.13453 17.6676L0 23.1002L3.13453 28.4961H9.38542L12.5383 23.1002L9.38542 17.6676H3.13453ZM1.66179 23.1088L3.97892 19.1162H8.55933L10.8765 23.1088L8.55933 27.0475H3.97892L1.66179 23.1088Z" fill="#363736"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M3.13453 5.69922L0 11.1319L3.13453 16.5278H9.38542L12.5383 11.1319L9.38542 5.69922H3.13453ZM1.66179 11.1405L3.97892 7.14791H8.55933L10.8765 11.1405L8.55933 15.0791H3.97892L1.66179 11.1405Z" fill="#363736"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M118.045 26.2212H115.684C115.68 26.1511 115.672 26.079 115.672 26.0067C115.671 25.4755 115.672 24.9443 115.672 24.4132C115.672 21.8196 115.67 19.2259 115.673 16.6323C115.674 16.0004 115.609 15.3797 115.412 14.7769C115.054 13.6769 114.285 13.0758 113.148 12.9423C111.902 12.796 110.786 13.1155 109.807 13.9085C109.246 14.3634 108.76 14.8884 108.336 15.4706C108.291 15.5323 108.275 15.6193 108.26 15.6972C108.248 15.7569 108.257 15.8209 108.257 15.8831C108.257 19.2217 108.257 22.5603 108.257 25.899V26.1745H105.813C105.81 26.0969 105.804 26.0178 105.804 25.9385C105.803 24.6416 105.803 23.3449 105.803 22.048C105.803 20.2131 105.804 18.3783 105.803 16.5434C105.802 15.9188 105.721 15.3049 105.516 14.7127C105.15 13.6524 104.389 13.076 103.289 12.9438C101.995 12.7884 100.847 13.1358 99.8485 13.9777C99.3548 14.394 98.9271 14.868 98.5513 15.3919C98.4667 15.5097 98.43 15.6273 98.4302 15.7733C98.4339 18.1876 98.4329 20.6019 98.4329 23.0162C98.4329 24.0027 98.4328 24.9891 98.4328 25.9755C98.4328 26.0506 98.4329 26.1257 98.4329 26.1966C98.268 26.2411 96.4209 26.2568 96.0083 26.2211C95.9635 26.0785 95.9475 11.5179 95.9919 11.2328C96.1392 11.1898 97.6299 11.1799 97.8791 11.224C98.0319 11.9048 98.1863 12.5934 98.3506 13.3255C98.4321 13.2375 98.483 13.1848 98.5315 13.1298C98.8733 12.7418 99.2113 12.353 99.6207 12.0286C100.297 11.4925 101.037 11.105 101.892 10.9465C102.891 10.7614 103.881 10.7693 104.858 11.0677C105.742 11.3374 106.428 11.8838 106.989 12.6016C107.236 12.9179 107.441 13.2607 107.618 13.6209C107.647 13.6811 107.68 13.74 107.726 13.8283C107.789 13.7471 107.835 13.6904 107.878 13.6318C108.362 12.9788 108.924 12.4047 109.578 11.9209C110.653 11.1269 111.865 10.7921 113.189 10.8305C113.765 10.8472 114.332 10.9405 114.878 11.109C115.94 11.4361 116.781 12.0487 117.319 13.0516C117.732 13.8198 117.961 14.6327 118.013 15.4976C118.035 15.8759 118.043 16.2556 118.044 16.6347C118.046 19.7389 118.045 22.843 118.045 25.947C118.045 26.035 
118.045 26.1229 118.045 26.2212Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M88.991 11.2089H91.4298C91.4346 11.2734 91.4428 11.333 91.4428 11.3927C91.4432 14.4486 91.4462 17.5046 91.44 20.5604C91.4386 21.2439 91.3973 21.9304 91.205 22.5891C90.674 24.4077 89.5103 25.6313 87.7045 26.2308C87.1904 26.4015 86.6563 26.4686 86.121 26.5185C85.1811 26.6062 84.2427 26.5721 83.3238 26.3539C82.3006 26.111 81.3691 25.6685 80.6397 24.8872C79.9731 24.1733 79.5297 23.3348 79.3131 22.3749C79.177 21.7718 79.1183 21.1617 79.1166 20.5486C79.1082 17.4927 79.1122 14.4368 79.1121 11.3809C79.1121 11.3333 79.1162 11.2859 79.1182 11.2414C79.2665 11.1891 81.306 11.1752 81.5764 11.2274V11.4846C81.5764 14.4163 81.5769 17.348 81.5759 20.2798C81.5758 20.7979 81.5963 21.3132 81.7041 21.8228C82.0195 23.3138 83.047 24.2679 84.5593 24.4664C85.1659 24.5459 85.7728 24.5417 86.3695 24.4041C87.4572 24.153 88.1978 23.4798 88.6343 22.4662C88.9232 21.795 88.9875 21.0798 88.9892 20.359C88.9937 18.4413 88.9909 16.5236 88.991 14.6059C88.991 13.5643 88.991 12.5226 88.991 11.481V11.2089Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M52.8848 13.6787C52.2998 13.5043 51.7552 13.3306 51.204 13.1809C50.5636 13.0069 49.9104 12.9029 49.2439 12.9094C48.6119 12.9155 47.9932 13.0143 47.3964 13.2204C46.5853 13.5004 45.9217 13.989 45.4141 14.6828C44.7801 15.5493 44.4233 16.5192 44.3261 17.5886C44.2698 18.2081 44.2455 18.8272 44.29 19.4478C44.3651 20.4982 44.6577 21.477 45.2476 22.3596C45.9359 23.3894 46.9044 23.9945 48.1017 24.2496C48.8993 24.4196 49.7043 24.4049 50.5102 24.3127C51.3027 24.2219 52.0604 23.9954 52.8081 23.7288C52.8848 23.7014 52.9629 23.678 53.0557 23.6477V25.6883C52.8476 25.7841 52.6363 25.9016 52.4115 25.9814C51.2962 26.3771 50.1399 26.5561 48.9611 26.5532C47.7334 26.5502 46.5565 26.2984 45.4433 25.7559C43.8736 24.9911 42.822 23.7743 42.2258 22.1478C41.869 21.1744 41.7026 20.1666 41.672 19.1315C41.6362 17.921 41.7771 16.7359 42.1551 15.5844C42.8801 13.3763 44.3549 11.9049 46.5429 11.1349C47.0156 10.9685 47.5012 10.891 47.9971 10.8463C48.5747 10.7943 49.1516 10.7528 49.7315 10.8008C50.7623 10.8861 51.7645 11.0785 52.6962 11.5566C52.8388 11.6297 52.8927 11.7112 52.8892 11.8738C52.8776 12.4046 52.8848 12.9358 52.8848 13.4669V13.6787Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M64.4589 26.1727H66.8852V3.33331H64.4589V26.1727Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M59.3872 26.1754H56.9906C56.9796 26.1673 56.9731 26.1641 56.9688 26.159C56.9645 26.1541 56.9607 26.1475 56.9597 26.1412C56.9546 26.1073 56.9465 26.0734 56.9465 26.0395C56.9468 21.1224 56.9477 16.2052 56.9491 11.2881C56.9492 11.2686 56.9589 11.2493 56.9636 11.2314C57.1197 11.1901 59.1441 11.1809 59.3872 11.2221V26.1754Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M71.9558 11.2094H74.3597C74.4046 11.3583 74.4191 25.8966 74.3738 26.1728H71.9558V11.2094Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M59.3336 7.68597H56.9289C56.8864 7.53126 56.8771 5.21567 56.9188 4.97137H59.324C59.3637 5.12006 59.3739 7.41087 59.3336 7.68597Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M71.9128 4.96277H74.3104C74.3589 5.10865 74.3778 7.26993 74.3339 7.67746H71.9128V4.96277Z" fill="black"/>
</svg>\n    mediatype: image/svg+xml\n  install:\n    spec:\n      clusterPermissions:\n      - rules:\n        - apiGroups:\n          - security.openshift.io\n          resourceNames:\n          - hostnetwork\n          resources:\n          - securitycontextconstraints\n          verbs:\n          - use\n        - apiGroups:\n          - rbac.authorization.k8s.io\n          resources:\n          - clusterroles\n          - clusterrolebindings\n          verbs:\n          - create\n          - get\n          - patch\n          - update\n          - delete\n          - list\n          - watch\n        - apiGroups:\n          - cilium.io\n          resources:\n          - '*'\n          verbs:\n          - '*'\n        - apiGroups:\n          - apiextensions.k8s.io\n          resources:\n          - customresourcedefinitions\n          verbs:\n          - '*'\n        - apiGroups:\n          - coordination.k8s.io\n          resources:\n          - leases\n          verbs:\n          - create\n          - get\n          - update\n        - apiGroups:\n          - \"\"\n          resources:\n          - services/status\n          verbs:\n          - update\n        - apiGroups:\n          - \"\"\n          resources:\n          - pods\n          - pods/status\n          - pods/finalizers\n          verbs:\n          - get\n          - list\n          - watch\n          - update\n          - delete\n        - apiGroups:\n          - \"\"\n          resources:\n          - nodes\n          - nodes/status\n          verbs:\n          - get\n          - list\n          - watch\n          - update\n          - patch\n        - apiGroups:\n          - \"\"\n          resources:\n          - namespaces\n          - services\n          - endpoints\n          - componentstatuses\n          verbs:\n          - get\n          - list\n          - watch\n        - apiGroups:\n          - discovery.k8s.io\n          resources:\n          - endpointslices\n          verbs:\n        
  - get\n          - list\n          - watch\n        - apiGroups:\n          - networking.k8s.io\n          resources:\n          - networkpolicies\n          verbs:\n          - get\n          - list\n          - watch\n        serviceAccountName: cilium-olm\n      deployments:\n      - name: cilium-olm\n        spec:\n          replicas: 1\n          selector:\n            matchLabels:\n              name: cilium-olm\n          template:\n            metadata:\n              labels:\n                name: cilium-olm\n            spec:\n              containers:\n              - command:\n                - /usr/local/bin/helm-operator\n                - run\n                - --watches-file=watches.yaml\n                - --enable-leader-election\n                - --leader-election-id=cilium-olm\n                - --zap-devel\n                env:\n                - name: WATCH_NAMESPACE\n                  valueFrom:\n                    fieldRef:\n                      fieldPath: metadata.namespace\n                image: registry.connect.redhat.com/isovalent/cilium-olm@sha256:26f1ed31c8f0600fcea498116e7dbea157a8704989f762e22e8038b8ea364dce\n                name: operator\n                ports:\n                - containerPort: 9443\n                  name: https\n                  protocol: TCP\n                resources:\n                  limits:\n                    cpu: 100m\n                    memory: 150Mi\n                  requests:\n                    cpu: 100m\n                    memory: 150Mi\n                volumeMounts:\n                - mountPath: /tmp\n                  name: tmp\n              hostNetwork: true\n              serviceAccount: cilium-olm\n              terminationGracePeriodSeconds: 10\n              tolerations:\n              - operator: Exists\n              volumes:\n              - emptyDir: {}\n                name: tmp\n      permissions:\n      - rules:\n        - apiGroups:\n          - \"\"\n          resources:\n 
         - configmaps\n          verbs:\n          - get\n          - list\n          - watch\n          - create\n          - update\n          - patch\n          - delete\n        - apiGroups:\n          - \"\"\n          resources:\n          - events\n          verbs:\n          - create\n        - apiGroups:\n          - \"\"\n          resources:\n          - namespaces\n          verbs:\n          - get\n        - apiGroups:\n          - cilium.io\n          resources:\n          - ciliumconfigs\n          - ciliumconfigs/status\n          verbs:\n          - list\n        - apiGroups:\n          - cilium.io\n          resources:\n          - ciliumconfigs\n          - ciliumconfigs/status\n          - ciliumconfigs/finalizers\n          verbs:\n          - get\n          - patch\n          - update\n          - watch\n          - list\n          - delete\n        - apiGroups:\n          - \"\"\n          resources:\n          - events\n          verbs:\n          - create\n        - apiGroups:\n          - \"\"\n          resources:\n          - secrets\n          verbs:\n          - '*'\n        - apiGroups:\n          - \"\"\n          resources:\n          - serviceaccounts\n          - configmaps\n          - secrets\n          - services\n          verbs:\n          - '*'\n        - apiGroups:\n          - apps\n          resources:\n          - deployments\n          - daemonsets\n          verbs:\n          - '*'\n        serviceAccountName: cilium-olm\n    strategy: deployment\n  installModes:\n  - supported: true\n    type: OwnNamespace\n  - supported: true\n    type: SingleNamespace\n  - supported: false\n    type: MultiNamespace\n  - supported: false\n    type: AllNamespaces\n  keywords:\n  - networking\n  - security\n  - observability\n  - eBPF\n  links:\n  - name: Cilium Homepage\n    url: https://cilium.io/\n  maintainers:\n  - email: [email protected]\n    name: Cilium\n  maturity: stable\n  provider:\n    name: Isovalent\n  version: 
1.10.1+xdcd9835\n  relatedImages:\n  - name: operator\n    image: registry.connect.redhat.com/isovalent/cilium-olm@sha256:26f1ed31c8f0600fcea498116e7dbea157a8704989f762e22e8038b8ea364dce"}
installModes:", "  - supported: true", "    type: OwnNamespace", "  - supported: true", "    type: SingleNamespace", "  - supported: false", "    type: MultiNamespace", "  - supported: false", "    type: AllNamespaces", "  keywords:", "  - networking", "  - security", "  - observability", "  - eBPF", "  links:", "  - name: Cilium Homepage", "    url: https://cilium.io/", "  maintainers:", "  - email: [email protected]", "    name: Cilium", "  maturity: stable", "  provider:", "    name: Isovalent", "  version: 1.10.1+xdcd9835", "  relatedImages:", "  - name: operator", "    image: registry.connect.redhat.com/isovalent/cilium-olm@sha256:26f1ed31c8f0600fcea498116e7dbea157a8704989f762e22e8038b8ea364dce"]}
2021-06-25 12:57:41,939 p=23 u=default n=ansible | TASK [parse_operator_bundle : Set facts for csv_data] **************************
2021-06-25 12:57:42,073 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"csv_vars": "apiVersion: operators.coreos.com/v1alpha1\nkind: ClusterServiceVersion\nmetadata:\n  annotations:\n    alm-examples: '[{\"apiVersion\":\"cilium.io/v1alpha1\",\"kind\":\"CiliumConfig\",\"metadata\":{\"name\":\"cilium-openshift-default\",\"namespace\":\"placeholder\"},\"spec\":{\"nativeRoutingCIDR\":\"10.128.0.0/14\",\"endpointRoutes\":{\"enabled\":true},\"kubeProxyReplacement\":\"probe\",\"cni\":{\"binPath\":\"/var/lib/cni/bin\",\"confPath\":\"/var/run/multus/cni/net.d\"},\"ipam\":{\"operator\":{\"clusterPoolIPv4PodCIDR\":\"10.128.0.0/14\",\"clusterPoolIPv4MaskSize\":\"23\"},\"mode\":\"cluster-pool\"},\"prometheus\":{\"serviceMonitor\":{\"enabled\":false}},\"hubble\":{\"tls\":{\"enabled\":false}}}}]'\n    alm-examples-metadata: '{\"cilium-openshift-default\":{\"description\":\"Default CiliumConfig CR for OpenShift\"}}'\n    capabilities: Basic Install\n    categories: Networking,Security\n    repository: http://github.com/cilium/cilium\n    support: [email protected]\n  name: cilium.v1.10.1-xdcd9835\n  namespace: placeholder\nspec:\n  apiservicedefinitions: {}\n  customresourcedefinitions:\n    owned:\n    - kind: CiliumConfig\n      name: ciliumconfigs.cilium.io\n      resources:\n      - kind: DaemonSet\n        name: cilium\n        version: v1\n      - kind: Deployment\n        name: cilium-operator\n        version: v1\n      - kind: ConfigMap\n        name: cilium-config\n        version: v1\n      statusDescriptors:\n      - description: Helm release conditions\n        displayName: Conditions\n        path: conditions\n      - description: Name of deployed Helm release\n        displayName: Deployed release\n        path: deployedRelease\n      version: v1alpha1\n  description: Cilium - eBPF-based Networking, Security, and Observability\n  displayName: Cilium\n  icon:\n  - base64data: <svg width="119" height="35" viewBox="0 0 119 35" fill="none" 
xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M29.3361 18.8075H24.2368L21.6571 23.3262L24.2368 27.7838H29.3361L31.9157 23.3262L29.3361 18.8075Z" fill="#8061A9"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M29.3361 6.83905H24.2368L21.6571 11.3577L24.2368 15.8153H29.3361L31.9157 11.3577L29.3361 6.83905Z" fill="#F17323"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M19.0774 1.13983H13.9781L11.3984 5.65852L13.9781 10.1161H19.0774L21.6571 5.65852L19.0774 1.13983Z" fill="#F8C517"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M8.81889 6.83905H3.71959L1.13989 11.3577L3.71959 15.8153H8.81889L11.3985 11.3577L8.81889 6.83905Z" fill="#CADD72"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M19.0774 12.5383H13.9781L11.3984 17.057L13.9781 21.5146H19.0774L21.6571 17.057L19.0774 12.5383Z" fill="#E82629"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M8.81889 18.8075H3.71959L1.13989 23.3262L3.71959 27.7838H8.81889L11.3985 23.3262L8.81889 18.8075Z" fill="#98C93E"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M19.0774 24.5067H13.9781L11.3984 29.0254L13.9781 33.483H19.0774L21.6571 29.0254L19.0774 24.5067Z" fill="#628AC6"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M18.8181 20.7783H14.2377L11.9205 16.8397L14.2377 12.8471H18.8181L21.1352 16.8397L18.8181 20.7783ZM19.6441 11.3984H13.3933L10.2587 16.831L13.3933 22.227H19.6441L22.797 16.831L19.6441 11.3984Z" fill="#363736"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M13.3932 23.3669L10.2587 28.7995L13.3932 34.1954H19.6441L22.797 28.7995L19.6441 23.3669H13.3932ZM11.9204 28.8082L14.2376 24.8156H18.818L21.1352 28.8082L18.818 32.7468H14.2376L11.9204 28.8082Z" fill="#363736"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M13.3932 0L10.2587 5.43263L13.3932 10.8285H19.6441L22.797 5.43263L19.6441 0H13.3932ZM11.9204 5.4412L14.2376 1.4487H18.818L21.1352 5.4412L18.818 9.37985H14.2376L11.9204 5.4412Z" fill="#363736"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M23.6518 17.6676L20.5172 23.1002L23.6518 28.4961H29.9026L33.0555 23.1002L29.9026 17.6676H23.6518ZM22.1791 23.1088L24.4962 19.1162H29.0766L31.3937 23.1088L29.0766 27.0475H24.4962L22.1791 23.1088Z" fill="#363736"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M23.6518 5.69922L20.5172 11.1319L23.6518 16.5278H29.9026L33.0555 11.1319L29.9026 5.69922H23.6518ZM22.1791 11.1405L24.4962 7.14791H29.0766L31.3937 11.1405L29.0766 15.0791H24.4962L22.1791 11.1405Z" fill="#363736"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M3.13453 17.6676L0 23.1002L3.13453 28.4961H9.38542L12.5383 23.1002L9.38542 17.6676H3.13453ZM1.66179 23.1088L3.97892 19.1162H8.55933L10.8765 23.1088L8.55933 27.0475H3.97892L1.66179 23.1088Z" fill="#363736"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M3.13453 5.69922L0 11.1319L3.13453 16.5278H9.38542L12.5383 11.1319L9.38542 5.69922H3.13453ZM1.66179 11.1405L3.97892 7.14791H8.55933L10.8765 11.1405L8.55933 15.0791H3.97892L1.66179 11.1405Z" fill="#363736"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M118.045 26.2212H115.684C115.68 26.1511 115.672 26.079 115.672 26.0067C115.671 25.4755 115.672 24.9443 115.672 24.4132C115.672 21.8196 115.67 19.2259 115.673 16.6323C115.674 16.0004 115.609 15.3797 115.412 14.7769C115.054 13.6769 114.285 13.0758 113.148 12.9423C111.902 12.796 110.786 13.1155 109.807 13.9085C109.246 14.3634 108.76 14.8884 108.336 15.4706C108.291 15.5323 108.275 15.6193 108.26 15.6972C108.248 15.7569 108.257 15.8209 108.257 15.8831C108.257 19.2217 108.257 22.5603 108.257 25.899V26.1745H105.813C105.81 26.0969 105.804 26.0178 105.804 25.9385C105.803 24.6416 105.803 23.3449 105.803 22.048C105.803 20.2131 105.804 18.3783 105.803 16.5434C105.802 15.9188 105.721 15.3049 105.516 14.7127C105.15 13.6524 104.389 13.076 103.289 12.9438C101.995 12.7884 100.847 13.1358 99.8485 13.9777C99.3548 14.394 98.9271 14.868 98.5513 15.3919C98.4667 15.5097 98.43 15.6273 98.4302 15.7733C98.4339 18.1876 98.4329 20.6019 98.4329 23.0162C98.4329 24.0027 98.4328 24.9891 98.4328 25.9755C98.4328 26.0506 98.4329 26.1257 98.4329 26.1966C98.268 26.2411 96.4209 26.2568 96.0083 26.2211C95.9635 26.0785 95.9475 11.5179 95.9919 11.2328C96.1392 11.1898 97.6299 11.1799 97.8791 11.224C98.0319 11.9048 98.1863 12.5934 98.3506 13.3255C98.4321 13.2375 98.483 13.1848 98.5315 13.1298C98.8733 12.7418 99.2113 12.353 99.6207 12.0286C100.297 11.4925 101.037 11.105 101.892 10.9465C102.891 10.7614 103.881 10.7693 104.858 11.0677C105.742 11.3374 106.428 11.8838 106.989 12.6016C107.236 12.9179 107.441 13.2607 107.618 13.6209C107.647 13.6811 107.68 13.74 107.726 13.8283C107.789 13.7471 107.835 13.6904 107.878 13.6318C108.362 12.9788 108.924 12.4047 109.578 11.9209C110.653 11.1269 111.865 10.7921 113.189 10.8305C113.765 10.8472 114.332 10.9405 114.878 11.109C115.94 11.4361 116.781 12.0487 117.319 13.0516C117.732 13.8198 117.961 14.6327 118.013 15.4976C118.035 15.8759 118.043 16.2556 118.044 16.6347C118.046 19.7389 118.045 22.843 118.045 25.947C118.045 26.035 
118.045 26.1229 118.045 26.2212Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M88.991 11.2089H91.4298C91.4346 11.2734 91.4428 11.333 91.4428 11.3927C91.4432 14.4486 91.4462 17.5046 91.44 20.5604C91.4386 21.2439 91.3973 21.9304 91.205 22.5891C90.674 24.4077 89.5103 25.6313 87.7045 26.2308C87.1904 26.4015 86.6563 26.4686 86.121 26.5185C85.1811 26.6062 84.2427 26.5721 83.3238 26.3539C82.3006 26.111 81.3691 25.6685 80.6397 24.8872C79.9731 24.1733 79.5297 23.3348 79.3131 22.3749C79.177 21.7718 79.1183 21.1617 79.1166 20.5486C79.1082 17.4927 79.1122 14.4368 79.1121 11.3809C79.1121 11.3333 79.1162 11.2859 79.1182 11.2414C79.2665 11.1891 81.306 11.1752 81.5764 11.2274V11.4846C81.5764 14.4163 81.5769 17.348 81.5759 20.2798C81.5758 20.7979 81.5963 21.3132 81.7041 21.8228C82.0195 23.3138 83.047 24.2679 84.5593 24.4664C85.1659 24.5459 85.7728 24.5417 86.3695 24.4041C87.4572 24.153 88.1978 23.4798 88.6343 22.4662C88.9232 21.795 88.9875 21.0798 88.9892 20.359C88.9937 18.4413 88.9909 16.5236 88.991 14.6059C88.991 13.5643 88.991 12.5226 88.991 11.481V11.2089Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M52.8848 13.6787C52.2998 13.5043 51.7552 13.3306 51.204 13.1809C50.5636 13.0069 49.9104 12.9029 49.2439 12.9094C48.6119 12.9155 47.9932 13.0143 47.3964 13.2204C46.5853 13.5004 45.9217 13.989 45.4141 14.6828C44.7801 15.5493 44.4233 16.5192 44.3261 17.5886C44.2698 18.2081 44.2455 18.8272 44.29 19.4478C44.3651 20.4982 44.6577 21.477 45.2476 22.3596C45.9359 23.3894 46.9044 23.9945 48.1017 24.2496C48.8993 24.4196 49.7043 24.4049 50.5102 24.3127C51.3027 24.2219 52.0604 23.9954 52.8081 23.7288C52.8848 23.7014 52.9629 23.678 53.0557 23.6477V25.6883C52.8476 25.7841 52.6363 25.9016 52.4115 25.9814C51.2962 26.3771 50.1399 26.5561 48.9611 26.5532C47.7334 26.5502 46.5565 26.2984 45.4433 25.7559C43.8736 24.9911 42.822 23.7743 42.2258 22.1478C41.869 21.1744 41.7026 20.1666 41.672 19.1315C41.6362 17.921 41.7771 16.7359 42.1551 15.5844C42.8801 13.3763 44.3549 11.9049 46.5429 11.1349C47.0156 10.9685 47.5012 10.891 47.9971 10.8463C48.5747 10.7943 49.1516 10.7528 49.7315 10.8008C50.7623 10.8861 51.7645 11.0785 52.6962 11.5566C52.8388 11.6297 52.8927 11.7112 52.8892 11.8738C52.8776 12.4046 52.8848 12.9358 52.8848 13.4669V13.6787Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M64.4589 26.1727H66.8852V3.33331H64.4589V26.1727Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M59.3872 26.1754H56.9906C56.9796 26.1673 56.9731 26.1641 56.9688 26.159C56.9645 26.1541 56.9607 26.1475 56.9597 26.1412C56.9546 26.1073 56.9465 26.0734 56.9465 26.0395C56.9468 21.1224 56.9477 16.2052 56.9491 11.2881C56.9492 11.2686 56.9589 11.2493 56.9636 11.2314C57.1197 11.1901 59.1441 11.1809 59.3872 11.2221V26.1754Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M71.9558 11.2094H74.3597C74.4046 11.3583 74.4191 25.8966 74.3738 26.1728H71.9558V11.2094Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M59.3336 7.68597H56.9289C56.8864 7.53126 56.8771 5.21567 56.9188 4.97137H59.324C59.3637 5.12006 59.3739 7.41087 59.3336 7.68597Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M71.9128 4.96277H74.3104C74.3589 5.10865 74.3778 7.26993 74.3339 7.67746H71.9128V4.96277Z" fill="black"/>
</svg>\n    mediatype: image/svg+xml\n  install:\n    spec:\n      clusterPermissions:\n      - rules:\n        - apiGroups:\n          - security.openshift.io\n          resourceNames:\n          - hostnetwork\n          resources:\n          - securitycontextconstraints\n          verbs:\n          - use\n        - apiGroups:\n          - rbac.authorization.k8s.io\n          resources:\n          - clusterroles\n          - clusterrolebindings\n          verbs:\n          - create\n          - get\n          - patch\n          - update\n          - delete\n          - list\n          - watch\n        - apiGroups:\n          - cilium.io\n          resources:\n          - '*'\n          verbs:\n          - '*'\n        - apiGroups:\n          - apiextensions.k8s.io\n          resources:\n          - customresourcedefinitions\n          verbs:\n          - '*'\n        - apiGroups:\n          - coordination.k8s.io\n          resources:\n          - leases\n          verbs:\n          - create\n          - get\n          - update\n        - apiGroups:\n          - \"\"\n          resources:\n          - services/status\n          verbs:\n          - update\n        - apiGroups:\n          - \"\"\n          resources:\n          - pods\n          - pods/status\n          - pods/finalizers\n          verbs:\n          - get\n          - list\n          - watch\n          - update\n          - delete\n        - apiGroups:\n          - \"\"\n          resources:\n          - nodes\n          - nodes/status\n          verbs:\n          - get\n          - list\n          - watch\n          - update\n          - patch\n        - apiGroups:\n          - \"\"\n          resources:\n          - namespaces\n          - services\n          - endpoints\n          - componentstatuses\n          verbs:\n          - get\n          - list\n          - watch\n        - apiGroups:\n          - discovery.k8s.io\n          resources:\n          - endpointslices\n          verbs:\n        
  - get\n          - list\n          - watch\n        - apiGroups:\n          - networking.k8s.io\n          resources:\n          - networkpolicies\n          verbs:\n          - get\n          - list\n          - watch\n        serviceAccountName: cilium-olm\n      deployments:\n      - name: cilium-olm\n        spec:\n          replicas: 1\n          selector:\n            matchLabels:\n              name: cilium-olm\n          template:\n            metadata:\n              labels:\n                name: cilium-olm\n            spec:\n              containers:\n              - command:\n                - /usr/local/bin/helm-operator\n                - run\n                - --watches-file=watches.yaml\n                - --enable-leader-election\n                - --leader-election-id=cilium-olm\n                - --zap-devel\n                env:\n                - name: WATCH_NAMESPACE\n                  valueFrom:\n                    fieldRef:\n                      fieldPath: metadata.namespace\n                image: registry.connect.redhat.com/isovalent/cilium-olm@sha256:26f1ed31c8f0600fcea498116e7dbea157a8704989f762e22e8038b8ea364dce\n                name: operator\n                ports:\n                - containerPort: 9443\n                  name: https\n                  protocol: TCP\n                resources:\n                  limits:\n                    cpu: 100m\n                    memory: 150Mi\n                  requests:\n                    cpu: 100m\n                    memory: 150Mi\n                volumeMounts:\n                - mountPath: /tmp\n                  name: tmp\n              hostNetwork: true\n              serviceAccount: cilium-olm\n              terminationGracePeriodSeconds: 10\n              tolerations:\n              - operator: Exists\n              volumes:\n              - emptyDir: {}\n                name: tmp\n      permissions:\n      - rules:\n        - apiGroups:\n          - \"\"\n          resources:\n 
         - configmaps\n          verbs:\n          - get\n          - list\n          - watch\n          - create\n          - update\n          - patch\n          - delete\n        - apiGroups:\n          - \"\"\n          resources:\n          - events\n          verbs:\n          - create\n        - apiGroups:\n          - \"\"\n          resources:\n          - namespaces\n          verbs:\n          - get\n        - apiGroups:\n          - cilium.io\n          resources:\n          - ciliumconfigs\n          - ciliumconfigs/status\n          verbs:\n          - list\n        - apiGroups:\n          - cilium.io\n          resources:\n          - ciliumconfigs\n          - ciliumconfigs/status\n          - ciliumconfigs/finalizers\n          verbs:\n          - get\n          - patch\n          - update\n          - watch\n          - list\n          - delete\n        - apiGroups:\n          - \"\"\n          resources:\n          - events\n          verbs:\n          - create\n        - apiGroups:\n          - \"\"\n          resources:\n          - secrets\n          verbs:\n          - '*'\n        - apiGroups:\n          - \"\"\n          resources:\n          - serviceaccounts\n          - configmaps\n          - secrets\n          - services\n          verbs:\n          - '*'\n        - apiGroups:\n          - apps\n          resources:\n          - deployments\n          - daemonsets\n          verbs:\n          - '*'\n        serviceAccountName: cilium-olm\n    strategy: deployment\n  installModes:\n  - supported: true\n    type: OwnNamespace\n  - supported: true\n    type: SingleNamespace\n  - supported: false\n    type: MultiNamespace\n  - supported: false\n    type: AllNamespaces\n  keywords:\n  - networking\n  - security\n  - observability\n  - eBPF\n  links:\n  - name: Cilium Homepage\n    url: https://cilium.io/\n  maintainers:\n  - email: [email protected]\n    name: Cilium\n  maturity: stable\n  provider:\n    name: Isovalent\n  version: 
1.10.1+xdcd9835\n  relatedImages:\n  - name: operator\n    image: registry.connect.redhat.com/isovalent/cilium-olm@sha256:26f1ed31c8f0600fcea498116e7dbea157a8704989f762e22e8038b8ea364dce"}, "changed": false}
2021-06-25 12:57:42,076 p=23 u=default n=ansible | TASK [parse_operator_bundle : Determine and set fact for operator specific information - name, pod name, container name and capabilities] ***
2021-06-25 12:57:42,372 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"current_csv": "cilium.v1.10.1-xdcd9835", "operator_capabilities": "Basic Install", "operator_container_name": "operator", "operator_pod_name": "cilium-olm"}, "changed": false}
2021-06-25 12:57:42,374 p=23 u=default n=ansible | TASK [parse_operator_bundle : Determine operator_allnamespaces_support] ********
2021-06-25 12:57:42,543 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"operator_allnamespaces_support": [false]}, "changed": false}
2021-06-25 12:57:42,546 p=23 u=default n=ansible | TASK [parse_operator_bundle : Determine operator_ownnamespace_support] *********
2021-06-25 12:57:42,714 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"operator_ownnamespace_support": [true]}, "changed": false}
2021-06-25 12:57:42,717 p=23 u=default n=ansible | TASK [parse_operator_bundle : Determine operator_singlenamespace_support] ******
2021-06-25 12:57:42,888 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"operator_singlenamespace_support": [true]}, "changed": false}
2021-06-25 12:57:42,890 p=23 u=default n=ansible | TASK [parse_operator_bundle : Determine operator_multinamespace_support] *******
2021-06-25 12:57:43,060 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"operator_multinamespace_support": [false]}, "changed": false}
2021-06-25 12:57:43,062 p=23 u=default n=ansible | TASK [parse_operator_bundle : Set boolean value for different types of namespaces] ***
2021-06-25 12:57:43,194 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"operator_allnamespaces_support": false, "operator_multinamespace_support": false, "operator_ownnamespace_support": true, "operator_singlenamespace_support": true}, "changed": false}
2021-06-25 12:57:43,200 p=23 u=default n=ansible | TASK [parse_operator_bundle : Output all collected data to a yaml file in work dir] ***
2021-06-25 12:57:43,685 p=23 u=default n=ansible | changed: [localhost] => {"changed": true, "checksum": "751c10af7d5b70aaa942659244282bfbeefac115", "dest": "/home/jenkins/agent/workspace/cvp-isv-operator-bundle-image-validation-test/parsed_operator_data.yml", "gid": 0, "group": "root", "md5sum": "5bbf286eedeb70fdd5e35af94b0c55d6", "mode": "0644", "owner": "default", "size": 644, "src": "/home/jenkins/agent/.ansible/tmp/ansible-tmp-1624625863.3402283-347-253092958666172/source", "state": "file", "uid": 1001680000}
2021-06-25 12:57:43,687 p=23 u=default n=ansible | TASK [parse_operator_bundle : Sanity check the operator bundle's information] ***
2021-06-25 12:57:43,822 p=23 u=default n=ansible | included: /home/jenkins/agent/workspace/cvp-isv-operator-bundle-image-validation-test/operators/config/ansible/roles/parse_operator_bundle/tasks/bundle_sanity_checks.yml for localhost
2021-06-25 12:57:43,834 p=23 u=default n=ansible | TASK [parse_operator_bundle : Read the variables from annotations.yaml] ********
2021-06-25 12:57:43,991 p=23 u=default n=ansible | ok: [localhost] => {"ansible_facts": {"annotations_vars": {"annotations": {"operators.operatorframework.io.bundle.channel.default.v1": "stable", "operators.operatorframework.io.bundle.channels.v1": "stable", "operators.operatorframework.io.bundle.manifests.v1": "manifests/", "operators.operatorframework.io.bundle.mediatype.v1": "registry+v1", "operators.operatorframework.io.bundle.metadata.v1": "metadata/", "operators.operatorframework.io.bundle.package.v1": "cilium", "operators.operatorframework.io.metrics.builder": "operator-sdk-v1.0.1", "operators.operatorframework.io.metrics.mediatype.v1": "metrics+v1", "operators.operatorframework.io.metrics.project_layout": "helm.sdk.operatorframework.io/v1", "operators.operatorframework.io.test.config.v1": "tests/scorecard/", "operators.operatorframework.io.test.mediatype.v1": "scorecard+v1"}}}, "ansible_included_var_files": ["/home/jenkins/agent/test-operator/metadata/annotations.yaml"], "changed": false}
2021-06-25 12:57:43,994 p=23 u=default n=ansible | TASK [parse_operator_bundle : shell] *******************************************
2021-06-25 12:57:44,145 p=23 u=default n=ansible | skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}
2021-06-25 12:57:44,147 p=23 u=default n=ansible | TASK [parse_operator_bundle : Debug] *******************************************
2021-06-25 12:57:44,301 p=23 u=default n=ansible | skipping: [localhost] => {}
2021-06-25 12:57:44,303 p=23 u=default n=ansible | TASK [parse_operator_bundle : Set facts] ***************************************
2021-06-25 12:57:44,455 p=23 u=default n=ansible | skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}
2021-06-25 12:57:44,458 p=23 u=default n=ansible | TASK [parse_operator_bundle : debug] *******************************************
2021-06-25 12:57:44,609 p=23 u=default n=ansible | ok: [localhost] => {
    "skopeo_inspect_json.Labels": {
        "com.redhat.delivery.backport": "true",
        "com.redhat.delivery.operator.bundle": "true",
        "com.redhat.iib.pinned": "true",
        "com.redhat.openshift.versions": "v4.5,v4.6,v4.7",
        "io.buildah.version": "1.16.7",
        "operators.operatorframework.io.bundle.channel.default.v1": "stable",
        "operators.operatorframework.io.bundle.channels.v1": "stable",
        "operators.operatorframework.io.bundle.manifests.v1": "manifests/",
        "operators.operatorframework.io.bundle.mediatype.v1": "registry+v1",
        "operators.operatorframework.io.bundle.metadata.v1": "metadata/",
        "operators.operatorframework.io.bundle.package.v1": "cilium",
        "operators.operatorframework.io.metrics.builder": "operator-sdk-v1.0.1",
        "operators.operatorframework.io.metrics.mediatype.v1": "metrics+v1",
        "operators.operatorframework.io.metrics.project_layout": "helm.sdk.operatorframework.io/v1"
    }
}
2021-06-25 12:57:44,611 p=23 u=default n=ansible | TASK [parse_operator_bundle : debug] *******************************************
2021-06-25 12:57:44,762 p=23 u=default n=ansible | ok: [localhost] => {
    "annotations_vars.annotations": {
        "operators.operatorframework.io.bundle.channel.default.v1": "stable",
        "operators.operatorframework.io.bundle.channels.v1": "stable",
        "operators.operatorframework.io.bundle.manifests.v1": "manifests/",
        "operators.operatorframework.io.bundle.mediatype.v1": "registry+v1",
        "operators.operatorframework.io.bundle.metadata.v1": "metadata/",
        "operators.operatorframework.io.bundle.package.v1": "cilium",
        "operators.operatorframework.io.metrics.builder": "operator-sdk-v1.0.1",
        "operators.operatorframework.io.metrics.mediatype.v1": "metrics+v1",
        "operators.operatorframework.io.metrics.project_layout": "helm.sdk.operatorframework.io/v1",
        "operators.operatorframework.io.test.config.v1": "tests/scorecard/",
        "operators.operatorframework.io.test.mediatype.v1": "scorecard+v1"
    }
}
2021-06-25 12:57:44,765 p=23 u=default n=ansible | TASK [parse_operator_bundle : Check if the operators.operatorframework.io.bundle.channels.v1 from annotation.yaml matches the bundle image label] ***
2021-06-25 12:57:44,917 p=23 u=default n=ansible | skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}
2021-06-25 12:57:44,919 p=23 u=default n=ansible | TASK [parse_operator_bundle : Check if the operators.operatorframework.io.bundle.manifests.v1 from annotation.yaml matches the bundle image label] ***
2021-06-25 12:57:45,072 p=23 u=default n=ansible | skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}
2021-06-25 12:57:45,074 p=23 u=default n=ansible | TASK [parse_operator_bundle : Check if the operators.operatorframework.io.bundle.mediatype.v1 from annotation.yaml matches the bundle image label] ***
2021-06-25 12:57:45,225 p=23 u=default n=ansible | skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}
2021-06-25 12:57:45,227 p=23 u=default n=ansible | TASK [parse_operator_bundle : Check if the operators.operatorframework.io.bundle.mediatype.v1 is set to the expected value] ***
2021-06-25 12:57:45,376 p=23 u=default n=ansible | skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}
2021-06-25 12:57:45,379 p=23 u=default n=ansible | TASK [parse_operator_bundle : Check if the operators.operatorframework.io.bundle.metadata.v1 from annotation.yaml matches the bundle image label] ***
2021-06-25 12:57:45,528 p=23 u=default n=ansible | skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}
2021-06-25 12:57:45,530 p=23 u=default n=ansible | TASK [parse_operator_bundle : Check if the operators.operatorframework.io.bundle.package.v1 from annotation.yaml matches the bundle image label] ***
2021-06-25 12:57:45,676 p=23 u=default n=ansible | skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}
2021-06-25 12:57:45,677 p=23 u=default n=ansible | PLAY RECAP *********************************************************************
2021-06-25 12:57:45,677 p=23 u=default n=ansible | localhost                  : ok=36   changed=9    unreachable=0    failed=0    skipped=11   rescued=0    ignored=0   




-------------------
Execution Reference:

-> /cvp/cvp-isv-operator-bundle-image-validation-test/certified-ospid-e31ac831-7e72-42bb-baf9-f392ef7ea622-603a0723-d1ac-4af3-8216-20b46833dd67/603a0723-d1ac-4af3-8216-20b46833dd67/

semver used in CSV no longer valid

In efbfd7f, a version suffix was introduced so that multiple builds of the bundle can be published for each Cilium version.

Based on some preliminary testing, a version string like 1.9.6-09e7dca appears to pass basic semver regex tests, although it represents a pre-release. This worked until now, but the most recent addition of 1.9.6 failed the certification tests.
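For reference, here is a minimal sketch (in Python, using the regex published at semver.org — an assumption; operator-courier evidently applies its own, stricter validator) showing why a string like 1.9.6-09e7dca passes a plain SemVer 2.0.0 check even though the certification test rejects it:

```python
import re

# SemVer 2.0.0 grammar as published at https://semver.org.
# Note: this is NOT necessarily the check operator-courier performs;
# its validator rejected 1.9.6-09e7dca despite it being valid SemVer.
SEMVER_RE = re.compile(
    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"
    r"(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)"
    r"(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?"
    r"(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$"
)

def is_semver(version: str) -> bool:
    """Return True if `version` matches the SemVer 2.0.0 grammar."""
    return SEMVER_RE.match(version) is not None

# "-09e7dca" is a pre-release identifier; "+xdcd9835" is build metadata
# (the form the v1.10.1 CSV in the log above uses). Both are valid SemVer.
print(is_semver("1.9.6-09e7dca"))    # True per semver.org
print(is_semver("1.10.1+xdcd9835"))  # True
print(is_semver("1.9.6"))            # True
```

The distinction matters because a `-suffix` marks a pre-release (which some pipelines treat specially or reject), while a `+suffix` is build metadata that is ignored for precedence; the 1.10.1 CSV shown in the log uses the latter form.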

===== Test: operator-metadata-linting =====

 

Operator Courier version:
-------------------------

2.1.10 (https://github.com/operator-framework/operator-courier/releases/tag/v2.1.10)

Validation Warnings:
--------------------

"csv metadata.annotations.description not defined"
"csv metadata.annotations.containerImage not defined"
"csv metadata.annotations.createdAt not defined"
"csv metadata.annotations.certified not defined."
"csv metadata.annotations.description not defined.Without this field, the description displayed in the tiles of the UI will be a truncated version of spec.description."
"csv metadata.annotations.createdAt not defined.Without this field, the time stamp at which the operator was created will not be displayed in the UI."
"csv metadata.annotations.containerImage not defined.Without this field, the link to the operator image will not be displayed in the UI."

Validation Errors:
------------------

"spec.version 1.9.6-09e7dca is not a valid semver (example of a valid semver is: 1.0.12)"
"UI validation failed to verify that required fields for operatorhub.io are properly formatted."

stdout:
-------



stderr:
-------

WARNING: csv metadata.annotations.description not defined [1.9.6-09e7dca/cilium-olm.csv.yaml]
WARNING: csv metadata.annotations.containerImage not defined [1.9.6-09e7dca/cilium-olm.csv.yaml]
WARNING: csv metadata.annotations.createdAt not defined [1.9.6-09e7dca/cilium-olm.csv.yaml]
WARNING: csv metadata.annotations.certified not defined. [1.9.6-09e7dca/cilium-olm.csv.yaml]
WARNING: csv metadata.annotations.description not defined.Without this field, the description displayed in the tiles of the UI will be a truncated version of spec.description. [5dc4cb4a-a92b-434d-b1f4-c7b8d63da4b6/package.yaml]
WARNING: csv metadata.annotations.createdAt not defined.Without this field, the time stamp at which the operator was created will not be displayed in the UI. [5dc4cb4a-a92b-434d-b1f4-c7b8d63da4b6/package.yaml]
WARNING: csv metadata.annotations.containerImage not defined.Without this field, the link to the operator image will not be displayed in the UI. [5dc4cb4a-a92b-434d-b1f4-c7b8d63da4b6/package.yaml]
ERROR: spec.version 1.9.6-09e7dca is not a valid semver (example of a valid semver is: 1.0.12) [5dc4cb4a-a92b-434d-b1f4-c7b8d63da4b6/package.yaml]
ERROR: UI validation failed to verify that required fields for operatorhub.io are properly formatted. [5dc4cb4a-a92b-434d-b1f4-c7b8d63da4b6/package.yaml]
Resulting bundle is invalid, input yaml is improperly defined.


return code:
------------

1



-------------------
Execution Reference:

-> /cvp/cvp-isv-operator-metadata-validation-test/certified-ospid-e31ac831-7e72-42bb-baf9-f392ef7ea622-1b5f4a3e-a722-432d-a4e9-6837dd807df5/5dc4cb4a-a92b-434d-b1f4-c7b8d63da4b6/

determine if SCC needed at all

It appears that when installing the operator during bootstrap it's not subjected to SecurityContextConstraints, so perhaps an SCC is not needed at all?

fixed tuned config

Looks like Tuned CR introduced in edc1a10 is not working.

It has no effect in OCP 4.6, since the systemd version there is older, so it's not subject to cilium/cilium#10645.
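For context, the Tuned CR has roughly this shape (a sketch, not the exact manifest from edc1a10; names, namespace, and match rules are illustrative):

```yaml
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-node-rpfilter-cilium
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: openshift-node-rpfilter-cilium
    data: |
      [main]
      summary=Disable rp_filter on Cilium-managed interfaces
      include=openshift-node
      [sysctl]
      net.ipv4.conf.lxc*.rp_filter=0
  recommend:
  - match:
    - label: node-role.kubernetes.io/worker
    priority: 19
    profile: openshift-node-rpfilter-cilium
```

The wildcard sysctl key is what trips tuned up below: its sysctl plugin tries to read the original value of `net.ipv4.conf.lxc*.rp_filter`, which does not resolve until lxc interfaces actually exist.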

OCP 4.6:

systemd 239
+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy

OKD 4.5:

systemd 245 (v245.6-2.fc32)
+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified

What happens is that tuned complains like so:

I0128 17:54:04.307384    5197 tuned.go:264] disabling system tuned...
I0128 17:54:04.388955    5197 tuned.go:820] started events processor
I0128 17:54:04.388996    5197 tuned.go:281] extracting Tuned profiles
I0128 17:54:04.389889    5197 tuned.go:863] started controller
I0128 17:54:05.428245    5197 tuned.go:359] written "/etc/tuned/recommend.d/50-openshift.conf" to set Tuned profile openshift-node-rpfilter-cilium
I0128 17:54:06.640778    5197 tuned.go:563] active profile () != recommended profile (openshift-node-rpfilter-cilium)
I0128 17:54:06.641115    5197 tuned.go:368] starting tuned...
2021-01-28 17:54:06,854 INFO     tuned.daemon.application: dynamic tuning is globally disabled
2021-01-28 17:54:06,878 INFO     tuned.daemon.daemon: using sleep interval of 1 second(s)
2021-01-28 17:54:06,878 INFO     tuned.daemon.daemon: Running in automatic mode, checking what profile is recommended for your configuration.
2021-01-28 17:54:06,879 INFO     tuned.daemon.daemon: Using 'openshift-node-rpfilter-cilium' profile
2021-01-28 17:54:06,880 INFO     tuned.profiles.loader: loading profile: openshift-node-rpfilter-cilium
2021-01-28 17:54:06,933 INFO     tuned.daemon.controller: starting controller
2021-01-28 17:54:06,934 INFO     tuned.daemon.daemon: starting tuning
2021-01-28 17:54:06,947 INFO     tuned.plugins.base: instance cpu: assigning devices cpu2, cpu3, cpu1, cpu0
2021-01-28 17:54:06,948 INFO     tuned.plugins.plugin_cpu: We are running on an x86 GenuineIntel platform
2021-01-28 17:54:06,952 ERROR    tuned.utils.commands: Executing x86_energy_perf_policy error: x86_energy_perf_policy: /dev/cpu/1/msr offset 0x1ad read failed: Input/output error
2021-01-28 17:54:06,952 WARNING  tuned.plugins.plugin_cpu: your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias
2021-01-28 17:54:06,955 INFO     tuned.plugins.base: instance disk: assigning devices sda, dm-0
2021-01-28 17:54:06,958 INFO     tuned.plugins.base: instance net: assigning devices ens4
2021-01-28 17:54:07,005 ERROR    tuned.plugins.plugin_sysctl: Failed to read sysctl parameter 'net.ipv4.conf.lxc*.rp_filter', the parameter does not exist
2021-01-28 17:54:07,005 ERROR    tuned.plugins.plugin_sysctl: sysctl option net.ipv4.conf.lxc*.rp_filter will not be set, failed to read the original value.
2021-01-28 17:54:07,005 INFO     tuned.plugins.plugin_sysctl: reapplying system sysctl
2021-01-28 17:54:07,014 INFO     tuned.daemon.daemon: static tuning from profile 'openshift-node-rpfilter-cilium' applied

It's potentially possible to use tuned in unmanaged mode to solve this, but that needs investigating.

bundle validation error - operator-packagename-uniqueness-bundle-image

===== Test: operator-packagename-uniqueness-bundle-image =====

Package cilium does not yet exist but an entry already exists with the following:

source       == certified-operators
package_name == cilium-openshift-operator
association  == ospid-c3cd8e7a-1577-4dac-bc32-cfce4a207813



---------------------------------
---------------------------------
Package Name Uniqueness Overview:
---------------------------------
---------------------------------

The use of Operators requires that all packageNames are *globally unique* across all quay.io namespaces.
This test effectively 'claims' a package name by locking it to a specific operator / source.

Source
------

Source is defined as where an operator is coming from:

All Operators come from one of 4 sources:
- Upstream
- Community
- Certified
- Redhat

Association
-----------

Association is defined as an item from the source that can be used
to align the packageName to.
 
All Operators have an association within their source that helps identify them.

For Upstream and Community operators, the association is derived from the operator's folder name within the GitHub repo.
For Certified operators, the association is derived from the projectID from RHConnect.
For RedHat operators, the association is derived from the Brew Package Name

Details for this Operator
-------------------------

This operator-under-test has a package name of cilium

Its source is certified-operators with an association of ospid-c3cd8e7a-1577-4dac-bc32-cfce4a207813





-------------------
Execution Reference:

-> /cvp/cvp-isv-operator-bundle-image-validation-test/marketplace-ospid-c3cd8e7a-1577-4dac-bc32-cfce4a207813-f49b38e9-8bd4-4916-ad2c-579540c80cea/f49b38e9-8bd4-4916-ad2c-579540c80cea/

ensure installation works with OLM

Right now the operator has manifests and a CSV, but after installing from manifests it will automatically move into OLM's realm. It needs to have OperatorGroup and Subscription resources for that.

What is not clear yet is this:

  • does Subscription require the operator to be on OperatorHub?
    • it has startingCSV, but how is that meant to be linked?
  • what is the link between Subscription and OperatorGroup?
  • can Subscription and OperatorGroup be passed to openshift-install, or must these be added to the cluster once it's installed?
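For reference, the two resources look roughly like this (namespace, channel, and CSV name here are illustrative, not the project's actual values):

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cilium
  namespace: cilium
spec:
  targetNamespaces:
  - cilium
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cilium
  namespace: cilium
spec:
  channel: stable
  name: cilium
  source: certified-operators
  sourceNamespace: openshift-marketplace
  startingCSV: cilium.v1.9.6
```

On the first question: `source`/`sourceNamespace` point the Subscription at a CatalogSource, so the operator needs to be resolvable from some catalog in the cluster (not necessarily OperatorHub itself), and `startingCSV` pins which CSV from that catalog gets installed first. The OperatorGroup is linked implicitly: OLM uses the OperatorGroup in the Subscription's namespace to decide which namespaces the operator watches.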

enabled KPR

Deploying a cluster without kube-proxy (see cilium/openshift-terrafrom-upi#16) implies that the usual in-cluster API access (via the kubernetes service ClusterIP) is not possible, so the operator needs to do something else.

From looking at CVO, it appears that one option could be to use nodeSelector: { node-role.kubernetes.io/master: "" } and KUBERNETES_SERVICE_HOST=127.0.0.1, because masters always run an API server.

From looking at CNO, it seems like another route could be to mount /etc/kubernetes from the host.
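The CVO-style option could be sketched as a pod spec fragment like this (illustrative only; the port value and container name are assumptions):

```yaml
# Hypothetical fragment of the operator's pod spec: pin the pod to masters,
# where an API server is always running locally, and bypass the service VIP.
spec:
  nodeSelector:
    node-role.kubernetes.io/master: ""
  containers:
  - name: operator
    env:
    - name: KUBERNETES_SERVICE_HOST
      value: "127.0.0.1"
    - name: KUBERNETES_SERVICE_PORT   # assumed; depends on the local apiserver port
      value: "6443"
```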

"Unexpected unclosed action in template clause" when trying to render templated string from values.yaml

As explained in helm/helm#7704, the new Cilium Helm chart does not support older versions of Helm. cilium-olm uses old operator-sdk libraries, which cannot parse the Cilium 1.12 Helm chart. The operator will not reconcile anything, so cilium-olm v1.12 does not work properly:

2022-08-03T07:14:51.207Z ERROR helm.controller Failed to sync release {"namespace": "kube-system", "name": "cilium", "apiVersion": "cilium.io/v1alpha1", "kind": "CiliumConfig", "release": "cilium", "error": "failed to get candidate release: parse error at (cilium/templates/validate.yaml:38): unclosed action"}

operator getting oom-killed

    State:          Running
      Started:      Mon, 12 Apr 2021 15:45:23 +0100
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Mon, 12 Apr 2021 15:42:08 +0100
      Finished:     Mon, 12 Apr 2021 15:44:55 +0100
    Ready:          True
    Restart Count:  4
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:     100m
      memory:  100Mi
    Environment:

it looks like the operator requires more memory now...
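If so, the fix would be a bump along these lines in the container's resource spec (200Mi is a guess for illustration; the real figure should come from observed usage):

```yaml
resources:
  limits:
    cpu: 100m
    memory: 200Mi   # doubled from 100Mi as a starting point
  requests:
    cpu: 100m
    memory: 200Mi
```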

scan failure: certified-operator-catalog-initialization-bundle-image

===== Test: operator-catalog-initialization-bundle-image =====

 The operator FAILED the index image build test.
Build state history for v4.5:
2021-02-03T15:59:17.726106Z - failed - Push to cvpops in the legacy app registry was unsucessful: Failed to push manifest: Resulting bundle is invalid, input yaml is improperly defined.
2021-02-03T15:58:50.883734Z - in_progress - Creating the manifest list
2021-02-03T15:56:03.415477Z - in_progress - Checking if bundles are already present in index image
2021-02-03T15:56:00.860995Z - in_progress - Building the index image for the following arches: amd64, ppc64le, s390x
2021-02-03T15:55:58.912571Z - in_progress - Backport legacy support will be forced
2021-02-03T15:55:53.730117Z - in_progress - Resolving the container images
2021-02-03T15:55:45.480477Z - in_progress - Resolving the bundles
2021-02-03T15:52:27.561378Z - in_progress - The request was initiated

Build state history for v4.6:
2021-02-03T16:04:57.614152Z - complete - The operator bundle(s) were successfully added to the index image
2021-02-03T16:04:50.455927Z - in_progress - Creating the manifest list
2021-02-03T16:02:05.935488Z - in_progress - Checking if bundles are already present in index image
2021-02-03T16:02:03.217716Z - in_progress - Building the index image for the following arches: amd64, ppc64le, s390x
2021-02-03T16:01:57.754333Z - in_progress - Resolving the container images
2021-02-03T16:01:54.423896Z - in_progress - Resolving the bundles
2021-02-03T15:59:20.289639Z - in_progress - The request was initiated



---------------------------------------
IIB Build Logs for the build ID - 44123 - OCP version: v4.5:

2021-02-03 10:55:37,390 iib.workers.tasks.build INFO build._cleanup Removing all existing container images
2021-02-03 10:55:37,390 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman rmi --all --force"
2021-02-03 10:55:45,408 iib.workers.tasks.utils DEBUG utils.reset_docker_config Removing the Docker config at /home/iib-worker-cvp-parallel-1/.docker/config.json
2021-02-03 10:55:45,409 iib.workers.tasks.utils DEBUG utils.reset_docker_config Creating a symlink from /home/iib-worker-cvp-parallel-1/.docker/config.json.template to /home/iib-worker-cvp-parallel-1/.docker/config.json
2021-02-03 10:55:45,410 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 44123 to "in_progress" with the reason "Resolving the bundles"
2021-02-03 10:55:45,410 iib.workers.api_utils INFO api_utils.update_request Patching the request 44123 with {'state': 'in_progress', 'state_reason': 'Resolving the bundles'}
2021-02-03 10:55:47,507 iib.workers.tasks.build INFO build._get_resolved_bundles Resolving bundles registry-proxy.engineering.redhat.com/rh-osbs/iib:44108
2021-02-03 10:55:47,507 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/iib:44108 --raw"
2021-02-03 10:55:52,975 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:b418cbe1155e2eb12d62e82c4f396b2a1fba9c472b9f70d3913fa177af2527d9
2021-02-03 10:55:52,976 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:b418cbe1155e2eb12d62e82c4f396b2a1fba9c472b9f70d3913fa177af2527d9 --config"
2021-02-03 10:55:53,663 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 44123 to "in_progress" with the reason "Resolving the container images"
2021-02-03 10:55:53,664 iib.workers.api_utils INFO api_utils.update_request Patching the request 44123 with {'state': 'in_progress', 'state_reason': 'Resolving the container images'}
2021-02-03 10:55:55,844 iib.workers.tasks.utils DEBUG utils.set_registry_token Not changing the Docker configuration since no overwrite_from_index_token was provided
2021-02-03 10:55:55,845 iib.workers.tasks.build DEBUG build._get_resolved_image Resolving registry.redhat.io/redhat/certified-operator-index:v4.5
2021-02-03 10:55:55,845 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry.redhat.io/redhat/certified-operator-index:v4.5 --raw"
2021-02-03 10:55:57,779 iib.workers.tasks.build DEBUG build._get_resolved_image registry.redhat.io/redhat/certified-operator-index:v4.5 resolved to registry.redhat.io/redhat/certified-operator-index@sha256:87212f70f35a5f149054ad84155728acab1a830832e104da2d83ea603a6b307e
2021-02-03 10:55:57,779 iib.workers.tasks.build DEBUG build._get_image_arches Get the available arches for registry.redhat.io/redhat/certified-operator-index@sha256:87212f70f35a5f149054ad84155728acab1a830832e104da2d83ea603a6b307e
2021-02-03 10:55:57,780 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of com.redhat.index.delivery.version from registry.redhat.io/redhat/certified-operator-index@sha256:87212f70f35a5f149054ad84155728acab1a830832e104da2d83ea603a6b307e
2021-02-03 10:55:57,781 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry.redhat.io/redhat/certified-operator-index@sha256:87212f70f35a5f149054ad84155728acab1a830832e104da2d83ea603a6b307e
2021-02-03 10:55:57,781 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of com.redhat.index.delivery.distribution_scope from registry.redhat.io/redhat/certified-operator-index@sha256:87212f70f35a5f149054ad84155728acab1a830832e104da2d83ea603a6b307e
2021-02-03 10:55:57,781 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry.redhat.io/redhat/certified-operator-index@sha256:87212f70f35a5f149054ad84155728acab1a830832e104da2d83ea603a6b307e
2021-02-03 10:55:57,782 iib.workers.tasks.build DEBUG build._prepare_request_for_build Set to build the index image for the following arches: amd64, ppc64le, s390x
2021-02-03 10:55:57,782 iib.workers.tasks.build DEBUG build._get_resolved_image Resolving registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.5
2021-02-03 10:55:57,782 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.5 --raw"
2021-02-03 10:55:58,240 iib.workers.tasks.build DEBUG build._get_resolved_image registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.5 resolved to registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:f723c15ac254b60db51681d8e35380f33e844ff1365fcba8053793e102f0a383
2021-02-03 10:55:58,241 iib.workers.tasks.build DEBUG build._get_image_arches Get the available arches for registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:f723c15ac254b60db51681d8e35380f33e844ff1365fcba8053793e102f0a383
2021-02-03 10:55:58,242 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of operators.operatorframework.io.bundle.package.v1 from registry-proxy.engineering.redhat.com/rh-osbs/iib:44108
2021-02-03 10:55:58,242 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry-proxy.engineering.redhat.com/rh-osbs/iib:44108
2021-02-03 10:55:58,242 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/iib:44108 --config"
2021-02-03 10:55:58,865 iib.workers.tasks.build INFO build.handle_add_request Checking if interacting with the legacy app registry is required
2021-02-03 10:55:58,866 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 44123 to "in_progress" with the reason "Backport legacy support will be forced"
2021-02-03 10:55:58,866 iib.workers.api_utils INFO api_utils.update_request Patching the request 44123 with {'state': 'in_progress', 'state_reason': 'Backport legacy support will be forced'}
2021-02-03 10:56:00,825 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:b418cbe1155e2eb12d62e82c4f396b2a1fba9c472b9f70d3913fa177af2527d9
2021-02-03 10:56:00,827 iib.workers.api_utils INFO api_utils.update_request Patching the request 44123 with {'binary_image': 'registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.5', 'binary_image_resolved': 'registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:f723c15ac254b60db51681d8e35380f33e844ff1365fcba8053793e102f0a383', 'state': 'in_progress', 'distribution_scope': 'prod', 'state_reason': 'Building the index image for the following arches: amd64, ppc64le, s390x', 'bundle_mapping': {'cilium': ['registry-proxy.engineering.redhat.com/rh-osbs/iib:44108']}, 'from_index_resolved': 'registry.redhat.io/redhat/certified-operator-index@sha256:87212f70f35a5f149054ad84155728acab1a830832e104da2d83ea603a6b307e'}
2021-02-03 10:56:03,370 iib.workers.tasks.build INFO build.handle_add_request Checking if bundles are already present in index image
2021-02-03 10:56:03,370 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 44123 to "in_progress" with the reason "Checking if bundles are already present in index image"
2021-02-03 10:56:03,371 iib.workers.api_utils INFO api_utils.update_request Patching the request 44123 with {'state': 'in_progress', 'state_reason': 'Checking if bundles are already present in index image'}
2021-02-03 10:56:05,497 iib.workers.tasks.utils DEBUG utils.set_registry_token Not changing the Docker configuration since no overwrite_from_index_token was provided
2021-02-03 10:56:05,497 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of operators.operatorframework.io.index.database.v1 from registry.redhat.io/redhat/certified-operator-index@sha256:87212f70f35a5f149054ad84155728acab1a830832e104da2d83ea603a6b307e
2021-02-03 10:56:05,497 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry.redhat.io/redhat/certified-operator-index@sha256:87212f70f35a5f149054ad84155728acab1a830832e104da2d83ea603a6b307e
2021-02-03 10:56:05,498 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman create registry.redhat.io/redhat/certified-operator-index@sha256:87212f70f35a5f149054ad84155728acab1a830832e104da2d83ea603a6b307e unused"
2021-02-03 10:56:32,261 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman cp 4f1b1c4aef4e04164cf789629a024ebc576dfa42555fc7cb28f02310ba958745:/database/index.db /tmp/iib-ds3p6rqw"
2021-02-03 10:56:33,871 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman rm 4f1b1c4aef4e04164cf789629a024ebc576dfa42555fc7cb28f02310ba958745"
2021-02-03 10:56:35,253 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "grpcurl -plaintext localhost:50051 list api.Registry"
2021-02-03 10:56:35,301 iib.workers.tasks.build DEBUG build._serve_index_registry_at_port Started the command "opm registry serve -p 50051 -d /tmp/iib-ds3p6rqw/index.db -t /dev/null"
2021-02-03 10:56:35,301 iib.workers.tasks.build INFO build._serve_index_registry_at_port Index registry service has been initialized.
2021-02-03 10:56:35,301 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "grpcurl -plaintext localhost:50051 api.Registry/ListBundles"
2021-02-03 10:56:38,050 iib.workers.tasks.build INFO build._opm_index_add Generating the database file with the following bundle(s): registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:b418cbe1155e2eb12d62e82c4f396b2a1fba9c472b9f70d3913fa177af2527d9
2021-02-03 10:56:38,062 iib.workers.tasks.build INFO build._opm_index_add Using the existing database from registry.redhat.io/redhat/certified-operator-index@sha256:87212f70f35a5f149054ad84155728acab1a830832e104da2d83ea603a6b307e
2021-02-03 10:56:38,062 iib.workers.tasks.utils DEBUG utils.set_registry_token Not changing the Docker configuration since no overwrite_from_index_token was provided
2021-02-03 10:56:38,062 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "opm index add --generate --bundles registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:b418cbe1155e2eb12d62e82c4f396b2a1fba9c472b9f70d3913fa177af2527d9 --binary-image registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:f723c15ac254b60db51681d8e35380f33e844ff1365fcba8053793e102f0a383 --from-index registry.redhat.io/redhat/certified-operator-index@sha256:87212f70f35a5f149054ad84155728acab1a830832e104da2d83ea603a6b307e"
2021-02-03 10:57:03,849 iib.workers.tasks.build DEBUG build._add_label_to_index Added the following line to index.Dockerfile: LABEL com.redhat.index.delivery.version="v4.5"
2021-02-03 10:57:03,852 iib.workers.tasks.build DEBUG build._add_label_to_index Added the following line to index.Dockerfile: LABEL com.redhat.index.delivery.distribution_scope="prod"
2021-02-03 10:57:03,852 iib.workers.tasks.build INFO build._build_image Building the container image with the index.Dockerfile dockerfile for arch amd64 and tagging it as iib-build:44123-amd64
2021-02-03 10:57:03,852 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "buildah bud --no-cache --override-arch amd64 -t iib-build:44123-amd64 -f /tmp/iib-ds3p6rqw/index.Dockerfile"
2021-02-03 10:57:22,103 iib.workers.tasks.build INFO build._push_image Pushing the container image iib-build:44123-amd64 to docker://quay.io/rh-osbs/iib:44123-amd64
2021-02-03 10:57:22,103 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman push -q iib-build:44123-amd64 docker://quay.io/rh-osbs/iib:44123-amd64"
2021-02-03 10:57:41,555 iib.workers.tasks.build DEBUG build._push_image Verifying that docker://quay.io/rh-osbs/iib:44123-amd64 was pushed as a v2 manifest due to RHBZ#1810768
2021-02-03 10:57:41,558 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://quay.io/rh-osbs/iib:44123-amd64 --raw"
2021-02-03 10:57:41,949 iib.workers.tasks.build WARNING build._push_image The manifest for docker://quay.io/rh-osbs/iib:44123-amd64 ended up using schema version 1 due to RHBZ#1810768. Manually fixing it with skopeo.
2021-02-03 10:57:41,949 iib.workers.tasks.build DEBUG build._skopeo_copy Copying the container image docker://quay.io/rh-osbs/iib:44123-amd64 to docker://quay.io/rh-osbs/iib:44123-amd64
2021-02-03 10:57:41,949 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s copy --format v2s2 docker://quay.io/rh-osbs/iib:44123-amd64 docker://quay.io/rh-osbs/iib:44123-amd64"
2021-02-03 10:57:46,245 iib.workers.tasks.build INFO build._build_image Building the container image with the index.Dockerfile dockerfile for arch ppc64le and tagging it as iib-build:44123-ppc64le
2021-02-03 10:57:46,246 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "buildah bud --no-cache --override-arch ppc64le -t iib-build:44123-ppc64le -f /tmp/iib-ds3p6rqw/index.Dockerfile"
2021-02-03 10:58:10,941 iib.workers.tasks.build INFO build._push_image Pushing the container image iib-build:44123-ppc64le to docker://quay.io/rh-osbs/iib:44123-ppc64le
2021-02-03 10:58:10,942 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman push -q iib-build:44123-ppc64le docker://quay.io/rh-osbs/iib:44123-ppc64le"
2021-02-03 10:58:15,739 iib.workers.tasks.build DEBUG build._push_image Verifying that docker://quay.io/rh-osbs/iib:44123-ppc64le was pushed as a v2 manifest due to RHBZ#1810768
2021-02-03 10:58:15,739 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://quay.io/rh-osbs/iib:44123-ppc64le --raw"
2021-02-03 10:58:16,119 iib.workers.tasks.build WARNING build._push_image The manifest for docker://quay.io/rh-osbs/iib:44123-ppc64le ended up using schema version 1 due to RHBZ#1810768. Manually fixing it with skopeo.
2021-02-03 10:58:16,120 iib.workers.tasks.build DEBUG build._skopeo_copy Copying the container image docker://quay.io/rh-osbs/iib:44123-ppc64le to docker://quay.io/rh-osbs/iib:44123-ppc64le
2021-02-03 10:58:16,120 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s copy --format v2s2 docker://quay.io/rh-osbs/iib:44123-ppc64le docker://quay.io/rh-osbs/iib:44123-ppc64le"
2021-02-03 10:58:18,752 iib.workers.tasks.build INFO build._build_image Building the container image with the index.Dockerfile dockerfile for arch s390x and tagging it as iib-build:44123-s390x
2021-02-03 10:58:18,753 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "buildah bud --no-cache --override-arch s390x -t iib-build:44123-s390x -f /tmp/iib-ds3p6rqw/index.Dockerfile"
2021-02-03 10:58:39,026 iib.workers.tasks.build INFO build._push_image Pushing the container image iib-build:44123-s390x to docker://quay.io/rh-osbs/iib:44123-s390x
2021-02-03 10:58:39,031 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman push -q iib-build:44123-s390x docker://quay.io/rh-osbs/iib:44123-s390x"
2021-02-03 10:58:47,678 iib.workers.tasks.build DEBUG build._push_image Verifying that docker://quay.io/rh-osbs/iib:44123-s390x was pushed as a v2 manifest due to RHBZ#1810768
2021-02-03 10:58:47,679 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://quay.io/rh-osbs/iib:44123-s390x --raw"
2021-02-03 10:58:48,060 iib.workers.tasks.build WARNING build._push_image The manifest for docker://quay.io/rh-osbs/iib:44123-s390x ended up using schema version 1 due to RHBZ#1810768. Manually fixing it with skopeo.
2021-02-03 10:58:48,060 iib.workers.tasks.build DEBUG build._skopeo_copy Copying the container image docker://quay.io/rh-osbs/iib:44123-s390x to docker://quay.io/rh-osbs/iib:44123-s390x
2021-02-03 10:58:48,061 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s copy --format v2s2 docker://quay.io/rh-osbs/iib:44123-s390x docker://quay.io/rh-osbs/iib:44123-s390x"
2021-02-03 10:58:50,784 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 44123 to "in_progress" with the reason "Creating the manifest list"
2021-02-03 10:58:50,785 iib.workers.api_utils INFO api_utils.update_request Patching the request 44123 with {'state': 'in_progress', 'state_reason': 'Creating the manifest list'}
2021-02-03 10:58:53,155 iib.workers.tasks.build INFO build._create_and_push_manifest_list Creating the manifest list quay.io/rh-osbs/iib:44123
2021-02-03 10:58:53,156 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Adding the manifest quay.io/rh-osbs/iib:44123-amd64 to the manifest list quay.io/rh-osbs/iib:44123
2021-02-03 10:58:53,156 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Adding the manifest quay.io/rh-osbs/iib:44123-ppc64le to the manifest list quay.io/rh-osbs/iib:44123
2021-02-03 10:58:53,157 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Adding the manifest quay.io/rh-osbs/iib:44123-s390x to the manifest list quay.io/rh-osbs/iib:44123
2021-02-03 10:58:53,157 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Created the manifest configuration with the following content:
image: quay.io/rh-osbs/iib:44123
manifests:
- image: quay.io/rh-osbs/iib:44123-amd64
  platform:
    architecture: amd64
    os: linux
- image: quay.io/rh-osbs/iib:44123-ppc64le
  platform:
    architecture: ppc64le
    os: linux
- image: quay.io/rh-osbs/iib:44123-s390x
  platform:
    architecture: s390x
    os: linux

2021-02-03 10:58:53,157 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "manifest-tool push from-spec /tmp/iib-rcgsb408/manifest.yaml"
2021-02-03 10:59:00,028 iib.workers.tasks.legacy INFO legacy._opm_index_export Generating the backported operator for package: cilium
2021-02-03 10:59:00,028 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "opm index export --index quay.io/rh-osbs/iib:44123 --package cilium --download-folder cilium"
2021-02-03 10:59:16,737 iib.workers.tasks.legacy INFO legacy._verify_package_info Verifying package_name cilium
2021-02-03 10:59:16,746 iib.workers.tasks.legacy INFO legacy._push_package_manifest Files are {'file': ('/tmp/iib-b4tdgrzg/cilium/manifests.zip', <_io.BufferedReader name='/tmp/iib-b4tdgrzg/cilium/manifests.zip'>)}
2021-02-03 10:59:17,657 iib.workers.tasks.legacy ERROR legacy._push_package_manifest Request to OMPS failed: {"error":"PackageValidationError","message":"Failed to push manifest: Resulting bundle is invalid, input yaml is improperly defined.","status":400,"validation_info":{"errors":["CRD.spec.version does not match CSV.spec.crd.owned.version"],"warnings":["csv metadata.annotations.description not defined","csv metadata.annotations.containerImage not defined","csv metadata.annotations.createdAt not defined","csv metadata.annotations.certified not defined."]}}

200


---------------------------------------
IIB Build Logs for the build ID - 44135 - OCP version: v4.6:

2021-02-03 11:01:41,631 iib.workers.tasks.build INFO build._cleanup Removing all existing container images
2021-02-03 11:01:41,631 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman rmi --all --force"
2021-02-03 11:01:54,345 iib.workers.tasks.utils DEBUG utils.reset_docker_config Removing the Docker config at /home/iib-worker-cvp-parallel-2/.docker/config.json
2021-02-03 11:01:54,346 iib.workers.tasks.utils DEBUG utils.reset_docker_config Creating a symlink from /home/iib-worker-cvp-parallel-2/.docker/config.json.template to /home/iib-worker-cvp-parallel-2/.docker/config.json
2021-02-03 11:01:54,346 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 44135 to "in_progress" with the reason "Resolving the bundles"
2021-02-03 11:01:54,346 iib.workers.api_utils INFO api_utils.update_request Patching the request 44135 with {'state': 'in_progress', 'state_reason': 'Resolving the bundles'}
2021-02-03 11:01:56,363 iib.workers.tasks.build INFO build._get_resolved_bundles Resolving bundles registry-proxy.engineering.redhat.com/rh-osbs/iib:44108
2021-02-03 11:01:56,363 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/iib:44108 --raw"
2021-02-03 11:01:57,687 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:b418cbe1155e2eb12d62e82c4f396b2a1fba9c472b9f70d3913fa177af2527d9
2021-02-03 11:01:57,692 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 44135 to "in_progress" with the reason "Resolving the container images"
2021-02-03 11:01:57,693 iib.workers.api_utils INFO api_utils.update_request Patching the request 44135 with {'state': 'in_progress', 'state_reason': 'Resolving the container images'}
2021-02-03 11:01:59,846 iib.workers.tasks.utils DEBUG utils.set_registry_token Not changing the Docker configuration since no overwrite_from_index_token was provided
2021-02-03 11:01:59,846 iib.workers.tasks.build DEBUG build._get_resolved_image Resolving registry.redhat.io/redhat/certified-operator-index:v4.6
2021-02-03 11:01:59,846 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry.redhat.io/redhat/certified-operator-index:v4.6 --raw"
2021-02-03 11:02:01,903 iib.workers.tasks.build DEBUG build._get_resolved_image registry.redhat.io/redhat/certified-operator-index:v4.6 resolved to registry.redhat.io/redhat/certified-operator-index@sha256:3ce89579be1ab940cfc67e2368ec9d479980fe18a9fb240c7ad66f2fdc0c40c3
2021-02-03 11:02:01,903 iib.workers.tasks.build DEBUG build._get_image_arches Get the available arches for registry.redhat.io/redhat/certified-operator-index@sha256:3ce89579be1ab940cfc67e2368ec9d479980fe18a9fb240c7ad66f2fdc0c40c3
2021-02-03 11:02:01,905 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of com.redhat.index.delivery.version from registry.redhat.io/redhat/certified-operator-index@sha256:3ce89579be1ab940cfc67e2368ec9d479980fe18a9fb240c7ad66f2fdc0c40c3
2021-02-03 11:02:01,905 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry.redhat.io/redhat/certified-operator-index@sha256:3ce89579be1ab940cfc67e2368ec9d479980fe18a9fb240c7ad66f2fdc0c40c3
2021-02-03 11:02:01,906 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of com.redhat.index.delivery.distribution_scope from registry.redhat.io/redhat/certified-operator-index@sha256:3ce89579be1ab940cfc67e2368ec9d479980fe18a9fb240c7ad66f2fdc0c40c3
2021-02-03 11:02:01,906 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry.redhat.io/redhat/certified-operator-index@sha256:3ce89579be1ab940cfc67e2368ec9d479980fe18a9fb240c7ad66f2fdc0c40c3
2021-02-03 11:02:01,907 iib.workers.tasks.build DEBUG build._prepare_request_for_build Set to build the index image for the following arches: amd64, ppc64le, s390x
2021-02-03 11:02:01,907 iib.workers.tasks.build DEBUG build._get_resolved_image Resolving registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.6
2021-02-03 11:02:01,908 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.6 --raw"
2021-02-03 11:02:02,425 iib.workers.tasks.build DEBUG build._get_resolved_image registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.6 resolved to registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:48634c6e71a4e3de321147eeeea39679c170ac25e3bb62c7f66f04cdd00cd0c7
2021-02-03 11:02:02,426 iib.workers.tasks.build DEBUG build._get_image_arches Get the available arches for registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:48634c6e71a4e3de321147eeeea39679c170ac25e3bb62c7f66f04cdd00cd0c7
2021-02-03 11:02:02,427 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of operators.operatorframework.io.bundle.package.v1 from registry-proxy.engineering.redhat.com/rh-osbs/iib:44108
2021-02-03 11:02:02,427 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry-proxy.engineering.redhat.com/rh-osbs/iib:44108
2021-02-03 11:02:02,427 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/iib:44108 --config"
2021-02-03 11:02:03,167 iib.workers.tasks.build INFO build.handle_add_request Checking if interacting with the legacy app registry is required
2021-02-03 11:02:03,167 iib.workers.tasks.legacy INFO legacy.get_legacy_support_packages Backport legacy support is disabled for v4.6
2021-02-03 11:02:03,168 iib.workers.api_utils INFO api_utils.update_request Patching the request 44135 with {'binary_image': 'registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.6', 'binary_image_resolved': 'registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:48634c6e71a4e3de321147eeeea39679c170ac25e3bb62c7f66f04cdd00cd0c7', 'state': 'in_progress', 'distribution_scope': 'prod', 'state_reason': 'Building the index image for the following arches: amd64, ppc64le, s390x', 'bundle_mapping': {'cilium': ['registry-proxy.engineering.redhat.com/rh-osbs/iib:44108']}, 'from_index_resolved': 'registry.redhat.io/redhat/certified-operator-index@sha256:3ce89579be1ab940cfc67e2368ec9d479980fe18a9fb240c7ad66f2fdc0c40c3'}
2021-02-03 11:02:05,892 iib.workers.tasks.build INFO build.handle_add_request Checking if bundles are already present in index image
2021-02-03 11:02:05,892 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 44135 to "in_progress" with the reason "Checking if bundles are already present in index image"
2021-02-03 11:02:05,892 iib.workers.api_utils INFO api_utils.update_request Patching the request 44135 with {'state': 'in_progress', 'state_reason': 'Checking if bundles are already present in index image'}
2021-02-03 11:02:08,218 iib.workers.tasks.utils DEBUG utils.set_registry_token Not changing the Docker configuration since no overwrite_from_index_token was provided
2021-02-03 11:02:08,218 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of operators.operatorframework.io.index.database.v1 from registry.redhat.io/redhat/certified-operator-index@sha256:3ce89579be1ab940cfc67e2368ec9d479980fe18a9fb240c7ad66f2fdc0c40c3
2021-02-03 11:02:08,218 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry.redhat.io/redhat/certified-operator-index@sha256:3ce89579be1ab940cfc67e2368ec9d479980fe18a9fb240c7ad66f2fdc0c40c3
2021-02-03 11:02:08,219 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman create registry.redhat.io/redhat/certified-operator-index@sha256:3ce89579be1ab940cfc67e2368ec9d479980fe18a9fb240c7ad66f2fdc0c40c3 unused"
2021-02-03 11:02:44,087 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman cp da472ff6de438b76aeeee1319d213b319c1dea7ad4f4e1646050362329ececf0:/database/index.db /tmp/iib-qejqeey_"
2021-02-03 11:02:44,985 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman rm da472ff6de438b76aeeee1319d213b319c1dea7ad4f4e1646050362329ececf0"
2021-02-03 11:02:46,534 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "grpcurl -plaintext localhost:50051 list api.Registry"
2021-02-03 11:02:46,673 iib.workers.tasks.build DEBUG build._serve_index_registry_at_port Started the command "opm registry serve -p 50051 -d /tmp/iib-qejqeey_/index.db -t /dev/null"
2021-02-03 11:02:46,674 iib.workers.tasks.build INFO build._serve_index_registry_at_port Index registry service has been initialized.
2021-02-03 11:02:46,674 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "grpcurl -plaintext localhost:50051 api.Registry/ListBundles"
2021-02-03 11:02:48,441 iib.workers.tasks.build INFO build._opm_index_add Generating the database file with the following bundle(s): registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:b418cbe1155e2eb12d62e82c4f396b2a1fba9c472b9f70d3913fa177af2527d9
2021-02-03 11:02:48,441 iib.workers.tasks.build INFO build._opm_index_add Using the existing database from registry.redhat.io/redhat/certified-operator-index@sha256:3ce89579be1ab940cfc67e2368ec9d479980fe18a9fb240c7ad66f2fdc0c40c3
2021-02-03 11:02:48,442 iib.workers.tasks.utils DEBUG utils.set_registry_token Not changing the Docker configuration since no overwrite_from_index_token was provided
2021-02-03 11:02:48,442 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "opm index add --generate --bundles registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:b418cbe1155e2eb12d62e82c4f396b2a1fba9c472b9f70d3913fa177af2527d9 --binary-image registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:48634c6e71a4e3de321147eeeea39679c170ac25e3bb62c7f66f04cdd00cd0c7 --from-index registry.redhat.io/redhat/certified-operator-index@sha256:3ce89579be1ab940cfc67e2368ec9d479980fe18a9fb240c7ad66f2fdc0c40c3"
2021-02-03 11:03:17,865 iib.workers.tasks.build DEBUG build._add_label_to_index Added the following line to index.Dockerfile: LABEL com.redhat.index.delivery.version="v4.6"
2021-02-03 11:03:17,866 iib.workers.tasks.build DEBUG build._add_label_to_index Added the following line to index.Dockerfile: LABEL com.redhat.index.delivery.distribution_scope="prod"
2021-02-03 11:03:17,866 iib.workers.tasks.build INFO build._build_image Building the container image with the index.Dockerfile dockerfile for arch amd64 and tagging it as iib-build:44135-amd64
2021-02-03 11:03:17,867 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "buildah bud --no-cache --override-arch amd64 -t iib-build:44135-amd64 -f /tmp/iib-qejqeey_/index.Dockerfile"
2021-02-03 11:03:40,842 iib.workers.tasks.build INFO build._push_image Pushing the container image iib-build:44135-amd64 to docker://quay.io/rh-osbs/iib:44135-amd64
2021-02-03 11:03:40,843 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman push -q iib-build:44135-amd64 docker://quay.io/rh-osbs/iib:44135-amd64"
2021-02-03 11:03:47,621 iib.workers.tasks.build DEBUG build._push_image Verifying that docker://quay.io/rh-osbs/iib:44135-amd64 was pushed as a v2 manifest due to RHBZ#1810768
2021-02-03 11:03:47,622 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://quay.io/rh-osbs/iib:44135-amd64 --raw"
2021-02-03 11:03:48,066 iib.workers.tasks.build WARNING build._push_image The manifest for docker://quay.io/rh-osbs/iib:44135-amd64 ended up using schema version 1 due to RHBZ#1810768. Manually fixing it with skopeo.
2021-02-03 11:03:48,066 iib.workers.tasks.build DEBUG build._skopeo_copy Copying the container image docker://quay.io/rh-osbs/iib:44135-amd64 to docker://quay.io/rh-osbs/iib:44135-amd64
2021-02-03 11:03:48,067 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s copy --format v2s2 docker://quay.io/rh-osbs/iib:44135-amd64 docker://quay.io/rh-osbs/iib:44135-amd64"
2021-02-03 11:03:50,992 iib.workers.tasks.build INFO build._build_image Building the container image with the index.Dockerfile dockerfile for arch ppc64le and tagging it as iib-build:44135-ppc64le
2021-02-03 11:03:50,993 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "buildah bud --no-cache --override-arch ppc64le -t iib-build:44135-ppc64le -f /tmp/iib-qejqeey_/index.Dockerfile"
2021-02-03 11:04:05,276 iib.workers.tasks.build INFO build._push_image Pushing the container image iib-build:44135-ppc64le to docker://quay.io/rh-osbs/iib:44135-ppc64le
2021-02-03 11:04:05,277 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman push -q iib-build:44135-ppc64le docker://quay.io/rh-osbs/iib:44135-ppc64le"
2021-02-03 11:04:10,109 iib.workers.tasks.build DEBUG build._push_image Verifying that docker://quay.io/rh-osbs/iib:44135-ppc64le was pushed as a v2 manifest due to RHBZ#1810768
2021-02-03 11:04:10,110 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://quay.io/rh-osbs/iib:44135-ppc64le --raw"
2021-02-03 11:04:10,614 iib.workers.tasks.build WARNING build._push_image The manifest for docker://quay.io/rh-osbs/iib:44135-ppc64le ended up using schema version 1 due to RHBZ#1810768. Manually fixing it with skopeo.
2021-02-03 11:04:10,615 iib.workers.tasks.build DEBUG build._skopeo_copy Copying the container image docker://quay.io/rh-osbs/iib:44135-ppc64le to docker://quay.io/rh-osbs/iib:44135-ppc64le
2021-02-03 11:04:10,615 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s copy --format v2s2 docker://quay.io/rh-osbs/iib:44135-ppc64le docker://quay.io/rh-osbs/iib:44135-ppc64le"
2021-02-03 11:04:21,210 iib.workers.tasks.build INFO build._build_image Building the container image with the index.Dockerfile dockerfile for arch s390x and tagging it as iib-build:44135-s390x
2021-02-03 11:04:21,210 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "buildah bud --no-cache --override-arch s390x -t iib-build:44135-s390x -f /tmp/iib-qejqeey_/index.Dockerfile"
2021-02-03 11:04:37,192 iib.workers.tasks.build INFO build._push_image Pushing the container image iib-build:44135-s390x to docker://quay.io/rh-osbs/iib:44135-s390x
2021-02-03 11:04:37,193 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman push -q iib-build:44135-s390x docker://quay.io/rh-osbs/iib:44135-s390x"
2021-02-03 11:04:42,278 iib.workers.tasks.build DEBUG build._push_image Verifying that docker://quay.io/rh-osbs/iib:44135-s390x was pushed as a v2 manifest due to RHBZ#1810768
2021-02-03 11:04:42,279 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://quay.io/rh-osbs/iib:44135-s390x --raw"
2021-02-03 11:04:42,781 iib.workers.tasks.build WARNING build._push_image The manifest for docker://quay.io/rh-osbs/iib:44135-s390x ended up using schema version 1 due to RHBZ#1810768. Manually fixing it with skopeo.
2021-02-03 11:04:42,782 iib.workers.tasks.build DEBUG build._skopeo_copy Copying the container image docker://quay.io/rh-osbs/iib:44135-s390x to docker://quay.io/rh-osbs/iib:44135-s390x
2021-02-03 11:04:42,782 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s copy --format v2s2 docker://quay.io/rh-osbs/iib:44135-s390x docker://quay.io/rh-osbs/iib:44135-s390x"
2021-02-03 11:04:50,388 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 44135 to "in_progress" with the reason "Creating the manifest list"
2021-02-03 11:04:50,389 iib.workers.api_utils INFO api_utils.update_request Patching the request 44135 with {'state': 'in_progress', 'state_reason': 'Creating the manifest list'}
2021-02-03 11:04:52,684 iib.workers.tasks.build INFO build._create_and_push_manifest_list Creating the manifest list quay.io/rh-osbs/iib:44135
2021-02-03 11:04:52,685 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Adding the manifest quay.io/rh-osbs/iib:44135-amd64 to the manifest list quay.io/rh-osbs/iib:44135
2021-02-03 11:04:52,685 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Adding the manifest quay.io/rh-osbs/iib:44135-ppc64le to the manifest list quay.io/rh-osbs/iib:44135
2021-02-03 11:04:52,686 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Adding the manifest quay.io/rh-osbs/iib:44135-s390x to the manifest list quay.io/rh-osbs/iib:44135
2021-02-03 11:04:52,686 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Created the manifest configuration with the following content:
image: quay.io/rh-osbs/iib:44135
manifests:
- image: quay.io/rh-osbs/iib:44135-amd64
  platform:
    architecture: amd64
    os: linux
- image: quay.io/rh-osbs/iib:44135-ppc64le
  platform:
    architecture: ppc64le
    os: linux
- image: quay.io/rh-osbs/iib:44135-s390x
  platform:
    architecture: s390x
    os: linux

2021-02-03 11:04:52,686 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "manifest-tool push from-spec /tmp/iib-f1q4u25a/manifest.yaml"
2021-02-03 11:04:56,041 iib.workers.tasks.build INFO build._update_index_image_pull_spec Changed the index_image pull specification from quay.io/rh-osbs/iib:44135 to registry-proxy.engineering.redhat.com/rh-osbs/iib:44135
2021-02-03 11:04:56,042 iib.workers.api_utils INFO api_utils.update_request Patching the request 44135 with {'arches': ['s390x', 'ppc64le', 'amd64'], 'index_image': 'registry-proxy.engineering.redhat.com/rh-osbs/iib:44135'}
2021-02-03 11:04:57,570 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 44135 to "complete" with the reason "The operator bundle(s) were successfully added to the index image"
2021-02-03 11:04:57,570 iib.workers.api_utils INFO api_utils.update_request Patching the request 44135 with {'state': 'complete', 'state_reason': 'The operator bundle(s) were successfully added to the index image'}
200




-------------------
Execution Reference:

-> /cvp/cvp-isv-operator-bundle-image-validation-test/certified-ospid-e31ac831-7e72-42bb-baf9-f392ef7ea622-5d838e07-77d5-4a91-a3ff-70c41c9e3724/5d838e07-77d5-4a91-a3ff-70c41c9e3724/

scan failure: certified-operator-scorecard-tests

===== Test: operator-scorecard-tests =====

 The scorecard test experienced an issue while running tests on the operator: java.lang.Exception: Error running Ansible playbook 'operator-scorecard-test.yml': hudson.AbortException: script returned exit code 2

======== Debug output of the operator container =========

 

2021-02-09T17:55:09.378Z	INFO	cmd	Version	{"Go Version": "go1.15.5", "GOOS": "linux", "GOARCH": "amd64", "helm-operator": "v1.3.0", "commit": "1abf57985b43bf6a59dcd18147b3c574fa57d3f6"}
2021-02-09T17:55:09.381Z	INFO	cmd	Watching single namespace.	{"Namespace": "cilium"}
I0209 17:55:10.431291       1 request.go:645] Throttling request took 1.040334081s, request: GET:http://eyJhcGlWZXJzaW9uIjoiIiwia2luZCI6IiIsIm5hbWUiOiJzY29yZWNhcmQiLCJ1aWQiOiIiLCJOYW1lc3BhY2UiOiJ0ZXN0LW9wZXJhdG9yIn0K@localhost:8889/apis/machineconfiguration.openshift.io/v1?timeout=32s
2021-02-09T17:55:11.593Z	INFO	controller-runtime.metrics	metrics server is starting to listen	{"addr": ":8080"}
2021-02-09T17:55:11.602Z	INFO	helm.controller	Watching resource	{"apiVersion": "cilium.io/v1alpha1", "kind": "CiliumConfig", "namespace": "cilium", "reconcilePeriod": "1m0s"}
I0209 17:55:11.613491       1 leaderelection.go:243] attempting to acquire leader lease  test-operator/cilium-olm...
2021-02-09T17:55:11.613Z	INFO	controller-runtime.manager	starting metrics server	{"path": "/metrics"}
I0209 17:55:11.634115       1 leaderelection.go:253] successfully acquired lease test-operator/cilium-olm
2021-02-09T17:55:11.634Z	DEBUG	controller-runtime.manager.events	Normal	{"object": {"kind":"ConfigMap","namespace":"test-operator","name":"cilium-olm","uid":"1b2efd0b-c98c-4019-b14b-3e0b2aed5673","apiVersion":"v1","resourceVersion":"23136"}, "reason": "LeaderElection", "message": "ip-10-0-147-140_eff2b5be-ad16-4a71-8a6e-43ae04a2aa7f became leader"}
2021-02-09T17:55:11.634Z	INFO	controller-runtime.manager.controller.ciliumconfig-controller	Starting EventSource	{"source": "kind source: cilium.io/v1alpha1, Kind=CiliumConfig"}
2021-02-09T17:55:11.734Z	INFO	controller-runtime.manager.controller.ciliumconfig-controller	Starting Controller
2021-02-09T17:55:11.734Z	INFO	controller-runtime.manager.controller.ciliumconfig-controller	Starting workers	{"worker count": 2} 

======== Error output of the operator scorecard test =========

 

 Output of null-scorecard-errors.txt: 

time="2021-02-09T17:56:41Z" level=debug msg="Debug logging is set"



-------------------
Execution Reference:

-> /cvp/cvp-isv-operator-metadata-validation-test/certified-ospid-e31ac831-7e72-42bb-baf9-f392ef7ea622-f8e7a4b3-f243-4b85-875f-a4736ad899db/e84d79b5-7369-4908-b28b-da533a04e909/

scan failure: certified-operator-metadata-linting-bundle-image

===== Test: operator-metadata-linting-bundle-image =====

 

Operator SDK version:
-------------------------

operator-sdk version: "v1.3.0", commit: "1abf57985b43bf6a59dcd18147b3c574fa57d3f6", kubernetes version: "1.19.4", go version: "go1.15.5", GOOS: "linux", GOARCH: "amd64"


Validation output:
-------

time="2021-02-03T15:50:49Z" level=debug msg="Debug logging is set"
time="2021-02-03T15:50:49Z" level=debug msg="Found manifests directory" bundle-dir=../../../../../test-operator container-tool=docker
time="2021-02-03T15:50:49Z" level=debug msg="Found metadata directory" bundle-dir=../../../../../test-operator container-tool=docker
time="2021-02-03T15:50:49Z" level=debug msg="Getting mediaType info from manifests directory" bundle-dir=../../../../../test-operator container-tool=docker
time="2021-02-03T15:50:49Z" level=info msg="Found annotations file" bundle-dir=../../../../../test-operator container-tool=docker
time="2021-02-03T15:50:49Z" level=info msg="Could not find optional dependencies file" bundle-dir=../../../../../test-operator container-tool=docker
time="2021-02-03T15:50:49Z" level=debug msg="Validating bundle contents" bundle-dir=../../../../../test-operator container-tool=docker
time="2021-02-03T15:50:49Z" level=warning msg="Warning: Value : (cilium.v1.9.3) example annotations not found"
time="2021-02-03T15:50:49Z" level=error msg="Error: Field spec.version, Value v1beta1: spec.version: Invalid value: \"v1beta1\": must match the first version in spec.versions"


return code:
------------

1
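The `spec.version` error above is the apiextensions/v1beta1 CRD rule that the legacy singular `spec.version` field, when set, must equal the name of the first entry in `spec.versions`. A minimal consistent sketch (group and version names assumed from the operator logs):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ciliumconfigs.cilium.io
spec:
  group: cilium.io
  # legacy field; when present it must equal versions[0].name,
  # whereas the failing bundle had it set to "v1beta1"
  version: v1alpha1
  versions:
    - name: v1alpha1
      served: true
      storage: true
```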



-------------------
Execution Reference:

-> /cvp/cvp-isv-operator-bundle-image-validation-test/certified-ospid-e31ac831-7e72-42bb-baf9-f392ef7ea622-5d838e07-77d5-4a91-a3ff-70c41c9e3724/5d838e07-77d5-4a91-a3ff-70c41c9e3724/

operator panics in certification tests


========== Output of the operator container log =============

Flag --enable-leader-election has been deprecated, use --leader-elect instead.
2021-04-28T16:57:28.523Z	INFO	cmd	Version	{"Go Version": "go1.15.11", "GOOS": "linux", "GOARCH": "amd64", "helm-operator": "v1.6.1+git", "commit": "b131ca8ec77c96b9898470eba9560c30af0f23f3"}
2021-04-28T16:57:28.524Z	DPANIC	cmd	odd number of arguments passed as key-value pairs for logging	{"Namespace": "test-operator", "ignored key": "WATCH_NAMESPACE"}
github.com/go-logr/zapr.handleFields
	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:100
github.com/go-logr/zapr.(*zapLogger).Info
	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:127
github.com/operator-framework/operator-sdk/internal/cmd/helm-operator/run.run
	/workspace/internal/cmd/helm-operator/run/cmd.go:138
github.com/operator-framework/operator-sdk/internal/cmd/helm-operator/run.NewCmd.func1
	/workspace/internal/cmd/helm-operator/run/cmd.go:69
github.com/spf13/cobra.(*Command).execute
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:854
github.com/spf13/cobra.(*Command).ExecuteC
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:958
github.com/spf13/cobra.(*Command).Execute
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:895
main.main
	/workspace/cmd/helm-operator/main.go:40
runtime.main
	/usr/local/go/src/runtime/proc.go:204
panic: odd number of arguments passed as key-value pairs for logging

goroutine 1 [running]:
go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0000d4b00, 0xc0003d15c0, 0x1, 0x1)
	/go/pkg/mod/go.uber.org/[email protected]/zapcore/entry.go:230 +0x55f
go.uber.org/zap.(*Logger).DPanic(0xc0005118c0, 0x1d8ce01, 0x3d, 0xc0003d15c0, 0x1, 0x1)
	/go/pkg/mod/go.uber.org/[email protected]/logger.go:215 +0x85
github.com/go-logr/zapr.handleFields(0xc0005118c0, 0xc00063f0c0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x10, 0x1a62300, 0x1b60701)
	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:100 +0x5e5
github.com/go-logr/zapr.(*zapLogger).Info(0xc00063f0b0, 0x1d525ef, 0x22, 0xc00063f0c0, 0x1, 0x1)
	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:127 +0xb0
github.com/operator-framework/operator-sdk/internal/cmd/helm-operator/run.run(0xc0004c7b80, 0xc00019b000)
	/workspace/internal/cmd/helm-operator/run/cmd.go:138 +0xb25
github.com/operator-framework/operator-sdk/internal/cmd/helm-operator/run.NewCmd.func1(0xc0004c7b80, 0xc0003d10c0, 0x0, 0x4)
	/workspace/internal/cmd/helm-operator/run/cmd.go:69 +0x9d
github.com/spf13/cobra.(*Command).execute(0xc0004c7b80, 0xc0003d1080, 0x4, 0x4, 0xc0004c7b80, 0xc0003d1080)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:854 +0x2c2
github.com/spf13/cobra.(*Command).ExecuteC(0xc0004c78c0, 0xc0007dff60, 0x1, 0x1)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:958 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:895
main.main()
	/workspace/cmd/helm-operator/main.go:40 +0xe5
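The DPANIC and subsequent panic above come from go-logr's zapr sink, which requires structured-log arguments after the message to form alternating key/value pairs; the call site passed a value (the WATCH_NAMESPACE lookup result) without its key, leaving an odd argument count. A self-contained sketch of the invariant — not the actual zapr code, function names are illustrative:

```go
package main

import "fmt"

// kvFields mirrors the check zapr's handleFields performs: structured
// loggers take a message followed by alternating key/value arguments,
// so an odd count means a key or value was dropped at the call site.
func kvFields(args ...interface{}) (map[string]interface{}, error) {
	if len(args)%2 != 0 {
		return nil, fmt.Errorf("odd number of arguments passed as key-value pairs for logging")
	}
	fields := make(map[string]interface{}, len(args)/2)
	for i := 0; i < len(args); i += 2 {
		key, ok := args[i].(string)
		if !ok {
			return nil, fmt.Errorf("non-string key at argument %d", i)
		}
		fields[key] = args[i+1]
	}
	return fields, nil
}

func main() {
	// Correct call shape: "Namespace" paired with its value.
	if f, err := kvFields("Namespace", "test-operator"); err == nil {
		fmt.Println("ok:", f["Namespace"])
	}
	// Buggy call shape from the trace: a bare value with no key.
	if _, err := kvFields("test-operator"); err != nil {
		fmt.Println("error:", err)
	}
}
```

In zap's default configuration DPANIC only logs in production, but the helm-operator binary was built with development options, so the check escalates to the panic shown.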

Full log

Add CI workflow

  • lint dockerfiles and shell scripts
  • push images to Docker Hub & Quay
  • push images to Red Hat Certification Registry (TBC)

remove cilium-olm-base image

Certification requires labels to be set directly on the image rather than inherited from a base image, so setting them should just be incorporated into scripts/add-release.sh.

Re-enable BPF masquerade

BPF masquerade was disabled to rule it out as a source of masquerade-related problems. Testing demonstrates it can be re-enabled, making the OpenShift configuration less special.
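Re-enabling it amounts to flipping the corresponding value in the operator's CiliumConfig CR (apiVersion and kind as watched by the operator in the logs above; the `bpf.masquerade` key is assumed to mirror Cilium's Helm value of the same name):

```yaml
apiVersion: cilium.io/v1alpha1
kind: CiliumConfig
metadata:
  name: cilium
  namespace: cilium
spec:
  bpf:
    # assumed Helm-value passthrough; set to false while debugging
    masquerade: true
```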

scan failure: marketplace-operator-catalog-initialization-bundle-image

===== Test: operator-catalog-initialization-bundle-image =====

 The operator FAILED the index image build test.
Build state history for v4.5:
2021-01-18T14:00:04.019723Z - failed - Push to cvpops in the legacy app registry was unsucessful: Failed to push manifest: package exists already
2021-01-18T13:59:44.895621Z - in_progress - Creating the manifest list
2021-01-18T13:57:32.123910Z - in_progress - Checking if bundles are already present in index image
2021-01-18T13:57:29.687433Z - in_progress - Building the index image for the following arches: amd64, ppc64le, s390x
2021-01-18T13:57:27.227627Z - in_progress - Backport legacy support will be forced
2021-01-18T13:57:17.016487Z - in_progress - Resolving the container images
2021-01-18T13:57:14.295871Z - in_progress - Resolving the bundles
2021-01-18T13:57:11.858348Z - in_progress - The request was initiated

Build state history for v4.6:
2021-01-18T14:03:42.307565Z - complete - The operator bundle(s) were successfully added to the index image
2021-01-18T14:03:37.195346Z - in_progress - Creating the manifest list
2021-01-18T14:00:46.885283Z - in_progress - Checking if bundles are already present in index image
2021-01-18T14:00:44.453699Z - in_progress - Building the index image for the following arches: amd64, ppc64le, s390x
2021-01-18T14:00:33.520952Z - in_progress - Resolving the container images
2021-01-18T14:00:30.655846Z - in_progress - Resolving the bundles
2021-01-18T14:00:23.776066Z - in_progress - The request was initiated



---------------------------------------
IIB Build Logs for the build ID - 39918 - OCP version: v4.5:

2021-01-18 08:57:14,031 iib.workers.tasks.build INFO build._cleanup Removing all existing container images
2021-01-18 08:57:14,031 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman rmi --all --force"
2021-01-18 08:57:14,232 iib.workers.tasks.utils DEBUG utils.reset_docker_config Removing the Docker config at /home/iib-worker-cvp-parallel-1/.docker/config.json
2021-01-18 08:57:14,232 iib.workers.tasks.utils DEBUG utils.reset_docker_config Creating a symlink from /home/iib-worker-cvp-parallel-1/.docker/config.json.template to /home/iib-worker-cvp-parallel-1/.docker/config.json
2021-01-18 08:57:14,233 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 39918 to "in_progress" with the reason "Resolving the bundles"
2021-01-18 08:57:14,233 iib.workers.api_utils INFO api_utils.update_request Patching the request 39918 with {'state': 'in_progress', 'state_reason': 'Resolving the bundles'}
2021-01-18 08:57:16,087 iib.workers.tasks.build INFO build._get_resolved_bundles Resolving bundles registry-proxy.engineering.redhat.com/rh-osbs/iib:39913
2021-01-18 08:57:16,087 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/iib:39913 --raw"
2021-01-18 08:57:16,469 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:e7c5b09ee788e1a428d6109ce937efe9b08c11ecc6ac977b040a139d11df9bec
2021-01-18 08:57:16,470 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:e7c5b09ee788e1a428d6109ce937efe9b08c11ecc6ac977b040a139d11df9bec --config"
2021-01-18 08:57:16,973 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 39918 to "in_progress" with the reason "Resolving the container images"
2021-01-18 08:57:16,974 iib.workers.api_utils INFO api_utils.update_request Patching the request 39918 with {'state': 'in_progress', 'state_reason': 'Resolving the container images'}
2021-01-18 08:57:18,936 iib.workers.tasks.utils DEBUG utils.set_registry_token Not changing the Docker configuration since no overwrite_from_index_token was provided
2021-01-18 08:57:18,936 iib.workers.tasks.build DEBUG build._get_resolved_image Resolving registry.redhat.io/redhat/redhat-marketplace-index:v4.5
2021-01-18 08:57:18,937 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.5 --raw"
2021-01-18 08:57:20,006 iib.workers.tasks.build DEBUG build._get_resolved_image registry.redhat.io/redhat/redhat-marketplace-index:v4.5 resolved to registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84
2021-01-18 08:57:20,007 iib.workers.tasks.build DEBUG build._get_image_arches Get the available arches for registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84
2021-01-18 08:57:20,007 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84 --raw"
2021-01-18 08:57:21,061 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of com.redhat.index.delivery.version from registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84
2021-01-18 08:57:21,062 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84
2021-01-18 08:57:21,062 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84 --config"
2021-01-18 08:57:23,725 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of com.redhat.index.delivery.distribution_scope from registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84
2021-01-18 08:57:23,725 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84
2021-01-18 08:57:23,725 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84 --config"
2021-01-18 08:57:25,875 iib.workers.tasks.build DEBUG build._prepare_request_for_build Set to build the index image for the following arches: amd64, ppc64le, s390x
2021-01-18 08:57:25,876 iib.workers.tasks.build DEBUG build._get_resolved_image Resolving registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.5
2021-01-18 08:57:25,876 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.5 --raw"
2021-01-18 08:57:26,233 iib.workers.tasks.build DEBUG build._get_resolved_image registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.5 resolved to registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:0fe67e2c24ce9850a37ba22802a66fb62846c3c568f899c9315cb31f2a24b47c
2021-01-18 08:57:26,233 iib.workers.tasks.build DEBUG build._get_image_arches Get the available arches for registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:0fe67e2c24ce9850a37ba22802a66fb62846c3c568f899c9315cb31f2a24b47c
2021-01-18 08:57:26,234 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:0fe67e2c24ce9850a37ba22802a66fb62846c3c568f899c9315cb31f2a24b47c --raw"
2021-01-18 08:57:26,559 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of operators.operatorframework.io.bundle.package.v1 from registry-proxy.engineering.redhat.com/rh-osbs/iib:39913
2021-01-18 08:57:26,559 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry-proxy.engineering.redhat.com/rh-osbs/iib:39913
2021-01-18 08:57:26,560 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/iib:39913 --config"
2021-01-18 08:57:27,175 iib.workers.tasks.build INFO build.handle_add_request Checking if interacting with the legacy app registry is required
2021-01-18 08:57:27,176 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 39918 to "in_progress" with the reason "Backport legacy support will be forced"
2021-01-18 08:57:27,176 iib.workers.api_utils INFO api_utils.update_request Patching the request 39918 with {'state': 'in_progress', 'state_reason': 'Backport legacy support will be forced'}
2021-01-18 08:57:29,190 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:e7c5b09ee788e1a428d6109ce937efe9b08c11ecc6ac977b040a139d11df9bec
2021-01-18 08:57:29,191 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:e7c5b09ee788e1a428d6109ce937efe9b08c11ecc6ac977b040a139d11df9bec --config"
2021-01-18 08:57:29,653 iib.workers.api_utils INFO api_utils.update_request Patching the request 39918 with {'binary_image': 'registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.5', 'binary_image_resolved': 'registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:0fe67e2c24ce9850a37ba22802a66fb62846c3c568f899c9315cb31f2a24b47c', 'state': 'in_progress', 'state_reason': 'Building the index image for the following arches: amd64, ppc64le, s390x', 'bundle_mapping': {'cilium-rhmp': ['registry-proxy.engineering.redhat.com/rh-osbs/iib:39913']}, 'from_index_resolved': 'registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84'}
2021-01-18 08:57:32,083 iib.workers.tasks.build INFO build.handle_add_request Checking if bundles are already present in index image
2021-01-18 08:57:32,083 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 39918 to "in_progress" with the reason "Checking if bundles are already present in index image"
2021-01-18 08:57:32,083 iib.workers.api_utils INFO api_utils.update_request Patching the request 39918 with {'state': 'in_progress', 'state_reason': 'Checking if bundles are already present in index image'}
2021-01-18 08:57:34,055 iib.workers.tasks.utils DEBUG utils.set_registry_token Not changing the Docker configuration since no overwrite_from_index_token was provided
2021-01-18 08:57:34,055 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of operators.operatorframework.io.index.database.v1 from registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84
2021-01-18 08:57:34,055 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84
2021-01-18 08:57:34,055 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84 --config"
2021-01-18 08:57:36,038 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman create registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84 unused"
2021-01-18 08:57:50,213 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman cp 3935041e1736e82acd65d14a56aa707e852f1db03e32f5664e09af08e7b8e3a7:/database/index.db /tmp/iib-7ml1dxb9"
2021-01-18 08:57:52,006 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman rm 3935041e1736e82acd65d14a56aa707e852f1db03e32f5664e09af08e7b8e3a7"
2021-01-18 08:57:53,477 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "grpcurl -plaintext localhost:50051 list api.Registry"
2021-01-18 08:57:53,535 iib.workers.tasks.build DEBUG build._serve_index_registry_at_port Started the command "opm registry serve -p 50051 -d /tmp/iib-7ml1dxb9/index.db -t /dev/null"
2021-01-18 08:57:53,536 iib.workers.tasks.build INFO build._serve_index_registry_at_port Index registry service has been initialized.
2021-01-18 08:57:53,536 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "grpcurl -plaintext localhost:50051 api.Registry/ListBundles"
2021-01-18 08:57:56,282 iib.workers.tasks.build INFO build._opm_index_add Generating the database file with the following bundle(s): registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:e7c5b09ee788e1a428d6109ce937efe9b08c11ecc6ac977b040a139d11df9bec
2021-01-18 08:57:56,282 iib.workers.tasks.build INFO build._opm_index_add Using the existing database from registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84
2021-01-18 08:57:56,285 iib.workers.tasks.utils DEBUG utils.set_registry_token Not changing the Docker configuration since no overwrite_from_index_token was provided
2021-01-18 08:57:56,285 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "opm index add --generate --bundles registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:e7c5b09ee788e1a428d6109ce937efe9b08c11ecc6ac977b040a139d11df9bec --binary-image registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:0fe67e2c24ce9850a37ba22802a66fb62846c3c568f899c9315cb31f2a24b47c --from-index registry.redhat.io/redhat/redhat-marketplace-index@sha256:7a3e1857beb822b254fc509b06589711bddbd5ff455dd3192f987399f369cb84"
2021-01-18 08:58:13,095 iib.workers.tasks.build DEBUG build._add_label_to_index Added the following line to index.Dockerfile: LABEL com.redhat.index.delivery.version="v4.5"
2021-01-18 08:58:13,096 iib.workers.tasks.build DEBUG build._add_label_to_index Added the following line to index.Dockerfile: LABEL com.redhat.index.delivery.distribution_scope="prod"
2021-01-18 08:58:13,096 iib.workers.tasks.build INFO build._build_image Building the container image with the index.Dockerfile dockerfile for arch amd64 and tagging it as iib-build:39918-amd64
2021-01-18 08:58:13,096 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "buildah bud --no-cache --override-arch amd64 -t iib-build:39918-amd64 -f /tmp/iib-7ml1dxb9/index.Dockerfile"
2021-01-18 08:58:32,927 iib.workers.tasks.build INFO build._push_image Pushing the container image iib-build:39918-amd64 to docker://quay.io/rh-osbs/iib:39918-amd64
2021-01-18 08:58:32,929 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman push -q iib-build:39918-amd64 docker://quay.io/rh-osbs/iib:39918-amd64"
2021-01-18 08:58:36,343 iib.workers.tasks.build DEBUG build._push_image Verifying that docker://quay.io/rh-osbs/iib:39918-amd64 was pushed as a v2 manifest due to RHBZ#1810768
2021-01-18 08:58:36,344 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://quay.io/rh-osbs/iib:39918-amd64 --raw"
2021-01-18 08:58:36,821 iib.workers.tasks.build WARNING build._push_image The manifest for docker://quay.io/rh-osbs/iib:39918-amd64 ended up using schema version 1 due to RHBZ#1810768. Manually fixing it with skopeo.
2021-01-18 08:58:36,821 iib.workers.tasks.build DEBUG build._skopeo_copy Copying the container image docker://quay.io/rh-osbs/iib:39918-amd64 to docker://quay.io/rh-osbs/iib:39918-amd64
2021-01-18 08:58:36,822 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s copy --format v2s2 docker://quay.io/rh-osbs/iib:39918-amd64 docker://quay.io/rh-osbs/iib:39918-amd64"
2021-01-18 08:58:43,017 iib.workers.tasks.build INFO build._build_image Building the container image with the index.Dockerfile dockerfile for arch ppc64le and tagging it as iib-build:39918-ppc64le
2021-01-18 08:58:43,018 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "buildah bud --no-cache --override-arch ppc64le -t iib-build:39918-ppc64le -f /tmp/iib-7ml1dxb9/index.Dockerfile"
2021-01-18 08:59:03,495 iib.workers.tasks.build INFO build._push_image Pushing the container image iib-build:39918-ppc64le to docker://quay.io/rh-osbs/iib:39918-ppc64le
2021-01-18 08:59:03,495 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman push -q iib-build:39918-ppc64le docker://quay.io/rh-osbs/iib:39918-ppc64le"
2021-01-18 08:59:11,778 iib.workers.tasks.build DEBUG build._push_image Verifying that docker://quay.io/rh-osbs/iib:39918-ppc64le was pushed as a v2 manifest due to RHBZ#1810768
2021-01-18 08:59:11,779 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://quay.io/rh-osbs/iib:39918-ppc64le --raw"
2021-01-18 08:59:12,151 iib.workers.tasks.build WARNING build._push_image The manifest for docker://quay.io/rh-osbs/iib:39918-ppc64le ended up using schema version 1 due to RHBZ#1810768. Manually fixing it with skopeo.
2021-01-18 08:59:12,152 iib.workers.tasks.build DEBUG build._skopeo_copy Copying the container image docker://quay.io/rh-osbs/iib:39918-ppc64le to docker://quay.io/rh-osbs/iib:39918-ppc64le
2021-01-18 08:59:12,152 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s copy --format v2s2 docker://quay.io/rh-osbs/iib:39918-ppc64le docker://quay.io/rh-osbs/iib:39918-ppc64le"
2021-01-18 08:59:15,071 iib.workers.tasks.build INFO build._build_image Building the container image with the index.Dockerfile dockerfile for arch s390x and tagging it as iib-build:39918-s390x
2021-01-18 08:59:15,072 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "buildah bud --no-cache --override-arch s390x -t iib-build:39918-s390x -f /tmp/iib-7ml1dxb9/index.Dockerfile"
2021-01-18 08:59:35,135 iib.workers.tasks.build INFO build._push_image Pushing the container image iib-build:39918-s390x to docker://quay.io/rh-osbs/iib:39918-s390x
2021-01-18 08:59:35,135 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman push -q iib-build:39918-s390x docker://quay.io/rh-osbs/iib:39918-s390x"
2021-01-18 08:59:38,470 iib.workers.tasks.build DEBUG build._push_image Verifying that docker://quay.io/rh-osbs/iib:39918-s390x was pushed as a v2 manifest due to RHBZ#1810768
2021-01-18 08:59:38,470 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://quay.io/rh-osbs/iib:39918-s390x --raw"
2021-01-18 08:59:38,817 iib.workers.tasks.build WARNING build._push_image The manifest for docker://quay.io/rh-osbs/iib:39918-s390x ended up using schema version 1 due to RHBZ#1810768. Manually fixing it with skopeo.
2021-01-18 08:59:38,817 iib.workers.tasks.build DEBUG build._skopeo_copy Copying the container image docker://quay.io/rh-osbs/iib:39918-s390x to docker://quay.io/rh-osbs/iib:39918-s390x
2021-01-18 08:59:38,817 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s copy --format v2s2 docker://quay.io/rh-osbs/iib:39918-s390x docker://quay.io/rh-osbs/iib:39918-s390x"
2021-01-18 08:59:44,831 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 39918 to "in_progress" with the reason "Creating the manifest list"
2021-01-18 08:59:44,831 iib.workers.api_utils INFO api_utils.update_request Patching the request 39918 with {'state': 'in_progress', 'state_reason': 'Creating the manifest list'}
2021-01-18 08:59:47,089 iib.workers.tasks.build INFO build._create_and_push_manifest_list Creating the manifest list quay.io/rh-osbs/iib:39918
2021-01-18 08:59:47,090 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Adding the manifest quay.io/rh-osbs/iib:39918-amd64 to the manifest list quay.io/rh-osbs/iib:39918
2021-01-18 08:59:47,090 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Adding the manifest quay.io/rh-osbs/iib:39918-ppc64le to the manifest list quay.io/rh-osbs/iib:39918
2021-01-18 08:59:47,090 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Adding the manifest quay.io/rh-osbs/iib:39918-s390x to the manifest list quay.io/rh-osbs/iib:39918
2021-01-18 08:59:47,090 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Created the manifest configuration with the following content:
image: quay.io/rh-osbs/iib:39918
manifests:
- image: quay.io/rh-osbs/iib:39918-amd64
  platform:
    architecture: amd64
    os: linux
- image: quay.io/rh-osbs/iib:39918-ppc64le
  platform:
    architecture: ppc64le
    os: linux
- image: quay.io/rh-osbs/iib:39918-s390x
  platform:
    architecture: s390x
    os: linux

2021-01-18 08:59:47,091 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "manifest-tool push from-spec /tmp/iib-ccl2xtvx/manifest.yaml"
2021-01-18 08:59:48,644 iib.workers.tasks.legacy INFO legacy._opm_index_export Generating the backported operator for package: cilium-rhmp
2021-01-18 08:59:48,644 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "opm index export --index quay.io/rh-osbs/iib:39918 --package cilium-rhmp --download-folder cilium-rhmp"
2021-01-18 09:00:02,046 iib.workers.tasks.legacy INFO legacy._verify_package_info Verifying package_name cilium-rhmp
2021-01-18 09:00:02,053 iib.workers.tasks.legacy INFO legacy._push_package_manifest Files are {'file': ('/tmp/iib-5s37942w/cilium-rhmp/manifests.zip', <_io.BufferedReader name='/tmp/iib-5s37942w/cilium-rhmp/manifests.zip'>)}
2021-01-18 09:00:03,955 iib.workers.tasks.legacy ERROR legacy._push_package_manifest Request to OMPS failed: {"error":"QuayCourierError","message":"Failed to push manifest: package exists already","quay_response":{"error":{"code":"package-exists","details":{},"message":"package exists already"}},"status":500}



---------------------------------------
IIB Build Logs for the build ID - 39926 - OCP version: v4.6:

2021-01-18 09:00:25,909 iib.workers.tasks.build INFO build._cleanup Removing all existing container images
2021-01-18 09:00:25,909 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman rmi --all --force"
2021-01-18 09:00:30,611 iib.workers.tasks.utils DEBUG utils.reset_docker_config Removing the Docker config at /home/iib-worker-cvp-parallel-3/.docker/config.json
2021-01-18 09:00:30,612 iib.workers.tasks.utils DEBUG utils.reset_docker_config Creating a symlink from /home/iib-worker-cvp-parallel-3/.docker/config.json.template to /home/iib-worker-cvp-parallel-3/.docker/config.json
2021-01-18 09:00:30,612 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 39926 to "in_progress" with the reason "Resolving the bundles"
2021-01-18 09:00:30,612 iib.workers.api_utils INFO api_utils.update_request Patching the request 39926 with {'state': 'in_progress', 'state_reason': 'Resolving the bundles'}
2021-01-18 09:00:32,636 iib.workers.tasks.build INFO build._get_resolved_bundles Resolving bundles registry-proxy.engineering.redhat.com/rh-osbs/iib:39913
2021-01-18 09:00:32,637 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/iib:39913 --raw"
2021-01-18 09:00:32,996 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:e7c5b09ee788e1a428d6109ce937efe9b08c11ecc6ac977b040a139d11df9bec
2021-01-18 09:00:32,996 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:e7c5b09ee788e1a428d6109ce937efe9b08c11ecc6ac977b040a139d11df9bec --config"
2021-01-18 09:00:33,482 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 39926 to "in_progress" with the reason "Resolving the container images"
2021-01-18 09:00:33,482 iib.workers.api_utils INFO api_utils.update_request Patching the request 39926 with {'state': 'in_progress', 'state_reason': 'Resolving the container images'}
2021-01-18 09:00:35,538 iib.workers.tasks.utils DEBUG utils.set_registry_token Not changing the Docker configuration since no overwrite_from_index_token was provided
2021-01-18 09:00:35,538 iib.workers.tasks.build DEBUG build._get_resolved_image Resolving registry.redhat.io/redhat/redhat-marketplace-index:v4.6
2021-01-18 09:00:35,539 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.6 --raw"
2021-01-18 09:00:36,534 iib.workers.tasks.build DEBUG build._get_resolved_image registry.redhat.io/redhat/redhat-marketplace-index:v4.6 resolved to registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84
2021-01-18 09:00:36,534 iib.workers.tasks.build DEBUG build._get_image_arches Get the available arches for registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84
2021-01-18 09:00:36,534 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84 --raw"
2021-01-18 09:00:37,764 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of com.redhat.index.delivery.version from registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84
2021-01-18 09:00:37,764 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84
2021-01-18 09:00:37,764 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84 --config"
2021-01-18 09:00:39,958 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of com.redhat.index.delivery.distribution_scope from registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84
2021-01-18 09:00:39,958 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84
2021-01-18 09:00:39,958 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84 --config"
2021-01-18 09:00:42,499 iib.workers.tasks.build DEBUG build._prepare_request_for_build Set to build the index image for the following arches: amd64, ppc64le, s390x
2021-01-18 09:00:42,500 iib.workers.tasks.build DEBUG build._get_resolved_image Resolving registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.6
2021-01-18 09:00:42,500 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.6 --raw"
2021-01-18 09:00:43,127 iib.workers.tasks.build DEBUG build._get_resolved_image registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.6 resolved to registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:490035a0394c1f6aa32f193b61a28f983801ef467339729d4a724bb4b3ec7f7e
2021-01-18 09:00:43,127 iib.workers.tasks.build DEBUG build._get_image_arches Get the available arches for registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:490035a0394c1f6aa32f193b61a28f983801ef467339729d4a724bb4b3ec7f7e
2021-01-18 09:00:43,128 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:490035a0394c1f6aa32f193b61a28f983801ef467339729d4a724bb4b3ec7f7e --raw"
2021-01-18 09:00:43,695 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of operators.operatorframework.io.bundle.package.v1 from registry-proxy.engineering.redhat.com/rh-osbs/iib:39913
2021-01-18 09:00:43,695 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry-proxy.engineering.redhat.com/rh-osbs/iib:39913
2021-01-18 09:00:43,696 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry-proxy.engineering.redhat.com/rh-osbs/iib:39913 --config"
2021-01-18 09:00:44,385 iib.workers.tasks.build INFO build.handle_add_request Checking if interacting with the legacy app registry is required
2021-01-18 09:00:44,386 iib.workers.tasks.legacy INFO legacy.get_legacy_support_packages Backport legacy support is disabled for v4.6
2021-01-18 09:00:44,386 iib.workers.api_utils INFO api_utils.update_request Patching the request 39926 with {'binary_image': 'registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry:v4.6', 'binary_image_resolved': 'registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:490035a0394c1f6aa32f193b61a28f983801ef467339729d4a724bb4b3ec7f7e', 'state': 'in_progress', 'state_reason': 'Building the index image for the following arches: amd64, ppc64le, s390x', 'bundle_mapping': {'cilium-rhmp': ['registry-proxy.engineering.redhat.com/rh-osbs/iib:39913']}, 'from_index_resolved': 'registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84'}
2021-01-18 09:00:46,836 iib.workers.tasks.build INFO build.handle_add_request Checking if bundles are already present in index image
2021-01-18 09:00:46,837 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 39926 to "in_progress" with the reason "Checking if bundles are already present in index image"
2021-01-18 09:00:46,837 iib.workers.api_utils INFO api_utils.update_request Patching the request 39926 with {'state': 'in_progress', 'state_reason': 'Checking if bundles are already present in index image'}
2021-01-18 09:00:49,006 iib.workers.tasks.utils DEBUG utils.set_registry_token Not changing the Docker configuration since no overwrite_from_index_token was provided
2021-01-18 09:00:49,006 iib.workers.tasks.build DEBUG build.get_image_label Getting the label of operators.operatorframework.io.index.database.v1 from registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84
2021-01-18 09:00:49,006 iib.workers.tasks.utils DEBUG utils.get_image_labels Getting the labels from docker://registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84
2021-01-18 09:00:49,007 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84 --config"
2021-01-18 09:00:51,406 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman create registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84 unused"
2021-01-18 09:01:11,629 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman cp 307cfce15ce8fc67c64fa711ae6998b5f2288a881955be661df0ad868368f8c8:/database/index.db /tmp/iib-1jgbmvgj"
2021-01-18 09:01:12,452 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman rm 307cfce15ce8fc67c64fa711ae6998b5f2288a881955be661df0ad868368f8c8"
2021-01-18 09:01:13,717 iib.workers.tasks.build INFO build._serve_index_registry Port 50051 is in use, trying another.
2021-01-18 09:01:14,728 iib.workers.tasks.build INFO build._serve_index_registry Port 50052 is in use, trying another.
2021-01-18 09:01:15,736 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "grpcurl -plaintext localhost:50053 list api.Registry"
2021-01-18 09:01:15,758 iib.workers.tasks.build DEBUG build._serve_index_registry_at_port Started the command "opm registry serve -p 50053 -d /tmp/iib-1jgbmvgj/index.db -t /dev/null"
2021-01-18 09:01:15,758 iib.workers.tasks.build INFO build._serve_index_registry_at_port Index registry service has been initialized.
2021-01-18 09:01:15,758 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "grpcurl -plaintext localhost:50053 api.Registry/ListBundles"
2021-01-18 09:01:18,020 iib.workers.tasks.build INFO build._opm_index_add Generating the database file with the following bundle(s): registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:e7c5b09ee788e1a428d6109ce937efe9b08c11ecc6ac977b040a139d11df9bec
2021-01-18 09:01:18,021 iib.workers.tasks.build INFO build._opm_index_add Using the existing database from registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84
2021-01-18 09:01:18,021 iib.workers.tasks.utils DEBUG utils.set_registry_token Not changing the Docker configuration since no overwrite_from_index_token was provided
2021-01-18 09:01:18,021 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "opm index add --generate --bundles registry-proxy.engineering.redhat.com/rh-osbs/iib@sha256:e7c5b09ee788e1a428d6109ce937efe9b08c11ecc6ac977b040a139d11df9bec --binary-image registry-proxy.engineering.redhat.com/rh-osbs/openshift-ose-operator-registry@sha256:490035a0394c1f6aa32f193b61a28f983801ef467339729d4a724bb4b3ec7f7e --from-index registry.redhat.io/redhat/redhat-marketplace-index@sha256:3955e25ee8e353eeef477f9b7884c42e1b5b1f8821abeec02a1b1c5159c0ad84"
2021-01-18 09:01:35,871 iib.workers.tasks.build DEBUG build._add_label_to_index Added the following line to index.Dockerfile: LABEL com.redhat.index.delivery.version="v4.6"
2021-01-18 09:01:35,873 iib.workers.tasks.build DEBUG build._add_label_to_index Added the following line to index.Dockerfile: LABEL com.redhat.index.delivery.distribution_scope="prod"
2021-01-18 09:01:35,873 iib.workers.tasks.build INFO build._build_image Building the container image with the index.Dockerfile dockerfile for arch amd64 and tagging it as iib-build:39926-amd64
2021-01-18 09:01:35,873 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "buildah bud --no-cache --override-arch amd64 -t iib-build:39926-amd64 -f /tmp/iib-1jgbmvgj/index.Dockerfile"
2021-01-18 09:02:19,285 iib.workers.tasks.build INFO build._push_image Pushing the container image iib-build:39926-amd64 to docker://quay.io/rh-osbs/iib:39926-amd64
2021-01-18 09:02:19,286 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman push -q iib-build:39926-amd64 docker://quay.io/rh-osbs/iib:39926-amd64"
2021-01-18 09:02:22,679 iib.workers.tasks.build DEBUG build._push_image Verifying that docker://quay.io/rh-osbs/iib:39926-amd64 was pushed as a v2 manifest due to RHBZ#1810768
2021-01-18 09:02:22,679 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://quay.io/rh-osbs/iib:39926-amd64 --raw"
2021-01-18 09:02:23,180 iib.workers.tasks.build WARNING build._push_image The manifest for docker://quay.io/rh-osbs/iib:39926-amd64 ended up using schema version 1 due to RHBZ#1810768. Manually fixing it with skopeo.
2021-01-18 09:02:23,181 iib.workers.tasks.build DEBUG build._skopeo_copy Copying the container image docker://quay.io/rh-osbs/iib:39926-amd64 to docker://quay.io/rh-osbs/iib:39926-amd64
2021-01-18 09:02:23,181 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s copy --format v2s2 docker://quay.io/rh-osbs/iib:39926-amd64 docker://quay.io/rh-osbs/iib:39926-amd64"
2021-01-18 09:02:25,920 iib.workers.tasks.build INFO build._build_image Building the container image with the index.Dockerfile dockerfile for arch ppc64le and tagging it as iib-build:39926-ppc64le
2021-01-18 09:02:25,920 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "buildah bud --no-cache --override-arch ppc64le -t iib-build:39926-ppc64le -f /tmp/iib-1jgbmvgj/index.Dockerfile"
2021-01-18 09:02:57,626 iib.workers.tasks.build INFO build._push_image Pushing the container image iib-build:39926-ppc64le to docker://quay.io/rh-osbs/iib:39926-ppc64le
2021-01-18 09:02:57,628 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman push -q iib-build:39926-ppc64le docker://quay.io/rh-osbs/iib:39926-ppc64le"
2021-01-18 09:03:01,040 iib.workers.tasks.build DEBUG build._push_image Verifying that docker://quay.io/rh-osbs/iib:39926-ppc64le was pushed as a v2 manifest due to RHBZ#1810768
2021-01-18 09:03:01,041 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://quay.io/rh-osbs/iib:39926-ppc64le --raw"
2021-01-18 09:03:01,354 iib.workers.tasks.build WARNING build._push_image The manifest for docker://quay.io/rh-osbs/iib:39926-ppc64le ended up using schema version 1 due to RHBZ#1810768. Manually fixing it with skopeo.
2021-01-18 09:03:01,355 iib.workers.tasks.build DEBUG build._skopeo_copy Copying the container image docker://quay.io/rh-osbs/iib:39926-ppc64le to docker://quay.io/rh-osbs/iib:39926-ppc64le
2021-01-18 09:03:01,355 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s copy --format v2s2 docker://quay.io/rh-osbs/iib:39926-ppc64le docker://quay.io/rh-osbs/iib:39926-ppc64le"
2021-01-18 09:03:03,737 iib.workers.tasks.build INFO build._build_image Building the container image with the index.Dockerfile dockerfile for arch s390x and tagging it as iib-build:39926-s390x
2021-01-18 09:03:03,737 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "buildah bud --no-cache --override-arch s390x -t iib-build:39926-s390x -f /tmp/iib-1jgbmvgj/index.Dockerfile"
2021-01-18 09:03:29,330 iib.workers.tasks.build INFO build._push_image Pushing the container image iib-build:39926-s390x to docker://quay.io/rh-osbs/iib:39926-s390x
2021-01-18 09:03:29,335 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "podman push -q iib-build:39926-s390x docker://quay.io/rh-osbs/iib:39926-s390x"
2021-01-18 09:03:33,586 iib.workers.tasks.build DEBUG build._push_image Verifying that docker://quay.io/rh-osbs/iib:39926-s390x was pushed as a v2 manifest due to RHBZ#1810768
2021-01-18 09:03:33,587 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s inspect docker://quay.io/rh-osbs/iib:39926-s390x --raw"
2021-01-18 09:03:33,988 iib.workers.tasks.build WARNING build._push_image The manifest for docker://quay.io/rh-osbs/iib:39926-s390x ended up using schema version 1 due to RHBZ#1810768. Manually fixing it with skopeo.
2021-01-18 09:03:33,989 iib.workers.tasks.build DEBUG build._skopeo_copy Copying the container image docker://quay.io/rh-osbs/iib:39926-s390x to docker://quay.io/rh-osbs/iib:39926-s390x
2021-01-18 09:03:33,989 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "skopeo --command-timeout 300s copy --format v2s2 docker://quay.io/rh-osbs/iib:39926-s390x docker://quay.io/rh-osbs/iib:39926-s390x"
2021-01-18 09:03:37,122 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 39926 to "in_progress" with the reason "Creating the manifest list"
2021-01-18 09:03:37,122 iib.workers.api_utils INFO api_utils.update_request Patching the request 39926 with {'state': 'in_progress', 'state_reason': 'Creating the manifest list'}
2021-01-18 09:03:39,246 iib.workers.tasks.build INFO build._create_and_push_manifest_list Creating the manifest list quay.io/rh-osbs/iib:39926
2021-01-18 09:03:39,247 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Adding the manifest quay.io/rh-osbs/iib:39926-amd64 to the manifest list quay.io/rh-osbs/iib:39926
2021-01-18 09:03:39,247 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Adding the manifest quay.io/rh-osbs/iib:39926-ppc64le to the manifest list quay.io/rh-osbs/iib:39926
2021-01-18 09:03:39,247 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Adding the manifest quay.io/rh-osbs/iib:39926-s390x to the manifest list quay.io/rh-osbs/iib:39926
2021-01-18 09:03:39,247 iib.workers.tasks.build DEBUG build._create_and_push_manifest_list Created the manifest configuration with the following content:
image: quay.io/rh-osbs/iib:39926
manifests:
- image: quay.io/rh-osbs/iib:39926-amd64
  platform:
    architecture: amd64
    os: linux
- image: quay.io/rh-osbs/iib:39926-ppc64le
  platform:
    architecture: ppc64le
    os: linux
- image: quay.io/rh-osbs/iib:39926-s390x
  platform:
    architecture: s390x
    os: linux

2021-01-18 09:03:39,247 iib.workers.tasks.utils DEBUG utils.run_cmd Running the command "manifest-tool push from-spec /tmp/iib-7mf6irkw/manifest.yaml"
2021-01-18 09:03:40,867 iib.workers.tasks.build INFO build._update_index_image_pull_spec Changed the index_image pull specification from quay.io/rh-osbs/iib:39926 to registry-proxy.engineering.redhat.com/rh-osbs/iib:39926
2021-01-18 09:03:40,867 iib.workers.api_utils INFO api_utils.update_request Patching the request 39926 with {'arches': ['amd64', 'ppc64le', 's390x'], 'index_image': 'registry-proxy.engineering.redhat.com/rh-osbs/iib:39926'}
2021-01-18 09:03:42,264 iib.workers.api_utils INFO api_utils.set_request_state Setting the state of request 39926 to "complete" with the reason "The operator bundle(s) were successfully added to the index image"
2021-01-18 09:03:42,264 iib.workers.api_utils INFO api_utils.update_request Patching the request 39926 with {'state': 'complete', 'state_reason': 'The operator bundle(s) were successfully added to the index image'}
200




-------------------
Execution Reference:

-> /cvp/cvp-isv-operator-bundle-image-validation-test/marketplace-ospid-e31ac831-7e72-42bb-baf9-f392ef7ea622-d702aa35-b95e-4884-9548-a8707c5dcf5d/d702aa35-b95e-4884-9548-a8707c5dcf5d/


bundle validation failure - missing CRD & unsupported media type

time="2020-12-08T21:04:55Z" level=debug msg="Debug logging is set"
time="2020-12-08T21:04:55Z" level=debug msg="Found manifests directory" bundle-dir=../../../../../test-operator container-tool=docker
time="2020-12-08T21:04:55Z" level=debug msg="Found metadata directory" bundle-dir=../../../../../test-operator container-tool=docker
time="2020-12-08T21:04:55Z" level=debug msg="Getting mediaType info from manifests directory" bundle-dir=../../../../../test-operator container-tool=docker
time="2020-12-08T21:04:55Z" level=info msg="Found annotations file" bundle-dir=../../../../../test-operator container-tool=docker
time="2020-12-08T21:04:55Z" level=info msg="Could not find optional dependencies file" bundle-dir=../../../../../test-operator container-tool=docker
time="2020-12-08T21:04:55Z" level=debug msg="Validating bundle contents" bundle-dir=../../../../../test-operator container-tool=docker
time="2020-12-08T21:04:55Z" level=debug msg="Validating /, Kind= \"\"" bundle-dir=../../../../../test-operator container-tool=docker
time="2020-12-08T21:04:55Z" level=error msg="Error: Value cilium.io/v1alpha1, Kind=CiliumConfig: owned CRD \"cilium.io/v1alpha1, Kind=CiliumConfig\" not found in bundle \"cilium.v1.8.6\""
time="2020-12-08T21:04:55Z" level=warning msg="Warning: Value : (cilium.v1.8.6) example annotations not found"
time="2020-12-08T21:04:55Z" level=error msg="Error: Value /, Kind=: unsupported media type registry+v1 for bundle object"

Missing clusterPoolIPv4PodCIDR in v1.12.0 causes installation to fail

I've tried installing Cilium on OpenShift based on the documentation, but installation fails when using the ciliumconfig.v1.12.yaml configuration because it is missing the ipam.operator.clusterPoolIPv4PodCIDR section with the correct configuration for the default subnets in OpenShift.

Was this removed by design, to force people to make sure that their configuration matches?
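For reference, a minimal sketch of the section being asked about, assuming the OpenShift default cluster network of 10.128.0.0/14 with a /23 per-node mask (values are illustrative and must match the cluster's install-time network configuration):

```yaml
# Hypothetical CiliumConfig excerpt; the CIDR and mask size must match
# the clusterNetwork configured when the OpenShift cluster was installed.
apiVersion: cilium.io/v1alpha1
kind: CiliumConfig
metadata:
  name: cilium
  namespace: cilium
spec:
  ipam:
    mode: cluster-pool
    operator:
      clusterPoolIPv4PodCIDR: "10.128.0.0/14"
      clusterPoolIPv4MaskSize: "23"
```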

cilium-olm pod gets oom-killed

This issue references #32 - for us, the issue has not been fixed yet. We had to adjust the memory limits to 200Mi.

Can you please have a look at this again and maybe increase the memory limit even further?

Thanks :)
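For anyone else hitting this, a patch along these lines raises the limit; the namespace, deployment, and container names here are assumptions based on the repo defaults and should be checked against the actual manifests:

```yaml
# Hypothetical strategic-merge patch for the cilium-olm deployment;
# apply with e.g.:
#   kubectl -n cilium patch deployment cilium-olm --patch-file patch.yaml
spec:
  template:
    spec:
      containers:
        - name: operator   # container name assumed; verify with kubectl describe
          resources:
            limits:
              memory: 256Mi
            requests:
              memory: 128Mi
```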

new semver test failure in scorecard

This has failed for 1.9.7:

===== Test: operator-scorecard-tests =====

 The operator FAILED the scorecard criteria of passing all basic tests.

 

========== Output of CiliumConfig-cilium-openshift-default-scorecard-results.json =============

{
  "kind": "ScorecardOutput",
  "apiVersion": "osdk.openshift.io/v1alpha2",
  "metadata": {
    "creationTimestamp": null
  },
  "log": "time=\"2021-05-18T08:51:56Z\" level=info msg=\"Using config file: /home/jenkins/agent/workspace/cvp-isv-operator-metadata-validation-test/osdk-scorecard.yml\"\ntime=\"2021-05-18T08:51:56Z\" level=error msg=\"Plugin `Basic Tests` failed with error (error validating ClusterServiceVersion: Error: Value : (cilium.v1.9.7-0236092) metadata.name \\\"cilium.v1.9.7-0236092\\\" contains an invalid semver \\\"v1.9.7-0236092\\\"\\n)\"\ntime=\"2021-05-18T08:51:56Z\" level=error msg=\"Plugin `OLM Integration` failed with error (error validating ClusterServiceVersion: Error: Value : (cilium.v1.9.7-0236092) metadata.name \\\"cilium.v1.9.7-0236092\\\" contains an invalid semver \\\"v1.9.7-0236092\\\"\\n)\"\n",
  "results": []
}
 

======== Debug output of the operator container =========

 

2021-05-18T08:50:31.221Z	INFO	cmd	Version	{"Go Version": "go1.15.10", "GOOS": "linux", "GOARCH": "amd64", "helm-operator": "v1.4.0+git", "commit": "98f30d59ade2d911a7a8c76f0169a7de0dec37a0"}
2021-05-18T08:50:31.222Z	INFO	cmd	Watching single namespace.	{"Namespace": "test-operator"}
I0518 08:50:32.272953       1 request.go:655] Throttling request took 1.039339673s, request: GET:https://172.30.0.1:443/apis/operator.openshift.io/v1?timeout=32s
2021-05-18T08:50:33.432Z	INFO	controller-runtime.metrics	metrics server is starting to listen	{"addr": ":8080"}
2021-05-18T08:50:33.433Z	INFO	controller-runtime.injectors-warning	Injectors are deprecated, and will be removed in v0.10.x
2021-05-18T08:50:33.433Z	INFO	controller-runtime.injectors-warning	Injectors are deprecated, and will be removed in v0.10.x
2021-05-18T08:50:33.433Z	INFO	helm.controller	Watching resource	{"apiVersion": "cilium.io/v1alpha1", "kind": "CiliumConfig", "namespace": "test-operator", "reconcilePeriod": "1m0s"}
I0518 08:50:33.434003       1 leaderelection.go:243] attempting to acquire leader lease test-operator/cilium-olm...
2021-05-18T08:50:33.434Z	INFO	controller-runtime.manager	starting metrics server	{"path": "/metrics"}
I0518 08:50:33.450720       1 leaderelection.go:253] successfully acquired lease test-operator/cilium-olm
2021-05-18T08:50:33.450Z	INFO	controller-runtime.manager.controller.ciliumconfig-controller	Starting EventSource	{"source": "kind source: cilium.io/v1alpha1, Kind=CiliumConfig"}
2021-05-18T08:50:33.451Z	DEBUG	controller-runtime.manager.events	Normal	{"object": {"kind":"ConfigMap","namespace":"test-operator","name":"cilium-olm","uid":"89e4fd00-f19a-4813-af9b-a8de02f17618","apiVersion":"v1","resourceVersion":"24180"}, "reason": "LeaderElection", "message": "ip-10-0-154-165_e8481734-fd99-4be8-bbb7-043f1d6d6e69 became leader"}
2021-05-18T08:50:33.551Z	INFO	controller-runtime.manager.controller.ciliumconfig-controller	Starting Controller
2021-05-18T08:50:33.551Z	INFO	controller-runtime.manager.controller.ciliumconfig-controller	Starting workers	{"worker count": 2} 

======== Error output of the operator scorecard test =========

 

 Output of cilium-openshift-default-scorecard-errors.txt: 

time="2021-05-18T08:51:56Z" level=debug msg="Debug logging is set"



-------------------
Execution Reference:

-> /cvp/cvp-isv-operator-metadata-validation-test/certified-ospid-e31ac831-7e72-42bb-baf9-f392ef7ea622-28c062c4-fb0b-4cd2-9f53-fe3df9d9ec90/4bca9f15-3231-4efa-be44-4943214f0639/

ensure that operator bundles are published for OpenShift 4.7

It turns out that the bundle is currently only published for 4.5 & 4.6.

On 4.6 this works:

$ kubectl get  packagemanifests -n openshift-marketplace cilium 
NAME     CATALOG               AGE
cilium   Certified Operators   122m

But not on 4.7:

Error from server (NotFound): packagemanifests.packages.operators.coreos.com "cilium" not found

This is due to:

LABEL com.redhat.openshift.versions="v4.5,v4.6"
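Assuming the label is the only gate, extending it should make the bundle visible on 4.7; Red Hat's pipeline also accepts a range syntax, though the exact form used should be verified against the certification documentation:

```dockerfile
# Extend the supported-versions label to include OpenShift 4.7
# (range syntax assumed to be accepted by the certification pipeline).
LABEL com.redhat.openshift.versions="v4.5-v4.7"
```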

scan failure: certified-operator-catalog-initialization logs

===== Test: operator-catalog-initialization =====

 The operator FAILED the catalog initialization test.

 

======== Debug output of the operator-registry initializer command for release-4.2 =========

# github.com/mattn/go-sqlite3
sqlite3-binding.c: In function ‘sqlite3SelectNew’:
sqlite3-binding.c:123303:10: warning: function may return address of local variable [-Wreturn-local-addr]
123303 |   return pNew;
       |          ^~~~
sqlite3-binding.c:123263:10: note: declared here
123263 |   Select standin;
       |          ^~~~~~~

======== Debug output of the operator-registry initializer command for release-4.3 =========

# github.com/mattn/go-sqlite3
sqlite3-binding.c: In function ‘sqlite3SelectNew’:
sqlite3-binding.c:123303:10: warning: function may return address of local variable [-Wreturn-local-addr]
123303 |   return pNew;
       |          ^~~~
sqlite3-binding.c:123263:10: note: declared here
123263 |   Select standin;
       |          ^~~~~~~
time="2021-02-01T22:14:15Z" level=fatal msg="permissive mode disabled" error="error loading manifests from directory: [error checking provided apis in bundle : error decoding CRD: no kind \"CustomResourceDefinition\" is registered for version \"apiextensions.k8s.io/v1\" in scheme \"pkg/registry/bundle.go:15\", error adding operator bundle : error decoding CRD: no kind \"CustomResourceDefinition\" is registered for version \"apiextensions.k8s.io/v1\" in scheme \"pkg/registry/bundle.go:15\", error loading package into db: [FOREIGN KEY constraint failed, no default channel specified for cilium]]"
exit status 1

======== Debug output of the operator-registry initializer command for release-4.4 =========

# github.com/mattn/go-sqlite3
sqlite3-binding.c: In function ‘sqlite3SelectNew’:
sqlite3-binding.c:123303:10: warning: function may return address of local variable [-Wreturn-local-addr]
123303 |   return pNew;
       |          ^~~~
sqlite3-binding.c:123263:10: note: declared here
123263 |   Select standin;
       |          ^~~~~~~
time="2021-02-01T22:18:01Z" level=fatal msg="permissive mode disabled" error="error loading manifests from directory: [error checking provided apis in bundle : error decoding CRD: no kind \"CustomResourceDefinition\" is registered for version \"apiextensions.k8s.io/v1\" in scheme \"pkg/registry/bundle.go:15\", error adding operator bundle : error decoding CRD: no kind \"CustomResourceDefinition\" is registered for version \"apiextensions.k8s.io/v1\" in scheme \"pkg/registry/bundle.go:15\", error loading package into db: [FOREIGN KEY constraint failed, no default channel specified for cilium]]"
exit status 1

======== Debug output of the operator-registry initializer command for release-4.5 =========

# github.com/mattn/go-sqlite3
sqlite3-binding.c: In function ‘sqlite3SelectNew’:
sqlite3-binding.c:123303:10: warning: function may return address of local variable [-Wreturn-local-addr]
123303 |   return pNew;
       |          ^~~~
sqlite3-binding.c:123263:10: note: declared here
123263 |   Select standin;
       |          ^~~~~~~
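The release-4.3 and release-4.4 failures come from operator-registry builds that predate support for apiextensions.k8s.io/v1, so they cannot decode the bundle's CRDs. If those OpenShift releases must be supported, the CRDs would presumably need to be shipped in the legacy form — a sketch of the header only:

```yaml
# Hypothetical: the same CRD expressed with the legacy API version that
# old operator-registry releases can decode. Note v1beta1 is deprecated
# and removed in Kubernetes 1.22, so this only matters for old catalogs.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ciliumconfigs.cilium.io
```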


-------------------
Execution Reference:

-> /cvp/cvp-isv-operator-metadata-validation-test/certified-ospid-e31ac831-7e72-42bb-baf9-f392ef7ea622-3c06907c-655c-4465-94d5-c1ba6dafe42f/9dc0af89-488f-4fae-b8cb-ba99b7565918/

All level=info lines were dropped...

Allow the cilium-olm role to manage ingresses resources

Hello Team,

Env: OCP / cilium-olm v1.12.0

When hubble is enabled via the CiliumConfig, the hubble-ui pod starts failing in the OCP environment.

  hubble:
    enabled: true
    metrics:
      enabled:
      - dns:query;ignoreAAAA
      - drop
      - tcp
      - flow
      - icmp
      - http
      serviceMonitor:
        enabled: true
    tls:
      enabled: true
    relay:
      enabled: true
    ui:
      enabled: true
      ingress:
        enabled: true
        hosts:
          - hubble-ui-cilium.apps.xxx.xxx.xxx

The role does not grant enough permissions for "ingresses".
Adding the rule manually solves this, but it would be useful to have it enabled from the beginning.

$ oc edit role -n cilium cilium-olm -o yaml
# add following lines
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - '*'

Additional Information:
Maybe this can be solved by adding lines like this near line 346 in rbac.cue, but I'm not familiar with cue....

	{
		apiGroups: [
			"networking.k8s.io",
		]
		resources: [
			"ingresses",
		]
		verbs: [
			"*",
		]
	},

https://github.com/cilium/cilium-olm/blob/master/config/operator/rbac.cue#L346

Thank you,

scan failure: marketplace-operator-metadata-linting-bundle-image

===== Test: operator-metadata-linting-bundle-image =====

 

Operator SDK version:
-------------------------

operator-sdk version: "v1.3.0", commit: "1abf57985b43bf6a59dcd18147b3c574fa57d3f6", kubernetes version: "1.19.4", go version: "go1.15.5", GOOS: "linux", GOARCH: "amd64"


Validation output:
-------

time="2021-02-03T09:33:50Z" level=debug msg="Debug logging is set"
time="2021-02-03T09:33:50Z" level=debug msg="Found manifests directory" bundle-dir=../../../../../test-operator container-tool=docker
time="2021-02-03T09:33:50Z" level=debug msg="Found metadata directory" bundle-dir=../../../../../test-operator container-tool=docker
time="2021-02-03T09:33:50Z" level=debug msg="Getting mediaType info from manifests directory" bundle-dir=../../../../../test-operator container-tool=docker
time="2021-02-03T09:33:50Z" level=info msg="Found annotations file" bundle-dir=../../../../../test-operator container-tool=docker
time="2021-02-03T09:33:50Z" level=info msg="Could not find optional dependencies file" bundle-dir=../../../../../test-operator container-tool=docker
time="2021-02-03T09:33:50Z" level=debug msg="Validating bundle contents" bundle-dir=../../../../../test-operator container-tool=docker
time="2021-02-03T09:33:50Z" level=warning msg="Warning: Value : (cilium.v1.9.3) example annotations not found"
time="2021-02-03T09:33:50Z" level=error msg="Error: Field spec.versions, Value [{v1alpha1 true true false <nil> 0xc0008b0680 0xc0008f3bb0 []}]: spec.versions: Invalid value: []apiextensions.CustomResourceDefinitionVersion{apiextensions.CustomResourceDefinitionVersion{Name:\"v1alpha1\", Served:true, Storage:true, Deprecated:false, DeprecationWarning:(*string)(nil), Schema:(*apiextensions.CustomResourceValidation)(0xc0008b0680), Subresources:(*apiextensions.CustomResourceSubresources)(0xc0008f3bb0), AdditionalPrinterColumns:[]apiextensions.CustomResourceColumnDefinition(nil)}}: per-version schemas may not all be set to identical values (top-level validation should be used instead)"
time="2021-02-03T09:33:50Z" level=error msg="Error: Field spec.versions, Value [{v1alpha1 true true false <nil> 0xc0008b0680 0xc0008f3bb0 []}]: spec.versions: Invalid value: []apiextensions.CustomResourceDefinitionVersion{apiextensions.CustomResourceDefinitionVersion{Name:\"v1alpha1\", Served:true, Storage:true, Deprecated:false, DeprecationWarning:(*string)(nil), Schema:(*apiextensions.CustomResourceValidation)(0xc0008b0680), Subresources:(*apiextensions.CustomResourceSubresources)(0xc0008f3bb0), AdditionalPrinterColumns:[]apiextensions.CustomResourceColumnDefinition(nil)}}: per-version subresources may not all be set to identical values (top-level subresources should be used instead)"


return code:
------------

1



-------------------
Execution Reference:

-> /cvp/cvp-isv-operator-bundle-image-validation-test/marketplace-ospid-e31ac831-7e72-42bb-baf9-f392ef7ea622-8036634e-f0ed-4ced-bfba-c8bec7769d73/8036634e-f0ed-4ced-bfba-c8bec7769d73/

should the bundle reference the RH registry, or is quay.io sufficient?

It's not clear whether use of the RH registry is necessary. For now we can probably continue using quay.io and only use the Red Hat registry for certification scans. It's yet to be confirmed whether there is a non-technical requirement for using the RH registry.

[1.10.0] update RBAC for cilium operator

cilium-olm pod is running, but the operator keeps erroring:

2021-04-30T14:25:03.591Z	ERROR	helm.controller	Release failed	{"namespace": "cilium", "name": "cilium", "apiVersion": "cilium.io/v1alpha1", "kind": "CiliumConfig", "release": "cilium", "error": "failed to install release: clusterroles.rbac.authorization.k8s.io \"cilium-operator\" is forbidden: user \"system:serviceaccount:cilium:cilium-olm\" (groups=[\"system:serviceaccounts\" \"system:serviceaccounts:cilium\" \"system:authenticated\"]) is attempting to grant RBAC permissions not currently held:\n{APIGroups:[\"\"], Resources:[\"services/status\"], Verbs:[\"update\"]}"}

As a result, the operator is not able to install the Cilium chart.
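Since a service account can only grant RBAC permissions it already holds, a likely fix is to add the missing rule to the cilium-olm role itself; a sketch of the rule, taking the resource and verb straight from the error message:

```yaml
# Rule to add to the cilium-olm (Cluster)Role so it may grant the same
# permission when creating the cilium-operator ClusterRole.
- apiGroups:
    - ""
  resources:
    - services/status
  verbs:
    - update
```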

operator crashes on leader election timeout

This has been seen a few times, but doesn't appear to be a major issue. It's probably not a bad idea to disable leader election anyway, to avoid confusion with the operator doing a few restarts on bootstrap.

2021-01-25T13:56:44.183Z	DEBUG	predicate	Reconciling due to dependent resource update	{"name": "cilium-olm", "namespace": "cilium", "apiVersion": "v1", "kind": "ConfigMap"}
2021-01-25T13:56:44.944Z	DEBUG	predicate	Skipping reconciliation for dependent resource creation	{"name": "packageserver-service-system:auth-delegator", "namespace": "", "apiVersion": "rbac.authorization.k8s.io/v1", "kind": "ClusterRoleBinding"}
E0125 13:56:56.184083       1 leaderelection.go:320] error retrieving resource lock cilium/cilium-olm: Get "https://172.30.0.1:443/api/v1/namespaces/cilium/configmaps/cilium-olm": context deadline exceeded
I0125 13:56:56.184166       1 leaderelection.go:277] failed to renew lease cilium/cilium-olm: timed out waiting for the condition
2021-01-25T13:56:56.184Z	ERROR	cmd	Manager exited non-zero.	{"Namespace": "cilium", "error": "leader election lost"}
github.com/go-logr/zapr.(*zapLogger).Error
	/home/travis/gopath/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
github.com/operator-framework/operator-sdk/internal/cmd/helm-operator/run.run
	operator-sdk/internal/cmd/helm-operator/run/cmd.go:164
github.com/operator-framework/operator-sdk/internal/cmd/helm-operator/run.NewCmd.func1
	operator-sdk/internal/cmd/helm-operator/run/cmd.go:65
github.com/spf13/cobra.(*Command).execute
	/home/travis/gopath/pkg/mod/github.com/spf13/[email protected]/command.go:846
github.com/spf13/cobra.(*Command).ExecuteC
	/home/travis/gopath/pkg/mod/github.com/spf13/[email protected]/command.go:950
github.com/spf13/cobra.(*Command).Execute
	/home/travis/gopath/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main
	operator-sdk/cmd/helm-operator/main.go:40
runtime.main
	/home/travis/.gimme/versions/go1.15.4.linux.amd64/src/runtime/proc.go:204

bundle validation error - operators.operatorframework.io.bundle.package.v1 value in the annotations yaml doesn't match the image label

2021-01-07 15:18:19,992 p=278 u=default n=ansible | TASK [parse_operator_bundle : Result of failed task] ***************************
2021-01-07 15:18:20,182 p=278 u=default n=ansible | ok: [localhost] => {
    "msg": {
        "changed": false,
        "failed": true,
        "msg": "The operators.operatorframework.io.bundle.package.v1 value in the annotations yaml doesn't match the corresponding bundle image label!"
    }
}
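The check compares the `operators.operatorframework.io.bundle.package.v1` key in `metadata/annotations.yaml` against the identically-named label baked into the bundle image; the two must be byte-identical. A sketch of the annotations side, assuming the package name is `cilium` (as it appears in the certified catalog):

```yaml
# metadata/annotations.yaml (excerpt) -- must match the bundle image's
# label: operators.operatorframework.io.bundle.package.v1=cilium
annotations:
  operators.operatorframework.io.bundle.package.v1: cilium
```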

scan failure: marketplace-operator-metadata-linting

===== Test: operator-metadata-linting =====

 

Operator Courier version:
-------------------------

2.1.10 (https://github.com/operator-framework/operator-courier/releases/tag/v2.1.10)

Validation Warnings:
--------------------

"csv metadata.annotations.categories not defined"
"csv metadata.annotations.description not defined"
"csv metadata.annotations.containerImage not defined"
"csv metadata.annotations.createdAt not defined"
"csv metadata.annotations.support not defined"
"csv metadata.annotations.certified not defined."
"csv metadata.annotations.description not defined.Without this field, the description displayed in the tiles of the UI will be a truncated version of spec.description."
"csv metadata.annotations.categories not defined.Without this field, the operator will be categorized as Other."
"csv metadata.annotations.repository not defined.Without this field, the link to the operator source code will not be displayed in the UI."
"csv metadata.annotations.createdAt not defined.Without this field, the time stamp at which the operator was created will not be displayed in the UI."
"csv metadata.annotations.containerImage not defined.Without this field, the link to the operator image will not be displayed in the UI."
"csv metadata.annotations.alm-examples not defined.Without this field, users will not have examples of how to write Custom Resources for the operator."
"csv spec.maintainers not defined. Without this field, the operator details page will not display the name and contact for users to get support in using the operator. The field should be a yaml list of name & email pairs."
"You should have alm-examples for every owned CRD"

Validation Errors:
------------------

"spec.icon[0].mediatype  is not a valid mediatype. It must be one of \"image/gif\", \"image/jpeg\", \"image/png\", \"image/svg+xml\""
"UI validation failed to verify that required fields for operatorhub.io are properly formatted."
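The icon error means `spec.icon[0].mediatype` is empty or not one of the four allowed values; a valid CSV entry looks like this (base64 payload elided):

```yaml
# CSV excerpt: mediatype must be exactly one of image/gif, image/jpeg,
# image/png, or image/svg+xml.
spec:
  icon:
    - base64data: "<base64-encoded SVG>"
      mediatype: image/svg+xml
```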

stdout:
-------



stderr: 
-------

WARNING: csv metadata.annotations.categories not defined [1.9.0/cilium-olm.csv.yaml]
WARNING: csv metadata.annotations.description not defined [1.9.0/cilium-olm.csv.yaml]
WARNING: csv metadata.annotations.containerImage not defined [1.9.0/cilium-olm.csv.yaml]
WARNING: csv metadata.annotations.createdAt not defined [1.9.0/cilium-olm.csv.yaml]
WARNING: csv metadata.annotations.support not defined [1.9.0/cilium-olm.csv.yaml]
WARNING: csv metadata.annotations.certified not defined. [1.9.0/cilium-olm.csv.yaml]
WARNING: csv metadata.annotations.description not defined.Without this field, the description displayed in the tiles of the UI will be a truncated version of spec.description. [58c103af-6011-40f1-a0bd-62945c2e3672/package.yaml]
WARNING: csv metadata.annotations.categories not defined.Without this field, the operator will be categorized as Other. [58c103af-6011-40f1-a0bd-62945c2e3672/package.yaml]
WARNING: csv metadata.annotations.repository not defined.Without this field, the link to the operator source code will not be displayed in the UI. [58c103af-6011-40f1-a0bd-62945c2e3672/package.yaml]
WARNING: csv metadata.annotations.createdAt not defined.Without this field, the time stamp at which the operator was created will not be displayed in the UI. [58c103af-6011-40f1-a0bd-62945c2e3672/package.yaml]
WARNING: csv metadata.annotations.containerImage not defined.Without this field, the link to the operator image will not be displayed in the UI. [58c103af-6011-40f1-a0bd-62945c2e3672/package.yaml]
WARNING: csv metadata.annotations.alm-examples not defined.Without this field, users will not have examples of how to write Custom Resources for the operator. [58c103af-6011-40f1-a0bd-62945c2e3672/package.yaml]
WARNING: csv spec.maintainers not defined. Without this field, the operator details page will not display the name and contact for users to get support in using the operator. The field should be a yaml list of name & email pairs. [58c103af-6011-40f1-a0bd-62945c2e3672/package.yaml]
WARNING: You should have alm-examples for every owned CRD [58c103af-6011-40f1-a0bd-62945c2e3672/package.yaml]
ERROR: spec.icon[0].mediatype  is not a valid mediatype. It must be one of "image/gif", "image/jpeg", "image/png", "image/svg+xml" [58c103af-6011-40f1-a0bd-62945c2e3672/package.yaml]
ERROR: UI validation failed to verify that required fields for operatorhub.io are properly formatted. [58c103af-6011-40f1-a0bd-62945c2e3672/package.yaml]
Resulting bundle is invalid, input yaml is improperly defined.


return code:
------------

1



-------------------
Execution Reference:

-> /cvp/cvp-isv-operator-metadata-validation-test/marketplace-ospid-e31ac831-7e72-42bb-baf9-f392ef7ea622-beda7aab-253f-478f-8dfc-515803469d98/58c103af-6011-40f1-a0bd-62945c2e3672/

CSV suffix hash is insufficient

Currently the CSV suffix is generated based on parameters such as the operator image:

generate_instaces_cue() {
  cat << EOF
package operator

instances: [
	{
		output: "manifests/cilium.v${cilium_version}/cluster-network-06-cilium-%s.yaml"
		parameters: {
			image:               "${operator_image}"
			test:                false
			onlyCSV:             false
			ciliumVersion:       "${cilium_version}"
			configVersionSuffix: "${1:-}"
		}
	},
	{
		output: "bundles/cilium.v${cilium_version}/manifests/cilium-olm.csv.yaml"
		parameters: {
			namespace:           "placeholder"
			image:               "${operator_image}"
			test:                false
			onlyCSV:             true
			ciliumVersion:       "${cilium_version}"
			configVersionSuffix: "${1:-}"
		}
	},
]
EOF
}

config_version_suffix_hash="$(generate_instaces_cue | git hash-object --stdin)"
generate_instaces_cue "${config_version_suffix_hash:0:7}" > config/operator/instances.cue

This does not cover fixes made for #41 and #39.

ensure OLM resources are configured correctly

Here is what is currently installed into a new cluster:

{
	apiVersion: "operators.coreos.com/v1alpha2"
	kind:       "OperatorGroup"
	metadata: {
		name:      constants.name
		namespace: parameters.namespace
	}
	spec: targetNamespaces: [parameters.namespace]
},
{
	apiVersion: "operators.coreos.com/v1alpha1"
	kind:       "Subscription"
	metadata: {
		name:      constants.name
		namespace: parameters.namespace
	}
	spec: {
		channel:             "stable"
		name:                constants.name
		startingCSV:         "cilium.v\(parameters.ciliumVersion)"
		installPlanApproval: "Automatic"
		source:              "certified-operators"
		sourceNamespace:     "openshift-marketplace"
	}
},

This doesn't actually break installation, although it's not quite correct and cannot be used for direct catalog installations.

constants.name is set to cilium-olm, which is the name of the operator workload in the cluster, but in the catalog the package is called cilium.
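A direct catalog installation would presumably need the Subscription's spec.name to reference the catalog package (cilium) rather than the workload name (cilium-olm); a hand-written equivalent might look like:

```yaml
# Hypothetical Subscription for installing straight from the certified
# catalog. metadata.name is free-form, but spec.name must be the package
# name exactly as it appears in the catalog.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cilium
  namespace: cilium
spec:
  channel: stable
  name: cilium
  installPlanApproval: Automatic
  source: certified-operators
  sourceNamespace: openshift-marketplace
```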

Move OLM repo to Isovalent Org

I was looking at the way other CNCF projects publish OLM operators. For Antrea, VMware publishes it; Submariner has one, but it's not in the OSS GitHub repo; and Kube-OVN currently doesn't have one.

Cilium OLM should live in the Isovalent Github org rather than the Cilium one since it is published by Isovalent.
