
baremetal-deploy's Introduction

Important

This repository is no longer maintained. For contributions to the ansible-ipi-install roles, please visit the ansible-collection-redhatci-ocp repository.

OpenShift Baremetal Deploy

This repository stores resources and deployment artifacts for bare metal OpenShift KNI clusters.

It also contains optional features focused on low-latency workloads, NFV workloads, etc.

Installation artifacts

Optional features

  • Performance. Performance-related features like Hugepages, real-time kernel, CPU Manager and Topology Manager.
  • Bonding. A helper script to create bonding devices with Ignition and/or NMstate.
  • DPDK. Example workload that uses DPDK libraries for packet processing.
  • Kubernetes NMstate. Node-networking configuration driven by Kubernetes and executed by NMstate.
  • Kubernetes NMstate day1. Node-networking configuration driven by Kubernetes and executed by NMstate during the deployment of a cluster, by adding settings to install-config.yaml.
  • PTP. This operator manages cluster-wide Precision Time Protocol (PTP) configuration.
  • SCTP. These assets enable Stream Control Transmission Protocol (SCTP) in the RHCOS worker nodes.
  • SR-IOV. The SR-IOV Network Operator creates and manages the components of the SR-IOV stack.
  • CNV. Container Native Virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads.

Performance tuning

The Performance Tuning folder contains assets intended to improve performance, such as:

  • Huge Pages
  • Topology Manager
  • CPU Manager
  • Real-time kernel (including a new worker-rt Kubernetes/OpenShift node role)

These assets are applied mainly via the Node Tuning Operator and/or the Machine Config Operator.

How to contribute

See CONTRIBUTING for some guidelines.

Thanks

  • Netlify, for PR rendering.


baremetal-deploy's Issues

is ansible really needed?

Currently the baremetal-prep.sh script requires Ansible to create the install-config.yaml file.
Since this is just a simple template substitution, a bash script using envsubst or sed, or a Python script, would be better: you wouldn't need to subscribe to the Ansible channel or download unneeded packages.
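
For illustration, a minimal envsubst-based sketch; the template name and variables here are hypothetical, not part of the repository:

export BASE_DOMAIN=example.com CLUSTER_NAME=mycluster
# substitute ${BASE_DOMAIN}-style placeholders in the template
envsubst < install-config.yaml.tmpl > install-config.yaml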

Bare Metal Hosts Masters have no Nodes attached

While deploying a new cluster, the BMH masters that are 'Externally Provisioned' do not have any Nodes attached to them.

The problem is discussed here, and dev-scripts uses this workaround.

The environment is using libvirt and linchpin.

This is how it currently looks: [screenshot]

Missing platform.baremetal.provisioningNetworkInterface in install-config.yaml

When running the TASK [installer : Create OpenShift Manifest] task, it fails with:

    "stderr": "level=fatal msg=\"failed to fetch Master Machines: failed to load asset \\\"Install Config\\\": invalid \\\"install-config.yaml\\\" file: platform.baremetal.provisioningNetworkInterface: Invalid value: \\\"\\\": no provisioning network interface is configured, please set this value to be the interface on the provisioning network on your cluster's baremetal hosts\"", 

My install-config.yaml generated by the playbook contains:

apiVersion: v1
baseDomain: xxx
metadata:
  name: xxx
networking:
  machineCIDR: xxx
  networkType: OVNKubernetes
compute:
- name: worker
  replicas: 1
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: xxx
    ingressVIP: xxx
    dnsVIP: xxx
    hosts:
xxx

According to https://github.com/openshift/installer/blob/master/docs/user/metal/install_ipi.md#install-config, platform.baremetal.provisioningNetworkInterface is required.
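
Assuming that documentation is accurate, the fix is to set the interface name under platform.baremetal in the generated file; a minimal sketch (the interface name eno1 is illustrative):

platform:
  baremetal:
    provisioningNetworkInterface: eno1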

Feature Enhancement of ansible-ipi-install - Clean up old tmp oc binary dir

Right now, when the ansible-ipi-install playbook kicks off, it creates a tmp directory that stores the oc binaries (oc, kubelet, openshift-baremetal-install). However, if an error occurs during deployment, this tmp directory isn't cleaned up. Add logic that checks whether these tmp dirs exist and removes them, in order to save space on the local disk.
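
A minimal sketch of such a cleanup, written as Ansible tasks since the playbook is Ansible-based; the /tmp path and directory pattern are assumptions, not the role's actual naming:

- name: Find stale temporary binary directories from previous runs
  find:
    paths: /tmp
    patterns: "openshift-install-*"   # hypothetical pattern; match whatever the role actually creates
    file_type: directory
  register: stale_tmp_dirs

- name: Remove stale temporary binary directories
  file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ stale_tmp_dirs.files }}"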

Feature Enhancement of ansible-ipi-install - Remove binaries in /usr/local/bin on re-run of playbook

The playbook currently takes the oc binaries (oc, openshift-baremetal-install, kubelet) and places them in the /usr/local/bin directory. However, if you want to reuse the provision host to install a new version of OCP, the existing playbook notices that oc, openshift-baremetal-install, and kubelet binaries already exist in /usr/local/bin and does not replace them with the newly created ones. The solution is to add logic that checks whether those files exist and, if they do, removes them before untarring the new binaries into /usr/local/bin.
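
A sketch of the kind of Ansible task that could run before the untar step (binary names taken from this issue):

- name: Remove previously installed OpenShift binaries before untarring new ones
  become: true
  file:
    path: "/usr/local/bin/{{ item }}"
    state: absent
  loop:
    - oc
    - openshift-baremetal-install
    - kubelet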

Ansible: provisioning interface name needed in install-config

In 4.4, metal3-config.yaml is no longer required. Instead, the designation of the provisioning interface name has been moved to install-config.yaml as provisioningNetworkInterface. We should inject this value into the generated install-config.

install-steps.md provision_ip in metal3 yaml

A note exists under step 4, where the metal3-config.yaml.sample file is created. The note states that provision_ip should be modified to an available IP.

First, the note should say provisioning_ip, not provision_ip. Second, it should be noted that, in the config file, deploy_kernel_url, deploy_ramdisk_url, ironic_endpoint, and ironic_inspector_endpoint must use the same IP as provisioning_ip.
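
For illustration, a metal3-config.yaml fragment with a consistent IP; the address and ports here are illustrative, not prescriptive:

provisioning_ip: 172.22.0.3/24
deploy_kernel_url: http://172.22.0.3:6180/images/ironic-python-agent.kernel
deploy_ramdisk_url: http://172.22.0.3:6180/images/ironic-python-agent.initramfs
ironic_endpoint: http://172.22.0.3:6385/v1/
ironic_inspector_endpoint: http://172.22.0.3:5050/v1/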

metal3-config.yaml is no longer required on 4.4

In 4.4, the metal3-config.yaml config map isn't needed anymore, as the installer calculates and passes all the required values to the machine-api-operator for standing up the Metal3 provisioning services.

Error deploying the sriov operator

error: unable to recognize "STDIN": no matches for kind "SriovNetworkNodePolicy" in version "sriovnetwork.openshift.io/v1"
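
This error usually means the SriovNetworkNodePolicy CRD has not been registered yet. A quick check before creating policies (the namespace shown is the operator's usual default and may differ):

oc get crd sriovnetworknodepolicies.sriovnetwork.openshift.io
oc get pods -n openshift-sriov-network-operator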

Deployment fails when install-config.yaml hosts do not include master nodes first

With the following install-config.yaml:

[kni@worker-1 ~]$ cat install-config.yaml 
apiVersion: v1
baseDomain: qe.lab.redhat.com
networking:
  machineCIDR: 192.168.123.0/24
metadata:
  name: ocp-edge-cluster
compute:
- name: worker
  replicas: 1
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: 192.168.123.5
    dnsVIP: 192.168.123.6
    ingressVIP: 192.168.123.10
    hosts:
      - name: openshift-master-2
        role: master
        bmc:
          address: ipmi://192.168.123.1:6232
          username: admin
          password: password
        bootMACAddress: 52:54:00:88:41:e0
        hardwareProfile: default
      - name: openshift-worker-0
        role: worker
        bmc:
          address: ipmi://192.168.123.1:6233
          username: admin
          password: password
        bootMACAddress: 52:54:00:cd:0a:b1
        hardwareProfile: unknown
      - name: openshift-master-0
        role: master
        bmc:
          address: ipmi://192.168.123.1:6230
          username: admin
          password: password
        bootMACAddress: 52:54:00:5f:b0:ef
        hardwareProfile: default
      - name: openshift-master-1
        role: master
        bmc:
          address: ipmi://192.168.123.1:6231
          username: admin
          password: password
        bootMACAddress: 52:54:00:73:53:97
        hardwareProfile: default

Deployment fails because the installer deployed only the first three hosts listed:

  • openshift-master-2
  • openshift-worker-0
  • openshift-master-0

Note: deployment works correctly when I put master nodes first in the hosts list:

apiVersion: v1
baseDomain: qe.lab.redhat.com
networking:
  machineCIDR: 192.168.123.0/24
metadata:
  name: ocp-edge-cluster
compute:
- name: worker
  replicas: 1
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: 192.168.123.5
    dnsVIP: 192.168.123.6
    ingressVIP: 192.168.123.10
    hosts:
      - name: openshift-master-2
        role: master
        bmc:
          address: ipmi://192.168.123.1:6232
          username: admin
          password: password
        bootMACAddress: 52:54:00:88:41:e0
        hardwareProfile: default
      - name: openshift-master-0
        role: master
        bmc:
          address: ipmi://192.168.123.1:6230
          username: admin
          password: password
        bootMACAddress: 52:54:00:5f:b0:ef
        hardwareProfile: default
      - name: openshift-master-1
        role: master
        bmc:
          address: ipmi://192.168.123.1:6231
          username: admin
          password: password
        bootMACAddress: 52:54:00:73:53:97
        hardwareProfile: default
      - name: openshift-worker-0
        role: worker
        bmc:
          address: ipmi://192.168.123.1:6233
          username: admin
          password: password
        bootMACAddress: 52:54:00:cd:0a:b1
        hardwareProfile: unknown

[kni@worker-1 ~]$ echo $VERSION
4.3.0-0.nightly-2019-12-09-035405
[kni@worker-1 ~]$ echo $RELEASE_IMAGE
quay.io/openshift-release-dev/ocp-release-nightly@sha256:52d9ac31e14658a3e48bc9a2ce041220b697008e59319a9ce009e269097c3706

sriov operator doesn't work with OCP 4.4

It looks like the 4.4 SR-IOV operator version needs to be deployed to work with OCP 4.4; otherwise, when you create the policy, the affected node becomes NotReady and unschedulable because of a CNI issue.
I can confirm that 4.4 works.

jmespath is required by the ansible playbooks

TASK [installer : Set Fact for RHCOS_URI and RHCOS_PATH] ***********************
task path: /home/jenkins/baremetal-deploy/ansible-ipi-install/roles/installer/tasks/30_create_metal3.yml:13
Wednesday 12 February 2020  11:38:52 +0000 (0:00:00.942)       0:01:52.304 **** 
fatal: [xxx.redhat.com]: FAILED! => {
    "msg": "You need to install \"jmespath\" prior to running json_query filter"
}

In RHEL 8 it is provided by the python3-jmespath package.
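
Either of the following should work on the provision host, depending on which Python interpreter Ansible uses:

sudo dnf -y install python3-jmespath
# or, for the Python interpreter Ansible actually runs under:
pip3 install --user jmespath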

Add chrony configs

Currently there are no NTP/chrony configuration instructions. It would be nice to document how to add the chrony/NTP configuration as part of the installation with the custom manifests; otherwise, if the clocks differ, the environment can become unstable or unusable.

I use this for masters:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  creationTimestamp: null
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-master-etc-chrony-conf
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 2.2.0
    networkd: {}
    passwd: {}
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,c2VydmVyIGNsb2NrLmNvcnAucmVkaGF0LmNvbSBpYnVyc3QKc3RyYXR1bXdlaWdodCAwCmRyaWZ0ZmlsZSAvdmFyL2xpYi9jaHJvbnkvZHJpZnQKcnRjc3luYwptYWtlc3RlcCAxMCAzCmJpbmRjbWRhZGRyZXNzIDEyNy4wLjAuMQpiaW5kY21kYWRkcmVzcyA6OjEKa2V5ZmlsZSAvZXRjL2Nocm9ueS5rZXlzCmNvbW1hbmRrZXkgMQpnZW5lcmF0ZWNvbW1hbmRrZXkKbm9jbGllbnRsb2cKbG9nY2hhbmdlIDAuNQpsb2dkaXIgL3Zhci9sb2cvY2hyb255Cg==                                                                            
          verification: {}
        filesystem: root
        group:
          name: root
        mode: 420
        path: /etc/chrony.conf
        user:
          name: root
    systemd: {}
  osImageURL: ""

And this one for workers:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  creationTimestamp: null
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-etc-chrony-conf
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 2.2.0
    networkd: {}
    passwd: {}
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,c2VydmVyIGNsb2NrLmNvcnAucmVkaGF0LmNvbSBpYnVyc3QKc3RyYXR1bXdlaWdodCAwCmRyaWZ0ZmlsZSAvdmFyL2xpYi9jaHJvbnkvZHJpZnQKcnRjc3luYwptYWtlc3RlcCAxMCAzCmJpbmRjbWRhZGRyZXNzIDEyNy4wLjAuMQpiaW5kY21kYWRkcmVzcyA6OjEKa2V5ZmlsZSAvZXRjL2Nocm9ueS5rZXlzCmNvbW1hbmRrZXkgMQpnZW5lcmF0ZWNvbW1hbmRrZXkKbm9jbGllbnRsb2cKbG9nY2hhbmdlIDAuNQpsb2dkaXIgL3Zhci9sb2cvY2hyb255Cg==
          verification: {}
        filesystem: root
        group:
          name: root
        mode: 420
        path: /etc/chrony.conf
        user:
          name: root
    systemd: {}
  osImageURL: ""
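
For reference, the source field in both MachineConfigs is just a base64-encoded chrony.conf; it can be produced or inspected with standard tools, for example:

# encode your own chrony.conf for the MachineConfig source field
base64 -w0 /etc/chrony.conf
# decode the payload above to inspect it
echo '<base64 payload>' | base64 -d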

install-steps.md Number of routers deployed

The "Deploying Routers on Worker Nodes" mentions setting the replicas for the routers to be the same as the number of worker nodes deployed.

If no worker nodes are deployed during installation, what value should this be set to?
For instance, if i will have one worker node added after installation because it is the node i am provisioning from.

Can this be adjusted later? Do i set it to 0 during installation and then how do i increment it when i reconfigure the provisioning system to be a worker.
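
For what it's worth, the router replica count maps to the default IngressController and can be patched after installation, so setting it low initially and raising it later should work; a sketch:

oc patch -n openshift-ingress-operator ingresscontroller/default \
  --type merge -p '{"spec":{"replicas": 2}}'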

For disconnected installs, consider mirroring the machine OS image as well

Installation requires downloading two machine OS images, for the bootstrap VM and for the control plane and worker nodes, which typically happens over the internet.

For the bootstrap, we use the QEMU RHCOS image, and for the other hosts we use the OpenStack image, which has the appropriate support for disconnected installs.

To set local locations for these images, update the install-config:

platform:
  baremetal:
    bootstrapOSImage: http://<local mirror>/rhcos-43.81.201912131630.0-qemu.x86_64.qcow2.gz?sha256=XYZ
    clusterOSImage: http://<local mirror>/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=XYZ
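
A rough sketch of mirroring those images to a local web server; URLs and paths are illustrative, and the digests printed by sha256sum become the ?sha256= values:

curl -LO http://<upstream mirror>/rhcos-43.81.201912131630.0-qemu.x86_64.qcow2.gz
curl -LO http://<upstream mirror>/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz
sha256sum rhcos-43.81.201912131630.0-*.qcow2.gz
sudo cp rhcos-43.81.201912131630.0-*.qcow2.gz /var/www/html/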


ansible-ipi-install assumes "clusterconfigs" dir has no existing terraform files

When attempting to re-run an ansible-ipi-install playbook, the TASK [installer : Deploy OpenShift Cluster] task will fail if a previous run has left old terraform.tfstate files behind in the {{ ansible_user }}/clusterconfigs directory.

Tuesday 04 February 2020  11:24:34 -0500 (0:00:00.102)       0:02:17.262 ******
...
"level=debug msg=\"  Loading Install Config...\"", "level=debug msg=\"  Loading Platform Credentials Check...\"", "level=debug msg=\"  Loading Terraform Variables...\"", "level=debug msg=\"  Loading Kubeadmin Password...\"", "level=fatal msg=\"failed to fetch Cluster: failed to load asset \\\"Cluster\\\": \\\"terraform.tfstate\\\" already exists.  There may already be a running cluster\""], "stdout": "", "stdout_lines": []}

PLAY RECAP *********************************************************************************************************************************************************************************************************
127.0.0.1                  : ok=66   changed=24   unreachable=0    failed=1    skipped=8    rescued=0    ignored=0

Removing the directory and re-running the playbook fixes the issue.
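
A lighter-touch cleanup is to remove just the stale state files; a minimal sketch, assuming the default clusterconfigs layout:

# remove stale installer state left by a previous run (paths assume the default layout)
rm -rf ~/clusterconfigs/terraform.tfstate* ~/clusterconfigs/.openshift_install_state.json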

Unneeded packages

sudo yum -y install ansible git usbredir golang libXv virt-install libvirt libvirt-devel libselinux-utils qemu-kvm mkisofs

A few of these packages are no longer needed; they date back to when an ISO had to be built. To improve installation time, the set of installed packages should be reduced.

Also, packages that are pulled in as dependencies of other packages should not be listed explicitly.
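
For illustration, one plausible trimmed set, assuming the ISO-era tools (mkisofs, usbredir, libXv, golang) are the removable ones; exactly which packages are still required needs verification:

sudo yum -y install ansible git libvirt libvirt-devel libselinux-utils qemu-kvm virt-install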

Error while provisioning workers (blockdev: cannot open /dev/sda: No medium found)

openshift-machine-api   kni1-worker-0   error    provisioning             kni1-worker-0-r7gcg   ipmi://10.19.143.29   unknown            true     Image provisioning failed: node f7819fc2-50ee-4b25-bb9b-3269f3ffac85 command status errored: {'type': 'ImageWriteError', 'code': 500, 'message': 'Error writing image to device: Writing image to device /dev/sda failed with exit code 1. stdout: write_image.sh: Erasing existing GPT and MBR data structures from /dev/sda\n. stderr: blockdev: cannot open /dev/sda: No medium found\n', 'details': 'Writing image to device /dev/sda failed with exit code 1. stdout: write_image.sh: Erasing existing GPT and MBR data structures from /dev/sda\n. stderr: blockdev: cannot open /dev/sda: No medium found\n'}
Client Version: 4.3.1
Server Version: 4.3.1
Kubernetes Version: v1.16.2
      - name: kni1-worker-0
        role: worker
        bmc:
          address: ipmi://xxx
          username: root
          password: xxx
        bootMACAddress: xxx
        hardwareProfile: unknown
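
If /dev/sda on the worker is not a usable disk (for example, it resolves to virtual media with no medium), one possible mitigation in newer installer versions is a rootDeviceHints entry for that host in install-config.yaml; the device name is illustrative, and availability of this field depends on the installer version:

      - name: kni1-worker-0
        role: worker
        ...
        rootDeviceHints:
          deviceName: /dev/sdb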

Documentation: Missing prereq for ansible-ipi-install

Attempting to use the roles via include_role and tasks_from fails in 10_validation.yml with:

TASK [node-prep : Verify DNS records for API VIP, Wildcard (Ingress) VIP] **************************
Thursday 06 February 2020 00:32:37 +0000 (0:00:00.130) 0:00:03.060 *****
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'dig'. Error was a <class 'ansible.errors.AnsibleError'>, original message: The dig lookup requires the python 'dnspython' library and it is not installed"}
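
The missing prerequisite can be installed on the control node, for example:

pip3 install --user dnspython
# the distro package name is python3-dns (an assumption; verify for your repos)
sudo yum -y install python3-dns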

install-steps.md - Attaching to a subscription

Step 4 under "Preparing the Provision node for Openshift Install" does not show attaching the system to a subscription.

We should add --auto-attach to the subscription-manager register command, with a prior note that the --activationkey option could also be used. Alternatively, the subscription-manager attach command could be run after the system is registered.
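
For example (all standard subscription-manager invocations):

sudo subscription-manager register --username <user> --password <password> --auto-attach
# or with an activation key:
sudo subscription-manager register --org <org_id> --activationkey <key>
# or attach after registering:
sudo subscription-manager attach --auto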

Unable to verify certification authority

Environment Used

OCP 4.3 IPI on BM.

What is the issue?

While standing up the control plane, the bootstrap node is unable to verify the CA, leading to API timeouts.

Host/Provisioner Node OS version

[kni@worker-0 root]$ cat /etc/os-release 
NAME="Red Hat Enterprise Linux"
VERSION="8.1 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.1"
PLATFORM_ID="platform:el8"

What is happening

Bootstrap node is up and running:

[kni@worker-0 root]$ sudo virsh list
 Id    Name                           State
----------------------------------------------------
 11    kni4-ccb7p-bootstrap           running

All the containers inside this VM are also up:

[core@localhost ~]$ sudo podman ps -a | awk '{print $NF}'
NAMES
ironic-api
ironic-inspector
ironic-conductor
sad_jones
sweet_kalam
amazing_curran
condescending_hugle
xenodochial_ptolemy
coreos-downloader
ipa-downloader
mystifying_northcutt
sad_brahmagupta
stupefied_kepler
httpd
dnsmasq
mariadb

The logs inside the VM state:

DEBUG                                              
DEBUG Apply complete! Resources: 12 added, 0 changed, 0 destroyed. 
DEBUG OpenShift Installer v4.4.0                   
DEBUG Built from commit 8aace3c8a9497f3290cd0751fd45da1a4d7c6132 
INFO Waiting up to 30m0s for the Kubernetes API at https://api.kni4.cloud.lab.eng.bos.redhat.com:6443... 
DEBUG Still waiting for the Kubernetes API: Get https://api.kni4.cloud.lab.eng.bos.redhat.com:6443/version?timeout=32s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-apiserver-lb-signer") 
DEBUG Still waiting for the Kubernetes API: Get https://api.kni4.cloud.lab.eng.bos.redhat.com:6443/version?timeout=32s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-apiserver-lb-signer") 
DEBUG Still waiting for the Kubernetes API: Get https://api.kni4.cloud.lab.eng.bos.redhat.com:6443/version?timeout=32s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-apiserver-lb-signer") 

I had made sure to clean up all the older volumes and VM residues before starting this installation.

What was expected instead

After the API came up, I was expecting a fully functional OCP 4.3 cluster.

Additional Info

For reference, here is my install-config.yaml file

apiVersion: v1
baseDomain: <YourDomain> 
metadata:
  name: kni4
networking:
  machineCIDR: 10.19.136.0/21
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: 10.19.138.86 
    ingressVIP: 10.19.138.87
    dnsVIP: 10.19.138.88
    provisioningBridge: provisioning
    externalBridge: baremetal
    hosts:
      - name: openshift-master-0
        role: master
        bmc:
          address: ipmi://<URL>
          username: ***
          password: ******
        bootMACAddress: ec:f4:bb:da:0c:58
        hardwareProfile: default
      - name: ********
        role: master
        bmc:
          address: ipmi://<URL> 
          username: *****
          password: *****
        bootMACAddress: ec:f4:bb:da:32:88
        hardwareProfile: default
      - name: *********
        role: master
        bmc:
          address: ipmi://<URL>
          username: ******
          password: ******** 
        bootMACAddress: ec:f4:bb:da:0d:98
        hardwareProfile: default
pullSecret: '<YourPullSecret>'
sshKey: '<YourSSH>'

I have removed a few lines in between, so it will not work as-is.

[Performance feature] Error deploying feature on E2E use case

This is the job executed:

if [ -f "features/performance/Makefile" ]; then
    cd features/performance
    ISOLATED_CPUS=2 RESERVED_CPUS=2 make
else
    exit 1
fi

This is the Jenkins Output:

Started by upstream project "TE-UC01" build number 36
originally caused by:
 Started by user admin
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building in workspace /var/lib/jenkins/workspace/E2E-Steps/Worker-Performance
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/openshift-kni/baremetal-deploy.git
 > git init /var/lib/jenkins/workspace/E2E-Steps/Worker-Performance # timeout=10
Fetching upstream changes from https://github.com/openshift-kni/baremetal-deploy.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/openshift-kni/baremetal-deploy.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/openshift-kni/baremetal-deploy.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/openshift-kni/baremetal-deploy.git # timeout=10
Fetching upstream changes from https://github.com/openshift-kni/baremetal-deploy.git
 > git fetch --tags --progress -- https://github.com/openshift-kni/baremetal-deploy.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 6ceaead20d5a777505315e137d08b2ede4772a0b (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 6ceaead20d5a777505315e137d08b2ede4772a0b # timeout=10
Commit message: "Provide scripts to generate and deploy performance manifests (#36)"
 > git rev-list --no-walk 6ceaead20d5a777505315e137d08b2ede4772a0b # timeout=10
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Injecting as environment variables the properties content 
KUBECONFIG=/var/lib/jenkins/.kube/e2e_kubeconfig

[EnvInject] - Variables injected successfully.
[Worker-Performance] $ /bin/sh -xe /tmp/jenkins212272282312616800.sh
+ '[' -f features/performance/Makefile ']'
+ cd features/performance
+ ISOLATED_CPUS=2
+ RESERVED_CPUS=2
+ make
./hack/generate.sh
./hack/deploy.sh
node/worker-0 labeled
machineconfigpool.machineconfiguration.openshift.io/master patched
machineconfigpool.machineconfiguration.openshift.io/worker patched
error: must specify one of -f and -k
make: *** [Makefile:3: deploy] Error 1
Build step 'Execute shell' changed build result to UNSTABLE
Finished: UNSTABLE
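
The "must specify one of -f and -k" error comes from oc being invoked without a manifest argument, which suggests the generate step produced nothing. A hypothetical guard for hack/deploy.sh (the manifest directory name is an assumption, not the script's actual layout):

# fail fast if generate.sh produced no manifests
MANIFEST_DIR=${MANIFEST_DIR:-manifests}
if ! ls "${MANIFEST_DIR}"/*.yaml >/dev/null 2>&1; then
    echo "no generated manifests in ${MANIFEST_DIR}; did generate.sh run?" >&2
    exit 1
fi
oc apply -f "${MANIFEST_DIR}"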

install-steps.md Drops connection when configuring bridges via ssh.

When configuring the bridges on the provision node via SSH, the connection drops when the interface the session runs over is removed. If the commands are copy/pasted, this can prevent the remaining commands from running and makes it impossible to finish the configuration.

This can be fixed by wrapping the commands in nohup.
Example: nohup bash -c 'nmcli commands'
With nohup, once the configuration is done, the connection should resume.

Also, if an interface is active when it is removed, the steps will create a temporary connection called "Wired connection ...". The interfaces need to be brought down before removing them, as shown in the sketch below.
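
Putting both fixes together, an illustrative sequence; the connection and interface names are assumptions:

nohup bash -c '
  nmcli con down "eno1"    # bring the connection down first to avoid the temporary "Wired connection" profile
  nmcli con delete "eno1"
  nmcli con add type bridge ifname baremetal con-name baremetal
  nmcli con add type bridge-slave ifname eno1 master baremetal
  nmcli con up baremetal
'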
