
machine-api-operator's Introduction

Machine API Operator

The Machine API Operator manages the lifecycle of specific-purpose CRDs, controllers and RBAC objects that extend the Kubernetes API. This allows the desired state of machines in a cluster to be conveyed in a declarative fashion.

See https://github.com/openshift/enhancements/tree/master/enhancements/machine-api for more details.

Have a question? See our Frequently Asked Questions for common inquiries.

Architecture

Machine API Operator overview

CRDs

  • MachineSet
  • Machine
  • MachineHealthCheck

Controllers

Creating machines

You can create a new machine by applying a manifest representing an instance of the Machine CRD.

The machine.openshift.io/cluster-api-cluster label is used by the controllers to look up the right cloud instance.

You can set other labels to provide a convenient way for users and consumers to retrieve groups of machines:

machine.openshift.io/cluster-api-machine-role: worker
machine.openshift.io/cluster-api-machine-type: worker
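
Putting it together, a minimal Machine manifest looks roughly like the sketch below. The names and the providerSpec contents are placeholders; a full AWS example appears in the issues section further down this page.

apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  name: mycluster-worker-example                           # placeholder name
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: mycluster    # used to look up the right cloud instance
    machine.openshift.io/cluster-api-machine-role: worker
    machine.openshift.io/cluster-api-machine-type: worker
spec:
  providerSpec:
    value:
      # provider-specific configuration goes here, e.g. an AWSMachineProviderConfig
      apiVersion: awsproviderconfig.openshift.io/v1beta1
      kind: AWSMachineProviderConfig

Apply it with kubectl apply -f (or oc apply -f) against the openshift-machine-api namespace.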

Dev

  • Generate code (if needed):

    $ make generate
  • Build:

    $ make build
  • Run:

    Extract images.json from install/0000_30_machine-api-operator_01_images.configmap.yaml to a file and run:

    $ ./bin/machine-api-operator start --kubeconfig ${HOME}/.kube/config --images-json=path/to/images.json
  • Image:

    $ make image
    

The Machine API Operator is designed to work in conjunction with the Cluster Version Operator. You can see it in action by running an OpenShift Cluster deployed by the Installer.

However, you can run it in a vanilla Kubernetes cluster by pre-creating some assets.

For more information, see the hacking guide.

Machine API operator with Kubemark over Kubernetes

INFO: For development and testing purposes only

  1. Deploy MAO over Kubernetes:

     $ kustomize build | kubectl apply -f -
  2. Deploy the Kubemark actuator prerequisites:

     $ kustomize build config | kubectl apply -f -
  3. Create the cluster infrastructure.config.openshift.io object to tell the MAO to deploy the kubemark provider:

    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: infrastructures.config.openshift.io
    spec:
      group: config.openshift.io
      names:
        kind: Infrastructure
        listKind: InfrastructureList
        plural: infrastructures
        singular: infrastructure
      scope: Cluster
      versions:
      - name: v1
        served: true
        storage: true
    ---
    apiVersion: config.openshift.io/v1
    kind: Infrastructure
    metadata:
      name: cluster
    status:
      platform: kubemark

    The file is already present under config/kubemark-config-infra.yaml, so it is sufficient to run:

    $ kubectl apply -f config/kubemark-config-infra.yaml

OpenShift Bugzilla

The Bugzilla product for this repository is "Cloud Compute" under OpenShift Container Platform.

CI & tests

Run unit tests:

$ make test

Run the e2e-aws-operator tests. These tests assume that a cluster deployed by the Installer is up and running and that the KUBECONFIG environment variable is set:

$ make test-e2e

Tests are located in the machine-api-operator repository and executed in the Prow CI system. A link to failing tests is published as a comment on the PR by @openshift-ci-robot. The current test status for all OpenShift components can be found at https://deck-ci.svc.ci.openshift.org.

CI configuration is stored in the openshift/release repository and is split into four files.

More information about those files can be found in the ci-operator onboarding file.

machine-api-operator's People

Contributors

aleskandro, alexander-demicev, bison, bostrt, damdo, danil-grigorev, dhellmann, elmiko, enxebre, fedosin, frobware, ingvagabund, joelspeed, lobziik, michaelgugino, odvarkadaniel, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, paulfantom, racheljpg, radekmanak, russellb, rvanderp3, sadasu, samuelstuchly, slintes, srcarrier, vikaschoudhary16, zaneb


machine-api-operator's Issues

BareMetalHost CRD not being installed

The CRD defined in install/0000_30_machine-api-operator_13_baremetalhost.crd.yaml is not being installed when the machine-api-operator launches, which means that the metal3 app crash loops on baremetal systems.

Update metrics doc with controller specific information

Currently the metrics doc contains information about getting metrics from the machine-api-operator. There are a few metrics that get exposed from the individual controllers as well, most notably the mapi_instance_*_failed metrics. These metrics, and instructions about where to scrape, should be added to the document.

MachineHealthCheck fights with MachineSet on invalid configuration

The enhancement doc for the Machine lifecycle states that a Machine will move to the Failed phase if there is an error in the Machine configuration that precludes trying to create a provider.

The MachineHealthCheck controller will immediately delete any Machine in the Failed phase. On platforms that put Machines into the Failed phase due to invalid configuration, this will result in a fight with the MachineSet controller, constantly creating and deleting Machines.

The following actuators are affected:

  • AWS
  • Azure
  • GCP
  • OpenStack
  • libvirt
  • oVirt
  • kubevirt
  • kubemark

The vSphere actuator won't display this behaviour until #735 is merged.
The baremetal (Metal³) actuator doesn't currently return InvalidConfiguration errors, but this is planned for the future.

A solution to this might be for the MachineHealthCheck to not queue failed Machines for immediate deletion in the case where the ErrorReason is InvalidConfiguration, and instead only delete them after a timeout. One obstacle to this is that the ErrorReason is not currently recorded by the Machine controller. That will be fixed by #701.

Need metrics on maxReplicas and minReplicas of MachineAutoscaler

It would be great if you could publish Prometheus metrics for the maxReplicas and minReplicas of a MachineAutoscaler. On the ROSA platform we have to manually compare the current number of replicas with maxReplicas and increase maxReplicas when the current node count reaches it.
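
For reference, the bounds in question live on the MachineAutoscaler resource; a minimal sketch (resource names and values are placeholders) looks like this:

apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a                    # placeholder
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 12                            # the value we would like exposed as a Prometheus metric
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: mycluster-worker-us-east-1a        # placeholder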

Machine API doesn't support `platform: baremetal`

RHHI.Next clusters fail after bootstrap is complete:

oc --config ocp/auth/kubeconfig -n openshift-machine-api logs machine-api-operator-5bbc744598-cs6cm | tail -n 3
E0308 20:06:07.164857       1 operator.go:176] Failed getting operator config: no platform provider found on install config
E0308 20:06:27.657099       1 operator.go:176] Failed getting operator config: no platform provider found on install config
E0308 20:07:08.622265       1 operator.go:176] Failed getting operator config: no platform provider found on install config

/cc @booxter

Duplicate credentialsrequests

The last two credentials requests below are being provisioned on an Azure cluster with AzurePlatform as the provider.

[mjudeiki@redhat openshift-azure]$ oc get credentialsrequests.cloudcredential.openshift.io 
NAME                               AGE
azure-openshift-ingress            161m
cloud-credential-operator-iam-ro   161m
openshift-image-registry           161m
openshift-ingress                  161m

openshift-machine-api              161m
openshift-machine-api-azure        161m

One of them is OpenStack:

[mjudeiki@redhat openshift-azure]$ oc get credentialsrequests.cloudcredential.openshift.io openshift-machine-api -o yaml
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  creationTimestamp: "2019-06-17T10:22:45Z"
  finalizers:
  - cloudcredential.openshift.io/deprovision
  generation: 48
  labels:
    controller-tools.k8s.io: "1.0"
  name: openshift-machine-api
  namespace: openshift-cloud-credential-operator
  resourceVersion: "84721"
  selfLink: /apis/cloudcredential.openshift.io/v1/namespaces/openshift-cloud-credential-operator/credentialsrequests/openshift-machine-api
  uid: d8c61b45-90e9-11e9-bbff-000d3a18e198
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: OpenStackProviderSpec
  secretRef:
    name: openstack-cloud-credentials
    namespace: openshift-machine-api
status:
  lastSyncGeneration: 48
  lastSyncTimestamp: "2019-06-17T12:57:45Z"
  provisioned: true
[mjudeiki@redhat openshift-azure]$ oc get credentialsrequests.cloudcredential.openshift.io openshift-machine-api-azure -o yaml
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  creationTimestamp: "2019-06-17T10:23:02Z"
  finalizers:
  - cloudcredential.openshift.io/deprovision
  generation: 1
  labels:
    controller-tools.k8s.io: "1.0"
  name: openshift-machine-api-azure
  namespace: openshift-cloud-credential-operator
  resourceVersion: "2160"
  selfLink: /apis/cloudcredential.openshift.io/v1/namespaces/openshift-cloud-credential-operator/credentialsrequests/openshift-machine-api-azure
  uid: e30e41bf-90e9-11e9-bbff-000d3a18e198
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: passthrough
      scope: resourcegroup
  secretRef:
    name: azure-cloud-credentials
    namespace: openshift-machine-api

I suspect the whole file is being evaluated, and because the last two do not overwrite each other we end up with multiple credentials.
Should they contain credentials for all cloud providers like this? Where is the logic that selects and evaluates only the required file to be created?

/cc @ingvagabund @enxebre @abhinavdahiya @mandre

No new Machinesets get scheduled if there is an existing machineset with configuration issues

Version: 4.1.20

How to reproduce:
Create two MachineSet YAML files. config1 should have an incorrect entry, such as a label with a boolean value that is not in quotes (e.g. ssd: false), as illustrated below. The second should be properly formed. Apply the first YAML (with the error), then apply the second one (without the error).
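
A sketch of the offending fragment, assuming the label was placed under the MachineSet's node labels (only the quoting differs between the broken and the fixed version):

# config1 (invalid): an unquoted boolean is not a valid label value
spec:
  template:
    spec:
      metadata:
        labels:
          ssd: false          # fix by quoting the value: ssd: "false"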

Result:
Any machineset created after the incorrect one will not be scheduled until the bad one is fixed. If there are multiple bad ones, all must be fixed before any others are scheduled.

Status change on idle cluster

I see the cluster-api-operator status transitioning quickly from Available to Progressing and back over a few seconds, every 1-3m on an idle cluster. The operator state should be stable on an idle cluster.

From a watch on the operator status every 1s

machine-api-operator   v0.1.0-142-ge791b3db-dirty   True   False   False   3m
machine-api-operator   v0.1.0-142-ge791b3db-dirty   True   False   False   3m
machine-api-operator   v0.1.0-142-ge791b3db-dirty   True   False   False   3m
machine-api-operator   v0.1.0-142-ge791b3db-dirty   True   False   False   3m
machine-api-operator   v0.1.0-142-ge791b3db-dirty   False   True   False   0s <-- Progressing
machine-api-operator   v0.1.0-142-ge791b3db-dirty   True   False   False   0s <-- Available
machine-api-operator   v0.1.0-142-ge791b3db-dirty   False   True   False   1s <-- Progressing
machine-api-operator   v0.1.0-142-ge791b3db-dirty   True   False   False   1s <-- Available
machine-api-operator   v0.1.0-142-ge791b3db-dirty   True   False   False   2s
machine-api-operator   v0.1.0-142-ge791b3db-dirty   True   False   False   4s

I see this in the log when it happens

I0129 15:20:17.327786       1 sync.go:43] Synched up cluster api controller
I0129 15:20:17.438134       1 operator.go:177] Getting operator config using kubeclient
I0129 15:20:17.527225       1 sync.go:25] Syncing ClusterOperatorStatus

fyi @smarterclayton

panic in operator e2e

https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_machine-api-operator/137/pull-ci-openshift-machine-api-operator-master-e2e-aws-operator/73/build-log.txt

I1126 21:49:42.191957    2157 main.go:79] RUN: ExpectAllMachinesLinkedToANode
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0xeb6226]

goroutine 1 [running]:
main.(*testConfig).ExpectAllMachinesLinkedToANode(0xc4202ee0c0, 0x1, 0x1)
    /go/src/github.com/openshift/machine-api-operator/test/e2e/operator_expectations.go:124 +0x416
main.runSuite(0x0, 0x0)
    /go/src/github.com/openshift/machine-api-operator/test/e2e/main.go:80 +0x4fc
main.main()
    /go/src/github.com/openshift/machine-api-operator/test/e2e/main.go:44 +0x27
exit status 2
make: *** [test-e2e] Error 1
2018/11/26 21:49:43 Container test in pod e2e-aws-operator failed, exit code 2, reason Error

NodeRef is nil for the machine

Add bare-metal actuator

Hi guys,
I want to integrate our work on the bare metal actuator (it runs fencing commands via IPMI) into machine-api-operator, but currently we only fence existing nodes without provisioning support. Because of this I am having trouble implementing the bootstrapping logic under the baremetal-actuator CLI (similar to https://github.com/openshift/cluster-api-provider-libvirt/blob/d82d0107b90324aa38ea23b132bae8b8da67b3bb/cmd/libvirt-actuator/main.go#L174).

So I want to know whether that is a hard requirement for integration; if you have other requirements for the actuator, I would also like to hear them.

Link to the bare metal actuator: https://github.com/kubevirt/cluster-api-provider-external

Update vendored openshift/api code

The openshift/api code that is vendored with the machine-api-operator is currently outdated. I would like to make enhancements to the MAO that include accessing the "BareMetalPlatformStatus" structure in openshift/api/config/v1/types_infrastructure.go

Precedence in Cloud Identification

The Machine API currently gets the cloud it is running on from the Infrastructure object.

https://github.com/openshift/machine-api-operator/blob/master/pkg/operator/operator.go#L253-L258

func getProviderFromInfrastructure(infra *configv1.Infrastructure) (configv1.PlatformType, error) {
    if infra.Status.Platform == "" {
        return "", fmt.Errorf("no platform provider found on install config")
    }
    return infra.Status.Platform, nil
}

Can it be configured so that the platform is read from a config map, defaulting to the Infrastructure object?

This would be helpful for bare metal installs in any cloud that wants to use Machines and MachineSets.

MachineHealthCheck not working if Node goes down

I have an OpenShift 4.5.4 cluster on AWS and I am using a mixture of Spot Instances and on-demand instances as worker nodes. The issue is that Spot Instances can terminate at any time, so I looked into MachineHealthCheck and applied one, but it isn't doing anything. What happens is: the node gets terminated, the Machine goes into the Failed state, the MachineSet shows 0 of 1 instances, and the MachineHealthCheck also shows 0 healthy Machines, but it doesn't do anything to bring back the node...
Below is my MachineHealthCheck manifest

apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: spot-node
  namespace: openshift-machine-api
spec:
  maxUnhealthy: 0
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: lab-spot-node
  unhealthyConditions:
    - status: Unknown
      timeout: 60s
      type: Ready
    - status: 'False'
      timeout: 60s
      type: Ready
    - status: NotReady
      timeout: 60s
      type: Ready
status:
  currentHealthy: 0
  expectedMachines: 1

Is there any issue in the manifest or my understanding?

Update metrics doc with provider controller information

Currently the metrics doc contains information about getting metrics from the machine-api-operator. There are a few metrics that get exposed from the individual controllers as well, most notably the mapi_instance_*_failed metrics. These metrics, and instructions about where to scrape, should be added to the document.

Definite Issue With Pods Running Before CRDs Available

On a new cluster from openshift-install this morning I ended up with no worker nodes. I had the expected MachineSets, but no Machines.

$ klf clusterapi-manager-controllers-68d86f58b6-kgpck controller-manager                                             
2018/10/24 11:30:18 Registering Components.
2018/10/24 11:30:18 Starting the Cmd.
E1024 11:30:18.217697       1 controller.go:231] unable to create a machine = , due to machines.cluster.k8s.io "dgoodwin1-worker-0-t5tjm" is forbidden: cannot set blockOwnerDeletion
 in this case because cannot find RESTMapping for APIVersion cluster.k8s.io/v1alpha1 Kind MachineSet: no matches for kind "MachineSet" in version "cluster.k8s.io/v1alpha1"         
E1024 11:30:18.229702       1 controller.go:231] unable to create a machine = , due to machines.cluster.k8s.io "dgoodwin1-worker-1-wlbt4" is forbidden: cannot set blockOwnerDeletion
 in this case because cannot find RESTMapping for APIVersion cluster.k8s.io/v1alpha1 Kind MachineSet: no matches for kind "MachineSet" in version "cluster.k8s.io/v1alpha1"         
E1024 11:30:18.303835       1 controller.go:231] unable to create a machine = , due to machines.cluster.k8s.io "dgoodwin1-worker-2-wnjds" is forbidden: cannot set blockOwnerDeletion
 in this case because cannot find RESTMapping for APIVersion cluster.k8s.io/v1alpha1 Kind MachineSet: no matches for kind "MachineSet" in version "cluster.k8s.io/v1alpha1"         
E1024 11:30:18.337241       1 controller.go:231] unable to create a machine = , due to machines.cluster.k8s.io "dgoodwin1-worker-0-n64h9" is forbidden: cannot set blockOwnerDeletion
 in this case because cannot find RESTMapping for APIVersion cluster.k8s.io/v1alpha1 Kind MachineSet: no matches for kind "MachineSet" in version "cluster.k8s.io/v1alpha1"         
E1024 11:30:18.339724       1 controller.go:231] unable to create a machine = , due to machines.cluster.k8s.io "dgoodwin1-worker-1-g5dt8" is forbidden: cannot set blockOwnerDeletion
 in this case because cannot find RESTMapping for APIVersion cluster.k8s.io/v1alpha1 Kind MachineSet: no matches for kind "MachineSet" in version "cluster.k8s.io/v1alpha1"         
E1024 11:30:18.342185       1 controller.go:231] unable to create a machine = , due to machines.cluster.k8s.io "dgoodwin1-worker-2-lbndq" is forbidden: cannot set blockOwnerDeletion
 in this case because cannot find RESTMapping for APIVersion cluster.k8s.io/v1alpha1 Kind MachineSet: no matches for kind "MachineSet" in version "cluster.k8s.io/v1alpha1" 

The pod appears to have started running before the MachineSet API was registered. However, MachineSets did exist, just no Machines. I deleted the pod above and let the deployment recreate it, and immediately my worker Machines were created, indicating that a resync would have fixed the issue whenever it occurred (10 minutes?).

no worker nodes with latest libvirt

$ oc get clusterversion
NAME      VERSION                           AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.alpha-2019-02-11-201342   False       True          20m     Unable to apply 4.0.0-0.alpha-2019-02-11-201342: the cluster operator openshift-controller-manager has not yet successfully rolled out
$ oc get nodes
NAME              STATUS   ROLES    AGE   VERSION
osiris-master-0   Ready    master   20m   v1.12.4+9e35761882
$ oc -n openshift-machine-api get --all-namespaces machinesets
NAMESPACE               NAME              DESIRED   CURRENT   READY   AVAILABLE   AGE
openshift-cluster-api   osiris-worker-0   2                                       20m
$ oc get machines.machine.openshift.io --all-namespaces
No resources found.
$ oc get -n openshift-machine-api pods
NAME                                              READY   STATUS    RESTARTS   AGE
clusterapi-manager-controllers-7cfcd6575f-j6rxm   4/4     Running   0          16m
machine-api-operator-5bc6869c76-spscn             1/1     Running   0          17m
$ oc -n openshift-machine-api logs deploy/clusterapi-manager-controllers -c machine-controller
I0211 22:37:36.638055       1 main.go:47] Registering Components.
I0211 22:37:36.653680       1 main.go:64] Starting the Cmd.
I0211 22:37:36.654277       1 reflector.go:202] Starting reflector *v1beta1.Machine (10h0m0s) from github.com/openshift/cluster-api-provider-libvirt/vendor/sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126
I0211 22:37:36.654299       1 reflector.go:240] Listing and watching *v1beta1.Machine from github.com/openshift/cluster-api-provider-libvirt/vendor/sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126
E0211 22:41:45.637088       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=89, ErrCode=NO_ERROR, debug=""
E0211 22:41:45.638205       1 reflector.go:322] github.com/openshift/cluster-api-provider-libvirt/vendor/sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to watch *v1beta1.Machine: Get https://172.30.0.1:443/apis/machine.openshift.io/v1beta1/machines?resourceVersion=3382&timeoutSeconds=485&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
... repeats many times ...
E0211 22:44:30.437762       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
W0211 22:44:36.364222       1 reflector.go:341] github.com/openshift/cluster-api-provider-libvirt/vendor/sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: watch of *v1beta1.Machine ended with: too old resource version: 10780 (11610)
I0211 22:44:37.364452       1 reflector.go:240] Listing and watching *v1beta1.Machine from github.com/openshift/cluster-api-provider-libvirt/vendor/sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126

Dropping Container Linux?

The installer dropped support for Container Linux in openshift/installer#221. But there appear to still be some assumptions about Container Linux in this repository (e.g. here). Is there a plan for dropping that? Or am I just misunderstanding something about how this operator fits into the cluster?

Node Controller link broken in README.md

There's a link to the Node Controller in the README; that link goes to a 404.

I don't know the history, so maybe this just needs removing? Or updating to something else?

how to recover the master node when the master machine status is Failed

I have an OCP cluster deployed on GCP, with 3 master nodes + 6 worker nodes.

After running for some days, one of my master nodes failed and 3 of the worker nodes failed as well.

For worker nodes, I can delete the worker Machine CR in the openshift-machine-api namespace and let the machine-api-operator create new workers.

But for master nodes, deleting the master Machine will not trigger the re-creation of a new one, so how can I recover the master node?

hchenxa@hchenxadeMacBook-Pro:daily_work$ oc get machine -n openshift-machine-api
NAME                                      PHASE     TYPE             REGION        ZONE            AGE
sert-gcp-operator4-bpzh7-master-0         Failed    n1-standard-8    us-central1   us-central1-a   7d4h
sert-gcp-operator4-bpzh7-master-1         Running   n1-standard-8    us-central1   us-central1-b   7d4h
sert-gcp-operator4-bpzh7-master-2         Running   n1-standard-8    us-central1   us-central1-c   7d4h
sert-gcp-operator4-bpzh7-worker-a-lm67k   Failed    n1-standard-32   us-central1   us-central1-a   2d5h
sert-gcp-operator4-bpzh7-worker-b-kq6h5   Running   n1-standard-32   us-central1   us-central1-b   7d4h
sert-gcp-operator4-bpzh7-worker-c-fg7dw   Running   n1-standard-32   us-central1   us-central1-c   7d4h
sert-gcp-operator4-bpzh7-worker-c-xqtrm   Running   n1-standard-32   us-central1   us-central1-c   2d5h
sert-gcp-operator4-bpzh7-worker-f-hq8g5   Failed    n1-standard-32   us-central1   us-central1-f   2d5h
sert-gcp-operator4-bpzh7-worker-f-pnm8j   Failed    n1-standard-32   us-central1   us-central1-f   2d5h

I can delete the worker Machine CRs to let them be re-created.

hchenxa@hchenxadeMacBook-Pro:daily_work$ oc delete machine -n openshift-machine-api sert-gcp-operator4-bpzh7-worker-a-lm67k sert-gcp-operator4-bpzh7-worker-f-pnm8j sert-gcp-operator4-bpzh7-worker-f-hq8g5
machine.machine.openshift.io "sert-gcp-operator4-bpzh7-worker-a-lm67k" deleted
machine.machine.openshift.io "sert-gcp-operator4-bpzh7-worker-f-pnm8j" deleted
machine.machine.openshift.io "sert-gcp-operator4-bpzh7-worker-f-hq8g5" deleted
hchenxa@hchenxadeMacBook-Pro:daily_work$ oc get machine -n openshift-machine-api
NAME                                      PHASE         TYPE             REGION        ZONE            AGE
sert-gcp-operator4-bpzh7-master-0         Failed        n1-standard-8    us-central1   us-central1-a   7d4h
sert-gcp-operator4-bpzh7-master-1         Running       n1-standard-8    us-central1   us-central1-b   7d4h
sert-gcp-operator4-bpzh7-master-2         Running       n1-standard-8    us-central1   us-central1-c   7d4h
sert-gcp-operator4-bpzh7-worker-a-26xhq   Provisioned   n1-standard-32   us-central1   us-central1-a   53s
sert-gcp-operator4-bpzh7-worker-b-kq6h5   Running       n1-standard-32   us-central1   us-central1-b   7d4h
sert-gcp-operator4-bpzh7-worker-c-fg7dw   Running       n1-standard-32   us-central1   us-central1-c   7d4h
sert-gcp-operator4-bpzh7-worker-c-xqtrm   Running       n1-standard-32   us-central1   us-central1-c   2d5h
sert-gcp-operator4-bpzh7-worker-f-p4szb   Provisioned   n1-standard-32   us-central1   us-central1-f   52s
sert-gcp-operator4-bpzh7-worker-f-wkgr9   Provisioned   n1-standard-32   us-central1   us-central1-f   53s

openshift-tests: Managed cluster should ensure control plane pods do not run in best-effort QoS

openshift-tests complains about the QoS of the baremetal pod

failed: (1.3s) 2019-10-07T13:21:43 "[Feature:Platform][Smoke] Managed cluster should ensure control plane pods do not run in best-effort QoS [Suite:openshift/conformance/parallel]"

openshift-machine-api/metal3-788d885944-vn99t is running in best-effort QoS

https://github.com/openshift/origin/blob/32e88dec78fa081dafa31c9ea1d79897761a0cc3/test/extended/operators/qos.go#L45

kubeRBACProxy image is not available in pkg/operator/fixtures/images.json

When I run the machine-api-operator locally with the command $ ./bin/machine-api-operator start --kubeconfig ${HOME}/.kube/config --images-json=pkg/operator/fixtures/images.json, the kubeRBACProxy image cannot be downloaded:

Error response from daemon: manifest for openshift/origin-kube-rbac-proxy:v4.0.0 not found: manifest unknown: manifest unknown

There is no v4.0.0 tag in https://hub.docker.com/r/openshift/origin-kube-rbac-proxy/tags?page=1&ordering=last_updated

clusterapi-manager-controllers in `CrashLoopBackOff`, bad --log-level flag

clusterapi-manager-controllers in CrashLoopBackOff

$ oc project openshift-cluster-api
Now using project "openshift-cluster-api" on server "https://dev-api.libvirt.variantweb.net:6443".
[sjennings@cerebellum ~]$ oc get pods
NAME                                             READY     STATUS             RESTARTS   AGE
clusterapi-manager-controllers-c59577c86-pbcvk   1/2       CrashLoopBackOff   8          16m
machine-api-operator-fd9c447d8-dnlcv             1/1       Running            0          19m
[sjennings@cerebellum ~]$ oc logs clusterapi-manager-controllers-c59577c86-pbcvk -c machine-controller
flag provided but not defined: -log-level
Usage of /machine-controller-manager:
  -alsologtostderr
    	log to standard error as well as files
  -kubeconfig string
    	Paths to a kubeconfig. Only required if out-of-cluster.
  -log_backtrace_at value
    	when logging hits line file:N, emit a stack trace
  -log_dir string
    	If non-empty, write log files in this directory
  -logtostderr
    	log to standard error instead of files
  -master string
    	The address of the Kubernetes API server. Overrides any value in kubeconfig. Only required if out-of-cluster.
  -stderrthreshold value
    	logs at or above this threshold go to stderr
  -v value
    	log level for V logs
  -vmodule value
    	comma-separated list of pattern=N settings for file-filtered logging

fyi @rphillips

Support encrypted EBS

As an enterprise cluster operator, I need to encrypt my EC2 EBS volumes; it would be nice to have an option for that.
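
As a hedged sketch only, such an option could extend the blockDevices stanza of the AWS providerSpec shown elsewhere on this page; the encrypted and kmsKey fields below are the requested addition, not an API guaranteed to exist at the time of this request:

blockDevices:
- ebs:
    volumeSize: 120
    volumeType: gp2
    encrypted: true                  # requested option
    kmsKey:
      arn: arn:aws:kms:...           # optional customer-managed key (placeholder)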

[vSphere] Hashed machine names not working with external load balancer and ingress

Hi,

in our setup we have an external load balancer (Citrix Netscaler) before our OpenShift 4.5 cluster on vSphere.

If we add a new Machine (increasing a MachineSet by 1) in OpenShift, a new VM is created in vSphere where the name of the VM consists of the MachineSet name and a random hash string, e.g. "workers-sdfsda".

If one of the ingress pods is scheduled to this new VM, the problem is that the hostname of this VM is not in the backend pool of our external load balancer. The ingress can't be reached through the load balancer afterwards.

Is it possible to use a naming scheme for the ephemeral workers created by the machine-api-operator that looks like this:

<NAME OF MACHINESET>-[1 ... 1000]

We could add these hostnames to our external load balancer's backend pool in advance, and then placing ingress pods on the ephemeral workers would work out of the box.

Thanks and greetings,

Josef

The machine-api-operator overwrites taints written by either API or oc/kubectl commands

With a cluster brought up with the 4.0 installer, it seems that the only way to apply a taint is to kubectl edit machine ... and add the taint much like https://github.com/openshift/machine-api-operator/blob/master/install/0000_50_machine-api-operator_02_machine.crd.yaml#L43. If I try to do so via oc/kubectl or a direct Taint API call, it gets overwritten.

Is this by design moving forward, or am I doing something wrong? For reference, in RHCOS we would like nodes to be tainted if someone SSHes into them, and have admins be able to un-taint the machines via oc if necessary.
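
For context, the taint ends up on the Machine object itself; a minimal sketch of what the edited Machine carries (the taint key and value are placeholders):

apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  name: example-worker                 # placeholder
  namespace: openshift-machine-api
spec:
  taints:
  - key: example.com/ssh-accessed      # placeholder key
    value: "true"
    effect: NoSchedule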

New Node is not created when cloud VM is destroyed

I am trying a simple resilience test. Is this a valid scenario?

I am new to this operator so sorry if this is a problem with the underlying provider and not the operator.

Steps to reproduce

Terminate a worker node EC2 instance in the AWS console.

Expected Behaviour

The operator detects this and spawns a new Node for that Machine.

Actual Behaviour

The EC2 instance is successfully created, but the Machine still points to the old, non-existent Node and no new Node is ever created (even after days).

Additional information

Please note I retested this multiple times with the same results.

I am not sure if it is relevant, but I tried to "clear up" this inconsistent state by scaling the MachineSet to 0 and then back to 1. This did create a new Machine and a corresponding Node and EC2 instance, but the old Machine is still there and it cannot even be deleted via the CLI or Console.

Here is the immortal Machine's configuration

apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  creationTimestamp: "2019-06-20T06:54:48Z"
  deletionGracePeriodSeconds: 0
  deletionTimestamp: "2019-07-10T03:12:57Z"
  finalizers:
  - machine.machine.openshift.io
  generateName: ocp4-plbsz-worker-ap-southeast-2a-
  generation: 2
  labels:
    machine.openshift.io/cluster-api-cluster: ocp4-plbsz
    machine.openshift.io/cluster-api-machine-role: worker
    machine.openshift.io/cluster-api-machine-type: worker
    machine.openshift.io/cluster-api-machineset: ocp4-plbsz-worker-ap-southeast-2a
  name: ocp4-plbsz-worker-ap-southeast-2a-fptqg
  namespace: openshift-machine-api
  ownerReferences:
  - apiVersion: machine.openshift.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: MachineSet
    name: ocp4-plbsz-worker-ap-southeast-2a
    uid: 08bf6e32-9328-11e9-9f99-020bb2404310
  resourceVersion: "6926860"
  selfLink: /apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machines/ocp4-plbsz-worker-ap-southeast-2a-fptqg
  uid: 4b5afeb3-9328-11e9-9f99-020bb2404310
spec:
  metadata:
    creationTimestamp: null
  providerSpec:
    value:
      ami:
        id: ami-0d980796ce258b5d5
      apiVersion: awsproviderconfig.openshift.io/v1beta1
      blockDevices:
      - ebs:
          iops: 0
          volumeSize: 120
          volumeType: gp2
      credentialsSecret:
        name: aws-cloud-credentials
      deviceIndex: 0
      iamInstanceProfile:
        id: ocp4-plbsz-worker-profile
      instanceType: m4.large
      kind: AWSMachineProviderConfig
      metadata:
        creationTimestamp: null
      placement:
        availabilityZone: ap-southeast-2a
        region: ap-southeast-2
      publicIp: null
      securityGroups:
      - filters:
        - name: tag:Name
          values:
          - ocp4-plbsz-worker-sg
      subnet:
        filters:
        - name: tag:Name
          values:
          - ocp4-plbsz-private-ap-southeast-2a
      tags:
      - name: kubernetes.io/cluster/ocp4-plbsz
        value: owned
      userDataSecret:
        name: worker-user-data
status:
  addresses:
  - address: 10.0.130.185
    type: InternalIP
  - address: ""
    type: ExternalDNS
  - address: ip-10-0-130-185.ap-southeast-2.compute.internal
    type: InternalDNS
  lastUpdated: "2019-07-08T06:42:32Z"
  nodeRef:
    kind: Node
    name: ip-10-0-133-228.ap-southeast-2.compute.internal
    uid: 1ec1e629-9329-11e9-b98a-02df9d640d4c
  providerStatus:
    apiVersion: awsproviderconfig.openshift.io/v1beta1
    conditions:
    - lastProbeTime: "2019-06-20T06:56:30Z"
      lastTransitionTime: "2019-06-20T06:56:30Z"
      message: machine successfully created
      reason: MachineCreationSucceeded
      status: "True"
      type: MachineCreation
    instanceId: i-085307b37a9bd3227
    instanceState: running
    kind: AWSMachineProviderStatus

There are no interesting logs in any of the operator's pods. Here is the only thing that relates to the scenario from machine-api-controllers:

E0710 03:15:50.715705       1 node_controller.go:90] Unable to retrieve Node /ip-10-0-143-38.ap-southeast-2.compute.internal from store: Node "ip-10-0-143-38.ap-southeast-2.compute.internal" not found
I0710 03:22:11.284616       1 controller.go:297] MachineSet "ocp4-plbsz-worker-ap-southeast-2a" in namespace "openshift-machine-api" doesn't specify "cluster.k8s.io/cluster-name" label, assuming nil cluster
I0710 03:22:31.437876       1 controller.go:297] MachineSet "ocp4-plbsz-worker-ap-southeast-2a" in namespace "openshift-machine-api" doesn't specify "cluster.k8s.io/cluster-name" label, assuming nil cluster

Please let me know if I should be capturing anything else.

OpenShift 4.1.4
Kubernetes v1.13.4+c62ce01

Feature Request: Allow for local-ssd in GCP Actuator

No response on the previously created issues [1][2], so I'm giving it another try here.
[1] openshift/cluster-api-provider-gcp#124
[2] kubernetes-sigs/cluster-api-provider-gcp#316

Given that GCP local-ssds are very performant and, with a 3 year commitment, less expensive than pd-standard (I do not believe you can get commitment discounts on other storage), I would like to use them for my /var (and use a small pd-standard boot disk). The fact that they are ephemeral and I cannot shut down machines where they are present is not a concern for OpenShift nodes.

Unfortunately, the cluster-api-provider-gcp does not support them. When I create a machine config with a disk of type local-ssd I get the following error message:

error launching instance: googleapi: Error 400: Invalid value for field 'resource.disks[1].type': 'PERSISTENT'. Cannot create local SSD as persistent disk., invalid

Here is the spec/providerSpec/disk[1] definition I used:

            - autoDelete: true
              image: blank
              sizeGb: 375
              type: local-ssd

When I look at the allowed configuration values and the reconciler I don't see any way to specify the type as required by the google api.

Additionally, I could not find a way to keep the sourceImage and 'sizeGB' from being specified or set the interface.

I am happy to work on a PR but have two major obstacles:

Any help or advice is greatly appreciated.

[RFE] create machineset via CLI

Right now the only way to create a machineset is to know all of the magic settings or to essentially massage an existing machineset that the installer created for your cluster.

Being able to do something like oc create machineset or similar that would either spit out a valid machineset (by taking in various options) or spit out a machineset template (using current cluster defaults) would be really useful.
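
For illustration, the template such a command might emit is essentially a MachineSet skeleton like the one below (a sketch: all names are placeholders and the providerSpec would be filled in from cluster defaults):

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-worker-example             # placeholder
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: mycluster
      machine.openshift.io/cluster-api-machineset: mycluster-worker-example
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: mycluster
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: mycluster-worker-example
    spec:
      providerSpec:
        value: {}                            # provider-specific defaults would go here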

machine-api-operator no "--config" option

Hi, I was trying to follow https://github.com/openshift/machine-api-operator/blob/master/README.md to deploy the AWS actuator. I failed at the step ./bin/machine-api-operator --kubeconfig ${HOME}/.kube/config --config tests/e2e/manifests/mao-config.yaml --manifest-dir manifests. I found no "--config" option; am I missing something?

[root@ip-172-18-2-128 bin]# ./machine-api-operator 
Run Cluster API Controller

Usage:
  machine-api-operator [command]

Available Commands:
  help        Help about any command
  start       Starts Machine API Operator
  version     Print the version number of Machine API Operator

Flags:
      --alsologtostderr                  log to standard error as well as files
  -h, --help                             help for machine-api-operator
      --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log_dir string                   If non-empty, write log files in this directory
      --logtostderr                      log to standard error instead of files
      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
  -v, --v Level                          log level for V logs
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging

[root@ip-172-18-2-128 bin]# ./machine-api-operator version
MachineAPIOperator v0.0.0-was-not-built-properly

STS workload prevents machine-controller from deleting the node object when machine has been removed from the cluster

When an OKD cluster is running a sample StatefulSet workload (2 pods) and a machine object associated with the node running one of the pods is deleted, the node object gets stuck in the NotReady state. As a result, the STS pod gets stuck in the Terminating state and is never going to be recreated.

Version number:

Client Version: version.Info{Major:"4", Minor:"0+", GitVersion:"v4.0.0-alpha.0+af45cda-1969", GitCommit:"af45cda", GitTreeState:"", BuildDate:"2019-04-10T12:18:35Z", GoVersion:"", Compiler:"", Platform:""}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.4+838b4fa", GitCommit:"838b4fa", GitTreeState:"clean", BuildDate:"2019-05-19T23:51:04Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

Steps to Reproduce:

  1. Install OKD-4.1 cluster with NFS storage via kubevirt/kubevirtci:
# git clone https://github.com/kubevirt/kubevirtci.git
# cd kubevirtci
# mkdir nfs
# export KUBEVIRT_PROVIDER=okd-4.1.0 
# export KUBECONFIG=$($HOME/kubevirtci/cluster-up/kubeconfig.sh)
# export RHEL_NFS_DIR=$HOME/kubevirtci/nfs
# make cluster-up
  2. Scale the cluster down to 0 worker nodes
# cluster-up/oc.sh get machineset -n openshift-machine-api -o name
machineset.machine.openshift.io/test-1-qvx66-worker-0
# cluster-up/oc.sh scale --replicas=0 machineset test-1-qvx66-worker-0 -n openshift-machine-api
  3. Scale the cluster up to 3 nodes
# cluster-up/oc.sh scale --replicas=3 machineset test-1-qvx66-worker-0 -n openshift-machine-api
  4. Verify that all worker nodes received an IP
# docker ps
...
99182b5fca2d        kubevirtci/okd-4.1.0@sha256:...
# docker exec -it 99182b5fca2d /bin/bash
[99182b5fca2d]# virsh list
...
 1     test-1-qvx66-master-0          running
 3     test-1-qvx66-worker-0-mwfxb    running
 4     test-1-qvx66-worker-0-9jbpt    running
 5     test-1-qvx66-worker-0-rj5bq    running

[99182b5fca2d]# virsh net-dhcp-leases test-1-qvx66
...
192.168.126.52/24         test-1-qvx66-worker-0-9jbpt
192.168.126.11/24         test-1-qvx66-master-0
192.168.126.53/24         test-1-qvx66-worker-0-rj5bq
  5. If not (as with test-1-qvx66-worker-0-mwfxb in the example above), delete the corresponding machine object:
# cluster-up/oc.sh delete machine test-1-qvx66-worker-0-mwfxb -n openshift-machine-api

Verify again if the newly created node received an IP:

[99182b5fca2d]# virsh list
...
 1     test-1-qvx66-master-0          running
 3     test-1-qvx66-worker-0-jbn9z    running
 4     test-1-qvx66-worker-0-9jbpt    running
 5     test-1-qvx66-worker-0-rj5bq    running

[99182b5fca2d]# virsh net-dhcp-leases test-1-qvx66
IP address                Hostname
192.168.126.52/24         test-1-qvx66-worker-0-9jbpt
192.168.126.11/24         test-1-qvx66-master-0
192.168.126.53/24         test-1-qvx66-worker-0-rj5bq
192.168.126.54/24         test-1-qvx66-worker-0-ux8tu
  6. Approve pending CSRs (you might need to repeat the command) until all 3 worker nodes are running
# cluster-up/oc.sh get csr -o name | xargs cluster-up/oc.sh adm certificate approve
# cluster-up/oc.sh get nodes
NAME                          STATUS     ROLES
test-1-qvx66-master-0         Ready      master
test-1-qvx66-worker-0-9jbpt   Ready      worker
test-1-qvx66-worker-0-jbn9z   Ready      worker
test-1-qvx66-worker-0-rj5bq   Ready      worker
  7. Create two RWO PVs (see attached 'nfs-rwo-pvs.yaml' file)
# cluster-up/oc.sh create -f nfs-rwo-pvs.yaml
  8. Create a sample STS workload with 2 pods (see attached 'nginx-nfs-sts.yaml' file)
# cluster-up/oc.sh create -f nginx-nfs-sts.yaml
  9. Delete the machine where one of the STS pods is running
# cluster-up/oc.sh get po nginx-nfs-sts-0 -o jsonpath='{.spec.nodeName}{"\n"}'
test-1-qvx66-worker-0-jbn9z
# cluster-up/oc.sh delete machine test-1-qvx66-worker-0-9jbpt -n openshift-machine-api

Actual results:
The node has been deleted by actuator:

[99182b5fca2d]# virsh list
...
 1     test-1-qvx66-master-0          running
 3     test-1-qvx66-worker-0-jbn9z    running
 5     test-1-qvx66-worker-0-rj5bq    running

But the node object has been left stuck in the 'NotReady' state (Ready: Unknown).

# cluster-up/oc.sh get nodes
NAME                          STATUS     ROLES
test-1-qvx66-master-0         Ready      master
test-1-qvx66-worker-0-9jbpt   Ready      worker
test-1-qvx66-worker-0-jbn9z   NotReady   worker
test-1-qvx66-worker-0-rj5bq   Ready      worker

This is a perfectly valid state for a node that had been just shut down; however, it is undesirable for a node removed from the cluster.

As a consequence, the sample STS pod is stuck in the 'Terminating' state, waiting forever for the node to come back online.

# cluster-up/oc.sh get po
NAME              READY   STATUS
nginx-nfs-sts-0   1/1     Terminating
nginx-nfs-sts-1   1/1     Running

Expected results:
Node is removed shortly after machine object has been deleted.

# cluster-up/oc.sh get nodes
NAME                          STATUS     ROLES
test-1-qvx66-master-0         Ready      master
test-1-qvx66-worker-0-9jbpt   Ready      worker
test-1-qvx66-worker-0-rj5bq   Ready      worker

The STS pod will be recreated on another node.

# cluster-up/oc.sh get po
NAME              READY   STATUS
nginx-nfs-sts-0   0/1     ContainerCreating
nginx-nfs-sts-1   1/1     Running

Additional info:
The issue has been filed as Bug 1723355.

Assign a priority class to pods

Priority classes docs:
https://docs.openshift.com/container-platform/3.11/admin_guide/scheduling/priority_preemption.html#admin-guide-priority-preemption-priority-class

Example: https://github.com/openshift/cluster-monitoring-operator/search?q=priority&unscoped_q=priority

Notes: The pre-configured system priority classes (system-node-critical and system-cluster-critical) can only be assigned to pods in kube-system or openshift-* namespaces. Most likely, core operators and their pods should be assigned system-cluster-critical. Please do not assign system-node-critical (the highest priority) unless you are really sure about it.
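
A minimal sketch of what this looks like in a pod template, following the cluster-monitoring example linked above (the deployment shown is illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: machine-api-operator
  namespace: openshift-machine-api
spec:
  template:
    spec:
      priorityClassName: system-cluster-critical   # allowed because the namespace is openshift-*
      containers:
      - name: machine-api-operator
        # ...rest of the container spec unchanged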

GenerateMachineConfigsforRole failed with error failed to read dir

Hi,

I'm seeing a recurring message out of the machine-config-controller, which goes like this:

$ oc logs -n openshift-machine-config-operator machine-config-controller-746ddfd848-fhqb2
I1117 04:44:57.725629       1 start.go:50] Version: v4.2.4-201911050122-dirty (55bb5fc17da0c3d76e4ee6a55732f0cba93e8520)
...
I1117 08:04:59.748929       1 kubelet_config_controller.go:303] Error syncing kubeletconfig cluster: GenerateMachineConfigsforRole failed with error failed to read dir "/etc/mcc/templates/infra": open /etc/mcc/templates/infra: no such file or directory
I1117 08:04:59.757897       1 kubelet_config_controller.go:303] Error syncing kubeletconfig cluster: GenerateMachineConfigsforRole failed with error failed to read dir "/etc/mcc/templates/infra": open /etc/mcc/templates/infra: no such file or directory
I1117 08:04:59.771613       1 kubelet_config_controller.go:303] Error syncing kubeletconfig cluster: GenerateMachineConfigsforRole failed with error failed to read dir "/etc/mcc/templates/infra": open /etc/mcc/templates/infra: no such file or directory
I1117 08:04:59.796013       1 kubelet_config_controller.go:303] Error syncing kubeletconfig cluster: GenerateMachineConfigsforRole failed with error failed to read dir "/etc/mcc/templates/infra": open /etc/mcc/templates/infra: no such file or directory
I1117 08:04:59.879614       1 kubelet_config_controller.go:303] Error syncing kubeletconfig cluster: GenerateMachineConfigsforRole failed with error failed to read dir "/etc/mcc/templates/infra": open /etc/mcc/templates/infra: no such file or directory
I1117 08:04:59.964246       1 kubelet_config_controller.go:303] Error syncing kubeletconfig cluster: GenerateMachineConfigsforRole failed with error failed to read dir "/etc/mcc/templates/infra": open /etc/mcc/templates/infra: no such file or directory

Note that I did add an "infra" machineconfigpool:

$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED
infra    rendered-infra-bed1dda9d08a9f80cd088e667c206fb4    True      False      False
master   rendered-master-a54208d3fe789a6c2647471c1a6b2015   True      False      False
worker   rendered-worker-917c185a9e38f77f491fc863367e60fb   True      False      False

Using rsh, I eventually created a symlink pointing /etc/mcc/templates/infra to /etc/mcc/templates/worker, which seems to calm it down. Though eventually the controller crashes, restarts, and we are back to square one (which is another problem: the controller tends to segfault quite a lot -- see https://bugzilla.redhat.com/show_bug.cgi?id=1772680).
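
For context, the custom pool referred to above typically follows the common infra-pool pattern sketched below (the reporter's exact definition is not shown in the issue):

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
    - key: machineconfiguration.openshift.io/role
      operator: In
      values: [worker, infra]
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""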

Stop logging to stderr by default

All components are logging to stderr by default.
This is spamming the logs and makes debugging a cluster unnecessarily hard.
Log levels should reflect what the log contains: err --> an error, info --> an information message (e.g. "syncing foo from bar"). The same applies to the place logs are sent to.

A log line going to stderr will automatically be categorized as an 'err' level log by industry-standard tools like Kibana.

Makefile only supports Docker not Podman

Currently the Makefile only appears to use the docker or imagebuilder commands for some of its targets. This makes things difficult on systems that do not have Docker, or that restrict access to the Docker daemon. Additionally, the imagebuilder command does not seem to have instructions on where to find/install it.

I am wondering if we could add some detection logic to use Podman as an alternative to Docker.

Support Azure

Hi,

are there any plans to support Azure?

Thanks,
Luis

openshift-tests: Pods found with invalid container image pull policy not equal to IfNotPresent

Running openshift-tests against a bare metal IPI-deployed cluster fails with:

fail [github.com/openshift/origin/test/extended/operators/images.go:142]: Oct 7 09:22:27.146: Pods found with invalid container image pull policy not equal to IfNotPresent:

openshift-machine-api/metal3-788d885944-vn99t/metal3-dnsmasq imagePullPolicy=Always
openshift-machine-api/metal3-788d885944-vn99t/metal3-httpd imagePullPolicy=Always
openshift-machine-api/metal3-788d885944-vn99t/metal3-ipa-downloader imagePullPolicy=Always
openshift-machine-api/metal3-788d885944-vn99t/metal3-ironic-api imagePullPolicy=Always
openshift-machine-api/metal3-788d885944-vn99t/metal3-ironic-conductor imagePullPolicy=Always
openshift-machine-api/metal3-788d885944-vn99t/metal3-ironic-inspector imagePullPolicy=Always
openshift-machine-api/metal3-788d885944-vn99t/metal3-mariadb imagePullPolicy=Always
openshift-machine-api/metal3-788d885944-vn99t/metal3-rhcos-downloader imagePullPolicy=Always
openshift-machine-api/metal3-788d885944-vn99t/metal3-static-ip-manager imagePullPolicy=Always
openshift-machine-api/metal3-788d885944-vn99t/metal3-static-ip-set imagePullPolicy=Always

The test is here: https://github.com/openshift/origin/blob/4f3d9bd4502a841922b35914d0fd3036d54c5e64/test/extended/operators/images.go#L136
