community's Issues

submariner integration

We need to onboard the Submariner integration with open-cluster-management using the addon-framework.

/kind feature

Provide a document with sample user scenarios to attract potential users

To attract users who are not familiar with OCM but are interested in multi-cluster solutions, we should provide a document about what an end user can do with OCM in a sample user scenario. With this document, potential users can learn what they can do with OCM in a shorter time, instead of reading individual feature documents and piecing the functions together themselves.

Some user scenarios I have in mind (assuming OCM is already installed):

  1. To check the cluster status (memory/CPU usage), userA can run command xxx.
  2. To create configuration (e.g., a CRD) or a service on the managed cluster, userA can run command xxx with file xxx.
  3. To create services on a managed cluster, userA can run command xxx with file xxx.
  4. To dispatch workload to a managed cluster through the hub cluster's API server, userA can run command xxx with a ManifestWork or xxx (see the ManifestWork sketch below).
  5. To dispatch workload to a managed cluster directly (avoiding performance impact on the hub API server) based on custom placement rules, userA can check the custom PlacementDecision first, then run command xxx with clusternet.

We can mention some in-progress features if necessary.
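
For scenario 4, a minimal ManifestWork might look like the following sketch (the cluster namespace and workload are illustrative assumptions, not part of this proposal):

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: example-work         # hypothetical name
  namespace: cluster1        # the target managed cluster's namespace on the hub
spec:
  workload:
    manifests:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: hello
          namespace: default
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: hello
          template:
            metadata:
              labels:
                app: hello
            spec:
              containers:
                - name: hello
                  image: quay.io/example/hello:latest   # hypothetical image

Applying this on the hub makes the work agent on cluster1 create the Deployment there.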

Apply to be an OCM member

Community Participant
Asked and answered questions on the Slack channel "#open-cluster-mgmt".

Proposed features/enhancements and contributed ideas, such as adding a user scenario ... and updating the placement API with PrioritizerConfigs.

Contributed bug fixes to placement, registration, and community.

Requirements

  • Enabled [two-factor authentication] on their GitHub account
    Yes
  • Have made multiple contributions to the project or community. Contribution may include, but is not limited to:
    - Authoring or reviewing PRs on GitHub
    - Filing or commenting on issues on GitHub
    - Contributing to documentation, blogs, tutorials
    - Contributing to test automation
    - Contributing to community discussions (e.g. meetings, Slack, mailing list discussion)
    Yes
  • Subscribed to [https://groups.google.com/g/open-cluster-management]
    Yes
  • Have read the [contributor guide]
    Yes
  • Actively contributing to 1 or more subprojects.
    Yes
  • Sponsored by 2 reviewers. Note the following requirements for sponsors:
    • Sponsors must have close interactions with the prospective member, e.g. code/design/proposal review, coordinating on issues, etc.
    • Sponsors must be reviewers or approvers in at least 1 OWNERS file in any repo in the [Open-Cluster-Management org], or the org they are sponsoring for.
    • An approver/reviewer in the [Open-Cluster-Management org] may sponsor someone for the [Open-Cluster-Management org]
      or any of the related [Open-Cluster-Management GitHub organizations], as long as it's a project they're involved with.
    • Sponsors must be from multiple member companies to demonstrate integration across the community.
      Yes. Talked with Yuan Yuan and Jian; they are OK with sponsoring this application.
  • [Open an issue][membership request] against the Open-Cluster-Management/community repo
    • Ensure your sponsors are @mentioned on the issue
    • Complete every item on the checklist
    • Make sure that the list of contributions included is representative of your work on the project.
      Yes
  • Have your sponsoring reviewers reply confirmation of sponsorship: +1
  • Once your sponsors have responded, your request will be reviewed by the [Open-Cluster-Management GitHub Admin team]. Any missing information will be requested.

Do we need any of this template info?
Continues to contribute regularly, as demonstrated by having at least [TODO: Number] [TODO: Metric] a year, as demonstrated by [TODO: contributor metrics source]. -> placement, registration, community.
[TODO: Number] accepted PRs -> 4
Reviewed [TODO: Number] PRs -> 4
Resolved and closed [TODO: Number] issues -> 4
Must have been contributing for at least [TODO: Number] months -> 3

Search is not operational in the community version

The search component is currently not operational when installing the community version due to restrictions with the database component.

We are planning to replace the database component so we can enable the search capability.

`File exists` conflict for kubebuilder

Related Repos

https://github.com/open-cluster-management-io/config-policy-controller
https://github.com/open-cluster-management-io/governance-policy-template-sync
https://github.com/open-cluster-management-io/governance-policy-status-sync
https://github.com/open-cluster-management-io/governance-policy-spec-sync
https://github.com/open-cluster-management-io/governance-policy-propagator

What happened

I ran make test-dependency and got an error:

mv: cannot move '/tmp/kubebuilder_2.3.0_linux_amd64' to '/usr/local/kubebuilder/kubebuilder_2.3.0_linux_amd64': File exists

This is because I had tested another repo before and had already run make test-dependency once.

Do we need to check whether /usr/local/kubebuilder/kubebuilder_2.3.0_linux_amd64 already exists before running mv?
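
A minimal guard might look like this (a sketch, assuming the paths from the error message above):

# Skip the mv if a previous run already installed kubebuilder
if [ ! -d /usr/local/kubebuilder/kubebuilder_2.3.0_linux_amd64 ]; then
    sudo mv /tmp/kubebuilder_2.3.0_linux_amd64 /usr/local/kubebuilder/kubebuilder_2.3.0_linux_amd64
fi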

No "IamPolicy" on Hub

What I have done:

Step1: Install Policy framework

Step2: Install Policy controllers

Step3: Apply the IamPolicy on OKD with kubectl apply -f policy-test/iam/policy-iam.yaml; the yaml file is:

apiVersion: policy.open-cluster-management.io/v1
kind: IamPolicy
metadata:
  name: iam-grc-policy
  labels:
    category: "System-Integrity"
spec:
  namespaceSelector:
    include: ["default","kube-*"]
    exclude: ["kube-system"]
  remediationAction: inform
  disabled: false
  maxClusterRoleBindingUsers: 5
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-iam
placementRef:
  name: placement-policy-iam
  kind: PlacementRule
  apiGroup: apps.open-cluster-management.io
subjects:
  - name: iam-grc-policy
    kind: IamPolicy
    apiGroup: policy.open-cluster-management.io
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-policy-iam
spec:
  clusterConditions:
    - status: "True"
      type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions: []

Then I got an error:

error: unable to recognize "policy-test/iam/policy-iam.yaml": no matches for kind "IamPolicy" in version "policy.open-cluster-management.io/v1"
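
"no matches for kind" means the IamPolicy CRD is not registered on the cluster. A quick way to verify (the CRD name is inferred from the API group in the error):

kubectl get crd iampolicies.policy.open-cluster-management.io

If the CRD is missing, the policy controller installation in Step2 most likely did not install it.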

Lack of default env variable in Makefile

Related Repo

https://github.com/open-cluster-management-io/config-policy-controller

What happened

I ran make create-ns and got an error:

error: exactly one NAME is required, got 0

This is because make create-ns aims to create two namespaces:

create-ns:
	@kubectl create namespace $(CONTROLLER_NAMESPACE) || true
	@kubectl create namespace $(WATCH_NAMESPACE) || true

But there is no default value set for CONTROLLER_NAMESPACE in the Makefile.

It should be fixed by adding a default value for CONTROLLER_NAMESPACE, as sketched below.
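
A minimal sketch of the fix (the default value here is an illustrative assumption, not the project's chosen default):

# Provide a fallback so `make create-ns` works out of the box
CONTROLLER_NAMESPACE ?= open-cluster-management-agent-addon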

Provide workload monitor guidance

There is no workload monitor in OCM on the management hub now; it is inconvenient for users to check job status on the managed clusters. Please provide guidance about integration with workload monitoring tools, like Thanos on ACM.

Make deploy failed for `config-policy-controller`

Related Repo

https://github.com/open-cluster-management-io/config-policy-controller

What happened

First issue

I ran make deploy and got an error:

make: `deploy' is up to date.

This should be fixed by adding .PHONY: deploy before the target:

deploy:
	kubectl apply -f deploy/ -n $(CONTROLLER_NAMESPACE)
	kubectl apply -f deploy/crds/ -n $(CONTROLLER_NAMESPACE)
	kubectl set env deployment/$(IMG) -n $(CONTROLLER_NAMESPACE) WATCH_NAMESPACE=$(WATCH_NAMESPACE)

Second issue

After adding .PHONY, it still did not work and produced an error:

error: unable to recognize "deploy/crds/policy.open-cluster-management.io_v1alpha1_configurationpolicy_cr.yaml": no matches for kind "ConfigurationPolicy" in version "policy.open-cluster-management.io/v1"
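
Note the mismatch in the error: the CR file is named for v1alpha1, yet the server rejects kind "ConfigurationPolicy" in version policy.open-cluster-management.io/v1. A plausible cause (an assumption, not a confirmed root cause) is that the CR's apiVersion was bumped to v1 while the installed CRD still only serves v1alpha1. Checking which versions the installed CRD serves (assuming an apiextensions/v1 CRD):

kubectl get crd configurationpolicies.policy.open-cluster-management.io \
  -o jsonpath='{.spec.versions[*].name}'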

cluster-manager-xxx-webhook-sa cannot list resources due to missing permissions

In a hub cluster, cluster-manager-work-webhook outputs the error log below:

reflector.go:138] k8s.io/client-go@<version>/tools/cache/reflector.go:167: Failed to watch *v1beta1.PriorityLevelConfiguration: failed to list *v1beta1.PriorityLevelConfiguration: prioritylevelconfigurations.flowcontrol.apiserver.k8s.io is forbidden: User "system:serviceaccount:open-cluster-management-hub:cluster-manager-work-webhook-sa" cannot list resource "prioritylevelconfigurations" in API group "flowcontrol.apiserver.k8s.io" at the cluster scope

and cluster-manager-registration-webhook outputs similar errors:

reflector.go:138] k8s.io/client-go@<version>/tools/cache/reflector.go:167: Failed to watch *v1beta1.FlowSchema: failed to list *v1beta1.FlowSchema: flowschemas.flowcontrol.apiserver.k8s.io is forbidden: User "system:serviceaccount:open-cluster-management-hub:cluster-manager-registration-webhook-sa" cannot list resource "flowschemas" in API group "flowcontrol.apiserver.k8s.io" at the cluster scope
reflector.go:138] k8s.io/client-go@<version>/tools/cache/reflector.go:167: Failed to watch *v1beta1.PriorityLevelConfiguration: failed to list *v1beta1.PriorityLevelConfiguration: prioritylevelconfigurations.flowcontrol.apiserver.k8s.io is forbidden: User "system:serviceaccount:open-cluster-management-hub:cluster-manager-registration-webhook-sa" cannot list resource "prioritylevelconfigurations" in API group "flowcontrol.apiserver.k8s.io" at the cluster scope

image: quay.io/open-cluster-management/registration:latest (SHA256: 9a9db2eb9c8a)
clustermanager csv 0.4.0
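
A hedged sketch of the RBAC that would silence these errors (the role name is hypothetical; the group, resources, and verbs come straight from the log lines):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-manager-webhook-flowcontrol   # hypothetical name
rules:
  - apiGroups: ["flowcontrol.apiserver.k8s.io"]
    resources: ["flowschemas", "prioritylevelconfigurations"]
    verbs: ["list", "watch"]

A matching ClusterRoleBinding to each webhook ServiceAccount would also be needed.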

Deploying the subscription operators to a managed cluster failed on OKD

Ran make deploy-community-managed as this document said and got an error when downloading "build-harness-bootstrap".

Fixed it after deleting -H 'Authorization: token ${GITHUB_TOKEN}' from this line: https://github.com/open-cluster-management/multicloud-operators-subscription/blob/59b66a70ce1b02a243db9140c917e7caaa209b09/Makefile#L67

-include $(shell curl -H 'Authorization: token ${GITHUB_TOKEN}' -H 'Accept: application/vnd.github.v4.raw' -L https://api.github.com/repos/open-cluster-management/build-harness-extensions/contents/templates/Makefile.build-harness-bootstrap -o .build-harness-bootstrap; echo .build-harness-bootstrap)

Maybe another approach is to check whether GITHUB_TOKEN exists, as governance-policy-framework does:

ifndef GITHUB_TOKEN
-include $(shell curl -H 'Accept: application/vnd.github.v4.raw' -L https://api.github.com/repos/open-cluster-management/build-harness-extensions/contents/templates/Makefile.build-harness-bootstrap -o .build-harness-bootstrap; echo .build-harness-bootstrap)
else
-include $(shell curl -H 'Authorization: token ${GITHUB_TOKEN}' -H 'Accept: application/vnd.github.v4.raw' -L https://api.github.com/repos/open-cluster-management/build-harness-extensions/contents/templates/Makefile.build-harness-bootstrap -o .build-harness-bootstrap; echo .build-harness-bootstrap)
endif

Import managed cluster failed due to a version conflict on OKD

After completing these steps:

Step1 : Install community operator from OperatorHub.io
Step2 : Create a Cluster Manager on console
Step3 : Install the managedcluster-import-controller from source files
Step4 : Manually register a cluster

We got this error:

E0526 05:51:18.661381       1 lease_controller.go:127] unable to get cluster lease "managed-cluster-lease" on hub cluster: leases.coordination.k8s.io "managed-cluster-lease" is forbidden: User "system:open-cluster-management:cluster1:cv6xm" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "cluster1"

We fixed it by changing all image versions in the import.yaml from "latest" to "0.0.3" in Step4.

Application lifecycle management: container start failed

Related Repo

https://github.com/open-cluster-management/multicloud-operators-subscription

What happened

After running make deploy-managed, the container failed to start with the following error message:

unknown flag: --cluster-namespace

After deleting - --cluster-namespace=managed_cluster_name # cluster1 from deploy/managed/operator.yaml, the container started successfully, but then the log showed an error:

I0706 06:32:32.132637       1 manager.go:60] LeaderElection enabled as running in a cluster
I0706 06:32:32.222951       1 manager.go:118] Starting ... Registering Components for cluster: cluster1/cluster1
I0706 06:32:32.504304       1 namespace_subscriber.go:126] default namespace subscriber with id:cluster1/cluster1
I0706 06:32:32.505549       1 namespace_subscriber.go:129] Done setup namespace subscriber
I0706 06:32:32.509991       1 subscription.go:1013] No multiclusterHub resource found, err: the server could not find the requested resource
I0706 06:32:32.510045       1 controller.go:54] Add helmrelease controller when the remote subscription is NOT running on hub or standalone subscription
I0706 06:32:32.510384       1 helmrelease_controller.go:84] The MaxConcurrentReconciles is set to: 10
E0706 06:32:32.556900       1 placement.go:131] ACM Cluster API service NOT ready: no matches for kind "ManagedCluster" in version "cluster.open-cluster-management.io/v1"
I0706 06:32:32.586759       1 spoke_token_controller.go:78] Adding klusterlet token controller.
I0706 06:32:32.635489       1 lease_controller.go:72] trying to update lease "open-cluster-management-agent-addon"/"application-manager"
I0706 06:32:32.636434       1 manager.go:187] Starting the Cmd.
I0706 06:32:32.636588       1 leaderelection.go:243] attempting to acquire leader lease kube-system/multicloud-operators-remote-subscription-leader.open-cluster-management.io...
I0706 06:32:32.650651       1 lease_controller.go:113] addon lease "open-cluster-management-agent-addon"/"application-manager" updated
I0706 06:32:49.265782       1 leaderelection.go:253] successfully acquired lease kube-system/multicloud-operators-remote-subscription-leader.open-cluster-management.io
I0706 06:32:49.266868       1 sync_server.go:170] start synchronizer
I0706 06:32:49.309966       1 discovery.go:86] Synchronizer cache (re)started
I0706 06:32:49.310041       1 sync_server.go:190] stop synchronizer
E0706 06:32:49.310268       1 leaderelection.go:325] error retrieving resource lock kube-system/multicloud-operators-remote-subscription-leader.open-cluster-management.io: Get "https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/multicloud-operators-remote-subscription-leader.open-cluster-management.io": context canceled
I0706 06:32:49.310650       1 leaderelection.go:278] failed to renew lease kube-system/multicloud-operators-remote-subscription-leader.open-cluster-management.io: timed out waiting for the condition
I0706 06:32:49.310821       1 sync_server.go:231] stop synchronizer channel
E0706 06:32:49.310886       1 manager.go:191] controller was started more than once. This is likely to be caused by being added to a manager multiple timesManager exited non-zero

ManagedCluster capacity includes unschedulable worker nodes

During a maintenance window, some worker nodes may be cordoned (marked unschedulable) for repair work, which may take a long time. When the managed cluster calculates total capacity, the capacity of these cordoned worker nodes should be excluded.

Today, when I cordon some worker nodes in the managed cluster, there is no capacity change in the ManagedCluster CR status:
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: managed-cluster1
status:
  allocatable:
    cpu: "8"
    memory: 13504Mi
  capacity:
    cpu: "8"
    memory: 15504Mi

Support calling the API of services running in the managed cluster directly from the hub cluster (a.k.a. push mode)

We have the following use cases, which require calling the K8s API or an ingress endpoint of services in the managed cluster directly from the hub cluster.

  1. Monitoring: Query the current detailed status of the resource being provisioned
  2. Maintenance with out-of-band access: The agent in the managed cluster that normally retrieves Kubernetes resources from the hub cluster (a.k.a. pull mode) does not work
  3. Compatibility: Support applications that used to work with the "Kubefed" solution

A few open source projects support an API proxy that could be leveraged in OCM to support push mode, such as ANP (https://github.com/kubernetes-sigs/apiserver-network-proxy) and clusternet (https://github.com/clusternet/clusternet). Alternatively, it could be implemented in an existing OCM component or by introducing a new OCM component.

Need a simple deployment method that lays down all the OCM pieces

Currently, we don’t have an easy way to stand up the entire community edition of open cluster management. Ideally, we should have an installer or an operator that performs the deployment.

This deployment is about getting all the open pieces to stack up properly and requires effort from all components, not just from install/mch-operator.

The current deployment method is either via the OKD OperatorHub marketplace or installing from each component's source repo. See https://open-cluster-management.io for more details.

Migrate Build Infrastructure from Travis to Prow

Most OCM components are built using Travis which limits the ability of external collaborators to submit pull requests that are automatically tested.

To address this, we are working to migrate the build infrastructure from Travis to OpenShift CI Prow. This will allow us to approve external collaborators to open PRs that can be automatically tested before they are reviewed and merged.

The work currently consists of these tasks:

  • Develop a Prow-based build harness extension to integrate Prow builds into the ACM pipeline
  • Create a Prow image builder for Go
  • Create a Prow image builder for NodeJS
  • Unify our Travis and Prow build harness extensions to ease migration
  • Develop Prow workflows to allow testing to use cluster pools to create multiple clusters during testing
  • Develop a sample component as a template for migrating from Travis to Prow
  • Document the steps needed to migrate a repo from Travis to Prow

The Prow based build harness extensions are here:
https://github.com/open-cluster-management/build-harness-ext-osci

The Prow image builder for Go is here (this is where the NodeJS image builder will be added as well):
https://github.com/open-cluster-management/image-builder

The Travis based build harness extensions are here (this is where the Prow based extensions will be integrated):
https://github.com/open-cluster-management/build-harness-extensions

There are a few OCM repos that were created in Prow. Those repos are configured here in OpenShift CI:
https://github.com/openshift/release/tree/master/ci-operator/config/open-cluster-management
https://github.com/openshift/release/tree/master/ci-operator/jobs/open-cluster-management

The Prow workflows will be here:
https://github.com/openshift/release/tree/master/ci-operator/step-registry/acm

Links to other relevant repos and documentation will be added as they are created.
This issue will be updated as work progresses and plans change.

We anticipate this work being completed by the end of January 2021, but this is not a firm schedule and is subject to change.

The risk of token expiry on OKD

What I have done: I installed the policy framework and policy controllers successfully.

What happened: The day after I installed the policy framework and policy controllers, policies could no longer be propagated.

The reason:

To log in to an OKD cluster, we use the command:

oc login ...

But oc login only generates a user token (not a certificate or username & password) in the kubeconfig.

- name: kube:admin/api-aws-okd-dev04-red-chesterfield-com:6443
  user:
    token: sha256~7Zc0cmUbXBFWTRP5pUOZL8C8ZlP45pGrdedoNixFSA4

Since we store this kubeconfig as a secret when installing the policy framework on the managed cluster, the information can no longer be synced after the token expires.
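
A possible workaround (a sketch under assumptions, not the project's documented fix; all names are hypothetical) is to build the hub kubeconfig around a ServiceAccount token instead of an oc login token:

# Create a dedicated ServiceAccount and grant it access (scope this down in practice)
kubectl create serviceaccount policy-sync -n open-cluster-management
kubectl create clusterrolebinding policy-sync \
  --clusterrole=cluster-admin \
  --serviceaccount=open-cluster-management:policy-sync
# On Kubernetes 1.24+, mint a long-lived token to place in the kubeconfig secret
kubectl create token policy-sync -n open-cluster-management --duration=8760h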

Download build-harness-bootstrap failed because of rate limiting

Related Repos

https://github.com/open-cluster-management-io/config-policy-controller
https://github.com/open-cluster-management-io/governance-policy-template-sync
https://github.com/open-cluster-management-io/governance-policy-status-sync
https://github.com/open-cluster-management-io/governance-policy-spec-sync
https://github.com/open-cluster-management-io/governance-policy-propagator

What happened

Running make build-image failed; checking .build-harness-bootstrap shows the following:

{"message":"API rate limit exceeded for 66.187.232.127. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)","documentation_url":"https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting"}

Checking the Makefile, I found we download the bootstrap file every time a command runs:

ifndef USE_VENDORIZED_BUILD_HARNESS
	ifeq ($(TRAVIS_BUILD),1)
	-include $(shell curl -H 'Accept: application/vnd.github.v4.raw' -L https://api.github.com/repos/open-cluster-management/build-harness-extensions/contents/templates/Makefile.build-harness-bootstrap -o .build-harness-bootstrap; echo .build-harness-bootstrap)
	endif
else
-include vbh/.build-harness-vendorized
endif

Is it possible to skip the download if the file already exists?
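
One way to do that is Make's $(wildcard) function; a sketch adapting the snippet above (untested against the real build harness):

ifndef USE_VENDORIZED_BUILD_HARNESS
ifeq ($(TRAVIS_BUILD),1)
ifeq (,$(wildcard .build-harness-bootstrap))
# Only hit the GitHub API when the bootstrap file is absent
-include $(shell curl -H 'Accept: application/vnd.github.v4.raw' -L https://api.github.com/repos/open-cluster-management/build-harness-extensions/contents/templates/Makefile.build-harness-bootstrap -o .build-harness-bootstrap; echo .build-harness-bootstrap)
else
-include .build-harness-bootstrap
endif
endif
else
-include vbh/.build-harness-vendorized
endif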

How to add a ManagedCluster into multiple ManagedClusterSets

To add a ManagedCluster to a ManagedClusterSet, the user needs to set the label cluster.open-cluster-management.io/clusterset={clusterset name} on the ManagedCluster.
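
For example (the cluster and clusterset names are illustrative):

kubectl label managedcluster cluster1 cluster.open-cluster-management.io/clusterset=clusterset1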

Does this mean a ManagedCluster cannot be added to multiple ManagedClusterSets?

Meeting information not in expected location

The README says:

Meeting dial-in details, meeting notes and agendas are announced and published to the open-cluster-management mailing list on Google Groups

...but there's no information in the Google Group; the meeting details appear to be on the project board instead. Should we update the README, or post to the mailing list?

`go get` not working for policy framework

go get -u github.com/open-cluster-management-io/governance-policy-propagator
go get: github.com/open-cluster-management-io/governance-policy-propagator@none updating to
	github.com/open-cluster-management-io/governance-policy-propagator@<version>: parsing go.mod:
	module declares its path as: github.com/open-cluster-management/governance-policy-propagator
	        but was required as: github.com/open-cluster-management-io/governance-policy-propagator
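
The error itself suggests a workaround: the module declares its path without the -io suffix, so fetching it under the declared path should work (a sketch, untested):

go get -u github.com/open-cluster-management/governance-policy-propagator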

I'm thinking it won't work for our other repos either, but we'll need to verify.

Generate policies from standard resources to make creating policies easier

Easily create ACM policies from various sources of policy.

Some sample sources of policies might include:

  • k8s CRs, e.g., roles, role bindings, etc.
  • REGO policies enforced by OPA
  • Kyverno policies
  • etc

The goal is to point at one of the sources above and provide placement details, so this new tool can auto-generate ACM policies.

A script may be the starting point, and a CLI could be integrated later to enhance this.
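
A hedged sketch of what the generated output could look like, wrapping a source Role in a Policy/ConfigurationPolicy pair (all names are illustrative; the exact shape the tool emits is an open design question):

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: generated-role-policy       # illustrative name
  namespace: policies
spec:
  remediationAction: inform
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: generated-role-policy
        spec:
          remediationAction: inform
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: rbac.authorization.k8s.io/v1
                kind: Role
                metadata:
                  name: example-role        # the source resource being wrapped
                  namespace: default
                rules: []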

ginkgo: command not found

Related Repos:

https://github.com/open-cluster-management-io/config-policy-controller
https://github.com/open-cluster-management-io/governance-policy-template-sync
https://github.com/open-cluster-management-io/governance-policy-status-sync
https://github.com/open-cluster-management-io/governance-policy-spec-sync
https://github.com/open-cluster-management-io/governance-policy-propagator

What happened

Running make e2e-test produces an error:

ginkgo -v --slowSpecThreshold=10 test/e2e
/bin/bash: ginkgo: command not found
make: *** [e2e-test] Error 127

This is because ginkgo is installed under .go/bin, the default GOBIN set in the Makefile.

GOPATH_DEFAULT := $(PWD)/.go
export GOPATH ?= $(GOPATH_DEFAULT)
GOBIN_DEFAULT := $(GOPATH)/bin
export GOBIN ?= $(GOBIN_DEFAULT)

Adding export PATH := $(PATH):$(GOBIN) should fix this.
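
A sketch of the Makefile variables after the fix (the placement of the new line is an assumption):

GOPATH_DEFAULT := $(PWD)/.go
export GOPATH ?= $(GOPATH_DEFAULT)
GOBIN_DEFAULT := $(GOPATH)/bin
export GOBIN ?= $(GOBIN_DEFAULT)
# Make locally installed tools like ginkgo visible to recipe shells
export PATH := $(PATH):$(GOBIN)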

Enhance clusteradm so that users can submit workloads to the target cluster directly

There is currently no way for a user to submit a workload directly to the target managed cluster; the user has to write code to read the PlacementDecision, analyze it, and then forward the workload to the target cluster, which is inconvenient.

A draft idea is to enhance clusteradm so the user can submit a workload as follows:

  1. The user submits the workload with "clusteradm submit-workload xxx.yaml --placement abc".
  2. clusteradm checks the managed cluster candidates in the specified placement, then forwards xxx.yaml to the target cluster via clusternet or ManifestWork.
  3. The managed cluster applies xxx.yaml, and the workload in it runs there.

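Today, the manual part of step 2 amounts to reading the PlacementDecision by hand, roughly like this (the namespace is illustrative and the label key is an assumption about how decisions are linked to placements):

kubectl get placementdecision -n default \
  -l cluster.open-cluster-management.io/placement=abc \
  -o jsonpath='{.items[*].status.decisions[*].clusterName}'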

ManifestWork enhancement: query the status of workloads created by ManifestWork

  1. An end user wants to run a Job (e.g. a tf job) in OCM.
  2. The end user creates a ManifestWork in one managed cluster's namespace.
  3. The managed cluster pulls the ManifestWork and runs the Job.
  4. The end user wants to know the status of the Job (e.g. the tf job).

OCM should provide a way to let the end user know the status of the real workload (the tf job in the scenario above).
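
A hedged sketch of what such status feedback could look like on the ManifestWork spec (the manifestConfigs/feedbackRules field names are assumptions, not a settled API):

spec:
  manifestConfigs:
    - resourceIdentifier:
        group: batch
        resource: jobs
        namespace: default
        name: tf-job              # hypothetical Job name
      feedbackRules:
        - type: JSONPaths
          jsonPaths:
            - name: succeeded
              path: .status.succeeded

The work agent would then copy .status.succeeded from the Job on the managed cluster back into the ManifestWork status on the hub.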

Create resource "Application" failed on OKD

What I have done:

Step1: Install from source

Step2: Run kubectl apply -f application.yaml; the yaml file is:

apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: okd
spec:
  componentKinds:
    - group: apps.open-cluster-management.io
      kind: Subscription
  descriptor: {}

Then I got an error:

Error "failed calling webhook "applications.apps.open-cluster-management.webhook": Post "https://multicluster-operators-application-svc.multicluster-operators.svc:443/app-validate?timeout=10s": no endpoints available for service "multicluster-operators-application-svc"" for field "undefined".

Slack Channels?

Hi,

Is there any public Slack channel where users can ask questions?

Thank you.

Enhance the ManifestWork controller to resolve field ownership conflicts in the OCM environment

Use case scenario:
Suppose a user deploys a ManifestWork in the OCM environment and this ManifestWork contains a Deployment A whose replicas field is 5. However, this OCM environment also has an HPA controller that owns and manages the replicas field for every Deployment. The HPA controller changes Deployment A's replicas to 3, but the work agent then changes it back to 5, since the work agent believes it owns the replicas field and enforces the desired value. This starts an infinite loop: Deployment A's replicas flip from 5 to 3, 3 to 5, 5 to 3, 3 to 5...

This is not what we want. What we want is to deploy a Deployment via ManifestWork, hand ownership of one specific field to another actor (for example, give the replicas field to the HPA controller), and have the work agent leave that field alone when the other actor changes it.
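
A hedged sketch of what this could look like in the ManifestWork spec, delegating the field via server-side apply (the updateStrategy/ignoreFields shape is an assumption about a possible API, not something this issue defines):

spec:
  manifestConfigs:
    - resourceIdentifier:
        group: apps
        resource: deployments
        namespace: default
        name: deployment-a        # hypothetical name
      updateStrategy:
        type: ServerSideApply
        serverSideApply:
          ignoreFields:
            - condition: OnSpokePresent   # stop enforcing these fields once the resource exists on the spoke
              jsonPaths:
                - .spec.replicas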

Placement enhancement: Support enabled/disabled klusterlet addon in predicate clusterSelector match expression

Summary

Placement should support a match expression within the clusterSelector predicate that can operate on klusterlet addon enabled/disabled status.

Example

I need to select managed clusters that have the policyController addon enabled, because I want to utilize GRC policies to manage those clusters.

Currently I cannot create a valid Placement that accomplishes this goal, because klusterlet addon enabled/disabled status is not available for use in the match expression of the cluster selector.
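
A hedged sketch of what the requested selection could look like, assuming addon availability were surfaced as a cluster label (the label key and value are purely illustrative):

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: policy-enabled-clusters
  namespace: default
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions:
            - key: feature.open-cluster-management.io/addon-policy-controller   # illustrative label
              operator: In
              values:
                - available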

Install Multicluster Subscription Operator from OperatorHub.io failed due to missing resource "KlusterletAddonConfig" on OKD

Installing Application lifecycle management on OKD as this document described:

The container multicluster-operators-argocdcluster-xxx failed to start:

E0526 10:06:50.536999       1 manager.go:99] no matches for kind "KlusterletAddonConfig" in version "agent.open-cluster-management.io/v1"Manager exited non-zero

Fixed by installing the KlusterletAddonConfig CRD first.
See how to do it at: https://github.com/open-cluster-management/klusterlet-addon-controller

Policy framework should create metrics that reflect when policies are not compliant

The GRC Policy framework needs to expose some metrics to help get visibility into what is going on inside the policy framework. The metrics must provide details about which policies are compliant and not compliant, and which managed cluster caused the noncompliance. There is also a need to know how many managed clusters a policy is distributed to.
