
Kubernetes controller-runtime Project

The Kubernetes controller-runtime Project is a set of Go libraries for building Controllers. It is leveraged by Kubebuilder and Operator SDK, both of which are a great place to start for new projects. See Kubebuilder's Quick Start for an example of how it can be used.

Documentation:

Versioning, Maintenance, and Compatibility

The full documentation can be found at VERSIONING.md, but TL;DR:

Users:

  • We follow Semantic Versioning (semver)
  • Use releases with your dependency management to ensure that you get compatible code
  • The main branch contains all the latest code, some of which may break compatibility (so "normal" go get is not recommended)

Contributors:

FAQ

See FAQ.md

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

controller-runtime is a subproject of the kubebuilder project in sig apimachinery.

You can reach the maintainers of this project at:

Contributing

Contributions are greatly appreciated. The maintainers actively manage the issues list, and try to highlight issues suitable for newcomers. The project follows the typical GitHub pull request model. See CONTRIBUTING.md for more details. Before starting any work, please either comment on an existing issue, or file a new one.

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.


controller-runtime's Issues

Doesn't appear to honour KUBECONFIG environment variable

In pkg/client/config/config.go there's a check to see whether KUBECONFIG is set, but then its value is not used:

if len(os.Getenv("KUBECONFIG")) > 0 {
    return clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
}

When using this from kubebuilder, it worked for me once I changed the above to

if len(os.Getenv("KUBECONFIG")) > 0 {
    return clientcmd.BuildConfigFromFlags(masterURL, os.Getenv("KUBECONFIG"))
}

Remove vendor directory

It's not clear to me why the vendor directory exists in the source code in the first place; dep will pull and "install" those dependencies as needed.
For what it's worth, I'm trying to consume controller-runtime in a library project that uses glide, and somehow the Go toolchain hasn't figured out that the libraries in controller-runtime/vendor/ are the same as the ones in my library's vendor directory, so I get type mismatch errors all over.

Decouple webhook and manager packages

Context

I'm currently integrating multicluster-controller with controller-runtime's webhook package, which, unfortunately, depends on the manager package.

Multicluster-controller doesn't use controller-runtime's manager package, but a more lightweight version that only orchestrates runnables (caches and controllers), while the cluster dependencies are extracted into multiple Cluster structs.

Problem

Controller-runtime's webhook package depends on the manager.Manager interface, even though it only needs a fraction of it, namely Add(Runnable) for the server, and GetScheme() and GetRESTMapper() for the builder.

I could create an ad-hoc implementation of manager.Manager on my side, with lots of panics in the methods webhook doesn't need, forwarding the three that are actually needed to a multicluster-controller manager and cluster, but I believe there is a cleaner solution, which could also benefit others.

Proposed Solution

Define small interfaces where they're needed. They will be implemented implicitly by controller-runtime's controllerManager struct, and could be implemented by third-party packages.

// in pkg/webhook/server.go
type Manager interface {
	Add(Runnable) error
}

// in pkg/webhook/admission/builder/builder.go
type Manager interface {
	GetScheme() *runtime.Scheme
	GetRESTMapper() meta.RESTMapper
}

It should also make testing easier, and would be arguably more idiomatic: https://blog.chewxy.com/2018/03/18/golang-interfaces/

If you agree to this proposed change, I can submit a PR.

runtime/log.go compilation is broken

The runtime/log.go file has a compilation issue, probably introduced in PR #71:

vendor/sigs.k8s.io/controller-runtime/pkg/runtime/log/log.go:10:2: imported and not used: "cnrm-kube/vendor/github.com/go-logr/zapr"
vendor/sigs.k8s.io/controller-runtime/pkg/runtime/log/log.go:28:9: undefined: zaplogr

dep ensure emits a warning and requires Mercurial

When I try to run dep ensure, I get this warning, plus the command never returns:

Warning: the following project(s) have [[constraint]] stanzas in Gopkg.toml:

  ✗  k8s.io/kube-aggregator

However, these projects are not direct dependencies of the current project:
they are not imported in any .go files, nor are they in the 'required' list in
Gopkg.toml. Dep only applies [[constraint]] rules to direct dependencies, so
these rules will have no effect.

Either import/require packages from these projects so that they become direct
dependencies, or convert each [[constraint]] to an [[override]] to enforce rules
on these projects, if they happen to be transitive dependencies.

When I Ctrl-C, I get this error:

grouped write of manifest, lock and vendor: error while writing out vendor tree: failed to write dep tree: failed to export bitbucket.org/ww/goautoneg:
	(1) hg is not installed:
	(2) hg is not installed:
	(3) hg is not installed:
	(4) failed to list versions for https://bitbucket.org/ww/goautoneg: remote: Not Found
fatal: repository 'https://bitbucket.org/ww/goautoneg/' not found
: exit status 128
	(5) failed to list versions for ssh://[email protected]/ww/goautoneg: : signal: interrupt
	(6) context canceled
	(7) context canceled

When I install Mercurial, the warning remains but the command finishes.

How should I inject other dependencies into reconcilers?

Is there a plan to allow the inject interfaces to inject arbitrary dependencies? Or maybe there's a better way to do this?

Concrete use case: I want reconcilers to have access to a logger instance, but it's not a logr logger so I can't use the log promise feature, and I'd like to avoid relying on package variables if possible. Also, I'd like each controller to have the same initialization signature ProvideController(manager.Manager, <something>) (and I acknowledge this might be a silly requirement).

@droot maybe you have some context on plans for inject.

client.Delete doesn't support PropagationPolicy and other delete options

The controller-runtime client doesn't seem to support passing DeleteOptions in the body of a delete request.

Use case: when a Job is deleted, its pods are normally orphaned by the GC. To delete a Job and GC its pods with the normal client-go, I pass a DeleteOptions with PropagationPolicy set. I can't do that with the controller-runtime client.
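For comparison, here is a stdlib-only sketch of what variadic delete options could look like, using the functional-options pattern (all names are hypothetical, assumed for illustration; this is not controller-runtime's actual API):

```go
package main

import "fmt"

// DeleteOptions holds the fields a delete call would carry in its body.
// Names are illustrative, not controller-runtime's actual API.
type DeleteOptions struct {
	PropagationPolicy string // e.g. "Foreground", "Background", "Orphan"
}

// DeleteOption is a functional option that mutates DeleteOptions.
type DeleteOption func(*DeleteOptions)

// WithPropagationPolicy sets the propagation policy for the request.
func WithPropagationPolicy(policy string) DeleteOption {
	return func(o *DeleteOptions) { o.PropagationPolicy = policy }
}

// Delete shows how a client method could accept variadic delete options
// and fold them into the request body.
func Delete(name string, opts ...DeleteOption) DeleteOptions {
	var options DeleteOptions
	for _, opt := range opts {
		opt(&options)
	}
	fmt.Printf("DELETE %s propagationPolicy=%s\n", name, options.PropagationPolicy)
	return options
}

func main() {
	Delete("my-job", WithPropagationPolicy("Foreground"))
}
```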

Allow restricting the Cache's ListWatch to be namespaced

Problem

Currently the ListWatches for the cache's informers are non-namespaced.
https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/cache/internal/informers_map.go#L218-L227

This means the Manager always requires cluster-scoped permissions to work. While kubebuilder uses a ClusterRole and ClusterRoleBinding by default, that assumption isn't always true for an operator/controller (at least not in our context with the operator-sdk).

With just a Role and RoleBinding, the informers fail to list resources at the cluster scope.

E0828 23:41:19.472228       1 reflector.go:205] github.com/operator-framework/operator-sdk-samples/app-operator/vendor/sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:106: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:haseeb:default" cannot list pods at the cluster scope
E0828 23:41:20.141658       1 reflector.go:205] github.com/operator-framework/operator-sdk-samples/app-operator/vendor/sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:106: Failed to list *v1alpha1.App: apps.app.example.com is forbidden: User "system:serviceaccount:haseeb:default" cannot list apps.app.example.com at the cluster scope

Proposed Fix

Unless this is already supported or I've missed an easier way to do this, I've found that I can easily pipe the namespace down as an option from the Manager->Cache->InformersMap->ListWatch.

mgr, err := manager.New(cfg, manager.Options{Namespace: namespace})

Possible fix: hasbro17@55894c2
That fixes the permissions issue as the ListWatch requests are now restricted to the desired namespace.

And in the default case of not specifying a namespace the ListWatch goes back to making cluster-scoped requests.
https://github.com/kubernetes/client-go/blob/master/rest/request.go#L424

test sometimes fails

A test in sigs.k8s.io/controller-runtime/pkg/internal/controller sometimes fails.

I checked out the master branch, currently 53fc44b. Then I ran:

TRACE=1 ./hack/check-everything.sh

I got a failed test. I immediately ran it again, and the test passed. The full results for the failed run:

$ TRACE=1 ./hack/check-everything.sh
++ NO_COLOR=
++ '[' -z '' ']'
++ header=''
++ reset=''
+ k8s_version=1.10.1
+ goarch=amd64
+ goos=unknown
+ [[ linux-gnu == \l\i\n\u\x\-\g\n\u ]]
+ goos=linux
+ [[ linux == \u\n\k\n\o\w\n ]]
+ tmp_root=/tmp
+ kb_root_dir=/tmp/kubebuilder
+ SKIP_FETCH_TOOLS=
+ header_text 'using tools'
+ echo 'using tools'
using tools
+ which gometalinter.v2
/home/mhrivnak/golang/bin/gometalinter.v2
+ fetch_kb_tools
+ '[' -n '' ']'
+ header_text 'fetching tools'
+ echo 'fetching tools'
fetching tools
+ kb_tools_archive_name=kubebuilder-tools-1.10.1-linux-amd64.tar.gz
+ kb_tools_download_url=https://storage.googleapis.com/kubebuilder-tools/kubebuilder-tools-1.10.1-linux-amd64.tar.gz
+ kb_tools_archive_path=/tmp/kubebuilder-tools-1.10.1-linux-amd64.tar.gz
+ '[' '!' -f /tmp/kubebuilder-tools-1.10.1-linux-amd64.tar.gz ']'
+ curl -sL https://storage.googleapis.com/kubebuilder-tools/kubebuilder-tools-1.10.1-linux-amd64.tar.gz -o /tmp/kubebuilder-tools-1.10.1-linux-amd64.tar.gz
+ tar -zvxf /tmp/kubebuilder-tools-1.10.1-linux-amd64.tar.gz -C /tmp/
kubebuilder/
kubebuilder/bin/
kubebuilder/bin/gen-apidocs
kubebuilder/bin/openapi-gen
kubebuilder/bin/lister-gen
kubebuilder/bin/informer-gen
kubebuilder/bin/client-gen
kubebuilder/bin/conversion-gen
kubebuilder/bin/deepcopy-gen
kubebuilder/bin/defaulter-gen
kubebuilder/bin/kube-controller-manager
kubebuilder/bin/kubectl
kubebuilder/bin/kube-apiserver
kubebuilder/bin/etcd
+ setup_envs
+ header_text 'setting up env vars'
+ echo 'setting up env vars'
setting up env vars
+ [[ -z '' ]]
+ export KUBEBUILDER_ASSETS=/tmp/kubebuilder/bin
+ KUBEBUILDER_ASSETS=/tmp/kubebuilder/bin
+ ./hack/verify.sh
++ NO_COLOR=
++ '[' -z '' ']'
++ header=''
++ reset=''
+ header_text 'running go vet'
+ echo 'running go vet'
running go vet
+ go vet ./pkg/...
+ header_text 'running gometalinter.v2'
+ echo 'running gometalinter.v2'
running gometalinter.v2
+ gometalinter.v2 --disable-all --deadline 5m --enable=misspell --enable=structcheck --enable=golint --enable=deadcode --enable=goimports --enable=errcheck --enable=varcheck --enable=goconst --enable=unparam --enable=ineffassign --enable=nakedret --enable=interfacer --enable=misspell --enable=gocyclo --line-length=170 --enable=lll --dupl-threshold=400 --enable=dupl --skip=atomic ./pkg/...
+ ./hack/test-all.sh
++ NO_COLOR=
++ '[' -z '' ']'
++ header=''
++ reset=''
+ setup_envs
+ header_text 'setting up env vars'
+ echo 'setting up env vars'
setting up env vars
+ [[ -z /tmp/kubebuilder/bin ]]
+ header_text 'running go test'
+ echo 'running go test'
running go test
+ go test ./pkg/... -parallel 4
?   	sigs.k8s.io/controller-runtime/pkg	[no test files]
ok  	sigs.k8s.io/controller-runtime/pkg/builder	12.764s
ok  	sigs.k8s.io/controller-runtime/pkg/cache	21.267s
?   	sigs.k8s.io/controller-runtime/pkg/cache/informertest	[no test files]
?   	sigs.k8s.io/controller-runtime/pkg/cache/internal	[no test files]
ok  	sigs.k8s.io/controller-runtime/pkg/client	46.047s
?   	sigs.k8s.io/controller-runtime/pkg/client/apiutil	[no test files]
?   	sigs.k8s.io/controller-runtime/pkg/client/config	[no test files]
ok  	sigs.k8s.io/controller-runtime/pkg/client/fake	0.029s
ok  	sigs.k8s.io/controller-runtime/pkg/controller	12.362s
?   	sigs.k8s.io/controller-runtime/pkg/controller/controllertest	[no test files]
ok  	sigs.k8s.io/controller-runtime/pkg/controller/controllerutil	10.605s
ok  	sigs.k8s.io/controller-runtime/pkg/envtest	10.071s
?   	sigs.k8s.io/controller-runtime/pkg/envtest/printer	[no test files]
?   	sigs.k8s.io/controller-runtime/pkg/event	[no test files]
ok  	sigs.k8s.io/controller-runtime/pkg/handler	0.034s
ok  	sigs.k8s.io/controller-runtime/pkg/internal/admission	0.007s [no tests to run]
Running Suite: Controller Integration Suite
===========================================
Random Seed: 1538060558
Will run 23 of 23 specs

•••••••••••••••••
------------------------------
• Failure [0.001 seconds]
controller
/home/mhrivnak/golang/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller_test.go:40
  Processing queue items from a Controller
  /home/mhrivnak/golang/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller_test.go:261
    should requeue a Request if the Result sets Requeue:true and continue processing items [It]
    /home/mhrivnak/golang/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller_test.go:329

    Expected
        <int>: 0
    to equal
        <int>: 1

    /home/mhrivnak/golang/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller_test.go:346
------------------------------
2018-09-27T11:02:48.195-0400	INFO	kubebuilder.controller	Starting Controller	{"Controller": ""}
2018-09-27T11:02:48.195-0400	INFO	kubebuilder.controller	Starting workers	{"Controller": "", "WorkerCount": 1}
STEP: Invoking Reconciler which will ask for requeue
2018-09-27T11:02:48.196-0400	INFO	kubebuilder.controller	Stopping workers	{"Controller": ""}
•••••


Summarizing 1 Failure:

[Fail] controller Processing queue items from a Controller [It] should requeue a Request if the Result sets Requeue:true and continue processing items 
/home/mhrivnak/golang/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller_test.go:346

Ran 23 of 23 Specs in 9.660 seconds
FAIL! -- 22 Passed | 1 Failed | 0 Pending | 0 Skipped
--- FAIL: TestSource (9.66s)
FAIL
FAIL	sigs.k8s.io/controller-runtime/pkg/internal/controller	9.678s
ok  	sigs.k8s.io/controller-runtime/pkg/internal/recorder	11.082s
?   	sigs.k8s.io/controller-runtime/pkg/leaderelection	[no test files]
?   	sigs.k8s.io/controller-runtime/pkg/leaderelection/fake	[no test files]
ok  	sigs.k8s.io/controller-runtime/pkg/manager	11.825s
?   	sigs.k8s.io/controller-runtime/pkg/patch	[no test files]
?   	sigs.k8s.io/controller-runtime/pkg/patterns/application	[no test files]
?   	sigs.k8s.io/controller-runtime/pkg/patterns/operator	[no test files]
ok  	sigs.k8s.io/controller-runtime/pkg/predicate	0.027s
ok  	sigs.k8s.io/controller-runtime/pkg/reconcile	0.011s
?   	sigs.k8s.io/controller-runtime/pkg/reconcile/reconciletest	[no test files]
?   	sigs.k8s.io/controller-runtime/pkg/recorder	[no test files]
ok  	sigs.k8s.io/controller-runtime/pkg/runtime/inject	0.013s
ok  	sigs.k8s.io/controller-runtime/pkg/runtime/log	0.021s
ok  	sigs.k8s.io/controller-runtime/pkg/runtime/scheme	0.014s
ok  	sigs.k8s.io/controller-runtime/pkg/runtime/signals	1.016s
ok  	sigs.k8s.io/controller-runtime/pkg/source	8.748s
ok  	sigs.k8s.io/controller-runtime/pkg/source/internal	0.023s
?   	sigs.k8s.io/controller-runtime/pkg/webhook	[no test files]
ok  	sigs.k8s.io/controller-runtime/pkg/webhook/admission	0.064s
?   	sigs.k8s.io/controller-runtime/pkg/webhook/admission/builder	[no test files]
?   	sigs.k8s.io/controller-runtime/pkg/webhook/admission/types	[no test files]
ok  	sigs.k8s.io/controller-runtime/pkg/webhook/internal/cert	0.040s
ok  	sigs.k8s.io/controller-runtime/pkg/webhook/internal/cert/generator	1.007s
?   	sigs.k8s.io/controller-runtime/pkg/webhook/internal/cert/generator/fake	[no test files]
ok  	sigs.k8s.io/controller-runtime/pkg/webhook/internal/cert/writer	1.398s
ok  	sigs.k8s.io/controller-runtime/pkg/webhook/internal/cert/writer/atomic	0.075s
?   	sigs.k8s.io/controller-runtime/pkg/webhook/types	[no test files]

Starting event source log for Channel contains error

Right now I am getting this log when starting a watch on a source of type Channel

{"level":"info","ts":1537378068.766453,"logger":"kubebuilder.controller","caller":"controller/controller.go:120","msg":"Starting EventSource","Controller":"database-controller","SourceError":"json: unsupported type: <-chan event.GenericEvent"}

Reduce deletion-check boilerplate

Almost all Controllers have boilerplate at the top to check if the object has been deleted. Figure out a way to reduce this.

Set logger verbosity level

I'm trying to set logger verbosity from command line through -v flag, but how can I set the logging level? Currently, I'm initializing the logger like:

import logf "sigs.k8s.io/controller-runtime/pkg/runtime/log"

var log = logf.Log.WithName("sidecar")

func main() {
	logf.SetLogger(logf.ZapLogger(true))
	log.V(2).Info("msg")
}

but for any verbosity logging (V(2), V(3), V(4), ...) there is no output. Which is the best way to set the verbosity level for this logger?

Thank you!
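For context, logr-style V(n) calls only emit when n is at or below the logger's configured verbosity, which is why the V(2)/V(3)/V(4) calls above produce no output at the default level. The gating can be sketched with the standard library (vLogger is a toy type for illustration, not the zap/logr implementation):

```go
package main

import "fmt"

// vLogger mimics logr-style verbosity gating: a message logged at V(n)
// is emitted only when n is at or below the configured verbosity.
type vLogger struct {
	verbosity int
}

// enabled reports whether a message at the given level would be emitted.
func (l vLogger) enabled(level int) bool {
	return level <= l.verbosity
}

// V prints msg only if the level passes the gate.
func (l vLogger) V(level int, msg string) {
	if l.enabled(level) {
		fmt.Println(msg)
	}
}

func main() {
	quiet := vLogger{verbosity: 0} // the issue's situation: default verbosity
	quiet.V(2, "hidden")           // suppressed, as observed in the issue
	chatty := vLogger{verbosity: 4}
	chatty.V(2, "msg")             // emitted once verbosity is raised
}
```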

Fix coverage regressions

Coverage results went down as a result of PR #6

Fix the coverage gaps that were introduced:

go test ./pkg/...  -coverprofile cover.out -parallel 4 | grep -v "coverage: 100.0% of statements"  | grep -v "pkg/admission/certprovisioner\|pkg/internal/admission\|pkg/cache\|pkg/client\|pkg/event\|pkg/client/config\|pkg/controller/controllertest\|pkg/reconcile/reconciletest\|\|pkg/runtime/inject\|pkg/runtime/log\|\|pkg/runtime/signals\|pkg/test\|pkg/runtime/inject\|pkg/runtime/signals"
ok  	github.com/kubernetes-sigs/controller-runtime/pkg/manager	7.544s	coverage: 98.6% of statements
ok  	github.com/kubernetes-sigs/controller-runtime/pkg/source	7.257s	coverage: 94.1% of statements

deserializing into a runtime.RawExtension should fill in the Object if the type is known

RawExtension.Object should get either the go struct or an unstructured.Unstructured object from the client.

Steps to reproduce:

  • Create a Resource with RawExtension field
    • RawExtension accepts any runtime.Object
  • Create an instance of the Resource with a Deployment for the field
  • In the Controller, read the object using the client into an unstructured.Unstructured object
  • The unstructured should have Object populated with a Deployment, but it is nil.

source.Channel gets a nil value for stop channel

Problem

When adding a Channel to a controller by calling Watch(), InjectStopChannel gets called with a value of nil instead of a real channel.

This appears to be because nothing sets the value of stop before Watch() gets called. The value of stop only gets set once the manager's Start() method is called.

Here's the order of operations:

  1. Manager gets created
  2. Controller gets created and added to the manager
  3. Controller's Watch() method gets called with the new Channel as an argument
  4. Manager's Start() method gets called, with a stop channel being passed in. This is the opportunity for a stop channel to be provided by a controller author.
  5. The manager's Start() method calls each controller's Start() method, also passing through the stop channel.

At step 3, the manager's stop channel gets injected into the Channel. But the manager's stop channel doesn't get set until step 4, so at step 3 it has a nil value.

Workaround

The controller author can call InjectStopChannel directly and pass it the same channel they'll later pass to the manager's Start() method. But the InjectStopChannel method is clearly documented as not being intended for this purpose, nor is this approach helpful to the controller author.

Panic in fake client

I tried to write unit tests using the fake client and List() panics reliably when accessing opts.Raw:

func (c *fakeClient) List(ctx context.Context, opts *client.ListOptions, list runtime.Object) error {
	gvk := opts.Raw.TypeMeta.GroupVersionKind()
	gvr, _ := meta.UnsafeGuessKindToResource(gvk)

Why must opts.Raw be filled (it's not necessary when using the real client), and why should its TypeMeta have any value?

If I understand the code (and I probably don't), it expects opts.Raw.TypeMeta.GroupVersionKind() to be the kind of the items in the returned list, not the list itself (i.e. Pod instead of PodList). Such a requirement looks very odd.

Prometheus metrics

I would like to add prometheus metrics to the controllers I'm building using kubebuilder.

It would be good to have information both from the controller internals themselves as well as from the reconciler loops I'm implementing.

My initial idea is to add a Prometheus metric registry to the controller manager that it can use itself and that can be used by reconcile.Reconcilers to register their own metrics. Then have the controller manager start the metrics server when Start is called.

Does this seem reasonable or should people just handle metrics on their own? PR welcome?

Add metalinter linters to .travis.yaml with --enable

Add linters to the .travis.yaml gometalinter.v2 script line using --enable. Fix issues as needed to make the linters pass.

  • structcheck
  • maligned
  • nakedret
  • deadcode
  • gocyclo
  • ineffassign
  • dupl
  • golint
  • gotype
  • goimports
  • errcheck
  • varcheck
  • interfacer
  • goconst
  • gosimple
  • staticcheck
  • unparam
  • unused
  • misspell
  • lll
  • gas
  • safesql

Why is DelegatingClient needed?

From the code, Manager.GetClient returns a DelegatingClient, which reads from the cache for structured objects but reads directly from the API server for unstructured objects.

This prevents us from using the client before the controller loops start (since the cache needs to be synced). (Our use case is loading configuration from a ConfigMap before the controller loops.)

From my perspective, users should use manager.GetCache if they need caching behavior (and we can do the delegation there to read directly for unstructured objects).
For manager.GetClient, it would be better to always read directly, without the cache.

Incorrect logr usage

log.Error(err, "if %s is a CRD, should install it before calling Start",
	kindMatchErr.GroupKind)

This seems to not be using logr correctly. I got the following error when I hit this line:

{"level":"dpanic","ts":1540850293.0815783,"logger":"kubebuilder.source","caller":"zapr/zapr.go:129","msg":"odd number of arguments passed as key-value pairs for logging" ...
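The fix is to pass a fixed message plus explicit key/value pairs rather than printf verbs with dangling arguments. A stdlib-only sketch of the corrected call shape (toyLogger and validKV are illustrative, not the real logr interface):

```go
package main

import (
	"errors"
	"fmt"
)

// validKV reports whether trailing logger arguments form key/value pairs;
// zapr's "odd number of arguments" dpanic fires exactly when this is false.
func validKV(kv ...interface{}) bool {
	return len(kv)%2 == 0
}

// toyLogger mimics logr's Error signature: an error, a fixed message, and
// alternating key/value pairs, with no printf-style verbs in the message.
type toyLogger struct{}

func (toyLogger) Error(err error, msg string, kv ...interface{}) {
	if !validKV(kv...) {
		fmt.Println("odd number of arguments passed as key-value pairs for logging")
		return
	}
	fmt.Printf("error=%v msg=%q kv=%v\n", err, msg, kv)
}

func main() {
	log := toyLogger{}
	err := errors.New("no matches for kind")
	// Correct shape: fixed message plus an explicit key/value pair, instead
	// of "%s" in the message with a single dangling argument.
	log.Error(err, "if the kind is a CRD, install it before calling Start",
		"groupKind", "Foo.example.com")
}
```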

Watching Channels doesn't work

Adding a watch on channels, as mentioned in the kubebuilder docs, on a freshly generated controller fails when running tests with the following error:

	testing_t_support.go:22: 
			/home/u/go/src/github.com/presslabs/mysql-operator/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:69 +0x1ed
		github.com/presslabs/mysql-operator/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc42039ba40, 0x1332420, 0x1b92b68, 0x0, 0x0, 0x0, 0x1342260)
			/home/u/go/src/github.com/presslabs/mysql-operator/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:43 +0xae
		github.com/presslabs/mysql-operator/pkg/controller/mysqlbackup.TestReconcile(0xc42010e1e0)
			/home/u/go/src/github.com/presslabs/mysql-operator/pkg/controller/mysqlbackup/mysqlbackup_controller_test.go:53 +0x2f5
		testing.tRunner(0xc42010e1e0, 0x12ae268)
			/usr/local/go/src/testing/testing.go:777 +0xd0
		created by testing.(*T).Run
			/usr/local/go/src/testing/testing.go:824 +0x2e0
		
		Expected error:
		    <*errors.errorString | 0xc4200bf5d0>: {
		        s: "must call InjectStop on Channel before calling Start",
		    }
		    must call InjectStop on Channel before calling Start
		not to have occurred

The changes that I made to a freshly generated controller are:

diff --git a/pkg/controller/mysqlbackup/mysqlbackup_controller.go b/pkg/controller/mysqlbackup/mysqlbackup_controller.go
index bebdfce8..7935c0fb 100644
--- a/pkg/controller/mysqlbackup/mysqlbackup_controller.go
+++ b/pkg/controller/mysqlbackup/mysqlbackup_controller.go
@@ -35,6 +35,8 @@ import (
        "sigs.k8s.io/controller-runtime/pkg/manager"
        "sigs.k8s.io/controller-runtime/pkg/reconcile"
        "sigs.k8s.io/controller-runtime/pkg/source"
+
+       "sigs.k8s.io/controller-runtime/pkg/event"
 )
 
 /**
@@ -78,6 +80,15 @@ func add(mgr manager.Manager, r reconcile.Reconciler) error {
                return err
        }
 
+       events := make(chan event.GenericEvent)
+       err = c.Watch(
+               &source.Channel{Source: events},
+               &handler.EnqueueRequestForObject{},
+       )
+       if err != nil {
+               return err
+       }
+
        return nil
 }

What happens:

  • the stop channel is added after the watcher is registered, when the manager is started
  • then when the Channel source is registered in the Watch method, the stop channel is injected, but it injects a nil channel.
  • when the source starts, it checks that the stop channel is non-nil and fails, here

Provide an option to run the webhook w/o bootstrapping

Provide an pure server mode to run the webhook w/o bootstrapping.

If the user use this mode, they will first do a dry-run with the webhook server to get a pile of yaml files.
Then applying the yaml files will install the webhookConfiguration, secret, service etc.

Admission webhooks

I cannot find any open issue about the admission webhook branch. What is its status? What prevents it from being merged?

Fake client doesn't implement delete propagation

Code under test may expect that setting PropagationPolicy to Foreground will cause child objects to be deleted before returning. Currently the fake client doesn't implement this, nor does it offer a hook to allow this behavior to be simulated (see #72).

OwnerReferences must be in the same namespace as object

https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#OwnerReference

OwnerReference contains enough information to let you identify an owning object. Currently, an owning object must be in the same namespace, so there is no namespace field

The admission controller secret writer sets the webhook as owner for the generated secret even if they are in different namespaces. I don't know if it's by design but it looks like a bug.

func (s *secretReadWriter) buildSecret(webhookName string) (*corev1.Secret, *generator.Artifacts, error) {
	v := s.webhookMap[webhookName]
	webhook := v.webhook
	commonName, err := dnsNameForWebhook(&webhook.ClientConfig)
	if err != nil {
		return nil, nil, err
	}
	certs, err := s.certGenerator.Generate(commonName)
	if err != nil {
		return nil, nil, err
	}
	secret := certsToSecret(certs, v.secret)
	err = controllerutil.SetControllerReference(s.webhookConfig.(metav1.Object), secret, scheme.Scheme)
	return secret, certs, err
}

I found it by #99 which enforces a check between owner and object.
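A minimal sketch of the same-namespace check such a helper could enforce (plain structs here for illustration, not the apimachinery types):

```go
package main

import (
	"errors"
	"fmt"
)

// object captures just the metadata relevant to owner references.
type object struct {
	Name      string
	Namespace string
}

// setControllerReference refuses cross-namespace owners, since
// OwnerReference has no namespace field and assumes the owner lives in
// the same namespace as the owned object.
func setControllerReference(owner, controlled object) error {
	if owner.Namespace != controlled.Namespace {
		return errors.New("cross-namespace owner references are disallowed")
	}
	return nil
}

func main() {
	webhookConfig := object{Name: "wh", Namespace: "system"}
	secret := object{Name: "certs", Namespace: "default"}
	fmt.Println(setControllerReference(webhookConfig, secret))
}
```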

Can't run tests locally

When I run go test ./pkg/..., I get a bunch of errors that seem to be complaining about a missing /usr/local/bin/kubebuilder/bin/etcd executable:

fork/exec /usr/local/kubebuilder/bin/etcd: no such file or directory

So I try to run ./test.sh but that exits immediately:

$ ./test.sh
using tools

Are there any docs on how to run tests?

Non-caching client

We already provide a caching client for controllers.

As @pwittrock pointed out, we should provide a non-caching client, because if the cache becomes out of date, the admission webhook may make wrong decisions based on stale objects. A non-caching client would be helpful in this case.

[feature request] Reduce dependent resource creation boilerplate

Currently, in order to create a dependent resource (e.g. a deployment), the code always looks like:

deploy := &appsv1.Deployment{
    ObjectMeta: metav1.ObjectMeta{
        Name:      instance.Name + "-deployment",
        Namespace: instance.Namespace,
    },
    Spec: appsv1.DeploymentSpec{
       // ...
    },
}
if err := controllerutil.SetControllerReference(instance, deploy, r.scheme); err != nil {
    return reconcile.Result{}, err
}

// Check if the Deployment already exists
found := &appsv1.Deployment{}
err = r.Get(context.TODO(), types.NamespacedName{Name: deploy.Name, Namespace: deploy.Namespace}, found)
if err != nil && errors.IsNotFound(err) {
    log.Printf("Creating Deployment %s/%s\n", deploy.Namespace, deploy.Name)
    err = r.Create(context.TODO(), deploy)
    if err != nil {
        return reconcile.Result{}, err
    }
} else if err != nil {
    return reconcile.Result{}, err
}

// Update the found object and write the result back if there are any changes
if !reflect.DeepEqual(deploy.Spec, found.Spec) {
    found.Spec = deploy.Spec
    log.Printf("Updating Deployment %s/%s\n", deploy.Namespace, deploy.Name)
    err = r.Update(context.TODO(), found)
    if err != nil {
        return reconcile.Result{}, err
    }
}

Which is about 50 lines of boilerplate.

What I've found very handy when working on controllers are the CreateOrPatch methods from https://github.com/appscode/kutil.

Do you think that something similar can be implemented in the generic client?
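The get/create/compare/update dance above can be collapsed into one helper. A stdlib-only sketch of the idea against a toy in-memory store (all names here are illustrative, not a real client API):

```go
package main

import "fmt"

// store is a toy stand-in for the API server: objects keyed by
// namespace/name, with a string standing in for the object's spec.
type store map[string]string

// createOrUpdate collapses the get/create/compare/update boilerplate:
// mutate builds the desired state from the current one, and we only
// write back when something actually changed.
func createOrUpdate(s store, key string, mutate func(current string) string) string {
	current, exists := s[key]
	desired := mutate(current)
	switch {
	case !exists:
		s[key] = desired
		return "created"
	case current != desired:
		s[key] = desired
		return "updated"
	default:
		return "unchanged"
	}
}

func main() {
	s := store{}
	fmt.Println(createOrUpdate(s, "default/app", func(string) string { return "spec-v1" }))
	fmt.Println(createOrUpdate(s, "default/app", func(string) string { return "spec-v1" }))
	fmt.Println(createOrUpdate(s, "default/app", func(string) string { return "spec-v2" }))
}
```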

Support for leader election

Currently the manager won't do any kind of leader election. I think the manager should implement it so that you can run highly available controllers.

Channel Source is borked due to stop channel injection

The Channel source expects the stop channel to be injected before calling Start, but the stop channel on the controller manager will be nil until ControllerManager#Start is called. Since most people will probably call Controller#Watch before ControllerManager#Start, this breaks the channel source.

I've got a patch to fix it in the works, but wanted to file this so that I don't forget.
