This documentation and template serve as a reference for implementing a module (component) operator that integrates with the lifecycle-manager. It uses the kubebuilder framework with some modifications to implement Kubernetes APIs for custom resource definitions (CRDs). Additionally, it hides Kubernetes boilerplate code so that you can develop fast and efficient control loops in Go.
- Understanding module development in Kyma
- Implementation
- Bundling and installation
- Using your module in the Lifecycle Manager ecosystem
Before going in-depth, make sure you are familiar with:
This guide serves as a comprehensive step-by-step tutorial on how to properly create a module from scratch by using an operator that installs a Helm chart. Note that while other approaches are encouraged, no dedicated guide is available for them yet; such guides will follow with sufficient requests and adoption of Kyma modularization.
Every Kyma Module using an Operator follows 5 basic Principles:
- Declared as available for use in a release channel through the `ModuleTemplate` custom resource in the control-plane
- Declared as desired state within the `Kyma` custom resource in the runtime or control-plane
- Installed / managed in the runtime by Module Manager through a `Manifest` custom resource in the control-plane
- Owns at least 1 Custom Resource Definition that defines the contract towards a Runtime Administrator and configures its behaviour
- Operates on at most 1 runtime at every given time
Release channels let customers try new modules and features early, and decide when the updates should be applied. For more info, see the release channels documentation in our Modularization overview.
The channel name must follow these rules:
- It contains only lowercase letters from a to z.
- Its total length is between 3 and 32 characters.
If you are planning to migrate a pre-existing module within Kyma, please familiarize yourself with the transition plan for existing modules.
Compared to OLM, the Kyma Modularization is similar, but distinct in a few key aspects. While OLM is built heavily around a static dependency expression, Kyma Modules are expected to resolve dependencies dynamically.
Concretely, this means that while in OLM a module has to declare the CRDs and APIs it depends on, in Kyma all modules can depend on each other without declaring it in advance. This is of course harder to reason about than a strict dependency graph, but it comes with a few key advantages:
- Concurrent optimisation at the controller level: every controller in Kyma is installed simultaneously and is not blocked from installation until other operators are available. This makes it easy to, for example, create or configure resources that do not need to wait for the dependency (e.g. a ConfigMap can be created even before a Deployment that has to wait for an API to be present). While this requires controllers to handle the case where a dependency is not yet present, we encourage eventual consistency and do not enforce a strict lifecycle model on our modules.
- Discoverability is handled not through a registry / server, but through a declarative configuration.
Every module is installed through the `ModuleTemplate`, which is semantically the same as registering an operator in an OLM registry or `CatalogSource`. The `ModuleTemplate`, however, is a normal CR and can be installed into a control-plane dynamically and with GitOps practices. This allows multiple control-planes to offer differing modules simply at configuration time. Also, we do not use File-Based Catalogs for maintaining our catalog, but maintain every `ModuleTemplate` through the Open Component Model, an open standard to describe software artifact delivery.
Regarding release channels for operators, Lifecycle Manager operates at the same level as OLM. However, with Kyma we ensure that a `ModuleTemplate` is bundled to a specific release channel. We are heavily inspired by the way OLM handles release channels, but we do not have an intermediary `Subscription` that assigns the catalog to the channel. Instead, every module is already delivered in a `ModuleTemplate` in a channel.

There is a distinct difference in parts of the `ModuleTemplate`.
The `ModuleTemplate` not only contains a specification of the operator to be installed through a dedicated layer; it also contains a set of default values for a given channel that are applied when the module is installed for the first time. When installing an operator from scratch through Kyma, this means that the module will already be initialized with a default set of values. However, when upgrading, the Kyma lifecycle is not expected to update the values to eventual new defaults. Instead, the defaults are a way for module developers to prefill their operator with instructions based on a given environment (the channel). It is important to note that these default values are static once they are installed, and they will not be updated unless a new installation of the module occurs, even when the content of the `ModuleTemplate` changes.
This is because a customer is expected to be able to change the settings of the module CustomResource at any time without the Kyma ecosystem overriding it.
Thus, the CustomResource of a Module can also be treated as a customer/runtime-facing API that allows us to offer typed configuration for multiple parts of Kyma.
With Crossplane, you fundamentally allow Providers to interact within your control-plane.
When looking at the Crossplane Lifecycle, the most similar aspect is that we also use opinionated OCI Images to bundle our Modules.
We use the `ModuleTemplate` to reference our layers containing the necessary metadata to deploy our controllers, just like Crossplane. However, we are not opinionated about controller permissions, and we enforce stricter versioning guarantees, allowing only `semver` to be used for modules and `Digest` (`sha` digests) for individual layers of modules.
The way `Providers` and `Composite Resources` work is also fundamentally different from the Kyma ecosystem. While Kyma allows any module to bring an individual CustomResource into the cluster for configuration, a `Provider` in Crossplane is located in the control-plane and only directs installation targets. We handle this kind of data centrally, through acquisition strategies for credentials and other centrally managed data in the `Kyma` custom resource.
Thus, it is most fitting to consider the Kyma ecosystem as a heavily opinionated `Composite Resource` from Crossplane, with the `Managed Resource` being tracked through the Module Manager `Manifest`. Compared to Crossplane, we also encourage the creation of your own CustomResourceDefinitions in place of the concept of the `Managed Resource`, and in the case of configuration, we are able to synchronize a desired state for all modules not only from the control-plane but also from the runtime. Similarly, we make the module catalog discoverable from inside the runtime with a dedicated synchronization mechanism.
Lastly, compared to Crossplane, we do not offer as many choices when it comes to revision management and dependency resolution. While in Crossplane it is possible to define custom Package, Revision, and Dependency Policies, in Kyma we are opinionated here, since managed use cases usually require unified revision handling, and we do not target a generic solution for revision management of different module ecosystems.
-
A provisioned Kubernetes Cluster and OCI Registry
WARNING: For all use cases in this guide, you will need a cluster for end-to-end testing outside your envtest integration test suite. Following this guide is HIGHLY RECOMMENDED for a smooth development process. It is a good alternative if you do not want to use an entire control-plane infrastructure and still want to properly test your operators.
-
```sh
# you could use one of the following options

# option 1: using brew
brew install kubebuilder

# option 2: fetch sources directly
curl -L -o kubebuilder https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)
chmod +x kubebuilder && mv kubebuilder /usr/local/bin/
```
-
A Helm Chart to install from your control-loop (if you do not have one ready, feel free to use the stateless redis chart from this sample)
-
Initialize the `kubebuilder` project. Please make sure the domain is set to `kyma-project.io`.

```sh
kubebuilder init --domain kyma-project.io --repo github.com/kyma-project/test-operator/operator --project-name=test-operator --plugins=go/v4-alpha
```
-
Create an API group, version, and kind for the intended custom resource(s). Please make sure the `group` is set to `operator`.

```sh
kubebuilder create api --group operator --version v1alpha1 --kind Sample --resource --controller --make
```
-
Run `make manifests` to generate the CRDs.

A basic kubebuilder operator with appropriate scaffolding should now be set up.
If the module operator will be deployed in the same namespace as other operators, differentiate your resources by adding common labels.
-
Add `commonLabels` to the default `kustomization.yaml`, reference implementation.
-
Include all resources (e.g. `manager.yaml`) which contain label selectors by using `commonLabels`.

Further reading: Kustomize built-in commonLabels
-
Refer to the State requirements and include them in your `Status` sub-resource similarly. This `Status` sub-resource should contain all valid `State` values (`.status.state`) in order to be compliant with the Kyma ecosystem.

```go
package v1alpha1

// Status defines the observed state of Module CR.
type Status struct {
	// State signifies current state of Module CR.
	// Value can be one of ("Ready", "Processing", "Error", "Deleting").
	// +kubebuilder:validation:Required
	// +kubebuilder:validation:Enum=Processing;Deleting;Ready;Error
	State State `json:"state"`
}
```
Include the `State` values in your `Status` sub-resource, either through inline reference or direct inclusion. These values have literal meaning behind them, so use them appropriately; a minimal sketch of the `State` type follows after this list.
-
Optionally, you can add additional fields to your `Status` sub-resource.
-
For instance, `Conditions` are added to `SampleCR` in the API definition and `SampleHelmCR` in the API definition. This also includes the required `State` values, using an inline reference.

Reference implementation `SampleCR`:

```go
package v1alpha1

// Sample is the Schema for the samples API
type Sample struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   SampleSpec   `json:"spec,omitempty"`
	Status SampleStatus `json:"status,omitempty"`
}

type SampleStatus struct {
	Status `json:",inline"`

	// Conditions contain a set of conditionals to determine the State of Status.
	// If all Conditions are met, State is expected to be in StateReady.
	Conditions []metav1.Condition `json:"conditions,omitempty"`

	// add other fields to status subresource here
}
```
-
Run `make generate manifests` to generate boilerplate code and manifests.
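The `State` type referenced in the `Status` sub-resource above is not generated by kubebuilder. A minimal sketch of how it could be declared, assuming the exact value names required by the Kyma ecosystem as listed in the enum marker (the real declaration should come from the reference implementation):

```go
package v1alpha1

// State is a typed string for the module state. Only the values listed in the
// kubebuilder Enum marker on the Status struct are valid in the Kyma ecosystem.
type State string

const (
	// StateReady signifies that the module is fully installed and operational.
	StateReady State = "Ready"
	// StateProcessing signifies that installation or an update is in progress.
	StateProcessing State = "Processing"
	// StateError signifies that reconciliation failed and will be retried.
	StateError State = "Error"
	// StateDeleting signifies that the module and its resources are being removed.
	StateDeleting State = "Deleting"
)
```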
Warning: This sample implementation is only for reference. You could copy parts of implementation but please do not add this repository as a dependency to your project.
-
Implement `State` handling to represent the corresponding state of the reconciled resource, following the kubebuilder guidelines for implementing controllers.
-
You could refer either to the `SampleCR` controller implementation or the `SampleHelmCR` controller implementation for setting appropriate `State` and `Conditions` values in your `Status` sub-resource. `SampleCR` is reconciled to install / uninstall a list of rendered resources from a YAML file on the file system, whereas `SampleHelmCR` is reconciled to install / uninstall (using SSA, see next point) rendered resources from a local Helm chart. The latter uses the Helm library purely to render resources.

```go
r.setStatusForObjectInstance(ctx, objectInstance, status.
	WithState(v1alpha1.StateReady).
	WithInstallConditionStatus(metav1.ConditionTrue, objectInstance.GetGeneration()))
```
-
The reference controller implementations listed above use Server-Side Apply instead of conventional methods to process resources on the target cluster. Parts of this logic could be leveraged to implement your own controller logic. Check out the functions inside these controllers for state management and other implementation details; a minimal sketch of the approach follows below.
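As an illustration of the approach (not the reference implementation itself), here is a minimal sketch of parsing rendered YAML manifests and applying them with Server-Side Apply via the controller-runtime client. The helper names and the `sample-operator` field owner are hypothetical:

```go
package controllers

import (
	"bytes"
	"context"
	"errors"
	"io"
	"os"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	k8syaml "k8s.io/apimachinery/pkg/util/yaml"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// parseManifests reads a multi-document YAML file (for example the file
// referenced by spec.resourceFilePath, or a rendered chart) into unstructured objects.
func parseManifests(path string) ([]*unstructured.Unstructured, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	decoder := k8syaml.NewYAMLOrJSONDecoder(bytes.NewReader(data), 4096)
	var objs []*unstructured.Unstructured
	for {
		obj := &unstructured.Unstructured{}
		if err := decoder.Decode(obj); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			return nil, err
		}
		if len(obj.Object) == 0 {
			continue // skip empty documents
		}
		objs = append(objs, obj)
	}
	return objs, nil
}

// applyObjects applies every object with Server-Side Apply, taking ownership
// of the submitted fields under a dedicated field owner.
func applyObjects(ctx context.Context, c client.Client, objs []*unstructured.Unstructured) error {
	for _, obj := range objs {
		if err := c.Patch(ctx, obj, client.Apply,
			client.ForceOwnership, client.FieldOwner("sample-operator")); err != nil {
			return err
		}
	}
	return nil
}
```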
- Connect to your cluster and ensure `kubectl` is pointing to the desired cluster.
- Install CRDs with `make install`. WARNING: This installs a CRD on your cluster, so create your cluster before running the `install` command. See Pre-requisites for details on the cluster setup.
- Local setup: install your module CR on a cluster and execute `make run` to start your operator locally.
WARNING: Note that while `make run` fully runs your controller against the cluster, it is not comparable to a productive operator. This is mainly because it runs with a client configured with privileges derived from your `KUBECONFIG` environment variable. For in-cluster configuration, see our Guide on RBAC Management.
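For background, the scaffolded `main.go` obtains its REST config through controller-runtime. A minimal sketch of what that typically looks like, and why local runs inherit your `KUBECONFIG` privileges, assuming the default kubebuilder scaffolding:

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// GetConfigOrDie resolves the client configuration. When running locally
	// (make run) it typically picks up the --kubeconfig flag, the KUBECONFIG
	// environment variable, or ~/.kube/config, so the controller acts with
	// your personal privileges. When running inside a cluster, it uses the
	// in-cluster service account config, which is what the RBAC manifests
	// are for.
	cfg := ctrl.GetConfigOrDie()

	mgr, err := ctrl.NewManager(cfg, ctrl.Options{})
	if err != nil {
		panic(err)
	}
	_ = mgr // set up your controllers with the manager here, as in the scaffolded main.go
}
```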
You can extend the operator further by using automated dashboard generation for Grafana.
The following command generates two Grafana dashboard files with controller-related metrics under the `/grafana` folder.
kubebuilder edit --plugins grafana.kubebuilder.io/v1-alpha
To import the Grafana dashboards, please read the official Grafana guide. This feature is supported by the kubebuilder Grafana plugin.
Make sure you have appropriate authorizations assigned to your controller binary before you run it inside a cluster (not locally with `make run`).
The Sample CR controller implementation includes RBAC generation (via kubebuilder) for all resources across all API groups.
This should be adjusted according to the chart manifest resources and reconciliation types.
In the earlier stages of your operator development, the RBACs can simply accommodate all resource types and be adjusted later, as per your requirements.
```go
package controllers

// TODO: dynamically create RBACs! Remove line below.
//+kubebuilder:rbac:groups="*",resources="*",verbs="*"
```
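When you later narrow the permissions, the wildcard marker can be replaced with markers scoped to what your chart actually deploys. The groups, resources, and verbs below are hypothetical examples, not requirements:

```go
package controllers

// Hypothetical narrowed markers for an operator that only manages Deployments,
// Services, and ConfigMaps rendered from its chart. Adjust to your manifest.
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups="",resources=services;configmaps,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=operator.kyma-project.io,resources=samples,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=operator.kyma-project.io,resources=samples/status,verbs=get;update;patch
```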
WARNING: Do not forget to run `make manifests` after this adjustment for it to take effect!
WARNING: This step requires the working OCI Registry from our Pre-requisites
-
Include the static module data in your Dockerfile:
```dockerfile
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY module-data/ module-data/
COPY --from=builder /workspace/manager .
USER 65532:65532
ENTRYPOINT ["/manager"]
```
The sample module data in this repository includes both a Helm chart and a YAML manifest in the `module-data/helm` and `module-data/yaml` directories, respectively. You reference the Helm chart directory with the `spec.chartPath` attribute of the `SampleHelm` CR. You reference the YAML manifest directory with the `spec.resourceFilePath` attribute of the `Sample` CR. The example custom resources in the `config/samples` directory already reference the mentioned directories. Feel free to organize the static data in a different way; the included `module-data` directory serves just as an example. You may also decide not to include any static data at all; in that case you have to provide the controller with the Helm/YAML data at runtime using other techniques, for example Kubernetes volume mounting.
-
Build and push your module operator binary by adjusting `IMG` if necessary and running the inbuilt kubebuilder commands. Assuming your operator image has the following base settings:

- hosted at `op-kcp-registry.localhost:8888/unsigned/operator-images`
- controller image name is `sample-operator`
- controller image has version `0.0.1`

You can run the following command:

```sh
make docker-build docker-push IMG="op-kcp-registry.localhost:8888/unsigned/operator-images/sample-operator:0.0.1"
```
This will build the controller image and then push it as the image defined in `IMG`, based on the kubebuilder targets.
WARNING: This step requires the working OCI Registry, Cluster and Kyma CLI from our Pre-requisites
-
The module operator manifests from the `default` kustomization (not the controller image) will now be bundled and pushed. Assuming the settings from Prepare and build module operator image for single-cluster mode, and assuming the following module settings:

- hosted at `op-kcp-registry.localhost:8888/unsigned`
- generated for channel `regular` (or any other name following the channel naming rules)
- module has version `0.0.1`
- module name is `template`
- for a k3d registry, enable the `insecure` flag (`http` instead of `https` for registry communication)
- uses Kyma CLI in `$PATH` under `kyma`
- a simple `config.yaml` is present for module configuration with the content:

```yaml
# Samples Config
configs:
```
WARNING: Even though this file is empty, it is mandatory for the command to succeed, as it will be bundled as a layer! kubebuilder projects by default do not have such a file (it is introduced by modularization), so you will need to create one on your own if not already done.
-
the default sample under `config/samples/operator_v1alpha1_sample.yaml` has been adjusted to be a valid CR by setting the default generated `Foo` field instead of a TODO.

```yaml
apiVersion: operator.kyma-project.io/v1alpha1
kind: Sample
metadata:
  name: sample-sample
spec:
  foo: bar
```
WARNING: The settings above reflect your default configuration for a module. If you want to change this, you will have to manually adjust it to a different configuration. You can also define multiple files in `config/samples`; however, you will then need to specify the correct file during bundling.
-
The `.gitignore` has been adjusted and the following ignores were added:

```
# kyma module cache
mod
# generated dummy charts
charts
# kyma generated by scripts or local testing
kyma.yaml
# template generated by kyma create module
template.yaml
```
Now, run the following command to create and push your module operator image to the specified registry:
kyma alpha create module --version 0.0.1 -w --insecure --registry op-kcp-registry.localhost:8888/unsigned
WARNING: For external registries (e.g. Google Container/Artifact Registry), never use the `insecure` flag. Instead, specify credentials. More details can be found in the help documentation of the CLI.
To make a setup work in single-cluster mode, adjust the generated `template.yaml` to install the module in the Control Plane by assigning the field `.spec.target` the value `control-plane`. This will install all operators and modules in the same cluster.

```yaml
apiVersion: operator.kyma-project.io/v1alpha1
kind: ModuleTemplate
#...
spec:
  target: control-plane
```
-
Verify that the module creation succeeded and observe the `mod` folder. It will contain a `component-descriptor.yaml` with a definition of local layers.

Sample:

```yaml
component:
  componentReferences: []
  name: kyma-project.io/module/sample
  provider: internal
  repositoryContexts:
  - baseUrl: op-kcp-registry.localhost:8888/unsigned
    componentNameMapping: urlPath
    type: ociRegistry
  resources:
  - access:
      filename: sha256:fafc3be538f68a786f3b8ef39bd741805314253f81cf4a5880395dcecf599ef5
      mediaType: application/gzip
      type: localFilesystemBlob
    name: sample-operator
    relation: local
    type: helm-chart
    version: 0.0.1
  - access:
      filename: sha256:db86408caca4c94250d8291aa79655b84146f9cc45e0da49f05a52b3722d74a0
      mediaType: application/octet-stream
      type: localFilesystemBlob
    name: config
    relation: local
    type: yaml
    version: 0.0.1
  sources: []
  version: 0.0.1
meta:
  schemaVersion: v2
```
As you can see, the CLI created various layers that are referenced in the `blobs` directory. For more information on the layer structure, please refer to the module creation help with `kyma alpha mod create --help`.
WARNING: This step requires the working OCI Registry and Cluster from our Pre-requisites
Now that everything is prepared in a cluster of your choice, you are free to reference the module within any `Kyma` custom resource in your Control Plane cluster.
Deploy the Lifecycle Manager & Module Manager to the Control Plane cluster with:
kyma alpha deploy
WARNING: For single-cluster mode, Module Manager needs additional privileges to work in the cluster, as it usually does not need to access all resource types within the control-plane.
This can be fixed by editing the necessary ClusterRole with `kubectl edit clusterrole module-manager-manager-role` and making the following adjustment:
```yaml
- apiGroups:
  - "*"
  resources:
  - "*"
  verbs:
  - "*"
```
Note that it is very hard to properly protect against privilege escalation in single-cluster mode, which is one of the reasons we heavily discourage it for productive use.
Now run the command for creating the `ModuleTemplate` in the cluster. After this, the module will be available for consumption based on the module name configured with the label `operator.kyma-project.io/module-name` on the `ModuleTemplate`.
WARNING: If your setup runs against a k3d cluster/registry, you will need to run the script in `hack/local-template.sh` before pushing the ModuleTemplate to have a proper registry setup. (This is necessary for k3d clusters due to port-mapping issues in the cluster that the operators cannot reuse; please take a look at the relevant issue for more details.)
kubectl apply -f template.yaml
For single-cluster mode, you could use the existing Kyma custom resource generated for the Control Plane in `kyma.yaml` with this:
kubectl patch kyma default-kyma -n kcp-system --type='json' -p='[{"op": "add", "path": "/spec/modules", "value": [{"name": "sample" }] }]'
This adds your module into `.spec.modules` with a name originally based on the `"operator.kyma-project.io/module-name": "sample"` label that was generated in `template.yaml`:
```yaml
spec:
  modules:
  - name: sample
```
If required, you can adjust this Kyma CR based on your testing scenario. For example, if you are running a dual-cluster setup, you might want to enable the synchronization of the Kyma CR into the runtime cluster for a full E2E setup. On creation of this kyma CR in your Control Plane cluster, installation of the specified modules should start immediately.
The operator ecosystem around Kyma is complex, and it might become troublesome to debug issues in case your module is not installed correctly. For this very reason, here are some best practices on how to debug modules developed through this guide.
- Verify the Kyma installation state is ready by verifying all conditions.
```bash
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.reason}:{@.status};{end}{end}' \
  && kubectl get kyma -o jsonpath="$JSONPATH" -n kcp-system
```
- Verify the Manifest installation state is ready by verifying all conditions.
```bash
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
  && kubectl get manifest -o jsonpath="$JSONPATH" -n kcp-system
```
- Depending on your issue, observe the deployment logs of `lifecycle-manager` and/or `module-manager`. Make sure that no errors have occurred.
Usually the issue is related to RBAC configuration (for troubleshooting minimum privileges for the controllers, see our dedicated RBAC section), a misconfigured image, the module registry, or the `ModuleTemplate`.
As a last resort, make sure that you are aware whether you are running within a single-cluster or a dual-cluster setup, watch out for any steps marked with a WARNING, and retry with a freshly provisioned cluster.
For cluster provisioning, please make sure to follow our recommendations for clusters mentioned in our Pre-requisites for this guide.
Lastly, if you are still unsure, please feel free to open an issue, with a description and steps to reproduce, and we will be happy to help you with a solution.
For global usage of your module, the generated `template.yaml` from Build and push your module to the registry needs to be registered in our control-plane.
This relates to Phase 2 of the Module Transition Plan. Please be patient until we can provide you with a stable guide on how to properly integrate your `template.yaml` with an automated test flow into the central control-plane offering.