kustomizer's Introduction

kustomizer

Kustomizer is an experimental package manager for distributing Kubernetes configuration as OCI artifacts. It offers commands to publish, fetch, diff, customize, validate, apply and prune Kubernetes resources.

Kustomizer relies on server-side apply and requires a Kubernetes cluster v1.20 or newer.

Install

The Kustomizer CLI is available as a binary executable for all major platforms; the binaries can be downloaded from GitHub Releases. The binary checksums are signed with Cosign, and each release comes with a Software Bill of Materials (SBOM) in SPDX format.
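
Before running a downloaded binary, you can verify the release with Cosign. A minimal sketch, assuming a v2.2.1 release with checksums.txt and checksums.txt.sig assets and the project's Cosign public key saved locally as cosign.pub (the exact asset names may vary per release):

curl -sLO https://github.com/stefanprodan/kustomizer/releases/download/v2.2.1/checksums.txt
curl -sLO https://github.com/stefanprodan/kustomizer/releases/download/v2.2.1/checksums.txt.sig
cosign verify-blob --key cosign.pub --signature checksums.txt.sig checksums.txt
sha256sum --check --ignore-missing checksums.txt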

Install the latest release on macOS or Linux with Homebrew:

brew install stefanprodan/tap/kustomizer

For other installation methods, see kustomizer.dev/install.

Get started

To get started with Kustomizer please visit the documentation website at kustomizer.dev.

Concepts

OCI Artifacts

Kustomizer offers a way to distribute Kubernetes configuration using container registries. It can package Kubernetes manifests in an OCI image and store them in a container registry, right next to your applications' images.

Kustomizer comes with commands for managing OCI artifacts (a worked example follows the list):

  • kustomizer push artifact oci://<image-url>:<tag> -k [-f] [-p]
  • kustomizer tag artifact oci://<image-url>:<tag> <new-tag>
  • kustomizer list artifacts oci://<repo-url> --semver <condition>
  • kustomizer pull artifact oci://<image-url>:<tag>
  • kustomizer inspect artifact oci://<image-url>:<tag>
  • kustomizer diff artifact <oci url> <oci url>
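
For instance, a hypothetical end-to-end workflow following the command shapes above (the registry, repository name, and local path are placeholders):

kustomizer push artifact oci://ghcr.io/my-org/my-app-config:1.0.0 -k ./deploy
kustomizer tag artifact oci://ghcr.io/my-org/my-app-config:1.0.0 latest
kustomizer list artifacts oci://ghcr.io/my-org/my-app-config --semver ">=1.0.0"
kustomizer inspect artifact oci://ghcr.io/my-org/my-app-config:1.0.0
kustomizer pull artifact oci://ghcr.io/my-org/my-app-config:1.0.0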

Kustomizer is compatible with Docker Hub, GHCR, ACR, ECR, GCR, Artifactory, self-hosted Docker Registry, and others. For authentication, it uses the credentials from ~/.docker/config.json.
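
For example, you can log in to GitHub Container Registry with Docker before pushing (the username and token are placeholders):

echo $GITHUB_TOKEN | docker login ghcr.io --username my-user --password-stdin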

Sign & Verify Artifacts

Kustomizer can sign and verify artifacts using sigstore/cosign, either with static keys, Cloud KMS, or keyless signatures (when running Kustomizer with GitHub Actions); a combined example follows the list:

  • kustomizer push artifact --sign --cosign-key <private key>
  • kustomizer pull artifact --verify --cosign-key <public key>
  • kustomizer inspect artifact --verify --cosign-key <public key>
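
Putting the flags together, a hypothetical publish-and-verify flow with a static key pair (cosign.key and cosign.pub are generated locally; the image URL is a placeholder):

cosign generate-key-pair
kustomizer push artifact oci://ghcr.io/my-org/my-app-config:1.0.0 -k ./deploy --sign --cosign-key cosign.key
kustomizer pull artifact oci://ghcr.io/my-org/my-app-config:1.0.0 --verify --cosign-key cosign.pub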

For an example of how to secure your Kubernetes supply chain with Kustomizer and Cosign, please see this guide.

Resource Inventories

Kustomizer offers a way to group Kubernetes resources. It generates an inventory that keeps track of the set of resources applied together. The inventory is stored inside the cluster in a ConfigMap object and contains metadata such as the resources' provenance and revision.

The Kustomizer garbage collector uses the inventory to keep track of the applied resources and prunes the Kubernetes objects that were previously applied but are missing from the current revision.

You specify an inventory name and namespace at apply time; you can then use Kustomizer to list, diff, update, and delete inventories (see the example after this list):

  • kustomizer apply inventory <name> [--artifact <oci url>] [-f] [-p] -k
  • kustomizer diff inventory <name> [-a] [-f] [-p] -k
  • kustomizer get inventories --namespace <namespace>
  • kustomizer inspect inventory <name> --namespace <namespace>
  • kustomizer delete inventory <name> --namespace <namespace>
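
For example, a hypothetical apply-and-inspect sequence (the inventory name, namespace, and artifact URL are placeholders, and the long flag spellings are assumptions based on the shorthand shapes above):

kustomizer apply inventory my-app --namespace default --artifact oci://ghcr.io/my-org/my-app-config:1.0.0 --prune --wait
kustomizer get inventories --namespace default
kustomizer inspect inventory my-app --namespace default
kustomizer delete inventory my-app --namespace default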

When applying resources from OCI artifacts, Kustomizer saves the artifact's URL and the image's SHA-256 digest in the inventory. For deterministic and repeatable apply operations, you could use digests instead of tags.
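
For example, pinning an apply to an immutable digest (hypothetical repository, placeholder digest):

kustomizer apply inventory my-app --artifact oci://ghcr.io/my-org/my-app-config@sha256:<digest>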

Encryption at rest

Kustomizer has built-in support for encrypting and decrypting Kubernetes configuration (packaged as OCI artifacts) using age asymmetric keys.

To securely distribute sensitive Kubernetes configuration to trusted users, you can encrypt the artifacts with their age public keys:

  • kustomizer push artifact oci://<image-url>:<tag> --age-recipients <public keys>

Users can access the artifacts by decrypting them with their age private keys (a complete workflow follows the list):

  • kustomizer inspect artifact oci://<image-url>:<tag> --age-identities <private keys>
  • kustomizer pull artifact oci://<image-url>:<tag> --age-identities <private keys>
  • kustomizer apply inventory <name> [--artifact <oci url>] --age-identities <private keys>
  • kustomizer diff inventory <name> [--artifact <oci url>] --age-identities <private keys>
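
A minimal end-to-end sketch using age (the key file, recipient string, and artifact URL are placeholders):

age-keygen -o key.txt
# age-keygen prints the matching public key (age1...); encrypt for that recipient
kustomizer push artifact oci://ghcr.io/my-org/secret-config:1.0.0 -f ./secrets --age-recipients <age1-public-key>
kustomizer pull artifact oci://ghcr.io/my-org/secret-config:1.0.0 --age-identities ./key.txt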

Contributing

Kustomizer is Apache 2.0 licensed and accepts contributions via GitHub pull requests.

kustomizer's People

Contributors

dependabot[bot], developer-guy, goreleaserbot, lalloni, ndrpnt, stefanprodan

kustomizer's Issues

Cluster scoped resources with namespace fields (incorrectly) set never go healthy

I've noticed an issue when using Kustomizer: if I have a cluster-scoped resource (e.g. a ClusterRole) that incorrectly specifies a namespace, the inventory of objects that Kustomizer passes to flux/ssa never succeeds. The object will have an Unknown status because Kustomizer built an inventory entry with a namespace. flux/ssa correctly tracks the status of the bad ClusterRole object (it sets namespace: ""), but Kustomizer still believes it's Unknown. The tracked object list ends up with two copies in the set: one with a namespace (status: Unknown) and one without a namespace (status: Current, or whatever it may be).

It's not a huge problem: I can just fix the manifests themselves. But it took me a while to track this down, so I figured I'd write down the issue. It could be fixed in Kustomizer by looking up whether the object should be cluster-scoped or not before building the set of objects.
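
For reference, a minimal manifest that reproduces this (hypothetical example; the namespace field is the mistake):

cat <<'EOF' > clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-only
  namespace: default # wrong: ClusterRole is cluster-scoped, so this field should never be set
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
EOF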

Also: thanks for Kustomizer! It nicely fills a gap in kube tooling.

Support specifying how to run kubectl

We use kubectl (on CI) behind the Rancher CLI, using something like:

rancher kubectl apply ....

This allows rancher-cli to take care of kubectl authentication, among other things.

It would be useful if kustomizer allowed customizing how it runs kubectl, so we could tell it to do something like the above.

Something like:

kustomizer apply --with-kubectl "rancher kubectl" ...

Note that in this case, kustomizer would have to space-split the received string, using the first element as the command to execute and prepending what's left to its internally built kubectl argument list.

Does this sound right?
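
To illustrate the proposed splitting in plain shell (not actual kustomizer code; deploy.yaml is a placeholder):

WITH_KUBECTL="rancher kubectl"
# the unquoted expansion word-splits the string: the first word becomes the
# command to execute, the rest are prepended to the argument list
set -- $WITH_KUBECTL apply -f deploy.yaml
"$@" # runs: rancher kubectl apply -f deploy.yaml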

Error on apply

Just tried this tool for the first time, but couldn't get it to work.

I have a directory with manifests, managed by kustomize. I run:

kustomizer apply deployment/development/ --name=test --revision=1 --use-kustomize

I then get exit status 1 without further info.

Is this because I use kustomize 3? When I omit --use-kustomize, I get:

accumulating resources: accumulateFile "accumulating resources from '../base':
evalsymlink failure on '/tmp/base' : lstat /tmp/base: no such file or directory",
loader.New "error loading ../base with git: url lacks host: ../base,
dir: evalsymlink failure on '/tmp/base' : lstat /tmp/base: no such file or directory,
get: invalid source string: ../base"

Tried on Arch Linux against a kind cluster, with kustomizer_0.2.1_linux_amd64.tar.gz downloaded from GitHub.

Kustomizer leaks secrets on diff when using variable substitution in Flux

An example:

Command:

kustomizer diff -f cluster/apps/velero/helm-release.yaml

Output; note the values of $SECRET_DOMAIN and $NAS_ADDR:

► HelmRelease/velero/velero drifted
  strings.Join({
  	... // 2008 identical bytes
  	"a21296fe3e8eb933f67f76adf4f429\n  creationTimestamp: \"2021-09-07T",
  	"15:11:54Z\"\n  finalizers:\n  - finalizers.fluxcd.io\n  generation: ",
- 	"9",
+ 	"10",
  	"\n  labels:\n    kustomize.toolkit.fluxcd.io/name: apps\n    kustom",
  	"ize.toolkit.fluxcd.io/namespace: flux-system\n  name: velero\n  na",
  	... // 157 identical bytes
  	"   sourceRef:\n        kind: HelmRepository\n        name: vmware-",
  	"tanzu-charts\n        namespace: flux-system\n      version: 2.23.",
- 	"8",
+ 	"6",
  	"\n  interval: 5m\n  values:\n    cleanUpCRDs: false\n    configurati",
  	"on:\n      backupStorageLocation:\n        bucket: velero\n        ",
  	"config:\n          publicUrl: https://s3.",
- 	"----------------THIS IS THE ACTUAL VALUE OF ${SECRET_DOMAIN}----------------",
+ 	"${SECRET_DOMAIN}",
  	"\n          region: us-east-1\n          s3ForcePathStyle: true\n  ",
  	"        s3Url: http://",
- 	"----------------THIS IS THE ACTUAL VALUE OF ${NAS_ADDR}----------------",
+ 	"${NAS_ADDR}",
  	":9000\n        name: default\n      extraEnvVars:\n        TZ: Amer",
  	"ica/New_York\n      provider: aws\n      resticTimeout: 6h\n      v",
  	... // 169 identical bytes
  	"age:\n      repository: ghcr.io/k8s-at-home/velero\n    initContai",
  	"ners:\n    - image: ghcr.io/k8s-at-home/velero-plugin-for-aws:v1.",
- 	"2.1",
+ 	"3.0",
  	"\n      imagePullPolicy: IfNotPresent\n      name: velero-plugin-f",
  	"or-aws\n      volumeMounts:\n      - mountPath: /target\n        na",
  	"me: plugins\n    kubectl:\n      image:\n        repository: ghcr.i",
  	"o/k8s-at-home/kubectl\n        tag: v1",
- 	".22.2",
+ 	".22.1",
  	"\n    metrics:\n      enabled: true\n      serviceMonitor:\n        ",
  	"enabled: true\n    resources:\n      limits:\n        memory: 1500M",
  	... // 1052 identical bytes
  }, "")

Also, the description of this subcommand says "Diff compares the local Kubernetes manifests with the in-cluster objects and prints the YAML diff to stdout", yet the output above is not readable YAML.

Build inventory should match the original manifest no matter the flag

Description

Kustomizer lets you build a manifest from an OCI artifact and from a local kustomize directory. When both the remote OCI artifact and the local source are specified, the resulting manifest can contain duplicate resources.

Steps to reproduce

Create the manifests using the various flags

$ kustomizer build inventory demo-app --kustomize ./examples/demo-app --artifact oci://ghcr.io/stefanprodan/kustomizer-demo-app:1.0.0 > artifact-kustomize.yaml
$ kustomizer build inventory demo-app --kustomize ./examples/demo-app > kustomize.yaml
$ kustomizer build inventory demo-app --artifact oci://ghcr.io/stefanprodan/kustomizer-demo-app:1.0.0 > artifact.yaml

Now diff the manifests

$ diff artifact.yaml kustomize.yaml
$
$ diff artifact-kustomize.yaml kustomize.yaml
11,19d10
< kind: Namespace
< metadata:
<   annotations:
<     env: demo
<   labels:
<     app.kubernetes.io/instance: webapp
<   name: kustomizer-demo-app
< ---
< apiVersion: v1
36,51d26
< data:
<   redis.conf: |
<     maxmemory 64mb
<     maxmemory-policy allkeys-lru
<     save ""
<     appendonly no
< kind: ConfigMap
< metadata:
<   annotations:
<     env: demo
<   labels:
<     app.kubernetes.io/instance: webapp
<   name: redis-config-bd2fcfgt6k
<   namespace: kustomizer-demo-app
< ---
< apiVersion: v1
81,103d55
<   name: backend
<   namespace: kustomizer-demo-app
< spec:
<   ports:
<   - name: http
<     port: 9898
<     protocol: TCP
<     targetPort: http
<   - name: grpc
<     port: 9999
<     protocol: TCP
<     targetPort: grpc
<   selector:
<     app: backend
<   type: ClusterIP
< ---
< apiVersion: v1
< kind: Service
< metadata:
<   annotations:
<     env: demo
<   labels:
<     app.kubernetes.io/instance: webapp
123,141d74
<   name: cache
<   namespace: kustomizer-demo-app
< spec:
<   ports:
<   - name: redis
<     port: 6379
<     protocol: TCP
<     targetPort: redis
<   selector:
<     app: cache
<   type: ClusterIP
< ---
< apiVersion: v1
< kind: Service
< metadata:
<   annotations:
<     env: demo
<   labels:
<     app.kubernetes.io/instance: webapp
154,172d86
< apiVersion: v1
< kind: Service
< metadata:
<   annotations:
<     env: demo
<   labels:
<     app.kubernetes.io/instance: webapp
<   name: frontend
<   namespace: kustomizer-demo-app
< spec:
<   ports:
<   - name: http
<     port: 80
<     protocol: TCP
<     targetPort: http
<   selector:
<     app: frontend
<   type: ClusterIP
< ---
259,337d172
<   name: backend
<   namespace: kustomizer-demo-app
< spec:
<   minReadySeconds: 3
<   progressDeadlineSeconds: 60
<   revisionHistoryLimit: 5
<   selector:
<     matchLabels:
<       app: backend
<   strategy:
<     rollingUpdate:
<       maxUnavailable: 0
<     type: RollingUpdate
<   template:
<     metadata:
<       annotations:
<         prometheus.io/port: "9797"
<         prometheus.io/scrape: "true"
<       labels:
<         app: backend
<     spec:
<       containers:
<       - command:
<         - ./podinfo
<         - --port=9898
<         - --port-metrics=9797
<         - --grpc-port=9999
<         - --grpc-service-name=backend
<         - --level=info
<         - --cache-server=cache:6379
<         env:
<         - name: PODINFO_UI_COLOR
<           value: '#34577c'
<         image: ghcr.io/stefanprodan/podinfo:6.0.0
<         imagePullPolicy: IfNotPresent
<         livenessProbe:
<           exec:
<             command:
<             - podcli
<             - check
<             - http
<             - localhost:9898/healthz
<           initialDelaySeconds: 5
<           timeoutSeconds: 5
<         name: backend
<         ports:
<         - containerPort: 9898
<           name: http
<           protocol: TCP
<         - containerPort: 9797
<           name: http-metrics
<           protocol: TCP
<         - containerPort: 9999
<           name: grpc
<           protocol: TCP
<         readinessProbe:
<           exec:
<             command:
<             - podcli
<             - check
<             - http
<             - localhost:9898/readyz
<           initialDelaySeconds: 5
<           timeoutSeconds: 5
<         resources:
<           limits:
<             cpu: 2000m
<             memory: 512Mi
<           requests:
<             cpu: 100m
<             memory: 32Mi
< ---
< apiVersion: apps/v1
< kind: Deployment
< metadata:
<   annotations:
<     env: demo
<   labels:
<     app.kubernetes.io/instance: webapp
401,463d235
<   name: cache
<   namespace: kustomizer-demo-app
< spec:
<   selector:
<     matchLabels:
<       app: cache
<   template:
<     metadata:
<       labels:
<         app: cache
<     spec:
<       containers:
<       - command:
<         - redis-server
<         - /redis-master/redis.conf
<         image: public.ecr.aws/docker/library/redis:6.2.0
<         imagePullPolicy: IfNotPresent
<         livenessProbe:
<           initialDelaySeconds: 5
<           tcpSocket:
<             port: redis
<           timeoutSeconds: 5
<         name: redis
<         ports:
<         - containerPort: 6379
<           name: redis
<           protocol: TCP
<         readinessProbe:
<           exec:
<             command:
<             - redis-cli
<             - ping
<           initialDelaySeconds: 5
<           timeoutSeconds: 5
<         resources:
<           limits:
<             cpu: 1000m
<             memory: 128Mi
<           requests:
<             cpu: 100m
<             memory: 32Mi
<         volumeMounts:
<         - mountPath: /var/lib/redis
<           name: data
<         - mountPath: /redis-master
<           name: config
<       volumes:
<       - emptyDir: {}
<         name: data
<       - configMap:
<           items:
<           - key: redis.conf
<             path: redis.conf
<           name: redis-config-bd2fcfgt6k
<         name: config
< ---
< apiVersion: apps/v1
< kind: Deployment
< metadata:
<   annotations:
<     env: demo
<   labels:
<     app.kubernetes.io/instance: webapp
535,612d306
< apiVersion: apps/v1
< kind: Deployment
< metadata:
<   annotations:
<     env: demo
<   labels:
<     app.kubernetes.io/instance: webapp
<   name: frontend
<   namespace: kustomizer-demo-app
< spec:
<   minReadySeconds: 3
<   progressDeadlineSeconds: 60
<   revisionHistoryLimit: 5
<   selector:
<     matchLabels:
<       app: frontend
<   strategy:
<     rollingUpdate:
<       maxUnavailable: 0
<     type: RollingUpdate
<   template:
<     metadata:
<       annotations:
<         prometheus.io/port: "9797"
<         prometheus.io/scrape: "true"
<       labels:
<         app: frontend
<     spec:
<       containers:
<       - command:
<         - ./podinfo
<         - --port=9898
<         - --port-metrics=9797
<         - --level=info
<         - --backend-url=http://backend:9898/echo
<         - --cache-server=cache:6379
<         env:
<         - name: PODINFO_UI_COLOR
<           value: '#34577c'
<         image: ghcr.io/stefanprodan/podinfo:6.0.0
<         imagePullPolicy: IfNotPresent
<         livenessProbe:
<           exec:
<             command:
<             - podcli
<             - check
<             - http
<             - localhost:9898/healthz
<           initialDelaySeconds: 5
<           timeoutSeconds: 5
<         name: frontend
<         ports:
<         - containerPort: 9898
<           name: http
<           protocol: TCP
<         - containerPort: 9797
<           name: http-metrics
<           protocol: TCP
<         - containerPort: 9999
<           name: grpc
<           protocol: TCP
<         readinessProbe:
<           exec:
<             command:
<             - podcli
<             - check
<             - http
<             - localhost:9898/readyz
<           initialDelaySeconds: 5
<           timeoutSeconds: 5
<         resources:
<           limits:
<             cpu: 1000m
<             memory: 128Mi
<           requests:
<             cpu: 100m
<             memory: 32Mi
< ---
636,683d329
< ---
< apiVersion: autoscaling/v2beta2
< kind: HorizontalPodAutoscaler
< metadata:
<   annotations:
<     env: demo
<   labels:
<     app.kubernetes.io/instance: webapp
<   name: backend
<   namespace: kustomizer-demo-app
< spec:
<   maxReplicas: 2
<   metrics:
<   - resource:
<       name: cpu
<       target:
<         averageUtilization: 99
<         type: Utilization
<     type: Resource
<   minReplicas: 1
<   scaleTargetRef:
<     apiVersion: apps/v1
<     kind: Deployment
<     name: backend
< ---
< apiVersion: autoscaling/v2beta2
< kind: HorizontalPodAutoscaler
< metadata:
<   annotations:
<     env: demo
<   labels:
<     app.kubernetes.io/instance: webapp
<   name: frontend
<   namespace: kustomizer-demo-app
< spec:
<   maxReplicas: 4
<   metrics:
<   - resource:
<       name: cpu
<       target:
<         averageUtilization: 99
<         type: Utilization
<     type: Resource
<   minReplicas: 1
<   scaleTargetRef:
<     apiVersion: apps/v1
<     kind: Deployment
<     name: frontend

Using both --artifact and --kustomize duplicates each resource.

If the flags cannot be combined, the CLI should either say so or produce a merged, deduplicated manifest.

Kustomizer can't handle Kustomize overlays referring to sibling directories

When using kustomizer on a kustomization overlay that refers to bases located in sibling directories, it fails: kustomizer copies the directory to a temporary location, and the relative base paths are then broken.

For example, when a kustomization.yaml listing "../../../../bases/main" as one of its bases gets copied to the temporary directory and built with kustomize from there, kustomize fails because the base path no longer resolves.

I think kustomizer should not copy the kustomization out of the location where it is supposed to be built.

Allow overriding the path to the config file

Hi, I have a use case where my manifest contains some custom resources that ideally need to be created in a specific order. I can solve this by listing them in the config file (thanks for this feature!), but I really want to commit that config to git along with the manifests, build scripts, etc.

It appears that the config file path is hardcoded to $HOME/.kustomizer/config, so to do this I would have to add a step to my build script that copies the config file to that location (and hope we never have two different versions running at the same time).

Could we have a command-line parameter that sets the location of the config file (like --kubeconfig for kubectl), or, if you really don't want that, an environment variable to override the default path (like $KUBECONFIG does for kubectl), or both? :-)
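
Hypothetical usage, mirroring kubectl's conventions (neither the flag nor the variable exists today; this is the requested behavior):

kustomizer apply --config ./ci/kustomizer-config.yaml ...
KUSTOMIZER_CONFIG=./ci/kustomizer-config.yaml kustomizer apply ...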

Many thanks for considering this.

unknown flag for namespace in "get inventories"

Following the documentation at https://kustomizer.dev/, the command kustomizer get inventories -n default does not work.

Reproduce:

$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼 
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! 👋

# Follow the doc at https://kustomizer.dev/
$ kustomizer apply -f ./testdata/plain --prune --wait \
    --source="$(git ls-remote --get-url)" \
    --revision="$(git describe --always)" \
    --inventory-name=demo \
    --inventory-namespace=default
    
building inventory...
applying 10 manifest(s)...
Namespace/kustomizer-demo created
ServiceAccount/kustomizer-demo/demo created
ClusterRole/kustomizer-demo-read-only created
ClusterRoleBinding/kustomizer-demo-read-only created
Service/kustomizer-demo/backend created
Service/kustomizer-demo/frontend created
Deployment/kustomizer-demo/backend created
Deployment/kustomizer-demo/frontend created
HorizontalPodAutoscaler/kustomizer-demo/backend created
HorizontalPodAutoscaler/kustomizer-demo/frontend created
waiting for resources to become ready...
all resources are ready

$ kustomizer get inventories -n default
✗ unknown shorthand flag: 'n' in -n
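
A possible workaround, assuming the long flag matches the README's command list:

$ kustomizer get inventories --namespace default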

bug: kustomizer container image does not work

To reproduce the issue:

$ docker container run --rm ghcr.io/stefanprodan/kustomizer:v2.2.1 kustomizer
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "kustomizer": executable file not found in $PATH: unknown.

Somehow the kustomizer binary is not included in the image. We should investigate this.
