Helmify

CLI that creates Helm charts from Kubernetes manifests.

Helmify reads a list of supported k8s objects from stdin and converts them to a Helm chart. It is designed to generate charts for k8s operators, but is not limited to that. See the examples of charts generated by helmify.

Supports Helm >=v3.6.0

Submit an issue if some feature is missing for your use case.

Usage

  1. As pipe:

    cat my-app.yaml | helmify mychart

    Will create 'mychart' directory with Helm chart from yaml file with k8s objects.

    awk 'FNR==1 && NR!=1  {print "---"}{print}' /<my_directory>/*.yaml | helmify mychart

    Will create 'mychart' directory with Helm chart from all yaml files in <my_directory> directory.

  2. From filesystem:

    helmify -f /my_directory/my-app.yaml mychart

    Will create 'mychart' directory with Helm chart from my_directory/my-app.yaml.

    helmify -f /my_directory mychart

    Will create 'mychart' directory with Helm chart from all yaml files in <my_directory> directory.

    helmify -f /my_directory -r mychart

    Will create 'mychart' directory with Helm chart from all yaml files in <my_directory> directory recursively.

    helmify -f ./first_dir -f ./second_dir/my_deployment.yaml -f ./third_dir  mychart

    Will create 'mychart' directory with Helm chart from multiple directories and files.

  3. From kustomize output:

    kustomize build <kustomize_dir> | helmify mychart

    Will create 'mychart' directory with Helm chart from kustomize output.

Integrate into your Operator-SDK/Kubebuilder project

  1. Open Makefile in your operator project generated by Operator-SDK or Kubebuilder.
  2. Add these lines to Makefile:
  • With operator-sdk version < v1.23.0
    HELMIFY = $(shell pwd)/bin/helmify
    helmify:
    	$(call go-get-tool,$(HELMIFY),github.com/arttor/helmify/cmd/helmify@latest)
    
    helm: manifests kustomize helmify
    	$(KUSTOMIZE) build config/default | $(HELMIFY)
  • With operator-sdk version >= v1.23.0
    HELMIFY ?= $(LOCALBIN)/helmify
    
    .PHONY: helmify
    helmify: $(HELMIFY) ## Download helmify locally if necessary.
    $(HELMIFY): $(LOCALBIN)
    	test -s $(LOCALBIN)/helmify || GOBIN=$(LOCALBIN) go install github.com/arttor/helmify/cmd/helmify@latest
        
    helm: manifests kustomize helmify
    	$(KUSTOMIZE) build config/default | $(HELMIFY)
  3. Run make helm in the project root. It will generate a Helm chart named 'chart' in the 'chart' directory.

Install

With Homebrew (for macOS or Linux): brew install arttor/tap/helmify

Or download a binary suitable for your system from the Releases page, unpack the helmify binary, add it to your PATH, and you are good to go!

Available options

Helmify takes a chart name as an argument. Usage:

helmify [flags] CHART_NAME - CHART_NAME is optional. Default is 'chart'. Can be a directory, e.g. 'deploy/charts/mychart'.

flag: description (sample usage)

-h, -help: Prints help. (helmify -h)
-f: File source for k8s manifests, a directory or a file; multiple sources are supported. (helmify -f ./test_data)
-r: Scan the file directory recursively. Used only if -f is provided. (helmify -f ./test_data -r)
-v: Enable verbose output. Prints WARN and INFO. (helmify -v)
-vv: Enable very verbose output. Also prints DEBUG. (helmify -vv)
-version: Print the helmify version. (helmify -version)
-crd-dir: Place CRDs in their own folder per the Helm 3 docs. Caveat: CRD templating is not supported by Helm. (helmify -crd-dir)
-image-pull-secrets: Allows the user to use existing secrets as imagePullSecrets. (helmify -image-pull-secrets)
-original-name: Use the object's original name instead of adding the chart's release name as the common prefix. (helmify -original-name)
-cert-manager-as-subchart: Allows the user to install cert-manager as a subchart. (helmify -cert-manager-as-subchart)
-cert-manager-version: Allows the user to specify the cert-manager subchart version. Only useful with -cert-manager-as-subchart. Default "v1.12.2". (helmify -cert-manager-as-subchart)
-preserve-ns: Allows users to use the object's original namespace instead of adding all the resources to a common namespace. Default "false". (helmify -preserve-ns)

Status

Supported k8s resources:

  • Deployment, DaemonSet, StatefulSet
  • Job, CronJob
  • Service, Ingress
  • PersistentVolumeClaim
  • RBAC (ServiceAccount, (cluster-)role, (cluster-)roleBinding)
  • configs (ConfigMap, Secret)
  • webhooks (cert, issuer, ValidatingWebhookConfiguration)
  • custom resource definitions (CRD)

Known issues

  • Helmify will not overwrite the Chart.yaml file if it is present. This is done on purpose.
  • Helmify will not delete existing template files, only overwrite them.
  • Helmify overwrites templates and values files on every run. This means that all your manual changes in Helm template files will be lost on the next run.
  • If you switch the -crd-dir flag on or off, it is better to delete the chart and regenerate it from scratch to ensure CRDs are not accidentally spliced/formatted into the same chart. Bear in mind you will want to update your Chart.yaml afterwards.

Develop

To support a new type of k8s object template:

  1. Implement the helmify.Processor interface. Place the implementation in pkg/processor. The package contains examples for most k8s objects.
  2. Register your processor in pkg/app/app.go.
  3. Add a relevant input sample to test_data/kustomize.output.

Run

Clone repo and execute command:

cat test_data/k8s-operator-kustomize.output | go run ./cmd/helmify mychart

Will generate the mychart Helm chart from the file test_data/k8s-operator-kustomize.output, which represents typical operator kustomize output.

Test

For manual testing, run the program with debug output:

cat test_data/k8s-operator-kustomize.output | go run ./cmd/helmify -vv mychart

Then inspect logs and generated chart in ./mychart directory.

To execute tests, run:

go test ./...

Besides unit tests, the project contains an e2e test, pkg/app/app_e2e_test.go. It is a Go test that uses test_data/* to generate a chart in a temporary directory and then runs helm lint --strict to check that the generated chart is valid.

helmify's Issues

More flags to control what is generated

For our use case, we really want to leverage helmify to keep CRDs and RBAC up to date with changes in the chart kept in the same repository that houses a large set of controllers.

However, we want to maintain the chart (e.g. deployment, values, and so on) ourselves. We would really benefit from individual flags within helmify to control what it generates and what it doesn't (e.g. only do CRDs and RBAC).

allow to control CRDs installation

It would be useful to generate a template for CRDs that allows them to be installed or not, depending on a value ("enabled").

Indeed, if one plans to install several instances of the chart (in different namespaces or not), the CRDs can only be installed once, as they are cluster-wide (their name does not depend on the release name).
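
For illustration, a minimal sketch of what such a gated CRD template could look like (crds.enabled is a hypothetical value, not something helmify currently generates):

{{- if .Values.crds.enabled }}
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foobars.example.com
spec:
  # ... CRD spec as generated today ...
{{- end }}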

env var & fieldRef bug

Hi,

#72 released in https://github.com/arttor/helmify/releases/tag/v0.3.24 introduced a bug in the env vars using a fieldRef, such as

        - name: OPERATOR_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace

which gets converted to

        - name: OPERATOR_NAMESPACE
          value: {{ .Values.controllerManager.manager.operatorNamespace }}
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace

because we don't handle fieldRef and resourceFieldRef - see https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#envvarsource-v1-core

I'll open a PR tomorrow to fix it

Add crd spec to values.yaml

If my yaml file contains an object that is created by a CRD in the same file, it would be nice to add the spec to that object in values.yaml. Alternatively, #67 could be used and the user would have to do it manually.

Here's an example input:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
  names:
    kind: FooBar
  scope: Namespaced
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
          spec:
            properties:
              baz:
                type: boolean
...
---
...
kind: FooBar
metadata:
  name: my-foo-bar
spec:
  baz: true

Desired <chart>/templates/my-foo-bar.yaml:

...
kind: FooBar
metadata:
  name: my-foo-bar
spec:
{{ toYaml .Values.myFooBarSpec | indent 2 }}

Desired <chart>/values.yaml:

...
myFooBarSpec:
  baz: true

Ingress backend service name not being prefixed with xx.fullname and service name is being changed incorrectly

apiVersion: v1
kind: Service
metadata:
  name: firelymongo-service
  namespace: xxx
spec:
  selector:
    pod-Label: my-pod
  ports:
    - protocol: TCP
      port: 4080
      targetPort: 4080

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: firelyapi-ingress
  namespace: xxx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: dev.local.xyz.com
      http:
        paths:
          - backend:
              service:
                name: firelymongo-service
                port:
                  number: 4080
            path: /firelyapi(/|$)(.*)
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - dev.local.xyz.com

The service is named 'firelymongo-service', but the file generated for it is called just mongo-service.yaml. In that file...

apiVersion: v1
kind: Service
metadata:
  name: {{ include "ft.fullname" . }}-mongo-service
  labels:
  {{- include "ft.labels" . | nindent 4 }}

the service has its 'firely' part removed, which is wrong I believe.

The api-ingress.yaml that is generated is...

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "ft.fullname" . }}-api-ingress
  labels:
  {{- include "ft.labels" . | nindent 4 }}
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: dev.local.elektaplatform.com
      http:
        paths:
          - backend:
              service:
                name: firelymongo-service
                port:
                  number: 4080
            path: /firelyapi(/|$)(.*)
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - dev.local.elektaplatform.com

Note: the backend name: is 'firelymongo-service'. There is no fullname template here, so I'm having to manually change it to
name: {{ include "ft.fullname" . }}-mongo-service
which works fine.
I'm using PowerShell: Get-Content firelyServer-WithMongo.yaml | helmify ft

So part of the service name is being removed, and the generated file also has that part removed (this isn't critical, but seems strange).
Also the ingress backend service name is not being templated, which is causing an issue.

Or am I missing something?

Liveness and Readiness probes

For testing, it can be nice to remove readiness and liveness probes completely, but for production I would expect the possibility to customize them 100%.

I tried a first solution appending Helm if statements to the container spec by treating it as a string. This will of course make the container spec impossible to cast back to corev1.Containers, but as long as this step is done right before the metadata is converted to a string I see no problems with it...

We do not use startup probes, but maybe this approach could be extended.

Current implementation here: #73
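
For illustration only, a sketch of the kind of optional probe templating being discussed (the value names are hypothetical, not what helmify currently emits):

containers:
  - name: manager
    image: controller:latest
    {{- with .Values.controllerManager.manager.livenessProbe }}
    livenessProbe:
      {{- toYaml . | nindent 6 }}
    {{- end }}
    {{- with .Values.controllerManager.manager.readinessProbe }}
    readinessProbe:
      {{- toYaml . | nindent 6 }}
    {{- end }}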

Multi-line templates break the helm linter

The helm linter returns an error when the template is broken into multiple lines.

To reproduce:

>> git  clone https://github.com/arttor/helmify.git
>> cd helmify/examples/operator/
>> helm lint
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /var/snap/microk8s/2694/credentials/client.config
==> Linting .
[INFO] Chart.yaml: icon is recommended
[ERROR] templates/: parse error at (operator/templates/secret-vars.yaml:8): unclosed action

Error: 1 chart(s) linted, 1 chart(s) failed

Issue with apiVersion key in every yaml file

Hello I found an issue when either using awk 'FNR==1 && NR!=1 {print "---"}{print}' /<my_directory>/*.yaml | helmify mychart or manually concatenating yaml files and then using cat my-app.yaml | helmify mychart.

If at the beginning of every yaml file you have the apiVersion key apiVersion: v2, the helmify command will output Helm charts with required errors and hence the helm installation will fail.

Not sure if this happens with any key other than apiVersion in multiple yaml files. Great tool btw.

When I commented out this key at the beginning of each yaml file, helmify output the correct content for every k8s resource and only left the apiVersion key blank, which can easily be populated with a script so that the helm installation doesn't fail.

Error not showing file or line number

I run helmify on about 20+ files and only get errors like this:

ERRO[0000] helmify finished with error                   error="unable to cast to deployment: unrecognized type: int32"

So I have to check all files for possible ints?

Is it possible to get better error reporting?

Thanks
K

Add Image Pull Policy template

Our development pipeline overrides the imagePullPolicy to make sure the image built in the pipeline is used; it would be nice to have this value templated.

I propose to add something like the sketch below to make sure the field is templated.
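
A minimal illustration of the idea (the value paths and the image repository/tag split are illustrative assumptions, not guaranteed helmify output):

containers:
  - name: manager
    image: {{ .Values.controllerManager.manager.image.repository }}:{{ .Values.controllerManager.manager.image.tag | default .Chart.AppVersion }}
    # hypothetical: expose imagePullPolicy as a value with a sensible default
    imagePullPolicy: {{ .Values.controllerManager.manager.imagePullPolicy | default "IfNotPresent" }}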

Seg fault processing a Deployment

Trying to helmify some manifests, I get the following segmentation fault:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1691f93]

goroutine 1 [running]:
github.com/arttor/helmify/pkg/processor/deployment.deployment.Process({}, {0x1906678, 0xc0000aed40}, 0xc00000e548)
	/home/runner/work/helmify/helmify/pkg/processor/deployment/deployment.go:71 +0x293
github.com/arttor/helmify/pkg/app.(*appContext).process(0xc0000e9d50, 0xc00000e548)
	/home/runner/work/helmify/helmify/pkg/app/context.go:75 +0x39c
github.com/arttor/helmify/pkg/app.(*appContext).CreateHelm(0xc0000e9d50, 0xc000308000)
	/home/runner/work/helmify/helmify/pkg/app/context.go:57 +0x252
github.com/arttor/helmify/pkg/app.Start({0x18f4d20, 0xc00000e010}, {{0x7ffeefbffa88, 0xc}, {0x17ee766, 0x1}, 0x0, 0x0})
	/home/runner/work/helmify/helmify/pkg/app/app.go:58 +0x737
main.main()
	/home/runner/work/helmify/helmify/cmd/helmify/main.go:21 +0x1b1

Has anyone had a similar situation?

Allow to specify existing imagePullSecret for podSpec

It is considered best practice to include an imagePullSecrets field in values.yaml so that the consumer of a Helm chart can use a private registry for which a Secret has already been created.

It would be nice if helmify checked whether anything is assigned to the imagePullSecrets field of a PodSpec (which may be part of a Deployment, DaemonSet, ...). If it is set, helmify should continue as-is.

But if imagePullSecrets is not set, helmify should add an imagePullSecrets entry to values.yaml and include it in the PodSpec when the user has set it.
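
A rough sketch of the proposed pattern (the value name is an assumption, not existing helmify output):

# values.yaml
imagePullSecrets: []

# pod spec fragment (indentation/nindent adjusted to where it is placed)
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
  {{- toYaml . | nindent 2 }}
{{- end }}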

What do you think?

Webhook does not have correct certificate

I am trying the tool in our operator, and unfortunately it doesn't seem to inject the certificate into the ValidatingWebhookConfiguration and MutatingWebhookConfiguration correctly:

kustomize patch:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutating-webhook-configuration
  annotations:
    cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)

kustomize cert:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: serving-cert  # this name should match the one appeared in kustomizeconfig.yaml
  namespace: kyma-operator
[...]

generated helm manifests:

kind: MutatingWebhookConfiguration
metadata:
  name: {{ include "chart.fullname" . }}-mutating-webhook-configuration
  annotations:
    cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "chart.fullname" . }}-kyma-operator/kyma-operator-serving-cert
[...]

---

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: {{ include "chart.fullname" . }}-serving-cert
[...]

Secret value not transferred to output values.yaml

I have a secret defined in a kubernetes yaml file but the value doesn't get transferred to generated values.yaml file

Input

apiVersion: v1
kind: Secret
metadata:
  name: db-password
type: Opaque
data:
  password: c3VwZXJzZWNyZXRwYXNzd29yZAo=

Output

dbPassword:
  password: ""
kubernetesClusterDomain: cluster.local

Expected output

dbPassword:
  password: "supersecretpassword"
kubernetesClusterDomain: cluster.local

Keep original manifest filename, support list of files

Hi,
I want to use helmify on many files, keeping the original file names and sharing a single values.yaml.

Here is how I run it with bash

rm -rf pypi-server

for f in $(ls pypi-server-src);
do 
  cat pypi-server-src/$f | ./helmify pypi-server;
  mv pypi-server/templates/pyserver.yaml pypi-server/templates/$f;
  cat pypi-server/values.yaml >> pypi-server/values-all.yaml;
done

[Feature Request] Add support for secrets

In kustomization.yaml it is possible to generate secrets using secretGenerator:

secretGenerator:
- files:
  - ca.crt
  name: ca
  type: opaque
- envs:
  - envfile.env
  name: my-secret
  type: opaque

I would like the helmify tool to generate the secrets in the Helm Chart output.
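
For illustration, this is the kind of Secret template helmify already produces for plain Secret manifests (see the other Secret examples on this page); the request is to get the same output for secretGenerator entries. The value names below are hypothetical:

apiVersion: v1
kind: Secret
metadata:
  name: {{ include "chart.fullname" . }}-ca
  labels:
  {{- include "chart.labels" . | nindent 4 }}
type: Opaque
data:
  ca.crt: {{ required "ca.caCrt is required" .Values.ca.caCrt | b64enc | quote }}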

spec.template.metadata.annotations should add a return

Thank you for this amazing project!

the output:

  replicas: {{ .Values.Cuda10Z48C.replicas }}
  selector:
    matchLabels:
      app: middle-cuda10-z48c
    {{- include "middle-cuda10.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app: middle-cuda10-z48c
        app-group-id: "1"
        service-config-id: cgsvcjgw4xnwk4-29wb
        service-group-id: middle-cuda10
      {{- include "middle-cuda10.selectorLabels" . | nindent 8 }}      annotations:
        AppGroupID: "1"

initialising/updating crds do not inherit chart labels when running helmify against the kustomize build example

Helmify version: 0.3.15
example chart: https://github.com/fire-ant/consulkv-operator

Instructions:
Clone the example chart and run 'make helm'. The output returns a diff where the labels will not render.

Expected result:
The Go template instruction remains unchanged and is inserted into the labels field of the crd.yaml.

Actual result:
helmify does not render the labels for the CRD in 'crds/'.

Due to how CRDs are templated, it appears that the labels line of the CRD needs to be affixed to an include from the _helpers inside the templates directory; alternatively it may be better to handle this templating inside the logic of helmify itself (I'm a little unsure which is better and why).

Helm Plugin

Why not install and run helmify as a Helm plugin, to leverage some of the same context, such as the required Helm version and cluster context?

Does Helmify improve on, differ from, complement, and/or substitute for the languishing Chartify plugin?

put any CRD in its own directory

Thanks for making/maintaining this tool; it fills a much-needed gap in Operator-SDK, appreciated.

Per the latest Helm 3 docs, it would be nice if there were an option to place CRDs in their own directory so that users can take advantage of any further changes to how CRDs are handled during the install process. I can also see how this would be cleaner and would make other validation tooling simpler to use by convention.

I haven't written much Go but could have a look if it's more than 5 minutes for anyone else.

Chart.AppVersion not usable as a default since image.tag is set in values.yaml

Thank you for this amazing project.

When generating the charts, the image tag is also set in values.yaml. In the deployment itself, the default is set correctly (.Chart.AppVersion), but it cannot take effect because the image tag is already set in values.yaml. Especially in combination with helmfile, where values are only overridden, this is a little bit disturbing.
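
For context, a sketch of the usual Helm default pattern being referenced; .Chart.AppVersion only applies when the tag value is empty (value names are illustrative):

# Illustrative: pre-filling image.tag in values.yaml means the default is never used
image: "{{ .Values.controllerManager.manager.image.repository }}:{{ .Values.controllerManager.manager.image.tag | default .Chart.AppVersion }}"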

Wrong templating in certificate if chart name is the same as namespace name

The generated Certificate looks like this:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: {{ include "busola-operator.fullname" . }}-serving-cert
  labels:
  {{- include "busola-operator.labels" . | nindent 4 }}
spec:
  dnsNames:
  - '{{ include "{{ .Release.Namespace }}.fullname" . }}-$(SERVICE_NAME).$(SERVICE_NAMESPACE).svc'
  - '{{ include "{{ .Release.Namespace }}.fullname" . }}-$(SERVICE_NAME).$(SERVICE_NAMESPACE).svc.cluster.local'
  issuerRef:
    kind: Issuer
    name: '{{ include "busola-operator.fullname" . }}-selfsigned-issuer'
  secretName: webhook-server-cert

Under spec.dnsNames the template is {{ include "{{ .Release.Namespace }}.fullname" . }} although it should be {{ include "busola-operator.fullname" . }}

I am using helmify version 0.3.7

It works fine when the chart name is different from the namespace name; this only happens when both are the same.

Issue when using imagePullSecret (Secret type: kubernetes.io/dockerconfigjson)

Hello! Hope you are well.

When given a secret of type: kubernetes.io/dockerconfigjson, helmify does not preserve the type field in the resulting template:

Input:

apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: foobar-registry-credentials
data:
  .dockerconfigjson: |
    ewogICAgImF1dGhzIjogewogICAgICAgICJmb28uYmFyLmlvIjogewogICAgICAgICAgIC
    AidXNlcm5hbWUiOiAidXNlcm5hbWUiLAogICAgICAgICAgICAicGFzc3dvcmQiOiAic2Vj
    cmV0IiwKICAgICAgICAgICAgImF1dGgiOiAiZFhObGNtNWhiV1U2YzJWamNtVjAiCiAgIC
    AgICAgfQogICAgfQp9

Output:

apiVersion: v1
kind: Secret
metadata:
  name: {{ include "foobar.fullname" . }}-registry-credentials
  labels:
  {{- include "foobar.labels" . | nindent 4 }}
data:
  .dockerconfigjson: {{ required "registryCredentials.dockerconfigjson is required"
    .Values.registryCredentials.dockerconfigjson | b64enc | quote }}

I looked around the code and it seems like I might be able to simply add a field for type here and here. I will try to get something working and will make a PR if I'm successful.

Also (maybe an unrelated issue): I am using kustomize to add the 'foobar' name prefix. Helmify properly added {{ include "foobar.fullname" . }} to my Secret's name; however, in my deployment spec I just see the literal 'foobar' prefix:

...
spec:
  template:
    spec:
      ...
      imagePullSecrets:
      - name: foobar-registry-credentials

Allow more configuration to be customised

The generated helm chart should allow changing things like:

  • annotations
  • nodeSelector
  • affinity rules
  • etc

It would be great if we could support a config file where the user can specify schema/attribute name and field-ref/s and helmify will automatically templatize it.

Example:

vars:
- name: annotations
  optional: true
  objref:
    kind: Deployment
    group: apps
    version: v1
    name: xyz
  fieldrefs:
    - fieldpath: spec.template.metadata.annotation

PS: If the community and maintainers are interested in this, I am happy to contribute.

Error unclosed action

Hi,
I'm very impressed with helmify, saving me a lot of work, thank you!

One problem...
PS C:\dev\Helm\Helmify\One> helm install firstheml mychart
Error: parse error at (mychart/templates/mongodb-configmap.yaml:8): unclosed action

that file looks like this:

apiVersion: v1
kind: Secret
metadata:
  name: {{ include "mychart.fullname" . }}-mongodb-secret
  labels:
  {{- include "mychart.labels" . | nindent 4 }}
data:
  mongo-root-password: {{ required "mongodbSecret.mongoRootPassword is required" .Values.mongodbSecret.mongoRootPassword | b64enc | quote }}
  mongo-root-username: {{ required "mongodbSecret.mongoRootUsername is required" .Values.mongodbSecret.mongoRootUsername | b64enc | quote }}

the .Values file contains:

mongodbSecret:
  mongoRootPassword: ""
  mongoRootUsername: ""

All help much appreciated

Doug

Names that are too long generate invalid multi-line strings

The following snippet of YAML results in an invalid credentials.yaml file:

apiVersion: v1
kind: Secret
metadata:
  name: spellbook-credentials
type: Opaque
data:
  MQTT_HOST: "Zm9vYmFy"
  MQTT_PORT: "Zm9vYmFy"
  ELASTIC_HOST: "Zm9vYmFy"
  ELASTIC_PORT: "Zm9vYmFy"
  ELASTIC_PROTOCOL: "Zm9vYmFy"
  ELASTIC_PASSWORD: "Zm9vYmFy"
  ELASTIC_FOOBAR_HUNTER123_MEOWTOWN_VERIFY: "Zm9vYmFy"

The resulting output:

apiVersion: v1
kind: Secret
metadata:
  name: {{ include "spellbook.fullname" . }}-credentials
  labels:
  {{- include "spellbook.labels" . | nindent 4 }}
data:
  ELASTIC_FOOBAR_HUNTER123_MEOWTOWN_VERIFY: {{ required "credentials.elasticFoobarHunter123MeowtownVerify
    is required" .Values.credentials.elasticFoobarHunter123MeowtownVerify | b64enc
    | quote }}
  ELASTIC_HOST: {{ required "credentials.elasticHost is required" .Values.credentials.elasticHost
    | b64enc | quote }}
  ELASTIC_PASSWORD: {{ required "credentials.elasticPassword is required" .Values.credentials.elasticPassword
    | b64enc | quote }}
  ELASTIC_PORT: {{ required "credentials.elasticPort is required" .Values.credentials.elasticPort
    | b64enc | quote }}
  ELASTIC_PROTOCOL: {{ required "credentials.elasticProtocol is required" .Values.credentials.elasticProtocol
    | b64enc | quote }}
  MQTT_HOST: {{ required "credentials.mqttHost is required" .Values.credentials.mqttHost
    | b64enc | quote }}
  MQTT_PORT: {{ required "credentials.mqttPort is required" .Values.credentials.mqttPort
    | b64enc | quote }}

Running helm install will error out because of the invalid YAML generated for key names past a certain length:

$ helm install -f ./contrib/k8s/examples/spellbook/values.yaml webrtc-dev-telemetry ./contrib/k8s/charts/spellbook
Error: INSTALLATION FAILED: parse error at (spellbook/templates/credentials.yaml:8): unterminated quoted string

Thanks for your work on this, by the way.

Set empty value for NodeSelector

Is it possible to set an empty value for nodeSelector?
Use case: we want to generate a Helm chart with an empty default value for nodeSelector.

Let me know if it is currently possible.
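
For illustration, the common Helm pattern for an optional nodeSelector with an empty default (names are illustrative, not current helmify output):

# values.yaml
nodeSelector: {}

# pod spec fragment (indentation/nindent adjusted to where it is placed)
{{- with .Values.nodeSelector }}
nodeSelector:
  {{- toYaml . | nindent 2 }}
{{- end }}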

Deployment namespace is removed

If the deployment.yaml specifies a namespace, I would expect the output template to still have it; it would be nice to explicitly template it with {{ .Release.Namespace | quote }}.

Is there any reason why the namespace is removed? If not, I would like to propose this change.
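
A minimal sketch of the proposed output (the name helper follows the fullname pattern shown elsewhere on this page; the resource name is illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "chart.fullname" . }}-controller-manager
  namespace: {{ .Release.Namespace | quote }}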

Helmify Repo Contribution

Hi,

I'm a developer in AWS working on k8s ML platform/toolings. I recently came across your helmify tool that seems to solve the same problem we had a while ago.

At first we were deploying our program with kustomize but in our recent release we are supporting both helm and terraform. You can check out our repo: https://github.com/awslabs/kubeflow-manifests

I wonder if we can contribute back to the repo instead of solving the same thing.

It will be helpful if you can:

  • Give a brief introduction about yourself
  • Who is maintaining the repo? I see there are 13 contributors so far and it seems to be pretty well maintained (last commit was 7 hours ago), but it seems like you do the majority of the commits.
  • I see that it's under the MIT license; are there other licensing issues you want us to be aware of?

generate multiline configmap

I am trying to generate a Helm chart from the yaml file https://github.com/Altinity/clickhouse-operator/blob/master/deploy/operator/clickhouse-operator-install-bundle.yaml
but it creates variables with all the multiline code as one variable:
defaultPodTemplateYamlExample: |

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallationTemplate"
metadata:
  name: "default-oneperhost-pod-template"
spec:
  templates:
    podTemplates: 
      - name: default-oneperhost-pod-template
        distribution: "OnePerHost"

Am I doing something wrong, or is this what I should expect when using helmify?

ERRO[0000] wrong image format: XXX_production_traefik"

kompose --file production.yml -o $HOME/Documents/arthur_aks/arthur-kompose-manifests/production convert -c --volumes hostPath

Then using helmify:

awk 'FNR==1 && NR!=1 {print "---"}{print}' $HOME/Documents/arthur_aks/arthur-kompose-manifests/production/templates/*.yaml | helmify $HOME/Documents/arthur_aks/arthur-helmify

ERRO[0000] helmify finished with error error="wrong image format: arthur_paf_production_traefik"

This is the production.yaml (scaffolding from cookiecutter-django)

version: '3'

volumes:
  production_mongodb_data:
  production_traefik: {}

services:
  django:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: arthuracr.azurecr.io/arthur_paf_production_django:v1.0.0
    container_name: arthur_paf_production_django
    # platform: linux/x86_64
    depends_on:
      - mongo
    volumes:
      - .:/app:z
      - ../RTL_data/_PAF_Data_samples/PAF_MAIN_FILE:/uploaddata

    env_file:
      - ./.envs/.production/.django
      - ./.envs/.production/.mongodb
    ports:
      - "8000:8000"
      - "3000:3000"
    command: /start
    stdin_open: true
    tty: true

  mongo:
    image: mongo:5.0.6
    container_name: "mongo"
    restart: always
    env_file:
      - ./.envs/.production/.mongodb
    environment:
      - MONGO_INITDB_ROOT_USERNAME=<USERNAME>
      - MONGO_INITDB_ROOT_PASSWORD=<PASSWORD>
      - MONGO_INITDB_DATABASE=<DATABASE>
      - MONGO_INITDB_USERNAME=<USERNAME> 
      - MONGO_INITDB_PASSWORD=<PASSWORD>
    volumes:
      - production_mongodb_data:/data/db
      # - ${PWD}/_data/mongo:/data/db
      # - ${PWD}/docker/_mongo/fixtures:/import
      # - ${PWD}/docker/_mongo/scripts/init.sh:/docker-entrypoint-initdb.d/setup.sh
    ports:
      - 27017:27017

  traefik:
    build:
      context: .
      dockerfile: ./compose/production/traefik/Dockerfile
    image: arthur_paf_production_traefik
    container_name: arthur_paf_production_traefik
    depends_on:
      - django
    volumes:
      - production_traefik:/etc/traefik/acme:z
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"

helmify -version
Version: 0.3.12
Build Time: 2022-05-17T09:02:12Z
Git Commit: 3634ef5

helm version --short
v3.7.1+g1d11fcb

k version --short
Client Version: v1.22.2
Server Version: v1.21.9

Mac: 12.4

traefik-deployment.yaml:


apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: $HOME/.asdf/installs/kompose/1.26.0/bin/kompose --file production.yml -o $HOME/Documents/arthur_aks/arthur-kompose-manifests/production convert -c --volumes hostPath
    kompose.version: 1.26.0 (40646f47)
  creationTimestamp: null
  labels:
    io.kompose.service: traefik
  name: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: traefik
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: $HOME/.asdf/installs/kompose/1.26.0/bin/kompose --file production.yml -o $HOME/Documents/arthur_aks/arthur-kompose-manifests/production convert -c --volumes hostPath
        kompose.version: 1.26.0 (40646f47)
      creationTimestamp: null
      labels:
        io.kompose.service: traefik
    spec:
      containers:
        - image: arthur_paf_production_traefik
          name: arthur-paf-production-traefik
          ports:
            - containerPort: 80
            - containerPort: 443
          resources: {}
          volumeMounts:
            - mountPath: /etc/traefik/acme
              name: production-traefik
      restartPolicy: Always
      volumes:
        - hostPath:
            path: $HOME/Documents/arthur_paf
          name: production-traefik
status: {}

traefik-service.yaml:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: $HOME/.asdf/installs/kompose/1.26.0/bin/kompose --file production.yml -o $HOME/Documents/arthur_aks/arthur-kompose-manifests/production convert -c --volumes hostPath
    kompose.version: 1.26.0 (40646f47)
  creationTimestamp: null
  labels:
    io.kompose.service: traefik
  name: traefik
spec:
  ports:
    - name: "80"
      port: 80
      targetPort: 80
    - name: "443"
      port: 443
      targetPort: 443
  selector:
    io.kompose.service: traefik
status:
  loadBalancer: {}



Priority class created but name isn't match

When running awk 'FNR==1 && NR!=1 {print "---"}{print}' manifests/*.yaml | helmify project, PriorityClass objects are created, but the name of each class is prefixed with the project (helm chart) name.
Meanwhile, in templates/deployment.yaml and templates/statefulset.yaml the priority class names are unchanged, so when deploying the chart the replicasets fail with:

Warning FailedCreate 86s (x16 over 4m11s) replicaset-controller Error creating: pods "project-k8s-server-79466b6fc-" is forbidden: no PriorityClass with name low-priority was found

because the priority class names don't match.
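
For illustration, consistent output would template the pod spec reference the same way the PriorityClass name is templated (a sketch only, assuming the chart is named 'project'):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: {{ include "project.fullname" . }}-low-priority
value: 1000
---
# in templates/deployment.yaml, under spec.template.spec:
priorityClassName: {{ include "project.fullname" . }}-low-priority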

resolving claimName of PersistentVolumeClaim

The metadata.name of the PersistentVolumeClaim gets extended with the full name.
References to the PersistentVolumeClaim in pod.spec.volumes.persistentVolumeClaim.claimName are not adapted.

demo.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-storage
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /test
      name: demo
  volumes:
  - name: demo
    persistentVolumeClaim:
      claimName: test-storage
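
For illustration, the adapted reference in the Pod spec would look something like this (a sketch, assuming the default chart name 'chart'):

volumes:
  - name: demo
    persistentVolumeClaim:
      claimName: {{ include "chart.fullname" . }}-test-storage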

Allow adding more values.yaml

Since helmify always overwrites values.yaml, any additional values I would like to add get removed. One way to solve this would be to add an option where a file containing additional values is appended to the values.yaml file. Basically, what I'd like to do is:

cat my-app.yaml | helmify -additional-values=additionalValues.yaml mychart

Which would be effectively equivalent to:

cat my-app.yaml | helmify mychart
cat additionalValues.yaml >> mychart/values.yaml

Alternatively, it would be nice if the values.yaml was edited instead of overwritten, since I would also like to add comments and use tools like helm-docs

Doesn't convert arrays properly to the values file, using hardcoded values

If you have something like this in the kustomize file:

authorizedUsers: ["user1", "user2"]

Or like this:

authorizedUsers:
  - user1
  - user2

Then it is rendered directly in the template as a hardcoded value, instead of creating this array in values.yaml and using it in the template as every other variable does.

Expected in the template example:

authorizedUsers: {{ toYaml .Values.authorizedUsers | nindent 8 }}

Change in `values.yaml` v0.0.30 to v0.0.31 - removes ability to define end user params via ConfigMap

Due to PR #94, using a configMapGenerator with a YAML file in a Kustomization generates an invalid ConfigMap.

Error: YAML parse error on kaleido-helm/templates/operator.yaml: error converting YAML to JSON: yaml: line 12: mapping values are not allowed in this context

I get YAML like this, where you can see the line with indent 1, which does not work with a complex set of values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "myname.fullname" . }}-operator
  labels:
  {{- include "myname.labels" . | nindent 4 }}
data:
  config.yaml: {{ .Values.myname.configYaml | toYaml | indent 1 }}

You can see the same issue in the example provided in the PR:
https://github.com/arttor/helmify/blob/main/examples/operator/templates/manager-config.yaml

Here's an example of what a helm template --debug shows me, showing how the YAML is incorrectly inserted:

data:
  config.yaml:  images:
   myImage: myreg.jfrog.io/myreg:mytag
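
For reference, one common Helm pattern that keeps nested YAML valid is to render the whole value into an indented block scalar (a sketch only, assuming .Values.myname.configYaml holds the file content as a string; this is not necessarily how helmify addresses it):

data:
  config.yaml: |
    {{- .Values.myname.configYaml | nindent 4 }}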

Can we add more custom pod level specs?

For our use case, we would need the Helm values to give the user the possibility to add:

  • topology constraints
  • node selector
  • tolerations

Since these are defined for the whole Pod spec but are not always present in the manifests, I would go with the Bitnami approach and add them as empty values to be filled into the template only if present (see the sketch below).

My suggestions are drafted here #77
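
A rough sketch of that approach (value names are illustrative, not actual helmify output; nodeSelector would be handled the same way):

# values.yaml: empty defaults
topologySpreadConstraints: []
tolerations: []

# pod spec fragment (indentation/nindent adjusted to where it is placed)
{{- with .Values.topologySpreadConstraints }}
topologySpreadConstraints:
  {{- toYaml . | nindent 2 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
  {{- toYaml . | nindent 2 }}
{{- end }}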

`-crd-dir` does not handle webhook namespace

When running helmify with the -crd-dir flag, the operator namespace is not removed from the generated CRDs. I am not sure what the correct approach should be here, since the crds cannot be templated.

Have a look at the code diff in this branch, reproduced here:

diff --git a/examples/operator/templates/cephvolume-crd.yaml b/examples/operator/crds/cephvolume-crd.yaml
similarity index 91%
rename from examples/operator/templates/cephvolume-crd.yaml
rename to examples/operator/crds/cephvolume-crd.yaml
index 4b187e3..84153d9 100644
--- a/examples/operator/templates/cephvolume-crd.yaml
+++ b/examples/operator/crds/cephvolume-crd.yaml
@@ -1,14 +1,13 @@
 apiVersion: apiextensions.k8s.io/v1
 kind: CustomResourceDefinition
 metadata:
-  name: cephvolumes.test.example.com
   annotations:
-    cert-manager.io/inject-ca-from: '{{ .Release.Namespace }}/{{ include "operator.fullname"
-      . }}-serving-cert'
+    cert-manager.io/inject-ca-from: my-operator-system/my-operator-serving-cert
     example-annotation: xyz
+  creationTimestamp: null
   labels:
     example-label: my-app
-  {{- include "operator.labels" . | nindent 4 }}
+  name: cephvolumes.test.example.com
 spec:
   group: test.example.com
   names:
diff --git a/examples/operator/templates/manifestcephvolume-crd.yaml b/examples/operator/crds/manifestcephvolume-crd.yaml
similarity index 90%
rename from examples/operator/templates/manifestcephvolume-crd.yaml
rename to examples/operator/crds/manifestcephvolume-crd.yaml
index 64abcc4..3c807c9 100644
--- a/examples/operator/templates/manifestcephvolume-crd.yaml
+++ b/examples/operator/crds/manifestcephvolume-crd.yaml
@@ -1,20 +1,18 @@
 apiVersion: apiextensions.k8s.io/v1
 kind: CustomResourceDefinition
 metadata:
-  name: manifestcephvolumes.test.example.com
   annotations:
-    cert-manager.io/inject-ca-from: '{{ .Release.Namespace }}/{{ include "operator.fullname"
-      . }}-serving-cert'
-  labels:
-  {{- include "operator.labels" . | nindent 4 }}
+    cert-manager.io/inject-ca-from: my-operator-system/my-operator-serving-cert
+  creationTimestamp: null
+  name: manifestcephvolumes.test.example.com
 spec:
   conversion:
     strategy: Webhook
     webhook:
       clientConfig:
         service:
-          name: '{{ include "operator.fullname" . }}-webhook-service'
-          namespace: '{{ .Release.Namespace }}'
+          name: my-operator-webhook-service
+          namespace: my-operator-system
           path: /convert
       conversionReviewVersions:
       - v1

Open to adding args/extraArgs?

Awesome project!

We're using helmify in a kubebuilder project as mentioned in the README.md.

Now we're looking into ways to allow the user to configure the controller a bit more when they install the helm chart. I'm not sure what the most idiomatic patterns for doing this but args is one that comes to mind. Would the helmify project be open to a change like that?

For example:

  • extracting args from the deployment into the values.yaml for direct control during helm install
  • extraArgs as a convenience to add additional configuration without overriding the defaults?

I imagine it would work similarly to the securityContext extraction already in place; a rough sketch is below.

Happy to put together a PR if this makes sense?
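
For illustration, one way the args/extraArgs split could look (a sketch under assumed value names, not an existing helmify feature):

# values.yaml
controllerManager:
  manager:
    args:
      - --leader-elect
    extraArgs: []

# container spec fragment
args:
  {{- toYaml .Values.controllerManager.manager.args | nindent 2 }}
  {{- with .Values.controllerManager.manager.extraArgs }}
  {{- toYaml . | nindent 2 }}
  {{- end }}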

Only one output template file is generated

Hi!
I have the following issue. I'm piping the following into helmify via
oc process -f template.yaml -o yaml | yq '.items[] | split_doc' | helmify
oc is a CLI for working with an OpenShift cluster. The output before helmify is the following:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
... 
---
apiVersion: v1
kind: Service
...
---
apiVersion: route.openshift.io/v1
kind: Route
...

I expected that the chart would have three files, one per resource, but I only get one with all three in it.
Is this intended or is there some error in my approach?

Thanks a lot for help!
