helm-app-operator-kit's People

Contributors

alecmerdler, aravindhp, benjaminapetersen, benmathews, ecordell, joelanford, josephschorr, jzelinskie, madorn, msfuko, nak3, nicolast, robszumski, shettyh, tobru, zbwright

helm-app-operator-kit's Issues

Helm install failed. Forbidden: User system:serviceaccount:ndn-base:default

I created the operator as described in the documentation, but I am getting this error while deploying the custom resource:


"release helm-app-operator-ndn.xyz.com failed: namespaces \"ndn-base\" is forbidden: User \"system:serviceaccount:ndn-base:default\" cannot get namespaces in the namespace \"ndn-base\""
time="2018-09-04T10:07:10Z" level=error msg="error syncing key (ndn-base/ndn.xyz.com): release helm-app-operator-ndn.xyz.com failed: namespaces \"ndn-base\" is forbidden: User \"system:serviceaccount:ndn-base:default\" cannot get namespaces in the namespace \"ndn-base\""

After this error, on the next event the operator itself exits with the following error:

time="2018-09-04T10:07:10Z" level=info msg="processing ndn-base/ndn.xyz.com"
time="2018-09-04T10:07:10Z" level=info msg="using values: size: 1\n"
ERROR: logging before flag.Parse: E0904 10:07:10.778333       1 runtime.go:66] Observed a panic: "index out of range" (runtime error: index out of range)
/go/src/github.com/operator-framework/helm-app-operator-kit/helm-app-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/go/src/github.com/operator-framework/helm-app-operator-kit/helm-app-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/github.com/operator-framework/helm-app-operator-kit/helm-app-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/asm_amd64.s:573
/usr/local/go/src/runtime/panic.go:502
/usr/local/go/src/runtime/panic.go:28
/go/src/github.com/operator-framework/helm-app-operator-kit/helm-app-operator/vendor/k8s.io/helm/pkg/storage/storage.go:132
/go/src/github.com/operator-framework/helm-app-operator-kit/helm-app-operator/vendor/k8s.io/helm/pkg/tiller/release_update.go:73
/go/src/github.com/operator-framework/helm-app-operator-kit/helm-app-operator/vendor/k8s.io/helm/pkg/tiller/release_update.go:38
/go/src/github.com/operator-framework/helm-app-operator-kit/helm-app-operator/pkg/helm/helm.go:86
/go/src/github.com/operator-framework/helm-app-operator-kit/helm-app-operator/pkg/stub/handler.go:39

Any idea what could be wrong?
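The forbidden error above suggests the operator is running as the default service account in ndn-base without RBAC permissions on namespaces (and probably not on the other resources the chart creates either). A minimal sketch of the kind of Role/RoleBinding that would unblock the namespace lookup — the names are taken from the error message, and the rule list is an assumption, not the project's documented RBAC; a dedicated service account with broader rules for the chart's resources would be the fuller fix:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: helm-app-operator
  namespace: ndn-base
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get"]
  # The resources the chart itself creates (deployments, services, ...) need rules too.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-app-operator
  namespace: ndn-base
subjects:
  - kind: ServiceAccount
    name: default
    namespace: ndn-base
roleRef:
  kind: Role
  name: helm-app-operator
  apiGroup: rbac.authorization.k8s.io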

"chart metadata (Chart.yaml) missing" when deploying custom resource

Hi,

I deployed OLM to my cluster and the Tomcat Operator from 'examples' (https://github.com/operator-framework/helm-app-operator-kit/tree/master/examples/tomcat-operator). They are both up and running, and the Tomcat Operator starts with the following logs:

time="2018-09-18T10:23:09Z" level=info msg="Go Version: go1.10.4"
time="2018-09-18T10:23:09Z" level=info msg="Go OS/Arch: linux/amd64"
time="2018-09-18T10:23:09Z" level=info msg="operator-sdk Version: 0.0.5+git"
time="2018-09-18T10:23:09Z" level=info msg="Watching apache.org/v1alpha1, Tomcat, default, 5"

When I try to create a new Tomcat resource with kubectl create -f cr.yaml, where cr.yaml is:

apiVersion: apache.org/v1alpha1
kind: Tomcat
metadata:
  name: example-app
spec:
  replicaCount: 2

then I get the following error in the Tomcat Operator logs:

time="2018-09-18T10:24:00Z" level=info msg="processing default/example-app"
time="2018-09-18T10:24:00Z" level=info msg="using values: replicaCount: 2\n"
time="2018-09-18T10:24:00Z" level=error msg="chart metadata (Chart.yaml) missing"
time="2018-09-18T10:24:00Z" level=error msg="error syncing key (default/example-app): chart metadata (Chart.yaml) missing"

I SSH'ed into the Tomcat Operator pod, and I can see the Tomcat Helm chart under the /chart directory; it contains Chart.yaml.
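For what it's worth, that error string comes from Helm's chart loader when it cannot find Chart.yaml at the top level of the path it is given, so it is worth double-checking that the chart path the operator uses points at the directory that directly contains Chart.yaml (not its parent). A small Go sketch that reproduces the load outside the operator, assuming the operator loads the chart from a directory path with Helm's chartutil:

package main

import (
	"fmt"
	"log"

	"k8s.io/helm/pkg/chartutil"
)

func main() {
	// "/chart" is the in-container path mentioned above; point this at the
	// directory that directly contains Chart.yaml.
	c, err := chartutil.Load("/chart")
	if err != nil {
		// Reproduces "chart metadata (Chart.yaml) missing" when Chart.yaml
		// is not at the top level of the given directory.
		log.Fatalf("failed to load chart: %v", err)
	}
	fmt.Printf("loaded chart %s-%s\n", c.Metadata.Name, c.Metadata.Version)
}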

Update helm-app-operator-kit to use controller-runtime

There has been significant effort to transition operator-sdk to use controller-runtime (https://github.com/kubernetes-sigs/controller-runtime).

See the refactor/controller-runtime branch: https://github.com/operator-framework/operator-sdk/tree/refactor/controller-runtime

This operator is currently based on the "previous" operator-sdk scaffolding and APIs, which will soon be deprecated in favor of the controller-runtime updates.

I'm working on a PR for this work, but would like to open this up for discussion and planning.

@hasbro17 - Is the refactor/controller-runtime branch sufficiently ready to be used as a basis for scaffolding out a new version of this operator?

How to override the values in the chart?

I tried adding the values in the spec, like this:

apiVersion: xyz.com/v1alpha1
kind: Xyz
metadata:
  name: xyz.com
  labels:
    app: xyz
spec:
  size: 1
  global:
    host: x.x.x.x
    serviceA:
      enabled: false

But the chart is not using those values when installed.

Can anyone please tell me how to properly set the values for the chart through the CR?

Thanks in advance
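For what it's worth, other issues here show the operator logging using values: size: 1, which suggests the spec block is passed to the chart as values verbatim. Under that assumption, the CR above should be equivalent to installing with the values.yaml below, so if the chart is not picking the values up, the most likely cause is that the nesting does not match the chart's own values.yaml keys:

# Hypothetical values.yaml equivalent of the spec above -- key names and
# nesting must match the chart's values.yaml exactly.
size: 1
global:
  host: x.x.x.x
  serviceA:
    enabled: false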

Not able to install

I am just trying to run this project from scratch.

I get the following error:

kubectl create -f example-app-operator.v0.0.1.clusterserviceversion.yaml
error: unable to recognize "example-app-operator.v0.0.1.clusterserviceversion.yaml": no matches for kind "ClusterServiceVersion-v1" in version "app.coreos.com/v1alpha1"

Clearly, I do not have the Kind or the API group. What are these, and how do I install them?
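The "no matches for kind ClusterServiceVersion-v1" error usually means the cluster does not have the OLM CRDs installed at all; the ClusterServiceVersion kind is provided by the Operator Lifecycle Manager, not by Kubernetes itself. A quick check, assuming kubectl access to the cluster:

# If this prints nothing, OLM (which provides the ClusterServiceVersion CRD
# in the app.coreos.com group used here) has not been installed yet.
kubectl get crd | grep -i clusterserviceversion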

Operator pod error when new cr created.

I built an operator for the chart stable/memcached, and the operator was deployed successfully. When I deploy a new instance of my chart using kubectl apply -f cr.yaml, the StatefulSet and the Service are created, but the operator pod falls into an error state. Here is the log:
log.txt

How to use this thing?

Hey all, when I try to run a demo on a bare kube cluster, I get this:

kubectl apply -f example-app-operator.v0.0.1.clusterserviceversion.yaml
error: unable to recognize "example-app-operator.v0.0.1.clusterserviceversion.yaml": no matches for app.coreos.com/, Kind=ClusterServiceVersion-v1

What am I missing here, and how do I get it?

crd-name is a required parameter

Got something like this when running your example:

{"level":"error","ts":1528876437.6925492,"caller":"cmd/start.go:49","msg":"failed to start server","error":"crd-name is a required parameter","stacktrace":"github.com/wpengine/lostromos/cmd.glob..func2\n\t/home/alec/go/src/github.com/wpengine/lostromos/cmd/start.go:49\ngithub.com/wpengine/lostromos/vendor/github.com/spf13/cobra.(*Command).execute\n\t/home/alec/go/src/github.com/wpengine/lostromos/vendor/github.com/spf13/cobra/command.go:702\ngithub.com/wpengine/lostromos/vendor/github.com/spf13/cobra.(*Command).ExecuteC\n\t/home/alec/go/src/github.com/wpengine/lostromos/vendor/github.com/spf13/cobra/command.go:783\ngithub.com/wpengine/lostromos/vendor/github.com/spf13/cobra.(*Command).Execute\n\t/home/alec/go/src/github.com/wpengine/lostromos/vendor/github.com/spf13/cobra/command.go:736\ngithub.com/wpengine/lostromos/cmd.Execute\n\t/home/alec/go/src/github.com/wpengine/lostromos/cmd/cmd.go:52\nmain.main\n\t/home/alec/go/src/github.com/wpengine/lostromos/main.go:22\nruntime.main\n\t/usr/lib/go/src/runtime/proc.go:195"}
{"level":"error","ts":1528876437.6926394,"caller":"cmd/cmd.go:49","msg":"failed to sync logger","error":"sync /dev/stderr: invalid argument","stacktrace":"github.com/wpengine/lostromos/cmd.Execute.func1\n\t/home/alec/go/src/github.com/wpengine/lostromos/cmd/cmd.go:49\ngithub.com/wpengine/lostromos/cmd.Execute\n\t/home/alec/go/src/github.com/wpengine/lostromos/cmd/cmd.go:56\nmain.main\n\t/home/alec/go/src/github.com/wpengine/lostromos/main.go:22\nruntime.main\n\t/usr/lib/go/src/runtime/proc.go:195"}```

Um, help?

Error when using a local Helm chart directory

Originating from this issue.

In summary, when I attempt to build with a local Helm chart:

docker build -t quay.io/benjaminapetersen/myapp-operator --build-arg HELM_CHART=examples/myapp/ --build-arg API_VERSION=app.benjaminapetersen.me/v1alpha1 --build-arg KIND= MyApp .

the output is:

Sending build context to Docker daemon  337.2MB
Step 1/20 : FROM golang:1.10 as builder
 ---> d0e7a411e3da
Step 2/20 : ARG HELM_CHART
 ---> Using cache
 ---> a601d3a18a00
Step 3/20 : ARG API_VERSION
...
... redacted ....
...
Step 18/20 : RUN mkdir /chart   && tar -xzf /chart.tgz --strip-components=1 -C /chart   && rm /chart.tgz
 ---> Running in 6a1f2e491c55
tar: invalid magic
tar: short read
The command '/bin/sh -c mkdir /chart   && tar -xzf /chart.tgz --strip-components=1 -C /chart   && rm /chart.tgz' returned a non-zero code: 1

"invalid magic" is pretty awesome, but @alecmerdler stated:

Docker build has slightly inconsistent behavior of the ADD command. 
See https://docs.docker.com/engine/reference/builder/#add
The reason the Dockerfile was written this way was to let users quickly create an Operator 
using a pre-made, remotely hosted Helm Chart. Obviously we need more logic here to 
determine if you are trying to use a local, uncompressed Chart directory.
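Until that extra logic exists, one possible workaround — sketched here under the assumption that the Dockerfile ADDs $HELM_CHART to /chart.tgz and then untars it, as the build output above shows — is to package the local chart with helm package and reference the resulting .tgz over HTTP, the same way the remote Tomcat chart is referenced (remote URLs are copied as-is by ADD, avoiding its local-archive auto-extraction):

helm package examples/myapp/          # produces myapp-<version>.tgz (version comes from Chart.yaml)
python3 -m http.server 8879 &         # any static file server reachable from the Docker build will do
docker build -t quay.io/benjaminapetersen/myapp-operator \
  --build-arg HELM_CHART=http://<host-reachable-from-docker>:8879/myapp-<version>.tgz \
  --build-arg API_VERSION=app.benjaminapetersen.me/v1alpha1 \
  --build-arg KIND=MyApp .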

Release name lacks sufficient uniqueness

This issue manifests itself only after #48 is merged, once multiple resource types are supported.

The release name lookup for an object returns a value that is not sufficiently unique:

func releaseName(r *v1alpha1.HelmApp) string {
	return fmt.Sprintf("%s-%s", operatorName, r.GetName())
}

Kubernetes allows different resource types to have the same name, even within the same namespace. If the operator is watching multiple resource types and there is an instance of each resource type with the same name, they will resolve to the same release name, which Tiller does not support.

An obvious option is to include the resource type in the release name, but that is problematic due to the 53-character limit on release names imposed by Helm.

A better solution may be something like:

func releaseName(r *v1alpha1.HelmApp) string {
	if r.Status.Release != nil {
		return r.Status.Release.GetName()
	}
	return ""
}

This would pull the release name from the object's actual release, and if that doesn't exist, an empty string would result in Tiller generating a unique release name during release installation. This approach would require the use of a finalizer (provided by #48) to be able to look up the release name during uninstallation.

We could also consider supporting an annotation or two that allow a user to override the default provided by Tiller. For example:

  • <resource_string>/resource-name-as-helm-release-name: "true"|"false"
  • <resource_string>/helm-release-name: <release_name>

Thoughts?
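To make the annotation idea concrete, here is a rough sketch of how the two could combine. The annotation key and the precedence order are assumptions for illustration, not a settled design:

// releaseName resolves the Helm release name for an object.
// Assumed precedence: explicit annotation > existing release in status >
// empty string, which lets Tiller generate a unique name at install time.
func releaseName(r *v1alpha1.HelmApp) string {
	// Hypothetical annotation key for an explicit override.
	if name, ok := r.GetAnnotations()["helm.example.com/release-name"]; ok && name != "" {
		return name
	}
	if r.Status.Release != nil {
		return r.Status.Release.GetName()
	}
	return ""
}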

Read config file to support multiple charts simultaneously

To make this operator more versatile, it could be updated to support watching multiple CRs, each mapped to a different Helm chart directory based on the contents of a config file, much like github.com/water-hole/ansible-operator

Something like this:

---

- group: example.com
  version: v1alpha1
  kind: MyApp
  chart: /charts/myapp

- group: example.com
  version: v1alpha1
  kind: MyDatabase
  chart: /charts/mydatabase

I'd propose adding a new environment variable (e.g. HELM_WATCHES_FILE). If it is present and set, read the mapping config from it. If not, just fall back to the existing environment variables to maintain backward compatibility.

It looks pretty straightforward to update register.go to expose a function to dynamically add new types to the scheme, and to update helm.go to use a GVK -> chartDir lookup rather than a static chartDir setting.

Thoughts?
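A sketch of what reading the mapping could look like, following the YAML example above. The env var, field names, and package layout are assumptions for illustration, not an agreed design:

package watches

import (
	"io/ioutil"
	"os"

	"gopkg.in/yaml.v2"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// watch mirrors one entry of the proposed config file.
type watch struct {
	Group   string `yaml:"group"`
	Version string `yaml:"version"`
	Kind    string `yaml:"kind"`
	Chart   string `yaml:"chart"`
}

// Load returns a GVK -> chart directory map, or nil if HELM_WATCHES_FILE is
// unset (callers would then fall back to the existing environment variables).
func Load() (map[schema.GroupVersionKind]string, error) {
	path, ok := os.LookupEnv("HELM_WATCHES_FILE")
	if !ok {
		return nil, nil
	}
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var ws []watch
	if err := yaml.Unmarshal(data, &ws); err != nil {
		return nil, err
	}
	m := make(map[schema.GroupVersionKind]string, len(ws))
	for _, w := range ws {
		m[schema.GroupVersionKind{Group: w.Group, Version: w.Version, Kind: w.Kind}] = w.Chart
	}
	return m, nil
}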

Charts with randomly generated fields that are part of pod annotations fail to be installed (e.g. stable/redis)

While attempting to install the default version (i.e. without specifying any overrides or changing the values*.yaml files) of the stable/redis chart, I'm noticing that the Redis master pod goes into a Started -> Terminating -> Started loop. This happens because the master statefulset keeps getting revised. The reason why this is happening for the Redis chart is that it contains the following pod annotation in a StatefulSet spec (this is part of the redis-master-statefulset.yaml file in the chart):

checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}

The secret itself is randomly generated thanks to this:

data:
  {{- if .Values.password }}
  redis-password: {{ .Values.password | b64enc | quote }}
  {{- else }}
  redis-password: {{ randAlphaNum 10 | b64enc | quote }}
  {{- end }}
{{- end -}}

In other words, if the values.yaml file doesn't specify a value for the password, the pod annotation will be randomly generated.

This results in a continuous release update loop when the helm-app-operator tries to install the release. As far as I can tell from the code, this is the sequence of events:

  1. The chart is installed for the first time. reconcile.go::Reconcile is invoked, and after the installation finishes, the reconciler queues a resync because of return reconcile.Result{RequeueAfter: r.ResyncPeriod}, err.
  2. The resync is triggered. This time, we fall through to if manager.IsUpdateRequired() {, which returns true. The reason is that the operator checks whether a chart update is needed by doing a dry-run installation of the chart, and each dry run naturally produces a manifest with a different value for the password field.
  3. The update process begins, leading to a new revision of the StatefulSet being created (and the old pod being terminated and replaced).
  4. Steps 2 and 3 repeat.

I understand that this problem is avoided by providing an override for generated values. However, for my project, it would be great if charts could work as-is, without having to provide overridden values.
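As a stopgap under the current behavior, pinning the generated value in the CR makes the dry-run manifest deterministic and stops the loop. A hedged sketch, assuming the CR spec is passed straight through as chart values; the group/version and kind below are placeholders:

apiVersion: example.com/v1alpha1
kind: Redis
metadata:
  name: example-redis
spec:
  # Overrides the randAlphaNum default in secret.yaml, so every dry run
  # renders the same checksum/secret annotation.
  password: "a-fixed-password"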

Log what resources are created in K8s

I ran into an issue debugging the example Tomcat operator (and a MariaDB example of my own), where the operator processes cr.yaml and creates the resources transparently (the K8s Deployment and Service objects defined by the chart), but the pods themselves are never created because that action was blocked by an OpenShift Security Context Constraint. It appeared as though the operator wasn't actually deploying resources when processing cr.yaml; however, it actually did, it just wasn't obvious from the output of oc logs <operator-pod-id>.

After giving the code a rough glance, it appears that these messages are logged via logrus. It would be ideal to log something, in addition to processing <namespace>/my-tomcat and using values: replicaCount: 2, that states the Kind and the name of the resources created by the Helm operator.

Chart with multiple dependencies with `requirements.yaml` not deploying all the pods

I have a chart with multiple dependent charts (all of them local charts); the structure is something like this:

Chart
   - values.yaml
   - requirements.yaml
   - charts
         - depA
              - ...
         - depB
              - ...
         - etc

The operator deploys the chart properly and doesn't show any errors, but only a few of the pods are deployed.
I'm not sure why; kubectl get events also doesn't show any errors.

If I try to deploy the chart using the Helm CLI client, it works just fine.

Any idea what needs to be done here?
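One thing worth checking, offered only as a guess: subcharts listed in requirements.yaml are often gated behind condition entries and stay disabled unless the values passed at install time enable them, and the values the operator passes (the CR spec) may differ from the values.yaml you use with the Helm CLI. A sketch of what such gating looks like — the names and versions are placeholders:

dependencies:
  - name: depA
    version: 0.1.0
    repository: "file://charts/depA"
    condition: depA.enabled   # depA deploys only if the values set depA.enabled: true
  - name: depB
    version: 0.1.0
    repository: "file://charts/depB"
    condition: depB.enabled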

Error parsing reference go1.10 as build when following README instructions

In the instructions block:

$ docker build -t quay.io/<namespace>/<chart>-operator --build-arg HELM_CHART=/path/to/helm/chart --build-arg API_VERSION=<group/version> --build-arg KIND=<Kind> .

I ran the following:

$ docker build -t quay.io/benjaminapetersen/myapp-operator --build-arg=examples/myapp/ --build-arg API_VERSION=app.benjaminapetersen.me/v1alpha1 --build-arg KIND=MyApp .

And received this error output:

Sending build context to Docker daemon 324.6 kB
Step 1/20 : FROM golang:1.10 as builder
Error parsing reference: "golang:1.10 as builder" is not a valid repository/tag: invalid reference format

I had go1.8.3, so updated to go1.10 and ran again, which resulted in the same output.

I put my (working) helm chart within the project under examples/myapp as this somewhat seems to follow the suggested convention.

Thanks!
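For what it's worth, the FROM golang:1.10 as builder line is Docker multi-stage build syntax, so the parse error points at the local Docker version rather than the Go version; multi-stage builds require Docker 17.05 or newer. A quick check:

# Multi-stage builds ("FROM ... as builder") need Docker 17.05+ on both client and daemon.
docker version --format '{{.Client.Version}} / {{.Server.Version}}'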

Operator only sets status when the reconciler succeeds.

Right now, when a failure occurs in the Helm reconciler, no status updates are made to the resource, so consumers of these resources don't have great visibility when things go wrong. Currently the only way to see failures is by inspecting the operator logs.

We should update the operator to correctly set status at various points of reconciliation, not just on success.

How to make operator work across namespaces?

I've got a Helm chart that installs an instance of our company's software for test purposes. Generally, I install one per namespace. I'd like to have an operator that watches for the CR anywhere in the cluster and then makes the appropriate changes there. Not my preference, but it would also work if the CR were placed in the operator's namespace and had a destination-namespace key that the operator acted on.

Is this possible?

Separate chart data from the generic operator

I've been working on a driver that uses a data container as a volume. This would allow:
FROM scratch
ADD chart.tar.gz /chart

and a generic prebuilt helm-app-operator container to be used together, decoupling the CPU-architecture-specific operator container from the noarch chart container.

See kubernetes-retired/drivers#85 for details. The main issue left is getting inline pod support for ephemeral pods into Kubernetes (kubernetes/kubernetes#68232).

This would require a bit of support from the CSV, though, I think. Are you interested in this use case?

Provide CRD object example

Hi,

Thank you for this project.

Could you provide in the documentation an example of a CR definition that instantiates the Helm chart?
How does the "final user" create a new Helm release? How do they provide the release name, the namespace, and the values?

Thank you.
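Until the docs cover it, here is a hedged sketch of what such a CR looks like, modeled on the Tomcat example elsewhere in this tracker: metadata.name and metadata.namespace drive the release and target namespace, and the spec block is passed to the chart as values.

apiVersion: apache.org/v1alpha1   # the group/version the operator was built with
kind: Tomcat                      # the Kind the operator was built with
metadata:
  name: example-tomcat            # used to derive the Helm release name
  namespace: default              # the namespace the release is installed into
spec:
  replicaCount: 2                 # passed to the chart as values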

Error creating new instance of example-app

I am trying to follow the documentation to bring up the example application on a local OpenShift cluster created using oc cluster up --version=v3.9. I was successfully able to execute all the steps but am seeing the following error in the example app operator logs when trying to create an instance of the example application:

2018-05-23T18:40:06.917Z	INFO	cmd/cmd.go:90	loading config...	{"configFile": "/config.yaml"}
2018-05-23T18:40:06.918Z	INFO	version/version.go:30	version info	{"version": "v1.1.0-22-gdfd0b8c-dirty", "gitCommitHash": "dfd0b8c", "buildTime": "2018-02-12_09:22:57PM"}
2018-05-23T18:40:06.922Z	INFO	cmd/start.go:168	using helm controller for deployment	{"controller": "helm", "helmChart": "/chart", "helmNamespace": "myproject", "helmReleasePrefix": "lostromos", "helmWait": false, "helmWaitTimeout": 120}
2018-05-23T18:41:57.298Z	INFO	helmctlr/helm.go:90	resource added	{"controller": "helm", "resource": "sample-example"}
2018-05-23T18:41:57.766Z	ERROR	helmctlr/helm.go:93	failed to create resource	{"controller": "helm", "error": "release lostromos-sample-example failed: deployments.apps \"sample-example-hello\" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: User \"system:serviceaccount:myproject:example-app-operator\" cannot update example-apps/finalizers.example.com in project \"myproject\", <nil>", "resource": "sample-example"}
github.com/wpengine/lostromos/helmctlr.Controller.ResourceAdded
	/home/alec/go/src/github.com/wpengine/lostromos/helmctlr/helm.go:93
github.com/wpengine/lostromos/helmctlr.(*Controller).ResourceAdded
	<autogenerated>:1
github.com/wpengine/lostromos/crwatcher.(*CRWatcher).setupHandler.func1
	/home/alec/go/src/github.com/wpengine/lostromos/crwatcher/watcher.go:107
github.com/wpengine/lostromos/vendor/k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd
	/home/alec/go/src/github.com/wpengine/lostromos/vendor/k8s.io/client-go/tools/cache/controller.go:195
github.com/wpengine/lostromos/vendor/k8s.io/client-go/tools/cache.(*ResourceEventHandlerFuncs).OnAdd
	<autogenerated>:1
github.com/wpengine/lostromos/vendor/k8s.io/client-go/tools/cache.NewInformer.func1
	/home/alec/go/src/github.com/wpengine/lostromos/vendor/k8s.io/client-go/tools/cache/controller.go:314
github.com/wpengine/lostromos/vendor/k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop
	/home/alec/go/src/github.com/wpengine/lostromos/vendor/k8s.io/client-go/tools/cache/delta_fifo.go:451
github.com/wpengine/lostromos/vendor/k8s.io/client-go/tools/cache.(*controller).processLoop
	/home/alec/go/src/github.com/wpengine/lostromos/vendor/k8s.io/client-go/tools/cache/controller.go:150
github.com/wpengine/lostromos/vendor/k8s.io/client-go/tools/cache.(*controller).(github.com/wpengine/lostromos/vendor/k8s.io/client-go/tools/cache.processLoop)-fm
	/home/alec/go/src/github.com/wpengine/lostromos/vendor/k8s.io/client-go/tools/cache/controller.go:124
github.com/wpengine/lostromos/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/home/alec/go/src/github.com/wpengine/lostromos/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
github.com/wpengine/lostromos/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/home/alec/go/src/github.com/wpengine/lostromos/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
github.com/wpengine/lostromos/vendor/k8s.io/apimachinery/pkg/util/wait.Until
	/home/alec/go/src/github.com/wpengine/lostromos/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
github.com/wpengine/lostromos/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	/home/alec/go/src/github.com/wpengine/lostromos/vendor/k8s.io/client-go/tools/cache/controller.go:124
github.com/wpengine/lostromos/crwatcher.(*CRWatcher).Watch
	/home/alec/go/src/github.com/wpengine/lostromos/crwatcher/watcher.go:190
github.com/wpengine/lostromos/cmd.startServer
	/home/alec/go/src/github.com/wpengine/lostromos/cmd/start.go:229
github.com/wpengine/lostromos/cmd.glob..func2
	/home/alec/go/src/github.com/wpengine/lostromos/cmd/start.go:48
github.com/wpengine/lostromos/vendor/github.com/spf13/cobra.(*Command).execute
	/home/alec/go/src/github.com/wpengine/lostromos/vendor/github.com/spf13/cobra/command.go:702
github.com/wpengine/lostromos/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/home/alec/go/src/github.com/wpengine/lostromos/vendor/github.com/spf13/cobra/command.go:783
github.com/wpengine/lostromos/vendor/github.com/spf13/cobra.(*Command).Execute
	/home/alec/go/src/github.com/wpengine/lostromos/vendor/github.com/spf13/cobra/command.go:736
github.com/wpengine/lostromos/cmd.Execute
	/home/alec/go/src/github.com/wpengine/lostromos/cmd/cmd.go:52
main.main
	/home/alec/go/src/github.com/wpengine/lostromos/main.go:22
runtime.main
/usr/lib/go/src/runtime/proc.go:195

I have installed OLM as per the documentation. The ALM and catalog operators run in a separate namespace. Do I need to do anything extra on the cluster, beyond what is mentioned in the documentation, to make this work?

Here are the versions of oc, kubectl and helm that I am using:

$ oc version
oc v3.9.0+191fece
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://127.0.0.1:8443
openshift v3.9.0+46ff3a0-18
kubernetes v1.9.1+a0ce1bc657

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1+a0ce1bc657", GitCommit:"a0ce1bc", GitTreeState:"clean", BuildDate:"2018-05-16T21:26:57Z", GoVersion:"go1.9", Compiler:"gc", Platform:"linux/amd64"}

$ helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Please let me know if you need any more information.
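From the error above, the operator's service account appears to lack permission to update the finalizers subresource of the example-apps resource, which is what setting blockOwnerDeletion requires. A hedged RBAC sketch — the names are lifted from the log, and the exact resource plural and group may differ for your CRD:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example-app-operator-finalizers
  namespace: myproject
rules:
  - apiGroups: ["example.com"]
    resources: ["example-apps/finalizers"]
    verbs: ["update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-app-operator-finalizers
  namespace: myproject
subjects:
  - kind: ServiceAccount
    name: example-app-operator
    namespace: myproject
roleRef:
  kind: Role
  name: example-app-operator-finalizers
  apiGroup: rbac.authorization.k8s.io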

Publish a Docker Image to Public Registry

The current usage instructions require the user to clone this repository and then execute a docker build, which builds the operator itself and copies in the Helm chart. For my own experimentation, I executed the build to create a reusable Docker base image. For my operators, I start with that base image, set environment variables, and then copy in the chart. Like this:

FROM quay.io/cpitman/helm-app-operator
ENV API_VERSION cop.redhat.com/v1alpha1
ENV KIND SpringBootApp
ENV HELM_CHART /basic-spring-boot
ADD /basic-spring-boot/ /basic-spring-boot

This feels like a much easier way to work with the operator, since many build systems (like Quay) don't support setting build arguments.

Could one of the maintainers set up an official base image, with the required hooks to keep it up to date?

Error running sample tomcat command

$ docker build \
>   --build-arg HELM_CHART=https://storage.googleapis.com/kubernetes-charts/tomcat-0.1.0.tgz \
>   --build-arg API_VERSION=apache.org/v1alpha1 \
>   --build-arg KIND=Tomcat .
Sending build context to Docker daemon    276kB
Step 1/20 : FROM golang:1.10 as builder
 ---> d0e7a411e3da
Step 2/20 : ARG HELM_CHART
 ---> Using cache
 ---> 17b6710ecc47
Step 3/20 : ARG API_VERSION
 ---> Using cache
 ---> 49cb4c923dbd
Step 4/20 : ARG KIND
 ---> Using cache
 ---> 23def5c57805
Step 5/20 : WORKDIR /go/src/github.com/operator-framework/helm-app-operator-kit/helm-app-operator
 ---> Using cache
 ---> 2b569f913bd4
Step 6/20 : COPY helm-app-operator .
 ---> Using cache
 ---> 4cceb9f2ffd5
Step 7/20 : RUN CGO_ENABLED=0 GOOS=linux go build -o bin/operator cmd/helm-app-operator/main.go
 ---> Running in 0fe362c2b498
pkg/stub/handler.go:8:2: cannot find package "github.com/operator-framework/operator-sdk/pkg/sdk" in any of:
	/usr/local/go/src/github.com/operator-framework/operator-sdk/pkg/sdk (from $GOROOT)
	/go/src/github.com/operator-framework/operator-sdk/pkg/sdk (from $GOPATH)
pkg/apis/app/v1alpha1/register.go:7:2: cannot find package "github.com/operator-framework/operator-sdk/pkg/util/k8sutil" in any of:
	/usr/local/go/src/github.com/operator-framework/operator-sdk/pkg/util/k8sutil (from $GOROOT)
	/go/src/github.com/operator-framework/operator-sdk/pkg/util/k8sutil (from $GOPATH)
cmd/helm-app-operator/main.go:10:2: cannot find package "github.com/operator-framework/operator-sdk/version" in any of:
	/usr/local/go/src/github.com/operator-framework/operator-sdk/version (from $GOROOT)
	/go/src/github.com/operator-framework/operator-sdk/version (from $GOPATH)
pkg/helm/engine.go:7:2: cannot find package "github.com/sirupsen/logrus" in any of:
	/usr/local/go/src/github.com/sirupsen/logrus (from $GOROOT)
	/go/src/github.com/sirupsen/logrus (from $GOPATH)
pkg/helm/helm.go:9:2: cannot find package "gopkg.in/yaml.v2" in any of:
	/usr/local/go/src/gopkg.in/yaml.v2 (from $GOROOT)
	/go/src/gopkg.in/yaml.v2 (from $GOPATH)
pkg/apis/app/v1alpha1/register.go:8:2: cannot find package "k8s.io/apimachinery/pkg/apis/meta/v1" in any of:
	/usr/local/go/src/k8s.io/apimachinery/pkg/apis/meta/v1 (from $GOROOT)
	/go/src/k8s.io/apimachinery/pkg/apis/meta/v1 (from $GOPATH)
pkg/apis/app/v1alpha1/types.go:7:2: cannot find package "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" in any of:
	/usr/local/go/src/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured (from $GOROOT)
	/go/src/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured (from $GOPATH)
pkg/apis/app/v1alpha1/register.go:9:2: cannot find package "k8s.io/apimachinery/pkg/runtime" in any of:
	/usr/local/go/src/k8s.io/apimachinery/pkg/runtime (from $GOROOT)
	/go/src/k8s.io/apimachinery/pkg/runtime (from $GOPATH)
pkg/apis/app/v1alpha1/register.go:10:2: cannot find package "k8s.io/apimachinery/pkg/runtime/schema" in any of:
	/usr/local/go/src/k8s.io/apimachinery/pkg/runtime/schema (from $GOROOT)
	/go/src/k8s.io/apimachinery/pkg/runtime/schema (from $GOPATH)
pkg/helm/helm.go:11:2: cannot find package "k8s.io/client-go/rest" in any of:
	/usr/local/go/src/k8s.io/client-go/rest (from $GOROOT)
	/go/src/k8s.io/client-go/rest (from $GOPATH)
pkg/helm/engine.go:11:2: cannot find package "k8s.io/helm/pkg/chartutil" in any of:
	/usr/local/go/src/k8s.io/helm/pkg/chartutil (from $GOROOT)
	/go/src/k8s.io/helm/pkg/chartutil (from $GOPATH)
pkg/helm/helm.go:13:2: cannot find package "k8s.io/helm/pkg/engine" in any of:
	/usr/local/go/src/k8s.io/helm/pkg/engine (from $GOROOT)
	/go/src/k8s.io/helm/pkg/engine (from $GOPATH)
pkg/helm/helm.go:14:2: cannot find package "k8s.io/helm/pkg/kube" in any of:
	/usr/local/go/src/k8s.io/helm/pkg/kube (from $GOROOT)
	/go/src/k8s.io/helm/pkg/kube (from $GOPATH)
pkg/helm/engine.go:12:2: cannot find package "k8s.io/helm/pkg/proto/hapi/chart" in any of:
	/usr/local/go/src/k8s.io/helm/pkg/proto/hapi/chart (from $GOROOT)
	/go/src/k8s.io/helm/pkg/proto/hapi/chart (from $GOPATH)
pkg/apis/app/v1alpha1/types.go:9:2: cannot find package "k8s.io/helm/pkg/proto/hapi/release" in any of:
	/usr/local/go/src/k8s.io/helm/pkg/proto/hapi/release (from $GOROOT)
	/go/src/k8s.io/helm/pkg/proto/hapi/release (from $GOPATH)
pkg/helm/helm.go:17:2: cannot find package "k8s.io/helm/pkg/proto/hapi/services" in any of:
	/usr/local/go/src/k8s.io/helm/pkg/proto/hapi/services (from $GOROOT)
	/go/src/k8s.io/helm/pkg/proto/hapi/services (from $GOPATH)
pkg/helm/helm.go:18:2: cannot find package "k8s.io/helm/pkg/storage" in any of:
	/usr/local/go/src/k8s.io/helm/pkg/storage (from $GOROOT)
	/go/src/k8s.io/helm/pkg/storage (from $GOPATH)
cmd/helm-app-operator/main.go:14:2: cannot find package "k8s.io/helm/pkg/storage/driver" in any of:
	/usr/local/go/src/k8s.io/helm/pkg/storage/driver (from $GOROOT)
	/go/src/k8s.io/helm/pkg/storage/driver (from $GOPATH)
pkg/helm/helm.go:19:2: cannot find package "k8s.io/helm/pkg/tiller" in any of:
	/usr/local/go/src/k8s.io/helm/pkg/tiller (from $GOROOT)
	/go/src/k8s.io/helm/pkg/tiller (from $GOPATH)
pkg/helm/engine.go:13:2: cannot find package "k8s.io/helm/pkg/tiller/environment" in any of:
	/usr/local/go/src/k8s.io/helm/pkg/tiller/environment (from $GOROOT)
	/go/src/k8s.io/helm/pkg/tiller/environment (from $GOPATH)
pkg/helm/helm.go:21:2: cannot find package "k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset" in any of:
	/usr/local/go/src/k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset (from $GOROOT)
	/go/src/k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset (from $GOPATH)
The command '/bin/sh -c CGO_ENABLED=0 GOOS=linux go build -o bin/operator cmd/helm-app-operator/main.go' returned a non-zero code: 1
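The "cannot find package" list points at a missing vendor/ directory in the build context rather than anything chart-specific. Assuming the project vendors its dependencies with dep (a guess, not confirmed here), populating vendor/ before the build should get past this step:

cd helm-app-operator
dep ensure -v          # populates vendor/ so the in-container go build can resolve imports
cd ..
docker build \
  --build-arg HELM_CHART=https://storage.googleapis.com/kubernetes-charts/tomcat-0.1.0.tgz \
  --build-arg API_VERSION=apache.org/v1alpha1 \
  --build-arg KIND=Tomcat .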

Instructions block in the README lacks clear direction & purpose for those new to the concept of operators

Greetings! Confession up front: I'm new to operators. Therefore, this issue just describes some of my personal friction trying to use helm-app-operator-kit with a Helm chart I have created. There are a lot of assumptions in the directions; it would be great to clarify them to improve adoption.

I'll start with this block and outline my questions:

This project is a component of the Operator Framework

This implies I should download a binary and run it against my Helm chart (I see there is Go code here, though I am not asked to build it). However:

This repository serves as a template for easily creating...

Makes me think this repo is more of an example/tutorial.

Run the following:
$ git checkout git@github.com:operator-framework/helm-app-operator-kit.git && cd helm-app-operator-kit
$ docker build -t quay.io/<namespace>/<chart>-operator -e HELM_CHART=/path/to/helm/chart -e API_VERSION=<group/version> -e KIND=<Kind> .
$ docker push quay.io/<namespace>/<chart>-operator

Could be greatly improved with some explanation. A few questions/suggestions:

  • There is a tomcat-operator directory included, and the README sounds a bit like a tutorial, but implies HELM_CHART=/path/to/wherever. Why does tomcat-operator appear as a top level item?
  • API_VERSION=<group/version> implies an input, but there isn't an explanation as to what this is for. Assuming someone is coming to this project who isn't super familiar with operators, this would be nice to explain.
  • KIND=<kind> could also be explained for the same reason as above; it is very helpful to assume people are new to the concept of operators.
  • I ran the docker build -t quay.io/myuser/my-new-operator -e HELM_CHART=<~/path/to/a/place/where/I/have/a/chart> -e API_VERSION=<foo/bar> -e KIND=<kind>, but -e is not a flag that docker build expects, so the output is unknown shorthand flag: 'e' in -e.
  • Under section 2., if my HELM_CHART=/path/to/chart, the ./deploy directory here is strange. If my helm chart path is elsewhere, I don't quite understand why I am working with the /deploy directory here.

Appreciate any feedback, and happy to help improve the doc via PRs if open to discussion. Thanks!
