
epinio's Introduction

Epinio

Opinionated platform that runs on Kubernetes to take you from Code to URL in one step.


What problem does Epinio solve?

Epinio makes it easy for developers to develop applications that run in Kubernetes clusters. Easy means:

  • No experience with Kubernetes is needed
  • No steep learning curve
  • Quick local setup with minimal configuration
  • Deploying to production the same as deploying in development

Kubernetes is the standard for container orchestration. Developers may want to use Kubernetes for its benefits or because their Ops team has chosen it. In either case, working with Kubernetes can be complex depending on the environment. It has a steep learning curve and doing it right is a full-time job. Developers should spend their time working on their applications, not doing operations.

Epinio adds the needed abstractions and tools to allow developers to use Kubernetes as a PaaS.

Documentation

Documentation is available at docs.epinio.io.

Installation

Epinio requires a Kubernetes cluster, an ingress controller, and cert-manager, as explained in the installation documentation. Once these are in place, and leaving DNS setup aside, an installation reduces to:

helm repo add epinio https://epinio.github.io/helm-charts
helm repo update

helm install --namespace epinio --create-namespace epinio epinio/epinio \
  --set global.domain=mydomain.example.com

CLI installation

The Epinio CLI can be installed by downloading a binary from the release page, or via Homebrew:

brew install epinio
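
A binary install might look like the following sketch (the release asset name is an assumption; check the release page for the exact file for your OS and architecture):

# Download the Linux x86_64 binary from the latest GitHub release
# (asset name is an assumption; adjust for your platform and version).
curl -L -o epinio https://github.com/epinio/epinio/releases/latest/download/epinio-linux-x86_64
chmod +x epinio
sudo mv epinio /usr/local/bin/
epinio version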

There is further documentation here.

Quick Start Tutorial

Our QuickStart Tutorial walks through creating a namespace and pushing an application.
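
In outline, a first session with the CLI might look like this sketch (the server URL, namespace, application name, and source path are placeholders; consult the tutorial and epinio --help for the exact flags of your version):

epinio login https://epinio.mydomain.example.com
epinio namespace create workspace
epinio target workspace
epinio push --name myapp --path ./myapp
epinio app list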

Reach Us

Contributing

Epinio uses a GitHub Project for tracking issues.

Find more information in the Contribution Guide.

The developer documentation explains how to build and run Epinio from a git source checkout.

Features

  • Security
    • TLS-secured API server
    • Basic Authentication or OIDC-based tokens to access the API
  • Epinio Clients
    • Web UI
    • Epinio CLI
  • Apps
    • Push code directly without further tools or steps
    • Basic operation of your application once pushed
    • Cloud Native Buildpacks build and containerize your code for you
  • Configurations
    • CRUD operations on your configurations. A configuration can be a database, a SaaS service, etc. It can be an external component or be created via the epinio configuration command (see the example below).
    • Bind configurations to apps.
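
A minimal sketch of that configuration workflow (names and key/value pairs are placeholders; check epinio configuration --help for the exact syntax of your CLI version):

epinio configuration create mydb username myuser password mypassword
epinio configuration list
epinio configuration bind mydb myapp
epinio configuration delete mydb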

Example apps

License

Copyright (c) 2020-2023 SUSE, LLC

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

epinio's People

Contributors

agracey, andreas-kupries, dependabot[bot], enrichman, flaviodsr, jhkrug, jimmykarily, juadk, k-harker, kkaempf, ldevulder, manno, mmartin24, mudler, mysticaltech, olblak, oziel-silva, raulcabello, richard-cox, rohitsakala, svollath, thardeck, thehejik, viccuad


epinio's Issues

quarkssecret fails to create cert

In one specific setup (rke on bare metal with metallb and longhorn), Carrier consistently failed to install with the following error:

✔️ Eirini deployed
🚒 Deploying Registry...

🕞  Creating app.kubernetes.io/name=container-registry in carrier-registry .

🕞  Starting app.kubernetes.io/name=container-registry in carrier-registry .

🕞    Starting pod registry-6ffbfb9f86-wcwrs in carrier-registry ...
error installing Carrier: failed waiting Registry deployment to come up: Failed waiting for registry-6ffbfb9f86-wcwrs: timed out waiting for the condition
Pod Events: 
Successfully assigned carrier-registry/registry-6ffbfb9f86-wcwrs to rke-worker-4
MountVolume.SetUp failed for volume "tls-self" : secret "registry-tls-self" not found
MountVolume.SetUp failed for volume "tls" : secret "registry-tls" not found
Unable to attach or mount volumes: unmounted volumes=[tls], unattached volumes=[config tls-self registry tls auth default-token-9s8lv]: timed out waiting for the condition

Debugging shows that quarks-secret fails to copy the registry-tls secret into eirini-workloads namespace. See the quarks-secret pod logs: ab-session.log (replaced inlined log with uploaded attached file)

Upgrade kubernetes libraries and get rid of `replace` in go.mod

When installing in some kubernetes versions we get this error after install:

error initializing cli: can't get gitea url: failed to list ingresses for gitea: the server could not find the requested resource

There is a hypothesis that this is due to our kubernetes libraries not recognizing the CRDs in newer versions of kubernetes. To verify this, we need to upgrade the libraries in go.mod.

Right now we have replace statements in the go.mod file. Removing them and running go mod tidy reveals the problem.

~/workspace/suse/carrier/cli (main)*$ go mod tidy
github.com/suse/carrier/cli/kubernetes imports
	k8s.io/client-go/kubernetes/scheme imports
	k8s.io/api/auditregistration/v1alpha1: module k8s.io/api@latest found (v0.20.2), but does not contain package k8s.io/api/auditregistration/v1alpha1
github.com/suse/carrier/cli/kubernetes/config imports
	k8s.io/client-go/discovery imports
	github.com/googleapis/gnostic/OpenAPIv2: module github.com/googleapis/gnostic@latest found (v0.5.3), but does not contain package github.com/googleapis/gnostic/OpenAPIv2

which leads to this issue: kubernetes/client-go#741

Again, upgrading to newer versions would probably solve this issue, but we probably can't, because quarks-utils needs a very old version of client-go.

We removed our direct dependency on quarks-utils here, but we still get it indirectly, because we import eirini, which imports eirinix, which imports quarks-utils.

The reason we import Eirini is this line here: https://github.com/SUSE/carrier/blob/main/cli/paas/client.go#L177

This is hardly a good reason for all this mess, so maybe we can find another way to delete LRPs without importing the code-generated client from eirini. (How is kubectl able to delete any Custom Resource without importing anything? Maybe the apiextensions-apiserver project described here is useful?)

Also, this seems to be a better way to set up a kubernetes client than the one we use.

Add check for dependencies to `carrier install`

carrier has a few dependencies:

  • kubectl
  • helm
  • ?

The install command should be extended to check for their presence in the PATH before doing anything.

An advanced form would also be able to check the version of each command.

Carrier install + push app is a lot slower when carrier is run remotely

I created a box on gcloud and installed k3s there. I targeted that cluster from my system (the carrier cli running in Pyrgos, Greece; the cluster in Frankfurt). It took ~7 minutes for Carrier to install and the app to be ready.

Doing the same from within the box took ~3 minutes. This means we spend a significant amount of time waiting for responses to requests we make against the cluster. We should look for such requests in the code and try to avoid any that are not needed.
Examples:

  • Finding gitea/tekton/other ingress and url: we can either cache this information in ~/.config or assume the url since we know the system_domain.
  • Completely avoid requests if not needed
  • Parallelize requests

Improve application staging logs

Right now we print pod names and other information, but it's not obvious to the user where the logs come from and what they refer to (e.g. "pushing code"? "staging"?).

We should limit the output to only the useful stuff.

Use a cancellable context

This one is pretty good: sigs.k8s.io/controller-runtime/pkg/manager/signals

Usage:

ctx := signals.SetupSignalHandler()
...

select {
case <-ctx.Done():
  log.Println("done")
}

`carrier install` installs wiremock (test) component or eirini

The testing directory should be skipped when installing Eirini with flat yaml files:

https://github.com/cloudfoundry-incubator/eirini-release/tree/master/deploy/testing

Wiremock is only used in tests (in the Eirini project) to mock Cloud Controller requests. It has no place in production deployments. Consider opening an issue in the eirini-release repo, although those flat yamls are meant to be a guide on how to deploy Eirini rather than a ready-to-install solution.

Generate all passwords.

Right now there are hardcoded passwords in various places:

Unsure about the tokens below. Are they related to the gitea user/pass ? Derived ?

Some of these services are exposed publicly through Ingress resources, which leaves the Carrier installation open to malicious users.

Every password we use should be generated and possible to rotate.

If it's too much work, consider splitting generation and rotation into two separate stories.
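
As a rough sketch of the direction (secret and namespace names are hypothetical, and Carrier would do the equivalent through its Kubernetes client rather than the CLI):

# Generate a random password and store it as a Kubernetes secret.
PASSWORD="$(openssl rand -base64 32)"
kubectl create secret generic gitea-credentials --namespace gitea --from-literal=password="$PASSWORD"

# Rotation: overwrite the secret with a newly generated value.
kubectl create secret generic gitea-credentials --namespace gitea \
  --from-literal=password="$(openssl rand -base64 32)" \
  --dry-run=client -o yaml | kubectl apply -f -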

carrier cli install - Should it roll back on failure ?

Tried to use the cli install in a k3d cluster I used for a bash-based carrier before. The bash carrier was deleted.
Result:

work@tagetarl:~/SUSE/dev/carrier> cli/dist/carrier install
Carrier installing...
[Shared] system_domain The domain you are planning to use for Carrier. Should be pointing to the traefik public IP () : 
🚒 Traefik already installed, skipping
🚒 Deploying Quarks
✔️  Quarks deployed
Couldn't install carrier: Namespace gitea present already

So, wondering if the install should roll back the pieces it has already done when it fails in some later stage.
Or, maybe have uninstall capable of cleaning up a partial installation ?

This all might be for post MVP.

Support upgrades

A checklist of requirements:

  • being able to programmatically tell which component was installed by Epinio
  • ability to determine the version of an installed component
  • ability to freeze the version of a component in Epinio (hardcoded, but clearly defined somewhere in code)
  • proper handling of secrets and passwords (keep them unchanged across the upgrade ?)
  • keep applications running across the upgrade
  • keep applications accessible during the upgrade (traefik upgrades are the difficult point here).
  • upgrades

Improve README

There should be a section describing:

  • How to push an app

  • How to delete an app

  • How to create an org

  • How to uninstall Carrier

  • How to get help (e.g. run carrier help)

The staging pipeline pods are never deleted / cleaned up.

See also #17.

In testing the latest PR around app methods (push, apps, delete) I noticed that the staging-pipeline-... pods are never deleted.
While they show up in the kubectl pod listing as completed, they remain even after push has the app running.
Deleting the app does not delete them either.
Nor is any reuse happening.

For the moment (pre-demo) this should be ok. However for production, maybe even MVP, I can see these pods starting to clutter the low-level pod display as apps are created, updated, and deleted, with ever more staging-pipeline pods shown.

Confirmed that creating more apps creates more sets of staging-pipeline pods, no reuse.
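
As a stop-gap, the completed staging pods can be removed by hand; a hedged sketch (the namespace is the workloads namespace mentioned elsewhere in these issues, and the field selector only matches pods whose phase is Succeeded, so failed runs need a separate pass):

# Delete completed staging pods in the workloads namespace.
kubectl delete pod --namespace eirini-workloads --field-selector=status.phase=Succeeded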

Carrier Acceptance Tests

We need a test suite we can run to be sure we didn't break any existing functionality.

Maybe it's too early, but it's better to start early than to have to backfill the suite with tests later.

Acceptance Criteria

  • It should be possible to parallelize (each instance uses its own cluster? a cluster pool?)
  • Should be possible to run in GitHub actions
  • The test suite should complete in <= 10 minutes (and stay this way, either by parallelizing more or making the suite faster)
  • For now it should check the basic functionality: (installation, application pushing, uninstallation etc)

Some tests could assume Carrier is running (e.g. tests that push apps) but some others can't (e.g. the tests that check install/uninstall). These two groups may not be able to run in parallel while sharing the same cluster. Needs some thought.

`carrier cli option help` - Post MVP

Currently you see help for options when asking for help about a command.
The various deployments which are part of carrier declare their own options (See NeededOptions in the Deployment interface, and the gitea implementation for an example).

These options do not appear in the cli help.
When the user is interactively asked to enter their data only a short description is available and seen.

Requested features:

  1. Extend the InstallationOption structure with a Help field to contain a longer help text.
    Modify the interactive reader for option values to present this help on demand,
    for example when the user enters the value ?
    (This supposes that no option will have ? among the set of its legal values).

  2. Find a way to show these options and their descriptions and/or help text in the cli help for the relevant methods.

cli ignores option --kubeconfig/-c

Testing PR #43 for #40 I noticed that carrier ignores the --kubeconfig option to provide the path to the kubeconfig to use.
I approved the PR even so, because I further determined that this issue was present on main as well, and thus did not come from #43.

carrier appears to use only the KUBECONFIG environment variable to determine where to find the cluster.

carrier cli - system domain - traefik chicken and egg

The description of the system_domain carrier configuration variable (*) contains the text

Should be pointing to the traefik public IP

I believe that this pre-supposes that the underlying cluster already has traefik installed, and running, with an IP to use.

What happens for a cluster without traefik ?

While carrier then installs traefik for itself, at the point it does so the config variables have already been collected, from the user, who does not know what IP will be assigned to carrier's traefik, right ? So what would the user enter at config collection time ?

Of course, if we go and decide that carrier should work only on traefik enabled clusters (**), then that deployment can be dropped from carrier itself.


(*) I decided to not call them options because of the confusion with cli options and the like.
(**) I currently only know of k3s/k3d.

`carrier apps` output is (too) low-level (at the moment)

carrier apps currently simply shows all the pods in the eirini-workloads namespace.

Not all of them may be associated with an app (Example: el-staging-listener-...), or may be remnants of the staging pipeline pods for an app. Although I suspect that the staging-pipeline-...-run-... pod is the app (when successfully deployed).

IMHO the apps method will need filtering and translation of the low-level pods into the actual active apps.

Need GA workflow for PRs

This workflow should run various linting tools over the go code.

I.e.

  • go fmt
  • go vet
  • make test
  • ...

These should all exist as Makefile targets; the workflow would then simply invoke them (or a single super-target which calls all of them). A sketch of the underlying commands follows the list below.

Results requirements:

  • Makefile targets for fmt and vet.
  • Branch is clean with respect to make test, ... fmt, and ... vet.
  • PR workflow is specified, and working.
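
A minimal sketch of what those Makefile targets could run (the package pattern assumes a standard Go module layout):

# fmt: format all packages (or use gofmt -l . to only report unformatted files)
go fmt ./...

# vet: run static checks over all packages
go vet ./...

# test: run the test suite
go test ./...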

Carrier should not uninstall pre-existing components

I installed and uninstalled carrier into a cluster that already had tekton and traefik installed. When I ran carrier uninstall it removed my own tekton and traefik.

It would be nice if carrier-installed tools had annotations so the uninstall could be more selective.

As an aside: a pattern that I'm seeing a lot is to have platform-specific ingresses. I'd imagine that could extend eventually to all the components.

TODO

  • replace drone with brigade/argo/prow
  • replace nginx with traefik
  • add detect script for traefik
  • add detect scripts for each thing, so if they are already set up, don't set them up again
  • add a pipeline to test on GHA with k3s
  • drone/brigade should be configured using quarks
  • start the shim project
  • apps should have a GUID
  • only install QuarksSecret and QuarksJob
  • all resources managed by carrier should be marked, so on uninstall we don't delete anything by accident
  • docs site
  • one line installer
  • detection for kind and minikube
  • releases with tarballs
  • use a docker registry as a component

Ensure golang clients are always talking to the same cluster as external `kubectl`

The golang installer is using both external kubectl calls and golang kubernetes clients to do things on the cluster. The golang client only depends on the KUBECONFIG environment variable, while kubectl could be using a cluster targeted by ~/.kube/config. This could result in some of our resources being installed on one cluster and some others on another.

We should make sure external kubectl calls point kubectl to the same cluster. The way the cli works now, this could simply mean setting the KUBECONFIG environment variable in the external kubectl invocation.

NOTE: The same thing should be happening with helm external invocations.
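
For illustration, the shell-level equivalent of what the installer should guarantee (the kubeconfig path is a placeholder): every external invocation is pinned to the same kubeconfig, either through the environment variable or through the explicit flag that both kubectl and helm support.

# Pin the cluster via the environment variable ...
KUBECONFIG=/path/to/one/kubeconfig kubectl get nodes

# ... or via the explicit flag.
kubectl --kubeconfig /path/to/one/kubeconfig get nodes
helm --kubeconfig /path/to/one/kubeconfig list --all-namespaces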

Refactor cli UI (general output, user feedback)

Currently all user feedback of the cli (general and logging) is done directly via fmt in the various packages of carrier.

It will be better to have this refactored to have the actual output divorced from the functionality itself, to keep the packages UI independent.

Note that the above is in part covered by #36, however that ticket limits itself to logging, and not the general UI feedback of the cli.

Services

Carrier should have some kind of support for services (databases, message buses etc). Let's collect ideas on this issue before we come up with concrete tasks.

Links

Goal of the spike

EventListener not found error

In some cases, after installing Carrier, when pushing the sample app, it's stuck on waiting for staging to happen (and eventually times out).

Looking at the el-staging-listener-xxxx pod with this command:

kubectl  logs -f -n eirini-workloads el-staging-listener-74654f778c-lmsw8

there is an error:

{"level":"error","ts":"2021-02-03T08:29:16.553Z","logger":"eventlistener","caller":"sink/sink.go:80","msg":"Error getting EventListener staging-listener in Namespace eirini-workloads: eventlistener.triggers.tekton.dev \"staging-listener\" not found","knative.dev/controller":"eventlistener","logging.googleapis.com/labels":{},"logging.googleapis.com/sourceLocation":{"file":"github.com/tektoncd/triggers/pkg/sink/sink.go","line":"80","function":"github.com/tektoncd/triggers/pkg/sink.Sink.HandleEvent"},"stacktrace":"github.com/tektoncd/triggers/pkg/sink.Sink.HandleEvent\n\tgithub.com/tektoncd/triggers/pkg/sink/sink.go:80\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2042\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2417\nnet/http.(*timeoutHandler).ServeHTTP.func1\n\tnet/http/server.go:3272"}

This doesn't always happen and it's not 100% certain that it isn't a red herring.

Checking manually shows that the EventListener is there:

dimitris@dk-summit-demo-ssd:~$ kubectl  get eventlistener.triggers.tekton.dev --all-namespaces
NAMESPACE          NAME               ADDRESS                                                              AVAILABLE   REASON
eirini-workloads   staging-listener   http://el-staging-listener.eirini-workloads.svc.cluster.local:8080   True        MinimumReplicasAvailable

(maybe it wasn't Available at that time?).

It doesn't always happen and it's not clear what causes it. It seems like some kind of race condition. If you kill the el-staging-listener pod and try to push the app again, it usually deploys fine, though listing apps failed once after trying this workaround (relevant?).

When installer times out waiting for a Pod, print the relevant events

E.g. when dockerhub is rate limiting us, the carrier installer will simply time out, but there is no way for the user to tell why, because the relevant events are never printed out.

The user could go and check manually on what's wrong but it would be useful to see the relevant events immediately.

It's also useful for CI (when things timeout) so we don't need to find a way to ssh to the CI container and check.

Possible fix:

We need to run the equivalent of this command:

kubectl get event --namespace carrier-registry \
  --field-selector involvedObject.name=$(kubectl get pods -o=jsonpath='{.items[0].metadata.name}' \
    --selector=app.kubernetes.io/name=container-registry -n carrier-registry)

somewhere here: https://github.com/SUSE/carrier/blob/main/cli/kubernetes/cluster.go#L213

`install --non-interactive` should be the default

The installation should be as quick and as non-interactive as possible so we should make non-interactive the default and add an option for --interactive.

Document the alternative of setting system_domain through a flag.

Out of the box TLS for Applications

One of the things that most users find tedious and confusing is how to enable TLS for their application. There are various ways to do this, but some are already popular in the Kubernetes world (see links below).

From the user's perspective, any application that is deployed on Carrier should get an https url with a proper certificate, suitable for production use.

Rancher Rio has solved this in an interesting way (that resembles what Heroku does): https://rancher.com/blog/2019/rio-revolutionizing-the-way-you-deploy-apps#automatic-dns-tls

We need to decide which option is the best for Carrier and implement it.

Links:

Install only quarks-secret

Replace quarks with quarks-secret and only install quarks-secret. Right now, we don't use quarks-job and quarks-statefulset.

`carrier cli option validation/entry` - Post MVP

The carrier install method currently does not validate the data entered interactively for the options needed by the deployments it uses.

Post MVP.

Split off into #29:
None of these options are exposed to the command line either, i.e. currently only interactive entry is possible.
Entry via cli options is not supported, but IMHO it should be.
Entry via environment variables is not supported either.
Note, this might be automatically supported via cobra when they are exposed as cli options.

The sandbox will go away - Move the things on it we need somewhere else.

From a 🚀 thread with Andrew Gracey:

atgracey: Should be fixed but what are you using the sandbox for?
Anything you are using there should be migrated out as it'll probably
not stay alive for all that much longer.

There's no end date set and it'll be several months at least but it's
best to move out sooner rather than later.

aku: The sandbox is running the vault instance containing various
tokens and secrets used by CI and other things. Another thing is
the retro board - https://retros.cap.explore.suse.dev/retros/kubecf
The vault instance is https://volt.cap.explore.suse.dev
Not sure what else. You might wish to ask in the `cap-devel` channel as well.

atgracey: I would definitely suggest moving that out of the sandbox
after the summit as I’m not guaranteeing any sort of uptime and it’s
likely just going to get worse

aku: Noted. I will make a ticket for us about this. 

Bring your own Cert for Applications

Depends on: #62

We should implement the automated TLS setup for applications but we should also allow the usage of existing certificates. Some users may already be paying for their certificates and they may want to use them.

After we fix #62 we should find a way to allow users to use existing certificates instead of letting Carrier create new ones for them.

Add a set of commands to manage service brokers, and services

Give a carrier instance knowledge of service brokers it can use to manage services.

Users operations should be:

  • Add service broker
  • Remove service broker
  • List brokers
  • Create service (through a broker)
  • Remove service
  • List services

Note: If the brokers are capable of listing the services currently created through them, then carrier does not need a local list of services; it can simply ask the brokers it knows about.

This depends on the underlying API used to converse with the brokers.

See OSBAPI (Open Service Broker API).

See also #90

carrier push with no user.name and user.email configured in git

When running carrier push without a user.email and user.name globally set for git, you get an error:

From http://gitea.172.19.0.2.nip.io/dev/dizzy
 * [new branch]      main       -> carrier/main
Author identity unknown

*** Please tell me who you are.

Run

  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

fatal: no email was given and auto-detection is disabled
error: src refspec master does not match any

Carrier should not assume any specific git configuration on the user's machine.

NOTE: If cf cli is supposed to do the pushing from here on, then we can close this issue.

carrier install --system-domain does not accept k3d.localhost

$ carrier install --system-domain k3d.localhost
🚒 Carrier installing...

Configuration...
  🧭 system_domain: 172.23.0.2.nip.io

⚠️  Traefik Ingress already installed, skipping

Have not tried with other "normal" domains.

The -i interactive option worked, and ingresses were created with the k3d.localhost domain.

Enhancement: Compatibility for arm (aarch64/arm32v7)

It would be nice to have build targets for arm platforms, like aarch64/arm32v7 (see the cross-compilation sketch after the list below).

Reasons:

  • There is a growing market for arm based servers and it would be nice to be able to run the cluster there.
  • Rancher provides experimental support for arm64 platforms.
  • Lots of enthusiasts run clusters on raspberry pi and similar platforms; supporting those platforms would enable us to reach more people in the open-source community with carrier.
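
For the cli itself, cross-compiling is mostly a matter of setting the Go target platform; a sketch (output names and the package path are assumptions, and the container images would additionally need multi-arch builds):

# 64-bit ARM (aarch64)
GOOS=linux GOARCH=arm64 go build -o dist/carrier-linux-arm64 .

# 32-bit ARM (arm32v7)
GOOS=linux GOARCH=arm GOARM=7 go build -o dist/carrier-linux-arm32v7 .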

Idea/Future: More declarative setup by `install`

From the chat ...

@jimmykarily I keep thinking that what we are doing right now could be done in a better way. Right now we are just translating bash commands to go commands in the same "procedural" way.
After we are done we should consider moving to a more "descriptive" way of installing things. E.g. each deployment declares the "conditions" that should be met before the installation starts. We could then fire all of them in parallel and let them do their magic.
This would enter the realm of other projects though (e.g. what Matt presented yesterday) so I'm not sure we want to go there.

@andreas-kupries That might then also allow for continuing a partial install, i.e. skip the pieces already installed, and run only the missing parts.

Failed push leaves staging pods behind

See also #55.

After deploying carrier on a local rancher k3d cluster I tested app push using

./carrier push ge ../apps/go-env

where go-env is the standard go-based env example app.

This failed in the end, with the staging-pipeline-...-run-... pod in Error and the clone and stage pods claiming completion.
Albeit with quite a lot of warnings about permission denied when copying various things around.
The error in run was then a missing path/directory, so possibly a consequence of that.

Regardless, after the failure push wrongly claims http://ge.172.20.0.2.nip.io is online and all three pipeline pods are left behind.

This might be ok for debugging at the moment, however in production this might not be a nice thing to do.

It might be tolerable if carrier delete ... would remove these remnants of a failed deployment. Right now it only tries to delete the various things a successful and running app would have (image, service, lrp, ingress).

Right now cleaning up a failed push requires manual deletion of these pods.

Remove Eirini?

Right now Eirini is only used in order to create a Statefulset out of an LRP. We don't use any other functionality from Eirini.

We should consider removing Eirini because:

  • LRP is a CloudFoundry thing, not known to anyone outside CloudFoundry
  • It's equally easy to create a Deployment or Statefulset as it is to create an LRP. That means we can directly create a Deployment instead of an LRP (see the sketch below)
  • There are other alternatives to abstract the Application concept if that's desirable (e.g. https://github.com/kubernetes-sigs/application). These tools could have bigger communities than Eirini in the Kubernetes world.
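
To illustrate how little is needed, a minimal Deployment created directly (names, namespace, and image are placeholders; Carrier would create the equivalent object through its Go Kubernetes client rather than kubectl):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: eirini-workloads
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:latest
EOF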

One Carrier per cluster?

We should decide early whether we ever want to support multiple Carrier instances installed on the same cluster. The decision will probably affect various other decisions (e.g. whether we need to know which carrier on the cluster we are talking to, caching carrier information like gitea urls, etc).

Let's use this issue to comment on the matter.

[CI] Release when main branch is tagged

We need a GitHub workflow that creates a GitHub release whenever we tag the main branch.

An action like this one could be used: https://github.com/fnkr/github-action-ghr
(example usage: https://github.com/jimmykarily/fuzzygui/blob/master/.github/workflows/release.yml#L29)

There should be a new make target that creates a smaller binary for the release. With the -s -w ldflags (as described here: https://stackoverflow.com/a/37468877), the binary size drops from 60M to 48M.
The release workflow should create binaries for all architectures.
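
A sketch of such a build invocation (output names and the package path are assumptions):

# -s drops the symbol table, -w drops DWARF debug information.
go build -ldflags "-s -w" -o dist/carrier .

# The release workflow would repeat this per target architecture, e.g.:
GOOS=darwin GOARCH=amd64 go build -ldflags "-s -w" -o dist/carrier-darwin-amd64 .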

BUE (bad user experience) After killing a cluster (not uninstall) target org information can screw with a new install

Basically I

  1. Had a cluster running, and targeted org foo.
  2. Tore the cluster down, created a new one, and installed carrier.
  3. At that point the foo org in the carrier config was out of date; the only org was the default workspace.
  4. Then pushed some apps with an older binary which did not use the config, so pushing to workspace worked fine.
  5. After that, trying a new binary with tweaks to the tailer, it was unable to see anything, because that binary used the saved config and tried to access the non-existent org foo.

After reading and understanding the low-level error message of Unprocessable Entity: {"message":"org does not exist [id: 0, name: foo]" things became clearer. apps only said error listing apps: failed to list apps: 404 Not Found, which was not as helpful.

IMHO this can be fixed by

  • When creating the default workspace org, target it immediately, properly. This will squash any leftover targeting config from whatever previous cluster.
  • Rework push, apps to print a nicer error message when the targeted org they use does not exist.
  • Rework apps to be clear about the org it is listing the apps for.
