
kalm's Introduction

Kalm - Kubernetes Application Manager


Kalm provides a web interface that makes it easy to perform common Kubernetes workflows, including:

  • Creating and updating applications
  • Scaling
  • Handling external traffic
  • Setting up probes for auto-healing
  • Attaching and using Volumes

In addition, Kalm simplifies the processes for many common Kubernetes integration points:

  • CI/CD webhooks
  • Obtaining HTTPS Certificates (via Let's Encrypt)
  • Setting up Single Sign On access for any application in your cluster
  • Configuring private image registries
  • Plugging in log systems such as PLG (Loki) and ELK

(Kalm dashboard screenshot; overview video with voiceover)

Kalm is intended as an alternative to writing and maintaining scripts and internal tools. Since Kalm is implemented as a Kubernetes operator and a set of Custom Resource Definitions, it can be used alongside existing Kubernetes tooling. Kalm tries to minimize the amount of time you have to spend writing YAML files and executing one-off kubectl commands, but doesn't prevent you from doing so if necessary.
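
Because Kalm's resources are plain CRDs, they can also be listed and inspected with standard kubectl commands. A minimal sketch (the core.kalm.dev API group and the components resource name are assumptions for illustration, not taken from the docs):

# list the Kalm resource types registered in the cluster
kubectl api-resources --api-group=core.kalm.dev

# read Kalm Components back as ordinary YAML objects
kubectl get components.core.kalm.dev --all-namespaces -o yaml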

Project Status

Kalm is currently in Closed Beta.

  • Alpha
  • Closed Beta
  • Open Beta, CRD schema frozen
  • Public Release

Installation

Kalm can be used with any Kubernetes cluster. To get started on localhost, make sure kubectl is installed and a minikube cluster has been created beforehand.
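
For example, a local test cluster can be prepared along these lines (a minimal sketch; any recent minikube release should work):

# start a local single-node cluster
minikube start

# confirm kubectl can reach it
kubectl cluster-info
kubectl get nodes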

If you already have access to an existing cluster via kubectl, deploy Kalm via:

# clone the repo 
git clone https://github.com/kalmhq/kalm.git
cd kalm

# run the install script
./scripts/install-local-mode.sh

The whole process typically takes 5-10 minutes. Relax or check out the docs in the meantime.

Once the installation is complete, forward a local port to the web server.

kubectl port-forward -n kalm-system $(kubectl get pod -n kalm-system -l app=kalm -ojsonpath="{.items[0].metadata.name}") 3010:3010

Kalm should now be accessible at http://localhost:3010.

Uninstall

It is safe to ignore errors about non-existent resources; they may already have been removed along with their parent resources.

# rm kalm-operator
kubectl delete --ignore-not-found=true -f kalm-install-operator.yaml

# rm kalm
kubectl delete --ignore-not-found=true -f kalm.yaml

Docs & Guides

Detailed Documentation and Guides can be found at https://docs.kalm.dev

License

Apache License V2


kalm's Issues

error log when debugging on k3s cluster

version: v0.1.0-alpha.5

panic: runtime error: invalid memory address or nil pointer dereference
 [signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x197e49a]
 goroutine 40070 [running]:
 github.com/kalmhq/kalm/api/resources.(*Builder).List(...)
     /workspace/api/resources/common.go:266
 github.com/kalmhq/kalm/api/resources.(*Builder).GetProtectedEndpointsChannel.func1(0x0, 0x0, 0x0, 0x0, 0xc001b2cb90)
     /workspace/api/resources/sso.go:59 +0x4a
 created by github.com/kalmhq/kalm/api/resources.(*Builder).GetProtectedEndpointsChannel
     /workspace/api/resources/sso.go:57 +0xe0
 stream closed

echo: http: panic serving 127.0.0.1:35600: runtime error: invalid memory address or nil pointer dereference
goroutine 40582 [running]:
net/http.(*conn).serve.func1(0xc0002f30e0)
	/usr/local/go/src/net/http/server.go:1795 +0x139
panic(0x1d03fe0, 0x31dfe40)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/kalmhq/kalm/api/resources.(*Builder).List(...)
	/workspace/api/resources/common.go:266
github.com/kalmhq/kalm/api/resources.(*Builder).ListNodes(0x0, 0x22dbce0, 0xc0005d05a0, 0x0)
	/workspace/api/resources/node.go:164 +0x4a
github.com/kalmhq/kalm/api/handler.(*ApiHandler).handleListNodes(0xc00026e5a0, 0x22dbce0, 0xc0005d05a0, 0xc0005d0620, 0xc0005d0620)
	/workspace/api/handler/nodes.go:8 +0x51
github.com/kalmhq/kalm/api/handler.(*ApiHandler).AuthClientMiddleware.func1(0x22dbce0, 0xc0005d05a0, 0xc0000ba040, 0xc0000ba040)
	/workspace/api/handler/middleware.go:31 +0x13e
github.com/labstack/echo/v4.(*Echo).add.func1(0x22dbce0, 0xc0005d05a0, 0x223a440, 0xc0000ba040)
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:512 +0x8a
github.com/labstack/echo/v4/middleware.StaticWithConfig.func1.1(0x22dbce0, 0xc0005d05a0, 0x1fa77c8, 0x20)
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/static.go:169 +0x2b9
github.com/labstack/echo/v4/middleware.CORSWithConfig.func1.1(0x22dbce0, 0xc0005d05a0, 0xf, 0xc000b43980)
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/cors.go:121 +0x477
github.com/kalmhq/kalm/api/server.middlewareLogging.func1(0x22dbce0, 0xc0005d05a0, 0xffffffffffffffff, 0xc0011451e0)
	/workspace/api/server/server.go:72 +0x233
github.com/labstack/echo/v4/middleware.GzipWithConfig.func1.1(0x22dbce0, 0xc0005d05a0, 0x0, 0x0)
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/compress.go:92 +0x1eb
github.com/labstack/echo/v4.(*Echo).ServeHTTP.func1(0x22dbce0, 0xc0005d05a0, 0x1, 0x0)
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:617 +0x110
github.com/labstack/echo/v4/middleware.RemoveTrailingSlashWithConfig.func1.1(0x22dbce0, 0xc0005d05a0, 0x1, 0x1)
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/slash.go:118 +0x19f
github.com/labstack/echo/v4.(*Echo).ServeHTTP(0xc00000c1e0, 0x2282640, 0xc00099f340, 0xc000035200)
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:623 +0x16c
golang.org/x/net/http2/h2c.h2cHandler.ServeHTTP(0x223b320, 0xc00000c1e0, 0xc0000978c0, 0x2282640, 0xc00099f340, 0xc000035200)
	/go/pkg/mod/golang.org/x/[email protected]/http2/h2c/h2c.go:98 +0x44b
net/http.serverHandler.ServeHTTP(0xc00012a0e0, 0x2282640, 0xc00099f340, 0xc000035200)
	/usr/local/go/src/net/http/server.go:2831 +0xa4
net/http.(*conn).serve(0xc0002f30e0, 0x2288b80, 0xc0002e7940)
	/usr/local/go/src/net/http/server.go:1919 +0x875
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2957 +0x384
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x197d2da]

goroutine 44500 [running]:
github.com/kalmhq/kalm/api/resources.(*Builder).List(...)
	/workspace/api/resources/common.go:266
github.com/kalmhq/kalm/api/resources.(*Builder).getRoleBindingListChannel.func1(0x0, 0x0, 0x0, 0xc000dca270)
	/workspace/api/resources/rolebinding.go:25 +0xba
created by github.com/kalmhq/kalm/api/resources.(*Builder).getRoleBindingListChannel
	/workspace/api/resources/rolebinding.go:23 +0xd6

Endless error-out in install script when applying manifests to supported kubernetes API version with unsupported client

This happens when applying manifests with a supported Kubernetes server version but an unsupported Kubernetes client, and it feels more like a UX issue to me.

Referencing the command from https://kalm.dev/docs/install:

curl -sL https://get.kalm.dev | bash

This results in an endless loop of:

Awaiting installation of CRDs
error: SchemaError(io.k8s.api.policy.v1beta1.PodDisruptionBudgetList): invalid object doesn't have additional properties

Output of kubectl version follows:

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-06-21T11:46:00Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-30T20:19:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

While the appropriate error is dumped to stdout:

error: SchemaError(io.k8s.api.policy.v1beta1.PodDisruptionBudgetList): invalid object doesn't have additional properties

This is quickly drowned out by the endless Awaiting installation of CRDs. I see a commented-out sleep 1 which would help with this problem, but a better fix would be to error out if the kubectl apply -f ... exits with a non-zero exit code. I could raise an MR to fix this behaviour if a maintainer could indicate the project's preference.
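
For illustration, the behaviour being suggested would look roughly like this (a sketch only; the file name and the CRD check are stand-ins, not the script's actual contents):

# apply once and abort on failure instead of retrying forever
if ! kubectl apply -f kalm-install-operator.yaml; then
  echo "kubectl apply failed (check kubectl client/server version skew)" >&2
  exit 1
fi

# only after a successful apply, keep polling until the CRDs appear
until kubectl get crd 2>/dev/null | grep -q kalm; do
  echo "Awaiting installation of CRDs"
  sleep 1
done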

SSO page gets into bad state if I incorrectly setup a Github connector

Repro:

  1. Install Kalm on GKE, finished the setup steps
  2. Login via generated email/password
  3. Go to SSO and add a GitHub connector
  4. Type in a non-existent org (I did this by accident)
  5. Log out by clearing the cache (because I wanted to test GitHub SSO login)
  6. Log in via the generated email/password again
  7. Visit /sso (so I can edit the SSO settings)

Result:

(screenshot of the broken SSO page attached)

How to create a LB?

I have created a cluster using RKE. Now I want to point a load balancer, provided by my hosting provider, at my Kalm instance. I created an LB from 443 to 3010 on each node, but have had no success reaching the Kalm dashboard. Only the kubectl localhost port-forwarding method works; I can't access it publicly via the LB. Am I missing something?
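
One thing worth checking is whether anything in the cluster actually exposes port 3010 beyond the pod network; port-forward works because it tunnels straight to the pod. A minimal sketch of exposing the dashboard with a Service (illustrative only, not a documented Kalm setup; the app=kalm selector is borrowed from the port-forward command in the README, and TLS would still need to be terminated at the LB since the dashboard serves plain HTTP):

apiVersion: v1
kind: Service
metadata:
  name: kalm-dashboard-lb        # hypothetical name
  namespace: kalm-system
spec:
  type: LoadBalancer             # or NodePort if the external LB targets node ports
  selector:
    app: kalm                    # the label the port-forward command selects on
  ports:
    - port: 443
      targetPort: 3010           # the dashboard's HTTP port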

Uninstall documentation

It seems I have messed up the "Finish setup" steps and can't get a username/password back. How can I cleanly uninstall Kalm so I can try installing it again?

Broken setup process

Started the "finish setup" process, but it broke and I don't have a user now. I can still get into the admin UI with port-forward, but I can't access apps (it times out) or single sign-on. It seems that it created the main domain with all its certs (e.g. kalm-cert and sso-domain-*) and I can go to the domain, but I can't log in. How can I create a user manually?

Consider migrating CRDs to stable namespace

Description

Kalm currently uses the deprecated apiextensions.k8s.io/v1beta1 API version for its CRDs. This throws a warning on K8s/K3s 1.20+ and will not work on K8s/K3s 1.22+.

(screenshot of the deprecation warning attached)
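
For reference, the visible change is in the manifest header; apiextensions.k8s.io/v1 additionally requires a structural schema for each served version:

# deprecated; removed in Kubernetes 1.22
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition

# replacement; needs a structural schema under spec.versions[].schema
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition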

Multi-Level Subdomains

Hey,

Recently I had been looking for a dashboard that can manage routes for k8s, and then I found Kalm while scrolling through Reddit.
Thanks for your great work!

So the issue is that when I try to add a multi-level subdomain like foo.bar.example.com in routes, it gives this error:
(screenshot of the validation error attached)

Leverage/configure LB with Proxy-Protocol Support

I'm looking for a way to use Kalm to deploy and manage pods in which the HTTP-based container applications will have access to X-Forwarded-For, X-Originating-IP, X-Remote-IP, and/or X-Remote-Addr.

Error when running the operator

branch: https://github.com/kalmhq/kalm/tree/operator

Error:

Internal error occurred: failed calling webhook "vcomponent.kb.io": Post https://kalm-webhook-service.kalm-system.svc:443/validate-core-kalm-dev-v1alpha1-component?timeout=30s: x509: certificate signed by unknown authority

Details:

2020-08-03T17:57:49.403+0800	ERROR	controller-runtime.controller	Reconciler error	{"controller": "kalmoperatorconfig", "name": "reconcile-caused-by-dp-change-in-essential-ns-kalm", "namespace": "kalm-system", "error": "Internal error occurred: failed calling webhook \"vcomponent.kb.io\": Post https://kalm-webhook-service.kalm-system.svc:443/validate-core-kalm-dev-v1alpha1-component?timeout=30s: x509: certificate signed by unknown authority"}
github.com/go-logr/zapr.(*zapLogger).Error
	/Users/liumingmin/.go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/Users/liumingmin/.go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/Users/liumingmin/.go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:209
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
	/Users/liumingmin/.go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:188
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
	/Users/liumingmin/.go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
	/Users/liumingmin/.go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/Users/liumingmin/.go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.Until
	/Users/liumingmin/.go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90

how to reproduce:

# in a new minikube cluster
# minikube delete
# minikube start

# in branch: operator
make install

# in repo root dir
kubectl apply -f kalm-install-kalmoperatorconfig.yaml

Wait until the last step of the install (installing Kalm as a component); the error will occur there.
The installation eventually succeeds after several failures.
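
The pattern of failing a few times and then succeeding is consistent with the webhook's certificate or CA bundle not being ready when the first Component is applied; that reading is an assumption. Some generic checks while the error is occurring:

# is the webhook's backing pod up yet?
kubectl get pods -n kalm-system

# does the webhook configuration exist, and has a caBundle been injected?
kubectl get validatingwebhookconfigurations
kubectl get validatingwebhookconfigurations -o yaml | grep -c caBundle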

Any plans to support multi tenancy?

Hi. The project looks exciting! Is there any plan to support multi-tenancy? If so, it could be a very lightweight alternative to OpenShift. Looking forward to seeing how this project evolves!

How to put Apps in specific namespaces?

The issue I am seeing is that Kalm is very opinionated. It doesn't give me an option to choose a namespace; for each app, a new namespace is created.

Also, how can I browse the existing applications and deployments on my cluster through Kalm?

Initializing Kalm - 3/4 modules fails on Prometheus

Similar to #138, but Prometheus is failing with the following error:

$ kubectl get pods -A -w
NAMESPACE            NAME                                         READY   STATUS              RESTARTS   AGE
cert-manager         cert-manager-7cb75cf6b4-gbmfz                1/1     Running             0          2m11s
cert-manager         cert-manager-cainjector-759496659c-76tm4     1/1     Running             0          2m11s
cert-manager         cert-manager-webhook-7c75b89bf6-hkvzb        1/1     Running             0          2m11s
istio-operator       istio-operator-7c96dd898b-9t9dz              1/1     Running             0          2m10s
istio-system         istio-ingressgateway-7bf98d4db8-54sbf        1/1     Running             0          56s
istio-system         istiod-d474486d7-7mvdg                       1/1     Running             0          76s
istio-system         prometheus-5767f54db5-hl57v                  0/2     ContainerCreating   0          55s
istio-system         prometheus-7dcd44bbcf-wr88t                  0/2     ContainerCreating   0          54s
kalm-operator        kalm-operator-559c67b785-87cnj               2/2     Running             0          2m39s
kube-system          coredns-66bff467f8-wsmdr                     1/1     Running             0          3m33s
kube-system          coredns-66bff467f8-xv4b6                     1/1     Running             0          3m33s
kube-system          etcd-kalm-control-plane                      1/1     Running             0          3m48s
kube-system          kindnet-82fn9                                1/1     Running             0          3m17s
kube-system          kindnet-ckbhx                                1/1     Running             0          3m33s
kube-system          kindnet-j5xfx                                1/1     Running             2          3m16s
kube-system          kindnet-srtzq                                1/1     Running             0          3m17s
kube-system          kube-apiserver-kalm-control-plane            1/1     Running             0          3m48s
kube-system          kube-controller-manager-kalm-control-plane   1/1     Running             0          3m48s
kube-system          kube-proxy-5k7lp                             1/1     Running             0          3m17s
kube-system          kube-proxy-fbhcb                             1/1     Running             0          3m33s
kube-system          kube-proxy-jtdmx                             1/1     Running             0          3m17s
kube-system          kube-proxy-jzkfb                             1/1     Running             0          3m16s
kube-system          kube-scheduler-kalm-control-plane            1/1     Running             0          3m48s
local-path-storage   local-path-provisioner-bd4bb6b75-znm7d       1/1     Running             0          3m33s

$ kubectl logs -f prometheus-5767f54db5-hl57v -n istio-system -c prometheus
level=warn ts=2020-09-09T15:09:52.183Z caller=main.go:283 deprecation_notice="'storage.tsdb.retention' flag is deprecated use 'storage.tsdb.retention.time' instead."
level=info ts=2020-09-09T15:09:52.183Z caller=main.go:330 msg="Starting Prometheus" version="(version=2.15.1, branch=HEAD, revision=8744510c6391d3ef46d8294a7e1f46e57407ab13)"
level=info ts=2020-09-09T15:09:52.183Z caller=main.go:331 build_context="(go=go1.13.5, user=root@4b1e33c71b9d, date=20191225-01:04:15)"
level=info ts=2020-09-09T15:09:52.183Z caller=main.go:332 host_details="(Linux 4.19.76-linuxkit #1 SMP Tue May 26 11:42:35 UTC 2020 x86_64 prometheus-5767f54db5-hl57v (none))"
level=info ts=2020-09-09T15:09:52.183Z caller=main.go:333 fd_limits="(soft=1048576, hard=1048576)"
level=info ts=2020-09-09T15:09:52.183Z caller=main.go:334 vm_limits="(soft=unlimited, hard=unlimited)"
level=error ts=2020-09-09T15:09:52.183Z caller=query_logger.go:107 component=activeQueryTracker msg="Failed to create directory for logging active queries"
level=error ts=2020-09-09T15:09:52.184Z caller=query_logger.go:85 component=activeQueryTracker msg="Error opening query log file" file=data/queries.active err="open data/queries.active: no such file or directory"
panic: Unable to create mmap-ed active query log

goroutine 1 [running]:
github.com/prometheus/prometheus/promql.NewActiveQueryTracker(0x24dda5b, 0x5, 0x14, 0x2c62100, 0xc0006bf890, 0x2c62100)
	/app/promql/query_logger.go:115 +0x48c
main.main()
	/app/cmd/prometheus/main.go:362 +0x5229

I'm using a Kind cluster to install Kalm with:

$ cat kind.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
            authorization-mode: "AlwaysAllow"
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
  - role: worker
  - role: worker
  - role: worker

$ kind create cluster --name kalm --config kind.yaml
...

$ curl -sL https://get.kalm.dev | bash
Initializing Kalm - 3/4 modules ready:

✔ kalm-operator
✔ cert-manager
✔ istio-system

No way to redo "Finish The Setup Steps"

I went through "FINISH THE SETUP STEPS", but on the last step I clicked one time too many and forgot to record the generated username and password. (Is there any way to retrieve this information after leaving the screen?)

I tried to redo the FINISH THE SETUP steps, but could never get back to the state where the button is available. For example, I tried to toggle and delete things on the Admin/Single Sign-On page, but could not find any way to get back to a state where I could generate a new login.

Edit: after more experimentation, I found that localhost:3010/setup contains a "reset" button. However, I don't think there is a way to find this URL except by accident.

Can't finish install (only get to 3/4 steps)

Just tried installing this using your curl command; it got to 3/4 steps and just hangs. I notice that in the kalm-system namespace, the kalm pod gets the following errors before going into CrashLoopBackOff:

2020-09-09T03:39:42.229Z        ERROR   Error updating metrics  {"error": "Get https://10.96.0.1:443/apis/metrics.k8s.io/v1beta1/pods: dial tcp 10.96.0.1:443: connect: connection refused"}
github.com/go-logr/zapr.(*zapLogger).Error
        /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
github.com/kalmhq/kalm/api/log.Error
        /workspace/api/log/logger.go:42
github.com/kalmhq/kalm/api/resources.StartMetricScraper
        /workspace/api/resources/metric_scraper.go:64
main.startMetricServer
        /workspace/api/main.go:137
2020-09-09T03:39:47.229Z        ERROR   Error scraping pod metrics      {"error": "Get https://10.96.0.1:443/apis/metrics.k8s.io/v1beta1/pods: dial tcp 10.96.0.1:443: connect: connection refused"}
github.com/go-logr/zapr.(*zapLogger).Error
        /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
github.com/kalmhq/kalm/api/log.Error
        /workspace/api/log/logger.go:42
github.com/kalmhq/kalm/api/resources.update
        /workspace/api/resources/metric_scraper.go:73
github.com/kalmhq/kalm/api/resources.StartMetricScraper
        /workspace/api/resources/metric_scraper.go:62
main.startMetricServer
        /workspace/api/main.go:137
2020-09-09T03:39:47.229Z        ERROR   Error updating metrics  {"error": "Get https://10.96.0.1:443/apis/metrics.k8s.io/v1beta1/pods: dial tcp 10.96.0.1:443: connect: connection refused"}
github.com/go-logr/zapr.(*zapLogger).Error
        /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
github.com/kalmhq/kalm/api/log.Error
        /workspace/api/log/logger.go:42
github.com/kalmhq/kalm/api/resources.StartMetricScraper
        /workspace/api/resources/metric_scraper.go:64
main.startMetricServer
        /workspace/api/main.go:137

If I kill the pod and let it restart, sometimes it fails with CrashLoopBackOff, and sometimes the istio-proxy container fails with an OOM:

2020-09-09T03:39:45.136643Z     info    sds     resource:default new connection
2020-09-09T03:39:45.136857Z     info    sds     Skipping waiting for ingress gateway secret
2020-09-09T03:39:48.545442Z     info    cache   Root cert has changed, start rotating root cert for SDS clients
2020-09-09T03:39:48.545524Z     info    cache   GenerateSecret default
2020-09-09T03:39:48.545766Z     info    sds     resource:default pushed key/cert pair to proxy
2020-09-09T03:39:51.237322Z     info    sds     resource:ROOTCA new connection
2020-09-09T03:39:51.237468Z     info    sds     Skipping waiting for ingress gateway secret
2020-09-09T03:39:51.237512Z     info    cache   Loaded root cert from certificate ROOTCA
2020-09-09T03:39:51.237670Z     info    sds     resource:ROOTCA pushed root cert to proxy
2020-09-09T03:39:57.831084Z     warning envoy filter    [src/envoy/http/authn/http_filter_factory.cc:83] mTLS PERMISSIVE mode is used, connection can be either plaintext or TLS, and client cert can be omitted. Please consider to upgrade to mTLS STRICT mode for more secure configuration that only allows TLS connection with client cert. See https://istio.io/docs/tasks/security/mtls-migration/
2020-09-09T03:39:57.834243Z     warning envoy filter    [src/envoy/http/authn/http_filter_factory.cc:83] mTLS PERMISSIVE mode is used, connection can be either plaintext or TLS, and client cert can be omitted. Please consider to upgrade to mTLS STRICT mode for more secure configuration that only allows TLS connection with client cert. See https://istio.io/docs/tasks/security/mtls-migration/
2020-09-09T03:39:58.531777Z     info    sds     resource:ROOTCA connection is terminated: rpc error: code = Canceled desc = context canceled
2020-09-09T03:39:58.531765Z     info    transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2020-09-09T03:39:58.531780Z     info    sds     resource:default connection is terminated: rpc error: code = Canceled desc = context canceled
2020-09-09T03:39:58.531880Z     error   sds     Remote side closed connection
2020-09-09T03:39:58.531847Z     error   sds     Remote side closed connection
2020-09-09T03:39:58.532186Z     warn    Envoy may have been out of memory killed. Check memory usage and limits.
2020-09-09T03:39:58.532231Z     error   Epoch 0 exited with error: signal: killed
2020-09-09T03:39:58.532241Z     info    No more active epochs, terminating

The kalm pod then goes into a CrashLoopBackOff.

My system has 64 GB of RAM free, so it's not a system issue.

Normal   Scheduled  <unknown>             default-scheduler   Successfully assigned kalm-system/kalm-5f58d8bd9-6pqzt to homelab-a
  Normal   Pulling    10m                   kubelet, homelab-a  Pulling image "docker.io/istio/proxyv2:1.6.1"
  Normal   Pulled     10m                   kubelet, homelab-a  Successfully pulled image "docker.io/istio/proxyv2:1.6.1"
  Normal   Created    10m                   kubelet, homelab-a  Created container istio-init
  Normal   Started    10m                   kubelet, homelab-a  Started container istio-init
  Normal   Created    10m                   kubelet, homelab-a  Created container kalm
  Normal   Pulled     10m                   kubelet, homelab-a  Container image "kalmhq/kalm:v0.1.0-alpha.5" already present on machine
  Normal   Pulling    10m                   kubelet, homelab-a  Pulling image "docker.io/istio/proxyv2:1.6.1"
  Normal   Started    10m                   kubelet, homelab-a  Started container kalm
  Normal   Pulled     10m                   kubelet, homelab-a  Successfully pulled image "docker.io/istio/proxyv2:1.6.1"
  Normal   Created    10m                   kubelet, homelab-a  Created container istio-proxy
  Normal   Started    10m                   kubelet, homelab-a  Started container istio-proxy
  Warning  Unhealthy  9m51s (x14 over 10m)  kubelet, homelab-a  Readiness probe failed: Get http://10.1.0.42:15021/healthz/ready: dial tcp 10.1.0.42:15021: connect: connection refused
  Warning  BackOff    21s (x35 over 9m21s)  kubelet, homelab-a  Back-off restarting failed container
free -m
              total        used        free      shared  buff/cache   available
Mem:         128713       61142        1289        2558       66281       65946
Swap:             0           0           0
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:10:16Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Question about DNS

In the demo GIF on the GitHub homepage, I noticed we can access the app deployed in Kalm through xxx.kapp.live.
This is a little confusing to me; how could I accomplish this, since I didn't find any record for it on public DNS servers?
Do I need to change the DNS configuration on my desktop?
PS: I know this is not a bug, but I didn't find a better place to ask this question.
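
For anyone checking resolution themselves, a generic sketch (xxx is the placeholder from the question above; the IP is a documentation address, not a real one):

# see whether the name resolves on public DNS at all
nslookup xxx.kapp.live

# or override it only on this machine while testing,
# by adding a line like the following to /etc/hosts:
# 203.0.113.10  xxx.kapp.live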

No IP Address in Domain Configuration

I installed Kalm on a Rancher k8s cluster. Access via kubectl port-forward ... is working fine, but when I tried to Finish The Setup Steps, Kalm couldn't show the load balancer IP address, as shown below:

(screenshot attached)

My k8s cluster is behind an nginx instance acting as a reverse proxy. I created an entry in my DNS pointing to this reverse proxy, and from there to the actual k8s cluster nodes. When I try to access the URL pointing to Kalm, I receive the following message in the browser:

(screenshot attached)

When I check and continue, I receive the message shown in the image above.

If I continue anyway on the Kalm setup screen, after a while it shows all green, but it is still not working.

Please help me

Component imagePullPolicy is IfNotPresent

Having the component imagePullPolicy set to IfNotPresent means that an update to a mutable image tag, e.g. latest, won't result in a proper refresh when the component is scaled or restarted.

Excerpt from a Kalm-created pod (kubectl describe output):

spec:
  containers:
  - image: $CONTAINER_REGISTRY/$CONTAINER_IMAGE:$IMAGE_TAG
    imagePullPolicy: IfNotPresent
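
For mutable tags such as latest, the usual fix is to re-pull on every container start; a minimal sketch of the desired excerpt (whether Kalm exposes this as a per-component setting is not confirmed here):

spec:
  containers:
  - image: $CONTAINER_REGISTRY/$CONTAINER_IMAGE:latest
    imagePullPolicy: Always   # forces a registry pull each time the pod is recreated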

kalm operators keep crashing

Here are the logs:

2020-09-06T14:48:08.699330038Z E0906 14:48:08.693285       1 leaderelection.go:320] error retrieving resource lock kalm-operator/kalm-operator: context deadline exceeded
2020-09-06T14:48:08.699399910Z I0906 14:48:08.693395       1 leaderelection.go:277] failed to renew lease kalm-operator/kalm-operator: timed out waiting for the condition
2020-09-06T14:48:08.699413232Z 2020-09-06T14:48:08.693Z	ERROR	setup	problem running manager	{"error": "leader election lost"}
2020-09-06T14:48:08.699440164Z github.com/go-logr/zapr.(*zapLogger).Error
2020-09-06T14:48:08.699452875Z 	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
2020-09-06T14:48:08.699460283Z main.main
2020-09-06T14:48:08.699468612Z 	/workspace/main.go:97
2020-09-06T14:48:08.699478565Z runtime.main
2020-09-06T14:48:08.699485542Z 	/usr/local/go/src/runtime/proc.go:203
2020-09-06T14:53:28.342223345Z 2020-09-06T14:53:28.341Z	ERROR	controller-runtime.manager	Failed to get API Group-Resources	{"error": "the server has received too many requests and has asked us to try again later"}
2020-09-06T14:53:28.342317663Z github.com/go-logr/zapr.(*zapLogger).Error
2020-09-06T14:53:28.342341490Z 	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
2020-09-06T14:53:28.342350340Z sigs.k8s.io/controller-runtime/pkg/manager.New
2020-09-06T14:53:28.342357754Z 	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/manager.go:258
2020-09-06T14:53:28.342364539Z main.main
2020-09-06T14:53:28.342370112Z 	/workspace/main.go:70
2020-09-06T14:53:28.342374413Z runtime.main
2020-09-06T14:53:28.342378348Z 	/usr/local/go/src/runtime/proc.go:203
2020-09-06T14:53:28.342383493Z 2020-09-06T14:53:28.341Z	ERROR	setup	unable to start manager	{"error": "the server has received too many requests and has asked us to try again later"}
2020-09-06T14:53:28.342390498Z github.com/go-logr/zapr.(*zapLogger).Error
2020-09-06T14:53:28.342395748Z 	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
2020-09-06T14:53:28.342401600Z main.main
2020-09-06T14:53:28.342406018Z 	/workspace/main.go:80
2020-09-06T14:53:28.342410231Z runtime.main
2020-09-06T14:53:28.342414268Z 	/usr/local/go/src/runtime/proc.go:203

This is after a fresh install in k3s (without traefik). I can get into the admin web interface but http://localhost:3010/applications just waits forever and doesn't work.
