webmeshproj / webmesh-vdi
A Kubernetes-native Virtual Desktop Infrastructure
License: GNU General Public License v3.0
Hi, I tried your project :) Very nice, thanks!
But is it possible to install Docker within a desktop environment?
First of all, thanks for this great project! :)
We noticed that in some cases, the PV userdata configmap ends up containing the same PV names for multiple users. This results in multiple PVCs being created for the same PV when two users with the same PV in the configmap try to start a session at the same time.
Without digging too deep into the code, this might be a result of how Kubernetes handles PVs: it looks like Kubernetes reuses an existing (currently unused) PV whenever a new user starts a session for the first time.
I already created a dedicated storage class with reclaimPolicy: Retain. Isn't this behaviour expected, as the PVs are released after each session and Kubernetes will just reuse them for any new PVC?
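For what it's worth, with `reclaimPolicy: Retain` a PV does not become reusable automatically: after its PVC is deleted it moves to the Released phase (not Available) and keeps its old `claimRef`, so a new PVC cannot bind it until that reference is cleared by hand. A hedged sketch of checking and clearing this (`<pv-name>` is a placeholder):

```shell
# List PVs with their phase and the claim still recorded on them.
kubectl get pv -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,CLAIM:.spec.claimRef.name

# Clear the stale claimRef so the Released PV becomes Available again.
# <pv-name> is a placeholder for one of the names printed above.
kubectl patch pv <pv-name> --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
```

If kvdi is instead binding two users to one PV, that would be a separate bug in how the userdata configmap is populated.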
We are using Kubernetes v1.21.4 with the vsphere-csi plugin and the following config (deployed using the helm chart):
...
userdataSpec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: vsphere-block-retain
  resources:
    requests:
      storage: 20Gi
...
Thanks for your help!
Update: I forgot to mention we are using OIDC for user auth
Hi, I cannot deploy kVDI on my k8s cluster because the app pod is never created; only the manager pod exists.
I tried versions 0.3.0, 0.3.1, 0.3.2, 0.3.3 and 0.3.4 with no additional config.
Here are logs from the manager pod:
2021-07-02T07:14:00.086Z INFO setup kVDI Version: v0.3.0
2021-07-02T07:14:00.086Z INFO setup kVDI Commit: b8e35d41d3e7d9e1f3f4c4868531658d1c98ae68
2021-07-02T07:14:00.086Z INFO setup Go Version: go1.16
2021-07-02T07:14:00.086Z INFO setup Go OS/Arch: linux/amd64
2021-07-02T07:14:00.790Z INFO controller-runtime.metrics metrics server is starting to listen {"addr": "127.0.0.1:8080"}
2021-07-02T07:14:00.791Z INFO setup starting manager
I0702 07:14:00.791860 1 leaderelection.go:243] attempting to acquire leader lease k8s-kvdi/095fd8bb.kvdi.io...
2021-07-02T07:14:00.791Z INFO controller-runtime.manager starting metrics server {"path": "/metrics"}
I0702 07:14:19.518960 1 leaderelection.go:253] successfully acquired lease k8s-kvdi/095fd8bb.kvdi.io
2021-07-02T07:14:19.518Z DEBUG controller-runtime.manager.events Normal {"object": {"kind":"ConfigMap","namespace":"k8s-kvdi","name":"095fd8bb.kvdi.io","uid":"3a6ddb90-1737-4422-bf93-43c36f12fe5c","apiVersion":"v1","resourceVersion":"3513670"}, "reason": "LeaderElection", "message": "kvdi-manager-74bd587bf9-sw2x4_e5123369-cf8b-47bd-b181-b6e2bc4b7a52 became leader"}
2021-07-02T07:14:19.519Z DEBUG controller-runtime.manager.events Normal {"object": {"kind":"Lease","namespace":"k8s-kvdi","name":"095fd8bb.kvdi.io","uid":"25decd51-08e2-4f61-85bf-88394cf68680","apiVersion":"coordination.k8s.io/v1","resourceVersion":"3513671"}, "reason": "LeaderElection", "message": "kvdi-manager-74bd587bf9-sw2x4_e5123369-cf8b-47bd-b181-b6e2bc4b7a52 became leader"}
2021-07-02T07:14:19.519Z INFO controller-runtime.manager.controller.session Starting EventSource {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session", "source": "kind source: /, Kind="}
2021-07-02T07:14:19.519Z INFO controller-runtime.manager.controller.vdicluster Starting EventSource {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "source": "kind source: /, Kind="}
2021-07-02T07:14:19.620Z INFO controller-runtime.manager.controller.session Starting EventSource {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session", "source": "kind source: /, Kind="}
2021-07-02T07:14:19.620Z INFO controller-runtime.manager.controller.vdicluster Starting EventSource {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "source": "kind source: /, Kind="}
2021-07-02T07:14:19.721Z INFO controller-runtime.manager.controller.vdicluster Starting EventSource {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "source": "kind source: /, Kind="}
2021-07-02T07:14:19.721Z INFO controller-runtime.manager.controller.session Starting EventSource {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session", "source": "kind source: /, Kind="}
2021-07-02T07:14:19.822Z INFO controller-runtime.manager.controller.vdicluster Starting EventSource {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "source": "kind source: /, Kind="}
2021-07-02T07:14:19.822Z INFO controller-runtime.manager.controller.session Starting EventSource {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session", "source": "kind source: /, Kind="}
2021-07-02T07:14:19.923Z INFO controller-runtime.manager.controller.session Starting EventSource {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session", "source": "kind source: /, Kind="}
2021-07-02T07:14:19.923Z INFO controller-runtime.manager.controller.vdicluster Starting EventSource {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "source": "kind source: /, Kind="}
2021-07-02T07:14:19.923Z INFO controller-runtime.manager.controller.vdicluster Starting EventSource {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "source": "kind source: /, Kind="}
2021-07-02T07:14:20.024Z INFO controller-runtime.manager.controller.session Starting Controller {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session"}
2021-07-02T07:14:20.024Z INFO controller-runtime.manager.controller.vdicluster Starting EventSource {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "source": "kind source: /, Kind="}
2021-07-02T07:14:20.125Z INFO controller-runtime.manager.controller.session Starting workers {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session", "worker count": 1}
2021-07-02T07:14:20.125Z INFO controller-runtime.manager.controller.vdicluster Starting Controller {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster"}
2021-07-02T07:14:20.125Z INFO controller-runtime.manager.controller.vdicluster Starting workers {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "worker count": 1}
2021-07-02T07:14:20.125Z INFO controllers.app.VDICluster Reconciling VDICluster {"vdicluster": "/kvdi"}
2021-07-02T07:14:20.125Z INFO controllers.app.VDICluster Reconciling admin password secret {"vdicluster": "/kvdi"}
2021-07-02T07:14:20.125Z INFO controllers.app.VDICluster Generating password and creating new admin secret {"vdicluster": "/kvdi", "Secret.Name": "kvdi-admin-secret", "Secret.Namespace": "default"}
2021-07-02T07:14:20.135Z INFO controllers.app.VDICluster Setting up a temporary connection to the cluster secrets backend {"vdicluster": "/kvdi"}
2021-07-02T07:14:20.139Z INFO controllers.app.VDICluster Reconciling JWT secrets {"vdicluster": "/kvdi"}
2021-07-02T07:14:20.143Z INFO controllers.app.VDICluster Reconciling built-in VDIRoles {"vdicluster": "/kvdi"}
2021-07-02T07:14:20.143Z INFO controllers.app.VDICluster Creating new VDI role {"vdicluster": "/kvdi", "Name": "kvdi-launch-templates"}
2021-07-02T07:14:20.161Z INFO controllers.app.VDICluster Reconciling required resources for the configured authentication provider {"vdicluster": "/kvdi"}
2021-07-02T07:14:20.168Z INFO controllers.app.VDICluster Reconciling RBAC resources for the app servers {"vdicluster": "/kvdi"}
2021-07-02T07:14:20.268Z INFO controllers.app.VDICluster Creating new service account {"vdicluster": "/kvdi", "ServiceAccount.Name": "kvdi-app", "ServiceAccount.Namespace": "default"}
2021-07-02T07:14:20.274Z INFO controllers.app.VDICluster Creating new cluster role {"vdicluster": "/kvdi", "Name": "kvdi-app"}
2021-07-02T07:14:20.329Z INFO controllers.app.VDICluster Creating new cluster role binding {"vdicluster": "/kvdi", "Name": "kvdi-app"}
2021-07-02T07:14:20.350Z INFO controllers.app.VDICluster Reconciling PKI resources for mTLS {"vdicluster": "/kvdi"}
2021-07-02T07:14:20.350Z INFO controllers.app.VDICluster Generating new CA for the kVDI cluster {"vdicluster": "/kvdi"}
2021-07-02T07:14:31.654Z INFO controllers.app.VDICluster Generating new app certificate/key-pair {"vdicluster": "/kvdi", "Certificate": {"namespace": "default", "name": "kvdi-app-server"}}
2021-07-02T07:14:43.121Z INFO controllers.app.VDICluster Generating new app certificate/key-pair {"vdicluster": "/kvdi", "Certificate": {"namespace": "default", "name": "kvdi-app-client"}}
2021-07-02T07:14:48.934Z INFO controllers.app.VDICluster Reconciling app deployment and services {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.938Z INFO controllers.app.VDICluster Creating new deployment {"vdicluster": "/kvdi", "Deployment.Name": "kvdi-app", "Deployment.Namespace": "default"}
2021-07-02T07:14:48.945Z INFO controllers.app.VDICluster Requeueing in 3 seconds for: Created new deployment with wait, requeing for status check {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.945Z INFO controllers.app.VDICluster Reconciling VDICluster {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.945Z INFO controllers.app.VDICluster Reconciling admin password secret {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.945Z INFO controllers.app.VDICluster Setting up a temporary connection to the cluster secrets backend {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.945Z INFO controllers.app.VDICluster Reconciling JWT secrets {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.945Z INFO controllers.app.VDICluster Reconciling built-in VDIRoles {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.945Z INFO controllers.app.VDICluster Reconciling required resources for the configured authentication provider {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.945Z INFO controllers.app.VDICluster Reconciling RBAC resources for the app servers {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.946Z INFO controllers.app.VDICluster Reconciling PKI resources for mTLS {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.949Z INFO controllers.app.VDICluster Reconciling app deployment and services {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.949Z INFO controllers.app.VDICluster Requeueing in 3 seconds for: Waiting for kvdi-app to be ready {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.983Z INFO controllers.app.VDICluster Reconciling VDICluster {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.983Z INFO controllers.app.VDICluster Reconciling admin password secret {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.983Z INFO controllers.app.VDICluster Setting up a temporary connection to the cluster secrets backend {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.983Z INFO controllers.app.VDICluster Reconciling JWT secrets {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.984Z INFO controllers.app.VDICluster Reconciling built-in VDIRoles {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.984Z INFO controllers.app.VDICluster Reconciling required resources for the configured authentication provider {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.984Z INFO controllers.app.VDICluster Reconciling RBAC resources for the app servers {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.984Z INFO controllers.app.VDICluster Reconciling PKI resources for mTLS {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.987Z INFO controllers.app.VDICluster Reconciling app deployment and services {"vdicluster": "/kvdi"}
2021-07-02T07:14:48.987Z INFO controllers.app.VDICluster Requeueing in 3 seconds for: Waiting for kvdi-app to be ready {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.001Z INFO controllers.app.VDICluster Reconciling VDICluster {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.001Z INFO controllers.app.VDICluster Reconciling admin password secret {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.001Z INFO controllers.app.VDICluster Setting up a temporary connection to the cluster secrets backend {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.001Z INFO controllers.app.VDICluster Reconciling JWT secrets {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.002Z INFO controllers.app.VDICluster Reconciling built-in VDIRoles {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.002Z INFO controllers.app.VDICluster Reconciling required resources for the configured authentication provider {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.002Z INFO controllers.app.VDICluster Reconciling RBAC resources for the app servers {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.002Z INFO controllers.app.VDICluster Reconciling PKI resources for mTLS {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.005Z INFO controllers.app.VDICluster Reconciling app deployment and services {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.005Z INFO controllers.app.VDICluster Requeueing in 3 seconds for: Waiting for kvdi-app to be ready {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.021Z INFO controllers.app.VDICluster Reconciling VDICluster {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.021Z INFO controllers.app.VDICluster Reconciling admin password secret {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.021Z INFO controllers.app.VDICluster Setting up a temporary connection to the cluster secrets backend {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.021Z INFO controllers.app.VDICluster Reconciling JWT secrets {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.021Z INFO controllers.app.VDICluster Reconciling built-in VDIRoles {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.021Z INFO controllers.app.VDICluster Reconciling required resources for the configured authentication provider {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.022Z INFO controllers.app.VDICluster Reconciling RBAC resources for the app servers {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.022Z INFO controllers.app.VDICluster Reconciling PKI resources for mTLS {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.025Z INFO controllers.app.VDICluster Reconciling app deployment and services {"vdicluster": "/kvdi"}
2021-07-02T07:14:49.025Z INFO controllers.app.VDICluster Requeueing in 3 seconds for: Waiting for kvdi-app to be ready {"vdicluster": "/kvdi"}
2021-07-02T07:14:51.945Z INFO controllers.app.VDICluster Reconciling VDICluster {"vdicluster": "/kvdi"}
2021-07-02T07:14:51.945Z INFO controllers.app.VDICluster Reconciling admin password secret {"vdicluster": "/kvdi"}
2021-07-02T07:14:51.945Z INFO controllers.app.VDICluster Setting up a temporary connection to the cluster secrets backend {"vdicluster": "/kvdi"}
2021-07-02T07:14:51.945Z INFO controllers.app.VDICluster Reconciling JWT secrets {"vdicluster": "/kvdi"}
2021-07-02T07:14:51.945Z INFO controllers.app.VDICluster Reconciling built-in VDIRoles {"vdicluster": "/kvdi"}
2021-07-02T07:14:51.945Z INFO controllers.app.VDICluster Reconciling required resources for the configured authentication provider {"vdicluster": "/kvdi"}
2021-07-02T07:14:51.945Z INFO controllers.app.VDICluster Reconciling RBAC resources for the app servers {"vdicluster": "/kvdi"}
2021-07-02T07:14:51.945Z INFO controllers.app.VDICluster Reconciling PKI resources for mTLS {"vdicluster": "/kvdi"}
2021-07-02T07:14:51.948Z INFO controllers.app.VDICluster Reconciling app deployment and services {"vdicluster": "/kvdi"}
2021-07-02T07:14:51.949Z INFO controllers.app.VDICluster Requeueing in 3 seconds for: Waiting for kvdi-app to be ready {"vdicluster": "/kvdi"}
2021-07-02T07:14:54.949Z INFO controllers.app.VDICluster Reconciling VDICluster {"vdicluster": "/kvdi"}
2021-07-02T07:14:54.949Z INFO controllers.app.VDICluster Reconciling admin password secret {"vdicluster": "/kvdi"}
2021-07-02T07:14:54.949Z INFO controllers.app.VDICluster Setting up a temporary connection to the cluster secrets backend {"vdicluster": "/kvdi"}
2021-07-02T07:14:54.949Z INFO controllers.app.VDICluster Reconciling JWT secrets {"vdicluster": "/kvdi"}
2021-07-02T07:14:54.949Z INFO controllers.app.VDICluster Reconciling built-in VDIRoles {"vdicluster": "/kvdi"}
2021-07-02T07:14:54.950Z INFO controllers.app.VDICluster Reconciling required resources for the configured authentication provider {"vdicluster": "/kvdi"}
2021-07-02T07:14:54.950Z INFO controllers.app.VDICluster Reconciling RBAC resources for the app servers {"vdicluster": "/kvdi"}
2021-07-02T07:14:54.950Z INFO controllers.app.VDICluster Reconciling PKI resources for mTLS {"vdicluster": "/kvdi"}
2021-07-02T07:14:54.953Z INFO controllers.app.VDICluster Reconciling app deployment and services {"vdicluster": "/kvdi"}
2021-07-02T07:14:54.956Z INFO controllers.app.VDICluster Requeueing in 3 seconds for: Waiting for kvdi-app to be ready {"vdicluster": "/kvdi"}
2021-07-02T07:14:57.957Z INFO controllers.app.VDICluster Reconciling VDICluster {"vdicluster": "/kvdi"}
2021-07-02T07:14:57.957Z INFO controllers.app.VDICluster Reconciling admin password secret {"vdicluster": "/kvdi"}
2021-07-02T07:14:57.957Z INFO controllers.app.VDICluster Setting up a temporary connection to the cluster secrets backend {"vdicluster": "/kvdi"}
2021-07-02T07:14:57.957Z INFO controllers.app.VDICluster Reconciling JWT secrets {"vdicluster": "/kvdi"}
2021-07-02T07:14:57.957Z INFO controllers.app.VDICluster Reconciling built-in VDIRoles {"vdicluster": "/kvdi"}
2021-07-02T07:14:57.957Z INFO controllers.app.VDICluster Reconciling required resources for the configured authentication provider {"vdicluster": "/kvdi"}
2021-07-02T07:14:57.957Z INFO controllers.app.VDICluster Reconciling RBAC resources for the app servers {"vdicluster": "/kvdi"}
2021-07-02T07:14:57.957Z INFO controllers.app.VDICluster Reconciling PKI resources for mTLS {"vdicluster": "/kvdi"}
2021-07-02T07:14:57.960Z INFO controllers.app.VDICluster Reconciling app deployment and services {"vdicluster": "/kvdi"}
2021-07-02T07:14:57.960Z INFO controllers.app.VDICluster Requeueing in 3 seconds for: Waiting for kvdi-app to be ready {"vdicluster": "/kvdi"}
2021-07-02T07:15:00.961Z INFO controllers.app.VDICluster Reconciling VDICluster {"vdicluster": "/kvdi"}
2021-07-02T07:15:00.961Z INFO controllers.app.VDICluster Reconciling admin password secret {"vdicluster": "/kvdi"}
2021-07-02T07:15:00.961Z INFO controllers.app.VDICluster Setting up a temporary connection to the cluster secrets backend {"vdicluster": "/kvdi"}
2021-07-02T07:15:00.961Z INFO controllers.app.VDICluster Reconciling JWT secrets {"vdicluster": "/kvdi"}
2021-07-02T07:15:00.961Z INFO controllers.app.VDICluster Reconciling built-in VDIRoles {"vdicluster": "/kvdi"}
2021-07-02T07:15:00.961Z INFO controllers.app.VDICluster Reconciling required resources for the configured authentication provider {"vdicluster": "/kvdi"}
2021-07-02T07:15:00.961Z INFO controllers.app.VDICluster Reconciling RBAC resources for the app servers {"vdicluster": "/kvdi"}
2021-07-02T07:15:00.961Z INFO controllers.app.VDICluster Reconciling PKI resources for mTLS {"vdicluster": "/kvdi"}
2021-07-02T07:15:00.964Z INFO controllers.app.VDICluster Reconciling app deployment and services {"vdicluster": "/kvdi"}
2021-07-02T07:15:00.964Z INFO controllers.app.VDICluster Requeueing in 3 seconds for: Waiting for kvdi-app to be ready {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.672Z INFO controllers.app.VDICluster Reconciling VDICluster {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.672Z INFO controllers.app.VDICluster Reconciling admin password secret {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.672Z INFO controllers.app.VDICluster Setting up a temporary connection to the cluster secrets backend {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.672Z INFO controllers.app.VDICluster Reconciling JWT secrets {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.672Z INFO controllers.app.VDICluster Reconciling built-in VDIRoles {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.672Z INFO controllers.app.VDICluster Reconciling required resources for the configured authentication provider {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.672Z INFO controllers.app.VDICluster Reconciling RBAC resources for the app servers {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.672Z INFO controllers.app.VDICluster Reconciling PKI resources for mTLS {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.675Z INFO controllers.app.VDICluster Reconciling app deployment and services {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.676Z INFO controllers.app.VDICluster Creating new service {"vdicluster": "/kvdi", "Service.Name": "kvdi-app", "Service.Namespace": "default"}
2021-07-02T07:15:03.706Z INFO controllers.app.VDICluster Reconcile finished {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.706Z INFO controllers.app.VDICluster Reconciling VDICluster {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.706Z INFO controllers.app.VDICluster Reconciling admin password secret {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.706Z INFO controllers.app.VDICluster Setting up a temporary connection to the cluster secrets backend {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.706Z INFO controllers.app.VDICluster Reconciling JWT secrets {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.706Z INFO controllers.app.VDICluster Reconciling built-in VDIRoles {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.706Z INFO controllers.app.VDICluster Reconciling required resources for the configured authentication provider {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.706Z INFO controllers.app.VDICluster Reconciling RBAC resources for the app servers {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.706Z INFO controllers.app.VDICluster Reconciling PKI resources for mTLS {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.709Z INFO controllers.app.VDICluster Reconciling app deployment and services {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.709Z INFO controllers.app.VDICluster Reconcile finished {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.965Z INFO controllers.app.VDICluster Reconciling VDICluster {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.965Z INFO controllers.app.VDICluster Reconciling admin password secret {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.965Z INFO controllers.app.VDICluster Setting up a temporary connection to the cluster secrets backend {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.965Z INFO controllers.app.VDICluster Reconciling JWT secrets {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.965Z INFO controllers.app.VDICluster Reconciling built-in VDIRoles {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.965Z INFO controllers.app.VDICluster Reconciling required resources for the configured authentication provider {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.965Z INFO controllers.app.VDICluster Reconciling RBAC resources for the app servers {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.965Z INFO controllers.app.VDICluster Reconciling PKI resources for mTLS {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.968Z INFO controllers.app.VDICluster Reconciling app deployment and services {"vdicluster": "/kvdi"}
2021-07-02T07:15:03.969Z INFO controllers.app.VDICluster Reconcile finished {"vdicluster": "/kvdi"}
I don't see anything suspicious, but maybe you will.
First, thanks very much for this project. It's great!
One change that would help us tremendously is the ability to "capture all keys" from the client machine to ensure they're sent to the VDI. It's quite maddening, for example, to try to use vim (or emacs) in full-screen mode since the ESC key currently exits the full-screen VDI session.
Perhaps a very uncommon keystroke like Ctrl+F11 would be a better shortcut for exiting full-screen?
I like kvdi, thanks!
It appears that when using LDAP authentication, you use a filter to search for the attributes cn, dn, uid, memberOf, and accountStatus. That appears to be fine for OpenLDAP, but not for FreeIPA (my LDAP server) and probably not for Active Directory either. (I could be wrong, as I only have direct access to FreeIPA.)
In ./kvdi-main/pkg/auth/providers/ldap/authenticate.go, you do:
if strings.ToLower(user.GetAttributeValue("accountStatus")) != "active" {
return nil, fmt.Errorf("User account %s is disabled", user.GetAttributeValue("uid"))
}
This is the error I am getting in the logs for the app, and I verified that you are querying for accountStatus via wireshark.
As accountStatus does not exist in FreeIPA, I cannot use kvdi with FreeIPA.
Perhaps just binding as the user is enough in FreeIPA, since disabled users cannot bind as themselves. An entry in values.yaml to skip the accountStatus check may be good enough.
More generally, it would be good to have the ability to specify the attributes you will query, perhaps as an entry in values.yaml. For example, with Active Directory the user attribute is (or was) sAMAccountName, not uid.
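The suggestion above could be sketched like this. The config fields `UserAttribute` and `SkipStatusCheck` are hypothetical, not existing kvdi config keys; only the hard-coded `uid`/`accountStatus` behavior comes from the source:

```go
package main

import "fmt"

// LDAPConfig is a hypothetical sketch of making the queried LDAP
// attributes configurable instead of hard-coding "uid"/"accountStatus".
type LDAPConfig struct {
	UserAttribute   string // e.g. "uid" (OpenLDAP) or "sAMAccountName" (Active Directory)
	SkipStatusCheck bool   // FreeIPA: rely on the bind failing for disabled users
}

// userFilter builds the search filter from the configured attribute,
// falling back to the current hard-coded "uid".
func (c LDAPConfig) userFilter(username string) string {
	attr := c.UserAttribute
	if attr == "" {
		attr = "uid"
	}
	return fmt.Sprintf("(%s=%s)", attr, username)
}
```

With `SkipStatusCheck: true`, the `accountStatus != "active"` branch in authenticate.go would simply be bypassed.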
kubernetes concepts encourage us to assume that a pod can be rescheduled at any time. In this case what happens to an active session if the desktop pod behind it is rescheduled?
Some folks are forced to use private registries on a disconnected network. Two things I have found that do not appear configurable apart from editing code in place are:
image: ghcr.io/kubebuilder/kube-rbac-proxy:v0.5.0
I do not see a Kubernetes-native way of changing that one to a private registry (currently resorting to a sed script during build, yuck). I could be wrong about there not being a way to change this via k8s.
There may be others. The first one tripped me up for a few hours ;-)
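One Kubernetes-native way to rewrite image references without a sed script is kustomize's `images` transformer, applied to rendered manifests. This is a sketch; it assumes the chart output has first been rendered to `manifests.yaml` (e.g. via `helm template`), and the private registry name is a placeholder:

```yaml
# kustomization.yaml
resources:
  - manifests.yaml
images:
  - name: gcr.io/kubebuilder/kube-rbac-proxy
    newName: registry.example.internal/kube-rbac-proxy
    newTag: v0.5.0
```

`kubectl apply -k .` (or `kustomize build`) then emits the manifests with the image rewritten, no source edits required.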
I have a few users; they are all using the same Docker image and the same desktop template (with very small diffs), and everything worked.
Two hours ago, one user showed me that starting a session via the UI results in the "Waiting" screen. Looking at the K8s dashboard, I can see that a service is there but no pod at all. I can't tell if there are logs anywhere; it's just strange.
Other users can use the same template, but it takes a while to start, with lots of pod restarts...
Any clue what is going on?
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:326: applying cgroup configuration for process caused: mkdir /sys/fs/cgroup/memory/docker/48f82b4914f55027cfb887db030760b9608d9992a263f16cdeb110eb4d564292: cannot allocate memory: unknown.
Decouple the frontend from the server so
Is there a way to have the pod names be specific to the user? Currently you have them as desktop-somehexstring. It would be nice to have them as desktop-user1-0, desktop-user1-1, desktop-user2-0, desktop-user2-1 (if more than one is allowed per user). That way, an admin can quickly hunt down a pod and kill or connect to it as necessary. Thanks!
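The proposed scheme could be sketched as below. Neither the format nor the helper exists in kvdi; this only illustrates the request:

```go
package main

import "fmt"

// desktopPodName is a hypothetical naming helper: embed the username
// and a per-user session index in the pod name instead of a hex string.
// Usernames would still need RFC 1123 sanitizing before use.
func desktopPodName(username string, index int) string {
	return fmt.Sprintf("desktop-%s-%d", username, index)
}
```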
Hi,
Had an interesting thing happen today.
I'm using OIDC auth and have a username with a capital letter. kVDI throws this error in that case:
2022-01-13T14:16:41.519Z ERROR controller.session Reconciler error {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session", "name": "my-new-template-82889", "namespace": "kvdi", "error": "PersistentVolumeClaim \"kvdi-AP12345678-userdata\" is invalid: metadata.name: Invalid value: \"kvdi-AP12345678-userdata\": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2 /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227
I'll pretty-print it for convenience.
{
"reconciler group": "desktops.kvdi.io",
"reconciler kind": "Session",
"name": "my-new-template-82889",
"namespace": "kvdi",
"error": "PersistentVolumeClaim \"kvdi-AP12345678-userdata\" is invalid: metadata.name: Invalid value: \"kvdi-AP12345678-userdata\": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')"
}
Maybe the PVC name construction should lowercase the username?
In my case, I fixed this by popping open ADUC and changing this person's sAMAccountName to all lowercase. That worked but probably won't be available as an option in other cases.
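A minimal sketch of what lower-casing at PVC-name construction time might look like. The helper is illustrative, not kvdi's actual code; only the `kvdi-<user>-userdata` pattern comes from the error above:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// invalidChars matches anything outside the RFC 1123 label alphabet.
var invalidChars = regexp.MustCompile(`[^a-z0-9-]`)

// userdataPVCName lower-cases and sanitizes the username so names like
// "AP12345678" produce a valid RFC 1123 subdomain for the PVC.
func userdataPVCName(username string) string {
	name := strings.ToLower(username)
	name = invalidChars.ReplaceAllString(name, "-")
	name = strings.Trim(name, "-")
	return fmt.Sprintf("kvdi-%s-userdata", name)
}
```

Note that two usernames differing only in case would then collide on the same PVC, which may or may not be the desired behavior.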
On a side note, this project is incredible and you should sell it for millions of dollars.
Hi,
Is there a way to configure nodeSelector & affinity on the desktop pods?
Hi, I want to install your project on an AKS cluster, but the manager pod fails with CrashLoopBackOff (OOMKilled).
logs:
kube-rbac-proxy I0212 09:18:32.850504 1 main.go:186] Valid token audiences:
kube-rbac-proxy I0212 09:18:32.850592 1 main.go:232] Generating self signed cert as no cert is provided
kube-rbac-proxy I0212 09:18:33.365315 1 main.go:281] Starting TCP socket on 0.0.0.0:8443
kube-rbac-proxy I0212 09:18:33.366432 1 main.go:288] Listening securely on 0.0.0.0:8443
manager stream closed
kubectl describe pod:
Name: kvdi-manager-58ff97d849-7xtk2
Namespace: default
Priority: 0
Node: aks-agentpool-12592036-2/10.240.0.4
Start Time: Fri, 12 Feb 2021 10:18:28 +0100
Labels: app.kubernetes.io/instance=kvdi
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=kvdi
app.kubernetes.io/version=v0.2.1
helm.sh/chart=kvdi-v0.2.1
pod-template-hash=58ff97d849
Annotations:
Status: Running
IP: 10.244.1.2
IPs:
IP: 10.244.1.2
Controlled By: ReplicaSet/kvdi-manager-58ff97d849
Containers:
kube-rbac-proxy:
Container ID: docker://62d2f50fad42802dfe423bae5c4a251d0191b6fa24ec24678a30237eed24848c
Image: gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0
Image ID: docker-pullable://gcr.io/kubebuilder/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b
Port: 8443/TCP
Host Port: 0/TCP
Args:
--secure-listen-address=0.0.0.0:8443
--upstream=http://127.0.0.1:8080/
--logtostderr=true
--v=10
State: Running
Started: Fri, 12 Feb 2021 10:18:32 +0100
Ready: True
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kvdi-token-66vnx (ro)
manager:
Container ID: docker://ce6a9d24fda5687c56793b60f6cb0ad6c7bdbcbd779470ecb9e121dd01549e77
Image: ghcr.io/tinyzimmer/kvdi:manager-v0.2.1
Image ID: docker-pullable://ghcr.io/tinyzimmer/kvdi@sha256:642cfe9c46935201ea40d766684ca9fbb5ffebf03351d10bcfc7f80879f11e67
Port:
Host Port:
Command:
/manager
Args:
--health-probe-bind-address=:8081
--metrics-bind-address=127.0.0.1:8080
--leader-elect
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Fri, 12 Feb 2021 10:24:42 +0100
Finished: Fri, 12 Feb 2021 10:24:44 +0100
Ready: False
Restart Count: 6
Limits:
cpu: 100m
memory: 30Mi
Requests:
cpu: 100m
memory: 20Mi
Liveness: http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
Readiness: http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
OPERATOR_NAMESPACE: default (v1:metadata.namespace)
POD_NAME: kvdi-manager-58ff97d849-7xtk2 (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
OPERATOR_NAME: kvdi
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kvdi-token-66vnx (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kvdi-token-66vnx:
Type: Secret (a volume populated by a Secret)
SecretName: kvdi-token-66vnx
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal Scheduled 6m30s default-scheduler Successfully assigned default/kvdi-manager-58ff97d849-7xtk2 to aks-agentpool-XXXXXX-2
Normal Pulling 6m29s kubelet Pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0"
Normal Pulled 6m27s kubelet Successfully pulled image "gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0"
Normal Created 6m26s kubelet Created container kube-rbac-proxy
Normal Started 6m26s kubelet Started container kube-rbac-proxy
Normal Pulling 6m26s kubelet Pulling image "ghcr.io/tinyzimmer/kvdi:manager-v0.2.1"
Normal Pulled 6m24s kubelet Successfully pulled image "ghcr.io/tinyzimmer/kvdi:manager-v0.2.1"
Normal Created 5m26s (x4 over 6m24s) kubelet Created container manager
Normal Started 5m26s (x4 over 6m23s) kubelet Started container manager
Normal Pulled 5m26s (x3 over 6m21s) kubelet Container image "ghcr.io/tinyzimmer/kvdi:manager-v0.2.1" already present on machine
Warning BackOff 79s (x25 over 6m18s) kubelet Back-off restarting failed container
Am I missing something? Do you need further information?
Thanks in advance!
Hi,
Any chance we can enable the copy and paste keyboard shortcuts from the local machine to the kVDI machine without pressing the sync clipboard button every time?
Whenever I am trying to attach a PVC to the Desktop template, I get an error like: json: cannot unmarshal object into Go struct field DesktopConfig.spec.config.capabilities of type []v1.Capabilit
I might have invalid YAML, but there are no examples in the docs for reference.
The YAML I am trying to setup is:
apiVersion: kvdi.io/v1alpha1
kind: DesktopTemplate
metadata:
  name: ubuntu-xfce
spec:
  config:
    allowFileTransfer: true
    allowRoot: true
    init: systemd
    capabilities:
      PersistentVolumeClaim:
        spec:
          accessModes:
            - ReadWriteMany
          resources:
            requests:
              storage: 1Gi
          volumeName: volume1
          storageClassName: sc-volume1
          volumeMode: Filesystem
        status:
          phase: Bound
          accessModes:
            - ReadWriteMany
          capacity:
            storage: 1Gi
  image: 'ghcr.io/tinyzimmer/kvdi:ubuntu-xfce4-latest'
  imagePullPolicy: IfNotPresent
  resources: {}
  tags:
    applications: minimal
    desktop: xfce4
    os: ubuntu
Error while copying stream from client connection to display socket {"error": "write unix @->/var/run/kvdi/display.sock: write: broken pipe"}
When I try to expose the pod with:
kubectl expose pod hanan -n kvdi --name=hanan-svc --type=LoadBalancer --port=22 --target-port=22
I get an IP, but even after installing sshd and making sure it's running, I can't ssh to the machine.
I am using the kvdi app with TLS enabled on the app. Is there a way to force it to negotiate TLS v1.2, denying 1.1 and 1.0? I am using v0.3.4 of the app, but I suspect the same issue exists in 0.3.6.
I see in the source for the app, in vdicluster_app_util.go, functions such as:
func (c *VDICluster) GetAppSecretsName() string {
This (to me) suggests a reference back to a CRD value, but there is nothing for something like minTLSVersion. So is there any way to force a minimum TLS version?
Thanks
TODO
Hi, it's me again... Could you tell me if I can pass the bearer token from the user's OIDC login to the desktop container as an env var? Or how else could I achieve this?
I enabled the audio option, then played YouTube in the browser, but there is no sound from it...
OS: Mac (M1)
Log:
2022-11-18T14:20:33.346Z INFO kvdi_proxy Serving new request {"Type": "display", "Client": "10.244.1.25:46270"}
2022-11-18T14:20:33.346Z INFO kvdi_proxy Received display proxy request, connecting to unix:///var/run/kvdi/display.sock
2022-11-18T14:20:33.347Z INFO kvdi_proxy Connection to display server established
2022-11-18T14:20:33.347Z INFO kvdi_proxy Starting display proxy
2022-11-18T14:20:33.348Z INFO kvdi_proxy Connecting to pulse server: /run/user/9000/pulse/native
2022-11-18T14:20:33.349Z INFO kvdi_proxy Setting up audio devices
2022-11-18T14:21:53.349Z INFO kvdi_proxy Connection is alive {"Connection": "display", "BytesSent": 158001231, "BytesReceived": 70721}
2022-11-18T14:22:03.350Z INFO kvdi_proxy Connection is alive {"Connection": "display", "BytesSent": 187096637, "BytesReceived": 77579}
2022-11-18T14:22:13.349Z
When operating behind load balancers / ingress controllers that have
an idle timeout configured, it's common to see the session disconnect in the UI.
Is there any suggestion to avoid this problem? If there is a mechanism that can be used inside this project, it would be better than
changing the ingress settings.
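Not a fix inside the project, but for reference: with ingress-nginx the idle timeout can be raised per Ingress via annotations so the websocket is not cut. A sketch, where the host, service name, and timeout values are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kvdi-app
  annotations:
    # Raise proxy timeouts (seconds) so idle display websockets survive
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
    - host: kvdi.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kvdi-app
                port:
                  number: 443
```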
I copied the example Ubuntu image and pasted it in the web GUI. After creation it says: template.desktops.kvdi.io "ubuntu-xfce" is invalid: spec.image: Required value. I don't know what to do :v
Hello again,
I closed my browser tab after launching a desktop image, then reopened it. After login I couldn't attach to my session. I can see it from the API as a session, but there is no button I can click to attach to this session. I saw this and this, but I didn't find the answer. After a few tries I checked the browser logs.
This is after login:
This is after soft refresh (f5):
After the soft refresh it shows the button and I can attach to the session. But I think it's some kind of bug :v
This is my config
vdi:
  spec:
    appNamespace: k8s-kvdi
    auth:
      tokenDuration: "8h"
      oidcAuth:
        preserveTokens: True
        # issuerURL:
        issuerURL:
        clientCredentialsSecret: "oidc-secret-kvdi"
        redirectURL:
        groupScope: kvdi-policy
        adminGroups: [ "admin-user" ]
        allowNonGroupedReadOnly: True
When I set up a namespace for the chart and app, the main page (after OIDC login) starts refreshing all the time. It only works in the default namespace, which I don't want to use. Am I doing something wrong? I tested it with the same namespace for app and chart, and also with different ones. Please help.
Do you think there is a good way to disconnect sessions after an inactivity period?
Right now I have simulated this by:
So the end result is that nginx disconnects me after 10 minutes (screensaver + proxy-read-timeout). And the user can't just refresh the page, because /api/refresh_token is blocked, so it causes a brand new login.
But this is somewhat fragile because it relies on the screensaver to stop websocket traffic. This seems like it does not work 100% of the time.
So I wonder, is it possible to modify the UI code to somehow recognize that the user does not have the tab open and disconnect the websocket client-side?
Hi,
It looks like interacting with role annotations is not possible from the web ui when using OIDC auth.
The web ui shows 'Annotations are not used for local authentication'.
I think the problem might be that this is looking for IssuerURL, when the CRD specifies that it must be issuerURL (lowercase first letter).
authMethod: state => {
  if (state.serverConfig.auth !== undefined) {
    if (state.serverConfig.auth.ldapAuth !== undefined && state.serverConfig.auth.ldapAuth.URL) {
      return 'ldap'
    }
    if (state.serverConfig.auth.oidcAuth !== undefined && state.serverConfig.auth.oidcAuth.IssuerURL) {
      return 'oidc'
    }
  }
  return 'local'
}
If so, this might be a really quick fix.
In this file:
https://github.com/kvdi/kvdi/blob/main/ui/app/src/store/config.js#L63
My template is below:
apiVersion: desktops.kvdi.io/v1
kind: Template
metadata:
  name: ubuntu-xfce
spec:
  desktop:
    image: ghcr.io/kvdi/ubuntu-xfce4:latest
    imagePullPolicy: IfNotPresent
    allowRoot: true
  proxy:
    allowFileTransfer: true
  dind:
    image: "docker:19-dind" # Defaults to latest which may have issues depending on your runtime
  tags:
    os: ubuntu
    desktop: xfce4
    applications: minimal
Hi there,
I was using the Ubuntu template, then I switched to the API tab, and from then on all API requests return code 403,
and I cannot connect back to the session.
I don't know where the problem is.
My config:
vdi:
  spec:
    appNamespace: k8s-kvdi
    app:
      serviceType: ClusterIP
      #corsEnabled: true
      resources:
        limits:
          cpu: "2000m"
          memory: "2G"
        requests:
          cpu: "1000m"
          memory: "1G"
    auth:
      tokenDuration: "8h"
      oidcAuth:
        preserveTokens: True
        #issuerURL: ""
        issuerURL: ""
        clientCredentialsSecret: ""
        #redirectURL: ""
        redirectURL: ""
        #scopes: [ "email", "profile", "groups" ]
        groupScope: kvdi-policy
        adminGroups: [ "admin-user" ]
        allowNonGroupedReadOnly: True
Hi,
First, let me say that you guys are doing an awesome job for this project!
My use case calls for users to be able to store some data in their home directory. I saw that this feature is supposed to be available by defining the userdataSpec element during deployment. The documentation indicates that in this case there will be a PersistentVolume for each user. That doesn't seem to be working: the first user that logs in gets one, and that same one is then reused for all other users. In fact, this also prevents two users from being logged in at the same time, as the second instance of the desktop pod is stuck waiting for the persistent volume to become available while it is held by the first instance. I also noticed that the volume mapping seems suspicious, as my two users have the same PVC assigned. See below.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kvdi-userdata-volume-map
  namespace: vdi
data:
  admin: pvc-93576f47-1f8c-4135-adee-47da89d813ba
  test.user: pvc-93576f47-1f8c-4135-adee-47da89d813ba
I'm using this values.yaml along with Helm to deploy kVDI under the vdi namespace:
vdi:
  spec:
    appNamespace: vdi
    auth:
      tokenDuration: 8h
    userdataSpec:
      resources:
        requests:
          storage: 10Gi
      accessModes:
        - ReadWriteOnce
This is happening on GKE. The image I'm using for the template is ghcr.io/tinyzimmer/kvdi:ubuntu-xfce4-latest.
Regards,
Luiz
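For reference, the vsphere report earlier in this thread pinned userdata volumes to a dedicated StorageClass with reclaimPolicy: Retain, referenced via storageClassName in userdataSpec. A rough GKE equivalent of that setup, where the class name is illustrative (pd.csi.storage.gke.io is the GKE CSI provisioner):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: userdata-retain   # illustrative name
provisioner: pd.csi.storage.gke.io
reclaimPolicy: Retain     # released PVs are kept rather than deleted
volumeBindingMode: WaitForFirstConsumer
---
# then referenced from the values file (sketch):
# vdi:
#   spec:
#     userdataSpec:
#       storageClassName: userdata-retain
```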
I'm experimenting with setting socketAddr to tcp://hostname:5900 (or tcp://hostname:5900/) so that kvdi-proxy connects to a VNC port somewhere else. When the kvdi-proxy container is pulled, its pod can't start, saying that the mountPath is invalid. Doing a kubectl get pod podname -o yaml shows that tcp://host:5900 is associated with a mountPath/volume, and as such is invalid for a mount or volume. This makes sense if we are using a unix domain socket, but not a TCP address.
Hello!
As already expressed, big love for this project. I am actually looking for a rough guide as to where potential issues could arise in supporting ARM. I'd like to contribute on this end, as my project is based on a mixed-architecture cluster. If you could give me rough outlines of what you expect, I'd love to work on that.
How can we learn about webmesh-vdi without a fancy site?
Landing page
Installation instructions for non-technical users
We use Longhorn as our storage solution in the cluster. It sometimes takes time to bind, and the result is usually a pod reporting that the mount failed. Maybe a longer timeout is required?
I want to create a CentOS desktop image and am referring to the Ubuntu example, but there are so many details that I cannot figure out quickly. Is there any guide on how to do it? Thanks in advance.
Hi,
To avoid creating a template for each user, I would like to use the user name as a parameter in the templates. Is that possible?
When I tried to run a virtual desktop template, I got these errors:
{
  "error": "pods \"ubuntu-xfce-tndmb\" not found",
  "status": "NotFound"
}
{
  "error": "pods \"dosbox-5hc9g\" not found",
  "status": "NotFound"
}
I then realised that the kvdi manager is crashing (CrashLoopBackOff), which might be the reason for it.
When I look into the logs I see: panic: runtime error: invalid memory address or nil pointer dereference
2022-02-01T17:06:35.612Z INFO setup kVDI Version: v0.3.6
2022-02-01T17:06:35.612Z INFO setup kVDI Commit: 00f33f4eda3aef727dfbec324ef82865b656f873
2022-02-01T17:06:35.612Z INFO setup Go Version: go1.17.2
2022-02-01T17:06:35.612Z INFO setup Go OS/Arch: linux/amd64
2022-02-01T17:06:35.866Z INFO controller-runtime.metrics metrics server is starting to listen {"addr": "127.0.0.1:8080"}
2022-02-01T17:06:35.866Z INFO setup starting manager
I0201 17:06:35.867162 1 leaderelection.go:248] attempting to acquire leader lease kvdi/095fd8bb.kvdi.io...
2022-02-01T17:06:35.867Z INFO starting metrics server {"path": "/metrics"}
I0201 17:06:51.525120 1 leaderelection.go:258] successfully acquired lease kvdi/095fd8bb.kvdi.io
2022-02-01T17:06:51.525Z INFO controller.vdicluster Starting EventSource {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "source": "kind source: /, Kind="}
2022-02-01T17:06:51.525Z INFO controller.session Starting EventSource {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session", "source": "kind source: /, Kind="}
2022-02-01T17:06:51.525Z INFO controller.vdicluster Starting EventSource {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "source": "kind source: /, Kind="}
2022-02-01T17:06:51.525Z INFO controller.session Starting EventSource {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session", "source": "kind source: /, Kind="}
2022-02-01T17:06:51.525Z INFO controller.vdicluster Starting EventSource {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "source": "kind source: /, Kind="}
2022-02-01T17:06:51.525Z INFO controller.session Starting EventSource {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session", "source": "kind source: /, Kind="}
2022-02-01T17:06:51.525Z INFO controller.vdicluster Starting EventSource {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "source": "kind source: /, Kind="}
2022-02-01T17:06:51.525Z INFO controller.session Starting EventSource {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session", "source": "kind source: /, Kind="}
2022-02-01T17:06:51.525Z INFO controller.vdicluster Starting EventSource {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "source": "kind source: /, Kind="}
2022-02-01T17:06:51.525Z INFO controller.session Starting EventSource {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session", "source": "kind source: /, Kind="}
2022-02-01T17:06:51.525Z INFO controller.vdicluster Starting EventSource {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "source": "kind source: /, Kind="}
2022-02-01T17:06:51.525Z INFO controller.vdicluster Starting EventSource {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "source": "kind source: /, Kind="}
2022-02-01T17:06:51.525Z INFO controller.vdicluster Starting Controller {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster"}
2022-02-01T17:06:51.525Z INFO controller.session Starting Controller {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session"}
2022-02-01T17:06:51.525Z DEBUG events Normal {"object": {"kind":"ConfigMap","namespace":"kvdi","name":"095fd8bb.kvdi.io","uid":"f045eca8-6d6d-4fe3-95f5-3e189c200278","apiVersion":"v1","resourceVersion":"8126"}, "reason": "LeaderElection", "message": "kvdi-manager-9684bdc6b-mzfg5_d0de0aea-0439-4a4e-9ccd-a5af5afcc403 became leader"}
2022-02-01T17:06:51.525Z DEBUG events Normal {"object": {"kind":"Lease","namespace":"kvdi","name":"095fd8bb.kvdi.io","uid":"26e37f72-cb97-430b-8f43-ac33336f8f36","apiVersion":"coordination.k8s.io/v1","resourceVersion":"8127"}, "reason": "LeaderElection", "message": "kvdi-manager-9684bdc6b-mzfg5_d0de0aea-0439-4a4e-9ccd-a5af5afcc403 became leader"}
2022-02-01T17:06:51.629Z INFO controller.session Starting workers {"reconciler group": "desktops.kvdi.io", "reconciler kind": "Session", "worker count": 1}
2022-02-01T17:06:51.629Z INFO controller.vdicluster Starting workers {"reconciler group": "app.kvdi.io", "reconciler kind": "VDICluster", "worker count": 1}
2022-02-01T17:06:51.630Z INFO controllers.desktops.Session Reconciling Desktop {"session": "default/dosbox-5hc9g"}
2022-02-01T17:06:51.630Z INFO controllers.app.VDICluster Reconciling VDICluster {"vdicluster": "/kvdi"}
2022-02-01T17:06:51.630Z INFO controllers.desktops.Session Retrieving template and cluster for session {"session": "default/dosbox-5hc9g"}
2022-02-01T17:06:51.630Z INFO controllers.app.VDICluster Reconciling admin password secret {"vdicluster": "/kvdi"}
2022-02-01T17:06:51.630Z INFO controllers.app.VDICluster Setting up a temporary connection to the cluster secrets backend {"vdicluster": "/kvdi"}
2022-02-01T17:06:51.630Z INFO controllers.app.VDICluster Reconciling JWT secrets {"vdicluster": "/kvdi"}
2022-02-01T17:06:51.630Z INFO controllers.app.VDICluster Reconciling built-in VDIRoles {"vdicluster": "/kvdi"}
2022-02-01T17:06:51.630Z INFO controllers.app.VDICluster Reconciling required resources for the configured authentication provider {"vdicluster": "/kvdi"}
2022-02-01T17:06:51.630Z INFO controllers.app.VDICluster Reconciling RBAC resources for the app servers {"vdicluster": "/kvdi"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x13614de]
goroutine 373 [running]:
github.com/kvdi/kvdi/apis/app/v1.(*VDICluster).GetUserdataVolumeSpec(0xc00003d800)
/build/apis/app/v1/vdicluster_common_util.go:110 +0x5e
github.com/kvdi/kvdi/pkg/resources/desktop.(*Reconciler).Reconcile(0xc00017e2a0, {0x1a5f8a8, 0xc00017e180}, {0x1a7bfc0, 0xc0002d2240}, 0xc00044e180)
/build/pkg/resources/desktop/reconciler.go:94 +0x21a
github.com/kvdi/kvdi/controllers/desktops.(*SessionReconciler).Reconcile(0xc0007478f0, {0x1a5f8a8, 0xc00017e180}, {{{0xc0006e0ba9, 0x7}, {0xc0006e0b94, 0xc}}})
/build/controllers/desktops/session_controller.go:81 +0x2f5
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0xc000356b40, {0x1a5f8a8, 0xc00017e0f0}, {{{0xc0006e0ba9, 0x16fc440}, {0xc0006e0b94, 0xc00074f000}}})
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:114 +0x222
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000356b40, {0x1a5f800, 0xc00047b400}, {0x16813e0, 0xc000762000})
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311 +0x2f2
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000356b40, {0x1a5f800, 0xc00047b400})
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266 +0x205
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:223 +0x354
{"time":"2022-02-01T16:44:15.226781208Z","method":"GET","path":"/api/readyz","statusCode":200,"size":19,"remoteHost":"192.168.0.1"}
2022/02/01 16:44:22 http: TLS handshake error from 192.168.0.1:14731: remote error: tls: unknown certificate
2022/02/01 16:44:22 http: TLS handshake error from 192.168.0.1:40314: remote error: tls: unknown certificate
{"time":"2022-02-01T16:44:25.207026382Z","method":"GET","path":"/api/readyz","statusCode":200,"size":19,"remoteHost":"192.168.0.1"}
2022/02/01 16:44:25 http: TLS handshake error from 192.168.0.1:64067: remote error: tls: unknown certificate
2022/02/01 16:44:25 http: TLS handshake error from 192.168.0.1:22453: remote error: tls: unknown certificate
{"time":"2022-02-01T16:44:25.657811559Z","method":"GET","path":"/","statusCode":200,"size":900,"remoteHost":"192.168.0.1"}
{"time":"2022-02-01T16:44:25.751002852Z","method":"GET","path":"/css/app.983fa484.css","statusCode":200,"size":29063,"remoteHost":"192.168.0.1"}
{"time":"2022-02-01T16:44:25.750150202Z","method":"GET","path":"/css/vendor.954f525a.css","statusCode":200,"size":213477,"remoteHost":"192.168.0.1"}
{"time":"2022-02-01T16:44:25.751761858Z","method":"GET","path":"/js/app.343390e7.js","statusCode":200,"size":297062,"remoteHost":"192.168.0.1"}
{"time":"2022-02-01T16:44:25.751501852Z","method":"GET","path":"/js/vendor.3101e6dc.js","statusCode":200,"size":3481381,"remoteHost":"192.168.0.1"}
{"time":"2022-02-01T16:44:26.38269946Z","method":"GET","path":"/statics/icons/quasar.svg","statusCode":200,"size":2522,"remoteHost":"192.168.0.1"}
{"time":"2022-02-01T16:44:26.383207019Z","method":"GET","path":"/statics/github.png","statusCode":200,"size":24525,"remoteHost":"192.168.0.1"}
2022-02-01T16:44:26.387Z ERROR api Authentication failed, checking if anonymous is allowed {"error": "User 'anonymous' not found in the cluster"}
github.com/kvdi/kvdi/pkg/api.DecodeRequest.func1
/build/pkg/api/api_decoder.go:81
Forbidden request due to: User 'anonymous' not found in the cluster
net/http.HandlerFunc.ServeHTTP
/usr/local/go/src/net/http/server.go:2046
github.com/kvdi/kvdi/pkg/api.doRequestMetrics
/build/pkg/api/api_metrics.go:171
github.com/kvdi/kvdi/pkg/api.prometheusMiddleware.func1
/build/pkg/api/api_metrics.go:157
net/http.HandlerFunc.ServeHTTP
/usr/local/go/src/net/http/server.go:2046
github.com/gorilla/mux.(*Router).ServeHTTP
/go/pkg/mod/github.com/gorilla/[email protected]/mux.go:210
github.com/kvdi/kvdi/pkg/api.(*desktopAPI).ServeHTTP
/build/pkg/api/api_router.go:163
github.com/gorilla/mux.(*Router).ServeHTTP
/go/pkg/mod/github.com/gorilla/[email protected]/mux.go:210
github.com/gorilla/handlers.loggingHandler.ServeHTTP
/go/pkg/mod/github.com/gorilla/[email protected]/logging.go:47
github.com/gorilla/handlers.CompressHandlerLevel.func1
/go/pkg/mod/github.com/gorilla/[email protected]/compress.go:141
net/http.HandlerFunc.ServeHTTP
/usr/local/go/src/net/http/server.go:2046
github.com/gorilla/handlers.ProxyHeaders.func1
/go/pkg/mod/github.com/gorilla/[email protected]/proxy_headers.go:59
net/http.HandlerFunc.ServeHTTP
/usr/local/go/src/net/http/server.go:2046
net/http.serverHandler.ServeHTTP
/usr/local/go/src/net/http/server.go:2878
net/http.initALPNRequest.ServeHTTP
/usr/local/go/src/net/http/server.go:3479
net/http.(*http2serverConn).runHandler
/usr/local/go/src/net/http/h2_bundle.go:5832
{"time":"2022-02-01T16:44:26.38826115Z","method":"GET","path":"/statics/app-logo-128x128.png","statusCode":200,"size":9181,"remoteHost":"192.168.0.1"}
{"time":"2022-02-01T16:44:26.383150054Z","method":"POST","path":"/api/login","statusCode":403,"size":77,"remoteHost":"192.168.0.1"}
{"time":"2022-02-01T16:44:26.432480528Z","method":"GET","path":"/fonts/KFOmCnqEu92Fr1Mu4mxM.9b78ea3b.woff","statusCode":200,"size":20332,"remoteHost":"192.168.0.1"}
{"time":"2022-02-01T16:44:26.432658237Z","method":"GET","path":"/fonts/KFOlCnqEu92Fr1MmEU9fBBc-.ddd11dab.woff","statusCode":200,"size":20532,"remoteHost":"192.168.0.1"}
{"time":"2022-02-01T16:44:26.432749242Z","method":"GET","path":"/fonts/KFOlCnqEu92Fr1MmWUlfBBc-.0344cc3c.woff","statusCode":200,"size":20396,"remoteHost":"192.168.0.1"}
{"time":"2022-02-01T16:44:26.427174318Z","method":"GET","path":"/fonts/flUhRq6tzZclQEJ-Vdg-IuiaDsNcIhQ8tQ.12730e02.woff2","statusCode":200,"size":113328,"remoteHost":"192.168.0.1"}
{"time":"2022-02-01T16:44:26.529928368Z","method":"GET","path":"/statics/icons/favicon.ico","statusCode":200,"size":5238,"remoteHost":"192.168.0.1"}
{"time":"2022-02-01T16:44:35.225822138Z","method":"GET","path":"/api/readyz","statusCode":200,"size":19,"remoteHost":"192.168.0.1"}
KVDI Installation: helm install kvdi kvdi/kvdi -n kvdi --create-namespace
kubectl version --short
Client Version: v1.23.2
Server Version: v1.23.3
Docker Version: 20.10.12
Debian 11
Thanks in advance for any help!
I accidentally closed my Firefox tab while using a desktop. The desktop is still running (it shows up in kubectl get pods), but it no longer shows up in the UI, so I can't get back into it.
I ended up completely removing and then reinstalling kVDI to get rid of it.