devworkspace-operator's Issues

expose component status

Currently with dev-workspace I can create a workspace and I have access to:

  • the devworkspace object, whose spec.template holds the devfile content listing the components/commands that I want
  • the workspaceRoutings object, which says which endpoints are exposed.

But where do I grab the component status?
(with the che server it was available in the workspace.runtime object)

It seems that for now I have to look at the pod object and compare container names with component names, but that looks very fragile.
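For illustration only, the fragile approach amounts to something like this (the label key below is a guess, not a documented API):

WORKSPACE_ID=<workspaceId>
kubectl get pods -n <namespace> -l controller.devfile.io/workspace_id=${WORKSPACE_ID} \
  -o jsonpath='{.items[0].status.containerStatuses[*].name}'
# ...then compare each returned container name against the devfile component names by hand.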

Links to external resources from a devworkspace

Is your enhancement related to a problem? Please describe.

When we have a dev workspace, we can query resources with

$ kubectl get devworkspaces --all-namespaces

but runtime information, as well as the components, is missing from these objects.
That information is stored in other custom resources.

Describe the solution you'd like

It would be nice if, from a DevWorkspace, we could have pointers to the runtime-info custom resources and component custom resources.

There is a naming pattern with a workspace-id suffix, but it would be more convenient to learn the names of the related resources from the DevWorkspace resource itself.

According to the 2nd October call:

We could label all workspace-related objects with an identifier of the dev workspace.

It may be simpler to use the workspace name rather than the workspaceId we have now, since namespace/name fully identifies a devworkspace CR instance.

Most workspace-related objects are namespace-scoped (so a namespace label seems redundant), but some of them can be cluster-scoped, like the OpenShift OAuth client. To grab all workspace-related objects we then need the namespace label as well.

Example of label: io.devfile.dev_workspace.name + io.devfile.dev_workspace.namespace

Che-Theia knows the identifier of the dev workspace and will then be able to do the queries.

* name/namespace could be combined in one label if it makes sense and the namespace/name format satisfies the label value format, e.g. we don't exceed the maximum length.
Having different labels for name and namespace could be more straightforward, since namespace is optional when you do a namespace-scoped query (which should be 100% of the Che-Theia cases).
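As a sketch only: once such labels exist, grabbing every workspace-related object could look like the following (the label key is the example from this issue, not an implemented API):

kubectl get all,workspaceroutings.controller.devfile.io -n <namespace> \
  -l io.devfile.dev_workspace.name=<workspace-name>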

Contribute the flattened devfile 2.0 with needed plugins and tools

We have a goal to dogfood, i.e. to develop the DevWorkspace Operator inside a Devfile 2.0 based DevWorkspace.

This issue is about contributing an initial Devfile 2.0 we can start with, which will have the needed tools (kubectl, oc, kustomize, controller-gen, ...) and commands to:

  • install the Operator on a cluster (not the one where we dogfood);
  • run (also in debug mode) a devworkspace that connects to another cluster;
  • build a docker image (this requires a rootless tool, like buildah?);
  • run e2e tests;

Implement async storage option for DevWorkspaces

Description

"Async storage" as used by Che requires

  • A shared "sync" server deployment that mounts a PVC and listens via ssh
  • A sidecar "sync" container to be injected into workspaces to rsync files from the workspace to the storage server

This allows workspaces to avoid issues around backing storage for PVCs (e.g. Gluster volumes have trouble synchronizing when many files are touched -- e.g. in javascript .node_modules) while also providing persistence (unlike ephemeral volumes).

We should reuse the sync components used by Che to implement this in the DevWorkspace operator.

Requirements

A basic implementation of async storage would

  • Manage the creation of ssh keypairs to allow the sidecar and storage server to communicate
  • Manage a per-namespace async storage deployment + service in namespaces where workspaces exist
  • Provision a sidecar container, secret, etc. for DevWorkspaces
  • Correctly manage cleanup when workspaces are deleted:
    • Remove unowned resources when appropriate (ssh key for deleted workspace)
    • Clean up PVC when a workspace is deleted
    • Remove async deployment when all workspaces are deleted
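For the ssh keypair requirement above, a rough sketch of the kind of provisioning involved (secret name and layout are illustrative, not the final design):

ssh-keygen -t ed25519 -N '' -f ./async-storage-key
kubectl create secret generic <workspace-id>-async-ssh \
  --from-file=ssh-privatekey=./async-storage-key \
  --from-file=ssh-publickey=./async-storage-key.pub \
  -n <workspace-namespace>
# the public key would go into the storage server's authorized_keys, the private key into the sync sidecar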

Scope

For the first implementation, we should target

  • One workspace per namespace
  • Async deployment is removed when workspace is deleted
  • PVC cleanup runs as it does for the common strategy (depends on async deployment being removed)

This is mainly to avoid edge cases in managing out-of-devworkspace resources when there are multiple devworkspaces:

  • What happens if a secret is deleted and we have to regenerate an ssh keypair?
  • How do we clean up a subpath in the PVC when it's mounted by the async server (e.g. if two workspaces are running and one is deleted)?
  • How do we keep the authorized_keys used by the sync server in sync with existing workspaces?

Additional info

Investigating faster workspace startup

What problem do we want to address?

Loading a Che workspace is currently an operation that takes 45 secs or more.

We want to speed this up as much as possible. Ideally under 10 secs because fast software is the best software.

How are we going to address it?

We have the opportunity to experiment with new ideas in the DevWorkspace that we could not test before.

In particular:

  • Reduce the number of containers (i.e. running only a Theia container and tooling container) #218
  • Start an editor only workspace and asynchronously update the CR and "recreate" the workspace pod
  • Cache vsx, images, DevWorkspace CRs result of devfile flattening
  • Volume Snapshots
  • Always have a Pod with minimal CPU and memory request up and running and ready to serve

In this gdoc @amisevsk has added some considerations and a proposal.

Web Terminal Tooling depends on injecting kubeconfig into the first available container

Description

The Web Terminal Tooling plugin handles /exec/init calls by attempting to inject kubeconfig into the first container in the pod. If this fails, the whole call fails, so the plugin only works when the tooling container is first in the list.

However, the devfile/api functions for merging plugin components into a devworkspace merge components in the order 1. Parent, 2. Plugins, 3. Main content, resulting in the tooling container being last in the list. This causes the web terminal to fail with the changes from #240.

Short-term solution

The web terminal should resolve the first compatible container (i.e. if it can't resolve an exec in the first container, it should try the second, etc.)
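Purely as an illustration of the proposed fallback (not the plugin's actual code), the behaviour amounts to something like:

# try each container in the pod until one accepts an exec; kubeconfig would be injected into that one
for c in $(kubectl get pod "$POD" -n "$NS" -o jsonpath='{.spec.containers[*].name}'); do
  if kubectl exec "$POD" -n "$NS" -c "$c" -- sh -c 'true' >/dev/null 2>&1; then
    echo "first compatible container: $c"
    break
  fi
done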

Long-term solution

We need a way of specifying where web terminal should inject kubeconfig.

Some images (alpine-based?) can't seem to download vsix files from github

I'm creating this issue to bring up the problem I faced, in case anyone else sees the same thing.

Platform

minikube version: v1.15.1

Description

Some images can't seem to correctly pull .vsix files.

Working:

  • quay.io/fedora/fedora:34
  • quay.io/eclipse/che-nodejs10-ubi:nightly

Not working:

  • quay.io/samsahai/curl:latest -- alpine-based, used in VSX installer in devfiles
  • quay.io/eclipse/che-plugin-registry:nightly -- alpine-based

Reproduction

IMAGES=(
  "quay.io/fedora/fedora:34" 
  "quay.io/eclipse/che-nodejs10-ubi:nightly" 
  "quay.io/samsahai/curl:latest" 
  "quay.io/eclipse/che-plugin-registry:nightly"
  )
for image in "${IMAGES[@]}"; do
  cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: download-test-$(echo ${image} | sed 's|[^a-z0-9]|-|g')
  labels:
    app: download-test
spec:
  restartPolicy: Never
  containers:
    - image: ${image}
      name: test-download
      command: 
        - '/bin/sh' 
      args:
        - '-c'
        - >
          if which wget >/dev/null; then
            wget -S https://github.com/golang/vscode-go/releases/download/v0.16.1/go-0.16.1.vsix -O /tmp/test
          elif which curl >/dev/null; then
            curl -L https://github.com/golang/vscode-go/releases/download/v0.16.1/go-0.16.1.vsix > /tmp/test
          fi
EOF
done

Check pods:

$ kubectl get po
NAME                                                        READY   STATUS      RESTARTS   AGE
download-test-quay-io-eclipse-che-nodejs10-ubi-nightly      0/1     Completed   0          5s
download-test-quay-io-eclipse-che-plugin-registry-nightly   0/1     Error       0          4s
download-test-quay-io-fedora-fedora-34                      0/1     Completed   0          5s
download-test-quay-io-samsahai-curl-latest                  0/1     Error       0          4s

To clean up:

kubectl delete all -l 'app=download-test'

Implement Volume component

We need to implement the Volume component, which will define a PVC or emptyDir with a size configuration.

components:
  - name: maven
    container:
      image: registry.redhat.io/codeready-workspaces/stacks-java-rhel8:2.1
      mountSources: true
      volumeMounts:
        - name: my-storage
          path: /home/jboss/.settings
  - name: my-storage
    volume:
      size: 500Mi

If a component references a non-declared volume, should the devworkspace fail to start or fail to be created?
Probably fail to start, since such validation needs external resources to be fetched, as in the plugins case. See below.

? Should mountSources: true be converted to a volume named projects?

A volume can be reused/configured from a plugin, like here: https://github.com/devfile/api/blob/master/samples/devfiles/spring-boot-http-booster-devfile.yaml.

...
components:
  - name: java-support
    plugin:
      id: redhat/java8/latest
      components:
        - name: vscode-java            
          container:
            memoryLimit: 2Gi
        - name: m2 # it already has a volume defined, we just configure it
          volume:
            size: 2G
...
  - name: maven-tooling
    container:
      image: registry.redhat.io/codeready-workspaces/stacks-java-rhel8:2.1
      mountSources: true
      memoryLimit: 768Mi
      volumeMounts:
        - name: m2 # using volume from plugin definition
          path: /home/jboss/.m2

Which PVC strategy should we support in devworkspace? Probably common only: one PVC per namespace. Should we implement the same isolation mechanism as for Che?

- PVC structure:
/workspaceId1
  /volumeName1
  /volumeName2
/workspaceId2
  /volumeName1
  /volumeName2

? Since plugins need volumes in initContainers, we can face issues with the subpath initialization currently done in an initContainer:

func precreateSubpathsInitContainer(workspaceId string) corev1.Container

and we probably need to run a separate job to do it instead.

It should be possible to start a devworkspace without IDE

It should be possible to start a devworkspace without IDE, like the following:

kind: DevWorkspace
apiVersion: workspace.devfile.io/v1alpha2
metadata:
  name: java-sample
spec:
  started: true
  template:
    projects:
      - name: frontend
        git:
          remotes:
            origin: https://github.com/spring-projects/spring-petclinic
    components:
    - name: maven
      container:
        image: quay.io/eclipse/che-java8-maven:nightly

Implement the mechanism to configure dockercfg secret as imagePullSecrets for devworkspace

In the scope of eclipse-che/che#18990, the Dashboard is going to create a dockercfg secret in the DevWorkspace namespace.

We need to implement a mechanism to use such a secret as an imagePullSecret.

Possible alternatives:

  • secret is labeled with something like controller.devfile.io/secret-kind: imagePullSecret
  • devworkspace operator lists and mounts all available dockercfg secrets to the devworkspace pods.
  • something else
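A sketch of the first alternative above, using the label from this issue's example (nothing here is implemented yet):

kubectl create secret docker-registry registry-creds \
  --docker-server=quay.io --docker-username=<user> --docker-password=<token> \
  -n <devworkspace-namespace>
kubectl label secret registry-creds \
  controller.devfile.io/secret-kind=imagePullSecret \
  -n <devworkspace-namespace>
# the operator would then list secrets carrying this label and add them to the pod's imagePullSecrets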

Investigate how reducing containers number influences workspace-startup time

This issue is about investigating how reducing the number of containers influences workspace start-up time.
Any language-pack devfile, like Java or Go, can be chosen as the object of investigation.

As an example, Java Maven: https://github.com/eclipse/che-devfile-registry/blob/master/devfiles/java-maven/devfile.yaml
Currently, if you run it on OSIO you'll get the following images:

init containers
"quay.io/eclipse/che-theia-endpoint-runtime-binary:7.22.0"
"quay.io/eclipse/che-plugin-artifacts-broker:v3.4.0"

authentication
"quay.io/eclipse/che-jwtproxy:0.10.0"
// run for every Theia-based workspace
"quay.io/eclipse/che-theia:7.22.0"
"quay.io/eclipse/che-machine-exec:7.22.0"
// language specific sidecar, runs vsx + has preinstalled tools for them
"quay.io/eclipse/che-sidecar-java:11-86274e3"
// container designed for user to use terminal and execute commands
"quay.io/eclipse/che-java11-maven:7.22.0"
// OSIO specific plugin-container provisioned in every workspace
"quay.io/eclipse/che-workspace-telemetry-woopra-plugin:latest"

According to https://docs.google.com/document/d/1V8RA6_wEd20vTRKL60yPXmRIN8A_lo5ps8mb0T9FUwY/edit#heading=h.s3u51mcvho3h each image takes 2 additional seconds even if it's already cached on the node.

We need to investigate what can be merged here and how much it speeds up workspace start-up. For example: the plugin-artifacts-broker can be removed if we download VSX into the plugin sidecar at build time, and the same goes for the endpoint runtime binary.
Che-Theia can be merged with Che Machine Exec.

Then we could go different ways:
prepare a language-specific, flat all-in-one image, like Theia + Java plugin (the user is still able to add additional images if needed),
or run Theia and prepare a language-specific flat tooling image (the user is still able to add additional images if needed).
See #209 (comment)

I'm not sure what we can do with JWTProxy + Telemetry sidecars.

Cannot download go modules without go proxy

Describe the problem

It's not possible to download go modules without using proxy.golang.org due to a couple of issues:

  • Running go mod download in the current master gives the error
    go: github.com/eclipse/che-plugin-broker@v3.1.1-0.20200207223144-b20597f15e4c+incompatible: invalid version: unknown revision b20597f15e4c
    
  • replacing v3.1.1-0.20200207223144-b20597f15e4c+incompatible with v3.1.1 in the go.mod file and running again updates the dependency to
    require github.com/eclipse/che-plugin-broker v3.1.1+incompatible
    
    and outputs the error
    go: github.com/openshift/[email protected]+incompatible: invalid pseudo-version: preceding tag (v3.9.0) not found
    
    which is related to operator-framework/operator-lifecycle-manager#1241, though it appears that this should not apply to this project since we're using operator-sdk 0.17

Additional details

The default go package in Fedora 31 and 32 uses GOPROXY=direct, which causes the issue above; this will likely impact rhel-based distros as well.

Workaround

Set GOPROXY=https://proxy.golang.org,direct before downloading modules or download and use a separate binary -- e.g.

# install and use a separate Go toolchain binary, which defaults to the public module proxy
pushd $(mktemp -d)
go get golang.org/dl/go${VERSION}   # VERSION must be set, e.g. VERSION=1.14.7
go${VERSION} download
alias go=go${VERSION}
popd

Once modules have been successfully cached, the error above is avoided even without using proxy.golang.org.

Update environment variables

Is your enhancement related to a problem? Please describe.

Many environment variables defined in containers use the CHE_ prefix.
They shouldn't, as the DevWorkspace engine is agnostic of Che.

Describe the solution you'd like

I would like a clear mapping between old and new names.

Also, some environment variables could be made available through files.
For example, k8s stores some config in files under /var/run/secrets/kubernetes.io/serviceaccount.

CHE_MACHINE_TOKEN (or others) could be a candidate for using a file instead of an env variable, e.g. /var/run/secrets/devfile.io/machine-token.
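Purely illustrative sketch of the file-based idea (the secret name and mount path are hypothetical; nothing like this exists yet):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: workspace-machine-token
stringData:
  machine-token: <token-value>
EOF
# the controller would mount this secret so that the token shows up at /var/run/secrets/devfile.io/machine-token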

ENV NAME                               Replacement
CHE_API                                N/A
CHE_API_INTERNAL                       N/A
CHE_WORKSPACE_ID                       N/A (end user info)
CHE_MACHINE_TOKEN                      N/A
CHE_WORKSPACE_TELEMETRY_BACKEND_PORT   CHE_WORKSPACE_TELEMETRY_BACKEND_PORT (not yet implemented; not defined per che server)
CHE_MACHINE_NAME                       DEV_WORKSPACE_COMPONENT_NAME
CHE_PROJECTS_ROOT                      COMPONENT_PROJECTS_ROOT or PROJECTS_ROOT

For CHE_WORKSPACE_TELEMETRY_BACKEND_PORT, we need to check whether Devfile 2.x allows plugins to bring their env vars into containers, as is supported in plugin.meta.yaml.

Non CHE_ variables

ENV NAME      Replacement
NO_PROXY      NO_PROXY
HTTP_PROXY    HTTP_PROXY
HTTPS_PROXY   HTTPS_PROXY

Che-Theia also expects:

ENV NAME       Replacement
PRODUCT_JSON   config map for the json

Use cert-webhook-server job to get certs on K8s

Initially, we used a Che-specific job that simply uses openssl to create certificates.
It's better to use https://github.com/newrelic/k8s-webhook-cert-manager or https://github.com/jet/kube-webhook-certgen (used by the nginx ingress controller).

By the time we have the DevWorkspace Operator this may no longer be relevant, and we may fully rely on OLM to provide certificates for the webhook server, but we'll see. I'd also be glad to see anyone propose a better alternative.

Update:

Pointing to the latest documentation about webhooks in Kubebuilder, which is now entirely the basis of the new Operator SDK 1.0:
https://book.kubebuilder.io/cronjob-tutorial/running.html and https://book.kubebuilder.io/cronjob-tutorial/cert-manager.html

Support workspace routing controller configuration on DevWorkspace object

With the advent of external workspace routing controllers, the need has arisen to configure them on a per-workspace basis. In other words, it should be possible to pass routing-class-specific configuration down to the external controller.

== Proposed Solution

There is a precedent for handling this kind of "polymorphism" in Kubernetes with the Ingress annotations, e.g. https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/.

I would like to propose configuring the workspace routing controller using annotations on the DevWorkspace object.

Let's say we have a workspace routing controller handling the myrouting routing class.

We would be able to configure it on the DevWorkspace object like this:

kind: DevWorkspace
apiVersion: workspace.devfile.io/v1alpha1
metadata:
  name: cloud-shell
  annotations:
    controller.devfile.io/restricted-access: "true"
    controller.devfile.io/other-controller-annotation: "yes"
    myrouting.routingclass.controller.devfile.io/answer: "42"
spec:
  started: true
  routingClass: myrouting
  template:
    ...

This would create a workspace routing object with the 2 following annotations:

kind: WorkspaceRouting
apiVersion: workspace.devfile.io/v1alpha1
metadata:
  name: ...
  annotations:
    controller.devfile.io/restricted-access: "true"
    myrouting.routingclass.controller.devfile.io/answer: "42"
...

The restricted-access annotation is already being passed down by the existing code. myrouting.routingclass.controller.devfile.io/answer is considered a configuration property of the controller and is therefore passed down to the WorkspaceRouting object. controller.devfile.io/other-controller-annotation is NOT passed down, because it is unrelated to the workspace routing.

== Alternative Solution

We could also specify the configuration directly in the spec of the Devworkspace, e.g.:

kind: DevWorkspace
apiVersion: workspace.devfile.io/v1alpha1
metadata:
  name: cloud-shell
  annotations:
    controller.devfile.io/restricted-access: "true"
    controller.devfile.io/other-controller-annotation: "yes"
spec:
  started: true
  routingClass: myrouting
  routingAnnotations:
    answer: "42"
  template:
    ...

This feels a little less idiomatic for Kubernetes to me, though.

Every time devworkspace operator is restarted, a new serviceaccount token is created

In cases where the devworkspace operator deployment is stuck in a crashloop, every time it starts it manages the webhook server serviceaccount in such a way that a new sa-token is created.

❯ kc get secrets
NAME                                      TYPE                                  DATA   AGE
default-token-tcwqm                       kubernetes.io/service-account-token   3      31m
devworkspace-operator-webhook-cert        kubernetes.io/tls                     3      31m
devworkspace-webhook-server-token-52mhn   kubernetes.io/service-account-token   3      29m
devworkspace-webhook-server-token-5427w   kubernetes.io/service-account-token   3      24m
devworkspace-webhook-server-token-gbgjc   kubernetes.io/service-account-token   3      4m16s
devworkspace-webhook-server-token-hj8jt   kubernetes.io/service-account-token   3      19m
devworkspace-webhook-server-token-lhlvg   kubernetes.io/service-account-token   3      27m
devworkspace-webhook-server-token-mb6dk   kubernetes.io/service-account-token   3      31m
devworkspace-webhook-server-token-mdljr   kubernetes.io/service-account-token   3      9m23s
devworkspace-webhook-server-token-ns25v   kubernetes.io/service-account-token   3      14m
devworkspace-webhook-server-token-r67bb   kubernetes.io/service-account-token   3      30m
devworkspace-webhook-server-token-v92b7   kubernetes.io/service-account-token   3      30m
devworkspace-webhook-server-token-xlh7g   kubernetes.io/service-account-token   3      30m
devworkspace-webhook-server-token-zbstn   kubernetes.io/service-account-token   3      83s

Make basic routing working again

To move forward faster with the ability to use the devworkspace controller as the workspace engine in chectl, we need a quick solution to make basic routing work again:

Later these hacks should be replaced with proper routing with authentication enabled.

workspace routings objects name

Is your enhancement related to a problem? Please describe.

workspace resources are available as devworkspaces,
but workspace routings are available as workspaceroutings.

Describe the solution you'd like

If we use DevWorkspace instead of Workspace,
we should rename WorkspaceRoutings to DevWorkspaceRoutings.

Verify if mkdir container can be safely removed

Since the POC stage, the DevWorkspace controller has run mkdir containers as init containers, but that's not going to help solve file permission issues if other init containers mount some subfolders, because (as Che used to do it) subfolders must be initialized before they are mounted.

But since it somehow works with the current approach, and Angel heard that it might be resolved on the k8s side (we don't have a good reference), we MUST either make sure it works on K8s/OpenShift clusters and remove the mkdir init container entirely,
OR, if such permission issues still exist on some k8s/openshift clusters, rework it properly so that subfolders are initialized from a separate pod before they are mounted into the devworkspace pod.

For more see https://github.com/eclipse/che/blob/master/assembly/assembly-wsmaster-war/src/main/webapp/WEB-INF/classes/che/che.properties#L338

Remove configmap from operator

Is your enhancement related to a problem? Please describe.

Currently, the devworkspace operator relies on a configmap to define a few options (default routingClass, etc.). However, operators as managed by OLM don't come with configmaps, and instead we create a configmap on cluster during startup as a way of manually configuring the operator after the fact.

Describe the solution you'd like

We should remove the configmap functionality, as it's generally not used for operators. Instead, configuration should be defined in a standard way, e.g. via OLM descriptors.

Additional context

This issue was created for a TODO added in PR #187

Clean up files belonging to removed workspaces

Currently, the DevWorkspace Operator uses one PVC for all workspaces and provides isolation with subpaths.
These subpaths are never cleaned up, which means that a user who recreates workspaces will probably exceed the file system quota eventually.

We need to clean up the subpaths that belong to removed workspaces. I see this being implemented with an additional finalizer that sets up a dedicated Deployment/Pod/Job which mounts the root of the workspace PVC and removes the needed files. Note that this Pod can be blocked if another workspace is running at the same time (since the PVC is RWO). To avoid unneeded runs of such a cleanup deployment, we may store the initialized subpaths in the PVC annotations.

Also, note that to be able to remove subpaths, we may need to initialize them in the right way: mount the PVC root, initialize the subpaths, and only then mount the subpaths, so that we can clean them up later without permission issues. For more see #211
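A hand-written sketch of the kind of cleanup Job described above (PVC name, image and paths are guesses, not the operator's actual implementation):

cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-workspaceid1
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: cleanup
          image: quay.io/fedora/fedora:34
          command: ["/bin/sh", "-c", "rm -rf /workspace-pvc/workspaceId1"]
          volumeMounts:
            - name: workspace-pvc
              mountPath: /workspace-pvc # PVC mounted at its root, no subPath
      volumes:
        - name: workspace-pvc
          persistentVolumeClaim:
            claimName: claim-devworkspace
EOF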

Investigate support for rolling workspace deployments

Description

Currently, the devworkspace operator runs workspaces in deployments with a recreate strategy. This is required because rolling deployments can hang if they mount a RWO volume, but it also means that most modifications to a devworkspace result in a short time where the workspace is offline.

We should look into ways to configure whether rolling deployments should be used, and potentially enable rolling deployments automatically (if e.g. a workspace doesn't mount any PVCs or something like async storage is used).

Implement apply of container component on preStart events

Is your task related to a problem? Please describe.

To support the new plugins model, we need to implement applying a container component on preStart events.
Then we'll be able to describe plugins like the following (adapted from the samples at https://github.com/devfile/api/tree/master/samples):

Base vsx plugin template
schemaVersion: 2.0.0
metadata:
  publisher: redhat
  name: vsx-template
  type: template
  parameters:
    VSX_LIST # ????
components:
  - name: vsx-installer
    container:
      image: vsx-installer # technically it's adapted artifacts plugin broker which is not in place yet
      volumeMounts:
        - name: vsx
          path: "/vsx"
      env:
        - name: VSX_LIST
          value: ""
  - name: theia-remote-injector
    container:
      image: "quay.io/eclipse/che-theia-endpoint-runtime-binary:7.20.0"
      volumeMounts:
        - name: remote-endpoint
          path: "/remote-endpoint"
      env:
        - name: PLUGIN_REMOTE_ENDPOINT_EXECUTABLE
          value: /remote-endpoint/plugin-remote-endpoint
        - name: REMOTE_ENDPOINT_VOLUME_NAME
          value: remote-endpoint
  - name: remote-endpoint
    volume:
      emptyDir: {} ? №2
commands:
 - id: copyVsx
   apply:
     component: vsx-installer
 - id: injectRemoteILauncher
   apply:
     component: theia-remote-injector
events:
  preStart:
    - copyVsx
    - injectRemoteILauncher

? №1 We don't want to get copies of vsxInstaller and injectRemoteILauncher, but the model does not allow defining how identical components are merged. So maybe it should be implementation-specific for the plugin component: if a different plugin brings a component with the same name, we try to merge them. Everything except VSX_LIST should be the same.
It may be a bit simpler in terms of interface declaration if we define different env vars in different plugins, like VSX_JAVA_8, VSX_JAVA_DEBUG, ...; otherwise we have to hardcode that only VSX_LIST is merged by appending.

? №2 empty dir volumes are not implemented yet devfile/api#189

Then plugin definition:
schemaVersion: 2.0.0
metadata:
  publisher: redhat
  name: java8
  version: latest
  displayName: Language Support for Java 8
  title: Language Support for Java(TM) by ...
  description: Java Linting, Intellisense ...
  icon: https://.../logo-eclipseche.svg
  repository: https://github.../vscode-java
  category: Language
  firstPublicationDate: "2020-02-20"
  pluginType: che-theia-vsx
parent:
  id: redhat/theia-vsx-template/latest
  components:
    - name: vsx-installer
      container:
        env:
          - name: VSX_LIST
            value: java-dbg.vsix,java.vsix
components:
  - name: vscode-java
    container:
      image: ...che-sidecar-java
      memoryLimit: "1500Mi"
      volumeMounts:
        - path: "/home/theia/.m2"
          name: m2
  - name: m2
    volume: {}    

? №3 The plugin sidecar has an entrypoint with an env var stub that should be injected by the remote injector. See https://github.com/che-dockerfiles/che-sidecar-java/blob/master/Dockerfile#L32
Currently che-plugin-broker encapsulates this logic and applies the configuration if the plugin is theia or vscode: https://github.com/eclipse/che-plugin-broker/blob/40cdcfb0e54ef1bf170690045802cc6710c33dfc/brokers/metadata/broker.go#L134
In Devfile 2.0 there is an issue about providing an env var to all containers: devfile/api#149
But what about the remote-injector emptyDir volume? Should we contribute it to every container as well? Or, even though it produces duplicates, it may be more consistent for the plugin container to define the remote-endpoint volumeMount itself.

Then the Devfile is just:
schemaVersion: 2.0.0
metadata:
  name: spring-boot-http-booster
  type: workspace
projects:
  - name: spring-boot-http-booster
    git:
      remotes:
        origin: https://github.com/snowdrop/spring-boot-http-booster
      checkoutFrom:
        revision: master
components:
  # Should we explicitly define theia as a plugin? Probably yes, or we should analyze the resolved plugin configuration for some indicator of whether it's an editor or not - before providing the default one.
  - name: java-support
    plugin:
      id: redhat/java8/latest
      components:
        - name: vscode-java            
          container:
            memoryLimit: 2Gi
        - name: m2
          volume:
            size: 2G
  - name: maven-tooling
    container:
      image: registry.redhat.io/codeready-workspaces/stacks-java-rhel8:2.1
      mountSources: true
      memoryLimit: 768Mi
      env:
        - name: JAVA_OPTS
          value: >-
            -XX:MaxRAMPercentage=50.0 -XX:+UseParallelGC -XX:MinHeapFreeRatio=10
            -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4
            -XX:AdaptiveSizePolicyWeight=90 -Dsun.zip.disableMemoryMapping=true
            -Xms20m -Djava.security.egd=file:/dev/./urandom -Duser.home=/home/jboss
        - name: MAVEN_OPTS
          value: $(JAVA_OPTS)
      endpoints:
        - name: 8080-tcp
          targetPort: 8080
          exposure: public
      volumeMounts:
        - name: m2
          path: /home/jboss/.m2
commands:
  - id: build 
    exec:
      component: maven-tooling
      commandLine: mvn -Duser.home=${HOME} -DskipTests clean install
      workingDir: '${PROJECTS_ROOT}/spring-boot-http-booster'
      env:
        - name: MAVEN_OPTS
          value: "-Xmx200m"

User, ssh, preferences, telemetry services when no che server

Is your enhancement related to a problem? Please describe.

How do we grab user/ssh/preferences, etc. from a DevWorkspace?

Is it the responsibility of, for example, Che-Theia to create such resources if they don't exist, or can DevWorkspaces provide config maps?

Describe the solution you'd like

A solution usable in Che-Theia so that Theia can start smoothly (no missing service), for example:

user settings - config map
workspace settings - config map
ssh keys - secret
^ these objects should declare via annotations that they should be mounted to devworkspaces, either as env vars or as files (and to which path)
--> the controller should take these objects into account and add them to the devworkspace-related deployment

--> another controller should manage the user.
The basic idea is that each 'component/controller/client' should handle a specific part.

  • che-theia should have the config map mounted and can use/update it
  • the dev workspace controller should create the workspace by mounting the annotated objects (if a secret is annotated --> mount it; if a config map, etc., mount it)
  • another controller: create config maps, ssh key secrets, etc.;
    for openshift, the namespace-configuration-operator, which creates namespaces/configmaps from templates, could be used
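To illustrate the annotated-objects idea above (the annotation keys are invented for this example; the real contract is still to be defined):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-settings
  annotations:
    controller.devfile.io/mount-to-devworkspace: "true"
    controller.devfile.io/mount-path: /etc/user-settings
data:
  settings.json: "{}"
EOF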

Where to start? Who creates the config map if it isn't there? (before we add this new controller micro-service)

che-theia? No, as it should be mounted first. Let's move it to a custom chectl command:

$ chectl workspace:create <namespace> (should use workspace engine selected by the user when doing server:start)

It will take care of checking whether there is a config map for this namespace and, if not, prompt the user for their name, git settings, etc.
The namespace needs to have a custom label to identify the user.

Projects clone should happen before other components are started

What problem do we want to address?

The ones described here

How are we going to address it?

  • When a DevWorkspace CR has one or more projects, those should be cloned before any other component is deployed
  • Kubernetes secrets containing Git credentials and SSH key pairs should be used to clone password protected repositories
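Just to sketch the kind of secret that could carry those credentials (name and keys are illustrative, not a defined contract):

kubectl create secret generic git-ssh-key \
  --from-file=ssh-privatekey=$HOME/.ssh/id_rsa \
  -n <devworkspace-namespace>
# an equivalent secret with a username/token pair would cover HTTPS-protected repositories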

Investigate OpenID Connect authentication for devworkspaces

The secure routing we proposed for the devworkspace was htpasswd.
It's simple, but it does not seem to provide a good UX for kubernetes cluster users.
So instead we should investigate OpenID Connect routing, which can be used if the K8s cluster is configured with such authentication: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens

The devworkspace operator should ideally be agnostic of the OpenID provider implementation.
An OpenID server we could try to use for such an integration: https://github.com/dexidp/dex

Prepare the DevWorkspaceOperator for Devfile 2.0 support in Che

  • Use the Devfile 2.0 Specification
    • 1. Validate the new plugin mechanism with a complete devfile that contains only containers components, commands and events (cf. the following gist: https://github.com/davidfestal/api/blob/devfile-2.0-vscode-plugin-management/samples/plugin-sample/all-in-one-theia-nodejs.devworkspace.yaml)
      • Implement the apply of container components on preStart events => create an initContainer #183
      • Complete the Devfile 2.0 implementation of Endpoints: exposure, protocol, secure, path are now out of attributes #184
      • Implement the Volume component (or infer it when there is a mount for now ?) #185
      • Create the VSX installer docker image to be used in the vsx-copier component definition
    • 2. Use the latest devfile/api repo content
      • Update the DevWorkspace Controller to Operator SDK 1.0 (compatible with the new kubebuilder) #180
      • Include the conversion between v1alpha1 and v1alpha2 in devfile/api, cf. https://book.kubebuilder.io/multiversion-tutorial/conversion.html - #189
      • Add conversion-gen calls to generate conversion code for all the parts that are the same
      • Create the conversion webhook based on this code in the DevWorkspace controller
    • 3. Validate the new Devfile 2.0 plugin mechanism
      • Allow inline parents and plugins and implement them in the DevWorkspace controller
      • Implement minimal flattening in case of inline parents / plugins
      • Test the devfile 2.0 plugin mechanism for the terminal, the Theia editor and one Theia remote plugin - based only on inline elements
      • Evaluate and fix the impacts on Che-theia integration:
        • Get the flattened devfile from the new che-theia workspace client library
        • How do we flag components that come from a plugin, to be able to gather them in the che-theia UI and distinguish them from user-runtime containers? => Add an optional attribute on non-plugin components?
    • 4. Implement full support of Devfile 2.0 plugins
      • Implement complete flattening of parents / plugins, as a dedicated controller on a dedicated custom resource (simply a DevWorkspaceTemplate if we add a DevWorkspaceTemplateSpecContent in its status?). There should be an option (false by default) to enable the use of the Devfile 2.0 plugin mechanism for plugins loaded through ID or URL.
      • Build a plugin v2.0.0 registry based on devfile/registry-support#2, and fill it with the translation of v1.0.0 plugin into v2.0 plugins
    • 5. Make full support of Devfile 2.0 plugins the default behavior
      • Update the DevWorkspaceController embedded plugin registry to be compatible with the devfile 2.0.0
      • In chectl, when deploying with the devworkspace engine, deploy a Devfile 2.0.0 plugin registry, or set up a conversion mechanism in a plugin registry that provides devfile 2.0.0 plugin yamls from the 1.0.0 ones?
      • Set the default Devfile 2.0.0 plugins option to true by default

(1 // 2) > 3 > 4 > 5

Should we consider reconciliation of commands and projects?

The fields commands and projects of a DevWorkspace are not managed by the DevWorkspace operator.

That means that tools that use those fields need to:

  • directly watch for changes to the CR
  • eventually create/delete kubernetes objects as a consequence of changes to the CR

For example, in a Che-Theia workspace scenario, if a new project is added to a DevWorkspace, Che-Theia needs to immediately git clone it and add it to the workspace. If the editor is not Che-Theia but IntelliJ, we need to implement that on the other editor too.

That has a couple of problems:

  • tools need to (re-)implement reconciliation
  • possible inconsistency in how reconciliation is implemented

Possible reconciliations handled by the DevWorkspace controller:

  • projects --> #205
  • commands --> CM mounted as files?
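Just to illustrate the "CM mounted as files" idea in the list above (entirely hypothetical, not an existing feature):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: workspace-commands
data:
  build.sh: |
    #!/bin/sh
    mvn -DskipTests clean install
EOF
# the operator could mount this ConfigMap into the tooling container so that editors read the commands from files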

Implement metrics for operator

Description

We currently support basic metrics (while in dev mode/experimental features enabled) to track how long it takes to start a DevWorkspace. This support should be expanded to report more info about the operator, and the Operator metrics endpoint should be used.

Additional Info

Kubebuilder docs

Unavailability of packages.operators.coreos.com/v1 causes webhook server failure

I saw this issue on crc, and only an update helped me to solve it.

Now I see this on a real OpenShift cluster: at a time when everything seems to work fine (deployment, pods, ...), except that removing a namespace hangs (it keeps terminating forever), the webhook server fails to start with the error

2020-12-07T15:15:47.140Z        INFO    webhook.server  ERROR: Could not evaluate if admission webhook configurations are available     {"error": "unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request"}
2020-12-07T15:15:47.140Z        ERROR   cmd     Failed to create webhooks       {"error": "unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request"}
main.main
        /devworkspace-operator/webhook/main.go:78
runtime.main
        /usr/local/go/src/runtime/proc.go:203

Since the webhook server gates all pod/exec requests, I wonder if we can make it safer and not fail when we hit that error.

Annotate WorkspaceRouting with routing class specific annotations

With the advent of external workspace routing controllers, the need has arisen to configure them on a per-workspace basis. In other words, it should be possible to pass routing-class-specific configuration down to the external controller.

== Proposed Solution

There is a precedent for handling this kind of "polymorphism" in Kubernetes with the Ingress annotations, e.g. https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/.

I would like to propose configuring the workspace routing controller using annotations on the DevWorkspace object.

Let's say we have a workspace routing controller handling the myrouting routing class.

We would be able to configure it on the DevWorkspace object like this:

kind: DevWorkspace
apiVersion: workspace.devfile.io/v1alpha1
metadata:
  name: cloud-shell
  annotations:
    controller.devfile.io/restricted-access: "true"
    controller.devfile.io/other-controller-annotation: "yes"
    myrouting.routingclass.controller.devfile.io/answer: "42"
spec:
  started: true
  routingClass: myrouting
  template:
    ...

This would create a workspace routing object with the 2 following annotations:

kind: WorkspaceRouting
apiVersion: workspace.devfile.io/v1alpha1
metadata:
  name: ...
  annotations:
    controller.devfile.io/restricted-access: "true"
    myrouting.routingclass.controller.devfile.io/answer: "42"
...

The restricted-access annotation is already being passed down by the existing code. myrouting.routingclass.controller.devfile.io/answer is considered a configuration property of the controller and is therefore passed down to the WorkspaceRouting object. controller.devfile.io/other-controller-annotation is NOT passed down, because it is unrelated to the workspace routing.

== Alternative Solution

We could also specify the configuration directly in the spec of the Devworkspace, e.g.:

kind: DevWorkspace
apiVersion: workspace.devfile.io/v1alpha1
metadata:
  name: cloud-shell
  annotations:
    controller.devfile.io/restricted-access: "true"
    controller.devfile.io/other-controller-annotation: "yes"
spec:
  started: true
  routingClass: myrouting
  routingAnnotations:
    answer: "42"
  template:
    ...

This feels a little less idiomatic for Kubernetes to me, though.

Grab registries URLs from a workspace

Is your enhancement related to a problem? Please describe.

Che-theia (or other workspace tools) may want to know which registries (plug-in and/or devfile) were used to create the workspace. (For example, che-theia needs this to list all available plug-ins and the ones that are enabled.)

There is a config map for the devworkspace controller, but workspaces won't have permission to read from it.

Describe the solution you'd like

Registry URLs should be available in a DevWorkspace annotation, in config map information, or in a mounted information file under /var/run/secrets/devfile.io/ that can be used to grab these URLs or anything else.

Should we have a flattening subcontroller?

Description

As we implement the full devfile/api plugins functionality, we should consider whether the flattening process should be separated out into a subcontroller, similar to what we had for Component subresources

Pros:

  • Cache results so that we don't need to flatten on every reconcile loop
    • This could be especially useful as we start making http requests to flatten (plugin specified by URI)
  • Expose flattened devfile through status of the subresource

Cons:

  • Caching may not be that useful, if a) kubernetes references to devworkspacetemplates are expected to be the main use case and b) the flatten step is fast enough
  • More complexity; another fixed internal API we have to deal with.
  • We can expose the flattened devfile in other, potentially more useful ways

Support basic TLS routing

This is really a prerequisite for supporting TLS on the routing, since the Theia webview does not work without it.
To avoid importing a CA into the browser, it makes sense to include the single-host issue here as well.

  • On OpenShift #201:

    • TLS: I think on OpenShift we should just enable edge termination with the Redirect policy for insecure traffic and rely on the cluster certificates;

    • Single-host: for simplification, we should just use the same host, which includes the workspaceID, and use the component/endpoint names in the path rule;

  • On Kubernetes there are different ways to go:

    • Simplest: generate a wildcard certificate, e.g. with cert-manager, and configure it as the default one (https://kubernetes.github.io/ingress-nginx/user-guide/tls/); TLS and single-host then work the same as on OpenShift.
      Alternatives:

    • Do not require a wildcard certificate: we could generate a certificate per operator, which then has to be propagated to the workspace namespaces; the workspaceID then goes into the path rule. This is a bit more difficult because of the need to propagate the secret;

    • Do not require a wildcard certificate: each workspace could own its own certificate. The difficulty: the operator then depends on cert-manager and has to manage Certificate CRs as well.

I think it's worth going with the simplest option and then changing it later.
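A loose sketch of the simplest option above, a wildcard certificate issued by cert-manager and set as the ingress controller default (issuer, namespace and domain are placeholders):

cat <<'EOF' | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: workspaces-wildcard
  namespace: ingress-nginx
spec:
  secretName: workspaces-wildcard-tls
  dnsNames:
    - "*.workspaces.example.com"
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
EOF
# then start the nginx ingress controller with --default-ssl-certificate=ingress-nginx/workspaces-wildcard-tls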

DevWorkspace dogfooding

What problem do we want to address?

We want to demonstrate/validate that a Theia based DevWorkspace can be used to work on a real-world, cloud-native project (this one).

How are we going to address it?

  • Contribute the initial devfile 2.0 with needed plugins and tools (additional cluster is needed to run test controller) #217
  • Adapt the devworkspace operator so that developers are able to test a local version on the same cluster:
    📓 this does not have a clear ideal solution. To test the controller itself, we may introduce an annotation that makes a CR be handled only by operators running in local mode (not in all-namespaces mode);
    but if we need to test CRD changes, then it's tricky: we need to generate the apiVersion, apiGroup or Kind (or all of them) so as not to break all DevWorkspaces on the cluster. That generated value then has to be propagated to the operator code base plus the samples used for testing.

Support parents in the devfile

The devfile format allows specifying a parent, but it's not implemented yet on the devworkspace operator side.

So this issue is about implementing it.

Populate devworkspace.status.message with info about failures

Now that devfile/api#221 is merged, we should utilize the new field to give a short description explaining why workspaces failed (e.g. "plugin not found", "openshift-oauth routing only supported on OpenShift")

This will require updating the devfile/api dependency, and IIRC there are some incompatibilities that need to be fixed.

Workspace routing objects for the current workspace are not available within the workspace pod.

I created a DevWorkspace using devWorkspace Operator on minikube (no Eclipse Che there)

Then, from the che-theia container, I tried to get the workspacerouting object and it failed:

/projects/tmp $ ./kubectl get workspaceroutings.controller.devfile.io/routing-workspaceeb55021d3cff42e0 -n che

Error from server (Forbidden): workspaceroutings.controller.devfile.io "routing-workspaceeb55021d3cff42e0" is forbidden: User "system:serviceaccount:che:workspaceeb55021d3cff42e0-sa" cannot get resource "workspaceroutings" in API group "controller.devfile.io" in the namespace "che"

but I can access the dev workspace object:

/projects/tmp $ ./kubectl  get devworkspaces/theia -n che
NAME    WORKSPACE ID                PHASE     URL
theia   workspaceeb55021d3cff42e0   Running   http://workspaceeb55021d3cff42e0-theia-3100.192.168.64.31.nip.io

or the pod object

/projects/tmp $ ./kubectl  get pods -n che
NAME                                         READY   STATUS    RESTARTS   AGE
workspaceeb55021d3cff42e0-77f7bd767f-tld2s   3/3     Running   3          16d

Trying on the host (where minikube is launched), the command is successful:

$ kubectl get workspaceroutings.controller.devfile.io/routing-workspaceeb55021d3cff42e0 -n che
NAME                                AGE
routing-workspaceeb55021d3cff42e0   16d
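Not a fix, just a sketch of the kind of RBAC that would make the in-workspace query succeed (the operator would have to create something equivalent for the workspace service account):

cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workspaceroutings-reader
  namespace: che
rules:
  - apiGroups: ["controller.devfile.io"]
    resources: ["workspaceroutings"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: workspaceroutings-reader
  namespace: che
subjects:
  - kind: ServiceAccount
    name: workspaceeb55021d3cff42e0-sa
    namespace: che
roleRef:
  kind: Role
  name: workspaceroutings-reader
  apiGroup: rbac.authorization.k8s.io
EOF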

Set up indexing for devworkspace objects (Operator SDK 1.1)

Is your enhancement related to a problem? Please describe.

Operators bootstrapped by kubebuilder are set up with a scaffold for indexing objects created on the cluster. This is currently unimplemented as of PR #187

Describe the solution you'd like

Set up indexing (see doc) as appropriate

Additional context

Created for PR #187 (see comment)

Begin publishing flattened, ready-to-deploy yaml templates of the controller

Description

Currently, the DevWorkspace operator is only deployable by setting environment variables and running kustomize. To support deploying the operator via chectl, we need to publish processed yamls that can be referenced in chectl without the need to run kustomization. This could potentially be done via GitHub releases to avoid filling the repo with thousands of lines of yaml.
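A minimal sketch of what the publishing step could look like (the kustomize directory and output file name are placeholders, not the repository's actual layout):

kustomize build deploy > devworkspace-operator.yaml
# devworkspace-operator.yaml could then be attached to a GitHub release and consumed by
# chectl with a plain: kubectl apply -f devworkspace-operator.yaml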
