
Introduction

title: Enhancement Process
linktitle: Enhancement Process
description: Labs enhancements process
type: docs
weight: 10

Enhancements

The enhancements repository contains design proposals for Jenkins X enhancements.

All proposals are welcome; please follow this process:

  • Raise an issue on this repo describing at a high level what the enhancement aims to achieve and whether you would like any help with the proposal.
  • If you would like some early collaboration from the Jenkins X community, you might want to start with Google Docs for a faster feedback cycle.
  • Open a pull request containing markdown that gives some context and describes the problem the enhancement aims to solve, along with possible solutions.

Contributors

ankitm123, garethjevans, hferentschik, jordangoasdoue, jstrachan, marckk, msvticket, rawlingsj, vbehar


Issues

new modular CLI

The next phase of the modularity work is creating a new jx CLI based on small modular binary plugins.

The idea is to provide a similar CLI for jx 2.x or 3.x, built from the ground up with binary plugins, so it's easier to incrementally refactor / replace / improve the code, its quality, reporting and testing.

As part of this effort it would be good to review the UX and try to improve it so it's more intuitive, while also allowing for more flexibility and supporting different personas; e.g. someone administering Jenkins X and a developer working on some microservices may want different CLI sub-commands.

From a UX perspective we may want to align with other UX approaches (e.g. https://github.com/knative/client/blob/master/conventions/cli.md) and integrate other CLIs, such as the tekton / istio CLIs, as plugins?

There is a PoC for the new 3.x CLI at jenkins-x/jx-cli
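
To make the plugin idea concrete, here is a purely hypothetical sketch (not the actual jx-cli mechanism; the API group, kind, fields and URLs are invented for illustration) of how a subcommand could be described declaratively and resolved to a pinned, per-platform binary:

  # Hypothetical plugin descriptor: `jx gitops ...` would download and exec the
  # matching binary for the current OS/arch and delegate all arguments to it.
  apiVersion: example.jenkins-x.io/v1alpha1   # invented API group
  kind: CLIPlugin
  metadata:
    name: gitops
  spec:
    subCommand: gitops            # invoked as `jx gitops`
    version: 0.1.0                # pinned, so upgrades are explicit and testable
    binaries:
      - goos: linux
        goarch: amd64
        url: https://github.com/jenkins-x/jx-gitops/releases/download/v0.1.0/jx-gitops-linux-amd64.tar.gz
      - goos: darwin
        goarch: arm64
        url: https://github.com/jenkins-x/jx-gitops/releases/download/v0.1.0/jx-gitops-darwin-arm64.tar.gz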

support Tekton Catalog

The Tekton catalog provides a great way to share Task and Pipeline resources across CI/CD projects for anyone using Tekton. Currently it's not easy to reuse the Tekton catalog with Jenkins X.

It would be awesome if we could support the Tekton Catalog so that:

  • Jenkins X users can easily consume any Task or Pipeline resources from the catalog
  • we can expose Jenkins X pipelines in our own Jenkins X Tekton Catalog so that anyone from the Tekton ecosystem can reuse any of the Jenkins X Tasks/Pipelines
  • we figure out a nice way to share Tekton resources across microservices and teams without 'copy & paste' everywhere, which soon turns into a maintenance nightmare

We may also want to ensure folks can reuse the ChatOps capabilities from lighthouse when using vanilla Tekton.

Being able to enhance the ChatOps capabilities in Jenkins X via Tekton Catalogs would be awesome too, e.g. binding Task/Pipeline resources from a catalog to a ChatOps command on a number of repositories/branches.
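
For the first bullet, newer Tekton Pipelines releases already provide a building block: the hub resolver can fetch a catalog Task by name at run time, avoiding the copy & paste problem. A minimal sketch, assuming a Tekton release with remote resolution enabled:

  # Pull the git-clone Task from the Tekton catalog via the hub resolver
  # instead of vendoring the Task YAML into every repository.
  apiVersion: tekton.dev/v1beta1
  kind: TaskRun
  metadata:
    generateName: clone-
  spec:
    taskRef:
      resolver: hub
      params:
        - name: kind
          value: task
        - name: name
          value: git-clone
        - name: version
          value: "0.9"
    params:
      - name: url
        value: https://github.com/jenkins-x/jx.git
    workspaces:
      - name: output
        emptyDir: {}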

create ARM capable Jenkins X

It would be good to support ARM Kubernetes clusters for some or all of Jenkins X.

Let's define the requirements on this issue, such as:

  • what images to create in what priority order
  • how/where to publish them
  • how to use the images on ARM

To track all the various docs and issues we have a GitHub project here: https://github.com/jenkins-x/jenkins-x-arm-support

Initial questions

  • Where should we publish container images?
  • How should we differentiate ARM versus amd64 images?
  • Should we use Docker Hub?
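
One common answer to these questions is to publish a single multi-arch manifest list per tag, so amd64 and arm64 consumers pull the same image name. A sketch using GitHub Actions and Docker Buildx (the registry and image name are assumptions):

  name: release-image
  on:
    push:
      tags: ["v*"]
  jobs:
    build:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - uses: docker/setup-qemu-action@v3       # emulate arm64 on the amd64 runner
        - uses: docker/setup-buildx-action@v3
        - uses: docker/login-action@v3
          with:
            registry: ghcr.io
            username: ${{ github.actor }}
            password: ${{ secrets.GITHUB_TOKEN }}
        - uses: docker/build-push-action@v6
          with:
            platforms: linux/amd64,linux/arm64    # one manifest list covers both
            push: true
            tags: ghcr.io/jenkins-x/example:${{ github.ref_name }}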

Do not remove support for Static Jenkins config

I thought I read somewhere in a slack channel that there were plans to remove support for the Static Jenkins config.

Not every organization will be able to wrap their heads around using the "Serverless Jenkins X Pipelines with Tekton". We also use this instance of Jenkins for other automation.

Please do not remove support for the Static Jenkins config, but allow us to evolve to that architecture more easily when we are ready.

Implement Version Management for Builders, BuildPacks and Quickstarts

There is an intrinsic dependency relationship between quickstarts, builders and buildpacks that spans the lifecycle of any customer application code derived from them.

We should assert the following use-cases:

  1. Customers should always be able to rebuild an application that they have previously successfully built on the platform, within an agreed time window of support that should be measured in years.

  2. It must be possible for the Jenkins-X project to upgrade builders on a regular cadence to mitigate vulnerabilities and add new features.

  3. It must be possible for the Jenkins-X project to upgrade buildpacks on a regular cadence to mitigate vulnerabilities and add new features.

  4. There must be a migration path for customer codebases to upgrade to later versions of builders and buildpacks (on the basis that this may necessitate changes to their codebase).

  5. It must be possible for the Jenkins-X project to deprecate and eventually remove instances of quickstarts, builders and buildpacks at the end of a support window.

Implications:

  • Quickstarts, buildpacks and builders must be versioned
  • The relationship between related versions of quickstarts, builders and buildpacks must be managed and maintained by the platform
  • There is a requirement to support multiple versions of quickstarts, builders and buildpacks in parallel on any given platform instance
  • The cadence of upgrades is much shorter than the desired length of the support window, so there is a challenge in managing the multiplying number of quickstart, builder and buildpack instances
  • Under the current approach, builders contain jx client code and are therefore not immutable and must be regenerated upon each release
  • Builders act as a local cache of dependencies on customer instances, to accelerate build activities
  • Changes to builders invalidate local caches and can introduce unexpected errors to previously working codebases
  • Versions of quickstarts, builders and buildpacks may be tied to versions of nodepools on customer K8S instances, especially in MLOps scenarios where customers are using dedicated hardware resources such as GPU or TPU instances.

Drivers:

  1. It is currently not possible to upgrade builders and buildpacks without breaking customer code. This has meant that we are not providing timely security patches to core dependencies.

  2. It is not possible to complete the transition from Skaffold to Kaniko due to buildpack dependencies.

  3. It is currently not possible to support the rapid pace of change to core frameworks in MLOps, due to the need to support multiple versions of dependencies in parallel in production.

Constraints:

  • Under the current model, proliferation of builders would have a severely detrimental effect upon the performance of the Jenkins-X release process

It is proposed that we discuss ways to re-architect the current solution to meet the above use cases.
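
As a purely hypothetical illustration of what "managed and maintained by the platform" could mean, a version matrix might pin each quickstart generation to immutable builder and buildpack versions with an explicit support window (the file, fields and values are invented):

  # Hypothetical version matrix: the platform resolves which builder and
  # buildpack a quickstart generation is pinned to, and for how long that
  # combination is supported.
  quickstarts:
    - name: golang-http
      generation: 3
      buildpack: go@2.4.1            # immutable, versioned buildpack reference
      builder: builder-go@1.17.3     # immutable builder tag with no embedded jx client
      supportedUntil: "2022-06-30"   # end of the agreed support window
      migratesTo: 4                  # next generation, a documented upgrade path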

See also jenkins-x/jx#4671

provide different version stream channels

Right now we are continuously upgrading the Jenkins X 2.x and the new 3.x version streams, which can move too fast for some folks.

It might be nice to introduce alternative channels of consumption - so an LTS kind of channel could be created?

Each channel could run at a different cadence such that:

  • changes from the continuous channel need human approval to move into the downstream (e.g. stable) channel
  • we periodically try to push changes from the continuous channel to the downstream channel

It's already easy for folks to upgrade daily/weekly/monthly using an upgrade schedule, but it might be nicer to follow a more stable channel.

If we had a more stable channel (e.g. monthly), we could create better documentation on what's changed in that month.
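
As a purely illustrative sketch (the file and fields are invented), each channel could be a ref of the version stream repository with its own promotion cadence:

  channels:
    - name: continuous
      gitRef: master              # moves with every upstream change
    - name: stable
      gitRef: stable              # updated only after human approval
      promotion:
        from: continuous
        schedule: "0 0 1 * *"     # attempt promotion on the 1st of each month
        requiresApproval: true    # a human merges the promotion PR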

What about natively using the Tekton Dashboard for jx3?

Hello everyone,

After explaining it during the last ToC, I'm writing this enhancement: in order to have less to maintain in the jx3 ecosystem, step by step, what about first using the Tekton Dashboard natively instead of jx-ui?

We did it and we are happy with this dashboard; here is a screenshot:

(screenshot: the Tekton Dashboard in use, 2023-03-22)

Of course, we needed to configure lighthouse to automatically point to the Tekton Dashboard instead of the jx one; don't worry, it's possible.

When we changed our UI dashboard to the Tekton one, we obviously had to redo the log persistence part, back then following this old walkthrough and writing our own internal code for it => https://github.com/tektoncd/dashboard/blob/main/docs/walkthrough/walkthrough-logs.md

The great news is that Tekton has finally implemented its own TEP-0117: Tekton Results Logs (tektoncd/community#994):

Tekton Results aims to help users logically group CI/CD workload history and separate out long-term result storage away from the Pipeline controller.

Tekton Results is composed of 2 main components:

  • A queryable gRPC API server backed by persistent storage (see proto/v1alpha2 for the latest API spec).
  • A controller to watch and report TaskRun and PipelineRun updates to the API server.

Thanks to this PR => tektoncd/results#301, which adds support for log persistence outside of the host Kubernetes cluster, Results will be able to store PipelineRun and TaskRun logs in the following persistent locations:

  • File (PVC)
  • S3 (S3-compatible storage)

In order to persist logs with Tekton Results, these are the needed dependencies:

  • Kubernetes version: 1.22.x
  • Tekton Pipeline version: 0.42.0
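
For illustration, a rough sketch of the Results API server configuration for S3-backed log storage, based on tektoncd/results#301 (the exact keys may differ between releases):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: tekton-results-api-configuration
    namespace: tekton-pipelines
  data:
    config: |
      LOGS_API=true
      LOGS_TYPE=S3                 # or "File" for PVC-backed storage
      S3_BUCKET_NAME=pipeline-logs
      S3_REGION=eu-west-1
      S3_ENDPOINT=https://s3.eu-west-1.amazonaws.com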

We would also need to plug lighthouse in differently, because the Tekton Dashboard doesn't handle the PipelineActivity resource that is specific to jx3.

What do you think? @msvticket @ankitm123 @tomhobson @babadofar @rajatgupta24

Enhance JX UI

The JX UI looks great and gives a high-level view of the current status of builds, environments, projects and so on.
As a user, it would be really great to be able to:

  • Configure namespaces (at the moment 'jx' is hardcoded in some places and it is not possible to configure it)
  • Support GHE or a configurable way of specifying Git providers/services (e.g. links/images seem to point towards GitHub only, so they are broken)
  • Have edge cases in builds' completion/status covered (collected builds, old ones, stuck ones, etc.)
  • Check the codebase for potential contributions

Jenkins X 3.x

we are aiming for an initial version of Jenkins X 3.x based on:

by making those capabilities available as a new jx 3.x alpha binary so that we can keep 2.x of Jenkins X stable and iterate quickly on the new 3.x alpha.

During this enhancement proposal I hope we can determine the release criteria to progress from alpha to beta to GA, and how and when features make it into the release.

E.g. we could also roll in these kinds of features too, but depending on how things progress we may wish to defer them to versions after 3.0.x:

Get more out of Kubernetes' Operator Pattern

Currently most of the Jenkins X logic is implemented in the jx binary, which is called everywhere. This results in the following problems:

  • hardly possible to develop/test components in an isolated manner.
  • not easy for the community to contribute.
  • does not support easy replacement of certain components with others.
  • does not offer as many extension points as k8s components usually have naturally when loosely coupled via the k8s API (CRDs/events): currently k8s is mostly used as a DB, with very few controllers.
  • not all processes are modeled using a CRD, so some information can only be derived by joining multiple resources; instead there should be a denormalized CRD for every business process, to support simple k8s API queries and extension.
  • not as resilient as it could be: long-running business processes are implemented as a single piece of blocking code, so on an intermediate/temporary failure or pod termination the whole process eventually fails.
  • cannot be called from a controller efficiently, since long blocking code in a reconcile loop reduces concurrency (it requires decomposition first).

Therefore I propose to establish best practices and an iterative approach to move logic from the jx CLI into k8s controllers following the Operator Pattern.
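
As a purely hypothetical illustration of the denormalized-CRD idea (not an existing Jenkins X resource), a long-running business process such as a promotion could be modeled so that a controller reconciles it step by step and records progress in status, letting the process resume rather than fail after a pod restart:

  apiVersion: example.jenkins-x.io/v1alpha1   # invented API group
  kind: Promotion
  metadata:
    name: myapp-1-2-3-staging
  spec:
    app: myapp
    version: 1.2.3
    environment: staging
  status:
    phase: PullRequestOpen          # written by the controller; each step is
    conditions:                     # observable and queryable via the k8s API
      - type: PullRequestCreated
        status: "True"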

arm64 building

Hi,

I'm trying to build for arm64, but the documentation points to a 404.

Preview with Helmfile

See #26

TL;DR

  • the preview environments are based on Helm 2 and "umbrella charts", so we can add dependencies, but it's not very flexible
  • we'll need to switch to Helm 3 - related to #34
  • Helmfile is a higher-level tool built on top of Helm, which can be used to declaratively define an environment composed of multiple Helm releases. It supports both Helm 2 and Helm 3, and could be used to switch smoothly from one to the other for the preview env (see the sketch below).
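
As a minimal sketch of what a Helmfile-based preview environment could look like (the chart names, versions and PR tag are assumptions for illustration), the application chart and a throwaway database become independent, declaratively ordered Helm releases:

  repositories:
    - name: bitnami
      url: https://charts.bitnami.com/bitnami
  releases:
    - name: preview-db
      namespace: preview-pr-123
      chart: bitnami/postgresql
      version: 12.1.2
    - name: myapp
      namespace: preview-pr-123
      chart: ./charts/myapp
      needs:
        - preview-pr-123/preview-db      # install the database first
      values:
        - image:
            tag: 0.0.0-SNAPSHOT-PR-123   # the image built for this pull request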

Replace jx-git-operator with Argo CD

The release pipeline on the gitops cluster repo (bootjob) has a few issues:

  • it does not truly run in parallel: as the number of namespaces and installed charts grows, the pipeline duration gets dramatically longer
  • a single kubectl apply failure will stop the entire pipeline before applying all valid k8s manifests
  • failure notifications can be sent via slack, but nobody is pinged directly (at least I can't get the direct-message settings to work)
  • the makefile that runs during the release pipeline is spaghetti and not very flexible

We run Argo CD to sync our non-jenkins-x config repo to our preprod environment, and it is very intuitive and flexible. There is a proposal in the Kubernetes slack #jenkins-x-dev channel to replace the above process with Argo CD.

The most basic POC could look something like:

  1. Install the ArgoCD helm chart via terraform, with a lifecycle configuration to ignore all future changes. Normally we would manage helm releases via helmfile, but because we need to bootstrap the cluster and run a first ArgoCD sync, we can use the terraform helm provider:
    resource "helm_release" "argocd_bootstrap" {
      chart            = "argo-cd"
      create_namespace = true
      namespace        = var.namespace
      name             = "argocd"
      version          = "5.5.7"
      repository       = "https://argoproj.github.io/argo-helm"
      values = [
        jsonencode(
          {
            "controller" : {
              "serviceAccount" : {
                "annotations" : {
                  "iam.gke.io/gcp-service-account" : "argocd-${var.cluster_name}@${var.gcp_project}.iam.gserviceaccount.com"
                }
              },
            },
            "repoServer" : {
              "autoscaling" : {
                "enabled" : true,
                "minReplicas" : 2
              },
              "initContainers" : [
                {
                  "name" : "download-tools",
                  "image" : "ghcr.io/helmfile/helmfile:v0.147.0",
                  "command" : [
                    "sh",
                    "-c"
                  ],
                  "args" : [
                    "wget -qO /custom-tools/argo-cd-helmfile.sh https://raw.githubusercontent.com/travisghansen/argo-cd-helmfile/master/src/argo-cd-helmfile.sh && chmod +x /custom-tools/argo-cd-helmfile.sh && mv /usr/local/bin/helmfile /custom-tools/helmfile"
                  ],
                  "volumeMounts" : [
                    {
                      "mountPath" : "/custom-tools",
                      "name" : "custom-tools"
                    }
                  ]
                }
              ],
              "serviceAccount" : {
                "annotations" : {
                  "iam.gke.io/gcp-service-account" : "argocd-${var.cluster_name}@${var.gcp_project}.iam.gserviceaccount.com"
                }
              },
              "volumes" : [
                {
                  "name" : "custom-tools",
                  "emptyDir" : {}
                }
              ],
              "volumeMounts" : [
                {
                  "mountPath" : "/usr/local/bin/argo-cd-helmfile.sh",
                  "name" : "custom-tools",
                  "subPath" : "argo-cd-helmfile.sh"
                },
                {
                  "mountPath" : "/usr/local/bin/helmfile",
                  "name" : "custom-tools",
                  "subPath" : "helmfile"
                }
              ]
            },
            "server" : {
              "autoscaling" : {
                "enabled" : true,
                "minReplicas" : 2
              },
              "ingress" : {
                "enabled" : true,
                "annotations" : {
                  "nginx.ingress.kubernetes.io/backend-protocol" : "HTTPS",
                  "nginx.ingress.kubernetes.io/force-ssl-redirect" : "true",
                  "nginx.ingress.kubernetes.io/ssl-passthrough" : "true"
                },
                "hosts" : [
                  "argocd.${var.apex_domain}"
                ],
                "serviceAccount" : {
                  "annotations" : {
                    "iam.gke.io/gcp-service-account" : "argocd-${var.cluster_name}@${var.gcp_project}.iam.gserviceaccount.com"
                  }
                }
              }
            }
          }
        )
      ] 
    
      set {
        name  = "server.config.configManagementPlugins"
        value = <<-EOT
        - name: helmfile
          init:                          # Optional command to initialize application source directory
            command: ["argo-cd-helmfile.sh"]
            args: ["init"]
          generate:                      # Command to generate manifests YAML
            command: ["argo-cd-helmfile.sh"]
            args: ["generate"]
        EOT
      }
      set {
        name  = "configs.credentialTemplates.https-creds.url"
        value = regex("\\w+://\\w+\\.\\w+", var.jx_git_url)
      }
      set_sensitive {
        name  = "configs.credentialTemplates.https-creds.username"
        value = var.jx_bot_username
      }
      set_sensitive {
        name  = "configs.credentialTemplates.https-creds.password"
        value = var.jx_bot_token
      }
    
      dynamic "set" {
        for_each = var.helm_settings
        content {
          name  = set.key
          value = set.value
        }
    
      lifecycle {
        ignore_changes = all
      }
    }
  2. Use terraform to configure ArgoCD to sync the config-root folder of the dev gitops repo to the dev gke cluster. Maybe we can package this as a separate helm chart called argo-cd-apps or something:
      - apiVersion: argoproj.io/v1alpha1
        kind: ApplicationSet
        metadata:
          name: dev
        spec:
          generators:
          - git:
              repoURL: https://github.com/{{.Values.jxRequirements.cluster.environmentGitOwner}}/{{.Values.jxRequirements.environments.0.repository}}
              revision: HEAD
              directories:
              - path: helmfiles/*
              # - path: config-root/customresourcedefinitions
              # - path: config-root/namespaces/*
          template:
            metadata:
              name: '{{path.basename}}'
            spec:
              project: default
              source:
                repoURL: https://github.com/{{.Values.jxRequirements.cluster.environmentGitOwner}}/{{.Values.jxRequirements.environments.0.repository}}
                targetRevision: HEAD
                path: '{{path}}'
                plugin:
                  env:
                  - name: HELMFILE_USE_CONTEXT_NAMESPACE
                    value: "true"
                  - name: HELM_TEMPLATE_OPTIONS
                    value: --skip-tests
              destination:
                server: https://kubernetes.default.svc
                namespace: '{{path.basename}}'
              syncPolicy:
                automated:
                  prune: true
                  selfHeal: true
                syncOptions:
                - CreateNamespace=true
    • We should probably figure out whether we want to continue rendering kubernetes templates and committing them back to the cluster repo at PR time, or if we should just use argo to sync directly from the helmfile. There's an example bot and a github action that post the output of helmfile diff or argo diff as a PR comment (see the sketch after this list).
  3. After the first sync, ArgoCD manages its own helm chart installation
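
One possible shape for that diff bot, purely as a sketch (it assumes helmfile is already installed on the runner and that the default token may comment on PRs):

  name: helmfile-diff
  on: pull_request
  jobs:
    diff:
      runs-on: ubuntu-latest
      permissions:
        pull-requests: write             # needed to create the PR comment
      steps:
        - uses: actions/checkout@v4
        - name: Render the diff          # helmfile installation step elided
          run: helmfile diff > diff.txt || true
        - uses: actions/github-script@v7
          with:
            script: |
              const fs = require('fs');
              const diff = fs.readFileSync('diff.txt', 'utf8');
              await github.rest.issues.createComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: context.issue.number,
                body: '```\n' + diff + '\n```',
              });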

I have some rough ideas here:
jenkins-x/terraform-google-jx#228
joshuasimon-taulia/helm-argo-cd-apps@4572c61

This is what the demo appset generates in my atlantis project:

(screenshot: the applications generated by the demo ApplicationSet)

When you click on the "namespace" application, the UI drills down into your actual k8s objects:

(screenshot: the Argo CD application detail view)

Feature preview environments

Jenkins X preview environments have always been one of the project's favourite features. The ability to preview an application's changes in an isolated environment before merging to the mainline gives developers faster feedback and reviewers greater confidence that the change does what's expected.

Questions we always get about preview environments are "how do I use a database in my preview environment?" or "how can I test against other microservices?". The advice here is to deploy a copy of what you need into your preview environment, or link to services running in your staging environment.

This has worked great when working on a single application, but often developers are working on a feature that spans multiple microservices. You'd want to preview your changes along with the other microservices for the same feature, which today means you would need to deploy their changes in your preview environment and yours in theirs, manually keeping them up to date as more commits land in the PRs. You don't want to use the staging environment, as you're not sure how the changes behave together, and with trunk-based development you want code merged to mainline to be shippable: if you find an issue in staging, it can prevent other fixes going in until you revert your change.

This issue proposes the concept of "feature preview environments": a super easy way for developers to create a short-term environment, backed by gitops for deployment automation, where multiple microservices can be previewed and tested together from pull request changes across multiple application git repos. Once the feature is ready, an approver would merge and release the microservices together and promote the feature into staging / production.

This would mean Jenkins X supports:

  • application previews (the environment git repo is under the single application's charts/preview directory, as it is today)
  • feature previews (the environment git repo is separate, much like a staging or production one, into which multiple applications can promote a pull request change; see the sketch below)
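
As a purely hypothetical sketch of the second case, the feature environment's git repo could declare the in-flight PR builds of each microservice so they are deployed and tested together (the chart repository, names and version scheme are invented):

  repositories:
    - name: jx3
      url: https://example.com/charts    # hypothetical chart repository
  releases:
    - name: cart-service
      namespace: feature-checkout-v2
      chart: jx3/cart-service
      version: 0.0.0-PR-42-SNAPSHOT      # PR build from the cart-service repo
    - name: payment-service
      namespace: feature-checkout-v2
      chart: jx3/payment-service
      version: 0.0.0-PR-17-SNAPSHOT      # PR build from the payment-service repo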

migration of Jenkins setups to Jenkins X

Hi,
Since most companies are currently using Jenkins (or Jenkins 2), it would be really useful if there were a migration document covering how to migrate from Jenkins and Jenkins 2 to Jenkins X. Let me know whether I need to submit a doc with details for this. Thanks in advance!
