k8s-pipeline-library

Description

Jenkins shared pipeline library to be used for deployment in Kubernetes clusters.

Documentation

Deployment in k8s - spec

Main global steps

  1. Generic entry point for Jenkinsfile - the generic entry point for the continuous delivery pipelines (see the Jenkinsfile sketch after this list). See spec
  2. UPP entry point for Jenkinsfile - the main entry point for Continuous Delivery in UPP clusters. See spec
  3. PAC entry point for Jenkinsfile - the main entry point for Continuous Delivery in PAC clusters. See spec
  4. Install Helm chart - this is the step used by the generic job for installing a Helm chart. See spec
  5. Build and deploy in team envs - this is the step that handles the building and deployment into the team envs. See spec
  6. Build and deploy in upper envs - this is the step that handles the building and deployment into the upper envs (staging and prod). See spec
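
As a minimal sketch of how a repository hooks into one of these entry points, assuming the shared library is registered in Jenkins under the name k8s-pipeline-library (the entry-point step name below is a placeholder; use the global step actually defined in the library's vars folder):

      // Jenkinsfile - minimal sketch, not the library's verbatim API
      @Library('k8s-pipeline-library') _   // the trailing '_' is just a placeholder for the annotation to attach to

      // Placeholder name: call the generic, UPP or PAC entry-point step defined by the library
      genericEntryPoint()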

Utility global steps

  1. Diff & sync 2 envs - this is the main step used in the Diff & Sync 2 k8s envs job. This job can be used to keep the team envs in sync with the prod ones, or when provisioning a new environment.
  2. Update cluster using the provisioner - this is the main step used in the Update a Kubernetes cluster using the Provisioner job. This job can be used for updating the CoreOS version in a cluster.
  3. Update Dex configs - this is the main step used in the Update Dex Config job that updates the Dex configurations in multiple clusters at once. For more information on Dex, see Content auth.

Helm integration

On every helm install/upgrade the following values are automatically inserted:

  1. region: the region where the targeted cluster lives. Example: eu, us
  2. target_env: the name of the environment as defined in the Environment registry. Example: k8s, prod
  3. __ext.target_cluster.sub_domain: the DNS subdomain of the targeted cluster. This is computed from the mapped API server declared in the EnvsRegistry. Example: upp-prod-publish-us, pac-prod-eu
  4. For every cluster in the targeted environment, the URLs are exposed with the values cluster.${cluster_label}.url. Example: --set cluster.delivery.url=https://upp-k8s-dev-delivery-eu.ft.com --set cluster.publishing.url=https://upp-k8s-dev-publish-eu.ft.com

NOTE: in the future all these values will be moved under the __ext namespace to avoid clashes with other developer introduced values.
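
As a rough illustration of how these values end up on the helm command line, here is a Groovy sketch; the helper below is hypothetical, not the library's actual code:

      // Hypothetical helper: turns the automatically injected values into --set flags
      // for a helm install/upgrade call.
      String helmSetArgs(String region, String targetEnv, Map<String, String> clusterUrls) {
          List<String> args = ["--set region=${region}", "--set target_env=${targetEnv}"]
          clusterUrls.each { label, url -> args << "--set cluster.${label}.url=${url}" }
          return args.join(" ")
      }

      // helmSetArgs("eu", "k8s", [delivery: "https://upp-k8s-dev-delivery-eu.ft.com"])
      // -> "--set region=eu --set target_env=k8s --set cluster.delivery.url=https://upp-k8s-dev-delivery-eu.ft.com"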

What to do when adding a new environment

When provisioning a new environment, Jenkins needs to "see" it in order to be able to deploy to it. Here are the steps required to make that happen.

  1. Create a new branch for this repository

  2. Add the definition of the new environment in the EnvsRegistry.groovy. Here's an example:

          Environment prod = new Environment()
          prod.name = Environment.PROD_NAME
          prod.slackChannel = "#k8s-pipeline-notif"
          prod.regions = ["eu", "us"]
          prod.clusters = [Cluster.DELIVERY, Cluster.PUBLISHING, Cluster.NEO4J]
          prod.clusterToApiServerMap = [
              ("eu-" + Cluster.DELIVERY)  : "https://upp-prod-delivery-eu-api.ft.com",
              ("us-" + Cluster.DELIVERY)  : "https://upp-prod-delivery-us-api.ft.com",
              ("eu-" + Cluster.PUBLISHING): "https://upp-prod-publish-eu-api.ft.com",
              ("us-" + Cluster.PUBLISHING): "https://upp-prod-publish-us-api.ft.com",
              ("eu-" + Cluster.NEO4J): "https://upp-prod-neo4j-eu-api.ft.com",
              ("us-" + Cluster.NEO4J): "https://upp-prod-neo4j-us-api.ft.com"
          ]
    

    Here are the characteristics of an Environment:

    1. It has a name and a notifications slack channel.
    2. It might be spread across multiple AWS regions.
    3. In each region, it might have multiple clusters (stacks).
    4. For each cluster (stack) we must define the URL of the K8S API server.

    The name of the environment is very important, as it is correlated with the environment names from the Helm chart app-configs folder and with the ones in the Github releases for team environments. This is why the name must contain only alphanumeric characters; - and _ are not allowed in the name. Valid names are, for example: k8s, xp, myteam, rjteam

  3. Don't forget to add the newly defined environment to the envs list in the EnvsRegistry class.

  4. Define in Jenkins the credentials needed for accessing the K8S API servers. Jenkins needs one key per API server in the environment, so you must create one Jenkins credential per cluster, of type Secret Text, with the following id:

    1. ft.k8s-auth.${full-cluster-name}.token (example ft.k8s-auth.upp-k8s-dev-delivery-eu.token) -> this is the token of the Jenkins service account from the Kubernetes cluster (see the sketch after this list for how it is consumed).
  5. Define in Jenkins the credentials with the TLS assets of the cluster. These are used when updating the Kubernetes cluster with the Update a Kubernetes cluster Jenkins job. The credential must be named ft.k8s-provision.${full-cluster-name}.credentials, for example ft.k8s-provision.upp-k8s-dev-delivery-eu.credentials. The type must be Secret file and the zip should be fetched from LastPass.

  6. Push the branch and create a Pull Request.

  7. After merge, add the new environment to the Jenkins jobs:

    1. Deploys a helm chart from the upp repo
    2. Update a Kubernetes cluster
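
Relating to step 4 above, a minimal sketch of how pipeline code could consume such a per-cluster Secret Text credential (the cluster name and API server URL are taken from the prod example above; adjust them for your environment):

      // Sketch: bind the per-cluster token and use it for an authenticated kubectl call
      String fullClusterName = "upp-prod-delivery-eu"
      withCredentials([string(credentialsId: "ft.k8s-auth.${fullClusterName}.token", variable: 'K8S_TOKEN')]) {
          sh 'kubectl --token=$K8S_TOKEN --server=https://upp-prod-delivery-eu-api.ft.com get nodes'
      }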

Developer documentation

Intellij Idea project setup

Steps:

  1. Make sure you have Groovy language support plugin enabled in Intellij
  2. Import the project from the maven POM: File -> New -> Project from existing sources -> go to project folder & select -> choose External model -> Maven
  3. Set vars and intellij-gdsl as Source folders

With this setup you will have completion in Groovy files for out-of-the-box functions injected by pipeline plugins in Jenkins. This might help you in some cases.

Pipeline development tips & tricks

How do I know what functions are available OOTB?

You have 2 options:

  1. Check out the pipeline syntax page. Go to any pipeline job & click "Pipeline Syntax". Here is a link to such a page. This page generates snippets that you can paste into your script.
  2. Use Intellij with GDSL (see the setup above). This might not always be useful, as the parameters are maps.

Recommendations

  1. Prefer using docker images that you can control over Jenkins plugins. Depending on Jenkins plugins makes Jenkins hard to upgrade. The pipeline steps support running logic inside docker containers, so they are recommended, especially if you need command line tools. As an example, we're using the k8s-cli-utils docker image for making kubectl or helm calls instead of relying on a plugin that installs these utilities on the Jenkins slaves (see the first sketch after this list).
  2. Always declare types and avoid using “def”.
  3. Use the @NonCPS annotation for methods that use objects that are not serializable. See the docs here and a practical example in this stackoverflow question. A small sketch follows after this list.
  4. In order to test some code you can “Replay” a job run and place the code changes directly in the window.
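
To illustrate the first recommendation, here is a minimal sketch of running a command-line tool from a container inside a pipeline; the image coordinates and tag are assumptions, use whatever k8s-cli-utils image the team publishes:

      node {
          // Run kubectl from the k8s-cli-utils image instead of installing it on the Jenkins slave
          docker.image('coco/k8s-cli-utils:latest').inside {
              sh 'kubectl version --client'
          }
      }

And for the @NonCPS recommendation, a small sketch that keeps non-serializable map iteration out of the CPS-transformed pipeline code:

      // Map entry iterators are not serializable, so the traversal is confined to a
      // @NonCPS method whose result (a plain String) is safe to use from the pipeline.
      @NonCPS
      String describe(Map<String, String> values) {
          return values.collect { k, v -> "${k}=${v}" }.join(", ")
      }

      echo describe([region: "eu", target_env: "k8s"])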

How to test and roll out pipeline changes

See How to test and roll out pipeline changes

Pipeline integration points

The pipeline has several integration points to achieve its goals. For this it keeps the secret data (API keys, usernames & passwords) as Jenkins credentials. The whole list of credentials set in Jenkins can be accessed here.

See Pipeline Integration points for details.

Used Jenkins plugins

The pipeline code tries to keep the number of plugins it uses to a minimum, and uses docker images for command line tools. The plugins used by the pipeline code are:

  1. HTTP Request Plugin for making HTTP requests from various integrations, like Slack.
  2. Lockable Resources Plugin for updating the index.yaml file of the Helm repository (see the sketch after this list).
  3. Mask Passwords Plugin for masking sensitive input data in the logs.
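
As an illustration of how the Lockable Resources plugin is used for this, a sketch (the resource name, chart directory and repository URL are placeholders, not the pipeline's actual values):

      // Serialise builds that rewrite the Helm repository index
      lock(resource: 'helm-repo-index') {
          sh 'helm repo index . --merge index.yaml --url https://my-helm-repo.example.com'
      }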

Permissive script security

By default the Groovy pipelines run in Jenkins in a Sandbox that limits the methods and objects you can use in the code. This is done by the Script Security Plugin. Since this is annoying and devs might not know how to overcome this, we decided to disable this behavior by using the Permissive Script Security Plugin.

How to create a job for a new repo

When a new app is created that needs Continuous Delivery on our Kubernetes clusters, you can enable this by creating a new multibranch job in Jenkins following these steps:

  1. Log in to Jenkins using the AD credentials
  2. Go to the appropriate folder for the platform. For UPP go to UPP: Pipelines for application deployments and for Pac go to PAC: Pipelines for application deployments
  3. Click the “New Item” link on the left side
  4. A template job is already defined in Jenkins, so in the new item dialog fill the following:
    • Item name: {replace_with_app_name}-auto-deploy
    • copy from: k8s-deployment/deploy-pipeline-template
  5. Configure the job: fill in the display name with “{replace-with-app-name} dev and release pipeline” and the 2 Git branch sources.
  6. Click save

How to trigger the pipeline job

If you've just created a new branch, a new tag, or a new commit on a branch that should be picked up by Jenkins, you have 2 options:

  1. Wait for Jenkins to pick it up. It is set to scan all repos every 2 minutes.
  2. Trigger the scanning of the multibranch pipeline manually. Go to the multibranch pipeline job (for example the aggregate-concept-transformer job) and click Scan Multibranch Pipeline Now on the left-hand side of the screen.
