
Deployment with Kubernetes Engine

    Create a cluster (named bootcamp here as an example) with five n1-standard-1 nodes:

    gcloud container clusters create bootcamp --num-nodes 5 --scopes "https://www.googleapis.com/auth/projecthosting,storage-rw"
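
    Once the cluster is up, a quick check that all five nodes have registered with Kubernetes:

    kubectl get nodes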

    The explain command in kubectl can tell us about the Deployment object.

    kubectl explain deployment

    We can also see all of the fields using the --recursive option.

    kubectl explain deployment --recursive

    You can use the explain command as you go through the lab to help you understand the structure of a Deployment object and understand what the individual fields do.

    kubectl explain deployment.metadata.name

    Create your Deployment object using kubectl create:

    kubectl create -f deployments/auth.yaml
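
    The actual deployments/auth.yaml ships with the lab's sample repository. As a rough sketch of what such a Deployment manifest looks like (the image name and labels here are assumptions, not the lab's exact file):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: auth
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: auth
      template:
        metadata:
          labels:
            app: auth
        spec:
          containers:
          - name: auth
            image: kelseyhightower/auth:1.0.0   # assumed image; use the one referenced in the lab repo
            ports:
            - name: http
              containerPort: 80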

    Verify

    kubectl get replicasets

    View Pods

    kubectl get pods

    Create a Service for the auth Deployment above:

    kubectl create -f services/auth.yaml
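
    Again, the real services/auth.yaml is provided by the lab repository; conceptually it is a Service whose selector matches the labels on the auth Pods (the names here are assumptions):

    kind: Service
    apiVersion: v1
    metadata:
      name: auth
    spec:
      selector:
        app: auth          # must match the labels on the auth Pods
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80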

    Create and expose the frontend Deployment:

    kubectl create secret generic tls-certs --from-file tls/
    kubectl create configmap nginx-frontend-conf --from-file=nginx/frontend.conf

    kubectl create -f deployments/frontend.yaml
    kubectl create -f services/frontend.yaml
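
    The secret and ConfigMap are created before the Deployment because deployments/frontend.yaml mounts them into the nginx container. A rough sketch of what that manifest looks like (the image version and mount paths are assumptions):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: frontend
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
          - name: nginx
            image: nginx:1.9.14              # assumed image version
            ports:
            - name: https
              containerPort: 443
            volumeMounts:
            - name: tls-certs
              mountPath: /etc/tls-certs/     # assumed path; must match frontend.conf
            - name: nginx-frontend-conf
              mountPath: /etc/nginx/conf.d/
          volumes:
          - name: tls-certs
            secret:
              secretName: tls-certs          # the secret created above
          - name: nginx-frontend-conf
            configMap:
              name: nginx-frontend-conf      # the ConfigMap created above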

    Interact with the frontend by grabbing its external IP and curling it.
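
    First find the external IP by listing the frontend Service and reading the EXTERNAL-IP column (it may show pending for a minute while the load balancer is provisioned):

    kubectl get services frontend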

    curl -ks https://<EXTERNAL-IP>

    You can also use the output templating feature of kubectl to use curl as a one-liner:

    curl -ks https://`kubectl get svc frontend -o=jsonpath="{.status.loadBalancer.ingress[0].ip}"`

Scale Deployment

    Now that we have a Deployment created, we can scale it. Do this by updating the spec.replicas field. You can look at an explanation of this field using the kubectl explain command again.

    kubectl explain deployment.spec.replicas

    The replicas field is most easily updated using the kubectl scale command:

    kubectl scale deployment hello --replicas=5

    Verify that there are now 5 hello Pods running:

    kubectl get pods | grep hello- | wc -l

    Now scale back the application:

    kubectl scale deployment hello --replicas=3

Rolling Updates

    To update your Deployment, run the following command:

    kubectl edit deployment hello

    Change the image in the containers section of the Deployment to kelseyhightower/hello:2.0.0.
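
    The edited containers section should then look roughly like this (the container name comes from the existing Deployment; only the image tag changes):

    spec:
      containers:
      - name: hello
        image: kelseyhightower/hello:2.0.0   # updated from 1.0.0

    Save and exit the editor to trigger the rolling update, then see the new ReplicaSet that Kubernetes creates: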

    kubectl get replicaset

    You can also see a new entry in the rollout history

    kubectl rollout history deployment/hello

## Pause a rolling update
    If you detect problems with a running rollout, pause it to stop the update.

    kubectl rollout pause deployment/hello

    Verify the current state of the rollout

    kubectl rollout status deployment/hello

    You can also verify this on the Pods directly:

    kubectl get pods -o jsonpath --template='{range .items[*]}{.metadata.name}{"\t"}{"\t"}{.spec.containers[0].image}{"\n"}{end}'

Resume Rolling Update

    The rollout is paused, which means that some Pods are at the new version and some are at the older version. We can continue the rollout using the resume command.

    kubectl rollout resume deployment/hello

Roll back an update

    Assume that a bug was detected in your new version. Since the new version is presumed to have problems, any users connected to the new Pods will experience those issues.

    You will want to roll back to the previous version so you can investigate and then release a version that is fixed properly.

    Use the rollout command to roll back to the previous version:

    kubectl rollout undo deployment/hello

    Verify the roll back in the history:

    kubectl rollout history deployment/hello

    Finally, verify that all the Pods have rolled back to their previous versions:

    kubectl get pods -o jsonpath --template='{range .items[*]}{.metadata.name}{"\t"}{"\t"}{.spec.containers[0].image}{"\n"}{end}'

Canary deployments

When you want to test a new deployment in production with a subset of your users, use a canary deployment. Canary deployments allow you to release a change to a small subset of your users to mitigate risk associated with new releases.

    Create a canary deployment

    Create a new canary deployment for the new version:

    kubectl create -f deployments/hello-canary.yaml
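
    deployments/hello-canary.yaml is provided by the lab repository. The essential idea is a second Deployment whose Pods carry the same app: hello label but run the new image with a single replica, so the Service that selects app: hello sends only a small fraction of traffic to it. A rough sketch (exact labels, probes, and resources may differ):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-canary
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: hello
      template:
        metadata:
          labels:
            app: hello          # same app label, so the existing Service also selects these Pods
            track: canary
            version: 2.0.0
        spec:
          containers:
          - name: hello
            image: kelseyhightower/hello:2.0.0
            ports:
            - name: http
              containerPort: 80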

    Verify the canary deployment

    You can verify the hello version being served by the request:

    curl -ks https://`kubectl get svc frontend -o=jsonpath="{.status.loadBalancer.ingress[0].ip}"`/version
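
    Assuming the stable hello Deployment still runs 3 replicas and the canary runs 1, roughly one request in four should return 2.0.0. One way to observe the split is to hit the endpoint several times:

    FRONTEND_IP=$(kubectl get svc frontend -o=jsonpath="{.status.loadBalancer.ingress[0].ip}")
    for i in $(seq 1 10); do
      curl -ks https://$FRONTEND_IP/version
    done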

    Canary deployments in production: session affinity

    In this lab, each request sent to the Nginx service had a chance to be served by the canary deployment. But what if you wanted to ensure that a user didn't get served by the canary deployment? A use case could be that the UI for an application changed and you don't want to confuse the user. In a case like this, you want the user to "stick" to one deployment or the other.

    You can do this by creating a Service with session affinity, so the same user is always served by the same version. In the example below the Service is the same as before, but a new sessionAffinity field has been added and set to ClientIP. All clients with the same IP address will have their requests sent to the same version of the hello application:

    kind: Service
    apiVersion: v1
    metadata:
      name: "hello"
    spec:
      sessionAffinity: ClientIP
      selector:
        app: "hello"
      ports:
      - protocol: "TCP"
        port: 80
        targetPort: 80

    Because it is difficult to set up an environment to test this, you don't need to do it here, but you may want to use sessionAffinity for canary deployments in production.
# Blue-green deployment

Rolling updates are ideal because they allow you to deploy an application slowly with minimal overhead, minimal performance impact, and minimal downtime. There are instances where it is beneficial to modify the load balancers to point to that new version only after it has been fully deployed. In this case, blue-green deployments are the way to go.

Kubernetes achieves this by creating two separate deployments; one for the old "blue" version and one for the new "green" version. Use your existing hello deployment for the "blue" version. The deployments will be accessed via a Service which will act as the router. Once the new "green" version is up and running, you'll switch over to using that version by updating the Service.

    The Service

    Use the existing hello Service, but update it so that it has the selector app: hello, version: 1.0.0. The selector will match the existing "blue" deployment, but it will not match the "green" deployment, because the green deployment uses a different version.
    Update the Service:

    kubectl apply -f services/hello-blue.yaml
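
    services/hello-blue.yaml comes from the lab repository; conceptually it is the same hello Service with version: 1.0.0 added to the selector, along these lines:

    kind: Service
    apiVersion: v1
    metadata:
      name: hello
    spec:
      selector:
        app: hello
        version: 1.0.0      # route only to the "blue" Pods
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80

    services/hello-green.yaml, applied later to switch traffic over, would be identical except that the selector uses version: 2.0.0.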

    Updating using Blue-Green Deployment
    In order to support a blue-green deployment style, we will create a new "green" deployment for our new version. The green deployment updates the version label and the image path.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-green
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: hello
      template:
        metadata:
          labels:
            app: hello
            track: stable
            version: 2.0.0
        spec:
          containers:
          - name: hello
            image: kelseyhightower/hello:2.0.0
            ports:
            - name: http
              containerPort: 80
            - name: health
              containerPort: 81
            resources:
              limits:
                cpu: 0.2
                memory: 10Mi
            livenessProbe:
              httpGet:
                path: /healthz
                port: 81
                scheme: HTTP
              initialDelaySeconds: 5
              periodSeconds: 15
              timeoutSeconds: 5
            readinessProbe:
              httpGet:
                path: /readiness
                port: 81
                scheme: HTTP
              initialDelaySeconds: 5
              timeoutSeconds: 1

    Create the green deployment:

    kubectl create -f deployments/hello-green.yaml

    Once you have a green deployment and it has started up properly, verify that the current version of 1.0.0 is still being used:

    curl -ks https://`kubectl get svc frontend -o=jsonpath="{.status.loadBalancer.ingress[0].ip}"`/version

    Update the service to point to the new version:

    kubectl apply -f services/hello-green.yaml

    Once the service is updated, the "green" deployment is used immediately. You can now verify that the new version is always being used.

    curl -ks https://`kubectl get svc frontend -o=jsonpath="{.status.loadBalancer.ingress[0].ip}"`/version
