
blackbelt-aks-hackfest's Introduction

Azure Container Hackfest

Delivering modern cloud native applications with open source technologies on Azure

NOTE: We have moved this content over to a new and updated repository https://github.com/Azure/kubernetes-hackfest

Overview

This workshop will guide you through migrating an application from "on-premises" to containers running in Azure Kubernetes Service (AKS).

The labs are based upon a node.js application that allows for voting on the Justice League Superheroes (with more options coming soon). Data is stored in MongoDB.

Note: These labs are designed to run on a Linux CentOS VM running in Azure (jumpbox) along with Azure Cloud Shell. They can potentially be run locally on a Mac or Windows machine, but the instructions are not written for that experience, i.e., "You're on your own."

Note: Since we are working on a jumpbox, Copy and Paste work a bit differently in the terminal: use Shift+Ctrl+C for Copy and Shift+Ctrl+V for Paste. Outside of the terminal, Copy and Paste behave as expected using Ctrl+C and Ctrl+V.

Lab Guides - Day 1

  1. Setup Lab environment
  2. Run app locally to test components
  3. Create Docker images for apps and push to Azure Container Registry (ACR Build)
  4. Create Docker images for apps and push to Azure Container Registry
  5. Create an Azure Kubernetes Service (AKS) cluster
  6. Deploy application to Azure Kubernetes Service
  7. Kubernetes UI Overview
  8. Operational Monitoring and Log Management
  9. Application and Infrastructure Scaling
  10. Moving your data services to Azure PaaS (CosmosDB)
  11. Update and Deploy New Version of Application
  12. Upgrade an Azure Kubernetes Service (AKS) cluster

Lab Guides - Day 2

These labs can be completed in any order.

  1. CI/CD Automation
  2. Kubernetes Ingress Controllers
  3. Kubernetes InitContainers
  4. Azure Service Broker
  5. Persistent Storage
  6. Azure Container Instances and ACI Connector
  7. Kubernetes Stateful Sets (coming soon)
  8. Secrets and ConfigMaps (coming soon)
  9. Helm Charts deep dive (coming soon)
  10. Troubleshooting and debugging (coming soon)
  11. RBAC and Azure AD integration (coming soon)

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

License

This software is covered under the MIT license. You can read the license here.

This software contains code from Heroku Buildpacks, which are also covered by the MIT license.

This software contains code from Helm, which is covered by the Apache v2.0 license.

You can read the third-party software licenses in the Third-Party Licenses file.

blackbelt-aks-hackfest's People

Contributors

adriantodorov, ahmedkhamessi, chzbrgr71, clarenceb, desreela, dstrebel, felickz, heoelri, jontreynes, jschluchter, kvnloo, lastcoolnameleft, mathieu-benoit, microsoft-github-policy-service[bot], microsoftopensource, msftgits, naseemkullah, praveenanil, raykao, snpdev


blackbelt-aks-hackfest's Issues

Test HOL:9

Moving your data services to Hosted Data Solutions

Step 10 needs more beer.

https://github.com/Azure/blackbelt-aks-hackfest/blob/master/linux-container-workshop/hol-content/10-cluster-upgrading.md

This step says to use az aks get-versions, which doesn't work as written:

$ az aks get-versions
az aks get-versions: error: the following arguments are required: --location/-l
usage: az aks get-versions [-h] [--verbose] [--debug]
                           [--output {json,jsonc,table,tsv}]
                           [--query JMESPATH] --location LOCATION

I suspect it wants us to use az aks get-upgrades, except the page never explains how to set $AKS_CLUSTER_NAME or $RESOURCE_GROUP_NAME.

Reusing the vars from the earlier steps, I guessed at:

NAME=$(az group list | jq '.[0]."name"' -r)
CLUSTER_NAME="${NAME//_}"
AKS_CLUSTER_NAME=$CLUSTER_NAME
RESOURCE_GROUP_NAME=$NAME

Either the page needs to use the old vars and explain how to set them, or it should show how to set the new vars used on that page.
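The second option could be sketched like this (a hypothetical sketch, not the lab's actual text; it assumes the single-resource-group convention from the earlier labs, with a sample value standing in for the live az group list output):

```shell
# Hypothetical sketch: derive the names as the earlier labs do, then query upgrades.
# The sample NAME below stands in for: az group list | jq -r '.[0].name'
NAME="ODL_aks-lab_123"
AKS_CLUSTER_NAME="${NAME//_}"      # strip underscores: AKS cluster names cannot contain them
RESOURCE_GROUP_NAME="$NAME"
echo "$AKS_CLUSTER_NAME / $RESOURCE_GROUP_NAME"
# az aks get-upgrades -n "$AKS_CLUSTER_NAME" -g "$RESOURCE_GROUP_NAME" --output table
```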

Page also needs more beer. Amount consumed while writing this page was insufficient leading to inconsistencies.

Error: {ClusterServiceClassExternalName:"azure-cosmosdb-mongo-account"} does not exist

I am facing an error while applying the yaml file with the Service Instance for the OSBA lab.

Even after installing osba with --set modules.minStability=EXPERIMENTAL and having the class name as azure-cosmosdb-mongo-account, getting the following error:

kubectl apply -f heroes-cosmosdb.yaml

Error from server (Forbidden): error when creating "heroes-cosmosdb.yaml": serviceinstances.servicecatalog.k8s.io "heroes-cosmosdb-instance" is forbidden: ClusterServiceClass {ClusterServiceClassExternalName:"azure-cosmosdb-mongo-account"} does not exist, cannot figure out the default ClusterServicePlan.

And azure-cosmos-mongo-account is not listed in the 'svcat get classes' or 'svcat get plans' output either. The AKS cluster is v1.9.6.

Superfluous comment?

In the HOL 2 section "Tag images with ACR server and repository", the first line of code is:

# Be sure to replace the login server value

Since the $ACR_SERVER variable is set in the prior step, this comment is not necessary, right?

I'll submit a PR to remove the comment, upon confirmation.

Ingress Controllers Lab - Helm ingress install fails on AKS due to RBAC

In the Ingress Controllers lab, the following Helm command:

helm install --name ingress stable/nginx-ingress --namespace kube-system

fails with the error:

Error: release ingress failed: clusterroles.rbac.authorization.k8s.io "ingress-nginx-ingress" is forbidden:.......

After doing a bit of research found that this is a known issue: rbitia/aci-demos#74

A workaround that seems to work is to set the RBAC properties:

helm install stable/nginx-ingress --name ingress --namespace kube-system --set rbac.create=false

I would recommend adding a note regarding this in the lab, as it will be helpful in case someone faces the same issue.

Use --password-stdin

For users using their own jumpbox, on the HOL #2 step:

docker login --username $ACR_USER --password $ACR_PWD $ACR_SERVER

... they may receive the following message:

WARNING! Using --password via the CLI is insecure. Use --password-stdin.

The solution is to write the password to a text file (e.g. ~/acr-pw.txt) and then use this alternative login command:

 cat ~/acr-pw.txt | docker login $ACR_SERVER -u $ACR_USER --password-stdin
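An alternative that avoids writing the password to disk at all (a sketch; the variable names match those set earlier in the lab) is to pipe it straight from the environment variable. printf is used rather than echo so no trailing newline is added:

```shell
# Sketch: pipe the password from the environment instead of a file.
# ACR_PWD here is a placeholder value; in the lab it is set in an earlier step.
ACR_PWD="example-password"
printf '%s' "$ACR_PWD" | wc -c    # printf adds no trailing newline: byte count equals password length
# printf '%s' "$ACR_PWD" | docker login "$ACR_SERVER" -u "$ACR_USER" --password-stdin
```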

Is this deserving of a PR? Perhaps it's a bit out of scope.

Test HOL:4

Deploy the Superhero Ratings App to AKS

Test HOL:6

Working with Azure Kubernetes Service Cluster Scaling

AKS version not available

I'm looking at the AKS versions available and see that 1.11.2 is not available, at least not in the major US regions: eastus, westus, centralus.

Possibly we want to use the latest one. I will use 1.12.5 and let you know whether the rest of the lab goes okay.

bash-4.4# az aks get-versions --location eastus --output table
KubernetesVersion    Upgrades
-------------------  ------------------------
1.12.5               None available
1.12.4               1.12.5
1.11.7               1.12.4, 1.12.5
1.11.6               1.11.7, 1.12.4, 1.12.5
1.10.12              1.11.6, 1.11.7
1.10.9               1.10.12, 1.11.6, 1.11.7
1.9.11               1.10.9, 1.10.12
1.9.10               1.9.11, 1.10.9, 1.10.12
bash-4.4# az aks get-versions --location westus --output table
KubernetesVersion    Upgrades
-------------------  ------------------------
1.12.5               None available
1.12.4               1.12.5
1.11.7               1.12.4, 1.12.5
1.11.6               1.11.7, 1.12.4, 1.12.5
1.10.12              1.11.6, 1.11.7
1.10.9               1.10.12, 1.11.6, 1.11.7
1.9.11               1.10.9, 1.10.12
1.9.10               1.9.11, 1.10.9, 1.10.12
bash-4.4# az aks get-versions --location centralus --output table
KubernetesVersion    Upgrades
-------------------  ------------------------
1.12.5               None available
1.12.4               1.12.5
1.11.7               1.12.4, 1.12.5
1.11.6               1.11.7, 1.12.4, 1.12.5
1.10.12              1.11.6, 1.11.7
1.10.9               1.10.12, 1.11.6, 1.11.7
1.9.11               1.10.9, 1.10.12
1.9.10               1.9.11, 1.10.9, 1.10.12

Lab 03 has a hard-coded location

The 3rd hackfest lab has a hard-coded location (-l eastus); perhaps add a LOCATION=xxx variable, similar to the approach used for the resource group and cluster name?
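A sketch of that suggestion (the LOCATION variable and the eastus default are assumptions, mirroring the existing NAME/CLUSTER_NAME convention):

```shell
# Hypothetical parameterization: default to eastus unless the user overrides it
LOCATION="${LOCATION:-eastus}"
echo "Using location: $LOCATION"
# az aks create -n "$CLUSTER_NAME" -g "$NAME" -c 2 --generate-ssh-keys -l "$LOCATION"
```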

OSBA Lab should state min K8s version

The lab instructions should state at the top, "Service Catalog currently works with Kubernetes version 1.9.0 and higher, so you will need a Kubernetes cluster that is version 1.9.0 or higher."

Kubernetes dashboard can be viewed using `az aks browse` to simplify instructions

In labs/day1-labs/05-kubernetes-ui.md the suggested way to view the Kubernetes dashboard is to get the cluster credentials (for kubectl) then run kubectl proxy and finally browse to a long URL.

You can just use az aks browse -n $CLUSTER_NAME -g $NAME to achieve the same thing in one command. Is there any particular reason not to change the instructions to say this?

Switch to B-series VM size?

Now that AKS has B-series VM support, should we change HOL 3 to use a B-series VM?

Otherwise the more expensive Standard_DS1_v2 default is used.

I imagine Standard_B1ms would be fine, but perhaps Standard_B1s would suffice (see B-series VM options). I have not tested.

The new command line would be along the lines of...

az aks create -n $CLUSTER_NAME -g $NAME -c 2 -k 1.7.7 --node-vm-size=Standard_B1ms --generate-ssh-keys -l eastus

Thoughts?

Site does not render in IE

Works in Safari, Chrome for Mac, Chrome for PC, and Edge, but not in IE version 11.192.16299.0 on Windows 10.

Day 1, Lab 2: Docker user-defined networks provide DNS so you don't need to use IP addresses

In labs/day1-labs/02-dockerize-apps.md a user-defined bridge network is created with a specific subnet and then the 3 docker containers are started using IPs like so:

docker network create --subnet=172.18.0.0/16 my-network
docker run -d --name db --net my-network --ip 172.18.0.10 -p 27017:27017 rating-db
docker run -d --name api -e "MONGODB_URI=mongodb://172.18.0.10:27017/webratings" --net my-network --ip 172.18.0.11 -p 3000:3000 rating-api
docker run -d --name web -e "API=http://172.18.0.11:3000/" --net my-network --ip 172.18.0.12 -p 8080:8080 rating-web

Using IPs is both unnecessary and confusing for readers.

You can achieve the same thing by using the container names instead of IPs:

docker network create my-network
docker run -d --name db --net my-network -p 27017:27017 rating-db
docker run -d --name api -e "MONGODB_URI=mongodb://db:27017/webratings" --net my-network -p 3000:3000 rating-api
docker run -d --name web -e "API=http://api:3000/" --net my-network -p 8080:8080 rating-web

User-defined networks in Docker provide built-in DNS so you don't need to refer to containers by their IPs.
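The same name-based wiring could equally be expressed as a Compose file (a hypothetical sketch, not part of the lab; on the default Compose network, each service name doubles as a DNS name):

```yaml
# Hypothetical docker-compose.yml sketch: services reach each other by name, no fixed IPs.
version: "3"
services:
  db:
    image: rating-db
    ports:
      - "27017:27017"
  api:
    image: rating-api
    environment:
      - MONGODB_URI=mongodb://db:27017/webratings
    ports:
      - "3000:3000"
  web:
    image: rating-web
    environment:
      - API=http://api:3000/
    ports:
      - "8080:8080"
```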

acr configuration needs to be easily copy/paste-able

It should look more like this, with a semicolon at the end of each assignment so the whole block is copy-pasteable:


# set these values to yours
ACR_SERVER=<fill_acr_server>;
ACR_USER=<fill_acr_user>;
ACR_PWD=<fill_acr_pwd>;

docker login --username $ACR_USER --password $ACR_PWD $ACR_SERVER


docker --build-arg tags failing push to ACR

For some reason I'm getting the same issue as Azure/azure-cli#8305 when building the Dockerfile with build-arg tags. If I simply take out the build args, it is able to build and push.

This is at step
https://github.com/Azure/blackbelt-aks-hackfest/blob/master/labs/day1-labs/02-dockerize-apps(alt-acr-build).md
When running:

az acr build --registry $ACR_NAME --build-arg BUILD_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ") --build-arg VCS_REF=$(git rev-parse --short HEAD) --build-arg IMAGE_TAG_REF=v1 --image azureworkshop/rating-web:v1 .

Version 1.7.7 in Lab 03 of Day 1 no longer supported in az cli

Using:

az aks create -n $CLUSTER_NAME -g $NAME -c 2 -k 1.7.7 --generate-ssh-keys -l $LOCATION

Now gives the following message:

Operation failed with status: 'Bad Request'. Details: RBAC feature is not supported with the selected Kubernetes version. The version of Kubernetes must be 1.8.x or higher

AKS resource group needs to be created in an AKS region

https://github.com/Azure/blackbelt-aks-hackfest/blob/master/linux-container-workshop/hol-content/02-dockerize-apps.md

Need to call out that when az group create ... is run, it must target a region that supports AKS; otherwise you will get an error like this:

$ az aks create -n $NAME -g $NAME -c 2 -k 1.7.7 --generate-ssh-keys
The provided location 'westus' is not available for resource type 'Microsoft.ContainerService/managedClusters'. List of available regions for the resource type is 'eastus,westeurope,centralus,canadacentral,canadaeast,westus2'.

Fix and Update Lab 7 - Add Monitoring to an Azure Kubernetes Service Cluster

Need contributions and help here, anyone would like to jump on this one?

Since PR #108, and as explained there, "Lab 7 - Add Monitoring to an Azure Kubernetes Service Cluster" is not working well now. Data is not properly populated in the Grafana dashboards.

So far, here is what has been found, and some thoughts:

  • The lab is using old versions:
    • Prometheus Helm chart 4.6.13 (12/2017); moreover, Prometheus itself moved from v1 to v2 with breaking changes.
    • Grafana Helm chart 0.5.1 (12/2017)
  • For this specific hackfest content, do we want to use the monitoring AKS addon (Azure Monitor) instead? It can be enabled at creation or afterward, and it's pretty straightforward: https://docs.microsoft.com/en-us/azure/monitoring/monitoring-container-health
  • We are seeing more and more of the Prometheus Operator...

Test HOL:3

Azure Kubernetes Service (AKS) Deployment

Test HOL:7

Add Monitoring to an Azure Kubernetes Service Cluster

VM is missing jq command

/linux-container-workshop/hol-content/05-kubernetes-ui.md

tells the user to use NAME=$(az group list | jq '.[0]."name"' -r)

But the VM does not have jq installed

sudo yum install jq is one option, or it could be pre-installed on the VM image.
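For reference, a minimal sketch of what that jq invocation extracts, with sample JSON standing in for the live az group list output (the resource group name below is made up):

```shell
# Sketch: the lab's jq filter applied to a sample of `az group list` JSON
json='[{"name":"ODL_aks_123","location":"eastus"}]'
NAME=$(printf '%s' "$json" | jq '.[0]."name"' -r)
echo "$NAME"    # -> ODL_aks_123
```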

Test HOL:8

Upgrade an Azure Container Service (AKS) cluster

OSBA Cosmos Class Missing

In the OSBA lab using Cosmos, the class azure-cosmos-mongo-db is used. It doesn't look like that class exists any longer, as running apply results in:

Error from server (Forbidden): error when creating "heroes-cosmosdb.yaml": serviceinstances.servicecatalog.k8s.io "heroes-cosmosdb-instance" is forbidden: ClusterServiceClass {ClassExternalName:"azure-cosmos-mongo-db"} does not exist, cannot figure out the default ClusterServicePlan.
I ran ./svcat get classes and azure-cosmos-mongo-db wasn't listed.

CI/CD Brigade Failure

Two of us at SNP (@jayachandralingam and I) independently completed the CI/CD Brigade lab, but in the end our web apps did not update.

We could see the Brigade worker and jobs running and the webhook invoked in GitHub. We installed Kashti to get better visibility into the Brigade builds. In the Kashti dashboard we see the GitHub events for the project, but each in a failed state. In the Build page, it reads "There are currently no jobs inside this build."

Any thoughts as to what may be the cause?
