
sg-aks-workshop's Introduction

WORK IN PROGRESS

This repo is a WORK IN PROGRESS.

Cloud Native App Governance + Security Workshop

Hello, and welcome to the workshop. This is a 2-day hands-on workshop focused on setting up AKS along with additional technologies to make it adhere to the governance and security needs of highly regulated customers.

The workshop runs over 2 days and takes an outside-in approach, meaning we start from the outside of the architecture and work our way inwards. It starts with the governance and security decisions that need to be made before a single Azure resource is provisioned. We then focus on decisions that need to be made prior to provisioning the cluster. Next, we provision the cluster and look at how to deploy common components post-provisioning. Once the cluster is configured, the next step is to actually deploy workloads. Finally, once the workloads are deployed, we focus on Day 2 operations: managing and maintaining the cluster and providing observability into it.

End Goal

The end goal is to take you from a Kubernetes setup that is insecure by default to an enterprise-ready configuration that is secure by default. To help understand what that means, please see the following illustrations showing the before and after setups.

Before Picture

Before Configuration

After Picture

After Configuration

Lab Guides - Day 1

  1. Customer Scenario
  2. Security, Governance & Azure Security Setup
  3. Cluster Design
  4. Cluster Pre-Provisioning
  5. Cluster Provisioning
  6. Post-Provisioning
  7. Cost Governance
  8. Deploy App

Lab Guides - Day 2

  1. Deploy App
  2. Day 2 Operations
  3. Service Mesh - Do I need it?
  4. Validate Scenarios
  5. Thought Leadership

Prerequisites

The following are the requirements to start.

Fork the Repo

It is important to fork this repo, not just clone it. You will be creating Personal Access Tokens, which in turn will be used to create SSH keys, and those will be used to make changes to a GitHub repo.
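After forking, clone your fork rather than the upstream repo, so that pushes go to a repository you control. A minimal sketch of the commands; `<your-username>` is a placeholder for your GitHub username:

```shell
# Clone YOUR fork (not Azure/sg-aks-workshop) over HTTPS; git will
# prompt for your username and Personal Access Token when pushing.
git clone https://github.com/<your-username>/sg-aks-workshop.git
cd sg-aks-workshop
```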

Forking a Repository

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Legal Notices

Microsoft and any contributors grant you a license to the Microsoft documentation and other content in this repository under the Creative Commons Attribution 4.0 International Public License, see the LICENSE file, and grant you a license to any code in the repository under the MIT License, see the LICENSE-CODE file.

Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the documentation may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries. The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks. Microsoft's general trademark guidelines can be found at http://go.microsoft.com/fwlink/?LinkID=254653.

Privacy information can be found at https://privacy.microsoft.com/en-us/

Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents, or trademarks, whether by implication, estoppel or otherwise.

Contributors

denniszielke, dstrebel, gkaleta, lgmorand, microsoft-github-operations[bot], microsoftopensource, ms-jasondel, patpicos, raykao, thomq, yokimjd


sg-aks-workshop's Issues

Cluster Provisioning diagram not matching the terraform deployment

Hi all,

The pre-provisioning is setting up the UDR routes, and the cluster provisioning diagram also shows egress traffic going through the Azure Firewall.

But the terraform files don't have that configuration.
As far as I can see, terraform is deploying a cluster with a "Standard Load Balancer" and "VMSS", which implies an SLB with a public IP address. Would the UDR work in that case, given that we would have an LB with a public IP overriding the UDR?
I'm guessing the outbound_type is missing in the terraform config. https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html#outbound_type
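For reference, `outbound_type` is set inside the cluster's `network_profile` block. A minimal sketch (not the workshop's actual terraform; resource names and other attributes are illustrative):

```hcl
resource "azurerm_kubernetes_cluster" "aks" {
  # ... name, location, resource_group_name, default_node_pool, etc.

  network_profile {
    network_plugin = "azure"
    # Route egress through the UDR / Azure Firewall instead of the
    # Standard Load Balancer's public IP (the default is "loadBalancer").
    outbound_type = "userDefinedRouting"
  }
}
```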

https://github.com/Azure/sg-aks-workshop/blob/master/cluster-pre-provisioning/README.md
https://github.com/Azure/sg-aks-workshop/blob/master/cluster-provisioning/README.md
https://docs.microsoft.com/en-us/azure/aks/egress-outboundtype

Regards,
Zarko

Add Qualys for image scanning

Once it's no longer in preview, it could be useful to add a small paragraph in Deploy App > Image Vulnerability Scanning and Management to say that we have "embedded" image scanning within ACR.

If we quote every product on the market except "our" solution... :)

AKS Falco audit rules not working

We have deployed the Falco k8s_audit_rules.yaml section as mentioned in the Cluster-config folder, which is linked to Flux.
After the deployment, it seems only the falco_rules.yaml section is working. We were able to query the log results from Log Analytics as well.

These are some falco rules that are working.

  • Log files were tampered
  • Shell history had been deleted or renamed
  • Privileged container started
  • Error File below / or /root opened for writing
  • Notice A shell was spawned in a container with an attached terminal
  • A shell configuration file has been modified
  • Docker or kubernetes client executed in container
  • Unexpected connection to K8s API Server from container

But for the k8s-audit rules, the most important rules are as follows, and they don't seem to be working.

  1. Detect any attempt to create a namespace outside of a set of known namespaces
  2. Detect any attempt to create a pod in the kube-system or kube-public namespaces
  3. Detect any attempt to create a serviceaccount in the kube-system or kube-public namespaces
  4. Detect any attempt to modify/delete a ClusterRole/Role starting with system
  5. Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
  6. Detect any attempt to create a Role/ClusterRole with wildcard resources or verbs
  7. Detect any attempt to create a Role/ClusterRole that can perform write-related actions
  8. Detect any attempt to create a Role/ClusterRole that can exec to pods
  9. Detect any attempt to create a deployment
  10. Detect any attempt to delete a deployment
  11. Detect any attempt to create a service
  12. Detect any attempt to delete a service
  13. Detect any attempt to create a configmap
  14. Detect any attempt to delete a configmap
  15. Detect any attempt to create a namespace
  16. Detect any attempt to delete a namespace
  17. Detect any attempt to create a service account
  18. Detect any attempt to delete a service account
  19. Detect any attempt to create a cluster role/role
  20. Detect any attempt to delete a cluster role/role
  21. Detect any attempt to create a clusterrolebinding
  22. Detect any attempt to delete a clusterrolebinding

In the validate scenarios,
https://github.com/Azure/sg-aks-workshop/tree/master/validate-scenarios
[screenshot: validated Falco rules]

The rules validated here are the Falco rules, but there isn't any description provided for the audit rules.

The Falco documentation says to enable an audit policy on the master node's kube-apiserver, but in AKS users don't have access to the master node, so how do the Falco audit rules come into action?
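For context, k8s_audit rules only fire when API server audit events actually reach Falco's audit endpoint; if AKS does not let you point the apiserver's audit webhook at Falco, those rules would never trigger, which would match the behavior described above. A sketch of what such a rule looks like, following the conventions of the upstream k8s_audit_rules.yaml (macro and field names are from that file; treat this as illustrative, not the workshop's exact rules):

```yaml
- rule: Create Pod in Kube Namespace
  desc: Detect any attempt to create a pod in the kube-system or kube-public namespaces
  condition: kevt and pod and kcreate and ka.target.namespace in (kube-system, kube-public)
  output: Pod created in a sensitive namespace (user=%ka.user.name pod=%ka.target.name ns=%ka.target.namespace)
  priority: WARNING
  source: k8s_audit   # evaluated against audit events, not syscalls
```

Note the `source: k8s_audit` line: rules without it (like those in falco_rules.yaml) evaluate syscall events from the node, which is why they work even without apiserver audit access.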

variables not set while running terraform plan

Hi,

I am using VS Code to run the TF scripts in the sample. When I initialize the variables below, I get an error that the "export command is not recognized".

export TF_VAR_prefix=$PREFIX
export TF_VAR_resource_group=$RG
export TF_VAR_location=$LOC
export TF_VAR_client_id=$APPID
export TF_VAR_client_secret=$PASSWORD
export TF_VAR_azure_subnet_id=$(az network vnet subnet show -g $RG --vnet-name $VNET_NAME --name $AKSSUBNET_NAME --query id -o tsv)
export TF_VAR_azure_aag_subnet_id=$(az network vnet subnet show -g $RG --vnet-name $VNET_NAME --name $APPGWSUBNET_NAME --query id -o tsv)
export TF_VAR_azure_subnet_name=$APPGWSUBNET_NAME
export TF_VAR_azure_aag_name=$AGNAME
export TF_VAR_azure_aag_public_ip=$(az network public-ip show -g $RG -n $AGPUBLICIP_NAME --query id -o tsv)
export TF_VAR_azure_vnet_name=$VNET_NAME
export TF_VAR_github_organization=Azure # PLEASE NOTE: This should be your github username if you forked the repository.
export TF_VAR_github_token=
export TF_VAR_aad_server_app_id=<ask_instructor>
export TF_VAR_aad_server_app_secret=<ask_instructor>
export TF_VAR_aad_client_app_id=<ask_instructor>
export TF_VAR_aad_tenant_id=<ask_instructor>

If I try to set the variables as below, then when I run terraform plan it shows the variables are not getting set correctly, and the command keeps prompting for all variables from client_id onwards.

$TF_VAR_prefix=$PREFIX
$TF_VAR_resource_group=$RG
$TF_VAR_location=$LOC
$TF_VAR_client_id=$SERVER_APPID
$TF_VAR_client_secret=$PASSWORD
$TF_VAR_azure_subnet_id=$(az network vnet subnet show -g $RG --vnet-name $VNET_NAME --name $AKSSUBNET_NAME --query id -o tsv)
$TF_VAR_azure_aag_subnet_id=$(az network vnet subnet show -g $RG --vnet-name $VNET_NAME --name $APPGWSUBNET_NAME --query id -o tsv)
$TF_VAR_azure_subnet_name=$APPGWSUBNET_NAME
$TF_VAR_azure_aag_name=$AGNAME
$TF_VAR_azure_aag_public_ip=$(az network public-ip show -g $RG -n $AGPUBLICIP_NAME --query id -o tsv)
$TF_VAR_azure_vnet_name=$VNET_NAME
$TF_VAR_github_organization= $GITHUB_ORG_NAME # PLEASE NOTE: This should be your github username if you forked the repository.
$TF_VAR_github_token=$GITHUB_TOKEN
$TF_VAR_aad_server_app_id=$SERVER_APPID
$TF_VAR_aad_server_app_secret=$PASSWORD
$TF_VAR_aad_client_app_id=$CLIENT_APPID
$TF_VAR_aad_tenant_id=$TENANT_ID

Can anyone help resolve this issue?
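The "export is not recognized" error suggests the commands are being run in PowerShell, where `export` does not exist (there, the equivalent would be `$env:TF_VAR_prefix = $PREFIX`). The workshop's snippets assume a POSIX shell such as bash in Azure Cloud Shell or WSL, where `export` marks a variable for inheritance by child processes like terraform. A minimal sketch of that behavior:

```shell
# In bash/sh, `export` makes the variable visible to child processes;
# terraform reads any TF_VAR_* variable from its environment this way.
export TF_VAR_prefix="demo"

# Any child process now sees the variable:
sh -c 'echo "prefix is $TF_VAR_prefix"'
```

Plain `$TF_VAR_prefix=$PREFIX` (the second attempt above) is not an assignment in either shell, which is why terraform keeps prompting.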

New Secret Management options

In the Secrets Management Chapter (https://github.com/Azure/sg-aks-workshop/blob/master/cluster-design/SecretManagement.md) we propose 4 options:

  • Using the plain Kubernetes secrets model
  • Using the plain Kubernetes secrets model but encrypt the data in etcd using Azure KeyVault and the KMS plugin for Azure Key Vault
  • Using plain Kubernetes secrets but storing encrypted values in them, which are only decrypted at runtime
  • Using Azure KeyVault for storing secrets and certificates and mounting them into a Kubernetes volume using the KeyVault FlexVolume driver.

And indeed there is a fifth and recommended option based on managed identities:

https://docs.microsoft.com/en-us/azure/aks/developer-best-practices-pod-security#use-pod-managed-identities

Do you know if there is any issue with this option? If not, I would like to add it to the Secrets Management Chapter.
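For illustration, the pod-managed-identity option is usually wired up with the aad-pod-identity CRDs, roughly as below (names, IDs, and the selector label are placeholders; the shapes follow that project's CRDs):

```yaml
# Hypothetical example: bind a user-assigned managed identity to pods
# labeled aadpodidbinding: demo-app, so they can fetch secrets from
# Key Vault without any credential stored in the cluster.
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: demo-identity
spec:
  type: 0            # 0 = user-assigned managed identity
  resourceID: /subscriptions/<sub-id>/resourcegroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<name>
  clientID: <client-id>
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: demo-identity-binding
spec:
  azureIdentity: demo-identity
  selector: demo-app
```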

CI/CD with Github Actions

  • Add Full deployment of all components
  • Add workflow for cluster deployment only
  • Add CI/CD workflow of our example app
