azure / sg-aks-workshop
Security + Governance Workshop
License: Creative Commons Attribution 4.0 International
Build multi-tenancy into the workshop requirements
Create blue/green cluster deployment scenario
In the Secrets Management chapter (https://github.com/Azure/sg-aks-workshop/blob/master/cluster-design/SecretManagement.md) we propose 4 options:
And indeed there is a fifth, recommended option based on managed identities:
Do you know of any issue with this option? If not, I would like to add it to the Secrets Management chapter.
It would be good to include the personas (dev, operations, networking, security, etc.) in the roles that play a part in the Governance + Security scenario.
An idea would be to document the impact of application logs generating additional cost and how to control this.
In particular, some clusters would benefit from using capacity-based pricing (Capacity Reservations) in Log Analytics:
https://azure.microsoft.com/en-us/updates/azure-monitor-log-analytics-new-capacity-based-pricing-option-is-now-available/
Create a doc that explains in greater detail the design choices that were made.
Hi all,
The pre-provisioning step sets up the UDR routes, and the cluster-provisioning diagram also shows egress traffic going through the Azure Firewall.
But the Terraform files don't have that configuration.
As far as I can see, Terraform deploys a cluster with a Standard Load Balancer and VMSS, which implies an SLB with a public IP address. Would the UDR work in that case, given that the LB's public IP would override the UDR?
I'm guessing outbound_type is missing from the Terraform config. https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html#outbound_type
https://github.com/Azure/sg-aks-workshop/blob/master/cluster-pre-provisioning/README.md
https://github.com/Azure/sg-aks-workshop/blob/master/cluster-provisioning/README.md
https://docs.microsoft.com/en-us/azure/aks/egress-outboundtype
Regards,
Zarko
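If the missing outbound_type is indeed the issue, a minimal sketch of the change follows. This assumes the azurerm provider's kubernetes_cluster resource; the resource and attribute names here are illustrative, not the workshop's actual Terraform code:

```hcl
resource "azurerm_kubernetes_cluster" "aks" {
  # ... existing name, location, node pool, and identity settings ...

  network_profile {
    network_plugin    = "azure"
    load_balancer_sku = "standard"
    # Send egress through the subnet's UDR (and thus the Azure Firewall)
    # instead of creating an SLB public IP for outbound traffic.
    outbound_type = "userDefinedRouting"
  }
}
```

With outbound_type = "userDefinedRouting", AKS expects the cluster subnet to already have a route table pointing 0.0.0.0/0 at the firewall, which matches the pre-provisioning step described above.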
kubernetes_namespace.flux: Creating...
Error: Post http://localhost/api/v1/namespaces: dial tcp 127.0.0.1:80: connectex: No connection could be made because the target machine actively refused it.
on flux.tf line 13, in resource "kubernetes_namespace" "flux":
13: resource "kubernetes_namespace" "flux" {
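A dial to 127.0.0.1:80 usually means the Terraform kubernetes provider has no connection details and is falling back to its localhost default. A sketch of an explicit provider configuration, assuming the AKS cluster resource is named azurerm_kubernetes_cluster.aks (adjust to match the workshop's actual resource name):

```hcl
provider "kubernetes" {
  # Point the provider at the AKS cluster created in the same configuration,
  # using the credentials exported by the azurerm provider.
  host                   = azurerm_kubernetes_cluster.aks.kube_config[0].host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].cluster_ca_certificate)
}
```

The error can also appear when the cluster does not exist yet at plan time; in that case, creating the cluster first (e.g. a targeted apply) before the kubernetes_namespace resources is a common workaround.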
We have deployed the Falco k8s_audit_rules.yaml section as mentioned in the cluster-config folder, which is linked to Flux.
After the deployment, it seems only the falco_rules.yaml section is working. We were able to query the log results from Log Analytics as well.
These are some Falco rules that are working.
But for the k8s-audit rules, the most important rules are as follows, and they don't seem to be working.
In the validate scenarios,
https://github.com/Azure/sg-aks-workshop/tree/master/validate-scenarios
Here the rules validated are Falco rules, but there isn't any description provided for the audit rules.
The Falco documentation says to enable an audit policy on the master node's kube-apiserver, but since users don't have access to the master node in AKS, how do the Falco audit rules come into action?
Use multiple different attack scenarios and show how to alert on those events
Once it is no longer in preview, it could be useful to add a small paragraph in Deploy App > Image Vulnerability Scanning and Management to say that we have "embedded" image scanning within ACR.
If we quote every product on the market except "our" solution... :)
Hi,
I am using VS Code to run the Terraform scripts in the sample. When I initialize the variables below, I get an error that the "export command is not recognized":
export TF_VAR_prefix=$PREFIX
export TF_VAR_resource_group=$RG
export TF_VAR_location=$LOC
export TF_VAR_client_id=$APPID
export TF_VAR_client_secret=$PASSWORD
export TF_VAR_azure_subnet_id=$(az network vnet subnet show -g $RG --vnet-name $VNET_NAME --name $AKSSUBNET_NAME --query id -o tsv)
export TF_VAR_azure_aag_subnet_id=$(az network vnet subnet show -g $RG --vnet-name $VNET_NAME --name $APPGWSUBNET_NAME --query id -o tsv)
export TF_VAR_azure_subnet_name=$APPGWSUBNET_NAME
export TF_VAR_azure_aag_name=$AGNAME
export TF_VAR_azure_aag_public_ip=$(az network public-ip show -g $RG -n $AGPUBLICIP_NAME --query id -o tsv)
export TF_VAR_azure_vnet_name=$VNET_NAME
export TF_VAR_github_organization=Azure # PLEASE NOTE: This should be your github username if you forked the repository.
export TF_VAR_github_token=
export TF_VAR_aad_server_app_id=<ask_instructor>
export TF_VAR_aad_server_app_secret=<ask_instructor>
export TF_VAR_aad_client_app_id=<ask_instructor>
export TF_VAR_aad_tenant_id=<ask_instructor>
If I try to set the variables as below, then when I run terraform plan it shows the variables are not set correctly, and the command keeps prompting for all variables from client_id to the bottom.
$TF_VAR_prefix=$PREFIX
$TF_VAR_resource_group=$RG
$TF_VAR_location=$LOC
$TF_VAR_client_id=$SERVER_APPID
$TF_VAR_client_secret=$PASSWORD
$TF_VAR_azure_subnet_name=$APPGWSUBNET_NAME
$TF_VAR_azure_aag_name=$AGNAME
$TF_VAR_azure_vnet_name=$VNET_NAME
$TF_VAR_github_organization= $GITHUB_ORG_NAME # PLEASE NOTE: This should be your github username if you forked the repository.
$TF_VAR_github_token=$GITHUB_TOKEN
$TF_VAR_aad_server_app_id=$SERVER_APPID
$TF_VAR_aad_server_app_secret=$PASSWORD
$TF_VAR_aad_client_app_id=$CLIENT_APPID
$TF_VAR_aad_tenant_id=$TENANT_ID
Can anyone help me resolve this issue?
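The symptoms suggest the commands are running in PowerShell (the default VS Code terminal on Windows): export is a bash builtin, so PowerShell rejects it, and $TF_VAR_x=... only sets a PowerShell variable, not an environment variable that Terraform can see. One possible fix, assuming PowerShell, is to use the $env: scope (only the first few variables are shown; the rest follow the same pattern):

```powershell
# In PowerShell, environment variables must be set via $env:
# so that child processes such as terraform inherit them.
$env:TF_VAR_prefix         = $PREFIX
$env:TF_VAR_resource_group = $RG
$env:TF_VAR_location       = $LOC
$env:TF_VAR_client_id      = $APPID
$env:TF_VAR_client_secret  = $PASSWORD
```

Alternatively, run the scripts in a bash shell (WSL, Git Bash, or Azure Cloud Shell), where the export lines work as written.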