Make sure you set up the Installation dependencies first, before moving on to the Deployment dependencies.
- AWS CLI - Install the AWS CLI - Instructions
- AWS CLI - Configure the AWS CLI with your account credentials - Instructions
- Terraform CLI - Install the Terraform CLI - Instructions
- Helm - Install Helm - Instructions
- From 1.0.0 | Configure locals.tf and variables.tf to your desired values
- From 1.0.0 | (First time only) Run
  ```shell
  terraform init
  ```
- From 1.0.0 | Run
  ```shell
  AWS_PROFILE=*named_profile* terraform apply
  ```
- From 1.0.0 | Run
  ```shell
  aws eks get-token --cluster-name *cluster_name* --profile *named_profile*
  ```
- From 4.0.0 | Run
  ```shell
  ./components.sh bringup
  ```
- From 4.0.0 | Check for the Ingress-Nginx NLB DNS name once it has been deployed
- From 4.0.0 | Configure dns-terraform-module/locals.tf by adding the DNS name of the NLB above. Make any other necessary changes as well.
- From 4.0.0 | (First time only) Run
  ```shell
  terraform -chdir=dns-terraform-module init
  ```
- From 4.0.0 | Run
  ```shell
  AWS_PROFILE=*named_profile* terraform -chdir=dns-terraform-module apply
  ```
- From 4.0.0 | Apply custom DNS settings if you are not on Route 53. See here for more information.
- From 4.0.0 | Run
  ```shell
  AWS_PROFILE=*named_profile* terraform -chdir=dns-terraform-module destroy
  ```
- From 4.0.0 | Run
  ```shell
  ./components.sh bringdown
  ```
- From 4.0.0 | Run
  ```shell
  AWS_PROFILE=*named_profile* terraform destroy
  ```
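The bring-up sequence above can be sketched as a single wrapper script. This is a minimal sketch, not part of the repository: the `run` helper and the `DRY_RUN` flag are assumptions of this example, and the `named_profile` / `cluster_name` defaults are placeholders for your own values.

```shell
#!/bin/sh
# Hypothetical wrapper for the bring-up steps above -- not part of this repo.
# With DRY_RUN=1 (the default here) each command is only printed, which is
# handy for reviewing the sequence; set DRY_RUN=0 to actually execute it.
: "${DRY_RUN:=1}"
export AWS_PROFILE="${AWS_PROFILE:-named_profile}"   # replace with your named profile
CLUSTER_NAME="${CLUSTER_NAME:-cluster_name}"         # replace with your cluster name

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"    # dry run: print the command instead of executing it
  else
    "$@"
  fi
}

run terraform init                                                            # first time only
run terraform apply                                                           # reads AWS_PROFILE from the environment
run aws eks get-token --cluster-name "$CLUSTER_NAME" --profile "$AWS_PROFILE"
run ./components.sh bringup
```

Running it as-is prints the four commands in order without touching AWS, so the sequence can be reviewed before a real run.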
#1 - Getting Started - Link
To apply the changes from the Terraform configuration, simply run the following command (with or without the AWS_PROFILE variable, depending on your use of named profiles):
```shell
AWS_PROFILE=*profile_name* terraform apply
```
#2 - Interacting with your cluster? Web dashboards? Terminal? - Link
- Run the following command
  ```shell
  kubectl proxy --port=8001
  ```
- Get your token with the following command, and copy the token from the returned JSON:
  ```shell
  aws eks get-token --cluster-name *cluster_name* --profile *named_profile*
  ```
  (Alternative) For macOS, if you have jq installed, the following command should automatically copy the token:
  ```shell
  aws eks get-token --cluster-name *cluster_name* --profile *named_profile* | jq -r .status.token | pbcopy
  ```
- Open the dashboard from this link
- Use the token from step 2 to log in
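The token extraction in step 2 can be wrapped in a small helper function. A minimal sketch, assuming jq is installed; `token_from_json` is a hypothetical name, while `.status.token` is where `aws eks get-token` places the bearer token in its ExecCredential JSON output.

```shell
# Hypothetical helper for the dashboard login flow above (requires jq).
# `aws eks get-token` prints an ExecCredential JSON document; the bearer
# token for the dashboard lives at .status.token.
token_from_json() {
  jq -r .status.token
}

# Demonstration with a stand-in document (a real token is much longer):
sample='{"kind":"ExecCredential","status":{"token":"k8s-aws-v1.sample"}}'
printf '%s' "$sample" | token_from_json

# Real usage (not run here):
#   kubectl proxy --port=8001 &
#   aws eks get-token --cluster-name *cluster_name* --profile *named_profile* \
#     | token_from_json | pbcopy   # pbcopy is macOS-only; use xclip -selection clipboard on Linux
```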
#3 - Role-Based Access Control (RBAC) - Link
In the following, we will test our backend role with a random user name SampleBackendUser. This user will be assigned to the AWS IAM group eks_backend_group (found in locals.tf), which is linked to the EksBackendRole role in our Kubernetes cluster in EKS.
Do change the SampleBackendUser, EksBackendRole, and eks_backend_group values if you wish to test out other roles (like devops) or ones that you create yourself.
- Create a backend user with the following command
  ```shell
  aws iam create-user --user-name SampleBackendUser --profile *aws_profile*
  ```
- Add the user to the backend group on AWS IAM
  ```shell
  aws iam add-user-to-group --group-name eks_backend_group --user-name SampleBackendUser --profile *aws_profile*
  ```
- Verify that SampleBackendUser has been assigned to the AWS IAM group by checking the output of the following command
  ```shell
  aws iam get-group --group-name eks_backend_group --profile *aws_profile*
  ```
- Create the access key for this backend user
  ```shell
  aws iam create-access-key --user-name SampleBackendUser --profile *aws_profile* | tee /tmp/SampleBackendUser.json
  ```
- Run the following script, and append the printed output to ~/.aws/credentials (replacing any previous test account entry)
  ```shell
  ######## For AWS Config: ~/.aws/credentials
  AWS_USER="SampleBackendUser"
  AWS_ROLE="EksBackendRole"
  AWS_FILE_PATH="/tmp/$AWS_USER.json"
  # printf is used instead of echo so the \n escapes expand in every shell
  printf '\n[%s]\naws_access_key_id=%s\naws_secret_access_key=%s\n' \
    "$AWS_USER" \
    "$(jq -r .AccessKey.AccessKeyId "$AWS_FILE_PATH")" \
    "$(jq -r .AccessKey.SecretAccessKey "$AWS_FILE_PATH")"
  ```
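If you want to preview the entry before touching ~/.aws/credentials, the same extraction can be dry-run against a stand-in key file. The AccessKeyId and SecretAccessKey below are fabricated placeholders, not real credentials, and the temp file path is chosen by mktemp.

```shell
# Preview of the credentials entry using a fabricated key file.
# The AccessKeyId/SecretAccessKey values are fake and for illustration only.
AWS_USER="SampleBackendUser"
AWS_FILE_PATH="$(mktemp)"
cat > "$AWS_FILE_PATH" <<'EOF'
{"AccessKey":{"UserName":"SampleBackendUser","AccessKeyId":"AKIAFAKEFAKEFAKEFAKE","SecretAccessKey":"fake+secret+for+illustration+only"}}
EOF

# Same extraction as the snippet above, but reading from the stand-in file
printf '\n[%s]\naws_access_key_id=%s\naws_secret_access_key=%s\n' \
  "$AWS_USER" \
  "$(jq -r .AccessKey.AccessKeyId "$AWS_FILE_PATH")" \
  "$(jq -r .AccessKey.SecretAccessKey "$AWS_FILE_PATH")"
```

The block printed here is exactly what gets appended to ~/.aws/credentials in the real run.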
- Run the following script, and append the printed output to ~/.aws/config (again, replacing any previous test account entry)
  ```shell
  ######## For AWS Config: ~/.aws/config
  AWS_USER="SampleBackendUser"
  AWS_ROLE="EksBackendRole"
  AWS_FILE_PATH="/tmp/$AWS_USER.json"
  ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text --profile "$AWS_USER")
  printf '\n[profile %s]\nrole_arn=arn:aws:iam::%s:role/%s\nsource_profile=%s\n' \
    "$AWS_ROLE" "$ACCOUNT_ID" "$AWS_ROLE" "$AWS_USER"
  ```
- Update your cluster credentials as the new backend user account with the following command, before testing out access control with Kubernetes Dashboard or k9s
  ```shell
  aws eks --region *region* update-kubeconfig --name *cluster_name* --profile EksBackendRole
  ```
- Test your user access control with the following two commands. Note that the backend's role can be verified from backend_role.yaml.tmpl. The first command should return no, while the second command should return yes.
  ```shell
  kubectl auth can-i get secret -n production
  ```
  and
  ```shell
  kubectl auth can-i get secret -n development
  ```
- (Cleanup) Once you are done with your testing, make sure you clean up and delete the user with the following commands
  ```shell
  aws iam remove-user-from-group --group-name eks_backend_group --user-name SampleBackendUser --profile *aws_profile*
  aws iam delete-access-key --user-name SampleBackendUser --access-key-id $(jq -r '.AccessKey.AccessKeyId' /tmp/SampleBackendUser.json) --profile *aws_profile*
  aws iam delete-user --user-name SampleBackendUser --profile *aws_profile*
  ```
Following on from the testing steps above, we can also reapply some of the same steps to make it easier for our team members to access our Kubernetes clusters. YourUserProfile will be the user's local AWS profile (if not set, it is likely default), and YourIntendedAWSRoleForUser is the AWS IAM role assigned to your team member; feel free to change these two values to your intended values in the following steps.
The following steps assume that your team members have already set up their AWS CLI. If that is not yet done, visit here to find out how to do it.
- Run the following and append the printed output to ~/.aws/config
  ```shell
  ######## For AWS Config: ~/.aws/config
  AWS_MAIN_PROFILE="YourUserProfile"
  AWS_ROLE="YourIntendedAWSRoleForUser"
  ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text --profile "$AWS_MAIN_PROFILE")
  printf '\n[profile %s]\nrole_arn=arn:aws:iam::%s:role/%s\nsource_profile=%s\n' \
    "$AWS_ROLE" "$ACCOUNT_ID" "$AWS_ROLE" "$AWS_MAIN_PROFILE"
  ```
- Update your team member's cluster credentials with the following command:
  ```shell
  aws eks --region *region* update-kubeconfig --name *cluster_name* --profile YourIntendedAWSRoleForUser
  ```
And your team member should have the cluster set up! Note that the new profile YourIntendedAWSRoleForUser above can also be used to generate the token for gaining access to the Kubernetes Dashboard.
#3.0.1 - Scalability - Link
No additional instructions
#4 - All About Ingress - Link
No additional instructions
#4.0.1 - Limits, Taints, and Affinities - Link
No additional instructions