[NEW] We launched as an official AWS Solution!

Over the last few months we have been working hard on making this project better, and we are happy to announce that v1.0 is now available to be deployed as an official AWS Solution: https://aws.amazon.com/solutions/distributed-load-testing-on-aws.

The solution offers a polished version of this project with proper documentation and a user interface (console) where you can run and manage your load test scenarios. The solution is also open source, so you can fork it, customize it, or even contribute back if you are interested! GitHub repository: https://github.com/awslabs/distributed-load-testing-on-aws.

The instructions below will continue to work, but our development efforts will be focused on the new release.

[OLD] Distributed Load Testing Using Fargate

This project launches a solution that runs distributed load tests using AWS Fargate and Taurus. You can use it to test your services under high-stress scenarios and understand their behavior and scalability.

Architecture

Taurus acts as a wrapper around JMeter and allows you to generate HTTP requests in parallel simulating a real-world scenario. This solution shows how to run Taurus on Docker containers and deploy them to Fargate clusters running in different AWS regions, so that you can simulate requests coming from different geographic locations into your service.

Note: Your service (the system under test) does not have to be running on AWS. You can configure this solution to hit any HTTP endpoint, as long as it is accessible through the internet. However, the solution itself must be deployed in an AWS account.

License Summary

This sample code is made available under a modified MIT license. See the LICENSE file.

Requirements

Getting Started

Step 1. Launch Solution

In this step you will launch the master CloudFormation stack that will create a Fargate Cluster, an ECR Docker registry, an IAM Execution Role, a Task Definition, a CloudWatch Log Group, a Security Group, a new VPC, a CodeCommit repository, a CodePipeline and 2 CodeBuild projects with their associated IAM Roles.

Region Name             Region Code   Launch
US East (N. Virginia)   us-east-1     Launch in us-east-1
US East (Ohio)          us-east-2     Launch in us-east-2
US West (Oregon)        us-west-2     Launch in us-west-2
EU West (Ireland)       eu-west-1     Launch in eu-west-1

You will notice that the master CloudFormation stack expands into 4 stacks:

  1. The master stack, which creates resources shared across regions, such as the ECR registry and the CodePipeline.
  2. The VPC nested stack, which creates the network configuration and subnets.
  3. The Fargate nested stack, which creates the cluster and task definition.
  4. The Pipeline nested stack, which creates the CodeCommit repository and the CodeBuild projects that build and run the load tests in Fargate.
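If you prefer the AWS CLI over the console launch buttons, launching the master stack could look like the sketch below. The template path and stack name are illustrative assumptions, not verified paths in this repository; CAPABILITY_IAM is required because the stack creates IAM roles.

# Sketch only: the template file name and stack name are assumptions.
aws cloudformation create-stack \
  --stack-name DistributedLoadTesting \
  --template-body file://cloudformation/main.yaml \
  --capabilities CAPABILITY_IAM \
  --region us-west-2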

Step 2. Clone this repository

git clone https://github.com/aws-samples/distributed-load-testing-using-aws-fargate.git

Step 3. Modify the load test scenario

Configure your test scenario by editing the examples/http/taurus.yml file. The default example shown below runs a load test for 5 minutes with 5 concurrent virtual users against https://aws.amazon.com, with a ramp-up time of 1 minute.

execution:
- concurrency: 5
  ramp-up: 1m
  hold-for: 5m
  scenario: aws-website-test

scenarios:
  aws-website-test:
    requests:
    - https://aws.amazon.com

To learn more about the syntax of this file, check the Taurus docs: https://gettaurus.org/kb/Index.
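For example, a slightly heavier scenario could raise concurrency and add think time between requests. The values and the second URL below are illustrative, not recommendations:

# Illustrative scenario: 50 virtual users, 2-minute ramp-up, 10-minute hold,
# 1 second of think time between the two requests.
execution:
- concurrency: 50
  ramp-up: 2m
  hold-for: 10m
  scenario: aws-website-test

scenarios:
  aws-website-test:
    think-time: 1s
    requests:
    - https://aws.amazon.com
    - https://aws.amazon.com/solutions/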

Step 4. Push to CodeCommit

One of the resources created when deploying this solution is a CodeCommit repository with a CodePipeline connected to it. On every commit, the pipeline runs automatically: it builds the load test scenarios into a Docker image, pushes the image to the ECR registry, and finally runs the load tests in the Fargate clusters.

Now, remove the project's current Git origin, because you just cloned it from GitHub and you want your remote origin to be the CodeCommit repository instead.

git remote rm origin

Now, set the origin to be your new CodeCommit repository. An easy way to find the repository URL is in the Outputs section of the CloudFormation stack you launched in Step 1. The URL should look something like this: https://git-codecommit.us-west-2.amazonaws.com/v1/repos/distributed-load-testing-using-aws-fargate.

git remote add origin {code_commit_repository_url}

Finally, push the code.

git push -u origin master

Note: If you get a permission denied error when pushing, you may not have configured CodeCommit credentials. Follow these steps to set up authentication: Git Credentials for AWS CodeCommit.
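If you use the AWS CLI, a common setup is its CodeCommit credential helper. The linked guide is the authoritative reference; the configuration generally looks like this:

# Let Git obtain CodeCommit credentials from the AWS CLI.
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true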

If the push was successful, go to CodePipeline and check how the build progresses all the way to the last step, which will run the tests in the Fargate cluster. The pipeline should look like this:

[Image: Pipeline]

Step 5. Monitor the test execution

When the last step of the pipeline has finished successfully, it means that your load tests are now running on Fargate. Go to the ECS Console to find your Fargate cluster, click on it and you should see 3 tasks running:

[Image: Fargate tasks running in the cluster]

Finally, this solution also created a CloudWatch Metric Filter that will capture the average response times for each HTTP request that is issued to your system under test by the Docker tasks running on Fargate.

This filter parses the Taurus logs and assigns a variable name to each value in the log line. It ignores all values except avgRt, which is captured as a new metric and stored in CloudWatch Metrics. To learn more about how this metric filter was created, check the CloudFormation template cloudformation/fargate-cluster.yaml.
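As a rough sketch of what such a metric filter can look like in CloudFormation (the field names in the pattern and the resource names below are illustrative assumptions; the authoritative definition lives in cloudformation/fargate-cluster.yaml):

TaurusMetricFilter:
  Type: AWS::Logs::MetricFilter
  Properties:
    LogGroupName: !Ref LoadTestLogGroup   # assumed log group resource name
    # Space-delimited pattern that names every field in the Taurus status
    # line; only avgRt is extracted as a metric value.
    FilterPattern: "[time, logType, logTitle, numVu, vu, numSucc, succ, numFail, fail, avgRt, x]"
    MetricTransformations:
    - MetricName: avgResponseTime
      MetricNamespace: LoadTest
      MetricValue: $avgRt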

Go to CloudWatch and you should see a Dashboard that was created automatically for you under the Dashboards section. It displays the metrics captured by the filter described above in a graph:

[Image: CloudWatch dashboard]

Launch Additional Regions (Optional)

Running this solution from a single AWS region may well be enough to load test your service. However, if you want to take it a step further, you can deploy Fargate clusters in multiple regions and make it a multi-region load test simulation. For this purpose, I created a separate CloudFormation template that launches this solution in additional regions. The difference from the master template is that this one does not create the ECR Docker registry, the CodePipeline, or the IAM roles, as those resources will be shared across all your regions.

Use the following links to launch the solution in the desired additional regions:

Additional Region       Region Code   Launch
US East (N. Virginia)   us-east-1     Launch in us-east-1
US East (Ohio)          us-east-2     Launch in us-east-2
US West (Oregon)        us-west-2     Launch in us-west-2
EU West (Ireland)       eu-west-1     Launch in eu-west-1

This stack will ask for 2 parameters: DockerRegistryURL and TaskExecutionRoleArn. You can find the values for these parameters in the Outputs section of the master CloudFormation stack launched in Step 1.

[Image: Master stack outputs]

[Image: Additional region stack parameters]
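Equivalently, you can launch the additional-region stack with the AWS CLI; the template file name and the parameter placeholders in braces below are assumptions for illustration:

# Sketch only: run this in the additional region.
aws cloudformation create-stack \
  --stack-name DistributedLoadTesting \
  --template-body file://cloudformation/additional-region.yaml \
  --parameters ParameterKey=DockerRegistryURL,ParameterValue={docker_registry_url} \
               ParameterKey=TaskExecutionRoleArn,ParameterValue={task_execution_role_arn} \
  --region eu-west-1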

Finally, once the stack has completed, you need to tell CodePipeline that a new region has been configured so that it also runs the load test scenarios in that region. To do this, go back to the region where you launched the master stack and follow these steps:

  1. Go to CodePipeline, then select the pipeline created by the master stack.
  2. Click on the LoadTest step to edit it.
  3. Go to Edit > Environment.
  4. Expand the Additional Configuration section.
  5. Add 2 new environment variables to register the additional region:

  Name                 Value
  REGION_2             Additional region (e.g. us-east-1)
  REGION_2_STACK_NAME  CloudFormation stack name (e.g. DistributedLoadTesting)

The following GIF demonstrates the steps mentioned above:

[Image: Adding a region to the pipeline]

Save the changes, run the pipeline, and you are done! Repeat these steps for every additional region.

Monitor All Regions

When you launch this solution in multiple regions, I recommend centralizing the monitoring into a single CloudWatch Dashboard for easier visibility. To do this, you need to pull metrics from the different regions into the Dashboard by following these steps. Once configured, the multi-region Dashboard will look something like this:

[Image: Multi-region CloudWatch dashboard]
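Cross-region dashboards are possible because each metric entry in a dashboard widget can specify its own region. A minimal widget sketch, assuming the namespace and metric name from the filter described earlier:

{
  "type": "metric",
  "properties": {
    "title": "avgResponseTime by region",
    "region": "us-west-2",
    "metrics": [
      [ "LoadTest", "avgResponseTime" ],
      [ "LoadTest", "avgResponseTime", { "region": "us-east-1" } ]
    ]
  }
}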

How to Run Locally

It's good practice to run your tests locally before pushing them to CodeCommit, to make sure they run correctly in a Docker container. To do this, build the image by issuing the following command from the root directory of this project.

docker build -t load-tests-using-fargate .

Then run the container locally.

docker run -it load-tests-using-fargate taurus.yml
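To iterate on a scenario without rebuilding the image each time, you can mount your working copy into the container. This assumes the image is based on blazemeter/taurus, which reads configurations from /bzt-configs; verify against the project's Dockerfile before relying on it.

# Assumption: the base image reads configs from /bzt-configs (blazemeter/taurus behavior).
docker run -it -v "$PWD/examples/http:/bzt-configs" load-tests-using-fargate taurus.yml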

If you are happy with the result, you can push the changes to the CodeCommit repository and let the pipeline do the rest.

Contributors

dependabot[bot], ferdingler, jpeddicord

Issues

Clarification question

Hi, I am new to load testing and AWS, but I believe this solution could be beneficial for future projects.

I need some clarification, though: would this run my full load in every AZ, or split the load across the AZs? For example, if I had a basic test of 250 RPS, would this solution generate 250 RPS for each AZ, or 250 RPS divided across the AZs?

IoT Performance Testing

Write an example that simulates connected IoT devices sending data every X amount of time via the MQTT protocol to an IoT topic.

Add more regions to launch solution

So far this solution can be launched in 3 regions: us-east-1, us-east-2 and us-west-2. The reason is that Fargate was not available in all regions at the time of writing. However, it is now available in many more.

Supporting more regions on this project is easy:

  1. Make sure all services used in the project are available in the desired regions.
  2. Change the template located in cloudformation/fargate-vpc.yaml to add the availability zones corresponding to each region in the RegionMapping field (see the sketch after this list).
  3. Update README file to add buttons to launch on the additional regions.
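For reference, the RegionMapping addition might look like the following sketch; the mapping key names (AZ1, AZ2) are assumptions, so check cloudformation/fargate-vpc.yaml for the actual structure:

Mappings:
  RegionMapping:
    eu-central-1:          # new region being added (illustrative)
      AZ1: eu-central-1a   # key names are assumed, not copied from the template
      AZ2: eu-central-1b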

Increase the ECS instance limit

At the moment, the number of tasks that can be launched for a given scenario is 10, due to a restriction in ECS. A workaround that issues multiple ECS launch requests for a given scenario, based on the number of tasks requested, should be added.
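For context, the limit comes from the ECS RunTask API, which accepts at most 10 tasks per call, so a workaround would batch calls until the desired total is reached. A sketch, with placeholders in braces and the required Fargate network configuration omitted:

# Repeat (or loop) this call until the desired number of tasks is running.
aws ecs run-task --cluster {cluster_name} \
  --task-definition {task_definition} \
  --launch-type FARGATE \
  --count 10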

Capturing different metrics in CloudWatch

Provide a guide with an example of capturing metrics other than avgResponseTime.

I'd like to capture latency (TTFB, or Time To First Byte), and maybe some others. I'm sure others would be interested in this as well.

JMeter Distributed Mode

How can the task containers be configured as hosts when taurus.yml is set up to use JMeter in distributed mode? I know networking with Fargate can be tricky without an ALB. Looking for some insight here, please. Thanks, great stuff here BTW!

Multiple tasks make logging and metrics tricky

Great template; I got up and running quickly with no fuss.

One question I have, though, is about the metrics. If you create CloudWatch metrics for the number of concurrent users, that value is only an average across all LoadTestRunner tasks. For example, if I have 150 concurrent users, and three LoadTestRunner tasks are created by default, that is actually 450 users. Furthermore, the average response time is somewhat inaccurate because it is an average of averages.

Is there any way to consolidate logging between Tasks, or a way to group the metrics together?

See the Taurus Dashboard

This is more of a question than an issue. When I run the tests locally, the Taurus dashboard appears. Is it also possible to see the dashboard or a summary when running in AWS?

Kinesis Producers - Performance Testing

Write an example of a Kinesis producer writing random data to a stream. The goal is not to test Kinesis itself, but to test the scalability of the application consuming the stream.
