
OpenShift Pipelines Perf&Scale testing

How to run manually

If you want to run the test manually, you will need these tools:

  • kubectl
  • oc
  • jq
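
Before starting, it can save time to verify the tools are actually on PATH. This is a small convenience sketch, not part of the repo's scripts:

```shell
# Check that each named tool is available on PATH; report what is missing.
check_tools() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool" >&2
      missing=1
    fi
  done
  return "$missing"
}

check_tools kubectl oc jq || echo "install the missing tools first"
```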

Set up the OpenShift cluster (assuming oc login ... has already happened):

export DEPLOYMENT_TYPE="downstream"
export DEPLOYMENT_VERSION="1.14"
export DEPLOYMENT_PIPELINES_CONTROLLER_HA_REPLICAS=""
export DEPLOYMENT_CHAINS_CONTROLLER_HA_REPLICAS=""
export DEPLOYMENT_PIPELINES_KUBE_API_QPS=""
export DEPLOYMENT_PIPELINES_KUBE_API_BURST=""
export DEPLOYMENT_PIPELINES_THREADS_PER_CONTROLLER=""
export DEPLOYMENT_CHAINS_KUBE_API_QPS=""
export DEPLOYMENT_CHAINS_KUBE_API_BURST=""
export DEPLOYMENT_CHAINS_THREADS_PER_CONTROLLER=""
export DEPLOYMENT_PIPELINES_CONTROLLER_RESOURCES="1/2Gi/1/2Gi"
ci-scripts/setup-cluster.sh
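
The DEPLOYMENT_PIPELINES_CONTROLLER_RESOURCES value packs the controller's resource settings into one slash-separated string. The sketch below unpacks it, assuming the field order is CPU request, memory request, CPU limit, memory limit — that order is an assumption here; check ci-scripts/setup-cluster.sh for the authoritative parsing:

```shell
# Assumed field order: cpu_request/mem_request/cpu_limit/mem_limit
parse_resources() {
  old_ifs=$IFS
  IFS=/
  set -- $1
  IFS=$old_ifs
  echo "requests: cpu=$1 memory=$2; limits: cpu=$3 memory=$4"
}

parse_resources "1/2Gi/1/2Gi"
```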

Run the test:

export TEST_NAMESPACE="1"
export TEST_DO_CLEANUP="false"
export TEST_TOTAL="100"
export TEST_CONCURRENT="10"
export TEST_TIMEOUT=18000
export TEST_SCENARIO="math"   # pick this scenario or one of those below
# export TEST_SCENARIO="build"
# export TEST_SCENARIO="signing-ongoing"
# export TEST_SCENARIO="signing-bigbang"
# export TEST_SCENARIO="signing-tr-varying-concurrency"
# export CHAINS_ENABLE_TIME=0
# ...and more
ci-scripts/load-test.sh
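
TEST_TIMEOUT needs to grow with the size of the run. A hypothetical rule of thumb (not something load-test.sh computes for you) is to budget a fixed amount of time per batch of TEST_CONCURRENT runs:

```shell
# Hypothetical heuristic: allow ~3 minutes per batch of TEST_CONCURRENT
# runs. Real run time depends heavily on the scenario; measure and adjust.
suggest_timeout() {
  total=$1
  concurrent=$2
  echo $(( (total / concurrent) * 180 ))
}

suggest_timeout 100 10    # 100 total runs, 10 at a time
```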

Collect the results:

ci-scripts/collect-results.sh

Dependencies

This is what I did recently on RHEL 9 to make the test run:

# Packages
rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
dnf install tmux python3-pip jq parallel git-core

# kubectl
curl -Lso /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x /usr/local/bin/kubectl

# oc from https://access.redhat.com/downloads/content/290
curl -o oc-4.15.0-linux.tar.gz -L "https://access.cdn.redhat.com/content/origin/files/sha256/f0/f0.../oc-4.15.0-linux.tar.gz?user=...&_auth_=..."
tar xzf oc-4.15.0-linux.tar.gz
cp oc /usr/local/bin/oc
chmod +x /usr/local/bin/oc

# cosign
curl -O -L "https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64"
mv cosign-linux-amd64 /usr/local/bin/cosign
chmod +x /usr/local/bin/cosign

# Login to OCP cluster
oc login https://...:6443 --username ... --password ... --insecure-skip-tls-verify

What scenarios are there

You can run multiple different scenarios, configured via the TEST_SCENARIO environment variable. To learn what each scenario does, check the README files in the tests/scaling-pipelines/scenario/ subfolders.
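
A scenario name can be sanity-checked before a run. This sketch hardcodes the scenario names shown in the run example above; the authoritative list is the set of subfolders in tests/scaling-pipelines/scenario/:

```shell
# Scenario names taken from the examples in this README; the scenario/
# subfolders are the source of truth.
known_scenarios="math build signing-ongoing signing-bigbang signing-tr-varying-concurrency"

validate_scenario() {
  case " $known_scenarios " in
    *" $1 "*) echo "valid" ;;
    *) echo "unknown" ;;
  esac
}

validate_scenario "math"
```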

How perf&scale CI works

This section describes where the automated runs of this test are configured in the OpenShift CI/Prow system.

Prow

To execute the tests we use Prow. Jobs in Prow were configured in openshift/release PR#44206.

A nice document on how to onboard a new test is the OpenShift CI Scenario Onboarding Guide.

A description of the ci-operator configuration is in Types of Tests.

If we ever need to add some secrets to the test, review the OpenShift CI Interop Scenario Secrets Guide docs. There is an openshift-pipelines-perfscale collection in OpenShift CI Secret Collection Management. Log in there and ping @jhutar to add you as a member so you can see it. Once added, you should be able to see the secret in OpenShift CI Vault. In the job, secrets need to be mounted under the /usr/local/ci-secrets/openshift-pipelines-perfscale directory (this was removed as unnecessary after the initial PR).

In an openshift/release repo PR, you can trigger the test with /pj-rehearse pull-ci-openshift-pipelines-performance-master-scaling-pipelines. Also, twice a day (02:00 and 14:00 UTC) Prow triggers periodic-ci-openshift-pipelines-performance-master-scaling-pipelines-daily (history).

Test code is in the tests/scaling-pipelines/ directory. See the README in that directory for more info.

Pusher

Every hour a Jenkins job runs a CI puller script (see ci-scripts/prow-to-storage.sh). There are a Jenkinsfile and a JobDSL file for this job.

The ci-scripts/prow-to-storage.sh script lists the N most recent Prow builds of the job and, for any not pushed already, pushes their results JSON files to Horreum and OpenSearch. After uploading to Horreum, the script checks whether change detection detected a change; if so, it adds a "result" key with the value "FAIL" to the JSON, otherwise "PASS". The upload to OpenSearch happens with this value in place.
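
The pass/fail decision can be sketched like this — a deliberate simplification; the actual logic lives in ci-scripts/prow-to-storage.sh and queries Horreum's change detection, which is out of scope here:

```shell
# change_count: number of changes Horreum's change detection reported for
# this run (how it is fetched from Horreum is omitted in this sketch).
annotate_result() {
  change_count=$1
  if [ "$change_count" -gt 0 ]; then
    echo "FAIL"
  else
    echo "PASS"
  fi
}

annotate_result 0
```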

Horreum

The Horreum instance we use is https://horreum.corp.redhat.com/ (managed by the Horreum team: Horreum Google Chat space). It is meant to help spot failing tests by comparing results with historical data.

You can browse data and graphs without logging in, but to change the configuration you will need an account. In Horreum we have a team, Openshift-pipelines. Ping @johara in the Google Chat space linked above to create a user for you, then @kbaig or @jhutar to add you to the team.

Current test configuration:

Check the test change detection settings to understand under which circumstances Horreum tags a new result as a "change".

OpenSearch

The OpenSearch (a.k.a. ElasticSearch) instance we use is http://elasticsearch.intlab.perf-infra.lab.eng.rdu2.redhat.com/ and the OpenSearch Dashboards (a.k.a. Kibana) instance is http://kibana.intlab.perf-infra.lab.eng.rdu2.redhat.com/ (managed by the Perf&Scale Integrations lab team: INTLAB Jira). It is meant to provide a useful dashboard and a way to explore historical test data.

All data are pushed to the pipelines_ci_status_data index in OpenSearch. You can browse the data in the "Discover" section with that index selected. For basic insight into the data you can use this dashboard. Its JSON definition is backed up in the config/kibana/ directory.
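
For ad-hoc checks, documents in that index can be queried through OpenSearch's standard REST _search endpoint. The helper below only composes the URL; the host and index name are the ones listed above:

```shell
# Compose the _search URL for an OpenSearch index; _search is the
# standard OpenSearch query endpoint (e.g. pass the result to curl).
os_search_url() {
  host=$1
  index=$2
  echo "http://$host/$index/_search"
}

os_search_url "elasticsearch.intlab.perf-infra.lab.eng.rdu2.redhat.com" "pipelines_ci_status_data"
```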

Contributors

jhutar, rh-rahulshetty, khrm, openshift-merge-bot[bot], openshift-ci[bot], vdemeester
