datadog / system-tests

Test framework for libraries and agents.

License: Apache License 2.0

Topics: functional-testing, fuzzing, blackbox-testing, integration-testing, end-to-end-testing

system-tests's Introduction

System tests

Workbench designed to run advanced tests (integration, smoke, functional, fuzzing and performance).

Requirements

bash, docker and python3.9. More info in the documentation.

How to use

Add a valid staging DD_API_KEY environment variable (you can set it in a .env file). Then:

flowchart TD
    BUILDNODE[./build.sh nodejs] --> BUILT
    BUILDDOTNET[./build.sh dotnet] --> BUILT
    BUILDJAVA[./build.sh java] --> BUILT
    BUILDGO[./build.sh golang] --> BUILT
    BUILDPHP[./build.sh php] --> BUILT
    BUILDPY[./build.sh python] --> BUILT
    BUILDRUBY[./build.sh ruby] --> BUILT
    BUILT[Build complete] --> RUNDEFAULT
    RUNDEFAULT[./run.sh] -->|wait| FINISH
    FINISH[Tests complete] --> LOGS
    FINISH[Tests complete] --> OUTPUT
    OUTPUT[Test output in bash]
    LOGS[Logs directory per scenario]
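
For example, a minimal local session could look like the following. This is a sketch, assuming a valid staging API key and the nodejs target from the diagram; any other ./build.sh target works the same way:

echo "DD_API_KEY=<your-staging-api-key>" > .env

# build the images for one library, e.g. nodejs
./build.sh nodejs

# run the default scenario; test output is printed in bash,
# and each scenario leaves its logs in a dedicated logs directory
./run.sh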

The architectural overview explains the different parts of the tests.

More details in build documentation and run documentation.

Output on success

Complete documentation

system-tests's People

Contributors

ahmed-mez, anna-git, bouwkast, cataphract, cbeauchesne, christophe-papazian, dianashevchenko, emmettbutler, estringana, gnufede, gustavocaso, hokitam, juanjux, kyle-verhoog, link04, lloeki, marcotc, mehulsonowal, nachoechevarria, nizox, paullegranddc, perfectslayer, robertomonteromiguel, robertpi, smola, songy23, wconti27, ygree, zacharycmontoya, zstriker19


system-tests's Issues

parametric/apps/golang: Flush() doesn't flush stats

The FlushTraceStats function in the Go implementation should be able to call Flush(), but this function doesn't actually flush the trace stats. This causes problems for some of the tests, which are then forced to call Stop().

#576 will include some stopgaps, but we need to investigate whether the Go tracer should actually be flushing stats in Flush().

/cc @Kyle-Verhoog

Enable VScode env

Out of the box, VS Code should provide Python autocompletion and pylint/black tooling.

Add a way to force a test to be executed

If this env var is set: SYSTEM_TESTS_FORCE_EXECUTE=feature-id-1,feature-id-2,feature-id-3

Then feature-id-1, feature-id-2 and feature-id-3 must be executed, even if they are skipped.
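
A minimal sketch of how this could be honored from a conftest.py, assuming feature ids are attached to tests via a marker; the features marker name and the get_feature_ids helper are assumptions, not the framework's actual API:

import os

import pytest

FORCED = set(filter(None, os.environ.get("SYSTEM_TESTS_FORCE_EXECUTE", "").split(",")))


def get_feature_ids(item):
    # hypothetical helper: collect feature ids declared with @pytest.mark.features("feature-id-1")
    return [feature_id for mark in item.iter_markers(name="features") for feature_id in mark.args]


def pytest_collection_modifyitems(config, items):
    if not FORCED:
        return
    for item in items:
        if FORCED & set(get_feature_ids(item)):
            # drop skip/skipif/xfail markers declared directly on the item;
            # markers inherited from the class would need the same treatment
            item.own_markers = [m for m in item.own_markers if m.name not in ("skip", "skipif", "xfail")]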

Improve developer experience

Issue list

TODO

Build time, Exec time

Hard to write tests

  • #1722
  • Hard to know which scenarios are useful for a given feature
  • Sugars

Debuggability

CI ecosystem

Done during Q2-Q3 2023

Build time, Exec time

  • #1278
  • try to understand why the nodejs tracer takes 15 min to build (and improve it).
  • Exec time is too long
    • #1093 and #1169
    • option to not restart containers? (need to know how much time that would save, and be sure we do not introduce flakiness)

Hard to write tests

  • Hard to debug tests locally // hard to write a new test from scratch
  • Hard to know which scenarios are useful for a given feature
    • #1086
    • #1087
    • ./run.sh SCENARIO --help -> prints the documentation of the scenario

PR process in system tests

  • Chicken-and-egg issue between a system-tests PR and the corresponding PR in the other repository
    • #1054
    • #913
    • Communicate about the current recipe
  • Decorator merge-conflict hell // maintenance
  • PR review bottleneck -> #1481
  • PR without description
  • Improve system tests CI

Maintain our CI

Other

  • Running the N variants locally

    • N variants. Indeed. Hard to tackle, as one run requires most of your laptop's resources. Maybe a script that runs all variants for a given tracer, but it will be very slow.
  • Importance of following the feature matrix

Pre-build static docker images

Build step is very slow...

  • get data on each image build time
  • pre-build them, and use the pre-built images (rough sketch below)
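
A rough sketch of that idea, assuming the images are pushed to a registry; the image names and registry below are purely illustrative:

# nightly job: build once, time it, and push
docker build -t my-registry.example.com/system-tests/weblog-python:latest .
docker push my-registry.example.com/system-tests/weblog-python:latest

# regular runs: pull the pre-built image instead of rebuilding it
docker pull my-registry.example.com/system-tests/weblog-python:latest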

WIP

  • cpp
  • golang
  • java
  • nodejs
  • php
  • python: #1331
  • ruby

POC: synchronous test

As of now, tests are asynchronous:

class TestCase():
    def test_case(self):
        # some setup code that should not fail
        r = get_weblog()
        # real test logic, validated asynchronously
        interface.weblog.assert_waf_attack(r)

It's painful because

  • we have to hack pytest to rewrite failures (test_case itself never fails)
  • if test_case fails, we have to hack pytest to surface the output
  • if test_case fails but is flagged as xfail, the error is silent
  • and there are lots of hacks to decorate asynchronous validations

It may be possible to change this to stick to a pure pytest code style:

class TestCase():
    def setup_case(self):
        self.r = get_weblog()

    def test_case(self):
        interface.weblog.assert_waf_attack(self.r)  # now synchronous

The test executor will look like:

collect
execute all setup_XXXX (need to be written)
wait for interfaces
execute tests

  1. no more asynchronous tests
  2. no more dirty hacks to collect metadata

The only drawback is that the objects set in setup are no longer right next to the test logic. But I think it's reasonable.
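
A minimal sketch of what that executor could look like as a conftest.py hook, assuming each test_XXXX has a matching setup_XXXX on the same class; the interface-waiting helper below is a placeholder, and parametrized test names are not handled:

def pytest_collection_finish(session):
    # run every setup_XXXX companion method right after collection,
    # before any test_XXXX is executed
    for item in session.items:
        setup_name = item.name.replace("test_", "setup_", 1)
        setup_method = getattr(item.instance, setup_name, None)
        if setup_method is not None:
            setup_method()

    # placeholder: block until the agent/library interfaces received all expected data
    wait_for_interfaces()


def wait_for_interfaces():
    # hypothetical helper: poll the interface validators until all payloads arrived
    pass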

Move scenario test set into python files

Currently, the test set is declared in run.sh:

elif [ $SYSTEMTESTS_SCENARIO = "REMOTE_CONFIG_MOCKED_BACKEND_ASM_DD" ]; then
    export RUNNER_ARGS="scenarios/remote_config/test_remote_configuration.py::Test_RemoteConfigurationUpdateSequenceASMDD"

The idea is to move this set inside Python files:

@scenarios("REMOTE_CONFIG_MOCKED_BACKEND_ASM_DD")
class Test_RemoteConfigurationUpdateSequenceASMDD():
    ...

If a test class does not have a @scenarios decorator, then it's included in the default scenario.

The decorator can be added on both classes and test methods.
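
A minimal sketch of such a decorator plus the filtering hook, assuming the existing SYSTEMTESTS_SCENARIO environment variable keeps driving the selection; the marker-based mechanics below are an assumption, not the framework's actual implementation:

import os

import pytest


def scenarios(*names):
    # attach the scenario names to the test class (or test method) as a marker
    return pytest.mark.scenarios(*names)


def pytest_collection_modifyitems(config, items):
    current = os.environ.get("SYSTEMTESTS_SCENARIO", "DEFAULT")
    for item in items:
        declared = [name for mark in item.iter_markers(name="scenarios") for name in mark.args]
        # classes without a @scenarios decorator belong to the default scenario
        if not declared:
            declared = ["DEFAULT"]
        if current not in declared:
            item.add_marker(pytest.mark.skip(reason=f"not part of scenario {current}"))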

`__pycache__` folders created with owner as root

Platform: Linux Ubuntu 22.04 (datadog preseed image), x64, docker compose compatibility.

If I execute ./build.sh twice, it fails with an access-denied error on pycache files. I checked, and the files are all owned by root. (I did not run the tests outside of Docker.)
The workaround is to run py3clean . between every run.

Execute test scenarios in parallel in CI

The CI taking 30 minutes to complete does not spark joy. I've seen that the scenarios are executed one by one when they could be parallelized. In a local dev env it wouldn't make sense, but in CI I'm guessing it does.
