
redant's Introduction

redant


    ____  __________  ___    _   ________
   / __ \/ ____/ __ \/   |  / | / /_  __/
  / /_/ / __/ / / / / /| | /  |/ / / /   
 / _, _/ /___/ /_/ / ___ |/ /|  / / /    
/_/ |_/_____/_____/_/  |_/_/ |_/ /_/     
                                         

usage: redant_main.py [-h] -c CONFIG_FILE -t TEST_DIR [-l LOG_DIR] [-ll LOG_LEVEL]
                      [-cc CONCUR_COUNT] [-xls EXCEL_SHEET] [--show-backtrace] [-kold]

Redant test framework main script.

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG_FILE, --config CONFIG_FILE
                        Config file(s) to read.
  -t TEST_DIR, --test-dir TEST_DIR
                        The test directory where TC(s) exist
  -l LOG_DIR, --log-dir LOG_DIR
                        The directory wherein log will be stored.
  -ll LOG_LEVEL, --log-level LOG_LEVEL
                        The log level. Default log level is Info
  -cc CONCUR_COUNT, --concurrency-count CONCUR_COUNT
                        Number of concurrent test runs. Default is 2.
  -xls EXCEL_SHEET, --excel-sheet EXCEL_SHEET
                        Spreadsheet for result. Default value is NULL
  --show-backtrace      Show full backtrace on error
  -kold, --keep-old-logs
                        Don't clear the old glusterfs logs directory during environment setup.
                        Default behavior is to clear the logs directory on each run.
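
For example, a run with debug-level logging and four concurrent tests might look like this (the paths assume the default repo layout described later in this README, and DEBUG is assumed to be an accepted level value):

python3 ./core/redant_main.py -c ./config/config.yml -t tests/example/ -ll DEBUG -cc 4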

Tested and Supported Distros

Distro      Redant   Gluster Server   Gluster Client
Fedora 32   ✔️       ✔️               ✔️
Fedora 34   ✔️       ✖️               ✖️
RHEL 7.9    ✖️       ✔️               ✔️
RHEL 8.4    ✔️       ✔️               ✔️

The architects of any project won't be there with it forever (not everyone has the luxury of being a BDFL), hence it is important to have the thought process documented so that one doesn't need to go through the code. We, for one, believe in proper documentation. The very idea of developers and engineers being spartans who understand logic only from code is something we feel is misguided. We need to be civilized humans and make it easy for the next person coming in to just glance at what it is, why it is and how it is.

Before trying out redant, do check the known issues section

For those who want a pure markdown experience and a deeper dive...

Readme Docs

The Documentation index can be found at Docs


Set up

Prerequisites:

  1. Passwordless SSH between all the nodes in the cluster (including each node to itself).
  2. Gluster installed on all the nodes, with the bricks which would be used in the volumes created on all the servers.
  3. The following packages installed on all the nodes in the cluster; this includes some packages required by external tools used in some test cases (see the example install commands after this list):
     • git
     • make
     • gcc
     • autoconf
     • automake
     • cronie
     • rsync
     • numpy
     • sh
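
On the supported Fedora/RHEL distros listed above, the setup could look roughly like this (numpy and sh are Python packages, so they are installed through pip; exact package names may vary per distro):

dnf install -y git make gcc autoconf automake cronie rsync
pip3 install numpy sh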

To start Working:

  1. Clone redant repo.
  2. Populate the config.yml with relevant server and client details.

STEP-BY-STEP procedure to run:

  1. git clone [your fork for this repo]
  2. Create a virtual environment using: virtualenv <virtual_env_name> or python3 -m venv <virtual-env-name>
  3. Activate the virtual env: source <virtual_env_name>/bin/activate
  4. cd [the-fork]
  5. Run pip3 install -r requirements.txt
  6. Install the packages needed by some TCs by running the scripts under tools/pre-req_scripts
  7. To run the sample TC, populate the config file with relevant values and then run the command below from the main redant repository. The tests path should be given relative to the redant directory: python3 ./core/redant_main.py -c ./config/config.yml -t tests/example/ For more options, run python3 ./core/redant_main.py --help
  8. Log files can be found at /var/log/redant/ [default path].

The logging is specific to a TC run. When a user gives a base directory for logging while invoking redant_main.py, that directory will in turn contain the following dirs:

  • functional
  • performance
  • example

Based on the invocation, a directory for the component will be created inside the functional and performance dirs. Inside the component directory, a test-case-specific directory will be created, which in turn contains volume-specific log files. For example, to see the log files of the test case functional/<gluster_component>/<test_name>/test_sample.py, one would go to the directory <base_log_dir>/<time_stamp>/functional/<gluster_component>/test_sample, which contains the log files specific to each volume type.
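
As an illustrative sketch of that layout (all names below are placeholders):

<base_log_dir>/<time_stamp>/
├── functional/
│   └── <gluster_component>/
│       └── test_sample/
│           └── <volume_type>.log
├── performance/
└── example/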

In addition to running TCs from a whole suite (performance, functional, or a more granular component level), one can also select a specific TC to run. For example, python3 core/redant_main.py -c config/config.yml -t tests/example/sample_component

One can also run the scripts given under the tools dir, which cut down the lengthy commands to be typed out every time. Check out the README.md at the link Tools-README

Those looking to get into the action of migrating test cases

Please refer to the doc: migratingTC.md

Design Document

Redant Design Doc

redant's People

Contributors

aujjwal-redhat, baraksason, csabahenk, dependabot[bot], neesingh-rh, nik-redhat, nishith-vihar, obnoxxx, schaffung, sheetalpamecha


redant's Issues

Lint and Flake errors fixing

All the files have to be fixed for lint and flake errors. This issue can be referred to in all the PRs fixing lint and flake.

Modify CONTRIBUTION.md

Modify the CONTRIBUTION file as the project progresses.
For now, the CONTRIBUTION file looks great and helpful.

remote_exec class variable in test case to be changed

The remote_exec class variable in the test_case can easily be misinterpreted as the remote executioner, whereas in the actual sense it is the reference to the mixin object passed to the test case by the runner_thread. So the variable has to be renamed accordingly.

Adding the parsing module

The parsing module consists of 3 components:

  1. The detailed config file
  2. The Parser class which parses the config file and generates config hashmap given the path of config file
  3. The ParamsHandler which parses the configuration parameters from config hashmap

The gluster_test_main file calls the parsing module, whose function is described in the Gluster-test-design document.
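
A minimal sketch of the flow described above, assuming the config file is YAML (the class and method names here mirror the issue text but are not the framework's final API):

import yaml

class Parser:
    @staticmethod
    def generate_config_hashmap(config_path: str) -> dict:
        # Load the config file into a plain dict (the "config hashmap").
        with open(config_path) as config_fd:
            return yaml.safe_load(config_fd)

The ParamsHandler would then read individual configuration parameters out of this hashmap.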

Logging to be re-structurized

Logging is important for debugging the carcass of the TCs which fail. It would be annoying to go through a data dump containing logs for all the TCs just to check why one failed. The better way is to mimic the tests directory structure and create a log file per volume type run per TC. So, no more going over a big file and using grep; just navigate to the particular log file belonging to a TC.

Creating readme at the directory level

The idea is to document for posterity. Now, Python has Sphinx and other ways of directly converting the code to a documented website (though we have to do some configuration). But I believe that will be more of a "what's the API and class the module has", rather than the "why" and "how".

Let's keep this issue as long running. We can give PRs and actually add the Updates: # keyword during a commit. So do not close this issue.

Make logging API a little generic

The current logging is done in the following manner: RR.rlogger.<log_level>(<log_message>). Instead of this, we can have one method of the form RR.rlogger.log(<log_message>, <log_level>).
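
A minimal sketch of such a wrapper (the Rlogger class name and the level-code mapping are assumptions for illustration; the 'I' code matches the log-message examples later in this page):

import logging

class Rlogger:
    LEVELS = {'D': logging.DEBUG, 'I': logging.INFO, 'E': logging.ERROR}

    def __init__(self, name: str):
        self._logger = logging.getLogger(name)

    def log(self, log_message: str, log_level: str = 'I'):
        # One entry point; dispatch on the level code instead of
        # exposing a separate method per level.
        self._logger.log(self.LEVELS.get(log_level, logging.INFO), log_message)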

Making changes to test cases

The IPs are taken from the config file and are accessed through the Parent Test. The test case should run successfully along with the other components.

Command execution result validation

There should be validation of the value returned by the remote command executor. The value returned will be a dictionary and hence has to be parsed for success or failure. If there is a failure, an exception must be thrown. According to the design, we will rely on the try-except flow rather than return values being checked in the TC.
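
A sketch of what that validation could look like (the dictionary keys below are assumptions; the executor's actual layout may differ):

def validate_ret(ret: dict):
    # Raise instead of returning an error code, so TCs can rely on
    # the try-except flow described above.
    if ret.get('error_code', 0) != 0:
        raise RuntimeError(f"Command {ret.get('cmd')} failed on "
                           f"{ret.get('node')}: {ret.get('error_msg')}")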

Extracting test type and volume types from a TC

The TC will contain a comment line at the very beginning stating the TC's type, whether it is disruptive or non-disruptive, along with the volume types this TC will have to be run for.

For example,

#disruptive;replicated,distributed,distributed-replicated

Now this parsing has to be part of the test_list_builder.
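
A sketch of that parsing, using the example line above (the function name is illustrative; where exactly this sits inside the test_list_builder is an implementation detail):

def parse_tc_flags(tc_path: str) -> tuple:
    with open(tc_path) as tc_file:
        # e.g. "#disruptive;replicated,distributed,distributed-replicated"
        flag_line = tc_file.readline().strip()
    test_type, vol_part = flag_line.lstrip('#').split(';')
    return test_type, vol_part.split(',')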

Adding gluster Ops library

The gluster Ops library will mainly contain operations for starting and stopping gluster service on the server.

Test runner script

The test runner is deemed to be the script which would be actually invoked by the user to run the test cases.

The idea is that the test runner will parse through the existing test cases and decide their order of execution, and also whether they can be parallelized. For executing the TCs, the test runner will spin up the thread_runner, which would in turn be responsible for that one TC (for a given volume type).

This issue is to track as to what all features will come under the test runner.

Pylint specific errors and the messages

Errors & Messages:

E1101:

'%s %r has no %r member%s'
https://pycodequ.al/docs/pylint-messages/e1101-no-member.html

In our code

E1101: ❌

Instance of 'VolumeOps' has no 'rlog' member (no-member)

E0401: ❌

Unable to import 'redant_libs.support_libs.rexe' (import-error)

R0801: ❌

Similar lines in 5 files

R0903: ❌

Too few public methods (0/2) (too-few-public-methods)

C0103: ❌

Variable name "f" doesn't conform to snake_case naming style (invalid-name)

W0703: ❌

Catching too general exception Exception (broad-except)

R0902: ❌

Too many instance attributes (12/7) (too-many-instance-attributes)

Regenerating the errors locally:

  1. Remove the specific errors or all of them from disable in .pylintrc file.
  2. Then run the command:
pylint -j 4 --rcfile=.pylintrc redant_libs/
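
For reference, if all seven codes above were suppressed, the disable line in .pylintrc would look something like this:

disable=E1101,E0401,R0801,R0903,C0103,W0703,R0902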

Enhancing redant_test_main

The redant_test_main has the following functionalities under it,

  • Parsing of the config file
  • Invocation of the test list builder and forming the order of test execution
  • Using the above two data to invoke the test_runner.

Now to achieve this, the following parameters have to be provided to redant_test_main during invocation as command-line arguments:

  1. Config file path
  2. Test directory ( which contains the test cases or tree of test cases within )
  3. Log file directory ( within which the log files for each TC will be created. )
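
A minimal argparse sketch covering those three parameters (the flag names mirror the usage text at the top of this README; the log-dir default mirrors the documented default path):

import argparse

parser = argparse.ArgumentParser(description="Redant test framework main script.")
parser.add_argument('-c', '--config', dest='config_file', required=True,
                    help="Config file(s) to read.")
parser.add_argument('-t', '--test-dir', dest='test_dir', required=True,
                    help="The test directory where TC(s) exist.")
parser.add_argument('-l', '--log-dir', dest='log_dir', default='/var/log/redant',
                    help="The directory wherein logs will be stored.")
args = parser.parse_args()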

Sphinx Documentation and read-the-docs

We should start exploring how to spin up a Sphinx document for the code base and how to host it on Read the Docs (or any other hosting site which is free and open source).

Logging messages to follow the required format

Log Message format

===================
All log messages MUST include:

  • The cmd that is/was executed
  • The machine(s) on which the command is/was executed

Ex: peer_ops

On starting the execution of cmd on a node:

 self.rlog("Running %s on node %s" % (cmd,node), 'I')

On completing the execution:

 self.rlog("Successfully ran %s on %s " % (cmd, node),'I')

Log levels

===========

  • Everything in the ops library has to be in info mode.
  • Everything in the test cases has to be in error mode.
  • Everything other than ops to be in debug mode.

Changing the directory format of the framework

The whole framework is divided into the following parts

  1. Core - Contains the core test framework which includes : Parsing + Test List Builder + Test Runner + Runner Thread + Test Main
  2. Config - Contains only the config file
  3. Tests - Contains : Parent Test + Performance tests + Functional Tests
  4. Support - Contains: Ops libs

Changes done according to the format

-> Added Try-Except blocks
-> Removed the extra spaces and the imports
-> Created the class and defined all ops func inside it.
-> Used format specifiers

Mixin Pattern across the base classes

The current dependency flow is a little circuitous and can lead to issues. I'm proposing we combine all the base support libraries and ops into a framework_mixin which can be used across the test functions instead of individual calls to the classes and modules.
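
A minimal sketch of that composition (Rexe, Relog and VolumeOps are modules/classes mentioned elsewhere in this repo; treating them as mixin base classes here is an assumption):

class Rexe:
    """Remote execution support (sketch)."""

class Relog:
    """Logging support (sketch)."""

class VolumeOps:
    """Gluster volume operations (sketch)."""

class FrameworkMixin(Rexe, Relog, VolumeOps):
    """The single object handed to every test function, replacing
    individual calls to the separate classes and modules."""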

Following pep-3107 standard

We all know that Python is slick in handling types; for instance, we can create a function

def sample_func(sample_var):
   print(sample_var)

and this function can take in any data type and print it (even functions, as functions are first-class objects in Python). This can be disastrous for us designing the framework, as it can lead to issues when somebody invokes and uses the function in a way which isn't intended. The question is, do we have a salvation?

Yes,

PEP-3107 : https://www.python.org/dev/peps/pep-3107/

The very least we can do is annotate the parameters so that the expected types are visible in the function definitions when the user is going through them. For anyone using an IDE, these annotations would help correct a user who is about to pass the wrong type of data.
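
For instance, the same sample_func from above with annotations added:

def sample_func(sample_var: str) -> None:
    # The annotation now tells readers and IDEs that a string is expected.
    print(sample_var)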

The relog module already uses function annotations, and can be used as a reference for further changes across the framework.
