Comments (4)
The existence of GenWqe is checked during the test suite's class-level setUp(). A problem could be that it is hard to distinguish these two cases: (1) the device has no GenWqe and the tests should be SKIPped, and (2) the device has GenWqe but it is not working correctly and the tests should be CANCELed (as the check is executed in setUp(), FAIL is no option here, I guess).
How can I identify tests which were SKIPped or CANCELed due to an unsuitable configuration of the system under test? Is there a way to "blacklist" these tests prior to test execution?
No. In Avocado, the test discovery (used as the only step by avocado list and as the first step by avocado run) follows a design (security) decision of not executing code from the test. We have to discover all tests by static syntax analysis. Checking whether a test will be skipped/canceled or not before actually running any test code would probably be impossible.
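To make that concrete, here is a minimal sketch (the sysfs path is a placeholder, not the real check from avocado-misc-tests): the condition below is only evaluated when setUp() runs on the system under test, so neither avocado list nor the discovery step of avocado run can know whether the test will end up CANCELed.

import os

from avocado import Test


class GenWqeSketchTest(Test):

    def setUp(self):
        # Runtime-only check: static discovery never executes this code.
        if not os.path.exists("/sys/class/genwqe"):
            self.cancel("No GenWqe device found on this system")

    def test(self):
        pass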
The existence of GenWqe is checked during the test suite's class-level setUp(). A problem could be that it is hard to distinguish these two cases: (1) the device has no GenWqe and the tests should be SKIPped, and (2) the device has GenWqe but it is not working correctly and the tests should be CANCELed (as the check is executed in setUp(), FAIL is no option here, I guess).
Once you understand the meaning of the statuses, you can use them as you wish. It's acceptable to call self.fail() from setUp(). The main difference between SKIP/CANCEL and FAIL is that FAIL will make the Job exit with a non-zero exit status:
~$ avocado run canceltest.py
JOB ID : 188d3bd44b28913ed1b5f26d71e51d3c6c6daa87
JOB LOG : /home/apahim/avocado/job-results/job-2017-11-09T13.29-188d3bd/job.log
(1/1) canceltest.py:CancelTest.test: CANCEL (0.00 s)
RESULTS : PASS 0 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 1
JOB TIME : 0.83 s
JOB HTML : /home/apahim/avocado/job-results/job-2017-11-09T13.29-188d3bd/results.html
~$ echo $?
0
~$ avocado run failtest.py
JOB ID : c373f5c43d78a980ee68319ffc82b525feece90b
JOB LOG : /home/apahim/avocado/job-results/job-2017-11-09T13.29-c373f5c/job.log
(1/1) failtest.py:FailTest.test: FAIL (0.04 s)
RESULTS : PASS 0 | ERROR 0 | FAIL 1 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME : 1.00 s
JOB HTML : /home/apahim/avocado/job-results/job-2017-11-09T13.29-c373f5c/results.html
~$ echo $?
1
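The canceltest.py and failtest.py sources are not shown above; a minimal sketch of what they could look like (class names matching the job output above, messages illustrative, along the lines of Avocado's bundled example tests):

# canceltest.py
from avocado import Test


class CancelTest(Test):

    def test(self):
        # The job above still exits with status 0.
        self.cancel("canceling on purpose")


# failtest.py
from avocado import Test


class FailTest(Test):

    def test(self):
        # This is what makes the job exit with a non-zero status.
        self.fail("failing on purpose")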
Now, between SKIP and CANCEL, the difference is that a skipped test cannot have any part of it executed (not even setUp()). A canceled test can. This means that to SKIP a test, the condition check code has to live outside the test class. To CANCEL a test, the condition can live inside the test code, and it also means that setUp() and tearDown() will both be executed.
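A hedged illustration of that difference, assuming the avocado skipUnless decorator and using placeholder sysfs paths rather than the real GenWqe checks from avocado-misc-tests:

import os

from avocado import Test, skipUnless

# Evaluated at import time, i.e. outside the test code, which is what
# makes a real SKIP possible (placeholder check).
GENWQE_PRESENT = os.path.exists("/sys/class/genwqe")


class GenWqeStatusExample(Test):

    def setUp(self):
        # setUp() does run for a test that ends up CANCELed, so a
        # "present but not usable" device can be probed here
        # (hypothetical probe).
        if GENWQE_PRESENT and not os.listdir("/sys/class/genwqe"):
            self.cancel("GenWqe device present but not usable")

    @skipUnless(GENWQE_PRESENT, "No GenWqe device on this system")
    def test(self):
        # SKIPped as a whole (setUp() never runs) when the device is
        # absent, because the condition was computed outside the test.
        pass

    def tearDown(self):
        # tearDown() also still runs for a canceled test.
        self.log.info("cleaning up")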
Btw, not really related, but maybe a way out: you can tag the tests and then run only the tests tagged with a given tag: http://avocado-framework.readthedocs.io/en/55.0/WritingTests.html#categorizing-tests
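For reference, a minimal sketch of how such tagging looks, following the linked documentation (the class and tag names are illustrative):

from avocado import Test


class GenWqeTaggedTest(Test):
    """
    GenWqe example test.

    :avocado: tags=genwqe,hardware
    """

    def test(self):
        pass

Tagged tests can then be selected with something like avocado run --filter-by-tags=genwqe.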
No. In Avocado, the test discovery (used as the only step by avocado list and as the first step by avocado run) follows a design (security) decision of not executing code from the test. We have to discover all tests by static syntax analysis. Checking whether a test will be skipped/canceled or not before actually running any test code would probably be impossible.
Ok, that's reasonable.
Once you understand the meaning of the statuses, you can use them as you wish. It's acceptable to call self.fail() from setUp(). The main difference between SKIP/CANCEL and FAIL is that FAIL will make the Job exit with a non-zero exit status.
I didn't think about that ...
Now, between SKIP and CANCEL, the difference is that a skipped test cannot have any part of it executed (not even setUp()). A canceled test can. This means that to SKIP a test, the condition check code has to live outside the test class. To CANCEL a test, the condition can live inside the test code, and it also means that setUp() and tearDown() will both be executed.
... then there is no doubt about how to interpret the test status any more. Thanks for the explanation.
Btw, not really related, but maybe a way out: you can tag the tests and then run only the tests tagged with a given tag: http://avocado-framework.readthedocs.io/en/55.0/WritingTests.html#categorizing-tests
I will consider that for potential tests I add by myself or for possible PR for existing tests.