Currently, when a constraint of a UI Element does not specify `Otherwise` steps, a Test Case that explores that constraint (with an invalid input value) receives the tag `@fail`, since the Test Case is expected not to pass the tests. After executing the corresponding test scripts, Concordia evaluates the expected results and considers expected failures as successful results, labelled as `adjusted` instead of `passed` in test reports.
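For illustration, the adjustment could work like the sketch below, in TypeScript. All names here (`TestResult`, `adjustResults`) are hypothetical, not Concordia's actual API:

```typescript
// Hypothetical shape of a single executed test result.
interface TestResult {
    name: string;
    tags: string[];     // tags from the Test Case, e.g. [ '@fail' ]
    passed: boolean;    // raw outcome reported by the test runner
    status?: 'passed' | 'failed' | 'adjusted';
}

// Relabel expected failures: a test tagged with @fail that indeed
// failed counts as successful, but is marked 'adjusted' so that
// reports can distinguish it from a regular 'passed' result.
function adjustResults( results: TestResult[] ): TestResult[] {
    return results.map( r => {
        const expectedToFail = r.tags.includes( '@fail' );
        if ( expectedToFail && ! r.passed ) {
            return { ...r, status: 'adjusted' };
        }
        return { ...r, status: r.passed ? 'passed' : 'failed' };
    } );
}
```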
An alternative would be to use NLP to negate postconditions (`Then` steps) and, therefore, to invert their expectations. In this case, the tag `@fail` would no longer be needed and test reports would not need to analyze results before presenting them.
Example (today):

```
...
Variant: To add an item updates the total
  ...
  When I fill {Product Code}
    and fill {Quantity}
    and I click on {Add Item}
  Then I do not see {Total} with "0,00"
  ...

UI Element: Quantity
 - data type is integer
 - minimum value is 1
```
Concordia will produce a test case in which `Quantity` receives the value `0`, in order to explore the constraint `minimum value is 1`. Thus, the postcondition `Then I do not see {Total} with "0,00"` should fail, since the input value is considered incorrect. However, `Quantity` does not declare an `Otherwise` sentence for the corresponding constraint, which would describe the expected behavior. Concordia will then simply add the tag `@fail` to the Test Case to indicate the expectation of failure.
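The decision can be summarized in a short TypeScript sketch. The types and the function are hypothetical, not Concordia's actual code; they only illustrate the rule described above:

```typescript
// Hypothetical model of a UI Element property (constraint).
interface Constraint {
    property: string;          // e.g. 'minimum value'
    otherwiseSteps: string[];  // Otherwise steps declared for it, if any
}

// Decide which tag, if any, a generated Test Case needs when it
// explores a constraint with an invalid input value.
function tagForInvalidValue( constraint: Constraint ): string | null {
    // With Otherwise steps, the expected behavior is declared
    // explicitly, so the Test Case can still be expected to pass.
    if ( constraint.otherwiseSteps.length > 0 ) {
        return null;
    }
    // Without them, the Test Case is expected to fail.
    return '@fail';
}
```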
Proposed
Negating the original postconditions may give the same effect, without having to analyze the Test Cases' expectations:

`Then I do not see {Total} with "0,00"`

would become

`Then I see {Total} with "0,00"`

That is, when an invalid/incorrect input value is used for `Quantity`, such as `0`, the feature should not be able to produce the original postconditions. Negating them is a way to assert that.
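A minimal sketch of such a negation in TypeScript, covering only the `I see` / `I do not see` pattern. The function name and the string-based approach are illustrative; a real implementation would rely on the NLP layer that recognizes sentence intents:

```typescript
// Naively negate a Then step by toggling its "do not" modifier.
// Only the "I see" / "I do not see" pattern is handled here; other
// sentence intents would need their own negation rules.
function negateThenStep( step: string ): string {
    if ( /\bI do not see\b/.test( step ) ) {
        return step.replace( /\bI do not see\b/, 'I see' );
    }
    return step.replace( /\bI see\b/, 'I do not see' );
}

// Example:
// negateThenStep( 'Then I do not see {Total} with "0,00"' )
//   returns 'Then I see {Total} with "0,00"'
```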