
Comments (3)

mcking65 commented on May 25, 2024

General feedback:

  • It would be helpful to name the use cases. It is OK if they have numbers as well, but when browsing by heading through all the use cases, and when conversing, names would help.
  • We need a definition of system roles. Should they match up to the roles in #41? Not all the roles in #41 need a matching system role. But, I wonder if each system role should match to one or more roles in the process document.
  • We need a place to describe/define various objects and states of the objects in the system. For example, what are the states that a test plan can be in, e.g., draft, in review, ready-to-run.

The use cases would be easier to specify if we had more precise language. So, I think we need to define some terms (a data-model sketch follows this list):

  • Assertion: Specifies an assistive technology behavior that is expected because a given accessibility semantic exists in a given context. For example, specifies how a screen reader is expected to behave when reading an element that has the checkbox role.
  • Test: Specifies assertions to test in a specific scenario. That is, given an implementation of an ARIA design pattern, specifies a task for a tester to complete and the assertions that need to be tested after completing the task. For example, given a checkbox, read the checkbox and then test that its role, name, and state are correctly conveyed. Note that a screen reader may provide multiple commands that read a checkbox. So, a test includes a list of commands for the tester to test.
  • Test plan: Specifies all tests for a particular implementation of an ARIA design pattern. A plan covers all AT currently supported by the project, e.g., all tests for all AT for the grouped checkbox example.
  • Test run: A run of a test plan for a specific browser/AT combination, e.g., run the checkbox test with JAWS and Chrome.
  • Test run suite: The set of test runs that covers all applicable, in-scope AT/Browser combinations for a test plan. In the present scope, we have 6 AT/Browser combinations, so a test run suite would have 6 test runs and cover a given test plan.
  • Testing round: After the conversation with Glen, I think we also need this term. A testing round is a collection of test run suites completed using a defined set of specific assistive technology versions. Thus, testers would not choose which version of JAWS to use for the JAWS runs; the runs would specifically require a version, such as JAWS 2020.1912.11. A testing round completes all test plans using all in-scope assistive technologies at specific versions.
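
To make the relationships among these terms concrete, here is a minimal sketch of the data model in TypeScript. All type and field names are illustrative assumptions, not an existing aria-at schema.

```typescript
// Illustrative data model for the terms defined above. Every name here is
// an assumption for discussion, not part of any existing aria-at code.

interface Assertion {
  id: string;
  description: string; // e.g., "Role 'checkbox' is conveyed"
}

interface Test {
  id: string;
  task: string;            // what the tester is asked to do
  commands: string[];      // AT commands to try, e.g., several reading commands
  assertions: Assertion[]; // what must hold after completing the task
}

interface TestPlan {
  id: string;
  designPattern: string;   // e.g., "Grouped checkbox example"
  tests: Test[];           // all tests for all AT supported by the project
}

interface TestRun {
  planId: string;
  at: string;              // e.g., "JAWS"
  browser: string;         // e.g., "Chrome"
}

interface TestRunSuite {
  planId: string;
  runs: TestRun[];         // one run per in-scope AT/Browser combination (currently 6)
}

interface TestingRound {
  atVersions: Record<string, string>; // pinned versions, e.g., { JAWS: "2020.1912.11" }
  suites: TestRunSuite[];             // one suite per test plan
}
```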

Use case 1 feedback

After tests have been designed and reviewed, the Admin prioritize and adds them to the system for testers to be executed. After that, the Admin will review them and later publish them 

Using the above terms, I would rewrite this as:

After test plans have been designed and reviewed, the Admin creates a test run suite for each test plan. A suite consists of a run of the test plan for each AT/Browser combination. The Admin also prioritizes test run suites. After testers execute the runs, the Admin manages the review and publication process for each test run report.

What do we mean by prioritize? I think this means sequence rather than having buckets of priorities, e.g., hi/med/low.

A test run would be executing a test plan for a single AT/Browser combination. Thus, with our current scope, each test plan will have 6 test runs, making up a test run suite. The system could automatically sequence runs within a suite. For instance, when we configure a testing round with a specific list of browser and assistive technology versions, we could specify their priority sequence.

So, the admin would prioritize or sequence suites, not individual runs. That is, the admin would say I want all the tests for this checkbox example done, then for this combobox, then for a different combobox, and so on. Put more simply, it is like prioritizing or sequencing test plans.
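
As a sketch of "prioritize means sequence", building on the hypothetical interfaces above: the admin orders suites, and the testing round's configuration orders the runs within each suite. All names here are assumptions.

```typescript
// Hypothetical sketch: priority is a position in a sequence, not a
// hi/med/low bucket.

interface SequencedSuite extends TestRunSuite {
  sequence: number; // 1 = execute this plan's runs first
}

// Order the runs within a suite by the AT/Browser priority list configured
// for the testing round, e.g., ["JAWS/Chrome", "NVDA/Firefox", ...].
// Assumes every combination in the suite appears in the priority list.
function orderRuns(suite: TestRunSuite, comboPriority: string[]): TestRun[] {
  const rank = (run: TestRun) => comboPriority.indexOf(`${run.at}/${run.browser}`);
  return [...suite.runs].sort((a, b) => rank(a) - rank(b));
}
```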

Admin submit/assign tests to testers

By default, tests should all be "up for grabs". So, assigning to a tester should be an optional step. I can imagine scenarios where we want to assign a particular test to a particular person so another person does not grab it. So, the ability to assign is important.

Since we need 2 people to grab each test, a test should be up for grabs until it has two assignees.

I can imagine that an individual tester may only run tests for a specific set of browser/AT combinations. We may want something in a tester's profile that says which ones they can run. Then, the admin cannot assign the wrong person by mistake. Or, we could even auto-assign people based on that field in their profile and their current backlog, and perhaps an availability field in their profile.
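
A minimal sketch of such profile-based assignment rules, assuming hypothetical profile fields (combos, available, backlog) and reusing the TestRun shape sketched earlier:

```typescript
// Hypothetical assignment rules: a run stays "up for grabs" until it has
// two assignees, and a tester may only take runs matching their profile.

interface TesterProfile {
  username: string;
  combos: string[];   // AT/Browser combos they can run, e.g., "JAWS/Chrome"
  available: boolean; // hypothetical availability field
  backlog: number;    // runs currently assigned and incomplete
}

function isUpForGrabs(assignees: string[]): boolean {
  return assignees.length < 2; // each run needs two independent testers
}

function canAssign(run: TestRun, tester: TesterProfile, assignees: string[]): boolean {
  return (
    isUpForGrabs(assignees) &&
    tester.available &&
    tester.combos.includes(`${run.at}/${run.browser}`) &&
    !assignees.includes(tester.username) // one person cannot fill both slots
  );
}

// Auto-assignment could then pick the eligible tester with the smallest backlog.
```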

Initially, if we don't have tester profiles that specify such things, we may need to do one of:

  1. Rely on testers self-assigning.
  2. Rely on admins knowing the skills and availability of each tester very well.
  3. Make assignments in some kind of planning meeting.

Use case 2 feedback

Tester executes test 

Need to be specific that this is for a single AT/Browser combo. Maybe this should be "tester executes test run"? Or do we just mean an individual test in a test run? I think we mean only test runs.

I think testers should only see runs in their queue, not individual tests. And, the run should have a status showing x of y tests complete. Opening the run could automatically show the page for the first incomplete test in the run.
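
A sketch of that queue behavior, with an assumed TestResult shape:

```typescript
// Hypothetical status for a run in a tester's queue: show
// "x of y tests complete", and open the run at the first incomplete test.

interface TestResult {
  testId: string;
  complete: boolean;
}

function runStatus(results: TestResult[]): string {
  const done = results.filter((r) => r.complete).length;
  return `${done} of ${results.length} tests complete`;
}

// The test to present when the tester opens the run, or undefined when
// every test in the run is already complete.
function firstIncompleteTest(results: TestResult[]): string | undefined {
  return results.find((r) => !r.complete)?.testId;
}
```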

Precondition 1: Tests have been prioritized.

I think this should be test runs have been prioritized. Within a test run, the test plan specifies the sequence of tests.

Precondition 2: Tests have been added to the pipeline.

I think this should be test runs have been added to the pipeline. Seems like these preconditions should be reversed -- 1 should be 2 and 2 should be 1.

Use Case 2 - Basic flow: Execute test

I think we mean test run here.

This is the main success scenario. It describes the situation where only executing and submitting a test are required.

I would rewrite as:

This is the main success scenario. It describes the situation where only executing and submitting the tests in a test run is required. 

Then, the first step needs to be the tester choosing a test run from their queue. If the tester is not assigned a test run, then the tester would grab the first up-for-grabs test run that matches the AT/Browser combination that the tester is prepared to use. The tester's queue could automatically be filtered based on the browser in use.

1: Tester provides information about what browser and screen reader combination they will be using.

Note that we need to word use cases based on the long-term scope, so use the term "assistive technology" instead of "screen reader" where appropriate.

That said, we don't need this step. The tester will choose a test run, e.g., "Checkbox tests in Chrome with JAWS", which specifies both the browser and AT.

As I described above, given Glen's feedback, at a given point in the project timeline, we may only want the checkbox tests performed with a specific version of JAWS, e.g., JAWS 2020.1912.11. So, the test admin may set up the run so it specifies both the browser and the exact version of the assistive technology. Thus, we may need this step to ask the tester to verify they are using the correct version of the AT.

That is, this step could be:

Tester verifies the version of the assistive technology being used exactly matches the version required and that it is running with a default configuration as specified on the "Test setup requirements" page of the wiki.
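
As a sketch of that verification step (the function and message are assumptions, not an existing aria-at feature):

```typescript
// Hypothetical version check: the run pins an exact AT version, and a
// mismatch blocks the tester from starting.

function verifyAtVersion(required: string, reported: string): string | null {
  // Exact match only, e.g., "2020.1912.11". Returns null when OK,
  // otherwise a message explaining what is required.
  if (required.trim() === reported.trim()) return null;
  return `This run requires AT version ${required}, but ${reported} was reported. Please install the required version before starting.`;
}
```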

Next:

3 Tester opens the test.

You don't need this step because the previous step says:

2: Tester gets a set of tests according to the browser and screen reader combination they will be using.

Perhaps you could merge these two into a single step that is:

Tester is presented with the first incomplete test in the sequence of tests in the test run.

6: Tester submits the test for review.

The tester does their own review of what they have saved. I think the steps are, as we have in the current runner: the tester follows the steps, saves and previews results, then goes to the next test in the test run. After all tests are complete, the tester submits the test run.
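
A sketch of that per-test flow and the run-level submit rule (the state names are assumptions):

```typescript
// Hypothetical per-test states matching the flow above: the tester follows
// the steps, saves and previews results, then moves on. The run as a whole
// can be submitted only once every test has been saved.

type TestState = "not-started" | "in-progress" | "saved";

function canSubmitRun(states: TestState[]): boolean {
  return states.length > 0 && states.every((state) => state === "saved");
}
```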

Use case 3

Admin Publishes Test Results

I wonder what level of granularity we want here. I think we will only publish results for complete runs. That is the only unit that is worth reviewing by an AT developer.

On the other hand, say an AT developer fixes a bug that is supposed to change behavior in only one test in an entire run. I wonder if we should always re-execute the entire run and republish it? Seems like that would be necessary. If we didn't, then unintended side effects of a bug fix would not be caught.

So, I'm thinking that we only publish complete test runs, and we only review complete test runs with an AT developer. Thus, a test run needs to be complete, i.e., all tests in the plan have been run for the AT/Browser combo, before we would review it. We should specify this in the process (#41).

This scenario describes the situation where the results of the execution of a test are incorrect and need to be executed again. 

We might want a specific test within a test run re-run. Should there be the ability for the Admin to remove results from that single test only, which would then make the run incomplete, and then put the test run back in the tester's queue? In this case, it would show the tester that, for example, 15 of 16 tests are complete. Opening the run would go directly to the incomplete test. Perhaps there could be a note from the admin at the top of the test.
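
A sketch of that re-run flow, reusing the hypothetical TestResult shape from the earlier sketch:

```typescript
// Hypothetical re-run flow: the admin clears the results of one test, which
// makes the run incomplete again (e.g., 15 of 16 complete) so it can return
// to the tester's queue, optionally with a note shown at the top of the test.

interface RunForReview {
  results: TestResult[];
  adminNotes: Map<string, string>; // testId -> note from the admin
}

function requestRerun(run: RunForReview, testId: string, note?: string): void {
  const result = run.results.find((r) => r.testId === testId);
  if (!result) return;
  result.complete = false; // the run is now incomplete
  if (note) run.adminNotes.set(testId, note);
}
```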

isaacdurazo commented on May 25, 2024

Thanks for the thoughtful feedback, @mcking65! I've now incorporated your suggestions and expanded the use cases with two more alternative flows that include: 1) requiring a specific AT version when submitting a test run to testers and 2) an option for selecting a group of testers depending on the needs of the test run created by the test admin. I've also made several improvements to the wording to make it consistent with the Working Mode Document.

These use cases now live in the wiki page. Let's keep using this issue to continue the discussion.

zcorpan commented on May 25, 2024

There's also a wiki page for "high-level use cases". Should these two pages be merged? Or should they be separate but renamed?
