
alkiln's Introduction

The Legal Innovation and Technology Lab's website

This repo is served up as the LIT Lab's website and is available at suffolklitlab.github.io.

alkiln's People

Contributors

berit, brycestevenwilley, jostran14, mcdonaldcarolyn, miabonardi, michaelhofrichter, niharikasingh, plocket, rpigneri-vol


alkiln's Issues

Action: Do we need to add an action to update supporting files in dependent repos?

When this repo gets updated and new code gets added, some of the supporting files in dependent repos, like .gitignore, need to change too. For example, when we add the ability to download files, we then need to ignore those downloads in .gitignore.

Do we need another action to do that, or can we do it with the existing push action? We would at least have to handle the case where a file has no changes, so that it doesn't trigger an error.

[See https://github.com/plocket/push-generated-file/issues/2]

Add more instructions about running the tests to the test message

  1. Don't run multiple tests in one Playground account at the same time, even if they're on different branches. That includes:
    1. Hitting the 'Commit' button in the Playground, that is, pushing a branch (whether an old branch or a new one)
    2. Making a pull request
    3. Merging a pull request
  2. A 'timeout' failure might just mean you have to run the test again. Look at which step the test failed on and at the screenshot from the error.
  3. If the tests often fail with a timeout at a particular spot, add a 'When I wait 1 second' or 'When I wait 2 seconds' step just before the step that fails. Something there may often take extra time.
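For context, the wait step in tip 3 boils down to a plain timeout on the support-code side. A sketch (the real ALKiln step definition may differ, and would be registered with cucumber-js's `When`):

```javascript
// Sketch: what a 'When I wait N seconds' step resolves to in support code.
async function waitSeconds(seconds) {
  await new Promise((resolve) => setTimeout(resolve, seconds * 1000));
}
```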

Allow using variable name to identify a field

Apparently the field's name attribute value is the base64-encoded version of the variable name. That can be useful for people handling translations through separate .yml files, as it can be language agnostic.
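Assuming that observation holds, the mapping goes both ways. A sketch (`toFieldName`/`toVariableName` are hypothetical helper names):

```javascript
// Sketch: convert between a docassemble variable name and the base64
// field name observed in the HTML. Helper names are hypothetical.
function toFieldName(variableName) {
  return Buffer.from(variableName, 'utf8').toString('base64');
}

function toVariableName(fieldName) {
  return Buffer.from(fieldName, 'base64').toString('utf8');
}

// A step could then locate a field by variable name, e.g. (hypothetical):
// await page.$(`[name="${toFieldName('user_name')}"]`);
```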

Wait for 'daMainQuestion' on each new page?

Do we do that already? Something keeps timing out in MADE between "rental agreement" and "tenancy facts" and it fails pretty quickly even though continuing waits for navigation.
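If we don't do it yet, a sketch of what waiting on each new page could look like, assuming a Puppeteer-style `page` object (the helper name is hypothetical):

```javascript
// Sketch: wait for docassemble's main question element after navigating.
// Seeing the element in the DOM is a stronger "page is usable" signal
// than navigation finishing.
async function waitForQuestionPage(page, timeout = 30 * 1000) {
  await page.waitForSelector('#daMainQuestion', { timeout });
}
```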

Random input tests

This is an alternative to having the developer write out every scenario to cover all their code. It's not ideal, but since you can't abstract in cucumber, writing every single scenario can be a huge task. It is possible, in cucumber, to let the user pass in data structures like lists. We would just have to handle randomly selecting from them.

Note: This is not a fault in cucumber - it's not meant to be used the way we're using it.

We also need to think about whether the developer will have to copy/paste this 'scenario' however many times they want the random tests to run, or whether we can run them repeatedly somehow. That might be better in its own issue. [Edit: This is probably doable now that we know how to set, and reset, our own custom timeouts.]
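The random-selection half could be sketched like this (all names are hypothetical; a seeded generator means a failing run can be logged and replayed):

```javascript
// Sketch: choose field values at random from lists the test author provides
// (e.g. via a Cucumber data table). Names are hypothetical.

// Small deterministic PRNG (mulberry32) so a failure can be reproduced
// by logging the seed.
function makeRng(seed) {
  let s = seed >>> 0;
  return function () {
    s = (s + 0x6d2b79f5) >>> 0;
    let t = s;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function pickRandomAnswers(fields, rng) {
  // fields: { variableName: [option1, option2, ...], ... }
  const chosen = {};
  for (const [name, options] of Object.entries(fields)) {
    chosen[name] = options[Math.floor(rng() * options.length)];
  }
  return chosen;
}
```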

Make multilingual tests easier to set up with less duplication

Right now, setting up multilingual tests would be a real pain. You'd have to repeat each test for each language, even if you were using variable names to identify fields - all the steps would be the same except the first one, where you'd specify a language. That's a lot of tests and a lot to maintain.

What are ideas for how to make this easier? We've come up with the approach we currently think will be easiest for developers using these tools, but we're open to other thoughts. This package would implement the functionality in the dependent repo.

Prep

  1. Allow the developer to specify their desired languages (probably as an optional environment variable)
  2. .gitignore a specific folder in the features folder. We're currently making the files directly in the features folder, so this isn't needed yet. May be worth discussing.

Additional npm script

First, make the ignored folder in the features folder. Then, for each language listed, do this:

  1. In the ignored folder, add a folder with the code of the language.
  2. Duplicate all testing files that are not in the ignored folder.
  3. Copy them into this language folder.
  4. Add a step to each scenario that would tap the button of the correct language on the intro page.

Add this npm script to the testing action just before running the tests. Maybe add another script to the package.json that would include the build step when it runs the tests.

Stretch goals

  1. Allow the developer to specify a language or list of languages when manually running the tests (once manual test running is available).

Questions

  1. If the default language isn't specified on the list, should we find a way to only run the specified languages?
  2. There was something else that seemed significant, but I don't remember what it is.

Add advice on how to handle interview defaults

Developers should write tests that actually set all the unimportant default field values in each test. When interviews get translated, they often lose those defaults because translators edit them. If a default isn't particularly important, losing it shouldn't cause the test to fail.

The only reason to leave a default untouched in a test is if the default is crucial and its absence should cause an error.

In that case, though, they should write a test that checks that the field's default state is as it should be when the page loads, instead of relying on side-effect errors to surface the problem. Those steps aren't defined yet; we should make that its own issue.
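A sketch of what that not-yet-defined step could boil down to, assuming a Puppeteer-style `page` (the helper name is hypothetical):

```javascript
// Sketch: assert a checkbox's default state on page load, instead of
// relying on downstream errors. `page` is Puppeteer-like.
async function assertDefaultChecked(page, selector, expected) {
  const actual = await page.$eval(selector, (el) => el.checked);
  if (actual !== expected) {
    throw new Error(
      `Expected ${selector} to start ${expected ? 'checked' : 'unchecked'}, ` +
      `but it was ${actual ? 'checked' : 'unchecked'}`
    );
  }
}
```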

Steps: Make `I set the...` for checkboxes?

Situation:

  1. The developer makes a default for a checkbox that makes it 'checked' when the user first comes to the page.
  2. A translator accidentally translates the words that set the default, leaving the checkbox without a default, meaning it starts 'unchecked'. Let's call the language 'Foonian'.
  3. A test comes along. To test default-language behavior, it does not tap the checkbox, leaving it checked.
  4. The automated language tests come along and build the 'Foonian' test. That test also leaves the checkbox alone, which means the checkbox stays unchecked.
  5. The checkbox determines other behavior in the interview, so the tests in 'Foonian' appear to be broken, even though that default isn't crucial to the functioning of the interview.

It's going to be a bit of a pain to add that, and I can't think of another way. It may mean that we really do only need one step for setting any variable, so that might be a plus...
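The core of such a step would force the checkbox to an explicit state rather than tapping it, so it works whether or not the translated interview kept its default. A sketch, assuming a Puppeteer-style `page`:

```javascript
// Sketch: "I set the checkbox to ..." forces an explicit state instead of
// toggling, so it's immune to a missing default. `page` is Puppeteer-like.
async function setCheckbox(page, selector, desired) {
  const isChecked = await page.$eval(selector, (el) => el.checked);
  if (isChecked !== desired) {
    await page.click(selector); // only tap when the state needs to change
  }
}
```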

Try to start an interview a few times with a fair break in between

We're still having trouble getting an interview to start for the first test in the series. The interview file doesn't load for the first test even though the tests after it do start. It sounds like the server may have started for 'test.yml' but hasn't finished loading everything. We can do something similar, though less extreme, to what we do with 'test.yml' to give it a couple of extra shots.

Let's call it 3 or so tries with 20 or 30 seconds between each.
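A minimal sketch of that retry loop (the function names are hypothetical; `loadOnce` stands in for whatever actually navigates to the interview and sanity-checks it):

```javascript
// Sketch: try to load the interview a few times, pausing between attempts
// to give the server time to finish starting up.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function loadInterviewWithRetries(loadOnce, attempts = 3, pauseMs = 20 * 1000) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await loadOnce(); // e.g. page.goto(interviewUrl) plus a check
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) await delay(pauseMs); // wait before the next try
    }
  }
  throw lastError; // all attempts failed
}
```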

If first `test.yml` interview fails to load, stop all further scenarios and features

[It would fail because the server was not working, since test.yml already exists in the Project's files and isn't dependent on the GitHub code.] This would come after we implement a more robust way of loading the interview initially. Track #52 for this.

Maybe look at https://github.com/cucumber/cucumber-js/blob/master/docs/support_files/hooks.md. Also maybe CLI options, though I haven't found docs for those yet.

[CLI options: https://github.com/cucumber/cucumber-js/blob/master/docs/cli.md]

[We actually probably want to do this in the actions script. We really want to figure out how to set up that script so that we can change it when we need to, instead of people having to update their own action scripts. Bleh.]
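If we do it in the support code instead, one sketch (hedged; names are hypothetical) is a shared flag set when the first 'test.yml' load fails, checked by a cucumber-js Before hook. If I'm reading the hooks doc right, returning 'skipped' from a Before hook skips the scenario:

```javascript
// Sketch: a shared flag in a support file. The first failing load calls
// markServerDead(); every later scenario's Before hook then skips itself.
let serverIsDead = false;

function markServerDead() {
  serverIsDead = true;
}

// Shaped like a cucumber-js Before hook body: returning 'skipped' is the
// documented way to skip a scenario from a hook (per the doc linked above).
function beforeScenario() {
  if (serverIsDead) return 'skipped';
}
```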

Separate some of the interview start options into individual steps

I overloaded the 'Given' step a bit. We can possibly break out:

  1. Choosing a device: "I start on mobile" or something.
  2. Possibly break out choosing a language into its own step too, though it might complicate the automated language-selection step, so it may not be worth it.

Thoughts?

Detect actual breaking interview error more quickly

We might look into whether <meta itemprop="name" content="docassemble: Error"> is consistently present on actually broken pages. That would really speed up finding at least one type of error.
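The check itself would be cheap. A sketch, assuming a Puppeteer-style `page` (whether the tag is always present is exactly the open question above):

```javascript
// Sketch: look for the error meta tag as soon as a page settles, so a
// genuinely broken interview fails fast instead of waiting for a timeout.
async function pageShowsDocassembleError(page) {
  const errorMeta = await page.$('meta[itemprop="name"][content="docassemble: Error"]');
  return errorMeta !== null;
}
```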
