
Comments (7)

deanishe commented on July 28, 2024

So this is some kind of integration testing (your code <--> Alfred interaction)?

What goes in is info.plist, but what comes out exactly?

Regarding unit-testing in general:

One of the main points of test-driven development is to encourage you to structure your code in such a way that most of it can be easily tested, i.e. your data-handling functions aren't tightly-coupled to the data source. If it ain't easily testable, it's probably badly designed…

What that means is "layering" the code. The part of your code that talks to the Zotero database should be kept separate from the main workflow logic (connected via a defined API that communicates in standard Python data objects), so that you can isolate errors to a specific layer.

Apply judicious logging at the point where the layers touch to help debug users' problems.

If an error pops up in your workflow's logic, you can create a test case that calls the buggy function with a Python dict/list etc. or a JSON file that tests for that specific error without having to involve the database layer.
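
To illustrate, a minimal sketch of that layering (hypothetical names, not ZotQuery's actual schema or code):

```python
import sqlite3


def fetch_items(db_path):
    """Database layer: the only code that touches SQLite.

    Returns plain dicts, so the logic layer never sees a Connection.
    """
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute('SELECT key, title, year FROM items')
        return [{'key': k, 'title': t, 'year': y} for k, t, y in rows]


def filter_items(items, query):
    """Logic layer: a pure function over plain data, trivially testable."""
    query = query.lower()
    return [item for item in items if query in item['title'].lower()]


def test_filter_is_case_insensitive():
    """A bug in the logic layer can be reproduced with a plain list --
    no database required."""
    items = [{'key': 'A1', 'title': 'Critique of Pure Reason', 'year': 1781}]
    assert filter_items(items, 'CRITIQUE') == items
```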

Alfred-Workflow is a bad example with regard to unit tests: it's grown far too large and monolithic. My main aim for v2 is to refactor the code so that most of it can be tested without the custom workflow-ish environment set up by run-tests.sh. (Pretty much all of the test_*.py modules will fail totally outside of that environment.)

This presentation is a great example of what I'm aiming for.


fractaledmind commented on July 28, 2024

I agree with the overall view of testing <--> code. I've gotten a lot better at this lately, but I'm sure I'm still a long way off. However, I don't think that you need to write unit tests for absolutely everything, especially in an Alfred workflow written with Alfred-Workflow. If your code is written such that Alfred only makes bash calls to a Python script (with args), then you can easily automate running the various aspects of your workflow. So,

What goes in is info.plist, but what comes out exactly?

It will basically just run all of the scripts in your workflow (script filters, action scripts, output scripts). You pass the script filters a query; they run, the runner grabs the first result's arg, and then it runs all of the script connections iteratively.

I think of this as level-one testing. The focus here is coverage and ease of setup. While not everyone will structure their code in a testable way, and others will but won't actually write the tests, everyone who uses Alfred-Workflow could easily "test" their code by running all of it in one shebang and seeing what comes out.


deanishe commented on July 28, 2024

You can't write a unit test for everything. What you should strive for is code structured in such a way that you can feed any possible data into the relevant parts of the code (i.e. you don't need the whole Zotero database unless you're testing the—hopefully thin—database layer).

How does the code get the results back out of the script calls? How does it verify them?

My impression is that you could be writing tests at a more appropriate level (i.e. the function level).


fractaledmind commented on July 28, 2024

It just uses subprocess.check_output() to get the results. For script filters, it uses ETree to parse the XML (merely to get the arg of the first result, to "pipe" into any connectors). The idea is that it doesn't "verify" anything. It merely tells you what it got. I'll throw up a Gist of my results from running that script on an empty ZotQuery workflow (i.e. before a first run, so it needs to generate all data).
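
For concreteness, a rough sketch of that mechanism (the script names, paths and argument conventions here are hypothetical, not the actual module):

```python
import subprocess
import xml.etree.ElementTree as ET


def run_script(script, arg):
    """Run one workflow script the way Alfred would: a bash-style call
    to `python <script> <arg>`, capturing whatever it prints."""
    return subprocess.check_output(['/usr/bin/python', script, arg])


def first_arg(xml_output):
    """Pull the arg of the first <item> out of the script filter XML
    (arg may be an attribute or a child element)."""
    item = ET.fromstring(xml_output).find('item')
    return item.get('arg') or item.findtext('arg')


def run_connection(filter_script, action_script, query):
    """Pipe the first result of a script filter into a connected action."""
    arg = first_arg(run_script(filter_script, query))
    return run_script(action_script, arg)
```

Repeat that over every connection read from info.plist and you get the full-workflow run described below.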

I do plan on writing some function-level tests (specific to ZotQuery, obviously). But I was thinking that a workflow-level testing module could prove helpful. This allows a workflow developer to at least run his entire workflow and check out the results before pushing an update (and without having to manually open Alfred and enter stuff). I think having testing available at multiple levels is beneficial. And it is possible (as I've shown with the Gist) to write something that will simply work with any Python-based Alfred workflow that implements a CLI.

I am not advocating this as a replacement for written, function-level, workflow-specific unit tests. I am offering it as an addition that can be used by more people (let's be honest, most workflow authors using Alfred-Workflow don't have any unit tests) and provide real utility. If you break your workflow-level API anywhere, this testing will tell you.

Another benefit is beta testing. You can send the workflow to a beta tester with a fresh environment and simply tell them to run this one script/command, then return the results to you. You will again be able to see, simply and easily, whether the workflow-level API is functioning in their environment.

Perhaps it would be better not to call this "testing". Maybe that's what's hanging you up. It is simply a script that runs all of the Python code present in the workflow (that's what it reads from info.plist) via the workflow's CLI. So, it offers a full-coverage, automated run of your entire workflow. You will see all of the logs, and the input and output for each script call. If anything breaks, you will see the traceback. You run one command, and your entire workflow (all of the python script.py calls, that is) is run, with logic to pipe args into connections. It's a handy tool.


fractaledmind commented on July 28, 2024

Here's a sample output from the script: https://gist.github.com/smargh/3b1ede56360b4cceb863

This was run on a completely clean environment. So the first call has the logs for all of the data creation.


fractaledmind commented on July 28, 2024

That video is great. Thanks for sharing. I have actually been trying to figure out how best to organize my code. Clean Architecture makes a lot of sense. In fact, I think it helps to put my "testing" module in context.

If you write unit tests for all of your logic (the pure functions), how do you test your higher-level, I/O-intensive code? That's where coverall.py (new name?) comes in. By running all of your workflow commands, it ensures that all of your higher-level code (with the I/O) gets run. No extra set-up, no new unit tests. This will then also allow beta testers/users to run both types of tests (lower-level, data-in/data-out unit tests; higher-level I/O tests). I think that these two methods actually fit quite well together.


deanishe commented on July 28, 2024

You create a fake environment for your I/O code. You monkey patch the libraries your code calls, replacing real I/O libraries with canned responses and objects that collect any input, which you can then verify.

In the case of web.py, I could (should?) swap out the underlying urllib functions it calls with ones that return canned responses. I haven't bothered, as httpbin provides a great API for testing HTTP code.

For update.py, I did implement a (bad) mock Workflow that captures the input to subprocess.call() (instead of letting the tests actually try to update the workflow or otherwise run external processes) and replaces sys.exit(), so the tests don't terminate the process, which would otherwise happen when trying to test the magic arg handling.
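
An illustrative version of that pattern using unittest.mock, not the actual Alfred-Workflow test code (`install_update` is a made-up stand-in for the code under test):

```python
import subprocess
import sys
from unittest import mock


def install_update(url):
    """Hypothetical stand-in for the code under test: it shells out
    to an external process and then terminates the interpreter."""
    subprocess.call(['open', url])
    sys.exit(0)


def test_install_update_captures_call_and_survives_exit():
    # Patch out subprocess.call so nothing external actually runs, and
    # patch sys.exit so the test process isn't killed.
    with mock.patch('subprocess.call', return_value=0) as fake_call, \
            mock.patch('sys.exit') as fake_exit:
        install_update('https://example.com/Workflow.alfredworkflow')

    # Verify the captured input instead of observing real side effects.
    fake_call.assert_called_once_with(
        ['open', 'https://example.com/Workflow.alfredworkflow'])
    fake_exit.assert_called_once_with(0)
```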

