
evaluation-toolkit's Introduction

Goals of this repository

This repo is a collection of resources intended to help guide and improve the evaluation process of data visualizations. It is written by and for the Axis Design team, but it can also serve as a point of reference for all data vis practitioners. None of the guidelines mentioned here should be followed to the letter; rather, treat them with a "know your guidelines before you break them" mindset.

The goal of this repo is to serve as a toolkit that helps deconstruct the testing process into manageable chunks. It aims to:

  • Identify pitfalls in our current evaluation process
  • Provide direction for self-guided evaluation by means of best practices
  • Equip designers with the tools necessary for testing visualizations

Designers should be able to use this repo to determine success criteria for their projects, prioritize which attributes to test, and conduct more focused and productive user tests.

This is a WIP living document. Please read our Contribution Guidelines to help refine this repo.

How to Use this Toolkit

The approach advocated by this repo is:

 1. Follow best practices
 2. Determine research questions
 3. Plan the test
 4. Conduct the test
 5. Act on your findings

To break down the contents of each phase:

  1. Setting the Stage

    Before beginning the evaluation process, it's helpful to first:

    • Understand the inherent complexity that comes with evaluating a data visualization,
    • Understand the pitfalls of current evaluation methodologies, so you can avoid falling into them, and
    • Equip yourself with the testing mindset in order to properly set expectations.
  2. Follow Best Practices

    At a strategic level, this serves as an initial self-directed reflection, intended to help you determine the most important design criteria for the success of your project, as well as to help you improve the current design based on known best practices. At a tactical level, it provides guidelines to help you evaluate specific UI elements such as typography, color, and arrangement (see Data Visualization Checklist prepared by Stephanie Evergreen and Ann K. Emery).

  3. Determine Research Questions

    Next, you will need to determine what specifically you want to test and measure, and which attributes you want to prioritize for your dashboard. Do you want to test its usability, usefulness, desirability, or a combination of these attributes? This section provides:

    • A list of desirable attributes and the corresponding questions you can ask to test those attributes
    • A list of unwanted attributes you can check your visualization against
    • Guidelines on determining the style of data you should capture (verbal, multiple choice, rating, written, etc.)
    • The Axis Design Sprint Testing Template that can be used to begin documenting the evaluation process
  4. Plan the Test

    Now that you have pinpointed the relevant questions and attributes, you can use this page to devise a testing plan and methodology. This page includes information on:

    • Recruiting users/evaluators for your test
    • Choosing the right test (Attitudinal or behavioral? Qualitative or quantitative? What's the context of use?)
    • Choosing the right tasks
  5. Conduct the Test

    After you've mapped out the type of test that would be most appropriate for collecting feedback, you can reference this page for specific testing methods and instructions on how to conduct them.

  6. Act on your Findings

    Now comes the redesign and iteration. After conducting your tests, you'll need to analyze your results, then translate them into action items. This section provides:

    • Resources for analyzing both quantitative and qualitative data
    • A checklist of potential actions for design and usability changes you can make based on feedback
    • Guidelines for conveying your findings

Putting it all together

Axis Design Sprint Testing Template

Throughout the sprint we want to track how research questions mature as the design sprint progresses and what methods were used to answer those questions. Copy over this Google Sheets document to your own drive to help guide and document your testing process.

Acknowledgements: Thanks to Arielle Cason, UX Researcher, for sharing her personal test template (developed while she was a graduate student at the Georgia Institute of Technology) with us. We used her template as a substrate to build our own.

Here is a snippet of her template:

[Testing Template screenshot]


License

Copyright © 2017 Axis Group, LLC. The information contained in this document is free, and may be redistributed under the terms specified in the license.

evaluation-toolkit's People

Contributors

jessielian, lgeorge12, liza92, manasvil, werdnanoslen


evaluation-toolkit's Issues

Method Template

  • Executive Summary (if applicable)
  • What
  • When
  • Why
  • How
  • Method in Action
  • Resources
  • References

Better Attribution

Add proper links and citations to quoted text; right now the quotes just sit in quote blocks without attribution.

Operationalizing RQ's

E.g., effectiveness can be measured as the percentage of people who completed assigned tasks across multiple usability tests, or as Likert-scale ratings from a survey.
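
A minimal sketch of how these two measures could be computed (the data and variable names here are hypothetical):

```python
# Hypothetical example of operationalizing "effectiveness" from raw test results.
task_results = [True, True, False, True, True, False, True, True]  # did each participant complete the task?
likert_ratings = [4, 5, 3, 4, 5, 2, 4]                             # 1-5 survey ratings

completion_rate = 100 * sum(task_results) / len(task_results)      # % of participants who completed the task
mean_rating = sum(likert_ratings) / len(likert_ratings)            # average Likert rating

print(f"Task completion rate: {completion_rate:.0f}%")
print(f"Mean Likert rating: {mean_rating:.1f} / 5")
```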

add more links

Just in general, it'd be nice for some things to be linked, like apps, websites, methodologies, articles, and other authorities that can back up the claims and lend multiple perspectives to the issues covered here.

Why evaluate?

  • Write down the risks of evaluation
  • How UCD is based on three pillars

UX Research Landscape

In planning the test, lead with an intro to the research landscape: how measurements can be attitudinal vs. behavioral, and qualitative vs. quantitative.

Then say what people typically do for data viz evaluation, along with the pros and cons of those approaches.

Then the final section should put a stake in the ground on two methods:

  1. Usability Test
  2. Expert Review/ Heuristic Evaluation

Personal mental notes

  • A delicate balance, trade-offs (beyond lip service)
  • Link to visual design principles
  • Include emphasis on accessibility
  • Exposure to an interface before feedback gathering
  • Deferring changes vs. denying them
  • Cognitive, mechanical burden
  • UX Research Landscape
    - How your methodology can fall on a gradient
    - Context matters
    - Go back to the goals of why we are testing: diagnostic, realism
    - Operationalizing measures, e.g., effectiveness can be measured as the percentage of people who completed assigned tasks across multiple usability tests, or as Likert-scale ratings from a survey

Challenges document

The challenges document is currently in bullet form. It needs to be sewn up into prose, and additional issues should be added.

readme should have more meta repo info

The readme is classically (and on GitHub) used as a way to get around a project (or repo). Right now the readme reads very well and describes the process, but it's written in a narrative style. That's good for a one-sitting read-through, but there isn't an intro or quick section listing that would let someone quickly reference something. There also ought to be a succinct "this is what this repo is about" bit at the beginning, and a more robust one with links to each of the folders in the How to Use section.

Issues uncovered during heuristic evaluation demo

  1. Agreement about what counts as a heuristic - In most UX scenarios, Nielsen and Norman's heuristics are used as an accepted usability standard. In our case, however, we have a list of info vis best practices which, though generally agreed upon within the team, may not be as comprehensive as it could be. For instance, during the demo test there was a question, "I cannot find details about my target for this month, but how important is it really to know the number?", and the expert did not know how to categorize this issue. One approach is to capture such questions as miscellaneous. Another would be to increase the number of best practices and classify them under the useful, usable, desirable framework.

  2. Setting expectations with the expert - The expert felt the burden to find an issue in violation of every best practice, even though the best practices section is just a guide for categorizing issues and for priming the expert to think about potential issues. This comes down to having a common understanding with the expert about what they are required to do.

  3. Tasks make every test - There is a very important precursor to every testing method: writing tasks that allow the evaluator to explore the interface. Finding issues is an incidental outcome of performing these tasks, so this burden falls on the facilitator and is not something the evaluators should worry about.

  4. Writing task guidelines - Who you interviewed and who you are testing with can be an important determinant of how you write tasks and establish usability criteria.

  5. Heuristic evaluation - needs a scoring method and details on how to conduct the test.
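
As a starting point, here is a minimal sketch of one possible scoring approach, assuming Nielsen's 0-4 severity scale; the specific findings and field names are hypothetical:

```python
# Hypothetical sketch: tally heuristic-evaluation findings by severity,
# using Nielsen's scale (0 = not a problem ... 4 = usability catastrophe).
from collections import Counter

findings = [
    {"issue": "Monthly target value is hard to find", "severity": 3},
    {"issue": "Legend colors are hard to distinguish", "severity": 2},
    {"issue": "Axis label truncated on small screens", "severity": 1},
]

counts = Counter(f["severity"] for f in findings)
for severity in sorted(counts, reverse=True):
    print(f"Severity {severity}: {counts[severity]} issue(s)")
```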
