MOO helps businesses print high-quality stationery.
They offer a library of stylish contemporary designs and also let customers supply their own.
This work was not commissioned or sanctioned by MOO.
When using the service I spotted a few problems and a design opportunity.
I wanted to know how significant these problems were and how much they affected other users.
Moving forward, I played the role of the client and made a few assumptions.
To learn how users reacted in these specific areas, I examined the upload journey and scripted task scenarios.
I was looking for success rates, errors, confusions, ease of use, and user feedback.
Testers would need to use the Chrome browser, install Lookback's extension, and use prepared artwork. I explained all this in a friendly email and reminded testers to 'think aloud'.
It's not always possible to coordinate testing, so I was interested to see how the data from moderated and unmoderated tests would differ.
This was also a chance to use Lookback, a great all-in-one option for testing: it serves a task list and records the screen share along with audio and video.
Previously I’d used Tape-a-call to record phone interviews. I’d also tried a combination of Skype and Quicktime screen recording, with mixed results.
It's useful to add comments to markers when reviewing playback, but I had to keep scrolling back up to navigate the video. A modular playback controller would have supported more natural commenting.
To ensure good qualitative data in the live tests, I left the testers to find their own way through the tasks as much as I could. A few prompts to think aloud were necessary.
I had to intervene where instructions were not precise enough, or where testers were blocked or strayed off-task in ways that were not relevant.
Wanting to get the most out of the test, I found it difficult to remain quiet. I caught myself asking questions (and extending the sessions) when I should simply have made notes.
Additional errors and confusion emerged when testers handled the artwork.
Reviewing the tasks informally helped me understand how difficult or confusing they were for the testers.
Because the testers were experienced designers, they were used to solving interface and production problems.
I suspect there would be value in repeating the test with less skilled customers, such as entrepreneurs.
I should be more aware of confirmation bias. For example, where a user stated “X needs to be clearer”, I did not probe deeper to ask why or how.
Capturing a meaningful account of ease of use was challenging. Testers did not seem to recall their experience clearly, so I would explore recording it against the task list as the test proceeds, for review during the interview.
Going forward, I would expect to improve the focus and persistence of my questioning to get a deeper understanding of testers’ behaviours.
I exceeded the agreed 30 minutes on a couple of tests. If task scenarios are too complicated, there is too much behaviour to interrogate and the depth of understanding gained from the interview suffers.
Most of the problems were validated and a few more were revealed.
Judging by the amount of data and the overruns, I think I may have tested too many problems.
Error counts did not reflect the whole test experience: I had a sense that testers persisted where they might otherwise have abandoned tasks.
I was not sure how to count or measure confusion, possibly a symptom of testing solo.
It was difficult to assess whether an error was caused by a usability issue or by a flawed task scenario. Unfortunately, I lost access to the videos once the Lookback trial expired.