Dr. Jill Drury, Mondays, 5:30 - 8:30 p.m., Olson Hall Room 404
UMass Lowell
Class Materials
Syllabus version 5
1. Goals and Prototypes
2. Heuristic Evaluation
3. Cognitive Walkthrough and Formal Methods
4. Introduction to Usability Testing
5. Designing and Conducting Usability Experiments
6a. Analysis Prep
6b. Review
7. Self-Reported Metrics
8. Issues-Based and Performance Metrics
Video of our exercise usability test
9. Combined Metrics and Coding Video
10. Cost-Benefit Analyses
______________
Helpful Documents
Sample Evaluation Materials
Usability test report template
Tutorial for NGOMSL
Grading finished for special topic presentations
I have finished sending out the grades for the special topic presentations, so please look for your personalized email message.
Guidance for Project Part 3
I have adapted a test report template from Usability.gov that I suggest you use for your Project Part 3 report. You can find it here and under the Class Materials column. I will grade the report using the following criteria:
-- Correctness: 40%. For example, does your description of the videotape analysis lead me to believe that you developed reasonable coding categories and correctly calculated the Kappa statistic (a sketch of the calculation appears after this list)? Do you compare your findings to the goals/requirements? Are the usability issues you identified likely to be "real" issues rather than false positives? Did you conduct the testing in a way that avoided threats to validity?
-- Completeness: 50%. For example, do you provide profiles of your user groups? Do you explain why you feel certain issues are important? Do you provide severity levels for the issues you identify? Have you explained your testing methodology sufficiently so that others could reproduce your evaluation? Do you list all the categories you used for coding the videotape, and provide your rationale for coding it in the way you did? Have you been thorough in finding the issues that need to be addressed? Do you compare and contrast the results from your two user groups?
-- Clarity: 10%. Is your report readable and professional? Would this report be easy for the intended customer to understand?
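Since the correctness criterion mentions the Kappa statistic, here is a minimal Python sketch of the standard Cohen's kappa calculation for two coders. The coders, categories, and segment codings below are hypothetical examples, not drawn from any project; treat this as an illustration to check your own calculation against, not as a required implementation.

from collections import Counter

def cohens_kappa(coder_a, coder_b):
    # Cohen's kappa for two coders who assigned categories to the same video segments.
    n = len(coder_a)
    # Observed agreement: fraction of segments where the coders chose the same category.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, computed from each coder's marginal category frequencies.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a.keys() | freq_b.keys())
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of ten video segments into three categories.
coder_a = ["error", "confusion", "success", "success", "error",
           "success", "confusion", "error", "success", "success"]
coder_b = ["error", "confusion", "success", "error", "error",
           "success", "success", "error", "success", "success"]
print("kappa = %.2f" % cohens_kappa(coder_a, coder_b))  # prints kappa = 0.67

The function simply implements kappa = (p_o - p_e) / (1 - p_e), so a hand calculation of the observed and chance agreement for your own data should match its output.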
Guidance for special topics
I will grade the special topics presentations in the following way:
40%: connection to what we've learned this semester. For example, if your topic is a type of non-WIMP application, would you be able to apply the evaluation techniques we've covered to [your type of system here]? If your topic is a nontraditional evaluation technique, such as remote usability testing, how does it differ from the evaluation techniques we've learned so far?
40%: correctness and coverage of the topic. I know you're not going to be able to provide very complete coverage of your topic in only twenty minutes, but you need to give me the impression that you've at least skimmed a number of relevant papers and have extracted interesting points from them.
10%: clarity of presentation.
10%: ability to answer questions from the professor or your classmates.
Special topics
Jeff: Mobile apps
Adam: Web apps for online social networks
Justin: Games
Simone: Voting systems
Heather: Cars
Amanda: Smart houses
Eric: Multitouch
Xiaoxiao: Remote usability testing
Raviteja: Kiosks
Sascha: Robots, especially with haptic/6DOF inputs
Tristan: E-readers
Leon: Biometric evaluation approaches
Groups formed
The three groups consist of the following students:
Raviteja, Leon, Adam, Simone
Sascha, Heather, Eric, Xiaoxiao
Justin, Jeff, Amanda, Tristan
Course Description
This course is a graduate-level introduction to methods for evaluating human-computer interaction designs. Participants will acquire a basic set of skills and techniques for assessing the effectiveness of interface designs, and will understand how evaluation fits into computer products' lifecycles. Further, students will practice critically assessing the HCI evaluation literature.
Return to Jill Drury's home page
Last updated on 4 May 2011