Perception, Cognition, and Empirical Studies: My Journey So Far

My first collaboration with a psychologist was through an EPSRC project in the late 1990s, working with David J. Oborne, a professor specialising in ergonomics at work. My research assistant Mark Kiddell and I were responsible for developing a multimedia remote interview system, while David and his research assistant were responsible for conducting empirical studies. Although David and I managed to co-author papers only on the system aspect, I learnt some basics from our psychology colleagues. In 2002, my MPhil/PhD student Gareth Daniel utilised the equipment of that remote interview system to study various control protocols for multi-site, multi-user video surveillance. We designed an empirical study to compare a few protocols. Several members of staff and PhD students drove their cars around the campus, while several others attempted to spot them from their own desktop screens while controlling four shared cameras. We submitted a paper on the study to ACM CHI, which returned a set of good scores but asked for more participants and statistically significant conclusions. We could not carry out further empirical studies under similar conditions. As Gareth was more interested in video visualization by then, we submitted the work to another venue [1]. Nevertheless, I learnt a useful lesson about empirical studies.

In 2005, I visited Tom Ertl's team in Stuttgart for three months, and we had a highly productive collaboration resulting in several new visual designs for video visualization and a GPU-based implementation. Back at Swansea, my task was to organise an empirical study to compare these visual designs. I met Ian Thornton, a newly appointed psychology professor at Swansea. Having specialised in motion perception, Ian questioned whether video visualization would work without the motion in the original video. Despite his doubt, Ian was hugely supportive of the study. I designed the video stimuli, Tom's PhD student Ralf Botchen rendered the visualization stimuli, and my PhD student Rudy Hashim wrote the control software for the experiment. During the pilot test, Ian himself got about 70% of the trials correct, and he gave the green light for the study to go ahead. Rudy magically managed to recruit over 70 participants with only £2 per participant. After she had collected all the data, Ian did the ANOVA himself. While the study was designed to compare four visual designs, its most important findings were that humans could identify visual signatures of different motions using video visualization, and that the skills of video visualization could easily be learnt and retained [2].

In 2009, EPSRC awarded a substantial grant to me and Ian to work on video visualization. Rita Borgo was appointed as one of the postdoctoral researchers. Initially, Rita was not sure whether conducting empirical studies would suit her. It turned out that she was brilliant at them. While we were working with glaciologists to see if visualization could assist in the routine observation of sequences of high-resolution images, we conducted an empirical study to figure out which tasks could be performed effectively and which could not. The study [3] made a few interesting observations (e.g., on humans' ability to estimate the average intensity of a group of pixels), and motivated the subsequent development of a new spatiotemporal visualization method for glaciology data. After Rita and Ian's PhD student Karl Proctor collected all the experimental data in November 2009, Ian could not offer his analysis before Christmas. After the Christmas break, Rita came back, miraculously bringing both her analytical conclusions and her new knowledge of ANOVA. It was Rita's launch pad to becoming one of the leading experts on empirical studies in visualization. We worked together on several other empirical studies [4, 5, 13], and she also led several others independently.
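For readers unfamiliar with the one-way ANOVA that features repeatedly in these stories: it tests whether the mean scores of several experimental conditions differ by more than chance would allow. A minimal sketch, with entirely synthetic accuracy scores for three hypothetical visual designs (not data from any of the studies above):

```python
# One-way ANOVA F statistic, computed from first principles on
# synthetic accuracy scores for three hypothetical visual designs.

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over the groups."""
    all_values = [v for g in groups for v in g]
    grand_mean = sum(all_values) / len(all_values)
    # Between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares (df = N - k)
    ss_within = sum((v - sum(g) / len(g)) ** 2
                    for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_values) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

design_a = [0.82, 0.75, 0.78, 0.80, 0.77]
design_b = [0.66, 0.61, 0.70, 0.64, 0.68]
design_c = [0.79, 0.74, 0.81, 0.76, 0.80]
f_stat = one_way_anova_f([design_a, design_b, design_c])
print(f"F(2, 12) = {f_stat:.2f}")
```

A large F (here, about 27) relative to the F distribution with the given degrees of freedom indicates that at least one design's mean accuracy differs significantly from the others; in practice one would use a statistics package to obtain the corresponding p-value.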

The empirical study on visual embellishments [4] was a carefully designed dual-task experiment. All stimuli for the main task were drawn manually using Microsoft PowerPoint. I had learnt the dual-task method from reading several encyclopaedias of psychology. In this study, the method was used to detect the relatively weak signals of the impact of visual embellishments. Alfie Abdul-Rahman, my PhD student back in Swansea in the early 2000s, re-joined me at Oxford as a postdoctoral researcher. Alfie took over the development of the control software for the experiment from Farhan Mohamed, who had to return to Malaysia. Rita ran the study at Swansea with the help of Alfie, and analysed the results with the help of the Swansea psychologist Irene Reppa. While Rita became fluent in managing an empirical study and ANOVA, the study was also Alfie's launch pad to becoming an expert on empirical studies in visualization [9, 10, 11, 12, 17].

Since then, I have worked on a number of empirical studies, including laboratory studies [5, 8, 11, 12, 17, 18], crowdsourcing studies [9, 13], surveys and small-group discussions [10, 16], and field observation [15]. I have been impressed by many talented young researchers, including Ilaria Liccardi and Varshita Sher, who were innovative and unafraid of challenging the status quo; Natchaya (Pinpin) Kijmongkolchai, Rassadarie (Ploy) Kanjanabose, and David Chung, who were rigorous in their experimental designs; and Karl Proctor and Rassadarie (Ploy) Kanjanabose, who were confident and competent in delivering analytical conclusions.

In the field of visualization, empirical studies are typically conducted under the broad scope of "Evaluation". The emphasis has typically been placed on "testing" some visual designs or visualization systems as part of a software engineering workflow. While empirical studies can and should support "evaluation" in visualization, not enough emphasis has been given to the more ambitious goal of empirical studies, that is, to make new discoveries about how and why visualization works in some conditions and not in others. I first aired my concern publicly in the VIS 2013 Panel on Evaluation [6]. When I was a papers co-chair for IEEE VAST 2015, I sought agreement from the VAST colleagues and Tamara Munzner to change the paper type "Evaluation" to "Empirical Studies", and drafted its scope.

Once "making new discoveries" became one of the "official" goals, researchers could study hypotheses that relate directly to some fundamental questions about visualization. Most visualization researchers have such hypotheses, but may often feel the need to repackage them as evaluation questions. For example, when we conducted the empirical study on video visualization [2], we really wanted to know whether or not humans could detect visual signatures of different motions using a single visualization image, and whether or not humans could learn such skills easily. However, we felt that it would be more acceptable to the reviewers if we focused on comparing four types of visual designs. Nowadays, I have a bit more courage to ask discovery questions directly, such as "Can we detect and measure the human knowledge used in visualization?" [17] and "Can we perceive correlation indices with reasonable accuracy, and if not, why do we view scatter plots?" [18].
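The "correlation indices" in the last question refer to correlation coefficients of scatter-plot data, such as Pearson's r. As a minimal illustration, assuming synthetic data of my own construction (not stimuli from the study in [18]):

```python
# Pearson's correlation coefficient r, computed from first principles
# on synthetic scatter-plot data with a true correlation of 0.8.
import math
import random

def pearson_r(xs, ys):
    """Return Pearson's r for two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(200)]
# y = 0.8x + 0.6*noise, with unit-variance x and noise,
# gives a population correlation of exactly 0.8.
ys = [0.8 * x + 0.6 * random.gauss(0, 1) for x in xs]
r = pearson_r(xs, ys)
print(f"r = {r:.2f}")
```

A scatter plot of such data would show a visibly positive but noisy trend; the empirical question is how accurately a human viewer can judge r from the plot alone.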

Most of us agree that in some circumstances, visualization is more effective and/or efficient than viewing data in numerical, textual, or tabular forms, or than simply being informed of a decision by a computer. When visualization works in these circumstances, there must be some merits in perception and cognition. Hence, any causal factors that make visualization work may potentially be the causal factors that make perception and cognition work. Therefore, visualization researchers are in the right place at the right time to look for these causal factors. "We must always respect such challenges 'in theory,' but we should never be afraid of them 'in practice.'" [Chen et al., CGA 2017] In the spirit of being adventurous, I have asked some daring questions, such as those in [7, 19], which are yet to be confirmed by empirical studies.


  1. G. W. Daniel and M. Chen. Interaction control protocols for distributed multi-user multi-camera environments. Journal on Systemics, Cybernetics and Informatics, 1(5), 2003.

  2. M. Chen, R. P. Botchen, R. R. Hashim, D. Weiskopf, T. Ertl and I. M. Thornton. Visual signatures in video visualization. IEEE Transactions on Visualization and Computer Graphics, 12(5):1093-1100, 2006. (Presented in IEEE Visualization 2006.) DOI

  3. R. Borgo, K. Proctor, M. Chen, H. Jaenicke, T. Murray and I. M. Thornton. Evaluating the impact of task demands and block resolution on the effectiveness of pixel-based visualization. IEEE Transactions on Visualization and Computer Graphics, 16(6):963-972, 2010. (Presented in IEEE VisWeek 2010.) DOI.

  4. R. Borgo, A. Abdul-Rahman, F. Mohamed, P. W. Grant, I. Reppa, L. Floridi, and M. Chen. An empirical study on using visual embellishments in visualization. IEEE Transactions on Visualization and Computer Graphics, 18(12):2759-2768, 2012. (Presented in IEEE VisWeek 2012.) DOI.

  5. H. Fang, G. K. L. Tam, R. Borgo, A. J. Aubrey, P. W. Grant, P. L. Rosin, C. Wallraven, D. Cunningham, D. Marshall, and M. Chen. Visualizing natural image statistics. IEEE Transactions on Visualization and Computer Graphics, 19(7):1228-1241, 2013. (Presented in IEEE VIS 2013.) DOI.

  6. R. S. Laramee, M. Chen, D. Ebert, B. Fisher, and T. Munzner. Evaluation: How Much Evaluation is Enough? IEEE VIS Panel, Atlanta, 13-18 October, 2013. PDF(74K), Slides.

  7. M. Chen, S. Walton, K. Berger, J. Thiyagalingam, B. Duffy, H. Fang, C. Holloway, and A. E. Trefethen. Visual multiplexing. Computer Graphics Forum, 33(3):241-250, 2014. DOI. (Presented in EuroVis 2014, Slides.)

  8. S. Khan, K. J. Proctor, S. Walton, R. Bañares-Alcántara, and M. Chen. A study on glyph-based visualisation with dense visual context. Proc. Computer Graphics and Visual Computing (CGVC), 2014.

  9. A. Abdul-Rahman, K. J. Proctor, B. Duffy and M. Chen. Repeated measures design in crowdsourcing-based experiments for visualization. Proc. BELIV 2014. DOI.

  10. A. Abdul-Rahman, E. Maguire, and M. Chen. Comparing three designs of macro-glyphs for poetry visualization. Proc. EuroVis Short Papers, 2014.

  11. R. Kanjanabose, A. Abdul-Rahman, and M. Chen. A multi-task comparative study on scatter plots and parallel coordinates plots. Computer Graphics Forum, 34(3):261-270, 2015. (Presented in EuroVis 2015.) DOI.

  12. I. Liccardi, A. Abdul-Rahman, M. Chen. I know where you live: Inferring details of people's lives by visualizing publicly shared location data. Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), Honorable Mention, 2016. DOI.

  13. D. H. S. Chung, D. Archambault, R. Borgo, D. J. Edwards, R. S. Laramee, and M. Chen. How ordered is it? On the perceptual orderability of visual channels. Computer Graphics Forum, 35(3):131-140, 2016. (Presented in EuroVis 2016.) DOI.

  14. D. J. Edwards, L. T. Kaastra, B. Fisher, R. Chang, and M. Chen. Cognitive information theories of psychology and applications with visualization and HCI through crowdsourcing platforms. D. Archambault, H. Purchase, T. Hossfeld (eds), Evaluation in the Crowd: Crowdsourcing and Human-Centered Experiments, Lecture Notes in Computer Science 10264, Springer, 2017.

  15. G. K. L. Tam, V. Kothari, and M. Chen. An analysis of machine- and human-analytics in classification. IEEE Transactions on Visualization and Computer Graphics, 23(1):71-80, 2017. DOI. (Presented in IEEE VIS 2016, VAST 2016 Best Paper Award.)

  16. P. A. Legg, E. Maguire, S. Walton, and M. Chen. Glyph visualization: A fail-safe design scheme based on quasi-Hamming distances. IEEE Computer Graphics and Applications, 37(2):31-41, 2017. DOI. (Presented in VIS 2017, Slides.)

  17. N. Kijmongkolchai, A. Abdul-Rahman, and M. Chen. Empirically measuring soft knowledge in visualization. Computer Graphics Forum, 36(3):73-85, 2017. DOI. (Presented in EuroVis 2017, Slides.)

  18. V. Sher, K. G. Bemis, I. Liccardi, and M. Chen. An empirical study on the reliability of perceiving correlation indices using scatterplots. Computer Graphics Forum, 36(3):61-72, 2017. DOI. (Presented in EuroVis 2017, Slides.)

  19. M. Chen. Can Information Theory Explain Concepts in Cognitive Science? video.