Modern vision science for designers:

Making designs clear at a glance

About the course

Why do some interfaces allow users to find what they need easily, while others do not? What information can the visual system extract effortlessly, and what requires slower, more deliberate cognitive processing? What does eye-tracking data tell us about what users perceive?

Vision scientists have recently made ground-breaking progress on many aspects of vision that are key to good design. This course reviews that state-of-the-art research, including a computational model that visualizes the information available at a glance. We will demonstrate how to use this model to evaluate and guide visual designs.

Detailed information about the course content can be found here.

Making designs clear at a glance

User-friendly designs allow users to acquire a great deal of information in just one glance. But how do we quantify the available information and gain intuitions about what the user will easily perceive? What are some guidelines for creating designs that are clearer at a glance?

In this course, we will describe the critical factors limiting what you can see at a glance. We will also present our model of peripheral vision, which allows you to visualize the information available in a single glance. Below are some examples of these visualizations.

Fixating on the City Hall station, indicated by "☉" in (a), can you tell, for instance, how many subway lines run east to west? In (c), we visualize the available information: detail about the subway lines is poor in the periphery, though one can still assess the geographic layout. Next, try the same exercise with the more abstract subway map in (b). Does the task seem easier? Again, we visualize the information available when fixating the City Hall station, in (d). Peripheral vision preserves considerably more information in the abstract map.

For details, see Rosenholtz (2011).

Imagine you are driving in the car depicted in (e), and looking at the end of your lane, indicated by the "☉" mark. Can you acquire any of the information displayed on the tablet without taking your eyes off the road?

Our visualization in (f) suggests that you cannot. Little information survives, other than the location of the tablet itself, suggesting the driver will need to move her eyes in order to see what is on the tablet.
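For a rough intuition about how such visualizations convey the loss of information with distance from fixation, the sketch below applies progressively stronger blur as eccentricity increases. This is only a crude stand-in, not the model presented in the course (which pools local summary statistics rather than blurring); the function name and parameters are illustrative assumptions.

```python
import numpy as np

def foveated_blur(img, fx, fy, n_levels=5):
    """Crude foveated-blur sketch: blur each pixel more the farther
    it lies from the fixation point (fx, fy). Illustrative only;
    real peripheral-vision models pool summary statistics instead."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Eccentricity: distance from fixation, normalized to roughly [0, 1].
    ecc = np.hypot(xs - fx, ys - fy) / np.hypot(w, h)
    # Precompute progressively blurred copies (simple cross-shaped box blur).
    levels = [img.astype(float)]
    for _ in range(n_levels - 1):
        prev = levels[-1]
        blurred = prev.copy()
        blurred[1:-1, 1:-1] = (prev[:-2, 1:-1] + prev[2:, 1:-1] +
                               prev[1:-1, :-2] + prev[1:-1, 2:] +
                               prev[1:-1, 1:-1]) / 5.0
        levels.append(blurred)
    # Pick a blur level per pixel according to its eccentricity.
    idx = np.clip((ecc * len(levels)).astype(int), 0, len(levels) - 1)
    out = np.empty_like(img, dtype=float)
    for i, lev in enumerate(levels):
        out[idx == i] = lev[idx == i]
    return out
```

Applied to a screenshot with the fixation point on the road, a sketch like this shows fine detail (such as tablet text) washing out in the periphery while coarse layout survives, which is the qualitative pattern the course's visualizations quantify properly.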

Who should attend?

Students who want a quick overview of the state of the art in vision science.

Industrial and academic researchers, who will gain an updated perspective on vision as it relates to design usability, and its implications for design guidelines.

Designers and developers, who will learn about new tools to evaluate their designs, and will gain insights into how visual perception influences user experience.

We welcome anyone who has an interest in creating designs that better utilize human vision for fast and effortless processing.


Ruth Rosenholtz is a Principal Research Scientist in MIT's Department of Brain and Cognitive Sciences, and a member of CSAIL. She is a world expert in peripheral vision and its implications for how we think about vision, attention, and design.

Dian Yu is a postdoctoral associate at CSAIL, MIT. She has a Ph.D. in Psychology from Northwestern University. Dian is passionate about applying vision science to solve design problems.

Schedules & Resources

Date: The course will be held on May 6, 2019, as part of the main CHI conference program (May 4–9, 2019).

Time: 11:00 AM, Monday, May 6

Duration: One 80-minute session

Suggested reading: The organizers’ review article, Capabilities and limitations of peripheral vision, offers an overview of the characteristics of peripheral vision. Our group website offers additional details on our model of peripheral vision.


Sign up to attend the course as part of your CHI 2019 registration. The course code is TBD. It costs an additional fee (TBD) to attend (on top of the standard conference registration fee).

Courses can be added to your conference attendance package even if you’ve already registered – just log in and tick the box for this course.

For more information, please contact