Each day, we complete thousands of visual search tasks, like searching for our keys in the morning. My research lies at the intersection of attention and memory in this type of visual search. For example, how do we remember where we left our keys, and how is attention directed to that location? My broad goal is to understand how people search for objects and how their search is guided by knowledge of target properties. Specifically, I am interested in how attentional templates are formed from long-term memory and visual working memory for the purpose of attention guidance. One way that long-term memory templates can guide attention is through selection history, or the history of previous selective actions (Awh et al., 2012). For instance, learning that cups tend to be green influences the type of visual attention template that people form for that category (Bahle, Kershner, & Hollingworth, 2021). We have termed this type of category-specific learning categorical cuing: object categories structure the learning of statistical regularities (e.g., frequent category color), influence the composition of the template, and guide attention. I have found that category templates retrieved from long-term memory are biased toward the statistical regularities of recent category exemplars in a real-world, scene-specific manner, for example, finding your sofa faster when it is in your living room (Kershner & Hollingworth, 2022). In addition to producing two published papers, one paper in revision, and two more in preparation, my work on categorical cuing earned me a National Science Foundation Graduate Research Fellowship.
My doctoral dissertation examines the interaction between attention and long-term memory templates for real-world categories in visual search, specifically the formation, composition, time course, and strategic use of these templates. I have found that categorical cuing is consistent with episodic retrieval, in which individual category exemplars are retrieved to guide attention toward the target (Kershner, Duan, & Hollingworth, under review). Additionally, we found that statistical regularities organized by category were continuously acquired across multiple feature dimensions (e.g., color, orientation, location). While participants could reliably retrieve and report the predictive feature of the categories, I found that this does not necessarily imply a strategic guidance mechanism when the categories have been selected as targets or rejected as distractors; incidental guidance of attention by learning can depend on explicit forms of memory and is not limited to implicit memory. Finally, two features that can each predict the next category target (e.g., color and location) compete for priority to guide attention; under other circumstances, however, these two features can combine to produce even faster response times. I have presented these data at multiple conferences and departmental seminars and am in the process of preparing two more papers for publication. My research at the University of Iowa has earned me the J.R. Simon Early Scholarship Potential Award.
I plan to grow this line of research by investigating whether only predictive features (i.e., features that predict the likely features of the next target) guide search in categorical cuing and whether the long-term memory template for these predictive features must be loaded into visual working memory to guide search. First, my previous work does investigate different feature dimensions, but those features have always been made predictive of the category through multiple representations. Does the long-term memory template include all features of the category exemplars or only the features that have been predictive? If visual search is more efficient when the predictive feature is paired with an additional repeated but non-predictive feature than when the predictive feature is presented alone, it would suggest that all features of the last category episode are included in the template on the current trial. Second, the categorical cuing literature assumes that the long-term memory template is loaded into visual working memory to guide attention. However, recent evidence suggests that long-term memory can bypass visual working memory to guide attention (Pruin & Woodman, 2022). While it is possible that a long-term memory template is loaded into visual working memory to guide attention, it is also possible that long-term memory guides attention directly, without mediation by visual working memory. If participants show no performance decrement when completing a categorical cuing task under a concurrent visual working memory load, compared to completing the task with no concurrent load, it would suggest that long-term memory guides attention directly. This work would have a significant impact on our understanding of how attention and memory interact, especially in selection history research. Additionally, it examines the new and distinctive effect of categorical cuing, about which many questions remain to be answered.
Another line of my research explores how guidance from visual working memory and long-term memory competes for attentional priority. Using a task adapted from the probability cuing literature (Jiang et al., 2013), I placed these two sources of guidance into competition on the same feature dimension. I have found that both sources guide attention when presented independently. When placed in competition, both sources still guide attention, but visual working memory is prioritized, as measured by reaction time, the first and second objects fixated, fixation latency, and dwell time. The long-term memory effect can then be observed later in the competition. Prioritizing visual working memory guidance would be especially useful when moment-to-moment goals must override opposing long-term memory biases to meet the needs of the current task. This prioritization may arise from differences on the priority map, where visual working memory may produce the highest peak and thus receive priority to guide attention; once visual working memory has had its effect on guidance, long-term memory can then guide attention as the next highest peak on the priority map. I will continue this work, exploring differences in strategy for guidance from long-term memory versus visual working memory. It is possible that visual working memory guides attention automatically using only predictive features, while long-term memory may require strategic guidance due to a more comprehensive template representation, one that includes even irrelevant information about the category. This work will make a significant contribution to the literature showing that a new visual working memory goal can overtake a learned long-term memory bias (Berggren, Nako, & Eimer, 2020).