In visual search, attentional sets guide observers to select task-relevant targets. Through my research, I have found that attentional sets, when defined categorically (e.g., find the letter, animal, or prohibited carry-on), update dynamically throughout search according to one's current search experience (we're better at finding what we've found before) as well as one's conceptual knowledge of the search category (we're better at finding representative category members). As a result, novel targets, which match the target category description but were never shown in the search environment, are found more slowly than frequently presented targets (assuming sufficient exposure to the frequent set). However, this novelty cost can be moderated by conceptual knowledge of the target category. For example, novel mammal targets (a highly representative type of animal) show a novelty cost when presented only alongside frequently presented birds, but when presented alongside a wider range of frequently presented animal exemplars, they are found faster than other targets.
OATS impact when using letters as targets in three conditions: a 4-letter target set, a 12-letter set with short exposure to the set before the novel trials (12-short), and a 12-letter set with longer exposure. OATS development is influenced both by the number of items shown as frequent targets and by the amount of exposure to each target. The grey line represents baseline RT, with the surrounding shaded region encompassing the 95% bootstrapped confidence interval around the baseline. Error bars on novel trials represent ±1 SE of the mean.
OATS impact using natural animal images as stimuli. Observers were instructed to search for animal images, but targets were constrained to be only birds or only non-mammals. An equivalent number of novel mammal images was shown to both groups. Showing only bird targets elicited novelty slowing for mammals, while showing non-mammals generated a novelty speeding effect for mammal targets (based on typicality speeding). NTW = novel target window. The grey line represents baseline RT, with the surrounding shaded region encompassing the 95% bootstrapped confidence interval around the baseline. Error bars on novel trials represent ±1 SE of the mean.
I have developed a conceptual model to account for this Operationalization of one's Attentional Task Set (OATS). The model prioritizes attentional guidance to all members of a target category according to their representativeness, meaning representative category members start with greater prioritization than non-representative members. After this initial activation, an individual target's activation in the attentional set is dynamically updated by a combination of recent selection boosts and representativeness reinforcement. You can read more about the model here!
Conceptual model for an observer's attentional set, with interactive influences of recent task experience and categorical knowledge for three different target types, represented by blue, green, and yellow circles. Activation represents attentional sensitivity for selecting a target, with higher activation on a given node leading to more frequent and rapid detection. Initially, task instructions activate an attentional set based on categorical biases reflecting representativeness (i.e., greater initial activation for highly representative targets like the green and blue nodes). Then, on each trial, after a target is selected, activation levels update according to three parameters: (1) categorical reinforcement (a boost in activation based on how representative a given target type is, i.e., light green arrows of varying sizes), (2) selection reinforcement (a boost in activation for the selected target type, i.e., the dark green arrow), and (3) activation decay (decreasing activation for non-selected targets, i.e., the red arrow). The sum of these influences leads to an overall increase or decrease in activation for each item on each trial, and the changes in activation accumulate across trials.
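The per-trial update described above can be illustrated with a minimal sketch. This is not the computational model itself; the update rule's functional form and all parameter values (`cat_rate`, `sel_boost`, `decay`) are illustrative assumptions chosen only to show the qualitative dynamics.

```python
# Illustrative sketch of the OATS activation-update dynamics.
# Parameter values and functional forms are assumptions, not fitted values.

def update_activations(activations, representativeness, selected,
                       cat_rate=0.05, sel_boost=0.20, decay=0.10):
    """Apply one trial's worth of updating to each target type's activation."""
    new = {}
    for target, act in activations.items():
        # (1) Categorical reinforcement: boost scaled by representativeness.
        delta = cat_rate * representativeness[target]
        if target == selected:
            # (2) Selection reinforcement: boost for the selected target type.
            delta += sel_boost
        else:
            # (3) Activation decay for non-selected targets.
            delta -= decay * act
        # The influences sum on each trial and accumulate across trials.
        new[target] = act + delta
    return new

# Task instructions set initial activation from representativeness
# (e.g., "blue" and "green" are highly representative; "yellow" is not).
representativeness = {"blue": 0.9, "green": 0.8, "yellow": 0.3}
activations = dict(representativeness)

# Simulate a block in which only "blue" targets appear and are selected.
for _ in range(20):
    activations = update_activations(activations, representativeness,
                                     selected="blue")
```

After this simulated block, the frequently selected, representative "blue" type ends up with the highest activation, while the unrepresentative, never-selected "yellow" type decays toward a low representativeness-supported floor, mirroring the novelty cost and typicality effects described above.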
I am currently working to implement a computational version of the model! Stay tuned!
I used this model to predict representativeness effects in high-stakes search environments, like finding prohibited airplane carry-ons in an experimentally controlled search display. Specifically, highly representative carry-ons (like knives and water bottles; shown in blue) are less susceptible than poorly representative category members (like gas cans and fireworks; shown in orange) to the low prevalence effect, where search errors are more common for rare targets.
Research has shown that objects entering the focus of attention are not automatically encoded into working memory. Instead, information seems to be stored in memory based on its relevance to the task, or more specifically, its relevance to the response. The study of the attribute amnesia (AA) phenomenon has emphasized the importance of distinguishing what is attended to in a task from what is actually represented in memory. Commonly, this task asks participants to find a target via one feature and report a different feature (e.g., find the letter, report its location) before assessing memory for the search-relevant feature via a surprise memory test. Generally, participants have poor memory for the search-relevant feature at the surprise test, even though it entered the focus of attention, because reporting it violates their task expectations.
Here are some findings from my own research on AA and expectation-related changes to memory representations (you can find more in Publications):
We cannot automatically recall the configuration of items presented to us in a display from a given trial, though we can remember which configurations we have seen before and which are new. Read more here.
Response-relevant features are susceptible to interference from unexpected interruption, but this interference can be mitigated by a remember cue. AA effects cannot be overcome by a cue. Read more here.
The level of specificity required to report a given target feature dictates how precise the stored representation of that feature can be in terms of visual quality. See my poster on this.