James E. Hoffman
Professor Emeritus
Department of Psychological
and Brain Sciences
University of Delaware
Newark, DE 19716
Biography
I received my PhD in Cognitive Psychology in 1974 while working with Prof. Charles Eriksen at the University of Illinois in Champaign-Urbana. That same year, I landed a position as an Assistant Professor at the University of Delaware in Newark, DE. In the subsequent years (and decades!) I have done research on various aspects of spatial cognition with an emphasis on visual attention. Attention is important because studies have shown that we don't perceive and remember everything we are "looking at" in a visual scene, although it may feel like we do (see the famous gorilla study conducted by Dan Simons and colleagues). However, if we are expecting a particular object to be present (for example, looking for a friend on a busy street corner), we can set or prime our visual machinery to favor the expected object and give it priority in entering our awareness. This is an example of voluntary or top-down attention. Sometimes, though, objects may capture our attention even when we should ignore them. This is known as automatic or bottom-up attention capture. More on this distinction below.
The intricate interplay between the various mechanisms that process visual features, direct attention, and produce conscious awareness can be studied using a combination of behavioral measures and recordings of neural activity from different areas of the visual brain. This can be done using tools that vary in their temporal and spatial precision. In my lab, I used event-related brain potentials (ERPs), which are derived from EEG activity recorded from sensors on the scalp. This technique records brain activity with excellent temporal precision but imprecise spatial localization. There are other techniques, such as functional magnetic resonance imaging (fMRI), that provide much better spatial localization but relatively poor temporal resolution. These two approaches are sometimes combined to get the best of both worlds.
Williams Syndrome
One way to understand a complex system is to examine how it is disrupted when part of it is damaged. Together with Prof. Barbara Landau, who is currently at Johns Hopkins University, I pursued this approach by examining spatial abilities in children with Williams syndrome (WS), a genetic disorder that results in relatively good language and social skills together with impaired spatial abilities. For example, a person with WS can give a very good verbal description of an elephant, but their drawing of an elephant might consist of a somewhat random arrangement of disconnected parts.
We attempted to understand the nature of the WS spatial deficit by examining performance in the block construction task, which involves combining a set of blocks displaying different features in order to match a larger sample pattern or model (Hoffman, Landau, & Pagani, 2003). WS children are severely impaired in this task even though they appear to use problem-solving strategies similar to those of normally developing children (as reflected in similar patterns of eye fixations between the model and their reconstructions). Interestingly, though, we found that the WS children almost always recognized when their completed constructions did not match the model patterns, suggesting that their perception of the patterns was normal. This is consistent with other studies we did showing that object recognition is a relative strength in WS (Landau, Hoffman, & Kurz, 2006). Note that in the block construction task, people need to explicitly code location (e.g., "the part with a horizontal bar is on the bottom left corner of the model"). This combination of visual features and explicit location is thought to be processed by the dorsal aspect of the visual stream, while object recognition is carried out in the ventral stream. Considerable research now shows that people with WS have deficits in the parietal lobe of the brain, which is an important part of the dorsal stream. This research program was described in a book (Landau and Hoffman, 2012).
Emotion and Attention
Another area of my research was concerned with the ability of salient visual information to automatically capture attention. For example, if you are searching for a red triangle among a set of identical red circles, you will quickly find the triangle because it tends to "pop out" of the display (being the only triangle in a display of circles makes it salient). Now if, instead, one of the circles is green, there will be a delay in finding the red triangle because a green circle in a display of red objects is even more salient than the target, and it captures attention even though you know that green objects are irrelevant and you should ignore them. Once your attention is captured by the irrelevant green distractor object, you will quickly discover that it isn't a target, and you will suppress it and move your attention to the target.
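This kind of search display can be illustrated with a short sketch. The following Python snippet builds a toy version of the display just described (a unique red triangle among red circles, optionally with one green circle as an irrelevant but physically salient singleton); the item count and the dictionary representation are illustrative assumptions, not taken from any particular experiment:

```python
import random

def make_search_display(n_items=8, salient_distractor=False):
    """Toy pop-out search display: one red triangle (the target) among
    red circles. If salient_distractor is True, one circle is green,
    making it an irrelevant but physically salient color singleton."""
    items = [{"shape": "circle", "color": "red"} for _ in range(n_items)]
    items[0] = {"shape": "triangle", "color": "red"}      # target: unique shape
    if salient_distractor:
        items[1] = {"shape": "circle", "color": "green"}  # to-be-ignored singleton
    random.shuffle(items)                                 # randomize positions
    return items
```

In a real experiment, search time on displays with the green singleton would be compared with search time on displays without it; slower responses in the singleton condition index attentional capture.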
Many investigators have speculated that other aspects of a visual object besides its physical salience can produce automatic attention capture. For example, negative emotional or threatening stimuli such as snakes or angry faces might capture attention even if they aren't physically salient. Such stimuli might have emotional salience rather than physical salience. Studies designed to assess whether emotional pictures automatically capture attention have produced mixed results, with some studies showing automatic emotional capture and perhaps an equal number showing no such effect.
We examined the question of automatic attentional capture using a paradigm known as emotion-induced blindness (EIB; Most, Chun, Widders, & Zald, 2005). In EIB, you view a stream of pictures that are rapidly presented at fixation. Most of these "background" pictures are "scene stimuli" consisting of forests, beaches, and cityscapes. The task is to detect a target picture in which one of the scene pictures has been rotated 90 degrees left or right (e.g., the top of the picture is now on the left) and report the direction of rotation. On some trials, the target is preceded by an irrelevant picture (the distractor) containing people or animals. The distractor can be negative (for example, a snarling wolf) or neutral (a calm-looking dog). In any case, you never have to report anything about the distractor, and so it should be ignored. Interestingly, when the distractor is negative (a snarling wolf) and it appears a few pictures before the target, there is a large impairment in the ability to report the target orientation, reflecting the automatic capture of attention by the emotional distractor.
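The trial structure just described can be sketched in a few lines of Python. This is a minimal illustration only: the stream length, target position, and distractor lag below are assumptions chosen for the example (the original paradigm places the distractor a few pictures before the target), and the dictionaries stand in for actual picture stimuli:

```python
import random

def make_eib_trial(n_items=17, target_pos=10, lag=2, distractor_type="negative"):
    """Build one illustrative EIB trial: a rapid stream of upright scene
    pictures containing a rotated-scene target, with an irrelevant
    people/animal distractor placed `lag` positions before the target."""
    stream = [{"kind": "scene", "rotation": 0} for _ in range(n_items)]
    # Target: a scene rotated 90 degrees left or right; the observer
    # reports the direction of rotation.
    stream[target_pos] = {"kind": "scene", "rotation": random.choice([-90, 90])}
    # Distractor: negative (e.g., snarling wolf) or neutral (e.g., calm dog);
    # it never has to be reported and should be ignored.
    stream[target_pos - lag] = {"kind": "distractor", "valence": distractor_type}
    return stream
```

Comparing report accuracy for targets preceded by negative versus neutral distractors (and varying the lag) is what reveals the "blindness" effect.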
Subjectively, you feel that the target picture was missing from the picture stream, hence the name: emotion-induced blindness. However, the neutral distractor (calm-looking dog) also produces a deficit in the ability to report the target, albeit a smaller one compared to the negative distractor. In this case, the neutral distractor captures attention due to its physical salience. Pictures containing close-up views of people and animals are physically quite different from the background scene pictures in terms of their visual features, color, etc., and will therefore briefly capture attention. Like the neutral distractors, the negative distractors are also physically salient, but in addition they potentially have emotional salience as well, and the combination of these two sources of salience might result in greater attention capture and larger interference with the following target.
We tested this idea (Baker, Kim, & Hoffman, 2021) by removing physical salience from both negative and neutral distractors by replacing the background pictures in the stream with pictures of people and animals in a non-emotional context (e.g., two people having a conversation, a person walking a dog, etc.). In this case, the emotional and neutral distractors are physically similar to the background pictures and cannot capture attention based on physical salience. However, the emotional distractor still might capture attention based on emotional salience and might, therefore, interfere with detecting the target picture. In fact, we found that both negative and neutral distractors failed to affect target report accuracy, indicating they no longer captured attention. That means that emotional salience is fundamentally different from physical salience. Physical salience can automatically capture visual attention while emotional salience cannot, perhaps because determining the emotional meaning of a picture depends on access to the semantic system, which appears to require attention (Guida, Kim, Stibolt, Lompado, & Hoffman, 2024). Recall that in the original EIB experiment, the emotional distractor produced more interference with a following target than the neutral distractor. Since both distractors are physically salient and capture attention, this difference must reflect later stages that are involved with access to working memory, semantics, and conscious awareness.
Courses Regularly Taught
PSYC 310: Sensation and Perception
PSYC 433: Introduction to Cognitive Neuroscience
PSYC 667: Cognitive Control and Attention (Graduate Seminar)
Publications (Greatest Hits)
Visual Search
Hoffman, J.E. (1978). Search through a sequentially presented visual display. Perception and Psychophysics, 23, 1-11.
Hoffman, J.E. (1979). A two-stage model of visual search. Perception and Psychophysics, 25, 319-327.
Attention and Eye Movements
Hoffman, J. E., & Subramaniam, B. (1995). The role of visual attention in saccadic eye movements. Perception and Psychophysics, 57, 787-795.
Hoffman, J.E. (1998). Visual attention and eye movements. In H. Pashler (Ed.), Attention (pp. 119-154). London: University College London Press.
Williams Syndrome
Hoffman, J.E., Landau, B. & Pagani, B. (2003). Spatial Breakdown in Spatial Construction: Evidence from Eye Fixations in Children with Williams Syndrome. Cognitive Psychology, 45, 260-301.
Reiss, J. E., Hoffman, J. E., & Landau, B. (2005). Motion processing specialization in Williams syndrome. Vision Research, 45(27), 3379-3390.
Landau, B. L. & Hoffman, J. E. (2012). Spatial Representation: From Gene to Mind. Oxford University Press.
Landau, B., Hoffman, J.E., & Kurz, N. (2006). Object recognition with severe spatial deficits in Williams syndrome: sparing and breakdown. Cognition, 100 (3): 483-510.
Semantic Processing
Nigam, A., Hoffman, J.E., and Simons, R.F. (1992). N400 and Semantic Anomaly with Pictures and Words. Journal of Cognitive Neuroscience, 4, 15-22.
St. George, M., Mannes, S., & Hoffman, J.E. (1994). Global semantic expectancy and language comprehension. Journal of Cognitive Neuroscience, 6, 70-83.
St. George, M., Mannes, S., & Hoffman, J.E. (1997). Individual differences in inference generation: An ERP analysis. Journal of Cognitive Neuroscience, 9, 776-787.
Guida, C., Kim, M.J.B., Stibolt, O.A., Lompado, A., & Hoffman, J.E. (2024). The N400 component reflecting semantic and repetition priming of visual scenes is suppressed during the attentional blink. Attention, Perception, & Psychophysics, doi: 10.3758/s13414-024-02997-1.
Emotion and Attention
Kennedy, B.L., Rawding, J., Most, S., & Hoffman, J.E. (2014). Emotion-induced blindness reflects competition at early and late processing stages: An ERP study. Cognitive, Affective, and Behavioral Neuroscience, 14, 1485-1498.
Hoffman, J. E., Kim, M., Taylor, M., & Holiday, K. (2020). Emotional capture during emotion-induced blindness is not automatic. Cortex, 122, 140-158.
Baker, A. L., Kim, M., & Hoffman, J.E. (2021). Searching for emotional salience. Cognition, 214, 104730.