Want to see yourself here? Get LINCD up with us!
Greg is an Assistant Professor in the Cognitive Area of the Psychology Department at the University at Albany. After realizing he was not cut out for life as a musician, he turned to the far more lucrative professions of mathematical psychology and cognitive science. Along the way, he was fortunate to get to work with a number of fantastic mentors and collaborators, from his undergraduate studies at the University of Maryland (with Isaiah Harbison and Michael Dougherty), to his graduate studies at Indiana University (with Rich Shiffrin, Michael Jones, and Rob Nosofsky), to his postdoctoral work at both Syracuse University (with Amy Criss and Mike Kalish) and Vanderbilt University (with Tom Palmeri, Gordon Logan, and Jeff Schall). Greg's research is aimed toward developing dynamic computational models of the processes involved in attention, memory, and decision making and how they interact to support adaptive behavior. He teaches Statistical Methods for Psychology, as well as various graduate courses including Human Memory, Complex Mental Processes, Information Processing and Perception, and a new course in Bayesian Data Analysis. Astute nature watchers may sometimes glimpse Greg in the wild amidst the gorgeous hills and streams of Upstate New York.
Nate's research interests focus on memory, music cognition, computational modeling, and personality. He joined the lab as a Ph.D. student in the Cognitive Psychology program at the University at Albany, SUNY, in Fall 2021. Nate graduated from Union College in 2020 and earned a University Innovation Fellowship from Stanford University in 2019. In the summer of 2022, he worked at the Air Force Research Laboratory at Wright-Patterson Air Force Base via the Repperger internship program. Previously, he conducted choirs and taught high school music courses as a Teaching Fellow in the Fine Arts Department at Culver Academies. He also composes video-game and chamber music and performs as an operatic tenor.
Pierce's research interests intersect music, memory, and prediction. Broadly, his current research focuses on the computational modeling of memory for music. Specifically, he is interested in disentangling how we learn relationships between events (e.g., notes, phrases, chord progressions, etc.) and how we use this information to make predictions about what we will experience next. Music in this context serves as an assay for memory processes involved in learning contingent/probabilistic relationships and for the ability to make predictions using knowledge of those relationships. Pierce graduated magna cum laude from Elon University in 2022 with a degree in Psychology and Philosophy, where he worked under Amy Overman examining how we form, and later update, impressions of the people we meet. Following graduation, Pierce spent a year working under Jason Watson at CU Denver, norming a database of nature images and measuring their influence on attention restoration. Pierce joined the LINCD lab as a Cognitive Science Ph.D. student at the University at Albany in Fall 2023. Outside of the lab, Pierce is an avid musician (flute) and loves to play tennis.
Julia's current research focuses on the relationship between depressive symptoms and reasoning ability in both social and non-social situations. People with depression are sometimes described as having a "more realistic" view of the world, suggesting they should be more logical when dealing with non-social situations. However, people with depression also tend to make social inferences that are biased towards negative interpretations (e.g., someone didn't say hello because they hate me, not because they were just in a hurry). Is it the case that depressive symptoms could be advantageous in one situation (non-social) but harmful in another (social)?
Steven's current research focuses on perceptions of body shape as a function of musculature and weight, and how these perceptions may be warped in the presence of body dysmorphia. Do people with symptoms of body dysmorphia tend to focus their attention on body parts in different, and potentially idiosyncratic, ways relative to those without such symptoms?
Priya's work examines how information is integrated across experiences. For example, when we see two similar cars of different colors, do we remember them separately or do they get merged into a single blended representation? Are there factors that could encourage or discourage the blending of experiences in this way? When is it better to integrate information across experiences, and when is it better to keep them separate?
Sam's current project looks at how we recognize situations that have more than one element. Consider, for example, meeting several new people in the morning. Later, you go to lunch and sit down at a table with two people already there. How do you recognize whether you saw either of those people that morning? Do you recognize them separately or do you recognize them as a pair? What happens if you met one person but not the other?