tarrlab stimuli

You have reached the tarrlab stimulus repository. With the advent of large-scale image datasets and generative models in computer vision, most of these datasets are here primarily for historical reasons (although the Greebles are still pretty special!). Note that our lab has been committed to open access and free dissemination of our stimuli since the 1990s. Many of these datasets are 30 years old (and have been on the internet since there was an internet).

Images and 3D models are for non-commercial use only. If you use any of these images in publicly available work - talks, papers, etc. - please acknowledge their source and adhere to the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Please also include an acknowledgment such as: "Stimulus images courtesy of Michael J. Tarr, Carnegie Mellon University, http://www.tarrlab.org/".

Image Similarity Toolbox

A still-useful MATLAB toolbox for computing image similarities using a variety of interpretable computer vision algorithms, written by Darren Seibert and Daniel Leeds.
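The toolbox itself is written in MATLAB. As a rough illustration of the general idea only (this is not the toolbox's own API; the histogram features, folder name, and file pattern below are assumptions), a pairwise image-similarity matrix can be computed along these lines in Python:

    # Rough sketch (NOT the toolbox's API): build a pairwise image-similarity
    # matrix from simple grayscale histograms. The feature choice, folder name,
    # and file pattern are illustrative assumptions only.
    from pathlib import Path

    import numpy as np
    from PIL import Image


    def histogram_feature(path, bins=64):
        """Normalized grayscale intensity histogram for one image."""
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        hist, _ = np.histogram(img, bins=bins, range=(0, 255))
        return hist / max(hist.sum(), 1.0)


    def similarity_matrix(paths):
        """Pairwise correlation between image features (1.0 = identical histograms)."""
        features = np.stack([histogram_feature(p) for p in paths])
        return np.corrcoef(features)


    if __name__ == "__main__":
        image_paths = sorted(Path("stimuli").glob("*.tif"))  # hypothetical stimulus folder
        print(similarity_matrix(image_paths))

The actual toolbox offers several interpretable similarity algorithms; swapping different feature extractors in for histogram_feature is the analogous move in a sketch like this.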

greebles

Greeble Generator (.zip)

A set of scripts and files for 3DS MAX that allow you to combine Greeble parts to create new Greebles. This was written some time ago, so it is up to you to debug it; we won't provide technical support.


Greebles - Asymmetric (.zip)

Worried that Greebles look like faces? Then try these asymmetric versions of the Greebles. The archive includes images and .max files. You should probably also read: Sheinberg, D., & Tarr, M. J. (2009). Objects of expertise. In I. Gauthier, M. J. Tarr, & D. Bub (Eds.), Perceptual expertise: Bridging brain and behavior. New York, NY: Oxford University Press.


Greebles - Symmetric (.zip)

The notorious Greeble object set. Scott Yu designed these objects as a control set for faces - discriminating among them requires attention to subtle variations in shape, and they are hierarchically organized into genders, families, and individuals. See the readme.txt file for an explanation of the current naming scheme. Images are TIFFs generated from the new 3DS Max versions of the Greebles (they correspond to the included MAX or 3DS files). The 3D file format is 3D Studio Max (extension .max), which should be importable into many different 3D modeling programs. Each Greeble file contains the same camera positions and a standard textured purple shading. The archive includes two viewpoints for each Greeble, plus the .max and .3ds files (thanks to Jeff Munson, University of Washington, for converting the MAX files to the more portable 3DS format).


Greebles - Curvilinear (original) and matched Rectilinear Greebles (PNG & 3DS; .zip)

Stimuli from "Greebles actually do look like faces (but not in the way you thought)" by Juliet Shafto et al. (2015).

novel objects

Fribbles (.zip)

There are 12 species of Fribbles, each with 81 exemplars. Fribbles are made of distinct, colored parts; each species has a prototype, and each individual exemplar sits at a certain Hamming distance from that prototype (i.e., it differs from the prototype in a certain number of parts). The archive also includes Strata 3D CX models for the 12 Fribble species. We also include a text file containing a matrix of similarity ratings between pairs of some exemplars of the Fb3 (LORO) species (see Williams, 1998, Appendix B). The original Fribbles were brightly colored and textured. We also include an archive with the three most-different exemplars of all 12 species (36 pictures total), all rendered in a uniform blue color with the same smooth texture.
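To make the structure concrete, here is a minimal sketch of the prototype-plus-Hamming-distance idea, assuming each Fribble is coded as a tuple of part labels (the part names below are made up for illustration; the archive's files define the actual part coding):

    # Minimal sketch of the Fribble prototype / Hamming-distance structure.
    # Part labels are hypothetical; see the archive for the real part coding.

    def hamming_distance(exemplar, prototype):
        """Number of part slots at which an exemplar differs from its prototype."""
        assert len(exemplar) == len(prototype)
        return sum(a != b for a, b in zip(exemplar, prototype))

    prototype = ("body1", "head1", "tail1", "limb1")  # hypothetical species prototype
    exemplar = ("body1", "head3", "tail1", "limb2")   # hypothetical exemplar
    print(hamming_distance(exemplar, prototype))      # -> 2 (two parts swapped)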

Of note, there is also the "See and Grasp Data Set" from Robbie Jacobs' lab at the University of Rochester. This data set is based on the Fribbles, but includes visual and haptic features for a set of 40 Fribbles.

Fribble_jpegs (.zip) contains JPEG versions of the 81 exemplars.

Blue_Fribbles_jpegs (.zip) contains JPEG versions of the 36 Blue Fribbles.


Geons (.zip)

This image set contains 10 single-part objects - Geons. Each object is qualitatively different from the others in the set. Objects were modeled after those in Biederman and Gerhardstein's (1993) JEP:HPP article. The set includes rotations in depth around the vertical axis of 0, 45, and 90 deg (labeled A, B, and C). One nice property is that the 0 to 45 deg rotations show the same visible image features, whereas the 45 to 90 deg rotations show qualitatively different image features. This allows a comparison between equal-magnitude rotations with differing degrees of qualitative change. See Hayward and Tarr's (1997) JEP:HPP article for an experiment using these objects and more on this topic.


Hummel and Biederman Objects (.zip)

This image set was created by Mohsin Malik using FormZ for a computational project that Pepper Williams and I worked on. The objects were adapted from the 10 objects used in the 1992 Hummel and Biederman Psychological Review paper (the JIM model). Unlike most of the other sets included on this page, the images are in JPEG format; the 3D models are included for use with FormZ.


Multipart Geon Objects (.zip)

This image set contains 10 objects, each composed of 5 parts. Each object has a different central Geon with 4 parts attached around it. Objects were modeled after those in Biederman and Gerhardstein's (1993) JEP:HPP article. The set includes rotations in depth around the vertical axis of 0, 30, 45, 60, 90, 120, 150, and 180 deg (as well as a few miscellaneous rotations). One nice property of these objects is that the 0 to 45 deg rotations show the same visible parts, whereas the 45 to 90 deg rotations show different visible parts. This allows a comparison between equal-magnitude rotations with differing degrees of qualitative change. See Hayward's (1998) JEP:HPP article for an experiment using these objects and more on this topic.


Novel 2D Shapes (.zip)

This image set was created by Isabel Gauthier for use in a study on how semantic and perceptual similarity influence one another. There are 4 complete homogeneous classes of shapes, 6 shapes per class, plus some distractor shapes from other classes. Each shape is shown in the 0, 55, 110, and 165 deg orientations (clockwise picture-plane rotation). See Gauthier, James, Curby, & Tarr (2003) for experiments using these shapes and more on this topic.


Pairwise Similar Objects (.zip)

This image set was created by Scott Yu using AliasSketch and contains 12 (actually 13) objects, each composed of 5 parts. These objects have been used in a variety of studies in our lab, including experiments on class generalization, the effect of perceptual similarity on viewpoint dependence, and how changes in illumination conditions influence recognition performance.


Possible and Impossible Objects (.zip)

This image set was created by Pepper Williams for use in several studies on how the 3D structure of objects is represented in visual memory. The unique thing about this set (unlike those created and used by Schacter and Cooper) is that there are three versions of each object (40 objects in total): possible, impossible1 (one impossible part), and impossible3 (three impossible parts). See Williams and Tarr (1997, 1998) for papers using these stimuli.


Original Shepard and Metzler 3D Objects (.zip)

The famous Shepard & Metzler (1971) 3D object set. Roger Shepard gave me xeroxes of the original set that he distributed, and we have scanned the images in at 300 dpi. Included is the original readme file by Roger explaining the organization of the set. Beyond still being useful for research, these objects hold a place as perhaps the first computer-generated complex stimulus set used in visual psychophysics. All of us owe a huge debt to Roger. Graham Dean has also kindly provided the Shepard & Metzler objects as 3D models (in DXF format); be sure to read his README.TXT. Shepard, R. N., & Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171, 701-703.


String Objects (.zip)

This image set was created by Marion Zabinski using a program written by Volker Blanz and contains 40 (actually 39) objects, each composed of 5 parts. Each object is composed of a linear chain of Geons. Objects were inspired by Poggio, Edelman, and Bulthoff's work, as well as Biederman and Gerhardstein's (1993) variation. The set (STANDARD COLORS only) includes rotations in depth around the vertical axis of -90, -60, -30, 0, 30, 60, and 90 deg. There are actually 4 sets of 10 objects - by default, all parts are tubes. The sets vary in how many of the parts are unique Geons rather than tubes: Set 0 has no unique Geons in each object, Set 1 has 1 unique Geon in the middle of each object, Set 3 has 3 unique Geons in the middle of each object, and Set 5 has 5 unique Geons (and no tubes; an error resulted in only 9 objects in this set). File names are coded as Set#.Object#.View. See Tarr et al.'s (1997) Psychological Science article for experiments using these objects and more on this topic.
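Since the file names encode set, object, and view, a minimal parsing sketch (assuming the names follow exactly the dotted Set#.Object#.View pattern with any file extension already stripped; verify this against the archive) might look like:

    # Sketch for parsing String Object file names of the form "Set#.Object#.View",
    # e.g., "3.5.-60" for Set 3, Object 5, -60 deg view. The exact separators and
    # extension handling are assumptions; check them against the actual archive.
    from typing import NamedTuple


    class StringObjectName(NamedTuple):
        set_id: int
        object_id: int
        view_deg: int


    def parse_name(stem):
        """Split a 'Set#.Object#.View' stem into its three numeric fields."""
        set_part, object_part, view_part = stem.split(".", maxsplit=2)
        return StringObjectName(int(set_part), int(object_part), int(view_part))


    print(parse_name("3.5.-60"))  # StringObjectName(set_id=3, object_id=5, view_deg=-60)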


Two Part Objects (.zip)

This image set was created by Will Hayward using Strata StudioPro and contains 5 objects, each composed of 2 parts - all 10 parts are qualitatively different from one another. Objects appear in 9 viewpoints separated by 10 deg increments (the 0 deg viewpoint is straight on and the 90 deg viewpoint is a profile view). No views are near mirror reflections of other views, and both parts are visible from every viewpoint. As such, these objects fulfill Biederman and Gerhardstein's conditions for viewpoint invariance (we actually found viewpoint dependence). See Hayward and Tarr's (1997) JEP:HPP article for sequential matching and naming experiments that use these objects and more on this topic.


Yadgits (.zip)

This image set was created by Jerome Harris as a more artifact-like 3D novel object set, intended as a contrast to the Greebles and YUFOs. There are 6 families, with 6 visually similar individuals per family. One folder has the 3DS Max R5 models and the other folder has a high-res render from one viewpoint for each individual.


YUFOs (.zip)

Yu's Un-Facelike Objects. These puppies were created by Scott Yu as successors to the Greebles. Again there are families and genders. They do look like some sort of evil creatures. But not faces. Really. There are eight (8) folders containing different families of YUFOs. Each family consists of both male and female individuals rendered from several different viewpoints (all families) or from the same viewpoint but with different lighting directions (only 5 of the families). They were named by the famous cognitive neuroscientist Rene Marois.

everyday objects

Valence Objects (.zip)

Images of valence-scored objects from Sophie Lebrecht’s PhD thesis, “Micro-valences”: Affective valence in “neutral” everyday objects, Brown University, Department of Cognitive, Linguistic, and Psychological Sciences, 2012. The appendices from the thesis listing the valence scores for the images (obtained in an implicit behavioral task) are also included.


The Object Databank (.zip)

The Object Databank is a set of realistic three-dimensional objects for use in computational and psychological studies. The archive includes 24-bit color images of 209 objects from 14 viewpoints in TIFF format. If you wish to manipulate the objects' appearance more directly, the 3D models are included in their native formats (FormZ or AliasSketch).


Snodgrass and Vanderwart 'Like' Objects (.zip)

A new set of colored and shaded images commissioned by Bruno Rossion (Brown University and University of Louvain, Belgium) and Gilles Pourtois (Tilburg University, The Netherlands). Normative data similar to those in the original Snodgrass and Vanderwart (1980) paper have been collected. Each of these images was loosely based on an image from the original Snodgrass and Vanderwart set. Please cite the following paper in any presentation or published research using these new images: Rossion, B., & Pourtois, G. (2004). Revisiting Snodgrass and Vanderwart's object set: The role of surface detail in basic-level object recognition. Perception, 33, 217-236.


Snodgrass and Vanderwart Objects (.zip)

Apparently Joan Snodgrass sold the copyright to the original images in the 'Snodgrass and Vanderwart' image set to a for-profit company of some sort. Each time someone purchases the images from the company, she gets some share of the fee. This is all rather odd in that APA might hold the copyright per the standard agreement we sign on publication. No other identification of copyright appears in the original article. Moreover, charging for stimuli used in a published study seems pretty contrary to the spirit of academic exchange. In any case, the company verbally told me to remove these images from my web site, so I did for something like a decade. But as you can see, they are back. At this point they should really only serve as a historical curiosity, useful for replication or the like. The fact of the matter is that the original images are very poor quality, and using such line drawings can produce misleading results, so you are MUCH better off using the new set of colored and shaded images commissioned by Bruno Rossion ("Snodgrass and Vanderwart 'Like' Objects").


Diagnostic Color Objects (.zip)

Color images of many diagnostic-color objects, e.g., a banana (which is typically yellow). Objects are shown in typical and atypical colors. There are also control sets of neutral-color objects. The original set was used as stimuli in Naor-Raz, G., Tarr, M. J., & Kersten, D. (2003). Is color an intrinsic property of object representation? Perception, 32, 667-680. There is also a bigger and better set that we have used in subsequent studies.


Change Blindness Scenes (.zip)

This set of scenes was used as stimuli in the studies reported in Aginsky, V., & Tarr, M. J. (2000). How are different visual properties of a scene encoded in visual memory? The set contains many variants of individual scenes. Variants were generated by either moving some element of the scene or changing its color.

events

Real-World Events (.zip) & PsychToolBox Scripts (.zip)

Movie and sound files of real-world events from Jean Vettel's PhD thesis. File names tell you which sound goes with which event; there are multiple examples of all event types. Also included is a collection of PsychToolBox scripts and related files to help those interested in getting audio-visual studies up and running in PTB using the Real-World Events files (I have no idea if these are useful anymore...).

Sound Events Database from the Auditory Lab @ CMU

The Sound Events Database is a unique collection of recordings of sounds that were made for research purposes. A variety of objects underwent various impacts, scrapes, rolls, and deformations; liquids were dripped, poured, sloshed and splashed. Every type of sound event includes five exemplars, and each exemplar lasts for several seconds (when possible).

face place(s)

A-Z Directory of Face Stimulus Datasets


Face Place. This dataset includes multiple photographs for over 200 individuals of many different races, with consistent lighting, multiple views, real emotions, and disguises (some participants also returned for a second session several weeks later with a haircut, a new beard, etc.). This is the final release (3.0). The resolution of these images is as good as it gets given that we used a standard-resolution video camera (HD video was too expensive at the time).

The images are in JPEG format (250x250 pixels, 72 dpi, 24-bit color), and we were careful to obtain the correct approvals for the use of these images in both experiments and publications for non-commercial purposes.

These face images were used in: Righi, G., Peissig, J. J., & Tarr, M. J. (2012). Recognizing disguised faces. Visual Cognition, 20(2), 143-169. doi:10.1080/13506285.2012.654624

file downloads (.zip archives)

Asian | Black | Caucasian | Hispanic | Multiracial | ReadMe

If you use any of these images in publicly available work - talks, papers, etc. - you must acknowledge their source and adhere to the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. You must also include the following: "Face images courtesy of Michael J. Tarr, Carnegie Mellon University, http://www.tarrlab.org/. Funding provided by NSF award 0339122."