Interactive Art and Pattern Recognition

An ICPR 2022 Tutorial

Tutorial Description

In the last decade, the art and theater worlds have increasingly endeavored to create immersive experiences for their audiences. In many cases, this entails the use of cameras and other sensors, combined with recognition techniques, so that audience input can modify the artwork. Recent machine learning advances have also increased artists’ enthusiasm for creating technology-enhanced artworks.

In this tutorial we discuss interactive methodologies using patterns from video, audio, biometrics, and machine learning. We show a sampling of interactive artworks, starting with the Experiments in Art and Technology (E.A.T.) collaborations of the 1960s, in which New York artists such as Robert Rauschenberg and Merce Cunningham worked with Bell Labs engineers, and continuing to the present day.

Target Audience

The intended audience spans from researchers and practitioners who are interested in learning the techniques of interactive art, to those who have a love of art and want to learn some of the modern techniques behind it. Technical material will be presented; however, it will be at a general-audience level. All presented material is referenced to enable those with interest to delve deeper. I encourage the audience to be interactive: to ask questions, make comments, and share any interactive art projects they have done.

Tutorial Outline

1. History – Experiments in Art and Technology (E.A.T.) of the 1960s in New York.

2. Video – including person detection, camera perspective, and lighting.

3. Audio – including localization, spatial sound, and speech prosody.

4. Wearables – including heart rate monitoring and EEG.

5. Biometrics – including voice, gait, and pose.

6. Virtual and Augmented Reality – including Pokémon GO and The Tempest.

7. Machine Learning – including generative adversarial networks (GANs).

8. Examples – held in theaters, art galleries, and public spaces.

9. Interactive Game – Hammer and Eggs.

10. Open Questions.

Bio - Larry O'Gorman

Larry O'Gorman is a Fellow at Bell Labs Research in Murray Hill, NJ, where he leads work in video analytics. Previously he was Chief Scientist at Veridicom, a biometrics company. He has taught in the area of multimedia security at Cooper Union and New York University.

Dr. O’Gorman has collaborated with artists on the creation of several interactive artworks including: Brooklyn Blooms with NYU at the World Science Festival, 2013; Pixelpalooza with NYU at the Liberty Science Center, 2014-17; Butterflies Alight! with Stevens Institute of Technology in Hoboken, NJ, 2015; and Omnia Per Omnia with Sougwen Chung at Mana Contemporary Gallery, 2018.

He has published over 80 technical papers and 8 book chapters, holds over 35 patents, and is co-author of the books "Practical Algorithms for Image Analysis" (Cambridge University Press) and "Document Image Processing" (IEEE Press). He is a Fellow of the IEEE and of the International Association for Pattern Recognition. He has served on the editorial boards of 4 journals and on advisory committees for the US agencies NIST, NSF, NIJ, and NAE, and for France's INRIA. He received the B.A.Sc., M.S., and Ph.D. degrees in electrical engineering from the University of Ottawa, the University of Washington, and Carnegie Mellon University, respectively.
