AI 4 Sound Societies (AI4S2):
adaptive sonic environments for wellbeing
A Collaboratory under the AI4S Signature Area at the University of Alberta
Wellbeing is elusive in our frenetic modern life. We are plagued by unhealthy levels of stress, due to fragmented, overstuffed schedules, exacerbated by poor diet, insufficient physical exercise, media overstimulation, and the dismal state of the global environment. Stress in turn adversely affects general health, with serious consequences for the individual and for society as a whole. Stress reduction—at work or at home—and the concomitant improvements in focus, mood, and sleep are crucial metrics for individual and social wellbeing.
All wellbeing entails adaptation via interaction, enabled by feedback pathways, co-evolving towards a harmonized social state I have termed “resonance”.[1] Either we adapt to an aggravating environment, or our environment must adapt to us. The environment is relatively fixed, and so it is usually we who must adapt—stressful in itself. Certain aspects of the environment do respond to our needs, however. The climate in one’s home is fluid, adapting via a smart thermostat that senses temperature, time, and the presence or absence of people. By contrast, the home itself is rigid: even moving furniture is cumbersome.
Research in music therapy shows that well-chosen sounds—whether produced by humans (including music), other lifeforms, or the inanimate natural world—can help reduce stress, enhancing focus, mood, and sleep. Environmental soundscapes—the sonic equivalent of wallpaper—can be relaxing, or productivity-enhancing. And like climate, the sonic environment is relatively fluid. An adaptive soundscape can remove unwanted sound through masking or noise cancellation, while adding sounds that relax, uplift, or focus, as indicated by feedback.
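The noise cancellation mentioned above rests on destructive interference: adding a phase-inverted copy of an unwanted sound cancels it. A minimal illustrative sketch (the function name and parameters are hypothetical, chosen only to show the superposition principle):

```python
import math

def sine_wave(freq_hz, num_samples, sample_rate=1000):
    """Generate a pure tone as a list of amplitude samples."""
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(num_samples)]

# An unwanted 50 Hz hum, and its phase-inverted "anti-noise" copy.
noise = sine_wave(50, 200)
anti_noise = [-s for s in noise]

# Superposing the two cancels the hum (destructive interference).
residual = [n + a for n, a in zip(noise, anti_noise)]
```

A real active-noise-cancellation system must estimate the incoming sound in real time and compensate for sensing and playback latency; this toy version only demonstrates why the superposition works.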
How can we generate an adaptive sonic environment that enhances wellbeing?
This problem—sometimes subsumed under the general notion of a “smart” or “intelligent” environment—has been addressed in several literatures, from acoustic ecology and music therapy, to health science, to mechanical and civil engineering. But as AI has developed by leaps and bounds in recent years, its applications to customizing sonic environments have lagged.
Our proposed collaboratory’s focus is AI-driven systems for adaptive soundscapes: linking people, environments, and algorithms in communications loops, using feedback to customize an auditory environment enhancing inhabitants’ social wellbeing. Potential components of such systems include computers and algorithms, biosensors (tracking physiological data, or spatial position), environmental sensors (microphones, thermometers, barometers), big datasets (including historical profile data on system users), and sound generators (speakers or headphones).
The roles of AI include recognizing and interpreting bio- and environmental signals (e.g. using supervised learning), and developing effective responses (e.g. using reinforcement learning), whether by selecting, tuning, and mixing pre-recorded sounds, or by generating new ones (e.g. using Generative Adversarial Networks). Biosignal interpretation includes AI-mediated sonification, enabling users to learn to control their own autonomic systems. Such systems may draw upon, as well as generate, big datasets, which are important for supervised learning.
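The communications loop described above—sense, interpret, respond—can be sketched in miniature. In the following illustration, every class name, threshold, and parameter is hypothetical; a deployed system would replace the threshold rule with a trained classifier (supervised learning on biosignals) and the fixed volume step with a learned policy (e.g. reinforcement learning):

```python
class AdaptiveSoundscape:
    """Toy feedback loop: biosignal -> interpreted state -> sound response.

    Hypothetical stand-in for the AI components described in the text;
    not a real system.
    """

    def __init__(self, initial_volume=0.3, step=0.05):
        self.masking_volume = initial_volume  # level of masking/ambient sound
        self.step = step

    def interpret(self, heart_rate):
        # Stand-in for supervised-learning inference on a biosignal.
        return "stressed" if heart_rate > 85 else "calm"

    def respond(self, state):
        # Stand-in for a learned policy: raise the masking sound when
        # stress is detected, relax it otherwise, clamped to [0, 1].
        if state == "stressed":
            self.masking_volume = min(1.0, self.masking_volume + self.step)
        else:
            self.masking_volume = max(0.0, self.masking_volume - self.step)
        return self.masking_volume

def run_session(system, heart_rates):
    """Feed a stream of sensor readings through the loop."""
    return [system.respond(system.interpret(hr)) for hr in heart_rates]

system = AdaptiveSoundscape()
volumes = run_session(system, [70, 90, 95, 88, 72])
```

The point of the sketch is the loop structure, not the rule: each reading from a biosensor is interpreted, and the interpretation drives an adjustment to the sonic environment, whose effect is then observed in the next reading.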
Instantiations of this concept include: therapeutic soundscapes for hospitals; interactive films; audio-based games; interactive musical compositions; and sound art installations.
Sound technologies include spatial and binaural sound, immersive sound, low-frequency sound (subwoofers), tactile sound (for the deaf), interactive sonic media, and extended (virtual or augmented) auditory reality.
AI4S2 will operate productively within a pre-existing ecosystem of relevant research groups and units, particularly those organizations with which its participants are already associated, including: the Canadian Centre for Ethnomusicology, the Sound Studies Institute, the new Sound3 Lab, the Integrative Health Network (formerly the Integrative Health Institute), the Alberta Machine Intelligence Institute, and the Precision Health, Research Creation, and Stories of Change signature areas.
For more information, contact:
Michael Frishkopf (michaelf@ualberta.ca)
Scott Smallwood
[1] See my 2019 paper in the journal Ethnomusicology.
People:
University of Alberta:
● Michael Frishkopf (PI), Professor, Music, and Director of the Canadian Centre for Ethnomusicology, Faculty of Arts
● Scott Smallwood, Professor, Music, and Director of the Sound Studies Institute, and PI for the new Sound3 Studio, Faculty of Arts
● Marilene Oliver, Professor, Art and Design, Faculty of Arts
● Lucinda Johnston, Music Librarian and certified music therapist
● Christopher Sturdy, Professor and Chair, Psychology, Faculty of Arts/Science
● Abram Hindle, Professor, Computing Science, Faculty of Science
● Nathan Sturtevant, Professor, Computing Science, Faculty of Science
● Levi Lelis, Professor, Computing Science, Faculty of Science
● Elisavet Papathanasoglou, Professor, Faculty of Nursing
● Shaista Meghani, doctoral student, Faculty of Nursing
● Jim Kutsogiannis, Professor, Department of Critical Care, Faculty of Medicine and Dentistry
● Hernish Acharya, Clinical Professor, Department of Medicine, Division of Physical Medicine & Rehabilitation; physician with Glenrose Rehabilitation Hospital, Edmonton.
● Samer Adeeb, Professor, Civil Engineering (smart environments), Faculty of Engineering
● Lindsey Westover, Professor, Mechanical Engineering (vibrational systems), Faculty of Engineering
● Xinming (Sherry) Li, Professor, Mechanical Engineering (ergonomics), Faculty of Engineering
● Rafiq Ahmad, Professor, Mechanical Engineering. Director and Founder: Laboratory of Intelligent Manufacturing Design and Automation (LIMDA, U of A), Faculty of Engineering.
Beyond the University of Alberta:
● Tiffany Brulotte (MA), Certified Music Therapist
● Greg Mulyk (MMus), Composer and Sound Designer
● Rebecca Fiebrink, Reader at the Creative Computing Institute at University of the Arts London, and in Computing at Goldsmiths, University of London (technologies for human expression, creativity, and embodied interaction; human-computer interaction; machine learning; signal processing)
● Astrid Ensslin, Professor in the Department of Linguistic, Literary and Aesthetic Studies (digital media culture; video game studies; critical sociolinguistics; gender)
● Michael Cohen, Professor of Computer Science, University of Aizu, Japan (binaural and spatial sound; interactive and immersive media; polyphonic soundscapes; XR; IoT)
● Yoko Sen, ambient electronic musician, composer, engineer and founder of Sen Sound, a social enterprise that aims to transform the soundscape in hospitals. Former fellow at Halcyon Incubator, and artist-in-residence at Johns Hopkins Sibley Innovation Hub and Stanford MedicineX.
● Eyra Abraham, founder and CEO of lisnen.com.