In this hands-on studio, we invite researchers, designers, technologists, and makers to explore how tangible interfaces can form interconnected, sound-centric smart ecologies. Participants will engage with a modular toolkit to create responsive soundscapes that adapt and evolve in response to their environments.
Drawing from the practices of digital lutherie and sound design, we will experiment with interactive machine learning techniques to map sensor data to expressive sound interactions.
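As a flavor of what such a mapping can look like, here is a minimal sketch of an interactive machine learning workflow in the style of tools like Wekinator: a few demonstration pairs of sensor readings and target sound parameters are recorded, and a k-nearest-neighbours regressor then interpolates new sensor input to sound parameters in real time. The sensor values, parameter names, and scikit-learn model choice below are illustrative assumptions, not part of the studio toolkit.

```python
# Illustrative sketch: interactive-ML mapping from sensor data to sound
# parameters, assuming scikit-learn. Demonstrations pair sensor triples
# (e.g. accelerometer x/y/z) with target synth parameters.
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical training demonstrations: sensor triple -> (pitch Hz, filter cutoff Hz)
sensor_samples = [[0.0, 0.1, 0.9], [0.8, 0.2, 0.1], [0.4, 0.9, 0.3]]
sound_params   = [[220.0, 400.0], [440.0, 1200.0], [330.0, 800.0]]

model = KNeighborsRegressor(n_neighbors=2)
model.fit(sensor_samples, sound_params)

# A new, unseen sensor reading is mapped to interpolated sound parameters
pitch, cutoff = model.predict([[0.1, 0.2, 0.8]])[0]
```

In a live setting the predicted parameters would be streamed to a synthesis engine; retraining after each new demonstration is what makes the loop "interactive".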
Participants will:
- Learn how to create responsive soundscapes by integrating sound as a core design modality.
- Prototype adaptive systems, mapping sensor data to expressive sound interactions.
- Gain hands-on experience with interactive machine learning techniques.
- Explore sustainable design practices based on modular approaches.
We will collectively design a decentralized, sound-centric, networked system. We welcome anyone with an interest in sustainable design, tangible and embodied interaction, and sound. Join us at TEI’25 to reimagine how our smart objects can speak to us—and to each other.
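One way to picture such a decentralized, networked system is as a set of peer nodes exchanging short "sound event" messages with no central server. The sketch below is an assumption-laden illustration using plain UDP sockets on the loopback interface; the message schema (instrument name plus amplitude) is hypothetical and stands in for whatever protocol the studio toolkit adopts.

```python
# Illustrative sketch: two peer nodes exchanging a JSON-encoded sound
# event over UDP on the loopback interface. No central coordinator:
# every node can both send events and react to those of its peers.
import json
import socket

def make_node():
    """Create a UDP socket bound to an ephemeral local port, acting as one node."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 0))  # port 0 -> OS picks a free port
    return sock

def send_event(sock, peer_port, event):
    """Send a JSON-encoded sound event to a peer node."""
    sock.sendto(json.dumps(event).encode(), ("127.0.0.1", peer_port))

# Two hypothetical nodes; node A triggers a sound on node B
node_a = make_node()
node_b = make_node()
port_b = node_b.getsockname()[1]

send_event(node_a, port_b, {"instrument": "bell", "amplitude": 0.7})
data, _ = node_b.recvfrom(1024)
event = json.loads(data)  # node B would now render the received event

node_a.close()
node_b.close()
```

In practice a protocol such as OSC over multicast is a common choice for this kind of peer-to-peer sound messaging, but the broadcast-and-react pattern is the same.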
During the studio, participants will:
- Move beyond simple notification sounds and create rich, adaptive soundscapes that integrate seamlessly with everyday interactions.
- Prototype interactive systems that respond to sensor data, using machine learning to map gestures and movements to expressive sound.
- Embrace sustainable design practices by working with a modular toolkit and open-source principles, fostering collaboration and adaptability.
- Collaborate with fellow participants to design a decentralized, sound-centric networked system that responds to its environment.
By the end of the studio, participants will:
- Learn to design and prototype interactive sound-based systems.
- Gain hands-on experience with machine learning for creative applications.
- Explore sustainable and modular approaches to design.
- Contribute to a collaborative soundscape installation.
- Connect with a community of like-minded individuals passionate about sound and interaction.