What are we ignoring, and what are we refusing to see?
In a concert hall, we can hear a symphony orchestra as a whole. We can also narrow our focus and attend to individual instruments, or even to the couple whispering in the next row. How we selectively attend to several things producing sound at the same time is still not completely understood, yet it is undeniable that humans have a powerful auditory system. And this is exactly what data sonification tries to put to use.
Sonification is the audio equivalent of visualization: imagine, for example, listening to the changes in global temperature over the last thousand years, to X-rays and brain waves, or even to data collected from distant planets and galaxies.
In the same spirit, SNOOP is a gadget that reads live environmental parameters and lets its users hear traces of the changes taking place around them.
It is built around two basic parameters: the sample size, i.e. how many tones are played in each melody, and the sampling period, i.e. how much time passes before it records the next reading.
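To make that concrete, here is a minimal timing skeleton in Arduino-style C++. The board, the buzzer pin, and the names sampleSize, samplePeriodMs and playNextTone() are my assumptions for illustration, not SNOOP's actual firmware:

```cpp
// Minimal timing skeleton: play `sampleSize` tones, one every `samplePeriodMs`.
// Names, pins and values are illustrative; the real firmware may differ.
const int BUZZER_PIN = 8;             // assumed piezo/speaker pin

int sampleSize = 10;                  // tones per melody (default: ten samples)
unsigned long samplePeriodMs = 1000;  // time between readings (default: 1 s)

void playNextTone() {
  // Placeholder: read the sensors and map them to a tone (see the sketches below).
  tone(BUZZER_PIN, 440, 200);         // 440 Hz for 200 ms as a stand-in
}

void setup() {
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  for (int i = 0; i < sampleSize; i++) {
    playNextTone();
    delay(samplePeriodMs);            // wait until the next reading
  }
}
```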
To sense its surroundings, it uses three components: a Light Dependent Resistor (LDR), a temperature and humidity sensor, and an ultrasonic distance sensor.
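For reference, one plausible way to read those three sensors on an Arduino is sketched below. The specific parts (a DHT11, an HC-SR04), the pin assignments, and the Adafruit DHT library are assumptions on my part:

```cpp
#include <DHT.h>                 // Adafruit "DHT sensor library" (assumed)

const int LDR_PIN  = A0;         // Light Dependent Resistor in a voltage divider
const int DHT_PIN  = 2;          // temperature & humidity sensor (DHT11 assumed)
const int TRIG_PIN = 3;          // ultrasonic trigger (HC-SR04 assumed)
const int ECHO_PIN = 4;          // ultrasonic echo

DHT dht(DHT_PIN, DHT11);

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  dht.begin();
}

void loop() {
  int   light    = analogRead(LDR_PIN);      // 0..1023, wiring-dependent direction
  float humidity = dht.readHumidity();       // % relative humidity
  float tempC    = dht.readTemperature();    // degrees Celsius

  // HC-SR04: send a 10 us trigger pulse, then time the echo.
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long  echoUs = pulseIn(ECHO_PIN, HIGH, 30000);   // time out after 30 ms
  float distCm = echoUs * 0.0343 / 2.0;            // speed of sound, round trip

  // Quick wiring check: print the raw values.
  Serial.print(light);    Serial.print(' ');
  Serial.print(tempC);    Serial.print(' ');
  Serial.print(humidity); Serial.print(' ');
  Serial.println(distCm);

  delay(1000);
}
```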
SNOOP turns these readings into a tone progression (a rough code sketch of this mapping follows the list) where:
1) The light intensity determines the frequency of each tone in the melody.
2) The humidity determines the duration of each note.
3) The distance of the nearest obstacle from the device controls a detuning factor.
4) The temperature controls the octave of the reading; the lower the temperature gets, the lower the octave, and the "darker" the sound.
5) Buttons allow the user to tweak the tones and set how much time to wait until the next melody; the default is a sampling period of 1 s and ten samples.
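Here is a rough sketch of that mapping, under the same Arduino assumption. The musical ranges, the reference temperature, the detuning amount, and the dummy values are guesses for illustration, and the button handling is left out:

```cpp
const int BUZZER_PIN = 8;

// Map one set of sensor readings to a single tone. All ranges and constants
// are illustrative, not SNOOP's actual tuning.
void playMappedTone(int light, float tempC, float humidity, float distCm) {
  // 1) Light intensity -> base frequency within one octave (A3..A4 here).
  float baseFreq = map(light, 0, 1023, 220, 440);

  // 4) Temperature -> octave: colder readings shift the pitch down,
  //    making the sound "darker". 25 C is taken as the reference.
  int octaveShift = constrain((int)((tempC - 25.0) / 10.0), -3, 2);
  float freq = baseFreq * pow(2.0, octaveShift);

  // 3) Distance of the nearest obstacle -> detuning factor (up to ~3% when
  //    something is right in front of the sensor, none beyond ~2 m).
  float closeness = constrain(1.0 - distCm / 200.0, 0.0, 1.0);
  freq *= 1.0 + 0.03 * closeness;

  // 2) Humidity -> note duration (drier = shorter, up to ~600 ms when humid).
  unsigned long durationMs = map((long)humidity, 0, 100, 100, 600);

  tone(BUZZER_PIN, (unsigned int)freq, durationMs);
  delay(durationMs);
}

void setup() { pinMode(BUZZER_PIN, OUTPUT); }

void loop() {
  // Dummy values standing in for real sensor reads (see the sensor sketch above).
  playMappedTone(/*light=*/700, /*tempC=*/21.5, /*humidity=*/55.0, /*distCm=*/80.0);
}
```

Multiplying the frequency by a small factor (rather than adding a fixed offset) keeps the detuning proportional across octaves.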
SNOOP also writes the readings to the serial port, so the user can pass them on to other applications and modulate or control many other things. I routed them into a patch I made in Pure Data Vanilla (attached below and shown in the demos), which outputs ambient generative music.
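On the microcontroller side, the serial stream could be as simple as one space-separated line per reading, which a Pure Data patch can then receive (for instance through the [comport] object) and unpack into numbers. The baud rate and message format below are my choices, not necessarily what the original patch expects:

```cpp
// Send one space-separated line per reading so another program (here, a
// Pure Data patch) can parse it. Format: "<light> <tempC> <humidity> <distCm>\n"
void setup() {
  Serial.begin(9600);              // baud rate assumed; must match the Pd patch
}

void sendReading(int light, float tempC, float humidity, float distCm) {
  Serial.print(light);    Serial.print(' ');
  Serial.print(tempC);    Serial.print(' ');
  Serial.print(humidity); Serial.print(' ');
  Serial.println(distCm);          // println terminates the message
}

void loop() {
  // Dummy values in place of real sensor reads (see the sensor sketch above).
  sendReading(700, 21.5, 55.0, 80.0);
  delay(1000);
}
```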