Our initial goal was to convert a static (non-time-varying) image into a time-varying audio signal. Below is an example of what an image sounds like under this mapping.
By this point we had successfully sonified an image. Our method involved convolving the RGB values into a single vector and then applying the Fourier transform to produce a time-varying signal. The resulting sound was jarring and non-musical.
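As a rough sketch of this idea, the flattened pixel values can be treated as a frequency spectrum and inverse-transformed into audio. The function name and the use of the inverse real FFT here are assumptions for illustration; the project's actual convolution step may differ.

```python
import numpy as np

def sonify_image(image, sample_rate=44100):
    """Hypothetical sketch: treat flattened RGB values as a frequency
    spectrum and inverse-FFT them into a time-domain audio signal."""
    spectrum = image.astype(np.float64).ravel()
    signal = np.fft.irfft(spectrum)
    # Normalize to [-1, 1] so the result can be written out as audio.
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal
```

Because the pixel ordering carries no harmonic structure, a signal produced this way tends to sound like noise, which matches the jarring result described above.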
For our presentation we set two goals: implement more complex image-processing techniques, and make the sonified images sound more musical. We planned to implement random-walk, sectioning, and line-predominance algorithms to add variety to the image processing, and to map specific colors to chords in the C major scale to help guarantee that the image signal sounds musical.
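A minimal sketch of the color-to-chord idea might classify each pixel by its nearest reference color and look up a chord in C major. The specific chord assignments and helper names below are hypothetical, not the project's actual mapping.

```python
# Hypothetical mapping from reference colors to chords in C major.
COLOR_TO_CHORD = {
    "red":   ["C4", "E4", "G4"],   # C major
    "green": ["F4", "A4", "C5"],   # F major
    "blue":  ["G4", "B4", "D5"],   # G major
    "white": ["A4", "C5", "E5"],   # A minor
    "black": ["D4", "F4", "A4"],   # D minor
}

def nearest_color(rgb):
    """Classify an (R, G, B) pixel by nearest reference color
    (squared Euclidean distance in RGB space)."""
    refs = {
        "red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255),
        "white": (255, 255, 255), "black": (0, 0, 0),
    }
    return min(refs, key=lambda n: sum((a - b) ** 2 for a, b in zip(rgb, refs[n])))
```

Restricting every lookup to chords drawn from one key is what keeps arbitrary pixel data inside a consonant harmonic palette.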
Currently, our algorithm can perform a random walk over an image, break an image into sectors, and identify vertical or horizontal line predominance within each sector. We have also implemented a function that plays guitar chords; right now it plays chords in C major for the colors red, green, blue, white, and black.
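The traversal and analysis steps above could be sketched as follows, assuming grayscale images stored as 2-D numpy arrays. The function names, sector grid, and gradient-based predominance test are illustrative assumptions, not the project's exact implementation.

```python
import numpy as np

def random_walk(gray, steps=100, seed=0):
    """Hypothetical sketch: sample pixel intensities along a
    random walk over the image, starting from the center."""
    rng = np.random.default_rng(seed)
    h, w = gray.shape
    y, x = h // 2, w // 2
    samples = []
    for _ in range(steps):
        samples.append(gray[y, x])
        dy, dx = rng.choice([-1, 0, 1], size=2)
        y = int(np.clip(y + dy, 0, h - 1))  # stay inside the image
        x = int(np.clip(x + dx, 0, w - 1))
    return samples

def line_predominance(gray, rows=2, cols=2):
    """Hypothetical sketch: split the image into sectors and label each
    by comparing total intensity change across rows vs. across columns."""
    h, w = gray.shape
    labels = []
    for i in range(rows):
        row_labels = []
        for j in range(cols):
            sector = gray[i*h//rows:(i+1)*h//rows, j*w//cols:(j+1)*w//cols]
            # Change down the rows signals horizontal lines; change
            # across the columns signals vertical lines.
            dy = np.abs(np.diff(sector.astype(float), axis=0)).sum()
            dx = np.abs(np.diff(sector.astype(float), axis=1)).sum()
            row_labels.append("horizontal" if dy > dx else "vertical")
        labels.append(row_labels)
    return labels
```

One design note: comparing summed gradients is a cheap proxy for line orientation; an edge detector such as a Sobel filter would give a more robust measure at the cost of more computation.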