My big idea is the sonification of data from motion capture devices. I want to place a motion capture device on each limb and on the head, with each device controlling a different aspect of the sound or music. For example, these gesture controls could change the pitch or duration of a sound, and I could also build filters or effects into the program. I would want to program this in ChucK, not only to learn a new and powerful tool but also to avoid dealing with MaxMSP.

I am drawing from work such as the 3D swarmalators and studies of gesture control, in a way reminiscent of Wekinator. The system's main purpose would be live performance. While I know I want each body part to change the sound, I'm not sure I want each one fixed to a specific control.

I am not yet sure how remapping the limbs would work, but it might be interesting to bring in machine learning and Markov models, so that the mappings change after a set period of time based on previous performances. Integrating machine learning would also tie in nicely with my previous experiences at RPI. I've sketched both ideas in ChucK below.
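As a first pass at the gesture-to-sound mapping, here is a minimal ChucK sketch. It assumes the motion capture data arrives over OSC (the same way Wekinator communicates with synthesis engines); the port number (6449) and the /mocap/rightHand and /mocap/leftHand addresses are placeholders I made up, not part of any real device's protocol. One hand's height controls pitch, the other controls note duration.

```chuck
// minimal sketch: map one mocap stream to pitch, another to note length.
// the OSC port and address patterns are assumptions, not a real device API.

SinOsc osc => ADSR env => dac;
env.set(10::ms, 50::ms, 0.6, 100::ms);

OscIn oin;
OscMsg msg;
6449 => oin.port;
oin.addAddress("/mocap/rightHand, f");  // hypothetical: right-hand height, 0..1
oin.addAddress("/mocap/leftHand, f");   // hypothetical: left-hand height, 0..1

220.0 => float freq;     // current pitch
250::ms => dur noteLen;  // current note duration

// listener shred: update pitch/duration whenever a message arrives
fun void listen()
{
    while(true)
    {
        oin => now;
        while(oin.recv(msg))
        {
            if(msg.address == "/mocap/rightHand")
                // map 0..1 to roughly two octaves above A3
                220.0 * Math.pow(2.0, 2.0 * msg.getFloat(0)) => freq;
            else if(msg.address == "/mocap/leftHand")
                // map 0..1 to note lengths between 50ms and 550ms
                (50 + 500 * msg.getFloat(0))::ms => noteLen;
        }
    }
}
spork ~ listen();

// player shred: strike notes at the current pitch/duration, forever
while(true)
{
    freq => osc.freq;
    env.keyOn();
    noteLen => now;
    env.keyOff();
    env.releaseTime() => now;
}
```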
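For the remapping idea, one way to frame it is a Markov chain over mapping "modes" (e.g., mode 0 = right hand controls pitch, mode 1 = head controls pitch, and so on; the mode meanings here are hypothetical). The sketch below hand-sets a three-state transition matrix purely for illustration; in a real system the probabilities would be estimated from logs of previous performances, which is where the machine learning would come in.

```chuck
// minimal sketch: a Markov chain that switches the mapping mode
// after a set period. The transition matrix is hand-set for
// illustration; it would ideally be learned from past performances.

[[0.6, 0.3, 0.1],
 [0.2, 0.5, 0.3],
 [0.3, 0.3, 0.4]] @=> float T[][];

0 => int mode;  // current mapping mode

// sample the next mode from row T[mode]
fun int nextMode()
{
    Math.random2f(0, 1) => float r;
    0.0 => float acc;
    for(0 => int j; j < T[mode].size(); j++)
    {
        T[mode][j] +=> acc;
        if(r < acc) return j;
    }
    return T[mode].size() - 1;
}

while(true)
{
    30::second => now;  // hold each mapping for a set period
    nextMode() => mode;
    <<< "switched to mapping mode", mode >>>;
}
```

Holding each mode for a fixed 30 seconds is just a starting point; the hold time could itself become a performance parameter.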