Initial Data
Shown above is our raw, unprocessed audio. No filters have been applied to this input signal, which is reflected in the lack of varying color denotations between our low and high frequencies. With no filters, all frequencies contribute to the overall audio signal, resulting in a more evenly distributed spectrogram. Comparison with our low-pass filter output, shown below, emphasizes the effect that filtering has on the result.
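A spectrogram like the one above can be computed with a short-time FFT: slice the signal into overlapping windowed frames and take the magnitude of each frame's FFT. The sketch below is illustrative only (the `spectrogram` function, frame size, and hop length are our own assumptions, not the tool that produced the figure):

```python
import numpy as np

def spectrogram(signal, frame_size=1024, hop=512):
    """Magnitude spectrogram via a windowed short-time FFT."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_size] * window
        for i in range(n_frames)
    ])
    # rfft keeps only the non-negative frequencies of a real signal
    return np.abs(np.fft.rfft(frames, axis=1))

# Example: one second of a 440 Hz tone sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
```

Each row of `spec` is one time slice; plotting rows against frequency bins (with color mapped to magnitude) yields the spectrogram image.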
The above graph illustrates the effect of using our low-pass filter to achieve a different spectrogram output. We processed the original audio input by passing it through a low-pass filter we created. The lighter colors correspond to lower frequencies, whereas the darker colors correspond to higher frequencies. When we compare the low-pass filter output to the original input signal shown in our first graph, there is a strict cut-off frequency that essentially eliminates the influence of the higher frequencies.
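One common way to build such a low-pass filter is a windowed-sinc FIR design; the sketch below is a minimal illustration of the idea, not our exact implementation (the function name, tap count, and cutoff are assumptions for the example):

```python
import numpy as np

def low_pass(signal, num_taps=101, cutoff_ratio=0.05):
    """Windowed-sinc FIR low-pass; cutoff_ratio = cutoff frequency / sample rate."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(2 * cutoff_ratio * n) * np.hamming(num_taps)
    h /= h.sum()  # normalize for unity gain at DC
    return np.convolve(signal, h, mode="same")

# A 50 Hz tone mixed with a 3 kHz tone, sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
mixed = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 3000 * t)
filtered = low_pass(mixed, cutoff_ratio=400 / sr)  # ~400 Hz cutoff
```

After filtering, the 3 kHz component is strongly attenuated while the 50 Hz component passes nearly unchanged, which is exactly the loss of high-frequency content visible in the filtered spectrogram.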
The above plot displays the edited audio wave output for this WAV file input. We applied a looping function to our input file through Python libraries such as Pydub and PyAudio. With this looping function we were able to seamlessly replay the song clip until the user terminated the program in the terminal. Improvements we are making to the looping function include fades that make the transitions smoother.
Progress so far:
So far, we’ve set up our Python environment, installed essential libraries like Pyo, and begun preparing audio files for processing. We are working on segmenting audio files, which will support various effects, particularly looping. To guide our looping implementation, we’re studying the looping function in Audacity as a reference model. Additionally, we’re developing a script to compute the FFT of any audio file, a foundational step for pitch modulation and frequency analysis. These preparations have established a solid foundation for implementing and refining our DSP effects.
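The FFT script mentioned above can be reduced to a few lines with numpy; this is a minimal sketch of the idea (assuming a mono 16-bit WAV input, with `dominant_frequency` as an illustrative name):

```python
import wave
import numpy as np

def dominant_frequency(path):
    """Return the strongest frequency (Hz) in a mono 16-bit WAV file."""
    with wave.open(path, "rb") as wav:
        sr = wav.getframerate()
        samples = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)
    spectrum = np.abs(np.fft.rfft(samples))           # magnitude spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1 / sr)   # bin -> Hz mapping
    return freqs[np.argmax(spectrum)]
```

The full magnitude spectrum (not just its peak) is what pitch modulation and the filtering comparisons rely on.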
Challenges encountered thus far:
For some of us, this is our first time coding in Python. In our first couple of meetings, there were some growing pains when setting up our Python environments. The learning curve was steep initially, but with the aid of online resources we quickly became familiar with the syntax. While we have varied coding backgrounds, we aim to have coding contributions from all of our members.
Next Steps:
Over the next three weeks, our project goals are to establish a real-time audio framework, implement core effects, and refine functionality. In the first week, we will set up the real-time audio pipeline using Pyo and create a basic echo effect with adjustable delay and decay. In the second week, we will add pitch modulation through Fourier-based phase shifts and implement high-pass and low-pass filters for frequency control, ensuring smooth real-time adjustments. Finally, in the third week, we will implement a looping mechanism using time-shifting for continuous playback, then integrate and optimize all effects to ensure low-latency and responsive controls for an intuitive user experience.
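The week-one echo effect with adjustable delay and decay can be sketched as a single feed-forward delay line; this is an offline illustration of the math, not the real-time Pyo version we plan to build (function name and default parameters are assumptions):

```python
import numpy as np

def echo(signal, sr, delay_s=0.25, decay=0.5):
    """Feed-forward echo: out[n] = in[n] + decay * in[n - delay]."""
    d = int(delay_s * sr)
    out = np.concatenate([signal, np.zeros(d)])  # room for the echo tail
    out[d:] += decay * signal                    # delayed, attenuated copy
    return out
```

In a real-time pipeline the same structure becomes a circular buffer of `d` samples, with `delay_s` and `decay` exposed as the user-adjustable controls.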
What we have learned so far:
So far, we’ve learned that noise won’t be a significant issue for our project, as we are working with pre-recorded sounds rather than live recordings. By using controlled, high-quality audio samples, we avoid the complications of background noise that can interfere with real-time effects. This approach has allowed us to focus on the DSP techniques themselves—such as echo, pitch modulation, filtering, and looping—without needing to account for noise reduction, which simplifies our workflow and optimizes the quality of our output.