jesusonic plugins for use in reaper. most of these will be converted to vst/ladspa/etc...
download them individually from here, or all at once: jesuccernn.zip
"the goertzel algorithm is a digital signal processing (dsp) technique for identifying frequency components of a signal. while the general fast fourier transform (fft) algorithm computes evenly across the bandwidth of the incoming signal, the goertzel algorithm looks at a specific, predetermined frequency." (wikipedia)
buffersize: number of samples to analyze
signal input: audio channel to analyze
analysis output: channel to receive the analysis result - the strength of the specified frequency in the input signal
freq: frequency to look at
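the plugins themselves are written in jesusonic, but the core of the algorithm fits in a few lines. here's a python sketch of the goertzel recurrence: one predetermined frequency bin is evaluated with a second-order filter, so it is much cheaper than a full fft when you only need one frequency.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """squared magnitude of one frequency bin, via the goertzel recurrence."""
    n = len(samples)
    # pick the fft bin closest to the target frequency
    k = round(n * target_freq / sample_rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2   # the two-sample recurrence
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# a 1 khz sine scores far higher at 1 khz than at 3 khz
sr = 8000
tone = [math.sin(2 * math.pi * 1000 * i / sr) for i in range(200)]
assert goertzel_power(tone, sr, 1000) > 100 * goertzel_power(tone, sr, 3000)
```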
"a wavelet is a mathematical function used to divide a given function or continuous-time signal into different scale components. usually one can assign a frequency range to each scale component. each scale component can then be studied with a resolution that matches its scale. wavelet transforms have advantages over traditional fourier transforms for representing functions that have discontinuities and sharp peaks, and for accurately deconstructing and reconstructing finite, non-periodic and/or non-stationary signals. the haar wavelet is recognised as the first known wavelet, and is also the simplest possible wavelet." (wikipedia)
bits: number of bits. buffersize = 2^bits
signal input: where to read audio to analyze
analysis output: where to send the haar wavelet transform output
sync output: sends an impulse when buffer wraparound occurs (when we have received, and then converted, enough samples), signaling (f.ex to fx_sigview) that a new buffer of analysed data is available.
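for reference, here's a python sketch of the haar transform this plugin performs (the plugin itself is jesusonic): each pass replaces the working half of the buffer with pairwise averages (approximation) and differences (detail), halving until one coefficient remains. the inverse undoes the passes in reverse order.

```python
def haar_transform(buf):
    """full haar decomposition of a power-of-two length buffer."""
    out = list(buf)
    n = len(out)
    while n > 1:
        half = n // 2
        avgs = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(half)]
        diffs = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(half)]
        out[:n] = avgs + diffs   # approximation first, detail second
        n = half
    return out

def haar_inverse(coeffs):
    """reconstruct the original signal from haar coefficients."""
    out = list(coeffs)
    n = 1
    while n < len(out):
        avgs, diffs = out[:n], out[n:2 * n]
        rebuilt = []
        for a, d in zip(avgs, diffs):
            rebuilt += [a + d, a - d]   # undo average/difference pairing
        out[:2 * n] = rebuilt
        n *= 2
    return out

sig = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 2.0]
assert haar_inverse(haar_transform(sig)) == sig   # perfect reconstruction
```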
converts various signals to 1-sample impulses.
signal input: signal to analyse
sync output: sends the signal impulse output to this channel
trigger: what will trigger it. thresh = when the audio input exceeds 'value' (below), diff = when the difference between the current audio sample and the previous one exceeds 'value'
mode: impulse = send a one-sample-long impulse, state = switch between on/off (gate)
value: the threshold, or difference needed to 'react'
hold: (not used)
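a python sketch of the impulse-mode trigger logic described above (the exact re-arm behaviour in the plugin may differ): an impulse fires only on the first sample of a crossing, then the trigger re-arms once the condition clears.

```python
def make_trigger(mode="thresh", value=0.5):
    """per-sample function converting audio into 1-sample impulses."""
    state = {"prev": 0.0, "armed": True}
    def step(x):
        if mode == "thresh":
            fire = x > value                    # input exceeds 'value'
        else:
            fire = (x - state["prev"]) > value  # jump from previous sample
        state["prev"] = x
        if fire and state["armed"]:
            state["armed"] = False   # one impulse per crossing
            return 1.0
        if not fire:
            state["armed"] = True    # re-arm once the condition clears
        return 0.0
    return step

trig = make_trigger("thresh", 0.5)
assert [trig(x) for x in [0.1, 0.8, 0.9, 0.2, 0.7]] == [0.0, 1.0, 0.0, 0.0, 1.0]
```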
signal viewer. can also view the results from the haar transform (ana_haar), and can sync the visible buffersize to an incoming sync signal.
signal input: audio channel for signal we want to look at
sync input: an impulse coming here signals the plugin to sync its buffer/view size to the incoming sync signal. when this is set to the same channel as the sync output from ana_sigview, they will generate and view the signal in sync.
size: the number of samples to show (may be auto-synced from the sync signal, above)
bits: ..or a simpler way of setting buffersizes that need to be a power of two (fft, haar, etc)
drawmode: how to draw the buffer (see below): dots, lines, bars, wavelet, wavelet.abs.
viewzoom: scale signal before drawing (for low amplitude inputs, etc)
sends out a sync signal when the slope (steepness, delta) of the analyzed wave is larger or smaller than a defined value.
signal input: input to look at
sync output: output for impulse
mode: what triggers the impulse. positive = when the positive slope exceeds the threshold (below), negative = the same, but for negative slope, zero = when the slope is zero (horizontal/silent), cycle = when the slope has changed sign twice (positive, then negative), for counting waveform cycles.
threshold: the threshold to cross, depending on mode
outgoing sync signal from sources like blocksize, samplerate, beatpos, ...
sync output: impulse output
mode: when to send signal. srate = impulse with sample rate (f.ex 44100) intervals, block = block/buffer size intervals, beat = beat (bpm) intervals
multiplier: speed up or slow down the 'firing rate'
based on this thread in the reaper forum. detects transients (sudden changes in volume/energy), and generates a sync signal.
env detector drop in db per scale:
block length (ms):
ms energy rise in db for positive transient:
sync out when waveform crosses the zero line.
hold (min dist, samples):
an almost direct conversion of the ambience plugin from mda, (sources found here), took an hour or so i guess. mostly for the experience, for the learning, and for having one more tool available for making audio and noises.
a delay effect, but instead of just overwriting the buffer every delay cycle, it (sort of) crossfades between the audio input and the buffered audio, and can create a blurred, mashed mix. you can also freeze input recording, so that the delay/buffer is static. assign your midi knobs to the parameters, play around with them, and fine-tune them in realtime.
another port/conversion, another reverb, this time from here. i did almost nothing here, as it was already a jesusonic plugin, but for the 'original' jesusonic. the channel-routing was a bit odd, and non-reaper specific, so i changed that to stereo (double mono).
slicey/dicey/cutup thingy. split and re-split slices, with some randomization and probabilities, everything (durations, start/end time, etc), synced (transport/tempo) to reaper, some initial inspiration from bbcut/livecut
boost + clip distortion. ugly and harsh.
same as fx_dist1, but the input has been split in three frequency bands, and distortion (boost/clip) is applied individually to each band.
compressor, based on loser's js tutorial, then modified, tweaked, tried to learn and experiment with dynamics.
experiment with a fractional-sample delay line and feedback. the delay length is 'playable' via midi, similar to a wavetable synth, and this was the first step towards the syn_plucked synth.
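the core trick of a fractional-sample delay is reading the buffer at a non-integer position. a minimal python sketch (the plugin's interpolation scheme isn't documented here, so linear interpolation is assumed):

```python
def read_fractional(buf, pos):
    """read a circular delay buffer at a non-integer position,
    linearly interpolating between the two neighbouring samples."""
    i = int(pos)
    frac = pos - i
    a = buf[i % len(buf)]
    b = buf[(i + 1) % len(buf)]
    return a + frac * (b - a)

buf = [0.0, 1.0, 0.0, -1.0]
assert read_fractional(buf, 0.5) == 0.5     # halfway between 0.0 and 1.0
assert read_fractional(buf, 2.25) == -0.25  # a quarter of the way to -1.0
```

tuning such a delay to a midi note then just means setting the loop length to `sample_rate / frequency`, which is generally not a whole number of samples - hence the fractional read.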
buffers incoming audio, then loops (forward or pingpong) a small section of the buffer. recording can optionally be bypassed, so that the looped audio is static ('frozen'). planned for realtime use.
my first (of many!) journey into glitch land, randomization, probabilities, ... this one is very simple yet, but can create some interesting cut-ups already.
preview1 (mp3, 429k), preview2 (mp3, 643k)
update 0.0.8: added pitch & stuttering effects, and 'sliders' for probabilities per step.
update v0.0.9: midi-controllable. hold midi keys, things are triggered at the next beat-start. notes from c2 -> e2 force 'on' skip, reverse, mute, stutter, pitch. notes from c3 -> control which slice of the buffer to play. randomization and probabilities can override the midi keys, so you might want to turn down the auto-random stuff if you plan to play/program via midi. experiment.
(already outdated. too many new things. new docs is planned, ..ehmm..)
(will be done properly, when i have some time and motivation leftovers... doc writing isn't as exciting as code-writing.... :)
the general idea is that audio is constantly being recorded into an internal buffer, and slices are randomly being played back from this, with some glitchy effects.
the length of the internal buffer is defined with "num beats", and each beat is then sub-divided further with "num beat subdivs" (slices). every time the play position crosses into a new slice, something might get triggered, depending on probabilities.
[todo: describe 'probabilities']
there are 5 types of 'effects', or things that might happen, or 6 if you count 'nothing' as an effect...
skip - jumps randomly ("skip probability") to somewhere else, to another step/sub-beat. max distance it can jump (from the play cursor) is defined by "skip max offset".
reverse - playing backwards. note that the playback starts at the beginning of a slice, then plays backwards, into the previous slice.
mute - 'silence', nothing, mute
stutter - the slice is yet again subdivided, but only the first sub-slice is repeatedly played (n-n-n-n-nineteen). the number of repeats is random, up to the defined maximum.
pitch - plays the slice at another speed, no stretching or anything, just play-speed is temporarily modified. max pitch is how far from "play speed" it can go, from half speed (one octave down) to double (octave up)
at each step/slice it can jump back to the rec cursor (play mode "sticky") before selecting something new to do... so the random stuff stays small variations of the original audio only. or it can run "free", with no syncing.
and there's a "global probability" slider for changing the general randomness of the probabilities.
and a global playing speed
then there's the step sequencer: each step has a slider that controls that step's probabilities. so, in total there are three things combined into one step-probability: global + effect + step-seq
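one plausible way to combine the three probabilities is to simply multiply them and compare against a uniform random draw - a python sketch under that assumption (the plugin's exact formula may well differ):

```python
import random

def effect_fires(global_prob, effect_prob, step_value, rng=random.random):
    """combine global + per-effect + step-sequencer probabilities
    multiplicatively, then roll the dice. all three are in 0..1."""
    return rng() < global_prob * effect_prob * step_value

assert effect_fires(1.0, 1.0, 1.0, rng=lambda: 0.99)     # always fires
assert not effect_fires(0.5, 0.5, 0.5, rng=lambda: 0.5)  # 0.5 < 0.125 fails
```

multiplying means any of the three sliders can veto an effect by being set to zero, which matches the description: the global slider scales everything, and each step's slider gates that step.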
finally, we have midi for realtime control.
the 5 'effects' can be 'forced' with midi semi-notes c2..e2
or you can select slice to play with midi (semi-)notes from c3 and upwards.
hold midi key - probabilities etc are checked only when play/record cursor enters a new sub-beat, to keep everything in sync.
and, almost forgot, the triggers:
0 - clear step sequencer
1 - fill step seq with 1's
2 - invert all steps
i guess it's easier to understand by trying it
[todo: examples of usage, describe the ideas behind some parameters, how to use live/reltime, how to program, reaper example projects?]
i have noticed a few issues with this version, that i would like to fix, sometime....
- clicks, clicks... some kind of anti-click things need to be in there, quite noticeable when doing play-speed changing things.
- some unplanned behaviour with midi control. when releasing a key, it doesn't check which key, which can create some weirdness if you play multiple keys rapidly after each other: if you don't properly release each key before pressing a new one, the new slice will play, but if you release the first key while still holding the second, playback will stop.
granular synthesis, or grain cloud.. audio is buffered, lots of tiny sections of this buffer are looped and pitched and changed, etc, and then mixed/mashed
wavelet analysis (haar) of two inputs, then re-synthesis, crossfading/morphing between them. but, the main effect heard is the quality loss because of poor quality analysis/resynthesis. an experiment that didn't go as planned, but perhaps we can do something else with the analysis results, before transforming them back to 'proper' audio. like fft-ish things.
simple midi, controlled mainly via midi cc. made for use with midi_automata.
modulation delay. ala phaser, flanger, for experimentation, various ways of combining the original and modulated signal (add, sub, mul), and a (failed, i think) attempt at multiple taps/phases.
poor quality pitch shifter, replaying buffer at different speeds. nothing is done to get clean or good sounds from it. (again) based on loser's tutorials
another first test, trying to make a reverb myself, a series of allpass filters, controllable via sliders, not tuned, doesn't really sound like a reverb at all.
realtime sample, playback, overdub, erase, etc,.. somewhat inspired by repeatler, but have plans for some other directions and features.
beats: buffersize, in beats
draw mode: how to draw the waveform. off, dots, lines.
sync mode: manual = play state manually toggled on/off with 'transport note' (below), seq = sync to sequencer (reaper)
transport note: midi note to toggle transport on/off (if syncmode=manual)
record note: switch to record mode. records incoming audio into buffer
play note: switch to play mode. playback from buffer
erase note: ...and same, but erase/clear buffer
bypass note: bypass mode
the record/play/erase/bypass modes are exclusive states (can only be in one of these modes), indicated by cursor color: red=record, green=play, blue=erase, white=bypass.
turns incoming audio into a square wave, on/off only, kind of 1-bit audio. and an additional filter to have a little bit of control.
switches between two audio inputs. control the toggling with a sync signal (f.ex with ana_transient or something), or with a specific midi note (for midi_automata, etc)
a simple tempo-synchronized delay, because i use them a lot.
very basic track things i always miss when in an experimental or 'audio-journey', go with the flow mood... so far, lowpass, highpass and boost. to be extended and finetuned.
lowpass filter. stumbled upon the formula by accident, when playing around with interpolation, and non-linear envelopes for the fx_kick. instead of using an incoming sample directly, we 'travel towards it', going just a small step towards it. kind of slowing down the fastest movements in the waveform. like a lowpass filter. quite simple to calculate (a plus, a minus, and a multiply per sample, double for stereo), and a different way of looking at filtering.
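the 'travel towards it' idea is a classic one-pole lowpass, and really is just three operations per sample. a python sketch:

```python
def smooth(samples, step=0.1):
    """'travel towards' each incoming sample instead of jumping to it:
    y += step * (x - y). one subtract, one multiply, one add per sample.
    small step = heavier smoothing (lower cutoff)."""
    y = 0.0
    out = []
    for x in samples:
        y += step * (x - y)   # move a fraction of the way to the input
        out.append(y)
    return out

# a step input is approached gradually instead of instantly
assert smooth([1.0] * 5, step=0.5) == [0.5, 0.75, 0.875, 0.9375, 0.96875]
```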
visualise (-ze) various math formulas.
primitive raytracer, but realtime. reflections, shadows.. i did a lot of extra things to this one, until i 'forked' to a c++ version, but unfortunately, i lost the sources for the newer version. but (re-)found this one in the reaper forums.
cellular automaton, or 'the game of life', lots of randomization or sequencing possibilities, sends midi note on and control changes. needs an in-depth 'doc' here
converts (sample accurate) midi control changes into a constant audio signal. made it for/when experimenting with plugin parameter modulation, and using this audio signal to control another plugin, the sliders/parameters.
midi notes based on the fibonacci series, but to keep it under control, it wraps to a defined set of notes.
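a python sketch of the wrap-to-a-scale idea (the scale here is a made-up example set, not necessarily what the plugin uses): the fibonacci numbers grow fast, so each value is reduced modulo the scale length to index into a fixed set of allowed notes.

```python
def fib_notes(scale, count):
    """walk the fibonacci series, wrapping each value into a fixed
    set of allowed midi notes to keep the melody under control."""
    a, b = 1, 1
    notes = []
    for _ in range(count):
        notes.append(scale[a % len(scale)])  # wrap into the note set
        a, b = b, a + b
    return notes

c_major = [60, 62, 64, 65, 67, 69, 71]  # hypothetical: midi c4 major scale
assert fib_notes(c_major, 8) == [62, 62, 64, 65, 69, 62, 71, 60]
```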
a simple one. kills all incoming midi. main use is between plugins.. before any other ones that you don't want any midi messages to reach.
discards note off messages, but keeps track of note on messages, and sends a note off for the same key a certain amount of time (in ms) later.
for testing the performance and cpu usage of different parts of reaper. benchmarking? load-balance testing?
inter-plugin communication test plugins. read and/or writes values to 'global' stuff, gmem, reg00, spl0.. put several of these in different items, tracks, etc.
routes audio between tracks, and can additionally do a few things with the source/dest channels. add, sub, multiply (ringmod), ... flexible, and nice to have at times, especially when deeply involved in weird routing experiments, heh.
"binaural beats or binaural tones are auditory processing artifacts, or apparent sounds, the perception of which arises in the brain independent of physical stimuli. the brain produces a phenomenon resulting in low-frequency pulsations in the loudness of a perceived sound when two tones at slightly different frequencies are presented separately, one to each of a subject's ears, using stereo headphones. a beating tone will be perceived, as if the two tones mixed naturally, out of the brain." (wikipedia)
the synth part of this is very ugly, no envelopes or anything, clicks when you press a midi key, just simple tones.. might expand it later... mostly an experiment.
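the generation side of a binaural-beat tone is simple: one sine per ear, offset by the beat frequency. a python sketch of that stereo pair (parameter names are illustrative, not the plugin's sliders):

```python
import math

def binaural_frames(carrier_hz, beat_hz, sample_rate, n):
    """stereo frames: left ear gets the carrier, right ear a tone
    offset by beat_hz. on headphones the brain perceives a pulsation
    at the beat frequency."""
    frames = []
    for i in range(n):
        t = i / sample_rate
        left = math.sin(2 * math.pi * carrier_hz * t)
        right = math.sin(2 * math.pi * (carrier_hz + beat_hz) * t)
        frames.append((left, right))
    return frames

frames = binaural_frames(200.0, 8.0, 44100, 4)
assert frames[0] == (0.0, 0.0)   # both tones start at phase zero
```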
the 'synthesis engine' in a speech synthesizer. very simple. saw waveform, 3 bandpass filters (formants)
something similar to the drum/kick synths included in lmms and fl studio.
"the lorenz attractor is a 3-dimensional structure corresponding to the long-term behavior of a chaotic flow, noted for its lemniscate shape. the map shows how the state of a dynamical system (the three variables of a three-dimensional system) evolves over time in a complex, non-repeating pattern." (wikipedia)
not very useful, i guess, if you're not of the more experimental type, and want another weird, different sound source available for (ab)use.
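the sound source boils down to integrating the three lorenz equations per sample and listening to one of the state variables. a python sketch using plain euler integration (the plugin's actual step size and output scaling are not documented here):

```python
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """one euler step of the lorenz system. feeding x (scaled down)
    to the audio output gives a chaotic, never-repeating signal."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# run the system for a while; it stays bounded but never settles
x, y, z = 1.0, 1.0, 1.0
for _ in range(10000):
    x, y, z = lorenz_step(x, y, z)
assert abs(x) < 100 and abs(y) < 100 and abs(z) < 100
```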
percussion sounds. syn_kick is probably obsolete. this one can do much more..
this one is already obsolete, will be removed soon, but i have used it in some example projects, so it needs to be around for a little while.
"in human language, a phoneme (from the greek: φώνημα, phōnēma, "a sound uttered") is the smallest posited structural unit that distinguishes meaning, though they carry no semantic content themselves. in theoretical terms, phonemes are not the physical segments themselves, but cognitive abstractions or categorizations of them. in effect, a phoneme is a group of slightly different sounds which are all perceived to have the same function by speakers of the language in question. An example of a phoneme is the /k/ sound in the words kit and krill." (wikipedia)
"karplus-strong string synthesis is a method of physical modelling synthesis that loops a short waveform through a filtered delay line to simulate the sound of a hammered or plucked string or some types of percussion. although it is useful to view this as a subtractive synthesis technique based on a feedback loop similar to that of a comb filter for z-transform analysis, it is better viewed as the simplest of a class of wavetable-modification algorithms now known as digital waveguide synthesis, as the delay line acts to store one period of the signal." (wikipedia)
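a minimal python sketch of the karplus-strong loop described above: a delay line one period long is filled with noise, and averaging neighbouring samples as the buffer recirculates acts as the lowpass in the feedback loop, so the 'string' decays naturally.

```python
import random

def pluck(freq, sample_rate, n):
    """minimal karplus-strong pluck: noise burst + averaging delay line."""
    period = int(sample_rate / freq)   # delay line holds one period
    line = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for i in range(n):
        s = line[i % period]
        nxt = line[(i + 1) % period]
        line[i % period] = 0.5 * (s + nxt)  # averaging = damping filter
        out.append(s)
    return out

random.seed(0)  # deterministic noise burst for the demo
tone = pluck(440.0, 44100, 2000)
# energy decays as the loop filters itself
early = sum(abs(s) for s in tone[:100])
late = sum(abs(s) for s in tone[-100:])
assert late < early
```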
polyphonic synthesizer. the synthesis itself is pretty simple, the most interesting here is the polyphony. awkward to do in such a limited language as jesusonic, with no structs, macros, functions, preprocessor or anything. so, this is far, far from optimized. i paid more attention to the struct-like things, and keeping it simple to experiment with (code-wise). will continue to add small functions, until i feel the 'architecture' is working as i want, then optimize, unroll memory handling, etc..
strongly inspired by the computalker ct-1, but not an emulation. also taken some inspirations from here. it's a bit more optimized than the formant/phoneme synth things i've done earlier. especially the filters, tried some optimizations, cpu use now 1/3 of what it was before. planning to add a couple of control-plugins to this, for different uses. one for selecting and morphing between phonemes, another for connecting multiple of these phonemes into words, a separate phoneme table plugin, ... lots of plans/ideas.
a simulation / emulation / inspired by (no promises regarding how authentic this is) of the roland tb-303. originally by tobybear, this version is an almost direct port of tb303.pas found here.