This topic is about investigating ways to achieve real-time performance in Pythonista in situations that depend heavily on it.
These situations are generally found in two fields: audio (synths, filters, sound processing) and image (smudge tool). What makes them difficult is that, unlike usual real-time applications, the computer doesn’t just need your UI input and a few variables to update the internal state of the program and play/display whatever it should. It needs the actual data being played or displayed (the screen image or the audio output), which is a lot more than a few hidden state variables. So while you’re hearing the audio or seeing the screen and deciding what to do next, the computer is also working directly on the current audio/screen data to compute the next thing you will hear or see. The challenge is to process this often large amount of data fast enough to deliver it to you seamlessly.
Here are the advances we made on the audio side (see the end of the post for the image part).
Real time audio processing in Pythonista:
At the lowest level, real-time audio processing is achieved with a processing audio buffer (also called a circular buffer), a kind of array. Basically, the code iteratively fills this buffer with the next bit of sound and sends it to your ears when you’re finished hearing the current one.
Most audio effects (filters, for instance) need the very data you’re currently hearing to compute the next bit of sound. So before sending it to your ears, they copy that data and start refilling the buffer with the next part, so it’s ready before you’re done with the current one and come back for more sound.
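To make this concrete, here is a minimal standalone sketch of the idea in Python/NumPy (my own illustration, not the wrapper’s actual API): a render callback fills one buffer with a sine wave run through a one-pole low-pass filter, carrying the filter state (the last output sample) from one buffer to the next.

```python
import numpy as np

SAMPLE_RATE = 44100
BUFFER_SIZE = 512          # samples handed to the hardware per callback

phase = 0                  # oscillator position, kept between callbacks
prev_out = 0.0             # last output sample, reused by the filter

def render(num_frames=BUFFER_SIZE, freq=220.0, coeff=0.1):
    # Fill one buffer: a sine oscillator through a one-pole low-pass.
    # The filter needs prev_out, i.e. the very sound we just played,
    # so that state must be kept before the buffer gets recycled.
    global phase, prev_out
    t = (phase + np.arange(num_frames)) / SAMPLE_RATE
    raw = np.sin(2 * np.pi * freq * t)
    phase += num_frames

    out = np.empty(num_frames)
    y = prev_out
    for i in range(num_frames):   # y[n] = y[n-1] + coeff*(x[n] - y[n-1])
        y += coeff * (raw[i] - y)
        out[i] = y
    prev_out = y                  # carry the filter state to the next buffer
    return out
```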
@JonB’s AudioRenderer wrapper (see below) is a great way to get access to such audio buffer functionality in Pythonista.
https://gist.github.com/0db690ed392ce35ec05fdb45bb2b3306
Here are my current modifications of @JonB’s files to get an antialiased sawtooth instead of a sine wave, a 4-pole filter you can control, unison, vibrato, chords (with several fingers), and a delay, all in one buffer/render method:
https://gist.github.com/medericmotte/d8e81b7e0961006d7026f16cc195682c
It’s set up to play one chord, with the number of notes equal to the number of fingers on the screen (up to 4). You can control the filter by moving the first finger that touched the screen up and down.
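As an aside, here is a minimal sketch of one common way to antialias a sawtooth, the polyBLEP technique (plain Python/NumPy, and not necessarily how the gist does it): the naive sawtooth is corrected with a small polynomial around each discontinuity, which removes most of the aliasing.

```python
import numpy as np

def polyblep(t, dt):
    # Polynomial band-limited step: smooths the wrap discontinuity.
    if t < dt:                      # just after the wrap
        t /= dt
        return t + t - t * t - 1.0
    elif t > 1.0 - dt:              # just before the wrap
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def saw_block(num_frames, freq, sample_rate=44100.0, phase=0.0):
    # One block of antialiased sawtooth; returns (samples, new_phase).
    dt = freq / sample_rate
    out = np.empty(num_frames)
    for i in range(num_frames):
        naive = 2.0 * phase - 1.0             # naive saw in [-1, 1]
        out[i] = naive - polyblep(phase, dt)  # subtract the aliasing residue
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out, phase
```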
It’s an inefficient implementation because all the work is done in hard real time, as if the control parameters were changing on a sample-by-sample basis. It’s at the edge of glitching (to see this, set filterNumbers to 16 and notice the glitches when creating filter sweeps). The solution is to compute the audio elsewhere, in bigger anticipated chunks corresponding to, say, a 60Hz rate (because that’s pretty much the rate at which touch_moved events are received anyway), and then progressively fill the circular buffer with the computed data, as sketched below.
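Here is a minimal sketch of that producer/consumer scheme (plain Python; render_chunk is a hypothetical DSP function standing in for the actual synthesis code): a producer renders whole 60Hz chunks with the control parameters frozen, and the audio callback only copies precomputed samples into the outgoing buffer.

```python
import numpy as np
import threading, collections

SAMPLE_RATE = 44100
CHUNK = SAMPLE_RATE // 60          # ~one chunk per touch_moved event

queue = collections.deque()        # chunks waiting to be played
lock = threading.Lock()

def producer(params):
    # Runs off the audio thread: renders a whole 60Hz chunk at once,
    # holding the control parameters fixed for the chunk's duration.
    chunk = render_chunk(CHUNK, params)   # hypothetical DSP function
    with lock:
        queue.append(chunk)

def render(num_frames):
    # Audio callback: only copies precomputed samples into the buffer.
    out = np.zeros(num_frames)
    filled = 0
    with lock:
        while filled < num_frames and queue:
            chunk = queue[0]
            n = min(len(chunk), num_frames - filled)
            out[filled:filled + n] = chunk[:n]
            if n == len(chunk):
                queue.popleft()
            else:
                queue[0] = chunk[n:]   # keep the unplayed remainder
            filled += n
    return out   # any unfilled tail stays at zero
```

A nice side effect of leaving the unfilled tail at zero is that an underrun becomes a brief silence rather than garbage audio.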
Real time image smudge tool in Pythonista:
A real-time smudge tool (see also my second post) works similarly to a filter or a reverb, except that it processes regions of an image along a brush stroke rather than bits of sound along a sound stream. @JonB’s IOSurfaceWrapper (see below, and thanks to him) made it easy for me to code a real-time smudging tool (lots of comments in there as well):
https://gist.github.com/medericmotte/37e43e477782ce086880e18f5dbefcc8
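To give an idea of the core operation, here is a minimal NumPy sketch of one smudge step (my own illustration, not the gist’s actual code, which works on an IOSurface): the brush picks up a patch of pixels at each stroke point and blends it into the region under the next one.

```python
import numpy as np

BRUSH = 32           # brush diameter in pixels
STRENGTH = 0.7       # how much paint the brush drags along

carried = None       # pixels picked up at the previous stroke point

def smudge_step(img, x, y):
    # Blend the carried patch into the region under (x, y), in place,
    # then pick up the blended result to drag it to the next point.
    # Assumes img is a float NumPy array (H x W x channels).
    global carried
    r = BRUSH // 2
    region = img[y - r:y + r, x - r:x + r]   # a view into the image
    if carried is not None and carried.shape == region.shape:
        region[:] = STRENGTH * carried + (1.0 - STRENGTH) * region
    carried = region.copy()                  # pick up for the next step
```

In practice this step would be called at closely spaced points along the stroke so the smear stays continuous.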
It can be interesting to take a look at my previous approach, especially to compare the speed of the two:
https://gist.github.com/medericmotte/a570381ca8adfcec6149da2510e81da2
The difference, on my device at least, seems small at first glance, but when you smudge very fast around a small circle, you will notice that with my previous approach the blue cursor can’t keep up and ends up on the opposite side of your finger’s circular motion, while with the current approach the cursor always stays perfectly in line with your finger.