Sunday, March 31, 2024

Python: Digital Audio and DSP--Baby Steps

A goal for 2023 (I will blame my tardiness on the pandemic?) was to start down the difficult path of teaching myself Digital Signal Processing.

Audio is a good vehicle for this pursuit: there're a lot of good programming examples out there, and every DJ loves delay.  

Question: what platform to use for experiments? 

After research I chose Windows and Python. Windows machines are everywhere; Python is free and easy, and perhaps the path of least resistance.


Before going further, fair warning: like the last post about learning C++, this post is my trail of breadcrumbs through the learning process--instead of writing this information in a notebook for later review, it's in this blog.

So: mostly useful to me, maybe not a lot of others--I expect like 5 views on this post. 

More hardware next time.

Let's Get Started....

My dev platform was as simple as possible: a Windows 10 Pro computer and Python 3.11.  

The goal was to use Python to generate waveforms and audio effects as a mono audio signal. If I could get some sound to come out of the crappy speaker, I figured it was a good start.

First up: I started by recreating the code found in the video here.

The video, as straightforward as it was, still covered math and terminology unfamiliar to me. As with a lot of DSP concepts, I soon realized that learning DSP could (and probably has) made many a sane tech run screaming.

Nevertheless I tried to unpack it....

Oh no--right away imaginary numbers and complex planes reared their ugly heads. 

I didn't cover these in school, but maybe after a few decades I'm getting the hang of it. It all centers on a seemingly impossible number: the square root of -1.  

How can you square a real number and get -1? Turns out it's like online dating: You can only do it if you imagine you can do it....

A square root of -1 makes about as much sense as a negative number: you can't have a basket with -3 apples in it, but -3 apples might mean you owe your neighbor 3 apples. Negative numbers exist only in our minds.

I had to remember: negative numbers are a human construct, not a physical one; imaginary numbers are the same.

Like negative numbers, imaginary numbers help us solve otherwise unsolvable problems (like x*x + 1 = 0).
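Handily, Python has complex numbers built in (the imaginary unit is spelled 1j), so you can poke at this without any special libraries--a quick check of my own, not from any of the linked videos:

import numpy as np

# a number whose square is -1
print((1j) ** 2)            # (-1+0j)

# and it solves x*x + 1 = 0 (the roots of x^2 + 0x + 1)
print(np.roots([1, 0, 1]))  # [0.+1.j 0.-1.j]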

An interesting introductory video on imaginary numbers is here

I saw imaginary numbers everywhere in the DSP literature I was reading. 

Why? 

Turns out, convenience: it takes a lot less ink to work through DSP math using imaginary numbers than using only real numbers--so that's what DSP pros use. A good article about this is here.   
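For example (a quick numpy check of my own, not from the article): Euler's formula e^(jw) = cos(w) + j*sin(w) packs a cosine and a sine into one complex exponential, so one symbol does the work of two:

import numpy as np

w = 0.25 * np.pi
print(np.exp(1j * w))        # (0.707...+0.707...j)
print(np.cos(w), np.sin(w))  # 0.707... 0.707...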

S plane, Z plane: more complex-number stuff--an X-Y grid of real (X-axis) and imaginary (Y-axis) numbers. The S plane describes analog (continuous) systems; the Z plane describes discrete ones. Getting from the analog description to the digital one is done with math--the Z transform--just as an ADC chops an analog signal into digital chunks, the transform in this context maps a continuous-time description into a discrete-time one.

I also heard a lot about "Zeros and poles" found in these planes. Hello? A video covering this that I could follow for the most part is here.
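As a small sanity check of my own (not from the video, and assuming scipy is installed), you can ask scipy for the pole and zero of the first-order allpass filter that shows up in the code later in this post:

import numpy as np
from scipy.signal import tf2zpk

sampling_rate = 48000
break_frequency = 1000  # a hypothetical 1 kHz cutoff
tan = np.tan(np.pi * break_frequency / sampling_rate)
a1 = (tan - 1) / (tan + 1)

# H(z) = (a1 + z^-1) / (1 + a1*z^-1)
zeros, poles, k = tf2zpk([a1, 1], [1, a1])
print("zero:", zeros, "pole:", poles)
# the pole sits inside the unit circle (stable); the zero sits at
# its reciprocal outside it--the classic allpass signature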

And on and on forever....tons and many kilogram tons more. 

DSP means math, and not the math you learned in 10th grade. DSP is difficult, unfamiliar math. I am curious how far I can go with this.  

But enough reading. The rest of the weekend was spent coding.

PROGRAMMING:

First thing I did with Python DSP was follow the example here to create an audio filter. It worked. Cool!

For the example I needed to install and import the Python numpy and sounddevice libraries--numpy creates the arrays of samples and sounddevice plays them back. Here's the code:

import numpy as np
import sounddevice as sd

# create a numpy array filled with random numbers (white noise)
sampling_rate = 48000
duration_in_seconds = 5
highpass = False  # False for lowpass
amplitude = 0.3   # scale output for your PC

duration_in_samples = int(duration_in_seconds * sampling_rate)

# watch the line wrap....
white_noise = np.random.default_rng().uniform(-1, 1, duration_in_samples)
input_signal = white_noise

# next the creator sets up an "all pass filter",
# which uses phase tricks to knock out audio frequencies,
# creating a gentle (first-order) low- or high-pass filter.
# The cutoff sweeps from 20 kHz down to 20 Hz over the clip:
cutoff_frequency = np.geomspace(20000, 20, input_signal.shape[0])

# array for output processing:
# a numpy array of zeros with the same length as the input
allpass_output = np.zeros_like(input_signal)

# create the all pass filter.
# First, the inner (one sample) buffer of the all pass filter:
dn_1 = 0

# process each sample
for n in range(input_signal.shape[0]):
    break_frequency = cutoff_frequency[n]

    # calculate the filter coefficient
    tan = np.tan(np.pi * break_frequency / sampling_rate)
    a1 = (tan - 1) / (tan + 1)

    # all pass filter difference equation
    allpass_output[n] = a1 * input_signal[n] + dn_1

    # store value in buffer for next iteration
    dn_1 = input_signal[n] - a1 * allpass_output[n]

# now mix the allpass output back with the dry input:
# in phase gives a lowpass, inverted gives a highpass
if highpass:
    allpass_output *= -1

filter_output = input_signal + allpass_output

# scale amplitude of the mix
filter_output *= 0.5
filter_output *= amplitude

# play out using the sounddevice library
sd.play(filter_output, sampling_rate)
sd.wait()

Ha! This worked.  This was an excellent introductory video, and the content creator goes over the math and the code but still keeps it all sane.

And at the end of it--I wrote my first DSP code (well, not really--I just copied the code to my PC with a few minor changes and ran it)  

Joy!

What next?

If I could play back and filter white noise numpy arrays, how about doing the same with a ramp wave? 

It was not too hard to create a ramp wave using some numpy trickery. For instance, in this fragment np.linspace(start, stop, num) creates a numpy array of num evenly spaced values from start to stop, while np.tile(arr, reps) repeats the elements of array arr reps times. Voilà: ramp wave.

sample_rate = 44100
grain_size = 100
repeat_grain = 4800

wave_audio = np.linspace(-.9999, .9999, grain_size)
ramp_wave = np.tile(wave_audio, repeat_grain)
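To actually hear it, the same sounddevice calls from the filter example work on this fragment too (scaled down a bit for the speaker):

import sounddevice as sd

sd.play(ramp_wave * 0.3, sample_rate)  # 100-sample grains at 44.1 kHz = a 441 Hz ramp
sd.wait()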

Or an x% pulse wave?  That was not too bad either, but I got stuck... I also found out (not documented?) that sounddevice won't play a sample if it's greater than or equal to 1 or less than or equal to -1. I couldn't figure out why until I started messing around with floats just under 1 for the max and min values in the array. When I did that, I suddenly heard a raspy pulse wave coming from my speaker.

#numpy array for square/pulse
#enter 1-100 below, PW as percent
PW = 5
PW_as_percent = PW * .01
PW_high = int(grain_size * PW_as_percent)
PW_low = int(grain_size - PW_high)

x = np.full(PW_high, .99)
y = np.full(PW_low, -.99)
sq_wave_single_grain = np.concatenate((x, y))
sq_wave = np.tile(sq_wave_single_grain, repeat_grain)
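Playback is the same idea; wrapping the array in np.clip is cheap insurance against the full-scale-sample problem above (my addition, not something from the tutorial):

import numpy as np
import sounddevice as sd

# keep every sample strictly inside (-1, 1) before playback
sq_wave = np.clip(sq_wave, -0.999, 0.999)
sd.play(sq_wave * 0.3, sample_rate)
sd.wait()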


Finally, a numpy array where the PW varies for each "grain", in this case, PW modulated by a ramp wave. 

This one made me look up some numpy techniques but I got it to work.  

For this fragment I used "simpleaudio", a library similar to sounddevice, but it uses 16-bit integer values between -32768 and 32767 instead of floats between -.9999 and .9999. In general, there are a lot of audio libraries for Python; a good video summarizing some I messed around with is here.
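Before the PWM code, a quick sketch of hopping between the two conventions (my own note, not from the video; 32767 is just the usual 16-bit full-scale value):

import numpy as np

# float samples in (-1, 1)  ->  16-bit integers for simpleaudio
float_wave = np.linspace(-0.9999, 0.9999, 100)
int16_wave = (float_wave * 32767).astype(np.int16)

# ...and back to floats for sounddevice
back_to_float = int16_wave.astype(np.float64) / 32767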

Here's the PWM code:

sq_wave_pwm_np = np.array([], dtype=np.int16)

x = range(20, 99, 1)  # new PWM value for each grain

for n in x:
    PW_as_percent = n * .01
    PW_high = int(grain_size * PW_as_percent)
    PW_low = int(grain_size - PW_high)

    # simpleaudio wants 16-bit integers, so build each grain as int16
    pwm_x = np.full(PW_high, 32767, dtype=np.int16)
    pwm_y = np.full(PW_low, -32768, dtype=np.int16)
    sq_wave_pwm_grain = np.concatenate((pwm_x, pwm_y))

    # append the new grain to the end of the existing array
    sq_wave_pwm_np = np.append(sq_wave_pwm_np, sq_wave_pwm_grain)

pwm = np.tile(sq_wave_pwm_np, 10)
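To hear the result, simpleaudio's play_buffer() takes the array plus the channel count, bytes per sample, and sample rate--something like this (my sketch, not part of the original fragment):

import simpleaudio as sa

# mono, 2 bytes (16 bits) per sample, 44.1 kHz
play_obj = sa.play_buffer(pwm, 1, 2, sample_rate)
play_obj.wait_done()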

New IDE--VSCODE

I had used JetBrains PyCharm as my Python IDE for many years, but as I was working on baby DSP it started to feel bloated--too feature-rich for what I do.

I was using VSCODE  at work so figured, why not try that for Python.

I followed the basic VSCODE-and-Python setup steps from our most honorable, raze-the-world-for-our-shareholders bestest buddies at Microsoft, here

What do you want for free?

So far, seems OK.


Setting up a Python virtual environment in VSCODE turned out to be not as easy as expected, but the video here was helpful.  

To summarize:

Using Windows Explorer, I created a new directory for the virtual environment, cd'd to it, and in the VSCODE terminal typed this:

python -m venv myenv 

(myenv could be whatever I wanted to call the new virtual environment)

This created a framework for a new Python virtual environment.

Next I navigated to the newly created myenv\scripts directory and typed this:

./activate.bat

Finally, in the same myenv\scripts directory, I installed numpy:

./pip install numpy

For me, I had to include the dot-slash (as if I were on a Linux system)--probably because the terminal doesn't search the current directory by default (it isn't on the PATH).  

...if I typed  

pip install numpy

as seen in the video and elsewhere, without the dot slash, numpy got installed in my roaming profile, not in the virtual environment. Nice!

As a last step, before running or debugging the .py files I created using VSCODE, I had to tell VSCODE which interpreter to use for the virtual environment (it prompted me).  

I had to navigate to the python.exe file in the new myenv\scripts folder and choose that one.

From there, it all worked. 

What's Next?

I could not get sounddevice's buffering to work--too bad, because it would be cool to send changes to numpy arrays to a buffer in real time. This would get me a lot closer to what I'd be doing with an embedded system. 

I am not sure I have time to get that working, and sending numpy arrays to sounddevice buffers isn't well documented. But I felt I could have figured it out with more time and effort.  
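For future me, here's the rough shape of sounddevice's callback-based streaming--a minimal sketch I didn't get working with my own wavetable arrays (the sine tone inside the callback is just a placeholder for whatever array you'd be updating in real time):

import numpy as np
import sounddevice as sd

sampling_rate = 48000
frequency = 220.0  # hypothetical test tone
phase = 0

def callback(outdata, frames, time, status):
    # fill each requested block of output samples on the fly;
    # a real patch would pull from an array being modified elsewhere
    global phase
    t = (np.arange(frames) + phase) / sampling_rate
    outdata[:, 0] = 0.3 * np.sin(2 * np.pi * frequency * t)
    phase += frames

with sd.OutputStream(samplerate=sampling_rate, channels=1, callback=callback):
    sd.sleep(3000)  # let the stream run for 3 seconds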

For now it's back to hardware for a bit. 

I find myself more and more drawn to the software side of DiWHY. DSP of course is a huge challenge, but I might be up for it--maybe? Not just for audio--for all of it. This could become a big problem.



Monday, March 11, 2024

C++: Learning, Links, and TidBits

No posts so far for 3-2024....I have been brushing up on C++ skills.





 After years of writing code in embedded C, which works fine for what we usually do, why C++?

I was talking with a talented DIYer at my friendly (and let's not forget) geeky meetup.

We discussed the 4011 based hardware debounce circuit--post here

He was baffled: why not do this in software? 

His advice--other professional engineers have told me this too: if you can do something in software, you should do it in software. 

Software is easy to change--hardware, not as much.  

He emailed me his debounce C++ class which was only 40 lines, contained in a single cpp file.  

I could follow the basics--maybe--and figured I'd port it to C, then thought, why, Stroutty, why? 

It should be easy to attend an online C++ class and get to the point where I could port the debounce code and drop it into my current RP2040 project or whatever else I was working on.

Besides, I was a bit tired of troubleshooting hardware after a disappointing Moog HPF build and needed a different challenge.

 So....I signed up for the C++ Mosh class. 

I already attended an online Mosh Git class I found useful. His C++ offering was affordable ($99 USD for a three-part course).  You can find a free version of the Mosh C++ course Part I here.  

Do the Monster Mosh....

Turns out Parts I and II of this three-part class were good for C review and C++ basics, but by Part III the lack of source code became problematic; I was spending too much time typing Mosh's lessons into Visual Studio Community just to see if they worked. 

I stopped attending classes entirely about a third of the way through Part III: not only was it ridiculously painful and time-consuming to type example code into my IDE, but some of the damn lessons wouldn't compile even after meticulous typing. Hello?

The forum for the class wasn't helpful either; some members were oddly rude, and many basic questions (like--where the hell is the source code?) went unanswered.  

$99 was still a good value for what I got, but, no source code and examples that don't compile? 

Time to move on.

What Else?

Fortunately, even if Mosh Part III was the pits, there is a ton of material about C++ everywhere.

C++ references: 

cppreference -- damn hard to follow for a beginner, and maybe for almost everyone, but as I learned more C++ I could begin to sort of kind of make some sense of some of it. Sorta. Kinda. 

C++ for C programmers. Useful to me, since I am already OK at embedded C.

C++ tutorials:

TutorialsPoint  -- this has become my go-to reference for learning. The lessons are super clear, with lots of simple examples, and unlike the Mosh Pits--this guy includes his source code. Yeah!

W3 C++ Tutorial  -- who says these guys can only teach you to write HTML and CSS? Another good source for learning.

C++ video tutorials:

  • Hank Stalica -- practical C++.  Like for a college freshman.
  • The Cherno -- a bit more advanced--you have to be OK at C++ to follow some of his C++--still, useful.
  • Code Beauty  -- easy to follow lessons and yes, she is a cutey. 
  • Caleb Curry  -- whole enchilada here. This kid must have spent hours and hours and hours (!!) putting together these tutorials--we owe him our gratitude.

What Was Next?

With a few weekends and evenings of C++ practice I went back to some embedded examples (for instance an AD4725 C++ class for Arduino--here, or AD1115 for RP2040--here) and was happy to see that for embedded open-source C++ most programmers stuck to the basics. No virtual functions, no friend classes, no lambda expressions, not even smart pointers. I could follow this.  

Perhaps for embedded systems, the kind of stuff DiWhyers do anyway--we don't need/don't want crazy advanced modern C++ foobar.  "Just the Class, ma'am".

I went back to the bench to see if I could rewrite the debounce C++ class that started me down the C++  rabbit hole--Could I get it to work with an RP2040 MCU?  

The original code was written for a Teensy 3.1--which is based on an NXP MCU.  

The RP2040's SDK handles interrupts quite differently--but how hard could this port be?

Turned out--hard. Very hard.  

Interrupt handlers in C++ are oddly difficult to code (explanation here), and to make things worse, the Pico uses callbacks as interrupt handlers. After a few hours looking into this (yes, C++ is, umm...err..., odd?), I decided it made the most sense to either write an interrupt-driven debounce algo for the RP2040 in C or continue to debounce in hardware. 

It would take too much time to sort this out in C++ and the resulting code would be overly complex, especially for someone new to C++. Really?  Really. In this case: "C: the right tool for the job".  

Next I went over a few C modules I recently wrote. For example, the C code for the MCP4922--what would be involved in porting that to C++?  Sure, I could do it.

Right now I don't think I have time. 

Moved on.

OUTTRO

For now I will see if the next IC I need to incorporate into a digital project has a C++ class I can use or modify.  Maybe I will create my own C++ class for it.

Beyond that, I will keep at the C++ learning as time permits. Even if I never code with it, understanding the syntax basics has already been useful for deciphering open-source class files and has perhaps even helped me better understand embedded C.

After about a month--yeh, I am convinced--C++ sucks. It has the oddest syntax I have ever seen, is endlessly dense, endlessly confusing, and even has its own bizarre terminology (lvalue? rvalue? storage class specifier? there are thousands of these damn things), along with a lot more to make a normal person just trying to get some damn thing to work run screaming.

Worst of all, some of the C++ programmers I discovered on my month-long journey have the atonal bebop jazz complex: "screw you if you are not as awesome and talented and smart as I am". 

Right! Toot your trumpet, Ornette, while I cover my ears. 

I still have no idea why C++ is so popular; nevertheless I will keep plugging away at it. Thank you, Bjarne, may I have another? I am that kind of masochist.

