Hi! I’m new to this forum and to the world of PYNQ. While playing around with the Jupyter examples (and after some googling), I noticed that the premade PYNQ-Z1 overlays apparently don’t offer much support for live audio processing, and I thought it would be great practice to try to work on that.
I’m quite new to all this, but my basic question is this: the PYNQ-Z1 reference manual describes the PDM ADC for the microphone signal, and apparently the output of this process goes to pin G18 of the Zynq programmable logic on the FPGA (?). Now, if I wanted to create a custom overlay for audio processing (first just to enable basic live audio streaming into the Python side), how could I access the G18 microphone data pin of the Zynq programmable logic?
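From what I’ve read so far, pins like this are accessed in Vivado by declaring a top-level port in the design and mapping it to the physical pin in an XDC constraints file. A rough sketch of what I imagine that looks like (the port name `mic_pdm_data` is just my own invention, and the pin location and IO standard would need to be double-checked against the official PYNQ-Z1 master XDC):

```tcl
## Hypothetical constraints for a custom top-level port carrying the
## microphone PDM data. Verify pin/IOSTANDARD against the board's
## master XDC before using.
set_property PACKAGE_PIN G18 [get_ports mic_pdm_data]
set_property IOSTANDARD LVCMOS33 [get_ports mic_pdm_data]
```

(If I understand PDM correctly, the design would also need to drive a clock out to the microphone, which would get a similar constraint on its own pin.)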
My basic plan would be to buffer the audio samples in the FPGA, and then send an interrupt to the Python side in Jupyter each time the buffer fills, so Python can read and process those values. Later on, some FPGA-accelerated DSP algorithms could perhaps be brought into the picture as well to speed up the DSP done in Python. Enabling live audio processing would open up a big world of possibilities.
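To make clearer what the FPGA side would have to do before buffering, here is a quick host-side simulation of turning the 1-bit PDM stream into PCM samples. I’m using a simple per-block averaging decimator with a decimation factor of 64 purely as an assumption for illustration; a real design would more likely use a CIC/FIR decimation filter in the fabric:

```python
import numpy as np

def pdm_to_pcm(pdm_bits, decimation=64):
    """Convert a 1-bit PDM stream (0/1 values) to PCM samples by
    averaging each block of `decimation` bits and centering around 0.
    (A real FPGA design would use a proper CIC/FIR decimator.)"""
    n = len(pdm_bits) // decimation                     # output sample count
    blocks = np.asarray(pdm_bits[:n * decimation]).reshape(n, decimation)
    # The mean of each block is the local density of ones,
    # which tracks the analog signal level.
    return blocks.mean(axis=1) * 2.0 - 1.0              # map [0, 1] -> [-1, 1]

# Simulate a PDM stream for a constant mid-scale input (~50% ones density)
rng = np.random.default_rng(0)
pdm = (rng.random(64 * 100) < 0.5).astype(np.uint8)
pcm = pdm_to_pcm(pdm)
print(pcm.shape)  # 100 decimated samples near 0.0
```

The decimated samples are what I’d expect to land in the FPGA buffer and eventually in a numpy array on the Jupyter side.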
I was thinking of modifying/building on top of this AXI/DMA demo example that I found (I had some other links in mind too, but I’m only allowed to post one):
Any help in figuring out the basic architecture here would be much appreciated; I’m a newbie. My current intuition is that I could somehow access the microphone pin (G18) in Vivado and feed the buffering system from there, then raise interrupts so that Python on the Jupyter side can read the register values, but this feels somewhat naive and I’ve probably missed many important points. Got to start somewhere. First, I’d just need to figure out where to access that pin G18 - is it somewhere in Vivado? Thanks!