Concrete suggestion for an improved base overlay

Following the discussion about what the base overlay is for, here is a concrete suggestion for how I think it could be made rather more useful, without having to change it much. The goal is to make it easier for people to do their own PL processing.

First, consider the existing base overlay, taking the part dealing with audio as an example. At present, in simple terms, it collects data from the audio codec and sends it through to the PS, and it also sends data back the other way. Think of those as two data streams.

What I suggest is to send each of those two data streams through a trivial IP block that, in the base overlay, just passes them straight from input to output.
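For illustration, here is a minimal sketch of what such a pass-through block might look like in HLS C++. To be clear, the function and type names are my own inventions, and the stream word format is just an assumption for the example, not anything taken from the current base overlay:

```cpp
#include "ap_axi_sdata.h"
#include "hls_stream.h"

// One word of the AXI4-Stream carrying the audio samples; the
// side-channel widths here are arbitrary choices for the example.
typedef ap_axis<32, 2, 5, 6> audio_word;

// Trivial pass-through: every word read from the input stream is
// written to the output stream unchanged, so the overlay behaves
// exactly as it does today.
void audio_passthrough(hls::stream<audio_word> &in,
                       hls::stream<audio_word> &out) {
#pragma HLS INTERFACE axis port=in
#pragma HLS INTERFACE axis port=out
#pragma HLS INTERFACE ap_ctrl_none port=return
    audio_word w = in.read();
    out.write(w); // data, last, user, etc. all copied through untouched
}
```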

The end result would work in exactly the same way as the current base overlay. However, it would have a significant advantage for people wishing to do their own processing on the data streams in the PL (e.g. to add an audio effect): they would modify the pass-through IP block into an audio-effect IP block by writing new code for it. A trivial example would be to right-shift the audio values being passed through, to act as a simple volume control.
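Continuing the sketch above (same caveats: illustrative HLS C++, not code from the base overlay), the modification might be as small as this:

```cpp
#include "ap_axi_sdata.h"
#include "hls_stream.h"

typedef ap_axis<32, 2, 5, 6> audio_word;

// Same interface as the pass-through block, but the body now applies
// a crude volume control: an arithmetic right shift by one bit halves
// the amplitude. (A shift amount read from an AXI-Lite register would
// make it adjustable at run time.)
void audio_volume(hls::stream<audio_word> &in,
                  hls::stream<audio_word> &out) {
#pragma HLS INTERFACE axis port=in
#pragma HLS INTERFACE axis port=out
#pragma HLS INTERFACE ap_ctrl_none port=return
    audio_word w = in.read();
    w.data = w.data >> 1; // ap_axis data is signed, so the shift preserves sign
    out.write(w);
}
```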

I guess a little extra thought is needed to deal with clocking, resets, etc.

Clearly, this could be done for other kinds of data streams too, such as video, and perhaps others.

Comments? Suggestions? Is this something the Pynq developers might consider? At least in principle, it does not seem as if it would need too much work.

We had an intern look at the audio subsystem on the PYNQ-Z2; their work is on GitHub at https://github.com/wady101/PYNQ_Z2-Audio (an audio streaming architecture for the PYNQ-Z2 board). The idea was, as you suggest, to decouple the I2S and codec configuration from the DMA engine that transfers the data back to the PS, such that there is a well-defined, externally accessible AXI stream interface. We could then support custom block insertion, and bypassing the PS altogether, much more conveniently.

Unfortunately, there is still a fair amount of work left to do, both to test it and to integrate it in a backwards-compatible way. As with all these things, it is a matter of priorities, given that the core team is not that big.

One of the key themes we are thinking about is how to make the subsystems we have in PYNQ more composable and reusable such that it is easier for people to take parts of the base overlay and insert custom logic in the various data-paths. Currently we’re concentrating mainly on video but there is no reason audio couldn’t also be considered.

In the meantime we’re happy to offer advice to people who want to contribute these kinds of changes.

Peter

Thanks Peter. It’s good to hear that you are thinking about how to help people insert custom logic into data-paths. I believe this is key to making Pynq more accessible, and attractive, to a range of users (students, software developers, makers, researchers, …).

While video is certainly important, I guess many courses on signal processing would also like to be able to do this with audio: audio is easier to get to grips with than video, being one-dimensional. I’m sure students would be more impressed by real-time audio processing on Pynq than by demos sending pre-recorded samples through MATLAB, for example.

Don’t forget digital signals too (as per logictools). It could be very handy to watch USB bus signals (leading into logic analyzers), or to modify Ethernet packets on the fly (leading into firewalls and the like).

With all of these broken out, Pynq could be much more useful as a teaching tool, and a prototyping tool.

Ralph:

Take a look at https://github.com/jgoeders/dac_sdc_2020 (DAC System Design Contest 2020); therein you will find an example of a FIFO operating inside a Jupyter environment. Absolutely a great place to start.

Thanks - although that example is specific to video and assumes the input is coming from the PS. I agree that it’s a useful reference, though.