Litex integration with PYNQ

I’m currently trying to integrate LiteX’s SoC-building suite with my PYNQ-Z1 board and am running into issues.

While I am not a total novice at designing digital logic, I definitely am when it comes to integrating designs onto actual boards, especially with the additional software layers the Zynq brings, so keep that in mind.

To give a short summary of what LiteX does:
It’s essentially a set of scripts that lets you glue together all kinds of open-source IPs into a full SoC, then build it all with a vendor or open-source EDA toolchain and either make ASICs or load the result onto FPGAs.

What’s cool is that it runs on Migen, an FHDL that lets you describe hardware in Python and generates Verilog from it, adding extra features along the way.

I’m trying to write a paper this fall on using these tools for image-processing applications, as opposed to vivado_hls or pure Vivado block diagrams. To that end, getting it running on a PYNQ would let me run some really sweet experiments.

However, LiteX hasn’t been ported to the PYNQ yet, and I’ve essentially taken it upon myself to try to do it. What does exist is a port to the Zybo board. For that board, the developers generated an .xci for the Zynq processing system, built an SoC around the Zynq, and connected it to AXI GP0 in the fabric.

I’ve modified the Zybo files in LiteX’s board repo to create matching ones for the PYNQ from a pin point of view, made the .xci, and generated the bitstream.

This is where the problems start. From the resources I’ve seen, the standard path is to use Vivado to generate a .bit and a .tcl file, then load them using the PYNQ overlay system. This IS what I want, since being able to keep everything in Python really streamlines the whole process. The issue is that said .tcl file does not get generated when LiteX calls Vivado to create the bitstream.
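For context, the overlay flow I’m after expects the Vivado-exported .tcl metadata file to sit next to the bitstream with the same base name. A small illustrative helper (not part of the pynq package, just a sketch of the naming convention):

```python
from pathlib import Path


def overlay_files(bit_path: str):
    """Return the (.bit, .tcl) pair the PYNQ overlay loader expects.

    PYNQ's Overlay class parses a Vivado-exported .tcl file that sits
    next to the bitstream with the same base name, e.g. base.bit and
    base.tcl. This helper is purely illustrative.
    """
    bit = Path(bit_path)
    tcl = bit.with_suffix(".tcl")
    return bit, tcl


# Hypothetical file name for a LiteX-built PYNQ bitstream.
bit, tcl = overlay_files("litex_pynq.bit")
```

On the board itself the loading is then just `from pynq import Overlay; ol = Overlay("litex_pynq.bit")`, which is exactly why the missing .tcl blocks this path.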

The project that Vivado generates from the LiteX Python code results in this sort of block design:


The Zynq PS is INSIDE that core, as far as I’m aware, and I think the intention is that I would now load this onto the board and talk to the IP via the UART.

However, to get this all running under the overlay system, I think I have to come at it from an entirely different direction (?) and get Vivado to attach the LiteX-generated IP to the Zynq PS and produce the .tcl/.bit.

I could really use some direction on how to proceed. I’ve written to the LiteX folks about this too, but decided to ask here as well, since the problem is very PYNQ-specific rather than just about adding a new board.

I can edit LiteX’s build system, which essentially generates a .tcl script telling Vivado how to build the bitstream. This is where I assume I should be making changes: make it not wrap everything into a single RTL IP, but instead produce two separate IPs in Vivado, one custom and one for the PS, and then have it generate the .tcl that the overlay system can consume.

Any thoughts?

I’m also now reading the entire Zynq technical reference manual. Given that I want to work in FPGA research, I want to really understand what is happening when I try to get this custom IP merged with the Zynq and loaded as an overlay.

At the end of the day, this link between the two ecosystems could lead to some cool projects :wink:

This is the .tcl that LiteX generates:

As far as I understand, the PS gets merged inside the generated Verilog file. So I think this is what I have to move out into Vivado?

You don’t have to package everything into a single hierarchy. You should make the PS block appear at the top level of the block design. We want the block design to be open to users, with no black box preventing them from inspecting the design. This is also the common design style; I have rarely seen a block design that wraps up the PS block.

I see. I just had a chat with one of the LiteX designers. They told me that the method I used has LiteX doing the integration. The alternative is to have LiteX generate a core that Vivado sees as a custom IP and then do the integration in Vivado. I think that’s the approach I will take. I assume this can be done entirely in Tcl? Create the necessary ports in the LiteX design and then run block automation on them in Vivado?
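For what it’s worth, here is a rough sketch of what that Tcl integration could look like. All names are placeholders: `litex_core` stands for whatever top-level module LiteX emits, the AXI slave port name `s_axi` is assumed, and the exact PS7 IP version may differ with the Vivado release. This is untested, not a working script:

```tcl
# Hypothetical sketch -- module, port, and path names are made up.
create_bd_design "litex_system"

# Bring the LiteX-generated Verilog in as an RTL module reference.
add_files ./build/gateware/litex_core.v
create_bd_cell -type module -reference litex_core litex_core_0

# Drop in the Zynq PS and let block automation apply the board preset.
create_bd_cell -type ip -vlnv xilinx.com:ip:processing_system7:5.5 ps7_0
apply_bd_automation -rule xilinx.com:bd_rule:processing_system7 \
    -config {make_external "FIXED_IO, DDR" apply_board_preset "1"} \
    [get_bd_cells ps7_0]

# Wire the core's AXI slave to M_AXI_GP0; connection automation
# can insert the interconnect and reset logic.
apply_bd_automation -rule xilinx.com:bd_rule:axi4 \
    -config {Master "/ps7_0/M_AXI_GP0" Clk "Auto"} \
    [get_bd_intf_pins litex_core_0/s_axi]

validate_bd_design
```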

It can be done in Tcl, but a much easier way is to do it in Vivado IPI by connecting the blocks there. After you have done the design (and validated it), you can generate the .tcl file with the write_bd_tcl command in the Tcl console.
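Concretely, once the block design validates in IPI, the export is a one-liner in the Tcl console (the output file name here is just a placeholder):

```tcl
# In the Vivado Tcl console, with the validated block design open:
validate_bd_design
write_bd_tcl -force ./litex_pynq.tcl
# This .tcl, together with the bitstream from the same project,
# is the pair the PYNQ overlay loader consumes on the board.
```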