PYNQ for ZCU208

Hi, I am trying to build a PYNQ image for the RFSoC Gen3 ZCU208, based on the ZCU111 image and the Vagrant VM.

I have been able to compile an image without packages, and LEDs, buttons, DMAs and other basics work well, but I got stuck with the xrfdc package:

After a while I also managed to compile an image containing xrfdc without errors in the Vagrant VM, but when importing xrfdc in a Jupyter notebook I got an error:

import xrfdc

As the error says, the symbol metal_register_generic_device is undefined. Checking the generated shared object from a terminal with

nm -DC libxrfdc.so

shows that metal_register_generic_device does exist but is undefined, and for some reason it is not resolved before xrfdc is imported in Python.
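For anyone who wants to reproduce this outside Jupyter, loading the shared object directly with ctypes triggers the same eager symbol lookup as the import (the path below is just an example - point it at your built library):

import ctypes

# dlopen with eager symbol resolution: an unresolved
# metal_register_generic_device makes this raise OSError,
# the same failure seen on "import xrfdc".
ctypes.CDLL('./libxrfdc.so')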

I am a newbie and do not understand much about all the files needed to make a custom image. Is there any special file I have forgotten to change between the ZCU111 and the ZCU208? For the moment, this is what I did to the Makefile:

EMBEDDEDSW_DIR ?= embeddedsw
ESW_LIBS := rfdc scugic
LIB_NAME := libxrfdc.so
LIB_METAL_DIR := $(EMBEDDEDSW_DIR)/ThirdParty/sw_services/libmetal/src/libmetal
LIB_METAL_INC := $(LIB_METAL_DIR)/build-libmetal/lib/include

ESW_SRC := $(filter-out %_g.c, $(foreach lib, $(ESW_LIBS), $(wildcard $(EMBEDDEDSW_DIR)/XilinxProcessorIPLib/drivers/$(lib)/src/*.c)))
ESW_INC := $(patsubst %, -I$(EMBEDDEDSW_DIR)/XilinxProcessorIPLib/drivers/%/src, $(ESW_LIBS))
OS_INC := -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/common -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/arm/ARMv8/64bit -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/arm/ARMv8/64bit/platform/ZynqMP -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/arm/common/gcc -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/arm/common -I$(LIB_METAL_INC)
LOCAL_SRC := $(wildcard src/*.c)

all: $(LIB_NAME)

$(LIB_NAME): $(EMBEDDEDSW_DIR) $(LIB_METAL_INC)
    gcc -o $(LIB_NAME) -shared -fPIC $(ESW_INC) -Isrc $(ESW_SRC) $(LOCAL_SRC) $(OS_INC) -D__BAREMETAL__ -ggdb

install:
    cp $(LIB_NAME) xrfdc/$(LIB_NAME)
    pip3 install .

$(EMBEDDEDSW_DIR):
    git clone https://github.com/Xilinx/embeddedsw -b release-2020.2 $(EMBEDDEDSW_DIR)

$(LIB_METAL_INC): $(EMBEDDEDSW_DIR)
    mkdir -p $(LIB_METAL_DIR)/build-libmetal
    cd $(LIB_METAL_DIR)/build-libmetal; \
    cmake .. -DCMAKE_TOOLCHAIN_FILE=../../../../../../../zcu111-libmetal.cmake

Basically, what I did was change the embeddedsw release from 2018.3 to 2020.2 (to pick up the newer rfdc driver for the ZCU208) and change the path to the Cortex-A53 sources, because since the 2019 releases the embedded software lives at another path, according to the changelog.

I also replaced xparameters.h, xrfdc_g.c and xscugic_g.c in /packages/xrfdc/pkg/src with the ones from my own BSP, generated from a Vivado project containing the ZCU208 and an rfdc block.

Do you think there is any other file I should modify? I know I am close to making it work, but I need your help, please.

thanks and regards
Fran


Hi again. Today I compared both libxrfdc.so files via nm -DC libxrfdc.so (see attached):

log_nm_DC_gen3.txt (6.4 KB) log_nm_DC_zcu111.txt (4.5 KB)

metal_register_generic_device does not appear in the ZCU111 output. Maybe I am accidentally pulling this function in when compiling… but I cannot find where I have included it by accident…

Any ideas are extremely welcome.

Regards


It’s possibly a function call inside the driver that was added as part of 2020.2. You can probably safely add a dummy implementation into packages/xrfdc/pkg/src/libmetal_stubs.c.

At the moment PYNQ doesn’t use libmetal to interact with the hardware so we implement some dummy functions to keep the build process happy.

int metal_register_generic_device(struct metal_device *device)
{
    return 0;
}

Peter


Thank you Peter for your solution. Adding the functions “metal_register_generic_device” and “metal_device_open” to libmetal_stubs.c solved that issue.

However, I got stuck again :S. After including the stubs, import xrfdc works in Python without errors, but when loading any rfdc object (rf, tiles, dacs…) it freezes.


First, I tried commenting out properties of the blocks in __init__.py and config.py, but the same thing happened.

I have put in a lot of logs, but it does not appear to freeze at the same point every time. Sometimes it gets stuck without printing any log, and other times it prints the logs.


It doesn’t seem to be related to the loaded block (adc, dac, rf)… Sometimes it prints the logs and sometimes it doesn’t, independently of the block. But one thing is true: when it does print logs, it always prints up to the same point. So it either prints up to the same point or prints nothing at all; there is no intermediate case.

The loaded bitstream is a very simple design with only one ADC and one DAC (I have tried other designs too, with the same results), and the memory address map in Vivado is the same as the one shown by PYNQ.
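For reference, this is roughly how I compared the two maps on the PYNQ side (the bitstream name is just an example from my design):

from pynq import Overlay

# Print PYNQ's view of the address map to compare against the
# Vivado address editor.
ol = Overlay('rfdc_test.bit')
for name, ip in ol.ip_dict.items():
    print(name, hex(ip['phys_addr']), hex(ip['addr_range']))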

Do you have any idea what could be happening? Help will be extremely welcome (any extra file needed, SD files, etc. can be included).


When you say locked up, does that mean that the entire board stops working - serial terminal and all - or can you restart just the Jupyter kernel to get the system running again?

Peter


Hi Peter, thanks for your reply:

When it freezes, I can’t restart the Jupyter window, but the Linux system works. I can use other Jupyter windows to stop the notebook and relaunch another one (with the same results), so the system does continue running…

(SSH also keeps working, since I can edit the .py files of xrfdc in the meantime)

Maybe this info helps to get the solution…

First, my apologies, because I am a newbie in a lot of things. In the folder packages/xrfdc/src, what we did to avoid errors was to replace xparameters.h, xrfdc_g.c and xscugic_g.c with files taken from a Vivado 2020 project containing an xrfdc 2.4 (actually, the same project mentioned above). Is this appropriate? Maybe it is related to the error…

Regards


I don’t know what the issue is - it seems like something in the driver is going into a loop, but I don’t know where. The next step would be to compile the library with debug symbols, run the Python script in GDB and see where the code gets stuck.

The _g files shouldn’t matter - we recreate them from the configuration data in the HWH file. It’s possible that something has changed there and we’re passing an invalid configuration structure into the driver. That could have happened with the update to 2020.2?
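If GDB is awkward to run on the board, the standard-library faulthandler module is a lighter-weight way to see the last Python line reached before the hang - just a debugging aid, not something PYNQ itself uses:

import faulthandler, sys

# Dump every thread's traceback after 30 s, repeating, so a freeze
# inside a driver call still shows which Python frame entered it.
faulthandler.dump_traceback_later(30, repeat=True, file=sys.stderr)

import xrfdc  # then run whatever code freezes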

Peter


Hi Peter, thanks for your advice. My apologies for taking so long to answer…

I realized that if you first load the bitstream without importing the xrfdc library, and then load it normally (importing xrfdc), it does not hang…

It seems it needs some kind of “initial state”, and after that it works…
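In code, the working sequence looks like this (the bitstream and IP instance names are just examples from my design):

from pynq import Overlay

# First download, before xrfdc is imported: nothing hangs here.
ol = Overlay('rfdc_test.bit')

# Import xrfdc only now and load again "normally"; importing the
# package registers the RFdc driver, so this Overlay picks it up.
import xrfdc
ol = Overlay('rfdc_test.bit')
rf = ol.usp_rf_data_converter_0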

However, a new issue has appeared:

dac0 on tile 0 appears to work: I can configure and change the mixer without errors. But when using the mixer or DynamicPLLConfig on another tile (in this case, dac0 on tile 1), the program hangs… Checking dac10.BlockStatus, a very strange configuration appears (dac0 on tile 0 has a proper configuration in BlockStatus).


Our configuration is a 122.88 MHz ref_clk, using the internal PLL for a 3194.88 MHz sampling rate.


So, question number one: is there anything in the xrfdc __init__.py we should change in order to get every tile working? On the ZCU111 there are 2 tiles with 4 DACs each; on the ZCU208 we have 4 tiles with 2 DACs each. Since the for loop goes from 0 to 4, I assumed no changes had to be made, but have I missed something?
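For reference, this is the kind of access that works on tile 0 but hangs on tile 1 (the bitstream and IP instance names are placeholders; frequencies are from our config above):

import xrfdc
from pynq import Overlay

ol = Overlay('rfdc_test.bit')
rf = ol.usp_rf_data_converter_0

dac00 = rf.dac_tiles[0].blocks[0]   # tile 0: mixer config works
dac10 = rf.dac_tiles[1].blocks[0]   # tile 1: this one hangs

# Internal PLL (source = 1) from the 122.88 MHz reference
# up to the 3194.88 MHz sampling rate.
rf.dac_tiles[1].DynamicPLLConfig(1, 122.88, 3194.88)   # freezes here

dac10.MixerSettings['Freq'] = 1000.0   # or here, when writing Freq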

On the other hand, related to the xrfdc freezing, we suspect the part of our Makefile (see the complete Makefile above in this topic) that imports the Cortex-A53 headers:

OS_INC := -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/common -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/arm/ARMv8/64bit -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/arm/ARMv8/64bit/platform/ZynqMP -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/arm/common/gcc -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/arm/common -I$(LIB_METAL_INC)

according to changelog.txt:

“02/01/19 Added support for Versal. CortexA53 BSP will be re-used; re-named cortexa53 directory as ARMv8, since files are generic for ARMv8 based processors. Created “platform” directory to place SoC based files.”

CortexA53 is re-used by ARMv8, so we import ARMv8 64-bit and platform/ZynqMP. Does this seem to be correct, and could it explain something about the xrfdc freezing?

Thanks in advance
Fran


Just to complete my reply: related to DAC tile number one, the mixer accepts every property except Freq, and that is when it hangs.


Hi @fjtoralza Fran,

I, too, want to get PYNQ working on the ZCU208. Were you able to get it working reliably, with support for the entire RFSoC Gen3 IP? If you were, what did you end up having to do in order to make it work? Thank you.

John

Hi johnsmith,

Yep, I was able to make it work (I am currently working with it, as a matter of fact), but I am afraid my company does not allow me to give you further details…

However: since PYNQ version 2.7, new versions of xrfdc and xrfclk have been released. The vast majority of the issues you are going to run into are solved there. Most of the problems I had came from the Makefile, and now that you have it I am quite sure you will get it working soon.

That said, for any concrete questions (I am not allowed to share the whole project) I will be glad to help.

Regards
Fran


Hi guys,

I’m wondering if you could help me with setting up the PLL clocks using xrfclk.
I built a PYNQ image for the ZCU208 with Ubuntu 20.04, Vivado 2020.2 and the associated PetaLinux and BSP files.

I added the .spec file and the two TICS files as mentioned here: ZCU216-PYNQ

I was able to build the image, the Jupyter notebook launches, and I can also download the bitstream file. The scope shows me the correct signal I just generated with the DAC.
However, the frequency is wrong, since I still have to set the LMX and LMK clocks accordingly.
Running xrfclk.set_ref_clks() shows the following error message:

It seems that the error occurs in _find_devices: somehow the program can’t find an “lmk…” or “lmx…” device in the /sys/bus/spi/devices folder.

So I used PuTTY to check this path on my board. Whether I run the Python code in the command window or just inspect the path, I only find the device p90jedec,spi-nor, but nothing with “lmk” or “lmx”. Because of that, the lmx_devices and lmk_devices lists stay empty, and hence the error messages are printed.
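For reference, this is roughly how I checked it from Python (assuming the of_node/compatible sysfs layout; xrfclk does a similar scan internally):

from pathlib import Path

# List every SPI device the kernel has bound and print its
# compatible string; the clock chips should contain 'lmk'/'lmx'.
for dev in Path('/sys/bus/spi/devices').iterdir():
    node = dev / 'of_node' / 'compatible'
    if node.exists():
        print(dev.name, node.read_text().strip('\x00'))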

Do you have an idea whether I forgot something during the image build process? Do I need to run a specific command to initialize the clock board? Why can’t I find devices with “lmk” or “lmx”? Is this an issue with the BSP file?

My current workaround is to configure the clocks manually via the Xilinx BoardUI.

Really appreciate your help.

Best regards
Patrick


Hi patrickmatalla,

As you know, the LMX and LMK are routed differently on the Gen3 RFSoC; as a matter of fact, they are now on the CLK104 board,

so if you just take set_all_ref_clks from Gen1, it won’t detect any clocks.

I am afraid my company does not allow me to share the code, but go to the embeddedsw repo and check the new C libraries for the clocks in the 2020.2 version: you will see there is an SPI protocol and some functions there that you need to use, and I am quite sure you will cope with this very fast.

Regards
Fran


Hi @fjtoralza @patrickmatalla ,

It appears as though the embeddedsw version of rfclk (xrfclk.c/.h) does indeed set the SPI MUX and Bridge on the CLK104 board. However, I don’t see the same logic in the PYNQ Python version, xrfclk.py. Did you have to modify the PYNQ xrfclk.py to set the SPI MUX/Bridge before you could configure the two LMX2594 devices and the LMK04828 on the CLK104 board? The AXI_GPIO IP needs to be in the Vivado design for this to work, since the MUX/Bridge is set with two GPIO signals.
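In other words, something like this (all instance names and select values here are guesses of mine, not tested code):

from pynq import Overlay

# 'clk104_gpio' stands for whatever AXI_GPIO instance drives the
# two MUX/Bridge select lines in the Vivado design.
ol = Overlay('zcu208_design.bit')
mux = ol.clk104_gpio.channel1

# Placeholder encoding - the real select values for the LMK04828
# and the two LMX2594s come from the CLK104 documentation.
LMK_SEL = 0b00
mux.write(LMK_SEL, 0b11)   # (value, mask) over the two select bits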

Thank you,

John


Hi all,

We have finally been able to create the ZCU208 PYNQ image.
Furthermore, we wrote our own function that reads the whole clock configuration from the .hwh or .bit file and automatically configures the LMK and LMX clocks.

The README documentation will be completed soon.

You can find the image and the clock function in our Github organization:
Institute of Photonics and Quantum Electronics - KIT (github.com)

We hope that this will save many of you a lot of time and nerves.


Best,
Patrick
