
Pynq for ZCU208

Hi, I am trying to build a PYNQ image for the RFSoC Gen3 ZCU208, based on the ZCU111 image and the Vagrant VM.

I have been able to compile an image without packages, and LEDs, buttons, DMAs and other basics work well, but I got stuck with the xrfdc package.

After a while I also managed to compile, without errors, an image containing xrfdc in the Vagrant VM, but when importing it in a Jupyter notebook I got an undefined-symbol error for metal_register_generic_device:

import xrfdc

As the error says, when running

nm -DC libxrfdc.so

from a terminal to check the generated shared object, metal_register_generic_device does exist but is undefined, so for some reason it never gets resolved before xrfdc is imported in Python.
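
For reference, here is a minimal sketch that reproduces the symptom outside Jupyter by forcing eager symbol resolution with ctypes; the install path of libxrfdc.so is just an assumed location, adjust it to wherever the package puts the library:

# Minimal sketch: force eager symbol resolution on the shared object.
# The path below is an assumption - point it at your installed libxrfdc.so.
import ctypes
import os

lib_path = "/usr/local/lib/python3.6/dist-packages/xrfdc/libxrfdc.so"

try:
    ctypes.CDLL(lib_path, mode=os.RTLD_NOW)  # RTLD_NOW resolves every symbol up front
    print("All symbols resolved")
except OSError as err:
    print("Load failed:", err)  # the message should name the unresolved symbol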

I am a newbie and I don't fully understand all the files needed to make a custom image. Is there any special file I have forgotten to change between the ZCU111 and the ZCU208? For the moment this is what I changed in the Makefile:

EMBEDDEDSW_DIR ?= embeddedsw
ESW_LIBS := rfdc scugic
LIB_NAME := libxrfdc.so

LIB_METAL_DIR := $(EMBEDDEDSW_DIR)/ThirdParty/sw_services/libmetal/src/libmetal
LIB_METAL_INC := $(LIB_METAL_DIR)/build-libmetal/lib/include

# Driver sources from embeddedsw, excluding the generated *_g.c files
ESW_SRC := $(filter-out %_g.c, $(foreach lib, $(ESW_LIBS), $(wildcard $(EMBEDDEDSW_DIR)/XilinxProcessorIPLib/drivers/$(lib)/src/*.c)))
ESW_INC := $(patsubst %, -I$(EMBEDDEDSW_DIR)/XilinxProcessorIPLib/drivers/%/src, $(ESW_LIBS))

# Standalone BSP headers for the Cortex-A53 (ARMv8 64-bit) plus libmetal
OS_INC := -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/common -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/arm/ARMv8/64bit -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/arm/ARMv8/64bit/platform/ZynqMP -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/arm/common/gcc -I$(EMBEDDEDSW_DIR)/lib/bsp/standalone/src/arm/common -I$(LIB_METAL_INC)

LOCAL_SRC := $(wildcard src/*.c)

all: $(LIB_NAME)

$(LIB_NAME): $(EMBEDDEDSW_DIR) $(LIB_METAL_INC)
	gcc -o $(LIB_NAME) -shared -fPIC $(ESW_INC) -Isrc $(ESW_SRC) $(LOCAL_SRC) $(OS_INC) -D__BAREMETAL__ -ggdb

install:
	cp $(LIB_NAME) xrfdc/$(LIB_NAME)
	pip3 install .

$(EMBEDDEDSW_DIR):
	git clone https://github.com/Xilinx/embeddedsw -b release-2020.2 $(EMBEDDEDSW_DIR)

$(LIB_METAL_INC): $(EMBEDDEDSW_DIR)
	mkdir -p $(LIB_METAL_DIR)/build-libmetal
	cd $(LIB_METAL_DIR)/build-libmetal; \
	cmake .. -DCMAKE_TOOLCHAIN_FILE=/../../../../../../../zcu111-libmetal.cmake

Basically, what I did was change from 2018.3 to 2020.2 (so it contains the newer rfdc driver needed for the ZCU208) and change the include path to the Cortex-A53 sources, because since the 2019 releases the embedded software lives at a different path, according to the changelog.

I also replaced xparameters.h, xrfdc_g.c and xscugic_g.c in /packages/xrfdc/pkg/src with the ones from my own BSP, generated from a Vivado project targeting the ZCU208 with an rfdc block.

Do you think there is any other file I should modify? I feel I am close to making it work, but I need your help please.

thanks and regards
Fran

Hi again, today I have compared both libxrfdc.so files via nm -DC libxrfdc.so (see attached):

log_nm_DC_gen3.txt (6.4 KB) log_nm_DC_zcu111.txt (4.5 KB)

and metal_register_generic_device does not appear in the ZCU111 build. Maybe I am accidentally pulling in a call to this function when compiling, but I cannot find where I have included it.
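
In case it is useful, here is a small sketch (using the file names of the attachments above) that parses the two nm -DC dumps and lists the undefined symbols present only in the Gen3 build:

# Compare the two nm -DC dumps and list undefined ("U") symbols
# that appear only in the Gen3 build of libxrfdc.so.
def undefined_symbols(path):
    syms = set()
    with open(path) as f:
        for line in f:
            parts = line.split()
            # nm lines look like "<addr> <type> <name>" or just "<type> <name>"
            if len(parts) >= 2 and parts[-2] == "U":
                syms.add(parts[-1])
    return syms

gen3 = undefined_symbols("log_nm_DC_gen3.txt")
zcu111 = undefined_symbols("log_nm_DC_zcu111.txt")
print("Undefined only in the Gen3 build:", sorted(gen3 - zcu111))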

Any ideas are extremely welcome.

Regards

It’s possibly a function call inside the driver that was added as part of 2020.2. You can probably safely add a dummy implementation into packages/xrfdc/pkg/src/libmetal_stubs.c.

At the moment PYNQ doesn’t use libmetal to interact with the hardware so we implement some dummy functions to keep the build process happy.

int metal_register_generic_device(struct metal_device *device)
{
     return 0;
}

Peter


Thank you Peter for your solution; adding the functions metal_register_generic_device and metal_device_open to libmetal_stubs.c solved that issue.

However, I got stuck again: after including the stubs, import xrfdc works in Python without errors, but when loading any rfdc object (rf, tiles, dacs…) it freezes.
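
Roughly, this is the kind of access that hangs; a minimal sketch, where the bitstream name rfdc_test.bit and the IP instance name usp_rf_data_converter_0 are just placeholders for my design:

# Minimal sketch of the access pattern that freezes (names are placeholders).
from pynq import Overlay
import xrfdc  # must be imported so the RFdc driver binds to the IP

ol = Overlay("rfdc_test.bit")       # programs the PL and parses the HWH
rf = ol.usp_rf_data_converter_0     # xrfdc driver instance for the RF Data Converter

# Touching any tile/block property is where the notebook hangs
tile = rf.dac_tiles[0]
block = tile.blocks[0]
print(block.MixerSettings)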


Firstly, I tried commenting out the properties of the blocks in __init__.py and config.py, but the same thing happened.

I've added a lot of log statements, but it does not appear to freeze at the same point every time. Sometimes it gets stuck without printing any log, and other times it prints the logs.


It doesn't seem to be related to which block is loaded (adc, dac, rf): sometimes it prints the logs and sometimes it doesn't, independently of the block. One thing is consistent, though: when it does print the logs, it always prints up to the same point. So it either prints up to that point or prints nothing at all; there is no intermediate case.

The loaded bitstream is a very simple design with only one ADC and one DAC (I have tried other designs too, with the same results), and the address memory map in Vivado is the same as the one shown by PYNQ.

Do you have any idea what could be happening? Help will be extremely welcome.
(I can share any extra files needed: SD card files, etc.)

When you say locked up does that mean that the entire board stops working - serial terminal and all - or can you restart just the Jupyter kernel to get the system running again?

Peter

Hi Peter, thanks for your reply:

When it freezes, I can't restart that Jupyter window, but the Linux system keeps working. I can use other Jupyter windows to stop the notebook and relaunch another one (with the same results), so the system continues running.

(SSH also keeps working; I can edit the .py files of xrfdc in the meantime.)

Maybe this info helps to find the solution…

First of all, my apologies, because I am a newbie in a lot of things. In the folder packages/xrfdc/src, what we did to avoid build errors was replace xparameters.h, xrfdc_g.c and xscugic_g.c with files taken from a Vivado 2020 project containing an xrfdc 2.4 (actually, the same project mentioned above). Is this appropriate? Maybe it is related to the error…

Regards

I don’t know what the issue is - it seems like something in the driver is going into a loop but I don’t know where. Next step would be to compile the library with debug symbols, run the python script in GDB and see where the code gets stuck.
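
As a complementary Python-only check (not a substitute for GDB, since the hang is probably inside the C driver), faulthandler can be armed before touching the rfdc objects; if the interpreter is still alive it dumps the Python-level stacks after the timeout, which at least shows which property access never returns:

# Dump all Python thread stacks if we are still stuck after 30 seconds.
import faulthandler
faulthandler.dump_traceback_later(30, exit=False)

# ...then run the code that freezes, e.g. reading a tile or block property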

The _g files shouldn’t matter - we recreate them from the configuration data in the HWH file. It’s possible that something has changed there and we’re passing an invalid configuration structure into the driver. That could have happened with the update to 2020.2?
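
If it helps to narrow that down, one way to see which configuration values PYNQ extracts from the HWH (the data that ends up in the driver's config structure) is to dump the IP parameters from the overlay metadata; the instance name usp_rf_data_converter_0 is again just a placeholder:

# Inspect the HWH-derived parameters for the RF Data Converter IP.
from pynq import Overlay

ol = Overlay("rfdc_test.bit", download=False)   # parse metadata without reprogramming the PL
params = ol.ip_dict["usp_rf_data_converter_0"]["parameters"]
for name, value in sorted(params.items()):
    print(name, "=", value)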

Peter