DPU PYNQ on ZCU104 and ZCU111

Hi there,

I have installed dpu-pynq from the terminal on the ZCU104 and ZCU111, but the Jupyter notebook is not loading properly. If it does load, it gets disconnected. If I don't install any libraries or frameworks on PYNQ, it works. Can you tell me what the exact problem is?


Hi there,

Can you give a bit more info about your setup? What PYNQ version are you using? DPU-PYNQ no longer supports the ZCU111 as of v1.4, which targets the PYNQ v2.7 image. Did everything work properly before you installed dpu-pynq?

It would be helpful if you could provide some steps to reproduce the issue.

Thanks
Shawn


Hi,
Thank you for your reply.
Yes, it was working fine without any new libraries or frameworks. If I install any libraries or frameworks, it gets stuck for a while and then disconnects. I am using PYNQ version 2.7 on the ZCU104.

So this happens not necessarily with just dpu-pynq, but when you install anything? Even if you apt-get install or pip install any packages? Or is it only when you pip install pynq-related packages? Can you still use the board with a serial connection rather than Jupyter?

Could you provide some minimal reproduction steps, from a fresh image, what commands you run and what errors/warnings (if any) you are seeing?

Thanks
Shawn

Hi,

The PYNQ OS boots up, and from the terminal I am able to install pynq-dpu and all the libraries, but I am not able to open a Jupyter notebook. If I don't install any libraries, the Jupyter notebook works. Do you think the problem is caused by the heavy libraries installed on the PYNQ OS, so the Jupyter notebook is not loading properly?

Please see the screenshots below.

It is running very slowly and getting stuck; after some time it gets disconnected.


Is pynq-dpu the only thing you are installing? Are you able to install more packages?

What’s the capacity of your SD card? You could check how much free space you have with a df -lh.

I am using a 64 GB SD card. I have only installed pynq-dpu for now, but I am also able to install Keras and Theano. If I install any packages, the Jupyter notebook will not load properly.

Is the behavior the same if you are in JupyterLab (192.168.2.99:9090/lab)? If it's a networking issue, maybe changing how you connect the board to the internet could help; if possible, you could try connecting it to your router.

Can you confirm you are actually not running out of space by doing a df -lh? Have you tried re-burning the SD card and trying this on a fresh PYNQ image? Or trying a different SD card?

There’s a jupyter log in /var/log/jupyter.log, could you post the outputs from there? Maybe it can give us a hint as to what’s happening.

Hi,

Thank you for your suggestions. I have connected the DisplayPort to my FPGA board, and now the Jupyter notebook is working fine and DPU-PYNQ is also working. I am trying to install TensorFlow, but no version seems to install correctly. Can you tell me which version will be suitable?

I don't think TensorFlow ships prebuilt packages for aarch64; you would have to look into building from source or finding a hosted package compiled for this CPU architecture. I would recommend looking at TensorFlow Lite or similar.

Thanks
Shawn

I think you might want to train the model on a host machine, then compile the model for the target device.
Hope this might help: DPU-PYNQ/README.md at v1.4.0 · Xilinx/DPU-PYNQ · GitHub


I have installed tensorflow-lite and tensorflow==0.9.0, but they are not working; they are not supported by the ZCU104 and ZCU111 processor architecture. How can I know which version is suitable for my boards? Can you help me with this issue?

Thank you.

I am not sure why you need to install TensorFlow on a Xilinx embedded platform. With my limited knowledge, I know that TensorFlow is like a bag of tools you use to design, train, and validate a deep learning model. Of course, there are also frameworks officially supported by Xilinx, such as Caffe, Darknet, etc. These bags are heavy, so it is not recommended to bring them onto embedded platforms.
After you release a high-accuracy model, you might write a wrapper application in C++, Python, etc. to deploy that model on the host machine as well.
If you want to deploy the model on an edge device, for example a Xilinx platform, you need to use a specific tool such as the Xilinx Vitis-AI toolchain to quantize the float model to an int8 model, then compile it so that the DPU (assuming you have built and deployed the DPU properly) can load and run the model. For this final step, you can read the examples in Vitis-AI to learn how to write a wrapper application using the OpenCL framework or the Overlay library.
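To make that flow concrete, here is a minimal sketch of how a compiled model could be loaded and run on the board with DPU-PYNQ; the bitstream and .xmodel file names are placeholders for your own compiled outputs, and the exact calls assume the v1.4 DPU-PYNQ API:

```python
import numpy as np
from pynq_dpu import DpuOverlay

# Load the DPU overlay and a Vitis-AI compiled (int8) model.
overlay = DpuOverlay("dpu.bit")
overlay.load_model("my_model.xmodel")   # placeholder name for your compiled model

dpu = overlay.runner
in_tensor = dpu.get_input_tensors()[0]
out_tensor = dpu.get_output_tensors()[0]

# Buffers shaped to the model's input/output tensors.
input_data = [np.empty(tuple(in_tensor.dims), dtype=np.float32, order="C")]
output_data = [np.empty(tuple(out_tensor.dims), dtype=np.float32, order="C")]

# input_data[0][...] = your preprocessed image/batch goes here
job_id = dpu.execute_async(input_data, output_data)  # run one inference job
dpu.wait(job_id)
prediction = np.argmax(output_data[0])
```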


Hi, thanks for your suggestions.

My network model is a Keras model, so I need TensorFlow to load it. As you suggested, the other method is the Xilinx Vitis AI flow. That feels a little challenging, because I would have to modify my Python code to match the Vitis AI specification and redo the training and testing. Can you suggest any tutorial for installing TensorFlow on an ARM CPU (aarch64)?


The difficulty of adapting your network to the Vitis AI tools and DPU will likely depend on how exotic your model is. If you check out the DPU-PYNQ host notebook for training an MNIST model, you'll see it is quite trivial to quantize/compile a simple CNN for the DPU.

On related projects, if you don't need acceleration, maybe TensorFlow Lite? I have never tried installing it on a PYNQ board, though.

Thanks
Shawn

I installed TensorFlow on the ZCU111 board using “pip install tensorflow-aarch64” and it worked without using any overlay. However, I am getting some errors when I use Xlnk. How can I replace Xlnk with other functions?

Great to know about tensorflow-aarch64, thanks for letting us know!

Xlnk has been deprecated in v2.7; you should use pynq.allocate instead now. Here's a relevant post with some useful links.
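In case it helps, here is a rough sketch of what the migration looks like; the buffer shape and dtype are just illustrative:

```python
import numpy as np
from pynq import allocate

# Old, pre-v2.7 style (now removed):
#   from pynq import Xlnk
#   xlnk = Xlnk()
#   buf = xlnk.cma_array(shape=(1024,), dtype=np.float32)

# New style with pynq.allocate:
buf = allocate(shape=(1024,), dtype=np.float32)  # contiguous buffer, behaves like a numpy array
buf[:] = 0.0
buf.sync_to_device()    # flush to memory before the hardware/DMA reads it
# ... run your accelerator / DMA transfer here ...
buf.sync_from_device()  # refresh the view after the hardware/DMA writes it
buf.freebuffer()        # release the buffer when finished
```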

Thanks
Shawn

Thanks for your reply. I have changed Xlnk to allocate and it is working. However, I am getting this error after loading the bitstream file.

Please see my design:

Are you following some example design? It looks like you're getting KeyErrors when accessing overlay.memory, which isn't valid. You can check overlay.ip_dict for the valid IP names in the design hierarchy.
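For example, a quick way to list what's actually in the loaded overlay (the bitstream name below is a placeholder for your own):

```python
from pynq import Overlay

overlay = Overlay("your_design.bit")        # placeholder bitstream name
print(list(overlay.ip_dict.keys()))         # valid IP names in the design
print(list(overlay.hierarchy_dict.keys()))  # hierarchies, if any
# In Jupyter, `overlay?` also prints a summary of the IP blocks and hierarchies.
```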

Thanks
Shawn

Hi @skalade

Thank you for your reply.
Yes, I am following the example design “PYNQ-CNN-ATTEMPT/FPGA_CNN.ipynb at master · ZhaoqxCN/PYNQ-CNN-ATTEMPT · GitHub” and trying to replicate it on the ZCU111 board. Here we are using allocate instead of Xlnk, so there must be a different way to load the weights into memory, and I am not sure what the exact issue is. Can you suggest a way to solve this?

(Screenshot: FPGA_CNN - Jupyter Notebook, 2022-04-13)