I am new to the PYNQ framework and I have acquired a PYNQ-Z2 board. I am interested in using PYNQ to accelerate image processing pipelines in Python using the FPGA fabric. What is the easiest way to use OpenCV-accelerated functions from Xilinx? I have seen the Vitis Vision Library, but I could not manage to create IPs from it using HLS to use in my design and build my overlays.
I have experimented with the helloworld project that uses the “resize” IP generated from that library, and I would like to reproduce that for other OpenCV functions, such as thresholding. What would be the right path to do that? Is there any tutorial? Do I need to have Vitis installed, or can I use Vivado alone?
You could copy the “hello world” resizer and replace the resize function with other functions from the Vitis Vision library.
There isn’t a PYNQ tutorial for this.
Vivado and Vitis HLS are the two main tools you need. If disk space isn’t an issue, I think it would be easiest to install Vitis, as it includes Vivado, Vitis HLS, and the software development tools, should you need those later.
Pick the version of the tools that matches the version of PYNQ and of “hello world” you are using. This will make learning easier, as you won’t have to deal with version mismatches or design updates.
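As a software reference for what a thresholding IP should compute: the Vitis Vision threshold function mirrors OpenCV’s `cv2.threshold`. A minimal pure-Python sketch of the binary mode (the function name and list-based layout here are mine, just to illustrate the rule `dst = maxval if src > thresh else 0`):

```python
def binary_threshold(pixels, thresh, maxval=255):
    """Binary threshold, matching OpenCV's THRESH_BINARY rule:
    dst = maxval if src > thresh else 0 (per pixel)."""
    return [maxval if p > thresh else 0 for p in pixels]

# Example: one row of an 8-bit grayscale image, thresholded at 128
row = [0, 100, 128, 129, 200, 255]
print(binary_threshold(row, 128))  # [0, 0, 0, 255, 255, 255]
```

Having a plain-Python (or NumPy/OpenCV) golden model like this is handy for checking the output of the generated IP against known-good results from the PYNQ side.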
Thank you for your response. I have been trying to replicate the Hello World example with another IP core generated from the Vitis Vision library using Vitis HLS, namely the thresholding function, but I am running into problems. I’ve installed Vitis 2020.2 and several versions of OpenCV (3.4.4 among them), and I’ve created the settings.tcl file pointing to the OpenCV includes and libs. But when I run vitis_hls -f run_hls.tcl inside the vision/L1/examples/threshold folder, I get the following error during C simulation:
When I set the CSIM flag to 0, I get an error during C synthesis instead:
As a result, I cannot generate the IP core to insert into my Vivado design. Does anyone have an idea of what could be causing these problems? I’ve tried several 3.x versions of OpenCV and I still can’t get the IP to build.
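For completeness, my settings.tcl looks roughly like this; the paths are placeholders for my local install, and the variable names are the ones the library’s run_hls.tcl expects in 2020.2, as far as I can tell:

```tcl
# settings.tcl -- sourced by run_hls.tcl in vision/L1/examples/threshold
set XF_PROJ_ROOT   "/path/to/Vitis_Libraries/vision"
set OPENCV_INCLUDE "/path/to/opencv/include"
set OPENCV_LIB     "/path/to/opencv/lib"

set XPART {xc7z020clg400-1}  ;# Zynq-7020 on the PYNQ-Z2

set CSIM        1   ;# run C simulation (set to 0 to skip)
set CSYNTH      1   ;# run C synthesis / export the IP
set COSIM       0
set VIVADO_SYN  0
set VIVADO_IMPL 0
```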
We’ve just released the composable video pipeline.
Perhaps you can reuse the composable pipeline.
Thank you. I will try out the composable overlay.