Changing the IP in the Composable Pipeline

These steps show how to build new Vitis Vision-based functions or custom HLS IP and insert them into the Composable Pipeline. I hope this helps to unravel how to extend this great tool. Please let me know if you have feedback.

High level overview:

  • Create new IP
  • Add the new IP to the Composable Pipeline Vivado design
  • Build the new bitstream
  • Edit the Composable Pipeline Python code to support the new function

For this example, I will create a gammacorrection HLS block and insert it into the pipeline.

Creating a new IP block using OpenCV

The existing Composable Pipeline functions are in PYNQ_Composable_Pipeline/src. NOTE: the Makefile expects the directory and the cpp file to be named after the xfOpenCV kernel function. In this case, that is gammacorrection.

cd PYNQ_Composable_Pipeline/src
mkdir gammacorrection
cp LUT/LUT.cpp gammacorrection/gammacorrection.cpp

Make the necessary edits:

  • Include the correct Vitis Vision hpp file
  • Change the function name to be the name of the xfOpenCV kernel + _accel
  • Make sure that the #defines are correct (remove unused ones and create any new ones)
  • Edit the offsets as necessary. The general pipeline framework expects rows at 0x10 and cols at 0x18. Look in the HWH file to get the offsets if needed

In this example, the gammacorrection.cpp file is:

// Copyright (C) 2021 Xilinx, Inc
//
// SPDX-License-Identifier: BSD-3-Clause

#include "hls_stream.h"
#include "common/xf_common.hpp"
#include "common/xf_infra.hpp"
#include "imgproc/xf_gammacorrection.hpp"

#define DATA_WIDTH 24
#define NPIX XF_NPPC1

/*  set the height and width  */
#define WIDTH 1920
#define HEIGHT 1080
#define TYPE XF_8UC3


typedef xf::cv::ap_axiu<DATA_WIDTH,1,1,1> interface_t;
typedef hls::stream<interface_t> stream_t;

// https://xilinx.github.io/Vitis_Libraries/vision/2020.2/api-reference.html#custom-convolution
void gammacorrection_accel(stream_t& stream_in,
                    stream_t& stream_out,
                    unsigned int rows,
                    unsigned int cols,
                    float gammaval) {
#pragma HLS INTERFACE axis register both port=stream_in
#pragma HLS INTERFACE axis register both port=stream_out
#pragma HLS INTERFACE s_axilite port=rows offset=0x10
#pragma HLS INTERFACE s_axilite port=cols offset=0x18
#pragma HLS INTERFACE s_axilite port=gammaval offset=0x20
#pragma HLS INTERFACE s_axilite port=return

    xf::cv::Mat<TYPE, HEIGHT, WIDTH, NPIX> img_in(rows, cols);
    xf::cv::Mat<TYPE, HEIGHT, WIDTH, NPIX> img_out(rows, cols);

#pragma HLS DATAFLOW

    // Convert stream in to xf::cv::Mat
    xf::cv::AXIvideo2xfMat<DATA_WIDTH, TYPE, HEIGHT, WIDTH, NPIX>(stream_in, img_in);

    // Run xfOpenCV kernel:
    xf::cv::gammacorrection<TYPE,TYPE,HEIGHT,WIDTH,NPIX>(img_in, img_out, gammaval);

    // Convert xf::cv::Mat to stream
    xf::cv::xfMat2AXIvideo<DATA_WIDTH, TYPE, HEIGHT, WIDTH, NPIX>(img_out, stream_out);
}

Build the vitis IP

The Makefile builds the HLS in boards/ip/vitis_vision. If the compile is not successful, delete the boards/ip/vitis_vision/.vhlsprj directory and rebuild.

cd ../boards/ZCU104.build
make vision_ip

Now the IP is available to Vivado, as the Composable Pipeline project already includes this directory in its IP sources.

Modify the Vivado project

Open the project in boards/ZCU104.build/cv_dfx_4_pr/cv_dfx_4_pr.xpr

  • Open the block design and double-click on the Composable hierarchy.
  • Choose one of the IP blocks to remove - like rgb2hsv - and remove it.
  • Add in the new IP by clicking the '+' icon and searching for gammacorrection.
  • Move it into the location of the old IP and connect the pins (there is only one way to do this).
  • Click on the gammacorrection IP and rename it by removing the _0 at the end.
  • Navigate to Window -> Address Editor, go to the bottom for "Unassigned entry", and click Assign.

Now, build it

  • Run Synthesis
  • Run Implementation
  • Run Generate Bitstream

Collect the overlay files and package everything

After editing the design, the overlay files will be out of date. Download the overlay.tcl file (culled from the main tcl file)

# collect the overlay files 
vivado -mode batch -source overlay.tcl -notrace

# Make the dictionary
make dict

# Make the ZIP file for transferring to the board 
make zip 

# copy the file to the board:
scp composable-video-pipeline-zcu104-v1_0_2.zip xilinx@<board>:/tmp

Install the new overlays

Log into the board as the xilinx user

sudo -i 
cd ~xilinx/jupyter_notebooks
# Install the overlay directory and file 
unzip /tmp/composable-video-pipeline-zcu104-v1_0_2.zip 

Note: In the Jupyter Notebooks, reference the overlay by its full path: ol = Overlay("/home/xilinx/jupyter_notebooks/overlay/cv_dfx_4_pr.bit")
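
As a quick sanity check, you can confirm the new accelerator actually made it into the rebuilt overlay before touching any drivers. This is a minimal sketch; the exact hierarchical name of the IP (for example composable/gammacorrection_accel) depends on your block design.

from pynq import Overlay

# Load the rebuilt overlay by its full path
ol = Overlay("/home/xilinx/jupyter_notebooks/overlay/cv_dfx_4_pr.bit")

# Print any IP whose name contains "gamma" to confirm the new block is present
print([name for name in ol.ip_dict if "gamma" in name])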

Edit the Composable Pipeline python

On the board, the pynq_composable files are located at /usr/local/share/pynq-venv/lib/python3.8/site-packages/pynq_composable/

Edit the libs.py to get base functionality. apps.py needs to be edited for the interactive widgets to work (future work)

There are 2 main steps. The first is to add the new accelerator to the VitisVisionIP class to get some common functionality. The second is to add the new class that calls the functionality.

Step 1:
Edit the bindto array in the VitisVisionIP class to replace the removed accelerator with the new one. In this case, it looks like:

bindto = [
        'xilinx.com:hls:dma2video_accel:1.0',
        'xilinx.com:hls:video2dma_accel:1.0',
        'xilinx.com:hls:rgb2gray_accel:1.0',
        'xilinx.com:hls:gaussian_filter_accel:1.0',
        'xilinx.com:hls:gray2rgb_accel:1.0',
        'xilinx.com:hls:pyrUp_accel:1.0',
        'xilinx.com:hls:subtract_accel:1.0',
        'xilinx.com:hls:gammacorrection_accel:1.0',
        'xilinx.com:hls:rgb2xyz_accel:1.0',
        "xilinx.com:hls:absdiff_accel:1.0",
        "xilinx.com:hls:add_accel:1.0",
        "xilinx.com:hls:bitwise_and_accel:1.0",
        "xilinx.com:hls:bitwise_not_accel:1.0",
        "xilinx.com:hls:bitwise_or_accel:1.0",
        "xilinx.com:hls:bitwise_xor_accel:1.0",
    ]

Step 2: Add the new functionality. This is where the mapping of Python variables to register maps occurs. The rows and cols mapping happens automatically because the class inherits from VitisVisionIP. Other AXI-Lite variables get mapped here; in this case, gammaval needs to be added. Changing the variable in Python doesn't cause it to be written to the register map by default, so a getter/setter is created to actually write the value. In this case, it writes the float to 0x20.

Note: Examining cv_dfx_4_pr.hwh will show the register maps. In this case, under gammacorrection_accel: rows: ADDRESS_OFFSET=16 → 0x10, cols: ADDRESS_OFFSET=24 → 0x18, gammaval: ADDRESS_OFFSET=32 → 0x20.
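
If you prefer to pull these offsets out of the HWH file programmatically rather than reading the XML by hand, something like the sketch below works. The element and attribute names (MODULE, FULLNAME, REGISTER, PROPERTY, ADDRESS_OFFSET) are assumptions based on Vivado-generated HWH files I have looked at, so verify them against your own file if nothing is printed.

import xml.etree.ElementTree as ET

def dump_offsets(hwh_path, module_name):
    """Print register names and their ADDRESS_OFFSET for one module in a .hwh file."""
    root = ET.parse(hwh_path).getroot()
    for module in root.iter("MODULE"):
        if module_name not in module.get("FULLNAME", ""):
            continue
        for reg in module.iter("REGISTER"):
            for prop in reg.iter("PROPERTY"):
                if prop.get("NAME") == "ADDRESS_OFFSET":
                    print(reg.get("NAME"), hex(int(prop.get("VALUE"))))

dump_offsets("cv_dfx_4_pr.hwh", "gammacorrection_accel")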

class gammacorrection(VitisVisionIP):
    """gammacorrection"""

    bindto = ['xilinx.com:hls:gammacorrection_accel:1.0']

    def __init__(self, description):
        print("gammacorrection: init")
        super().__init__(description=description)
        self._gammaval = 1.0

    def start(self):
        super().start()
        self.gammaval=1.0
        print("gammacorrection: start")

    @property
    def gammaval(self) -> float:
        return self._gammaval

    @gammaval.setter
    def gammaval(self, gammaval: float):
        if not isinstance(gammaval, (float, int)):
            raise ValueError("gammaval must be a number")

        self._gammaval = float(gammaval)
        self.write(0x20, _float2int(self.gammaval))
        #print("setting gammaval to " + str(self.gammaval))
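
The _float2int helper used in the setter reinterprets the bits of a 32-bit float as an unsigned integer, because the HLS core expects the raw IEEE-754 bit pattern in the AXI-Lite register rather than a truncated integer. If your libs.py does not already provide such a helper, a minimal sketch is:

import struct

def _float2int(value: float) -> int:
    """Return the IEEE-754 bit pattern of a 32-bit float as an unsigned int."""
    return struct.unpack("I", struct.pack("f", float(value)))[0]

# For example, _float2int(1.0) returns 0x3F800000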

Test it in the Jupyter Notebook

Open the pynq_composable/custom_pipeline/02_first_custom_pipeline.ipynb notebook. File → Make a Copy and save it as GammaCorrection

Changes:

#Download Composable Overlay:
ol = Overlay("/home/xilinx/jupyter_notebooks/overlay/cv_dfx_4_pr.bit")

#Let us Compose:
gamma = cpipe.gammacorrection_accel

video_pipeline = [cpipe.hdmi_source_in, gamma, cpipe.hdmi_source_out]

cpipe.compose(video_pipeline)

#Play with IP
import time
for i in range(0,1000):
    gamma.gammaval = i*0.0025
    time.sleep(0.01)
gamma.gammaval=1

Then run the Jupyter Notebook.

TODO:

  • insert new functions into the DFX region
  • Path for the overlays isn’t default
  • Make the apps work

Hi @jcollier,

Thank you for sharing this :grinning:

One small comment: the bindto of the VitisVisionIP shouldn't include the vlnv of the derived objects. So 'xilinx.com:hls:gammacorrection_accel:1.0' should only be part of the gammacorrection class.
Otherwise, there’s a risk of VitisVisionIP being assigned as driver to the gammacorrection IP.

Mario

Thanks much. I will update my instructions. I was unclear how and why I would do that, but it worked.

hello, I have a problem,

collect the overlay files

vivado -mode batch -source overlay.tcl -notrace
Can you share the "overlay.tcl"? I don't know how to write the file.

Hi @jcollier,

A great step-by-step guide. Did you ever attempt the same procedure but to insert new functions into the DFX region?
@marioruiz have you seen any posts on this subject? I can’t find any myself.

I feel like this would be a lot more beneficial for adding new filters to the pipeline; am I right in saying it would only require generating a partial bitstream rather than rebuilding the whole project?

Regards,
Cameron


Hi @cking,

Adding new reconfigurable modules to reconfigurable partitions is something that is only supported in the flow with Abstract Shell.
This is something we have not explored for composable.

If you would like to explore, you can find information here: Create a new accelerator RM — Kria SOM DFX Examples 1.0 documentation
This is something we do not support in this forum.

Mario

Hi @marioruiz,

That’s no problem, I will have to do a bit of research on the abstract shell first.

Instead I am trying @jcollier's method of replacing a static IP, but I am a bit confused at the "Build the vitis IP" section, which covers running the Makefile to build the new cpp file in src.

It’s not very clear in this example whether they are using a ZCU104 board or whether the vision_ip code is held under the ZCU104 folder and this is where you access it.

I am using the KV260; do I just need to run the Makefile that is under the KV260 folder, as I notice it contains references to the Vitis Vision IP libraries?

If so, will this trigger the full Vivado procedure and regenerate all the files again?

Thanks,
Cameron

Realised just after I posted that I have to invoke the vision_ip target in the KV260 Makefile by running:

make vision_ip

within the kv260 folder.

Will see if it works.

Yes, to modify anything on the static portion you need to kick off the whole project build regardless of the flow.
If you would like to add something on the dynamic part, you can do it manually with or without Abstract Shell.

Mario

Hi Mario,

Wait, so I can add something to the dynamic (DFX) region manually without Abstract Shell, just as jcollier has done here with IP creation?

Thanks, Cameron

@cking

You can, and it is very easy.

For PYNQ 3.0 with Vivado 2022 you can make it even easier with a Block Design Container (BDC).

ENJOY~


Hi @marioruiz

I tried running the "make vision_ip" command but it presents the following error and I'm at a bit of a loss.

Do you think this is another error presented due to the RAM of the system?

P.S. I created the 'rgb2bgr' file, taking advantage of the rgb2bgr OpenCV function in the Vitis Vision library; this is what I am trying to build from the src folder and load into Vivado.

Any help appreciated,
Thanks,
Cameron

Thanks Brian will check it out!

The error is likely because the code has an error. You can try to open the Vitis project that gets generated and see if you can spot the error.

It also seems that the top level function doesn’t match the name of the file.


Hi Mario,

Apologies for the pestering, but I am still struggling to get the cpp code building with the make vision_ip command.

I have rewritten the cpp code for the rgb2bgr vision IP. However, I'm still unsure how to know what to put in the file for the #include, #define, or #pragma directives. Is there a guide on writing the cpp code required for functions in the Vitis Vision library?

rgb2bgr.cpp (1.5 KB)

This code still presents error code 2: "no matching function to call 'rgb2bgr'".

Any advice would be appreciated,
Cameron


Hi Cameron,

The Vitis Libraries are documented here AMD Adaptive Computing Documentation Portal

You can look at the implementation details of the function here Vitis_Libraries/xf_cvt_color.hpp at main · Xilinx/Vitis_Libraries · GitHub
And detailed documentation here AMD Adaptive Computing Documentation Portal

Please check that the datatypes you are using for the input and output images are correct and match what the function expects.

Mario


@marioruiz

Should this be converted to the Support label?
I think the Learn label is more related to confirmed knowledge sharing.

ENJOY~

Hi Mario,
Thanks for the links; with their help I was able to deduce the issue and get the .cpp file synthesised, with the IP block added in Vivado, replacing another in the static configuration.

I was hoping maybe you could elaborate on exporting the bitstream after generation. Do I have to manually carry out the block editing in the GUI, followed by synthesis, implementation, and .bit generation, or can I edit the Makefile code to grab my desired .cpp (rgb2bgr) from the src folder rather than the one (rgb2hsv) in the predefined static pipeline?

Following @jcollier's steps for the GUI method, is he gathering all of the files that would make up the overlay that normally results from running the Makefile (i.e. .bit, .hwh, .dtbo, .json, and all the partials) with the code under his "Collect the overlay files and package everything" section?

Intuitively it looks like it's just the tcl file he is gathering.

Sorry for the constant questioning!

Thanks,
Cameron

Hi Cameron,

I was hoping maybe you could elaborate on exporting the bitstream after generation. Do I have to manually carry out the block editing in the GUI, followed by synthesis, implementation, and .bit generation, or can I edit the Makefile code to grab my desired .cpp (rgb2bgr) from the src folder rather than the one (rgb2hsv) in the predefined static pipeline?

This would be the easiest path, just update the tcl file to include your IP, and rerun the Makefile.

You can also do this from the GUI, but then you need to collect and rename all the necessary files manually.

Mario


Hi,

I am looking to add a FINN stitched-IP to make use of the MIPI input / DP output pipeline. Does this guide still work for this proposed idea? Or is there an easier way to include a FINN-IP into the pipeline without having to do certain steps since we already have our .bit/.hwh files from FINN? I am aiming to deploy this on a Kria KV260 on PYNQ 3.0.1

Thanks!