Sobel Filter IP

Hi, I created a custom IP for the Ultra96 board using Vivado HLS, starting from the Sobel filter algorithm example provided here. The creation of the IP was successful; the resulting block design is attached below:

At the moment the two images I get are wrong.
Can anyone check if the IP connections are right?
Any help would be greatly appreciated!

Thank you!

Connections look OK.
Did you run co-sim in HLS?

Cathal

Hi Cathal,
No, I couldn’t even simulate the Sobel filter IP standalone. I am working in a Windows 10 environment and the Xilinx people advised me to use Linux,
so I’m installing Ubuntu to see if I get better results.
Many thanks, Franco

I’m not sure why you were told that. It shouldn’t really matter if you are using Windows/Linux to develop your IP.
I’d suggest the problem is probably with your code, and not your OS.

Cathal

Hi Cathal,
you were right… I got no changes by using Ubuntu. I get two images (X, Y)


that, as you can see, are not the right images.

Could it be that your HLS IP does not implement the correct algorithm?

I got the IP from the L1 vision Vitis library.

Are you sure the configuration of that IP is correct? It might be the pixel width, the grayscale setting, or anything else…

I am not very skilled on this topic… I left the parameters at their default values… could you please check them?
xf_sobel_config.h (2.4 KB) xf_config_params.h (903 Bytes)

This is the main file:
xf_sobel_accel.cpp (2.6 KB)

Can you share or paste the notebook content?

from pynq import Overlay
overlay = Overlay("/home/xilinx/pynq/overlays/sobelfilter/Sobelfilter2.bit")

Load sobelfilter IP

sobel_ip = overlay.sobel_accel_0.s_axi_control
sobel_ip_r = overlay.sobel_accel_0.s_axi_control_r

from cffi import FFI
ffi = FFI()

from pynq import Xlnk
import numpy as np
from PIL import Image

IMAGE_PATH = '/home/xilinx/test_1080p.bmp'

prepare input/output image

COLS = 1920  # 640
ROWS = 1080  # 480
CHANNELS = 1

load original image + grayscale conversion

original_image = Image.open(IMAGE_PATH).convert('L')
original_image = original_image.resize((COLS,ROWS), Image.ANTIALIAS)
original_image.load()

display original image

display(original_image, 'input image')

to numpy array

gray_input_array = np.array(original_image)
newgraynp = gray_input_array.reshape(gray_input_array.shape[0],gray_input_array.shape[1], CHANNELS)

allocate memory buffer

xlnk = Xlnk()
image_buffer = xlnk.cma_array(shape=(ROWS,COLS,CHANNELS), dtype=np.uint8, cacheable=1)
return_buffer1 = xlnk.cma_array(shape=(ROWS,COLS,CHANNELS), dtype=np.uint8, cacheable=1)
return_buffer2 = xlnk.cma_array(shape=(ROWS,COLS,CHANNELS), dtype=np.uint8, cacheable=1)

copy input image to memory buffer

image_buffer[0:ROWS * COLS * CHANNELS] = newgraynp
return_buffer1[0:ROWS * COLS * CHANNELS] = 0
return_buffer2[0:ROWS * COLS * CHANNELS] = 0

input/output pointers

image_pointer = ffi.cast("uint8_t *", ffi.from_buffer(image_buffer))  # image_buffer.ctypes.data
return_pointer1 = ffi.cast("uint8_t *", ffi.from_buffer(return_buffer1))  # return_buffer1.ctypes.data
return_pointer2 = ffi.cast("uint8_t *", ffi.from_buffer(return_buffer2))  # return_buffer2.ctypes.data

print('Pointer size', ffi.sizeof(image_pointer), 'bytes')

utility functions

import pynq

CONTROL_ADDR = 0x00  # HLS block-level control register (ap_start/ap_done/ap_idle/ap_ready)

def start(ip):
    data = ip.read(CONTROL_ADDR) & 0x80  # keep the auto_restart bit
    ip.write(CONTROL_ADDR, data | 0x01)  # set ap_start

def enable_auto_restart(ip):
    ip.write(CONTROL_ADDR, 0x80)

def disable_auto_restart(ip):
    ip.write(CONTROL_ADDR, 0x0)

def is_done(ip):
    data = ip.read(CONTROL_ADDR)
    return (data >> 1) & 0x1  # ap_done is bit 1

def is_idle(ip):
    data = ip.read(CONTROL_ADDR)
    return (data >> 2) & 0x1  # ap_idle is bit 2

def is_ready(ip):
    data = ip.read(CONTROL_ADDR)
    return (data >> 3) & 0x1  # ap_ready is bit 3

def set_img1(ip, image_buffer, input_image: bool):
    addr = 0x10 if input_image else 0x1c
    print('Image addr', hex(addr))
    print('writing', hex(image_buffer.physical_address))
    ip.write(addr, image_buffer.physical_address)
    ip.write(addr + 4, 0x0)

def set_img2(ip, image_buffer, input_image: bool):
    addr = 0x10 if input_image else 0x28
    print('Image addr', hex(addr))
    print('writing', hex(image_buffer.physical_address))
    ip.write(addr, image_buffer.physical_address)
    ip.write(addr + 4, 0x0)

def get_img(ip, input_image: bool):
    # input/output image reg addr
    addr = 0x10 if input_image else 0x1c
    print('Image addr', addr)
    data_0 = ip.read(addr)
    #data_1 = ip.read(addr + 4) << 32
    data_1 = 0
    data = int('{:32b}'.format(data_0 + data_1), 2)
    print('read ', hex(data))
    return data

rows, cols, threshold

def set_params(ip, rows, cols):
    ip.write(0x10, rows)
    ip.write(0x18, cols)

read registers

def read_reg(ip):
    reg_0 = ip.read(0x0)
    print('reg0', reg_0)

def read_reg_r(ip):
    reg_1c = ip.read(0x1c)
    print('reg1c', reg_1c)
    reg_10 = ip.read(0x10)
    print('reg10', reg_10)
    reg_28 = ip.read(0x28)
    print('reg28', reg_28)

#enable_auto_restart(fast_ip)

write values in ip registers

set_params(sobel_ip, ROWS, COLS)
set_img1(sobel_ip_r, image_buffer, input_image=True)
set_img1(sobel_ip_r, return_buffer1, input_image=False)
set_img2(sobel_ip_r, return_buffer2, input_image=False)
read_reg(sobel_ip)
read_reg_r(sobel_ip_r)

display original image and result

#display(original_image, 'input image')
#print(np.unique(return_buffer2))
display(Image.fromarray(np.array(return_buffer1).squeeze()), 'output image')
display(Image.fromarray(np.array(return_buffer2).squeeze()), 'output image')


Here’s the procedure.

Using Ubuntu 18.04, Vitis 2020.1

STEP 1 - Install CMAKE

sudo apt-get install cmake

STEP 2 - Install Vitis Vision lib

git clone https://github.com/Xilinx/Vitis_Libraries.git

STEP 3 - Install OpenCV

mkdir ~/opencv_build && cd ~/opencv_build
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git

#wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/3.4.4.zip
#wget -O opencv.zip https://github.com/opencv/opencv/archive/3.4.4.zip

cd ~/opencv_build/opencv
mkdir build && cd build

cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_CUDA=OFF -D INSTALL_PYTHON_EXAMPLES=ON -D OPENCV_EXTRA_MODULES_PATH=~/opencv_build/opencv_contrib/modules -D OPENCV_ENABLE_NONFREE=ON -D BUILD_EXAMPLES=ON ..

make -j4
sudo make install
sudo ldconfig

pkg-config --modversion opencv
ls /usr/local/python/cv2/python-3.6
cd /usr/local/python/cv2/python-3.6
sudo mv cv2.cpython-36m-x86_64-linux-gnu.so cv2.so
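
To verify the bindings are visible to Python after this step (just a quick sanity check, assuming the machine’s default python3):

import cv2
print(cv2.__version__)  # prints the version of the OpenCV build you just installed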

STEP 4 - Compile Vitis IP core

Change directory to vision/L1/examples/sobelfilter and modify the tcl script as follows:

#source settings.tcl

set PROJ "threshold.prj"
set SOLN "sol1"

set XF_PROJ_ROOT "/home/user/Documents/Vitis_Libraries/vision/"
set OPENCV_INCLUDE "/usr/local/include/opencv2"
set OPENCV_LIB "/usr/local/lib"
set XPART "xczu9eg-ffvb1156-2-i"
set CSIM "1"
set CSYNTH "1"
set COSIM "0"
set VIVADO_SYN "0"
set VIVADO_IMPL "0"

Then issue:

vivado_hls -f script.tcl

You should get an AXI Sobel core under the sol1 directory.

Recreate the design with the attached tcl file in Vivado.

design_1_bd.tcl (52.5 KB)

STEP 5 - Test with notebook on PYNQ 2.6

Then test your notebook against this design.
The design passes CSIM and COSIM so it should work. I have not tested the file you copied above since formatting is incorrect.

Also remember to change the width and height dimensions in the config .h file.
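
Once it runs, you can also sanity-check the hardware output against a plain OpenCV Sobel inside the notebook. A minimal sketch, assuming the gray_input_array, return_buffer1 and return_buffer2 names from the notebook above, and assuming buffer1 holds the X gradient and buffer2 the Y gradient:

import cv2
import numpy as np

# software reference: 3x3 Sobel derivatives on the same grayscale input
ref_x = cv2.Sobel(gray_input_array, cv2.CV_8U, 1, 0, ksize=3)
ref_y = cv2.Sobel(gray_input_array, cv2.CV_8U, 0, 1, ksize=3)

# hardware results, squeezed back to 2-D
hw_x = np.array(return_buffer1).squeeze()
hw_y = np.array(return_buffer2).squeeze()

# a correct IP should be close to the reference (border handling may differ slightly)
print('max abs diff X:', np.abs(ref_x.astype(int) - hw_x.astype(int)).max())
print('max abs diff Y:', np.abs(ref_y.astype(int) - hw_y.astype(int)).max())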


Hi Dimiter,
Thank you a lot for your detailed suggestions, but I need some clarifications.
STEP 3
I installed Python 3.8 but I can’t find it in the /usr/local path. I found something in
~/vivado/opencv/build/python/cv2/python-3.8 with the file
cv2.cp37-win_amd64.pyd inside.
STEP 4
do you mean to modify run_hls.tcl?

I modified it as follows:
°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
#source settings.tcl

set PROJ "sobelfilter.prj"
set SOLN "sol1"

set XF_PROJ_ROOT "/home/franco/vivado/Vitis_Libraries/vision/"
set OPENCV_INCLUDE "/usr/local/include/opencv4/opencv2"
set OPENCV_LIB "/usr/local/lib"
set XPART "xczu3eg-sbva484-1-e"
set CSIM "1"
set CSYNTH "1"
set COSIM "0"
set VIVADO_SYN "0"
set VIVADO_IMPL "0"

if {![info exists CLKP]} {
    set CLKP 3.3
}

open_project -reset $PROJ

add_files "${XF_PROJ_ROOT}/L1/examples/sobelfilter/xf_sobel_accel.cpp" -cflags "-I${XF_PROJ_ROOT}/L1/include -I ${XF_PROJ_ROOT}/L1/examples/sobelfilter/build -I ./ -D__SDSVHLS__ -std=c++0x" -csimflags "-I${XF_PROJ_ROOT}/L1/include -I ${XF_PROJ_ROOT}/L1/examples/sobelfilter/build -I ./ -D__SDSVHLS__ -std=c++0x"
add_files -tb "${XF_PROJ_ROOT}/L1/examples/sobelfilter/xf_sobel_tb.cpp" -cflags "-I${OPENCV_INCLUDE} -I${XF_PROJ_ROOT}/L1/include -I ${XF_PROJ_ROOT}/L1/examples/sobelfilter/build -I ./ -D__SDSVHLS__ -std=c++0x" -csimflags "-I${XF_PROJ_ROOT}/L1/include -I ${XF_PROJ_ROOT}/L1/examples/sobelfilter/build -I ./ -D__SDSVHLS__ -std=c++0x"
set_top sobel_accel

open_solution -reset $SOLN

set_part $XPART
create_clock -period $CLKP

if {$CSIM == 1} {
    csim_design -ldflags "-L ${OPENCV_LIB} -lopencv_imgcodecs -lopencv_imgproc -lopencv_core -lopencv_highgui -lopencv_flann -lopencv_features2d" -argv " ${XF_PROJ_ROOT}/data/128x128.png "
}

if {$CSYNTH == 1} {
    csynth_design
}

if {$COSIM == 1} {
    cosim_design -ldflags "-L ${OPENCV_LIB} -lopencv_imgcodecs -lopencv_imgproc -lopencv_core -lopencv_highgui -lopencv_flann -lopencv_features2d" -argv " ${XF_PROJ_ROOT}/data/128x128.png "
}

if {$VIVADO_SYN == 1} {
    export_design -flow syn -rtl verilog
}

if {$VIVADO_IMPL == 1} {
    export_design -flow impl -rtl verilog
}

exit

°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°
I get the error:

No /home/franco/opencv_build/Vitis_Libraries/vision/L1/examples/sobelfilter/sobelfilter.prj/sol1/sol1.aps file found.

Another oddity is that I also get the error:
ERROR: [HLS 200-1023] Part ‘“xczu3eg-sbva484-1-e”’ is not installed.
but I am sure that it exists, because if I create a project with the Vivado HLS GUI the part appears in the menu.

Hi,

The Vitis libraries should not be inside the opencv directory but under ~/Documents or somewhere else. Otherwise, update the paths.

So you don’t need to change the part, since any UltraScale+ part should be compatible.

I don’t follow why you would install a new Python version; I am using the one that came with PYNQ 2.5.1 or 2.6. OpenCV 3.4 works fine with it.

I see you are giving the path to OpenCV v4, but Vitis supports only v3.4, so it’s not picking up the path correctly.
The generated IP passes CSIM and COSIM, but you may need to update the notebook. Some lines above don’t make sense.
Maybe you can post an updated notebook.
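
For what it’s worth, this is roughly the run sequence I would expect in the notebook, reusing the helper functions you already defined (set_params, set_img1, set_img2, start, is_done). It assumes the block-level control register lives on s_axi_control, so treat it as a sketch rather than a tested cell:

# write rows/cols to the control interface
set_params(sobel_ip, ROWS, COLS)

# write the input and the two output buffer physical addresses to the second interface
set_img1(sobel_ip_r, image_buffer, input_image=True)
set_img1(sobel_ip_r, return_buffer1, input_image=False)
set_img2(sobel_ip_r, return_buffer2, input_image=False)

# kick off the kernel and wait for ap_done before reading the output buffers
start(sobel_ip)
while not is_done(sobel_ip):
    pass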