HLS Data Types Compatible with PYNQ Platform

**PYNQ Version**
Release 2021_11_18 14a7328
Board 2021_11_18 14a7328
GitHub - Xilinx/PYNQ: Python Productivity for ZYNQ

  • Standard Image
  • Board: Pynq-Z1

What I’m trying to do:
I have a working version of a 1024x1024 matrix multiplier that uses the m_axi and s_axilite interfaces, and I’m now trying to optimize the design. Currently I’m changing every instance of “float” to either “ap_int<16>” or “ap_fixed<32,16>”, since these “arbitrary precision” datatypes are better for HLS performance.

The problem I’m having:
Every time I change the HLS design to use “ap_int<16>” or “ap_fixed<32,16>”, I get an output matrix of all zeros.

Are these datatypes not translatable from “numpy.int16” or “numpy.float”? I noticed in other designs that “ap_axis<>” works just fine… I’m probably doing something wrong. I’ve posted my Jupyter notebook and the HLS CPP from my most recent attempt. In that attempt I tried to convert my “float” data to “int16” by multiplying all inputs by 32768, and at the output converting “int16” back to “float” by dividing by 65536. Please note that the matrix “matout” in the HLS code is ap_int<32> because the product of two ap_int<16> values can be wider than 16 bits.
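As a side note, the float→int16 scaling described above can be sketched in NumPy. The scale factor and value range here are assumptions for illustration: 2**15 = 32768 maps [-1.0, 1.0) onto the int16 range, and the recovery division must use the *same* factor; since 32768 and 65536 differ by a factor of two, dividing by 65536 on the way out would halve every recovered value.

```python
import numpy as np

# Illustrative scale factor (assumption): 2**15 maps [-1.0, 1.0) to int16.
SCALE = 2 ** 15

def to_fixed(x):
    """Quantize a float array in [-1, 1) to int16 by scaling."""
    return np.clip(np.round(x * SCALE), -32768, 32767).astype(np.int16)

def to_float(q):
    """Recover floats from int16 using the SAME scale factor."""
    return q.astype(np.float64) / SCALE

x = np.array([0.5, -0.25, 0.125])
q = to_fixed(x)
print(to_float(q))                    # round-trips back to 0.5, -0.25, 0.125
print(q.astype(np.float64) / 65536)   # wrong factor: every value is halved
```

The same consistency rule applies on the output side of the multiplier: if the inputs were scaled by 2**15 each, the int32 products carry a combined scale of 2**30 that must be divided out to recover floats.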

All in all, the only datatype I’ve gotten this to work for is float.


Jupyter Notebook:
mm.ipynb (117.1 KB)

example.cpp (3.1 KB)
example.h (252 Bytes)

Hi Nick,
I think your problem is here:
inbuff = [None] * 1048576

This rebinds the name inbuff to a brand-new Python list, so anything you do after this line no longer touches the data in the original pynq buffer.
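For anyone else hitting this, here is a minimal sketch of the rebinding issue, using a plain NumPy array as a stand-in for the buffer returned by pynq.allocate (the size of 8 is arbitrary, for brevity):

```python
import numpy as np

# Stand-in for a buffer from pynq.allocate() (hypothetical small size).
inbuff = np.zeros(8, dtype=np.int16)
original = inbuff          # second reference, like the one the DMA holds

# BUG: this rebinds the NAME to a new Python list; the original
# buffer (the memory the hardware actually reads) is untouched.
inbuff = [None] * 8
inbuff[:] = range(8)
print(original)            # still all zeros

# FIX: write into the existing buffer in place instead of rebinding.
inbuff = original
inbuff[:] = np.arange(8, dtype=np.int16)
print(original)            # now holds 0..7, the same memory the DMA sees
```

The key distinction is `inbuff = ...` (rebinding the name) versus `inbuff[:] = ...` (in-place assignment into the existing allocation), which is why the hardware kept seeing zeros.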


Hey Cathal,

I apologize for my delayed response. That was causing it.

Thank you for your help!
