How to use the ap_fixed data type to communicate with an IP made by Vivado HLS?

We don’t have any specific support for ap_fixed within PYNQ, but it’s not too difficult to set things up and do the conversion yourself. The native type for the array should be a numpy int or uint whose width is the power of 2 equal to or larger than the total bits of the fixed-point type, e.g. a 24-bit fixed gets packed into 32-bit ints.
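
If you want to pick the container type programmatically, a small helper along these lines would do (container_dtype is just a name I've made up for this sketch):

import numpy as np

def container_dtype(total_bits, signed=True):
    # pick the smallest power-of-2 numpy integer wide enough to hold the fixed-point bits
    for width in (8, 16, 32, 64):
        if total_bits <= width:
            return np.dtype(('i' if signed else 'u') + str(width // 8))
    raise ValueError('ap_fixed wider than 64 bits needs manual packing')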

Once that is in place you can create a helper function to convert an array of floats to your integer type by multiplying by the correct shift and assigning to an int array. The extra sign bits won’t matter in this case.

import numpy as np

# 32-bit container for 24-bit fixed-point values with 8 fractional bits
fixed_ar = np.ndarray((1024,), 'i4')
float_ar = np.arange(-512, 512, dtype='f4')
# multiplying by 2**8 = 256 shifts the floats into fixed-point; the assignment truncates to int
fixed_ar[:] = float_ar * 256
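
If you want this as a reusable helper, a sketch could look like the following (to_fixed is my own name for it, and it simply truncates rather than rounds):

def to_fixed(float_ar, frac_bits, dtype='i4'):
    # scale by 2**frac_bits, then let the assignment truncate into the integer container
    fixed_ar = np.ndarray(float_ar.shape, dtype)
    fixed_ar[:] = float_ar * (1 << frac_bits)
    return fixed_ar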

Converting back is more challenging, as the array needs to be sign-extended first. If we mask off the top 8 bits of fixed_ar to simulate a returned list of 24-bit ap_fixed values with 8 fractional bits, we can see that dividing by 256 gives the wrong result for negative numbers:

return_ar = fixed_ar & 0xFFFFFF
return_ar[0] / 256
> 65024.0

With some view casting and np.where we can construct a function that will do the down-conversion for us:

def convert(a, total_bits, frac_bits):
    # values at or above 'condition' have the fixed-point sign bit set
    condition = 1 << (total_bits - 1)
    # mask of 1s covering the bits above the fixed-point width, used for sign extension
    mask = (~((1 << total_bits) - 1)) & 0xFFFFFFFF
    return np.where(a < condition, a, (a.view('u4') | mask).view('i4')) / (1 << frac_bits)

convert(return_ar, 24, 8)
> array([-512., -511., -510., ...,  509.,  510.,  511.])
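
As a quick sanity check, the round trip recovers the original floats:

np.array_equal(convert(return_ar, 24, 8), float_ar)
> True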

I’m not offering this up as authoritative, just as an example of a way to do this type of conversion. Hope this gives some ideas.

Quick edit: using some bit-manipulation magic we can get rid of the np.where to improve performance (maybe?)

def convert(a, total_bits, frac_bits):
    # mask1 picks out the sign bit, mask2 the remaining magnitude bits;
    # subtracting the sign-bit term gives the two's-complement value directly
    mask1 = 1 << (total_bits - 1)
    mask2 = mask1 - 1
    return ((a & mask2) - (a & mask1)) / (1 << frac_bits)
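
Calling it on the same return_ar gives the same result as before:

convert(return_ar, 24, 8)
> array([-512., -511., -510., ...,  509.,  510.,  511.])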

For more details on how this works see https://stackoverflow.com/questions/32030412/twos-complement-sign-extension-python

Peter