How to use ap_fixed data type to communicate with the ip made by the vivado hls?

pynq package version == 2.5

pynq-z2

Thanks for your help.


Hi @dongzw,

I am sure you noticed I had to close the issue on GitHub due to our internal policy. Glad to see you also asked your question here.

I would ask you to provide a bit more context so we can try to come up with a proper answer for you.

What is the width of your ap_fixed? How do you interact with your HLS core?

Would you be willing to share the HLS code and the Python code you are trying to use? We don't really need the entirety of the code; perhaps the HLS core signature and a bit of pseudocode (or at the very least a high-level explanation).

In the meantime, I am also tagging @PeterOgden as he's probably the guy that can help you out here.

```
#include "ap_fixed.h"

typedef ap_fixed<8, 2> data_t;

#define DIM_1 3
#define DIM_2 4
#define DIM_3 5

// Matrix multiply: out = A x B
void top(data_t A[DIM_1][DIM_2], data_t B[DIM_2][DIM_3], data_t out[DIM_1][DIM_3])
{
#pragma HLS INTERFACE s_axilite port=return
#pragma HLS INTERFACE m_axi depth=1024 port=A offset=slave
#pragma HLS INTERFACE m_axi depth=1024 port=B offset=slave
#pragma HLS INTERFACE m_axi depth=1024 port=out offset=slave
    for (int i = 0; i < DIM_1; ++i)
    {
        for (int j = 0; j < DIM_3; ++j)
        {
#pragma HLS PIPELINE
            data_t tmp = 0;
            for (int t = 0; t < DIM_2; ++t)
            {
                tmp += A[i][t] * B[t][j];
            }
            out[i][j] = tmp;
        }
    }
}
```

The code above, which computes the product of two matrices, is my test of how to use the ap_fixed data type. I want to create ap_fixed<N, M> data, where N and M could be any numbers in my work. I tried to use code like the snippet below to allocate memory for communicating with the PL:

```
a = pynq.allocate(shape=(50,), dtype='f4')
```

However, the dtype argument does not support the ap_fixed type, and I didn't find an ap_fixed solution in the PYNQ Python package.

I can't find the right way to use float in PYNQ Linux on one side and ap_fixed<N, M> in the HLS code on the other.

Thanks for your help.

Since your ap_fixed uses 8 bits, with a bit of hackery it can be worked out (I am not aware of an explicit way of managing this, but I may be wrong). What I mean is that there's definitely a way to transfer the data to the accelerator: as long as the single element is 8 bits wide in the dtype, it should work.

The true "problem", I guess, is using this data properly once in Python, by which I mean interpreting it as it is supposed to be interpreted.
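To illustrate what that interpretation could look like for the ap_fixed<8, 2> in the question (2 integer bits, so 6 fractional bits), here is a minimal numpy-only sketch; `to_fixed8` and `from_fixed8` are hypothetical helper names, not PYNQ APIs:

```python
import numpy as np

FRAC_BITS = 6  # ap_fixed<8, 2>: 8 total bits, 2 integer bits -> 6 fractional bits

def to_fixed8(values):
    # Quantize floats to the raw 8-bit two's-complement pattern the core sees.
    return np.round(np.asarray(values) * (1 << FRAC_BITS)).astype(np.int8)

def from_fixed8(raw):
    # Interpret raw int8 values back as ap_fixed<8, 2> reals.
    return raw.astype(np.float64) / (1 << FRAC_BITS)

vals = np.array([0.5, -1.25, 1.984375])
raw = to_fixed8(vals)       # int8 bit patterns: [32, -80, 127]
back = from_fixed8(raw)     # recovers [0.5, -1.25, 1.984375] exactly
```

Because int8 is already signed, no manual sign extension is needed for this 8-bit case; that only becomes an issue for widths that don't match a native integer type.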

But again, I would wait to hear @PeterOgden's opinion as well.

We don't have any specific support for ap_fixed within PYNQ, but it's not too difficult to set things up and do the conversion yourself. The native type for the array should be a numpy `int` or `uint` whose power-of-2 width is the same as or bigger than the total bits of the fixed-point type, e.g. a 24-bit fixed gets packed into 32-bit ints.

Once that is in place you can create a helper function to convert an array of floats to your integer type by multiplying by the correct shift and assigning to an int array. The extra sign bits won't matter in this case:

```
import numpy as np

fixed_ar = np.ndarray((1024,), 'i4')          # packed fixed-point storage
float_ar = np.arange(-512, 512, dtype='f4')
fixed_ar[:] = float_ar * 256                  # shift left by 8 fractional bits
```

Converting back is more challenging, as the array needs to be sign-extended first. If we mask off the top 8 bits of `fixed_ar` to simulate a returned list of 24-bit `ap_fixed`s with 8 fractional bits, we can see that dividing by 256 gives the wrong result for negative numbers:

```
return_ar = fixed_ar & 0xFFFFFF   # keep only the low 24 bits
return_ar[0] / 256
> 65024.0
```

With some view casting and `np.where` we can construct a function that will do the down-conversion for us:

```
def convert(a, total_bits, frac_bits):
    condition = 1 << (total_bits - 1)
    mask = (~((1 << total_bits) - 1)) & 0xFFFFFFFF
    return np.where(a < condition, a, (a.view('u4') | mask).view('i4')) / (1 << frac_bits)

convert(return_ar, 24, 8)
> array([-512., -511., -510., ..., 509., 510., 511.])
```

I'm not offering this up as authoritative, just as an example of one way to do this type of conversion. Hope it gives you some ideas.

Quick edit: using some bit-manipulation magic we can get rid of the `np.where` to improve performance (maybe?):

```
def convert(a, total_bits, frac_bits):
    mask1 = 1 << (total_bits - 1)   # sign bit of the narrow type
    mask2 = mask1 - 1               # magnitude bits
    return ((a & mask2) - (a & mask1)) / (1 << frac_bits)
```

For more details on how this works see https://stackoverflow.com/questions/32030412/twos-complement-sign-extension-python
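As a quick self-contained sanity check of that sign-extension trick, the following sketch packs floats into 24-bit fixed point, masks them down to simulate raw data coming back from the PL, and round-trips them through `convert` (plain numpy; values mirror the earlier example):

```python
import numpy as np

def convert(a, total_bits, frac_bits):
    # Sign-extend total_bits-wide two's-complement values held in a wider
    # int array, then scale down by the number of fractional bits.
    mask1 = 1 << (total_bits - 1)   # sign bit
    mask2 = mask1 - 1               # magnitude bits
    return ((a & mask2) - (a & mask1)) / (1 << frac_bits)

float_ar = np.arange(-512, 512, dtype='f4')
fixed_ar = np.ndarray((1024,), 'i4')
fixed_ar[:] = float_ar * 256          # pack as 24-bit fixed, 8 fractional bits
return_ar = fixed_ar & 0xFFFFFF       # simulate raw 24-bit values from the PL
result = convert(return_ar, 24, 8)    # round-trips back to float_ar exactly
```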

Peter