PYNQ: PYTHON PRODUCTIVITY

Np.copyto() Kills Python Kernel

Hello!

I’m testing a simple overlay built with the Vitis HLS to Vivado flow. I use PYNQ’s allocate() to allocate two (3,3) NumPy arrays, then define two (3,3) NumPy arrays and use np.copyto() to move them into the allocated memory.

 from pynq import allocate
 import numpy as np

 in1 = allocate(shape=(3,3), dtype='int8')
 in2 = allocate(shape=(3,3), dtype='int8')

 tarr1 = np.array([
     [1,2,3],[2,3,4],[3,4,5]], np.int8)
 tarr2 = np.array([
     [2,2,2],[3,3,3],[4,4,4]], np.int8)

 np.copyto(in1, tarr1)
 np.copyto(in2, tarr2)

However, the np.copyto() call kills the kernel with the following message.

“The kernel appears to have died. It will restart automatically.”

Iterating through the arrays and assigning each value individually works fine, though, and the overlay outputs the correct values. Like this:

for i in range(3):
    for j in range(3):
        in1[i, j] = tarr1[i, j]

for i in range(3):
    for j in range(3):
        in2[i, j] = tarr2[i, j]

The np.copyto() call was working for me when the overlay used 1D arrays, so I wonder if it has to do with resource usage? Any help would be appreciated. Thanks in advance!

My Best,
David

Hey,

Try allocating the buffer with an even first dimension: shape=(4,3)
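A sketch of what that padded allocation looks like (plain NumPy stands in for pynq.allocate here so it can be tried off-board; the slice view keeps np.copyto() writing only the valid 3×3 region):

```python
import numpy as np

# On the board this would be:
#   buf = allocate(shape=(4, 3), dtype='int8')
buf = np.zeros((4, 3), dtype='int8')

tarr1 = np.array([[1, 2, 3], [2, 3, 4], [3, 4, 5]], np.int8)

# Copy into the first three rows; the fourth row is padding only
# and is left untouched.
np.copyto(buf[:3, :], tarr1)
```

Pass the same view (buf[:3, :]) to anything that expects the (3,3) data, and hand the full buffer to the DMA/overlay.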

We recommend using cacheable buffers when using NumPy functions, especially if you happen to be using a Zynq UltraScale+ part. You can pass cacheable=1 to pynq.allocate. Note that this means .flush() and .invalidate() will need to be called on the buffer as appropriate, but you can be much freer in the types and shapes of the arrays you allocate and in how you manipulate them.
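The cacheable workflow looks roughly like this. The pynq.allocate cacheable flag and the buffer's .flush()/.invalidate() methods are the pieces mentioned above; the try/except fallback to a plain NumPy array is just so the copy semantics can be exercised without a board:

```python
import numpy as np

try:
    from pynq import allocate
    # Cacheable buffer: NumPy ops like np.copyto() are safe on it.
    in1 = allocate(shape=(3, 3), dtype='int8', cacheable=1)
except ImportError:
    # Off-board stand-in so the snippet runs anywhere.
    in1 = np.zeros((3, 3), dtype='int8')

tarr1 = np.array([[1, 2, 3], [2, 3, 4], [3, 4, 5]], np.int8)
np.copyto(in1, tarr1)

if hasattr(in1, 'flush'):
    in1.flush()       # push CPU-cache contents to DRAM before the IP reads
    # ... run the overlay / DMA transfer here ...
    in1.invalidate()  # drop stale cache lines before the CPU reads results
```

The flush-before-the-IP-reads, invalidate-before-the-CPU-reads ordering is the part that matters; getting it wrong shows up as stale data rather than a crash.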

Peter