Interrupts not registering correctly

I seem to have a problem with using interrupts correctly.
I want to use a DMA for data transfer that writes to several buffers.
Once a buffer has been written, I generate an interrupt so that I can save the data from that buffer before it is overwritten with new data.
As already mentioned in the forum, I connected the level-type interrupts to the Zynq PS via an interrupt controller.

When I load the overlay and start a transfer with 2 buffers of 3 MB each, expecting my 12 MB of data in total, the first run shows 4 interrupts on the fabric interrupt line in /proc/interrupts.
However, if I restart the same method, I only get 3 interrupts.
And sometimes, if I reload the overlay with, say, 8 buffers instead of 2, the kernel/PYNQ dies and I have to do a system reset.

Also, my async functions still trigger on my defined interrupts, but somehow those events are not shown in the fabric interrupt count, and I lose data.

Does someone have an idea where the problem could lie? Has someone experienced something similar? Do I have to take care of anything special when defining interrupt signals?

Any help would be appreciated :slight_smile:

This is my async code:

async def wait_for_interrupt(self, interrupt):
    # one file per buffer; the context manager closes it when done
    with open(self.filename + "buffer" + str(interrupt), 'wb') as f:
        for i in range(2):
            await self.interrupts[interrupt].wait()
            with np.printoptions(threshold=np.inf):
                ...  # body truncated in the paste: here I save the buffer data to f

async def async_transfer(self):
    await asyncio.gather(*(self.wait_for_interrupt(i)
                           for i in range(self.buffercount)))
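These coroutines only make progress inside a running asyncio event loop (PYNQ's `Interrupt.wait()` is awaitable). As a minimal, self-contained sketch of how they are driven, with a placeholder coroutine standing in for the real `self.async_transfer()`:

```python
import asyncio

async def async_transfer():
    # placeholder for the real self.async_transfer();
    # it just yields once and returns
    await asyncio.sleep(0)
    return "all buffers handled"

# run_until_complete blocks until the transfer coroutine finishes.
# (PYNQ examples usually obtain the loop via asyncio.get_event_loop();
# a fresh loop is created explicitly here to keep the sketch portable.)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
    result = loop.run_until_complete(async_transfer())
finally:
    loop.close()
print(result)  # all buffers handled
```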

I now noticed that the IRQ signal from the interrupt controller stays high for a very long time. Is this the intended behaviour of IRQ? I thought it would deassert after an interrupt was awaited. If someone could shed some light on this, that would be great! :slight_smile:
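My current understanding (please correct me if this is wrong): a level-sensitive interrupt stays asserted until the source, e.g. the DMA's interrupt status register, is cleared, so waiting on it again completes immediately. A software-only analogy using an `asyncio.Event` as the "IRQ line":

```python
import asyncio

async def level_irq_demo():
    irq = asyncio.Event()  # stands in for the level-sensitive IRQ line
    irq.set()              # the peripheral asserts the interrupt

    await irq.wait()       # completes: the line is high
    first_done = True
    # Nothing cleared the source, so the "line" is still high ...
    await irq.wait()       # ... and this second wait also completes immediately
    still_high = irq.is_set()

    # Analogous to clearing the interrupt at the source
    # (e.g. a write-1-to-clear interrupt status register in the IP)
    irq.clear()
    return first_done, still_high, irq.is_set()

result = asyncio.run(level_irq_demo())
print(result)  # (True, True, False)
```

If this analogy holds, the line staying high would also explain why restarted transfers see a different interrupt count than the first run.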

OK, it seems that the interrupts themselves are working correctly. I still don’t know why I get fewer interrupts when restarting a data transfer. Will post if I find something.