After quantizing and compiling the trained model (a modified U-Net) that contains a sigmoid activation at the output layer, I tried to load the model onto the DPU for testing on a ZCU104.
The notebook returns a fatal error: "The kernel appears to have died. It will restart automatically." I am attaching the Jupyter error log and the structure of the model I compiled for running on the FPGA.
Did you generate this using the v1.3 Docker or v1.4? It seems the problem is the hard-sigmoid-fix op; this was added to quantization in the Vitis AI v1.4 release, and DPU-PYNQ for the moment supports VART v1.3.2.
Thank you for the reply. I used v1.4, the current GitHub branch. I think the issue here is that the last layer uses sigmoid as the activation for the Conv2D layer, while the documentation states that only ReLU, LeakyReLU, and ReLU6 activations are supported for DPUCZDX8G_ISA0_B4096_MAX_BG2 (ZCU102/104).
Is there a workaround for this compiler constraint, or can you suggest a way to keep the CNN structure the same and still carry out the computation?
For a clearer picture, I am attaching a block-level design of the CNN model.
Well, since it's the last layer, you could probably save the quantized model with the sigmoid activation excluded, and then apply the sigmoid in software to the DPU output. Many DPU examples do this for the final softmax layer, for example.
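As a rough sketch of what that could look like in a DPU-PYNQ notebook (following the pattern of the standard DPU-PYNQ examples; the file names "dpu.bit" and "unet_noact.xmodel" are placeholders for your own overlay and for the model recompiled without the final sigmoid):

```python
import numpy as np
from pynq_dpu import DpuOverlay

# Placeholder names: your own DPU overlay and the xmodel compiled
# WITHOUT the final sigmoid layer.
overlay = DpuOverlay("dpu.bit")
overlay.load_model("unet_noact.xmodel")
dpu = overlay.runner

in_tensor = dpu.get_input_tensors()[0]
out_tensor = dpu.get_output_tensors()[0]

input_data = [np.empty(tuple(in_tensor.dims), dtype=np.float32, order="C")]
output_data = [np.empty(tuple(out_tensor.dims), dtype=np.float32, order="C")]

# Fill input_data[0] with your preprocessed image here, then run the
# truncated network on the DPU.
job_id = dpu.execute_async(input_data, output_data)
dpu.wait(job_id)

# Apply the sigmoid in software on the ARM cores instead of on the DPU.
logits = output_data[0]
mask = 1.0 / (1.0 + np.exp(-logits))
```

Since sigmoid is monotonic, if you only need a binary segmentation mask you can even skip it entirely and threshold the raw logits at 0 instead of thresholding the sigmoid output at 0.5.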