Fastest way to deploy DNN model on RFSoC Academia

I want to know: what is currently the optimal way to deploy a DNN model on the RFSoC Academia board?

I know you have made progress since this post, as you have started asking other questions on the forum about the DPU.

Replying in case anyone else is reading this post.

It is difficult to say what is optimal. There are many ways to deploy a DNN model on an FPGA board.

- Use the Vitis AI DPU.
- Use IP from one of several open-source or third-party providers (internet search for "open source FPGA DNN").
- Use FINN, a low-precision research project from Xilinx Research. Examples are available for some PYNQ-enabled boards, e.g. the PYNQ-Z2.
- Build your own custom DNN accelerator.

What is easiest or optimal depends on your board and what is supported out of the box, your level of experience with FPGA design, and your requirements.

The Vitis AI DPU is the Xilinx solution, which I saw you asking about in another post. I would suggest this is a good option for anyone who wants to get started.
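As a rough sketch of what DPU inference code tends to look like from PYNQ, the snippet below follows the usual DPU-PYNQ pattern (load an overlay, load a compiled `.xmodel`, run a job through the VART runner). The file names `dpu.bit` and `model.xmodel` and the tensor shapes are placeholders, not tested against a board; the hardware-dependent import is kept inside the function so the softmax helper can be used anywhere.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax: turn raw DPU output logits into probabilities.
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def classify(image):
    # Run one preprocessed image through the DPU and return class probabilities.
    # This only works on a board with the pynq_dpu package installed;
    # "dpu.bit" and "model.xmodel" are placeholder file names.
    from pynq_dpu import DpuOverlay  # imported here so the sketch loads off-board
    overlay = DpuOverlay("dpu.bit")
    overlay.load_model("model.xmodel")
    dpu = overlay.runner

    # Allocate input/output buffers matching the model's tensor shapes.
    input_tensor = dpu.get_input_tensors()[0]
    output_tensor = dpu.get_output_tensors()[0]
    in_buf = [np.empty(tuple(input_tensor.dims), dtype=np.float32)]
    out_buf = [np.empty(tuple(output_tensor.dims), dtype=np.float32)]

    # Copy the image in, launch the job, and wait for it to finish.
    in_buf[0][...] = image
    job_id = dpu.execute_async(in_buf, out_buf)
    dpu.wait(job_id)
    return softmax(out_buf[0].reshape(-1))
```

The compiled `.xmodel` comes out of the Vitis AI quantizer/compiler flow for your board's DPU configuration, so the same Python code can be reused across models once the overlay is set up.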