I am running the same program with the same overlay on the zcu104 board (in both pynq 2.5.1 and pynq 2.6).
The system has a video mixer IP that takes the background picture from an external video stream. It also has an overlay video-stream input that comes from the PS via a VDMA. The video mixer outputs 1080p at 60 FPS, like the background input.
Using a capture tool, I have found that in v2.5.1 the output alternates between two frames (one is completely fine and the other is just noise):
Just to give an overview:
I created a buffer using the allocate function, then modified it with some OpenCV functions (to add text), then wrote it to the video mixer through the VDMA.
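For reference, the update path described above would look roughly like the sketch below. The PYNQ/OpenCV calls only run on the board, so they are shown as comments over a plain NumPy stand-in; the overlay and IP names are assumptions, not taken from the actual design.

```python
import numpy as np

H, W, BPP = 1080, 1920, 3                      # 1080p, 24-bit colour
frame = np.zeros((H, W, BPP), dtype=np.uint8)  # stand-in for pynq.allocate(...)

# On the ZCU104 this would be (names of the overlay/VDMA instance assumed):
#   from pynq import allocate, Overlay
#   import cv2
#   ol  = Overlay("mixer.bit")
#   buf = allocate(shape=(H, W, BPP), dtype=np.uint8)
#   cv2.putText(buf, "overlay text", (50, 50),
#               cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
#   buf.flush()                    # push CPU/cache writes out to DDR
#   out = ol.axi_vdma_0.writechannel.newframe()
#   out[:] = buf
#   ol.axi_vdma_0.writechannel.writeframe(out)

print(frame.nbytes)  # bytes the VDMA must fetch per frame
```

The `buf.flush()` step matters on non-cache-coherent paths: without it, the VDMA can read stale DDR contents even though the CPU-side buffer looks correct.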
I have even tried reducing the speed of the stream; there is always one good and one bad picture.
It would be nice if someone could give me a clue about how to avoid this, or about what changed in the memory allocation function from v2.5.1 to v2.6.
Input and output go through FMC. No storage is used except the VDMA. The background stream has no effect unless the overlay is turned on in the video mixer. I have already checked with an ILA that the error appears after the video mixer.
Usually the VDMA will carve out a region of main memory and keep that part out of CPU usage. So if the memory allocation setup is wrongly defined in the dtsi or the FSBL, there will be unexpected overlap.
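For illustration, carving memory out of CPU usage is typically done with a `reserved-memory` node in the dtsi; the address, size, and label below are hypothetical, not taken from the actual board files:

```dts
/* Hypothetical reserved-memory node: keeps the DMA frame buffers
   out of the memory Linux manages, so CPU and VDMA never overlap. */
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    vdma_reserved: buffer@70000000 {
        reg = <0x0 0x70000000 0x0 0x01000000>; /* 16 MB at 0x70000000 */
        no-map;                                /* Linux must not map it */
    };
};
```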
I encountered this once when building a custom Ubuntu image for a Zynq board where the DDR3 was 512MB rather than 1GB, and the CPU ended up writing to the same memory locations as the video buffer.
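The failure mode above comes down to address aliasing: with only 512 MB of DDR, the top address bit is ignored, so a buffer the configuration places above 512 MB wraps back onto live CPU memory. A toy calculation (addresses hypothetical) shows the collision:

```python
# With 512 MB of DDR3 the physical address effectively wraps modulo 512 MB.
DDR_SIZE   = 512 * 1024 * 1024   # actual memory fitted on the board
FRAME_ADDR = 0x3000_0000         # where a 1 GB-assuming config puts the buffer

alias = FRAME_ADDR % DDR_SIZE    # address the hardware actually hits
print(hex(alias))                # lands inside CPU-managed memory
```

So a frame buffer "at" 768 MB really sits at 256 MB, and CPU writes corrupt the video frames.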
I would have suspected that too. But in this case, v2.6 works fine with the same program, so I suppose it is somehow related to the implementation of the libraries. I have seen a similar problem with v2.7, and unfortunately there was no solution.
Nah, v2.5, v2.6, and v2.7 are all built from different environments, so the dtsi or FSBL might have been modified. But usually that is not the case, so the VDMA driver would be the one to blame here. From what you share, it looks very much like a Xilinx driver bug?
If you use a hardware OpenCV HLS IP, make sure it is aligned with the Vitis version.
If you use software OpenCV, make sure the Linux image is aligned with the Python version.
Of course, all the versions have different environments. Also, I have modified the dtb through a custom dtsi according to my needs. I have checked this on both v2.5 and v2.6; it has no effect.
The difference between v2.6 and v2.7 was the move to XRT (as far as I remember; because of that it was so slow that my program could not keep up). But I can't figure out the difference between v2.5 and v2.6.
Software OpenCV; the vision library does not yet support adding text.
Geez, software OpenCV; then better to use an alternative rather than stay stuck with OpenCV. Making things work at the software level is much easier than digging for the root cause.
This is very puzzling. It gives me a strong feeling that the DDR memory eye is not open, or only barely open, making signal integrity very bad.
Otherwise this would never happen.
Just for your information: I have tested the program on the default images for the ZCU104, v2.5.1 and v2.6, without changing anything. It behaves the same as an image built from scratch: in v2.5.1 the effect always appears, whereas in v2.6 it doesn't.
The question could be why v2.5.1 at all: because for a Yocto build there is no recipe for v2.6 or v3.0 yet. It would also be nice if someone could help me port the recipe to v3.0.
I understand that. But if I were you, I would try slowing down the DDR4 clock speed.
I haven't studied the Zynq UltraScale+ DDR4 memory training cycle, so if Linux does undergo that training procedure, this could be the cause.
From the info you provided, it looks like after v2.5 they fixed the DDR4 training procedure to stabilize the main memory?
It looks like that would be a good explanation.