You mentioned this shouldn’t be possible with video streams. Is this because of timing constraints in video acquisition? In any case, the use case shown with the looped video works fine for me; I’m just trying to understand what’s under the hood.
erode and dilate are implemented in the same partial bitstream; that is the reason why I can use them at the same time.
c_dict.loaded shows what is loaded and can be used. If a function is not loaded, you need to load it manually; the provided API does not do this automatically.
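A minimal sketch of that check-then-load pattern, modeled with plain Python rather than the real pynq_composable API (the class and method names here are hypothetical; on hardware, loading a DFX function downloads a partial bitstream):

```python
class ComposablePipeline:
    """Illustrative stand-in for a composable overlay's loaded-function bookkeeping."""

    def __init__(self, static_functions, dfx_functions):
        # Functions in the static region are always available.
        self._loaded = set(static_functions)
        self._available_dfx = set(dfx_functions)

    @property
    def loaded(self):
        """Functions currently usable, analogous to c_dict.loaded."""
        return sorted(self._loaded)

    def load(self, function):
        """Manually load a DFX function before composing with it."""
        if function in self._loaded:
            return
        if function not in self._available_dfx:
            raise ValueError(f"{function} is not available in any partial bitstream")
        # On hardware this step downloads the partial bitstream; here we just record it.
        self._loaded.add(function)


cpipe = ComposablePipeline(
    static_functions=["gray2rgb", "colorthresholding"],
    dfx_functions=["erode", "dilate"],
)
print(cpipe.loaded)      # only the static functions to start with
cpipe.load("erode")
cpipe.load("dilate")     # both fit because they live in the same partial bitstream
print(cpipe.loaded)
```

The key point is the explicit load() call: nothing happens implicitly when you reference a DFX function that has not been loaded yet.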
The composable video pipeline forks into two branches, so you will have to create your own composable overlay that branches as many times as you need.
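Since the fork width is fixed in hardware, one way to see whether the stock overlay suffices is to compare the widest fan-out your pipeline graph needs against what the overlay provides. A hypothetical sketch (the node names and the graph-as-dict representation are illustrative, not part of any real tool flow):

```python
def max_fork_width(graph):
    """Return the widest fan-out in a pipeline graph (dict: node -> list of successors)."""
    return max((len(succ) for succ in graph.values()), default=0)


STOCK_OVERLAY_FORKS = 2  # the stock composable video pipeline forks into two branches

# A pipeline that needs a three-way fork after the video source.
pipeline = {
    "video_in": ["fork"],
    "fork": ["filter_a", "filter_b", "filter_c"],
    "filter_a": ["join"],
    "filter_b": ["join"],
    "filter_c": ["join"],
    "join": ["video_out"],
}

needed = max_fork_width(pipeline)
if needed > STOCK_OVERLAY_FORKS:
    print(f"custom overlay required: {needed}-way fork exceeds the stock {STOCK_OVERLAY_FORKS}-way fork")
```

If the required width exceeds the stock overlay's, the fix is in the hardware design (a new composable overlay), not in software.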
Would this cause any problems with the composable overlay? How would the reconfiguration process be managed in such a case? Could the bitstream download be pre-fetched (e.g. reconfigure pr_2 with task_9 after task_2 has executed, even if task_0 still has to wait for task_6, 7 and 8 to finish)?
It depends; reconfiguring part of the FPGA takes time, so you need to store the intermediate results from one function to the next. Depending on the size of this intermediate result, the implementation may or may not be viable. I am assuming these connections are streaming.
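A back-of-the-envelope sketch of that trade-off, assuming you buffer one full frame between functions and swap the region once per frame. All numbers here are placeholders, not measured values for any board:

```python
def reconfig_overhead_fraction(frame_bytes, throughput_bytes_s, reconfig_s):
    """Fraction of wall time spent reconfiguring if the region is swapped once per frame."""
    frame_time = frame_bytes / throughput_bytes_s  # time to stream one buffered frame
    return reconfig_s / (frame_time + reconfig_s)


frame = 1280 * 720 * 3        # one RGB 720p frame stored between functions (~2.8 MB)
overhead = reconfig_overhead_fraction(
    frame_bytes=frame,
    throughput_bytes_s=600e6,  # assumed streaming bandwidth
    reconfig_s=4e-3,           # assumed partial-bitstream download time
)
print(f"{overhead:.0%} of the time goes to reconfiguration")
```

With per-frame reconfiguration on the same order as the frame time, roughly half the pipeline's time disappears into reconfiguration, which is why this scheme is only viable when the intermediate results are small or the swaps are infrequent.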
As I mentioned in the first comment, we use DFX as a feature to augment the overlay functionality, but it is not the core of the composable overlay.