How do I add NVMe support?

Hello,

I am trying to add NVMe support via the M.2 slot on the ZCU111. I am using the DMA/Bridge Subsystem for PCI Express IP from Xilinx. I was able to test my image on PetaLinux; however, I would like to add PCIe DMA support on PYNQ so that I have a much faster and larger drive.


I was able to add the device tree entries and I can see them being loaded correctly. However, the kernel doesn't bring up the driver. What do I have to do to get it to work? I can make it work in PetaLinux, but I cannot get PYNQ to recognize the device. Any help?


Hi there,

If it works on PetaLinux but not on the PYNQ image, it might have something to do with your kernel configuration. You could check the running kernel config of each image by running cat /proc/config.gz | gunzip > running.config and reading the running.config file to see what's enabled on each image. I would look for keywords containing "NVME". Hope that helps!
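For example, from a notebook you could do something like this rough sketch to pull out just the NVMe-related options:

    import gzip

    # /proc/config.gz holds the running kernel's configuration
    # (requires CONFIG_IKCONFIG_PROC to be enabled, which it is here)
    with gzip.open('/proc/config.gz', 'rt') as f:
        for line in f:
            if 'NVME' in line:
                print(line, end='')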

Thanks
Shawn


@skalade that was helpful. I looked at the config files and the drivers are all there for axi-xdma, pcie, and nvme. The PetaLinux image that I built with NVMe is almost identical to the one provided by PYNQ, plus my added kernel recipes. I'm not sure what exactly is wrong anymore.

The only thing I can think of is whether I wrote the device tree modifications correctly; I'm not a device driver expert. Do I have to write the axi-xdma pcie node as a fragment? In my original system-user.dtsi, all I had was the following (something along these lines, since I've changed it since then, but it compiled):

/include/ "system-conf.dtsi"
/ {
	amba_pl@0 {
		data_source_top0: data_source_top@a0000000 {
			compatible = "xlnx,data-source-top-1.0";
			reg = <0x0 0xa0000000 0x0 0x10000>;
		};

		xdma_0: axi-pcie@a0000000 {
			#address-cells = <3>;
			#interrupt-cells = <1>;
			#size-cells = <2>;
			clock-names = "sys_clk", "sys_clk_gt";
			clocks = <&misc_clk_0>, <&misc_clk_0>;
			compatible = "xlnx,xdma-host-3.00";
			device_type = "pci";
			interrupt-map = <0 0 0 1 &pcie_intc_0 1>, <0 0 0 2 &pcie_intc_0 2>, <0 0 0 3 &pcie_intc_0 3>, <0 0 0 4 &pcie_intc_0 4>;
			interrupt-map-mask = <0 0 0 7>;
			interrupt-names = "misc", "msi0", "msi1";
			interrupt-parent = <&gic>;
			interrupts = <0 89 4 0 90 4 0 91 4>;
			ranges = <0x02000000 0x00000000 0xA0000000 0x0 0xA0000000 0x00000000 0x10000000>;
			reg = <0x00000004 0x00000000 0x0 0x20000000>;
			pcie_intc_0: interrupt-controller {
				#address-cells = <0>;
				#interrupt-cells = <1>;
				interrupt-controller;
			};
		};
		misc_clk_0: misc_clk_0 {
			#clock-cells = <0>;
			clock-frequency = <100000000>;
			compatible = "fixed-clock";
		};
	};
};

This worked. However, as you can see, there is a data_source_top0 node, which is part of the original ZCU111-PYNQ image. The addresses of xdma_0 and data_source_top0 are the same. Could that be an issue? Should I try declaring xdma_0 as a fragment? If so, how should I do that?


You shouldn't need to use fragments if you aren't loading device tree overlays at runtime. I believe the address should match the address in your Vivado project's address editor; I would check the hardware project to see what addresses axi_dma and data_source0 are assigned.

You can check whether the device tree was loaded correctly by looking for your device in /proc/device-tree. A dmesg log might also be helpful if there were errors while the device tree was being loaded.
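For example, a rough sketch (the node name amba_pl@0 is taken from your dtsi above; adjust if yours differs):

    import os

    # PL devices should show up as children of the amba_pl node
    # in the live device tree
    print(os.listdir('/proc/device-tree/amba_pl@0'))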

Thanks
Shawn

@skalade, I'm just modifying the original PYNQ ZCU111 image, which uses partial reconfiguration. My image is not preloaded when I boot PYNQ; I have to run a Jupyter script to load it, which is why I'm wondering whether this is possible at all for PCIe devices. I'm also interested in seeing whether I can install PYNQ using PetaLinux. The example I found, Deploying PYNQ and Jupyter with Petalinux, is old, and according to that thread there is a way; the old method also only supports two boards. I'd be interested in seeing how I can load PYNQ using PetaLinux in 2020.2.

I don't see it under /proc/device-tree, although it is in the device tree of my original PetaLinux project. However, I do see data_source, which uses the same address as my axi-xdma/pcie node. What is data_source anyway? It's part of the original ZCU111 PYNQ recipe: ZCU111-PYNQ/system-user.dtsi at master · Xilinx/ZCU111-PYNQ · GitHub. I am going to try removing it, since it's not part of my image. Not sure what that will do…

I’ve also attached my dmesg log. Please feel free to take a look.
dmesg_pynq.log (33.6 KB)

Thanks again for your help.

Could you clarify a bit how you are modifying the PYNQ image? Are you just using the Overlay class to download a bitstream and device tree overlay (.dtbo file)?

I can't comment too much on deploying PYNQ on PetaLinux, but from that Discourse post it seems people have been successful with relatively recent builds (2020.1), and the meta-xilinx-pynq layer is available for 2020.2, so it should still work. As discussed in that post, though, you might need to make some modifications.

Thanks
Shawn

@skalade I'm just following the standard PYNQ build instructions here: PYNQ SD Card image — Python productivity for Zynq (Pynq), using this board-specific package: GitHub - Xilinx/ZCU111-PYNQ: Board files to build the ZCU111 PYNQ image. I'm just modifying the system-user.dtsi from that repository to add the axi_xdma node to the device tree, and I added a .cfg under the recipes-kernel folder with the NVMe, PCIe, and XDMA configurations (along the lines of the fragment below). Then I'm just loading the overlays in Jupyter from my Vivado project.
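For reference, the kernel fragment looks something like this (I'm quoting from memory; the exact option names may differ for your kernel version):

    # recipes-kernel .cfg fragment -- option names approximate,
    # check them against your linux-xlnx tree
    CONFIG_PCI=y
    CONFIG_PCIE_XDMA_PL=y
    CONFIG_NVME_CORE=y
    CONFIG_BLK_DEV_NVME=y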

I also just tried a different approach. Previously, I added the axi_xdma/pcie nodes to the system-user.dtsi declared as fragments. That seemed okay, and Linux would boot without any kernel panic, but the axi-xdma and pcie nodes didn't show up in the device tree, nor were the drivers loading. In my second try, I removed the fragment declaration; however, when I powered on PYNQ, it would hang right at "Starting kernel" and not get past that. To me this looks like I can't go down the partial reconfiguration approach that PYNQ takes. The only other idea I have is to load the device tree at runtime and see whether that would load the axi-xdma and pcie drivers. However, I don't have enough Linux experience to know whether that would work. Do you have any idea if that's worth a try?

The only other method I think would work is to go down the PetaLinux route and somehow have PetaLinux build the PYNQ project. We have PetaLinux with NVMe working using this project: GitHub - fpgadeveloper/fpga-drive-aximm-pcie: Example designs for FPGA Drive FMC.


When you say partial reconfiguration, do you mean device tree overlays? The flow for building the PYNQ ZCU111 image doesn’t have partial reconfiguration or partial bitstreams as far as I’m aware.

If you are adding your new devices as fragments, you will have to compile a device tree overlay with a .dtbo extension using dtc (device-tree-compiler) and load it like this, or as an additional parameter to the Overlay class. Your second approach (without the fragments) is probably the easier, more standard route. The fact that it's hanging could mean there were errors in the modified .dts file. I would make sure the addresses you add to the device tree match the ones in the Vivado address editor.
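As a rough sketch of that flow (the file names are placeholders for your own design, and this assumes dtc is installed on the board):

    import subprocess
    from pynq import Overlay

    # Compile the fragment into a binary overlay; -@ generates the
    # symbols needed for overlay resolution
    subprocess.run(['dtc', '-I', 'dts', '-O', 'dtb',
                    '-o', 'pcie.dtbo', '-@', 'pcie.dts'],
                   check=True)

    # Download the bitstream and apply the .dtbo in one step
    ol = Overlay('zcu111_pcie.bit', dtbo='pcie.dtbo')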

I can't give too much advice on the PetaLinux flow with the meta-pynq layer, as I have never used it and I'm not aware of resources other than the tutorial you linked earlier, but I assume the flow shouldn't have changed drastically since then.

Thanks
Shawn


Maybe my interpretation of partial bitstreams/reconfiguration is incorrect. When I first looked at how PYNQ generates the device trees (without my modifications), I could see devices being added as fragments. I thought this was a result of having the FPGA manager turned on in PetaLinux.

Thanks. I’ll try this and get back to you.

@skalade I tried something new today. I was going to try the device tree compiler method you described, but first I had to change the address of my xdma_0 axi-pcie node in Vivado so that it no longer collides with data_source. My PYNQ image loaded properly. I simply tried to load the bitstream in Jupyter, and this is what dmesg says:

[ +0.000034] [drm] Pid 1421 closed device
[ +0.000305] [drm] Pid 1421 opened device
[ +0.001359] [drm] Pid 1421 closed device
[ +1.479992] [drm] Pid 1407 opened device
[ +0.000036] [drm] Pid 1407 closed device
[ +0.000296] [drm] Pid 1407 opened device
[ +0.002211] [drm] Pid 1407 opened device
[ +0.000047] [drm] Pid 1407 closed device
[ +0.001764] [drm] Pid 1407 opened device
[ +0.000038] [drm] Pid 1407 closed device
[ +0.000086] [drm] Pid 1407 opened device
[ +2.434645] fpga_manager fpga0: writing zcu111_pcie.bin to Xilinx ZynqMP FPGA Manager
[ +0.734068] [drm] Finding IP_LAYOUT section header
[ +0.000003] [drm] AXLF section IP_LAYOUT header not found
[ +0.000007] [drm] Finding DEBUG_IP_LAYOUT section header
[ +0.000002] [drm] AXLF section DEBUG_IP_LAYOUT header not found
[ +0.000002] [drm] Finding CONNECTIVITY section header
[ +0.000002] [drm] AXLF section CONNECTIVITY header not found
[ +0.000002] [drm] Finding MEM_TOPOLOGY section header
[ +0.000001] [drm] Section MEM_TOPOLOGY details:
[ +0.000005] [drm] offset = 0x830
[ +0.000001] [drm] size = 0x80
[   96.181020] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/overlay2
[ +0.000025] [drm] Download new XCLBIN 4F503C1C-2D3E-4BC5-8150-8DE598F9A140 done.
[   96.193642] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/xdma_0
[ +0.000038] [drm] zocl_xclbin_read_axlf 4f503c1c-2d3e-4bc5-8150-8de598f9a140 ret: 0.
[   96.208840] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/pcie_intc_0
[ +0.007762] [drm] ->Hold xclbin 4F503C1C-2D3E-4BC5-8150-8DE598F9A140, from ref=0
[   96.224439] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /symbols/misc_clk_0

After that it just hangs and I’m not sure what this all means. Do you have any thoughts on this?

@skalade I'm starting to think this isn't really doable, because maybe you can't hot-swap PCIe NVMe drives. If so, is there a way to build a PYNQ image with the FPGA bitstream loaded at boot?

On loading the bitstream at boot: a similar issue has been discussed in this forum post. The solution is to disable the FPGA manager; your bitstream will then load at boot, but you will lose the ability to dynamically load overlays in PYNQ.

Regarding the previous solutions: if your bitstream needs to be loaded for the device tree to properly register the PCIe device, the approach of just modifying the .dts may not work, and you may well be better off using a device tree overlay. I can't come up with any examples of doing this for PCIe, but there is a dtbo example here, there are some posts on the topic on the more official Xilinx Confluence pages, and the comments on the linux-xlnx GitHub are a good resource to help you configure your individual entries correctly.

Thanks
Shawn


@skalade I've tried loading the device tree overlay before loading the bitstream, but either way, when I load the bitstream it crashes the program, which leads me to believe that xdma/pcie isn't supported yet.

I'm not sure how to debug this problem; I have only seen one similar request on the internet, with no answers. Perhaps the best way is to just disable the FPGA manager and preload the bitstream. I'm travelling for the next two days and won't be able to investigate until Friday.

However, how do I interact with the design via PYNQ if it's preloaded? I plan on adding other IP blocks to my preloaded image. If the bitstream is loaded at boot, how would I be able to interact with the other parts of my design (i.e., the DMA)?
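My guess is something like the following would still work (a sketch; the base address and range are placeholders to be taken from the Vivado address editor), but I'd appreciate confirmation:

    from pynq import Overlay, MMIO

    # Parse the .hwh metadata without re-downloading the
    # already-loaded bitstream
    ol = Overlay('zcu111_pcie.bit', download=False)

    # Or poke an IP's registers directly by physical address
    mmio = MMIO(0xA0000000, 0x10000)  # base/range from the address editor
    print(hex(mmio.read(0x0)))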

Also, if anyone is curious to try this, the GitHub project I am using for the ZCU111 is here: GitHub - fpgadeveloper/fpga-drive-aximm-pcie: Example designs for FPGA Drive FMC. It works in PetaLinux, but it doesn't work when dynamically loaded in PYNQ.