I’m trying to access the loaded overlay from multiple Python interpreters while writing a package, and I can’t get a proof of concept to work in a root ipython interpreter. I know this is possible, as running two notebooks and downloading an overlay in one will clearly affect what is available in the other.
I’ve tried pynq.Device.active_device.ip_dict, thinking that would pull from the default global pl_server, but it did not.
In essence what I’d like is a way to do the following:
Program 1 (possibly a jupyter notebook):
ol = pynq.<get_active_overlay>
This post seems related but the suggestions there didn’t seem to work here.
We were on 2.6 on this board (ZCU111). On a possibly related note we were finding a difference in the list of devices available. In the script this code:
def configure(bitstream, mig='mig_modified_ip_layout_mem_topology.xclbin', ignore_version=False, clocks=False, download=True):
    from logging import getLogger
    import pynq, xrfclk
    from pynq import PL
    global _gen3_overlay, _mig_overlay
    _gen3_overlay = pynq.Overlay(bitstream, ignore_version=ignore_version, download=download)
    # IndexError here when only one device is enumerated:
    _mig_overlay = pynq.Overlay(mig, device=pynq.Device.devices[1], download=download)
    getLogger(__name__).info('Failed to set clocks with set_all_ref_clks, trying new driver call')
would also choke with an IndexError on devices (there is only one present), whereas in a jupyter notebook there are two. So my takeaway is that there is something different about the way the notebook server is starting the interpreter (either working directories, paths, or environment variables) that I’m not reproducing.
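To make the script robust against that notebook/script difference, the device lookup could be guarded; pick_device below is a hypothetical helper, not part of the pynq API:

```python
def pick_device(devices, preferred_index=1):
    """Return devices[preferred_index] when it exists (e.g. under Jupyter,
    where two devices are enumerated), else fall back to the first one."""
    if not devices:
        raise RuntimeError("No devices found; is the XRT environment sourced?")
    try:
        return devices[preferred_index]
    except IndexError:
        return devices[0]
```

It would then be passed as `device=pick_device(pynq.Device.devices)` instead of hard-coding an index.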
The idea would be that in one process I’d make the call with download=True and in the others with download=False. I don’t think we are going to need multiprocessing within an instance, so I’m not too worried about proper picklability of the overlay objects (plus I can simply set them to None and reconnect with some __getstate__/__setstate__ hooks if needed).
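As a sketch of those hooks (the class and attribute names are made up, and the reconnect path assumes an Overlay can simply be re-created with download=False):

```python
class OverlayHandle:
    """Wraps a pynq.Overlay so the wrapper survives pickling: the overlay
    object is dropped on pickle and re-attached lazily after unpickling."""

    def __init__(self, bitstream):
        self.bitstream = bitstream
        self._overlay = None

    def connect(self, download=False):
        import pynq  # deferred so the class is importable off-board
        self._overlay = pynq.Overlay(self.bitstream, download=download)
        return self._overlay

    def __getstate__(self):
        state = self.__dict__.copy()
        state['_overlay'] = None  # the overlay object is not picklable
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)  # reconnect later via connect()
```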
I did try the active_device call but it didn’t seem to work.
Mario gave you the solution. Have only one Python process, the first one to start, invoke Overlay(…) with download=True; have all the others set download=False. Then, of course, don’t access the same PL entities/registers at once for hardware that can’t tolerate it.
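A minimal sketch of that pattern (attach_overlay is a hypothetical helper and the bitstream path is a placeholder; the pynq import is deferred to call time):

```python
def attach_overlay(bitstream, primary=False):
    """Program the PL when this is the primary process (download=True);
    otherwise just attach to whatever is already loaded (download=False)."""
    import pynq
    return pynq.Overlay(bitstream, download=primary)

# Process 1, started first:   ol = attach_overlay('design.bit', primary=True)
# Processes 2..N, afterwards: ol = attach_overlay('design.bit')
```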
That isn’t my experience. I’ve now updated to pynq 2.7 and, off a fresh SD image, do the following:
This results in RuntimeError: No Devices Found
as does executing the command with download=True
I do get the warning /home/xilinx/pynq/pl_server/device.py:79: UserWarning: No devices found, is the XRT environment sourced?
Looking through systemd services I make my way to pl_server.sh
and see that there are indeed a host of environment variables set via the lines
for f in /etc/profile.d/*.sh; do source $f; done
and one of those is xrt_setup.sh. I think my original question is still unanswered.
Without using jupyter AT ALL (I don’t care if it is left running, but assume that no connection to :9090 is ever made after the ZCU boots), what is the proper way to go about creating a Python program that will connect to and interact with pynq? One program would download an overlay; others would just interact with it. For these latter programs I think the answer given is sufficient. For the first, primary program it clearly isn’t, as I need to get some environment variables configured.
edit: just a quick followup here that running those lines from pl_server.sh while working in a su environment followed by spinning up ipython WAS sufficient to clear the runtime error with and without downloading the overlay.
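For reference, the steps that cleared the error look roughly like this (run in a root shell; the file names are the ones found in pl_server.sh earlier in this thread, so verify against your image):

```shell
# In a root shell (su), reproduce what pl_server.sh does before launching Python:
set -a; [ -f /etc/environment ] && source /etc/environment; set +a
for f in /etc/profile.d/*.sh; do [ -r "$f" ] && source "$f"; done  # includes xrt_setup.sh
ipython
```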
I think my question thus becomes: which of that host of environment settings do I need to incorporate into the startup of my process, which clearly needs to run as root?
PYNQ uses the fpga_manager to download bitstreams, so you need to run as root (or with root permissions) to be able to download the bitstream. Additionally, in the PYNQ 2.7 SD card image the pynq package is installed in a virtual environment, so you need to source the environment first (as a regular user); when you run as root, this environment is sourced automatically.
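Since the fpga_manager path needs root, the downloading process can fail fast with a preflight check; can_download_bitstream is a hypothetical helper, not a pynq API:

```python
import os

def can_download_bitstream():
    """True when running as root, which the fpga_manager-based
    bitstream download path requires."""
    return os.geteuid() == 0
```

The primary process could call this before invoking Overlay(…, download=True) and raise a clear error instead of failing deep inside the driver.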
Right, and I see that scripts to do that are in profile.d, but are all of them needed? Mainly I want to make sure that I’m sourcing neither too few nor too many for the systemd root services I’m planning to spin up.
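For such a service, one option is to have the unit source the venv script explicitly before starting the program. This is only a sketch: the unit name and application path are made up, and the sourced file name is taken from later in this thread.

```ini
# /etc/systemd/system/my-pynq-app.service (hypothetical)
[Unit]
Description=PYNQ application (downloads the overlay)
After=network.target

[Service]
User=root
# Source the pynq venv environment, then exec the application.
ExecStart=/bin/bash -c 'source /etc/profile.d/pynq_venv.sh && exec python /opt/my_app/main.py'
Restart=on-failure

[Install]
WantedBy=multi-user.target
```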
As for it being sourced automatically, I’m not clear what you mean. Running su in a terminal and then dropping into python was not sufficient for me. I needed to manually source /etc/environment and the others in /etc/profile.d/; this, among other things, activated the venv, and things then worked. Manually sourcing things is fine; again, though, I’d like to avoid sourcing too much, i.e. some file that pynq is assuming only gets sourced for the systemd process for the jupyter server.
@marioruiz Is there documentation on properly sourcing the pynq 2.7 venv, especially for a given board? I’m not seeing any in the docs.
It looks like pointing PyCharm at /usr/local/share/pynq-venv/bin/python for a remote ssh interpreter works well, as does source /etc/profile.d/pynq_venv.sh in an ssh session (using a root session via su as needed, depending on desired features, e.g. downloading).
From looking through the contents of the other files in /etc/profile.d some things may need XILINX_XRT=/usr set, though I’m not certain what leans on that.
On the ZCU111 (at least), working with the xrfclk package also needs BOARD=ZCU111 set. This normally seems to be handled by /etc/profile.d/boardname.sh. Without it I’m getting a KeyError: 'BOARD' whenever I try to import xrfclk. I’m not yet certain whether working with it requires root, but that should be straightforward to check.
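Until the environment is sorted out, a defensive workaround is to set BOARD in-process before the import; the value is specific to this board, and the setdefault line is just a sketch of that idea:

```python
import os

# /etc/profile.d/boardname.sh normally exports BOARD; without it,
# `import xrfclk` raises KeyError: 'BOARD'. Setting it first avoids that:
os.environ.setdefault('BOARD', 'ZCU111')

# import xrfclk   # now succeeds (on the board itself)
```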
There is also xdg_dirs_desktop_session.sh. Do I need to worry about anything from it? Anything else I’ve missed?
Generally, what I do when using pynq over an ssh connection is add source /etc/profile.d/pynq_venv.sh to the end of my /root/.bashrc file. Have you tried this on your target board? Then when a root shell opens, it will source everything for the environment. You could also include just the parts of /etc/profile.d/pynq_venv.sh that you want in your /root/.bashrc file, for instance:
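A hedged guess at what such a partial .bashrc block could look like (the paths and variables are the ones mentioned earlier in this thread; check them against your board’s pynq_venv.sh before relying on this):

```shell
# Minimal subset instead of sourcing all of /etc/profile.d/pynq_venv.sh:
export XILINX_XRT=/usr    # some tools appear to lean on this
export BOARD=ZCU111       # normally exported by /etc/profile.d/boardname.sh
VENV=/usr/local/share/pynq-venv
[ -f "$VENV/bin/activate" ] && source "$VENV/bin/activate"
```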
However, we would recommend sourcing all of the pynq_venv.sh file in your /root/.bashrc.
Do you have PyCharm configured to connect via ssh with the root user account? To enable root ssh connections there are a few things you might need to configure: edit /etc/ssh/sshd_config to set PermitRootLogin yes.
I think if you set up both /root/.bashrc and PyCharm in this way, then you should be able to do everything from your remote PyCharm interpreter.