Accessing an overlay from multiple python interpreters

I’m trying to access the loaded overlay from multiple Python interpreters while writing a package, and I can’t get a proof of concept to work in a root ipython interpreter. I know this is possible, as running two notebooks and downloading an overlay in one will clearly affect what is available in the other.

I’ve tried pynq.Device.active_device.ip_dict, thinking that would pull from the default global pl_server, but it did not.

In essence what I’d like is a way to do the following:
Program 1 (possibly a jupyter notebook):

import pynq
pynq.Overlay('my bitstream.bit')

Program 2

import pynq

ol = pynq.<get_active_overlay>
ol.my_core.do_something()

This post seems related but the suggestions there didn’t seem to work here.


Hi,

What pynq version are you using?

In 2.7 I was able to do this using this piece of code:

import pynq
ol = pynq.Overlay('<your_overlay>.bit', download=False)
pynq.Device.active_device.reset(ol.parser, ol.timestamp, ol.bitfile_name)

Mario

We were on 2.6 on this board (ZCU111). On a possibly related note we were finding a difference in the list of devices available. In the script this code:

def configure(bitstream, mig='mig_modified_ip_layout_mem_topology.xclbin', ignore_version=False, clocks=False, download=True):
    import pynq, xrfclk
    from pynq import PL
    from logging import getLogger

    global _gen3_overlay, _mig_overlay
    _gen3_overlay = pynq.Overlay(bitstream, ignore_version=ignore_version, download=download)

    if mig:
        _mig_overlay = pynq.Overlay(mig, device=pynq.Device.devices[1], download=download)

    if clocks:
        try:
            xrfclk.set_all_ref_clks(409.6)
        except Exception:
            getLogger(__name__).info('Failed to set clocks with set_all_ref_clks, trying new driver call')
            xrfclk.set_ref_clks(409.6)

would also choke with an IndexError on devices[1] (only one device is present), whereas in a Jupyter notebook there are two. So my takeaway is that there is something different about the way the notebook server starts its interpreter (working directory, paths, or environment variables) that I’m not reproducing.
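For what it’s worth, the sanity check I’ve been using to compare the two environments is just to print the device list (a sketch only; this is the same list the devices[1] index comes from):

import pynq

# Compare what each interpreter can see: the notebook reports two
# devices, the bare script only one.
print(pynq.Device.devices)
print(len(pynq.Device.devices), 'device(s) visible')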

The idea would be that in one process I’d make the call with download=True and in the others with download=False. I don’t think we are going to need multiprocessing within an instance, so I’m not too worried about proper picklability of the overlay objects (plus I can simply set them to None and reconnect with some __getstate__/__setstate__ hooks if needed).
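If it ever does matter, I’m picturing something like this (purely a sketch with a made-up wrapper class; the point is just to drop the Overlay on pickle and reattach with download=False on unpickle):

import pynq

class OverlayHandle:
    # Hypothetical wrapper: the Overlay itself is dropped when pickling
    # and reattached (without re-downloading) when unpickling.
    def __init__(self, bitfile):
        self.bitfile = bitfile
        self.ol = pynq.Overlay(bitfile, download=False)

    def __getstate__(self):
        state = self.__dict__.copy()
        state['ol'] = None
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.ol = pynq.Overlay(self.bitfile, download=False)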

I did try the active_device call but it didn’t seem to work.


@marioruiz any chance you know of a workaround for 2.6 or have any other thoughts?

Hi,

Mario gave you the solution. Have only one python process (the first one to start) invoke Overlay(…) with download=True. Have all the others set download=False. Then of course don’t access the same PL entities/registers at once for hardware that can’t tolerate it.
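Concretely, something like this (a sketch only; my_bitstream.bit and my_core are placeholders for your design, and the reset() line is Mario’s call from above):

# Program 1 -- the only process that actually configures the PL
import pynq
ol = pynq.Overlay('my_bitstream.bit')   # download=True is the default

# Program 2 (and any later process) -- attach without re-downloading
import pynq
ol = pynq.Overlay('my_bitstream.bit', download=False)
pynq.Device.active_device.reset(ol.parser, ol.timestamp, ol.bitfile_name)
ol.my_core.do_something()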

This works in v2.6 also.

Kind regards


That isn’t my experience. I’ve now updated to pynq 2.7 and, on a fresh SD image, do the following:

  1. ssh in
  2. sudo ipython
  3. import pynq
  4. pynq.Overlay(path_to_my_bitstream, download=False)
    This results in RuntimeError: No Devices Found, as does executing the command with download=True.

I do get the warning
/home/xilinx/pynq/pl_server/device.py:79: UserWarning: No devices found, is the XRT environment sourced?
Looking through the systemd services I make my way to pl_server.sh and see that there is indeed a host of environment variables set via the lines

. /etc/environment
for f in /etc/profile.d/*.sh; do source $f; done

and one of those files is xrt_setup.sh. I think my original question is still unanswered.

Without using Jupyter AT ALL (I don’t care if it is left running, but assume that no connection to :9090 is ever made after the ZCU boots), what is the proper way to go about creating a python program that will connect to and interact with pynq? One program would download an overlay, others would just interact with it. For these latter programs I think the answer given is sufficient. For the first, primary program it clearly isn’t, as I need to get some environment variables configured.

edit: just a quick follow-up here: running those lines from pl_server.sh in a su environment, followed by spinning up ipython, WAS sufficient to clear the runtime error both with and without downloading the overlay.

I think my question thus becomes: how much of that environment specification do I need to incorporate into the startup of my process, which clearly needs to run as root?


PYNQ uses the fpga_manager to download bitstreams, so you need to run as root (or with root permissions) to be able to download the bitstream. Additionally, in the 2.7 PYNQ SD card image the pynq package is installed in a virtual environment, so you need to source the environment first (for a regular user); when you run as root this environment is sourced automatically.

Right, and I see that scripts to do that are in profile.d, but are all of those needed? Mainly I want to make sure that I’m sourcing neither too few nor too many for the systemd root services I’m planning to spin up.

As for it being sourced automatically, I’m not clear what you mean. Running su in a terminal and then dropping into python was not sufficient for me. I needed to manually source /etc/environment and the others in /etc/profile.d/; this, among other things, activated the venv, and things then worked. Manually sourcing things is fine; again, though, I’d like to avoid sourcing too much, i.e. some file that pynq assumes only gets sourced for the systemd process for the jupyter server.

@marioruiz Is there documentation on properly sourcing the pynq 2.7 venv, especially for a given board? I’m not seeing any in the docs.

It looks like pointing PyCharm at /usr/local/share/pynq-venv/bin/python for a remote ssh interpreter works well, as does source /etc/profile.d/pynq-venv.sh in an ssh session (using a root session via su as needed, depending on desired features, e.g. downloading).

From looking through the contents of the other files in /etc/profile.d, some things may need XILINX_XRT=/usr set, though I’m not certain what relies on that.

On the ZCU111 (at least), working with the xrfclk package also needs BOARD=ZCU111 set. This normally seems to be handled by /etc/profile.d/boardname.sh. Without it I’m getting a KeyError: 'BOARD' whenever I try to import xrfclk. Not yet certain if working with it requires root, but that should be straightforward.
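For the record, this is the kind of thing I mean, as a sketch (using the values above; it assumes the interpreter is already the pynq-venv one, and I’m not yet sure this list of variables is complete):

import os

# Set what /etc/profile.d would normally provide, before the imports.
os.environ.setdefault('BOARD', 'ZCU111')     # xrfclk raises KeyError: 'BOARD' without it
os.environ.setdefault('XILINX_XRT', '/usr')  # unsure exactly what relies on this

import pynq
import xrfclk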

There is also xdg_dirs_desktop_session.sh. Do I need to worry about anything from it? Anything else I’ve missed?

Hi @baileyji,

Generally, what I do when using pynq over an ssh connection is add source /etc/profile.d/pynq-venv.sh to the end of my /root/.bashrc file. Have you tried this on your target board? Then, when a root shell opens, it will source everything for the environment. You could also try including just the parts of /etc/profile.d/pynq_venv.sh that you want in your /root/.bashrc file, for instance:

source /usr/local/share/pynq-venv/bin/activate
export BOARD=KV260
export XILINX_XRT=/usr

However, we would recommend sourcing all of the pynq_venv.sh file in your /root/.bashrc.

Do you have PyCharm configured to connect via ssh with the root user account? To enable root ssh connections there are a few things you might need to configure in the ssh config file; in particular, you need to edit /etc/ssh/sshd_config to set PermitRootLogin yes.

I think if you set up both /root/.bashrc and PyCharm in this way then you should be able to do everything from your remote PyCharm interpreter.

Hope this helps a bit!

All the best,
Shane


Could you possibly load your Overlay using boot.py, which gets executed as part of startup? Then have your future ipython sessions use the download=False option?
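Something like this in boot.py, as a sketch (the path is a placeholder):

# boot.py -- runs once at startup and downloads the bitstream,
# so later sessions can attach with download=False
import pynq
pynq.Overlay('/home/xilinx/my_bitstream.bit')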

There are some details in another thread here:

