AWS IoT Greengrass and PYNQ

Keywords: Greengrass, PYNQ

Introduction

Edge computing plays an important role in today’s data centers. Edge devices can both collect and process data locally, reducing overall system latency when that data feeds cloud applications. As a simple example, in many smart-camera applications the deployed cameras only trigger when motion is detected; this filtering reserves data center storage and network bandwidth for the interesting events that require further computation. Machine learning at the edge is also very exciting - see the AWS re:Invent 2018 presentation that Xilinx and AWS gave on this very topic.

Given the role of edge computing in data center applications, many cloud service providers have developed their own frameworks for Internet-of-Things (IoT) edge computing. As an example, AWS Greengrass is software that extends AWS cloud capabilities to local devices. With Greengrass, edge devices and cloud services can communicate with each other securely.

In this article we will show that all PYNQ-enabled devices support AWS IoT Greengrass applications. Specifically, we will walk briefly through the steps needed for a simple Greengrass example: an image-resizer application running on a Xilinx ZCU104 board.

Step 1: Preparation

The Greengrass-related Linux kernel configurations have already been added into all PYNQ SD card images;
i.e., when a PYNQ image is built, the following configurations are added into the image by default:

# 
# Greengrass
#
CONFIG_NAMESPACES=y
CONFIG_IPC_NS=y
CONFIG_UTS_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_MEMCG=y
CONFIG_POSIX_MQUEUE=y
CONFIG_OVERLAY_FS=y
CONFIG_SECCOMP_FILTER=y
CONFIG_KEYS=y
CONFIG_SECCOMP=y

So if you are using a PYNQ image, this has already been taken care of. All you need to do is write the image to an SD card and boot the board into Linux.

Step 2: Add user and check dependencies

As suggested in the AWS Greengrass documentation, we have to add a dedicated system user and group:

sudo adduser --system ggc_user
sudo addgroup --system ggc_group
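
If you want to double-check that the accounts exist, Python's standard pwd and grp modules can query the system account database. A small sketch (the helper names user_exists and group_exists are our own):

```python
import pwd
import grp

def user_exists(name):
    """Return True if a system user with this name exists."""
    try:
        pwd.getpwnam(name)
        return True
    except KeyError:
        return False

def group_exists(name):
    """Return True if a system group with this name exists."""
    try:
        grp.getgrnam(name)
        return True
    except KeyError:
        return False

# On the board, after the adduser/addgroup commands above,
# user_exists("ggc_user") and group_exists("ggc_group") should both be True.
```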

PYNQ images satisfy all the other dependencies. If you want to verify this yourself, run the following commands:

cd /home/xilinx
git clone https://github.com/aws-samples/aws-greengrass-samples.git
cd aws-greengrass-samples/greengrass-dependency-checker-GGCv1.9.x
sudo ./check_ggc_dependencies | more

An example output is shown below:

[Image: check_ggc_dependencies output]

As you can see, all dependencies are met.

Step 3: Configure Greengrass group

In the AWS IoT console, you can locate the Greengrass group:

[Image: AWS IoT console, Greengrass section]

Click Create first Group, then Use easy creation if asked.

[Image: Greengrass group easy creation]

Follow the instructions until you can download the resources.

Note that two items need to be downloaded onto the board (we will download the root certificate later):

  1. The keys stored in the tarball.
  2. The core software with a version compatible to your OS.

For the core software, choose Ubuntu 18.04.

[Image: core software download page]

After both items are downloaded, click Finish.

Step 4: Start Greengrass core

From the last step, we now have two files in /home/xilinx/greengrass:

  1. <hash>-setup.tar.gz
  2. greengrass-linux-aarch64-1.9.4.tar.gz

We can run the following commands to extract the tarballs:

cd /home/xilinx/greengrass
sudo tar -xzvf greengrass-*.tar.gz -C /
sudo tar -xzvf *-setup.tar.gz -C /greengrass

Now download the root certificate:

cd /greengrass/certs/
sudo wget -O root.ca.pem https://www.amazontrust.com/repository/AmazonRootCA1.pem

The Greengrass core software starts a daemon process that acts as an agent and talks to AWS directly. Before we start the Greengrass core, we need to adjust the permissions to allow Lambda functions to run as root. This is especially useful when we want to interact with the on-board memory and Programmable Logic (PL). Open the file /greengrass/config/config.json and add:

{
  "coreThing" : {
    ...
  },
  "runtime" : {
    ...
    "allowFunctionsToRunAsRoot" : "yes"
  },
  ...
}
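
If you prefer to script this change, the flag can be set with the standard json module. A minimal sketch (the helper name allow_root_lambdas is our own; on the board you would point it at /greengrass/config/config.json and run it as root):

```python
import json

def allow_root_lambdas(config_path):
    """Set runtime.allowFunctionsToRunAsRoot to "yes" in a Greengrass config.json."""
    with open(config_path) as f:
        config = json.load(f)
    # create the "runtime" section if it does not exist yet, then set the flag
    config.setdefault("runtime", {})["allowFunctionsToRunAsRoot"] = "yes"
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)

# On the board: allow_root_lambdas("/greengrass/config/config.json")
```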

For details you can refer to the AWS documentation page on Lambda configuration. Now we can start the Greengrass core software:

cd /greengrass/ggc/core/
sudo ./greengrassd start

You should see Greengrass successfully started with PID: <PID>. Keep in mind that you can run sudo ./greengrassd stop to terminate the daemon process at any time after you are done with this article.

Step 5: Create Lambda function

The Lambda function runs on the Greengrass core device; all the other edge devices talk to the Lambda function on the core device to interact with the cloud service. Open the Lambda console and choose Create function. Use resize as the function name, and Python 2.7 as the runtime.

Since the Greengrass core supports several Python runtimes (e.g. Python 2.7, Python 3.8) which may not be compatible with the pynq runtime (Python 3.6), we need to call out to a sub-process from the Lambda function. Hence the first step is to design a local Python program that runs on Python 3.6; in this program we also want to leverage the Programmable Logic (PL) as a hardware accelerator. Of all the available examples, we will choose PYNQ-HelloWorld as a starting point.

Run the following command on the board:

sudo pip3 install --upgrade git+https://github.com/xilinx/pynq-helloworld.git

This will install the package, along with the corresponding PL overlay, onto the board. The block design on the PL contains an HLS IP that resizes an incoming picture (640 by 360 pixels) and returns a down-sized picture (320 by 180 pixels) to the processor. After the package has been installed, add the following Python program to /home/xilinx/jupyter_notebooks/helloworld and name it resize.py. This program takes two arguments: the input picture path and the output picture path.

import sys
from PIL import Image
import numpy as np
from pynq import Xlnk
from pynq import Overlay

# check number of arguments
if len(sys.argv) != 3:
    raise RuntimeError("Usage: python3 resize.py <input_jpg> <output_jpg>")

# load overlay and expose the components in the overlay
resize_design = Overlay(
    "/usr/local/lib/python3.6/dist-packages/helloworld/bitstream/resizer.bit")
dma = resize_design.axi_dma_0
resizer = resize_design.resize_accel_0

# new height and width are half of the original after resizing
old_width, old_height = 640, 360
new_width, new_height = 320, 180

# allocate contiguous memory buffers based on size
xlnk = Xlnk()
in_buffer = xlnk.cma_array(shape=(old_height, old_width, 3),
                           dtype=np.uint8, cacheable=1)
out_buffer = xlnk.cma_array(shape=(new_height, new_width, 3),
                            dtype=np.uint8, cacheable=1)

# read input image
image_path = sys.argv[1]
original_image = Image.open(image_path)
original_image.load()
input_array = np.array(original_image)
in_buffer[:] = input_array

# setting control registers
resizer.write(0x10, old_height)
resizer.write(0x18, old_width)
resizer.write(0x20, new_height)
resizer.write(0x28, new_width)

# send image, start resizing, and receive image
dma.sendchannel.transfer(in_buffer)
dma.recvchannel.transfer(out_buffer)
resizer.write(0x00, 0x81)  # ap_start | auto_restart
dma.sendchannel.wait()
dma.recvchannel.wait()

# save output image
resized_image = Image.fromarray(out_buffer)
resized_image.save(sys.argv[2], "JPEG")

# clean up contiguous memory
xlnk.xlnk_reset()
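
Away from the board, you can sanity-check the expected shape of the result with a pure-NumPy model of the 2x downscale. This sketch averages 2x2 pixel blocks; the actual HLS IP may use a different interpolation, so treat it only as an approximate software reference:

```python
import numpy as np

def resize_half(image):
    """Downscale an (H, W, 3) uint8 image by 2x in each dimension,
    averaging each 2x2 block of pixels (H and W must be even)."""
    h, w, c = image.shape
    # group pixels into 2x2 blocks, then average within each block
    blocks = image.reshape(h // 2, 2, w // 2, 2, c).astype(np.uint32)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

# A 640x360 input yields the same 320x180 output shape the accelerator produces.
frame = np.zeros((360, 640, 3), dtype=np.uint8)
print(resize_half(frame).shape)
```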

Now we are ready to design our Lambda function, which calls resize.py. We need to package the Lambda function as a zip file and upload it.

  1. Download the SDK repository.
  2. Go into the downloaded repository; create a folder Resize under the folder examples.
  3. Copy the folder greengrasssdk from the top level of this repository into examples/Resize.
  4. Create the following Python program under examples/Resize. Name it as greengrassResize.py.
import subprocess
import platform
from threading import Timer
import greengrasssdk


client = greengrasssdk.client('iot-data')
my_platform = platform.platform()


def greengrass_hello_world_run():
    if not my_platform:
        client.publish(
            topic='hello/world',
            payload='Hello world! Sent from Greengrass Core.')
    else:
        client.publish(
            topic='hello/world',
            payload='Hello world! Sent from '
                    'Greengrass Core running on platform: {}'
                    .format(my_platform))
    # re-arm the timer so this runs every 10 seconds
    Timer(10, greengrass_hello_world_run).start()


greengrass_hello_world_run()


def function_handler(event, context):
    resize_py = "/home/xilinx/jupyter_notebooks/helloworld/resize.py"
    original = "/home/xilinx/jupyter_notebooks/helloworld/images/paris.jpg"
    resized = "/home/xilinx/jupyter_notebooks/helloworld/images/resized.jpg"
    cmd = "/usr/bin/python3 {} {} {}".format(resize_py, original, resized)
    proc = subprocess.Popen(cmd, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode == 0:
        client.publish(
            topic='hello/world',
            payload='Picture resized successfully.')
    else:
        client.publish(
            topic='hello/world',
            payload='Resize failed: {}'.format(err))

  5. Go under examples/Resize, and zip the greengrasssdk folder and greengrassResize.py into a single zip file resize_lambda.zip.
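
The same zip can be built with Python's standard zipfile module. One detail matters: greengrassResize.py and the greengrasssdk folder must sit at the root of the archive, not inside a subdirectory. A sketch (the helper name build_lambda_zip is our own):

```python
import os
import zipfile

def build_lambda_zip(zip_path, py_file, sdk_dir):
    """Package the handler file and the SDK folder at the root of the archive."""
    sdk_parent = os.path.dirname(os.path.abspath(sdk_dir))
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        # handler goes at the archive root
        zf.write(py_file, arcname=os.path.basename(py_file))
        # SDK files keep their greengrasssdk/ prefix
        for root, _dirs, files in os.walk(sdk_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, arcname=os.path.relpath(os.path.abspath(full),
                                                       sdk_parent))

# From examples/Resize:
# build_lambda_zip("resize_lambda.zip", "greengrassResize.py", "greengrasssdk")
```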

Then, on the AWS Lambda console page, we upload the zip file.

After creating it, you can edit the Lambda function in the web editor. In the example shown above, the function publishes one of two possible messages to the hello/world topic every 10 seconds. In addition, whenever the Lambda function is triggered, resize.py gets called.
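
The ten-second cadence comes from the Timer re-arming itself at the end of each call. The same pattern, isolated from the Greengrass specifics (start_periodic is a hypothetical helper, not part of the SDK):

```python
import threading

def start_periodic(interval, callback):
    """Run callback now, then re-arm a one-shot Timer after each call.
    Returns a function that cancels further runs."""
    stop = threading.Event()

    def run():
        if stop.is_set():
            return
        callback()
        # each Timer fires only once, so we must schedule the next run ourselves
        timer = threading.Timer(interval, run)
        timer.daemon = True
        timer.start()

    run()
    return stop.set
```

Because each Timer fires only once, the function must schedule its successor itself; forgetting to re-arm is a common reason periodic messages stop.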

Click Save and choose Publish new version from the dropdown menu of Actions. Give it a version number and continue. You should be able to see Successfully created version <X> for function resize.

Step 6: Add Lambda function

Now go back to the Greengrass group console and add the Lambda function. Choose Use existing Lambda, then choose resize and click Next. After adding it to the Greengrass group, you can edit its configurations.

You need to change the following configurations and then click Update:

  1. For Run as, choose Another user ID/group ID and put 0 for both UID and GID, because we want to run the Lambda as root.
  2. Choose No container (always) for Containerization.
  3. Increase the timeout to 30 seconds.
  4. Choose Make this function long-lived and keep it running indefinitely for Lambda lifecycle.

Step 7: Add subscription

The final thing we need to add is the subscriptions. These specify the topics on which your devices and services will listen and broadcast.

You need to add subscriptions: choose resize under Lambdas as the source, and IoT Cloud under Services as the target. This means messages published by the Lambda can be received by the IoT console. Click Next and put hello/world in the Topic filter; this specifies the topic the IoT console is interested in. Finally, click Finish.

Repeat the above step with IoT Cloud as the source and resize as the target, so the Lambda function can also receive messages published on the topic hello/world. The final configuration looks similar to:

[Image: subscriptions list]
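
Topic filters follow MQTT matching rules: + matches exactly one level and # matches all remaining levels. The matching logic can be sketched as follows (illustrative only; AWS implements this for you):

```python
def topic_matches(topic_filter, topic):
    """Return True if an MQTT topic matches a subscription filter."""
    filter_levels = topic_filter.split('/')
    topic_levels = topic.split('/')
    for i, level in enumerate(filter_levels):
        if level == '#':
            return True                # matches this level and everything below
        if i >= len(topic_levels):
            return False               # filter is longer than the topic
        if level != '+' and level != topic_levels[i]:
            return False               # literal level mismatch
    return len(filter_levels) == len(topic_levels)
```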

Step 8: Deployment

We can now deploy the group, which means the Lambda function will be deployed to, and run on, the core device. Click Deploy under Actions on the Deployments tab.

You can choose Automatic detection when prompted. Once the deployment completes, you will see a message similar to:

[Image: deployment success message]

This means the Lambda function has been successfully deployed on your board.

Step 9: Test

We can now test the Lambda function. Go to the Test tab of the IoT console, and set up a subscription to the hello/world topic:

[Image: test subscription setup]

After clicking Subscribe to topic, you will see messages popping up every 10 seconds.

If you click the Publish to topic button, the Lambda function handler will be triggered and resize.py will be called to resize a picture.

You will then see the message returned by the function handler in the Lambda function. You can also find a newly resized picture at /home/xilinx/jupyter_notebooks/helloworld/images/resized.jpg in your file system; the picture has been resized with help from the PL. With a single mouse click on the IoT console, users can control the IoT device remotely to complete complex tasks.

Conclusion

In this article we have shown a very simple example of deploying AWS Greengrass and leveraging programmable logic accelerators with PYNQ. We demonstrated a more involved example at AWS re:Invent last year, where a Lambda function called out to a deep learning accelerator in the programmable logic; that example was based on PYNQ running on an Ultra96 board. Keep in mind that AWS Greengrass is supported on all PYNQ-enabled boards.

References

  1. Getting Started with AWS IoT Greengrass
  2. AWS Greengrass SDK
  3. PYNQ HelloWorld