
Edge Insights for Autonomous Mobile Robots (EI for AMR) Developer Guide

Contents
Chapter 1: Edge Insights for Autonomous Mobile Robots (EI for AMR)
How it Works ............................................................................................4
Recommended Hardware .......................................................................... 12
edgesoftware Command Line Interface (CLI)............................................... 13
EI for AMR Robot Tutorials ........................................................................ 20
UPS 6000 and UP Xtreme i11 Robot Kits............................................. 20
Create Your Own Robot Kit ............................................................... 36
Step 1: Hardware Assembly ..................................................... 36
Step 2: Integration into Edge Insights for Autonomous Mobile Robots ............................. 37
Step 3: Robot Base Node ROS 2 Node ....................................... 37
Step 4: Robot Base Node ROS 2 Navigation Parameter File ........... 39
Step 5: Navigation Full Stack.................................................... 40
Perception...................................................................................... 46
Intel® RealSense™ ROS 2 Sample Application............................... 46
ROS 2 OpenVINO™ Toolkit Sample Application ............................. 49
OpenVINO™ Sample Application................................................. 52
2D LIDAR and ROS 2 Cartographer............................................ 57
GStreamer* Pipelines .............................................................. 64
Point Cloud Library (PCL) Optimized for the Intel® oneAPI Base Toolkit ............................ 73
Navigation ................................................................................... 109
Collaborative Visual SLAM ...................................................... 109
Kudan Visual SLAM ............................................................... 122
FastMapping Algorithm .......................................................... 131
ADBSCAN Algorithm .............................................................. 133
ITS Path Planner ROS 2 Navigation Plugin ................................ 136
ITS Path Planner Plugin Customization ..................................... 141
Intel® oneAPI Base Toolkit Sample Application........................... 142
Robot Teleop Using a Gamepad ............................................... 145
Robot Teleop Using a Keyboard ............................................... 146
Simulation ................................................................................... 147
turtlesim.............................................................................. 147
Wandering Application in a ARIAC Gazebo* Simulation............... 150
Wandering Application in a Waffle Gazebo* Simulation ............... 152
Benchmarking and Profiling ............................................................ 154
VTune™ Profiler in a Docker* Container..................................... 154
OpenVINO™ Benchmarking Tool ............................................... 157
EI for AMR Container on a Virtual Machine ........................................ 160
Fibocom’s FM350 5G Module Integration........................................... 165
Change Existing and Add New Docker* Images to the EI for AMR SDK . 168
Troubleshooting for Robot Tutorials .................................................. 171
EI for AMR Robot Orchestration Tutorials .................................................. 174
Device Onboarding End-to-End Use Case.......................................... 174
Basic Fleet Management................................................................. 196
Basic Fleet Management Use Case........................................... 198
Remote Inference End-to-End Use Case ................................... 210
OTA Updates ................................................................................ 212


Troubleshooting for Robot Orchestration Tutorials .............................. 225


Intel® Edge Software Device Qualification (Intel® ESDQ) for EI for AMR ......... 236
Security ............................................................................................... 246
Real-Time Support ................................................................................. 251
Enable Intel® TCC Baseline Support ................................................. 252
Terminology.......................................................................................... 253
Notices and Disclaimers.......................................................................... 256


Chapter 1: Edge Insights for Autonomous Mobile Robots (EI for AMR)
EI for AMR offers containerized software packages and pre-validated hardware modules for sensor data
ingestion, classification, environment modelling, action planning, and action control. Based on the Robot
Operating System 2 (ROS* 2), it also includes the OpenVINO™ toolkit, Intel® oneAPI Base Toolkit (Base Kit),
Intel® RealSense™ SDK, and other software dependencies in a container, along with the source code, as well
as reference algorithms and deep learning models as working examples.
In addition to autonomous mobility, this package showcases map building and Simultaneous Localization And
Mapping (SLAM) loop closure functionality. The package uses an open source version of visual SLAM with
camera input from an Intel® RealSense™ camera. Optionally, the package allows you to run Light Detection
and Ranging (LIDAR) based SLAM and compare those results with visual SLAM results on accuracy and
performance indicators. In addition, this package detects objects and highlights them in the map.
Depending on the platform used, AI workloads run on an integrated GPU or on an Intel® Movidius™
Myriad™ X accelerator.
EI for AMR helps address industrial, manufacturing, consumer market, and smart city use cases that
include data collection, storage, and analytics on a variety of hardware nodes across the factory floor.

Collateral and description:

• How it Works: EI for AMR features
• edgesoftware Command Line Interface (CLI): Intel's Developer Catalog package management
• EI for AMR Robot Tutorials: Step-by-step, hands-on walkthroughs, including how to run a demo ROS 2 sample application inside the EI for AMR Docker* container
• EI for AMR Robot Orchestration Tutorials: Step-by-step, hands-on walkthroughs, including how to set up basic fleet management
• Troubleshooting for Robot Tutorials: Help with robot tutorials
• Troubleshooting for Robot Orchestration Tutorials: Help with robot orchestration tutorials
• Get Started Guide for Robots: Robot Kit installation
• Get Started Guide for Robot Orchestration: Server Complete Kit installation
• Release Notes: New features and known issues

How it Works
The Edge Insights for Autonomous Mobile Robots (EI for AMR) modules are deployed via Docker* containers
for enhanced Developer Experience (DX), support of Continuous Integration and Continuous Deployment
(CI/CD) practices and flexible deployment in different execution environments, including robot, development
PC, server, and cloud.
This section provides an overview of the modules and services featured with Edge Insights for Autonomous
Mobile Robots.

Modules and Services


The middleware layered architecture in the Intel® oneAPI Base Toolkit (Base Kit) and Intel® Distribution of
OpenVINO™ toolkit (OpenVINO™) abstracts hardware dependencies from the algorithm implementation.

ROS 2 with the Data Distribution Service (DDS) is used as the message bus. This publisher-subscriber
architecture based on ROS 2 topics decouples data providers from consumers.
Camera and LIDAR sensor data is abstracted with ROS 2 topics.
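Because every sensor stream and algorithm output is exposed as a ROS 2 topic, the standard ros2 command line tools can be used to inspect the message bus from inside any of the SDK containers. The topic names below (/scan, /camera/color/image_raw, /cmd_vel) are common conventions, not guaranteed names in every EI for AMR configuration:

# List all topics currently visible on the DDS message bus
ros2 topic list
# Print LIDAR scan messages as they are published
ros2 topic echo /scan
# Show the publishing rate of a camera topic and the type of the velocity command topic
ros2 topic hz /camera/color/image_raw
ros2 topic info /cmd_vel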
Video streaming processing pipelines are supported by GStreamer*. GStreamer* is a library for constructing
graphs of media-handling components. It decouples sensor ingestion, video processing, and AI object
detection via the OpenVINO™ toolkit DL Streamer framework. The applications it supports range from simple
Ogg Vorbis playback and audio/video streaming to complex audio (mixing) and video (non-linear editing)
processing.
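A simple way to experiment with such pipelines is the gst-launch-1.0 tool that ships with GStreamer*. The pipelines below are a minimal sketch: they assume a V4L2 camera at /dev/video0 and a local display, and DL Streamer elements such as gvadetect would be inserted in the same way, with model paths specific to your installation:

# Capture from a webcam, convert the frames, and render them in a window
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! autovideosink
# Pure software test pattern, useful when no camera is attached
gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink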
Also, more complex computational graphs that decouple Sense-Plan-Act autonomous mobile robot
applications can be implemented using ROS 2 topic registration.
This diagram shows the software components included in the EI for AMR package. The software stack keeps
evolving iteratively with additional algorithms, applications, and third-party ecosystem software components.

The EI for AMR software stack is based on software supported by and part of the underlying hardware
platform, their respective Unified Extensible Firmware Interface (UEFI) based boot, and their supported
Linux* operating system. For requirement details, see:
• Get Started Guide for Robots
• Get Started Guide for Robot Orchestration

EI for AMR Drivers


Edge Insights for Autonomous Mobile Robots relies on standard Intel® Architecture Linux* drivers that are
upstreamed in the Linux* kernel from kernel.org and included in Ubuntu* distributions. These drivers are
not included in the EI for AMR package. Some notable drivers that are specifically important for EI for AMR
include:
• 5G/LTE Device Drivers for 5G and LTE connectivity
• Driver for the Fibocom* FM350-G 5G/LTE modem
• Battery Bridge Kernel Module, which allows user-space applications to feed battery and power information
into the Linux kernel’s power supply subsystem. It has been designed to be used together with the ROS 2
Battery Bridge to allow ROS 2-based EI for AMR software stacks to forward battery information from an EI
for AMR’s microcontroller into the Linux kernel.
• MIPI CSI IMX390 Device Driver, for cameras that are using the Sony* IMX390 sensor and are connected
to a Tiger Lake platform SoC via a MIPI CSI connection
• Device Drivers for Intel® Movidius™ Myriad™ X VPUs
• Video4Linux2 Driver Framework, a collection of device drivers and an API for supporting real-time video
  capture on Linux* systems (see the device listing example after this list). It supports many USB webcams,
  TV tuners, and related devices, standardizing their output so that programmers can easily add video
  support to their applications.
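A quick way to confirm that the Video4Linux2 framework sees your cameras is the v4l2-ctl utility. This is a minimal sketch only; it assumes the v4l-utils package is installed on the host, and /dev/video0 is just an example device path:

# List all video capture devices known to the V4L2 framework
v4l2-ctl --list-devices
# Show the formats supported by the first device (adjust /dev/video0 to your setup)
v4l2-ctl --device /dev/video0 --list-formats-ext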

EI for AMR Middleware


EI for AMR integrates the following middleware packages on EI for AMRs.
• AAEON* ROS 2 interface, the ROS 2 driver node for AAEON EI for AMRs
• GStreamer*, which includes support for libv4l2 video sources, GStreamer* “good” plugins for video and
  audio, and a GStreamer* plugin for displaying a video stream in a window
• Kobuki, the ROS 2 driver node for Cogniteam’s Pengo robots
• Intel® oneAPI Base Toolkit, which enables you to execute the Intel® oneAPI Base Toolkit sample applications.
  The Intel® oneAPI Base Toolkit is a core set of tools and libraries for developing high-performance, data-centric
  applications across diverse architectures. It features an industry-leading C++ compiler and the Data
  Parallel C++ (DPC++) language, an evolution of C++ for heterogeneous computing. For Intel® oneAPI
  Base Toolkit training and a presentation of the CUDA* converter, refer to:
• Intel® DPC++ Compatibility Tool Self-Guided Jupyter Notebook Tutorial
• Optimize Edge Compute Performance by Migrating CUDA* to DPC++
• OpenCV (Open Source Computer Vision Library), an open-source library that includes several hundred
computer vision algorithms
• Intel® Distribution of OpenVINO™ toolkit, which is a comprehensive toolkit for quickly developing
applications and solutions that solve a variety of tasks including emulation of human vision, automatic
speech recognition, natural language processing, recommendation systems, and many others. Based on
latest generations of artificial neural networks, including Convolutional Neural Networks (CNNs), recurrent
and attention-based networks, the toolkit extends computer vision and non-vision workloads across Intel
hardware, maximizing performance. It accelerates applications with high-performance, AI and deep
learning inference deployed from edge to cloud.
• Intel® RealSense™ ROS 2 Wrapper node, used for Intel® RealSense™ cameras with ROS 2
• Intel® RealSense™ SDK, used to implement software for Intel® RealSense™ cameras
• ROS 2 ros1_bridge, which provides a network bridge allowing the exchange of messages between ROS1
and ROS 2. This lets users evaluate the EI for AMR SDK on EI for AMRs or with sensors for which only
ROS1 driver nodes exist.


• ROS 2, Robot Operating System (ROS), which is a set of open source software libraries and tools for
building robot applications
ROS 2 depends on other middleware, like the Object Management Group (OMG) DDS connectivity
framework, which uses a publish-subscribe pattern. The standard ROS 2 distribution includes the
eProsima Fast DDS implementation.
• ROS 2 Battery Bridge, which utilizes the Battery Bridge Kernel Module to forward battery information from
an EI for AMR’s microcontroller into the Linux kernel
• RPLIDAR ROS 2 Wrapper node, for using RPLIDAR LIDAR sensors with ROS 2
• SICK Safetyscanners ROS 2 Driver, which reads the raw data from the SICK Safety Scanners and
publishes the data as a laser_scan msg
• Teleop Twist Joy, a generic facility for teleoperating twist-based ROS 2 robots with a standard joystick. It
converts joy messages to velocity commands. This node provides no rate limiting or autorepeat
functionality. It is expected that you take advantage of the features built into ROS 2 Driver for Generic
Joysticks for this.
• Teleop Twist Keyboard, generic keyboard teleoperation for ROS 2 (see the example after this list)
• Twist Multiplexer for when there is more than one source to move a robot with a geometry_msgs::Twist
message. It is important to multiplex all input sources into a single source that goes to the EI for AMR
control node.
• ROS 2 Driver for Generic Joysticks
• ModemManager, which provides a unified high level API for communicating with mobile broadband
modems, regardless of the protocol used to communicate with the actual device (for example, generic AT,
vendor-specific AT, QCDM, QMI, MBIM). Edge Insights for Autonomous Mobile Robots uses ModemManager
to establish 5G connections using the Fibocom* FM350-G 5G/LTE modem.
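As a rough illustration of how the teleop components above are used, the following commands are a sketch only: the package and executable names are the upstream ROS 2 defaults, and the /cmd_vel remapping is an assumption that depends on your robot base node:

# Keyboard teleoperation: publishes geometry_msgs/Twist messages from key presses
ros2 run teleop_twist_keyboard teleop_twist_keyboard --ros-args --remap cmd_vel:=/cmd_vel
# Gamepad teleoperation: joy_node reads the gamepad, teleop_node converts joy messages to velocity commands
ros2 run joy joy_node &
ros2 run teleop_twist_joy teleop_node --ros-args --remap cmd_vel:=/cmd_vel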

EI for AMR Algorithms


EI for AMR includes reference algorithms as well as deep learning models as working examples for the
following automated robot control functional areas:
• DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is an unsupervised clustering
algorithm that clusters high dimensional points based on their distribution density. Adaptive DBSCAN
(ADBSCAN) has clustering parameters that are adaptive based on range and are especially suitable for
processing LIDAR data. It improves the object detection range by 20-30% on average.
• Collaborative visual SLAM, a collaborative visual simultaneous localization and mapping (SLAM) framework
for service robots. With an edge server maintaining a map database and performing global optimization,
each robot can register to an existing map, update the map, or build new maps, all with a unified
interface and low computation and memory cost. A collaborative visual SLAM system consists of at least
two elements:
• The tracker is a visual SLAM system with support for inertial and odometry input. It estimates the
camera pose in real-time, and maintains a local map. It can work without a server, but if it has one
configured, it communicates with the server to query and update the map. The tracker represents a
robot. There can be multiple trackers running at the same time.
• The server maintains the maps and communicates with all trackers. For each new keyframe from a
tracker, it detects possible loops, both intra-map and inter-map. Once detected, the server performs
map optimization or map merging and distributes the updated map to corresponding trackers.
For collaborative visual SLAM details, refer to A Collaborative Visual SLAM Framework for Service Robots
paper.
• FastMapping, an algorithm that creates a 3D voxel map of a robot’s surroundings, based on Intel®
  RealSense™ depth sensor data.
• OpenVINO™ Model Zoo, optimized deep learning models and a set of demos to expedite development of
high-performance deep learning inference applications. A developer can use these pre-trained models
instead of training their own models to speed-up the development and production deployment process.

• ROS 2 Cartographer, a system that provides real-time simultaneous localization and mapping (SLAM)
based on real-time 2D LIDAR sensor data. It is used to generate as-built floor plans in the form of
occupancy grids.
• ROS 2 Depth Image to Laser Scan, which converts a depth image to a laser scan for use with navigation
and localization.
• ROS 2 Navigation stack, which seeks a safe way to have a mobile robot move from point A to point B.
  It performs dynamic path planning, computes velocities for the motors, detects and avoids obstacles, and
  structures recovery behaviors (see the navigation goal example after this list). Navigation 2 uses behavior
  trees to call modular servers to complete an action. An action can be computing a path, controlling effort,
  recovery, or any other navigation-related action. These are separate nodes that communicate with the
  behavior tree over a ROS 2 action server.
• RTAB-Map (Real-Time Appearance-Based Mapping), an RGB-D, stereo, and LIDAR graph-based SLAM
  approach based on an incremental appearance-based loop closure detector. The loop closure detector uses
  a bag-of-words approach to determine how likely it is that a new image comes from a previous location or a
  new location. When a loop closure hypothesis is accepted, a new constraint is added to the map’s graph, and
  then a graph optimizer minimizes the errors in the map. A memory management approach is used to limit
  the number of locations used for loop closure detection and graph optimization, so that real-time constraints
  on large-scale environments are always respected. RTAB-Map can be used alone with a handheld Kinect, a
  stereo camera, or a 3D LIDAR for 6DoF mapping, or on a robot equipped with a laser rangefinder for 3DoF
  mapping.
• IMU Tools - filters and visualizers - from https://github.com/CCNYRoboticsLab/imu_tools:
• imu_filter_madgwick: A filter which fuses angular velocities, accelerations, and (optionally) magnetic
readings from a generic IMU device into an orientation.
• imu_complementary_filter: A filter which fuses angular velocities, accelerations and (optionally)
magnetic readings from a generic IMU device into an orientation quaternion using a novel approach
based on a complementary fusion.
• rviz_imu_plugin: A plugin for rviz which displays sensor_msgs::Imu messages.
• Intelligent Sampling and Two-Way Search (ITS) global path planner ROS 2 plugin, a plugin for the ROS 2
  Navigation package that conducts a path planning search on a roadmap from two directions simultaneously.
  The main inputs are the 2D occupancy grid map, the robot position, and the goal position. The occupancy
  grid is converted into a roadmap, which can be saved for future inquiries. The output is a list of waypoints
  that constructs the global path. All inputs and outputs are in standard ROS 2 formats. Currently, the ITS
  plugin does not support continuous replanning; to use this plugin, a simple behavior tree with compute path
  to pose and follow path should be used. The inputs for the ITS planner are the global 2d_costmap
  (nav2_costmap_2d::Costmap2D) and the start and goal pose (geometry_msgs::msg::PoseStamped). The
  outputs are the 2D waypoints of the path. The ITS planner converts the 2d_costmap to either a Probabilistic
  Road Map (PRM) or a Deterministic Road Map (DRM). The generated roadmap is saved in a txt file, which can
  be reused for multiple inquiries. Once a roadmap is generated, the ITS conducts a two-way search to find a
  path from the source to the destination. Either a smoothing filter or Catmull-Rom spline interpolation can be
  used to create a smooth and continuous path. The generated smooth path is in the form of a ROS navigation
  message type (nav_msgs::msg).
• Kudan Visual SLAM (KdVisual), Kudan’s proprietary visual SLAM software, which has been extensively
  developed and tested for use in commercial settings. Open source and other commercial algorithms struggle
  in many common use cases and scenarios; Kudan Visual SLAM achieves much faster processing time, higher
  accuracy, and more robust results in dynamic situations.
• The Point Cloud Library (PCL), a standalone, large scale, open project for 2D/3D image and point cloud
processing (see also https://pointclouds.org/). The EI for AMR SDK version of PCL adds optimized
implementations of several PCL modules which allow you to offload computation to a GPU.
• Robot_localization (from https://github.com/cra-ros-pkg/robot_localization), a collection of state
estimation nodes, each of which is an implementation of a nonlinear state estimator for robots moving in
3D space. It contains two state estimation nodes, ekf_localization_node and ukf_localization_node. In
addition, robot_localization provides navsat_transform_node, which aids in the integration of GPS data.
• SLAM Toolbox, a set of tools and capabilities for 2D SLAM built by Steve Macenski that includes the
following.


• Starting, mapping, saving pgm files, and saving maps for 2D SLAM mobile robotics
• Refining, remapping, or continuing to map a saved (serialized) pose-graph at any time
• Loading a saved pose-graph to continue mapping in a space while also removing extraneous information
  from newly added scans (life-long mapping)
• An optimization-based localization mode built on the pose-graph. Optionally run localization mode
without a prior map for “LIDAR odometry” mode with local loop closures
• Synchronous and asynchronous modes of mapping
• Kinematic map merging (with an elastic graph manipulation merging technique in the works)
• Plugin-based optimization solvers with an optimized Google* Ceres-based plugin
• rviz2 plugin for interacting with the tools
• Graph manipulation tools in rviz2 to manipulate nodes and connections during mapping
• Map serialization and lossless data storage
• See also https://github.com/SteveMacenski/slam_toolbox.
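To get a feel for how the Navigation stack is driven at run time, a goal can be sent to its action servers from the command line. This is a sketch only: it assumes a running Navigation 2 stack with the default /navigate_to_pose action and a frame named map, and the coordinates are placeholders:

# List the action servers exposed by the navigation stack
ros2 action list
# Send a single navigation goal to the Nav2 behavior-tree navigator
ros2 action send_goal /navigate_to_pose nav2_msgs/action/NavigateToPose "{pose: {header: {frame_id: map}, pose: {position: {x: 1.5, y: 0.5, z: 0.0}, orientation: {w: 1.0}}}}"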

EI for AMR Applications


• Intel In-Band Manageability, monitors device(s) and updates software and firmware of the device(s)
remotely.
• Object Detection AI Application, detects objects in video data using a deep learning neural network model
from the OpenVINO™ Model Zoo.
• VDA5050 Sample Handler, processes selected commands from the VDA5050 EI for AMR/AGV
  interoperability standard and forwards them to the EI for AMR’s software components for autonomous
  navigation.
• Wandering Application, included in the EI for AMR SDK to demonstrate how the middleware, algorithms, and
  the ROS 2 navigation stack combine to move a robot around a room while avoiding obstacles, updating a
  local map in real time (exposed as a ROS 2 topic), and publishing AI-based object detections on another
  ROS 2 topic. It uses the robot’s sensors and actuators that are available from the robot’s hardware
  configuration.

Edge Server Middleware


• FIDO Device Onboarding, an automatic onboarding protocol for IoT devices. Permits late binding of device
credentials, so that one manufactured device may onboard, without modification, to many different IoT
platforms.
• Intel® Smart Edge Open, a software toolkit for building edge platforms. It speeds up development of edge
solutions that host network functions alongside AI, media processing, and security workloads with
reference solutions optimized for common use cases powered by a Certified Kubernetes* cloud native
stack.
• kube-proxy, the Kubernetes* network proxy that runs on each node (see the verification example after this
  list). It reflects services as defined in the Kubernetes* API on each node and can do simple TCP, UDP, and
  SCTP stream forwarding or round-robin TCP, UDP, and SCTP forwarding across a set of backends. Service
  cluster IPs and ports are currently found through Docker* links-compatible environment variables specifying
  ports opened by the service proxy. There is an optional addon that provides cluster DNS for these cluster IPs.
  The user must create a service with the apiserver API to configure the proxy.
• The VDA5050 ROS 2 Bridge, which translates VDA5050 messages into ROS 2 messages that can be
  received and executed by ROS 2 components in an EI for AMR SDK-based EI for AMR control system.
• The VDA Navigator, a basic waypoint navigator that receives a list of waypoints and sends them to the
  ROS 2 Navigation stack one after the other.
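Once the Intel® Smart Edge Open control plane is up, standard kubectl commands can be used to verify the cluster state. This is a generic sketch: it assumes kubectl is configured against the edge server's cluster and that the kube-proxy pods run in the kube-system namespace, as in a default Kubernetes* deployment:

# Show the cluster nodes and their status
kubectl get nodes -o wide
# Confirm that kube-proxy and the other system pods are running
kubectl get pods -n kube-system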

Edge Server Algorithms


• OpenVINO™ Model Zoo, which includes optimized deep learning models and a set of demos to expedite
development of high-performance deep learning inference applications. A developer can use these pre-
trained models instead of training their own models to speed-up the development and production
deployment process.

Edge Server Applications
• OpenVINO™ Model Server (OVMS), a high-performance system for serving machine learning models (see
  the example after this list). It is based on C++ for high scalability and optimized for Intel solutions, so that
  you can take advantage of the power of the Intel® Xeon® processor or Intel’s AI accelerator and expose it
  over a network interface. OVMS uses the same architecture and API as TensorFlow Serving, while applying
  OpenVINO™ for inference execution. Inference service is provided via gRPC or REST API, making it easy to
  deploy new algorithms and AI experiments.
• ThingsBoard*, an open-source IoT platform for data collection, processing, visualization, and device
  management. It enables device connectivity via industry standard IoT protocols (MQTT, CoAP, and HTTP)
  and supports both cloud and on-premises deployments.
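As an illustration of the TensorFlow Serving-compatible API, the requests below are a sketch only: they assume an OVMS instance already serving a model named my_model with its REST interface exposed on port 8000 of the edge server; adjust the host, port, and model name to your deployment:

# Query the status of a served model over the REST API
curl http://localhost:8000/v1/models/my_model
# Query the model's input and output metadata
curl http://localhost:8000/v1/models/my_model/metadata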

Tools
ROS Tools
Edge Insights for Autonomous Mobile Robots is validated using ROS 2 nodes. ROS 1 is not compatible with EI
for AMR components. A ROS 1 bridge is included to allow EI for AMR components to interface with ROS 1
components.
• From the hardware perspective of the supported platforms, there are no known limitations for ROS 1
components.
• For information on porting ROS 1 applications to ROS 2, here is a guide from the ROS community.
Edge Insights for Autonomous Mobile Robots includes:
• colcon (collective construction), a command line tool to improve the workflow of building, testing, and
  using multiple software packages (see the example after this list). It automates the process, handles the
  ordering, and sets up the environment to use the packages.
• rqt, a software framework of ROS 2 that implements the various GUI tools in the form of plugins.
• rviz2, a tool used to visualize ROS 2 topics.
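The snippet below shows the typical colcon workflow for building a ROS 2 workspace inside one of the SDK containers. It is a generic sketch; the workspace path ~/ros2_ws is a placeholder and not part of the EI for AMR directory layout:

# Build all packages in a workspace and source the result
mkdir -p ~/ros2_ws/src
cd ~/ros2_ws
colcon build --symlink-install
source install/setup.bash
# Inspect the running system with the bundled GUI tools
rqt &
rviz2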
Simulation
Edge Insights for Autonomous Mobile Robots includes:
• The Gazebo* robot simulator, making it possible to rapidly test algorithms, design robots, perform
regression testing, and train AI systems using realistic scenarios. Gazebo offers the ability to simulate
populations of robots accurately and efficiently in complex indoor and outdoor environments.
• An industrial simulation room model for Gazebo*, the Open Source Robotics Foundation (OSRF) Gazebo
Environment for Agile Robotics (GEAR) workcell that was used for the ARIAC competition in 2018.
Other Tools
Edge Insights for Autonomous Mobile Robots includes:
• Intel® oneAPI Base Toolkit, which includes the DPC++ compiler and compatibility tool, as well as
debugging and profiling tools like VTune™ Profiler, etc. (formerly known as Intel System Studio).
• OpenVINO™ Tools, including the model optimization tool.

Deployment
All applications, algorithms, and middleware components which are executed as standalone processes are
deployed in their own Docker* containers. This allows you to selectively pull these components onto an EI for
AMR or Edge Server and launch them there.
For development purposes, the middleware libraries and all tools are deployed in a single container called
Full SDK. This container is constructed hierarchically by extending the OpenVINO SDK container, which itself
extends the ROS2 SDK container. For storage space savings, you can choose to run any of the containers
depending on the needs of your application.


• The ROS2 SDK container includes the ROS 2 middleware and tools, Intel® RealSense™ SDK and ROS 2
wrapper, GStreamer* and build tools, ROS 2 packages (Cartographer, Navigation, RTAB_MAP) and the
FastMapping application (the Intel-optimized version of octomap).
• The OpenVINO SDK container includes the ROS2 SDK, as well as the OpenVINO™ development toolkit, the
OpenVINO™ DL GStreamer* plugins and the Wandering demonstration application.
• The Full SDK container includes the OpenVINO™ container, as well as the Intel® oneAPI Base Toolkit, the
  Data Parallel C++ (DPC++) compatibility tool, and profiling and analysis tools (a launch example follows
  this list).
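For example, to open an interactive shell in one of these containers, the run_interactive_docker.sh helper used throughout the tutorials can be pointed at the image you need. The image tag below is an assumption based on this release's naming scheme; check docker images for the exact tags present in your installation:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
./run_interactive_docker.sh eiforamr-full-flavour-sdk:2022.3 eiforamr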

Recommended Hardware

Knowledge/Experience
• You are familiar with executing Linux* commands.
• You have basic Docker* experience.
• A ROS 1 or ROS 2 background is recommended.

Target System for the Robot Base Kit


• Intel® processors:
• Intel® Atom® processor with Intel® SSE4.1 support
• Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
• 6th Generation or newer Intel® Core™ processors
• 8 GB RAM
• 64 GB hard drive
• Intel® RealSense™ camera D435i
• Accelerator: Intel® Movidius™ Myriad™ X VPU (optional)
• IOT Ubuntu* Desktop 20.04
• Slamtec* RPLIDAR A3 2D LIDAR (optional)

NOTE

Intel does not recommend running simulations, like Gazebo*, on a robot.


Intel does not recommend compilation on robots with 8 GB of RAM.

Target System for Development and Simulations with the Robot Complete Kit
• Intel® processors:
• 11th Generation Intel® Core™ processors with Intel® Iris® Xe Integrated Graphics or Intel® UHD Graphics
• 10th Generation Intel® Core™ processors with an integrated GPU and Intel® UHD Graphics
• 16 GB RAM
• 128 GB hard drive
• Intel® RealSense™ camera D435i
• Accelerator: Intel® Movidius™ Myriad™ X VPU (optional)
• IOT Ubuntu* Desktop 20.04
• Slamtec* RPLIDAR A3 2D LIDAR (optional)

Target System for the Server Complete Kit and Robot and Server Complete Kit
• Intel® processors:
• Intel® Xeon® processor E3, E5, and E7 family

• 2nd Generation Intel® Xeon® Scalable Processors
• 3rd Generation Intel® Xeon® Scalable Processors

NOTE Intel recommends the previously listed Intel® Xeon® processors for any system running resource
intensive loads. Resource intensive uses include such things as remote inference and collaborative
visual SLAM. If the server is only for fleet management, robot deployment, and robot onboarding, the
following Intel® Core™ processors are sufficient.

• 11th Generation Intel® Core™ processors with Intel® Iris® Xe Integrated Graphics or Intel® UHD Graphics
• 10th Generation Intel® Core™ processors with an integrated GPU and Intel® UHD Graphics
• 16 GB RAM
• 128 GB hard drive
• Ubuntu* 20.04 LTS

edgesoftware Command Line Interface (CLI)


The edgesoftware CLI helps you manage packages in Intel’s Developer Catalog.
In this tutorial, you:
• Try out commands and get familiar with CLI and the package you installed
• Learn how to update modules
• Learn how to install custom components
• Learn how to export a package you installed, including custom modules, so that you can install it on other
edge nodes.

Get Started with the edgesoftware CLI


1. Open a terminal window.
2. Go to the edge_insights_for_amr directory.
3. Try out the following commands.
Get Help or List the Available Commands
• Command:

./edgesoftware --help
• Response:

Usage: edgesoftware [OPTIONS] COMMAND [ARGS]...


A CLI wrapper for management of Intel® Edge Software Hub packages

Options:
-v, --version Show the version number and exit.
--help Show this message and exit.

Commands:
  download   Download modules of a package.
  export     Exports the modules installed as a part of a package.
  install    Install modules of a package.
  list       List the modules of a package.
  log        Show log of CLI events.
  pull       Pull Docker image.
  uninstall  Uninstall the modules of a package.
  update     Update the modules of a package.
  upgrade    Upgrade a package.
View the Software Version
• Command:

./edgesoftware --version
• Response: The edgesoftware version, build date, and target OS.
List the Package Modules
• Command:

./edgesoftware list
• Response: The modules installed and status.


List Modules Available for Download


• Command:

./edgesoftware list --default


• Response: All modules available for download for that package version, with module IDs and versions.
Download Package Modules
• Command:

./edgesoftware download
• Response: All available modules in that package are downloaded.


Display the CLI Event Log


• Command:

./edgesoftware log
• Response: CLI event log information, such as:
• target system information (hardware and software)
• system health
• installation status
• modules you can install

See the Installation Event Log for a Module


• Command:

./edgesoftware log <MODULE_ID>


You can specify multiple <MODULE_ID> arguments by listing them with a space between each.

NOTE To find the module ID, use:

./edgesoftware list

• Response: The installation log for the module.


Install Package Modules


This edgesoftware command installs package modules on the target system. To do so, the command looks at
edgesoftware_configuration.xml that was downloaded from the Intel Edge Software Hub when you
installed the Edge Insights for Autonomous Mobile Robots software. This file contains information about the
modules to install.
During the installation, you are prompted to enter your product key. The product key is in the email message
you received from Intel confirming your Edge Insights for Autonomous Mobile Robots download.

Warning Do not manually edit edgesoftware_configuration.xml.

1. Open a terminal window.


2. Go to the edge_insights_for_amr directory.
3. Run the install command:

./edgesoftware install

Update the Package Modules

NOTE On a fresh Linux* installation, you might need to use the install command at least once
before performing an update. install makes sure all dependencies and packages are installed on the
target system.

./edgesoftware install

When you are ready to perform the update, use:

./edgesoftware update <MODULE_ID>


During the installation, you are prompted to enter your product key. The product key is in the email message
you received from Intel confirming your Edge Insights for Autonomous Mobile Robots download.

NOTE To find the module ID, use:

./edgesoftware list -d


Export the Package for Installation


The edgesoftware CLI lets you package the installed modules, customer applications, and dependencies as
part of a package. The export is provided in a .zip file that includes installation scripts, XML files, and an
edgesoftware Python* executable.
Command:

./edgesoftware export

Uninstall the Packages


The edgesoftware CLI lets you uninstall the complete package or individual components from the package.
To uninstall an individual module, run the following command:

./edgesoftware uninstall <MODULE_ID>


To uninstall all packages, run the following command:

./edgesoftware uninstall -a

NOTE This command does not uninstall Docker* Compose and Docker* Community Edition (CE).


Troubleshooting
If the following error is encountered:

PermissionError: [Errno 13] Permission denied: '/var/log/esb-cli/Edge_Insights_for_Autonomous_Mobile_Robots_2021.3/output.log'

Run the CLI commands with sudo:

sudo ./edgesoftware <CLI_commands>

EI for AMR Robot Tutorials


Prerequisite: Follow the instructions in Get Started Guide for Robots.
• The target system for the Robot Base Kit is different from the target system for development and
  simulations, so make sure that your target system meets the applicable Recommended Hardware
  requirements.
• When you download the EI for AMR software, select the:
• Robot Complete Kit for development and simulations
• Robot Base Kit for installation on a robot
• UP Xtreme i11 Robotic Kit to see how AAEON’s UPS 6000 and UP Xtreme i11 Robot Kits work
With step-by-step instructions covering real world usage scenarios, these tutorials provide a learning path for
developers to use and configure EI for AMR.
You can execute the following sample applications on the eiforamr-full-flavour-sdk container.

UPS 6000 and UP Xtreme i11 Robot Kits

Hardware Prerequisite
You have one of these AAEON* robot kits:
• UPS 6000 Robotic Kit
• UP Xtreme i11 Robotic Kit
This tutorial uses the UP Xtreme i11 Robotic Kit.
If you need help assembling your robot, see AAEON* Resources.
You can use one of these teleop methods to validate that the robot kit hardware setup was done correctly.
• Robot Teleop Using a Gamepad
Start at step 2 (insert the USB dongle in the robot); and, for step 5, run the yml file exactly as shown in
the example (ignore the instruction to replace it with your own generic yml file).
• Robot Teleop Using a Keyboard
For step 2, instead of customizing your file, use the exact command in the example.

NOTE The full-sdk docker image is only present in the Robot Complete Kit, not in the Robot Base Kit
or UP Xtreme i11 Robotic Kit.

Check Your Installation


1. Check if your installation has the required EI for AMR Docker* images.

docker images | egrep "amr-aaeon-amr-interface|amr-ros-base|amr-imu-tools|amr-robot-localization|amr-nav2|amr-wandering|amr-realsense|amr-collab-slam|amr-collab-slam-gpu"
# if you have them installed, the result is:
amr-aaeon-amr-interface
amr-ros-base
amr-imu-tools
amr-robot-localization
amr-realsense
amr-collab-slam
amr-collab-slam-gpu
amr-nav2
wandering

NOTE If these images are not installed, continuing with these steps triggers a build that takes
longer than an hour (sometimes, a lot longer depending on the system resources and internet
connection).

2. If these images are not installed, Intel recommends checking your installation with FastMapping
Algorithm or installing the Robot Complete Kit with the Get Started Guide for Robots.

Calibrate Your Robot’s Inertial Measurement Unit (IMU) Sensor


The IMU sensor is used to determine the robot’s orientation. Moving the robot interferes with calibration, so
do not move the robot while performing these steps.
1. Prepare the environment:

mkdir ~/imu_cal
sudo chmod 0777 ~/imu_cal
cd <edge_insights_for_amr_path>Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
2. Start the ros2_amr_interface:

chmod a+x run_interactive_docker.sh
./run_interactive_docker.sh amr-aaeon-amr-interface:2022.3 eiforamr -c imu_cal
export ROS_DOMAIN_ID=27
ros2 run ros2_amr_interface amr_interface_node --ros-args -p try_reconnect:=true -p publishTF:=true --remap /amr/cmd_vel:=/cmd_vel -p port_name:=/dev/ttyUSB0
Expected output example:

[INFO] [1655311130.138413471] [IoContext::IoContext]: Thread(s) Created: 2
[INFO] [1655311130.139098979] [AMR_node]: Serial opened on /dev/ttyUSB0 at 115200
[INFO] [1655311131.144572706] [AMR_node]: Hardware is now online
If your output is not similar to this, verify whether the motor controller is attached to a port other than
/dev/ttyUSB0 and adapt the command accordingly.
3. In another terminal, attach to the opened Docker* image, and get the offsets needed for calibration.

docker exec -it imu_cal bash
source ros_entrypoint.sh
export ROS_DOMAIN_ID=27
awk -F',' '{sum+=$17; ++n} END { print "Ang_vel_x: "sum"/"n"="0-sum/n }' <( timeout 4s ros2 topic echo /amr/imu/raw --csv )
awk -F',' '{sum+=$18; ++n} END { print "Ang_vel_y: "sum"/"n"="0-sum/n }' <( timeout 4s ros2 topic echo /amr/imu/raw --csv )
awk -F',' '{sum+=$19; ++n} END { print "Ang_vel_z: "sum"/"n"="0-sum/n }' <( timeout 4s ros2 topic echo /amr/imu/raw --csv )
awk -F',' '{sum+=$29; ++n} END { print "linear_accel_x: "sum"/"n"="0-sum/n }' <( timeout 4s ros2 topic echo /amr/imu/raw --csv )
awk -F',' '{sum+=$30; ++n} END { print "linear_accel_y: "sum"/"n"="0-sum/n }' <( timeout 4s ros2 topic echo /amr/imu/raw --csv )


Expected output example:

eiforamr@glaic3tglaaeon1:~/workspace$ awk -F',' '{sum+=$17; ++n} END { print "Ang_vel_x: "sum"/"n"="0-sum/n }' <( timeout 4s ros2 topic echo /amr/imu/raw --csv )
Ang_vel_x: 0.873369/290=-0.00301162
eiforamr@glaic3tglaaeon1:~/workspace$ awk -F',' '{sum+=$18; ++n} END { print "Ang_vel_y: "sum"/"n"="0-sum/n }' <( timeout 4s ros2 topic echo /amr/imu/raw --csv )
Ang_vel_y: 1.16431/261=-0.00446097
eiforamr@glaic3tglaaeon1:~/workspace$ awk -F',' '{sum+=$19; ++n} END { print "Ang_vel_z: "sum"/"n"="0-sum/n }' <( timeout 4s ros2 topic echo /amr/imu/raw --csv )
Ang_vel_z: -0.487217/290=0.00168006
eiforamr@glaic3tglaaeon1:~/workspace$ awk -F',' '{sum+=$29; ++n} END { print "linear_accel_x: "sum"/"n"="0-sum/n }' <( timeout 4s ros2 topic echo /amr/imu/raw --csv )
linear_accel_x: -104.311/261=0.399657
eiforamr@glaic3tglaaeon1:~/workspace$ awk -F',' '{sum+=$30; ++n} END { print "linear_accel_y: "sum"/"n"="0-sum/n }' <( timeout 4s ros2 topic echo /amr/imu/raw --csv )
linear_accel_y: 120.091/261=-0.460118
After some time, the aaeon node stops publishing data on /amr/imu/raw. When this happens, you get
results similar to:

"linear_accel_z: /=-nan"
To fix this, restart the aaeon node:

# Go to the terminal where you started "the ros2_amr_interface"
# Stop the AAEON node
ctrl-c
# re-start the AAEON node
ros2 run ros2_amr_interface amr_interface_node --ros-args -p try_reconnect:=true -p publishTF:=true --remap /amr/cmd_vel:=/cmd_vel -p port_name:=/dev/ttyUSB0
# go back to the terminal where you get the data for calibration and continue with the commands
4. Put these values in aaeon_node_params.yaml:

• Ang_vel_x in gyro: x
• Ang_vel_y in gyro: y
• Ang_vel_z in gyro: z
• linear_accel_x in accelerometer: x
• linear_accel_y in accelerometer: y

cd <edge_insights_for_amr_path>Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
gedit 01_docker_sdk_env/artifacts/01_amr/amr_generic/param/aaeon_node_params.yaml

# Replace the values in this file with the ones you got in the previous step.

imu:
frame_id: imu_link
offsets:
accelerometer:
x: 0.399657
y: -0.460118
z: 0.0
gyro:
x: -0.00301162
y: -0.00446097
z: 0.00168006

NOTE Indentation is important in yaml files, so make sure to align offsets with frame_id. If the
indentation is incorrect, the container reports an error when started.

5. Verify that the changes are correctly aligned and that the aaeon-amr-interface node can start:

cd <edge_insights_for_amr_path>Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=27
docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/aaeon_wandering__aaeon_realsense_collab_slam_fm_nav2_ukf.tutorial.yml up aaeon-amr-interface
Expected results:

Creating amr-aaeon-amr-interface ... done
Attaching to amr-aaeon-amr-interface
amr-aaeon-amr-interface | ros_distro = foxy
amr-aaeon-amr-interface | USER = eiforamr
amr-aaeon-amr-interface | User's HOME = /home/eiforamr
amr-aaeon-amr-interface | ROS_HOME = /home/eiforamr/.ros
amr-aaeon-amr-interface | ROS_LOG_DIR = /home/eiforamr/.ros/ros_log
amr-aaeon-amr-interface | ROS_WORKSPACE = /home/eiforamr/ros2_ws
amr-aaeon-amr-interface | [INFO] [1659008958.761251488] [IoContext::IoContext]: Thread(s) Created: 2
amr-aaeon-amr-interface | [INFO] [1659008958.761926192] [AMR_node]: Serial opened on /dev/ttyUSB0 at 115200
amr-aaeon-amr-interface | [INFO] [1659008959.779575257] [AMR_node]: Hardware is now online
If the “Hardware is now online” message is not received, but a message similar to the following one is
received, check the alignment of the yaml file again.

what(): failed to initialize rcl: Couldn't parse params file:


Send Ctrl-c to the terminal where you ran the docker-compose command to close it; or run, in a
different terminal:

docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/aaeon_wandering__aaeon_realsense_collab_slam_fm_nav2_ukf.tutorial.yml down --remove-orphans

Map an Area with the Wandering Application and UP Xtreme i11 Robotic Kit
The goal of the wandering application is to map an area and avoid hitting objects.
1. Place the robot in an area with multiple objects in it.
2. Go to the installation folder of Edge_Insights_for_Autonomous_Mobile_Robots:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=27
# The following commands make sure you have correct permissions so that collaborative SLAM can save the map
sudo find . -type d -exec chmod 775 {} +
sudo chown $USER:$USER * -R
3. Start mapping the area:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/aaeon_mapping_realsense_collab_slam_nav2_ukf.tutorial.yml up
Expected result: The robot starts wandering around the room and mapping the entire area.
4. On a different terminal, prepare the environment to visualize the mapping and the robot using rviz2.


NOTE If available, use a different development machine because rviz2 consumes a lot of resources
that may interfere with the robot.

cd <edge_insights_for_amr_path>Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=27
CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/rviz_robot_wandering.yml up


5. To see the map in 3D, you can check the MarkerArray:


NOTE Displaying in 3D consumes a lot of the system resources. Intel recommends opening rviz2 on a
development system. The development system needs to be in the same network and have the same
ROS_DOMAIN_ID set.
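To confirm that the development system and the robot can see each other's topics, set the same ROS_DOMAIN_ID on both machines and check the topic list. The domain ID 27 matches the value used in these tutorials; any ID works as long as it is identical on both ends and both machines are on the same network:

# Run on both the robot and the development system
export ROS_DOMAIN_ID=27
ros2 topic list
# The robot's topics (for example, /map or /tf) should appear on the development system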


6. To stop the robot from mapping the area, do one of the following:


• Type Ctrl-c in the terminal where the aaeon_wandering__aaeon_realsense_collab_slam_fm_nav2_ukf.tutorial.yml was run.
• (Preferred method because this option cleans the workspace) On the system where you ran docker-compose up in step 3, use docker-compose down:

docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/aaeon_wandering__aaeon_realsense_collab_slam_fm_nav2_ukf.tutorial.yml down --remove-orphans
If the robot moves in an unpredictable way and hits objects easily, there may be some hardware
configuration issues. See the Troubleshooting section for suggestions.

Start the UP Xtreme i11 Robotic Kit in Localization Mode


Prerequisites:
• Collaborative visual SLAM is based on visual keypoints, so use a room with multiple obstacles in it.
• If the room has bland walls, consider adding pictures to it so that visual SLAM has enough keypoints to
localize itself.
• Localization mode only works if a map was already generated. To generate a map go to Map an Area with
the Wandering Application and UP Xtreme i11 Robotic Kit.
• The pre-generated file contains enough information to allow the robot to navigate without having to
map the entire area.
• With this pre-generated map, the robot only has to localize itself relative to the pre-generated map.
1. Go to the installation folder of Edge_Insights_for_Autonomous_Mobile_Robots:

cd <edge_insights_for_amr_path>Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=27
CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/aaeon_localization_realsense_collab_slam_nav2_ukf.tutorial.yml up
Expected result: The robot starts moving in the already mapped area and reports if it is able to localize
itself or not.
• If the robot is able to localize itself, the amr-collab-slam node increases the tracking success
number, and the “relocal fail number” stays constant:

amr-collab-slam | [univloc_tracker_ros-2] [2022-09-05 06:59:58.720] [info] tracking success number: 102, fail number: 0, relocal number: 1, relocal fail number: 51
amr-collab-slam | [univloc_tracker_ros-2] [2022-09-05 06:59:58.766] [info] Started Localization! current frame id is 154
amr-collab-slam | [univloc_tracker_ros-2] [2022-09-05 06:59:58.779] [info] valid tracked server landmarks num: 590
amr-collab-slam | [univloc_tracker_ros-2] [2022-09-05 06:59:58.779] [info] Successfully tracked server map!
amr-collab-slam | [univloc_tracker_ros-2] [2022-09-05 06:59:58.785] [info] tracking success number: 103, fail number: 0, relocal number: 1, relocal fail number: 51
amr-collab-slam | [univloc_tracker_ros-2] [2022-09-05 06:59:58.835] [info] Started Localization! current frame id is 155
amr-collab-slam | [univloc_tracker_ros-2] [2022-09-05 06:59:58.848] [info] valid tracked server landmarks num: 571
amr-collab-slam | [univloc_tracker_ros-2] [2022-09-05 06:59:58.848] [info] Successfully tracked server map!

• If the robot is not able to localize itself, the amr-collab-slam node keeps the tracking success
number constant and increases the “relocal fail number”:

amr-collab-slam | [univloc_tracker_ros-2] [2022-09-05 07:06:49.475] [info] tracking success number: 5740, fail number: 6, relocal number: 6, relocal fail number: 119
amr-collab-slam | [univloc_tracker_ros-2] [2022-09-05 07:06:49.584] [info] tracking success number: 5740, fail number: 6, relocal number: 6, relocal fail number: 120
amr-collab-slam | [univloc_tracker_ros-2] [2022-09-05 07:06:49.667] [info] tracking success number: 5740, fail number: 6, relocal number: 6, relocal fail number: 121
amr-collab-slam | [univloc_tracker_ros-2] [2022-09-05 07:06:49.806] [info] tracking success number: 5740, fail number: 6, relocal number: 6, relocal fail number: 122
amr-collab-slam | [univloc_tracker_ros-2] [2022-09-05 07:06:49.859] [info] tracking success number: 5740, fail number: 6, relocal number: 6, relocal fail number: 123
2. To visualize the robot localizing itself and updating its pose, run in a different terminal:

cd <edge_insights_for_amr_path>Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=27
CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/rviz_robot_localization.yml up

NOTE If the robot is not able to localize itself, the robot does not start navigating the room, and rviz2
reports that the map was not found. To avoid this, move the robot to the room where the map was created
and face it towards a keypoint. It also helps if the room you mapped has a lot of keypoints.


Perform Object Detection While Mapping an Area with the UP Xtreme i11 Robotic Kit
1. Place the robot in an area with multiple objects in it.
2. Go to the installation folder of Edge_Insights_for_Autonomous_Mobile_Robots:

cd <edge_insights_for_amr_path>Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=27
3. Start mapping the area and listing the detected objects:

docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/aaeon_wandering_local_inference.yml up |grep Label
Expected result: The robot starts wandering around the room and listing what objects it sees:

AAEON* Resources
• Development Kit: https://github.com/up-board/up-community/wiki/UP-Robotic-Development-Kit-QSG
• Hardware Assembly: https://github.com/up-board/up-community/wiki/UP-Robotic-Development-Kit-HW-Assembly-Guide
• Power Management: https://github.com/up-board/up-community/wiki/UP-Robotic-Development-Kit-Power-Management-Guide

Troubleshooting
If the server fails to load the map:

amr-collab-slam | [univloc_server-1] [2022-09-19 06:45:08.520] [critical] cannot load the file at /tmp/aaeon/maps/map.msg
amr-collab-slam | [univloc_server-1] terminate called after throwing an instance of 'std::runtime_error'
amr-collab-slam | [univloc_server-1]   what(): cannot load the file at /tmp/aaeon/maps/map.msg
amr-collab-slam | [ERROR] [univloc_server-1]: process has died [pid 72, exit code -6, cmd '/home/eiforamr/workspace/CollabSLAM/prebuilt_collab_slam_core/univloc_server/lib/univloc_server/univloc_server --ros-args -r __node:=univloc_server --params-file /tmp/launch_params_iiuu5pfp'].
Change your folder permissions so that the Docker* user is able to write the map on your system:

cd <edge_insights_for_amr_path>Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
sudo find . -type d -exec chmod 775 {} +
sudo chown $USER:$USER * -R
If the tracker (univloc_tracker_ros) fails to start, giving the following error, see Collaborative Visual SLAM
on Intel® Atom® Processor-Based Systems.

amr-collab-slam | [ERROR] [univloc_tracker_ros-2]: process has died [pid 140, exit code -4, cmd '/home/eiforamr/workspace/CollabSLAM/prebuilt_collab_slam_core/univloc_tracker/lib/univloc_tracker/univloc_tracker_ros --ros-args -r __node:=univloc_tracker_0 -r __ns:=/ --params-file /tmp/launch_params_zfr70odz -r /tf:=tf -r /tf_static:=tf_static -r /univloc_tracker_0/map:=map'].
If the robot does not start moving, the firmware might be stuck. To make it work again:

cd <edge_insights_for_amr_path>Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
./run_interactive_docker.sh amr-aaeon-amr-interface:2022.3 eiforamr -c aaeon_robot
ros2 run ros2_amr_interface amr_interface_node --ros-args -p try_reconnect:=true -p publishTF:=true --remap /amr/cmd_vel:=/cmd_vel -p port_name:=/dev/ttyUSB0
ctrl-c
ros2 run ros2_amr_interface amr_interface_node --ros-args -p try_reconnect:=true -p publishTF:=true --remap /amr/cmd_vel:=/cmd_vel -p port_name:=/dev/ttyUSB0
# Look for the text: [INFO] [1655311131.144572706] [AMR_node]: Hardware is now online
# If you don't get this repeat the commands from the docker image and check if the motor controller is not attached to /dev/ttyUSB0.
# If it is not attached to /dev/ttyUSB0, find out which one it is and adapt the commands accordingly.
# When you get the [INFO] [1655311131.144572706] [AMR_node]: Hardware is now online, exit the docker image:
exit
If the robot is not behaving as instructed when using the teleop_twist_keyboard, try the following steps.
1. Check the direction of the wheels. The way they are facing is very important, as shown in the following
picture.


Each wheel says R (Right) or L (Left). Intel had to use the following wheel setup:
R (wheel) <<<>>> L (wheel)
L (wheel) <<<>>> R (wheel)
2. Check the connection between the wheels (left in the following picture) and the motor controller.


It is very important that the hardware setup is configured correctly. If it is not, this becomes evident when testing with the teleop_twist_keyboard.
3. If the wheels do not turn at all, there may be something wrong with the wheel motor control. The board's datasheet states that it takes a 12 V input. Intel found that a 12.5 V input did not work, but 5 V, 8 V, and 10 V inputs do work.
If the IMU gives errors and you did not install the librealsense udev rules when you configured the host,
install the librealsense udev rules now:

git clone https://github.com/IntelRealSense/librealsense


# Copy the 99-realsense-libusb.rules files to the rules.d folder
cd librealsense


sudo cp config/99-realsense-libusb.rules /etc/udev/rules.d/


sudo udevadm control --reload-rules
sudo udevadm trigger
For general robot issues, go to: Troubleshooting for Robot Tutorials.

Create Your Own Robot Kit


This tutorial describes how to create an autonomous mobile robot capable of exploring and mapping an area by adding an Intel® compute system and an Intel® RealSense™ camera on top of any robot base, and using the Edge Insights for Autonomous Mobile Robots software.
You can use one of these teleop methods to validate that the robot kit hardware setup was done correctly.
• Robot Teleop Using a Gamepad
• Robot Teleop Using a Keyboard

Hardware Requirements
The robot base should contain:
• Intel® compute system with Edge Insights for Autonomous Mobile Robots installed
• Intel® RealSense™ camera
• Robot base support (chassis) for the Intel® compute system and the Intel® RealSense™ camera
• Wheels
• Motor
• Motor controller
• Batteries for all components

Software Requirements
The robot base should have a ROS 2 node capable of:
• Publishing information from the motor controller firmware into ROS 2 topics, like wheel odometry
• Getting information from other ROS 2 nodes and transmitting it to the motor controller firmware, for example, receiving movement commands from the ROS 2 Navigation 2 stack on the cmd_vel topic
• Providing robot-specific information like the tf tree data with correct tf transformations, for example, odom and base_link and the transformation between them
• When using multiple robots, it is useful to be able to change these names for each robot, for example, robot1_odom and robot1_base_link (more information can be found here)

NOTE This ROS 2 node runs on the compute system and gets information from the robot's motor controller over a wired connection, usually USB.
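Once the robot base node is running, you can confirm that it provides this interface with the standard ROS 2 CLI tools. This is a minimal sketch: the topic and frame names are the generic ones described above and may differ on your robot (for example, the UP Xtreme i11 kit publishes /amr/odometry instead of /odom), so adapt them to what your node actually uses.

ros2 topic list
ros2 topic info /cmd_vel
# Shows how many nodes publish and subscribe to /cmd_vel; the robot base node should be one of the subscribers.
ros2 topic echo /odom
# Streams wheel odometry messages if they are being published; press Ctrl-c to stop.
ros2 run tf2_ros tf2_echo odom base_link
# Prints the transform between odom and base_link if the node broadcasts it.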

Step 1: Hardware Assembly

To have a working autonomous mobile robot, connect the following:


• The wheels, the motors, the motor controller
• The chassis to a power source
• The wheels to the motor controller and the motors
• The motors and the motor controller to a power source
• The Intel® compute system to the chassis
• The Intel® compute system to a power source
• The Intel® RealSense™ camera to the chassis

• The Intel® RealSense™ camera USB to the Intel® compute system
• The motor controller to the Intel® compute system

Step 2: Integration into Edge Insights for Autonomous Mobile Robots

The best way to integrate with Edge Insights for Autonomous Mobile Robots is to create a Docker* image for
the robot base node.
The robot base ROS 2 node can also be started outside of a Docker* container, but Intel recommends creating a Docker* image and adding it to Edge Insights for Autonomous Mobile Robots so that all components needed for autonomous navigation form one complete pipeline in a single yaml file.
If started outside of the SDK, use the same ROS_DOMAIN_ID for both the robot base node and the rest of
the pipeline.
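For example, the tutorials in this guide export ROS_DOMAIN_ID=27 before starting the pipeline; if your robot base node runs outside the SDK containers, export the same value in that terminal before launching the node:

export ROS_DOMAIN_ID=27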
For how to create a Docker* image for the robot base node, see Create New Docker* Images with Selected Applications from the SDK.

Step 3: Robot Base Node ROS 2 Node

Introduction to Robotic Base Node


The Edge Insights for Autonomous Mobile Robots pipeline assumes that the robot base ROS 2 node:
• Publishes odom and base_link

• odom is used by the Navigation 2 package and others to get information from sensors, especially the
wheel encoders. See this Navigation 2 tutorial on odometry for more information.
• base_link represents the center of the robot to which all other links are connected.
• Creates the transform between odom and base_link
• Is subscribed to cmd_vel, which is used by the Navigation 2 package to give the robot instructions, such as spinning in place or moving forward
In Edge Insights for Autonomous Mobile Robots, there are two examples:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
ls 01_docker_sdk_env/artifacts/01_amr/amr_generic/param/pengo_nav.param.yaml
ls 01_docker_sdk_env/artifacts/01_amr/amr_generic/param/aaeon_node_params.yaml
# These are the configuration files used by the robot base nodes of the Pengo and UP Xtreme i11 robotic kits.
ls 01_docker_sdk_env/docker_compose/05_tutorials/aaeon_wandering__aaeon_realsense_collab_slam_fm_nav2.tutorial.yml
ls 01_docker_sdk_env/docker_compose/05_tutorials/pengo_wandering__kobuki_realsense_collab_slam_fm_nav2.tutorial.yml
# These are the yaml files that start the full pipeline that makes these robots wander an area and map it.
# In them you can find how each node is started.
One is for AAEON’s UP Xtreme i11 Robotic Kit, and the other is for Cogniteam’s Pengo robot.

Robotic Base Node Deep Dive

NOTE The following commands work only if they are run on Cogniteam's Pengo robot or AAEON's UP Xtreme i11 Robotic Kit.


Using Cogniteam's Pengo robot and AAEON's UP Xtreme i11 Robotic Kit as references, start their robot base nodes like this:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=27
# for Cogniteam's Pengo robot
docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/
pengo_wandering__kobuki_realsense_collab_slam_fm_nav2.tutorial.yml up kobuki
# or for UP Xtreme i11 Robotic Kit
docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/
aaeon_wandering__aaeon_realsense_collab_slam_fm_nav2.tutorial.yml up aaeon-amr-interface
In a different terminal, attach to the opened Docker* image:

docker exec -it amr-aaeon-amr-interface bash


# or
docker exec -it amr-kobuki bash
source ros_entrypoint.sh
export ROS_DOMAIN_ID=27
You can check the following:
• ROS 2 topics

ros2 topic list


# The result for UP Xtreme i11 Robotic Kit is similar to:
# /amr/cmd_vel
# /amr/imu/raw
# /amr/initial_pose
# /amr/odometry
# /parameter_events
# /rosout
# /sensors/battery_state
# /tf
# The result for the Pengo robot contains multiple topics, but the ones crucial to this pipeline are:
# /cmd_vel
# /joint_states
# /rosout
# /odom
# /parameter_events
# /tf
• odom and base_link frames

ros2 run tf2_tools view_frames.py


cp frames.pdf /home/<user>
# Open the pdf through file explorer, it should look similar to:
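You can also verify that the robot base node reacts to velocity commands. This is a minimal smoke test, assuming the standard geometry_msgs/msg/Twist interface described above; use the cmd_vel topic your robot actually subscribes to (/cmd_vel on the Pengo robot, /amr/cmd_vel on the UP Xtreme i11 kit), and make sure the robot has room to move or is lifted so that the wheels can spin freely:

ros2 topic pub --once /cmd_vel geometry_msgs/msg/Twist "{linear: {x: 0.05, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}"
# The base should move forward briefly. Publish an all-zero message afterwards to make sure it stops:
ros2 topic pub --once /cmd_vel geometry_msgs/msg/Twist "{linear: {x: 0.0, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}"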


Step 4: Robot Base Node ROS 2 Navigation Parameter File

Introduction to the ROS 2 Navigation Parameter File


The Edge Insights for Autonomous Mobile Robots pipeline for AMRs uses the Navigation 2 package from ROS
2.
The Navigation 2 package requires several parameters, specific to the robot and to the area being mapped, to be set.
In Edge Insights for Autonomous Mobile Robots there are two examples:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
ls 01_docker_sdk_env/artifacts/01_amr/amr_generic/param/aaeon_nav.param.yaml 01_docker_sdk_env/
artifacts/01_amr/amr_generic/param/pengo_nav.param.yaml
One is for AAEON’s UP Xtreme i11 Robotic Kit, and the other is for Cogniteam’s Pengo robot.

The Navigation Parameter File as it Applies to Robots


To help understand the options in this parameter file, see ROS 2 Navigation 2 packages:
There are a lot of parameters to set, and Intel recommends reading this Navigation 2 documentation to get a better picture.


You can also compare the two nav.param.yaml files that are in Edge Insights for Autonomous Mobile Robots
to understand which parameters are different from robot to robot:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
meld 01_docker_sdk_env/artifacts/01_amr/amr_generic/param/aaeon_nav.param.yaml 01_docker_sdk_env/
artifacts/01_amr/amr_generic/param/pengo_nav.param.yaml
The most important parameters to set are (an illustrative fragment follows the list):
• use_sim_time: used in simulations; ignore any differences here, and set it to False when running in a real environment.
• base_frame_id: robot frame ID being published by the robot base node
The default is base_footprint, but base_link is another option. Use the one you choose to publish in the robot base node.
• robot_model_type: robot type
The options are omnidirectional, differential, or a custom motion model that you provide.
• tf_broadcast: turns transform broadcast on or off
Set this to False to prevent amcl from publishing the transform between the global frame and the odometry frame.
• odom_topic: source of instantaneous speed measurement
• max_vel_x, max_vel_theta: maximum linear speed on the x axis and maximum angular speed (theta)
• robot_radius: radius of the robot
• inflation_radius: radius to inflate the costmap around lethal obstacles
• min_obstacle_height: minimum height of a sensor return added to the occupancy grid
• max_obstacle_height: maximum height of a sensor return added to the occupancy grid
• obstacle_range: maximum range of a sensor reading that results in an obstacle being put into the costmap
• max_rotational_vel, min_rotational_vel, rotational_acc_lim: configure the rotational velocity allowed for the base in radians/second
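As a point of reference, this is a minimal, illustrative fragment showing where a few of these parameters typically live in a Navigation 2 parameter file. The section layout follows the standard Navigation 2 conventions (amcl and the costmaps); the values are placeholders only, not recommendations, so always start from the aaeon_nav.param.yaml or pengo_nav.param.yaml example that is closest to your robot:

amcl:
  ros__parameters:
    use_sim_time: False
    base_frame_id: "base_link"   # must match the frame published by your robot base node
    robot_model_type: "differential"
    tf_broadcast: True

local_costmap:
  local_costmap:
    ros__parameters:
      robot_radius: 0.20         # meters, specific to your chassis
      inflation_layer:
        inflation_radius: 0.55   # meters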

Create a Navigation Parameter File for Your Robotic Kit


To create a navigation parameter file for your robotic kit:

cp 01_docker_sdk_env/artifacts/01_amr/amr_generic/param/aaeon_nav.param.yaml 01_docker_sdk_env/artifacts/01_amr/amr_generic/param/generic_robot_nav.param.yaml
# Replace generic_robot_nav with a name that makes sense for your robotic kit.
# When replacing it, make sure it is also replaced wherever it is used.
# It is used in the general pipeline yaml file that starts all components.
gedit 01_docker_sdk_env/artifacts/01_amr/amr_generic/param/generic_robot_nav.param.yaml
# You can also use any other preferred editor; it is important, though, to keep the path.
# Make all changes that are specific to your robotic kit; see the previous chapter where they are described in detail.

Step 5: Navigation Full Stack

Introduction to the Navigation Full Stack


The Edge Insights for Autonomous Mobile Robots navigation full stack contains a lot of components that help
the robot navigate, avoid obstacles, and map an area. For example:
• Intel® RealSense™ camera node: receives input from the camera and publishes topics used by the vSLAM
algorithm
• Robot base node: receives input from the motor controller (for example, from wheel encoders) and sends
commands to the motor controller to move the robot

• ros-base-camera-tf: Uses static_transform_publisher to create transforms between base_link and camera_link (a minimal example is sketched after this list)

static_transform_publisher publishes a static coordinate transform to tf using an x/y/z offset in meters and yaw/pitch/roll in radians. The period, in milliseconds, specifies how often to send a transform.
• yaw = rotation around the z axis
• pitch = rotation around the y axis
• roll = rotation around the x axis
• collab-slam: A Collaborative Visual SLAM Framework for Service Robots paper
• FastMapping: an algorithm to create a 3D voxelmap of a robot’s surroundings, based on Intel® RealSense™
camera’s depth sensor data and provide the 2D map needed by the Navigation 2 stack
• nav2: the navigation package
• Wandering: demonstrates the combination of middleware, algorithms, and the ROS 2 navigation stack to
move a robot around a room without hitting obstacles
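As a reference for the ros-base-camera-tf target mentioned above, this is a minimal sketch of the kind of command it runs. The arguments are x y z yaw pitch roll parent_frame child_frame, and the numeric offsets are placeholders that must match where the camera is mounted on your robot:

ros2 run tf2_ros static_transform_publisher 0.1 0 0.2 0 0 0 base_link camera_link
# Example only: camera mounted 10 cm in front of and 20 cm above base_link, with no rotation.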
In Edge Insights for Autonomous Mobile Robots, there are two examples:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
ls 01_docker_sdk_env/docker_compose/05_tutorials/
aaeon_wandering__aaeon_realsense_collab_slam_fm_nav2.tutorial.yml
ls 01_docker_sdk_env/docker_compose/05_tutorials/
pengo_wandering__kobuki_realsense_collab_slam_fm_nav2.tutorial.yml
One is for AAEON’s UP Xtreme i11 Robotic Kit, and the other is for Cogniteam’s Pengo robot.

Create a Parameter File for Your Robotic Kit


To create a parameter file for your robotic kit:

cp 01_docker_sdk_env/docker_compose/05_tutorials/aaeon_wandering__aaeon_realsense_collab_slam_fm_nav2.tutorial.yml 01_docker_sdk_env/docker_compose/05_tutorials/generic_robot_wandering__aaeon_realsense_collab_slam_fm_nav2.tutorial.yml
# Replace generic_robot_wandering with a name that makes sense for your robotic kit.
gedit 01_docker_sdk_env/docker_compose/05_tutorials/generic_robot_wandering__aaeon_realsense_collab_slam_fm_nav2.tutorial.yml
# You can also use any other preferred editor; it is important, though, to keep the path.
Make all of the changes that are specific to your robotic kit:
1. Replace the aaeon-amr-interface target with the generic robot node you created in Step 3: Robot
Base Node ROS 2 Node.
2. Remove the ros-base-teleop target because this is specific to AAEON’s UP Xtreme i11 Robotic Kit.
3. In the ROS 2 command file, change the Navigation 2 target so that params_file targets the
parameter file you created in Step 4: Robot Base Node ROS 2 Navigation Parameter File.
from: params_file:=${CONTAINER_BASE_PATH}/01_docker_sdk_env/artifacts/01_amr/
amr_generic/param/aaeon_nav.param.yaml
to: params_file:=${CONTAINER_BASE_PATH}/01_docker_sdk_env/artifacts/01_amr/
amr_generic/param/generic_robot_nav.param.yaml
4. In the ros-base-camera-tf target, change the transform values passed to static_transform_publisher. The values for x, y, and z depend on where your Intel® RealSense™ camera is mounted.
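After editing, you can ask docker-compose to validate the new file before running it. This is only a syntax and variable-substitution check, not a check of your parameters, and it assumes you have already sourced docker_compose.source and exported CONTAINER_BASE_PATH as shown in the next section:

docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/generic_robot_wandering__aaeon_realsense_collab_slam_fm_nav2.tutorial.yml config > /dev/null && echo "yml OK"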

Start Mapping an Area with Your Robot


1. Place the robot in an area with multiple objects in it.


2. Go to the installation folder of Edge_Insights_for_Autonomous_Mobile_Robots:

cd <edge_insights_for_amr_path>Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=27
3. Start mapping the area:

docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/
generic_robot_wandering__aaeon_realsense_collab_slam_fm_nav2.tutorial.yml up
Expected result: The robot starts wandering around the room and mapping the entire area.
4. On a different terminal, prepare the environment to visualize the mapping and the robot using rviz2.

NOTE If available, use a different development machine because rviz2 consumes a lot of resources
that may interfere with the robot.

cd <edge_insights_for_amr_path>Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=27
CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/
rviz_robot_wandering.yml up


5. To see the map in 3D, check the MarkerArray:


NOTE Displaying in 3D consumes a lot of system resources. Intel recommends opening rviz2 on a
development system. The development system needs to be in the same network and have the same
ROS_DOMAIN_ID set.


6. To stop the robot from mapping the area, do one of the following:


• Type Ctrl-c in the terminal where you ran the docker-compose up command.
• (Preferred method because this option cleans the workspace) On the system where you ran docker-compose up in step 3, use docker-compose down:

docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/
generic_robot_wandering__aaeon_realsense_collab_slam_fm_nav2.tutorial.yml down --remove-orphans

Perception
The following tutorials offer solutions for sensors and computer vision in ROS 2-based Docker* containers.

Intel® RealSense™ ROS 2 Sample Application

This tutorial tells you how to:


• Launch ROS nodes for a camera.
• List ROS topics.
• See that Intel® RealSense™ topics are publishing data.
• Get data from the Intel® RealSense™ camera (data arriving at the expected FPS).
• See an image from the Intel® RealSense™ camera displayed in rviz2.

Run the Sample Application


1. Connect an Intel® RealSense™ camera (for example, D435i) to the host.
2. Check if your installation has the amr-realsense Docker* image.

docker images |grep amr-realsense


#if you have it installed, the result is:
amr-realsense

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer than an hour (sometimes a lot longer, depending on the system resources and internet connection).

3. If the image is not installed, Intel recommends installing the Robot Complete Kit with the Get Started
Guide for Robots.
4. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
5. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=12
6. Run the command below to start the Docker* container:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml run realsense bash
7. Check for latest Intel® RealSense™ firmware updates.
a. Open the Intel® RealSense™ viewer application:

realsense-viewer
In the Intel® RealSense™ viewer, if any firmware update is available, a window popup appears in
the upper right corner.

b. During the firmware update installation, do not disconnect the Intel® RealSense™ camera. Press
Install in the window popup.
c. After the installation is complete or if no update is available, close the Intel® RealSense™ viewer.
d. Exit the Docker* image:

exit
8. Run an automated yml file that opens the Intel® RealSense™ ROS 2 node and lists camera-relevant
information.

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/realsense.tutorial.yml up
Expected output: The image from the Intel® RealSense™ camera is displayed in rviz2, on the bottom left side (a quick way to verify the topic publishing rate is sketched after this procedure).

9. To close this, do one of the following:
• Type Ctrl-c in the terminal where you did the up command.
• Run this command in another terminal:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/realsense.tutorial.yml down
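While the tutorial from step 8 is running, you can confirm from another terminal that the camera topics are publishing at the expected rate. This is a minimal sketch: the container name is assumed to be amr-realsense (check docker ps if it differs), and the topic name assumes the default realsense2_camera naming:

docker exec -it amr-realsense bash
source ros_entrypoint.sh
export ROS_DOMAIN_ID=12
ros2 topic list | grep camera
ros2 topic hz /camera/color/image_raw
# Prints the measured publishing rate; press Ctrl-c to stop, then exit the container.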

Troubleshooting
In some cases, the stream may not appear due to permission issues on the host. You may see this error
message:

ERROR: Pipeline doesn't want to pause.


1. To fix this, install the librealsense udev rules.

git clone https://github.com/IntelRealSense/librealsense


# Copy the 99-realsense-libusb.rules files to the rules.d folder
cd librealsense
sudo cp config/99-realsense-libusb.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
2. Then run the sample application again.

If the problem persists, you can try any or all of the following:
• Verify that $DISPLAY has the correct value.
• Perform an Intel® RealSense™ hardware reset:

# Open realsense docker container


docker-compose -f 01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml run realsense bash
# While in realsense container, open the realsense-viewer application
realsense-viewer
# In realsense-viewer menu, go to "More" and then select "Hardware Reset"
# Wait for reset to complete and then close the realsense-viewer application.
• Reboot the target.
For Intel® RealSense™ documentation, see https://dev.intelrealsense.com/docs/docs-get-started.
For calibration issues, see https://dev.intelrealsense.com/docs/self-calibration-for-depth-cameras.
For general robot issues, go to: Troubleshooting for Robot Tutorials.

ROS 2 OpenVINO™ Toolkit Sample Application

This tutorial tells you how to run the segmentation demo application, both on a static image and on a video stream received from an Intel® RealSense™ camera.

Run the Sample Application


1. Check if your installation has the amr-ros2-openvino Docker* image.

docker images |grep amr-ros2-openvino


#if you have it installed, the result is:
amr-ros2-openvino

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).


2. If the image is not installed, Intel recommends installing the Robot Base Kit or Robot Complete Kit with
the Get Started Guide for Robots.
3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
4. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=16
5. Launch the automated execution of the ROS 2 OpenVINO™ toolkit sample applications:

CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/ros2_openvino.tutorial.yml up
Expected output:
a. Execution of the object segmentation sample code with input from a static image: This takes one minute, and you can see the semantic segmentation being applied to the image.
Original image

Image with semantic object segmentation


b. Execution of the object segmentation sample code with input from the Intel® RealSense™ camera topic: This requires an Intel® RealSense™ camera connected to the testing target. It takes one minute, and you can see the semantic segmentation being applied to the video stream received from the Intel® RealSense™ camera.
6. To close this, do one of the following:
• Type Ctrl-c in the terminal where you did the up command.
• Run this command in another terminal:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/ros2_openvino.tutorial.yml down

How it Works
All of the commands required to run this tutorial are documented in:

01_docker_sdk_env/docker_compose/05_tutorials/ros2_openvino.tutorial.yml
To use your own image to run semantic segmentation:
1. Copy your image into the AMR_containers folder at:

cp <path_to_image>/my_image.jpg 01_docker_sdk_env/docker_compose/05_tutorials/param/
2. Edit 01_docker_sdk_env/docker_compose/05_tutorials/ros2_openvino.tutorial.yml, at line
34, adding the following command:

cp ${CONTAINER_BASE_PATH}/01_docker_sdk_env/docker_compose/05_tutorials/param/my_image.jpg ../
ros2_ws/src/ros2_openvino_toolkit/data/images/
3. Edit 01_docker_sdk_env/docker_compose/05_tutorials/param/
pipeline_segmentation_image.yaml to change the input_path:, line 4:

input_path: /home/eiforamr/ros2_ws/src/ros2_openvino_toolkit/data/images/my_image.jpg
4. Run the automated yml:

CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/ros2_openvino.tutorial.yml up
Expected result: Execution of semantic segmentation on the image you selected

Troubleshooting
For general robot issues, go to: Troubleshooting for Robot Tutorials.

OpenVINO™ Sample Application

This tutorial tells you how to:


• Run inference engine object detection on a pretrained network using the SSD method.
• Run the detection demo application for a CPU and GPU.
• Use a model optimizer to convert a TensorFlow* neural network model.
• After conversion, run the neural network with inference engine for a CPU and GPU.

Run the Sample Application


1. Check if your installation has the eiforamr-openvino-sdk Docker* image.

docker images |grep eiforamr-openvino-sdk


#if you have it installed, the result is:
eiforamr-openvino-sdk

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

2. If the image is not installed, Intel recommends installing the Robot Complete Kit with the Get Started
Guide for Robots.
3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
4. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=22
5. Run inference engine object detection on a pre-trained network using the Single-Shot multibox
Detection (SSD) method. Run the detection demo application for a CPU:

CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/openvino_CPU.tutorial.yml up
Expected output: A video in a loop with cars being detected and labeled by the Neural Network using a
CPU


6. To close this, do one of the following:


• Type Ctrl-c in the terminal where you did the up command.
• Run this command in another terminal:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/openvino_CPU.tutorial.yml down
7. For an explanation of what happened, open the yml file. The file is well documented. To use your own
files, place them in your home directory, and change the respective lines in the yml files to target them.
8. Run the detection demo application for the GPU:

CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/openvino_GPU.tutorial.yml up


Expected output: A video in a loop with cars being detected and labeled by the Neural Network using a
GPU
9. To close this, do one of the following:
• Type Ctrl-c in the terminal where you did the up command.
• Run this command in another terminal:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/openvino_GPU.tutorial.yml down
10. For an explanation of what happened, open the yml file. The file is well documented. To use your own
files, place them in your home directory, and change the respective lines in the yml files to target them.
11. For systems with an Intel® Movidius™ Myriad™ X accelerator, run the detection demo application on the Intel® Movidius™ Myriad™ X accelerator:


NOTE Only execute this command on systems with an Intel® Movidius™ Myriad™ X accelerator.
Check your system:

lsusb
Look for Intel Movidius MyriadX in the output.

CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/openvino_MYRIAD.tutorial.yml up
Expected output: A video in a loop with cars being detected and labeled by the Neural Network using
the Intel® Movidius™ Myriad™ X accelerator.

NOTE There is a known issue that if you choose to run the object_detection_demo using the -d MYRIAD option, a core dump error is thrown when the demo ends.
If errors occur, remove the following file and try again:

rm -rf /tmp/mvnc.mutex

12. To close this, do one of the following:


• Type Ctrl-c in the terminal where you did the up command.
• Run this command in another terminal:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/openvino_MYRIAD.tutorial.yml down
13. For an explanation of what happened, open the yml file. The file is well documented. To use your own
files, place them in your home directory, and change the respective lines in the yml files to target them.

Troubleshooting
If running the yml file gets stuck at downloading:

gedit 01_docker_sdk_env/docker_compose/05_tutorials/openvino_CPU.tutorial.yml
# In the same way open any other yml you want to test behind a proxy.
Add the following lines after the line echo "*** Set up the OpenVINO environment ***", replacing http://<http_proxy>:port with your actual environment http_proxy.

export http_proxy="http://<http_proxy>:port"
export https_proxy="http://<https_proxy>:port"
For general robot issues, go to: Troubleshooting for Robot Tutorials.

2D LIDAR and ROS 2 Cartographer

Slamtec* RPLIDAR A3 2D LIDAR


1. Connect a Slamtec* RPLIDAR A3 2D LIDAR to your system.
2. Add a new udev rule.
a. Create a new file:

sudo nano /etc/udev/rules.d/rplidar.rules


b. Add these lines:

# set the udev rule, make the device_port be fixed by rplidar
#
KERNEL=="ttyUSB*", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", MODE:="0777", SYMLINK+="rplidar"
c. Reload the rules:

sudo udevadm control --reload-rules


sudo udevadm trigger
3. Check if your installation has the amr-rplidar and amr-cartographer Docker* images.

docker images |grep amr-rplidar


#if you have it installed, the result is:
amr-rplidar

docker images |grep amr-cartographer


#if you have it installed, the result is:
amr-cartographer


NOTE If one or both of the images are not installed, continuing with these steps triggers a build
that takes longer than an hour (sometimes, a lot longer depending on the system resources and
internet connection).

4. If one or both of the images are not installed, Intel recommends installing the Robot Base Kit or Robot
Complete Kit with the Get Started Guide for Robots.
5. Run the Sample Application.
a. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
b. Prepare the docker_compose environment:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=114
c. Get the Slamtec* RPLIDAR serial port:

dmesg | grep cp210x


d. Check for similar logs:

usb 1-3: SerialNumber: 0001


cp210x 1-3:1.0: cp210x converter detected
usb 1-3: cp210x converter now attached to ttyUSB0
e. Export the port:

export RPLIDAR_SERIAL_PORT=/dev/ttyUSB0
# this value may differ from system to system, use the value returned in the previous step
f. Run the Slamtec* RPLIDAR tutorial:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/cartographer.rplidar_a3.lua.tutorial.yml up

6. Go to rviz2 that is already open, and turn off LaserScan.


7. Click Add in the lower left corner, click By topic, select PointCloud2 from the /scan_matched_points2 topic, and click OK.


The rviz2 window looks like this:

SICK* nanoScan3* Safety Laser Scanner


1. Connect a SICK* nanoScan3* laser scanner to your system. For the hardware setup and configuration
required in a production environment, see the SICK* website.
2. Get the SICK* nanoScan3* laser scanner’s IP and the host’s IP.
This information can be found when configuring the SICK* nanoScan3* laser scanner using the Safety
Designer, in the “Networking” chapter.
3. Check if your installation has the amr-sick-nanoscan and amr-cartographer Docker* images.

docker images |grep amr-sick-nanoscan


#if you have it installed, the result is:
amr-sick-nanoscan


docker images |grep amr-cartographer


#if you have it installed, the result is:
amr-cartographer

NOTE If one or both of the images are not installed, continuing with these steps triggers a build
that takes longer than an hour (sometimes, a lot longer depending on the system resources and
internet connection).

4. If one or both of the images are not installed, Intel recommends installing the Robot Complete Kit with
the Get Started Guide for Robots.
5. Run the Sample Application.
a. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
b. Prepare the docker_compose environment:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=114
# Export the Sick NanoScan3 IP and the host's IP
export HOST_IP=<host_ip>
export SICK_NANOSCAN_IP=<sick_nanoscan_ip>
c. Run the SICK* nanoScan3* laser scanner tutorial:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/cartographer.sick_nanoscan.lua.tutorial.yml up
6. In rviz2, add the LaserScan topic (Add > By Topic > scan/LaserScan), and change Fixed Frame to
scan:


7. Go to rviz2 that is already open, and turn off LaserScan.


8. Click Add in the lower left corner, click By topic, select PointCloud2 from the /scan_matched_points2 topic, and click OK.

The rviz2 window looks like this:


Troubleshooting
For general robot issues, go to: Troubleshooting for Robot Tutorials.

GStreamer* Pipelines

See GStreamer* details here.


• Sony’s IMX390 MIPI Camera Stream Through a GStreamer* Video Pipeline
• GStreamer* Audio Pipeline
• GStreamer* Video Pipeline
• GStreamer* Video Pipeline with libv4l2
• GStreamer* Video Pipeline with an Intel® RealSense™ Camera

Sony’s IMX390 MIPI Camera Stream Through a GStreamer* Video Pipeline

This tutorial tells you how to set up and run a GStreamer* video pipeline using Sony’s IMX390 MIPI sensor.
Prerequisites: To enable Sony’s IMX390 MIPI sensor, you must use the Resource Design Center (RDC), have
a Corporate Non-Disclosure Agreement (CNDA) in place, and ask for download access.

Step 1: Update the Kernel to Version 5.10.131


The only supported kernel version for EI for AMR is Intel’s Linux* LTS Kernel 5.10.131. Depending on your
Ubuntu* 20.04 version, the default kernels are:
• 5.4 on Ubuntu* 20.04.1
• 5.8 on Ubuntu* 20.04.2
• 5.11 on Ubuntu* 20.04.3
• 5.13 on Ubuntu* 20.04.4
To check your kernel version:

uname -r
Step 1 is only valid for updating from kernel versions 5.4, 5.8, 5.11 and 5.13. If you have a different kernel,
go to the Support Forum.
This process can take from 30 minutes to two hours, depending on your system.
1. Clone Intel’s Linux* LTS kernel 5.10.131 repository from GitHub*.

git clone https://github.com/intel/linux-intel-lts.git


cd linux-intel-lts
2. Check out to the lts-v5.10.131-yocto-220812T072653Z branch.

git checkout lts-v5.10.131-yocto-220812T072653Z


3. Install the necessary packages and gcc dependencies.

sudo apt-get -y install build-essential gcc bc bison flex libssl-dev libncurses5-dev libelf-dev
dwarves zstd
4. Copy the configuration file to your folder, and rename it .config.

cp /boot/config-$(uname -r) ./.config


5. Change these values of the kernel configuration.

scripts/config --set-str SYSTEM_TRUSTED_KEYS ""


scripts/config --set-str CONFIG_SYSTEM_REVOCATION_KEYS ""
6. Enable the Sony* IMX390-related and TI* TI960-related modules.

scripts/config --module CONFIG_VIDEO_IMX390


scripts/config --module CONFIG_VIDEO_TI960
scripts/config --module CONFIG_VIDEO_INTEL_IPU6

scripts/config --module CONFIG_VIDEO_AR0234
scripts/config --module CONFIG_PINCTRL_TIGERLAKE
scripts/config --enable CONFIG_INTEL_IPU6_TGLRVP_PDATA
7. Compile the kernel, and make the Debian* kernel packages.

NOTE This kernel compilation step takes a long time to complete:
• approximately one hour on systems with 32 GB of RAM
• two to three hours on systems with 8 GB of RAM

make olddefconfig
make -j4 deb-pkg
8. Install the new Debian* kernel packages.

cd ../ && sudo dpkg -i linux-*.deb


9. For kernel versions 5.11 and 5.13, the newly installed kernel is a lower version than the system
kernel, so the system needs to be configured to use it instead of the latest version.
a. Open the GRUB.

sudo cp /etc/default/grub /etc/default/grub.bak


sudo vi /etc/default/grub
b. Change the value of GRUB_DEFAULT from GRUB_DEFAULT=0 to GRUB_DEFAULT="Advanced
options for Ubuntu>Ubuntu, with Linux 5.10.131".
c. Update the GRUB.

cd /tmp
sudo update-grub
10. Reboot your system.

sync
sudo reboot -fn
11. Check your kernel version after reboot.

uname -r

Step 2: Install the IPU6 Packages


1. Download the Tiger Lake IPU6 Packages on the host (it contains the IPU RPM libraries and IPU
firmware).
2. Unzip the archive, and copy the RPM folder to the /tmp folder:

unzip 645460.zip
tar -xf ipu6_rpm_beta.tar.bz2
cp -r rpm /tmp
3. Run the Docker* image as root:

xhost +
./run_interactive_docker.sh amr-gstreamer:<TAG> root -e "--volume /sys/kernel/:/sys/kernel:rw --volume /sys/class:/sys/class:rw"
4. If your network runs behind proxies, export the corresponding proxies in the container.

export http_proxy="http://<http_proxy>:port"
export https_proxy="http://<https_proxy>:port"
5. Install the RPM package in the Docker* container:

apt-get update
apt-get install rpm


6. Set isys_freq:

echo 400 > /sys/kernel/debug/intel-ipu/buttress/isys_freq


7. Install the IPU6 firmware and other necessary user-space libraries:

rpm -ivh /tmp/rpm/* --nodeps --force


8. Prepare the setup:

export DISPLAY=:0 #If you are on VNC adapt this value to the correct one
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:/usr/lib64/pkgconfig:/usr/lib/pkgconfig
export LD_LIBRARY_PATH=/usr/local/lib:/usr/lib64:/usr/lib
export GST_PLUGIN_PATH=/usr/lib/gstreamer-1.0
export GST_GL_PLATFORM=egl
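Before running the pipeline in the next step, you can check that GStreamer* can see the icamerasrc element installed by the IPU6 packages. This is only a quick sanity check; if the element is not listed, re-check the GST_PLUGIN_PATH export above:

gst-inspect-1.0 icamerasrc
# Prints the element details if the plugin is installed and found on the plugin path.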

Step 3: Run the Sample Application


1. Run the GStreamer* pipeline:

gst-launch-1.0 icamerasrc device-name=imx390 printfps=true num-vc=1 ! video/x-raw,format=NV12,width=1920,height=1200 ! videoconvert ! xvimagesink
Expected output: A video opens, showing images captured with the camera using Sony’s IMX390 MIPI
sensor.

Troubleshooting
For general robot issues, go to: Troubleshooting for Robot Tutorials.

GStreamer* Audio Pipeline

Run a GStreamer* audio pipeline using GStreamer* plugins in a Docker* container.

Run the Sample Application


1. Check if your installation has the amr-gstreamer Docker* image.

docker images |grep amr-gstreamer


#if you have it installed, the result is:
amr-gstreamer

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

2. If the image is not installed, Intel recommends installing the Robot Base Kit or Robot Complete Kit with
the Get Started Guide for Robots.
3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
4. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=34
5. Run an automated yml file that opens a GStreamer* sample application inside the EI for AMR Docker*
container.

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/gstreamer_audio.tutorial.yml up

Expected output:

#gst-launch-1.0 filesrc location=/data_samples/media_samples/sample.ogg ! oggdemux ! vorbisdec ! audioconvert ! audioresample ! testsink
error: XDG_RUNTIME_DIR not set in the environment.
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:01:14.349609320
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
6. To close this, do one of the following:
• Type Ctrl-c in the terminal where you did the up command.
• Run this command in another terminal:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/gstreamer_audio.tutorial.yml down
7. For an explanation of what happened, open the yml file:
• The first 23 lines are from the EI for AMR infrastructure.
• Line 26 plays the audio file using GStreamer*.
8. To use your own audio, use the same yml file but update line 26 to target your own file.
For example, copy the file:

cp test.ogg ${CONTAINER_BASE_PATH}/01_docker_sdk_env/docker_compose/05_tutorials/test.ogg
And update line 26 to:

gst-launch-1.0 filesrc location=${CONTAINER_BASE_PATH}/01_docker_sdk_env/docker_compose/05_tutorials/test.ogg ! oggdemux ! vorbisdec ! audioconvert ! audioresample ! testsink
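The tutorial pipeline only verifies that the file decodes and the pipeline runs to end-of-stream; it does not play the audio out loud. If you want to hear the file, you can run a similar pipeline yourself in an environment with working audio output and replace the final sink with a real audio sink. This is a minimal sketch, assuming the same file path as above:

gst-launch-1.0 filesrc location=${CONTAINER_BASE_PATH}/01_docker_sdk_env/docker_compose/05_tutorials/test.ogg ! oggdemux ! vorbisdec ! audioconvert ! audioresample ! autoaudiosink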

Troubleshooting
For general robot issues, go to: Troubleshooting for Robot Tutorials.

GStreamer* Video Pipeline

Run a GStreamer* video pipeline using GStreamer* plugins, and display a video file in a Docker* container
window.

Run the Sample Application


1. Check if your installation has the amr-gstreamer Docker* image.

docker images |grep amr-gstreamer


#if you have it installed, the result is:
amr-gstreamer

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

2. If the image is not installed, Intel recommends installing the Robot Base Kit or Robot Complete Kit with
the Get Started Guide for Robots.


3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
4. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=31
5. Run an automated yml file that opens a GStreamer* sample application inside the EI for AMR Docker*
container.

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/gstreamer_video.tutorial.yml up
Expected output: The video file is displayed in a window in the container.


6. To close this, do one of the following:


• Type Ctrl-c in the terminal where you did the up command.
• Run this command in another terminal:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/gstreamer_video.tutorial.yml down
7. For an explanation of what happened, open the yml file:


• The first 23 lines are from the EI for AMR infrastructure.


• Line 26 clones some sample videos.
• Line 27 plays the video using GStreamer*.
8. To use your own video, use the same yml file but update line 27 to target your own file.
For example, copy the file:

cp test.mp4 ${CONTAINER_BASE_PATH}/01_docker_sdk_env/docker_compose/05_tutorials/test.mp4
And update line 27 to:

gst-launch-1.0 playbin uri=file://${CONTAINER_BASE_PATH}/01_docker_sdk_env/docker_compose/05_tutorials/test.mp4

Troubleshooting
If running the yml file gets stuck at downloading:

gedit 01_docker_sdk_env/docker_compose/05_tutorials/gstreamer_video.tutorial.yml
# In the same way open any other yml you want to test behind a proxy.
Add the following lines after the line echo "*** Run gst-launch with video sample from the Docker container ***", replacing http://<http_proxy>:port with your actual environment http_proxy.

export http_proxy="http://<http_proxy>:port"
export https_proxy="http://<https_proxy>:port"
Check your system date and time:

date
If the date is incorrect, contact your local support team for help setting the correct date and time.
For general robot issues, go to: Troubleshooting for Robot Tutorials.

GStreamer* Video Pipeline with libv4l2

Run a GStreamer* video pipeline using libv4l2 in a Docker* container.

Run the Sample Pipeline


1. Connect a video camera compatible with libv4l2, such as a webcam (an Intel® RealSense™ camera is not
compatible).
2. Check if your installation has the amr-gstreamer Docker* image.

docker images |grep amr-gstreamer


#if you have it installed, the result is:
amr-gstreamer

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

3. If the image is not installed, Intel recommends installing the Robot Base Kit or Robot Complete Kit with
the Get Started Guide for Robots.
4. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers

5. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=27
sudo chmod a+rw /dev/video*
6. Get the stream from the webcam using GStreamer*:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/gstreamer_libv4l2.tutorial.yml up
Expected output: The stream from the webcam is displayed.
7. To close this, do one of the following:
• Type Ctrl-c in the terminal where you did the up command.
• Run this command in another terminal:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/gstreamer_libv4l2.tutorial.yml down
8. For an explanation of what happened, open the yml file:
• The first 23 lines are from the EI for AMR infrastructure.
• Line 26 plays the stream from the webcam using GStreamer*.
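If the stream does not appear, a useful first check on the host is whether the webcam registered a video device at all. This is a minimal sketch; v4l2-ctl is only available if the v4l-utils package is installed on the host:

ls -l /dev/video*
v4l2-ctl --list-devices
# The webcam should show up as one or more /dev/videoN entries.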

Troubleshooting
If the following error is encountered:

eiforamr@edgesoftware:~/workspace$ gst-launch-1.0 v4l2src ! autovideosink


Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
Execution ended after 0:00:00.000028689
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
GStreamer* may need the decoding type to be specified explicitly. For example, for a Logitech* C922 webcam, the command is:

$ gst-launch-1.0 v4l2src ! jpegdec ! autovideosink


If the following error is encountered:

amr-gstreamer | Setting pipeline to PAUSED ...


amr-gstreamer | error: XDG_RUNTIME_DIR not set in the environment.
Try this:

mkdir -pv ~/.cache/xdgr


export XDG_RUNTIME_DIR=~/.cache/xdgr
For general robot issues, go to: Troubleshooting for Robot Tutorials.

GStreamer* Video Pipeline with an Intel® RealSense™ Camera

Run a GStreamer* video pipeline using the Intel® RealSense™ plugin in a Docker* container in order to use an Intel® RealSense™ video camera as the video source.


Run the Sample Pipeline


1. Connect an Intel® RealSense™ video camera.
2. Check if your installation has the amr-gstreamer Docker* image.

docker images |grep amr-gstreamer


#if you have it installed, the result is:
amr-gstreamer

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

3. If the image is not installed, Intel recommends installing the Robot Base Kit or Robot Complete Kit with
the Get Started Guide for Robots.
4. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
5. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=45
sudo chmod a+rw /dev/video*
6. Get the stream from the Intel® RealSense™ camera using gstreamer:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/gstreamer_realsensesrc.tutorial.yml up
Expected output: The stream from the Intel® RealSense™ is displayed.
7. To close this, do one of the following:
• Type Ctrl-c in the terminal where you did the up command.
• Run this command in another terminal:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/gstreamer_realsensesrc.tutorial.yml down
8. For an explanation of what happened, open the yml file:
• The first 23 lines are from the EI for AMR infrastructure.
• Line 26 gets the stream from the Intel® RealSense™ camera using GStreamer*.

Troubleshooting
• In some cases, the stream may not appear due to permission issues on the host. You may see this error
message:

ERROR: Pipeline doesn't want to pause.


1. To fix this, if you did not install the librealsense udev rules when you configured the host, install them
now:

git clone https://github.com/IntelRealSense/librealsense


# Copy the 99-realsense-libusb.rules files to the rules.d folder
cd librealsense
sudo cp config/99-realsense-libusb.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
2. Then open the gst-launch command.

If the problem persists, you can try any or all of the following:

• Verify that $DISPLAY has the correct value.
• Perform an Intel® RealSense™ hardware reset:

# Open realsense docker container


docker-compose -f 01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml run realsense bash
# While in realsense container, open the realsense-viewer application
realsense-viewer
# In realsense-viewer menu, go to "More" and then select "Hardware Reset"
# Wait for reset to complete and then close the realsense-viewer application.
• Reboot the target.
• For general robot issues, go to: Troubleshooting for Robot Tutorials.

Point Cloud Library (PCL) Optimized for the Intel® oneAPI Base Toolkit

All tutorials are compatible with PCL 1.12.1.

The following collateral is available:
• "Getting Started" on the Point Cloud Library site: a high-level overview of PCL
• Optimized PCL Known Limitation: a known limitation
• Optimized PCL Troubleshooting: troubleshooting

Spatial Partitioning and Search Operations with Octrees

This tutorial performs a “Neighbors within Radius” search.


This tutorial shows how to use these optimizations inside a Docker* image. For the same functionality
outside of Docker* images, see PCL Optimizations Outside of Docker* Images.
1. Prepare the environment:
cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
./run_interactive_docker.sh eiforamr-full-flavour-sdk:2022.3 root -c full_flavor
mkdir octree && cd octree
2. Create the file oneapi_octree_search.cpp:

vim oneapi_octree_search.cpp
3. Place the following inside the file:
#include <iostream>
#include <fstream>
#include <numeric>
#include <pcl/oneapi/octree/octree.hpp>
#include <pcl/oneapi/containers/device_array.h>
#include <pcl/point_cloud.h>

using namespace pcl::oneapi;

float dist(Octree::PointType p, Octree::PointType q) {


return std::sqrt((p.x-q.x)*(p.x-q.x) + (p.y-q.y)*(p.y-q.y) + (p.z-q.z)*(p.z-q.z));
}

int main (int argc, char** argv)


{
std::size_t data_size = 871000;
std::size_t query_size = 10000;
float cube_size = 1024.f;


float max_radius = cube_size / 30.f;


float shared_radius = cube_size / 30.f;
const int max_answers = 5;
const int k = 5;
std::size_t i;
std::vector<Octree::PointType> points;
std::vector<Octree::PointType> queries;
std::vector<float> radiuses;
std::vector<int> indices;

//Generate point cloud data, queries, radiuses, indices


srand (0);
points.resize(data_size);
for(i = 0; i < data_size; ++i)
{
points[i].x = ((float)rand())/(float)RAND_MAX * cube_size;
points[i].y = ((float)rand())/(float)RAND_MAX * cube_size;
points[i].z = ((float)rand())/(float)RAND_MAX * cube_size;
}

queries.resize(query_size);
radiuses.resize(query_size);
for (i = 0; i < query_size; ++i)
{
queries[i].x = ((float)rand())/(float)RAND_MAX * cube_size;
queries[i].y = ((float)rand())/(float)RAND_MAX * cube_size;
queries[i].z = ((float)rand())/(float)RAND_MAX * cube_size;
radiuses[i] = ((float)rand())/(float)RAND_MAX * max_radius;
};

indices.resize(query_size / 2);
for(i = 0; i < query_size / 2; ++i)
{
indices[i] = i * 2;
}

//Prepare oneAPI cloud


pcl::oneapi::Octree::PointCloud cloud_device;
cloud_device.upload(points);

//oneAPI build
pcl::oneapi::Octree octree_device;
octree_device.setCloud(cloud_device);
octree_device.build();

//Upload queries and radiuses


pcl::oneapi::Octree::Queries queries_device;
pcl::oneapi::Octree::Radiuses radiuses_device;
queries_device.upload(queries);
radiuses_device.upload(radiuses);

//Prepare output buffers on device


pcl::oneapi::NeighborIndices result_device1(queries_device.size(), max_answers);
pcl::oneapi::NeighborIndices result_device2(queries_device.size(), max_answers);
pcl::oneapi::NeighborIndices result_device3(indices.size(), max_answers);
pcl::oneapi::NeighborIndices result_device_ann(queries_device.size(), 1);
pcl::oneapi::Octree::ResultSqrDists dists_device_ann;
pcl::oneapi::NeighborIndices result_device_knn(queries_device.size(), k);
pcl::oneapi::Octree::ResultSqrDists dists_device_knn;

//oneAPI octree radius search with shared radius
octree_device.radiusSearch(queries_device, shared_radius, max_answers, result_device1);

//oneAPI octree radius search with individual radius


octree_device.radiusSearch(queries_device, radiuses_device, max_answers, result_device2);

//oneAPI octree radius search with shared radius using indices to specify
//the queries.
pcl::oneapi::Octree::Indices cloud_indices;
cloud_indices.upload(indices);
octree_device.radiusSearch(queries_device, cloud_indices, shared_radius, max_answers,
result_device3);

//oneAPI octree ANN search


//if neighbor points distances results are not required, can just call
//octree_device.approxNearestSearch(queries_device, result_device_ann)
octree_device.approxNearestSearch(queries_device, result_device_ann, dists_device_ann);

//oneAPI octree KNN search


//if neighbor points distances results are not required, can just call
//octree_device.nearestKSearchBatch(queries_device, k, result_device_knn)
octree_device.nearestKSearchBatch(queries_device, k, result_device_knn, dists_device_knn);

//Download results
std::vector<int> sizes1;
std::vector<int> sizes2;
std::vector<int> sizes3;
result_device1.sizes.download(sizes1);
result_device2.sizes.download(sizes2);
result_device3.sizes.download(sizes3);

std::vector<int> downloaded_buffer1, downloaded_buffer2, downloaded_buffer3, results_batch;


result_device1.data.download(downloaded_buffer1);
result_device2.data.download(downloaded_buffer2);
result_device3.data.download(downloaded_buffer3);

int query_idx = 2;
std::cout << "Neighbors within shared radius search at ("
<< queries[query_idx].x << " "
<< queries[query_idx].y << " "
<< queries[query_idx].z << ") with radius=" << shared_radius << std::endl;
for (i = 0; i < sizes1[query_idx]; ++i)
{
std::cout << " " << points[downloaded_buffer1[max_answers * query_idx + i]].x
<< " " << points[downloaded_buffer1[max_answers * query_idx + i]].y
<< " " << points[downloaded_buffer1[max_answers * query_idx + i]].z
<< " (distance: " << dist(points[downloaded_buffer1[max_answers * query_idx +
i]], queries[query_idx]) << ")" << std::endl;
}

std::cout << "Neighbors within individual radius search at ("


<< queries[query_idx].x << " "
<< queries[query_idx].y << " "
<< queries[query_idx].z << ") with radius=" << radiuses[query_idx] << std::endl;
for (i = 0; i < sizes2[query_idx]; ++i)
{
std::cout << " " << points[downloaded_buffer2[max_answers * query_idx + i]].x
<< " " << points[downloaded_buffer2[max_answers * query_idx + i]].y


<< " " << points[downloaded_buffer2[max_answers * query_idx + i]].z


<< " (distance: " << dist(points[downloaded_buffer2[max_answers * query_idx +
i]], queries[query_idx]) << ")" << std::endl;
}

std::cout << "Neighbors within indices radius search at ("


<< queries[query_idx].x << " "
<< queries[query_idx].y << " "
<< queries[query_idx].z << ") with radius=" << shared_radius << std::endl;
for (i = 0; i < sizes3[query_idx/2]; ++i)
{
std::cout << " " << points[downloaded_buffer3[max_answers * query_idx / 2 + i]].x
<< " " << points[downloaded_buffer3[max_answers * query_idx / 2 + i]].y
<< " " << points[downloaded_buffer3[max_answers * query_idx / 2 + i]].z
<< " (distance: " << dist(points[downloaded_buffer3[max_answers * query_idx /
2 + i]], queries[2]) << ")" << std::endl;
}

std::cout << "Approximate nearest neighbor at ("


<< queries[query_idx].x << " "
<< queries[query_idx].y << " "
<< queries[query_idx].z << ")" << std::endl;
std::cout << " " << points[result_device_ann.data[query_idx]].x
<< " " << points[result_device_ann.data[query_idx]].y
<< " " << points[result_device_ann.data[query_idx]].z
<< " (distance: " << std::sqrt(dists_device_ann[query_idx]) << ")" << std::endl;

std::cout << "K-nearest neighbors (k = " << k << ") at ("


<< queries[query_idx].x << " "
<< queries[query_idx].y << " "
<< queries[query_idx].z << ")" << std::endl;
for (i = query_idx * k; i < (query_idx + 1) * k; ++i)
{
std::cout << " " << points[result_device_knn.data[i]].x
<< " " << points[result_device_knn.data[i]].y
<< " " << points[result_device_knn.data[i]].z
<< " (distance: " << std::sqrt(dists_device_knn[i]) << ")" << std::endl;
}
}
4. Create a CMakeLists.txt file:
vim CMakeLists.txt
5. Place the following inside the file:
cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
set(target oneapi_octree_search)
set(CMAKE_CXX_COMPILER dpcpp)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_FLAGS "-Wall -Wpedantic -Wno-unknown-pragmas -Wno-pass-failed -Wno-unneeded-
internal-declaration -Wno-unused-function -Wno-gnu-anonymous-struct -Wno-nested-anon-types -Wno-
extra-semi -Wno-unused-local-typedef -fsycl -fsycl-unnamed-lambda -ferror-limit=1")
project(${target})

find_package(PCL 1.12 REQUIRED)


find_package(PCL-ONEAPI 1.12 REQUIRED)

include_directories(${PCL_INCLUDE_DIRS} ${PCL-ONEAPI_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS} ${PCL-ONEAPI_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS} ${PCL-ONEAPI_DEFINITIONS})

add_executable (${target} oneapi_octree_search.cpp)
target_link_libraries (${target} sycl pcl_oneapi_containers pcl_oneapi_octree pcl_octree)
6. Source the Intel® oneAPI Base Toolkit environment:

export PATH=/home/eiforamr/workspace/lib/pcl/share/pcl-1.12:/home/eiforamr/workspace/lib/pcl/
share/pcl-oneapi-1.12:$PATH
source /opt/intel/oneapi/setvars.sh
7. Build the code:
cd /home/eiforamr/workspace/octree/
mkdir build && cd build
cmake ../
make -j
8. Run the binary:
./oneapi_octree_search
Expected results example:

Neighbors within shared radius search at (671.675 733.78 466.178) with radius=34.1333
    660.296 725.957 439.677 (distance: 29.8829)
    665.768 721.884 442.919 (distance: 26.7846)
    683.988 714.608 445.164 (distance: 30.9962)
    677.927 725.08 446.531 (distance: 22.3788)
    695.066 723.509 445.762 (distance: 32.7028)
Neighbors within individual radius search at (671.675 733.78 466.178) with radius=19.3623
    672.71 736.679 447.835 (distance: 18.6)
    664.46 731.504 452.074 (distance: 16.0048)
    671.238 725.881 461.408 (distance: 9.23819)
    667.707 718.527 466.622 (distance: 15.7669)
    654.552 733.636 467.795 (distance: 17.1993)
Neighbors within indices radius search at (671.675 733.78 466.178) with radius=34.1333
    660.296 725.957 439.677 (distance: 29.8829)
    665.768 721.884 442.919 (distance: 26.7846)
    683.988 714.608 445.164 (distance: 30.9962)
    677.927 725.08 446.531 (distance: 22.3788)
    695.066 723.509 445.762 (distance: 32.7028)

The search returns only the first five neighbors found (as specified by max_answers), so searching with a
different radius can return a different set of points.
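As a minimal sketch (reusing the objects from the sample above; the larger limit is a hypothetical value), raising max_answers allocates a larger result buffer so that each query can return more of the neighbors that lie within the shared radius:

//Minimal sketch: hypothetical larger result limit
const int max_answers_large = 10;
pcl::oneapi::NeighborIndices result_device_large(queries_device.size(), max_answers_large);
octree_device.radiusSearch(queries_device, shared_radius, max_answers_large, result_device_large);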

Code Explanation
Generate the point cloud data, queries, radiuses, and indices with random numbers.

//Generate point cloud data, queries, radiuses, indices


srand (0);
points.resize(data_size);
for(i = 0; i < data_size; ++i)
{
points[i].x = ((float)rand())/(float)RAND_MAX * cube_size;
points[i].y = ((float)rand())/(float)RAND_MAX * cube_size;
points[i].z = ((float)rand())/(float)RAND_MAX * cube_size;
}

queries.resize(query_size);
radiuses.resize(query_size);
for (i = 0; i < query_size; ++i)
{
queries[i].x = ((float)rand())/(float)RAND_MAX * cube_size;
queries[i].y = ((float)rand())/(float)RAND_MAX * cube_size;
queries[i].z = ((float)rand())/(float)RAND_MAX * cube_size;
radiuses[i] = ((float)rand())/(float)RAND_MAX * max_radius;
};


indices.resize(query_size / 2);
for(i = 0; i < query_size / 2; ++i)
{
indices[i] = i * 2;
}
Create and build the Intel® oneAPI Base Toolkit point cloud; then upload the queries and radiuses to an Intel®
oneAPI Base Toolkit device.

//Prepare oneAPI cloud


pcl::oneapi::Octree::PointCloud cloud_device;
cloud_device.upload(points);

//oneAPI build
pcl::oneapi::Octree octree_device;
octree_device.setCloud(cloud_device);
octree_device.build();

//Upload queries and radiuses


pcl::oneapi::Octree::Queries queries_device;
pcl::oneapi::Octree::Radiuses radiuses_device;
queries_device.upload(queries);
radiuses_device.upload(radiuses);
Create output buffers on the Intel® oneAPI Base Toolkit device from which the search results can be downloaded.

//Prepare output buffers on device


pcl::oneapi::NeighborIndices result_device1(queries_device.size(), max_answers);
pcl::oneapi::NeighborIndices result_device2(queries_device.size(), max_answers);
pcl::oneapi::NeighborIndices result_device3(indices.size(), max_answers);
pcl::oneapi::NeighborIndices result_device_ann(queries_device.size(), 1);
pcl::oneapi::Octree::ResultSqrDists dists_device_ann;
pcl::oneapi::NeighborIndices result_device_knn(queries_device.size(), k);
pcl::oneapi::Octree::ResultSqrDists dists_device_knn;
The first radius search method is “search with shared radius”. In this search method, all queries use the same
radius to find the neighbors.

//oneAPI octree radius search with shared radius


octree_device.radiusSearch(queries_device, shared_radius, max_answers, result_device1);
The second radius search method is “search with individual radius”. In this search method, each query uses
its own specific radius to find the neighbors.

//oneAPI octree radius search with individual radius


octree_device.radiusSearch(queries_device, radiuses_device, max_answers, result_device2);
The third radius search method is “search with shared radius using indices”. In this search method, all
queries use the same radius, and indices specify the queries.

//oneAPI octree radius search with shared radius using indices to specify
//the queries.
pcl::oneapi::Octree::Indices cloud_indices;
cloud_indices.upload(indices);
octree_device.radiusSearch(queries_device, cloud_indices, shared_radius, max_answers,
result_device3);

Download the search results from the Intel® oneAPI Base Toolkit device. Each sizes vector contains the number
of neighbors found for each query. Each downloaded_buffer vector contains the indices of the neighbors found
for each query; the entries for a given query start at offset max_answers * query index.

//Download results
std::vector<int> sizes1;
std::vector<int> sizes2;
std::vector<int> sizes3;
result_device1.sizes.download(sizes1);
result_device2.sizes.download(sizes2);
result_device3.sizes.download(sizes3);

std::vector<int> downloaded_buffer1, downloaded_buffer2, downloaded_buffer3, results_batch;


result_device1.data.download(downloaded_buffer1);
result_device2.data.download(downloaded_buffer2);
result_device3.data.download(downloaded_buffer3);
Print the query, radius, and found neighbors to verify that the result is correct.

int query_idx = 2;
std::cout << "Neighbors within shared radius search at ("
<< queries[query_idx].x << " "
<< queries[query_idx].y << " "
<< queries[query_idx].z << ") with radius=" << shared_radius << std::endl;
for (i = 0; i < sizes1[query_idx]; ++i)
{
std::cout << " " << points[downloaded_buffer1[max_answers * query_idx + i]].x
<< " " << points[downloaded_buffer1[max_answers * query_idx + i]].y
<< " " << points[downloaded_buffer1[max_answers * query_idx + i]].z
<< " (distance: " << dist(points[downloaded_buffer1[max_answers * query_idx +
i]], queries[query_idx]) << ")" << std::endl;
}

std::cout << "Neighbors within individual radius search at ("


<< queries[query_idx].x << " "
<< queries[query_idx].y << " "
<< queries[query_idx].z << ") with radius=" << radiuses[query_idx] << std::endl;
for (i = 0; i < sizes2[query_idx]; ++i)
{
std::cout << " " << points[downloaded_buffer2[max_answers * query_idx + i]].x
<< " " << points[downloaded_buffer2[max_answers * query_idx + i]].y
<< " " << points[downloaded_buffer2[max_answers * query_idx + i]].z
<< " (distance: " << dist(points[downloaded_buffer2[max_answers * query_idx +
i]], queries[query_idx]) << ")" << std::endl;
}

std::cout << "Neighbors within indices radius search at ("


<< queries[query_idx].x << " "
<< queries[query_idx].y << " "
<< queries[query_idx].z << ") with radius=" << shared_radius << std::endl;
for (i = 0; i < sizes3[query_idx/2]; ++i)
{
std::cout << " " << points[downloaded_buffer3[max_answers * query_idx / 2 + i]].x
<< " " << points[downloaded_buffer3[max_answers * query_idx / 2 + i]].y
<< " " << points[downloaded_buffer3[max_answers * query_idx / 2 + i]].z
<< " (distance: " << dist(points[downloaded_buffer3[max_answers * query_idx /
2 + i]], queries[2]) << ")" << std::endl;

Constructing a Convex Hull Polygon for a 3D Point Cloud


The Intel® oneAPI Base Toolkit optimization for convex hull is ported from the CUDA optimization code, and its
result matches the result of the CUDA implementation. However, it does not match the result of the PCL CPU
implementation.
This tutorial shows how to use these optimizations inside a Docker* image. For the same functionality
outside of Docker* images, see PCL Optimizations Outside of Docker* Images.
1. Prepare the environment:
cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
./run_interactive_docker.sh eiforamr-full-flavour-sdk:2022.3 root -c full_flavor
mkdir convex_hull && cd convex_hull
2. Create the file oneapi_convex_hull.cpp:

vim oneapi_convex_hull.cpp
3. Place the following inside the file:
#include <pcl/oneapi/surface/convex_hull.h>
#include <pcl/io/pcd_io.h>
#include <pcl/PolygonMesh.h>
#include <pcl/surface/convex_hull.h>
#include <pcl/visualization/pcl_visualizer.h>

using namespace pcl::oneapi;

int main (int argc, char** argv)


{
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_ptr( new pcl::PointCloud<pcl::PointXYZ>() );
pcl::PointCloud<pcl::PointXYZ>::Ptr convex_ptr;

int result = pcl::io::loadPCDFile(argv[1], *cloud_ptr);


if (result != 0)
{
pcl::console::print_info ("Load pcd file failed.\n");
return result;
}

PseudoConvexHull3D pch(1e5);

PseudoConvexHull3D::Cloud cloud_device;
PseudoConvexHull3D::Cloud convex_device;

cloud_device.upload(cloud_ptr->points);

pch.reconstruct(cloud_device, convex_device);

convex_ptr.reset(new pcl::PointCloud<pcl::PointXYZ>((int)convex_device.size(), 1));


convex_device.download(convex_ptr->points);

pcl::PolygonMesh mesh;
pcl::ConvexHull<pcl::PointXYZ> ch;
ch.setInputCloud(convex_ptr);
ch.reconstruct(mesh);

pcl::console::print_info ("Starting visualizer... Close window to exit.\n");


pcl::visualization::PCLVisualizer vis;
vis.addPointCloud (cloud_ptr);
vis.addPolylineFromPolygonMesh(mesh);

vis.resetCamera ();
vis.spin ();
}
4. Create a CMakeLists.txt file:
vim CMakeLists.txt
5. Place the following inside the file:
cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
set(target oneapi_convex_hull)
set(CMAKE_CXX_COMPILER dpcpp)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_FLAGS "-Wall -Wpedantic -Wno-unknown-pragmas -Wno-pass-failed -Wno-unneeded-
internal-declaration -Wno-unused-function -Wno-gnu-anonymous-struct -Wno-nested-anon-types -Wno-
extra-semi -Wno-unused-local-typedef -fsycl -fsycl-unnamed-lambda -ferror-limit=1")
project(${target})

find_package(PCL 1.12 REQUIRED)


find_package(PCL-ONEAPI 1.12 REQUIRED)

include_directories(${PCL_INCLUDE_DIRS} ${PCL-ONEAPI_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS} ${PCL-ONEAPI_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS} ${PCL-ONEAPI_DEFINITIONS})

add_executable (${target} oneapi_convex_hull.cpp)


target_link_libraries (${target} sycl pcl_surface pcl_visualization pcl_oneapi_surface)
6. Source the Intel® oneAPI Base Toolkit environment:

export PATH=/home/eiforamr/workspace/lib/pcl/share/pcl-1.12:/home/eiforamr/workspace/lib/pcl/
share/pcl-oneapi-1.12:$PATH
source /opt/intel/oneapi/setvars.sh
7. Build the code:
cd /home/eiforamr/workspace/convex_hull/
mkdir build && cd build
cmake ../
make -j
8. Download the test data from GitHub*:
wget https://raw.githubusercontent.com/PointCloudLibrary/pcl/master/test/bun0.pcd
# if the binary is not downloaded try setting the proxies first and try again:
export http_proxy="http://<http_proxy>:port"
export https_proxy="http://<https_proxy>:port"
9. Run the binary:
./oneapi_convex_hull ./bun0.pcd
Expected results: a convex hull triangle mesh

Code Explanation
Load the test data from GitHub* into a PointCloud<PointXYZ>.

pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_ptr( new pcl::PointCloud<pcl::PointXYZ>() );


pcl::PointCloud<pcl::PointXYZ>::Ptr convex_ptr;

int result = pcl::io::loadPCDFile(argv[1], *cloud_ptr);


if (result != 0)
{
pcl::console::print_info ("Load pcd file failed.\n");
return result;
}


Create GPU input and output device arrays, and load point cloud data into the input device array.

PseudoConvexHull3D::Cloud cloud_device;
PseudoConvexHull3D::Cloud convex_device;

cloud_device.upload(cloud_ptr->points);
GPU: Perform reconstruction, and generate convex hull vertices.

pch.reconstruct(cloud_device, convex_device);
Download the convex hull vertices from the GPU to the CPU.

convex_device.download(convex_ptr->points);
CPU: Perform reconstruction to generate the convex hull mesh.

pcl::PolygonMesh mesh;
pcl::ConvexHull<pcl::PointXYZ> ch;
ch.setInputCloud(convex_ptr);
ch.reconstruct(mesh);
Visualize the convex hull mesh results.

pcl::console::print_info ("Starting visualizer... Close window to exit.\n");


pcl::visualization::PCLVisualizer vis;
vis.addPointCloud (cloud_ptr);
vis.addPolylineFromPolygonMesh(mesh);
vis.resetCamera ();
vis.spin ();

Detecting Specific Models and Their Parameters in 3D Point Clouds

This tutorial shows how to use these optimizations inside a Docker* image. For the same functionality
outside of Docker* images, see PCL Optimizations Outside of Docker* Images.
1. Prepare the environment:
cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
./run_interactive_docker.sh eiforamr-full-flavour-sdk:2022.3 root -c full_flavor
mkdir sample_consensus && cd sample_consensus
2. Create the file oneapi_sample_consensus.cpp:

vim oneapi_sample_consensus.cpp
3. Place the following inside the file:
/*
* Software License Agreement (BSD License)
*
* Point Cloud Library (PCL) - www.pointclouds.org
* Copyright (c) 2010-2012, Willow Garage, Inc.
* Copyright (c) 2014-, Open Perception, Inc.
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above

* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials provided
* with the distribution.
* * Neither the name of the copyright holder(s) nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
* FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
* COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
* CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*
*/

#include <pcl/oneapi/sample_consensus/sac_model_plane.h>
#include <pcl/oneapi/sample_consensus/ransac.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

int main (int argc, char** argv)


{
// Read Point Cloud
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_ptr( new pcl::PointCloud<pcl::PointXYZ>() );
pcl::PointCloud<pcl::PointXYZ>::Ptr convex_ptr;
int result = pcl::io::loadPCDFile(argv[1], *cloud_ptr);
if (result != 0)
{
pcl::console::print_info ("Load pcd file failed.\n");
return result;
}

// Prepare Device Point Cloud Memory


pcl::oneapi::SampleConsensusModel::PointCloud_xyz cloud_device_xyz;
cloud_device_xyz.upload(cloud_ptr->points);
pcl::oneapi::SampleConsensusModel::PointCloud & cloud_device =
(pcl::oneapi::SampleConsensusModel::PointCloud &)cloud_device_xyz;

// Algorithm tests
typename pcl::oneapi::SampleConsensusModelPlane::Ptr sac_model (new
pcl::oneapi::SampleConsensusModelPlane (cloud_device));
pcl::oneapi::RandomSampleConsensus sac (sac_model);
sac.setMaxIterations (10000);
sac.setDistanceThreshold (0.03);
result = sac.computeModel ();

// Best model
pcl::oneapi::SampleConsensusModelPlane::Indices sample;
sac.getModel (sample);


// Coefficient
pcl::oneapi::SampleConsensusModelPlane::Coefficients coeffs;
sac.getModelCoefficients (coeffs);

// Inliers
pcl::Indices pcl_inliers;
int inliers_size = sac.getInliersSize ();
pcl_inliers.resize(inliers_size);

pcl::oneapi::SampleConsensusModelPlane::IndicesPtr inliers = sac.getInliers ();


inliers->download(pcl_inliers.data(), 0, inliers_size);

// Refined coefficient
pcl::oneapi::SampleConsensusModelPlane::Coefficients coeff_refined;
sac_model->optimizeModelCoefficients (*cloud_ptr, pcl_inliers, coeffs, coeff_refined);

// print log
std::cout << "input cloud size: " << cloud_ptr->points.size() << std::endl;
std::cout << "inliers size : " << inliers_size << std::endl;
std::cout << " plane model coefficient: " << coeffs[0] << ", " << coeffs[1] << ", " <<
coeffs[2] << ", " << coeffs[3] << std::endl;
std::cout << " Optimized coefficient : " << coeff_refined[0] << ", " << coeff_refined[1] <<
", " << coeff_refined[2] << ", " << coeff_refined[3] << std::endl;
}
4. Create a CMakeLists.txt file:
vim CMakeLists.txt
5. Place the following inside the file:
cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
set(target oneapi_sample_consensus)
set(CMAKE_CXX_COMPILER dpcpp)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_FLAGS "-Wall -Wpedantic -Wno-unknown-pragmas -Wno-pass-failed -Wno-unneeded-
internal-declaration -Wno-unused-function -Wno-gnu-anonymous-struct -Wno-nested-anon-types -Wno-
extra-semi -Wno-unused-local-typedef -fsycl -fsycl-unnamed-lambda -ferror-limit=1")
project(${target})

find_package(PCL 1.12 REQUIRED)


find_package(PCL-ONEAPI 1.12 REQUIRED)

include_directories(${PCL_INCLUDE_DIRS} ${PCL-ONEAPI_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS} ${PCL-ONEAPI_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS} ${PCL-ONEAPI_DEFINITIONS})

add_executable (${target} oneapi_sample_consensus.cpp)


target_link_libraries (${target} sycl pcl_oneapi_sample_consensus ${PCL_LIBRARIES})
6. Source the Intel® oneAPI Base Toolkit environment:

export PATH=/home/eiforamr/workspace/lib/pcl/share/pcl-1.12:/home/eiforamr/workspace/lib/pcl/
share/pcl-oneapi-1.12:$PATH
source /opt/intel/oneapi/setvars.sh
7. Build the code:
cd /home/eiforamr/workspace/sample_consensus/
mkdir build && cd build
cmake ../
make -j

8. Download the test data from GitHub*:
wget https://raw.githubusercontent.com/PointCloudLibrary/data/
5c26bdd0591ba150b91858b5c9fe5e91cb39ae86/segmentation/mOSD/test/test59.pcd
# if the binary is not downloaded try setting the proxies first and try again:
export http_proxy="http://<http_proxy>:port"
export https_proxy="http://<https_proxy>:port"
9. Run the binary:
./oneapi_sample_consensus ./test59.pcd
Expected results example:
input cloud size: 307200
inliers size : 77316
plane model coefficient: -0.0789502, -0.816661, -0.571692, 0.546386
Optimized coefficient : -0.0722213, -0.818286, -0.570256, 0.547587

Code Explanation
Load the test data from GitHub* into a PointCloud<PointXYZ>.

// Read Point Cloud


pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_ptr( new pcl::PointCloud<pcl::PointXYZ>() );
pcl::PointCloud<pcl::PointXYZ>::Ptr convex_ptr;
int result = pcl::io::loadPCDFile(argv[1], *cloud_ptr);
if (result != 0)
{
pcl::console::print_info ("Load pcd file failed.\n");
return result;
}
Create GPU input and output device arrays, and load point cloud data into the input device array.

// Prepare Device Point Cloud Memory


pcl::oneapi::SampleConsensusModel::PointCloud_xyz cloud_device_xyz;
cloud_device_xyz.upload(cloud_ptr->points);
pcl::oneapi::SampleConsensusModel::PointCloud & cloud_device =
(pcl::oneapi::SampleConsensusModel::PointCloud &)cloud_device_xyz;
GPU: Start computing the model.

typename pcl::oneapi::SampleConsensusModelPlane::Ptr sac_model (new


pcl::oneapi::SampleConsensusModelPlane (cloud_device));
pcl::oneapi::RandomSampleConsensus sac (sac_model);
sac.setMaxIterations (10000);
sac.setDistanceThreshold (0.03);
result = sac.computeModel ();
Result (best model):

pcl::oneapi::SampleConsensusModelPlane::Indices sample;
sac.getModel (sample);
Result (coefficient model):

pcl::oneapi::SampleConsensusModelPlane::Coefficients coeffs;
sac.getModelCoefficients (coeffs);
Result (inliers model):

pcl::Indices pcl_inliers;
int inliers_size = sac.getInliersSize ();
pcl_inliers.resize(inliers_size);


pcl::oneapi::SampleConsensusModelPlane::IndicesPtr inliers = sac.getInliers ();


inliers->download(pcl_inliers.data(), 0, inliers_size);
Result (refined coefficient model):

pcl::oneapi::SampleConsensusModelPlane::Coefficients coeff_refined;
sac_model->optimizeModelCoefficients (*cloud_ptr, pcl_inliers, coeffs, coeff_refined);
Result (output log):

std::cout << "input cloud size: " << cloud_ptr->points.size() << std::endl;
std::cout << "inliers size : " << inliers_size << std::endl;
std::cout << " plane model coefficient: " << coeffs[0] << ", " << coeffs[1] << ", " <<
coeffs[2] << ", " << coeffs[3] << std::endl;
std::cout << " Optimized coefficient : " << coeff_refined[0] << ", " << coeff_refined[1] <<
", " << coeff_refined[2] << ", " << coeff_refined[3] << std::endl;

Plane Model Segmentation

This tutorial shows how to use these optimizations inside a Docker* image. For the same functionality
outside of Docker* images, see PCL Optimizations Outside of Docker* Images.
1. Prepare the environment:
cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
./run_interactive_docker.sh eiforamr-full-flavour-sdk:2022.3 root -c full_flavor
mkdir one_api_segmentation && cd one_api_segmentation
2. Create the file oneapi_segmentation.cpp:

vim oneapi_segmentation.cpp
3. Place the following inside the file:

#include <pcl/oneapi/segmentation/segmentation.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/pcl_config.h>

int main (int argc, char **argv)


{
//Read Point Cloud
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_input (new pcl::PointCloud<pcl::PointXYZ> ());

//Load a standard PCD file from disk


int result = pcl::io::loadPCDFile(argv[1], *cloud_input);
if (result != 0)
{
pcl::console::print_info ("Load pcd file failed.\n");
return result;
}

//Create the oneapi_segmentation object


pcl::oneapi::SACSegmentation seg;

//Configure oneapi_segmentation class


seg.setInputCloud(cloud_input);
seg.setProbability(0.99);
seg.setMaxIterations(50);
seg.setDistanceThreshold(0.01);

//Optional
seg.setOptimizeCoefficients(true);
//Set algorithm method and model type
seg.setMethodType(pcl::oneapi::SAC_RANSAC);
seg.setModelType (pcl::oneapi::SACMODEL_PLANE);

//Out parameter declaration for getting inliers and model coefficients


pcl::PointIndices::Ptr inliers (new pcl::PointIndices);
double coeffs[4]={0,0,0,0};

//Getting inliers and model coefficients


seg.segment(*inliers, coeffs);

std::cout << "input cloud size : " << seg.getCloudSize() << std::endl;
std::cout << "inliers size : " << seg.getInliersSize() << std::endl;
std::cout << "model coefficients : " << coeffs[0] << ", " << coeffs[1] << ", " << coeffs[2]
<< ", " << coeffs[3] << std::endl;

return 0;
}
4. Create a CMakeLists.txt file:
vim CMakeLists.txt
5. Place the following inside the file:
cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
set(target oneapi_segmentation)
set(CMAKE_CXX_COMPILER dpcpp)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_FLAGS "-Wall -Wpedantic -Wno-unknown-pragmas -Wno-pass-failed -Wno-unneeded-
internal-declaration -Wno-unused-function -Wno-gnu-anonymous-struct -Wno-nested-anon-types -Wno-
extra-semi -Wno-unused-local-typedef -fsycl -fsycl-unnamed-lambda -ferror-limit=1")
project(${target})

find_package(PCL 1.12 REQUIRED)


find_package(PCL-ONEAPI 1.12 REQUIRED)

include_directories(${PCL_INCLUDE_DIRS} ${PCL-ONEAPI_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS} ${PCL-ONEAPI_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS} ${PCL-ONEAPI_DEFINITIONS})

add_executable (${target} oneapi_segmentation.cpp)


target_link_libraries (${target} sycl pcl_oneapi_containers pcl_oneapi_segmentation ${PCL_LIBRARIES})
6. Source the Intel® oneAPI Base Toolkit environment:

export PATH=/home/eiforamr/workspace/lib/pcl/share/pcl-1.12:/home/eiforamr/workspace/lib/pcl/
share/pcl-oneapi-1.12:$PATH
source /opt/intel/oneapi/setvars.sh
7. Build the code:
cd /home/eiforamr/workspace/one_api_segmentation/
mkdir build && cd build
cmake ../
make -j


8. Download the test data from GitHub*:


wget https://raw.githubusercontent.com/PointCloudLibrary/data/
5c26bdd0591ba150b91858b5c9fe5e91cb39ae86/segmentation/mOSD/test/test59.pcd
# if the binary is not downloaded try setting the proxies first and try again:
export http_proxy="http://<http_proxy>:port"
export https_proxy="http://<https_proxy>:port"
9. Run the binary:
./oneapi_segmentation ./test59.pcd
Expected results example:
input cloud size : 307200
inliers size : 25332
model coefficients : -0.176599, -1.87228, -1.08408, 1

Code Explanation
Load the test data from GitHub* into a PointCloud<PointXYZ>.

//Read Point Cloud


pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_input (new pcl::PointCloud<pcl::PointXYZ> ());
int result = pcl::io::loadPCDFile(argv[1], *cloud_input);

Create the oneapi_segmentation object.

pcl::oneapi::SACSegmentation seg;
Configure the oneapi_segmentation class.

seg.setInputCloud(cloud_input);
seg.setProbability(0.99);
seg.setMaxIterations(50);
seg.setDistanceThreshold(0.01);
//Optional
seg.setOptimizeCoefficients(true);
//Set algorithm method and model type
seg.setMethodType(pcl::oneapi::SAC_RANSAC);
seg.setModelType (pcl::oneapi::SACMODEL_PLANE);
Set to true if a coefficient refinement is required.

seg.setOptimizeCoefficients(true);
Set the algorithm method and model type.

seg.setMethodType(pcl::oneapi::SAC_RANSAC);
seg.setModelType (pcl::oneapi::SACMODEL_PLANE);
Declare output parameters for getting inliers and model coefficients.

pcl::PointIndices::Ptr inliers (new pcl::PointIndices);


double coeffs[4]={0,0,0,0};
Get inliers and model coefficients by calling the segment() API.

seg.segment(*inliers, coeffs);
Result (output log):

std::cout << "input cloud size : " << seg.getCloudSize() << std::endl;
std::cout << "inliers size : " << seg.getInliersSize() << std::endl;
std::cout << "model coefficients : " << coeffs[0] << ", " << coeffs[1] << ", " << coeffs[2]
<< ", " << coeffs[3] << std::endl;

Surface Reconstruction with Intel® oneAPI Base Toolkit's Moving Least Squares (MLS)

MLS creates a 3D surface from a point cloud through either down-sampling or up-sampling. Intel® oneAPI
Base Toolkit's MLS is based on the original MLS API. Differences between the two:
• Intel® oneAPI Base Toolkit's MLS calculates with 32-bit float instead of 64-bit double.
• Intel® oneAPI Base Toolkit's MLS constructs its surface as a set of indices grouped into multiple blocks.
This consumes more system memory than the original version. Control the block size with
setSearchBlockSize (see the sketch after this list).
• Intel® oneAPI Base Toolkit's MLS improves the performance of all up-sampling methods.
• The Intel® oneAPI Base Toolkit namespace must be appended to the original MovingLeastSquares class.
See resampling.rst for details.
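A minimal sketch of the block size control, assuming the pcl::oneapi::MovingLeastSquares object configured as in the tutorial below (the argument value and the exact signature of setSearchBlockSize are assumptions based on the description above):

// Minimal sketch (illustrative value): setSearchBlockSize controls how many
// surface indices are grouped into each block, and thus how much system
// memory the block structure consumes.
pcl::oneapi::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls;
mls.setSearchBlockSize (256);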
This tutorial shows how to use these optimizations inside a Docker* image. For the same functionality
outside of Docker* images, see PCL Optimizations Outside of Docker* Images.
1. Prepare the environment:
cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
./run_interactive_docker.sh eiforamr-full-flavour-sdk:2022.3 root -c full_flavor
mkdir mls && cd mls
2. Create the file oneapi_mls.cpp:

vim oneapi_mls.cpp
3. Place the following inside the file:
#include <pcl/oneapi/surface/mls.h>
#include <pcl/oneapi/search/kdtree.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>

using namespace pcl::oneapi;

int main (int argc, char** argv)


{
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_ptr( new pcl::PointCloud<pcl::PointXYZ>() );

// Load bun0.pcd -- should be available with the PCL archive in test


pcl::io::loadPCDFile (argv[1], *cloud_ptr);

pcl::oneapi::KdTreeFLANN<pcl::PointXYZ>::Ptr tree (new


pcl::oneapi::KdTreeFLANN<pcl::PointXYZ>);

// Output has the PointNormal type in order to store the normals calculated by MLS
pcl::PointCloud<pcl::PointNormal> mls_points;

// Init object (second point type is for the normals, even if unused)
pcl::oneapi::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls;

mls.setComputeNormals (true);

// Set parameters
mls.setInputCloud (cloud_ptr);
mls.setPolynomialOrder (2);
mls.setSearchMethod (tree);
mls.setSearchRadius (0.03);

// Reconstruct


mls.process (mls_points);

// Save output
pcl::io::savePCDFile ("bun0-mls.pcd", mls_points);
}

4. Create a CMakeLists.txt file:


vim CMakeLists.txt
5. Place the following inside the file:
cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
set(target oneapi_mls)
set(CMAKE_CXX_COMPILER dpcpp)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_FLAGS "-Wall -Wpedantic -Wno-unknown-pragmas -Wno-pass-failed -Wno-unneeded-
internal-declaration -Wno-unused-function -Wno-gnu-anonymous-struct -Wno-nested-anon-types -Wno-
extra-semi -Wno-unused-local-typedef -fsycl -fsycl-unnamed-lambda -ferror-limit=1")
project(${target})

find_package(PCL 1.12 REQUIRED)


find_package(PCL-ONEAPI 1.12 REQUIRED)
include_directories(${PCL_INCLUDE_DIRS} ${PCL-ONEAPI_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS} ${PCL-ONEAPI_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS} ${PCL-ONEAPI_DEFINITIONS})

add_executable (${target} oneapi_mls.cpp)


target_link_libraries (${target} sycl pcl_oneapi_containers pcl_oneapi_surface
pcl_oneapi_kdtree ${PCL_LIBRARIES})
6. Source the Intel® oneAPI Base Toolkit environment:

export PATH=/home/eiforamr/workspace/lib/pcl/share/pcl-1.12:/home/eiforamr/workspace/lib/pcl/
share/pcl-oneapi-1.12:$PATH
source /opt/intel/oneapi/setvars.sh
7. Build the code:
cd /home/eiforamr/workspace/mls/
mkdir build && cd build
cmake ../
make -j
8. Download the test data from GitHub*:
wget https://raw.githubusercontent.com/PointCloudLibrary/pcl/master/test/bun0.pcd
# if the binary is not downloaded try setting the proxies first and try again:
export http_proxy="http://<http_proxy>:port"
export https_proxy="http://<https_proxy>:port"
9. Run the binary:
./oneapi_mls ./bun0.pcd
To see the smoothed cloud (zoom out to see the reconstructed shape):
/home/eiforamr/workspace/lib/pcl/bin/pcl_viewer bun0-mls.pcd

Code Explanation
Intel® oneAPI Base Toolkit's MLS requires these headers.

#include <pcl/oneapi/surface/mls.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>

Load the test data from GitHub* into a PointCloud<PointXYZ> (these fields are mandatory; other fields are
allowed and preserved).

pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_ptr( new pcl::PointCloud<pcl::PointXYZ>() );

// Load bun0.pcd -- should be available with the PCL archive in test


pcl::io::loadPCDFile (argv[1], *cloud_ptr);
If normal estimation is required:

mls.setComputeNormals (true);
Append the Intel® oneAPI Base Toolkit namespace to the original MovingLeastSquares class. The first
template type is for the input and output cloud. Only the XYZ dimensions of the input are smoothed in the
output.

pcl::oneapi::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls;


The maximum polynomial order is five. See the code API (pcl:MovingLeastSquares) for default values and
additional parameters to control the smoothing process.

mls.setPolynomialOrder (2);
If the normal and original dimensions need to be in the same cloud, the fields have to be concatenated.

// Save output
pcl::io::savePCDFile ("bun0-mls.pcd", mls_points);

Intel® oneAPI Base Toolkit's Iterative Closest Point (ICP)

ICP is an algorithm used to minimize the difference between two point clouds. The standard ICP (not the joint
or generalized variants) has been optimized using the Intel® oneAPI Base Toolkit.
See registration_api for details.
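As a minimal sketch of the idea (the calls are the same ones used in the tutorial below, and src and tgt are assumed to be already-loaded PointXYZ clouds), ICP iteratively estimates the rigid transform that best aligns the source cloud to the target cloud:

// Minimal sketch, assuming src and tgt are already-loaded PointXYZ cloud pointers.
pcl::oneapi::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> reg;
reg.setMaximumIterations(20);         // stop after this many iterations
reg.setTransformationEpsilon(1e-12);  // or stop once the transform converges
reg.setMaxCorrespondenceDistance(2);  // ignore point pairs farther apart than this
reg.setInputSource(src);
reg.setInputTarget(tgt);
pcl::PointCloud<pcl::PointXYZ> aligned;
reg.align(aligned);                                // src transformed onto tgt
Eigen::Matrix4f T = reg.getFinalTransformation();  // the estimated rigid transform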
This tutorial shows how to use these optimizations inside a Docker* image. For the same functionality
outside of Docker* images, see PCL Optimizations Outside of Docker* Images.
1. Prepare the environment:
cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
./run_interactive_docker.sh eiforamr-full-flavour-sdk:2022.3 root -c full_flavor
mkdir one_api_registration && cd one_api_registration
2. Create the file oneapi_icp_example.cpp:

vim oneapi_icp_example.cpp
3. Place the following inside the file:
#include <pcl/oneapi/registration/icp.h>
#include <pcl/console/parse.h>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/point_representation.h>
#include <pcl/io/pcd_io.h>

using namespace pcl;


using namespace pcl::io;
using namespace pcl::console;

/* ---[ */
int
main (int argc, char** argv)


{
// Parse the command line arguments for .pcd files
std::vector<int> p_file_indices;
p_file_indices = parse_file_extension_argument (argc, argv, ".pcd");
if (p_file_indices.size () != 2)
{
print_error ("Need one input source PCD file and one input target PCD file to continue.\n");
print_error ("Example: %s source.pcd target.pcd\n", argv[0]);
return (-1);
}

// Load the files


print_info ("Loading %s as source and %s as target...\n", argv[p_file_indices[0]],
argv[p_file_indices[1]]);
PointCloud<PointXYZ>::Ptr src, tgt;
src.reset (new PointCloud<PointXYZ>);
tgt.reset (new PointCloud<PointXYZ>);
if (loadPCDFile (argv[p_file_indices[0]], *src) == -1 || loadPCDFile (argv[p_file_indices[1]],
*tgt) == -1)
{
print_error ("Error reading the input files!\n");
return (-1);
}

PointCloud<PointXYZ> output;
// Compute the best transformation
pcl::oneapi::IterativeClosestPoint<PointXYZ, PointXYZ> reg;
reg.setMaximumIterations(20);
reg.setTransformationEpsilon(1e-12);
reg.setMaxCorrespondenceDistance(2);

reg.setInputSource(src);
reg.setInputTarget(tgt);

// Register
reg.align(output); //point cloud output of alignment, i.e., the source cloud after the transformation is applied

Eigen::Matrix4f transform = reg.getFinalTransformation();

std::cerr << "Transform Matrix:" << std::endl;


std::cerr << transform << std::endl;
// Write transformed data to disk
savePCDFileBinary ("source_transformed.pcd", output);
}
/* ]--- */
4. Create a CMakeLists.txt file:
vim CMakeLists.txt
5. Place the following inside the file:
cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
set(target oneapi_icp_example)
set(CMAKE_CXX_COMPILER dpcpp)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_FLAGS "-Wall -Wpedantic -Wno-unknown-pragmas -Wno-pass-failed -Wno-unneeded-
internal-declaration -Wno-unused-function -Wno-gnu-anonymous-struct -Wno-nested-anon-types -Wno-
extra-semi -Wno-unused-local-typedef -fsycl -fsycl-unnamed-lambda -ferror-limit=1")
project(registration)

find_package(PCL 1.12 REQUIRED)
find_package(PCL-ONEAPI 1.12 REQUIRED)

add_executable (${target} oneapi_icp_example.cpp)

include_directories(${PCL_INCLUDE_DIRS} ${PCL-ONEAPI_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS} ${PCL-ONEAPI_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS} ${PCL-ONEAPI_DEFINITIONS})

target_link_libraries (${target} sycl pcl_oneapi_registration pcl_oneapi_search pcl_oneapi_kdtree pcl_io)

6. Source the Intel® oneAPI Base Toolkit environment:

export PATH=/home/eiforamr/workspace/lib/pcl/share/pcl-1.12:/home/eiforamr/workspace/lib/pcl/
share/pcl-oneapi-1.12:$PATH
source /opt/intel/oneapi/setvars.sh
7. Build the code:
cd /home/eiforamr/workspace/one_api_registration/
mkdir build && cd build
cmake ../
make -j
8. Download the test data from GitHub*:
wget https://raw.githubusercontent.com/NVIDIA-AI-IOT/cuPCL/main/cuOctree/test_P.pcd
wget https://raw.githubusercontent.com/NVIDIA-AI-IOT/cuPCL/main/cuOctree/test_Q.pcd
# if the binaries are not downloaded, try setting the proxies first and try again:
export http_proxy="http://<http_proxy>:port"
export https_proxy="http://<https_proxy>:port"
9. Run the binary:
./oneapi_icp_example test_P.pcd test_Q.pcd
Expected results example:
Transform Matrix:
0.998899 0.0107221 0.0457259 0.0790768
-0.00950837 0.999602 -0.0266773 0.0252976
-0.0459936 0.026213 0.998599 0.0677631
0 0 0 1

Code Explanation
Define two input point clouds (src, tgt), declare the output point cloud, and load the test data from GitHub*.

PointCloud<PointXYZ>::Ptr src, tgt;


src.reset (new PointCloud<PointXYZ>);
tgt.reset (new PointCloud<PointXYZ>);
if (loadPCDFile (argv[p_file_indices[0]], *src) == -1 || loadPCDFile (argv[p_file_indices[1]],
*tgt) == -1)
{
print_error ("Error reading the input files!\n");
return (-1);
}

PointCloud<PointXYZ> output;


Declare the Intel® oneAPI Base Toolkit's ICP, and set the input configuration parameters.

pcl::oneapi::IterativeClosestPoint<PointXYZ, PointXYZ> reg;


reg.setMaximumIterations(20);
reg.setTransformationEpsilon(1e-12);
reg.setMaxCorrespondenceDistance(2);
Set the two input point clouds for the ICP module, and call the method to align the two point clouds. The
align method populates the output point cloud, passed as a parameter, with the src point cloud transformed
using the computed transformation matrix.

reg.setInputSource(src);
reg.setInputTarget(tgt);

// Register
reg.align(output); //point cloud output of alignment, i.e., the source cloud after the transformation is applied
Get the computed transformation matrix, print it, and save the transformed point cloud.

Eigen::Matrix4f transform = reg.getFinalTransformation();

std::cerr << "Transform Matrix:" << std::endl;


std::cerr << transform << std::endl;
// Write transformed data to disk
savePCDFileBinary ("source_transformed.pcd", output);

KdTree Search Using Intel® oneAPI Base Toolkit's KdTree FLANN

Intel® oneAPI Base Toolkit's KdTree is similar to pcl::KdTreeFLANN, except that it can search all query points
in a single call and returns a two-dimensional vector of indices and distances covering the whole batch. Both
nearestKSearch and radiusSearch are supported.
See kdtree_search for details.
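As a minimal sketch of that difference (the types and call signature are the same ones used in the tutorial below; the pcl::KdTreeFLANN line is shown only for contrast and is commented out):

// Batched search: one call answers every point in searchPoints.
pcl::oneapi::search::KdTree<pcl::PointXYZ> kdtree;
kdtree.setInputCloud (cloud);
std::vector< pcl::Indices > indices (searchPoints->size());
std::vector< std::vector< float > > sqr_distances (searchPoints->size());
kdtree.nearestKSearch (*searchPoints, 5, indices, sqr_distances);

// For contrast, pcl::KdTreeFLANN answers one query point per call:
// flann_tree.nearestKSearch ((*searchPoints)[0], 5, single_indices, single_sqr_distances);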
This tutorial shows how to use these optimizations inside a Docker* image. For the same functionality
outside of Docker* images, see PCL Optimizations Outside of Docker* Images.
1. Prepare the environment:
cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
./run_interactive_docker.sh eiforamr-full-flavour-sdk:2022.3 root -c full_flavor
mkdir kdtree && cd kdtree
2. Create the file oneapi_kdtree.cpp:

vim oneapi_kdtree.cpp
3. Place the following inside the file:
#include <pcl/oneapi/search/kdtree.h> // for KdTree
#include <pcl/point_cloud.h>

#include <vector>
#include <iostream>

int
main (int argc, char** argv)
{
srand (time (NULL));

pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);

// Generate pointcloud data
cloud->width = 1000;
cloud->height = 1;
cloud->points.resize (cloud->width * cloud->height);

for (std::size_t i = 0; i < cloud->size (); ++i)


{
(*cloud)[i].x = static_cast<float>(1024.0f * rand () / (RAND_MAX + 1.0));
(*cloud)[i].y = static_cast<float>(1024.0f * rand () / (RAND_MAX + 1.0));
(*cloud)[i].z = static_cast<float>(1024.0f * rand () / (RAND_MAX + 1.0));
}

pcl::PointCloud<pcl::PointXYZ>::Ptr searchPoints (new pcl::PointCloud<pcl::PointXYZ>);

// Generate pointcloud data


searchPoints->width = 3;
searchPoints->height = 1;
searchPoints->points.resize (searchPoints->width * searchPoints->height);

for (std::size_t i = 0; i < searchPoints->size (); ++i)


{
(*searchPoints)[i].x = static_cast<float>(1024.0f * rand () / (RAND_MAX + 1.0));
(*searchPoints)[i].y = static_cast<float>(1024.0f * rand () / (RAND_MAX + 1.0));
(*searchPoints)[i].z = static_cast<float>(1024.0f * rand () / (RAND_MAX + 1.0));
}

pcl::oneapi::search::KdTree<pcl::PointXYZ> kdtree;
kdtree.setInputCloud (cloud);

// K nearest neighbor search


int K = 5;

std::vector< std::vector< float > > pointsSquaredDistance (searchPoints->size()) ;


std::vector< pcl::Indices > pointsIdxKnnSearch (searchPoints->size());

kdtree.nearestKSearch(*searchPoints, K, pointsIdxKnnSearch, pointsSquaredDistance);

for (std::size_t j = 0; j < pointsIdxKnnSearch.size(); ++j)


{
std::cout << "K=" << K << " neighbors from (" << (*searchPoints)[j].x << ","
<< (*searchPoints)[j].y << ","
<< (*searchPoints)[j].z << ")" << std::endl;
for (std::size_t i = 0; i < pointsIdxKnnSearch.at(j).size(); ++i)
{
std::cout << " " << (*cloud)[ pointsIdxKnnSearch.at(j)[i] ].x
<< " " << (*cloud)[ pointsIdxKnnSearch.at(j)[i] ].y
<< " " << (*cloud)[ pointsIdxKnnSearch.at(j)[i] ].z
<< " (squared distance: " << pointsSquaredDistance.at(j)[i] << ")" << std::endl;
}
}

// Neighbors within radius search


float radius = 100.f;

std::vector< std::vector< float > > pointsRadiusSquaredDistance (searchPoints->size()) ;


std::vector< pcl::Indices > pointsIdxRadiusSearch (searchPoints->size());

kdtree.radiusSearch(*searchPoints, radius, pointsIdxRadiusSearch,


pointsRadiusSquaredDistance, 10);

for (std::size_t j = 0; j < pointsIdxRadiusSearch.size(); ++j)


{
std::cout << "Radius=" << radius << " neighbors from (" << (*searchPoints)[j].x << ","
<< (*searchPoints)[j].y << ","
<< (*searchPoints)[j].z << ")" << std::endl;
for (std::size_t i = 0; i < pointsIdxRadiusSearch.at(j).size(); ++i)
{
std::cout << " " << (*cloud)[ pointsIdxRadiusSearch.at(j)[i] ].x
<< " " << (*cloud)[ pointsIdxRadiusSearch.at(j)[i] ].y
<< " " << (*cloud)[ pointsIdxRadiusSearch.at(j)[i] ].z
<< " (squared distance: " << pointsRadiusSquaredDistance.at(j)[i] << ")" <<
std::endl;
}
}

return 0;
}
4. Create a CMakeLists.txt file:
vim CMakeLists.txt
5. Place the following inside the file:
cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
set(target oneapi_kdtree)
set(CMAKE_CXX_COMPILER dpcpp)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_FLAGS "-Wall -Wpedantic -Wno-unknown-pragmas -Wno-pass-failed -Wno-unneeded-
internal-declaration -Wno-unused-function -Wno-gnu-anonymous-struct -Wno-nested-anon-types -Wno-
extra-semi -Wno-unused-local-typedef -fsycl -fsycl-unnamed-lambda -ferror-limit=1")
project(${target})

find_package(PCL 1.12 REQUIRED)


find_package(PCL-ONEAPI 1.12 REQUIRED)

include_directories(${PCL_INCLUDE_DIRS} ${PCL-ONEAPI_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS} ${PCL-ONEAPI_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS} ${PCL-ONEAPI_DEFINITIONS})

add_executable (${target} oneapi_kdtree.cpp)


target_link_libraries (${target} sycl -lpcl_oneapi_search -lpcl_search)
6. Source the Intel® oneAPI Base Toolkit environment:

export PATH=/home/eiforamr/workspace/lib/pcl/share/pcl-1.12:/home/eiforamr/workspace/lib/pcl/
share/pcl-oneapi-1.12:$PATH
source /opt/intel/oneapi/setvars.sh
7. Build the code:
cd /home/eiforamr/workspace/kdtree/
mkdir build && cd build
cmake ../
make -j
8. Run the binary:
./oneapi_kdtree
Expected results example:

K=5 neighbors from (868.536,165.24,588.71)
    895.824 185.838 555.685 (squared distance: 2259.57)
    877.741 116.544 656.683 (squared distance: 7076.32)
    906.9 102.777 515.255 (squared distance: 10769.1)
    817.828 258.588 546.829 (squared distance: 13039.3)
    767.465 164.017 644.785 (squared distance: 13361.4)
K=5 neighbors from (169.766,772.676,607.395)
    219.717 774.625 586.534 (squared distance: 2934.17)
    137.948 729.391 630.512 (squared distance: 3420.39)
    211.662 726.615 591.354 (squared distance: 4134.23)
    241.534 720.475 579.8 (squared distance: 8637.12)
    236.382 811.854 548.706 (squared distance: 9417.13)
K=5 neighbors from (974.478,854.754,392.108)
    1001.23 881.896 424.253 (squared distance: 2485.66)
    1002.05 882.791 460.627 (squared distance: 6241.17)
    980.62 864.419 471.809 (squared distance: 6483.47)
    891.607 840.559 334.935 (squared distance: 10337.9)
    875.04 824.699 399.918 (squared distance: 10852.3)
Radius=100 neighbors from (868.536,165.24,588.71)
    895.824 185.838 555.685 (squared distance: 2259.57)
    877.741 116.544 656.683 (squared distance: 7076.32)
Radius=100 neighbors from (169.766,772.676,607.395)
    219.717 774.625 586.534 (squared distance: 2934.17)
    137.948 729.391 630.512 (squared distance: 3420.39)
    211.662 726.615 591.354 (squared distance: 4134.23)
    241.534 720.475 579.8 (squared distance: 8637.12)
    236.382 811.854 548.706 (squared distance: 9417.13)
Radius=100 neighbors from (974.478,854.754,392.108)
    1001.23 881.896 424.253 (squared distance: 2485.66)
    1002.05 882.791 460.627 (squared distance: 6241.17)
    980.62 864.419 471.809 (squared distance: 6483.47)

Code Explanation
Intel® oneAPI Base Toolkit's KdTree requires these headers.

#include <pcl/oneapi/search/kdtree.h> // for KdTree
#include <pcl/point_cloud.h>
Seed rand() with the system time, create and fill a point cloud with random data (cloud), create and fill
another point cloud with random coordinates (searchPoints), and search three coordinates using a single
call.

srand (time (NULL));

pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);

// Generate pointcloud data


cloud->width = 1000;
cloud->height = 1;
cloud->points.resize (cloud->width * cloud->height);

for (std::size_t i = 0; i < cloud->size (); ++i)


{
(*cloud)[i].x = static_cast<float>(1024.0f * rand () / (RAND_MAX + 1.0));
(*cloud)[i].y = static_cast<float>(1024.0f * rand () / (RAND_MAX + 1.0));
(*cloud)[i].z = static_cast<float>(1024.0f * rand () / (RAND_MAX + 1.0));
}

pcl::PointCloud<pcl::PointXYZ>::Ptr searchPoints (new pcl::PointCloud<pcl::PointXYZ>);

// Generate pointcloud data


searchPoints->width = 3;
searchPoints->height = 1;
searchPoints->points.resize (searchPoints->width * searchPoints->height);


for (std::size_t i = 0; i < searchPoints->size (); ++i)


{
(*searchPoints)[i].x = static_cast<float>(1024.0f * rand () / (RAND_MAX + 1.0));
(*searchPoints)[i].y = static_cast<float>(1024.0f * rand () / (RAND_MAX + 1.0));
(*searchPoints)[i].z = static_cast<float>(1024.0f * rand () / (RAND_MAX + 1.0));
}
Create the KdTree object, and set cloud as the input.

pcl::oneapi::search::KdTree<pcl::PointXYZ> kdtree;
kdtree.setInputCloud (cloud);
Create an integer ‘K’ and set it equal to five, and create two two-dimensional vectors for storing our K
nearest neighbors from the search.

// K nearest neighbor search


int K = 5;

std::vector< std::vector< float > > pointsSquaredDistance (searchPoints->size()) ;


std::vector< pcl::Indices > pointsIdxKnnSearch (searchPoints->size());
Run the batched K nearest neighbor search, and print the locations of the K closest neighbors found for each
point in searchPoints.

kdtree.nearestKSearch(*searchPoints, K, pointsIdxKnnSearch, pointsSquaredDistance);

for (std::size_t j = 0; j < pointsIdxKnnSearch.size(); ++j)


{
std::cout << "K=" << K << " neighbors from (" << (*searchPoints)[j].x << ","
<< (*searchPoints)[j].y << ","
<< (*searchPoints)[j].z << ")" << std::endl;
for (std::size_t i = 0; i < pointsIdxKnnSearch.at(j).size(); ++i)
{
std::cout << " " << (*cloud)[ pointsIdxKnnSearch.at(j)[i] ].x
<< " " << (*cloud)[ pointsIdxKnnSearch.at(j)[i] ].y
<< " " << (*cloud)[ pointsIdxKnnSearch.at(j)[i] ].z
<< " (squared distance: " << pointsSquaredDistance.at(j)[i] << ")" << std::endl;
}
}
To find all neighbors within a given radius of each search point, define the radius and create two
two-dimensional vectors for storing the indices and squared distances of the neighbors.

// Neighbors within radius search


float radius = 100.f;

std::vector< std::vector< float > > pointsRadiusSquaredDistance (searchPoints->size()) ;


std::vector< pcl::Indices > pointsIdxRadiusSearch (searchPoints->size());
Run the batched radius search (limited here to at most 10 results per query), and print the locations of the
neighbors found for each search point.

kdtree.radiusSearch(*searchPoints, radius, pointsIdxRadiusSearch,


pointsRadiusSquaredDistance, 10);

for (std::size_t j = 0; j < pointsIdxRadiusSearch.size(); ++j)


{
std::cout << "Radius=" << radius << " neighbors from (" << (*searchPoints)[j].x << ","
<< (*searchPoints)[j].y << ","
<< (*searchPoints)[j].z << ")" << std::endl;

for (std::size_t i = 0; i < pointsIdxRadiusSearch.at(j).size(); ++i)
{
std::cout << " " << (*cloud)[ pointsIdxRadiusSearch.at(j)[i] ].x
<< " " << (*cloud)[ pointsIdxRadiusSearch.at(j)[i] ].y
<< " " << (*cloud)[ pointsIdxRadiusSearch.at(j)[i] ].z
<< " (squared distance: " << pointsRadiusSquaredDistance.at(j)[i] << ")" <<
std::endl;
}
}

Downsampling 3D Point Clouds with a Voxelized Grid

This tutorial shows how to use these optimizations inside a Docker* image. For the same functionality
outside of Docker* images, see PCL Optimizations Outside of Docker* Images.
1. Prepare the environment:
cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
./run_interactive_docker.sh eiforamr-full-flavour-sdk:2022.3 root -c full_flavor
mkdir voxel_grid && cd voxel_grid
2. Create the file oneapi_voxel_grid.cpp:

vim oneapi_voxel_grid.cpp
3. Place the following inside the file:
#include <pcl/oneapi/filters/voxel_grid.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

int main (int argc, char** argv)


{
// Read Point Cloud
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_( new pcl::PointCloud<pcl::PointXYZ>() );
int result = pcl::io::loadPCDFile("table_scene_lms400.pcd", *cloud_);
if (result != 0)
{
pcl::console::print_info ("Load pcd file failed.\n");
return result;
}

// Prepare Device Point Cloud Memory (input)


pcl::oneapi::VoxelGrid::PointCloud_xyz cloud_device_xyz;
cloud_device_xyz.upload(cloud_->points);
pcl::oneapi::VoxelGrid::PointCloud & cloud_device = (pcl::oneapi::VoxelGrid::PointCloud
&)cloud_device_xyz;

// Prepare Device Point Cloud Memory (output)


pcl::oneapi::VoxelGrid::PointCloud_xyz cloud_device_xyz_o;
cloud_device_xyz_o.create(cloud_->size());
pcl::oneapi::VoxelGrid::PointCloud & cloud_device_o = (pcl::oneapi::VoxelGrid::PointCloud
&)cloud_device_xyz_o;

// GPU calculate
pcl::oneapi::VoxelGrid vg_oneapi;
vg_oneapi.setInputCloud(cloud_device);
float leafsize= 0.005f;


vg_oneapi.setLeafSize (leafsize, leafsize, leafsize);


vg_oneapi.filter(cloud_device_o);

// print log
std::cout << "[oneapi voxel grid] PointCloud before filtering: " << cloud_device.size() <<
std::endl;
std::cout << "[oneapi voxel grid] PointCloud after filtering: " << cloud_device_o.size() <<
std::endl;
}
4. Create a CMakeLists.txt file:
vim CMakeLists.txt
5. Place the following inside the file:
cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
set(target oneapi_voxel_grid)
set(CMAKE_CXX_COMPILER dpcpp)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_FLAGS "-Wall -Wpedantic -Wno-unknown-pragmas -Wno-pass-failed -Wno-unneeded-
internal-declaration -Wno-unused-function -Wno-gnu-anonymous-struct -Wno-nested-anon-types -Wno-
extra-semi -Wno-unused-local-typedef -fsycl -fsycl-unnamed-lambda -ferror-limit=1")
project(${target})

find_package(PCL 1.12 REQUIRED)


find_package(PCL-ONEAPI 1.12 REQUIRED)

include_directories(${PCL_INCLUDE_DIRS} ${PCL-ONEAPI_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS} ${PCL-ONEAPI_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS} ${PCL-ONEAPI_DEFINITIONS})

add_executable (${target} oneapi_voxel_grid.cpp)


target_link_libraries (${target} sycl pcl_oneapi_filters ${PCL_LIBRARIES})
6. Source the Intel® oneAPI Base Toolkit environment:

export PATH=/home/eiforamr/workspace/lib/pcl/share/pcl-1.12:/home/eiforamr/workspace/lib/pcl/
share/pcl-oneapi-1.12:$PATH
source /opt/intel/oneapi/setvars.sh
7. Build the code:
cd /home/eiforamr/workspace/voxel_grid/
mkdir build && cd build
cmake ../
make -j
8. Download the test data from GitHub*:
wget https://raw.githubusercontent.com/PointCloudLibrary/data/
5c26bdd0591ba150b91858b5c9fe5e91cb39ae86/tutorials/table_scene_lms400.pcd
# if the binary is not downloaded try setting the proxies first and try again:
export http_proxy="http://<http_proxy>:port"
export https_proxy="http://<https_proxy>:port"
9. Run the binary:
./oneapi_voxel_grid
Expected results example:
[oneapi voxel grid] PointCloud before filtering: 460400
[oneapi voxel grid] PointCloud after filtering: 41049

Code Explanation
Load the test data from GitHub* into a PointCloud<PointXYZ>.

// Read Point Cloud


pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_( new pcl::PointCloud<pcl::PointXYZ>() );
int result = pcl::io::loadPCDFile("table_scene_lms400.pcd", *cloud_);
if (result != 0)
{
pcl::console::print_info ("Load pcd file failed.\n");
return result;
}
Create the GPU input and output device arrays, and upload the point cloud data to the input device array.

// Prepare Device Point Cloud Memory (input)


pcl::oneapi::VoxelGrid::PointCloud_xyz cloud_device_xyz;
cloud_device_xyz.upload(cloud_->points);
pcl::oneapi::VoxelGrid::PointCloud & cloud_device = (pcl::oneapi::VoxelGrid::PointCloud
&)cloud_device_xyz;

// Prepare Device Point Cloud Memory (output)


pcl::oneapi::VoxelGrid::PointCloud_xyz cloud_device_xyz_o;
cloud_device_xyz_o.create(cloud_->size());
pcl::oneapi::VoxelGrid::PointCloud & cloud_device_o = (pcl::oneapi::VoxelGrid::PointCloud
&)cloud_device_xyz_o;
The GPU starts to compute the model.

// GPU calculate
pcl::oneapi::VoxelGrid vg_oneapi;
vg_oneapi.setInputCloud(cloud_device);
float leafsize= 0.005f;
vg_oneapi.setLeafSize (leafsize, leafsize, leafsize);
vg_oneapi.filter(cloud_device_o);
Result (output log):

// print log
std::cout << "[oneapi voxel grid] PointCloud before filtering: " << cloud_device.size() <<
std::endl;
std::cout << "[oneapi voxel grid] PointCloud after filtering: " << cloud_device_o.size() <<
std::endl;
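
If you want to inspect the filtered cloud on the host, you can copy it back and save it as a PCD file. The sketch below is illustrative only: it assumes the oneAPI device array exposes the same download() helper used by the octree tutorial later in this chapter, so verify the call against the PCL-oneAPI headers installed in your container.

// Sketch: copy the filtered device cloud back to the host and save it.
// The download() call is an assumption based on the device-array API used in
// the octree tutorial below; check it against your PCL-oneAPI headers.
#include <pcl/oneapi/filters/voxel_grid.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <cstdint>
#include <vector>

void save_filtered_cloud(pcl::oneapi::VoxelGrid::PointCloud & cloud_device_o)
{
  // Device -> host copy of the filtered points (assumed API).
  std::vector<pcl::PointXYZ> host_points;
  cloud_device_o.download(host_points);

  // Wrap the host points in a standard, unorganized PCL cloud and save it.
  pcl::PointCloud<pcl::PointXYZ> cloud_out;
  cloud_out.points.assign(host_points.begin(), host_points.end());
  cloud_out.width = static_cast<std::uint32_t>(cloud_out.points.size());
  cloud_out.height = 1;
  cloud_out.is_dense = true;
  pcl::io::savePCDFileASCII("table_scene_lms400_downsampled.pcd", cloud_out);
}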

Filtering Point Clouds with a Passthrough Filter

This tutorial shows how to use these optimizations inside a Docker* image. For the same functionality
outside of Docker* images, see PCL Optimizations Outside of Docker* Images.
1. Prepare the environment:
cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
./run_interactive_docker.sh eiforamr-full-flavour-sdk:2022.3 root -c full_flavor
mkdir passthrough && cd passthrough
2. Create the file oneapi_passthrough.cpp:

vim oneapi_passthrough.cpp
3. Place the following inside the file:
#include <pcl/oneapi/filters/passthrough.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>


#include <pcl/point_cloud.h>

int main (int argc, char** argv)


{
// Read Point Cloud
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_( new pcl::PointCloud<pcl::PointXYZ>() );
int result = pcl::io::loadPCDFile("using_kinfu_large_scale_output.pcd", *cloud_);
if (result != 0)
{
pcl::console::print_info ("Load pcd file failed.\n");
return result;
}

// Prepare Device Point Cloud Memory (input)


pcl::oneapi::PassThrough::PointCloud_xyz cloud_device_xyz;
cloud_device_xyz.upload(cloud_->points);
pcl::oneapi::PassThrough::PointCloud & cloud_device = (pcl::oneapi::PassThrough::PointCloud
&)cloud_device_xyz;

// Prepare Device Point Cloud Memory (output)


pcl::oneapi::PassThrough::PointCloud_xyz cloud_device_xyz_o;
cloud_device_xyz_o.create(cloud_->size());
pcl::oneapi::PassThrough::PointCloud & cloud_device_o = (pcl::oneapi::PassThrough::PointCloud
&)cloud_device_xyz_o;

// GPU calculate
pcl::oneapi::PassThrough ps;
ps.setInputCloud(cloud_device);
ps.setFilterFieldName ("z");
ps.setFilterLimits (0.0, 1.0);
ps.filter(cloud_device_o);

// print log
std::cout << "[oneapi passthrough] PointCloud before filtering: " << cloud_device.size() <<
std::endl;
std::cout << "[oneapi passthrough] PointCloud after filtering: " << cloud_device_o.size() <<
std::endl;
}
4. Create a CMakeLists.txt file:
vim CMakeLists.txt
5. Place the following inside the file:
cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
set(target oneapi_passthrough)
set(CMAKE_CXX_COMPILER dpcpp)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_FLAGS "-Wall -Wpedantic -Wno-unknown-pragmas -Wno-pass-failed -Wno-unneeded-internal-declaration -Wno-unused-function -Wno-gnu-anonymous-struct -Wno-nested-anon-types -Wno-extra-semi -Wno-unused-local-typedef -fsycl -fsycl-unnamed-lambda -ferror-limit=1")
project(${target})

find_package(PCL 1.12 REQUIRED)


find_package(PCL-ONEAPI 1.12 REQUIRED)

include_directories(${PCL_INCLUDE_DIRS} ${PCL-ONEAPI_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS} ${PCL-ONEAPI_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS} ${PCL-ONEAPI_DEFINITIONS})

add_executable (${target} oneapi_passthrough.cpp)
target_link_libraries (${target} sycl pcl_oneapi_filters ${PCL_LIBRARIES})
6. Source the Intel® oneAPI Base Toolkit environment:

export PATH=/home/eiforamr/workspace/lib/pcl/share/pcl-1.12:/home/eiforamr/workspace/lib/pcl/
share/pcl-oneapi-1.12:$PATH
source /opt/intel/oneapi/setvars.sh
7. Build the code:
cd /home/eiforamr/workspace/passthrough/
mkdir build && cd build
cmake ../
make -j
8. Download the test data from GitHub*:
wget https://raw.githubusercontent.com/PointCloudLibrary/data/
5c26bdd0591ba150b91858b5c9fe5e91cb39ae86/tutorials/kinfu_large_scale/
using_kinfu_large_scale_output.pcd
# if the file does not download, set the proxies first and try again:
export http_proxy="http://<http_proxy>:port"
export https_proxy="http://<https_proxy>:port"
9. Run the binary:
./oneapi_passthrough
Expected results example:
[oneapi passthrough] PointCloud before filtering: 993419
[oneapi passthrough] PointCloud after filtering: 328598

Code Explanation
Load the test data from GitHub* into a PointCloud<PointXYZ>.

// Read Point Cloud


pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_( new pcl::PointCloud<pcl::PointXYZ>() );
int result = pcl::io::loadPCDFile("using_kinfu_large_scale_output.pcd", *cloud_);
if (result != 0)
{
pcl::console::print_info ("Load pcd file failed.\n");
return result;
}
Create the GPU input and output device arrays, and upload the point cloud data to the input device array.

// Prepare Device Point Cloud Memory (input)


pcl::oneapi::PassThrough::PointCloud_xyz cloud_device_xyz;
cloud_device_xyz.upload(cloud_->points);
pcl::oneapi::PassThrough::PointCloud & cloud_device = (pcl::oneapi::PassThrough::PointCloud
&)cloud_device_xyz;

// Prepare Device Point Cloud Memory (output)


pcl::oneapi::PassThrough::PointCloud_xyz cloud_device_xyz_o;
cloud_device_xyz_o.create(cloud_->size());
pcl::oneapi::PassThrough::PointCloud & cloud_device_o = (pcl::oneapi::PassThrough::PointCloud
&)cloud_device_xyz_o;
The GPU starts to compute the model.

// GPU calculate
pcl::oneapi::PassThrough ps;
ps.setInputCloud(cloud_device);


ps.setFilterFieldName ("z");
ps.setFilterLimits (0.0, 1.0);
ps.filter(cloud_device_o);
Result (output log):

// print log
std::cout << "[oneapi passthrough] PointCloud before filtering: " << cloud_device.size() <<
std::endl;
std::cout << "[oneapi passthrough] PointCloud after filtering: " << cloud_device_o.size() <<
std::endl;
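
Because both filters operate on device memory, they can also be chained without copying data back to the host in between. The sketch below is only an illustration: it assumes that the PointCloud_xyz device arrays of VoxelGrid and PassThrough are interchangeable, which matches the casting style used in these tutorials but should be verified against your PCL-oneAPI headers.

// Sketch: chain the device-side voxel grid and passthrough filters.
// The casts mirror the ones used in the tutorials above; the assumption that
// both filters share the same underlying device-array type is unverified.
// tmp_xyz and out_xyz must be created (create(n)) by the caller before use.
#include <pcl/oneapi/filters/passthrough.h>
#include <pcl/oneapi/filters/voxel_grid.h>

void downsample_then_crop(pcl::oneapi::VoxelGrid::PointCloud_xyz & in_xyz,
                          pcl::oneapi::VoxelGrid::PointCloud_xyz & tmp_xyz,
                          pcl::oneapi::PassThrough::PointCloud_xyz & out_xyz)
{
  // Stage 1: voxel-grid downsampling on the device.
  pcl::oneapi::VoxelGrid vg;
  vg.setInputCloud((pcl::oneapi::VoxelGrid::PointCloud &)in_xyz);
  vg.setLeafSize(0.01f, 0.01f, 0.01f);
  vg.filter((pcl::oneapi::VoxelGrid::PointCloud &)tmp_xyz);

  // Stage 2: keep only points with 0.0 <= z <= 1.0, still on the device.
  pcl::oneapi::PassThrough ps;
  ps.setInputCloud((pcl::oneapi::PassThrough::PointCloud &)tmp_xyz);
  ps.setFilterFieldName("z");
  ps.setFilterLimits(0.0, 1.0);
  ps.filter((pcl::oneapi::PassThrough::PointCloud &)out_xyz);
}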

PCL Optimizations Outside of Docker* Images

Prerequisites:
• Ubuntu 20.04 Desktop
• An 11th Generation Intel® Core™ microprocessor

Download the PCL Optimizations Bundle


1. Download the PCL libraries
a. Go to the Product Download page.
b. Select the PCL Optimizations bundle.
c. Click Download Recommended Configurations.
d. Click Download.
e. During installation, you are prompted to enter your product key, so copy the product key
displayed on the download page.
2. Copy edge_insights_for_amr.zip from the developer workstation to the Home directory on your
target system. You can use a USB flash drive to copy the file.

Install the PCL Optimizations Bundle


1. Set up a proxy (optional): If a proxy is required to connect to the Internet, add the proxy settings as
described below, updating <http_proxy> and <https_proxy> to your actual proxy.

a. Add proxies in /etc/apt/apt.conf.d/proxy.conf:

echo 'Acquire::http::proxy "http://<http_proxy>:port";' | sudo tee -a /etc/apt/apt.conf.d/proxy.conf
echo 'Acquire::https::proxy "http://<https_proxy>:port";' | sudo tee -a /etc/apt/apt.conf.d/proxy.conf
b. Change the environment:

export http_proxy="http://<http_proxy>:port"
export https_proxy="http://<https_proxy>:port"
2. Install the bundle:

unzip edge_insights_for_amr.zip
cd edge_insights_for_amr
chmod 775 edgesoftware
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
./edgesoftware install

Configure the Host System
1. After installing the bundle, open a new terminal and:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_PCL
./install.sh
sudo apt install vim -y
# logout, and close this terminal
exit

Run an Example Tutorial


This example uses the “Spatial Partitioning and Search Operations with Octrees” PCL optimization tutorial. All
PCL optimization tutorials can be adapted in a similar way.
1. Configure the host system by opening a new terminal, and:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_PCL
source /opt/intel/oneapi/setvars.sh
mkdir octree && cd octree
2. Create the test file:

vim oneapi_octree_search.cpp
3. Place the following inside the file:

#include <iostream>
#include <fstream>
#include <numeric>
#include <pcl/oneapi/octree/octree.hpp>
#include <pcl/oneapi/containers/device_array.h>
#include <pcl/point_cloud.h>

using namespace pcl::oneapi;

float dist(Octree::PointType p, Octree::PointType q) {


return std::sqrt((p.x-q.x)*(p.x-q.x) + (p.y-q.y)*(p.y-q.y) + (p.z-q.z)*(p.z-q.z));
}

int main (int argc, char** argv)


{
std::size_t data_size = 871000;
std::size_t query_size = 10000;
float cube_size = 1024.f;
float max_radius = cube_size / 30.f;
float shared_radius = cube_size / 30.f;
const int max_answers = 5;
const int k = 5;
std::size_t i;
std::vector<Octree::PointType> points;
std::vector<Octree::PointType> queries;
std::vector<float> radiuses;
std::vector<int> indices;

//Generate point cloud data, queries, radiuses, indices


srand (0);
points.resize(data_size);
for(i = 0; i < data_size; ++i)
{
points[i].x = ((float)rand())/(float)RAND_MAX * cube_size;
points[i].y = ((float)rand())/(float)RAND_MAX * cube_size;
points[i].z = ((float)rand())/(float)RAND_MAX * cube_size;
}

queries.resize(query_size);
radiuses.resize(query_size);
for (i = 0; i < query_size; ++i)
{
queries[i].x = ((float)rand())/(float)RAND_MAX * cube_size;
queries[i].y = ((float)rand())/(float)RAND_MAX * cube_size;
queries[i].z = ((float)rand())/(float)RAND_MAX * cube_size;
radiuses[i] = ((float)rand())/(float)RAND_MAX * max_radius;
};

indices.resize(query_size / 2);
for(i = 0; i < query_size / 2; ++i)
{
indices[i] = i * 2;
}

//Prepare oneAPI cloud


pcl::oneapi::Octree::PointCloud cloud_device;
cloud_device.upload(points);

//oneAPI build
pcl::oneapi::Octree octree_device;
octree_device.setCloud(cloud_device);
octree_device.build();

//Upload queries and radiuses


pcl::oneapi::Octree::Queries queries_device;
pcl::oneapi::Octree::Radiuses radiuses_device;
queries_device.upload(queries);
radiuses_device.upload(radiuses);

//Prepare output buffers on device


pcl::oneapi::NeighborIndices result_device1(queries_device.size(), max_answers);
pcl::oneapi::NeighborIndices result_device2(queries_device.size(), max_answers);
pcl::oneapi::NeighborIndices result_device3(indices.size(), max_answers);
pcl::oneapi::NeighborIndices result_device_ann(queries_device.size(), 1);
pcl::oneapi::Octree::ResultSqrDists dists_device_ann;
pcl::oneapi::NeighborIndices result_device_knn(queries_device.size(), k);
pcl::oneapi::Octree::ResultSqrDists dists_device_knn;

//oneAPI octree radius search with shared radius


octree_device.radiusSearch(queries_device, shared_radius, max_answers, result_device1);

//oneAPI octree radius search with individual radius


octree_device.radiusSearch(queries_device, radiuses_device, max_answers, result_device2);

//oneAPI octree radius search with shared radius using indices to specify
//the queries.
pcl::oneapi::Octree::Indices cloud_indices;
cloud_indices.upload(indices);
octree_device.radiusSearch(queries_device, cloud_indices, shared_radius, max_answers,
result_device3);

//oneAPI octree ANN search


//if neighbor points distances results are not required, can just call
//octree_device.approxNearestSearch(queries_device, result_device_ann)
octree_device.approxNearestSearch(queries_device, result_device_ann, dists_device_ann);

//oneAPI octree KNN search
//if neighbor points distances results are not required, can just call
//octree_device.nearestKSearchBatch(queries_device, k, result_device_knn)
octree_device.nearestKSearchBatch(queries_device, k, result_device_knn, dists_device_knn);

//Download results
std::vector<int> sizes1;
std::vector<int> sizes2;
std::vector<int> sizes3;
result_device1.sizes.download(sizes1);
result_device2.sizes.download(sizes2);
result_device3.sizes.download(sizes3);

std::vector<int> downloaded_buffer1, downloaded_buffer2, downloaded_buffer3, results_batch;


result_device1.data.download(downloaded_buffer1);
result_device2.data.download(downloaded_buffer2);
result_device3.data.download(downloaded_buffer3);

int query_idx = 2;
std::cout << "Neighbors within shared radius search at ("
<< queries[query_idx].x << " "
<< queries[query_idx].y << " "
<< queries[query_idx].z << ") with radius=" << shared_radius << std::endl;
for (i = 0; i < sizes1[query_idx]; ++i)
{
std::cout << " " << points[downloaded_buffer1[max_answers * query_idx + i]].x
<< " " << points[downloaded_buffer1[max_answers * query_idx + i]].y
<< " " << points[downloaded_buffer1[max_answers * query_idx + i]].z
<< " (distance: " << dist(points[downloaded_buffer1[max_answers * query_idx +
i]], queries[query_idx]) << ")" << std::endl;
}

std::cout << "Neighbors within individual radius search at ("


<< queries[query_idx].x << " "
<< queries[query_idx].y << " "
<< queries[query_idx].z << ") with radius=" << radiuses[query_idx] << std::endl;
for (i = 0; i < sizes2[query_idx]; ++i)
{
std::cout << " " << points[downloaded_buffer2[max_answers * query_idx + i]].x
<< " " << points[downloaded_buffer2[max_answers * query_idx + i]].y
<< " " << points[downloaded_buffer2[max_answers * query_idx + i]].z
<< " (distance: " << dist(points[downloaded_buffer2[max_answers * query_idx +
i]], queries[query_idx]) << ")" << std::endl;
}

std::cout << "Neighbors within indices radius search at ("


<< queries[query_idx].x << " "
<< queries[query_idx].y << " "
<< queries[query_idx].z << ") with radius=" << shared_radius << std::endl;
for (i = 0; i < sizes3[query_idx/2]; ++i)
{
std::cout << " " << points[downloaded_buffer3[max_answers * query_idx / 2 + i]].x
<< " " << points[downloaded_buffer3[max_answers * query_idx / 2 + i]].y
<< " " << points[downloaded_buffer3[max_answers * query_idx / 2 + i]].z
<< " (distance: " << dist(points[downloaded_buffer3[max_answers * query_idx /
2 + i]], queries[2]) << ")" << std::endl;
}


std::cout << "Approximate nearest neighbor at ("


<< queries[query_idx].x << " "
<< queries[query_idx].y << " "
<< queries[query_idx].z << ")" << std::endl;
std::cout << " " << points[result_device_ann.data[query_idx]].x
<< " " << points[result_device_ann.data[query_idx]].y
<< " " << points[result_device_ann.data[query_idx]].z
<< " (distance: " << std::sqrt(dists_device_ann[query_idx]) << ")" << std::endl;

std::cout << "K-nearest neighbors (k = " << k << ") at ("


<< queries[query_idx].x << " "
<< queries[query_idx].y << " "
<< queries[query_idx].z << ")" << std::endl;
for (i = query_idx * k; i < (query_idx + 1) * k; ++i)
{
std::cout << " " << points[result_device_knn.data[i]].x
<< " " << points[result_device_knn.data[i]].y
<< " " << points[result_device_knn.data[i]].z
<< " (distance: " << std::sqrt(dists_device_knn[i]) << ")" << std::endl;
}
}
4. Create a CMakeLists.txt file:

vim CMakeLists.txt
5. Place the following inside the file:

cmake_minimum_required(VERSION 3.5 FATAL_ERROR)


set(target oneapi_octree_search)
set(CMAKE_CXX_COMPILER dpcpp)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_FLAGS "-Wall -Wpedantic -Wno-unknown-pragmas -Wno-pass-failed -Wno-unneeded-internal-declaration -Wno-unused-function -Wno-gnu-anonymous-struct -Wno-nested-anon-types -Wno-extra-semi -Wno-unused-local-typedef -fsycl -fsycl-unnamed-lambda -ferror-limit=1")
project(${target})

find_package(PCL 1.12 REQUIRED)


find_package(PCL-ONEAPI 1.12 REQUIRED)

include_directories(${PCL_INCLUDE_DIRS} ${PCL-ONEAPI_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS} ${PCL-ONEAPI_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS} ${PCL-ONEAPI_DEFINITIONS})

add_executable (${target} oneapi_octree_search.cpp)


target_link_libraries (${target} sycl pcl_oneapi_containers pcl_oneapi_octree pcl_octree)
6. Build the code:

mkdir build && cd build


cmake ../
make -j
7. Run the binary:

./oneapi_octree_search
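
To sanity-check the GPU results, you can run the same radius query through the standard CPU octree and compare the hit counts. The sketch below is only an illustration: it assumes Octree::PointType is layout-compatible with pcl::PointXYZ, and it reuses names (points, queries, shared_radius, max_answers) from the sample above.

// Sketch: cross-check one GPU radius query against the stock CPU octree.
// Assumes Octree::PointType is compatible with pcl::PointXYZ; the 128.0f leaf
// resolution is arbitrary and only affects CPU search speed, not the result.
#include <pcl/octree/octree_search.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <cstdint>
#include <iostream>
#include <vector>

void check_radius_query(const std::vector<pcl::PointXYZ> & points,
                        const pcl::PointXYZ & query,
                        float radius,
                        int gpu_hits_capped)
{
  // Rebuild the same cloud on the host.
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>());
  cloud->points.assign(points.begin(), points.end());
  cloud->width = static_cast<std::uint32_t>(cloud->points.size());
  cloud->height = 1;

  // Reference CPU octree search.
  pcl::octree::OctreePointCloudSearch<pcl::PointXYZ> octree(128.0f);
  octree.setInputCloud(cloud);
  octree.addPointsFromInputCloud();

  std::vector<int> indices;
  std::vector<float> sqr_distances;
  int cpu_hits = octree.radiusSearch(query, radius, indices, sqr_distances);

  // The GPU search result is capped at max_answers, so the counts may differ
  // when more neighbors than the cap exist.
  std::cout << "CPU hits: " << cpu_hits
            << ", GPU hits (capped): " << gpu_hits_capped << std::endl;
}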

NOTE Surface Reconstruction with Intel® oneAPI Base Toolkit's Moving Least Squares (MLS) uses the pcl_viewer from the Docker* environment. Outside the Docker* environment, run pcl_viewer as:

pcl_viewer bun0-mls.pcd

Optimized PCL Known Limitation

JIT Limitation
Intel’s PCL optimization is implemented with the Intel® oneAPI DPC++ Compiler. The Intel® oneAPI DPC++
Compiler converts a DPC++ program into an intermediate language called SPIR-V (Standard Portable
Intermediate Representation). The SPIR-V code is stored in the binary produced by the compilation process.
The SPIR-V code has the advantage that it can be run on any hardware platform by translating the SPIR-V
code into the assembly code of the given platform at runtime. This process of translating the intermediate
code present in the binary is called Just-In-Time (JIT) compilation. Since JIT compilation happens at the
beginning of the execution of the first offloaded kernel, the performance is impacted. This issue can be
mitigated by setting the system environment variable to cache and reuse JIT-compiled binaries.
1. Set the system environment variable to cache and reuse JIT-compiled binaries.

export SYCL_CACHE_PERSISTENT=1
2. Set the environment variable permanently.

echo "export SYCL_CACHE_PERSISTENT=1" >> ~/.bashrc


source ~/.bashrc
3. Execute the program once to generate the JIT-compiled binaries. Every execution after this first
execution reuses the cached JIT-compiled binaries.

NOTE To get an accurate PCL optimization performance number, this system environment variable
needs to be set, and the program needs to be executed once to generate and cache the JIT-compiled
binaries.

Optimized PCL Troubleshooting

If the executable gives a segmentation fault (the core is dumped), the Docker* image was not opened with the root user.
If the GPU appears in the sycl-ls output, the user has the correct permissions:

sycl-ls
[opencl:0] ACC : Intel(R) FPGA Emulation Platform for OpenCL(TM) 1.2 [2021.13.11.0.23_160000]
[opencl:0] CPU : Intel(R) OpenCL 3.0 [2021.13.11.0.23_160000]
[opencl:0] GPU : Intel(R) OpenCL HD Graphics 3.0 [22.17.23034]
[level_zero:0] GPU : Intel(R) Level-Zero 1.3 [1.3.23034]
[host:0] HOST: SYCL host platform 1.2 [1.2]
If the user does not have the correct permissions, add the user to the render group:
#replace userName with the actual user of your system
sudo usermod -a -G render <userName>
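
As a complementary check from code, a minimal DPC++ program (a sketch, compiled with dpcpp) can print which device the default selector picks; with correct permissions it should report the Intel GPU rather than the CPU or host device.

// Sketch: print the device picked by the SYCL default selector.
// Build with: dpcpp device_check.cpp -o device_check
#include <CL/sycl.hpp>
#include <iostream>

int main()
{
  sycl::queue q;  // default selector
  std::cout << "Running on: "
            << q.get_device().get_info<sycl::info::device::name>()
            << std::endl;
  return 0;
}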

Navigation
The following tutorials tell you how to use ROS 2 components developed by Intel to help an EI for AMR
navigate and map a room and provide teleop options.

Collaborative Visual SLAM


Collaborative visual SLAM is compiled natively for both Intel® Core™ and Intel® Atom® processor-based
systems. By default, in this tutorial, the Intel® Core™ processor-based system version is used. If you are
running an Intel® Atom® processor-based system, you must make the changes detailed in Collaborative
Visual SLAM on Intel® Atom® Processor-Based Systems for collaborative visual SLAM to work.
• Collaborative Visual SLAM with Two Robots: uses as input two ROS 2 bags that simulate two robots
exploring the same area
• The ROS 2 tool rviz2 is used to visualize the two robots, the server, and how the server merges the
two local maps of the robots into one common map.
• The output includes the estimated pose of the camera and visualization of the internal map.
• All input and output are in standard ROS 2 formats.
• Collaborative Visual SLAM with FastMapping Enabled: uses as an input a ROS 2 bag that simulates a robot
exploring an area
• Collaborative visual SLAM has the FastMapping algorithm integrated.
• For more information on FastMapping, see How it Works.
• The ROS 2 tool rviz2 is used to visualize the robot exploring the area and how FastMapping creates the
2D and 3D maps.
• Collaborative Visual SLAM with GPU Offloading
• Offloading to the GPU only works on systems with 11th Generation Intel® Core™ processors with Intel®
Iris® Xe Integrated Graphics.
• Collaborative Visual SLAM also contains mapping and can operate in localization mode.
• Map an Area with the Wandering Application and UP Xtreme i11 Robotic Kit
• Start the UP Xtreme i11 Robotic Kit in Localization Mode

Collaborative Visual SLAM with Two Robots


Prerequisites:
• The main input is a camera, either monocular or stereo or RGB-D.
• IMU and odometry data are supported as auxiliary inputs.
1. Check if your installation has the amr-collab-slam Docker* image.

docker images |grep amr-collab-slam


#if you have it installed, the result is:
amr-collab-slam

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

2. If the image is not installed, Intel recommends re-installing the EI for AMR Robot Kit with the Get
Started Guide for Robots.
3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
4. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=17
5. If the bags were not extracted before, do it now:

unzip 01_docker_sdk_env/docker_compose/06_bags.zip -d 01_docker_sdk_env/docker_compose/


sudo chmod 0777 -R 01_docker_sdk_env/docker_compose/06_bags

6. Run the collaborative visual SLAM algorithm using two bags simulating two robots going through the
same area:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/cslam.tutorial.yml up
Expected result: On the server rviz2, both trackers are seen.
• Red indicates the path robot 1 is taking right now.
• Blue indicates the path robot 2 took.
• Green indicates the points known to the server.

Collaborative Visual SLAM with FastMapping Enabled


1. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
2. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=17
3. If the bags were not extracted before, do it now:

unzip 01_docker_sdk_env/docker_compose/06_bags.zip -d 01_docker_sdk_env/docker_compose/


sudo chmod 0777 -R 01_docker_sdk_env/docker_compose/06_bags
4. Run the collaborative visual SLAM algorithm with FastMapping enabled:

docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/collab-slam-
fastmapping.tutorial.yml up
Expected result: On the opened rviz2, you see the visual SLAM keypoints, the 3D map, and the 2D
map.
5. You can disable the /univloc_tracker_0/local_map, univloc_tracker_0/fused_map, or both
topics.
Visible Test: Showing keypoints, the 3D map, and the 2D map
Expected Result:

Visible Test: Showing the 3D map
Expected Result:

Visible Test: Map showing the 2D map
Expected Result:

Visible Test: Showing keypoints and the 2D map
Expected Result:


Collaborative Visual SLAM with GPU Offloading


1. Check if your installation has the amr-collab-slam-gpu Docker* image.

docker images |grep amr-collab-slam-gpu


#if you have it installed, the result is:
amr-collab-slam-gpu

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

2. If the image is not installed, Intel recommends installing the Robot Complete Kit with the Get Started
Guide for Robots.
3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
4. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=17
5. If the bags were not extracted before, do it now:

unzip 01_docker_sdk_env/docker_compose/06_bags.zip -d 01_docker_sdk_env/docker_compose/


sudo chmod 0777 -R 01_docker_sdk_env/docker_compose/06_bags
6. Run the collaborative visual SLAM algorithm with GPU offloading:

docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/collab-slam-gpu.tutorial.yml up
Expected result: On the opened rviz2, you see the visual SLAM keypoints, the 3D map, and the 2D map

7. On a different terminal, check how much of the GPU is used with intel-gpu-top:

sudo apt-get install intel-gpu-tools


sudo intel_gpu_top

8. To close this execution, close the rviz2 window, and press Ctrl-c in the terminal.
9. Clean up the Docker* images:

docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/collab-slam-gpu.tutorial.yml
down --remove-orphans

Collaborative Visual SLAM on Intel® Atom® Processor-Based Systems


1. Open the collaborative visual SLAM yml file for editing (depending on which tutorial you want to run,
replace <collab-slam-tutorial> with cslam, collab-slam-fastmapping, or collab-slam-gpu):

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
gedit 01_docker_sdk_env/docker_compose/05_tutorials/<collab-slam-tutorial>.tutorial.yml
2. Replace this line:

source /home/eiforamr/workspace/ros_entrypoint.sh
With these lines:

unset CMAKE_PREFIX_PATH
unset AMENT_PREFIX_PATH
unset LD_LIBRARY_PATH
source /home/eiforamr/workspace/CollabSLAM/prebuilt_collab_slam_atom/setup.bash

Troubleshooting
• If the tracker (univloc_tracker_ros) fails to start, giving this error:

amr-collab-slam | [ERROR] [univloc_tracker_ros-2]: process has died [pid 140, exit code -4, cmd
'/home/eiforamr/workspace/CollabSLAM/prebuilt_collab_slam_core/univloc_tracker/lib/
univloc_tracker/univloc_tracker_ros --ros-args -r __node:=univloc_tracker_0 -r __ns:=/ --params-
file /tmp/launch_params_zfr70odz -r /tf:=tf -r /tf_static:=tf_static -r /univloc_tracker_0/
map:=map'].
See Collaborative Visual SLAM on Intel® Atom® Processor-Based Systems.
• The odometry feature use_odom:=true does not work with these bags.

The ROS 2 bags used in this example do not have the necessary topics recorded for the odometry feature
of collaborative visual SLAM.
If the use_odom:=true parameter is set, the collab-slam reports errors.


• The bags fail to play.


The collab_slam Docker* is started with the local user and needs access to the ROS 2 bags folder.

Make sure that your local user has read and write access to this path: <path to edge_insights_for_amr>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/01_docker_sdk_env/docker_compose/06_bags
The best way to do this is to make your user the owner of the folder. If the EI for AMR bundle was
installed with sudo, chown the folder to your local user.
• If the following error is encountered:

amr-collab-slam-gpu | [univloc_tracker_ros-2] /workspace/src/gpu/l0_rt_helpers.h:56: L0 error


78000001
The render group might have an ID different from 109, which is the value used in the yml files in the examples.
To find the ID of the render group on your system:

getent group render


cat /etc/group |grep render
If the result is not render:x:109, change the yml file:

gedit 01_docker_sdk_env/docker_compose/05_tutorials/collab-slam-gpu.tutorial.yml
# Change the value at line 26 from 109 to the number you got above.
• For general robot issues, go to: Troubleshooting for Robot Tutorials.

Kudan Visual SLAM

This tutorial tells you how to run a Kudan Visual SLAM (KdVisual) system using a ROS 2 bag as the input
containing data of a robot exploring an area.
• The ROS 2 tool rviz2 is used to visualize how KdVisual interprets the data from the ROS 2 bag.
• Find more information about Kudan Visual SLAM here.
• Find more information on Kudan in general here.

Download the Optional Kudan Visual SLAM (KdVisual) Bundle


1. Download the Robot Complete Kit with the optional Kudan Visual SLAM (KdVisual) bundle.
a. Go to the Product Download page.
b. Select the Robot Complete Kit.
c. Click Customize Download.
d. Make sure that AMR Kudan SLAM is selected:


This may be in Step 1 or Step 3 depending on the use case you selected.
e. Click Next until you get to Download.
f. Click Download.
g. During installation, you are prompted to enter your product key, so copy the product key
displayed on the download page.
2. Copy edge_insights_for_amr.zip from the developer workstation to the Home directory on your
target system. You can use a USB flash drive to copy the file.

Install EI for AMR


1. Set up a proxy (optional): If a proxy is required to connect to the Internet, add the proxy settings as
described below, updating <http_proxy> and <https_proxy> to your actual proxy.


a. Add proxies in /etc/apt/apt.conf.d/proxy.conf:

echo 'Acquire::http::proxy "http://<http_proxy>:port";' | sudo tee -a /etc/apt/apt.conf.d/proxy.conf
echo 'Acquire::https::proxy "http://<https_proxy>:port";' | sudo tee -a /etc/apt/apt.conf.d/proxy.conf
b. Change the environment.
• Change the environment for all users on the system:

sudo su
echo 'export http_proxy="http://<http_proxy>:port"' >> /etc/environment
echo 'export https_proxy="http://<https_proxy>:port"' >> /etc/environment
echo 'export ftp_proxy="http://<ftp_proxy>:port"' >> /etc/environment
echo 'export no_proxy="<no_proxy>"' >> /etc/environment
exit
source /etc/environment

NOTE These steps are needed only once per host. They do not have to be done for different users or
different logins of the same user.

• Change the environment for the current user only:

echo 'export http_proxy="http://<http_proxy>:port"' >> ~/.bashrc


echo 'export https_proxy="http://<https_proxy>:port"' >> ~/.bashrc
echo 'export ftp_proxy="http://<ftp_proxy>:port"' >> ~/.bashrc
echo 'export no_proxy="<no_proxy>"' >> ~/.bashrc
source ~/.bashrc
Read and follow best practices as described in this Linux* wiki about environment variables.
Edit the /etc/sudoers file with visudo:

sudo visudo
# Add after other lines that add Defaults:
Defaults env_keep += "ftp_proxy http_proxy https_proxy no_proxy"
2. Run the following commands to go to the directory, change permission of the executable edgesoftware
file, and install the bundle:

cd edge_insights_for_amr
chmod 775 edgesoftware
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
./edgesoftware install
3. Type the product key at the prompt:

NOTE The product key is displayed on the download page. Contact Support Forum if it is not.


4. Based on components selected and system configuration, you might be prompted for additional actions.
For example, if your system is behind a proxy, you are asked to enter proxy settings.


When the installation is complete, you see an installation complete message and the installation status
for each module.


5. If any of the installed modules report a failure in the Status column due to a break in the internet
connection or for any other reason, run the install again:

./edgesoftware install
6. Set the correct ownership:

sudo chown $USER:$USER * -R


7. Verify that all Docker* images were downloaded:

docker image list


Expected output includes all downloaded containers:

REPOSITORY TAG IMAGE ID CREATED SIZE


amr-nav2 2022.3
eiforamr-full-flavour-sdk 2022.3
amr-object-detection 2022.3
.
.

NOTE Installation failure logs are available at: /var/log/esb-cli/


Edge_Insights_for_Autonomous_Mobile_Robots_<version>/<Component_name>/install.log

Run the Sample Application


1. Check if your installation has the amr-kudan-slam Docker* image.

docker images |grep amr-kudan-slam


#if you have it installed, the result is:
amr-kudan-slam

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

2. If the image is not installed, Intel recommends repeating the Download the Optional Kudan Visual SLAM
(KdVisual) Bundle step.
3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
4. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=17
# If the bags were not extracted before, do it now
unzip 01_docker_sdk_env/docker_compose/06_bags.zip -d 01_docker_sdk_env/docker_compose/
sudo chmod 0777 -R 01_docker_sdk_env/docker_compose/06_bags
5. Specify that the Docker* engine uses Release 2022.3.1 of the amr-kudan-slam Docker* image:

export DOCKER_TAG=2022.3.1
6. Run the Kudan Visual SLAM algorithm using a ROS 2 bag simulating a robot exploring an area:

docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/kudan-slam-cpu.tutorial.yml up
In rviz2, you can see what KdVisual does with the input from the ROS 2 bag.


7. To close this execution, close the rviz2 window, and press Ctrl-c in the terminal.
8. Clean up the Docker* images:

docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/kudan-slam-cpu.tutorial.yml down --remove-orphans
9. KdVisual can also offload some processing to the GPU. To demonstrate this, use this example:

NOTE Offloading to the GPU only works on systems with 11th Generation Intel® Core™ processors with
Intel® Iris® Xe Integrated Graphics.

docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/kudan-slam-gpu.tutorial.yml up
10. On a different terminal, check how much of the GPU is used with intel-gpu-top:

sudo apt-get install intel-gpu-tools


sudo intel_gpu_top

11. To close this execution, close the rviz2 window, and press Ctrl-c in the terminal.
12. Clean up the Docker* images:

docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/kudan-slam-gpu.tutorial.yml down --remove-orphans

Troubleshooting
If you encounter this error:

amr-kudan-slam | [kdvisual_ros2_rgbd-1] /workspace/src/gpu/l0_rt_helpers.h:56: L0 error 78000001


The render group might have an ID different from 109, which is the value used in the yml files in the examples.
To find the ID of the render group on your system:

getent group render


cat /etc/group |grep render
If the result is different from render:x:109, change the yml file:

gedit 01_docker_sdk_env/docker_compose/05_tutorials/kudan-slam-gpu.tutorial.yml
# and/or
gedit 01_docker_sdk_env/docker_compose/05_tutorials/kudan-slam-cpu.tutorial.yml
# Change the value at line 26 from 109 to the number you got above.
If you encounter this error:

Failed to initialize SLAM, error 5 = The supplied license key is invalid or expired

Update the EI for AMR software to Release 2022.3.1; see the Get Started Guide for Robots for software
install instructions.
If the Kudan Visual SLAM tutorial does not start within a few seconds and you see the message “Building
kudan-slam,” the Docker* engine started building the incorrect version of the amr-kudan-slam image.
Specify that the Docker* engine must use Release 2022.3.1 of the amr-kudan-slam Docker* image:

export DOCKER_TAG=2022.3.1

If you are building your own Kudan Visual SLAM Docker* image outside the amr-kudan-slam Docker* image,
retrieve the updated license file (intel_amr.kdlicense) from the amr-kudan-slam:2022.3.1 Docker*
image:

id=$(docker create amr-kudan-slam:2022.3.1)


docker cp $id:/home/eiforamr/ros2_ws/install/kdvisual_ros2/share/kdvisual_ros2/
config/intel_amr.kdlicense - > temp.tar
docker rm -v $id
tar xvf temp.tar && rm temp.tar

For general robot issues, go to: Troubleshooting for Robot Tutorials.

FastMapping Algorithm

For more information on FastMapping, see How it Works.

Run the Sample Application


1. Check if your installation has the amr-fastmapping Docker* image.

docker images |grep amr-fastmapping


#if you have it installed, the result is:
amr-fastmapping

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

2. If the image is not installed, Intel recommends re-installing the EI for AMR Robot Kit with the Get
Started Guide for Robots.
3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
4. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=17
# If the bags were not extracted before do it now
unzip 01_docker_sdk_env/docker_compose/06_bags.zip -d 01_docker_sdk_env/docker_compose/
sudo chmod 0777 -R 01_docker_sdk_env/docker_compose/06_bags
5. Run the FastMapping Algorithm using a bag of a robot spinning:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/fastmapping.tutorial.yml up
Expected output:

6. To close this, do one of the following:
• Type Ctrl-c in the terminal where you did the up command.
• Run this command in another terminal:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/fastmapping.tutorial.yml down

Troubleshooting
For general robot issues, go to: Troubleshooting for Robot Tutorials.

ADBSCAN Algorithm

This tutorial tells you how to run the ADBSCAN algorithm from EI for AMR using 2D Slamtec* RPLIDAR and
Intel® RealSense™ camera input.
It outputs to the obstacle_array topic of type nav2_dynamic_msgs/ObstacleArray.
Prerequisites: You know how to connect and configure a Slamtec* RPLIDAR sensor. For details, see: 2D
LIDAR and ROS 2 Cartographer.
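
If you want to consume the detections from your own node, a minimal ROS 2 subscriber similar to the sketch below can listen on the obstacle_array topic. The include path and the obstacles field name are assumptions based on the nav2_dynamic_msgs/ObstacleArray type named above; verify them against the message definition installed on your system.

// Sketch: minimal listener for the ADBSCAN output topic.
// The include path and the "obstacles" field are assumptions; check them
// against the nav2_dynamic_msgs package in your installation.
#include <rclcpp/rclcpp.hpp>
#include <nav2_dynamic_msgs/msg/obstacle_array.hpp>

class ObstacleListener : public rclcpp::Node
{
public:
  ObstacleListener() : Node("obstacle_listener")
  {
    sub_ = create_subscription<nav2_dynamic_msgs::msg::ObstacleArray>(
      "obstacle_array", 10,
      [this](nav2_dynamic_msgs::msg::ObstacleArray::SharedPtr msg) {
        RCLCPP_INFO(get_logger(), "ADBSCAN reported %zu obstacles",
                    msg->obstacles.size());
      });
  }

private:
  rclcpp::Subscription<nav2_dynamic_msgs::msg::ObstacleArray>::SharedPtr sub_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<ObstacleListener>());
  rclcpp::shutdown();
  return 0;
}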

Run the ADBSCAN Algorithm with Slamtec* RPLIDAR Input


1. Check if your installation has the amr-adbscan and amr-rplidar Docker* images.

docker images |grep amr-adbscan


#if you have it installed, the result is:
amr-adbscan
docker images |grep amr-rplidar
#if you have it installed, the result is:
amr-rplidar

NOTE If one or both of the images are not installed, continuing with these steps triggers a build
that takes longer than an hour (sometimes, a lot longer depending on the system resources and
internet connection).

2. If one or both of the images are not installed, Intel recommends installing the Robot Base Kit or the
Robot Complete Kit with the Get Started Guide for Robots.
3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
4. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=17
# Unzip the ros2 bags if they were not unzipped before
unzip 01_docker_sdk_env/docker_compose/06_bags.zip -d 01_docker_sdk_env/docker_compose/
sudo chmod 0777 -R 01_docker_sdk_env/docker_compose/06_bags
5. Depending on the Slamtec* RPLIDAR availability, you have two possibilities:
• Slamtec* RPLIDAR connected
1. Verify the udev rules that you configured for RPLIDAR in 2D LIDAR and ROS 2 Cartographer.
a. Get the Slamtec* RPLIDAR serial port:

dmesg | grep cp210x


b. Check for similar logs:

usb 1-3: SerialNumber: 0001


cp210x 1-3:1.0: cp210x converter detected
usb 1-3: cp210x converter now attached to ttyUSB0
c. Export the port:

export RPLIDAR_SERIAL_PORT=/dev/ttyUSB0
# this value may differ from system to system, use the value returned in the previous step
2. Start a pre-configured yml file that starts the LIDAR Node and then the ADBSCAN application:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/adbscan_LIDAR.tutorial.yml up
• No Slamtec* RPLIDAR connected
Start a pre-configured yml file that plays a ROS 2 bag containing LIDAR data and then the ADBSCAN
application:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/adbscan_2D.tutorial.yml up
Expected output: ADBSCAN prints logs of its interpretation of the LIDAR data coming from the ROS
2 bag.

Run the ADBSCAN Algorithm with Intel® RealSense™ Camera Input
1. Check if your installation has the amr-adbscan and amr-realsense Docker* images.

docker images |grep amr-adbscan


#if you have it installed, the result is:
amr-adbscan
docker images |grep amr-realsense
#if you have it installed, the result is:
amr-realsense

NOTE If one or both of the images are not installed, continuing with these steps triggers a build
that takes longer than an hour (sometimes, a lot longer depending on the system resources and
internet connection).

2. If one or both of the images are not installed, Intel recommends installing the Robot Base Kit or Robot
Complete Kit with the Get Started Guide for Robots.
3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
4. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=17
# Unzip the ros2 bags if they were not unzipped before
unzip 01_docker_sdk_env/docker_compose/06_bags.zip -d 01_docker_sdk_env/docker_compose/
sudo chmod 0777 -R 01_docker_sdk_env/docker_compose/06_bags
5. Depending on the Intel® RealSense™ camera availability, you have two possibilities:
• Intel® RealSense™ camera connected
Start a pre-configured yml file that starts the Intel® RealSense™ node and then the ADBSCAN
application:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/adbscan_RealSense.tutorial.yml up
• No Intel® RealSense™ camera connected
Start a pre-configured yml file that plays a ROS 2 bag containing Intel® RealSense™ data and then
the ADBSCAN application:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/adbscan_RS.tutorial.yml up
Expected result: rviz2 starts, and you see how ADBSCAN interprets Intel® RealSense™ data coming
from the ROS 2 bag:


Troubleshooting
For general robot issues, go to: Troubleshooting for Robot Tutorials.

ITS Path Planner ROS 2 Navigation Plugin

The ITS plugin for the ROS 2 Navigation 2 application is a global path planner module based on Intelligent sampling and Two-way Search (ITS). It does not support continuous replanning.
Prerequisites: Use a simple behavior tree with a compute path to pose and a follow path.
ITS planner inputs:
• global 2D costmap (nav2_costmap_2d::Costmap2D)
• start and goal pose (geometry_msgs::msg::PoseStamped)
ITS planner outputs: 2D waypoints of the path
Path planning steps summary:
1. The ITS planner converts the 2D costmap to either a Probabilistic Road Map (PRM) or a Deterministic
Road Map (DRM).
2. The generated roadmap is saved as a txt file which can be reused for multiple inquiries.
3. The ITS planner conducts a two-way search to find a path from the source to the destination. Either the
smoothing filter or a catmull spline interpolation can be used to create a smooth and continuous path.
The generated smooth path is in the form of a ROS 2 navigation message type (nav_msgs::msg).
For customization options, see ITS Path Planner Plugin Customization.
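
The planner is exercised through the standard Nav2 action interface, so a goal can also be sent programmatically instead of through rviz2. The sketch below uses the standard nav2_msgs NavigateToPose action; the goal coordinates and frame are placeholders for your map.

// Sketch: send a navigation goal to Nav2 programmatically; the ITS planner
// then computes the global path to it. Coordinates are example values.
#include <chrono>
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <rclcpp_action/rclcpp_action.hpp>
#include <nav2_msgs/action/navigate_to_pose.hpp>

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("its_goal_sender");

  using NavigateToPose = nav2_msgs::action::NavigateToPose;
  auto client = rclcpp_action::create_client<NavigateToPose>(node, "navigate_to_pose");

  if (!client->wait_for_action_server(std::chrono::seconds(10))) {
    RCLCPP_ERROR(node->get_logger(), "Nav2 action server not available");
    return 1;
  }

  NavigateToPose::Goal goal;
  goal.pose.header.frame_id = "map";
  goal.pose.header.stamp = node->now();
  goal.pose.pose.position.x = 1.5;   // example goal in the map frame
  goal.pose.pose.position.y = 0.5;
  goal.pose.pose.orientation.w = 1.0;

  auto goal_future = client->async_send_goal(goal);
  rclcpp::spin_until_future_complete(node, goal_future);
  rclcpp::shutdown();
  return 0;
}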

Run the ROS 2 Navigation Sample Application Using ITS Path Planner
1. Check if your installation has the eiforamr-full-flavour-sdk Docker* image.

docker images |grep eiforamr-full-flavour-sdk


#if you have it installed, the result is:
eiforamr-full-flavour-sdk

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

2. If the image is not installed, Intel recommends installing the Robot Complete Kit with the Get Started
Guide for Robots.
3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
4. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
5. Start the ROS 2 navigation sample application using the TurtleBot* 3 Gazebo* simulation:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/its_path_planner.tutorial.yml up

NOTE The above command opens Gazebo* and rviz2 applications. Gazebo* takes a longer time to
open (up to a minute) depending on the host’s capabilities. Both applications contain the simulated
waffle map, and a simulated robot. Initially, the applications are opened in the background, but you
can bring them into the foreground, side-by-side, for a better visual.

a. Set the robot 2D Pose Estimate in rviz2:


a. Set the initial robot pose by pressing 2D Pose Estimate in rviz2.
b. At the robot estimated location, down-click inside the 2D map. For reference, use the robot
pose as it appears in Gazebo*.
c. Set the orientation by dragging forward from the down-click. This also enables ROS 2
navigation.


b. In rviz2, press Navigation2 Goal, and choose a destination for the robot. This calls the
behavioral tree navigator to go to that goal through an action server.


Expected result: The robot moves along the path generated to its new destination.
c. Set new destinations for the robot, one at a time.

d. To close this, do one of the following:


• Type Ctrl-c in the terminal where you did the up command.
• Run this command in another terminal:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/its_path_planner.tutorial.yml down

Troubleshooting
For general robot issues, go to Troubleshooting for Robot Tutorials.


ITS Path Planner Plugin Customization

The ROS 2 navigation bringup application is started using the TurtleBot* 3 Gazebo* simulation, and it
receives as input parameter its_nav2_params.yaml.
Check the code snippet from 01_docker_sdk_env/docker_compose/05_tutorials/
its_path_planner.tutorial.yml:
export TURTLEBOT3_MODEL=waffle
export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:/home/eiforamr/ros2_ws/install/turtlebot3_gazebo/
share/turtlebot3_gazebo/models/
ros2 launch nav2_bringup tb3_simulation_launch.py params_file:=${CONTAINER_BASE_PATH}/
01_docker_sdk_env/docker_compose/05_tutorials/param/its_nav2_params.yaml
To use the ITS path planner plugin, the following parameters are added in its_nav2_params.yaml:
planner_server:
ros__parameters:
expected_planner_frequency: 0.01
use_sim_time: True
planner_plugins: ["GridBased"]
GridBased:
plugin: "its_planner/ITSPlanner"
interpolation_resolution: 0.05
catmull_spline: False
smoothing_window: 15
buffer_size: 10
build_road_map_once: True
min_samples: 250
roadmap: "PROBABLISTIC"
w: 32
h: 32
n: 2

ITS Path Planner Plugin Parameters


catmull_spline:
If true, the generated path from the ITS is interpolated with the catmull spline method; otherwise, a
smoothing filter is used to smooth the path.

smoothing_window:
The window size for the smoothing filter (The unit is the grid size.)

buffer_size:
During roadmap generation, the samples are generated away from obstacles. The buffer size dictates how far
away from obstacles the roadmap samples should be.

build_road_map_once:
If true, the roadmap is loaded from the saved file; otherwise, a new roadmap is generated.

min_samples:
The minimum number of samples required to generate the roadmap

roadmap:
Either PROBABILISTIC or DETERMINISTIC

w:


The width of the window for intelligent sampling

h:
The height of the window for intelligent sampling

n:
The minimum number of samples that is required in an area defined by w and h

You can modify these values by editing the file below, at lines 251-267:

01_docker_sdk_env/docker_compose/05_tutorials/param/its_nav2_params.yaml

Intel® oneAPI Base Toolkit Sample Application

This tutorial tells you how to use the DPC++ compiler, convert CUDA to DPC++, build it, and run it in a
Docker* container.

Run the Sample Application


1. Check if your installation has the eiforamr-full-flavour-sdk Docker* image.

docker images |grep eiforamr-full-flavour-sdk


#if you have it installed, the result is:
eiforamr-full-flavour-sdk

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

2. If the image is not installed, Intel recommends installing the Robot Complete Kit with the Get Started
Guide for Robots.
3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
4. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
5. Run the command below to start the Docker* container as root:

CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml run full-sdk bash

NOTE If a proxy is required to connect to the Internet, update /etc/apt/apt.conf.d/proxy.conf


with the corresponding exports and execute the following export commands:

echo 'Acquire::http::proxy "<http_proxy:port>";' | sudo tee -a /etc/apt/apt.conf.d/proxy.conf
echo 'Acquire::https::proxy "<https_proxy:port>";' | sudo tee -a /etc/apt/apt.conf.d/proxy.conf
export http_proxy="http://<http_proxy>:port"
export https_proxy="http://<https_proxy>:port"

6. Install CUDA (replace <http_proxy:port> with your proxy):

# Send proxy exports


echo "deb http://apt.pop-os.org/proprietary focal main" | sudo tee -a /etc/apt/sources.list.d/
pop-proprietary.list

sudo apt-key adv --keyserver-options http-proxy=<http_proxy:port> --keyserver
keyserver.ubuntu.com --recv-key 204DD8AEC33A7AFF
sudo -E apt-get update -y --allow-unauthenticated && DEBIAN_FRONTEND=noninteractive sudo -E apt-
get install -y --no-install-recommends system76-cuda-10.1
7. The install command may fail. Check if CUDA was installed:

ls /usr/lib/cuda*
Example output:

/usr/lib/cuda/:
EULA.txt NsightSystems-2018.3 cublas_version.txt extras jre libnsight
nsightee_plugins nvvm share targets version.txt
NsightCompute-2019.1 bin doc include lib64 libnvvp
nvml samples src tools

/usr/lib/cuda-10.1/:
EULA.txt NsightSystems-2018.3 cublas_version.txt extras jre libnsight
nsightee_plugins nvvm share targets version.txt
NsightCompute-2019.1 bin doc include lib64 libnvvp
nvml samples src tools
8. Set up the environment for Intel® oneAPI Base Toolkit:

source /opt/intel/oneapi/setvars.sh
9. Download a sample file that uses the DPC++ compiler:

wget -L https://raw.githubusercontent.com/intel/llvm/98b6ee437ed325992ace95548b0ffc01dd4cbbe9/
sycl/examples/simple-dpcpp-app.cpp -O simple.cpp
# if the file does not download, set the proxies first and try again:
export http_proxy="http://<http_proxy>:port"
export https_proxy="http://<https_proxy>:port"
Compile the sample with the command below, and run the output binary:

dpcpp simple.cpp -o simple


./simple
Expected output:

"The results are correct":


10. Convert CUDA to DPC++ and build it.
a. Go to CUDA code sample and convert to DPC++:

git clone https://github.com/oneapi-src/oneAPI-samples.git


cp oneAPI-samples/Tools/Migration/vector-add-dpct/src/vector_add.cu /home/eiforamr/data_samples/
vector_add.cu
chmod +x /home/eiforamr/data_samples/vector_add.cu
dpct --in-root=/home/eiforamr/data_samples/ /home/eiforamr/data_samples/vector_add.cu
Expected output:

root@edgesoftware:/home/eiforamr/data_samples# chmod +x "${DATA_SAMPLES}"/vector_add.cu


root@edgesoftware:/home/eiforamr/data_samples# dpct --in-root=/home/eiforamr/data_samples/
vector_add.cu
NOTE: Could not auto-detect compilation database for file 'vector_add.cu' in '/home/eiforamr/
data_samples' or any parent directory.
The directory "dpct_output" is used as "out-root"
Processing: /home/eiforamr/data_samples/vector_add.cu
/home/eiforamr/data_samples/vector_add.cu:32:14: warning: DPCT1003:0: Migrated API does not
return error code. (*, 0) is inserted. You may need to rewrite this code.
status = cudaMemcpy(Result, d_C, VECTOR_SIZE*sizeof(float), cudaMemcpyDeviceToHost);
^


Processed 1 file(s) in -in-root folder "/home/eiforamr/data_samples"

See Diagnostics Reference to resolve warnings and complete the migration:


https://www.intel.com/content/www/us/en/develop/documentation/intel-dpcpp-compatibility-tool-
user-guide/top/diagnostics-reference.html

root@edgesoftware:/home/eiforamr/data_samples#
b. Conversion successfully done:

ls
dpct_output vector_add.cu
c. Go to output directory:

cd dpct_output
d. Create a simple Makefile with this content:

CXX = dpcpp
TARGET = vector_add
SRCS = vector_add.dp.cpp

# Use predefined implicit rules and add one for *.cpp files.
%.o: %.cpp
$(CXX) -c $(CXXFLAGS) $(CPPFLAGS) $< -o $@

all: $(TARGET)

$(TARGET): $(SRCS) $(DEPS)


$(CXX) $(SRCS) -o $@

run: $(TARGET)
./$(TARGET)

.PHONY: clean
clean:
rm -f $(TARGET) *.o
e. Run make and then the output binary named vector_add:

make
./vector_add
Expected output:
A block of even numbers is listed, indicating the result of adding two vectors: [1..N] + [1..N].

./vector_add

2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32
34 36 38 40 42 44 46 48 50 52 54 56 58 60 62 64
66 68 70 72 74 76 78 80 82 84 86 88 90 92 94 96
98 100 102 104 106 108 110 112 114 116 118 120 122 124 126 128
130 132 134 136 138 140 142 144 146 148 150 152 154 156 158 160
162 164 166 168 170 172 174 176 178 180 182 184 186 188 190 192
194 196 198 200 202 204 206 208 210 212 214 216 218 220 222 224
226 228 230 232 234 236 238 240 242 244 246 248 250 252 254 256

258 260 262 264 266 268 270 272 274 276 278 280 282 284 286 288
290 292 294 296 298 300 302 304 306 308 310 312 314 316 318 320
322 324 326 328 330 332 334 336 338 340 342 344 346 348 350 352
354 356 358 360 362 364 366 368 370 372 374 376 378 380 382 384
386 388 390 392 394 396 398 400 402 404 406 408 410 412 414 416
418 420 422 424 426 428 430 432 434 436 438 440 442 444 446 448
450 452 454 456 458 460 462 464 466 468 470 472 474 476 478 480
482 484 486 488 490 492 494 496 498 500 502 504 506 508 510 512

Troubleshooting
The Makefile from step 10.d contains tabs and may not copy well to your system, giving this error:

Makefile:8: *** missing separator. Stop.


To fix this, make sure there are tabs in lines 8, 15, 19, and 24.
For general robot issues, go to: Troubleshooting for Robot Tutorials.

Robot Teleop Using a Gamepad

Hardware Prerequisites
You have a robot and a gamepad.
This example uses a Logitech* F710 wireless gamepad and the UP Xtreme i11 Robotic Kit.
1. Add ros-base-teleop to your robot’s yml file.
EI for AMR contains a yml file with ros-base-teleop configured:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
gedit 01_docker_sdk_env/docker_compose/05_tutorials/
aaeon_wandering__aaeon_realsense_collab_slam_fm_nav2_ukf.tutorial.yml
Copy the lines for ros-base-teleop into your generic-robot yml file:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
meld 01_docker_sdk_env/docker_compose/05_tutorials/aaeon_wandering__aaeon_realsense_collab_slam_fm_nav2_ukf.tutorial.yml 01_docker_sdk_env/docker_compose/05_tutorials/generic_robot_wandering__aaeon_realsense_collab_slam_fm_nav2.tutorial.yml
# replace 01_docker_sdk_env/docker_compose/05_tutorials/generic_robot_wandering__aaeon_realsense_collab_slam_fm_nav2.tutorial.yml with your yml file
2. Insert the USB dongle in the robot.
3. Place the robot in an area with multiple objects in it.
4. Go to the installation folder of Edge_Insights_for_Autonomous_Mobile_Robots, and prepare the
environment:
cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=27
sudo chmod a+rw /dev/input/js0
sudo chmod a+rw /dev/input/event*
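# Optional check: confirm that the gamepad is detected (assumes it enumerates as /dev/input/js0;
# jstest is provided by the joystick package)
ls -l /dev/input/js*
jstest /dev/input/js0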
5. Start mapping the area:
docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/
generic_robot_wandering__aaeon_realsense_collab_slam_fm_nav2_ukf.tutorial.yml up
# replace 01_docker_sdk_env/docker_compose/05_tutorials/
generic_robot_wandering__aaeon_realsense_collab_slam_fm_nav2_ukf.tutorial.yml up with your yml
file
Expected result: The robot starts wandering around the room and mapping the entire area.


6. After the robot starts to move, you can control it with the gamepad:
• To control the robot, keep the top right button labeled 1 in the picture below pressed at all times.
• To move the robot on the X and Y axes, enable the mode button, and use the buttons labeled 2 in
the picture below.
• To rotate the robot in place, use the joystick labeled 3 in the picture below.
• To move the robot on the X and Y axes, disable the mode button, and use the joystick labeled 4 in
the picture below.

Robot Teleop Using a Keyboard

Hardware Prerequisites
You have a robot and a keyboard or an ssh/vnc connection to the robot.
This example uses the UP Xtreme i11 Robotic Kit.
1. Connect to your robot via ssh/vnc or direct access. If you choose direct access, insert a monitor and a
keyboard into the robot’s compute system.
2. Start your robot’s node, and make sure that you have the correct remapping similar to this:
cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
chmod a+x run_interactive_docker.sh
./run_interactive_docker.sh amr-aaeon-amr-interface:2022.3 eiforamr -c aaeon_node
source /home/eiforamr/workspace/ros_entrypoint.sh

export ROS_DOMAIN_ID=167
ros2 run ros2_amr_interface amr_interface_node --ros-args -p try_reconnect:=true -p timeout_connection:=1000.0 -p publishTF:=true --remap /amr/cmd_vel:=/cmd_vel -p port_name:=/dev/ttyUSB0
3. In another terminal, open full-sdk, and start teleop_twist_keyboard:

NOTE The full-sdk docker image is only present in the Robot Complete Kit, not in the Robot Base Kit
or UP Xtreme i11 Robotic Kit.

To check if you have it:


docker images |egrep "eiforamr-full-flavour-sdk"
#if you have it downloaded, the result is:
eiforamr-full-flavour-sdk

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml
run full-sdk bash
source /home/eiforamr/workspace/ros_entrypoint.sh
export ROS_DOMAIN_ID=167
ros2 run teleop_twist_keyboard teleop_twist_keyboard
Expected result: The robot responds to your keyboard commands in these ways:
• i: Move forward
• k: Stop
• ,: Move backward
• j: Turn left
• l: Turn right
• q/z: Increase/decrease max speeds by 10%
• w/x: Increase/decrease only linear speed by 10%
• e/c: Increase/decrease only angular speed by 10%
• L or J (only for omnidirectional robots): Strafe (move sideways)
• anything else: Stop
• Ctrl-c: Quit
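To confirm that your key presses are actually being published, you can echo the velocity topic from another full-sdk shell that uses the same ROS_DOMAIN_ID (a quick sanity check; the topic name follows the remapping shown in step 2):

ros2 topic echo /cmd_vel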

Simulation

The following tutorials tell you how to use ROS 2 simulations inside Docker* containers. Robot sensing and
navigation can be tested in these simulated environments.

turtlesim

Run a ROS 2 simulation sample application in the Docker* container.


1. Check if your installation has the turtlesim Docker* image.
docker images |grep amr-turtlesim
#if you have it installed, the result is:
amr-turtlesim


NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

2. If the image is not installed, Intel recommends installing the Robot Complete Kit (see Get Started Guide
for Robots Step 3).
3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
4. Prepare the environment setup:
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=12

NOTE The “Access control disabled, clients cannot connect from any host.” message is expected and
does not impact functionality.

5. Run an automated yml file that opens a ROS 2 sample application inside the EI for AMR Docker*
container.
CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/
turtlesim.tutorial.yml up
6. Go to Plugins > Services > Service Caller. To move turtle1, choose /turtle1/teleport_absolute from the
Service drop-down list, and change the x and y coordinates from their original values. Press Call. The
turtle should move. Close the Service Caller window by pressing x, and then type Ctrl-c.
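Alternatively, the same teleport can be triggered from a terminal attached to the turtlesim container; a minimal sketch, using arbitrary example coordinates:

ros2 service call /turtle1/teleport_absolute turtlesim/srv/TeleportAbsolute "{x: 2.0, y: 3.0, theta: 0.0}"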


7. To close this, do one of the following:


• Type Ctrl-c in the terminal where you did the up command.
• Close the rqt window.


• Run this command in another terminal:


CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/
turtlesim.tutorial.yml down
8. For an explanation of what happened, open the yml file:
• The first 23 lines are from the EI for AMR infrastructure.
• Line 26 starts the turtlesim ROS 2 node.
• Line 31 starts the rqt so that the turtle can be controlled.
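To inspect those lines directly from the AMR_containers folder without opening an editor, you can print the top of the file, for example:

sed -n '1,35p' 01_docker_sdk_env/docker_compose/05_tutorials/turtlesim.tutorial.yml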

Wandering Application in a ARIAC Gazebo* Simulation

This tutorial tells you how to use an industrial simulation room model (the OSRF GEAR workcell that was
used for the 2018 ARIAC competition) with objects and a wafflebot3 robot for simulation in Gazebo*. The
industrial room includes: shelves, conveyor belts, pallets, boxes, robots, stairs, ground lane markers, and a
tiled boundary wall.

Run the Sample Application


1. If your system has an Intel® GPU, follow the steps in the Get Started Guide for Robots to enable the
GPU for simulation. This step improves Gazebo* simulation performance.
2. Check if your installation has the required Docker* images.

docker images |egrep "amr-turtlebot3|amr-rtabmap|amr-nav2|amr-wandering"


#if you have them installed, the result is:
amr-turtlebot3
amr-rtabmap
amr-nav2
amr-wandering

NOTE If one or more of the images are not installed, continuing with these steps triggers a build
that takes longer than an hour (sometimes, a lot longer depending on the system resources and
internet connection).

3. If one or more of the images are not installed, Intel recommends installing the Robot Complete Kit with
the Get Started Guide for Robots.
4. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
5. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=32
6. Run the command below to start the Docker* container:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/wandering_gazebo_ariac.tutorial.yml up
Expected output:
The robot starts wandering inside the simulation. See the simulation snapshots from different angles:


To increase performance, the real time update rate can be set to 0:


a. In Gazebo*’s left panel, go to the World Tab, and click Physics.
b. Change the real time update rate to 0.
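The same change can usually also be made from a shell inside the container that runs Gazebo*; a sketch, assuming the classic gz command line tool is available there:

gz physics -u 0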
7. To close this, do one of the following:
• Type Ctrl-c in the terminal where you did the up command.
• Run this command in another terminal:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/wandering_gazebo_ariac.tutorial.yml down --remove-orphans


Troubleshooting
The ARIAC world tutorial works only with the eiforamr user. If the yml is started with a different user, the
Gazebo* model fails.
If the robot is not moving but Gazebo* is started, start the Wandering application manually by opening a
container shell and entering:

docker-compose -f 01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml run wandering bash
ros2 run wandering_app wandering
For general robot issues, go to: Troubleshooting for Robot Tutorials.

Wandering Application in a Waffle Gazebo* Simulation

This tutorial shows a TurtleBot* 3 simulated in a waffle world. For more information about TurtleBot* 3 and
the waffle world, see the TurtleBot* 3 documentation.

Run the Sample Application


1. If your system has an Intel® GPU, follow the steps in the Get Started Guide for Robots to enable the
GPU for simulation. This step improves Gazebo* simulation performance.
2. Check if your installation has the required Docker* images.

docker images |egrep "amr-turtlebot3|amr-rtabmap|amr-nav2|amr-wandering"


#if you have them installed, the result is:
amr-turtlebot3
amr-rtabmap
amr-nav2
amr-wandering

NOTE If one or more of the images are not installed, continuing with these steps triggers a build
that takes longer than an hour (sometimes, a lot longer depending on the system resources and
internet connection).

3. If one or more of the images are not installed, Intel recommends installing the Robot Complete Kit with
the Get Started Guide for Robots.
4. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
5. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=32
6. Run this command to start the Docker* container:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/wandering_gazebo.tutorial.yml up
Expected output:
After Gazebo* starts, the robot starts wandering inside the simulation. See the simulation snapshot:


To increase performance, the real time update rate can be set to 0:


a. In Gazebo*’s left panel, go to the World Tab, and click Physics.
b. Change the real time update rate to 0.
7. To close this, do one of the following:
• Type Ctrl-c in the terminal where you did the up command.
• Run this command in another terminal:

CHOOSE_USER=eiforamr docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/wandering_gazebo.tutorial.yml down --remove-orphans

Troubleshooting
If the robot is not moving but Gazebo* has started, start the wandering application manually by opening a
container shell and entering:

docker-compose -f 01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml run wandering bash


ros2 run wandering_app wandering
For general robot issues, go to: Troubleshooting for Robot Tutorials.

Benchmarking and Profiling


Use the VTune™ profiler and OpenVINO™ toolkit benchmarking to measure the performance of your system.

VTune™ Profiler in a Docker* Container

Run the profiling application in a Docker* container with the VTune™ profiler.

Run the Sample Application


1. Check if your installation has the eiforamr-full-flavour-sdk Docker* image.

docker images |grep eiforamr-full-flavour-sdk


#if you have it installed, the result is:
eiforamr-full-flavour-sdk

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

2. If the image is not installed, Intel recommends installing the Robot Complete Kit with the Get Started
Guide for Robots.
3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
4. Prepare the environment setup:

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=19
5. Run the VTune™ profiler:

CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/vtune.tutorial.yml up oneapi

Expected output:

vtune: Warning: To profile kernel modules during the session, make sure they are available in
the /lib/modules/kernel_version/ location.
vtune: Collection started. To stop the collection, either press CTRL-C or enter from another
console window: vtune -r /tmp/matrix_multiply_vtune/r001gh -command stop.
Address of buf1 = 0x7f4578e4b010
Offset of buf1 = 0x7f4578e4b180
Address of buf2 = 0x7f457864a010
Offset of buf2 = 0x7f457864a1c0
Address of buf3 = 0x7f45746e2010
Offset of buf3 = 0x7f45746e2100
Address of buf4 = 0x7f4573ee1010
Offset of buf4 = 0x7f4573ee1140
Using multiply kernel: multiply1
Running on Intel(R) Iris(R) Xe Graphics [0x9a49]
Elapsed Time: 0.91916s
vtune: Collection stopped.
vtune: Using result path `/tmp/matrix_multiply_vtune/r001gh'
vtune: Executing actions 19 % Resolving information for `libpi_opencl.so'
vtune: Warning: Cannot locate debugging information for file `/usr/local/lib/
libze_intel_gpu.so.1'.
vtune: Executing actions 20 % Resolving information for `libc-dynamic.so'
vtune: Warning: Cannot locate debugging information for file `/lib/modules/5.10.65/kernel/fs/
overlayfs/overlay.ko'.
vtune: Executing actions 20 % Resolving information for `libm-2.31.so'
vtune: Warning: Cannot locate debugging information for file `/usr/lib/x86_64-linux-gnu/
libm-2.31.so'.
vtune: Executing actions 20 % Resolving information for `libc-2.31.so'
vtune: Warning: Cannot locate debugging information for file `/usr/lib/x86_64-linux-gnu/
libc-2.31.so'.
vtune: Executing actions 20 % Resolving information for `ld-2.31.so'
vtune: Warning: Cannot locate debugging information for file `/usr/lib/x86_64-linux-gnu/
ld-2.31.so'.
vtune: Warning: Cannot locate file `vmlinux'.
vtune: Executing actions 20 % Resolving information for `libpin3dwarf.so'
vtune: Warning: Cannot locate debugging information for file `/usr/local/lib/libigc.so.1.0.8517'.
vtune: Executing actions 20 % Resolving information for `libxed.so'
vtune: Warning: Cannot locate debugging information for the Linux kernel. Source-level analysis
is not possible. Function-level analysis is limited to kernel symbol tables. See the Enabling
Linux Kernel Analysis topic in the product online help for instructions.
vtune: Executing actions 21 % Resolving information for `libgcc_s.so.1'
vtune: Warning: Cannot locate debugging information for file `/usr/lib/x86_64-linux-gnu/
libgcc_s.so.1'.
vtune: Executing actions 21 % Resolving information for `libstdc++.so.6.0.28'
vtune: Warning: Cannot locate debugging information for file `/usr/lib/x86_64-linux-gnu/libstdc+
+.so.6.0.28'.
vtune: Executing actions 21 % Resolving information for `libtpsstool.so'
vtune: Warning: Cannot locate debugging information for file `/opt/intel/oneapi/vtune/2022.0.0/
lib64/libtpsstool.so'.
vtune: Executing actions 21 % Resolving information for `i915.ko'
vtune: Warning: Cannot locate debugging information for file `/opt/intel/oneapi/vtune/2022.0.0/
lib64/runtime/libittnotify_collector.so'.
vtune: Warning: Cannot locate debugging information for file `/opt/intel/oneapi/vtune/2022.0.0/
lib64/runtime/libittnotify_collector.so'.
vtune: Executing actions 22 % Resolving information for `libOpenCL.so.1'
vtune: Warning: Cannot locate debugging information for file `/usr/local/lib/
libze_intel_gpu.so.1.2.20939'.
vtune: Executing actions 22 % Resolving information for `libigdrcl.so'


vtune: Warning: Cannot locate debugging information for file `/lib/modules/5.10.65/kernel/


drivers/gpu/drm/i915/i915.ko'.
vtune: Warning: Cannot locate debugging information for file `/usr/local/lib/intel-opencl/
libigdrcl.so'.
vtune: Warning: Cannot locate debugging information for file `/usr/local/lib/intel-opencl/
libigdrcl.so'.
vtune: Executing actions 75 % Generating a report Elapsed Time:
1.163s
GPU Time: 0.041s
EU Array Stalled/Idle: 55.0% of Elapsed time with GPU busy
| The percentage of time when the EUs were stalled or idle is high, which has a
| negative impact on compute-bound applications.
|
GPU L3 Bandwidth Bound: 82.0% of peak value
| L3 bandwidth was high when EUs were stalled or idle. Consider improving
| cache reuse.
|

Hottest GPU Computing Tasks Bound by GPU L3 Bandwidth


Computing Task Total Time
-------------- ----------
Matrix1<float> 0.035s
Occupancy: 91.1% of peak value

Hottest GPU Computing Tasks with Low Occupancy


Computing Task Total Time SIMD Width Peak Occupancy(%) Occupancy(%) SIMD
Utilization(%)
-------------- ---------- ---------- ----------------- ------------
-------------------
Sampler Busy: 0.0% of peak value

Hottest GPU Computing Tasks with High Sampler Usage


Computing Task Total Time
-------------- ----------
Collection and Platform Info
Application Command Line: ./matrix.dpcpp
Operating System: 5.10.65 DISTRIB_ID=Ubuntu DISTRIB_RELEASE=20.04 DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.3 LTS"
Computer Name: glaic3aeon2
Result Size: 28.3 MB
Collection start time: 15:39:14 04/01/2022 UTC
Collection stop time: 15:39:15 04/01/2022 UTC
Collector Type: Event-based sampling driver,Driverless Perf system-wide sampling,User-mode
sampling and tracing
CPU
Name: Intel(R) microarchitecture code named Tigerlake
Frequency: 2.803 GHz
Logical CPU Count: 8
GPU
Name: TigerLake GT2 [Iris Xe Graphics]
Vendor: Intel Corporation
EU Count: 96
Max EU Thread Count: 7
Max Core Frequency: 1.350 GHz
GPU OpenCL Info
Version
Max Compute Units: 96
Max Work Group Size: 512
Local Memory: 65.5 KB

SVM Capabilities

If you want to skip descriptions of detected performance issues in the report,


enter: vtune -report summary -report-knob show-issues=false -r <my_result_dir>.
Alternatively, you may view the report in the csv format: vtune -report
<report_name> -format=csv.
vtune: Executing actions 100 % done
6. For a list of the steps that were executed, see 01_docker_sdk_env/docker_compose/05_tutorials/
vtune.tutorial.yml.

Troubleshooting
For general robot issues, go to: Troubleshooting for Robot Tutorials.

OpenVINO™ Benchmarking Tool

This tutorial tells you how to run the benchmark application on an 11th Generation Intel® Core™ processor
with an integrated GPU. It uses the asynchronous mode to estimate deep learning inference engine
performance and latency.

Start Docker* Container


1. Check if your installation has the eiforamr-full-flavour-sdk Docker* image.

docker images |grep eiforamr-full-flavour-sdk


#if you have it installed, the result is:
eiforamr-full-flavour-sdk

NOTE If the image is not installed, continuing with these steps triggers a build that takes longer
than an hour (sometimes, a lot longer depending on the system resources and internet connection).

2. If the image is not installed, Intel recommends installing the Robot Complete Kit with the Get Started
Guide for Robots.
3. Go to the AMR_containers folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_containers
4. Start the Docker* container as root:

./run_interactive_docker.sh eiforamr-full-flavour-sdk:<TAG> root

Set Environment Variables


The environment variables must be set before you can compile and run OpenVINO™ applications.
1. Run the following script:

source /opt/intel/openvino/bin/setupvars.sh
--or--
source <OPENVINO_INSTALL_DIR>/bin/setupvars.sh
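A quick way to confirm that the environment is active is to check one of the variables exported by setupvars.sh, for example:

echo $INTEL_OPENVINO_DIR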

Build Benchmark Application


1. Change the directory, and build the benchmark application with the provided build script:

cd /opt/intel/openvino/inference_engine/samples/cpp
./build_samples.sh


2. Once the build is successful, access the benchmark application in the following directory:

cd /root/inference_engine_cpp_samples_build/intel64/Release
-- or --
cd <INSTALL_DIR>/inference_engine_cpp_samples_build/intel64/Release
The benchmark_app application is available inside the Release folder.

Input File
Select an image file or a sample video file to provide as input to the benchmark application, for example from the
following directory:

cd /root/inference_engine_cpp_samples_build/intel64/Release

Application Syntax and Options


The benchmark application syntax is as follows:

./benchmark_app [OPTION]
In this tutorial, we recommend you select the following options:

./benchmark_app -m <model> -i <input> -d <device> -nireq <num_reqs> -nthreads <num_threads> -b <batch>

where:
<model>          Complete path to the model .xml file
<input>          Path to the folder containing the image or sample video file
<device>         Target device, for example, CPU or GPU
<num_reqs>       Number of parallel inference requests
<num_threads>    Number of CPU threads to use for inference (throughput mode)
<batch>          Batch size
For complete details on the available options, run the following command:

./benchmark_app -h

Run the Application


The benchmark application is executed as seen below. This tutorial uses the following settings:
• Benchmark application is executed on frozen_inference_graph model.
• Number of parallel inference requests is set as 8.
• Number of CPU threads to use for inference is set as 8.
• Device type is GPU.

./benchmark_app -d GPU -i ~/<dir>/input/ -m /home/eiforamr/workspace/object_detection/src/object_detection/models/ssd_mobilenet_v2_coco/frozen_inference_graph.xml -nireq 8 -nthreads 8

./benchmark_app -d GPU -i /home/eiforamr/data_samples/media_samples/plates_720.mp4 -m /home/eiforamr/workspace/object_detection/src/object_detection/models/ssd_mobilenet_v2_coco/frozen_inference_graph.xml -nireq 8 -nthreads 8
Expected output:

[Step 1/11] Parsing and validating input arguments


[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] /home/eiforamr/data_samples/media_samples/plates_720.mp4
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
API version ............ 2.1

Build .................. 2021.2.0-1877-176bdf51370-releases/2021/2
Description ....... API
[ INFO ] Device info:
GPU
clDNNPlugin version ......... 2.1
Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2

[Step 3/11] Setting device configuration


[ WARNING ] -nstreams default value is determined automatically for GPU device. Although the
automatic selection usually provides a reasonable performance,but it still may be non-optimal
for some cases, for more information look at README.
[Step 4/11] Reading network files
[ INFO ] Loading network files
[ INFO ] Read network took 89.49 ms
[Step 5/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1
[Step 6/11] Configuring input of the model
[Step 7/11] Loading the model to the device
[ INFO ] Load network took 44714.68 ms
[Step 8/11] Setting optimal runtime parameters
[Step 9/11] Creating infer requests and filling input blobs with images
[ INFO ] Network input 'image_tensor' precision U8, dimensions (NCHW): 1 3 300 300
[ WARNING ] No supported image inputs found! Please check your file extensions: bmp, dib, jpeg,
jpg, jpe, jp2, png, pbm, pgm, ppm, sr, ras, tiff, tif
[ INFO ] Infer Request 0 filling
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 1 filling
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 2 filling
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 3 filling
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 4 filling
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 5 filling
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 6 filling
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[ INFO ] Infer Request 7 filling
[ INFO ] Fill input 'image_tensor' with random values (image is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 8 inference requests using 2
streams for GPU, limits: 60000 ms duration)
[ INFO ] First inference took 10.01 ms

[Step 11/11] Dumping statistics report


Count: 9456 iterations
Duration: 60066.11 ms
Latency: 51.33 ms
Throughput: 157.43 FPS

Benchmark Report
Sample execution results using an 11th Gen Intel® Core™ i7-1185GRE @ 2.80 GHz.

Read network time (ms) 89


Load network time (ms) 44714.68

First inference time (ms) 10.01

Total execution time (ms) 60066.11

Total num of iterations 9456

Latency (ms) 51.33

Throughput (FPS) 157.43

NOTE Performance results are based on testing as of dates shown in configurations and may not
reflect all publicly available updates. No product or component can be absolutely secure. Performance
varies by use, configuration and other factors. Learn more at Intel® Performance Index.

Troubleshooting
For general robot issues, go to: Troubleshooting for Robot Tutorials.

EI for AMR Container on a Virtual Machine

Run the Edge Insights for Autonomous Mobile Robots container on a KVM guest.

Run the Sample Application


1. Verify that your CPU supports hardware virtualization using the following command. Output greater
than zero indicates support is present:

egrep -c '(vmx|svm)' /proc/cpuinfo

NOTE If the output is not greater than zero, reboot your server. Modify BIOS Settings > Security
features > Enable Virtualization Technology.

2. Verify that Kernel-based Virtual Machine (KVM) acceleration can be used with the commands:

sudo apt install -y cpu-checker


kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
3. Install KVM in Ubuntu* 20.04 using the following commands:

NOTE
1. If the PAM configuration page is displayed when installing the following packages, click Yes.
2. If Configuring openssh-server page is displayed when installing the following packages, select
keep the local version currently installed.

sudo apt update


sudo apt install -y qemu qemu-kvm libvirt-daemon libvirt-daemon-system libvirt-clients bridge-utils virt-manager virtinst openssh-server net-tools
sudo adduser $USER libvirt
sudo adduser $USER kvm


# Check if libvirtd is active


sudo systemctl status libvirtd

# If libvirtd is not active use


sudo systemctl enable --now libvirtd
4. Reboot the host:

sudo reboot -fn


After the reboot, verify that libvirtd is active.

sudo systemctl status libvirtd


# If libvirtd is not active use
sudo systemctl enable --now libvirtd
5. Create the KVM bridge:
a. Create a new bridge XML file:

vim br_kvm.xml
b. Add bridge details to br_kvm.xml:

<network>
<name>br_kvm</name>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='br10' stp='on' delay='0'/>
<ip address='192.168.124.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.124.50' end='192.168.124.200'/>
</dhcp>
</ip>
</network>
c. Define and start br_kvm network using the following commands:

sudo virsh net-define br_kvm.xml


sudo virsh net-start br_kvm
sudo virsh net-autostart br_kvm

# Check if autostart is enabled for br_kvm


sudo virsh net-list --all

Name State Autostart Persistent


br_kvm active yes yes
default active yes yes

# Confirm bridge creation and IP address


ip addr show dev br10
13: br10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen
1000
link/ether 52:54:00:21:ff:4f brd ff:ff:ff:ff:ff:ff
inet 192.168.124.1/24 brd 192.168.124.255 scope global br10
valid_lft forever preferred_lft forever


6. Create KVM with Ubuntu 20.04.


a. Download the Ubuntu 20.04 desktop .iso image from <releases.ubuntu.com/20.04/>.
b. Open virt-manager. Click on File > New VM > Select Local install media (ISO image or
CDROM). Choose the .iso file downloaded and choose the operating system to be installed.

c. Select the CPUs and Memory. For example: Memory: 4096, CPUs: 2
d. Add minimum 100 GB for the virtual machine storage.

e. Select the Virtual Network created above for Network Preferences.


f. Click Finish and start the Ubuntu 20.04 installation.


7. After Ubuntu Installation, add the Intel® RealSense™ camera to the VM by clicking on Show virtual
hardware details in Virt Manager > Add Hardware > Select USB Host Device and Select the
RealSense Camera.


8. Install Edge Insights for Autonomous Mobile Robots using the steps in Get Started Guide for Robots.

NOTE If your system is behind a proxy, you must configure the proxy settings.

9. Run the Intel® RealSense™ ROS 2 sample application inside the Docker* container using the steps from
Intel® RealSense™ ROS 2 Sample Application.


Troubleshooting
If the following error is encountered:

Error connecting to graphical console:


Error opening Spice console, SpiceClientGtk missing
Install gir1.2-spiceclientgtk-3.0 with the command:

sudo apt-get install -y gir1.2-spiceclientgtk-3.0


For general robot issues, go to: Troubleshooting for Robot Tutorials.

Fibocom’s FM350 5G Module Integration

This tutorial covers how to build, install, and run a Wireless Wide Area Network (WWAN) 5G private network
with Linux* components on Ubuntu* for Intel IoT platforms.
Prerequisites:
• Fibocom’s FM350 5G module installed on the compute system
• A 5G private network infrastructure
• The APN of the 5G private network

Update the Ubuntu* Kernel Version 5.13 to 5.15


1. Clone the Canonical* linux-intel-iotg kernel 5.15 repository from Launchpad*.

git clone https://git.launchpad.net/~canonical-kernel/ubuntu/+source/linux-intel-iotg/+git/focal/


cd focal
2. Reset HEAD to the Ubuntu-intel-iotg-5.15-5.15.0-1010.14_20.04.1 branch:

git reset --hard Ubuntu-intel-iotg-5.15-5.15.0-1010.14_20.04.1


3. Install the necessary packages and gcc dependencies:

sudo apt-get -y install build-essential gcc bc bison flex libssl-dev libncurses5-dev libelf-dev
dwarves zstd


4. Copy the configuration file to your folder, and rename it .config:

cp /boot/config-$(uname -r) ./.config


5. Change these kernel configuration values:

make olddefconfig
scripts/config --set-str SYSTEM_TRUSTED_KEYS ""
scripts/config --set-str CONFIG_SYSTEM_REVOCATION_KEYS ""
scripts/config --enable WWAN
scripts/config --module CONFIG_MTK_T7XX
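# Optional: confirm that the WWAN options were applied to the generated .config
grep -E 'CONFIG_WWAN=|CONFIG_MTK_T7XX=' .config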
6. Compile the kernel, and create the Debian* kernel packages:

make -j4 deb-pkg

NOTE This kernel compilation step takes a long time to complete:
• approximately one hour on systems with 32 GB of RAM
• two to three hours on systems with 8 GB of RAM

7. Install the new Debian* kernel packages:

cd ../ && sudo dpkg -i linux-*.deb


8. Configure the system to use the new kernel version.
a. Open the GRUB.

sudo cp /etc/default/grub /etc/default/grub.bak


sudo vi /etc/default/grub
b. Change the value of GRUB_DEFAULT from GRUB_DEFAULT=0 to GRUB_DEFAULT="Advanced
options for Ubuntu>Ubuntu, with Linux 5.15.35".
c. Update the GRUB.

cd /tmp
sudo update-grub
9. Reboot your system.

sync
sudo reboot -fn
10. Check your kernel version after reboot.

uname -r
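# The output should report the 5.15 kernel selected in GRUB above, for example:
5.15.35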

Build the 5G Module Kernel Modules


1. Go to the AMR_containers folder, and change the permissions of the install script:

cd Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
sudo chmod 775 ./01_docker_sdk_env/artifacts/01_amr/amr_5G_wwan/wwan_module_install.sh
2. If you already have it cloned, remove the focal/ folder:

rm -rf focal/
3. Run the kernel module install script:

./01_docker_sdk_env/artifacts/01_amr/amr_5G_wwan/wwan_module_install.sh
4. Follow the on-screen instructions to fetch the kernel source files and build and load the kernel modules.
(The patches are self-healing, so errors that appear in the patches are fixed by subsequent patches.)
5. Prepare the environment setup:

source 01_docker_sdk_env/docker_compose/common/docker_compose.source

If you get a “some variables not defined” error message, see Troubleshooting.
6. Run the WWAN 5G module container:

CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/wwan_5G_modem_tutorial.yml up
If you get a “some variables not defined” error message, see Troubleshooting.
7. Launch the WWAN 5G module container by opening a new terminal on the same system:

docker exec -it amr-wwan-5g-modem bash


8. Set up a 5G network connection.

./network_init.sh
9. Follow the on-screen instructions to enter the APN and IP route of your 5G private network.
10. Test the 5G network connection by pinging the IP address of the server.
11. Disconnect the 5G network connection and disable the 5G module from within the WWAN 5G module
container.
a. Obtain the 5G module index number:

mmcli -L
b. Disconnect the 5G network connection and disable the 5G module (this example uses 0 for the 5G
module index number):

mmcli -m 0 --simple-disconnect
mmcli -m 0 -d
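# Optional: query the modem again to confirm that it is now disabled (module index 0 assumed, as above)
mmcli -m 0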

NOTE If you only stop the WWAN 5G module container, the 5G network is still available because the
5G module is still enabled and has an active connection.

12. Stop the WWAN 5G module container:

docker stop amr-wwan-5g-modem

Troubleshooting
• If you see an error message that docker-compose fails with some variables not defined, add the
environment variables to .bashrc so that they are available to all terminals:

export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
export DOCKER_HOSTNAME=$(hostname)
export DOCKER_USER_ID=$(id -u)
export DOCKER_GROUP_ID=$(id -g)
export DOCKER_USER=$(whoami)
# Check with command
env | grep DOCKER
• During execution of wwan_module_install.sh, the following errors may be encountered:

INFO: Checking git dependencies...


ERROR: git config user.name not present
ERROR: Invalid git configuration
Aborting...
To fix this, add your username and email address to the git configuration:

git config --global user.name "yourname"
git config --global user.email "youremail"


• During execution of network_init.sh, the following error may be encountered:

INFO: Simple Connect...


mmcli -m 0 --simple-connect=apn=internet, ip-type=ipv4v6
error: couldn't connect the modem: 'Timeout was reached'
In network_init.sh’s log, check the module status. If state is enabled, reboot the EI for AMR host,
and repeat Build the 5G Module Kernel Modules.

Status | lock: sim-pin2


| unlock retries: sim-pin2 (3)
| state: enabled
| power state: on
| signal quality: 41% (cached)
• For general robot issues, go to: Troubleshooting for Robot Tutorials.

Change Existing and Add New Docker* Images to the EI for AMR SDK

This tutorial covers:


• Creating custom Docker* images by adding or removing components in the Docker* files provided in the
SDK
• Creating completely new Docker* images by selecting components from the Docker* files provided in the
SDK

NOTE Building should be done on a development machine with at least 16 GB of RAM. Building
multiple Docker* images, in parallel, on a system with 8 GB of RAM may end in a crash due to lack of
system resources.

Create a Customized Docker* Image


1. In the SDK root folder, go to the Docker* environment folder:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
01_docker_sdk_env/
There are two main Docker* files here:
• dockerfile.amr: For images to be deployed on robots
• dockerfile.edge-server: For images to be deployed on the server
These Docker* files create multiple Docker* images. The Docker* images to be created are defined in
the yaml configuration file, which is in the docker_compose folder.

Also, these Docker* files include many sub-files in the docker_stages folder. Each Docker* stage
represents a specific component that must be included in one of the Docker* images.

# docker_compose and docker_stages folders and dockerfiles are found as below.


ls -a ./
. .. artifacts docker_compose docker_orchestration docker_stages dockerfile.amr
dockerfile.edge-server
2. Go to the docker_stages folder, and choose the dockerfile.stage.* you want to modify:

cd docker_stages
3. Open the Docker* file from the environment folder in your preferred integrated development
environment (IDE), and append component-specific installation instructions in the appropriate place.
The following is an example of appending the Gazebo* application in dockerfile.stage.realsense.

Give appropriate permissions:

chmod 777 dockerfile.stage.realsense


... <original code from dockerfile.stage.realsense>

################################# Gazebo app START #####################################


RUN sh -c 'echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable `lsb_release -cs` main" > /etc/apt/sources.list.d/gazebo-stable.list' \
    && wget https://packages.osrfoundation.org/gazebo.key -O - | apt-key add - \
    && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -q -y \
       ros-${ROS_DISTRO}-gazebo-ros-pkgs \
    && rm -rf /var/lib/apt/lists/*
################################# Gazebo app END #####################################
4. Build the Docker* image with the modified Docker* file:
Choose the appropriate Docker Compose* target from the docker_compose folder, so that your
particular target Docker* image is built.
For example, to build Intel® RealSense™:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_202*/AMR_containers
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
docker-compose -f 01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml build realsense

NOTE Building should be done on a development machine with at least 16 GB of RAM. Building
multiple Docker* images, in parallel, on a system with 8 GB of RAM may end in a crash due to lack of
system resources.
Building on the People’s Republic of China (PRC) network may result in multiple issues. See the
Troubleshooting section for more details.

5. To see the details of the built images:

docker images | grep -i amr-


docker images | grep -i edge-server-
6. Run the required, newly-installed application from within the container (see turtlesim for details).

Create New Docker* Images with Selected Applications from the SDK
In this tutorial, you install an imaginary component as a new image and add it to the existing ros2-foxy-
sdk image.
1. Add a new file called docker_stages/01_amr/dockerfile.stage.imaginary.

Add instructions to install this component into this file using basic Docker* file syntax:

# Note: below repo does not exist, it is for demonstration purposes only
WORKDIR ${ROS2_WS}
RUN cd src \
&& git clone --branch ros2 https://github.com/imaginary.git \
&& cd imaginary && git checkout <commit_id> \
&& source ${ROS_INSTALL_DIR}/setup.bash \
&& colcon build --install-base ${ROS2_WS}/install \
&& rm -rf ${ROS2_WS}/build/* ${ROS2_WS}/src/* ${ROS2_WS}/log/*


2. Include this new Docker* file in dockerfile.amr or dockerfile.edge-server at an appropriate location.
You need to create a separate stage for this new Docker* file so that a separate image can be created
using that stage name. For example, append this to dockerfile.amr:

################################### imaginary stage START ######################################


FROM ros-base AS imaginary
INCLUDE+ docker_stages/01_amr/dockerfile.stage.imaginary
INCLUDE+ docker_stages/01_amr/dockerfile.stage.entrypoint
################################### imaginary stage END ########################################
3. If you need to add this new component to other images also, add it inside any stage.
For example, to add imaginary to the ros2-foxy-sdk stage:

################################# ros2-foxy-SDK stage START ####################################


FROM ros-base AS ros2-foxy-sdk
INCLUDE+ docker_stages/common/dockerfile.stage.tools-dev

####### add new component on appropriate place in this block #######


INCLUDE+ docker_stages/01_amr/dockerfile.stage.imaginary

INCLUDE+ docker_stages/01_amr/dockerfile.stage.vda5050
INCLUDE+ docker_stages/01_amr/dockerfile.stage.opencv
INCLUDE+ docker_stages/01_amr/dockerfile.stage.rtabmap
INCLUDE+ docker_stages/01_amr/dockerfile.stage.fastmapping
INCLUDE+ docker_stages/01_amr/dockerfile.stage.gazebo
INCLUDE+ docker_stages/01_amr/dockerfile.stage.gstreamer
INCLUDE+ docker_stages/01_amr/dockerfile.stage.kobuki
INCLUDE+ docker_stages/01_amr/dockerfile.stage.nav2
INCLUDE+ docker_stages/01_amr/dockerfile.stage.realsense
INCLUDE+ docker_stages/01_amr/dockerfile.stage.ros-arduino
INCLUDE+ docker_stages/01_amr/dockerfile.stage.ros1-bridge
INCLUDE+ docker_stages/01_amr/dockerfile.stage.rplidar
INCLUDE+ docker_stages/01_amr/dockerfile.stage.turtlebot3
INCLUDE+ docker_stages/01_amr/dockerfile.stage.turtlesim
# simulations has a hard dependency on nav2 (@todo:), so we cannot create a separate image for simulations without nav2.
INCLUDE+ docker_stages/01_amr/dockerfile.stage.simulations
INCLUDE+ docker_stages/01_amr/dockerfile.stage.entrypoint
################################# ros2-foxy-SDK stage END #######################################
4. Define a new target in the docker_compose/01_amr/amr-sdk.all.yml or docker_compose/
01_amr/edge-server.all.yml file:

imaginary:
image: ${REPO_URL}amr-ubuntu2004-ros2-foxy-imaginary:${DOCKER_TAG:-latest}
container_name: ${CONTAINER_NAME_PREFIX:-amr-sdk-}imaginary
extends:
file: ./amr-sdk.all.yml
service: ros-base
build:
target: imaginary
network_mode: host
command: ['echo imaginary run finished.']
5. Build two Docker* images:
• amr-ubuntu2004-ros2-foxy-imaginary
• amr-ubuntu2004-ros2-foxy-sdk

These images contain the new imaginary component.

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_202*/AMR_containers
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
docker-compose -f 01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml build imaginary ros2-foxy-sdk
6. To see the details of the built image:

docker images | grep -i amr-


docker images | grep -i imaginary
7. To verify that the imaginary component is part of created Docker* image:

docker history amr-ubuntu2004-ros2-foxy-imaginary


8. Run the required, newly-installed application from within the container (see turtlesim for details).

Troubleshooting
1. Building on the People’s Republic of China (PRC) Open Network.
Building Docker* images on the People’s Republic of China (PRC) open network may fail. Intel
recommends updating these links with their corresponding PRC mirrors. To do this, go to the
AMR_containers folder, and update the broken sites with the default or user-defined mirrors.

cd Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
chmod 775 changesources.sh
./changesources.sh -d .
Enter mirror server ('https://example.com' format) or leave empty to use the default value.
Git mirror [https://github.com.cnpmjs.org]:
Apt mirror [http://mirrors.bfsu.edu.cn]:
Pip mirror [https://opentuna.cn/pypi/web/simple/]:
Raw files mirror [https://raw.staticdn.net]:
2. Building on a limited resource system (8 GB of RAM or less) can be problematic.
Perform the following steps to minimize issues:
a. Save the output in a file instead of printing it, because printing consumes RAM resources.

nohup time docker-compose -f 01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml build --parallel > eiforAMR.txt &

NOTE Edge Insights for Autonomous Mobile Robots comes with prebuilt images and building all
images is not required. Only do this step if you want to regenerate all images.

b. For remote connections, use an ssh connection instead of a VNC one as VNC connection consumes
resources.
c. For building multiple Docker* images, do not use the --parallel option as it requires more
resources.

nohup time docker-compose -f 01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml build > eiforAMR.txt &

NOTE Edge Insights for Autonomous Mobile Robots comes with prebuilt images and building all
images is not required. Only do this step if you want to regenerate all images.

Troubleshooting for Robot Tutorials


Unable to Play ROS 1 Bags


To play a ROS 1 bag in our ROS 2 environment, do the following.
1. Prepare the environment, and start full-sdk:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers/
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=35
docker-compose -f 01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml run --name full-flavour-sdk full-sdk bash
2. In a different terminal, attach to this Docker* image:

docker exec -it full-flavour-sdk bash


unset ROS_DISTRO
source /opt/ros/noetic/setup.bash
source /opt/ros/foxy/setup.bash
3. Set ROS_DOMAIN_ID to the domain where you want to publish the data, download the ROS 1 bag, and play it:

export ROS_DOMAIN_ID=35
wget <ROS 1 BAG>
ros2 bag play -s rosbag_v2 <ROS 1 BAG>

Permission Denied Error


For a permission denied error when running a script:

$ ./run_interactive_docker.sh eiforamr-full-flavour-sdk:<TAG> eiforamr


bash: ./run_interactive_docker.sh Permission denied
Give executable permission to the script:

chmod 755 run_interactive_docker.sh

DISPLAY Environment Variable Error


For errors related to the DISPLAY environment variable when trying to open the Docker* container or a GUI
application, enter the command:

echo $DISPLAY
If this variable is empty, it causes issues when opening applications that need a GUI.
The most common solution is to give it the 0:0 value:

export DISPLAY="0:0"


If the connection with the system is via VNC, DISPLAY should be already set.

If it is not, find out the value of DISPLAY set by vncserver and then set the correct value:

For example:

ps ax |grep vncserver
/usr/bin/Xtigervnc :42 -desktop ....
/usr/bin/perl /usr/bin/vncserver -localhost no -geometry 1920x1000 -depth 24 :42

export DISPLAY=":42"

Use ROS_DOMAIN_ID to Avoid Interference in ROS Messages
A typical method to demonstrate a use case requires you to start a container (or group of containers) and
exchange ROS messages between various ROS nodes. However, interference from other ROS nodes can
disrupt the whole process. For example, you might receive ROS messages from unknown nodes that are not
intended for the demo use case. These other nodes could be on the same host machine or on other host
machines within the local network. In this scenario, it can be difficult to debug and resolve the interference.
You can avoid this by declaring ROS_DOMAIN_ID as a fixed numeric value per use case, under the following
conditions:
• The ROS_DOMAIN_ID should be the same for all containers launched for a particular use case.
• The ROS_DOMAIN_ID should be an integer between 0 and 101.
• After launching the container, you can declare it with:

export ROS_DOMAIN_ID=<value>
For more information, go to: ROS_DOMAIN_ID
To add the ROS_DOMAIN_ID, you can choose any of the below options.

1. Add it in the common.yml file for all containers:

# In file 01_docker_sdk_env/docker_compose/common/common.yml
# ROS_DOMAIN_ID can be added that applies to all use cases
services:
common:
environment:
ROS_DOMAIN_ID: <choose ID>
2. Add it in the .env file for all containers:

# In file 01_docker_sdk_env/docker_compose/01_amr/.env
# add below line and provide ROS_DOMAIN_ID
ROS_DOMAIN_ID=<choose ID>
3. Add it in the specific yml file for a specific use case for specific targets:

# In the below example, ROS_DOMAIN_ID is added in ros-base target


# For any use case where this target is used, the ROS_DOMAIN_ID is set to the given value.

services:

ros-base:
image: ${REPO_URL}amr-ubuntu2004-ros2-foxy-ros-base:${DOCKER_TAG:-latest}
container_name: ${CONTAINER_NAME_PREFIX:-amr-sdk-}ros-base
environment:
ROS_DOMAIN_ID: <choose ID>
env_file:
- ./.env
extends:
4. Add it in the specific yml file in the command: section and apply only after launching the containers:

# In file 01_docker_sdk_env/docker_compose/05_tutorials/
fleet_mngmnt_with_low_battery.up.tutorial.yml
# In the below example, ROS_DOMAIN_ID is set to 58
# You may change it to any new value as per use case requirement.
services:

battery_bridge:
image: ${REPO_URL}amr-ubuntu2004-ros2-foxy-battery_bridge:${DOCKER_TAG:-latest}
container_name: ${CONTAINER_NAME_PREFIX:-amr-sdk-}battery_bridge
extends:
file: ../01_amr/amr-sdk.all.yml


service: ros-base
volumes:
- /dev/battery_bridge:/dev/battery_bridge:rw
build:
target: battery_bridge
network_mode: host
restart: "no"
command:
- |
source ros_entrypoint.sh
source battery-bridge/src/prebuilt_battery_bridge/local_setup.bash
export ROS_DOMAIN_ID=58
sleep 5
ros2 run battery_pkg battery_bridge
5. Add it while running a container using the run_interactive_docker.sh script:

# by adding env parameter, ROS_DOMAIN_ID can be exported inside container:


./run_interactive_docker.sh <image name> <user> --extra_params "-e ROS_DOMAIN_ID=<choose ID>"

NOTE You can use any number between 0 and 101 (inclusive) to set ROS_DOMAIN_ID, as long as it is
not used by a different ROS system.

Be aware that you can also use these options to modify other environment variables.

System HOME Directory Issues


If your test system uses $HOME mounted in remote volumes, for example, in a network file system (NFS),
you may encounter the error below when you try to run a Docker* image using the ./
run_interactive_docker.sh script:

docker: Error response from daemon: error while creating mount source path '/nfs/site/home/
<user>': mkdir /nfs/site/home/<user>: file exists.
To avoid this, before you run a Docker* image, create a new directory in /tmp (or any locally mounted
volume), and set $HOME to the new path:

mkdir /tmp/tmp_home
export HOME=/tmp/tmp_home
./run_interactive_docker.sh eiforamr-full-flavour-sdk:<release_tag> eiforamr

EI for AMR Robot Orchestration Tutorials


Prerequisite: Follow the instructions in Get Started Guide for Robot Orchestration (make sure that your target
system meets the applicable Recommended Hardware requirements).

Device Onboarding End-to-End Use Case

This tutorial describes how to:


1. Onboard the Fast IDentity Online (FIDO) device (the Robot).
2. Register the Robot in ThingsBoard*.
3. Set up a secure TLS connection for communication.
4. Load specified applications (containers) to the EI for AMR device.
These machines are used:


• The Intel® Smart Edge Open control plane which deploys ThingsBoard* to the edge node (The
ThingsBoard* GUI is accessed with the control plane IP and mapped port.)

NOTE In a Single-Node deployment, ThingsBoard* is installed on the same machine as the control
plane.
In a Multi-Node deployment, ThingsBoard* is installed on an edge node, not the control plane.

• The EI for AMR Robot that you want to onboard, which executes amr-fdo-client in terminal 1

NOTE The diagram only shows two robots but you can add as many as you need.

• The FDO server which executes the manufacturer, rendezvous and owner servers
• edge-server-fdo-manufacturer on terminal 1
• edge-server-fdo-owner on terminal 2
• edge-server-fdo-rendezvous on terminal 3
• terminal 4 for configuration and control


NOTE The FDO server can be on any machine in the same network as the control plane. In this
tutorial, the FDO server is on an edge node.

The Onboarding Flow


In this flow, the FIDO device, or FDO client, is the Robot.

1. The FDO owner sends the FDO script, fileserver access, and filelist to the robot in the field to be
onboarded.
2. The FDO client saves and starts the FDO script.
3. FDO loads and stores files from FileServer.
4. FDO registers the device in ThingsBoard* and writes the Intel® In-Band Manageability
configuration.
a. FDO provisions each new device.

Device naming convention:

<SEO 'tier' label value>_<SEO 'environment' label value>_<IP Address of Device>_<Hostname of Device>_<MAC Address of Device>
Example:

BasicFleetManagement_tutorial_127.0.0.1_noop_00005E0053EA
b. FDO saves the Intel® In-Band Manageability configuration and certification files in the host file
system.
5. FDO registers the device in Intel® Smart Edge Open and gets the token and hash.
6. FDO starts the Intel® Smart Edge Open install script.
7. Intel® Smart Edge Open deploys all configured containers, including Intel® In-Band Manageability, and
brings them up.
8. When ThingsBoard* receives a new device online event, ThingsBoard* triggers a firmware and OS
update. After completion, the power recycles.

Prerequisites
You must do all sections of this tutorial in order.
Configure the edge with the Get Started Guide for Robot Orchestration.
Verify that the robot has a product name.

dmidecode -t system | grep Product


If the robot does not have a product name, the onboarding flow fails because this information is required
when configuring the OTA update. To assign a name, complete the following steps.
1. Prepare for the Intel® RealSense™ camera firmware update.
a. Download the firmware from https://dev.intelrealsense.com/docs/firmware-releases.
b. Place the .bin file that contains the firmware in a .tar.gz archive. Make sure that you do not
archive the entire directory, only the .bin file.
c. Set up a basic HTTP server, and upload the .tar.gz on it as a trusted repository server:

a. Install the apache2:

sudo apt update


sudo apt install apache2
b. Put the RealSense .bin file inside a .tar.gz, and place it on an HTTP server:

tar -czvf rs_12_15.tar.gz Signed_Image_UVC_5_12_15_50.bin


sudo cp rs_12_15.tar.gz /var/www/html/
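# Optional: verify that the archive is reachable over HTTP (replace <hostname> with your server's address)
wget http://<hostname>/rs_12_15.tar.gz -O /tmp/rs_12_15.tar.gz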
2. On ThingsBoard*, open Rule Chain.
3. Open Form_Config_Update, and, on line 15, update the URL of the HTTP host that has the new firmware.


4. Open Form_POTA, and, on line 15, update the following.

a. The entire HTTP URL with the .tar.gz file for the firmware file.

NOTE The link should be similar to http://<hostname>/<archive.tar.gz>

b. The Manufacturer, Vendor, and the Product name with the output of the following commands.
Execute these commands on the robot.

dmidecode -t system | grep Product


dmidecode -t system | grep Manufacturer
dmidecode -t bios | grep Vendor

NOTE Updating the Manufacturer, Vendor, and Product name needs to be done every time you
onboard a new type of robot. If these values do not match the ones from the robot trying to onboard,
the flow fails.

5. Save all changes.

Configure the Robot and the FDO Server for the Onboarding Flow
1. Robot and FDO server Download and install the needed scripts from the latest release.

NOTE These steps only install certain modules (Docker* Community Edition (CE) and Docker Compose*) and the set of scripts needed for this onboarding tutorial. These steps do not install the full Robot Complete Kit bundle on your Robot.

a. Go to the Product Download page.


b. Select:
• For Robot, Robot Complete Kit.
• For FDO server, Server Complete Kit.
c. Click Download.
d. Copy the zip file to your target machine.
e. Extract and install the software:

unzip edge_insights_for_amr.zip
cd edge_insights_for_amr
chmod 775 edgesoftware
export no_proxy="127.0.0.1/32,devtools.intel.com"
./edgesoftware download
./edgesoftware list

NOTE Get the IDs for Docker* Community Edition (CE) and Docker Compose*:

./edgesoftware update <ID_Docker Community Edition CE> <ID_Docker Compose>


sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
source /etc/environment
f. Configure password-less ssh access for root:

• Edit /etc/ssh/sshd_config:

sudo nano /etc/ssh/sshd_config


• Add the following line at the end of the file:

PermitRootLogin yes
• Restart the ssh service:

sudo service ssh restart


sudo su
service ssh restart
ssh-keygen
exit
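As a quick sanity check (a sketch, assuming the default key location), confirm that the root key pair was generated and that the new sshd setting is active:

sudo ls /root/.ssh/
sudo sshd -T | grep -i permitrootlogin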


2. FDO server All images in the FDO pipeline are self-contained and require minimal configuration.
Configuration settings are all handled by external environment files, but some environment files need to
be generated by running the fdo_keys_gen.sh script:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/
AMR_server_containers/01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/
chmod +x fdo_keys_gen.sh
bash fdo_keys_gen.sh .
3. Robot Install the Battery Bridge Kernel Module.

cd components/amr_battery_bridge_kernel_module/src/
chmod a+x module_install.sh
# The following command installs the battery-bridge-kernel-module:
sudo ./module_install.sh
# To uninstall the battery-bridge-kernel-module (if needed):
sudo ./module_install.sh -u
The Battery Bridge Kernel Module does not work on Secure Boot machines. To disable UEFI Secure
Boot:
a. Go to the BIOS menu.
b. Open Boot > Secure Boot.
c. Disable Secure Boot.
d. Save the new configuration, and reboot the machine.

NOTE When the robot uses an actual battery, the robot's sensor driver provides a ROS interface that writes the battery status to the generic ros2 topic interface /sensors/battery_state. However, this information is usually not propagated to the generic OS interface /sys/class/power_supply, so components that interact with the OS directly (for example, Intel® In-Band Manageability) cannot get battery information from the OS. To bridge this gap, a ROS battery-bridge component and a battery-bridge-kernel-module are provided. Using the battery-bridge, the battery status is transmitted via the kernel module to the standard OS interface /sys/class/power_supply. The kobuki driver and kobuki_ros_interfaces are proven to work with the battery-bridge and battery-bridge-kernel-module components.
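Once the containers are up, an optional way to confirm that battery data reaches both interfaces described above (a sketch; it assumes a sourced ROS 2 environment and a standard capacity attribute exposed by the bridge) is:

ros2 topic echo /sensors/battery_state
cat /sys/class/power_supply/*/capacity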

4. Robot Set the robot type by adding your robot type to /etc/robottype. The supported values are
amr-aaeon and amr-pengo. Example:

echo "amr-aaeon" | sudo tee /etc/robottype


5. Robot Run the following command on the client host:

sudo apt-get update && DEBIAN_FRONTEND=noninteractive sudo apt-get install --no-install-recommends -q -y \
    software-properties-common \
    && if [[ -z "$http_proxy" ]] ; then \
         sudo apt-key adv --keyserver keys.gnupg.net --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE; \
       else \
         sudo apt-key adv --keyserver keys.gnupg.net --keyserver-options http-proxy="${http_proxy}" --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE; \
       fi \
    || if [[ -z "$http_proxy" ]] ; then \
         sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE; \
       else \
         sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --keyserver-options http-proxy="${http_proxy}" --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE; \
       fi \
    && sudo add-apt-repository "deb https://librealsense.intel.com/Debian/apt-repo focal main" -u \
    && DEBIAN_FRONTEND=noninteractive sudo apt-get install --no-install-recommends -q -y \
         rsync \
         librealsense2=2.50.* \
         librealsense2-utils=2.50.* \
         librealsense2-dev=2.50.* \
         librealsense2-gl=2.50.* \
         librealsense2-net=2.50.* \
         librealsense2-dbg=2.50.* \
         librealsense2-udev-rules=2.50.* \
    && sudo rm -rf /var/lib/apt/lists/*
sudo dpkg --configure -a
sudo mkdir -p /var/cache/manageability/repository-tool/sota
6. Robot Disable swap:

sudo sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
sudo swapoff -a
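To verify that swap is disabled and stays disabled after a reboot, check that swapon reports nothing and that the fstab entry is commented out:

swapon --show
grep swap /etc/fstab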

Prepare the Environment Needed to Build the FDO Docker* Images


These steps must be re-executed whenever a terminal is restarted.
1. Robot

export DISPLAY=0:0
cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/
export no_proxy=<no_proxy>,ip_from_fdo_server,ip_from_robot,localhost
sudo su
source ./AMR_containers/01_docker_sdk_env/docker_compose/common/docker_compose.source
2. FDO server all terminals

export DISPLAY=0:0
cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/
source ./AMR_server_containers/01_docker_sdk_env/docker_compose/common/docker_compose.source

NOTE Set up the environment on every terminal on which you want to run docker-compose
commands.

3. FDO server terminal 1 Get the DNS:

sudo cat /run/systemd/resolve/resolv.conf


4. Robot Set the IP of the FDO server and the serial number of the robot.
Before building the FDO client image, there are a variety of configuration flags that need to be
adjusted.
Important This step needs to be done for each robot you add to the cluster. You must use a unique serial number for each robot. These serial numbers are used later, when configuring the FDO server in step 8 of Onboard.
a. Open AMR_containers/01_docker_sdk_env/artifacts/01_amr/amr_fdo/device.config:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
nano 01_docker_sdk_env/artifacts/01_amr/amr_fdo/device.config


b. Add the following lines:

MANUFACTURER_IP_ADDRESS = ip_from_FDO_Server
c. For onboarding multiple robots, use a unique serial number for the DEVICE_SERIAL_NUMBER
variable.
This value must be unique for each robot that you onboard. Therefore, the default serial number,
1234abcd, can only be used once.

DEVICE_SERIAL_NUMBER = <unique_serial_number>
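After both changes, device.config should contain entries similar to the following (192.0.2.10 and robot-0001 are placeholder values for illustration only):

MANUFACTURER_IP_ADDRESS = 192.0.2.10
DEVICE_SERIAL_NUMBER = robot-0001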

Build FDO Docker* Images


1. Robot Build the fdo-client image:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/AMR_containers
docker-compose -f ./01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml build fdo-client
2. FDO server terminal 1 Build the FDO manufacturer server image:
Before building the FDO manufacturer image, there are a variety of configuration flags that need to be
adjusted.
a. Open 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/manufacturer/
service.yml:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/
AMR_server_containers
nano 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/manufacturer/service.yml
b. Add the following lines:

# Modify the values shown below in the service.yml file with the respective DNS and IP address of the Rendezvous server
rv-instruction:
dns: dns_from_step_4
ip: ip_from_FDO_Server
c. Build the manufacturer server image:

docker-compose -f ./01_docker_sdk_env/docker_compose/02_edge_server/edge-server.all.yml build


fdo-manufacturer
3. FDO server terminal 2 Build the owner server image:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/
AMR_server_containers
docker-compose -f ./01_docker_sdk_env/docker_compose/02_edge_server/edge-server.all.yml build
fdo-owner
4. FDO server terminal 3 Build the rendezvous server image:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/
AMR_server_containers
docker-compose -f ./01_docker_sdk_env/docker_compose/02_edge_server/edge-server.all.yml build
fdo-rendezvous
See Troubleshooting if docker-compose errors are encountered.
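Optionally, confirm on the FDO server that the server images were built by listing them (the exact repository names depend on your build configuration):

docker images | grep -i fdo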

Initialize FDO
1. FDO server - terminal 4 Adjust the Python script for your setup.

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/
AMR_server_containers/
nano 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/scripts/sdo_script.py

a. For DEF_TB_MQTT_PORT, replace 1883 with 18883.
b. For network:

• Replace 0.0.0.0 with your proxy IP.


• If you use a hostname for a proxy, get the proxy IP:

telnet proxy_hostname proxy_port


• Leave it as 0.0.0.0 if no proxy is required.
c. file_server
a. For host, replace xx.xxx.xx.x with SFTP hostname or IP.
b. For user, replace someone with the SFTP username.
c. For password, replace pass with the SFTP password.
d. For fingerprint, replace :

|1|pYOofp22FlwwWNHH+vaK8gWhSxw=|S713N4hkiSRJCzfJQgqMfaYTJWw= ecdsa-sha2-nistp256
AABBE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFv3xFkoWZuALLa/iH8fLBK5ciKkvep
+61DAGEBSiORQbPxUtvBo0qbi14/N+KD58YEkWrrzlQIEsp/minlSVKE=
With the output of the following command:

ssh-keyscan -t ecdsa [host | addrlist namelist]


d. thingsboard
a. For host, replace xx.xxx.xx.x with the control plane IP.
b. For http_port, replace 9090 with 32764.
c. For sec_mqtt_port, replace 8883 with 32767.
d. The value for device_key stays 9oq7uxtdsgt4yjyqdekg.
e. The value for device_secret stays 6z3j3osphpr8ck1b9ocp.

The values for device_key and device_secret are obtained from the ThingsBoard* web
interface. Go to Thingsboard > Device Profiles > Device Profiles details > Device Provisioning.
In preconfigured data, the following are set in ThingsBoard*:

device_key = "9oq7uxtdsgt4yjyqdekg"
device_secret = "6z3j3osphpr8ck1b9ocp"
e. seo
a. For host, replace xx.xxx.xx.xxx with the control plane IP.
b. For crt_hash, replace fd6d98ee914f5e08df1858b2e82e1ebacbcf35cae0ddd7e146ec18fa200a265b with the output of the following commands on the control plane:

cd /etc/kubernetes/pki/
openssl x509 -pubkey -in ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
f. sftp_filelist
a. In the fdo_sftp/etc/docker/certs.d line, replace 10.237.22.133 with the IP of the
control plane.
b. Add / at the beginning of every line after "file":".

After you make the changes, it should look similar to this:

sftp_filelist = '[ {"file":"/fdo_sftp/thingsboard.pub.pem","path":"/etc/tc" },\
    {"file":"/fdo_sftp/pki/ca.crt","path":"/host/etc/kubernetes/pki" },\
    {"file":"/fdo_sftp/pki/apiserver-kubelet-client.crt","path":"/host/etc/kubernetes/pki" },\
    {"file":"/fdo_sftp/pki/apiserver-kubelet-client.key","path":"/host/etc/kubernetes/pki" },\
    {"file":"/fdo_sftp/root/.docker/config.json","path":"/host/root/.docker/" },\
    {"file":"/fdo_sftp/etc/docker/daemon.json","path":"/host/etc/docker/" }, \
    {"file":"/fdo_sftp/etc/docker/certs.d/<Replace here with Control Plane IP>:30003/ca.crt","path":"/host/etc/docker/certs.d/<Replace here with Control Plane IP>:30003" },\
    {"file":"/fdo_sftp/etc/systemd/system/docker.service.d/http-proxy.conf","path":"/host/etc/systemd/system/docker.service.d" },\
    {"file":"/fdo_sftp/seo_install.sh","path":"/host/root" },\
    {"file":"/fdo_sftp/k8s_apply_label.py","path":"/host/root" },\
    {"file":"/fdo_sftp/etc/amr/ri-certs/server.pem","path":"/host/etc/amr/ri-certs" },\
    {"file":"/fdo_sftp/etc/amr/ri-certs/client.key","path":"/host/etc/amr/ri-certs" },\
    {"file":"/fdo_sftp/etc/amr/ri-certs/client.pem","path":"/host/etc/amr/ri-certs" }]'
2. FDO server terminal 4 Edit 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/
scripts/multi_machine_config.sh:

nano 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/scripts/multi_machine_config.sh
a. Assign the value from 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/
creds/manufacturer/service.env to the variable mfg_api_passwd.

cat 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/creds/manufacturer/service.env
b. Assign the value from 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/
creds/owner/service.env to the variable default_onr_api_passwd.

cat 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/creds/owner/service.env
c. Replace {rv-dns} with the FDO server DNS.
d. Replace {owner-dns} with the FDO server DNS.
e. Replace {rv-ip} with the FDO server IP.
f. Replace {owner-ip} with the FDO server IP.
g. Replace http://localhost:8042 and http://localhost:8039 in the two curl commands with http://FDO_SERVER_IP:8042 and http://FDO_SERVER_IP:8039, respectively.
Example (without the curly brackets):

mfg_api_passwd={manufacturer_api_password_from_service.env}
onr_api_passwd={owner_api_password_from_service.env}
.......................................................
# Updating RVInfo blob in Manufacturer
# Replace localhost, {rv-dns} and {rv-ip} references with respective DNS and IP address of the
host machine
curl -D - --digest -u "${api_user}":"${mfg_api_passwd}" --location --request POST 'http://
<ip_from_FDO_SERVER>:8039/api/v1/rvinfo' \
--header 'Content-Type: text/plain' \
--data-raw '[[[5,"dns"],[3,8040],[12,1],[2,"ip_from_FDO_SERVER"],[4,8040]]]'

# Updating T02RVBlob in Owner


# Replace localhost, {owner-ip} and {owner-dns} references with respective DNS and IP address of
the host machine
curl -D - --digest -u "${api_user}":"${onr_api_passwd}" --location --request POST 'http://
<ip_from_FDO_SERVER>:8042/api/v1/owner/redirect' \
--header 'Content-Type: text/plain' \
--data-raw '[["ip_from_FDO_SERVER","dns",8042,3]]'
3. FDO server terminal 3 Edit 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/
scripts/extend_upload.sh, and set the following variables:

nano 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/scripts/extend_upload.sh

a. Assign the value from 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/
creds/manufacturer/service.env to the variable default_mfg_api_passwd.

cat 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/creds/manufacturer/service.env
b. Assign the value from 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/creds/owner/service.env to the variable default_onr_api_passwd.

cat 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/creds/owner/service.env
c. Assign the FDO server IP to the variable default_mfg_ip.
d. Assign the FDO server IP to the variable default_onr_ip.

Example:

default_mfg_ip="<ip_from_FDO_SERVER>"
default_onr_ip="<ip_from_FDO_SERVER>"
...........................
default_mfg_api_passwd="<manufacturer_api_password_from_service.env>"
default_onr_api_passwd="<owner_api_password_from_service.env>"
4. FDO server terminal 3 Edit 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/
scripts/configure_serviceinfo.sh, and set the following variables:

nano 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/scripts/configure_serviceinfo.sh
a. Assign the FDO server IP to the variable OWNER_IP.

Onboard
FDO is a new IoT standard that is built on the Intel® Secure Device Onboard (Intel® SDO) specifications. It is the first step in onboarding a device. The FDO specification defines four entities.
• Device: the EI for AMR device plus the FDO client (the FDO client supports the FDO protocol)
• Manufacturer Server: the entity that is responsible for the initial steps of the FDO protocol and loading
credentials onto the device, and is also a part of the production flow of the EI for AMR device
• Owner Server: the entity that sends all required data (for example, keys and certificates) to the device in
the final protocol step TO2
• Rendezvous Server: the first contact point for the device after you switch the device on and configure it
for network communication. The rendezvous server sends the device additional information, for example,
how to contact the owner server entity.
All containers, including the client, follow this command structure:

docker-compose -f <.yml path used during build stage> up <fdo service name>
1. FDO server terminal 1 Run the manufacturer server:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/
AMR_server_containers/
docker-compose -f 01_docker_sdk_env/docker_compose/02_edge_server/edge-server.all.yml up fdo-
manufacturer
2. FDO server terminal 2 Run the owner server:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/
AMR_server_containers/
docker-compose -f 01_docker_sdk_env/docker_compose/02_edge_server/edge-server.all.yml up fdo-
owner
3. FDO server terminal 3 In a new terminal window, run the rendezvous server:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/
AMR_server_containers/
docker-compose -f 01_docker_sdk_env/docker_compose/02_edge_server/edge-server.all.yml up fdo-
rendezvous


4. FDO server terminal 4 Add rules for the following ports:

ufw allow 8039


ufw allow 8040
ufw allow 8042
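To confirm that the firewall rules are active:

ufw status | grep -E '8039|8040|8042'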
5. Robot Run the client:

cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_*/
AMR_server_containers/
sudo su
export no_proxy=<no_proxy>,ip_from_FDO_SERVER,ip_from_ROBOT,localhost
source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
export ROS_DOMAIN_ID=17
CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/
fdo_client_onboard.yml up
After running the FDO client for the first time, the device initialization is complete:

FDO Client log snippet:

amr-sdk-fdo-client | 09:56:55:433 FDOProtDI: Received message type 13 : 1 bytes


amr-sdk-fdo-client | 09:56:55:433 Writing to Normal.blob blob
amr-sdk-fdo-client | 09:56:55:433 Hash write completed
amr-sdk-fdo-client | 09:56:55:434 HMAC computed successfully!
amr-sdk-fdo-client | 09:56:55:434 Writing to Secure.blob blob
amr-sdk-fdo-client | 09:56:55:434 Generating platform IV of length: 12
amr-sdk-fdo-client | 09:56:55:434 Generating platform AES Key of length: 16
amr-sdk-fdo-client | 09:56:55:434 Device credentials successfully written!!
amr-sdk-fdo-client | (Current) GUID after DI: <GUID>
amr-sdk-fdo-client | 09:56:55:434 DIDone completed
amr-sdk-fdo-client | 09:56:55:434
amr-sdk-fdo-client | ------------------------------------ DI Successful
--------------------------------------
amr-sdk-fdo-client | @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
amr-sdk-fdo-client | @FIDO Device Initialization Complete@
amr-sdk-fdo-client | @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
amr-sdk-fdo-client exited with code 0

NOTE When starting FDO containers, start the FDO client image last because the FDO client immediately begins reaching out to the manufacturer server to complete device initialization (DI), and it only attempts this connection a few times before exiting. If the FDO client successfully connects to the manufacturer server, the manufacturer server assigns a GUID to the FDO client and generates an ownership voucher for use in the rest of the pipeline.

6. FDO server terminal 4 Run multi_machine_config.sh:

NOTE Run the FDO scripts on FDO server as root.

cd 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/scripts/
chmod +x *
sudo su
export no_proxy=<no_proxy>,ip_from_FDO_SERVER,ip_from_Robot,localhost
./multi_machine_config.sh

Expected output:

HTTP/1.1 401
WWW-Authenticate: Digest realm="Authentication required", qop="auth",
nonce="1652260953609:a1f80c513623b4c7b87292c054d5d650", opaque="4F6AB1DF45A94C67D59892BC7DB6B6B4"
Content-Type: text/html;charset=utf-8
Content-Language: en
Content-Length: 673
Date: Wed, 11 May 2022 09:22:33 GMT

HTTP/1.1 200
Content-Length: 0
Date: Wed, 11 May 2022 09:22:33 GMT

HTTP/1.1 401
WWW-Authenticate: Digest realm="Authentication required", qop="auth",
nonce="1652260953705:0e2856e16da3eb830dca777a34f1f154", opaque="E11DE6169652A5495FC93933790D1A04"
Content-Type: text/html;charset=utf-8
Content-Language: en
Content-Length: 673
Date: Wed, 11 May 2022 09:22:33 GMT

HTTP/1.1 200
Content-Length: 0
Date: Wed, 11 May 2022 09:22:33 GMT
7. FDO server terminal 4 Run the configure_serviceinfo.sh:

./configure_serviceinfo.sh
Expected output:

Upload Device execution script to Owner Server


HTTP/1.1 401
WWW-Authenticate: Digest realm="Authentication required", qop="auth",
nonce="1652941145981:e5cdb0c180cd069360cd159fdcadccde", opaque="BE4E73265635CC0D98F9430BABA64DBE"
Content-Type: text/html;charset=utf-8
Content-Language: en
Content-Length: 673
Date: Thu, 19 May 2022 06:19:05 GMT

HTTP/1.1 100

HTTP/1.1 200
Content-Length: 0
Date: Thu, 19 May 2022 06:19:05 GMT
8. FDO server terminal 4 Add the robot by using the serial number.

./extend_upload.sh -s <serial_number>
# By default, the serial number is 1234abcd; the expected output below assumes this serial number.
./extend_upload.sh -s 1234abcd  # use your robot's serial number

NOTE The serial number is the value of DEVICE_SERIAL_NUMBER from the 01_docker_sdk_env/artifacts/01_amr/amr_fdo/device.config file, set on the robot in Prepare the Environment Needed to Build the FDO Docker* Images.


Expected output:

Success in downloading SECP256R1 owner certificate to owner_cert_SECP256R1.txt


Success in downloading extended voucher for device with serial number 1234abcd
Success in uploading voucher to owner for device with serial number 1234abcd
GUID of the device is 7e1e0c59-6d87-4b40-b68d-e7fcc00a7e37
Success in triggering TO0 for 1234abcd with GUID 7e1e0c59-6d87-4b40-b68d-e7fcc00a7e37 with
response code: 200
xxxx@FDO_SERVER: 01_docker_sdk_env/artifacts/02_edge_server/edge_server_fdo/scripts$
9. FDO server terminal 2 In the edge-server-fdo-owner logs, verify that TO0 finished.

edge-server-fdo-owner | 06:49:50.463 [INFO ] TO0 completed for GUID: ...

NOTE This task can take more than three minutes.

10. Robot Run the FDO client again to complete onboarding:

CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/


fdo_client_onboard.yml up
11. Robot In the client messages, verify that FDO completed.

amr-fdo-client | ------------------------------------ TO2 Successful


--------------------------------------
amr-fdo-client |
amr-fdo-client | @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
amr-fdo-client | @FIDO Device Onboard Complete@
amr-fdo-client | @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
amr-fdo-client exited with code 0

NOTE FDO protocol steps TO1 and TO2 can take more than five minutes.

Expected result:
Control plane In the ThingsBoard* GUI, Robot was added in Devices as a new device.


NOTE The device appears online on the Dashboard after the Intel® In-Band Manageability container on the Robot is automatically brought up successfully.


Robot The wandering app is deployed from the Intel® Smart Edge Open controller, and the robot starts
to wander around.
12. Verify that the onboarding was successful by checking the following logs on the control plane:

$ kubectl get all --output=wide --namespace onboarding


NAME READY STATUS RESTARTS AGE
IP NODE NOMINATED NODE READINESS GATES
pod/onboarding-deployment-95f5dc897-44xpj 0/16 Pending 0 61m
<none> <none> <none> <none>
pod/onboarding-deployment-95f5dc897-8267z 0/16 Pending 0 61m
<none> <none> <none> <none>
pod/onboarding-deployment-95f5dc897-99svk 0/16 Pending 0 61m
<none> <none> <none> <none>

pod/onboarding-deployment-95f5dc897-j6t5j 0/16 Pending 0 61m
<none> <none> <none> <none>
pod/onboarding-deployment-95f5dc897-qd22f 16/16 Running 38 (4m15s ago) 61m
10.245.224.68 glaic3ehlaaeon2 <none> <none>

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE


SELECTOR
service/onboarding-service NodePort 10.105.68.202 <none> 8883:32759/TCP 61m
app.kubernetes.io/instance=onboarding-abcxzy,app.kubernetes.io/name=onboarding

NAME READY UP-TO-DATE AVAILABLE AGE


CONTAINERS

IMAGES

SELECTOR
deployment.apps/onboarding-deployment 0/5 5 0 61m dds-bridge,amr-
fleet-management,vda5050-ros2-bridge,amr-realsense,amr-ros-base-camera-tf,amr-aaeon-amr-
interface,amr-ros-base-teleop,amr-battery-bridge,amr-object-detection,imu-madgwick-filter,robot-
localization,amr-collab-slam,amr-fastmapping,amr-nav2,amr-wandering,amr-vda-navigator
10.237.22.198:30003/intel/eclipse/zenoh-bridge-dds:0.5.0-beta.9,10.237.22.198:30003/intel/amr-
fleet-management:2022.3,10.237.22.198:30003/intel/amr-vda5050-ros2-
bridge:2022.3,10.237.22.198:30003/intel/amr-realsense:2022.3,10.237.22.198:30003/intel/amr-ros-
base-camera-tf:2022.3,10.237.22.198:30003/intel/amr-aaeon-amr-
interface:2022.3,10.237.22.198:30003/intel/amr-ros-base-teleop:2022.3,10.237.22.198:30003/intel/
amr-battery-bridge:2022.3,10.237.22.198:30003/intel/amr-object-
detection:2022.3,10.237.22.198:30003/intel/amr-imu-madgwick-filter:2022.3,10.237.22.198:30003/
intel/amr-robot-localization:2022.3,10.237.22.198:30003/intel/amr-collab-
slam:2022.3,10.237.22.198:30003/intel/amr-fastmapping:2022.3,10.237.22.198:30003/intel/amr-
nav2:2022.3,10.237.22.198:30003/intel/amr-wandering:2022.3,10.237.22.198:30003/intel/amr-vda-
navigator:2022.3 app.kubernetes.io/instance=onboarding-abcxzy,app.kubernetes.io/name=onboarding

NAME DESIRED CURRENT READY AGE


CONTAINERS

IMAGES

SELECTOR
replicaset.apps/onboarding-deployment-95f5dc897 5 5 0 61m dds-
bridge,amr-fleet-management,vda5050-ros2-bridge,amr-realsense,amr-ros-base-camera-tf,amr-aaeon-
amr-interface,amr-ros-base-teleop,amr-battery-bridge,amr-object-detection,imu-madgwick-
filter,robot-localization,amr-collab-slam,amr-fastmapping,amr-nav2,amr-wandering,amr-vda-
navigator 10.237.22.198:30003/intel/eclipse/zenoh-bridge-dds:0.5.0-beta.9,10.237.22.198:30003/
intel/amr-fleet-management:2022.3,10.237.22.198:30003/intel/amr-vda5050-ros2-
bridge:2022.3,10.237.22.198:30003/intel/amr-realsense:2022.3,10.237.22.198:30003/intel/amr-ros-


base-camera-tf:2022.3,10.237.22.198:30003/intel/amr-aaeon-amr-
interface:2022.3,10.237.22.198:30003/intel/amr-ros-base-teleop:2022.3,10.237.22.198:30003/intel/
amr-battery-bridge:2022.3,10.237.22.198:30003/intel/amr-object-
detection:2022.3,10.237.22.198:30003/intel/amr-imu-madgwick-filter:2022.3,10.237.22.198:30003/
intel/amr-robot-localization:2022.3,10.237.22.198:30003/intel/amr-collab-
slam:2022.3,10.237.22.198:30003/intel/amr-fastmapping:2022.3,10.237.22.198:30003/intel/amr-
nav2:2022.3,10.237.22.198:30003/intel/amr-wandering:2022.3,10.237.22.198:30003/intel/amr-vda-
navigator:2022.3 app.kubernetes.io/instance=onboarding-abcxzy,app.kubernetes.io/
name=onboarding,pod-template-hash=95f5dc897
For amr-pengo, run:

kubectl get all --output=wide --namespace onboarding-pengo


13. Verify that the Docker* images are present on the Robot:

$ docker images

<Control_Plane_IP>:30003/intel/amr-ros-base-camera-tf latest
31735754089b 2 days ago 8.25GB
<Control_Plane_IP>:30003/intel/amr-wandering latest
31735754089b 2 days ago 8.25GB
<Control_Plane_IP>:30003/intel/amr-fastmapping latest
5c1bbefc1d17 2 days ago 2.28GB
<Control_Plane_IP>:30003/intel/amr-collab-slam latest
415975276b1f 2 days ago 3.24GB
<Control_Plane_IP>:30003/intel/amr-aaeon-amr-interface latest
5d94f57da0d1 2 days ago 2.37GB
<Control_Plane_IP>:30003/intel/amr-realsense latest
1dab67f4d287 2 days ago 3GB
<Control_Plane_IP>:30003/intel/amr-ros-base-camera-tf latest
0ac635f5633f 2 days ago 1.76GB
<Control_Plane_IP>:30003/intel/amr-nav2 latest
769353e041bf 2 days ago 3.55GB
<Control_Plane_IP>:30003/intel/amr-kobuki latest
799ed6f79385 2 days ago 3.06GB
<Control_Plane_IP>:30003/intel/amr-fleet-management latest
e91bf2815f65 2 days ago 1.79GB
<Control_Plane_IP>:30003/intel/amr-vda-navigator latest
499c0c09b685 2 days ago 2.08GB
<Control_Plane_IP>:30003/intel/amr-vda5050-ros2-bridge latest
4e8282a666be 2 days ago 2.06GB
<Control_Plane_IP>:30003/intel/eclipse/zenoh-bridge-dds 0.5.0-beta.9
1a5e41449966 9 months ago 86.1MB
<Control_Plane_IP>:30003/intel/node-feature-discovery v0.9.0
00019dda899b 13 months ago 123MB

NOTE Pod deployment may take a while because of the size of the Docker* containers from the pod.
If you get an error after the deployment, wait a few minutes. The pods automatically restart, and the
error goes away. If the error persists after a few automatic restarts, restart the pod manually from the
control plane:

$ kubectl rollout restart deployment wandering-deployment -n wandering

14. Verify that the Docker* containers are running on the Robot:

$ docker ps

CONTAINER ID IMAGE COMMAND


CREATED STATUS PORTS NAMES

86184dab6d92 10.237.22.39:30003/intel/amr-ros-base-camera-tf "/bin/bash -c 'sourc…"
About a minute ago Up About a minute k8s_amr-ros-base-teleop_wandering-
deployment-86d6b669d6-rzlgr_wandering_c00ecd97-2217-4f4f-a62c-9f99bc44ac7d_1
9d19c163076f 10.237.22.39:30003/intel/amr-wandering "/bin/bash -c 'sourc…"
About a minute ago Up About a minute k8s_amr-wandering_wandering-
deployment-86d6b669d6-rzlgr_wandering_c00ecd97-2217-4f4f-a62c-9f99bc44ac7d_0
b9f03850310e 10.237.22.39:30003/intel/amr-nav2 "/bin/bash -c 'sourc…"
About a minute ago Up About a minute k8s_amr-nav2_wandering-deployment-86d6b669d6-
rzlgr_wandering_c00ecd97-2217-4f4f-a62c-9f99bc44ac7d_0
8fb3fb882505 10.237.22.39:30003/intel/amr-fastmapping "/bin/bash -c 'sourc…"
About a minute ago Up About a minute k8s_amr-fastmapping_wandering-
deployment-86d6b669d6-rzlgr_wandering_c00ecd97-2217-4f4f-a62c-9f99bc44ac7d_0
1f122686f8e1 10.237.22.39:30003/intel/amr-collab-slam "/bin/bash -c 'sourc…"
About a minute ago Up About a minute k8s_amr-collab-slam_wandering-
deployment-86d6b669d6-rzlgr_wandering_c00ecd97-2217-4f4f-a62c-9f99bc44ac7d_0
ee7e6cd8b50a 10.237.22.39:30003/intel/amr-aaeon-amr-interface "/bin/bash -c 'sourc…"
About a minute ago Up About a minute k8s_amr-aaeon-amr-interface_wandering-
deployment-86d6b669d6-rzlgr_wandering_c00ecd97-2217-4f4f-a62c-9f99bc44ac7d_0
009efc5405af 10.237.22.39:30003/intel/amr-ros-base-camera-tf "/bin/bash -c 'sourc…"
About a minute ago Up About a minute k8s_amr-ros-base-camera-tf_wandering-
deployment-86d6b669d6-rzlgr_wandering_c00ecd97-2217-4f4f-a62c-9f99bc44ac7d_0
1a6409b8c361 10.237.22.39:30003/intel/amr-realsense "/bin/bash -c 'sourc…"
About a minute ago Up About a minute k8s_amr-realsense_wandering-
deployment-86d6b669d6-rzlgr_wandering_c00ecd97-2217-4f4f-a62c-9f99bc44ac7d_0
15. After the Onboarding process is finished, the Firmware Update and Operating System Update are
triggered automatically. If you want to start the update manually, see OTA Updates.

Adding Robots After the Initial Robot Setup


The manufacturer, rendezvous, and owner servers must still be running on the FDO server.
1. Make sure that you meet the Prerequisites.
2. Run all Robot steps in Configure the Robot and the FDO Server for the Onboarding Flow.
3. Run all Robot steps in Prepare the Environment Needed to Build the FDO Docker* Images.
a. For step 4, set a different DEVICE_SERIAL_NUMBER than for other onboarded robots.
4. Build the fdo-client image on the Robot (Build FDO Docker* Images step 1).
5. Run steps 5-15 in Onboard.
a. For step 8, use the DEVICE_SERIAL_NUMBER set in step 3.

Hosts Cleanup

Warning These steps erase most of the work done in previous steps, so only do them when you want to clean up your machines.
To recreate a setup after these cleanup steps, restart the onboarding process from the beginning.

1. Robot

CHOOSE_USER=root docker-compose -f 01_docker_sdk_env/docker_compose/05_tutorials/


fdo_client_onboard.yml down
2. FDO server terminal 1

docker-compose -f 01_docker_sdk_env/docker_compose/02_edge_server/edge-server.all.yml down


3. Remove the Robot in the ThingsBoard* web interface.


4. Robot If the Robot was added to the Intel® Smart Edge Open cluster, remove it:

kubeadm reset
systemctl restart kubelet
5. Robot If the Docker* containers are still running, remove the containers and their images:

docker rm -f $(docker ps | grep "<CONTROL_PLANE_IP>:30003/intel/" | awk '{ print $1 }')


docker rmi -f $(docker images | grep "<CONTROL_PLANE_IP>:30003/intel/" | awk '{ print $3 }')
6. Robot Remove the /etc/tc directory:

rm -rf /etc/tc

Troubleshooting
1. If a docker-compose error is encountered while building the FDO Docker* images, update the docker-compose version:

curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# Make the downloaded binary executable:
chmod +x /usr/local/bin/docker-compose

FDO References

Term Reference

DMS N/A

FDO https://fidoalliance.org/intro-to-fido-device-onboard/

FIDO https://en.wikipedia.org/wiki/FIDO_Alliance

RV https://fidoalliance.org/specs/FDO/FIDO-Device-Onboard-RD-v1.0-20201202.html

Intel® SDO https://www.intel.com/content/www/us/en/internet-of-things/secure-device-onboard.html

Basic Fleet Management

The basic fleet management solution consists of a server and client architecture.
• For the server setup, which is orchestrated by Intel® Smart Edge Open, see the Get Started Guide for Robot Orchestration.
• For the client setup, in which Intel® Smart Edge Open onboards and deploys devices for basic fleet management use cases, see Device Onboarding End-to-End Use Case.
The following diagram presents the architecture, components, and communication between components.


Basic Fleet Management use cases:

• Basic Fleet Management use case: commanding a robot to return to a docking station when triggered by battery level, or commanding it to navigate to multiple specified locations
• Remote Inference use case: sending a remote inference request to an OpenVINO™ model server, triggered by battery level
The basic fleet management server is one of the microservices of orchestration, and it is the solution
provided by ThingsBoard* (https://thingsboard.io/docs/user-guide/install/docker/).
Using a ThingsBoard* server to connect to the Intel® In-Band Manageability framework (https://github.com/
intel/intel-inb-manageability), deployed clients are able to provide fleet management and telemetry. The
ThingsBoard* server GUI gives a clear view of the telemetry data with the Intel® In-Band Manageability-
tailored dashboard. In addition, rules can be set for configured, validated events to reach fleet management
use cases.
The basic fleet management client (which is deployed on robots) consists of Intel® In-Band Manageability, the VDA5050 client, which complies with the VDA5050 v2.0 specification (https://www.vda.de/dam/jcr:f0c9c019-1506-4dee-998a-e92723fbf025/EN-VDA5050-V2_0_0.pdf), and ROS 2 nodes (for navigation or object detection purposes). When a subscribed topic is published by Intel® In-Band Manageability, the VDA5050 client processes the VDA5050-compliant JSON message (https://github.com/VDA5050/VDA5050/tree/main/json_schemas) and translates it into ROS 2 topics to publish.
The VDA5050-compliant JSON message can be constructed in the ThingsBoard* Rule Engine (https://thingsboard.io/docs/user-guide/rule-engine-2-0/re-getting-started/) nodes with configured telemetry message validation and sent via an RPC call node, or it can be sent manually from the GUI.


For the remote inference use case, the requests from the ROS 2 node go to the OpenVINO™ model server (https://github.com/openvinotoolkit/model_server/tree/main/extras/nginx-mtls-auth) via an SSL channel.
• Basic Fleet Management Use Case
• Remote Inference End-to-End Use Case

Basic Fleet Management Use Case

This tutorial describes how to:


• From the ThingsBoard* GUI, command the robot to navigate to multiple locations.
• Command the robot back to the docking station when its battery reaches the 40% threshold.
Prerequisites:
• The server is configured with the Get Started Guide for Robot Orchestration.
• The robot is onboarded with the Device Onboarding End-to-End Use Case.

VDA5050 Client and Multiple Destinations Order


The VDA5050 client works as a bridge between VDA5050 specification compliance and the ROS 2 world.
Specifically, it complies with VDA5050 V2.0 specification schemas.
The following software block diagram of the VDA5050 client shows internal software components, and the
extra VDA_Navigator component as a ROS 2 node to handle interfacing to the ROS 2 Navigation 2 stack. You
can extend the internal handlers and VDA_Navigator to fit your use cases.


1. Pause wandering from ThingsBoard*.


a. Click Manifest Update:


b. Replace the default command with the following VDA5050 json format embedded command:

<?xml version="1.0" encoding="UTF-8"?><manifest><type>cmd</type><cmd>custom</cmd><custom>


<data>{
"headerId":0,
"timestamp":"2022-08-03T11:40:03.12Z",
"version":"1.0",
"manufacturer":"Intel",
"serialNumber":"12345678",
"actions": [
{
"actionId":"0",
"actionName":"ToggleWandering",
"blockingType":"NONE",
"actionParameters":[
{
"key":"pause",
"value":true
}]
}]
}</data></custom></manifest>
c. Click Send. Intel® In-Band Manageability forwards the message to the VDA5050 client.
2. Instruct the robot to go to multiple navigation goals by adding them in the nodes array.

a. Click Manifest Update.


b. Replace the default command with a command containing multiple navigation goals, for example:

<?xml version="1.1" encoding="UTF-8"?><manifest><type>cmd</type><cmd>custom</cmd><custom>


<data>{
"headerId": 0,
"timestamp":"2021-10-14T11:40:04.12Z",
"version":"1.0",
"manufacturer":"Intel",
"serialNumber":"12345678",
"orderId":"1235",
"orderUpdateId": 0,
"nodes":[

{
"nodeId":"A",
"sequenceId": 7,
"released": true,
"nodePosition":{
"x":0.3,
"y":0.8,
"theta":0,
"mapId": "001"
},
"actions":[]
},
{
"nodeId":"B",
"sequenceId": 7,
"released": true,
"nodePosition":{
"x":0.9,
"y":0.8,
"theta":0,
"mapId": "001"
},
"actions":[]
}
],
"edges":[
{
"edgeId": "edge9",
"sequenceId": 0,
"edgeDescription": "edge1",
"released": false,
"startNodeId": "Origin",
"endNodeId": "AnotherNode",
"maxSpeed": 0.0,
"maxHeight": 0.0,
"minHeight": 0.0,
"orientation": 0.0,
"direction": "straight",
"rotationAllowed": true,
"maxRotationSpeed": 0.0,
"length": 0.0,
"trajectory": {
"degree": 0.0,
"knotVector": [
0.0,
0.5,
0.6,
1.0
],
"controlPoints": [
{
"x": 0.0,
"y": 0.0,
"weight": 0.0
}]},
"actions": []}]
}</data> </custom></manifest>
c. Click Send. Intel® In-Band Manageability forwards the message to the VDA5050 client.


Expected result: A status update is sent to the ThingsBoard* Dashboard Event Log, and the robot starts
navigating to the navigation goals.

3. Send a command to update the navigation goals by adding a new one, “C”.

NOTE The orderId remains the same; orderUpdateId flags this as an update of the previous instructions.

a. Click Manifest Update.


b. Replace the command with:

<?xml version="1.0" encoding="UTF-8"?><manifest><type>cmd</type><cmd>custom</cmd><custom><data>{


"headerId": 0,
"timestamp":"2021-10-14T11:40:04.12Z",
"version":"1.0",
"manufacturer":"Intel",
"serialNumber":"12345678",
"orderId":"1235",
"orderUpdateId": 1,
"nodes":[
{
"nodeId":"B",
"sequenceId": 7,
"released": true,
"nodePosition":{
"x":0.9,
"y":0.8,
"theta":0,
"mapId": "001"},
"actions":[]},
{
"nodeId":"C",
"sequenceId": 7,
"released": true,
"nodePosition":{
"x":1.1,
"y":1.1,
"theta":0,

"mapId": "001"},
"actions":[]}],
"edges":[
{
"edgeId": "edge2",
"sequenceId": 0,
"edgeDescription": "c",
"released": false,
"startNodeId": "B",
"endNodeId": "C",
"maxSpeed": 0.0,
"maxHeight": 0.0,
"minHeight": 0.0,
"orientation": 0.0,
"direction": "straight",
"rotationAllowed": true,
"maxRotationSpeed": 0.0,
"length": 0.0,
"trajectory": {
"degree": 0.0,
"knotVector": [
0.0,
0.5,
0.6,
1.0],
"controlPoints": [{
"x": 0.0,
"y": 0.0,
"weight": 0.0}]
},
"actions": []}]
}</data> </custom></manifest>
c. Click Send. Intel® In-Band Manageability forwards the message to the VDA5050 client.
Expected result: A status update is sent to the ThingsBoard* Dashboard Event Log, and the robot starts
navigating to the navigation goals, now including the C navigation goal.

Command the Robot Back to the Docking Station When its Battery Reaches the 40% Threshold
Collaboration Diagram
When a robot’s battery level is less than 40%, basic fleet management tells the robot to move to the origin
position. The following diagram depicts the steps.


On the server, log into the basic fleet management server.


1. Open the basic fleet management dashboard:

NOTE VNC interferes with the Intel® Smart Edge Open installation. Intel recommends that you open
the basic fleet management dashboard on a different system, as the dashboard is accessible via
internet.

# Open a browser, use controller IP and open:


<IP Address>:32764

NOTE Use the following credentials:


• account: [email protected]
• password: tenant

If the fleet management server dashboard is not accessible on a system in the same network, check
Troubleshooting for Robot Orchestration Tutorials, “Fleet Management Server Dashboard over LAN
Issues”.

The following home page is loaded. Device Profiles and Devices are loaded with pre-configured data
from Intel.

2. Check the pre-configured device profile from Intel named: INB.


• The Root Rule Chain is associated with the pre-defined Device Profile “INB.”
• Definitions in this Root Rule Chain determine how incoming and outgoing messages are processed
for all devices registered in this profile.


3. Check the pre-configured Root Rule Chain from ThingsBoard*.


a. Open the Rule:


b. Intel extended the Rule Chain to fulfill the following use cases:
• Basic fleet management
• Remote inference
• Onboarding
• OTA update

NOTE To add new clients to the fleet management server, see Troubleshooting for Robot Orchestration
Tutorials, “Add New Clients to the Fleet Management Server”.

4. Check out the dashboard tailored for Intel® In-Band Manageability.


a. Open the dashboard:

b. The dashboard shows the device’s basic information and telemetry data, for example:
• The INB_Fleet_Management_Client device is currently online (if a device is onboarded through the onboarding process, an additional device is shown as online).
• The battery status is numeric and presented as an average value.


• The battery level is 35 (hover over the grey line representing the battery value over time to see this).

5. Check the Wandering application logs when the battery level goes under 40%.
The logs are similar to this:

[wandering_mapper]: GoToLocation 0,0
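One way to follow these logs from the control plane is shown below (a sketch that reuses the deployment, container, and namespace names from the onboarding steps; adjust them if your deployment differs):

kubectl logs -f deployment/wandering-deployment -c amr-wandering -n wandering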

Remote Inference End-to-End Use Case

This tutorial describes how to use the basic fleet management server to set object detection inference to run remotely on the OpenVINO™ model server when the EI for AMR device's battery is lower than the 60% threshold. If the battery is equal to or greater than 60%, inference is set to run locally on the EI for AMR device.
Prerequisites:
• The server is configured with the Get Started Guide for Robot Orchestration.
• The robot is onboarded with the Device Onboarding End-to-End Use Case.

Collaboration Diagram
When a robot’s battery level is less than 60%, basic fleet management tells the robot to do Remote Inference. When the battery level is back to 60% or greater, basic fleet management tells the robot to do Local Inference. The following diagram depicts the steps.

Check Object Detection with Local or Remote Inference


On the robot
• Example logs when local inference is performed:

[object_detection_node-3]

[object_detection_node-3] [ INFO ] <LocalInference> Done frame: 5516 . Processed in: 0.279625 ms


[object_detection_node-3]

[object_detection_node-3] [ INFO ] <LocalInference> Label tv

[object_detection_node-3]

[object_detection_node-3] [ INFO ] <LocalInference> Done frame: 5517 . Processed in: 0.240508 ms

[object_detection_node-3]

[object_detection_node-3] [ INFO ] <LocalInference> Label tv


• Example logs when remote inference is performed:

[object_detection_node-3] [INFO] [1643382428.696445729] [object_detection]:


switchToRemoteInfCallback

[object_detection_node-3] [INFO] [1643382428.720869717] [object_detection]: <RemoteInference>


Sending Image

[object_detection_node-3] [INFO] [1643382428.854300655] [object_detection]: <RemoteInference>


Sending Image

[remote_inference-4] [INFO] [1643382428.863697253] [remote_inference]: <RemoteInference>


Receiving video frame

[object_detection_node-3] [INFO] [1643382428.882912426] [object_detection]: <RemoteInference>


Sending Image

[remote_inference-4] [INFO] [1643382428.895332223] [remote_inference]: <RemoteInference>


Processing and inference took 31.16

[object_detection_node-3] [INFO] [1643382428.896419726] [object_detection]: <RemoteInference>


Detected Objects Received

[object_detection_node-3] [INFO] [1643382428.896478543] [object_detection]: <RemoteInference>


Label : tv

[remote_inference-4] [INFO] [1643382428.897817637] [remote_inference]: <RemoteInference>


Receiving video frame

[object_detection_node-3] [INFO] [1643382428.921090305] [object_detection]: <RemoteInference>


Sending Image

[remote_inference-4] [INFO] [1643382428.922211172] [remote_inference]: <RemoteInference>


Processing and inference took 23.68

OTA Updates

The OTA updates solution is based on the Basic Fleet Management architecture. The following diagram
presents the architecture, components, and communications for these use cases.


Prerequisites:
• The server is configured with the Get Started Guide for Robot Orchestration.
• The robot is onboarded with the Device Onboarding End-to-End Use Case.

Operating System Update


On the ThingsBoard* dashboard, click Trigger SOTA, select update, and click SEND. Update progress is
visible in the ThingsBoard logs. The client host reboots after SOTA completes.

NOTE If the operating system update fails, dpkg may have been interrupted in the past or the SOTA
cache directory is missing on the robot. Run the following commands to solve the issue:

sudo dpkg --configure -a
sudo mkdir -p /var/cache/manageability/repository-tool/sota

Firmware Update
This example updates the Intel® RealSense™ camera firmware.
1. Preparation for the Intel® RealSense™ camera firmware update:
a. Download the firmware from https://dev.intelrealsense.com/docs/firmware-releases.
b. Place the .bin file that contains the firmware in a .tar.gz archive. Make sure that you do not
archive the entire directory, only the .bin file.
c. Set up a basic HTTP server, and upload the .tar.gz on it as a trusted repository server.
2. On the ThingsBoard* dashboard, click Trigger Config Update.
3. Choose:
• Command: append
• Path: trustedRepositories:http://url-to-http-server/and-optional-path-if-necessary/


4. Click Send, and observe the logs in the ThingsBoard* bottom screen.
5. Trigger the firmware update.
• BIOS Version: any number

• Fetch: http://url-to-http-server/and-optional-path-if-necessary/archive-with-firmware.tar.gz
• Manufacturer: set the value according to the following image
• Product: set the value according to the following image
• Release Date: the current date in the YYYY-MM-DD format
• Vendor: set the value according to the following image
• Server Username and Server Password: only used if the HTTP server is password protected

6. Click Send, and observe the logs in the ThingsBoard* bottom screen.
The client host reboots after the update completes.

Manifest Update to Trigger POTA (Operating system update and Firmware update)
Programming Over The Air, or POTA, includes both SOTA and FOTA.

NOTE A configuration update is still required if the image URL is not listed in the trusted repositories.

1. Trigger a POTA by clicking Manifest Update. Replace the default manifest with your text.
Example:

<?xml version="1.0" encoding="utf-8"?>


<manifest>
<type>ota</type>
<ota>
<header>
<type>pota</type>
<repo>remote</repo>
</header>
<type>
<pota>
<fota name="sample">
<biosversion>5.12</biosversion>
<manufacturer>AAEON</manufacturer>
<product>UP-APL01</product>
<vendor>American Megatrends Inc.</vendor>
<releasedate>2022-07-08</releasedate>
<fetch>http://glaic3n125.gl.intel.com/rs_13.tar.gz</fetch></fota>
<sota>
<cmd logtofile="y">update</cmd>
<release_date>2024-12-31</release_date>
</sota>
</pota>
</type>
</ota>
</manifest>
• BIOS version: Set this to any number.
• Fetch: http://url-to-http-server/and-optional-path-if-necessary/archive-with-firmware.tar.gz
• Manufacturer: Set this to match the one in ThingsBoard*.
• Product: Set this to match the one in ThingsBoard*.
• Vendor: Set this to match the one in ThingsBoard*.
• Release Date: Set this to the current date in the YYYY-MM-DD format.
2. Click Send, and observe the logs in the ThingsBoard* bottom screen.


The client host reboots after the update completes.

Container Update
Because the containers (or applications) are orchestrated and deployed by Intel® Smart Edge Open, the
ThingsBoard* Rule Chain routes AOTA trigger requests to the Intel® Smart Edge Open server. AOTA only
works for whole pod updates.
1. Verify the current version:

$ helm list -A

NAME NAMESPACE REVISION


UPDATED STATUS CHART
APP VERSION
cadvisor telemetry 1 2022-09-13 17:14:17.731573771
+0300 EEST deployed cadvisor-0.1.0 1
cert-manager cert-manager 1 2022-09-13 17:05:37.960015552
+0300 EEST deployed cert-manager-v1.6.1 v1.6.1
collab collab 1 2022-09-14 11:41:31.420169905
+0300 EEST deployed collab-0.1.0 2022.3.0
collab-dds-router collab-dds-router 1 2022-09-14 11:40:51.67619388
+0300 EEST deployed collab-dds-router-0.1.0 0.5.0-beta.9
collectd telemetry 1 2022-09-13 17:14:10.903887379
+0300 EEST deployed collectd-0.1.0 1
fleet fleet-management 1 2022-09-14 13:23:47.082994841
+0300 EEST deployed fleet-0.2.0 2022.3.0
grafana telemetry 1 2022-09-13 17:15:27.339867512
+0300 EEST deployed grafana-6.16.13 8.2.0
harbor-app harbor 1 2022-09-13 17:06:59.18688859
+0300 EEST deployed harbor-1.7.4 2.3.4
nfd-release smartedge-system 1 2022-09-13 17:12:20.988514637
+0300 EEST deployed node-feature-discovery-0.2.0 v0.9.0
onboarding onboarding 1 2022-09-14 15:04:07.609923463
+0300 EEST deployed onboarding-0.1.0 2022.3.0
**onboarding-pengo onboarding-pengo 4 2022-09-16 13:19:26.690734294
+0300 EEST deployed onboarding-pengo-0.1.18 2022.3.0**
ovms ovms-tls 1 2022-09-14 11:29:31.642868313
+0300 EEST deployed ovms-tls-0.2.0 2022.3.0
prometheus telemetry 1 2022-09-13 17:13:40.166305066
+0300 EEST deployed prometheus-14.9.2 2.26.0
statsd-exporter telemetry 1 2022-09-13 17:14:01.640302724
+0300 EEST deployed prometheus-statsd-exporter-0.4.10.22.1
2. Create the /var/amr/helm-charts/ directory:

$ mkdir -p /var/amr/helm-charts/
3. If you are onboarding with a Cogniteam* Pengo robot, make a copy of the AMR_server_containers/
01_docker_sdk_env/docker_orchestration/ansible-playbooks/01_amr/onboarding-pengo directory:

$ cd <edge_insights_for_amr_path>/Edge_Insights_for_Autonomous_Mobile_Robots_<version>/
AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/01_amr/
$ cp -r onboarding-pengo onboarding-pengo2
4. To create a new version of a Helm* chart, increment the version field in the Chart.yaml file:

$ nano onboarding_pengo2/helm_onboarding_pengo/Chart.yaml
$ cat onboarding_pengo2/helm_onboarding_pengo/Chart.yaml

apiVersion: v2
appVersion: 2022.3.0
description: A helm chart for onboarding-pengo


name: onboarding-pengo
type: application
**version: 0.1.20**
5. Move the new Helm* chart to /var/amr/helm-charts/.

$ mv onboarding_pengo2 /var/amr/helm-charts
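Before triggering AOTA, you can optionally confirm that the new chart version is readable from its new location (the path follows the example above):

$ helm show chart /var/amr/helm-charts/onboarding_pengo2/helm_onboarding_pengo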
6. On the ThingsBoard* dashboard, click Trigger AOTA.

7. Select:

• App: application as App
• Command: update
• Container Tag: onboarding-pengo
• Version:


8. Click Send, and observe the logs in the ThingsBoard* bottom screen.
Known issue: A timeout error occurs after clicking Send because the AOTA request message is rerouted
to the Intel® Smart Edge Open server, and the robot never receives the request.

Expected results:
• On the Intel® Smart Edge Open server, check the mqtt_aota service status. It should look similar to
the following.

$ systemctl status mqtt_aota

● mqtt_aota.service - MQTT Broker AOTA Service


Loaded: loaded (/etc/systemd/system/mqtt_aota.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2022-09-16 10:42:39 EEST; 1 day 4h ago
Main PID: 934368 (python3)

Tasks: 2 (limit: 618670)
Memory: 15.2M
CGroup: /system.slice/mqtt_aota.service
└─934368 /usr/bin/python3 /usr/local/bin/mqtt_onboard_aota.py

sep 17 15:38:26 glaic3edge02 python3[3951685]: + helm_extra_args+=(--set "env_label=$


{OPT_ENV_LABEL}")
sep 17 15:38:26 glaic3edge02 python3[3951736]: + cd /var/amr/helm-charts/onboarding_pengo2/
helm_onboarding_pengo
sep 17 15:38:26 glaic3edge02 python3[3951736]: + helm upgrade --install onboarding-pengo . --
namespace onboarding-pengo --set env.whoami=root --set hostIP=10.237.23.136:30003 --set
'node_names={glaic3pengo1}' --set tier_label=BasicFleetManagement --set env_label=tutorial
sep 17 15:38:27 glaic3edge02 python3[3951737]: **Release "onboarding-pengo" has been upgraded.
Happy Helming!**
sep 17 15:38:27 glaic3edge02 python3[3951737]: NAME: onboarding-pengo
sep 17 15:38:27 glaic3edge02 python3[3951737]: LAST DEPLOYED: Sat Sep 17 15:38:26 2022
sep 17 15:38:27 glaic3edge02 python3[3951737]: NAMESPACE: onboarding-pengo
sep 17 15:38:27 glaic3edge02 python3[3951737]: STATUS: deployed
sep 17 15:38:27 glaic3edge02 python3[3951737]: REVISION: 5
sep 17 15:38:27 glaic3edge02 python3[3951737]: TEST SUITE: None
• On the Intel® Smart Edge Open server, the Helm* chart version is renewed to a newer version.

$ helm list -A
NAME NAMESPACE REVISION
UPDATED STATUS CHART
APP VERSION
cadvisor telemetry 1 2022-09-13 17:14:17.731573771
+0300 EEST deployed cadvisor-0.1.0 1
cert-manager cert-manager 1 2022-09-13 17:05:37.960015552
+0300 EEST deployed cert-manager-v1.6.1 v1.6.1
collab collab 1 2022-09-14 11:41:31.420169905
+0300 EEST deployed collab-0.1.0 2022.3.0
collab-dds-router collab-dds-router 1 2022-09-14 11:40:51.67619388
+0300 EEST deployed collab-dds-router-0.1.0 0.5.0-beta.9
collectd telemetry 1 2022-09-13 17:14:10.903887379
+0300 EEST deployed collectd-0.1.0 1
fleet fleet-management 1 2022-09-14 13:23:47.082994841
+0300 EEST deployed fleet-0.2.0 2022.3.0
grafana telemetry 1 2022-09-13 17:15:27.339867512
+0300 EEST deployed grafana-6.16.13 8.2.0
harbor-app harbor 1 2022-09-13 17:06:59.18688859
+0300 EEST deployed harbor-1.7.4 2.3.4
nfd-release smartedge-system 1 2022-09-13 17:12:20.988514637
+0300 EEST deployed node-feature-discovery-0.2.0 v0.9.0
onboarding onboarding 1 2022-09-14 15:04:07.609923463
+0300 EEST deployed onboarding-0.1.0 2022.3.0
**onboarding-pengo onboarding-pengo 5 2022-09-17 15:38:26.58658014
+0300 EEST deployed onboarding-pengo-0.1.20 2022.3.0**
ovms ovms-tls 1 2022-09-14 11:29:31.642868313
+0300 EEST deployed ovms-tls-0.2.0 2022.3.0
prometheus telemetry 1 2022-09-13 17:13:40.166305066
+0300 EEST deployed prometheus-14.9.2 2.26.0
statsd-exporter telemetry 1 2022-09-13 17:14:01.640302724
+0300 EEST deployed prometheus-statsd-exporter-0.4.10.22.1

Troubleshooting for Robot Orchestration Tutorials


Setting a Static IP
Depending on your network setup, there are multiple ways to set a static IP.
• In a home network, see your router’s instructions for how to set a static IP using your MAC address.
• In a corporate network, contact your local support team for how to set a static IP.
• To set it from your computer’s operating system:
1. Make sure that your computer has the correct date:

date
If the date is incorrect, contact your local support team for help setting the correct date and time.
2. Find the gateway:

ip route | grep default


3. Find the name servers by determining your interface name and substituting it into the command:

nmcli device show <interface name> | grep IP4.DNS


4. Follow the “Static IP Address Assignment” steps in the Ubuntu* documentation.
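
As an illustration only, here is a minimal netplan sketch of the static IP assignment described in step 4. Every value in it (the file name, renderer, interface name, addresses, gateway, and DNS server) is an example built from the earlier steps and must be replaced with the values of your own network:

sudo tee /etc/netplan/01-static-ip.yaml > /dev/null <<'EOF'
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:                      # interface name found with 'ip route | grep default'
      addresses:
        - 10.0.0.50/24         # desired static IP address and prefix
      routes:
        - to: default
          via: 10.0.0.1        # gateway found in step 2
      nameservers:
        addresses: [10.0.0.1]  # name servers found in step 3
EOF
sudo netplan apply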

virtualenv Error
If the following error is displayed:

Virtualenv location:
Warning: There was an unexpected error while activating your virtualenv. Continuing anyway…
Traceback (most recent call last):
File "./deploy.py", line 24, in <module>
from scripts import log_all
ImportError: cannot import name 'log_all' from 'scripts' (/home/test/.local/lib/python3.8/site-
packages/scripts/__init__.py)
Remove the ~/.local/lib/python3.8/ directory and run the following commands:

pip install --user -U pip


pip freeze --user | cut -d'=' -f1 | xargs pip install --user -U
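
If it is unclear how to remove the directory mentioned above, a command like the following can be used (the Python version in the path is an example; adjust it to your installation):

rm -rf ~/.local/lib/python3.8/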

Python Error
If the following error is displayed:

Failed to install wget. b' ERROR: Command errored out with exit status 1:\n
command: /usr/bin/python3 -c \'import sys, setuptools, tokenize; sys.argv[0] = \'"\'"\'/tmp/pip-
install-6hcmet6a/wget/setup.py\'"\'"\'; __file__=\'"\'"\'/tmp/pip-install-6hcmet6a/wget/setup.py
\'"\'"\';f=getattr(tokenize, \'"\'"\'open\'"\'"\', open)
(__file__);code=f.read().replace(\'"\'"\'\\r\\n\'"\'"\', \'"\'"\'\\n
\'"\'"\');f.close();exec(compile(code, __file__, \'"\'"\'exec\'"\'"\'))\' egg_info --egg-
base /tmp/pip-pip-egg-info-7_3nl4xa\n cwd: /tmp/pip-install-6hcmet6a/wget/\n Complete
output (17 lines):\n Traceback (most recent call last):\n File "<string>", line 1, in
<module>\n File "/tmp/pip-install-6hcmet6a/wget/setup.py", line 15, in <module>\n
setup(\n File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/core.py", line
147, in setup\n _setup_distribution = dist = klass(attrs)\n File "/usr/local/lib/
python3.8/dist-packages/setuptools/dist.py", line 476, in __init__\n
_Distribution.__init__(\n File "/usr/local/lib/python3.8/dist-packages/setuptools/
_distutils/dist.py", line 280, in __init__\n self.finalize_options()\n File "/usr/
local/lib/python3.8/dist-packages/setuptools/dist.py", line 899, in finalize_options\n
for ep in sorted(loaded, key=by_order):\n File "/usr/local/lib/python3.8/dist-packages/
setuptools/dist.py", line 898, in <lambda>\n loaded = map(lambda e: e.load(), filtered)
\n File "/usr/local/lib/python3.8/dist-packages/setuptools/_vendor/importlib_metadata/
__init__.py", line 196, in load\n return functools.reduce(getattr, attrs, module)\n
AttributeError: type object \'Distribution\' has no attribute \'_finalize_feature_opts\'\n

----------------------------------------\nERROR: Command errored out with exit status 1: python
setup.py egg_info Check the logs for full command output.\nWARNING: You are using pip version
20.2.4; however, version 22.2.2 is available.\nYou should consider upgrading via the \'/usr/bin/
python3 -m pip install --upgrade pip\' command.\n'
Remove the ~/.local/lib/python3.8/ directory, and run the following commands:

python3 -m pip uninstall setuptools


python3 -m pip install testresources
python3 -m pip install launchpadlib
python3 -m pip install setuptools
python3 -m pip install --user -U pip

termcolor Error
If the following error is displayed:

Failed to install termcolor. b'/usr/local/lib/python3.8/dist-packages/pkg_resources/__init__.py:122:

Run the following commands:

python3 -m pip uninstall setuptools
python3 -m pip install testresources
python3 -m pip install setuptools

Failed OpenSSL Download


If the following error is displayed:

FAILED - RETRYING: OpenSSL download from https://www.openssl.org/source/openssl-1.1.1i.tar.gz (4 retries left).
FAILED - RETRYING: OpenSSL download from https://www.openssl.org/source/openssl-1.1.1i.tar.gz (4
retries left).
FAILED - RETRYING: OpenSSL download from https://www.openssl.org/source/openssl-1.1.1i.tar.gz (3
retries left).
FAILED - RETRYING: OpenSSL download from https://www.openssl.org/source/openssl-1.1.1i.tar.gz (3
retries left).
FAILED - RETRYING: OpenSSL download from https://www.openssl.org/source/openssl-1.1.1i.tar.gz (2
retries left).
FAILED - RETRYING: OpenSSL download from https://www.openssl.org/source/openssl-1.1.1i.tar.gz (2
retries left).
FAILED - RETRYING: OpenSSL download from https://www.openssl.org/source/openssl-1.1.1i.tar.gz (1
retries left).
FAILED - RETRYING: OpenSSL download from https://www.openssl.org/source/openssl-1.1.1i.tar.gz (1
retries left)
Run the following commands:

wget --directory-prefix=/tmp http://certificates.intel.com/repository/certificates/IntelSHA2RootChain-Base64.zip

sudo unzip -o /tmp/IntelSHA2RootChain-Base64.zip -d /usr/local/share/ca-certificates/

rm /tmp/IntelSHA2RootChain-Base64.zip

sudo update-ca-certificates


“Isecl control plane IP not set” Error


If the following error is displayed:

TASK [Check control plane IP]


*************************************************************************************************
*************************************************************************************************
*************
task path: /root/dek/roles/security/isecl/common/tasks/precheck.yml:7
Wednesday 16 February 2022 15:36:34 +0000 (0:00:00.047) 0:00:05.373 ****
fatal: [node01]: FAILED! => {
"changed": false
}

MSG:

Isecl control plane IP not set!


fatal: [node02]: FAILED! => {
"changed": false
}

MSG:

Isecl control plane IP not set!


fatal: [controller]: FAILED! => {
"changed": false
}

MSG:

Isecl control plane IP not set!


Update the ~/dek/inventory/default/group_vars/all/10-default.yml file with:

# Install isecl attestation components (TA, ihub, isecl k8s controller and scheduler extension)
platform_attestation_node: false

“PCCS IP address not set” Error


If the following error is displayed:

TASK [Check PCCS IP address]


*************************************************************************************************
*************************************************************************************************
**************
task path: /root/dek/roles/infrastructure/provision_sgx_enabled_platform/tasks/
param_precheck.yml:7
Wednesday 16 February 2022 15:39:59 +0000 (0:00:00.060) 0:00:05.688 ****
fatal: [node01]: FAILED! => {
"changed": false
}

MSG:

PCCS IP address not set!


fatal: [node02]: FAILED! => {
"changed": false
}

MSG:


PCCS IP address not set!


fatal: [controller]: FAILED! => {
"changed": false
}

MSG:

PCCS IP address not set!


Update the ~/dek/inventory/default/group_vars/all/10-default.yml file with:

### Software Guard Extensions


# SGX requires kernel 5.11+, SGX enabled in BIOS and access to PCC service
sgx_enabled: false

“no supported NIC is selected” Error


If the following error is displayed:

sriovnetwork.sriovnetwork.openshift.io/sriov-vfio-network-c1p1 unchanged
STDERR:
Error from server (no supported NIC is selected by the nicSelector in CR sriov-netdev-net-c0p0):
error when creating "sriov-netdev-net-c0p0-sriov_network_node_policy.yml": admission webhook
"operator-webhook.sriovnetwork.openshift.io" denied the request: no supported NIC is selected by
the nicSelector in CR sriov-netdev-net-c0p0
Error from server (no supported NIC is selected by the nicSelector in CR sriov-netdev-net-c1p0):
error when creating "sriov-netdev-net-c1p0-sriov_network_node_policy.yml": admission webhook
"operator-webhook.sriovnetwork.openshift.io" denied the request: no supported NIC is selected by
the nicSelector in CR sriov-netdev-net-c1p0
Error from server (no supported NIC is selected by the nicSelector in CR sriov-vfio-pci-net-
c0p1): error when creating "sriov-vfio-pci-net-c0p1-sriov_network_node_policy.yml": admission
webhook "operator-webhook.sriovnetwork.openshift.io" denied the request: no supported NIC is
selected by the nicSelector in CR sriov-vfio-pci-net-c0p1
Error from server (no supported NIC is selected by the nicSelector in CR sriov-vfio-pci-net-
c1p1): error when creating "sriov-vfio-pci-net-c1p1-sriov_network_node_policy.yml": admission
webhook "operator-webhook.sriovnetwork.openshift.io" denied the request: no supported NIC is
selected by the nicSelector in CR sriov-vfio-pci-net-c1p1
Update the ~/dek/inventory/default/group_vars/all/10-default.yml file with:

sriov_network_operator_enable: false

## SR-IOV Network Operator configuration
sriov_network_operator_configure_enable: false

“Unexpected templating type error”


If the following error is displayed:

MSG:
AnsibleError: Unexpected templating type error occurred on (# SPDX-License-Identifier: Apache-2.0
# Copyright (c) 2020 Intel Corporation
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-datasources
namespace: telemetry
labels:
grafana_datasource: '1'


data:
prometheus-tls.yaml: |-
apiVersion: 1
datasources:
- name: Prometheus-TLS
access: proxy
editable: true
orgId: 1
type: prometheus
url: https://prometheus:9099
withCredentials: true
isDefault: true
jsonData:
tlsAuth: true
tlsAuthWithCACert: true
secureJsonData:
tlsCACert: |
{{ telemetry_root_ca_cert.stdout | trim | indent(width=13, indentfirst=False) }}
tlsClientCert: |
{{ telemetry_grafana_cert.stdout | trim | indent(width=13, indentfirst=False) }}
tlsClientKey: |
{{ telemetry_grafana_key.stdout | trim | indent(width=13, indentfirst=False) }}
version: 1
editable: false
): do_indent() got an unexpected keyword argument 'indentfirst'
Update the ~/dek/roles/telemetry/grafana/templates/prometheus-tls-datasource.yml file with:

- {{ telemetry_root_ca_cert.stdout | trim | indent(width=13, indentfirst=False) }}
+ {{ telemetry_root_ca_cert.stdout | trim | indent(width=13, first=False) }}
- {{ telemetry_grafana_cert.stdout | trim | indent(width=13, indentfirst=False) }}
+ {{ telemetry_grafana_cert.stdout | trim | indent(width=13, first=False) }}
- {{ telemetry_grafana_key.stdout | trim | indent(width=13, indentfirst=False) }}
+ {{ telemetry_grafana_key.stdout | trim | indent(width=13, first=False) }}

“Wait till all Harbor resources ready” Message


If the following log is displayed:

TASK [kubernetes/cni : Wait till all Harbor resources ready]


*************************************************************************************************
*******************************************************************************
task path: /home/user/dek/roles/kubernetes/cni/tasks/main.yml:20
Tuesday 16 November 2021 14:41:58 +0100 (0:00:00.070) 0:04:39.646 ******
FAILED - RETRYING: Wait till all Harbor resources ready (60 retries left).
FAILED - RETRYING: Wait till all Harbor resources ready (59 retries left).
FAILED - RETRYING: Wait till all Harbor resources ready (58 retries left).
FAILED - RETRYING: Wait till all Harbor resources ready (57 retries left).
FAILED - RETRYING: Wait till all Harbor resources ready (56 retries left).
FAILED - RETRYING: Wait till all Harbor resources ready (55 retries left).
FAILED - RETRYING: Wait till all Harbor resources ready (54 retries left).
FAILED - RETRYING: Wait till all Harbor resources ready (53 retries left).
FAILED - RETRYING: Wait till all Harbor resources ready (52 retries left).
FAILED - RETRYING: Wait till all Harbor resources ready (51 retries left).
FAILED - RETRYING: Wait till all Harbor resources ready (50 retries left).
Wait approximately 30 minutes. The Intel® Smart Edge Open deployment script waits for the Harbor
resources to be ready.
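
If you prefer to watch the progress instead of simply waiting, you can poll the Harbor pods from another terminal. This is only an optional check; it assumes the harbor namespace shown in the helm listing earlier in this guide:

watch kubectl get pods -n harbor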

Installation Stuck
If the installation remains stuck with the following log:

TASK [infrastructure/os_setup : enable UFW]


*************************************************************************************************
************************************************************************************************
task path: /root/dek/roles/infrastructure/os_setup/tasks/ufw_enable_debian.yml:12
Wednesday 16 February 2022 15:53:04 +0000 (0:00:01.627) 0:08:03.425 ****
NOTIFIED HANDLER reboot server for controller
changed: [controller] => {
"changed": true,
"commands": [
"/usr/sbin/ufw status verbose",
"/usr/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/
user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules",
"/usr/sbin/ufw -f enable",
"/usr/sbin/ufw status verbose",
"/usr/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/
user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules"
]
}

MSG:

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To Action From
-- ------ ----
22/tcp ALLOW IN Anywhere
22/tcp (v6) ALLOW IN Anywhere (v6)
Type Ctrl-c, and restart the installation. (Run the ./deploy.sh script again.)

Pod Remains in “Terminating” State after Uninstall


After uninstall, if the pod does not stop but remains in “Terminating” state, enter the following commands:

kubectl get pods -n fleet-management


kubectl delete pod <pod_name_from_above_command> -n fleet-management --grace-period=0 --force
ansible-playbook AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/
02_edge_server/fleet_management/fleet_management_playbook_uninstall.yaml

docker-compose Failure
If you see an error message that docker-compose fails with some variables not defined, add the
environment variables to .bashrc so that they are available to all terminals:

export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
export DOCKER_HOSTNAME=$(hostname)
export DOCKER_USER_ID=$(id -u)
export DOCKER_GROUP_ID=$(id -g)
export DOCKER_USER=$(whoami)
# Check with command
env | grep DOCKER
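
One way to persist these variables, assuming the default bash shell, is to append them to ~/.bashrc and reload it:

cat >> ~/.bashrc <<'EOF'
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
export DOCKER_HOSTNAME=$(hostname)
export DOCKER_USER_ID=$(id -u)
export DOCKER_GROUP_ID=$(id -g)
export DOCKER_USER=$(whoami)
EOF
source ~/.bashrc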


Keytool Not Installed


The keytool utility is used to create the certificate store. Install any preferred Java* version. For
development, Intel used:

sudo apt install default-jre


# Check your Java version:
java -version

Corrupt Database or Nonresponsive Server


Reset the ThingsBoard* server with the following steps.
1. Uninstall the playbook:

ansible-playbook AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/
02_edge_server/fleet_management/fleet_management_playbook_uninstall.yaml
2. After uninstalling the playbook, wait several seconds for all fleet related containers to stop. Verify that
there are no fleet containers running:

docker ps | grep fleet


3. Reinstall the playbook:

ansible-playbook AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/
02_edge_server/fleet_management/fleet_management_playbook_install.yaml

ThingsBoard* Server Errors


These errors can be fixed directly on the hosting machine using Docker* Compose. However, the deployment normally relies on automated Ansible* playbook steps, so try these manual fixes last.
• Reset the database to a pristine state (without customizations from Intel):

# Delete the database and start the server.
# The server is then in its default state, without any customization from Intel.
sudo rm -rf ~/.mytb-data/db ~/.mytb-data/.firstlaunch ~/.mytb-data/.upgradeversion
docker-compose -f 01_docker_sdk_env/docker_compose/02_edge_server/edge-server.all.yml down
CHOOSE_USER=thingsboard docker-compose -f 01_docker_sdk_env/docker_compose/02_edge_server/edge-
server.all.yml up fleet-management

NOTE This only restarts the ThingsBoard* server, without Intel® Smart Edge Open.

• Reset the database to the preconfigured state (with customizations from Intel), and restart the server:

# Start the server with old/corrupted database
CHOOSE_USER=thingsboard docker-compose -f 01_docker_sdk_env/docker_compose/02_edge_server/edge-
server.all.yml up fleet-management
# attach to running container from another terminal:
docker exec -it edge-server-sdk-fleet-management bash

# Inside the container, replace the database with the Intel-customized database:
# Just press tb<Tab>. The tb-server-reset-db.sh script is in the /usr/local/bin folder, so it
# is accessible from anywhere.
tb-server-reset-db.sh
# When asked, press y and Enter. Done.
# Now exit the container and run the commands below again to re-launch the server with the
# preconfigured database state (with Intel customizations):
docker-compose -f 01_docker_sdk_env/docker_compose/02_edge_server/edge-server.all.yml down
CHOOSE_USER=thingsboard docker-compose -f 01_docker_sdk_env/docker_compose/02_edge_server/edge-
server.all.yml up fleet-management

NOTE This only restarts the ThingsBoard* server, without Intel® Smart Edge Open.

• When you deploy the ThingsBoard* container using the Intel® Smart Edge Open Ansible* playbook,
sometimes the server cannot start due to the following error:

edge-server-sdk-fleet-management | 2021-11-25 15:24:34,345 [main] ERROR
com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Exception during pool initialization.
edge-server-sdk-fleet-management | org.postgresql.util.PSQLException: Connection to
localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is
accepting TCP/IP connections.
edge-server-sdk-fleet-management | at
org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:303)
edge-server-sdk-fleet-management | at
org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
edge-server-sdk-fleet-management | at
org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223)
edge-server-sdk-fleet-management | at org.postgresql.Driver.makeConnection(Driver.java:465)
edge-server-sdk-fleet-management | at org.postgresql.Driver.connect(Driver.java:264)
If, after waiting for some time, the server is not up and running, and the server URL localhost:9090 is not
showing the server page, uninstall and reinstall the playbook:

ansible-playbook AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/
02_edge_server/fleet_management/fleet_management_playbook_uninstall.yaml
ansible-playbook AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/
02_edge_server/fleet_management/fleet_management_playbook_install.yaml
Result: The database is reset to the preconfigured database provided by Intel.

Fleet Management Server Dashboard over LAN Issues


If the Dashboard is not accessible from the client, the first step is to make sure that the client and server nodes are in the same subnet. This helper page can be used to find out: https://www.meridianoutpost.com/resources/etools/network/two-ips-on-same-network.php
If the client and server are in the same subnet, then it is possible that you are using proxies that prevent the
connection. To check this on Linux, run the following command:
wget -q -T 3 -t 3 --no-proxy http://<IP>:9090/ && echo "COMMAND PASSED"
Where <IP> is the IP of your server.

If COMMAND PASSED is displayed, configure your browser to not use a proxy when accessing the IP/hostname of the server.

Playbook Install Errors


If you start the basic fleet management server right after a server reboot, you may encounter the error:

fatal: [localhost]: FAILED! => {"changed": false, "msg": "Logging into 10.237.22.88:30003 for
user admin
failed - 500 Server Error for http+docker://localhost/v1.41/auth: Internal Server Error
(\"Get \"https://10.237.22.88:30003/v2/\": dial tcp 10.237.22.88:30003: connect: connection
refused\")"}
1. Wait two minutes until the server is up and running.
2. Verify that all pods are running and no errors are reported:

kubectl get all -A


3. After all pods and services are up and running, restart the basic fleet management server:

ansible-playbook AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/
02_edge_server/fleet_management/fleet_management_playbook_install.yaml

Battery Status Not Available in Dashboard


To verify that the battery is correctly reported by the robot, check it on the client side:

python
>>> import psutil
>>> battery = psutil.sensors_battery()
>>> print("Battery percentage : ", battery.percent)
Battery percentage : 43
When the battery bridge is installed on the robot, the two approaches below are equivalent: launching the kobuki node publishes the battery percentage on the /sensors/battery_state topic, and you can also publish the same message manually using the ros2 topic pub command.

# Publish the battery status manually
ros2 topic pub /sensors/battery_state sensor_msgs/msg/BatteryState "{percentage: 10}"

# or launch the kobuki node (inside the kobuki image), which publishes it automatically
source ros_entrypoint.sh
ros2 launch kobuki_node kobuki_node-composed-launch.py
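
To confirm that the value actually reaches ROS 2, you can also subscribe to the topic from another terminal that uses the same ROS_DOMAIN_ID (a quick check, not part of the dashboard flow):

ros2 topic echo /sensors/battery_state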

Add New Clients to the Fleet Management Server


New Devices can be created when more basic fleet management clients are going to be deployed. Remember to specify the Device Profile with which to associate the Device; this also impacts the associated Rule Chain.


To configure new basic fleet management clients (1-to-1 mapping), retrieve the token of each new Device with Copy access token.

battery-bridge-kernel-module Install Failure


Follow the steps below:

cd components/amr_battery_bridge_kernel_module/src/
# uninstall battery-bridge-kernel-module
sudo ./module_install.sh -u
# check if below path exists
ls /sys/class/power_supply/BAT0
If the above path exists, another kernel module already occupies that place, and the provided battery-bridge-kernel-module cannot be installed. In this case, the provided solution does not work.

Pod Remains in “Terminating” State after Uninstall


After uninstall, if the pod does not stop but remains in a “Terminating” state, enter the following commands:

kubectl get pods -n ovms-tls


kubectl delete pod <pod_name_from_above_command> -n ovms-tls --grace-period=0 --force
ansible-playbook AMR_server_containers/01_docker_sdk_env/docker_orchestration/ansible-playbooks/
02_edge_server/openvino_model_server/ovms_playbook_uninstall.yaml

Intel® Edge Software Device Qualification (Intel® ESDQ) for EI for AMR

Overview
Intel® Edge Software Device Qualification (Intel® ESDQ) for EI for AMR provides customers with the capability to run an Intel-provided test suite on the target system, enabling partners to determine their platform's compatibility with EI for AMR.
The target of this self-certification suite is EI for AMR compute systems. These platforms are the brain of the robot kit: they are responsible for taking input from sensors, analyzing it, and giving instructions to the motors and wheels that move the EI for AMR.

How It Works
The EI for AMR test modules interact with the Intel® ESDQ CLI through a common test module interface (TMI) layer, which is part of the Intel® ESDQ binary. Intel® ESDQ generates a complete test report in HTML format, along with detailed logs packaged as one zip file, which you can manually choose to email to:
[email protected]

NOTE Each test and its pass/fail criteria is described below. To jump to the installation process, go to
Download and Install Intel® ESDQ for EI for AMR.

Intel® ESDQ for EI for AMR contains the following test modules.
• Docker* Container
This module verifies that the EI for AMR comes as a Docker* container and it can run on the target
platform.

For more information, go to the Docker* website.
The test is considered Pass if:
• The Docker* container can be opened.
• Intel® RealSense™ Camera
This module verifies the capabilities of the Intel® RealSense™ technology on the target platform.
For more information, go to the Intel® RealSense™ website.
The tests within this module verify that the following features are installed properly on the target platform
and that EI for AMR and the Intel® RealSense™ camera are functioning properly:
• The camera is detected and is working.
• Intel® RealSense™ SDK.
The tests are considered Pass if:
• The Intel® RealSense™ SDK 2.0 libraries are present in the Docker* container.
• A simple C++ file can be compiled using g++ and -lrealsense2 flag.
• Intel® RealSense™ Topics are listed and published.
• The number of FPS (Frames Per Second) are as expected.
• Intel® VTune™ Profiler
This module runs the Intel® VTune™ Profiler on the target system.
For more information, go to the Intel® VTune™ Profiler website.
The test is considered Pass if:
• VTune™ Profiler runs without errors.
• VTune™ Profiler collects Platform information.
• rviz2 and FastMapping
This module runs the FastMapping application (the version of octomap optimized for Intel) on the target
system and uses rviz2 to verify that it works as expected.
For more information, go to the rviz wiki.
The test is considered Pass if:
• FastMapping is able to create a map out of a pre-recorded ROS 2 bag.
• Turtlesim
This module runs the Turtlesim ROS 2 application on the target system and checks if it works as expected.
For more information, go to the Turtlesim wiki.
The test is considered Pass if:
• Turtlesim opens and runs without error.
• Intel® oneAPI Base Toolkit
This module verifies some basic capabilities of Intel® oneAPI Base Toolkit on the target platform.
For more information, go to the Intel® oneAPI Base Toolkit website.
The tests within this module verify that the following features are functioning properly on the target
platform:
• DPC++ compiler
• CUDA to DPC++ converter
This test is considered Pass if:
• A simple C++ file can be compiled using the DPC++ compiler, and it runs as expected.
• CUDA can be installed.


• A CUDA specific file can be converted to DPC++ and it runs as expected.


• OpenVINO™ Toolkit
This module verifies two core features of the OpenVINO™ Toolkit:
• OpenVINO™ model optimizer
• Object detection using TensorFlow*
The test is considered Pass if:
• The OpenVINO™ model optimizer is able to transform a TensorFlow* model into an Intermediate Representation (IR) of the network, which can then be inferred with the Inference Engine.
• Object Detection on CPU
This module verifies object detection using OpenVINO™ on CPU.
The test is considered Pass if:
• The object is detected.
If the test fails, you can compare the expected picture with the actual picture obtained by the test.
• Object Detection on VPU
This module verifies object detection using OpenVINO™ on VPU.
The test is considered Pass if:
• The object is detected.
If the test fails, you can compare the expected picture with the actual picture obtained by the test.
• Object Detection on Intel® Movidius™ Myriad™ X VPU
This module verifies object detection using OpenVINO™ on Intel® Movidius™ Myriad™ X VPU.
The test is considered Pass if:
• The object is detected.
If the test fails, you can compare the expected picture with the actual picture obtained by the test.
• GStreamer* Video
This module verifies if a GStreamer* Video Pipeline using GStreamer* Plugins runs on the target system.
The test is considered Pass if:
• The Video Pipeline was opened on the host without errors.
• GStreamer* Audio
This module verifies if a GStreamer* Audio Pipeline using GStreamer* Plugins runs on the target system.
The test is considered Pass if:
• The Audio Pipeline was opened on the host without errors.
• GStreamer* Autovideosink Plugin - Display
This module verifies if a stream from a camera compatible with libv4l2 can be opened and displayed using
GStreamer*.
The test is considered Pass if:
• No error messages are displayed while running the gst-launch command.
This test may FAIL or be skipped if the target system does not have a web camera connected.
• GStreamer* Intel® RealSense™ Video Plugin
This module verifies if a GStreamer* Video Pipeline using the Intel® RealSense™ Plugin runs on the target
system.
The test is considered Pass if:

• No error messages are displayed while running the gst-launch command.
This test may FAIL or be skipped if the target system does not have an Intel® RealSense™ camera connected.
• ADBSCAN
This module verifies if the ADBSCAN algorithm works on the target system.
The test is considered Pass if:
• The ADBSCAN algorithm works on the target system.
• Collaborative Visual SLAM
This module verifies if the collaborative visual SLAM algorithm works on the target system.
The test is considered Pass if:
• The collaborative visual SLAM algorithm works on the target system.
• Kudan visual SLAM
This module verifies if the Kudan visual SLAM algorithm works on the target system.
The test is considered Pass if:
• The Kudan visual SLAM algorithm works on the target system.

Get Started
This tutorial takes you through installing the Intel® ESDQ CLI tool, which is installed as part of EI for AMR.
Refer to How It Works before starting the installation. To use this tutorial, you must Download and Install
Intel® ESDQ for EI for AMR.

Download and Install Intel® ESDQ for EI for AMR


Intel® ESDQ is optionally bundled with EI for AMR solutions.
1. Download a configuration that includes Intel® ESDQ.
a. Go to the Product Download page.
b. Select Robot Complete Kit.
c. Select Customize Download.


d. Click Next, until you get to step 4.


e. On the Reference Implementations page, make sure that Intel® Edge Software Device
Qualification is checked.


f. Make sure that AMR Bag Files and AMR Kudan SLAM are selected:


g. Click Next until you get to the Download page, and click on Download.
2. Follow the steps in the Get Started Guide for Robots to extract and install EI for AMR.
3. Run your target self-certification. Intel® ESDQ for EI for AMR contains three types of self-certifications:
• Run the Self-Certification Application for Compute Systems for certifying Intel® based compute
systems with the EI for AMR software
• Run the Self-Certification Application for RGB Cameras for certifying RGB cameras with the EI for
AMR software
• Run the Self-Certification Application for Depth Cameras for certifying depth cameras with the EI for
AMR software

Run the Self-Certification Application for Compute Systems


1. Change the directory:

cd $HOME/edge_software_device_qualification/Edge_Software_Device_Qualification_For_AMR_*/esdq

2. Unzip the ROS 2 bags used in the tests:

unzip ../AMR_containers/01_docker_sdk_env/docker_compose/06_bags.zip -d ../AMR_containers/01_docker_sdk_env/docker_compose/
sudo chmod 0777 -R ../AMR_containers/01_docker_sdk_env/docker_compose/06_bags
3. Run the Intel® ESDQ test, and generate the report:

export ROS_DOMAIN_ID=19
./esdq run -r
Expected output (These results are for illustration purposes only.)

NOTE The OpenVINO™ Object Detection Myriad Test Failure above is shown for demonstration
purposes only. The test is expected to pass.

Run the Self-Certification Application for RGB Cameras


Prerequisites: The RGB camera has a ROS 2 node that can publish:
• The RGB raw image on topic: “/camera/color/image_raw” on fixed frame: “camera_color_frame”

NOTE The following steps use the Intel® RealSense™ ROS 2 node as an example. You must change the
node to your actual camera ROS 2 node.

1. Change the directory:

cd $HOME/edge_software_device_qualification/Edge_Software_Device_Qualification_For_AMR_*/esdq
2. Start the sensor ROS 2 node:
a. Replace the example commands below with the commands you use to open the RGB camera in its certification Docker* container. If there is no Docker* container, run the RGB camera node in a ROS 2 environment after setting the ROS_DOMAIN_ID.

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
docker-compose -f 01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml run realsense bash
b. Replace the example commands below with the commands you use to start the RGB camera's certification ROS 2 node.

source ros_entrypoint.sh
# set a unique id here that is used in all terminals
export ROS_DOMAIN_ID=19
ros2 launch realsense2_camera rs_launch.py &
c. The self-certification test expects the camera stream to be on the “/camera/color/image_raw” topic. This topic must be visible in rviz2 using the “camera_color_frame” fixed frame. If your camera ROS 2 node does not stream to that topic by default, use ROS 2 remapping to publish to it (a remapping sketch is shown after step 3 below).

ros2 topic list


3. Run the Intel® ESDQ test, and generate the report:

export ROS_DOMAIN_ID=19
./esdq run -r -p "sensors_rgb"
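
As referenced in step 2.c, the following is a sketch of how a camera node that publishes on a different topic could be remapped onto the expected topic. The package name, executable name, and source topic are placeholders; use the ones that apply to your camera driver:

ros2 run <your_camera_pkg> <your_camera_node> --ros-args -r /your_color_topic:=/camera/color/image_raw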

Run the Self-Certification Application for Depth Cameras


Prerequisites: The depth camera has a ROS 2 node that can publish:
• Point cloud points on topic: “/camera/depth/color/points” on the fixed frame: “camera_link”
• The depth raw image on topic: “/camera/depth/image_rect_raw” on the fixed frame: “camera_link”

NOTE The following steps use the Intel® RealSense™ ROS 2 node as an example. You must change the
node to your actual camera ROS 2 node.

1. Change the directory:

cd $HOME/edge_software_device_qualification/Edge_Software_Device_Qualification_For_AMR_*/esdq


2. Start the sensor ROS 2 node:


a. Replace the example commands below with the commands you use to open the depth camera in its certification Docker* container. If there is no Docker* container, run the depth camera node in a ROS 2 environment after setting the ROS_DOMAIN_ID.

source ./01_docker_sdk_env/docker_compose/common/docker_compose.source
export CONTAINER_BASE_PATH=`pwd`
docker-compose -f 01_docker_sdk_env/docker_compose/01_amr/amr-sdk.all.yml run realsense bash
b. Replace the example commands below with the commands you use to start the depth camera's certification ROS 2 node.

source ros_entrypoint.sh
# set a unique id here that is used in all terminals
export ROS_DOMAIN_ID=19
ros2 launch realsense2_camera rs_launch.py pointcloud.enable:=true &
c. The self-certification test expects the camera streams to be on the “/camera/depth/color/points” and “/camera/depth/image_rect_raw” topics. These topics must be visible in rviz2 using the “camera_link” fixed frame. If your camera ROS 2 node does not stream to those topics by default, use ROS 2 remapping to publish to them (a remapping sketch is shown after step 3 below).

ros2 topic list


3. Run the Intel® ESDQ test, and generate the report:

export ROS_DOMAIN_ID=19
./esdq run -r -p "sensors_depth"
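
As referenced in step 2.c, the following is a sketch of topic remapping for a depth camera whose node publishes on different topic names. The package name, executable name, and source topics are placeholders; use the ones that apply to your camera driver:

ros2 run <your_camera_pkg> <your_camera_node> --ros-args -r /your_points_topic:=/camera/depth/color/points -r /your_depth_image_topic:=/camera/depth/image_rect_raw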

Send Results to Intel


Once the automated and manual tests are executed successfully, you can submit your test results and get
your devices listed on the Intel® Edge Software Recommended Hardware site.
Send the zip file that is created after running Intel® ESDQ tests to: [email protected].
For example, after one of our local runs the following file was generated:
esdqReport_2022-03-09_13:22:28.zip

Troubleshooting
For issues, go to: Troubleshooting for Robot Tutorials.

Support Forum
If you’re unable to resolve your issues, contact the Support Forum.

Security
This section provides an overview of the security features offered by the Edge Insights for Autonomous Mobile Robots (EI for AMR) platform. For further reading, refer to the specific documents listed below.

Shim Layer - Protect your application data


EI for AMR includes open-source components, which may be affected by vulnerabilities. A shim layer can help to protect your program data against an attack initiated via these vulnerabilities.
The main task of the shim layer is to reduce the attack surface by verifying the data (such as size, value range, memory range, etc.) transferred via a function call to or from a library or an executable, thereby protecting the customer code and data.

Due to architecture constraints, only the developer of the application code can implement the matching shim layer correctly.
The following picture shows a potential implementation of a shim layer wrapped around the customer application like a shell.

Keep in mind that complex checks and additional layers might have an impact on overall system performance.
In general, it is highly recommended to check the component websites regularly for updates and vulnerabilities.

Edge Insights for Autonomous Mobile Robots Platform


The main EI for AMR platform is based on the 11th generation Intel® Core™ processor with accelerators
primarily used for AI inference and vision processing. The platform inherits many security elements from the
processor.

Security Use Cases and Features


The EI for AMR platform offers various security features that customers can leverage in the context of
Autonomous Robotics Applications. They are listed as follows:
Secure Boot
Ensure the system boots from a trusted source and is not manipulated by an attacker. To establish a secure boot, a chain of trust is set up; the root of trust is unmodifiable by nature. Typically, the root of trust is a key burned into fuses in the device or ROM-based program code.
Intel devices support secure boot with Intel® Trusted Execution Technology (Intel® TXT) and offer, via the Intel® CSME, a software implementation of the Trusted Platform Module (TPM).
More information about the described use cases and features can be found in the following documents:

• Intel® Converged Boot Guard and Intel® Trusted Execution Technology (Intel® TXT), Intel document ID 575623: https://cdrdv2.intel.com/v1/dl/getContent/575623
• Tiger Lake platform - Firmware Architecture Specification, Intel document ID 608531: https://cdrdv2.intel.com/v1/dl/getContent/608531
• Intel® Trusted Execution Technology (Intel® TXT) DMA Protection Ranges, Intel document ID 633933: https://cdrdv2.intel.com/v1/dl/getContent/633933
• Intel® Trusted Execution Technology (Intel® TXT) Enabling Guide: https://www.intel.com/content/www/us/en/developer/articles/guide/intel-trusted-execution-technology-intel-txt-enabling-guide.html
• Trusted Platform Module Specification: https://trustedcomputinggroup.org/

Authentication
Authentication helps to develop a secure system. A run-time authentication system is the next step after secure boot: any program code can be authenticated before it is executed by the system. This powerful tool enables AMR suppliers to guarantee a level of security and safety during run-time, because executing code from an unknown source or malware would not be possible.
The Intel® Dynamic Application Loader (Intel® DAL) is a feature of Intel® platforms that allows you to run
small portions of Java* code on Intel® Converged Security and Management Engine (Intel® CSME) firmware.
Intel has developed DAL Host Interface Daemon (also known as JHI), which contains the APIs that enable a
Linux* operating system to communicate with Intel DAL. The daemon is available both in a standalone
software package and as part of the Linux* Yocto 64-bit distribution.
More information about the described use cases and features can be found in the following documents:


• Trusty TEE Software Architecture Specification, Intel document ID 607736: https://cdrdv2.intel.com/v1/dl/getContent/607736
• Intel® Dynamic Application Loader (Intel® DAL) Developer Guide: https://www.intel.com/content/www/us/en/develop/documentation/dal-developer-guide/top.html

Virtualization
Virtualization is another important element to increase the level of security and safety. It helps to establish freedom from interference (FFI), as required for safety use cases, and supports workload consolidation. Intel devices have supported this use case with Intel® Virtualization Technology (Intel® VT) for decades.
More information about the described use cases and features can be found in the following documents:

• Intel® 64 and IA-32 Architectures Software Developer Manuals: https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html

Encryption
Encryption is required for many security use cases. The EI for AMR platform supports common encryption algorithms such as AES and RSA in hardware. This increases encryption/decryption performance and the security level. Typical use cases are the encryption of communication messages, a file system, or single files for IP protection, or the creation of secure storage for security-relevant data such as crypto keys or passwords. Another use case is memory encryption; the EI for AMR platform supports this with the Total Memory Encryption (TME) feature.
More information about the described use cases and features can be found in the following documents:

• Tiger Lake platform Intel® Total Memory Encryption (Intel® TME), Intel document ID 620815: https://cdrdv2.intel.com/v1/dl/getContent/620815
• Whitley Platform Memory Encryption Technologies - TME/MK-TME Deep Dive, Intel document ID 611211: https://cdrdv2.intel.com/v1/dl/getContent/611211
• Intel® 64 and IA-32 Architectures Software Developer Manuals: https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html
• Filesystem-level encryption (Linux*): https://www.kernel.org/doc/html/v4.16/filesystems/fscrypt.html
• Intel® Advanced Encryption Standard Instructions (AES-NI): https://www.intel.com/content/dam/develop/external/us/en/documents/introduction-to-intel-secure-key-instructions.pdf

Firmware Update

To improve the security and safety status over the lifetime of a device, the internal firmware (e.g. BIOS)
must be updatable. In this case the update packages are signed by the supplier (e.g. Intel, OEM etc.).
More information about the described use cases and features can be found in the following document:

• Tiger Lake platform - Firmware Architecture Specification, Intel document ID 608531: https://cdrdv2.intel.com/v1/dl/getContent/608531

Secure Debug
Debugging is an important feature during product development. During in-field usage, debugging might also be needed to analyze field returns. To prevent anyone from accessing internal resources via the debugger, a secure debugging system is used. In this case, an engineer who wants to use the debugger has to authenticate via a valid token, which has to be presented to the system (e.g., by storing it in flash). Tokens must be signed by a key that was stored in the device fuses during the manufacturing flow.
More information about the described use cases and features can be found in the following documents:

• Tiger Lake platform - Firmware Architecture Specification, Intel document ID 608531: https://cdrdv2.intel.com/v1/dl/getContent/608531
• Tiger Lake platform enDebug User Guide, Intel document ID 630604: https://cdrdv2.intel.com/v1/dl/getContent/630604
• Anderson Lake Secure Debug User Guide, Intel document ID 614222: https://cdrdv2.intel.com/v1/dl/getContent/614222

Docker* Installation and Usage


Docker* is not a primary use case of EI for AMR systems. Docker* uses virtualization on the OS-level to
deliver software in packages called containers. The EI for AMR bundle is delivered in this form. It is up to you
to decide whether or not to re-use this approach in the final product.
The host system owner can improve the security level of the Docker* installation and Docker* during run-
time. For this, it is useful to check if your system follows Docker* best practices.
Additionally, it would be good to check your Docker* installation with the CIS Docker* benchmark. This
benchmark checks several aspects of the installation and run-time configuration and gives you a good
indication of improvements.
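
One common way to run such a check, assuming the target has internet access, is the open-source Docker Bench for Security script, which automates many of the CIS Docker* benchmark checks:

git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh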

Real-Time Support
Intel real-time technology supports new solutions that require a high degree of coordination, both within and
across network devices. Intel® Time Coordinated Computing (Intel® TCC)-enabled processors deliver optimal
compute and time performance for real-time applications. Using integrated or discrete Ethernet controllers
featuring IEEE 802.1 Time Sensitive Networking (TSN), these processors can power complex real-time
systems.
For more information, refer to:
• Intel Real-Time Computing IoT Technology Resources
• Intel® Time Coordinated Computing Tools


Enable Intel® TCC Baseline Support

The baseline support for Intel® TCC consists of a real-time Linux* kernel and the Intel® TCC Mode in the
BIOS. These features may be sufficient to satisfy many use cases, with initial cycle times possible in the
hundreds of microseconds.

Requirements
• Processor with support for Intel® TCC, such as Intel® i7-1185GRE, i5-1145GRE, and i3-1115GRE
processors (The complete list of compatible devices is on the TCC Tools webpage.)
• BIOS with support for Intel® TCC, such as Intel’s reference BIOS

Enable the Intel® TCC Mode in the BIOS

NOTE The reference BIOS may differ from BIOS versions offered by BIOS vendors. If you cannot find
Intel® TCC Mode in the BIOS, contact your BIOS vendor for more information.

1. Go to the BIOS menu.


2. In the BIOS, navigate to Intel Advanced Menu > Intel® Time Coordinated Computing.
3. Set Intel® TCC Mode to <Enabled>.
• For more details, go to Enable Intel® TCC Mode in BIOS.
• For additional configuration, go to Configure Intel® TCC Tools in BIOS.

Configure, Build, and Install Intel’s Linux* kernel with Real-Time Support
1. Install required dependencies:

sudo apt -y install build-essential gcc bc bison flex libssl-dev libncurses5-dev libelf-dev
dwarves zstd
2. Clone Intel’s Linux* kernel git repository:

git clone https://github.com/intel/linux-intel-lts.git


3. Enter the newly created directory:

cd linux-intel-lts
4. Reset to the latest 5.15 RT tag:

git reset --hard lts-v5.15.49-rt47-preempt-rt-220718T080310Z


5. Create a .config file:

make olddefconfig
6. Configure the kernel EI for AMR options:

scripts/config --module CONFIG_VIDEO_INTEL_IPU


scripts/config --enable CONFIG_VIDEO_INTEL_IPU_SOC
scripts/config --disable CONFIG_VIDEO_INTEL_IPU_FW_LIB
scripts/config --enable CONFIG_VIDEO_INTEL_IPU_USE_PLATFORMDATA
scripts/config --enable CONFIG_V4L_PLATFORM_DRIVERS
scripts/config --module CONFIG_VIDEO_LT6911UXC
scripts/config --disable CONFIG_VIDEO_CRLMODULE
scripts/config --disable CONFIG_VIDEO_INTEL_IPU_PDATA_DYNAMIC_LOADING
scripts/config --module CONFIG_VIDEO_IMX390
scripts/config --module CONFIG_VIDEO_TI960
scripts/config --module CONFIG_VIDEO_INTEL_IPU6
scripts/config --module CONFIG_VIDEO_AR0234
scripts/config --module CONFIG_PINCTRL_TIGERLAKE
scripts/config --enable CONFIG_INTEL_IPU6_TGLRVP_PDATA
scripts/config --set-str SYSTEM_TRUSTED_KEYS ""
scripts/config --set-str CONFIG_SYSTEM_REVOCATION_KEYS ""
7. Configure the Intel® TCC-related options:

scripts/config --module CONFIG_X86_TCC_PTCM


8. Build the kernel:

make deb-pkg -j8


9. Install the kernel:

cd ../ && sudo dpkg -i linux-*.deb


10. Modify the kernel command line for the selected kernel by adding these options to /etc/default/grub:

apparmor=1 security=apparmor art=virtallow udmabuf.list_limit=8192 processor.max_cstate=0
intel.max_cstate=0 processor_idle.max_cstate=0 intel_idle.max_cstate=0 clocksource=tsc
tsc=reliable nowatchdog intel_pstate=disable idle=poll rcupdate.rcu_cpu_stall_suppress=1
irqaffinity=0 mce=off hpet=disable numa_balancing=disable igb.blacklist=no efi=runtime
hugepages=1024 nmi_watchdog=0 i915.force_probe=* i915.enable_rc6=0 i915.enable_dc=0
i915.disable_power_well=0 isolcpus rcu_nocbs noht nosoftlockup rcu_nocb_poll vt.handoff=7
i915.enable_guc=2
11. Make the new kernel the default by modifying /etc/default/grub:

GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux v5.15.49-rt47"


12. Update grub:

sudo update-grub
13. Reboot the board:

sudo reboot -fn
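
After the reboot, you can verify that the real-time kernel is the one actually running; this is only a sanity check, and the exact version string depends on the tag you checked out:

uname -r
# A PREEMPT_RT build typically reports PREEMPT_RT in its version banner
uname -v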

Terminology
Term Description

ADBSCAN Adaptive Density-Based Spatial Clustering of Applications with Noise

AGV Autonomous Guided Vehicle

AI Artificial Intelligence

amcl adaptive Monte-Carlo localizer

AOTA Application Over The Air

API Application Programming Interface

ARIAC Agile Robotics for Industrial Automation Competition

AT ATtention

BRIEF Binary Robust Independent Elementary Feature

CNDA Corporate Non-Disclosure Agreement

CPU Central Processing Unit

CUDA Compute Unified Device Architecture

DBSCAN Density-Based Spatial Clustering of Applications with Noise

DDS Data Distribution Service

DI Device Initialization protocol

DL Deep Learning

DMS Device Management Service

DNS Domain Name System

DPC++ Data Parallel C++

DRM Deterministic Road Map

EI for AMR Edge Insights for Autonomous Mobile Robots

EOF End-Of-File

FAST Features from Accelerated and Segments Test

FDO FIDO Device Onboard

FIDO Fast IDentity Online

FLANN Fast Library for Approximate Nearest Neighbors

FM FastMapping

FOTA Firmware Over The Air

GEAR Gazebo Environment for Agile Robotics

GPU Graphics Processor Unit

GPS Global Positioning System

GSLAM General Simultaneous Localization and Mapping

GUI Graphical User Interface

ICP Iterative Closest Point

IDE Integrated Development Environment

IE Inference Engine

IMU Inertial Measurement Unit

IP Intellectual Property

IPU Image Processing Unit

Intel® SDO Intel® Secure Device Onboard

ITS Intelligent sampling and Two-way Search

JIT Just-In-Time

KVM Kernel-based Virtual Machine

LAN Local Area Network

LIDAR LIght Detection And Ranging

MBIM Mobile Interface Broadband Model

MLS Moving Least Squares

MQTT Message Queuing Telemetry Transport

MSM Mobile Station Modem

NFS Network File System

NN Neural Network

ORB Oriented FAST and Rotated BRIEF

OSRF Open Source Robotics Foundation

OS Operating System

OTA Over The Air

PCL Point Cloud Library

POTA Programming Over The Air (includes SOTA and FOTA)

PRM Probabilistic Road Map

QCDM Qualcomm Diagnostic Monitor

QMI Qualcomm MSM Interface

RDC Resource and Documentation Center

RGBD Red, Green, Blue plus Depth

ROS Robot Operating System

RPLIDAR 360-degree 2D LIDAR solution developed by SLAMTEC

RPM Red Hat* Package Manager

RTAB-Map Real-Time Appearance-Based Mapping

RV RendezVous

SCTP Stream Control Transmission Protocol

SDK Software Development Kit

SFTP SSH File Transfer Protocol

SLAM Simultaneous Localization And Mapping

SOTA operating System Over The Air

SPIR Standard Portable Intermediate Representation

SSD Single-Shot multi-box Detection

SSL Secure Sockets Layer

TCP Transmission Control Protocol

TLS Transport Layer Security

TMI Test Module Interface

UDP User Datagram Protocol

UEFI Unified Extensible Firmware Interface

VNC Virtual Network Computing

vSLAM visual Simultaneous Localization And Mapping

WWAN Wireless Wide Area Network

Notices and Disclaimers


You may not use or facilitate the use of this document in connection with any infringement or other legal
analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free
license to any patent claim thereafter drafted which includes subject matter disclosed herein.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this
document.
All product plans and roadmaps are subject to change without notice.
The products described may contain design defects or errors known as errata which may cause the product
to deviate from published specifications. Current characterized errata are available on request.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of
merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from
course of performance, course of dealing, or usage in trade.
Intel technologies may require enabled hardware, software or service activation.
No product or component can be absolutely secure.
Your costs and results may vary.
Code names are used by Intel to identify products, technologies, or services that are in development and not
publicly available. These are not “commercial” names and not intended to function as trademarks.
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its
subsidiaries. Other names and brands may be claimed as the property of others.
This software and the related documents are Intel copyrighted materials, and your use of them is governed
by the express license under which they were provided to you (License). Unless the License provides
otherwise, you may not use, modify, copy, publish, distribute, disclose or transmit this software or the
related documents without Intel’s prior written permission.
This software and the related documents are provided as is, with no express or implied warranties, other
than those that are expressly stated in the License.
