Table of contents:
1. Introduction
2. Getting Started
3. User Interface
4. Extensibility
5. Human Machine Interface (HMI)
6. Programming Reference
7. Programming Tips
8. Technical Issues
9. Working with GigE Vision® Devices
10. Machine Vision Guide
11.
12. Appendices
1. Introduction
Table of contents:
What's new in 5.0?
Product Overview
How to Learn?
User Manual Conventions
What's new in 5.0?
Worker Tasks
Before 5.0, you could only have one main loop in the program and everything happened there. Now it is possible to perform many
computations in parallel! Read more at:
Worker Tasks section of the Macrofilters article.
Creating a Worker Task in the Project Explorer.
Parallel Image Saving program example.
Parallel Enumeration program example.
HMI Events
Event-based programming is now possible in our HMI Designer. You can easily create separate subprograms that will be executed when something
happens—for example, when the user clicks a button, logs in or changes a specific parameter. Read more at:
Handling HMI Events article.
HMI Handling Events program example.
Results control
This new powerful control is used for easy definition of Pass/Fail criteria. You just select a filter and set the range for its numeric outputs. What is
more, the Results control also collects statistics automatically. Read more at:
Running and Analysing Programs article.
Main Window Overview article.
Bottle Inspector Part 1 tutorial.
Bottle Inspector Part 2 tutorial.
Module encryption
Sometimes you need to hide the contents of some macrofilters or protect them against unauthorized access. You can achieve this by using our new
module encryption function. Read more at:
Locking Modules section of the Project Explorer article.
Product Overview
Welcome!
Thank you for choosing Aurora Vision Studio. What you have bought is not only a software package and support, but also comprehensive access to
image analysis technology. Please keep in mind that machine vision is a highly specialized domain of knowledge and mastering it takes time.
However, whenever you encounter any problem, do not hesitate to contact our support team for assistance ([email protected]). We believe
that by working together we can overcome any obstacles and make your projects highly successful. We wish you a good time!
Other Products
Aurora Vision Studio Runtime
Aurora Vision Studio is the application that is installed on the developer's computer. Programs created with Studio can be later executed (with the
HMI) on other machines by using another product, Aurora Vision Studio Runtime, which includes the Aurora Vision Executor application. This
lightweight application does not allow for making changes to the program and is intended for use in the production environment.
How to Learn?
Prerequisites
Aurora Vision Studio Professional is a drag and drop environment which makes it possible to create advanced vision inspection algorithms without
writing a single line of code. Nevertheless, as a sophisticated tool and a fully-fledged visual programming language, it requires some effort before you
become a power user.
Typically, the prerequisites required to start learning Aurora Vision Studio are:
higher technical education,
basic course in image processing and preferably also in computer vision,
the ability to understand written technical documentation in English.
Learning Materials
The available materials for learning Aurora Vision Studio are:
This User Manual
– contains detailed instructions on how to use the features of Aurora Vision Studio.
The online video tutorials
– provide a quick start for the most popular topics.
The book "Image Analysis Techniques for Industrial Vision Systems"
– explains the theory that lies behind the filters (tools) of Aurora Vision Studio.
E-mail Support
– do not hesitate to contact us if anything still remains unclear.
Additionally, it is recommended to go through the entire content of the Toolbox control and make sure that you understand when and how you can
use each of the tools. This might take some time, but it ensures that in your projects you will use the right tools for the right tasks.
Terms Checklist
When you are ready to create your first real-life application, please review the following article to make sure that you have not missed any important
detail: Summary: Common Terms that Everyone Should Understand
The main factors that affect the performance of your vision system are:
Hardware (CPU and GPU) – benchmarks can be found under this link. We hope this will help you to evaluate the hardware impact on the performance of your algorithm.
Complexity of your algorithm (number of tools).
Your programming skills (appropriate choice of the tools and how they are parametrized – especially important when more advanced tools, e.g. Template Matching, are used).
Speed requirements of your vision system.
Size of the input data.
Capacity and configuration of the network.
Installation Procedure
Important: The Aurora Vision Studio setup program requires the user to have administrative privileges.
The installation procedure consists of several straightforward steps:
Setup language
Please choose the language for the installation process and click OK to proceed.
License agreement
Click I accept the agreement if you agree with the license agreement and then click Next to continue.
Location on disk
Choose where Aurora Vision Studio should be installed on your disk.
Components
Choose which additional components you wish to install.
Start Menu shortcut
Choose if Aurora Vision Studio should create a Start Menu shortcut.
Additional Options
On this screen you can decide if the application should create a Desktop shortcut. You can also decide to associate Aurora Vision Studio with the
.avproj files.
Installing
Click Install to continue with the installation or Back to review or change any settings.
Please wait until all files are copied.
Final Options
At the end of the installation you are able to run the application.
Uninstallation Procedure
To remove Aurora Vision Studio from your computer please launch Add or remove programs, select Aurora Vision Studio Professional » Uninstall
and follow the on-screen instructions.
Minimal
Minimal program view is recommended for small-scale applications, typically ones that fit into one screen. In this mode, connections between filters are hidden and named output labels are used instead. The main benefit here is that basic applications can be easily presented in a single view. On the other hand, when the program gets bigger, dependencies between different program elements may become less clear and the program becomes more difficult to analyze.
In Minimal program view you create connections by naming outputs and selecting them from a drop-down menu in the Properties window. The drag & drop action between filters is also possible, but the connection still remains hidden.
This view is the only one available in the SMART edition.
The Minimal view in the Bottle Crate example.
Compact
Compact program view is the default one in the Professional edition. It aims to provide the optimal level of detail by displaying explicit connections between filters while still hiding some inputs and outputs that are usually not being connected (you can use the Show/Hide action to change that for any port).
This mode suits both simple and complex applications well.
Full
Unlike in the Compact mode, here
all the inputs and outputs of the
given filter are always visible. Most
importantly, this mode provides a
handy preview of the filter's
properties directly in the Program
Editor – whenever possible they
are displayed on the right margin
of the editor. This, however,
comes at the cost of higher
verbosity which may impact
readability.
The Application Toolbar contains buttons for most commonly used actions and settings.
Here is the list of the most important menu commands:
File
Connect to Remote Executor – Opens a window that provides connection with a remote Aurora Vision Executor, allowing application deployment of the current project. See also Remote Executor for details.
Export to Runtime Executable – Exports the current project to an executable file which can be run (but not edited) with Aurora Vision Executor.
Generate C++ Code – Creates a C++ program for the currently opened Aurora Vision program. See also C++ Code Generator for details.
Generate .NET Macrofilter Interface – Generates a .NET assembly which provides the chosen macrofilters as class methods. Generated assembly may be referenced in managed languages: C# or Visual Basic .NET. See .NET Macrofilter Interface Generator for more information.
Edit
Rename Current Macrofilter (F2) – Renames the current macrofilter.
Remove HMI – Unbinds HMI from the current project and clears all the HMI controls.
Program
Startup Program (combo box) – Selects a worker task from which the execution process will start. When the execution is paused, shows active worker tasks and allows the user to switch to them. See Testing and Debugging for more details.
Run (F5) – Executes the program until all iterations are finished, the user presses the Pause or Stop button, or something else pauses the program. See Testing and Debugging for more details.
Run Single Worker (F5) – Executes the program running only the primary worker task until all iterations are finished, the user presses the Pause or Stop button, or something else pauses the program.
Iterate (F6) – Executes the program to the end of a single iteration of the outermost Task. See also Execution Process and Testing and Debugging for more information about iterations.
Pause (Ctrl+Alt+Pause) – Suspends the current program execution immediately after the last invoked filter (tool) is finished.
Stop (Shift+F5) – Stops the program immediately after the last invoked filter is finished.
Iterate Back (Shift+F6) – Executes the program to the end of a single iteration, reversing the direction of enumerating filters. See Testing and Debugging for more details.
Iterate Current Macrofilter (Ctrl+F10) – Executes the current program to the end of the currently selected macrofilter. See Testing and Debugging for more details.
Step Over (F10) – Executes the next single instance of a filter or a macrofilter, without entering into the macrofilter.
Step Into (F11) – Executes the next single filter instance. If it is an instance of a macrofilter, it enters inside of it.
Step Out (Shift+F11) – Executes all filters till the end of the current macrofilter, and exits to the parent macrofilter.
Run with Aurora Vision Executor (Ctrl+F5) – Executes the program using the Aurora Vision Executor application installed on the local machine.
Diagnostic Mode – Turns Diagnostic Mode on or off; it controls whether filters compute additional information helpful for program debugging, at the cost of slower program execution.
Program Statistics (F8) – Shows information about the execution time of each filter in the selected macrofilter.
View
HMI Designer – Switches the HMI Designer window visibility. See Designing HMI for details.
Program Display: Minimal – Switches the Program Editor into a mode where no inputs or outputs are displayed, and connections are created by selecting data sources in the Properties window.
Program Display: Compact – Switches the Program Editor into a mode where primary inputs and outputs are visible and can be connected in a visual way.
Program Display: Full – Switches the Program Editor into a mode where all inputs and outputs are visible, together with input value previews on the editor's margin.
User-defined Preview Layout 1 – Activates the first preview layout defined by the user.
User-defined Preview Layout 2 – Activates the second preview layout defined by the user.
User-defined Preview Layout 3 – Activates the third preview layout defined by the user.
Auto Preview Layout – Activates a preview layout which will be automatically filled in accordance with the currently selected filter.
HMI Design Layout – Activates a preview layout containing only the HMI window.
Tools
Check Project for Issues – Checks if the current project has any issues.
Manage GigE Vision Devices – Opens a manager for enumerating and configuring GigE Vision compatible cameras.
Macrofilters Preview Generator – Saves a graphical overview of selected macrofilters.
Edit HMI User Credentials – Opens a window which allows to configure the credentials of a password-protected HMI. See Protecting HMI with a Password for details.
Settings – Opens a window which allows to customize the settings of Aurora Vision Studio.
Help
View Help (F1) – Opens up the documentation of Aurora Vision Studio.
Message to Support – Sends an e-mail to support with an attached Aurora Vision Studio screenshot, log and optional user-supplied message.
Download Remote Support Client – Downloads the TeamViewer application that allows for remote support.
License Manager – Allows to view and manage available licenses for Aurora Vision products.
About Aurora Vision Studio – Displays information about your copy of Aurora Vision Studio, e.g. the application version, loaded assemblies and plugins.
HMI Designer
Make Horizontal Spacing Equal – Makes all marked HMI controls' horizontal spacing equal.
Make Vertical Spacing Equal – Makes all marked HMI controls' vertical spacing equal.
Application Settings
Aurora Vision Studio is a customizable environment and all its settings can be adjusted in the Settings window, located in the Tools » Settings menu.
Falling back to defaults is possible with the Reset Environment button located at the bottom of the window. Here is the list of the Application Settings:
1. Environment: General, Startup, Console, Previews
2. Program Execution: General, HMI
3. Filters: Library, Filter Catalog, Toolbox, Filter Properties, Project Explorer
4. File Associations
5. Program Edition: Default Program, Macrofilter Navigator, Program Editor, Program Analysis
6. Results: View
7. Project Files: Project Explorer, Project Binary Files
8. Messages: Show these messages
1. Environment
General
Complexity level:
Selects the complexity level of the available tools.
Language:
User interface language.
Theme:
User interface color theme.
Display short project name in the Title Bar
If checked, only name of the current project is displayed in the Main Window title. Otherwise, full path to the project file is displayed.
Floating point significant digits:
Sets the number of digits displayed after the decimal separator.
Startup
Console
Previews
2. Program Execution
General
HMI
3. Filters
Library
Filter Catalog
Toolbox
Filter Properties
Project Explorer
5. Program Edition
Macrofilter Navigator
Program Editor
Program Analysis
6. Results
View
Flat view
If checked, outputs are arranged in a flat list in the results. Otherwise, outputs are only displayed in a tree layout according to their
logical relationship to their parent outputs.
Show statistics columns
If checked, statistics-related columns are displayed.
Use output labels
If checked, labeled outputs are presented as their Data Source Labels. Otherwise, original output name is used.
Contents
Defines the source for the results control contents. Either the selected filters or all filters in the current macrofilter (variant).
Show only visible outputs
If checked, hidden outputs are also hidden in the results.
Show only textual outputs
If checked, only outputs of textual data type (e.g. String, Integer, Real, etc.) are shown in the results.
Show only checked outputs
If checked, only outputs which are checked are displayed in the results.
Show only labeled outputs
If checked, only outputs which have a label defined are displayed in the results.
7. Project Files
Project Explorer
8. Messages
Show these messages:
Data
Aurora Vision Studio is a data processing environment so data is one of its central concepts. The most important fact about data that has to be
understood is the distinction between types (e.g. Point2D) and values (e.g. the coordinates (15.7, 4.1)). Types define the protocol and guide the
program construction, whereas values appear during program execution and represent information that is processed. Examples of common types of
data are: Integer, Rectangle2D, Image.
Aurora Vision Studio also supports arrays, i.e. variable-sized collections of data items that can be processed together. For each data type there is a
corresponding array type. For example, just as 4 is a value of the Integer type, the collection {1, 5, 4} is a value of the IntegerArray type. Nested
arrays (arrays of arrays) are also possible.
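For readers coming from textual languages, the following C++ sketch (illustrative only – Aurora Vision Studio itself is programmed visually) shows how these concepts map onto familiar constructs:

    #include <vector>

    int main()
    {
        // A value of the Integer type:
        int value = 4;

        // A value of the IntegerArray type:
        std::vector<int> values = {1, 5, 4};

        // Nested arrays (arrays of arrays) are also possible:
        std::vector<std::vector<int>> nested = {{1, 2}, {5, 4}};

        return 0;
    }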
Filters (Tools)
Filters are the basic data processing elements in data flow programming. In a typical machine vision application there is an image acquisition filter at
the beginning followed by a sequence of filters that extract information about regions, contours, geometrical primitives and then produce a final result
such as a pass/fail indication.
A filter usually has several inputs and one or more outputs. Each of the ports has a specific type (e.g. Image, Point2D etc.) and only connections
between ports with compatible types can be created. Values of unconnected inputs can be set in the Properties window, which also provides
graphical editors for the convenient definition of geometrical data. When a filter is invoked (executed), its output data can be displayed and analyzed in the
Data Preview panels.
Connections
Connections transmit data between filters, but they also play an important role in encapsulating much of the complexity typical for low-level
programming constructs like loops and conditions. Different types of connections support: basic flow of data, automatic conversions,
array (for-each) processing and conditional processing. You do not define the connection types explicitly – they are inferred automatically
on the do what I mean basis. For example, if an array of regions is connected to an input accepting only a single region, then an array connection is
created and the individual regions are processed in a loop.
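Conceptually, such an array connection behaves like the loop below – a hypothetical C++ sketch in which Region, Result and processRegion stand in for an arbitrary single-region filter:

    #include <vector>

    // Stand-ins (hypothetical) for an Aurora Vision data type and a single-region filter.
    struct Region {};
    struct Result {};
    Result processRegion(const Region& region) { return Result{}; }

    // An array connection applies the single-region filter to every element,
    // producing an array of results -- the loop is created implicitly.
    std::vector<Result> processAll(const std::vector<Region>& regions)
    {
        std::vector<Result> results;
        for (const Region& region : regions)
            results.push_back(processRegion(region));
        return results;
    }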
Macrofilters
Macrofilters provide a means for building bigger real-life projects. They are reusable subprograms with their own inputs and outputs. Once a
macrofilter is created, it appears in the Project Explorer window and from then on can be used in exactly the same drag and drop way as any regular
filter.
Most macrofilters (we call them Steps) are just substitutions of several filters that help to keep the program clean and organized. Some other,
however, can create nested data processing loops (Tasks) or direct the program execution into one of several clearly defined conditional paths
(Variant Steps). These constructs provide an elegant way to create data flow programs of any complexity.
Data and types are very similar to what you know from C++. We also have a generic collection type – array – which is very similar to std::vector. Filters and
macrofilters are just equivalents of functions. But, instead of a single returned value they often have several output parameters. Connections correspond to local
variables, which do not have to be named. On the other hand, loops and conditions in Aurora Vision Studio are a bit different from C++ – the former are done with array
connections or with Task macrofilters, for the latter there are conditional connections and Variant Step macrofilters. See also: Quick Start Guide for the C/C++
Programmers.
Sections
In order to improve the clarity of applications and make it easier for the user to manage the data flow, starting from Aurora Vision Studio 5.0 we have
introduced a new feature called "Sections". These special areas, visible in the Program Editor, divide the application code into
four consecutive stages. Placing filters in the right section makes the application more readable and reduces the necessity of using macrofilters in
simple applications.
Currently, there are four sections available: INITIALIZE, ACQUIRE, PROCESS and FINALIZE.
INITIALIZE – this section should consist of filters that have to be executed only once, before the loop is started. The filters in this section will not be
repeated during the loop. Such filters are usually responsible for initiating a connection (e.g. GigEVision_StartAcquisition, TcpIp_Accept) or setting
the values of constant parameters for the application execution.
ACQUIRE – this section is intended to include filters from the loop generation category, like EnumerateImages or GigEVision_GrabImage, that generate
a stream of data to be processed or analyzed in the next section. Filters in this section will be executed in every iteration.
PROCESS – this is the main section; it contains the filters responsible for analyzing, processing and calculation of data. In most applications the
PROCESS section will be the largest one, because its main purpose is to perform all the key tasks of the application. Filters in this section will be
executed in every iteration.
FINALIZE – filters placed in this section are executed only once, after the loop. In most cases this section is used to close all the connections and
save or show the inspection results (e.g. GigEVision_StopAcquisition, TcpIp_Close, SaveText). This section is executed only if the task macrofilter
finishes without exceptions - it is not executed if an error occurs, even if it is handled by the error handler.
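In terms of control flow, the four sections correspond roughly to the structure below (a conceptual C++ sketch; the function names are placeholders, not real filters):

    #include <cstdio>

    // Placeholder implementations so the sketch is self-contained.
    void initialize()        { std::puts("INITIALIZE: runs once, before the loop"); }
    bool acquireImage()      { static int i = 0; return i++ < 3; } // e.g. 3 iterations
    void processAndAnalyze() { std::puts("PROCESS: runs every iteration"); }
    void finalize()          { std::puts("FINALIZE: runs once, after the loop"); }

    int main()
    {
        initialize();            // INITIALIZE section
        while (acquireImage())   // ACQUIRE section: generates the data stream
            processAndAnalyze(); // PROCESS section: the main analysis
        finalize();              // FINALIZE section (skipped on unhandled errors)
        return 0;
    }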
By default, sections are not visible inside Step and Variant Step macrofilters. However, the view in these macrofilters can be extended with
INITIALIZE and PROCESS sections. In Worker Task and Task macrofilters the default view consists of the ACQUIRE and PROCESS sections and can
be extended to all four sections.
To change the default view of the sections in the Program Editor, click on the button located to the left of the macrofilter name in the top bar of the
Program Editor.
Running and Analysing Programs
Example Projects
One of the best ways to learn Aurora Vision Studio quickly is to study the example projects that come with the application. The File » Open
Example... command is a shortcut to the location they are stored in. The list of available projects is also available in the Program Examples section.
Executing Programs
When a project is loaded you can run it simply by clicking the Run button on the Application Toolbar or by pressing F5. This will start
continuous program execution; results will be visible in the Data Preview panels. You can also run the program iteration by iteration by clicking Iterate
or pressing F6.
When a project is loaded from a file or when new filters (tools) are added, the filter instances are displayed subdued. They become highlighted after
they are invoked (executed). It is also possible to invoke individual filters one by one by clicking Step Over or Step Into, or by pressing
F10 or F11 respectively. The difference between Step Over and Step Into is related to macrofilters – the former invokes entire macrofilters,
whereas the latter invokes individual filters inside.
A program with four filter instances; three of them have already been invoked.
Viewing Results
Once the filters (tools) are executed, their output data can be displayed in the Data Preview panels. To display the value of a particular output, just
drag and drop from the port of the filter to a Data Preview panel. Remember to press and hold the left mouse button during the entire movement from the
port to the preview. Multiple pieces of data can often be displayed in multiple layers of a single preview. This is especially useful for displaying geometrical
primitives, paths or regions over images.
User-defined data previews from individual filter outputs.
In bigger projects you will also find it useful to switch between three different layouts, which can be created to visualize different stages of the
algorithm, as well as to the automatic layout mode, which is very useful for interactive analysis of individual filters:
Automatic data previews – the layout adapts to the currently selected filter.
Analysing Data
Data displayed in the Data Preview panels can be analyzed interactively. There are different tools for different types of data, available in the main
window toolbar. These tools depend on the currently selected preview, which is marked with a yellow border when the window is docked.
Measure rectangle – Selection
Measure distance – Length
For the most common Image type the Data Preview window has the following appearance:
Mouse Wheel – zooms the image in or out.
3rd Mouse Button + Drag – moves the view.
Right Click – opens a context menu and allows saving the view to an image file.
Results Control
While analyzing your application, it is useful to switch to the Results control available near the bottom of the screen. If you cannot see it, you need to enable it through View » Results.
Browsing Macrofilters
Except for the simplest applications, programs in Aurora Vision Studio are composed of many so-called macrofilters (subprograms). Instances
of macrofilters in the Program Editor can be recognized by the icon which depicts several blue bars. Double-clicking on an instance opens the
macrofilter in the Program Editor.
Definitions of macrofilters correspond to definitions of functions in C++, whereas instances of macrofilters correspond to function calls on a call stack. Unlike in
C++, there is no recursion in Aurora Vision Studio and each macrofilter definition has a constant and finite number of instances. Thus, we actually have a static
call tree instead of a dynamic call stack. This makes program understanding and debugging much easier.
Analysing Operation of a Single Macrofilter
The Iterate Current Macro command can be very useful when you want to focus on a single macrofilter in a program with many macrofilters. It
executes the whole program and pauses each time when it comes to the end of the currently selected macrofilter instance.
The Iterate Current Macro command is very similar to setting a breakpoint at the end of some function in a C++ debugger.
Execution Breakpoints
Pausing the program execution at any location in the algorithm is a fundamental debugging technique in any programming language. In
Aurora Vision Studio such pausing can be done with breakpoints in the Program Editor. Breakpoints are visualized with a red circle in the left margin
of the Program Editor and a line next to it:
Acquiring Images
Acquiring Images from Files
Aurora Vision Studio is not limited to any fixed number of image sources. Instead, image acquisition is a regular part of the library of filters (tools). In
particular, to load an image from a file, the user needs to create a program with an instance of the LoadImage filter.
Four steps are required to get the result ready:
1. Add a LoadImage filter to the Program Editor:
a. Either by choosing it from the Image Acquisition section of the Toolbox (recommended).
b. Or by dragging and dropping it from the Image :: Image IO category of the Filter Catalog.
2. Select the new filter instance and click on "..." by the inFile port in the Properties window. Then select a PNG, JPG, BMP or TIFF file.
3. Run the program.
4. Drag and drop the outImage port to the Data Previews panel.
Click the X at the top right corner. In this way, you will remove the last data that has been added to the preview. If the closed one is also the last, the preview window is closed.
Use the mouse wheel button and press it when the mouse cursor is placed on the name of the preview (top left corner).
Right-click on the name of the preview and select one of the options, which are Remove Preview, Close, Close All But This and Close All.
Arranging Data Previews
There are five default options that you can use to arrange your preview. Despite displaying your preview in a single window, you can use four other options:
Additionally, you can further divide the preview windows by right-clicking on their name and selecting "Split View" option.
Another option that is at your disposal is docking. You can either double-click on the name of the preview or right-click and select the Undock option. To
restore the window to its previous position, simply right-click on the bar and select the Dock option. Also, you can click and hold a tab with the name of
a data preview and move it to another position. During that process the navigation pane appears and you can specify the new arrangement of the preview.
Layouts
It is possible to have up to three separate preview layouts arranged in different ways and switch between them when working on an algorithm.
Additionally, an Auto mode, which shows the most useful inputs and outputs of the currently active filter (tool), is available. All the above-mentioned
features are available on the Toolbar. There is also the HMI button that opens the standalone tab where you can design
the user interface.
Image Tools
You gain access to the Image Tools after clicking on the image. It contains handy tools that will help you during the analysis of images. Note that they
do not modify the image in the program, but rather help you set appropriate parameters and collect data necessary for the inspection, by influencing
the preview.
Move and Zoom are purely for navigation on the preview. It is possible to use the same functions with the scroll wheel of your mouse.
The Pick Color tool is handy if you want to check the intensity in the image or RGB/HSV values before performing further operations on the image.
The next three icons are all about measurement. Measure rectangle allows you to select a rectangle and measure its width and height. Measure distance returns the distance in pixels between two points on the image. Measure angle simply checks the angle defined by three points selected by the user.
Next, there are tools for 1D and 2D image profile analysis. After marking a segment or box on the image, you will receive a graph ready for a quick analysis.
The following buttons are for fitting the image to the content and returning it to the original size. The Display data labels option allows you to see the labels on the objects, and Display array indices shows the numbers on the image that correspond to the positions of the features in the input array, as seen on the image below:
Save Image As – saves the image with its original size and other parameters in the specified folder.
Save View As – saves the current view of the image, with other elements (e.g. geometric primitives) located on it, in the specified folder.
Copy View As Image – copies the current view of the image.
Zoom to Fit – scales the image to the size of the preview window so it can fit in it.
Zoom to Original Size – returns the image to the original size.
Show Information – turns on/off the information about the current preview.
Show Array Indices – shows sorted numbers on the preview. You can only see the results of that button if the data was dropped onto the preview as an array.
Preview information
The green text at the top left corner of the preview contains the names of all the outputs and inputs that were dropped on it. They are always followed
by additional information. In case of an Image or Region it is usually its size. It is also possible to see a number in brackets, e.g. [4]. It indicates the
number of elements in the array of data. To the right of the names you may notice up to five buttons:
– tells you about the color the type of data is being displayed in. If you press it, you will be able to change its shade.
– by dragging and moving it up and down you can organize the data on the preview. Note that the color changes with the order of data, so it does not stay the same after being moved.
– allows you to turn the data on and off on the preview without completely removing it.
– helps you navigate through the elements of arrays. If you turn it off, you will be able to see all the elements of the array on the preview at once. Turning it on switches the preview to show only one element and allows iterating through them one by one; however, it makes it impossible to see them all.
– removes the data from the preview.
Adding comments
Adding a comment is a handy option that you can use on all your preview windows. To do so, right-click on the name of the preview and choose the Add
Comment option. After that, a yellow box will appear at the bottom of the preview. You will be able to write anything you like there, as in the image
below:
Useful Settings
To change the quality of the previews you can enter Tools » Settings » Previews » Preview Quality or Program » Preview Quality. You will be able to
choose from three different settings: Fast, Balanced and High Quality.
Note that in the case of larger images an "Image is too big!" warning pops up. It automatically lowers the quality of the image even if the best setting was selected beforehand.
Users may extract macrofilters in their program. By default that operation removes the previews of the filters that were enclosed. To prevent that, you
can access Tools » Settings » Editor » Preserve port previews when extracting macrofilter.
For more details on the topic, please refer to a video tutorial on our YouTube channel:
Please note that previews of the 3D data are explained in a separate article: Working With 3D Data.
Extracting Blobs
To start this simple demonstration we load an image from a file – with the LoadImage filter (tool), which is available in the Image Acquisition section
of the Toolbox, the From File group. The image used in the example has been acquired with a backlight, so it is easy to separate its foreground from
the background simply with the ThresholdToRegion filter (Image Processing → Threshold Image). The result of this filter is a Region, i.e. a
compressed binary image or a set of pixel locations that correspond to the foreground objects. The next step is to transform this single region into an
array (a list) of regions depicting individual objects. This is done with the SplitRegionIntoBlobs filter (Region Analysis → Split Region):
Connections between filters are created by dragging with a mouse from a filter output to an input of another filter.
The data previews on the right are created by dragging and dropping filter outputs.
The input file is available here: parts.png.
Classifying the Blobs
At this stage what we have is an array of regions. This array has 12 elements, of which 4 are nails. To separate the nails from other objects, we can use the fact that they are longer and thinner. The ClassifyRegions filter (Region Analysis → Region Logic) with the inFeature input set to Elongation and inMinimum set to 10 will return only the nails on its outAccepted output:
Conclusion
As this example demonstrates, creating programs in Aurora Vision Studio consists of selecting filters (tools) from the Toolbox (or from the Filter
Catalog), connecting and configuring them. Data previews are created by dragging and dropping filter ports onto the data preview area. This simple
workflow is common for the most basic programs, like the one just shown, as well as for highly advanced industrial applications which can contain
multiple image sources, hundreds of filters and arbitrarily complex data-flow logic.
Note: This program is available as "Nails Screws and Nuts" in the official examples of Aurora Vision Studio.
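For reference, the same pipeline expressed in code might look roughly as follows. This is a hypothetical C++ sketch in the spirit of the Generate C++ Code feature; the function signatures below are illustrative stand-ins, not the actual generated API:

    #include <vector>

    struct Image {};
    struct Region {};

    // Illustrative stand-ins for LoadImage, ThresholdToRegion,
    // SplitRegionIntoBlobs and ClassifyRegions (signatures are hypothetical):
    Image loadImage(const char*) { return Image{}; }
    Region thresholdToRegion(const Image&, float /*minValue*/) { return Region{}; }
    std::vector<Region> splitRegionIntoBlobs(const Region&) { return {}; }
    std::vector<Region> classifyRegions(const std::vector<Region>& blobs,
                                        float /*minElongation*/) { return blobs; }

    int main()
    {
        Image input = loadImage("parts.png");                         // LoadImage
        Region foreground = thresholdToRegion(input, 128.0f);         // ThresholdToRegion
        std::vector<Region> blobs = splitRegionIntoBlobs(foreground); // SplitRegionIntoBlobs
        std::vector<Region> nails = classifyRegions(blobs, 10.0f);    // inMinimum = 10
        return 0;
    }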
3. User Interface
Table of contents:
Available Levels
Basic – Designed for production engineers who want to quickly build simple machine vision projects without devoting much time to learning the full capabilities of the software.
Advanced – Designed for professional machine vision engineers who also implement challenging and complex projects.
Expert / Scientific – Gives access to experimental and scientific filters (tools), which are not recommended for typical machine vision applications, but might be useful for research purposes.
Changing Complexity Level
Complexity Level can be changed at any time. It can be done by clicking on the level name in the upper-right corner of Aurora Vision Studio:
Finding Filters
Introduction
There are many hundreds of ready-to-use filters (tools) in Aurora Vision Studio, implementing common image processing algorithms, planar
geometry and specialized machine vision tools, as well as things like basic arithmetic or operating system functions. On the one hand, this means that
you get instant access to the results of tens of thousands of programmers' work hours; on the other hand, it means that there are quite a lot of libraries and
filter categories that you need to know. This article advises you how to cope with that multitude and how to find the filters that you need.
The most important thing to know is that there are two different catalogs of filters, designed for different types of users:
Toolbox's Tools view (for typical applications) Toolbox's Libraries view (for advanced users)
The Toolbox's Tools view is designed for use in typical machine vision applications. It is task-oriented, most filters are grouped into tools and they are
supported with intuitive illustrations. This makes it easy to find the filters you need the most. It does not, however, contain all the advanced filters,
which might be required in more challenging applications. To access the complete list of filters, you should use the Toolbox's Libraries view. This
catalog is organized in libraries, categories and subcategories. Due to its comprehensiveness, it usually takes more time to find what you need, but
there is also an advanced text-based search engine, which is very useful if you can guess a part of the filter name.
Toolbox
Sections
When you use the Toolbox's Tools view, the general idea is that you will most probably start with a filter (tool) from the first section, Image
Acquisition, and then follow with filters from consecutive sections:
1. Image Acquisition
– Acquiring images from cameras, frame grabbers or files.
2. Image Acquisition (Third Party)
– Acquiring images from third-party cameras.
3. Image Processing
– Image conversions, enhancements, transformations etc.
4. Region Analysis
– Operations on pixel sets representing foreground objects (blobs).
5. Computer Vision 2D
– Specialized tools for high-level image analysis and measurements.
6. Computer Vision 3D
– Specialized tools for analysis of 3D point clouds.
7. Deep Learning
– Self-learning tools based on deep neural networks.
8. Geometry 2D
– Filters for constructing, transforming and analysing primitives of 2D geometry.
9. Geometry 3D
– Filters for constructing, transforming and analysing primitives of 3D geometry.
10. Logic & Math
– Numerical calculations, arrays and conditional data processing.
11. Program Structure
– The basic program structure elements.
12. File System
– Filters for working with files on disk.
13. Program I/O
– Filters for communication with external devices.
Program Editor
Ctrl+Space / Ctrl+T
When you know the name of a filter which you would like to add to your program, you can use the keyboard shortcut Ctrl+Space or Ctrl+T to find it
straight away in the Program Editor, without having to open the Toolbox's Libraries view. After applying this shortcut, you are prompted to type a filter
name:
Search Window
Ctrl+F
Creating a large program in Aurora Vision Studio may require finding elements in its structure. Using the Search Window is the best way to find Filters
(tools), Macrofilters, Variants and Global Parameters which already exist in the project. To open the Search Window, find the Magnifier button in the Program
Editor or press Ctrl+F. In the Search Window, insert the name of an element you want to find, or the part of it you remember. Press the 'Find' button and the
search results will appear in a new window. You may also pick some search options, like case sensitivity, matching whole names or regular
expressions, which can narrow your search.
Rules
Entering a search phrase also allows you to pre-filter data with some keywords. When your project is large, it might be useful to use Rules in the search
phrase. Here are some examples of using Rules in the Search Window:
Input – in:<Name> (e.g. load in:File) – must contain an input whose name or type contains the phrase <Name>.
Output – out:<Name> (e.g. load out:Integer) – must contain an output whose name, type or Data Source Label contains the phrase <Name>.
Connecting and Configuring Filters
After a filter is added to the program, it has to be configured. This consists of setting its inputs to appropriate constant values in the Properties window
or connecting them with the outputs of other filters. It is also possible to link an input with an external AVDATA file, or to connect it with HMI elements
or with Global Parameters.
Moreover, it is possible to reconnect an existing connection to another output or input port by dragging the connection's head (near the destination
port) or the connection's first horizontal segment (near the source port).
Another way to create connections between filters is by using only the Properties window. When you click the plug icon in the right-most
column, you get a list with possible connections for this input. When the input is already connected, the plug icon is solid and you can change the
connection to another one.
Note: After clicking on the header of the properties table it is possible to choose additional columns.
Please note that although being extremely useful, this is a "dirty" feature and may sometimes lead to inconsistencies in the program state. In case of problems, stop the program and run it again.
This feature works only when the filter has already been executed and the program is Paused, but NOT Stopped.
By default the entire macrofilter is re-executed. By unchecking the "Global Rerun Mode" setting, the re-execution can be limited to a single filter. This can be useful when there are long-lasting computations in the current macrofilter.
It is not possible to re-execute I/O filters, loop accumulators or loop generators (because this would lead to undesirable side effects). These filters are skipped when the entire macrofilter is getting re-executed.
When re-executing a nested instance of another macrofilter, the previews of its internal data are NOT updated.
Re-executing some filters, especially macrofilters, can take much time. Use the Stop command (Shift+F5) to break it when necessary.
If you set an improper value and cause a Domain Error, the program will stop and it will have to be started again to use the re-execution feature.
The filter parameters can also be modified during continuous program execution (F5).
Linking or Loading Data From a File
Sometimes data values that have to be used on an input of a filter are stored in an .avdata file that has been created as a result of some
Aurora Vision Studio program (for example creating a custom OCR model). It is possible to load such data into the program with the LoadObject filter,
but most often it is more convenient and more effective to link the input port directly to the file. To create such a link, choose Link from AVDATA File...
from the context menu of the input (click with the right mouse button on an input port of a filter).
Managing recipes depending on a signal from PLC,
storing data transferred between nested macrofilters,
setting global flags.
Labeling Connections
Connections are easier to follow than variables as long as there are not too many of them. When your program becomes complicated, with many
intersecting connections, its readability may become reduced. To solve this problem Aurora Vision Studio provides a way to replace some
connections with labels. You can right-click on a connection, select "Label All Connections..." - when there is more than one connection or "Label
Connection..." when only one connection is present. Then set the name of a label that will be displayed instead. The labels are similar to variables
known from textual programming languages – they are named and can be more easily followed if the connected filters are far away from each other.
Here is an example:
Long connections are replaced with labels for better readability.
Remarks:
Labeled connections can be Please note, that when your Aurora Vision Studio enforces that When the amount of connections
brought back (unlabeled) by using program becomes complicated, all connections between filters are becomes large despite good
the "Un-label This Connection" or the first thing you should consider clearly visualized, even if making program structure you should also
"Un-label All Connections" is reducing its complexity by them implicit would make consider creating User Structures
commands available in the context refactoring the macrofilter programming easier in typical that may be used for merging
menu of a label. hierarchy or rethinking the overall machine vision applications. This multiple connections into one. (Do
structure. Labeling connections is stems from our design philosophy NOT use global parameters for
only a way to visualize the that assumes that: (1) it is wrong that purpose).
program in a more convenient wayto hide something that the user Invalid Connections
and does not make its structure has to think about anyway, (2) the
any simpler. It is the user's user should be able to understand As types of ports in macrofilters and
responsibility to keep it well all the details of a macrofilter formulas may be changed after
organized anyway. looking at a static screen image. connections with other filters have
been created, sometimes an existing connection may become invalid. Such an invalid connection will be marked with red line and white cross in the
Program Editor:
Invalid connections prevent the program from running. There are two ways to fix them: replace the invalid connections or revise the types of the connected
ports.
Property Outputs
In addition to a filter's standard outputs, it is possible to get much more information out of a filter. Many data types are represented as structures
containing fields, e.g. Point2D consists of "X" and "Y" fields of the Real data type. Although these fields are not available as standard outputs, a user can
easily add them as additional filter outputs – we call them "Property Outputs". That way, they are directly available for creating connections with inputs
of other filters.
There are also special types, which cannot exist independently. They are used for wrapping outputs, which may not be produced under some
conditions (Conditional) or optional inputs (Optional), or else for keeping a set of data of specified type (Array). Property outputs of such types are
listed in the table below.
All of the above-mentioned property outputs are specific for the type. However, ports of Array, Optional or Conditional type may have more property
outputs, depending on the wrapped type. For example, if the output of a filter is an array of regions, this port will have Count, IsArrayEmpty (both
resulting from the array form of the port), IsRegionEmpty and Area (both resulting from the type of the objects held in the array - in this case, regions)
as property outputs. However, if the output of a filter is an array of objects without any property outputs (e.g. Integer), only outputs resulting from the
array form of the port (Count and IsArrayEmpty) will be available.
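As a rough analogy (a C++ sketch with hypothetical types, not Aurora Vision code), adding a property output is comparable to exposing a field of a structured value, while the array form of a port additionally exposes array-level properties:

    #include <vector>

    struct Point2D { float X; float Y; }; // "X" and "Y" become property outputs

    int main()
    {
        Point2D p{15.7f, 4.1f};
        float x = p.X; // like reading the "X" property output
        float y = p.Y; // like reading the "Y" property output

        std::vector<Point2D> points{p};
        std::size_t count = points.size(); // like the "Count" property output
        bool isEmpty = points.empty();     // like the "IsArrayEmpty" property output

        (void)x; (void)y; (void)count; (void)isEmpty;
        return 0;
    }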
Tip: Avoid using basic filters like Not, ArraySize which will have bigger performance impact than additional property outputs.
Comment Blocks
Comments are a very important part of every computer program. They keep it clear and easy to understand during development and further
maintenance. There are two ways to make a comment in Aurora Vision Studio. One way is to add a special comment block. Another option is adding
a comment directly in the filter. To add a new comment block to the program, click the right mouse button on the Program Editor's background and select
the "Add Comment Here" option, as in the image below.
Creating Macrofilters
Introduction
Macrofilters play an important role in developing bigger programs. They are subprograms providing a means to isolate sequences of filters and
re-use them in other places of the program. After creating a macrofilter you can create its instances. From the outside an instance looks like a regular filter,
but by double-clicking on it you navigate inside and can see its internal structure.
When you create a new project, it contains exactly one macrofilter, named "Main". This is the top level macrofilter from which the program execution
will start. Further macrofilters can be added by extracting them from existing filters or by creating them in the Project Explorer window.
Adding a new output using the context menu of the Macrofilter Outputs block.
Adding a new output by dragging and dropping a connection.
Before the new port is created you need to provide at least its name and type:
Adding Registers
The context menus of macrofilter input and output blocks also contain a command for adding macrofilter registers, Add Macrofilter Register.... This
is an advanced feature.
Copying Macrofilters
When copying macrofilters in the Program Editor window, you only create new instances of them: modifying one instance will inevitably affect all the
others. In order to duplicate the definitions (create brand-new copies) of macrofilters as independent entities that you will be able to modify
separately, you need to do it inside the Project Explorer window.
Please keep in mind that, regardless of whether you copy macrofilters with other macrofilters nested inside in the Program Editor or in the Project Explorer,
no new definitions of the nested macrofilters will be created.
1. When copying the parent macrofilter (first nest level) in the Program Editor, the number of instances of nested (child) macrofilters will not change.
2. When copying the parent macrofilter (first nest level) in the Project Explorer, new instances of nested (child) macrofilters will be created.
If you want to create a new copy of the whole macrofilter tree (copy of all definitions) you will have to copy each macrofilter separately, starting with
the most nested one.
Creating a Model
Basic
The Basic window contains the following elements:
1. At the top: a list of possible template images
2. Below: a simple toolbar that also contains a button for loading a template image from a file
3. On the left: a tool for selecting a rectangular template region
4. On the right: track-bars for setting parameters and some options related to the view
5. In the center: an area for selecting the template region in the context of the selected template image
3. Edge-based matching only: Set the Edge Threshold parameter, which should be set to a value that results in the best quality of the edges.
Edge Threshold determines the minimum strength (gradient magnitude) of pixels that can be considered an edge.
Low quality edges (Edge Threshold = 8) and high quality edges (Edge Threshold = 30)
4. Set the Rotation Tolerance in the range from 0° to 360°. This parameter determines the maximum expected object rotation. Please note that
the smaller the Rotation Tolerance, the faster the matching.
5. Set the Minimal Pyramid Level parameter, which determines the lowest pyramid level used to validate candidates found on the
higher levels. It is worth mentioning that setting this parameter to a value greater than 0 may speed up the computation significantly;
however, the accuracy of the matching can be reduced. More detailed information about the Image Pyramid is provided in the Template Matching
document in our Machine Vision Guide.
Expert
The Expert window contains the following elements:
1. At the top: a list of possible template images
2. Below: a simple toolbar that also contains a button for loading a template image from a file
3. On the left: a tool for selecting a template region of any shape
4. On the right: parameters of the model, their description and some options related to the view
5. In the center: an area for selecting the template region in the context of the selected template image
See also:
Template Matching guide, Template Matching filters
Camera Calibration: CalibrateCamera_Pinhole, CalibrateCamera_Telecentric
World to Image Transform: CalibrateWorldPlane_Default, CalibrateWorldPlane_Labeled, CalibrateWorldPlane_Manual, CalibrateWorldPlane_Multigrid
Rectification Map Generation: CreateRectificationMap_Advanced, CreateRectificationMap_PixelUnits, CreateRectificationMap_WorldUnits
If you are using a chessboard pattern, you have to count the squares in both dimensions. In this example, the width is 10 and the height is 7.
If you are using a circle pattern, you have to measure the radius of any circle. In this example, it is about 10 px. Note: it is important to use a symmetric board as shown in the image. Asymmetric boards are currently not supported.
Once they are set, you should adjust the camera type according to the applied camera, which can be either pinhole (which uses a standard perspective
projection) or telecentric (which uses an orthographic projection).
A few distortion model types are supported. The simplest – divisional – supports most use cases and has predictable behavior even when calibration
data is sparse. Higher order models can be more accurate; however, they need a much larger dataset of high quality calibration points, and are
usually needed for achieving high levels of positional accuracy across the whole image – an order of magnitude below 0.1 px. Of course this is only a
rule of thumb, as each lens is different and there are exceptions.
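For orientation, the divisional model mentioned above is commonly written in the following form (one standard parametrization; the exact convention used by Aurora Vision Studio may differ):

    x_u = \frac{x_d}{1 + \kappa r^2}, \qquad y_u = \frac{y_d}{1 + \kappa r^2}, \qquad r^2 = x_d^2 + y_d^2

where (x_d, y_d) are the observed (distorted) image coordinates relative to the distortion center, (x_u, y_u) are the corresponding undistorted coordinates, and κ is the single distortion coefficient.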
The Results & Statistics tab informs you about the results of the calibration, especially about the reprojection error, which corresponds to the reprojection vectors
shown in red on the preview in the picture below:
They indicate the direction in which each located point would have to be moved in order to form a perfect grid. Additionally, there are values of the Root
Mean Square and Maximal error. The tab also displays the value and standard deviation of each computed camera and lens distortion parameter.
For more details, please refer to the corresponding group of filters for this page in the table above.
Arrows indicate which points from the calibration grid relate to the corresponding rows in the spreadsheet. As you can see, each row consists of
coordinates in the image plane (given in pixels), coordinates in the world plane (given in mm), and an error (in pixels), which indicates how far a point
deviates from its model location. In this case the reprojection vectors, marked as small red arrows, also indicate the deviation from the
model.
Colors of points have their own meaning:
Green Point – the point has been computed automatically.
Orange Point – the point which has been selected.
Blue Point – the point has been adjusted manually.
The Results & Statistics tab shows information about computed errors for both image and world coordinates.
The output reprojection errors are a useful tool for assessing the feasibility of the computed solution. There are two errors in the plugin: Image (RMS) and
World (RMS). The first one denotes how inaccurate the evaluation of perspective is. The latter reflects the inaccuracy of the labeling of the grid's 2D world
coordinate system. They are independent; however, they both influence the quality of the solution, so their values should remain low.
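As a point of reference, an RMS reprojection error is conventionally defined as follows (a standard definition, given here for orientation):

    \mathrm{RMS} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \lVert p_i - \hat{p}_i \rVert^2}

where p_i is the detected location of the i-th calibration point, \hat{p}_i is the location predicted by the fitted model, and N is the number of points.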
For the details, please refer to the corresponding group of filters for this page in the table above.
The left side presents which filters are necessary to generate a rectification map and how to save it using SaveObject. The right side presents how to
load the rectification map using LoadObject and pass it to the RectifyImage filter.
This is just an exemplary set of filters which might be applied; the choice of specific filters depends on the calibration board and other parameters relevant
to a case.
Further readings
Calibration-related list of filters in Aurora Vision Studio
Creating Text Segmentation Models
The graphical editor for text segmentation performs two operations:
1. Thresholding an image with one of several different methods to get a single foreground region corresponding to all characters.
2. Splitting the foreground region into an array of regions corresponding to individual characters.
Details about using OCR filters can be found in Machine Vision Guide: Optical Character Recognition.
To configure text extraction please perform the following steps:
1. Add an ExtractText filter to the program.
2. Set the region of interest on the inRoi input. This step is necessary before performing the next steps. The image below shows how the ROI was
selected in an example application:
3. Click on the "..." button at the inSegmentationModel input to enter the graphical editor.
4. When entering for the first time, complete the quick setup by selecting the most common settings. In this example, black non-continuous text should be
extracted from a uniform background. The configuration was set to meet these requirements.
5. After the quick setup the graphical editor starts with some parameters set. Adjust the pre-configured parameters to get the best results.
6. Configure a character extraction algorithm. In this case the thresholding value is too high and must be reduced.
8. Set the minimal and the maximal size of a character. The editor shows the character dimensions when the character is selected in the list
below.
9. Select a character sorting order, character deskewing (shearing) and image smoothing. Smoothing is important when images have low
quality.
To create a golden template, select the template region and configure its parameters.
Remarks:
To reduce computation time, try to select only the necessary part of an object.
For comparing both edges and surface, use two CompareGoldenTemplate filters.
To create a model programmatically, use the CreateGoldenTemplate filter.

Creating Models for Golden Template

Introduction
Golden Template is an image comparison technique. It is based on the pixel-to-pixel comparison but uses multiple images and advanced algorithms
to create a multi-image model. It is useful for finding general defects of objects that have fixed shape. In order to simplify the process of model
creation a "GUI for Golden Template 2" was created. It is possible to access it by clicking on the button at the inModel input in the Properties
window:
Opening GUI for Golden Template 2
Image Preparation
To create a golden template model you need at least three same-sized images representing the same object. The object should be placed in the
same way in all the images. Otherwise, the final model may not be accurate enough.
The first step is to prepare the images. To be sure that the object is always precisely positioned and has the same size – in both the model and the program – you can use the sequence of filters presented below:
After the training, character samples can be viewed in the details tab:
2. Creating artificial samples – when no samples are available, the user can create a training set using system fonts.
3. Creating character variations – when no more samples are available and the training result is not satisfactory, the editor can modify existing samples to create a new set.
4. Editing samples – when gathered samples contain noise or their quality is low, the user can edit them manually. The image below shows how to edit a character '8' to get a character '9'.
Note:
Each training character should have the same number of samples.
When some characters are very similar, the number of their samples can be increased to improve classification.
Character samples can be stored in an external directory to perform experiments on them.

Analysing Filter Performance
The Program Statistics window contains information about the time profile of the selected macrofilter. This is very important as real vision algorithms
need to be very fast. However, before starting program optimization we must know what needs to be optimized, so that later we optimize the right
thing. Here is how the required information is presented:
As can be seen on the above illustration, program execution time is affected by several different factors. The most important is the "Native Filters"
time, which corresponds to the core data processing tools. Additional time is consumed by "Data Transfers", which is related to everything that
happens on connections between filters – this encompasses automatic conversions as well as packing and unpacking arrays on array and singleton
connections. Another statistic, called "Other", is related to the virtual machine that executes the program. If its value is significant, then C++ code
generation might be worth considering. The last element, "GUI", corresponds to visualization of data and program execution progress. You can
expect that this part is related only to the development environment and can possibly be reduced down to zero in the runtime environment.
Remarks:
In practice, performance statistics may vary significantly in consecutive program executions. It is advisable to run the program several times and check if the statistics are coherent. It might also be useful to add the EnumerateIntegers filter to your program to force a loop and collect performance statistics not from one, but from many program iterations.
Turn off the diagnostic mode when testing performance.
Data Preview Panels, animations in the Program Editor and even the Console Window can affect performance in Aurora Vision Studio. Choose Program » Previews Update Mode » Disable Visualization to test performance with minimal influence of the graphical environment. (Do not be surprised, however, that nothing is visible then.)
Even with all windows closed, there are some background threads that affect performance. Performance may still be higher when you run the program with the Executor (runtime) application.
Please note that the first program iteration might be slower. This is due to the fact that in the first iteration memory buffers are allocated, filters are initialized, communication with external devices is established, etc.
See also: Optimizing Image Analysis for Speed.
Example
The ScanSingleEdge filter has three diagnostic outputs:
Usage
Open a project from a file and use the standard buttons to control the program execution. A file can also be started using the Windows Explorer context menu command Run, which is the default on computers with Aurora Vision Studio Runtime and no Aurora Vision Studio installed.
Please note that Aurora Vision Executor can only run projects created in exactly the same version of Aurora Vision Studio. This limitation is introduced on purpose – small changes between versions of Studio may affect program compatibility. After any upgrade, your application should first be loaded and re-saved with Aurora Vision Studio, as it then runs some backward compatibility checks and adjustments that are not available in Executor.
Console mode
It is possible to run Aurora Vision Executor in the console mode. To do so, pass the --console argument. Note that this mode makes the --program argument required, so that the application knows which program to run at startup.
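An illustrative invocation (the executable name and the project path are examples only):

AuroraVisionExecutor.exe --console --program C:\Projects\Inspection.avproj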
Aurora Vision Executor is able to open a named pipe into which its log will be written. This is possible with the --log-pipe argument, which accepts the name of the pipe to be opened. One may then connect to the pipe and process the Aurora Vision Executor log live. This can be easily done e.g. in C#:
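A minimal sketch of such a pipe client (the pipe name "AvExecutorLog" is only an example and must match the value passed to --log-pipe):

using System;
using System.IO;
using System.IO.Pipes;

class ExecutorLogReader
{
    static void Main()
    {
        // Connect to the named pipe opened by Aurora Vision Executor.
        using (var pipe = new NamedPipeClientStream(".", "AvExecutorLog", PipeDirection.In))
        {
            pipe.Connect();

            // Process the log live, line by line, until the pipe is closed.
            using (var reader = new StreamReader(pipe))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                    Console.WriteLine(line);
            }
        }
    }
}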
Runtime Executables
Aurora Vision Executor can open .avproj files, the same as Aurora Vision Studio; however, it is better to use .avexe files here. Firstly, one gets a single binary executable file for the runtime environment. Secondly, this file is encrypted, so nobody is able to look at the project details. To create one, open a project in Aurora Vision Studio and use File » Export to Runtime Executable.... This will produce an .avexe file that can be executed directly from the Windows Explorer.
If an Aurora Vision project contains any User Filter libraries, it is crucial to put their *.dll files into the appropriate directory when running in Aurora Vision Executor. This is another case where exporting to an .avexe file is a handy option. While defining the .avexe contents, it is possible to select all the User Filter libraries that the exported project depends on. Selected libraries are then deployed to the same directory as the generated .avexe file and the .avexe itself is set to use all User Filter libraries from its directory.
.NET Macrofilter Interface Generator – generates a native .NET assembly (a DLL file) and makes it possible to invoke macrofilters created in Aurora Vision Studio as simple class methods from a .NET project. Internally the execution engine of Aurora Vision Studio is used, so modifying the related macrofilters does not require re-compiling the .NET solution. The HMI can be implemented with WinForms, WPF or similar technologies.

C++ Code Generator – generates native C++ code (a CPP file) that is based on Aurora Vision Library C++. This code can be integrated with bigger C++ projects and the HMI can be implemented with Qt, MFC or similar libraries. Each time you modify the program in Studio, the C++ code has to be re-generated and re-compiled.

Set Aurora Vision Executor as the "system shell"

On Windows systems it is possible to set Aurora Vision Executor as the "system shell", thus removing the Desktop, Start Menu etc. completely. Go to Settings in Aurora Vision Executor and the Startup section. Mark the Set Aurora Vision Executor as the main system application (for the current user only) option. Please be informed that this option requires administrator privileges.
Startup Applications
It is possible to run any process before starting a program in Aurora Vision Executor. Go to Settings in Aurora Vision Executor and the Startup
section. To define a new startup program select the Add button on the right. In a New startup program dialog box you need to specify the application
path (obligatory) and arguments (optional). It is similar to typing the application name and command-line arguments in the Run dialog box of the
Windows Start menu. The added program will appear in the list. All added programs will start each time you run Aurora Vision Executor.
Defining Startup Applications
Startup Project
It is possible to choose the project that Aurora Vision Executor should run after startup. Go to Settings in Aurora Vision Executor and the Startup section. To define the startup project, select the ... button on the right. In the Open dialog box you need to specify the startup project path (obligatory). The added project's path will appear in the box. It will start each time you run Aurora Vision Executor.
Usage
In Aurora Vision Studio the list of available remote systems is accessible in the Connect to Remote Executor window which opens up after choosing
the File › Connect to Remote Executor menu command:
Example
Let us assume that we need to compute the hypotenuse. Here are the two possible solutions:
Calculations using formula blocks.
The second approach, using formula blocks, is the recommended one. Data flow programming is just not well suited for numerical calculations, and standard formulas, which can be defined directly in formula blocks, are much easier to read and understand. You can also think of this feature as an ability to embed little spreadsheets into your machine vision algorithms.
2. Add inputs and outputs using the context menu or by clicking Add Input and Add Output links:
Replace the highlighted Name with a meaningful name. While typing, a tooltip with available options should appear:
Not only the output data of the formulas can be labeled but also inputs and outputs of other filters outside of the formula. This way you
can perform a direct call to another labeled data inside a formula without explicitly defining new formula inputs. In the image below the
input A of the macrofilter is labeled (highlighted in violet color) and a direct call of this label (A) is performed inside the formula block.
Later the output Hypotenuse is connected via a label with the macrofilter output, instead of linking it directly using an outgoing arrow
connection.
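In this example the formula itself could be as simple as the following (port names are illustrative; sqrt is one of the functions available in formulas):

outHypotenuse = sqrt(inA * inA + inB * inB)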
Remarks
Existing formulas can also be edited directly in a formula block in the Program Editor.
It is also possible to create new inputs and outputs by dragging and dropping connections onto a formula block.
Formula blocks containing incorrect formulas are marked with a red background. Programs containing such filters cannot be run.
When defining a formula for an output it is possible to use other outputs, provided that they are defined earlier. The order can be changed through the outputs context menu.
For efficiency reasons it is advisable not to use "heavy" objects in formulas, such as images or regions.

Syntax and Semantics

For complete information about the syntax and semantics please refer to the Formulas article in the Programming Reference.
Opening Macrofilters
As described in Running and Analysing Programs, there are two ways of navigating through the existing macrofilters. One of them is with the Project Explorer window, which displays definitions of macrofilters, not the instances. After double-clicking on a macrofilter in the Project Explorer, however, a macrofilter instance is opened in the Program Editor. As one macrofilter definition can have zero, one or many instances, some special rules determine which of the instances it is:
If possible, the most recently executed instance is opened.
If no instance has been executed yet, the most recently created one is opened.
If there are no instances at all, a "ghost instance" is presented, which allows editing the macrofilter, but will never have any data on the output ports.

Macrofilter Counter

The macrofilter counter shows how many times a given macrofilter is used in the program.
ADVANCED NOTE: If a macrofilter X is used in a macrofilter Y and there are multiple instances of the macrofilter Y, we still consider macrofilter X to be used once. The number of uses is not the same as the number of instances.
Global Parameters
If some value is used many times in several different places of a program then it should be turned into a global parameter. Otherwise, consecutive
changes to the value will require the error-prone manual process of finding and changing all the occurrences. It is also advisable to use global
parameters to clearly distinguish the most important values from the project specification – for example the expected dimensions and tolerances.
This will make a program much easier to maintain in future.
In Aurora Vision Studio global parameters belong to specific modules and are managed in the Project Explorer. To create one, click the Create New
Global Parameter... button and then a dialog box will appear where you will provide the name, the type and the value of the new item. After a
global parameter is created it can be dragged-and-dropped on filter inputs and appropriate connections will be created with a visual label displaying
the name of the parameter.
Global parameter "inAddress" created within the filter WriteParameter can be read with the ReadParameter filter.
Thanks to these filters you can easily read or write the values of global parameters anywhere in your algorithm. To facilitate development, the icon of a global parameter has a different appearance depending on whether it is overwritten somewhere in the program. In that case the color of the icon is red, so that you know that this value may change during the execution of your application.
To see how Global Parameters work in practice, check out our official example: HMI Handling Events.
Remarks:
Connected filters are not re-executed after the global parameter is changed. This is due to the fact that many filters in different parts of the program can be connected to one global parameter. Re-executing all of them could cause unexpected non-local program state changes and thus is forbidden.
Do NOT use writable global parameters unless you really must. In most cases data should be passed between filters with explicit connections, even if there are a lot of them. Writable global parameters should be used only for some very specific tasks, most notably for configuration parameters that may be dynamically loaded during program execution and for high level program statistics that may be manipulated through the HMI (like the last defect time).

Modules

When a project grows above 10-20 macrofilters it might be appropriate to divide it into several separate modules, each of which would correspond to some logical part. It is advisable to create separate modules for things like I/O communication, configuration management or automated unit testing. Macrofilters and global parameters will then be grouped in a logical way and it will be easier to browse them.

Modules are also sometimes called "libraries of macrofilters". This is because they provide a means to develop sets of common user's tools that can be used in many different projects. This might be very handy for users who specialize in specific market areas and who find some standard tasks appearing again and again.
To create a module, click the Create New module... button and then a dialog box will appear. In there you specify the location and name of the
module. The path may be absolute or relative. Modules are saved with extension .avlib. Saving and updating the module files happens when the
containing program is saved.
Ways to access the editing window. The option to change the icon is highlighted
The image below shows a structure of an example program. The macrofilters have been grouped into two Modules. Module FindDefects has
macrofilters related to defect detection and a global parameter used by the macrofilters. Notice how the ProcessImage macrofilter is grayed out. It
indicates it is a private macrofilter (here it is used by FindShapeDefects). ProcessImage cannot be used outside its module.
The other module has macrofilters related to showing and storing the results of the inspection.
Example program structure with macrofilters grouped into modules.
Here are some guidelines on how to use modules in such situations:
Create a separate module for each set of related, standard macrofilters.
Give each module a unique and clear name.
Use the English language and follow the same naming conventions as in the native filters.
Create the common macrofilters in such a way that they do not have to be modified between different projects and only the values of their parameters have to be adapted.
If some of the macrofilters are intended as implementation only (not to be used from other modules), mark them as private.

It is important to note that modules containing filters interfacing with the HMI should not be shared between programs. Every filter port connected to the HMI has a unique identifier. Those identifiers vary between programs – ports of the same filter in different programs will have different identifiers. Generally it is a good practice to create a separate module for all things related to the HMI. That way every other module can be shared between programs without any problems.
Importing Modules
If you want to use a module which had been created before, click the Import Existing Module... button. This will open a window in which you can select modules to add. The path to the module will then be linked to the project. Similarly to creating a new module, you can choose whether the path to it will be relative or absolute.
Remember that modules are separate files and as such can be modified externally. This is especially important with modules which are shared
between multiple projects at the same time.
See also: Trick: INI File as a Module Not Exported to AVEXE.
Locking Modules
Sometimes users would like to hide the contents of some of their macrofilters to protect them against unauthorized access. They can do this by placing them inside a module and locking it. To do this, it is necessary to right-click on the module in the Project Explorer and select the Lock Module option. The user will then be prompted to provide a password for this specific module.
Keyboard Shortcuts
Introduction
Many Aurora Vision Studio actions can be invoked with keyboard shortcuts. Most of them have default shortcuts – such as copying with Ctrl+C, pasting with Ctrl+V, finding elements with Ctrl+F, saving with Ctrl+S, navigating with arrow keys etc.
In some controls – e.g. the formula editor – commonly used text-editor keyboard shortcuts are available: navigating through whole words with Ctrl+Left/Right Arrow, selecting consecutive letters/lines (Shift+Arrow keys), selecting whole words (Ctrl+Shift+Left/Right Arrow), increasing indents with Tab, decreasing them with Shift+Tab etc.
In this article you can find all of the less-known shortcuts listed.
Table of Contents
1. Program Editor
2. HMI Designer
3. Properties Control
4. 3D View
5. Deep Learning Editors
Shortcuts Table
Command – Shortcut

Program Editor

Run program – F5
Iterate program – F6
Ctrl+T
Insert new filter instance – Ctrl+Space
Toggle breakpoint for the currently selected filter and macrofilter outputs block – F9
Copy element – Ctrl+C or Ctrl+Insert
Paste element – Ctrl+V or Shift+Insert
HMI Designer

Properties Control
Change scale on property value slider from 1 to 10 – Hold Shift while modifying value with slider
Change scale on property value slider from 1 to 0.1 – Hold Ctrl while modifying value with slider
Change scale on property value slider from 1 to 0.01 – Hold Ctrl+Shift while modifying value with slider
3D View
Show grid – G
Move view up – Q or Up Arrow
Move view down – Z or Down Arrow
Move view left – A or Left Arrow
Move view right – D or Right Arrow
Zoom in – W or Page Up
Zoom out – S or Page Down
Toolbar
The toolbar appears the moment a point cloud is previewed:
Rotate – this button allows you to rotate a point cloud around the rotation center point.
Resetting view – this button allows you to revert changes made by rotating or panning a point cloud.
Probe Point Coordinates – this button allows you to obtain information about the coordinates of a point.
Bounding Boxes – this button allows you to display the bounding box of a point cloud.
Coloring Mode – this button allows you to switch between different coloring modes, which are divided into 4 categories:
Solid (the whole point cloud is of a single color)
Along X-axis (greater values along the X-axis are colored according to the colors' scale bar in the top right corner of the preview)
Along Y-axis (greater values along the Y-axis are colored according to the colors' scale bar in the top right corner of the preview)
Along Z-axis (greater values along the Z-axis are colored according to the colors' scale bar in the top right corner of the preview)
Grid Mode – this button allows you to display a grid plane to your liking. There are also 4 modes available:
None (no grid)
XY (the grid is displayed in the XY plane)
XZ (the grid is displayed in the XZ plane)
YZ (the grid is displayed in the YZ plane)
Point size – this button allows you to control the size of a single point within a point cloud. There are 5 possible sizes: Very small, Small, Medium, Large, Very large.
If you click Left Mouse Button on a preview, it works as if you used the Rotate button, so you can rotate a
point cloud around a fixed center point until the button is released.
If you click Right Mouse Button on a point cloud, a dropdown list with all features described in the previous
section appears.
If you use Mouse Wheel on a point cloud, you can zoom it in and out.
If you press Mouse Wheel on a point cloud, you can move the rotation center point of the coordinate system
(works like the "Pan" button).
Preview
When you drag and drop an output of a 3D data type onto a view where you have already previewed some images, strings or real values, please note that it can only be previewed in a separate view. It is not possible to display an image and a point cloud in the very same view.
While analyzing 3D data, please pay attention to the colors' scale bar in the top right corner of the preview:
Colors' scale bar.
It might be helpful if you want to estimate the coordinate value based on color.
You can also hide or remove surfaces from a view. To do it, you have to right-click on the information about a preview in the top left corner of the
view:
View3DBox HMI control - a point cloud is displayed in the same manner as in the preview.
It is worth mentioning that in this HMI control you can also rotate and change the rotation center of the point cloud. For more details about designing
and using HMI, please refer to HMI Controls.
Introduction
Deep Learning editors are dedicated graphical user interfaces for DeepModel objects (which represent training results). Each time users open such an editor, they are able to add or remove images, adjust parameters and perform new training.
Since version 4.10 it is also possible to open a Deep Learning Editor as a stand-alone application, which is especially useful for re-training models with new images in a production environment.
Requirements:
A Deep Learning license is required to use Deep Learning editors and filters.
Deep Learning Service must be up and running to perform model training.

Currently available deep learning tools are:
1. Anomaly Detection – for detecting unexpected object variations; trained with sample images marked simply as good or bad.
2. Feature Detection – for detecting regions of defects (such as surface scratches) or features (such as vessels on medical images); trained
with sample images accompanied with precisely marked ground-truth regions.
3. Object Classification – for identifying the name or the class of the most prominent object on an input image; trained with sample images
accompanied with the expected class labels.
4. Instance Segmentation – for simultaneous location, segmentation and classification of multiple objects in the scene; trained with sample
images accompanied with precisely marked regions of each individual object.
5. Point Location – for location and classification of multiple key points; trained with sample images accompanied with marked points of
expected classes.
6. Read Characters – for location and classification of multiple characters; this tool uses a pretrained model and cannot be trained, so it is not described in this article.
7. Object Location – for location and classification of multiple objects; trained with sample images accompanied with marked bounding
rectangles of expected classes.
Technical details about these tools are available at Machine Vision Guide: Deep Learning while this article focuses on the training graphical user
interface.
Workflow
You can open a Deep Learning Editor via:

A filter in Aurora Vision Studio:
1. Place the relevant DL filter (e.g. DL_DetectFeatures or DL_DetectFeatures_Deploy) in the Program Editor.
2. Go to its Properties.
3. Click on the button next to the inModelDirectory or inModelId.ModelDirectory parameter.

A standalone Deep Learning Editor application:
1. Open the standalone Deep Learning Editor application (it can be found in the Aurora Vision Studio installation folder as "DeepLearningEditor.exe", in the Aurora Vision folder in the Start menu, or in the Aurora Vision Studio application in the Tools menu).
2. Choose whether you want to create a new model or use an existing one:
Creating a new model: Select the relevant tool for your model and press OK, then select or create a new folder where the files for your model will be contained and press OK.
Choosing an existing model: Navigate to the folder containing your model files – either write a path to it, click on the button next to the field to browse to it, or select one of the recent paths if there are any; then press OK.

The Deep Learning model preparation process is usually split into the following steps:
1. Loading images – load training images from disk.
2. Labeling images – mark features or attach labels on each training image.
3. Setting the region of interest (optional) – select the area of the image to be analyzed.
4. Adjusting training parameters – select training parameters, preprocessing steps and augmentations specific for the application at hand.
5. Training the model and analyzing results.
Pre-processing button – located in the top toolbar; allows you to see the changes applied to a training image, e.g. grayscale or downsampling.
Current Model directory – located in the bottom toolbar; allows you to switch to a model in another directory or to simply see which model you are actually working on.
Show Model Details button – located next to the previous control; allows you to display information on the current model and save it to a file.
Train & Resume buttons – allow you to start training or resume it in case you have changed some of the training parameters.
Saving buttons:
Save – saves the current model in the chosen location.
Save & Exit – saves the model and then exits the Deep Learning Editor.
Exit Without Saving – exits the editor without saving the model.
Open automatic training window button – allows you to prepare a training series for different parameters. If you are not sure which parameter settings will give you the best result, you can prepare a combination for each value to compare the results. The test parameters can be prepared automatically with Generate new grid or entered manually. After setting the parameters you need to start the test. The settings and the results are shown in the grid, one row for one model.
Show columns – hides/shows the model parameters you will use in your test. The view is common for all deep learning tools. To create an appropriate grid search, choose the parameters appropriate for the tool being used (the ones you can see in Training Parameters). For DL_DetectAnomalies2, choose the network type first to see the appropriate parameters.
Generate new grid – prepares the search grid for the given parameters. Only parameters chosen in Show columns are available. The values should be separated with the ; sign.
Duplicate rows – duplicates a training parameters configuration. If the parameters inside the row are not modified, this model will be trained twice.
Import from editor – copies the training parameters from the Editor Window to the last search grid row.
Show report – shows the report for the chosen model (the chosen row). This option is available only if you chose Save Reports before starting the training session.
Additional options
Remove – removes a chosen training configuration.
Export grid to CSV file – exports the grid of training parameters to a CSV file.
Import grid from CSV file – imports the grid of training parameters from a CSV file.
Clear – clears the whole search grid.
Stopping conditions of a single training – determines when a single training stops.
Detecting anomalies 1
In this tool the user only needs to mark which images contain correct cases (good) or incorrect ones (bad).
1. Marking Good and Bad samples and dividing them into Test and Training data.
Click on a question mark sign to label each image in the training set as Good or Bad. Green and red icons on the right side of the training images indicate to which set the image belongs. Alternatively, you can use Add images and mark.... Then divide the images into Train or Test by clicking on the left label on an image. Remember that all bad samples should be marked as Test.
Labeled images in Deep Learning Editor.
2. Configuring augmentations
It is usually recommended to add some additional sample augmentations, especially when the training set is small. For example, the user can add additional variations in pixel intensity to prepare the model for varying lighting conditions on the production line. Refer to the "Augmentation" section for a detailed description of parameters: Deep Learning – Augmentation.
6. Analyzing results
The window shows a histogram of sample scores and a heatmap of found defects. The left column contains a histogram of scores computed for
each image in the training set. Additional statistics are displayed below the histogram.
To evaluate the trained model, the Evaluate: This Image or Evaluate: All Images buttons can be used. It can be useful after adding new images to the data set or after changing the area of interest.
The histogram tool where green bars represent correct samples and red bars represent anomalous samples. T marks the main threshold and T1,
T2 define the area of uncertainty.
Left: a histogram presenting well-separated groups indicating a good accuracy of the model. Right: a poor accuracy of the model.
Setting ROI.
5. Result analysis
Image scores (heatmaps) are presented in a blue-yellow-red color palette after using the model to evaluate an image. The color represents the probability of the element belonging to the currently selected feature class.
Evaluate: This Image and Evaluate: All Images buttons can be used to classify images. It can be useful after adding new images to the data set or
after changing the area of interest.
Classifying objects
In this tool, the user only has to label images with respect to a desired number of classes. Theoretically, the number of classes that a user can create is infinite, but please note that you are limited by the amount of data your GPU can process. Labeled images will allow training a model and determining features which will be used to evaluate new samples and assign them to proper classes.
2. Labeling samples
Labeling of samples is possible after adding training images. Each image has a corresponding drop-down list which allows for assigning a specific
class. It is possible to assign a single class to multiple images by selecting desired images in Deep Learning Editor.
6. Analyzing results
The window shows a confusion matrix which indicates how well the training samples have been classified.
The image view contains a heatmap which indicates which part of the image contributed the most to the classification result.
Evaluate: This Image and Evaluate: All Images buttons can be used to classify training images. It can be useful after adding new images to the
data set or after changing the area of interest.
Confusion matrix and class assignment after the training.
Sometimes it is hard to guess the right parameters in the first attempt. The picture below shows a confusion matrix that indicates inaccurate classification during the training (left).
Confusion matrices for a model that needs more training (left) and for a well-trained model (right).
It is possible that the confusion matrix indicates that the trained model is not 100% accurate with respect to the training samples (numbers assigned exclusively on the main diagonal represent 100% accuracy). The user needs to properly analyze this data and use it to their advantage. To improve the model one can:
increase the network depth,
prolong training by increasing the number of iterations,
increase the amount of data used for training,
use augmentation,
increase the detail level parameter.
Segmenting instances
In this tool a user needs to draw regions (masks) corresponding to the objects in the scene and specify their classes. These images and masks are used to train a model which in turn is used to locate, segment and classify objects in the input images.
2. Labeling objects
After adding training images and defining classes a user needs to draw regions (masks) to mark objects in images.
To mark an object the user needs to select a proper class in the Current Class drop-down menu and click the Add Instance button (green plus). Alternatively, for convenience of labeling, it is possible to apply Automatic Instance Creation, which allows a user to quickly draw masks on multiple objects in the image without having to add a new instance every time.
Use the drawing tool to mark objects on the input images. Multiple tools such as brush and shapes can be used to draw object masks. Masks have the same color as previously defined for the selected classes.
The Marked Instances list in the top left corner displays a list of defined objects for the current image. If an object does not have a corresponding
mask created in the image, it is marked as "(empty)". When an object is selected, a bounding box is displayed around its mask in the drawing area.
A selected object can be modified in terms of its class (Change Class button) as well as its mask (by simply drawing new parts or erasing existing ones). The Remove Instance button (red cross) allows you to completely remove a selected object.
Labeling objects.
During training, the following information is presented: the current iteration number, current training statistics (training and validation error), the number of processed samples and the elapsed time.
6. Analyzing results
The window shows the results of instance segmentation. Detected objects are displayed on top of the images. Each detection consists of the following data: class (identified by a color), bounding box, model-generated instance mask and confidence score.
Evaluate: This Image and Evaluate: All Images buttons can be used to perform instance segmentation on the provided images. It can be useful
after adding new images to the data set or after changing the area of interest.
Instance segmentation results visualized after the training.
Instance segmentation is a complex task, therefore it is highly recommended to use data augmentations to improve the network's ability to generalize learned information. If results are still not satisfactory, the following standard methods can be used to improve model performance: providing more training data, increasing the number of training iterations, increasing the network depth.

Locating points
In this tool the user defines classes and marks key points in the image. This data is used to train a model which then is used to locate and classify
key points in images.
1. Defining classes
First, a user needs to define classes of key points that the model will be trained on and later used to detect. Point location model can deal with single
as well as multiple classes of key points.
Class editor is available under the Class Editor button.
To manage classes, Add, Remove or Rename buttons can be used. Color of each class can be changed using Change Color button.
During training, the following information is presented: the current iteration number, current training statistics (training and validation error), the number of processed samples and the elapsed time.
Training point location model.
Training may be a long process. During this time, training can be stopped. If no model is present (first training attempt), the model with the best validation accuracy will be saved. Consecutive training attempts will prompt the user whether to replace the old model.
6. Analyzing results
The window shows the results of point location. Detected points are displayed on top of the images. Each detection consists of the following data: visualized point coordinates, class (identified by a color) and confidence score.
Evaluate: This Image and Evaluate: All Images buttons can be used to perform point location on the provided images. It may be useful after adding new training or test images to the data set or after changing the area of interest.
If the results are not satisfactory, the standard methods to improve model performance are: changing the feature size, providing more training data, increasing the number of training iterations, increasing the network depth.
Locating objects
In this tool a user needs to draw rectangles bounding the objects in the scene and specify their classes. These images and rectangles are used to train a model to locate and classify objects in the input images. This tool does not require the user to mark the objects as precisely as it is required for segmenting instances.
1. Defining classes
First, a user needs to define classes of objects that the model will be trained on and later used to detect. Object location model can deal with single
as well as multiple classes of objects.
Class editor is available under the Class Editor button.
To manage classes, Add, Remove or Rename buttons can be used. Color of each class can be changed using Change Color button.
Using Class Editor.
Marking rectangles.
During training, the following information is presented: the current iteration number, current training statistics (training and validation error), the number of processed samples and the elapsed time.
6. Analyzing results
The window shows the results of object location. Detected objects are displayed on top of the images. Each detection consists of the following data: visualized rectangle (object) coordinates, class (identified by a color) and confidence score.
Evaluate: This Image and Evaluate: All Images buttons can be used to perform object location on the provided images. It may be useful after adding new training or test images to the data set or after changing the area of interest.
If the results are not satisfactory, the standard methods to improve model performance are: changing the detail level, providing more training data, increasing the number of training iterations or extending the duration of the training.

See also:
Machine Vision Guide: Deep Learning – Deep Learning technique overview,
Deep Learning Installation – installation and configuration of Aurora Vision Deep Learning.

Managing Workspaces

The Workspace window enables a convenient way to store datasets grouped by the same category or purpose. For example, a single workspace may be created for a single project like "Part Inspection" and each included dataset may represent an inspection from a different day or images of different part types.
Overview
The Filmstrip control is a powerful tool for controlling the execution of the program in the Offline Mode, where the Filmstrip data is accessed with the ReadFilmstrip filters and through the bound outputs of the Online-Only filters.
The data presented in the Filmstrip control is arranged in a grid layout, where the rows represent the Channels and the columns represent the Samples.
Additionally, the control enables most common operations over the current workspace, i.e., adding Datasets and Channels.
Changing the current dataset is as easy as selecting one from the datasets combo box:
Dataset selection.
To keep the program consistent, the dataset to be selected must contain channels with the same names as the channels bound in the current Worker Task.
Common Tasks
Dragging a channel from the Filmstrip control onto the Program Editor empty area inserts the ReadFilmstrip filter assigned to that channel.
Dragging a channel from the Filmstrip control onto a filter instance output binds the output with that channel, as long as: the filter is an Online-Only filter, the filter is in the ACQUIRE section of the Worker Task, and the output data type is the same as the channel data type.
Dragging files onto the Filmstrip control empty area creates a new Channel with the dragged files included.
Dragging files onto an existing channel within the Filmstrip control appends the dragged data to that channel, if only the dragged data type matches the channel type.
Double-click on a Filmstrip sample executes one program iteration with the clicked sample. Requirements: the Offline mode is active, and there is at least one channel assigned in the Single-Threaded application's Worker Task or in the Multi-Threaded application's Primary Worker Task.

See Also
1. Managing Workspaces – extensive description of how to manage dataset workspaces in Aurora Vision Studio.
2. Offline Mode – the mode that enables access to the Channel data items.
4. Extensibility
Table of Contents
1. Introduction
1. Prerequisites
2. User Filter Libraries Location
3. Adding New Global User Filter Libraries
4. Adding New Local User Filter Libraries
2. Developing User Filters
1. User Filter Project Configuration
2. Basic User Filter Example
3. Structure of User Filter Class
1. Structure of Define Method
2. Structure of Invoke Method
4. Using Arrays
5. Diagnostic Mode Execution and Diagnostic Outputs
6. Filter Work Cancellation
7. Using Dependent DLL
3. Advanced Topics
1. Using the Full Version of AVL
2. Accessing Console from User Filter
3. Generic User Filters
4. Creating User Types in User Filters
4. Troubleshooting and examples
1. Upgrading User Filters to Newer Versions of Aurora Vision Studio
2. Building x64 User Filters in Microsoft Visual Studio Express Edition
3. Remarks
4. Example: Image Acquisition from IDS Cameras
5. Example: Using PCL library in Aurora Vision Studio
Introduction
User filters are written in C/C++ and allow advanced users to extend the capabilities of Aurora Vision Studio with virtually no constraints. They can be used to support a new camera model, to communicate with external devices, to add application-specific image processing operations and more.
Prerequisites
To create a user filter you will need:
an installed Microsoft Visual Studio 2015/2017/2019 for C++, Express Edition (free) or any higher edition,
the environment variable AVS_PROFESSIONAL_SDK5_3 in your system (depending on the edition; a proper value of the variable is set during the installation of Aurora Vision Studio),
C/C++ programming skills.

User filters are grouped in user filter libraries. Every user filter library is a single .dll file built using Microsoft Visual Studio. It can contain one or more filters that can be used in programs developed with Aurora Vision Studio.

User Filter Libraries Location
There are two types of user filter libraries:
Global – once created or imported to Aurora Vision Studio they can be used in all projects. The filters from such libraries are visible in the Libraries tab of the Toolbox.
Local – belong to specific projects of Aurora Vision Studio. The filters from such libraries are visible only in the project that the library has been added to.

A solution (.sln file) of a global user filter library can be located in any location on your hard disk, but the default and recommended location is Documents\Aurora Vision Studio 5.3 Professional\Sources\UserFilters (the exact path can vary depending on the version of Aurora Vision Studio). The output .dll file built using Microsoft Visual Studio and containing global user filters has to be located in Documents\Aurora Vision Studio 5.3 Professional\Filters\x64 (this time the exact path depends on the version and the edition) and this path is set in the initial settings of the generated Microsoft Visual Studio project. For global user filter libraries, this path must not be changed because Aurora Vision Studio monitors this directory for changes of the .dll files. The Global User Filter .dll file for Aurora Vision Executor has to be located in Documents\Aurora Vision Studio 5.3 Runtime\Filters\x64 (again, the exact path depends on the version and edition). For the 32-bit edition the last subdirectory should be changed from x64 to Win32. The Local User Filter .dll file for Aurora Vision Executor has to be located in the path configured in the User Filter properties. You can modify this path by editing the user filter library properties in the Project Explorer.
A local user filter library is a part of the project developed with Aurora Vision Studio and both source and output .dll files can be located anywhere on
the hard drive. Use the Project Explorer task panel to check or modify paths to the output .dll and Microsoft Visual Studio solution files of the user filter
library. The changes of the output .dll file are monitored by Aurora Vision Studio irrespective of the file location. It is a good practice to keep the local user filter library sources (and the output .dll) relative to the location of the developed project, for example in a subdirectory of the project.
The other option is to use the Create New Local User Filter Library button in the Project Explorer panel.
A dialog box will appear where you should choose:
a name for the new library,
the type of the library: local (available in the current project only) or global (available in all projects),
the location of the solution directory,
the version of Microsoft Visual Studio (2015, 2017 or 2019),
whether Microsoft Visual Studio should be opened automatically,
whether the code of example user filters should be added to the solution – a good idea for users with less experience in user filter programming.
If you choose Microsoft Visual Studio to be opened, you can build this solution instantly. A new library of filters will be created and after a few seconds
loaded to Aurora Vision Studio. Switch back to Aurora Vision Studio to see the new filters in:
You can work simultaneously in both Microsoft Visual Studio and Aurora Vision Studio. Any time the C++ code is built, the filters will get reloaded.
Just re-run your program in Aurora Vision Studio to see what has changed.
If you do not see your filters in the above-mentioned locations, make sure that they have been compiled correctly in an architecture compatible with
your Aurora Vision Studio architecture: x86 (Win32) or x64.
If you want to configure the existing project to be a valid user filter library, please use the proper .props file (file with v140 suffix is dedicated for
Microsoft Visual Studio 2015, v141 for 2017 and v142 for 2019).
#include "UserFilterLibrary.hxx"
namespace avs
{
// Example image processing filter
class CustomThreshold : public UserFilter
{
private:
// Non-trivial outputs must be defined as a filed to retain data after filter execution.
avl::Image outImage;
public:
// Defines the inputs, the outputs and the filter metadata
void Define() override
{
SetName (L"CustomThreshold");
SetCategory (L"Image::Image Thresholding");
SetImage (L"CustomThreshold_16.png");
SetImageBig (L"CustomThreshold_48.png");
SetTip (L"Binarizes 8-bit images");
ReadInput(L"inImage", inImage);
ReadInput(L"inThreshold", inThreshold);
if (inImage.Type() != avl::PlainType::UInt8)
throw atl::DomainError("Only uint8 pixel type are supported.");
// Continue program
return INVOKE_NORMAL;
}
};
List of attributes:

ArraySync – defines a set of synchronized ports. Example: L"inA inB" – informs that arrays in inA and inB require the same number of elements.
FilterGroup – defines an element of a filter group. Example: L"FilterName<VariantName> default ## Description for group" – creates a FilterName group with the default variant VariantName; a more detailed description can be found in "Defining Filters Groups".
CustomHelpUrl – defines an alternative help URL for this filter. Example: L"http:\\adaptive-vision.com" – when the user presses F1 in the Program Editor, the alternative help page will be opened.
Several filters can be grouped into a single group, which makes it easy for the user to switch between variants of very similar operations.
To create a filter group, define the L"FilterGroup" attribute for the default filter with the parameter L"FilterName<VariantName> default ## Description for group". Notice the "default" word. The text after "##" defines the tooltip for the whole group.
If the default filter is defined, you can add another filter using L"FilterGroup" with the parameter L"FilterName<NextVariant>".
Example usage:
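A sketch of such attribute definitions (AddAttribute is a hypothetical name for the SDK's attribute-setting call; consult the user filter SDK headers for the exact method):

// Hypothetical attribute-setting call; the actual SDK method name may differ.
// In the Define method of the default variant's filter:
AddAttribute(L"FilterGroup", L"Find<Circle> default ## Finds a primitive in the image");

// In the Define methods of the other variants' filters:
AddAttribute(L"FilterGroup", L"Find<Rectangle>");
AddAttribute(L"FilterGroup", L"Find<Polygon>");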
As a result, a filter group Find will be created with three variants: Circle, Rectangle, Polygon.
Using the SetImage and SetImageBig methods the user can assign a custom icon to a user filter. The filter icon must be located in the same directory as the output user filter DLL file.
There are four types of icons:
Small Icon – an icon of size 16x16 pixels used in the Libraries tab; set by SetImage; the name should end with "_16".
Medium Icon – an icon of size 24x24 pixels, created automatically from the Big Icon.
Big Icon – an icon of size 48x48 pixels; set by SetImageBig; the name should end with "_48".
Description Icon – an icon of size 72x72 used in filter selection from a group; the name is created by replacing "_48" from SetImageBig with "_D". For "custom_48.png" given in SetImageBig, the name "custom_D.png" will be generated.
Structure of Invoke Method
An Invoke method has to contain 3 elements:
1. Reading data from inputs
To read the value passed to the filter input, use the ReadInput method. This is a template method supporting all Aurora Vision Studio data
types. ReadInput method returns the value (by reference) using its second parameter.
2. Computing output data from input data
It is the core part. Any computations can be done here.
3. Writing data to outputs
Similarly to reading, there is a method WriteOutput that should be used to set values returned from filter on filter outputs.
Data types that do not contain blobs (e.g. int, avl::Point2D, avl::Rectangle2D) can be simply returned by passing a local variable to the WriteOutput method. Output variables with blobs (e.g. avl::Image, avl::Region) should be declared at least in the class scope.
All non-trivial data types like Image, Region or ByteBuffer should be defined as a filter class field.
This solution has two benefits:
1. It reduces the performance overhead of creating new objects in each filter execution,
2. It assures that types which contain blobs are not released after the filter execution.
For the sake of clarity it is a good habit to define all filter variables as class members.
int Invoke() override
{
	// ... computing the image into the outImage class member ...
	WriteOutput(L"outImage", outImage);
	return INVOKE_NORMAL;
}
Using Arrays
User filters can process not only single data objects, but also arrays of them. In Aurora Vision Studio, arrays are represented by data types with the suffix Array (e.g. IntegerArray, ImageArray, RegionArrayArray). Multiple Array suffixes are used for multidimensional arrays. In the C++ code of user filters, the atl::Array<T> container is used for storing objects in arrays:
For more information about types from the atl and avl namespaces, please refer to the documentation of Aurora Vision Library.
Advanced Topics
Using the Full Version of AVL
By default, user filters are based on Aurora Vision Library Lite library, which is a free edition of Aurora Vision Library Professional. It contains data
types and basic functions from the 'full' edition of Aurora Vision Library. Please refer to the documentation of Aurora Vision Library Lite and Aurora
Vision Library Professional to learn more about their features and capabilities.
If you have bought a license for the 'full' Aurora Vision Library, you can use it in user filters instead of the Lite edition. The following steps are required:
In compiler settings of the project, add the additional include directory $(AVL_PATH5_3)\include (Configuration Properties | C/C++ | General | Additional Include Directories).
In linker settings of the project, add the new additional library directory $(AVL_PATH5_3)\lib\$(Platform) (Configuration Properties | Linker | General | Additional Library Directories).
In linker settings of the project, replace the AVL_Lite.lib additional dependency with AVL.lib (Configuration Properties | Linker | Input | Additional Dependencies).
In the source code file, change including AVL_Lite.h to AVL.h.

Accessing Console from User Filter

It is possible to add messages to the console of Aurora Vision Studio from within the Invoke method. Logging messages can be used for problem visualization, but also for debugging. To add a message, use one of the following functions:
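bool LogInfo (const atl::String& message);
bool LogWarning (const atl::String& message);

For example (a sketch; the message texts are arbitrary):

LogInfo(L"Processing started");
LogWarning(L"Empty input region");

Generic User Filters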
In the Invoke method of a user filter, the GetTypeParam function can be used to resolve the data type that the filter has been concretized with. Once
the data type is known, the data can be properly processed using the if-else statement. Please see the example below.
if (type == "Integer")
{
atl::Array< int > ints = GetInputArray< int >("inArray");
arrayByteSize = ints.Size() * sizeof(int);
}
else if (type == "Image")
{
atl::Array< avs::Image > images = GetInputArray< avs::Image >("inArray");
arrayByteSize = 0;
for (int i = 0; i < images.Size(); ++i)
arrayByteSize += images[i].pitch * images[i].height;
}
enum PartType
{
Nut
Bolt
Screw
Hook
Fastener
}
struct Part
{
String Name
Real Width
Real Height
Real Tolerance
}
In your C++ code declare structures/enums with the same field types, names and order. If you create an enum, you can start using this type in your project instantly. For structures you must provide ReadData and WriteData function overrides for serialization and deserialization.
In these functions you should serialize/deserialize all fields of your structure in the same order as declared in the type definition file.
To support structure Part from the previous example in your source code you should add:
Structure declaration:
struct Part
{
atl::String Name;
float Width;
float Height;
float Tolerance;
};
Enum declaration:
enum PartType
{
Nut,
Bolt,
Screw,
Hook,
Fastener
};
Remarks
If you get problems with PDB files being locked, kill the mspdbsrv.exe process using Windows Task Manager. It is a known issue in Microsoft Visual Studio. You can also switch to using the Release configuration instead.
User filters can be debugged. See Debugging User Filters.
A user filter library (a .dll file) that has been built using the SDK from one version of Aurora Vision Studio is not always compatible with other versions. If you want to use the user filter library with a different version, it may be required to rebuild the library.
If you use Aurora Vision Library ('full' edition) in user filters, Aurora Vision Library and Aurora Vision Studio should be in the same version.
A solution of a user filter library can be generated with example filters. If you are a beginner in writing your own filters, it is probably a good idea to study these examples.
Only compiling your library in the Release configuration lets you use it on other computers. You cannot do so if you use the Debug configuration.

Example: Image Acquisition from IDS Cameras

One of the most common uses of user filters is communication with hardware which does not (fully) support the standard GenICam industrial interface. Aurora Vision Studio comes with a ready example of such a user filter – for image acquisition from cameras manufactured by the IDS company. You can use this example as a reference when implementing support for your specific hardware.
The source code is located in the directory:
To run this example, the PCL library must be installed and the system variable PCL_ROOT must be defined.
5. Go to Debugging section:
4. In the Attach to Process dialog box, find AuroraVisionStudio.exe process from the Available Processes list.
5. In the Attach to box, make sure Native code option is selected.
Debugging Tips
User filters have access to the Console window of Aurora Vision Studio. It can be helpful during debugging of user filters. To write on the Console, please use one of the functions below:

bool LogInfo (const atl::String& message);
bool LogWarning (const atl::String& message);

To write messages on the Output window of Microsoft Visual Studio, please use the standard OutputDebugString function (declared in Windows.h).

Creating User Types

In Aurora Vision Studio it is possible for the user to create custom types of data. This can be especially useful when it is needed to pass multiple parameters conveniently throughout your application or when creating User Filters.
The user can define a structure with named fields of specified types as well as custom enumeration types (depicting several fixed options). For example, the user can define a structure which contains such parameters as width, height, value and position in a single place. Also, the user can define named program states by defining an enumeration type with options: Start, Stop, Error, Pause, etc.
struct Part
{
String Name
Real Width
Real Height
Real Tolerance
}
Save your file and reload the project. Now the newly created type can be used as any other type in Aurora Vision Studio.
After reloading the project the custom made type is available in Aurora Vision Studio.
Also custom enumeration types can be added this way. To create a custom enumeration type add the code below to the top of your AVCODE file.
enum PartType
{
Nut
Bolt
Screw
Hook
Fastener
}
5. Human Machine Interface (HMI)
Table of content:
Designing HMI
Standard HMI Controls
Handling HMI Events
Saving a State of HMI Applications
Protecting HMI with a Password
Creating User Controls
List of HMI Controls
Designing HMI
Introduction
Although image analysis algorithms are the central part of any machine vision application, a human-machine interface (HMI, end user's graphical
environment) is usually also very important. Aurora Vision Studio, as a complete software environment, comes with an integrated graphical designer,
which makes it possible to create end user's graphical interfaces in a quick and easy way.
A very simple HMI example: at design time (left) and at run time (right).
The building blocks of a user interface are called controls. These are graphical components such as buttons, check-boxes, image previews or
numeric indicators. The user can design a layout of an HMI in an arbitrary way by selecting controls from the HMI Controls window and by placing
them on the HMI Canvas. The Properties window will be used to customize such elements as color, font face or displayed text. There are also some
controls, called containers, which can contain other controls in a hierarchical way. For example, a TabControl can have several tabs, each of which
can contain a different set of other controls. It can be used to build highly sophisticated interfaces.
HMI controls are connected with filters in the standard data-flow way: through input and output ports. There are three possible ways of connecting
controls with filters:
From a filter's output to a control's input – e.g. for displaying an image or a text result.
From a control's output to a filter's input, as data sources – e.g. for setting various parameters in a program with track-bars or check-boxes.
From a control's output to a filter's input, as events – e.g. for signaling that a button has been clicked.
There are also a few properties in HMI controls that can be connected directly between two different controls. See Label.AutoValueSource and the EnabledManager control.
Overview of Capabilities
The HMI Designer in Aurora Vision Studio is designed for very easy creation of custom user interfaces which resemble physical control panels (a.k.a.
front panels). With such an interface the end user will be able to control the execution process, set inspection parameters and observe
visualization results. There are also more advanced features for protecting the HMI with passwords, saving its state to a file, creating multi-screen
interfaces or even for allowing the end user to create object models or measurement primitives.
There is, however, a level of complexity, at which other options of creating end user's graphical interfaces may become suitable. These are:
Creating custom HMI controls in the C# programming language – especially for dynamic or highly interactive GUI elements, e.g. charts.
Using .NET Macrofilter Interfaces and then creating the entire HMI in the C# programming language.
Using C++ Code Generator and then creating the entire HMI in the C++ programming language.
Note: Aurora Vision Executor with HMI support is only available on the Microsoft Windows operating system. For Linux we recommend generating C++ code and creating the user interface with the Qt library.
This adds "HMI - Design" special view (which can be undocked), and a new window, HMI Controls. The Properties window shows parameters of the
selected control – here, of the main HMI Canvas.
Elements of the HMI Designer: (1) HMI Controls catalog, (2) HMI Editor, (3) Control's Properties + its context-sensitive help.
HMI Canvas is an initial element in the HMI Design view. It represents the entire window of the created application. At the beginning it is empty, but
controls will be placed on it throughout the HMI construction process.
Removing HMI
If at any point you decide that the created HMI is not needed anymore, it can be removed with the Edit » Remove HMI command available in the
Main Menu.
Basic Workflow
The HMI design process consists in repeating the following three steps:
1. Drag & drop a control from HMI Controls to HMI Editor. Set its location and size.
2. Set properties of the selected control.
3. Drag & drop a connection between the control's input or output with an appropriate filter port in Program Editor.
Sample macrofilter for explicit communication with HMI from multiple places of a program.
If more overlays are required on a single image, then we use multiple Image Drawing filters connected sequentially.
Note: If many drawing filters are connected in a sequence, then performance may suffer as Aurora Vision Studio keeps a copy of an image for each
of the filters. A recommended work-around is to encapsulate the drawing routine in a very simple user filter – because on the C++ level (with AVL Lite
library) it is possible to do drawing "in place", i.e. without creating a new copy of the image. Please refer to the "User Filter Drawing Results" example
in Aurora Vision Studio. Make sure you build it in the "Release" configuration and for the appropriate platform (Win32 / x64).
Example 2a:
The ScanMultipleStripes filter has outputs named outStripes.Width and outStripes.Magnitude, which are of RealArray type. We want to display
these numbers in the HMI with a DetailedListView control, which has several inputs of StringArray type.
The solution is to convert each Real number into a String using the FormatRealToString filter. The pictures below demonstrate the visualization of two columns of numbers with different parameters used in the formatting filters:
Example 2b:
If a similar array of numbers is to be displayed in a Label control, which has a single String on the input (not an array), then some more
preprocessing is still required – in the FormatRealToString filter we should add a new-line character to the inSuffix input and then use
ConcatenateStrings_OfArray to join all individual strings into a single one.
See Also
Standard HMI Controls Handling HMI Events Saving State of HMI Controls Protecting HMI with Password
Standard HMI Controls
Introduction
There are several groups of UI components in the HMI Controls catalog:
Components – with non-visual elements which extend the HMI window functionality.
Containers – for organizing the layout with panels, groups, splitters and tab controls.
Controls – with standard controls for setting parameters and controlling the application state.
File System – for choosing files and directories.
Indicators – for displaying inspection results or status.
Logic and Automation – for binding some HMI properties with each other, also with basic AND/OR conditions.
Multiple Pages – for creating multi-screen applications.
Password Protection – for limiting access to some parts of an HMI to authorized personnel.
Shape Editors – with controls that allow the end user to define object models or measurement primitives.
Shape Array Editors – with controls that allow the end user to define an array of object models or measurement primitives.
State Management – for loading and saving state of the controls to a file.
Video Box – with several variants of the VideoBox control for high-performance image display.
In a basic machine vision application you will need one VideoBox control for displaying an image, possibly with some graphical overlays, and a couple of TrackBars, CheckBoxes, TextBoxes etc. for setting parameters. You will also often use Panels, GroupBoxes, Labels and ImageBoxes to organize and decorate the window space.
Common Properties
Many properties are the same for different standard controls. Here is a summary of the most important ones:
AutoSize – specifies whether a control will automatically size itself to fit its contents.
BackColor – the background color of the component.
BackgroundImage – the background image used for the control.
BorderStyle – controls how the border of the control looks.
Enabled – can be used to make the control not editable by the end user.
Font – defines the size and style of the font used to display text in the control.
ForeColor – the foreground color of the control, which is used to display text.
Text – a string being displayed in the control.
Visible – can be used to make the control invisible (hide it).
Three sample labels with different background colors.
Three sample labels with different border styles.
Three sample labels with different fonts.
Sample buttons, first enabled, second disabled.
With the available properties it is possible to create an application with virtually any look and feel. For example, to use an arbitrary design for a button, we can use a bitmap, while setting FlatAppearance.BorderSize = 0 and FlatStyle = Flat:
HMI Canvas
HMICanvas control represents the entire window of the created application. Other controls will be placed on it. Properties of HMICanvas define some
global settings of the application, which are effective when it is run with Aurora Vision Executor (runtime environment):
GroupBox – can be used to group several controls within an enclosing frame with a label.
Panel – can be used to group several controls without additional graphical elements.
SplitContainer – can be used to create two panels with a movable boundary between them.
TabControl – can be used to create multiple tabs in the user interface.
VideoBox – the basic variant of the image display control.
SelectingVideoBox – also allows for selecting a box region by the end user.
ZoomingVideoBox – also allows for zooming and panning the image by the end user.
FloatingVideoWindow – a component that allows for opening an additional window for displaying images.
A VideoBox.
To display some additional information over images in a VideoBox it is necessary to modify the images using drawing filters before passing them to
the VideoBox. This gives the user full control over the overlay design, but may expand the program. To keep the program simple, while still being able
to display some overlay information, it is possible to use View2DBox controls, which can be found in the "Indicators" section. They accept images
and 2D primitives produced by various filters as input, however, the possibility of changing their style is limited:
TrackBar, Knob, NumericUpDown – for setting numerical values. You can set the Minimum and Maximum parameters and then the value entered by the end user will be available on the outValue output.
CheckBox, ToggleButton, OnOffButton, RadioButton – for turning something on or off. The outValue or outChecked output will provide the state of the setting.
ComboBox, EnumBox – for choosing one of several options.
TextBox – for entering textual data. The entered text will be available on the outText output.
Example: Using TrackBar
TrackBar by default has two connectable inputs and one connectable output. The inValue input and the outValue output represent the same property – the value indicated by the current state of the control. The second input, inEnabled, is a boolean value and controls whether the TrackBar is enabled or disabled in the user interface.
A track-bar.
Using EnumBox
EnumBox is very similar to ComboBox. The main difference between them is that the latter allows creating your own list of items while EnumBox
can display one of the Enum types available in Aurora Vision Studio e.g. ColorPalette, OCRModelType or RotationDirection.
Change Events
Controls that output a value adjustable by the user often also provide outputs informing about the moments when this value has been changed - so-
called events. For example, the ComboBox control has an output named outSelection for indicating the index of the currently selected list item, but
it also has outSelectionChanged output which is set to True for exactly one iteration when the selection has been changed.
Please note that most of the event triggering outputs e.g. outValueChanged, outSelectionChanged etc. are hidden by default and can be shown with
the "Edit Port Visibility..." command in the context menu of the control.
See also: Handling HMI Events.
Label – for displaying simple text results.
ListView, DetailedListView – for displaying arrays or tables.
PassFailIndicator – for displaying text on a colorful background depending on the inspection status.
BoolIndicatorBoard – for displaying inspection status where multiple objects or features are checked.
AnalogIndicator, AnalogIndicatorWithScales – for displaying numerical results in a nice-looking graphical way.
AnalogIndicator
AnalogIndicator and its variant AnalogIndicatorWithScales are used to display numerical results in a retro style – as analog gauges. The latter variant
has green-orange-red ranges defined with properties like GreenColorMinimum, GreenColorMaximum etc. The ranges are usually disjoint, but there
can also be nested ranges: e.g. the red color can span throughout the scale, and the green color can have a narrower range around the middle of the
scale. The green sector will be displayed on top, producing an effect of a single green sector, with two red sectors outside.
TabControl.
MultiPanelControl.
To create a multi-screen HMI, put a MultiPanelControl control in your window and fit it to the window size. In the upper-right corner of this control, you
will see a four-element Navigation Bar. Using the left and right arrows you will be able to switch between consecutive screens. The
frame button allows for selecting the parent control in the same way as the "Select: MultiPanelControl" command in the control's context menu. The
triple-dot button opens a configuration window as on the picture below:
MultiPanelControl Property Box.
Also, when you double-click on the area of a MultiPanelControl, the same configuration window appears. Using this window you are able to organize the pages. When working with pages there are two things you have to remember. First – all page names have to be globally unique. Second – one MultiPanelControl contains many MultiPanelPages. An important element is the home icon – use it to set the Initial Page of your application. It will be the first page to appear when the program starts.
A MultiPanelControl control is usually accompanied by some MultiPanelSwitchButton controls that navigate between the pages. Usage of a button
of this kind is very simple. Just place it on your window and set the TargetPage property (also available through double-clicking on the button). Page
switching buttons are usually placed within individual pages, but this is not strictly required – they can also be placed outside of the MultiPanelControl
control.
Every HMI Control has built-in input and output ports. In MultiPanelControl, by default, two ports are visible: "inActivePageName" and
"outActivePageName", but others are hidden. To make them visible click the right mouse button on the control and select Edit Ports Visibility.... The
"inActivePageName" input enables setting the active page directly from your program. It is another way to handle page changing. One is using the
dedicated buttons by the operator and here comes another method. It might be useful e.g. when "something happens" and you would like to change
the screen automatically, not by a button click. This can be e.g. information that a connection with remote host was lost and immediate
reconfiguration is needed. The "outActivePageName" by definition is the output which allows you to check the current active page.
Note: There is a tutorial example HMI Multipanel Control.
The Stop button is not available for the HMI, because the end user should not be able to stop the entire application at a random point of execution. If you have a good reason to make it possible, use a separate ImpulseButton control connected to an Exit filter.
ProgramControlButton and ProgramControlBox controls are useful for the purpose of fast prototyping. In many applications, however, it might be advisable to design an application-specific state machine.
File and Directory Picking
There are two controls for selecting paths in the file system: FilePicker and DirectoryPicker. They look similar to a TextBox, but they have a button next to them, which brings up a standard system file dialog for browsing a file or directory.
What is worth mentioning about these two controls is the option NilOnEmpty, which can be useful for conditional execution. As the name suggests, it causes the control to return the special Nil value instead of an empty string when no file or directory is chosen.
Shape Editors
Geometrical primitives, regions and fitting fields, that are used as parameters in image analysis algorithms, are usually created by the user of Aurora
Vision Studio. In some cases, however, it is also required that the end user is able to adjust these elements in the runtime environment. For that
purpose, in the "Shape Editors" section there are appropriate controls that bring graphical editors known from Aurora Vision Studio to the runtime
environment.
Note: Shape Array Editor controls allow creating multiple primitives of the same data type and return an array of primitives, e.g. Segment2DArray, Circle2DArray etc.
Each such editor has a form of a button with four ports visible by default:
inAlignment – for specifying an appropriate coordinate system for the edited shape.
inReferenceImage – for specifying a background image for the editor.
inValue – for setting the initial value of the edited shape.
outValue – for getting the value of the shape after it is edited by the end user.
outStoredReferenceImage – a reference image that was displayed in the editor during the last editing approved with the OK button. Active when inStoreReferenceImage is set to True.
Sample ShapeRegion editor control in the HMI Designer.
When the end user clicks the button, an appropriate graphical editor is opened:
Sample ShapeRegion editor opened after the button is clicked in the runtime environment.
The following properties, specific to these controls, allow for customizing the appearance of the opened editor:
HideSelectors – allows to simplify the editor by removing the image and coordinate system selectors.
Message – an additional text that can be displayed in an opened editor as a hint for the user.
WindowTitle – text displayed on the title bar of the opened editor's window.
Shape editors can be used for creating input values for filters or modifying existing values. Every shape editor has a proper data type assigned to inValue and outValue. The following list shows data types assigned to individual HMI controls and example filters that can be connected with them.
Temperature chart.
There are several properties specific to that control:
EnableAntiAliasing – when set to true, increases the render quality at the expense of performance.
GridMode – specifies visibility and orientation of the measurement grid displayed in the scene.
PointSize – specifies the display size of the cloud points relative to the size of the scene.
ProjectionMode – selects the view/projection mode of 3D visualization.
ScaleColoringMode – allows to enable automatic coloring of the cloud points according to the position along the specified axis.
ShowBoundingBoxes – enables displaying bounding boxes around point cloud primitives in the preview.
WorldOrientation – the type and orientation of a coordinate system used to display 3D primitives.
Interaction between the user and the program can be adjusted with parameters in the Behaviour section. To customize this interaction the following parameters should be changed:
Enabled – indicates whether the control is enabled.
EnableMouseNavigation – when set to true, enables the user to change the observer position using the mouse.
InitialViewState – specifies the initial position of the observer in the scene.
Visible – determines whether the control is visible or hidden.
Commonly used features of View3DBox control
To make point cloud differences along the Z axis more visible, change the ScaleColoringMode from Solid to ZAxis.
Point3DArray displaying
Point3DArray containing up to 300 points is displayed as an array of 3D marks (crosses). If there are more than 300 points, they are displayed as a
standard point cloud.
ActivityIndicator
ActivityIndicator is used to visualize a working or waiting state of the user interface. There are two appearance modes available: "Busy" and "Recording". Both are shown in the picture below. By setting its inActive input to True or False, the user can display an animation showing that the program is either calculating or recording something.
See also: Machine Vision Guide: Optical Character Recognition, Creating Text Segmentation Models, Creating Text Recognition Models.
TextSegmentationEditor
Once a TextSegmentationEditor is added, the following ports are visible:
inReferenceImage – an image set as the editor background.
inRoi – a region of interest for the reference image.
inRoiAlignment – an alignment for the region of interest. Its purpose is to adjust the ROI to the position of the inspected object.
outStoredReferenceImage – the image set as the editor background that can be sent to the algorithm.
outValue – a segmentation model. This value can be connected to the inSegmentationModel input of the ExtractText filter.
outValueChanged – an output of the Bool data type that is set to True for one iteration in which the outValue is changed.
After clicking on the TextSegmentationEditor button, the editor will be opened. However, to be able to open it, the program has to be executed.
OCRModelEditor
Once an OCRModelEditor is added, the following ports are visible:
KeyboardListener
KeyboardListener is used for getting information about keyboard events. This HMI control returns an IntegerArray where each value describes a
number represented in the ASCII code.
To use KeyboardListener or any other tools from the "Components" category, a control must be dropped at any place on the HMICanvas.
All default output values allow to check which key is pressed, but there are a few differences between them:
outKeysDown – emits an array of keys that have been recently pressed.
outKeysPressed – emits an array of currently pressed keys. Using this output is recommended in most cases.
outKeysUp – emits an array of keys that have been recently released.
To use those values in another place in the program, the best solution is to copy them with the CopyObject filter with the IntegerArray data type.
VirtualKeyboard
VirtualKeyboard is the HMI control which enables showing of automatic on-screen virtual keyboard for editing content of HMI controls.
This functionality can be used in a runtime environment when a physical keyboard is not available at the workstation.
To use VirtualKeyboard or any other tools from the "Components" category, a control must be dropped at any place on the HMICanvas.
The example of using VirtualKeyboard for entering data on HMI panel into TextBox control.
The user can set the initial size of the keyboard window by setting a proper InitialKeyboardSize value in the properties.
It is possible to display only a number pad keyboard for editing numerical values. To do that, EnableNumPadOnlyKeyboard must be set to True in
the properties.
The example of using VirtualKeyboard for entering data on HMI panel into NumericUpDown control.
ToolTip
ToolTip HMI control is used to display messages created by the user. Those messages are displayed when the user moves the pointer over an associated control.
This functionality can be used when a developer wants to give helpful tips to the end user.
To use ToolTip or any other tools from the "Components" category, a control must be dropped at any place on the HMICanvas.
Once the ToolTip control is added, each HMI Control has an additional parameter available under the "Misc" category in the Properties window. It is possible to use multiple ToolTips with different parameters.
Active – determines if the ToolTip is active. A tip will only appear if the ToolTip has been activated.
AutomaticDelay – sets the values of AutoPopDelay, InitialDelay and ReshowDelay to the appropriate values.
AutoPopDelay – determines the length of time the ToolTip window remains visible if the pointer is stationary inside a ToolTip region.
BackColor – the background color of the ToolTip control.
ForeColor – the foreground color of the ToolTip control.
InitialDelay – determines the length of time the pointer must remain stationary within a ToolTip region before the ToolTip window appears.
IsBalloon – indicates whether the ToolTip will take on a balloon form.
ReshowDelay – determines the length of time it takes for subsequent ToolTip windows to appear as the pointer moves from one ToolTip region to another.
ToolTipTitle – determines the title of the ToolTip.
UseAnimation – when set to true, animations are used when the ToolTip is shown or hidden.
UseFading – when set to true, a fade effect is used when the ToolTip is shown or hidden.
Information displayed with the default ToolTip settings and with the balloon form active.
ColorPicker
ColorPicker is a tool which allows the user to pick the color components in the runtime environment.
This control can be used e.g. when it is required to know the reference color used by an algorithm.
The outValue port contains the value of selected color and can be connected to all tools containing an input of the Pixel data type.
In a Color Picker window the user can use the palette of colors or enter the values of color components manually.
And – in1 and in2 and ... and inN (e.g.: {True, True, True, False} → False; {True, True, True, True} → True)
NotAnd – not (in1 and in2 and ... and inN) (e.g.: {True, True, True, False} → True; {True, True, True, True} → False)
Or – in1 or in2 or ... or inN (e.g.: {False, False, False, False} → False; {False, True, False, False} → True)
NotOr – not (in1 or in2 or ... or inN) (e.g.: {False, False, False, False} → True; {False, True, False, False} → False)
The BoolAggregator control is itself a boolean value source and its result can be used as a value source by subsequent controls, like EnabledManager or another BoolAggregator. The result value can also be used in the program by connecting to the control's outValue output port (hidden by default). The result value is computed asynchronously to the main program execution.
EnabledManager
EnabledManager allows to automatically manage a control's enabled state (by writing to its Enabled property) using other HMI controls as a boolean value source.
It can be used with BoolAggregator in order to manage controls' enable states in more complex cases.
To use EnabledManager or any other tools from the "Logic and Automation" category, a control must be dropped at any place on the HMICanvas.
The control itself does not provide any properties or ports, but it activates the enable management functionality in the HMI design. Only one
EnabledManager can be added to the design.
Once the EnabledManager is added to the design, it allows sending an enable signal to every HMI Control. The user can specify the source of the enable signal by selecting it from the AutoEnabledSource property list available in the Properties window of arbitrary controls in the HMI design. It is also possible to negate the source data value by setting AutoEnabledNegate to true.
Note that when the AutoEnabledSource property is in use for a given control, the Enabled property of that control cannot be changed in other ways (e.g. by making a connection to it from the program), as the mechanisms would interfere with each other.
In the example below, the TextBox's enable state is controlled by a BoolAggregator which gathers signals from two OnOffButton HMI Controls.
Automation category in TextBox's properties.
EdgeModelEditor
EdgeModelEditor is the HMI control that makes it possible to create a Template Matching model in runtime environment by using an easy user
interface called "GUI for Template Matching".
The process of creating a model representing the expected object's shape is described in Creating Models for Template Matching article.
Once an EdgeModelEditor is added, the user can specify a reference image by connecting it to inReferenceImage input. The reference image can
also be loaded from a file by clicking Load Image... button available in the EdgeModel Editor's toolbar.
outValue output has EdgeModel data type. This output can be connected to inEdgeModel input in LocateObjects_Edges1 filter.
Besides standard parameters available in the Appearance, Design and Layout sections of the Properties window, the EdgeModelEditor can be
customized using the following parameters:
AllowChangingLevel – allows to change the plugin complexity level.
ComplexityLevel – default plugin complexity level.
Enabled – determines if the control is enabled.
HideIcon – determines if the editor icon is hidden.
HideSelectors – determines if the image and alignment selectors of EdgeModel Editor are hidden.
InfoMessageVisible – determines if the plugin's hint messages are displayed under the Reference image bar.
LoadImageButtonVisible – determines if a reference image can be loaded from the disc by using the "Load Image..." button available in the editor's toolbar.
StoreReferenceImage – if set to True, the selected reference image is available at the outStoredReferenceImage output.
WindowTitle – text displayed on the editor window bar.
GrayModelEditor
GrayModelEditor is the HMI control that makes it possible to create a Template Matching model in runtime environment by using an easy user
interface called "GUI for Template Matching".
The process of creating a model representing the expected object's shape is described in Creating Models for Template Matching article.
Once a GrayModelEditor is added, the user can specify a reference image by connecting it to inReferenceImage input. The reference image can
also be loaded from a file by clicking Load Image... button available in the GrayModel Editor's toolbar.
Besides standard parameters available in the Appearance, Design and Layout sections of the Properties window, the GrayModelEditor can be
customized using the following parameters:
AllowChangingLevel – allows to change the plugin complexity level.
ComplexityLevel – default plugin complexity level.
Enabled – determines if the control is enabled.
HideIcon – determines if the editor icon is hidden.
HideSelectors – determines if the image and alignment selectors of GrayModel Editor are hidden.
InfoMessageVisible – determines if the plugin's hint messages are displayed under the Reference image bar.
LoadImageButtonVisible – determines if a reference image can be loaded from the disc by using the "Load Image..." button available in the editor's toolbar.
StoreReferenceImage – if set to True, the selected reference image is available at the outStoredReferenceImage output.
WindowTitle – text displayed on the editor window bar.
To use the model created by GrayModelEditor, the user needs to connect the outValue output to the inGrayModel input of the LocateSingleObject_NCC filter.
GenICamAddressPicker
GenICamAddressPicker is used to choose GenICam GenTL device address in runtime environment. This control can be used with GenICam filters.
See also: examples where a GenICamAddressPicker control is used: HMI Grab Single Image and HMI Image Recorder.
GigEVisionAddressPicker
GigEVisionAddressPicker is used to choose GigE Vision device address in runtime environment. This control can be used with GigE Vision filters.
See also: an example where GigEVisionAddressPicker control is used: HMI Image Recorder.
MatrixEditor
MatrixEditor is used to modify an existing Matrix or create a new one in the runtime environment.
outValue returns the matrix which was modified or created in MatrixEditor.
Deep Learning
Pressing the Deep Learning Start Button allows you to enter the Deep Learning Editor from the level of the HMI.
ModelPath – path of a saved model. If empty or invalid, the Model Path Selection dialog box will be opened.
AlwaysOnTop – the editor will always be visible as a top window.
DisableChangingLocation – prevents the user from changing the model path.
Language – selects the language of the editor.
Handling HMI Events
Introduction
The HMI model in Aurora Vision Studio is based on the program loop. A macrofilter reads data from the HMI controls each time before its first filter is
executed. Then, after all filters are executed, the macrofilter sends results to HMI. This is repeated for each iteration. This communication model is
very easy to use in typical machine vision applications, which are based on a fast image acquisition loop and where the HMI is mainly used for setting
parameters and observing inspection results. In many applications, however, it is not only required to use parameters set by the end user, but also to
handle events such as button clicks.
To perform certain actions immediately after an event is raised, Aurora Vision Studio also comes with the HMI subsystem that handles events
independently of the program execution process. This allows for creating user interfaces that respond almost in real time to user input, e.g. clicking a button, moving the mouse cursor over an object or changing a value. This would not be possible if events were handled solely by the program execution process.
Event Handlers
An event handler is a subprogram in Aurora Vision Studio that contains the actions to be performed when an associated event occurs. In fact, it is a
macrofilter that is executed once for each received event. If there are several HMI controls whose events should trigger the same procedure, one
event-handling macrofilter can be used for all of them.
3. This will open the Events window with most commonly used events that can be handled. Click the drop-down list next to the event name and
select <add new...> as shown below:
NOTE: There are many more events available. If you want to see the complete list, right-click on the HMI Control, select Edit Ports and Events
Visibility... and go to the Events tab.
4. Define the name of the event handler in the pop-up window:
5. Some types of events, e.g. mouse move, can send additional information about the performed action. When creating an event handler you
need to specify whether input parameters (containing additional information) should be created. If you select this option, the created event
handler will have some inputs that can be used in the event-handling algorithm. A practical example of using input parameters is the mouse
click event handler on View2DBox HMI control:
NOTE: When input parameters are used, the event handler is assigned only to the HMI control it was created for, and cannot be used for other
controls. To create a universal event-handling macrofilter, which can contain an algorithm executed for several different HMI controls, choose
the second option in "Add New Event Handler" pop-up window.
6. Once the event handler is created, it is available in the Project Explorer:
Actions
When a button (ImpulseButton) is clicked, it will change its outValue output from False to True for exactly one iteration. This output is typically used
in one of the following ways:
1. As the forking input of a variant macrofilter, where the True variant contains filters that will be executed when the event occurs.
2. As the input of a formula block with a conditional operator (if-then-else or ?:) used within the formula to compute some value in a response to
the event.
State Transitions
Handling events that switch the application to another mode of operation is described in the Programming Finite State Machines article.
An example program structure with a fast cycle of HMI processing and a slow cycle of image analysis.
Logging Out
After you log in to the PasswordPanel, you can log out of it (lock its content) in the following ways:
by clicking the LogoutButton control placed inside PasswordPanel (you can add it from the Password Protection category),
by pressing the Esc key when keyboard focus is on a control inside PasswordPanel (and when the UseEscToLock property of the PasswordPanel control is set to True),
by pressing the Ctrl+L key combination when keyboard focus is on a control inside PasswordPanel.
You will also be automatically logged out after the period of time defined in the AutoLockTimeout property (in seconds) of the PasswordPanel control, if the system input has been idle for that period of time (no keystrokes or mouse clicks/moves).
Event Logging
All events related to each PasswordPanel are displayed in the console window and saved to the application log file. Descriptions of the following events are saved:
user logged in,
user logged out,
username or password was invalid,
user access level was lower than the RequiredAccessLevel.
Moreover, at the moment the user is logging out, the values of all HMI controls within the protected area are also saved. This makes it possible to conduct audit trails about who changed particular parameters and at what time.
A control can signalize that properties have been changed against their default values in one of the following two ways:
by marking a given property with the System.ComponentModel.DefaultValueAttribute attribute, which specifies a constant default value, or
by implementation of the ShouldSerializeXXX and ResetXXX mechanism.
Only one of the above mechanisms should be used. Not using any of them will cause a control property to be always serialized, and its value may not be reset in the editor.
During loading of an HMI from a file, control properties are initialized in unspecified order. If a control can enter an improper state when properties are set with erroneous values, or a control's property values depend on other properties (e.g. it limits the value to a given range between minimum and maximum), then it should implement the System.ComponentModel.ISupportInitialize interface, with the help of which the HMI subsystem will initialize properties in a single transaction.
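For illustration, a short C# sketch of both mechanisms inside a control class; the Threshold and Margin properties are hypothetical examples, and a real control would use only one mechanism per property:
// Variant 1: a constant default value declared with an attribute.
[System.ComponentModel.DefaultValue(128)]
public int Threshold { get; set; } = 128;
// Variant 2: ShouldSerializeXXX/ResetXXX methods, discovered by name.
public int Margin { get; set; } = 10;
private bool ShouldSerializeMargin() { return Margin != 10; } // serialize only when changed
private void ResetMargin() { Margin = 10; } // restores the default value in the editor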
Property values are serialized to a file in XML format, so every value must be convertible to a text that can be saved in an XML node. To ensure this, the types of data used in public properties must support conversion from and to text, independent of localization. Therefore, they need to have an assigned type converter which is capable of conversion in both directions with the System.String type. Most simple built-in types and types provided by the .NET Framework (such as System.Int32 or System.Drawing.Size) already support such conversion and do not require additional attention.
Requirements:
Installed Aurora Vision Studio environment (in version 4.3 or higher).
Installed Microsoft Visual Studio environment (in version 2015 or higher) with installed tools for the chosen language of the .NET platform (or another environment enabling creating applications for the Microsoft .NET platform based on .NET Framework 4.0).
Creating Modules of Controls
To create a module of controls, one should create a DLL library (Class Library) for the .NET environment, based on .NET Framework 4. The library must make control classes available as public types. Moreover, the library must make available one public class of arbitrary name which implements the HMI.IUserHMIControlsProvider interface. This class has to implement the GetCustomControlTypes method, which returns an array of types (instances of the System.Type class) indicating the class types of controls available through the library. The Aurora Vision Studio environment will create an instance of this class to obtain the list of controls available through the module.
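A minimal sketch of such a provider class; MyButton and MyGauge are hypothetical control classes standing in for the controls exported by the library:
using System;
public class MyControlsProvider : HMI.IUserHMIControlsProvider
{
    // Aurora Vision Studio calls this method to discover the controls
    // made available by the module.
    public Type[] GetCustomControlTypes()
    {
        return new Type[] { typeof(MyButton), typeof(MyGauge) };
    }
}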
Required data types from the HMI namespace are available through the HMI.dll library included in the Aurora Vision Studio installation directory. The user HMI controls project should add a reference to this file.
A DLL file prepared in such a way should be placed in the HMIControls folder in the installation folder of the adequate Aurora Vision product, or in the HMIControls folder in the sub-folder of the Aurora Vision product folder in user documents. Loading the module requires restarting the Aurora Vision Studio/Executor environment.
It is recommended to compile the user controls project using the "Any CPU" platform, thanks to which it will be usable in Aurora Vision Studio editions for various processor platforms. If a specific processor architecture must be used, make sure to use modules compiled for 32-bit processors with the Aurora Vision 32-bit environment and modules compiled for 64-bit processors with the 64-bit environment.
In order to create an output port from a control's property, the property must allow reading (it must implement a public get accessor). In order to create an input port, it must allow both reading and writing (it must implement both get and set public accessors).
In order to create a control port based on a given property, a HMI.HMIPortPropertyAttribute attribute should be specified on the
property, passing port visibility type as attribute argument. The attribute argument is a value of HMI.HMIPortDirection enumeration of bit
flags, which accepts the following values:
HMI.HMIPortDirection.Input – the property is visible in the editor as a control's input port, ready to connect with a program. By default, the port is visible and can be hidden by a user through the port visibility editor.
HMI.HMIPortDirection.Output – the property is visible in the editor as a control's output port, ready to connect with a program. By default, the port is visible and can be hidden by a user through the port visibility editor.
HMI.HMIPortDirection.HiddenInput – the property is available in the editor as a control's input port, but the port is hidden by default. Port visibility can be set by a user through the port visibility editor.
HMI.HMIPortDirection.HiddenOutput – the property is available in the editor as a control's output port, but the port is hidden by default. Port visibility can be set by a user through the port visibility editor.
HMI.HMIPortDirection.None – the property is not a port (using the property as a port is forbidden). The property is not available in the port visibility editor either.
It is also possible to use logic sums of the above values to create bidirectional ports.
If a public property is not marked with the HMI.HMIPortPropertyAttribute attribute, then in some cases, when the property type is a basic type, the property can be automatically shown in the port visibility editor. This means that there is a possibility for the end user to promote some properties to ports capable of being connected with the vision application. If a public property of a control should never be used as an HMI port, such a property should be marked with the HMI.HMIPortPropertyAttribute attribute set to the HMIPortDirection.None value.
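A sketch of how these flags can be applied to control properties; the property names are hypothetical examples:
[HMI.HMIPortProperty(HMI.HMIPortDirection.Input)]
public int Threshold { get; set; } // input port, visible by default
[HMI.HMIPortProperty(HMI.HMIPortDirection.HiddenOutput)]
public string LastMessage { get; private set; } // output port, hidden by default
// A logic sum of the flags creates a bidirectional port.
[HMI.HMIPortProperty(HMI.HMIPortDirection.Input | HMI.HMIPortDirection.Output)]
public double Value { get; set; }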
If the value of a property used as a port can change as a result of the control's internal action (e.g. as a result of editing the control's content by a user), then the control should implement a property value change notification mechanism. For each such property, a public event of the EventHandler type (or a derived type), with the name consistent with the property name extended with a "Changed" suffix, should be created. For instance, for a property named MyProperty, a MyPropertyChanged event should be created, which will be invoked by the user control implementation after each change of the property value. This mechanism is particularly necessary when a given property realizes both input and output ports, and can be changed by the control itself as well as by a write operation from the vision application.
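A sketch of this convention inside a control class, using a hypothetical MyProperty realized as a bidirectional port:
private double myProperty;
[HMI.HMIPortProperty(HMI.HMIPortDirection.Input | HMI.HMIPortDirection.Output)]
public double MyProperty
{
    get { return myProperty; }
    set
    {
        if (myProperty == value)
            return;
        myProperty = value;
        // Event name = property name + "Changed" suffix, as the convention requires.
        MyPropertyChanged?.Invoke(this, System.EventArgs.Empty);
    }
}
public event System.EventHandler MyPropertyChanged;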
In order for a control's property to be able to create a port capable of being connected with a vision application, its data type must support conversion to an AMR representation. A port can be connected only to the data types to which conversion is possible (when it is supported by the AMR converter assigned to the property data type). You will find more on AMR conversion below.
By default, properties of basic value types and reference types cannot accept or generate conditional data (by default, types are not conditional). In order for a property to be able to accept conditional types, or to generate conditional types causing conditional execution of connected filters (so that its data type, from the vision program's point of view, is conditional), it must be explicitly marked as conditional. In order to mark a property as conditional:
A control can also expose its events as ports. For example:
[HMI.HMIPortProperty(HMI.HMIPortDirection.Output)]
public event EventHandler MyEvent;
Event ports are of the Bool type in the AVS environment. During normal operation, the False value will be returned from such a port. After the underlying event has been invoked by the control, the port will return the True value for a single read operation, effectively creating an impulse of the True value for the time of a single vision program iteration. This can be used, for example, to control the flow in a Variant Step macrofilter.
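For example, assuming the control derives from System.Windows.Forms.Control, the implementation might raise the event from a click handler; each invocation produces a one-iteration True impulse on the port:
protected override void OnClick(System.EventArgs e)
{
    base.OnClick(e);
    // Raise the event exposed as an HMI output port.
    MyEvent?.Invoke(this, System.EventArgs.Empty);
}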
Conversion to AMR
Data in a vision application in the Aurora Vision environment is represented in the internal AMR format. Data in HMI controls based on the .NET execution environment is represented in the form of statically typed data of .NET types. To exchange data with the vision program, as well as to edit a control's property data in editing tools, data must be converted between .NET types and the AMR representation. This task is performed by a system of AMR converters. To a single .NET type it assigns a single converter (similarly to converters from the Components system), which allows translating data to one or more AMR types.
In order to be able to connect a control's property to a vision program, as well as to be able to edit it during development, the data type of the property must possess an AMR converter. The HMI system possesses a set of built-in converters which handle the following standard types:
List<int>
IntegerArray IntegerArray
(System.Collections.Generic.List<System.Int32>)
List<bool>
BoolArray BoolArray
(System.Collections.Generic.List<System.Boolean>)
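For example, a hypothetical property of the List<int> type can be exposed as a port and, according to the table above, will connect to IntegerArray data in the vision program:
[HMI.HMIPortProperty(HMI.HMIPortDirection.Output)]
public System.Collections.Generic.List<int> Measurements { get; private set; } // seen as IntegerArray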
Conversion of all value types into their counterparts allowing the null value (System.Nullable<T>) is also possible.
Additionally, when editing in the Aurora Vision Studio environment, the editor enables editing in the property grid of arbitrary enumeration types, as well as of types required to define the appearance and behavior of controls, such as System.Drawing.Image, System.Windows.Forms.AnchorStyles or System.Drawing.Font.
When establishing a connection between a vision program and an HMI control's port, the AMR data type of the control's port does not have to be strictly specified. Instead, the AMR converter assigned to the control's property chooses the best possible conversion between the port in the vision program and the control's property. In this way it is possible, for instance, to more precisely convert data between a property of the System.Decimal type and the Integer and Real types, where the Decimal type enables a universal representation of the ranges and precisions required for editing both of those types.
During editing in the property grid, a default AMR data type of a control's property is suggested by the AMR converter. It is possible to change the default AMR data type of a control's property. In order to do that, the HMI.AmrTypeAttribute attribute should be specified on the property, and the name of the new data type in the AMR convention should be given as an argument. Only types supported by an AMR converter, or name aliases of those types, are supported (e.g. for a property of the System.String type, the "File" or "Directory" AMR data types can be assigned, where both are aliases of the String type). The HMI.AmrTypeAttribute attribute also allows to:
assign a permitted range (min...max) for numerical types (using the Min and Max parameters),
define that the specified type should be represented in its conditional (T?) form or can accept the special Nil value (the property data type must allow assigning the null value).
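A sketch of how the attribute might be combined with a port declaration; the property names are hypothetical and the exact parameter spelling should be verified against HMI.dll:
[HMI.HMIPortProperty(HMI.HMIPortDirection.Input)]
[HMI.AmrType("File")] // a String property exposed with the File alias of the String type
public string ImagePath { get; set; }
[HMI.HMIPortProperty(HMI.HMIPortDirection.Input)]
[HMI.AmrType("Integer", Min = 0, Max = 255)] // permitted range for a numerical type
public int Threshold { get; set; }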
Saving Controls State
The HMI subsystem provides functionality to save an application state (settings chosen by the application's end user). Saving the state is accomplished by saving the settings of each of the controls. Saving the state of a single control is realized by the same serialization mechanism as the serialization of the control's properties when saving to an avhmi file.
A control's state is defined by the values visible on chosen control properties. In order for a control to be able to save elements of its state (its settings), the state must be fully represented in public bidirectional properties whose data types can be converted into text (the type converter handles conversion with the System.String type). For a control's property value to be saved together with the application state, such a property must be marked with the HMI.HMIStatePortAttribute attribute.
Properties are serialized in unspecified order and in the context of various states of other properties. Controls must therefore implement properties representing state in such a way that no dependencies appear between them which could cause errors during execution or lead a control into an incorrect state. It is recommended that a control be able to correct the input value of such properties without causing errors (e.g. in case of errors in the state storage file or HMI application changes between saving and reading of the application state). If a given element of the control's state is visible on more than one property (e.g. a numeric state can be read/written also in text form on the Text property), then only one property should be subject to state saving. State saving properties can, but do not have to, be control's ports enabling connection with the vision application. Similarly, properties realizing control's ports can, but do not have to, save their state.
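A minimal sketch of a state-saving property on a hypothetical control; the property name is an example:
// Public, readable and writable, and convertible to text – so its value can be
// saved with the application state and restored on the next run.
[HMI.HMIStatePort]
public int SelectedCameraIndex { get; set; }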
Index
General: HMICanvas
Advanced Editors: Edge Model Editor, Gray Model Editor, Matrix Editor, OCR Model Editor, Text Segmentation Editor
Components: Keyboard Listener, ToolTip, Virtual Keyboard
Containers: Group Box, Panel, Split Container, Tab Control
Controls: CheckBox, ColorPicker, ComboBox, Detailed List View, EnumBox, GenICam Address Picker, GigEVision Address Picker, ImageBox, ImpulseButton, Knob, Label, List View, NumericUpDown, OnOffButton, Program Control Box, ProgramControlButton, RadioButton, TextBox, ToggleButton, TrackBar
Deep Learning: StartButton
File System: DirectoryPicker, FilePicker
Indicators: ActivityIndicator, AnalogIndicator, AnalogIndicatorWithScales, BoolIndicatorBoard, PassFailIndicator, ProfileBox, View2DBox, View2DBox_PassFail, View3DBox
Logic and Automation: BoolAggregator, EnabledManager
Multiple Pages: MultiPanelControl, MultiPanelSwitchButton
Password Protection: LogoutButton, PasswordPanel
Shape Array Editors: Arc2DArrayEditor, ArcFittingFieldArrayEditor, BoxArrayEditor, Circle2DArrayEditor, CircleFittingFieldArrayEditor, Line2DArrayEditor, LocationArrayEditor, PathArrayEditor, PathFittingFieldArrayEditor, Point2DArrayEditor, Rectangle2DArrayEditor, Segment2DArrayEditor, SegmentFittingFieldArrayEditor, ShapeRegionArrayEditor
Shape Editors: Arc2DEditor, ArcFittingFieldEditor, BoxEditor, Circle2DEditor, CircleFittingFieldEditor, Line2DEditor, LocationEditor, PathEditor, PathFittingFieldEditor, Point2DEditor, Rectangle2DEditor, RegionEditor, Segment2DEditor, SegmentFittingFieldEditor, SegmentScanFieldEditor, ShapeRegionEditor, ShapeRegionDeprecatedEditor
State Management: StateAutoLoader, StateControlBox, StateControlButton
Video Box: FloatingVideoWindow, SelectingVideoBox, VideoBox, ZoomingVideoBox
6. Programming Reference
Table of content:
Introduction
An integer number, an image or a two-dimensional straight line are examples of what is called a type of data. Types define the structure and the
meaning of information that is being processed. In Aurora Vision Studio types also play an important role in guiding program construction – the
environment assures that only inputs and outputs of compatible types can be connected.
Primitive Types
The most fundamental data types in Aurora Vision Studio are:
Integer – Integer number in the range of approx. ±2 billion (32 bits).
Real – Floating-point approximation of real numbers (32 bits).
Bool – Logical value – False or True.
String – Sequence of characters (text) encoded as UTF-16 unicode.
Type Constructs
Complex types of data used in Aurora Vision Studio are built with one of the following constructs:
Structures
Composite data consisting of a fixed number of named fields. For example, the Point2D type is a structure composed of two real-valued fields:
X and Y.
Arrays
Composite data composed of zero, one or many elements of the same type. In Aurora Vision Studio, for any type X there is a corresponding
XArray type, including XArrayArray. For example, there are types: Point2D, Point2DArray, Point2DArrayArray and more.
Composite Types
Array and conditional or optional types can be mixed together. Their names are then read in the reverse order. For example:
IntegerArray? is read as "conditional array of integer numbers".
Integer?Array is read as "array of conditional integer numbers".
Please note that the above two types are very different. In the first case we have an array of numbers or no array at all (Nil). In the second case there is an array in which every single element may be present or not.
Built-in Types
A list of the most important data types available in Aurora Vision Studio is here.
Angles
Angles use the Real data type. If you are working with them, bear in mind these assumptions:
All the filters working with angles return their results in degrees.
The default direction of the measurement between two objects is clockwise. In some of the measuring filters, it is possible to change that.
Automatic Conversions
There is a special category of filters, Conversions, containing filters that define additional possibilities for connecting inputs and outputs. If there is a
filter named XToY (where X and Y stand for some specific type names), then it is possible to create connections from outputs of type X to inputs of
type Y. You can, for example, connect Integer to Real or Region to Image, because there are filters IntegerToReal and RegionToImage respectively. A conversion filter may only exist if the underlying operation is obvious and non-parametric. Further options can be added with user filters.
Structures
Introduction
A structure is a type of data composed of several predefined elements which we call fields. Each field has a name and a type.
Examples:
Point2D is a simple structure composed of two fields: the X and Y coordinates of type Real.
DrawingStyle is a more complex structure composed of six fields: DrawingMode, Opacity, Thickness, Filled, PointShape and PointSize.
Image is actually also a structure, although only some of its fields are accessible: Width, Height, Depth, Type, PixelSize and Pitch.
Structures in Aurora Vision Studio are exactly like structures in C/C++.
Working with Structure Fields
There are special filters "Make" and "Access" for creating structures and accessing their fields, respectively. For example, there is MakePoint which
takes two real values and creates a point, and there is AccessPoint which does the opposite. In most cases, however, it is recommended to use
Property Outputs and Expanded Input Structures.
Arrays
Arrays are very similar to the std::vector class template in C++. As one can have std::vector< std::vector< atl::Point2D > > in C++, there is also Point2DArrayArray in Aurora Vision Studio.
Singleton Connections
When a scalar value, e.g. a Region, is connected to an input of an array type, e.g. RegionArray, an automatic conversion to a single-element array
may be performed. This feature is available for selected filters and cannot be used at inputs of macrofilters.
Array Connections
When an array, e.g. a RegionArray, is connected to an input of a scalar type, e.g. Region, the second filter is executed many times, once per each
input data element. We call it the array mode. Output data from all the iterations are then merged into a single output array. In the user interface a "[]"
symbol is added to the filter name and the output types are changed from T to T(Array). The brackets distinguish array types which were created
due to array mode from those which are native for the specific filter.
For example, in the program fragment below an array of blobs (regions) is connected to a filter computing the area of a single region. This filter is
independently executed for each input blob and produces an array of Integers on its output.
An array connection.
Remarks:
Synchronized Inputs
Some filters have several array inputs and require that all the connected arrays are synchronized, so that it is possible to process their elements in parallel. For example, the SortArray filter requires that the array of objects being sorted and the array of associated values (defining the requested order) are synchronized. This requirement is visualized with blue semicircles in the picture below:
Optional Inputs
Some filters have optional inputs. An optional input can receive some value, but can also be left not set at all. Such an input is then said to have an
automatic value. Optional inputs are marked with a star at the end of the type name and are controlled with an additional check-box in the Properties
control.
Connections
Read before: Introduction to Data Flow Programming.
Introduction
In general, connections in a program can be created between inputs and outputs of compatible types. This does not mean, however, that the types must be exactly equal. You can connect ports of different types as long as it is possible to adapt the transferred data values by means of automatic conversions, array decompositions, loops or conditions. The power of this drag and drop programming model stems from the fact that the user just drags a connection and all the logic is added implicitly on the "do what I mean" basis.
Connection Logic
This is probably the most complicated part of the Aurora Vision programming model as it concentrates most of its computing flexibility. You will never
define any connection logic explicitly as this is done by the application automatically, but still you will need to think a lot about it.
There are 5 basic types of connection logic:
1. Basic Connection
T → T
This kind of connection appears between ports of equal types.
2. Automatic Conversion
A → B
Automatic conversion is used when the user connects two ports having two different types, e.g. Integer and Real, and there is an appropriate
filter in the Conversions category, e.g. IntegerToReal, that can be used to translate the values.
3. Array Connection
TArray → T
This logic creates an implicit loop on a connection from an array to a scalar type, e.g. from RegionArray to Region. All the individual results are
then merged into arrays and so the outputs of the second filter turn into arrays as well (see also: Arrays).
4. Conditional Connection
T? → T
This kind of connection appears when a conditional output (e.g. Point2D?) is connected to a non-conditional input. The second filter is then executed only if the incoming value is not Nil (see also: Conditional Execution).
Example conditional connection (intersection between two segments may not exist).
5. Array Connection with Conditional Elements
T?Array → T
This logic combines the two previous kinds: an implicit loop is created and, within it, each element is processed conditionally, i.e. only if it is not Nil.
Each individual connection logic is a realization of one control flow pattern from C/C++. For example, Array Connection with Conditional Elements corresponds to this basic code structure:
Conditional Execution
Introduction
Conditional execution consists in executing a part of a program, or not, depending on whether some condition is met. There are two possible ways to
achieve this in Aurora Vision Studio:
1. Conditional Connections – simpler; preferred when the program structure is linear and only some steps may be skipped in some circumstances.
2. Variant Macrofilters – more elegant; preferred when there are two or more alternative paths of execution.
This section is devoted to the former, conditional connections.
Conditional Data
Before conditional connections can be discussed, we need to explain what conditional data is. There are some filters for which it may be impossible to compute the output data in some situations. For example, the SegmentSegmentIntersection filter computes the intersection point of two line segments, but for some input segments the intersection may not exist. In such cases, the filter returns a special Nil value which indicates lack of data (usually read: "not detected").
Intersection exists – the filter returns a point. There is no intersection – the filter returns the Nil value.
Examples:
ScanSingleEdge – will not find any edge on a plain image.
DecodeBarcode – will not return any decoded string if the checksum is incorrect.
GigEVision_GrabImage_WithTimeout – will not return any image if communication with the camera times out.
MakeConditional – explicitly creates a conditional output; returns Nil when a specific condition is met.
Filter outputs that produce conditional data can be recognized by their types which have a question mark suffix. For example, the output type in
SegmentSegmentIntersection is "Point2D?".
The Nil value corresponds to the NULL pointer or a special value such as -1. Conditional types, however, provide more type safety and clearly define in which parts of the program special cases are anticipated.
Conditional Connections
Conditional connections appear when a conditional output is connected to a non-conditional input. The second filter is then said to be executed conditionally and has its output types changed to conditional as well. It is actually executed only if the incoming data is not Nil. Otherwise, i.e. when a Nil comes, the second filter also returns Nil on all of its outputs and becomes subdued (darker) in the Program Editor.
There are several ways to deal with conditional data:
First of all, the Nil case can be ignored. Some part of the program will not be executed and this will not be signaled in any way. Example: use this approach when your camera is running in a free-run mode and an inspection routine has to be executed only if an object has been detected.
The simplest way to resolve a Nil is by using the MergeDefault filter, which replaces the Nil with another value, specified in the program. Example: most typically for industrial quality inspection systems, we can replace a Nil ("object not detected") with a "NOT OK" (False) inspection result.
A conditional value can also be transformed into a logical value by making use of the IsNil and IsNil.Not property outputs. (Before version 4.8 the same could have been done with the TestObjectNotNil and TestObjectNil filters.)
Another possibility is to use a variant macrofilter with a conditional forking input and exactly two variants: "Nil" and "Default". The former will be executed for Nil, the latter for any other value.
If conditional values appear in an array, e.g. because a filter with a conditional output is executed in array mode, then the RemoveNils filter can create a copy of this array with all Nils removed.
[Advanced] Sometimes we have a Task with conditional values at some point where Nil appears only in some iterations. If we are interested in returning the latest non-Nil value, a connection to a non-conditional macrofilter output will do just that. (Another option is to use the LastNotNil filter.)
Example
Conditional execution can be illustrated with a simplified version of the "Capsules" example:
Capsule not detected by the FindCapsuleRegion step. Capsule detected, but shape detection failed in the FindCapsuleShape step.
Conditional Choice
In the simplest cases it is also possible to compute a value conditionally with the filters from the Choose group (ChooseByPredicate,
ChooseByRange, ChooseByCase) or with the ternary <if> ? <then> : <else> operator in the formula blocks.
Types of Filters
Read before: Introduction to Data Flow Programming.
Most filters are functional, i.e. their outputs depend solely on the input values. Some filters however can store information between consecutive
iterations to compute aggregate results, communicate with external devices or generate a loop in the program. Depending on the side effects they
cause, filters are classified as:
Constraints
Here is a list of constraints related to some of the filter types:
Generic Filters
Introduction
Most of the filters available in Aurora Vision Studio have clearly defined types of inputs and outputs – they are applicable only to data of those specific types. There is however a range of operations that could be applied to data of many different types – for instance the ArraySize filter could work with arrays of items of any type, and the TestObjectEqualTo filter could compare any two objects having the same type. For this purpose there are generic filters that can be instantiated for many different types.
Type Definitions
Each generic filter is defined using a type variable T that represents any valid type. For instance, the GetArrayElement filter reads a single element (of type T) from an array (of type TArray) at a given index.
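In C++ terms, a generic filter resembles a function template. The sketch below is only an analogy, not the actual implementation of GetArrayElement:

#include <vector>

// Rough C++ analogy (a sketch): the type variable T is fixed at instantiation,
// just like choosing the instantiation type of a generic filter.
template <typename T>
T GetArrayElement(const std::vector<T>& inArray, int inIndex)
{
    return inArray.at(inIndex);   // an out-of-range index is an error, as in Studio
}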
Instantiation
When a generic filter is added to the program, the user has to choose which type should be used as the T type for this filter instance. This process is
called instantiation. There are two variants of the dialog window that is used for choosing the type. They differ only in the message that appears:
This window appears for filters related to arrays, e.g. GetArrayElement or JoinArrays. The user has to choose the type of the array elements.
This window appears for filters not related to arrays, e.g. MergeDefault. The user has to choose the type of the entire object.
For example, if the GetArrayElement filter is added to the program and the user chooses the T variable to represent the Real type, then the interface
of the filter takes the following form:
After instantiation the filter appears in the Program Editor with the instantiation type specified by the name of the filter:
An example of two generic filters in the Program Editor: MergeDefault instantiated for replacing Nils in arrays of real numbers and GetArrayElement instantiated for getting a single element from them. Please note that the specified type parameter is different in each case.
A common mistake is to provide the type of the entire object when only the type of an array element is needed. For example, when instantiating the GetArrayElement filter intended for accessing arrays of reals, enter the Real type, which is the type of an element, but NOT the RealArray type, which is the type of the entire object.
Macrofilters
Read before: Introduction to Data Flow Programming.
For information about creating macrofilters in the user interface, please refer to Creating Macrofilters.
This article discusses more advanced details on macrofilter construction and operation.
Macrofilter Structures
There are four possible structures of macrofilters which play four different roles in the programs:
Steps
Step is the most basic macrofilter structure. Its main purpose is to make programs clean and organized.
A macrofilter of this structure simply separates a sequence of several filters that can be used as one block in many places of the program. It works
exactly as if its filters were expanded in the place of each macrofilter's instance. Consequently:
The state of the contained filters and registers is preserved between consecutive invocations.
If a loop generator is inserted into a Step macrofilter, then this step becomes a loop generator as a whole.
If a loop accumulator is inserted into a Step macrofilter, then this step becomes a loop accumulator as a whole.
Variant Steps
Variant macrofilters are similar to Steps, but they can have multiple alternative execution paths. Each of the paths is called a variant. At each invocation exactly one of the variants is executed – the one that is chosen depends on the value of the forking input or register, which is compared against the labels of the variants.
The type of the forking port and the labels can be: Bool, Integer, String or any enumeration type. Furthermore, any conditional or optional type can be used, and then a "Nil" variant will be available. This can be very useful for forking the program execution on the basis of whether some analysis was successful (there exists a proper value) or not (Nil).
All variants share a single external interface – inputs, outputs and also registers are the same. In consecutive iterations different variants might be
executed, but they can exchange information internally through the local registers of this macrofilter. From outside a variant step looks like any other
filter.
Here are some example applications of variant macrofilters:
When some part of the program When there is an object detection When an inspection algorithm can Finite State Machines.
being created can have multiple step which can produce several have multiple results (most Variant Step macrofilters correspond to
alternative implementations, which classes of detected objects, and typically: OK and NOK) and we the switch statement in C++.
we want to be chosen by the end- the next step of object recognition want to execute different
user. For example, there might be should depend on the detected communication or visualization
two different methods of detecting class. filters for each of the cases. Example 1
an object, having different trade-
offs. The user might be able to A Variant Step can be used to create a subprogram encapsulating image acquisition that will have two
choose one of the methods options controlled with a String-typed input:
through a combo-box in the HMI or
by changing a value in a 1. Variant "Files": Loading images from files of some directory.
configuration file. 2. Variant "Camera": Grabbing images from a camera.
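Since variant steps correspond to the switch statement, Example 1 could be sketched in C++ roughly as follows (Image, LoadNextImageFromFiles and GrabImageFromCamera are hypothetical placeholders for the contents of the two variants):

#include <stdexcept>
#include <string>

// Rough C++ analogy (a sketch) of the "Files"/"Camera" variant step:
// the String-typed forking input selects which variant body is executed.
Image AcquireImage(const std::string& inSource)
{
    if (inSource == "Files")
        return LoadNextImageFromFiles();   // variant "Files"
    if (inSource == "Camera")
        return GrabImageFromCamera();      // variant "Camera"
    throw std::runtime_error("No matching variant label");
}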
Example 2
Another application of Variant Step macrofilters is in creating optional data processing steps. For example, there may be a configurable image
smoothing filter in the program and the user may be able to switch it on or off through a CheckBox in the HMI. For that we create a macrofilter like
this:
A variant step macrofilter for optional image preprocessing.
NOTE: Using a variant step here is also important for performance reasons. Even if we set inStdDev to 0, the SmoothImage_Gauss filter will still need to copy data from input to output. Filters such as ChooseByPredicate or MergeDefault also perform a full copy. On the other hand, when we use macrofilters, internal connections from macrofilter inputs to macrofilter outputs do not copy data (they only link it). For heavy types of data such as images, the performance benefit can be significant.
Tasks
A Task macrofilter is much more than a separated sequence of filters. It is a logical program unit realizing a complete computing process. It can be
used as a subprogram, and is usually required in more advanced situations, for example:
When we need an explicit nested loop in the program which cannot be easily achieved with array connections.
When we need to perform some computations before the main program loop starts.
Execution Process
The most important difference between Tasks and Steps is that a Task can perform many iterations of its filter sequence before it finishes its own single invocation. This filter sequence gets executed repeatedly until one of the contained loop generating filters signals an end-of-sequence. If there are no loop generators at all, the task will execute exactly one iteration. Then, when the program execution exits the task (returning to the parent macrofilter), the state of the task's filters and registers is destroyed (which includes closing connections with related I/O devices).
What is also very important, Tasks define the notion of an iteration of the program. During program execution there is always a single task that is the most nested and currently being executed. We call it the current task. When the execution process comes to the end of this task, we say that an iteration of the program has finished. When the user clicks the Iterate (F6) button, execution continues to the end of the current task. The Update Data Previews Once an Iteration option also refers to the same notion.
Task macrofilters can be considered an equivalent of C/C++ functions with a single while loop.
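That correspondence could be sketched as follows (Image, GrabImage and ProcessImage are hypothetical placeholders):

// Rough C/C++ analogy (a sketch) of a Task: a single invocation runs its own
// loop until a loop generator signals the end of the sequence.
void InspectionTask()
{
    Image image;
    while (GrabImage(image))      // loop generator: returns false at end-of-sequence
        ProcessImage(image);      // the Task's filter sequence (one iteration)
}   // leaving the Task destroys its state and closes related I/O connections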
Worker Tasks
The main purpose of Worker Tasks is to allow users to process data in parallel. They also make several other new things possible:
Parallel receiving and processing of data coming from different, not synchronized, devices.
Parallel control of output devices that is not dependent on the cycle of the program (e.g. lighting up a diode every 500 ms).
Dividing our program by splitting the parts that need to be processed in real time from the ones that can wait for processing.
Flawless processing of HMI events.
Having additional programs in a project for unit testing of our algorithms.
Having additional programs in a project for precomputing data that will be later used in the main program.
The number of Worker Tasks available for use is dependent on the license, however every program will contain at least one Worker Task, which acts as "Main".
Creating a Worker Task
Due to its special use, you cannot create a Worker Task by pressing the Ctrl+Space shortcut or directly in the Program Editor. The only way to create one is in the Project Explorer, as shown in the Creating Macrofilters article.
Queues
Queues play an important role in communication and synchronization between different Worker Tasks in the program. They can pass most types of data, including arrays, between the threads. Operations on the queues are atomic, meaning that once the processor starts processing them, it cannot be interrupted or affected in any way. So if several threads would like to perform an operation at the same time, some of them have to wait.
Please note that different queues are not synchronized with each other. To process complex data or to pass data that needs to stay synchronized (like an image and its related information), you should use user types. We also highly advise not to use queues as a replacement for global parameters.
Creating Queues
To create a new queue in the Project Explorer, click the Create New Queue... icon. A new window will appear, allowing you to select the name and parameters of the new queue, such as:
Items Type – specifies the type of data passed by the queue.
Maximum Size – the maximum size of the queue.
Module – the module to which the queue belongs.
Access – public or private.
Queue operations
In order to perform queue operations and fully manage queues in Worker Tasks, you can use several filters available in our software, namely:
Queue_Pop – takes a value from the queue without copying it. Waits infinitely if the queue is empty. This operation only reads data and does not copy it, so it is performed instantly.
Queue_Pop_Timeout – takes a value from the queue without copying it. Waits for the time specified in Timeout if the queue is empty. This operation only reads data and does not copy it, so it is performed instantly.
Queue_Peek – returns the specified element of the queue without removing it. Waits infinitely if the queue is empty.
Queue_Peek_Timeout – returns the specified element of the queue without removing it. Waits for the time specified in Timeout if the queue is empty.
Queue_Push – adds an element to the queue. This operation copies data, so it may take more time compared to the others.
Queue_Size – returns the size of the queue.
Queue_Flush – clears the queue. Does not block data flow.
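Conceptually, Queue_Push and Queue_Pop behave like operations on a thread-safe blocking queue. A minimal C++ sketch of that idea (an analogy only, not the actual implementation):

#include <condition_variable>
#include <mutex>
#include <queue>

// Rough sketch of the blocking-queue semantics behind Queue_Push/Queue_Pop.
template <typename T>
class BlockingQueue
{
    std::queue<T> items;
    std::mutex mtx;
    std::condition_variable cv;
public:
    void Push(T value)                       // like Queue_Push: data is copied in
    {
        { std::lock_guard<std::mutex> lock(mtx); items.push(std::move(value)); }
        cv.notify_one();
    }
    T Pop()                                  // like Queue_Pop: waits while empty
    {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [this] { return !items.empty(); });
        T value = std::move(items.front());
        items.pop();
        return value;
    }
};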
Macrofilters Ports
Inputs
Macrofilter inputs are set before execution of the macrofilter is started. In the case of Task macrofilters the inputs do not change values in
consecutive iterations.
Outputs
Macrofilter outputs can be set from connections with filters or with global parameters as well as from constant values defined directly in the outputs.
In the case of Tasks (with many iterations) the value of a connected output is equal to the most recently computed value. One special case to be considered is when there is a loop generator that exits in the very first iteration. As there is no "most recent" value in this case, the output's default
value is used instead. This default value is part of the output's definition. Furthermore, if there is a conditional connection at an output, Nil values
cause the previously computed value to be preserved.
Registers
Registers make it possible to pass information between consecutive iterations of a Task. They can also be defined locally in Steps and in Variant
Steps, but their values will still be set exactly in the same way as if they belonged to the parent Task:
Register values are initialized to their defaults when the Task starts.
Register values are changed at the end of each iteration.
In the case of variant macrofilters, register values for the next iteration are defined separately in each variant. Very often some registers should just preserve their values in most of the variants. This can be done by creating long connections between the prev and next ports, but usually a more convenient way is to disable the next port in some variants.
Please note that in many cases it is possible to use accumulating filters (e.g. AccumulateElements, AddIntegers_OfLoop and other OfLoop filters) instead of registers. Each of these filters has an internal state that stores information between consecutive invocations and does not require explicit connections for that. As this method is simpler and more readable, it should be preferred. Registers, however, are more general.
Yet another alternative to macrofilter registers is the prev operator in the formula blocks.
Registers correspond to variables in C/C++. In Aurora Vision Studio, however, registers follow the single assignment rule, which is typical for functional and data-flow
programming languages. It says that a variable can be assigned at most once in each iteration. Programs created this way are much easier to understand and
analyze.
A task with registers computing the Greatest Common Divisor of inA and inB.
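In C++ terms, such a task could be sketched like this (a rough analogy; the actual macrofilter is built from filters and register ports):

// Rough C++ analogy (a sketch) of the GCD task with registers. The regA/regB
// variables play the role of registers: per the single assignment rule, each
// receives exactly one new value per iteration, at the end of the iteration.
int GreatestCommonDivisor(int inA, int inB)
{
    int regA = inA, regB = inB;   // registers initialized when the Task starts
    while (regB != 0)             // the Task iterates until the loop ends
    {
        int nextA = regB;         // "next" values computed from current ones
        int nextB = regA % regB;
        regA = nextA;             // registers updated at the end of the iteration
        regB = nextB;
    }
    return regA;                  // macrofilter output: the most recent value
}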
Formulas
Table of content:
Introduction
Data types
Literal constants
Predefined constant values
Enumeration constants
Arithmetic conversions
Automatic implicit conversions
Operators
Operator processing modes
Unary operators
Binary operators
Special operators
Conditional processing
Array processing
Array and conditional processing combination
Operator precedence
Functions
Functions list
Mathematical functions
Conversion functions
Statistic and array processing functions
Geometry processing functions
Functions of type String
Structure constructors
Typed Nil constructors
Examples
Introduction
A formula block makes it possible to write down operations on basic arithmetic and logic types in a concise textual form. Its main advantage is the ability to simplify programs in which calculations would otherwise require a large number of separate filters with simple mathematical functions.
Formulas consist of a sequence of operators, arguments and functions and are founded on the basic rules of mathematical algebra. Their objective in Aurora Vision Studio is similar to formulas in spreadsheet programs or expressions in popular scripting languages.
A formula block can have any number of inputs and outputs defined by the user. These ports can be of any unrelated types, including various types in one block. Input values are used as arguments (identified by input names) in arithmetic formulas. There is a separate formula assigned to each output of a block; the role of such a formula is to calculate the value held by the given output. Once calculated, an output value can be used many times in the formulas of outputs located at a lower position in the block description.
Each of the arguments used in formulas has a strictly defined data type, which is known and verified during formula initialization (so-called static typing). The types of input arguments arise from the explicitly defined block port types. These types are then propagated by the operations performed on the arguments. Finally, a formula value is converted to the explicitly defined type assigned to a formula output. Compatibility of these types is verified during program initialization (before the first execution of the formula block); all incompatibilities are reported as errors.
Formula examples:
outValue = inValue + 10
outValue = inValue1 * 0.3 + inValue2 * 0.7
outValue = (inValue - 1) / 3
outValue = max(inValue1, inValue2) / min(inValue1, inValue2)
outRangePos = (inValue >= -5 and inValue <= 10) ? (inValue - -5) / 15 : Nil
outArea = inBox.Width * inBox.Height
Data types
Blocks of arithmetic-logic formulas can pass any types from inputs to outputs. The data types listed below are recognized by the operators and functions available in formulas and can therefore take part in processing their values.
Integer – Arithmetic type of integral values in the range -2,147,483,648...2,147,483,647; it can be used with arithmetic and comparison operators.
Integer? / Integer* – Conditional or optional arithmetic type of integral values; it can additionally take the value of Nil (empty value). It can be used everywhere the Integer type is accepted. Unless an operator or a function assumes a different behavior, the result of an operation performed on it is also a conditional value: when a Nil value occurs, the operation is aborted and Nil is returned as the operation result.
Real – Real number (floating-point); it can be used with arithmetic and comparison operators.
Real? / Real* – Conditional or optional real value; it can additionally take the value of Nil (empty value). Similarly to Integer?, it can be used everywhere the Real type is accepted, causing conditional execution of operators.
Long – Arithmetic type of integral values in the range -9,223,372,036,854,775,808...9,223,372,036,854,775,807; it can be used with arithmetic and comparison operators.
Long? / Long* – Conditional or optional equivalent of the Long type; it can additionally take the value of Nil (empty value). Similarly to Integer?, it can be used everywhere the Long type is accepted, causing conditional execution of operators.
Double – Double precision floating-point number; it can be used with arithmetic and comparison operators.
Double? / Double* – Conditional or optional equivalent of the Double type; it can additionally take the value of Nil (empty value). Similarly to Integer?, it can be used everywhere the Double type is accepted, causing conditional execution of operators.
Bool – Logic type, takes one of the two values: true or false. It can be used with logic and comparison operators or as a condition argument. It is also the result of comparison operators.
Bool? / Bool* – Conditional or optional logic type; it can additionally take the value of Nil (empty value). It can be used everywhere the Bool type is accepted. Unless an operator or a function assumes a different behavior, the result of an operation performed on it is also a conditional value: when a Nil value occurs, the operation is aborted and Nil is returned as the operation result.
String – Textual type; it can be used with the concatenation (+) and comparison operators.
String? / String* – Conditional or optional textual type; it can additionally take the value of Nil (empty value). It can be used everywhere the String type is accepted. Unless an operator or a function assumes a different behavior, the result of an operation performed on it is also a conditional value: when a Nil value occurs, the operation is aborted and Nil is returned as the operation result.
enum – Any of the enumeration types declared in the type system. An enumeration type defines a limited group of items which can be assigned to a value of the given type. Each of these items (constants) is identified by a name unique within the given enumeration type (e.g. the enumeration type SortingOrder has two possible values: Ascending and Descending). As part of formulas, it is possible to create a value of an enumeration type and to compare two values of the same enumeration type.
enum? / enum* – Any conditional or optional enumeration type; instead of a constant, it can additionally take the value of Nil (empty value).
structure – Any structure: a composition of a number of values of potentially different types in one object. Within formulas, it is possible to read individual structure elements and to perform further operations on them within their types.
structure? / structure* – Any conditional or optional structure; instead of a field composition, it can additionally take the value of Nil (empty value). It is possible to access fields of conditional structures on the same basis as in the case of normal structures; however, a field read this way is also of a conditional type. Reading fields of a conditional structure of value Nil returns the Nil value as well.
<T>Array – Any array of values. Within formulas, it is possible to determine the array size and to read and perform further operations on array elements. The array size can be determined by the .Count property, which is built into array types.
<T>Array? / <T>Array* – Any conditional or optional array; instead of an element sequence, it can additionally take the value of Nil (empty value). It is possible to read sizes and elements of conditional arrays on the same basis as in the case of normal arrays; in this case, the values read are also of conditional types. Reading an element from a conditional array of value Nil returns the Nil value as well.
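For instance, given an input inValue of type Integer?, the conditional behavior described above means:

outResult = inValue + 10

If inValue is Nil, the addition is aborted and outResult is Nil; otherwise outResult holds the value of inValue increased by 10.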
Literal constants
You can place constant values in a formula by writing them down literally in the specified format.
Data type – Examples
Integer value in decimal notation:
Integer – 0, 150, -10000
Long – 0L, 150L, -10000L
Integer value in hexadecimal notation:
Integer – 0xa1c, 0x100, 0xFFFF
Long – 0xa1cL, 0x100L, 0xFFFFL
Real value in decimal notation:
Real – 0.0, 0.5, 100.125, -2.75
Double – 0.0d, 0.5d, 100.125d, -2.75d
Real value in scientific notation:
Real – 1e10, 1.5e-3, -5e7
Double – 1e10d, 1.5e-3d, -5e7d
Text:
String – "Hello world!", "First line\nSecond line", "Text with \"quote\" inside."
Textual constants
Textual constants are a way to insert data of type String into a formula. They can be used, for example, in comparisons with input data or for conditionally generating proper text on an output of a formula block. Textual constants are formed by placing some text in quotation marks, e.g.: "This is text"; the inner text (without the quotation marks) becomes the value of the constant.
If you wish to include in a constant characters unavailable in a formula or characters which are an active part of a formula (e.g. quotation marks), it is necessary to enter the desired character using an escape sequence. It consists of the backslash character (\) and a proper character code (in this case the backslash character becomes active, so entering it as a normal character requires an escape sequence too). It is possible to use the following predefined special characters:
\n - New line, ASCII: 10
\r - Carriage return, ASCII: 13
\t - Horizontal tabulation, ASCII: 9
\v - Vertical tabulation, ASCII: 11
\' - Apostrophe, ASCII: 39
\" - Quotation mark, ASCII: 34
\\ - Backslash, ASCII: 92
\a - "Bell", ASCII: 7
\b - "Backspace", ASCII: 8
\f - "Form feed", ASCII: 12
Examples:
"This text \"is quoted\""
"This text\nis in new line"
"In Microsoft Windows this text:\r\nis in new line."
"c:\\Users\\John\\"
Other characters can be obtained by entering their ASCII code in hexadecimal format, by \x?? sequence, e.g.: "send \x06 ok" formats text
containing a character of value 6 in ASCII code, and "\xce" formats a character of value 206 in ASCII code (hex: ce).
Predefined constant values
Nil (type: Null) – Special value of conditional and optional types; it represents an empty value (no value). This constant doesn't have its own data type (it is represented by the special "Null" type). Its type is automatically converted to conditional types as part of executed operations.
inf (type: Real) – Represents positive infinity. It is a special value of type Real that is greater than any other value that can be represented by the Real type. Note that negative infinity can be expressed with the "-inf" notation.
Enumeration constants
There are enumeration constants defined in the type system. Each has its own type, and among the values of that type a given constant identifies one of a few possibilities. E.g. a filter for array sorting takes a parameter which defines the order of sorting. This parameter is of the "SortingOrder" enumeration type. As part of this type, you can choose one of two possibilities, identified by the "Ascending" and "Descending" constants.
As part of formulas, it is possible to enter any enumeration constant into a formula using the following syntax: the full name of a constant consists of the enumeration type name, a dot, and the name of one of the possible options available for the given type, e.g.:
SortingOrder.Descending
BayerType.BG
GradientOperator.Gauss
Arithmetic conversions
Binary arithmetic operators are executed as part of one defined common type. When the types of both arguments are not identical, an attempt is made to convert them to one common type, on which the operation is then executed. Such a conversion is called an arithmetic conversion and, when it is possible to carry it out, the types of the arguments are called compatible.
Arithmetic conversion is performed according to the following principles:
Two different numeric types – The argument of the type with smaller range or precision is converted to the type of the second argument (with greater range or precision). Possible argument type conversions include: Integer to Real, Integer to Long, Integer to Double and Real to Double. The same principle is used in the case of conditional counterparts of the mentioned types.
One of the types is conditional – The second type is converted to a conditional type.
Two different conditional types – Arithmetic conversion is performed on the types on which the arguments' conditional types are based. As the result, the types remain conditional.
One of the arguments is Nil – The second argument is converted to a conditional type (if it's not already conditional); the Nil value is converted to the conditional type of the second argument.
Automatic implicit conversions
Integer → Real, Integer → Double – Conversion of an integer to a floating-point value. As part of such a conversion, loss of precision may occur: instead of an accurate number representation, an approximated floating-point value is returned. Such conversion is also possible for the conditional counterparts of the given types.
Integer → Long, Real → Double – Conversion to an equivalent type with greater range or precision. No data is lost as a result of such a conversion. Such conversion is also possible for the conditional counterparts of the given types.
T → T? – Assigning conditionality. Any non-conditional type can be converted to its conditional counterpart.
Nil → T? – Assigning a type to a Nil value. A Nil value can be converted to any conditional type. As the result, you get an empty value of the given type.
T* → T? – As part of formulas, optional and conditional types are handled the same way (in formulas there is no defined reaction to automatic values; they only pass data). If a given operation requires an argument of a conditional type, it can also, on the same principles, take an argument of an optional type.
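For example, in the following formula the Integer input is implicitly converted before the operation is executed:

outValue = inInteger * 0.5

Here inInteger (Integer) is converted to Real, so the multiplication is performed on two Real values and outValue is of type Real.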
Operators
As part of formulas, it is possible to use the following operators:
Unary operators: negation (-), identity (+), bitwise negation (~), logic negation (not)
Binary operators: multiplicative (* /), additive (+ -), bitwise (& ^ |), logic (and, or), comparison (== <>)
Special operators: condition (?:, if-then-else), merge with default (??), function call (()), field read (.), element read ([]), explicit array processing ([]), previous value (prev), global parameter read (::)
Unary operators
Negation operator (-)
- expression
Identity operator (+)
+ expression
Bitwise negation operator (~)
~ expression
Logic negation operator (not)
not expression
Binary operators
Multiplication operator (*)
expression * expression
Data types:
Integer * Integer → Integer
Long * Long → Long
Real * Real → Real
Double * Double → Double
Vector2D * Vector2D → Vector2D
Vector2D * Real → Vector2D
Real * Vector2D → Vector2D
Vector3D * Vector3D → Vector3D
Vector3D * Real → Vector3D
Real * Vector3D → Vector3D
Matrix * Matrix → Matrix
Matrix * Real → Matrix
Real * Matrix → Matrix
Matrix * Vector2D → Vector2D
Matrix * Point2D → Point2D
Matrix * Vector3D → Vector3D
Matrix * Point3D → Point3D
Returns the product of two numeric arguments, the element-by-element product of two vectors, a vector scaled by a factor, the product of two matrices, a matrix with elements multiplied by a scalar, or a matrix multiplied by vector or point coordinates.
When multiplying a matrix by a 2D vector (or 2D point coordinates), a general transformation 2x2 or 3x3 matrix is expected. Multiplication is performed by assuming that the source vector is a 2x1 matrix; the resulting 2x1 matrix forms the new vector or point coordinates. When a 3x3 matrix is used, the multiplication is performed by assuming that the source vector is a 3x1 matrix (expanded with 1 to three elements) and by normalizing the resulting 3x1 matrix back to a two-element vector or point coordinates, as shown below.
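In standard notation, the two cases described above are (a sketch; $m_{ij}$ denote the matrix elements):

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix}, \qquad \begin{bmatrix} x_h \\ y_h \\ w \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \quad x' = \frac{x_h}{w}, \; y' = \frac{y_h}{w}$$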
By analogy, when multiplying matrix by 3D vector (or 3D point coordinates) a general transformation 3x3 or 4x4 matrix is expected.
Division operator (/)
expression / expression
Addition operator (+)
expression + expression
Data types:
Integer + Integer → Integer
Long + Long → Long
Real + Real → Real
Double + Double → Double
Vector2D + Vector2D → Vector2D
Point2D + Vector2D → Point2D
Vector3D + Vector3D → Vector3D
Point3D + Vector3D → Point3D
Matrix + Matrix → Matrix
String + String → String
Returns the sum of two numeric values or two vectors, moves a point by a vector, or adds two matrices element-by-element.
When used with the String type, it performs text concatenation and returns the connected string.
Subtraction operator (-)
expression - expression
Data types:
Integer - Integer → Integer
Long - Long → Long
Real - Real → Real
Double - Double → Double
Vector2D - Vector2D → Vector2D
Point2D - Vector2D → Point2D
Vector3D - Vector3D → Vector3D
Point3D - Vector3D → Point3D
Matrix - Matrix → Matrix
Returns the difference of two numeric values, subtracts two vectors, moves a point backwards by a vector, or subtracts two matrices element-by-element.
Bitwise OR (|)
expression | expression
Bitwise XOR (^)
expression ^ expression
Logic OR (or)
expression or expression
Equality operator (==)
expression == expression
Special operators
Condition operator (if-then-else, ?:)
condition_expression ? expression_1 : expression_2
if condition_expression then expression_1 else expression_2
Data types:
condition_expression: Bool
expression_1, expression_2, ..., expression_n: any compatible types
This operator conditionally chooses and returns one of two or more values.
The first operand, in this case condition_expression, is a condition value of Bool data type. Depending on whether
condition_expression equals True or False, it returns either expression_1 (when True) or expression_2 (when False).
When chaining multiple conditions in a row, the program checks whether any of the consecutive conditions is met. The original condition uses the clause if...then; all the following ones use the clause elif...then. Should any of these conditions be found to be True, the corresponding expression is returned. However, if both the if...then clause and all the elif...then clauses are found to be False, the expression given after the else operator is returned instead.
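For instance, a grading formula with a chained condition might look like this (a made-up example following the syntax above):

outGrade = if inScore >= 90 then "A" elif inScore >= 75 then "B" else "C"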
The operator treats conditional values in a special way. An occurrence of conditional type only in case of the first operand (condition value)
causes the whole operator to be executed conditionally. Conditional types in operands of values to choose from do not make any changes in
operator execution. Values of these operands are always passed in the same way that operation results are (also in case of an empty value).
The operator result type is determined by combining types of values to choose from (by converting one type to another or by converting two
types into yet another common result type), according to the principles of arithmetic conversions. Such types can be changed to a conditional
type in case of operator conditional execution.
Merge with default operator (??)
expression ?? expression
Returns the value of the first expression if it is not Nil; otherwise returns the value of the second expression.
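For example, assuming inValue is of type Integer?:

outValue = inValue ?? 0

If inValue is Nil, outValue becomes 0; otherwise it holds the value of inValue.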
Function call operator (())
A function call is performed by entering a function name followed by a list of arguments in round brackets. The required data types depend on the called function. For functions with a generic type, it is also possible to explicitly specify the generic type in angular brackets directly after the function name.
See: Functions
Field read operator (.)
expression.name
This operator is used to read a single structure field or a property supported by an object. The operation result type is compatible with the type of the field which is read.
Any structure (from which a field of any type is read) can be passed as an argument. name determines the name of the field (or property) which should be read.
Object function call (.name(...))
expression.name(arguments)
This construction is intended to call a function provided by an object (defined by the data type of the expression object). Access to this function is provided (similarly to the field read operator) by entering a dot sign and the function name after an argument. After the function name, a list of function arguments should be entered in round brackets. The required data types depend on the function which is called.
Element read operator ([])
expression[indexer]
expression[indexer1, indexer2]
Data types:
<T>Array[i] → <T>
Path[i] → Point2D
Matrix[row, col] → Real
This operator is used to read a single element from an array, a single point from a Path object, or a single element from a Matrix object. The operator result type is equal to the array element type. An indexer has to be an argument of type Integer; its value points to the element to be read (the value of 0 indicates the first array element). If an indexer returns a value outside of the 0...ArrayLength-1 range, the operator causes a Domain Error during program runtime.
Explicit array processing ([])
expression[]
Suggests that the outer operation (the one that uses the result of this operator as its argument) should be performed in the array processing mode, using this argument as an array data source.
This operator should be applied to array data types and it returns an unchanged array of the same type. The operation has no effect at runtime and is only used to resolve ambiguities of the array processing mode.
Previous value operator (prev)
prev(outputName, defaultValue)
prev(outputName)
This operator is used to read the value of a formula block output from the previous iteration of the task loop inside which the formula block is instantiated. The operator takes an identifier as its first argument: the name of a formula block output. Any output of the parent formula block can be used, including the current formula output and outputs of formulas defined below the current one.
The second argument of the operator defines a default value and can accept any expression.
The purpose of this operator is to create an aggregating or sequence-generating formula. In the first iteration of a task, this operator returns the default value (defined by the second argument). In each subsequent task iteration the operator returns the value previously computed by the referenced formula output (defined by the first argument).
The operator result type is determined by combining the types of the referenced output and the default value argument, according to the principles of arithmetic conversions.
The second argument of the operator can be omitted. In such a situation Nil is used as the default value and the result type is a conditional equivalent of the accessed output type. You can use the merge with default operator (??) to handle the Nil value.
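For instance, an iteration counter can be built this way (a made-up example):

outCount = prev(outCount, 0) + 1

In the first iteration outCount equals 1 (the default 0 increased by one); in each subsequent iteration it grows by one.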
Global parameter read operator (::)
::globalParameterName
Reads the value of a program global parameter (referencing the parameter by its name). The global parameter must be accessible from the current program module.
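For example, assuming a global parameter named MinArea exists in the project (a made-up name):

outIsBigEnough = inArea >= ::MinArea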
Operator precedence
When more than one operator appears around the same argument, the order of executing these operators depends on their priorities. Operators with the highest priority are executed first. In the case of binary operators with equal priorities, the operators are executed one after another from left to right, e.g.:
A + B * C / D
First, arguments B and C are multiplied (multiplication has a higher priority than addition and, in this case, is located to the left of the division operator, which has the same priority). The multiplication result is then divided by D (going from left to right), and finally the quotient is added to A.
Unary operators, which always take the form of prefixes, are executed from the operator closest to the argument to the furthest one, that is from right to left.
Ternary condition operators are executed in order from right to left. This enables nesting conditions as arguments of other condition operators.
The order of operator execution can be changed by placing nested parts of formulas in brackets, e.g.:
(A + B) * (C / D)
In this case, the addition and the division are executed first, and then their results are multiplied.
Operator priorities
# – Operator – Description
1 – () – Function call
2 – + - ~ – Identity, negation, bitwise negation (unary)
4 – + - – Additive operators
6 – & – Bitwise operators
7 – ^ – Bitwise operators
8 – | – Bitwise operators
10 – == <> – Equality and inequality tests
11 – and – Logic AND
13 – or – Logic OR
15 – ?: – Condition operator
16 – if-then-else – Condition operator
Remarks
Equality checking operators (== and <>) will not be executed in a conditional mode. Instead, those operators compare the values taking conditional types into account and testing for equality with Nil, even when conditional and non-conditional types are mixed.
The condition operator (?:, if-then-else) can be executed in a conditional mode when a value of type Bool? is used for the condition (first) argument. The True and False value arguments (second and third arguments) do not participate in the conditional processing; their values are passed as the result regardless of their data types.
The merge with default operator (??) will never be executed in a conditional mode, as it is designed to handle conditional types explicitly.
In function call operations the conditional execution mode can be triggered by any conditional value assigned to a non-conditional argument of the function. In generic function call operations conditional execution cannot be implicitly created on generic function arguments, as a conditional data type of such arguments would result in automatic generic type deduction using a conditional data type as well. To achieve conditional execution there, the function generic type needs to be specified explicitly.
Similarly, conditional execution cannot be created on arguments of the array creation operator ({}), as it would result in creating an array with conditional items. A call to the createArray function with its generic type specified explicitly must be used instead.
Array processing
Similarly to filters in a vision application, it is possible to invoke formula operators and function calls in an array mode, in which the formula operations are executed multiple times, once for each element of a source array, creating an array of results as the output. When an array argument is provided for an operator, or for a function argument that does not expect an array (or expects a lower rank array), the whole operation is performed in the array mode. In the array mode the operation return value is promoted to an array type (or a higher rank array) and the operation is executed multiple times, one time for each source array element; the return value is an array composed of the subsequent operation results.
An array mode operation can be performed when only one argument provides an array, or when multiple different arguments provide multiple source arrays. In the latter situation all source arrays must be of equal size (have the same number of elements) and the operation is performed once for each group of corresponding array items from the source arrays. A Runtime Error is generated during execution when the source array sizes differ.
For example, let's consider a binary addition operation: a + b. The table below presents the results generated for different data types and values of the a and b arguments:
a b a + b
IntegerArray {10, 20, 30} IntegerArray {5, 6, 7} IntegerArray {15, 26, 37}
Array processing is applied automatically when the argument types clearly point to array processing of the operation. In situations where the application of array processing is ambiguous, the operation is by default left without array processing. To remove the ambiguity, the explicit array processing operator can be applied to the array source arguments of the operation.
Explicit array processing operator needs to be used in the following situations:
For the element read operator on a double nested array (e.g. IntegerArrayArray), the operator in the form arg[index] will access an element of the outermost array. The operator in the form arg[][index] will access elements of the nested arrays in an array processing mode.
The Count property of double nested array types will return the size of the outermost array. The construction arg[].Count will return an array of sizes of the nested arrays.
Equality testing operators (== and <>) will never implicitly start array processing. Instead, those operators will always try to compare whole objects and return a scalar Bool value. To compare array elements in an array mode, at least one argument needs to be marked with the explicit array processing operator.
In the merge with default operator (??) the first argument might need to be marked as an explicit array source in some situations when combining array items with default values from a second array (when both arguments are array sources).
In a generic function call operation, values for generic arguments need to be marked as explicit array sources for the automatic generic type deduction to consider array processing on those arguments (unless the generic type is explicitly specified).
In the array creation operator ({}) and the createArray function call, all array source arguments always need to be marked explicitly.
The result of an operator executed in the array mode is considered an explicit array source for subsequent outer operations. This means that the array processing will cascade in the nested operations without the need to repeat the explicit array processing operator for each nested operation.
For example, considering a and b to be arrays of type IntegerArray, the formula:
a == b
will check whether both arrays are equal and will return a value of type Bool. The formula:
a[] == b[]
will check equality of the array items, element by element, and will return an array of type BoolArray.
Remarks
In the condition operator (?:, if-then-else) the array mode processing can only be started by an array on the condition argument (the first argument). However, when the operation enters array processing, the True and False arguments will also be considered as array sources, allowing element-by-element conditions to be evaluated in the array mode. It is also possible to mix scalar and array values on the True and False arguments.
Remarks
The True and False arguments of the condition operator (?:, if-then-else) can participate in the array mode, but can never participate in the conditional mode. Thus, for the condition operator in an array mode it is forbidden to provide True and/or False arguments with conditional arrays.
The merge with default operator (??) has special requirements on its argument data types for complex processing. The following constructs are allowed for the array processing mode:
T?Array ?? T
T?Array ?? T?
T?Array? ?? T (array-conditional processing on the first argument)
T?Array? ?? T? (array-conditional processing on the first argument)
T?Array ?? TArray (requires explicit array mode)
T?Array ?? T?Array (requires explicit array mode)
The following constructs will not result in an array processing mode:
T?Array? ?? TArray
T?Array? ?? T?Array
T?Array? ?? TArray?
T?Array? ?? T?Array?
T?Array ?? TArray?
T?Array ?? T?Array?
Functions
As part of formulas, you can use one of the functions listed below by using the function call operator. Each function has its own requirements regarding the number and types of individual arguments. Functions with the same name can accept different sets of parameter types, on which the returned value type depends. Each of the functions listed below has its possible signatures defined in the form of descriptions of parameter data types and the returned value types resulting from them, e.g.:
Real foo( Integer value1, Real value2 )
This signature describes the "foo" function, which requires two arguments, the first of type Integer and the second of type Real. The function returns a value of type Real. Such a function can be used in a formula in the following way:
outValue = foo(10, inValue / 2) + 100
Arguments passed to a function don't have to exactly match a given signature. In such a case, the best fitting function version is chosen, and then an attempt is made to convert the argument types to the form expected by the function (according to the principles of implicit conversions). If choosing a proper function version for the argument types is not possible, or if it is not possible to convert one or more arguments to the required types, then program initialization ends with an error of an incorrect function parameter set.
A conditional or optional type can be used as an argument of each function. In such a case, if the function signature doesn't assume some special use of conditional types, the entire function call is performed conditionally. The returned value is then also of a conditional type, and an occurrence of a Nil value among the function arguments results in returning the Nil value as the function result.
Generic functions
Some functions accept arguments with so called generic type, where the type is only partially defined and the function is able to adapt to the actual
type required. Such functions are designated with "<T>" symbol in their signatures, e.g. function minElement:
This function is designated to process arrays with elements of arbitrary type. Its first argument accepts an array based on its generic type. A value of
its generic type is also returned as a result. Generic functions can have only one generic type argument (common for all arguments and return
value).
Generic functions can be called in the same way as regular functions. In such case the generic type of the function will be deduced automatically
based on the type of specified arguments:
In this example the value on input inBoxArrays is of type BoxArray, so the return type of the functions is Box.
In situations where the generic type cannot be deduced automatically, or needs to be changed from the automatically deduced one, it is possible to
specify it explicitly, e.g. in this call of array function:
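For instance, following the angular-bracket syntax described above, an explicit specification might look like this (purely illustrative; in this case the type could also be deduced automatically):

outMin = minElement<Box>(inBoxArrays)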
Functions list
Mathematical functions: sin, cos, tan, asin, acos, atan, exp, ln, log, log2, sqrt, floor, ceil, round, abs, pow, square, hypot, clamp, lerp
Conversion functions: integer, real, long, double, toString, parseValue, tryParseValue
Statistic and array processing functions: min, max, indexOfMin, indexOfMax, avg, sum, product, variance, stdDev, median, nthValue, quantile, all, any, count, contains, findFirst, findLast, findAll, minElement, maxElement, removeNils, withoutNils, flatten, select, crop, trimEnd, rotate, pick, sequence, array, createArray, join
Geometry processing functions: angleNorm, angleTurn, angleDiff, distance, area, dot, normalize, createVector, createSegment, createLine, translate, toward, scale, rotate
Complex object creation: Matrix, identityMatrix, Path
Functions of type String: Substring, Trim, ToLower, ToUpper, Replace, StartsWith, EndsWith, Contains, Find, FindLast, IsEmpty
Mathematical functions
sin
Real sin( Real )
Double sin( Double )
Returns an approximation of the sine trigonometric function. It takes an angle measured in degrees as its argument.
cos
Returns an approximation of the cosine trigonometric function. It takes an angle measured in degrees as its argument.
tan
Returns an approximation of the tangent trigonometric function. It takes an angle measured in degrees as its argument.
asin
Returns an approximation of the inverse sine trigonometric function. It returns an angle measured in degrees.
acos
Returns an approximation of the inverse cosine trigonometric function. It returns an angle measured in degrees.
atan
Returns an approximation of the inverse tangent trigonometric function. It returns an angle measured in degrees.
exp
Returns an approximated value of the e mathematical constant raised to the power of the function argument: exp(x) = e^x
ln
Returns an approximation of the natural logarithm (base e) of the argument.
log
Returns an approximation of the decimal logarithm (base 10) of the argument.
log2
Returns an approximation of the binary logarithm (base 2) of the argument.
sqrt
Returns an approximation of the square root of the argument.
floor
Rounds an argument down, to an integer number not larger than the argument.
ceil
Rounds an argument up, to an integer number not lesser than the argument.
round
Rounds an argument to the closest value with a strictly defined number of decimal places. The first function argument is a real number to be rounded. The second argument is an optional integer number defining to which decimal place the real number should be rounded. Skipping this argument results in rounding the first argument to an integer number.
Examples:
round(1.24873, 2) → 1.25
round(1.34991, 1) → 1.3
round(2.9812) → 3.0
abs
Returns the absolute value of the argument.
pow
Raises the first argument to the power of the second argument. Returns the result as a real number: pow(x, y) = x^y
square
Returns the argument raised to the power of 2: square(x) = x^2
hypot
Returns the length of the hypotenuse of a right triangle with legs of the given lengths: hypot(x, y) = sqrt(x^2 + y^2)
clamp
Limits the specified value (first argument) to the range defined by the min and max arguments. The return value is unchanged when it fits in the range. The function returns the min argument when the value is below the range and the max argument when the value is above the range.
lerp
Computes a linear interpolation between two numeric values or 2D points. The point of interpolation is defined by the third argument of type Real in the range from 0.0 to 1.0 (0.0 for a value equal to a, 1.0 for a value equal to b).
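For instance, following the definition above:
lerp(0.0, 10.0, 0.25) → 2.5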
Conversion functions
integer
Converts an argument to the Integer type by cutting off its fractional part, or by ignoring the most significant part of a wider integer value.
real
Converts an argument to the Real type.
long
Converts an argument to the Long type (cuts off the fractional part of floating-point values).
double
Converts an argument to the Double type.
toString
Converts the argument value to its text representation of type String.
parse
Takes a number represented as a text and returns its value as a chosen numeric type. It is allowed for the text to have additional whitespace
characters at the beginning and the end, but it cannot have any whitespace characters or other thousand separator characters in the middle.
For Real and Double types both decimal and exponential representations are allowed, with a dot used as the decimal symbol (e.g. "-5.25" or
"1e-6").
This function will generate a DomainError when the provided text cannot be interpreted as a number or its value is outside of the type's
allowed range. To explicitly handle the invalid input text situation use the tryParse function variants.
tryParse
Takes a number represented as a text and returns its value as a chosen numeric type. It is allowed for the text to have additional whitespace
characters at the beginning and the end, but it cannot have any whitespace characters or other thousand separator characters in the middle.
For Real and Double types both decimal and exponential representations are allowed, with a dot used as the decimal symbol (e.g. "-5.25" or
"1e-6").
This function returns a conditional value. When the provided text cannot be interpreted as a number or its value is outside of the type's allowed
range, Nil is returned instead. The conditionality of the type needs to be handled later in the formula or the program. When an invalid input
value is not expected and does not need to be explicitly handled, the simpler parse variants of the function can be used.
Statistic and array processing
min
Returns the smallest of the input values. This function can take from two to four primitive numeric arguments, from which it chooses the smallest
value, or an array of primitive numeric values, in which the smallest value is searched for. In case of an attempt to search for a value in an
empty array, an operation runtime error is reported.
max
Returns the largest of input values. This function can take from two to four primitive numeric arguments, from which it chooses the largest
value or an array of primitive numeric values, in which the largest value is searched for. In case of an attempt to search for a value in an
empty array, an operation runtime error is reported.
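Illustrative examples for the scalar variants:
min(3, 7) → 3
max(3, 7, 5) → 7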
indexOfMin
Returns the (zero based) index of the smallest of the values in the input array. When multiple items in the array match the condition the index
of the first one is returned. In case of an attempt to search for a value in an empty array, an operation runtime error is reported.
indexOfMax
Returns the (zero based) index of the largest of the values in the input array. When multiple items in the array match the condition the index of
the first one is returned. In case of an attempt to search for a value in an empty array, an operation runtime error is reported.
avg
Computes the arithmetic mean of the input numeric values or the middle point (center of mass) of the input geometric points. This function can
take two arguments, which are averaged, or an array of values, which is averaged altogether. In case of an attempt to enter an empty array, an
operation runtime error is reported.
sum
Returns the sum of input values. This function can take an array of primitive numeric values, which will be summed altogether. In case of
entering an empty array, the value of 0 will be returned.
Note: when summing large values or a large number of real arguments, the result precision may be partially lost, distorting the final result.
When summing integer numbers of too large values, the result may not fit in the allowed data type range.
product
Returns the product of entered values. This function can take an array of primitive numeric values, which all will be multiplied. In case of
entering an empty array, the value of 1 will be returned.
Note: when multiplying large values or a large number of real arguments, the result precision may be partially lost, distorting the final result.
When multiplying integer numbers of too large values, the result may not fit in the allowed data type range.
variance
Computes the statistical variance of a set of numbers provided as an array in the argument. The result is computed using the formula:
(1/n)·Σ(x̄ − xᵢ)². In case of an attempt to enter an empty array, a domain error is reported.
stdDev
Computes the statistical standard deviation of a set of numbers provided as an array in the argument. The result is equal to the square root of
the variance described above. In case of an attempt to enter an empty array, a domain error is reported.
median
Returns the median value from a set of numbers provided as an array in the argument. In case of an attempt to enter an empty array, a
domain error is reported.
nthValue
Returns the n-th value in sorted order from a set of numbers provided as an array in the argument. The zero-based index n, of type Integer, is
provided as the second argument. In case of an attempt to enter an empty array, a domain error is reported.
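For example (using [...] as an illustrative array notation): nthValue([5, 1, 4], 1) → 4, since the sorted order is 1, 4, 5.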
quantile
Returns the specified quantile from a set of numbers provided as an array in the argument. In case of an attempt to enter an empty array, a
domain error is reported.
all
This function takes an array of logical values and returns True when all elements in the array are equal to True. In case of entering an empty
array, the value of True will be returned.
any
This function takes an array of logical values and returns True when at least one element in the array equals True. In case of entering an
empty array, the value of False will be returned.
count
The first variant of this function takes an array of logical values and returns the number of items equal to True.
The second variant takes an array of items and counts the number of elements in that array that are equal to the value given in the second
argument.
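Illustrative examples (using [...] as an illustrative array notation):
count([True, False, True]) → 2
count([1, 3, 3, 7], 3) → 2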
contains
Returns True when the specified array contains at least one item equal to the given value.
findFirst
Searches the specified array for items equal to the given value and returns the zero-based index of the first found item (closest to the
beginning of the array). Returns Nil when no items were found.
findLast
Searches the specified array for items equal to the given value and returns the zero-based index of the last found item (closest to the end
of the array). Returns Nil when no items were found.
findAll
Searches the specified array for items equal to the given value and returns an array of zero-based indices of all found items. Returns an
empty array when no items were found.
minElement
Returns an array element that corresponds to the smallest value in the array of values. When the input array sizes do not match or when
empty arrays are provided, a domain error is reported.
maxElement
Returns an array element that corresponds to the largest value in the array of values. When the input array sizes do not match or when empty
arrays are provided, a domain error is reported.
removeNils
Removes all Nil elements from an array. Returns a new array with simplified type.
withoutNils
Returns the source array only when it does not contain any Nil values.
This function accepts an array with conditional items (<T>?Array) and returns a conditional array (<T>Array?). When at least one item in the
source array is equal to Nil, the source array is discarded and Nil is returned; otherwise the source array is returned with a simplified type.
flatten
Takes an array of arrays, and concatenates all nested arrays creating a single one-dimensional array containing all individual elements.
select
Selects the elements from the array of items for which the associated predicate is True.
The arrays of items and predicates must have the same number of elements. This function returns a new array composed of the elements
from the items array, in order, for which the corresponding elements of the predicates array are equal to True.
crop
trimStart
Removes count elements from the beginning of the array. By default (when the argument count is omitted) removes a single element. When
count is larger than the size of the array an empty array is returned.
trimEnd
Removes count elements from the end of the array. By default (when the argument count is omitted) removes a single element. When count
is larger than the size of the array an empty array is returned.
rotate (array)
Rotates the elements from the specified array by steps places. Rotates right (towards larger indexes) when the steps value is positive, and
left (towards lower indexes) when the steps value is negative. By default (when the steps argument is skipped) rotates right by one place.
Rotation operation means that all elements are shifted along the array positions, and the elements that are shifted beyond the end of the array
are placed back at the beginning.
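For example (using [...] as an illustrative array notation):
rotate([1, 2, 3, 4], 1) → [4, 1, 2, 3]
rotate([1, 2, 3, 4], -1) → [2, 3, 4, 1]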
pick
<T>Array pick( <T>Array items, Integer start, Integer step, Integer count )
Creates a new array of count items picked from the items array, starting at the index start and advancing by step positions for each following
item.
sequence
Creates an array of count numbers, starting from the value provided in start argument and incrementing the value of each item by the value of
step (or 1 when step is not specified).
array
Creates a uniform array with count items by repeating the value of item.
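Illustrative examples, assuming the argument orders sequence(start, step, count) and array(item, count) suggested by the descriptions above:
sequence(0, 2, 4) → [0, 2, 4, 6]
array(7, 3) → [7, 7, 7]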
createArray
Joins an arbitrary (at least two) number of arrays and/or scalar values into a single array. Returns a uniform array with elements in the same
order as specified in the function arguments.
Geometry processing
angleNorm
Normalizes a given angle (or other cyclic value) to the range [0...cycle). cycle must be a positive value (usually 180 or 360, but any value
greater than zero is acceptable).
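For example, assuming the argument order angleNorm(angle, cycle):
angleNorm(370, 360) → 10
angleNorm(-90, 360) → 270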
angleTurn
angleDiff
distance
Calculates the distance between a point and the closest point of a geometric primitive.
area
Computes the area of a given geometric primitive.
dot
Computes the dot product of two vectors.
normalize
Normalizes the specified vector. Returns a vector with the same direction but with length equal to 1. When provided with a zero-length vector,
it returns a zero-length vector as a result.
createVector
Creates a two dimensional vector (a Vector2D structure). First variant creates a vector between two points (from point1 to point2). Second
variant creates a vector pointing towards a given angle (in degrees) and with specified length.
createSegment
Creates a two dimensional segment (a Segment2D structure) starting at a given start point, pointing towards a given direction (in degrees)
and with specified length.
createLine
Creates a line (a Line2D or Line3D structure). First two variants create a line that contains both of the specified points. Third variant creates a
line containing specified point and oriented according to the specified angle (in degrees).
translate
Moves a point.
First variant of the function moves a point by the specified translation vector. Second variant moves a point towards a specified direction
(defined by angle in degrees) by an absolute distance.
toward
Moves a point in the direction of the target point by an absolute distance. The actual distance between point and target does not affect the
distance moved. Specifying a negative distance value results in moving the point away from the target.
scale
Moves a point by relatively scaling its distance from the center of the coordinate system (first function variant) or from the specified origin
point (second function variant).
rotate (geometry)
Moves a point by rotating it around the specified origin point (by an angle specified in degrees).
Matrix
Creates and returns a new Matrix object with the specified number of rows and columns. When no value/data parameter is specified, the new
matrix is filled with zeros.
A scalar value can be specified as the third argument to set all elements of the new matrix to that value.
An array of values can be specified as the third (data) argument to fill the new matrix. Consecutive elements from the array are set into the
matrix row-by-row, top-to-bottom, left-to-right. The provided array must have at least as many elements as the number of elements in the
created matrix.
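Illustrative examples based on the variants described above (with [...] used as an illustrative array notation):
Matrix(2, 2) — a 2×2 matrix filled with zeros
Matrix(2, 2, 1.0) — a 2×2 matrix with every element set to 1.0
Matrix(2, 3, [1, 2, 3, 4, 5, 6]) — a 2×3 matrix filled row-by-row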
identityMatrix
Creates and returns a new square identity Matrix object with size rows and size columns.
Path
Creates and returns a new Path object filled with points provided in an array. By default (when no second argument is specified) the new path
is not closed.
Functions of type String
Substring
This function returns the specified part of text. The first argument defines the position (starting from zero) at which the desired part starts in
text. The second argument defines the desired length of the returned part of text (this length can be automatically shortened when it exceeds the
text length). Leaving out the second argument results in returning the part of text from the specified position to the end of text.
Trim
String arg.Trim()
Returns argument value, from which the whitespace characters have been removed from the beginning and the end.
ToLower
String arg.ToLower()
Returns argument value, in which all the upper case letters have been changed to their lower case equivalents.
ToUpper
String arg.ToUpper()
Returns argument value, in which all the lower case letters have been changed to their upper case equivalents.
Replace
Searches text for all occurrences of the value entered as the first argument and replaces such occurrences with the value entered as the
second argument. Text search is case sensitive.
This function searches the source text consecutively from left to right and advances past each occurrence immediately after finding it. If the
sought-after parts overlap in the text, only the complete occurrences found this way will be replaced.
StartsWith
This function returns True only if text contains at its beginning (on the left side) the value entered as the argument (or if it is equal to it). Text
search is case sensitive.
EndsWith
This function returns True only if text contains at its end (from the right side) the value entered as the argument (or if it is equal to it). Text
search is case sensitive.
Contains
This function returns True only if text contains (as a part at any position) the value entered as the argument (or if it is equal to it). Text search
is case sensitive.
Find
This function searches in text for the first occurrence of the part entered as the argument (it performs searching from left to right) and returns
the position, at which an occurrence has been found. It returns -1 when the sought-after substring doesn't occur in text. Text search is case
sensitive.
Optionally, as the second function argument, one can enter the starting position from which the search should be performed.
FindLast
This function searches in text for the last occurrence of the part entered as the argument (it performs search starting from right) and returns
the position, at which an occurrence has been found. It returns -1 when the sought-after substring doesn't occur in text. Text search is case
sensitive.
Optionally, as the second function argument, one can enter the starting position from which the search should be performed (proceeding
towards the beginning of the text).
IsEmpty
Bool arg.IsEmpty()
This function returns True if text is empty (its length equals 0).
Structure constructors
As part of formulas, you can pass and access fields of any structures. It's also possible to generate a new structure value, which is passed to further
program operations.
In order to generate a structure value, a structure constructor (with syntax identical to the function with the name of the structure type) is used. As
part of the structure parameters, the structure field values should be entered consecutively. E.g. the constructor below generates a structure of type Box,
starting in point 5, 7, with width equal to 100 and height equal to 200:
Box(5, 7, 100, 200)
It is also possible to create a structure object with a default value by calling the constructor with an empty parameter list:
Box()
Not all structure types are available through constructors in formulas. Only structures consisting of fields of primitive data types and not requiring
specific dependencies between them are available. E.g. types such as Box, Circle2D or Segment2D consist of arithmetic types, so creating
them in formulas is possible. Types such as Image or Region have complex data blocks describing primitives, and therefore it's not possible
to create them in formulas.
Some structures have additional requirements on field values (e.g. Box requires that width and height be non-negative). When such a requirement is
not met a Domain Error appears during program runtime.
Integer(Nil)
Box(Nil)
Segment2D(Nil)
Examples
Having given two integer values and the position between them in the form of a real number 0...1, the task is to compute the interpolated value in this
position.
A simple weighted averaging of two given values is performed here. Due to the multiplication by real values, the intermediate averaging result is also
a real value. It's necessary then to round and convert the final result back to an integer value.
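A sketch of such a formula (the input names inA, inB and inT are assumed here):
outValue = integer(round(inA + (inB - inA) * inT))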
Box is a data structure consisting of four fields of type Integer: X, Y, Width and Height. Having given an object of such a primitive, the task is to
compute its center in the form of two integer coordinates X and Y.
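A sketch of such formulas (the input name inBox is assumed; depending on the exact division semantics, an explicit conversion may be needed):
outCenterX = integer(inBox.X + inBox.Width / 2)
outCenterY = integer(inBox.Y + inBox.Height / 2)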
Generating a structure
In the previous example, data used in a formula is read from a primitive of type Box. Such primitive can also be generated inside of a formula and
passed to an output. In this example, we'd like to enlarge a Box primitive by a frame of given width.
outBox = Box( inBox.X - inFrame, inBox.Y - inFrame, inBox.Width + inFrame*2, inBox.Height + inFrame*2 )
When working with conditional types, it's possible within formula blocks to use conditional filter execution, including conditional execution of
entire formula blocks. It's also possible to pass conditional execution inside formulas. This way, only a part of the formulas of a block, or even
only a part of a single formula, is executed conditionally.
In the simple summing shown below, parameter B is of a conditional type and this conditionality is passed to output.
Formula content itself doesn't contain any specific elements related to a conditional parameter. Only formula output is marked with a conditional type.
An occurrence of an empty value in the input aborts the execution of further operators and passes an empty value to the output, similarly to
conditional execution of filters.
If the conditional output type, and thereby the conditional execution mode of the following filters, is not desired in the example above, it's
possible to resolve the conditional type in the same formula by merging it with a non-conditional default value.
The operators still work on conditional types and the entirety of their calculations can be aborted when an empty value occurs. The result is,
however, merged with a default value of 0: if the summing doesn't return a non-empty result, the default value is used instead. In such a case,
the output doesn't have to be of a conditional type.
In order to find the maximal value in an array of numbers, you can use the max function; it requires, however, a non-empty array. If an empty array
may appear in the input, we can define an explicit reaction to such a case with a condition and thereby avoid an error.
In the example above a value of zero is returned instead of the maximum of an empty array.
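A sketch of such a condition (assuming an input inValues and that an array size property such as inValues.Count is available):
outMax = if inValues.Count > 0 then max(inValues) else 0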
Range testing
Let's assume that we test the position of some parameter, which varies in the range of -5...10, and we'd like to normalize its value to the range of 0...1.
What's more, the reaction to an error and to a range overrun by the parameter is returning an empty value (which means conditional execution of
further operations).
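A sketch of such a formula (the input and output names are assumed here):
outPosition = if inValue >= -5 and inValue <= 10 then (inValue + 5) / 15.0 else Nil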
If formula block results should control operations which take enumeration types as parameters, it's possible to generate values of such types within
formulas. In the example below we'd like to determine the sorting direction, having given a flag of type Bool which determines whether the direction
should be opposite to the default one.
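A sketch of such a formula (the input and output names, as well as the enumeration value syntax, are assumed here):
outSortingOrder = if inReversed then SortingOrder.Descending else SortingOrder.Ascending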
This example works in a loop; in each iteration some value is passed to the inValue input and the goal is to sum these values. The prev
operator returns 0 in the first iteration (as defined by its second argument). In each iteration the value of the inValue input is added to the sum
from the previous iteration.
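A sketch of such a formula (the output name is assumed; prev refers to the output's value from the previous iteration, with 0 used in the first
iteration):
outSum = prev(outSum, 0) + inValue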
Performance-Affecting Settings
There are two settings that affect the performance in a noticeable way. The first one is the Diagnostic Mode. Many filters have diagnostic outputs and
when the Diagnostic Mode is on, those outputs will be populated with additional data that may help during program designing. However, calculating
and storing this additional data takes time. Depending on the program, this can cause a considerable slowdown. This mode can be toggled off with
the button in the Application toolbar.
Another setting that can hinder the performance compared to Aurora Vision Runtime is Previews Update Mode. Ports that are being previewed are
updated according to this setting. Updating previews is generally quite fast, but it depends on the amount and complexity of the data being previewed
(large instances of Image or Surface will take more time to display than an instance of Integer).
With active previews, even having the filter visible in the Program Editor will cause some overhead (tied to how often the filter is executed). This is
related to how Aurora Vision Studio shows progress. The option Program » Previews Update Mode » Disable Visualization disables the previews
altogether. While the overhead is generally small, it can be noticeable when a single iteration of the program is also short.
Execution Flow
There is one more difference between Studio and Runtime execution. By default, Studio is set to pause whenever an exception, warning or assertion
occurs. Every exception will cause the program to pause, even if it is handled with error handling.
Runtime continues to run through all of them when possible - it will only report them in the console.
To make Studio behave like Runtime in that regard, the following settings need to be disabled:
Break when an exception occurredNote that this does not prevent Break when a warning occurred Break on assertion failed
errors from ending the program.
This can only be achieved with
error handling.
View of program execution settings. The settings above should result in Studio performance as close to Runtime as possible.
At any point during testing it is possible to view statistics of every filter executed until this point.
Operation Mode
There are two modes in which the program can be run during debugging: normal and iteration mode.
In normal mode, the program will run continuously until the user pauses it or until it ends. In this mode, previews are updated according to the
Previews Update Mode setting.
During iteration mode, the user can step into macrofilters, step over them, step to the end of macrofilters or iterate the displayed macrofilters.
Both modes will also pause whenever they encounter one of the things described in the following part.
Execution Pausing
The following can pause the application. The application will pause at any of these regardless of the mode in which it was launched:
Breakpoints,
Ending of the tracked Worker Task,
Failed Assertions (can be changed in settings),
Exceptions (can be changed in settings),
Warnings (can be changed in settings).
The program can also be stopped at any time using the pause button.
Breakpoints
Breakpoints allow the user to specify points in the program at which the execution will pause. They are effective only in Studio.
Breakpoints can be placed at any filter or output block by right-clicking and selecting Toggle Breakpoint, by pressing F9 or by hovering on the
left side of the Program Editor.
Breakpoints are not saved in the project and they will disappear after loading the program again.
Breakpoints are shared between all instances of particular macrofilters that contain them. For example, if there are two instances of TestMacro
with a breakpoint inside, the program will pause both when executing the first and the second one.
Breakpoints can be placed in HMI events.
Breakpoints affect all Worker threads.
The Primary Worker Task can only be switched before the program is started. When the program is paused, all Worker Tasks are paused.
When in the iteration mode, only one Worker Task is being run: the Tracked Worker. The other Worker Tasks are paused, so if they
exchange information that is important for debugging, the normal mode has to be used at some point.
If the program is started in the iteration mode, the Worker Task that is being viewed at the moment (or that contains the macrofilter being viewed) will
be set as the Tracked Worker Task and only this thread will run. At any point, you can run it in normal mode to launch the remaining threads.
The Primary Worker Task is marked with an asterisk attached directly to its name, visible when the program is running. The active Worker Task is
emboldened both when the program is off and when it is running.
A view of the Project Explorer of a paused program. The Main Worker Task, which is the Primary Worker Task, is marked with an asterisk attached
to its name (not to be confused with the asterisk on the right, related to thread consumption). The currently Tracked Worker Task, Communication, is
bold. The ShortWorker has already ended, which is marked with a stop symbol, as opposed to the pause symbol.
Program ComboBox
The ComboBox in the Toolbar allows the user to specify both the Primary Worker Task and the Tracked Worker Task. It can also display active
threads. Its functionality changes depending on the state of the program.
When the program has not run or has stopped, the ComboBox selects the Primary Worker. It is one of two ways of setting the Primary Worker (the
other being through right-clicking the desired Worker Task).
When the program has been started and is paused, the ComboBox displays all active threads at the moment. This includes HMI events, if the pause
happened when an event was being executed. All Worker Tasks that have already ended will not be displayed. The user can set one of the Worker
Tasks active in the ComboBox as the Worker Task to be tracked.
Iterating Program
Running the program in the iteration mode can be achieved with the buttons present in the Application Toolbar. As mentioned before, iterating actions
are single-threaded. All Worker Tasks other than the tracked one will be paused when iterating.
Additional information about iterating can be found in Running and Analysing Programs.
Iterate Program
The Iterate Program button executes one iteration of the Primary Worker Task.
Iterate Back
Under some circumstances it is possible to Iterate Back, that is, to reverse the iteration and go back to the previous data. The requirements for
this option are:
The program has to be paused after completing one iteration of a filter (either with Iterate Current Macro or Iterate Program).
The filter has to be a Task or a Worker Task.
The filter has to be the Primary Worker Task or be inside it.
The Task macrofilter to be iterated back cannot contain other Task macrofilters.
At least one of the loop generators in the task needs to be deterministic and be a source of data (for example it has to be an enumeration
filter). Some examples include: EnumerateIntegers, EnumerateFiles, EnumerateImages.
Filters such as EnumerateImages and EnumerateFiles create a list of objects before their first iteration and enumerate over it. The list will not
reflect later changes to the list (adding/removing images or files).
As long as the Task contains at least one enumeration filter, it is possible to iterate back. Furthermore, there can be other loop generators
present, even if they are not enumerators.
Below are examples of filters that do not allow iterating back by themselves:
Camera grabbing filters, such as GenICam_GrabImage.
Communication filters, such as TcpIp_ReadLine.
Loop — no data to iterate back.
If those filters are in a Task that can be iterated back, they will behave as they would during a normal, non-reversed iteration.
Step Buttons
There are three options for the user to move step-by-step through the program. Those are: Step Over , Step Into , and Step Out .
Those buttons are described here.
Error Handling
Introduction
Error handling is a mechanism which enables the application to react to exceptional situations encountered during program execution. The
application can continue to run after errors raised in filters.
This mechanism should mainly be used for handling errors of the IO_ERROR kind. Usage for other kinds of errors should be taken with caution.
Every application can be created in a manner which precludes DOMAIN_ERROR. This topic is elaborated in a separate article: Dealing with Domain
Errors. A frequent cause of this kind of errors is the omission of filters such as SkipEmptyRegion.
Error handling can only be added to Task Macrofilters. A re-execution of the Task Macrofilter following an error results in a reset of the stateful filters
contained therein. This means in particular a possible reconnect to cameras or other devices. The error handler for each kind of error is executed in
the manner of a Step Macrofilter.
A special case is the error handler of the Task Macrofilter set as Startup Program (usually: Main). The program execution is always terminated after
the error handler finishes.
There are several kinds of errors which can be handled:
Program Execution
During program execution in the IDE, after an error occurs, there will be a message pop-up, with the option to continue (that is, run the error
handler). This message can be turned off in the IDE settings, by changing: Tools » Settings » Program Execution : "Break when exception
occurred".
In the Runtime environment, understandably, there will be no message and the error handling will be executed immediately.
Example
Below is a simple example, acquiring images from a camera in a loop. If the camera is not connected, we enter the error handler, where a message
is returned until the error is no longer being raised. Without error handling the program would stop after the first failed image acquisition:
Sample program with error handler. Macrofilter GrabImage has error handler IO_ERROR (right) with error message.
Result after error handler execution. Result after normal program execution.
try
{
    for(;;)
    {
        // Grab the next image; a false result ends the acquisition loop.
        if (!avl::WebCamera_GrabImage(webCamera_GrabImageState1, 0, atl::NIL, image1))
        {
            return;
        }
    }
}
catch(const atl::IoError&)
{
    // IO_ERROR handler: prepare the error message returned by the program.
    atl::String string1;
    // ... (the rest of the handler body was elided in the original listing)
}
Offline Mode
Introduction
There is a group of filters that require external devices to work correctly, e.g., a camera to grab the images from. The Offline Mode helps to develop
vision algorithms without having access to the real device infrastructure. The ReadFilmstrip filter makes it possible to work on real data without
even knowing the target device infrastructure.
This article briefly describes what the Offline Mode is, what the Offline Data is and how to access this data and modify it.
This article briefly describes what the Offline Mode is, what the Offline Data is and how to access this data and modify it.
Workflow Example
1. Prepare a simple application to collect data directly from the production line using Aurora Vision Studio or import data sets from images stored
on the disk.
2. Develop the project using the previously prepared data, accessing the recorded images with the ReadFilmstrip filter.
3. Replace the ReadFilmstrip filter with the production filter that captures images from the cameras. The replacement is possible with a simple
click-and-replace operation.
4. Use the ready program on the production line. The software is ready to work both with real devices (online mode) and with recorded data
(offline mode).
At any time during the work, the system maintainer can collect new datasets that can be used to tune the finished system. During system
development, the developer can play back updated datasets. Algorithms can be tested without any modification of the project's code.
All the data for the offline mode is stored in an easy-to-share format. The offline mode is the preferred way to work on big projects with a bigger
development team.
Online-Only Filters
The filters that require connected external devices can only operate while being connected, thus they are online-only. Great examples of such filters
are:
GigEVision_GrabImage
GenICam_GrabImage
SerialPort_ReadByte
All Online-Only filters are either I/O Function filters or I/O Loop Generator filters.
The Offline Mode is designed to make the Online-Only filters behave as if they were connected even without the actual connection. For that to
happen, the user needs to provide the offline data that the filter will serve when executed in the Offline Mode.
Although all the Online-Only filters are I/O filters, the opposite is not always true, i.e. there are I/O filters that can operate in the Offline Mode using their
default logic. In most cases this refers to the filters that access the local system resources, e.g.:
LoadImage
LoadObject
GetClockTime
Only Online-Only filters can have their outputs bound to the Dataset Channels. Other filters always execute their default logic.
At a single iteration of the Worker Task macrofilter, there is one Sample available.
Note: it is forbidden to use two datasets from different workspaces within the same project. Trying to use a dataset from another workspace
switches the datasets in all other Worker Tasks to the datasets with the same names but from the new workspace. It is considered a project
error if a dataset cannot be found in the new workspace.
Note: Dataset and Channel assignments are name-based. The Worker Task can be successfully switched to any other Dataset as long as there is
the same set of channels (with the same names and data types) in the new Dataset. Only bound channels count.
In a fresh Aurora Vision Studio installation there is the Default Workspace, available for all new projects and for those projects that have not explicitly
selected any other Workspace.
Structural Modifications
As can be seen in the workspace structure figure, Workspaces, Datasets and Channels are simply collections. Adding and removing workspaces,
datasets and channels may be considered structural modifications. All these operations are available in the Workspaces Window. However,
operations common for the current workspace (adding/removing datasets and channels) are also available in the Filmstrip Control, so there is no
need to jump between controls to perform workspace-related tasks.
By default, Aurora Vision Studio starts up with the default Workspace. Initially the default workspace contains a single Dataset with an empty channel
of type Image. Similarly, each new Dataset created in any workspace also contains an empty channel of type Image.
Content Modifications
As the Offline content is actually the data that is served in the filter outputs (specifically, the channel items), the easiest way to add new data is
through the big "+" button in the last column of the Filmstrip control. The new data editor depends on the type, e.g. for the Image type the image files
are selected from the disk, and the geometrical data, as well as the basic data, is defined through either the appropriate dialog or an inline editor.
Note: When the channel data (e.g. images) is selected from the disk, it is the files at their original locations that are loaded in the offline mode
during the execution.
It is also possible to append online data directly to the bound channel. This can be achieved through the Save icon on the Filmstrip control's toolbar:
Note: All channels within the Dataset have synchronized sizes. That means that manually adding an item to one channel populates all other
channels with data that is default for the channel types. Similarly, saving the online data populates all channels within the dataset either with the
data that comes from bound outputs or with the data default for the channel types in case the channel is unbound.
Existing channel items can be modified with the edit button that shows up when hovering over the item. The button, once clicked, opens the default
editor for the item type:
Editing the channel item.
Aurora Vision Studio Main Window with the Offline mode and execution indicators marked.
1. The Offline mode button,
2. The execution indication button (Idle / Executing),
3. Execution status and the status bar background color. If the program is currently in the execution or paused state, the background color
corresponds to the indication button (2) background color,
4. If the Offline mode is active, there is an additional Offline label at the bottom-right corner of the window.
Program Editor
The Online-Only filters, unless bound with a Filmstrip channel, are disabled in the Offline mode. This is marked with the OFFLINE label over the filter:
Note: As in the case of manually disabled filters, filters disabled due to the current Offline mode make all connected filters disabled too.
See Also
1. Managing Workspaces - an extensive description of how to manage dataset workspaces in Aurora Vision Studio.
2. Using Filmstrip Control - a tool for recording and restoring data from datasets in Aurora Vision Studio.
Other
CopyObject
Does a very simple thing – copies the input object to the output. Useful for creating values that should be sent to the HMI at some point of the
program.
ChooseByPredicate
Gets two individual elements and outputs one of them depending on a condition.
7. Programming Tips
Table of content:
Formulas Migration Guide to version 5.0
Dealing with Domain Errors
Programming Finite State Machines
Recording Images
Sorting, Classifying and Choosing Objects
Optimizing Image Analysis for Speed
Understanding OrNil Filter Variants
Working with XML Trees
Formulas Migration Guide to version 5.0
Introduction
Aurora Vision Studio 5.0 comes with many new functions available directly in the formula block. Moreover, many operations, which were previously
done using filters, can now be accomplished directly in the formulas. This way, the complexity of the inspection program can be reduced
significantly. This document focuses on pointing out some of the key differences between formulas in Aurora Vision Studio 5.0 and its previous
versions. For all functions please refer to the Formulas article in the Programming Reference section of this documentation.
Conditional operator
Before 5.0, a ternary operator ?: was used to perform a conditional execution inside a formula. This compact syntax, however, might prove quite
challenging and hard to understand, especially to users who are not familiar with textual programming. In order to overcome this issue, a more user-
friendly if-then-else syntax has been introduced, while also keeping the option of using the ternary operator.
In the example below, apart from the new conditional syntax, a new square function has also been used, instead of inA*inA.
Mathematical functions
Lerp - computes a linear interpolation between two numeric values.
findLast - searches the specified array for instances equal to the given value and returns the index of the last found item.
findAll - searches the specified array for instances equal to the given value and returns an array of indices of all found items.
minElement - returns an array element that corresponds to the smallest value in the array of values.
maxElement - returns an array element that corresponds to the biggest value in the array of values.
removeNils - removes all Nil elements from an array.
flatten - takes an array of arrays, and concatenates all nested arrays creating a single one-dimensional array containing all individual elements.
select - selects the elements from the array of items for which the associated predicate is True.
distance - calculates the distance between a point and the closest point of a geometric primitive (one of: Point2D, Segment2D or Line2D).
Examples
An example of a possibly erroneous situation is a use of the ThresholdToRegion filter followed by RegionMassCenter. In some cases, the first filter
can produce an empty region and the second will then throw a Domain Error. To prevent that, the OrNil variant should be used to create an
appropriate data flow.
Another typical example is when one is trying to select an object having the maximum value of some feature (e.g. the biggest blob). The
GetMaximumElement filter can be used for this purpose, but it will throw a Domain Error if there are no objects found (if the array is empty). In this
case using the OrNil option instead of the Unsafe one solves the problem.
Instructions
In Aurora Vision Studio, Finite State Machines can be created with variant macrofilters and registers. The general program schema consists of a
main loop macrofilter (a task, usually the "Main" macrofilter) and a variant macrofilter within it with variants corresponding to the states of the Finite
State Machine. Individual programs may vary in details, but in most cases the following instructions provide a good starting point:
1. Create a Variant Step macrofilter (e.g. "App") for the State Machine with variants corresponding to individual states.
2. Use a forking register (e.g. "regState") of the String type, so that you can assign clear names to each state.
3. At first, you will have only one default variant. Remove it and add one variant with a meaningful label for each state.
4. Do not forget to set an appropriate initial value for the forking register (the initial state).
5. Add macrofilter inputs – usually for the input image and major parameters.
6. Add macrofilter outputs that will contain information about the results. In each state these outputs can be computed in a different way.
7. Create an instance of this variant macrofilter in some task with a loop, e.g. in the "Main" macrofilter.
8. In each state compute the value of the next state and connect the result to the next port of the forking register. Typically, an HMI button's
outputs are used here as inputs to formula blocks.
9. Optional: Create a formula in the main loop task for enabling or disabling individual controls, depending on the current state (also requires
exposing the current state value as the variant step's output).
Example
For a complete example, please refer to the "HMI Start-Stop" example program. It is based on two states: "Inspecting" and "Stopped". In the first state
the input images are processed, in the second they are not. Two buttons, Start and Stop, allow the user to control the current state.
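A sketch of a next-state formula for such a two-state machine (the input names are assumed here; regState is the forking register):
nextState = if inStartClicked then "Inspecting" else if inStopClicked then "Stopped" else regState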
Recording Images
Continuous Recording
Images can be recorded to a disk with this simple program:
The solution for choosing the smaller segment and the bigger segment.
The ChooseByPredicate filter is very similar to the ternary (?:) operator from the C/C++ programming language.
RegionMassCenter throwing a Domain Error on empty input region. The program execution is stopped.
RegionMassCenter not being executed due to conditional processing introduced with SkipEmptyRegion. Program execution continues.
RegionMassCenter in the OrNil variant safely executed and introduces conditional processing on its output. Program execution continues.
Introduction
Extensible Markup Language (XML) is a markup language used for storing data in human-readable text-format files. The XML standard is very popular
for exchanging data between two systems. XML provides a clear way to create text files which are readable both by a computer application and a
human.
An XML file is organized into a tree structure. The basic element of this tree structure is called a node. A node has a name, attributes and can have a
text value. The example below shows a node with the name 'Value' and with two attributes 'a', 'b'. The example node contains text '10.0'.
<Value a="1.0" b="2.0">10.0</Value>
An XML node can also contain other nodes which are called children. The example below shows a node named 'A' with two children: 'B' and 'C'.
<A>
<B />
<C />
</A>
To load or store data in the XML format, two filters are necessary: Xml_LoadFile and Xml_SaveFile. Also two conversions are possible:
XmlNodeToString and StringToXmlNode. These filters are commonly used to perform operations on data received from different sources like Serial
Port or TCP/IP.
The table below shows basic examples of XPath usage of Xml_SelectMultipleAttributes_AsStrings filter:
Establishing a Connection
To communicate using the TCP/IP protocol stack, first a connection needs to be established. There are two ways of doing this: (1) starting a
connection to a server and (2) accepting connections from clients. The sockets resulting from both of the methods are later indistinguishable - they
behave the same: as a bidirectional communication utility.
A connection, once created, is accessed through its socket. The returned socket must be connected to all the following filters, which will use it for
writing and reading data, and disconnecting.
Usually, a connection is created before the application enters its main loop and it remains valid through multiple iterations of the process. It becomes
invalid when it is explicitly closed or when an I/O error occurs.
Reads a serialized object. The data can only come from another
instance of Aurora Vision Studio/Executor executing the
TcpIp_WriteObject filter described above, using the same type
parameter.
Reads all text, until EOF (until other side closes the connection).
Reads all data, until EOF (until other side closes the connection).
* - the delimiter and the suffix are passed as escaped strings, which are special because they allow for so-called escape sequences. These are
combinations of characters which have a special meaning. The most important are "\n" (newline), "\r" (carriage return), and "\\" - a verbatim
backslash. This allows for sending or receiving certain characters which cannot be easily included in a String.
Application Structure
The most typical application structure consists of three elements:
1. A filter creating a socket (TcpIp_Connect or TcpIp_Accept), executed before the main loop.
2. A macrofilter realizing the main loop of the program (task).
3. A filter closing the socket (TcpIp_Close).
If the main loop task may exit due to an I/O error, as described in the next section, there should also be a fourth element: a Loop filter (possibly
together with some Delay), ensuring that the system will attempt reconnection and enter the main loop again.
For more information see the "IO Simple TcpIp Communication" official example.
After choosing a device you can go to its settings tree by clicking Tools » Device Settings in the GenICam Device Manager. You should see device
parameters grouped by category there. Some parameters are read-only, others are both readable and writable. You can set the writable parameters
directly in the GenICam Settings Tree.
Another way to read or write parameters is by using the filters from the GenICam category. In order to do this you should:
1. Check the type of the parameter you'd like to read/write (you can check it in the GenICam Settings Tree).
2. Add a proper filter from the GenICam category to the program.
3. Choose a device, define the parameter name and the new value (when you want to set the parameter's value) in the filter properties.
4. Execute the filter.
Known Issues Regarding Specific Camera Models
In this section you will find solutions to known issues that we have come across while testing communication between Aurora Vision products and
different camera models through GenICam.
Manta G032C
We have encountered a problem with image acquisition (empty preview window with no image from the camera) while testing this camera with default
parameters. The reason for this is that the packet size in this camera is set by default to 8.8 kB. You need a network card supporting Jumbo Packets
to work with packets of such size (the Jumbo Packets option has to be turned on in such a case). If you don't have such a network card, you need to
change the packet size to 1.5 kB (maximal UDP packet size) in the GenICam Settings Tree for the Manta camera. In order to do this, please open
the GenICam Device Manager, choose your camera and then click Tools » Device Settings (go to the GenICam Settings Tree). The parameter
which needs to be changed is GevSCPSPPacketSize in the GigE category; please set its value to 1500 and save it. After doing this, you should be
able to acquire images from your Manta G032C camera through GenICam.
Adlink NEON
On devices with an early version of the firmware, there are problems with starting image acquisition. The workaround is simple: just set the
IgnoreTransportImageFormat setting to "yes". This option is available in the GenICam Device Manager (Tools → Application Settings...):
Daheng cameras
1. Open GenTL settings: Tools » Manage GenICam Devices... » Tools » Application Settings
2. Turn on "Keep System Modules Alive" option
Creating Tasks
To start working with National Instruments' devices, a task that will perform a specific operation must be created. For this purpose, one of the filters
from the list shown below should be selected.
DAQmx_CreateDigitalPort
DAQmx_CreateAnalogChannel
DAQmx_CreatePulseChannelFreq
DAQmx_CreateCountEdgesChannel
These filters return outTaskID, which is the identifier of a created task.
Starting Tasks
To start a task, the DAQmx_StartTask filter should be used.
Finishing Tasks
There are two ways of finishing tasks: by using the DAQmx_StopTask filter or by waiting for the entire program to finish.
It should be noted that you cannot start two tasks using the same port or channel in a device.
Basic scheme of program
Configuring Tasks
After creating a task, it can be configured using the filters DAQmx_ConfigureTiming, DAQmx_ConfigAnalogEdgeTrigger or
DAQmx_ConfigDigitEdgeTrigger. Note that some configuration filters should be used before the task is started.
Sample Application
The following example illustrates how to use DAQmx filters. The program shown below reads 5000 samples of voltage from an analog input. However,
the data is acquired on the rising edge of a digital signal.
At first, an analog channel is created. In this sample, input values will be measured, so the inCVType input must be set to Voltage and inIOType
should be set to Input. The rest of the inputs depends on the used device.
Afterwards, a sample clock is assigned to the task. The inDeviceID input might be set to Auto, because there is only one device used in this
program. Other parameters can be set according to the user's device.
The sample application should start acquiring data after a digital edge occurs. Because the active edge of the digital signal should be rising, the
inTriggerEdge input must be set to Rising (for a falling edge, the value Falling must be chosen). The task is ready to start.
The last step in the presented program is to acquire multiple data from the selected device. The filter DAQmx_ReadAnalogArray can be used for
this. DAQmx_StopTask is not required, because this program doesn't use one channel in two different tasks. The acquired array of values might be
represented e.g. as a profile (shown below).
Establishing Connection
To communicate with a ModbusTCP device, first a connection needs to be established. The ModbusTCP_Connect filter should be used for
establishing the connection. The default port for ModbusTCP protocol is 502. Usually, a connection is created before the application enters its main
loop and it remains valid through multiple iterations of the process. It becomes invalid when it is explicitly closed using ModbusTCP_Close filter or
when an I/O error occurs. The Modbus filters work on the same socket as the TCP/IP filters. If the Modbus device supports the Keep-Alive mechanism, it is
recommended to enable it to increase the chance of detecting a broken connection. For more details please read Using TCP/IP Communication.
The Coils, Inputs and Registers in ModbusTCP protocol are addressed starting at zero. The table below lists filters to control object types mentioned
above.
Custom Functions
To support device-specific functions, Aurora Vision Studio provides the ModbusTCP_SendBuffer filter, which allows you to create a custom Modbus frame.
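A custom Modbus TCP frame is a byte buffer consisting of a 7-byte MBAP header followed by the PDU (function code and data). The sketch below, following the public Modbus specification, builds a frame for the standard function 0x03 (Read Holding Registers); how such a buffer is then passed to ModbusTCP_SendBuffer is not shown here:

#include <cstdint>
#include <vector>

// Builds a Modbus TCP frame for function 0x03 (Read Holding Registers).
std::vector<uint8_t> BuildReadHoldingRegisters(uint16_t transactionId,
    uint8_t unitId, uint16_t startAddress, uint16_t registerCount)
{
    std::vector<uint8_t> frame;
    // MBAP header
    frame.push_back(static_cast<uint8_t>(transactionId >> 8)); // transaction identifier
    frame.push_back(static_cast<uint8_t>(transactionId));
    frame.push_back(0x00);                                     // protocol identifier (always 0)
    frame.push_back(0x00);
    frame.push_back(0x00);                                     // length: unit id + PDU = 6 bytes
    frame.push_back(0x06);
    frame.push_back(unitId);                                   // unit identifier
    // PDU
    frame.push_back(0x03);                                     // function code
    frame.push_back(static_cast<uint8_t>(startAddress >> 8));  // starting address (zero-based)
    frame.push_back(static_cast<uint8_t>(startAddress));
    frame.push_back(static_cast<uint8_t>(registerCount >> 8)); // number of registers to read
    frame.push_back(static_cast<uint8_t>(registerCount));
    return frame;
}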
Project Files
When you save a project, several files are created. Most of them have textual or XML formats, so they can also be edited in a text editor and can be
easily managed with version control systems.
A project may consist of the following file types:
Single *.avproj file (XML) – this is the main file of the project; it plays the role of a table of contents.
Single *.avcode file (textual) – this is the main source code file.
Zero or more *.avlib files (textual) – these are additional source code modules.
Zero or more *.avdata files (binary) – these files contain binary data edited with graphical tools, such as Regions or Paths.
Single *.avview file (XML) – this file stores information about the layout of Data Previews panels.
Zero or one *.avhmi file (XML) – this file stores information about the HMI Design.
Introduction
A program accepted by Aurora Vision Studio and Aurora Vision Executor is executed in a virtual machine environment. Such a machine
executes the filters of the source program one by one, preserving the proper execution rules. The C++ Code Generator makes it possible to automatically create a
C++ program which is a logical equivalent of the job performed by the virtual machine of the Aurora Vision Studio executor. In such programs,
consecutive filter calls are changed to calls of functions from Aurora Vision Library.
To generate, compile and run programs in this way, you also need an Aurora Vision Library license.
User interface
The C++ Code Generator functionality is available in the Main Menu in File » Generate C++ Code.... After choosing this command, a window with
additional generator options and parameters (divided into tabs) will be opened.
In the Output tab basic generator parameters can be set:
Program name - determines the name of the generated C++ program (it does not have to be related to the Aurora Vision Studio project name). It
is the base name for the names of the source files and (after choosing the proper option) it is also the name of the newly created project.
Output directory - determines the folder in which the generated program and project files will be saved. The folder has to exist before
generation is started. The path can be either absolute or relative (when the Aurora Vision Studio project is already saved) starting from the
project directory. All program and project files will be saved in this folder with no additional subfolders. Before overwriting any files in the output
directory, a warning message with a list of conflicting files will be displayed.
Code namespace - an optional namespace, in which the whole generated C++ code will be contained. A namespace can be nested many
times using the "::" symbol as a separator (e.g. "MyNamespace" or "MyProject::MyLibrary::MyNamespace"). Leaving the code namespace field
empty will result in generating code to the global namespace.
Create sample Microsoft Visual Studio solution - enabling this option will result in generating a new, initially configured sample solution
for compiling the generated code in the Microsoft Visual Studio environment. Each time this option is enabled and code is generated (e.g.
when generating code again after making some changes in an Aurora Vision project), a new solution will be created and any potential changes
made in an already existing solution with the same path will be overwritten.
This option is enabled by default when generating code for the first time for the selected folder. It is disabled by default when code is generated
again (updated) in the selected output folder.
Details regarding the Microsoft Visual Studio solution configuration required by the generated code are described later in this document.
In the Modules tab there is a list of project modules which can participate in code generation. It is possible to choose any subset of the modules existing
in an Aurora Vision Studio project, as long as this does not break the dependencies between them. When a subset of modules is selected, only the
macrofilters and global parameters present in the selected modules will be generated to C++ code. If disabling a module breaks a dependency
(e.g. a macrofilter being generated refers to another macrofilter in a module which is excluded from code generation), an error will be reported
during code generation.
In the Options tab there are additional and advanced options modifying the behavior of the C++ Code Generator:
Include instances in diagnostic mode - diagnostic instances (instances processing data from diagnostic outputs of preceding filter
instances) are meant to be used for analyzing program execution during development; that is why they are not included by default in
generated programs. In order to include such instances in a generated program, this option has to be enabled (note: for diagnostic
outputs of functions to work correctly, the diagnostic mode of Aurora Vision Library has to be enabled in the
C++ program).
Generate macrofilter inputs range checks - the virtual machine validates data on macrofilter inputs against the allowed ranges
assigned to them. Generated code reproduces this behavior by default. If such validation is not necessary in the final product, this
option can be unchecked to remove it from the C++ program.
Enable function block merging - generated code contains conditional blocks and loops which reproduce the virtual machine's conditional
and array modes of filter execution. The C++ Code Generator uses an optimization which places, where possible,
several filter calls in common blocks (merging their function blocks). Unchecking this option disables this optimization. This
setting is meant to help in solving problems and should usually be kept enabled.
The Generate button at the bottom of the window starts code generation according to the chosen configuration (if existing files need to be
overwritten, an additional confirmation window is displayed); when the generation operation completes successfully, it closes the
window and saves the parameters. The Close button closes the window and saves the chosen configuration. The Cancel button closes the window
and discards the changes made to the configuration.
All parameters and generation options are saved together with the Aurora Vision Studio project. At the next code generation (when the configuration
window is opened again), the parameters previously chosen for the project will be restored.
In order to keep names between Aurora Vision Studio project files and C++ program files synchronized, it is advised to save a project each time
before generating C++ code.
When a macrofilter contains elements which must preserve their state across consecutive macrofilter iterations (e.g. a Step macrofilter containing
registers, loop generators or accumulators), a state argument will be added to the function interface in the first position:
bool MyMacro( MyMacroState& state, int inValue1, int inValue2, int& outResult )
Data types of inputs, outputs and connections will be mapped in code to types, structures and constructs based on templates (e.g.
atl::Array<> or atl::Conditional<>) coming from Aurora Vision Library. Filter calls will be mapped to function calls, also coming from
Aurora Vision Library. In order to get access to the proper implementations and declarations, Aurora Vision Library headers are included in
the generated code. Such includes can appear both in .cpp and in .h files (because references to Aurora Vision Library types also appear in
function signatures generated from macrofilters).
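For illustration, a declaration generated from a hypothetical macrofilter with an array output and a conditional output might look as follows (the macrofilter name and its ports are invented for this example):

#include "AVL.h" // main Aurora Vision Library header (as included by generated code)

// Hypothetical macrofilter "DetectMarks":
bool DetectMarks
(
    const avl::Image&               inImage,   // image input
    atl::Array<avl::Region>&        outMarks,  // array connection mapped to atl::Array<>
    atl::Conditional<avl::Point2D>& outAnchor  // conditional connection mapped to atl::Conditional<>
);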
The generated program also contains static constants, including constant compound objects, which require initialization or loading from a file
(e.g. if in the IDE there is a hand-edited region on a filter input, such a region will be saved in an .avdata file together with the generated code). A
program requires these constants to be prepared before any of its elements is used. For this purpose, an additional parameterless
function named Init is added to the main module of the generated code; compound constants are prepared as part of this function. This function has to be called before using
any element of the generated code (also before constructing a state object).
Generated Code Usage
The basis of generated code are the pairs of files created from the modules, with the structure described above. These modules can be used as part of a
vision program, e.g. as a set of functions implementing the algorithms designed in the Aurora Vision Studio environment.
Before calling any generated function, constructing a function state object or accessing a global parameter, it is necessary to call the
Init() function added to the main module code. The Init function must be called once, at the beginning of the program. This function is added to
the generated code even when the program does not contain any compound constants requiring initialization, so that the structure of the
generated code does not have to be modified after the program is changed and the code is updated.
Stateful functions
As described above, a function may sometimes have an additional first argument named state, passed by reference. Such an object preserves the
function state between consecutive iterations (each call of such a function is considered one iteration). Such objects keep, among other things,
generator positions (e.g. the current position of filters of the Enumerate* type), register states of Step macrofilters and internal filter data (e.g. the state of a
connection with a device in filters which acquire images from cameras).
A state object type is a simple class with a parameterless constructor (which initializes the state for the first iteration) and with a destructor which
frees the state resources. It is the responsibility of the application using functions with a state to prepare a state object and to pass it to the
function through a parameter. An application may construct a state object only after calling the Init() function, and should not modify such an object by
itself. One instance of a state object is intended for function calls within one and the same task (it is an equivalent of a single macrofilter instance
in an Aurora Vision Studio program). The lifetime of a single object should be sustained for as long as the task is performed (e.g. you should not
construct a new state object to perform the same operation on consecutive video frames). A single state object cannot be shared among different
tasks (e.g. if the same function is called twice within one iteration, both calls should use different instances of the state object).
The state object type name is generated automatically and can be found in the declaration of the generated function. The C++ Code Generator, if possible,
creates names of state types according to the pattern MacrofilterNameState.
Freeing state resources is performed in the state class destructor. If a function established connections with external devices, the state object
destructor also closes them. The recommended way of handling a state is to use a local variable on the stack, so that the destructor is
called automatically when execution leaves the program block.
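Putting these rules together, a minimal usage sketch for the MyMacro function shown earlier could look like this (the loop and the input values are hypothetical):

int main()
{
    Init(); // must be called before any generated function or state object

    {
        MyMacroState state; // parameterless constructor: state for the first iteration
        for (int i = 0; i < 100; ++i)
        {
            int result;
            // the same state instance is reused in consecutive iterations of the task
            MyMacro(state, i, 2 * i, result);
        }
    } // leaving the block calls the destructor, which frees the state resources

    return 0;
}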
Error handling
The functions of Aurora Vision Library and of the generated code report the same errors which can be observed in the Aurora Vision Studio environment.
At the C++ code level, error reporting is performed by throwing an exception object of a type derived from the atl::Error
class. Methods of this class can be used to get the description of the reported problem.
Note: the Init() function can also throw an exception (e.g. when an external .avdata file containing a static constant could not be loaded).
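A minimal sketch of such error handling is shown below (the Message() accessor is assumed here to be one of the methods returning the problem description):

#include <iostream>

int main()
{
    try
    {
        Init(); // may throw, e.g. when an external .avdata file cannot be loaded
        // ... calls of generated functions ...
    }
    catch (const atl::Error& error)
    {
        // assumed accessor returning the description of the reported problem
        std::cout << "Error: " << error.Message() << std::endl;
        return 1;
    }
    return 0;
}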
Global parameters
Simple global parameters (only read by the vision application) are generated as global variables in the C++ program (with the same names as the
global parameters). It is possible to modify these variables from user code in order to change the application configuration; however, such
modification is not thread-safe.
When thread-safe modification of global parameters is needed, the global parameters should only be accessed with the WriteParameter and
ReadParameter filter blocks in the vision application. To give user code access to such operations, the appropriate reads/writes should be
encapsulated in a public macrofilter participating in the code generation. This exposes the parameters to user code in the form of a
function call.
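For example, a global parameter named MinArea (a hypothetical name) would appear in the generated code roughly as follows:

// In the generated code (simplified):
int MinArea = 100; // global variable generated for the "MinArea" global parameter

// In user code - direct modification is possible, but not thread-safe:
void ConfigureInspection()
{
    MinArea = 250;
}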
Introduction
Generation of macrofilter interfaces makes it possible to design complex applications without losing the comfort of visual programming offered by the
environment of Aurora Vision Studio.
The most common reasons people choose .NET Macrofilter Interfaces are:
Creating applications with very complex or highly interactive HMI.
Creating applications that connect to external databases or devices which can be accessed more easily with .NET libraries.
Creating flexible systems which can be modified without recompiling the source code of an application.
Macrofilters can be treated as mini-programs which can be run independently of each other. Aurora Vision Studio makes it possible to use these macrofilters
in programming languages such as C# or C++/CLI and to execute them as regular class methods. Because these methods are just interfaces to the
macrofilters, there is no need to re-generate the assembly after each change made to the AVCODE (the graphical program).
Requirements
In order to build and use a .NET Macrofilter Interface assembly, the following applications must be present on the user's machine:
For building:
Aurora Vision Professional 5.3
Microsoft Visual Studio 2015 (or greater)
For running:
Aurora Vision Professional 5.3 or Aurora Vision Runtime 5.3
Microsoft Visual C++ Redistributable Package of the same bitness (32/64) as the generated assembly and the same version as the Microsoft Visual Studio used to build the assembly.
Namespace
Defines the name of the main library class container.
Macrofilter Interface Class Name
Defines the name of the class, where all macrofilters will be available as methods (with the same signatures as macrofilters).
Generate sample Microsoft Visual Studio solution
Generates an empty Microsoft Visual Studio C# WinForms project that uses the Macrofilter .NET Interface assembly to be created.
Link Aurora Vision Project Files
Includes the current Aurora Vision project files in the C# project as links. This way the Aurora Vision project files (*.avproj, *.avcode, *.avlib) are
guaranteed to be easily accessible from the application output directory, e.g. with the following line of code (assuming the project is
Inspection.avproj):
macrofilters = InspectionMacrofilters.Create(@"auroravision\Inspection.avproj");
Interface page
Environment
Selection of which Microsoft Visual Studio build tools should be used to build the assembly. The drop-down list is populated with all
compatible tools detected in the system, including Microsoft Visual Studio and Microsoft Visual Studio Build Tools environments. If none of the detected
environments is suitable, a custom environment may be used; in that case the MSBuild.exe location and the target Microsoft Visual Studio version need to be
defined manually.
MSBuild location
Shows the path to the actual MSBuild.exe according to the selected environment. It is editable when the Custom environment is selected.
Microsoft Visual Studio version
Defines the generated sample C# project format (sln and csproj files).
Windows SDK version
Allows choosing the appropriate SDK version when generating the Macrofilter .NET Interface. The list contains all SDK versions detected in the
system.
Advanced page
Assembly Signing
Enables signing the generated Macrofilter .NET assembly with a given private key, making it a Strong-Named assembly. Keys may be
generated e.g. with the Strong Name Tool or in the Microsoft Visual Studio IDE (C# project properties page).
The static MacrofilterInterfaceClass.Create method receives a path to the *.avcode, *.avproj or *.avexe file for which the DLL library was
generated. It is suggested to wrap MacrofilterInterfaceClass instantiation in a try-catch statement, since the Create method may
throw exceptions, e.g. when some required libraries are missing or corrupted.
Finalization
The MacrofilterInterfaceClass class implements the IDisposable interface; its Dispose() method releases the libraries and performs other
cleanup. It is good practice to call it on application closure.
Bitness
The generated assembly has the same bitness as the Aurora Vision Studio used to create it. That is why the user application needs to have its target platform
adjusted accordingly to avoid a library format mismatch. For example, if 64-bit Aurora Vision Studio was used, the user application needs to have its target
platform switched to the x64 option.
Diagnostic mode
The macrofilter execution mode can be modified with the DiagnosticMode property of the generated Macrofilter .NET Interface class. It allows both
checking and enabling/disabling the diagnostic mode.
Dialogs
It is possible to edit geometrical primitives in the same way as in Aurora Vision Studio. All that needs to be done is to use the appropriate classes from the
Avl.NET.Designers.dll assembly. Dialog classes are defined in the AvlNet.Designers namespace. For more information see the AVL.NET Dialogs article.
Full AVL.NET
With Aurora Vision Library installed, one can take advantage of the full AVL.NET functionality in applications that use Macrofilter .NET Interface
assemblies. Just follow the instructions from Getting Started with Aurora Vision Library .NET.
Example
Introduction
This example is a step-by-step demonstration of the generation and usage of a .NET Macrofilter Interface assembly. To run this example, at least Aurora
Vision Studio Professional 5.3 and Microsoft Visual Studio 2015 are required. Visual C# will be used to execute the macrofilters.
In the Interface page, check ThresholdLena to generate a .NET Macrofilter Interface for this macrofilter. It will be accessible as a C#
method later on.
When ready, click Generate to generate an assembly, which will allow us to run the ThresholdLena macrofilter in a Visual C# application.
using System;
using System.Windows.Forms;
using AvlNet;
using AuroraVision;
public partial class Form1 : Form
{
/// <summary>
/// Object that provides access to the macrofilters defined in the Aurora Vision Studio project.
/// </summary>
private readonly ThresholdLenaMacrofilters macros;
public Form1()
{
InitializeComponent();
try
{
string avsProjectPath = @"auroravision\ThresholdLena.avproj";
macros = ThresholdLenaMacrofilters.Create(avsProjectPath);
}
catch (Exception e)
{
MessageBox.Show(e.Message);
}
}
The ThresholdLenaMacrofilters class does not provide a public constructor. Instead, instances of this class can only be
obtained through its static Create method, which accepts a path to an *.avcode, *.avproj or *.avexe file. In this example it takes the path to the
project file with the definition of the ThresholdLena macrofilter. The relative path passed to the Create method means that the runtime will look for
the file in the application output directory. To guarantee that this file will be found, it should be included in the project and its "Copy to Output
Directory" property should be set to either "Copy always" or "Copy if newer".
The C# project is prepared to run the macrofilters as methods. Since ThresholdLena takes two optional float values, which stand
for the threshold's minimum and maximum, and outputs an image, let's add one PictureBox and two TrackBar controls with a value range of 0-255:
Changing the value of either track bar calls the UpdateImage() method, in which the ThresholdLena macrofilter is executed and the computed image is
displayed in the PictureBox control:
private void UpdateImage()
{
    try
    {
        //create an empty image buffer to be populated in the ThresholdLena macrofilter
        using (var image = new Image())
        {
            //call macrofilter
            macros.ThresholdLena(minTrackBar.Value, maxTrackBar.Value, image);
            //display the resulting image in the PictureBox control here
        }
    }
    catch (Exception e)
    {
        MessageBox.Show(e.Message);
    }
}
On application closing, all resources loaded in the ThresholdLenaMacrofilters.Create(...) method should be released. This is achieved by calling
the Dispose() instance method during the disposal of the containing form, in the Form1.Dispose(bool)
override:
if (macros != null)
    macros.Dispose();
base.Dispose(disposing);
}
Hints
The generated macrofilter interface class also offers an Exit() method, which allows the user to disconnect from the Aurora Vision environment.
Every Step macrofilter has a dedicated resetting method which resets its internal state. For example, the ResetLenaThreshold() method
resets internal register values and any iteration states.
The best way to create an application using macrofilter interfaces is to use encrypted *.avexe files. This secures the application code against further
modifications.
Diagnostic mode can be switched on and off with IsDiagnosticModeEnabled static property of the AvlNet.Settings class.
An application using the Macrofilter .NET Interface may also reference AVL's AvlNet.dll assembly to enable direct AVL function calls from
user code (see Getting Started with Aurora Vision Library .NET for referencing AvlNet.dll in user applications).
The next step is to choose the dongle that is about to be upgraded and then click the "License Update" button, which will bring up the CmFAS Assistant window:
After clicking the "Next" button, a window with the possible operations will appear:
To create the WibuCmRac file, which is necessary for the update process, the first option, "Create license request", should be selected.
Because there is already an Aurora Vision Studio license in the dongle being upgraded, in the next step "Extend existing license" should be chosen.
In the next window one can choose which vendor's license should be updated. "Adaptive Vision Sp. z o.o." should be selected.
The last step of creating the WibuCmRac file is to choose its name (the default is fine) and location.
All settings need to be applied by clicking "Commit". If everything went well, the following window will appear:
If your dongle happens to be empty (no prior Aurora Vision license is programmed and the list of available producers is empty), you need to select
the "Add license of a new vendor" option in the CmFAS Assistant:
In the next window you will be prompted to enter the vendor's FirmCode. In the case of "Adaptive Vision Sp. z o.o." you should type 102213, as shown
in the image below:
After committing these steps, the request file will be generated as described earlier in this section.
Navigate to the file received from the Aurora Vision team and click "Commit".
To open the card configuration, double click its icon and select the "Configuration/Modules" category in the Navigation Area on the left side of the
dialog. In the main window you can add individual modules. Remember that in Profinet the slot configuration must match the configuration of
your master device, otherwise the connection will not work. For boolean indicators we recommend "1 Byte Output" or "1 Byte Input".
You can see the final address table (the addresses on the Hilscher device where input or output data will be available) in the "Configuration/Address
table" category. If you plan to use IORead and IOWrite filters, for example Hilscher_Channel_IORead_SInt8, note the addresses. You can switch the
display mode to decimal, as Aurora Vision Studio accepts decimal addressing, not hexadecimal. The Aurora Vision Studio implementation of Profinet
checks whether the address and data size match. In the sample configuration below, writing a byte to address 0x002 would not work, because that
module's address starts at 0x001 and spans 2 bytes. Moreover, Aurora Vision Studio prohibits writing with a SlotWrite filter to input areas, and reading
with a SlotRead filter from output areas. Click "OK" when you have finished the configuration.
The final step is to generate configuration files for Aurora Vision Studio. You can do this by right-clicking the device icon, then navigating to
"Additional Functions->Export->DBM/nxd...", entering your configuration name and clicking "Save". You can now close SYCON.net for this example;
remember to save your project first, so that it is easier to add new slots later.
Filters in Aurora Vision Studio
At the beginning of every program in which you want to use filters intended for communication over a Hilscher card, you need to add the
Hilscher_Channel_Open_Profinet filter. The configuration files generated in the previous step are now required in the inConfig (xxx.nxd) and inNwid
(xxx_nwid.nxd) properties of that filter. These files are used when there is no connection between the card and a Profinet master; in that case
Aurora Vision Studio updates the card configuration and starts the communication. For IO, we recommend the SlotRead and SlotWrite filters, as they
are more convenient. For example, the Hilscher_Channel_SlotWrite_SInt8 filter writes one byte of signed data to the selected slot. Slot numbers
match those in the "Configuration/Modules" category of the card configuration in the SYCON.net program.
9. Working with GigE Vision®
Devices
Table of content:
Enabling Traffic in Firewall
Enabling Jumbo Packets
GigE Vision® Device Manager
Connecting Devices
Device Settings Editor
Known Issues
Enabling Traffic in Firewall
The standard Windows firewall or other active firewall applications should prompt for confirmation to enable incoming traffic upon the first start of an Aurora
Vision program that uses a video streaming filter. A sample prompt message from the standard Windows 7 firewall is shown in the image below.
Please note that a device connected directly to the computer's network adapter in Windows Vista and Windows 7 will become an element of an
unidentified network. Such a device will be treated by default as part of a Public network. In order to communicate with such a device you must allow traffic
also in Public networks, as shown in the image above.
Clicking Allow access will enable the application to stream video from the device. Because of the delay caused by the firewall dialog, the first run of a program
may fail with a timeout error. In such a situation just run the program again after enabling access.
For information about changing the settings of your firewall application, search for how to allow a program to communicate through the firewall in Windows
help or in the third-party application's manual. The GigE Vision® driver requires that incoming traffic is enabled on all UDP ports for Aurora Vision Studio and
Aurora Vision Executor (by default located in C:\Program Files\Aurora Vision\Aurora Vision Studio Professional\AuroraVisionStudio.exe and
C:\Program Files\Aurora Vision\Aurora Vision Studio Runtime\AuroraVisionExecutor.exe).
2. Right click the network adapter that has a connection with the device and open its properties (an administrator password may be needed).
4. In the Advanced tab select the Jumbo Packet property and increase its value up to 9014 bytes (9 kB). This step might look different
depending on the network card vendor. For some vendors this property might have a different but similar name (e.g. Large Packet). If there is
no property for enabling or setting a large packet size, the card does not support jumbo packets.
5. Click OK.
Refresh
The Refresh button performs a new search in the network. Use this function when the network configuration has been changed, a new device has been
plugged in, or your device has not been found at startup.
Tools
The Tools button opens a menu with functions designed for device configuration. Some of these functions are device dependent and require the user to
select a device on the list first (they are also available in the device context menu).
Static address — this field allows you to set a static (persistent) network configuration saved in the device's non-volatile memory. Use this setting when
the device is identified by an IP address that must not change or when automatic address configuration is not available. This field has no effect
when the Use static IP field is not checked.
Current address — this read-only field shows the current network configuration of the device, for example the address assigned to it by a DHCP
server.
Device IP configuration — this field allows you to activate or deactivate specific methods of acquiring an address by the device on startup. Some
of these options may be unavailable (grayed out) when the device does not support the given mode.
Use static IP — the device will use the address specified in the Static address field.
Use DHCP server — when a DHCP server is available in the network, the device will acquire an automatically assigned address from it.
Use Link-local address — when no other method is available, the device will try to find a free address in the 169.254.-.- range.
When using this method (for example on a direct connection between the device and a computer) the device will take significantly more
time to become available in the network after startup.
After clicking OK the new configuration will be sent to the device. The configuration can be changed only when the device is not used by another
application and/or is not streaming video. The new configuration may not take effect until the device is restarted or reconnected.
Tool: Assign Temporary IP for Unreachable Device
This tool is intended for situations when a device cannot be accessed because of an invalid or unspecified network configuration (note that this
should be a very rare case and usually the device should appear in the list). The tool allows you to immediately change the network address of an idle device
(thus realizing the GigE Vision® FORCE IP function).
This tool requires the user to specify the MAC address of the device's hardware network adapter (it should be printed on the device casing). After that, a new IP
configuration can be specified. The address can be changed only when the device is idle (not connected to another application and not streaming
video). The new address will be available immediately after a successful send operation.
Tool: Application Transport Settings...
This tool allows you to access and edit application settings related to the driver transport layer, such as connection attempts and timeouts. The settings are saved
and used at the whole-application level. Changes affect only newly opened connections.
Tool: Open GenICam XML Directory
GigE Vision® devices implement the GenICam standard. The GenICam standard requires that a device be described by a special XML file that
defines all device parameters and capabilities. This file is usually obtained automatically by the application from the device memory or from the
manufacturer's web page. Sometimes the XML file is supplied by the manufacturer on a separate disk. Aurora Vision Studio and Aurora
Vision Executor use a special directory for these files, located in the user data directory. Use this tool to open that directory.
Device description files should be copied into this directory without changing their name, extension or content. The file can also be supplied as a ZIP
archive — do not decompress such a file or change its extension.
In this mode the manager allows you to select one device from the list (by selecting it on the list and clicking Select, or by double-clicking it on the list). Below
the list there are options determining how the device will be identified in the program:
IP Address - (default) the current device IP address will be saved in the program. This mode allows for a faster start, but when the device uses
automatic IP configuration its address can change in the future and the device will become unavailable for the program.
MAC Address - the device's hardware MAC address will be saved in the program. Upon every program start the application will search for the device to
obtain its most recent IP address. This mode allows the device to be identified even when its IP has changed.
Serial number - the device's serial number will be saved in the program. Use this mode only when serial number identification is supported by the
device. Upon every program start the application will search for the device to obtain its most recent IP address. This mode allows the
device to be identified even when its IP has changed.
After device selection, the address in the chosen format is inserted into the filter's property value.
In this mode an additional dialog is shown with a drop-down list of pixel formats supported by the device. When there is more than one device in the
network, you must first select the device for which the format will be selected. When there is no device available in the network, a list of all standard pixel
formats will be shown.
The pixel format list contains all formats signaled by the device and uses the names provided by the device. If it is a device-specific format, it may not be
possible to decode it.
After selecting a format and clicking Ok, the format name will be inserted into the filter's property value.
Firewall Warning
In some situations the device manager might show a warning message about the Windows firewall.
This warning is shown when the application has determined that the Windows firewall will block any type of UDP traffic for the application. For some devices
it is required that incoming UDP traffic is explicitly allowed in the firewall. The lack of this warning does not indicate that the firewall will not block the connection (for
example, when there is a third-party firewall in the system or when the Windows firewall allows traffic in the Private domain but the device is in an unidentified
network).
To get rid of this message, you must enable incoming traffic on all UDP ports in the Windows firewall. For more information about enabling traffic in the
Windows firewall see: Enabling Traffic in Firewall.
Connecting Devices
Connecting a GigE Vision device to a computer means plugging both into the same Ethernet network.
It is recommended that the connection is as simple as possible. To achieve best performance use direct connection with a crossed Ethernet cable
or connect the camera and the computer to the same Ethernet switch (without any other heavy traffic routed through the same switch).
The device and the computer must reside in a single local area network and must be set up for the same subnet.
GigE Vision® is designed for 1 Gb/s networks, but it is also possible to use a 100 Mb/s connection as long as the entire network route has a
uniform speed (some custom device configuration might be required when the device is not able to detect the connection speed automatically). It is
recommended, however, to avoid connecting a device to a network link which is faster than the maximum throughput of the whole network route.
Such configurations require manually setting the device's transmission speed limit.
Firewall Issues
The GigE Vision® protocol produces a specific type of traffic that is not firewall friendly. Typical firewall software is unable to recognize that the video
streaming traffic was initiated by a local application and will block this connection. Aurora Vision's GigE driver attempts to overcome this problem
using a firewall traversal mechanism, but not all devices support it.
It is thus required to enable incoming traffic on all UDP ports for Aurora Vision Studio and Aurora Vision Executor in a firewall on your local computer.
For information how to enable such traffic in Windows Firewall see: Enabling Traffic in Firewall.
Packet Size
The network video stream is divided into packets of a specified size. The packet size is limited by the Ethernet standard, but some network cards support
an extension called jumbo packets that increases the allowed packet size. Because a connection is more efficient when the packet size is bigger, the
application will attempt to negotiate the biggest possible network packet size for the current connection, taking advantage of enabled jumbo packets.
For information how to enable jumbo packets see: Enabling Jumbo Packets
In such a configuration the computer hardware must be able to handle concurrent gigabit streams. Even when separate network cards are
able to receive the network streams, the computer hardware may still have problems transferring the data from the network adapters to the system
memory. Attention must be paid when choosing hardware for such applications. When these requirements are not met, the system may suffer
excessive packet loss, leading to the loss of video stream frames.
Even when the camera framerate is low and the resulting average network throughput is relatively low, the system may still drop packets during
network bursts, when the momentary data transfer exceeds the system capabilities. Such bursts may appear when multiple cameras transmit a
frame at the same time. By default a GigE Vision camera transfers a single frame at the maximum available speed, and lowering the framerate
only increases the gaps between frame transfers:
Although diagnostic tools may report network throughput utilization well below the system limits, it is still possible for short burst transfers to
temporarily exceed those limits, resulting in packet drops. To overcome such problems it is necessary not only to keep camera framerates
below the proper limit, but also to limit the maximum network transfer speed of the device network adapters. Refer to the device documentation for
details on how to limit the network transfer speed in a specific device. Usually this can be achieved by decreasing the value of parameters such as
DeviceLinkThroughputLimit or StreamBytesPerSecond (in bytes per second), or by introducing delays between the network packets by increasing
parameters such as PacketDelay, InterPacketDelay or GevSCPD (measured in internal device timer ticks - they must be calculated individually for each
device using the device timer frequency).
The above requirements are especially important when the cameras are connected to the computer using a single network card and a shared network
switch:
Special care must be taken to ensure that all the cameras connected to the switch (when all are transmitting at once) do not exceed the transfer limits of the
connection between the switch and the computer - both the average transfer (by limiting the framerate) and the temporary burst transfer (by limiting the
network transfer speed). The network switch will attempt to handle burst transfers by storing the packets in its internal buffer and transmitting them
after the burst, but when the amount of data in a burst exceeds the buffer size, network packets will be dropped. Thus the
switch buffer must be large enough to store all camera frames captured at once, or the transmission speed must be limited so that the switch buffer does
not overflow.
It is important to note that the maximum performance of the multi-camera system with shared network switch is limited by the throughput of
the link between the switch and the computer, and usually it will not be possible to achieve the maximum framerate and/or resolution of the
cameras.
A common case of using multiple cameras at once is capturing multiple photos of an object from a single trigger source (with synchronous
triggering):
All the above recommendations must be considered for such a configuration. Because under synchronous triggering all the cameras always
transfer images at the same time, the problem of momentary burst transfers is especially pronounced. Care must be taken to limit the maximum
network transmission speed in the cameras to the system limits and to leave enough time between trigger events for the cameras to finish the
transfer.
In such a situation the selected parameter is connected with one of the categories (slots) described by a selector. In the example in the image above, the
parameter determines whether the physical Line is used to input or output a signal. This device has two lines and each has its own separate
value to choose from. The selector picks which line we want to edit, and the bottommost editor changes its purpose. This means that there are actually
two different Line Mode parameters in the device.
Please note that a selector will not always be displayed above the editor. You must follow the device documentation and search the parameter tree for
selectors and other parameters on which a given parameter depends.
The Device Settings Editor can be used to identify device capabilities and descriptions or to set up a new device. The Device Editor can also be used
when a program is running and the camera is streaming; in this situation changes should be immediately visible in the camera output.
The Settings Editor gives the user unlimited access to the device parameters and, when used improperly, can put the device in an invalid state in
which it becomes inaccessible to applications, or can cause transient errors in program execution.
Known Issues
In this section you will find solutions to known issues that we have come across while testing communication between Aurora Vision products and
different camera models through GigE Vision.
Flir Cameras
There might be problems with image acquisition from Flir cameras through GigE Vision. This is caused by those cameras' implementation of the
GigE Vision standard (regarding image trailer data); as a result, no image can be seen in Aurora Vision Studio (the previews are empty during
program execution).
To resolve this issue a parameter change in the Aurora Vision GenAPI configuration is required. The parameter which should be changed is:
Enable Ignore Gev Image Trailer data (it should be set to True).
You can change this parameter in the GigE Vision Application Settings Tree (Tools » Manage GigE Vision Devices... » Tools » Application
Transport Settings). Remember to revert this change before working with different devices, as it can prevent some cameras from working properly
in some modes.
10. Machine Vision Guide
Table of content:
Image Processing
Blob Analysis
1D Edge Detection
1D Edge Detection – Subpixel Precision
Shape Fitting
Template Matching
Using Local Coordinate Systems
Camera Calibration and World Coordinates
Golden Template
Image Processing
Introduction
There are two major goals of Image Processing techniques:
1. To enhance an image for better human perception
2. To make the information it contains more salient or easier to extract
It should be kept in mind that in the context of computer vision only the second point is important. Preparing images for human perception is not part
of computer vision; it is only part of information visualization. In typical machine vision applications this comes only at the end of the program and
usually does not pose any problem.
The first and the most important advice for machine vision engineers is: avoid image transformations designed for human perception when
the goal is to extract information. Most notable examples of transformations that are not only not interesting, but can even be highly disruptive, are:
JPEG compression (creates artifacts not visible to the human eye, but disruptive for algorithms)
CIE Lab and CIE XYZ color spaces (specifically designed for human perception)
Edge enhancement filters (which improve only the "apparent sharpness")
Image thresholding performed before edge detection (precludes sub-pixel precision)
Examples of image processing operations that can really improve information extraction are:
Gaussian image smoothing (removes noise, while preserving information about local features)
Image morphology (can remove unwanted details)
Gradient and high-pass filters (highlight information about object contours)
Basic color space transformations like HSV (separate information about chromaticity and brightness)
Pixel-by-pixel image composition (e.g. can highlight image differences in relation to a reference image)
Regions of Interest
The image processing tools provided by Aurora Vision have a special inRoi input (of the Region data type) that can limit the spatial scope of the operation.
The region can be of any shape.
An input image and the inRoi (left); the result of an operation performed within inRoi (right).
Remarks:
The output image will be black outside of the inRoi region.
To obtain an image that has its pixels modified in inRoi and copied outside of it, one can use the ComposeImages filter.
The default value for inRoi is Auto and causes the entire image to be processed.
Although inRoi can be used to significantly speed up processing, it should be used with care. The performance gain may be far from
proportional to the inRoi area, especially in comparison to processing the entire image (Auto). This is due to the fact that in many cases more
SSE optimizations are possible when inRoi is not used.
Some filters have a second region of interest called inSourceRoi. While inRoi defines the range of pixels that will be written in the output image, the
inSourceRoi parameter defines the range of pixels that can be read from the input image.
Toolset
Image Combinators
The filters from the Image Combinators category take two images and perform a pixel-by-pixel transformation into a single image. This can be used
for example to highlight differences between images or to normalize brightness – as in the example below:
Left: input image with high reflections. Center: image of the reflections (calibration image). Right: the result of applying DivideImages with inScale = 128 (inRoi was used).
Image Smoothing
The main purpose of the image smoothing filters (located in the Image Local Transforms category) is removal of noise. There are several different
ways to perform this task with different trade-offs. On the example below three methods are presented:
1. Mean smoothing – simply takes the average pixel value from a rectangular neighborhood; it is the fastest method.
2. Median smoothing – simply takes the median pixel value from a rectangular neighborhood; preserves edges, but is relatively slow.
3. Gauss smoothing – computes a weighted average of the pixel values with Gaussian coefficients as the weights; its advantage is isotropy and
reasonable speed for small kernels.
Left to right: input image with some noise; results of applying SmoothImage_Mean, SmoothImage_Gauss and SmoothImage_Median.
Image Morphology
Basic morphological operators – DilateImage and ErodeImage – transform the input image by choosing maximum or minimum pixel values from a
local neighborhood. Other morphological operators combine these two basic operations to perform more complex tasks. Here is an example of
using the OpenImage filter to remove salt and pepper noise from an image:
Gradient Analysis
An image gradient is a vector describing direction and magnitude (strength) of local brightness changes. Gradients are used inside of many
computer vision tools – for example in object contour detection, edge-based template matching and in barcode and DataMatrix detection.
Available filters:
GradientImage – produces a 2-channel image of signed values; each pixel denotes a gradient vector.
GradientMagnitudeImage – produces a single channel image of gradient magnitudes, i.e. the lengths of the vectors (or their approximations).
GradientDirAndPresenceImage – produces a single channel image of gradient directions mapped into the range from 1 to 255; 0 means no
significant gradient.
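Conceptually, a gradient image can be computed with simple central differences, as in the sketch below (an illustration of the idea, not the implementation used by the filters above):

#include <cmath>
#include <vector>

// Computes gradient magnitudes with central differences for a grayscale image
// stored row-major in 'pixels' (width x height); border pixels are left at 0.
std::vector<float> GradientMagnitude(const std::vector<float>& pixels,
                                     int width, int height)
{
    std::vector<float> magnitude(pixels.size(), 0.0f);
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x)
        {
            float dx = pixels[y * width + (x + 1)] - pixels[y * width + (x - 1)];
            float dy = pixels[(y + 1) * width + x] - pixels[(y - 1) * width + x];
            magnitude[y * width + x] = std::sqrt(dx * dx + dy * dy); // vector length
        }
    return magnitude;
}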
Spatial Transforms
Spatial transforms modify an image by changing locations, but not values, of pixels. Here are sample results of some of the most basic operations:
Results of RotateImage, DownsampleImage, MirrorImage, ShearImage, TransposeImage, TranslateImage, CropImage, and UncropImage applied to the result of CropImage.
There are also interesting spatial transform tools that allow a two-dimensional vision problem to be transformed into a 1.5-dimensional one, which can be
very useful for further processing:
Result of ImageAlongPath.
Example of remapping of a spherical object using CreateSphereMap and RemapImage. Image before and after remapping.
Furthermore custom spatial maps can be created with ConvertMatrixMapsToSpatialMap.
An example of custom image transform created with ConvertMatrixMapsToSpatialMap. Image before and after remapping.
Image Thresholding
The task of Image Thresholding filters is to classify image pixel values as foreground (white) or background (black). The basic filters ThresholdImage
and ThresholdToRegion use just a simple range of pixel values – a pixel value is classified as foreground iff it belongs to the range. The
ThresholdImage filter just transforms an image into another image, whereas the ThresholdToRegion filter creates a region corresponding to the
foreground pixels. Other available filters allow more advanced classification:
ThresholdImage_Dynamic and ThresholdToRegion_Dynamic use average local brightness to compensate global illumination variations.
ThresholdImage_RGB and ThresholdToRegion_RGB select pixel values matching a range defined in the RGB (the standard) color space.
ThresholdImage_HSx and ThresholdToRegion_HSx select pixel values matching a range defined in the HSx color space.
ThresholdImage_Relative and ThresholdToRegion_Relative allow to use a different threshold value at each pixel location.
Left: input image with uneven light. Center: result of ThresholdImage – the bars cannot be recognized. Right: result of ThresholdImage_Dynamic – the bars are correct.
There is also an additional filter SelectThresholdValue which implements a number of methods for automatic threshold value selection. It should,
however, be used with much care, because there is no universal method that works in all cases and even a method that works well for a particular
case might fail in special cases.
Left: input image with uneven light. Center: result of ColorDistanceImage for the red color with inChromaAmount = 1.0; dark areas correspond to low color distance. Right: the result of thresholding reveals the location of the red dots on the globe.
Image Features
Image Features is a category of image processing tools that are already very close to computer vision – they transform pixel information into simple
higher-level data structures. The most notable examples are: ImageLocalMaxima, which finds the points at which the brightness is locally the highest;
ImageProjection, which creates a profile from sums of pixel values in columns or in rows; and ImageAverage, which averages pixel values in the entire
region of interest. Here is an example application:
Left: digit locations extracted by applying SmoothImage_Gauss and ImageLocalMaxima. Right: profile of the vertical projection revealing the regions of digits and the boundaries between them.
Blob Analysis
Introduction
Blob Analysis is a fundamental technique of machine vision based on the analysis of consistent image
regions. As such, it is the tool of choice for applications in which the objects being inspected are clearly
discernible from the background. The diverse set of Blob Analysis methods allows tailored
solutions to be created for a wide range of visual inspection problems.
The main advantages of this technique include high flexibility and excellent performance. Its limitations
are: the requirement of a clear background-foreground relation (see Template Matching for an alternative)
and pixel precision (see 1D Edge Detection for an alternative).
Concept
Let us begin by defining the notions of region and blob.
Region is any subset of image pixels. In Aurora Vision Studio regions are represented using the Region data type.
Blob is a connected region. In Aurora Vision Studio blobs (being a special case of regions) are represented using the same Region data type.
They can be obtained from any region using a single SplitRegionIntoBlobs filter or (less frequently) directly from an image using image
segmentation filters from the Image Analysis category.
An example image. Region of pixels darker than 128. Decomposition of the region into an array of blobs.
The basic scenario of the Blob Analysis solution consists of the following steps:
1. Extraction - in the initial step, one of the Image Thresholding techniques is applied to obtain a region corresponding to the objects (or single
object) being inspected.
2. Refinement - the extracted region is often flawed by noise of various kinds (e.g. due to inconsistent lighting or poor image quality). In the
Refinement step the region is enhanced using region transformation techniques.
3. Analysis - in the final step the refined region is subject to measurements and the final results are computed. If the region represents multiple
objects, it is split into individual blobs, each of which is inspected separately.
Examples
The following examples illustrate the general schema of Blob Analysis algorithms. Each of the techniques represented in the examples (thresholding,
morphology, calculation of region features, etc.) is inspected in detail in later sections.
Rubber Band
Initial image
Extraction
In this case each of the steps: Extraction, Refinement and Analysis is represented by
a single filter.
Extraction - to obtain a region corresponding to the red band a Color-based
Thresholding technique is applied. The ThresholdToRegion_HSx filter is capable of
finding the region of pixels of given color characteristics - in this case it is targeted to
detect red pixels.
Refinement
Refinement - the problem of filling the gaps in the extracted region is a standard one.
Classic solutions for it are the region morphology techniques. Here, the CloseRegion
filter is used to fill the gaps.
Analysis - finally, a single RegionArea filter is used to compute the area of the obtained
region.
Results
Mounts
In this example a picture of a set of mounts is inspected to identify the damaged ones.
Input image
Extraction - as the lighting in the image is uniform, the objects are consistently dark
and the background is consistently bright, the extraction of the region corresponding to
the objects is a simple task. A basic ThresholdToRegion filter does the job, and does it
so well that no Refinement phase is needed in this example.
Extraction
Analysis - as we need to analyze each of the blobs separately, we start by applying
the SplitRegionIntoBlobs filter to the extracted region.
To distinguish the bad parts from the correct parts we need to pick a property of a
region (e.g. area, circularity, etc.) that we expect to be high for the good parts and low
for the bad parts (or conversely). Here, the area would do, but we will pick a somewhat
more sophisticated rectangularity feature, which will compute the similarity-to-
rectangle factor for each of the blobs.
Analysis
Once we have chosen the rectangularity feature of the blobs, all that needs to be done
is to feed the regions to be classified to the ClassifyRegions filter (and to set its
inMinimum parameter value). The blobs of too low rectangularity are available at the
outRejected output of the classifying filter.
Results
Extraction
There are two techniques that allow regions to be extracted from an image:
Image Thresholding - commonly used methods that compute a region as the set of pixels that meet a certain condition dependent on the
specific operator (e.g. the region of pixels brighter than a given value, or brighter than the average brightness in their neighborhood). Note that the
resulting data is always a single region, possibly representing numerous objects.
Image Segmentation - a more specialized set of methods that compute a set of blobs corresponding to areas in the image that meet a certain
condition. The resulting data is always an array of connected regions (blobs).
Thresholding
Image Thresholding techniques are preferred for common applications (even those in which a set of objects is inspected rather than a single object)
because of their simplicity and excellent performance. In Aurora Vision Studio there are six filters for image-to-region thresholding, each of them
implementing a different thresholding method.
Brightness-based (basic)
Brightness-based (additional)
Color-based
Classic Thresholding
ThresholdToRegion simply selects the image pixels of the specified brightness. It should be considered a basic tool and applied whenever the
intensity of the inspected object is constant, consistent and clearly different from the intensity of the background.
Dynamic Thresholding
Inconsistent brightness of the objects being inspected is a common problem, usually caused by imperfections of the lighting setup. As we can see in the example below, it is often the case that the objects in one part of the image actually have the same brightness as the background in another part of the image. In such a case it is not possible to use the basic ThresholdToRegion filter, and ThresholdToRegion_Dynamic should be considered instead. The latter selects image pixels that are locally bright/dark. Specifically, the filter selects the image pixels of a given relative local brightness, defined as the difference between the pixel intensity and the average intensity in its neighborhood.
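For readers who want to experiment outside the product, a rough OpenCV analogue of dynamic thresholding (an assumption, not the filter's actual implementation) is shown below:

```python
import cv2

# Select pixels that are locally dark: each pixel is compared with the mean
# of its neighborhood, which makes the result robust to uneven lighting.
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
region = cv2.adaptiveThreshold(
    image, 255,
    cv2.ADAPTIVE_THRESH_MEAN_C,  # local mean as the reference value
    cv2.THRESH_BINARY_INV,       # select pixels below the local mean minus C
    blockSize=51,                # neighborhood size (must be odd)
    C=10)                        # required relative local darkness
```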
Color-based Thresholding
When inspection is conducted on color images, it may be the case that, despite a significant difference in color, the brightness of the objects is actually the same as the brightness of their neighborhood. In such a case it is advisable to use the Color-based Thresholding filters: ThresholdToRegion_RGB and ThresholdToRegion_HSx. The suffix denotes the color space in which the desired pixel characteristic is defined, not the space used in the image representation. In other words, both of these filters can be used to process a standard RGB color image.
An example image; the mono equivalent of the image depicting the brightness of its pixels; and the result of the color-based thresholding targeted at red pixels.
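A hedged OpenCV sketch of such color-based selection of red pixels (illustrative only; the hue ranges and file name are assumptions):

```python
import cv2

# Threshold in the HSV space: hue encodes the color itself, so red pixels can
# be selected even when their brightness matches the background.
image = cv2.imread("fabric.png")  # hypothetical BGR input
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# In OpenCV hue spans 0-179 and red wraps around 0, so two ranges are combined.
low_red = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255))
high_red = cv2.inRange(hsv, (170, 100, 80), (179, 255, 255))
red_region = cv2.bitwise_or(low_red, high_red)
```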
Refinement
Region Morphology
Region Morphology is a classic technique of region transformation. The core concept of this toolset is the use of a structuring element, also known as the kernel. The kernel is a relatively small shape that is repeatedly centered at each pixel within the dimensions of the region being transformed. Every such pixel is either added to the resulting region or not, depending on an operation-specific condition on the minimum number of kernel pixels that have to overlap with actual input region pixels (at the given position of the kernel). See the description of Dilation for an example.
Expanding operations: Dilation (basic), Closing (composite). Reducing operations: Erosion (basic), Opening (composite).
Dilation is one of the two basic morphological transformations. Here, each pixel P within the dimensions of the region being transformed is added to the resulting region if and only if the structuring element centered at P overlaps with at least one pixel belonging to the input region. Note that for a circular kernel such a transformation is equivalent to a uniform expansion of the region in every direction.
Erosion is the dual operation of Dilation. Here, each pixel P within the dimensions of the region being transformed is added to the resulting region if and only if the structuring element centered at P is fully contained within the input region pixels. Note that for a circular kernel such a transformation is equivalent to a uniform reduction of the region in every direction.
The actual power of Region Morphology lies in its composite operators - Closing and Opening. As we may have noticed, during the blind region expansion performed by the Dilation operator, the gaps in the transformed region are filled in. Unfortunately, the expanded region no longer corresponds to the objects being inspected. However, we can apply the Erosion operator to bring the expanded region back to its original boundaries. The key point is that the gaps that were completely filled during the dilation stay filled after the erosion. The operation of applying Erosion to the result of Dilation of a region is called Closing, and it is the tool of choice for the task of filling gaps in the extracted region.
Opening is the dual operation of Closing. Here, the region being transformed is first eroded and then dilated. The resulting region preserves the form of the initial region, with the exception of thin/small parts, which are removed during the process. Therefore, Opening is a tool for removing thin or outlying parts from a region. Note that in the example below, Opening does the - otherwise relatively complicated - job of finding the segment of the rubber band of excessive width.
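The two composite operators can be sketched in a few lines of OpenCV (again an illustration, not the Region Morphology implementation; the mask file is hypothetical):

```python
import cv2

# Closing = dilation followed by erosion: fills gaps smaller than the kernel.
# Opening = erosion followed by dilation: removes parts thinner than the kernel.
region = cv2.imread("region_mask.png", cv2.IMREAD_GRAYSCALE)  # binary mask
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))  # circular kernel

closed = cv2.morphologyEx(region, cv2.MORPH_CLOSE, kernel)
opened = cv2.morphologyEx(region, cv2.MORPH_OPEN, kernel)
```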
Other Refinement Methods
Analysis
Once we obtain the region that corresponds to the object or the objects being inspected, we may commence the analysis - that is, extract the
information we are interested in.
Region Features
Aurora Vision Studio allows computing a wide range of numeric (e.g. area) and non-numeric (e.g. bounding circle) region features. Calculation of the measures describing the obtained region is often the very aim of applying blob analysis in the first place. If we are to check whether a rectangular packaging box is deformed, we may be interested in calculating the rectangularity factor of the packaging region. If we are to check whether the chocolate coating on a biscuit is broad enough, we may want to know the area of the coating region.
It is important to remember that when the obtained region corresponds to multiple image objects (and we want to inspect each of them separately), we should apply the SplitRegionIntoBlobs filter before calculating the features.
Numeric Features
Each of the following filters computes a number that expresses a specific property of the region shape.
Annotations in brackets indicate the range of the resulting values.
Similarity to own convex hull (0.0 - 1.0)
Similarity to a rectangle (0.0 - 1.0)
Number of holes in the region (0 - ∞)
Orientation of the main region axis (0.0 - 180.0)
Non-numeric Features
Each of the following filters computes an object related to the shape of the region. Note that the primitives extracted using these filters can be made the subject of further analysis. For instance, we can extract the holes of a region using the RegionHoles filter and then measure their areas using the RegionArea filter.
Annotations in brackets indicate Aurora Vision Studio's type of the result.
Smallest axis-aligned rectangle containing the region (Box)
Smallest circle containing the region (Circle2D)
Smallest any-orientation rectangle containing the region (Rectangle2D)
Boundaries of the region (PathArray)
Longest segment connecting two points inside the region (Segment2D)
Array of blobs representing gaps in the region (RegionArray)
Case Studies
Capsules
In this example we inspect a set of washing machine capsules on a conveyor line. Our aim is to identify the deformed capsules.
We will proceed in two steps: we will commence by designing a simple program that, given a picture of the conveyor line, will be able to identify the region corresponding to the capsule(s) in the picture. In the second step we will use this program as a building block of the complete solution.
FindRegion Routine
In this section we will develop a program that will be responsible for the Extraction and Refinement phases of
the final solution. For brevity of presentation in this part we will limit the input image to its initial segment.
Initial image
After a brief inspection of the input image we may note that the task at hand will not be trivial - the average brightness of the capsule body is similar to the intensity of the background. On the other hand, the border of the capsule is consistently darker than the background. As it is the border of the object that bears the significant information about its shape, we may use the basic ThresholdToRegion filter to extract the darkest pixels of the image, with the intention of filling the extracted capsule border during further refinement.
The extracted region certainly requires such refinement - actually, there are two issues that need to be addressed. We need to fill the shape of the capsule and eliminate the thin horizontal stripes corresponding to the elements of the conveyor line setup. Fortunately, there are fairly straightforward solutions for both of these problems.
FillRegionHoles will extend the region to include all pixels enclosed by the present region pixels. After the region is filled, all that remains is the removal of the thin conveyor lines using the classic OpenRegion filter.
Our routine for the Extraction and Refinement of the region is ready. As it constitutes a continuous block of filters performing a well-defined task, it is advisable to encapsulate the routine inside a macrofilter to enhance the readability of the soon-to-be-growing program.
Complete Solution
Our program right now is capable of extracting the region that directly corresponds to the capsules visible in the image. What remains is to inspect each capsule and classify it as either correct or deformed.
As we want to analyze each capsule separately, we should start with the decomposition of the extracted region into an array of connected components (blobs). This common operation can be performed using the straightforward SplitRegionIntoBlobs filter.
We are approaching the crucial part of our solution - how are we going to distinguish correct capsules from deformed ones? At this stage it is advisable to have a look at the summary of numeric region features provided in the Analysis section. If we could find a numeric region property that is correlated with the nature of the problem at hand (e.g. it takes low values for correct capsules and high values for deformed ones, or conversely), we would be nearly done.
Rectangularity of a shape is defined as the ratio between its area and the area of its smallest enclosing rectangle - the higher the value, the more the shape of the object resembles a rectangle. As the shape of a correct capsule is almost rectangular (it is a rectangle with rounded corners) and clearly more rectangular than the shape of a deformed capsule, we may use the rectangularity feature to classify the capsules.
Having selected the numeric feature that will be used for the classification, we are ready to add the ClassifyRegions filter to our program and feed it with data. We pass the array of capsule blobs to its inRegions input and select Rectangularity on the inFeature input. After brief interactive experimentation with the inMinimum threshold we may observe that setting the minimum rectangularity to 0.95 allows proper discrimination of correct (available at outAccepted) and deformed (outRejected) capsule blobs.
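The whole Extraction-Refinement-Analysis chain of this case study can be mirrored outside the product roughly as below (OpenCV stand-ins for ThresholdToRegion, FillRegionHoles, OpenRegion, SplitRegionIntoBlobs and ClassifyRegions; the thresholds and the file name are assumptions):

```python
import cv2
import numpy as np

image = cv2.imread("capsules.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Extraction: the darkest pixels, i.e. the capsule borders.
region = cv2.inRange(image, 0, 60)

# Refinement: fill the capsule interiors, then remove thin conveyor stripes.
contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
filled = np.zeros_like(region)
cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
refined = cv2.morphologyEx(filled, cv2.MORPH_OPEN, kernel)

# Analysis: split into blobs and classify each one by its rectangularity.
blobs, _ = cv2.findContours(refined, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
accepted, rejected = [], []
for blob in blobs:
    area = cv2.contourArea(blob)
    (_, (w, h), _) = cv2.minAreaRect(blob)  # smallest enclosing rectangle
    rectangularity = area / (w * h) if w * h > 0 else 0.0
    (accepted if rectangularity >= 0.95 else rejected).append(blob)
```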
1D Edge Detection
Introduction
1D Edge Detection (also called 1D Measurement) is a classic technique of machine vision in which information about the image is extracted from one-dimensional profiles of image brightness. As we will see, it can be used for measurements as well as for positioning of the inspected objects.
The main advantages of this technique are sub-pixel precision and high performance.
Concept
The 1D Edge Detection technique is based on an observation that any edge in the image corresponds to a rapid brightness change in the direction
perpendicular to that edge. Therefore, to detect the image edges we can scan the image along a path and look for the places of significant change of
intensity in the extracted brightness profile.
The computation proceeds in the following steps:
1. Profile extraction – firstly the profile of brightness along the given path is extracted. Usually the profile is smoothed to remove the noise.
2. Edge extraction – the points of significant change of profile brightness are identified as edge points – points where perpendicular edges
intersect the scan line.
3. Post-processing – the final results are computed using one of the available methods. For instance ScanSingleEdge filter will select and
return the strongest of the extracted edges, while ScanMultipleEdges filter will return all of them.
Example
The image is scanned along the path, and the brightness profile is extracted and smoothed.
The brightness profile is then differentiated. Notice the four peaks of the profile derivative, which correspond to the four prominent image edges intersecting the scan line. Finally, the peaks stronger than some selected value (here the minimal strength is set to 5) are identified as edge points.
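The three steps of the concept can be prototyped with NumPy/SciPy as follows (a sketch under stated assumptions: a horizontal scan line on a synthetic image; the real filters support arbitrary paths and scan widths):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, map_coordinates

# Synthetic test image with one vertical edge at x = 200.
image = np.zeros((240, 480), dtype=np.uint8)
image[:, 200:] = 200

# 1. Profile extraction along a horizontal scan line, then Gaussian smoothing.
y, xs = 120, np.arange(50, 400)
profile = map_coordinates(image.astype(float), [np.full_like(xs, y), xs])
profile = gaussian_filter1d(profile, sigma=2.0)  # inSmoothingStdDev analogue

# 2. Edge extraction: local extrema of the derivative above a magnitude threshold.
derivative = np.diff(profile)
min_magnitude = 5.0  # inMinMagnitude analogue
edges = [i for i in range(1, len(derivative) - 1)
         if abs(derivative[i]) >= min_magnitude
         and abs(derivative[i]) >= abs(derivative[i - 1])
         and abs(derivative[i]) >= abs(derivative[i + 1])]

# 3. Post-processing: keep the strongest edge (a ScanSingleEdge analogue).
strongest = max(edges, key=lambda i: abs(derivative[i])) if edges else None
```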
Filter Toolset
The basic toolset for 1D Edge Detection consists of 9 filters, each of which runs a single scan along a given path (inScanPath). The filters differ in the structure of interest (edges / ridges / stripes (edge pairs)) and in its cardinality (one / any fixed number / unknown number).
Edges: ScanSingleEdge (single result), ScanMultipleEdges (multiple results), ScanExactlyNEdges (fixed number of results).
Stripes: ScanSingleStripe (single result), ScanMultipleStripes (multiple results), ScanExactlyNStripes (fixed number of results).
Ridges: ScanSingleRidge (single result), ScanMultipleRidges (multiple results), ScanExactlyNRidges (fixed number of results).
Note that in Aurora Vision Library there is the CreateScanMap function, which has to be called before any other 1D Edge Detection function is used. This special function creates a scan map, which is then passed as an input to the other functions, considerably speeding up the computations.
Parameters
Profile Extraction
In each of the nine filters the brightness profile is extracted in exactly the same way. The stripe of pixels along inScanPath, of width inScanWidth, is traversed, and the pixel values across the path are accumulated to form a one-dimensional profile. In the picture on the right the stripe of processed pixels is marked in orange, while inScanPath is marked in red.
The extracted profile is smoothed using Gaussian smoothing with a standard deviation of inSmoothingStdDev. This parameter is important for the robustness of the computation - we should pick a value that is high enough to eliminate noise that could introduce false or irrelevant extrema to the profile derivative, but low enough to preserve the actual edges we are to detect.
The inSmoothingStdDev parameter should be adjusted through interactive experimentation using the outBrightnessProfile output, as demonstrated below.
Too low inSmoothingStdDev - too much noise. Appropriate inSmoothingStdDev - low noise, significant edges are preserved. Too high inSmoothingStdDev - significant edges are attenuated.
Edge Extraction
After the brightness profile is extracted and refined, the derivative of the profile is computed and its local extrema of magnitude at least inMinMagnitude are identified as edge points. The inMinMagnitude parameter should be adjusted using the outResponseProfile output.
The picture on the right depicts an example outResponseProfile profile. In this case the significant extrema vary in magnitude from 11 to 13, while the magnitude of the other extrema is lower than 3. Therefore, any inMinMagnitude value in the range (4, 10) would be appropriate.
Edge Transition
The filters being discussed are capable of filtering the edges depending on the kind of transition they represent - that is, depending on whether the intensity changes from bright to dark, or from dark to bright. The filters detecting individual edges apply the same condition, defined using the inTransition parameter, to each edge (the possible choices are bright-to-dark, dark-to-bright and any).
Stripe Intensity
The filters detecting stripes expect the edges to alternate in their characteristics. The parameter inIntensity defines whether each stripe should
bound the area that is brighter, or darker than the surrounding space.
Subpixel Precision
Sample edge profile (red) and its derivative (green). Please note that the derivative is shifted by 0.5.
The steepest segment is between points 4.0 and 5.0, which corresponds to the maximum of the derivative (green) at 4.5. Without subpixel precision the edge would be found at this point.
It is, however, possible to use the information about the values of the neighbouring profile points to extract the edge location with higher precision. The simplest method is to fit a parabola to three consecutive points of the derivative profile:
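A minimal sketch of that refinement (the standard three-point parabola interpolation; the product's exact formula is not quoted in this manual):

```python
def subpixel_offset(y_prev, y_peak, y_next):
    """Fit a parabola to three consecutive derivative samples and return the
    sub-pixel offset of its vertex from the middle sample."""
    denom = y_prev - 2.0 * y_peak + y_next
    return 0.5 * (y_prev - y_next) / denom if denom != 0 else 0.0

# Hypothetical derivative samples around the peak found at position 4.5:
edge_position = 4.5 + subpixel_offset(9.0, 13.0, 11.0)  # about 4.5 + 0.17
```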
Shape Fitting
Introduction
Shape Fitting is a machine vision technique that allows for precise detection of objects whose
shapes and rough positions are known in advance. It is most often used in measurement
applications for establishing line segments, circles, arcs and paths defining the shape that is to be
measured.
As this technique is derived from 1D Edge Detection, its key advantages are similar – including sub-
pixel precision and high performance.
Concept
The main idea behind Shape Fitting is that a continuous object (such as a circle, an arc or a segment) can be determined from a finite set of points belonging to it. These points are computed by means of appropriate 1D Edge Detection filters and are then combined into a single higher-level result.
Thus, a single Shape Fitting filter's work consists of the following steps:
1. Scan segment preparation – a series of scan segments is prepared. The number, length and orientations of the segments are computed from the filter's parameters.
2. Point extraction – points that should belong to the object being fitted are extracted by (internally) running a proper 1D Edge Detection filter (e.g. ScanSingleEdge in FitCircleToEdges) along each of the scan segments as the scan path.
3. Object fitting – the final result is computed with the use of a technique that fits an object to a set of points; in this step a filter from Geometry 2D Fitting is used internally (e.g. FitCircleToPoints in FitCircleToEdges), as sketched below. The exception to this rule is path fitting: no Geometry 2D Fitting filter is needed there, because the found points themselves serve as the characteristic points of the output path.
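As a sketch of the last step (an algebraic least-squares circle fit; the actual FitCircleToPoints filter may use a different method):

```python
import numpy as np

def fit_circle(points):
    """Least-squares circle fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c,
    whose solution gives center (a, b) and radius sqrt(c + a^2 + b^2)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return (a, b), np.sqrt(c + a * a + b * b)

# Points found by the 1D scans (hypothetical): a circle of radius 10 at (0, 0).
center, radius = fit_circle([(10, 0), (0, 10), (-10, 0), (0, -10)])
```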
The scan segments are created according to the fitting field and other parameters (e.g. inScanCount). ScanSingleEdge (or another proper 1D Edge Detection filter) is performed. A segment is fitted to the obtained points.
Toolset
The whole toolset for Shape Fitting consists of several filters. The filters differ in the object being fitted (a circle, an arc, a line segment or a path) and in the 1D Edge Detection structures extracted along the scan segments (edges, ridges or stripes, all of them clearly discernible in the input image).
Parameters
Because of the internal use of 1D Edge Detection filters and Geometry 2D Fitting filters, all parameters known from them are also present in the interfaces of the Shape Fitting filters.
Besides these, there are also a few parameters specific to shape fitting. The inScanCount parameter controls the number of scan segments. However, not all of the scans have to succeed in order to regard the whole fitting process as successful. The inMaxIncompleteness parameter determines what fraction of the scans may fail.
FitCircleToEdges performed on the sample image with inMaxIncompleteness = 0.25. Although two scans have ended in failure, the circle has been
fitted successfully.
The path fitting filters have some additional parameters, which help to control the shape of the output path. These parameters are:
inMaxDeviationDelta – defines the maximal allowed difference between the deviations of consecutive output path points from the corresponding input path points; if the difference between deviations is greater, the point is considered not found at all.
inMaxInterpolationLength – if some of the scans fail, or if some of the found points are classified as wrong according to other control parameters (e.g. inMaxDeviationDelta), the corresponding output path points are interpolated from the points in their nearest vicinity. No more than inMaxInterpolationLength consecutive points can be interpolated; if there exists a longer series of points that would have to be interpolated, the fitting is considered unsuccessful. The exception to this behavior are points which were not found at the ends of the input path - those are not part of the result at all.
FitPathToEdges performed on the sample image with inMaxDeviationDelta = 2 and inMaxInterpolationLength = 3. Blue points are the points that were interpolated. If the inMaxInterpolationLength value had been less than 2, the fitting would have failed.
Template Matching
Introduction
Template Matching is a high-level machine vision technique that identifies the parts of an image that match a predefined template. Advanced template matching algorithms allow finding occurrences of the template regardless of their orientation and local brightness.
Template Matching techniques are flexible and relatively straightforward to use, which makes them one of the most popular methods of object localization. Their applicability is limited mostly by the available computational power, as identification of big and complex templates can be time-consuming.
Concept
Template Matching techniques are expected to address the following need: provided a reference image of an object (the template image) and an image to be inspected (the input image), we want to identify all input image locations at which the object from the template image is present. Depending on the specific problem at hand, we may (or may not) want to identify the rotated or scaled occurrences.
We will start with a demonstration of a naive Template Matching method, which is insufficient for real-life applications, but illustrates the core concept from which the actual Template Matching algorithms stem. After that we will explain how this method is enhanced and extended in the advanced Grayscale-based Matching and Edge-based Matching routines.
Naive Template Matching
Imagine that we are going to inspect an image of a plug and our goal is to find its pins. We are provided with a template image representing the
reference object we are looking for and the input image to be inspected.
We will perform the actual search in a rather straightforward way – we will position the template over the image at every possible location, and each
time we will compute some numeric measure of similarity between the template and the image segment it currently overlaps with. Finally we will
identify the positions that yield the best similarity measures as the probable template occurrences.
Image Correlation
One of the subproblems that occur in the specification above is calculating the similarity measure of the aligned template image and the overlapped
segment of the input image, which is equivalent to calculating a similarity measure of two images of equal dimensions. This is a classical task, and a
numeric measure of image similarity is usually called image correlation.
Cross-Correlation
The fundamental method of calculating the image correlation is the so-called cross-correlation, which essentially is a simple sum of pairwise multiplications of corresponding pixel values of the images.
Example cross-correlation values for three image pairs: 19404780, 23316890, 24715810.
Though we may notice that the correlation value indeed seems to reflect the similarity of the images being compared, the cross-correlation method is far from being robust. Its main drawback is that it is biased by changes in the global brightness of the images - brightening an image may sky-rocket its cross-correlation with another image, even if the second image is not at all similar.
Normalized Cross-Correlation
Normalized cross-correlation is an enhanced version of the classic cross-correlation method that introduces two improvements over the original one:
The results are invariant to global brightness changes, i.e. consistent brightening or darkening of either image has no effect on the result (this is accomplished by subtracting the mean image brightness from each pixel value).
The final correlation value is scaled to the [-1, 1] range, so that the NCC of two identical images equals 1.0, while the NCC of an image and its negation equals -1.0.
Example NCC values for three image pairs: -0.417, 0.553, 0.844.
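One common formulation of this measure for two equally sized images I_1 and I_2 (a standard textbook definition; the filters' exact normalization is not quoted here) is:

```latex
\mathrm{NCC}(I_1, I_2) \;=\; \frac{1}{n}\sum_{p}
\frac{\bigl(I_1(p)-\mu_1\bigr)\bigl(I_2(p)-\mu_2\bigr)}{\sigma_1\,\sigma_2}
```

where the sum runs over all n pixel positions p, and \mu_i, \sigma_i denote the mean and standard deviation of the pixel values of image I_i.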
Identification of Matches
All that needs to be done at this point is to decide which points of the template correlation image are good enough to be considered actual matches.
Usually we identify as matches the positions that (simultaneously) represent a template correlation:
stronger than some predefined threshold value (e.g. stronger than 0.5),
locally maximal (stronger than the template correlation in the neighboring pixels).
Areas of template correlation above 0.75; points of locally maximal template correlation; points of locally maximal template correlation above 0.75.
Summary
It is quite easy to express the described method in Aurora Vision Studio - we will need just two built-in filters. We compute the template correlation image using the ImageCorrelationImage filter, and then identify the matches using ImageLocalMaxima - we just need to set the inMinValue parameter, which will cut off the weak local maxima from the results, as discussed in the previous section.
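The same two-filter pipeline can be approximated with OpenCV for experimentation (an illustration only; the file names are hypothetical and TM_CCOEFF_NORMED stands in for the NCC-style correlation):

```python
import cv2
import numpy as np

image = cv2.imread("plug.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input
template = cv2.imread("pin.png", cv2.IMREAD_GRAYSCALE)  # hypothetical template

# Template correlation image (ImageCorrelationImage analogue).
correlation = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)

# Matches: above a threshold and locally maximal in a 3x3 neighborhood
# (ImageLocalMaxima analogue with inMinValue = 0.75).
local_max = correlation == cv2.dilate(correlation, np.ones((3, 3), np.uint8))
matches = np.argwhere((correlation >= 0.75) & local_max)  # rows of (y, x)
```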
Though the introduced technique was sufficient to solve the problem being considered, we may
notice its important drawbacks:
Template occurrences have to preserve the orientation of the reference template image.
The method is inefficient, as calculating the template correlation image for medium to large images is time consuming.
In the next sections we will discuss how these issues are being addressed in advanced template matching techniques: Grayscale-based Matching
and Edge-based Matching.
Grayscale-based Matching, Edge-based Matching
Grayscale-based Matching is an advanced Template Matching algorithm that extends the original idea of correlation-based template detection
enhancing its efficiency and allowing to search for template occurrences regardless of its orientation. Edge-based Matching enhances this method
even more by limiting the computation to the object edge-areas.
In this section we will describe the intrinsic details of both algorithms. In the next section (Filter toolset) we will explain how to use these techniques
in Aurora Vision Studio.
Image Pyramid
Image Pyramid is a series of images, each of which is a result of downsampling (scaling down, in this case by a factor of two) of the previous element.
Pyramid Processing
Image pyramids can be applied to enhance the efficiency of the correlation-based template detection. The important observation is that the template
depicted in the reference image usually is still discernible after significant downsampling of the image (though, naturally, fine details are lost in the
process). Therefore we can identify match candidates in the downsampled (and therefore much faster to process) image on the highest level of our
pyramid, and then repeat the search on the lower levels of the pyramid, each time considering only the template positions that scored high on the
previous level.
At each level of the pyramid we will need appropriately downsampled picture of the reference template, i.e. both input image pyramid and template
image pyramid should be computed.
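A simplified coarse-to-fine search might look as follows (a sketch under assumptions: grayscale images, fixed orientation, a single best match, OpenCV correlation standing in for the internal matching step):

```python
import cv2

def build_pyramid(img, levels):
    pyramid = [img]
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))  # downsample by a factor of two
    return pyramid

def pyramid_match(image, template, levels=3):
    imgs, tmpls = build_pyramid(image, levels), build_pyramid(template, levels)
    # Exhaustive search only on the smallest (highest) pyramid level...
    scores = cv2.matchTemplate(imgs[-1], tmpls[-1], cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(scores)
    # ...then refine the candidate in a small window on each lower level.
    for level in range(levels - 1, -1, -1):
        x, y = 2 * int(x), 2 * int(y)
        h, w = tmpls[level].shape
        x0, y0 = max(0, x - 4), max(0, y - 4)
        roi = imgs[level][y0:y + h + 4, x0:x + w + 4]
        scores = cv2.matchTemplate(roi, tmpls[level], cv2.TM_CCOEFF_NORMED)
        _, _, _, (dx, dy) = cv2.minMaxLoc(scores)
        x, y = x0 + dx, y0 + dy
    return x, y  # top-left corner of the best match at full resolution
```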
Grayscale-based Matching
Although in some of the applications the orientation of the objects is uniform and fixed (as we have seen in the plug example), it is often the case that
the objects that are to be detected appear rotated. In Template Matching algorithms the classic pyramid search is adapted to allow multi-angle
matching, i.e. identification of rotated instances of the template.
This is achieved by computing not just one template image pyramid, but a set of pyramids - one for each possible rotation of the template. During the
pyramid search on the input image the algorithm identifies the pairs (template position, template orientation) rather than sole template positions.
Similarly to the original schema, on each level of the search the algorithm verifies only those (position, orientation) pairs that scored well on the
previous level (i.e. seemed to match the template in the image of lower resolution).
The technique of pyramid matching together with multi-angle search constitute the Grayscale-based Template Matching method.
Edge-based Matching
Edge-based Matching enhances the previously discussed Grayscale-based Matching using one crucial observation - that the shape of any object is defined mainly by the shape of its edges. Therefore, instead of matching the whole template, we could extract its edges and match only the nearby pixels, thus avoiding some unnecessary computations. In common applications the achieved speed-up is usually significant.
Matching object edges instead of the object as a whole requires a slight modification of the original pyramid matching method: imagine we are matching an object of uniform color positioned over a uniform background. All of the object edge pixels would have the same intensity, and the original algorithm would match the object anywhere there is a large enough blob of the appropriate color, which is clearly not what we want to achieve. To resolve this problem, in Edge-based Matching it is the gradient direction of the edge pixels (represented as a color in HSV space for illustrative purposes), not their intensity, that is matched.
Aurora Vision Studio provides a set of filters implementing both Grayscale-based Matching and Edge-based Matching. For the list of the filters see Template Matching filters.
Different kinds of template pyramids used in Template Matching algorithms.
As the template image has to be preprocessed before the pyramid matching (we
need to calculate the template image pyramids for all possible rotations and scales), the algorithms are split into two parts:
Model Creation - in this step the template image pyramids are calculated and the results are stored in a model - atomic object representing
all the data needed to run the pyramid matching.
Matching - in this step the template model is used to match the template in the input image.
Such an organization of the processing makes it possible to compute the model once and reuse it multiple times.
Available Filters
For both Template Matching methods two filters are provided, one for each step of the algorithm.
Grayscale-based Matching – Model Creation: CreateGrayModel; Matching: LocateMultipleObjects_NCC.
Edge-based Matching – Model Creation: CreateEdgeModel2; Matching: LocateMultipleObjects_Edges2.
Please note that the use of the CreateGrayModel and CreateEdgeModel2 filters is only necessary in more advanced applications. Otherwise it is enough to use a single filter of the Matching step and create the model by setting the inGrayModel or inEdgeModel parameter of that filter. For more information see Creating Models for Template Matching. The CreateEdgeModel2 and LocateMultipleObjects_Edges2 filters are preferred over CreateEdgeModel1 and LocateMultipleObjects_Edges1 because they are newer, more capable versions.
The main challenge of applying the Template Matching technique lies in careful adjustment of filter parameters, rather than designing the program
structure.
For cases 1 and 2 it is advisable to implement model creation in a separate Task macrofilter, save the model to an AVDATA file and then link that file to the input of the matching filter in the main program:
Model Creation:
Main Program:
When this program is ready, you can run the "CreateModel" task as a program at any time you want to recreate the model. The link to the data file on
the input of the matching filter does not need any modifications then, because this is just a link and what is being changed is only the file on disk.
For case 3, when the model has to be created dynamically, both the model-creating filter and the matching filter have to be in the same task. The former, however, should be executed conditionally, when a respective HMI event is raised (e.g. the user clicks an ImpulseButton or performs some mouse action in a VideoBox). For representing the model, a register of EdgeModel2? type should be used to store the latest model (another option is to use the LastNotNil filter). Here is an example realization with the model being created from a predefined box on an input image when a button is clicked in the HMI:
Model Creation
The inMaxPyramidLevel parameter determines the number of levels of the pyramid matching and should be set to the largest number for which the template is still recognizable on the highest pyramid level. This value should be selected through interactive experimentation using the diagnostic output diagTemplatePyramid (Grayscale-based Matching) or diagEdgePyramid (Edge-based Matching).
The inMinPyramidLevel parameter determines the lowest pyramid level that is generated during the creation phase, as well as the lowest pyramid level that the occurrences are tracked down to during the location phase. If the parameter is set to a lower value in location than in creation, the missing levels are generated dynamically by the locating filter. This approach leads to much faster creation, but slightly slower location.
In the following example the inMaxPyramidLevel value of 4 would be too high (for both methods), as the structure of the template is entirely lost at this level of the pyramid. The value of 3 also seems a bit excessive (especially in the case of Edge-based Matching), while the value of 2 would definitely be a safe choice.
Grayscale-based Matching
(diagTemplatePyramid):
Edge-based Matching
(diagEdgePyramid):
Angle Range
The inMinAngle, inMaxAngle parameters determine the range of template orientations that will be considered in the matching process. For
instance (values in brackets represent the pairs of inMinAngle, inMaxAngle values):
(-180.0, 180.0): all rotations are considered (default value)
(-15.0, 15.0): the template occurrences are allowed to deviate from the reference template orientation at most by 15.0 degrees (in each
direction)
(0.0, 0.0): the template occurrences are expected to preserve the reference template orientation
A wide range of possible orientations introduces a significant amount of overhead (both in memory usage and computing time), so it is advisable to limit the range whenever possible, especially if different scales are also involved. The number of rotations created can be further controlled with the inAnglePrecision parameter. Decreasing it results in smaller models and shorter execution times, but can also lead to slightly less accurately located objects.
Scale Range
The inMinScale and inMaxScale parameters determine the range of template scales that will be considered in the matching process. This enables locating objects that are slightly smaller or bigger than the object used during model creation.
A wide range of possible scales introduces a significant amount of overhead (both in memory usage and computing time), so it is advisable to limit the range whenever possible. The number of scales created can be further controlled with the inScalePrecision parameter. Decreasing it results in smaller models and shorter execution times, but can also lead to slightly less accurately located objects.
Edge Detection Settings (only Edge-based Matching)
The inEdgeThreshold and inEdgeHysteresis parameters of the CreateEdgeModel2 filter determine the settings of the hysteresis thresholding used to detect edges in the template image. The lower the inEdgeThreshold value, the more edges will be detected in the template image. These parameters should be set so that all the significant edges of the template are detected and the amount of redundant edges (noise) in the result is as limited as possible. Similarly to the pyramid height, the edge detection thresholds should be selected through interactive experimentation using the outEdges output and the diagnostic output diagEdgePyramid - this time we only need to look at the picture at the lowest level.
(15.0, 30.0) - excessive amount of noise (40.0, 60.0) - OK (60.0, 70.0) - significant edges lost
The CreateEdgeModel2 filter will not allow creating a model in which no edges were detected at the top of the pyramid (which would mean that not just some but all of the significant edges were lost), yielding an error in such a case. Whenever that happens, the height of the pyramid, the edge thresholds, or both should be reduced.
Matching
The inMinScore parameter determines how permissive the algorithm will be in the verification of the match candidates - the higher the value, the fewer results will be returned. This parameter should be set through interactive experimentation to a value low enough to ensure that all correct matches will be returned, but not much lower, as a too low value slows the algorithm down and may cause false matches to appear in the results.
In the second step we sort the centers by the X coordinate and create a coordinate system "from segment" defined by the two points
(CreateCoordinateSystemFromSegment). The segment defines both the origin and the orientation. Having this coordinate system ready, we connect
it to the inScanPathAlignment input of ScanExactlyNRidges, which will measure the distance between two insets. The measurement will work
correctly irrespective of the object position (mind the expanded structure inputs and outputs):
Manual Alignment
In some cases the filter you will need to use with a local coordinate system will have no appropriate inAlignment input. In such cases the solution is
to transform the primitive manually with filters like AlignPoint, AlignCircle, AlignRectangle. These filters accept a geometrical primitive defined in a
local coordinate system, and the coordinate system itself, and return the same primitive, but with absolute coordinates, i.e. aligned to the coordinate
system of an image.
A very common case involves ports of type Region, which is pixel-precise and, while allowing for the creation of arbitrary shapes, cannot be directly transformed. In such cases it is advisable to use the CreateRectangleRegion filter and define the region of interest at inRectangle. The filter, having also the inRectangleAlignment input connected, will return a region properly aligned with the related object position. Some ready-made tools, e.g. CheckPresence_Intensity, use this approach internally.
The ScanSingleEdge filter with a pair of ports: inScanPath and outAlignedScanPath, belonging to different coordinate systems.
Optical Character Recognition
OCR technology is widely used for automatic data reading from various sources. It is especially used to gather data from documents and printed labels.
The first part of this manual describes the usage of the high-level filters.
The second part shows how to use the standard OCR models provided with Aurora Vision Studio. It also shows how to prepare an image to get the best possible recognition results.
The third part describes the process of preparing and training OCR models.
The last part presents an example program that reads text from images.
Original image.
Segmenting text
Text region segmentation is the process of splitting a region into lines and individual characters. The recognition step is only possible if each region contains a single character.
Firstly, if there are multiple lines of text, separation into lines must be performed. If the text orientation is horizontal, a simple region dilation can be used, followed by splitting the region into blobs (see the sketch below). In other cases the text must be transformed so that the lines become horizontal.
The process of splitting text into lines using region morphology filters.
When the text lines are separated, each line must be split into individual characters. In the case when characters do not contain diacritic marks and can be separated well, the SplitRegionIntoBlobs filter can be used. In other cases the SplitRegionIntoExactlyNCharacters or SplitRegionIntoMultipleCharacters filter must be used.
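The dilate-and-split idea for horizontal text can be sketched with OpenCV (an illustration of the approach, not the Aurora filters; the mask file is hypothetical):

```python
import cv2
import numpy as np

text_region = cv2.imread("text_mask.png", cv2.IMREAD_GRAYSCALE)  # binary text mask

# Dilate strongly in X so the characters of one line merge into a single blob...
merged = cv2.dilate(text_region, np.ones((1, 25), np.uint8))

# ...then split into blobs: each connected component covers one line of text.
count, labels = cv2.connectedComponents(merged)
lines = [(labels == i) & (text_region > 0) for i in range(1, count)]
```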
The standard OCR models provided with Aurora Vision Studio are listed below (the suffix of a model name denotes its character set):
OCRA (monospaced): AZ_small - abcdefghijklmnopqrstuvwxyz.-/; 09 - 0123456789.-/+; AZ09 - ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.-/+
OCRB (monospaced): AZ - ABCDEFGHIJKLMNOPQRSTUVWXYZ.-/; AZ_small - abcdefghijklmnopqrstuvwxyz.-/; 09 - 0123456789.-/+; AZ09 - ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.-/+
Computer (monospaced): AZ - ABCDEFGHIJKLMNOPQRSTUVWXYZ.-/; AZ_small - abcdefghijklmnopqrstuvwxyz.-/; 09 - 0123456789.-/+; AZ09 - ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.-/+
Dotted (monospaced): AZ - ABCDEFGHIJKLMNOPQRSTUVWXYZ+-./; 09 - 0123456789.+-/
Regular (proportional): AZ - ABCDEFGHIJKLMNOPQRSTUVWXYZ.-/; AZ_small - abcdefghijklmnopqrstuvwxyz.-/; 09 - 0123456789.-/+; AZ09 - ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.-/+
Character recognition
Aurora Vision Library offers two types of character classifiers:
1. Classifier based on multi-layer perceptron (MLP).
2. Classifier based on support vector machines (SVM).
Both of the classifiers are stored in the OcrModel type. To read text from character regions, use the RecognizeCharacters filter, shown in the image below:
The first and most important step is to choose the appropriate character normalization size. The internal classifier recognizes characters using their normalized form. More information about the character normalization process is provided in the section describing classifier training.
Character normalization allows classifying characters of different sizes. The inCharacterSize parameter defines the size of a character before normalization. When the value is not provided, the size is calculated automatically from the character bounding box.
Table: character presentation, characters after normalization, and description.
Next, character sorting order must be chosen. The default order is from left to right.
If the input text contains spaced characters, the inMinSpaceWidth input must be set. This value indicates the minimal distance between two characters at which a space will be inserted between them.
Character recognition provides the following information:
1. the read text as a string (outCharacters),
2. an array of character recognition scores (outScores),
3. an array of recognition candidates for each character (outCandidates).
Interpreting results
The table below shows recognition results for the characters extracted from the example image (character presentation, recognized character, score, candidates). An unrecognized character is colored red.
X - X - 1.00 - X: 1.00
A - A - 1.00 - A: 1.00
M - M - 1.00 - M: 1.00
L - L - 1.00 - L: 1.00
E - E - 1.00 - E: 1.00
In this example the letter P was not included in the training set. As a result, the OCR model was unable to recognize the representation of the letter P, and the internal classifier tried to select the most similar known character.
Verifying results
The results of the character recognition process can be validated using basic string manipulation filters.
The example below shows how to check if the read text contains a valid year value. The year value should be greater than 2012 (e.g. the year production started) and must not be greater than the current year.
For more complex validation a user-defined filter is recommended. For further information read the documentation on creating user filters.
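For reference, the described check expressed in Python (a sketch of the logic only; in Studio it is built from string and comparison filters):

```python
from datetime import date

def is_valid_year(text, first_production_year=2012):
    """Accept a 4-digit year greater than the production start year
    and not greater than the current year."""
    if len(text) != 4 or not text.isdigit():
        return False
    return first_production_year < int(text) <= date.today().year

print(is_valid_year("2019"))  # True (as long as the current year is >= 2019)
print(is_valid_year("20A9"))  # False - not a number
```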
Training of MLP classifier using TrainOcr_MLP. Training of SVM classifier using TrainOcr_SVM.
Macrofilter which uses the trained classifier to acquire text from images.
A set of grid pictures for basic calibration. Note that high-accuracy applications require denser grids and more pictures. Also note that all the grids here are perpendicular to the optical axis of the camera, so the focal length will not be calculated by the filter.
Image to world plane coordinate calculation. Image rectification, with cropping to an area from point (0,0) to (5,5) in world
coordinates.
In order to use the image to world plane transform mechanism of Aurora Vision Studio, appropriate UI wizards are supplied:
For calculation of real world coordinates from locations on original image – use a wizard associated with the inTransform input of
ImagePointToWorldPlane filter (or other from ImageObjectsToWorldPlane group).
For image rectification onto the world plane – use a wizard associated with the inRectificationMap input of RectifyImage filter.
Although using the UI wizards is the recommended course of action, the most complicated use cases may need direct use of the filters. In such a case the following steps are to be performed:
1. Camera calibration – this step is highly recommended to achieve accurate results, although it is not strictly necessary (e.g. when lens distortion errors are insignificant).
2. World plane calibration – the CalibrateWorldPlane filters compute a RectificationTransform, which represents the image-to-world-plane relation.
3. The image-to-world-plane relation can then be used to:
Calculate real-world coordinates from locations on the original image, and vice versa; see ImagePointToWorldPlane, WorldPlanePointToImage or similar filters (from the ImageObjectsToWorldPlane or WorldPlaneObjectsToImage groups).
Perform image rectification onto the world plane; see the CreateRectificationMap filters.
There are different use cases of world coordinate calculation and image rectification:
Calculating world coordinates from pixel locations on the original image, without image rectification. This approach uses a transformation computed, for example, by CalibrateWorldPlane to calculate real-world coordinates with the ImageObjectsToWorldPlane filters.
The second scenario is very similar to the first one, with the difference of using image rectification. In this case, after performing analysis on a rectified image (i.e. an image remapped by RectifyImage), the locations can be transformed to a common coordinate system given by the world plane by using the rectified-image-to-world-plane relation. It is given by the auxiliary output outRectifiedTransform of the RectifyImage filter. Notice that the rectified-image-to-world-plane relation is different from the original-image-to-world-plane relation.
The last use case is to perform image rectification and rectified image analysis without recalculating the features to real-world coordinates.
Example of taking world plane measurements on the rectified image. Left: original image, as captured by a camera, with mild lens distortion.
Right: rectified image with annotated length measurement.
Notes:
The image-to-world-plane transform is still a valid mechanism for telecentric cameras. In such a case, the image is related to the world plane by an affine transform.
Camera distortion is automatically accounted for in both world coordinate calculations and image rectification.
The spatial map generated by the CreateRectificationMap filters can be thought of as a map performing image undistortion followed by perspective removal.
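To illustrate the underlying idea only (a homography-based sketch that ignores lens distortion, which the RectificationTransform does handle; all point values are hypothetical):

```python
import cv2
import numpy as np

# Four image points of a known rectangle lying on the world plane,
# and their world-plane coordinates (e.g. in centimeters).
image_pts = np.float32([[102, 80], [510, 95], [498, 390], [95, 370]])
world_pts = np.float32([[0, 0], [5, 0], [5, 5], [0, 5]])
H = cv2.getPerspectiveTransform(image_pts, world_pts)

# World coordinates of an arbitrary image point.
world_xy = cv2.perspectiveTransform(np.float32([[[300, 200]]]), H)

# Rectification onto the world plane, rendered at 100 pixels per world unit.
scale = np.diag([100.0, 100.0, 1.0])
image = cv2.imread("board.png")  # hypothetical input
rectified = cv2.warpPerspective(image, scale @ H, (500, 500))
```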
Please refer to Preparing Rectification Map Transform to get step-by-step instruction on how to perform calibration with the calibration editor (plugin).
Golden Template
The Golden Template technique performs a pixel-to-pixel comparison of two images. It is especially useful when the object's surface or shape is very complex.
Aurora Vision Studio offers three ways of performing the golden template comparison.
Comparison based on pixel intensity - achieved using CompareGoldenTemplate_Intensity. In this method two images are compared pixel by pixel and defects are classified based on the difference between pixel intensities. This technique is especially useful for finding defects like smudges, scratches, etc.
How To Use
A golden template is a previously prepared image which is compared with the image from the camera. This robust technique allows us to perform quick comparison-based inspection, but some conditions must be met:
stable light conditions,
fixed position of the camera and of the object,
precise object positioning.
Most applications use the Template Matching technique to find the object, and then the matched rectangle is compared. The golden template image and the image to compare must have the same dimensions. To get the best results, the CropImageToRectangle filter should be used. Please note that CropImageToRectangle performs cropping using real values and has sub-pixel precision.
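The intensity-based comparison itself can be sketched as follows (an OpenCV illustration of the idea behind CompareGoldenTemplate_Intensity, not its implementation; the thresholds and file names are assumptions):

```python
import cv2

golden = cv2.imread("golden.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files,
current = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)  # same dimensions

# Pixel-by-pixel absolute difference; large differences indicate defects.
difference = cv2.absdiff(golden, current)
_, defects = cv2.threshold(difference, 40, 255, cv2.THRESH_BINARY)

defect_found = cv2.countNonZero(defects) > 50  # tolerate a little noise
```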
Example
The example below shows how to use basic golden template matching.
The example application performs the following operations:
1. Finding the object location using the Template Matching technique.
2. Comparing the input image with the previously prepared golden template.
Deep Learning
Table of contents:
1. Introduction
Overview of Deep Learning Tools
Basic Terminology
Stopping Conditions
Preprocessing
Augmentation
2. Anomaly Detection
3. Feature Detection
4. Object Classification
5. Instance Segmentation
6. Point Location
7. Object Location
8. Reading Characters
9. Troubleshooting
1. Introduction
Deep Learning is a breakthrough machine learning technique in computer vision. It learns from training images provided by the user and can automatically generate solutions for a wide range of image analysis applications. Its key advantage, however, is that it is able to solve many of the applications which have been too difficult for traditional, rule-based algorithms of the past. Most notably, these include inspections of objects with high variability of shape or appearance, such as organic products, highly textured surfaces or natural outdoor scenes. What is more, when using ready-made products, such as our Aurora Vision Deep Learning, the required programming effort is reduced almost to zero. On the other hand, deep learning shifts the focus to working with data, taking care of high-quality image annotations and experimenting with training parameters – these elements actually tend to take most of the application development time these days.
Typical applications are:
detection of surface and shape defects (e.g. cracks, deformations, discoloration),
detecting unusual or unexpected samples (e.g. missing, broken or low-quality parts),
identification of objects or images with respect to predefined classes (e.g. sorting machines),
location, segmentation and classification of multiple objects within an image (e.g. bin picking),
product quality analysis (including fruits, plants, wood and other organic products),
location and classification of key points, characteristic regions and small objects,
optical character recognition.
The use of deep learning functionality includes two stages:
1. Training – generating a model based on features learned from training samples,
2. Inference – applying the model on new images in order to perform the actual machine vision task.
The difference to the traditional image analysis approach is presented in the diagrams below:
2. Feature Detection (segmentation) – this technique is used to precisely segment one or more classes of pixel-wise features within an
image. The pixels belonging to each class must be marked by the user in the training step. The result of this technique is an array of probability
maps for every class.
3. Object Classification – this technique is used to identify an object in a selected region with one of user-defined classes. First, it is necessary to provide a training set of labeled images. The result of this technique is the name of the detected class and a classification confidence level.
4. Instance Segmentation – this technique is used to locate, segment and classify one or multiple objects within an image. The training requires
the user to draw regions corresponding to objects in an image and assign them to classes. The result is a list of detected objects – with their
bounding boxes, masks (segmented regions), class IDs, names and membership probabilities.
An example of instance segmentation using DL_SegmentInstances tool. Left: The original image. Right: The resulting list of detected objects.
5. Point Location – this technique is used to precisely locate and classify key points, characteristic parts and small objects within an image. The
training requires the user to mark points of appropriate classes on the training images. The result is a list of predicted point locations with
corresponding class predictions and confidence scores.
An example of point location using DL_LocatePoints tool. Left: The original image. Right: The resulting list of detected points.
6. Reading Characters – this technique is used to locate and recognize characters within an image. The result is a list of found characters.
An example of optical character recognition using DL_ReadCharacters tool. Left: The original image. Right: The image with the recognized
characters drawn.
Basic Terminology
You do not need specialized scientific knowledge to develop your deep learning solutions. However, it is highly recommended to understand the basic terminology and principles behind the process.
Training process
Model training is an iterative process of updating neural network weights based on the training data. One iteration involves some number of steps (determined automatically), and each step consists of the following operations (see the sketch below):
1. selection of a small subset (batch) of training samples,
2. calculation of an error measure for these samples,
3. updating the weights to achieve a lower error for these samples.
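The iteration/step/batch mechanics can be demonstrated on a toy linear model (NumPy only; the real tools train deep networks with a framework, so this only mirrors the loop structure):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))       # toy training samples
y = X @ np.array([1.0, -2.0, 0.5])  # toy targets
weights = np.zeros(3)

def run_iteration(weights, steps=25, batch_size=16, lr=0.05):
    for _ in range(steps):
        idx = rng.choice(len(X), batch_size, replace=False)  # 1. pick a batch
        error = X[idx] @ weights - y[idx]                    # 2. error measure
        gradient = X[idx].T @ error / batch_size
        weights = weights - lr * gradient                    # 3. update weights
    return weights

for iteration in range(10):
    weights = run_iteration(weights)
    # ...here the model would be scored on the validation set
```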
At the end of each iteration, the current model is evaluated on a separate set of validation samples selected before the training process. The validation set is automatically chosen from the training samples. It is used to simulate how the neural network would work with real images not used during training. Only the set of network weights corresponding to the best validation score at the end of training is saved as the final solution. Monitoring the training and validation scores (the blue and orange lines in the figures below) in consecutive iterations gives fundamental information about the progress:
1. Both the training and validation scores are improving – keep training, the model can still improve.
2. Both the training and validation scores have stopped improving – keep training for a few more iterations and stop if there is still no change.
3. The training score is improving, but the validation score has stopped improving or is getting worse – you can stop training; the model has probably started overfitting to your training data (remembering exact samples rather than learning rules about features). It may also be caused by too few diverse samples or too low complexity of the problem for the selected network (try a lower Network Depth).
The above graphs represent training progress in the Deep Learning Editor. The blue line indicates performance on the training samples, and the
orange line represents performance on the validation samples. Please note the blue line is plotted more frequently than the orange line as validation
performance is verified only at the end of each iteration.
Stopping Conditions
The user can stop the training manually by clicking the Stop button. Alternatively, it is also possible to set one or more stopping conditions:
1. Iteration Count – training will stop after a fixed number of iterations.
2. Iterations without Improvement – training will stop when the best validation score was not improved for a given number of iterations.
3. Time – training will stop after a given number of minutes has passed.
4. Validation Accuracy or Validation Error – training will stop when the validation score reaches a given value.
Preprocessing
To adjust performance to a particular task, the user can apply some additional transformations to the input images before training starts:
1. Downsample – reduction of the image size to accelerate training and execution times, at the expense of a lower level of detail possible to detect. Increasing this parameter by 1 results in downsampling by a factor of 2 over both image dimensions.
2. Convert to Grayscale – while working with problems where color does not matter, you can choose to work with monochrome versions of
images.
Augmentation
When the number of training images is too small to represent all possible variations of samples, it is recommended to use data augmentations that add artificially modified samples during training. This option also helps to avoid overfitting.
Below is a description of the available augmentations and examples of the corresponding transformations:
1. Luminance – change the brightness of samples by a random percentage (between -ParameterValue and +ParameterValue) of the pixel value range (0-255). For given augmentation values, samples as below can be added to the training set.
2. Noise – modify samples with uniform noise. The value of each channel and pixel is modified separately, by a random percentage (between -ParameterValue and +ParameterValue) of the pixel value range (0-255). Please note that the choice of an appropriate augmentation value should depend on the size of the feature in pixels. A larger value will have a much greater impact on small objects than on large objects. For a tile with the feature "F" of size 130x130 pixels and given augmentation values, samples as below can be added to the training set:
Original grayscale image; grayscale image with Noise = 4, 10, 25 and 50. Original RGB image; RGB image with Noise = 4, 10, 25 and 50.
3. Gaussian Blur – blur samples with a kernel of a size randomly selected between 0 and the provided maximum kernel size. Please note that the choice of an appropriate Gaussian Blur Kernel Size should depend on the size of the feature in pixels. Larger kernel sizes will have a much greater impact on small objects than on large objects. For a tile with the feature "F" of size 130x130 pixels and given augmentation values, samples as below can be added to the training set:
Original image; Gaussian Blur = 5, 10, 25 and 50.
4. Rotation – rotate samples by a random angle between -ParameterValue and +ParameterValue. Measured in degrees.
In Detect Features, Locate Points and Detect Anomalies, for a tile with the feature "F" and given augmentation values, samples as below can
be added to the training set.
Tile rotation=-45°. Tile rotation=-20°. Original tile. Tile rotation=20°. Tile rotation=45°.
In Classify Object and Segment Instances, for an image with the feature "F" and given augmentation values, samples as below can be added
to the training set.
Image rotation=-45°. Image rotation=-20°. Original image. Image rotation=20°. Image rotation=45°.
5. Relative Translation – translate samples by a random shift, defined as a percentage (between -ParameterValue and +ParameterValue) of the tile size (in Detect Features, Locate Points and Detect Anomalies) or the image size (in Classify Object and Segment Instances). Works independently in both the X and Y dimensions.
In Detect Features, Locate Points and Detect Anomalies, for a tile with the feature "F" and given augmentation values, samples as below can
be added to the training set.
Tile translation x=20%, y=20%. Original tile. Tile translation x=-20%, y=-20%.
In Classify Object and Segment Instances, for an image with the feature "F" and given augmentation values, samples as below can be added
to the training set.
Image translation x=20%, y=20%. Original image. Image translation x=-20%, y=-20%.
6. Scale – resize samples relative to their original size by a random percentage between the provided minimum scale and maximum scale.
7. Horizontal Shear – shear samples horizontally by a random angle between -ParameterValue and +ParameterValue. Measured in degrees.
In Detect Features, Locate Points and Detect Anomalies, for a tile with the feature "F" and given augmentation values, samples as below can
be added to the training set.
In Classify Object and Segment Instances, for an image with the feature "F" and given augmentation values, samples as below can be added
to the training set.
Warning: the choice of augmentation options should depend only on the task you want to solve; inappropriate augmentations can harm the quality of a solution. As a simple example, Rotation should not be enabled if rotations are not expected in the production environment. Enabling augmentations also increases the network training time (but does not affect execution time!).
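For illustration only, the sketch below draws one random set of augmentation parameters, following the convention above that a single ParameterValue defines a symmetric range from -value to +value (a hypothetical Python helper, not part of the product):

import random

def sample_augmentation_params(luminance=10, max_blur_kernel=5, rotation=20,
                               translation=20, min_scale=90, max_scale=110, shear=10):
    """Draw the random modification applied to one training sample.

    Noise is not included here because it is sampled per pixel and channel
    during the image transformation itself."""
    return {
        "luminance_pct": random.uniform(-luminance, luminance),  # % of the 0-255 range
        "blur_kernel": random.randint(0, max_blur_kernel),       # Gaussian Blur size
        "rotation_deg": random.uniform(-rotation, rotation),
        "translation_pct_x": random.uniform(-translation, translation),
        "translation_pct_y": random.uniform(-translation, translation),
        "scale_pct": random.uniform(min_scale, max_scale),
        "shear_deg": random.uniform(-shear, shear),
    }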
2. Anomaly Detection
Aurora Vision Deep Learning provides three ways of defect detection:
DL_DetectAnomalies1
DL_DetectAnomalies2 Single Class
DL_DetectAnomalies2 Golden Template
The DL_DetectAnomalies1 (reconstructive approach) uses deep neural networks to remove defects from the input image by reconstructing the
affected regions. It is used to analyze images in fragments of size determined by the Feature Size parameter.
This approach is based on reconstructing an image without defects and then comparing it with the original one. It filters out all patterns smaller than
Feature Size that were not present in the training set.
The DL_DetectAnomalies2 Single Class uses a simpler algorithm than Golden Template. It uses less space and the iteration time is shorter. It can
be used with less complex objects.
The DL_DetectAnomalies2 Golden Template is an appropriate method for positioned objects with complex details. The tool divides the images into
regions and creates a separate model for each region. The tool has the Texture Mode dedicated for texture defects detection. It can be used for
plain surfaces or the ones with a simple pattern.
To sum up, while choosing the tool for anomaly detection, first check the Golden Template with the Texture Mode on or off, depending on the
object's kind. If the model takes too much space or the iteration is too long, please try the Single Class tool. If the object is complex and its position
is unstable, please check the DL_DetectAnomalies1 approach.
Parameters
Feature Size is related to the DL_DetectAnomalies1 and DL_DetectAnomalies2 Single Class approaches. It corresponds to the expected defect size and is the most significant parameter in terms of both quality and speed of inspection. It is represented by a green square in the Image window of the Editor. The common denominator of all fragment-based approaches is that the Feature Size should be adjusted so that it contains common defects with some margin.
For DL_DetectAnomalies1, a large Feature Size will cause small defects to be ignored; however, the inference time will be shortened considerably (heatmap precision will also be lowered). For DL_DetectAnomalies2 Single Class, a large Feature Size increases training and inference time as well as memory requirements. Consider using the Downscale parameter instead of increasing the Feature Size.
Sampling Density is related to the DL_DetectAnomalies1 and DL_DetectAnomalies2 Single Class approaches. It controls the spatial resolution of both training and inspection. The higher the density, the more precise the results, but the longer the computational time. It is recommended to use the Low density only for well-positioned and simple objects. The High density is useful when working with complex textures and highly variable objects.
Max Translation is related to the DL_DetectAnomalies2 Golden Template approach. It is the maximal position change tolerance. Increasing this parameter enlarges the working area of each small model and decreases the number of created small models.
Model Complexity is related to the DL_DetectAnomalies2 Golden Template and DL_DetectAnomalies2 Texture approaches. A greater value may improve model effectiveness, especially for complex objects, at the expense of memory usage and inference time.
Metrics
Measuring accuracy of anomaly detection tools is a challenging task. The most straightforward approach is to calculate the Recall/Precision/F1
measures for the whole images (classified as GOOD or BAD, without looking at the locations of the anomalies). Unfortunately, such an approach is
not very reliable due to several reasons, including: (1) when we have a limited number of test images (like 20), the scores will vary a lot (like Δ=5%)
when just one case changes; (2) very frequently the tools we test will find random false anomalies, but will not find the right ones - and still will get
high scores as the image as a whole is considered correctly classified. Thus, it may be tempting to use annotated anomaly regions and calculate the
per-pixel scores. However, this would be too fine-grained. For anomaly detection tasks we do not expect the tools to be necessarily very accurate in
terms of the location of defects. Individual pixels do not matter much. Instead, we expect that the anomalies are detected "more or less" at the right
locations. As a matter of fact, some tools which are not very accurate in general (especially those based on auto-encoders) will produce relatively
accurate outlines for the anomalies they find, while the methods based on one-class classification will usually perform better in general, but the
outlines they produce will be blurred, too thin or too thick.
For these reasons, we introduced an intermediate approach to calculation of Recall. Instead of using the per-image or the per-pixel methods, we use
a per-region one. Here is how we calculate Recall:
For each anomaly region we check if there is any single pixel in the heatmap above the threshold. If there is, we increase TP (the number of True Positives) by one. Otherwise, we increase FN (the number of False Negatives) by one.
Then we use the formula:
Recall = TP / (TP + FN)
The above method works for Recall, but cannot be directly applied to the calculation of Precision. Thus, for Precision we use a per-pixel approach, but it also comes with its own difficulties. The first issue is that we often find ourselves having a lot of GOOD samples and a very limited set of BAD testing cases. This means unbalanced testing data, which in turn means that the Precision metric is highly affected by the overwhelming quantity of GOOD samples. The more GOOD samples we have (at the same number of BAD samples), the lower Precision will be. It may actually be very low, often not reflecting the true performance of the tool. For that reason, we need to incorporate balancing into our metrics.
A second issue with Precision in real-world projects is that False Positives tend to naturally occur within BAD images, outside of the marked
anomaly regions. This happens for several reasons, but is repeatable among different projects. Sometimes if there is a defect, it often means that
something was broken and other parts of the object may be slightly affected too, sometimes in a visible way, sometimes with a level of ambiguity.
And quite often the objects under inspection simply get affected by the process of artificially introducing defects (like someone is touching a piece of
fabric and accidentally causes wrinkles that would normally not occur). For this reason, we calculate the per-pixel False Positives only on GOOD images.
The complete procedure for calculation of Precision is:
We calculate the average pp_TP (the number of per-pixel True Positives) across all BAD testing samples.
We calculate the average pp_FP (the number of per-pixel False Positives) across all GOOD testing samples.
Then we use the formula:
Precision = avg pp_TP / (avg pp_TP + avg pp_FP)
Finally, we calculate the F1 score in the standard way, F1 = 2 * Precision * Recall / (Precision + Recall), for practical reasons neglecting the fact that the Recall and Precision values that we unify were calculated in different ways. We believe that this metric is best for practical applications.
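The whole procedure can be summarized in a short sketch, assuming the heatmaps and annotations are available as NumPy arrays (an illustration of the definitions above, not the product's internal implementation):

import numpy as np

def per_region_recall(heatmaps, region_masks, threshold=0.5):
    """A region counts as detected (TP) if any of its heatmap pixels exceeds the threshold."""
    tp = fn = 0
    for heatmap, regions in zip(heatmaps, region_masks):
        for region in regions:  # region: boolean mask of a single annotated anomaly
            if (heatmap[region] > threshold).any():
                tp += 1
            else:
                fn += 1
    return tp / (tp + fn)

def balanced_precision(bad_heatmaps, bad_masks, good_heatmaps, threshold=0.5):
    """Average per-pixel TP over BAD images; average per-pixel FP over GOOD images only."""
    avg_pp_tp = np.mean([np.sum((h > threshold) & m) for h, m in zip(bad_heatmaps, bad_masks)])
    avg_pp_fp = np.mean([np.sum(h > threshold) for h in good_heatmaps])
    return avg_pp_tp / (avg_pp_tp + avg_pp_fp)

def f1_score(recall, precision):
    return 2 * precision * recall / (precision + recall)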
Model Usage
In Detect Anomalies 1 variant, a model should be loaded with DL_DetectAnomalies1_Deploy prior to executing it with DL_DetectAnomalies1.
Alternatively, the model can be loaded directly by DL_DetectAnomalies1 filter, but it will then require time-consuming initialization in the first program
iteration.
In Detect Anomalies 2 variant, a model should be loaded with DL_DetectAnomalies2_Deploy prior to executing it with DL_DetectAnomalies2.
Alternatively, model can be loaded directly by DL_DetectAnomalies2 filter, but it will then require time-consuming initialization in the first program
iteration.
Running Aurora Vision Deep Learning Service simultaneously with these filters is discouraged as it may result in degraded performance or errors.
3. Feature Detection
Training Data
Images loaded to the Editor of DL_DetectFeatures can be of different sizes and can have different ROIs defined. However, it is important to ensure that the scale and the characteristics of the features are consistent with those of the production environment.
The features can be marked using an intuitive interface in the Editor or can be imported as masks from a file.
Each and every feature should be marked on all training images, or the ROI should be limited to include only marked defects. Incompletely or inconsistently marked features are one of the main reasons for poor accuracy. REMEMBER: If you leave even a single piece of a feature unmarked, it will be used as a negative sample and this will highly confuse the training process!
The marking precision should be adjusted to the application requirements. The more precise the marking, the better the accuracy in the production environment. When marking with low precision, it is better to mark features with some excess margin.
An example of wood knots marked with low precision. An example of tile cracks marked with high precision.
Patch Size
Detect Features is an end-to-end segmentation tool which works best when analysing an image in a medium-sized square window. The size of this window is defined by the Patch Size parameter. It should be neither too small nor too big – typically much bigger than the size (width or diameter) of the feature itself, but much smaller than the entire image. In a typical scenario a value of 96 or 128 works quite well.
Performance Tip 1: a larger Patch Size increases the training time and requires more GPU memory and more training samples to operate
effectively. When Patch Size exceeds 128 pixels and still looks too small, it is worth considering the Downsample option.
Performance Tip 2: if the execution time is not satisfactory, you can set the inOverlap filter input to False. It should speed up the inspection by 10-30%, at the expense of less precise results.
Examples of Patch Size: too large or too small (red), maybe acceptable (yellow) and good (green). Remember that this is just an example and may
vary in other cases.
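To get an intuition for the speed/precision trade-off of the inOverlap option, the sketch below computes the analysis-window positions needed to cover one image dimension with and without overlapping patches (an illustrative model only; the tool's actual scanning scheme is internal):

def patch_origins(image_size, patch_size, overlap=True):
    """Top-left coordinates of square analysis windows covering one dimension.

    With overlap, the stride is assumed to be half the patch size; without it,
    patches simply abut."""
    stride = patch_size // 2 if overlap else patch_size
    last = max(image_size - patch_size, 0)
    origins = list(range(0, last, stride))
    if not origins or origins[-1] != last:
        origins.append(last)  # make sure the image edge is covered
    return origins

# Example: a 640-pixel-wide image with Patch Size 128
print(len(patch_origins(640, 128, overlap=True)))   # 9 windows - denser, slower
print(len(patch_origins(640, 128, overlap=False)))  # 5 windows - faster, less precise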
Model Usage
A model should be loaded with DL_DetectFeatures_Deploy filter before using DL_DetectFeatures filter to perform segmentation of features.
Alternatively, the model can be loaded directly by DL_DetectFeatures filter, but it will result in a much longer time of the first iteration.
Running Aurora Vision Deep Learning Service simultaneously with these filters is discouraged as it may result in degraded performance or errors.
Parameters:
To limit the area of image analysis you can use the inRoi input.
To shorten the feature segmentation process you can disable the inOverlap option. However, in most cases it decreases segmentation quality.
Feature segmentation results are passed in the form of bitmaps to the outHeatmaps output as an array, and to outFeature1, outFeature2, outFeature3 and outFeature4 as separate images.
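As an illustration of how such outputs might be post-processed, the sketch below assigns each pixel to the strongest feature channel (hypothetical Python, assuming the heatmaps are stacked into a NumPy array with values in the 0..1 range):

import numpy as np

def classify_pixels(heatmaps, threshold=0.5):
    """Return, per pixel, the index of the strongest feature heatmap,
    or -1 where no heatmap exceeds the threshold.

    heatmaps: array of shape (num_features, height, width),
    e.g. stacked outFeature1..outFeature4 images scaled to 0..1."""
    stack = np.asarray(heatmaps)
    strongest = stack.argmax(axis=0)
    return np.where(stack.max(axis=0) > threshold, strongest, -1)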
4. Object Classification
This technique is used to identify the class of an object within an image or within a specified region.
The confusion matrix presents correct (diagonal) and incorrect assignments of samples to the user-defined classes.
Training Parameters
In addition to the default training parameters (the list of parameters available for all Deep Learning algorithms), the DL_ClassifyObject tool provides a Detail Level parameter which enables control over the level of detail needed for a particular classification task. For the majority of cases the default value of 1 is appropriate, but if images of different classes are distinguishable only by small features (e.g. granular materials like flour and salt), increasing the value of this parameter may improve classification results.
Model Usage
A model should be loaded with the DL_ClassifyObject_Deploy filter before using the DL_ClassifyObject filter to perform classification. Alternatively, the model can be loaded directly by the DL_ClassifyObject filter, but it will result in a much longer first iteration.
Running Aurora Vision Deep Learning Service simultaneously with these filters is discouraged as it may result in degraded performance or errors.
Parameters:
To limit the area of image analysis you can use the inRoi input.
Classification results are passed to the outClassName and outClassIndex outputs.
The outScore value indicates the confidence of the classification.
5. Instance Segmentation
This technique is used to locate, segment and classify one or multiple objects within an image. The results of this technique are lists with elements describing the detected objects – their bounding boxes, masks (segmented regions), class IDs, names and membership probabilities.
Note that, contrary to the feature detection technique, instance segmentation detects individual objects and may be able to separate them even if they touch or overlap. On the other hand, instance segmentation is not an appropriate tool for detecting features like scratches or edges, which may have no object-like boundaries.
Training Data
The training phase requires the user to draw regions corresponding to objects on an image and assign them to classes.
Training Parameters
Instance segmentation training adapts to the data provided by the user and does not require any additional training parameters besides the default
ones.
Model Usage
A model should be loaded with the DL_SegmentInstances_Deploy filter before using the DL_SegmentInstances filter to perform segmentation. Alternatively, the model can be loaded directly by the DL_SegmentInstances filter, but it will result in a much longer first iteration.
Running Aurora Vision Deep Learning Service simultaneously with these filters is discouraged as it may result in degraded performance or errors.
Parameters:
To limit the area of image analysis you can use the inRoi input.
To set the minimum detection score, the inMinDetectionScore parameter can be used.
The maximum number of detected objects in a single image can be set with the inMaxObjectsCount parameter. By default it is equal to the maximum number of objects in the training data.
Results describing the detected objects are passed to the following outputs:
bounding boxes: outBoundingBoxes,
class IDs: outClassIds,
class names: outClassNames,
classification scores: outScores,
masks: outMasks.
6. Point Location
This technique is used to precisely locate and classify key points, characteristic parts and small objects within an image. The result of this technique
is a list of predicted point locations with corresponding class predictions and confidence scores.
When to use point location instead of instance segmentation:
precise location of key points and distinctive regions with no strict boundaries,
location and classification of objects (possibly very small) when their segmentation masks and bounding boxes are not needed (e.g. in object
counting).
When to use point location instead of feature detection:
coordinates of key points, centroids of characteristic regions, objects etc. are needed.
Training Data
The training phase requires the user to mark points of appropriate classes on the training images.
Feature Size
In the case of the Point Location tool, the Feature Size parameter corresponds to the size of an object or characteristic part. If images contain
objects of different scales, it is recommended to use a Feature Size slightly larger than the average object size, although it may require
experimenting with different values to achieve the best possible results.
Performance tip: a larger feature size increases the training time and needs more memory and training samples to operate effectively. When feature
size exceeds 64 pixels and still looks too small, it is worth considering the Downsample option.
Model Usage
A model should be loaded with the DL_LocatePoints_Deploy filter before using the DL_LocatePoints filter to perform point location and classification. Alternatively, the model can be loaded directly by the DL_LocatePoints filter, but it will result in a much longer first iteration.
Running Aurora Vision Deep Learning Service simultaneously with these filters is discouraged as it may result in degraded performance or errors.
Parameters:
To limit the area of image analysis you can use the inRoi input.
To set the minimum detection score, the inMinDetectionScore parameter can be used.
The inMinDistanceRatio parameter can be used to set the minimum distance between two points to be considered distinct. The distance is computed as MinDistanceRatio * FeatureSize (see the sketch after this list). If the value is not enabled, the minimum distance is based on the training data.
To increase detection speed, at the cost of potentially slightly worse precision, inOverlap can be set to False.
Results describing the detected points are passed to the following outputs:
point coordinates: outLocations,
class IDs: outClassIds,
class names: outClassNames,
classification scores: outScores.
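As an illustration of the minimum-distance rule mentioned above, below is one possible greedy suppression of candidate points (hypothetical Python; the filter's actual algorithm may differ):

import math

def suppress_close_points(points, scores, min_distance_ratio, feature_size):
    """Keep the highest-scoring points, dropping any candidate closer than
    MinDistanceRatio * FeatureSize to an already accepted point."""
    min_dist = min_distance_ratio * feature_size
    order = sorted(range(len(points)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        x, y = points[i]
        if all(math.hypot(x - kx, y - ky) >= min_dist for kx, ky in kept):
            kept.append((x, y))
    return kept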
7. Locating objects
This technique is used to locate and classify one or multiple objects within an image. The result of this technique is a list of rectangles bounding the predicted objects, with corresponding class predictions and confidence scores.
The tool returns the rectangular region containing a predicted object, showing its approximate location and orientation, but it does not return the precise positions of the object's key points or a segmented region. It is an intermediate solution between Point Location and Instance Segmentation.
Training Data
The training phase requires the user to mark rectangles bounding objects of appropriate classes on the training images.
Model Usage
A model should be loaded with the DL_LocateObjects_Deploy filter before using the DL_LocateObjects filter to perform object location and classification. Alternatively, the model can be loaded directly by the DL_LocateObjects filter, but it will result in a much longer first iteration.
Running Aurora Vision Deep Learning Service simultaneously with these filters is discouraged as it may result in degraded performance or errors.
Parameters:
To limit the area of image analysis you can use the inRoi input.
To set the minimum detection score, the inMinDetectionScore parameter can be used.
Results describing the detected objects are passed to the outObjects output.
8. Reading Characters
This technique is used to locate and recognize characters within an image. The result is a list of found characters.
This tool uses a pretrained model and cannot be trained by the user.
Model Usage
A model should be loaded with the DL_ReadCharacters_Deploy filter before using the DL_ReadCharacters filter to perform recognition. Alternatively, the model can be loaded directly by the DL_ReadCharacters filter, but it will result in a much longer first iteration.
Running Aurora Vision Deep Learning Service simultaneously with these filters is discouraged as it may result in degraded performance or errors.
Parameters:
To limit the area of the image analysis and/or to set the text orientation, you can use the inRoi input.
The average height (in pixels) of characters in the analysed area should be set with the inCharHeight parameter.
To improve performance with fonts with exceptionally thin or wide characters, you can use the inWidthScale input. To some extent, it may also help when characters are very close to each other.
To restrict the set of recognized characters, use the inCharRange parameter.
9. Troubleshooting
Below you will find a list of the most common problems.
1. Network overfitting
A situation in which a network loses its ability to generalize and focuses only on the training data.
Symptoms: during training, the validation graph stops at one level while the training graph continues to rise. Defects on training images are marked very precisely, but defects on new images are marked poorly.
See Also
Deep Learning Service Configuration - installation and configuration of Deep Learning service,
Creating Deep Learning Model - how to use Deep Learning Editor.
11.
Table of content:
Interfacing Photoneo to Aurora Vision Studio
Interfacing Hilscher card (EtherNet/IP) to Aurora Vision Studio
Interfacing Hilscher card (EtherCAT) to Aurora Vision Studio
Using Modbus TCP Communication
Interfacing Wenglor profile sensor to Aurora Vision Studio
Interfacing Gocator to Aurora Vision Studio
Interfacing Hilscher card (Profinet) to Aurora Vision Studio
Interfacing Profinet gateway to Aurora Vision Studio
Interacting with GigEVision cameras
Using TCP/IP Communication
Changing parameters of GigEVision cameras
Interfacing Photoneo to Aurora Vision Studio
Purpose and requirement
This document explains how to interface a PhoXi 3D Scanner to Aurora Vision Studio.
A PhoXi 3D Scanner is an advanced sensor created by Photoneo. It is purposed for 3D machine vision and processing point clouds. The PhoXi 3D
Scanner has many functionalities including:
Scanning objects and representing them as a point cloud or intensity images,
Various representations of scans.
Required equipment:
PhoXi Control v1.2.35 or later
Aurora Vision Studio 4.11 Professional or later
Getting started with a PhoXi 3D Scanner in Aurora Vision Studio
First, download the proper version of PhoXi Control from the producer's website and install it.
Follow these steps:
1. Connect a PhoXi 3D Scanner to the PC running Aurora Vision Studio Professional.
2. Power up the PhoXi 3D Scanner, connect it to the PC via Ethernet interface.
3. Open PhoXi Control application.
4. Copy ID of the scanner you would like to use in Aurora Vision Studio.
9. Connect outGrid output to preview window and run program. If everything is done correctly, you should get Point3DGrid with scanned object.
Troubleshooting
If you have any problems during the connection process, you may find some solutions here.
Make sure you can grab a point cloud in the PhoXi Control application. If you cannot, please check the connection and change the acquisition properties.
Make sure you are using the proper versions of Aurora Vision Studio and the PhoXi Control application (see the chapter Purpose and requirement).
Make sure that the device is in running mode. If necessary, unpause the device in PhoXi Control and use Free run; then you should be able to grab the Point3DGrid in Aurora Vision Studio.
You are not able to change parameters in the PhoXi Control application while a program in Aurora Vision Studio is running and acquiring data from a scanner. You can stop the program by clicking the Stop button or using the Shift + F5 key combination.
If you encounter a problem that has not been mentioned above, please do not hesitate to contact us, so that we can investigate it and add the description of its solution to this section.
Calibration
The point coordinate system is defined by the scanner's camera position and its specific angle, which depends on the PhoXi 3D Scanner type. You can read more about the coordinate spaces of Photoneo scanners in this document.
You can align the coordinate system position by following this instruction. After completing the steps outlined by the manufacturer, the coordinate system will also be changed for scans acquired with Aurora Vision Studio.
To calibrate a PhoXi 3D Scanner, you have to print a calibration board. Then navigate to Properties -> Coordinates Settings -> Marker Scale in PhoXi Control and change the parameters to those shown in the picture below. Acquire a scan of the point cloud using Trigger scan and then click Set and Store.
These settings do not allow sending Normal maps, Depth maps, Confidence maps and Textures to the Aurora Vision Studio application. You can tick the checkboxes in the PhoXi Control software, or you can set these parameters in Aurora Vision Studio.
To make this possible in Aurora Vision Studio, you need to change the parameters using the Photoneo_SetParameters filter, which should be executed at the very beginning of the program or between the Photoneo_StopAcquisition and Photoneo_StartAcquisition filters.
If the parameters responsible for map reading are set to True, you can use the corresponding filters (Photoneo_GrabNormals, Photoneo_GrabTexture, Photoneo_GrabDepthMap, Photoneo_GrabConfidence) to acquire the respective maps in the Aurora Vision Studio application.
Hardware connection
All devices must work in a common LAN network - henceforth called the shared network. In most cases they are connected to each other through a network switch.
The devices shall be connected as follows:
Master device e.g. PLC to the shared network
Hilscher card to PC
Hilscher card to the shared network
PC's network card to shared network
Before proceeding further, make sure that all devices are powered up.
To be sure that the firmware is installed properly, you can open the cifX Test software and then navigate to Device -> Open to achieve a result like the one below.
To open the card configuration, double-click its icon and, in the menu that pops up, navigate to "Settings/Device Assignment". Then scan for devices by clicking "Scan", mark the found device with a tick as in the image below, and click "Apply".
In "Configuration/General" you can change the IP of the device from DHCP to Static like on the below screen.
In "Configuration/Assembly" you can change the configuration of memory blocks for IN/OUT operations. By default, the Data length is set to 32 bytes.
All subsequent PLC data blocks mentioned in this document are configured for 32 bytes. In case you need memory blocks of a different size, you
can change the Data length from 0 up to 504 bytes.
The final step is to generate the configuration files for Aurora Vision Studio. You can do this by right-clicking on the device icon, then navigating to "Additional Functions -> Export -> DBM/nxd..", entering your configuration name and clicking "Save". You can now close SYCON.net for this example; remember to save your project beforehand.
4) Example configuration of EtherNet/IP PLC
Below you will find an example configuration process for EtherNet/IP PLC Omron NXJP2-9024DT in Sysmac Studio software.
First, two structure Data Types were defined, Input_Data and Output_Data, for further tests of Aurora Vision Studio macrofilters. The information about offsets will be used later in Aurora Vision Studio.
Two global variables, Input_Struct and Output_Struct, were created using the previously defined structures to handle communication with the Hilscher card.
Then navigate to (Tools -> EtherNet/IP Connection Setting) and "Register All Tag Sets".
Add support for the Hilscher card using the EDS Library configuration (path \COMSOL-EIS V2.15.0.1\EDS). Then add "CIFX RE/EIS" and configure the Input and Output as below. It is necessary to match the configuration from SYCON.net.
At this point it is essential to connect to the PLC device. To do this in Sysmac Studio, navigate to "Controller -> Communication Setup...". There might be an error in the Ethernet Communication Test if your network card is configured as DHCP (see the Configuration of the network card section in this tutorial). If communication with the PLC is established, you will receive "Test OK".
If communication with the PLC is established, you can download the configuration to the device.
Below you can find a test of the input and output memory using a watchtable in Sysmac Studio and the IO Monitor in SYCON.net. To turn on the IO Monitor, right-click the card icon in the SYCON.net project and click "Connect". Then double-click this icon and navigate to "Tools -> IO Monitor". By clicking "Update" you can send a changed frame to the PLC.
5) Example configuration in Aurora Vision Studio
To use EtherNetIP filters in Aurora Vision Studio, you first need to attach the configuration files from SYCON.net to the Hilscher_Channel_Open_EthernetIP filter in the INITIALIZE section. The configuration files generated in the previous step are required in the inConfig (xxx.nxd) and inNwid (xxx_nwid.nxd) properties of that filter. Below you can find two Step macrofilters responsible for writing data to the PLC and receiving data from it. To have cyclic communication, place a Loop macrofilter at the end of the PROCESS section. In the FINALIZE section, place the Hilscher_Channel_Close filter.
In the ReadSection step macrofilter you can find, for example, the Hilscher_Channel_IORead_SInt8 filter, which reads 8-bit signed data from a predefined memory area. Using a different offset for each filter enables access to the different variables created in the PLC (see the Example configuration of EtherNet/IP PLC section of this tutorial).
The WriteSection step macrofilter contains Hilscher_Channel_IOWrite filters (for instance, Hilscher_Channel_IOWrite_SInt8) with adequate offsets and data types to match the PLC data variable configuration.
Below, reading from and writing to the PLC is presented using Aurora Vision Studio and a Sysmac Studio watchtable. The decimal values of the variables depend on the data type used.
Troubleshooting
1. Set a static IP for the Hilscher card. See the Configuration using SYCON.net section of this tutorial.
2. Make sure that the current program settings are loaded to the Hilscher card. If not, please use the SYCON.net application to connect and download the settings to the devices, as shown in the picture below.
3. Check your device in Device Assignment. This step was described in the Configuration using SYCON.net section of this tutorial.
4. If your master device has a problem connecting to the Hilscher card, use the following settings in the SYCON.net application. To do this, right-click on the card icon and select Configuration...
5. If none of the above advice has helped, please restart your computer.
Hardware connection
The PLC and PC must work in a common LAN network - henceforth called the shared network. In most cases they are connected to each other through a network switch. In the case of the EtherCAT interface, the Hilscher communication card must be configured as a slave device. In this tutorial, Port 1 of the Hilscher card has to be connected to EtherCAT Port 2 of the PLC.
The devices shall be connected as follows:
Master device e.g. PLC to the shared network
Hilscher card to PC
Hilscher card to the EtherCAT port of PLC
PC's network card to shared network
Before proceeding further, make sure that all devices are powered up.
Using the cifX Setup application, select the channel you would like to configure (usually channel CH#0) and remove the preexisting firmware by clicking "Clear". Then click "Add" to add a new firmware file and navigate to the downloaded EtherCAT Slave firmware (path \COMSOL-ECS V4.8.0.4\Firmware\cifX). Then click "Apply" and finish the configuration with "OK".
To be sure that the firmware is installed properly, you can open the cifX Test software and then navigate to Device -> Open to achieve a result like the one below.
2) Configuration of the network card
It might be necessary to change the IP of the network card from dynamic to static. In this tutorial the following settings are used.
To open the card configuration, double-click its icon and, in the menu that pops up, navigate to "Settings/Device Assignment". Then scan for devices by clicking "Scan", mark the found device with a tick as in the image below, and click "Apply".
In "Configuration/General Settings" you can change the configuration of memory blocks for IN/OUT operations. By default, the Data length is set to
200 bytes. All subsequent PLC data blocks mentioned in this document are configured for 25 bytes. In case you need memory blocks of a different
size, you can change the Data length from 0 up to 256 bytes.
The final step is to generate the configuration files for Aurora Vision Studio. You can do this by right-clicking on the device icon, then navigating to "Additional Functions -> Export -> DBM/nxd..", entering your configuration name and clicking "Save". You can now close SYCON.net for this example; remember to save your project beforehand.
4) Example configuration of EtherCAT PLC
Below you will find an example configuration process for the EtherCAT PLC Omron NXJP2-9024DT in the Sysmac Studio software.
At this point it is essential to connect to the PLC device. To do this in Sysmac Studio, navigate to "Controller -> Communication Setup...". There might be an error in the Ethernet Communication Test if your network card is configured as DHCP (see the Configuration of the network card section in this tutorial). If communication with the PLC is established, you will receive "Test OK".
Add support for the Hilscher card using the ESI Library configuration (path \COMSOL-ECS V4.8.0.4\EDS). To do this in Sysmac Studio, navigate to the EtherCAT tab, right-click the Master icon and choose "Display ESI Library".
If communication with the PLC is established and the Hilscher card is connected to the EtherCAT port of the PLC, it is possible to load the EtherCAT configuration to the PLC. To accomplish that, right-click the Master icon and choose "Compare and Merge with Actual Network Configuration".
If everything is connected correctly, the Compare and Merge with Actual Network Configuration window should pop up. To match the configuration in software to the configuration in hardware, click "Apply actual network configuration".
Choose the previously added slave device, navigate to the "Edit PDO Map Settings" button and click it. This step is essential to match the configuration from SYCON.net.
In the Edit PDO Map Settings window click "Apply actual device". Then click "Apply" and "OK".
In the online mode of the PLC controller navigate to "Controller -> Synchronize...". Mark with ticks all the changes and click "Transfer To Controller".
In the online mode of the PLC controller navigate to "Controller -> Transfer... -> Transfer To Controller" and download changes.
Below you can find a test of the input and output memory using a watchtable in Sysmac Studio and the IO Monitor in SYCON.net. To turn on the IO Monitor, right-click the card icon in the SYCON.net project and click "Connect". Then double-click this icon and navigate to "Tools -> IO Monitor". By clicking "Update" you can send a changed frame to the PLC.
In the ReadSection step macrofilter you can find, for example, the Hilscher_Channel_IORead_SInt8 filter, which reads 8-bit signed data from a predefined memory area. Using a different offset for each filter enables access to the different variables created in the PLC (see the Example configuration of EtherCAT PLC section of this tutorial).
The WriteSection step macrofilter contains Hilscher_Channel_IOWrite filters (for instance, Hilscher_Channel_IOWrite_SInt8) with adequate offsets and data types to match the PLC data variable configuration.
Below, reading from and writing to the PLC is presented using Aurora Vision Studio and the Sysmac Studio I/O Map. The decimal values of the variables depend on the data type used.
Troubleshooting
1. Make sure that the current program settings are loaded to the Hilscher card. If not, please use the SYCON.net application to connect and download the settings to the devices, as shown in the picture below.
2. Check your device in Device Assignment. This step was described in the Configuration using SYCON.net section of this tutorial.
3. In Sysmac Studio you may encounter a Minor fault Controller Error like the one below, which typically does not affect data exchange. Replacing Ethernet cables or hardware may make this warning disappear.
4. After changes, it may be necessary to restart the ODMV3 service.
5. If none of the above advice has helped, please restart your computer.
Open the newly created Task Macrofilter and add the ModbusTCP_ReadDiscreteInputs, ModbusTCP_ReadMultipleIntegerRegisters, ModbusTCP_ForceMultipleCoils and ModbusTCP_WriteMultipleIntegerRegisters filters in an arrangement similar to that in the image below:
Below you can see working communication between the ModbusTCP client within the Aurora Vision Studio application and the simulated server.
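For reference, a Modbus TCP frame is simple enough to build by hand, which can help when debugging such a setup. The sketch below sends a single "Read Holding Registers" request over a raw socket (the host, register address and count are placeholders; in a real program the ModbusTCP filters handle this exchange for you):

import socket
import struct

def read_holding_registers(host, start_addr=0, count=4, unit_id=1, port=502):
    """Send one Modbus TCP 'Read Holding Registers' (function 0x03) request.

    Frame = MBAP header (transaction id, protocol id = 0, length, unit id) + PDU."""
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(mbap + pdu)
        response = sock.recv(256)
    # Register data starts after the 7-byte MBAP header, function code and byte count.
    return struct.unpack(">%dH" % count, response[9:9 + 2 * count])

# values = read_holding_registers("127.0.0.1")  # e.g. against a local Modbus simulator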
Create a new project in the TIA Portal environment and add your device by double-clicking on the Add new device button. Select your controller
model, set its CPU and other available components listed in the drop-down menu. Once you are ready, click the OK button.
Expand the Program blocks list and double-click the Main icon. A network view will appear. Expand the Communication tab on the right-hand side
(within the Instructions section) and double-click on the MB_SERVER icon available at the Communication -> Others -> MODBUS TCP location.
Drag the MB_SERVER block from the Communication -> Others -> MODBUS TCP location and drop it onto Network 1 within the application Basic.
Troubleshooting IP addressing
Select the Accessible devices icon from the toolbar to open the configuration window. Select the PN/IE type of the PG/PC interface and then the
Ethernet card connected to the PLC. Click the Start search button, choose your PLC from the list of accessible nodes and click the Show button.
The MB_SERVER instruction is used to establish the Modbus TCP communication. In the image above, you can see some DB blocks connected to
several inputs and outputs of the MB_Server block. In order to establish communication between a client and the server, you will need to set them up
according to the way presented below:
MBConfig - DB block which defines the connection type, the address of the ModbusTCP server as well as the used Port (by default 502)
Now, define variables in the Data block. Use the same names and data types as in the image below. If you want to create a new row, just right-
click on any row and select Insert row. Note that setting the right values of the Start values is essential in this step.
BUFF - DB block serving as the communication buffer for the data received or sent between the PLC and the Aurora Vision Studio application
Create a word array in the BUFF data block; it should have the same size as in the image below:
Attach the data block variables to the inputs and outputs of the MB_SERVER block in the way presented below:
If all previous steps have been performed correctly, your PLC program is ready. Compile the application and download it to the device. To see the
current states of variables, turn on the Monitoring mode:
Use the application created in Aurora Vision Studio in the section where the Modbus TCP simulator was used. To test discrete inputs, coils and
registers in TIA Portal, create a Watch table and a Force table, like in the image below:
For Networks 3 and 4, use MOVE blocks to move BUFF words to ModelData.T and ModelData.dt.
For Network 5, use MOVE blocks to move BUFF words to ModelData.Control. For Network 6, use a CONV block to convert Real values to Dint values for further use in the timer blocks.
Network 7 uses TON Timer blocks to generate a timed impulse signal for further calculations. Create a new branch in this network, put TON blocks
in the network and create Data blocks appropriate for them.
For Network 8, use a DIV block to scale ModelData.dt from milliseconds to seconds for further calculations. For Network 9, use the logic presented in
the image below. The formula for the CALCULATE block should be as follows:
OUT:= IN5 + IN3 / IN2 * (IN1 * IN4 - IN5)
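This formula can be read as a discrete first-order lag (low-pass) update. The sketch below demonstrates it in Python; the mapping of IN1..IN5 to input, time constant, step and gain is our assumption based on the surrounding variable names (ModelData.T, ModelData.dt, ModelData.Output_prev):

def calculate_block(in1, in2, in3, in4, in5):
    """OUT := IN5 + IN3 / IN2 * (IN1 * IN4 - IN5)

    Read as: output = prev_output + dt / T * (gain * input - prev_output),
    assuming IN1 = input, IN2 = T, IN3 = dt (in seconds, after the Network 8
    division), IN4 = gain and IN5 = previous output."""
    return in5 + in3 / in2 * (in1 * in4 - in5)

# Each cycle the output moves a fraction dt/T toward gain * input:
out = 0.0
for _ in range(3):
    out = calculate_block(in1=10.0, in2=2.0, in3=0.1, in4=1.0, in5=out)
    print(round(out, 3))  # 0.5, 0.975, 1.426, ... approaching 10.0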
For Network 10, use MOVE blocks to move ModelData.Output_prev words to BUFF. This variable will be read in Aurora Vision Studio application.
The UpdateStatusLabel macrofilter is invoked in a few parts of the program; for instance, to notify the user about successful execution or an upcoming error:
An IO_ERROR error handler which is executed when connection in the online mode is interrupted:
Interfacing Wenglor profile sensor to Aurora Vision Studio
Purpose and requirement
This document explains how to interface a Wenglor profile sensor to Aurora Vision Studio.
Required equipment:
Wenglor weCat3D Server 2.0.0 or later. You can download it from the GigE Vision Interface section (version 2.2.1 used);
Aurora Vision Studio 5.0 Professional or later (version 5.2 used);
Wenglor weCat3D sensor (MLSL123 used).
12. The provided browser-based user interface allows you to configure the parameters of the Wenglor sensor. You can navigate to the 2D/3D
profile settings tab to check if profile is acquired with Intern Sync mode:
Setting up Wenglor weCat3D server
After downloading the Wenglor server archive, you should find a folder named "weCat3DGigEInterface" inside. Copy it to the folder with the Aurora Vision Studio project:
This console application should be run with the appropriate arguments described in the Wenglor server documentation. You can create a file with the .bat extension with the following syntax:
Using a .bat file is not obligatory; you can also start the server with the appropriate arguments from the Command Prompt. Alternatively, you can start the server with the Execute_StartOnly filter, as described in the next chapter.
In the INITIALIZE section, the Execute_StartOnly filter starts the Wenglor weCat3D server. Then the WenglorInitParameters macrofilter sets the appropriate parameters with GigEVision_SetParameter filters:
For more details about parameter initialization with Wenglor, please refer to the official example inside Aurora Vision Studio and to the manual attached to the downloaded weCat3D Server.
If you change inMode from Internal to Encoder, you will be able to acquire a whole surface from profiles acquired with the encoder trigger source:
Interfacing Gocator to Aurora Vision Studio
Purpose and requirement
This document explains how to interface a Gocator sensor to Aurora Vision Studio.
A Gocator is an advanced sensor created by LMI Technologies. It is purposed for 3D machine vision and processing point clouds. The Gocator has
many functionalities including:
Scanning objects and representing them as a point cloud or intensity images,
Various representations of scans,
Basic tools for 3D processing like: filtering, thresholding etc.,
Tools for performing measurements.
Required equipment:
Gocator Firmware Release 3.2 or later
Aurora Vision Studio 4.10 Professional or later
Setting up Aurora Vision Studio with a Gocator for the first time
Each sensor is shipped with a default IP address of 192.168.1.10. So before working with the Gocator you must make sure that the IP address has
been set correctly and is unique for each device in the network. You can verify that with the steps described below:
1. Connect a Gocator to the PC running Aurora Vision Studio Professional.
2. Power up the Gocator, connect it to the PC via Ethernet interface.
3. Open Control Panel on your PC.
4. Find Network and Sharing Center.
5. Choose Change adapter settings.
6. Right-click on unidentified network connection, choose Properties.
7. In Ethernet Properties find Internet Protocol Version 4 (TCP/IPv4) and click Properties.
13. Now go back to Aurora Vision Studio. In the Image Acquisition (Third Party) section find LMI and double-click it. Choose Gocator_GrabSurface to grab a surface or Gocator_GrabProfile to grab a profile. Run or iterate the program (iterating is recommended for testing). If everything works fine, you will receive the point cloud or the profile of the object from your Gocator. Note that this filter has an inAddress input, but it is not necessary to use it if there is only one Gocator connected to your PC. You must set this input if there are more sensors.
14. If no problems have occurred so far, you have successfully connected Aurora Vision Studio with the Gocator.
Troubleshooting
If you have any problems during the connection process, you may find some solutions here.
If you cannot connect to a Gocator, please verify your Ethernet connection. Unplug your Internet cable and plug in the cable from the Gocator. Verify whether you have correctly set the IP address.
If the device cannot connect, use the Security button on your Master device.
Make sure your browser uses the newest version of Flash.
If another page is displayed at the 192.168.1.10 address, press Ctrl + F5 to force a cache refresh.
If you encounter a problem we have not mentioned above, please let us know, so that we can investigate it and add it to this section.
Examples
In this chapter several examples are presented to help you understand how to use the functionalities described above and to learn how to work with the Gocator's user interface and filters in Aurora Vision Studio.
In the "Output" tab make sure that the profile top output is marked:
In Aurora Vision Studio choose the Gocator_GrabUniformProfile filter and add the output outProfileData to a preview window:
In Aurora Vision Studio choose the Gocator_GrabSurface filter and add the output outSurface to a preview window:
In the "Output" tab make sure that the output data you would like to send is marked:
In Aurora Vision Studio choose the Gocator_GrabMeasurement filter and add the output outValue to a preview window. Please note that the data output from the previous paragraph has a specific ID, so you must enter this value in Aurora Vision Studio - in the Property window there is a parameter named inMeasurementID that should be set to 9, because our goal is to display the value of the hole's radius:
Hardware connection
All devices must work in a common LAN network - henceforth called the shared network. In most cases they are connected to each other through a network switch.
The devices shall be connected as follows:
Master device e.g. PLC to the shared network
Hilscher card to PC
Hilscher card to the shared network
PC's network card to shared network
Before proceeding further, make sure that all devices are powered up.
To be sure that the firmware is installed properly, you can open the cifX Test software and then navigate to Device -> Open to achieve a result like the one below.
To open the card configuration, double-click its icon and, in the menu that pops up, navigate to "Settings/Device Assignment". Then scan for devices by clicking "Scan", mark the found device with a tick as in the image below, and click "Apply".
To open the card configuration, double-click its icon and select the "Configuration/Modules" category in the Navigation Area on the left side of the dialog. In the main window you can add individual modules. Remember that in Profinet, the Slot configuration must match the configuration of your master device, otherwise the connection will not work. For boolean indicators we recommend "1 Byte Output" or "1 Byte Input".
Then you have to download the configuration to the device, as on the screen below.
The final step is to generate the configuration files for Aurora Vision Studio. You can do this by right-clicking on the device icon, then navigating to "Additional Functions -> Export -> DBM/nxd..", entering your configuration name and clicking "Save". You can now close SYCON.net for this example; remember to save your project beforehand, so that it is easier to add new slots later.
4) Working with Profinet in Aurora Vision Studio
The full list of filters for communication over Hilscher devices is available under this link.
Before proceeding to the further steps, please check the Hilscher and AVS connection using the Hilscher_Driver_GetBoardInformation filter. If all previous steps were done correctly, you will be able to get the device board information, including BoardName and ID. Please note that in this step a connection to the Hilscher card via SYCON.net is required, so you need to run the software, right-click on the card icon and click Connect.
As described in the previous chapter, the SYCON.net application is not required for proper operation. On the contrary, it is highly recommended to keep it closed and to use the generated configuration files to guarantee high stability of the connection.
After following the instructions from chapter 3, you have the xxx.nxd and xxx_nwid.nxd files. To send data over Profinet, follow the steps below:
1. Configure connection in AVS with Hilscher_Channel_Open_Profinet filter. Drag it from Program I/O category in the Toolbox and drop it in the
Program Editor.
2. In the inBoardName input, enter the value acquired from the Hilscher_Driver_GetBoardInformation filter. In the inConfig and inNwid inputs, enter the paths to the files generated from SYCON.net.
3. To prevent I/O errors, make sure that you open the connection in the initialization step with Hilscher_Channel_Open_Profinet and close it at the end of the program using the Hilscher_Channel_Close filter.
4. For IO, we recommend the SlotRead and SlotWrite filters, as they are more convenient. For example, the Hilscher_Channel_SlotWrite_SInt8 filter writes 8-bit signed data to the selected slot. Slot numbers match those in the "Configuration/Modules" category of the card configuration in the SYCON.net program.
5) Example Configuration
Below you will find a sample configuration of Siemens TIA Portal, Hilscher SYCON.net and Aurora Vision Studio.
As you can see in the screenshots above, the slots are common to each software. The only difference is that Siemens and Hilscher inputs correspond to outputs from Aurora Vision Studio and vice versa - a Siemens output is an Aurora Vision Studio input. It should be read in the following manner: data outgoing from the PLC (output) is incoming to Aurora Vision Studio (input), while data incoming to the PLC (input) is outgoing from Aurora Vision Studio (output).
A summary of the data types and directions can be found in the table below:
For the slot configuration described in this paragraph the Aurora Vision Studio program will look as follows:
You can use either Hilscher_Channel_IOWrite / Hilscher_Channel_IORead or Hilscher_Channel_SlotWrite / Hilscher_Channel_SlotRead to send and receive data. The only difference is in addressing the accessible data:
IORead and IOWrite filters use the offset value
SlotRead and SlotWrite filters use the slot numbers
Both pieces of information can be found in the outSlots output of the Hilscher_Channel_GetSlots filter.
Sometimes it is necessary to combine a few integer values into one slot of data. To accomplish this task you can use binary buffers in Aurora Vision Studio. Let's assume that someone would like to write two 2-byte integers into one 4-byte slot (SInt32 in Aurora Vision Studio). In this situation the program should look as follows:
First, we use two WriteIntegerToBuffer filters with the specified format Signed_16Bit_BigEndian, connected in cascade, to write two different values into one buffer (in this case 55 and 99). We also specify the offset where the second value should be written. Later, we extract the combined value as one Integer with the ReadIntegerFromBuffer filter; for 8-byte integers, ReadLongFromBuffer should be used. Here we also specify the format of the data (Signed_32Bit_BigEndian). Then we send the obtained number with Hilscher_Channel_SlotWrite_SInt32. Between ReadFromBuffer and SlotWrite the combined number is meaningless by itself, but later in TIA Portal, when we access it as two separate 2-byte values, we get the values that were sent.
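The byte-level effect of this cascade can be reproduced with Python's struct module (shown only to illustrate the big-endian packing; the actual program uses the filters named above):

import struct

# Two cascaded WriteIntegerToBuffer calls: pack 55 and 99 as signed 16-bit big-endian.
buffer = struct.pack(">hh", 55, 99)

# ReadIntegerFromBuffer: reinterpret the same 4 bytes as one signed 32-bit integer.
(combined,) = struct.unpack(">i", buffer)
print(combined)  # 3604579 == (55 << 16) | 99

# On the PLC side the slot is accessed again as two separate 2-byte values:
print(struct.unpack(">hh", struct.pack(">i", combined)))  # (55, 99)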
Troubleshooting
If it is still not possible to use AVS with the Hilscher card, first make sure that you have correctly completed all previous steps, and then try the following advice.
1. Use the Ethernet Device Configuration software to set a static IP address for the Hilscher card.
2. Make sure that the current program settings are loaded to the Hilscher card. If not, please use the SYCON.net application to connect and download the settings to the devices, as shown in the picture below.
3. If your master device has a problem connecting to the Hilscher card, use the following settings in the SYCON.net application. To do this, right-click on the card icon and select Configuration...
4. If none of the above advice has helped, please restart your computer.
Hardware connection
In these application notes, one network is configured using an Ethernet switch. Devices connected to this network have addresses 192.168.0.XXX, where XXX stands for a number from 1 to 254, unique for every connected device/port.
The Anybus ABC4090 gateway has 5 configurable In/Out ports. Below you will find a description of these ports and the connection scheme:
X1 - Configuration port. By default this address is set to 192.168.0.10 and this address is used in these application notes. If you want to change it, use the HMS IPconfig tool in the way described in the ABC4090 configuration manual. Connected to the switch.
X2.1 - configurable port from network 1. Configured as Profinet. Connected to switch.
X2.2 - configurable port from network 1. Configured as Profinet.
X3.1 - configurable port from network 2. Configured as Modbus-TCP. Connected to switch.
X3.2 - configurable port from network 2. Configured as Modbus-TCP.
Before proceeding further, make sure that all devices are powered up.
2. The default configuration IP address should be set to 192.168.0.10. The PC must work in the 192.168.0.0 network, so the Ethernet-AnybusConfig IP was changed accordingly:
If you change the network card's IP as shown above and connect it to the CONFIG port of the gateway, you should be able to access the configuration menu by typing the 192.168.0.10 IP in the web browser:
3. In order to change the gateway's firmware, navigate to the Maintenance -> Files & firmware tab:
Click Upload and navigate to the downloaded firmware directory. It contains a GSDML file for later use (the hardware definition for the Profinet PLC) and the firmware file:
4. Devices linked should have an IP address which belongs to 192.168.0.0 network. Go to Configuration -> PROFINET tab, change settings
like on the screen attached below and click Apply:
5. Devices linked should have an IP address which belongs to 192.168.0.0 network. Go to Configuration -> Modbus TCP tab, change settings
like on the screen attached below and click Apply:
6. Navigate to I/O configuration tab, change settings like on the screen attached below and click Apply:
If your application requires more or fewer bytes to be exchanged, feel free to change these values. It is important to have the same configuration in the gateway and in the PLC.
5. Add two modules to the project which match these configured in Anybus web configurator. Drag & drop IN 0016 and OUT 0016 module onto
Device overview:
6. If all previous steps have been done correctly, your PLC program is ready. Compile the program and download it to the device. To see the current states of variables, turn on the Monitoring mode:
7. It is good practice to access the gateway's In/Outs via PLC tags - using raw addresses, you can mistakenly modify or read the wrong bytes. Create an "AnybusTags" table in the PLC tags folder:
To monitor and easily modify tags, please add a Watch table in the Watch and force tables folder:
Configuring Aurora Vision Studio application
1. The Aurora Vision Studio application should have ModbusTCP_Connect and I/O Read/Write filters as on the screen attached below:
2. Turn on online mode and monitoring mode in TIA Portal. You can create watch tables to easily modify data in TIA Portal environment:
Troubleshooting
In case of any problem with gateway configuration, please try manuals and tutorial videos from Anybus support page.
b. When the camera is in the same subnet, it is possible to change its settings (for example changing its static IP).
c. If the camera is connected but not detected you may open the window (from point a), manually type the camera MAC address and
assign it another IP address. The menu can be opened from the Tools menu.
4. Now you can run the program. Previewing the outImage will show you the view from the camera.
Changing camera parameters
You can change camera parameters programmatically using SetParameters filters. To demonstrate this, we will expand the previous program.
1. Add a GigEVision_SetRealParameter filter and set its inAddress to the same address as GigEVision_GrabImage.
2. Specify the parameter name. If you are not sure about the name, you can select it through the GigE Vision device tree. To do that, click the "..." button next to inParameterName.
3. Here you can see all available camera parameters with short descriptions. Select the parameter named Exposure Time (Abs) (or a similar one if it is not present). As you can see, it controls the camera exposure time in microseconds and its type is IFloat.
a. If you cannot find the parameter, try using the search function (the magnifying glass icon).
b. The type of the filter needs to match the type of the parameter. The exception is that IFloat is represented as Real in Aurora Vision
Studio.
c. Some more advanced parameters may not be visible unless Visibility is set to the correct level.
4. Make sure that the parameter ExposureAuto is set to Off. Otherwise, it will not be possible to manually change the exposure value.
5. Before closing the window, note the minimum and maximum values of the exposure time. After selecting the parameter, set inValue to the lowest acceptable value.
6. Run the program. While running, steadily increase the inValue input. You will notice that the camera image gets brighter and brighter.
a. Try not to go outside the acceptable range; doing so will result in an error if the inVerify input is set to True.
7. It is also possible to check a parameter's value by using GigEVision_GetParameter filters. Try adding one in the Float variant and specifying the same name as in the previous filter.
a. You may notice that the read value is not always equal to the inValue of SetParameter. This is because the camera modifies it.
With GigEVision_GetParameter filters it is possible not only to check editable parameters but also read-only ones. For example, if the camera was set to automatically adjust exposure, the current exposure time might be useful to know. Another example would be reading parameters holding device-specific information, like the maximum image size.
For some quick testing you may also want to set parameters directly in the GigEVision tree.
It is important to note that most parameters are stored in volatile memory and as such will reset after unplugging the camera from power. Some parameters are stored in non-volatile memory, such as Device User ID, but they usually do not affect the acquisition directly. Because of this, it is recommended to design the program in such a way that every parameter with a value different from the default is set programmatically.
2. Add a GigEVision_SetIntegerParameter filter and select the Width parameter. You may notice that it is greyed out.
a. While selecting the parameter, check the information tab. There may be additional information about possible values, e.g. the increment. Some parameters can only take values that are multiples of certain numbers, like 2 or 4.
3. Enter 240 as inValue and run the program. You will encounter an error saying that the parameter is not writeable.
4. Now add a GigEVision_StopAcquisition before the previous Set filter and rerun the program.
5. The parameter is now set without errors and the output image is smaller.
As you can see the Width parameter was not writable while the acquisition was running. Stopping the acquisition before changing the parameter
made it writable.
You may also notice that while there is a filter to explicitly stop the acquisition, there is no filter to explicitly start it. GigEVision_GrabImage will attempt to use an ongoing acquisition, but if none is present, it will start a new one.
While this is acceptable for simple programs, it should be avoided when the status of the acquisition is program-controlled.
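Conceptually, this implicit start behaves like the following sketch (hypothetical names for illustration, not the actual library API):

    #include <cstdio>

    // Hypothetical camera type illustrating the implicit acquisition start
    // performed by GigEVision_GrabImage; not the real Aurora Vision API.
    struct Camera {
        bool acquisitionRunning = false;
        void StartAcquisition() { acquisitionRunning = true; }
        int WaitForFrame() { return 42; }   // placeholder for a real frame
    };

    int GrabImage(Camera& cam)
    {
        if (!cam.acquisitionRunning)        // no acquisition in progress?
            cam.StartAcquisition();         // start one implicitly
        return cam.WaitForFrame();
    }

    int main()
    {
        Camera cam;
        int frame = GrabImage(cam);         // first call also starts acquisition
        std::printf("frame: %d\n", frame);
    }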
Now we will make a proper program out of this.
1. Start with removing the filters added in this chapter.
2. Now select the rest and extract a task macrofilter (called MainLoop) from them. This will be the main acquisition loop.
3. Create a new step macrofilter called InitializeCamera and add it before MainLoop in INITIALIZE section.
4. In InitializeCamera add the following three filters: GigEVision_StopAcquisition, GigEVision_SetIntegerParameter (select the Width parameter) and GigEVision_StartAcquisition.
a. Set the width to the maximum possible value.
5. Drag the inAddress of any of the filters to the top bar to create a new input. Then add an output of the same type and connect it to the input.
6. Go up one level and drag the outAddress output to the MainLoop filter, creating a new input.
7. Inside that filter, connect the newly created input to all the GigE filters.
a. Now every GigE filter in the whole program shares the same camera address.
b. The final program should look like this:
Now the program is divided into two parts. The first part is executed once; after that the program enters MainLoop, which runs continuously until the program is stopped.
The parameters that cannot be changed during acquisition are set in the InitializeCamera filter, where the acquisition is guaranteed to be stopped.
All the other parameters are set in MainLoop.
In real applications not all parameters need to be changed during runtime. Even then, they will probably only be changed from time to time, not in every iteration.
For instance, specifying whether the camera runs continuously or in trigger mode will likely be done only once (even though it can be set during acquisition). Exposure time might be changed multiple times.
It is good practice to have all the parameters that are set only once in one macrofilter (like InitializeCamera), regardless of whether they require the acquisition to be stopped.
The parameters changed from time to time should be set in a variant macrofilter.
The reason for that is time saving. Setting a parameter to the same value does not take much time (Aurora Vision Studio caches previous values and avoids resending the same ones), but if we have a lot of parameters it adds up. It is also more intuitive if one-off parameters are in a dedicated filter.
It is important to note how cameras work in Aurora Vision Studio. When the acquisition is started, Aurora Vision Studio creates a background thread which buffers incoming frames in memory. This thread persists until the acquisition is stopped or the application exits the task that started the acquisition.
For example, if InitializeCamera were a task and not a step macrofilter, the acquisition would stop when exiting it and would have to be restarted in MainLoop.
Also, instead of passing the camera address as a parameter it is possible to put it into a global parameter.
4. Now we will modify the contents of MainLoop worker task. We want to be able to change trigger mode and to execute a software trigger.
a. Add an instance of GigEVision_SetEnumParameter. Connect it to the camera address and select Trigger Mode as its parameter.
b. Add a GigEVision_ExecuteCommand filter and connect it to the camera address. Through Device Manager select the parameter TriggerSoftware.
c. Move all those filters to the PROCESS section and add GigEVision_GrabImage to the ACQUIRE section.
5. Let's design an HMI. In this case it will feature:
a. View2DBox - which will display the camera image;
b. ComboBox - for selecting trigger mode;
c. NumericUpDown - to control exposure time;
d. ImpulseButton - to generate software trigger;
e. Labels - to label other controls.
6. Now we will configure the controls:
a. View2DBox - change its InitialSizeMode to FitToWindow and connect to GrabImage's outImage;
b. ComboBox (for trigger mode)
I. Connect outText to the inValue of the SetParameter for TriggerMode;
II. Expand List in the Data category in Properties and add the following items: On, Off;
III. Set Selection to 0;
c. ImpulseButton - connect its outValue to ExecuteCommand's inValue;
d. NumericUpDown:
I. Connect outValue to the inValue of the SetParameter for ExposureTime;
II. Set Minimum and Maximum to the values specified in the device tree for that parameter (for the camera used while writing this note, the values were 35 and 999985).
7. The program should display the camera image on the HMI. You should be able to change the exposure time. However, when you change the ComboBox's value to On, the program will freeze. Clicking the trigger button will not do anything.
a. This is caused by the fact that GrabImage works in its synchronous variant. The program cannot get past GrabImage to execute a trigger, so it waits for an image indefinitely.
b. To solve that, we can change the GigEVision_GrabImage variant to GigEVision_GrabImage_WithTimeout. Now the program will only attempt grabbing for a specified amount of time. Let's set inTimeout to 100 ms and rerun the program.
8. Now the program does not freeze when triggered mode is set on. If no image is grabbed within 100 ms, GrabImage returns Nil and the program proceeds to the next iteration, where the camera may be triggered.
a. Generally, if the camera is meant to acquire images only from time to time, it is a good idea to use the WithTimeout variant of GrabImage. It prevents program hang-ups in case of camera problems and allows you to inform the user that the camera may not be working correctly. However, the rest of the program must be designed in a way that handles Nil images properly.
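The resulting loop structure can be sketched in plain C++ as follows (illustrative names, not the actual API); the empty optional plays the role of Nil:

    #include <cstdio>
    #include <optional>

    // Stand-in for GigEVision_GrabImage_WithTimeout: returns an empty
    // optional when no frame arrives within the timeout.
    std::optional<int> GrabImageWithTimeout(int timeoutMs)
    {
        (void)timeoutMs;
        return std::nullopt;    // pretend no frame arrived this time
    }

    int main()
    {
        for (int i = 0; i < 3; ++i) {
            std::optional<int> image = GrabImageWithTimeout(100);
            if (!image) {
                // Nil result: skip processing, keep the loop (and HMI) alive
                std::printf("no frame, continuing\n");
                continue;
            }
            std::printf("processing frame %d\n", *image);
        }
    }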
We can now expand the program to enable the user to change other parameters, including those which require the acquisition to be off.
1. Create an empty variant macrofilter inside MainLoop and choose Bool as the fork type. Drag the camera address to the filter to create an input.
2. Enter the filter and choose the True variant. Add a StopAcquisition filter, followed by a SetParameter filter in the Integer variant, and select the Width parameter. After them, add a StartAcquisition filter. Connect all filters to the camera address.
3. In the HMI add two new controls:
a. NumericUpDown - to control the width's value; set its Minimum and Maximum to the respective limits of the Width parameter in the camera (for the camera used while writing this note, the values were 35 and 999985); set Increment to the respective value as well.
b. ImpulseButton - to confirm the new value.
Network configuration
Main network configuration
Create a new project in the TIA Portal environment and add your device by double-clicking Add new device. Select your controller model, its CPU, etc. from the list of controllers available in the dialog box. Click the OK button when you are ready.
Expand the Program blocks list and double-click on the Main icon. Now you should see the network view. Find the Communication tab on the right side and double-click on the TCON icon available in Communication -> Open user communication -> Others.
The TCON instruction is used for establishing the TCP communication. It is now visible in the Network View. Right-click on it and select Properties.
In the Configuration tab, you need to set the IP address of your PC. To check the IP address, you can use the Command Prompt and the ipconfig command. The other parameters should be set as in the next figure. If you use these settings, the IP of the PLC device should be set automatically. If you cannot establish the connection, please follow the steps described in the next chapter about troubleshooting.
Troubleshooting IP addressing
If you have not been able to properly set the IP address of the PC as described in the previous chapter, you should set a static IP following the steps described below; otherwise, feel free to skip this step.
Choose Accessible devices from the toolbar to open the configuration window. Select the PN/IE type of the PG/PC interface and select the Ethernet card connected to the PLC. Click the Start search button, choose your PLC from the list of accessible nodes and click the Show button. You should get a new static IP which you can use in the steps from the previous chapter.
Now define variables in the Data block. Use the same names and data types as shown in the image below. If you want to create a new row, just right-click on any row and select Insert row. Please note that setting the right Start Values is essential in this step.
Label all the inputs and outputs of the TCON block in the network view. To label a connection, drag and drop variables from the Data block to the Program block (for example TYCON) or double-click on a connection and select the displayed icon, as in the image shown below.
Add 5 networks to the Main program. In order to do that right-click on the existing network and select Insert network as shown in the image below:
Insert an additional communication block, TDISCON, from the Communication tab, as in the previous picture. Label all connections as shown in the image below. The TCON block will be used to establish the TCP/IP connection, while TDISCON will be used to close the connection.
The next step is to enable data exchange between Aurora Vision Studio and the PLC. Use the TRCV block for receiving messages and the TSEND block for sending messages to Aurora Vision Studio. Label the added blocks as shown in the image below:
To start a connection over TCP/IP using TCON, you need a rising-edge signal on the REQ input. The same applies to other function blocks (in TRCV, a rising-edge signal should be set on the EN_R input). You can switch these values manually by right-clicking on the connection and selecting Modify. In this sample application, an automatic pulse generator will be used in order to avoid switching the values manually. Insert a new network and add a TP block located in the Basic instructions tab inside the Timer operations folder (picture below).
Use the configuration from the next picture to create a network which will generate a proper signal.
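The rising-edge behavior that the pulse generator provides can be sketched as follows (plain C++ for illustration; the TP block implements the equivalent in ladder/FBD logic):

    #include <cstdio>

    // Minimal sketch of rising-edge detection - the kind of signal that
    // inputs such as REQ expect; illustrative only.
    bool risingEdge(bool current, bool& previous)
    {
        bool edge = current && !previous;   // true only on a false->true change
        previous = current;
        return edge;
    }

    int main()
    {
        bool prev = false;
        bool samples[] = { false, true, true, false, true };
        for (bool s : samples)
            std::printf("%d\n", risingEdge(s, prev));   // prints 0 1 0 0 1
    }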
In the last step, please add a sample math function, e.g. multiplication. In this example, a number received from Aurora Vision Studio will be squared and the result calculated on the PLC's side will be sent back to Aurora Vision Studio.
If all previous steps have been done correctly, your PLC program is ready. Compile the program and download it to the device. To see current states
of variables, turn on Monitoring mode.
Create a new Task Macrofilter and create an inSocket input (you can do this by dragging the outSocket and dropping it on the macrofilter). Connect
outSocket from TcpIp_Accept filter to the inSocket input of Task Macrofilter. In this example Aurora Vision Studio will connect over TCP/IP only
once and the connection will be held. The data exchange will be executed inside the Task Macrofilter.
Inside the MainLoop macrofilter you need to add the TcpIp_WriteBuffer filter to send messages and TcpIp_ReadBuffer to receive messages over TCP/IP. The PLC program works on the Buffer data type; in our example the user will be specifying decimal numbers, so a conversion between Buffer and Real values is needed. To do so, use the WriteRealToBuffer and ReadRealFromBuffer filters, which automatically convert a decimal value into the specified binary representation and write it to / read it from a buffer. Make sure to set the proper format - SinglePrecision_32Bit_BigEndian - otherwise the program will not work properly.
Add the Loop filter at the end of the algorithm inside the MainLoop macrofilter. The algorithm should look like the one shown in the next picture.
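For reference, SinglePrecision_32Bit_BigEndian means a 32-bit IEEE 754 float transmitted most significant byte first. The sketch below illustrates this byte layout (it is a demonstration of the wire format only, not the Aurora Vision implementation):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Serialize a float as a 32-bit IEEE 754 value, big-endian.
    void writeRealToBuffer(float value, uint8_t out[4])
    {
        uint32_t bits;
        std::memcpy(&bits, &value, sizeof bits);   // reinterpret the float bits
        out[0] = (bits >> 24) & 0xFF;              // most significant byte first
        out[1] = (bits >> 16) & 0xFF;
        out[2] = (bits >> 8) & 0xFF;
        out[3] = bits & 0xFF;
    }

    int main()
    {
        uint8_t buf[4];
        writeRealToBuffer(3.0f, buf);
        // 3.0f is 0x40400000 in IEEE 754, so this prints: 40 40 00 00
        std::printf("%02X %02X %02X %02X\n", buf[0], buf[1], buf[2], buf[3]);
    }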
In the next step a simple HMI will be created. Add NumericUpDown and TextBox controls from the Controls tab. Connect the NumericUpDown outValue output to the WriteRealToBuffer inValue input (previous picture) and the outValue output of the ReadRealFromBuffer filter to the inText of the TextBox control. Set the properties of the NumericUpDown HMI control as shown in the next picture.
In the last step, use a Label control to describe the previously added controls. The current algorithm should look like the one shown in the next picture.
The program can work in its current state; however, it will be improved in the next steps.
The Main Loop runs continuously due to the inShouldLoop input of Loop, which is always true in the current structure. In order to change that, add an ImpulseButton and name it "Disconnect and exit". The Loop filter keeps looping while a true value is passed to its inShouldLoop input. The default state of the ImpulseButton's outValue output is false, so this value should be negated before connecting it to the Loop filter. Insert the CopyObject filter inside the MainLoop macrofilter and set the Bool? data type. Connect the ImpulseButton's outValue to the inObject input of the CopyObject filter. Right-click on the outObject output, select Property Output and the Not variant. Now connect the outObject.Not output to the inShouldLoop input as shown in the next picture.
The same ImpulseButton control will be used to close the TCP/IP connection in the Main task. Use the structure described above and in Figure 25 to control the Loop filter in the Main task, as shown in the image below:
The application is almost ready, but every good and stable application should have error handling. The next modification will allow you to see the current connection state, for example: waiting for connection, connection active, connection lost. In order to do that, go inside the MainLoop macrofilter, right-click on the program editor window and insert a step macrofilter. Name the newly created macrofilter "SetHmiMessage" and create a new String? type input. Name the newly created input inMessage. This macrofilter will be used in a few places in this application; where you use it inside the MainLoop macrofilter, enter "Connection is active" in the macrofilter properties as shown in the image below:
Enter the SetHmiMessage macrofilter and add the CopyObject filter with the String? type. Add a new Label control to the HMI window. Set "Wait for connection" as the default text. Connect the outObject output of the CopyObject filter to the inText input of the Label control. The result should look like in the image below:
Currently, the connection state message has two different variants: "Wait for connection" and "Connection is active". Now we are going to add a message which will be displayed when the connection is broken. In order to do that, you need to use an Error Handler for the MainLoop macrofilter. Right-click on the MainLoop macrofilter, visible in the Project Explorer window, and select Add New Error Handler... Choose IO ERROR from the list, as shown in the next figure.
Enter the newly created error handler and add the SetHmiMessage macrofilter. Set the inMessage input to "Connection lost" as shown in the figure below:
The result should look like in the following picture. If all the previous steps have been done correctly, the program should work without any problems.
If you want to test the program with errors occurring, you should unlock breaks in the settings, as shown in the image below. With these settings, a pop-up window with error messages will not appear.
GigEVision_SetParameter has one more input, where the user specifies the value to be written. GigEVision_GetParameter has an output with the value of the parameter.
Both filters have variants depending on the type of the parameter being accessed: Real, Integer, Bool, Enum and String.
To see what the possible values for a given parameter are, the user may simply click on inParameterName, which shows a window with the available parameters for the currently connected camera.
Program overview
The designed program allows the user to select a camera from a list of connected devices. After selecting one, the acquisition starts. The program
displays a preview of the camera images. The user may modify selected camera parameters like gain, light source correction, and acquisition mode.
If the camera is set to triggered acquisition mode, the user can release a software trigger. While running, the program displays information about the current acquisition.
Under the preview there are two groups of controls. The controls on the left allow changing the parameters that require a restart of acquisition.
Because of that there is also a button which lets the user restart the acquisition with new parameters.
The other group has controls for the parameters that can be changed during the acquisition. The button labeled "Trigger" is used to start the acquisition with a software trigger. It is disabled when in continuous mode.
The MainLoop filter of the program can be divided into 3 parts. The first part is connecting the camera - here it is done by the ConnectCamera task
macrofilter.
The second and the third part are done by the UseCamera task macrofilter. First the program starts the acquisition with set parameters. Then the
program continuously acquires images while being able to change some of the camera parameters.
The Not and Loop filters control whether MainLoop will continue. The Not filter is connected to the "Exit program" HMI button. If the button has been pressed, the loop does not continue - the program goes back to Main and stops the acquisition (if a camera had been connected), which finishes the program.
Connecting the camera
The first part of the program allows the user to connect a camera. The delay in the filter is used to limit the number of iterations per second. The
inAddress input of the formula is connected to the HMI control labeled Camera Address.
After clicking the 3-dot button, a window opens letting the user select the camera. The selected camera's address is then sent to the formula block.
The animated waiting indicator is visible only if no camera has been chosen (so the address is Nil). The state of the indicator is controlled by the
outNilAddress output of the formula.
If the user chooses a camera and its address is no longer Nil, the loop of this macrofilter exits and the camera address is passed to the output.
Setting the parameters
After the camera has been connected, the next step is to start the acquisition. However, some acquisition parameters can only be set while not acquiring images. Such parameters include binning, packet size and pixel format. They are set in the SetAcquisitionParameters step macrofilter.
The NotOr filter negates the logical sum of two boolean values related to HMI controls: the first being outAddressChanged of the camera address picker, and the second - whether the "Exit program" button has been pressed. If both are false, the value of NotOr is true, which enables the loop to continue. If at least one of them is true, the program exits the UseCamera loop and goes back to MainLoop.
First, any ongoing acquisition is stopped to ensure the parameters can be set. After that, the mode of acquisition is set to "Continuous". This allows the camera to grab images to show in the preview of the program. The formula sets outTriggerMode to "Off" if the inMode parameter is anything other than "Single Frame".
The next steps of SetAcquisitionParameters set the binning of the camera (in both the horizontal and vertical axes) as well as the packet size. The possible values of the HMI controls used should be limited to be compatible with the camera's value range. For example, the packet size here ranges from 220 to 16404 in increments of 4.
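A requested value can be snapped to such a range and increment as in the sketch below (plain C++ for illustration; in the actual program this is handled by limiting the HMI control itself):

    #include <algorithm>
    #include <cstdio>

    // Snap a requested value to a parameter's valid range and increment,
    // using the packet size limits quoted above: 220..16404, step 4.
    int snapToValid(int requested, int min, int max, int inc)
    {
        int clamped = std::clamp(requested, min, max);
        // Round down to the nearest value of the form min + k * inc.
        return min + ((clamped - min) / inc) * inc;
    }

    int main()
    {
        std::printf("%d\n", snapToValid(1500, 220, 16404, 4));  // 1500 (valid)
        std::printf("%d\n", snapToValid(1502, 220, 16404, 4));  // snapped to 1500
        std::printf("%d\n", snapToValid(20000, 220, 16404, 4)); // clamped to 16404
    }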
Finally, the program starts the acquisition with the pixel format chosen by the user.
Acquiring images
Acquiring images takes place in a loop in the AcquireImages filter. It begins with an Or filter that sums the states of three HMI controls: the "Exit program" button, outAddressChanged of the camera address picker and the "Set parameters" button. If any of them is true, the program exits the loop and goes back to UseCamera.
After that there is another instance of SetAcquisitionMode. This time the inMode parameter is connected to the appropriate control allowing the user
to switch between continuous and triggered acquisition.
SetGain allows the user to switch between different modes of gain adjustment, as well as to set its value manually. First, the program sets whether the gain will be adjusted manually based on the value from the control. If it is manual, the formula below sets the state for the next filter and enables (or disables) the gain value control in the HMI.
AdjustGain is a variant macrofilter. If gain is to be adjusted manually it sets the parameter to the given value. If gain is controlled automatically, it
instead reads the value. This enables the user to see the automatically adjusted gain value in real time.
Next, the light source correction is set. It allows the user to select the preset best suited to the current lighting conditions and, by extension, to make colors look more natural.
TriggerAcquisition is another variant macrofilter. The inTimeout parameter sets how long the program should wait for an image from the camera.
The variant for continuous acquisition is almost empty - it only passes inTimeout to outTimeout.
If the acquisition is triggered, the program first checks the state of the "Trigger" button. If it has been pressed, the program executes the trigger command and passes inTimeout to outTimeout. If the button has not been pressed, no command is executed and the timeout is set to its lowest possible value - 100. A low timeout makes the program more responsive when the user does not trigger the acquisition.
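The variant logic just described can be summarized in the following sketch (illustrative names, not the actual macrofilter interface):

    #include <cstdio>

    // Sketch of the TriggerAcquisition variant logic described above.
    int triggerTimeout(bool triggeredMode, bool triggerPressed, int inTimeout)
    {
        if (!triggeredMode)
            return inTimeout;   // continuous variant: just pass the timeout on
        if (triggerPressed) {
            // the software trigger command would be executed here
            return inTimeout;   // wait the full time for the triggered frame
        }
        return 100;             // no trigger: a minimal wait keeps the HMI responsive
    }

    int main()
    {
        std::printf("%d\n", triggerTimeout(true, false, 5000));  // prints 100
        std::printf("%d\n", triggerTimeout(true, true, 5000));   // prints 5000
    }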
After that, in the GrabImage macrofilter, the program grabs and displays the image from the camera. Since inPixelFormat has been set when starting the acquisition in the previous part of the program, it is not necessary to change the value of this parameter here.
The LastNotNil filter keeps the image in the preview when the user is in triggered acquisition mode. Without it, images from the camera would only be displayed for one iteration of AcquireImages.
The last macrofilter in AcquireImages gathers data about the current acquisition and displays it on the HMI.
Notes
The program may not work on every camera. This is dependent on the available parameters as well as their values.
It is possible to modify more parameters in the program.
To better understand how some of the parameters work, reading the camera's documentation is recommended.
If the functionality of some of the filters used in this example is unclear, check their documentation.
12. Appendices
Table of content:
Backward Compatibility Policy
Quick Start Guide for the Users of LabVIEW
Quick Start Guide for the C/C++ Programmers
Deep Learning Service Installation
Backward Compatibility Policy
Programs created in Aurora Vision Studio are fully backward compatible within the same release number.
Between consecutive minor revisions (3.1, 3.2, ...) the following changes might be anticipated:
User Filters might have to be rebuilt.
Data files (e.g. Template Matching models) might have to be recreated.
Generated C++ code might have to be re-generated.
Quick Start Guide for the Users of LabVIEW
The list below maps LabVIEW concepts to their Aurora Vision Studio counterparts (LabVIEW term -> Aurora Vision Studio term):

Nodes -> Filter Instances
In both environments these elements have several inputs and outputs, and are the basic data processing elements.

Wires -> Connections
Connections in Aurora Vision Studio encompass more program complexity than wires in LabVIEW. Like in LabVIEW, there are basic connections and connections with data conversion (LabVIEW: coercion). There are, however, also array connections that transmit data in a loop and conditional connections that can make the target filter not executed at all.

Basic Data Types -> Basic Data Types
Aurora Vision Studio has two numeric data types: Integer and Real, both 32-bit. There are also Booleans (LabVIEW: Boolean), Strings, File (LabVIEW: Path) and enumerated types.

Arrays and Clusters -> Arrays and Structures
Arrays and Structures are more similar to the corresponding elements of the C language. Arrays in Aurora Vision Studio can be multi-dimensional, e.g. one can have arrays of arrays of arrays of integer numbers. Structure types are predefined and their elements can be "bundled" and "unbundled" with appropriate Make and Access filters.

Local Variables -> Labels
The programming model of Aurora Vision Studio enforces the use of data flow connections instead of procedural-style variables. Instead of local variables you can use labels, which replace connections visually, but not actually.

Global Variables -> Global Parameters
Global Variables in LabVIEW are recommended mainly for passing information between VIs that run simultaneously. In Aurora Vision Studio this is similar – global parameters should be used to communicate between Parallel Tasks and with HMI Events.

Dynamic values / Polymorphic VIs -> Generic Filters
Generic Filters of Aurora Vision Studio are more similar to templates of the C++ programming language. The user specifies the actual type explicitly and thus the environment is able to control the types of connections in a more precise way.

Waveform -> Profile
A sequence of numeric values that can be depicted with a 2D chart is called a Profile.

Virtual Instrument (VI, SubVI) -> Macrofilter
A macrofilter is a sequence of other filters hidden behind an interface of several inputs and outputs. It can be used in many places of a program as if it were a regular filter. Macrofilters do not have their own individual front panels. Instead, the environment of Aurora Vision Studio is designed to allow output data preview and input data control.

Front Panel -> HMI
In Aurora Vision Studio, the HMI (Human-Machine Interface) is created for the end user of the machine vision system. There is thus a single HMI for a project. There are no blocks in the Program Editor that correspond to HMI controls. The connections between the algorithm and the HMI controls are represented with "HMI" labels.

For Loop, While Loop -> Array Connections, Task Macrofilter
There are two methods to create loops. The first one is straightforward – when the user connects an output that contains an array to an input that accepts a single element, then an array connection is used. A for-each loop is here created implicitly. The second is more like the structures of LabVIEW, but also more implicit – the entire Task macrofilter works in a loop. Thus, when you need a nested loop you can simply create a new Task macrofilter. These loops are controlled by the filters that are used – more iterations are performed when there are filters signaling the ability to generate new data.

Shift Registers -> Registers
Registers in Aurora Vision Studio are very similar to Shift Registers. One difference is that the types and initial values of registers have to be set explicitly. Step macrofilters preserve the state of registers between subsequent executions within a single execution of the Task that contains them. There are no Stacked Shift Registers, but you can use the LastTwoObjects / AccumulateElements filters instead.

Case Structures -> Variant Macrofilter
While Task Macrofilters can be considered an equivalent of the While Loops of LabVIEW, Variant Macrofilters can be considered an equivalent of the Case Structures. Selector Terminals and Cases are called Forking Ports and Variants respectively.

Sequence Structures -> (not needed)
All macrofilters in Aurora Vision Studio are executed sequentially, so explicit Sequence Structures are not needed.

Formula / Expression Nodes -> Formula Blocks
Formula Blocks are used to define values with standard textual expressions. Several inputs and outputs are possible, but loops and other C-like statements are not. This feature is thus something between LabVIEW's Expression and Formula Nodes. If you need C-like statements, just use C++ User Filters that are well integrated with Aurora Vision Studio.

Breakpoints -> Iterate Current Macrofilter
As there are no explicit loops other than the loops of Task macrofilters, a macrofilter is actually the most appropriate unit of program debugging. One can use the Iterate Current Macrofilter command to continue the program to the end of an iteration of the selected macrofilter.

Error Handling -> Error Handlers
In Aurora Vision Studio there are no error in/out ports. Instead, errors are handled in separate subprograms called Error Handlers. In some cases, when an output of a tool cannot be computed, conditional outputs are used. The special value Nil then signals a special case. This happens, for example, when you try to find the intersection of two line segments which actually do not intersect.

Call Library Function Node -> User Filter
User Filters can be used to execute pieces of code written in Microsoft Visual C++. The process of creating and using User Filters is highly automated.
Quick Start Guide for the C/C++ Programmers
Aurora Vision Studio has been created by developers who were previously creating machine vision applications in C++. We created this product to make this work much more efficient, while another goal of ours was to retain as much of the capabilities and flexibility as possible. We did not, however, simply create a graphical interface for a low-level C++ library. We applied a completely different programming paradigm – the Data Flow model – to find the optimum balance between capabilities and development efficiency. Programming in Aurora Vision Studio is more like designing an electrical circuit – there are no statements and no variables, in the same way as they are not present on a PCB. The most important thing to keep in mind is thus that there is no direct transition from C++ to Aurora Vision Studio. You need to stop thinking in C++ and start thinking in data flow to work effectively. Automatic C++ code generation is still possible from the data flow side, but this should be considered a one-way transformation. At the level of a data flow program there are no statements, no ifs, no fors.
So, how should you approach constructing a program, when you are accustomed to such programming constructs as loops, conditions and
variables? First of all, you need to look at the task at hand from a higher level perspective. There is usually only a single, simple loop in machine
vision applications – from image acquisition to setting digital outputs with the inspection results. It is highly recommended to avoid nested loops and
use Array Connections instead, which are data-flow counterparts of for-each loops from the low level programming languages. For conditions, there
is no if-then-else construct anymore. There are Conditional Connections instead (data may flow or not), or – for more complex tasks – Variant
Macrofilters. The former can be used to skip a part of a program when some data is not available, the latter allow you to create subprograms that
have several alternative paths of execution. Finally, there are no variables, but data is transmitted through (usually unnamed) connections. Moreover,
Global Parameters can be used to create named values that need to be used in many different places of a program, and Macrofilter Registers can
be applied to program complex behaviors and store information between consecutive iterations of the program loop.
Please note that even if you are an experienced C++ programmer, your work on machine vision projects will get a huge boost when you switch to Aurora Vision Studio. This is because C++ is designed to be the best general-purpose language for crafting complex programs with complicated control flow logic. Aurora Vision Studio, on the other hand, is designed for one specific field and focuses on what is most important for machine vision engineers – the ability to experiment quickly with various combinations of tools and parameters, and to visualize the results instantly, alone or in combination with other data.
Here is a summary (C++ concept -> Aurora Vision Studio counterpart):

Conditions (the if statement) -> Conditional Connections, Variant Macrofilters
Aurora Vision Studio is NOT based on the control flow paradigm. Instead, data flow constructs can be used to obtain very similar behavior. Conditional connections can be used to skip some part of a program when no data is available. Variant Macrofilters are subprograms that can have several alternative paths of execution. See also: Sorting, Classifying and Choosing Objects.

Loops (the for and while statements) -> Array Connections, Task Macrofilters
Aurora Vision Studio is NOT based on the control flow paradigm. Instead, data flow constructs can be used to obtain very similar behavior. Array connections correspond to for-each style loops, whereas Task Macrofilters can be used to create complex programs with arbitrarily nested loops.

Variables -> Connections, Global Parameters, Macrofilter Registers
Data flow programming assumes no side effects. Computed data is stored on the filter outputs and transmitted between filters through connections. Global Parameters can be used to define a named value that can be used in many different places of a program, whereas Macrofilter Registers allow storing information between consecutive iterations.

Collections (arrays, std::vector etc.) -> Arrays
The Array type is very similar to the std::vector<T> type from C++. This is the only collection type in Aurora Vision Studio.

Functions, methods -> Macrofilters
Macrofilters are subprograms, very similar to functions from C++. One notable difference is that macrofilters cannot be recursive. We believe that this makes programs easier to understand and analyze.

GUI Libraries (MFC, Qt, WxWidgets etc.) -> HMI Designer
If a more complex GUI is needed, the algorithms created in Aurora Vision Studio can be integrated with a GUI written in C++ through the C++ Code Generator. See also: Handling HMI Events.

Static, dynamic libraries -> Modules
Bigger projects require better organization. As you can create libraries in C++ which can be used in many different programs, you can also create modules (a.k.a. libraries of macrofilters) in Aurora Vision Studio.

Breakpoints -> The "Iterate Current Macrofilter" command (Ctrl+F10)
As there are no side effects within macrofilters, there is no need to set breakpoints in arbitrary places. You can, however, run the program to the end of a selected macrofilter – just open this macrofilter in the Program Editor and use the "Iterate Current Macrofilter" command. The program will pause when it reaches the end of the selected macrofilter instance.

Threads -> (not needed)
There are no threads in Aurora Vision Studio. Instead, the filters utilize as many processors as possible internally, and the HMI (end user interface) is automatically run in parallel and synchronized with the program loop.
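The Arrays row above can be illustrated with a short example: an Aurora Vision array of arrays corresponds roughly to nested std::vector values, and an array connection behaves like the for-each loop below (plain C++, not Aurora Vision code):

    #include <cstdio>
    #include <vector>

    int main()
    {
        // An array of arrays of integers, e.g. the regions detected
        // in each of several images.
        std::vector<std::vector<int>> regionsPerImage = {
            {3, 5}, {7}, {}
        };
        // An array connection iterates over the outer array implicitly,
        // like this for-each loop over each image's detected regions.
        for (const auto& regions : regionsPerImage)
            std::printf("detected %zu regions\n", regions.size());
    }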
FAQ
Question:
How to mark the end of a loop started with filters such as EnumerateIntegers or Loop?
Answer:
This is by design different than in C++. The loop goes through the entire Task macrofilter, which is a logical part of a program. If you really need a nested loop (which is rare in typical machine vision projects), then create a Task macrofilter for the entire body of the loop. First of all, however, consider array connections. They allow, for example, inspecting many objects detected in a single image without creating an explicit loop.
Question:
Could you add a simple "if" filter, that takes a boolean value and performs the next filter only if the condition is met?
Answer:
This would be a typical construct in control-flow based programming languages. We do not want to mix different paradigms, because we must keep our software from becoming too complicated. You can achieve the same thing by using the MakeConditional filter and then passing data to the next filter conditionally. If there is no appropriate data that could be used in that way, then a variant macrofilter might be another solution.
Question:
How to create a variable?
Answer:
There are no variables in data flow. This is for the same reason you do not see variables on PCBs or when you look at a production line in a factory. There is a flow instead, and connections transmit data (or objects) from one processing element to another. If you need to store information between consecutive iterations, however, then stateful filters (e.g. AddIntegers_OfLoop), macrofilter registers or appropriate functions in formula blocks can be used.
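As an analogy only (plain C++, not Aurora Vision code), a macrofilter register behaves like state threaded through consecutive iterations of a Task's loop, similar to AddIntegers_OfLoop accumulating a sum:

    #include <cstdio>

    int main()
    {
        int accumulator = 0;                 // the register's initial value
        for (int input = 1; input <= 5; ++input) {
            accumulator += input;            // next state = f(previous state, input)
            std::printf("iteration %d: sum = %d\n", input, accumulator);
        }
    }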
Deep Learning Service Installation
1. Installation guide
To use Deep Learning Filters, Library or Service with Aurora Vision Studio or Aurora Vision Library, a corresponding version of Aurora Vision Deep
Learning must be installed (the best idea is to use the newest versions of both from our website). Before installation, please check your hardware
configuration.
Deep Learning is available in two versions:
GPU version (recommended) - a version using CUDA GPU acceleration. Much faster than the CPU counterpart.
CPU version - uses only the CPU; a GPU is not required or used. Relatively slow, especially during the training phase.
Requirements
A graphics card compatible with the CUDA toolkit. A list of compatible devices can be found on this website (all CUDA devices with "Compute Capability" greater than or equal to 3.5 and less than or equal to 8.6). A minimum of 2 GB of graphics memory is recommended. A display driver version of at least 461.33 is required (the latest display driver version is recommended).
At least 3.5 GB of disk space for program files; an SSD is recommended.
At least 8 GB of RAM.
A 64-bit processor; Intel i5, i7 or better is recommended. AVX support is required.
Windows 7, 8 or 10.
Known issues
If you are getting Access Denied errors during updating (or uninstalling), close all processes that may be using previously installed Deep Learning files, such as programs that use the Deep Learning Library, Aurora Vision Studio, Aurora Vision Executor and so on.
The installer sets an environment variable named AVLDL_PATH5_3 containing the path to the Library subdirectory. Exemplary use of AVLDL_PATH5_3 is presented in the C++ examples distributed with Aurora Vision Deep Learning.
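For illustration, a C++ program can read this variable to locate the library directory (a minimal sketch of reading the variable, not taken from the distributed examples):

    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        // AVLDL_PATH5_3 is set by the Aurora Vision Deep Learning installer.
        const char* path = std::getenv("AVLDL_PATH5_3");
        if (path)
            std::printf("Deep Learning library path: %s\n", path);
        else
            std::printf("AVLDL_PATH5_3 is not set - is Deep Learning installed?\n");
    }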
The Service icon can be displayed in three colors, indicating the Service status:
6. Logging
The Deep Learning Service and Filters log some information during execution to several files located in the %LocalAppData%/Aurora Vision/Aurora Vision Deep Learning 5.3 directory. The total disk space used by these files should not exceed several MB. Files older than a couple of days are automatically deleted. More information is provided in the documentation of the DL_ConfigureLogging filter.
If this disk space requirement is unacceptable, the Service can be executed in "minimal logging" mode. This can be achieved by running the run_service_minimal_logging.bat script, located in the Service installation folder. Note that it will not lead to any observable performance improvement.
7. Troubleshooting
The most common problems encountered by our clients can be separated into two groups:
1. Problems with installed Nvidia drivers - most problems occur while loading Deep Learning filters into Aurora Vision Studio.
Most common error from the console log:
Unable to load filter library "(...)AvlDlFilters.dll". Win32 error: The specified procedure could not be found.
2. Resource exhaustion during training - training takes more GPU/system memory than the current system can handle. In such a state your computer may lose stability and various problems may occur.
Most common errors:
Out of memory. Try freeing up hard disk space, using less training images, increasing downsample or resizing
images to smaller ones. or Service disconnected.
1. Invalid or old version of graphics card drivers - verify that your GPU has a supported driver version. It can be checked in the Windows Control Panel.
2. Corrupted installation of the GPU drivers - verify that your GPU drivers are installed properly. In some cases a full re-installation of the Nvidia drivers may be necessary.
Please verify that the following files are present on your computer: C:\Windows\System32\nvml.dll and C:\Windows\System32\nvcuda.dll.
3. The Deep Learning product version is too old for the latest version of Aurora Vision Studio - update Deep Learning to the latest version.
4. Changes in the PATH environment variable may affect how Deep Learning filters work - remove all paths from the PATH variable which may point to Nvidia CUDA runtime DLLs. Please verify that the command "where cudnn_ops_infer64_8.dll" returns no results.
5. The GPU card doesn't meet the minimum software requirements - in some cases older GPU cards may encounter runtime problems during training or inference.
References
See also:
Machine Vision Guide: Deep Learning - Deep Learning technique overview,
Creating Deep Learning Model - how to use Deep Learning Editor.