eCognition® Developer 9.0
User Guide
Trimble Documentation
Contents

4 An Introductory Tutorial
4.1 Identifying Shapes
4.1.1 Divide the Image Into Basic Objects
4.1.2 Identifying the Background
4.1.3 Shapes and Their Attributes
4.1.4 The Complete Rule Set
7 About Classification
7.1 Key Classification Concepts
7.1.1 Assigning Classes
7.1.2 Class Descriptions and Hierarchies
7.1.3 The Edit Classification Filter
7.2 Classification Algorithms
7.2.1 The Assign Class Algorithm
7.2.2 The Classification Algorithm
7.2.3 The Hierarchical Classification Algorithm
7.2.4 Advanced Classification Algorithms
7.3 Thresholds
7.3.1 Using Thresholds with Class Descriptions
7.3.2 About the Class Description
7.3.3 Using Membership Functions for Classification
7.3.4 Evaluation Classes
7.4 Supervised Classification
7.4.1 Nearest Neighbor Classification
7.4.2 Working with the Sample Editor
7.4.3 Training and Test Area Masks
7.4.4 The Edit Conversion Table
7.4.5 Creating Samples Based on a Shapefile
7.4.6 Selecting Samples with the Sample Brush
7.4.7 Setting the Nearest Neighbor Function Slope
7.4.8 Using Class-Related Features in a Nearest Neighbor Feature Space
7.5 Classifier Algorithms
7.5.1 Overview
7.5.2 Bayes
7.5.3 KNN (K Nearest Neighbor)
7.5.4 SVM (Support Vector Machine)
7.5.5 Decision Tree (CART resp. classification and regression tree)
7.5.6 Random Trees
13 Options
Acknowledgments
The Visualization Toolkit (VTK) Copyright
ITK Copyright
python/tests/test_doctests.py
src/Verson.rc
src/gt_wkt_srs.cpp
The eCognition documentation is divided into two books: the User Guide and the Reference Book. This User Guide is a good starting point for becoming familiar with workflows in eCognition software. It introduces the world of object-based image analysis, where classification and feature extraction open up new possibilities for the image analyst. The User Guide is intended as a manual for getting started with eCognition software, covering simple workflows as well as advanced classification concepts. Work through the chapters to improve your image analysis knowledge and to learn a language of image analysis that is not only object-based but can also be combined with pixel-based analysis.
The Reference Book lists all algorithms and features, and is the place to look up how they are calculated when you need detailed background information and explanations of supported parameters and domains. If you want to understand more of the underlying calculations, for example why Multiresolution Segmentation leads to good results or how distance is calculated in eCognition, then the Reference Book is the right choice to work through.
For additional help, please also refer to:
• The user community, which includes guided tour examples based on provided data, the latest webinars and the possibility to exchange rule sets
(http://www.ecognition.com/community)
• The training team, which offers open trainings as well as in-company and customized trainings
(http://www.ecognition.com/learn/trainings)
• The consultancy unit, which helps you solve your image analysis challenges, for example through feasibility studies or complete solution development for your projects
(http://www.ecognition.com/products/consulting-services)
• The support team (http://www.ecognition.com/support)
• Additional geospatial products and services provided by Trimble, which can be found at
http://www.trimble.com/imaging/
• The installation and licensing guides, which can be found in the installation directory in the document InstallationGuide.pdf
The following chapter introduces some terminology that you will encounter when work-
ing with eCognition software.
In eCognition Developer 9.0, an image layer is the most basic level of information con-
tained in a raster image. All images contain at least one image layer.
A grayscale image is an example of an image with a single layer, whereas the most common single layers are the red, green and blue (RGB) layers that combine to create a color image. In addition, image layers can contain information such as the intensity values
of biomarkers used in life sciences or the near-infrared (NIR) data contained in remote
sensing images. Image layers can also contain a range of other information, such as
geographical elevation models.
eCognition Developer 9.0 allows the import of these image raster layers. It also supports
what are known as thematic raster layers, which can contain qualitative and categorical
information about an area (an example is a layer that acts as a mask to identify a particular
region).
The first step of an eCognition image analysis is to cut the image into pieces, which serve
as building blocks for further analysis – this step is called segmentation and there is a
choice of several algorithms to do this.
The next step is to label these objects according to their attributes, such as shape, color
and relative position to other objects. This is typically followed by another segmentation
step to yield more functional objects. This cycle is repeated as often as necessary and the
hierarchies created by these steps are described in the next section.
An image object is a group of pixels in a map. Each object represents a definite space
within a scene and objects can provide information about this space. The first image
objects are typically produced by an initial segmentation.
An image object level is a data structure that incorporates image analysis results, which have been extracted from a scene. The concept is illustrated in figure 1.1.
It is important to distinguish between image object levels and image layers. Image layers
represent data that already exists in the image when it is first imported. Image object
levels store image objects, which are representative of this data.
The scene below is represented at the pixel level and is an image of a forest. Each level
has a super-level above it, where multiple objects may become assigned to single classes
– for example, the forest level is the super-level containing tree type groups on a level
below. Again, these tree types can consist of single trees on a sub-level.
Every image object is networked in a manner that each image object knows its context –
who its neighbors are, which levels and objects (superobjects) are above it and which are
below it (sub-objects). No image object may have more than one superobject, but it can
have multiple sub-objects.
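As an illustration of this network structure, the following Python sketch is purely conceptual; the class and attribute names are not part of any eCognition API. It only shows the constraints described above: at most one superobject, any number of sub-objects, plus neighbor links on the same level.

# Illustrative only - not the eCognition API. Shows the constraints on the
# image object network: at most one superobject, many sub-objects, neighbors.
class ImageObject:
    def __init__(self, name):
        self.name = name
        self.superobject = None      # at most one superobject
        self.subobjects = []         # any number of sub-objects
        self.neighbors = set()       # objects on the same level

    def add_subobject(self, child):
        child.superobject = self     # a sub-object knows its single parent
        self.subobjects.append(child)

forest = ImageObject("forest")
conifers = ImageObject("conifer stand")
forest.add_subobject(conifers)       # forest is the superobject of the stand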
1.4.3 Domain
The domain describes the scope of a process; in other words, which image objects (or
pixels or vectors) an algorithm is applied to. For example, an image object domain is
created when you select objects based on their size.
A segmentation-classification-segmentation cycle is illustrated in figure 1.2 on the next
page. The square is segmented into four and the regions are classified into A and B.
Region B then undergoes further segmentation. The relevant image object domain is
listed underneath the corresponding algorithm.
You can also define domains by their relations to image objects of parent processes, for
example, sub-objects or neighboring image objects.
1.5.1 Scenes
On a practical level, a scene is the most basic level in the eCognition hierarchy.
A scene is essentially a digital image along with some associated information. For ex-
ample, in its most basic form, a scene could be a JPEG image from a digital camera
with the associated metadata (such as size, resolution, camera model and date) that the
camera software adds to the image. At the other end of the spectrum, it could be a
four-dimensional medical image set, with an associated file containing a thematic layer
containing histological data.
The image file and the associated data within a scene can be independent of eCognition software (although this is not always the case). However, eCognition Developer will import all of this information and its associated files, which you can then save in an eCognition format; the most basic one is an eCognition project (which has a .dpr extension). A dpr file is separate from the image and, although they are linked objects, does not alter it.
What can be slightly confusing in the beginning is that eCognition Developer creates
another hierarchical level between a scene and a project – a map. Creating a project will
always create a single map by default, called the main map – visually, what is referred to
as the main map is identical to the original image and cannot be deleted.
Maps only really become useful when there is more than one of them, because a single project can contain several maps. A practical example is a second map that contains a
portion of the original image at a lower resolution. When the image within that map is
analyzed, the analysis and information from that scene can be applied to the more detailed
original.
1.5.3 Workspaces
Workspaces are at the top of the hierarchical tree and are essentially containers for
projects, allowing you to bundle several of them together. They are especially useful
for handling complex image analysis tasks where information needs to be shared. The
eCognition hierarchy is represented in figure 1.3 on the following page.
eCognition clients share portals with predefined user interfaces. A portal provides a se-
lection of tools and user interface elements typically used for image analysis within an
industry or science domain. However, most tools and user interface elements that are
hidden by default are still available.
Selecting Rule Set Mode will take you to the standard Developer environment. For simple
analysis tasks, you may wish to try Quick Map Mode (p 31).
Click any portal item to stop automatic opening. If you do not click a portal within three
seconds, the most recently used portal will start. To start a different portal, close the client
and start again.
You can start and work on multiple eCognition Developer clients simultaneously; this is
helpful if you want to open more than one project at the same time. However, you cannot
interact directly between two active applications, as they are running independently – for
example, dragging and dropping between windows is not possible.
Figure 2.2. The default workspace when a project or image is opened in the application
1. The map view displays the image file. Up to four windows can be displayed by se-
lecting Window > Split Vertically and Window > Split Horizontally from the main
menu, allowing you to assign different views of an image to each window. The
image can be enlarged or reduced using the Zoom functions on the main toolbar
(or from the View menu)
2. The Process Tree: eCognition Developer uses a cognition language to create rule-
ware. These functions are created by writing rule sets in the Process Tree window
3. Class Hierarchy: Image objects can be assigned to classes by the user, which are
displayed in the Class Hierarchy window. The classes can be grouped in a hierar-
chical structure, allowing child classes to inherit attributes from parent classes
4. Image Object Information: This window provides information about the character-
istics of image objects
5. View Settings: Select the image and vector layer view settings, toggle between 2D
and 3D, layer, classification or sample view, or object mean and pixel view
6. Feature View: In eCognition software, a feature represents information such as
measurements, attached data or values. Features may relate to specific objects or
apply globally and available features are listed in the Feature View window.
File Toolbar
This group of buttons allows you to load image files, open and save projects:
This group of buttons allows you to open and create new workspaces and opens the Import
Scenes dialog to select predefined import templates.
These buttons, numbered from one to four, allow you to switch between the four window
layouts:
This group is concerned with displaying outlines and borders of image objects, and views
of pixels:
With show polygons active you can visualize the skeletons for selected objects (if this
button is not visible go to View > Customize > Toolbars and select Reset All):
This button allows the comparison of a downsampled scene with the original image resolution and toggles between Image View and Project Pixel View:
These toolbar buttons allow you to visualize different layers; in grayscale or in RGB. If
available they also allow you to switch between layers:
These toolbar buttons allow you to open the main View Settings, the Edit Image Layer
mixing dialog and the Edit Vector Layer Mixing dialog:
These toolbar buttons toggle between 3D point cloud view and back to the 2D image
view. The last button is only active in point cloud mode and opens the Point Cloud View
Settings dialog:
This region of the toolbar offers direct selection and the ability to drag an image, along
with several zoom options.
The View Navigate folder allows you to delete levels, select maps and navigate the object
hierarchy.
Tools Toolbar
The buttons on the Tools toolbar launch the following dialog boxes and toolbars:
• Redo
• Save Current Project State
• Restore Saved Project State
There are several ways to customize the layout in eCognition Developer, allowing you to
display different views of the same image. For example, you may wish to compare the
results of a segmentation alongside the original image.
Selecting Window > Split allows you to split the window into four – horizontally and
vertically – to a size of your choosing. Alternatively, you can select Window > Split
Horizontally or Window > Split Vertically to split the window into two.
There are two more options that give you the choice of synchronizing the displays. Inde-
pendent View allows you to make changes to the size and position of individual windows
– such as zooming or dragging images – without affecting other windows. Alternatively,
selecting Side-by-Side View will apply any changes made in one window to any other
windows.
A final option, Swipe View, displays the entire image across multiple sections, while still allowing you to change the view of an individual section.
2.3.3 Magnifier
The Magnifier feature lets you view a magnified area of a region of interest in a separate
window. It offers a zoom factor five times greater than the one available in the normal
map view.
To open the Magnifier window, select View > Windows > Magnifier from the main menu.
Holding the cursor over any point of the map centers the magnified view in the Magnifier
window. You can release the Magnifier window by dragging it while holding down the
Ctrl key.
2.3.4 Docking
By default, the four commonly used windows – Process Tree, Class Hierarchy, Image
Object Information and Feature View – are displayed on the right-hand side of the
workspace, in the default Develop Rule Set view. The menu item Window > Enable
Docking facilitates this feature.
When you deselect this item, the windows will display independently of each other, al-
lowing you to position and resize them as you wish. This feature may be useful if you
are working across multiple monitors. Another option to undock windows is to drag a
window while pressing the Ctrl key.
You can restore the window layouts to their default positions by selecting View > Restore
Default. Selecting View > Save Current View also allows you to save any changes to the
workspace view you make.
View Layer
To view your original image pixels, you will need to click the View Layer button on the
toolbar. Depending on the stage of your analysis, you may also need to select Pixel View
(by clicking the Pixel View or Object Mean View button).
In the View Layer view (figure 2.3), you can also switch between the grayscale and RGB
layers, using the buttons to the right of the View Settings toolbar. To view an image in its
original format (if it is RGB), you may need to press the Mix Three Layers RGB button.
Figure 2.3. Two images displayed using Layer View. The left-hand image is displayed in RGB,
while the right-hand image displays the red layer only
View Classification
Used on its own, View Classification will overlay the colors assigned by the user when
classifying image objects (these are the classes visible in the Class Hierarchy window) –
figure 2.4 on the next page shows the same image when displayed in pixel view with all
its RGB layers, against its appearance when View Classification is selected.
Clicking the Pixel View or Object Mean View button toggles between an opaque overlay
(in Object Mean View) and a semi-transparent overlay (in Pixel View). When in View
Classification and Pixel View, a small button appears bottom left in the image window –
clicking on this button will display a transparency slider, which allows you to customize
the level of transparency.
Feature View
The Feature View button may be deactivated when you open a project. It becomes active
after segmentation when you select a feature in the Feature View window by double-
clicking on it.
Image objects are displayed as grayscale according to the feature selected (figure 2.5).
Low feature values are darker, while high values are brighter. If an object is red, it has
not been defined for the evaluation of the chosen feature.
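One way to picture this rendering, as a rough sketch rather than the actual implementation: feature values are scaled across all objects to a gray range, and objects without a defined value are flagged (shown in red).

# Illustrative scaling of feature values to display gray levels (0-255).
# Objects with no defined value for the feature would be shown in red.
def feature_to_gray(values):
    defined = [v for v in values if v is not None]
    lo, hi = min(defined), max(defined)
    span = (hi - lo) or 1.0
    return ["red (undefined)" if v is None else round((v - lo) / span * 255)
            for v in values]

print(feature_to_gray([0.2, 0.8, None, 1.4]))   # darker .. brighter, undefined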
Figure 2.4. An image for analysis displayed in Pixel View, next to the same image in Classi-
fication View. The colors in the right-hand image have been assigned by the user and follow
segmentation and classification of image objects
Figure 2.5. An image in normal Pixel View compared to the same image in Feature View, with
the Shape feature “Density” selected from the Feature View window
This button switches between Pixel View and Object Mean View.
Object Mean View creates an average color value of the pixels in each object, displaying
everything as a solid color (figure 2.6). If Classification View is active, the Pixel View
is displayed semi-transparently through the classification. Again, you can customize the
transparency in the same way as outlined in View Classification on the preceding page.
The Show or Hide Outlines button allows you to display the borders of image objects
(figure 2.7) that you have created by segmentation and classification. The outline colors
vary depending on the active display mode:
Figure 2.6. Object displayed in Pixel View at 50% opacity (left) and 100% opacity (right)
These two colors can be changed by choosing View > Display Mode > Edit Highlight Colors.
• After a classification the outlines take on the colors of the respective classes in
View Classification mode.
Figure 2.7. Images displayed with visible outlines. The left-hand image is displayed in Layer
View. The image in the middle shows unclassified image objects in the View Classification
mode. The right-hand image is displayed with View Classification mode with the outline
colors based on user classification colors
Image View or Project Pixel View is a more advanced feature, which allows the compar-
ison of a downsampled scene (e.g. a scene copy with scale in a workspace or a coarser
map within your project) with the original image resolution. Pressing this button toggles
between the two views.
Scenes are automatically assigned RGB (red, green and blue) colors by default when
image data with three or more image layers is loaded. Use the Single Layer Grayscale
button on the View Settings toolbar to display the image layers separately in grayscale.
In general, when viewing multilayered scenes, the grayscale mode for image display
provides valuable information. To change from default RGB mode to grayscale mode, go
to the toolbar and press the Single Layer Grayscale button, which will display only the
first image layer in grayscale mode.
Figure 2.8. Single layer grayscale view with red layer (left) and DSM elevation information
(right)
Display three layers to see your scene in RGB. By default, layer one is assigned to the red
channel, layer two to green, and layer three to blue. The color of an image area informs
the viewer about the particular image layer, but not its real color. These are additively
mixed to display the image in the map view. You can change these settings in the Edit
Image Layer Mixing dialog box.
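The additive mixing can be pictured as a weighted sum per display channel. The sketch below is a simplification with assumed names, not eCognition code: each image layer contributes to the red, green and blue screen channels according to the weights set in the Edit Image Layer Mixing dialog.

import numpy as np

def mix_layers(layers, weights):
    """Illustrative additive layer mixing (not eCognition's implementation).
    layers  - list of 2D arrays, one per image layer
    weights - list of (r, g, b) weights, one tuple per layer
    Returns an RGB display image; each channel is normalized by its total weight."""
    h, w = layers[0].shape
    rgb = np.zeros((h, w, 3), dtype=float)
    totals = np.zeros(3)
    for layer, (r, g, b) in zip(layers, weights):
        for c, wgt in enumerate((r, g, b)):
            rgb[..., c] += wgt * layer
            totals[c] += wgt
    totals[totals == 0] = 1.0          # channels with no layer stay black
    return rgb / totals

# Three-layer mix: layer 1 -> red, layer 2 -> green, layer 3 -> blue
layers = [np.random.rand(4, 4) for _ in range(3)]
display = mix_layers(layers, [(1, 0, 0), (0, 1, 0), (0, 0, 1)])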
In Grayscale mode, this button displays the previous image layer. The number or name
of the displayed image layer is indicated in the middle of the status bar at the bottom of
the main window.
In Three Layer Mix, the color composition for the image layers changes one image layer
up for each image layer. For example, if layers two, three and four are displayed, the
Show Previous Image Layer button changes the display to layers one, two and three. When the first image layer is reached, the display wraps around to the last image layer.
In Grayscale mode, this button displays the next image layer down. In Three Layer
Mix, the color composition for the image layers changes one image layer down for each
layer. For example, if layers two, three and four are displayed, the Show Next Image
Layer button changes the display to layers three, four and five. When the last image layer is reached, the display wraps around to image layer one.
Figure 2.9. Edit Image Layer Mixing dialog box. Changing the layer mixing and equalizing
options affects the display of the image only
You can define the color composition for the visualization of image layers for display
in the map view. In addition, you can choose from different equalizing options. This
enables you to better visualize the image and to recognize the visual structures without
actually changing them. You can also choose to hide layers, which can be very helpful
when investigating image data and results.
NOTE: Changing the image layer mixing only changes the visual display
of the image but not the underlying image data – it has no impact on the
process of image analysis.
When creating a new project, the first three image layers are displayed in red, green and
blue.
1. To change the layer mixing, open the Edit Image Layer Mixing dialog box (fig-
ure 2.9):
• Choose View > Image Layer Mixing from the main menu.
• Double-click in the right pane of the View Settings window.
2. Define the display color of each image layer. For each image layer you can set
the weighting of the red, green and blue channels. Your choices can be displayed
together as additive colors in the map view. Any layer without a dot or a value in
at least one column will not display.
3. Choose a layer mixing preset (see figure 2.10):
• (Clear): All assignments and weighting are removed from the Image Layer
table
• One Layer Gray displays one image layer in grayscale mode with the red,
green and blue together
• False Color (Hot Metal) is recommended for single image layers with large
intensity ranges to display in a color range from black over red to white. Use
this preset for image data created with positron emission tomography (PET)
• False Color (Rainbow) is recommended for single image layers to display a
visualization in rainbow colors. Here, the regular color range is converted to a
color range between blue for darker pixel intensity values and red for brighter
pixel intensity values
• Three Layer Mix displays layer one in the red channel, layer two in green and
layer three in blue
• Six Layer Mix displays additional layers
4. Change these settings to your preferred options with the Shift button or by clicking
in the respective R, G or B cell. One layer can be displayed in more than one color,
and more than one layer can be displayed in the same color.
5. Individual weights can be assigned to each layer. Clear the No Layer Weights
check-box and click a color for each layer. Left-clicking increases the layer’s color
weight while right-clicking decreases it. The Auto Update checkbox refreshes the
view with each change of the layer mixing settings. Clear this check box to show
the new settings after clicking OK. With the Auto Update check box cleared, the
Preview button becomes active.
6. Compare the available image equalization methods and choose one that gives you
the best visualization of the objects of interest. Equalization settings are stored in
the workspace and applied to all projects within the workspace, or are stored within
a separate project. In the Options dialog box you can define a default equalization
setting.
7. Click the Parameter button to change the equalizing parameters, if available.
Figure 2.10. Layer Mixing presets (from left to right): One-Layer Gray, Three-Layer Mix,
Six-Layer Mix
1. Right-click on the Heat Map window or go to View > Thumbnail Settings to open
the Thumbnail Settings dialog box (figure 2.11).
2. Choose among different layer mixes in the Layer Mixing drop-down list. The One
Layer Gray preset displays a layer in grayscale mode with the red, green and blue
together. The three layer mix displays layer 1 in the red channel, layer 2 in green
and layer 3 in blue. Choose six layer mix to display additional layers.
3. Use the Equalizing drop-down box to select a method that gives you the best display of the objects in the thumbnails.
4. If you select an equalization method, you can also click the Parameter button to change the equalizing parameters.
The Layer Visibility Flag It is also possible to change the visibility of individual layers and maps. The Manage Aliases for Layers dialog box is shown in figure 2.12. To display the dialog, go to Process > Edit Aliases > Image Layer Aliases
(or Thematic Layer Aliases). Hide a layer by selecting the alias in the left-hand column
and unchecking the ‘visible’ checkbox.
Figure 2.13. Window leveling. On a black-white gradient, adjusting the center value defines
the mid-point of the gradient. The width value specifies the limits on each side
Window Leveling The Window Leveling dialog box (figure 2.14) lets you control the
parameters for manually adjusting image levels on-screen. Leveling sets the brightness
of pixels that are displayed. The Center value specifies the mid-point of the equalization
range; the Width value sets the limits on either side of it (figure 2.13).
It is also possible to adjust these parameters using the mouse with the right-hand mouse
button held down – moving the mouse horizontally adjusts the center of window leveling;
moving it vertically adjusts the width. (This function must be enabled in Tools > Options.)
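In other words, leveling maps the gray-value range from center minus half the width to center plus half the width onto the full display range, clipping values outside that window. The following sketch illustrates this interpretation; it is an assumption about the mapping, not eCognition source code.

import numpy as np

def window_level(values, center, width):
    """Map gray values inside [center - width/2, center + width/2] to 0..255.
    Values below the window become black, values above it white (clipping)."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    scaled = (np.asarray(values, dtype=float) - lo) / (hi - lo) * 255.0
    return np.clip(scaled, 0, 255)

# A narrow window around center 100 stretches that range to full contrast
print(window_level([50, 90, 100, 110, 200], center=100, width=40))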
Image Equalization
Image equalization is performed after all image layers are mixed into a raw RGB (red,
green, blue) image. If, as is usual, one image layer is assigned to each color, the effect is
the same as applying equalization to the individual raw layer gray value images. On the
other hand, if more than one image layer is assigned to one screen color (red, green or
blue), image equalization leads to higher quality results if it is performed after all image
layers are mixed into a raw RGB image.
There are several modes for image equalization:
• None: No equalization allows you to see the scene as it is, which can be helpful at
the beginning of rule set development when looking for an approach. The output
from the image layer mixing is displayed without further modification
• Linear Equalization with 1.00% is the default for new scenes. Commonly it dis-
plays images with a higher contrast than without image equalization
• Standard Deviation Equalization has a default parameter of 3.0 and renders a dis-
play similar to the Linear equalization. Use a parameter around 1.0 for an exclusion
of dark and bright outliers
• Gamma Correction Equalization is used to improve the contrast of dark or bright areas by spreading the corresponding gray values (illustrated in the sketch after this list)
• Histogram Equalization is well-suited for Landsat images but can lead to substan-
tial over-stretching on many normal images. It can be helpful in cases where you
want to display dark areas with more contrast
• Manual Image Layer Equalization enables you to control equalization in detail. For
each image layer, you can set the equalization method. In addition, you can define
the input range by setting minimum and maximum values.
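To make the linear and gamma modes above more concrete, here is a minimal sketch of the two transformations. The exact formulas eCognition applies are documented in the Reference Book; this version only shows the general idea of percentage clipping and gamma stretching.

import numpy as np

def linear_equalize(values, clip_percent=1.0):
    """Stretch gray values linearly, ignoring the darkest/brightest outliers.
    clip_percent=1.0 mimics the 'Linear (1.00%)' idea: about 1% of pixels are clipped."""
    v = np.asarray(values, dtype=float)
    lo = np.percentile(v, clip_percent / 2.0)
    hi = np.percentile(v, 100 - clip_percent / 2.0)
    return np.clip((v - lo) / (hi - lo + 1e-9) * 255.0, 0, 255)

def gamma_equalize(values, gamma=0.5):
    """Gamma correction: gamma < 1 brightens dark areas, gamma > 1 darkens them."""
    v = np.asarray(values, dtype=float)
    norm = (v - v.min()) / (v.max() - v.min() + 1e-9)
    return (norm ** gamma) * 255.0

pixels = np.random.randint(0, 4096, size=1000)   # e.g. a 12-bit image layer
display_linear = linear_equalize(pixels)
display_gamma = gamma_equalize(pixels, gamma=0.5)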
Figure 2.15. Left: Three layer mix (red, green, blue) with Gamma correction (0.50). Right:
One layer mix with linear equalizing (1.00%)
Figure 2.16. Left: Three layer mix (red, green, blue) without equalizing. Right: Six-layer mix
with Histogram equalization. (Image data courtesy of the Ministry of Environmental Affairs
of Sachsen-Anhalt, Germany.)
The Edit Vector Layer Mixing dialog can be opened by double clicking in the lower
right pane of the view settings dialog or by selecting View > Vector Layer Mixing or by
selecting the Show/hide vector layers button. This dialog lets you change the order of
different layers by drag and drop of a thematic vector layer. Any layer without a dot in
the column Show will not display. Furthermore you can select an outline color and a fill
color and set the transparency of the vector layer individually.
The Auto update checkbox refreshes the view with each change of the layer mixing settings. Clear this check box to apply the new settings only after clicking OK.
Figure 2.17. The Edit Vector Layer Mixing Dialog to select visualization of vector layers
In some instances, it is desirable to display text over an image – for example, a map title or
year and month of a multitemporal image analysis. In addition, text can be incorporated
into a digital image if it is exported as part of a rule set.
To add text, double-click in the corner of the map view (not on the image itself) where you want to add the text; this launches the corresponding Edit Text Settings window (figure 2.19).
The buttons on the right allow you to insert the fields for map name, slice position and any
values you wish to display. The drop-down boxes at the bottom let you edit the attributes
of the text. Note that the two left-hand corners always display left-justified text and the
right hand corners show right-justified text.
Text rendering settings can be saved or loaded using the Save and Load buttons; these
settings are saved in files with the extension .dtrs. If you wish to export an image as
part of a rule set with the text displayed, it is necessary to use the Export Current View
algorithm with the Save Current View Settings parameter. Image object information is
not exported.
If a project contains multiple slices, all slices will be labelled.
It is possible to specify the default text that appears on an image by editing the file
default_image_view.xml. It is necessary to put this file in the appropriate folder for the
portal you are using; these folders are located in C:\Program Files\Trimble\eCognition
Developer 9.0\bin\application (assuming you installed the program in the default loca-
tion). Open the xml file using Notepad (or your preferred editor) and look for the follow-
ing code:
<TopLeft></TopLeft>
<TopRight></TopRight>
<BottomLeft></BottomLeft>
<BottomRight></BottomRight>
Enter the text you want to appear by placing it between the relevant containers, for exam-
ple:
<TopLeft>Sample_Text</TopLeft>
You will need to restart eCognition Developer 9.0 to view your changes.
Inserting a Field
In the same way as described in the previous section, you can also insert the feature codes
that are used in the Edit Text Settings box into the xml.
For example, changing the xml container to <TopLeft> {#Active pixel x-value
Active pixel x,Name}: {#Active pixel x-value Active pixel x, Value}
</TopLeft> will display the name and x-value of the selected pixel.
Inserting the code APP_DEFAULT into a container will display the default values (map
number and slice number).
2.3.9 Navigating in 2D
• The left mouse button is used for normal functions such as moving and selecting
objects
• Holding down the right mouse button and moving the pointer from left to right adjusts window leveling (p 19)
• To zoom in and out, either:
– Use the mouse wheel
– Hold down the Ctrl key and the right mouse button, then move the mouse up
and down.
The map features of eCognition Developer 9.0 also let you investigate three-dimensional, four-dimensional and time series data. There are several options for viewing and analyzing such data using specialized visualization tools. Using the map view, you can explore three- or four-dimensional data in several perspectives at once, and also compare the features of two maps.
If you have loaded point cloud data to your project you can select the button “Point Cloud
View or Image View” to toggle between a 3D visualization mode and 2D image view.
With the point cloud mode active the following control scheme is applied:
• Left Mouse Button (LMB): While holding down the LMB you can move the
mouse up and down to achieve a smooth zooming. This zooming corresponds
directly to mouse movement and is best used when already close to the point cloud,
in order to see fine details.
• Right Mouse Button (RMB): While holding down the RMB, you rotate the point cloud around a pivot point. The pivot point is always the point in the cloud closest to the screen pixel you right-clicked. If you click on a point in the cloud, the whole cloud is rotated around that point on both the vertical and horizontal axes. Because the pivot always lies within the cloud, the cloud cannot be rotated off-screen.
• Mouse wheel: Rotating the wheel will produce a discrete zoom. This zooming is
faster than the LMB zooming and is performed in individual steps, corresponding
to the rotation of the wheel. It is best used to get close to the cloud if the camera is
far away, but not recommended when close to the cloud as it can jump through it.
• Both LMB and RMB: Holding down both the left and right mouse buttons will
enable the user to pan the camera left/right/upwards/downwards in 3D space.
• Level of Detail: Select one of the predefined values between 1% and 100% to
define the percentage of points displayed. For large data sets, lowering the value
will have a positive effect on the performance of view updates, especially when
first opening a project or reclassifying point clouds.
• Point Size: Choose the size in which the points are displayed:
– Small
– Medium
– Large
– Extra Large
The 3D image objects display and the planar projections allow you to view objects in 3D
while simultaneously investigating them in 2D slices.
You can select from among six different split-screen views (figure 2.21) of the image data
and use the 3D Settings toolbar to navigate through slices, synchronize settings in one
projection with others, and change the data range. To display the 3D toolbar, go to View
> Toolbars > 3D.
The 3D Toolbar
From left-to-right, you can use the toolbar buttons to perform the following functions:
TIP: If 3D rendering is taking too long, you can stop it by unchecking the
classes in the Class Filter dialog.
The Window Layout button in the 3D Toolbar allows you to choose from the available
viewing options. Standard XYZ coordinates are used (figure 2.22).
From left to right, the following views are available:
• XY Planar Projection
• XZ Planar Projection
• YZ Planar Projection
• 3D Image Object
Select one or more classes of image objects to display in the 3D image objects display.
This display renders the surface of the selected image objects in their respective class
colors.
1. To display image objects in 3D, click the Window layout button in the 3D Settings
toolbar and select the 3D Image Objects button or the Multi-Planar Reprojection
button to open the 3D image objects display
2. Click the Class filter button to open the Edit Classification Filter dialog box and
check the boxes beside the classes you want to display. Click OK to display your
choices.
Navigating in 3D Several options are available to manipulate the image objects in the
3D image objects display. The descriptions use the analogy of a camera to represent the
user’s point of view:
• Hold down the left mouse button to freely rotate the image in three dimensions by
dragging the mouse
• Ctrl + left mouse button rotates the image in x and y dimensions
• Shift + left mouse button moves the image around the window
• To zoom in and out:
– Holding down the right mouse button and moving the mouse up and down
zooms in and out with a high zoom factor
– Holding down the right mouse button and moving the mouse left and right
zooms in and out with a low zoom factor.
To enhance performance, you can click the 3D Visualization Options button in the 3D
Settings toolbar and use the slider to lower the detail of the 3D image objects. When
you select a 3D connected image object it will automatically be selected in the planar
projections.
Setting Transparency for 3D Image Objects Changing the transparency of image objects
allows better visualization:
1. Open the Classification menu from the main menu bar and select Class Legend.
If you are using eCognition Developer you can also access the Class Hierarchy
window
2. Right-click on a class and click Transparency (3D Image Objects) to open the slider
3. Move the slider to change the transparency. A value of 0 indicates an opaque ob-
ject. Any image object with transparency setting greater than zero is ignored when
selected; the image object is not simultaneously selected in the planar projections.
At very low transparency settings, some image objects may flip 180 degrees. Raise
the transparency to a higher setting to resolve this issue.
There are several ways to navigate slices in the planar projections. The slice number and
orientation are displayed in the bottom right-hand corner of the map view. If there is
more than one map, the map name is also displayed.
• Reposition the cursor and crosshairs in three dimensions by clicking inside one of
the planar projections.
• Turn crosshairs off or on with the Crosshairs button.
• To move through slices:
– Select a planar projection and click the green arrows in the 3D Settings tool-
bar
– Use the mouse wheel (holding down the mouse wheel and moving the mouse
up and down will move through the slices more quickly)
– Click the Navigation button in the 3D Settings toolbar to open Slice Position
sliders that display the current slice and the total slices in each dimension.
– Use PgUp or PgDn buttons on the keyboard.
• You can move an object in the window using the keyboard arrow keys. If you hold
down the Ctrl key at the same time, you can move down the vertical scroll bar
• To zoom in and out, hold down the Ctrl key and the right mouse button, and move
the mouse up or down
• Holding down the right mouse button and moving the pointer from left to right adjusts window leveling (p 19)
Change the view settings, image object levels and image layers in one planar projection
and then apply those settings to the other projections. For the MPR Comparison view,
synchronization is only possible for XY projections with the same map.
You can also use this tool to synchronize the map view after splitting using the options
available in the Window menu. Select one of the planar projections and change any of
the functions below. Then click the Sync button in the 3D settings toolbar to synchronize
the changes among all open projections.
To customize your window layouts and save them along with the selected view settings:
1. Create a customized window layout by opening a window layout and choosing the
view settings you want to keep for each projection
2. Select View > Save Current Splitter Layout in the main menu to open the Save
Custom Layout dialog box
3. Choose a layout label (Custom 1 through Custom 7) and choose synchronization
options for planar projections. The options are:
• None: The Sync button is inoperative; it will not synchronize view settings,
crosshairs, or zoom settings in any of the projections
• By rows: The Sync button operates only across rows of the display
• By columns: The Sync button operates only across columns of the display
• All: The Sync button synchronizes all planar projections displayed
4. Click OK to save the layout. It is saved to the user information on the computer.
Image data that includes a time series can be viewed as an animation. You can also step through frames one at a time. The current frame number is displayed in the bottom right
corner of the map view.
To view an animation of an open project, click the play button in the Animation toolbar;
to stop, click again. You can use the slider in the Animation toolbar to move back and
forth through frames. Either drag the slider or click it and then use the arrow keys on
your keyboard to step through the frames. You can also use buttons in the 3D Settings
toolbar to step back and forth through frames.
3.1 Introduction
The Quick Map Mode is designed to allow a user to solve simple analysis tasks without
having to develop rule sets. The main steps in analyzing an image are creating objects,
classifying objects and exporting results. For each of these steps, a small assortment of
actions is available. These actions can be combined freely utilizing the Analysis Builder
framework offered by eCognition Architect 9.0 and eCognition Developer products. Us-
ing the Quick Map Mode, users can build a new analysis, starting with data only, or build
on existing eCognition projects.
3.2 Workflow
The Quick Map Mode supports two basic workflows. Users can start with new data or
build on existing eCognition projects.
When starting with new data, a new project must be created. Once data is loaded, the first action needs to be a segmentation action that generates an object level, using either the quadtree or the multiresolution segmentation.
Once an image object level is generated, a classification action can be applied. This can
be the nearest neighbor, the optimal box or the brightness threshold action. From there
you can merge areas, continue classifying, re-segment areas or simply export results.
Working with an existing project allows segmentation and classification just as with new data, with the difference that you can start classifying right away if image objects are already available. When working with existing projects, some points need to be considered:
• Make sure that the action library is opened and then the project is loaded. Opening
an action library after the project will delete all existing levels and results
• Only classes which are flagged “display always” are displayed in the action drop-
down menus. If classes exist which are not displayed, you need to change this
accordingly or ask the rule set developer to do so.
The Quick Map Mode is operated within the Application Builder framework. If you start
eCognition Developer in the Quick Map Mode, the application is automatically opened.
To open the library in eCognition Developer when running in Rule Set Mode, go to Library > Open Action Library and open the action library stored in the bin\application folder of your installation: eCognition 8.0\bin\applications\QuickMapMode\ActionLibrary.
You can save the analysis settings in the Analysis Builder as a solution file (extension
.dax) and load them again.
• To save the analysis settings, click the Save Solution to a File button on the Archi-
tect toolbar or use Library > Save Solution on the main menu
• Alternatively, you can encrypt the solution by clicking the Save Solution Read-
Only button on the Architect toolbar or choosing Library > Save Solution Read-
only on the main menu bar.
Loading a Solution
Load an already existing solution with all analysis settings from a solution file (extension
.dax) to the Analysis Builder window.
Running a Solution
There are several ways to run a solution or components of a solution. You can utilize
buttons implemented in the individual actions or use the Architect toolbar functions. All
functions implemented with the individual actions will be explained with the respective
actions. Below is an overview of the functions offered by the Architect toolbar.
• Run Selected Action: Click this button to run a selected action. This function is
used to execute an action after the definition is set
• Run Solution Until Selected Action: Deletes the existing status and reruns the
entire solution until the selected action. If you build on existing projects you should
not use this button, since all existing objects and results are removed
• Execute Solution: Click Execute Solution to delete the existing status and run the
entire solution. If you build on existing projects you should not use this button,
since all existing objects and results are removed.
Use segmentation actions to create image objects or merge objects in the same class.
Three segmentation actions are provided:
• Quadtree: This action divides images into squares of different sizes, depending on
the homogeneity of the image. Homogeneous areas will have larger image objects
than complex or heterogeneous areas
• Multiresolution: The multiresolution segmentation algorithm consecutively
merges pixels or existing image objects. It is based on a pairwise region merging
technique
• Merge Objects: This algorithm merges image objects that are in the same level and
class into larger image objects.
Use the Segmentation (Quadtree) action to create image objects. It uses the quadtree-
based segmentation algorithm, which creates squares of differing sizes by cutting. There-
fore it is a top-down segmentation algorithm. You can define an upper limit of color
differences within each square; this limit is called the scale parameter.
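Conceptually, the quadtree split can be sketched as a recursive check of each square against the scale parameter. The code below only illustrates that top-down idea and is not the algorithm as implemented in eCognition (see the Reference Book for the exact definition).

import numpy as np

def quadtree_split(img, x, y, size, scale, squares):
    """Recursively cut a square into four while its color difference
    (here: max - min gray value) exceeds the scale parameter."""
    block = img[y:y + size, x:x + size]
    if size <= 1 or block.max() - block.min() <= scale:
        squares.append((x, y, size))          # homogeneous enough: keep square
        return
    half = size // 2
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        quadtree_split(img, x + dx, y + dy, half, scale, squares)

image = np.random.randint(0, 255, size=(64, 64))
squares = []
quadtree_split(image, 0, 0, 64, scale=40, squares=squares)
# Homogeneous regions end up as large squares, heterogeneous ones as small ones.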
A typical use is to increase processing performance significantly for images with a large background: apply a quadtree-based segmentation before separating the background, then use a different segmentation for the non-background areas of interest. To use the Segmentation (Quadtree) action:
1. Click ⊕ or the Add New Generic Action link in the Analysis Builder window.
Select the action, then click Add and Close.
2. In the Domain group box, select a level or create a new one to open the Enter Name
of New Level dialog box. Enter a name or accept the default, then click OK.
3. In the Domain group box, select a class or select Create New Class to open the
Create New Class dialog box. Enter a name or accept the default, then click OK.
4. Select Use Thematic Layers if you want to include thematic data.
5. Use the slider in the Scale area to select the scale of objects. A higher scale will
tend to create larger objects.
6. Move your mouse over fields in the Analysis Builder window to see descriptions
in the Description area.
7. Run the action by clicking Run. (Alternatively, you can run the action by selecting Analysis > Run Selected Action in the main menu.)
After running the action, you can see the resulting image objects by clicking in the project
view or by clicking the Show or Hide Outlines button to see all image objects outlined.
Use the Segmentation (Multiresolution) action to create image objects. It uses the mul-
tiresolution segmentation algorithm, which consecutively merges pixels or existing image
objects. It is a bottom-up segmentation based on a pairwise region merging technique.
Multiresolution segmentation is an optimization procedure which, for a given number
of image objects, minimizes the average heterogeneity and maximizes their respective
homogeneity.
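The pairwise region-merging idea can be sketched as follows. This is a strong simplification that assumes a plain color heterogeneity criterion; the actual multiresolution segmentation, including its shape criteria and the exact meaning of the scale parameter, is described in the Reference Book.

# Strongly simplified sketch of pairwise region merging (color only).
# Each region keeps the mean and size of its pixels; two neighboring regions
# are merged as long as the weighted increase in color heterogeneity stays
# below a scale threshold. This shows the bottom-up idea, not the exact criterion.
def merge_cost(a, b):
    mean_a, n_a = a
    mean_b, n_b = b
    merged_mean = (mean_a * n_a + mean_b * n_b) / (n_a + n_b)
    # increase in size-weighted deviation from the region means
    return n_a * abs(mean_a - merged_mean) + n_b * abs(mean_b - merged_mean)

def merge_pass(regions, scale):
    """One pass over a 1D chain of neighboring regions [(mean, size), ...]."""
    out = [regions[0]]
    for region in regions[1:]:
        if merge_cost(out[-1], region) < scale:
            mean_a, n_a = out[-1]
            mean_b, n_b = region
            out[-1] = ((mean_a * n_a + mean_b * n_b) / (n_a + n_b), n_a + n_b)
        else:
            out.append(region)
    return out

row = [(float(v), 1) for v in [10, 12, 11, 80, 82, 81, 83, 20]]
print(merge_pass(row, scale=5))   # similar neighbors merge into larger regions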
The following are examples of typical uses:
• Extracting features that are characterized not purely by color but also by shape
homogeneity
• Extracting land cover or man-made features from remote sensing imagery.
To use the Segmentation (Multiresolution) action:
1. Click ⊕ or the Add new Generic Action link in the Analysis Builder window. Select
Segmentation (Multiresolution) in the Add Action dialog box; then click Add and
Close.
2. In the Domain group box, select an Input Level, and an Output Level using the
drop-down arrows in the Level fields.
Choose from the available values or select Create New Level to open the Enter
Name of the New Level dialog box and enter a name or accept the default. Click
OK. If the output level equals the input level, the input level will be re-segmented.
3. Select a Class in the Domain group box.
Select from the available values or select create new class to open the Create New
Class dialog box, name a new class, select a color and click OK.
4. Select Use Thematic Layers if you want to include thematic data.
5. Use the slider in the Scale area to select the scale of objects. A higher scale will
tend to create larger objects.
6. Use the Color slider in the Settings area to determine the weight of color as a parameter in the segmentation result.
Higher values will tend to produce results with greater emphasis on the color of
image objects.
7. Move your mouse over fields in the Analysis Builder window to see descriptions
in the Description area.
8. Run the action by clicking Run. (Alternatively, you can run the action by selecting Analysis > Run Selected Action in the main menu.)
After running the action, you can see the resulting image objects by clicking in the project view or by clicking the Show or Hide Outlines button to see all image objects outlined.
Use the Segmentation (Merge Objects) action to merge objects that are in the same class
into larger objects. You must have classified image objects to use this action.
To use the Segmentation (Merge Objects) action:
1. Click ⊕ or the Add New Generic Action link in the Analysis Builder window.
Select Segmentation (Merge Objects) in the Add Action dialog box; then click
Add and Close.
2. In the Domain group box, select a level using the drop-down arrow. Choose from
the available values and click OK. This is the level where objects will be merged.
3. Select a Class in the Domain group box. Select from the available values; only objects in the selected class will be merged.
4. Select Use Thematic Layers if you want to include thematic data.
5. Run the action by clicking Run. Alternatively you can run the action by clicking
on Analysis > Run Selected Action in the main menu.
After running the action, you can see the resulting image objects by clicking in the project
view or by clicking the Show or Hide Outlines button to see all image objects outlined.
Use classification actions to classify image objects based on samples or thresholds. Four classification actions are available: Optimal Box Classifier, Clutter Removal, Nearest Neighbor and Brightness Threshold.
Use the Classification Box (Optimal Box Classifier) action to classify image objects based
on samples. You must first run a segmentation action to create image objects.
1. Click ⊕ in the Analysis Builder window, select Classification (Optimal Box Clas-
sifier) in the Add Action dialog box and click Add and Close to add it.
2. In the Domain group box, select a level where image objects that you want to
classify exist.
3. Select an Input Class from the drop-down list in the Domain group box. These are
the image objects that will be classified.
4. Select a Positive Output Class and a Negative Output Class from the drop-down list
in the Domain group box, or create new class names by selecting create new name.
These will be the two new classes into which the Input Class will be divided.
5. Select a Feature Space from the drop-down list. The feature space is the set of features used to classify image objects (a conceptual sketch follows these steps):
• Color uses color only.
• Color & Shape refers to color and shape combined.
• Color & Texture refers to color and texture combined.
6. Select positive samples by clicking the Positive Sample Selection magic wand but-
ton, then double-clicking on image objects in the Project View to select them as
samples of the prospective class. Each sample will have the color that you selected
when you created the Positive Output Class.
7. Next click the Negative Sample Selection magic wand button and choose some
negative samples in project view by clicking on image objects.
Each sample will have the color that you selected when you created the Negative
Output Class.
8. If you need to deselect a sample, click on it again.
9. To change the assignment of an individual sample, change the sample input mode
and select the sample again.
10. Click Apply to apply the new classes to the entire project.
11. Select more samples as needed and click Apply again.
12. Add samples from a vector or raster file, if needed.
13. Click Reset All Samples button to delete all samples and start over.
14. Once the results are satisfactory, save the solution.
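The general idea behind a box classifier can be pictured as fitting an axis-aligned box in feature space around the positive samples and labeling every object inside the box as positive. The sketch below is a conceptual illustration with made-up names; the optimal box classifier that eCognition actually applies is documented in the Reference Book.

# Conceptual box classifier sketch (not the eCognition implementation).
# Positive samples define a min/max box per feature; objects whose feature
# values all fall inside the box get the positive class, the rest the negative.
def fit_box(positive_samples):
    """positive_samples: list of feature vectors, e.g. [[brightness, ratio], ...]"""
    lows = [min(values) for values in zip(*positive_samples)]
    highs = [max(values) for values in zip(*positive_samples)]
    return lows, highs

def classify(box, feature_vector):
    lows, highs = box
    inside = all(lo <= v <= hi for v, lo, hi in zip(feature_vector, lows, highs))
    return "positive" if inside else "negative"

box = fit_box([[250, 0.4], [252, 0.5], [254, 0.45]])   # e.g. brightness, ratio
print(classify(box, [251, 0.42]))   # positive
print(classify(box, [120, 0.42]))   # negative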
Samples can be trained in an iterative mode using several input images. To do so, do the
following:
• Train the action on one image set, save the solution and open a second image.
• Run all steps of the solution until the classification step you want to train.
• Add samples
• Once the results are satisfactory, save the solution
Use the Classification (Clutterremoval) action to remove image objects below a defined
size threshold. To use the Classification (Clutterremoval) action:
Use the Classification (Nearest Neighbor) action to classify image objects by using samples; a conceptual sketch of the principle follows the steps below. To use the Classification (Nearest Neighbor) action:
6. Select positive samples, by clicking the Positive Sample Selection magic wand
button and then double-clicking on image objects in the project view to select them
as samples of the prospective class. Each sample will have the color that you
selected when you created the Positive Output Class.
7. Next click the Negative Sample Selection magic wand button and choose some
negative samples in project view by clicking on image objects.
8. Click the Preview Classification-Run button to preview the resulting classifications.
9. Click the Reset button to delete all samples and start over.
10. To finalize the classification, save the solution or add another segmentation action.
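A minimal sketch of the nearest neighbor principle behind this action: each unclassified object receives the class of its closest sample in feature space. The real implementation adds a fuzzy membership function with an adjustable slope; the names below are illustrative only.

import math

# Illustrative 1-nearest-neighbor classification in feature space.
# samples: list of (feature_vector, class_name) pairs taken from user samples.
def nearest_neighbor(samples, feature_vector):
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(samples, key=lambda s: distance(s[0], feature_vector))[1]

samples = [([90, 0.3], "shape"), ([254, 0.9], "background")]
print(nearest_neighbor(samples, [100, 0.35]))   # -> "shape"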
Use the Classification (Brightness Threshold) action to classify objects based on bright-
ness. To use the Classification (Brightness Threshold) action:
6. In the Preview group box, click the Test Classification-Run button to preview the
classification.
The classified objects display in the project view.
7. Click the Reset Classification-Run button to delete the classifications if needed.
8. To preserve the classification, save the solution or apply a new segmentation.
• Export Points
• Export Polygons
Use the Export (Points) action to export object co-ordinates as a set of vertices.
To export points:
1. Click ⊕ in the Analysis Builder window and select Export (Points) in the Add
Action dialog box. Click Add and Close
2. Run the action by going to Analysis > Run Selected Action in the main menu.
Use the Export (Polygons) action to export an ESRI .shp file containing the object out-
lines. The desktop export location is the image file folder. To export polygons:
1. Click ⊕ in the Analysis Builder window and select Export (Polygons) in the Add
Action dialog box. Click Add and Close.
2. Run the action by going to Analysis > Run Selected Action in the main menu.
As an introduction to eCognition image analysis, we’ll analyze a very simple image. The
example is very rudimentary, but will give you an overview of the working environment.
The key concepts are the segmentation and classification of image objects; in addition, it
will familiarize you with the mechanics of putting together a rule set.
The first step in any analysis is for the software to divide up the image into defined areas –
this is called segmentation and creates undefined objects. By definition, these objects will
be relatively crude, but we can refine them later on with further rule sets. It is preferable
to create fairly large objects, as smaller numbers of objects are easier to work with.
Right-click in the Process Tree window and select Append New from the right-click
menu. The Edit Process dialog appears. In the Name field, enter ‘Create objects and
remove background’. Press OK.
TIP: In the Edit Process box, you have the choice to run a process imme-
diately (by pressing Execute) or to save it to the Process Tree window for
later execution (by pressing OK).
In the Process Tree window, right-click on this new rule and select Insert Child. In the
Algorithm drop-down box, select Multiresolution Segmentation. In the Segmentation Set-
tings, which now appear in the right-hand side of the dialog box, change Scale Parameter
to 50. Press Execute.
The image now breaks up into large regions. When you now click on parts of the image,
you’ll see that – because our initial images are very distinct – the software has isolated
the shapes fairly accurately. It has also created several large objects out of the white
background.
Overview
The obvious attribute of the background is that it is very homogeneous and, in terms of
color, distinct from the shapes within it.
In eCognition Developer you can choose from a huge number of shape, texture and color
variables in order to classify a particular object, or group of objects. In this case, we’re
going to use the Brightness feature, as the background is much brighter than the shapes.
You can take measurements of these attributes by using the Feature View window. The
Feature View window essentially allows you to test algorithms and change their parameters;
double-click on a feature of interest, then point your mouse at an object to see what
numerical value it gives you. This value is displayed in the Image Object Information
window. You can then use this information to create a rule.
TIP: Running algorithms and changing values from the Feature View tree
does not affect any image settings in the project file or any of the rule sets.
It is safe to experiment with different settings and algorithms.
From the Feature View tree, select Object Features > Layer Values > Mean, then double-
click on the Brightness tag. A Brightness value now appears in the Image Object Informa-
tion window. Clicking on our new object primitives now gives a value for brightness and
the values are all in the region of 254. Conversely, the shapes have much lower brightness
values (between 80 and 100). So, for example, what we can now do is define anything
with a brightness value of more than 250 as background.
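To make the logic of this rule concrete, here is a minimal sketch in plain Python with NumPy (not eCognition rule-set code; the object representation is hypothetical and the 250 threshold is the value from the example above):

```python
import numpy as np

def brightness(pixels_per_layer):
    """Mean of the layer mean values for one image object.

    pixels_per_layer: list of 1-D arrays, one per image layer, holding
    the pixel values that belong to the object (hypothetical layout).
    """
    return float(np.mean([layer.mean() for layer in pixels_per_layer]))

def is_background(pixels_per_layer, threshold=250):
    # Objects brighter than the threshold are treated as background.
    return brightness(pixels_per_layer) > threshold

# A near-white object (values around 254) versus a dark shape (around 90).
white_object = [np.full(100, 254.0), np.full(100, 253.0), np.full(100, 255.0)]
dark_shape = [np.full(100, 85.0), np.full(100, 95.0), np.full(100, 90.0)]
print(is_background(white_object), is_background(dark_shape))  # True False
```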
Right-click on the child process you just created (the ‘50 [shape: 0.1 . . . ’ process) and select
Append New – this will create a new rule at the same level. (Once we’ve isolated the
background we’re going to stick the pieces together and give it the value ‘Background’.)
In the Algorithm drop-down box, select Assign Class. We need to enter the brightness
attributes we’ve just identified by pressing the ellipsis (. . . ) in the value column next
to Threshold Condition, which launches the Select Single Feature window. This has
a similar structure to the Feature View box so, as previously, select Object Features >
Layer Values > Mean and double-click on Brightness. We can define the background as
anything with a brightness over 230, so select the ‘greater than’ button (>) and enter 230
in the left-hand field. Press OK.
The final step is to assign a class based on our new criterion. In the Use Class parameter of Algo-
rithm Parameters on the right of the Edit Process window, overwrite ‘unclassified’, enter
‘Background’ and press Enter. The Class Description box will appear, where you can
change the color to white. Press OK to close the box, then press Execute in the Edit
Process dialog.
TIP: It’s very easy at this stage to miss out a function when writing rule
sets. Check the structure and the content of your rules against the screen
capture of the Process Tree window at the end of this section.
As a result, when you point your mouse at a white region, the Background classification
we have just created appears under the cursor. In addition, ‘Background’ now appears in
the Class Hierarchy window at the top right of the screen. Non-background objects (all
the shapes) have the classification ‘Unclassified’.
Again, right-click on the last rule set in the Process Tree and select Append New, to create
the third rule in the ‘Create objects and remove background’ parent process.
In the Algorithm drop-down box, select Merge Region. In the Class Filter parameter,
which launches the Edit Classification Filter box, select ‘Background’ – we want to merge
the background objects so we can later sub-divide the remaining objects (the shapes).
Press OK to close the box, then press Execute. The background is now a single object.
eCognition Developer has a built-in algorithm called Elliptic Fit; it basically measures
how closely an object fits into an ellipse of a similar area. Elliptic Fit is also available in Feature View, by selecting Object Features > Geometry > Shape and then double-clicking on Elliptic Fit. Of course a perfect circle has an elliptic fit of 1 (the
maximum value), so – at least in this example – we don’t really need to check this. But
you might want to practice using Feature View anyway.
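The exact Elliptic Fit formula is documented in the Reference Book; as a rough, hedged illustration of the idea, the sketch below fits an ellipse with the object's area and principal-axis proportions (from second-order moments) and reports the fraction of object pixels that fall inside it, so a filled circle scores close to 1:

```python
import numpy as np

def elliptic_fit_approx(mask):
    """Rough illustration of an elliptic-fit measure (not the exact
    eCognition formula): fit an ellipse with the object's area and
    principal-axis proportions, then return the fraction of object
    pixels lying inside that ellipse. A circle scores close to 1.
    """
    ys, xs = np.nonzero(mask)
    area = xs.size
    cy, cx = ys.mean(), xs.mean()
    # Second-order central moments give orientation and axis ratio.
    cov = np.cov(np.stack([xs - cx, ys - cy]))
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    ratio = np.sqrt(evals[1] / max(evals[0], 1e-9))
    # Semi-axes of an ellipse with the same area and that axis ratio.
    b = np.sqrt(area / (np.pi * ratio))
    a = ratio * b
    # Pixel coordinates expressed in the principal-axis frame.
    pts = evecs.T @ np.stack([xs - cx, ys - cy])
    inside = (pts[1] / a) ** 2 + (pts[0] / b) ** 2 <= 1.0
    return inside.mean()

# A filled circle of radius 20 scores near 1; a star shape scores lower.
yy, xx = np.mgrid[-25:26, -25:26]
circle = (xx ** 2 + yy ** 2) <= 20 ** 2
print(round(elliptic_fit_approx(circle), 2))
```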
To isolate the circles, we need to set up a new rule. We want this rule to be in the same
hierarchical level as our first ‘Create objects . . . ’ rule set and the easiest way to do this is
to right-click on the ‘Create objects’ rule set and select Append New, which will create a
process at the same level. Call this process ‘Define and isolate circles’.
To add the rule, right-click the new process and select Insert Child. In the Algorithm
drop-down box, select Assign Class. Click on Threshold Condition to navigate towards
the Elliptic Fit algorithm, using the path described earlier. To allow for a bit of image
degradation, we’re going to define a circle as anything with a value of over 0.95 – click
on the ‘greater than’ symbol and enter the value 0.95. Press OK.
Back in the Edit Process window, we will give our classification a name. Replace the
‘unclassified’ value in Use Class with ‘Circle’, press enter and assign it a color of your
choosing. Press OK. Finally, in the Edit Process window, press Execute to run the process.
There is now a ‘Circle’ classification in the class hierarchy and placing your cursor over
the circle shape will display the new classification.
There is also a convenient algorithm we can use to identify squares: the Rectangular Fit value (which for a square is, of course, one).
The method is the same as the one for the circle – create a new parent class and call
it ‘Define and isolate squares’. When you create the child, you will be able to find the
algorithm by going to Object Features > Geometry > Shape > Rectangular. Set the range
to ‘=1’ and assign it to a new class (‘Square’).
There are criteria you could use to identify the star, but as we’re using the software to separate and classify squares, circles and stars, we can be pragmatic – after defining background, circles and squares, the star is the only object remaining. So the only thing
left to do is to classify anything ‘unclassified’ as a star.
Simply set up a parent called ‘Define and isolate stars’, select Assign Class, select ‘un-
classified’ in the Class Filter and give it the value ‘Star’ in Use Class.
Rule sets are built up from single processes, which are displayed in the Process Tree and
created using the Edit Process dialog box. A single process can operate on two levels: at
the level of image objects (created by segmentation), or at the pixel level. Whatever the
object, a process will run sequentially through each target, applying an algorithm to each.
This section builds upon the tutorial in the previous chapter.
Figure 5.1. The Process Tree window displaying a simple rule set
There are three ways to open the Edit Process dialog box (figure 5.2):
5.1.1 Name
5.1.2 Algorithm
The Algorithm drop-down box allows the user to select an algorithm or a related pro-
cess. Depending on the algorithm selected, further options may appear in the Algorithm
Parameters pane.
5.1.3 Domain
This defines the image objects (p 3), vectors or image layers on which algorithms operate. The domain can consist not only of image objects but also of single or multiple vector layers. When using the pixel level, the process creates a new image object level.
The individual settings of the algorithm are defined in the Algorithm Parameters group box. (We recommend you set these after selecting the domain.)
Depending on the algorithm, you can choose among different basic domains and specify
your choice by setting execution parameters. Common parameters are:
• Level: If you choose image object level as domain, you must select an image object
level name or a variable. If you choose pixel level as domain, you can specify the
name of a new image object level.
• Class filter: If you already classified image objects, you can select classes to focus
on the image objects of these classes.
• Threshold condition: You can use a threshold condition to further narrow down
the number of image objects or vectors. When you have added a threshold condi-
tion, a second drop-down box appears, which allows you to add another threshold
condition.
• Max. number of objects: Enter the maximum number of image objects to be pro-
cessed.
Technically, the domain is a set of image objects or vectors. Every process loops through
the set of e.g. image objects in the image object domain, one-by-one, and applies the
algorithm to each image object.
The set of domains is extensible, using the eCognition Developer SDK. To specify the
domain, select an image object level or another basic domain in the drop-down list. Avail-
able domains are listed in table 5.1 on the current page, Domains.
Table 5.1. Domains
• Pixel level: Applies the algorithm to the pixel level. Typically used for initial segmentations and filters. Parameters: Map; Threshold condition
• Image object level: Applies the algorithm to image objects on an image object level. Typically used for object processing. Parameters: Level; Class filter; Threshold condition; Map; Region; Max. number of objects
• Current image object: Applies the algorithm to the current internally selected image object of the parent process. Parameters: Class filter; Threshold condition; Max. number of image objects
• Neighbor image object: Applies the algorithm to all neighbors of the current internally selected image object of the parent process. The size of the neighborhood is defined by the Distance parameter. Parameters: Class filter; Threshold condition; Max. number of objects; Distance
• Super object: Applies the algorithm to the superobject of the current internally selected image object of the parent process. The number of levels up the image object level hierarchy is defined by the Level Distance parameter. Parameters: Class filter; Threshold condition; Level distance; Max. number of objects
• Sub objects: Applies the algorithm to all sub-objects of the current internally selected image object of the parent process. The number of levels down the image object level hierarchy is defined by the Level Distance parameter. Parameters: Class filter; Threshold condition; Level distance; Max. number of objects
• Linked objects: Applies the algorithm to the linked object of the current internally selected image object of the parent process. Parameters: Link class filter; Link direction; Max distance; Use current image object; Class filter; Threshold condition; Max. number of objects
• Maps: Applies the algorithm to all specified maps of a project. You can select this domain in parent processes with the Execute child process algorithm to set the context for child processes that use the Map parameter From Parent. Parameters: Map name prefix; Threshold condition
• Image object list: A selection of image objects created with the Update Image Object List algorithm. Parameters: Image object list; Class filter; Threshold condition; Max. number of objects
• Array: Applies the algorithm to a set of, for example, classes, levels, maps or multiple vector layers defined by an array. Parameters: Array; Array type; Index variable
• Vectors: Applies the algorithm to a single vector layer. Parameters: Threshold condition; Map; Thematic vector layer
• Vectors (multiple layers): Applies the algorithm to a set of vector layers. Parameters: Threshold condition; Map; Use Array; Thematic vector layers
Algorithms are selected from the drop-down list under Algorithms; detailed descriptions
of algorithms are available in the Reference Book.
By default, the drop-down list contains all available algorithms. You can customize this
list by selecting ‘more’ from the drop-down list, which opens the Select Process Algo-
rithms box (figure 5.3).
By default, the ‘Display all Algorithms always’ box is selected. To customize the display,
uncheck this box and press the left-facing arrow under ‘Move All’, which will clear the
list. You can then select individual algorithms to move them to the Available Algorithms
list, or double-click their headings to move whole groups.
Loops & Cycles allows you to specify how many times you would like a process (and its
child processes) to be repeated. The process runs cascading loops based on a number you
define; the feature can also run loops while a feature changes (for example growing).
To execute a single process, select the process in the Process Tree and press F5. Alterna-
tively, right-click on the process and select Execute. If you have child processes below
your process, these will also be executed.
You can also execute a process from the Edit Process dialog box by pressing Execute,
instead of OK (which adds the process to the Process Tree window without executing it).
The introductory tutorial (p 45) will have introduced you to the concept of parent and
child processes. Using this hierarchy allows you to organize your processes in a more
logical way, grouping processes together to carry out a specific task.
Go to the Process Tree and right-click in the window. From the context menu, choose
Append New. The Edit Process dialog (figure 5.2) will appear. This is the one time it is
recommended that you deselect automatic naming and give the process a logical name,
as you are essentially making a container for other processes. By default, the algorithm
drop-down box displays Execute Child Processes. Press OK.
You can then add subordinate processes by right-clicking on your newly created parent
and selecting Insert Child. We recommend you keep automatic naming for these pro-
cesses, as the names display information about the process. Of course, you can select a child process and add further child processes that are subordinate to it.
Figure 5.4. Rule set sample showing parent and child processes
It is also possible to delete a level as part of a rule set (and also to copy or rename one). In
the Algorithm field in the Edit Process box, select Delete Image Object Level and enter
your chosen parameters.
It is possible to go back to a previous state by using the undo function, which is located in
Process > Undo (a Redo command is also available). These functions are also available
as toolbar buttons using the Customize command. You can undo or redo the creation,
modification or deletion of processes, classes, customized features and variables.
However, it is not possible to undo the execution of processes or any operations relating
to image object levels, such as Copy Current Level or Delete Level. In addition, if items
such as classes or variables that are referenced in rule sets are deleted and then undone,
only the object itself is restored, not its references.
It is also possible to revert to a previous version (p 217).
Undo Options
You can assign a minimum number of undo actions by selecting Tools > Options; in
addition, you can assign how much memory is allocated to the undo function (although
the minimum number of undo actions has priority). To optimize memory you can also
disable the undo function completely.
Right-clicking in the Process Tree window gives you two delete options:
• Delete Rule Set deletes the entire contents of the Process Tree window – a dialog
box will ask you to confirm this action. Once performed, it cannot be undone
• The Delete command can be used to delete individual processes. If you delete
a parent process, you will be asked whether or not you wish to delete any sub-
processes (child processes) as well
CAUTION: Delete Rule Set eliminates all classes, variables, and cus-
tomized features, in addition to all single processes. Consequently, an
existing image object hierarchy with any classification is lost.
You can edit the organization of the Process Tree by dragging and dropping with the
mouse. Bear in mind that a process can be connected to the Process Tree in the following ways:
• Left-click a process and drag and drop it onto another process. It will be appended
as a sibling process on the same level as the target process
• Right-click a process and drag and drop it onto another process. It will be inserted
as a child process on a lower level than the target process.
Commonly, the term segmentation means subdividing entities, such as objects, into
smaller partitions. In eCognition Developer it is used differently; segmentation is any op-
eration that creates new image objects or alters the morphology of existing image objects
according to specific criteria. This means a segmentation can be a subdividing operation,
a merging operation, or a reshaping operation.
There are two basic segmentation principles: top-down segmentation and bottom-up segmentation.
An analysis of which segmentation method to use with which type of image is beyond
the scope of this guide; all the built-in segmentation algorithms have their pros and cons
and a rule-set developer must judge which methods are most appropriate for a particular
image analysis.
Top-down segmentation means cutting objects into smaller objects. It can – but does
not have to – originate from the entire image as one object. eCognition Developer 9.0
offers three top-down segmentation methods: chessboard segmentation, quadtree-based
segmentation and multi-threshold segmentation.
Multi-threshold segmentation is the most widely used; chessboard and quadtree-based
segmentation are generally useful for tiling and dividing objects into equal regions.
Chessboard Segmentation
• Refining small image objects: Relatively small image objects, which have already
been identified, can be segmented with a small square-size parameter for more
detailed analysis. However, we recommend that pixel-based object resizing should
be used for this task.
• Applying a new segmentation: Let us say you have an image object that you want
to cut into multiresolution-like image object primitives. You can first apply chess-
board segmentation with a small square size, such as one, then use those square
image objects as starting image objects for a multiresolution segmentation.
You can use the Edit Process dialog box to define the size of squares.
• Object Size: Use an object size of one to generate pixel-sized image objects. The
effect is that for each pixel you can investigate all information available from fea-
tures
• Medium Square Size: In cases where the image scale (resolution or magnification)
is higher than necessary to find regions or objects of interest, you can use a square
size of two or four to reduce the scale. Use a square size of about one-twentieth to
one-fiftieth of the scene width for a rough detection of large objects or regions of
interest. You can perform such a detection at the beginning of an image analysis
procedure.
Quadtree-Based Segmentation
• Cut each square into four smaller squares if the homogeneity criterion is not met.
Example: The maximal color difference within the square object is larger than the
defined scale value.
• Repeat until the homogeneity criterion is met at each square (a minimal sketch of this splitting rule follows this list).
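Purely as an illustration of the splitting rule above (not the built-in algorithm), assuming a single image layer and the maximal-colour-difference criterion from the example:

```python
import numpy as np

def quadtree_split(layer, scale, x=0, y=0, size=None):
    """Recursively split a square region into four quarters until the
    maximal value difference inside each square is <= `scale`.
    Returns a list of (x, y, size) squares. Illustration only.
    """
    if size is None:
        size = layer.shape[0]        # assume a square, power-of-two image
    tile = layer[y:y + size, x:x + size]
    if size == 1 or tile.max() - tile.min() <= scale:
        return [(x, y, size)]
    half = size // 2
    squares = []
    for dy in (0, half):
        for dx in (0, half):
            squares += quadtree_split(layer, scale, x + dx, y + dy, half)
    return squares

# Hypothetical 8x8 layer: bright background with a dark 4x4 block.
img = np.full((8, 8), 200.0)
img[0:4, 0:4] = 50.0
print(quadtree_split(img, scale=25))   # four 4x4 squares
```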
Contrast filter segmentation is a very fast algorithm for initial segmentation and, in some
cases, can isolate objects of interest in a single step. Because there is no need to initially
create image object primitives smaller than the objects of interest, the number of image
objects is lower than with some other approaches.
An integrated reshaping operation modifies the shape of image objects to help form coher-
ent and compact image objects. The resulting pixel classification is stored in an internal
thematic layer. Each pixel is classified as one of the following classes: no object, ob-
ject in first layer, object in second layer, object in both layers and ignored by threshold.
Finally, a chessboard segmentation is used to convert this thematic layer into an image
object level.
In some cases you can use this algorithm as first step of your analysis to improve overall
image analysis performance substantially. The algorithm is particularly suited to fluores-
cent images where image layer information is well separated.
Multiresolution Segmentation
With any given average size of image objects, multiresolution segmentation yields good
abstraction and shaping in any application area. However, it puts higher demands on the
processor and memory, and is significantly slower than some other segmentation tech-
niques – therefore it may not always be the best choice.
• The shape criterion can be given a value of up to 0.9. This ratio determines to what
degree shape influences the segmentation compared to color. For example, a shape
weighting of 0.6 results in a color weighting of 0.4
• In the same way, the value you assign for compactness gives it a relative weighting against smoothness (see the sketch after this list).
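The weighting scheme commonly described for this kind of region-merging criterion can be sketched as follows; this is only an illustration of how the complementary weights combine, not the exact implementation:

```python
def combined_homogeneity(h_color, h_compact, h_smooth,
                         shape_weight=0.6, compactness_weight=0.5):
    """Illustrative weighting: shape_weight trades shape against colour,
    compactness_weight trades compactness against smoothness."""
    h_shape = compactness_weight * h_compact + (1 - compactness_weight) * h_smooth
    colour_weight = 1 - shape_weight          # e.g. shape 0.6 -> colour 0.4
    return colour_weight * h_color + shape_weight * h_shape

# A merge candidate with moderate colour change but a compact resulting shape.
print(combined_homogeneity(h_color=0.8, h_compact=0.2, h_smooth=0.4))
```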
The Multi-Threshold Segmentation algorithm splits the image object domain and classi-
fies resulting image objects based on a defined pixel value threshold. This threshold can
be user-defined or can be auto-adaptive when used in combination with the Automatic
Threshold algorithm.
The threshold can be determined for an entire scene or for individual image objects; this determines whether it is stored in a scene variable or an object variable. The algorithm uses a combination of histogram-based methods and the homogeneity measurement of multiresolution segmentation to calculate a threshold that divides the selected set of pixels into two subsets, so that heterogeneity is increased to a maximum.
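The precise combination of histogram statistics and the homogeneity measure is described in the Reference Book. Purely as an analogy for a histogram-based split that maximizes the separation between the two pixel subsets, here is an Otsu-style threshold in plain Python (an assumption for illustration, not the Automatic Threshold algorithm itself):

```python
import numpy as np

def histogram_threshold(pixels, bins=256):
    """Pick the threshold that maximizes between-class variance
    (Otsu's method) - an illustration of a histogram-based split."""
    hist, edges = np.histogram(pixels, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    total = hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0
        m1 = (hist[i:] * centers[i:]).sum() / w1
        var_between = (w0 / total) * (w1 / total) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

# Two hypothetical pixel populations: dark objects and bright background.
np.random.seed(0)
pixels = np.concatenate([np.random.normal(60, 10, 5000),
                         np.random.normal(200, 15, 5000)])
print(round(histogram_threshold(pixels)))   # roughly halfway between 60 and 200
```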
Spectral difference segmentation lets you merge neighboring image objects if the dif-
ference between their layer mean intensities is below the value given by the maximum
spectral difference. It is designed to refine existing segmentation results, by merging
spectrally similar image objects produced by previous segmentations and therefore is a
bottom-up segmentation.
The algorithm cannot be used to create new image object levels based on the pixel level.
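A minimal sketch of this merging rule, using a hypothetical object representation (object id, layer mean intensity, neighbour pairs) rather than the eCognition API; a real implementation would also recompute the mean of a merged region after each merge:

```python
# Hypothetical objects: id -> mean intensity; edges list neighbouring pairs.
means = {1: 100.0, 2: 104.0, 3: 150.0, 4: 148.0}
edges = [(1, 2), (2, 3), (3, 4)]
MAX_SPECTRAL_DIFFERENCE = 10.0

# Union-find over objects: merge a pair when their means are close enough.
parent = {o: o for o in means}
def find(o):
    while parent[o] != o:
        parent[o] = parent[parent[o]]
        o = parent[o]
    return o

for a, b in edges:
    if abs(means[a] - means[b]) < MAX_SPECTRAL_DIFFERENCE:
        parent[find(a)] = find(b)

groups = {}
for o in means:
    groups.setdefault(find(o), []).append(o)
print(list(groups.values()))   # [[1, 2], [3, 4]] - two merged regions
```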
2. The Multiresolution Segmentation algorithm criteria Smoothness and Compactness are not related to the fea-
tures of the same name.
All algorithms listed under the Reshaping Algorithms 3 group technically belong to the
segmentation strategies. Reshaping algorithms cannot be used to identify undefined im-
age objects, because these algorithms require pre-existing image objects. However, they
are useful for getting closer to regions and image objects of interest.
The two most basic algorithms in this group are Merge Region and Grow Region. The
more complex Image Object Fusion is a generalization of these two algorithms and offers
additional options.
Merge Region
The Merge Region algorithm merges all neighboring image objects of a class into one
large object. The class to be merged is specified in the domain. The image object domain
of a process using the Merge Region algorithm should define one class only. Otherwise,
all objects will be merged irrespective of the class and the classification will be less
predictable.
Classifications are not changed; only the number of image objects is reduced.
3. Sometimes reshaping algorithms are referred to as classification-based segmentation algorithms, because they
commonly use information about the class of the image objects to be merged or cut. Although this is not always
true, eCognition Developer 9.0 uses this terminology.
Grow Region
The Grow Region algorithm extends all image objects that are specified in the domain,
and thus represent the seed image objects. They are extended by neighboring image
objects of defined candidate classes. (Grow region processes should begin the initial
growth cycle with isolated seed image objects defined in the domain. Otherwise, if any
candidate image objects border more than one seed image object, ambiguity will result
as to which seed image object each candidate image object will merge with.) For each
process execution, only those candidate image objects that neighbor the seed image object
before the process execution are merged into the seed objects. The following sequence
illustrates four Grow Region processes:
Figure 5.8. Red seed image objects grow stepwise into green candidate image objects
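The stepwise behaviour can be pictured with a small sketch on hypothetical data (a one-dimensional row of objects, not the eCognition API): in each execution, only candidates that bordered a seed before the step starts are reclassified.

```python
# Hypothetical 1-D row of objects, classified as seed ('S'), candidate ('C')
# or other ('.'); neighbours are simply the adjacent positions.
classes = list("..CCSCC..")

def grow_step(classes):
    """One Grow Region execution: candidates that bordered a seed
    *before* the step are merged into (reclassified as) the seed class."""
    before = classes[:]                      # neighbourhood frozen per step
    for i, c in enumerate(before):
        if c == "C":
            left = before[i - 1] if i > 0 else "."
            right = before[i + 1] if i < len(before) - 1 else "."
            if "S" in (left, right):
                classes[i] = "S"
    return classes

for step in range(3):
    print("".join(grow_step(classes)))
# Prints ..CSSSC.. then ..SSSSS.. then ..SSSSS.. - growth stops when
# no unclassified candidates border the seed region any more.
```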
Although you can perform some image analysis on a single image object level, the full
power of the eCognition object-oriented image analysis unfolds when using multiple lev-
els. On each of these levels, objects are defined by the objects on the level below them
that are considered their sub-objects. In the same manner, the lowest level image objects
are defined by the pixels of the image that belong to them. This concept has already been
introduced in Image Object Hierarchy.
The levels of an image object hierarchy range from a fine resolution of image objects on
the lowest level to the coarse resolution on the highest. On its superlevel, every image
object has only one image object, the superobject. On the other hand, an image object
may have – but is not required to have – multiple sub-objects.
To better understand the concept of the image object hierarchy, imagine a hierarchy of
image object levels, each representing a meaningful structure in an image. These levels
are related to the various (coarse, medium, fine) resolutions of the image objects. The
hierarchy arranges subordinate image structures (such as a tree) below generic image
structures (such as tree types); figure 5.9 shows a geographical example.
Figure 5.9. Meaningful image object levels within an image object hierarchy
The lowest and highest members of the hierarchy are unchanging; at the bottom is the
digital image itself, made up of pixels, while at the top is a level containing a single
object (such as a forest).
• Applying a segmentation algorithm using the pixel-level domain will create a new
level. Image object levels are usually added above existing ones, although some algorithms let you specify whether new levels are created above or below existing
ones
• Using the Copy Image Object Level algorithm
The shapes of image objects on these super- and sublevels will constrain the shape of the
objects in the new level.
The hierarchical network of an image object hierarchy is topologically definite. In other
words, the border of a superobject is consistent with the borders of its sub-objects. The
area represented by a specific image object is defined by the sum of its sub-objects’ areas;
eCognition technology accomplishes this quite easily, because the segmentation tech-
niques use region-merging algorithms. For this reason, not all the algorithms used to
analyze images allow a level to be created below an existing one.
Each image object level is constructed on the basis of its direct sub-objects. For example,
the sub-objects of one level are merged into larger image objects on the level above it. This merge is limited by the borders of existing superobjects; adjacent image objects cannot be
merged if they have different superobjects.
You can create an image object level by using some segmentation algorithms such as
multiresolution segmentation, multi-threshold or spectral difference segmentation. The
relevant settings are in the Edit Process dialog box:
• Go to the drop-down list box within the Image Object Domain group box and select
an available image object level. To switch to another image object level, select the
currently active image object level and click the Parameters button to select another
image object level in the Select Level dialog box.
• Insert the new image object level either above or below the selected image object level. Go to the Algorithm Parameters group box and look for a Level Usage parameter. If available, you can select from the options; if not available, the new image object level is created above the current one.
Because a new level produced by segmentation uses the image objects of the level beneath
it, the function has the following restrictions:
• An image object level cannot contain image objects larger than its superobjects or
smaller than its sub-objects
• When creating the first image object level, the lower limit of the image object size is represented by the pixels, and the upper limit by the size of the scene.
This structure enables you to create an image object hierarchy by segmenting the image
multiple times, resulting in different image object levels with image objects of different
scales.
It is often useful to duplicate an image object level in order to modify the copy. To
duplicate a level, do one of the following:
• Choose Image Objects > Copy Current Level from the main menu. The new image
object level will be inserted above the currently active one
• Create a process using the Copy Image Object Level algorithm. You can choose to
insert the new image object level either above or below an existing one.
You may want to rename an image object level, for example to prepare a rule set for
further processing steps or to follow your organization’s naming conventions. You can
also create or edit level variables and assign them to existing levels.
1. To edit an image object level or level variable, select Image Objects > Edit Level
Names from the main menu. The Edit Level Aliases dialog box opens (figure 5.10)
2. Select an image object level or variable and edit its alias
3. To create a new image object level name, 4 type a name in the Alias field and click
the Add Level or Add Variable button to add a new unassigned item to the Level
Names or Variable column. Select ‘not assigned’ and edit its alias or assign another
level from the drop-down list. Click OK to make the new level or variable available
for assignment to a newly created image object level during process execution.
4. To assign an image object level or level variable to an existing value, select the item
you want to assign and use the drop-down arrow to select a new value.
5. To remove a level variable alias, select it in the Variables area and click Remove.
6. To rename an image object level or level variable, select it in the Variables area,
type a new alias in the Alias field, and click Rename.
In some cases it is helpful to define names of image object levels before they are assigned
to newly created image object levels during process execution. To do so, use the Add
Level button within the Edit Level Aliases dialog box.
4. You can change the default name for new image object levels in Tools > Options
• Depending on the selected algorithm and the selected image object domain, you
can alternatively use one of the following parameters:
• Level parameter of the image object domain group box
• Level Name parameter in the Algorithm Parameters group box
Instead of selecting any item in the drop-down list, just type the name of the image object
level to be created during process execution. Click OK and the name is listed in the Edit
Level Aliases dialog box.
When working with image object levels that are temporary, or are required for testing
processes, you will want to delete image object levels that are no longer used. To delete
an image object level do one of the following:
The Delete Level dialog box (figure 5.11) will open, which displays a list of all image
object levels according to the image object hierarchy.
Select the image object level to be deleted (you can press Ctrl to select multiple levels)
and press OK. The selected image object levels will be removed from the image object
hierarchy. Advanced users may want to switch off the confirmation message before dele-
tion. To do so, go to the Options dialog box and change the Ask Before Deleting Current
Level setting.
When analyzing individual images or developing rule sets you will need to investigate
single image objects. The Features tab of the Image Object Information window is used to get information on a selected image object.
The selected feature values are now displayed in the map view. To compare single image
objects, click another image object in the map view and the displayed feature values are
updated.
Double-click a feature to display it in the map view; to deselect a selected image object,
click it in the map view a second time. If the processing for image object information
takes too long, or if you want to cancel the processing for any reason, you can use the
Cancel button in the status bar.
Image objects have spectral, shape, and hierarchical characteristics and these features are
used as sources of information to define the inclusion-or-exclusion parameters used to
classify image objects. There are two major types of features:
• Object features, which are attributes of image objects (for example the area of an
image object)
• Global features, which are not connected to an individual image object (for exam-
ple the number of image objects of a certain class)
Available features are sorted in the feature tree, which is displayed in the Feature View
window (figure 5.14). It is open by default but can be also selected via Tools > Feature
View or View > Feature View.
This section gives a very brief overview of functions. For more detailed information, consult the Reference Book.
Vector Features
Vector features are available in the feature tree if the project includes a thematic layer.
They allow addressing vectors by their attributes.
Object Features
Object features are calculated by evaluating image objects themselves as well as their
embedding in the image object hierarchy. They are grouped as follows:
• Texture features allow evaluation of texture values based on layers and shape. Texture after Haralick is also available.
• Object Variables are local variables for individual image objects
• Hierarchy features provide information about the embedding of an image object
within the image object hierarchy.
• Thematic attribute features are used to describe an image object using information
provided by thematic layers, if these are present.
Class-Related Features
Class-related features are dependent on image object features and refer to the classes
assigned to image objects in the image object hierarchy.
This location is specified for superobjects and sub-objects by the levels separating them.
For neighbor image objects, the location is specified by the spatial distance. Both these
distances can be edited. Class-related features are grouped as follows:
• Relations to Neighbor Objects features are used to describe an image object by its
relationships to other image objects of a given class on the same image object level
• Relations to Sub-Objects features describe an image object by its relationships to
other image objects of a given class, on a lower image object level in the image
object hierarchy. You can use these features to evaluate sub-scale information be-
cause the resolution of image objects increases as you move down the image object
hierarchy
• Relations to Superobjects features describe an image object by its relations to other
image objects of a given class, on a higher image object level in the image object hi-
erarchy. You can use these features to evaluate super-scale information because the
resolution of image objects decreases as you move up the image object hierarchy
• Relations to Classification features are used to find out about the current or poten-
tial classification of an image object.
Scene Features
Scene features return properties referring to the entire scene or map. They are global
because they are not related to individual image objects, and are grouped as follows:
• Variables are global variables that exist only once within a project. They are inde-
pendent of the current image object.
• Class-Related scene features provide information on all image objects of a given
class per map.
• Scene-Related features provide information on the scene.
Process-Related Features
Process-related features are image object dependent features. They involve the relation-
ship of a child process image object to a parent process. They are used in local processing.
A process-related features refers to a relation of an image objects to a parent process ob-
ject (PPO) of a given process distance in the process hierarchy. Commonly used process-
related features include:
• Border to PPO: The absolute border of an image object shared with its parent pro-
cess object.
• Distance to PPO: The distance between two parent process objects.
• Elliptic dist. from PPO is the elliptic distance of an image object to its parent
process object (PPO).
• Same super object as PPO checks whether an image object and its parent process
object (PPO) are parts of the same superobject.
• Rel. border to PPO is the ratio of the border length of an image object shared with
the parent process object (PPO) to its total border length.
Region Features
Region features return properties referring to a given region. They are global because
they are not related to individual image objects. They are grouped as follows:
Metadata
Metadata items can be used as a feature in rule set development. To do so, you have
to provide external metadata in the feature tree. If you are not using data import proce-
dures to convert external source metadata to internal metadata definitions, you can create
individual features from a single metadata item.
Feature Variables
Feature variables have features as their values. Once a feature is assigned to a feature variable, the variable can be used in the same way as the feature, returning the same value as the assigned feature. It is possible to create a feature variable without a feature assigned, but
the calculation value would be invalid.
Most features with parameters must first be created before they are used and require
values to be set beforehand. Before a feature of an image object can be displayed in the map
view, an image must be loaded and a segmentation must be applied to the map.
1. To create a new feature, right-click in the Feature View window and select Man-
age Customized Features. In the dialog box, click Add to display the Customized
Features box, then click on the Relational tab
2. In this example, we will create a new feature based on Min. Pixel Value. In the
Feature Selection box, this can be found by selecting Object Values > Layer Values
> Pixel-based > Min. Pixel Value
3. Under Min. Pixel Value, right-click on Create New ‘Min. Pixel Value’ and select
Create.
The relevant dialog box - in this case Min. Pixel Value - will open.
4. Depending on the feature and your project, you must set parameter values. Pressing
OK will list the new feature in the feature tree. The new feature will also be loaded
into the Image Object Information window
5. Some features require you to input a unit, which is displayed in parentheses in the
feature tree. By default the feature unit is set to pixels, but other units are available.
You can change the default feature unit for newly created features. Go to the Options
dialog box and change the default feature unit item from pixels to ‘same as project unit’.
The project unit is defined when creating the project and can be checked and modified in
the Modify Project dialog box.
Thematic Attributes
Thematic attributes can only be used if a thematic layer has been imported into the project.
If this is the case, all thematic attributes in numeric form that are contained in the attribute
table of the thematic layer can be used as features in the same manner as you would use
any other feature.
Object-oriented texture analysis allows you to describe image objects by their texture. By
looking at the structure of a given image object’s sub-objects, an object’s form and texture
can be determined. An important aspect of this method is that the respective segmentation
parameters of the sub-object level can easily be adapted to come up with sub-objects that
represent the key structures of a texture.
A straightforward method is to use the predefined texture features provided by eCognition
Developer 9.0. They enable you to characterize image objects by texture, determined by
the spectral properties, contrasts and shape properties of their sub-objects.
Another approach to object-oriented texture analysis is to analyze the composition of
classified sub objects. Class-related features (relations to sub objects) can be utilized to
provide texture information about an image object, for example, the relative area covered
by sub objects of a certain classification.
Further texture features are provided by Texture after Haralick. 5 These features are based
upon the co-occurrence matrix, 6 which is created out of the pixels of an object.
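As a rough illustration of the underlying idea (not the eCognition implementation), a grey-level co-occurrence matrix counts how often pairs of grey values occur at a fixed pixel offset within an object; Haralick features such as contrast are then statistics of that matrix:

```python
import numpy as np

def cooccurrence(gray, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one offset (dx, dy).
    `gray` is a 2-D array of integer grey levels in [0, levels).
    Illustration only; eCognition works on 256 grey levels per object.
    """
    glcm = np.zeros((levels, levels), dtype=float)
    h, w = gray.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[gray[y, x], gray[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                      # normalise to probabilities
    return glcm

def haralick_contrast(glcm):
    i, j = np.indices(glcm.shape)
    return float(((i - j) ** 2 * glcm).sum())

# Hypothetical object: a smooth patch vs. a checkerboard of the same size.
smooth = np.zeros((8, 8), dtype=int)
checker = np.indices((8, 8)).sum(axis=0) % 2 * 7
print(haralick_contrast(cooccurrence(smooth)),
      haralick_contrast(cooccurrence(checker)))   # low vs. high contrast
```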
Some features may be edited to specify a distance relating two image objects. There are
different types of feature distances:
• The level distance between image objects on different image object levels in the
image object hierarchy.
• The spatial distance between objects on the same image object level in the image
object hierarchy.
• The process distance between a process and the parent process in the process hierarchy.
5. The calculation of Haralick texture features can require considerable processor power, since for every pixel of an object, a 256 x 256 matrix has to be calculated.
6. en.wikipedia.org/wiki/Co-occurrence_matrix
Figure 5.15. Editing feature distance (here the feature Number of)
Level Distance
The level distance represents the hierarchical distance between image objects on different
levels in the image object hierarchy. Starting from the current image object level, the
level distance indicates the hierarchical distance of image object levels containing the
respective image objects (sub-objects or superobjects).
Spatial Distance
The spatial distance represents the horizontal distance between image objects on the same
level in the image object hierarchy.
Feature distance is used to analyze neighborhood relations between image objects on the
same image object level in the image object hierarchy. It represents the spatial distance
in the selected feature unit between the centers of mass of image objects. The (default) value of 0 represents an exception, as it is not related to the distance between the centers of mass of image objects; only the neighbors that have a mutual border are counted.
Process Distance
The process distance in the process hierarchy represents the upward distance of hierarchical levels in the process tree between a process and the parent process. It is a basic parameter
of process-related features.
In practice, the distance is the number of hierarchy levels in the Process Tree window
above the current editing line, where you find the definition of the parent object. In the
Process Tree, hierarchical levels are indicated using indentation.
Figure 5.16. Process Tree window displaying a prototype of a process hierarchy. The pro-
cesses are named according to their connection mode
Example
• A process distance of one means the parent process is located one hierarchical level
above the current process.
• A process distance of two means the parent process is located two hierarchical
levels above the current process. Put figuratively, a process distance of two defines
the ‘grandparent’ process.
This feature allows you to compare image objects of selected classes when evaluating
classifications. To open it, select Image Objects > Image Object Table from the main
menu. To launch the Configure Image Object Table dialog box, double-click in the win-
dow or right-click on the window and choose Configure Image Object Table.
Upon opening, the classes and features windows are blank. Press the Select Classes
button, which launches the Select Classes for List dialog box.
Add as many classes as you require by clicking on an individual class, or transferring the
entire list with the All button. On the Configure Image Object Table dialog box, you can
also add unclassified image objects by ticking the checkbox. In the same manner, you
can add features by navigating via the Select Features button.
Clicking on a column header will sort rows according to column values. Depending on
the export definition of the used analysis, there may be other tabs listing dedicated data.
Selecting an object in the image or in the table will highlight the corresponding object.
This feature allows you to analyze the correlation of two features of selected image ob-
jects. If two features correlate highly, you may wish to deselect one of them from the
Image Object Information or Feature View windows. As with the Feature View window,
not only spectral information may be displayed, but all available features.
• To open 2D Feature Space Plot, go to Tools > 2D Feature Space Plot via the main
menu
• The fields on the left-hand side allow you to select the levels and classes you wish
to investigate and assign features to the x- and y-axes
The Correlation display shows the Pearson’s correlation coefficient between the values of
the selected features and the selected image objects or classes.
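Pearson's correlation coefficient is the standard statistic; for two feature vectors sampled over the same set of image objects (hypothetical values below) it can be computed as follows:

```python
import numpy as np

# Hypothetical feature values for the same set of image objects.
area       = np.array([120.0, 340.0, 560.0, 410.0, 90.0])
brightness = np.array([200.0, 180.0, 130.0, 150.0, 220.0])

def pearson(x, y):
    """Pearson's correlation coefficient r in [-1, 1]."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum()))

r = pearson(area, brightness)
print(round(r, 2))   # strongly negative here: larger objects are darker
# If |r| is close to 1, one of the two features adds little extra information.
```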
Many image data formats include metadata or come with separate metadata files, which
provide additional image information on content, quality or condition of the data. To use
this metadata information in your image analysis, you can convert it into features and use
these features for classification.
The available metadata depends on the data provider or camera used. Examples are:
The metadata provided can be displayed in the Image Object Information window, the
Feature View window or the Select Displayed Features dialog box. For example, depending on latitude and longitude, a rule set for a specific vegetation zone can be applied to the image data.
Although it is not usually necessary, you may sometimes need to link an open project
to its associated metadata file. To add metadata to an open project, go to File > Modify
Open Project.
The lowest pane of the Modify Project dialog box (figure 5.20) allows you to edit the
links to metadata files. Select Insert to locate the metadata file. It is very important to
select the correct file type when you open the metadata file to avoid error messages.
Once you have selected the file, select the correct field from the Import Metadata box and
press OK. The filepath will then appear in the metadata pane.
To populate with metadata, press the Edit button to launch the MetaData Conversion
dialog box (figure 5.21).
Press Generate All to populate the list with metadata, which will appear in the right-hand
column. You can also load or save metadata in the form of XML files.
If you are batch importing large amounts of image data, then you should define metadata
via the Customized Import dialog box (figure 5.22).
On the Metadata tab of the Customized Import dialog box, you can load Metadata into the
projects to be created and thus modify the import template with regard to the following
options:
A master file must be defined in the Workspace tab; if it is not, you cannot access the
Metadata tab. The Metadata tab lists the metadata to be imported in groups and can be
modified using the Add Metadata and Remove Metadata buttons.
You may want to use metadata in your analysis or in writing rule sets. Once the metadata
conversion box has been generated, click Load – this will send the metadata values to
the Feature View window, creating a new list under Metadata. Right-click on a feature
and select Display in Image Object Information to view its values in the Image Object
Information window.
To create a simple project – one without thematic layers, metadata, or scaling (geocoding
is detected automatically) – go to File > Load Image File in the main menu. 1
Figure 6.1. Load Image File dialog box for a simple project, with recursive file display selected
Load Image File (along with Open Project, Open Workspace and Load Ruleset) uses a
customized dialog box. Selecting a drive displays sub-folders in the adjacent pane; the
dialog will display the parent folder and the subfolder.
1. In Windows there is a 260-character limit on filenames and filepaths (http://msdn.microsoft.com/en-us/library
/windows/desktop/aa365247%28v=vs.85%29.aspx). Trimble software does not have this restriction and can
export paths and create workspaces beyond this limitation. For examples of this feature, refer to the FAQs in
the Windows installation guide.
Clicking on a sub-folder then displays all the recognized file types within it (this is the
default).
You can filter file names or file types using the File Name field. To combine different
conditions, separate them with a semicolon (for example *.tif; *.las). The File Type
drop-down list lets you select from a range of predefined file types.
The buttons at the top of the dialog box let you easily navigate between folders. Pressing
the Home button returns you to the root file system.
There are three additional buttons available. The Add to Favorites button on the left lets you add a shortcut to the left-hand pane; these shortcuts are listed under the Favorites heading.
The second button, Restore Layouts, tidies up the display in the dialog box. The third,
Search Subfolders, additionally displays the contents of any subfolders within a folder.
You can, by holding down Ctrl or Shift, select more than one folder. Files can be sorted
by name, size and by date modified.
In the Load Image File dialog box you can:
1. Select multiple files by holding down the Shift or Ctrl keys, as long as they have
the same number of dimensions.
2. Access a list of recently accessed folders in the Go to Folder drop-down
list. You can also paste a filepath into this field (which will also update the folder
buttons at the top of the dialog box).
When you create a new project, the software generates a main map representing the image
data of a scene. To prepare this, you select image layers and optional data sources like
thematic layers or metadata for loading to a new project. You can rearrange the image
layers, select a subset of the image or modify the project default settings. In addition, you
can add metadata.
An image file contains one or more image layers. For example, an RGB image file con-
tains three image layers, which are displayed through the Red, Green and Blue channels
(layers).
Open the Create Project dialog box by going to File > New Project (for more detailed
information on creating a project, refer to The Create Project Dialog Box). The Import
Image Layers dialog box opens. Select the image data you wish to import, then press the
Open button to display the Create Project dialog box.
Opening certain file formats or structures requires you to select the correct driver in the
File Type drop-down list.
Then select the main file in the files area. If you select a repository file (archive file),
another Import Image Layers dialog box opens, where you can select from the contained
files. Press Open to display the Create Project dialog box.
The Create Project dialog box (figure 6.2) gives you several options. These options can
be edited at any time by selecting File > Modify Open Project:
• Change the name of your project in the Project Name field. The Map selection is
not active here, but can be changed in the Modify Project dialog box after project
creation is finished.
• If you load two-dimensional image data, you can define a subset using the Subset
Selection button. If the complete scene to be analyzed is relatively large, subset
selection enables you to work on a smaller area to save processing time.
• If you want to rescale the scene during import, edit the scale factor in the text box
corresponding to the scaling method used: resolution (m/pxl), magnification (x),
percent (%), or pixel (pxl/pxl).
• To use the geocoding information from an image file to be imported, select the Use
Geocoding checkbox.
• For feature calculations, value display, and export, you can edit the Pixel Size (Unit). If you keep the default (auto), the unit conversion is applied according to the unit of the coordinate system of the image data as follows:
– If geocoding information is included, the pixel size is equal to the resolution.
– In other cases, pixel size is 1.
In special cases you may want to ignore the unit information from the included geocoding information. To do so, deactivate the Initialize Unit Conversion from Input File item in Tools > Options in the main menu.
• The Image Layer pane allows you to insert, remove and edit image layers. The
order of layers can be changed using the up and down arrows
– If you use multidimensional image data sets, you can check and edit multi-
dimensional map parameters. You can set the number, the distance, and the
starting item for both slices and frames.
– If you load two-dimensional image data, you can set the value of those pixels
that are not to be analyzed. Select an image layer and click the No Data button
to open the Assign No Data Values dialog box.
– If you import image layers of different sizes, the largest image layer dimen-
sions determine the size of the scene. When importing without using geocod-
ing, the smaller image layers keep their size if the Enforce Fitting check box
is cleared. If you want to stretch the smaller image layers to the scene size,
select the Enforce Fitting checkbox.
• Thematic layers can be inserted, removed and edited in the same manner as image
layers.
• If not done automatically, you can load Metadata source files to make them avail-
able within the map.
6.2.3 Geocoding
Figure 6.3. The Layer Properties dialog box allows you to edit the geocoding information
The software cannot reproject image layers or thematic layers. Therefore all image layers
must belong to the same coordinate system in order to be read properly. If the coordinate
system is supported, geographic coordinates from inserted files are detected automatically.
If the information is not included in the image file but is nevertheless available, you can
edit it manually.
After importing a layer in the Create New Project or Modify Existing Project dialog
boxes, double-click on a layer to open the Layer Properties dialog box. To edit geocoding
information, select the Geocoding check box. You can edit the following:
No-data values can be assigned to scenes with two dimensions only. This allows you to
set the value of pixels that are not to be analyzed. No-data-value definitions can only be applied to maps that have not yet been analyzed.
No-data values can be assigned to image pixel values (or combinations of values) to save
processing time. These areas will not be included in the image analysis. Typical examples
for no-data values are bright or dark background areas. The Assign No Data Value dialog
box can be accessed when you create or modify a project.
After preloading image layers press the No Data button. The Assign No Data Values
dialog box opens (figure 6.4):
• Selecting Use Single Value for all Layers (Union) lets you set a single pixel value
for all image layers.
• To set individual pixel values for each image layer, select the Use Individual Values
for Each Layer checkbox
• Select one or more image layers
• Enter a value for those pixels that are not to be analyzed. Click Assign. For exam-
ple in the dialog box above, the no data value of Layer 1 is 0.000000. This implies
that all pixels of the image layer Layer 1 with a value of zero (i.e. the darkest pix-
els) are excluded from the analysis. The no data value of Layer 2 is set to 255 in
the Value field
• Select Intersection to include only those no data areas that all image layers have in common
• Select Union to include the no data areas of all individual image layers for the whole scene; that is, if a no data value is found in one image layer, this area is treated as no data in all other image layers too (see the sketch after this list)
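In terms of per-layer masks, Union and Intersection behave like a logical OR and AND of the individual no-data masks. A minimal sketch with hypothetical 3 x 3 layers, reusing the no data values 0 and 255 from the example above:

```python
import numpy as np

# Hypothetical 3x3 layers with the no-data values used above.
layer1 = np.array([[0, 10, 20], [0, 30, 40], [0, 0, 50]])      # no data = 0
layer2 = np.array([[255, 11, 21], [5, 255, 41], [6, 7, 255]])  # no data = 255

nodata1 = layer1 == 0
nodata2 = layer2 == 255

union        = nodata1 | nodata2   # no data in *any* layer -> excluded everywhere
intersection = nodata1 & nodata2   # excluded only where *all* layers have no data

print(union.sum(), intersection.sum())   # 6 pixels vs. 1 pixel excluded
```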
You can insert image layers and thematic layers with different resolutions (scales) into
a map. They need not have the same number of columns and rows. To combine image
layers of different resolutions (scales), the images with the lower resolution – having a
larger pixel size – are resampled to match the smallest pixel size. If the layers have
exactly the same size and geographical position, geocoding is not necessary for the
resampling of images.
Figure 6.5. Left: Higher resolution – small pixel size. Right: Lower resolution – image is
resampled to be imported
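The following minimal Python sketch illustrates the idea of bringing a coarser layer to the smallest pixel size. It assumes simple nearest-neighbour upsampling and hypothetical pixel sizes of 10 m and 5 m; it is not meant to reproduce the software's internal resampling.

    import numpy as np

    def resample_to_pixel_size(layer, pixel_size, target_pixel_size):
        """Nearest-neighbour upsampling of a coarser layer to a finer grid.
        Only illustrates that the layer with the larger pixel size is brought
        to the resolution of the finest layer; the software's internal
        resampling may differ."""
        factor = int(round(pixel_size / target_pixel_size))
        # Repeat each pixel 'factor' times along both axes
        return np.repeat(np.repeat(layer, factor, axis=0), factor, axis=1)

    coarse = np.arange(4, dtype=np.float32).reshape(2, 2)   # assumed 10 m pixels
    resampled = resample_to_pixel_size(coarse, pixel_size=10, target_pixel_size=5)
    print(resampled.shape)  # (4, 4) -- now matches a layer with 5 m pixels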
When creating a new map, you can check and edit parameters of multidimensional maps
that represent 3D, 4D, or time series scenes. Typically, these parameters are taken au-
tomatically from the image data set and this display is for checking only. However in
special cases you may want to change the number, the distance, and the starting item of
slices and frames. The preconditions for amending these values are:
To open the edit multidimensional map parameters, create a new project or add a map to
an existing one. After preloading image layers press the Edit button. The Layer Properties
dialog box opens.
Editable parameters are listed in table 6.1 on the current page, Multidimensional Map
Parameters.
Confirm with OK and return to the previous dialog box. After the project with a new map
has been created or saved, the parameters of multidimensional maps cannot be changed
any more.
If the loaded image files are geo-referenced to one single coordinate system, image layers
and thematic layers with a different geographical coverage, size, or resolution can be
inserted.
This means that image data and thematic data of various origins can be used simultane-
ously. The different information channels can be brought into a reasonable relationship
to each other.
When dealing with Point Cloud processing and analysis, there are several components
that provide you with means to directly load and analyze LiDAR point clouds, as well
as to export results as raster images, such as DSM and DTM. Working with point clouds,
eCognition Developer uses a three-stage approach.
1. Point clouds are loaded and displayed using the maximum intensity among all
returns
2. Once loaded, additional layers can be generated using a LiDAR converter algo-
rithm (see the Reference Book for more information on the LiDAR File Converter
algorithm)
3. Gaps within the data can then be interpolated based on image objects.
Figure 6.7. The first image shows LiDAR intensity as displayed after loading. In the second,
the first return is displayed after applying the LiDAR converter algorithm. In the third image,
the LiDAR last return is displayed after applying the LiDAR converter algorithm.
To allow for quick display of the point cloud, rasterization is implemented in a simple
averaging mode based on intensity values. Complex interpolation of data can be done
based on the point cloud file converter algorithm.
To create a new project using point cloud data:
NOTE: In the loading process, a resolution must be set that determines the
grid spacing of the raster image generated from the LAS file. The resolution
is set to 1 by default, which is the optimal value for point cloud data with
a point density of 1 pt/m². For data with a lower resolution, set the value to
2 or above; for higher-resolution data, set it to 0.5 or below.
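As a rough illustration of how the grid spacing relates to the rasterization described above, the following Python sketch bins a toy point cloud into cells of the chosen resolution and averages the intensity per cell. All point coordinates and intensities are hypothetical, and the actual implementation may differ.

    import numpy as np

    def rasterize_intensity(x, y, intensity, resolution=1.0):
        """Bin points into a grid of the given spacing and average the
        intensity per cell -- a rough stand-in for the simple averaging
        rasterization mentioned above; the actual implementation may differ."""
        col = ((x - x.min()) / resolution).astype(int)
        row = ((y - y.min()) / resolution).astype(int)
        sums = np.zeros((row.max() + 1, col.max() + 1))
        counts = np.zeros_like(sums)
        for r, c, i in zip(row, col, intensity):
            sums[r, c] += i
            counts[r, c] += 1
        return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

    # Toy point cloud with roughly 1 pt/m2, so the default resolution of 1 fits
    x = np.array([0.2, 1.4, 1.6, 2.8])
    y = np.array([0.1, 0.9, 1.7, 2.5])
    print(rasterize_intensity(x, y, np.array([10.0, 42.0, 7.0, 99.0])))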
The Workspace window lets you view and manage all the projects in your workspace,
along with other relevant data. You can open it by selecting View > Windows >
Workspace from the main menu.
Figure 6.8. Workspace window with Summary and Export Specification and drop-down view
menu
• The left-hand pane contains the Workspace tree view. It represents the hierarchical
structure of the folders that contain the projects
• In the right-hand pane, the contents of a selected folder are displayed. You can
choose between List View, Folder View, Child Scene View and two Thumbnail
views.
In List View and Folder View, information is displayed about a selected project – its state,
scale, the time of the last processing and any available comments. The Scale column
displays the scale of the scene. Depending on the processed analysis, there are additional
columns providing exported result values.
To open a workspace, go to File > Open Workspace in the main menu. Workspaces
have the .dpj extension. This function uses the same customized dialog as described for
loading an image file on page 80.
To create a new workspace, select File > New Workspace from the main menu or use the
Create New Workspace button on the default toolbar. The Create New Workspace dialog
box lets you name your workspace and define its file location – it will then be displayed
as the root folder in the Workspace window.
If you need to define another output root folder, it is preferable to do so before you load
scenes into the workspace. However, you can modify the path of the output root folder
later on using File > Workspace Properties.
User Permissions
The two checkboxes at the bottom left of the Open Workspace dialog box determine the
permissions of the user who opens it.
• If both boxes are unchecked, users have full user rights. Users can analyze,
roll back and modify projects, and can also modify workspaces (add and delete
projects). However, they cannot rename workspaces
• If Read-Only is selected, users can only view projects and use History View. The
title bar will display ‘(Read Only)’
• If Edit-Only is selected, the title bar will display ‘(Limited)’ and the following
principles apply:
– Projects opened by other users are displayed as locked
– Users can open, modify (history, name, layers, segmentation, thematic lay-
ers), save projects and create new multi-map projects
– Users cannot analyze, rollback all, cancel, rename, modify workspaces, up-
date paths or update results
If a Project Edit user opens a workspace before a full user, the Workspace view will
display the status ‘locked’. Users can use the Project History function to show all modi-
fications made by other users.
Multiple access is not possible in Data Management mode. If a workspace is opened
using an older software version, it cannot be opened with eCognition Developer 9.0 at
the same time.
Before you can start working on data, you must import scenes in order to add image
data to the workspace. During import, a project is created for each scene. You can
select different predefined import templates according to the image acquisition facility
producing your image data.
If you only want to import a single scene into a workspace, use the Add Project command.
To import scenes to a workspace, choose File > Predefined Import from the main menu or
right-click the left-hand pane of the Workspace window and choose Predefined Import. 2
The Import Scenes dialog box opens (figure 6.10):
• You can use various import templates to import scenes. Each import template is
provided by a connector. Connectors are available according to which edition of
the eCognition Server you are using.
• Generic import templates are available for simple file structures of import data.
When using generic import templates, make sure that the file format you want to
import is supported
2. By default, the connectors for predefined import are stored in the installation folder under
\bin\drivers\import. If you want to use a different storage folder, you can change this setting under Tools
> Options > General.
• Import templates provided by connectors are used for loading the image data ac-
cording to the file structure that is determined by the image reader or camera pro-
ducing your image data.
• Customized import templates can be created for more specialized file structures of
import data
• A full list of supported and generic image formats is available in the accompanying
volume Supported Connectors and Drivers.
Generic import templates may support additional instruments or image readers not listed
here. For more information about unlisted import templates, contact Trimble via
www.ecognition.com/support
About Generic Import Templates Image files are scanned into a workspace with a specific
method, using import templates, and in a specific order according to folder hierarchy.
This section lists principles of basic import templates used for importing scenes within
the Import Scenes dialog box.
• Generic one scene per file
– The number of image layers per scene is dependent on the image file. For
example, if the single image file contains three image layers, the scene is
created with three image layers.
– Matching Pattern: anyname
– For the scene name, the file name without extension is used.
– Geocoded – one file per scene: Reads the geo-coordinates separately from
each readable image file.
• Generic one scene per folder
– All image layers are taken from all image files.
– Creates a scene for each subfolder.
– Takes all image files from the subfolder to create a scene.
– If no subfolder is available the import will fail.
– The name of the subfolder is used for the scene name.
– Geocoded – one file per scene: Reads the geo-coordinates separately from
each readable image file.
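The difference between the two generic templates can be sketched as a simple folder scan. The Python example below only illustrates the scanning logic; the *.tif pattern and the function names are assumptions for the sketch and are not part of the software.

    from pathlib import Path

    def scenes_one_per_file(root, pattern="*.tif"):
        """'Generic one scene per file': every matching image file becomes a
        scene named after the file (without extension)."""
        return {f.stem: [f] for f in sorted(Path(root).glob(pattern))}

    def scenes_one_per_folder(root, pattern="*.tif"):
        """'Generic one scene per folder': every subfolder becomes a scene that
        bundles all image files it contains; fails if there are no subfolders."""
        subfolders = [d for d in sorted(Path(root).iterdir()) if d.is_dir()]
        if not subfolders:
            raise ValueError("No subfolders found - import would fail")
        return {d.name: sorted(d.glob(pattern)) for d in subfolders}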
Options Images are scanned in a specific order in the preview or workspace. There are
two options:
eCognition Developer 9.0 offers several options for customizing the Workspace.
To select what information is displayed in columns, right-click in the pane (see fig-
ure 6.12) to display the context menu.
1. Expand All Columns will auto-fit the columns to the width of the pane. If this is
selected, the menu will subsequently display the Collapse All Columns option.
2. Selecting Insert Column or Modify Column displays the Modify Column dialog
box (figure 6.13):
• In Type, select the column you wish to display
• In Name, enter the text to appear in the heading
• In Width, select the width in pixels
• In Alignment, choose between left, right and center
Figure 6.14. Edit Views Dialog Box (selecting Add launches Add New View)
When editing processes, you can use the following algorithms to classify image objects:
• Assign Class assigns a class to an image object with certain features, using a thresh-
old value
• Classification uses the class description to assign a class
• Hierarchical Classification uses the class description and the hierarchical structure
of classes
• Advanced Classification Algorithms are designed to perform a specific classifica-
tion task, such as finding minimum or maximum values of functions, or identifying
connections between objects.
You will already have some familiarity with class descriptions and hierarchies from the
basic tutorial, where you manually assigned classes to the image objects derived from
segmentation.
There are two views in the Class Hierarchy window, which can be selected by clicking
the tabs at the bottom of the window:
• Groups view allows you to assign a logical classification structure to your classes.
In the figure below, a geographical view has been subdivided into land and sea; the
land area is further subdivided into forest and grassland. Changing the organization
of your classes will not affect other functions
• Inheritance view allows class descriptions to be passed down from parent to child
classes.
Double-clicking a class in either view will launch the Class Description dialog box. The
Class Description box allows you to change the name of the class and the color assigned
to it, as well as an option to insert a comment. Additional features are:
• Select Parent Class for Display, which allows you to select any available parent
classes in the hierarchy
Figure 7.1. The Class Hierarchy window, displaying Groups and Inheritance views
• Display Always, which enables the display of the class (for example, after export)
even if it has not been used to classify objects
• The modifier functions are:
– Shared: This locks a class to prevent it from being changed. Shared classes
can be shared among several rule sets
– Abstract: Abstract classes do not apply directly to image objects, but only
inherit or pass on their descriptions to child classes (in the Class Hierarchy
window they are signified by a gray ring around the class color)
– Inactive classes are ignored in the classification process (in the Class Hierar-
chy window they are denoted by square brackets)
– Use Parent Class Color activates color inheritance for class groups; in other
words, the color of a child class will be based on the color of its parent. When
this box is selected, clicking on the color picker launches the Edit Color
Brightness dialog box, where you can vary the brightness of the child class
color using a slider.
There are two ways of creating and defining classes; directly in the Class Hierarchy win-
dow, or from processes in the Process Tree window.
Creating a Class in the Class Hierarchy Window To create a new class, right-click in the
Class Hierarchy window and select Insert Class. The Class Description dialog box will
appear.
Enter a name for your class in the Name field and select a color of your choice. Press OK
and your new class will be listed in the Class Hierarchy window.
Creating a Class as Part of a Process Many algorithms allow the creation of a new class.
When the Class Filter parameter is listed under Parameters, clicking on the value will
display the Edit Classification Filter dialog box (figure 7.7). You can then right-click on
this window, select Insert Class, then create a new class using the same method outlined
in the preceding section.
The Assign Class Algorithm The Assign Class algorithm is a simple classification algo-
rithm, which allows you to assign a class based on a threshold condition (for example
brightness):
• Select Assign Class from the algorithm list in the Edit Process dialog box
• Select a feature for the condition via the Threshold Condition parameter and define
your feature values
• In the Algorithm Parameters pane, opposite Use Class, select a class you have
previously created, or enter a new name to create a new one (this will launch the
Class Description dialog box)
You can edit the class description to handle the features describing a certain class and the
logic by which these features are combined.
Inserting an Expression A new or an empty class description contains the ‘and (min)’
operator by default.
• To insert an expression, right-click the operator in the Class Description dialog and
select Insert New Expression. Alternatively, double-click on the operator
The Insert Expression dialog box opens, displaying all available features.
• Navigate through the hierarchy to find a feature of interest
• Right-click the selected feature it to list more options:
– Insert Threshold: In the Edit Threshold Condition dialog box, set a condi-
tion for the selected feature, for example Area <= 100. Click OK to add the
condition to the class description, then close the dialog box
– Insert Membership Function: In the Membership Function dialog box, edit
the settings for the selected feature.
Moving an Expression To move an expression, drag it to the desired location (figure 7.6).
• Select Operator for Expression: Allows you to choose a logical operator from the
list
• Edit Standard Nearest Neighbor Feature Space: Selects or deselects features for
the standard nearest neighbor feature space
• Edit Nearest Neighbor Feature Space: Selects or deselects features for the nearest
neighbor feature space.
Evaluating Undefined Image Objects Image objects retain the status ‘undefined’ when
they do not meet the criteria of a feature. If you want to use these image objects anyway,
for example for further processing, you must put them in a defined state. The function
Evaluate Undefined assigns the value 0 for a specified feature.
Deleting an Expression To delete an expression, do one of the following:
• Select the expression and press the Del button on your keyboard
• Right-click the expression and choose Delete Expression from the context menu.
Using Samples for Nearest Neighbor Classification The Nearest Neighbor classifier is
recommended when you need to make use of a complex combination of object features,
or your image analysis approach has to follow a set of defined sample image objects. The
principle is simple: first, the software needs samples that are typical representatives for
each class. Based on these samples, the algorithm searches for the closest sample image
object in the feature space of each image object. If an image object’s closest sample
object belongs to a certain class, the image object will be assigned to it.
For advanced users, the Feature Space Optimization function offers a method to math-
ematically calculate the best combination of features in the feature space. To classify
image objects using the Nearest Neighbor classifier, follow the recommended workflow:
Defining Sample Image Objects For the Nearest Neighbor classification, you need sam-
ple image objects. These are image objects that you consider a significant representative
of a certain class and feature. By doing this, you train the Nearest Neighbor classification
algorithm to differentiate between classes. The more samples you select, the more con-
sistent the classification. You can define a sample image object manually by clicking an
image object in the map view.
You can also load a Test and Training Area (TTA) mask, which contains previously manu-
ally selected sample image objects, or load a shapefile, which contains information about
image objects (see the sections on TTA masks and on creating samples from a shapefile
later in this chapter).
The Edit Classification Filter is available from the Edit Process dialog for appropriate
algorithms (e.g. Algorithm classification) and can be launched from the Class Filter
parameter.
The buttons at the top of the dialog allow you to:
The Use Array drop-down box lets you filter classes based on arrays (p 156).
The Assign Class algorithm is the simplest classification algorithm. It uses a threshold
condition to determine whether an image object belongs to a class or not. This algorithm
is used when a single threshold condition is sufficient.
1. In the Edit Process dialog box, select Assign Class from the Algorithm list
2. The Image Object Level domain is selected by default. In the Parameter pane, se-
lect the Threshold Condition you wish to use and define the operator and reference
value
3. In the Class Filter, select or create a class to which the algorithm applies.
The Classification algorithm uses class descriptions to classify image objects. It evaluates
the class description and determines whether an image object can be a member of a class.
Classes without a class description are assumed to have a membership value of one. You
can use this algorithm if you want to apply fuzzy logic to membership functions, or if
you have combined conditions in a class description.
Based on the calculated membership value, information about the three best-fitting
classes is stored in the image object classification window; therefore, you can see into
what other classes this image object would fit and possibly fine-tune your settings. To
apply this function:
1. In the Edit Process dialog box, select classification from the Algorithm list and
define the domain
2. From the algorithm parameters, select active classes that can be assigned to the
image objects
3. Select Erase Old Classification to remove existing classifications that do not match
the class description
4. Select Use Class Description if you want to use the class description for classifica-
tion. Class descriptions are evaluated for all classes. An image object is assigned
to the class with the highest membership value.
1. Classes are not applied to the classification of image objects whenever they contain
applicable child classes within the inheritance hierarchy.
Parent classes pass on their class descriptions to their child classes. 1 These child
classes then add additional feature descriptions and, if they are not parent classes
themselves, are meaningfully applied to the classification of image objects. The
above logic follows the concept that child classes are used to further divide
a more general class. Therefore, when defining subclasses for one class, always
keep in mind that not all image objects defined by the parent class are automatically
defined by the subclasses. If there are objects that would be assigned to the parent
class but none of the descriptions of the subclasses fit those image objects, they
will be assigned to neither the parent nor the child classes.
2. Classes are only applied to a classification of image objects, if all contained classi-
fiers are applicable.
The second rule applies mainly to classes containing class-related features. The
reason for this is that you might generate a class that describes objects of a certain
spectral value in addition to certain contextual information given by a class-related
1. Unlike the Classification algorithm, classes without a class description are assumed to have a membership value
of 0.
feature. The spectral value taken by itself without considering the context would
cover far too many image objects, so that only a combination of the two would lead
to satisfying results. As a consequence, when classifying without class-related fea-
tures, not only the expression referring to another class but the whole class is not
used in this classification process.
Contained and inherited expressions in the class description produce membership
values for each object and according to the highest membership value, each object
is then classified.
If the membership value of an image object is lower than the pre-defined minimum mem-
bership value, the image object remains unclassified. If two or more class descriptions
share the highest membership value, the assignment of an object to one of these classes
is random.
The three best classes are stored as the image object classification result. Class-related
features are considered only if explicitly enabled by the corresponding parameter.
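The selection of the best-fitting class can be illustrated with a small sketch. The following Python example uses hypothetical class names, membership values and the default minimum membership value of 0.1; it only mirrors the decision logic described above and ignores the random tie-break.

    def classify(memberships, minimum_membership=0.1):
        """Pick the class with the highest membership value; objects below the
        minimum membership value stay unclassified. Illustrative only."""
        best_class, best_value = max(memberships.items(), key=lambda kv: kv[1])
        if best_value < minimum_membership:
            return None  # remains unclassified
        # The three best-fitting classes are also of interest for fine-tuning
        top3 = sorted(memberships.items(), key=lambda kv: kv[1], reverse=True)[:3]
        return best_class, top3

    print(classify({"forest": 0.7, "grassland": 0.4, "water": 0.05}))
    print(classify({"forest": 0.05, "water": 0.02}))  # None -> unclassified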
1. In the Edit Process dialog box, select Hierarchical Classification from the Algo-
rithm drop-down list
2. Define the Domain if necessary.
3. For the Algorithm Parameters, select the active classes that can be assigned to the
image objects
4. Select Use Class-Related Features if necessary.
• Find domain extrema allows identifying areas that fulfill a maximum or minimum
condition within the defined domain
• Find local extrema allows identifying areas that fulfill a local maximum or mini-
mum condition within the defined domain and within a defined search range around
the object
• Find enclosed by class finds objects that are completely enclosed by a certain class
• Find enclosed by object finds objects that are completely enclosed by an image
object
• Connector classifies image objects that represent the shortest connection between
objects of a defined class.
7.3 Thresholds
• Go to the Class Hierarchy dialog box and double-click on a class. Open the Con-
tained tab of the Class Description dialog box. In the Contained area, right-click
the initial operator ‘and(min)’ and choose Insert New Expression on the context
menu
• From the Insert Expression dialog box, select the desired feature. Right-click on it
and choose Insert Threshold from the context menu. The Edit Threshold Condition
dialog box opens, where you can define the threshold expression
• In the Feature group box, the feature that has been selected to define the threshold
is displayed on the large button at the top of the box. To select a different feature,
click this button to reopen the Select Single Feature dialog box. Select a logical
operator
• Enter the number defining the threshold; you can also select a variable if one exists.
For some features such as constants, you can define the unit to be used and the
feature range displays below it. Click OK to apply your settings. The resulting
logical expression is displayed in the Class Description box.
The class description contains class definitions such as name and color, along with several
other settings. In addition it can hold expressions that describe the requirements an image
object must meet to be a member of this class when class description-based classification
is used. There are two types of expressions:
• Threshold expressions, which define whether a feature meets a given condition
• Membership functions, which apply fuzzy logic to the class description
You can use logical operators to combine the expressions and these expressions can be
nested to produce complex logical expressions.
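As an illustration of how such operators combine expressions, the short Python sketch below applies ‘and (min)’ and ‘or (max)’ to two hypothetical membership values; the expression names and values are invented for the example.

    # Membership values returned by individual expressions for one image object
    memberships = {"Brightness <= 100": 0.8, "Area >= 500": 0.4}

    def and_min(values):
        """'and (min)' operator: the weakest expression limits the class membership."""
        return min(values)

    def or_max(values):
        """'or (max)' operator: the strongest expression determines the membership."""
        return max(values)

    print(and_min(memberships.values()))  # 0.4
    print(or_max(memberships.values()))   # 0.8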
Membership functions allow you to define the relationship between feature values and
the degree of membership to a class using fuzzy logic.
Double-clicking on a class in the Class Hierarchy window launches the Class Description
dialog box. To open the Membership Function dialog, right-click on an expression (the
default expression in an empty box is ‘and (min)’) and, to insert a new one, select Insert
New Expression. You can edit an existing one by right-clicking and selecting Edit Expression.
• The selected feature is displayed at the top of the box, alongside an icon that allows
you to insert a comment
• The Initialize area contains predefined functions; these are listed in the next section.
It is possible to drag points on the graph to edit the curve, although this is usually
not necessary; we recommend you use membership functions that are as broad as
possible
• Maximum Value and Minimum Value allow you to set the upper and lower limits
of the membership function. (It is also possible to use variables as limits.)
• Left Border and Right Border values allow you to set the upper and lower limits
of a feature value. In this example, the fuzzy value is between 100 and 1,000, so
anything below 100 has a membership value of zero and anything above 1,000 has
a membership value of one
• Entire Range of Values displays the possible value range for the selected feature
• For certain features you can edit the Display Unit
• The name of the class you are currently editing is displayed at the bottom of the
dialog box.
• To display the comparable graphical output, go to the View Settings window and
select Mode > Classification Membership.
The predefined membership functions include:
• Larger than
• Smaller than
• Approximate Gaussian
• About range
• Full range
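A ‘larger than’ curve can be sketched as a simple ramp between the left and right borders. The following Python example reuses the borders 100 and 1,000 from the description above; the exact shape of the built-in functions may differ.

    def larger_than(value, left_border, right_border, min_value=0.0, max_value=1.0):
        """Sketch of a 'larger than' membership curve: min_value below the left
        border, max_value above the right border, linear ramp in between.
        (The software's built-in curves may be shaped differently.)"""
        if value <= left_border:
            return min_value
        if value >= right_border:
            return max_value
        fraction = (value - left_border) / (right_border - left_border)
        return min_value + fraction * (max_value - min_value)

    # Using the borders from the example above (100 and 1,000):
    for v in (50, 100, 550, 1000, 2000):
        print(v, round(larger_than(v, 100, 1000), 2))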
Figure 7.10. Sample Editor with generated membership functions and context menu
Membership functions can also be inserted and defined manually in the Sample Editor
window. To do this, right-click a feature and select Membership Functions > Edit/Insert,
which opens the Membership Function dialog box. This also allows you to edit an auto-
matically generated function.
To delete a generated membership function, select Membership Functions > Delete. You
can switch the display of generated membership functions on or off by right-clicking in
the Sample Editor window and activating or deactivating Display Membership Functions.
Editing Membership Function Parameters You can edit parameters of a membership func-
tion computed from sample objects.
1. In the Sample Editor, select Membership Functions > Parameters from the context
menu. The Membership Function Parameters dialog box opens
2. Edit the absolute Height of the membership function
3. Modify the Indent of membership function
4. Choose the Height of the linear part of the membership function
5. Edit the Extrapolation width of the membership function.
The minimum membership value defines the value an image object must reach to be
considered a member of the class.
If the membership value of an image object is lower than a predefined minimum, the
image object remains unclassified. If two or more class descriptions share the highest
membership value, the assignment of an object to one of these classes is random.
To change the default value of 0.1, open the Edit Minimum Membership Value dialog
box by selecting Classification > Advanced Settings > Edit Minimum Membership Value
from the main menu.
• Mean (arithm)
• Mean (geom)
• Mean (geom. weighted)
Similarities work like the inheritance of class descriptions. Basically, adding a similarity
to a class description is equivalent to inheriting from this class. However, since similari-
ties are part of the class description, they can be used with much more flexibility than an
inherited feature. This is particularly obvious when they are combined by logical terms.
A very useful method is the application of inverted similarities as a sort of negative inher-
itance: consider a class ‘bright’ if it is defined by high layer mean values. You can define
a class ‘dark’ by inserting a similarity feature to bright and inverting it, thus yielding the
meaning dark is not bright.
It is important to notice that this formulation of ‘dark is not bright’ refers to similarities
and not to classification. An object with a membership value of 0.25 to the class ‘bright’
would be correctly classified as ‘bright’. If, in the next cycle, a new class ‘dark’ is added
containing an inverted similarity to ‘bright’, the same object would be classified as ‘dark’,
since the inverted similarity produces a membership value of 0.75. If you want to specify
that ‘dark’ is everything which is not classified as ‘bright’ you should use the feature
Classified As.
Similarities are inserted into the class description like any other expression.
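The inverted-similarity idea from the ‘bright’/‘dark’ example can be written as a one-line calculation; the sketch below simply assumes that inverting a similarity returns one minus the membership value.

    def inverted_similarity(membership_to_reference):
        """Inverted similarity as described above: 'dark' is modelled as
        'not bright', i.e. 1 minus the membership to 'bright'."""
        return 1.0 - membership_to_reference

    membership_bright = 0.25
    print(inverted_similarity(membership_bright))  # 0.75 -> classified as 'dark'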
The combination of fuzzy logic and class descriptions is a powerful classification tool.
However, it has some major drawbacks:
• Internal class descriptions are not the most transparent way to classify objects
• It does not allow you to use a given class several times in a variety of ways
• Changing a class description after a classification step deletes the original class
description
• Classification will always occur when the Class Evaluation Value is greater than 0
(only one active class)
• Classification will always occur according to the highest Class Evaluation Value
(several active classes)
There are two ways to avoid these problems: stagger several processes containing the re-
quired conditions using the Parent Process Object (PPO) concept, or use evaluation classes.
Evaluation classes are as crucial for efficient development of auto-adaptive rule sets as
variables and temporary classes.
To clarify, evaluation classes are not a specific feature and are created in exactly the
same way as ‘normal’ classes. The idea is that evaluation classes will not appear in the
classification result; they are better considered as customized features than real classes.
Like temporary classes, we suggest you prefix their names with ‘_Eval’ and label them
all with the same color, to distinguish them from other classes.
To optimize the thresholds for evaluation classes, click on the Class Evaluation tab in the
Image Object Information window. Clicking on an object returns all of its defined values,
allowing you to adjust them as necessary.
In the above example, the rule set developer has specified a threshold of 0.55. Rather than
use this value in every rule set item, new processes simply refer to this evaluation class
when entering a value for a threshold condition; if developers wish to change this value,
they need only change the evaluation class.
TIP: When using this feature with the geometrical mean logical operator,
ensure that no classifications return a value of zero, as the multiplication of
values will also result in zero. If you want to return values between 0 and
1, use the arithmetic mean operator.
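The effect described in the tip can be verified with a quick calculation; the values below are arbitrary.

    import math

    values = [0.9, 0.8, 0.0]   # one expression returns zero

    arithmetic_mean = sum(values) / len(values)
    geometric_mean = math.prod(values) ** (1 / len(values))

    print(arithmetic_mean)  # ~0.567 -- still usable as an evaluation value
    print(geometric_mean)   # 0.0    -- a single zero wipes out the result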
Figure 7.15. Optimize thresholds for evaluation classes in the Image Object Information win-
dow
eCognition software implements the Nearest Neighbor as a classifier that can be applied
using the Classifier algorithm (page 134; KNN with k = 1) or using the concept of
classification based on the Nearest Neighbor as described in the following sections.
The nearest neighbor classifies image objects in a given feature space and with given sam-
ples for the classes of concern. First the software needs samples, typical representatives
for each class. After a representative set of sample objects has been declared the algo-
rithm searches for the closest sample object in the defined feature space for each image
object. The user can select the features to be considered for the feature space. If an image
object’s closest sample object belongs to Class A, the object will be assigned to Class A.
All class assignments in eCognition are determined by assignment values in the range
0 (no assignment) to 1 (full assignment). The closer an image object is located in the
feature space to a sample of class A, the higher the membership degree to this class. The
membership value has a value of 1 if the image object is identical to a sample. If the
image object differs from the sample, the membership value has a fuzzy dependency
on the feature space distance to the nearest sample of a class (see also Setting the Function
Slope on page 132 and Details on Calculation below).
For an image object to be classified, only the nearest sample is used to evaluate its mem-
bership value. The effective membership function at each point in the feature space is a
combination of fuzzy functions over all the samples of that class. When the membership
function is described as one-dimensional, this means it is related to one feature.
In higher dimensions, depending on the number of features considered, it is harder to
depict the membership functions. However, if you consider two features and two classes
only, it might look like the graph on figure 7.19 on the next page:
Figure 7.19. Membership function showing Class Assignment in two dimensions. Samples
are represented by small circles. Membership values to red and blue classes correspond to
shading in the respective color, whereby in areas in which objects will be classified as red, the
blue membership value is ignored, and vice-versa. Note that in areas where all membership
values are below a defined threshold (0.1 by default), image objects get no classification; those
areas are colored white in the graph
The distance in the feature space between a sample object and the image object to be
classified is standardized by the standard deviation of all feature values. Thus, features
of varying range can be combined in the feature space for classification. Due to the
standardization, a distance value of d = 1 means that the distance equals the standard
deviation of all feature values of the features defining the feature space.
Based on the distance d, a multidimensional exponential membership function z(d) is
computed:
z(d) = e^(−k·d²)
The parameter k determines the decrease of z(d). You can define this parameter with the
variable function slope:
k = ln(1 / function slope)
The default value for the function slope is 0.2. The smaller the parameter function slope,
the narrower the membership function. Image objects have to be closer to sample ob-
jects in the feature space to be classified. If the membership value is less than the mini-
mum membership value (default setting 0.1), then the image object is not classified.
Figure 7.20 demonstrates how the exponential function changes with different function
slopes.
Figure 7.20. Different Membership values for different Function Slopes of the same object for
d=1
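The membership formula above can be reproduced in a few lines. The following Python sketch takes the standardized distance d and the function slope as described; the printed values for d = 1 correspond to the behaviour shown in figure 7.20.

    import math

    def nn_membership(distance, function_slope=0.2):
        """Membership from the standardized feature-space distance d, using
        z(d) = exp(-k * d^2) with k = ln(1 / function_slope).
        With the default slope of 0.2, an object at d = 1 (one standard
        deviation from the nearest sample) gets a membership of 0.2."""
        k = math.log(1.0 / function_slope)
        return math.exp(-k * distance ** 2)

    for slope in (0.1, 0.2, 0.5):
        print(slope, round(nn_membership(1.0, slope), 2))
    # Memberships below the minimum membership value (0.1 by default)
    # leave the image object unclassified.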
To define feature spaces, Nearest Neighbor (NN) expressions are used and later applied
to classes. eCognition Developer distinguishes between two types of nearest neighbor
expressions:
• Standard Nearest Neighbor, where the feature space is valid for all classes it is
assigned to within the project.
• Nearest Neighbor, where the feature space can be defined separately for each class
by editing the class description.
Figure 7.21. The Edit Standard Nearest Neighbor Feature Space dialog box
1. From the main menu, choose Classification > Nearest Neighbor > Edit Standard
NN Feature Space. The Edit Standard Nearest Neighbor Feature Space dialog box
opens
2. Double-click an available feature to send it to the Selected pane. (Class-related
features only become available after an initial classification.)
3. To remove a feature, double-click it in the Selected pane
4. Use the Feature Space Optimization dialog (page 119) to combine the best features.
1. From the main menu, select Classification > Nearest Neighbor > Apply Standard
NN to Classes. The Apply Standard NN to Classes dialog box opens
Figure 7.22. The Apply Standard Nearest Neighbor to Classes dialog box
2. From the Available classes list on the left, select the appropriate classes by clicking
on them
3. To remove a selected class, click it in the Selected classes list. The class is moved
to the Available classes list
4. Click the All --> button to transfer all classes from Available classes to Selected
classes. To remove all classes from the Selected classes list, click the <-- All
button
5. Click OK to confirm your selection
6. In the Class Hierarchy window, double-click one class after the other to open the
Class Description dialog box and to confirm that the class contains the Standard
Nearest Neighbor expression.
NOTE: The Standard Nearest Neighbor feature space is now defined for
the entire project. If you change the feature space in one class descrip-
tion, all classes that contain the Standard Nearest Neighbor expression are
affected.
The feature space for both the Nearest Neighbor and the Standard Nearest Neighbor clas-
sifier can be edited by double-clicking them in the Class Description dialog box.
Once the Nearest Neighbor classifier has been assigned to all classes, the next step is to
collect samples representative of each one.
Successful Nearest Neighbor classification usually requires several rounds of sample se-
lection and classification. It is most effective to classify a small number of samples and
then select samples that have been wrongly classified. Within the feature space, mis-
classified image objects are usually located near the borders of the general area of this
class. Those image objects are the most valuable in accurately describing the feature
space region covered by the class. To summarize:
1. Insert Standard Nearest Neighbor into the class descriptions of classes to be con-
sidered
2. Select samples for each class; initially only one or two per class
3. Run the classification process. If image objects are misclassified, select more sam-
ples out of those and go back to step 2.
Feature Space Optimization is an instrument to help you find the combination of features
most suitable for separating classes, in conjunction with a nearest neighbor classifier.
It compares the features of selected classes to find the combination of features that pro-
duces the largest average minimum distance between the samples of the different classes.
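A much simplified sketch of this separation measure is shown below: for each candidate feature combination it standardizes the features and reports the distance between the closest samples of any two classes. The sample values, feature names and the exact distance definition are assumptions for illustration only.

    import itertools
    import math

    def standardize(samples, features):
        """Scale each feature by its standard deviation over all samples, so that
        features with different ranges can be combined (a simplification of the
        standardization described for the nearest neighbor)."""
        stds = []
        for f in features:
            vals = [s[f] for cls in samples.values() for s in cls]
            mean = sum(vals) / len(vals)
            std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5 or 1.0
            stds.append(std)
        return stds

    def separation(samples, features):
        """Smallest distance between closest samples of any two classes, in the
        feature space spanned by 'features'."""
        stds = standardize(samples, features)
        best = math.inf
        for a, b in itertools.combinations(samples, 2):
            for sa in samples[a]:
                for sb in samples[b]:
                    d = math.dist([sa[f] / s for f, s in zip(features, stds)],
                                  [sb[f] / s for f, s in zip(features, stds)])
                    best = min(best, d)
        return best

    # Toy samples with two candidate features; in practice these come from
    # the sample image objects selected in the map view.
    samples = {
        "forest":    [{"ndvi": 0.8, "brightness": 40}, {"ndvi": 0.7, "brightness": 50}],
        "grassland": [{"ndvi": 0.4, "brightness": 90}, {"ndvi": 0.5, "brightness": 80}],
    }
    for combo in (["ndvi"], ["brightness"], ["ndvi", "brightness"]):
        print(combo, round(separation(samples, combo), 2))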
Using Feature Space Optimization The Feature Space Optimization dialog box helps you
optimize the feature space of a nearest neighbor expression.
To open the Feature Space Optimization dialog box, choose Tools > Feature Space Op-
timization or Classification > Nearest Neighbor > Feature Space Optimization from the
main menu.
1. To calculate the optimal feature space, press Select Classes to select the classes you
want to calculate. Only classes for which you selected sample image objects are
available for selection
2. Click the Select Features button and select an initial set of features, which will later
be reduced to the optimal number of features. You cannot use class-related features
in the feature space optimization
3. Highlight single features to select a subset of the initial feature space
4. Select the image object level for the optimization
5. Enter the maximum number of features within each combination. A high number
reduces the speed of calculation
6. Click Calculate to generate feature combinations and their distance matrices 2
7. After calculation, the Optimized Feature Space group box displays the following
results:
• The Best Separation Distance between the samples. This value is the mini-
mum over all class combinations, because the overall separation is only as
good as the separation of the closest pair of classes.
• The Dimension indicates the number of features of the best feature combina-
tion.
8. Click Show Distance Matrix to display the Class Separation Distance Matrix for
Selected Features dialog box. The matrix is only available after a calculation.
2. The distance calculation is only based upon samples. Therefore, adding or deleting samples also affects the
separability of classes.
TIP: When you change any setting of features or classes, you must first
click Calculate before the matrix reflects these changes.
Figure 7.26. The Feature Space Optimization – Advanced Information dialog box
1. The Result List displays all feature combinations and their corresponding distance
values for the closest samples of the classes. The feature space with the highest
result is highlighted by default
2. The Result Chart shows the calculated maximum distances of the closest samples
along the dimensions of the feature spaces. The blue dot marks the currently se-
lected feature space
3. Click the Show Distance Matrix button to display the Class Separation Distance
Matrix window. This matrix shows the distances between samples of the se-
lected classes within a selected feature space. Select a feature combination and
re-calculate the corresponding distance matrix.
Using the Optimization Results You can automatically apply the results of your Feature
Space Optimization efforts to the project.
1. In the Feature Space Optimization Advanced Information dialog box, click Apply
to Classes to generate a nearest neighbor classifier using the current feature space
for selected classes.
2. Click Apply to Std. NN. to use the currently selected feature space for the Standard
Nearest Neighbor classifier.
3. Check the Classify Project checkbox to automatically classify the project when
choosing Apply to Std. NN. or Apply to Classes.
The Sample Editor window is the principal tool for inputting samples. For a selected
class, it shows histograms of selected features of samples in the currently active map.
The same values can be displayed for all image objects at a certain level or all levels in
the image object hierarchy.
You can use the Sample Editor window to compare the attributes or histograms of image
objects and samples of different classes. It is helpful to get an overview of the feature
distribution of image objects or samples of specific classes. The features of an image
object can be compared to the total distribution of this feature over one or all image
object levels.
Use this tool to assign samples using a Nearest Neighbor classification or to compare an
image object to already existing samples, in order to determine to which class an image
object belongs. If you assign samples, features can also be compared to the samples of
other classes. Only samples of the currently active map are displayed.
1. Open the Sample Editor window using Classification > Samples > Sample Editor
from the main menu
2. By default, the Sample Editor window shows diagrams for only a selection of fea-
tures. To select the features to be displayed in the Sample Editor, right-click in the
Sample Editor window and select Select Features to Display
3. In the Select Displayed Features dialog box, double-click a feature from the left-
hand pane to select it. To remove a feature, click it in the right-hand pane
4. To add the features used for the Standard Nearest Neighbor expression, select Dis-
play Standard Nearest Neighbor Features from the context menu.
Figure 7.28. The Sample Editor window. The first graph shows the Active Class and Compare
Class histograms. The second is a histogram for all image object levels. The third graph
displays an arrow indicating the feature value of a selected image object
Comparing Features
To compare samples or layer histograms of two classes, select the classes or the levels
you want to compare in the Active Class and Compare Class lists.
Values of the active class are displayed in black in the diagram, the values of the compared
class in blue. The value range and standard deviation of the samples are displayed on the
right-hand side.
When you select an image object, the feature value is highlighted with a red pointer.
This enables you to compare different objects with regard to their feature values. The
following functions help you to work with the Sample Editor:
• The feature range displayed for each feature is limited to the currently detected
feature range. To display the whole feature range, select Display Entire Feature
Range from the context menu
• To hide the display of the axis labels, deselect Display Axis Labels from the context
menu
• To display the feature value of samples from inherited classes, select Display Sam-
ples from Inherited Classes
• To navigate to a sample image object in the map view, click on the red arrow in the
Sample Editor.
In addition, the Sample Editor window allows you to generate membership functions.
The following options are available:
Selecting Samples
1. To assign sample objects, activate the input mode. Choose Classification > Samples
> Select Samples from the main menu bar. The map view changes to the View
Samples mode.
2. To open the Sample Editor window, which helps to gather adequate sample image
objects, do one of the following:
• Choose Classification > Samples > Sample Editor from the main menu.
• Choose View > Sample Editor from the main menu.
3. To select a class from which you want to collect samples, do one of the following:
• Select the class in the Class Hierarchy window if available.
• Select the class from the Active Class drop-down list in the Sample Editor
window.
This makes the selected class your active class so any samples you collect
will be assigned to that class.
4. To define an image object as a sample for a selected class, double-click the image
object in the map view. To undo the declaration of an object as sample, double-
click it again. You can select or deselect multiple objects by holding down the
Shift key.
As long as the sample input mode is activated, the view will always change back to
the Sample View when an image object is selected. Sample View displays sample
image objects in the class color; this way the accidental input of samples can be
avoided.
5. To view the feature values of the sample image object, go to the Sample Editor
window. This enables you to compare different image objects with regard to their
feature values.
6. Click another potential sample image object for the selected class. Analyze its
membership value and its membership distance to the selected class and to all other
classes within the feature space. Here you have the following options:
• The potential sample image object includes new information to describe the
selected class: low membership value to selected class, low membership
value to other classes.
• The potential sample image object is really a sample of another class: low
membership value to selected class, high membership value to other classes.
• The potential sample image object is needed as sample to distinguish the se-
lected class from other classes: high membership value to selected class, high
membership value to other classes.
In the first iteration of selecting samples, start with only a few samples for
each class, covering the typical range of the class in the feature space. Other-
wise, its heterogeneous character will not be fully considered.
7. Repeat the same for remaining classes of interest.
8. Classify the scene.
9. The results of the classification are now displayed in the map view. In the View
Settings dialog box, the mode has changed from Samples to Classification.
10. Note that some image objects may have been classified incorrectly or not at all. All
image objects that are classified are displayed in the appropriate class color. If you
hover the cursor over a classified image object, a tool-tip pops up indicating the
class to which the image object belongs, its membership value, and whether or not
it is a sample image object. Image objects that are unclassified appear transparent.
If you hover over an unclassified object, a tool-tip indicates that no classification
has been applied to this image object. This information is also available in the
Classification tab of the Image Object Information window.
11. The refinement of the classification result is an iterative process:
• First, assess the quality of your selected samples
• Then, remove samples that do not represent the selected class well and add
samples that are a better match or have previously been misclassified
• Classify the scene again
• Repeat this step until you are satisfied with your classification result.
12. When you have finished collecting samples, remember to turn off the Select Sam-
ples input mode. As long as the sample input mode is active, the viewing mode
will automatically switch back to the sample viewing mode, whenever an image
object is selected. This is to prevent you from accidentally adding samples without
taking notice.
Figure 7.29. Map view with selected samples in View Samples mode. (Image data courtesy of
Ministry of Environmental Affairs of Sachsen-Anhalt, Germany.)
Once a class has at least one sample, the quality of a new sample can be assessed in
the Sample Selection Information window. It can help you to decide if an image object
contains new information for a class, or if it should belong to another class.
1. To open the Sample Selection Information window choose Classification > Sam-
ples > Sample Selection Information or View > Sample Selection Information from
the main menu
2. Names of classes are displayed in the Class column. The Membership column
shows the membership value of the Nearest Neighbor classifier for the selected
image object
3. The Minimum Dist. column displays the distance in feature space to the closest
sample of the respective class
4. The Mean Dist. column indicates the average distance to all samples of the corre-
sponding class
5. The Critical Samples column displays the number of samples within a critical dis-
tance to the selected class in the feature space
6. The Number of Samples column indicates the number of samples selected for the
corresponding class.
The following highlight colors are used for a better visual overview:
• Gray: Used for the selected class.
• Red: Used if a selected sample is critically close to samples of other classes
in the feature space.
• Green: Used for all other classes that are not in a critical relation to the se-
lected class.
The critical sample membership value can be changed by right-clicking inside the win-
dow. Select Modify Critical Sample Membership Overlap from the context menu. The
default value is 0.7, which means all membership values higher than 0.7 are critical.
To select which classes are shown, right-click inside the dialog box and choose Select
Classes to Display.
Navigating Samples
To navigate to samples in the map view, select samples in the Sample Editor window to
highlight them in the map view.
1. Before navigating to samples you must select a class in the Sample Selection
Information window.
2. To activate Sample Navigation, do one of the following:
• Choose Classification > Samples > Sample Editor Options > Activate Sample
Navigation from the main menu
• Right-click inside the Sample Editor and choose Activate Sample Navigation
from the context menu.
3. To navigate samples, click in a histogram displayed in the Sample Editor window.
A selected sample is highlighted in the map view and in the Sample Editor window.
4. If there are two or more samples so close together that it is not possible to select
them separately, you can use one of the following:
• Select a Navigate to Sample button.
• Select from the sample selection drop-down list.
Figure 7.32. For sample navigation choose from a list of similar samples
Deleting Samples
• Deleting samples means unmarking sample image objects; they continue to exist
as regular image objects.
• To delete a single sample, double-click or Shift-click it.
• To delete samples of specific classes, choose one of the following from the main
menu:
– Classification > Class Hierarchy > Edit Classes > Delete Samples, which
deletes all samples from the currently selected class.
– Classification > Samples > Delete Samples of Classes, which opens the
Delete Samples of Selected Classes dialog box. Move the desired classes
from the Available Classes to the Selected Classes list (or vice versa) and
click OK
• To delete all samples you have assigned, select Classification > Samples > Delete
All Samples.
Alternatively you can delete samples by using the Delete All Samples algorithm or
the Delete Samples of Class algorithm.
Existing samples can be stored in a file called a training and test area (TTA) mask, which
allows you to transfer them to other scenes.
To allow mapping samples to image objects, you can define the degree of overlap that a
sample image object must show to be considered within the training area. The TTA
mask also contains information about classes for the map. You can use these classes or
add them to your existing class hierarchy.
Figure 7.33. The Create TTA Mask from Samples dialog box
1. From the main menu select Classification > Samples > Create TTA Mask from
Samples
2. In the dialog box, select the image object level that contains the samples that you
want to use for the TTA mask. If your samples are all in one image object level, it
is selected automatically and cannot be changed
3. Click OK to save your changes. Your selection of sample image objects is now
converted to a TTA mask
4. To save the mask to a file, select Classification > Samples > Save TTA Mask. Enter
a file name and select your preferred file format.
To load samples from an existing Training and Test Area (TTA) mask:
1. From the main menu select Classification > Samples > Load TTA Mask.
2. In the Load TTA Mask dialog box, select the desired TTA Mask file and click
Open.
3. In the Load Conversion Table dialog box, open the corresponding conversion table
file. The conversion table enables mapping of TTA mask classes to existing classes
in the currently displayed map. You can edit the conversion table.
4. Click Yes to create classes from the conversion table. If your map already contains
classes, you can replace them with the classes from the conversion file or add them.
If you choose to replace them, your existing class hierarchy will be deleted.
If you want to retain the class hierarchy, you can save it to a file.
5. Click Yes to replace the class hierarchy by the classes stored in the conversion
table.
6. To convert the TTA Mask information into samples, select Classification > Samples
> Create Samples from TTA Mask. The Apply TTA Mask to Level dialog box
opens.
7. Select which level you want to apply the TTA mask information to. If the project
contains only one image object level, this level is preselected and cannot be
changed.
8. In the Create Samples dialog box, enter the Minimum Overlap for Sample Objects
and click OK.
The default value is 0.75. Since a single training area of the TTA mask does not
necessarily have to match an image object, the minimum overlap decides whether
an image object that is not 100% within a training area in the TTA mask should be
declared a sample.
The value 0.75 indicates that 75% of an image object has to be covered by the
sample area for a certain class given by the TTA mask in order for a sample for this
class to be generated.
The map view displays the original map with sample image objects selected where
the test areas of the TTA mask have been.
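The minimum-overlap rule can be illustrated with a few lines of Python; the pixel sets below are invented and the 0.75 threshold is the default mentioned above.

    def is_sample(object_pixels, training_area_pixels, minimum_overlap=0.75):
        """Decide whether an image object becomes a sample: the fraction of its
        pixels covered by the TTA training area of a class must reach the
        minimum overlap (0.75 by default, i.e. 75 %)."""
        overlap = len(object_pixels & training_area_pixels) / len(object_pixels)
        return overlap >= minimum_overlap

    image_object = {(0, 0), (0, 1), (1, 0), (1, 1)}     # 4 pixels
    training_area = {(0, 0), (0, 1), (1, 0), (5, 5)}    # covers 3 of them
    print(is_sample(image_object, training_area))       # 0.75 -> True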
You can check and edit the linkage between classes of the map and the classes of a Train-
ing and Test Area (TTA) mask.
You must edit the conversion table only if you chose to keep your existing class hierarchy
and used different names for the classes. A TTA mask has to be loaded and the map must
contain classes.
1. To edit the conversion table, choose Classification > Samples > Edit Conversion
Table from the main menu
2. The Linked Class list displays how classes of the map are linked to classes of the
TTA mask. To edit the linkage between the TTA mask classes and the classes of the
current active map, right-click a TTA mask entry and select the appropriate class
from the drop-down list
3. Choose Link by name to link all identical class names automatically. Choose Un-
link all to remove the class links.
You can use shapefiles to create sample image objects. A shapefile, also called an ESRI
shapefile, is a standardized vector file format used to visualize geographic data. You can
obtain shapefiles from other geo applications or by exporting them from eCognition maps.
A shapefile consists of several individual files such as .shx, .shp and .dbf.
To provide an overview, using a shapefile for sample creation comprises the following
steps:
• Opening a project and loading the shapefile as a thematic layer into a map
• Segmenting the map using the thematic layer
• Classifying image objects using the shapefile information.
• Select File > Modify Open Project from the main menu. The Modify Project dialog
box opens
• Insert the shapefile as a new thematic layer. Confirm with OK.
• In the Process Tree window, right-click and select Insert Child from the context
menu
• From the Algorithm drop-down list, select Multiresolution Segmentation. Under
the segmentation settings, select Yes in the Thematic Layer entry.
The segmentation finds all objects of the shapefile and converts them to image objects in
the thematic layer.
The child process identifies image objects using information from the thematic layer –
use the threshold classifier and a feature created from the thematic layer attribute table,
for example ‘Image Object ID’ or ‘Class’ from a shapefile ‘Thematic Layer 1’
• Select the following feature: Object Features > Thematic Attributes > Thematic
Object Attribute > [Thematic Layer 1]
• Set the threshold to, for example, > 0 or = “Sample” according to the content of
your thematic attributes
• For the parameter Use Class, select the new class for assignment.
• To mark the classified image objects as samples, add another child process
• Use the ‘classified image objects to samples’ algorithm. From the Domain list, select New Level. No further conditions are required
• Execute the process.
The Sample Brush is an interactive tool that allows you to use your cursor like a brush,
creating samples as you sweep it across the map view. Go to the Sample Editor toolbar
(View > Toolbars > Sample Editor) and press the Select Sample button. Right-click on
the image in map view and select Sample Brush.
Drag the cursor across the scene to select samples. By default, samples are not reselected if the image objects are already classified, but existing samples are replaced if you drag over them again. These settings can be changed in the Sample Brush group of the Options dialog box. To deselect samples, press Shift as you drag.
NOTE: The Sample Brush will select up to one hundred image objects at a
time, so you may need to increase magnification if you have a large number
of image objects.
The Nearest Neighbor Function Slope defines the distance an object may have from the
nearest sample in the feature space while still being classified. Enter values between 0
and 1. Higher values result in a larger number of classified objects.
1. To set the function slope, choose Classification > Nearest Neighbor > Set NN Func-
tion Slope from the main menu bar.
2. Enter a value and click OK.
Figure 7.38. The Set Nearest Neighbor Function Slope dialog box
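To picture the effect of the slope, the following illustrative Python sketch uses a membership curve that equals 1 at distance 0 and equals the slope value at a feature space distance of 1. This is only an illustration of the principle; the exact membership function used by eCognition is documented in the Reference Book.

def membership(distance, slope):
    # Illustrative decaying curve: 1.0 at distance 0, `slope` at distance 1.
    return slope ** distance

for slope in (0.1, 0.5):
    # A higher slope keeps the membership value high at larger distances,
    # so more objects pass the classification threshold.
    print(slope, [round(membership(d, slope), 3) for d in (0.0, 0.5, 1.0, 2.0)])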
• It is not possible to use the feature Similarity To with a class that is described by a
nearest neighbor with class-related features.
• Classes cannot inherit from classes that use nearest neighbor-containing class-
related features. Only classes at the bottom level of the inheritance class hierarchy
can use class-related features in a nearest neighbor.
• It is impossible to use class-related features that refer to classes in the same group
including the group class itself.
7.5 Classifier Algorithms
7.5.1 Overview
The classifier algorithm allows classification based on the following statistical classification algorithms:
• Bayes
• KNN (K Nearest Neighbor)
• SVM (Support Vector Machine)
• Decision Tree
• Random Trees
The Classifier algorithm can be applied either pixel-based or object-based. For an example project containing these classifiers, please refer to http://community.ecognition.com/home/CART%20-%20SVM%20Classifier%20Example.zip/view
7.5.2 Bayes
A Bayes classifier is a simple probabilistic classifier based on applying Bayes' theorem with strong (naive) independence assumptions: the presence or absence of one feature of a class is assumed to be unrelated to the presence or absence of any other feature. For example, even if color, roundness and diameter depend on each other, a naive Bayes classifier considers each of these properties to contribute independently to the probability that a fruit is an apple. An advantage of the naive Bayes classifier is that it only requires a small amount of training data to estimate the parameters (means and variances of the variables) necessary for classification. Because independent variables are assumed, only the variances of the variables for each class need to be determined and not the entire covariance matrix.
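Outside eCognition, the same statistical idea can be reproduced with a few lines of scikit-learn. The sketch below is only an illustration of a naive Bayes classifier on invented feature values, not eCognition's internal implementation:

from sklearn.naive_bayes import GaussianNB

# Two invented features per object (e.g. mean brightness and area) and two classes.
X_train = [[120, 30], [130, 35], [60, 200], [55, 220]]
y_train = ["roof", "roof", "vegetation", "vegetation"]

clf = GaussianNB()                 # estimates only per-class means and variances
clf.fit(X_train, y_train)
print(clf.predict([[125, 32]]))    # -> ['roof']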
7.5.3 KNN (K Nearest Neighbor)
The k-nearest neighbor algorithm (k-NN) is a method for classifying objects based on
closest training examples in the feature space. k-NN is a type of instance-based learning,
or lazy learning where the function is only approximated locally and all computation is
deferred until classification. The k-nearest neighbor algorithm is amongst the simplest of
all machine learning algorithms: an object is classified by a majority vote of its neighbors,
with the object being assigned to the class most common amongst its k nearest neighbors
(k is a positive integer, typically small). The 5-nearest-neighbor classification rule is to
assign to a test sample the majority class label of its 5 nearest training samples. If k = 1,
then the object is simply assigned to the class of its nearest neighbor.
This means k is the number of samples to be considered in the neighborhood of an un-
classified object/pixel. The best choice of k depends on the data: larger values reduce the
effect of noise in the classification, but the class boundaries are less distinct.
eCognition software implements the Nearest Neighbor both as a classifier that can be applied using the classifier algorithm (KNN with k = 1) and through the concept of classification based on the Nearest Neighbor Classification described earlier in this chapter.
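As an illustration of the voting rule described above (not eCognition's implementation), a k-NN classifier can be sketched with scikit-learn; the training values below are invented:

from sklearn.neighbors import KNeighborsClassifier

X_train = [[120, 30], [130, 35], [125, 40], [60, 200], [55, 220], [65, 210]]
y_train = ["roof", "roof", "roof", "vegetation", "vegetation", "vegetation"]

# k = 5: the unknown object receives the majority label of its 5 nearest samples.
clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)
print(clf.predict([[118, 45]]))    # -> ['roof']

# With n_neighbors=1 the object simply takes the class of its single nearest sample,
# which corresponds to the KNN (k = 1) configuration mentioned above.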
7.5.4 SVM (Support Vector Machine)
A support vector machine (SVM) is a concept in computer science for a set of related
supervised learning methods that analyze data and recognize patterns, used for classifica-
tion and regression analysis. The standard SVM takes a set of input data and predicts, for
each given input, which of two possible classes the input is a member of. Given a set of
training examples, each marked as belonging to one of two categories, an SVM training
algorithm builds a model that assigns new examples into one category or the other. An
SVM model is a representation of the examples as points in space, mapped so that the
examples of the separate categories are divided by a clear gap that is as wide as possible.
New examples are then mapped into that same space and predicted to belong to a cate-
gory based on which side of the gap they fall on. Support Vector Machines are based on
the concept of decision planes defining decision boundaries. A decision plane separates
between a set of objects having different class memberships.
Different kernels can be used in Support Vector Machine models; eCognition includes the linear and the radial basis function (RBF) kernel. The RBF kernel is the most popular choice of kernel type used in Support Vector Machines. Training of the SVM classifier involves the minimization of an error function with C as the capacity constant.
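A minimal scikit-learn sketch of the two kernel types is shown below (an illustration of the SVM concept with invented data, not eCognition's implementation):

from sklearn.svm import SVC

X_train = [[120, 30], [130, 35], [125, 40], [60, 200], [55, 220], [65, 210]]
y_train = [0, 0, 0, 1, 1, 1]

# Linear kernel: a straight decision plane; C is the capacity constant
# that controls the penalty for training errors.
linear_svm = SVC(kernel="linear", C=1.0).fit(X_train, y_train)

# RBF kernel: a curved decision boundary, the most popular choice.
rbf_svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

print(linear_svm.predict([[118, 45]]), rbf_svm.predict([[118, 45]]))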
7.5.5 Decision Tree (CART)
Decision tree learning is a method commonly used in data mining where a series of
decisions are made to segment the data into homogeneous subgroups. The model looks
like a tree with branches - while the tree can be complex, involving a large number of
splits and nodes. The goal is to create a model that predicts the value of a target variable
based on several input variables. A tree can be “learned” by splitting the source set into
subsets based on an attribute value test. This process is repeated on each derived subset
in a recursive manner called recursive partitioning. The recursion is completed when the
subset at a node all has the same value of the target variable, or when splitting no longer
adds value to the predictions. The purpose of the analyses via tree-building algorithms is
to determine a set of if-then logical (split) conditions.
The minimum number of samples needed per node is defined by the parameter Min sample count. Finding the right-sized tree may require some experience: a tree with too few splits misses out on improved predictive accuracy, while a tree with too many splits is unnecessarily complicated. Cross-validation helps to address this issue and is controlled by the eCognition parameter Cross validation folds. For a cross-validation, the classification tree is computed from the learning sample and its predictive accuracy is tested on test samples. If the costs for the test sample exceed the costs for the learning sample, this indicates poor cross-validation and that a differently sized tree might cross-validate better.
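The sketch below mirrors these two parameters with scikit-learn's CART implementation (an illustration only, with invented data; the parameter names differ from the eCognition dialog):

from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X = [[120, 30], [130, 35], [125, 40], [118, 28], [122, 33], [128, 37],
     [60, 200], [55, 220], [65, 210], [58, 215], [62, 205], [57, 225]]
y = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]

# min_samples_split plays the role of a minimum sample count per node.
tree = DecisionTreeClassifier(min_samples_split=4, random_state=0)

# 3-fold cross-validation: the tree is trained on two folds and tested on the third.
scores = cross_val_score(tree, X, y, cv=3)
print(scores.mean())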
7.5.6 Random Trees
The random trees classifier is more a framework than a specific model. It takes an input feature vector and classifies it with every tree in the forest; each tree outputs the class label of the terminal node in which the vector ends up. The label that obtains the majority of "votes" over all trees is the random forest prediction. All trees are trained with the same features but on different training sets, which are generated from the original training set. This is done with the bootstrap procedure: for each training set, the same number of vectors as in the original set (N) is selected. The vectors are chosen with replacement, which means some vectors will appear more than once and some will be absent. At each node, not all variables are used to find the best split but only a randomly selected subset of them. For each node a new subset is constructed; its size is fixed for all nodes and all trees. It is a training parameter, set to √(number of variables). None of the trees that are built are pruned.
In random trees the error is estimated internally during the training. When the training
set for the current tree is drawn by sampling with replacement, some vectors are left out.
This data is called out-of-bag data, in short "oob" data. The oob data size is about N/3. The classification error is estimated based on this oob data.
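The same ingredients – bootstrapped training sets, a random feature subset per node and the out-of-bag error – appear in scikit-learn's random forest, sketched here on synthetic data (an illustration only, not eCognition's implementation):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))               # 200 samples, 5 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # synthetic two-class label

forest = RandomForestClassifier(
    n_estimators=100,        # number of trees in the forest
    bootstrap=True,          # each tree gets its own bootstrapped training set
    max_features="sqrt",     # random subset of sqrt(number of variables) per split
    oob_score=True,          # estimate the error from the out-of-bag samples
    random_state=0,
)
forest.fit(X, y)
print(forest.oob_score_)     # accuracy estimated on the oob data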
As described in the Reference Book > Template Matching, you can apply templates generated with eCognition's Template Matching Editor to your imagery.
Please refer to our template matching videos in the eCognition community (http://www.ecognition.com/community), which cover a variety of application examples and workflows.
The typical workflow comprises two steps. Template generation using the template editor,
and template application using the template matching algorithm.
To generate templates:
• Create a new project
• Open View > Windows > Template Editor
• Insert samples in the Select Samples tab
• Generate template(s) based on the first samples selected
Figure 7.40. Template Matching Algorithm to generate Correlation Coefficient Image Layer
• To generate a temporary layer with correlation coefficients (output layer) you need to provide
– the folder containing the template(s)
– the layer that should be correlated with this template
• To generate in addition a thematic layer with points for each target you need to
provide
– a threshold for the correlation coefficient for a valid target
• Review your targets using the image object table (zoom in and make only a small
region of the image visible, and any object you select in the table will be visible in
the image view, typically centered).
Figure 7.41. RGB Image Layer (left) and Correlation Coefficient Image Layer (right)
Figure 7.42. Template Matching Algorithm to generate Thematic Layer with Results
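The core of the workflow – computing a correlation coefficient layer and thresholding it to obtain target points – can be reproduced outside eCognition with OpenCV. The sketch below is only an illustration of the principle; the file names are hypothetical and this is not eCognition's implementation:

import cv2
import numpy as np

image = cv2.imread("scene.tif", cv2.IMREAD_GRAYSCALE)        # layer to be correlated
template = cv2.imread("template.tif", cv2.IMREAD_GRAYSCALE)  # one generated template

# Correlation coefficient layer: one value in [-1, 1] per template position.
corr = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)

# Thematic point layer: keep only positions above the validity threshold.
threshold = 0.8
ys, xs = np.where(corr >= threshold)
print(list(zip(xs.tolist(), ys.tolist())))   # candidate target positions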
Some images do not carry coordinate information; therefore, units, scales and pixel sizes of projects can be set manually in two ways:
• When you create a project, you can define the units in the Create Project dialog box
(File > Create Project). When you specify a unit in your image analysis, eCognition
Developer will always reference this value. For example, if you have an image of
a land area with a scale of 15 pixels/km, enter 15 in the Pixel Size (Unit) box and
select kilometer from the drop-down box below it. (You can also change the unit
of an existing project by going to File > Modify Open Project.)
• During rule set execution with the Scene Properties algorithm. (See the Reference
Book for more details.)
The default unit of a project with no resolution information is a pixel. For these projects,
the pixel size cannot be altered. Once a unit is defined in a project, any number or features
within a rule set can be used with a defined unit. Here the following rules apply:
• A feature can only have one unit within a rule set. The unit of the feature can be
edited everywhere where the feature is listed, but always applies to every use of
this feature – for example in rule sets, image object information and classes
• All geometry-related features, such as ‘distance to’ let you specify units, for exam-
ple pixels, metrics, or the ‘same as project unit’ value
• When using Object Features > Position, you can choose to display user coordinates
(‘same as project unit’ or ‘coordinates’). Selecting ‘pixel’ uses the pixel (image)
coordinate system.
• In Customized Arithmetic Features, the set calculation unit applies to numbers, not the used features. Be aware that customized arithmetic features cannot mix coordinate features with metric features – for example, (Xmax(coor.) − Xmin(coor.)) / Length(m) would require two customized arithmetic features.
Since ‘same as project unit’ might vary with the project, we recommend using absolute
units.
Thematic layers are raster or vector files that have associated attribute tables, which can
add additional information to an image. For instance, a satellite image could be combined
with a thematic layer that contains information on the addresses of buildings and street
names. They are usually used to store and transfer results of analyses.
Thematic vector layers comprise only polygons, lines or points. While image layers con-
tain continuous information, the information of thematic raster layers is discrete. Image
layers and thematic layers must be treated differently in both segmentation and classifica-
tion.
Typically – unless you have created them yourself – you will have acquired a thematic
layer from an external source. It is then necessary to import this file into your project.
eCognition Developer supports a range of thematic formats and a thematic layer can be
added to a new project or used to modify an existing project. Vector data is rasterized when imported into eCognition Developer and, depending on the resolution, this can degrade the information. For example, if a vector file collected at 10 cm accuracy is used in combination with 1 m image data, small features in the vector layer might be lost, and the shape of the remaining features might change slightly due to the rasterization based on the 1 m grid. To retain the original detail of the imported
vector layer, ensure that the pixel size of your project matches the spatial resolution of
your imported vector layer. The original vector files are not altered by the importation
process.
Thematic layers can be specified when you create a new project via File > New Project
– simply press the Insert button by the Thematic Layer pane. Alternatively, to import
a layer into an existing project, use the File > Modify Existing Project function. Once
defined, the Edit button allows you to further modify the thematic layer and the Delete
button removes it.
When importing thematic layers, ensure the image layers and the thematic layers have
the same coordinate systems and geocoding. If they do not, the content of the individual
layers will not match.
As well as manually importing thematic layers, using the File > New Project or File
> Modify Open Project dialog boxes, you can also import them using rule sets. For
more details, look up the Create/Modify Project algorithm in the eCognition Developer
Reference Book.
The polygon shapefile (.shp), which is a common format for geo-information systems,
will import with its corresponding thematic attribute table file (.dbf) file automatically.
For all other formats, the respective attribute table must be specifically indicated in the
Load Attribute Table dialog box, which opens automatically. Polygon shapefiles in 2D
and 3D scenes are supported. From the Load Attribute Table dialog box, choose one of
the following supported formats:
When loading a thematic layer from a multi-layer image file (for example an .img stack
file), the appropriate layer that corresponds with the thematic information is requested in
the Import From Multi Layer Image dialog box. Additionally, the attribute table with the
appropriate thematic information must be loaded.
If you import a thematic layer into your project and eCognition Developer does not find
an appropriate column with the caption ID in the respective attribute table, the Select ID
Column dialog box will open automatically. Select the caption of the column containing
the polygon ID from the drop-down menu and confirm with OK.
To display a thematic layer, select View > View Settings from the main menu. Right-click
the Layer row and select the layer you want to display from the context menu.
The thematic layer is displayed in the map view and each thematic object is displayed in
a different random color. To return to viewing your image data, go back to the Layer row
and select Image Data.
The values of thematic objects are displayed in the Thematic Layer Attribute Table, which
is launched via Tools > Thematic Layer Attribute Table.
To view the thematic attributes, open the Manual Editing toolbar. Choose Thematic Edit-
ing as the active editing mode and select a thematic layer from the Select Thematic Layer
drop-down list.
The attributes of the selected thematic layer are now displayed in the Thematic Layer At-
tribute Table. They can be used as features in the same way as any other feature provided
by eCognition.
The table supports integers, strings, and doubles. The column type is set automatically,
according to the attribute, and table column widths can be up to 255 characters.
Class name and class color are available as features and can be added to the Thematic
Layer Attribute Table window. You can modify a thematic layer attribute table by adding,
editing or deleting table columns or editing table rows.
A thematic object is the basic element of a thematic layer and can be a polygon, line
or point. It represents positional data of a single object in the form of co-ordinates and
describes the object by its attributes.
The Manual Editing toolbar lets you manage thematic objects, including defining regions
of interest before image analysis and the verification of classifications after image analy-
sis.
1. To display the Manual Editing toolbar choose View > Toolbars > Manual Editing
from the main menu
2. For managing thematic objects, go to the Change Editing Mode drop-down list and
change the editing mode to Thematic Editing
3. From the Select Thematic Layer drop-down list box select an existing thematic
layer or create a new one.
If you want to edit image objects instead of thematic objects by hand, choose Image
Object Editing from the drop-down list.
While editing image objects manually is not commonly used in automated image analysis,
it can be applied to highlight or reclassify certain objects, or to quickly improve the
analysis result without adjusting a rule set. The primary manual editing tools are for
merging, classifying and cutting manually.
To display the Manual Editing toolbar go to View > Toolbars > Manual Editing from the
main menu. Ensure the editing mode, displayed in the Change Editing Mode drop-down
list, is set to Image Object Editing.
If you want to edit thematic objects by hand, choose Thematic Editing from the drop-
down list.
If you do not use an existing layer to work with thematic objects, you can create a new
one. For example, you may want to define regions of interest as thematic objects and
export them for later use with the same or another project.
On the Select Thematic Layer drop-down list box, select New Layer to open the Create
New Thematic Layer dialog box. Enter a name and select the type of thematic vector
layer: polygon, line or point layer.
There are two ways to generate new thematic objects – either use existing image objects
or create them yourself. This may either be on an existing layer or on a new thematic
layer you have created.
For all objects, the selected thematic layer must be set to the appropriate selection: poly-
gon, line or point. Pressing the Generate Thematic Objects button on the Manual Editing
toolbar will then open the appropriate window for shape creation. The Single Selection
button is used to finish the creation of objects and allows you to edit or delete them.
Creating Polygon Objects To draw polygons, set the selected thematic layer to Polygon. Click
in the map view to set vertices in the thematic polygon layer. Right-click and select Close
Polygon to complete the shape. This object can touch or cross any existing image object.
The following cursor actions are available:
• Click and hold the left mouse button as you drag the cursor across the map view to
create a path with points
• To create points at closer intervals, drag the cursor more slowly or hold Ctrl while
dragging
Figure 8.5. New thematic polygon object. The polygon borders are independent of existing
image object borders
Creating Lines and Points When drawing lines, click in the map view to set vertices in
the thematic line layer. Right-click and choose Finish Line to stop drawing. This object
can touch or cross any existing image object.
Generate point objects on a thematic point layer in one of the following ways:
• Click in the thematic layer. The point’s co-ordinates are displayed in the Generate
Point window.
• Enter the point’s x and y co-ordinates in the Generate Point dialog box and click
Add Point to generate the point.
The point objects can touch any existing image object. To delete the point whose co-
ordinates are displayed in the Generate Point dialog box, press Delete Point.
Thematic objects can be created from the outlines of selected image objects. This function
can be used to improve a thematic layer – new thematic objects are added to the Thematic
Layer Attribute Table. Their attributes are initially set to zero.
1. Select a polygon layer for thematic editing. If a polygon layer does not exist in
your map, create a new thematic polygon layer.
2. Activate the Generate Thematic Object Based on Image Object button on the Man-
ual Editing toolbar.
3. In the map view, select an image object and right-click it. From the context menu,
choose Generate Polygon to add the new object to the thematic layer
4. To delete thematic objects, select them in the map view and click the Delete Se-
lected Thematic Objects button
NOTE: Use the Classify Selection context menu command if you want to classify image objects manually. Note that you must first select a class for manual classification, with the Image Object Editing mode activated.
Image objects or thematic objects can be selected using the selection buttons on the Manual Editing toolbar.
You can merge objects manually, although this function only operates on the current
image object level. To merge neighboring objects into a new single object, choose Tools
> Manual Editing > Merge Objects from the main menu or press the Merge Objects
Manually button on the Manual Editing toolbar to activate the input mode.
Select the neighboring objects to be merged in map view. Selected objects are displayed
with a red outline (the color can be changed in View > Display Mode > Edit Highlight
Colors).
To clear a selection, click the Clear Selection for Manual Object Merging button or de-
select individual objects with a single mouse-click. To combine objects, use the Merge Selected Objects button on the Manual Editing toolbar, or right-click and choose Merge Selection.
Figure 8.7. Left: selected image objects. Right: merged image objects
You can merge the outlines of a thematic object and an image object while leaving the image object unchanged:
1. Activate the manual cutting input mode by selecting Tools > Manual Editing > Cut
Objects from the main menu
2. To cut an object, activate the object to be split by clicking it
3. Draw the cut line, which can consist of several sections. Depending on the object’s
shape, the cut line can touch or cross the object’s outline several times, and two or
more new objects will be created
4. Right-click and select Perform Split to cut the object, or Close and Split to close
the cut line before cutting
Figure 8.8. In the left-hand image, a thematic object (outlined in blue) and a neighboring
image object (outlined in red) are selected
5. The small drop-down menu displaying a numerical value is the Snapping Tolerance,
which is set in pixels. When using Manual Cutting, snapping attracts object borders
‘magnetically’.
NOTE: If you cut image objects, note that the Cut Objects Manually tool
cuts both the selected image object and its sub-objects on lower image
object levels.
Figure 8.9. Choosing Perform Split (left) will cut the object into three new objects, while Close
and Split (right) will cause the line to cross the object border once more, creating four new
objects
Thematic objects, with their accompanying thematic layers, can be exported to vector
shapefiles. This enables them to be used with other maps or projects.
In the manual editing toolbar, select Save Thematic Layer As, which exports the layer in
.shp format. Alternatively, you can use the Export Results dialog box.
In contrast to image layers, thematic layers contain discrete information. This means that
related layer values can carry additional information, defined in an attribute list.
The affiliation of an object to a class in a thematic layer is clearly defined; it is not possible to create image objects that belong to different thematic classes. To ensure this, the
borders separating different thematic classes restrict further segmentation whenever a
thematic layer is used during segmentation. For this reason, thematic layers cannot be
given different weights, but can merely be selected for use or not.
If you want to produce image objects based exclusively on thematic layer information,
you have to switch the weights of all image layers to zero. You can also segment an
image using more than one thematic layer. The results are image objects representing
proper intersections between the layers.
1. To perform a segmentation using thematic layers, choose one of the following seg-
mentation types from the Algorithms drop-down list of the Edit Process dialog
box:
• Multiresolution segmentation
• Spectral difference segmentation
• Multiresolution segmentation region grow
2. In the Algorithm parameters area, expand the Thematic Layer usage list and select
the thematic layers to be considered in the segmentation. You can use the following
methods:
• Select a thematic layer and click the drop-down arrow button placed inside the value field. Define the usage of each layer by selecting Yes or No
• Select Thematic Layer usage and click the ellipsis button placed inside the
value field to set weights for image layers.
Figure 8.10. Define the Thematic layer usage in the Edit Process dialog box
Within rule sets you can use variables in different ways. Some common uses of variables
are:
• Constants
• Fixed and dynamic thresholds
• Receptacles for measurements
• Counters
• Containers for storing temporary or final results
• Abstract placeholders that stand for a class, feature, or image object level.
While developing rule sets, you commonly use scene and object variables for storing your
dedicated fine-tuning tools for reuse within similar projects.
Variables for classes, image object levels, features, image layers, thematic layers, maps
and regions enable you to write rule sets in a more abstract form. You can create rule sets
that are independent of specific class names or image object level names, feature types,
and so on.
Scene Variables
Scene variables are global variables that exist only once within a project. They are inde-
pendent of the current image object.
Object Variables
Object variables are local variables that may exist separately for each image object. You
can use object variables to attach specific values to image objects.
Class Variables
Class Variables use classes as values. In a rule set they can be used instead of ordinary
classes to which they point.
Feature Variables
Feature Variables have features as their values and return the same values as the feature
to which they point.
Level Variables
Level Variables have image object levels as their values. Level variables can be used in
processes as pointers to image object levels.
Image Layer and Thematic Layer Variables
Image Layer and Thematic Layer Variables have layers as their values. They can be selected whenever layers can be selected, for example, in features, domains, and algorithms. They can be passed as parameters in customized algorithms.
Region Variables
Region Variables have regions as their values. They can be selected whenever layers can
be selected, for example in features, domains and algorithms. They can be passed as
parameters in customized algorithms.
Map Variables
Map Variables have maps as their values. They can be selected wherever a map is selected,
for example, in features, domains, and algorithm parameters. They can be passed as
parameters in customized algorithms.
Feature List lets you select which features are exported as statistics.
The Image Object List lets you organize image objects into lists and apply functions to
these lists.
To open the Manage Variables box, go to the main menu and select Process > Manage
Variables, or click the Manage Variables icon on the Tools toolbar.
Select the tab for the type of variable you want to create then click Add. A Create Variable
dialog box opens, with particular fields depending on which variable is selected.
Selecting scene or object variables launches the same Create Variable dialog box.
The Name and Value fields allow you to create a name and an initial value for the vari-
able. In addition you can choose whether the new variable is numeric (double) or textual
(string).
The Insert Text drop-down box lets you add patterns for ruleset objects, allowing you
to assign more meaningful names to variables, which reflect the names of the classes
and layers involved. The following feature values are available: class name; image layer
name; thematic layer name; variable value; variable name; level name; feature value.
The Type field is unavailable for both variables. The Shared check-box allows you to
share the new variable among different rule sets.
The Name field and comments button are both editable and you can also manually assign
a color.
To give the new variable a value, click the ellipsis button to select one of the existing
classes as the value for the class variable. Click OK to save the changes and return to the
Manage Variables dialog box. The new class variable will now be visible in the Feature
Tree and the Class Hierarchy, as well as the Manage Variables box.
After assigning a name to your variable, click the ellipsis button in the Value field to open
the Select Single Feature dialog box and select a feature as a value.
After you confirm the variable with OK, the new variable displays in the Manage Vari-
ables dialog box and under Feature Variables in the feature tree in several locations, for
example, the Feature View window and the Select Displayed Features dialog box.
Region Variables have regions as their values and can be created in the Create Region
Variable dialog box. You can enter up to three spatial dimensions and a time dimension.
The left hand column lets you specify a region’s origin in space and the right hand column
its size.
The new variable displays in the Manage Variables dialog box, and wherever it can be used, for example, as a domain parameter in the Edit Process dialog box.
Create Level Variable allows the creation of variables for image object levels, image
layers, thematic layers, maps or regions.
The Value drop-down box allows you to select an existing level or leave the level variable
unassigned. If it is unassigned, you can use the drop-down arrow in the Value field of the
Manage Variables dialog box to create one or more new names.
Parameter sets are storage containers for specific variable value settings. They are mainly
used when creating action libraries, where they act as a transfer device between the values
set by the action library user and the rule set behind the action. Parameter sets can be
created, edited, saved and loaded. When they are saved, they store the values of their
variables; these values are then available when the parameter set is loaded again.
You can edit a parameter set by selecting Edit in the Manage Parameter Sets dialog box:
1. To add a variable to the parameter set, click Add Variable. The Select Variable for
Parameter Set dialog box opens
2. To edit a variable select it and click Edit. The Edit Value dialog box opens where
you can change the value of the variable
• If you select a feature variable, the Select Single Feature dialog opens, en-
abling you to select another value
• If you select a class variable, the Select Class dialog opens, enabling you to
select another value
• If you select a level variable, the Select Level dialog opens, enabling you to
select another value
3. To delete a variable from the parameter set, select it and click Delete
4. Click Update to modify the value of the selected variable according to the value of
the rule set
5. Click Apply to modify the value of the variable in the rule set according to the
value of the selected variable
6. To change the name of the parameter set, type in a new name.
8.4 Arrays
The array functions in eCognition Developer let you create lists of features, which are
accessible from all rule-set levels. This allows rule sets to be repeatedly executed across,
for example, classes, levels and maps. For more information on arrays, please consult
also the eCognition Developer 9.0 Reference Book > Variables Operation Algorithm >
Update Array and have a look at the examples in our user community:
• http://community.ecognition.com/home/Arrays%20Example%20%231.zip/view
or
• http://community.ecognition.com/home/Arrays%20Example%20%232.zip/view
The Manage Arrays dialog box (figure 8.18) can be accessed via Process > Manage Ar-
rays in the main menu. The following types of arrays are supported: numbers; strings;
classes; image layers; thematic layers; levels; features; regions; map names.
To add an array, press the Add Array button and select the array type from the drop-down
list. Where arrays require numerical values, multiple values must be entered individually
by row. Using this dialog, array values – made up of numbers and strings – can be
repeated several times; other values can only be used once in an array. Additional values
can be added using the algorithm Update Array, which allows duplication of all array
types.
When selecting arrays such as level and image layer, hold down the Ctrl or Shift key to
enter more than one value. Values can be edited either by double-clicking them or by
using the Edit Values button.
Initially, string, double, map and region arrays are executed in the order they are entered.
However, the action of rule sets may cause this order to change.
Class and feature arrays are run in the order of the elements in the Class Hierarchy and
Feature Tree. Again, this order may be changed by the actions of rule sets; for example a
class or feature array may be sorted by the algorithm Update Array, then the array edited
in the Manage Array dialog at a later stage – this will cause the order to be reset and
duplicates to be removed.
‘Array’ can be selected in all Process-Related Operations (other than Execute Child Series).
Array Features
In Scene Features > Rule-Set Related, three array variables are present: rule set array
values, rule set array size and rule set array item. For more information, please consult
the Reference Book.
In Customized Algorithms
Rule set arrays may be used as parameters in customized algorithms.
Arrays may be selected in the Find What box in the Find and Replace pane.
Through the examples in earlier chapters, you will already have some familiarity with the
idea of parent and child domains, which were used to organize processes in the Process
Tree. In that example, a parent object was created which utilized the Execute Child
Processes algorithm on the child processes beneath it.
• The child processes within these parents typically defined algorithms at the image
object level. However, depending on your selection, eCognition Developer can
apply algorithms to other objects selected from the Domain.
• Current image object: The parent image object itself.
• Neighbor obj: The distance of neighbor objects to the parent image object. If
distance is zero, this refers to image objects that have a common border with the
parent and lie on the same image object level. If a value is specified, it refers to the
distance between an object’s center of mass and the parent’s center of mass, up to
that specified threshold
• Sub objects: Objects whose image area covers all or part of the parent’s image area
and lie a specified number of image object levels below the parent’s image object
level.
• Super objects: Objects whose image area covers some or all of the parent’s image
area and lie a specified number of image object levels above the parent’s image
object level. (Note that the child image object is on top here.)
Terminology
• Parent process: A parent process is used for grouping child processes together in a
process hierarchy.
• Child process: A child process is inserted on a level beneath a parent process in the
hierarchy.
• Child domain / subdomain: A domain defined by using one of the four local pro-
cessing options.
• Parent process object (PPO): A parent process object (PPO) is the object defined
in the parent process.
A parent process object (PPO) is an image object to which a child process refers and must
first be defined in the parent process. An image object can be called through the respective
selection in the Edit Process dialog box; go to the Domain group box and select one of
the four local processing options from the drop-down list, such as current image object.
When you use local processing, the routine goes to the first random image object de-
scribed in the parent domain and processes all child processes defined under the parent
process, where the PPO is always that same image object.
The routine then moves through every image object in the parent domain. The routine does not update the parent domain after each processing step; it continues to process the image objects originally found to fit the parent process's domain criteria, whether or not they still fit those criteria at the time they are executed.
A special case of a PPO is the 0th order PPO, also referred to as PPO(0). Here the PPO
is the image object defined in the domain in the same line (0 lines above).
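The control flow can be summarized in a few lines of Python-style pseudocode (all names are invented; no eCognition API is implied):

# Toy stand-ins for image objects on Level 1 (names and structures are invented).
level_1_objects = [{"id": i, "classified": False} for i in range(4)]

def parent_condition(obj):
    # e.g. the parent domain "unclassified image objects at Level 1"
    return not obj["classified"]

def child_process(ppo):
    # Every child process sees exactly one current image object: the PPO.
    ppo["classified"] = True

# The list of image objects in the parent domain is built once, before execution,
# and is not updated while the child processes run.
parent_domain = [obj for obj in level_1_objects if parent_condition(obj)]

for ppo in parent_domain:
    child_process(ppo)

print(sum(obj["classified"] for obj in level_1_objects))   # -> 4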
For better understanding of child domains (subdomains) and PPOs, see the example be-
low.
This example demonstrates how local processing is used to change the order in which
class or feature filters are applied. During execution of each process line, eCognition
software first creates internally a list of image objects that are defined in the domain.
Then the desired routine is executed for all image objects on the list.
1. Have a look at the rule set of this project, shown in figure 8.19.
2. Using the parent process named ‘simple use’ you can compare the results of the As-
sign Class algorithm with (figure 8.22) and without (figure 8.20) the parent process
object (PPO).
3. At first a segmentation process is executed.
4. Then the ‘without PPO’ process using the Assign Class algorithm is applied. With-
out a PPO the whole image is classified. This is because, before processing the line,
no objects of class My Class existed, so all objects in Level 1 return true for the
condition that no My Class objects exist in the neighborhood. In the next example,
the two process steps defining the domain objects on Level 1 and no My Class
objects exist in the neighborhood are split into two different lines.
5. Executing the process at Level 1: Unclassified (restore) removes the
classification and returns to the state after step 3.
6. Then the process ‘with PPO’ is executed.
The process if with Existence of My Class (0) = 0:My Class (figure 8.21) ap-
plies the algorithm Assign Class to the image object that has been set in the parent pro-
cess unclassified at Level 1: for all. This has been invoked by selecting Cur-
rent Image Object as domain. Therefore, all unclassified image objects will be called
sequentially and each unclassified image object will be treated separately.
1. Executing the process results in a painted chessboard.
2. At first, all objects on image object Level 1 are put in a list. The process does
nothing but pass on the identities of each of those image objects down to the next
line, one by one. That second line – the child process – has only one object in
the domain, the current image object passed down from the parent process. It then
checks the feature condition, which returns true for the first object tested. But the
next time this process is run with the next image object, that image object is tested
again and returns false for the same feature, because now the object has the first
object as a My Class neighbor.
3. To summarize – in the example ‘without PPO’, all image objects that fitted the condition were classified at once; in the second example ‘with PPO’, a list of 48 image objects is created in the upper process line, and then the child process runs 48 times and checks whether the condition is fulfilled or not.
4. In other words, the result with the parent process object (PPO) is completely different from the result without it. Algorithms that refer to a parent process object (PPO) must be executed from the parent process; therefore, you must execute either the parent process itself or a superordinate parent process. Using the parent process object (PPO) processes each image object in the image in succession: the algorithm checks the first unclassified image object against the condition ‘Existence of My Class (0) = 0’. The image object finds that there is no My Class neighbor, so it classifies itself as My Class. The algorithm then goes to the second unclassified image object and finds a neighbor, which means the condition does not fit. It then goes to the third, finds no neighbor, classifies it, and so on.
Figure 8.21. Setting with parent process object (PPO), a kind of internal loop
Figure 8.22. Result with parent process object (PPO), a kind of internal loop
One more powerful tool comes with local processing. When a child process is executed,
the image objects in the domain ‘know’ their parent process object (PPO). It can be very
useful to directly compare properties of those image objects with the properties of the
PPO. A special group of features, the process-related features, do exactly this job.
Figure 8.23. Process tree with more complex usage of parent process object (PPO)
1. In this example (figure 8.23) each child process of the process ‘more complex’ is executed. After the segmentation, the visualization settings are switched to the
outline view. In this rule set the PPO(0) procedure is used to merge the image
objects with the brightest image object classified as bright objects in the red
image layer. For this purpose a difference range (> −95) to an image object of the
class bright objects is used.
2. The red image object (bright objects) is the brightest image object in this image.
To find out how it differs from the similar image objects it is to be merged with, the user has to select it using the Ctrl key; doing so manually selects the parent process object (PPO). The PPO will be highlighted in green (figure 8.25).
3. For better visualization the outlines can now be switched off and using the Feature
View window the feature Mean red diff. PPO (0) can be applied. To find the best-
fitting range for the difference to the brightest object (bright objects) the values
in the Image Object Information window (figure 8.25) can be checked.
The green highlighted image object displays the PPO. All other image objects that are
selected will be highlighted in red and you can view the difference from the green high-
lighted image object in the Image Object Information window (figure 8.26). Figure 8.27 shows the result of the image object fusion.
1. Typically, you create the process-related features you need for your specific rule set. For features that set an image object in relation to the parent object, only an integer number has to be specified: the process distance (Dist.). It refers to the distance in the process hierarchy, i.e. the number of hierarchy levels in the Process Tree window above the current editing line in which you find the definition of the parent object.
This is true for the following features:
• Same super object as PPO
• Elliptic Distance from PPO
• Rel. border to PPO
• Border to PPO
For the following process-related features, which compare an image object to the parent object, the process distance (Dist.) has to be specified as well:
• Ratio PPO
• Diff PPO
In addition, you have to select the feature that you want to be compared. For
example, if you create a new ratio PPO, select Distance=2 and the feature
Area; the created feature will be Area ratio PPO (2). The number it returns
will be the area of the object in question divided by the area of the parent
process object of order 2, that is the image object whose identity was handed
down from two lines above in the process tree.
A special case is process-related features with process distance = 0, called PPO(0) features. They only make sense in processes that need more than one image object as an input, for example image object fusion. You may have a PPO(0) feature evaluated for the candidate or for the target image object. That feature is then compared or set in relation to the image object in the domain of the same line, that is, the seed image object of the image object fusion.
Go to the Feature View window (figure 8.28) to create a process-related feature, sometimes referred to as a PPO feature. Expand the process-related features group.
To create a process-related feature (PPO feature), double-click on the feature
you want to create and add a process distance to the parent process object.
The process distance is a hierarchical distance in the process tree, for exam-
ple:
• PPO(0) has the process distance 0 and refers to the image object in the current process; it is mostly used in the image object fusion algorithm.
• PPO(1) has the process distance 1 and refers to the image object in the parent process one hierarchy level above.
• PPO(2) has the process distance 2 and refers to the image object in the parent process two hierarchy levels above in the process hierarchy.
If you want to create a customized parent process object, you also have to
choose a feature.
2. The following processes in the sample rule set are using different parent process
object hierarchies. Applying them is the same procedure as shown before with the
PPO(0).
Figure 8.25. Compare the difference between the red highlighted image object and the green
highlighted parent process object (PPO)
Figure 8.26. Process settings to perform an image object fusion using the difference from the
parent process object (PPO)
Figure 8.27. Result after image object fusion using the difference to the PPO(0)
Figure 8.28. Process-Related features used for parent process objects (PPO)
The Manage Customized Features dialog box allows you to add, edit, copy and delete
customized features, and to create new arithmetic and relational features based on the
existing ones.
To open the dialog box, click on Tools > Manage Customized Features from the main
menu, or click the icon on the Tools toolbar.
Clicking the Add button launches the Customized Features dialog box, which allows you
to create a new feature. The remaining buttons let you edit, copy and delete features.
The procedure below guides you through the steps you need to follow when you want to
create an arithmetic customized feature.
Open the Manage Customized Features dialog box and click Add. Select the Arithmetic
tab in the Customized Features dialog box.
Figure 8.30. Creating an arithmetic feature in the Customized Features dialog box
1. Insert a name for the customized feature and click on the map-pin icon to add any
comments if necessary
2. The Insert Text drop-down box lets you add patterns for ruleset objects, allowing
you to assign more meaningful names to customized features, which reflect the
names of the classes and layers involved. The following feature values are avail-
able: class name; image layer name; thematic layer name; variable value; variable
name; level name; feature value. Selecting <automatic> displays the arithmetic
expression itself
3. Use the calculator to create the arithmetic expression. You can:
• Type in new constants
• Select features or variables in the feature tree on the right
• Choose arithmetic operations or mathematical functions
4. To calculate or delete an arithmetic expression, highlight the expression with the
cursor and then click either Calculate or Del.
5. You can switch between degrees (Deg) or radians (Rad)
6. Click the Inv check-box to invert the expression
7. To create a new customized feature do one of the following:
• Click Apply to create the feature without leaving the dialog box
• Click OK to create the feature and close the dialog box.
8. After creation, the new arithmetic feature can be found in:
• The Image Object Information window
• The Feature View window under Object Features > Customized.
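A typical expression built in this calculator is a normalized difference of two layer means. The Python sketch below only mirrors that arithmetic; the layer names are hypothetical, and in practice the expression would be entered directly in the dialog:

def normalized_difference(mean_nir, mean_red):
    # Same arithmetic as a customized feature such as
    # ([Mean nir] - [Mean red]) / ([Mean nir] + [Mean red])
    return (mean_nir - mean_red) / (mean_nir + mean_red)

print(normalized_difference(0.45, 0.15))   # -> 0.5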
The following procedure will assist you with the creation of a relational customized fea-
ture.
1. Open the Manage Customized Features dialog box (Tools > Manage Customized
Features) and click Add. The Customized Features dialog opens; select the Rela-
tional tab
2. The Insert Text drop-down box lets you add patterns for ruleset objects, allowing
you to assign more meaningful names to customized features, which reflect the
names of the classes and layers involved. The following feature values are avail-
able: class name; image layer name; thematic layer name; variable value; variable
name; level name; feature value
3. Insert a name for the relational feature to be created 1
4. Select the target for the relational function in the ‘concerning’ area
5. Choose the relational function to be applied in the drop-down box
6. Define the distance of the related image objects. Depending on the related image
objects, the distance can be either horizontal (expressed as a unit) or vertical (image
object levels)
1. As with class-related features, the relations refer to the group hierarchy. This means if a relation refers to one
class, it automatically refers to all its subclasses in the group hierarchy.
Figure 8.31. Creating a relational feature at the Customized Features dialog box
Relations between surrounding objects can exist either on the same level or on a level
lower or higher in the image object hierarchy (table 8.1, Relations between surrounding
objects).
Neighbors: Related image objects on the same level. If the distance of the image objects is set to 0, only the direct neighbors are considered. When the distance is greater than 0, the relation of the objects is computed using their centers of gravity; only those neighbors whose center of gravity is closer than the specified distance from the starting image object are considered. The distance is calculated either in metric units or pixels.
Sub-objects: Image objects that exist below other image objects whose position in the hierarchy is higher (superobjects). The distance is calculated in levels.
Sub-objects of superobject: Only the image objects that exist below a specific superobject are considered in this case. The distance is calculated in levels.
Level: Specifies the level on which an image object will be compared to all other image objects existing at this level. The distance is calculated in levels.
An overview of all functions in the drop-down list under the relational function section is shown in table 8.2, Relational functions.
Mean: Calculates the mean value of selected features of an image object and its neighbors. You can select a class to apply this feature, or no class if you want to apply it to all image objects. Note that for averaging, the feature values are weighted with the area of the image objects.
Mean difference: Calculates the mean difference between the feature value of an image object and its neighbors of a selected class. Note that the feature values are weighted either by the border length (distance = 0) or by the area (distance > 0) of the respective image objects.
Mean absolute difference: Calculates the mean absolute difference between the feature value of an image object and its neighbors of a selected class. Note that the feature values are weighted either by the border length (distance = 0) or by the area (distance > 0) of the respective image objects.
Ratio: Calculates the proportion between the feature value of an image object and the mean feature value of its neighbors of a selected class. Note that for averaging the feature values are weighted with the area of the corresponding image objects.
Sum: Calculates the sum of the feature values of the neighbors of a selected class.
Number: Calculates the number of neighbors of a selected class. You must select a feature for this function to apply, but it does not matter which feature you pick.
Min: Returns the minimum value of the feature values of an image object and its neighbors of a selected class.
Max: Returns the maximum value of the feature values of an image object and its neighbors of a selected class.
Mean difference to higher values: Calculates the mean difference between the feature value of an image object and the feature values of its neighbors of a selected class that have higher values than the image object itself. Note that the feature values are weighted either by the border length (distance = 0) or by the area (distance > 0) of the respective image objects.
Mean difference to lower values: Calculates the mean difference between the feature value of an image object and the feature values of its neighbors of a selected class that have lower values than the object itself. Note that the feature values are weighted either by the border length (distance = 0) or by the area (distance > 0) of the respective image objects.
Portion of higher value area: Calculates the portion of the area of the neighbors of a selected class that have higher values for the specified feature than the object itself, relative to the area of all neighbors of the selected class.
Portion of lower value area: Calculates the portion of the area of the neighbors of a selected class that have lower values for the specified feature than the object itself, relative to the area of all neighbors of the selected class.
Portion of higher values: Calculates the feature value difference between an image object and its neighbors of a selected class with higher feature values than the object itself, divided by the difference of the image object and all its neighbors of the selected class. Note that the features are weighted with the area of the corresponding image objects.
Portion of lower values: Calculates the feature value difference between an image object and its neighbors of a selected class with lower feature values than the object itself, divided by the difference of the image object and all its neighbors of the selected class. Note that the features are weighted with the area of the corresponding image objects.
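To make the area weighting mentioned for several of these functions explicit, the following sketch reproduces the arithmetic of an area-weighted mean over an object's neighbors (an illustration only; the values are invented):

def area_weighted_mean(objects):
    """objects: list of (feature_value, area) pairs of the related image objects."""
    total_area = sum(area for _, area in objects)
    return sum(value * area for value, area in objects) / total_area

# Three neighbors of the selected class as (feature value, area in pixels) pairs.
print(area_weighted_mean([(10.0, 100), (20.0, 300), (30.0, 100)]))   # -> 20.0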
You can save customized features separately for use in other rule sets: 2
• Open the Tools menu in the main menu bar and select Save Customized Features
to open the Save Customized Features dialog box. Your customized features are
saved as a .duf file.
• To load customized features that have been saved as a .duf file, open the Tools
menu and select Load Customized Features to open the Load Customized Features
dialog box.
You can find customized features at different places in the feature tree, depending on the
features to which they refer. For example, a customized feature that depends on an object
feature is sorted below the group Object Features > Customized.
If a customized feature refers to different feature types, they are sorted in the feature
tree according to the interdependencies of the features used. For example, a customized
feature with an object feature and a class-related feature displays below class-related
features.
You may wish to create a customized feature and display it in another part of the Feature
Tree. To do this, go to Manage Customized Features and press Edit in the Feature Group
pane. You can then select another group in which to display your customized feature.
In addition, you can create your own group in the Feature Tree by selecting Create New
Group. This may be useful when creating solutions for another user.
Although it is possible to use variables as part or all of a customized feature name, we
would not recommend this practice as – in contrast to features – variables are not auto-
matically updated and the results could be confusing.
2. Customized features that are based on class-related features cannot be saved by using the Save Customized
Features menu option. They must be saved with a rule set.
Rule set items used in a customized algorithm are characterized by their scope and by their relationships to other rule set items:
• Local scope: Local rule set items are only visible within a customized algorithm and can only be used in child processes of the customized algorithm. For this scope
type, a copy of the respective rule set item is created and placed in the local scope
of the customized algorithm. Local rule set items are thus listed in the relevant con-
trols (such as the Feature View or the Class Hierarchy), but they are only displayed
when the customized algorithm is selected.
• Global scope: Global rule set items are available to all processes in the rule set.
They are accessible from anywhere in the rule set and are especially useful for cus-
tomized algorithms that are always used in the same environment, or that change
the current status of variables of the main rule set. We do not recommend using
global rule set items in a customized algorithm if the algorithm is going to be used
in different rule sets.
• Parameter scope: Parameter rule set items are locally scoped variables in a cus-
tomized algorithm. They are used like function parameters in programming lan-
guages. When you add a process including a customized algorithm to the Main tab
of the Process Tree window, you can select the values for whatever parameters you
have defined. During execution of this process, the selected values are assigned to
the parameters. The process then executes the child processes of the customized algorithm using the selected parameter values (see the sketch after this list).
• Dependent: Dependent rule set items are used by other rule set items. For example,
if class A uses the feature Area and the customized feature Arithmetic1 in its class
description, it has two dependencies – Area and Arithmetic1
• Reference: Reference rule set items use other rule set items. For example, if the feature Area is used by class A and by the customized feature Arithmetic1, then class A and Arithmetic1 are references of Area, and Area is their dependent.
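The comparison with function parameters can be made concrete with a small sketch (plain Python as an analogy, not rule set code; all names and values are invented): the customized algorithm corresponds to the function, the parameter-scope items to its arguments, and the calling process to the call site that supplies concrete values.

# Analogy only: a customized algorithm with two parameter-scope items behaves
# like a function whose arguments are bound by the calling process.
def my_customized_algorithm(threshold, target_class):
    # ... the child processes would run here, using the bound values ...
    print(f"classify objects with value > {threshold} as '{target_class}'")

# The process that uses the customized algorithm corresponds to this call
# site, where the parameter values are selected in the Edit Process dialog box.
my_customized_algorithm(threshold=0.35, target_class="ROI")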
A relationship exists between dependencies of rule set items used in customized algo-
rithms and their scope. If, for example, a process uses class A with a customized feature
Arithmetic1, which is defined as local within the customized algorithm, then class A
should also be defined as local. Defining class A as global or parameter can result in an
inconsistent situation (for example a global class using a local feature of the customized
algorithm).
Scope dependencies of rule set items used in customized algorithms are handled automat-
ically according to the following consistency rules:
• If a rule set item is defined as global, all its references and dependents must also be defined as global. If at least one dependent or referencing rule set item cannot be
defined as global, this scope should not be used. An exception exists for features
without dependents, such as area and other features without editable parameters. If
these are defined as global, their references are not affected.
• If a rule set item is defined as local or as parameter, references and dependents
also have to be defined as local. If at least one dependent or referencing rule set
item cannot be defined as local, this scope should not be used. Again, features
without dependents, such as area and other features without editable parameters,
are excepted. These remain global, as it makes no sense to create a local copy of
them.
During the execution of a customized algorithm, image objects can refer to local rule set
items. This might be the case if, for example, they get classified using a local class, or
if a local temporary image object level is created. After execution, the references have
to be removed to preserve the consistency of the image object hierarchy. The application
offers two options to handle this cleanup process.
When Delete Local Results is enabled, the software automatically deletes locally created
image object levels, removes all classifications using local classes and removes all local
image object variables. However, this process takes some time since all image objects
need to be scanned and potentially modified. For customized algorithms that are called
frequently or that do not create any references, this additional checking may cause a
significant runtime overhead. If not necessary, we therefore do not recommend enabling
this option.
When Delete Local Results is disabled, the application leaves local image object levels,
classifications using local classes and local image object variables unchanged. Since
these references are only accessible within the customized algorithm, the state of the
image object hierarchy might then no longer be valid. When developing a customized
algorithm you should therefore always add clean-up code at the end of the procedure,
to ensure no local references are left after execution. Using this approach, you will create customized algorithms with much better performance than algorithms that rely on the automatic clean-up capability.
When a customized algorithm is called, the selected domain needs to be handled correctly.
There are two options:
• If the Invoke Algorithm for Each Object option is selected, the customized algo-
rithm is called separately for each image object in the selected domain. This option
is most useful if the customized algorithm is only called once using the Execute do-
main. You can also use the current domain within the customized algorithm to
process the current image object of the calling process. In this case, however, we recommend passing the domain as a parameter.
• The Pass Domain from Calling Process as a Parameter option offers two possibili-
ties:
– If Object Set is selected, a list of objects is handed over to the customized al-
gorithm and the objects can be reclassified or object variables can be changed.
If a segmentation is performed on the objects, the list is destroyed, since the objects are ‘destroyed’ by the new segmentation.
– If Domain Definition is selected, filter settings for objects are handed over
to the customized algorithm. Whenever a process – segmentation, fusion or
classification – is performed, all objects are checked to see if they still suit
the filter settings
If the Pass Domain from Calling Process as a Parameter option is selected, the customized algorithm is called only once, regardless of the selected image object in the calling process. The domain selected by the calling process is available as an additional domain within the customized algorithm. When this option is selected, you can select the From Calling Process domain in the child processes of the customized algorithm to access the image object that is specified by the calling process.
1. To create a customized algorithm, go to the Process Tree window and select the par-
ent process of the process sequence that you want to use as customized algorithm.
Do one of the following:
• Right-click the parent process and select Create Customized Algorithm from
the context menu.
• Select Process > Process Commands > Create Customized Algorithm from
the main menu. The Customized Algorithms Properties dialog box opens.
2. Assign a name to the customized algorithm
3. The Used Rule Set Items are arranged in groups. To investigate their dependencies,
select the Show Reference Tree checkbox
4. You can modify the scope of the used rule set items. Select an item from the list,
then click the dropdown arrow button. The following options are available:
• Global: The item is used globally. It is also available for other processes.
• Local: The item is used internally. Other processes outside this customized
algorithm are unable to access it. All occurrences of the original global item
in the process sequence are replaced by a local item with the same name.
• Parameter: The item is used as a parameter of the algorithm. This allows
the assignment of a specific value within the Algorithm parameters of the Edit
Process dialog box whenever this customized algorithm is used.
5. If you define the scope of a used rule set item as a parameter, it is listed in
the Parameters section. Modifying the parameter name renames the rule set item
accordingly. Furthermore, you can add a description for each parameter. When
using the customized algorithm in the Edit Process dialog box, the description is
displayed in the parameter description field if it is selected in the parameters list.
For parameters based on scene variables, you can also specify a default value. This
value is used to initialize a parameter when the customized algorithm is selected in
the Edit Process dialog box.
6. Configure the general properties of the customized algorithm in the Settings list:
• Delete Local Results specifies whether locally created image object levels, classifications based on local classes and local image object variables are removed after execution, as described above.
Figure 8.33. Original process sequence (above) and customized algorithm displayed on a separate tab
The local features and feature parameters are displayed in the feature tree of the
Feature View window using the name of the customized algorithm, for example
MyCustomizedAlgorithm.ArithmeticFeature1.
The local variables and variable parameters can be checked in the Manage Variables
dialog box. They use the name of the customized algorithm as a prefix of their name, for
example MyCustomizedAlgorithm.Pm_myVar.
The image object levels can be checked in the Edit Level Names dialog box. They use the name of the customized algorithm as a prefix, for example MyCustomizedAlgorithm.New Level.
Once you have created a customized algorithm, it displays in the Customized Algorithms
tab of the Edit Process Tree window. The rule set items you specified as Parameter are
displayed in parentheses following the algorithm’s name.
Customized algorithms behave like any other algorithm: you use them in processes added to your rule set in the same way, and you can delete them in the same way. They are grouped as Customized in the Algorithm drop-down list of the Edit Process dialog box. If a customized algorithm contains parameters, you can set their values in the Edit Process dialog box.
You can edit existing customized algorithms like any other process sequence in the soft-
ware. That is, you can modify all properties of the customized algorithm using the Cus-
tomized Algorithm Properties dialog box. To modify a customized algorithm select it on
the Customized Algorithms tab of the Process Tree window. Do one of the following to
open the Customized Algorithm Properties dialog box:
• Double-click it
• Select Process > Process Commands > Edit Customized Algorithm from the main
menu
• In the context menu, select Edit Customized Algorithm.
You can execute a customized algorithm or its child processes like any other process
sequence in the software.
Select the customized algorithm or one of its child processes in the Customized Algo-
rithm tab, then select Execute. The selected process tree is executed. The application
uses the current settings for all local variables during execution. You can modify the
value of all local variables, including parameters, in the Manage Variables dialog box.
If you use the Pass domain from calling process as a parameter domain handling mode,
you additionally have to specify the domain that should be used for manual execution.
Select the customized algorithm and do one of the following:
• Select Process > Process Commands > Edit Process Domain for stepwise execution
from the main menu
• Select Edit Process Domain for Stepwise Execution in the context menu
The Edit Process Domain for Stepwise Execution dialog box opens. Specify the domain that you want to be used as the ‘from calling process’ domain during stepwise execution. The Domain of the customized algorithm must be set to ‘from calling process’.
When you delete a customized algorithm, it is removed from all processes of the rule set and is also deleted from the list of algorithms in the Edit Process dialog box. Customized algorithms and all processes that use them are deleted without asking for confirmation.
You can save a customized algorithm like any regular process, and then load it into another
rule set.
8.9 Maps
As explained in chapter one, a project can contain multiple maps. A map can:
• Contain image data independent of the image data in other maps of the project (a multi-project map)
• Contain a copy of, or subsets from, another map (a multi-scale map).
Maps can be used for:
• Multi-scale and scene subset image analysis, where the results of one map can be passed on to any other multi-scale map
• Comparing analysis strategies on the same image data in parallel, enabling you to select the best results from each analysis and combine them into a final result
• Testing analysis strategies on different image data in parallel.
When working with maps, make sure that you always refer to the correct map in the
domain. The first map is always called ‘main’. All child processes using a ‘From Parent’
map will use the map defined in a parent process. If there is none defined then the main
map is used. The active map is the map that is currently displayed and activated in Map
View – this setting is commonly used in Architect solutions. The domain Maps allows
you to loop over all maps fulfilling the set conditions.
Be aware that increasing the number of maps requires more memory and the eCognition
client may not be able to process a project if it has too many maps or too many large maps,
in combination with a high number of image objects. Using workspace automation splits
the memory load by creating multiple projects.
Use cases that require different images to be loaded into one project, so-called multi-
project maps, are commonly found:
• During rule set development, for testing rule sets on different image data
• During registration of two different images.
To create a multi-project map, do one of the following:
• In the workspace, select multiple projects – with the status ‘created’ (not ‘edited’)
– by holding down the Ctrl key, then right-click and choose Open from the context
menu. The New Multi-Map Project Name dialog box opens. Enter the name of the
new project and confirm; the new project is created and opens. The first scene will
be displayed as the main map
• Open an existing project and go to File > Modify Open Project in the main menu.
In the Modify Open Project dialog box, go to Maps > Add Map. Type a name for
the new map in the Map box and assign the image for the new map. The new map
is added to the Map drop-down list.
Like workspace automation, a copy of a map can be used for multi-scale image analysis; such copies are created with the Copy Map algorithm. Among its most frequently used options is one that creates a map with the extent of a bounding box drawn around an image object. You can create copies of any map, and make copies of copies. In eCognition Developer, maps can be copied completely, or 2D subsets can be created. Copying image layers or image objects to an already existing map overwrites it completely; this also applies to the main map when it is used as the target map. Image layers and thematic layers can therefore be modified or deleted if the source map contains different image layers.
Use the Scale parameter to define the scale of the new map. Keep in mind that there are absolute and relative scale modes: using Magnification, for instance, creates a map with a fixed scale (for example 2x) with reference to the original project map, whereas the Percent parameter creates a map with a scale relative to the selected source map. When downsampling maps, make sure to stay above the minimum map size of 4 × 4 × 1 × 1 (x, y, z, t). If you cannot estimate the size of your image data, use a scale variable with a precalculated value to avoid inadequate map sizes; a small sketch of such a check follows.
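The kind of precalculation suggested above can look like the following sketch (plain Python with invented sizes and percentages, not eCognition code): derive the target size from a relative Percent scale and fall back to the smallest scale that keeps the map above the 4 × 4 minimum in x and y.

MIN_XY = 4  # minimum map size is 4 x 4 x 1 x 1 (x, y, z, t)

def safe_percent_scale(size_x, size_y, percent):
    # Return the requested percent scale, or the smallest percent that still
    # keeps both x and y at or above the minimum map size.
    factor = percent / 100.0
    if min(size_x, size_y) * factor >= MIN_XY:
        return percent
    return 100.0 * MIN_XY / min(size_x, size_y)

print(safe_percent_scale(10000, 8000, 1))  # 1% of 8000 = 80 pixels -> 1 is fine
print(safe_percent_scale(300, 200, 1))     # 1% of 200 = 2 pixels -> raised to 2.0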
Resampling is applied when the image data of the target map is downsampled. The Resampling parameter lets you choose between the following two methods (a conceptual sketch follows the list):
• Fast resampling uses the value of the pixel closest to the center of the source matrix to be resampled. If the image has internal zoom pyramids (for example, Mirax images), the pyramid image is used. Image layers copied with this method can be renamed.
• Smooth resampling creates the new pixel value from the mean value of the source
matrix starting with the upper left corner of the image layer. The time consumed
by this algorithm is directly proportional to the size of the image data and the scale
difference.
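A conceptual sketch of the two modes, using NumPy on invented data (it only illustrates the difference between picking one source pixel and averaging a source block; it does not reproduce eCognition's implementation):

import numpy as np

def fast_resample(layer, factor):
    # Simplified nearest-neighbor pick: keep one pixel per factor x factor block.
    return layer[::factor, ::factor]

def smooth_resample(layer, factor):
    # Block mean: average each factor x factor block, starting at the upper-left
    # corner (edge pixels that do not fill a complete block are dropped here).
    h = (layer.shape[0] // factor) * factor
    w = (layer.shape[1] // factor) * factor
    blocks = layer[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

layer = np.arange(36, dtype=float).reshape(6, 6)
print(fast_resample(layer, 2))    # picks every second pixel
print(smooth_resample(layer, 2))  # 2 x 2 block means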
In most use cases, eCognition Developer images are available as image file stacks, which means one image file per slice or frame. Usually this image data carries slice and frame resolution information. If this information is not provided with the data, it can be entered using the 3D/4D Settings algorithm, which provides three different modes:
• Non-Invasive should be used for MRI and CT images. It assumes a square image
size and you can set the slice distance and time frame settings.
• 2D extend allows you to convert a 2D image into an eCognition Developer image.
This is useful for images that resemble film strips. Enter the slice and frame sizes
and resolution to virtually break up the image.
• 4D layout can be used to edit the slice distance and time frame settings of your
image data. Ensure the correct number of slices and frames are entered
The 3D/4D Settings algorithm was created for very specific use cases. We recommend
using the default settings when importing the data, rather than trying to modify them.
In order to display different maps in the Map View, switch between maps using the drop-
down box at the top of the eCognition Developer client; to display several maps at once,
use the Split commands, available under Window in the main menu.
When working with multi-scale or multi-project maps, you will often want to transfer a
segmentation result from one map to another. The Synchronize Map algorithm allows
you to transfer an image object hierarchy using the following settings:
• The source map is defined in the domain. Select the image object level, map and,
if necessary, classes, conditions and source region.
• With regard to the target map, set the map name, target region, level, class and
condition. If you want to transfer the complete image object hierarchy, set the
value to Yes.
Synchronize Map is most useful when transferring image objects of selected image object
levels or regions. When synchronizing a level into the position of a super-level, then the
relevant sub-objects are modified in order to maintain a correct image object hierarchy.
Image layers and thematic layers are not altered when synchronizing maps.
Maps are automatically saved when saving the project. Maps are deleted using the Delete
Map algorithm. You can delete each map individually using the domain Execute, or delete
all maps with certain prefixes and defined conditions using the domain Maps.
Creating a downsampled map copy is useful if working on a large image data set when
looking for regions of interest. Reducing the resolution of an image can improve perfor-
mance when analyzing large projects. This multi-scale workflow may follow the follow-
ing scheme.
Likewise you can also create a scene subset in a higher scale from the downsampled map.
For more information on scene subsets, refer to Workspace Automation.
In some use cases it makes sense to refine the segmentation and classification of individual objects. The following example provides a general workflow; it assumes that the objects of interest have been found in a previous step, similar to the workflow explained in the previous section, and that each image object is then analyzed individually on a separate map.
8.10.1 Overview
Workspace automation works with three kinds of sub-scenes:
• Scene copy
• Scene subset
• Scene tiles
Sub-scenes let you work on parts of images or rescaled copies of scenes. Most use cases
require nested approaches such as creating tiles of a number of subsets. After processing
the sub-scenes, you can stitch the results back into the source scene to obtain a statistical
summary of your scene.
In contrast to working with maps, workspace automation allows you to analyze sub-
scenes concurrently, as each sub-scene is handled as an individual project in the
workspace. Workspace automation can only be carried out in a workspace.
Scene Copy
A scene copy is a duplicate of a project with image layers and thematic layers, but without
any results such as image objects, classes or variables. (If you want to transfer results to
a scene copy, you might want to use maps. Otherwise you must first export a thematic
layer describing the results.)
Scene copies are regular scene copies if they have been created at the same magnification
or resolution as the original image (top scene). A rescaled scene copy is a copy of a scene
at a higher or lower magnification or resolution.
To create a regular or rescaled scene copy, you can:
• Use the Create Scene Copy dialog (described in the next section) for manual cre-
ation
• Use the Create Scene Copy algorithm within a rule set; for details, see the eCogni-
tion Developer Reference Book.
The scene copy is created as a sub-scene below the project in the workspace.
Scene Subset
A scene subset is a project that contains only a subset area (region of interest) of the orig-
inal scene. It contains all image layers and thematic layers and can be rescaled. Scene
subsets used in workspace automation are created using the Create Scene Subset algo-
rithm. Depending on the selected domain of the process, you can define the size and
cutout position.
• Based on coordinates: If you select Execute in the Domain drop-down box, the
given PIXEL coordinates of the source scene are used.
• Based on classified image objects: If you select an image object level in the Domain
drop-down list, you can select classes of image objects. For each image object of
the selected classes, a subset is created based on a rectangular cutout around the
image object.
Neighboring image objects of the selected classes, which are located inside the cutout
rectangle, are also copied to the scene subset. You can choose to exclude them from
further processing by giving the parameter Exclude Other Image Objects a value of Yes.
If Exclude Other Image Objects is set to Yes, any segmentation in the scene subset will
only happen within the area of the image object used for defining the subset. Results are
not transferred to scene subsets.
The scene subset is created as a sub-scene below the project in the workspace. Scene
subsets can be created from any data set (for example 2D or 3D data sets). Creating a
subset from a 3D, 2D+T or 4D data set will reverse the slice order for the created sub-
scene.
Scene Tiles
Sometimes, a complete map needs to be analyzed, but its large file size makes a straightforward segmentation very time-consuming or processor-intensive. In this case, creating scene tiles is a useful strategy. (The absolute size limit for an image to be segmented in eCognition Developer 9.0 is 2³¹ pixels, corresponding to about 46,340 × 46,340 pixels.) Creating scene tiles cuts the selected scene into equally sized pieces. To create scene tiles you can:
• Use the Create Tiles dialog (described in the next section) for manual creation
• Use the Create Scene Tiles algorithm within a rule set; for more details, see the
eCognition Developer Reference Book.
Define the tile size for x and y; the minimum size is 100 pixels. Scene tiles cannot be
rescaled and are created in the magnification or resolution of the selected scene. Each
scene tile will be a sub-scene of the parent project in the workspace. Results are not
included in the created tiles.
Scene tiles can be created from any data set (2D or 3D for example). When tiling z-stacks
or time series, each slice or frame is tiled individually.
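As a back-of-the-envelope aid (plain Python with an invented scene size, not rule set code), the sketch below shows where the 2³¹-pixel segmentation limit quoted above comes from and how many tiles a given tile size produces:

import math

# The segmentation limit quoted above: 2**31 pixels is roughly 46,340 x 46,340.
print(2 ** 31, math.isqrt(2 ** 31))        # 2147483648  46340

def tile_count(scene_x, scene_y, tile_x, tile_y):
    # Number of tiles needed to cover the scene (partial edge tiles included).
    return math.ceil(scene_x / tile_x) * math.ceil(scene_y / tile_y)

# Example: a 60,000 x 40,000 pixel scene cut into 5,000 x 5,000 pixel tiles.
print(tile_count(60000, 40000, 5000, 5000))  # 12 x 8 = 96 tiles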
Manually created scene copies are added to the workspace as sub-scenes of the originat-
ing project. Image objects or other results are not copied into these scene copies.
1. To create a copy of a scene at the same scale, or at another scale, select a project in
the right-hand pane of the Workspace window.
2. Right-click it and select Create Copy with Scale from the context menu. The Create
Scene Copy with Scale dialog box opens (see figure 8.37)
3. Edit the name of the subset. The default name is the same as the selected project
name.
4. You can select a different scene scale compared to that of the currently selected
project; that way you can work on the scene copy at a different resolution. If
you enter an invalid scale factor, it will be changed to the closest valid scale and
displayed in the table. Reconfirm with OK. In the workspace window, a new project
item appears within the folder corresponding to the scale (for example 100%).
5. The current scale mode cannot be modified in this dialog box.
Click the Image View or Project Pixel View button on the View Settings toolbar to display
the map at the original scene scale. Switch between the display of the map at the original
scene scale (button activated) and the rescaled resolution (button released).
Creating Tiles
Manually created scene tiles are added to the workspace as sub-scenes of the originating project. Image objects or other results are not copied into these scene tiles.
You can analyze tile projects in the same way as regular projects by selecting single or
multiple tiles or folders that contain tiles.
In the Workspace window, select a project with a scene from which you created tiles or
subsets. These tiles must have already been analyzed and be in the ‘processed’ state. To
open the Stitch Tile Results dialog box, select Analysis > Stitch Projects from the main
menu or right-click in the workspace window.
The Job Scheduler field lets you specify the computer that is performing the analysis. It
is set to http://localhost:8184 by default, which is the local machine. However, if
you are running an eCognition Server over a network, you may need to change this field.
Click Load to load a ruleware file for image analysis – this can be a process (.dcp) or
solution (.dax) file that contains a rule set to apply to the stitched projects.
For more details, see Submitting Batch Jobs to a Server (p 221).
The concept of workspace automation is realized by structuring rule sets into subroutines
that contain algorithms for analyzing selected sub-scenes.
Workspace automation can only be done on an eCognition Server. Rule sets that include
subroutines cannot be run in eCognition Developer in one go; for each subroutine, the corresponding sub-scene must be opened.
A subroutine is a separate part of the rule set, cut off from the main process tree and
applied to sub-scenes such as scene tiles. They are arranged in tabs of the Process Tree
window. Subroutines organize processing steps of sub-scenes for automated processing.
Structuring a rule set into subroutines allows you to focus or limit analysis tasks to regions
of interest.
Figure 8.39. Subroutines are assembled on tabs in the Process Tree window
1. Create sub-scenes of the scene, for example with the Create Scene Copy, Create Scene Subset or Create Scene Tiles algorithm.
2. Hand over the created sub-scenes to a subroutine using the Submit Scenes for Analysis algorithm. All sub-scenes are processed with the rule set part in the subroutine. Once all sub-scenes have been processed, post-processing steps – such as stitching the results back – are executed as defined in the Submit Scenes for Analysis algorithm.
3. The rule set execution is continued with the next process following the Submit
Scenes for Analysis algorithm.
A rule set with subroutines can be executed only on data loaded in a workspace. Process-
ing a rule set containing workspace automation on an eCognition Server allows simulta-
neous analysis of the sub-scenes submitted to a subroutine. Each sub-scene will then be
processed by one of the available engines.
Creating a Subroutine
To create a subroutine, right-click on either the main or subroutine tab in the Process Tree
window and select Add New. The new tab can be renamed, deleted and duplicated. The
procedure for adding processes is identical to using the main tab.
Executing a Subroutine
Developing and debugging open projects using a step-by-step execution of single pro-
cesses is appropriate when working within a subroutine, but does not work across subrou-
tines. To execute a subroutine in eCognition Developer, ensure the correct sub-scene is
open, then switch to the subroutine tab and execute the processes.
When running a rule set on an eCognition Server, subroutines are automatically executed
when they are called by the Submit Scenes for Analysis algorithm (for a more detailed
explanation, consult the eCognition Developer Reference Book).
Editing Subroutines
Right-clicking a subroutine tab of the Process Tree window allows you to select common
editing commands.
You can move a process, including all child processes, from one subroutine to another
subroutine using copy and paste commands. Subroutines are saved together with the rule
set; right-click in the Process Tree window and select Save Rule Set from the context
menu.
Figure 8.41. Subroutine commands on the context menu of the Process Tree window
The strategy behind analyzing large images using workspace automation depends on the
properties of your image and the goal of your image analysis. Most likely, you will have
one of the following use cases:
• Complete analysis of a large image, for example finding all the houses in a satellite
image. In this case, an approach that creates tiles of the complete image and stitches
them back together is the most appropriate
• A large image that contains small regions of interest requiring a detailed analysis,
such as a tissue slide containing samples. In this use case, we recommend you
create a small-scale copy and derive full-scale subsets of the regions of interests
only.
To give you practical illustrations of structuring a rule set into subroutines, refer to the
use cases in the next section, which include samples of rule set code. For detailed instruc-
tions, see the related instructional sections and the algorithm settings in the eCognition
Developer Reference Book.
Tiling an image is useful when an analysis of the complete image is problematic. Tiling
creates small copies of the image in sub-scenes below the original image. (For an example of a tiled top scene, see figure 8.42 on this page; each square represents a scene tile.)
In order to put the individually analyzed tiles back together, stitching is required. A
complete workflow and implementation in the Process Tree window is illustrated in fig-
ure 8.43 on the current page:
1. Select the Create Scene Tile algorithm and define the tile size. When creating tiles,
the following factors should be taken into account:
• The larger the tiles, the longer the analysis takes; however, too many small tiles increase loading and saving times
• When stitching is requested, bear in mind that there are limitations for the
number of objects over all the tiles, depending on the number of available
image layers and thematic layers.
2. Tiles are handed over to the subroutine analyzing the scene tiles by the Submit
Scenes for Analysis algorithm.
• In the Type of Scenes field, select Tiles
• Set the Process Name to ‘Subroutine 1’
• Use Percent of Tiles to Submit if you want a random selection to be analyzed
(for example, if you want a statistical overview)
• Set Stitching to Yes in order to stitch the analyzed scene tiles together in the
top scene
• Setting Request Post-Processing to No will prevent further analysis of the
stitched tiles, as an extra step after stitching
Each tile is now processed with the rule set part from Subroutine 1. After
all tiles have been processed, stitching takes place and the complete image
hierarchy, including object variables, is copied to the top scene.
3. In case you want to remove the created tiles after stitching, use the Delete Scenes
algorithm and select Type of Sub-Scenes: Tiles. (For a more detailed explanation,
consult the Reference Book.)
4. Finally, in this example, project statistics are exported based on the image objects
of the top scene.
In this basic use case, a subroutine limits detailed analysis to subsets representing ROIs –
this leads to faster processing.
Commonly, such subroutines are used at the beginning of rule sets and are part of the
main process tree on the Main tab. Within the main process tree, you sequence processes
in order to find ROIs against a background. Let us say that the intermediate results are
multiple image objects of a class ‘no_background’, representing the regions of interest of
your image analysis task.
While still editing in the main process tree, you can add a process applying the Create
Scene Subset algorithm on image objects of the class ‘no_background’ in order to analyze
ROIs only.
The subsets created must be sent to a subroutine for analysis. Add a process with the
algorithm Submit Scenes for Analysis to the end of the main process tree; this executes a
subroutine that defines the detailed image analysis processing on a separate tab.
Creating scene copies and scene subsets is useful if working on a large image data set with
only a small region of interest. Scene copies are used to downscale the image data. Scene
subsets are created from the region of interest at a preferred magnification or resolution.
Reducing the resolution of an image can improve performance when analyzing large
projects.
In eCognition Developer you can start an image analysis on a low-resolution copy of a
map to identify structures and regions of interest. All further image analyses can then be
done on higher-resolution scenes. For each region of interest, a new subset project of the
scene is created at high resolution. The final detailed image analysis takes place on those
subset scenes. This multi-scale workflow can follow the following scheme.
2 – Find regions of interest (ROIs) – Create rescaled subsets of ROIs – Common image analysis algorithms
6 – Stitch tile results to subset results – Detailed analysis of tiles – Submit Scenes for Analysis
7 – Merge subset results back to main scene – Create rescaled subsets of ROIs – Submit Scenes for Analysis
8 – Export results of main scene – Export results of main scene – Export Classification View
Multi-Scale 1: Rescale a Scene Copy Create a rescaled scene copy at a lower magnifica-
tion or resolution and submit for processing to find regions of interest.
In this use case, you use a subroutine to rescale the image at a lower magnification or
resolution before finding regions of interest (ROIs). In this way, you reduce the amount
of image data that needs to be processed and your process consumes less time and perfor-
mance. For the first process, use the Create Scene Copy algorithm.
With the second process – based on the Submit Scenes for Analysis algorithm – you
submit the newly created scene copy to a new subroutine for finding ROIs at a lower
scale.
NOTE: When working with subroutines you can merge back selected re-
sults to the main scene. This enables you to reintegrate results into the
complete image and export them together. As a prerequisite for merging results back to the main scene, set the Stitch Subscenes parameter to Yes in the Submit Scenes for Analysis algorithm.
Figure 8.44. Subroutines are assembled on tabs in the Process Tree window
Multi-Scale 2: Create Rescaled Subset Copies of Regions of Interest In this step, you use a
subroutine to find regions of interest (ROIs) and classify them, in this example, as ‘ROI’.
Based on the image objects representing the ROIs, you create scene subsets of the ROIs.
Using the Create Scene Subset algorithm, you can rescale them to a higher magnification
or resolution. This scale will require more processing performance and time, but it also
allows a more detailed analysis.
Finally, submit the newly created rescaled subset copies of regions of interest for further
processing to the next subroutine. Use the Submit Scenes for Analysis algorithm for such
connections of subroutines.
Multi-Scale 3: Use Tiling and Stitching Create tiles, submit for processing, and stitch the
result tiles for post-processing. In this step, you create tiles using the Create Scene Tiles
algorithm.
In this example, the Submit Scenes for Analysis algorithm subjects the tiles to time- and
performance-consuming processing which, in our example, is a detailed image analysis
at a higher scale. Generally, creating tiles before processing enables the distribution of
the analysis processing on multiple instances of Analysis Engine software.
Here, following processing of the detailed analysis within a separate subroutine, the tile
results are stitched and submitted for post-processing to the next subroutine. Stitching
settings are done using the parameters of the Submit Scenes for Analysis algorithm.
Tiling+Stitching Subsets
create (500x500) tiles
process tiles with ’Detailed Analysis of Tiles’ and stitch
Detailed Analysis of Tiles
Detailed Analysis
...
...
...
If you want to transfer result information from one sub-scene to another, you can do so by exporting the image objects to a thematic layer and then adding this thematic layer to the new scene copy. Use either the Export Vector Layer or the Export Thematic Raster Files algorithm to export a geocoded thematic layer, and add features to the thematic layer so that they are available in the new scene copy.
After exporting a geocoded thematic layer for each subset copy, add the export item
names of the exported thematic layers in the Additional Thematic Layers parameter of
the Create Scene Tiles algorithm. The thematic layers are matched correctly to the scene
tiles because they are geocoded.
Using the Submit Scenes for Analysis algorithm, you finally submit the tiles for further processing to the subsequent subroutine. There you can use the thematic layer information via thematic attribute features or thematic layer operations algorithms.
Likewise, you can also pass parameter sets to new sub-scenes and use the variables from
these parameter sets in your image analysis.
Sub-scenes can be tiles, copies or subsets. You can export statistics from a sub-scene
analysis for each scene, and collect and merge the statistical results of multiple files. The
advantage is that you do not need to stitch the sub-scenes results for result operations
concerning the main scene.
To do this, each sub-scene analysis must have exported at least one project or domain statistic. All preceding sub-scene analyses, including the export, must have been processed completely before the Read Subscene Statistics algorithm starts any result summary calculations. Result calculations can be performed:
• In the main process tree, after the Submit Scenes for Analysis algorithm
• In a subroutine, within a post-processing step of the Submit Scenes for Analysis algorithm.
After processing all sub-scenes, the algorithm reads the exported result statistics of the
sub-scenes and performs a defined mathematical summary operation. The resulting value,
representing the statistical results of the main scene, is stored as a variable. This variable
can be used for further calculations or export operations concerning the main scene.
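Conceptually, this step behaves like the following sketch (plain Python; the file layout, paths and the column name are invented for the example): each processed sub-scene has exported one statistic, and a summary operation folds the exported values into a single value for the main scene.

import csv
import glob
import statistics

# Hypothetical layout: each processed sub-scene exported one CSV file with a
# column "num_cells" holding its project statistic.
values = []
for path in glob.glob("results/tile_*_statistics.csv"):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            values.append(float(row["num_cells"]))

# The summary operation (Sum, Mean, ...) yields one value that represents the
# main scene and could be stored in a scene variable.
summary = {"sum": sum(values), "mean": statistics.mean(values) if values else 0.0}
print(summary)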
Hierarchical image object levels allow you to derive statistical information about groups
of image objects that relate to super-, neighbor- or sub-objects. In addition, you can derive
statistical information from groups of objects that are linked to each other. Use cases that
require you to link objects in different image areas without generating a common super-
object include:
1. Linking objects between different time frames of time series data, in order to calculate the moving distance or direction of an object over time
2. Linking distributed cancer indications
3. Linking a bridge to a street and a river at the same time.
The concept of creating and working with image object links is similar to analyzing hierar-
chical image objects, where an image object has ‘virtual’ links to its sub- or superobjects.
Creating these object links allows you to virtually connect objects in different maps and
areas of the image. In addition, object links are created with direction information that
can distinguish between incoming and outgoing links, which is an important feature for
object tracking.
Through the tutorials in earlier chapters, you will already have some familiarity with the
idea of parent and child domains, which were used to organize processes in the Process
Tree. In that example, a parent process was created which applied the Execute Child Processes algorithm to the child processes beneath it.
The child processes within these parents typically defined algorithms at the image object
level. However, depending on your selection, eCognition Developer can apply algorithms
to other objects selected from the Domain.
Object Links are created using the Create Links algorithm. Links may link objects on
different hierarchical levels, different slices or frames, or on different maps. Therefore,
an image object can have any number of object links to any other image object. A link
belongs to the level of its source image object.
A link is always directed towards its target object; from the target object's perspective it is an incoming link. The example in the figure below shows multiple time frames (T0 to T4).
The object (red) in T2 has one incoming link and two outgoing links. In most use cases,
multiple links are created in a row (defined as a path). If multiple links are connected to
one another, the link direction is defined as:
The length of a path is described by a distance; linked object features use the max. distance parameter as a condition. Using the example in the figure below, distances are counted as follows (a small sketch of this counting follows the list):
• T0 to T1: Distance is 0
• T0 to T2: Distance is 1
• T0 to T4: Distance is 3 (to both objects)
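This counting can be mimicked with a small breadth-first search over a link graph (plain Python; the object names are invented and loosely follow the time-series figure). Matching the example above, the distance of a path is taken as the number of links it contains minus one, so a directly linked object is at distance 0.

from collections import deque

# Outgoing links per object, loosely following the time-series example.
links = {
    "T0": ["T1"],
    "T1": ["T2"],
    "T2": ["T3a", "T3b"],
    "T3a": ["T4a"],
    "T3b": ["T4b"],
}

def link_distance(source, target):
    # Number of links on the shortest path minus one, or None if not linked.
    queue, seen = deque([(source, 0)]), {source}
    while queue:
        node, hops = queue.popleft()
        for nxt in links.get(node, []):
            if nxt == target:
                return hops  # hops equals links-on-path minus one at this point
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return None

print(link_distance("T0", "T1"))   # 0
print(link_distance("T0", "T2"))   # 1
print(link_distance("T0", "T4a"))  # 3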
An object link is stored in a class, called the link class. These classes appear as nor-
mal classes in the class hierarchy and groups of links can be distinguished by their link
classes. When creating links, the domain defines the source object and the candidate ob-
ject parameters define the target objects. The target area is set with the Overlap Settings
parameters.
Existing links are handled in this way:
• Splitting an object with m links into n fragments creates n objects, each linking in
the same way as the original object. This will cause the generation of m × (n − 1)
new links (which are clones of the old ones)
• Copying an image object level will also copy the links
• Deleting an object deletes all links to or from this object
• Links are saved with the project.
Figure 8.45. Incoming and outgoing links over multiple time frames. The red circles represent
objects and the green arrows represent links
By default, all object links of an image object are outlined when you select the image object in the map view. You can restrict the display to a specific link class, link direction, or links within a maximum distance using the Edit Linked Object Visualization dialog box (View > Display Mode > Edit Linked Object Visualization).
For creating statistics about linked objects, eCognition Developer 9.0 provides Linked
Objects Features:
• Linked Objects Count – counts all objects that are linked to the selected object and
that match the link class filter, link direction and max. distance settings.
• Statistics of Linked Objects – provides statistical operations such as Sum or Mean
over a selected feature taking the set object link parameters into account.
• Link weight to PPO – computes the overlap area of two linked objects to each
other.
Polygons are vector objects that provide more detailed information for characterization of
image objects based on shape. They are also needed to visualize and export image object
outlines. Skeletons, which describe the inner structure of a polygon, help to describe an
object’s shape more accurately.
Polygon and skeleton features are used to define class descriptions or refine segmenta-
tions. They are particularly suited to studying objects with edges and corners.
A number of shape features based on polygons and skeletons are available. These features
are used in the same way as other features. They are available in the feature tree under
Object Features > Geometry > Based on Polygons or Object Features > Geometry >
Based on Skeletons.
Polygons are available after the first segmentation of a map. To display polygons in the
map view, click the Show/Hide Polygons button. For further options, open the View
Settings (View > View Settings) window.
Figure 8.46. View Settings window with context menu for viewing polygons
Click on Polygons in the left pane and select one of the following polygon display modes:
Figure 8.47. Different polygon displays in the map view. Left: raster outline mode. Right:
smoothed outline mode
Figure 8.48. Different polygon display methods in the map view. Bottom left: Result of scale
parameter analysis. Bottom right: Selected image object (Image data courtesy of Ministry of
Environmental Affairs of Sachsen-Anhalt, Germany.)
If the polygon view is activated, any time you select an image object it will be rendered
along with its characterizing polygon. This polygon is more generalized than the poly-
gons shown by the outlines and is independent of the topological structure of the image
object level. Its purpose is to describe the selected image object by its shape.
You can use the settings in the Rule Set Options algorithm to change the way polygons are displayed. The settings for polygons in the Project Settings group of the Options dialog box control how polygons are generalized.
Figure 8.49. Sample map with one selected skeleton (the outline color is yellow; the skeleton
color is orange)
About Skeletons
Skeletons describe the inner structure of an object. By creating skeletons, the object’s
shape can be described in a different way. To obtain skeletons, a Delaunay triangulation
of the objects’ shape polygons is performed. The skeletons are then created by identifying
the mid-points of the triangles and connecting them. To find skeleton branches, three types of triangles are created.
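A rough sketch of this midpoint idea, using SciPy's (unconstrained) Delaunay triangulation on invented polygon vertices. eCognition triangulates the object's shape polygon itself, so the sketch only illustrates the principle of connecting triangle midpoints, not the exact skeletons produced by the software:

import numpy as np
from scipy.spatial import Delaunay

# Invented outline vertices of a simple convex object polygon.
poly = np.array([[0, 0], [6, 0], [6, 2], [3, 3], [0, 2]], dtype=float)
tri = Delaunay(poly)

segments = []  # skeleton line segments as (point, point) pairs
for t, simplex in enumerate(tri.simplices):
    # Midpoints of edges shared with a neighboring triangle ("internal" edges);
    # triangles with only one internal edge mark end points of the skeleton.
    mids = [poly[np.delete(simplex, i)].mean(axis=0)
            for i in range(3) if tri.neighbors[t][i] != -1]
    if len(mids) == 2:                      # connecting triangle
        segments.append((mids[0], mids[1]))
    elif len(mids) == 3:                    # branch triangle
        centre = poly[simplex].mean(axis=0)
        segments.extend((m, centre) for m in mids)

for a, b in segments:
    print(np.round(a, 2), "->", np.round(b, 2))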
The main line of a skeleton is represented by the longest possible connection of branch
points. Beginning with the main line, the connected lines then are ordered according to
their types of connecting points.
The branch order is comparable to the stream order of a river network. Each branch
obtains an appropriate order value; the main line always holds a value of 0 while the
outmost branches have the highest values, depending on the objects’ complexity.
The right image shows a skeleton with the following branch order:
• 4: Branch order = 2.
• 5: Branch order = 1.
• 6: Branch order = 0 (main line).
Encrypting rule sets prevents others from reading and modifying them. To encrypt a rule
set, first load it into the Process Tree window. Open the Process menu in the main menu
and select Encrypt Rule Set to open the Encrypt Data dialog box. Enter the password that
you will use to decrypt the rule set and confirm it.
The rule set will display only the parent process, with a padlock icon next to it. If you
have more than one parent process at the top level, each of them will have a lock next to
it. You will not be able to open the rule set to read or modify it, but you can append more
processes to it and they can be encrypted separately, if you wish.
Decrypting a rule set is essentially the same process; first load it into the Process Tree
window, then open the Process menu in the main menu bar and select Decrypt Rule Set
to open the Decrypt Data dialog box. When you enter your password, the padlock icon
will disappear and you will be able to read and modify the processes.
If the rule set is part of a project and you close the project without saving changes, the
rule set will be decrypted again when you reopen the project. The License id field of
the Encrypt Data dialog box is used to restrict use of the rule set to specific eCognition
licensees. Simply leave it blank when you encrypt a rule set.
Find and Replace is a useful way to browse and edit rule set items, allowing you to replace them with rule set items of the same category. This is especially helpful for maintaining large rule sets and for development in teams.
Within a rule set, you can find and replace all occurrences of the following rule set items: algorithms (within a rule set loaded in the Process Tree window); classes; class variables; features; feature variables; image layers; image object levels; level variables; map variables; object variables; region variables; scene variables; text and thematic layers.
To open the Find and Replace window, do one of the following:
There are two checkboxes in the Find and Replace window – Delete After Replace All
and Find Uninitialized Variables.
Selecting Delete After Replace All deletes any unused features and variables that result
from the find and replace process. For instance, imagine a project has two classes, ‘dark’
and ‘bright’. With ‘class’ selected in the Find What drop-down box, a user replaces all
instances of ‘dark’ with ‘bright’. If the box is unchecked, the ‘dark’ class remains in the
Class Hierarchy window; if it is selected, the class is deleted.
Find Uninitialized Variables simply lets you search variables that do not have an explicit
initialization.
It is good practice to include comments in your rule sets if your work will be shared with
other developers.
To add a comment, select the rule set item (for example a process, class or expression) in
a window where it is displayed – Process Tree, Class Hierarchy or Class Description.
The Comment icon appears in a window when you hover over an item; it also appears in
the relevant editing dialog box. The editing field is not available unless you have selected
a rule set item. Comments are automatically added to rule set items as soon as another
rule set item or window is selected.
The up and down arrows allow you to navigate the comments attached to items in a hierarchy; Paste and Undo functions are also available.
There is an option to turn off comments in the Process Tree in the Options dialog box
(Tools > Options).
The Rule Set Documentation window manages the documentation of rule sets. To open it,
select Process > Rule Set Documentation or View > Windows > Rule Set Documentation
from the main menu.
Clicking the Generate button displays a list of rule set items in the window, including
classes, customized features, and processes. The window also displays comments at-
tached to classes, class expressions, customized features and processes. Comments are
preceded by a double backslash. You can add comments to rule set items in the window
then click the Generate button again to view them.
It is possible to edit the text in the Rule Set Documentation window; however, changes
made in the window will not be added to the rule set and are deleted when the Generate
button is pressed. However, they are preserved if you Save to File or Copy to Clipboard.
(Save to File saves the documentation to ASCII text or rich text format.)
A Process Path is simply a pathway to a process in the Process Tree window. It can be
used to locate a process in a rule set and is useful for collaborative work.
Right-click on a process of interest and select Go To (or use the keyboard shortcut Ctrl-
G). The pathway to the process is displayed; you can use the Copy button to copy the
path, or the Paste button to add another pathway from the clipboard.
The time taken to execute a process is displayed before the process name in the Process
Tree window. This allows you to identify the processes that slow down the execution of
your rule set. You can use the Process Profiler to identify processes so you can replace
them with less time-consuming ones, eliminating performance bottlenecks. To open the
Process Profiler, go to View > Windows > Process Profiler or Process > Process Profiler
in the main menu. Execute a process and view the profiling results under the Report tab.
By default, the slowest five processes are displayed. Under the Options tab, you can
change the profiling settings.
You can also deactivate process profiling under Tools > Options, which removes the time display in front of the process name.
9.5 Snippets
A process snippet is part of a rule set, consisting of one or more processes. You can
organize and save process snippets for reuse in other rule sets. You can drag-and-drop
processes between the Process Tree window and the Snippets window. To reuse snippets
in other rule sets, export them and save them to a snippets library. Open the Snippets
window using View > Windows > Snippets or Process > Snippets from the main menu.
By default, the Snippets window displays frequently used algorithms that you can drag
into the Process Tree window. Drag a process from the Process Tree window into the
Snippets window – you can drag any portion of the Process Tree along with its child
processes. Alternatively, you can right-click processes or snippets to copy and paste them.
You can also copy snippets from the Snippets window to any position of the Process Tree
window.
To save all listed snippets in a snippets library, right-click in the Snippets window and
select Export Snippets. All process snippets are saved as a snippet .slb file. To import
Snippets from a snippets library, right-click in the Snippets window and select Import
Snippets.
You cannot add customized algorithms to the Snippets window, but snippets can include
references to customized algorithms.
A project is the most basic format in eCognition Developer 9.0. A project contains one
or more maps and optionally a related rule set. Projects can be saved separately as a .dpr
project file, but one or more projects can also be stored as part of a workspace.
For more advanced applications, workspaces reference the values of exported results and
hold processing information such as import and export templates, the required ruleware,
processing states, and the required software configuration. A workspace is saved as a set
of files that are referenced by a .dpj file.
The Workspace window lets you view and manage all the projects in your workspace,
along with other relevant data. You can open it by selecting View > Windows >
Workspace from the main menu.
The Workspace window is split in two panes:
• The left-hand pane contains the Workspace tree view. It represents the hierarchical
structure of the folders that contain the projects
• In the right-hand pane, the contents of a selected folder are displayed. You can
choose between List View, Folder View, Child Scene View and two Thumbnail
views.
In List View and Folder View, information is displayed about a selected project – its state,
scale, the time of the last processing and any available comments. The Scale column
displays the scale of the scene. Depending on the processed analysis, there are additional
columns providing exported result values.
Opening and Creating New Workspaces To create a new workspace, select File > New
Workspace from the main menu or use the Create New Workspace button on the default
toolbar. The Create New Workspace dialog box lets you name your workspace and define
its file location – it will then be displayed as the root folder in the Workspace window.
Figure 10.1. Workspace window with Summary and Export Specification and drop-down
view menu
If you need to define another output root folder, it is preferable to do so before you load
scenes into the workspace. However, you can modify the path of the output root folder
later on using File > Workspace Properties.
Importing Scenes into a Workspace Before you can start working on data, you must
import scenes in order to add image data to the workspace. During import, a project is
created for each scene. You can select different predefined import templates according to
the image acquisition facility producing your image data.
If you only want to import a single scene into a workspace, use the Add Project command.
To import scenes to a workspace, choose File > Predefined Import from the main menu
or right-click the left-hand pane of the Workspace window and choose Predefined Import.
The Import Scenes dialog box opens.
• You can use various import templates to import scenes. Each import template is
Displaying Statistics in Folder View Selecting Folder View gives you the option to display
project statistics. Right-click in the right-hand pane and Select Folder Statistics Type
from the drop-down menu. The available options are Sum, Mean, Standard Deviation,
Minimum and Maximum.
Multiple scenes from an existing file structure can be imported into a workspace and
saved as an import template. The idea is that the user first defines a master file, which
functions as a sample file and allows identification of the scenes of the workspace. The
user then defines individual data that represents a scene by defining a search string.
A workspace must be in place before scenes can be imported and the file structure of
image data to be imported must follow a consistent pattern. To open the Customized
Import dialog box, go to the left-hand pane of the Workspace window and right-click a
folder to select Customized Import. Alternatively select File > Customized Import from
the main menu.
1. Click the Clear button before configuring a new import to remove any existing
settings. Choose a name in the Import Name field
2. The Root Folder is the folder where all the image data you want to import will
be stored; this folder can also contain data in multiple subfolders. To allow a
customized import, the structure of image data storage has to follow a pattern,
which you will later define
3. Select a Master File within the root folder or its subfolders. Depending on the file
structure of your image data, defined by your image reader or camera, the master
file may be a typical image file, a metafile describing the contents of other files, or
both.
4. The Search String field displays a textual representation of the sample file path
used as a pattern for the searching routine. The Scene Name text box displays a
representation of the name of the scene that will be used in the workspace window
after import.
5. Press the Test button to preview the naming result of the Master File based on the
Search String
Loading and Saving Templates Press Save to save a template as an XML file. Tem-
plates are saved in custom folders that do not get deleted if eCognition Developer is
uninstalled. Selecting Load will open the same folder – in Windows XP the location of
this folder is C:\Documents and Settings\[User]\Application Data\eCognition\[Version
Number]\Import. In Windows 7 and Windows 8 the location of this folder is
C:\Users\[User]\AppData\Roaming\eCognition\[Version Number]\import.
Editing Search Strings and Scene Names Editing the Search String and the Scene Name
– if the automatically generated ones are unsatisfactory – is often a challenge for less-
experienced users.
There are two types of fields that you can use in search strings: static and variable. A
static field is inserted as plain text and refers to filenames or folder names (or parts of
them). Variable fields are always enclosed in curly brackets and may refer to variables
such as a layer, folder or scene. Variable fields can also be inserted from the Insert Block
drop-down box.
For example, the expression {scene}001.tif will search for any scene whose filename
ends in 001.tif. The expression {scene}_x_{scene}.jpg will find any JPEG file with _x_
in the filename. For advanced editing, you can use regular expressions (such as ?, * and
OR).
You must comply with the following search string editing rules:
• The search string has to start with {root}\ (this appears by default)
• All static parts of the search string have to be defined by normal text
• Use a backslash between a folder block and its content.
• Use {block name:n} to specify the number of characters of a searched item.
• All variable parts of the search string can be defined by using blocks representing
the search items which are sequenced in the search string (see table 10.1, Search
String Variables).
Project Naming in Workspaces Projects in workspaces have compound names that include
the path to the image data. Each folder within the Workspace window folder is part of
the name that displays in the right-hand pane, with the name of the scene or tile included
at the end. You can understand the naming convention by opening each folder in the
left-hand pane of the Workspace window; the Scene name displays in the Summary pane.
To view the entire project name:
1. Select List View from the drop-down list in the right-hand pane of the Workspace
window.
2. In the folder tree in the left-hand pane, select the root folder, which is labeled with
the workspace name. The entire project names now display in the right-hand pane.
Managing Folders in the Workspace Tree View Add, move, and rename folders in the tree
view on the left pane of the Workspace window. Depending on the import template, these
folders may represent different items.
Table 10.1. Search String Variables
:reverse – Starts reading from the end instead of the beginning. Example: {part of a file name:reverse}; recommended for reading file names, because file name endings are usually fixed.
any – Represents any order and number of characters. Used as a wildcard character, for example {any}.tif for TIFF files with an arbitrary name.
root – Represents the root folder under which all image data you want to import is stored. Every search string has to start with {root}\.
folder – Represents one folder under which the image files are stored.
scene – Represents the name of a scene that will be used for project naming within the workspace after import. Example: {root}\{scene}.tif for TIFF files whose file names will be used as scene names.
frame – Represents the frames of a time series or 4D image data set. It can be used for files or folders. Example: {frame}.tif for all TIFF files, or {frame}\{any}.tif for all TIFF files in a folder containing frame files.
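The Test button performs this kind of matching inside eCognition. Purely as an illustration of how the blocks relate to file paths, and not as part of the software, the following Python sketch previews the files and scene names that a simple search string would produce; the folder path, the function name and the restriction to blocks that appear only once are assumptions of the sketch.

# Illustrative sketch only: eCognition performs this matching internally when
# you press Test. It previews which files a simple search string such as
# {root}\{scene}.tif would match; all helper names here are hypothetical.
import pathlib
import re

def preview_search_string(root_folder, search_string):
    """Return (file, scene name) pairs that a simple search string would match."""
    # Drop the mandatory {root}\ prefix, then turn each {block} into a named
    # regex group that matches any characters.
    pattern = search_string.replace("{root}\\", "")
    regex = re.sub(r"\\\{(\w+)\\\}", r"(?P<\1>.+)", re.escape(pattern))
    matcher = re.compile(regex + r"$", re.IGNORECASE)

    matches = []
    for path in pathlib.Path(root_folder).rglob("*"):
        # Compare against a backslash-separated relative path, as in the dialog.
        relative = path.relative_to(root_folder).as_posix().replace("/", "\\")
        found = matcher.match(relative)
        if found:
            matches.append((path, found.groupdict().get("scene", path.stem)))
    return matches

# Example: preview the scene names created by {root}\{scene}.tif
for image_file, scene_name in preview_search_string(r"D:\ImageData", r"{root}\{scene}.tif"):
    print(scene_name, "->", image_file)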
Saving and Moving Workspaces Workspaces are saved automatically whenever they are
changed. If you create one or more copies of a workspace, changes to any of these will
result in an update of all copies, irrespective of their location. Moving a workspace is
easy because you can move the complete workspace folder and continue working with
the workspace in the new location. If file connections related to the input data are lost, the
Locate Image dialog box opens, where you can restore them; this automatically updates
all other input data files that are stored under the same input root folder. If you have
loaded input data from multiple input root folders, you only have to relocate one file per
input root folder to update all file connections.
We recommend that you do not move any output files that are stored by default within
the workspace folder. These are typically all .dpr project files and by default, all results
files. However, if you do, you can modify the path of the output root folder under which
all output files are stored.
To modify the path of the output root folder choose File > Workspace Properties from
the main menu. Clear the Use Workspace Folder check-box and change the path of the
output root folder by editing it, or click the Browse for Folders button and browse to an
output root folder. This changes the location where image results and statistics will be
stored. The workspace location is not changed.
Opening Projects and Workspace Subsets Open a project to view and investigate its maps
in the map view:
1. Go to the right-hand pane of the Workspace window that lists all projects of a
workspace.
2. Do one of the following:
• Right-click a project and choose Open on the context menu.
• Double-click a project
• Select a project and press Enter.
3. The project opens and its main map is displayed in the map view. If another project
is already open, it is closed before the new one is opened. If maps are very large,
you can open and investigate a subset of the map:
• Go to the right pane of the Workspace window that lists all projects of a
workspace
• Right-click a project and choose Open Subset. The Subset Selection dialog
box opens
• Define a subset and confirm with OK. The subset displays in the map view.
This subset is not saved with the project and does not modify the project.
After closing the map view of the subset, the subset is lost; however, you can
save the subset as a separate project.
Inspecting the State of a Project For monitoring purposes you can view the state of the
current version of a project. Go to the right-hand pane of the Workspace window that
lists the projects. The state of the current version of a project is displayed after its name.
Inspecting the History of a Project Inspecting older versions helps with testing and opti-
mizing solutions. This is especially helpful when performing a complex analysis, where
the user may need to locate and revert to an earlier version.
1. To inspect the history of older project versions, go to the right-hand pane of the
Workspace window that lists projects. Right-click a project and choose History
from the context menu. The Project History dialog box opens.
2. All project versions (Ver.) are listed with related Time, User, Operations, State, and
Remarks.
3. Click OK to close the dialog box.
Clicking a column header lets you sort by column. To open a project version in the map
view, select a project version and click View, or double-click a project version.
To restore an older version, choose the version you want to bring back and click the
Roll Back button in the Project History dialog box. The restored project version does
not replace the current version; instead, it is added to the project version list. The
intermediate versions are not lost.
Reverting to a Previous Version Besides the Roll Back button in the Project History dia-
log box, you can manually revert to a previous version. 3
The intermediate versions are not lost. Select Destroy the History and All Results if
you want to restart with a new version history after removing all intermediate versions,
including the results. In the Project History dialog box, the new version displays
Rollback in the Operations column.
Importing an Existing Project into a Workspace Processed and unprocessed projects can
be imported into a workspace.
Go to the left-hand pane of the Workspace window and select a folder. Right-click it and
choose Import Existing Project from the context menu. Alternatively, choose File > New
Project from the main menu.
The Open Project dialog box will open. Select one project (file extension .dpr) and click
Open; the new project is added to the right-hand Workspace pane.
3. In the event of an unexpected processing failure, the project automatically rolls back to the last workflow state.
This operation is documented as Automatic Rollback in the Remarks column of the Workspace window and as
Roll Back Operation in the History dialog box.
Creating a New Project Within a Workspace To add multiple projects to a workspace, use
the Import Scenes command. To add an existing project to a workspace, use the Import
Existing Project command. To create a new project separately from a workspace, close
the workspace and use the Load Image File or New Project command.
Loading Scenes as Maps into a New Project Multi-map projects can be created from
multiple scenes in a workspace.
In the right-hand pane of the Workspace window, select multiple projects by holding down
the Ctrl or Shift key. Right-click and select Open from the context menu. Type a name
for the new multi-map project in the New Multi-Map Project Name dialog box that opens.
Click OK to display the new project in the map view and add it to the project list.
If you select projects from different folders by using the List View, the new multi-map
project is created in the folder whose name comes last in alphabetical order. For example,
if you select projects from a folder A and a folder B, the new multi-map project is created
in folder B.
Working on Subsets and Copies of Scenes If you have to analyze projects with maps
representing scenes that exceed the processing limitations, some preparation is necessary.
Projects with maps representing scenes within the processing limitations can be processed
normally, but some preparation is recommended if you want to accelerate the image anal-
ysis or if the system is running out of memory.
To handle such large scenes, you can work at different scales. If you process two-
dimensional scenes, you have additional options, such as tiling and stitching.
For automated image analysis, we recommend developing rule sets that handle the above
methods automatically. In the context of workspace automation, subroutines enable you
to automate and accelerate the processing, especially the processing of large scenes.
Removing Projects and Deleting Folders When a project is removed, the related image
data is not deleted. To remove one or more projects, select them in the right pane of the
Workspace window. Either right-click the item and select Remove or press Del on the
keyboard.
To remove folders along with their contained projects, right-click a folder in the left-hand
pane of the Workspace window and choose Remove from the context menu.
If you removed a project by mistake, just close the workspace without saving. After
reopening the workspace, the deleted projects are restored to the last saved version.
Saving a Project List to File
1. Go to the right pane of the Workspace window. Right-click a project and choose
Save list to file from the context menu.
2. The list can be opened and analyzed in applications such as Microsoft® Excel.
In the Options dialog box under the Output Format group, you can define the decimal
separator and the column delimiter according to your needs.
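The exported list is an ordinary character-separated text file, so it can be loaded directly by other tools. As a minimal, hypothetical sketch (not an eCognition interface), the following Python lines show how such a file could be read when the column delimiter and decimal separator were set to ";" and "," in the Options dialog; the file name and those settings are assumptions.

# Hypothetical example: load an exported project list whose Output Format
# options were set to ";" as column delimiter and "," as decimal separator.
import pandas as pd

table = pd.read_csv("workspace_project_list.csv", sep=";", decimal=",")
print(table.head())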
Copying the Workspace Window The current display of both panes of the Workspace window
can be copied to the clipboard. It can then be pasted into a document or an image editing
program, for example.
Simply right-click in the right or left-hand pane of the Workspace Window and select
Copy to Clipboard.
Subscenes can be tiles or subsets. You can export statistics from a subscene analysis for
each scene and collect and merge the statistical results of multiple files. The advantage
is that you do not need to stitch the subscenes results for result operations concerning the
main scene.
To do this, each subscene analysis must have had at least one project or domain statistic
exported. All preceding subscene analyses, including export, must have been processed
completely before the Read Subscene Statistics algorithm starts any result summary cal-
culations. To ensure this, result calculations are done within a separate subroutine.
After processing all subscenes, the algorithm reads the exported result statistics of the
subscenes and performs a defined mathematical summary operation. The resulting value,
representing the statistical results of the main scene, is stored as a variable. This variable
can be used for further calculations or export operations concerning the main scene.
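As a rough illustration of the summary step described above, and not of the algorithm's actual file handling, the following Python sketch merges one exported statistic value per subscene into a single value for the main scene; the file names, the column name and the delimiter are assumptions.

# Hypothetical sketch: combine a statistic exported by each subscene analysis
# into one summary value for the main scene (sum, mean, min or max).
import csv
import statistics

def merge_subscene_statistic(csv_paths, column="Area", operation="sum", delimiter=";"):
    values = []
    for path in csv_paths:
        with open(path, newline="") as handle:
            for row in csv.DictReader(handle, delimiter=delimiter):
                values.append(float(row[column]))
    summaries = {"sum": sum, "mean": statistics.mean, "min": min, "max": max}
    return summaries[operation](values)

# e.g. total area over all tile statistics exported for the main scene
total_area = merge_subscene_statistic(["tile1_stats.csv", "tile2_stats.csv"])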
A rule set with subroutines can be executed only on data loaded to a workspace. This
enables you to review all projects of scenes, subsets, and tiles; they are all stored in the
workspace.
(A rule set with subroutines can only be executed if you are connected to an eCognition
Server. Rule sets that include subroutines cannot be processed on a local machine.)
10.1.5 Tutorials
To give you practical illustrations of structuring a rule set into subroutines, have a look at
some typical use cases including samples of rule set code. For detailed instructions, see
the related instructional sections and the Reference Book listing all settings of algorithms.
Find regions of interest (ROIs), create scene subsets, and submit for further processing.
In this basic use case, you use a subroutine to limit detailed image analysis processing
to subsets representing ROIs. The image analysis processes faster because you avoid
detailed analysis of other areas.
Commonly, you use this subroutine use case at the beginning of a rule set and therefore
it is part of the main process tree on the Main tab. Within the main process tree, you
sequence processes in order to find regions of interest (ROI) on a bright background. Let
us say that the intermediate results are multiple image objects of a class no_background
representing the regions of interest of your image analysis task.
Still editing within the main process tree, you add a process applying the create scene
subset algorithm on image objects of the class no_background in order to analyze regions
of interest only.
The subsets created must be sent to a subroutine for analysis. Add a process with the
algorithm submit scenes for analysis to the end of the main process tree. It executes a
subroutine that defines the detailed image analysis processing on a separate tab.
Figure 10.7. The Main process tree in the Process Tree window
Use the merging results parameters of the submit scenes for analysis algorithm with care,
because its intersection handling may result in performance-intensive operations.
Here you use the export thematic raster files algorithm to export a geocoded thematic
layer for each scene or subset, containing classification information about intermediate
results. This information, stored in a thematic layer and an associated attribute table,
describes the location of image objects and provides information about their classification.
After exporting a geocoded thematic layer for each subset copy, you reload all thematic
layers to a new copy of the complete scene. This copy is created using the create scene
copy algorithm.
The subset thematic layers are matched correctly to the complete scene copy because they
are geocoded. Consequently you have a copy of the complete scene with intermediate
result information of preceding subroutines.
Using the submit scenes for analysis algorithm, you finally submit the copy of the com-
plete scene for further processing to a subsequent subroutine. Here you can use the inter-
mediate information of the thematic layer by using thematic attribute features or thematic
layer operations algorithms.
Advanced: Transfer Results of Subsets
    at ROI_Level: export classification to ExportObjectsThematicLayer
    create scene copy 'MainSceneCopy'
    process 'MainSceneCopy*' subsets with 'Further'
Further
    Further Processing
        ...
eCognition Developer 9.0 enables you to perform automated image analysis jobs that
apply rule sets to single or multiple projects. This requires an existing ruleware
file, which may be a rule set (.dcp) or a solution (.dax). Select one or more items in the
Workspace window – you can select one or more projects from the right-hand pane or an
entire folder from the left-hand pane. Choose Analysis > Analyze from the main menu
or right-click the selected item and choose Analyze. The Start Analysis Job dialog box
opens.
1. The Job Scheduler field displays the address of the computer that assigns the anal-
ysis to one or more (if applicable) computers. It is assigned to the local computer
by default
2. Click Load to load a ruleware file for the image analysis – this can be a process
file (extension .dcp) or a solution file (extension .dax). The Edit button lets you
configure the exported results and the export paths of the image analysis job in an
export template. Save lets you store the export template with the process file.
3. Select the type of scene to analyze in the Analyze drop-down list.
• All Scenes applies the rule set to all selected scenes in the Workspace window.
• Top Scenes refers to the original scenes that have been used to create scene
copies, subsets, or tiles.
• Tiles Only limits the analysis to tiles, if you have created them.
4. Select the Use Time-Out check box to automatically cancel image analysis after
a defined period. This may be helpful in cases of unexpected image aberrations.
When testing rule sets you can cancel endless loops automatically; projects are
then marked as Canceled. (Time-Out applies to all processing, including tiling and
stitching.)
5. The Configuration tab lets you edit settings (this is rarely necessary)
6. Press Start to begin the image analysis. While the image analysis is running, the
state of the projects displayed in the right pane of the Workspace window will
change to Waiting, then Processing, and later to Processed.
These settings are designed for advanced users. Do not alter them unless you are aware of
a specific need to change the default values and you understand the effects of the changes.
The Configuration tab of the Start Analysis Job dialog box enables you to review and alter
the configuration information of a job before it is sent to the server. The configuration
information for a job describes the required software and licenses needed to process the
job. This information is used by the eCognition Server to configure the analysis engine
software according to the job requirements.
An error message is generated if the installed packages do not meet the requirements spec-
ified in the Configuration tab. The configuration information is of three types: product,
version and configuration.
Settings The Product field specifies the software package name that will be used to
process the job. Packages are found by using pattern matching, so the default value
‘eCognition’ will match any package that begins with ‘eCognition’ and any such package
will be valid for the job.
The Version field displays the default version of the software package used to process the
job. You do not normally need to change the default.
If you do need to alter the version of the Analysis Engine Software, enter the number
needed in the Version text box. If the version is available it will be used. The format for
version numbers is major.upgrade.update.build. For example, 7.0.1.867 means platform
version 7.0.1, build 867. You can simply use 7.0 to use the latest installed software
package with version 7.0.
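The version matching described above behaves like a prefix match on the version string. The following short Python sketch, which is only an illustration of that idea and not eCognition code, picks the highest installed version that matches a partial request such as 7.0.

# Illustration of prefix-style version selection (not eCognition code).
def pick_package(installed_versions, requested="7.0"):
    """Return the highest installed version matching the requested prefix."""
    def key(version):
        return tuple(int(part) for part in version.split("."))
    candidates = [v for v in installed_versions
                  if v == requested or v.startswith(requested + ".")]
    return max(candidates, key=key, default=None)

print(pick_package(["7.0.0.500", "7.0.1.867", "6.4.2.100"]))  # -> 7.0.1.867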
The large pane at the bottom of the dialog box displays the plug-ins, data I/O drivers and
extensions required by the analysis engine to process the job. The eCognition Grid will
not start a software package that does not contain all the specified components.
Plug-Ins The plug-ins that display initially are associated with the rule set that has been
loaded in the General tab. All the listed plug-ins must be present for eCognition Server
to process the rule set. You can also edit the plug-ins using the buttons at the top of the
window.
To add a plug-in, first load a rule set on the General tab to display the associated plug-ins.
Load a plug-in by clicking the Add Plug-in button or using the context menu to open the
Add a Plug-In dialog box. Use the Name drop-down box to select a plug-in and version,
if needed. Click OK to display the plug-in in the list.
Drivers The listed drivers must be installed for the eCognition Server to process
the rule set. You might need to add a driver if it is required by the rule set but missing
from the list; otherwise the wrong configuration may be picked because of the missing
information.
To add a driver, first load a rule set on the General tab to display the associated drivers.
Load a driver by clicking the Add Driver button or using the context menu to open the Add
a Driver dialog box. Use the drop-down Name list box to select a driver and optionally a
version, if needed. Click OK to display the driver in the list.
You can also edit the version number in the list. For automatic selection of the correct
version of the selected driver, delete the version number.
Changing the Configuration To delete an item from the list, select the item and click the
Delete Item button, or use the context menu. You cannot delete an extension.
If you have altered the initial configuration, return to the initial state by using the context
menu or clicking the Reset Configuration Info button.
In the initial state, the plug-ins displayed are those associated with the rule set that has
been loaded. Click the Load Client Config Info button or use the context menu to load the
plug-in configuration of the client. For example, if you are using a rule set developed with
an earlier version of the client, you can use this button to display all plug-ins associated
with the client you are currently using.
Tiling and stitching is an eCognition method for handling large images. When images
are so large that they begin to degrade performance, we recommend that they are cut
into smaller pieces, which are then treated individually. Afterwards, the tile results are
stitched together. The absolute size limit for an image in eCognition Developer is 2³¹
pixels (46,340 x 46,340 pixels).
Creating tiles splits a scene into multiple tiles of the same size and each is represented as
a new map in a new project of the workspace. Projects are analyzed separately and the
results stitched together (although we recommend a post-processing step).
Creating Tiles
Creating tiles is only suitable for 2D images. The tiles you create do not include results
such as image objects, classes or variables.
To create a tile, you need to be in the Workspace window, which is displayed by default
in views 1 and 3 on the main toolbar, or can be launched using View > Windows >
Workspace. You can select a single project to tile its scenes or select a folder with projects
within it.
To open the Create Tiles dialog box, choose Analysis > Create Tiles or select it by right-
clicking in the Workspace window. The Create Tiles box allows you to enter the horizon-
tal and vertical size of the tiles, based on the display unit of the project. For each scene
to be tiled, a new tiles folder will be created, containing the created tile projects named
tilenumber.
You can analyze tile projects in the same way as regular projects by selecting single or
multiple tiles or folders that contain tiles.
Only the main map of a tile project can be stitched together. In the Workspace window,
select a project with a scene from which you created tiles. These tiles must have already
been analyzed and be in the ‘processed’ state. To open the Stitch Tile Results dialog box,
select Analysis > Stitch Projects from the main menu or right-click in the Workspace
window.
The Job Scheduler field lets you specify the computer that is performing the analysis. It is
set to http://localhost:8184 by default, which is the local machine. However, if you
are running eCognition Developer 9.0 over a network, you may need to change this field to
the address of another computer.
Click Load to load a ruleware file for image analysis—this can be a process (.dcp) or
solution (.dax) file. The Edit feature allows you to configure the exported results and the
export paths of the image analysis job in an export template. Clicking Save allows you to
store the export template with the process file.
Select the type of scene to analyze in the Analyze drop-down list.
• All Scenes applies the rule set to all selected scenes in the Workspace window
• Top Scenes refers to the original scenes, which have been used to create scene
copies, subsets or tiles
• If you have created tiles, you can select Tiles Only to filter out everything else.
Select the Use Time-Out check-box to set automatic cancellation of image analysis after
a period of time that you can define. This may be helpful for batch processing in cases
of unexpected image aberrations. When testing rule sets, you can cancel endless loops
automatically and the state of the projects will be marked as ‘canceled’.
In rare cases it may be necessary to edit the configuration. For more details see the
eCognition Developer reference book.
Data export triggered by rule sets is executed automatically. Which items are exported
is determined by export algorithms available in the Process Tree window. For a detailed
description of these export algorithms, consult the Reference Book. You can modify
where and how the data is exported. 4
• Data export initiated by the various Export menu commands applies only to the cur-
rently active map of a project. The Export Current View dialog box is used to
export the current map view to a file. To copy the current map view to the clipboard,
choose Export > Copy Current View to Clipboard from the main menu
• Class, object or scene statistics can be viewed and exported. They are calculated
from values of image object features.
• Image objects can be exported as a thematic raster layer 5 together with an attribute
table providing detailed parameter values. The classification of a current image ob-
ject level can be exported as an image file with an attribute table providing detailed
parameter values.
• Polygons, lines or points of selected classes can be exported to the shapefile format
(p 230).
• The Generate Report dialog box creates an HTML page listing image
objects, each specified by image object features, and optionally a thumbnail image.
Selecting raster file from the Export Type drop-down box allows you to export image
objects or classifications as raster layers together with attribute tables in csv format con-
taining parameter values.
Image objects or classifications can be exported together with their attributes. Each image
object has a unique object or class ID and the information is stored in an attached attribute
table linked to the image layer. Any geo-referencing information used to create a project
will be exported as well.
4. Most export functions automatically generate .csv files containing attribute information. To obtain correct ex-
port results, make sure the decimal separator for .csv file export matches the regional settings of your operating
system. In eCognition Developer 9.0, these settings can be changed under Tools > Options. If geo-referencing
information of supported co-ordinate systems has been provided when creating a map, it is exported along with
the classification results and additional information if you choose Export Image Objects or Export Classification.
5. The thematic raster layer is saved as a 32-bit image file. Not all image viewers can open these files. To view
the file in eCognition Developer 9.0, add the 32-bit image file to a current map or create a new project and
import the file.
There are two possible locations for saving exported files:
• If a new project has been created but not yet saved, the exported files are saved to
the folder where the image data are stored.
• If the project has been saved (recommended), the exported files are saved in the
folder where the project has been saved.
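Because the attribute table accompanying an exported raster layer is a plain .csv file, it can be consumed by other tools. The following Python sketch is only a hypothetical illustration of that downstream use: the file name, the ID column name and the delimiter are assumptions, not fixed eCognition conventions.

# Hypothetical sketch: read an exported attribute table and look up the
# feature values stored for a single object ID.
import csv

def load_attribute_table(csv_path, id_column="ID", delimiter=";"):
    table = {}
    with open(csv_path, newline="") as handle:
        for row in csv.DictReader(handle, delimiter=delimiter):
            table[int(float(row[id_column]))] = row
    return table

attributes = load_attribute_table("exported_objects.csv")
print(attributes.get(1))   # feature values of the image object with ID 1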
To export image objects or classifications, open the Export Results dialog box by choos-
ing Export > Export Results from the main menu.
Figure 10.11. Exporting image objects with the Export Results dialog box
To export statistics, 6 open the Export Results dialog box by choosing Export > Export
Results from the main menu.
Generating Reports
Generate Report creates an HTML page containing information about image object fea-
tures and optionally a thumbnail image. To open the Generate Report dialog box, choose
Export > Generate Report from the main menu.
1. Select the Image object level for which you want to create the report from the
drop-down box
2. The Table header group box allows you to choose from the following options:
• User Info: Include information about the user of the project
• Project Info: Include co-ordinate information, resolution, and units of the
map
3. From the Table body group box, choose whether or not to include thumbnails of
the image objects in jpeg format
4. Click the Select Classes button to open the Select Classes for Report dialog box,
where you can add or remove classes to be included in the report
5. Click the Select features button to open the Select Features for Report dialog box
where you can add or remove features to be included in the report
6. Change the default file name in the Export File Name text field if desired
7. Clear the Update Obj. Table check-box if you don’t want to update your object
table when saving the report
8. To save the report to disk, press Save Report.
6. The rounding of floating point numbers depends on the operating system and runtime libraries. Therefore the
results of statistical calculations between Linux and Windows may be slightly different.
Polygons, lines, or points of selected classes can be exported as shapefiles. As with the
Export Raster File option, image objects can be exported together with their attributes
and classifications. Any geo-referencing information provided when creating a map
is exported as well. The main difference from exporting image objects is that the export is
not confined to polygons based on the image objects. Polygons in 2D and 3D scenes are
supported.
You can choose between three basic shape formats: points, lines and polygons. To ex-
port results as shapes, open the Export Results dialog box by choosing Export > Export
Results on the main menu.
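Exported shapefiles can be inspected in GIS software or in scripts. As a hedged example that uses the third-party geopandas library, not anything built into eCognition, the sketch below reads such a file; the file name and the presence of an exported Class name attribute are assumptions.

# Hypothetical sketch using the third-party geopandas library to inspect an
# exported shapefile; column names depend on the features you chose to export.
import geopandas as gpd

shapes = gpd.read_file("exported_objects.shp")
print(shapes.crs)            # geo-referencing carried over from the map
print(list(shapes.columns))  # one column per exported feature
if "Class name" in shapes.columns:   # only if that feature was exported
    print(shapes["Class name"].value_counts())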
Exporting the current view is an easy way to save the map view at the current scene scale
to file, which can be opened and analyzed in other applications. This export type does
not include additional information such as geo-referencing, features or class assignments.
To reduce the image size, you can rescale it before exporting.
1. To export a current active map, choose Export > Current View from the main menu
bar. The Select Scale dialog box opens.
2. To export the map with the displayed scale, click OK. If you want to keep the
original scale of the map, select the Keep Current Scene Scale check-box
3. You can select a different scale compared to the current scene scale, which allows
you to export the current map at a different magnification or resolution
4. If you enter an invalid scale factor, it will be changed to the closest valid scale as
displayed in the table
5. To change the current scale mode, select from the drop-down box. Confirm with
OK and the Export Image Layer dialog box opens
6. Enter a file name and select the file format from the drop-down box. Note that not
all formats are available for export
7. Click Save to confirm. The current view settings are used; however, the zoom
settings are ignored.
7. The class names and class colors are not exported automatically. Therefore, if you want to export shapes for
more than one class and you want to distinguish the exported features by class, you should also export the
feature Class name. You can use the Class Color feature to export the RGB values for the colors you have
assigned to your classes.
• Exporting the current view to clipboard is an easy way to create screenshots that
can then be inserted into other applications:
– Choose Export > Copy Current View to Clipboard from the main menu
– Right-click the map view and choose Copy Current View to Clipboard on the
context menu.
• Many windows contain lists or tables, which can be saved to file or to the clipboard.
Others contain diagrams or images which you can copy to the clipboard. Right-
click to display the context menu and choose one of the following:
– Save to File allows you to save the table contents as a .csv or transposed .csv
(.tcsv) file. The data can then be further analyzed in applications such as Microsoft
Excel. In the Options dialog box under the Output Format group, you can define the
decimal separator and the column delimiter according to your needs.
– Copy to Clipboard saves the current view of the window to the clipboard. It can
then be inserted as a picture into other programs, for example Microsoft Office or
an image processing program.
As the parameters of an action can be set by users of action libraries using products such
as eCognition Architect 9.0, you must place adjustable variables in a parameter set.
You should use unique names for variables and must use unique names for parameter sets.
We recommend developing adjustable variables of a more general nature (such as ‘low
contrast’), which influence multiple features, instead of having one control per
feature.
Additionally, in rule sets to be used for actions, avoid identically named parent processes.
This is especially important for proper execution if an eCognition action refers to inactive
parts of a rule set.
When creating a Quick Test button in an action, you need to implement a kind of internal
communication to synchronize actions with the underlying rule sets. This is realized by
integration of specific algorithms to the rule sets that organize the updating of parameter
sets, variables, and actions.
Figure 11.1. The communication between action and rule set is organized by algorithms
(arrows)
The first two transfer values between the action and parameter set; the remaining two
transfer values between the parameter set and the rule set.
To get all parameters from the action to the rule set before you execute a Quick Test, you
need a process sequence like this:
Figure 11.2. Sample process sequence for a Quick Test button within actions.
NOTE: General settings must be updated if a rule set relies on them. You
should restore everything to the previous state when the quick test is done.
The developed rule set (.dcp file) will probably be maintained by other developers. There-
fore, we recommend you structure the rule set clearly and document it using meaningful
names of process groups or comments. A development style guide may assure consis-
tency in the naming of processes, classes, variables and customized features, and provide
conventions for structuring rule sets.
An action can contain workspace automation subroutines and produce subsets, copies, or
tiles as an internal activity of an action. Such actions can be executed as rule sets.
If several actions containing multiple workspace automation subroutines are assembled
in one solution .dax file, each action is submitted for processing sequentially; otherwise an
action might search for tiles that do not yet exist because the preceding action is still
being processed.
Information kept in parameter sets is transferred between the different stages of the
workspace automation. Different subroutines of different actions are able to access vari-
ables of parameter sets. When creating actions you should use special Variables Opera-
tion algorithms to enable actions to automatically exchange parameter sets.
Before wrapping a rule set as an action definition, you have to create a new action library.
1. Choose Library > New Action Library from the main menu. The Create New
Action Library dialog box opens
2. Select a Name and a Location for the new action library. Click OK to create the
new .dlx file.
3. The action library is loaded to the Analysis Builder window. The Analysis Builder
window changes its name to Edit Library: Name of the Library. As the editing
mode is active, you can immediately start editing the action library.
When assembling a new action library, you wrap rule sets as action definitions and give
them a user interface. Later, you may modify an existing action library.
1. To activate the action library editing mode on your newly created or open library,
choose Library > Edit Action Library from the main menu. The Analysis Builder
window changes its title bar to ‘Edit Library: Name of the Loaded Action Library.’
Additionally, a check mark left of the menu command indicates the editing mode
2. Go to the Analysis Builder window and right-click any item or the background for
available editing options. Depending on the right-clicked item you can add, edit,
or delete one of the following:
• General settings definition
• Action groups grouping actions
• Action definitions including various Export actions
• Widgets (user interface components) for the properties of action
3. Save the edited action library using Library > Save Action Library on the main
menu, then close it using Library > Close Action Library
4. To deactivate the editing mode, go to Library > Edit Action Library. The window
title bar reverts to Analysis Builder.
Selecting Library > Action Library Properties brings up the Edit Action Library dialog
box (figure 11.3). The dialog has fields which allow you to edit the name and version of
your action library.
To create a globally unique identifier (GUID), press the Generate button. Generating a
new GUID when an action library is amended is a useful way for a developer to notify
an action library user of changes, as the software will tell the user that the identifier is
different.
Every action is part of a certain action group. If the appropriate action group does not yet
exist, you have to create it.
1. To create an action group, go to the upper pane of the Analysis Builder window (now
called Edit Library: Name of the Loaded Action Library), right-click any item
or the background, and choose Add Group. The new action group is added at the
bottom of the existing action group list
2. To modify an action group, double-click it, or right-click it and select Edit Group.
The Edit Group dialog box opens
3. Edit the name, ID and label color of the action group. After adding any action
definition, the ID cannot be modified
4. Before changing the ID or deleting an action group, you have to delete all contained
action definitions
5. To move an action group, right-click it and select Move Group Up or Move Group
Down
6. To delete an action group, right-click it and select Delete Group.
Action definitions are unconfigured actions, which enable users of action libraries to spec-
ify actions that act as building blocks of a specific solution. You can define an action def-
inition by transforming a rule set related to a specified part of the solution. Alternatively,
you can import an action definition from an .xml file to an action library.
To edit action definitions, you’ll need to have loaded a rule set file (.dcp file) into the
Process Tree window, which contains a rule set related to a specified part of the solution.
The rule set must include a parameter set providing variables to be adjusted by the user
of the action library.
1. To create an action definition, go to the Analysis Builder window, select and right-
click any action group or the background and choose Add Action Definition or one
of the standard export action definitions: 1
• Add Export Domain Statistics
• Add Export Object Data
• Add Export Project Statistics
• Add Export Result Image
The new action definition item is added at the bottom of the selected action
group.
2. If you have sequenced two actions or more in an action group, you may rearrange
them using the arrow buttons on the right of each action item. To edit an item, right-
click it and choose Edit Action Definition (or double-click the item). The Action
Definition dialog box opens.
1. Standard export actions are predefined. Therefore the underlying processes cannot be edited and some of the
following options are unavailable.
3. The first two fields let you add a name and description, and the Icon field gives an
option to display an icon on the action user interface element
4. Action ID allows a rule set to keep track of the structure of the analysis and returns
the number of actions in the current analysis with a given ID
5. The Group ID reflects the group the action currently belongs to. To move it, select
another group from the drop-down list box
6. Priority lets you control the sorting of action lists – the higher the priority, the
higher the action will be displayed in the list
7. Load the rule set in the Rule Set File field as a .dcp file. Select the appropriate
parameter set holding the related variables. The Parameter Set combo box offers
all parameter sets listed in the Manage Parameter Sets dialog box.
8. In Process to Execute, enter the name and the path of the process to be executed
by the action. Process to Execute on Project Closing is usually used to implement
a specific clean-up operation when users close projects, for example after sample
input. Denote the path by using a forward slash to indicate the hierarchy in the
process tree. The Callbacks field allows a rule set to react to certain events by
executing a process
9. Clear the Use Only Once check box to allow the action to be used multiple times in a solution.
10. Providing default actions for building solutions requires consideration of dependen-
cies on actions. Click the Dependencies button to open the Edit Action Dependen-
cies dialog box.
11. Confirm with OK.
Figure 11.5. Action Definition dialog box is used for editing unconfigured actions
Editing Action Dependencies Providing action definitions to other users requires con-
sideration of dependencies, because actions are often mutually dependent. Dependency
items are image layers, thematic layers, image object levels, and classes. To enable the
usage of default actions for building solutions, the dependencies on actions concerning
dependency items have to be defined. Dependencies can be defined as follows:
1. To edit the dependencies, go to the Edit Action Definition dialog box and click the
Dependencies button. The Edit Action Dependencies dialog box opens.
2. The Dependency Item tab gives an overview of which items are required, forbidden,
added, or removed. To edit the dependencies, click the ellipsis button located inside
the value column, which opens one of the following dialog boxes:
• Edit Classification Filter, which allows you to configure classes
• Select Levels, to configure image object levels
• Select Image Layers
• Select Thematic Layers.
3. In the Item Error Messages tab you can edit the messages that are displayed in the
properties panel to users of action libraries when a dependency on actions
causes problems. If you do nothing, a default error message is created.
Loading Rule Sets for Use in Action Libraries If your action library requires a rule set
to be loaded, it is necessary to edit the .dix file, which is created automatically when a
new action library is constructed. Insert a link to the rule set file with the following
structure, using a text editor such as Notepad. (The <Preload> opening and closing tags
will already be present in the file.)
<Preload>
<Ruleset name="ruleset.dcp"/>
</Preload>
A configured solution can be automatically updated after you have changed one or
more actions in the corresponding action library.
This option enables rule set developers to make changes to actions in an action library
and then update a solution without reassembling actions as a solution. The menu item is
only active when a solution is loaded in the Analysis Builder window.
1. To update the open solution, choose Library > Update Solution from the main
menu. All loaded processes are deleted and reloaded from the open action library.
All the solution settings displayed in the Analysis Builder are preserved.
2. You can now save the solution again and thereby update it to changes in the rule
set files.
Before you can analyze your data, you must build an analysis solution in the Analysis
Builder window.
To construct your analysis solution, you can choose from a set of predefined actions
for object detection, classification and export. By testing them on an open project, you
can configure actions to meet your needs. With the Analysis Builder, you assemble and
configure these actions all together to form a solution, which you can then run or save to
file.
Image analysis solutions are built in the Analysis Builder Window. To open it, go to
either View > Windows > Analysis Builder or Analysis > Analysis Builder from the main
menu. You can use View > Analysis Builder View to select preset layouts.
When the Analysis Builder window opens, ensure that the name of the desired action
library is displayed in the title bar of the Analysis Builder window.
The Analysis Builder window consists of two panes. In the upper pane, you assemble
actions to build solutions; in the lower properties pane you can configure them by cus-
tomizing specific settings. Depending on the selected action, the lower properties pane
shows which associated settings to define. The Description area displays information to
assist you with the configuration.
To open an existing action library, go to Library > Open Action Library in the main menu.
The name of the loaded action library is displayed in the title bar of the Analysis Builder
window. The action groups of the library are loaded in the upper pane of the Analysis
Builder window.
If you open an action library after opening a project, all rule set data will be deleted.
A warning message will display. To restore the rule set data, close the project without
saving changes and then reopen it. If you are using a solution built with an older action
library, browse to that folder and open the library before opening your solution.
You can close the current action library and open another to get access to another col-
lection of analysis actions. To close the currently open action library, choose Library >
Close Action Library from the main menu. The action groups in the upper pane of the
Analysis Builder window disappear. When closing an action library with an assembled
solution, the solution is removed from the upper pane of the Analysis Builder window. If
it is not saved, it must be reassembled.
In the Analysis Builder window, you assemble a solution from actions and configure them
in order to analyze your data. If not already visible, open the Analysis Builder window.
1. To add an action, click the button with a plus sign on the sub-section header or, in
an empty section, click Add New. The Add Action dialog box opens
2. Select an action from the Add Action dialog box and click OK. The new action
is added to the solution. According to the type of the action, it is sorted in the
corresponding group. The order of the actions is defined by the system and cannot
be changed
3. To remove an action from your solution, click the button with a minus sign on the
right of the action bar
4. Icons inform you about the state of each action:
• A red error triangle indicates that you must specify this action before it can
be processed, or that another action must be processed first.
• A green tick mark indicates that the action has been processed successfully.
Selecting an Action To select an action for the analysis solution of your data, click on a
plus sign in an Action Definition button in the Analysis Builder window. The Add Action
dialog box opens.
The filter is set for the action subset you selected. You can select a different filter or
display all available actions. The Found area displays only those actions that satisfy the
filter setting criteria. Depending on the action library, each action is classified with a
token for its subsection, e.g. <A> for segmentation and classifications or <B> for export
actions.
To search for a specific action, enter the name or a part of the name in the Find Text
box. The Found area displays only those actions that contain the characters you entered.
Select the desired action and confirm with OK. The new action is displayed as a bar in
the Analysis Builder window. You must now set the properties of the action:
Settings For each solution you must define specific settings. These settings associate
your image data with the appropriate actions.
• You can save analysis settings in the Analysis Builder as solution files (extension
.dax) and load them again, for example to analyze slides.
• To save the analysis settings, click the Save Solution to a File button on the Archi-
tect toolbar or Library > Save Solution on the main menu
• Alternatively, you can encrypt the solution by selecting Save Solution Read-Only
on the Architect toolbar or Library > Save Solution Read-Only from the main
menu.
To load an existing solution with all its analysis settings from a solution file
(extension .dax) into the Analysis Builder window, go to Library > Load Solution on the
main menu. To use a solution that was built with another action library, open the action
library before opening your solution. The solution is displayed in the Analysis Builder
window.
If you want to change a solution built with an older action library, make sure that the
corresponding action library is open before loading the solution. 3
2. Some actions can be selected only once. If such an action is already part of the analysis, it does not appear in
the Add Action dialog box.
Testing and improvement cycles might take some time. Here are some tips to help
you improve the results:
• Use the Preview that some actions provide to instantly display the results of a
certain setting in the map view.
• To execute all assembled actions, click the Run Solution button on the Architect
toolbar. Alternatively, choose Analysis > Run Solution from the main menu
• To execute all actions up to a certain step, select an action and click the Run So-
lution Until Selected Action button on the Architect toolbar. Alternatively, choose
Analysis > Run Solution Until Selected Action from the main menu. All actions
above and including the selected action will be executed.
• To improve the test processing time, you can test the actions on a subset of your
project data.
• For faster testing use the Run Selected Action button on the Architect toolbar. Al-
ternatively, you can remove already tested actions; delete them from the Analysis
Builder window and add them later again. You can also save the actions and the
settings as solution to a .dax file. When removing single actions you must make
sure that the analysis job remains complete.
• To execute a configured solution not locally but on the eCognition Server, select a
project in the workspace window. Click the Run Solution on eCognition Server but-
ton on the Architect toolbar. This option is needed if the solution contains actions
with workspace automation algorithms.
Importing Action Definitions To get access to new, customized or special actions, you
have to load action definitions, which are simply unconfigured actions. If not yet available
in the Add Actions dialog box, you can load additional action definitions from a file to an
action library. This can be used to update action libraries with externally defined action
definitions.
Action definitions can be created with eCognition Developer 9.0. Alternatively, eCog-
nition offers consulting services to improve your analysis solutions; you can order special
task actions for your individual image analysis needs.
To use an additional action definition you have to import it. Besides the .xml file describing
the action definition, you need a rule set file (.dcp file) providing a rule set that is related
to a specific part of the solution. The rule set has to include a parameter set providing
variables to be adjusted by the user of the action library.
3. When you open a solution file (extension .dax), the actions are compared with those of the current action library.
If the current action library contains an action with the same name as the solution file, the action in the current
Action Library is loaded to the Analysis Builder window. This does not apply when using a solution file for
automated image analysis.
1. Copy the action definition files to the system folder of your installation.
2. Choose Library > Import Action on the main menu. Select an .xml file to load.
3. Now the new unconfigured action can be selected in the Add Actions dialog box.
Hiding Layers and Maps eCognition Developer 9.0 users have the option of changing the
visibility settings for hidden layers and hidden maps (see Tools > Options (p 252)). This
is a global setting and applies to all portals (the setting is stored in the UserSettings.cfg
file). The default value in the Options dialog is No and all hidden layers are hidden.
Saving Configured Actions with a Project To facilitate the development of ruleware, you
can save your configured actions with single projects and come back to them later. A saved
project includes all actions and their configurations as they were displayed in the Analysis
Builder window at the moment the project was saved.
Configured actions can only be restored properly if the corresponding action library was
open when saving, because the action library provides the corresponding action definitions.
You can create a calibration to store the General Settings properties as a Parameter Set
file. This lets you save and provide common settings for common image readers,
for example as part of an application. A calibration set stores the following General
Settings properties:
• Bit depth
• Pixel resolution in mm/pixel
• Zero-based IDs of the Image Layers within the scene used to store the scheme of
used image layers for image analysis. Example of scene IDs: If a scene consists of
three image layers, the first image layer has ID 0, the second image layer has ID 1,
and the third image layer has ID 2.
To create a calibration, set the General Settings properties in the lower properties pane of
the Analysis Builder window.
For saving, choose Library > Save Calibration from the main menu. By de-
fault, calibration parameter set files with the extension .psf are stored in C:\Program
Files\Trimble\eCognition Developer 9.0\bin\applications.
Widgets are user interface elements, such as drop-down lists, radio buttons and check-
boxes, that the users of action libraries can use to adjust settings.
To create a widget, first select an action definition in the upper pane of the Analysis
Builder window. You must structure the related parameters in at least one property group
in the lower pane of the Analysis Builder window: right-click the background of the
lower pane and select Add Group. Then select a group or an existing widget and right-click
it to add a widget – the following widgets are available:
• Checkbox
• Drop-Down List
• Button
• Radio Button Row
• Toolbar
• Editbox
• Editbox with Slider
• Select Class
• Select Feature
• Select Multiple Features
• Select File
• Select Level
• Select Image Layer
• Select Thematic Layer
• Select Array Items
• Select Folder
• Slider
• Edit Layer Names
• Layer Drop-Down List
• Manual Classification Buttons
Choose one of the Add (widget) commands on the context menu. The Widget Configura-
tion dialog box opens.
You can export an action definition to a file. This can be used to extend the action libraries of eCognition Architect 9.0 users with new action definitions.
1. To export an action definition, select an action in the Analysis Builder window and
choose Library > Export Action on the main menu.
2. Select the path and click OK.
Accuracy assessment methods can produce statistical outputs to check the quality of the
classification results. Tables from statistical assessments can be saved as .txt files, while
graphical results can be exported in raster format.
1. Choose Tools > Accuracy Assessment on the menu bar to open the Accuracy Assessment dialog box.
2. A project can contain different classifications on different image object levels.
Specify the image object level of interest by using the Image object level drop-
down menu. In the Classes window, all classes and their inheritance structures are
displayed.
3. To select classes for assessment, click the Select Classes button and make a new
selection in the Select Classes for Statistic dialog box. By default all available
classes are selected. You can deselect classes through a double-click in the right
frame.
4. In the Statistic type drop-down list, select one of the following methods for accu-
racy assessment:
• Classification Stability
• Best Classification Result
• Error Matrix based on TTA Mask
• Error Matrix based on Samples
5. To view the accuracy assessment results, click Show statistics. To export the statis-
tical output, click Save statistics. You can enter a file name of your choice in the
Save filename text field. The table is saved in comma-separated ASCII .txt format;
the extension .txt is attached automatically.
The Classification Stability dialog box displays a statistic type used for accuracy assess-
ment.
The difference between the best and the second best class assignment is calculated as a
percentage. The statistical output displays basic statistical operations (number of image
objects, mean, standard deviation, minimum value and maximum value) performed on
the best-to-second values per class.
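To make this calculation concrete, the Python sketch below computes the best-to-second-best difference per image object and summarizes it per class. The membership values and the data structure are hypothetical illustrations, not part of the eCognition API.

    from statistics import mean, stdev

    # Hypothetical per-object class memberships, sorted from best to second best
    # (data structure and values are assumptions for illustration only).
    objects = [
        ("Forest", [0.92, 0.40, 0.10]),
        ("Forest", [0.85, 0.80, 0.30]),
        ("Water",  [0.99, 0.05, 0.01]),
        ("Water",  [0.70, 0.20, 0.10]),
    ]

    per_class = {}
    for class_name, memberships in objects:
        diff = memberships[0] - memberships[1]  # best minus second-best assignment
        per_class.setdefault(class_name, []).append(diff)

    for class_name, diffs in sorted(per_class.items()):
        # Multiply by 100 to express the differences as percentages.
        print(f"{class_name}: objects={len(diffs)}, mean={mean(diffs):.2f}, "
              f"std={stdev(diffs):.2f}, min={min(diffs):.2f}, max={max(diffs):.2f}")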
The Best Classification Result dialog box displays a statistic type used for accuracy as-
sessment.
The statistical output for the best classification result is evaluated per class. Basic statistical operations are performed on the best classification result of the image objects assigned to a class (number of image objects, mean, standard deviation, minimum value and maximum value). To display the comparable graphical output, go to the View Settings window and select Mode.
The Error Matrix Based on TTA Mask dialog box displays a statistic type used for accu-
racy assessment.
Test areas are used as a reference to check classification quality by comparing the clas-
sification with reference values (called ground truth in geographic and satellite imaging)
based on pixels.
The Error Matrix Based on Samples dialog box displays a statistic type used for accuracy
assessment.
This is similar to Error Matrix Based on TTA Mask, but considers samples (not pixels) derived from manual sample inputs. The match between the sample objects and the classification is expressed in terms of proportions of the class samples.
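The following Python sketch illustrates the underlying idea of an error matrix: it compares reference labels (for example, derived from a TTA mask or from sample objects) with the classification result and derives the overall accuracy. The labels are invented for illustration and are not output produced by the software.

    from collections import Counter

    # Hypothetical reference (ground truth) and classified labels, one entry per
    # pixel or sample object; the values are made up for illustration.
    reference  = ["Water", "Water", "Forest", "Forest", "Urban", "Forest"]
    classified = ["Water", "Forest", "Forest", "Forest", "Urban", "Urban"]

    classes = sorted(set(reference) | set(classified))
    matrix = Counter(zip(reference, classified))  # (reference, classified) -> count

    # Error matrix: rows are reference classes, columns are classified classes.
    print("Reference".ljust(10) + "".join(c.ljust(8) for c in classes))
    for ref in classes:
        cells = "".join(str(matrix[(ref, cls)]).ljust(8) for cls in classes)
        print(ref.ljust(10) + cells)

    correct = sum(matrix[(c, c)] for c in classes)
    print("Overall accuracy:", round(correct / len(reference), 2))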
Figure 12.4. Output of the Error Matrix based on TTA Mask statistics
The following options are available via Tools > Options in the main menu.
General

Output of messages in the client log file
  No: Default. Messages are not saved in the client log file.
  Yes: Messages are saved in the client log file.

Show warnings as message box
  Yes: Default. Messages are displayed in a message box and additionally listed in the Message Console.
  No: Messages are displayed in the Message Console, where a sequence of messages can be retraced.

Ask on closing project for saving project or rule set
  Yes: Default. Open a message box to prompt saving before closing.
  No: Close without asking for saving.

Save rule set minimized
  No: Default. Does not save features with the rule set.
  Yes: Save the features used in the Image Object Information windows with the rule set.

Automatically reload last project
  No: Start with a blank map view when opening eCognition Developer 9.0.
  Yes: Useful if working with the same project over several sessions.

Use standard Windows file selection dialog
  No: Display the default, multi-pane selection dialog.
  Yes: Use the classic Windows selection dialog.

Predefined import connectors folder
  If necessary, enter a different folder in which to store the predefined import connectors.

Portal selection timeout
  If necessary, enter a different time for automatic start-up of the last portal selected.

Store temporary layers with project
  In eCognition Developer, this setting is available in the Set Rule Set Options algorithm.
Display

Image position
  Choose whether an opened image is displayed at the top-left or the center of a window.

Annotation always available
  No: The Annotation feature is not available for all image objects.
  Yes: The Annotation feature is available for all image objects.

Default image equalization
  Select the equalization method for the scene display in the map view:
  · Linear
  · None
  · Standard deviation
  · Gamma correction
  · Histogram
  · Manual
  The default value (automatic) applies no equalization for 8-bit RGB images and linear equalization for all other images.

Display default features in image object information
  If set to Yes, a small number of default features will display in Image Object Information.

Display scale with
  Select the type of scaling mode used for displaying scale values and calculating rescaling operations.
  Auto: Automatic setting dependent on the image data.
  Unit (m/pxl): Resolution expressed in meters per pixel, for example, 40 m/pxl.
  Magnification: Magnification factor used similarly as in microscopy, for example, 40x.
  Percent: Relation of the scale to the source scene scale, for example, 40%.
  Pixels: Relation of pixels to the original scene pixels, for example, 1:20 pxl/pxl.

Display scale bar
  Choose whether to display the scale bar in the map view by default.

Scale bar default position
  Choose where to display the scale bar by default.

Import magnification if undefined
  Select a magnification used for new scenes only in cases where the image data has no default magnification defined. Default: 20x.

Display selected object’s distance in status bar
  Shows or hides distance values from the active mouse cursor position to the selected image object in the image view. Because this calculation can reduce performance in certain situations, it has been made optional.

Instant render update on slider
  Choose whether to update the rendering of the Transparency Slider instantly.
  No: The view is updated only after the slider control is released or after it has been inactive for one second.
  Yes: The view is updated instantly as the slider control is moved.

Use right mouse button for adjusting window leveling
  Select Yes to activate window leveling (p 19) by dragging the mouse with the right-hand button held down.

Show hidden layer names
  Select Yes or No to display hidden layers. This setting also applies to any action libraries that are opened using other portals, such as eCognition Architect 9.0.

Show hidden maps
  Select Yes or No to display hidden maps. This setting also applies to any action libraries that are opened using other portals, such as eCognition Architect 9.0.

Display disconnected image objects with horizontal lines
  Select Yes to display disconnected image objects with horizontal lines.

Display 3D image objects with diagonal lines
  Select Yes to display 3D image objects with diagonal lines.
Manual Editing

Mouse wheel operation (2D images only)
  Choose between zooming (where the wheel will zoom in and out of the image) and panning (where the image will move up and down).

Snapping tolerance (pxl)
  Set the snapping tolerance for manual object selection and editing. The default value is 2.

Include objects on selection polygon outline
  Yes: Include all objects that touch the selection polygon outline.
  No: Only include objects that are completely within the selection polygon.

Allow manual object cut outside image objects
  Defines whether objects can be cut outside image objects.

Image view needs to be activated before mouse input
  Defines how mouse clicks are handled in an inactive image view. If the value is set to Yes, clicking in a previously inactive image view activates it first. If the value is No, the image view is activated and the currently active input operation is applied immediately (for example, an image object is selected). This option is especially important while working with multiple image view panes, because only one image view pane is active at a time.

Order based fusion
  Defines whether fusion is order based. The default value is No.
Output Format
CSV
Reports
eCognition Developer

Default feature unit
  Change the default feature unit for newly created features that have a unit.
  Pixels: Use pixels as the default feature unit.
  Same as project unit: Use the unit of the project. It can be checked in the Modify Project dialog box.

Default new level name
  Change the default name for newly created image object levels. Changes are only applied after a restart.

Load extension algorithms
  No: Deactivate algorithms created with the eCognition Developer 9.0 SDK (Software Development Kit).
  Yes: Activate algorithms created with the eCognition Developer 9.0 SDK.

Keep rule set on closing project
  Yes: Keep the current rule set when closing a project. Helpful for developing on multiple projects.
  No: Remove the current rule set from the Process Tree window when closing a project.
  Ask: Open a message box when closing.
Process Editing

Always do profiling
  Yes: Always measure the execution time of processes to monitor process performance.
  No: Do not measure process execution times.

Action for double-click on a process
  Edit: Open the Edit Process dialog box.
  Execute: Execute the process immediately.

Switch to classification view after process execution
  Yes: Show the classification result in the map view window after executing a process.
  No: The current map view does not change.

Switch off comments in process tree
  No: Comments in the process tree are active.
  Yes: No comments in the process tree.

Ask before deleting current level
  Yes: Use the Delete Level dialog box for the deletion of image object levels.
  No: Delete image object levels without reconfirmation. (Recommended for advanced users only.)
Undo

Enable undo for process editing operations
  Yes: Enable the undo function to go backward or forward in the operation history.
  No: Disable undo to minimize memory usage.

Min. number of operation items available for undo (priority)
  Minimum number of operation items available for undo. Additional items can be deleted if the maximum memory defined in Max. amount of memory allowed for operation items (MB) below is exceeded. Default is 5.

Max. amount of memory allowed for operation items (MB)
  Assign the maximum amount of memory allowed for undo items. However, at least the minimum number of operation items defined in Min. number of operation items available for undo (priority) above will be available. Default is 25.
Sample Brush

Replace existing samples
  Yes: Replace samples that have already been selected when the sample brush is reapplied.
  No: Do not replace samples when the sample brush is reapplied.

Exclude objects that are already classified as sample class
  No: Applying the sample brush to classified objects will reclassify them according to the current sample brush.
  Yes: Applying the sample brush to classified objects will not reclassify them.
Unit Handling
Engine

Raster data access
  Direct: Access image data directly where they are located.
  Internal copy: Create an internal copy of the image and access the data from there.
Project Settings

These values display the status after the last execution of the project. They must be saved with the rule set in order to display after loading it. These settings can be changed by using the Set Rule Set Options algorithm.

Polygon compatibility mode
  Several improvements were made to polygon creation after version 7.0.6. These improvements may cause differences in the generation of polygons when older files are opened. Polygon compatibility mode ensures backwards compatibility. By default, compatibility mode is set to “none”; for rule sets created with version 7.0.6 and older, the value is “v7.0”. This option is saved together with the rule set.

Point cloud distance filter (meters)
  Change the value for the point cloud distance filter. The default value is 20 meters.

Polygons base polygon threshold
  Display the degree of abstraction for the base polygons. Default is 1.25.

Polygons shape polygon threshold
  Display the degree of abstraction for the shape polygons. Default is 1.00.

Polygons remove slivers
  Display the setting for the removal of slivers. Default is No.
Portions of this product are based in part on third-party software components. Trimble is required to include the following text with the software and its distributions.
The Visualization Toolkit (VTK) Copyright
• Redistributions of source code must retain the above copyright notice, this list of
conditions and the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list
of conditions and the following disclaimer in the documentation and/or other mate-
rials provided with the distribution.
• Neither name of Ken Martin, Will Schroeder, or Bill Lorensen nor the names of
any contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
ITK Copyright
• Redistributions of source code must retain the above copyright notice, this list of
conditions and the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list
of conditions and the following disclaimer in the documentation and/or other mate-
rials provided with the distribution.
• Neither the name of the Insight Software Consortium nor the names of its con-
tributors may be used to endorse or promote products derived from this software
without specific prior written permission.
python/tests/test_doctests.py
• Redistributions of source code must retain the above copyright notice, this list of
conditions and the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list
of conditions and the following disclaimer in the documentation and/or other mate-
rials provided with the distribution.
• Neither the name of Sean C. Gillies nor the names of its contributors may be used
to endorse or promote products derived from this software without specific prior
written permission.
src/Verson.rc
src/gt_wkt_srs.cpp