N2OS User Manual 22.0.0


Notice

Legal notices
Publication Date
February 2022

Copyright
Copyright © 2013-2022, Nozomi Networks. All rights reserved.
Nozomi Networks believes the information it furnishes to be
accurate and reliable. However, Nozomi Networks assumes no
responsibility for the use of this information, nor any infringement of
patents or other rights of third parties which may result from its use.
No license is granted by implication or otherwise under any patent,
copyright, or other intellectual property right of Nozomi Networks
except as specifically described by applicable user licenses. Nozomi
Networks reserves the right to change specifications at any time
without notice.

Third Party Software


Nozomi Networks uses third-party software whose usage is
governed by the applicable license agreements from each of the
software vendors. Additional details about the third-party software used
can be found at https://security.nozominetworks.com/licenses.
| Table of Contents | v

Table of Contents

Legal notices.......................................................................................... iii

Chapter 1: Preliminaries.........................................................................9
Prepare a Safe and Secure Environment...................................................................................10

Chapter 2: Installation.......................................................................... 11
Installing a physical appliance.................................................................................................... 12
Installing on a Virtual Machine (VM)...........................................................................................12
Installing the container................................................................................................................ 15
Set up phase 1 (basic configuration).......................................................................................... 18
Set up phase 2 (web interface configuration)............................................................................. 20
Additional settings....................................................................................................................... 23

Chapter 3: Users................................................................................... 31
Users Introduction....................................................................................................................... 32
Managing Users.......................................................................................................................... 33
Managing Groups........................................................................................................................ 36
Password policies........................................................................................................................38
Active Directory Users.................................................................................................................40
SAML Integration.........................................................................................................................42

Chapter 4: Basics..................................................................................45
Environment................................................................................................................................. 46
Asset............................................................................................................................................ 46
Node............................................................................................................................................ 46
Session........................................................................................................................................ 47
Link.............................................................................................................................................. 47
Variable........................................................................................................................................ 48
Vulnerability................................................................................................................................. 48
Query........................................................................................................................................... 48
Protocol........................................................................................................................................ 49
Incident & Alert............................................................................................................................50
Trace............................................................................................................................................ 50
Charts.......................................................................................................................................... 51
Tables.......................................................................................................................................... 52
Navigation through objects..........................................................................................................53

Chapter 5: User Interface Reference...................................................55


Supported Web Browsers........................................................................................................... 56
Navigation bar............................................................................................................................. 56
Dashboard................................................................................................................................... 58
Alerts............................................................................................................................................ 62
Asset View...................................................................................................................................63
Network View...............................................................................................................................65
Process View...............................................................................................................................80
Queries........................................................................................................................................ 85
Reports........................................................................................................................................ 88
Time Machine.............................................................................................................................. 94

Vulnerabilities...............................................................................................................................97
Settings........................................................................................................................................ 99
System....................................................................................................................................... 119
Continuous Traces.................................................................................................................... 132

Chapter 6: Security features.............................................................. 135


Security Control Panel.............................................................................................................. 136
Security Configurations............................................................................................................. 136
Manage Network Learning........................................................................................................ 142
Alerts.......................................................................................................................................... 146
Custom Checks: Assertions...................................................................................................... 148
Custom Checks: Specific Checks............................................................................................. 150
Alerts Dictionary........................................................................................................................ 152
Incidents Dictionary................................................................................................................... 159
Packet rules...............................................................................................................................161
Hybrid Threat Detection............................................................................................................ 165

Chapter 7: Vulnerability Assessment................................................167


Basics........................................................................................................................................ 168
Passive detection...................................................................................................................... 169
Configuration..............................................................................................................................170

Chapter 8: Smart Polling.................................................................... 171


Strategies................................................................................................................................... 172
Plans.......................................................................................................................................... 172
Extracted information.................................................................................................................175

Chapter 9: Threat Intelligence............................................................179


Threat Intelligence Installation and Update...............................................................................180
Understanding if you have the latest Threat Intelligence update..............................................183

Chapter 10: Asset Intelligence...........................................................185


Asset Intelligence Installation and Update................................................................................ 186
Enriched Information................................................................................................................. 186
Needed Input Data.................................................................................................................... 186

Chapter 11: Queries............................................................................ 187


Overview.................................................................................................................................... 188
Reference.................................................................................................................................. 189
Examples................................................................................................................................... 200

Chapter 12: Maintenance....................................................................205


System Overview.......................................................................................................................206
Data Backup and Restore.........................................................................................................207
Reboot and shutdown............................................................................................................... 211
Software Update and Rollback................................................................................................. 212
Data Factory Reset................................................................................................................... 214
Full factory reset with data sanitization.....................................................................................214
Host-based intrusion detection system..................................................................................... 214
Action on log disk full usage.....................................................................................................215
Support...................................................................................................................................... 215

Chapter 13: Central Management Console.......................................217


Overview.................................................................................................................................... 218
Deployment................................................................................................................................ 219
Settings...................................................................................................................................... 221
Connecting Appliances.............................................................................................................. 222
Troubleshooting......................................................................................................................... 222
Data synchronization Policy...................................................................................................... 223
Data synchronization Tuning.....................................................................................................224
CMC or Vantage connected appliance - Date and Time.......................................................... 224
Appliances List.......................................................................................................................... 225
Appliances Map......................................................................................................................... 227
High Availability (HA)................................................................................................................ 229
Alerts.......................................................................................................................................... 232
Functionalities Overview............................................................................................................233
Updating.....................................................................................................................................234
Single-Sign-On through the CMC............................................................................................. 234

Chapter 14: Remote Collector............................................................237


Overview.................................................................................................................................... 238
Deployment................................................................................................................................ 239
Using a Guardian with connected Remote Collectors.............................................................. 244
Troubleshooting......................................................................................................................... 246
Updating.....................................................................................................................................247
Disabling a Remote Collector................................................................................................... 247
Install the Remote Collector Container on the Cisco Catalyst 9300......................................... 247

Chapter 15: Configuration.................................................................. 251


Features Control Panel............................................................................................................. 252
Editing appliance configuration................................................................................................. 254
Basic configuration rules........................................................................................................... 255
Configuring the Garbage Collector............................................................................................263
Configuring alerts...................................................................................................................... 266
Configuring Incidents................................................................................................................. 282
Configuring nodes..................................................................................................................... 284
Configuring assets.....................................................................................................................288
Configuring links........................................................................................................................ 289
Configuring variables................................................................................................................. 292
Configuring protocols.................................................................................................................297
Configuring decryption...............................................................................................................304
Configuring trace....................................................................................................................... 306
Configuring continuous trace.....................................................................................................308
Configuring Time Machine........................................................................................................ 310
Configuring retention................................................................................................................. 311
Configuring Bandwidth Throttling.............................................................................................. 316
Configuring Remote Collector Bandwidth Throttling................................................................. 317
Configuring synchronization...................................................................................................... 318
Configuring slow updates.......................................................................................................... 321
Configuring session hijacking protection...................................................................................322
Configuring Passwords..............................................................................................................322

Chapter 16: Compatibility reference................................................. 325


SSH compatibility...................................................................................................................... 326
HTTPS compatibility.................................................................................................................. 327
Chapter 1: Preliminaries

Topics:
• Prepare a Safe and Secure Environment

This chapter describes the preliminary information required to
properly and securely install your Nozomi Networks Guardian or
Central Management Console (CMC).
Prepare a Safe and Secure Environment
Before beginning the installation process, confirm the prerequisites in this section in order to have a
safe and secure environment for your Guardian or CMC.
Installing a Physical Appliance
If you are installing a physical appliance, install it in a physically secure location that is accessible only
to authorized personnel. Observe the following precautions to prevent potential property damage,
personal injury, or death:
• Do not use damaged equipment, including exposed, frayed or damaged power cables.
• Do not operate the appliance with any covers removed.
• Choose a suitable location for the appliance. It should be installed in a well-ventilated area that is
clean and dust-free. Avoid areas that generate heat, electrical noise, and electromagnetic fields.
Avoid wet areas. Protect the appliance from liquid intrusion. Disconnect power to the appliance if it
gets wet.
• Use a regulated Uninterruptible Power Supply (UPS). This keeps your system operating in the event
of a power failure, and protects the appliance from power surges and voltage spikes.
• Maintain a reliable ground at all times. Ground the rack itself and the appliance chassis to it via the
provided appliance grounding cable.
• Mount the appliance in a rack or place it in an area with sufficient airflow for safe operation.
• Avoid uneven mechanical loading when the appliance is mounted in a rack.
Installing a Virtual Machine
If you are installing a Virtual Machine (VM), contact your virtual infrastructure manager to ensure that
only authorized personnel have access to the system's console.
Configuration
The appliance's management port should be assigned an IP address in a dedicated management
VLAN to control access at different levels and to restrict access to a select set of hosts and people.
Before connecting a SPAN/mirror port to the appliance, ensure that the configuration on the switch,
router, firewall, or other networking device is set to allow only outbound traffic toward the appliance.
The appliance ports are configured to accept read-only traffic and not to inject any packets. To prevent
human error (e.g. a SPAN port cable plugged into the management port), verify that no packets can be
injected from those ports.
Chapter 2: Installation

Topics:
• Installing a physical appliance
• Installing on a Virtual Machine (VM)
• Installing the container
• Set up phase 1 (basic configuration)
• Set up phase 2 (web interface configuration)
• Additional settings

This chapter includes basic configuration information for the Nozomi
Networks solution physical and virtual appliances.
Additional configuration information is provided in the Configuration chapter.
Maintenance tasks are described in the Maintenance chapter.
| Installation | 12

Installing a physical appliance


This topic describes how to install a physical appliance for use with the Nozomi Networks solution.
If you purchased a physical appliance from Nozomi Networks, the appropriate release of the Nozomi
Networks Operating System (N2OS) is already installed on it.
Follow these steps to install and configure your physical appliance:
1. Attach the appropriate null modem serial cable to the appliance's serial console:
• For N1000, N750 and P500 appliances, attach an RJ45 console plug.
• For NSG-L and NSG-M Series, attach a USB serial plug.
• For the R50 and R150, attach a DB9 serial plug.
2. Open a terminal emulator, which can be:
• Hyper Terminal or Putty on Windows
• cu or minicom on macOS and other *nix platforms
3. When connecting, set the speed to 9600 baud with no parity bit. Alternatively, connect over the
network (SSH) using the default network settings for the physical appliance:
• IP address: 192.168.1.254
• Netmask: 255.255.255.0
• Gateway: 192.168.1.1
The appliance will show a login prompt.
Proceed to Set up phase 1 (basic configuration) on page 18.

Installing on a Virtual Machine (VM)


This topic describes the prerequisites for installing Guardian on a Virtual Machine (VM).
Prerequisites:
We tested the Guardian VMs in multiple OVA-compatible environments. The current N2OS release
supports these hypervisors:
• VMware ESXi 5.5 or newer
• Hyper-V 2012 or newer
• XEN 4.4 or newer
• KVM 1.2 or newer
The minimum requirements for a Guardian VM are:
• 4 vCPU running at 2 GHz
• 4 GB of RAM
• 10 GB of minimum disk space, running on SSD or hybrid storage (100+ GB of disk recommended)
• 2 or more NICs (the maximum number depends on the hypervisor), with one being used for
management and one (or more) being used for traffic monitoring
All components should be in good working condition. The overall hypervisor load must be kept under
control, with no regular memory ballooning on the Guardian, to avoid unexpected behavior such as
dropped packets or poor overall system performance.

Virtual Machine (VM) sizing


The following tables list the minimum size requirements for Guardian and CMC instances when they
run on virtual machines. These values are based on assumed numbers of nodes and throughput, using
a simplified model. Consider these recommendations as a starting point for calculating the best size
for your VM. Other elements affect the instantaneous and average use of resources, such as the
hypervisor hardware, specific protocol distributions, loading Time Machine snapshots, running queries
on big data sets, etc. Given all of these factors, different settings may be required and should be
tested during deployment.

In the tables below, network elements are defined as the sum of nodes, links, and variables.
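As a quick sanity check against the tables below, the "network elements" figure can be computed with a small helper. This is an illustrative sketch only; the function name and sample numbers are ours, not part of N2OS:

```shell
# Hypothetical helper: "network elements" = nodes + links + variables,
# as defined in this manual's sizing tables.
network_elements() {
  echo $(( $1 + $2 + $3 ))
}

# Example: 800 nodes, 4,500 links, 12,000 variables = 17,300 elements,
# which fits a V100 instance sized for 20,000 network elements.
```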

Guardian sizing recommendations


The following table suggests sizes for instances running the Guardian.

Guardian instances

Model   Max     Max Network  Max Smart    Max Throughput  Min   Suggested  RAM (GB)¹  Min Disk
        Nodes   Elements     IoT Devices  (Mbps)          vCPU  vCPU                  (GB)
V100    100     3,000        500          50              2     4          6          20
V100    250     10,000       1,250        250             4     6          6          20
V100    250     10,000       1,250        500             6     8          6          20
V100    250     10,000       1,250        1,000           8     10         6          20
V100    1,000   20,000       5,000        1,000           8     10         8          100
V250    2,500   50,000       12,500       500             6     8          10         250
V250    2,500   50,000       12,500       1,000           8     10         10         250
V250    5,000   100,000      25,000       500             6     8          12         250
V250    5,000   100,000      25,000       1,000           8     10         12         250
V750    10,000  200,000      100,000      500             6     8          16         250
V750    10,000  200,000      100,000      1,000           8     10         16         250
V1000   40,000  400,000      200,000      1,000           8     10         24         250

¹ The amount of RAM reported by the hypervisor may not correspond to the actual amount seen from
within the Virtual Machine (VM). To confirm that your VM has sufficient RAM to support your
configuration, enter the following command from your terminal:

sysctl hw.physmem

If the VM RAM is insufficient, allocate the amount identified in the table.
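Since hw.physmem reports a byte count, the comparison against the table's GB values can be sketched as follows. The helper is ours for illustration, not an N2OS command:

```shell
# Hypothetical helper: compare the byte count reported by
# `sysctl -n hw.physmem` against a required amount of RAM in GB.
ram_sufficient() {
  bytes="$1"
  required_gb="$2"
  gb=$(( bytes / 1024 / 1024 / 1024 ))
  [ "$gb" -ge "$required_gb" ]
}

# Assumed usage on the appliance:
# ram_sufficient "$(sysctl -n hw.physmem)" 6 && echo "RAM OK"
```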

CMC sizing recommendations


The following table suggests sizes for instances running the CMC.

CMC instances
Number of     Max Network  Mode           vCPU  RAM (GB)¹  Min Disk
Appliances²   Elements
25            100,000      Multi-context  4     8          100+ GB
25            100,000      All-in-one     8     32         100+ GB
50            200,000      Multi-context  6     12         200+ GB
50            200,000      All-in-one     16    64         200+ GB
100           400,000      Multi-context  10    16         1+ TB
250           800,000      Multi-context  12    32         1+ TB
400           1,200,000    Multi-context  16    64         1+ TB

¹ The amount of RAM reported by the hypervisor may not correspond to the actual amount seen from
within the Virtual Machine (VM). To confirm that your VM has sufficient RAM to support your
configuration, enter the following command from your terminal:

sysctl hw.physmem

If the VM RAM is insufficient, allocate the amount identified in the table.

² Since this sizing also depends on the synchronized network elements, the supported number of
appliances varies and should be agreed upon with Nozomi Networks for each installation.

Installing the Virtual Machine (VM)


This topic describes how to install the Virtual Machine (VM) in the hypervisor. You'll need to further
configure your VM to enable external access, with instructions provided in subsequent sections.
If you're unfamiliar with importing the OVA Virtual Machine in your hypervisor environment, refer to
your hypervisor's manual or contact your hypervisor's support service.
Follow these steps to install the VM in the hypervisor:
1. Import the Virtual Machine in the hypervisor and configure the resources according to the minimum
requirements specified in the previous section.
2. After importing the VM, set the desired disk size in the hypervisor's disk settings for the VM. Some
hypervisors, such as VMware ESX 6.0 or newer, allow you to change the disk size at this stage. If your
hypervisor does not allow this operation, STOP HERE and continue with the instructions in
Adding a secondary disk to a Virtual Machine (VM) on page 14.
3. Boot the VM. The VM boots into a valid N2OS environment.
4. Log in as admin.
You are logged in immediately; no password is set by default.
5. Go to privileged mode with the command:

enable-me

You will now be able to perform system changes.

Expanding a disk on a Virtual Machine (VM)


This topic describes how to expand an existing disk on a Virtual Machine (VM).
To expand an existing disk, edit your VM's settings from the hypervisor. Then, follow these steps:
1. Restart the virtual machine and log in to it.
2. Elevate your privileges using the enable-me command:

enable-me

3. Run the data_enlarge command:

data_enlarge

The virtual machine can now detect the newly allocated space.

Adding a secondary disk to a Virtual Machine (VM)


This topic describes how to add a larger virtual data disk to the N2OS VM, should the main disk not be
large enough when it is first imported.
Prerequisite: In order to proceed you should be familiar with managing virtual disks in your hypervisor
environment. If you are not, refer to your hypervisor manual or contact your hypervisor's support
service.
1. Add a disk to the VM and restart it.

2. In the VM console, use the following command to obtain the name of the disk devices:

sysctl kern.disks

3. Assuming ada1 is the disk device added as the secondary disk (note that ada0 is the OS disk),
execute this command to move the data partition to it:

data_move ada1
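The disk selection in steps 2 and 3 can be sketched as a small helper that picks the first non-OS disk from the kern.disks listing. The function is ours for illustration; it assumes ada0 is the OS disk, as in the example above:

```shell
# Hypothetical helper: pick the first disk that is not the OS disk (ada0)
# from a space-separated list such as the output of `sysctl -n kern.disks`.
first_secondary_disk() {
  for d in $1; do
    if [ "$d" != "ada0" ]; then
      echo "$d"
      return 0
    fi
  done
  return 1
}

# Assumed usage:
# data_move "$(first_secondary_disk "$(sysctl -n kern.disks)")"
```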

Adding a monitoring interface to the Virtual Machine (VM)


This topic describes how to add a monitoring interface to a Virtual Machine (VM).
By default, the VM has one management network interface and one monitoring interface. Depending
on deployment needs, it may be useful to add more monitoring interfaces to the appliance.
To add one or more interfaces, follow these steps:
1. If the VM is powered on, shut it down.
2. Add one or more network interfaces from the hypervisor configuration.
3. Power on the VM.
The newly added interface(s) will be automatically recognized and used by the Guardian.

Installing the container


This topic describes the container on which you install the Nozomi Networks Operating System
(N2OS).
The container enables you to install the N2OS on embedded platforms, such as switches, routers and
firewalls that have a container engine onboard.
It's also a good platform for tightly integrated scenarios where several products interact on the same
hardware platform to provide a unified experience.
For all the remaining use cases, a physical appliance or a virtual appliance are the recommended
options.

Install on Docker
This topic describes how to install the Nozomi Networks Operating System (N2OS) on Docker.
After performing these steps, you will have an image and a running container based on it.
Prerequisite:
• Docker must be installed to perform the steps below. We have tested N2OS with Docker versions
18.09 and 20.10.
• BuildKit is required to build the image, so Docker 18.09 or higher is needed. Refer to the official
Docker documentation to enable the BuildKit feature: https://docs.docker.com/develop/develop-
images/build_enhancements/
Follow these steps to install N2OS on Docker:
1. Build the image with the following command from the directory containing the artifacts (note the
trailing dot, which specifies the current directory as the build context):
docker build -t n2os .
This creates the image.
2. Run the image using a command, such as this one:
docker run --hostname=nozomi-sga --name=nozomi-sga --
volume=<path_to_data_folder>:/data --network=host -d n2os

where <path_to_data_folder> is the path to a volume where the appliance's data will be
stored, and saved for future runs.
The image has been built to automatically monitor all network interfaces exposed to the container,
and the --network=host setting gives it access to all network interfaces of the host computer.
3. The container can be stopped at any time with the following command:
docker stop nozomi-sga
and started again with:
docker start nozomi-sga

Additional details
This topic describes additional container details.
The container has the same features as those provided by the physical and virtual appliances. A key
difference is that "system" settings must be provisioned using Docker commands, and thus they are
not editable from inside the container itself. A notable example is the hostname: it must be set when
launching a new instance of the image.
You must use volumes for the /data partition to ensure that the data survives image updates.
Updating a container
To update a container:
1. Build a new version of the n2os image.
2. Stop and destroy the currently running containers.
3. Start a new container with the updated image.
Data is automatically migrated to the new version.
The network=host Docker parameter allows the container to monitor the physical NICs on the host
machine. However, by default it also makes the container monitor all of the available interfaces. To
restrict monitoring to a subset, create a cfg/n2osids_if file in the /data volume containing the list of
interfaces to monitor, separated by commas (e.g., eth1,eth2).
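As a sketch, creating that file from the Docker host could look like the following; the /tmp path is only an example standing in for your actual <path_to_data_folder>:

```shell
# Example only: DATA_DIR stands in for the host folder mapped to /data
# via --volume=<path_to_data_folder>:/data.
DATA_DIR=/tmp/n2os-data
mkdir -p "$DATA_DIR/cfg"
# Restrict monitoring to eth1 and eth2 (comma-separated, no spaces).
printf 'eth1,eth2' > "$DATA_DIR/cfg/n2osids_if"
cat "$DATA_DIR/cfg/n2osids_if"
```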
Customizing the container build
You can customize the container build using the following variables, which can be passed to the
docker build command with the --build-arg command line switch, such as: docker build
--build-arg APT_PROXY=x.x.x.x:yy -t n2os .

Parameter          Default value   Description
APT_PROXY          none            Proxy to be used to download container packages
N2OS_HTTP_PORT     80              Specify a custom HTTP web port
N2OS_HTTPS_PORT    443             Specify a custom HTTPS web port

Set up phase 1 (basic configuration)


This topic describes how to set up the basic configuration to begin using the Nozomi Networks solution.
At the end of this procedure, your system management interface will be set up and reachable as a text
console via SSH and as a web console via HTTPS.
Prerequisite: The Nozomi Networks solution is installed and is ready to be configured for the first time.
Note: Depending on the appliance model, use either a serial console (for physical appliances) or the
text hypervisor console (for virtual appliances).
Users
The Guardian shell has two users: admin and root. Log in as admin; to elevate from admin to
root, use the admin password. The root account does not have a separate password.
Follow these steps to set up phase 1 of the Nozomi Networks solution:
1. At the console, a prompt displays the text "N2OS - login:". Type admin and then press Enter.
• On a virtual appliance, you are logged in immediately, as no password is set by default.
• On physical appliances, the default password is nozominetworks.
• The admin password can be changed at any time, using the change_password command.
2. Enter the enable-me command to elevate the privileges.
3. Enter the setup command to launch the initial configuration wizard.

4. At the prompt, choose the admin password first. Select a strong password as this will allow the
admin user to access the appliance through SSH.

5. To set up the management interface IP address, select the 2 Network Interfaces menu in the
dialog.

6. Now set up the management interface IP address. Depending on the appliance model, the
management interface can be named em0 or mgmt. Select the management interface, then press
Enter.

7. Edit the values for IP address (ipaddr) and Netmask (netmask), or enable DHCP to configure
them automatically. Then move up to X. Save/Exit and press Enter.

8. Now select Default Router/Gateway from the menu, and enter the IP address of the default
gateway. Press Tab and then Enter to save and exit.

9. Now select DNS nameservers from the menu, and configure the IP addresses of the DNS servers.

10. Move up to X Exit and press Enter.


This completes the basic networking set up. The remaining configuration steps are performed by
opening the web console running on the management interface.

Set up phase 2 (web interface configuration)


This topic describes the second phase of the Nozomi Networks Operating System (N2OS) set up,
which is performed from the web console.
Prerequisites: For this setup, you must use one of the supported web browsers.
Note: The product integrates self-signed SSL certificates to get started, so add an exception in your
browser. Later in this chapter, we describe the steps to import valid certificates.
Follow these steps to set up the web interface of the N2OS:
1. Access the web console by typing https://<appliance_ip>, where <appliance_ip> is the
management interface IP address.
2. Add an exception to your browser.
The login screen displays:

3. At the login screen, log in using the default username and password: admin / nozominetworks.
Note: At first login, you will be prompted to change these credentials for security reasons.
4. Go to Administration > General and change the host name.

5. Go to Administration > Date and time to change the time zone, set the date, and (optionally)
enable the NTP client.

Result: The appliance is almost ready to be put into production. The next step is to install a valid
license.

License
This topic describes how to set a new license.
1. Go to Administration > Updates & Licenses.
2. Obtain a valid license key by copying the machine ID, and using it in conjunction with the Activation
Code from Nozomi Networks.
3. Paste the valid license key inside the text box.
Note: The license types that you can activate are: Base (a Base license is required and includes
passive monitoring, with Smart Polling as optional), Threat Intelligence, and Asset Intelligence.
Result: After the license is confirmed, the appliance begins to monitor the configured network
interfaces.

Figure 1: License screen

The Guardian license status can be any of the following:

Status       Description
UNLICENSED   Incoming traffic is not processed.
OK           Incoming traffic is processed normally, according to the license limits.
             Keep an eye on the expiration date to renew, if applicable.
EXPIRING     After the official expiration date, Nozomi Networks offers a three (3)
             month grace period, during which the license still works as if it were in
             OK status, to allow for emergency renewal.
EXPIRED      Incoming traffic is no longer processed.



Install SSL certificates


This topic describes how to import an SSL certificate into the appliance. The SSL certificate is required
to securely encrypt traffic between client computers and the Nozomi Networks Operating System
(N2OS) appliance over HTTPS.
The N2OS web server uses the HTTPS protocol to expose the management interface. During the initial
boot, the appliance generates a self-signed certificate valid for one (1) year. A self-signed certificate
should not be used in production. We suggest that you follow this procedure to install a certificate
obtained from a well-known, trusted Certificate (or Certification) Authority (CA).
To add a private CA to the system's trust store, refer to Install CA certificates on page 23.
Prerequisites
• Be sure you have both the certificate and the key file in PEM format.
• Check to see if your certificate is password-protected.
• To avoid browser errors, make sure that the certificate chain is complete. You can combine
certificates using a command such as: cat https_nozomi.crt bundle.crt > https_nozomi.chained.crt
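For illustration, the sketch below shows the concatenation with dummy one-line files standing in for the real PEM certificates; note that the server certificate must come first, followed by the intermediate bundle:

```shell
# Dummy stand-ins for real PEM files, for illustration only.
cd "$(mktemp -d)"
echo 'SERVER-CERT'  > https_nozomi.crt    # your server certificate
echo 'INTERMEDIATE' > bundle.crt          # intermediate CA bundle
# Server certificate first, then the rest of the chain.
cat https_nozomi.crt bundle.crt > https_nozomi.chained.crt
head -n 1 https_nozomi.chained.crt
```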
Follow these steps to install SSL certificates:
1. Upload the certificate and key files to the /data/tmp folder on the appliance using an SSH client.
For example, given that you have the https_nozomi.crt and https_nozomi.key files in the same
folder, open a terminal, cd into it, then use this command to upload the files to the appliance:

scp https_nozomi.* admin@<appliance_ip>:/data/tmp

2. Log into the console, either directly or through SSH, then use this command to elevate the
privileges:

enable-me

3. If your certificate key is protected with a password, use the following command to remove the
protection, to avoid being prompted for the password each time the server restarts:

openssl rsa -in https_nozomi.key -out https_nozomi_nopassword.key

4. Execute the n2os-addtlscert command below to enable the certificate. Note: If you removed
password protection from the certificate key, change the second parameter of the command to
https_nozomi_nopassword.key:

n2os-addtlscert https_nozomi.crt https_nozomi.key

5. Restart the web server to apply the change:

service nginx restart

6. Verify that the certificate is correctly loaded by pointing your browser to
https://<appliance_ip>/ and checking that the certificate is recognized as valid.
The imported SSL certificates are working correctly and will be applied on the next reboot.

Additional settings
This topic describes additional, non-mandatory system settings.

Network Flows
This topic describes the basic network flows to operate the solution components.

Required ports and protocols

Table 1: Operator's access to Guardian, CMC, and RC

Port      Protocol   Source     Destination        Purpose
tcp/443   https      Operator   Guardian/CMC       Operator's HTTPS access to the Guardian/CMC GUI
tcp/22    ssh        Operator   Guardian/CMC/RC    Operator's SSH access to the Guardian/CMC/Remote Collector shell

Table 2: Communications between RC, Guardian, and CMC

Port       Protocol            Source         Destination   Purpose
tcp/443    https               Guardian/CMC   CMC           Sync from Guardian to CMC or between CMCs of different tiers in the hierarchy
tcp/443    https               RC             Guardian      Sync from Remote Collector to Guardian
tcp/6000   proprietary (TLS)   RC             Guardian      Transmission of monitored traffic from Remote Collector to Guardian

Install CA certificates
This topic describes how to add a CA certificate to an appliance. This procedure is required when the
issuing Certificate (or Certification) Authority (CA) for the HTTPS certificate is not immediately trusted.
Prerequisites: Before starting, make these pre-checks:
• If your intermediate CA and Root CA certificates are in separate files, combine them. For example:

cat <intermediate_root_cert> <ca_root_cert> > cert.crt


• The certificate must be in Privacy Enhanced Mail (PEM) format. Neither Distinguished Encoding
Rules (DER) nor PKCS#12 formats are supported.
Follow these steps to install CA certificates:
1. Upload the CA certificate file to the /data/tmp folder on the appliance with an SSH client. For
example, if you have the cert.crt file, open a terminal, cd into the directory, and then use the
following command to upload the file to the appliance:

scp cert.crt admin@<appliance_ip>:/data/tmp

2. Log into the console, either directly or through SSH, then elevate the privileges:

enable-me

3. Use the command n2os-addcacert to add the CA certificate to the trust store:

n2os-addcacert cert.crt

The imported CA certificate is now trusted by the appliance and may be used to secure HTTPS
communication from the connected appliance to a CMC, as described in Connecting Appliances on
page 222.

Enabling SNMP
This topic describes how to enable the SNMP daemon to monitor the health of the Nozomi Networks
Operating System (N2OS) appliance.
Note: The current SNMP daemon supports versions v1, v2c and v3. This feature is not available in the
container version.
Follow these steps to enable the SNMP daemon:
1. To enable the SNMP daemon, log into the text-console, either directly or through SSH.
2. Elevate the privileges with the command: enable-me.
3. Use vi or nano to edit /etc/snmpd.conf.
4. Edit the location, contact and community variables. For community, choose a strong
password.
5. Provide other variables's value, as needed. For example for SNMP v3 User-Based Security Model
(USM), uncomment the following sections to create a user bsnmp and set privacy and encryption
options to SHA256 message digests and AES encryption for this user:

engine := 0x80:0x10:0x08:0x10:0x80:0x25
snmpEngineID = $(engine)

user1 := "bsnmp"
user1passwd :=
0x22:0x98:0x1a:0x6e:0x39:0x93:0x16: ... :0x05:0x16:0x33:0x38:0x60

begemotSnmpdModulePath."usm" = "/usr/lib/snmp_usm.so"

%usm

usmUserStatus.$(engine).$(user1) = 5
usmUserAuthProtocol.$(engine).$(user1) = $(HMACSHAAuthProtocol)
usmUserAuthKeyChange.$(engine).$(user1) = $(user1passwd)
usmUserPrivProtocol.$(engine).$(user1) = $(AesCfb128Protocol)
usmUserPrivKeyChange.$(engine).$(user1) = $(user1passwd)
usmUserStatus.$(engine).$(user1) = 1

6. Now edit the /etc/rc.conf file to add the following line:

bsnmpd_enable="YES"

7. Start the service with the following command:

service bsnmpd start

8. If you enabled the User-Based Security Model (USM) in Step 5, replace the default value of the
user1passwd variable. Launch the bsnmpget command and convert the SHA or MD5 output to the
colon-separated hexadecimal format:

sh -c "SNMPUSER=bsnmp SNMPPASSWD=<newpassword> SNMPAUTH=<sha|md5> SNMPPRIV=<aes|des> bsnmpget -v 3 -D -K -o verbose"

echo <SHA output> | sed 's/.\{2\}/:0x&/g;s/^.\{6\}//g'
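To see what the sed expression produces, here is a sketch run on a short made-up digest (taken from the leading bytes of the sample user1passwd value above; real bsnmpget output has the same "0x..." form but is much longer):

```shell
# Made-up example digest for illustration. Each two characters are rewritten
# as ":0xNN", then the leading ":0x0x:" produced by the original "0x" prefix
# is stripped, yielding the format expected in /etc/snmpd.conf.
digest="0x22981a6e"
echo "$digest" | sed 's/.\{2\}/:0x&/g;s/^.\{6\}//g'
```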



Restart the service with the following command:

service bsnmpd restart

9. Save all settings by issuing the following command:

n2os-save

10. To check the functionality, run a test command from an external system (the <appliance_ip> has
to be reachable). For example, in the USM case with the default values provided by the
/etc/snmpd.conf file, use a command similar to:

snmpstatus -v3 -u bsnmp -a SHA -A <password> -x AES -X <password> -l authPriv <appliance_ip>

Configuring the internal firewall


This topic describes how to restrict access to the management interface, SSH terminal, SNMP service,
and ICMP protocol of the full stack edition (i.e., physical and virtual appliances, not the container).
• To limit access to these services, use the CLI to add the required configurations.
• The default settings permit connections from any IP address. The system ignores lines with invalid
IP addresses.
Note: Use caution when changing internal firewall rules because you can lose access to the device
administration interface. In the event of an error, console access is required to fix the rules.
• These configuration settings allow you to fine-tune the firewall rules.

Parameter Description
system firewall icmp Configure acl for icmp protocol
system firewall https Configure acl for http and https services
system firewall ssh Configure acl for ssh service
system firewall snmp Configure acl for snmp service

Follow these steps to configure the internal firewall:


1. Log into the text-console, either directly or through SSH.
2. Add the required configuration lines in the CLI. For example, the following line allows connections
only from the network 192.168.55.0/24 or from the host 10.10.10.10:

conf.user configure system firewall https 192.168.55.0/24, 10.10.10.10

3. Write configuration changes to disk and exit the text editor.


4. Apply new settings using the following command:

n2os-firewall-update

Management interface packet rate protection


This topic describes how to disable internal packet rate protection, which is enabled by default.
Background
• The management interface of the physical and virtual appliances has internal packet rate
protection enabled by default.
• Malicious hosts are banned for 5 minutes if they try to send more than 1024 packets within 5
seconds.
• The n2os-firewall-show-block command shows blocked IP addresses, while the
n2os-firewall-unblock command can unblock a single IP address.

To disable internal packet rate protection, log into the text-console, either directly or through SSH, and
issue the following commands:
1. In the CLI, add the following configuration:

conf.user configure system firewall disable_packet_rate_protection true

2. Write the configuration changes to disk and exit the text editor.
3. Apply the new settings using the following command:

n2os-firewall-update

IPv6 set up
This topic describes how to configure IPv6 to access the full stack edition (i.e., physical and virtual
appliances, not the container).
Follow these steps to configure IPv6 to access the full stack edition:
1. Issue the following command to enable IPv6 on the management interface:

n2os-setupipv6
2. Reboot the appliance.
3. After the reboot, the address can be retrieved using a system command such as ifconfig.
After completing this procedure, you will be able to access the appliance UI by enclosing the address in
square brackets, as shown in the following screenshot:

Figure 2: Access a Guardian via IPv6 address

Similarly, appliances may be configured to sync towards a Central Management Console (CMC) or
another appliance in High Availability (HA) by specifying the IPv6 address in square brackets.

Figure 3: HA connection for two CMCs

Enabling 802.1x on the management interface


This topic describes how to enable 802.1x support for the management interface. Configuration of the
RADIUS server and the creation of possible certificates are not discussed.
Prerequisites: Before beginning, verify the following:

• Confirm that you have serial access to the appliance; part of this configuration is performed via the
serial console.
• If 802.1x is already configured on the switch side, the ports are already closed. Be sure you have a
network patch cable to reach the appliance via a direct network connection.
• If the authentication process uses TLS certificates, confirm that you have the ca.pem, client.pem,
and client.key files, as well as the client.key unlock password.
• If the authentication process uses PEAP, confirm that you have the identity and password.
Follow these steps to enable 802.1x support for the management interface:
1. Log in to the console via the serial console, and enter privileged mode with the command:

enable-me

2. Create the directory /etc/wpa_supplicant_certs and change its permissions to 755.


Note: You must use this exact directory name. No other name is allowed.

mkdir /etc/wpa_supplicant_certs
chmod 755 /etc/wpa_supplicant_certs

3. Create the file /etc/wpa_supplicant.conf and fill it with the required configuration values.
Note: No other file name is allowed; if necessary, rename your file to match the expected
name.

vi /etc/wpa_supplicant.conf

Below, we provide examples of wpa_supplicant.conf.


• Configuration for PEAP authentication:

ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=0
eapol_version=1
ap_scan=0
network={
ssid="NOZOMI8021X"
key_mgmt=IEEE8021X
eap=PEAP
identity="identity_for_this_guardian_here"
password="somefancypassword_here"
}
• Configuration for TLS authentication:

ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=0
eapol_version=1
ap_scan=0
network={
ssid="NOZOMI8021X"
key_mgmt=IEEE8021X
eap=TLS
identity="client"
ca_cert="/etc/wpa_supplicant_certs/ca.pem"
client_cert="/etc/wpa_supplicant_certs/client.pem"
private_key="/etc/wpa_supplicant_certs/client.key"
private_key_passwd="somefancypassword_private_key_here"
}

4. For TLS authentication, copy the required files to the expected location.
To copy the files, connect to the appliance via Ethernet. If the appliance is not reachable via
SSH over the current network, we suggest that you configure the mgmt interface with a temporary
IP address and connect to the appliance with a direct Ethernet patch cable. Refer to the relevant
chapter of this guide to configure the IP address: Setup Phase 1.

5. For TLS authentication, upload the certificate files to the appliance with an SSH client; they are
first copied to /tmp/ and moved to the /etc/wpa_supplicant_certs/ folder in the next step.
For example, if you have the ca.pem, client.pem, and client.key files, open a terminal, cd
into the directory containing the files, and use the following command to upload them to the appliance:
Note: Skip this step if you are using PEAP authentication.

scp ca.pem client.pem client.key admin@<appliance_ip>:/tmp/

6. In the appliance serial console, with elevated privileges, move the files to the expected location:

mv /tmp/ca.pem /tmp/client.pem /tmp/client.key /etc/wpa_supplicant_certs

7. In the appliance serial console, with elevated privileges, change the certificate permission to 440 as
shown below:
Note: Skip this step if you are using PEAP authentication.

cd /etc/wpa_supplicant_certs
chown root:wheel ca.pem client.pem client.key
chmod 440 ca.pem client.pem client.key

8. In the appliance serial console, with elevated privileges, change the /etc/rc.conf file by adding
the following entries:

wpa_supplicant_flags="-s -Dwired"
wpa_supplicant_program="/usr/local/sbin/wpa_supplicant"

9. Change the /etc/rc.conf file's ifconfig_mgmt entry by adding the prefix WPA.
If the appliance was configured with a direct Ethernet patch cable, you can now configure the
production-ready IP address and connect the appliance to the switch. For example, if the appliance
IP address is 192.168.10.10, the entry will be similar to the following:

ifconfig_mgmt="WPA inet 192.168.10.10 netmask 255.255.255.0"

10. Use the command n2os-save to save the changes:

n2os-save

11. The above configuration process requires that you reboot the system. To reboot the appliance:

shutdown -r now

12. After the reboot, log in to the appliance. Then, using ps aux | grep wpa, you should receive
output similar to the following, which means the WPA Supplicant is enabled for the management
network interface:

root 91591 0.0 0.0 26744 6960 - Ss 09:59 0:00.01 /usr/local/sbin/wpa_supplicant -s -Dwired -B -i mgmt -c /etc/wpa_supplicant.conf -D wired -P /var/run/wpa_supplicant/mgmt.pid

13. You can check the status of the wpa_supplicant using the wpa_cli -i mgmt status command.
For example:

root@guardian:~# wpa_cli -i mgmt status


bssid=01:01:c1:02:02:02
freq=0
ssid=NOZOMI8021X
id=0
mode=station
pairwise_cipher=NONE

group_cipher=NONE
key_mgmt=IEEE 802.1X (no WPA)
wpa_state=COMPLETED
ip_address=192.168.1.2
address=FF:FF:FF:FF:FF:FF
Supplicant PAE state=AUTHENTICATED
suppPortStatus=Authorized
EAP state=SUCCESS
selectedMethod=13 (EAP-TLS)
eap_tls_version=TLSv1.2
EAP TLS cipher=ECDHE-RSA-AES256-GCM-SHA384
tls_session_reused=0
eap_session_id=0dd52aaeaa2aa3aa4deaac6aaafc65edbfa58cdffecff6ff4[...]
uuid=8a31bd80-1111-22aa-ffff-abafa0a9afa6
Chapter 3
Users

Topics:
• Users Introduction
• Managing Users
• Managing Groups
• Password policies
• Active Directory Users
• SAML Integration

In this section, all aspects related to user authentication and authorization will be covered.
You will be guided on the following:
1. Understand the different types of available users (Local, Active Directory, SAML).
2. Set up and define local users.
3. Set up groups and define allowed nodes and sections.
4. Configure Active Directory and import users and groups.
5. Configure SAML and import users.
| Users | 32

Users Introduction
Users provide a way to define both authentication and authorization policies within the Nozomi
Networks solution.
Three user types are available; they differ mainly in their authentication procedures and in the way
users are typically created. The available user types are:
1. Local Users: Authentication is enforced with a password, and the user is created from the web
console.
2. Active Directory Users: Authentication is managed by Active Directory. User properties and
groups are also imported from Active Directory. In order to work properly, Active Directory needs to
be configured in the Nozomi web console (see Configuring Active Directory Integration using the UI
on page 40).
3. SAML Users: A password is not required, since Single Sign-On authentication is enforced through
an authentication server that uses SAML. Users can be inserted via the web console or imported
from a CSV file. In order to work properly, a SAML application needs to be properly configured in the
Nozomi web console (see SAML Integration on page 42).
Authorization policies are defined using groups.
Each group defines two aspects:
• A list of allowed features.
• A filter that limits visibility to specific subsets of nodes.
When a user belongs to a group, the user can perform only the operations allowed by the group and
can see only the nodes that match the group's node filter.
A user can belong to several groups and inherits the authorizations of all the groups to which the user
belongs. Therefore, when a user belongs to more than one group, a node is visible if it satisfies the
filter of at least one of the groups, and a feature is available if it is enabled in at least one of the
groups.
Two specific group types are available with already predefined authorization policies:
• Administrators: All features available.
• Authentication Only: No feature available except authentication.
When a group is neither Administrators nor Authentication Only, the allowed features (sections)
can be enabled/disabled individually.

Managing Users
In this section we will overview the management operations related to users.

List of users
1. Go to the Administration > Users page. Here you can see a list of all users. From the users
page it's possible to create and delete users and change the password and/or username of existing
users.

Adding a user
1. Go to the Administration > Users page.

Click on the "Add" button. You will get to this screen:

2. The first step is to select the type (source) of the desired user. Typically, only the Local or SAML
types should be used. Active Directory users can also be created manually, but it is the operator's
responsibility to ensure that the added user also exists in Active Directory. Therefore, it is preferable
to directly import these users from Active Directory.
Once the source is selected, the kind of data to be inserted depends on the user type:
Local User: Here you have to specify the username, password, and the user's group(s) (group
configuration is covered in the next section).
SAML User: Here you have to specify just the username and group(s), since a password is not
required for SAML users.

3. Optionally, change the check boxes Must Update Password, Is suspended, and Is
Expired.
4. Finally, clicking "New User" will add the user.

Import SAML users


1. Go to the Administration > Users page.

Click the "Import" button; you will get an upload dialog.


2. You can then drop or select a CSV file that contains the list of the SAML users to be added. The
CSV file must contain three comma-separated fields for each row. The first field defines the user
name, the second field is the Authentication Group to be associated with the user (typically an
Authentication Only group), and the last field contains one or more groups (separated by
semicolons) that define additional groups associated with the user (typically used to define allowed
features).
A CSV file example is as follows:

user_1,authentication_group_1,group_1;group_2
user_2,authentication_group_2,group_3
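As a quick sanity check before importing (a hypothetical helper shown for illustration, not an N2OS feature), you can verify that every row has exactly three comma-separated fields:

```shell
# Write the example CSV (same content as above) to a temporary file.
cat > /tmp/saml_users.csv <<'EOF'
user_1,authentication_group_1,group_1;group_2
user_2,authentication_group_2,group_3
EOF
# Print "OK" only if every row has exactly 3 comma-separated fields.
awk -F',' 'NF != 3 { bad = 1; print "bad row " NR }
           END     { if (bad) print "KO"; else print "OK" }' /tmp/saml_users.csv
```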

3. When the import is completed, you will get a message that reports the number of users that have
been correctly imported.

Edit a local user


1. Go to the Administration > Users page. Browse through the list of users and select the one
you want to edit, by selecting edit. You will get to a form like this:

2. Here you can adjust the username and update the password. You will have to enter two matching
passwords to update it correctly. Clicking the "x" button (or pressing ESC on the keyboard) closes
this window.

Adding SSH keys to admin users


1. Using SSH public keys, it is possible to log in to the SSH console without having to type the password.
To configure SSH password-less authentication, you must add SSH public keys to the user account
through the Administration > Users page.

In the user list you will find a key icon which will allow you to add SSH keys:

In the form you will find the required fields for ssh key based authentication:

2. To allow authentication using SSH keys, you just need to paste the public keys into the first field.
Every admin user can have their own keys. If you need more than one key, paste them
one per line. Non-admin users are not allowed to use password-less SSH authentication. When an
admin user expires, the associated SSH keys are removed.
Make sure that the pasted key doesn't contain new lines. The system won't use invalid keys.
Enabling the second option will allow you to log in using the root account.
SSH public keys are propagated to all the directly connected appliances. The default key
propagation interval is 30 minutes. This behaviour can be changed via the conf.user configure
ssh_key_update interval <seconds> setting in the CLI.

Managing Groups
In this section we will overview the management operations related to user groups, changing the
sections of the platform the user can access.

List of groups
1. Go to the Administration > Users page and select the Groups tab.

Adding a local group


1. Go to the Administration > Users page. Select the Groups tab.

2. Click the "Add" button. You will get to the following screen:

Then, under the "General" tab, the following data has to be defined:


3. Define the group name and define if the group has to be propagated to the connected appliances.

4. Define if the group belongs to one of the predefined types, Admin or Authentication only, by
enabling the respective check boxes. If neither check box is selected, the group does not
belong to any of the predefined types and the allowed features have to be defined manually (see
below).
5. If the group does not belong to one of the predefined types you will have to select one or more
section(s) that the group will be allowed to view and to interact with.
Optionally, under the "Filters" tab (visible on the right of the previous screen), you can:
• define "Zone filters" by picking one or more zones from the list, to limit zone visibility for a group;
• define "Node filters" by entering a list of subnet addresses in CIDR format, separated by
commas. This limits the nodes a user can view in the Nodes, Links, Variables list, Graph, Queries
and Assertions;
• define "Allowed appliances" by selecting the appliances that users can access and whose data
they can see. This feature is available only for CMCs, and only if the "is admin" group permission is disabled.

Edit a group
1. Go to the Administration > Users page. Move to the Groups tab. Browse through the list of
groups and select the one you want to edit by clicking on edit.
2. You will get to a form like the previous one used to create a group with different sections enabled or
disabled depending on the group type.
3. You can define the same data as described above.

Password policies
This topic describes how to manage password policies for local and SSH accounts.

Shell password policies


Passwords for local console and SSH accounts must meet specific complexity requirements.
Valid passwords must be at least 12 characters long, and contain characters from at least three (3) of
the following four (4) classes:
• Upper case letters
• Lower case letters
• Digits
• Other characters
Characters that form a common pattern are discarded.
An upper-case letter used as the first character does not count towards meeting the upper-case letter
class, and a digit used as the last character does not count towards meeting the digits class.
Within the classes, the characters should also be sufficiently different from one another; similarity is
measured on the binary representation of the characters (for example, how many bits an upper-case
letter has in common with the corresponding lower-case letter), not by a straight comparison of the
classes.
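As an illustrative sketch of the basic rule only (check_password is a hypothetical helper, not an N2OS command; the first-character/last-character exceptions and the similarity check are not modeled):

```shell
# check_password: hypothetical illustration of the core rule only:
# at least 12 characters, drawn from at least 3 of the 4 classes
# (upper-case, lower-case, digits, other).
check_password() {
  pw="$1"
  [ "${#pw}" -ge 12 ] || { echo weak; return; }
  classes=0
  case "$pw" in *[A-Z]*) classes=$((classes + 1)) ;; esac
  case "$pw" in *[a-z]*) classes=$((classes + 1)) ;; esac
  case "$pw" in *[0-9]*) classes=$((classes + 1)) ;; esac
  case "$pw" in *[!A-Za-z0-9]*) classes=$((classes + 1)) ;; esac
  if [ "$classes" -ge 3 ]; then echo ok; else echo weak; fi
}
check_password 'correct-Horse7battery'  # long enough, four classes
check_password 'shortPw1'               # too short
```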

Web GUI password policies


Passwords for web GUI local accounts must meet complexity requirements. By default, they must have
at least eight characters, include a combination of upper-case and lower-case letters, and numbers.
The password history policy determines the number of unique new passwords that must be
associated with a user account before an old password can be reused. The password lockout policy
prevents brute-force attacks by disabling a user's login for a fixed time after a configurable number
of unsuccessful attempts.
Local passwords and local user accounts can be forced to expire after a period of time. Admin
accounts can be protected from expiring. See the table below for settings.
Default policies can be changed via the CLI to best suit organizational requirements.
Password policies can be checked using the info tooltip while adding or editing a user.

Refer to Configuring Passwords to customize the password policy.



Active Directory Users


In addition to local users, existing users of an Active Directory domain can be configured to log in,
and their permissions can be defined based on their group.
In order to proceed with the configuration, you will need to have the following:
1. The domain name (aka pre-Windows 2000 name) (In this manual we will refer to it using
<domainname>)
2. The domain Distinguished Name (in this manual we will refer to it using <domainDN>)
3. One or more Domain Controller IP addresses. (In this manual we will refer to an IP using
<domaincontrollerip>)

Configuring Active Directory Integration using the UI


In this section we will configure the Active Directory Integration from the UI.

1. Go to the Administration > Users page. Select the Active Directory tab.
2. Enter Username and Password.
You need to prepend the Domain Name to the Username, separated by a backslash character, as
shown in the example.
3. Specify a Domain Controller IP/Hostname.
You can check if the Active Directory service is running on port 389 (LDAP) or on port 636 (LDAPS)
by using the Check Connection button and the LDAPS selector.
By default, the server's SSL certificate is not verified; you can enable verification using the Verify SSL selector.
Should you need to add another Domain Controller IP you can click on the Add host button.
4. Specify the Domain details in Domain name and Distinguished name.
5. Optionally configure the Connection timeout
6. Save the configuration by clicking on the Save button, which will also validate the data.
If there are errors, they will be shown beside the Status field.
The Delete configuration button allows you to delete the Active Directory configuration by
removing all its variables. This action is not recoverable.

Import Active Directory Groups


This section explains how to import an existing group from an Active Directory infrastructure. This step
is fundamental to allow Active Directory users to log into the system.
1. Go to the Administration > Users page. Select the Groups tab. In the Groups page, click on
the Import from Active Directory button.

2. From the import screen, start by specifying domain administrator credentials. Then click on the
Retrieve groups button to retrieve the list of groups.

In the Username field type the Active Directory user logon name in the <domainname>
\<domainusername> format. You can also click on the Filter by group name checkbox and
type the name of a group you want to retrieve.
3. Now filter and select the desired groups to import. If you also want to import related groups (e.g.
parent groups), be sure to select the checkbox near the Import button.

4. When finished, click the Import button. You will be redirected to the list of groups.

5. Now you can edit the group permissions. Active Directory users belonging to this group will be
automatically assigned to it and will inherit all permissions of the configured group.
6. After configuring Active Directory groups permissions, users can log into the system with the
<domainname>\<domainusername> user and their current domain password in the login screen.

SAML Integration
Nozomi supports SAML (Security Assertion Markup Language) Single Sign-On (SSO) authentication;
our integration requires your Identity Provider (IdP) to be compatible with SAML 2.0.
The SAML configuration process is often error-prone. This section assumes that you’re familiar with the
SAML protocol, your IdP software, and the exact details of your specific IdP implementation.
Before configuring Nozomi, define a new application in your IdP. This application should consist of:
• The Assertion Consumer Service (ACS) URL for Nozomi; the ACS URL uses the /saml/auth path.
For example, https://10.0.1.10/saml/auth
• The issuer URL for your IdP; it typically specifies the /saml/metadata path. The nature of this
value depends entirely on your IdP.
• A metadata XML file that describes your IdP’s SAML parameters. Before configuring Nozomi,
download the file from your IdP vendor and save it to a location accessible to Nozomi.
To configure SAML integration:
1. Click the SAML tab on the Administration > Users page.

2. In the Nozomi URL field, enter the URL for your Nozomi instance. Note that the form of this URL
determines how authentication is processed. For example if the value you enter specifies HTTPS,
Nozomi uses the HTTPS protocol when processing login requests.
3. Click Load the Metadata XML file, and locate and select the metadata file provided by your IdP.
This file tells Guardian how to configure SAML parameters for use with your specific IdP solution.
4. In the SAML Role Attribute Key field, enter a string that will be used to map role names between
Guardian and your IdP. The value in this field is used to compare groups defined in Guardian with
those defined in your IdP. The nature of this value depends on your IdP. (For example, if you are
using Microsoft Office 365 as your IdP, the value might be http://schemas.microsoft.com/
ws/2008/identity/claims/role).
5. Click Save.
6. Test the integration by clicking Single Sign On on the Guardian login page; be sure to use
credentials known by your IdP.
Note: In order for SAML to work properly, groups that match the SAML roles must already exist in the
system. Groups are matched by the role's name; for example, if the SAML role attribute specifies an
Operator role, the system looks for the Operator group when authorizing an authenticating user.
Once configured, the login page displays a new Single Sign On button:

If authentication fails, Nozomi writes errors to either:
• the Audit page (Audit on page 131), or
• the /data/log/n2os/production.log log file.
If it becomes necessary, click Delete configuration to entirely remove your current SAML integration.
We advise deletion only in rare cases when your authentication method changes.
Additional SAML configuration
Usually, when using SAML authentication, replies are sent back from the same host that originally
received the request.
Sometimes SAML requests are chained between different IdP and replies may come from a different
host. By default, the web UI content security rules block these kinds of replies.
This behaviour can be overridden using the csp form-action-urls configuration key.
To accept replies from an IdP SSO Target URL that differs from the one specified in the SAML
metadata, issue the following configuration rule in the CLI: conf.user configure csp
form-action-urls <additional_url>.
If you need to specify more than one URL, make sure to separate them using spaces.
After this change you must run the service unicorn stop command, in a shell console, to apply it.
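As a concrete example, the two steps above might look like this in practice (the URL is a placeholder for your own IdP address):

```shell
# In the appliance CLI: accept SAML replies from an additional IdP URL.
# Separate multiple URLs with spaces.
conf.user configure csp form-action-urls https://idp.example.com/sso
# In a shell console: restart the web service to apply the change.
service unicorn stop
```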
SAML clock skew
Sometimes the system times of the IdP and the Guardian may differ. By default, the system accepts
requests with up to 60 seconds difference.
This behaviour can be overridden using the saml clock_drift configuration key.
To change the value, issue the following configuration rule in the CLI: conf.user configure
saml clock_drift <allowed_seconds>.
After this change you must run the service unicorn stop command, in a shell console, to apply it.
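For example, to allow up to 120 seconds of clock skew (the value shown is illustrative):

```shell
# In the appliance CLI: widen the accepted clock drift to 120 seconds.
conf.user configure saml clock_drift 120
# In a shell console: restart the web service to apply the change.
service unicorn stop
```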
Known SAML limitation
• SAML logout protocol is not supported.
Chapter 4

Basics

In this chapter you will be introduced to some basic concepts of the Nozomi Networks Solution, and
some recurring graphical interface controls will be explained.
You must have a solid understanding of these concepts in order to understand how to properly use
and configure the N2OS system.

Topics:
• Environment
• Asset
• Node
• Session
• Link
• Variable
• Vulnerability
• Query
• Protocol
• Incident & Alert
• Trace
• Charts
• Tables
• Navigation through objects

Environment
The Nozomi Networks Solution Environment is the real time representation of the network
monitored by the Guardian, providing a synthetic view of all the assets, all the network nodes and the
communications between them.

Asset View
The Asset View section displays all your assets, intended as single discrete endpoints. In this
section it is easy to visualize, find and drill down on asset information such as hardware and software
versions.
For more details see Asset View on page 63

Network View
The Network View section contains all the generic network information that is not related to the
SCADA side of the protocols, such as the list of nodes, the connections between nodes and the
topology.
For more details see Network View on page 65

Process View
The Process View section contains all the SCADA-specific information, such as the list of SCADA
producers, the producer variables with their history of values and other related information, a section
with the analysis of the variable values, and some variable-related statistics.
For more details see Process View on page 80

Asset
An asset in the Environment represents an actor in the network communication and, depending on the
nodes and components involved, it can be something ranging from a simple personal computer to an
OT device.
All the assets are listed in the Environment > Asset View > List section and can also be
viewed in a more graphical way in the Environment > Asset View > Diagram section which
aggregates the assets in different levels.

Figure 4: An example list of assets

Node
A node in the Environment represents an actor in the network communication and, depending on the
protocols involved, it can be something ranging from a simple personal computer to an RTU or a PLC.
All the nodes in the Environment are listed in the Environment > Network View > Nodes section
or can be viewed in a more graphical way in the Environment > Network View > Graph section.

When a node is involved in a communication using SCADA protocols it can be a consumer or a


producer. SCADA producers can be analyzed in detail in the Environment > Process View
section.

Figure 5: An example list of network nodes

Session
A session is a semi-permanent interactive information interchange between two or more
communicating nodes.
A session is set up or established at a certain point in time, and then turned down at some later point.
An established communication session may involve more than one message in each direction.
The Nozomi Networks Solution shows the status of a session depending on the transport protocol, for
example a TCP session can be in the SYN or SYN-ACK status before being OPEN.
When a session is closed it will be retained for a certain amount of time and can still be queried to
perform subsequent analysis.
All the sessions are listed in the Environment > Network View > Sessions.

Figure 6: An example list of network sessions

Link
A link in the Environment represents the communication between two nodes using a specific protocol.

All the links are listed in the Environment > Network View > Link section and can be viewed in
a more graphical way in the Environment > Network View > Graph section.

Figure 7: An example list of network links

Variable
The Guardian creates a variable for each used command, monitored measure and, more generally,
for each piece of information that is accessed or modified by the SCADA/ICS system. Different
characteristics can be attached to a variable depending on the protocol that is used to access or
modify it. For instance, highly specialized protocols such as IEC-60870-5-104 will generate and update
variables with a specific type and quality for each sampled value, which can also determine whether
the sample is valid or not.
A variable has many properties, described in Process Variables on page 80 in detail. In particular,
the RTU ID and name properties will have specific values depending on the protocol, as explained in
the following section.
A recurring concept is the var_key, a universal identifier of the variable inside the system. The
var_key puts together the node IP address, the RTU ID and the name in the form
<node_ip>/<RTU_id>/<name>. For instance, a variable with name ioa-2-99, located at RTU ID
24567 and accessed with the IP address 10.0.1.2 will have a var_key equal to
10.0.1.2/24567/ioa-2-99.
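The var_key construction described above amounts to a simple string join; the following sketch is illustrative only, not N2OS code:

```python
def var_key(node_ip, rtu_id, name):
    """Build the var_key universal identifier in the form
    <node_ip>/<RTU_id>/<name>, as described above."""
    return f"{node_ip}/{rtu_id}/{name}"
```

For example, var_key("10.0.1.2", 24567, "ioa-2-99") yields the var_key from the example above.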

Vulnerability
A vulnerability is a weakness which allows an attacker to reduce a system's information assurance.
By constantly analyzing industrial network assets against a state-of-the-art repository of ICS
vulnerabilities, the Nozomi Networks Solution permits operators to stay on top of device vulnerabilities,
updates and patch requirements.

Figure 8: The vulnerabilities

Query
The N2QL (Nozomi Networks Query Language) syntax is inspired by the most common Linux and Unix
terminal scripting languages: the query is a concatenation of single commands separated by the |
symbol in which the output of a command is the input of the next command. In this way it is possible to
create complex data processing by composing several simple operations.

It is possible to query only the query sources corresponding to the sections enabled for the user. To
manage groups refer to Managing Groups on page 36.
The table shows the permission needed to query the sources.

Source Permission
alerts Alerts
assets Asset view
captured_urls Captured urls
link_events Link events
sessions Sessions
report_files Reports
variables Process view
variable_history Process view
trace_requests Trace requests
sessions_history Sessions
health_log Health
packet_rules Threat Intelligence
yara_rules Threat Intelligence
stix_indicators Threat Intelligence
cve_files Threat Intelligence
node_cves Vulnerabilities
node_cpes Vulnerabilities
node_cpe_changes Vulnerabilities
node_points Smart Polling
sp_executions Smart Polling
sp_node_executions Smart Polling

The following example is a query that lists all nodes ordered by received.bytes (in descending order):

nodes | sort received.bytes desc

For a reference of the graphical user interface or how you can create/edit queries go to the Query -
User interface reference
For a full reference of commands, data sources, and examples of the query language go to the Query -
complete reference
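The pipe model described above, in which the output of each command becomes the input of the next, can be sketched in a few lines. This is a toy illustration, not the N2QL engine; the sample data and the head command are assumptions:

```python
def sort_cmd(rows, field, descending=False):
    # Order rows by the given field, like "sort received.bytes desc".
    return sorted(rows, key=lambda r: r[field], reverse=descending)

def head_cmd(rows, n):
    # Keep only the first n rows.
    return rows[:n]

# Toy equivalent of: nodes | sort received.bytes desc
nodes = [
    {"ip": "10.0.1.2", "received.bytes": 500},
    {"ip": "10.0.1.3", "received.bytes": 1500},
    {"ip": "10.0.1.4", "received.bytes": 900},
]
result = head_cmd(sort_cmd(nodes, "received.bytes", descending=True), 2)
# result now holds the two nodes with the most received bytes.
```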

Protocol
In the Environment a link can communicate with one or more protocols. A protocol can be recognized
by the system simply by the transport layer and the port or by a deep inspection of its application layer
packets.

SCADA protocols mapping


All SCADA protocols are recognized by deep packet inspection and for each of them there is a
mapping that brings protocol specific concepts to the more generic and flexible Environment Variable
model.
As an example of such mappings, consider the following table:

Protocol                            RTU ID                              Name
Modbus                              Unit identifier                     (r|dr|c|di)<register address>
IEC 104                             Common address                      <ioa>-<high byte>-<low byte>
Siemens S7 (Timer or Counter area)  Fixed to 1                          (C|T)<address>
Siemens S7 (DB or DI area)          Fixed to 1                          (DB|DI)<db number>.<type>_<byte position>.<bitposition>
Siemens S7 (other areas)            Fixed to 1                          (P|I|Q|M|L).<type>_<byte position>.<bitposition>
Beckhoff ADS                        <AMSNetId Target><AMSPort Target>   <Index Group>/<Index Offset>
and more...
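As a small worked example of the Modbus row above (illustrative only, not N2OS code):

```python
def modbus_var_name(area, register_address):
    """Build the variable name for a Modbus register per the mapping
    above: an area prefix (r, dr, c or di) followed by the register
    address."""
    if area not in ("r", "dr", "c", "di"):
        raise ValueError("unknown Modbus area prefix")
    return f"{area}{register_address}"
```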

Incident & Alert


An alert represents an event of interest in the observed system. Alerts can be of different kinds, for
instance they can derive from anomaly-based learning, assertions or protocol validation. In section
Alerts Dictionary on page 152 a complete list of alerts is given as a reference.
Note: When an alert is raised a trace request is issued.
An incident is a summarized view of alerts. When multiple alerts describe different aspects of the
same situation, N2OS's powerful correlation engine is able to group them and to provide a simple and
clear view of what is happening in the monitored system.
In section Incidents Dictionary on page 159 a complete list of incidents is given as a reference.

Figure 9: The Alerts section

Trace
A trace is a sequence of network packets that have been processed so far and can be downloaded in
a PCAP file for subsequent analysis.

The Nozomi Networks Solution shows a button with which you can download the available
traces. A trace can be generated by an alert or by manually issuing a trace request with the dedicated
icon; you can find this icon in all the sections that are related to the trace feature. However, in order
to issue a trace, non-admin users need the Trace permission.
For a detailed explanation of the traces configuration go to Configuring trace on page 306.
A continuous trace is a collection of network packets that are kept for future download. Such
collections can be requested through the GUI. The Nozomi Networks Solution will keep registering a
continuous trace from the moment it has been requested until the request is paused.

For a detailed explanation of the continuous traces go to Continuous Traces on page 132.
Some examples:

Figure 10: Some alerts with trace, click on the three


dots then on the cloud icon to download the PCAP file

Figure 11: From the Links section click on the bolt icon to issue a manual trace request

Figure 12: It is possible to send a trace request also from the graph view

Charts
Charts are often used in the Nozomi Networks Solution to show different kinds of information, from
network traffic to the history of values of a variable. Here is a brief description of the two main chart
controls.

Area charts

A The title of the chart


B The buttons to switch on and off the live update of the chart
C The time window control, click to open the historic view
D The unit of measure of the chart
E The legend, in this case the entries in the legend represent a categorization
of the traffic. It is possible to click each entry to show or hide the associated
data series in the chart

History charts

A Buttons for detaching the chart, exporting the data to an Excel or CSV file
B The time window control
C The unit of measure
D The navigator: it is possible to interact with it using the mouse. Drag it to
change the visibility of the time window, enlarge or shrink it to change the
width of the time window

Tables
Tables are used in many sections of the Nozomi Networks Solution, for example for listing nodes or
links. Tables offer different functionalities to the user, here is a brief introduction.

Figure 13: A table with a filter and a sorting applied

A Filtering control: while typing in it the rows in the table will be updated
according to the filter
B Sorting control: clicking on it will sort the table, clicking on the same heading
twice will change the sorting direction. Press the CTRL key while clicking to
activate multiple column sorting

C The reset buttons are separated in two sections and can independently
remove the filters and the sorting from the table
D Clicking this button will update the data in the table, click on Live to
periodically update the table content
E Use this menu to hide or show the columns. In order to save space, certain
tables have hidden columns by default

Navigation through objects

The navigation icon allows you to go directly to related objects.


Two examples:

Figure 14: Navigation options for a node

Figure 15: Navigation options for a link


Chapter

5
User Interface Reference
Topics: In this chapter we will describe every aspect of the graphical user
interface. For each view of the GUI we attached a screenshot with a
• Supported Web Browsers reference explaining the meaning and the behavior of each interface
• Navigation bar control.
• Dashboard
• Alerts
• Asset View
• Network View
• Process View
• Queries
• Reports
• Time Machine
• Vulnerabilities
• Settings
• System
• Continuous Traces

Supported Web Browsers


The Nozomi Networks solution web console supports recent versions of the following web browsers:
• Google Chrome
• Chromium
• Safari (for macOS)
• Firefox
• Microsoft Edge
Note: We do not support outdated web browsers.

Navigation bar
This topic describes the Guardian appliance navigation bar and how to access menu items.
The navigation bar displays useful information about the system and enables navigation to sections
within the user interface.

A
Navigate to these sections of the UI:

Dashboard Home page that toggles between overview and stats view, with the ability to
set the time frame view from one minute ago to any chosen date
Appliances Provides type, hostname, model, IP address and health for remote
collectors associated with the Guardian
Alerts Provides the risk, time, name, description, attack analysis as well as
additional details
Environment Provides asset, network, and process views
Analysis Provides the ability to analyze the environment based on queries, reports,
assertions, time machine, and vulnerabilities
Smart Polling (If purchased) - Provides a summary and polled nodes

B
Navigate to these sections of the UI:

Logout
Other actions • Clear personal settings
• Continuous trace
• Request custom trace
• Show requested traces

Zone Filters Apply zone filters so only those zones will display

C
Sub-navigation bar:

Collapse button Click to reduce the nav bar height



Monitoring Click to disable the auto logout


mode button
Time machine Either LIVE, when realtime data is displayed; or a timestamp when a time
status machine snapshot is loaded
Host Hostname
Site Server location
N2OS version Release of the Nozomi Networks Operating System (N2OS) that is in use
Time NTP (Network Time Protocol) offset
Disk Statistics about the used and available space
Licensee Entity to whom license is granted
Updates Version information for TI (Threat Intelligence) and AI (Asset Intelligence),
visible only if purchased
Language Switch between English, French, German, Italian, Spanish, Vietnamese,
Chinese, Japanese, and Korean

Note: You may see the following warning messages:


• HIGH LOAD - Banner that notifies you that the appliance is currently receiving more traffic than it
can handle, and that it is protecting itself by discarding some information
• LIMITS REACHED - Notifies you that the machine license has reached its hard limits. When this
occurs, the system stops analyzing network elements, and you may want to consider upgrading
your license.
• API connection error - Machine is momentarily not responding to browser requests. This does not
imply data analysis loss or malfunction. The machine may be busy processing other tasks, may be
rebooting, or may be experiencing network problems.
• MIGRATION ERROR - The last migration procedure did not complete successfully. The health log
will report a more detailed explanation of the issue.

D
The Administration menu allows you to configure Guardian and make changes to system settings,
alerts, users, user groups, and appliance health.
From the Administration dropdown menu, you can make specific changes to adjust Guardian settings
and systems:

Figure 16: Administration menu



Dashboard
The Nozomi Networks Solution offers multiple dashboards that are fully configurable. If you want to
configure them, go to Dashboard Configuration on page 59.
On top of all dashboards there are some useful controls:
• on the left, a time selector component allows you to choose the time window for the dashboard
data. Notice that all widgets are influenced by the time selector,
• on the right, a dropdown menu and a button with the wrench icon allow you, respectively, to choose
the dashboard that you want to see and to go directly to the dashboard configuration page.

Explanation of the sections of the first default built-in dashboard

Environment information A high level view of what the Nozomi Networks Solution saw in
your network, click on a section (except Protocols) for further
details
Total throughput A live chart of the traffic volume.
Assets Overview Assets divided in levels as per IEC 62443
Alerts flow over time Alerts risk charted over time
Situational awareness Gives you a list of evidence, ordered by severity
Latest alerts Latest alerts as they are raised
Failed assertions A list of your failed assertions

Note: You can see more details about a section by clicking the button (where available).

Dashboard Configuration
Go to Administration > Dashboards and choose the widgets that you want in your dashboard
along with their position and dimensions.
Note: Only allowed users can customize the dashboard.
Note: The first time that you customize your dashboard, you will not find any dashboard defined. In the
Dashboard section you will find just the built-in templates.

Main actions
Here you can find the main actions that you can execute on dashboards.

Import The Import button allows you to choose a dashboard configuration previously
saved in your computer.
New Dashboard... After clicking on the New Dashboard... button you can choose a built-in
template to start from.

Do not specify a template if you want to start from scratch.


Choose a With this dropdown menu, when defined, you can choose the dashboard that
Dashboard you want to modify.

Dashboard actions
Here you can find the main actions that you can execute on the dashboard configuration.

+ Add row With + Add row you can add a new row to the dashboard.
History Using this feature you can restore a previously saved version of the dashboard
that you are editing.
Delete Remove the dashboard from your dashboard list.

Edit By clicking on the Edit button you can rename the dashboard configuration and
customize the dashboard visibility.

Discard When you make some changes to the configuration and you want to discard it,
press Discard.
Clone After choosing a dashboard configuration, click on the Clone button to create a
new dashboard as a copy of the chosen one.
Export This button allows you to save the dashboard configuration to your local
computer.
Save After a change in the configuration, the Save button starts to blink and when
you click on it the new configuration is saved. As mentioned above, if you are an
admin user you will save the new default configuration for all the other users.

Row actions
In this section are explained all the actions that you can perform on a row in the dashboard
configuration page.

+ Add widget With + Add widget you can add a new widget to the row. By default it is added
after the widgets already present in the row.
Move row up/down By clicking on these buttons, you are able to move the row up or down in the
dashboard.
Delete row If you want to completely remove the row from the dashboard, you have to click
on the delete button.

Widget actions
When you want to change the aspect that a widget has in the dashboard, you can follow the
instructions below.

Increase/decrease width With these buttons you can increase or decrease the width of the
widget.
Increase/decrease height With these buttons you can increase or decrease the height of the
widget.
Adjust height in row By clicking on this button, the height of all the other widgets in the same
row is set to the current widget's height.
Move widget before/after With these buttons you can move the widget in the row, one step left or
one step right.
Move widget up/down By clicking on these buttons, you can move the widget in the previous
or in the next row.
Delete widget If you want to completely remove the widget from the row, you have to
click on the delete button.

Alerts
Alerts are listed in the Alerts table, which comes in two fashions: standard and expert. It is possible
to switch between the two versions by means of the buttons at the top of the page, as shown in the
figure below.

Figure 17: Standard/Expert mode selection

Non-admin users can access this section only if at least one of the groups they belong to has the
Alerts permission enabled. However, only admin users can perform actions on alerts (i.e.
acknowledgment, removal).

Figure 18: Alerts table in standard mode

Figure 19: Alerts table in expert mode

An explanation of the Alerts table (expert mode)

A The time span control enables the user to view alerts in a defined time
range.
B By selecting a grouping field the table will show all the alerts aggregated by
the selected field, for an example see the sample picture
C Clicking on the alert id will show a popup with more details.
D Clicking on the gear icon will open the learning page

Figure 20: The Alerts table grouped by protocol and sorted by risk

Figure 21: The Alerts details popup

The alert popup gives a detailed overview of the alert, including the information about the involved
nodes, the audit of the operations applied on the alert and the relevance of the alert within the
MITRE ATT&CK knowledge bases. Guardian supports MITRE ATT&CK for ICS and MITRE ATT&CK
Enterprise at version 8.0.

Asset View

Figure 22: The Assets table

This page lists all the Assets in a table. By clicking on an Asset link it is possible to view a popup
with some additional details about the asset.

Figure 23: The Asset details popup



Network View

Network Nodes

Figure 24: The Nodes table

This page shows all the nodes in the Environment.


In addition to the node information there is an Actions column which enables the user to gain more
information about a node, here is an explanation:

Figure 25: Opens the configuration popup of the node



With this popup you can set node properties. Remove or re-assign the Device ID here if the
automatically-assigned Device ID should be overwritten. Use the drop down menu to select an Asset
and assign the node to it. The Device ID referring to that specific Asset is used.
Figure 26: Configuration popup of the node

Figure 27: Opens a popup with only the alerts associated with the current node

Figure 28: Opens a popup with the requested traces

Figure 29: Opens a popup with the form to request a trace

Figure 30: By clicking this icon you can manage the learning of the node

Figure 31: Opens a popup that allows you to navigate to different sections

Figure 32: Opens a popup that allows you to add an additional node to a
plan with an optionally different configuration than the plan's original one

For more information please see the adding additional nodes section.
This page lists all the Nodes in a table. By clicking on an IP link it is possible to view a popup
with some additional details about the node.

Figure 33: The Node details popup

Network Links

Figure 34: The Links table

This page shows all the links in the Environment.


In addition to the link information there is an Actions column which enables the user to gain more
information about a link, here is an explanation:

Figure 35: Opens the configuration popup of the link

Figure 36: Opens a popup with only the alerts associated with the current link

Figure 37: Opens a popup with the history of TCP events (Available only for TCP links)

Figure 38: Opens a popup with the URLs captured from
the analyzed traffic (Available only for some protocols)

Figure 39: By clicking this icon you can manage the learning
of the link (its color depends on the learning status of the link)

Link Events

Figure 40: The link events popup

A The link availability calculated on the UP and DOWN events

B The time span control enables the user to view only the events in the
specified time range
C The graphical history of the events; a point with value 1 represents an UP
event, a value of -1 represents a DOWN event
D The history of events shown in a table

Figure 41: A schematic representation of two link downtimes: d0 and d1

How Link Availability is calculated


A history of events is stored for each link. Two events are of particular interest for computing
availability: UP and DOWN. The former occurs when an activity is detected on an inactive link, whereas
the latter occurs when an active link stops its activity. Every event has a timestamp for tracking the
precise moment at which it happened.
Guardian computes the total downtime of a link by taking the history of events in a finite time window
and summing up all the time spans starting with a DOWN event and ending with an UP event.
By default a link is considered active, therefore the availability of the link will be 100% minus the
percentage of total downtime.
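The computation described above can be sketched in a few lines. This is an illustrative Python sketch, not Guardian's actual implementation; the function name and the event representation are assumptions made for the example.

```python
# Illustrative sketch of the availability computation described above.
# Events are (timestamp_seconds, "UP" | "DOWN") pairs; the link is
# considered active by default, so availability is 100% minus the
# percentage of total downtime inside the time window.

def availability(events, window_start, window_end):
    """Return the availability (%) over [window_start, window_end]."""
    downtime = 0.0
    down_since = None  # the link starts in the UP (active) state
    for ts, kind in sorted(events):
        if kind == "DOWN" and down_since is None:
            down_since = ts
        elif kind == "UP" and down_since is not None:
            downtime += ts - down_since  # span from DOWN to the next UP
            down_since = None
    if down_since is not None:  # still down at the end of the window
        downtime += window_end - down_since
    return 100.0 * (1 - downtime / (window_end - window_start))

# Two downtimes (d0 = 10 s, d1 = 20 s) in a 100 s window -> 70% availability
events = [(10, "DOWN"), (20, "UP"), (50, "DOWN"), (70, "UP")]
print(availability(events, 0, 100))  # 70.0
```

A link with no DOWN events in the window yields 100%, matching the "active by default" rule above.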

Track Availability
The "Track Availability" feature allows for an accurate computation of the availability. It enables the
monitoring of the activity on the link at regular intervals, generating extra UP and DOWN events
depending on the detected activity on both sides of the link during the last interval.
To specify the interval for a Link, go to the Links table (or any other section where the Link Actions are
displayed) and click on the button, in order to open the following form.

It is advisable to choose a value that is greater than the expected link polling time, in order to
avoid too-frequent checks that are likely to produce spurious DOWN events.
Note: link_events generation is disabled by default. To enable it, use the configuration rule described in
Configuring links

Network Sessions

Figure 42: The Sessions table

This page lists all the Sessions in a table. By clicking on the From or To node IDs, additional
details about the involved Nodes are displayed. The buttons in the Actions column enable the user
to request or view traces and to navigate through the UI. The other columns contain fine-grained
information about each session, such as the source and destination ports, the number of transferred
packets or bytes, etc.

Network graph
This topic describes the Nozomi Networks solution network graph.

The network graph page gives a visual overview of the network. In the graph, every vertex can
represent a single network node or an ensemble of nodes, while every edge represents one or multiple
links between nodes or node ensembles. Edges and vertices are annotated to give information about
the identification of the node, the protocols used in the communications between two nodes, and more.
The position of the nodes in the graph is determined either by a specific layout or by a dynamic automatic
adjustment algorithm that looks for minimal overlap and best readability of the items. An overview
picture of the network graph is given below.

Figure 43: Main network graph with the auxiliary zone/topology


graph window on the left and the information pane on the right

The nature of the data represented in the graph is controlled by the graph layout menu, which permits
selecting the type of graph and how the nodes are assembled together in the graph. A detailed
description of the options available in the layout menu is given below.
The user can also control the graph by zooming in and out, centering on specific zones, and getting
information about specific elements by clicking on them with the mouse, as described in the graph
control paragraph.
On the left and the right of the network graph, two auxiliary windows are available to provide additional
information and control.
• Information pane (right). It contains additional information concerning the node or link selected in
the network graph (see graph control)
• Zone/Topology graph (left). It contains the network visualisation in terms of zones or topology. A
detailed description of the feature is provided in the Zone/Topology graph chapter
The contents of the graph can be filtered using different criteria in order to obtain a clearer
representation, or to highlight specific aspects. A detailed description of the available controls is
provided in the table and figure given below:

Figure 44: Network graph with the available commands



A Button to toggle the dynamic adjustment motion of the items


B Button to toggle the information pane
C Buttons to increase (left) or decrease (right) the node icons size; this also
affects the label size
D A highlighted node. When the mouse is placed over a node, the size of the
node icon is increased, as well as the label.
E Link
F Button to toggle the topology pane
G Button to toggle the zone pane
H When present, this icon indicates that graph filtering is active. The filter can
be one of the filters in the filter bar (see R and S below), or it can be the
zone/topology filter activated when the user clicks on a Link/Node in the zone/
topology graphs
I Button that exports a PDF report containing the graph. Notice that the graph
is exported as it is currently shown on the page.
J The ? button shows the legend for links and nodes. The content of the legend
is aware of the selected perspectives.

K Button to reset all the customizations and reload the data


L Button to reload the data, keeping the current customizations. If the live
sliding button is set to "live", the graph is automatically and periodically
updated; otherwise a single update is performed when the user requests
it.
M Button to filter by activity time
N Magic wand button opens a wizard to help the user filter the graph and
view only the desired information. It contains some solutions to reduce the
amount of visualized data for big graphs.
O Button to configure the node visualization options, as described
below
P Button to configure the link visualization options, as described
below
Q Button to select a graph layout
R Button lets you select the filter types that are available in the main network
graph window. The selected filters are shown at the center top of the graph
window (S). By default no filter is selected.

S This example shows the filters enabled in R. Once a filter is enabled and
a value is inserted in the filter, the graph is automatically updated. If more
than one filter is enabled, then a logical AND criterion is applied; only the nodes
that satisfy all the specified filters are shown. Note that if a node passes the
filters, then all the nodes directly connected to it are shown in the graph. For
example, if a specific IP filter is used, then the specified node is shown along
with all the nodes connected to it.
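The filter semantics just described (all enabled filters must match, and the direct neighbors of every matching node are also shown) can be sketched as follows. This is a hypothetical Python illustration; the function name and the data shapes are assumptions, not product code.

```python
# Sketch of the AND-filter semantics described above: a node is kept if it
# satisfies every enabled filter, and any node directly connected to a kept
# node is shown as well.

def visible_nodes(nodes, links, filters):
    """nodes: dict id -> attribute dict; links: set of (id, id) pairs;
    filters: list of predicates over a node's attributes."""
    matching = {n for n, attrs in nodes.items()
                if all(f(attrs) for f in filters)}  # logical AND of filters
    neighbors = ({b for a, b in links if a in matching} |
                 {a for a, b in links if b in matching})
    return matching | neighbors

nodes = {"10.0.0.1": {"ip": "10.0.0.1"},
         "10.0.0.2": {"ip": "10.0.0.2"},
         "10.0.0.3": {"ip": "10.0.0.3"}}
links = {("10.0.0.1", "10.0.0.2")}

# Filtering by a specific IP keeps that node plus its direct neighbor
shown = visible_nodes(nodes, links, [lambda a: a["ip"] == "10.0.0.1"])
print(sorted(shown))  # ['10.0.0.1', '10.0.0.2']
```

With no filters enabled, every node matches and the whole graph is shown, which mirrors the default behavior described above.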

Layout options
The layout defines the way in which the nodes and links are shown in the graph.

Purdue model Arranges the graph in rows and places the nodes in separate rows
according to their level. This makes it possible to distinguish the different levels and
isolate potential problems due to communications that cross two or more
level boundaries.
Standard The default layout; the kind of visualization depends on the Group_by
property:
• Group_by not defined: All the nodes and links are shown
• Group_by defined: All the nodes belonging to the same group are
collapsed into a single node

Grouped The nodes are grouped according to the criteria defined in Group_by, and
the graph is visualized as follows:
• Group_by not defined: All the nodes and links are shown
• Group_by defined: All the nodes belonging to the same group are
shown and are placed inside a circle that represents the group; links
between nodes belonging to the same group are shown, while links
between nodes of different groups are replaced by links between groups,
represented as lines that connect the circles

Clustered The nodes are clustered according to different criteria as described below.
Once some nodes are clustered together, a single circle is shown that
represents the cluster of nodes; when the user zooms in, the circle is
expanded and the internal nodes are shown. A cluster can possibly contain
other subclusters, and so on. This layout can be useful when visualizing
large graphs, since it permits an overall view of the graph while, at the
same time, visualizing the details of just the part of interest without cluttering
the visualization with data that is not of interest. The way in which
the nodes are clustered depends on the value defined by Group_by:
• Group_by not defined: The nodes are clustered based on their
connections. Nodes with a large number of links tend to act as cluster
centers and have neighboring nodes assigned to the same cluster
• Group_by defined: At the highest level a cluster is created for each
group; then, inside each high-level cluster, some subclusters are
possibly created around nodes with a high number of links.
For example, if Group_by is equal to "Zones", then a cluster is created
for each zone, and inside each zone other subclusters are possibly
created around nodes with a high number of links

Group by Defines the grouping that can be used for the Standard, Grouped,
and Clustered layouts. Nodes with the chosen property (e.g. zone, subnet,
etc.) are assigned to the same group; the way in which the group is
displayed then depends on the layout chosen, as described above.

A couple of examples of graphs with different options are given below.

Figure 45: The Environment Graph with the zones pane opened with
the Group_by=Zones, Layout = Grouped and zone perspective.

Figure 46: The Environment Graph with the zones and info pane
opened with the Group_by=Zones, Layout = Clustered. The
info pane contains information regarding the Undefined zone

Graph control
The user can move and zoom the graph using the mouse; it is also possible to increase/decrease
the size of the icons and the text for better readability.

Move To move the graph, click somewhere that is not on a node and start dragging
Zoom (mode 1) With the mouse inside the window, turn the mouse wheel up and down
to zoom in and out (scrolling). The zoom will be centered on the mouse
position
Zoom (mode 2) Drag in a vertical direction while keeping the 'z' key pressed. The zoom will be
centered on the position where the mouse dragging started
Icon and text size Without changing the zoom, it is possible to increase/decrease the size of the
icons and labels using the buttons identified with letter C in Figure 44

Other mouse actions are also available to perform additional operations.

Single click Single click on a node or a link. Fills the info pane with information
regarding the selected node or link. The kind of information displayed
depends on the nature of the selected element (node, cluster, ...)
Double click Double click on a node. Shows a new window with extended information
regarding the clicked node. This action can be performed only on
nodes, not on clusters or links
Mouse over Mouse over a node or a link. Highlights the node or link
Mouse down Single click down on a node or a link without releasing the mouse
button. Highlights the selected node or link and the elements directly
connected to it

Nodes visualisation options


These options permit defining which nodes are shown (filtering), and how they are shown (colouring
based on some properties)

Perspective Changes the color of the nodes according to a predefined criterion

Roles Allows you to filter the graph by node roles
Exclude IDs Removes the specified IDs from the graph view; it is possible to specify multiple
IDs separated by commas
ID filter The graph can be filtered by one or more IDs, separated by
commas
ID filter exact match If checked, the ID filter will show only the nodes that exactly
match the specified ID(s), rather than using a "starts with" criterion
Display Chooses the label formatting of the nodes
Show broadcast If checked, includes in the graph all the nodes with a broadcast IP
Only confirmed nodes If checked, shows only the nodes that exchanged some data in both
directions while communicating

Links visualisation options


These options permit defining which links are shown (filtering), and how they are shown (colouring based on some
properties)

Perspective Changes the color of the links according to a predefined criterion

Protocols Allows filtering the graph by link protocols
Enable links highlighting If checked, links become bolder in reaction to mouse movements, making
them easier to select (may affect performance)
Show protocols If checked, every link shows its protocols
Only with confirmed data If checked, shows only the links which exchanged some data in both
directions

Zone / Topology Graph


This pane provides a network visualization that describes the network topology or zones. The visualisation of the
zone or topology graph is mutually exclusive and can be controlled with the zone and topology toggle
buttons (G and F in Figure 44). Inside the zone graph, each node represents a zone and each link
represents all the links between nodes in the connected zones. When the user clicks on a zone, the
information pane is populated with all the nodes/links that belong to the clicked zone, the main network
graph is filtered to show only the nodes and links of the zone, and the filtering icon (H) appears.
In a similar way, when a link is clicked in the zone graph, the information pane is populated with all the
links between the two zones, and the network graph shows only nodes and links that belong to one
of the two connected zones. When the user clicks in a region of the zone graph without any node or link,
the visualisation in the network graph is reset to show all the nodes and links.

Figure 47: The Environment Graph with the zones pane opened and the
zones perspective active to highlight the zone of origin of each node.

The zones pane offers the ability to filter the graph by clicking on a zone or on a link between two
zones. The zones graph also has a legend and shares some of the nodes and links options. Clicking
on a node or link in the zone pane will show some additional information about the zone or the links
between the zones. See the basic configuration rules to customize Zones.

Figure 48: The Environment Graph with the transferred bytes node
perspective highlighting the high traffic usage of the consumer nodes

"Magic wand" options


The wizard helps the user with several hints to improve the performance of the graph. Settings
annotated with an orange exclamation mark are considered suboptimal. Green thumbs annotate
options whose settings are considered helpful.

Show broadcast Broadcast addresses are not actual network nodes, in that no asset is
bound to a broadcast address. They are used to represent communications
performed by a node towards an entire subnet. Removing broadcast nodes
reduces the complexity of a graph.
Only with confirmed data Unconfirmed links can be hidden easily to reduce the complexity of an
entangled graph.
Only confirmed nodes Unconfirmed nodes can be hidden to reduce the size of a large graph.
Exclude tangled nodes Nodes whose connections cause them to be too complex can be
removed to improve the readability of the graph.
Protocols Nodes and edges can be filtered so as to show only those items participating
in communications involving one of the selected protocols. By clicking on
"SCADA", all SCADA protocols are selected.

Traffic
The Traffic tab in the Environment > Network View page shows some useful charts about
throughput, protocols and opened TCP connections.

Figure 49: The traffic charts

An explanation of the sections

A The throughput chart showing traffic divided in macro categories


B The throughput chart showing traffic for each protocol
C A pie chart showing the proportions of packets sent by protocol
D A pie chart showing the proportions of traffic generated by protocol
E The number of opened TCP connections

Process View
The process view tab can be accessed only by users that have the Process view permission.

Process Variables

Figure 50: The process view table, showing a large number of variables

In the variables list there are many details about each variable; here is an explanation of each field:

Actions By clicking on Variable details you will open the variable details page. A
click on Add to favourites will add the variable to favourites variable list.
Host The IP address of the producer to which the variable belongs
Host label The label of the variable host
RTU ID An identifier of the variable container, for an explanation of the format see
Protocol on page 49
Name The name assigned to the variable, for an explanation on how this is
calculated see Protocol on page 49
Label A configurable description, for instructions see Configuring variables on
page 292

Type The type of the value, can be analog or digital


Value The current valid value of the variable
Last value The last observed value with an indicator showing if it is valid (green) or not
(red). By clicking on the icon the variable history chart will appear.
Last valid quality The last time the variable had a valid value quality
Last quality Last value quality
Min value The minimum value the variable has ever had
Max value The maximum value the variable has ever had
Unit The unit of measure, for instructions on its configuration see Configuring
variables on page 292
Protocol The protocol used to write or read
# Changes The number of times the variable value changed
# Requests The number of read operations
Last client The IP address of the last client querying the variable
Last FC The function code of the last operation performed
Last FC Info The function code information of the last operation performed
First activity The first time an operation was performed
Last activity The last time an operation was performed
Last change The last time an operation performed on the variable changed its value
Flow control The status of the flow control can be:
status
• CYCLIC if the variable is detected to be updated or read at regular
intervals
• NOT CYCLIC otherwise
• DISABLED if flow control has been disabled from the learning control
panel
• LEARNING if the algorithm is still analyzing the flow
When the status is CYCLIC there is a chart indicating the timing and the
average value in milliseconds.

Flow anomaly in progress True if the system has detected that an anomaly is in progress, false
otherwise. When an anomaly is in progress, a Resolve button appears; by
clicking on it, the user can tell the system that the anomaly has ended. If the
anomaly continues, another alert is raised.

Active checks It shows the active checks enabled on the variable


History enabled A boolean flag showing if the value history is enabled for the variable

Variable details
To see the details of a variable, you can click on the magnifying glass icon beside the variable.
In the Process Variable details you can see all the info of the variable and its value history in a chart
and in a table (if it is configured as monitored, see Configuring variables on page 292).

With the buttons above the chart, you can open the chart in another window or export the data in Excel
or CSV format.
By default, the chart shows the variable value history only for a specific period of time. Clicking on the
Live update checkbox makes the chart update in real-time.

Figure 51: The detailed view of a variable

Favorite variables
To add a variable to the favorite variables list, you can click on the star icon beside the variable.
Here you can see a chosen group of variables, those variables can also have their values plotted on
the same chart to make a comparison easier.

Figure 52: The process view table with favourites variables on top

Process Variables Extraction


Variables extraction can be tuned both globally and at the protocol level by accessing the Settings tab.

Figure 53: Process variables extraction tuning

The global variables extraction level applies to all protocols for which an extraction level has not been
specified. The possible values that can be set are the following:
• disabled: variables are not extracted
• enabled: variables are extracted
• advanced: variables are extracted exactly as at the enabled level; moreover, some protocols will
also employ advanced heuristic techniques to extract additional variables
Protocol-specific settings are shown in a dedicated table containing all protocols for which at least one
variable has been extracted. They prevail over global settings, except when variables are globally
disabled; in that case variables will simply not be extracted.
The variables extraction level can be changed for any given protocol by clicking on the relative edit
icon under the ACTIONS column. In addition to the levels that can be set globally there is also a new
value called global. This is the default level for all protocols and it indicates that variables extraction
settings are inherited from the global settings.

Figure 54: Protocol variables extraction tuning

Note that:
• variables extraction is globally enabled by default
• the advanced level can be set only on protocols that support it
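The precedence rules above can be sketched in a few lines. This is an illustrative Python sketch of the described behavior; the function name is an assumption and the code is not part of the product.

```python
# Sketch of the precedence rules described above (illustrative only):
# - a protocol set to "global" inherits the global extraction level
# - a protocol-specific level prevails over the global one
# - except when extraction is globally "disabled", which always wins

def effective_level(global_level, protocol_level="global"):
    if global_level == "disabled":
        return "disabled"          # global disable overrides everything
    if protocol_level == "global":
        return global_level        # inherit from the global setting
    return protocol_level          # protocol-specific setting prevails

print(effective_level("enabled"))               # enabled
print(effective_level("enabled", "advanced"))   # advanced
print(effective_level("disabled", "advanced"))  # disabled
```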

Queries
All the data sources of the Nozomi Networks Solution can be queried using N2QL (Nozomi Networks
Query Language) from the query page (Analysis > Queries). In that page, you can also see all the
queries that are already saved in the running installation.
You can choose between Standard (currently offered as a beta feature) and Expert. The first allows for
an easier experience, useful if you want to quickly have a look at your data; the second allows for more
complex queries but requires more expertise.

Figure 55: Choose between Standard and Expert

Go to Queries on page 187 to get a complete reference of query commands and data sources.

Query builder
The Query builder enables the user to easily create and execute queries on the observed system. To
do so just click through the different options.

Figure 56: The Query builder

While you build your query the available options change to reflect your choices, guiding you through
the process.

Figure 57: The Query builder during a query

Query Editor
The Query Editor enables the user to execute queries on the observed system. To execute a query just
type the query text in the field and press the enter key on the keyboard.

Figure 58: The Query Editor. Some sample queries are displayed
at the beginning, clicking on them will trigger the execution

After the execution, the result will be displayed as in the figure below. If the user has enough
privileges (i.e. belongs to a group with admin privileges), clicking on the floppy icon on the right
saves the query and displays it in the Saved Queries section; otherwise the button is disabled.
To save a query, you must specify a description and a group. The query results can be exported
by clicking on the Export button and choosing between the Excel and CSV formats. The
corresponding file will be produced in the background (to facilitate the production of queries with large
amounts of data) and, once ready, it can be retrieved through the Exports List submenu. When an
export is downloaded it is automatically removed from the filesystem.

Figure 59: The Query Editor during a query



Saved Queries
When a query is saved, it will be displayed in the Saved Queries section. Here, by using the group
selector, it is possible to change the current group and to restrict the view to the queries of the chosen
group.
Query groups, a simple but powerful method to organize the queries, can be created, renamed and
deleted only by admin users. When a group is deleted, all the queries contained in it will be eliminated.
By clicking on the pen icon, it is possible to change the description and/or the group of a query. By
clicking on the trash icon, the saved query will be deleted. As for the saving actions, the user requires
admin privileges in order to perform such operations.

Figure 60: The Saved Queries



Reports
On the report page (Analysis > Reports), you can generate Custom Reports
Custom Reports are based on custom queries and layouts. You can define them using the Report
Management section.

Report Dashboard
The report dashboard shows a set of useful information related to reports.
It is possible to check information about the percentage of disk space occupied by reports, report
management statistics (like available layouts and widgets, custom reports and queries), the last
generated reports, and the next scheduled report occurrences.

Figure 61: Report dashboard

Report Management
Use the Report Management section to create or edit reports.

Figure 62: Report management

On the left you can find the list of created and saved reports, grouped under folders; the selected report
and its folder are highlighted. You can also perform some actions:
• Add a new folder, by clicking on the Add folder button. When you create a report folder, you
must specify a name.

Figure 63: New report folder creation


• Edit an existing folder, by clicking on the pencil icon beside the folder name.

Figure 64: Edit report folder


• Delete an existing folder, by clicking on the trash icon beside the folder name.
• Create a new report, by clicking on the New report... button. When you create a report, you
must specify a name, a folder and a layout, which can be empty or pre-filled.

Figure 65: New report creation


• Import an exported report, by clicking on the Import Schema button
On the right you can see the preview of the selected report. On the top you can find some action
buttons and options:
• Format: changes the report pages format
• Add page: adds a page to the layout
• Save: saves layout changes
• Edit: allows you to change the report name and group

Figure 66: Edit report


• Delete: deletes the report
• Export schema: exports the report
• Generate Report: starts the Report generation
The report layout is made up of a set of pages. Each page contains a set of rows; you can:
• Add a row to the bottom of the page, by clicking on the Add row button
• Delete a row, by clicking on its trash button
• Move a row up/down, by clicking its up/down arrow buttons

Figure 67: Report row

Each row is split into two columns; you can fill columns by adding elements, which can be widgets and,
if you have saved queries, queries. Each element can be removed by clicking on its trash button.
Depending on the element type, it can fill one or two columns, and you can change the width of the
element by clicking on its reduce/enlarge buttons. Some widgets have additional options (e.g. Style
for [custom text]).

Figure 68: Report row with two single-column elements

Report Generation
In the Report Management section you'll find a Generate Report button. This button lets you
generate:
• On-demand reports: generated immediately.
• Scheduled reports: generated cyclically, based on a custom recurrence. This feature is available
only for users granted the Allow editor permission. Scheduled reports can be managed through
the Scheduled Reports section.
You can also choose the type of report you want to generate:
• PDF: the default selection. This is exactly what you see in the Report Management.
• CSV: a zipped folder with one CSV file for each widget that can be converted to this format.
• Excel: a single Excel file with one sheet for each widget that can be converted to this format, plus a
legend in the last sheet.

Figure 69: Generate Report dialog

After report files are generated (either on-demand or scheduled), users can download them from the
Generated Reports section. When scheduling reports, you can optionally send the report files by email.

Generated Reports
After creation, both on-demand and scheduled generated reports are available in the Generated
Reports section as files.

Figure 70: Generated Reports

In this section you can browse the created reports, download them, and delete them if necessary.
You can also configure report retention by clicking the Configure button. You can set the number of
days that a scheduled report remains available after it's generated, and set the maximum number of
reports that can be stored. The default values are 90 days and 500 stored reports.

Note: If the appliance runs low on disk space, the oldest reports are automatically deleted to make
room for the newest ones.

Scheduled Reports
The Scheduled Reports section displays all scheduled reports after they are set up.

Figure 71: Scheduled Reports

In this section, you can browse the scheduled reports, edit them, and delete them if necessary. Click
the Edit button to see the available schedule settings. Edit and save these settings in order to update
the selected schedule.

Report Settings

Report custom logo


You can upload a custom logo to replace the Nozomi Networks logo. After you upload your logo, you
can delete it (the Nozomi Networks logo will be restored), or upload a new one to replace it.
You can edit the report custom logo section visibility to grant or deny user access to the custom logo.
Non-administrative users can see/change the report custom logo only if they are granted the Report
and Allow editor permissions. You can edit users' reports permissions on the user groups section.

Figure 72: The Report custom logo section

Note: Using a logo of a different size than suggested by the tooltip can break the layout of generated
reports by introducing overlapping page headers.

Report SMTP settings


You can configure SMTP settings in order to optionally receive Scheduled Reports by email.
Once enabled and saved, scheduled reports with the Email recipients field set will be sent to
the specified recipients' email addresses.

Figure 73: The Report SMTP settings section

Note: When enabled, for each scheduled report, an email will be sent starting from the next
scheduled recurrence.

Time Machine
With the Time Machine the user can load a previously saved state (called a snapshot) and go back in
time, analysing the data in the Nozomi Networks Solution from a past situation. It is possible to load a
single snapshot and use the platform as usual, or load two snapshots and make a comparison in a
user interface that highlights what has changed.

Time Machine Snapshots List

Figure 74: The Time Machine Snapshots List

The snapshots periodically taken by the Nozomi Networks Solution are displayed in this table.
Snapshots can be used to go back in time to analyze the Environment status at a certain point in the
past. Moreover, they can be compared by means of a diff.

To load a snapshot

Figure 75: Load snapshot button

Click on the Load snapshot button to load a snapshot and analyze it as if you were in the past. The
user interface will become gray to highlight that you are watching a static snapshot.
Click one of the forward buttons to return to the present and watch the Environment in real time.

Figure 76: Forward button

Figure 77: Forward button in header



To request a diff
To request a diff from the snapshots list you must select two snapshots by clicking on the plus button
shown in the figure.

Figure 78: Plus button: click on it to include the snapshot in the diff

You can exclude the frequently changing fields from the diff result by selecting the corresponding
checkboxes. Fields such as those representing a time will then no longer influence the result.

Figure 79: Check it to exclude the frequently changing fields

After the snapshots are selected, click the diff button; the request will be processed and the
differences between the two snapshots will be shown.

Figure 80: The button to execute the diff between two snapshots

To configure retention, snapshot interval, and event-based snapshots, see Configuring Time Machine on
page 310.

Time Machine Diff from Alert

Figure 81: Fast diff button

Sometimes it is more convenient to request a snapshot diff starting from an alert; this automatic
feature uses the snapshots immediately preceding and following the alert time.
To make such a request, open the alert details popup by clicking an alert ID in the alerts table
and click the time machine diff button; you will be redirected to the diff result page.

Time Machine Diff Result

Figure 82: Diff result, click on Show changes to see the differences

A Use these buttons to navigate between the Environment items


B Use these buttons to navigate between the subsections; in the example, the
nodes with changes are displayed

In the diff result page there are four sections: Nodes, Links, Variables and Graph. In the Nodes,
Links and Variables sections there are three subsections: Added, Removed, Changed. By navigating
these sections and subsections you can observe how the Environment changed between the two
snapshots. You can see, for example, if a node has been added or if a variable value has changed. In
the next image there is a popup with the detailed changes for a single node.

Figure 83: Diff details for node

In addition to the tabular representation there is also a graph view of changes. Thanks to the graph
view and the use of colors, you can quickly spot which nodes or links have been added, removed or
have some changes. An item that has been added is green, one that has been removed is red and,
finally, one that has changes is blue. Details are shown on the right side of the graph by clicking on a
node or a link with changes.

Figure 84: Diff result as a graph



Vulnerabilities
This section describes the Vulnerabilities table, which lists all discovered vulnerabilities. The table has three tabs:
• Assets: Lists vulnerability information per vulnerable asset.
• List: Lists all vulnerabilities in table format.
• Stats: Lists vulnerability statistics on a global level.

Figure 85: Vulnerabilities table

Assets Tab
From the dropdown menu in the Assets tab, you can filter only the most likely vulnerabilities, by
selecting Only Most Likely, with the likelihood threshold configured as shown in the image below.

Figure 86: Most likely filter configuration form

By clicking the Common Vulnerabilities and Exposures (CVE) link, you can view a popup with
additional details about the vulnerability.

Figure 87: Vulnerability details popup



List Tab
From the List tab, you can update the vulnerability status using the controls, and set it as follows:

Figure 88: Change CVE resolution

The resolution status and reason can also be updated automatically in the background by the system,
as a result of Smart Polling.
For example, Guardian Ticket Incidents that are closed in ServiceNow are propagated into Guardian,
a synchronous process that is configured in the Smart Polling section of the Guardian portal. Using
the "Close incidents according to their status on the external service" checkbox, you can toggle
incident synchronization on and off. Incidents closed in ServiceNow will be sent to Guardian when
the box is checked.
Both the Mitigated and Accepted statuses result in a resolution status of 'true'.
Stats Tab
From the Stats tab, you can view the top CPEs, CWEs, and CVEs in graphic format.

Figure 89: Stats tab



Settings

Command Line Interface (CLI)


The Command Line Interface (CLI) allows you to change some configuration parameters and to
perform troubleshooting activities.
See the Configuration section for a complete list of configuration rules.

Figure 90: The Command Line Interface executing a command

Useful commands

help Show the list of available commands
history Show the commands previously entered
clear Clear the console
find_cmd Find all available CLI commands containing a given sequence of space-separated keywords

Keyboard shortcuts

Ctrl+R Reverse search through command history
Esc Cancel search
Up arrow Previous entry in history
Down arrow Next entry in history
Tab Invoke completion handler
Ctrl+A Move cursor to the beginning of the line
Ctrl+E Move cursor to the end of the line

Firewall integrations
This topic describes how to configure Guardian firewall integrations.
The Nozomi Networks solution discovers, identifies, and learns the behavior of assets on your network.
Through integration with the firewall, unlearned nodes and links are automatically blocked via
block policies. Block policies are not created for nodes and links in the learned state.
Note: For some firewall integrations, the Nozomi Networks Operating System (N2OS) supports
session kill.
Guardian supports integration with the following firewalls:
• Fortinet FortiGate V6 on page 100
• Check Point Gateway on page 101
• Palo Alto Networks V8 on page 102
• Palo Alto Networks V9 on page 102
• Palo Alto Networks V10 on page 103
• Cisco ASA on page 104
• Cisco FTD on page 104
• Cisco ISE on page 104
• Trend Micro ODC on page 106
• Stormshield SNS on page 107
Note: Setting up firewall integrations requires administrative privileges.
Once logged into Guardian, begin the integration process by going to Administration > Firewall
Integration, then select the firewall from the dropdown menu.

After the integration has been set up, policies are produced and inserted in the firewall. The policies are
displayed in the policies section.

Features
• Firewall integrations only work when the global learning policy mode is set to protecting and strict.
They do not work when the policy for zones is set to override the protecting and strict mode. In this
mode, we can see new nodes, but they are not learned.
• If the global learning policy is set to learning and adaptive, and a zone is set to protecting and
adaptive, we see new nodes, but they are not learned; however, links to new nodes are learned
automatically.

Fortinet FortiGate V6
As a prerequisite to configuring the integration with FortiGate V6, you need a REST API access
token, which can be generated directly from the firewall admin UI.
The access token needs permissions to insert, read, and delete entities such as addresses,
addrgroups, routes, sessions, and policies. Additionally, the Guardian address subnet needs to be
added to the trusted hosts.
The vdom field is optional. If you need to specify multiple vdoms, you can use ',' as a separator,
e.g. vdom1,vdom2.

Figure 91: The FortiGate V6 configuration section

If necessary, tune the integration's behavior using the Options section of the configuration dialog. Each
option is described beneath its checkbox.
This integration uses the REST API, which is supported by FortiOS version 6 or higher.

Check Point Gateway

Figure 92: The Check Point Gateway configuration section

If necessary, tune the integration's behavior using the Options section of the configuration dialog. Each
option is described beneath its checkbox.

Figure 93: The Guardian policies inserted in the Check Point Gateway

Palo Alto Networks V8

Figure 94: The Palo Alto V8 configuration section

If necessary, tune the integration's behavior using the Options section of the configuration dialog. Each
option is described beneath its checkbox.

Figure 95: The Guardian policies inserted in the Palo Alto v8 Firewall

Palo Alto Networks V9


Starting from version 9.0, PAN-OS provides a REST API. The Guardian integration relying on this new
API supports the same features as the previous Palo Alto integration, plus the following:
• Commit by user: commits the current changes required by the user represented by the credentials
used for the API. Global commits are no longer performed.
• Dynamic Access Groups for Node Blocking: the Dynamic Access Group references a tag which is
then assigned to new IP address objects that are created on the firewall. This automatically
applies the global Guardian denylist rule to each new address without having to modify the firewall
ruleset.

Figure 96: The Palo Alto v9 configuration section

If necessary, tune the integration's behavior using the Options section of the configuration dialog. Each
option is described beneath its checkbox.

Figure 97: The Guardian policies inserted in the Palo Alto v9 Firewall

Palo Alto Networks V10


With version 10.0, PAN-OS provides a REST API. The Guardian integration relying on this new API
supports the same features as the previous Palo Alto integration, plus the following:
• Commit by user: commits the current changes required by the user represented by the credentials
used for the API. Global commits are no longer performed.
• Dynamic Access Groups for Node Blocking: the Dynamic Access Group references a tag which is
then assigned to new IP address objects that are created on the firewall. This automatically
applies the global Guardian denylist rule to each new address without having to modify the firewall
ruleset.

Figure 98: The Palo Alto v10 configuration section

If necessary, tune the integration's behavior using the Options section of the configuration dialog. Each
option is described beneath its checkbox.

Figure 99: The Guardian policies inserted in the Palo Alto v10 Firewall

Cisco ASA

Figure 100: The Cisco ASA configuration section

SSL check is always skipped.


If necessary, tune the integration's behavior using the Options section of the configuration dialog. Each
option is described beneath its checkbox.

Figure 101: The Guardian policies inserted in the Cisco ASA

Cisco FTD
This integration permits killing sessions.

Figure 102: The Cisco FTD configuration section

SSL check is always skipped.


If necessary, tune the integration's behavior using the Options section of the configuration dialog. Each
option is described beneath its checkbox.

Cisco ISE

The Cisco ISE configuration


The preferred method to authenticate with Cisco ISE is via certificates. Guardian supports:
• Authentication using certificates issued by the ISE internal CA
• Authentication using certificates issued by an external CA (third party certificates)

Along with the client associated with the certificate and the certificate password, you need to upload
the identity certificate and the private key.

Figure 103: The Cisco ISE configuration using an ISE internal CA certificate

If you are using a third party certificate, you need to upload the external CA certificate as well.

Figure 104: The Cisco ISE configuration using a third-party certificate

It is also possible to authenticate via username and password. If you want to use an existing client, you
have to specify the password.

Figure 105: The Cisco ISE configuration using an existing client

Otherwise you can create a new client directly from the Guardian integration configuration window
by using the Create client button once you have specified the new client name. Remember that
you need to approve the new client from the Cisco ISE pxGrid Services window. The password
returned by Cisco ISE will not be displayed, but will be kept in the Guardian configuration.

Figure 106: The Cisco ISE configuration to create a new client

If necessary, tune the integration's behavior using the Options section of the configuration dialog. Each
option is described beneath its checkbox.

Figure 107: The Guardian policies inserted in the Cisco ISE

Troubleshooting configuration
The UI performs field validation when the Save and Pull policies buttons are pressed. If
fields are missing, a warning message is displayed. If there are any authentication errors, e.g. a
wrong password or a certificate mismatch, the UI displays a message detailing the reason for the error.
For further details regarding errors you may experience, you can also search for the 'Cisco ISE' string in
the log file /data/log/n2os/n2osjobs.log.

Trend Micro ODC


ODC provides a REST API v1.1. The Guardian integration relying on this API supports the same
features as the previous Trend Micro integration.

Figure 108: Trend Micro ODC configuration section

If necessary, tune the integration's behavior using the Options section of the configuration dialog. Each
option is described beneath its checkbox.

Figure 109: The Guardian policies inserted in the Trend Micro ODC Firewall

Stormshield SNS
The Guardian integration supports Stormshield CLI API v4.

Figure 110: The Stormshield SNS configuration section (credentials authentication)

Figure 111: The Stormshield SNS configuration section (certificate authentication)

If necessary, tune the integration's behavior using the Options section of the configuration dialog. Each
option is described beneath its checkbox.

Figure 112: The Guardian policies inserted in the Stormshield SNS Firewall

Data Integration
In this section (Administration > Data Integration) you can configure several endpoints.
Depending on configuration, each endpoint could receive Alerts or other items. You can learn more
about Nozomi’s integrations in the web UI: click How this integration works beneath the
Endpoint Configured as field in the New Endpoint dialog.
Integrations that send data via UDP have a default maximum message size of 1024 bytes. You can
change this default by adding a max-size query parameter to the URI, for example
udp://host?max-size=2048.

Figure 113: Some examples of configured endpoints

FireEye CloudCollector
Besides Alerts, with FireEye CloudCollector integration it is possible to send Health Logs, DNS Logs,
HTTP Logs and File transfer Logs.

IBM QRadar (LEEF)


The IBM QRadar integration permits the sending of all Alerts (and optionally Health Logs) in LEEF
format. You can also send assets information to QRadar beginning with version 2.0.0 of the QRadar
App. Click How this integration works to view additional details.

Common Event Format (CEF)


With this integration you are able to send, in CEF format, Alerts and Health Logs. It's possible to enable
the encryption of the data through the TLS checkbox and check for the CEF server’s certificate validity
with the CA-Emitted TLS Certificate checkbox.

ServiceNow
This integration allows you to forward incidents and assets information to a ServiceNow instance.
Using the options below, you can decide whether to send only new incidents or also historical ones.
Additionally, you can choose whether assets already existing in ServiceNow should be updated with
information present in the appliance, or whether assets in ServiceNow will only be created if they do not exist there yet.
Click How this integration works to view additional details.

Tanium
This integration allows you to forward assets information to a Tanium instance. Click How this
integration works to view additional details.

Splunk - Common Information Model (JSON)


If you need to send Alerts to a Splunk - Common Information Model instance, you can use this kind of
integration. Data are sent in JSON format and you are also able to filter on Alerts. You can also send
Health Logs and Audit Logs. Click How this integration works to view additional details.

SMTP forwarding
To send Reports, Alerts and/or Health Logs to an email address, you can configure an SMTP
forwarding endpoint. In this case, you are also able to filter on Alerts.

SNMP Trap
Use this kind of integration to send Alerts through an SNMP Trap.

Syslog Forwarder
Use this type of integration to send the Syslog events captured from monitored traffic to a Syslog
endpoint.
It is useful to passively capture logs and forward them to a SIEM.
Note: To enable Syslog event capture, see Enable Syslog capture feature on page 260.

Custom JSON
This type of integration sends all the Alerts to a specific URI using the JSON format.

Custom CSV
This type of integration sends the results of the specified query to a specific URI using the CSV format.

DNS Reverse Lookup


This integration sends reverse DNS requests for the nodes in the environment and uses the names
provided by the DNS as labels for the nodes that don't have a label yet. You can pre-filter the nodes
by specifying a query filter. The strategy runs once a day, but you can run it on demand by selecting
'Rerun the strategy on all the data'.

CheckPoint IoT
This integration allows you to forward assets information and nodes blocking policies to an instance of
CheckPoint Smart Console. Click How this integration works to view additional details. This
integration is available only on CMCs.

Kafka
The Kafka integration allows you to send the results of custom queries in JSON format to existing
topics of a Kafka cluster. Click How this integration works to view additional details.

Cisco ISE
With Cisco's ISE integration, you can send the results of custom node queries to Cisco ISE as asset
information using the STOMP protocol. Click How this integration works to see additional details
about certificate usage and Cisco ISE environment requirements.

Perform these steps for the Cisco ISE configuration:


• Create the following custom string attributes: n2os_change_flag, n2os_operating_system,
n2os_product_name, n2os_vendor, n2os_type, n2os_appliance_site, n2os_zone.
• Create a new profile and set the required condition n2os_change_flag custom attribute equal to
"change"
• Modify the existing profiles or, if no profiles are expected to be assigned to assets from n2os, create
a new profile. Add the required condition for n2os_change_flag custom attribute equal to "done"
Due to a long-standing bug in Cisco's PxGrid API, the performance when sending assets is halved,
as two network calls are required for each updated record. Nozomi Networks is working with Cisco to
address this issue; Cisco has not provided a target date for a fix.

External Storage
The External Storage integration uploads files to an external machine. This enables the external
machine to keep remote copies of files beyond the retention settings. The file location is
transparent to the user, who can retrieve the files seamlessly from External Storage when they
are removed from the local file system.
It is also possible to choose a connection protocol for storing the files. Available protocols are smb, ftp,
and ssh. Important: The smb connection protocol is only supported for use with Microsoft operating
systems. Compatibility with third-party devices is not guaranteed. These devices may require additional
configuration changes, including permission changes, creation of new network shares, and creation of
new users. Kerberos authentication is not supported.
Note: At the moment, this functionality is only available for trace pcap files on Guardian.

Microsoft Endpoint Configuration Manager


This integration collects information from the Microsoft Endpoint
Configuration Manager to update the Windows nodes present in the environment.
Collected items
1. OS information: Returns OS information such as version, service pack, build and architecture.
2. Hostnames: Returns host name information to configure the node label.
3. Interfaces information: Returns interface data to populate the node MAC address.
4. Installed software: Returns installed software and populates the node CPE.
5. Hotfixes: Returns installed software version updates and checks to see if there are node CVEs
to close.
Note
It is important to filter the strategy nodes because, without the filter, the strategy waits for the timeout
of non-Microsoft nodes that are not reachable. This slows down the Data Integration strategy
execution considerably.

Zone configurations
In this section (Administration > Zone configurations) network zones can be added and
configured.

Figure 114: Zones table

The page lists three kinds of zones:


• "Predefined" or "standard" zones show a "lock" icon: they are preconfigured and cannot be
modified. Inside this group, there are two default zones acting as fallbacks for public and
private nodes, respectively, that don't belong to any more specific zone.
These fallback zones can only be renamed, by clicking the "pencil" icon.

Figure 115: Zone default name


• User-defined zones are indicated with the "pencil" and "trash" icons: they can be edited or removed
by clicking on the corresponding icon. They can also be selected through a checkbox.
User-defined zones can be exported. If no zone is selected, the Export all button exports all
user-defined zones; otherwise, the Export selected button exports only the selected zones. Some
table actions can help with zone selection/deselection. Predefined and Auto-configured zones
cannot be exported.
User-defined zones can be imported using the Import button. After the import process, zones will
be reloaded.

Figure 116: Zone configuration import


• Auto-configured zones are indicated with a "plus" icon: they are heuristically discovered by the
engine, which is able to pre-fill some fields. They may be added and further configured by clicking on
the icon. Auto-configured zones are not used by the system until they are explicitly added and
configured.

Figure 117: Zone configuration

Furthermore, new custom zones may be added. The zone must be given a name that cannot contain
spaces, and it must include at least one network segment. All nodes pertaining to one of the segments
of a zone inherit the properties of that zone. The following optional configuration settings are also
available for every zone:
• IP network segments: these can be specified in CIDR notation (e.g. 192.168.2.0/24), or by
means of a range that includes both ends (e.g. 192.168.3.0-192.168.3.255). Segments can be
concatenated by commas.
• MAC address ranges: both ends of the range are included (e.g.
08:14:27:00:00:00-08:14:27:ff:ff:ff)
• MAC address matching fallback: one of the necessary conditions to consider a node part
of a zone is that its node ID must match the zone network segments. There are cases where this
matching strategy is not enough; for example, we may want nodes with an IP as node
ID to match a zone defined with MAC address ranges. In those cases, we can enable this fallback
matching strategy in order to match against the MAC address of the node whenever the node IP
does not match any segment.
• Matching VLAN ID: only nodes belonging to such VLAN are shown. For example, consider a zone
configured as 192.168.4.0/24, with vlan id set to 5, and two nodes within such network: 192.168.4.2
and 192.168.4.3, with only the former belonging to such VLAN. When filtering the view with this
zone, only node 192.168.4.2 will be shown.
• Assigned VLAN ID: Nodes belonging to this zone will be assigned this VLAN ID.
• Level: The level defines the position of the nodes pertaining to the given zone within the Purdue
model. Once a level has been set for a zone, all nodes included in that zone will be assigned the
same level, unless a per-node configuration has been specified as well. This means that, if two or
more zones overlap, a node belonging to all of them will inherit the level of the most restrictive zone.
• Nodes Ownership: the ownership of the nodes belonging to the given zone. Once the ownership
has been set for a zone, all nodes included in that zone inherit such ownership, overwriting the
single nodes' ownership.
• Detection approach: can be used to override the global settings from the Security Control Panel.
• Learning mode: can be used to override the global settings from the Security Control Panel.
• Security profile: can be used to override the global settings from the Security Control Panel.

System

General
In the Administration > General page it is possible to change the hostname of the Appliance
and to specify a login banner. The login banner is optional and, when set, it is shown on the login page
and at the beginning of all SSH connections.

Figure 118: The hostname and login banner input fields

Figure 119: An example of login banner

Date and time



Figure 120: Date and time configuration panel

From the date and time page you can:


• change the timezone of the appliance
• change the current time of the appliance (you can use the Pick a date or Set as client buttons to
set a date in a simple way)
• enable or disable time synchronization to an NTP server by writing a list of comma-separated
server addresses
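For example, the comma-separated server list entered in the NTP field might look like the following; the server names and address are hypothetical examples, not values required by the product:

```
0.pool.ntp.org,1.pool.ntp.org,192.168.100.1
```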

Network interfaces

Figure 121: Network interfaces list

Actions With the configuration button, you are able to define/modify the NAT rule to
be applied to the current interface.
Interface The interface name or, if set, its label.
Enabled It is true if the interface is enabled to sniff traffic.
Is mirror It is true if the interface is likely receiving mirror traffic and not only
broadcast.
Mgmt filter When on, the appliance's own traffic is filtered out. It is on by default. To
change the value, see the specific configuration rule in Basic configuration
rules on page 255.
BPF filter The BPF filter applied to the sniffed traffic.
NAT The NAT rule applied to the current interface.
Denylist enabled It is true if the denylist is configured and enabled for the current interface.
Denylist file The denylist file that is used by the current interface. If the file contains a
row starting with #DESCRIPTION:, the description will be shown here. E.g.

#DESCRIPTION: denylist_1 for test

In this form you can set the NAT configuration and the BPF filter.

Figure 122: Interface configuration form

It is possible to disable the network interface in order to stop it from sniffing traffic.
It is also possible to provide a label for a network interface, which will be shown in place of the network
interface name in any part of the user interface.
In the NAT part you may configure the original subnet, the destination subnet and the CIDR mask for
the NAT rule.
In the BPF filter part you may configure the filter to apply to this interface. There are two ways to
configure the filter, via a visual editor or manually. Click the "BPF Filter editor" to open the visual editor.
Here, you can edit the most common filters.

Figure 123: BPF filter editor

More complex filters can be inserted manually in the input box by clicking the toggle.

Figure 124: Manual insertion of a BPF filter
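As an illustration, a manually entered filter uses standard BPF (tcpdump-style) syntax. The addresses and ports below are hypothetical examples, not values required by the product:

```
not host 192.168.10.5 and (tcp port 502 or udp port 161)
```

This example would capture only Modbus/TCP (port 502) and SNMP (port 161) traffic, excluding anything to or from 192.168.10.5.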

In the Denylist part, you can upload a text file containing a denylist, i.e., a list of IP addresses or
wildcards that will not be processed by the Guardian. The effect is similar to that of the BPF filter,
however a denylist can handle tens of thousands of IP addresses, numbers that are beyond the
capability of the BPF filter.
A denylist must contain one entry per line: a dash (-) followed by a space and an IP address (wildcards
are allowed).
For example, the following denylist denies 192.168.1.* and 192.168.2.1; everything else is implicitly allowed:

- 192.168.1.*
- 192.168.2.1

In the next example, the first line is invalid, as it would reject all traffic: invalid lines in a matchlist
are ignored. The last line is simply redundant.

- *
- 192.168.2.*
- 192.168.2.1
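The denylist entry rules can be illustrated with a small Python sketch. This is a hypothetical helper written for explanation only, not a tool shipped with the product; the exact validation performed by the appliance may differ:

```python
import re

# Hypothetical validator for the denylist entry format described above:
# a dash, a space, and an IPv4 address whose octets may be the wildcard "*".
# A bare "- *" would reject all traffic, so it is treated as invalid
# (invalid lines are ignored by the Guardian).
ENTRY_RE = re.compile(r"^- (?:(?:\d{1,3}|\*)\.){3}(?:\d{1,3}|\*)$")

def is_valid_entry(line: str) -> bool:
    """Return True if `line` is a well-formed denylist entry."""
    line = line.strip()
    if line == "- *":  # would match every address: rejected
        return False
    return ENTRY_RE.match(line) is not None

print(is_valid_entry("- 192.168.1.*"))  # True
print(is_valid_entry("- *"))            # False: would reject all traffic
print(is_valid_entry("192.168.2.1"))    # False: missing the leading "- "
```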

Upload traces
In the Administration > Upload traces page you can play a trace file into Guardian; the
appliance ingests the traffic as if it came through the network.

On top, there are flags that you can use to customize the behaviour of the upload/play action.

Use trace timestamps Check this box if you want to use the time captured in
the trace file. Otherwise, the current time is used.
Delete data before play Check this option if you want to delete all data in the
appliance before running the play action. When multiple
traces are played, deletion is applied only before running
the first trace.
Auto play trace after upload With this flag enabled, the trace is played immediately
after the upload.

On every single trace file uploaded there are some available actions as shown below.

Select trace Select the traces to be played using the checkboxes. Multiple
traces will be played one after the other in the order they are
selected (as indicated by the numbers on the left of the checked
boxes).
Replay trace This action replays the corresponding trace (only that single trace).
In order to run all the selected traces you need to click on the three
dots under the label ACTIONS and then click on 'Play selected'.
Edit note If you need to share some note about the uploaded trace.
Delete from the list Erase the trace file from the Appliance, no Environment data will
be affected.

Alerts can be generated as a result of playing a trace. If the played file is artificial, the alert
timestamps may not be recognized by the system. In this case, a value containing InvalidDate will be
displayed in the time column of the alerts table.
Note: By default, the Appliance has a retention of 10 trace files. To configure this value, see
Configuring retention on page 311.

Export

Export Content Pack


This feature allows you to export a pack that you can then import on another machine.

Figure 125: The content pack export feature

You can export the following parts of the system:


• Reports
• Queries
When exporting Reports you can export all of them or select only the folders you want.
When exporting Queries you can export all of them or select only the groups you want.

Import

Import nodes - CSV file


This feature allows you to add nodes and assets from scratch (flagging create non-existing
nodes) or to enrich existing ones.

Figure 126: The node import feature

It is easy to bind the CSV fields to the Nozomi ones. If the CSV provides the headers in the first line
of the file, be sure to flag the Has header option to view the column titles. To put the data in the right
items, be sure to match the right Nozomi field with the imported data; for example, if the CSV file to
be imported contains a list of IP addresses, select the ip field in the Nozomi data field
dropdown. For each column in the CSV file to import, it is possible to specify in which field the data has
to be imported by using the Nozomi field dropdown.
You can only match CSV fields with the Nozomi mac_address and ip fields. For matching fields,
binding is disabled, because the matching info is used to bind the field. It is not possible to bind fields
before choosing a match.
Nozomi field type can only have values that match already existing types, either built-in or custom;
other values are not considered.
Nozomi field role can only have one of the following values:

• consumer
• producer
• engineering_station
• historian
• terminal
• web_server
• dns_server
• db_server
• time_server
• antivirus_server
• gateway
• local_node
• voip_server
• dhcp_server
• security_scanner
• teleprotection
• power_quality_meter
• protection_relay
• jump_server
• hypervisor
• backup_server
• HMI
other values are not considered.
The Nozomi field zone must match an existing zone; you can add a zone to make it match.
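The binding and matching rules above can be sketched in Python. This is only an illustration: the CSV column names and the binding dictionary below are hypothetical, and the real importer is configured through the web UI, not through code.

```python
import csv
import io

# Role values accepted by the importer, per the list above;
# any other value is not considered.
ALLOWED_ROLES = {
    "consumer", "producer", "engineering_station", "historian", "terminal",
    "web_server", "dns_server", "db_server", "time_server", "antivirus_server",
    "gateway", "local_node", "voip_server", "dhcp_server", "security_scanner",
    "teleprotection", "power_quality_meter", "protection_relay", "jump_server",
    "hypervisor", "backup_server", "HMI",
}

# Hypothetical binding: CSV column name -> Nozomi field.
BINDING = {"address": "ip", "device_role": "role"}

def import_rows(csv_text, binding):
    """Map each CSV column onto its Nozomi field, dropping role values
    that do not match the allowed list."""
    rows = []
    for raw in csv.DictReader(io.StringIO(csv_text)):  # 'Has header' flagged
        row = {field: raw[col] for col, field in binding.items()}
        if "role" in row and row["role"] not in ALLOWED_ROLES:
            del row["role"]  # other values are not considered
        rows.append(row)
    return rows

sample = "address,device_role\n10.0.0.5,historian\n10.0.0.6,toaster\n"
print(import_rows(sample, BINDING))
# [{'ip': '10.0.0.5', 'role': 'historian'}, {'ip': '10.0.0.6'}]
```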

Figure 127: Binding fields

It is even possible to create and import custom fields (only for assets list).
To create a new field go to Administration > Data model and choose a name and a type for
your custom fields. After this operation the field will be available in the import page in the Nozomi
field binding dropdown.

Figure 128: Data model page

An example of a valid CSV file.

Figure 129: CSV example



Import asset types - CSV file


This feature allows you to enlarge the built-in set of asset types with a set of new custom types.
Built-in asset types are:
• switch
• router
• printer
• OT_device
• computer
• cctv_camera
• PLC
• HMI
• barcode_reader
• sensor
• digital_io
• inverter
• controller
• IED
• VOIP_phone
• mobile_phone
• tablet
• WAP
• IOT_device
• light_bridge
• firewall
• RTU
• radio_transmitter
• UPS
• gateway
• AVR
• DSL_modem
• IO_module
• media_converter
• PDU
• power_line_carrier
• time_appliance
• meter
• server
• actuator
• power_generator
• robot
• other
The CSV file must contain a header row with name, followed by the list of type names, one per row.
Each asset type is identified by its name; this implies that, during the import process, any asset type
name that is already present will be ignored and notified.
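The import behavior described above can be sketched as follows. The sample type names and the abridged built-in set are illustrative only; the actual import runs through the web UI.

```python
import csv
import io

# A small excerpt of the built-in asset types listed above.
BUILT_IN = {"switch", "router", "printer", "PLC", "HMI", "server", "other"}

def import_asset_types(csv_text, existing):
    """Read the 'name' header row, then one type name per row; names that
    are already present are ignored (and would be notified)."""
    added, skipped = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        name = row["name"].strip()
        (skipped if name in existing else added).append(name)
    return added, skipped

sample = "name\nsolar_inverter\nPLC\nbadge_reader\n"
added, skipped = import_asset_types(sample, BUILT_IN)
print(added)    # ['solar_inverter', 'badge_reader']
print(skipped)  # ['PLC']
```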

Figure 130: CSV example

Figure 131: The asset types import feature

Import Configuration / Project file


With this feature a project file can be imported; the information written in the project file will be added
to the asset data in the Nozomi Networks Solution.
Allowed project files are:
• Rockwell Harmony (.conf)
• Yokogawa CENTUM VP (.gz, .zip)
• Siemens (.cfg)
• IEC 61850 SCL/SCD (.scd)
• Triconex (.pt2)
• Allen-Bradley (.l5x)
• Honeywell TDS (.txt, .zip)
• Profinet IOCM (.xml)

Figure 132: The Configuration / Project file import feature

Import Content Pack


With this feature a content pack can be imported; the information written in the content pack will be
added to the corresponding sections in the Nozomi Networks Solution.

Allowed files are:


• Nozomi Content Pack (.json)

Figure 133: The Content Pack import feature



Health
All the sections described below are available to the admin user. Additionally, access is granted to all
users with the Health permission.

Performance
In this tab there are three charts showing, respectively, the CPU, RAM and disk usage over time.

Figure 134: The performance charts

Health Log
The health log reports the details of any kind of performance issues the appliance experiences. In
general, logs include information such as CPU, RAM, disk space, interface status, stale appliances, or
generic high load.

Figure 135: The health log table

Types of Health Log Entries


Guardian can generate many types of health log entries. The log can indicate when an appliance:
• “is under high load”
• “is no longer under high load”
• “X% cpu usage”
• “cpu usage back to normal”
• “X% ram used”
• “ram usage back to normal”
• “X% disk space used”
• “disk space usage back to normal”

• “Appliance is stale”
• “Appliance is no longer stale”
• “LINK_UP_on_port_N”
• “LINK_DOWN_on_port_N”
• “Failed migrations“
• Log_disk_full-starting-emergency-shutdown

Audit
The Administration > Audit page lists all relevant actions made by users, from Login/Logout
actions to all the configuration operations, such as learning or deleting objects in Environment.
Each recorded action is associated with the IP address and the username of the user who performed it
and, as in the other Nozomi Networks Solution tables, you can easily filter and sort this data.

Figure 136: The audit table

Reset Data
In the Administration > Data page you can selectively reset several kinds of data used by the
Nozomi Networks Solution. Each option resets specific types of data, which are listed beneath each
checkbox in the web UI.

Figure 137: The reset data form



Continuous Traces
The continuous trace page can be accessed through <Username> > Other actions >
Continuous trace. Here continuous traces can be requested, managed, inspected and
downloaded. To reach this section and perform any action, non-admin users must belong to a group
with the Trace permission.

Each trace is saved in PCAP files with a maximum size of 100 MB. When a file reaches this
threshold, it is closed and a new file is created to keep collecting the network packets. The trace files
are saved on the hard disk of the appliance. Guardian makes sure that 10% of the disk is always free.
For that reason, when the hard disk usage approaches the limit, the oldest PCAP files belonging to the
continuous traces are deleted.
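The rotation policy just described can be sketched as follows. The function and its parameters are hypothetical; Guardian's actual implementation is internal to the appliance.

```python
MAX_FILE_MB = 100          # a trace file is closed once it reaches 100 MB
MIN_FREE_FRACTION = 0.10   # Guardian keeps at least 10% of the disk free

def rotate(trace_files, disk_total_mb, disk_used_mb):
    """Delete the oldest continuous-trace PCAPs until at least 10% of the
    disk is free. trace_files is a list of (name, size_mb), oldest first."""
    files = list(trace_files)
    limit = disk_total_mb * (1 - MIN_FREE_FRACTION)
    while files and disk_used_mb > limit:
        _name, size = files.pop(0)  # the oldest file is deleted first
        disk_used_mb -= size
    return files, disk_used_mb

files = [("trace-001.pcap", 100), ("trace-002.pcap", 100), ("trace-003.pcap", 40)]
kept, used_mb = rotate(files, disk_total_mb=1000, disk_used_mb=950)
print([name for name, _ in kept])  # ['trace-002.pcap', 'trace-003.pcap']
```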
Traces can be stopped and resumed. When a trace is resumed, a new PCAP file is created.
Continuous traces are persistent. When an appliance is restarted, the continuous traces, their
collected data and their statuses are resumed automatically.
In order to request a trace, enter a BPF filter in the corresponding field and click the Start button.
From the moment the button is pressed, Guardian will begin collecting packets corresponding to the
provided filter. The filter can be left empty, in which case all packets will be collected by the requested
continuous trace.
The table at the bottom of the page shows the continuous traces that have been requested. The
following information is given:

Time The time at which the trace has been requested.
ID A unique identifier of the trace request.
User The user who requested the trace.
Packet filter The BPF filter defining the collection.
In progress Whether the collection is active or stopped.

Several actions are available to manage the traces:

Figure 138: Starts the trace collection (disabled if the trace is currently in progress)

Figure 139: Stops the trace collection (disabled if the trace is currently paused)

Figure 140: Destroys the trace and discards all collected data

Figure 141: Downloads an archive containing all the PCAP files belonging to the trace

Figure 142: Lists and downloads the PCAP files collected by the trace
Chapter 6

Security features

Topics:
• Security Control Panel
• Security Configurations
• Manage Network Learning
• Alerts
• Custom Checks: Assertions
• Custom Checks: Specific Checks
• Alerts Dictionary
• Incidents Dictionary
• Packet rules
• Hybrid Threat Detection

In this chapter we will explain how a tailored security shield can be automatically built by Guardian
and subsequently tuned to fit specific needs.
Once the baselining has been performed, different kinds of Alerts will be raised when potentially
dangerous conditions are met. There are four main categories of Alerts, each originating from a
different engine within the product:
1. Protocol Validation: every packet monitored by Guardian is checked for inherent anomalies with
respect to the specific transport and application protocol. This first step is useful to easily detect
buffer overflow attacks, denial of service attacks and other kinds of attacks that aim to stress
non-resilient software stacks. This engine is completely automatic, but can be tuned as specified in
Security Configurations on page 136.
2. Learned Behavior: the product incorporates the concept of a learning phase. During the learning
phase the product observes all network and application behavior, especially SCADA/ICS
commands between nodes. All nodes, connections, commands and variable profiles are monitored
and analyzed and, after the learning phase is closed, every relevant anomaly results in a new Alert.
Details about this engine are described in Learned Behavior.
3. Built-in Checks: known anomalies are also checked in real time. Similarly to Protocol Validation,
this engine is completely automatic and works also in Learning mode, but can be tuned as specified
in Security Configurations on page 136.
4. Custom Checks: automatic checks such as the ones deriving from Protocol Validation and
Learned Behavior are powerful and comprehensive, but sometimes something specific is needed.
This is where Custom Checks come in: a category of custom Alerts that can be raised by the
product in specific conditions. Two subfamilies of Custom Checks exist and are described in
Custom Checks: Assertions on page 148 and Custom Checks: Specific Checks on page 150.
The powerful automatic correlation of Guardian generates Incidents that group specific Alerts into
higher-level actionable items. A complete dictionary of Alerts is provided in Alerts Dictionary on page
152 and Incidents Dictionary on page 159.
Additionally, changing the value of the Security Profile changes the visibility of the alerts shown by
Guardian based on the alert type.
| Security features | 136

Security Control Panel


The Security Control Panel gives an overview of the current status of the learning process and allows
you to configure the features that manage learning, the security profile, the zones and the alert
tuning.

Figure 143: The Security Control Panel overview page

The learning section shows the progress of the engine for both network and process learning. The
Last detected change and Learning started entries report the point in time when the last behavior
change was detected and the time when learning started.

Security Configurations
The security features can be configured using the "Edit" tab of the security control panel. The page
guides the user through four configuration steps that allow an advanced yet simplified customization of
the features.

Learning

Figure 144: The learning editor

Guardian provides a flexible approach to anomaly-based detection, allowing you to choose between
two different approaches:
• Adaptive Learning: uses a less granular and more scalable approach to anomaly detection
where deviations are evaluated at a global level rather than at a single node level. For example,
the addition of a device similar to the ones already installed in the learned network won't produce
alerts. This holds true for the appearance of a similar communication. Adaptive Learning shows its
maximum capabilities when combined with Asset Intelligence.
• Strict Learning: uses a detailed anomaly-based approach, so deviations from the baseline will
be detected and alerted. This approach is called strict because it requires the learned system
to behave like it has behaved during the learning phase, and requires some knowledge of the
monitored system in order to be maintained over time.
The engine has two distinct learning goals: the network and the process. For both cases the engine
can be in learning and in protection mode, and they can be governed independently.
1. Network Learning is about the learning of Nodes, Links, and Function Codes (e.g. commands) that
are sent from one Node to another. A wide range of parameters is checked in this engine and can
be fine-tuned as described in Manage Network Learning on page 142.
2. Process Learning is about the learning of Variables and their behavior. This learning can be fine-
tuned also with specific checks as described in Custom Checks: Specific Checks on page 150.
With the Dynamic Window option you can configure the time interval in which an engine considers a
change to be learned (every engine does this kind of evaluation per node and per network segment).
After this period of time, the learning phase is safely and automatically switched to protection mode,
with the effect of:
• raising alerts when something is different from the learned baseline
• adding suspicious components to the Environment with the "is learned" attribute set to off, in such a
way that an operator can confirm, delete or take proper action from the manage panel.
In this way, stable network nodes and segments become protected automatically, so you are not
overwhelmed with alerts due to the premature closing of learning mode.

Security profile

Figure 145: The security profile editor

The Security Profile allows you to change the visibility of alerts based on their type. By default, the
Security Profile is set to High. Changing the value of the Security Profile has immediate effect on
newly generated alerts and has no effect on existing alerts.

Zone configurations
All settings concerning the learning engine and the security profile can be customized on a per-zone
basis. Please refer to Zone configurations for the details.

Alert tunings

Figure 146: The alert rules editor

In the Tuning section of the Security Control Panel, it is possible to customize the alert behavior.
Specifically, matching criteria can be created by imposing conditions on several fields such as IP
addresses, protocol and many others.
This feature can be selectively enabled for specific user groups.

Figure 147: Alert tuning popup

IP source/destination Set the IP of the source/destination that you want to filter.


MAC source/destination Specify the MAC of the source/destination that you want to filter.
Match IPs and MACs in both Check this if you want to select all the communications between
directions two nodes (IP or MAC) independently of their role in the
communication (source or destination).
Zone source/destination Specify the zone of the source/destination that you want to filter.
Port source/destination Specify the port of the source/destination that you want to filter.
Type ID The type ID of the alert; this field is pre-filled if you create a
new modifier from an alert in the Alerts page.
Trigger ID Unique identifier corresponding to the specific condition that has
triggered the alert.
Protocol Set the protocol that you want to filter.
Note Enter free-form text that describes details of the alert rule.
Execute action Select an action to perform on the matched alerts:
• Mute: Switch ON/OFF: to mute or not the alert.
• Mute Until: Specify a date until which the alert will be muted.

• Change Security Profile Visibility: Set to ON to force the


visibility of the selected alert type for any selected profile, or to
OFF to hide it for any selected profile. Useful for extending or
reducing the default provided security profiles as needed.
• Change risk: Set a custom risk value for the alert.
• Change trace filter: Define a custom trace filter to apply to this
alert.

Priority Set a custom priority; when multiple rules trigger on an alert, the
rule with highest priority applies. "Normal" is the default value if
no selection is made.

As alert rules can be propagated from upstream connections, conflicts between rules are possible.
A conflict is detected when multiple rules, performing the same action, match an alert. To deal with
these collisions, the execution algorithm takes into consideration the source of the rules. The user can
choose one of three policies:
• upstream_only: alert rules are managed in the top CMC or with Vantage. Creation and
modification are disabled in the lower-level appliances. Only the rules received from upstream are
executed;
• upstream_prevails: in case of conflicts, rules coming from upstream are executed;
• local_prevails: in case of conflicts, rules created locally are executed.
A special case is represented by the 'mute' action. Consider the following example: the execution policy
is 'local_prevails' and a mute rule is received by Guardian from an upstream connection. This rule
will be ignored if at least one local rule matches the alert. Vice versa, with the execution policy set to
'upstream_prevails', local 'mute' will be ignored if at least one rule coming from upstream matches the
alert.
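The policy logic above can be sketched in Python. The rule representation and names are hypothetical; only the three policy identifiers come from the text.

```python
# Sketch of the conflict-resolution policies described above. Each rule is
# modelled as a (source, description) tuple, source being 'upstream' or 'local'.

def resolve(matching_rules, policy):
    """Given rules that perform the same action and match the same alert,
    return the rules that are actually executed under the chosen policy."""
    upstream = [r for r in matching_rules if r[0] == "upstream"]
    local = [r for r in matching_rules if r[0] == "local"]
    if policy == "upstream_only":
        return upstream            # local creation/modification is disabled
    if policy == "upstream_prevails":
        return upstream or local   # upstream wins when both match
    if policy == "local_prevails":
        return local or upstream   # local wins when both match
    raise ValueError("unknown policy: " + policy)

rules = [("upstream", "mute nightly scan"), ("local", "mute lab traffic")]
print(resolve(rules, "local_prevails"))     # [('local', 'mute lab traffic')]
print(resolve(rules, "upstream_prevails"))  # [('upstream', 'mute nightly scan')]
```

This mirrors the 'mute' example in the text: under local_prevails, an upstream mute rule is ignored whenever at least one local rule matches the alert, and vice versa.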

Alert closing options

Figure 148: The alert closing options editor

In the Alert closing options section of the Security Control Panel it is possible to customize
the details of the closure of alerts and incidents. When alerts and incidents are closed, the user must
choose the reason why the closure happens. There are two default reasons: actual incident and
baseline change. The list of reasons can be customized. Each reason has a description and a behavior.

Figure 149: Alert closing option pop-up

Reason for closing A concise description that explains the reason
why an alert can be considered closed.
Treat as incident Select this entry if alerts closed using this option
will have to be considered deviations from the
baseline and not changes of the baseline. For
instance, actual incidents, attacks, and false
positives could fall into this category. If the alert is
closed with this option and the same event occurs
in the future, an equivalent alert will be issued
again.
Learn Select this entry if alerts closed using this option
will have to be considered as legitimate changes
to the baseline. For instance, new nodes correctly
connected to the network, configuration changes,
and new legitimately installed software could fall
into this category. The modifications that caused
the alert will be learned into the baseline, and, as
a result, equivalent alerts won't be generated if
the same event happens again.

Manage Network Learning


In the Manage Network Learning tab it is possible to review and manage the Network Learning status
in detail. The graph is initialized with the node and link not learned perspectives which highlight in
red or orange the items unknown to the system. In this way it is easy to discover new elements and
take an action on them.

Figure 150: The manage page with the selection on an unlearned link

A A node which is not learned


B A link which is not learned. If the link is highlighted in orange it is learned,
but some protocols in it are not
C The information correlated to the current selection; the user can select the
items in it using the checkboxes and then execute some actions. When an
item is not learned it will be red; otherwise it will be green
D With the delete button the user can remove the selected item(s) from the
system
E With the learn button the user can insert the selected item(s) in the system
F When the configuration is complete the user can make it persistent using the
save button
G The discard button undoes all the unsaved changes to the system

How to learn protocols


1. Click on a red or orange link; information about the selection will be displayed on the right pane

2. Check the protocol that you want to learn. In this example we check browser. It is possible to
check more than one item at once

3. Click on the Learn button; a mark will appear on all the checked items to be learned, and the
Save button will start to blink, indicating unsaved changes

4. Click on the Save button; the protocol will be learned and it will become green. In this case the
link will also change color and become orange, because some protocols are learned and some others
are not

5. Learning all remaining protocols will result in a completely learned grey link

How to learn function codes


If a protocol is a SCADA protocol, the information pane will also display the function codes. The
procedure for learning function codes is equivalent to the procedure for learning protocols.

Figure 151: A SCADA protocol with function codes

How to learn nodes


1. Click on a red node; its information will be displayed in the right pane

2. Check the item that you want to be learned



3. Click on the Learn button; a mark will appear on all the checked items to be learned, and the
Save button will start to blink, indicating unsaved changes

4. Click on the Save button; the information pane will turn green, and the learned items and the node
in the graph will become grey

Learning from alerts or incidents

Automatic learning
1. Click on the Close alert button.

2. Choose one of the preset reasons for closing the alert or incident. An informative text will indicate
whether the reason is associated with learning a baseline change. Alternatively, you can set a custom
reason and choose whether a baseline change is to be learned or not.

Manual learning
1. Click on the gear icon to go to the learning page.

2. The graph will be focused on the link involved in the alert (clicking on the X button removes the
focus). If the alert indicates a new node, follow the procedure explained above to learn the desired
items.

Alerts
Alerts are generated by the different engines; they can be very detailed and are suitable for drill-down
analysis.
To provide a higher-level view and faster operation of the system, also by users without complete
knowledge of the observed system, Incidents are generated out of all generated Alerts by a powerful
autocorrelation engine.

Incidents summarize Alerts, providing a high-level explanation of what really happened. They are
visible by default in the Alerts table, but can be easily hidden if a more detailed view is required.
Alerts are often a key performance factor for Nozomi environments. We recommend carefully
considering your retention policy. For more information, see Configuring retention.

Custom Checks: Assertions


Assertions can be managed in Analysis > Assertions and are based on N2QL (fully explained
in section Queries on page 85). Thanks to the powerful query language it is possible to ensure that
certain conditions are met on the observed system and to be notified when an assertion is not satisfied.

Figure 152: The assertions page with a saved failing assertion and another assertion during the
editing phase.

A valid assertion is just a normal query with a special assertion command appended at the end. The
assertion commands are:

assert_all <field> <op> <value> The assertion will be satisfied when each element in the query
result set matches the given condition
assert_any <field> <op> <value> The assertion will be satisfied when at least one element in the
query result set matches the given condition
assert_empty The assertion will be satisfied when the query returns an empty result set
assert_not_empty The assertion will be satisfied when the query returns a non-empty result set

For example, it is possible to be notified when someone uses the insecure telnet protocol by saving
the assertion

links | where protocol == telnet | assert_empty
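The semantics of the four assertion commands can be modelled in Python. This is a toy sketch only: the query result set is represented as a list of dicts, which is not how N2QL executes internally.

```python
import operator

# Toy model of the four assertion commands: 'rows' stands for the result
# set of the query part, op is a two-argument comparison predicate.

def assert_all(rows, field, op, value):
    return all(op(r[field], value) for r in rows)

def assert_any(rows, field, op, value):
    return any(op(r[field], value) for r in rows)

def assert_empty(rows):
    return len(rows) == 0

def assert_not_empty(rows):
    return len(rows) > 0

links = [{"protocol": "telnet"}, {"protocol": "modbus"}]
# links | where protocol == telnet | assert_empty
telnet_links = [l for l in links if l["protocol"] == "telnet"]
print(assert_empty(telnet_links))  # False: the assertion fails, telnet is in use
print(assert_all(links, "protocol", operator.ne, "ftp"))  # True
```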

Editing an assertion
To edit an assertion just enter the text in the textbox and press the enter key to execute it. Multiple
assertions can be combined by using the logical operators && (and) and || (or). Round brackets
change the logical grouping as in a mathematical expression.

(links | where protocol == telnet | assert_empty && links | where protocol == iec104 |
assert_empty) && (nodes | where is_learned == false | assert_empty)

Figure 153: A complex assertion being debugged

An assertion with logical operators and brackets can quickly become complex; to make the editing
task easier, a debug functionality is present. By pressing the debug button (on the right side of the
textbox), the query will be decomposed and the single pieces will be executed to show the
intermediate results.

Saving an assertion

Assertions can be saved in order to have them continuously executed in the system. To save an
assertion just write it in the textbox, press the enter key to execute it and then click on the save button.
A dialog will pop up asking for the assertion name and some other information. In particular the

assertion needs to be assigned to an existing group. It is possible to create a new group by clicking on
the "New Group" button. The following dialog will appear asking for a group name.

It is also possible to choose whether the assertion has to trigger an alert. The saved assertion will be
listed at the bottom of the page with a green or red color to indicate the result.
Note: When editing the alert risk, only newly raised alerts are affected.

Custom Checks: Specific Checks


Specific Checks can be added to Links and Variables by opening the dedicated configuration dialog.
To configure checks on a Link, go to the Links table (or any other section where the Link Actions are

displayed) and click on the button.

Here you can flag and configure these checks:


1. Is persistent: when enabled, this check will raise a new Alert whenever a TCP handshake is
successfully completed on the Link.
2. Alert on SYN: when enabled, this check will raise a new Alert whenever a TCP SYN is sent by a
client on the Link.
3. Last Activity check: when enabled, this check will raise an Alert whenever the link has not received
any data for more than the specified number of seconds.
(Track Availability, instead, does not trigger any alert.)
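The Last Activity check can be sketched as a simple threshold test. The function, timestamps and identifiers below are hypothetical; the real check runs inside Guardian against observed traffic.

```python
def last_activity_alerts(last_seen, now, threshold_s):
    """Return the ids whose last observed activity is older than threshold_s
    seconds; each returned id would correspond to one raised Alert."""
    return sorted(i for i, t in last_seen.items() if now - t > threshold_s)

last_seen = {"link-a": 1000, "link-b": 1950}   # unix timestamps of last data
print(last_activity_alerts(last_seen, now=2000, threshold_s=600))  # ['link-a']
```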

To configure checks on a Variable, go to the Variables table and click on the button.

Here you can flag and configure these checks:


1. Last Activity check: when enabled, this check will raise an Alert whenever the Variable has not
been measured or changed for more than the specified number of seconds.
2. Invalid quality check: when enabled, this check will raise an Alert whenever the Variable keeps an
invalid quality for more than the specified number of seconds.
3. Disallowed qualities check: when enabled, this check will raise an Alert whenever the Variable
gains one of the specified qualities.

Alerts Dictionary
As explained at the beginning of this chapter, four categories of Alerts can be generated from the
Nozomi Networks Solution. Here we propose a complete list of the different kinds of Alerts that can
be raised. It should be noted that some Alerts can specify the triggering condition: for instance, the
Malformed Packet Alert can be instantiated by each protocol with some specific checked information.
The tables contain the following information:
• Type ID: the strict identifier for an alert type. Use this field to set up integrations.
• Name: a friendly name identifier.
• Security profile: the default profile the alert type belongs to.
• Risk: the default base risk the alert shows. For specific instances, this value is weighted by other
factors (the learning state of the involved nodes and their reputation) and it will result in a different
number.
• Details: general information about the alert event, and what has caused it.
• Release: the minimum release version featuring that alert type. The minimum considered release
version is 18.0.0.
• Trace: whether a trace is produced or not. Note: Traces are always based on buffered data and,
depending on the overall network traffic throughput, the buffer might not contain all of the packets
responsible for the alert itself. Only the last packet responsible for triggering the alert is always
present as the trace is generated.

Protocol Validations
An undesired protocol behavior has been detected. This can refer to a wrong single message, to
a correct single message not supposed to be transmitted or transmitted at the wrong time (state
machines violation) or to a malicious message sequence. Protocol specific error messages indicating
misconfigurations also trigger alerts that fall into this category.

Each entry below lists the Type ID and Name, then the Security Profile, Risk, Release and Trace
values, followed by the Details.

NET:RST-FROM-PRODUCER (Link RST sent by Producer)
Sec. Prof.: LOW | Risk: 3 | Release: 18.0.0 | Trace: YES
The link has been dropped because of a TCP RST sent by the producer. Verify that the device is
working properly, that no misconfigurations are in place and that the network does not suffer
excessive latency.

PROC:SYNC-ASKED-AGAIN (Producer sync requested)
Sec. Prof.: PARANOID | Risk: 3 | Release: 18.0.0 | Trace: YES
A new sync (e.g. General Interrogation in iec101 and iec104) command has been issued, while in
some links it is sent only once per started connection. It may be due to a specific sync request of an
operator, a cyclic sync, or to someone trying to discover the process global state.

PROC:WRONG-TIME (Process time issue)
Sec. Prof.: HIGH | Risk: 3 | Release: 18.0.0 | Trace: YES
The time stamp specified in process data is not aligned with the current time. There could be a time
sync issue with the source device, a malfunction or a packet injection. Verify the device configuration
and status.

SIGN:ARP:DUP (Duplicated IP)
Sec. Prof.: HIGH | Risk: 5 | Release: 18.0.0 | Trace: YES
ARP messages have shown a duplicated IP address in the network. It may be a misconfiguration of
one of the devices, or an attempted MITM attack.

SIGN:DDOS (DDOS attack)
Sec. Prof.: HIGH | Risk: 5 | Release: 19.0.0 | Trace: YES
A suspicious Distributed Denial of Service has been detected on the network. Verify that all the
devices in the network are allowed and behaving correctly.

SIGN:DHCP-OPERATION (DHCP operation)
Sec. Prof.: HIGH | Risk: 4 | Release: 18.0.0 | Trace: YES
A suspicious DHCP operation has been detected. This is related to the presence of new MAC
addresses served by the DHCP server, and to wrong DHCP replies.

SIGN:ILLEGAL-PARAMETERS (Illegal parameters request)
Sec. Prof.: MEDIUM | Risk: 7 | Release: 19.0.0 | Trace: YES
A request with illegal parameters (e.g. outside a legal range) has been issued. This may mean that
malfunctioning software is trying to perform an operation without success or that a malicious attacker
is trying to understand the functionalities of the device.

SIGN:INVALID-IP (Invalid IP)
Sec. Prof.: HIGH | Risk: 7 | Release: 18.0.0 | Trace: YES
A packet with an IP reserved for special purposes (e.g. loopback addresses) has been detected.
Packets with such addresses can be related to misconfigurations or spoofing/denial of service attacks.

SIGN:MAC-FLOOD (Flood of MAC addresses)
Sec. Prof.: MEDIUM | Risk: 7 | Release: 20.0.1 | Trace: YES
A high number of new MAC addresses has appeared in a short time. This can be a flooding technique.

SIGN:MALICIOUS-PROTOCOL (Malicious Protocol detected)
Sec. Prof.: LOW | Risk: 6 | Release: 19.0.0 | Trace: YES
An attempted communication by a protocol known to be related to threats has been detected.

SIGN:MULTIPLE-ACCESS-DENIED (Multiple Access Denied events)
Sec. Prof.: MEDIUM | Risk: 8 | Release: 19.0.5 | Trace: YES
A host has repeatedly been denied access to a resource. Check the authorization rights.

SIGN:MULTIPLE-OT_DEVICE-RESERVATIONS (Multiple OT device reservations)
Sec. Prof.: HIGH | Risk: 8 | Release: 19.0.0 | Trace: YES
A host has repeatedly tried to reserve the usage of an OT device, causing a potential denial of service.

SIGN:MULTIPLE-UNSUCCESSFUL-LOGINS (Multiple unsuccessful logins)
Sec. Prof.: MEDIUM | Risk: 8 | Release: 18.0.0 | Trace: YES
A host has repeatedly tried to log in to a service without success. It can be either a user or a script,
due to a malicious entity or a wrong configuration. Check your authentication parameters.

SIGN:NETWORK-MALFORMED (Malformed Network packet)
Sec. Prof.: MEDIUM | Risk: 7 | Release: 18.0.0 | Trace: YES
A malformed packet within a general-purpose IT protocol has been detected. A maliciously malformed
packet can target known issues in devices or software versions, and thus should be considered
carefully as the source of a possible attack.

SIGN:NETWORK-SCAN (Network Scan)
Sec. Prof.: MEDIUM | Risk: 7 | Release: 19.0.0 | Trace: YES
An attempt to reach many target hosts or ports in a target network (vertical or horizontal scan) has
been detected. This Alert covers many possible transport protocols.

SIGN:PROC:MISSING-VAR (Missing variable request)
Sec. Prof.: HIGH | Risk: 6 | Release: 18.0.0 | Trace: YES
An attempt to access a nonexistent variable has been made. This may be due to a misconfiguration or
to an attempt to discover valid variables inside a producer. Example: COT 47 in iec104.

SIGN:PROC:UNKNOWN-RTU (Missing or unknown device)
Sec. Prof.: MEDIUM | Risk: 6 | Release: 18.0.0 | Trace: YES
An attempt to access a nonexistent, virtual (controller's logical portion) or physical device has been
made. This may be due to a misconfiguration or to an attempt to discover the network. Example: COT
46 in iec104.

SIGN:PROTOCOL-ERROR (Protocol error)
Sec. Prof.: HIGH | Risk: 7 | Release: 18.0.0 | Trace: YES
A generic protocol error occurred; this usually relates to a wrong field, option or other general
violation of the protocol.

SIGN:PROTOCOL-FLOOD (Protocol-based flood)
Sec. Prof.: MEDIUM | Risk: 7 | Release: 19.0.4 | Trace: YES
One or more hosts have sent a suspiciously high amount of packets with the same application layer
(e.g. ping requests) to a single target host.

SIGN:SCADA-INJECTION (OT protocol packet injection)
Sec. Prof.: LOW | Risk: 9 | Release: 18.0.0 | Trace: YES
A correct OT protocol packet injected in the wrong context has been detected: this may cause
equipment to operate improperly. Carefully check the involved systems and communications.
Example: a correct GOOSE message sent with a wrong sequence number (that, if received at the
right moment, would just work instead).

SIGN:SCADA-MALFORMED (Malformed OT protocol packet)
Sec. Prof.: MEDIUM | Risk: 7 | Release: 18.0.0 | Trace: YES
A malformed packet within an OT protocol has been detected. A maliciously malformed packet can
target known issues in devices or software versions, and thus should be considered carefully as the
source of a possible attack.

SIGN:TCP-FLOOD (TCP flood)
Sec. Prof.: MEDIUM | Risk: 7 | Release: 19.0.4 | Trace: YES
One or more hosts have sent a great amount of anomalous TCP packets or TCP FIN packets to a
single target host.

SIGN:TCP-MALFORMED (Malformed TCP layer)
Sec. Prof.: MEDIUM | Risk: 7 | Release: 20.0.0 | Trace: YES
A packet containing a semantically invalid TCP header has been observed.

SIGN:TCP-SYN-FLOOD (TCP SYN flood)
Sec. Prof.: MEDIUM | Risk: 7 | Release: 18.0.0 | Trace: YES
One or more hosts send a great amount of TCP SYN packets to a single host. This may cause
resource exhaustion.

SIGN:UDP-FLOOD (UDP flood)
Sec. Prof.: MEDIUM | Risk: 7 | Release: 20.0.7.1 | Trace: YES
One or more hosts have sent a great amount of UDP packets to a single target host.

SIGN:UNSUPPORTED-FUNC (Unsupported function request)
Sec. Prof.: MEDIUM | Risk: 7 | Release: 19.0.0 | Trace: YES
An unsupported function (e.g. not defined in the specification) has been used on the OT device. This
may be malfunctioning software trying to perform an operation without success or a malicious attacker
trying

to understand the device functionalities. Example: COT

44 in iec104.

Virtual Image
The virtual image is the set of information by which Guardian represents the monitored network. This includes, for example, node properties, links, protocols, function codes, variables, and variable values. Such information is collected via learning, Smart Polling, or external contents such as Asset Intelligence. Alerts in this group represent deviations from the expected behaviors, according to the learned or fed information.
Note: when an alert of this category is raised, if the related event is not considered a malicious attack or an anomaly, it can be learned.

Type ID | Name | Sec. Prof. | Risk | Details | Release | Trace
VI:CONF-MISMATCH | Configuration Mismatch | MEDIUM | 7 | A parameter describing a configuration version that was previously imported from a project has been observed with a different value in the traffic. Check the traffic and, if the new configuration is legitimate, re-import the project into Guardian with up-to-date reference values. Otherwise, restore the legitimate configuration in the network. | 20.0.0 | YES
VI:GLOBAL:NEW-FUNC-CODE | New global function code | MEDIUM | 5 | A previously unseen protocol function code has appeared in the network. | 19.0.4 | YES
VI:GLOBAL:NEW-MAC-VENDOR | New global MAC vendor | MEDIUM | 5 | A previously unseen MAC vendor has appeared in the network. | 19.0.4 | YES
VI:GLOBAL:NEW-VAR-PRODUCER | New global variable producer | HIGH | 5 | A node has started sending variables. It can be a new command, a new object, or an attempt by a malicious attacker to enumerate existing variables. | 21.3.0 | YES
VI:KB:UNKNOWN-FUNC-CODE | Unknown asset function code | HIGH | 5 | The node has communicated using a function code that is not known for this kind of asset. This detection is possible by knowing the specific asset's profile. | 20.0.0 | YES
VI:KB:UNKNOWN-PROTOCOL | Unknown asset's protocol | HIGH | 5 | The node has communicated using a protocol that is not known for this kind of asset. This detection is possible by knowing the specific asset's profile. | 20.0.0 | YES
VI:NEW-ARP | New ARP | HIGH | 4 | A new MAC address has started requesting ARP information. | 18.0.0 | YES
VI:NEW-FUNC-CODE | New function code | HIGH | 6 | A known protocol between two nodes has started using a new function code (i.e., message type). For example, if client A normally uses the function code 'read' when talking to server B, this alert is raised if client A begins to use the function code 'write'. | 18.0.0 | YES
VI:NEW-LINK | New link | HIGH | 5 | Two nodes have started communicating with each other. | 18.0.0 | YES
VI:NEW-MAC | New MAC address | HIGH | 6 | A new MAC address has appeared in the network. | 18.0.0 | YES
VI:NEW-NET-DEV | New network device | MEDIUM | 3 | A new network device (switch or router) has appeared on the network. | 18.0.0 | YES
VI:NEW-NODE | New node | MEDIUM | 5 | A new node has appeared on the network. | 18.0.0 | YES
VI:NEW-NODE:MALICIOUS-IP | Bad reputation ip | LOW | 5 | A node with a bad-reputation IP has been detected. It is suggested to validate the health status of the communicating nodes, as they may be infected by malware. | 20.0.0 | YES
VI:NEW-NODE:TARGET | New target node | HIGH | 4 | A new target node has appeared on the network. This node is not yet confirmed to exist, as it has not yet sent back any data. | 18.0.0 | YES
VI:NEW-PROTOCOL | New protocol used | HIGH | 4 | A node has started communicating with a new protocol. | 18.0.0 | YES
VI:NEW-PROTOCOL:APPLICATION | New application on link | HIGH | 5 | A link between two nodes has upgraded to a specific application protocol. | 18.0.0 | YES
VI:NEW-PROTOCOL:CONFIRMED | New confirmed protocol | HIGH | 5 | Two nodes have started a working, confirmed connection for a given protocol. | 18.0.0 | YES
VI:NEW-SCADA-NODE | New OT node | HIGH | 6 | A new node, using an OT protocol, has appeared on the network. | 18.0.0 | YES
VI:PROC:NEW-VALUE | New OT variable value | HIGH | 6 | A variable has been set to a value never seen before. | 18.0.0 | YES
VI:PROC:NEW-VAR | New OT variable | HIGH | 6 | A new variable has been sent, or accessed by a client. It can be a new command, a new object, or an attempt by a malicious attacker to enumerate existing variables. | 18.0.0 | YES
VI:PROC:PROTOCOL-FLOW-ANOMALY | Protocol flow anomaly | HIGH | 8 | A message aimed at reading/writing one or multiple variables, which is sent cyclically, has changed its transmission interval. Example: an iec104 command breaking its normal transmission cycle. | 18.0.0 | YES
VI:PROC:VARIABLE-FLOW-ANOMALY | Variable flow anomaly | HIGH | 6 | A variable which is sent cyclically has changed its transmission interval. | 18.0.0 | YES

Built-in Checks
Built-in checks run without the need for learning because they are based on specific signatures or hard-coded logic with reference to: known ICS threats (via signatures provided by Threat Intelligence), known malicious operations, system weaknesses, or protocol-compliant operations that can impact the network/ICS functionality.

Type ID | Name | Sec. Prof. | Risk | Details | Release | Trace
SIGN:CLEARTEXT-PASSWORD | Cleartext password | MEDIUM | 7 | A cleartext password has been issued or requested. | 19.0.0 | YES
SIGN:CONFIGURATION-CHANGE | Configuration change | MEDIUM | 6 | A changed configuration has been uploaded to the OT device. This can be a legitimate operation during maintenance and upgrade of the software, or an unauthorized attempt to disrupt the normal behavior of the system. | 18.0.0 | YES
SIGN:CPE:CHANGE | CPE change | LOW | 0 | An installed-software change has been detected. The change relates to the vulnerabilities list, possibly changing it. | 18.0.0 | YES
SIGN:DEV-STATE-CHANGE | Device state change | MEDIUM | 7 | A command that can alter the device state has been detected. Examples are a request to reset the processor's memory, and technology-specific cases. | 18.0.0 | YES
SIGN:FIRMWARE-CHANGE | Firmware change | HIGH | 6 | A firmware has been uploaded to the device. This can be a legitimate operation during maintenance or an unauthorized attempt to change the behaviour of the device. | 19.0.0 | YES
SIGN:MALICIOUS-DOMAIN | Malicious domain | LOW | 5 | A DNS query towards a malicious domain has been detected. It is suggested to investigate the health status of the involved nodes. | 19.0.0 | YES
SIGN:MALICIOUS-IP | Bad ip reputation | LOW | 5 | A node with a bad-reputation IP has been found. | 19.0.0 | YES
SIGN:MALICIOUS-URL | Malicious URL | LOW | 5 | A request towards a malicious URL has been detected. It is recommended to investigate the health status of the involved nodes. | 19.0.0 | YES
SIGN:MALWARE-DETECTED | Malware detection | LOW | 9 | A potentially malicious payload has been transferred. | 18.0.0 | NO
SIGN:MITM | MITM attack | LOW | 10 | A potential MITM attack has been detected. The attacker is ARP-poisoning the victims. The attacker node could alter the communication between its victims. | 20.0.5 | NO
SIGN:OT_DEVICE-REBOOT | OT device reboot request | HIGH | 6 | An OT device program has been requested to reboot (e.g., by the engineering workstation). This may be due to engineering operations, for instance maintenance of the program itself or a system update. However, it may indicate suspicious activity from an attacker trying to manipulate the device execution. | 18.0.0 | YES
SIGN:OT_DEVICE-START | OT device start request | HIGH | 6 | An OT device program has been requested to start (e.g., by the engineering workstation). This may be due to engineering operations, for instance maintenance of the program itself or a system update. However, it may indicate suspicious activity from an attacker trying to manipulate the device execution. | 18.0.0 | YES
SIGN:OT_DEVICE-STOP | OT device stop request | HIGH | 9 | An OT device program has been requested to stop (e.g., by the engineering workstation). This may be due to engineering operations, for instance maintenance of the program itself or a system update. However, it may indicate suspicious activity from an attacker trying to manipulate the device execution. | 18.0.0 | YES
SIGN:OUTBOUND-CONNECTIONS | High rate of outbound connections | LOW | 9 | A host has shown a sudden increase in outbound connections. This could be due to the presence of malware, and it should be carefully validated. | 21.0.0 | YES
SIGN:PACKET-RULE | Packet rule match | LOW | 9 | A packet has matched a packet rule. | 18.0.0 | YES
SIGN:PASSWORD:WEAK | Weak password | HIGH | 5 | A weak, possibly default, password has been used to access a resource. To keep your network secure, change and maintain the password. | 18.5.0 | YES
SIGN:PROGRAM:CHANGE | Program change | MEDIUM | 6 | A changed program has been uploaded to the OT device. This can be a legitimate operation during maintenance and upgrade of the software, or an unauthorized attempt to disrupt the normal behavior of the system. | 18.0.0 | YES
SIGN:PROGRAM:DOWNLOAD | Program download | HIGH | 6 | A program has been downloaded from an OT device (e.g., an IED) by a workstation / local SCADA. This can be a legitimate operation during maintenance and upgrade of the software, or an unauthorized attempt to read the program logic. | 18.0.0 | YES
SIGN:PROGRAM:UPLOAD | Program upload | HIGH | 9 | A program has been uploaded to an OT device (e.g., an IED) by a workstation / local SCADA. This can be a legitimate operation during maintenance and upgrade of the software, or an unauthorized attempt to write the program logic. | 18.0.0 | YES
SIGN:PUA-DETECTED | PUA detection | MEDIUM | 5 | A potentially unwanted application (PUA) payload has been transferred. This is normally less dangerous than a malware payload. | 20.0.6 | YES
SIGN:SUSP-TIME | Suspicious time value | HIGH | 7 | A suspicious time has been observed in the network. There could be a malfunctioning device or a packet injection. Verify the device configuration and status. | 20.0.0 | YES
SIGN:WEAK-ENCRYPTION | Weak encryption | PARANOID | 6 | The communication has been encrypted using an obsolete cryptographic protocol, weak cipher suites, or invalid certificates. | 19.0.5 | YES

Custom Checks
These are checks put in place by the user. The nature of an event related to a custom check generally cannot be considered a problem per se unless it is contextualized to the specific network and installation.

Type ID | Name | Sec. Prof. | Risk | Details | Release | Trace
ASRT:FAILED | Assertion Failed | LOW | 0 | An assertion has failed. | 18.0.0 | YES
GENERIC:EVENT | Generic Event | LOW | 0 | A generic event has been generated. More details are available in the description of the event. | 20.0.5 | YES
NET:INACTIVE-PROTOCOL | Inactive protocol | LOW | 3 | The link has been inactive for longer than the set threshold. | 18.0.0 | YES
NET:LINK-RECONNECTION | Link reconnection | LOW | 3 | A link configured to be persistent has experienced a complete TCP reconnection. | 18.0.0 | YES
NET:TCP-SYN | TCP SYN | LOW | 3 | A connection attempt (TCP SYN) has been detected on a link. | 18.0.0 | YES
PROC:CRITICAL-STATE-OFF | Critical state off | LOW | 1 | The system has recovered from a user-defined critical process state. Investigate the mentioned critical state. | 18.0.0 | YES
PROC:CRITICAL-STATE-ON | Critical state on | LOW | 9 | The system has entered a user-defined critical process state. Check whether all the monitored values are still safe. | 18.0.0 | YES
PROC:INVALID-VARIABLE-QUALITY | Invalid variable quality | LOW | 3 | A variable has shown a quality bit set for longer than the set threshold. | 18.0.0 | YES
PROC:NOT-ALLOWED-INVALID-VARIABLE | Not allowed variable quality | LOW | 3 | A variable has shown one or more specific quality bits that the user set as not allowed. | 18.0.0 | YES
PROC:STALE-VARIABLE | Stale variable | LOW | 3 | A variable has not been updated for longer than the set threshold. | 18.0.0 | YES

Incidents Dictionary

Protocol Validations
An undesired protocol behavior has been detected. This can refer to a single wrong message, to a correct single message that was not supposed to be transmitted or was transmitted at the wrong time (a state machine violation), or to a malicious message sequence. Protocol-specific error messages indicating misconfigurations also trigger alerts that fall into this category.

Type ID | Name | Details
INCIDENT:ANOMALOUS-PACKETS | Anomalous Packets | Malformed packets have been detected during deep packet inspection.

Virtual Image
The virtual image is the set of information by which Guardian represents the monitored network. This includes, for example, node properties, links, protocols, function codes, variables, and variable values. Such information is collected via learning, Smart Polling, or external contents such as Asset Intelligence. Alerts in this group represent deviations from the expected behaviors, according to the learned or fed information.
Note: when an alert of this category is raised, if the related event is not considered a malicious attack or an anomaly, it can be learned.

Type ID | Name | Details
INCIDENT:INTERNET-NAVIGATION | Internet Navigation | A node has started surfing the Web.
INCIDENT:VARIABLES-FLOW-ANOMALY | Variables Flow Anomaly | A timing change has been detected on a variable that used to be updated or read at a regular interval.
INCIDENT:VARIABLES-FLOW-ANOMALY:CONSUMER | Variables Flow Anomaly on Consumer | A consumer that used to update or read a variable at a regular interval has been detected to have changed its update interval.
INCIDENT:VARIABLES-FLOW-ANOMALY:PRODUCER | Variables Flow Anomaly on Producer | A producer that used to update or read a variable at a regular interval has been detected to have changed its update interval.
INCIDENT:VARIABLES-NEW-VALUES | New Values on Producer | New variable values or behavior have been detected in an OT device.
INCIDENT:VARIABLES-NEW-VARS | New Variables on Producer | New variables have been detected in the OT system.
INCIDENT:VARIABLES-NEW-VARS:CONSUMER | New Variables Requested on Consumer | A new variable has been detected in a consumer device.
INCIDENT:VARIABLES-NEW-VARS:PRODUCER | New Variables Arrived on Producer | A new variable has been detected in a producer device.
INCIDENT:VARIABLES-SCAN | Variable Scan | A node in the network has started scanning nonexistent variables.

Built-in Checks
Built-in checks run without the need for learning because they are based on specific signatures or hard-coded logic with reference to: known ICS threats (via signatures provided by Threat Intelligence), known malicious operations, system weaknesses, or protocol-compliant operations that can impact the network/ICS functionality.

Type ID | Name | Details
INCIDENT:BRUTE-FORCE-ATTACK | Brute-force Attack | Several failed login attempts to a node, using a specific protocol, have been detected.
INCIDENT:ENG-OPERATIONS | Engineering Operations | Various operations to modify the configuration, the program, or the status of a device have been detected.
INCIDENT:FUNCTION-CODE-SCAN | Function Code Scan | A node has performed several actions that are not supported by the target devices.
INCIDENT:ILLEGAL-PARAMETER-SCAN | Illegal Parameter Scan | A node has performed a scan of the parameters available on a device.
INCIDENT:MALICIOUS-FILE | Malicious File | A compressed archive containing malware has been transferred.
INCIDENT:SUSPICIOUS-ACTIVITY | Suspicious Activity | Suspicious activity, potentially related to known malware, has been detected between two nodes.
INCIDENT:FORCE-COMMAND | Force Command | Forcing of a command through multiple packets.
INCIDENT:WEAK-PASSWORDS | Weak Passwords | Several weak passwords have been detected in the communication between two nodes in the network.

Hybrid Threat Detection
The Hybrid category is assigned when alerts belonging to different categories, as defined in the Alerts Dictionary, are grouped within a single incident. The other categories are as defined in the Alerts Dictionary.

Type ID | Name | Details
INCIDENT:NEW-COMMUNICATIONS | New Communications | A node has started to communicate with a new protocol.
INCIDENT:NEW-NODE | New Node | A new, previously unseen node has started to send packets in the network.
INCIDENT:PORT-SCAN | Network Scan | A node has started a series of scans in the network.

Packet rules
This topic describes the packet rules of the Nozomi Networks solution.

Introduction
Packet rules are a tool provided by the Nozomi Networks solution to describe malicious network activity, detect packets that match the rules, and generate alerts. Packet rules enrich and expand the checks that are already performed on the network traffic. The Nozomi Networks solution checks and analyzes all traffic against packet rules.
An alert of type SIGN:PACKET-RULE is sent when a match is found. You can explore packet rules and
learn to edit them in this section: Threat Intelligence on page 179.

Packet rule format


Packet rules are divided into two logical sections, the rule header and the rule options. The rule header
contains the rule's action, transport protocol, source IP address, source ports to match, destination
IP address and destination ports to match. The rule options describe more detailed conditions for the
match and details about the alert that will be generated in case of a match.
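Putting these two sections together, a complete rule could look like the following sketch, which mirrors the rule layout used in the examples later in this section (the IP address, port, and message here are hypothetical, chosen only for illustration):

```
alert tcp any any -> 10.0.0.5 445 (msg:"SMB traffic towards the file server"; content:"SMB";)
```

The part before the parentheses is the rule header (action, transport, source address and ports, destination address and ports); the semicolon-separated key-value pairs inside the parentheses are the rule options.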

Basic packet rule sections

This topic describes the basic packet rule sections.

action: Action to execute on match (only alert is currently supported).
transport: Transport protocol to match, which can be tcp, udp, or ip.
src_addr: Source IP address to match; this can be any or a valid IP address. In the former case, no check is performed; in the latter, the source node ID is compared against the specified IP address.
src_port(s): Source ports to match. The format can be any (to match everything), a single number, a set (e.g., [80,8080]), a range (e.g., 400:500), a range open to the left bound (e.g., :500), or a range open to the right bound (e.g., 400:). A set can contain a combination of comma-separated single ports and ranges (e.g., [:5,9,10,12:]).
dst_addr: Destination IP address to match; this can be any or a valid IP address. In the former case, no check is performed; in the latter, the destination node ID is compared against the specified IP address.
dst_port(s): Destination ports to match. The format can be any (to match everything), a single number, a set (e.g., [80,8080]), a range (e.g., 400:500), a range open to the left bound (e.g., :500), or a range open to the right bound (e.g., 400:). A set may contain single ports and ranges separated by commas (e.g., [:5,9,10,12:]).
options: Options alter the behavior of the packet rule and attach information to it. Options are a list of key-value pairs separated by semicolons (e.g., content: <value1>; pcre: <value2>). Options are further explained in the next section.
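The port specification grammar described above can be sketched in a few lines of code. The following is an illustrative model of how such a specification could be interpreted, not the actual matching engine:

```python
def port_matches(spec: str, port: int) -> bool:
    """Return True if `port` satisfies a port specification: 'any',
    a single number, a range ('400:500', ':500', '400:'), or a set
    combining single ports and ranges (e.g. '[:5,9,10,12:]')."""
    if spec == "any":
        return True
    if spec.startswith("[") and spec.endswith("]"):
        # A set matches if any of its comma-separated parts matches.
        return any(port_matches(part, port) for part in spec[1:-1].split(","))
    if ":" in spec:
        # Open bounds are represented by an empty side of the colon.
        lo, _, hi = spec.partition(":")
        return (not lo or port >= int(lo)) and (not hi or port <= int(hi))
    return port == int(spec)
```

For example, `port_matches("[:5,9,10,12:]", 9)` is true, while `port_matches(":500", 501)` is false.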

Options
There are two categories of options: general rule options and detection options. General rule options provide information about the rule but do not have any effect during detection. General rule options include msg and reference.

Detection rule options include:

• payload options that look for data inside the packet payload
• non-payload options that look for non-payload data
• post-detection options that are rule-specific triggers that happen after a rule has "fired"

The set of supported detection options includes: content, byte_extract, byte_jump, byte_math, byte_test, dsize, flags, flow, flowbits, file_data, frag_bits, id, isdataat, pkt_data, pcre, urilen.

msg: Defines the message that will be present in the alert.
Example: msg:"a sample description"

reference: Defines the CVE associated with the packet rule.
Example usage: reference:cve,2017-0144;

content: Specifies the data to be found in the payload; may contain printable characters, bytes in hexadecimal format delimited by pipes, or a combination of both.
Examples:
• content: "SMB" searches for the string SMB in the payload
• content: "|FF FF FF|" searches for 3 bytes FF in the payload
• content: "SMB|FF FF FF|" searches for the string and the 3 bytes FF in the payload
The content option may have several modifiers that influence the behavior:
• depth: specifies how far into the packet the content should be searched
• offset: specifies where to start searching in the packet
• distance: specifies where to start searching in the packet relative to the last option match
• within: used together with distance; specifies how many bytes can lie between pattern matches
Example: given the rule alert tcp any any -> any any (content:"x"; content:"y"; distance: 2; within: 2;), the packet {'x', 0x00, 0x00, 0x00, 'y'} will match, while the packet {'x', 0x00, 0x00, 0x00, 0x00, 'y'} will not, because the distance and within constraints are not respected.
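The interplay of distance and within can be modeled as follows. This is a simplified illustration of the semantics described above (two patterns chained with distance and within), not the actual detection engine:

```python
def two_content_match(payload: bytes, first: bytes, second: bytes,
                      distance: int, within: int) -> bool:
    """Illustrative model of two `content` options chained with
    `distance` and `within`: the second pattern must lie entirely
    inside a window of `within` bytes starting `distance` bytes
    after the end of the first match."""
    i = payload.find(first)
    if i < 0:
        return False
    # The search window opens `distance` bytes past the first match.
    start = i + len(first) + distance
    # bytes.find with an end bound requires the match to fit in the window.
    return payload.find(second, start, start + within) >= 0
```

With this model, the two packets from the example above behave as described: `b"x\x00\x00\x00y"` matches, while `b"x\x00\x00\x00\x00y"` does not.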

byte_extract: Reads bytes from the packet and saves them in a variable.
Syntax: byte_extract:<bytes_to_extract>, <offset>, <name>[, relative][, big|little]
For example: byte_extract:2,26,TotalDataCount,relative,little reads two bytes from the packet at offset 26 and puts them in a variable called TotalDataCount. The offset is relative to the last matching option and the data encoding is little endian.

byte_jump: Reads the given number of bytes at the given offset and moves the offset by their numeric representation.
Syntax: byte_jump:<bytes to convert>,<offset>[,relative][,little][,align]
For example: byte_jump:2,1,little; reads two bytes at offset 1, interprets them as little endian, and moves the offset.

byte_math: Reads the given number of bytes at the given offset, performs an arithmetic operation, saves the result in a variable, and moves the offset.
Syntax: byte_math: bytes <bytes to convert>, offset <offset>, oper <operator>, rvalue <r_value>, result <result_variable>[,relative][,endian <endianness>]
For example: byte_math:bytes 2, offset 1, oper +, rvalue 23, result my_sum; reads two bytes at offset 1, interprets them as big endian, adds 23 to the value, stores the result in the variable my_sum, and moves the offset.

byte_test: Tests a byte against a value or a variable.
Syntax: byte_test:<bytes to convert>, <operator>, <value>, <offset>[, relative][, big|little] where <operator> can be = or >.
For example: byte_test: 2, =, var, 4, relative; reads two bytes at offset 4 (relative to the last matching option) and tests whether the value is equal to the variable called var.
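The byte-oriented options above all share the same core mechanic: read a few bytes at an offset, interpret them as an integer, then either move the cursor or combine the value arithmetically. A small sketch of that mechanic (an illustrative model, not the engine's implementation):

```python
def byte_jump(payload: bytes, count: int, offset: int,
              little: bool = False) -> int:
    """Model of byte_jump: read `count` bytes at `offset`, interpret
    them as an integer, and return the new cursor position just past
    those bytes plus the value read."""
    value = int.from_bytes(payload[offset:offset + count],
                           "little" if little else "big")
    return offset + count + value

def byte_math(payload: bytes, count: int, offset: int,
              oper, rvalue: int) -> int:
    """Model of byte_math: read `count` bytes at `offset` (big endian,
    the default) and combine them with `rvalue` using `oper`."""
    value = int.from_bytes(payload[offset:offset + count], "big")
    return oper(value, rvalue)
```

For instance, modeling byte_jump:2,1,little; on a payload whose bytes 1..2 encode the value 3 moves the cursor to 1 + 2 + 3 = 6.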
dsize: Matches payloads of a given size.
Syntax: dsize: min<>max; or dsize: <max; or dsize: >min;
Matches if the size of the payload falls within the given boundaries. The IP, TCP, and UDP headers are not counted in the payload size.

id: Matches IP packets with a given ID.
Syntax: id: <id>;

isdataat: Verifies that the payload has data at the given position.
Syntax: isdataat:<offset>[,relative]
For example: isdataat:2,relative; verifies that there is data at offset 2 relative to the previous match.
flags: Matches TCP packets with the given TCP flags set.
Syntax: flags: <flags>;

flow: Matches TCP packets according to the state and direction of the session.
Syntax: flow: [established,not_established,from_client,from_server,to_client,to_server]
For example: flow: established,from_server; matches the responses in an established TCP session.
flowbits: Checks and sets boolean flags in sessions.
Syntax: flowbits: [set,setx,unset,toggle,reset,isset,isnotset],<name>
For example: flowbits: set,has_init; sets the has_init flag on the session if the packet rule matches the packet. flowbits: isnotset,has_init matches packets whose session does not have the has_init flag set.
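The flag semantics can be sketched as a small state machine over a per-session set of named flags. This is an illustrative model of the operations listed above, not the actual engine (the setx and reset variants are omitted for brevity):

```python
class Session:
    """Minimal model of the per-session state that flowbits operates on."""
    def __init__(self):
        self.flags = set()

def flowbits(session: Session, op: str, name: str) -> bool:
    """Apply a flowbits operation; check operations return whether
    the rule condition holds, mutating operations always return True."""
    if op == "set":
        session.flags.add(name)
        return True
    if op == "unset":
        session.flags.discard(name)
        return True
    if op == "toggle":
        session.flags ^= {name}
        return True
    if op == "isset":
        return name in session.flags
    if op == "isnotset":
        return name not in session.flags
    raise ValueError(f"unsupported flowbits operation: {op}")
```

This mirrors the example above: after flowbits: set,has_init; fires on a session, a later rule with flowbits: isnotset,has_init no longer matches packets of that session.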
file_data: Moves the pointer to the beginning of the content in an HTTP packet.
Syntax: file_data;

frag_bits: Checks the flags in the header of IP packets.
Syntax: fragbits: (MDR+*!);
For example: fragbits: MR*; matches packets that have the More Fragments or Reserved bit flags set.
pkt_data: Moves the pointer to the beginning of the packet payload.
Syntax: pkt_data;

pcre: Specifies a regex to be found in the payload.
Syntax: pcre:"/<regex>/[ismxAEGR]"
pcre modifiers:
• i: case insensitive
• s: include newlines in the dot metacharacter
• m: ^ and $ match immediately after or immediately before any newline
• x: ignore whitespace in the pattern, except when escaped or inside a character class
• A: match only at the start
• E: $ matches only at the end of the string, ignoring newlines
• G: invert the greediness of the quantifiers
• R: match is relative to the last matching option
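Several of these modifiers have direct counterparts in other regex engines. As an illustration (not part of the product), here is how a pcre option value could be mapped onto Python's re flags; the A, E, G, and R modifiers have no direct re equivalent and are ignored in this sketch:

```python
import re

# pcre modifiers with a direct Python re counterpart.
_FLAG_MAP = {
    "i": re.IGNORECASE,
    "s": re.DOTALL,
    "m": re.MULTILINE,
    "x": re.VERBOSE,
}

def compile_pcre(option: str) -> re.Pattern:
    """Compile a pcre option value such as '/root.*login/is' by
    splitting the trailing modifiers off the /.../-delimited body."""
    body, _, modifiers = option.rpartition("/")
    flags = 0
    for ch in modifiers:
        flags |= _FLAG_MAP.get(ch, 0)
    return re.compile(body.lstrip("/"), flags)
```

For example, compile_pcre("/abc/i") matches "ABC", while compile_pcre("/abc/") does not.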

urilen Matches on HTTP packets whose URI has a specified size.

Syntax: urilen: min<>max; or urilen: <max; or urilen: >min;



Hybrid Threat Detection

The Nozomi Networks Solution can leverage different types of threat detection.
The first is anomaly-based analysis, where Guardian learns the behaviour of the observed network and alerts the user when a significant deviation is detected in the system. This analysis is generic and can be applied to every system.
The second analysis is done with Yara rules. Guardian is able to extract files transferred by protocols such as HTTP or SMB and have them inspected by the Yara engine; when a Yara rule matches, an alert is raised. The typical use of Yara rules is to detect the transfer of malware. A set of Yara rules is provided by Nozomi Networks and can also be expanded by the user.
The third analysis is done with packet rules. They enable the user to define a criterion to match a malicious packet and raise an alert. A set of packet rules is provided by Nozomi Networks and can also be expanded by the user.
The fourth analysis is done with other Indicators of Compromise (IoCs) loaded via STIX. They provide several hints such as malicious domains, URLs, IPs, etc.
Guardian can correlate the output obtained with these approaches to provide a powerful and comprehensive threat detection strategy.
Chapter 7
Vulnerability Assessment

Topics:
• Basics
• Passive detection
• Configuration

In this section we will cover the Vulnerability Assessment module. A Vulnerability Assessment is the process of identifying, quantifying, and ranking the vulnerabilities in a system. The Nozomi Networks Solution provides the ability to find vulnerable system applications, operating systems, or hardware components.
| Vulnerability Assessment | 168

Basics
To manage vulnerability assessment the Nozomi Networks Solution uses NVD (National Vulnerability
Database) format files; the vulnerabilities files match a CPE (Common Platform Enumeration) with a
CVE (Common Vulnerabilities and Exposures):
• CPE is a structured naming scheme for information technology systems, software, and packages.
Based upon the generic syntax for Uniform Resource Identifiers (URI), CPE includes a formal name
format.
• Common Vulnerabilities and Exposures (CVE) is a dictionary of common names for publicly known
cybersecurity vulnerabilities. CVE's common identifiers make it easier to share data across separate
network security databases and tools. With CVE Identifiers, you may quickly and accurately access
fix information in one or more separate CVE-compatible databases to remediate the problem.
• The Common Weakness Enumeration Specification (CWE) provides a common language of
discourse for discussing, finding and dealing with the causes of software security vulnerabilities as
they are found in code, design, or system architecture. Each individual CWE represents a single
vulnerability type.
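A CPE name is a colon-separated string whose field order (part, vendor, product, version, update, ...) is fixed by the CPE 2.3 naming specification. As a sketch of that structure (the vendor and product names below are hypothetical, and this is an illustration rather than how the product parses CPEs):

```python
def parse_cpe23(cpe: str) -> dict:
    """Split a CPE 2.3 formatted string into its leading components,
    following the field order of the CPE 2.3 naming specification."""
    fields = cpe.split(":")
    if fields[:2] != ["cpe", "2.3"]:
        raise ValueError("not a CPE 2.3 formatted string")
    names = ["part", "vendor", "product", "version", "update"]
    return dict(zip(names, fields[2:2 + len(names)]))
```

For example, parsing "cpe:2.3:o:examplevendor:examplefirmware:1.0:*:*:*:*:*:*:*" yields part "o" (operating system), product "examplefirmware", and version "1.0".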

Figure 154: Vulnerability detail



Passive detection
The Nozomi Networks Solution offers continuous vulnerability detection: it detects vulnerabilities within a network by passively listening to network traffic only. This technique provides a comprehensive view of the state of risk without impacting the production equipment in any way.
We consider a passive vulnerability to be any vulnerability that can be detected simply through analysis of network traffic.
Passive vulnerability detection is a valuable component because an active scan can affect the timing of the device or its sensitive processes.
Passive monitoring is not intrusive on network performance or operation. It is real time and can be very useful to trace certain network security problems and to verify suspected activity.
Configuration
Vulnerabilities-related information can be provided to the Nozomi Networks Solution as follows:
• via the Threat Intelligence service (see Threat Intelligence on page 179 for more information)
• or by using our vulnerabilities-only database, if Threat Intelligence has not been subscribed.
To use the vulnerability-only database (that can be downloaded from Nozomi Networks at https://
nozomi-contents.s3.amazonaws.com/vulns/vulnassdb.tar.gz), use a tool like scp or
WinSCP to upload it to the /data/tmp folder:

scp vulnassdb.tar.gz admin@<appliance_ip>:/data/tmp

Execute these commands in the appliance:

enable-me

cd /data/contents

tar xzf /data/tmp/vulnassdb.tar.gz

Now reload the database with the command:

service n2osva stop

Additional vulnerabilities can be added to the system. They must be in the NVD (National Vulnerability Database) format and be placed in the /data/contents/vulnass folder. However, Nozomi Networks gives full support only for the files it distributes itself.
Chapter

8
Smart Polling
Topics: This section describes Smart Polling, the add-on that allows
Guardian to contact nodes in order to gather new information or to
• Strategies improve the already existing one.
• Plans
In Smart Polling, you can define one or more plans, i.e.,
• Extracted information configurations that instruct Guardian about which nodes to poll, and
when and how to poll them. For instance, one can define the plan
to poll known PLCs in the 192.168.38.0/24 subnet every hour using
the Ethernet/IP protocol.
Smart Polling makes the extracted data accessible in two formats.
Some data, when extracted correctly, allows Guardian to enrich its
knowledge of the network. For instance, PLC nodes polled using
the Ethernet/IP protocol are enriched with information such as
vendor, device type, or serial number in the asset or network views;
Windows computers polled with the WinRM protocol provide a list of
the installed software; Linux machines polled using SSH show the
exact name of the distribution and their uptime in the asset and
network views. Other data that does not enrich the asset or network
views is still useful for diagnostic purposes, such as the CPU or
memory usage, or the status of the network interfaces. This data is
accessible in the Smart Polling display page.
Note: To enable Smart Polling, you must install and upgrade
using the advanced bundle, that is, VERSION-advanced-update.bundle;
do not use VERSION-standard-update.bundle.
| Smart Polling | 172

Strategies
The currently supported strategies are:

EthernetIP To be used with devices that support the Ethernet/IP protocol
Modicon Modbus To be used with Modicon Modbus devices
SEL To be used with SEL devices
SNMPv1 To be used with devices that expose the SNMPv1 service
SNMPv2 To be used with devices that expose the SNMPv2 service
SNMPv3 To be used with devices that expose the SNMPv3 service
SSH To be used with devices that expose the SSH service
WinRM To be used with Windows devices that expose the WinRM service
WMI To be used with Windows devices that expose the WMI service
UPnP Extracts information about nodes by using the UPnP protocol
CB Defense (External Service) To be used with Carbon Black services
DNS reverse lookup (External Service) Extracts information about nodes by using the DNS protocol
Aruba ClearPass (External Service) Sends and extracts asset information from ClearPass through HTTP REST APIs
Cisco ISE (External Service) Extracts asset information from Cisco ISE using the pxGrid HTTP API
ServiceNow (External Service) Extracts asset information from ServiceNow using the REST Table API. It also allows you to automatically close Guardian's incidents whenever their corresponding incidents in ServiceNow are closed
Tanium (External Service) Extracts asset information from Tanium using the Tanium Server REST API

Plans
Plans are user-defined schedules that instruct Guardian about what to poll. Each plan defines:
• a strategy, i.e., a protocol or application to use to connect with the desired service;
• a human-readable label to easily identify the plan;
• a query that defines the set of nodes to poll;
• a run_interval, i.e., the time interval in seconds between two successive executions of the plan; and
• additional parameters defined by the chosen strategy. For example, the SEL strategy requires a
password, while the SNMPv2 strategy lets you restrict the requests to selected OIDs.
The Smart Polling display page gives an overview of the existing plans along with the nodes polled by
each plan. Moreover, it allows you to add, modify, remove, enable, and disable plans, and to see their
logs.
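Conceptually, a plan is just a record combining the fields listed above. The following Python sketch is purely illustrative: the field values, the params key, and the next_run helper are assumptions for explanation, not Guardian's actual configuration format.

```python
# Illustrative model of the plan fields described above; the actual
# storage format used by Guardian is not documented here.
plan = {
    "strategy": "SNMPv2",        # protocol/application used to connect
    "label": "Poll PLC subnet",  # human-readable identifier
    "query": "nodes | where ip in_subnet? 192.168.38.0/24",  # nodes to poll
    "run_interval": 3600,        # seconds between two successive executions
    "params": {"oids": ["1.3.6.1.2.1.1"]},  # strategy-specific (hypothetical)
}

def next_run(last_run_epoch: int, plan: dict) -> int:
    """Hypothetical helper: epoch second of the next scheduled execution."""
    return last_run_epoch + plan["run_interval"]
```

With a run_interval of 3600 seconds, a plan last executed at epoch second 1000 would run again at 4600.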

Figure 155: Two user-configured plans

To add a new plan, click the top-right button "New plan". The plan configuration popup appears. A
similar popup appears when you click the configuration icon next to a plan's label to modify the
plan. The popup lets you define the plan parameters and check its functionality by testing on a specific
address. By entering an IP address in the "Host to test" field and clicking "Check connection", a quick
one-shot poll of the corresponding node is performed, and the result of the test appears immediately,
including the steps performed, the data retrieved, and any error messages.

Figure 156: Adding or modifying a plan

Figure 157: Example of successful connection check



Plan actions
Once a plan is created, you can perform several actions on it.

Figure 158: Actions that can be performed on an existing plan

Enable/Disable Enable or disable the scheduled execution of the plan


Edit plan Update the plan parameters
Show log Show the last log messages with live-update

Figure 159: Example of last log messages

Delete plan Delete the plan and the associated data

ClearPass configuration
The integration permits sending asset information to the ClearPass service. To configure ClearPass,
you must add credentials (username and password) as well as the bearer token.

Figure 160: ClearPass configuration



Adding additional nodes from the Network View

Once a plan has been configured with a certain query, additional nodes can be added separately
from the Network View by clicking the smart polling icon. This opens a popup that lets you
select which plan to add the node to, and configure it with parameters different from those the plan
was originally configured with.

Figure 161: Additional nodes configuration

Extracted information
The information that strategies extract during their activity is integrated directly with the information
already attached to the targeted nodes. This means it can be observed in other parts of the
Nozomi Networks Solution such as Asset View on page 63, Network View on page 65, or Vulnerabilities
on page 97.
The following images show assets for which some information has been retrieved with Smart Polling.

Figure 162: Source information tooltip of a product name

Figure 163: Name, Type and Operating System retrieved with Smart Polling

Integrating the new information in this way is very useful, but it does not clearly show what was
collected overall and, more importantly, how information evolved. All of this can be found on the Smart
Polling display page, under the Polled nodes tab.

Figure 164: Smart Polling Display page

The page is divided into three columns, each one representing an increasing level of detail with
respect to the currently selected row.
The first column provides a list of all the nodes that have been contacted by at least one plan. The
second column refers to the node selected in the first column and lists the last inserted values for
each kind of extracted information. Finally, the third column shows the last twenty-five inserted values
for the information currently selected in the second column. For numeric values, this last column is
enriched with a graph that helps you understand how values changed over time. For unstructured,
complex information, such as details of user accounts or installed software, a glanceable
summary is displayed in the last column. The full view can be displayed in a popup window, as in the
following example:

Figure 165: Smart Polling History Details View

Querying extracted information


Another interesting way to explore what Smart Polling has collected is to use the query
mechanism with the node_points data source. For example, we can look at how many different
product names a node has had with the following query:

node_points | where node_id == 192.168.1.3 | where human_name == product_name | select content | uniq | count

For more complex information, such as details of the installed software on a particular node, we can
use the following query to search for all software sold by a particular vendor:

node_points | where node_id == 10.41.48.63 | where human_name == Installed_Software | expand content | select expanded_content | where expanded_content.vendor include? vendor_name | uniq

While the node_points data source provides the whole polling history, we can access only
the latest polling information by using the node_points_last data source. In the following query,
for example, we check the latest installed hotfixes for each polled node:

node_points_last | where human_name == hotfixes | select node_id content


Chapter 9
Threat Intelligence
Topics:
• Threat Intelligence Installation and Update
• Understanding if you have the latest Threat Intelligence update

The Threat Intelligence section allows you to enrich the
Nozomi Networks Solution with additional information to improve the
detection of malware and anomalies.

Figure 166: The Threat Intelligence section

The Threat Intelligence section allows you to manage
Packet rules, Yara rules, STIX indicators, and Vulnerabilities.
Packet rules are executed on every packet. They raise an alert
of type SIGN:PACKET-RULE if a match is found. For an explanation
of the packet rules format see Packet rules on page 161.
Yara rules are executed on every file transferred over
the network by protocols like HTTP or SMB. An alert of type
SIGN:MALWARE-DETECTED is raised when a match is found. Yara
rules conform to the specifications found at YaraRules.
STIX indicators contain information about malicious IP
addresses, malware signatures, or malicious DNS domains. This
information is used to enrich existing alerts, or to raise new ones.
Vulnerabilities are assigned to each node and depend on the
installed software we identify in the traffic. The Nozomi Networks
Solution leverages CVE, a dictionary that provides definitions for
publicly disclosed cybersecurity vulnerabilities and exposures.
Threat Intelligence already shipped with the Nozomi Networks
Solution can be enabled or disabled but not modified or deleted.
New Threat Intelligence content can always be added, edited and
deleted by the user.
| Threat Intelligence | 180

Threat Intelligence Installation and Update


Threat Intelligence can also be updated automatically by the Nozomi Networks Solution. In the
Administration > Updates & Licenses page you can monitor the status of the update. The
information shown on this page depends on whether your Guardian is connected to Vantage or a
CMC, or is standalone.

When not connected to Vantage or a CMC

Figure 167: Update service status

An additional license, named "Threat Intelligence", is required to enable the service. The
Threat Intelligence license can be added or modified using the Set new license button in the same
way as for the Base license. For more information, see License on page 21.
Then you can configure the feature by clicking the Update service configuration button.
You can enable Threat Intelligence to receive updates automatically. As the note says, make sure
that you can reach https://nozomi-contents.s3.amazonaws.com from your Guardian / CMC, otherwise
the Nozomi Networks Solution will not be able to fetch any Threat Intelligence update; once you are
done, check that the connection can be established by clicking the "Check connection" button.

Figure 168: Update service configuration



When connected to Vantage or a CMC

Figure 169: Update service connection configuration when connected to Vantage or a CMC

In this scenario your Threat Intelligence will be managed by Vantage or the CMC to which you are
connected, and the Nozomi Networks Solution will synchronize the contents. If this is the case, make
sure you have Threat Intelligence enabled in Vantage or on your CMC.

Connect through a proxy server

Figure 170: Update service connection configuration through a proxy server

In this scenario your Threat Intelligence will be downloaded through the configured proxy server,
which requires authentication. If your Threat Intelligence updates are managed by the CMC, the proxy
server will not be used.

Manual Update
If you cannot connect your appliance or CMC to the internet, you can add
new Threat Intelligence updates manually. Ask Support for the manual update
package and drop it in the area shown in the image below. After the update, the new contents will be
propagated to the downstream appliances. Should you want to switch to online Threat Intelligence
updates, enable the "Nozomi Networks Update Service" and click "Save".

Figure 171: Manual update service configuration



Understanding if you have the latest Threat Intelligence update

Figure 172: Threat Intelligence license and status

This screenshot shows some additional information:

• whether the Threat Intelligence contents are up to date;
• the Threat Intelligence contents version number and the timestamp when this version was installed.
To let you know about the latest changes in Threat Intelligence, this information is also shown in
the navigation header; clicking there will take you to the Administration > Updates &
Licenses page.

Figure 173: Navigation header showing the update status and version number of Threat Intelligence
Chapter 10
Asset Intelligence
Topics:
• Asset Intelligence Installation and Update
• Enriched Information
• Needed Input Data

This Asset Intelligence (AI) section describes how the functionality
works in combination with the Nozomi Networks operating system
(N2OS). Asset Intelligence is a constantly expanding database
modelling asset behavior. Once an asset is recognized by N2OS,
and it is in the Asset Intelligence database, more information is
added from the Asset Intelligence feed for that specific asset
representation. Enriching an asset in the inventory improves overall
visibility, asset management, and security, since an asset's
expected behavior is also modelled, regardless of what was learned
via the monitored data. In other words, the feed strengthens the
baseline independently from the monitored network data.
Asset Intelligence Installation and Update
The concept is the same as that found in the Threat Intelligence Installation and Update on page 180
section, but applied to Asset Intelligence, with its own license and menu entries.

Enriched Information
Following is a list of enriched information. The specific enriched fields vary depending on the asset.
• Type: Asset type. You can overwrite an existing value, if you want to be more precise (e.g.
OT_device updated to IED).
• End of Sale: Refers to the moment the hardware is officially no longer for sale.
• End of Support: Refers to the moment the hardware is officially no longer supported. This often
corresponds with a similar End of Life concept.
• Lifecycle status: This value is set to Active, End of Sale, or End of Support, depending on
the End of Sale and End of Support values. It is a more generic indication which also applies when
precise dates for the other fields are not specified (e.g., "I know it is Active even if I don't know yet when
it will go End of Sale").
• Protocols: Refers to the list of expected protocols for the asset on the N2OS learning engine, so as
to create an alert when the actual behavior deviates from the profiled one.
• Function Codes: Refers to the list of expected function codes for the asset on the N2OS learning
engine, so as to create an alert when the actual behavior deviates from the profiled one.
• Image (only Vantage): A picture of the asset.
• Description (only Vantage): a textual description of the Asset.
When any of the above information is enriched, the asset is shown as enriched in the asset details
form by this indication:

Needed Input Data


To see a match and thus trigger the enrichment, Asset Intelligence needs to see,
depending on the specific asset model, one or more of these fields: Vendor, Product Name, URL.
Chapter 11
Queries
Topics:
• Overview
• Reference
• Examples

This chapter lists all the data sources, commands, and functions that can be used in N2QL
(Nozomi Networks Query Language).
| Queries | 188

Overview
Each query must start by calling a data source, for example:

nodes | sort received.bytes desc | head

will show in a table the 10 nodes that received the most bytes.
By adding the pie command at the end it is possible to display the result as a pie chart where each
slice has the node ip as label and the received.bytes field as data:

nodes | sort received.bytes desc | head | pie ip received.bytes

Sometimes query commands are not enough to achieve the desired result. As a consequence, the
query syntax supports functions. Functions allow you to apply calculations on the fields and to use the
result as a new temporary field.
For example, the query:

nodes | sort sum(sent.bytes,received.bytes) desc | column ip sum(sent.bytes,received.bytes)

uses the sum function to sort on the aggregated parameters and to produce a chart with the columns
representing the sum of the sent and received bytes.

Reference

Data sources
These are the available data sources with which you can start a query:

alerts Raised events


appliances Downstream connected appliances synchronizing data to this local one
assertions Assertions saved by the users. An assertion represents an automatic
check against other query sources
assets Identified assets. Assets represent a local (private), physical system to
care about, and can be composed of one or more Nodes. Broadcast
nodes, grouped nodes, internet nodes, and similar cannot be Assets
accordingly
captured_files Files reconstructed for analysis
captured_logs Logs captured passively over the network
captured_urls URLs and other protocol calls captured over the network. Access to
files, requests to DNS, requested URLs and other are available in this
query source
cve_files Files containing CVE definitions
dhcp_leases IP to Mac bindings due to the presence of DHCP
function_codes Protocols' function codes used in the environment
health_log System's Health-related events, e.g. high resource utilization or
hardware-related issues or events
help Show this list of data sources
link_events Events that can occur on a Link, such as it becoming available or unavailable
links Identified links, defined as directional one-to-one associations with a
single protocol (i.e. source, destination, protocol)
node_cpe_changes Common Platform Enumeration changes identified over known nodes.
On the event of update of a CPE (on hardware, operating system and
software versions), an entry in this query source is created to keep
track of software updates or better detection of software
node_cpes Common Platform Enumeration identified on nodes (hardware,
operating system and software versions)
node_cves Common Vulnerability Exposures: vulnerabilities associated to
identified nodes' CPEs
node_points Data points extracted over time via Smart Polling from monitored
Nodes
node_points_last node_points last samples per each included data point
nodes Identified nodes, where a node is an L2 or L3 (and above) entity able
to speak some protocol
packet_rules Packet rules definition files
protocol_connections Identified protocol handshakes/connections needed to decode process variables
report_files Generated report files available for consultation

session_history Archived sessions

sessions Sessions with recent network activity. A Session is a specific
application-level connection between nodes. A Link can hold one or
more Sessions at a given time
stix_indicators STIX definitions files
subnets Identified network subnets
trace_requests Trace requests in processing
variable_history Process variables' history of values
variables Identified process variables
yara_rules YARA rules definition files
zone_links A list of protocols exchanged by the defined zones
zones Defined network zones

Using Basic Operators


When writing queries, keep the following in mind:

Operator | (pipe, AND logical operator)


Description To add a where clause with a logical AND, append it using the pipe
character (|). For example, the query below returns links that are from
192.168.254.0/24 AND going to 172.217.168.0/24.
Example links | where from in_subnet? 192.168.254.0/24 | where
to in_subnet? 172.217.168.0/24

Operator OR
Description To add a where clause with a logical OR, append it using the OR operator.
For example, the query below returns links with either the http OR the https
protocols.
Example links | where protocol == http OR protocol == https

Operator ! (exclamation point, NOT logical operator)


Description Put an exclamation point (!) before a term to negate it. For example, the
query below returns links that do NOT (!) belong to 192.168.254.0/24.
Example nodes | where ip !in_subnet? 192.168.254.0/24 | count

Operator ->
Description To change a column name, select it and use the -> operator followed by the
new name. It is worth noting that specific suffixes are parsed and used to
visualize the column content differently. For example:
• _time (using a _time suffix following a prefix, data is shown in a
timestamp format)
• _bytes (using a _bytes suffix following a prefix, data is shown in bytes,
KB, MB, depending on how big the number is)

Example 1 nodes | select created_at created_at->my_integer | where


my_integer > 946684800000
Example 2 nodes | select created_at->my_creation_time

Example 3 nodes | select tcp_retransmission.bytes->my_retrans_bytes
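As a rough illustration of what the _bytes rendering does, the following Python sketch converts a raw byte count into a human-readable unit. The exact thresholds and formatting used by the product are assumptions here.

```python
def format_bytes(n: float) -> str:
    """Render a raw byte count the way a *_bytes column is displayed
    (illustrative: actual product formatting may differ)."""
    for unit in ("bytes", "KB", "MB", "GB"):
        if n < 1024 or unit == "GB":
            return f"{int(n)} bytes" if unit == "bytes" else f"{n:.1f} {unit}"
        n /= 1024
    return f"{n:.1f} GB"  # not reached; keeps the return type explicit

print(format_bytes(512))   # → 512 bytes
print(format_bytes(2048))  # → 2.0 KB
```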

Operators ==, =, <, >, <=, and >=


Description Queries support the mathematical operators listed above.

Operator " (Quotation marks)


Description Use quotation marks (") to specify an empty string. Consider these two
cases where this technique is useful:
• Finding non-empty values. Example 1 below returns assets where the os
field is not blank.
• Specifying that a value in the query is a string (if its type is ambiguous).
Example 2 below tells concat to treat the "--" parameter as a fixed string
to use rather than as a field from the alerts table.

Example 1 assets | where os != ""


Example 2 alerts | select concat(id_src,"--",id_dst)

Operator in?
Description in? is only used with arrays; the field type must be an array. The query
looks for the text strings you specify using in? and returns arrays that match
one of them.
The example below uses in? to find any node having computer or
printer as elements in the array.

Example assets | where type in? ["computer","printer"]

Operator include?
Description The query looks for the text string you specify using include? and returns
strings that match it.
The example below uses include? to find assets where the os field
contains the string Win.

Example assets | where os include? Win



Commands
Here is the complete list of commands:

Syntax select <field1> <field2> ... <fieldN>


Parameters • the list of field(s) to output

Description The select command takes all the input items and outputs them with only the
selected fields

Syntax exclude <field1> <field2> ... <fieldN>


Parameters • the list of field(s) to remove from the output

Description The exclude command takes all the input items and outputs them without
the specified field(s)

Syntax where <field> <==|!=|<|>|<=|>=|in?|include?|start_with?|


end_with?|in_subnet?> <value>
Parameters • field: the name of the field to which the operator will be applied
• operator
• value: the value used for the comparison. It can be a number, a string
or a list (using JSON syntax), the query engine will understand the
semantics

Description The where command will send to the output only the items which fulfill the
specified criterion; many clauses can be concatenated using the boolean OR
operator
Example • nodes | where roles include? consumer OR zone ==
office
• nodes | where ip in_subnet? 192.168.1.0/24

Syntax sort <field> [asc|desc]


Parameters • field: the field used for sorting
• asc|desc: the sorting direction

Description The sort command will sort all the items according to the field and the
direction specified; it automatically understands if the field is a number or a
string

Syntax group_by <field> [ [avg|sum] [field2] ]


Parameters • field: the field used for grouping
• avg|sum: if specified, the relative operation will be applied on field2

Description The group_by command will output a grouping of the items using the field
value. By default the output will be the count of the occurrences of distinct
values. If an operator and a field2 are specified, the output will be the
average or the sum of the field2 values
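The grouping behavior described above can be sketched in plain Python. This is an illustration of the documented semantics, not the query engine's implementation:

```python
from collections import defaultdict

def group_by(items, field, op=None, field2=None):
    """Sketch of group_by: count distinct values by default,
    or compute the avg/sum of field2 per group."""
    groups = defaultdict(list)
    for item in items:
        groups[item[field]].append(item)
    if op is None:
        return {k: len(v) for k, v in groups.items()}
    result = {}
    for k, v in groups.items():
        total = sum(i[field2] for i in v)
        result[k] = total / len(v) if op == "avg" else total
    return result

nodes = [{"zone": "office", "bytes": 10},
         {"zone": "office", "bytes": 30},
         {"zone": "plant", "bytes": 5}]
print(group_by(nodes, "zone"))                  # → {'office': 2, 'plant': 1}
print(group_by(nodes, "zone", "avg", "bytes"))  # → {'office': 20.0, 'plant': 5.0}
```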

Syntax head [count]


Parameters • count: the number of items to output

Description The head command will take the first count items; if count is not specified,
the default is 10

Syntax uniq
Parameters
Description The uniq command will remove from the output the duplicated items

Syntax expand <field>


Parameters • field: the field containing the list of values to be expanded

Description The expand command will take the list of values contained in field and for
each of them it will duplicate the original item substituting the original field
value with the current value of the iteration
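The duplication behavior of expand can be pictured with a short Python sketch (illustrative only; the engine's actual implementation is not documented here):

```python
def expand(items, field):
    """Sketch of expand: emit one output row per element of the
    list stored in <field>, copying all other fields unchanged."""
    out = []
    for item in items:
        for value in item[field]:
            row = dict(item)   # duplicate the original item
            row[field] = value # substitute the list with one element
            out.append(row)
    return out

rows = [{"ip": "10.0.0.1", "roles": ["producer", "consumer"]}]
print(expand(rows, "roles"))
# → [{'ip': '10.0.0.1', 'roles': 'producer'},
#    {'ip': '10.0.0.1', 'roles': 'consumer'}]
```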

Syntax expand_recursive <field>


Parameters • field: the field to be recursively expanded

Description The expand_recursive command will recursively parse the content of


field, expanding each array or json structure until a scalar value is found.
It generates a new row for each array element or json field. For each new
row, it duplicates the original item substituting the original field value with
the current value of the iteration and adding a new field that represents the
current iteration path from the root

Syntax sub <field>


Parameters • field: the field containing the list of objects

Description The sub command will output the items contained in field

Syntax count
Parameters
Description The count command outputs the number of items

Syntax pie <label_field> <value_field>


Parameters • label_field: the field used for each slice label
• value_field: the field used for the value of the slice, must be a numeric
field

Description The pie command will output a pie chart according to the specified
parameters

Syntax column <label_field> <value_field ...>


Parameters • label_field: the field used for each column label
• value_field: one or more field used for the values of the columns

Description The column command will output a histogram; for each label a group of
columns is displayed with the value from the specified value_field(s). The
variant column_colored_by_label returns bars of different colors
depending on their labels.

Syntax history <count_field> <time_field>


Parameters • count_field: the field used to draw the Y value
• time_field: the field used to draw the X points of the time series

Description The history command will draw a chart representing a historic series of
values

Syntax distance <id_field> <distance_field>


Parameters • id_field: the field used to tag the resulting distances.
• distance_field: the field on which distances are computed among entries.

Description The distance command calculates a series of distances (that is, differences)
from the original series of distance_field. Each distance value
is calculated as the difference between a value and its subsequent
occurrence, and tagged using the id_field.
For example, assuming we're working with an id and a time field, entering
alerts | distance id time returns a table where each distance entry
is characterised by the from_id, to_id, and time_distance fields that
represent time differences between the selected alerts.
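The distance computation in that example, each value minus its predecessor, tagged with from/to identifiers, can be sketched as follows (illustrative only):

```python
def distance(items, id_field, distance_field):
    """Sketch of distance: difference between each value of
    <distance_field> and its subsequent occurrence, tagged by <id_field>."""
    out = []
    for a, b in zip(items, items[1:]):
        out.append({
            f"from_{id_field}": a[id_field],
            f"to_{id_field}": b[id_field],
            f"{distance_field}_distance": b[distance_field] - a[distance_field],
        })
    return out

alerts = [{"id": 1, "time": 100}, {"id": 2, "time": 160}, {"id": 3, "time": 400}]
print(distance(alerts, "id", "time"))
# → [{'from_id': 1, 'to_id': 2, 'time_distance': 60},
#    {'from_id': 2, 'to_id': 3, 'time_distance': 240}]
```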

Syntax bucket <field> <range>


Parameters • field: the field on which the buckets are calculated
• range: the range of tolerance in which values are grouped

Description The bucket command will group data in different buckets, different records
will be put in the same bucket when the values fall in the same multiple of
<range>
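The "same multiple of <range>" rule can be illustrated with a one-line bucket-key function (a sketch of the described grouping, not the engine's code):

```python
def bucket_key(value, rng):
    """Sketch of bucket: values falling in the same multiple
    of <range> share a bucket key."""
    return (value // rng) * rng

print(bucket_key(1234, 1000))  # → 1000
print(bucket_key(1999, 1000))  # → 1000
print(bucket_key(2000, 1000))  # → 2000
```

Records 1234 and 1999 land in the same bucket (1000), while 2000 starts a new one.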

Syntax join <other_source> <field> <other_source_field>


Parameters • other_source: the name of the other data source
• field: the field of the original source used to match the object to join
• other_source_field: the field of the other data source used to match the
object to join

Description The join command will take two records and will join them in one record
when <field> and <other_source_field> have the same value
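The matching described above resembles an equi-join; a minimal Python sketch of that behavior (illustrative, with hypothetical sample data):

```python
def join(source, other_source, field, other_field):
    """Sketch of join: merge records whose <field> equals <other_field>."""
    index = {}
    for rec in other_source:
        index.setdefault(rec[other_field], []).append(rec)
    out = []
    for rec in source:
        for match in index.get(rec[field], []):
            merged = dict(match)  # fields from the other source
            merged.update(rec)    # fields from the original source win
            out.append(merged)
    return out

nodes = [{"ip": "10.0.0.1", "label": "plc1"}]
links = [{"from": "10.0.0.1", "protocol": "modbus"}]
print(join(nodes, links, "ip", "from"))
```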

Syntax gauge <field> [min] [max]


Parameters • field: the value to draw
• min: the minimum value to put on the gauge scale
• max: the maximum value to put on the gauge scale

Description The gauge command will take a value and represent it in a graphical way

Syntax value <field>


Parameters • field: the value to draw

Description The value command will take a value and represent it in a textual way

Syntax reduce <field> [sum|avg]


Parameters • field: the field on which the reduction will be performed
• sum or avg: the reduce operation to perform, it is sum if not specified

Description The reduce command will take a series of values and calculate a single
value
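The reduction of a series into one value, sum by default or avg on request, can be sketched like this (illustrative only):

```python
def reduce_values(values, op="sum"):
    """Sketch of reduce: collapse a series into a single sum or average."""
    total = sum(values)
    return total / len(values) if op == "avg" else total

print(reduce_values([10, 20, 30]))         # → 60
print(reduce_values([10, 20, 30], "avg"))  # → 20.0
```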

Syntax size(<field>)
Parameters • field: the field to calculate the size of

Description If the field is an array, then the size function returns the number of entries
in the array. If the field contains a string, then the size function returns the
number of characters in the string.
Note: The size function may only be used on the following data sources:
alerts, assets, captured_files, links, nodes, packet_rules, sessions,
stix_indicators, subnets, variables, yara_rules, zones, and zone_links.

Example: assets | where size(ip) > 1

Nodes-specific commands reference

Syntax where_node <field> < ==|!=|<|>|<=|>=|in?|include?|


exclude?|start_with?|end_with? > <value>
Parameters • field: the name of the field to which the operator will be applied
• operator
• value: the value used for the comparison. It can be a number, a string
or a list (using JSON syntax), the query engine will understand the
semantics.

Description The where_node command will send to the output only the items which fulfill
the specified criterion, many clauses can be concatenated using the boolean
OR operator. Compared to the generic where command, the adjacent nodes
are also included in the output.
Note: This command is only applicable to the nodes table.

Syntax where_link <field> < ==|!=|<|>|<=|>=|in?|include?|


exclude?|start_with?|end_with? > <value>
Parameters • field: the name of the links table's field to which the operator will be
applied.
• operator
• value: the value used for the comparison. It can be a number, a string
or a list (using JSON syntax) the query engine will understand the
semantics.

Description The where_link command will send to the output only the nodes which are
connected by a link fulfilling the specified criterion. Many clauses can be
concatenated using the boolean OR operator.
Note: This command is only applicable to the nodes table.

Syntax graph [node_label:<node_field>]


[node_perspective:<perspective_name>]
[link_perspective:<perspective_name>]

Parameters • node_label: add a label to the node, the label will be the content of the
specified node field
• node_perspective: apply the specified node perspective to the resulting
graph. Valid node perspective values are:
• roles
• zones
• transferred_bytes
• not_learned
• public_nodes
• reputation
• appliance_host
• link_perspective: apply the specified link perspective to the resulting
graph. Valid link perspectives are:
• transferred_bytes
• tcp_firewalled
• tcp_handshaked_connections
• tcp_connection_attempts
• tcp_retransmitted_bytes
• throughput
• interzones
• not_learned

Description The graph command renders a network graph by taking some nodes as
input.

Link Events-specific commands reference

Syntax availability
Parameters
Description The availability command computes the percentage of time a link is UP. The
computation is based on the link events UP and DOWN that are seen for the
link.
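The computation can be pictured as summing the UP intervals between successive events. The sketch below is an illustration of the idea, assuming the link is DOWN before its first recorded event; the product's actual algorithm is not documented here:

```python
def availability(events, start, end):
    """Sketch: percentage of [start, end] during which the link was UP.

    `events` is a time-sorted list of (timestamp, state) pairs with state
    "UP" or "DOWN"; the link is assumed DOWN before its first event.
    """
    up_total, state, last = 0, "DOWN", start
    for ts, new_state in events:
        if state == "UP":
            up_total += ts - last
        state, last = new_state, ts
    if state == "UP":
        up_total += end - last
    return 100.0 * up_total / (end - start)

events = [(10, "UP"), (60, "DOWN"), (80, "UP")]
print(availability(events, 0, 100))  # → 70.0
```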

Syntax availability_history <range>


Parameters • range: the temporal window in milliseconds to use to group the link
events

Description The availability_history command computes the percentage of time a link is


UP by grouping the link events into many buckets. Each bucket will include
the events of the temporal window specified by the range parameter.

Syntax availability_history_month <months_back> <range>


Parameters • months_back: the number of months to go back relative to the current
month to group the link events
• range: the temporal window in seconds to use to group the link events

Description The availability_history_month command computes the percentage of time
a link is UP by grouping the link events into many buckets. Each bucket will
include the events of the temporal window specified by the range and
months_back parameters.
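
A hypothetical invocation, going back three months and grouping the link events into one-hour buckets (all values illustrative; note that range is expressed in seconds here):

```
link_events | where id_dst == 172.31.50.2 | sort time asc | availability_history_month 3 3600
```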

Functions
Please note that functions are always used in conjunction with other commands, such as select. In the
following examples, functions are shown in bold:
• Combining functions with select: nodes | select id type color(type)
• Combining functions with where: nodes | where size(label) > 10
• Combining functions with group_by: nodes | group_by size(protocol)
Here is the complete list of functions:

Syntax abs(<field>)
Parameters • the field on which to calculate the absolute value

Description The abs function returns the absolute value of the field

Syntax bitwise_and(<numeric_field>,<mask>)
Parameters • numeric_field: the numeric field on which to apply the mask
• mask: a number that will be interpreted as a bit mask

Description The bitwise_and function calculates the bitwise AND (&) of the
numeric_field and the mask entered by the user
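
For example, a sketch masking the low byte of a numeric node field (the use of sent.bytes here is purely illustrative):

```
nodes | select id bitwise_and(sent.bytes,255)
```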

Syntax coalesce(<field1>,<field2>,...)
Parameters • a list of fields or string literals in the format "<chars>"

Description The coalesce function will output the first value that is not null

Syntax color(<field>)
Parameters • field: the field on which to calculate the color

Description The color function generates a color in the rgb hex format from a value
Note Only available for nodes, links, variables and function_codes

Syntax concat(<field1>,<field2>,...)
Parameters • a list of fields or string literals in the format "<chars>"

Description The concat function will output the concatenation of the input fields or values
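
A minimal sketch, joining a node id with its label through a string literal separator:

```
nodes | select concat(id," / ",label)
```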

Syntax date(<time>)
Parameters • time defined as unix epoch

Description The date function returns a date from a raw time

Syntax day_hour(<time_field>)
Parameters • time_field: the field representing a time

Description The day_hour function returns the hour of the day in the appliance's local
time for the current time field, i.e. a value in the range 0 through 23
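
For instance, combined with group_by to sketch an hourly distribution of events (assuming the table exposes a time field, as link_events does in the examples later in this chapter):

```
link_events | group_by day_hour(time)
```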

Syntax day_hour_utc(<time_field>)
Parameters • time_field: the field representing a time

Description The day_hour_utc function returns the hour of the day expressed in UTC
for the current time field, i.e. a value in the range 0 through 23

Syntax days_ago(<time_field>)
Parameters • time_field: the field representing a time

Description The days_ago function returns the number of days elapsed between the
current time and the time field value

Syntax dist(<field1>,<field2>)
Parameters • the two fields to compute the distance on

Description The dist function returns the distance between field1 and field2, which is the
absolute value of their difference

Syntax div(<field1>,<field2>)
Parameters • field1 and field2: the two field to divide

Description The div function will calculate the division field1/field2

Syntax hours_ago(<time_field>)
Parameters • time_field: the field representing a time

Description The hours_ago function returns the number of hours elapsed between the
current time and the time field value

Syntax is_empty(field) == true | false


Parameters • field: the field to check to evaluate whether it is empty or not

Description The is_empty function takes a field as input and returns whether it is empty;
combined with where, it filters the entries that are empty / not empty.
Example nodes | where is_empty(label) == false

Syntax is_recent(<time_field>)
Parameters • time_field: the field representing a time

Description The is_recent function takes a time field and returns true if the time is no
more than 30 minutes in the past
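
A hedged sketch, following the comparison style shown for is_empty above (assuming time is the event time field):

```
link_events | where is_recent(time) == true
```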

Syntax minutes_ago(<time_field>)
Parameters • time_field: the field representing a time

Description The minutes_ago function returns the number of minutes elapsed between
the current time and the time field value

Syntax mult(<field1>,<field2>,...)
Parameters • a list of fields to multiply

Description The mult function returns the product of the fields passed as arguments

Syntax round(<field>,[precision])
Parameters • field: the numeric field to round
• precision: the number of decimal places

Description The round function takes a number and outputs the rounded value
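
As an illustrative sketch, rounding the result of a division to two decimal places (this assumes the engine accepts nested function calls, which is not explicitly documented):

```
nodes | select id round(div(sent.bytes,received.bytes),2)
```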

Syntax seconds_ago(<time_field>)
Parameters • time_field: the field representing a time

Description The seconds_ago function returns the number of seconds elapsed between
the current time and the time field value

Syntax split(<field>,<splitter>,<index>)
Parameters • field: the field to split
• splitter: the character used to separate the string and produce the tokens
• index: the 0-based index of the token to output

Description The split function takes a string, splits it on the splitter, and outputs the token
at position <index>
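
As an illustrative sketch, extracting the first octet of a node id (assuming id holds an IP address, as in the examples later in this chapter):

```
nodes | select id split(id,".",0)
```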

Syntax sum(<field>,...)
Parameters • a list of fields to sum

Description The sum function returns the sum of the fields passed as arguments

Examples

Creating a pie chart


In this example we will create a pie chart to understand the MAC vendor distribution in our network. We
choose nodes as our query source and start by grouping the nodes by mac_vendor:

nodes | group_by mac_vendor

We can see the list of the vendors in our network together with the occurrence count. To better
understand our data we can use the sort command, so the query becomes:

nodes | group_by mac_vendor | sort count desc

In the last step we use the pie command to draw the chart with the mac_vendor as a label and the
count as the value.

nodes | group_by mac_vendor | sort count desc | pie mac_vendor count

Creating a column chart


In this example we will create a column chart with the top nodes by traffic. We start by getting the
nodes and selecting the id, sent.bytes, received.bytes, and the sum of sent.bytes and received.bytes.
To calculate the sum we use the sum function; the query is:

nodes | select id sent.bytes received.bytes sum(sent.bytes,received.bytes)

If we execute the previous query we notice that the sum field has a very long name; we can rename it
to make the next commands easier to write:

nodes | select id sent.bytes received.bytes sum(sent.bytes,received.bytes)->sum

To obtain the top nodes by traffic we sort and take the first 10:

nodes | select id sent.bytes received.bytes sum(sent.bytes,received.bytes)->sum | sort sum desc | head 10

Finally we use the column command to display the data in a graphical way:

nodes | select id sent.bytes received.bytes sum(sent.bytes,received.bytes)->sum | sort sum desc | head 10 | column id sum sent_bytes received_bytes

Note: You can access an inner field of a complex type with the dot syntax; in the example the dot
syntax is used on the fields sent and received to access their bytes sub field.
Note: After accessing a field with the dot syntax, it gains a new name to avoid ambiguity; the dot is
replaced by an underscore. In the example sent.bytes becomes sent_bytes.

Using where with multiple conditions in OR


With this query we want to get all the nodes with a specific role; in particular we want all the nodes
which are web servers or DNS servers.
With the where command it is possible to achieve this by writing many conditions separated by OR.
Note: The roles field contains a list of values, thus we use the include? operator to check whether a
value is contained in the list.

nodes | where roles include? web_server OR roles include? dns_server | select id roles

Using bucket and history


In this example we are going to calculate the distribution of link events towards an IP address. We start
by filtering all the link_events with id_dst equal to 192.168.1.11.
After this we sort by time; this is a very important step because bucket and history depend on how
the data are sorted.
At this point we group the data by time with bucket. The final step is to draw a chart using the
history command; we pass count as the value for the Y axis and time for the X axis.
The history command is particularly suited for displaying a large amount of data; in the image below
we can see that there are many hours of data to analyze.

link_events | where id_dst == 192.168.1.11 | sort time asc | bucket time 36000 | history count time

Using join
In this example we will join two data sources to obtain a new data source with more information. In
particular we will list the links with the labels for the source and destination nodes.

We start by asking for the links and joining them with the nodes by matching the from field of the links
with the id field of the nodes:

links | join nodes from id

After executing the query above we will get all the links fields plus a new field called
joined_node_from_id, it contains the node which satisfies the link.from == node.id
condition. We can access the sub fields of joined_node_from_id by using the dot syntax.
Because we also want the labels for the to field of the links, we add another join and exclude the
empty labels of the node referred to by to to get more interesting data:

links | join nodes from id | join nodes to id | where joined_node_to_id.label != ""

We obtain a huge amount of data that is difficult to read; use a select to keep only the relevant
information:

links | join nodes from id | join nodes to id | where joined_node_to_id.label != "" | select from joined_node_from_id.label to joined_node_to_id.label protocol

Computing availability history


In this example we will compute the availability history for a link. In order to achieve a reliable
availability it is recommended to enable the "Track availability" feature on the desired link.
We start from the link_events data source, filtered by source and destination ip in order to precisely
identify the target link. Consider also filtering by protocol to achieve a higher degree of precision.

link_events | where id_src == 10.254.3.9 | where id_dst == 172.31.50.2

The next step is to sort the events by ascending time of creation. Without this step the
availability_history might produce meaningless results, such as negative values. Finally, we compute
the availability_history with a bucket of 1 minute (60000 milliseconds). The complete query is as
follows.

link_events | where id_src == 10.254.3.9 | where id_dst == 172.31.50.2 | sort time asc | availability_history 60000

Note: link_events generation is disabled by default; to enable it, use the configuration rule described in
Configuring links.

Query complex field types


Complex field types are typically one of the following:
1. Single, scalar values
To query them: Apply the commands as explained in the chapter.
2. Objects
How to recognize them: They appear as an object included in a {..} :

{
"source": "ARP",
"likelihood": 1,
"likelihood_level": "confirmed"
}

Example: How to query only 'confirmed' Mac addresses (possible values are confirmed, likely,
not confirmed)? Since mac_address:info is an object, the user can access subfields like
mac_address:info.likelihood_level to apply the "where" condition:

nodes | select mac_address:info mac_address:info.likelihood_level | where mac_address:info.likelihood_level == confirmed
3. Arrays (e.g. parent in the alerts table)
How to recognize them: They appear as an array included in a [..] :

[
"5b867836-2b41-4c15-ab6f-4ae5f0251e30"
]

Example: How to query only alerts having a parent incident with a known incident id having value
"d36d0"? Since the parents field is an array, use expand first to get an entry for each parent, then
apply your condition:

alerts | expand parents | where expanded_parents include? d36d0


4. Object arrays (e.g. function_codes in the links table)
How to recognize them: They are a combination of the above, and therefore appear as an object
included in a [{..},{..},.. ] :

[
{
"name": "M-SEARCH",
"is_learned": true,
"is_fully_learned": true
}
]

Example: How to query learned function codes? Since function_codes is an object
array, use expand first to get an entry for each function code, then use the "." operator
(expanded_function_codes.is_learned) to apply your "where" condition:

links | select from to protocol function_codes | expand function_codes | where expanded_function_codes.is_learned == true
Chapter 12: Maintenance
Topics:
• System Overview
• Data Backup and Restore
• Reboot and shutdown
• Software Update and Rollback
• Data Factory Reset
• Full factory reset with data sanitization
• Host-based intrusion detection system
• Action on log disk full usage
• Support

In this chapter you will find the complementary information needed to keep the Nozomi Networks
Solution up and running with ordinary and extraordinary maintenance tasks.

System Overview
In this section a brief overview of the Nozomi Networks Solution OS (N2OS) main components is given,
so as to provide further background to administer and maintain a production system.

Partitions and Filesystem layout


In this section we will take a look at the N2OS filesystem, services and commands.
The first thing to know about the N2OS structure is the presence of four different disk partitions:
1. N2OS 1st partition, where a copy of the OS is kept and run from. Two different partitions are used
by the install and update process in order to deliver fast switching between the running release and
new versions.
2. N2OS 2nd partition, which works together with the first one to provide reliable update paths.
3. OS Configuration partition, located at /cfg, where low-level OS configuration files are kept (for
instance, network configurations, shell admin users, SSH keys, etc). This partition is copied on /etc
at the start of the bootstrap process.
4. Data partition, located at /data where all user data is kept (learned configuration, user-imported
data, traffic captures, persistent database)

Figure 174: The N2OS standard partition table

A closer look at the /data partition reveals some sub-folders, for instance:
1. cfg: where all automatically learned and user-provided configurations are kept. Two main
configuration files are stored here:
a. n2os.conf.gz: for automatically learned configurations
b. n2os.conf.user: for additional user-provided configurations.
2. data: working directory for the embedded relational database, used for all persistent data
3. traces: where all traces are kept and rotated when necessary.
4. rrd: this directory holds the aggregated network statistics, used for example by Traffic on page
79.

Core Services
There are some system services that you need to know for proper configuration and troubleshooting:
1. n2osids, the main monitoring process. It can be controlled with

service n2osids <operation>

(<operation> can be either start or stop. After a stop the service will restart automatically. This
holds true for every service.) Its log files are under /data/log/n2os and start with n2os_ids*.
2. n2ostrace, the tracing daemon. It can be controlled with

service n2ostrace <operation>

Its log files start with n2os_trace* and are located under /data/log/n2os.

3. n2osva, the Asset Identification and Vulnerability Assessment daemon. It can be controlled with

service n2osva <operation>

Its log files start with n2os_va* and are located under /data/log/n2os.
4. n2ossandbox, the file sandbox daemon. It can be controlled with

service n2ossandbox <operation>

Its log files start with n2os_sandbox* and are located under /data/log/n2os.
5. nginx, the web server behind the web interface. It works together with unicorn to keep the HTTPS
service up and secured. It can be controlled with

service nginx <operation>

In order to be able to perform any operation on these services, you need to obtain the privileges using
enable-me. For instance, the following commands allow you to restart the n2osids service:

enable-me
service n2osids stop

Several other tools and daemons are running in the system to deliver N2OS functionalities.

Data Backup and Restore


This section describes the methods available for backing up the system and subsequently restoring it.
Note that a backup contains just the data; the system software is left untouched.
Two different kinds of backup are available: Full Backup and Environment Backup. The former
contains all data, while the latter lacks historical data, extended configurations, and some other
information. Both can be executed while the system is running. Environment Backup can be used to
restore the most important part of the system on another appliance for analysis, or as a delta backup
when a full backup is available.

Full Backup

Command line
To create a new backup, go to a terminal and execute the command:

n2os-fullbackup

You can now download the backup file; for instance:

scp admin@<appliance_ip>:/data/tmp/<backup_hostname_date_version.nozomi_backup> .

Web application
The graphical interface for creating a backup is available under Administration > Backup/
Restore. You can generate and download a backup on-demand (by clicking the Download button) or
you can schedule a backup for a chosen date or recurrence (by clicking the Schedule backup button).

When scheduling a backup, you can configure the recurrence, the maximum number of backups to
keep, and the location where backup files are stored; the location can be local or remote.
In the case of local locations, backup files are stored under the /data/backups/ folder on the
appliance.
In the case of a remote location, backup files are stored on a dedicated host; this host must provide
a user/password authentication method through one of the listed protocols, and the user must have
permissions to list, read, and write within a folder that will store the backup files. During the backup
process, the backup file is generated locally and then uploaded on the remote folder. When restored,
the backup file is first downloaded from the remote folder and then used for the restore process.

When a new scheduled backup is about to be generated, the system checks if the number of maximum
backups to keep is about to be exceeded, and if necessary, eliminates the oldest backup.
It is also possible to choose a remote location for storing the backups, using a protocol among smb,
ftp, and ssh. Important: the smb remote backup is supported only for use with Microsoft operating
systems. Compatibility with third-party devices is not guaranteed. These devices may require additional
configuration changes, including, but not limited to: permission changes, creation of new network
shares, creation of new users. Kerberos authentication is not supported.
By default, traces are not included inside backups, but you can include them by checking the include
traces option, also available for scheduling.

Full Restore

Command line
In order to restore from a full backup you may do the following:
1. Copy via SFTP the backup archive from the location where it
was saved to the admin@<appliance_ip>:/data/tmp/
<backup_hostname_date_version.nozomi_backup> path of the appliance. For instance,
using the scp command line:

scp <backup_location_path>/<backup_hostname_date_version.nozomi_backup> admin@<appliance_ip>:/data/tmp/<backup_hostname_date_version.nozomi_backup>
2. Go to a terminal and execute this command:

n2os-fullrestore /data/tmp/<backup_hostname_date_version.nozomi_backup>

Note: Use the --etc_restore option to restore the files from the /etc folder; the option can be
used with backups produced by version 20.0.1 and newer.

Web application
If automatically scheduled backups are present on the disk, they will be listed in the table of the section
named 'Restore Previous Backup'.

For each entry the following actions can be performed:

Download the backup file


This action will start the download process for the selected backup file. The file can be used manually
for the Full Restore process.

Restore the selected backup file


This action will start the Full Restore process using the selected backup file.

Delete the selected backup file


This action will delete the selected backup file from the disk.
Finally, it is possible to upload a backup archive from your local machine, for instance a previously
produced backup via command-line, or downloaded with the UI.

Environment Backup
In this section you will learn how to create an Environment Backup of an existing installation.
1. Issue the save command in the CLI.
2. Copy via SFTP the content of the /data/cfg folder to a safe place.

Environment Restore
In this section you will learn how to restore a Nozomi Networks Solution Environment to an existing
installation.

1. Copy the saved content of the cfg folder to the /data/cfg folder on the appliance.
2. From the console, issue the service n2osids stop command.

Reboot and shutdown


Reboot and shutdown commands can be performed from the web interface under Administration
> Operations

In addition, both commands can be entered in the text console or inside an SSH session.
To reboot the system issue the following command:

enable-me
shutdown -r now

To properly shutdown the system issue the following command:

enable-me
shutdown -p now

Software Update and Rollback


In this section you will learn about the available methods for updating the system to a newer
release and rolling back to the previous one.
Rolling back to the previously installed release is transparent, and all data is migrated back to the
previous format.
Although the software update is built to be transparent to the user and to preserve all data, we suggest
always keeping at least an Environment Backup of the system in a safe place.
An interesting aspect of the Nozomi Networks Solution update file is that it applies to both the
Guardian and the CMC, and will work for all the physical and virtual appliances to make the updating
experience frictionless. Special considerations need to be done for the Container, where different
update commands and procedures apply.
Note:
Before updating an appliance, please refer to the 'Update remarks' section of the Release Notes, which
recommends update paths.
Generally speaking, updating to an older version is not possible; also, you cannot update to a version
that is more than one major version ahead of your current version (e.g. 21.0.0 -> 23.0.0).

Update: from GUI


In this section you will learn how to update the Nozomi Networks Solution software of an existing
installation.
You need to already have the new VERSION-update.bundle file that you want to install.
A running system must be updated with a more recent N2OS release.
1. Go to Administration > System operations

2. Click on Software Update and select the VERSION-update.bundle file


Note: The system must be at least version 18.5.9 to support the .bundle format; if your system is
running a version lower than 18.5.9 you must first update to 18.5.9 to proceed
The file will be uploaded
3. Click the Proceed button
Note: If updating from version 18.5.9, the system prompts to insert the checksum that is distributed
with the .bundle; the button is only enabled after the checksum is verified.
The update process begins. The update may take several minutes to complete.

Update: from command line


In this section you will learn how to update the Nozomi Networks Solution software of an existing
installation.
You need to already have the new update file you want to install.
A running system must be updated with a more recent N2OS release.
1. Go to a terminal and cd into the directory where the VERSION-update.bundle file is located.
Then copy the file to the appliance with:

scp VERSION-update.bundle admin@<appliance_ip>:/data/tmp



2. Start the installation of the new software with:

ssh admin@<appliance_ip>

enable-me

install_update /data/tmp/VERSION-update.bundle

The appliance will now reboot with the new software installed.

Rollback to the previous version


In this section you will learn how to roll back the software to the immediately previous version. If you
would like to roll back to a release older than the previous one, follow the instructions in the next section.
You need to have performed a release update at least once.
1. Go to the console and type the command

rollback

2. Answer y to the confirmation message and wait while the system is rebooted. All configuration and
historical data will be automatically converted to the previous version, thus no manual intervention
will be required.

Data Factory Reset


In this section you will learn how to completely erase the N2OS data partition. IP configuration will
be kept, and the procedure is safe to execute remotely. Executing this procedure will cause the
system to lose all data!
1. Go to a terminal and execute the command:

n2os-datafactoryreset -y

2. The system will start over with a fresh data partition. Refer to Set up phase 2 (web interface
configuration) on page 20 to complete the configuration of the system.

Data Factory Reset with Sanitization


In this section you will learn how to completely erase the N2OS data partition, sanitizing the disk
space using the U.S. DoD 5220-22M 7-pass scheme.
This process erases the N2OS data partition in accordance with the clear guidelines suggested by
NIST in document 800-88 rev1.
Configurations like network and console password settings will be kept.
Executing this procedure will cause the system to lose all data!
1. Go to a terminal and execute the command:

n2os-datasanitize -y

2. The system will start over with a fresh data partition. Refer to Set up phase 2 (web interface
configuration) on page 20 to complete the configuration of the system.

Full factory reset with data sanitization


In this section you will learn how to completely erase your appliance, clearing out the disk space
using the U.S. DoD 5220-22M 7-pass scheme.
This process erases ALL the data inside your appliance in accordance with the clear guidelines
suggested by NIST in document 800-88 rev1.
All data and configurations will be permanently destroyed. The appliance will need to be rebooted
after this procedure. The installed N2OS version will remain the same.
Executing this procedure will cause the system to lose all data and configurations!
1. Go to a terminal and execute the command:

n2os-fullfactoryreset -iknowwhatimdoing

2. The system will start over and will need to be reconfigured from scratch. Refer to Set up phase
1 (basic configuration) on page 18 to configure the system.

Host-based intrusion detection system


In this section we will provide information about the internal HIDS.

Host-based intrusion detection system


N2OS appliances can detect changes to the basic firmware image. When a change is detected, a new
event is logged in the Audit log of the system and replicated to Vantage or the CMC.
Default HIDS settings can be changed in the CLI to best suit your security requirements.
This feature is not available in the container version due to the different security approach.

Parameter                 Default value   Description
hids execution interval   18 hours        HIDS check execution interval
hids ignore files                         Comma-separated list of files to be ignored
                                          by HIDS (ex: /etc/file1, /etc/file2)

Action on log disk full usage


System log files are kept in a dedicated log partition and are automatically rotated in order to preserve
disk usage. However, sometimes customers may want to shut down the appliance if the log partition
fills up.
This feature can be enabled by issuing conf.user configure
shutdown_when_log_disk_full true in the CLI.
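For reference, a sketch of the CLI invocation using the configuration key documented above:

```
conf.user configure shutdown_when_log_disk_full true
```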
Log emergency shutdown will also raise an alert into the appliance health log.
This feature is not available in the container version.

Support
In this section you will learn how to generate the archive needed to request support from Nozomi
Networks.
Go to Administration > Support, click the download button, and your browser will start
downloading the support archive file. Send an email to [email protected] attaching
the file.
The Anonymize option removes sensitive network information from the generated archive.
Note: An anonymized support archive does not contain sensitive information about the network. It
should be used only when the normal archive cannot be shared.
Chapter 13: Central Management Console
Topics:
• Overview
• Deployment
• Settings
• Connecting Appliances
• Troubleshooting
• Data synchronization Policy
• Data synchronization Tuning
• CMC or Vantage connected appliance - Date and Time
• Appliances List
• Appliances Map
• High Availability (HA)
• Alerts
• Functionalities Overview
• Updating
• Single-Sign-On through the CMC

In this section we will cover the Central Management Console product, a centralized monitoring
variant of the standalone Appliance. The main idea behind the Central Management Console is to
deliver a unified experience with the Appliance; consequently, the two products appear as similar as
possible.

Overview
The Central Management Console (CMC) has been designed to support complex deployments that
cannot be addressed with a single Appliance.
A central design principle behind the CMC is the Unified Experience, which allows you to access
information in the same manner as on the Appliance. Some additional functionalities have been added to allow
the simple management of hundreds of appliances, and some other functionalities relying on live
traffic availability have been removed to cope with real-world, geographic deployments of the Nozomi
Networks Solution architectures. In Functionalities Overview on page 233 a detailed overview of
differences will be given.
In the Appliances page all connected appliances can be seen and managed. A graphical
representation of all the hierarchical structure of the connected Appliances and the Appliance Map is
presented to allow a quick health check on a user-provided raster map. In Appliances List on page
225 and Appliances Map on page 227 these functionalities will be explained in detail.
Once Appliances are connected, they are periodically synchronized with the CMC. In particular, the
Environment of each Appliance is merged into a global Environment and Alerts are received for a
centralized overview of the system. Of course, Alerts can also be forwarded to a SIEM directly from the
CMC, thus enabling a simpler decoupling of components in the overall architecture. To synchronize
data, the Appliances must be running the same major release or one of the two prior major ones. For
example, if the CMC is running the version 19.0.x (the major is 19.0), Appliances can synchronize if
running one of the following versions: 19.0.x, 18.5.x or 18.0.x.
Firmware update is also simpler with a CMC. Once the new Firmware is deployed to it, all connected
Appliances are also automatically updated. In Updating on page 234 an overview of the update
process is provided for the CMC scenario.

Deployment
The first step to setup a CMC is to deploy its Virtual Machine (VM).
The CMC VM can be deployed following the steps provided in Installing the Virtual Machine (VM) on
page 14. The main difference is that the CMC version of N2OS must be used in the installation.
A further difference arises during the Initial Setup phase: you have to locate and configure the
management NIC but not the sniff interfaces. The reason is that the CMC does not have to sniff live
traffic.

Deployment to AWS
Before starting, please request access to the CMC Amazon Machine Image (AMI) by emailing
your organization's AWS account ID and the AWS region where this AMI will be deployed to
[email protected]. For information about finding your AWS ID, please refer to Amazon's
documentation on AWS identifiers. We'll grant access upon receiving your request.

Deployment to Microsoft Azure


The Nozomi Networks CMC image has been delivered in a special Azure VHD for use in the Azure
cloud.
Prerequisites
1. The Azure storage account that is to be used must have the capabilities to store Page Blobs. This
is an Azure requirement when uploading VHD images to be used for virtual machines in the Azure
environment.
2. The Azure user performing the installation must have permissions to access the Storage Explorer.
3. Make sure there are well-defined security groups for accessing the virtual machine to be
instantiated in Azure.
4. Nozomi Networks platform images for running on Azure have a number of prerequisites. Please
contact your Nozomi Networks support team for details.
Deploying via the Azure Web UI
1. Log in to the Azure console.
2. Create a resource of type Storage Account if there aren’t any in your subscription (default
values).
Make sure the Storage Account type supports Page Blobs.
3. Select the Storage Account and Storage Explorer (preview) > Blob Container from the
menu.
Make sure the Azure user has permissions to access the Storage Explorer.
4. Create a Blob Container if it doesn't exist and select Upload for the VHD.
5. When the upload is completed, from the Azure home select Create a resource and choose
Managed Disks with the following settings:
• SourceType = Storage Blob
• Select the Nozomi Networks VHD as SourceBlob
• Size = <deployment size>
• OS = Linux, Gen1
• Leave the other parameters with their defaults
6. Once the disk is created, select it:
• click +Create VM
• choose required CPU and RAM
• Network Firewall rules - allow SSH, HTTPS and HTTP
• In the Management tab, for Boot Diagnostics select Enable with custom storage
account, then choose or create a Diagnostics storage account
• Leave the other parameters with their defaults
7. Once the virtual machine is created, select it, scroll down to the Support + troubleshooting
section and select Serial Console.

8. Log in to the console. The console displays a prompt with the text "N2OS - login:". Type admin
and press [Enter]. The admin account initially has no password; you must set one upon first
login.
9. Elevate the privileges with the command: enable-me
10.Now launch the initial configuration wizard with the command:

env TERM=xterm /usr/local/sbin/setup

Refer to Set up phase 1 (basic configuration) on page 18 to configure the system.


11. Run data_enlarge to expand the disk space:

data_enlarge

12. You can log in to the Web UI with:
Username: admin
Password: nozominetworks
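For reference, the portal steps 2 to 6 above can also be approximated with the Azure CLI. This is a hedged sketch only: the resource group, storage account, container, disk, and file names are placeholders, and you should verify the flags against your az version.

```shell
# Upload the VHD as a Page Blob (Azure requires page blobs for VM disks).
az storage blob upload --account-name mystorage --container-name vhds \
  --name n2os-cmc.vhd --file ./n2os-cmc.vhd --type page

# Create a managed disk from the uploaded blob (Linux, Gen1).
az disk create --resource-group my-rg --name n2os-cmc-disk \
  --source https://mystorage.blob.core.windows.net/vhds/n2os-cmc.vhd \
  --os-type Linux --hyper-v-generation V1

# Create the VM from the managed disk; size is a placeholder.
az vm create --resource-group my-rg --name n2os-cmc \
  --attach-os-disk n2os-cmc-disk --os-type linux \
  --size Standard_D4s_v3
```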

Settings
The Administration > Synchronization settings page allows you to customize all the
Vantage or CMC related parameters.

Sync token: The token that must be used by all the appliances allowing for synchronization to the
CMC.

Appliance ID: The current Appliance ID, also known as CMC ID, which will be shown in the CMC we
want to replicate data with. This information is also required when connecting an appliance to
Vantage.

CMC context: A CMC's context is either Multi-context or All-in-one. Multi-context
indicates that the data gathered from the appliances connected to the CMC will be collected and
kept separately, whereas All-in-one indicates that the information will be merged:
• In Multi-context mode, the user can focus on a single Guardian to access its data in its
separate context. This is the default operational mode; it allows the highest scalability and
supports multitenancy (ideal for MSSPs).
• In All-in-one mode, the user gets a unique, merged Environment section. This configuration
is recommended for smaller, cohesive environments.
The Alerts and Asset View is common to both modes.

Appliance update policy: Determines whether the appliances connected to the CMC will
automatically receive updates when a new version of the software is available.

Remote access to connected Appliances: Enables/disables remote access to an appliance by
passing through the CMC.

Allow remote to replicate on this CMC: When a CMC attempts to replicate data on the current
CMC, its Sync ID is shown in the corresponding text field. This validates that the CMC that is
trying to replicate is really the one that you intended to work with.

HA (High Availability): The High Availability mode allows the CMC to replicate its own data on
another CMC. To activate it, you must insert the other CMC's Host and Sync Token.

Connecting Appliances
To start connecting an Appliance to a CMC, open the web console of the CMC and go to Settings on
page 221.
Copy the Sync Token, which you will need when configuring the Appliance.
To connect an Appliance to the CMC you can use the Upstream connection section on the same
page.
In this section you can enter the parameters to connect the Appliance:

Host: The CMC host address (the protocol used will be https). If no CA-emitted certificates are
used, you can make the verification of certificates optional.

Sync token: The Synchronization token necessary to authenticate the connection; the pair of
tokens can be generated from the CMC.

Use proxy connection: Enables connecting to the CMC through a proxy server.

After entering the endpoint and the Sync token, use the Check connection button to verify that the
pairing between the CMC and the appliance is valid. Click Save to keep the configuration, then open
the web console of the CMC and navigate to Appliances.

The table will list all the connected Appliances. When an Appliance is connected for the first time, it
will notify its status and receive Firmware updates. However, it will not be allowed to perform additional
actions. To enable a complete integration of the Appliance you will need to "allow" it (see Appliances
List on page 225 for details).
To configure the synchronization intervals between an Appliance and the CMC see Configuring
synchronization on page 318.

Troubleshooting
This section lists the most useful troubleshooting tips for the CMC.
1. If the Appliance is not appearing at all in the CMC:
• Ensure that any firewall(s) between the Appliance and the CMC allow traffic on TCP port 443
(HTTPS), with the Appliance as Source and the CMC as Target
• Check that the tokens are correctly configured both in the Appliance and the CMC
• Check in the /data/log/n2os/n2osjobs.log file for connection errors.
2. The Appliance ID is stored in the /data/cfg/.appliance-uuid file. Please do not edit this
file after the appliance is connected to the CMC or Vantage, since it is the unique identifier of the
Appliance inside the CMC and Vantage. In case a forceful change of the Appliance ID is needed,
you will need to remove the old data from the CMC or Vantage by removing the old Appliance ID
entry.
3. If an issue occurs during the setup of an Appliance, follow the instructions at Appliances List on
page 225 to completely delete the Appliance or just to clear its data from the CMC or Vantage.
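The log check in tip 1 can be sketched as follows. This self-contained example uses a temporary sample log and illustrative error keywords instead of the real /data/log/n2os/n2osjobs.log; the sample lines are our own assumptions, not actual N2OS log output.

```shell
# Create a small sample log standing in for /data/log/n2os/n2osjobs.log.
log=$(mktemp)
cat > "$log" <<'EOF'
2022-02-01 10:00:01 INFO  sync started
2022-02-01 10:00:02 ERROR connect to 10.0.0.5:443: connection refused
2022-02-01 10:00:03 INFO  retry scheduled
EOF

# Count lines that look like connection errors (keywords are illustrative).
matches=$(grep -cviE '^$' /dev/null; grep -ciE 'error|refused|timeout' "$log")
matches=$(grep -ciE 'error|refused|timeout' "$log")
echo "connection-error lines: $matches"
rm -f "$log"
```

On a real CMC or appliance you would point the same grep at /data/log/n2os/n2osjobs.log instead of a sample file.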

Data synchronization Policy


Both the CMC and the Guardian deployment have their own configurations. In order to simplify
the management of all the possible appliances connected to an upstream appliance, centralized
configuration is available for:
• Users and user groups.
• Alert rules.
• Zone configurations.
For Alert rules and Zone configurations from the CMC, it is also possible to specify a
synchronization policy.
Policies can be configured in the Policy tab under Administration / Synchronization
settings page.

For details, see the next sections.

Users and user groups:


Admin users can specify which users and user groups will be propagated to connected appliances. As
shown in the figure below, in the create/edit user group popup there is a toggle button to enable this
property, which, by default, is set to false.

The synchronization comes with the following constraints:


• Users and user groups received by the Guardian from the CMC cannot be modified on the Guardian.
• Users and user groups created in the Guardian will not be synced with the CMC.
• In case of name conflicts, users and user groups in the Guardian will be overwritten with the ones
coming from the CMC.
For details, see Users on page 31.

Alert rules:
The synchronization can be performed with one of the following policies:
• Upstream only: Alert rules are controlled by the top CMC or by Vantage. Local rules will be ignored.
• Upstream prevails: In case multiple alert rules performing the same action match an alert,
only the ones received from upstream will be executed. Mute actions created in Guardian will be
ignored if at least one rule received from upstream matches the alert.

• Local prevails: In case multiple alert rules performing the same action match an alert, only the
ones created in Guardian will be executed. Mute actions received from upstream will be ignored if
at least one local rule matches the alert.
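The policy resolution described above can be sketched as a small decision function. This is a sketch in shell; the function and argument names are our own and not part of N2OS.

```shell
# resolve_rules POLICY UPSTREAM_MATCH LOCAL_MATCH
#   POLICY: upstream_only | upstream_prevails | local_prevails
#   UPSTREAM_MATCH / LOCAL_MATCH: y if rules from that origin match the alert
# Prints which set of matching rules would be executed.
resolve_rules() {
  case "$1" in
    upstream_only)
      [ "$2" = y ] && echo upstream || echo none ;;
    upstream_prevails)
      if [ "$2" = y ]; then echo upstream
      elif [ "$3" = y ]; then echo local
      else echo none; fi ;;
    local_prevails)
      if [ "$3" = y ]; then echo local
      elif [ "$2" = y ]; then echo upstream
      else echo none; fi ;;
  esac
}

resolve_rules upstream_prevails y y   # -> upstream
resolve_rules local_prevails y y      # -> local
```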

For details, see Security Control Panel.

Zone configurations:
The synchronization can be performed with one of the following policies:
• Upstream only: Zone configurations are controlled by top CMC or by Vantage. Local zones will
be ignored.
• Local only: Zone configurations are controlled by Guardian. Zones received from upstream will be
ignored.

For details, see Zone configurations.

Data synchronization Tuning


Configure synchronization for the CMCs in the Tuning tab of the Administration /
Synchronization settings page. You can enable or disable synchronization for the following
entities:
• Alerts.
• Assets.
• Zone configurations.
• Audit items.
• Health logs.
• Environment (Nodes, links and variables).
The Environment option is only visible in CMCs where context is set to All-in-one.

The configuration is applied only to appliances directly connected to the CMC in which the
configuration has been set. If the CMC has an HA connected, the tuning must be configured in both the
CMCs.
Disabling synchronization for an entity will cause the deletion of all the items already received.

CMC or Vantage connected appliance - Date and Time



Note that when an appliance is attached to a CMC or to Vantage, its date and time cannot be manually
set as described in Date and time on page 119. Appliances connected to a CMC or Vantage (and with
no NTP configured) will automatically get time synchronization from the parent CMC or Vantage.

Appliances List
The Appliances section shows the complete list of appliances connected to the current CMC. For each
appliance, you can see some information about its status (system, throughput, alerts, license and
running N2OS version).

Actions on appliances:

Allow/Disallow an Appliance

After allowing an Appliance (an allowed Appliance is marked with a dedicated icon):


• Nodes, Links and Variables coming from the Appliance become part of the Environment of the
CMC.
• Alerts coming from the Appliance can be seen in the Alerts section.

Focus on appliance
Allows you to filter the data, such as Alerts and Environment, of the chosen appliance only.

Remote connect to an appliance



Connect to a remote appliance directly from the CMC. Click this action to open a new browser tab to
the selected appliance's login page. The action is hidden if the CMC isn't configured to allow this type
of communication between Appliances and CMC; to enable it, go to Settings on page 221.

Place an appliance on the map


This action is used to place the appliance within the map (if you have not uploaded a map, go to
Appliances Map on page 227). Choose the right position of the selected appliance by clicking on the
map, then click Save.

Lock the appliance software version


When locked, the Appliance will not automatically update its software.

Force the software update of the appliance


Even if it is locked, the Appliance will automatically update its software, with the version installed on the
CMC.

Clear data of an appliance


Clear all synchronized data received from the selected appliance for restarting the synchronization from
an empty state.

Delete an appliance
Clear all data received from the selected Appliance and delete it from the list. If the Appliance tries to
sync with the CMC again, it appears disallowed in the list.

Appliances Map
In this page you can upload the Appliances Map by clicking on Upload map and choosing a .jpg file
from your computer.

You can inspect the appliances' information in the Info pane. In the map, each appliance is identified
by its own ID. The appliance marker color is related to the risk of its alerts, and next to the ID is the
number of alerts in the last 5 minutes (if greater than 0). If the number of alerts in the last 5 minutes
grows, the appliance marker blinks for 1 minute.

If the site has been specified in the Administration/General section of the appliance, it is possible to
enable the "group by site" option. The appliances with the same site will be grouped to deliver a simpler
view of a complex N2OS installation.

Figure 175: Appliances map with "group by site" enabled



The Appliances Map is also available as a widget.



High Availability (HA)


This topic describes how to configure High Availability (HA), a feature that allows a CMC to replicate all
of its data on another replica CMC.
Note: To enable the highest level of resiliency, both CMCs must replicate each other. This is to ensure
that when a CMC stops working, the connected appliances continue to send data to the replica CMC.
Prerequisite: In order to configure the CMC High Availability (HA) feature, both CMCs must be
synchronized.
Configure both CMCs, using the appropriate IPs/synch tokens, as follows:
1. Configure the first CMC. From the Administration > Synchronization settings page, select Allow
to allow the remote to replicate on this CMC, which then appears on the Optional tab.

2. Connect another CMC as an HA replica, starting from the Administration > Synchronization
settings page.

3. Click the On button in the High Availability portion of the Synchronization settings to enable the
HA feature. Then complete the Host and the Sync Token fields of the endpoint to which you want
to replicate it with.
Note: The Sync token can be found in the Administration > Synchronization settings page of
the destination endpoint.

4. Save your changes to confirm the connection to the two CMCs.


5. From the Administration > Synchronization settings of the destination endpoint page, verify that
the Sync ID shown is the one for the current machine, then click the Allow button.

Once the CMCs have been configured, Guardian can be configured to synch with the CMC that you
deem as primary.
Guardian failover functionality
When the primary CMC fails, Guardian automatically fails over to the secondary CMC.
Testing the configuration
To verify the configuration and determine if it is working correctly, from the Administration > Health
settings, go to Replication status. View the various entities to see if they are synchronized. For
example, AuditItems are elements generally with a low creation frequency, which will be In Sync.

You can also verify a working connection by checking the Synchronization Settings page and clicking
the Check connection button.
You can also check the last CMC that the appliance has reached:

Alerts
Alerts management in the centralized console is equivalent to alerts management in an appliance (for
more information, go to Alerts on page 62). This allows you to have all the alerts from all the
appliances in one place.
In an appliance, you can create a query (Queries on page 85) and therefore an assertion (Custom
Checks: Assertions on page 148) that involves all the nodes/links/etc of your overall infrastructure.
In the centralized console you have the ability to create a "Global Assertion": you can make one or
more groups of assertions that can be propagated to all the appliances. The appliances cannot edit nor
delete these assertions, only the CMC has control over them.
As mentioned previously, it is possible to configure the centralized console to forward alerts to a SIEM
without having to configure each appliance (for more information on this topic, see Data Integration on
page 108).

Functionalities Overview
The unified experience offered by the CMC lacks some of the features found in the appliance user
interface.

As stated above, the Nodes table in a CMC offers only the Show alerts and Navigate actions (the
same table on an appliance also has the Configure node, Show requested trace and Request a trace
actions).

Figure 176: Node actions on appliance (top) and CMC (bottom)

In the Environment Links table only the Show alerts and Navigate actions are available (the same
table on an appliance also has the Configure link, Show requested trace, Request a trace and Show
events actions).

Figure 177: Link actions on appliance (top) and CMC (bottom)

In the Process View Variables table the Configure variable action is not allowed, but the other actions
(Variable details, Add to favourites and Navigate) are. A detailed explanation is available in Process
Variables on page 80.

Figure 178: Variable actions on appliance (top) and CMC (bottom)

Configuration actions and trace request functionalities are available only in the appliance user
interface.

Updating
In this section we will cover the release update and rollback operations of a Nozomi Networks Solution
architecture, consisting of a Central Management Console and one or more Appliances.
The Nozomi Networks Solution Software Update bundle is universal (except for the Container) -- it
applies to both the Guardian and the CMC, and will work for all the physical and virtual appliances to
make for a user-friendly update experience.
Once an Appliance is connected to the Central Management Console, updates are controlled from
there. The software bundle is propagated from the CMC and, once the bundle is received by the
Appliance, the update can be performed automatically or manually. Configure this behavior on the
Synchronization settings page; select an option under Let the user perform the update on
the appliances, as shown below.

Figure 179: Update policy

If the CMC is configured to allow manual updates, the Appliance's status bar displays a message
notifying the user as soon as the Appliance receives the update bundle (see the next figure).

Figure 180: Update available notification

The update process from the Central Management Console can proceed as explained in Software
Update and Rollback on page 212. After the Central Management Console is updated, each Appliance
will receive the new Software Update.
If an error occurred during the update procedure, a message appears next to the related Appliance's
version number on the Appliances page.
To Rollback, first rollback the Central Management Console, and then proceed to rollback all the
appliances as explained in Software Update and Rollback on page 212.

Single-Sign-On through the CMC


CMC machines offer a SAML identity provider endpoint to their connected appliances in order to permit
users to login into appliances by passing through the parent CMC.
By default, this functionality is not enabled on the CMC. To use this service, you must specify the
Nozomi URL on the SAML integration page; for details see SAML Integration on page 42. You can use
either the machine's IP address (for example, https://192.168.1.122) or its FQDN (for example,
https://machine.address), but to use either address interchangeably to login using SAML, you
must specify the FQDN.
The identity provider endpoint is exposed only if a configuration rule indicating the external URL at
which the CMC is accessible is present in the configuration (see Configuration on page 251 for
more information). For example, assuming the CMC can be accessed at https://192.168.1.8, the
configuration rule would be the following:

conf.user configure cmc identity_provider_url https://192.168.1.8

However, if the CMC is also accessible via its FQDN, and the identity provider was previously set
using the FQDN, you must replace the IP address with the correct FQDN in the configuration rule (for
example, cmc identity_provider_url https://machine.address). Note that to make the change effective
you also have to reboot or restart all of the services.

Once the CMC has been configured, you should be able to obtain the identity provider metadata at the
/idp/saml/metadata endpoint of the CMC. Continuing with the CMC in the example above, you will
find the metadata file at https://192.168.1.8/idp/saml/metadata. This file is important, as it has to be
uploaded to all the appliances on which you want to have the SSO login.
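The metadata file can be fetched from the command line, for example with the following fragment (a sketch using the example address above; -k skips certificate verification, which is only appropriate while the CMC still uses a self-signed certificate):

```shell
# Download the IdP metadata for later upload to each appliance.
curl -sk https://192.168.1.8/idp/saml/metadata -o cmc-idp-metadata.xml
```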
The last remaining step is to configure the appliances to point to the CMC when performing SSO
operations, following what is described at SAML Integration on page 42. Specifically, you must use the
following data:
• SAML role: https://nozominetworks.com/saml/group-name
• Metadata XML: the file downloaded from the CMC in the previous step
Note: When you use the FQDN as the Nozomi URL, the appliance's hostname (General on page
119) must be the same value as the configured Nozomi URL, otherwise the CMC will not be able to
authorize the login request.
For example, given a Nozomi URL like "https://appliance.address", the hostname must be
"appliance.address"
At this point everything should be set up, allowing you to perform SSO via the CMC on the configured
appliances.
Note that if you have a hierarchy of CMCs in your installation, you can also set up SSO in a
composable way in order to have an SSO chain. For example, in the following scenario:
• the CMC1 is configured to perform SSO on ExternalIdP
• the CMC2 is attached to CMC1
• the Guardian G1 is attached to CMC2
We can have a SSO chain starting at G1, passing through CMC2, then through CMC1 and ending at
ExternalIdP by configuring each pair of machines as described above. In particular we want to have:
• CMC1 has an identity_provider_url specified in the configuration and it is configured to perform SSO
on ExternalIdP
• CMC2 has an identity_provider_url specified in the configuration and it is configured to perform SSO
on CMC1
• G1 is configured to perform SSO on CMC2
Assuming that you want to login into G1 by using ExternalIdP, you will be redirected through all the
configured machines. Once logged in the ExternalIdP, you will be automatically redirected back to G1.
Chapter 14

Remote Collector

In this section we will cover the Remote Collector product, an appliance that is intended to be used to
collect and forward traffic to a Guardian.
A Remote Collector is a low-demanding and low-throughput appliance suitable for installation in
isolated areas (e.g., windmills, solar power fields), where many small sites are to be monitored.

Topics:
• Overview
• Deployment
• Using a Guardian with connected Remote Collectors
• Troubleshooting
• Updating
• Disabling a Remote Collector
• Install the Remote Collector Container on the Cisco Catalyst 9300
| Remote Collector | 238

Overview
The Remote Collector has been designed to be deployed in installations that require monitoring of
many isolated locations. Remote Collectors connect to a Guardian and act as "remote interfaces",
broadening its capture capability, and thus allowing a Guardian to be applied in simple but highly-
distributed scenarios.
A Remote Collector is an appliance meant to run on less performant hardware than the Guardian or
the CMC, and its main task is simply to forward traffic to a Guardian. In some sense, a Remote
Collector is to a Guardian as a Guardian is to a CMC. There are some key differences, though. First of
all, a Remote Collector does not process sniffed traffic in any way; it just forwards it to the Guardian it
is attached to. Second, a Remote Collector has no graphical user interface. Finally, as it runs on less
performant hardware than the Guardian, a Remote Collector has a limitation on the bandwidth that it
can process.
A Guardian can be enabled to receive traffic from the Remote Collectors. When enabled it provides
an additional (virtual) network interface, called "remote-collector", which aggregates the traffic of the
Remote Collectors connected to it. The currently connected Remote Collectors can be inspected from
the "Appliances" tab.
Each Remote Collector is entitled to forward the traffic it sniffs to a set of Guardians. Several Remote
Collectors can connect to a Guardian. Traffic is encrypted over the channel (TLS), so that it cannot be
intercepted by a third party. The Firmware of a Remote Collector receives automatic updates from the
Guardian it is connected to.

Deployment
The first step to setup a Remote Collector is to deploy its Virtual Machine (VM) or its Container.
The Remote Collector VM can be deployed following the steps provided in Installing the Virtual
Machine (VM) on page 14 for the Guardian edition. The main difference is that the Remote Collector
version of the image must be used in the installation.
Alternatively, a Remote Collector Container can be deployed following the steps in Installing the
container on page 15, changing the container name, e.g.:
docker build -t nozomi-rc .

Connecting to a Guardian
Each Remote Collector has to be configured via terminal (ssh or console). First, configure the Remote
Collector network setting by following the same procedure as used for Guardian, which is described
in Set up phase 1 (basic configuration) on page 18. Once you have completed that step, proceed to
connecting it to Guardian as described below. The following assumes that 1.1.1.1 is the IP address
of the Remote Collector.
1. Run command n2os-enable-rc
This command will open port 6000 on the firewall, which the Remote Collector uses to send the
traffic it sniffs. Moreover, a new interface called "remote-collector" will appear in the list of "Network
Interfaces".

2. The synchronization of a Remote Collector towards the Guardian for the purpose of software update
is now enabled as shown in Administration / Synchronization settings. Note down the
Sync token.

Remote Collector configuration


Each Remote Collector has to be configured via terminal (ssh or console). In the following, assume that
1.2.3.4 is the IP address of the Guardian to connect to. The Remote Collector provides a TUI to help
with this setup phase; it can be started with the n2os-tui command (the command is available after
elevating your privileges with the enable-me command).
1. Select the Remote Collector menu.

2. Select the "Set Guardian Endpoint" menu.



3. Insert the IP address of the Guardian you wish to connect to.

4. From the previous menu, select the "Set Connection Sync Token" menu. Insert the token you have
noted down during the Guardian configuration step.

5. Optionally, a bpf-filter can be added by selecting the "Set BPF Filter" menu from the previous menu.

6. Exit from the TUI.

Enable Multiplexing
In addition to a primary Guardian, the Remote Collector can multiplex traffic to a set of secondary
Guardians. Every Guardian will receive the same traffic information from the Remote Collector.
To enable multiplexing it is sufficient to configure at least one secondary Guardian. In the following,
assume that 1.2.3.4 is the IP address of the secondary Guardian to connect to, and ABCD is the
token you noted down during the Guardian configuration step. To configure the secondary Guardian,
issue the following rules in the CLI:

conf.user configure secondary-endpoint sync-to !https://1.2.3.4 ABCD


conf.user configure remote_sensor_secondary_appliance_address 1.2.3.4

Although each Guardian receives the same traffic information, only the primary is authorized to change
the Remote Collector's settings. At each sync, in case of communication failure with the primary
Guardian, the first secondary Guardian that connects successfully will acquire the configuration
capabilities.

Figure 181: A secondary Guardian with no Remote Collector configuration capabilities

Enable traffic forwarding


The previous steps enable the Remote Collector to communicate with the appliance, but traffic
forwarding requires the two to exchange the certificates required for encrypting the sniffed traffic
being forwarded. The following steps explain the simplest way to configure the certificate exchange;
see Configuration of CA-based certificates on page 244 for an alternative approach.
1. Select the "Appliances" tab; the newly added Remote Collector appears in the list.

2. Click it and inspect the pane on the right. The "Last seen packet" property indicates whether traffic
is being forwarded.

3. Click the Refresh button.


The Refresh button appears to the right of the"Not Connected" label:

4. Switch on the "Live" notification at the top.


The button turns into a spinning wheel. After a few minutes, once the procedure is complete, the date
and time of the last seen packet is displayed.

Note that it takes a few minutes to complete the exchange, and the last step completes only once
the Remote Collector sends the first encrypted packet to the appliance. If no traffic is being sniffed
(and therefore forwarded), the procedure remains stuck in the connecting (i.e., spinning wheel) step.
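To confirm on the Guardian side that forwarded traffic is actually arriving, you can also watch the aggregated interface directly from a shell. This is a sketch: it assumes shell access to the Guardian and that tcpdump is available on the appliance.

```shell
# Capture 10 packets from the virtual interface that aggregates
# traffic from all connected Remote Collectors.
tcpdump -ni remote-collector -c 10
```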

Configuration of CA-based certificates


The certificates installed by default in the Guardian and the Remote Collector are self-signed, but it
is also possible to use certificates signed by a CA, if your company policy demands it. Normally a
"certificate chain" composed of the "Root CA" and several "Intermediate CA"s is used to sign a "leaf"
certificate. If you wish to follow this approach, go through the following steps, which have to be
repeated for both the Guardian and the Remote Collector appliances.
1. Put a "leaf" certificate/key pair under /data/ssl/https_nozomi.crt and /data/ssl/
https_nozomi.key.
This step installs your certificate in the appliance.
2. Put the "certificate chain" under /data/ssl/trusted_nozomi.crt.
This step installs your certificate chain in the appliance. Any certificate signed with the chain will be
accepted as valid.

Final configuration
After all the appliances have been configured, they must be rebooted for the configuration to take
effect. Alternatively, it is sufficient to run the following commands in a shell console:
1. service n2osrc stop
on the Guardian
2. service n2osrs stop
on each Remote Collector

Using a Guardian with connected Remote Collectors


In this section we briefly outline some functionalities that a Guardian offers to monitor traffic with a set
of connected Remote Collectors.
The set of connected Remote Collectors can be inspected from the "Appliances" tab of a Guardian. By
selecting a Remote Collector an information pane appears on the right, showing some more detailed
information. The information includes the health status of the Remote Collector, and the timestamp of
the last received payload traffic.
The list of Remote Collector network interfaces is shown at the bottom of the pane. For each network
interface there is a Configure button that allows the user to upload/enable/disable a denylist and set/
unset a BPF filter in the same way as for the Guardian network interfaces.

The provenance of the packets is tracked internally by the Guardian and is displayed in several
locations, such as in the "Nodes" tab of "Network View",

in the "Asset view",

and in the "Alerts" page.



Troubleshooting
This section lists the most useful troubleshooting tips for the Remote Collector.
1. If a Remote Collector is not appearing at all in the Appliances tab:
• Ensure that any firewall(s) between the Guardian and the Remote Collector allow traffic on TCP
port 443 (HTTPS), with the Remote Collector as Source and the Guardian as Target
• Check that the tokens are correctly configured both in the Guardian and the Remote Collector
• Check the /data/log/n2os/n2osjobs.log file of the Remote Collector for connection
errors.
2. If a Remote Collector appears in the Appliances tab, but it sends no traffic (the last seen packet is empty or its value does not update):
• Ensure that firewall(s) between the Guardian and the Remote Collector allow traffic on TCP port 6000, with the Remote Collector as the source and the Guardian as the target
• Check that the certificates have been correctly exchanged between the Guardian and the
Remote Collector, i.e., that the certificate at /data/ssl/https_nozomi.crt of an appliance
appears listed in /data/ssl/trusted_nozomi.crt of the other appliance, or that the
certificate chain has been trusted
• Check the /data/log/n2os/n2os_rs.log file of the Remote Collector for connection errors. In particular, errors related to certificates are logged with the error code coming directly from the OpenSSL library. Once the code is identified, you can check the corresponding explanation at: https://www.openssl.org/docs/man1.1.0/man3/X509_STORE_CTX_get_error.html
• Make sure to restart the n2osrc and n2osrs services every time the configuration or the certificates are changed
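Assuming the openssl command is available in the appliance shell, a quick way to confirm that the certificate exchange is correct is to verify the peer's leaf certificate against the locally trusted chain. Here peer_https_nozomi.crt is a hypothetical local copy of the other appliance's /data/ssl/https_nozomi.crt:

```shell
# Prints "peer_https_nozomi.crt: OK" when the certificate would be accepted;
# otherwise it prints one of the X509_STORE_CTX error codes referenced above
openssl verify -CAfile /data/ssl/trusted_nozomi.crt peer_https_nozomi.crt
```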

Updating
In this section we will cover the release update and rollback operations of a Remote Collector.
Remote Collectors receive automatic updates from the Guardian they are attached to: as with the Guardian and the CMC, the Remote Collector updates to the version of the Guardian if its current firmware version is older than the Guardian's.
Note that the Remote Collector Container does not update automatically.
A Remote Collector has no graphical interface. The only other method for changing the version of a Remote Collector is to use the manual procedure described at Software Update and Rollback on page 212.

Disabling a Remote Collector


Disabling unused Remote Collectors hardens your environment.
1. Log into the Guardian UI that receives data from the Remote Collector, locate the Remote Collector
on the Appliances tab, and remove it by clicking the Delete button.

2. If you have removed all the Remote Collectors in your environment, you can prevent any Remote Collector from sending data to the Guardian. This hardening measure can make your environment more secure. To do so, log into the shell of the Guardian that receives data from the Remote Collector, go to privileged mode, and run:

n2os-disable-rc

Install the Remote Collector Container on the Cisco Catalyst 9300


The Remote Collector Container can be installed on the Catalyst 9300 switch. Extensive knowledge of IOS, IOx, and the ioxclient program is a prerequisite for performing the tasks in this manual. Installation and configuration of the Cisco ioxclient are not covered in this manual, but can be found in the official Cisco documentation. Knowledge of Docker is required; Docker information is covered in the official Docker manual.
IOS and IOx minimum supported versions:
• Cisco IOS Cisco on C9300-48T 17.3.2a
• Cisco IOx Local Manager 1.11.0.4
Minimum operational requirements:
• the Catalyst 9300 must be enabled to host a container; a second storage device may be required
• the ioxclient program
• SSH access to the Catalyst 9300 is needed

• Privileged access to the Catalyst 9300 with "enable" password is needed.


• The Remote Collector Container version must be present on your registry or in the local Docker cache
Important notes:
• The supported configuration provides for the exclusive use of the Catalyst 9300 container
subsystem by the Remote Collector Container. No other containers can run at the same time. This
configuration will use all of the CPU and RAM available for the container's subsystem.
• All of the commands and configurations proposed in this documentation regarding the Catalyst
9300 are only examples. All of the commands must be verified by a qualified network administrator
and can be modified according to your actual running configuration. Incorrect configurations and
commands on the Catalyst 9300 can make it unusable and can cause network disruptions.
• The 192.0.2.0/24 network is for documentation purposes only, and should be changed.
Legend of used parameters

Parameter            Used value         Description
appid                NozomiNetworks_RC  The Remote Collector Container name
guest-ipaddress      192.0.2.10         The Remote Collector Container IP address
app-default-gateway  192.0.2.1          The default gateway for the provided network
$VERSION             n.a.               To be filled with the Remote Collector version

Catalyst 9300 setup


1. Go to privileged mode in the Catalyst 9300:

enable

2. Set up the Catalyst 9300 AppGigabitEthernet interface in trunk mode:

conf t
interface AppGigabitEthernet 1/0/1
switchport mode trunk
exit

3. Configure IOx to host the container:

app-hosting appid NozomiNetworks_RC
 app-vnic management guest-interface 0
  guest-ipaddress 192.0.2.10 netmask 255.255.255.0
 app-default-gateway 192.0.2.1 guest-interface 0
 app-vnic AppGigabitEthernet trunk
  guest-interface 1
   mirroring
end

Prepare the Remote Collector Container as an IOx package


1. Prepare the Remote Collector Container as explained in the chapter "Installing the Container" of this
manual. Upload it on your registry or use a cached version.
2. Write a package.yaml file as in the example below:

descriptor-schema-version: "2.10"
info:
  name: NozomiNetworks_RC
  version: latest
app:
  cpuarch: x86_64
  resources:
    persistent_data_target: "/data"
    network:
      - interface-name: eth0
      - interface-name: eth1
        mirroring: true
    profile: custom
    cpu: 7400
    memory: 2048
    disk: 4000
  startup:
    rootfs: rootfs.tar
    target: ["/usr/local/sbin/startup-container.sh"]
  user: admin
  workdir: /data
  type: docker

The above configuration is used by ioxclient to build the Remote Collector Container for IOx. It enables mirrored ports on the Catalyst 9300 IOx backplane onto the container's eth1 port and sets /data as persistent storage on the Catalyst 9300. Other input ports are not needed for the Remote Collector.
3. Build the Remote Collector Container as an IOx package; in the same directory as package.yaml, run:

ioxclient docker package --skip-envelope-pkg your-container-registry.com/NozomiNetworks_RC:"$VERSION" .

This creates the package.tar file. You must upload this file directly onto the IOx as covered by the Cisco IOx documentation. Important: the app must be stopped, then activated and started again after every Catalyst 9300 configuration change or redeploy.
4. Import the previously generated package in the Catalyst 9300 IOx subsystem as described in the Cisco IOx documentation. On the Catalyst 9300 SSH console, activate and start the application with the commands:

app-hosting stop appid NozomiNetworks_RC
app-hosting activate appid NozomiNetworks_RC
app-hosting start appid NozomiNetworks_RC

5. Access the container, which is reachable only through the Catalyst 9300 console, by running:

app-hosting connect appid NozomiNetworks_RC session

6. Proceed to the Remote Collector configuration.


Chapter 15

Configuration

This section describes the configuration of Nozomi Networks Solution components in detail.
Some features can be quickly configured using the Features Control Panel (see Features Control Panel on page 252). You can also issue configuration rules via shell using the CLI. For each configuration rule, we will cover all the required details.

Topics:
• Features Control Panel
• Editing appliance configuration
• Basic configuration rules
• Configuring the Garbage Collector
• Configuring alerts
• Configuring Incidents
• Configuring nodes
• Configuring assets
• Configuring links
• Configuring variables
• Configuring protocols
• Configuring decryption
• Configuring trace
• Configuring continuous trace
• Configuring Time Machine
• Configuring retention
• Configuring Bandwidth Throttling
• Configuring Remote Collector Bandwidth Throttling
• Configuring synchronization
• Configuring slow updates
• Configuring session hijacking protection
• Configuring Passwords

Features Control Panel


The Features Control Panel gives an overview of the current status of system feature configuration and allows you to fine-tune specific values.
In the General tab, you can enable general features, such as whether to generate assets from IPv6
nodes.

The Retention tab allows you to select a specific number (the Retention level) for historical data persistence. In some cases, you can either completely disable a feature's retention or enable the advanced options that provide more specific settings.

Expiration: allows you to select a specific number of days for historical data persistence. Data can also be persisted forever.

Space retention level: allows you to select a specific space size for historical data persistence.

Editing appliance configuration


In CMC and Guardian appliances, use the CLI to configure the Nozomi Networks solution.
You can access the CLI in two ways:
• use the cli command in a text-console when connected to the appliance, either directly or through
SSH
• in the web GUI, select Administration > CLI (see Command Line Interface (CLI) on page 99)
Examples:
A command issued via the cli command in a shell

A command issued (through pipe) to the cli command in a shell

There are cases, for example on Remote Collector appliances, where the cli command doesn't work; in those cases, or to fine-tune user-defined configuration or mass-import rules from other systems, it's required to manually edit /data/cfg/n2os.conf.user. In this section we will see how to change and apply a configuration rule.
Please log into the shell console, either directly or through SSH, and issue the following commands.
• Use vi or nano to edit /data/cfg/n2os.conf.user.
• Edit a configuration rule with the text editor; see the next sections for some examples.
• Write configuration changes to disk and exit the text editor.
The next sections cover all the necessary details about the supported configuration rules.

Basic configuration rules

Set traffic filter

Product Guardian
Syntax conf.user configure bpf_filter <bpf_expression>
Description Set the BPF filter to apply on incoming traffic to limit the type and amount of
data processed by the appliance.

Parameters • bpf_expression: the Berkeley Packet Filter expression to apply


on incoming traffic. A BPF syntax reference can be accessed on the
appliance at https://<appliance_ip>/#/bpf_guide

Where CLI

To apply In a shell console execute: service n2osids stop
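For instance, a hedged example (the subnet and port in the filter expression are illustrative) that limits processing to a plant subnet while excluding backup traffic might look like:

```shell
# Configuration rule, issued via the CLI or added to /data/cfg/n2os.conf.user
conf.user configure bpf_filter net 10.10.0.0/16 and not port 10000

# Apply the change
service n2osids stop
```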

Enable or disable management filters

Product Guardian
Syntax conf.user configure mgmt_filters [on|off]
Description With this rule you can switch off the filters on packets that come from/to
N2OS itself. Choose 'off' if you want to disable the management filters
(default: on).

Where CLI

To apply In a shell console execute: service n2osids stop

Enable or disable TCP/UDP deduplication

Product Guardian
Syntax conf.user configure probe deduplication enabled [true|
false]
Description Enables or disables the deduplication analysis that N2OS performs on TCP/
UDP packets. It can be either true, to enable the feature, or false, to disable
it (default: true).

Where CLI

To apply In a shell console execute: service n2osids stop

Set TCP deduplication time delta

Product Guardian
Syntax conf.user configure probe deduplication tcp_max_delta
<delta>
Description Set the desired maximum time delta, in milliseconds, to consider a
duplicated TCP packet.

Parameters • delta: The value of the maximum time delta (default: 1)



Where CLI

To apply In a shell console execute: service n2osids stop

Set UDP deduplication time delta

Product Guardian
Syntax conf.user configure probe deduplication udp_max_delta
<delta>
Description Set the desired maximum time delta, in milliseconds, to consider a
duplicated UDP packet.

Parameters • delta: The value of the maximum time delta (default: 1)

Where CLI

To apply In a shell console execute: service n2osids stop
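Putting the three deduplication rules above together, a sketch that keeps the analysis enabled but widens the TCP window to 5 ms (the values are illustrative) would be:

```shell
conf.user configure probe deduplication enabled true
conf.user configure probe deduplication tcp_max_delta 5
conf.user configure probe deduplication udp_max_delta 1

# Apply the changes
service n2osids stop
```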

Rename fallback zones

Product Guardian
Syntax conf.user configure vi zones default [private|public]
<zone_name>
Description Set the private or public fallback zone name, for nodes not matching any
zone. Details on zones feature can be viewed in Network graph on page 69.
Remark: zones can be configured through the GUI, which is the preferred
way. Refer to Zone configurations on page 117

Parameters • zone_name: the name of the private or public fallback zone

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Add Zone

Product Guardian
Syntax ids configure vi zones create <subnet>[,<subnet>]
<zone_name>
Description Add a new zone containing all the nodes in one or more specified
subnetworks. More subnetworks can be concatenated using commas. The
subnetworks can be specified using the CIDR notation (<ip>/<mask>) or
by indicating the end IPs of a range (both ends are included: <low_ip>-
<high_ip>).
Remark: zones can be configured through the GUI, which is the preferred
way. Refer to Zone configurations on page 117

Parameters • subnet: The subnetwork or subnetworks assigned to the zone; both


IPv4 and IPv6 are supported
• zone_name: The name of the zone

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.
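As an illustrative sketch (the subnets and zone name are examples), a zone covering one CIDR subnet plus an IP range could be created with:

```shell
# CIDR and range notations can be mixed, separated by commas
ids configure vi zones create 10.10.1.0/24,10.10.2.10-10.10.2.50 PlantFloor
```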

Assign a level to a zone

Product Guardian
Syntax ids configure vi zones setlevel <level> <zone_name>
Description Assigns the specified level to a zone. All nodes pertaining to the given zone
will be assigned the level.
Remark: zones can be configured through the GUI, which is the preferred
way. Refer to Zone configurations on page 117

Parameters • level: The level assigned to the zone


• zone_name: The name of the zone

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set the nodes ownership for a zone

Product Guardian
Syntax ids configure vi zones setis_public [true|false]
<zone_name>
Description Sets the specified nodes ownership for a zone. It can be either true, for
public ownership, or false, for private ownership. All nodes belonging to the
given zone inherit the value, overwriting their current setting.
Remark: zones can be configured through the GUI, which is the preferred
way. Refer to Zone configurations on page 117

Parameters • zone_name: The name of the zone

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Assign a security profile to a zone

Product Guardian
Syntax ids configure vi zones setsecprofile [low|medium|high|
paranoid] <zone_name>
Description Assigns the specified security profile to a zone. The visibility of the alerts
generated within the zone will follow the configured security profile.
Refer to Security Profile.

Parameters • zone_name: The name of the zone



Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Add custom protocol

Product Guardian
Syntax conf.user configure probe custom-protocol <name> [tcp|
udp] <port>
Description Add a new protocol specifying a port and a transport layer. Names shall
always be unique, so when defining a custom protocol both for udp and tcp,
use two different names.

Parameters • name: The name of the protocol, it will be displayed through the user
interface; DO NOT use a protocol name already used by SG. E.g. one
can use MySNMP, or Myhttp
• port: The transport layer port used to identify the custom protocol

Where CLI

To apply It is applied automatically
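For example (the protocol name and port are illustrative), a proprietary historian protocol listening on TCP port 4850 could be declared as:

```shell
# The name must be unique and not clash with a built-in protocol name
conf.user configure probe custom-protocol MyHistorian tcp 4850
```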

Disabling a protocol

Product Guardian
Syntax conf.user configure probe protocol <name> enable false
Description Completely disables a protocol. This can be useful to fine tune the appliance
for specific needs.

Parameters • name: The name of the protocol to disable

Where CLI

To apply It is applied automatically

Set IP grouping

Product Guardian
Syntax conf.user configure probe ipgroup <ip>/<mask>
Description This command permits grouping multiple IP addresses into one single
node. It is particularly useful when a large network of clients
accesses the SCADA/ICS system. To provide a clearer view and get an
effective learning phase, you can map all clients to a unique node simply by
specifying the netmasks (one line for each netmask). The Trace on page 50
will still show the raw IPs in the provided trace files.
Warning: This command merges all nodes information into one in an
irreversible way, and the information about original nodes is not kept.

Parameters • ip/mask: The subnetwork identifier used to group the IP addresses

Where CLI

To apply In a shell console, execute both: service n2osids stop AND service
n2ostrace stop
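A hedged example (the netmask is illustrative): to collapse an office client subnet into a single node, add one rule per netmask and then restart both services:

```shell
conf.user configure probe ipgroup 192.168.50.0/24

# Apply the change (both services must be restarted)
service n2osids stop
service n2ostrace stop
```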

Set IP grouping for Public Nodes

Product Guardian
Syntax conf.user configure probe ipgroup public_ips <ip>
Description This command permits grouping all public IP addresses into one single node
(for instance, use 0.0.0.0 as the 'ip' parameter). It is particularly
useful when the monitored network includes nodes that have routing to the
Internet. The Trace on page 50 will still show the raw IPs in the provided
trace files.
Warning: This command merges all nodes information into one in an
irreversible way, and the information about original nodes is not kept.

Parameters • ip: The ip to map all Public Nodes to

Where CLI

To apply In a shell console, execute both: service n2osids stop AND service
n2ostrace stop

Skip Public Nodes Grouping for a subnet

Product Guardian
Syntax conf.user configure probe ipgroup public_ips_skip <ip>/
<mask>
Description This is useful when the monitored network has a public addressing that has
to be monitored (i.e. public addressing used as private or public addresses
that are in security denylists).

Parameters • ip/mask: The subnetwork identifier to skip

Where CLI

To apply In a shell console, execute both: service n2osids stop AND service
n2ostrace stop

Set special Private Nodes allowlist

Product Guardian
Syntax conf.user configure vi private_ips <ip>/<mask>
Description This rule will set the is_public property of nodes matching the provided mask
to false. This is useful when the monitored network has a public addressing
used as private (e.g. violation of RFC 1918).

Parameters • ip/mask: The subnetwork identifier to treat as private; both IPv4 and
IPv6 are supported

Where CLI

To apply In a shell console execute: service n2osids stop



Set GUI logout timeout

Products CMC, Guardian


Syntax conf.user configure users max_idle_minutes
<timeout_in_minutes>
Description Change the default inactivity timeout of the GUI. This timeout is used to
decide when to log out the current session when the user is not active.

Parameters • timeout_in_minutes: amount of minutes to wait before logging out.


The default is 10 minutes.

Where CLI

To apply It is applied automatically
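For instance, to extend the inactivity timeout from the 10-minute default to 30 minutes (the value is illustrative):

```shell
conf.user configure users max_idle_minutes 30
```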

Enable Syslog capture feature

Product Guardian
Syntax conf.user configure probe protocol syslog capture_logs
[true|false]
Description With this configuration rule you can enable (option true) the passive
capture of syslog events. This is useful when you want to forward them to a
SIEM; for further details see Syslog Forwarder on page 112

Where CLI

To apply It is applied automatically

Enable Guardian HA

Product Guardian
Syntax conf.user configure guardian replica-of
<other_guardian_id>
Description With this configuration rule you can enable Guardian HA mode for two
Guardians that sniff the same traffic and are connected to the same CMC.
During normal operations, only the primary Guardian syncs with the CMC;
if it stops synchronizing, the secondary Guardian will start synchronizing the
records from the last primary Guardian update. This rule should only be
configured on the secondary appliance.

Parameters • other_guardian_id: The id of the other Guardian, it can be found


on the CMC with the query appliances | where host ==
<appliance_hostname> | select id

Where CLI

To apply In a shell console execute: service n2osids stop

Disabling Vulnerability Assessment for some nodes

Product Guardian
Syntax conf.user configure va_notification matching [id|label|
zone|type|vendor]=<value> discard

Description With this configuration rule you can disable Vulnerability Assessment for
nodes matching the specified rules. The effect of this configuration rule is to
discard the matching of CVE identifiers. The types are as follows.
• id: the id of a node, it can be an IP address, a netmask in the CIDR
format or a MAC address.
• label: the label of a node.
• zone: the zone in which a node is located.
• type: the type of a node.
• vendor: the vendor of a node.

Parameters • value: If a simple string is specified the match will be performed with an
"equal to" case-sensitive criterion. The matching supports two operators:
• ^: starts with
• [: contains
These operators must be specified right after the = symbol and their match
is case-insensitive.
Examples:
• va_notification matching id=192.168.1.123 discard
• va_notification matching id=192.168.1.0/24 discard
• va_notification matching label=^abc discard

Where CLI

To apply In a shell console execute: service n2osva stop

Disabling CVE generation

Product Guardian
Syntax conf.user configure va cve enable false
Description With this configuration rule you can disable CVE generation. CPE
generation will still be active; as a consequence, CVEs on the CMC will still
be calculated. This setting is useful for saving resources on a Guardian when
CVEs are only used at the CMC level.

Where CLI

To apply In a shell console execute: service n2osva stop

Enabling IPv6 Assets

Product Guardian
Syntax conf.user configure vi ipv6_assets enabled
Description With this configuration rule you can enable asset generation also for IPv6
nodes.

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Change the maximum percentage of Variables in the Network Elements pool

Product Guardian
Syntax conf.user configure vi machine_limits_variables_quota
<n>
Description With this configuration rule you can change the maximum percentage of
Variables in the Network Elements pool, the default is 0.6 meaning that no
more than 60% of Network Elements can be Variables.

Parameters • n: the percentage of variables expressed as a number from 0.0 to 1.0,


e.g. vi machine_limits_variables_quota 0.7

Where CLI

To apply In a shell console execute: service n2osids stop

Tuning Unicorn Web Server workers

Products CMC, Guardian


Syntax conf.user configure unicorn_workers <n>
Description With this configuration rule you can change the number of Unicorn Web
Server workers. With a higher worker count the CMC/Guardian can
handle more Web UI requests concurrently, but memory usage will increase
by 600MB per worker on average.

Parameters • n: The new number of workers

Where CLI

To apply In a shell console execute: service unicorn stop



Configuring the Garbage Collector


This section describes how to configure the Environment Garbage Collector (GC). The Garbage
Collector lets the system discard nodes, assets, and links that are no longer useful, thus saving
system resources.

Clean up old ghost nodes

Product Guardian
Syntax conf.user configure vi gc old_ghost_nodes <seconds>
Description Set the threshold after which idle nodes that are also not confirmed and not
learned are discarded by the garbage collector.
NOTE: in Adaptive Learning, the GC works also if nodes are learned, since
they all are.

Parameters • seconds: Number of seconds after which cleanup occurs (the default is
3600, the equivalent of one hour).

Where CLI

To apply It is applied automatically
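For example, to make the garbage collector more aggressive and discard idle ghost nodes after 30 minutes instead of the one-hour default (the value is illustrative):

```shell
# 1800 seconds = 30 minutes
conf.user configure vi gc old_ghost_nodes 1800
```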

Clean up old public nodes

Product Guardian
Syntax conf.user configure vi gc old_public_nodes <seconds>
Description Determines how long to keep public nodes that are inactive. Expressed in
seconds.

Parameters • seconds: Number of seconds after which cleanup occurs (default is


259200, the equivalent of three days).

Where CLI

To apply It is applied automatically

Clean up old inactive nodes

Product Guardian
Syntax conf.user configure vi gc old_inactive_nodes <seconds>
Description Determines how long to keep nodes that are inactive. Expressed in
seconds. Inactivity is calculated as the difference between the current time
and the last activity time.
Note: When a node that has been deleted by the garbage collector appears
again in the network it will be considered new, as a consequence, according
to the learning mode, an alert could be raised. For a better result, use
Adaptive Learning and choose a reasonably long interval for this setting.

Parameters • seconds: Number of seconds after which cleanup occurs (by default it's
disabled).

Where CLI

To apply It is applied automatically

Clean up old inactive links

Product Guardian
Syntax conf.user configure vi gc old_inactive_links <seconds>
Description Determines how long to keep links that are inactive. Expressed in seconds.
Inactivity is calculated as the difference between the current time and the
last activity time.
Note: When a link that has been deleted by the garbage collector appears
again in the network it will be considered new, as a consequence, according
to the learning mode, an alert could be raised. For a better result, use
Adaptive Learning and choose a reasonably long interval for this setting.

Parameters • seconds: Number of seconds after which cleanup occurs (by default it's
disabled).

Where CLI

To apply It is applied automatically

Clean up old ghost links

Product Guardian
Syntax conf.user configure vi gc old_ghost_links <seconds>
Description Determines how long to wait before removing inactive ghost links. A ghost
link is one that has not shown any application payload since its creation.
This could be a connection attempt whose endpoint is not responding on the
specified port; or it could be a link with a successful handshake but without
application data transmitted (in this case, transferred data would still be
greater than 0).

Parameters • seconds: Number of seconds after which cleanup occurs (by default it's
disabled).

Where CLI

To apply It is applied automatically

Clean up old inactive variables

Product Guardian
Syntax conf.user configure vi gc old_inactive_variables
<seconds>
Description Determines how long to keep variables that are inactive. Expressed in
seconds. Inactivity is calculated as the difference between the current time
and the last activity time.
Note: When a variable that has been deleted by the garbage collector
appears again in the network it will be considered new, as a consequence,
according to the learning mode, an alert could be raised. For a better result,
use Adaptive Learning and choose a reasonably long interval for this setting.

Parameters • seconds: Number of seconds after which cleanup occurs (by default it's
disabled).

Where CLI

To apply It is applied automatically

Clean up old sessions

Product Guardian
Syntax conf.user configure vi gc sessions_may_expire_after
<seconds>
Description Determines how long to wait before a session is considered stale and its
resources may be collected. Expressed in seconds.

Parameters • seconds: Number of seconds after which clean up may occur. By


default, set to 100 seconds.

Where CLI

To apply It is applied automatically



Configuring alerts

Configure maximum number of victims

Product Guardian
Syntax conf.user configure alerts max_victims <num>
Description Define the maximum number of victims that each alert can contain. Victims
exceeding the given value are not stored. The default value is 1000.

Parameters • num: Maximum number of victims stored for each alert

Where CLI

To apply It is applied automatically

Configure maximum number of attackers

Product Guardian
Syntax conf.user configure alerts max_attackers <num>
Description Define the maximum number of attackers that each alert can contain.
Attackers exceeding the given value are not stored. The default value is 1000.

Parameters • num: Maximum number of attackers stored for each alert

Where CLI

To apply It is applied automatically

Show/hide credentials

Product Guardian
Syntax conf.user configure alerts hide_username_on_alerts
[true|false]
Syntax conf.user configure alerts hide_password_on_alerts
[true|false]
Description These flags determine whether usernames or passwords should be presented
in the alert. By default, the credentials are visible. Affected alert types:
SIGN:MULTIPLE-ACCESS-DENIED, SIGN:MULTIPLE-UNSUCCESSFUL-
LOGINS, SIGN:PASSWORD:WEAK.

Where CLI

To apply It is applied automatically
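A sketch that hides both usernames and passwords in the affected alert types:

```shell
conf.user configure alerts hide_username_on_alerts true
conf.user configure alerts hide_password_on_alerts true
```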

Configure maximum length of description

Product Guardian
Syntax conf.user configure alerts max_description_length
<nchars>

Description Define the maximum number of characters that the description of each
incident can contain. When an incident is appending an alert description, the
append is performed only if the incident description length is smaller than
the limit.

Parameters • nchars: Maximum number of characters allowed in the description of


each incident

Where CLI

To apply It is applied automatically

Configure MITRE ATT&CK mapping rules

Product Guardian
Syntax for MITRE ATT&CK ICS mappings conf.user configure alerts mitre_attack ics_mapping <path>
Syntax for MITRE ATT&CK Enterprise mappings conf.user configure alerts mitre_attack enterprise_mapping <path>
Description Customize the rules used to assign MITRE ATT&CK techniques to alerts
by means of an external file. The file has the following format. Each line
defines a rule; the rule specifies an alert type ID followed by a semicolon
and a comma-separated list of MITRE ATT&CK technique IDs. For instance,
the line SIGN:PROGRAM:UPLOAD;T0843,T0853 instructs Guardian that
alerts of type SIGN:PROGRAM:UPLOAD must be assigned both the T0843
and T0853 MITRE ATT&CK techniques.

Parameters • path: The path to the file containing the rules

Where CLI

To apply It is applied automatically
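As a sketch (the file path is illustrative; the mapping line is the one given in the description above), a custom ICS mapping file and the rule pointing to it might look like:

```shell
# Write a mapping file, one rule per line:
#   <alert type ID>;<comma-separated MITRE ATT&CK technique IDs>
cat > /data/cfg/mitre_ics_mapping.txt <<'EOF'
SIGN:PROGRAM:UPLOAD;T0843,T0853
EOF

# Point Guardian at the custom ICS mapping file
conf.user configure alerts mitre_attack ics_mapping /data/cfg/mitre_ics_mapping.txt
```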



SIGN:MULTIPLE-ACCESS-DENIED
In this section we will configure the Multiple Access Denied alert.
The detection is enabled by default and works according to the following parameters.

Set interval and threshold - 1

Product Guardian
Syntax conf.user configure vi multiple_events protocol
<protocol> <interval> <threshold>
Description Set the detection configuration for a specific protocol.

Parameters • protocol: Name of the protocol to configure. Can be 'all' to apply the
configuration globally.
• interval: maximum time in seconds for the events to happen in order to
trigger the detection. Default: 30 s for OT devices, 15 s for the rest.
• threshold: number of times the event must happen in order to trigger
the detection. Default: 20 for OT devices, 40 for the rest.

Where CLI

To apply It is applied automatically

For example, we can configure the detection of a multiple access denied alert for the SMB protocol with
an interval of 10 seconds and threshold of 35 attempts with the following command:

conf.user configure vi multiple_events protocol smb 10 35



SIGN:MULTIPLE-UNSUCCESSFUL-LOGINS
In this section we will configure the Multiple Unsuccessful Logins alert.
The detection is enabled by default and works according to the following parameters.

Set interval and threshold - 2

Product Guardian
Syntax conf.user configure vi multiple_events protocol
<protocol> <interval> <threshold>
Description Set the detection configuration for a specific protocol.

Parameters • protocol: Name of the protocol to configure. Can be 'all' to apply the
configuration globally.
• interval: maximum time in seconds for the events to happen in order to
trigger the detection. Default: 30 s for OT devices, 15 s for the rest.
• threshold: number of times for the event to happen in order to trigger
the detection. Default: 20 for OT devices, 40 for the rest.

Where CLI

To apply It is applied automatically

For example, we can configure the detection of a multiple unsuccessful login alert for the SMB protocol
with an interval of 10 seconds and threshold of 35 attempts with the following command:

conf.user configure vi multiple_events protocol smb 10 35



SIGN:OUTBOUND-CONNECTIONS
In this section we will configure the outbound connections limit.
Guardian can detect a sudden increase of outbound connections from a specific learned source node.
An alert is raised by default when 100 new outbound connections are observed over a 60-second
interval.
By default, the detection is only performed when the node is being protected. Optionally, the detection
can also be performed while the node is being learned.
Optionally, we can prevent the system from creating additional destination nodes in order to preserve
resources. This node creation limit is disabled by default.
Some of the configuration parameters listed below can be applied either globally or to individual nodes.
The configuration of an individual node has higher priority and overrides the global configuration.

Perform detection when source node is being learned

Product Guardian
Syntax conf.user configure vi outbound_connections_limit
learning [true|false]
Description Specify whether the detection must be performed also while the source
node is being learned, or only while it is being protected.
Select true for detection also while the source node is being learned, or
false for detection only while it is being protected. Default: false.

Where CLI

To apply It is applied automatically

Enable/disable nodes creation limit

Product Guardian
Syntax (global) conf.user configure vi outbound_connections_limit
enabled [true|false]
Syntax (individual node) conf.user configure vi node <ip>
outbound_connections_limit enabled [true|false]
Description Enable (option true) or disable (option false) the destination nodes
creation limit.

Parameters • ip: The IP of the source node

Where CLI

To apply It is applied automatically

Set connections count

Product Guardian
Syntax (global) conf.user configure vi outbound_connections_limit
connections <count>
Syntax (individual node) conf.user configure vi node <ip>
outbound_connections_limit connections <count>
Description Set the outbound connections limit, in number of connections.

Parameters • ip: The IP of the source node
• count: The amount of outbound connections from a node to be
observed in order to trigger the detection (default: 100)

Where CLI

To apply It is applied automatically

Set observation interval

Product Guardian
Syntax (global) conf.user configure vi outbound_connections_limit
interval <value>
Syntax (individual node) conf.user configure vi node <ip>
outbound_connections_limit interval <value>
Description Set the outbound connections observation interval, in seconds.

Parameters • ip: The IP of the source node
• value: The time interval during which the new outbound connections
are observed.

Where CLI

To apply It is applied automatically

For example, we can configure the outbound connections limit to prevent a source node from creating
additional destination nodes when 70 outbound connections are observed during a 30-second interval
with the following configuration commands:

conf.user configure vi outbound_connections_limit enabled true


conf.user configure vi outbound_connections_limit connections 70
conf.user configure vi outbound_connections_limit interval 30
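
To apply the same limits to a single source node only, overriding the global configuration, the individual node syntax can be used instead (the IP address below is illustrative):

conf.user configure vi node 10.0.1.15 outbound_connections_limit enabled true
conf.user configure vi node 10.0.1.15 outbound_connections_limit connections 70
conf.user configure vi node 10.0.1.15 outbound_connections_limit interval 30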

SIGN:TCP-SYN-FLOOD
In this section we will configure the TCP SYN flood detection.
A node is considered to be under a TCP SYN flood attack when:
• The number of incoming connection attempts during the observation interval is greater than the
detection counter
• And, during the observation interval, the ratio between established connections and total number of
connection attempts falls below the trigger threshold
A TCP SYN flood attack is considered terminated when:
• The number of incoming connection attempts during the observation interval returns below the
detection counter
• Or, during the observation interval, the ratio between established connections and total number of
connection attempts returns above the exit threshold

Set detection counter

Product Guardian
Syntax conf.user configure vi tcp_syn_flood_detection counter
<value>
Description Set the connection attempts counter, in number of connections.

Parameters • value: The amount of connection attempts to be observed in order to
trigger the detection (default: 100)

Where CLI

To apply It is applied automatically

Set observation interval

Product Guardian
Syntax conf.user configure vi tcp_syn_flood_detection interval
<value>
Description Set the observation interval, in seconds.

Parameters • value: The time interval during which the connection attempts are
observed, in seconds (default: 10).

Where CLI

To apply It is applied automatically

Set trigger threshold

Product Guardian
Syntax conf.user configure vi tcp_syn_flood_detection
trigger_threshold <value>
Description Set the trigger threshold.

Parameters • value: The ratio between established connections and connection
attempts which, when reached, triggers the flood detection (default: 0.1).

Where CLI

To apply It is applied automatically

Set exit threshold

Product Guardian
Syntax conf.user configure vi tcp_syn_flood_detection
exit_threshold <value>
Description Set the exit threshold.

Parameters • value: The ratio between established connections and connection
attempts which, when reached, terminates the flood detection
(default: 0.4).

Where CLI

To apply It is applied automatically

For example, with the commands below the TCP SYN flood detection would trigger when 200
connection attempts are observed during a 15-second observation interval and the ratio between
established connections and connection attempts falls below 0.3. The detection would then terminate
when the ratio returns above 0.5.

conf.user configure vi tcp_syn_flood_detection counter 200


conf.user configure vi tcp_syn_flood_detection interval 15
conf.user configure vi tcp_syn_flood_detection trigger_threshold 0.3
conf.user configure vi tcp_syn_flood_detection exit_threshold 0.5

SIGN:UDP-FLOOD
In this section we will configure the UDP flood detection.
The detection is enabled by default and it triggers when a victim receives 20,000 UDP packets per
second for at least 10 seconds.

Enable/disable detection

Product Guardian
Syntax conf.user configure vi udp_flood_detection enabled
[true|false]
Description Enable (option true) or disable (option false) the UDP flood detection.

Where CLI

To apply It is applied automatically

Set detection threshold

Product Guardian
Syntax conf.user configure vi udp_flood_detection
packets_per_second <threshold>
Description Set the UDP flood detection threshold, in packets per second.

Parameters • threshold: The amount of UDP packets per second to be transmitted
to a victim for at least 10 seconds in order to trigger the detection
(default: 20000)

Where CLI

To apply It is applied automatically

For example, we can configure the UDP flood detection to trigger when a victim receives 40,000 UDP
packets per second for at least 10 seconds with the following configuration command:

conf.user configure vi udp_flood_detection packets_per_second 40000



SIGN:NETWORK-SCAN

DDOS Defense
In this section we will configure the detection of a DDOS attack.
The detection is enabled by default; an alert is raised at most once every 5 minutes, when more than
20 nodes have been created within one minute.

Set analysis interval

Product Guardian
Syntax conf.user configure vi ddos_defense interval <threshold>
Description Set the analysis interval for the detection.

Parameters • threshold: The analysis interval, measured in minutes. Default: one
minute.

Where CLI

To apply In a shell console execute: service n2osids stop

Set max created nodes

Product Guardian
Syntax conf.user configure vi ddos_defense max_created_nodes
<max_nodes>
Description Number of created nodes that, if created in less time than the analysis
interval, will trigger the alert.

Parameters • max_nodes: Number of created nodes that trigger the detection. Default:
20.

Where CLI

To apply In a shell console execute: service n2osids stop

Set alert threshold

Product Guardian
Syntax conf.user configure vi ddos_defense alert_threshold
<threshold>
Description Interval to wait in order to raise an additional alert.

Parameters • threshold: Minutes to wait before another alert is raised. Default: 5
minutes.

Where CLI

To apply In a shell console execute: service n2osids stop

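For example, we can configure the DDOS defense to raise an alert when more than 50 nodes are created within a 2-minute analysis interval, with at most one alert every 10 minutes, using the following commands (the values are illustrative):

conf.user configure vi ddos_defense interval 2
conf.user configure vi ddos_defense max_created_nodes 50
conf.user configure vi ddos_defense alert_threshold 10

Then, in a shell console, execute: service n2osids stop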

TCP Port Scan


In this section we will configure the detection for the TCP Port scan.
The detection is enabled by default and an alert is emitted according to the configuration parameters
described below.

Set attempts threshold

Product Guardian
Syntax conf.user configure vi port_scan_tcp attempts_threshold
<threshold>
Description Set the number of scan attempts that will trigger the alert.

Parameters • threshold: Number of scan attempts that will trigger the alert. Default:
100.

Where CLI

To apply It is applied automatically

Set observation interval

Product Guardian
Syntax conf.user configure vi port_scan_tcp interval <interval>
Description Set the analysis interval for the detection algorithm.

Parameters • interval: Analysis interval in seconds for the detection algorithm.
Default: 10 seconds.

Where CLI

To apply It is applied automatically

Set trigger threshold

Product Guardian
Syntax conf.user configure vi port_scan_tcp trigger_threshold
<threshold>
Description Set the trigger threshold for the detection algorithm. An alert is raised only if
the ratio between the number of established connections and total attempts
is smaller than the trigger threshold.

Parameters • threshold: Trigger threshold as described above for the detection
algorithm. Default: 0.1.

Where CLI

To apply It is applied automatically

Set out of sequence threshold

Product Guardian
Syntax conf.user configure vi port_scan_tcp
out_of_sequence_threshold_number <threshold>
Description Set the number of out of sync fragments which trigger this feature of the
detection algorithm.

Parameters • threshold: Number of out of sync fragments. Default: 10.

Where CLI

To apply It is applied automatically

Set out of sequence interval

Product Guardian
Syntax conf.user configure vi port_scan_tcp
out_of_sequence_interval <interval>
Description Set the analysis interval of the out of sync recognition feature of the
detection algorithm.

Parameters • interval: Analysis interval in seconds. Default: 10 seconds.

Where CLI

To apply It is applied automatically

Set out of sequence max rate

Product Guardian
Syntax conf.user configure vi port_scan_tcp
out_of_sequence_threshold_max_rate <rate>

Description Set the period of time during which additional alerts due to out of sync
fragments are not raised.

Parameters • rate: Timespan in minutes to mute additional alerts due to out of sync
fragments. Default: 5 minutes.

Where CLI

To apply It is applied automatically

Set ignored port ranges

Product Guardian
Syntax conf.user configure vi port_scan_tcp ignore_ports
<port_ranges>[,<port_ranges>]
Description Set the victims' ports or port ranges which must not participate in the
detection algorithm.

Parameters • port_ranges: Ports can be entered as a list of comma-separated
values, and ranges as a pair of ports separated by a dash. Example:
1000,1200-1300,1500. Default: none.

Where CLI

To apply It is applied automatically

For example, we can configure the detection for the TCP Port scan with the following commands:

conf.user configure vi port_scan_tcp attempts_threshold 50
conf.user configure vi port_scan_tcp interval 20
conf.user configure vi port_scan_tcp trigger_threshold 0.2
conf.user configure vi port_scan_tcp out_of_sequence_threshold_number 15
conf.user configure vi port_scan_tcp out_of_sequence_interval 20
conf.user configure vi port_scan_tcp out_of_sequence_threshold_max_rate 10
conf.user configure vi port_scan_tcp ignore_ports 1000,1200-1300,1500

UDP Port Scan


In this section we will configure the detection for the UDP Port scan.
The detection is enabled by default and an alert is emitted according to the configuration parameters
described below.

Set fast threshold

Product Guardian
Syntax conf.user configure vi port_scan_udp fast_threshold
<threshold>
Description Set the number of attempts which will trigger the alert for the fast detection
algorithm.

Parameters • threshold: Attempts triggering the alert for the fast detection algorithm.
Default: 500.

Where CLI

To apply It is applied automatically

Set slow interval

Product Guardian
Syntax conf.user configure vi port_scan_udp slow_interval
<interval>
Description Set the analysis interval for the slow detection algorithm.

Parameters • interval: Analysis interval for the slow detection algorithm. Default: 60
seconds.

Where CLI

To apply It is applied automatically

Set fast interval

Product Guardian
Syntax conf.user configure vi port_scan_udp fast_interval
<interval>
Description Set the analysis interval for the fast detection algorithm.

Parameters • interval: Analysis interval for the fast detection algorithm. Default: 1
second.

Where CLI

To apply It is applied automatically

Set fast different ports threshold

Product Guardian
Syntax conf.user configure vi port_scan_udp
fast_different_ports_threshold <threshold>
Description Set the number of different ports that should be tested by the attacker for the
fast detection algorithm to trigger the alert.

Parameters • threshold: Minimum number of different ports to be tested by the
attacker to trigger the alert for the fast detection algorithm. Default: 250.

Where CLI

To apply It is applied automatically

Set unreachable ratio

Product Guardian

Syntax conf.user configure vi port_scan_udp unreachable_ratio <ratio>
Description The slow detection algorithm will issue an alert only if the ratio between the
number of unreachable requests and the total requests is greater than this
value.

Parameters • ratio: Critical ratio for the slow detection algorithm to trigger an
alert. An alert is raised if the ratio between the number of unreachable
requests and the total requests is greater than the critical ratio. Default:
0.1.

Where CLI

To apply It is applied automatically

For example, we can configure the detection for the UDP Port scan with the following commands:

conf.user configure vi port_scan_udp slow_threshold 200
conf.user configure vi port_scan_udp slow_interval 30
conf.user configure vi port_scan_udp fast_threshold 400
conf.user configure vi port_scan_udp fast_different_ports_threshold 150
conf.user configure vi port_scan_udp fast_interval 3
conf.user configure vi port_scan_udp unreachable_ratio 0.2

Ping Sweep
In this section we will configure the detection for the ICMP/Ping Sweep scan.
The detection is enabled by default and an alert is emitted when more than 100 requests are issued in
less than 5 seconds with a total number of recorded victims equal to 100.

Set request number

Product Guardian
Syntax conf.user configure vi ping_sweep max_requests
<threshold>
Description Set the number of requests that will trigger the alert.

Parameters • threshold: Number of requests that will raise the alert. Default: 100.

Where CLI

To apply It is applied automatically

Set interval

Product Guardian
Syntax conf.user configure vi ping_sweep interval <interval>
Description Set the interval during which the maximum number of requests should be
issued in order to trigger the alert.

Parameters • interval: Interval in seconds for the maximum requests to be issued.
Default: 5 seconds.

Where CLI

To apply It is applied automatically

For example, we can configure the detection for the ICMP/Ping Sweep scan with an analysis interval of
10 seconds, a threshold of 200 requests, and 150 recorded victims, with the following commands:

conf.user configure vi ping_sweep max_requests 200


conf.user configure vi ping_sweep interval 10

Treck Stack
In this section we will configure the detection for the Treck TCP/IP Fingerprint scan via ICMP 165.
The detection is enabled by default and an alert is emitted at most once every 20 minutes.

Set alert interval

Product Guardian
Syntax conf.user configure vi treck_stack once_every
<threshold>
Description Set the minimum interval between two raised alerts, in minutes.

Parameters • threshold: Minutes to wait before another alert is raised. Default: 20
minutes.

Where CLI

To apply It is applied automatically

For example, we can configure the detection for the Treck TCP/IP Fingerprint Scan via ICMP 165 with
an interval between two emitted alerts of one hour (60 minutes) with the following command:

conf.user configure vi treck_stack once_every 60



Configuring Incidents

INCIDENT:PORT-SCAN

Port Scan Incident


In this section we will configure the parameters of a Port Scan Incident.
The detection is enabled by default and an incident is raised when more than 6 correlated alerts are
triggered, independently of their creation time.
For example, we can configure the parameters for the Port Scan Incident with the following command,
where we identify the minimum number of alerts for the incident to be triggered, and the maximum time
interval in milliseconds in which they need to occur:

conf.user configure alerts incidents portscan {"min_alerts": 25, "max_time_interval": 1500}

Configuring the port scan incident

Product Guardian
Syntax conf.user configure alerts incidents portscan <json_obj>
Description Configure the port scan incident by providing the configuration in a JSON
object.

Parameters • json_obj: JSON object containing the keys 'min_alerts' and
'max_time_interval', which are respectively the minimum number of alerts
which trigger the detection and the maximum time interval in which they
need to occur.

Where CLI

To apply In a shell console execute: service n2osalert stop



Configuring nodes

Set node label

Product Guardian
Syntax set ids configure vi node <ip> label <label>
Syntax erase ids configure vi node <ip> label
Description Set the label of a node in the Environment; the label will appear in the
Environment > Network View > Graph, in the Environment >
Network View > Nodes and in the Environment > Process View >
Variables

Parameters • ip: The IP address of the node
• label: The label that will be displayed in the user interface

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.
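
For example, to label the node 10.0.1.20 (an illustrative address) as PLC-01, and later remove the label:

set ids configure vi node 10.0.1.20 label PLC-01
erase ids configure vi node 10.0.1.20 label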

Set protocol specific node label formatting when coming from live traffic

Product Guardian
Syntax conf.user configure vi live_node_label <protocol>
<option>
Description Restricts the characters that may appear in a node label set from live traffic
using the specified protocol

Parameters • protocol: The name of the protocol


• option: The formatting restriction to apply, where relaxed options
replace non-conforming characters with a space and strict options discard
any non-conforming label. From the most permissive to the most strict,
one of:
• default same as utf8_relaxed
• utf8_relaxed printable UTF-8 characters
• utf8_strict
• ascii_relaxed printable ASCII characters
• ascii_strict
• alnum_underscore_relaxed alphanumeric characters including
underscore
• alnum_underscore_strict
• alnum_relaxed alphanumeric characters a-zA-Z0-9
• alnum_strict

Where CLI

To apply In a shell console execute: service n2osids stop
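
For example, to restrict node labels learned from live SMB traffic to printable ASCII characters, discarding any non-conforming label (the protocol name is illustrative):

conf.user configure vi live_node_label smb ascii_strict

Then, in a shell console, execute: service n2osids stop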

Set default node label formatting when coming from live traffic

Product Guardian
Syntax conf.user configure vi default_live_node_label <option>

Description Restricts the characters that may appear in a node label set from live
traffic unless vi live_node_label <protocol> <option> has been
configured for the traffic protocol.

Parameters • option: The formatting restriction to apply, see vi live_node_label
<protocol> <option>

Where CLI

To apply In a shell console execute: service n2osids stop

Set node Device ID with priority

Product Guardian
Syntax ids configure vi node <ip> device_id_with_priority
<device_id>;<priority>
Description Adds the Device ID to the set of node Device IDs. The final Device ID,
used for node grouping under Assets, is the one with the highest priority

Parameters • ip: The IP address of the node
• device_id: The device ID
• priority: The priority of the Device ID. If missing, it will be set to the
lowest priority value

Where CLI

To apply It is applied automatically
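
For example, to assign the Device ID RACK-A to the node 10.0.1.30 with priority 10 (both values are illustrative):

ids configure vi node 10.0.1.30 device_id_with_priority RACK-A;10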

Override node Device ID

Product Guardian
Syntax ids configure vi node <ip> device_id_override
<device_id>
Description Adds the Device ID to the set of node Device IDs, giving it the maximum
priority value. This Device ID will be used for node grouping under Assets

Parameters • ip: The IP address of the node
• device_id: The device ID (with the maximum priority)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Enable or disable node

Product Guardian
Syntax ids configure vi node <ip> state [enabled|disabled]
Description This directive permits disabling a node. This setting has effect in the graph:
a disabled node will not be displayed.

Parameters • ip: The IP address of the node



Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Enable or disable same ip node separation

Product Guardian
Syntax conf.user configure check_multiple_macs_same_ip enable
[true|false]
Description This directive permits enabling the separation of L3 nodes with the same IP
but different MAC addresses. The nodes with the desired IP addresses will be
treated as L2 nodes and appear as distinct assets. If the nodes already exist
as L3 nodes when the configuration is applied, they will be deleted
and the new logic will start to execute with empty statistics.
The value true enables the feature, while false disables it.

Where CLI

To apply In a shell console execute: service n2osids stop

Configure same ip node separation

Product Guardian
Syntax conf.user configure check_multiple_macs_same_ip ip
<ip_address>
Description Selects the IP of the nodes which should be separated as per the strategy
described in the previous box.

Parameters • ip_address: The IP of the node to be configured

Where CLI

To apply In a shell console execute: service n2osids stop
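
For example, to separate the nodes sharing the IP 192.168.0.50 (an illustrative address) into distinct L2 assets:

conf.user configure check_multiple_macs_same_ip enable true
conf.user configure check_multiple_macs_same_ip ip 192.168.0.50

Then, in a shell console, execute: service n2osids stop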

Delete node

Product Guardian
Syntax ids configure vi node <ip> :delete
Description Delete a node from the Environment

Parameters • ip: The IP of the node to delete

Where CLI

To apply It is applied automatically

Define a cluster

Product Guardian
Syntax conf.user configure vi cluster <ip> <name>

Description This command permits defining a High Availability cluster of observed
nodes. In particular, this permits accelerating the learning phase by joining
the learning data of two sibling nodes, and grouping nodes by cluster in the
graph.

Parameters • ip: The IP of the node
• name: The name of the cluster

Where CLI

To apply It is applied automatically
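
For example, to group two redundant servers (illustrative addresses) into a cluster named scada-ha:

conf.user configure vi cluster 10.0.2.10 scada-ha
conf.user configure vi cluster 10.0.2.11 scada-ha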



Configuring assets

Hide built-in asset types

Product Guardian
Syntax conf.user configure vi hide_built_in_asset_types true
Description Hides built-in asset types from the dropdown in the Environment >
Network View > Assets view under the asset configuration modal

Where CLI

To apply It is applied automatically



Configuring links

Set link last activity check

Product Guardian
Syntax set vi link <ip1> <ip2> <protocol> :check_last_activity
<seconds>
Syntax erase vi link <ip1> <ip2>
<protocol> :check_last_activity :delete
Description Set the last activity check on a link; an alert will be raised if the link remains
inactive for more than the specified seconds

Parameters • ip1, ip2: The IPs of the two nodes involved in the communication
• protocol: The protocol
• seconds: The communication timeout

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.
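
For example, to raise an alert if the modbus link between 10.0.3.1 and 10.0.3.2 (illustrative addresses) stays inactive for more than one hour:

set vi link 10.0.3.1 10.0.3.2 modbus :check_last_activity 3600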

Set link persistency check

Product Guardian
Syntax vi link <ip1> <ip2> <protocol> :is_persistent [true|
false]
Description Set the persistency check on a link; if a new handshake is detected an alert
will be raised

Parameters • ip1, ip2: The IPs of the two nodes involved in the communication
• protocol: The protocol

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set link alert on SYN

Product Guardian
Syntax vi link <ip1> <ip2> <protocol> :alert_on_syn [true|
false]
Description Raise an alert when a TCP SYN packet is detected on this link

Parameters • ip1, ip2: The IPs of the two nodes involved in the communication
• protocol: The protocol

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set link to track availability

Product Guardian
Syntax set vi link <ip1> <ip2> <protocol> :track_availability
<seconds>
Syntax erase vi link <ip1> <ip2>
<protocol> :track_availability :delete
Description Notify the link events when the link communication is interrupted or
resumed.

Parameters • ip1, ip2: The IPs of the two nodes involved in the communication
• protocol: The protocol
• seconds: Interval at which to check whether the link is available

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Delete link

Product Guardian
Syntax ids configure vi link <ip1> <ip2> :delete
Description Delete a link

Parameters • ip1, ip2: The IPs identifying the link

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Delete protocol

Product Guardian
Syntax ids configure vi link <ip1> <ip2> <protocol> :delete
Description Delete a protocol from a link

Parameters • ip1, ip2: The IPs identifying the link
• protocol: The protocol of the link to delete

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Delete function code

Product Guardian
Syntax ids configure vi link <ip1> <ip2> <protocol> fc
<func_code>

Description Delete a function code from a protocol

Parameters • ip1, ip2: The IPs identifying the link
• protocol: The protocol of the link
• func_code: The function code to delete

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Enable link_events generation

Product Guardian
Syntax conf.user configure vi link_events [enabled|disabled]
Description Enable or disable the generation of link_events records. This feature can
have an impact on performance, so enable it carefully

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Disabling the persistence of links

Product Guardian
Syntax conf.user configure vi persistence skip_links true
Description With this configuration rule you can disable the persistence of links, thus
saving disk space in cases with a large number of links.

Where CLI

To apply It is applied automatically



Configuring variables

Enable or disable default variable history

Product Guardian
Syntax ids configure vi variable default history [enabled|
disabled]
Description Set whether the variable history is enabled or not; when not set, it is
disabled. The amount of history maintained can be configured in the "Variable
history retention" section in Configuring retention on page 311
Note: Enabling this functionality can negatively affect Guardian's
performance, depending on the amount of variables and the update rate.

Where CLI

To apply It is applied automatically

Enable or disable variable history

Product Guardian
Syntax ids configure vi variable <var_key> history [enabled|
disabled]
Description Control the graphical history of a variable: set whether the variable history
is enabled or not; when not set, it is disabled.
The amount of history maintained can be configured in the "Variable history
retention" section in Configuring retention on page 311
Note: Enabling this functionality can negatively affect Guardian's
performance, depending on the amount of variables and the update rate.

Parameters • var_key: The variable identifier

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set variable label

Product Guardian
Syntax ids configure vi variable <var_key> label <label>
Description Set the label for a variable, the label will appear in the Environment >
Process View sections

Parameters • var_key: The variable identifier
• label: The label displayed in the user interface

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set variable unit of measure

Product Guardian
Syntax ids configure vi variable <var_key> unit <unit>
Description Set a unit of measure on a variable.

Parameters • var_key: The variable identifier
• unit: The unit of measure displayed in the user interface

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set variable offset

Product Guardian
Syntax ids configure vi variable <var_key> offset <offset>
Description Set the offset of the variable, used to map the zero value of the variable.

Parameters • var_key: The variable identifier
• offset: The offset value used to calculate the final value of the variable

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set variable scale

Product Guardian
Syntax ids configure vi variable <var_key> scale <scale>
Description Set the scale of the variable, used to define the full range of the variable.

Parameters • var_key: The variable identifier
• scale: The scale value used to calculate the final value of the variable

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set variable last update check

Product Guardian
Syntax set vi variable <var_key> :check_last_update <seconds>
Syntax remove vi variable <var_key> :check_last_update :delete
Description Set the last update check on a variable: if the variable value is not updated
for more than the specified number of seconds, an alert is raised

Parameters • var_key: The variable identifier


• seconds: The timeout after which a stale variable alert will be raised

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set variable quality check

Product Guardian
Syntax set vi variable <var_key> :check_quality <seconds>
Syntax remove vi variable <var_key> :check_quality :delete
Description Set the quality check on a variable: if the value quality remains invalid for
more than the specified number of seconds, an alert is raised

Parameters • var_key: The variable identifier


• seconds: The maximum amount of consecutive seconds the variable
can have an invalid quality

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Set variable to alert on quality

Product Guardian
Syntax set vi variable <var_key> :alert_on_quality <quality>
Syntax remove vi variable <var_key> :alert_on_quality :delete
Description Raise an alert when the variable has one of the specified qualities. Possible
values are: invalid, not topical, blocked, substituted, overflow, reserved,
questionable, out of range, bad reference, oscillatory, failure, inconsistent,
inaccurate, test, alarm. Multiple values can be separated by comma.

Parameters • var_key: The variable identifier


• quality: The alert quality

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.
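
For example, using an illustrative variable key, we can raise an alert when the variable reports an invalid or overflow quality, and later remove the check, with the following configuration commands:

set vi variable 10.0.0.5/modbus/400001 :alert_on_quality invalid,overflow
remove vi variable 10.0.0.5/modbus/400001 :alert_on_quality :delete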

Set a variable critical state

Product Guardian
Syntax conf.user configure cs variable <id> <var_key> [<|>|=]
<value>

Description Define a new custom critical state on a single variable, raised when the
defined range is violated.
For instance, if the > operator is specified, the variable must be higher than
value to trigger the critical state.

Parameters • id: A unique ID for this critical state


• var_key: The variable identifier
• value: The variable value to check for

Where CLI

To apply It is applied automatically
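
For example, using an illustrative ID and variable key, we can trigger a critical state whenever a temperature reading exceeds 80 with the following configuration command:

conf.user configure cs variable cs_temp_high 10.0.0.5/modbus/temperature > 80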

Set a multiple critical state

Product Guardian
Syntax conf.user configure cs multi <id> variable <ci> <var_key>
[<|>|=] <value>[ ^ variable <ci> <var_key> [<|>|=] <value>]
Description Creates a multi-valued critical state, that is, an expression of the "variable
critical states" described above. The syntax is an AND (^) expression of
single-variable critical states.

Parameters • id: A unique ID for this critical state


• ci: Enumerate the variables c1, c2, c3, ..., etc
• var_key: The variable identifier
• value: The variable value to check for

Where CLI

To apply It is applied automatically
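
For example, combining two single-variable conditions with the AND (^) operator (the ID and variable keys below are illustrative), both conditions must hold to trigger the critical state:

conf.user configure cs multi cs_combined variable c1 10.0.0.5/modbus/temperature > 80 ^ variable c2 10.0.0.5/modbus/pressure < 10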

Control variables extraction at the protocol level

Product Guardian
Syntax conf.user configure probe protocol <name>
variables_extraction [disabled|enabled|advanced|global]
Description It allows the application of a variables extraction policy different from the
global policy on a per-protocol basis. Note that if the global policy is set
to disabled, it prevails over any protocol-specific setting; otherwise, the
protocol-specific policy prevails.
Choices are whether variables extraction is disabled, enabled, enabled
with advanced heuristics (advanced), or inherits the global policy (global)

Parameters • name: The name of the target protocol

Where CLI

To apply It is applied automatically
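
For example, we can enable variables extraction with advanced heuristics for the modbus protocol only with the following configuration command:

conf.user configure probe protocol modbus variables_extraction advanced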

Control variables extraction at the global level for all zones

Product Guardian

Syntax conf.user configure vi variables_extraction [disabled|
enabled|advanced|global]
Description Same as the protocol-level variables extraction, except it sets the policy at
the global level.
Choices are whether variables extraction is disabled, enabled, enabled
with advanced heuristics (advanced), or inherits the global policy (global)

Where CLI

To apply It is applied automatically

Control variables extraction at the global level for specific zones

Product Guardian
Syntax conf.user configure vi variables_extraction [disabled|
enabled|advanced|global] <zones>
Description Same as the protocol-level variables extraction, except it sets the policy at
the global level for the specified zones.
Choices are whether variables extraction is disabled, enabled, enabled
with advanced heuristics (advanced), or inherits the global policy (global)

Parameters • zones: Names of the zones for which the extraction should be enabled.
If unspecified, the extraction is enabled for all zones. Values are
separated by a comma and enclosed in brackets; for example:
[plant1,plant2] or [zone1,zone2,zone3]. Brackets are required

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.
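
For example, using the zone names shown above, we can enable variables extraction for two specific zones with the following configuration command:

conf.user configure vi variables_extraction enabled [plant1,plant2]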

Configuring protocols

Configure iec104s encryption key

Product Guardian
Syntax conf.user configure probe protocol iec104s tls
private_key <ip> <location>
Description Add a private key associated with the device running iec104s. For more
information, see Configuring IEC-62351-3 on page 304

Parameters • ip: The IP of the device


• location: The absolute location of the key

Where CLI

To apply In a shell console execute: service n2osids stop

Set CA size for iec101 protocol decoder

Product Guardian
Syntax conf.user configure probe protocol iec101 ca_size <size>
Description The iec101 CA size can vary across implementations; with this configuration
rule, the user can customize the setting for their own environment

Parameters • size: The size in bytes of the CA

Where CLI

To apply It is applied automatically

Set LA size for iec101 protocol decoder

Product Guardian
Syntax conf.user configure probe protocol iec101 la_size <size>
Description The iec101 LA size can vary across implementations; with this configuration
rule, the user can customize the setting for their own environment

Parameters • size: The size in bytes of the LA

Where CLI

To apply It is applied automatically

Set IOA size for iec101 protocol decoder

Product Guardian
Syntax conf.user configure probe protocol iec101 ioa_size
<size>
Description The iec101 IOA size can vary across implementations; with this
configuration rule, the user can customize the setting for their own
environment

Parameters • size: The size in bytes of the IOA



Where CLI

To apply It is applied automatically

Set an arbitrary number of bytes to skip before decoding iec101 protocol

Product Guardian
Syntax conf.user configure probe protocol iec101 bytes_to_skip
<amount>
Description Depending on the hardware configuration, iec101 traffic can be prefixed
with a fixed number of bytes; with this setting, Guardian can be adapted to
the peculiarities of the environment.

Parameters • amount: The number of bytes to skip

Where CLI

To apply It is applied automatically

Enable the Red Eléctrica Española semantics for iec102 protocol

Product Guardian
Syntax conf.user configure probe protocol iec102 ree [enabled|
disabled]
Description There is a standard from Red Eléctrica Española that changes the
semantics of the iec102 protocol; after enabling this setting (choosing
option enabled), the iec102 protocol decoder will be compliant with the
REE standard.

Where CLI

To apply It is applied automatically

Set the subnet in which the iec102 protocol will be enabled

Product Guardian
Syntax conf.user configure probe protocol iec102 subnet
<subnet>
Description The detection of iec102 can lead to false positives; this rule gives the user
the possibility to enable the detection on a specific subnet only

Parameters • subnet: A subnet in the CIDR notation

Where CLI

To apply It is applied automatically
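
For example, we can restrict iec102 detection to a specific subnet (the subnet below is illustrative) with the following configuration command:

conf.user configure probe protocol iec102 subnet 10.0.5.0/24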

Enable iec102 on the specified port

Product Guardian
Syntax conf.user configure probe protocol iec102 port <port>

Description The detection of iec102 can lead to false positives; this rule gives the user
the possibility to enable the detection on a specific port only

Parameters • port: The TCP port

Where CLI

To apply It is applied automatically

Set the subnet in which the iec103 protocol will be enabled

Product Guardian
Syntax conf.user configure probe protocol iec103 subnet
<subnet>
Description The detection of iec103 can lead to false positives; this rule gives the user
the possibility to enable the detection on a specific subnet only

Parameters • subnet: A subnet in the CIDR notation

Where CLI

To apply It is applied automatically

Enable iec103 on the specified port

Product Guardian
Syntax conf.user configure probe protocol iec103 port <port>
Description The detection of iec103 can lead to false positives; this rule gives the user
the possibility to enable the detection on a specific port only

Parameters • port: The TCP port

Where CLI

To apply It is applied automatically

Force iec101 semantics inside iec103 protocol

Product Guardian
Syntax conf.user configure probe protocol iec103
force_iec101_semantics true
Description Forces change of semantics for iec103 protocol to use ASDUs of iec101

Where CLI

To apply It is applied automatically

Allow heavily fragmented sessions to be recognized as iec103

Product Guardian
Syntax conf.user configure probe protocol iec103
accept_on_fragmented true

Description Allow packets that are always incomplete to be accepted as iec103, so that
sessions where the protocol is heavily fragmented can still be recognized.

Where CLI

To apply It is applied automatically

Enable the detection of plain text passwords in HTTP payloads

Product Guardian
Syntax conf.user configure probe protocol http
detect_uri_passwords [true|false]
Description Guardian is able to detect plain text passwords and login credentials
present in HTTP payloads, such as ftp:// URIs embedding user:password
credentials. The feature is disabled by default.
Choose true to enable the feature and false to disable it.

Where CLI

To apply It is applied automatically

Set the subnet in which the tg102 protocol will be enabled

Product Guardian
Syntax conf.user configure probe protocol tg102 subnet <subnet>
Description The detection of tg102 can lead to false positives; this rule gives the user
the possibility to enable the detection on a specific subnet only

Parameters • subnet: A subnet in the CIDR notation

Where CLI

To apply It is applied automatically

Set the port range in which the tg102 protocol will be enabled

Product Guardian
Syntax conf.user configure probe protocol tg102 port_range
<src_port>-<dst_port>
Description The detection of tg102 can lead to false positives; this rule gives the user
the possibility to enable the detection on a specific port range only

Parameters • src_port: The starting port of the range


• dst_port: The ending port of the range

Where CLI

To apply It is applied automatically
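
For example, we can restrict tg102 detection to a specific port range (the ports below are illustrative) with the following configuration command:

conf.user configure probe protocol tg102 port_range 2404-2410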

Set the subnet in which the tg800 protocol will be enabled

Product Guardian

Syntax conf.user configure probe protocol tg800 subnet <subnet>


Description The detection of tg800 can lead to false positives; this rule gives the user
the possibility to enable the detection on a specific subnet only

Parameters • subnet: A subnet in the CIDR notation

Where CLI

To apply It is applied automatically

Set the port range in which the tg800 protocol will be enabled

Product Guardian
Syntax conf.user configure probe protocol tg800 port_range
<src_port>-<dst_port>
Description The detection of tg800 can lead to false positives; this rule gives the user
the possibility to enable the detection on a specific port range only

Parameters • src_port: The starting port of the range


• dst_port: The ending port of the range

Where CLI

To apply It is applied automatically

Disable variable extraction for a Siemens S7 area and type

Product Guardian
Syntax conf.user configure probe protocol s7 exclude <area>
<type>
Description For performance reasons, or to reduce noise, it's possible to selectively
exclude variables extraction for some areas and types.

Parameters • area: The area, some examples are: DB, DI, M, Q


• type: The type of the variable, some examples are: INT, REAL, BYTE

Where CLI

To apply It is applied automatically
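
For example, using the areas and types listed above, we can exclude extraction of REAL variables in DB areas with the following configuration command:

conf.user configure probe protocol s7 exclude DB REAL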

Enable the full TLS inspection mode

Product Guardian
Syntax conf.user configure probe tls-inspection enable [true|
false]

Description TLS inspection is normally performed only on https and iec104s traffic.
Enabling the full inspection mode (choosing the option true) provides the
following additional features:
• TLS traffic found on any TCP port is inspected
• an alert is raised when TLS-1.0 is used (when this mode is disabled, this
is an https only check)
• an alert is raised on expired certificates
• an alert is raised on weak cipher suites
• session ID, cipher suite and certificates are extracted into the relative link
events

Where CLI

To apply It is applied automatically

Enable or disable the persistence of the connections for Ethernet/IP Implicit

Product Guardian
Syntax conf.user configure probe protocol ethernetip-implicit
persist-connection [true|false]
Description The Ethernet/IP Implicit decoder of Guardian is able to detect handshakes
that are then used to decode variables. In some scenarios these
handshakes are not common but it's very important to persist them so that
Guardian can continue to decode variables after a reboot or an upgrade.
By enabling this option (choosing option true), Guardian will store on disk
the data needed to autonomously reproduce the handshake phase after a
reboot.

Where CLI

To apply It is applied automatically

Enable or disable fragmented packets for modbus protocol

Product Guardian
Syntax conf.user configure probe protocol modbus
enable_full_fragmentation [true|false]
Description Modbus protocol is usually not fragmented, so this option is disabled by
default (option false). If fragmented modbus packets can be present in the
network, full fragmentation can be enabled (choosing option true) to avoid
the generation of unexpected alerts.

Where CLI

To apply It is applied automatically

Import the ge-egd produced data XML file for variables extraction

Product Guardian
Syntax conf.user configure probe protocol ge-egd produced-data-
xml <path>

Description The ge-egd protocol can extract process variables only after the XML file
describing the produced data for the involved nodes is imported. Multiple
imports are allowed as long as the XML files do not provide overlapping
information for any producer node.

Parameters • path: The path of the produced data XML file to import

Where CLI

To apply It is applied automatically

Disable file extraction for SMB protocol

Product Guardian
Syntax conf.user configure probe protocol smb file_extraction
false
Description The SMB protocol decoder is able to extract files and analyze them for
malware in a sandbox. If not needed, the user can disable this feature to
improve the performance of the system, especially in environments where
SMB file transfer is heavily used.

Where CLI

To apply It is applied automatically



Configuring decryption
The following sections describe the configuration of Guardian's decryption capabilities for links. For
more decryption details beyond the scope of this manual, contact Nozomi Networks.

IEC 60870-5-7 / 62351-3/5 encrypted links


IEC TC57 (POWER SYSTEMS management and associated information exchange) develops the
standards 60870 and 62351. IEC 60870 part 5 (by WG3) describes systems used for telecontrol. IEC
62351 (by WG15) handles the security of the TC 57 series.
IEC TC57 WG15 recommends the combination of IEC 62351-3 and 5 to secure IEC 60870-5-104 links:
• IEC 62351-3 is a TLS profile to secure power systems related communication.
• IEC 62351-5 is an application security protocol applicable to IEC 60870-5-101, 104, and derivatives.
Its implementation in terms of ASDUs (i.e., real encapsulation) is outlined in IEC 60870-5-7.
In order to decrypt IEC 62351-3 (TLS) traffic, you must meet these conditions:
• The private key for each TLS server (e.g. RTU, PLC) must be available; it is used to derive session
keys.
• All the equipment where decryption is needed must operate using the
TLS_RSA_WITH_AES_128_CBC_SHA (0x00002f) cipher suite. Often, this step is accomplished by
forcing either the client or the server to confine itself to that specific cipher suite.

Configuring IEC-62351-3
The following steps assume we're decoding the communication of a TLS server with the address
192.168.1.26.
1. Upload the TLS server’s private key to /data/cfg. The file name must match the server's address.
In our case, the file must be named 192.168.1.26.key.
Your key should be similar to the following:
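
A PEM-encoded RSA private key has this general shape (the content is abbreviated here; your actual key will differ):

-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA...
-----END RSA PRIVATE KEY-----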

2. In Guardian's Features Control Panel, enable link events; this provides visibility to the TLS decoded
handshakes; for example:

3. Specify the key file's location by defining it in the CLI. To continue our example, we would use the
following string:

conf.user configure probe protocol iec104s tls private_key 192.168.1.26 /data/cfg/192.168.1.26.key

4. Repeat these steps for each applicable TLS server key.


5. Run the following command in a shell console:

service n2osids stop



Configuring trace

Trace size and timeout


A trace is a sequence of packets saved to disk in the PCAP file format. The number of packets in
a trace is fixed: when a trace of N packets is triggered, Guardian starts by writing to disk the N/2
packets that were sniffed before the trace was triggered; after that, it tries to save another N/2
packets and then finalizes the write operation, at which point the trace can be downloaded. To avoid
a trace being pending for too long, there is also a timeout: when the time expires, the trace is saved
even if the desired number of packets has not been reached.

Figure 182: A schematic illustration of the trace saving process

Set max trace packets

Product Guardian
Syntax conf.user configure trace trace_size <size>
Description The maximum number of packets that will be stored in the trace file.

Parameters • size: Default value 5000

Where CLI

To apply It is applied automatically

Set trace request timeout

Product Guardian
Syntax conf.user configure trace trace_request_timeout
<seconds>
Description The time in seconds after which the trace will be finalized, even if the
trace_size parameter has not been fulfilled

Parameters • seconds: Default value 60

Where CLI

To apply It is applied automatically

Set max pcaps to retain

Product Guardian

Syntax conf.user configure trace max_pcaps_to_retain <value>


Description The maximum number of PCAP files to keep on disk; when this number is
exceeded, the oldest traces will be deleted. Both automatic alert traces and
user-requested traces are included. This is a runtime machine setting used
for self-protection, prevailing over the retention settings described in the
Configuring retention section

Parameters • value: Default value 100000

Where CLI

To apply It is applied automatically

Set minimum free disk percentage

Product Guardian
Syntax conf.user configure trace min_disk_free <percent>
Description The minimum percentage of free disk space below which the oldest traces
will be deleted

Parameters • percent: Default value 10, enter without % sign

Where CLI

To apply It is applied automatically

Set maximum occupied space

Product Guardian
Syntax conf.user configure retention trace_request
<occupied_space>
Description The maximum traces occupation on disk in bytes

Parameters • occupied_space: Default value is half of disk size

Where CLI

To apply It is applied automatically



Configuring continuous trace

Set max continuous trace occupation in bytes

Product Guardian
Syntax conf.user configure continuous_trace max_bytes_per_trace
<size>
Description The maximum size in bytes for a continuous trace file.

Parameters • size: Default value 100000000

Where CLI

To apply It is applied automatically

Set max pcaps to retain

Product Guardian
Syntax conf.user configure continuous_trace max_pcaps_to_retain
<value>
Description The maximum number of PCAP files to keep on disk; when this number is
exceeded, the oldest traces will be deleted. This is a runtime machine setting
used for self-protection, prevailing over the retention settings described in
the Configuring retention section

Parameters • value: Default value 100000

Where CLI

To apply It is applied automatically

Set minimum free disk percentage

Product Guardian
Syntax conf.user configure continuous_trace min_disk_free
<percent>
Description The minimum percentage of free disk space below which the oldest
continuous traces will be deleted

Parameters • percent: Default value 10, enter without % sign

Where CLI

To apply It is applied automatically

Set maximum occupied space

Product Guardian
Syntax conf.user configure retention continuous_trace
<occupied_space>
Description The maximum continuous traces occupation on disk in bytes

Parameters • occupied_space: Default value is half of disk size

Where CLI

To apply It is applied automatically



Configuring Time Machine


In this section we will configure the Time Machine functionality of the Nozomi Networks Solution.

Set snapshot interval

Products CMC, Guardian


Syntax conf.user configure tm snap interval <interval_seconds>
Description Set the desired interval between snapshots, in seconds.

Parameters • interval_seconds: The number of seconds between snapshots
(default: 3600)

Where CLI

To apply service n2osjobs stop
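
For example, we can take a snapshot every 30 minutes with the following configuration command:

conf.user configure tm snap interval 1800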

Enable or disable automatic snapshot for each alert

Product Guardian
Syntax conf.user configure tm snap on_alert [true|false]
Description Enables (option true) or disables (option false) taking a snapshot for each
alert.

Where CLI

To apply service n2osjobs stop



Configuring retention
Retention of historical data is controlled for each persisted entity by a configuration entry. Modify it to
extend or reduce the default retention.
By default, the CMC retains 500,000 alerts. Note that retaining large numbers of alerts can impair
performance. We recommend limiting the number of alerts generated rather than retaining more data.
If you want to retain more alerts, we recommend an iterative approach of incrementally increasing
this value and evaluating the system's performance. In some cases, you may want to send alerts to a
different system using our data integration features instead of retaining the alerts in the appliance.

Alerts retention

Products CMC, Guardian


Syntax conf.user configure retention alert rows
<rows_to_retain>
Description Set the amount of alerts to retain.

NOTE: When an alert is deleted, the related trace file is deleted too.

Parameters • rows_to_retain: The number of rows to keep (default: 500000)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.
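
For example, we can reduce the number of retained alerts to 250,000 (an illustrative value, to be tuned iteratively as recommended above) with the following configuration command:

conf.user configure retention alert rows 250000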

Alerts advanced retention

Products CMC, Guardian


Syntax conf.user configure retention
alert.out_of_security_profile rows <rows_to_retain>
Description Set the amount of alerts out of security profile to retain. By default, this
feature is disabled.
NOTE:
• This retention has a higher priority than retention alert rows
<rows_to_retain> and will be executed before it.
• When an alert is deleted, the related trace file is deleted too.

Parameters • rows_to_retain: The number of rows to keep (disabled by default)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Trace retention size

Product Guardian
Syntax conf.user configure retention trace_request
occupied_space <max_occupied_bytes>
Description Set max occupation in bytes for traces.

Parameters • max_occupied_bytes: the number of bytes to keep (default: half of
disk size)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Trace retention rows

Product Guardian
Syntax conf.user configure retention trace_request rows
<rows_to_retain>
Description Set the amount of traces to retain.

Parameters • rows_to_retain: The number of rows to keep (default: 10000)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Trace advanced retention

Products CMC, Guardian


Syntax conf.user configure retention
trace_request.<generation_cause> rows <rows_to_retain>
Description Set the amount of traces retained considering their generation cause. By
default, these options are disabled.
NOTE: This retention has a higher priority than retention trace_request
rows <rows_to_retain> and will be executed before it. Moreover, these
advanced retention options depend on each other, so they must be
configured all together or not at all.

Parameters • generation_cause: Can be any of:


• by_alerts_high: traces generated by high risk alerts
• by_alerts_medium: traces generated by medium risk alerts
• by_alerts_low: traces generated by low risk alerts
• by_user_request: traces generated by a request from the user
• rows_to_retain: The number of rows to keep

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

For example, we can configure the trace retention with the following command:

conf.user configure retention trace_request rows 10000

and also set up the advanced retention with:

conf.user configure retention trace_request rows 10000
conf.user configure retention trace_request.by_alerts_high rows 5000
conf.user configure retention trace_request.by_alerts_medium rows 1000
conf.user configure retention trace_request.by_alerts_low rows 1000
conf.user configure retention trace_request.by_user_request rows 3000

Continuous trace retention size

Product Guardian
Syntax conf.user configure retention continuous_trace
occupied_space <max_occupied_bytes>
Description Set max occupation in bytes for continuous traces

Parameters • max_occupied_bytes: the number of bytes to keep (default: half of
disk size)

Where CLI

To apply In a shell console execute: service n2ostrace stop

Note You can also change this configuration from the Web UI.

Continuous trace retention rows

Product Guardian
Syntax conf.user configure retention continuous_trace rows
<rows_to_retain>
Description Set the amount of continuous traces to retain

Parameters • rows_to_retain: the number of rows to keep (default: 10000)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Link events retention

Product Guardian
Syntax conf.user configure retention link_event rows
<rows_to_retain>
Description Set the amount of link events to retain

Parameters • rows_to_retain: The number of rows to keep (default: 2500000)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Captured urls retention

Product Guardian

Syntax conf.user configure retention captured_urls rows
<rows_to_retain>
Description Set the amount of captured "urls" (http queries, dns queries, etc.) to retain

Parameters • rows_to_retain: The number of rows to keep (default: 10000)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Variable history retention

Product Guardian
Syntax conf.user configure retention variable_history rows
<rows_to_retain>
Description Set the amount of variable historical values to retain

Parameters • rows_to_retain: The number of rows to keep (default: 1000000)

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.

Node CVE retention

Product Guardian
Syntax conf.user configure retention node_cve rows
<rows_to_retain>
Description Set the maximum amount of node_cve entries to retain

Parameters • rows_to_retain: The number of rows to keep (default: 100000)

Where CLI

To apply In a shell console execute: service n2osva stop

Note You can also change this configuration from the Web UI.

Uploaded traces retention

Product Guardian
Syntax conf.user configure retention input_pcap rows
<files_to_retain>
Description Set the amount of PCAP files to retain

Parameters • files_to_retain: The number of files to keep (default: 10)

Where CLI

To apply It is applied automatically



Note You can also change this configuration from the Web UI.

Configuring Bandwidth Throttling


It is possible to limit the bandwidth available to an appliance's management port (used for access
and updates) by specifying the maximum amount of allowed traffic.

Limit traffic shaping bandwidth

Products Guardian, Remote Collector


Syntax conf.user configure system traffic_shaping bandwidth
<max_bandwidth>
Description Set the maximum outbound bandwidth that the appliance's management
interface can use. Inbound data is still unlimited.

Parameters • max_bandwidth: the bandwidth limit in bytes, unless specified
otherwise (default: no limitation). When setting a limit in decimal
notation, make sure you add all the leading zeros and the unit (e.g.,
write 0.015Mb, not .015Mb).

Where CLI

To apply Update rules with n2os-firewall-update. On a fresh installation a
reboot is necessary.

For example, we can set a limit of two megabytes with the following configuration command:

conf.user configure system traffic_shaping bandwidth 2Mb

Notice that this command affects only the appliance on which it is executed; its effects are not
propagated to other appliances.
It is possible to exclude from the limitation of the bandwidth specific IPs.

Exclude IP from traffic shaping

Products Guardian, Remote Collector


Syntax conf.user configure system traffic_shaping exclude <ip>
Description Set the IP to exclude from the limitation.

Parameters • ip: the IP to exclude. It can be a single IP or a class of IPs (e.g.,
192.168.12.34 or 192.168.0.0/16). The option can be repeated for as
many IPs as needed.

Where CLI

To apply Update rules with n2os-firewall-update. On a fresh installation a
reboot is necessary.

For example, we can exclude an IP with the following configuration command:

conf.user configure system traffic_shaping exclude 192.168.12.34

Notice that this command affects only the appliance on which it is executed; its effects are not
propagated to other appliances.

Configuring Remote Collector Bandwidth Throttling


For a Remote Collector, it is possible to limit the bandwidth of the traffic that is sniffed and forwarded
to the Guardian, without impacting other connections on the management port, by specifying the
maximum amount of allowed bandwidth.

Remote collector bandwidth throttling

Product Remote Collector


Syntax remote_sensor_max_bandwidth_kb_per_sec <max_bandwidth>
Description Set the maximum egress bandwidth of forwarded traffic to use.

Parameters • max_bandwidth: The bandwidth limit in Kilobytes per second
(default: no limitation)

Where CLI

To apply In a shell console execute: service n2osrs stop

For example, we can set a limit of 100 Kilobytes per second with the following configuration command:

remote_sensor_max_bandwidth_kb_per_sec 100

Notice that this command affects only the appliance on which it is executed; its effects are not
propagated to other appliances.

Configuring synchronization
In this section we will configure the synchronization between appliances at different levels.

Set the global synchronization interval (notification message)

Product CMC
Syntax conf.user configure cmc sync interval <interval_seconds>
Description Set the desired global synchronization interval for the in-scope appliance.
Configuration is defined on the parent appliance; synchronization starts at
child appliances and flows upstream.
Each sync takes place following a notification message sent by the child
appliance, stating that it is ready to synchronize data to its parent. The
notification interval acts as the global synchronization setting, working
together with the settings that follow.
Note: In a multi-level deployment (e.g., one with root CMC, local CMC, and
Guardian), the setting must be applied at each parent level (e.g., at the root
CMC as well as at the local CMC).

Parameters • interval_seconds: the number of seconds between status
notifications (default: 60)

Where CLI

To apply It is applied automatically
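
For example, we can lower the notification frequency by setting the interval to 120 seconds (an
illustrative value) with the following configuration command:

conf.user configure cmc sync interval 120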

Set the DB synchronization interval

Products CMC, Guardian


Syntax conf.user configure cmc sync_db_interval
<interval_seconds>
Description Set the desired interval between DB synchronizations for the in-scope
appliance. Configuration is done on the parent appliance; synchronization
starts at child appliances and flows upstream. The setting applies to each
DB element subject to synchronization (e.g., Alerts, Assets, Audit logs, and
Health logs). As the interval expires, the DB entries are synchronized at the
next notification message.
Note: In a multi-level deployment (e.g., one with root CMC, local CMC, and
Guardian), if the setting is applicable, it must be applied at each parent level
(e.g., at the root CMC as well as at the local CMC).

Parameters • interval_seconds: the number of seconds between DB
synchronizations (default: 60). This parameter only makes sense when
set higher than the global synchronization interval.

Where CLI

To apply It is applied automatically
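
For example, we can synchronize DB entries every five minutes (an illustrative value) with the
following configuration command:

conf.user configure cmc sync_db_interval 300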

Set the filesystem synchronization interval

Product CMC
Syntax conf.user configure cmc sync_fs_interval
<interval_seconds>

Description Set the desired interval between filesystem synchronizations for the
appliance in scope, from its child appliances. The setting applies to each
filesystem element subject to synchronization (e.g., nodes, links, and
variables). As the interval expires, the filesystem entries are synchronized at
the next notification message.
Note: In a multi-level deployment (e.g., one with root CMC, local CMC, and
Guardian), if the setting is applicable, it must be applied at each parent level
(e.g., at the root CMC as well as at the local CMC).

Parameters • interval_seconds: the number of seconds between filesystem
synchronizations (default: 10800 [3 hours]). This parameter only makes
sense when set higher than the global synchronization interval.

Where CLI

To apply It is applied automatically
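
For example, we can shorten the filesystem synchronization interval from the default 3 hours to 1
hour with the following configuration command:

conf.user configure cmc sync_fs_interval 3600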

Set the binary files synchronization interval

Product CMC
Syntax conf.user configure cmc sync_binary_files_interval
<interval_seconds>
Description Set the desired interval between binary files synchronizations for the
appliance in scope, from its child appliances. The setting applies to each
binary file element subject to synchronization (e.g., PDF reports). As
the interval expires, the binary file entries are synchronized at the next
notification message.
Note: In a multi-level deployment (e.g., one with root CMC, local CMC, and
Guardian), if the setting is applicable, it must be applied at each parent level
(e.g., at the root CMC as well as at the local CMC).

Parameters • interval_seconds: the number of seconds between binary files
synchronizations (default: 60). This parameter only makes sense when
set higher than the global synchronization interval.

Where CLI

To apply It is applied automatically
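
For example, we can synchronize binary files such as PDF reports every 10 minutes with the
following configuration command:

conf.user configure cmc sync_binary_files_interval 600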

Set the rows to be sent at every DB synchronization for each DB element

Products CMC, Guardian


Syntax conf.user configure cmc sync record_per_loop
<number_of_record_per_loop>
Description The system allows the user to customize the synchronization, in particular
the number of records to be sent at each phase. A synchronization
phase is composed of 50 steps for each DB element, each step sending
number_of_record_per_loop rows; with the default of 50 rows per step, the
system sends 2500 rows per phase.

Parameters • number_of_record_per_loop: the number of DB rows sent per
single request (default: 50)

Where CLI

To apply It is applied automatically
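
For example, we can double the default throughput by sending 100 rows per request, i.e. 5000 rows
per synchronization phase, with the following configuration command:

conf.user configure cmc sync record_per_loop 100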

Synchronize only visible alerts

Products CMC, Guardian


Syntax conf.user configure cmc sync send_only_visible_alert
[true|false]
Description Set whether to synchronize only the alerts that are visible according to
the Security Profile (true), or all alerts from the child appliances to the
in-scope parent appliance (false).
Note: In a multi-level deployment (e.g., one with root CMC, local CMC, and
Guardian), if the setting is applicable, it must be applied at each parent level
(e.g., at the root CMC as well as at the local CMC).

Where CLI

To apply It is applied automatically
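
For example, the setting can be changed with the following configuration command (shown here
with the value true):

conf.user configure cmc sync send_only_visible_alert true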

Set the alert rules execution policy

Product CMC
Syntax conf.user configure alerts execution_policy alert_rules
[upstream_only|upstream_prevails|local_prevails]
Description Set the desired execution policy for the alert rules.

Note: In a multi-level deployment (e.g., one with root CMC, local CMC, and
Guardian), if the setting is applicable, it must be applied at each parent level
(e.g., at the root CMC as well as at the local CMC).

Where CLI

To apply It is applied automatically

Note You can also change this configuration from the Web UI.
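
For example, we can let locally defined alert rules prevail with the following configuration command:

conf.user configure alerts execution_policy alert_rules local_prevails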

Configuring slow updates


In this section we show how to configure an appliance to receive firmware updates from an upstream
appliance at a controlled speed. This option is suitable for scenarios where limited bandwidth is
available and a normal firmware update procedure would result in a timeout. For example, one may
want to configure a remote collector constrained by a 50 Kbps link to receive updates at 24 Kbps.
This configuration prevents saturation of the communication channel and thus allows data traffic to be
sent while an update is being received.

Enable or disable slow update

Products CMC, Guardian, Remote Collector


Syntax software_update_slow_mode [true|false]
Description This is a global switch that enables (true) or disables (false) the feature.
When the feature is disabled all the other switches are ignored.

Where CLI

To apply It is applied automatically
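
For example, we can enable the feature with the following configuration command:

software_update_slow_mode true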

Set the transfer chunk size

Products CMC, Guardian, Remote Collector


Syntax software_update_slow_mode_chunk_size <size_in_bytes>
Description The update bundle is split into multiple fixed-size chunks. Chunks are
individually transmitted, verified, and reassembled. In case of a failed
delivery, only invalid chunks are retransmitted. Larger chunks are more
suitable for high-speed networks, while smaller chunks are preferable for
more effective bandwidth limitation.

Parameters • size_in_bytes: the chunk size in bytes. Values are normalized to stay
in the range [128, 10485760]. Default value is 4096.

Where CLI

To apply It is applied automatically
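
For example, on a slow link we can reduce the chunk size to 1024 bytes with the following
configuration command:

software_update_slow_mode_chunk_size 1024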

Set the transfer speed

Products CMC, Guardian, Remote Collector


Syntax software_update_slow_mode_max_speed <speed_in_bps>
Description Sets the maximum allowed speed for update transfer.

Parameters • speed_in_bps: the maximum allowed speed in bytes per second.
Values lower than 1024 are normalized to 1024. The default value is
4096. Note that small chunks add some overhead, so the actual transfer
speed generally remains slightly below the declared limit.

Where CLI

To apply It is applied automatically
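
For example, to cap update transfers at roughly 24 Kbit/s, as in the remote collector scenario
described above, we can allow 3072 bytes per second (24 Kbit/s divided by 8 bits per byte, assuming
the bytes-per-second unit documented for this parameter) with the following configuration command:

software_update_slow_mode_max_speed 3072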



Configuring session hijacking protection


The web management interface protects itself from session hijacking attacks by binding each web
session to an IP address and browser configuration. When it detects a change in these parameters it
automatically destroys the session and records the error in the audit log. The feature is enabled by
default and can be disabled using this configuration:

Disable session hijacking protection

Products CMC, Guardian


Syntax conf.user configure ui session protection [true|false]
Description Enable (option true, default behavior) or disable (option false) session
hijacking protection.

Where CLI

To apply It is applied automatically
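
For example, we can disable the protection, e.g. in environments where client IP addresses
legitimately change during a session, with the following configuration command:

conf.user configure ui session protection false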

When closing a session, the web management interface records the following error text in the audit
log, along with the details of the affected session:

Session hijacking detected, closing session

Configuring Passwords
This topic describes N2OS password parameters and their default values.
Modifying the default password requires that you change the configuration of the appliance, as
discussed in Chapter 3, Users.

Parameter                                    Default value  Description

password_policy maximum_attempts             3              Number of unsuccessful login attempts
                                                            before the user is locked
password_policy lock_time                    5              Number of minutes that a user account is
                                                            locked out after unsuccessful login attempts
password_policy history                      3              Number of unique passwords to be used
                                                            before reuse is allowed
password_policy digit                        1              Number of digits that a password must
                                                            contain
password_policy lower                        1              Number of lower case characters that a
                                                            password must contain
password_policy upper                        1              Number of upper case characters that a
                                                            password must contain
password_policy symbol                       0              Number of symbols that a password must
                                                            contain
password_policy min_password_length         12              Minimum password length
password_policy max_password_length         128             Maximum password length

password_policy inactive_user_expire_enable  false          Enables the inactive user policy
password_policy inactive_user_lifetime       60             Number of inactive days after which a
                                                            user is disabled
password_policy admin_can_expire             false          Whether admin accounts are subject to
                                                            expiration; false prevents admin
                                                            accounts from expiring
password_policy password_expire_enable       false          Enables the password expiration feature
password_policy password_lifetime            90             Number of days after which a password
                                                            change is forced
Chapter 16
Compatibility reference

In this chapter you will find compatibility information about
Nozomi Networks products.

Topics:
• SSH compatibility
• HTTPS compatibility
| Compatibility reference | 326

SSH compatibility

Supported SSH protocols (since 19.0.4)

Function Algorithms
Key exchange curve25519-sha256@libssh.org
diffie-hellman-group-exchange-sha256
diffie-hellman-group14-sha256
diffie-hellman-group16-sha512
diffie-hellman-group18-sha512

Ciphers chacha20-poly1305@openssh.com
aes128-gcm@openssh.com
aes256-gcm@openssh.com
aes128-ctr
aes192-ctr
aes256-ctr

MACs hmac-sha2-256
hmac-sha2-512
hmac-sha2-256-etm@openssh.com
hmac-sha2-512-etm@openssh.com
umac-64-etm@openssh.com
umac-128-etm@openssh.com

Host Key Algorithms ssh-rsa
ssh-ed25519
ecdsa-sha2-nistp384
ecdsa-sha2-nistp521
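
To check which of these algorithms a given OpenSSH client supports on its side, the client's standard
-Q query option can be used, for example:

ssh -Q kex
ssh -Q cipher
ssh -Q mac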

HTTPS compatibility

Supported HTTPS protocols (since 21.9.0)

TLS version Cipher Suite Name (IANA/RFC)


TLS 1.2 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384

TLS 1.3 TLS_AES_256_GCM_SHA384


TLS_CHACHA20_POLY1305_SHA256
TLS_AES_128_GCM_SHA256

Supported RC/Guardian data channel protocols (since 21.9.0)

TLS version Cipher Suite Name (IANA/RFC)


TLS 1.2 ECDHE-RSA-AES128-GCM-SHA256
ECDHE-RSA-AES256-GCM-SHA384
DHE-RSA-AES128-GCM-SHA256
DHE-RSA-AES256-GCM-SHA384
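
To verify which cipher suite a client actually negotiates with an appliance, the web interface can be
probed with the standard OpenSSL client, for example (replace <appliance> with the management IP
of the appliance):

openssl s_client -connect <appliance>:443 -tls1_2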
