
mvBlueFOX

Technical Manual

1.1 About This Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1.1 Composition of the manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 How to get started? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2.1 Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2.2 Driver concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2.3 Image acquisition concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.2.4 Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Imprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Revisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Graphic Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.1 Notes, Warnings, Attentions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.2 Webcasts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 Important Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.1 High-Speed USB design guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5.2 European Union Declaration of Conformity statement . . . . . . . . . . . . . . . . . . . . . 11
1.5.3 Legal notice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5.3.1 For customers in the U.S.A. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5.3.2 For customers in Canada . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5.3.3 Pour utilisateurs au Canada . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.6 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6.1 Order code nomenclature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6.1.1 mvBlueFOX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6.1.2 mvBlueFOX-M . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6.1.3 mvBlueFOX-IGC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6.1.4 mvBlueFOX-MLC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.6.2 What's inside and accessories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.7 Quickstart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.7.1 Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.7.1.1 System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.7.1.2 Installing the mvIMPACT Acquire driver . . . . . . . . . . . . . . . . . . . . . . . 21
1.7.1.3 Installing the hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.7.2 Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.7.2.1 System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.7.2.2 Installing the mvIMPACT Acquire driver . . . . . . . . . . . . . . . . . . . . . . . 29
1.7.2.3 Installing the hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.7.3 Relationship between driver, firmware and FPGA file . . . . . . . . . . . . . . . . . . . . . . 34
1.7.3.1 FPGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.7.3.2 Firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
1.7.4 Settings behaviour during startup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1.8 Technical Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
1.8.1 Power supply . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
1.8.2 Standard version (mvBlueFOX-xxx) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

MATRIX VISION GmbH



1.8.2.1 Dimensions and connectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40


1.8.2.2 LED states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
1.8.2.3 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
1.8.3 Board-level version (mvBlueFOX-Mxxx) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
1.8.3.1 Dimensions and connectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
1.8.3.2 LED states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
1.8.3.3 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
1.8.3.4 Accessories mvBlueFOX-Mxxx . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
1.8.4 Single-board version (mvBlueFOX-MLC2xx) . . . . . . . . . . . . . . . . . . . . . . . . . . 53
1.8.4.1 Typical Power consumption @ 5V . . . . . . . . . . . . . . . . . . . . . . . . . . 53
1.8.4.2 Dimensions and connectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
1.8.4.3 LED states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
1.8.4.4 Assembly variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
1.8.5 Single-board version with housing (mvBlueFOX-IGC2xx) . . . . . . . . . . . . . . . . . . . 60
1.8.5.1 Dimensions and connectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
1.8.5.2 LED states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
1.8.5.3 Positioning tolerances of sensor chip . . . . . . . . . . . . . . . . . . . . . . . . . 62
1.9 Sensor Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
1.9.1 CCD sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
1.9.2 CMOS sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
1.9.3 Output sequence of color sensors (RGB Bayer) . . . . . . . . . . . . . . . . . . . . . . . . 66
1.9.4 Bilinear interpolation of color sensors (RGB Bayer) . . . . . . . . . . . . . . . . . . . . . . . 67
1.10 Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
1.10.1 Hot mirror filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
1.10.2 Cold mirror filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
1.10.3 Glass filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
1.11 Application Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
1.11.1 wxPropView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
1.11.1.1 How to work with wxPropView . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
1.11.1.2 How to configure a device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
1.11.1.3 Command-line options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
1.11.2 mvDeviceConfigure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
1.11.2.1 How to set the device ID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
1.11.2.2 How to update the firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
1.11.2.3 How to disable CPU sleep states a.k.a. C states (< Windows 8) . . . . . . . . . . 115
1.11.2.4 Command-line options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
1.12 HRTC - Hardware Real-Time Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
1.12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
1.12.1.1 Operating codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
1.12.2 How to use the HRTC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
1.13 Developing Applications Using The mvIMPACT Acquire SDK . . . . . . . . . . . . . . . . . . . . . 120
1.14 DirectShow Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121


1.14.1 Supported Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121


1.14.1.1 IAMCameraControl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
1.14.1.2 IAMDroppedFrames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
1.14.1.3 IAMStreamConfig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
1.14.1.4 IAMVideoProcAmp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
1.14.1.5 IKsPropertySet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
1.14.1.6 ISpecifyPropertyPages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
1.14.2 Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
1.14.3 Registering and renaming devices for DirectShow usage . . . . . . . . . . . . . . . . . . . 122
1.14.3.1 Registering devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
1.14.3.2 Renaming devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
1.14.3.3 Make silent registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
1.15 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
1.15.1 Accessing Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
1.15.1.1 Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
1.15.1.2 Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
1.16 Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
1.16.1 Introducing acquisition / recording possibilities . . . . . . . . . . . . . . . . . . . . . . . . 128
1.16.1.1 Generating very long exposure times . . . . . . . . . . . . . . . . . . . . . . . . 128
1.16.1.2 Using VLC Media Player . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
1.16.2 Improving the acquisition / image quality . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
1.16.2.1 Correcting image errors of a sensor . . . . . . . . . . . . . . . . . . . . . . . . . 131
1.16.2.2 Optimizing the color fidelity of the camera . . . . . . . . . . . . . . . . . . . . . . 141
1.16.3 Working with triggers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
1.16.3.1 Using external trigger with CMOS sensors . . . . . . . . . . . . . . . . . . . . . 150
1.16.4 Working with HDR (High Dynamic Range Control) . . . . . . . . . . . . . . . . . . . . . . 151
1.16.4.1 Adjusting sensor of camera models -x00w . . . . . . . . . . . . . . . . . . . . . 151
1.16.4.2 Adjusting sensor of camera models -x02d (-1012d) . . . . . . . . . . . . . . . . . 154
1.16.5 Working with LUTs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
1.16.5.1 Introducing LUTs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
1.16.6 Saving data on the device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
1.16.6.1 Creating user data entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
1.16.7 Working with several cameras simultaneously . . . . . . . . . . . . . . . . . . . . . . . . . 162
1.16.7.1 Using 2 mvBlueFOX-MLC cameras in Master-Slave mode . . . . . . . . . . . . . 163
1.16.7.2 Synchronize the cameras to expose at the same time . . . . . . . . . . . . . . . 167
1.16.8 Working with the Hardware Real-Time Controller (HRTC) . . . . . . . . . . . . . . . . . . . 168
1.16.8.1 Achieve a defined image frequency (HRTC) . . . . . . . . . . . . . . . . . . . . 169
1.16.8.2 Delay the external trigger signal (HRTC) . . . . . . . . . . . . . . . . . . . . . . 170
1.16.8.3 Creating double acquisitions (HRTC) . . . . . . . . . . . . . . . . . . . . . . . . 171
1.16.8.4 Take two images after one external trigger (HRTC) . . . . . . . . . . . . . . . . . 171
1.16.8.5 Take two images with different expose times after an external trigger (HRTC) . . . 172
1.16.8.6 Edge controlled triggering (HRTC) . . . . . . . . . . . . . . . . . . . . . . . . . 174


1.16.8.7 Delay the expose start of the following camera (HRTC) . . . . . . . . . . . . . . . 176
1.17 Appendix A. Specific Camera / Sensor Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
1.17.1 A.1 CCD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
1.17.1.1 mvBlueFOX-[Model]220 (0.3 Mpix [640 x 480]) . . . . . . . . . . . . . . . . . . . 177
1.17.1.2 mvBlueFOX-[Model]220a (0.3 Mpix [640 x 480]) . . . . . . . . . . . . . . . . . . 182
1.17.1.3 mvBlueFOX-[Model]221 (0.8 Mpix [1024 x 768]) . . . . . . . . . . . . . . . . . . 187
1.17.1.4 mvBlueFOX-[Model]223 (1.4 Mpix [1360 x 1024]) . . . . . . . . . . . . . . . . . 191
1.17.1.5 mvBlueFOX-[Model]224 (1.9 Mpix [1600 x 1200]) . . . . . . . . . . . . . . . . . 196
1.17.2 A.2 CMOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
1.17.2.1 mvBlueFOX-[Model]200w (0.4 Mpix [752 x 480]) . . . . . . . . . . . . . . . . . . 201
1.17.2.2 mvBlueFOX-[Model]202a (1.3 Mpix [1280 x 1024]) . . . . . . . . . . . . . . . . . 204
1.17.2.3 mvBlueFOX-[Model]202b (1.2 Mpix [1280 x 960]) . . . . . . . . . . . . . . . . . 207
1.17.2.4 mvBlueFOX-[Model]202d (1.2 Mpix [1280 x 960]) . . . . . . . . . . . . . . . . . 210
1.17.2.5 mvBlueFOX-[Model]205 (5.0 Mpix [2592 x 1944]) . . . . . . . . . . . . . . . . . 214


1.1 About This Manual

1.1.1 Composition of the manual

The manual starts with technical data about the device, such as sensors (for cameras) and electrical characteristics, as well as a quick start chapter. Afterwards you will find information on tools and software packages that can help with developing an application or gaining a better understanding of the device.

• The installation package comes with a couple of tools offering a graphical user interface (GUI (p. 69)) to
control mvIMPACT Acquire compliant devices.

– wxPropView (p. 69) can be used to capture image data and to change parameters like AOI or gain.
– mvDeviceConfigure (p. 111) can be used to e.g. perform firmware updates, assign a unique ID to a
device that is stored in non-volatile memory, or to configure the log-message output.

• HRTC - Hardware Real-Time Controller (p. 118)

– It is possible to define sequences of operating steps to control acquisition or time critical I/O. This FPGA
built-in functionality is called Hardware Real-Time Controller (short: HRTC).

• Developing Applications Using The mvIMPACT Acquire SDK (p. 120)

• DirectShow developers (p. 121)

– This is the documentation of the MATRIX VISION DirectShow_acquire interface.

• Use Cases (p. 127)

– This section offers solutions and explanations for standard use cases.
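As an illustration of the HRTC sequences mentioned above: an HRTC program is a short list of operating steps executed inside the FPGA. The following sketch generates a fixed internal image frequency of 10 images/s; the step names follow the HRTC use cases later in this manual, but the exact syntax and the WaitClocks time base (assumed here to be microseconds) should be checked against the HRTC chapter (p. 118).

```
0. WaitClocks( 99900 )   (wait: frame time 100000 us minus trigger pulse width)
1. TriggerSet   1        (assert the internal trigger signal)
2. WaitClocks( 100 )     (trigger pulse width: 100 us)
3. TriggerReset          (release the trigger signal)
4. Jump         0        (loop forever)
```

Such a program is entered step by step in the HRTC property list in wxPropView; the camera then triggers itself at the programmed rate without any load on the host PC.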

1.1.2 How to get started?

This chapter gives you a short overview of how to get started with your device and where to find the necessary
information in the manual. It also explains or links to the concepts behind the driver and the image acquisition.
Furthermore, it explains how to get started programming your own applications.

1.1.2.1 Installation

To install the mvBlueFOX properly, follow these steps:


(Please follow the links for detailed descriptions.)

• Windows:

– Check the system requirements (p. 20).


– Install the software and driver (p. 21).
– Install the hardware (p. 24).
– Configure the mvBlueFOX (p. 69)
* e.g. make a white balance (p. 98) (color sensors).
• Linux:

– Check the system requirements (p. 28).


– Install the software and driver (p. 29).
– Install the hardware (p. 34).
– Configure the mvBlueFOX (p. 69)
* e.g. make a white balance (p. 98) (color sensors).


1.1.2.2 Driver concept

The driver supplied with the MATRIX VISION product is the interface between the programmer and the
hardware. The MATRIX VISION driver concept provides a standardized programming interface to all image
processing products made by MATRIX VISION GmbH.
The advantage of this concept for the programmer is that a developed application runs without major
modifications on the various image processing products made by MATRIX VISION GmbH. You can also
incorporate new driver versions, which are available for download free of charge on our website:
http://www.matrix-vision.com.

The following diagram shows a schematic structure of the driver concept:

Figure 1: Driver concept

• 1 Part of any mvIMPACT Acquire driver installation package (Windows).

• 2 Separately available for 32 bit and 64 bit. Requires at least one installed driver package.

• 3 See 2, but requires an installed version of the mvBlueFOX driver.

• 4 Part of the NeuroCheck installer but requires at least one installed frame grabber driver.

• 5 Part of the mvIMPACT SDK installation. However, new designs should use the .NET libs that are now part
of mvIMPACT Acquire ("mv.impact.acquire.dll"). The namespace "mv.impact.acquire" of
"mv.impact.acquire.dll" provides a more natural and more efficient access to the same features
as contained in the namespace "mvIMPACT_NET.acquire" of "mvIMPACT_NET.dll", which is why
the latter one should only be used for backward compatibility but NOT when developing a new application.

• 6 Part of Micro-Manager.


1.1.2.2.1 NeuroCheck support A couple of devices are supported by NeuroCheck. However, between NeuroCheck
5.x and NeuroCheck 6.x there was a breaking change in the internal interfaces. Therefore the list of supported
devices differs from one version to another and some additional libraries might be required.

For NeuroCheck 5.x the following devices are supported:

Device      Additional software needed

mvTITAN-G1  mvSDK driver for mvTITAN/mvGAMMA devices
mvTITAN-CL  mvSDK driver for mvTITAN/mvGAMMA devices
mvGAMMA-CL  mvSDK driver for mvTITAN/mvGAMMA devices
mvBlueFOX   mvIMPACT Acquire driver for mvBlueFOX devices, "NCUSBmvBF.dll"

For NeuroCheck 6.0 the following devices are supported:

Device                                         Additional software needed

mvTITAN-G1                                     mvIMPACT Acquire driver for mvTITAN/mvGAMMA devices
mvTITAN-CL                                     mvIMPACT Acquire driver for mvTITAN/mvGAMMA devices
mvGAMMA-CL                                     mvIMPACT Acquire driver for mvTITAN/mvGAMMA devices
mvHYPERION-CLb                                 mvIMPACT Acquire driver for mvHYPERION devices
Every other mvIMPACT Acquire compliant device  mvIMPACT Acquire driver for the corresponding device family, "mv.impact.acquire.NeuroCheck6.dll" (comes with the driver package, but the driver package must be installed AFTER installing NeuroCheck 6)

For NeuroCheck 6.1 the following devices are supported:

Device                                         Additional software needed

mvTITAN-G1                                     mvIMPACT Acquire driver for mvTITAN/mvGAMMA devices
mvTITAN-CL                                     mvIMPACT Acquire driver for mvTITAN/mvGAMMA devices
mvGAMMA-CL                                     mvIMPACT Acquire driver for mvTITAN/mvGAMMA devices
mvHYPERION-CLb                                 mvIMPACT Acquire driver for mvHYPERION devices
Every other mvIMPACT Acquire compliant device  mvIMPACT Acquire driver for the corresponding device family, "mv.impact.acquire.NeuroCheck6_1.dll" (comes with the driver package, but the driver package must be installed AFTER installing NeuroCheck 6.1)

1.1.2.2.2 VisionPro support Every mvIMPACT Acquire driver package on Windows comes with an adapter to
VisionPro from Cognex. The installation order does not matter. After the driver package and VisionPro have been
installed, the next time VisionPro is started it will allow selecting the mvIMPACT Acquire device. No additional steps
are needed.
MATRIX VISION devices that also comply with the GigE Vision or USB3 Vision standard don't need any additional
software at all, but can use VisionPro's built-in GigE Vision or USB3 Vision support.

1.1.2.2.3 HALCON support HALCON comes with built-in support for mvIMPACT Acquire compliant devices, so
once a device driver has been installed for the mvIMPACT Acquire device, it can also be operated from a HALCON
environment using the corresponding acquisition interface. No additional steps are needed.


MATRIX VISION devices that also comply with the GigE Vision or USB3 Vision standard don't need any additional
software at all, but can use HALCON's built-in GigE Vision or USB3 Vision support.

As some mvIMPACT Acquire device driver packages also come with a GenTL compliant interface, these can also
be operated through HALCON's built-in GenTL acquisition interface.

1.1.2.2.4 LabVIEW support Every mvIMPACT Acquire compliant device can be operated under LabVIEW
through an additional set of VIs which is shipped by MATRIX VISION as a separate installation ("mvLabVIEW
Acquire").

MATRIX VISION devices that also comply with the GigE Vision or USB3 Vision standard don't need any additional
software at all, but can also be operated through LabVIEW's GigE Vision or USB3 Vision driver packages.

1.1.2.2.5 DirectShow support Every mvIMPACT Acquire compliant device driver package comes with an inter-
face to DirectShow. In order to be usable from a DirectShow compliant application, devices must first be registered
for DirectShow support. How to do this is explained here (p. 122).

1.1.2.2.6 Micro-Manager support Every mvIMPACT Acquire compliant device can be operated under
https://micro-manager.org when using mvIMPACT Acquire 2.18.0 or later and at least Micro-Manager
1.4.23 build AFTER 15.12.2016. The adapter needed is part of the Micro-Manager release. Additional information
can be found here: https://micro-manager.org/wiki/MatrixVision.

1.1.2.2.6.1 Code

• https://valelab4.ucsf.edu/svn/micromanager2/trunk/DeviceAdapters/MatrixVision/

• https://valelab4.ucsf.edu/trac/micromanager/browser/DeviceAdapters/MatrixVision

1.1.2.3 Image acquisition concept

The image acquisition is based on queues to avoid the loss of single images. With this concept you can acquire
images via single acquisition or triggered acquisition. For a detailed description of the acquisition concept, please
have a look at "How the capture process works" in the mvIMPACT_Acquire_API manual matching the programming
language you are working with.
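The queue concept can be pictured with a small toy model (plain Python, not the actual mvIMPACT Acquire API; all names below are invented for illustration): the application keeps a fixed set of request objects circulating between a request queue that the driver fills and a result queue that the application drains. As long as processed requests are unlocked promptly, no image is lost even though only a few buffers exist.

```python
# Toy model of queue-based image acquisition (illustration only, NOT the
# real mvIMPACT Acquire API): requests circulate between a request queue
# (filled by the "driver") and a result queue (drained by the application).
from collections import deque

class Request:
    def __init__(self, number):
        self.number = number
        self.image = None            # filled in by the "driver"

class ToyDriver:
    def __init__(self, request_count=4):
        self.request_queue = deque(Request(n) for n in range(request_count))
        self.result_queue = deque()
        self.frame = 0

    def capture_one(self):
        """Simulate the driver filling the next free request with an image."""
        if not self.request_queue:
            return False             # all requests locked by the application
        req = self.request_queue.popleft()
        req.image = f"frame-{self.frame}"
        self.frame += 1
        self.result_queue.append(req)
        return True

    def wait_for_result(self):
        """Hand the oldest filled request to the application, if any."""
        return self.result_queue.popleft() if self.result_queue else None

    def unlock(self, req):
        """Return a processed request to the driver so it can be reused."""
        req.image = None
        self.request_queue.append(req)

driver = ToyDriver()
images = []
for _ in range(6):                   # acquire 6 images with only 4 requests
    driver.capture_one()
    req = driver.wait_for_result()
    if req is not None:
        images.append(req.image)
        driver.unlock(req)           # without this, the queue would run dry
print(images)
```

The real driver works analogously: if the application stops unlocking requests, the request queue runs dry and new images are dropped, which is exactly what the queue depth is meant to prevent.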

1.1.2.4 Programming

The main pages of the corresponding mvIMPACT Acquire interface reference provide a good introduction to
controlling the device and handling image data. Additionally, please have a look at the example programs; several
basic examples are available. For details, refer to Developing Applications Using The mvIMPACT Acquire SDK
(p. 120), depending on the programming language you will use for your application.


1.2 Imprint

MATRIX VISION GmbH


Talstrasse 16
DE - 71570 Oppenweiler
Telephone: +49-7191-9432-0
Fax: +49-7191-9432-288
Website: http://www.matrix-vision.de
E-Mail:

[email protected]
[email protected]
[email protected]

Author

U. Lansche

Date

2019

This document assumes a general knowledge of PCs and programming.

Since the documentation is published electronically, an updated version may be available online. For this reason we
recommend checking for updates on the MATRIX VISION website.

MATRIX VISION cannot guarantee that the data is free of errors or is accurate and complete and, therefore,
assumes no liability for loss or damage of any kind incurred directly or indirectly through the use of the information
in this document.

MATRIX VISION reserves the right to change technical data and design and specifications of the described products
at any time without notice.

Copyright

MATRIX VISION GmbH. All rights reserved. The text, images and graphical content are protected by copyright
and other laws which protect intellectual property. It is not permitted to copy or modify them for trade use or
transfer. They may not be used on websites.

• Windows® XP, Windows® Vista, Windows® 7 are trademarks of Microsoft, Corp.

• Linux® is a trademark of Linus Torvalds.

All other product and company names in this document may be the trademarks and tradenames of their
respective owners and are hereby acknowledged.

The manual has been generated with Doxygen (Website: http://www.doxygen.org).

Parts of the log file creation and the log file display make use of Sarissa (Website: http://dev.abiss.gr/sarissa),
which is distributed under the GNU GPL version 2 or higher, GNU LGPL version 2.1 or higher, and
Apache Software License 2.0 or higher. The Apache Software License 2.0 is part of this
driver package.

1.3 Revisions


Date Description
6. April 2020 Updated mvBlueFOX (p. 16).
4. February 2020 Added some image save possibilities in How to see the first image (p. 74).
19. March 2019 Added Using the analysis plots (p. 79).
09. November 2018 Added "Hard Disk Recording" in wxPropView (p. 69).
21. December 2016 Updated Setting up multiple display support and/or work with several capture settings in parallel (p. 88).
15. December 2016 Added Micro-Manager in Driver concept (p. 2).
23. August 2016 Added measured frame rates of sensors mvBlueFOX-[Model]200w (0.4 Mpix [752 x
480]) (p. 201)
mvBlueFOX-[Model]202b (1.2 Mpix [1280 x 960]) (p. 207)
mvBlueFOX-[Model]202d (1.2 Mpix [1280 x 960]) (p. 210)
mvBlueFOX-[Model]205 (5.0 Mpix [2592 x 1944]) (p. 214).
01. August 2016 Extended use case Take two images with different expose times after an external
trigger (HRTC) (p. 172).
11. May 2016 Added Quick Setup Wizard (p. 70).
02. December 2015 Updated CE declarations (p. 11).
25. November 2015 Added Troubleshooting (p. 126).
27. October 2015 Added Command-line options (p. 110).
04. August 2015 Added Windows 10 support.
19. June 2015 Restructured chapter Use Cases (p. 127).
23. April 2015 Added use case Edge controlled triggering (HRTC) (p. 174).
16. April 2015 Updated supported Windows versions.
15. April 2015 Added lens protrusion.
11. March 2015 Added chapter Accessing log files (p. 97).
26. February 2015 Moved Creating double acquisitions (HRTC) (p. 171) to HRTC Use Cases.
27. January 2015 Added use case Using VLC Media Player (p. 129). Renewed Order code nomenclature (p. 16).
09. January 2015 Extended sample Using 2 mvBlueFOX-MLC cameras in Master-Slave mode (p. 163).
10. December 2014 Corrected Order code nomenclature (p. 16) : mvBlueFOX cameras without filter have
the order code 9 (excluding mvBlueFOX-MLC).
01. December 2014 Extended use case Using 2 mvBlueFOX-MLC cameras in Master-Slave mode
(p. 163).
25. November 2014 Corrected the possible HRTC - Hardware Real-Time Controller (p. 118) steps to 256.
21. October 2014 Added description about the record mode in How to see the first image (p. 74).
17. July 2014 Added use case Introducing LUTs (p. 158).
25. April 2014 Added description about Working with the hardware Look-Up-Table (LUT) (p. 109).
25. March 2014 Added use case Correcting image errors of a sensor (p. 131).
10. March 2014 mvDeviceConfigure (p. 111) extended.
Added S-mount lensholder for mvBlueFOX-MLC in Order code nomenclature (p. 16).
18. February 2014 Updated Characteristics (p. 212) of mvBlueFOX-[Model]202d (1.2 Mpix [1280 x 960])
(p. 210).
13. January 2014 Changed figure 3 in Using 2 mvBlueFOX-MLC cameras in Master-Slave mode
(p. 163).
12. December 2013 Changed figure in Using 2 mvBlueFOX-MLC cameras in Master-Slave mode (p. 163).
06. December 2013 Added information about Changing the view of the property grid to assist writing
code that shall locate driver features (p. 96).
22. November 2013 Extended information in Adjusting sensor of camera models -x00w (p. 151) and Ad-
justing sensor of camera models -x02d (-1012d) (p. 154).
30. October 2013 Enhanced cable description in 12-pin Wire-to-Board header (USB 2.0 / Dig I/O) (p. 54).


15. October 2013 Added Webcasts (p. 9) links.
Added chapter Bit-shifting an image (p. 95).
09. October 2013 Added information about Positioning tolerances of sensor chip (p. 62).
02. September 2013 Updated Order code nomenclature (p. 16).
22. April 2013 Added chapter Sensor's optical midpoint and orientation (p. 54) and corrected feature
table in CMOS sensors (p. 65) (software trigger).
19. March 2013 Update Figure 4 in chapter Dimensions and connectors (p. 40) and added Figure 5.
24. January 2013 Added information about image error counts and disabling CPU sleep states: How to
disable CPU sleep states a.k.a. C states (< Windows 8) (p. 115).
16. January 2013 Added status LED description of the mvBlueFOX-MLC (p. 59).
14. December 2012 New version of technical documentation.
07. December 2012 All parts of the manual to do with programming are available as a separate manual now:

• "mvIMPACT_Acquire_API_CPP_manual.chm",

• "mvIMPACT_Acquire_API_C_manual.chm", and

• "mvIMPACT_Acquire_API_NET_manual.chm". These manuals can be downloaded from http://www.matrix-vision.com.

30. September 2012 Moved Working with the Hardware Real-Time Controller (HRTC) (p. 168) to Use
Cases (p. 127).
20. September 2012 Added chapter "Porting existing code written with versions earlier than 3.0.0".
17. August 2012 Added use case Adjusting sensor of camera models -x02d (-1012d) (p. 154).
16. July 2012 Extended "Characteristics of the digital inputs" in D-Sub 9-pin (male) (p. 41).
21. June 2012 Added description, how to install the Linux driver using the installer script (Installing the
mvIMPACT Acquire driver (p. 29)).
21. June 2012 Added information (electrical characteristic, pinning (p. 53)) about LVTTL version
(mvBlueFOX-MLC2xxx-XLW).
02. April 2012 Enhanced chapter Output sequence of color sensors (RGB Bayer) (p. 66) and
added chapter Bilinear interpolation of color sensors (RGB Bayer) (p. 67).
17. February 2012 Renewed chapter wxPropView (p. 69).
09. November 2011 Added Settings behaviour during startup (p. 38) in chapter Quickstart (p. 20).
21. September 2011 Added SXGA sensor (p. 210) -202d. Added mvBlueFOX-IGC (p. 60) information.
26. July 2011 Removed chapter EventHandling. See "Porting existing code written with versions earlier than 2.0.0".
11. July 2011 Added chapter "Callback demo".
08. July 2011 Added chapter Using 2 mvBlueFOX-MLC cameras in Master-Slave mode (p. 163).
06. June 2011 Added chapter "Porting existing code written with versions earlier than 2.0.0".
31. May 2011 Added chapter Creating double acquisitions (HRTC) (p. 171).
26. April 2011 Added chapter Using external trigger with CMOS sensors (p. 150).
Updated chapter Dimensions and connectors (p. 53) (digital inputs TTL, digital outputs
TTL) of mvBlueFOX-MLC version.
18. January 2011 Added chapter Setting up multiple display support and/or work with several capture
settings in parallel (p. 88).
29. Nov. 2010 Added ADC resolutions in Sensor Overview (p. 63).
19. October 2010 Added chapter "Chunk data format".
07. Oct. 2010 Added High-Speed USB design guidelines (p. 11).
22. Sep. 2010 Added suitable for mvBlueFOX-MLC What's inside and accessories (p. 19).
26. Aug. 2010 Added cable end color of board-to-wire cable in Dimensions and connectors (p. 53).
Added chapter about Creating user data entries (p. 160).


02. Aug. 2010 mvBlueFOX-200W and mvBlueFOX-MLC100W support flash control output: CMOS
sensors (p. 65).
Added chapter Import and Export images (p. 87).
21. Jun. 2010 Included exposure modes in the frame rate calculator of the Sensor Overview (p. 63).
31. May 2010 Added chapter Single-board version (mvBlueFOX-MLC2xx) (p. 53).
19. Apr. 2010 Added new example ContinuousCaptureDirectX.
01. Apr. 2010 Added Use Cases (p. 127) about high dynamic range (p. 151) of sensor mvBlueFOX-
[Model]200w (0.4 Mpix [752 x 480]) (p. 201).
10. Feb. 2010 Added note about Windows XP Embedded in System Requirements (p. 20).
28. Jan. 2010 Added chapter Copy grid data to the clipboard (p. 86).
13. Jan. 2010 Added chapter "Porting existing code written with versions earlier than 1.12.0".
11. Jan. 2010 Due to a software update, documentation of CMOS sensor (-x00w) (p. 201) updated.
10. Nov. 2009 Added Windows 7 as supported operating system.
22. Oct 2009 Updated sensor data (p. 63).
19. Oct 2009 Updated wxPropView (p. 69) description about handling settings.
22. Sep. 2009 Added Wide-VGA sensor (p. 201) and removed sensor -x02.
17. Sep. 2009 Updated frame rate calculator of CCD sensors (p. 63).
05. May 2009 Added figures which show "how to connect flash to digital output".
05. May 2009 Added book Use Cases (p. 127), which offers solutions and explanations for standard
use cases.
22. Jan. 2009 Added information about how to test the general trigger functionality of the camera in Setting
up external trigger and flash control (p. 102).
26. Nov. 2008 Added chapter Setting up external trigger and flash control (p. 102).
28. Oct 2008 Added mvBlueFOX-M accessory Accessories mvBlueFOX-Mxxx (p. 50).
21. Jul. 2008 Added power supply note in 4-pin circular plug-in connector with lock (USB 2.0)
(p. 40).
11. Jun. 2008 Added preliminary sensor data of -105 in Sensor Overview (p. 63).
10. Jun. 2008 Updated sensor data of -121 in Sensor Overview (p. 63).
09. Apr. 2008 Corrected Figure 4: DIG OUT mvBlueFOX-1xx in Dimensions and connectors (p. 40).
25. Feb. 2008 Added note about EEPROM of mvBlueFOX-M in Dimensions and connectors (p. 46).
19. Feb. 2008 Corrected sensor data in Sensor Overview (p. 63).
30. Jan. 2008 Added note about the obsolete differentiation between 'R' and 'U' version in chapter
Dimensions and connectors (p. 40).
01. Oct 2007 Updated sensor data in chapter Order code nomenclature (p. 16).
20. Aug. 2007 Added part number of JST connectors used on the mvBlueFOX-M (see: Dimensions
and connectors (p. 46)).
31. Jul. 2007 Rewrote "How to use this manual". This book now includes a getting started chapter
(see: Composition of the manual (p. 1)).
11. Jun. 2007 Updated images in digital I/O description of mvBlueFOX-M (see: Dimensions and connectors (p. 46)).
29. May 2007 Added an attention in chapter Quickstart (p. 20) section Installing the hardware (p. 24)
(Windows) and Installing the hardware (p. 34) (Linux).
23. May 2007 Added calculators to calculate the frame rate of the sensors (see specific sensor
documentation: Sensor Overview (p. 63)).
23. Apr. 2007 Updated sensor description and added description of Micron's CMOS 1280x1024 (-
102a) (p. 204) sensor.
02. Apr. 2007 Updated description of mvBlueFOX-M1xx digital I/O in chapter Dimensions and connectors (p. 46).
29. Jan. 2007 Repainted DigI/O images (see: Dimensions and connectors (p. 40)).


24. Nov. 2006 Added attention to the DigI/O description of the mvBlueFOX-M (see: Dimensions and
connectors (p. 46)).
14. Nov. 2006 Updated Linux installation documentation (see: Quickstart (p. 20)).
20. Oct 2006 Updated Linux installation documentation (see: Quickstart (p. 20)).
11. Sep. 2006 Divided the Quickstart chapter into Linux® and Windows® (see: Quickstart (p. 20)).
8. Sep. 2006 Updated CCD timing in CCD 640 x 480 (1/3") documentation (see: mvBlueFOX-
[Model]220a (0.3 Mpix [640 x 480]) (p. 182)).
5. Sep. 2006 Updated the sensor data (see: Sensor Overview (p. 63)).
23. Aug. 2006 Added general tolerance of the housing (see: Technical Data (p. 40)).
28. Jul. 2006 Removed some linking errors.
19. Jul. 2006 Added WEEE-Reg.-No. (see: European Union Declaration of Conformity statement
(p. 11)).
Added ambient temperature of the mvBlueFOX standard version (see: Components
(p. 45)).
17. Jun. 2006 New chapter "Configure the log output using mvDeviceConfigure" (see: "Configure the
log output using mvDeviceConfigure").
07. Jun. 2006 Extended the HRTC documentation (see: How to use the HRTC (p. 119)).
02. Jun. 2006 Fixed image errors in CCD 640 x 480 (1/3") documentation (see: mvBlueFOX-
[Model]220a (0.3 Mpix [640 x 480]) (p. 182)).
01. Jun. 2006 Updated the chm index.
18. May 2006 Sensor description: Changed black/white to gray scale (see: Sensor Overview (p. 63)).
14. Feb. 2006 Added CCD 640 x 480 (1/3") (see: mvBlueFOX-[Model]220a (0.3 Mpix [640 x 480])
(p. 182)).
13. Feb. 2006 Corrected the image of the "4-pin circular plug-in connector" (see: Dimensions and
connectors (p. 40)).

1.4 Graphic Symbols

1.4.1 Notes, Warnings, Attentions

Note

A note indicates important information that helps you optimize usage of the products.

Warning

A warning indicates how to avoid either potential damage to hardware or loss of data.

Attention

An attention indicates a potential for property damage, personal injury, or death.

All due care and attention has been taken in preparing this manual. In view of our policy of continuous product
improvement, however, we can accept no liability for the completeness and correctness of the information contained in
this manual. We make every effort to provide you with a flawless product.

In the context of the applicable statutory regulations, we shall accept no liability for direct damage, indirect damage
or third-party damage resulting from the acquisition or operation of a MATRIX VISION product. Our liability for intent
and gross negligence is unaffected. In any case, the extent of our liability shall be limited to the purchase price.

1.4.2 Webcasts


This icon indicates that a webcast about this issue is available on our website.

1.5 Important Information

We cannot and do not take any responsibility for damage caused to you or to any other equipment
connected to the mvBlueFOX. Similarly, the warranty will be void if damage is caused by failure to
follow this manual.

Handle the mvBlueFOX with care. Do not misuse the mvBlueFOX. Avoid shaking, striking, etc. The
mvBlueFOX could be damaged by faulty handling or a short circuit.

Use a soft cloth lightly moistened with a mild detergent solution when cleaning the camera.

Never face the camera towards the sun. Whether the camera is in use or not, never aim at the sun or
other extremely bright objects. Otherwise, blooming or smear may be caused.

Please keep the camera closed or mount a lens on it to prevent the CCD or CMOS sensor from getting
dusty.

Clean the CCD/CMOS faceplate with care. Do not clean the CCD or the CMOS with strong or abrasive
detergents. Use lens tissue or a cotton tipped applicator and ethanol.

Never connect two USB cables to the mvBlueFOX even if one is only connected to a PC.

The mvBlueFOX is bus powered < 2.5 W.

The mvBlueFOX meets IP40 standards.


Using the single-board or board-level versions:

• Handle with care and avoid damage of electrical components by electrostatic discharge (ESD):

– Discharge body static (contact a grounded surface and maintain contact).


– Avoid all plastic, vinyl, and styrofoam (except antistatic versions) around printed circuit
boards.
– Do not touch components on the printed circuit board with your hands or with conductive
devices.

1.5.1 High-Speed USB design guidelines

If you want to make your own High-Speed (HS) USB cables, please pay attention to the following design guidelines:

• Route High-Speed (HS) USB signals with a minimum number of vias and sharp edges!

• Avoid stubs!

• Do not cut off power planes VCC or GND under the signal line.

• If possible, do not route signals closer than 20 ∗ h to the copper layer edge (h is the height above the
copper layer).

• Route signal lines with 90 Ohm +- 15% differential impedance.

– 7.5 mil printed circuit board tracks with 7.5 mil spacing result in approx. 90 Ohm @ 110 um height over
the GND plane.
– There are other rules when using double-ply printed circuit boards.

• Be sure that there is 20 mil minimum distance between High-Speed USB signal pair and other printed circuit
board tracks (optimal signal quality).
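As a quick numeric check of the impedance rule above, the 90 Ohm ± 15% window can be computed directly. A plain-shell sketch; the numbers come from the guideline itself:

```shell
# Compute the allowed differential impedance window for 90 Ohm +/- 15%
nominal=90
awk -v n="$nominal" 'BEGIN {
    printf "allowed differential impedance: %.1f .. %.1f Ohm\n", n * 0.85, n * 1.15
}'
```

Any simulated or measured differential impedance outside this 76.5 .. 103.5 Ohm window violates the guideline.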

1.5.2 European Union Declaration of Conformity statement

The mvBlueFOX complies with the provision of the following European Directives:

• 2014/30/EU (EMC directive)

• 2014/35/EU (LVD - low voltage directive)

• For EN 61000-6-3:2007, mvBlueFOX-IGC with digital I/O needs the Steward snap-on ferrite
28A0350-0B2 on I/O cable.

• For EN 61000-6-3:2007, mvBlueFOX-MLC with digital I/O needs the Würth Elektronik snap-on
ferrite WE74271142 on I/O cable and copper foil on USB.

MATRIX VISION complies with the EU directive WEEE 2002/96/EC on waste electrical and electronic
equipment and is registered under WEEE-Reg.-No. DE 25244305.


1.5.3 Legal notice

1.5.3.1 For customers in the U.S.A.


Class B

This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to Part
15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference
when the equipment is operated in a residential environment. This equipment generates, uses, and can radiate
radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful
interference to radio communications. However, there is no guarantee that interference will not occur in a particular
installation. If the equipment does cause harmful interference to radio or television reception, the user is encouraged
to try to correct the interference by one or more of the following measures:

• Reorient or relocate the receiving antenna.

• Increase the distance between the equipment and the receiver.

• Use a different line outlet for the receiver.

• Consult a radio or TV technician for help.

You are cautioned that any changes or modifications not expressly approved in this manual could void your authority
to operate this equipment. The shielded interface cable recommended in this manual must be used with this
equipment in order to comply with the limits for a computing device pursuant to Subpart B of Part 15 of FCC Rules.

• To be compliant with FCC Class B, the mvBlueFOX-IGC requires an I/O cable with a retrofittable ferrite to be
used, such as

– Company: Steward Type: 28A0350-0B2

1.5.3.2 For customers in Canada

This apparatus complies with the Class B limits for radio noise emissions set out in the Radio Interference Regulations.

1.5.3.3 Pour utilisateurs au Canada

Cet appareil est conforme aux normes classe B pour bruits radioélectriques, spécifiées dans le Règlement sur le
brouillage radioélectrique.


1.6 Introduction

The mvBlueFOX is a compact industrial CCD & CMOS camera solution for any PC with a Hi-Speed USB (USB
2.0) port. Its superior image quality makes it suited for most applications. Integrated preprocessing like binning
reduces the PC load to a minimum. The standard Hi-Speed USB interface guarantees an easy integration without
any additional interface board. To make the camera flexible for any industrial application, the image processing
tools mvIMPACT as well as different example solutions are available.

Figure 1: mvBlueFOX

The mvBlueFOX is suitable for following tasks:

• machine vision

• robotics

• surveillance

• microscopy

• medical imaging

Under the name mvBlueFOX-M1xx, the industrial camera mvBlueFOX is also available as a single-board version.

1.6.1 Order code nomenclature

1.6.1.1 mvBlueFOX

The mvBlueFOX nomenclature scheme is as follows:

mvBlueFOX - A B - (1) (2) (3) (4)

- A: Sensor model
220: 0.3 Mpix, 640 x 480, 1/4", CCD
220a: 0.3 Mpix, 640 x 480, 1/3", CCD
200w: 0.4 Mpix, 752 x 480, 1/3", CMOS
221: 0.8 Mpix, 1024 x 768, 1/3", CCD
202a: 1.3 Mpix, 1280 x 1024, 1/2", CMOS
223: 1.4 Mpix, 1360 x 1024, 1/2", CCD
224: 1.9 Mpix, 1600 x 1200, 1/1.8", CCD
205: 5.0 Mpix, 2592 x 1944, 1/2.5", CMOS

- B: Sensor color
G: Gray scale version
C: Color version


- (1): Lensholder
1: C-mount with adjustable backfocus (standard)
2: CS-mount with adjustable backfocus
3: S-mount

- (2): Filter
1: IR-CUT (standard)
2: Glass
3: Daylight cut
9: None

- (3): Case
1: Color blue (standard)
2: Color black, no logo, no label MATRIX VISION
3: Color blue, no logo, no label MATRIX VISION
9: None

- (4): Misc
1: None (standard)

1.6.1.2 mvBlueFOX-M

The mvBlueFOX-M nomenclature scheme is as follows:

mvBlueFOX-M A B - (1) (2) (3) (4)

- A: Sensor model
220: 0.3 Mpix, 640 x 480, 1/4", CCD
220a: 0.3 Mpix, 640 x 480, 1/3", CCD
200w: 0.4 Mpix, 752 x 480, 1/3", CMOS
221: 0.8 Mpix, 1024 x 768, 1/3", CCD
202a: 1.3 Mpix, 1280 x 1024, 1/2", CMOS
223: 1.4 Mpix, 1360 x 1024, 1/2", CCD
224: 1.9 Mpix, 1600 x 1200, 1/1.8", CCD
205: 5.0 Mpix, 2592 x 1944, 1/2.5", CMOS

- B: Sensor color
G: Gray scale version
C: Color version

- (1): Lensholder
1: No holder (standard)
2: C-mount with adjustable backfocus
3: CS-mount with adjustable backfocus
4: S-mount #9031
5: S-mount #9033

- (2): Filter
1: None (standard)
2: IR-CUT
3: Glass
4: Daylight cut

- (3): Misc
1: None (standard)

- (4): Misc
1: None (standard)

1.6.1.3 mvBlueFOX-IGC

The mvBlueFOX-IGC nomenclature scheme is as follows:


mvBlueFOX-IGC A B - (1) (2) (3) (4)

- A: Sensor model
200w: 0.4 Mpix, 752 x 480, 1/3", CMOS
202b: 1.2 Mpix, 1280 x 960, 1/3", CMOS
202d: 1.2 Mpix, 1280 x 960, 1/3", CMOS
202a: 1.3 Mpix, 1280 x 1024, 1/2", CMOS
205: 5.0 Mpix, 2592 x 1944, 1/2.5", CMOS

- B: Sensor color
G: Gray scale version
C: Color version

- (1): Lensholder
1: CS-mount without adjustable backfocus (standard)
2: C-mount without adjustable backfocus (CS-mount with add. 5 mm extension ring)
3: C-mount with adjustable backfocus

- (2): Filter
1: IR-CUT (standard)
2: Glass
3: Daylight cut
9: none

- (3): Case
1: Color blue (standard)
2: Color black, no logo, no label MATRIX VISION
9: None

- (4): I/O
1: None (standard)
2: With I/O #08727

1.6.1.4 mvBlueFOX-MLC

The mvBlueFOX-MLC nomenclature scheme is as follows:

mvBlueFOX-MLC A B - C D E - (1) (2) (3) (4)

- A: Sensor model
200w: 0.4 Mpix, 752 x 480, 1/3", CMOS
202b: 1.2 Mpix, 1280 x 960, 1/3", CMOS
202d: 1.2 Mpix, 1280 x 960, 1/3", CMOS
202a: 1.3 Mpix, 1280 x 1024, 1/2", CMOS
205: 5.0 Mpix, 2592 x 1944, 1/2.5", CMOS

- B: Sensor color
G: Gray scale version
C: Color version

- C: Mini USB
U: with Mini USB (standard)
X: without Mini USB

- D: Digital I/Os
O: 1x IN + 1x OUT opto-isolated (standard)
T: 2x TTL IN + 2x TTL OUT
L: 3x LVTTL IN

- E: Connector
W: board-to-wire (standard)
B: board-to-board

- (1): Lensholder
1: No holder (standard)
2: C-mount with adjustable backfocus (CS-mount with add. 5 mm extension ring)
3: CS-mount with adjustable backfocus
4: C-mount without adjustable backfocus
5: CS-mount without adjustable backfocus


6: LENSHOLDER SH01F08V3 #09851
7: LENSHOLDER SH02M13V2 #09951
8: LENSHOLDER SH03H16V2 #09850

- (2): Filter
1: None (standard)
2: IR-CUT
3: Glass
4: Daylight cut

- (3): Misc
1: None (standard)

- (4): Misc
1: None (standard)

Examples:

mvBlueFOX-120G (1)            640 x 480, CCD 1/4", gray
mvBlueFOX-102C (1)            1280 x 1024, CMOS 1/2", color
mvBlueFOX-M121C               1024 x 768, CCD 1/3", color, module
mvBlueFOX-MLC200wC-XOW-5111   752 x 480, CMOS 1/3", color, single-board, without Mini-USB,
                              1x IN + 1x OUT opto-isolated, board-to-wire, CS-mount
                              (w/o backfocus adjustment)

(1): -1111 is the standard delivery variant and for this reason it is not mentioned.

1.6.2 What's inside and accessories

Due to the varying fields of application, the mvBlueFOX is shipped without accessories. The package contains:

• mvBlueFOX

• instruction leaflet

For the first use of the mvBlueFOX we recommend the following accessories to get the camera up and running:

• A USB 2.0 cable

Attention

Depending on the customer's order, if the mvBlueFOX-MLC is shipped without a lensholder, it will be
shipped with a protective foil on the sensor. Please remove this foil before use!

Accessories for the mvBlueFOX:


Part code Description


ADAPTER CS-MOUNT Lens fixing for mvBlueFOX to match with CS-mount lenses
KS-USB2-AA-EXT 05.0 USB 2.0 extension, active, USB2 A plug to USB2 A jack. Length: 5m
KS-USB2-AB 03.0 TR USB 2.0 cable A-B, transparent, Profi Line. Length: 3m
KS-USB2-B4ST 02.0 USB 2.0 cable for mvBlueFOX, Binder 4pol to USB2-A. Length: 2m
KS-USB2-B4ST 03.0 USB 2.0 cable for mvBlueFOX, Binder 4pol to USB2-A. Length: 3m
KS-USB2-B4ST 05.0 USB 2.0 cable for mvBlueFOX, Binder 4pol to USB2-A. Length: 5m
KS-USB2-PHR4 01.5 USB connector cable for mvBlueFOX-M1xx. Length: 1.5m
KS-PHR12 500 Cable for mvBlueFOX-M1xx dig. I/O, 12-pin. Length: 500mm
1..4 brown
5..8 gray
9 red
10 black
11 yellow
12 black

KS-MLC-IO-TTL 00.5 mvBlueFOX-MLC board-to-board TTL I/O cable for master-slave synchronization
(Molex plug to Molex plug). Length: 0.5m
KS-MLC-IO-W mvBlueFOX-MLC board-to-wire I/O data cable, Molex 0510211200 with crimp
terminal 50058. Length: up to 1m
KS-MLC-USB2-IO-W mvBlueFOX-MLC board-to-wire I/O data and USB 2.0 cable, Molex
0510211200 with crimp terminal 50058 to USB2-A. Length: up to 1m
MV-Lensholder BFM-C C-mount lensholder for mvBlueFOX-M, incl. IR-Cut filter
MV-Lensholder BFM-S 9031 S-mount lensholder M12 x 0.5, type MS-9031, for mvBlueFOX-M102
MV-Lensholder BFM-S 9033 S-mount lensholder M12 x 0.5, type MS-9033, for mvBlueFOX-M102
MV-LENSHOLDER SH02M13 S-mount lensholder M12 x 0.5, height 13mm, for mvBlueFOX-MLC
MV-LENSHOLDER SH01F08 S-mount lensholder M12 x 0.5, height 8mm, for mvBlueFOX-MLC
ADAPTER S-C AD01S Adapter for S-mount lens (M12x0.5) to C-mount, high penetration depth, for
mvBlueFOX-IGC
ADAPTER S-C AD02F Adapter for S-mount lens (M12x0.5) to CS-mount, penetration depth: 5.5mm,
outside diameter: 31mm, for mvBlueFOX-IGC
MV-Tripod Adapter BF Tripod adapter for mvBlueFOX

1.7 Quickstart

1.7.1 Windows

1.7.1.1 System Requirements

Currently supported Windows versions are:

• Microsoft Windows 7 (32-bit, 64-bit)

• Microsoft Windows 8.1 (32-bit, 64-bit)

• Microsoft Windows 10 (32-bit, 64-bit)

Other Windows versions can be used at the user's own risk.


Note

Since mvIMPACT Acquire version 2.8.0 you may have to update your Windows Installer, at least when
using Windows XP. The necessary package is available from Microsoft's website: http://www.microsoft.com/en-US/download/details.aspx?id=8483

All necessary drivers are available from the MATRIX VISION website at www.matrix-vision.de, section "Products
-> Cameras -> your interface -> your product -> Downloads".

Note

For Windows XP Embedded


As mvBlueFOX cameras will register as an 'imaging device' in the system's device manager, please make
sure that your Windows XP Embedded (XPe) distribution is shipped/built with support for the corresponding
device class ("Class GUID {6bdd1fc6-810f-11d0-bec7-08002be2092f}") before installing
the mvBlueFOX device driver. Otherwise, the installation of the driver when connecting a camera will fail with an
error message like "a required section in the INF file could not be found", and the camera will not be accessible.

The mvBlueFOX is a USB 2.0 compliant camera device and therefore needs a functioning USB 2.0 port. If you are
not sure about this, please follow these steps:

1. Press "Start" and click on "Run"

2. Enter

msinfo32

3. Click on "Components" and look for "USB"

4. If there is an entry like "USB 2.0 Root Hub" or "ROOT_HUB20", your system has USB 2.0.

Please be sure that your system has at least one free USB port.

1.7.1.2 Installing the mvIMPACT Acquire driver

Warning

Before connecting the mvBlueFOX, please install the software and driver first!

All necessary drivers are available from the MATRIX VISION website:
https://www.matrix-vision.com "Products -> Hardware -> mvBlueFOX -> Downloads Tab".

By double-clicking on "mvBlueFOX-x86-n.n.n.msi" (for 32-bit systems) or "mvBlueFOX-x86_64-n.n.n.msi" (for
64-bit systems), the mvBlueFOX installer will start automatically.


Figure 1: mvBlueFOX installer - Start window

Select the folder, where you want to install the software.

Figure 2: mvBlueFOX installer - Select folder

Select the features you want to install. The following features exist:

• "Base Libraries"
This feature contains all files necessary for property handling and display; it is mandatory and therefore not selectable.

• "mvBlueFOX driver"
This feature is also mandatory and therefore not selectable.


• "Tools"
This feature contains tools for the mvBlueFOX (e.g. to configure MATRIX VISION devices (mvDevice←-
Configure) or to acquire images (wxPropView)).

• "Developer API"
The "Developer API" contains the headers for your own programming. Additionally you can choose the examples
feature, which installs the sources of wxPropView, mvIPConfigure and various small examples. The project files
shipped with the examples have been generated with Visual Studio 2013; however, projects and makefiles
for other compilers can be generated fairly easily using CMake. See the CMake section in the C++ manual for
additional details.

• "Documentation"
This will install this mvBlueFOX manual as a single HTML help file (.chm).

Figure 3: mvBlueFOX installer - Select features

Confirm the installation by clicking "Next".

Figure 4: mvBlueFOX installer - Confirm installation


The installation is now finished and you can close the window.

Figure 5: mvBlueFOX installer - Finished installation

1.7.1.3 Installing the hardware

Warning

Before connecting the mvBlueFOX, please install the software and driver first!

It is not necessary to shut down your system. On a USB port, it is possible to hot-plug any USB device (hot
plugging lets you plug in new devices and use them immediately).

Warning

If using the Binder connector, first connect the cable to the camera, then connect the camera to the PC.

Plug the mvBlueFOX into a USB 2.0 port. After plugging in the mvBlueFOX, Windows® shows "Found New Hardware"
and starts the Windows Hardware Wizard.

Figure 6: Windows - Found new hardware

The Wizard asks you for the driver. The installation does not need any Windows® automatic search at this step, and it
is recommended to type the driver directory by hand. Choose "No, not this time" and press "Next".


Figure 7: Windows Hardware Wizard - Driver Installation

Choose "Install the software automatically" and press "Next".

Figure 8: Windows Hardware Wizard - Driver location

The Hardware Wizard installs the driver.


Figure 9: Windows Hardware Wizard - Driver location

The Hardware Wizard will search the registry for the device identification and after a while the Wizard prompts you
to continue the installation or to abort it. Windows® will also display the following message to inform the user that
this driver is not digitally signed by Microsoft. You have to select "Continue Anyway"; otherwise, the driver can't be
installed. If you don't want to install a driver that is not signed, you must stop the installation, but then you can't work
with the mvBlueFOX camera.

Press "Continue Anyway" and finish the driver installation.

Figure 10: Windows Hardware Wizard - Windows logo testing

After the Windows® Logo testing, you have to click "Finish" to complete the installation.


Figure 11: Windows Hardware Wizard - Complete the installation

Now, you can find the installed mvBlueFOX in the Windows® "Device Manager" under imaging devices.

Figure 12: Windows Device Manager - Installed mvBlueFOX

After this installation, you can acquire images with the mvBlueFOX. Simply start the application wxPropView (p. 69)
(wxPropView.exe) from

mvBlueFOX/bin.

See also

wxPropView (p. 69)


1.7.2 Linux

1.7.2.1 System Requirements

Kernel requirements

Kernel 2.6.x .. Kernel 3.x.x.

• usbfs support (CONFIG_USB_DEVICEFS)

Note

This is different from devfs support! The USB device file system should, of course, be mounted at
/proc/bus/usb.

• SysV IPC support (CONFIG_SYSVIPC).

Note

Most distributions will have these kernel options turned on by default.
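Whether these two options are enabled can be checked on a running system. The sketch below assumes the kernel configuration is exposed via /proc/config.gz or /boot/config-$(uname -r); on kernels that provide neither, it simply reports "not found":

```shell
# Report whether the required kernel options are enabled; prints "not found"
# when the kernel configuration is not exposed on this system
for opt in CONFIG_USB_DEVICEFS CONFIG_SYSVIPC; do
    if zcat /proc/config.gz 2>/dev/null | grep -q "^$opt=y" \
       || grep -q "^$opt=y" "/boot/config-$(uname -r)" 2>/dev/null; then
        echo "$opt: enabled"
    else
        echo "$opt: not found"
    fi
done
```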

1.7.2.1.1 Software requirements

• Linux x86 (32-bit)

– The 32-bit version will run on a 64-bit Linux system if the other library requirements are met with 32-bit
libraries, i.e. you cannot mix 64-bit and 32-bit libraries and applications.
– Versions for Linux on x86-64 (64-bit), PowerPC, ARM or MIPS may be possible on request.

• GNU compiler version GCC 3.2.x or greater and associated tool chain.

Note

Our own modified version of libusb has been statically linked to our library and is therefore included, so
libusb is not a requirement.

1.7.2.1.2 Other requirements

• libexpat ( http://expat.sourceforge.net)

• Optional: wxWidgets 2.6.x (non Unicode) for the wxWidget test programs.

• Optional: udev or hotplug subsystem (see also 4. below).

As an example of which packages need to be installed, consider OpenSuSE 10.1:

• The compiler used is gcc 4.1.0 and may need to be installed. Use the "gcc" and "gcc-c++" RPMs. Other
RPMs may be installed automatically due to dependencies (e.g. make).

• libexpat will almost definitely be installed already in any software configuration. The RPM is called "expat".

• Install the wxWidgets "wxGTK" and "wxGTK-develop" RPMs. Others that will be automatically installed due
to dependencies include "wxGTK-compat" and "wxGTK-gl". Although the MATRIX VISION software does not
use the ODBC database API the SuSE version of wxWidgets has been compiled with ODBC support and the
RPM does not contain a dependency to automatically install ODBC. For this reason you must also install the
"unixODBC-devel" RPM.

• OpenSuSE 10.1 uses the udev system so a separate hotplug installation is not needed.


1.7.2.1.3 Hardware requirements A USB 2.0 host controller (Hi-Speed) is required; a USB 1.1 host controller
will also work (but with a max. frame rate of only 3 to 4 fps at 640x480).
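The 3 to 4 fps figure can be reproduced with simple arithmetic: one 640 x 480 8-bit frame is 307200 bytes, and full-speed USB 1.1 delivers roughly 1.2 MB/s of usable payload (the bandwidth value is an approximation used for this estimate, not a measured figure):

```shell
# Rough estimate of the achievable frame rate over USB 1.1 (full speed)
awk 'BEGIN {
    frame_bytes = 640 * 480   # one 8-bit monochrome frame
    usable_bw   = 1.2e6       # approx. usable full-speed payload in bytes/s (assumption)
    printf "approx. %.1f fps\n", usable_bw / frame_bytes
}'
```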

Note

We have noticed problems with some USB chip sets. At high data rates the image data sometimes appears
to be corrupted. If you experience this, you could try one or more of the following things:

• a different PC.

• a plug-in PCI/USB-2.0 card without any cables between the card and the USB connector.

• turning off the image footer property - this will ignore data errors.

Note

The driver contains libraries for Linux x86 (32 bit) or Linux 64-bit (x86_64). There are separate package files
for systems with tool chains based on GNU gcc 3.2.x - 3.3.x and those based on GNU gcc >= 3.4.x. gcc 3.1.x
may work but, in general, the older your tool chain is, the less likely it is that it will work. Tool chains based on
GNU gcc 2.x.x are not supported at all.
GCC 4.x (4.1.0) has been tested on OpenSuSE 10.1 and should work on other platforms.
This version (32-bit only) will also run in a VMware ( http://www.vmware.com) virtual machine!

1.7.2.2 Installing the mvIMPACT Acquire driver

To use the mvBlueFOX camera within Linux (grab images from it and change its settings), a driver is needed,
consisting of several libraries and several configuration files. These files are required during run time.

To develop applications that can use the mvBlueFOX camera, a source tree is needed, containing header files,
makefiles, samples, and a few libraries. These files are required at compile time.

Both file collections are distributed in a single package:

mvBlueFOX-x86_ABI2-n.n.n.tgz

1. Please start a console and change into a directory e.g. /home/username/workspace

cd /home/username/workspace

2. Copy the install script (available as download from https://www.matrix-vision.com) and the
hardware driver to the workspace directory (e.g. from a driver CD or from the website):

~/workspace$ cp /media/cdrom/drv/Linux/install_mvBlueFOX.sh ./ \
    && cp /media/cdrom/drv/Linux/mvBlueFOX-x86_ABI2-1.12.45.tgz -t ./

3. Run the install script:

~/workspace$ ./install_mvBlueFOX.sh


Note

The install script has to be executable, so please check the file permissions.
During installation, the script will ask if it should build all tools and samples.

You may need to enable the execute flag with

chmod a+x install_mvBlueFOX.sh.

The installation script checks the different packages and installs them with the respective standard package
manager (e.g. apt-get) if necessary.

Note

The installation script ("install_mvBlueFOX.sh") and the archive ("mvBlueFOX-x86_ABI2-n.n.n.tgz")
must reside in the same directory. Nothing is written to this directory during script execution, so
no write access to the directory is needed in order to execute the script.

You need Internet access in case one or more of the packages on which the GenICam™ libs depend are not yet
installed on your system; in this case, the script will install these packages.

The script takes two arguments, both of which are optional:

1. target directory name

2. version

The target directory name specifies where to place the driver. If the directory does not yet exist, it will be created.
The path can be either absolute or relative, i.e. the name may, but need not, start with "/".

Note

This directory is only used for the files that are run time required.

The files required at compile time are always installed in "$HOME/mvimpact-acquire-n.n.n". The script
also creates a convenient softlink to this directory:

mvimpact-acquire -> mvIMPACT_acquire-1.12.45

If this argument is not specified, or is ".", the driver will be placed in the current working directory.

The version argument is entirely optional. If no version is specified, the most recent
mvBlueFOX-x86_ABI2-n.n.n.tgz found in the current directory will be installed.
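The selection of the "most recent" package can be illustrated with a version-aware sort. This is a sketch of the presumed selection logic, not the script's actual code; the demo creates two empty package files in a temporary directory:

```shell
# Demonstrate picking the newest mvBlueFOX-x86_ABI2-n.n.n.tgz by version number
cd "$(mktemp -d)"
touch mvBlueFOX-x86_ABI2-1.9.9.tgz mvBlueFOX-x86_ABI2-1.12.45.tgz
ls mvBlueFOX-x86_ABI2-*.tgz | sort -V | tail -n 1
```

Note that "sort -V" compares version components numerically, so 1.12.45 correctly sorts after 1.9.9 even though a plain alphabetic sort would rank "1.12" before "1.9".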

You can now start wxPropView (p. 69), after installing the hardware (p. 34) like

wxPropView

because the installer script added the needed symbolic links.

Note

If you want to install the mvBlueFOX Linux driver manually, without the installer script, please have a look at the
following chapter:


1.7.2.2.1 Installing the mvIMPACT Acquire driver manually

Note

We recommend using the installer script to install the mvBlueFOX driver (p. 29).

The mvBlueFOX is controlled by a number of user-space libraries. It is not necessary to compile kernel modules for
the mvBlueFOX.

1. Logon to the PC as the "root" user or start a super user session with "su". Start a console with "root"
privileges.

2. Determine which package you need by issuing the following command in a terminal window:

gcc -v

This will display a lot of information about the GNU gcc compiler being used on your system. Depending on the
version number, you have to do the following:

Version                   Description
2.x.x (obsolete)          You cannot use the mvBlueFOX on your computer. Upgrade to a newer distribution.
3.2.x - 3.3.x (obsolete)  Use the C++ ABI 1. This package has ABI1 in its name.
greater or equal 3.4.x    Use the C++ ABI 2. This package has ABI2 in its name.
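The table above can be applied mechanically to the compiler version. The following sketch is an illustration only; it assumes "gcc -dumpversion" returns a plain numeric version, which holds for the GCC releases mentioned here:

```shell
# Decide which C++ ABI package matches the installed GCC version
ver=$(gcc -dumpversion 2>/dev/null || echo "unknown")
major=${ver%%.*}
rest=${ver#*.}
minor=${rest%%.*}
case "$major" in
    ''|*[!0-9]*)
        echo "Could not determine the GCC version" ;;
    2)
        echo "GCC $ver is obsolete - upgrade to a newer distribution" ;;
    3)
        if [ "$minor" -ge 4 ]; then echo "Use the ABI2 package"
        else echo "Use the ABI1 package"; fi ;;
    *)
        echo "Use the ABI2 package" ;;
esac
```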

3. You can now install the mvBlueFOX libraries as follows:

• create a new directory somewhere on your system.
• copy the correct mvBlueFOX package file to this directory and change into this directory with "cd".

The mvBlueFOX libraries are supplied as a "tgz" archive with the extension ".tgz". The older
"autopackage" format is now deprecated since it cannot handle 64-bit libraries.

(a) Unpack the archive using "tar" e.g.:

tar xvzf mvBlueFOX-x86_ABI2-1.12.45.tgz

Note

Current versions of the ABI1 libraries were compiled using a SuSE 8.1 system for maximum compatibility with older Linux distributions. These libraries should work with all SuSE 8.x and SuSE 9.x versions as well as with Debian Sarge and older Red Hat / Fedora variants.
Current versions of the ABI2 libraries were compiled using a SuSE 10.1 system for maximum compatibility with newer Linux distributions. These libraries should work with SuSE 10.x as well as with Ubuntu 6.06 or newer, and with up-to-date Gentoo or Fedora FC5.
(b) After installing the mvBlueFOX access libraries you will see something like the following directory structure in your directory (dates and file sizes will differ from the list below):

drwxr-xr-x 10 root root 4096 Jan 5 15:08 .


drwxr-xr-x 23 root root 4096 Jan 4 16:33 ..
drwxr-xr-x 3 root root 4096 Jan 5 15:08 DriverBase
-rw-r--r-- 1 root root 1079 Jan 5 15:08 Makefile
drwxr-xr-x 7 root root 4096 Jan 5 15:08 apps
drwxr-xr-x 4 root root 4096 Jan 5 15:08 common
drwxr-xr-x 3 root root 4096 Jan 5 15:08 lib
drwxr-xr-x 3 root root 4096 Jan 5 15:08 mvDeviceManager
drwxr-xr-x 2 root root 4096 Jan 5 15:08 mvIMPACT_CPP
drwxr-xr-x 3 root root 4096 Jan 5 15:08 mvPropHandling
drwxr-xr-x 1 root root 4096 Jan 5 15:08 scripts


The directory "lib/x86" contains the pre-compiled 32-bit libraries for accessing the mvBlueFOX.
If 64-bit libraries are supplied, they will be found in "lib/x86_64". The "apps" directory contains test
applications (source code). The other directories contain headers needed to write applications for the
mvBlueFOX.
Since the libraries are not installed to a directory known to the system (i.e. not in the "ldconfig" cache), you will need to tell the system where to find them by...
• using the "LD_LIBRARY_PATH" environment variable,
• or copying the libraries by hand to a system directory like "/usr/lib" (or using some symbolic
links),
• or entering the directory in "/etc/ld.so.conf" and running "ldconfig".
e.g. to start the application called "SingleCapture":
Note

Please specify the device, e.g. "BF*" (any mvBlueFOX) or "BF00001" (a specific serial number)


cd my_mvbf_directory
LD_LIBRARY_PATH=`pwd`/lib/x86 apps/SingleCapture/x86/SingleCapture BF*

For 64-bit it will look like this...

LD_LIBRARY_PATH=`pwd`/lib/x86_64 apps/SingleCapture/x86_64/SingleCapture BF*

For ARM it will look like this...

LD_LIBRARY_PATH=`pwd`/lib/arm apps/SingleCapture/arm/SingleCapture BF*

etc.
After installing the libraries and headers you may continue with "3." below as a normal user, i.e. you do not need to be "root" in order to compile the test applications. See also note "4." below.
(c) To build the test applications type "make". This will attempt to build all the test applications contained in "apps". If you have problems compiling the wxWidgets library or application you may need to do one or more of the following:
• install the wxWidgets 3.x development files (headers etc.) supplied for your distribution (see "Other requirements" above).
• fetch, compile and install the wxWidgets 3.x package from source, downloaded from the website (http://www.wxwidgets.org).
• alter the Makefiles so that they find the wxWidgets configuration script called wx-config.
The files you may need to alter are to be found here:
apps/mvPropView/Makefile.inc

You will find the compiled test programs in the subdirectories "apps/.../x86". For 64 bit systems
it will be "apps/.../x86_64". For ARM systems it will be "apps/.../arm".
If you cannot build the wxWidgets test program you should, at least, be able to compile the text-based test programs in "apps/SingleCapture" etc.
(d) It may be possible to run applications as a non-root user on your system if you are using the udev
system or a fairly recent version of hotplug.

1.7.2.2.2 For udev (e.g. Gentoo, OpenSuSE 10.0 - 10.1) Add the following 2 rules to one of the files in the directory "/etc/udev/rules.d" or make a new file in this directory (e.g. "/etc/udev/rules.d/20-mvbf.rules") containing the lines below:

ENV{UDEVD_EVENT}=="1", ACTION=="add", BUS=="usb", ENV{PRODUCT}=="164c/101/0", \
RUN+="/bin/sh -c 'chmod 0664 $env{DEVICE}; chgrp usb $env{DEVICE}'"
ENV{UDEVD_EVENT}=="1", ACTION=="add", BUS=="usb", ENV{PRODUCT}=="164c/103/0", \
RUN+="/bin/sh -c 'chmod 0664 $env{DEVICE}; chgrp usb $env{DEVICE}'"

You will find an example file "20-mvbf.rules" in the scripts directory after installation.


Note

Do not forget to add your user to the usb group! You may have to create the group first.

Current Gentoo systems support udev with a minimal, legacy hotplug system. It is usually sufficient to add any
usernames that are to be used for the mvBlueFOX to the group "usb" because there is already a udev rule giving
write permission to all USB devices to members of this group. If this does not work then try the other alternatives
described here. The udev method is better because hotplug is likely to be removed eventually.

1.7.2.2.3 For udev (OpenSuSE 10.2 - 10.x) In "/etc/fstab", in the line starting with "usbfs", change "noauto" to "defaults".

Afterwards either restart the system or execute "mount -a".

In "/etc/udev/rules.d/50-udev-default.rules" (or similar), change the line after the comment regarding libusb: change 'MODE="0644"' to 'MODE="0664", GROUP="users"'.

Connect the camera to the system now, or re-connect it if it has been connected already.

In case the environment variable USB_DEVFS_PATH is not set, it needs to be set to "/dev/bus/usb" ('export USB_DEVFS_PATH="/dev/bus/usb"').

1.7.2.2.4 For udev on Ubuntu 6.10 Edit the file /etc/udev/rules.d/40-permissions.rules. Search for the entry for usbfs. It should look like this:

# USB devices (usbfs replacement)

SUBSYSTEM=="usb_device", MODE="0664"

Now change it to read like this:

# USB devices (usbfs replacement)

SUBSYSTEM=="usb_device", GROUP="plugdev", MODE="0664"

1.7.2.2.5 udev on some other systems Some very up-to-date systems also set the environment variable USB_DEVFS_PATH to point to "/dev/bus/usb" instead of the older (default) value of "/proc/bus/usb". This may cause the mvBlueFOX libraries to attempt to access device nodes in "/dev/bus/usb", but the rules described above will not change the permissions on these files. Normally you will find a rule in "/etc/udev/rules.d/50-udev.rules" which will already cure this problem. You might like to slightly modify this rule to give write permission to a specific group, e.g. the group "usb". A patch is supplied in the scripts directory to do this.

1.7.2.2.6 Alternatively, for a system with full hotplugging (e.g. older SuSE systems) Copy the files named
below from the scripts directory into the directory "/etc/hotplug/usb":

matrixvision.usermap
matrixvision_config

The file "matrixvision.usermap" contains the vendor and product IDs for the mvBlueFOX cameras and specifies that the script "matrixvision_config" should be run when a MATRIX VISION mvBlueFOX camera is plugged in or removed. This script attempts to write some information to the system log and then changes the permissions for the newly-created device node so that non-root users can access the camera.

This feature has not yet been extensively tested. If you find that the applications start but appear to hang or to wait
for a long time before continuing (and normally crashing) then changing the file permissions on your system does
not appear to be sufficient. We have observed this on a 32 bit SuSE 9.1 system. In this case you may have more
success if you change the owner of the application to "root" and set the suid bit to allow it to run with "root"
permissions.


Note

This is considered a security risk by some experts.


If you have been using the mvBlueFOX with one user (e.g. root) and want to try it as another user you should
remove the complete directory "/tmp/mv/", which may contain several files of zero length. These files are
used to control mutexes within the software and will be owned by the first user. The new user will probably not
be able to write to these files and the mvBlueFOX will not work.

1.7.2.2.6.1 Using CMOS versions of the mvBlueFOX and mvBlueFOX-M, especially with USB 1.1 Version 1.4.5 contains initial support for the CMOS mvBlueFOX on USB 1.1. In order to conform to the rigid timing specifications of the CMOS sensor, onboard RAM is used. This RAM is available only on mvBlueFOX-M boards at the moment. Therefore you cannot use the mvBlueFOX-102x with USB 1.1; it will work with USB 2.0.

Note

If you want to capture continuous live images from the mvBlueFOX-102 or mvBlueFOX-M102x you should switch the trigger mode from "Continuous" to "OnDemand" for the most reliable results. For single snaps the default values should work correctly.

1.7.2.3 Installing the hardware

Warning

If using the Binder connector first connect the cable to the camera, then connect the camera to the PC.

The driver for Linux does not include hot-plugging support at the application level, i.e. a running application will not be informed of new mvBlueFOX devices that have been plugged in, and will probably crash if an mvBlueFOX camera is unplugged whilst it is being used. You need to stop the application, plug in the new camera and then restart the application. This will change in a later version.

1.7.3 Relationship between driver, firmware and FPGA file

To operate an mvBlueFOX device, apart from the physical hardware itself, 3 pieces of software are needed:

• a firmware running on the device (provides low-level functionality like allowing the device to act as a USB
device, support for multiple power states etc.)

• an FPGA file loaded into the FPGA inside the device (provides access features to control the behaviour of
the image sensor, the digital I/Os etc.)

• a device driver (this is the mvBlueFOX.dll on Windows® and the libmvBlueFOX.so on Linux) running on the
host system (provides control over the device from an application running on the host system)

The physical mvBlueFOX device has a firmware programmed into the device's non-volatile memory, thus allowing
the device to act as a USB device by just connecting the device to a free USB port. So the firmware version that will
be used when operating the device does NOT depend on the driver version that is used to communicate with the
device.

In contrast, the FPGA file that will be used is downloaded into volatile memory (RAM) when the device is accessed through the device driver, i.e. the API. One or more FPGA files are a binary part of the device driver. This is illustrated by the following figure:


Figure 13: The firmware file is a binary part of the device driver

Note

As can be seen in the image, one or multiple firmware files are also a binary part of the device driver.
However, it is important to note that this firmware file will NOT be used automatically but only when the user or an application explicitly updates the firmware on the device, and it will only become active after power-cycling the device. Since mvIMPACT Acquire version 2.28.0 every firmware starting from version 49 is available within a single driver library and can be selected for updating! mvDeviceConfigure however will always update the device firmware to the latest version. If you need to downgrade the firmware for any reason please get in contact with the MATRIX VISION support to get detailed instructions on how to do that.

1.7.3.1 FPGA

Until the device is initialized using the API, no FPGA file is loaded into the FPGA on the device. Only when the device is opened through the API is the FPGA file downloaded, and only then will the device be fully operational:

Figure 14: The FPGA file is downloaded when the device is opened through the API

As the FPGA file will be stored in RAM, disconnecting or closing the device will cause the FPGA file to be lost. The
firmware however will remain:

Figure 15: The FPGA file will be lost if the device is disconnected or closed

In case multiple FPGA files are available for a certain device the FPGA file that shall be downloaded can be selected
by an application by changing the value of the property Device/CustomFPGAFileSelector. However the value of this
property is only evaluated when the device is either initialized using the corresponding API function OR if a device
has been unplugged or power-cycled while the driver connection remains open and the device is then plugged back
in.


Note

There is only a limited set of devices that offer more than one FPGA file, and these additional FPGA files serve very special purposes, so in almost every situation the default FPGA file will be the one used by an application. Before using custom FPGA files, please check with MATRIX VISION about why and whether this makes sense for your application.

So assuming the value of the property Device/CustomFPGAFileSelector has been modified while the device has
been unplugged, a different FPGA file will be downloaded once the device is plugged back into the host system:

Figure 16: A different FPGA file can be downloaded

1.7.3.2 Firmware

Only during a firmware update will the firmware file that is a binary part of the device driver be downloaded permanently into the device's non-volatile memory.

Warning

Until mvIMPACT Acquire 2.27.0 each device driver contained just one specific firmware version, thus once a device's firmware had been updated using a specific device driver, the only way to change the firmware version was to use another device driver version for upgrading/downgrading the firmware again. Since mvIMPACT Acquire version 2.28.0 every firmware starting from version 49 is available within a single driver library and can be selected for updating! mvDeviceConfigure however will always update the device firmware to the latest version. If you need to downgrade the firmware for any reason please get in contact with the MATRIX VISION support to get detailed instructions on how to do that.

So assume a device with a certain firmware version is connected to a host system:

Figure 17: A certain firmware version is connected to a host system

During an explicit firmware update, the firmware file from inside the driver will be downloaded onto the device. In
order to become active the device must be power-cycled:


Figure 18: The firmware file will be downloaded during a firmware update...

When then re-attaching the device to the host system, the new firmware version will become active:

Figure 19: ... after repowering the device it will be active

• The current firmware version of the device can be obtained either by using one of the applications which are part of the SDK, such as mvDeviceConfigure (p. 111), or by reading the value of the property Device/FirmwareVersion or Info/FirmwareVersion using the API

• The current FPGA file version used by the device can be obtained by reading the value of the property
Info/Camera/SensorFPGAVersion

Using wxPropView the same information is available as indicated by the following figure:


Figure 20: wxPropView - FPGA and Firmware version numbers

1.7.4 Settings behaviour during startup

A setting contains all the parameters that are needed to prepare and program the device for the image capture.
Every image can be captured with a completely different set of parameters. In almost every case, these parameters are accessible via a property offered by the device driver. A setting e.g. might contain

• the gain to be applied to the analogue to digital conversion process for analogue video sources or

• the AOI to be captured from the incoming image data.

So for the user a setting is the one and only place where all the necessary modifications can be applied to achieve the desired form of data acquisition.

Now, whenever a device is opened, the driver will execute the following procedure:


Figure 21: wxPropView - Device setting start procedure

• Please note that each setting location step in the figure above internally contains two search steps. First the framework will try to locate a setting with user scope, and if this can't be located, the same setting will be searched for with global (system-wide) scope. On Windows® this will access either the HKEY_CURRENT_USER or (in the second step) the HKEY_LOCAL_MACHINE branch in the Registry.

• Whenever storing a product specific setting, the device specific setting of the device used for storing will be
deleted (if existing). E.g. you have a device 'VD000001' which belongs to the product group 'VirtualDevice'
with a setting exclusively for 'VD000001'. As soon as you store a product specific setting, the (device specific)
setting for 'VD000001' will be deleted. Otherwise a product specific setting would never be loaded as a device
specific setting will always be found first.

• The very same thing will also happen when opening a device from any other application! wxPropView (p. 69)
does not behave in a special way but only acts as an arbitrary user application.

• Whenever storing a device family specific setting, the device specific or product specific setting of the device
used for storing will be deleted (if existing). See above to find out why.

• On Windows® the driver will not automatically look for a matching XML file during start-up, as the native storage location for settings is the Windows® Registry. An XML file must be loaded explicitly by the user by using the appropriate API function offered by the SDK. However, under Linux XML files are the only setting format understood by the driver framework, thus here the driver will also look for them at start-up. The device specific setting will be an XML file with the serial number of the device as the file name, the product specific setting will be an XML file with the product string as the file name, and the device family specific setting will be an XML file with the device family name as the file name. All other XML files containing settings will be ignored!

• Only the data contained in the lists displayed as "Image Setting", "Digital I/O" and "Device
Specific Data" under wxPropView (p. 69) will be stored in these settings!

• Restoring of settings previously stored works in a similar way. After a device has been opened the settings
will be loaded automatically as described above.


• A detailed description of the individual properties offered by a device will not be provided here but can be found in the C++ API reference, where descriptions for all properties relevant for the user (grouped together in classes sorted by topic) can be found. As wxPropView (p. 69) doesn't introduce new functionality but simply evaluates the list of features offered by the device driver and lists them, any modification made using the GUI controls just calls the underlying function needed to write to the selected component. wxPropView (p. 69) also doesn't have built-in knowledge about the type of a component or e.g. the list of allowed values for a property. This again is information delivered by the driver and can therefore be queried by the user as well without the need for special inside information. One version of the tool will always be delivered in source, so it can be used as a reference to find out how to get the desired information from the device driver.
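The lookup order described above (device specific before product specific before device family specific, each tried first with user scope and then with global scope) can be sketched as follows. This is an illustrative model only; the function name and the dictionary-based store are made up and are not part of the actual driver API:

```python
def find_setting(store, serial, product, family):
    """Return the first matching setting, mimicking the search order
    described above: device -> product -> family specific, each tried
    with user scope before global (system-wide) scope."""
    for name in (serial, product, family):
        for scope in ("user", "global"):
            if (scope, name) in store:
                return store[(scope, name)]
    return None  # no stored setting found -> driver defaults apply

# hypothetical stored settings: a product specific setting with global
# scope and a device specific setting with user scope
store = {("global", "VirtualDevice"): "product setting",
         ("user", "VD000001"): "device setting"}

# the device specific setting wins, which is why storing a product
# specific setting deletes the device specific one (see above)
print(find_setting(store, "VD000001", "VirtualDevice", "mvVirtualDevice"))
```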

1.8 Technical Data

1.8.1 Power supply

Symbol Comment Min Typ Max Unit


UUSBPOWER_IN  mvBlueFOX power supply via USB  4.75  5  5.25  V
IUSBPOWER_IN  (@ 5V / 40MHz)  -  280  500  mA
IUSBPOWER_IN  (Power Off Mode - only with mvBlueFOX-IGC / mvBlueFOX-MLC)  -  66  -  mA

1.8.2 Standard version (mvBlueFOX-xxx)

1.8.2.1 Dimensions and connectors

Figure 1: Connectors mvBlueFOX

mvBlueFOX
Size without lens (w x h x l) 38.8 x 38.8 x 58.5 mm (CCD version)
38.8 x 38.8 x 53.1 mm (CMOS version)
General tolerance DIN ISO 2768-1-m (middle)


Figure 2: Dimensional drawing of tripod adapter

1.8.2.1.1 D-Sub 9-pin (male)

Figure 3: D-Sub 9-pin (male), digital I/O

Pin Signal Description
1 IN0- Negative terminal of opto-isolated input (1)
2 OUT0- Negative terminal of opto-isolated output (emitter of npn-phototransistor)
3 OUT1- Negative terminal of opto-isolated output (emitter of npn-phototransistor)
4 IN1- Negative terminal of opto-isolated input (1)
5 N/C
6 IN0+ Positive terminal of opto-isolated input (1)
7 OUT0+ Positive terminal of opto-isolated output (collector of npn-phototransistor)
8 OUT1+ Positive terminal of opto-isolated output (collector of npn-phototransistor)
9 IN1+ Positive terminal of opto-isolated input (1)

(1) Voltage between + and - may be up to 26V; input current is 17mA.

1.8.2.1.1.1 Characteristics of the digital inputs Open inputs will be read as a logic zero.

When the input voltage rises above the trigger level, the input will deliver a logic one.

Symbol  Comment  Min.  Std.  Max.  Unit
UIN_TTL  High level input voltage, TTL logic  3  5  6.5  V
UIN_TTL  Low level input voltage, TTL logic  -0.7  -  1  V
IIN_TTL  Current, TTL logic  8.5  -  12  mA
UIN_PLC  High level input voltage, PLC logic  12  24  -  V
UIN_PLC  Low level input voltage, PLC logic  -0.7  -  8  V
IIN_PLC  Current, PLC logic  17  -  25  mA


Figure 4: DigIn mvBlueFOX-xxx

In wxPropView (p. 69) you can change between

• TTL ("DigitalInputThreshold = 2V") and

• PLC ("DigitalInputThreshold = 10V")

input behavior of the digital inputs using the DigitalInputThreshold property in "Digital I/O -> DigitalInputThreshold":

Figure 5: wxPropView - DigitalInputThreshold
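Following the rule stated above (an input reads logic one once its voltage rises above the selected trigger level), the effect of the DigitalInputThreshold property can be sketched like this. The function is purely illustrative, not part of the driver API:

```python
def digital_input_state(u_in_volts, threshold="TTL"):
    """Logic level seen at a digital input for a given input voltage,
    depending on the selected threshold (TTL -> 2 V, PLC -> 10 V),
    as described above."""
    trigger = {"TTL": 2.0, "PLC": 10.0}[threshold]
    return 1 if u_in_volts > trigger else 0

print(digital_input_state(5.0, "TTL"))  # 1: 5 V is above the 2 V TTL level
print(digital_input_state(5.0, "PLC"))  # 0: 5 V is below the 10 V PLC level
```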

1.8.2.1.1.2 Characteristics of the digital outputs

Umin [V]  Umax [V]  Imin [mA]  Imax [mA]
Output  -  30  -  100 (on state current)


Figure 6: DigOut mvBlueFOX-xxx

1.8.2.1.1.3 Connecting flash to digital output You can connect a flash in series with the digital outputs as shown in the following figure; however, you should only use LEDs together with a current limiter:

Figure 7: Connecting flash (LEDs) to DIG OUT
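As a sizing sketch for such a current limiter (all numbers below are assumptions for illustration, not values from this manual): with a 24 V supply, four LEDs of 3.0 V forward voltage each in series, and a target current of 20 mA, the series resistor follows from Ohm's law:

```python
def series_resistor(v_supply, v_forward, n_leds, i_led_a):
    """Classic series current limiter: R = (V_supply - n * V_f) / I."""
    v_drop = v_supply - n_leds * v_forward  # voltage left across the resistor
    if v_drop <= 0:
        raise ValueError("supply voltage too low for this LED chain")
    return v_drop / i_led_a

# assumed example values: 24 V supply, 4 LEDs x 3.0 V, 20 mA target current
print(series_resistor(24.0, 3.0, 4, 0.020))  # 600.0 (Ohm)
```

Make sure the resulting current stays within the output's on-state current limit given in the table above.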

1.8.2.1.2 USB connector, type B (USB 2.0)

Figure 8: USB B connector (female)

Pin Signal
1 USBPOWER_IN
2 D-
3 D+
4 GND
Shell shield


Note

The mvBlueFOX is a USB device!

Attention

Do not connect both USB ports at the same time.

1.8.2.1.3 4-pin circular plug-in connector with lock (USB 2.0)

Figure 9: 4-pin circular plug-in connector (female)

Pin Signal 'R' version Signal 'U' version


1 USBPOWER_IN Power out from USB
2 D+ not connected
3 GND GND
4 D- not connected

Manufacturer: Binder
Part number: 99-3390-282-04

Note

Differentiation between the 'R' and 'U' version is obsolete. New mvBlueFOX versions have both connectors (circular connector and standard USB). The pin assignment corresponds to the description of the 'R' version.
While the mvBlueFOX is connected and powered via standard USB, it is possible to connect additional power via the circular connector (power only; the data lines must be disconnected!). Only in this case will the power switch change over to the circular connector's supply, if the supply via standard USB drops to or below the level of the circular connector's supply.

Attention

Do not connect both USB ports at the same time.

1.8.2.2 LED states

State LED
Camera is not connected or defective LED off
Camera is connected and active Green light on


1.8.2.3 Components

• FPGA for image processing

• pixel clock up to 40 MHz

• reliable image transfer

– using bulk-mode

– image data surrounded by headers

• several trigger modes

– auto, SW, external

• flash control output

– using opto-isolated outputs

• opto-isolated I/O

– 2 inputs, 2 outputs on D-Sub 9 connector

• bus powered

– no external power supply needed

• two USB connectors

– standard USB or circular plug-in connector 4 pin locked

• ambient temperature operation: 0..45 deg C / 30..80 % RH

• ambient temperature storage: -20..60 deg C / 20..90 % RH

Additional features of mvBlueFOX-2xx:

• 8 Mega-pixel image memory (FIFO)

• new ADC

• 10 Bit mode


1.8.3 Board-level version (mvBlueFOX-Mxxx)

1.8.3.1 Dimensions and connectors

Figure 10: mvBlueFOX-M12x (CCD) with C-mount

Figure 11: mvBlueFOX-M10x (CMOS)

Lens mount  Type "FB" [mm]
C-Mount  17.526
CS-Mount  12.526

Figure 12: Backside view of the board

Note

The mvBlueFOX-M has a serial I2C bus EEPROM with 64 Kbit, of which 512 bytes can be used to store arbitrary custom data.

See also

UserDataEntry class description

1.8.3.1.1 4-pin Wire-to-Board header (USB 2.0)

Manufacturer: JST
Part number: B4B-PH-K

Pin Signal Comment Cable
1 USBPOWER_IN Supply voltage red
2 USB_DATA- Data white
3 USB_DATA+ Data green
4 GND Ground black

1.8.3.1.2 12-pin Wire-to-Board header (Dig I/O)

Manufacturer: JST
Part number: B12B-PH-K

Pin Signal Comment
1 FPGA_IO0 Digital In 0
2 FPGA_IO1 Digital In 1
3 FPGA_IO2 Digital In 2
4 FPGA_IO3 Digital In 3
5 FPGA_IO4 Digital Out 0
6 FPGA_IO5 Digital Out 1
7 FPGA_IO6 Digital Out 2
8 FPGA_IO7 Digital Out 3
9 MAINPOWER Current from the USB cable
10 GND Ground
11 VCC24V 24 V output (10mA)
12 GND Ground

Attention

Do not connect Dig I/O signals to the FPGA pins until the mvBlueFOX-M has been started and configured.
Otherwise, you will risk damaging the mvBlueFOX-M hardware!

See also

High-Speed USB design guidelines (p. 11)

1.8.3.1.3 Contact

Figure 13: Contact, dimensions in mm (in.)

Applicable wire: 0.05 to 0.22 mm2 (AWG #30 to #24), insulation O.D. 0.9 to 1.5 mm (.035 to .059 in.)
Q'ty / reel: 8,000

Material and finish: phosphor bronze, tin-plated


Manufacturer: JST
Part number: SPH-002T-P0.5S

1.8.3.1.4 Housing

Figure 14: Housing, dimensions in mm (in.)

Circuits  A mm (in.)  B mm (in.)  Q'ty / box
4   6.0 (.236)   9.8 (.386)    1,000
12  22.0 (.866)  25.8 (1.016)  1,000

Material and finish: nylon 66, UL94V-0, natural (white)


Manufacturer: JST
Part number: PHR-4 / PHR-12

See also

Suitable assembled cable accessories for mvBlueFOX-M: What's inside and accessories (p. 19)

1.8.3.1.5 Characteristics of the mvBlueFOX-Mxxx digital I/Os

1.8.3.1.5.1 Dig I/O max. values

Symbol  Comment  Min  Max  Unit
UDIG_IN  Input voltage  -0.3  3.6  V

1.8.3.1.5.2 Characteristics of the digital inputs

Symbol  Comment  Min  Nom  Max  Unit
UDIG_IN_LOW  low level input voltage (IIN = 1.67mA)  -0.3  0  0.9  V
UDIG_IN_HIGH  high level input voltage (IIN = 1.67mA)  2.2  3.3  3.6  V
IIN  input current (@ 3.3V)  0.4  -  1.7  mA

Figure 15: Digital input mvBlueFOX-Mxxx

1.8.3.1.5.3 Characteristics of the digital outputs

Symbol  Comment  Min  Nom  Max  Unit
IDIG_OUT  current at digital output  -  +-12  +-24  mA
UDIG_OUT_HIGH  digital output (IOUT = 12mA)  1.6  -  -  V
UDIG_OUT_HIGH  digital output (IOUT < 2mA)  2.6  -  3.4  V
UDIG_OUT_LOW  digital output (IOUT = 2mA)  -  -  0.2  V

UDIG_OUT_HIGH(min) = 2.8 V - IOUT * 100 Ohm

Figure 16: Digital output mvBlueFOX-Mxxx
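The minimum high-level output voltage formula above can be checked against the table values; the arithmetic below simply evaluates UDIG_OUT_HIGH(min) = 2.8 V - IOUT * 100 Ohm:

```python
def u_dig_out_high_min(i_out_a):
    """Minimum high-level output voltage in volts for a given output
    current in amperes, per UDIG_OUT_HIGH(min) = 2.8 V - IOUT * 100 Ohm."""
    return 2.8 - i_out_a * 100

print(round(u_dig_out_high_min(0.012), 3))  # 1.6, matching the 12 mA row
print(round(u_dig_out_high_min(0.002), 3))  # 2.6, matching the < 2 mA row
```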

Attention

The Dig I/O are connected directly via a resistor to the FPGA pins and therefore they are not protected. For
this reason, an application has to provide a protection circuit to the digital I/O of mvBlueFOX-M.

Note

The Dig I/O characteristics of the mvBlueFOX-M are not compatible with the Dig I/O of the mvBlueFOX standard version.

1.8.3.2 LED states

State LED
Camera is not connected or defective LED off
Camera is connected and active Green light on

1.8.3.3 Components

• 8 Mpixels image memory

1.8.3.4 Accessories mvBlueFOX-Mxxx

1.8.3.4.1 mvBlueFOX-M-FC-S The mvBF-M-FC-S contains high-capacity capacitors with switching electronics for transferring the energy stored in the capacitors to external flash LEDs. It is possible to connect 2 pushbuttons/switches to the 8-pin header (CON3 - Control connector). Additionally, 2 LED interfaces are available. There are two versions of the mvBF-M-FC-S:

• Model 1 can be connected to mvBlueFOX-M with a cable via CON5.

• Model 2 can be mounted on the mvBlueFOX-M via CON1 directly.


Figure 17: Model 1 with CON5 connector

Figure 18: Model 2 with CON1 connector

1.8.3.4.1.1 CON2 - Flash connector

Manufacturer: JST
Part number: B-2B-PH

Pin Signal Comment
1 Flash + Flash power
2 Flash - Switched to ground (low side switch)

1.8.3.4.1.2 CON3 - Control connector

Manufacturer: JST
Part number: B-8B-PH-SM4 TB

Pin Signal Comment
1 GND LED2 cathode connector / board ground
2 LED2 output LED2 anode connector (1)
3 GND LED1 cathode connector / board ground
4 LED1 output LED1 anode connector
5 GND Board ground
6 Input2 Switch to ground for setting Input2
7 GND Board ground
8 Input1 Switch to ground for setting Input1

1.8.3.4.1.3 Electrical characteristics

Signal  Parameter  Min  Typ  Max  Unit
GND  Board ground  -  0  -  V
LED 1/2 output (anode)  Output voltage (1)  -  5  -  V
LED 1/2 output (anode)  Internal series resistance  465.3  470  474.4  Ohm
LED 1/2 output (anode)  Forward current IF at ULED = 2V  1  -  6  mA
Input 1/2 (internal 10k pull-up to 3.3V)  Voltage (open contact)  -  3.3  -  V
Input 1/2 (internal 10k pull-up to 3.3V)  VIL (low level input voltage)  -  -  0.9  V
Input 1/2 (internal 10k pull-up to 3.3V)  VIH (high level input voltage)  2.5  -  5.5  V
Flash +  Voltage (open contact)  23  24  25  V
Flash +  Flash output capacitance  528  660  792  uF
Flash +  Internal capacitance storage energy  -  0.190  -  Ws
Flash +  Flash capacitance charge current / output DC current  -  -  20  mA
Flash - (2)  IOUT  -  -  -2  A
Flash - (2)  On voltage at IOUT max  -  -  0.15  V
Flash - (2)  Off voltage  23  24  25  V

(1) Depends on mvBlueFOX-M power supply
(2) Attention: No over-current protection!


Figure 19: CON3 schematic

1.8.4 Single-board version (mvBlueFOX-MLC2xx)

1.8.4.1 Typical Power consumption @ 5V

Model Power consumption (+/- 10%) Unit


-200w 1.09 W
-202a 1.39 W
-202b 1.58 W
-202d 1.28 W
-205 1.37 W
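Since the table gives power at a 5 V supply, the corresponding supply current is simply I = P / U. The conversion below uses the -200w value from the table and ignores the +/-10% tolerance:

```python
def supply_current_ma(power_w, voltage_v=5.0):
    """Supply current in mA for a given power consumption, I = P / U."""
    return power_w / voltage_v * 1000

print(round(supply_current_ma(1.09)))  # 218 mA for the -200w model
```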

1.8.4.2 Dimensions and connectors

Figure 20: mvBlueFOX-MLC (without S-mount)

Note

The mvBlueFOX-MLC has a serial I2C bus EEPROM with 16 KByte, of which 8 KByte are reserved for the firmware and 8 KByte can be used to store arbitrary custom data.

See also

UserDataEntry class description


1.8.4.2.1 Sensor's optical midpoint and orientation The sensor's optical midpoint is in the center of the board (Figure 21: intersection point of the hole diagonals). The (0,0) coordinate of the sensor is located at the bottom left corner of the sensor (please note that the Mini-B USB connector is located at the bottom on the back).

Note

Using a lens, the (0,0) coordinate will be mirrored and will be shown at the top left corner of the screen as
usual!

Figure 21: Sensor's optical midpoint and orientation

1.8.4.2.2 Mini-B USB (USB 2.0)

Figure 22: Mini-B USB

Pin Signal Comment


1 USBPOWER_IN Supply voltage
2 USB_DATA- Data
3 USB_DATA+ Data
4 ID Not connected
5 GND Ground

1.8.4.2.3 12-pin Wire-to-Board header (USB 2.0 / Dig I/O)

Note

If you have the mvBlueFOX-MLC variant which uses the standard Mini-B USB connector, pin 2 and 3 (USB←-
_DATA+ / USB_DATA-) of the header won't be connected!


Pin  Opto-isolated variant  TTL compliant variant  LVTTL compliant variant (only available for mvBlueFOX-MLC202aG)
1    GND (Ground)  GND (Ground)  GND (Ground)
2    USB_DATA+ (Data)  USB_DATA+ (Data)  USB_DATA+ (Data)
3    USB_DATA- (Data)  USB_DATA- (Data)  USB_DATA- (Data)
4    USBPOWER_IN (Supply voltage)  USBPOWER_IN (Supply voltage)  USBPOWER_IN (Supply voltage)
5    I2C SDA (Serial data line)  I2C SDA (Serial data line)  I2C SDA (Serial data line)
6    I2C SCL (Serial clock line)  I2C SCL (Serial clock line)  I2C SCL (Serial clock line)
7    USBPOWER_IN (Supply voltage)  USBPOWER_IN (Supply voltage)  USBPOWER_IN (Supply voltage)
8    GND (Ground)  GND (Ground)  GND (Ground)
9    OUT0- (Opto-isolated digital output 0, negative voltage)  OUT1 (TTL compliant digital output 1)  N.C. (Not connected)
10   OUT0+ (Opto-isolated digital output 0, positive voltage)  OUT0 (TTL compliant digital output 0)  IN2 (LVTTL compliant digital input 2)
11   IN0- (Opto-isolated digital input 0, negative voltage)  IN1 (TTL compliant digital input 1)  IN1 (LVTTL compliant digital input 1)
12   IN0+ (Opto-isolated digital input 0, positive voltage)  IN0 (TTL compliant digital input 0)  IN0 (LVTTL compliant digital input 0)

The I2C interface is master-only, which means that I2C slaves can only be connected externally.

Note

The I2C bus uses 3.3 V. The signals have 2 kOhm pull-up resistors. Access to the I2C bus from an application is
possible for mvBlueFOX-MLC devices using an mvBlueFOX driver with version 1.12.44 or newer.

Manufacturer (suitable board-to-wire connector): Molex

Part number: 0510211200 1.25mm Housing
Link: http://www.molex.com/molex/products/datasheet.jsp?part=active/0510211200_CRIMP_HOUSINGS.xml&channel=Products&Lang=en-US

Manufacturer (multi-pin connector for board-to-board connection): e.g. Garry
Link: http://www.mpe-connector.de/index.php?lang=de&menu=16&mating=1841&id_product=6591 (recommended variant: 659-1-012-O-F-RS0-xxxx; xxxx = length of the pins)

See also

Suitable assembled cable accessories for mvBlueFOX-MLC: What's inside and accessories (p. 19)
High-Speed USB design guidelines (p. 11)
More information about the usage of retrofittable ferrite (p. 14)

1.8.4.2.3.1 Electrical characteristic Digital inputs TTL

Figure 23: TTL digital inputs block diagram

Note

If the digital input is not connected, the state of the input will be "1" (as you can see in wxPropView (p. 69)).


TTL compliant variant

       Comment      Min    Typ   Max    Unit
IIN    ILOW (INx)                -0.5   mA
UIN    VIH          3.6          5.5    V
       VIL          -0.3         1.3    V

LVTTL compliant variant

       Comment      Min    Typ   Max    Unit
IIN    ILOW (INx)                -0.5   mA
UIN    VIH          2            3.8    V
       VIL          -0.3         0.8    V

TTL input low level / high level time: Typ. < 210ns

Digital outputs TTL

Figure 24: TTL digital outputs block diagram

       Comment              Min    Typ   Max    Unit

IOUT   Dig_out power                     ±32    mA
UOUT   VOH (IOUT = 32 mA)   3.8                 V
       VOH                               5.25
       VOL (IOUT = 32 mA)                0.55   V
       VOL                        0.1

TTL output low level / high level time: Typ. < 40ns

Opto-isolated digital inputs


Figure 25: Opto-isolated digital inputs block diagram with example circuit

Delay

Characteristics Symbol Typ. Unit


Turn-On time tON 3 us

The inputs can be connected directly to +3.3 V and 5 V systems. If a higher voltage is used, an external resistor
must be placed in series (Figure 25).

Used input voltage    External series resistor


3.3 V .. 5 V          none
12 V                  680 Ohm
24 V                  2 kOhm
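The values in this table follow from Ohm's law; the sketch below illustrates the calculation. The internal voltage drop and the target input current used here are illustrative assumptions, not values taken from this manual:

```python
def series_resistor_ohms(v_supply, v_drop=3.3, i_target=0.010):
    """Approximate external series resistor for an opto-isolated input.

    v_drop:   assumed voltage already dropped across the input circuit (V)
    i_target: assumed target input current (A); both defaults are
              illustrative assumptions only
    """
    if v_supply <= 5.0:
        return 0.0  # 3.3 V .. 5 V systems can be connected directly
    return (v_supply - v_drop) / i_target

# With these assumptions the results land near the table entries:
# 12 V -> ~870 Ohm (table: 680 Ohm), 24 V -> ~2070 Ohm (table: 2 kOhm)
```

Always prefer the values from the table above; the function merely shows where the order of magnitude comes from.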

       Comment   Min    Typ   Max    Unit


UIN    VIH       3            5.5    V
       VIL       -5.5         0.8    V

Opto-isolated digital outputs

Figure 26: Opto-isolated digital outputs block diagram with example circuit

Delay


Figure 27: Output switching times

Characteristics   Symbol   Test conditions                         Typ.   Unit


Turn-On time      tON      RL = 100 Ohm, VCC = 10 V, IC = 2 mA     3      us
Storage time      tS                                               3
Turn-Off time     tOFF                                             3
Turn-On time      tON      RL = 1.9 kOhm, VCC = 5 V, IC = 16 mA    2      us
Storage time      tS                                               25
Turn-Off time     tOFF                                             40

       Comment            Min    Typ     Max    Unit


Ion    load current                      15     mA
Ioff   leakage current                   10     uA
Von    Sat. at 2.4 mA     0      (0.2)   0.4    V
Voff                                     30     V

1.8.4.3 LED states

State LED
Camera is not connected or defective LED off
Camera is connected but not initialized or in "Power off" mode. Orange light on
Camera is connected and active Green light on

1.8.4.4 Assembly variants

The mvBlueFOX-MLC is available with the following differences:

• Mini-B USB connector and digital I/O pin header

– 1/1 opto-isolated or 2/2 TTL compliant digital I/O

• USB via header without Mini-B USB connector

• female board connector instead of pin header (board-to-board connection)

• 3 different S-mount depths


• C(S)-mount compatibility using mvBlueCOUGAR-X flange

• ambient temperature operation: 5..55 deg C / 30..80% RH

• ambient temperature storage: -25..60 deg C / 20..90% RH

1.8.5 Single-board version with housing (mvBlueFOX-IGC2xx)

1.8.5.1 Dimensions and connectors

Figure 28: mvBlueFOX-IGC

Lens protrusion C-Mount CS-Mount


X 10 mm 5 mm

Figure 29: mvBlueFOX-IGC-3xxx with adjustable backfocus

Lens protrusion C-Mount


X 8 mm (9.5 mm with max. Ø 20 mm)

Note

The mvBlueFOX-IGC has a serial I2C bus EEPROM with 16 KByte, of which 8 KByte are reserved for the
firmware and 8 KByte can be used to store arbitrary custom data.

See also

UserDataEntry class description


1.8.5.1.1 Mini-B USB (USB 2.0)

Figure 30: Mini-B USB

Pin Signal Comment


1 USBPOWER_IN Supply voltage
2 USB_DATA- Data
3 USB_DATA+ Data
4 ID Not connected
5 GND Ground

1.8.5.1.2 4-pin circular plug-in connector with lock (I/O)

Figure 31: 4-pin circular plug-in connector (female)

Pin Signal Comment Color (of cable)


1 IN0 + Opto-isolated digital input 0 (Positive voltage) brown
2 IN0 - Opto-isolated digital input 0 (Negative voltage) white
3 OUT0 + Opto-isolated digital output 0 (Positive voltage) blue
4 OUT0 - Opto-isolated digital output 0 (Negative voltage) black

Manufacturer: Binder
Part number: 79-3107-52-04

1.8.5.1.2.1 Electrical characteristic Please have a look at the mvBlueFOX-MLC digital I/O characteristics
(opto-isolated model) of the 12-pin Wire-to-Board header (USB / Dig I/O) (p. 53).

1.8.5.2 LED states

State LED
Camera is not connected or defective LED off
Camera is connected but not initialized or in "Power off" mode. Orange light on
Camera is connected and active Green light on

1.8.5.3 Positioning tolerances of sensor chip

The sensor's optical midpoint is in the center of the housing. However, several positioning tolerances in relation to
the housing are possible because of:

• Tolerance of mounting holes of the printed circuit board in relation to the edge of the lens holder housing is
not specified but produced according to general tolerance DIN ISO 2768 T1 fine.

• Tolerance of mounting holes on the printed circuit board due to the clearance of the holes: ± 0.1 mm (Figure
32; 2).

• Tolerance between conductive pattern and mounting holes on the printed circuit board.
Because there is no defined tolerance between conductive pattern and mounting holes, the general defined
tolerance of ± 0.1 mm is valid (Figure 32; 1 in the Y-direction ± 0.1 mm; 3 in the Z-direction ± 0.1 mm)

There are further sensor specific tolerances, e.g. for model mvBlueFOX-IGC200wG:

• Tolerance between sensor chip MT9V034 (die) and its package (connection pad)

– Chip position in relation to the mechanical center of the package: 0.2 mm (± 0.1mm) in the X- and
Y-direction (dimensions in the sensor data sheet according to ISO 1101)

• Tolerance between the copper width of the sensor package and the pad width of the printed circuit board.
During soldering the sensor can float towards the edge of the pad: width of the pad 0.4 mm (possible tolerance
not considered), width of pin at least 0.35 mm, max. offset: ± 0.025 mm

Further specific tolerances of other models on request.

Figure 32: Positioning tolerances of sensor chip

Note

There are also tolerances in the lens which can lead to optical offsets.


1.9 Sensor Overview

By default, the exposure and readout steps of an image sensor are done one after the other. By design, CCD
sensors support overlapped operation, also in combination with trigger (see figure). In contrast, only so-called
pipelined CMOS sensors support the overlapped mode, and even fewer CMOS sensors support the overlapped
mode combined with trigger. Please check the sensor summary (p. 63). In overlapped mode, the next exposure
starts during the readout of the current frame, shifted earlier by the exposure time.

Figure 1: Overlapping / pipelined exposure and readout
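The effect of overlapped operation on the frame period can be illustrated with a small calculation; the exposure and readout times below are arbitrary example values, not sensor specifications:

```python
def frame_period_ms(exposure_ms, readout_ms, overlapped):
    """Achievable frame period in milliseconds.

    Sequential mode: exposure and readout run one after the other.
    Overlapped (pipelined) mode: the next exposure runs during the
    current readout, so the longer of the two phases dominates.
    """
    if overlapped:
        return max(exposure_ms, readout_ms)
    return exposure_ms + readout_ms

# Example: 10 ms exposure, 25 ms readout
print(frame_period_ms(10, 25, overlapped=False))  # 35 -> ~28.6 fps
print(frame_period_ms(10, 25, overlapped=True))   # 25 -> 40 fps
```

Note that once the exposure time exceeds the readout time, the exposure dominates the frame period even in overlapped mode.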

1.9.1 CCD sensors

The CCD sensors are highly programmable imager modules which incorporate the following features:

(Values are listed in the order: -220 | -220a | -221 | -223 | -224.)

Sensors: 0.3 Mpixels resolution CCD sensor (-220) | 0.3 Mpixels resolution CCD sensor (-220a) | 0.8 Mpixels resolution CCD sensor (-221) | 1.4 Mpixels resolution CCD sensor (-223) | 1.9 Mpixels resolution CCD sensor (-224)
Sensor supplier: Sony (all models)
Sensor name: ICX098 AL/BL | ICX424 AL/AQ | ICX204 AL/AQ | ICX267 AL/AQ | ICX274 AL/AQ
Resolution (gray scale or RGB Bayer mosaic): 640 x 480 | 640 x 480 | 1024 x 768 | 1360 x 1024 | 1600 x 1200
Sensor format: 1/4" | 1/3" | 1/3" | 1/2" | 1/1.8"
Pixel clock: 12 MHz / 24 MHz | 20 MHz / 40 MHz | 20 MHz / 40 MHz | tbd / 40 MHz | 20 MHz / 40 MHz
Max. frames per second: 60 | 100 | 39 (1) | 20 | 16
Binning: H+V (all models)
Exposure time: 44 us - 10 s | 26 us - 10 s | 44 us - 10 s | 33 us - 10 s | 30 us - 10 s
ADC (on sensor board) resolution: 12 bit (up to 10 bit transmission), all models
Trigger (Hardware / Software): X / X (all models)
Pipelined in continuous / triggered mode: X / - (all models)

All CCD models additionally offer:

• Programmable analog gain and offset
• Frame integrating progressive scan sensor (no interlace problems!)
• High resolution
• High color reproductivity (for color versions)
• High sensitivity, low dark current
• Continuous variable-speed shutter
• Low smear
• Excellent antiblooming characteristics
• Programmable exposure time from usec to sec.
• Programmable readout timing with free capture windows and partial scan
• Flash control output, synchronous to exposure period


More specific data: mvBlueFOX-[Model]220 (0.3 Mpix [640 x 480]) (p. 177) | mvBlueFOX-[Model]220a (0.3 Mpix [640 x 480]) (p. 182) | mvBlueFOX-[Model]221 (0.8 Mpix [1024 x 768]) (p. 187) | mvBlueFOX-[Model]223 (1.4 Mpix [1360 x 1024]) (p. 191) | mvBlueFOX-[Model]224 (1.9 Mpix [1600 x 1200]) (p. 196)

(1) With max. frame rate, image quality losses might occur.

1.9.2 CMOS sensors

The CMOS sensor modules incorporate the following features:

(Values are listed in the order: -200w | -202a | -x02b | -202d | -205; the -x02b and -202d sensors are only available for -MLC/-IGC models.)

Sensors: 0.4 Mpixels resolution CMOS sensor (-200w) | 1.3 Mpixels resolution CMOS sensor (-202a) | 1.2 Mpixels resolution CMOS sensor (-x02b) (1) | 1.2 Mpixels resolution CMOS sensor (-202d) (1) | 5.0 Mpixels resolution CMOS sensor (-205)
Sensor supplier: Aptina (all models)
Sensor name: MT9V034 | MT9M001 | MT9M021 | MT9M034 | MT9P031
Resolution: 752 x 480 (gray scale or RGB Bayer mosaic) | 1280 x 1024 (gray scale) | 1280 x 960 (gray scale or RGB Bayer mosaic) | 1280 x 960 (gray scale or RGB Bayer mosaic) | 2592 x 1944 (gray scale or RGB Bayer mosaic)
Indication of sensor category to be used: 1/3" | 1/2" | 1/3" | 1/3" | 1/2.5"
Pixel clock: 40 MHz (all models)
Max. frames per second (in free-running full frame mode): 93 (2) | 25 | 25 | 25 | 5.8
Binning: H+V (frame rate 170 Hz) | H+V, AverageH+V (frame rate unchanged) | H+V, AverageH+V (frame rate unchanged) | H+V, AverageH+V (frame rate unchanged) | H+V, 3H+3V, AverageH+V, Average3H+3V, DroppingH+V, Dropping3H+3V (frame rate 22.7 Hz)
Exposure time: 6 us - 4 s | 100 us - 10 s | 10 us - 4 s | 10 us - 4 s | 10 us - 10 s
ADC resolution: 10 bit (10 / 8 bit transmission), all models
SNR: 42 dB | 40 dB | < 43 dB | < 43 dB | 37.4 dB
DR (normal / HDR (p. 151)): 55 dB / > 110 dB | 61 dB / - | > 61 dB | > 61 dB | > 65 dB / 110 dB (with gray scale version)
Progressive scan sensor (no interlace problems!): X | X | X | X | X
Rolling shutter: - | X | - | X | X
Global shutter: X | - | X | - | X
Trigger (Hardware / Software): X / X | X / - | X / - | X / - | X / X
Pipelined in continuous / triggered mode: X / - | X / - | X / - | X / - | X / - (reset only)
High color reproductivity (for color version): X | no | no | no | no
Programmable readout timing with free capture windows and partial scan: X | X | X | X | X
Flash control output, synchronous to exposure period: X | no | no | no | no
More specific data: mvBlueFOX-[Model]200w (0.4 Mpix [752 x 480]) (p. 201) | mvBlueFOX-[Model]202a (1.3 Mpix [1280 x 1024]) (p. 204) | mvBlueFOX-[Model]202b (1.2 Mpix [1280 x 960]) (p. 207) | mvBlueFOX-[Model]202d (1.2 Mpix [1280 x 960]) (p. 210) | mvBlueFOX-[Model]205 (5.0 Mpix [2592 x 1944]) (p. 214)

(1) The operation in device specific AEC/AGC mode is limited in (non-continuous) triggered modes. AEC/AGC
only works while the trigger signal is active. When the trigger signal is removed, AEC/AGC stops and gain and
exposure are set to a static value. This is due to a limitation of the sensor chip.
(2) The frame rate increases with reduced AOI width, but only while the width is >= 560 pixels; below that, the
frame rate remains unchanged.

Note

For further information about rolling shutter, please have a look at the practical report about rolling shutter
on our website: https://www.matrix-vision.com/tl_files/mv11/Glossary/art_rolling_shutter_en.pdf
For further information about image errors of image sensors, please have a look at Correcting image errors
of a sensor (p. 131).

1.9.3 Output sequence of color sensors (RGB Bayer)


Figure 2: Output sequence of RAW data

1.9.4 Bilinear interpolation of color sensors (RGB Bayer)

For Bayer demosaicing in the camera, we use bilinear interpolation:

Figure 3: Bilinear interpolation

1. Interpolation of green pixels: the average of the upper, lower, left and right pixel values is assigned as the G
value of the interpolated pixel.
For example:

(G3+G7+G9+G13)
G8 = --------------
4

For G7:

(G1+G3+G11+G13)
G7_new = 0.5 * G7 + 0.5 * ---------------
4

2. Interpolation of red/blue pixels:


Interpolation of a red/blue pixel at a green position: the average of two adjacent pixel values in corresponding
color is assigned to the interpolated pixel.
For example:

(B6+B8) (R2+R12)
B7 = ------- ; R7 = --------
2 2

Interpolation of a red/blue pixel at a blue/red position: the average of four adjacent diagonal pixel values is
assigned to the interpolated pixel.
For example:

(R2+R4+R12+R14) (B6+B8+B16+B18)
R8 = --------------- ; B12 = ---------------
4 4

Any colored edge which might appear is due to Bayer false color artifacts.
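The interpolation rules above can be expressed as small helper functions. This is a minimal sketch of the formulas as written, not the camera's actual firmware implementation:

```python
def green_at_red_or_blue(up, down, left, right):
    # G at a red/blue site: average of the four direct neighbours,
    # e.g. G8 = (G3 + G7 + G9 + G13) / 4
    return (up + down + left + right) / 4.0

def red_or_blue_at_green(a, b):
    # R or B at a green site: average of the two adjacent pixels of
    # the corresponding colour, e.g. B7 = (B6 + B8) / 2
    return (a + b) / 2.0

def red_at_blue_or_blue_at_red(d1, d2, d3, d4):
    # R at a blue site (or B at a red site): average of the four
    # diagonal neighbours, e.g. R8 = (R2 + R4 + R12 + R14) / 4
    return (d1 + d2 + d3 + d4) / 4.0
```

Applying these three rules at every pixel of the raw Bayer image yields the full-colour result described above.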

Note

There are more advanced and adaptive methods (like edge sensitive ones) available if the host is doing this
debayering.


1.10 Filters

MATRIX VISION offers two filters for the mvBlueFOX camera. The IR filter (p. 68) is part of the standard delivery
condition.

1.10.1 Hot mirror filter

The hot mirror filter FILTER IR-CUT 15,5X1,75 FE has a high transmission in the visible spectrum and blocks out a
significant portion of the IR energy.

Technical data
Diameter 15.5 mm
Thickness 1.75 mm
Material Borofloat
Characteristics T = 50% @ 650 +/- 10 nm
T > 92% 390-620 nm
Ravg > 95% 700-1150 nm
AOI = 0 degrees
Surface quality Polished on both sides
Surface irregularity 5/3x0.06 scratch/digs on both sides
Edges cut without bezel

Figure 1: FILTER IR-CUT 15,5X1,75 FE wavelengths and transmission diagram

1.10.2 Cold mirror filter

The FILTER DL-CUT 15,5X1,5 is a high-quality daylight cut filter with optically polished surfaces. The polished
surface allows the use of the filter directly in the path of rays in image processing applications. The filter is protected
against scratches during transport by a protective film that has to be removed before installing the filter.

Technical data
Diameter 15.5 mm
Thickness 1.5 +/- 0.2 mm
Material Solaris S 306
Characteristics Tavg > 80% > 780 nm


AOI = 0 degrees
Protective foil on both sides
Without antireflexion
Without bezel

Figure 2: FILTER DL-CUT 15,5X1,5 wavelengths and transmission diagram

Note

For further information on how to change the filter, please have a look at our website:
http://www.matrix-vision.com/tl_files/mv11/Glossary/art_optical_filter_en.pdf

1.10.3 Glass filter

It is also possible to choose the glass filter "FILTER GLASS 15,5X1,75" with the following characteristics:

Technical data
Glass thickness 1.75 mm
Material Borofloat without coating
ground with protection chamfer
Surface quality polished on both sides P4
Surface irregularity 5/3x0.06 on both sides

1.11 Application Usage

1.11.1 wxPropView

wxPropView (p. 69) is an interactive GUI tool to acquire images, configure the device, and display and modify
the device properties of MATRIX VISION GmbH hardware. After the installation you can find wxPropView
(p. 69)


• as an icon with the name "wxPropView" on the desktop (Windows) or

• in "∼/mvimpact-acquire/apps/mvPropView/x86" (Linux).

wxPropView - Introduction:
https://www.matrix-vision.com/tl_files/mv11/trainings/wxPropView/wxPropView_Introduction/index.html

1.11.1.1 How to work with wxPropView

wxPropView - Working with wxPropView:


https://www.matrix-vision.com/tl_files/mv11/trainings/wxPropView/wxPropView_WorkingWith/index.html

Depending on the driver version, wxPropView starts with the Quick Setup Wizard (p. 70) (as soon as a camera
with the right firmware version is selected or a single camera with the right firmware is found) or without
it (p. 73).

1.11.1.1.1 Quick Setup Wizard

Since

mvIMPACT Acquire 2.11.3

The Quick Setup Wizard is a compact yet powerful single-window configuration tool. It optimizes the image quality
automatically, lets you manually set the most important parameters affecting image quality in an easy way,
and shows a preview of these changes. Settings are accepted by clicking OK; otherwise the changes are discarded.


Figure 1: Quick Setup Wizard started

Depending on the camera spectrum (gray or color sensor), it will automatically pre-set the camera so that the
image quality is usually as good as possible.

"For all cameras:"


Image format is chosen as 10 bit (if possible) as a good compromise between image quality and speed.
It will further set

• "Exposure" to Auto,

• "Gain" to Auto,

• "Frame rate" to Auto based on current settings of the camera, and

• switches camera into continuous mode

"In case of gray:"


The above settings will also be applied whenever the "Gray Preset" button is pressed. For gray cameras it is
assumed that image processing prefers a linear camera response.

"In case of color:"


It will additionally set

• "White balance" in the camera to Auto, and will apply

• a host based moderate "Gamma correction" (1.8), and lastly it will apply

• a host (PC) based sensor specific "Color Correction Matrix" and use the respective "sRGB display matrix".

These settings will also be applied whenever the "Color Preset" button is pressed. It is assumed that the color
camera image is optimized for best human visual feedback.

1.11.1.1.1.1 Changing the Presets There are 3 presets:

• Gray

• Color

• Factory

Factory can be used as a fall back to quickly skip or remove all presets and load the factory default settings.


1.11.1.1.1.2 Modifying Settings All auto modes can be switched off and all settings, such as Gain, Exposure
etc. can be subsequently modified by using:

• the sliders,

• the arrow keys, or

• entering real values with your keyboard.

Toggling the Gamma button loads or unloads a host based 10 bit Gamma correction with a moderate value of 1.8 into
the signal processing path. Switch Gamma on if you require a gray level camera image that appears natural to the
human eye.

Toggling the Color+ button switches both the CCM and the sRGB display matrix on and off. This optimizes the sensor color
response for the human eye and goes in conjunction with a display color response. Because sRGB displays are
most widely used and sRGB is the default color space in Windows, these are preselected. If you require other display
matrices (e.g. Adobe or WideGamut), feel free to use the tree mode of wxPropView and select ColorTwistOutputCorrection
accordingly.

Setting Black Level


Black level can be used if you require dark portions in the image to appear even darker or brighter. Please note that
this slider combines analog and digital settings meaningfully.

Setting Gain
Gain settings also combine analog and digital registers into one slider setting.

Setting Saturation
The Saturation setting increases the color saturation to make the image appear more colorful. It does not change
uncolored parts of the image, nor does it change the color tone or hue.

1.11.1.1.1.3 How to disable Quick Setup Wizard Uncheck the checkbox "Show This Display When A Device Is
Opened" to prevent the Quick Setup Wizard from opening automatically. Use the "Wizards" menu and select "Quick
Setup" to open the Quick Setup Wizard again.

1.11.1.1.1.4 How to Return to the Tree Mode Use OK to use the values and settings of the Quick Setup Wizard
and go back to the tree mode of wxPropView.

Use Cancel to discard the Quick Setup Wizard values and settings and go back to wxPropView and use the former
(or default) settings.

1.11.1.1.1.5 Image Display Functions Quick Setup Wizard allows zooming into the image by right clicking in
the image area and unchecking "Fit To Screen" mode. Use the mouse wheel to zoom in or out. Check "Fit To
Screen" mode, if you want the complete camera image to be sized in the window screen size.

1.11.1.1.1.6 Known Restrictions In cases of Tungsten (artificial) light, the camera brightness may tend to
oscillate if Auto functions are used. This can be minimized or avoided by setting the frame rate to an integer
divisor of the light's flicker frequency (twice the mains frequency).

• Example:

  – Europe: 50 Hz mains; set the frame rate to 100, 50, 25, or 12.5 fps, as appropriate.
  – In countries with 60 Hz mains use 120, 60, 30, or 15 fps accordingly.
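Artificial light flickers at twice the mains frequency, so the recommended rates are that flicker frequency divided by powers of two. A small helper to list such candidates (an illustration only, not part of the driver API):

```python
def flicker_safe_frame_rates(mains_hz, count=4):
    """Frame rates that divide the light flicker frequency (2 x mains)."""
    flicker_hz = 2 * mains_hz
    return [flicker_hz / 2 ** k for k in range(count)]

print(flicker_safe_frame_rates(50))  # [100.0, 50.0, 25.0, 12.5]
print(flicker_safe_frame_rates(60))  # [120.0, 60.0, 30.0, 15.0]
```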


1.11.1.1.2 First View of wxPropView wxPropView (p. 69) consists of several areas:

Figure 2: wxPropView started

• "Menu Bar"
(to work with wxPropView (p. 69) using the menu)

• "Upper Tool Bar"


(to select and initialize a device, acquire images, play a recorder sequence)

• "Left Tool Bar"


(to hide and show parts of the GUI)

• "Status Tool Bar"

• "Main Window" with

– "Grid"
(tree control with the device settings accessible by the user)
– "Display"
(for the acquired images)

• "Analysis"
(information about whole images or an AOI)

Pressing F1 opens the help dialog.

Now, you can initialize a device by


• selecting it in the drop down list in the "Upper Tool Bar" and

• clicking on "Use".

After having successfully initialized a device the tree control in the lower left part of the "Main Window" will display
the properties (settings or parameters) (according to the "interface layout") accessible by the user.

You also have the possibility to set your "User Experience". Depending on the chosen experience level, different
properties are visible:

• Beginner (basic camera settings/properties are visible)

• Expert (e.g. all advanced image processing properties are visible)

• Guru (all settings/properties are visible)

Properties displayed in light grey cannot be modified by the user. Only the properties which actually have an impact
on the resulting image will be visible. Therefore, certain properties might appear or disappear when other
properties are modified.

To permanently commit a modification made with the keyboard, ENTER must be pressed. Leaving the editor
before pressing ENTER will restore the old value.

1.11.1.1.3 How to see the first image As described earlier, for each recognized device in the system the device's
serial number will appear in the drop down menu in the upper left corner of the "Upper Tool Bar". The first
time you start the application after the system has been booted, this might take some seconds when working
with devices that are not connected to the host system via PCI or PCIe.

Once you have selected the device of your choice from the drop down menu click on the "Use" button to open it.

When the device has been opened successfully, the remaining buttons of the dialog will be enabled:

Note

The following screenshots are representative and were made using a mvBlueFOX camera as the capturing
device.

For color sensors, it is recommended to perform a white balance (p. 98) calibration before acquiring images. This
will improve the quality of the resulting images significantly.


Figure 3: wxPropView - First start

Now, you can capture an image ("Acquisition Mode": "SingleFrame") or display live images ("Continuous"). Just

• select an "Acquisition Mode" e.g. "SingleFrame" and

• click the "Acquire" button.

Note

The techniques behind the image acquisition can be found in the developer sections.

The frame rate depends on

• the camera,

• the pixel clock of the sensor.


Since

mvIMPACT Acquire 2.37.0

To save an image directly from the live display, just

1. Right-click on the display.

2. Either select "Save Current Image" or "Copy Current Image To Clipboard".

With "Save Current Image" a dialog will appear, where you can specify the destination folder and the file format.

With "Copy Current Image To Clipboard" you can open your preferred image editing tool and paste the clipboard
content into it. For this functionality you can also use the shortcuts CTRL-C and CTRL-V.

Figure 4: wxPropView - Using the record mode.

1.11.1.1.3.1 Record Mode It is also possible to record image sequences using wxPropView.

1. For this, you have to set the size of the recorder in "System Settings -> RequestCount" e.g. to 100.
This will save the last 100 requests in the request queue of the driver, i.e. the image data including the request
info like frame number, time stamp, etc.

2. Afterwards you can start the recording by clicking the Rec. button.

3. With the Next and Prev. buttons you can display the single images.

If you switched on the request info overlay (right-click on the display area and select the entry to activate this
feature), this information will be displayed on the image, too. With the timestamp you can see the interval between
single frames in microseconds.


Figure 5: wxPropView - Using the record mode.

1.11.1.1.3.2 Hard Disk Recording You can save acquired images to the hard disk the following way:

1. In the "Menu Bar" click on "Capture -> Recording -> Setup Hard Disk Recording".

2. Confirm with "Yes".

3. Afterwards select the target folder for the images.

4. Finally, choose the file format of the acquired images.


Figure 6: wxPropView - Hard Disk Recording.

1.11.1.1.3.3 Snapshot Mode

Since

mvIMPACT Acquire 2.37.0

The snapshot mode can be used to save a sequence of images from the current acquisition to the hard disk directly.


Figure 7: wxPropView - Hard Disk Recording.

For this, please follow these steps:

1. In the "Menu Bar" click on "Capture -> Setup Snapshot To Hard Disk Mode".

2. Confirm with "Yes", that you want to enable the snapshot mode.

3. Select the destination folder on your hard disk.

4. Select the desired file format of the image(s).

5. Now you can save the current image by pressing the space bar.

1.11.1.1.4 Using the analysis plots With the analysis plots you have the possibility to get image details and to
export them (p. 86).


1.11.1.1.4.1 Spatial noise histogram The spatial noise histogram calculates and statistically evaluates the
difference between two neighbouring pixels in vertical and horizontal direction. That is, it shows the sensor's spatial
background pattern, like the sensitivity shifts of each pixel. An ideal sensor or camera has a spatial noise of zero.
However, you have to keep in mind the temporal noise as well.

Figure 8: wxPropView - Spatial noise histogram

Read: Channel::Direction (mean difference, most frequent value count / value, standard deviation)
Example: For a single channel (Mono) image, the output 'C0Hor(3.43, 5086/ 0, 9.25), C0Ver(3.26, 4840/ 0, 7.30)'
indicates that the mean difference between pixels in horizontal direction is 3.43, the most frequent difference is 0,
and this difference is present 5086 times in the current AOI. The standard deviation in horizontal direction is 9.25.
The C0Ver value list contains the same data but in vertical direction.
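The statistics in this read-out can be reproduced on raw pixel data. The following is a simplified pure-Python sketch of what the plot computes for one channel in the horizontal direction, not the tool's actual implementation:

```python
from collections import Counter
from statistics import mean, pstdev

def spatial_noise_horizontal(img):
    """img: 2-D list of pixel values.

    Returns (mean difference, (count, most frequent difference),
    standard deviation) of absolute horizontal neighbour differences.
    """
    diffs = [abs(row[x + 1] - row[x])
             for row in img
             for x in range(len(row) - 1)]
    value, count = Counter(diffs).most_common(1)[0]
    return mean(diffs), (count, value), pstdev(diffs)

# Tiny example AOI; the horizontal differences are 0, 2, 3, 0
img = [[10, 10, 12],
       [10, 13, 13]]
m, (cnt, val), sd = spatial_noise_horizontal(img)  # m = 1.25, val = 0
```

The vertical statistics work the same way on column-wise neighbour differences.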

1.11.1.1.4.2 Temporal noise histogram The temporal noise histogram shows the changes of a pixel from image
to image. This method is more stable because it is relatively independent of the image content. By subtracting
two images, the actual structure is eliminated, leaving the change of a pixel from image to image, that is, the noise.
When capturing images, all parameters must be frozen, all automatic mechanisms have to be turned off, and the
image may not contain underexposed or saturated areas. However, there are no image signals without temporal
noise. Light is a natural signal and the noise always increases with the signal strength. If the noise only follows
these natural limits, then the camera is good. Only if additional noise is added does the camera or the sensor have errors.

Figure 9: wxPropView - Temporal noise histogram


Read: Channel# (mean difference, most frequent value count / value, standard deviation)

Example: For a single channel (Mono) image, the output 'C0(3.43, 5086/ 0, 9.25)' indicates that the mean
difference between pixels in 2 consecutive images is 3.43, the most frequent difference is 0, and this difference is
present 5086 times in the current AOI. The standard deviation between pixels in these 2 images is 9.25. Please
note the impact of the 'Update Interval' in this plot: it can be used to define a gap between the 2 images to compare.
E.g. if the update interval is set to 2, the differences between images 1 and 3, 3 and 5, 5 and 7 etc. will be calculated.
In order to get the difference between 2 consecutive images, the update interval must be set to 1!
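The same kind of statistics can be sketched for the temporal case by differencing two frames. Again, this is a simplified illustration, not the tool's actual implementation:

```python
from statistics import mean, pstdev

def temporal_noise(frame_a, frame_b):
    """Per-pixel absolute difference statistics between two frames.

    frame_a, frame_b: flat lists of pixel values of equal length,
    e.g. two consecutive captures of a static scene.
    """
    diffs = [abs(a - b) for a, b in zip(frame_a, frame_b)]
    return mean(diffs), pstdev(diffs)

# Static scene, only the noise differs between the two captures:
f1 = [100, 101, 99, 100]
f2 = [101, 101, 100, 98]
m, sd = temporal_noise(f1, f2)  # diffs are 1, 0, 1, 2 -> mean 1.0
```

Because the scene structure cancels out in the subtraction, only the frame-to-frame noise remains, as described above.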

1.11.1.1.5 Storing and restoring settings When wxPropView (p. 69) is started for the first time, the values of
properties set to their default values will be displayed in green to indicate that these values have not been modified
by the user so far. Modified properties (even if the value is the same as the default) will be displayed in black.

Figure 10: wxPropView - Storing settings

Settings can be stored in several ways (via the "Menu Bar": "Action -> Capture Settings -> Save Active Device
Settings"):

• "As Default Settings For All Devices Belonging To The Same Family (Per User Only)": As the start-up param-
eters for every device belonging to the same family, e.g. for mvBlueCOUGAR-X, mvBlueCOUGAR-XD.

• "As Default Settings For All Devices Belonging To The Same Family And Product Type": As the start-up
parameters for every device belonging to the same product, e.g. for any mvBlueCOUGAR-X but not for
mvBlueCOUGAR-XD.

• "As Default Settings For This Device(Serial Number)": As the start-up parameters for the currently selected
device.

• "To A File": As an XML file that can be used e.g. to transport a setting from one machine to another or even
to use the settings configured for one platform on another (Windows <-> Linux).

During the startup of a device, all these setting possibilities show different behaviors. The differences are described
in the chapter Settings behaviour during startup (p. 38).

Restoring previously stored settings works in a similar way. After a device has been opened, the settings will be
loaded automatically as described in Settings behaviour during startup (p. 38).

However, at runtime the user has different load settings possibilities (via the "Menu Bar": "Action -> Capture Settings
-> Load Active Device Settings")


• explicitly load the device family specific settings stored on this machine (from "The Default Settings Location
For This Devices Family (Per User Only)")

• explicitly load the product specific settings stored on this machine (from "The Default Settings Location For
This Devices Family And Product Type")

• explicitly load the device specific settings stored on this machine (from "The Default Settings Location For
This Device(Serial Number)")

• explicitly load device family specific settings from an XML file previously created ("From A File")

Note

With "Action -> Capture Settings -> Manage..." you can delete the settings which were saved on the system.

Figure 11: wxPropView - Restoring settings

1.11.1.1.6 Properties All properties and functions can be displayed in the list control on the lower left side of the
dialog. To modify the value of a property, select the edit control to the right of the property's name. Property values
that match the default value of the device are displayed in green. A property value once modified by the user will be
displayed in black (even if the value itself has not changed). To restore the default value of a single property

• right click on the name of the property and

• select "Restore Default".

To restore the default value for a complete list (which might include sub-lists)

• right click on the name of a list and

• select "Restore Default".

In this case a popup window will open and you have to confirm the operation again.

1.11 Application Usage 83

Figure 12: wxPropView - Restore the default value of a property

Most properties store one value only, thus they will appear as a single entry in the property grid. However, properties
are capable of storing more than one value if this is desired. A property storing more than one value will appear as
a parent list item with a WHITE background color (lists will be displayed with a grey background) and as many child
elements as there are values stored by the property. The PARENT grid control will display the number of values
stored by the property; every child element will display its corresponding value index.

If supported by the property, the user can increase or decrease the number of values stored by right-clicking on
the PARENT grid element. If the property allows the modification, the pop-up menu will contain additional entries:

Figure 13: wxPropView - A resizable property

When a new value has been created it will be displayed as a new child item of the parent grid item:


Figure 14: wxPropView - A resized property

Currently, only the last value can be removed via the GUI, and a value can't be removed when a property stores
one value only.

The user might also want to set all (or a certain range of) values of a property that stores multiple values with a single
operation. If supported by the property, this can also be achieved by right-clicking on the PARENT grid element. If
the property allows this modification, the pop-up menu will again contain additional entries:

Figure 15: wxPropView - Setting multiple property values


It's possible to either set all (or a range of) elements of the property to a certain value OR to define a value range
that will then be applied to the range of property elements selected by the user. The following example explains
how this works:

Figure 16: wxPropView - Setting multiple property values within a certain value range

In this sample the entries 0 to 255 of the property will be assigned the value range of 0 to 255. This will result in the
following values AFTER applying the values:

Figure 17: wxPropView - After applying the value range to a property


1.11.1.1.7 Methods Methods appear as entries in the tree control as well. However, their appearance and behavior
differ significantly from those of properties. The names of method objects will appear in 'C' syntax, e.g.
"int function( char∗, int )". This describes a function returning an integer value and expecting a string and an
integer as input parameters. To execute a method object

• right click on the name of a method and

• select "Execute" from the popup menu:

Figure 18: wxPropView - Calling a method object

Parameters can be passed to methods by selecting the edit control left of the method object. Separate the parameters
by blanks. So to call a function expecting a string and an integer value you might e.g. enter "testString 0"
into the edit control left of the method.

The return value (in almost every case an error code as an integer) will be displayed in the lower right corner of the
tree control. The values displayed here directly correspond to the error codes defined in the interface reference and
will therefore be of type TDMR_ERROR or TPROPHANDLING_ERROR.

1.11.1.1.8 Copy grid data to the clipboard Since wxPropView (p. 69) version 1.11.0 it is possible to copy
analysis data to the clipboard. The data will be copied in CSV format and can thus be pasted directly into tools like
OpenOffice™ or Microsoft® Office™.

Just

• right-click on the specific analysis grid when in numerical display mode and

• select "Copy grid to clipboard" from the pop up menu.


Figure 19: wxPropView - Copying grid data to the clipboard

1.11.1.1.9 Import and Export images wxPropView (p. 69) offers a wide range of image formats that can be
used for exporting captured images to a file. Some formats, e.g. packed YUV 4:2:2 with 10 bit per component, are
rather special and cannot be stored in a standard file format such as BMP. When a file is stored in a format that
does not support the original data type, wxPropView (p. 69) will convert the image into something that matches
the original image format as closely as possible. This, however, can result in the loss of data. In order to allow the
storage of the complete information contained in a captured image, wxPropView (p. 69) also allows storing the data
in a raw format. This file format contains just a binary dump of the image without any header information.
However, the file name will automatically be extended by information about the image to allow restoring the
data at a later time.
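Recovering such a raw file later means reading the image geometry and pixel format back out of the file name. The following sketch shows the idea; the exact naming convention used by wxPropView is not documented here, so the pattern `name.<width>x<height>.<PixelFormat>.raw` is only an assumption for illustration:

```python
import re

def parse_raw_name(filename):
    """Extract (width, height, pixel_format) from a raw image file name.

    Assumes a hypothetical naming convention of the form
    'anything.<width>x<height>.<PixelFormat>.raw'; the convention
    actually used by wxPropView may differ.
    """
    m = re.search(r'\.(\d+)x(\d+)\.([A-Za-z0-9]+)\.raw$', filename)
    if m is None:
        raise ValueError("file name carries no embedded image info: %s" % filename)
    return int(m.group(1)), int(m.group(2)), m.group(3)
```

With such a helper, the import dialog's fields (dimensions and pixel format) could be pre-filled automatically, which is exactly what wxPropView does when it recognizes one of its own file names.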

All image formats that can be exported can also be imported again. Importing a file can be done in 3 different
ways:

• via the menu (via the "Menu Bar": "Action -> Load image...")

• by dragging an image file into an image display within wxPropView (p. 69)

• by starting wxPropView (p. 69) from the command line, passing the file to open as a command line parameter
(p. 110) (on Windows® e.g. "wxPropView.exe MyImage.png" followed by [ENTER])

When importing a "∗.raw" image file, a small dialog will pop up allowing the user to define the dimensions and
the pixel format of the image. When the file name has been generated using the image storage function offered
by wxPropView (p. 69), the file name will be parsed and the extracted information will automatically be set in the
dialog, thus the user simply needs to confirm that this information is correct.


Figure 20: wxPropView - Raw image file import

1.11.1.1.10 Setting up multiple display support and/or work with several capture settings in parallel
wxPropView (p. 69) is capable of

• dealing with multiple capture settings or acquisition sequences for a single device and in addition to that

• it can be configured to deal with multiple image displays.

The number of parallel image displays can be configured via the command line parameters (p. 110) "dcx" and
"dcy". In this step-by-step setup wxPropView (p. 69) has been started like this from the command line:

wxPropView dcx=1 dcy=2

This will result in 1 display in horizontal direction and 2 in vertical direction.

Since

mvIMPACT Acquire 2.18.1

it is also possible to change the number of displays at runtime via "Settings -> Image Displays -> Configure Image
Display Count":


Figure 21: wxPropView - Create capture setting

Additional capture settings can be created via "Menu Bar": "Capture -> Capture Settings -> Create Capture
Settings". The property grid will display these capture settings either in "Developers" or in "Multiple Settings
View".

Now, in order to set up wxPropView (p. 69) to work with two capture settings instead of one:

1. Additional capture settings can be created. In order to understand what a capture setting actually is,
please refer to

• "Working with settings" chapter of the "mvIMPACT Acquire API" manuals.

Creating a capture setting is done via "Capture -> Capture Settings -> Create Capture Setting".

Figure 22: wxPropView - Create capture setting

2. Then, the user is asked for the name of the new setting.


Figure 23: wxPropView - Create capture setting - Choosing name

3. And finally for the base this new setting shall be derived from.

Figure 24: wxPropView - Create capture setting - Choosing base

Afterwards, in this example we end up having 2 capture settings:

• a "Base" setting, which is always available

• a "NewSetting1", which has been derived from "Base".


Figure 25: wxPropView - two settings

As "NewSetting1" has been derived from "Base" changing a property in "Base" will automatically change this
property in "NewSetting1" if this property has not already been modified in "NewSetting1". Again to get an
understanding for this behaviour please refer to

• "Working with settings" chapter of the "mvIMPACT Acquire API" manuals.

Now, to set up wxPropView (p. 69) to display all images taken using capture setting "Base" in one display and all
images taken using capture setting "NewSetting1" in another display, the capture settings need to be assigned to
image displays via "Capture -> Capture Settings -> Assign To Display(s)".


Figure 26: wxPropView - Assigning displays

Figure 27: wxPropView - Assigning displays

By default, a newly created setting will be assigned to one of the available displays in a round-robin scheme:
when there are 3 displays, the first (Base) setting will be assigned to "Display 0", the next to "Display 1", the
next to "Display 2", and a fourth setting will be assigned to "Display 0" again. The setting-to-display relationships
can be customized via "Capture -> Capture Settings -> Assign to Display(s)".
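The round-robin assignment, together with the rule that the driver needs at least two request objects per display (because each display holds on to the request it shows), can be sketched as follows. This is only an illustrative Python model, not part of the mvIMPACT Acquire API:

```python
def display_for_setting(setting_index, display_count):
    # Round-robin: setting 0 -> display 0, setting 1 -> display 1, ...
    # and the cycle starts over once every display has a setting.
    return setting_index % display_count

def min_request_count(display_count):
    # Each display keeps a reference to the request it currently shows,
    # so the driver needs at least two requests per display to be able
    # to keep acquiring while the displays hold their buffers.
    return 2 * display_count
```

With 3 displays, settings 0, 1, 2 land on displays 0, 1, 2 and setting 3 wraps around to display 0 again; the driver would then need at least 6 requests.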

As each image display keeps a reference to the request the displayed image belongs to, the driver can't re-use the
request buffer until a new request is blitted into this display. Thus, it might be necessary to increase the number of
request objects the driver is working with if a larger number of displays is involved. The minimum number of requests
needed is 2 times the number of image displays. The number of requests used by the driver can be set up in the
driver's property tree:


Figure 28: wxPropView - Setting up request count

Finally, wxPropView (p. 69) must be configured in order to use all available capture settings in a round-robin
scheme. This can be done by setting the capture setting usage mode to "Automatic" via "Capture -> Capture
Settings -> Usage Mode":

Figure 29: wxPropView - Capture setting usage mode

That's it. Now, starting a live acquisition will display live images in both displays and each display is using a different
set of capture parameters. If a device supports parallel acquisition from multiple input channels, this will increase

• the used bandwidth and also

• the CPU load


as wxPropView (p. 69) now needs to display more images per second. Each display can be configured
independently, thus e.g. one display can be scaled while the other displays 1:1 data. The analysis plots can be
assigned to a specific display by left-clicking on the corresponding image display; the info plot will plot a graph for
each capture setting in parallel.

Figure 30: wxPropView - Running example

When only one setting shall be used at a given time, this can be achieved by setting the capture setting usage mode
back to "Manual" via "Capture -> Capture Settings -> Usage Mode". Then the setting that shall be used can be
manually selected in the request control list:


Figure 31: Manual Setting Usage Mode

This can even be changed during a running acquisition.

1.11.1.1.11 Bit-shifting an image wxPropView (p. 69) shows snapped or live images in the display area of the
GUI. The area, however, shows the most significant bits (msb) of the image in the 8 bit display.

The following image shows how a mid-grey 12 bit pixel of an image is displayed with 8 bit. Additionally, two shifts
are shown.

Figure 32: Mid-grey 12 bit pixel image and 8 bit display with 2 example shifts


In this particular case, the pixel will appear brighter (as the most significant bits are 1's). As you may have noticed,
each shift means that each pixel value is multiplied or divided by 2, according to the direction.

Anyway, there is one restriction in the 8 bit display:

If the pixel value is greater than 255, the pixel value will be clipped to 255. To describe this from a programmer’s
view; a represents the pixel value:

a = ( a > 255 ) ? 255 : a

With wxPropView (p. 69) you can shift the bits in the display using the left and right arrow keys. Furthermore you
can turn on the monitor display to compare the images synchronously.
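The shift-and-clip behaviour described above can be modelled in a few lines. This is a simplified sketch of the display logic (bit depth, default shift, and clipping follow the description in the text, not an inspection of the wxPropView source):

```python
def display_8bit(pixel_value, bit_depth=12, shift=0):
    """Simulate the 8 bit display of a higher bit depth pixel.

    By default the 8 most significant bits are shown. Each positive
    shift step multiplies the displayed value by 2, each negative step
    divides it by 2, and the result is clipped to 255.
    """
    value = pixel_value >> (bit_depth - 8)  # default: the 8 msb
    if shift >= 0:
        value <<= shift
    else:
        value >>= -shift
    return 255 if value > 255 else value    # a = (a > 255) ? 255 : a
```

A mid-grey 12 bit pixel (value 2048) is shown as 128; one shift towards the msb doubles it to 256, which is clipped to 255, and one shift the other way halves it to 64.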

wxPropView - Bit-shifting an Image:


https://www.matrix-vision.com/tl_files/mv11/trainings/wxPropView/wxPropView_Bit-shifting/index.html

1.11.1.1.12 Changing the view of the property grid to assist writing code that shall locate driver features
With wxPropView (p. 69) it is possible to switch between the "Standard View" (user-friendly) and the "Developers
View". While the first (default) view displays the device driver's feature tree in a way that might be suitable for most
users of a GUI application, it might present the features in a slightly different order than they are actually implemented
in the device driver. The developers view switches the tree layout of the application to reflect the feature tree exactly
as it is implemented and presented by the SDK. It can be helpful when writing code that shall locate a certain
property in the feature tree of the driver using the C, C++, Java, .NET or Python interface. The feature hierarchy
displayed here can directly be used when searching for features using the "ComponentLocator (C++/.NET)"
objects or the "DMR_FindList (C)" and "OBJ_GetHandleEx (C)" functions.

Figure 33: Developers View


1.11.1.1.13 Accessing log files

Since

mvIMPACT Acquire 2.11.9

On Windows, it is possible to access the log files generated by the MATRIX VISION software via the Help menu. Sending us
the log files will speed up support cases.

Figure 34: wxPropView - Help menu

The options are to

• directly open the logs folder, to

• create a zip file with all the logs, and to

• open the system's default email client to send an email to [email protected].

See also

Accessing log files using Linux (p. 126)

1.11.1.2 How to configure a device

As described above, after the device has been initialized successfully, the available properties according to the
chosen "interface layout" (e.g. GenICam) are displayed as a hierarchy tree in the "Grid" area of the GUI.

wxPropView - Configuring a device:


https://www.matrix-vision.com/tl_files/mv11/trainings/wxPropView/wxPropView_ConfiguringDevice/index.html


The next chapter will show how to set the interface layout and which interface you should use according to your
needs.

1.11.1.2.1 Different interface layouts

Devices belonging to this family only support the Device Specific interface layout which is the common interface
layout supported by most MATRIX VISION devices.

GenICam compliant devices can be operated in different interface layouts. Have a look at a GenICam compliant
device for additional information.

1.11.1.2.2 White balance of a camera device (color version) Start the wxPropView (p. 69) and initialize the
device by clicking "Use" and start a "Continuous" acquisition.

Figure 35: wxPropView - Starting window

When using a color version of the camera, the PC will calculate a color image from the original gray Bayer mosaic
data. To get correct colors when working with a Bayer mosaic filter, you have to calibrate the white balance (this
must be repeated every time the lighting conditions change).

The wxPropView (p. 69) offers predefined settings for e.g.

• "Daylight",

• "TungstenLamp",

• "HalogenLamp",

• "FluorescentLamp" and many more.

Simply select the necessary item in the menu "Image Settings -> Base -> ImageProcessing -> WhiteBalance"
("DeviceSpecific interface layout") or "Setting -> Base -> ImageProcessing -> WhiteBalance" ("GenICam interface
layout").

If you need a user defined setting, you can also define your own. For this, select a profile (e.g. "User1") for this
setting:


Figure 36: wxPropView - Selecting WhiteBalance profile

Note

You can use up to 4 profiles.

Point the camera at a white or light gray area (the pixels must not be saturated, so use gray values between
150 and 230).

Go to the menu item "WhiteBalanceCalibration" and select the parameter "Calibrate Next Frame":


Figure 37: wxPropView - WhiteBalanceCalibration

By committing the selected value, the application accepts the change. The next acquired image will be the reference
for the white balance.

All further acquired images will be balanced with this setting:

Figure 38: wxPropView - White balance summary
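The principle behind such a calibration can be illustrated with a gray-world style computation: from the mean channel values of the gray reference area, per-channel gains are derived so that all channels match the green reference. This is only an illustration of the concept; the camera's actual calibration algorithm is not documented here, and the 150-230 check mirrors the non-saturation advice above:

```python
def white_balance_gains(mean_r, mean_g, mean_b):
    """Derive (gain_r, gain_g, gain_b) from the mean channel values of a
    gray reference area, so that red and blue are scaled to match green.

    Illustrative gray-world sketch only, not the camera's algorithm.
    """
    if not (150 <= mean_g <= 230):
        raise ValueError("reference area must not be saturated; "
                         "use gray values between 150 and 230")
    # Green is the reference channel; scale red and blue towards it.
    return mean_g / mean_r, 1.0, mean_g / mean_b
```

Applying these gains to every subsequent frame corresponds to what "all further acquired images will be balanced with this setting" describes above.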

For easier handling, all settings can be saved by clicking the menu item "Action -> Capture Settings
-> Save..." (p. 81).


1.11.1.2.3 Configuring different trigger modes To configure a device for a triggered acquisition, wxPropView
(p. 69) offers the property "Image Setting -> Camera -> TriggerMode" ("DeviceSpecific interface layout") or "Setting
-> Base -> Camera -> GenICam -> Acquisition Control -> Trigger Selector" ("GenICam interface layout").

Note

The supported trigger modes of each sensor are described in the More specific data (p. 63) of each sensor.

1.11.1.2.4 Testing the digital inputs

Note

The following description is significant if you are using the "DeviceSpecific interface layout". In the GenICam
layout, the "Digital I/O" section can be found in "Setting -> Base -> Camera -> GenICam -> Digital I/O
Control".

For performance reasons, device drivers will not automatically update their digital input properties if nobody is
interested in the current state. Therefore, in order to check the current state of a certain digital input, it is necessary
to manually refresh the state of the properties. To do this please right-click on the property you are interested in and
select "Force Refresh" from the pop-up menu.

GenICam interface layout only:

Some devices might also offer an event notification if a certain digital input changed its state. This event can then
be enabled

• via the "EventSelector" in "Setting -> Base -> Camera -> GenICam -> Event Control".

• Afterwards, a callback can be registered by right-clicking on the property you are interested in again.

• Now, select "Attach Callback" from the pop-up menu and switch to the "Output" tab in the lower right section
of wxPropView (Analysis tabs).

Whenever an event is sent by the device that updates one of the properties a callback has been attached to, the
output window will print a message with some information about the detected change.

Figure 39: wxPropView - Call refresh


1.11.1.2.5 Setting up external trigger and flash control To set up external trigger and flash control, the following
is required:

• mvBlueFOX with CCD sensor

• Host PC with USB 2.0 interface

• USB 2.0 cable (max. 5 m)

• Cable for supplying external trigger signal to camera

• Flash with needed cable and power supply

• Current mvBlueFOX driver

The camera is connected to the PC by the USB 2.0 cable only. All other signals are connected directly to the camera.
Trigger and flash signals are directly controlled by the FPGA, which handles the timing in the camera. This makes the
trigger and flash control independent of the host PC's CPU load and of temporary USB 2.0 interruptions.

Figure 40: mvBlueFOX with trigger and flash

Trigger control
The external trigger signal resets the image acquisition asynchronously to any other timing, so the reaction delay is
so short that it can be ignored. If a delay between the trigger signal and the start of integration is needed, it can be
defined. By default it is set to 0 us.

Flash control
The signal for flash control is set as soon as image integration begins. If a delay is needed, it can be defined.
By default this delay is set to 0.


1.11.1.2.5.1 Connection External trigger signal


The signal for triggering image acquisition must be connected to a digital input on the back of the mvBlueFOX via
the 9-pin D-Sub connector (p. 40). You can choose either input IN0 (pins 6 and 1) or IN1 (pins 9 and 4) for triggering.

Figure 41: External trigger signal

The schematic shows how to connect the application's switch to the camera's digital input. The external trigger
signal must meet the following conditions:

• TTL (5 V): High min. 3 V, Low max. 1 V

• PLC (24 V): High min. 12 V, Low max. 10 V

The application's switch can be a mechanical switch, a light barrier, or some kind of encoder.
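The voltage conditions above can be expressed as a small classifier. The "undefined" band between the two thresholds is an assumption added here for illustration: between Low max. and High min. the input state is simply not guaranteed by the specification:

```python
# Signal level conditions from the text above:
#   TTL (5 V):  High min. 3 V, Low max. 1 V
#   PLC (24 V): High min. 12 V, Low max. 10 V
LEVELS = {"TTL": (1.0, 3.0), "PLC": (10.0, 12.0)}

def logic_level(voltage, standard="TTL"):
    """Classify an input voltage as 'High', 'Low' or 'undefined'."""
    low_max, high_min = LEVELS[standard]
    if voltage >= high_min:
        return "High"
    if voltage <= low_max:
        return "Low"
    return "undefined"  # between the thresholds the state is not guaranteed
```

For example, 5 V on a TTL input is a valid High, while 2 V falls into the undefined band, which is exactly why a pull-up or pull-down resistor may be needed for some switches.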

Note

Depending on the switch used, it might be necessary to use a pull-up or pull-down resistor so that the camera input
can recognize the signal correctly.

See also

Characteristics of the digital inputs (p. 40)

To test the general trigger functionality, please follow these steps:

1. Select the "Acquisition Mode" "Continuous".

2. Click on "Acquire".

3. Set the "TriggerMode" to "OnHighLevel".


4. Set the "TriggerMode" to "DigitalInputThreshold" to e.g. 2V.

5. Connect a standard power supply with e.g. 5 V (higher than the value of "DigitalInputThreshold") to pin 1 (-)
and pin 6 (+).

As long as the power supply is connected, you will see a live preview. If you disconnect the power supply, the live
preview should stop. If this works, the trigger input is functional.

Figure 42: Settings to test the general trigger functionality

External flash signal


To supply the flash with a control signal, use either digital output Out0 (pins 7 and 2) or Out1 (pins 8 and 3) on the
9-pin D-Sub connector.


Figure 43: External flash signal

If the current needed for the flash is below 100 mA, you can connect the flash directly to the camera outputs. If it is
higher, you have to use an additional driver to control the flash, which provides the higher current.

Note

Depending on the flash driver used, it might be necessary to use pull-up or pull-down resistors so that the driver can
recognize the signal correctly.

See also

Characteristics of the digital outputs (p. 40)

1.11.1.2.5.2 Setting up In wxPropView (p. 69) you can open the camera and display acquired images.

By default, the camera is free running. This means it uses its own timing depending on the configured pixel clock,
exposure time and shutter mode.

Trigger
To let the camera acquire images only on an external trigger signal, you must change the "TriggerMode" to the
mode suitable for your application:


Figure 44: TriggerMode

Mode            Description
Continuous      Free running, no external trigger signal needed.
OnDemand        Image acquisition triggered by command (software trigger).
OnLowLevel      As long as the trigger signal is Low, the camera acquires images with its own timing.
OnHighLevel     As long as the trigger signal is High, the camera acquires images with its own timing.
OnFallingEdge   Each falling edge of the trigger signal acquires one image.
OnRisingEdge    Each rising edge of the trigger signal acquires one image.
OnHighExpose    Each rising edge of the trigger signal acquires one image; the exposure time corresponds to the pulse width.
OnLowExpose     Each falling edge of the trigger signal acquires one image; the exposure time corresponds to the pulse width.
OnAnyEdge       Starts the exposure of a frame when the trigger input level changes from high to low or from low to high.
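The edge- and level-controlled modes from the table can be summarized in a small decision function. This is a simplified software model (1 = High, 0 = Low) of when a transition starts an acquisition, not camera firmware:

```python
def triggers_acquisition(mode, previous_level, current_level):
    """Return True if the level transition starts an acquisition in the
    given trigger mode (simplified model of the table above)."""
    rising = previous_level == 0 and current_level == 1
    falling = previous_level == 1 and current_level == 0
    if mode in ("OnRisingEdge", "OnHighExpose"):
        return rising
    if mode in ("OnFallingEdge", "OnLowExpose"):
        return falling
    if mode == "OnAnyEdge":
        return rising or falling
    if mode == "OnHighLevel":
        return current_level == 1
    if mode == "OnLowLevel":
        return current_level == 0
    if mode == "Continuous":
        return True
    raise ValueError("unhandled mode: %s" % mode)
```

Note that the level modes (OnHighLevel/OnLowLevel) keep acquiring with the camera's own timing while the level persists, whereas the edge modes acquire exactly one image per matching transition.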

Now, define which pin the trigger signal is connected to.


Figure 45: TriggerSource

Choose either "DigIn0" if signal is connected to "IN0" or DigIn1 if signal is connected to IN1. In general entry "←-
RTCtrl" is not useful in this case because triggering would be controlled by Hardware Real-Time Controller
(p. 118) which is not described here and also not necessary.

Depending on the voltage level you are using for the trigger signal, you must choose the "DigitalInputThreshold":

Figure 46: DigitalInputThreshold


In case of TTL choose 2V and in case of PLC choose 10V.


Now, image acquisition runs according to the external trigger signal and trigger mode. You will see the acquired
images in the left part of the window. The preview will be updated at the frequency of the external trigger.
The driver uses a timeout period within which at least one trigger signal must be provided to the camera. If no trigger
signal arrives within this time, no image is acquired and an error is reported (the error count increases). So be sure
to set this timeout to a value long enough to receive at least one trigger signal. You can set this value via the
"ImageRequestTimeout" property:

Figure 47: ImageRequestTimeout
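The timeout behaviour described above can be sketched as a simple model: an image request succeeds if a trigger arrives within the timeout window, otherwise it times out and the error count would increase. Times and the function shape are illustrative only:

```python
def wait_for_image(trigger_times, request_time, timeout):
    """Return ('image', t) for the first trigger within the timeout
    window starting at request_time, or ('timeout', None) if no trigger
    arrives in time (the error count would then increase).

    Simplified model of the ImageRequestTimeout behaviour; times are in
    arbitrary units.
    """
    for t in sorted(trigger_times):
        if request_time <= t <= request_time + timeout:
            return ("image", t)
    return ("timeout", None)
```

This also shows why the timeout must be chosen longer than the largest expected gap between trigger pulses: a trigger arriving just after the window closes counts as an error even though the signal itself is fine.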

Flash
To activate the flash control signal, set "FlashMode" to the output the flash or flash driver is connected to:

Figure 48: External flash signal


Once this mode is activated, each image acquisition will generate the flash signal. This generation is independent
of the trigger mode used; the flash signal is derived directly from the integration timing.

This means that if no "FlashToExposedToLightDelay" is set, the flash signal will rise as soon as integration starts and
fall when integration is finished. The pulse width cannot be changed. So you can be sure that integration takes place
while the flash signal is active.
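The flash timing just described can be written down as a tiny model: the signal rises at integration start (plus the optional delay) and falls at integration end. The microsecond unit and the function shape are assumptions for illustration:

```python
def flash_window(exposure_start, exposure_time, flash_delay=0.0):
    """Return (rise, fall) times of the flash signal in microseconds.

    The signal rises flash_delay after integration starts and falls
    when integration ends; the pulse width itself cannot be set
    independently (simplified model of the behaviour described above).
    """
    rise = exposure_start + flash_delay
    fall = exposure_start + exposure_time
    return rise, fall
```

With a 500 us exposure starting at t = 100 us and no delay, the flash is active from 100 us to 600 us; adding a 50 us delay only moves the rising edge, shortening the pulse.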

1.11.1.2.6 Working with the hardware Look-Up-Table (LUT) There are two parameters which handle the pixel
formats of the camera:

• "Setting -> Camera -> PixelFormat" defines the pixel format used to transfer the image data into the target
systems host memory.

• "Setting -> ImageDestination -> PixelFormat" defines the pixel format of the resulting image (which is kept
in the memory by the driver).

If both formats are set to "Auto", 8 bit will be used.

If you set "LUTImplementation" to "Software" in "Setting -> ImageProcessing -> LUTOperations", the hardware
Look-Up-Table (LUT) will work with 8 bit data ("LUTMappingSoftware = 8To8"). Using Gamma functions you will
see gaps in the histogram:

Figure 49: 8to8 software LUT leads to gaps in the histogram using gamma functions (screenshot:
mvBlueFOX-MLC)


If you set "LUTImplementation" to "Hardware" in "Setting -> ImageProcessing -> LUTOperations", the hardware
Look-Up-Table (LUT) will work with 10 bit data inside the camera and converts the data to 8 bit for output ("LUT←-
MappingHardware = 10To8"). Now, there will be no gaps in the histogram:
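The reason the 10-to-8 mapping fills the histogram better can be demonstrated numerically: an 8-to-8 LUT can never produce more output codes than it has input codes, so a non-linear gamma curve leaves unused codes (gaps), while a 10-to-8 LUT samples the same curve four times finer. The gamma value and rounding here are an illustrative model, not the camera's exact LUT contents:

```python
def gamma_lut(input_bits, gamma=0.5):
    """Map every input code of the given bit depth to 8 bit output
    through a gamma curve (software model of the two LUT modes)."""
    max_in = (1 << input_bits) - 1
    return [round(255 * (i / max_in) ** gamma) for i in range(max_in + 1)]
```

Counting the distinct 8 bit output values shows that the 8-to-8 mapping skips codes while the 10-to-8 mapping hits considerably more of them, which is exactly the difference visible in the two histograms.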

Figure 50: 10to8 hardware LUT shows no gaps in the histogram (screenshot: mvBlueFOX-MLC)

1.11.1.3 Command-line options

It is possible to start wxPropView via the command line and control the startup behavior using parameters. The
supported parameters are as follows:

Parameter Description
width or w Defines the startup width of wxPropView. Example: width=640
height or h Defines the startup height of wxPropView. Example: height=460
xpos or x Defines the startup x position of wxPropView.
ypos or y Defines the startup y position of wxPropView.
splitterRatio Defines the startup ratio of the position of the property grid's splitter. Values between
> 0 and < 1 are valid. Example: splitterRatio=0.5
propgridwidth or pgw Defines the startup width of the property grid.
debuginfo or di Will display debug information in the property grid.
dic Will display invisible (currently shadowed) components in the property grid.
displayCountX or dcx Defines the number of images displayed in horizontal direction.
displayCountY or dcy Defines the number of images displayed in vertical direction.


fulltree or ft Will display the complete property tree (including the data not meant to be
accessed by the user) in the property grid. Example (tree will be shown): fulltree=1
device or d Will directly open a device with a particular serial number. ∗ will take the first
device. Example: d=GX000735
qsw Will forcefully hide or show the Quick Setup Wizard, regardless of the default
settings. Example (Quick Setup Wizard will be shown): qsw=1
live Will directly start live acquisition from the device opened via device or d directly.
Example (will start the live acquisition): live=1

1.11.1.3.1 Sample (Windows)


wxPropView.exe d=* fulltree=1 qsw=0

This will open the first available device, hide the Quick Setup Wizard, and display the complete property tree.

1.11.2 mvDeviceConfigure

mvDeviceConfigure (p. 111) is an interactive GUI tool to configure MATRIX VISION devices. It shows all connected
devices.
Various things can also be done without user interaction (e.g. updating the firmware of a device). To find out how to
do this please start mvDeviceConfigure and have a look at the available command line options presented in the
text window in the lower section (the text control) of the application.

1.11.2.1 How to set the device ID

The device ID is used to identify devices with a user-defined ID. The default ID in the device's EEPROM is "0".
If the user hasn't assigned unique device IDs to his devices, the serial number can be used to select a certain
device instead. However, certain third-party drivers and interface libraries might rely on these IDs being set up in a
certain way, and in most cases this means that each device needs to have a unique ID assigned and stored
in the device's non-volatile memory. So after installing the device driver and connecting the devices, setting up these
IDs might be a good idea.
To set the ID, please start the mvDeviceConfigure (p. 111) tool. You will see the following window:


Figure 47: mvDeviceConfigure - Overview devices

Whenever there is a device that shares its ID with at least one other device belonging to the same device family,
mvDeviceConfigure (p. 111) will display a warning like in the following image, which in this example shows two
mvBlueFOX cameras with an ID conflict:

Figure 48: mvDeviceConfigure - Conflicting device IDs

1.11.2.1.1 Step 1: Device Selection Select the device you want to set up from the list box.

1.11.2.1.2 Step 2: Open dialog to set the ID With the device selected, select the menu item Action and click
on Set ID.

Note

It is also possible to select the action with a right click on the device.

Figure 49: mvDeviceConfigure - Select action


1.11.2.1.3 Step 3: Assign the new ID Enter the new ID and click OK.

Figure 50: mvDeviceConfigure - New ID

Now the overview shows the list of all devices as well as the new ID. In case an ID conflict that existed before has
now been resolved, mvDeviceConfigure (p. 111) will no longer highlight the conflict:

Figure 51: mvDeviceConfigure - Resolved ID conflict

1.11.2.2 How to update the firmware

With the mvDeviceConfigure tool it is also possible to update the firmware. The following steps are necessary:

1.11.2.2.1 Step 1: Device selection Select the device you want to update from the list box.


1.11.2.2.2 Step 2: Open dialog to update the firmware With the device selected, select the menu item Action
and click on Update firmware.

Note

It is also possible to select the action with a right click on the device.

Figure 52: mvDeviceConfigure - Select action

1.11.2.2.3 Step 3: Confirm the firmware update You have to confirm the update.

Figure 53: mvDeviceConfigure - Confirm update

Note

The firmware image is part of the installed driver package. mvDeviceConfigure uses this version and updates
the firmware. If you use an old driver, you will downgrade the firmware.

If the firmware update is successful, you will receive the following message:

Figure 54: mvDeviceConfigure - Update successful


1.11.2.2.4 Step 4: Disconnect and reconnect the device Please disconnect and reconnect the device to
activate the new firmware.

Note

The firmware update is only necessary in some special cases (e.g. to benefit from a new functionality added
to the firmware or to fix a firmware-related bug). Before updating the firmware, be sure you know what you are
doing and have a look at the change log (versionInfo.txt) and/or the manual to see if the update will fix your problem.
The firmware update takes approx. 30 seconds!

1.11.2.3 How to disable CPU sleep states a.k.a. C states (< Windows 8)

Modern PCs, notebooks, etc. try to save energy by using smart power management, for which several hardware
manufacturers specified the ACPI standard. The standard defines several power states. For example, if no processor
load is required, the processor automatically changes to a power saving (sleep) state and vice versa. Every state
change stops the processor for some microseconds. This time is enough to cause image error counts!

See also

More information about ACPI: http://en.wikipedia.org/wiki/Advanced_Configuration_and_Power_Interface

To disable the power management on the processor level (so-called "C states"), you can use mvDeviceConfigure:

Note

With Windows XP it is only possible to disable the C2 and C3 states. With Windows Vista / 7 / 8 all C states
(C1, C2, and C3) will be disabled.

Warning

Please be sure you know what you are doing! Turning off the processor's sleep states will lead to a higher power
consumption of your system. Some processor vendors might state that turning off the sleep states voids the
processor's warranty.

Note

Modifying the sleep states using mvDeviceConfigure only affects the current power scheme. For
notebooks this will e.g. make a difference depending on whether the notebook is running on battery or not.
E.g. if the sleep states have been disabled while running on battery and the system is then connected to an
external power supply, the sleep states might be active again. Thus, in order to permanently disable the sleep
states, this needs to be done for all power schemes that will be used when operating devices.

1. Start mvDeviceConfigure.

2. Go to tab "Settings" and unselect "CPU Idle States Enabled".


Figure 55: mvDeviceConfigure - Settings

The sleep states can also be enabled or disabled from a script by calling mvDeviceConfigure like this:

mvDeviceConfigure.exe set_processor_idle_states=1 quit

or

mvDeviceConfigure.exe set_processor_idle_states=0 quit

The additional quit causes the application to terminate after the new value has been applied.

Note

With Windows Vista or newer, mvDeviceConfigure must be started from a command shell with administrator
privileges in order to modify the processor's sleep states.

1.11.2.4 Command-line options

mvDeviceConfigure can also be started from the command line, and its start-up behavior can be controlled using
parameters. The supported parameters are as follows:


Parameter                          Description
setid or id                        Assigns a device ID to a device (syntax: 'id=<serial>.<id>' or 'id=<product>.<id>').
set_processor_idle_states or spis  Changes the C1, C2 and C3 states for ALL processors in the current system (syntax: 'spis=1' or 'spis=0').
set_userset_persistence or sup     Sets the persistency of UserSet settings during firmware updates (syntax: 'sup=1' or 'sup=0').
update_fw or ufw                   Updates the firmware of one or many devices.
update_fw_file or ufwf             Updates the firmware of one or many devices. Pass a full path to a text file that contains one serial number or product type per line.
custom_genicam_file or cgf         Specifies a custom GenICam file to be used to open devices for firmware updates. This can be useful when the actual XML on the device is damaged/invalid.
update_kd or ukd                   Updates the kernel driver of one or many devices.
ipv4_mask                          Specifies an IPv4 address mask to use as a filter for the selected update operations. Multiple masks can be passed here, separated by semicolons.
fw_file                            Specifies a custom name for the firmware file to use.
fw_path                            Specifies a custom path for the firmware files.
log_file or lf                     Specifies a log file storing the content of the log window upon application shutdown.
quit or q                          Ends the application automatically after all updates have been applied.
force or f                         Forces a firmware update in unattended mode, even if the firmware isn't a newer version.
*                                  Can be used as a wildcard; devices will be searched by serial number AND by product. The application will first try to locate a device with a serial number matching the specified string and then (if no suitable device is found) a device with a matching product string.

The number of commands that can be passed to the application is not limited.

1.11.2.4.1 Sample (Windows)

mvDeviceConfigure ufw=BF000666

This will update the firmware of a mvBlueFOX with the serial number BF000666.

mvDeviceConfigure update_fw=BF*

This will update the firmware of ALL mvBlueFOX devices in the current system.

mvDeviceConfigure update_fw=mvBlueFOX-2* lf=output.txt quit

This will update the firmware of ALL mvBlueFOX-2 devices in the current system, then will store a log file of the
executed operations and afterwards will terminate the application.


mvDeviceConfigure setid=BF000666.5

This will assign the device ID '5' to a mvBlueFOX with the serial number BF000666.

mvDeviceConfigure ufw=*

This will update the firmware of every device in the system.

mvDeviceConfigure ufw=BF000666 ufw=BF000667

This will update the firmware of 2 mvBlueFOX cameras.

mvDeviceConfigure ipv4_mask=169.254.*;192.168.100* update_fw=GX*

This will update the firmware of all mvBlueCOUGAR-X devices with a valid IPv4 address that starts with '169.254.'
or '192.168.100.'.
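Since the number of commands is not limited, such unattended calls can also be generated from a script. The following Python sketch composes a batch firmware update from the switches documented above; the serial numbers and the log file name are placeholders, and mvDeviceConfigure is assumed to be reachable via the PATH:

```python
def build_update_cmd(serials, log_file="update.log", force=False):
    """Compose an unattended mvDeviceConfigure firmware-update call
    from the documented switches (ufw, f, lf, quit)."""
    args = ["mvDeviceConfigure"]
    args += ["ufw=%s" % s for s in serials]
    if force:
        args.append("f")  # force the update even if it is not newer
    args += ["lf=%s" % log_file, "quit"]  # write a log, then terminate
    return args

print(" ".join(build_update_cmd(["BF000666", "BF000667"], force=True)))
# -> mvDeviceConfigure ufw=BF000666 ufw=BF000667 f lf=update.log quit
```

The resulting argument list could then be handed to the operating system's process launcher (e.g. Python's subprocess module) in a nightly maintenance script.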

1.12 HRTC - Hardware Real-Time Controller

1.12.1 Introduction

The Hardware Real-Time Controller (HRTC) is built into the FPGA. The user can define a sequence of operating
steps to control how and when images are exposed and transmitted. Instead of using an external PLC, the
time-critical acquisition control is built directly into the camera. This is a unique and powerful feature.

1.12.1.1 Operating codes

The operating code for each step can be one of the following:

OpCode        Parameter                          Description
Nop           -                                  No operation
SetDigout     Operation array on dig out         Set a digital output
WaitDigin     State definition array on dig in   Wait for a digital input
WaitClocks    Time in us                         Wait a defined time
Jump          HRTC program address               Jump to any step of the program
TriggerSet    Frame ID                           Set internal trigger signal to sensor controller
TriggerReset  -                                  Reset internal trigger signal to sensor controller
ExposeSet     -                                  Set internal expose signal to sensor controller
ExposeReset   -                                  Reset internal expose signal to sensor controller
FrameNrReset  -                                  Reset internal sensor frame counter
256 HRTC steps are possible.
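As an illustration, a short HRTC program that waits for a digital input, generates a 100 us trigger pulse and then starts over could look like this. The step layout follows the opcode table above; the chosen input and the wait time are arbitrary example values:

```
0: WaitDigin    DigIn0 = On   // wait for the external signal
1: TriggerSet   1             // assert the internal trigger (frame ID 1)
2: WaitClocks   100           // hold the trigger for 100 us
3: TriggerReset               // release the trigger
4: Jump         0             // restart the sequence
```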

The section How to use the HRTC (p. 119) gives an impression of what can be done with the HRTC.


wxPropView - Introduction:
https://www.matrix-vision.com/tl_files/mv11/trainings/wxPropView/wxPropView_HRTC/index.html

1.12.2 How to use the HRTC

To use the HRTC you have to set the trigger mode and the trigger source. With object-oriented programming
languages the corresponding settings would look like this (C++ syntax):

CameraSettings->triggerMode = ctmOnRisingEdge;
CameraSettings->triggerSource = ctsRTCtrl;

When working with wxPropView (p. 69), these are the properties to modify in order to activate the evaluation of the
HRTC program:

Figure 1: wxPropView - Setting up the HRTC usage

Following trigger modes can be used with HRTC:

• OnLowLevel

• OnHighLevel

• OnFallingEdge

• OnRisingEdge

• OnHighExpose

Further details about the modes are described in the API documentation:


See also

TCameraTriggerMode and TCameraTriggerSource in

• "Enumerations (C developers)"

• "CameraSettingsBlueFOX (C++ developers)"

The Use Cases (p. 127) chapter contains the following HRTC samples:

• "Using single camera" :

– Achieve a defined image frequency (HRTC) (p. 169)


– Delay the external trigger signal (HRTC) (p. 170)
– Creating double acquisitions (HRTC) (p. 171)
– Take two images after one external trigger (HRTC) (p. 171)
– Take two images with different expose times after an external trigger (HRTC) (p. 172)

• "Using multiple cameras" :

– Delay the expose start of the following camera (HRTC) (p. 176)

1.13 Developing Applications Using The mvIMPACT Acquire SDK

The mvIMPACT Acquire SDK is a comprehensive software library that can be used to develop applications
using the devices described in this manual. A wide variety of programming languages is supported.

For C, C++, .NET, Python or Java developers separate API descriptions can be found on the MATRIX VISION
website:

• mvIMPACT Acquire C API

• mvIMPACT Acquire C++ API

• mvIMPACT Acquire Java API

• mvIMPACT Acquire .NET API

• mvIMPACT Acquire Python API

Compiled versions (CHM format) might already be installed on your system. These manuals contain chapters on

• how to link and build applications using mvIMPACT Acquire

• how the log output for "mvIMPACT Acquire" devices is configured and how it works in general

• how to create your own installation packages for Windows and Linux

• a detailed API documentation

• etc.


1.14 DirectShow Interface

Note

DirectShow can only be used in combination with the Microsoft Windows operating system.
Since Windows Vista, Movie Maker does not support capturing from a device registered for DirectShow
anymore.

This is the documentation of the MATRIX VISION DirectShow_acquire interface. A MATRIX VISION specific prop-
erty interface based on the IKsPropertySet has been added. All other features are related to standard DirectShow
programming.

• Supported Interfaces (p. 121)

• Logging (p. 121)

• Registering and renaming devices for DirectShow usage (p. 122)

1.14.1 Supported Interfaces

1.14.1.1 IAMCameraControl

1.14.1.2 IAMDroppedFrames

1.14.1.3 IAMStreamConfig

1.14.1.4 IAMVideoProcAmp

1.14.1.5 IKsPropertySet

The DirectShow_acquire supports the IKsPropertySet Interface. For further information please refer to the Microsoft
DirectX 9.0 Programmer's Reference.

Supported property set GUID's:

• AMPROPERTY_PIN_CATEGORY

• DIRECT_SHOW_ACQUIRE_PROPERTYSET

1.14.1.6 ISpecifyPropertyPages

1.14.2 Logging

The DirectShow_acquire logging procedure is identical to the logging of other MATRIX VISION products that use
mvIMPACT Acquire. The log output itself is based on XML.

If you want more information about the logging, please have a look at the Logging chapter of the respective
"mvIMPACT Acquire API" manual.


1.14.3 Registering and renaming devices for DirectShow usage

Note

Please be sure to register the MV device for DirectShow with the right version of mvDeviceConfigure (p. 111).
I.e. if you have installed the 32 bit version of the VLC Media Player, Virtual Dub, etc., you have to register the
MV device with the 32 bit version of mvDeviceConfigure (p. 111) ("C:\Program Files\MATRIX VISION\mvIMPACT
Acquire\bin")!

1.14.3.1 Registering devices

To register a device/devices for access under DirectShow please perform the following registration procedure:

1. Start mvDeviceConfigure.
If no device has been registered, the application will look more or less like this (depending on the installed
devices).

Figure 1: mvDeviceConfigure - start window

2. To register every installed device for DirectShow access click on the menu item "DirectShow" → "Register
all devices".


Figure 2: mvDeviceConfigure - register all devices

3. After a successful registration the column "registered for DirectShow" will display 'yes' for every device and
the devices will be registered with a default DirectShow friendly name.


Figure 3: mvDeviceConfigure - registered devices

1.14.3.2 Renaming devices

If you want to modify the friendly name of a device under DirectShow, please perform the following procedure:

1. If mvDeviceConfigure is not already running, please start it.

2. Now, select the device you want to rename, click the right mouse button and select "Set DirectShow friendly
name":

Figure 4: mvDeviceConfigure - set DirectShow friendly name

3. Then, a dialog will appear. Please enter the new name and confirm it with "OK".


Figure 5: mvDeviceConfigure - enter new name

4. Afterwards the column "DirectShow friendly name" will display the newly assigned friendly name.

Figure 6: mvDeviceConfigure - renamed device

Note

Please do not select the same friendly name for two different devices. In theory this is possible, however the
mvDeviceConfigure GUI will not allow this to avoid confusion.

1.14.3.3 Make silent registration

To perform a silent registration without dialogs, the Windows tool "regsvr32" can be used via the command line.

The following command line options are available and can be passed during the silent registration:

EXAMPLES:

Register ALL devices that are recognized by mvIMPACT Acquire (this will only register devices which have drivers
installed).

regsvr32 <path>\DirectShow_acquire.ax /s


1.15 Troubleshooting

• Accessing Log Files (p. 126)

1.15.1 Accessing Log Files

If you need support using our products, you can shorten response times by sending us your log files. Accessing the
log files is different in Windows and Linux:

1.15.1.1 Windows

Since

mvIMPACT Acquire 2.11.9

You can access the log files in Windows using wxPropView (p. 69). The way to do this is described in Accessing
log files (p. 97).

1.15.1.2 Linux

Since

mvIMPACT Acquire 2.24.0

You can access the log files in Linux via /opt/mvIMPACT_Acquire/data/logs .

You can also extract the directory using the following command

env | grep MVIMPACT_ACQUIRE_DATA_DIR

or change the directory directly via

cd $MVIMPACT_ACQUIRE_DATA_DIR/logs

For older versions:

Like on Windows, log files will be generated if the activation file for logging called mvDebugFlags.mvd is available
in the same folder as the application (on Windows, log files are generated automatically because the applications
are started from the folder containing this file). By default, on Linux the mvDebugFlags.mvd will be
installed in the installation's destination folder in the sub-folder "apps". For example, if the destination folder was
"/home/workspace", you can locate the mvDebugFlags.mvd in the following way:


user@linux-desktop:~$ // <- Starting the console window


user@linux-desktop:~$ cd workspace/apps/ // <- Change the directory
user@linux-desktop:/home/workspace/apps$ ls -l // <- List the directory
insgesamt 144
drwxr-xr-x 9 user user 4096 Mai 21 15:08 Callback
drwxr-xr-x 8 user user 4096 Mai 21 15:08 Callback_C
drwxr-xr-x 9 user user 4096 Mai 21 15:08 CaptureToUserMemory_C
drwxr-xr-x 3 user user 4096 Mai 21 15:03 Common
drwxr-xr-x 11 user user 4096 Mai 21 15:09 ContinuousCapture
drwxr-xr-x 9 user user 4096 Mai 21 15:09 ContinuousCaptureAllDevices
drwxr-xr-x 6 user user 4096 Mai 21 15:09 ContinuousCaptureFLTK
drwxr-xr-x 9 user user 4096 Mai 21 15:09 ContinuousCapture_C
drwxr-xr-x 11 user user 4096 Mai 21 15:09 DigitalIOs
drwxr-xr-x 9 user user 4096 Mai 21 15:09 FirmwareUpgrade
drwxr-xr-x 11 user user 4096 Mai 21 15:09 GenericInterfaceLayout
drwxr-xr-x 11 user user 4096 Mai 21 15:09 GenICamInterfaceLayout
-rw-r--r-- 1 user user 854 Mai 21 15:03 Makefile
-rw-r--r-- 1 user user 7365 Mai 21 15:03 Makefile.samp.inc
-rw-r--r-- 1 user user 20713 Mai 21 15:03 mvDebugFlags.mvd // <- Log activation flag
drwxr-xr-x 7 user user 4096 Mai 21 15:09 mvDeviceConfigure
drwxr-xr-x 6 user user 4096 Mai 21 15:10 mvIPConfigure
drwxr-xr-x 6 user user 4096 Mai 21 15:11 mvPropView
drwxr-xr-x 9 user user 4096 Mai 21 15:11 SingleCapture
drwxr-xr-x 9 user user 4096 Mai 21 15:11 SingleCaptureStorage

For log file generation you have to execute your application from the folder where mvDebugFlags.mvd is located.
E.g. if you want to start wxPropView:

user@linux-desktop:/home/workspace/apps$ ./mvPropView/x86/wxPropView // <- Start the executable from this folder

Another possibility would be to copy the mvDebugFlags.mvd file to the folder of the executable:

user@linux-desktop:/home/workspace/apps$ cp mvDebugFlags.mvd ./mvPropView/x86/ // <- Copy the log activation file


user@linux-desktop:/home/workspace/apps$ cd ./mvPropView/x86/ // <- Change the directory
user@linux-desktop:/home/workspace/apps/mvPropView/x86/$ ./wxPropView // <- Start the executable

Afterwards, several log files are generated which are listed in files.mvloglist. The log files have the file
extension .mvlog. Please send these files to our support team.

1.16 Use Cases

• Introducing acquisition / recording possibilities (p. 128)

• Improving the acquisition / image quality (p. 131)

• Working with triggers (p. 150)

• Working with HDR (High Dynamic Range Control) (p. 151)

• Working with LUTs (p. 157)

• Saving data on the device (p. 160)

• Working with several cameras simultaneously (p. 162)

• Working with the Hardware Real-Time Controller (HRTC) (p. 168)


1.16.1 Introducing acquisition / recording possibilities

There are several use cases concerning the acquisition / recording possibilities of the camera:

• Generating very long exposure times (p. 128)

• Using VLC Media Player (p. 129)

1.16.1.1 Generating very long exposure times

Since

mvIMPACT Acquire 1.10.65

Very long exposure times are possible with mvBlueFOX. For this purpose a special trigger/IO mode is used.

You can do this as follows (pseudo code):

TriggerMode = OnHighExpose
TriggerSource = DigOUT0 - DigOUT3

Attention

In the standard mvBlueFOX, DigOUT2 and DigOUT3 are internal signals; however, they can be used for this
purpose.

Note

Make sure that you adjust the ImageRequestTimeout_ms either to 0 (infinite, which is the default value) or
to a reasonable value larger than the actual exposure time, in order not to end up with timeouts resulting
from the buffer timeout being smaller than the actual time needed for exposing, transferring and capturing the
image:

ImageRequestTimeout_ms = 0 # or reasonable value

Now request a single image:

imageRequestSingle

Then the digital output is set and reset. Between these two instructions you can insert source code to achieve the
desired exposure time.

# The DigOUT which was chosen in TriggerSource
DigitalOutput* pOut = getOutput(digital output);
pOut->set();

# Wait as long as the exposure should continue.

pOut->reset();

Afterwards you will get the image.

If you change the state of the corresponding output twice, this will also work with wxPropView (p. 69).


1.16.1.2 Using VLC Media Player

With the DirectShow Interface (p. 121), MATRIX VISION devices become an (acquisition) video device for the VLC
Media Player.

Figure 1: VLC Media Player with a connected device via DirectShow

1.16.1.2.1 System requirements It is necessary that the following drivers and programs are installed on the host
device (laptop or PC):

• Windows 7 or higher, 32-bit or 64-bit

• up-to-date VLC Media Player, 32-bit or 64-bit (here: version 2.0.6)

• up-to-date MATRIX VISION driver, 32-bit or 64-bit (here: version 2.5.6)

Attention

Using Windows 10 or Windows 7: VLC Media Player versions up to 2.2.0 have been tested successfully with
older versions of mvIMPACT Acquire. Since version 3.0.0 of VLC, at least mvIMPACT Acquire 2.34.0 is
needed to work with devices through the DirectShow interface!


1.16.1.2.2 Installing VLC Media Player

1. Download a suitable version of the VLC Media Player from the VLC Media Player website mentioned below.

2. Run the setup.

3. Follow the installation process and use the default settings.

A restart of the system is not required.

See also

http://www.videolan.org/

1.16.1.2.3 Setting up MV device for DirectShow

Note

Please be sure to register the MV device for DirectShow with the right version of mvDeviceConfigure (p. 111) .
I.e. if you have installed the 32 bit version of the VLC Media Player, you have to register the MV device with the
32-bit version of mvDeviceConfigure (p. 111) ("C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin")
!

1. Connect the MV device to the host device directly or via GigE switch using an Ethernet cable.

2. Power the camera using a power supply at the power connector.

3. Wait until the status LED turns blue.

4. Open the tool mvDeviceConfigure (p. 111) ,

5. set a friendly name (p. 124) ,

6. and register the MV device for DirectShow (p. 122) .

Note

In some cases it could be necessary to repeat step 5.

1.16.1.2.4 Working with VLC Media Player

1. Start VLC Media Player.

2. Click on "Media -> Open Capture Device..." .


Figure 2: Open Capture Device...

3. Select the tab "Device Selection" .

4. In the section "Video device name" , select the friendly name of the MV device:

Figure 3: Video device name

5. Finally, click on "Play" .


After a short delay you will see the live image of the camera.

1.16.2 Improving the acquisition / image quality

There are several use cases concerning the acquisition / image quality of the camera:

• Correcting image errors of a sensor (p. 131)

• Optimizing the color fidelity of the camera (p. 141)

1.16.2.1 Correcting image errors of a sensor

Due to random process deviations, technical limitations of the sensors, etc., there are different reasons why image
sensors have image errors. MATRIX VISION provides several procedures to correct these errors; by default these
are host-based calculations.

Provided image corrections procedures are


1. Defective Pixels Correction (p. 133),

2. Dark Current Correction (p. 135), and

3. Flat-Field Correction (p. 138).

Note

If you execute all correction procedures, you have to keep this order. All gray value settings of the corrections
below assume an 8-bit image.

Figure 1: Host-based image corrections

The path "Setting -> Base -> ImageProcessing -> ..." indicates that these corrections are host-based corrections.

Before starting consider the following hints:

To correct the complete image, you have to make sure no user-defined AOI has been selected: right-click
"Restore Default" on the device's AOI parameters W and H in "Setting -> Base -> Camera".

• You have several options to save the correction data. The chapter Storing and restoring settings (p. 81)
describes the different ways.

See also

There is a white paper about image error corrections with extended information available on our website:
http://www.matrix-vision.com/tl_files/mv11/Glossary/art_image_errors_sensors_en.pdf


1.16.2.1.1 Defective Pixels Correction Due to random process deviations, not all pixels in an image sensor
array will react in the same way to a given light condition. These variations are known as blemishes or defective
pixels.
There are three types of defective pixels:

1. leaky pixel (in the dark)
which indicates pixels that produce a higher read out code than the average

2. hot pixel (in standard light conditions)
which indicates pixels that produce a higher non-proportional read out code when temperatures are rising

3. cold pixel (in standard light conditions)
which indicates pixels that produce a lower read out code than average when the sensor is exposed (e.g.
caused by dust particles on the sensor)

Note

Please use either a Mono or raw Bayer image format when detecting defective pixel data in the image.

To correct the defective pixels various substitution methods exist:

1. "Replace 3x1 average"
which substitutes the detected defective pixels with the average value from the left and right neighboring pixels
(3x1)

2. "Replace 3x3 median"
which substitutes the detected defective pixels with the median value calculated from the nearest neighbors
in a 3 by 3 region

3. "Replace 3x3 Filtered Data Averaged"
which substitutes and treats the detected defective pixels as if they had been processed with a 3 by 3 filter
algorithm before reaching this filter.
Only recommended for devices which do not offer a defective pixel compensation; packed RGB or packed
YUV444 data is needed. See the enumeration value dpfmReplaceDefectivePixelAfter3x3Filter
in the corresponding API manual for additional details about this algorithm and when and why it is needed.
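As a rough illustration of the first substitution method, the following Python sketch models the "3x1 average" replacement on the host side (this is a simplified model, not the actual filter code; an image is represented as a plain list of pixel rows and the defect coordinates are assumed to be known):

```python
def replace_3x1_average(img, defects):
    """Replace each defective pixel with the average of its left and
    right neighbors; edge pixels fall back to the single available
    neighbor."""
    w = len(img[0])
    out = [row[:] for row in img]  # copy, leave the input untouched
    for (y, x) in defects:
        neighbors = []
        if x > 0:
            neighbors.append(img[y][x - 1])
        if x < w - 1:
            neighbors.append(img[y][x + 1])
        out[y][x] = sum(neighbors) // len(neighbors)
    return out
```

For example, a defective pixel with the value 99 between the neighbors 10 and 30 would be replaced by their average, 20.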

1.16.2.1.1.1 Correcting leaky pixels To correct leaky pixels the following steps are necessary:

1. Set gain ("Setting -> Base -> Camera -> GenICam -> Analog Control -> Gain = 0 dB") and exposure time
("Setting -> Base -> Camera -> GenICam -> Acquisition Control -> ExposureTime = 360 msec") according to
the given operating conditions.
The total number of defective pixels found in the array depends on the gain and the exposure time.

2. Black out the lens completely

3. Set the (Filter-) "Mode = Calibrate leaky pixel"

4. Snap an image (e.g. by pressing Acquire in wxPropView with "Acquisition Mode = SingleFrame")
5. To activate the correction, choose one of the substitution methods mentioned above

6. Save the settings including the correction data via "Action -> Capture Settings -> Save
Active Device Settings"
(Settings can be saved in the Windows registry or in a file)

Note

After having re-started the camera you have to reload the capture settings!

The filter checks:

Pixel > LeakyPixelDeviation_ADCLimit // (default value: 50)

All pixels above this value are considered as leaky pixel.


1.16.2.1.1.2 Correcting hot pixels

Note

With "Mode = Calibrate Hot And Cold Pixel" you can execute both corrections at the same
time.

To correct hot pixels the following steps are necessary:

1. You will need a uniform sensor illumination of approx. 50 - 70 % saturation (which means an average gray value
between 128 and 180)

2. Set the (Filter-) "Mode = Calibrate Hot Pixel"

3. Snap an image (e.g. by pressing Acquire in wxPropView with "Acquisition Mode = SingleFrame")
4. To activate the correction, choose one of the substitution methods mentioned above

5. Save the settings including the correction data via "Action -> Capture Settings -> Save
Active Device Settings"
(Settings can be saved in the Windows registry or in a file)

Note

After having re-started the camera you have to reload the capture settings!

The filter checks:

Pixel > T[hot] // (default value: 15 %)

// T[hot] = deviation of the average gray value

1.16.2.1.1.3 Correcting cold pixels

Note

With "Mode = Calibrate Hot And Cold Pixel" you can execute both corrections at the same
time.

To correct cold pixels the following steps are necessary:

1. You will need a uniform sensor illumination of approx. 50 - 70 % saturation (which means an average gray value
between 128 and 180)

2. Set the (Filter-) "Mode = Calibrate cold pixel" (Figure 2)

3. Snap an image (e.g. by pressing Acquire in wxPropView with "Acquisition Mode = SingleFrame")
4. To activate the correction, choose one of the substitution methods mentioned above

5. Save the settings including the correction data via "Action -> Capture Settings -> Save
Active Device Settings"
(Settings can be saved in the Windows registry or in a file)


Note

After having re-started the camera you have to reload the capture settings!

The filter checks:

Pixel < T[cold] // (default value: 15 %)

// T[cold] = deviation of the average gray value

All pixels below this value have a dynamic below normal behavior.
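The hot and cold pixel checks above can be illustrated with a small Python sketch (a simplified host-side model, not the actual filter implementation; the 15 % default corresponds to the T[hot]/T[cold] deviation from the average gray value mentioned above):

```python
def find_hot_and_cold_pixels(img, deviation=0.15):
    """Flag pixels whose value deviates more than `deviation` (default
    15 %) from the mean gray value of the image."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    hot, cold = [], []
    for y, row in enumerate(img):
        for x, p in enumerate(row):
            if p > mean * (1 + deviation):
                hot.append((y, x))       # reads too bright
            elif p < mean * (1 - deviation):
                cold.append((y, x))      # reads too dark
    return hot, cold
```

In a uniformly lit calibration frame, only pixels deviating noticeably from the mean would be flagged for later substitution.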

Figure 2: Image corrections: DefectivePixelsFilter

Note

Repeating the defective pixel corrections will accumulate the correction data, which leads to a higher value
in "DefectivePixelsFound". If you want to reset the correction data or repeat the correction process,
you have to set the filter mode to "Reset Calibration Data". In order to limit the amount of defective
pixels detected, the "DefectivePixelsMaxDetectionCount" property can be used.

1.16.2.1.2 Dark Current Correction Dark current is a characteristic of image sensors: they deliver a signal even
in total darkness, caused e.g. by heat that spontaneously creates charge carriers. This signal overlays the image
information. Dark current depends on two circumstances:

1. Exposure time
The longer the exposure, the greater the dark current part. I.e. using long exposure times, the dark current
itself could lead to an overexposed sensor chip.

2. Temperature
By cooling the sensor chips the dark current production can be reduced significantly (the dark current is
roughly cut in half for approx. every 6 °C of cooling).
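The temperature rule of thumb can be expressed numerically. A small Python sketch, assuming the approximate 6 °C halving interval stated above:

```python
def dark_current_factor(delta_t_celsius, halving_interval=6.0):
    """Relative dark current after cooling the sensor by delta_t
    degrees C, using the rule of thumb that dark current halves
    roughly every 6 degrees C of cooling."""
    return 0.5 ** (delta_t_celsius / halving_interval)

print(dark_current_factor(12.0))  # cooling by 12 degC -> 0.25
```

So cooling a sensor by 12 °C would, by this approximation, cut its dark current to a quarter of the original value.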


1.16.2.1.2.1 Correcting Dark Current The dark current correction is a pixel-wise correction where the dark
current correction image removes the dark current from the original image. To get a better result, it is necessary to
snap the original and the dark current images with the same exposure time and at the same temperature.

Note

Dark current snaps generally show noise.

To correct the dark current the following steps are necessary:

1. Black out the lens completely

2. Set "OffsetAutoCalibration = Off" (Figure 3)

3. If applicable, change Offset_pc until you see an amplitude in the histogram (Figure 4)

4. Set exposure time according to the application

5. Set the (Filter-) "Mode = Calibrate"

6. Snap an image ("Acquire" with "Acquisition Mode = SingleFrame")

7. Finally, you have to activate the correction: Set the (Filter-) "Mode = On"

8. Save the settings including the correction data via "Action -> Capture Settings -> Save
Active Device Settings"
(Settings can be saved in the Windows registry or in a file)

The filter snaps a number of images and averages the dark current images to one correction image.
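The averaging and the subsequent pixel-wise subtraction can be sketched in Python as follows (a simplified model of the correction, not the actual filter code; images are plain lists of pixel rows):

```python
def average_frames(frames):
    """Average several dark frames pixel-wise into one correction
    image, which also suppresses the noise of the individual snaps."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

def dark_current_correct(image, dark):
    """Subtract the averaged dark frame from the image, clamping at 0."""
    return [[max(0, round(p - d)) for p, d in zip(ir, dr)]
            for ir, dr in zip(image, dark)]
```

Averaging is important because, as noted above, individual dark current snaps generally show noise.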


Note

After having re-started the camera you have to reload the capture settings!

Figure 3: Image corrections (screen-shot mvBlueFOX): OffsetAutoCalibration = Off


Figure 4: Image corrections: Offset histogram

Figure 5: Image corrections: Dark current

1.16.2.1.3 Flat-Field Correction Each pixel of a sensor chip is a single detector with its own properties. This
particularly pertains to the sensitivity and, where applicable, the spectral sensitivity. To solve this problem (including
lens and illumination variations), a plain and equally "colored" calibration plate (e.g. white or gray) is snapped as
a flat-field, which will be used to correct the original image. Between the flat-field correction and the future application
you must not change the optics. To reduce errors while doing the flat-field correction, a saturation between 50 %
and 75 % of the flat-field in the histogram is convenient.


Note

Flat-field correction can also be used as a destructive watermark and works for all f-stops.

To make a flat-field correction, the following steps are necessary:

1. You need a plain and equally "colored" calibration plate (e.g. white or gray)

2. No single pixel may be saturated; we therefore recommend setting the maximum gray level in the brightest area to at most 75 % of the gray scale (i.e., gray values below 190 when using 8-bit values)

3. Choose a BayerXY in "Setting -> Base -> Camera -> GenICam -> Image Format Control -> PixelFormat".

4. Set the (Filter-) "Mode = Calibrate" (Figure 6)

5. Start a Live snap ("Acquire" with "Acquisition Mode = Continuous")

6. Finally, you have to activate the correction: Set the (Filter-) "Mode = On"

7. Save the settings including the correction data via "Action -> Capture Settings -> Save
Active Device Settings"
(Settings can be saved in the Windows registry or in a file)

Note

After restarting the camera, you have to reload the saved capture settings.

The filter snaps a number of images (according to the value of the CalibrationImageCount, e.g. 5) and
averages the flat-field images to one correction image.
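The calibration and correction steps can be modeled like this (a simplified sketch of a typical flat-field algorithm, not necessarily the filter's exact math; images are flat lists of pixel values):

```python
def calibrate_flat_field(flat_frames):
    """Average the calibration snaps and derive a per-pixel gain:
    pixels darker than the frame mean get a gain > 1, brighter ones < 1."""
    n = len(flat_frames)
    avg = [sum(px) / n for px in zip(*flat_frames)]
    mean = sum(avg) / len(avg)
    # assumes no pixel is 0 (the lens must not be blacked out here)
    return [mean / p for p in avg]

def apply_flat_field(image, gains):
    """Multiply each pixel by its correction factor ('Mode = On')."""
    return [max(0, min(255, round(p * g))) for p, g in zip(image, gains)]
```

After calibration, a snap of the uniform plate itself comes out flat, which is exactly the goal of the correction.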


Figure 6: Image corrections: Host-based flat field correction

1.16.2.1.3.1 Host-based Flat-Field Correction With Calibration AOI In some cases it might be necessary to use just a specific area within the camera's field of view to calculate the correction values. In this case, only this AOI will be used to calculate the correction factor.

You can set the "host-based flat field correction" in the following way:

1. All necessary settings can be found under "ImageProcessing" -> "FlatfieldFilter".

2. Stop "Continuous" acquisition mode.

3. Set "CalibrationImageCount" to, for example, 5.

4. Set "Mode" to "Calibrate".

5. Set "CalibrationAoiMode" to "UseAoi".

6. Set the properties ("X", "Y", "W", "H") that appear under "CalibrationAOI" to the desired AOI.

7. Start "Continuous" acquisition mode.

8. Finally, you have to activate the correction: Set the "Mode" to "On".

Figure 7: Image corrections: Host-based flat field correction with calibration AOI

1.16.2.1.3.2 Host-based Flat-Field Correction With Correction AOI In some cases it might be necessary to correct just a specific area in the camera's field of view. In this case, the correction values are only applied to a specific area; for the rest of the image, the correction factor is simply 1.0.

You can set the "host-based flat field correction" in the following way:

1. All necessary settings can be found under "ImageProcessing" -> "FlatfieldFilter".

2. Stop "Continuous" acquisition mode.

3. Set "CalibrationImageCount" to, for example, 5.

4. Set "Mode" to "Calibrate".


5. Start "Continuous" acquisition mode.

6. Now, you have to activate the correction: Set the "Mode" to "On".

7. Set "CorrectionAOIMode" to "UseAoi".

8. Finally, use the properties ("X", "Y", "W", "H") that appear under "CorrectionAOI" to configure the desired AOI.
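The effect of the correction AOI (gain applied inside the AOI, factor 1.0 everywhere else) can be sketched as follows (an illustrative model; function and parameter names are not from the SDK):

```python
def apply_correction_aoi(image, gains, width, aoi):
    """Apply flat-field gains only inside the correction AOI.
    'image' and 'gains' are flat row-major lists, aoi = (x, y, w, h).
    Outside the AOI the correction factor is 1.0, i.e. pixels pass unchanged."""
    x0, y0, w, h = aoi
    out = list(image)
    for i, p in enumerate(image):
        x, y = i % width, i // width
        if x0 <= x < x0 + w and y0 <= y < y0 + h:
            out[i] = max(0, min(255, round(p * gains[i])))
    return out
```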

Figure 8: Image corrections: Host-based flat field correction with correction AOI

1.16.2.2 Optimizing the color fidelity of the camera

Purpose of this chapter is to optimize the color image of a camera so that it looks as natural as possible on different displays and to human vision.

This implies some linear and nonlinear operations (e.g. display color space or gamma viewing LUT) which are normally not necessary or recommended for machine vision algorithms. A standard monitor offers, for example, several display modes like sRGB, "Adobe RGB", etc., which reproduce the very same camera color differently.

It should also be noted that users can choose either

• camera based settings and adjustments or

• host based settings and adjustments or

• a combination of both.

Camera based settings are advantageous because they achieve the highest calculating precision independent of the transmission bit depth, the lowest latency (all calculations are performed on the fly in the FPGA), and low CPU load (the host is not burdened with these tasks). These camera based settings are

• gamma correction (p. 144)

• negative gain / gain (p. 144)


• look-up table (LUT) (p. 144)

• white balance (p. 146)

• offset (p. 147)

• saturation and color correction (p. 148)

Host based settings save transmission bandwidth at the expense of accuracy or latency and CPU load. In particular, performing gain, offset, and white balance in the camera while outputting raw data to the host can be recommended.

Of course host based settings can be used with all families of cameras (e.g. also mvBlueFOX).

Host based settings are:

• look-up table (LUTOperations)

• color correction (ColorTwist)

To show the different color behaviors, we take a color chart as a starting point:

Figure 1: Color chart as a starting point

If we take a SingleFrame image without any color optimizations, the image may look like this:

Figure 2: SingleFrame snap without color optimization


Figure 3: Corresponding histogram of the horizontal white to black profile

As you can see,

• saturation is missing,

• white is more of a light gray,

• black is more of a dark gray,

• etc.

Note

You have to keep in mind that there are two types of images: the one generated in the camera and the other
one displayed on the computer monitor. Up-to-date monitors offer different display modes with different color
spaces (e.g. sRGB). According to the chosen color space, the display of the colors is different.

The following figure shows the way to a perfectly colored image

Figure 4: The way to a perfect colored image

including these process steps:

1. Do a Gamma correction (Luminance) (p. 144),

2. make a White balance (p. 146) and

3. Improve the Contrast (p. 147).

4. Improve Saturation (p. 148), and use a "color correction matrix" for both

(a) the sensor and / or


(b) the monitor.

The following sections will describe the single steps in detail.


1.16.2.2.1 Step 1: Gamma correction (Luminance) First of all, a gamma correction (luminance) can be performed to adapt the image to the way humans perceive light and color.

For this, you can change either

• the exposure time,

• the aperture or

• the gain.

You can change the gain via wxPropView (p. 69) in the following way:

1. Click on "Setting -> Base -> Camera". There you can find

(a) "AutoGainControl" and


(b) "AutoExposeControl".

Figure 5: wxPropView: Setting -> Base -> Camera

You can turn them "On" or "Off". With the auto controls turned on, you can set the limits of the automatic control; with them turned off, you can set the exact value.


After gamma correction, the image will look like this:

Figure 6: After gamma correction

Figure 7: Corresponding histogram after gamma correction


Note

As mentioned above, you can do a gamma correction via ("Setting -> Base -> ImageProcessing -> LUTOperations"):

Figure 8: LUTOperations dialog

Just set "LUTEnable" to "On" and adapt the individual LUTs (LUT-0, LUT-1, etc.).
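The gamma LUT mentioned in the note can be sketched as follows (a minimal illustration of the usual gamma formula, not the driver's actual implementation):

```python
def gamma_lut(gamma, size=256):
    """Build an 8-bit gamma LUT: out = 255 * (in / 255) ** (1 / gamma).
    A gamma > 1 lifts the darker image areas and flattens the brighter ones."""
    return [round(255 * (i / (size - 1)) ** (1.0 / gamma))
            for i in range(size)]
```

Applying such a table maps each input gray value to its gamma-corrected output value in a single lookup, which is why the operation is cheap enough to run in the camera's FPGA.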

1.16.2.2.2 Step 2: White Balance As you can see in the histogram, the colors red and blue are below green.
Using green as a reference, we can optimize the white balance via "Setting -> Base -> ImageProcessing" ("WhiteBalanceCalibration"):

Please have a look at White balance of a camera device (color version) (p. 98) for more information about automatic white balance.

To adapt the individual colors you can use "WhiteBalanceSettings-1".
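The principle of using green as the reference channel can be sketched as follows (an illustrative calculation, not the WhiteBalanceCalibration code itself):

```python
def white_balance_gains(mean_r, mean_g, mean_b):
    """Green is the reference: compute the gains that pull the red and
    blue channel means up (or down) to match the green channel mean."""
    return mean_g / mean_r, 1.0, mean_g / mean_b

def apply_white_balance(rgb, gains):
    """Apply the per-channel gains to one RGB pixel, clipped to 8 bit."""
    return tuple(max(0, min(255, round(c * g))) for c, g in zip(rgb, gains))
```

A pixel that was gray in the scene but came out with red and blue below green is mapped back to equal channel values.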

After optimizing white balance, the image will look like this:


Figure 9: After white balance

Figure 10: Corresponding histogram after white balance

1.16.2.2.3 Step 3: Contrast Still, black is more of a dark gray. To optimize the contrast you can use "Setting -> Base -> ImageProcessing -> LUTControl" as shown in Figure 8.

The image will look like this now:

Figure 11: After adapting contrast


Figure 12: Corresponding histogram after adapting contrast

1.16.2.2.4 Step 4: Saturation and Color Correction Matrix (CCM) Still saturation is missing. To change this,
the "Color Transformation Control" can be used ("Setting -> Base -> ImageProcessing -> ColorTwist"):

1. Click on "Color Twist Enable" and

2. click on "Wizard" to start the saturation via "Color Transformation Control" wizard tool (since firmware version
1.4.57).

Figure 13: Selected Color Twist Enable and click on wizard will start wizard tool

3. Now, you can adjust the saturation, e.g. to "1.1".


Figure 14: Saturation via Color Transformation Control dialog

4. Afterwards, click on "Enable".

5. Since driver version 2.2.2, it is possible to set the special color correction matrices at

(a) the input (sensor),

(b) the output side (monitor) and

(c) the saturation itself using this wizard.

6. Select the specific input and output matrix and

7. click on "Enable".

8. As you can see, the correction is done by the host ("Host Color Correction Controls").
Note

It is not possible to save the settings of the "Host Color Correction Controls" in the mvBlueFOX. Unlike
in the case of Figure 14, the buttons to write the "Device Color Correction Controls" to the mvBlueFOX
are not active.

9. Finally, click on "Apply".
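A common way to express a saturation adjustment as a 3x3 color twist matrix is the following luminance-preserving construction (a standard textbook form using Rec. 601 luma weights; the matrices the wizard actually uses may differ):

```python
def saturation_matrix(s):
    """3x3 color twist for saturation s: s = 1.0 is the identity,
    s > 1.0 increases saturation while gray pixels stay unchanged."""
    wr, wg, wb = 0.299, 0.587, 0.114      # Rec. 601 luma weights
    return [
        [wr + (1 - wr) * s, wg * (1 - s),      wb * (1 - s)],
        [wr * (1 - s),      wg + (1 - wg) * s, wb * (1 - s)],
        [wr * (1 - s),      wg * (1 - s),      wb + (1 - wb) * s],
    ]

def apply_matrix(rgb, m):
    """Multiply one RGB pixel by the color twist matrix, clipped to 8 bit."""
    return tuple(max(0, min(255, round(sum(c * k for c, k in zip(rgb, row)))))
                 for row in m)
```

Each row of the matrix sums to 1, so neutral (gray) pixels are preserved while the color channels are pushed apart for s > 1.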

After the saturation, the image will look like this:

Figure 15: After adapting saturation


Figure 16: Corresponding histogram after adapting saturation

1.16.3 Working with triggers

There are several use cases concerning trigger:

• Using external trigger with CMOS sensors (p. 150)

1.16.3.1 Using external trigger with CMOS sensors

1.16.3.1.1 Scenario The CMOS sensors used in mvBlueFOX cameras support the following trigger modes:

• Continuous

• OnDemand (software trigger)

• OnLowLevel

• OnHighLevel

• OnHighExpose (only with mvBlueFOX-[Model]205 (5.0 Mpix [2592 x 1944]) (p. 214))

If an external trigger signal occurs (e.g. high), the sensor will start to expose and read out one image. Now, if the trigger signal is still high, the sensor will start to expose and read out the next image (see Figure 1, upper part). This will lead to an acquisition just like using continuous trigger.

Figure 1: External Trigger with CMOS sensors

• t_trig = time from trigger (internal or external) to integration start.

If you want to avoid this effect, you have to adjust the trigger signal. As you can see in Figure 1 (lower part), the possible period has to be smaller than the time an image will need (t_expose + t_readout).
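The timing condition above can be expressed as a simple check (an illustrative helper, not part of the SDK; all times in microseconds):

```python
def pulse_gives_single_image(high_time_us, t_expose_us, t_readout_us):
    """True if the trigger's high time is shorter than one frame time
    (t_expose + t_readout), so exactly one image is exposed per pulse.
    If the pulse is still high when the frame completes, the sensor
    starts the next exposure immediately, as in Figure 1 (upper part)."""
    return high_time_us < t_expose_us + t_readout_us
```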


1.16.3.1.2 Example

1.16.3.1.2.1 External synchronized image acquisition (high active)

Note

Using mvBlueFOX-MLC or mvBlueFOX-IGC, you have to select DigIn0 as the trigger source, because the camera has only one opto-coupled input. Only the TTL model of the mvBlueFOX-MLC has two I/Os.

• Trigger modes

– OnHighLevel:
The high level of the trigger has to be shorter than the frame time. In this case, the sensor will take exactly one image. If the high time is longer, images will be taken at the sensor's maximum frequency for as long as the level stays high. The first image will start with the low-high edge of the signal. The integration time from the exposure register will be used.

– OnLowLevel:
The first image will start with the high-low edge of the signal.

– OnHighExpose:
This mode is like OnHighLevel, however, the exposure time equals the high time of the signal.

See also

Block diagrams with example circuits of the opto-isolated digital inputs and outputs can be found in Dimen-
sions and connectors (p. 53).

1.16.4 Working with HDR (High Dynamic Range Control)

There are several use cases concerning High Dynamic Range Control:

• Adjusting sensor of camera models -x00w (p. 151)

• Adjusting sensor of camera models -x02d (-1012d) (p. 154)

1.16.4.1 Adjusting sensor of camera models -x00w

1.16.4.1.1 Introduction The HDR (High Dynamic Range) mode of the -x00w sensor increases the usable contrast range. This is achieved by dividing the integration time into two or three phases. The exposure time proportion of the phases can be set independently. Furthermore, it can be set how much signal each phase may accumulate.


1.16.4.1.2 Functionality

Figure 1: Diagram of the -x00w sensor's HDR mode

1.16.4.1.2.1 Description

• "Phase 0"

– During T1 all pixels are integrated until they reach the defined signal level of Knee Point 1.
– If a pixel reaches that level, its integration is stopped.
– During T1 no pixel can reach a level higher than P1.

• "Phase 1"

– During T2 all pixels are integrated until they reach the defined signal level of Knee Point 2.
– T2 is always shorter than T1, so its percentage of the total exposure time is lower.
– Thus, the signal increase during T2 is lower than during T1.
– The max. signal level of Knee Point 2 is higher than that of Knee Point 1.

• "Phase 2"

– During T2 all pixels are integrated until the possible saturation.


– T3 is always smaller than T2, so that the percentage compared to the total exposure time is again lower
here.
– Thus, the signal increase during T3 is lower as during T2.

For this reason, darker pixels can be integrated during the complete integration time and the sensor reaches its full sensitivity. Pixels that are limited at the knee points lose part of their integration time; the brighter they are, the more they lose.


Figure 2: Integration time of different bright pixels

In the diagram you can see the signal curves of three pixels of different brightness. The slope depends on the light intensity and is therefore constant per pixel here (provided that the light intensity is constant over time). Since the very bright pixel is limited early at the signal levels S1 and S2, its total integration time is low compared to the dark pixel. In practice, the parts of the integration time differ greatly: T1, for example, is 95 % of T_total, T2 only 4 % and T3 only 1 %. Thus, a strong attenuation of the very bright pixels can be achieved. However, if the signal thresholds are divided into three equal parts, i.e. S2 = 2 x S1 and S3 = 3 x S1, a pixel needs a hundredfold brightness for the step from S2 to S3 compared to the step from 0 to S1.
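The phase-by-phase clamping described above can be simulated as follows (a simplified model; the phase fractions and knee levels used here and in the test are illustrative, not the sensor's real parameters):

```python
def hdr_signal(intensity, phases, knees, s_max=255):
    """Integrate one pixel through the HDR phases.
    phases: fractions of the total integration time (summing to 1.0)
    knees:  clamp levels per phase; the last entry is the saturation level.
    A dark pixel never hits a knee and keeps its full integration time;
    bright pixels are clamped and lose part of theirs."""
    signal = 0.0
    for t, limit in zip(phases, knees):
        signal = min(limit, signal + intensity * t)
    return min(signal, s_max)
```

This reproduces the compressed response: equal steps in output signal near saturation correspond to much larger steps in scene brightness.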

1.16.4.1.3 Using HDR with mvBlueFOX-x00w Figure 3 shows the usage of the HDR mode. Here, an image sequence was created with integration times between 10 us and 100 ms. You can see the three slopes of the HDR mode. The "waves" result from rounding during the three exposure phases; they can only be partly adjusted within one line period of the sensor.

Figure 3: wxPropView HDR screenshot


1.16.4.1.3.1 Notes about the usage of the HDR mode with mvBlueFOX-x00w

• In the HDR mode, the basic amplification is reduced by a factor of approx. 0.7 to utilize a large dynamic range of the sensor.

• If the manual gain is raised, this effect is reversed.

• Exposure times that are too short make no sense. A sensible lower limit is reached when the exposure time of the third phase reaches its possible minimum (one line period).

1.16.4.1.3.2 Possible settings using mvBlueFOX-x00w Possible settings of the mvBlueFOX-x00w in HDR
mode are:

"HDREnable":

• "Off": Standard mode

• "On": HDR mode on, reduced amplification:

"HDRMode":

• "Fixed": Fixed setting with 2 knee points; modulation Phase 0: 33 % / Phase 1: 66 % / Phase 2: 100 %

• "Fixed0": Phase 1 exposure 12.5% , Phase 2 31.25% of total exposure

• "Fixed1": Phase 1 exposure 6.25% , Phase 2 1.56% of total exposure

• "Fixed2": Phase 1 exposure 3.12% , Phase 2 0.78% of total exposure

• "Fixed3": Phase 1 exposure 1.56% , Phase 2 0.39% of total exposure

• "Fixed4": Phase 1 exposure 0.78% , Phase 2 0.195% of total exposure

• "Fixed5": Phase 1 exposure 0.39% , Phase 2 0.049% of total exposure

"User": Variable setting of the Knee Point (1..2), threshold and exposure time proportion

• "HDRKneePointCount": Number of Knee Points (1..2)

• "HDRKneePoints"

– "HDRKneePoint-0"

* "HDRExposure_ppm": Proportion of Phase 0 compared to total exposure in parts per million (ppm)
* "HDRControlVoltage_mV": Control voltage for the exposure threshold of the first Knee Point (3030 mV is equivalent to approx. 33 %)

– "HDRKneePoint-1"

* "HDRExposure_ppm": Proportion of Phase 1 compared to total exposure in parts per million (ppm)
* "HDRControlVoltage_mV": Control voltage for the exposure threshold of the second Knee Point (2630 mV is equivalent to approx. 66 %)

1.16.4.2 Adjusting sensor of camera models -x02d (-1012d)

1.16.4.2.1 Introduction The HDR (High Dynamic Range) mode of the Aptina sensor increases the usable contrast range. This is achieved by dividing the integration time into three phases. The exposure time proportion of the three phases can be set independently.


1.16.4.2.2 Functionality To exceed the typical dynamic range, images are captured at 3 different exposure times with given ratios between them. The figure shows a multiple exposure capture using 3 different exposure times.

Figure 1: Multiple exposure capture using 3 different exposure times

Note

The longest exposure time (T1) represents the Exposure_us parameter you can set in wxPropView.

Afterwards, the signal is fully linearized before going through a compander to be output as a piece-wise linear signal. The next figure shows this.


Figure 2: Piece-wise linear signal

1.16.4.2.2.1 Description Exposure ratios can be controlled by the program. Two ratios are used: R1 = T1/T2 and R2 = T2/T3.

Increasing R1 and R2 will increase the dynamic range of the sensor at the cost of lower signal-to-noise ratio (and
vice versa).
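Given T1 (the Exposure_us parameter) and the two ratios, the shorter exposure times follow directly (a sketch of the arithmetic; the function name is illustrative):

```python
def hdr_exposure_times(t1_us, r1, r2):
    """Derive the three exposure times from T1 and the ratios
    R1 = T1/T2 and R2 = T2/T3."""
    t2 = t1_us / r1
    t3 = t2 / r2
    return t1_us, t2, t3
```

For example, with T1 = 32000 us and the "Fixed0" ratios 8 / 4, the shorter exposures come out at 4000 us and 1000 us.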

1.16.4.2.2.2 Possible settings Possible settings of the mvBlueFOX-x02d in HDR mode are:

• "HDREnable":

– "Off": Standard mode


– "On": HDR mode on, reduced amplification

* "HDRMode":
· "Fixed": Fixed setting with exposure-time-ratios: T1 -> T2 ratio / T2 -> T3 ratio
· "Fixed0": 8 / 4
· "Fixed1": 4 / 8
· "Fixed2": 8 / 8
· "Fixed3": 8 / 16
· "Fixed4": 16 / 16
· "Fixed5": 16 / 32


Figure 3: wxPropView - Working with the HDR mode

1.16.5 Working with LUTs

There are several use cases concerning LUTs (Look-Up-Tables):

• Introducing LUTs (p. 158)


1.16.5.1 Introducing LUTs

1.16.5.1.1 Introduction Look-Up-Tables (LUTs) are used to transform input data into a desired output format. For example, if you want to invert an 8 bit image, the Look-Up-Table will look like the following:

Figure 1: Look-Up-Table which inverts a pixel of an 8 bit mono image

I.e., a pixel which is white in the input image (value 255) will become black (value 0) in the output image.
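The inverting LUT from Figure 1 can be written down directly (a minimal sketch; in the camera this table lives in hardware, see below):

```python
# Inverting LUT for an 8-bit mono image: output = 255 - input.
invert_lut = [255 - i for i in range(256)]

def apply_lut(pixels, lut):
    """Replace every pixel value by its LUT entry."""
    return [lut[p] for p in pixels]
```

Since the transformation is a pure table lookup, any mapping (inversion, gamma, contrast stretch) costs the same once the table is built.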

All MATRIX VISION devices use a hardware based LUT which means that

• no host CPU load is needed and

• the LUT operations are independent of the transmission bit depth.

1.16.5.1.2 Setting the hardware based LUTs via LUT Control

Note

The mvBlueFOX cameras also feature a hardware based LUT. Although you have to set the LUT via Setting -> Base -> ImageProcessing -> LUTOperations (p. 158), you can choose where the processing takes place via the parameter LUTImplementation: just select either "Software" or "Hardware".

1.16.5.1.3 Setting the Host based LUTs via LUTOperations Host based LUTs are also available via "Setting
-> Base -> ImageProcessing -> LUTOperations"). Here, the changes will affect the 8 bit image data and the
processing needs the CPU of the host system.

Three "LUTMode"s are available:

• "Gamma"
You can use "Gamma" to lift darker image areas and to flatten the brighter ones. This compensates the contrast of the object. The calculation is described here. It makes sense to set the "GammaStartThreshold" higher than 0 to avoid an extreme lift of, or noise in, the darker areas.


• "Interpolated"
With "Interpolated" you can set the key points of a characteristic line, and you can define the number of key points. The following figure shows the behavior of all 3 LUTInterpolationModes with 3 key points:

Figure 2: LUTMode "Interpolated" -> LUTInterpolationMode

• "Direct"
With "Direct" you can set the LUT values directly.

1.16.5.1.3.1 Example 1: Inverting an Image To get an inverted 8 bit mono image like shown in Figure 1, you
can set the LUT using wxPropView (p. 69). After starting wxPropView (p. 69) and using the device,

1. Set "LUTEnable" to "On" in "Setting -> Base -> ImageProcessing -> LUTOperations".

2. Afterwards, set "LUTMode" to "Direct".

3. Right-click on "LUTs -> LUT-0 -> DirectValues[256]" and select "Set Multiple Elements... -> Via A User
Defined Value Range".
This is one way to get an inverted result. It is also possible to use the "LUTMode" - "Interpolated".

4. Now you can set the range from 0 to 255 and the values from 255 to 0 as shown in Figure 2.


Figure 3: Inverting an image using wxPropView with LUTMode "Direct"

1.16.6 Saving data on the device

Note

As described in Storing and restoring settings (p. 81), it is also possible to save the settings as an
XML file on the host system. You can find further information about, for example, the XML compatibilities of the different driver versions in the mvIMPACT Acquire SDK manuals and the according setting classes: https://www.matrix-vision.com/manuals/SDK_CPP/classmvIMPACT_1_1acquire_1_1FunctionInterface.html (C++)

There are several use cases concerning device memory:

• Creating user data entries (p. 160)

1.16.6.1 Creating user data entries

1.16.6.1.1 Basics about user data It is possible to save arbitrary user specific data in the hardware's non-volatile memory. The number of possible entries depends on the length of the individual entries as well as the size of the device's non-volatile memory reserved for storing them:

• mvBlueFOX,


• mvBlueFOX-M,

• mvBlueFOX-MLC,

• mvBlueFOX3, and

• mvBlueCOUGAR-X

currently offer 512 bytes of user accessible non-volatile memory of which 12 bytes are needed to store header
information leaving 500 bytes for user specific data.

One entry will currently consume:

1 + <length_of_name (up to 255 chars)> + 2 + <length_of_data (up to 65535 bytes)> + 1 (access mode) bytes

as well as an optional

1 + <length_of_password> bytes per entry if a password has been defined for this particular entry.

It is possible to save either String or Binary data in the data property of each entry. When storing binary data, please note that this data will internally be stored in Base64 format, thus the amount of memory required is 4/3 times the binary data size.
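The entry-size formula and the Base64 overhead can be combined into a small estimate (a sketch of the formula above, not the firmware's exact accounting; the function name is illustrative):

```python
import math

def user_data_entry_size(name, data_bytes, password=b"", binary=False):
    """Approximate non-volatile memory consumption of one user data entry:
    1 + len(name) + 2 + len(stored data) + 1 (access mode) bytes, plus an
    optional 1 + len(password). Binary data is stored Base64 encoded,
    i.e. 4 output bytes for every 3 bytes of payload."""
    stored = 4 * math.ceil(len(data_bytes) / 3) if binary else len(data_bytes)
    size = 1 + len(name) + 2 + stored + 1
    if password:
        size += 1 + len(password)
    return size
```

This makes it easy to check in advance whether a planned set of entries still fits into the 500 bytes available for user specific data.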

The UserData can be accessed and created using wxPropView (p. 69) (the device has to be closed). In the section
"UserData" you will find the entries and following methods:

• "CreateUserDataEntry"

• "DeleteUserDataEntry"

• "WriteDataToHardware"

Figure 1: wxPropView - section "UserData -> Entries"


To create a user data entry, you have to

• Right click on "CreateUserDataEntry"

• Select "Execute" from the popup menu.


An entry will be created.

• In "Entries" click on the entry you want to adjust and modify the data fields.
To permanently commit a modification made with the keyboard the ENTER key must be pressed.

• To save the data on the device, you have to execute "WriteDataToHardware". Please have a look at the "Output" tab in the lower right section of the screen, as shown in Figure 2, to check whether the write process finished without errors. If an error occurs, a message box will pop up.

Figure 2: wxPropView - analysis tool "Output"

1.16.6.1.2 Coding sample If you want to use the UserData as a dongle mechanism (with binary data), for example, wxPropView (p. 69) is not suitable. In this case you have to program the handling of the user data yourself.

See also

mvIMPACT::acquire::UserDataEntry in mvIMPACT_Acquire_API_CPP_manual.chm.

1.16.7 Working with several cameras simultaneously

There are several use cases concerning multiple cameras:

• Using 2 mvBlueFOX-MLC cameras in Master-Slave mode (p. 163)

• Synchronize the cameras to expose at the same time (p. 167)


1.16.7.1 Using 2 mvBlueFOX-MLC cameras in Master-Slave mode

1.16.7.1.1 Scenario If you want to have a synchronized stereo camera array (e.g. mvBlueFOX-MLC-202dG)
with a rolling shutter master camera (e.g. mvBlueFOX-MLC-202dC), you can solve this task as follows:

1. Please check, if all mvBlueFOX cameras are using firmware version 1.12.16 or newer.

2. Now, open wxPropView (p. 69) and set the master camera:

Figure 1: wxPropView - Master camera outputs at DigOut 0 a frame synchronous V-Sync pulse

Note

Alternatively, it is also possible to use HRTC - Hardware Real-Time Controller (p. 118) HRTC to set
the master camera. The following sample shows the HRTC - Hardware Real-Time Controller (p. 118)
HRTC program which sets the trigger signal and the digital output.
The sample will lead to a constant frame rate of approx. 16.7 fps (50000 us + 10000 us = 60000 us for one cycle; 1 / 60000 us * 1000000 = 16.67 Hz).


Figure 2: wxPropView - HRTC program sets the trigger signal and the digital output

Do not forget to set HRTC as the trigger source for the master camera.


Figure 3: wxPropView - HRTC is the trigger source for the master camera

3. Then, set the slave with wxPropView (p. 69) :

Figure 4: wxPropView - Slave camera with TriggerMode "OnHighLevel" at DigIn 0

1.16.7.1.1.1 Connection using -UOW versions (opto-isolated inputs and outputs) The connection of the
mvBlueFOX cameras should be like this:


Figure 5: Connection with opto-isolated digital inputs and outputs

Symbol   Comment                   Input voltage   Min   Typ    Max   Unit
Uext.    External power                            3.3          30    V
Rout     Resistor digital output                         2            kOhm
Rin      Resistor digital input    3.3 V .. 5 V          0            kOhm
                                   12 V                  0.68         kOhm
                                   24 V                  2            kOhm

You can add further slaves.

1.16.7.1.1.2 Connection using -UTW versions (TTL inputs and outputs) The connection of the mvBlueFOX
cameras should be like this:

Figure 6: Connection with TTL digital inputs and outputs

For this case we offer a synchronization cable called "KS-MLC-IO-TTL 00.5".


Note

No further slaves are possible.

See also

• Dimensions and connectors (p. 53) Figure 18 pin reference.

• Dimensions and connectors (p. 53) Table of connector pin out of "12-pin through-hole type shrouded
header (USB / Dig I/O)".

• Dimensions and connectors (p. 53) Electrical drawing "opto-isolated digital inputs" and "opto-isolated
digital outputs".

• A predefined frame rate is also possible using HRTC. (p. 169)

1.16.7.2 Synchronize the cameras to expose at the same time

This can be achieved by connecting the same external trigger signal to one of the digital inputs of each camera like
it's shown in the following figure:

Figure 1: Electrical setup for sync. cameras

Each camera then has to be configured for external trigger somehow like in the image below:


Figure 2: wxPropView - Setup for sync. cameras

This assumes that the image acquisition shall start with the rising edge of the trigger signal. Every camera must be configured like this. Each rising edge of the external trigger signal will then start the exposure of a new image at the same time on each camera. Every trigger signal that occurs during the exposure of an image will be silently discarded.

1.16.8 Working with the Hardware Real-Time Controller (HRTC)

Note

Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.

There are several use cases concerning the Hardware Real-Time Controller (HRTC):

• "Using single camera":

– Achieve a defined image frequency (HRTC) (p. 169)


– Delay the external trigger signal (HRTC) (p. 170)
– Creating double acquisitions (HRTC) (p. 171)
– Take two images after one external trigger (HRTC) (p. 171)
– Take two images with different expose times after an external trigger (HRTC) (p. 172)
– Edge controlled triggering (HRTC) (p. 174)

• "Using multiple cameras":

– Delay the expose start of the following camera (HRTC) (p. 176)


1.16.8.1 Achieve a defined image frequency (HRTC)

Note

Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.

With the use of the HRTC, any feasible frequency with an accuracy of microseconds (us) is possible. The program to achieve this must roughly look like this (with the trigger mode set to ctmOnRisingEdge):

0. WaitClocks( <frame time in us> - <trigger pulse width in us> )
1. TriggerSet 1
2. WaitClocks( <trigger pulse width in us> )
3. TriggerReset
4. Jump 0

So to get e.g. exactly 10 images per second from the camera, the program would look like this (of course, the exposure time must then be smaller than or equal to the frame time in normal shutter mode):

0. WaitClocks 99000
1. TriggerSet 1
2. WaitClocks 1000
3. TriggerReset
4. Jump 0
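The relation between the desired frame rate and the two WaitClocks values can be computed as follows (an illustrative helper only; HRTC programs are actually entered in wxPropView or via mvIMPACT::acquire::RTCtrProgram):

```python
def hrtc_frequency_program(fps, trigger_pulse_us=1000):
    """Derive the WaitClocks values for a defined frame rate, as in the
    listing above: one cycle equals the frame time, 1e6 / fps microseconds,
    split into the idle wait and the trigger pulse itself."""
    frame_time_us = round(1_000_000 / fps)
    return [
        f"0. WaitClocks {frame_time_us - trigger_pulse_us}",
        "1. TriggerSet 1",
        f"2. WaitClocks {trigger_pulse_us}",
        "3. TriggerReset",
        "4. Jump 0",
    ]
```

For 10 fps and a 1000 us pulse this reproduces the listing above (99000 + 1000 us = 100000 us per cycle).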

Figure 1: wxPropView - Entering the sample "Achieve a defined image frequency"


See also

Download this sample as an rtp file: Frequency10Hz.rtp. To open the file in wxPropView (p. 69),
click on "Digital I/O -> HardwareRealTimeController -> Filename" and select the
downloaded file. Afterwards, click on "int Load( )" to load the HRTC program.

Note

Please note the max. frame rate of the corresponding sensor!

To see a code sample (in C++) how this can be implemented in an application see the description of the class
mvIMPACT::acquire::RTCtrProgram (C++ developers)

1.16.8.2 Delay the external trigger signal (HRTC)

Note

Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.

0. WaitDigin DigIn0->On
1. WaitClocks <delay time>
2. TriggerSet 0
3. WaitClocks <trigger pulse width>
4. TriggerReset
5. Jump 0

<trigger pulse width> should not be less than 100 us.

Figure 1: Delay the external trigger signal

As soon as digital input 0 switches to the On state (0), the HRTC waits the <delay time> (1) and starts the image exposure. The exposure time is taken from the camera's exposure setting. Step (5) jumps back to the beginning to wait for the next incoming signal.


Note

WaitDigIn waits for a state.

There has to be a waiting period between TriggerSet and TriggerReset.
If you are waiting for an external edge in an HRTC sequence like

WaitDigIn[On,Ignore]
WaitDigIn[Off,Ignore]

the minimum pulse width which can be detected by the HRTC is 5 us.

1.16.8.3 Creating double acquisitions (HRTC)

Note

Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.

If you need a double acquisition, i.e. to take two images within a very short time interval, you can achieve this by using the HRTC.

With the following HRTC code, you will

• take an image using TriggerSet and, after TriggerReset,

• set the camera to ExposeSet immediately.

• Now, you have to wait until the first image has been read out and then

• set the second TriggerSet.

The ExposureTime was set to 200 us.

0 WaitDigin DigitalInputs[0] - On
1 TriggerSet 1
2 WaitClocks 200
3 TriggerReset
4 WaitClocks 5
5 ExposeSet
6 WaitClocks 60000
7 TriggerSet 2
8 WaitClocks 100
9 TriggerReset
10 ExposeReset
11 WaitClocks 60000
12 Jump 0

1.16.8.4 Take two images after one external trigger (HRTC)


Note

Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.

0. WaitDigin DigIn0->Off
1. TriggerSet 1
2. WaitClocks <trigger pulse width>
3. TriggerReset
4. WaitClocks <time between 2 acquisitions - 10us> (= WC1)
5. TriggerSet 2
6. WaitClocks <trigger pulse width>
7. TriggerReset
8. Jump 0

<trigger pulse width> should not be less than 100 us.

Figure 1: Take two images after one external trigger

This program generates two internal trigger signals after digital input 0 goes low. The time between those
internal trigger signals is defined by step (4). Each image gets a different frame ID: the first one has the
number 1, defined in command (1), and the second image will have the number 2. The application can query the
frame ID of each image, so it is always known which image is the first one and which is the second.

1.16.8.5 Take two images with different expose times after an external trigger (HRTC)

Note

Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.

The following code shows the solution in combination with a CCD model of the camera. With CCD models you have
to set the exposure time using the trigger width.

0. WaitDigin DigIn0->Off
1. ExposeSet
2. WaitClocks <expose time image1 - 10us> (= WC1)
3. TriggerSet 1
4. WaitClocks <trigger pulse width>
5. TriggerReset
6. ExposeReset


7. WaitClocks <time between 2 acquisitions - expose time image1 - 10us> (= WC2)


8. ExposeSet
9. WaitClocks <expose time image2 - 10us> (= WC3)
10. TriggerSet 2
11. WaitClocks <trigger pulse width>
12. TriggerReset
13. ExposeReset
14. Jump 0

<trigger pulse width> should not be less than 100 us.

Figure 1: Take two images with different expose times after an external trigger

Note

Due to the internal loop that waits for a trigger signal, the WaitClocks value between "TriggerSet 1" and
"TriggerReset" is set to 100. For this reason, the trigger signal cannot be missed.
Before the ExposeReset, you have to call TriggerReset, otherwise the normal flow will continue and the
image data will be lost!
The sensor exposure time after the TriggerSet is 0.

See also

Download this sample as an rtp file: 2Images2DifferentExposureTimes.rtp with two consecutive
exposure times (10 ms / 20 ms). To open the file in wxPropView (p. 69), click on "Digital I/O
-> HardwareRealTimeController -> Filename" and select the downloaded file. Afterwards,
click on "int Load( )" to load the HRTC program. There are timeouts added in line 4 and line 14 to illustrate
the different exposure times.

Using a CMOS model (e.g. the mvBlueFOX-MLC205), a sample with four consecutive exposure times (10ms /
20ms / 40ms / 80ms) triggered just by one hardware input signal would look like this:

0. WaitDigin DigIn0->On
1. TriggerSet
2. WaitClocks 10000 (= 10 ms)
3. TriggerReset
4. WaitClocks 1000000 (= 1 s)
5. TriggerSet
6. WaitClocks 20000 (= 20 ms)
7. TriggerReset


8. WaitClocks 1000000 (= 1 s)
9. TriggerSet
10. WaitClocks 40000 (= 40 ms)
11. TriggerReset
12. WaitClocks 1000000 (= 1 s)
13. TriggerSet
14. WaitClocks 80000 (= 80 ms)
15. TriggerReset
16. WaitClocks 1000000 (= 1 s)
17. Jump 0
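For CMOS models whose exposure is controlled by the trigger pulse width, a listing like the one above can be generated for any list of exposure times. The helper below is a sketch (not part of the mvIMPACT Acquire API); it assumes one HRTC tick per microsecond and keeps the fixed 1 s gap between exposures from the sample:

```python
# Sketch: generate an HRTC listing like the one above for an arbitrary
# list of exposure times. Assumptions: 1 HRTC tick = 1 us and a fixed
# 1 s gap between exposures, matching the sample listing.

def multi_exposure_program(exposure_times_us, gap_us=1_000_000):
    lines = ["0. WaitDigin DigIn0->On"]
    step = 1
    for exp_us in exposure_times_us:
        # Each exposure is one TriggerSet/WaitClocks/TriggerReset group
        # followed by the gap to the next exposure.
        for op in ("TriggerSet", f"WaitClocks {exp_us}",
                   "TriggerReset", f"WaitClocks {gap_us}"):
            lines.append(f"{step}. {op}")
            step += 1
    lines.append(f"{step}. Jump 0")
    return lines

for line in multi_exposure_program([10000, 20000, 40000, 80000]):
    print(line)
```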

See also

This second sample is also available as an rtp file: MLC205_four_images_diff_exp.rtp.

1.16.8.6 Edge controlled triggering (HRTC)

Note

Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.

To achieve edge-controlled triggering, you can use the HRTC. Please follow these steps:

1. First of all, you have to set the TriggerMode to OnHighLevel.

2. Then, set the TriggerSource to RTCtrl.

Figure 1: wxPropView - TriggerMode and TriggerSource

Afterwards you have to configure the HRTC program:

1. The HRTC program waits for a rising edge at the digital input 0 (step 1).

2. If there is a rising edge, the trigger will be set (step 2).

3. After a short wait time (step 3),

4. the trigger will be reset (step 4).

5. Now, the HRTC program waits for a falling edge at the digital input 0 (step 5).

6. If there is a falling edge, the program will jump to step 0 (step 6).


Note

The waiting time at step 0 is necessary to debounce the signal level at the input (the duration should be shorter
than the frame time).
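Put together, the six steps above correspond to an HRTC sequence of the following shape (a sketch; the placeholders and the exact instruction spelling are assumptions, the actual program is contained in the capture settings file referenced in this section):

```
0. WaitClocks <debounce time>         (shorter than the frame time)
1. WaitDigin DigIn0->On               (wait for the rising edge)
2. TriggerSet
3. WaitClocks <trigger pulse width>
4. TriggerReset
5. WaitDigin DigIn0->Off              (wait for the falling edge)
6. Jump 0
```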

Figure 2: wxPropView - Edge-controlled triggering using HRTC

See also

Download this sample as a capture settings file: MLC200wG_HRTC_TriggerFromHighLevelToEdgeControl.xml.
How you can work with capture settings is described in the following chapter (p. 81).

To see a code sample (in C++) showing how this can be implemented in an application, see the description of the class
mvIMPACT::acquire::RTCtrProgram (C++ developers)


1.16.8.7 Delay the expose start of the following camera (HRTC)

Note

Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.
The use case Synchronize the cameras to expose at the same time (p. 167) shows how you have to
connect the cameras.

If a defined delay is necessary between the cameras, the HRTC can do the synchronization work.

In this case, one camera must be the master. The external trigger signal that will start the acquisition must be
connected to one of this camera's digital inputs. One of its digital outputs is then connected to the digital input
of the next camera, so camera one uses its digital output to trigger camera two. How to connect the cameras to
one another can also be seen in the following image:

Figure 1: Connection diagram for a defined delay from the exposure start of one camera relative to another

Assume that the external trigger is connected to digital input 0 of camera one and that digital output 0 is connected
to digital input 0 of camera two. Each additional camera is then connected to its predecessor in the same way that
camera 2 is connected to camera 1. The HRTC of camera one then has to be programmed roughly like this:

0. WaitDigin DigIn0->On
1. TriggerSet 0
2. WaitClocks <trigger pulse width>
3. TriggerReset
4. WaitClocks <delay time>
5. SetDigout DigOut0->On
6. WaitClocks 100us
7. SetDigout DigOut0->Off
8. Jump 0

<trigger pulse width> should not be less than 100 us.

When the cameras are set up to start the exposure on the rising edge of the signal, <delay time> is of course the
desired delay time minus <trigger pulse width>.

If more than two cameras shall be connected like this, every camera except the last one must run a program like
the one discussed above. The delay times can, of course, vary.
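Assuming one HRTC tick equals 1 us, the WaitClocks operand for step (4) follows from the rule above: the desired camera-to-camera delay minus the trigger pulse width. A sketch:

```python
# Sketch: compute the WaitClocks operand for step (4) of the program
# above. Per the text, when the cameras trigger on the rising edge,
# <delay time> is the desired delay minus <trigger pulse width>.
# Assumption: 1 HRTC tick = 1 us.

def delay_clocks(desired_delay_us, trigger_pulse_width_us=100):
    if trigger_pulse_width_us < 100:
        raise ValueError("<trigger pulse width> should not be less than 100 us")
    delay = desired_delay_us - trigger_pulse_width_us
    if delay < 0:
        raise ValueError("desired delay must be >= the trigger pulse width")
    return delay

print(delay_clocks(2500))  # 2.5 ms desired delay -> WaitClocks 2400
```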


Figure 2: Delay the expose start of the following camera

1.17 Appendix A. Specific Camera / Sensor Data

• A.1 CCD (p. 177)

• A.2 CMOS (p. 201)

1.17.1 A.1 CCD

• mvBlueFOX-[Model]220 (0.3 Mpix [640 x 480]) (p. 177)

• mvBlueFOX-[Model]220a (0.3 Mpix [640 x 480]) (p. 182)

• mvBlueFOX-[Model]221 (0.8 Mpix [1024 x 768]) (p. 187)

• mvBlueFOX-[Model]223 (1.4 Mpix [1360 x 1024]) (p. 191)

• mvBlueFOX-[Model]224 (1.9 Mpix [1600 x 1200]) (p. 196)

1.17.1.1 mvBlueFOX-[Model]220 (0.3 Mpix [640 x 480])

1.17.1.1.1 Introduction The CCD sensor is a highly programmable imaging module which will, for example,
enable the following types of applications:

Industrial applications:

• triggered image acquisition with precise control of image exposure start by hardware trigger input.

• image acquisition of fast moving objects due to:


– frame exposure, integrating all pixels at a time, in contrast to CMOS imagers, which typically integrate
line-by-line.
– short shutter time, to get sharp images.
– flash control output to have enough light for a short time.

Scientific applications:

• long time exposure for low light conditions.

• optimizing image quality using the variable shutter control.

1.17.1.1.2 Details of operation The process of getting an image from the CCD sensor can be separated into
three different phases.

1.17.1.1.2.1 Trigger When coming out of reset, or when ready after the last readout, the CCD controller waits for a
trigger signal.

The following trigger modes are available:

Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel Start an exposure of a frame as long as the trigger input is below the trigger threshold.
OnHighLevel Start an exposure of a frame as long as the trigger input is above the trigger threshold.
OnFallingEdge Each falling edge of trigger signal acquires one image.
OnRisingEdge Each rising edge of trigger signal acquires one image.
OnHighExpose Each rising edge of trigger signal acquires one image, exposure time corresponds to pulse
width.

See also

For a detailed description of the trigger modes, see the mvIMPACT Acquire API documentation
( https://www.matrix-vision/manuals/):

• C: TCameraTriggerMode
• C++: mvIMPACT::acquire::TCameraTriggerMode

1.17.1.1.2.2 Exposure aka Integration After an active trigger, the exposure phase starts with a maximum jitter
of ttrig . If flash illumination is enabled in software the flash output will be activated exactly while the sensor chip is
integrating light. Exposure time is adjustable by software in increments of treadline .

1.17.1.1.2.3 Readout When exposure is finished, the image is transferred to hidden storage cells on the CCD.
Image data is then shifted out line-by-line and transferred to memory. Shifting out non active lines takes tvshift,
while shifting out active lines will consume treadline . The number of active pixels per line will not have any impact on
readout speed.


1.17.1.1.3 CCD Timing

Name        Description                                  Pixel clock
                                                         12 MHz      24 MHz
t_trig      Time from trigger (internal or external)     10 us
            to exposure start
t_trans     Image transfer time                          64 us       32 us
            (move image to readout cells in CCD)
t_readline  Time needed to read out a line               64 us       32 us
t_vshift    Time needed to shift unused lines away       3.15 us     1.6 us
t_wait      Minimal time to next trigger                 64 us       32 us
t_exposure  Exposure time                                2 us - 128 s
t_readout   Image readout time                           t_readout = (ActiveLines * t_readline) + (510 - ActiveLines) * t_vshift
            (move image from readout cells to memory)

1.17.1.1.3.1 Timings

Note

In partial scan mode (readout window ysize < 480 lines).

To calculate the maximum frames per second (FPS_max) you will need the following formula (ExposeMode: Standard):

FPS_max =                          1
          --------------------------------------------------
          t_trig + t_readout + t_exposure + t_trans + t_wait

(ExposeMode: Overlapped):

t_trig + t_readout + t_trans + t_wait < t_exposure:

FPS_max =      1
          ----------
          t_exposure

t_trig + t_readout + t_trans + t_wait > t_exposure:

FPS_max =                   1
          -------------------------------------
          t_trig + t_readout + t_trans + t_wait

Example: Frame rate as function of lines & exposure time

Now, when we insert the values using an exposure time of, for example, 65 us, 100 lines, and a 12 MHz pixel clock
(ExposeMode: Standard):


FPS_max =                                1
          --------------------------------------------------------------------------
          10 us + ((100 * 64 us) + ((510 - 100) * 3.15 us)) + 65 us + 64 us + 64 us
        = 0.0001266704667806700868 1/us
        = 126.7
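The worked example can be reproduced numerically. The sketch below uses the 12 MHz timing values from the table above (all times in microseconds) and the Standard (non-overlapped) expose mode formula:

```python
# Sketch: maximum frame rate of the -220 sensor in Standard expose mode,
# using the 12 MHz timing values from the table above (all times in us).
# Reproduces the worked example (65 us exposure, 100 active lines).

T_TRIG, T_TRANS, T_READLINE, T_VSHIFT, T_WAIT = 10, 64, 64, 3.15, 64
TOTAL_LINES = 510

def fps_max_standard(active_lines, exposure_us):
    t_readout = (active_lines * T_READLINE
                 + (TOTAL_LINES - active_lines) * T_VSHIFT)
    cycle_us = T_TRIG + t_readout + exposure_us + T_TRANS + T_WAIT
    return 1e6 / cycle_us  # convert 1/us to frames per second

print(round(fps_max_standard(100, 65), 1))  # -> 126.7
```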

Note

The calculator returns the max. frame rate supported by the sensor. Please keep in mind that it will depend
on the interface and the used image format if this frame rate can be transferred.

See also

To find out how to achieve any defined freq. below or equal to the achievable max. freq., please have a look
at Achieve a defined image frequency (HRTC) (p. 169).

1.17.1.1.4 Reprogramming CCD Timing Reprogramming the CCD Controller will happen when the following
changes occur

• Changing the exposure time

• Changing the capture window

• Changing Trigger Modes

Reprogram time consists of two phases

1. Time needed to send data to the CCD controller, depending on what is changed:
   exposure: approx. 2..3 ms
   window: approx. 4..6 ms
   trigger mode: approx. 5..90 ms, varies with the old-mode/new-mode combination

2. Time to initialize (erase) the CCD chip after reprogramming; this is fixed at approx. 4.5 ms

So for example when reprogramming the capture window you will need (average values)

tregprog = change_window + init_ccd

tregprog = 5ms + 4.5ms

tregprog = 9.5ms

1.17.1.1.5 CCD Sensor Data Device Structure

• Interline CCD image sensor

• Image size: Diagonal 4.5mm (Type 1/4)

• Number of effective pixels: 659 (H) x 494 (V) approx. 330K pixels

• Total number of pixels: 692 (H) x 504 (V) approx. 350K pixels

• Chip size: 4.60mm (H) x 3.97mm (V)

• Unit cell size: 5.6um (H) x 5.6um (V)

• Optical black:

– Horizontal (H) direction: Front 2 pixels, rear 31 pixels


– Vertical (V) direction: Front 8 pixels, rear 2 pixels

• Number of dummy bits: Horizontal 16 Vertical 5

• Substrate material: Silicon


1.17.1.1.5.1 Characteristics These zone definitions apply to both the color and gray scale version of the sensor.

1.17.1.1.5.2 Color version

1.17.1.1.5.3 Gray scale version


1.17.1.1.6 CCD Signal Processing The CCD signal is processed with an analog front-end and digitized by a
12-bit analog-to-digital converter (ADC). The analog front-end contains a programmable gain amplifier which is
variable from 0 dB (gain=0) to 30 dB (gain=255).

The 8 most significant bits of the ADC are captured to the frame buffer. This gives the following transfer function
(based on the 8-bit digital code):

Digital_code [lsb] = ccd_signal [V] * 256 [lsb/V] * 10^(gain[dB]/20)

lsb: least significant bit (smallest digital code change)
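A gain given in dB corresponds to a linear factor of 10^(dB/20). The sketch below evaluates the transfer function; note that the linear mapping of the gain register (0..255) onto 0..30 dB is an assumption, not taken from this manual:

```python
# Sketch: 8-bit digital code as a function of CCD signal voltage and gain.
# A gain in dB corresponds to a linear factor of 10**(dB/20). The linear
# mapping of the gain register (0..255) onto 0..30 dB is an assumption.

def digital_code(ccd_signal_v, gain_db):
    code = ccd_signal_v * 256 * 10 ** (gain_db / 20)
    return min(255, int(code))  # saturate at the 8-bit maximum

def register_to_db(gain_register):
    return 30.0 * gain_register / 255  # assumed linear register mapping

print(digital_code(0.5, 0))  # -> 128
print(digital_code(0.5, 6))  # roughly double the code, close to saturation
```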
Device Feature And Property List (p. 182)

1.17.1.1.7 Device Feature And Property List

• mvBlueFOX-220G Features (p. 182)

• mvBlueFOX-220C Features (p. 182)

1.17.1.1.7.1 mvBlueFOX-220G Features

1.17.1.1.7.2 mvBlueFOX-220C Features

1.17.1.2 mvBlueFOX-[Model]220a (0.3 Mpix [640 x 480])

1.17.1.2.1 Introduction The CCD sensor is a highly programmable imaging module which will, for example,
enable the following types of applications:
Industrial applications:

• triggered image acquisition with precise control of image integration start by hardware trigger input.

• image acquisition of fast moving objects due to:

– frame integration, integrating all pixels at a time, in contrast to CMOS imagers, which typically integrate
line-by-line.
– short shutter time, to get sharp images.
– flash control output to have enough light for a short time.

Scientific applications:

• long time integration for low light conditions.

• optimizing image quality using the variable shutter control.

1.17.1.2.2 Details of operation The process of getting an image from the CCD sensor can be separated into
three different phases.

1.17.1.2.2.1 Trigger When coming out of reset, or when ready after the last readout, the CCD controller waits for a
trigger signal.
The following trigger modes are available:


Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.
OnFallingEdge Each falling edge of trigger signal acquires one image.
OnRisingEdge Each rising edge of trigger signal acquires one image.
OnHighExpose Each rising edge of trigger signal acquires one image, exposure time corresponds to pulse
width.

TriggerSource mvIMPACT Acquire    TriggerSource GenICam (BCX)
GP-IN0                            Line4
GP-IN1                            Line5

See also

For a detailed description of the trigger modes, see the mvIMPACT Acquire API documentation
( https://www.matrix-vision/manuals/):

• C: TCameraTriggerMode
• C++: mvIMPACT::acquire::TCameraTriggerMode

Note

Trigger modes which use an external input (ctmOnLowLevel, ctmOnHighLevel, ctmOnRisingEdge,
ctmOnFallingEdge) will use digital input 0 as the input for the trigger signal. Input 0 is not restricted to the trigger
function; it can always be used as a general purpose digital input as well. The input switching threshold of all
inputs can be programmed with write_dac(level_in_mV). It is best to set this to half of the input voltage.
So, for example, if you apply a 24 V switching signal to the digital inputs, set the threshold to 12000 mV.
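The recommendation above (threshold = half of the applied signal voltage) is easy to compute. write_dac(level_in_mV) is the call named in the text; the helper below only computes its argument and is not part of the mvIMPACT Acquire API:

```python
# Sketch: choose the digital input switching threshold as half of the
# applied signal voltage, as recommended above. The result is the
# level_in_mV argument for the write_dac() call named in the text.

def threshold_mv(input_signal_v):
    return int(input_signal_v * 1000 / 2)

print(threshold_mv(24))  # 24 V switching signal -> 12000 mV
```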

1.17.1.2.2.2 Exposure aka Integration After an active trigger, the integration phase starts with a maximum jitter
of ttrig . If flash illumination is enabled in software the flash output will be activated exactly while the sensor chip is
integrating light. Exposure time is adjustable by software in increments of treadline .

1.17.1.2.2.3 Readout When integration is finished, the image is transferred to hidden storage cells on the CCD.
Image data is then shifted out line-by-line and transferred to memory. Shifting out non active lines takes tvshift,
while shifting out active lines will consume treadline . The number of active pixels per line will not have any impact on
readout speed.

1.17.1.2.3 CCD Timing


Name        Description                                  Pixel clock
                                                         20 MHz       40 MHz
t_trig      Time from trigger (internal or external)     3.6 us       1.8 us
            to exposure start
t_trans     Image transfer time                          42.6 us      21.3 us
            (move image to readout cells in CCD)
t_readline  Time needed to read out a line               39.05 us     19.525 us
t_vshift    Time needed to shift unused lines away       3.6 us       1.8 us
t_wait      Minimal time to next trigger                 7.2 us       3.6 us
t_exposure  Exposure time                                1 us..10 s   1 us..10 s
t_readout   Image readout time                           t_readout = (ActiveLines * t_readline) + (504 - ActiveLines) * t_vshift + t_readline
            (move image from readout cells to memory)

1.17.1.2.3.1 Timings

Note

In partial scan mode (readout window ysize < 480 lines).

To calculate the maximum frames per second (FPS_max) you will need the following formula (Expose mode: No overlap):

FPS_max =                          1
          --------------------------------------------------
          t_trig + t_readout + t_exposure + t_trans + t_wait

(Expose mode: Overlapped):

t_trig + t_readout + t_trans + t_wait < t_exposure:

FPS_max =      1
          ----------
          t_exposure

t_trig + t_readout + t_trans + t_wait > t_exposure:

FPS_max =                   1
          -------------------------------------
          t_trig + t_readout + t_trans + t_wait

1.17.1.2.3.2 Example: Frame rate as function of lines & exposure time Now, when we insert the values using an
exposure time of, for example, 8000 us, 480 lines, and a 40 MHz pixel clock (Expose mode: No overlap):

FPS_max = 1
-----------------------------------------------------------------------------------------------
1.8 us + ((480 * 19.525 us) + ((504 - 480) * 1.80 us) + 19.525 us) + 8000 us + 21.3 us + 3.6 us
= 0.0000572690945899318068 1 / us
= 57.3
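This example, too, can be reproduced numerically. The sketch below uses the 40 MHz timing values from the table above and the -220a readout formula, which adds one extra line time and uses 504 total lines (all times in microseconds):

```python
# Sketch: maximum frame rate of the -220a sensor in "No overlap" expose
# mode, using the 40 MHz timing values from the table above (times in us).
# Reproduces the worked example (8000 us exposure, 480 active lines).

T_TRIG, T_TRANS, T_READLINE, T_VSHIFT, T_WAIT = 1.8, 21.3, 19.525, 1.8, 3.6
TOTAL_LINES = 504

def fps_max_no_overlap(active_lines, exposure_us):
    t_readout = (active_lines * T_READLINE
                 + (TOTAL_LINES - active_lines) * T_VSHIFT
                 + T_READLINE)  # the -220a formula adds one extra line time
    cycle_us = T_TRIG + t_readout + exposure_us + T_TRANS + T_WAIT
    return 1e6 / cycle_us

print(round(fps_max_no_overlap(480, 8000), 1))  # -> 57.3
```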

1.17.1.2.3.3 Frame rate calculator

Note

The calculator returns the max. frame rate supported by the sensor. Please keep in mind that it will depend
on the interface and the used image format if this frame rate can be transferred.

See also

To find out how to achieve any defined freq. below or equal to the achievable max. freq., please have a look
at Achieve a defined image frequency (HRTC) (p. 169).


1.17.1.2.4 Reprogramming CCD Timing Reprogramming the CCD Controller will happen when the following
changes occur

• Changing the exposure time

• Changing the capture window

• Changing Trigger Modes

Reprogram time consists of two phases

1. Time needed to send data to the CCD controller, depending on what is changed:
   exposure: approx. 2..3 ms
   window: approx. 4..6 ms
   trigger mode: approx. 5..90 ms, varies with the old-mode/new-mode combination

2. Time to initialize (erase) the CCD chip after reprogramming; this is fixed at approx. 4.5 ms

So for example when reprogramming the capture window you will need (average values)

tregprog = change_window + init_ccd

tregprog = 5ms + 4.5ms

tregprog = 9.5ms

1.17.1.2.5 CCD Sensor Data Device Structure

• Interline CCD image sensor

• Image size: Diagonal 6mm (Type 1/3)

• Number of effective pixels: 659 (H) x 494 (V) approx. 330K pixels

• Total number of pixels: 692 (H) x 504 (V) approx. 350K pixels

• Chip size: 5.79mm (H) x 4.89mm (V)

• Unit cell size: 7.4um (H) x 7.4um (V)

• Optical black:

– Horizontal (H) direction: Front 2 pixels, rear 31 pixels


– Vertical (V) direction: Front 8 pixels, rear 2 pixels

• Number of dummy bits: Horizontal 16 Vertical 5

• Substrate material: Silicon


1.17.1.2.5.1 Characteristics These zone definitions apply to both the color and gray scale version of the sensor.

1.17.1.2.5.2 Color version

1.17.1.2.5.3 Gray scale version

Device Feature And Property List (p. 187)


1.17.1.2.6 Device Feature And Property List

• mvBlueFOX-220aG Features (p. 187)

• mvBlueFOX-220aC Features (p. 187)

1.17.1.2.6.1 mvBlueFOX-220aG Features

1.17.1.2.6.2 mvBlueFOX-220aC Features

1.17.1.3 mvBlueFOX-[Model]221 (0.8 Mpix [1024 x 768])

1.17.1.3.1 Introduction The CCD sensor is a highly programmable imaging module which will, for example,
enable the following types of applications:

Industrial applications:

• triggered image acquisition with precise control of image exposure start by hardware trigger input.

• image acquisition of fast moving objects due to:

– frame exposure, integrating all pixels at a time, in contrast to CMOS imagers, which typically integrate
line-by-line.
– short shutter time, to get sharp images.
– flash control output to have enough light for a short time.

Scientific applications:

• long time exposure for low light conditions.

• optimizing image quality using the variable shutter control.

1.17.1.3.2 Details of operation The process of getting an image from the CCD sensor can be separated into
three different phases.

1.17.1.3.2.1 Trigger When coming out of reset, or when ready after the last readout, the CCD controller waits for a
trigger signal.

The following trigger modes are available:

Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.
OnFallingEdge Each falling edge of trigger signal acquires one image.
OnRisingEdge Each rising edge of trigger signal acquires one image.
OnHighExpose Each rising edge of trigger signal acquires one image, exposure time corresponds to pulse width.
OnLowExpose Each falling edge of trigger signal acquires one image, exposure time corresponds to pulse width.
OnAnyEdge Start the exposure of a frame when the trigger input level changes from high to low or from low to high.

See also

For a detailed description of the trigger modes, see the mvIMPACT Acquire API documentation
( https://www.matrix-vision/manuals/):

• C: TCameraTriggerMode
• C++: mvIMPACT::acquire::TCameraTriggerMode

1.17.1.3.2.2 Exposure aka Integration After an active trigger, the exposure phase starts with a maximum jitter
of ttrig . If flash illumination is enabled in software the flash output will be activated exactly while the sensor chip is
integrating light. Integration time is adjustable by software in increments of treadline .

1.17.1.3.2.3 Readout When exposure is finished, the image is transferred to hidden storage cells on the CCD.
Image data is then shifted out line-by-line and transferred to memory. Shifting out non active lines takes tvshift,
while shifting out active lines will consume treadline . The number of active pixels per line will not have any impact on
readout speed.

1.17.1.3.3 CCD Timing

Name        Description                                  Pixel clock
                                                         20 MHz       40 MHz
t_trig      Time from trigger (internal or external)     9.7 us       4.85 us
            to exposure start
t_trans     Image transfer time                          45 us        22.5 us
            (move image to readout cells in CCD)
t_readline  Time needed to read out a line               65.4 us      32.7 us
t_vshift    Time needed to shift unused lines away       9.7 us       4.85 us
t_wait      Minimal time to next trigger                 116 us       58 us
t_exposure  Integration time                             1 us..10 s   1 us..10 s
t_readout   Image readout time                           t_readout = (ActiveLines * t_readline) + (788 - ActiveLines) * t_vshift + t_readline
            (move image from readout cells to memory)

1.17.1.3.3.1 Timings

Note

In partial scan mode (readout window ysize < 768 lines).


To calculate the maximum frames per second (FPS_max) you will need the following formula (Expose mode: Sequential):

FPS_max =                          1
          --------------------------------------------------
          t_trig + t_readout + t_exposure + t_trans + t_wait

(Expose mode: Overlapped):

t_trig + t_readout + t_trans + t_wait < t_exposure:

FPS_max =      1
          ----------
          t_exposure

t_trig + t_readout + t_trans + t_wait > t_exposure:

FPS_max =                   1
          -------------------------------------
          t_trig + t_readout + t_trans + t_wait

Example: Frame rate as function of lines & exposure time

Now, when we insert the values using an exposure time of, for example, 8000 us, 768 lines, and a 40 MHz pixel clock
(Expose mode: Sequential):

FPS_max = 1
-------------------------------------------------------------------------------------------
4.85 us + ((768 * 32.7 us) + ((788 - 768) * 4.85 us) + 32.7 us) + 8000 us + 22.5 us + 58 us
= 0.000030004215592290717 1 / us
= 30

Note

The calculator returns the max. frame rate supported by the sensor. Please keep in mind that it will depend
on the interface and the used image format if this frame rate can be transferred.

See also

To find out how to achieve any defined freq. below or equal to the achievable max. freq., please have a look
at Achieve a defined image frequency (HRTC) (p. 169).

1.17.1.3.4 Reprogramming CCD Timing Reprogramming the CCD Controller will happen when the following
changes occur

• Changing the exposure time

• Changing the capture window

• Changing Trigger Modes

Reprogram time consists of two phases

1. Time needed to send data to the CCD controller, depending on what is changed:
   exposure: approx. 2..3 ms
   window: approx. 4..6 ms
   trigger mode: approx. 5..90 ms, varies with the old-mode/new-mode combination

2. Time to initialize (erase) the CCD chip after reprogramming; this is fixed at approx. 4.5 ms

So for example when reprogramming the capture window you will need (average values)

tregprog = change_window + init_ccd

tregprog = 5ms + 4.5ms

tregprog = 9.5ms


1.17.1.3.5 CCD Sensor Data Device Structure

• Interline CCD image sensor

• Image size: Diagonal 6mm (Type 1/3)

• Number of effective pixels: 1025 (H) x 768 (V) approx. 790K pixels

• Total number of pixels: 1077 (H) x 788 (V) approx. 800K pixels

• Chip size: 5.80mm (H) x 4.92mm (V)

• Unit cell size: 4.65um (H) x 4.65um (V)

• Optical black:

– Horizontal (H) direction: Front 3 pixels, rear 40 pixels


– Vertical (V) direction: Front 7 pixels, rear 2 pixels

• Number of dummy bits: Horizontal 29 Vertical 1

• Substrate material: Silicon

1.17.1.3.5.1 Characteristics These zone definitions apply to both the color and gray scale version of the sensor.

1.17.1.3.5.2 Color version


1.17.1.3.5.3 Gray scale version

1.17.1.3.6 CCD Signal Processing The CCD signal is processed with an analog front-end and digitized by a
12-bit analog-to-digital converter (ADC). The analog front-end contains a programmable gain amplifier which is
variable from 0 dB (gain=0) to 30 dB (gain=255).

The 8 most significant bits of the ADC are captured to the frame buffer. This gives the following transfer function
(based on the 8-bit digital code):

Digital_code [lsb] = ccd_signal [V] * 256 [lsb/V] * 10^(gain[dB]/20)

lsb: least significant bit (smallest digital code change)

Device Feature And Property List (p. 191)

1.17.1.3.7 Device Feature And Property List

• mvBlueFOX-221G Features (p. 191)

• mvBlueFOX-221C Features (p. 191)

1.17.1.3.7.1 mvBlueFOX-221G Features

1.17.1.3.7.2 mvBlueFOX-221C Features

1.17.1.4 mvBlueFOX-[Model]223 (1.4 Mpix [1360 x 1024])

1.17.1.4.1 Introduction The CCD sensor is a highly programmable imaging module which will, for example,
enable the following types of applications:

Industrial applications:

• triggered image acquisition with precise control of image exposure start by hardware trigger input.

• image acquisition of fast moving objects due to:


– frame exposure, integrating all pixels at a time, in contrast to CMOS imagers, which typically integrate
line-by-line.
– short shutter time, to get sharp images.
– flash control output to have enough light for a short time.

Scientific applications:

• long time exposure for low light conditions.

• optimizing image quality using the variable shutter control.

1.17.1.4.2 Details of operation The process of getting an image from the CCD sensor can be separated into
three different phases.

1.17.1.4.2.1 Trigger When coming out of reset, or when ready after the last readout, the CCD controller waits for a
trigger signal.

The following trigger modes are available:

Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.
OnFallingEdge Each falling edge of trigger signal acquires one image.
OnRisingEdge Each rising edge of trigger signal acquires one image.
OnHighExpose Each rising edge of trigger signal acquires one image, exposure time corresponds to pulse
width.

See also

For a detailed description of the trigger modes, see the mvIMPACT Acquire API documentation
( https://www.matrix-vision/manuals/):

• C: TCameraTriggerMode
• C++: mvIMPACT::acquire::TCameraTriggerMode

1.17.1.4.2.2 Exposure aka Integration After an active trigger, the exposure phase starts with a maximum jitter
of ttrig . If flash illumination is enabled in software the flash output will be activated exactly while the sensor chip is
integrating light. Exposure time is adjustable by software in increments of treadline .

1.17.1.4.2.3 Readout When exposure is finished, the image is transferred to hidden storage cells on the CCD.
Image data is then shifted out line-by-line and transferred to memory. Shifting out non active lines takes tvshift,
while shifting out active lines will consume treadline . The number of active pixels per line will not have any impact on
readout speed.


1.17.1.4.3 CCD Timing

1.17.1.4.3.1 Timings

Note

In partial scan mode (readout window ysize < 1024 lines).

To calculate the maximum frames per second (FPS_max) you will need the following formula (Expose mode: No overlap):

FPS_max =                          1
          --------------------------------------------------
          t_trig + t_readout + t_exposure + t_trans + t_wait
1.17.1.4.3.2 Example: Frame rate as function of lines & exposure time Now, when we insert the values using an
exposure time of, for example, 8000 us, 1024 lines, and a 56 MHz pixel clock (Expose mode: No overlap):

See also

To find out how to achieve any defined freq. below or equal to the achievable max. freq., please have a look
at Achieve a defined image frequency (HRTC) (p. 169).

1.17.1.4.4 Reprogramming CCD Timing Reprogramming the CCD Controller will happen when the following
changes occur

• Changing the exposure time

• Changing the capture window

• Changing Trigger Modes

Reprogram time consists of two phases

1. Time needed to send data to the CCD controller, depending on what is changed:
   exposure: approx. 2..3 ms
   window: approx. 4..6 ms
   trigger mode: approx. 5..90 ms, varies with the old-mode/new-mode combination

2. Time to initialize (erase) the CCD chip after reprogramming; this is fixed at approx. 4.5 ms

So for example when reprogramming the capture window you will need (average values)

tregprog = change_window + init_ccd

tregprog = 5ms + 4.5ms

tregprog = 9.5ms


1.17.1.4.5 CCD Sensor Data Device Structure

• Interline CCD image sensor

• Image size: Diagonal 8mm (Type 1/2)

• Number of effective pixels: 1392 (H) x 1040 (V) approx. 1.45M pixels

• Total number of pixels: 1434 (H) x 1050 (V) approx. 1.5M pixels

• Chip size: 7.60mm (H) x 6.2mm (V)

• Unit cell size: 4.65um (H) x 4.65um (V)

• Optical black:

– Horizontal (H) direction: Front 2 pixels, rear 40 pixels


– Vertical (V) direction: Front 8 pixels, rear 2 pixels

• Number of dummy bits: Horizontal 20 Vertical 3

• Substrate material: Silicon

1.17.1.4.5.1 Characteristics These zone definitions apply to both the color and gray scale version of the sensor.

1.17.1.4.5.2 Color version


1.17.1.4.5.3 Gray scale version

1.17.1.4.6 CCD Signal Processing The CCD signal is processed by an analog front-end and digitized by a
12-bit analog-to-digital converter (ADC). The analog front-end contains a programmable gain amplifier which is
variable from 0 dB (gain=0) to 30 dB (gain=255).

The 8 most significant bits of the ADC are captured to the frame buffer. This gives the following transfer function
(based on the 8-bit digital code):

Digital_code[lsb] = ccd_signal[V] * 256[lsb/V] * 10^(gain[dB]/20)

lsb: least significant bit (smallest digital code change)
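As a quick illustration, this Python sketch (not from the manual) evaluates the transfer function. The gain register value is converted to dB via a linear 0..255 to 0..30 dB mapping, which is an assumption inferred from the gain range stated above, and the standard dB-to-linear conversion 10^(dB/20) is used.

```python
def digital_code(ccd_signal_v: float, gain_register: int) -> int:
    """8-bit digital code for a CCD signal in volts and a gain register 0..255."""
    gain_db = 30.0 * gain_register / 255.0      # assumed linear register->dB mapping
    code = ccd_signal_v * 256.0 * 10 ** (gain_db / 20.0)
    return min(255, int(code))                  # 8-bit output saturates at 255

print(digital_code(0.5, 0))    # 0.5 V at 0 dB gain -> code 128
```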


1.17.1.4.7 Device Feature And Property List

• mvBlueFOX-223G Features (p. 195)

• mvBlueFOX-223C Features (p. 195)

1.17.1.4.7.1 mvBlueFOX-223G Features

1.17.1.4.7.2 mvBlueFOX-223C Features


1.17.1.5 mvBlueFOX-[Model]224 (1.9 Mpix [1600 x 1200])

1.17.1.5.1 Introduction The CCD sensor is a highly programmable imaging module which will, for example,
enable the following type of applications

Industrial applications:

• triggered image acquisition with precise control of image exposure start by hardware trigger input.

• image acquisition of fast moving objects due to:

– frame exposure, integrating all pixels at the same time, in contrast to CMOS imagers which typically integrate
line-by-line.
– short shutter time, to get sharp images.
– flash control output to provide enough light for short exposure times.

Scientific applications:

• long exposure times for low-light conditions.

• optimizing image quality using the variable shutter control.

1.17.1.5.2 Details of operation The process of getting an image from the CCD sensor can be separated into
three different phases.

1.17.1.5.2.1 Trigger After coming out of reset or finishing the last readout, the CCD controller waits for a trigger
signal.

The following trigger modes are available:

Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.
OnFallingEdge Each falling edge of trigger signal acquires one image.
OnRisingEdge Each rising edge of trigger signal acquires one image.
OnHighExpose Each rising edge of trigger signal acquires one image, exposure time corresponds to pulse
width.


Name       Description                                   Pixel clock
                                                         20 MHz      40 MHz
ttrig      Time from trigger (internal or external)      10.2us      5.1us
           to exposure start
ttrans     Image transfer time (move image to            96us        48us
           readout cells in CCD)
treadline  Time needed to read out a line                96us        48us
tvshift    Time needed to shift unused lines away        10.2us      5.1us
twait      Minimal time to next trigger                  316us       158us
texposure  Exposure time                                 1us..10s    1us..10s
treadout   Image readout time (move image from readout cells to memory):
           treadout = (ActiveLines * treadline) + ((1248 - ActiveLines) * tvshift) + treadline

1.17.1.5.2.2 Timings

Note

In partial scan mode (readout window ysize < 1200 lines).

To calculate the maximum frames per second (FPSmax), use the following formula (Expose mode: No overlap):

FPS_max = 1
          --------------------------------------------------
          t_trig + t_readout + t_exposure + t_trans + t_wait

(Expose mode: Overlapped):

If t_trig + t_readout + t_trans + t_wait < t_exposure:

FPS_max = 1
          ----------
          t_exposure

If t_trig + t_readout + t_trans + t_wait > t_exposure:

FPS_max = 1
          -------------------------------------
          t_trig + t_readout + t_trans + t_wait


1.17.1.5.2.3 Example: Frame rate as function of lines & exposure time Inserting the values, with an exposure
time of, for example, 8000 us, 1200 lines and a 40 MHz pixel clock (Expose mode: No overlap):

FPS_max = 1
          ---------------------------------------------------------------------------------------
          5.1 us + ((1200 * 48 us) + ((1248 - 1200) * 5.1 us) + 48 us) + 8000 us + 48 us + 158 us

        = 0.0000151277 1/us

        = 15.1 fps
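The timing formulas above can be checked with a short sketch. The following Python snippet is not part of the original manual; it encodes the 40 MHz constants from the timing table and reproduces the worked example.

```python
# Timing constants for the 224 sensor at 40 MHz pixel clock (in µs),
# taken from the timing table above.
T_TRIG, T_TRANS, T_READLINE, T_VSHIFT, T_WAIT = 5.1, 48.0, 48.0, 5.1, 158.0
TOTAL_LINES = 1248  # total number of vertical lines of the sensor

def t_readout_us(active_lines: int) -> float:
    """treadout = ActiveLines*treadline + (1248 - ActiveLines)*tvshift + treadline."""
    return (active_lines * T_READLINE
            + (TOTAL_LINES - active_lines) * T_VSHIFT
            + T_READLINE)

def fps_max(active_lines: int, exposure_us: float, overlapped: bool = False) -> float:
    """Maximum frame rate; 'overlapped' selects the overlapped expose mode."""
    overhead = T_TRIG + t_readout_us(active_lines) + T_TRANS + T_WAIT
    if overlapped:
        return 1e6 / max(overhead, exposure_us)
    return 1e6 / (overhead + exposure_us)

# Worked example above: 1200 lines, 8000 µs exposure, no overlap -> ~15.1 fps
print(round(fps_max(1200, 8000.0), 1))  # 15.1
```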

1.17.1.5.2.4 Frame rate calculator

Note

The calculator returns the maximum frame rate supported by the sensor. Please keep in mind that whether this
frame rate can actually be transferred depends on the interface and the image format used.

See also

To find out how to achieve any defined frequency below or equal to the achievable maximum frequency, please
have a look at Achieve a defined image frequency (HRTC) (p. 169).

1.17.1.5.3 Reprogramming CCD Timing The CCD controller is reprogrammed when one of the following
changes occurs:

• Changing the exposure time

• Changing the capture window

• Changing Trigger Modes

Reprogram time consists of two phases:

1. Time needed to send data to the CCD controller, depending on what is changed:

   • exposure: approx. 2..3 ms
   • window: approx. 4..6 ms
   • trigger mode: 5..90 ms, varying with the old mode/new mode combination

2. Time needed to initialize (erase) the CCD chip after reprogramming; this is fixed, approx. 4.5 ms

For example, reprogramming the capture window will take (average values):

tregprog = change_window + init_ccd

tregprog = 5ms + 4.5ms

tregprog = 9.5ms


1.17.1.5.4 CCD Sensor Data Device Structure

• Interline CCD image sensor

• Image size: Diagonal 8.923mm (Type 1/1.8)

• Number of effective pixels: 1600 (H) x 1200 (V) approx. 1.92M pixels

• Total number of pixels: 1688 (H) x 1248 (V) approx. 2.11M pixels

• Chip size: 8.50mm (H) x 6.8mm (V)

• Unit cell size: 4.4um (H) x 4.4um (V)

• Optical black:

– Horizontal (H) direction: Front 12 pixels, rear 48 pixels


– Vertical (V) direction: Front 10 pixels, rear 2 pixels

• Number of dummy bits: Horizontal 28 Vertical 1

• Substrate material: Silicon

1.17.1.5.4.1 Characteristics These zone definitions apply to both the color and gray scale version of the sensor.

1.17.1.5.4.2 Color version


1.17.1.5.4.3 Gray scale version

1.17.1.5.5 CCD Signal Processing The CCD signal is processed by an analog front-end and digitized by a
12-bit analog-to-digital converter (ADC). The analog front-end contains a programmable gain amplifier which is
variable from 0 dB (gain=0) to 30 dB (gain=255).

The 8 most significant bits of the ADC are captured to the frame buffer. This gives the following transfer function
(based on the 8-bit digital code):

Digital_code[lsb] = ccd_signal[V] * 256[lsb/V] * 10^(gain[dB]/20)

lsb: least significant bit (smallest digital code change)


1.17.1.5.6 Device Feature And Property List

• mvBlueFOX-224G Features (p. 200)

• mvBlueFOX-224C Features (p. 200)

1.17.1.5.6.1 mvBlueFOX-224G Features

1.17.1.5.6.2 mvBlueFOX-224C Features


1.17.2 A.2 CMOS

• mvBlueFOX-[Model]200w (0.4 Mpix [752 x 480]) (p. 201)

• mvBlueFOX-[Model]202a (1.3 Mpix [1280 x 1024]) (p. 204)

• mvBlueFOX-[Model]202b (1.2 Mpix [1280 x 960]) (p. 207)

• mvBlueFOX-[Model]202d (1.2 Mpix [1280 x 960]) (p. 210)

• mvBlueFOX-[Model]205 (5.0 Mpix [2592 x 1944]) (p. 214)

1.17.2.1 mvBlueFOX-[Model]200w (0.4 Mpix [752 x 480])

1.17.2.1.1 Introduction The CMOS sensor module (MT9V034) incorporates the following features:

• resolution to 752 x 480 gray scale or RGB Bayer mosaic

• supports window AOI mode with faster readout

• high dynamic range (p. 151) 110 dB

• programmable analog gain (0..12 dB)

• progressive scan sensor (no interlaced problems!)

• full frame shutter

• programmable readout timing with free capture windows and partial scan

• many trigger modes (free-running, hardware-triggered)

1.17.2.1.2 Details of operation The sensor uses a full frame shutter (ShutterMode = "FrameShutter"),
i.e. all pixels are reset at the same time and exposure commences; exposure ends with the charge transfer to the
voltage sampling node.
Furthermore, the sensor offers two different modes of operation:

• free running mode (Overlapping exposure and readout)

• snapshot mode (Sequential exposure and readout)

1.17.2.1.2.1 Free running mode In free running mode, the sensor reaches its maximum frame rate. This is
done by overlapping the erase, exposure and readout phases. The sensor timing in free running mode is fixed, so
there is no control over when an acquisition starts. This mode is used with trigger mode Continuous.

To calculate the maximum frames per second (FPSmax) in free running mode, use the following formula:

FrameTime = (ImageWidth + 61) * ((ImageHeight + 45) / PixelClock)

If exposure time is lower than frame time:

FPS_max = 1
----------------------
FrameTime

If exposure time is greater than frame time:

FPS_max = 1
----------------------
ExposureTime


1.17.2.1.2.2 Snapshot mode In snapshot mode, the image acquisition process consists of several sequential
phases:

1.17.2.1.2.3 Trigger Snapshot mode starts with a trigger. This can be either a hardware or a software signal.

The following trigger modes are available:

Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.

See also

Using external trigger with CMOS sensors (p. 150)

1.17.2.1.2.4 Erase, exposure and readout All pixels are light-sensitive during the same period of time. The
whole pixel core is reset simultaneously, and after the exposure time all pixel values are sampled together on the
storage node inside each pixel. The pixel core is read out line-by-line after exposure.

Note

Exposure and readout are carried out sequentially; as a consequence, no exposure is possible during readout.

The step width for the exposure time is 1 us.

Image data is then shifted out line-by-line and transferred to memory.

To calculate the maximum frames per second (FPSmax) in snapshot mode, use the following formula:

FrameTime = (ImageWidth + 61) * ((ImageHeight + 45) / PixelClock)

FPS_max = 1
-----------------------------------
FrameTime + ExposureTime

1.17.2.1.3 Measured frame rates

AOI            PixelClock (MHz)  Exposure Time (us)  Maximal Frame Rate (fps)  PixelFormat
Maximum        40                100                 93.7                      Mono8
W:608 x H:388  40                100                 131.4                     Mono8
W:492 x H:314  40                100                 158.5                     Mono8
W:398 x H:206  40                100                 226.7                     Mono8
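The free-running and snapshot formulas above can be combined into a small sketch (Python, not part of the manual). With PixelClock in MHz and pixel counts dimensionless, the division yields microseconds directly; at full resolution, 40 MHz and 100 µs exposure, the free-running result matches the 93.7 fps in the measured table.

```python
def frame_time_us(width: int, height: int, pixel_clock_mhz: float) -> float:
    """FrameTime = (ImageWidth + 61) * ((ImageHeight + 45) / PixelClock)."""
    return (width + 61) * (height + 45) / pixel_clock_mhz

def fps_free_running(width, height, pixel_clock_mhz, exposure_us):
    # Overlapped exposure/readout: the longer of frame time and exposure governs.
    return 1e6 / max(frame_time_us(width, height, pixel_clock_mhz), exposure_us)

def fps_snapshot(width, height, pixel_clock_mhz, exposure_us):
    # Sequential exposure and readout.
    return 1e6 / (frame_time_us(width, height, pixel_clock_mhz) + exposure_us)

print(round(fps_free_running(752, 480, 40.0, 100.0), 1))  # 93.7
print(round(fps_snapshot(752, 480, 40.0, 100.0), 1))      # 92.8
```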

1.17.2.1.4 Sensor Data Device Structure

• Progressive scan CMOS image sensor


• Image size: 4.51(H)x2.88(V)mm (Type 1/3")

• Number of effective pixels: 752 (H) x 480 (V)

• Unit cell size: 6um (H) x 6um (V)

1.17.2.1.4.1 Characteristics

1.17.2.1.4.2 Color version

1.17.2.1.4.3 Gray scale version



1.17.2.1.5 Device Feature And Property List

• mvBlueFOX-200wG Features (p. 204)

• mvBlueFOX-200wC Features (p. 204)

1.17.2.1.5.1 mvBlueFOX-200wG Features

1.17.2.1.5.2 mvBlueFOX-200wC Features

1.17.2.2 mvBlueFOX-[Model]202a (1.3 Mpix [1280 x 1024])

1.17.2.2.1 Introduction The CMOS sensor module (MT9M001) incorporates the following features:

• resolution to 1280 x 1024 gray scale

• supports window AOI mode with faster readout

• dynamic range 61dB

• programmable analog gain (0..12dB)

• progressive scan sensor (no interlaced problems!)

• rolling shutter

• programmable readout timing with free capture windows and partial scan

• many trigger modes (free-running, hardware-triggered)

1.17.2.2.2 Details of operation The sensor uses the following acquisition mode:

• rolling shutter (ShutterMode = "ElectronicRollingShutter").

With the rolling shutter the lines are exposed for the same duration, but at a slightly different point in time.

Note

A rolling shutter can cause shear in moving objects.

Furthermore, the sensor offers one operating mode:

• snapshot mode (which means sequential exposure and readout)

1.17.2.2.2.1 Snapshot mode In snapshot mode, the image acquisition process consists of several sequential
phases:

1.17.2.2.2.2 Trigger Snapshot mode starts with a trigger. This can be either a hardware or a software signal.

The following trigger modes are available:


Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.
OnFallingEdge Each falling edge of trigger signal acquires one image.
OnRisingEdge Each rising edge of trigger signal acquires one image.
OnHighExpose Each rising edge of trigger signal acquires one image, exposure time corresponds to pulse
width.
OnLowExpose Each falling edge of trigger signal acquires one image, exposure time corresponds to pulse
width.
OnAnyEdge Start the exposure of a frame when the trigger input level changes from high to low or from
low to high.

See also

For a detailed description of the trigger modes, see the mvIMPACT Acquire API documentation
( https://www.matrix-vision.com/manuals/):

• C: TCameraTriggerMode
• C++: mvIMPACT::acquire::TCameraTriggerMode

1.17.2.2.2.3 Erase, exposure and readout After the trigger pulse, the complete sensor array is erased. This
takes some time, so there is a fixed delay of about 285 us between the trigger pulse on digital input 0 and the start
of exposure of the first line.
The exact exposure start time of each line (except the first line) depends on the exposure time and the position
of the line. The exposure of a particular line N is finished when line N is ready for readout. Image data is read out
line-by-line and transferred to memory (see: http://www.matrix-vision.com/tl_files/mv11/Glossary/art_rolling_shutter_en.pdf).
Exposure time is adjustable by software and depends on the image width. To calculate the exposure step size,
use the following formula:

LineDelay = 0

PixelClkPeriod = 1
                 --------
                 PixelClk

RowTime = (ImageWidth + 244 + LineDelay) * PixelClkPeriod

RowTime = MinExposureTime = ExposureStepSize

Image data is then shifted out line-by-line and transferred to memory.

To calculate the maximum frames per second (FPSmax) in snapshot mode, use the following formula:

FrameTime = (ImageWidth + 244) * ((ImageHeight + 16) / PixelClock)

FPS_max = 1
-----------------------------------
FrameTime + ExposureTime
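For reference, the exposure-step and frame-rate formulas above can be sketched as follows (Python, not part of the manual; PixelClock is given in MHz so the divisions yield microseconds directly).

```python
def row_time_us(width: int, pixel_clock_mhz: float, line_delay: int = 0) -> float:
    """RowTime = MinExposureTime = ExposureStepSize (in µs)."""
    return (width + 244 + line_delay) / pixel_clock_mhz

def frame_time_us(width: int, height: int, pixel_clock_mhz: float) -> float:
    """FrameTime = (ImageWidth + 244) * ((ImageHeight + 16) / PixelClock)."""
    return (width + 244) * (height + 16) / pixel_clock_mhz

def fps_snapshot(width, height, pixel_clock_mhz, exposure_us):
    # Sequential exposure and readout.
    return 1e6 / (frame_time_us(width, height, pixel_clock_mhz) + exposure_us)

# Full 1280-pixel-wide window at 40 MHz: exposure step of 38.1 µs per row.
print(round(row_time_us(1280, 40.0), 2))  # 38.1
```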


1.17.2.2.2.4 CMOS Timing in Snapshot mode

1.17.2.2.3 Sensor Data Device Structure

• Progressive scan CMOS image sensor

• Image size: 6.66(H)x5.32(V)mm (Type 1/2")

• Number of effective pixels: 1280 (H) x 1024 (V)

• Unit cell size: 5.2um (H) x 5.2um (V)

1.17.2.2.4 Characteristics

1.17.2.2.4.1 Gray scale version



1.17.2.2.5 Device Feature And Property List

• mvBlueFOX-102aG Features (p. 207)

1.17.2.2.5.1 mvBlueFOX-102aG Features

1.17.2.3 mvBlueFOX-[Model]202b (1.2 Mpix [1280 x 960])

1.17.2.3.1 Introduction The CMOS sensor module (MT9M021) incorporates the following features:

• resolution to 1280 x 960 gray scale or RGB Bayer mosaic

• supports window AOI mode with faster readout

• programmable analog gain (0..12 dB)

• progressive scan sensor (no interlaced problems!)

• pipelined global shutter

• programmable readout timing with free capture windows and partial scan

• many trigger modes (free-running, hardware-triggered)

1.17.2.3.2 Details of operation The sensor uses a pipelined global snapshot shutter (ShutterMode =
"FrameShutter"), i.e. light exposure takes place on all pixels in parallel, although the subsequent readout is
sequential.
Therefore the sensor offers two different modes of operation:

• free running mode (Overlapping exposure and readout)

• snapshot mode (Sequential exposure and readout)

1.17.2.3.2.1 Free running mode In free running mode, the sensor reaches its maximum frame rate. This is
done by overlapping the erase, exposure and readout phases. The sensor timing in free running mode is fixed, so
there is no control over when an acquisition starts. This mode is used with trigger mode Continuous.

To calculate the maximum frames per second (FPSmax) in free running mode, use the following formula:

FrameTime = (ImageHeight * (1650 / PixelClock)) + (25 * (1650 / PixelClock))

If exposure time is lower than frame time:

FPS_max = 1
----------------------
FrameTime

If exposure time is greater than frame time:


FPS_max = 1
------------------------
ExposureTime

1.17.2.3.2.2 Snapshot mode In snapshot mode, the image acquisition process consists of several sequential
phases:

1.17.2.3.2.3 Trigger Snapshot mode starts with a trigger. This can be either a hardware or a software signal.

The following trigger modes are available:

Mode Description
Continuous Free running, no external trigger signal needed.
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.

See also

Using external trigger with CMOS sensors (p. 150)

1.17.2.3.2.4 Erase, exposure and readout All pixels are light-sensitive during the same period of time. The
whole pixel core is reset simultaneously, and after the exposure time all pixel values are sampled together on the
storage node inside each pixel. The pixel core is read out line-by-line after exposure.

Note

Exposure and readout are carried out sequentially; as a consequence, no exposure is possible during readout.


The step width for the exposure time is 1 us.

Image data is then shifted out line-by-line and transferred to memory.

To calculate the maximum frames per second (FPSmax) in snapshot mode, use the following formula:

FrameTime = (ImageHeight * (1650 / PixelClock)) + (25 * (1650 / PixelClock))

FPS_max = 1
-----------------------------------
FrameTime + ExposureTime

1.17.2.3.3 Measured frame rates

AOI             PixelClock (MHz)  Exposure Time (us)  Maximal Frame Rate (fps)  PixelFormat
Maximum         40                100                 24.6                      Mono8
W:1036 x H:776  40                100                 30.3                      Mono8
W:838 x H:627   40                100                 37.1                      Mono8
W:678 x H:598   40                100                 38.9                      Mono8
W:550 x H:484   40                100                 47.6                      Mono8
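The row-based formula above can be sketched as follows (Python, not part of the manual). Each sensor row, plus 25 rows of blanking, takes 1650 pixel-clock cycles; with PixelClock in MHz the result is in microseconds. The 202d (MT9M024) section uses the same formula.

```python
def frame_time_us(height: int, pixel_clock_mhz: float) -> float:
    """FrameTime = (ImageHeight + 25) * 1650 / PixelClock (in µs)."""
    return (height + 25) * 1650 / pixel_clock_mhz

def fps_snapshot(height, pixel_clock_mhz, exposure_us):
    # Sequential exposure and readout.
    return 1e6 / (frame_time_us(height, pixel_clock_mhz) + exposure_us)

def fps_free_running(height, pixel_clock_mhz, exposure_us):
    # Overlapped exposure/readout: the longer of the two governs.
    return 1e6 / max(frame_time_us(height, pixel_clock_mhz), exposure_us)

# Full height (960 lines), 40 MHz, 100 µs exposure -> ~24.6 fps,
# matching the measured table above.
print(round(fps_snapshot(960, 40.0, 100.0), 1))  # 24.6
```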

1.17.2.3.4 Sensor Data Device Structure

• CMOS image sensor (Type 1/3")

• Number of effective pixels: 1280 (H) x 960 (V)

• Unit cell size: 3.75um (H) x 3.75um (V)

1.17.2.3.4.1 Characteristics

1.17.2.3.4.2 Color version


1.17.2.3.4.3 Gray scale version


1.17.2.3.5 Device Feature And Property List

• mvBlueFOX-202bG Features (p. 210)

• mvBlueFOX-202bC Features (p. 210)

1.17.2.3.5.1 mvBlueFOX-202bG Features

1.17.2.3.5.2 mvBlueFOX-202bC Features

1.17.2.4 mvBlueFOX-[Model]202d (1.2 Mpix [1280 x 960])

1.17.2.4.1 Introduction The CMOS sensor module (MT9M024) incorporates the following features:

• resolution to 1280 x 960 gray scale or RGB Bayer mosaic

• supports window AOI mode with faster readout

• programmable analog gain (0..12 dB)

• progressive scan sensor (no interlaced problems!)

• high dynamic range (p. 154) 115 dB (with gray scale version)

• rolling shutter

• programmable readout timing with free capture windows and partial scan

• many trigger modes (free-running, hardware-triggered)


1.17.2.4.2 Details of operation The sensor uses the following acquisition mode:

• rolling shutter (ShutterMode = "ElectronicRollingShutter")

With the rolling shutter the lines are exposed for the same duration, but at a slightly different point in time.

Note

A rolling shutter can cause shear in moving objects.

Furthermore, the sensor offers following operating modes:

• free running mode (Overlapping exposure and readout)

• snapshot mode (Sequential exposure and readout)

1.17.2.4.2.1 Free running mode In free running mode, the sensor reaches its maximum frame rate. This is
done by overlapping the erase, exposure and readout phases. The sensor timing in free running mode is fixed, so
there is no control over when an acquisition starts. This mode is used with trigger mode Continuous.

To calculate the maximum frames per second (FPSmax) in free running mode, use the following formula:

FrameTime = (ImageHeight * (1650 / PixelClock)) + (25 * (1650 / PixelClock))

If exposure time is lower than frame time:

FPS_max = 1
----------------------
FrameTime

If exposure time is greater than frame time:

FPS_max = 1
------------------------
ExposureTime

1.17.2.4.2.2 Snapshot mode In snapshot mode, the image acquisition process consists of several sequential
phases:

1.17.2.4.2.3 Trigger Snapshot mode starts with a trigger. This can be either a hardware or a software signal.

The following trigger modes are available:

Mode Description
Continuous Free running, no external trigger signal needed.
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.


See also

Using external trigger with CMOS sensors (p. 150)

1.17.2.4.2.4 Erase, exposure and readout All pixels are light-sensitive during the same period of time. The
whole pixel core is reset simultaneously, and after the exposure time all pixel values are sampled together on the
storage node inside each pixel. The pixel core is read out line-by-line after exposure.

Note

Exposure and readout are carried out sequentially; as a consequence, no exposure is possible during readout.

The step width for the exposure time is 1 us.

Image data is then shifted out line-by-line and transferred to memory.

To calculate the maximum frames per second (FPSmax) in snapshot mode, use the following formula:

FrameTime = (ImageHeight * (1650 / PixelClock)) + (25 * (1650 / PixelClock))

FPS_max = 1
-----------------------------------
FrameTime + ExposureTime

1.17.2.4.3 Measured frame rates

AOI             PixelClock (MHz)  Exposure Time (us)  Maximal Frame Rate (fps)  PixelFormat
Maximum         40                100                 24.6                      Mono8
W:1036 x H:776  40                100                 30.3                      Mono8
W:838 x H:627   40                100                 37.1                      Mono8
W:678 x H:598   40                100                 38.9                      Mono8
W:550 x H:484   40                100                 47.6                      Mono8

1.17.2.4.4 Sensor Data Device Structure

• CMOS image sensor (Type 1/3")

• Number of effective pixels: 1280 (H) x 960 (V)

• Unit cell size: 3.75um (H) x 3.75um (V)

1.17.2.4.4.1 Characteristics

1.17.2.4.4.2 Color version


1.17.2.4.4.3 Gray scale version


1.17.2.4.5 Device Feature And Property List

• mvBlueFOX-ML#IGC202dG Features (p. 213)

• mvBlueFOX-ML#IGC202dC Features (p. 214)

1.17.2.4.5.1 mvBlueFOX-ML#IGC202dG Features


1.17.2.4.5.2 mvBlueFOX-ML#IGC202dC Features

1.17.2.5 mvBlueFOX-[Model]205 (5.0 Mpix [2592 x 1944])

1.17.2.5.1 Introduction The CMOS sensor module (MT9P031) incorporates the following features:

• resolution to 2592 x 1944 gray scale or RGB Bayer mosaic

• supports window AOI mode with faster readout

• programmable analog gain (0..32dB)

• progressive scan sensor (no interlaced problems!)

• rolling shutter / global reset release

• programmable readout timing with free capture windows and partial scan

• many trigger modes (free-running, hardware-triggered)

1.17.2.5.2 Details of operation The sensor uses two acquisition modes:

• rolling shutter (ShutterMode = "ElectronicRollingShutter") and

• global reset release shutter (ShutterMode = "GlobalResetRelease").

With the rolling shutter the lines are exposed for the same duration, but at a slightly different point in time:

Note

A rolling shutter can cause shear in moving objects.

The global reset release shutter, which is only available in triggered operation, starts the exposure of all rows
simultaneously; the reset of each row is released at the same time, too. However, the lines are read out just as
with the rolling shutter: line by line:


Note

This means that the bottom lines of the sensor are exposed to light longer! For this reason, this mode only makes
sense if there is no extraneous light and the flash duration is shorter than or equal to the exposure time.

Furthermore, the sensor offers two operating modes:

• free running mode (Overlapping exposure and readout)

• snapshot mode (Sequential exposure and readout) in triggered operation

1.17.2.5.2.1 Free running mode In free running mode, the sensor reaches its maximum frame rate. This is
done by overlapping the erase, exposure and readout phases. The sensor timing in free running mode is fixed, so
there is no control over when an acquisition starts. This mode is used with trigger mode Continuous.

To calculate the maximum frames per second (FPSmax) in free running mode, use the following formula:

FrameTime = (ImageWidth + 900) * ((ImageHeight + 9) / PixelClock)

If exposure time is lower than frame time:

FPS_max = 1
----------------------
FrameTime

If exposure time is greater than frame time:

FPS_max = 1
------------------------
ExposureTime

1.17.2.5.2.2 Snapshot mode In snapshot mode, the image acquisition process consists of several sequential
phases:

1.17.2.5.2.3 Trigger Snapshot mode starts with a trigger. This can be either a hardware or a software signal.

The following trigger modes are available:

Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel Start an exposure of a frame as long as the trigger input is below the trigger threshold.
OnHighLevel Start an exposure of a frame as long as the trigger input is above the trigger threshold.
OnHighExpose Each rising edge of trigger signal acquires one image, exposure time corresponds to pulse
width.


See also

Using external trigger with CMOS sensors (p. 150)

1.17.2.5.2.4 Erase, exposure and readout All pixels are light-sensitive during the same period of time. The
whole pixel core is reset simultaneously, and after the exposure time all pixel values are sampled together on the
storage node inside each pixel. The pixel core is read out line-by-line after exposure.

Note

Exposure and readout are carried out sequentially; as a consequence, no exposure is possible during readout.

The step width for the exposure time is 1 us.

Image data is then shifted out line-by-line and transferred to memory.

To calculate the maximum frames per second (FPSmax) in snapshot mode, use the following formula:

FrameTime = (ImageWidth + 900) * ((ImageHeight + 9) / PixelClock)

FPS_max = 1
------------------------------------
(FrameTime + ExposureTime)

1.17.2.5.2.5 Use Cases As mentioned before, "global reset release" only makes sense if a flash is used which
is brighter than the ambient light. The settings in wxPropView (p. 69) will look like this:

In this case, DigOut0 is high for the duration of the exposure time (which is synchronized with the global reset
release). This signal can be used to trigger a flash light.

1.17.2.5.3 Measured frame rates

AOI              PixelClock (MHz)  Exposure Time (us)  Maximal Frame Rate (fps)  PixelFormat
Maximum          40                100                 5.9                       Mono8
W:2098 x H:1574  40                100                 8.4                       Mono8
W:1696 x H:1272  40                100                 12.0                      Mono8
W:1376 x H:1032  40                100                 16.9                      Mono8
W:1104 x H:832   40                100                 23.7                      Mono8
W:800 x H:616    40                100                 32                        Mono8
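The formulas above for the 205 sensor can be sketched in the same way (Python, not part of the manual; PixelClock in MHz, times in µs). At full resolution with 40 MHz and 100 µs exposure, the result rounds to the 5.9 fps of the measured table.

```python
def frame_time_us(width: int, height: int, pixel_clock_mhz: float) -> float:
    """FrameTime = (ImageWidth + 900) * ((ImageHeight + 9) / PixelClock)."""
    return (width + 900) * (height + 9) / pixel_clock_mhz

def fps_free_running(width, height, pixel_clock_mhz, exposure_us):
    # Overlapped exposure/readout: the longer of frame time and exposure governs.
    return 1e6 / max(frame_time_us(width, height, pixel_clock_mhz), exposure_us)

def fps_snapshot(width, height, pixel_clock_mhz, exposure_us):
    # Sequential exposure and readout.
    return 1e6 / (frame_time_us(width, height, pixel_clock_mhz) + exposure_us)

print(round(fps_snapshot(2592, 1944, 40.0, 100.0), 1))  # 5.9
```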

1.17.2.5.4 Sensor Data Device Structure

• Progressive scan CMOS image sensor


• Image size: 5.70(H)x4.28(V)mm (Type 1/2.5")

• Number of effective pixels: 2592 (H) x 1944 (V)

• Unit cell size: 2.2um (H) x 2.2um (V)

1.17.2.5.4.1 Characteristics

1.17.2.5.4.2 Color version

1.17.2.5.4.3 Gray scale version



1.17.2.5.5 Device Feature And Property List

• mvBlueFOX-205G Features (p. 218)

• mvBlueFOX-205C Features (p. 218)

1.17.2.5.5.1 mvBlueFOX-205G Features

1.17.2.5.5.2 mvBlueFOX-205C Features
