mvBlueFOX Technical Manual
The manual starts with technical data about the device, such as sensor details (for cameras) and electrical characteristics, as well as a quick start chapter. After that, various information follows on tools and software packages that can help with developing an application or gaining a better understanding of the device.
• The installation package comes with a couple of tools offering a graphical user interface (GUI (p. 69)) to control mvIMPACT Acquire compliant devices.
– wxPropView (p. 69) can be used to capture image data and to change parameters like AOI or gain.
– mvDeviceConfigure (p. 111) can be used, e.g., to perform firmware updates, to assign a unique ID to a device that is stored in non-volatile memory, or to configure the log message output.
• It is possible to define sequences of operating steps to control acquisition or time-critical I/O. This FPGA built-in functionality is called Hardware Real-Time Controller (short: HRTC).
• The use cases section offers solutions and explanations for standard use cases.
This chapter gives you a short overview of how to get started with your device and where to find the necessary information in the manual. It also explains, or links to, the concepts behind the driver and the image acquisition. Furthermore, it explains how to get started with programming your own applications.
1.1.2.1 Installation
• Windows:
The driver supplied with the MATRIX VISION product is the interface between the programmer and the hardware. The driver concept of MATRIX VISION provides a standardized programming interface to all image processing products made by MATRIX VISION GmbH.
The advantage of this concept for the programmer is that a developed application runs, without any major modifications, on the various image processing products made by MATRIX VISION GmbH. You can also incorporate new driver versions, which are available for download free of charge on our website: http://www.matrix-vision.com.
• ² Separately available for 32 bit and 64 bit. Requires at least one installed driver package.
• ⁴ Part of the NeuroCheck installer, but requires at least one installed frame grabber driver.
• ⁵ Part of the mvIMPACT SDK installation. However, new designs should use the .NET libs that are now part of mvIMPACT Acquire ("mv.impact.acquire.dll"). The namespace "mv.impact.acquire" of "mv.impact.acquire.dll" provides a more natural and more efficient access to the same features as contained in the namespace "mvIMPACT_NET.acquire" of "mvIMPACT_NET.dll", which is why the latter should only be used for backward compatibility but NOT when developing a new application.
• ⁶ Part of Micro-Manager.
1.1.2.2.1 NeuroCheck support A couple of devices are supported by NeuroCheck. However, between NeuroCheck 5.x and NeuroCheck 6.x there has been a breaking change in the internal interfaces. Therefore, the list of supported devices also differs from one version to another, and some additional libraries might be required.
1.1.2.2.2 VisionPro support Every mvIMPACT Acquire driver package on Windows comes with an adapter to VisionPro from Cognex. The installation order does not matter. After the driver package and VisionPro have been installed, the next time VisionPro is started it will allow selecting the mvIMPACT Acquire device. No additional steps are needed.
MATRIX VISION devices that also comply with the GigE Vision or USB3 Vision standard don't need any additional software at all, but can also use VisionPro's built-in GigE Vision or USB3 Vision support.
1.1.2.2.3 HALCON support HALCON comes with built-in support for mvIMPACT Acquire compliant devices, so
once a device driver has been installed for the mvIMPACT Acquire device, it can also be operated from a HALCON
environment using the corresponding acquisition interface. No additional steps are needed.
MATRIX VISION devices that also comply with the GigE Vision or USB3 Vision standard don't need any additional software at all, but can also use HALCON's built-in GigE Vision or USB3 Vision support.
As some mvIMPACT Acquire device driver packages also come with a GenTL compliant interface, these can also
be operated through HALCON's built-in GenTL acquisition interface.
1.1.2.2.4 LabVIEW support Every mvIMPACT Acquire compliant device can be operated under LabVIEW
through an additional set of VIs which is shipped by MATRIX VISION as a separate installation ("mvLabVIEW
Acquire").
MATRIX VISION devices that also comply with the GigE Vision or USB3 Vision standard don't need any additional
software at all, but can also be operated through LabVIEW's GigE Vision or USB3 Vision driver packages.
1.1.2.2.5 DirectShow support Every mvIMPACT Acquire compliant device driver package comes with an interface to DirectShow. In order to be usable from a DirectShow compliant application, devices must first be registered for DirectShow support. How to do this is explained here (p. 122).
1.1.2.2.6 Micro-Manager support Every mvIMPACT Acquire compliant device can be operated under Micro-Manager ( https://micro-manager.org) when using mvIMPACT Acquire 2.18.0 or later and at least Micro-Manager 1.4.23 build AFTER 15.12.2016. The adapter needed is part of the Micro-Manager release. Additional information can be found here: https://micro-manager.org/wiki/MatrixVision.
1.1.2.2.6.1 Source code
• https://valelab4.ucsf.edu/svn/micromanager2/trunk/DeviceAdapters/MatrixVision/
• https://valelab4.ucsf.edu/trac/micromanager/browser/DeviceAdapters/MatrixVision
The image acquisition is based on queues to avoid the loss of single images. With this concept you can acquire images via single or triggered acquisition. For a detailed description of the acquisition concept, please have a look at "How the capture process works" in the mvIMPACT_Acquire_API manual matching the programming language you are working with.
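The following minimal sketch illustrates this queue-based capture cycle using the mvIMPACT Acquire C++ interface (class and method names as found in the C++ API; please verify them against the API reference matching your driver version):

#include <iostream>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

int main()
{
    DeviceManager devMgr;
    Device* pDev = devMgr.getDevice(0); // first device found in the system
    if (!pDev)
    {
        std::cout << "No device found!" << std::endl;
        return 1;
    }
    pDev->open();
    FunctionInterface fi(pDev);
    fi.imageRequestSingle();                             // queue one request
    const int requestNr = fi.imageRequestWaitFor(10000); // wait up to 10 s
    if (fi.isRequestNrValid(requestNr))
    {
        const Request* pRequest = fi.getRequest(requestNr);
        if (pRequest->isOK())
        {
            std::cout << "Image captured: " << pRequest->imageWidth.read()
                      << "x" << pRequest->imageHeight.read() << std::endl;
            // pRequest->imageData.read() points to the pixel data here
        }
        fi.imageRequestUnlock(requestNr); // hand the buffer back to the queue
    }
    return 0;
}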
1.1.2.4 Programming
To understand how to control the device and handle image data, reading the main pages of the corresponding mvIMPACT Acquire interface reference provides a good introduction. Additionally, please have a look at the example programs. Several basic examples are available. For details, please refer to Developing Applications Using The mvIMPACT Acquire SDK (p. 120), depending on the programming language you will use for your application.
1.2 Imprint
[email protected]
Author
U. Lansche
Date
2019
Since the documentation is published electronically, an updated version may be available online. For this reason we
recommend checking for updates on the MATRIX VISION website.
MATRIX VISION cannot guarantee that the data is free of errors or is accurate and complete and, therefore, assumes no liability for loss or damage of any kind incurred directly or indirectly through the use of the information in this document.
MATRIX VISION reserves the right to change technical data and design and specifications of the described products
at any time without notice.
Copyright
MATRIX VISION GmbH. All rights reserved. The text, images and graphical content are protected by copyright
and other laws which protect intellectual property. It is not permitted to copy or modify them for trade use or
transfer. They may not be used on websites.
All other product and company names in this document may be the trademarks and tradenames of their
respective owners and are hereby acknowledged.
Parts of the log file creation and the log file display make use of Sarissa (Website: http://dev.abiss.gr/sarissa), which is distributed under the GNU GPL version 2 or higher, GNU LGPL version 2.1 or higher, and Apache Software License 2.0 or higher. The Apache Software License 2.0 is part of this driver package.
1.3 Revisions
Date Description
6. April 2020 Updated mvBlueFOX (p. 16) .
4. February 2020 Added some image save possibilities in How to see the first image (p. 74) .
19. March 2019 Added Using the analysis plots (p. 79) .
09. November 2018 Added "Hard Disk Recording" in wxPropView (p. 69).
21. December 2016 Updated Setting up multiple display support and/or work with several capture settings in parallel (p. 88).
15. December 2016 Added Micro-Manager in Driver concept (p. 2).
23. August 2016 Added measured frame rates of sensors mvBlueFOX-[Model]200w (0.4 Mpix [752 x
480]) (p. 201)
mvBlueFOX-[Model]202b (1.2 Mpix [1280 x 960]) (p. 207)
mvBlueFOX-[Model]202d (1.2 Mpix [1280 x 960]) (p. 210)
mvBlueFOX-[Model]205 (5.0 Mpix [2592 x 1944]) (p. 214).
01. August 2016 Extended use case Take two images with different expose times after an external
trigger (HRTC) (p. 172).
11. May 2016 Added Quick Setup Wizard (p. 70).
02. December 2015 Updated CE declarations (p. 11).
25. November 2015 Added Troubleshooting (p. 126).
27. October 2015 Added Command-line options (p. 110).
04. August 2015 Added Windows 10 support.
19. June 2015 Restructured chapter Use Cases (p. 127).
23. April 2015 Added use case Edge controlled triggering (HRTC) (p. 174).
16. April 2015 Updated supported Windows versions.
15. April 2015 Added lens protrusion.
11. March 2015 Added chapter Accessing log files (p. 97).
26. February 2015 Moved Creating double acquisitions (HRTC) (p. 171) to HRTC Use Cases.
27. January 2015 Added use case Using VLC Media Player (p. 129). Renewed Order code nomenclature (p. 16).
09. January 2015 Extended sample Using 2 mvBlueFOX-MLC cameras in Master-Slave mode (p. 163).
10. December 2014 Corrected Order code nomenclature (p. 16) : mvBlueFOX cameras without filter have
the order code 9 (excluding mvBlueFOX-MLC).
01. December 2014 Extended use case Using 2 mvBlueFOX-MLC cameras in Master-Slave mode
(p. 163).
25. November 2014 Corrected the possible HRTC - Hardware Real-Time Controller (p. 118) steps to 256.
21. October 2014 Added description about the record mode in How to see the first image (p. 74).
17. July 2014 Added use case Introducing LUTs (p. 158).
25. April 2014 Added description about Working with the hardware Look-Up-Table (LUT) (p. 109).
25. March 2014 Added use case Correcting image errors of a sensor (p. 131).
10. March 2014 mvDeviceConfigure (p. 111) extended.
Added S-mount lensholder for mvBlueFOX-MLC in Order code nomenclature (p. 16).
18. February 2014 Updated Characteristics (p. 212) of mvBlueFOX-[Model]202d (1.2 Mpix [1280 x 960])
(p. 210).
13. January 2014 Changed figure 3 in Using 2 mvBlueFOX-MLC cameras in Master-Slave mode
(p. 163).
12. December 2013 Changed figure in Using 2 mvBlueFOX-MLC cameras in Master-Slave mode (p. 163).
06. December 2013 Added information about Changing the view of the property grid to assist writing
code that shall locate driver features (p. 96).
22. November 2013 Extended information in Adjusting sensor of camera models -x00w (p. 151) and Adjusting sensor of camera models -x02d (-1012d) (p. 154).
30. October 2013 Enhanced cable description in 12-pin Wire-to-Board header (USB 2.0 / Dig I/O) (p. 54).
• "mvIMPACT_Acquire_API_CPP_manual.chm",
• "mvIMPACT_Acquire_API_C_manual.chm", and
30. September 2012 Moved Working with the Hardware Real-Time Controller (HRTC) (p. 168) to Use
Cases (p. 127).
20. September 2012 Added chapter "Porting existing code written with versions earlier than 3.0.0".
17. August 2012 Added use case Adjusting sensor of camera models -x02d (-1012d) (p. 154).
16. July 2012 Extended "Characteristics of the digital inputs" in D-Sub 9-pin (male) (p. 41).
21. June 2012 Added description, how to install the Linux driver using the installer script (Installing the
mvIMPACT Acquire driver (p. 29)).
21. June 2012 Added information (electrical characteristic, pinning (p. 53)) about LVTTL version
(mvBlueFOX-MLC2xxx-XLW).
02. April 2012 Enhanced chapter Output sequence of color sensors (RGB Bayer) (p. 66) and
added chapter Bilinear interpolation of color sensors (RGB Bayer) (p. 67).
17. February 2012 Renewed chapter wxPropView (p. 69).
09. November 2011 Added Settings behaviour during startup (p. 38) in chapter Quickstart (p. 20).
21. September 2011 Added SXGA sensor (p. 210) -202d. Added mvBlueFOX-IGC (p. 60) information.
26. July 2011 Removed chapter EventHandling. See "Porting existing code written with versions earlier than 2.0.0".
11. July 2011 Added chapter "Callback demo".
08. July 2011 Added chapter Using 2 mvBlueFOX-MLC cameras in Master-Slave mode (p. 163).
06. June 2011 Added chapter "Porting existing code written with versions earlier than 2.0.0".
31. May 2011 Added chapter Creating double acquisitions (HRTC) (p. 171).
26. April 2011 Added chapter Using external trigger with CMOS sensors (p. 150).
Updated chapter Dimensions and connectors (p. 53) (digital inputs TTL, digital outputs
TTL) of mvBlueFOX-MLC version.
18. January 2011 Added chapter Setting up multiple display support and/or work with several capture
settings in parallel (p. 88).
29. Nov. 2010 Added ADC resolutions in Sensor Overview (p. 63).
19. October 2010 Added chapter "Chunk data format".
07. Oct. 2010 Added High-Speed USB design guidelines (p. 11).
22. Sep. 2010 Added suitable for mvBlueFOX-MLC What's inside and accessories (p. 19).
26. Aug. 2010 Added cable end color of board-to-wire cable in Dimensions and connectors (p. 53).
Added chapter about Creating user data entries (p. 160).
02. Aug. 2010 mvBlueFOX-200W and mvBlueFOX-MLC100W support flash control output: CMOS
sensors (p. 65).
Added chapter Import and Export images (p. 87).
21. Jun. 2010 Included exposure modes in the frame rate calculator of the Sensor Overview (p. 63).
31. May 2010 Added chapter Single-board version (mvBlueFOX-MLC2xx) (p. 53).
19. Apr. 2010 Added new example ContinuousCaptureDirectX.
01. Apr. 2010 Added Use Cases (p. 127) about high dynamic range (p. 151) of sensor mvBlueFOX-[Model]200w (0.4 Mpix [752 x 480]) (p. 201).
10. Feb. 2010 Added note about Windows XP Embedded in System Requirements (p. 20).
28. Jan. 2010 Added chapter Copy grid data to the clipboard (p. 86).
13. Jan. 2010 Added chapter "Porting existing code written with versions earlier than 1.12.0".
11. Jan. 2010 Due to a software update, documentation of CMOS sensor (-x00w) (p. 201) updated.
10. Nov. 2009 Added Windows 7 as supported operating system.
22. Oct 2009 Updated sensor data (p. 63).
19. Oct 2009 Updated wxPropView (p. 69) description about handling settings.
22. Sep. 2009 Added Wide-VGA sensor (p. 201) and removed sensor -x02.
17. Sep. 2009 Updated frame rate calculator of CCD sensors (p. 63).
05. May 2009 Added figures which shows "how to connect flash to digital output".
05. May 2009 Added book Use Cases (p. 127), which offers solutions and explanations for standard
use cases.
22. Jan. 2009 Added information about how to test the general trigger functionality of the camera in Setting up external trigger and flash control (p. 102).
26. Nov. 2008 Added chapter Setting up external trigger and flash control (p. 102).
28. Oct 2008 Added mvBlueFOX-M accessory Accessories mvBlueFOX-Mxxx (p. 50).
21. Jul. 2008 Added power supply note in 4-pin circular plug-in connector with lock (USB 2.0)
(p. 40).
11. Jun. 2008 Added preliminary sensor data of -105 in Sensor Overview (p. 63).
10. Jun. 2008 Updated sensor data of -121 in Sensor Overview (p. 63).
09. Apr. 2008 Corrected Figure 4: DIG OUT mvBlueFOX-1xx in Dimensions and connectors (p. 40).
25. Feb. 2008 Added note about EEPROM of mvBlueFOX-M in Dimensions and connectors (p. 46).
19. Feb. 2008 Corrected sensor data in Sensor Overview (p. 63).
30. Jan. 2008 Added note about the obsolete differentiation between 'R' and 'U' version in chapter
Dimensions and connectors (p. 40).
01. Oct 2007 Update sensor data in chapter Order code nomenclature (p. 16).
20. Aug. 2007 Added part number of JST connectors used on the mvBlueFOX-M (see: Dimensions
and connectors (p. 46)).
31. Jul. 2007 Rewrote "How to use this manual". This book now includes a getting started chapter (see: Composition of the manual (p. 1)).
11. Jun. 2007 Updated images in digital I/O description of mvBlueFOX-M (see: Dimensions and connectors (p. 46)).
29. May 2007 Added an attention in chapter Quickstart (p. 20) section Installing the hardware (p. 24)
(Windows) and Installing the hardware (p. 34) (Linux).
23. May 2007 Added calculators to calculate the frame rate of the sensors (see specific sensor
documentation: Sensor Overview (p. 63)).
23. Apr. 2007 Updated sensor description and added description of Micron's CMOS 1280x1024 (-102a) (p. 204) sensor.
02. Apr. 2007 Updated description of mvBlueFOX-M1xx digital I/O in chapter Dimensions and connectors (p. 46).
29. Jan. 2007 Repainted DigI/O images (see: Dimensions and connectors (p. 40)).
24. Nov. 2006 Added attention to the DigI/O description of the mvBlueFOX-M (see: Dimensions and
connectors (p. 46)).
14. Nov. 2006 Updated Linux installation documentation (see: Quickstart (p. 20)).
20. Oct 2006 Updated Linux installation documentation (see: Quickstart (p. 20)).
11. Sep. 2006 Divided the Quickstart chapter into Linux® and Windows® (see: Quickstart (p. 20)).
8. Sep. 2006 Updated CCD timing in CCD 640 x 480 (1/3") documentation (see: mvBlueFOX-[Model]220a (0.3 Mpix [640 x 480]) (p. 182)).
5. Sep. 2006 Updated the sensor data (see: Sensor Overview (p. 63)).
23. Aug. 2006 Added general tolerance of the housing (see: Technical Data (p. 40)).
28. Jul. 2006 Removed some linking errors.
19. Jul. 2006 Added WEEE-Reg.-No. (see: European Union Declaration of Conformity statement
(p. 11)).
Added ambient temperature of the mvBlueFOX standard version (see: Components
(p. 45)).
17. Jun. 2006 New chapter "Configure the log output using mvDeviceConfigure" (see: "Configure the
log output using mvDeviceConfigure").
07. Jun. 2006 Extended the HRTC documentation (see: How to use the HRTC (p. 119)).
02. Jun. 2006 Fixed image errors in CCD 640 x 480 (1/3") documentation (see: mvBlueFOX-[Model]220a (0.3 Mpix [640 x 480]) (p. 182)).
01. Jun. 2006 Updated the chm index.
18. May 2006 Sensor description: Changed black/white to gray scale (see: Sensor Overview (p. 63)).
14. Feb. 2006 Added CCD 640 x 480 (1/3") (see: mvBlueFOX-[Model]220a (0.3 Mpix [640 x 480])
(p. 182)).
13. Feb. 2006 Corrected the image of the "4-pin circular plug-in connector" (see: Dimensions and connectors (p. 40)).
Note
A note indicates important information that helps you optimize usage of the products.
Warning
A warning indicates how to avoid either potential damage to hardware or loss of data.
Attention
All due care and attention has been taken in preparing this manual. In view of our policy of continuous product improvement, however, we can accept no liability for the completeness and correctness of the information contained in this manual. We make every effort to provide you with a flawless product.
In the context of the applicable statutory regulations, we shall accept no liability for direct damage, indirect damage or third-party damage resulting from the acquisition or operation of a MATRIX VISION product. Our liability for intent and gross negligence is unaffected. In any case, the extent of our liability shall be limited to the purchase price.
1.4.2 Webcasts
This icon indicates a webcast about an issue which is available on our website.
We cannot and do not take any responsibility for damage caused to you or to any other equipment connected to the mvBlueFOX. Similarly, the warranty will be void if damage is caused by not following the manual.
Handle the mvBlueFOX with care. Do not misuse the mvBlueFOX. Avoid shaking, striking, etc. The mvBlueFOX could be damaged by faulty handling or a short circuit.
Use a soft cloth lightly moistened with a mild detergent solution when cleaning the camera.
Never point the camera towards the sun. Whether the camera is in use or not, never aim it at the sun or other extremely bright objects. Otherwise, blooming or smear may be caused.
Please keep the camera closed or mount a lens on it to prevent the CCD or CMOS sensor from getting dusty.
Clean the CCD/CMOS faceplate with care. Do not clean the CCD or the CMOS with strong or abrasive detergents. Use lens tissue or a cotton-tipped applicator and ethanol.
Never connect two USB cables to the mvBlueFOX even if one is only connected to a PC.
• Handle with care and avoid damage to electrical components by electrostatic discharge (ESD):
If you want to make your own High-Speed (HS) USB cables, please pay attention to the following design guidelines:
• Route High-Speed (HS) USB signals with a minimum number of vias and sharp edges!
• Avoid stubs!
• Do not cut off power planes VCC or GND under the signal line.
• If possible, do not route signals closer than 20 ∗ h to the copper layer edge (h means the height above the copper layer).
– A 7.5 mil printed circuit board track with 7.5 mil spacing results in approx. 90 Ohm @ 110 um height above the GND plane.
– Other rules apply when using two-layer printed circuit boards.
• Be sure that there is a minimum distance of 20 mil between the High-Speed USB signal pair and other printed circuit board tracks (for optimal signal quality).
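As a rough cross-check of the 90 Ohm figure above, the widely used IPC-2141 style approximations for a microstrip trace and an edge-coupled differential pair can be applied (approximations only; the real impedance depends on the complete stack-up and should be verified with a field solver):

Z_0 \approx \frac{87}{\sqrt{\varepsilon_r + 1.41}} \, \ln\!\left(\frac{5.98\,h}{0.8\,w + t}\right), \qquad Z_{\mathrm{diff}} \approx 2\,Z_0 \left(1 - 0.48\,e^{-0.96\,s/h}\right)

where w is the trace width, t the trace thickness, s the spacing between the two traces of the pair, h the height above the GND plane and \varepsilon_r the relative permittivity of the board material.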
The mvBlueFOX complies with the provision of the following European Directives:
• For EN 61000-6-3:2007, mvBlueFOX-IGC with digital I/O needs the Steward snap-on ferrite
28A0350-0B2 on I/O cable.
• For EN 61000-6-3:2007, mvBlueFOX-MLC with digital I/O needs the Würth Elektronik snap-on
ferrite WE74271142 on I/O cable and copper foil on USB.
MATRIX VISION complies with the EU directive WEEE 2002/96/EC on waste electrical and electronic equipment and is registered under WEEE-Reg.-No. DE 25244305.
Class B
This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to Part
15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference
when the equipment is operated in a residential environment. This equipment generates, uses, and can radiate
radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful
interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation. If the equipment does cause harmful interference to radio or television reception, the user is encouraged to try to correct the interference by one or more of the following measures:
• Reorient or relocate the receiving antenna.
• Increase the separation between the equipment and the receiver.
• Connect the equipment into an outlet on a circuit different from that to which the receiver is connected.
• Consult the dealer or an experienced radio/TV technician for help.
You are cautioned that any changes or modifications not expressly approved in this manual could void your authority
to operate this equipment. The shielded interface cable recommended in this manual must be used with this
equipment in order to comply with the limits for a computing device pursuant to Subpart B of Part 15 of FCC Rules.
• To be compliant with FCC Class B, the mvBlueFOX-IGC requires an I/O cable with a retrofittable ferrite to be used.
This apparatus complies with the Class B limits for radio noise emissions set out in the Radio Interference Regula-
tions.
Cet appareil est conforme aux normes classe B pour bruits radioélectriques, spécifiées dans le Règlement sur le
brouillage radioélectrique.
1.6 Introduction
The mvBlueFOX is a compact industrial CCD & CMOS camera solution for any PC with a Hi-Speed USB (USB 2.0) port. Its superior image quality makes it suitable for most applications. Integrated preprocessing like binning reduces the PC load to a minimum. The standard Hi-Speed USB interface guarantees an easy integration without any additional interface board. To make the cameras flexible for any industrial application, the mvIMPACT image processing tools as well as different example solutions are available.
Figure 1: mvBlueFOX
• machine vision
• robotics
• surveillance
• microscopy
• medical imaging
Under the name mvBlueFOX-M1xx, the industrial camera mvBlueFOX is also available as a single-board version.
1.6.1.1 mvBlueFOX
- A: Sensor model
220: 0.3 Mpix, 640 x 480, 1/4", CCD
220a: 0.3 Mpix, 640 x 480, 1/3", CCD
200w: 0.4 Mpix, 752 x 480, 1/3", CMOS
221: 0.8 Mpix, 1024 x 768, 1/3", CCD
202a: 1.3 Mpix, 1280 x 1024, 1/2", CMOS
223: 1.4 Mpix, 1360 x 1024, 1/2", CCD
224: 1.9 Mpix, 1600 x 1200, 1/1.8", CCD
205: 5.0 Mpix, 2592 x 1944, 1/2.5", CMOS
- B: Sensor color
G: Gray scale version
C: Color version
- (1): Lensholder
1: C-mount with adjustable backfocus (standard)
2: CS-mount with adjustable backfocus
3: S-mount
- (2): Filter
1: IR-CUT (standard)
2: Glass
3: Daylight cut
9: None
- (3): Case
1: Color blue (standard)
2: Color black, no logo, no label MATRIX VISION
3: Color blue, no logo, no label MATRIX VISION
9: None
- (4): Misc
1: None (standard)
1.6.1.2 mvBlueFOX-M
- A: Sensor model
220: 0.3 Mpix, 640 x 480, 1/4", CCD
220a: 0.3 Mpix, 640 x 480, 1/3", CCD
200w: 0.4 Mpix, 752 x 480, 1/3", CMOS
221: 0.8 Mpix, 1024 x 768, 1/3", CCD
202a: 1.3 Mpix, 1280 x 1024, 1/2", CMOS
223: 1.4 Mpix, 1360 x 1024, 1/2", CCD
224: 1.9 Mpix, 1600 x 1200, 1/1.8", CCD
205: 5.0 Mpix, 2592 x 1944, 1/2.5", CMOS
- B: Sensor color
G: Gray scale version
C: Color version
- (1): Lensholder
1: No holder (standard)
2: C-mount with adjustable backfocus
3: CS-mount with adjustable backfocus
4: S-mount #9031
5: S-mount #9033
- (2): Filter
1: None (standard)
2: IR-CUT
3: Glass
4: Daylight cut
- (3): Misc
1: None (standard)
- (4): Misc
1: None (standard)
1.6.1.3 mvBlueFOX-IGC
- A: Sensor model
200w: 0.4 Mpix, 752 x 480, 1/3", CMOS
202b: 1.2 Mpix, 1280 x 960, 1/3", CMOS
202d: 1.2 Mpix, 1280 x 960, 1/3", CMOS
202a: 1.3 Mpix, 1280 x 1024, 1/2", CMOS
205: 5.0 Mpix, 2592 x 1944, 1/2.5", CMOS
- B: Sensor color
G: Gray scale version
C: Color version
- (1): Lensholder
1: CS-mount without adjustable backfocus (standard)
2: C-mount without adjustable backfocus (CS-mount with add. 5 mm extension ring)
3: C-mount with adjustable backfocus
- (2): Filter
1: IR-CUT (standard)
2: Glass
3: Daylight cut
9: None
- (3): Case
1: Color blue (standard)
2: Color black, no logo, no label MATRIX VISION
9: None
- (4): I/O
1: None (standard)
2: With I/O #08727
1.6.1.4 mvBlueFOX-MLC
- A: Sensor model
200w: 0.4 Mpix, 752 x 480, 1/3", CMOS
202b: 1.2 Mpix, 1280 x 960, 1/3", CMOS
202d: 1.2 Mpix, 1280 x 960, 1/3", CMOS
202a: 1.3 Mpix, 1280 x 1024, 1/2", CMOS
205: 5.0 Mpix, 2592 x 1944, 1/2.5", CMOS
- B: Sensor color
G: Gray scale version
C: Color version
- C: Mini USB
U: with Mini USB (standard)
X: without Mini USB
- D: Digital I/Os
O: 1x IN + 1x OUT opto-isolated (standard)
T: 2x TTL IN + 2x TTL OUT
L: 3x LVTTL IN
- E: Connector
W: board-to-wire (standard)
B: board-to-board
- (1): Lensholder
1: No holder (standard)
2: C-mount with adjustable backfocus (CS-mount with add. 5 mm extension ring)
3: CS-mount with adjustable backfocus
4: C-mount without adjustable backfocus
5: CS-mount without adjustable backfocus
- (2): Filter
1: None (standard)
2: IR-CUT
3: Glass
4: Daylight cut
- (3): Misc
1: None (standard)
- (4): Misc
1: None (standard)
Examples:
¹: -1111 is the standard delivery variant and for this reason it is not mentioned.
Due to the varying fields of application, the mvBlueFOX is shipped without accessories. The package contains:
• mvBlueFOX
• instruction leaflet
For the first use of the mvBlueFOX we recommend the following accessories to get the camera up and running:
Attention
Depending on the customer order, if the mvBlueFOX-MLC is shipped without a lens holder, it will be shipped with a protective foil on the sensor. Before usage, please remove this foil!
1.7 Quickstart
1.7.1 Windows
Note
Since mvIMPACT Acquire version 2.8.0, you may have to update your Windows Installer, at least when using Windows XP. The necessary packages are available from Microsoft's website: http://www.microsoft.com/en-US/download/details.aspx?id=8483
All necessary drivers are available from the MATRIX VISION website at www.matrix-vision.de, section "Products
-> Cameras -> your interface -> your product -> Downloads".
Note
The mvBlueFOX is a USB 2.0 compliant camera device and therefore needs a functioning USB 2.0 port. If you are not sure about this, please follow these steps:
2. Enter
msinfo32
4. If there is an entry like "USB 2.0 Root Hub" or "ROOT_HUB20", your system has USB 2.0.
Please be sure that your system has at least one free USB port.
Warning
Before connecting the mvBlueFOX, please install the software and driver first!
All necessary drivers are available from the MATRIX VISION website:
https://www.matrix-vision.com "Products -> Hardware -> mvBlueFOX -> Downloads Tab".
• "Base Libraries"
This feature contains all necessary files for property handling and display. Therefore, it is not selectable.
• "mvBlueFOX driver"
This is also not selectable.
• "Tools"
This feature contains tools for the mvBlueFOX (e.g. to configure MATRIX VISION devices (mvDevice←-
Configure) or to acquire images (wxPropView)).
• "Developer API"
The Developer API" contains the header for own programming. Additionally you can choose the examples,
which installs the sources of wxPropView, mvIPConfigure and various small examples. The project files
shipped with the examples have been generated with Visual Studio 2013. However projects and makefiles
for other compilers can be generated using CMake fairly easy. See CMake section in the C++ manual for
additional details. - \b "Documentation"
This will install this mvBlueFOX manual a single HTML help file (.chm).
Warning
Before connecting the mvBlueFOX, please install the software and driver first!
It is not necessary to shut down your system. On a USB port, it is possible to hot plug any USB device (hot plugging lets you plug in new devices and use them immediately).
Warning
If using the Binder connector first connect the cable to the camera, then connect the camera to the PC.
Plug the mvBlueFOX into a USB 2.0 port. After plugging in the mvBlueFOX, Windows® shows "Found New Hardware" and starts the Windows Hardware Wizard.
The Wizard asks you for the driver. The installation does not need any Windows® automatic search at this step, and it is recommended to enter the driver directory by hand. Choose "No, not this time" and press "Next".
The Hardware Wizard will search the registry for the device identification, and after a while the Wizard prompts you to continue the installation or to abort it. Windows® will also display a message to inform the user that this driver is not digitally signed by Microsoft. You have to select "Continue anyway", otherwise the driver can't be installed. If you don't want to install a driver that is not signed, you must stop the installation, but then you can't work with the mvBlueFOX camera.
After the Windows® Logo testing, you have to click "Finish" to complete the installation.
Now, you can find the installed mvBlueFOX in the Windows® "Device Manager" under image devices.
After this installation, you can acquire images with the mvBlueFOX. Simply start the application wxPropView (p. 69)
(wxPropView.exe) from
mvBlueFOX/bin.
1.7.2 Linux
Kernel requirements
Note
This is different from devfs support! The USB device file system should, of course, be mounted at /proc/bus/usb.
Note
– The 32 bit version will run on a 64-bit Linux system if the other library requirements are met with 32-bit
libraries. I.e. you cannot mix 64 and 32-bit libraries and applications.
– Versions for Linux on x86-64 (64-bit), PowerPC, ARM or MIPS may be possible on request.
• GNU compiler version GCC 3.2.x or greater and associated tool chain.
Note
Our own modified version of libusb has been statically linked to our library and is therefore included, so
libusb is not a requirement.
• libexpat ( http://expat.sourceforge.net)
• Optional: wxWidgets 2.6.x (non Unicode) for the wxWidget test programs.
• The compiler used is gcc 4.1.0 and may need to be installed. Use the "gcc" and "gcc-c++" RPMs. Other RPMs may be installed automatically due to dependencies (e.g. make).
• libexpat will almost definitely be installed already in any software configuration. The RPM is called "expat".
• Install the wxWidgets "wxGTK" and "wxGTK-develop" RPMs. Others that will be automatically installed due
to dependencies include "wxGTK-compat" and "wxGTK-gl". Although the MATRIX VISION software does not
use the ODBC database API the SuSE version of wxWidgets has been compiled with ODBC support and the
RPM does not contain a dependency to automatically install ODBC. For this reason you must also install the
"unixODBC-devel" RPM.
• OpenSuSE 10.1 uses the udev system so a separate hotplug installation is not needed.
1.7.2.1.3 Hardware requirements USB 2.0 host controller (Hi-Speed); a USB 1.1 host controller will also work (but with a max. frame rate of only 3 to 4 fps at 640x480).
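A rough bandwidth estimate shows where this limit comes from (assuming 8-bit mono pixels and a typical usable bulk throughput of roughly 1 MB/s on a 12 Mbit/s full-speed link):

\text{frame size} = 640 \times 480 \times 1\,\text{B} \approx 0.3\,\text{MB}, \qquad \frac{1\,\text{MB/s}}{0.3\,\text{MB}} \approx 3\text{–}4\ \text{fps}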
Note
We have noticed problems with some USB chip sets. At high data rates sometimes the image data appears
to be corrupted. If you experience this you could try one or more of the following things.
• a different PC.
• a plug-in PCI/USB-2.0 card without any cables between the card and the USB connector.
• turning off the image footer property - this will ignore data errors.
Note
The driver contains libraries for Linux x86 (32 bit) or Linux 64-bit (x86_64). There are separate package files for systems with tool chains based on GNU gcc 3.2.x - 3.3.x and those based on GNU gcc >= 3.4.x. gcc 3.1.x may work but, in general, the older your tool chain is, the less likely it is that it will work. Tool chains based on GNU gcc 2.x.x are not supported at all.
GCC 4.x (4.1.0) has been tested on OpenSuSE 10.1 and should work on other platforms.
This version (32-bit only) will also run in a VMware ( http://www.vmware.com) virtual machine!
To use the mvBlueFOX camera within Linux (grab images from it and change its settings), a driver is needed,
consisting of several libraries and several configuration files. These files are required during run time.
To develop applications that can use the mvBlueFOX camera, a source tree is needed, containing header files,
makefiles, samples, and a few libraries. These files are required at compile time.
mvBlueFOX-x86_ABI2-n.n.n.tgz
cd /home/username/workspace
2. Copy the install script (available as download from https://www.matrix-vision.com) and the
hardware driver to the workspace directory (e.g. from a driver CD or from the website):
~/workspace$ cp /media/cdrom/drv/Linux/install_mvBlueFOX.sh . && cp /media/cdrom/drv/Linux/mvBlueFOX-x86_ABI2-1.12.45.tgz .
~/workspace$ ./install_mvBlueFOX.sh
Note
The install script has to be executable, so please check the permissions of the file.
During installation, the script will ask if it should build all tools and samples.
The installation script checks the different packages and installs them with the respective standard package manager (apt-get) if necessary.
Note
You need Internet access in case one or more of the packages on which the GenICam™ libs depend are not yet installed on your system; in this case, the script will download and install these packages.
The script accepts two optional arguments: the target directory and the version.
The target directory name specifies where to place the driver. If the directory does not yet exist, it will be created. The path can be either absolute or relative; i.e. the name may, but need not, start with "/".
Note
This directory is only used for the files that are run time required.
The files required at compile time are always installed in "$HOME/mvimpact-acquire-n.n.n". The script also creates a convenient softlink to this directory.
If this argument is not specified, or is ".", the driver will be placed in the current working directory.
The version argument is entirely optional. If no version is specified, the most recent mvBlueFOX-x86_ABI2-n.n.n.tgz found in the current directory will be installed.
You can now start wxPropView (p. 69), after installing the hardware (p. 34), like this:
wxPropView
Note
If you want to install the mvBlueFOX Linux driver without installer script manually, please have a look at the
following chapter:
Note
We recommend using the installer script to install the mvBlueFOX driver (p. 29).
The mvBlueFOX is controlled by a number of user-space libraries. It is not necessary to compile kernel modules for
the mvBlueFOX.
1. Log on to the PC as the "root" user or start a super user session with "su". Start a console with "root" privileges.
2. Determine which package you need by issuing the following command in a terminal window:
gcc -v
This will display a lot of information about the GNU gcc compiler being used on your system. Depending on the version number, you have to do the following:
Version Description
2.x.x (obsolete) You cannot use the mvBlueFOX on your computer. Upgrade to a newer distribution.
3.2.x - 3.3.x (obsolete) Use the C++ ABI 1. This package has ABI1 in its name.
greater or equal 3.4.x Use the C++ ABI 2. This package has ABI2 in its name.
The mvBlueFOX libraries are supplied as a "tgz" archive with the extension ".tgz". The older
"autopackage" format is now deprecated since it cannot handle 64-bit libraries.
Note
Current versions of the ABI1 libraries were compiled using a SuSE 8.1 system for maximum compatibility with older Linux distributions. These libraries should work with all SuSE 8.x and SuSE 9.x versions as well as with Debian Sarge and older Red Hat / Fedora variants.
Current versions of the ABI2 libraries were compiled using a SuSE 10.1 system for maximum
compatibility with newer Linux distributions. These libraries should work with SuSE 10.x as well
as with Ubuntu 6.06 or newer, with up-to-date Gentoo or Fedora FC5.
(b) After installing the mvBlueFOX access libraries you will see something like the following directory struc-
ture in your directory (dates and file sizes will differ from the list below):
The directory "lib/x86" contains the pre-compiled 32-bit libraries for accessing the mvBlueFOX.
If 64-bit libraries are supplied, they will be found in "lib/x86_64". The "apps" directory contains test
applications (source code). The other directories contain headers needed to write applications for the
mvBlueFOX.
Since the libraries are not installed to a directory known to the system (i.e. not in the "ldconfig" cache), you will need to tell the system where to find them by...
• using the "LD_LIBRARY_PATH" environment variable,
• or copying the libraries by hand to a system directory like "/usr/lib" (or using some symbolic
links),
• or entering the directory in "/etc/ld.so.conf" and running "ldconfig".
e.g. to start the application called "SingleCapture".
After installing the libraries and headers you may continue with "3." below as a normal user i.e. you do
not need to be "root" in order to compile the test applications. See also the note "4." below.
(c) To build the test applications type "make". This will attempt to build all the test applications contained
in "apps". If you have problems compiling the wxWidget library or application you may need to do one
or more of the following:
• install the wxWidgets 3.x development files (headers etc.) supplied for your distribution (see "Other requirements" above).
• fetch, compile and install the wxWidgets 3.x package from source, downloaded from the website ( http://www.wxwidgets.org).
• alter the Makefiles so as to find the wxWidgets configuration script called wx-config.
The files you may need to alter are to be found here:
apps/mvPropView/Makefile.inc
You will find the compiled test programs in the subdirectories "apps/.../x86". For 64 bit systems it will be "apps/.../x86_64". For ARM systems it will be "apps/.../arm".
If you cannot build the wxWidgets test program you should, at least, be able to compile the text-based test programs in apps/SingleCapture, etc.
(d) It may be possible to run applications as a non-root user on your system if you are using the udev
system or a fairly recent version of hotplug.
1.7.2.2.2 For udev (e.g. Gentoo∗, OpenSuSE 10.0 - 10.1) Add the following 2 rules to one of the files in the directory "/etc/udev/rules.d" or make a new file in this directory (e.g. "/etc/udev/rules.d/20-mvbf.rules") containing the lines below:
You will find an example file "20-mvbf.rules" in the scripts directory after installation.
Note
Do not forget to add your user to the usb group! You may have to create the group first.
Current Gentoo systems support udev with a minimal, legacy hotplug system. It is usually sufficient to add any
usernames that are to be used for the mvBlueFOX to the group "usb" because there is already a udev rule giving
write permission to all USB devices to members of this group. If this does not work then try the other alternatives
described here. The udev method is better because hotplug is likely to be removed eventually.
1.7.2.2.3 For udev (OpenSuSE 10.2 - 10.x) In "/etc/fstab", in the line starting with usbfs, change noauto to defaults.
Connect the camera to the system now, or re-connect it if it has been connected already.
In case the environment variable USB_DEVFS_PATH is not set, it needs to be set to "/dev/bus/usb" ('export USB_DEVFS_PATH="/dev/bus/usb"').
1.7.2.2.4 For udev on Ubuntu 06.10 Edit the file /etc/udev/rules.d/40-permissions.rules. Search for the entry
for usbfs. It should look like this:
1.7.2.2.5 udev on some other systems Some very up-to-date systems also set the environment variable $USB_DEVFS_PATH to point to "/dev/bus/usb" instead of the older (default) value of "/proc/bus/usb". This may cause the mvBlueFOX libraries to attempt to access device nodes in "/dev/bus/usb", but the rules described above will not change the permissions on these files. Normally you will find a rule in "/etc/udev/rules.d/50-udev.rules" which will already cure this problem. You might like to slightly modify this rule to give write permission to a specific group, e.g. the group "usb". A patch is supplied in the scripts directory to do this.
1.7.2.2.6 Alternatively, for a system with full hotplugging (e.g. older SuSE systems) Copy the files named
below from the scripts directory into the directory "/etc/hotplug/usb":
matrixvision.usermap
matrixvision_config
The file "matrixvision.usermap" contains the vendor and product IDs for the mvBlueFOX cameras and
specifies that the script "matrixvision_config" should be run when a MATRIX VISION mvBluefOX cam-
era is plugged in or out. This script attempts to write some information to the system log and then changes the
permissions for the newly-created device node so that non-root users can access the camera.
This feature has not yet been extensively tested. If you find that the applications start but appear to hang or to wait
for a long time before continuing (and normally crashing) then changing the file permissions on your system does
not appear to be sufficient. We have observed this on a 32 bit SuSE 9.1 system. In this case you may have more
success if you change the owner of the application to "root" and set the suid bit to allow it to run with "root"
permissions.
1.7.2.2.6.1 Using CMOS versions of the mvBlueFOX and mvBlueFOX-M, especially with USB 1.1 Version 1.4.5 contains initial support for CMOS mvBlueFOX on USB 1.1. In order to conform to the rigid timing specifications of the CMOS sensor, onboard RAM is used. This RAM is available only on mvBlueFOX-M boards at the moment. Therefore you cannot use the mvBlueFOX-102x with USB 1.1. It will work with USB 2.0.
Note
If you want to capture continuous live images from the mvBlueFOX-102 or mvBlueFOX-M102x, you should switch the trigger mode from "Continuous" to "OnDemand" for the most reliable results. For single snaps the default values should work correctly.
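In an application, this switch could look like the following sketch (class and enum names taken from the mvIMPACT Acquire C++ API; please verify them against the API reference of your driver version):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Switch an already opened mvBlueFOX from free-running ("Continuous") to
// software-triggered ("OnDemand") acquisition for reliable live capture.
void useOnDemandTrigger(Device* pDev)
{
    CameraSettingsBlueFOX settings(pDev);
    settings.triggerMode.write(ctmOnDemand); // the default is ctmContinuous
}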
Warning
If using the Binder connector first connect the cable to the camera, then connect the camera to the PC.
The driver for Linux does not include hot-plugging support at the application level, i.e. a running application will not be informed of new mvBlueFOX devices that have been plugged in and will probably crash if an mvBlueFOX camera is unplugged whilst it is being used. You need to stop the application, plug in the new camera and then restart the application. This will change in a later version.
To operate an mvBlueFOX device, apart from the physical hardware itself, 3 pieces of software are needed:
• a firmware running on the device (provides low-level functionality like allowing the device to act as a USB
device, support for multiple power states etc.)
• an FPGA file loaded into the FPGA inside the device (provides access features to control the behaviour of
the image sensor, the digital I/Os etc.)
• a device driver (this is the mvBlueFOX.dll on Windows® and the libmvBlueFOX.so on Linux) running on the
host system (provides control over the device from an application running on the host system)
The physical mvBlueFOX device has a firmware programmed into the device's non-volatile memory, thus allowing
the device to act as a USB device by just connecting the device to a free USB port. So the firmware version that will
be used when operating the device does NOT depend on the driver version that is used to communicate with the
device.
In contrast, the FPGA file that will be used is downloaded into volatile memory (RAM) when accessing the device through the device driver and thus the API. One or more FPGA files are a binary part of the device driver.
This shall be illustrated by the following figure:
Figure 13: The firmware file is a binary part of the device driver
Note
As can be seen in the image, one or multiple firmware files are also a binary part of the device driver. However, it is important to notice that this firmware file will NOT be used automatically, but only when the user or an application explicitly updates the firmware on the device, and it will only become active after power-cycling the device. Since mvIMPACT Acquire version 2.28.0 every firmware starting from version 49 is available within a single driver library and can be selected for updating! mvDeviceConfigure however will always update the device firmware to the latest version. If you need to downgrade the firmware for any reason, please get into contact with the MATRIX VISION support to get detailed instructions on how to do that.
1.7.3.1 FPGA
Until the device gets initialized using the API, no FPGA file is loaded into the FPGA on the device. Only by opening the device through the API does the FPGA file get downloaded, and only then will the device be fully operational:
Figure 14: The FPGA file gets downloaded when the device will be opened through the API
As the FPGA file will be stored in RAM, disconnecting or closing the device will cause the FPGA file to be lost. The
firmware however will remain:
Figure 15: The FPGA file will be lost if the device is disconnected or closed
In case multiple FPGA files are available for a certain device, the FPGA file that shall be downloaded can be selected by an application by changing the value of the property Device/CustomFPGAFileSelector. However, the value of this property is only evaluated when the device is either initialized using the corresponding API function OR if a device has been unplugged or power-cycled while the driver connection remains open and the device is then plugged back in.
Note
There is just a limited set of devices that offer more than one FPGA file and these additional FPGA files serve
very special purposes so in almost every situation the default FPGA file will be the one used by an application.
Before using custom FPGA files, please check with MATRIX VISION about why and if this makes sense for
your application.
So assuming the value of the property Device/CustomFPGAFileSelector has been modified while the device has
been unplugged, a different FPGA file will be downloaded once the device is plugged back into the host system:
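A sketch of how an application might select a different FPGA file before initializing the device is shown below. The generic property binding via ComponentLocator is an assumption about the most portable way to reach Device/CustomFPGAFileSelector; please consult the API reference for the exact access path:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Select a non-default FPGA file BEFORE opening the device, as the property
// is only evaluated when the device gets initialized (see text above).
void selectCustomFPGAFile(Device* pDev, int fileIndex)
{
    ComponentLocator locator(pDev->hDev());
    PropertyI customFPGAFileSelector;
    locator.bindComponent(customFPGAFileSelector, "CustomFPGAFileSelector");
    if (customFPGAFileSelector.isValid())
    {
        customFPGAFileSelector.write(fileIndex);
    }
    pDev->open(); // the selected FPGA file is downloaded during this call
}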
1.7.3.2 Firmware
Only during a firmware update will the firmware file that is a binary part of the device driver be downloaded permanently into the device's non-volatile memory.
Warning
Until mvIMPACT Acquire 2.27.0 each device driver just contained one specific firmware version, thus once a device's firmware had been updated using a specific device driver, the only way to change the firmware version was to use another device driver version for upgrading/downgrading the firmware again. Since mvIMPACT Acquire version 2.28.0 every firmware starting from version 49 is available within a single driver library and can be selected for updating! mvDeviceConfigure however will always update the device firmware to the latest version. If you need to downgrade the firmware for any reason, please get into contact with the MATRIX VISION support to get detailed instructions on how to do that.
During an explicit firmware update, the firmware file from inside the driver will be downloaded onto the device. In
order to become active the device must be power-cycled:
When then re-attaching the device to the host system, the new firmware version will become active:
• The current firmware version of the device can be obtained either by using one of the applications which are part of the SDK, such as mvDeviceConfigure (p. 111), or by reading the value of the property Device/FirmwareVersion or Info/FirmwareVersion using the API.
• The current FPGA file version used by the device can be obtained by reading the value of the property Info/Camera/SensorFPGAVersion.
Using wxPropView the same information is available as indicated by the following figure:
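Read programmatically, this could look like the following sketch (Device::firmwareVersion is a property of the C++ API; the binding path used for the sensor FPGA version is an assumption derived from the property names above):

#include <iostream>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

void printVersionInfo(Device* pDev)
{
    pDev->open(); // the FPGA file is downloaded during this call
    std::cout << "Firmware version: " << pDev->firmwareVersion.read() << std::endl;
    // The FPGA version lives in the driver's info list; bind it generically.
    DeviceComponentLocator locator(pDev, dltInfo);
    PropertyI sensorFPGAVersion;
    locator.bindComponent(sensorFPGAVersion, "Camera/SensorFPGAVersion");
    if (sensorFPGAVersion.isValid())
    {
        std::cout << "FPGA version: " << sensorFPGAVersion.read() << std::endl;
    }
}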
A setting contains all the parameters that are needed to prepare and program the device for the image capture. Every image can be captured with a completely different set of parameters. In almost every case, these parameters are accessible via a property offered by the device driver. A setting e.g. might contain
• the gain to be applied to the analogue to digital conversion process for analogue video sources or
So for the user a setting is the one and only place where all the necessary modifications can be applied to achieve the desired form of data acquisition.
Now, whenever a device is opened, the driver will execute the following procedure:
• Please note that each setting location step in the figure from above internally contains two search steps. First the framework will try to locate a setting with user scope and if this can't be located, the same setting will be searched with global (system-wide) scope. On Windows® this will e.g. access either the HKEY_CURRENT_USER or (in the second step) the HKEY_LOCAL_MACHINE branch in the Registry.
• Whenever storing a product specific setting, the device specific setting of the device used for storing will be
deleted (if existing). E.g. you have a device 'VD000001' which belongs to the product group 'VirtualDevice'
with a setting exclusively for 'VD000001'. As soon as you store a product specific setting, the (device specific)
setting for 'VD000001' will be deleted. Otherwise a product specific setting would never be loaded as a device
specific setting will always be found first.
• The very same thing will also happen when opening a device from any other application! wxPropView (p. 69)
does not behave in a special way but only acts as an arbitrary user application.
• Whenever storing a device family specific setting, the device specific or product specific setting of the device
used for storing will be deleted (if existing). See above to find out why.
• On Windows® the driver will not look for a matching XML file during start-up automatically as the native
storage location for settings is the Windows® Registry. This must be loaded explicitly by the user by using
the appropriate API function offered by the SDK. However, under Linux XML files are the only setting formats
understood by the driver framework thus here the driver will also look for them at start-up. The device specific
setting will be an XML file with the serial number of the device as the file name, the product specific setting
will be an XML file with the product string as the filename, the device family specific setting will be an XML
file with the device family name as the file name. All other XML files containing settings will be ignored!
• Only the data contained in the lists displayed as "Image Setting", "Digital I/O" and "Device
Specific Data" under wxPropView (p. 69) will be stored in these settings!
• Restoring of settings previously stored works in a similar way. After a device has been opened the settings
will be loaded automatically as described above.
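Settings can also be stored and restored programmatically via the FunctionInterface. A minimal sketch (method and constant names as the author understands the mvIMPACT Acquire C++ API; please verify against the API reference):

#include <iostream>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Store the device's current parameters as its default setting so that they
// are loaded automatically the next time the device is opened.
void storeSettingsAsDefault(Device* pDev)
{
    FunctionInterface fi(pDev);
    const int result = fi.saveSettingToDefault();
    if (result != DMR_NO_ERROR)
    {
        std::cout << "Failed to store setting: "
                  << ImpactAcquireException::getErrorCodeAsString(result) << std::endl;
    }
}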
• A detailed description of the individual properties offered by a device will not be provided here but can be found in the C++ API reference, where descriptions of all properties relevant for the user (grouped together in classes sorted by topic) can be found. As wxPropView (p. 69) doesn't introduce new functionality but simply evaluates and lists the features offered by the device driver, any modification made using the GUI controls just calls the underlying function needed to write to the selected component. wxPropView (p. 69) also doesn't know about the type of a component or e.g. the list of allowed values for a property. This again is information delivered by the driver and can therefore be queried by the user as well, without the need to have special inside information. One version of the tool will always be delivered in source so it can be used as a reference to find out how to get the desired information from the device driver.
mvBlueFOX
Size without lens (w x h x l) 38.8 x 38.8 x 58.5 mm (CCD version)
38.8 x 38.8 x 53.1 mm (CMOS version)
General tolerance DIN ISO 2768-1-m (middle)
¹ Voltage between + and - may be up to 26 V; the input current is 17 mA.
1.8.2.1.1.1 Characteristics of the digital inputs Open inputs will be read as a logic zero.
When the input voltage rises above the trigger level, the input will deliver a logic one.
You can adapt the input behavior of the digital inputs using the DigitalInputThreshold property in "Digital I/O -> DigitalInputThreshold":
1.8.2.1.1.3 Connecting flash to digital output You can connect a flash in series to the digital outputs as shown in the following figure; however, you should only use LEDs together with a current limiter:
Pin Signal
1 USBPOWER_IN
2 D-
3 D+
4 GND
Shell shield
Manufacturer: Binder
Part number: 99-3390-282-04
Note
Differentiation between 'R' and 'U' version is obsolete. New mvBlueFOX versions have both connectors (circular connector and standard USB). The pin assignment corresponds to the description of the 'R' version.
While the mvBlueFOX is connected and powered via standard USB, it is possible to connect additional power via the circular connector (only power; the data lines must be disconnected!). Only in this case will the power switch change the power supply, if the supply via standard USB is equal to or below that of the circular connector.
State LED
Camera is not connected or defective LED off
Camera is connected and active Green light on
1.8.2.3 Components
– using bulk-mode
• opto-isolated I/O
• bus powered
• new ADC
• 10 Bit mode
Lens mount
Type FB
C-Mount 17.526
CS-Mount 12.526
Note
The mvBlueFOX-M has a serial I²C bus EEPROM with 64 Kbit, of which 512 bytes can be used to store arbitrary custom data.
Attention
Do not connect Dig I/O signals to the FPGA pins until the mvBlueFOX-M has been started and configured.
Otherwise, you will risk damaging the mvBlueFOX-M hardware!
1.8.3.1.3 Contact
1.8.3.1.4 Housing
See also
Suitable assembled cable accessories for mvBlueFOX-M: What's inside and accessories (p. 19)
1.8.3.1.5.3 Characteristics of the digital outputs UDIG_OUT_HIGH min = 2.8 - IOUT ∗ 100
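Reading this as a voltage drop over an internal series resistance (units assumed here: volts, amperes and ohms), a small worked example:

U_{\mathrm{DIG\_OUT\_HIGH,min}} = 2.8\,\mathrm{V} - I_{\mathrm{OUT}} \cdot 100\,\Omega, \qquad I_{\mathrm{OUT}} = 5\,\mathrm{mA} \;\Rightarrow\; U_{\mathrm{DIG\_OUT\_HIGH,min}} = 2.8 - 0.5 = 2.3\,\mathrm{V}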
Attention
The Dig I/O are connected directly via a resistor to the FPGA pins and therefore they are not protected. For
this reason, an application has to provide a protection circuit to the digital I/O of mvBlueFOX-M.
Note
The Dig I/O characteristics of the mvBlueFOX-M are not compatible to the Dig I/O of the mvBlueFOX standard
version.
State LED
Camera is not connected or defective LED off
Camera is connected and active Green light on
1.8.3.3 Components
1.8.3.4.1 mvBlueFOX-M-FC-S The mvBF-M-FC-S contains high-capacity capacitors with switching electronics
for transferring the energy stored in the capacitors to external flash LEDs. It is possible to connect 2 pushbut-
tons/switches to the 8-pin header (CON3 - control connector). Additionally, 2 LED interfaces are available. There
are two versions of the mvBF-M-FC-S:
1.8.3.4.1.3 Electrical characteristic
1 Depends on the mvBlueFOX-M power supply
2 Attention: No over-current protection!
Note
The mvBlueFOX-MLC has a serial I2C bus EEPROM with 16 KByte of which 8 KByte are reserved for the
firmware and 8 KByte can be used to store arbitrary custom data.
1.8.4.2.1 Sensor's optical midpoint and orientation The sensor's optical midpoint is in the center of the board
(Figure 21: intersection point of the hole diagonals). The (0,0) coordinate of the sensor is located at the bottom
left corner of the sensor (please note that the Mini-B USB connector is located at the bottom on the back).
Note
Using a lens, the (0,0) coordinate will be mirrored and will be shown at the top left corner of the screen as
usual!
Note
If you have the mvBlueFOX-MLC variant which uses the standard Mini-B USB connector, pins 2 and 3 (USB_DATA+
/ USB_DATA-) of the header won't be connected!
pin Opto-isolated variant TTL compliant variant LVTTL compliant variant (only available for mvBlueFOX-MLC202aG)
Note
I2C bus uses 3.3 Volts. Signals have a 2kOhm pull-up resistor. Access to the I2C bus from an application is
possible for mvBlueFOX-MLC devices using an mvBlueFOX driver with version 1.12.44 or newer.
See also
Suitable assembled cable accessories for mvBlueFOX-MLC: What's inside and accessories (p. 19)
High-Speed USB design guidelines (p. 11)
More information about the usage of retrofittable ferrite (p. 14)
Note
If the digital input is not connected, the state of the input will be "1" (as you can see in wxPropView (p. 69)).
TTL input low level / high level time: Typ. < 210ns
TTL output low level / high level time: Typ. < 40ns
Figure 25: Opto-isolated digital inputs block diagram with example circuit
The inputs can be connected directly to +3.3 V and 5 V systems. If a higher voltage is used, an external resistor
must be placed in series (Figure 25).
Figure 26: Opto-isolated digital outputs block diagram with example circuit
State LED
Camera is not connected or defective LED off
Camera is connected but not initialized or in "Power off" mode Orange light on
Camera is connected and active Green light on
Note
The mvBlueFOX-IGC has a serial I2C bus EEPROM with 16 KByte of which 8 KByte are reserved for the
firmware and 8 KByte can be used to store arbitrary custom data.
See also
Manufacturer: Binder
Part number: 79-3107-52-04
1.8.5.1.2.1 Electrical characteristic Please have a look at the mvBlueFOX-MLC digital I/O characteristics (opto-
isolated model) of the 12-pin Wire-to-Board Header (USB / Dig I/O) (p. 53).
State LED
Camera is not connected or defective LED off
Camera is connected but not initialized or in "Power off" mode Orange light on
Camera is connected and active Green light on
The sensor's optical midpoint is in the center of the housing. However, several positioning tolerances in relation to
the housing are possible because of:
• Tolerance of mounting holes of the printed circuit board in relation to the edge of the lens holder housing is
not specified but produced according to general tolerance DIN ISO 2768 T1 fine.
• Tolerance of mounting holes on the printed circuit board because of the excess of the holes ± 0.1 mm (Figure
32; 2).
• Tolerance between conductive pattern and mounting holes on the printed circuit board.
Because there is no defined tolerance between conductive pattern and mounting holes, the general defined
tolerance of ± 0.1 mm is valid (Figure 32; 1 in the Y-direction ± 0.1 mm; 3 in the Z-direction ± 0.1 mm)
There are further sensor specific tolerances, e.g. for model mvBlueFOX-IGC200wG:
• Tolerance between sensor chip MT9V034 (die) and its package (connection pad)
– Chip position in relation to the mechanical center of the package: 0.2 mm (± 0.1mm) in the X- and
Y-direction (dimensions in the sensor data sheet according to ISO 1101)
• Tolerance between copper width of the sensor package and the pad width of the printed circuit board
During soldering the sensor can drift towards the edge of the pad: width of the pad 0.4 mm (possible tolerance
is not considered), width of the pin at least 0.35 mm, max. offset: ± 0.025 mm
Note
There are also tolerances in the lens which could lead to optical offsets.
By default, the exposure and readout steps of an image sensor are performed one after the other. By design, CCD
sensors support overlap capabilities, also combined with trigger (see figure). In contrast, so-called pipelined CMOS
sensors only support the overlapped mode. Even fewer CMOS sensors support the overlapped mode combined
with trigger. Please check the sensor summary (p. 63). In overlapped mode, the exposure of the next frame starts
during the readout of the current frame, one exposure time before the readout ends.
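For example, assuming a readout time of 20 ms and an exposure time of 5 ms, the non-overlapped frame period is
20 ms + 5 ms = 25 ms (40 fps), whereas in overlapped mode the next exposure already runs during the last 5 ms of
the readout, so the frame period shrinks to the 20 ms readout time (50 fps).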
The CCD sensors are highly programmable imager modules which incorporate the following features:
Sensors: 0.3 Mpixels resolution CCD sensor (-220), 0.3 Mpixels resolution CCD sensor (-220a), 0.8 Mpixels
resolution CCD sensor (-221), 1.4 Mpixels resolution CCD sensor (-223), 1.9 Mpixels resolution CCD sensor (-224)
Sensor supplier: Sony (all models)
Sensor name: ICX098 AL/BL (-220), ICX424 AL/AQ (-220a), ICX204 AL/AQ (-221), ICX267 AL/AQ (-223),
ICX274 AL/AQ (-224)
Resolution: 640 x 480 (-220), 640 x 480 (-220a), 1024 x 768 (-221), 1360 x 1024 (-223), 1600 x 1200 (-224);
each available as gray scale or RGB Bayer mosaic
Sensor format: 1/4" (-220), 1/3" (-220a), 1/3" (-221), 1/2" (-223), 1/1.8" (-224)
1 At max. frame rate, image quality losses might occur.
Sensors: 0.4 Mpixels resolution CMOS sensor (-200w), 1.3 Mpixels resolution CMOS sensor (-202a), 1.2 Mpixels
resolution CMOS sensor (-x02b)1 (only -MLC/-IGC), 1.2 Mpixels resolution CMOS sensor (-202d)1 (only
-MLC/-IGC), 5.0 Mpixels resolution CMOS sensor (-205)
Sensor supplier: Aptina (all models)
Sensor name: MT9V034 (-200w), MT9M001 (-202a), MT9M021 (-x02b), MT9M034 (-202d), MT9P031 (-205)
Resolution: 752 x 480 (-200w), 1280 x 1024 (-202a), 1280 x 960 (-x02b), 1280 x 960 (-202d), 2592 x 1944 (-205);
gray scale or RGB Bayer mosaic for all models except the -202a (gray scale only)
Indication of sensor category to be used: 1/3" (-200w), 1/2" (-202a), 1/3" (-x02b), 1/3" (-202d), 1/2.5" (-205)
Pixel clock: 40 MHz (all models)
Max. frames per second (in free-running full frame mode)2: 93 (-200w), 25 (-202a), 25 (-x02b), 25 (-202d),
5.8 (-205)
Binning: H+V with frame rate 170 Hz (-200w); H+V and AverageH+V with frame rate unchanged (-202a, -x02b,
-202d); H+V, 3H+3V, AverageH+V, Average3H+3V, DroppingH+V and Dropping3H+3V with frame rate 22.7 Hz
(-205)
Exposure time: 6 us - 4 s (-200w), 100 us - 10 s (-202a), 10 us - 4 s (-x02b), 10 us - 4 s (-202d), 10 us - 10 s (-205)
ADC resolution: 10 bit (10 / 8 bit transmission) for all models
SNR: 42 dB (-200w), 40 dB (-202a), < 43 dB (-x02b, -202d), 37.4 dB (-205)
DR (normal / HDR (p. 151)): 55 dB / > 110 dB (-200w), 61 dB / - (-202a), > 61 dB / - (-x02b), > 61 dB / 110 dB
with gray scale version (-202d), > 65 dB / - (-205)
Progressive scan sensor (no interlaced problems!): all models
Rolling shutter: -202a, -202d, -205
Global shutter: -200w, -x02b, -205
1 The operation in device specific AEC/AGC mode is limited in (non continuous) triggered modes. AEC/AGC only
works while the trigger signal is active. When the trigger signal is removed, AEC/AGC stops and gain and exposure
will be set to a static value. This is due to a limitation of the sensor chip.
2 The frame rate increases with reduced AOI width, but only while the width is >= 560 pixels; below that, the frame
rate remains unchanged.
Note
For further information about rolling shutter, please have a look at the practical report about rolling shutter
on our website: https://www.matrix-vision.com/tl_files/mv11/Glossary/art_rolling_shutter_en.pdf
For further information about image errors of image sensors, please have a look at Correcting image errors
of a sensor (p. 131).
1. Interpolation of green pixels: the average of the upper, lower, left and right pixel values is assigned as the G
value of the interpolated pixel.
For example:
(G3+G7+G9+G13)
G8 = --------------
4
For G7:
(G1+G3+G11+G13)
G7_new = 0.5 * G7 + 0.5 * ---------------
4
2. Interpolation of red/blue pixels at a green position: the average of the two adjacent pixel values of the same
color is assigned to the interpolated pixel.
For example:
     (B6+B8)        (R2+R12)
B7 = ------- ; R7 = --------
        2               2
3. Interpolation of a red/blue pixel at a blue/red position: the average of four adjacent diagonal pixel values is
assigned to the interpolated pixel.
For example:
(R2+R4+R12+R14) (B6+B8+B16+B18)
R8 = --------------- ; B12 = ---------------
4 4
Any colored edge which might appear is due to Bayer false color artifacts.
Note
There are more advanced and adaptive methods (like edge sensitive ones) available if the host is doing this
debayering.
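If the host is doing the debayering, the simple bilinear scheme above translates directly into code. The following
C++ sketch shows the two averaging cases for a pixel at a red position (the function names, the buffer layout and
the missing border handling are illustrative assumptions, not part of the mvIMPACT Acquire API):

// Bilinear debayering sketch for a pixel at a red position of an RGRG/GBGB
// Bayer pattern. 'raw' is the single-channel Bayer image with width 'w';
// (x, y) must address a red pixel that is not located at the image border.
unsigned interpolateGreenAtRed( const unsigned char* raw, int w, int x, int y )
{
    // average of the upper, lower, left and right green neighbours (cf. G8)
    return ( raw[( y - 1 ) * w + x] + raw[( y + 1 ) * w + x] +
             raw[y * w + ( x - 1 )] + raw[y * w + ( x + 1 )] ) / 4;
}

unsigned interpolateBlueAtRed( const unsigned char* raw, int w, int x, int y )
{
    // average of the four diagonal blue neighbours (cf. R8/B12)
    return ( raw[( y - 1 ) * w + ( x - 1 )] + raw[( y - 1 ) * w + ( x + 1 )] +
             raw[( y + 1 ) * w + ( x - 1 )] + raw[( y + 1 ) * w + ( x + 1 )] ) / 4;
}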
1.10 Filters
MATRIX VISION offers two filters for the mvBlueFOX camera. The IR filter (p. 68) is part of the standard scope of
delivery.
The hot mirror filter FILTER IR-CUT 15,5X1,75 FE has high transmission in the visible spectrum and blocks out a
significant portion of the IR energy.
Technical data
Diameter 15.5 mm
Thickness 1.75 mm
Material Borofloat
Characteristics T = 50% @ 650 +/- 10 nm
T > 92% 390-620 nm
Ravg > 95% 700-1150 nm
AOI = 0 degrees
Surface quality Polished on both sides
Surface irregularity 5/3x0.06 scratch/digs on both sides
Edges cut without bezel
The FILTER DL-CUT 15,5X1,5 is a high-quality daylight cut filter with optically polished surfaces. The polished
surface allows the filter to be used directly in the optical path in image processing applications. The filter is protected
against scratches during transport by a protective film that has to be removed before installing the filter.
Technical data
Diameter 15.5 mm
Thickness 1.5 +/- 0.2 mm
Material Solaris S 306
Characteristics Tavg > 80% > 780 nm
AOI = 0 degrees
Protective foil on both sides
Without antireflexion
Without bezel
Note
For further information on how to change the filter, please have a look at our website:
http://www.matrix-vision.com/tl_files/mv11/Glossary/art_optical_filter_en.pdf
It is also possible to choose the glass filter "FILTER GLASS 15,5X1,75" with the following characteristics:
Technical data
Glass thickness 1.75 mm
Material Borofloat without coating
ground with protection chamfer
Surface quality polished on both sides P4
Surface irregularity 5/3x0.06 on both sides
1.11.1 wxPropView
wxPropView (p. 69) is an interactive GUI tool to acquire images, to configure the device, and to display and modify
the device properties of MATRIX VISION GmbH hardware. After the installation you can find wxPropView
(p. 69)
• in "∼/mvimpact-acquire/apps/mvPropView/x86" (Linux).
wxPropView - Introduction:
https://www.matrix-vision.com/tl_files/mv11/trainings/wxPropView/wxPropView_Introduction/index.html
Depending on the driver version, wxPropView starts with the Quick Setup Wizard (p. 70) (as soon as a camera
with the right firmware version was selected or a single camera with the right firmware was found) or without
it (p. 73).
Since
The Quick Setup Wizard is a compact yet powerful single window configuration tool to optimize the image quality
automatically, to manually set the most important parameters affecting the image quality in an easy way, and to
preview these changes. Settings will be accepted by clicking OK, otherwise the changes are cancelled.
Depending on the sensor type (gray or color), it will automatically pre-set the camera so that the image quality is
usually as good as possible.
• "Exposure" to Auto,
• "Gain" to Auto,
• a host based moderate "Gamma correction" (1.8), and lastly it will apply
• a host (PC) based sensor specific "Color Correction Matrix" and use the respective "sRGB display matrix".
These settings will also be applied whenever the "Color Preset" button is pressed. It is assumed here that the color
camera image is to be optimized for the best human visual feedback.
• Gray
• Color
• Factory
Factory can be used as a fall back to quickly skip or remove all presets and load the factory default settings.
1.11.1.1.1.2 Modifying Settings All auto modes can be switched off and all settings, such as Gain, Exposure
etc. can be subsequently modified by using:
• the sliders,
Toggling the Gamma button loads or unloads a host based 10 bit Gamma correction with a moderate value of 1.8
into the signal processing path. Switch Gamma on if you require a gray level camera image to appear natural to the
human eye.
Toggling the Color+ button switches both the CCM and the sRGB display matrix on and off. This optimizes the
sensor color response for the human eye and goes in conjunction with the display's color response. Because sRGB
displays are the most common and sRGB is the default color space in Windows OS, these are preselected. If you
require other display matrices (e.g. Adobe or WideGamut) feel free to use the tree mode of wxPropView and select
ColorTwistOutputCorrection accordingly.
Setting Gain
Gain settings also combine analog and digital registers into one slider setting.
Setting Saturation
Saturation setting increases the color saturation to make the image appear more colored. It does not change
uncolored parts in the image nor changes the color tone or hue.
1.11.1.1.1.3 How to disable Quick Setup Wizard Uncheck the checkbox "Show This Display When A Device Is
Opened" to disable the Quick Setup Wizard to be called automatically. Use the "Wizards" menu and select "Quick
Setup" to open the Quick Setup Wizard once again.
1.11.1.1.1.4 How to Return to the Tree Mode Use OK to use the values and settings of the Quick Setup Wizard
and go back to the tree mode of wxPropView.
Use Cancel to discard the Quick Setup Wizard values and settings and go back to wxPropView and use the former
(or default) settings.
1.11.1.1.1.5 Image Display Functions Quick Setup Wizard allows zooming into the image by right clicking in
the image area and unchecking "Fit To Screen" mode. Use the mouse wheel to zoom in or out. Check "Fit To
Screen" mode, if you want the complete camera image to be sized in the window screen size.
1.11.1.1.1.6 Known Restrictions In case of Tungsten (artificial) light, the camera brightness may tend to oscillate
if auto functions are used. This can be minimized or avoided by setting the frame rate to an integer divisor of the
mains frequency.
• Example:
– Europe: 50 Hz; set the frame rate to 100, 50, 25, 12.5 fps or appropriate.
– In countries with 60 Hz use 120, 60, 30 or 15. . . accordingly.
1.11.1.1.2 First View of wxPropView wxPropView (p. 69) consists of several areas:
• "Menu Bar"
(to work with wxPropView (p. 69) using the menu)
– "Grid"
(tree control with the device settings accessible by the user)
– "Display"
(for the acquired images)
• "Analysis"
(information about whole images or an AOI)
A device can be opened by
• selecting it in the drop down list in the "Upper Tool Bar" and
• clicking on "Use".
After having successfully initialized a device the tree control in the lower left part of the "Main Window" will display
the properties (settings or parameters) (according to the "interface layout") accessible by the user.
You've also got the possibility to set your "User Experience". According to the chosen experience, the level of
visibility is different:
Properties displayed in light grey cannot be modified by the user. Only the properties which actually have an impact
on the resulting image will be visible. Therefore, certain properties might appear or disappear when modifying
other properties.
To permanently commit a modification made with the keyboard, ENTER must be pressed. Leaving the editor
before pressing ENTER will restore the old value.
1.11.1.1.3 How to see the first image As described earlier, for each recognized device in the system the devices
serial number will appear in the drop down menu in the upper left corner of the "Upper Tool Bar". When this is the
first time you start the application after the system has been booted this might take some seconds when working
with devices that are not connected to the host system via PCI or PCIe.
Once you have selected the device of your choice from the drop down menu click on the "Use" button to open it.
When the device has been opened successfully, the remaining buttons of the dialog will be enabled:
Note
The following screenshots are representative and were made using an mvBlueFOX camera as the capturing
device.
For color sensors, it is recommended to perform a white balance (p. 98) calibration before acquiring images. This
will improve the quality of the resulting images significantly.
Now, you can capture an image ("Acquisition Mode": "SingleFrame") or display live images ("Continuous"). Just
click on "Acquire".
Note
The techniques behind the image acquisition can be found in the developers sections.
• the camera,
Since
With "Save Current Image" a dialog will appear, where you can specify the destination folder and the file format.
With "Copy Current Image To Clipboard" you can open you prefered image editing tool an paste the clipboard into
it. For this functionality you can also use the shortcuts CTRL-C and CTRL-V.
1.11.1.1.3.1 Record Mode It is also possible to record image sequences using wxPropView.
1. For this, you have to set the size of the recorder in "System Settings -> RequestCount" e.g. to 100.
This will save the last 100 requests in the request queue of the driver, i.e. the image data including the request
info like frame number, time stamp, etc.
2. Afterwards you can start the recording by clicking the Rec. button.
3. With the Next and Prev. buttons you can display the single images.
If you switched on the request info overlay (right-click on the display area and select the corresponding entry to
activate this feature), this information will be displayed on the image, too. With the timestamp you can see the
interval between the single frames in microseconds.
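Step 1 can also be done programmatically. A minimal sketch using the mvIMPACT Acquire C++ interface
(assuming an already opened device pointer pDev; class and property names should be verified against the C++
API reference):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
using namespace mvIMPACT::acquire;

// assuming 'pDev' points to a device that has been opened before
SystemSettings ss( pDev );
ss.requestCount.write( 100 ); // keep the last 100 requests (image data plus request info) in the driver's queue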
1.11.1.1.3.2 Hard Disk Recording You can save acquired images to the hard disk the following way:
1. In the "Menu Bar" click on "Capture -> Recording -> Setup Hard Disk Recording".
Since
The snapshot mode can be used to save a sequence of images from the current acquisition to the hard disk directly.
1. In the "Menu Bar" click on "Capture -> Setup Snapshot To Hard Disk Mode".
2. Confirm with "Yes", that you want to enable the snapshot mode.
5. Now you can save the current image by pressing the space bar.
1.11.1.1.4 Using the analysis plots With the analysis plots you have the possibility to get image details and to
export them (p. 86).
1.11.1.1.4.1 Spatial noise histogram The spatial noise histogram calculates and statistically evaluates the
difference between two neighbouring pixels in vertical and horizontal direction. I.e. it shows the sensor's spatial
background pattern, like the sensitivity shifts of each pixel. An ideal sensor or camera has a spatial noise of zero.
However, you have to keep in mind the temporal noise as well.
Read: Channel::Direction (Mean difference, most frequent value count/ value, Standard deviation)
Example: For a single channel (Mono) image the output of 'C0Hor(3.43, 5086/ 0, 9.25), C0Ver(3.26, 4840/ 0, 7.30)'
will indicate that the mean difference between pixels in horizontal direction is 3.43, the most frequent difference is 0
and this difference is present 5086 times in the current AOI. The standard deviation in horizontal direction is 9.25.
The C0Ver value list contains the same data but in vertical direction.
1.11.1.1.4.2 Temporal noise histogram The temporal noise histogram shows the changes of a pixel from image
to image. This method is more stable because it is relatively independent from the image content. By subtracting
two images, the actual structure is eliminated, leaving the change of a pixel from image to image, that is, the noise.
When capturing images, all parameters must be frozen, all automatic mechanisms have to be turned off and the
image may not have underexposed or saturated areas. However, there are no picture signals without temporal
noise. Light is a natural signal and the noise always increases with the signal strength. If the noise only follows
these natural limits, then the camera is good. Only if additional noise is added does the camera or the sensor have
errors.
Read: Channel# (Mean difference, most frequent value count/ value, Standard deviation)
Example: For a single channel (Mono) image the output of 'C0(3.43, 5086/ 0, 9.25)' will indicate that the mean
difference between pixels in 2 consecutive images is 3.43, the most frequent difference is 0 and this difference is
present 5086 times in the current AOI. The standard deviation between pixels in these 2 images is 9.25. Please
note the impact of the 'Update Interval' in this plot: It can be used to define a gap between 2 images to compare.
E.g. if the update interval is set to 2, the differences between image 1 and 3, 3 and 5, 5 and 7 etc. will be calculated.
In order to get the difference between 2 consecutive images the update interval must be set to 1!
1.11.1.1.5 Storing and restoring settings When wxPropView (p. 69) is started for the first time, the values of
properties set to their default values will be displayed in green to indicate that these values have not been modified
by the user so far. Modified properties (even if the value is the same as the default) will be displayed in black.
Settings can be stored in several ways (via the "Menu Bar": "Action -> Capture Settings -> Save Active Device
Settings"):
• "As Default Settings For All Devices Belonging To The Same Family (Per User Only)": As the start-up param-
eters for every device belonging to the same family, e.g. for mvBlueCOUGAR-X, mvBlueCOUGAR-XD.
• "As Default Settings For All Devices Belonging To The Same Family And Product Type": As the start-up
parameters for every device belonging to the same product, e.g. for any mvBlueCOUGAR-X but not for
mvBlueCOUGAR-XD.
• "As Default Settings For This Device(Serial Number)": As the start-up parameters for the currently selected
device.
• "To A File": As an XML file that can be used e.g. to transport a setting from one machine to another or even
to use the settings configured for one platform on another (Windows <-> Linux).
During the startup of a device, all these setting possibilities show different behaviors. The differences are described
in the chapter Settings behaviour during startup (p. 38).
Restoring of settings previously stored works in a similar way. After a device has been opened, the settings will be
loaded automatically as described in Settings behaviour during startup (p. 38).
However, at runtime the user has different load settings possibilities (via the "Menu Bar": "Action -> Capture Settings
-> Load Active Device Settings")
• explicitly load the device family specific settings stored on this machine (from "The Default Settings Location
For This Devices Family (Per User Only)")
• explicitly load the product specific settings stored on this machine (from "The Default Settings Location For
This Devices Family And Product Type")
• explicitly load the device specific settings stored on this machine (from "The Default Settings Location For
This Device(Serial Number)")
• explicitly load device family specific settings from an XML file previously created ("From A File")
Note
With "Action -> Capture Settings -> Manage..." you can delete the settings which were saved on the system.
1.11.1.1.6 Properties All properties and functions can be displayed in the list control on the lower left side of the
dialog. To modify the value of a property, select the edit control to the right of the property's name. Property values
which correspond to the default value of the device are displayed in green. A property value once modified by the
user will be displayed in black (even if the value itself has not changed). To restore the default value of a single
property, right-click on it and select the corresponding entry from the pop-up menu. To restore the default value for
a complete list (which might include sub-lists), right-click on the list; in this case a popup window will be opened
and you have to confirm again.
Most properties store one value only, thus they will appear as a single entry in the property grid. However, properties
are capable of storing more than one value, if this is desired. A property storing more than one value will appear as
a parent list item with a WHITE background color (lists will be displayed with a grey background) and as many child
elements as values stored by the property. The PARENT grid control will display the number of values stored by
the property, every child element will display its corresponding value index.
If supported by the property, the user might increase or decrease the number of values stored by right clicking on
the PARENT grid element. If the property allows the modification the pop up menu will contain additional entries
now:
When a new value has been created it will be displayed as a new child item of the parent grid item:
Currently, only the last value can be removed via the GUI and a value can't be removed, when a property stores
one value only.
Also the user might want to set all (or a certain range of) values for properties that store multiple values with a single
operation. If supported by the property, this can also be achieved by right clicking on the PARENT grid element. If
the property allows this modification the pop up menu will again contain additional entries:
It's possible to either set all (or a range of) elements of the property to a certain value OR to define a value range,
that then will be applied to the range of property elements selected by the user. The following example will explain
how this works:
Figure 16: wxPropView - Setting multiple property values within a certain value range
In this sample the entries 0 to 255 of the property will be assigned the value range of 0 to 255. This will result in the
following values AFTER applying the values:
1.11.1.1.7 Methods Methods appear as entries in the tree control as well. However, their name and behavior
differ significantly from those of properties. The names of method objects will appear in 'C' syntax, e.g.
"int function( char∗, int )". This specifies a function returning an integer value and expecting a string and an
integer as input parameters. To execute a method object, use the corresponding entry in the tree control.
Parameters can be passed to methods by selecting the edit control left of a method object. Separate the parameters
by blanks. So to call a function expecting a string and an integer value you e.g. might enter "testString 0"
into the edit control left of the method.
The return value (in almost every case an error code as an integer) will be displayed in the lower right corner of the
tree control. The values displayed here directly correspond to the error codes defined in the interface reference and
therefore will be of type TDMR_ERROR or TPROPHANDLING_ERROR.
1.11.1.1.8 Copy grid data to the clipboard Since wxPropView (p. 69) version 1.11.0 it is possible to copy
analysis data to the clipboard. The data will be copied in CSV style thus can be pasted directly into tools like Open
Office™ or Microsoft® Office™.
Just
• right-click on the specific analysis grid when in numerical display mode and
1.11.1.1.9 Import and Export images wxPropView (p. 69) offers a wide range of image formats that can be
used for exporting captured images to a file. Some formats, e.g. packed YUV 4:2:2 with 10 bit per component, are
rather special, thus they can't be stored into a file format like e.g. the one offered by the BMP file header. When a
file is stored in a format that does not support this data type, wxPropView (p. 69) will convert the image into
something that matches the original image format as closely as possible. This, however, can result in a loss of data.
In order to allow the storage of the complete information contained in a captured image, wxPropView (p. 69) allows
to store the data in a raw format as well. This file format will just contain a binary dump of the image with no leader
or header information. However, the file name will automatically be extended by information about the image to
allow restoring the data at a later time.
All image formats, that can be exported can also be imported again. Importing a file can be done in 3 different
ways:
• via the menu (via the "Menu Bar": "Action -> Load image...")
• by dragging an image file into an image display within wxPropView (p. 69)
• by starting wxPropView (p. 69) from the command line passing the file to open as a command line param-
eter (p. 110) (on Windows® e.g. "wxPropView.exe MyImage.png" followed by [ENTER])
When importing a "∗.raw" image file, a small dialog will pop up allowing the user to define the dimensions and
the pixel format of the image. When the file name has been generated by the image storage function offered
by wxPropView (p. 69), the file name will be parsed and the extracted information will automatically be set in the
dialog, thus the user simply needs to confirm that this information is correct.
1.11.1.1.10 Setting up multiple display support and/or work with several capture settings in parallel
wxPropView (p. 69) is capable of
• dealing with multiple capture settings or acquisition sequences for a single device and in addition to that
• rendering the resulting images into multiple image displays in parallel.
The amount of parallel image displays can be configured via the command line parameters (p. 110) "dcx" and
"dcy". In this step by step setup wxPropView (p. 69) has been started like this from the command line:
wxPropView dcx=2 dcy=1
Since
It is also possible to change the number of image displays at runtime via "Settings -> Image Displays -> Configure
Image Display Count":
Additional capture settings can be created via "Menu Bar": "Capture -> Capture Settings -> Create Capture
Settings". The property grid will display these capture settings either in "Developers" or in "Multiple Settings
View".
Now, in order to set up wxPropView (p. 69) to work with 2 instead of one capture setting,
1. Various additional capture settings can be created. In order to understand what a capture setting actually is
please refer to
Creating a capture setting is done via "Capture -> Capture Settings -> Create Capture Setting".
2. Then, the user is asked for the name of the new setting.
3. And finally for the base this new setting shall be derived from.
As "NewSetting1" has been derived from "Base" changing a property in "Base" will automatically change this
property in "NewSetting1" if this property has not already been modified in "NewSetting1". Again to get an
understanding for this behaviour please refer to
Now, to set up wxPropView (p. 69) to display all images taken using capture setting "Base" in one display and all
image taken using capture setting "NewSetting1" in another display the capture settings need to be assigned to
image displays via "Capture -> Capture Settings -> Assign To Display(s)".
By default a new setting when created will be assigned to one of the available displays in a round-robin scheme,
thus when there are 3 displays, the first (Base) setting will be assigned to "Display 0", the next to "Display 1", the
next to "Display 2" and a fourth setting will be assigned to "Display 0" again. The setting to display relationships
can be customized via "Capture -> Capture Settings -> Assign to Display(s)".
As each image display keeps a reference to the request the displayed image belongs to, the driver can't re-use the
request buffer until a new request is blitted into this display. Thus, it might be necessary to increase the number of
request objects the driver is working with if a larger number of displays is involved. The minimum number of
requests needed is 2 times the number of image displays. The number of requests used by the driver can be set
up in the driver's property tree:
Finally, wxPropView (p. 69) must be configured in order to use all available capture settings in a round-robin
scheme. This can be done by setting the capture setting usage mode to "Automatic" via "Capture -> Capture
Settings -> Usage Mode":
That's it. Now, starting a live acquisition will display live images in both displays, and each display is using a
different set of capture parameters. If a device supports parallel acquisition from multiple input channels, the load
will increase as wxPropView (p. 69) now needs to display more images per second. Each display can be configured
independently, thus e.g. one display can be used scaled while the other displays 1:1 data. The analysis plots can be
assigned to a specific display by left-clicking on the corresponding image display; the info plot will plot a graph for
each capture setting in parallel.
When only one setting shall be used at a given time, this can be achieved by setting the capture setting usage mode
back to "Manual" via "Capture -> Capture Settings -> Usage Mode". Then the setting that shall be used can be
manually selected in the request control list:
1.11.1.1.11 Bit-shifting an image wxPropView (p. 69) shows snapped or live images in the display area of the
GUI. The area, however, shows the most significant bits (msb) of the image in the 8 bit display.
The following image shows how a mid-grey 12 bit pixel of an image is displayed with 8 bit. Additionally, two shifts
are shown.
Figure 32: Mid-grey 12 bit pixel image and 8 bit display with 2 example shifts
In this particular case, the pixel will be displayed brighter after the shift (as the most significant bits are 1's). As you
may already have recognized, each shift means that each pixel value is multiplied or divided by 2, according to the
direction. If the resulting pixel value is greater than 255, it will be clipped to 255.
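From a programmer's point of view this can be sketched as follows (a representing the pixel value and shift the
number of left shifts selected in the display; both names are illustrative):

// a: original pixel value, shift: number of left shifts selected in the display
a = a << shift;   // each left shift multiplies the pixel value by 2
if( a > 255 )
{
    a = 255;      // values exceeding the 8 bit display range are clipped
}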
With wxPropView (p. 69) you can shift the bits in the display using the left and right arrow keys. Furthermore you
can turn on the monitor display to compare the images synchronously.
1.11.1.1.12 Changing the view of the property grid to assist writing code that shall locate driver features
With wxPropView (p. 69) it is possible to switch the views between "Standard View" (user-friendly) and "Developers
View". While the first (default) view will display the device driver's feature tree in a way that might be suitable for
most users of a GUI application, it might present the features in a slightly different order than they are actually
implemented in the device driver. The developers view switches the tree layout of the application to reflect the
feature tree exactly as it is implemented and presented by the SDK. It can be helpful when writing code that shall
locate a certain property in the feature tree of the driver using the C, C++, Java, .NET or Python interface. The
feature hierarchy displayed here can directly be used for searching for the features using the "ComponentLocator
(C++/.NET)" objects or the "DMR_FindList (C)" and "OBJ_GetHandleEx (C)" functions.
Since
Using Windows, it is possible to access the log files generated by MATRIX VISION via the Help menu. Sending us
the log files will speed up support cases.
As described above, after the device has been initialized successfully in the "Grid" area of the GUI the available
properties according to the chosen "interface layout" (e.g. GenICam) are displayed in a hierarchy tree.
The next chapter will show how to set the interface layout and which interface you should use according to your
needs.
Devices belonging to this family only support the Device Specific interface layout which is the common interface
layout supported by most MATRIX VISION devices.
GenICam compliant devices can be operated in different interface layouts. Have a look at a GenICam compliant
device for additional information.
1.11.1.2.2 White balance of a camera device (color version) Start wxPropView (p. 69), initialize the device by
clicking "Use" and start a "Continuous" acquisition.
While using a color version of the camera, the PC will calculate a color image from the original gray Bayer mosaic
data. To get correct colors when working with a Bayer mosaic filter, you have to calibrate the white balance (this
must be performed every time the lighting conditions change).
• "Daylight",
• "TungstenLamp",
• "HalogenLamp",
Simply select the necessary item in the menu "Image Settings -> Base -> ImageProcessing -> WhiteBalance"
("DeviceSpecific interface layout") or "Setting -> Base -> ImageProcessing -> WhiteBalance" ("GenICam interface
layout").
If you need a user defined setting, you can also define your own. For this, select a profile (e.g. User1) for this
setting:
Note
Point the camera at a white or light gray area (the pixels must not be saturated, so use gray values between
150 and 230).
Go to the menu item "WhiteBalanceCalibration" and select the parameter "Calibrate Next Frame":
By committing the selected value, the application accepts the change. The next acquired image will be the reference
for the white balance.
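The same calibration can be sketched in C++ (assuming an opened device pointer pDev and the device specific
interface layout; class and enum names should be checked against the C++ API reference):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
using namespace mvIMPACT::acquire;

// assuming 'pDev' points to an opened color camera
ImageProcessing ip( pDev );
ip.whiteBalance.write( wbpUser1 );                 // select a user defined profile
ip.whiteBalanceCalibration.write( wbcmNextFrame ); // use the next acquired image as reference
// now acquire one image of the white / light gray area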
For easier handling and working, all settings can be saved by clicking the menu Action -> Capture Settings
-> Save ... (p. 81).
1.11.1.2.3 Configuring different trigger modes To configure a device for a triggered acquisition, wxPropView
(p. 69) offers the property "Image Setting -> Camera -> TriggerMode" ("DeviceSpecific interface layout") or "Setting
-> Base -> Camera -> GenICam -> Acquisition Control -> Trigger Selector" ("GenICam interface layout").
Note
The supported trigger modes of each sensor are described in the More specific data (p. 63) of each sensor.
Note
The following description is significant if you are using the "DeviceSpecific interface layout". In the GenICam
layout, the "Digital I/O" section can be found in "Setting -> Base -> Camera -> GenICam -> Digital I/O
Control".
For performance reasons, device drivers will not automatically update their digital input properties if nobody is
interested in the current state. Therefore, in order to check the current state of a certain digital input, it is necessary
to manually refresh the state of the properties. To do this please right-click on the property you are interested in and
select "Force Refresh" from the pop-up menu.
Some devices might also offer an event notification if a certain digital input changed its state. This event can then
be enabled
• via the "EventSelector" in "Setting -> Base -> Camera -> GenICam -> Event Control".
• Afterwards, a callback can be registered by right-clicking on the property you are interested in again.
• Now, select "Attach Callback" from the pop-up menu and switch to the "Output" tab in the lower right section
of wxPropView (Analysis tabs).
Whenever an event is sent by the device that updates one of the properties a callback has been attached to, the
output window will print a message with some information about the detected change.
1.11.1.2.5 Setting up external trigger and flash control To set up external trigger and flash control, the following
things are required:
The camera is only connected to the PC by a USB 2.0 cable. All other signals are connected directly to the camera.
Trigger and flash signals are directly controlled by the FPGA, which does the timing in the camera. This makes the
trigger and flash control independent of the CPU load of the host PC or temporary USB 2.0 interrupts.
Trigger control
The external trigger signal resets the image acquisition asynchronously to any other timing, so the reaction delay
is so short that it can be ignored. If a delay between the trigger signal and the start of the integration is needed, it
can be defined. By default it is set to 0 us.
Flash control
The signal for flash control is set immediately as soon as the image integration begins. If a delay is needed, it can
be defined. By default this delay is set to 0.
The schematic shows how to connect the application's switch to the camera's digital input. The external trigger
signal must meet the following conditions:
The application's switch can be a mechanical switch, a light barrier, or some kind of encoder.
Note
Depending on the switch used, it might be necessary to use a pull-up or pull-down resistor so that the camera input
can recognize the signal correctly.
2. Click on "Acquire".
5. Connect a standard power supply with e.g. 5 V (higher than the value of "DigitalInputThreshold") to pin 1 (-)
and pin 6 (+)
As long as the power supply is connected, you can see a live preview. If you disconnect the power supply the live
preview should stop. If this is working the trigger input works.
If the current needed for the flash is below 100 mA, you can connect the flash directly to the camera outputs. If it is
higher, you have to use an additional driver for controlling the flash which provides the higher current.
Note
Depending on the flash driver used, it could be necessary to use pull-up or pull-down resistors so that the driver
can recognize the signal correctly.
1.11.1.2.5.2 Setting up In wxPropView (p. 69) you can open the camera and display acquired images.
By default the camera is free running. This means it uses its own timing, depending on the set pixel clock, exposure
time and shutter mode.
Trigger
To let the camera acquire images only with an external trigger signal you must change the "TriggerMode" to the
mode suitable to your application:
Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.
OnFallingEdge Each falling edge of trigger signal acquires one image.
OnRisingEdge Each rising edge of trigger signal acquires one image.
OnHighExpose Each rising edge of trigger signal acquires one image, exposure time corresponds to pulse
width.
OnLowExpose Each falling edge of trigger signal acquires one image, exposure time corresponds to pulse
width.
OnAnyEdge Start the exposure of a frame when the trigger input level changes from high to low or from
low to high.
Choose either "DigIn0" if signal is connected to "IN0" or DigIn1 if signal is connected to IN1. In general entry "←-
RTCtrl" is not useful in this case because triggering would be controlled by Hardware Real-Time Controller
(p. 118) which is not described here and also not necessary.
Depending on voltage level you are using for trigger signal you must choose the "DigitalInputThreshold":
Flash
To activate the flash control signal, set "FlashMode" to the output the flash or flash driver is connected to:
Once this mode is activated, each image acquisition will generate the flash signal. This generation is independent
of the trigger mode used. The flash signal is directly derived from the integration timing.
This means that if no "FlashToExposedToLightDelay" is set, the flash signal will rise as soon as the integration
starts and fall when the integration is finished. The pulse width cannot be changed. So you can be sure that the
integration takes place while the flash signal is high.
1.11.1.2.6 Working with the hardware Look-Up-Table (LUT) There are two parameters which handle the pixel
formats of the camera:
• "Setting -> Camera -> PixelFormat" defines the pixel format used to transfer the image data into the target
systems host memory.
• "Setting -> ImageDestination -> PixelFormat" defines the pixel format of the resulting image (which is kept
in the memory by the driver).
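As a sketch in terms of the mvIMPACT Acquire C++ interface (assuming an opened device pointer pDev; the exact
property classes and enum values should be verified against the C++ API reference):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
using namespace mvIMPACT::acquire;

// assuming 'pDev' points to an opened mvBlueFOX device
CameraSettingsBlueFOX cs( pDev );
cs.pixelFormat.write( ibpfMono10 );  // transfer format from the camera to host memory
ImageDestination dest( pDev );
dest.pixelFormat.write( idpfMono8 ); // resulting format kept in memory by the driver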
If you set "LUTImplementation" to "Software" in "Setting -> ImageProcessing -> LUTOperations", the hardware
Look-Up-Table (LUT) will work with 8 bit data ("LUTMappingSoftware = 8To8"). Using Gamma functions you will
see gaps in the histogram:
Figure 49: 8to8 software LUT leads to gaps in the histogram using gamma functions (screenshot:
mvBlueFOX-MLC)
If you set "LUTImplementation" to "Hardware" in "Setting -> ImageProcessing -> LUTOperations", the hardware
Look-Up-Table (LUT) will work with 10 bit data inside the camera and converts the data to 8 bit for output ("LUT←-
MappingHardware = 10To8"). Now, there will be no gaps in the histogram:
Figure 50: 10to8 hardware LUT shows no in the histogram (screenshot: mvBlueFOX-MLC)
It is possible to start wxPropView via the command line and to control the startup behavior using parameters. The
supported parameters are as follows:
Parameter Description
width or w Defines the startup width of wxPropView. Example: width=640
height or h Defines the startup height of wxPropView. Example: height=460
xpos or x Defines the startup x position of wxPropView.
ypos or y Defines the startup y position of wxPropView.
splitterRatio Defines the startup ratio of the position of the property grid's splitter. Values
between 0 and 1 (exclusive) are valid. Example: splitterRatio=0.5
propgridwidth or pgw Defines the startup width of the property grid.
debuginfo or di Will display debug information in the property grid.
dic Will display invisible (currently shadowed) components in the property grid.
displayCountX or dcx Defines the number of images displayed in horizontal direction.
displayCountY or dcy Defines the number of images displayed in vertical direction.
fulltree or ft Will display the complete property tree (including the data not meant to be
accessed by the user) in the property grid. Example (Tree will be shown)←-
: fulltree=1
device or d Will directly open a device with a particular serial number. ∗ will take the first
device. Example: d=GX000735
qsw Will forcefully hide or show the Quick Setup Wizard, regardless of the default
settings. Example (Quick Setup Wizard will be shown): qsw=1
live Will directly start live acquisition from the device opened via device or d directly.
Example (will start the live acquisition): live=1
wxPropView d=* qsw=0 ft=1
This will open the first available device, hide the Quick Setup Wizard, and display the complete property tree.
1.11.2 mvDeviceConfigure
mvDeviceConfigure (p. 111) is an interactive GUI tool to configure MATRIX VISION devices. It shows all connected
devices.
Various things can also be done without user interaction (e.g. updating the firmware of a device). To find out how to
do this please start mvDeviceConfigure and have a look at the available command line options presented in the
text window in the lower section (the text control) of the application.
The device ID is used to identify the devices with a self defined ID. The default ID on the device's EEPROM is "0".
If the user hasn't assigned unique device IDs to his devices, the serial number can be used to select a certain
device instead. However, certain third-party drivers and interface libraries might rely on these IDs to be set up in a
certain way and in most of the cases this means, that each device needs to have a unique ID assigned and stored
in the devices non-volatile memory. So after installing the device driver and connecting the devices setting up these
IDs might be a good idea.
To set the ID please start the mvDeviceConfigure (p. 111) tool. You will see the following window:
Whenever there is a device that shares its ID with at least one other device belonging to the same device family,
mvDeviceConfigure (p. 111) will display a warning like in the following image, showing in this example two mv←-
BlueFOX cameras with an ID conflict:
1.11.2.1.1 Step 1: Device Selection Select the device you want to set up from the list box.
1.11.2.1.2 Step 2: Open dialog to set the ID With the device selected, select the menu item Action and click
on Set ID.
Note
It is also possible to select the action with a right click on the device.
1.11.2.1.3 Step 3: Assign the new ID Enter the new ID and click OK.
Now the overview shows the list with all devices as well as the new ID. In case an ID conflict existed before and
has been resolved, mvDeviceConfigure (p. 111) will no longer highlight it:
With the mvDeviceConfigure tool it is also possible to update the firmware. These steps are necessary:
1.11.2.2.1 Step 1: Device selection Select the device you want to update from the list box.
1.11.2.2.2 Step 2: Open dialog to update the firmware With the device selected, select the menu item Action
and click on Update firmware.
Note
It is also possible to select the action with a right click on the device.
1.11.2.2.3 Step 3: Confirm the firmware update You have to confirm the update.
Note
The firmware is shipped as part of the installed driver. mvDeviceConfigure uses this version to update the firmware.
If you use an old driver, you will downgrade the firmware.
If the firmware update is successful, you will receive the following message:
1.11.2.2.4 Step 4: Disconnect and reconnect the device Please disconnect and reconnect the device to
activate the new firmware.
Note
The firmware update is only necessary in some special cases (e.g. to benefit from a new functionality added
to the firmware or to fix a firmware related bug). Before updating the firmware be sure what you are doing and
have a look into the change log (versionInfo.txt and/or the manual to see if the update will fix your problem).
The firmware update takes approx. 30 seconds!
1.11.2.3 How to disable CPU sleep states a.k.a. C states (< Windows 8)
Modern PCs, notebooks, etc. try to save energy by using smart power management. For this, several hardware
manufacturers specified the ACPI standard. The standard defines several power states. For example, if processor
load is not needed, the processor changes to a power saving (sleep) state automatically and vice versa. Every state
change will stop the processor for microseconds. This time is enough to cause image error counts!
To disable the power management on the processor level (so-called "C states"), you can use mvDevice←-
Configure:
Note
With Windows XP it is only possible to disable the C2 and C3 states. With Windows Vista / 7 / 8 all C states
(1,2, and 3) will be disabled.
Warning
Please be sure you know what you are doing! Turning off the processor's sleep states will lead to a higher power
consumption of your system. Some processor vendors might state that turning off the sleep states will void the
processor's warranty.
Note
Modifying the sleep states using mvDeviceConfigure only affects the current power scheme. For
notebooks this will e.g. make a difference depending on whether the notebook is running on battery or not.
E.g. if the sleep states have been disabled while running on battery and the system is then connected to an
external power supply, the sleep states might be active again. Thus, in order to permanently disable the sleep
states, this needs to be done for all power schemes that will be used when operating devices.
1. Start mvDeviceConfigure.
The sleep states can also be enabled or disabled from a script by calling mvDeviceConfigure like this:
mvDeviceConfigure spis=0 quit
or
mvDeviceConfigure spis=1 quit
The additional quit will result in the application terminating after the new value has been applied.
Note
With Windows Vista or newer mvDeviceConfigure must be started from a command shell with administrator
privileges in order to modify the processors sleep states.
It is possible to start mvDeviceConfigure via the command line and to control the startup behavior using parameters.
The supported parameters are as follows:
Parameter Description
setid or id Assigns a device ID to a device (syntax:
'id=<serial>.<id>' or 'id=<product>.<id>').
set_processor_idle_states or spis Changes the C1, C2 and C3 states for ALL processors in the
current system(syntax: 'spis=1' or 'spis=0').
set_userset_persistence or sup Sets the persistency of UserSet settings during firmware up-
dates (syntax: 'sup=1' or 'sup=0').
update_fw or ufw Updates the firmware of one or many devices.
update_fw_file or ufwf Updates the firmware of one or many devices. Pass a full
path to a text file that contains a serial number or a product
type per line.
custom_genicam_file or cgf Specifies a custom GenICam file to be used to open devices
for firmware updates. This can be useful when the actual XML
on the device is damaged/invalid.
update_kd or ukd Updates the kernel driver of one or many devices.
ipv4_mask Specifies an IPv4 address mask to use as a filter for the se-
lected update operations. Multiple masks can be passed here
separated by semicolons.
fw_file Specifies a custom name for the firmware file to use.
fw_path Specifies a custom path for the firmware files.
log_file or lf Specifies a log file storing the content of this text control upon
application shutdown.
quit or q Ends the application automatically after all updates have been
applied.
force or f Forces a firmware update in unattended mode, even if it isn't
a newer version.
∗ Can be used as a wildcard, devices will be searched by se-
rial number AND by product. The application will first try to
locate a device with a serial number matching the specified
string and then (if no suitable device is found) a device with a
matching product string.
The number of commands that can be passed to the application is not limited.
mvDeviceConfigure ufw=BF000666
This will update the firmware of a mvBlueFOX with the serial number BF000666.
mvDeviceConfigure update_fw=BF*
This will update the firmware of ALL mvBlueFOX devices in the current system.
This will update the firmware of ALL mvBlueFOX-2 devices in the current system, then will store a log file of the
executed operations and afterwards will terminate the application.
mvDeviceConfigure setid=BF000666.5
This will assign the device ID '5' to a mvBlueFOX with the serial number BF000666.
mvDeviceConfigure ufw=*
This will update the firmware of ALL devices in the current system.
1.12.1 Introduction
The Hardware Real-Time Controller (HRTC) is built into the FPGA. The user can define a sequence of operating
steps to control how and when images are exposed and transmitted. Instead of using an external PLC, the
time critical acquisition control is built directly into the camera. This is a unique and powerful feature.
The operating codes for each step can be one of the following:
The section How to use the HRTC (p. 119) gives an impression of what can be done with the HRTC.
wxPropView - HRTC:
https://www.matrix-vision.com/tl_files/mv11/trainings/wxPropView/wxPropView_HRTC/index.html
To use the HRTC you have to set the trigger mode and the trigger source. With object-oriented programming
languages the corresponding camera settings would look like this (C++ syntax):
CameraSettings->triggerMode = ctmOnRisingEdge;
CameraSettings->triggerSource = ctsRTCtrl;
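Embedded in a complete program this could look like the following sketch (taking the first device found is
illustrative; class and enum names should be verified against the mvIMPACT Acquire C++ API reference):

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
using namespace mvIMPACT::acquire;

int main( void )
{
    DeviceManager devMgr;
    Device* pDev = devMgr[0]; // illustrative: use the first device found
    if( !pDev )
    {
        return 1; // no device detected
    }
    pDev->open();
    CameraSettingsBlueFOX cs( pDev );
    cs.triggerMode.write( ctmOnRisingEdge ); // evaluate the HRTC program on a rising edge
    cs.triggerSource.write( ctsRTCtrl );     // the HRTC acts as the trigger source
    return 0;
}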
When working with wxPropView (p. 69) these are the properties to modify in order to activate the evaluation of the
HRTC program:
• OnLowLevel
• OnHighLevel
• OnFallingEdge
• OnRisingEdge
• OnHighExpose
Further details about the mode are described in the API documentation:
See also
• "Enumerations (C developers)"
In the Use Cases (p. 127) chapter there is the following HRTC sample:
– Delay the expose start of the following camera (HRTC) (p. 176)
The mvIMPACT Acquire SDK is a comprehensive software library that can be used to develop applications
using the devices described in this manual. A wide variety of programming languages is supported.
For C, C++, .NET, Python or Java developers separate API descriptions can be found on the MATRIX VISION
website:
Compiled versions (CHM format) might already be installed on your system. These manuals contain chapters on
• how the log output for "mvIMPACT Acquire" devices is configured and how it works in general
• how to create your own installation packages for Windows and Linux
• etc.
Note
DirectShow can only be used in combination with the Microsoft Windows operating system.
Since Windows Vista, Movie Maker no longer supports capturing from a device registered for DirectShow.
This is the documentation of the MATRIX VISION DirectShow_acquire interface. A MATRIX VISION specific
property interface based on the IKsPropertySet has been added. All other features are related to standard
DirectShow programming.
1.14.1.1 IAMCameraControl
1.14.1.2 IAMDroppedFrames
1.14.1.3 IAMStreamConfig
1.14.1.4 IAMVideoProcAmp
1.14.1.5 IKsPropertySet
The DirectShow_acquire supports the IKsPropertySet Interface. For further information please refer to the Microsoft
DirectX 9.0 Programmer's Reference.
• AMPROPERTY_PIN_CATEGORY
• DIRECT_SHOW_ACQUIRE_PROPERTYSET
1.14.1.6 ISpecifyPropertyPages
1.14.2 Logging
The DirectShow_acquire logging procedure is identical to the logging of other MATRIX VISION products using
mvIMPACT Acquire. The log output itself is based on XML.
If you want more information about the logging please have a look at the Logging chapter of the respective "mvI←-
MPACT Acquire API" manual.
Note
Please be sure to register the MV device for DirectShow with the right version of mvDeviceConfigure (p. 111).
I.e. if you have installed the 32 bit version of the VLC Media Player, Virtual Dub, etc., you have to register the
MV device with the 32 bit version of mvDeviceConfigure (p. 111) ("C:\Program Files\MATRIX VISION\mvIMPACT
Acquire\bin")!
To register a device/devices for access under DirectShow please perform the following registration procedure:
1. Start mvDeviceConfigure.
If no device has been registered the application will more or less (depending on the installed devices) look
like this.
2. To register every installed device for DirectShow access click on the menu item "DirectShow" → "Register
all devices".
3. After a successful registration the column "registered for DirectShow" will display 'yes' for every device and
the devices will be registered with a default DirectShow friendly name.
If you want to modify the friendly name of a device under DirectShow, please perform the following procedure:
1. Start mvDeviceConfigure.
2. Now, select the device you want to rename, click the right mouse button and select "Set DirectShow friendly
name":
3. Then, a dialog will appear. Please enter the new name and confirm it with "OK".
4. Afterwards the column "DirectShow friendly name" will display the newly assigned friendly name.
Note
Please do not select the same friendly name for two different devices. In theory this is possible, however the
mvDeviceConfigure GUI will not allow this to avoid confusion.
For a silent registration without dialogs, the Windows tool "regsvr32" can be used via the command line.
The following command line options are available and can be passed during the silent registration:
EXAMPLES:
Register ALL devices that are recognized by mvIMPACT Acquire (this will only register devices which have drivers
installed).
regsvr32 <path>\DirectShow_acquire.ax /s
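To silently remove the registration again, the standard /u switch of regsvr32 can be used:
regsvr32 /u /s <path>\DirectShow_acquire.ax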
1.15 Troubleshooting
If you need support using our products, you can shorten response times by sending us your log files. Accessing the log files differs between Windows and Linux:
1.15.1.1 Windows
You can access the log files in Windows using wxPropView (p. 69). The way to do this is described in Accessing
log files (p. 97).
1.15.1.2 Linux
You can change to the log directory using the following command:
cd $MVIMPACT_ACQUIRE_DATA_DIR/logs
Like on Windows, log files will be generated if the logging activation file mvDebugFlags.mvd is available in the same folder as the application (on Windows, log files are generated automatically because the applications are started from that folder). By default, on Linux the mvDebugFlags.mvd will be installed in the installation's destination folder in the sub-folder "apps". For example, if the destination folder was "/home/workspace", you can locate the mvDebugFlags.mvd as follows (the paths below assume this default layout):
For log file generation you have to execute your application from the folder where mvDebugFlags.mvd is located, e.g. if you want to start wxPropView:
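cd /home/workspace/apps
./wxPropView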
Another possibility would be to copy the mvDebugFlags.mvd file to the folder of the executable, e.g.:
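cp /home/workspace/apps/mvDebugFlags.mvd <folder of the executable>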
Afterwards, several log files are generated which are listed in files.mvloglist. The log files have the file
extension .mvlog. Please send these files to our support team.
There are several use cases concerning the acquisition / recording possibilities of the camera:
Since
Very long exposure times are possible with mvBlueFOX. For this purpose a special trigger/IO mode is used.
TriggerMode = OnHighExpose
TriggerSource = DigOUT0 - DigOUT3
Attention
In the standard mvBlueFOX, DigOUT2 and DigOUT3 are internal signals; however, they can be used for this purpose.
Note
Make sure that you adjust the ImageRequestTimeout_ms either to 0 (infinite; this is the default value) or to a reasonable value that is larger than the actual exposure time, in order not to end up with timeouts resulting from the buffer timeout being smaller than the actual time needed for exposing, transferring and capturing the image:
imageRequestSingle
Then the digital output is set and reset. Between these two instructions you can include source code to achieve the desired exposure time:
pOut->set();
// ... wait here for the desired exposure time ...
pOut->reset();
If you change the state of the corresponding output twice, this will also work with wxPropView (p. 69).
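As a rough C++ sketch of this sequence (assuming a device pointer pDev as in the earlier snippet; the enum names ctmOnHighExpose and ctsDigOut0 as well as the output index are assumptions to be checked against the installed headers):
CameraSettingsBlueFOX cs( pDev );
cs.triggerMode.write( ctmOnHighExpose );
cs.triggerSource.write( ctsDigOut0 );      // assumption: DigOut0 defines the expose window
FunctionInterface fi( pDev );
fi.imageRequestSingle();                   // queue the request first
IOSubSystemBlueFOX ioss( pDev );
DigitalOutput* pOut = ioss.output( 0 );    // assumption: index 0 corresponds to DigOut0
pOut->set();                               // exposure starts
// ... wait here for the desired (very long) exposure time ...
pOut->reset();                             // exposure ends; the image is read out and delivered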
With the DirectShow Interface (p. 121), MATRIX VISION devices become an (acquisition) video device for the VLC Media Player.
1.16.1.2.1 System requirements It is necessary that the following drivers and programs are installed on the host device (laptop or PC):
Attention
Using Windows 10 or Windows 7: VLC Media Player version 2.2.0 has been tested successfully with older versions of mvIMPACT Acquire. Since version 3.0.0 of VLC, at least mvIMPACT Acquire 2.34.0 is needed to work with devices through the DirectShow interface!
1. Download a suitable version of the VLC Media Player from the VLC Media Player website mentioned below.
See also
http://www.videolan.org/
Note
Please be sure to register the MV device for DirectShow with the right version of mvDeviceConfigure (p. 111) .
I.e. if you have installed the 32 bit version of the VLC Media Player, you have to register the MV device with the
32-bit version of mvDeviceConfigure (p. 111) ("C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin")
!
1. Connect the MV device to the host device directly or via GigE switch using an Ethernet cable.
4. In the section "Video device name" , select the friendly name of the MV device:
There are several use cases concerning the acquisition / image quality of the camera:
Due to random process deviations, technical limitations of the sensors, etc., there are different reasons why image sensors have image errors. MATRIX VISION provides several procedures to correct these errors; by default, these are host-based calculations.
Note
If you execute all correction procedures, you have to keep this order. All gray value settings of the corrections
below assume an 8-bit image.
The path "Setting -> Base -> ImageProcessing -> ..." indicates that these corrections are host-based corrections.
• To correct the complete image, you have to make sure no user-defined AOI has been selected: right-click "Restore Default" on the device's AOI parameters W and H in "Setting -> Base -> Camera".
• You have several options to save the correction data. The chapter Storing and restoring settings (p. 81)
describes the different ways.
See also
There is a white paper about image error corrections with extended information available on our website:
http://www.matrix-vision.com/tl_files/mv11/Glossary/art_image_errors_sensors_en.pdf
1.16.2.1.1 Defective Pixels Correction Due to random process deviations, not all pixels in an image sensor
array will react in the same way to a given light condition. These variations are known as blemishes or defective
pixels.
There are three types of defective pixels:
Note
Please use either a Mono or raw Bayer image format when detecting defective pixel data in the image.
1.16.2.1.1.1 Correcting leaky pixels To correct leaky pixels the following steps are necessary:
1. Set gain ("Setting -> Base -> Camera -> GenICam -> Analog Control ->
Gain = 0 dB") and exposure time "Setting -> Base -> Camera -> GenICam ->
Acquisition Control -> ExposureTime = 360 msec" to the given operating conditions
The total number of defective pixels found in the array depends on the gain and the exposure time.
4. Snap an image (e.g. by pressing Acquire in wxPropView with "Acquisition Mode = SingleFrame")
5. To activate the correction, choose one of the substitution methods mentioned above
6. Save the settings including the correction data via "Action -> Capture Settings -> Save
Active Device Settings"
(Settings can be saved in the Windows registry or in a file)
Note
After having re-started the camera you have to reload the capture settings!
Note
With "Mode = Calibrate Hot And Cold Pixel" you can execute both corrections at the same
time.
1. You will need a uniform sensor illumination of approx. 50 - 70 % saturation (which means an average gray value between 128 and 180)
3. Snap an image (e.g. by pressing Acquire in wxPropView with "Acquisition Mode = SingleFrame")
4. To activate the correction, choose one of the substitution methods mentioned above
5. Save the settings including the correction data via "Action -> Capture Settings -> Save
Active Device Settings"
(Settings can be saved in the Windows registry or in a file)
Note
After having re-started the camera you have to reload the capture settings!
Note
With "Mode = Calibrate Hot And Cold Pixel" you can execute both corrections at the same
time.
1. You will need a uniform sensor illumination of approx. 50 - 70 % saturation (which means an average gray value between 128 and 180)
3. Snap an image (e.g. by pressing Acquire in wxPropView with "Acquisition Mode = SingleFrame")
4. To activate the correction, choose one of the substitution methods mentioned above
5. Save the settings including the correction data via "Action -> Capture Settings -> Save
Active Device Settings"
(Settings can be saved in the Windows registry or in a file)
Note
After having re-started the camera you have to reload the capture settings!
All pixels below this value show a dynamic response below the normal behavior.
Note
Repeating the defective pixel corrections will accumulate the correction data, which leads to a higher value in "DefectivePixelsFound". If you want to reset the correction data or repeat the correction process, you have to set the filter mode to "Reset Calibration Data". In order to limit the amount of defective pixels detected, the "DefectivePixelsMaxDetectionCount" property can be used.
1.16.2.1.2 Dark Current Correction Dark current is a characteristic of image sensors: they deliver a signal even in total darkness because, for example, thermal energy spontaneously creates charge carriers. This signal overlays the image information. Dark current depends on two circumstances:
1. Exposure time
The longer the exposure, the greater the dark current part, i.e. using long exposure times, the dark current itself could lead to an overexposed sensor chip.
2. Temperature
By cooling the sensor chips, the dark current production can be reduced significantly (the dark current is roughly cut in half for every 6 °C of cooling).
1.16.2.1.2.1 Correcting Dark Current The dark current correction is a pixel-wise correction where the dark current correction image removes the dark current from the original image. To get a better result, it is necessary to snap the original and the dark current images with the same exposure time and at the same temperature.
Note
3. If applicable, change Offset_pc until you see an amplitude in the histogram (Figure 4)
7. Finally, you have to activate the correction: Set the (Filter-) "Mode = On"
8. Save the settings including the correction data via "Action -> Capture Settings -> Save
Active Device Settings"
(Settings can be saved in the Windows registry or in a file)
The filter snaps a number of images and averages the dark current images to one correction image.
Note
After having restarted the camera you have to reload the capture settings.
1.16.2.1.3 Flat-Field Correction Each pixel of a sensor chip is a single detector with its own properties. In particular, this pertains to the sensitivity and, as the case may be, the spectral sensitivity. To solve this problem (including lens and illumination variations), a plain, uniformly "colored" calibration plate (e.g. white or gray) is snapped as a flat-field, which will be used to correct the original image. Between the flat-field correction and the future application you must not change the optics. To reduce errors while doing the flat-field correction, a saturation between 50 % and 75 % of the flat-field in the histogram is convenient.
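Conceptually, the correction then scales each pixel with the ratio between the mean flat-field brightness and its own flat-field value (a sketch of the usual flat-field formula; the driver's actual implementation may differ in details):
corrected(x,y) = original(x,y) ∗ mean(flatfield) / flatfield(x,y)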
Note
Flat-field correction can also be used as a destructive watermark and works for all f-stops.
1. You need a plain, uniformly "colored" calibration plate (e.g. white or gray)
2. No single pixel may be saturated - that's why we recommend setting the maximum gray level in the brightest area to max. 75 % of the gray scale (i.e. to gray values below 190 when using 8-bit values)
3. Choose a BayerXY in "Setting -> Base -> Camera -> GenICam -> Image Format Control -> PixelFormat".
6. Finally, you have to activate the correction: Set the (Filter-) "Mode = On"
7. Save the settings including the correction data via "Action -> Capture Settings -> Save
Active Device Settings"
(Settings can be saved in the Windows registry or in a file)
Note
After having restarted the camera you have to reload the capture settings.
The filter snaps a number of images (according to the value of the CalibrationImageCount, e.g. 5) and
averages the flat-field images to one correction image.
1.16.2.1.3.1 Host-based Flat-Field Correction With Calibration AOI In some cases it might be necessary to
use just a specific area within the camera's field of view to calculate the correction values. In this case just a specific
AOI will be used to calculate the correction factor.
You can set the "host-based flat field correction" in the following way:
6. Set the properties ("X, Y and W, H") appeared under "CalibrationAOI" to the desired AOI.
8. Finally, you have to activate the correction: Set the "Mode" to "On".
Figure 7: Image corrections: Host-based flat field correction with calibration AOI
1.16.2.1.3.2 Host-based Flat-Field Correction With Correction AOI In some cases it might be necessary to correct just a specific area in the camera's field of view. In this case, the correction values are only applied to a specific area. For the rest of the image, the correction factor will just be 1.0.
You can set the "host-based flat field correction" in the following way:
6. Now, you have to activate the correction: Set the "Mode" to "On".
8. Finally use the properties ("X, Y and W, H") which appeared under "CorrectionAOI" to configure the desired
AOI.
Figure 8: Image corrections: Host-based flat field correction with correction AOI
The purpose of this chapter is to optimize the color image of a camera so that it looks as natural as possible on different displays and to human vision.
This implies some linear and nonlinear operations (e.g. display color space or gamma viewing LUT) which are normally not necessary or recommended for machine vision algorithms. A standard monitor offers, for example, several display modes like sRGB, "Adobe RGB", etc., which reproduce the very same camera color differently.
• a combination of both.
Camera-based settings are advantageous for achieving the highest calculating precision (independent of the transmission bit depth), the lowest latency (because all calculations are performed in the FPGA on the fly) and a low CPU load (because the host is not burdened with these tasks). These camera-based settings are
Host-based settings save transmission bandwidth at the expense of accuracy or latency and CPU load. Performing gain, offset and white balance in the camera while outputting RAW data to the host can especially be recommended.
Of course host based settings can be used with all families of cameras (e.g. also mvBlueFOX).
To show the different color behaviors, we take a color chart as a starting point:
If we take a SingleFrame image without any color optimizations, the image might look like this:
• saturation is missing,
• etc.
Note
You have to keep in mind that there are two types of images: the one generated in the camera and the other
one displayed on the computer monitor. Up-to-date monitors offer different display modes with different color
spaces (e.g. sRGB). According to the chosen color space, the display of the colors is different.
4. Improve Saturation (p. 148), and use a "color correction matrix" for both
1.16.2.2.1 Step 1: Gamma correction (Luminance) First of all, a gamma correction (luminance) can be performed to adapt the image to the way humans perceive light and color.
• the aperture or
• the gain.
You can change the gain via wxPropView (p. 69) in the following way:
1. Click on "Setting -> Base -> Camera". There you can find
You can turn them "On" or "Off". Using the auto controls you can set the limits of the auto control; without them, you can set the exact value.
Note
As mentioned above, you can do a gamma correction via "Setting -> Base -> ImageProcessing -> LUTOperations":
Just set "LUTEnable" to "On" and adapt the single LUTs (LUT-0, LUT-1, etc.).
1.16.2.2.2 Step 2: White Balance As you can see in the histogram, the colors red and blue are below green. Using green as a reference, we can optimize the white balance via "Setting -> Base -> ImageProcessing" ("WhiteBalanceCalibration"):
Please have a look at White balance of a camera device (color version) (p. 98) for more information for an
automatic white balance.
After optimizing white balance, the image will look like this:
1.16.2.2.3 Step 3: Contrast Still, black is more of a dark gray. To optimize the contrast you can use "Setting -> Base -> ImageProcessing -> LUTControl" as shown in Figure 8.
1.16.2.2.4 Step 4: Saturation and Color Correction Matrix (CCM) Saturation is still missing. To change this, the "Color Transformation Control" can be used ("Setting -> Base -> ImageProcessing -> ColorTwist"):
2. Click on "Wizard" to start the "Color Transformation Control" wizard tool (since firmware version 1.4.57).
Figure 13: Selecting "Color Twist Enable" and clicking on "Wizard" will start the wizard tool
5. Since driver version 2.2.2, it is possible to set the special color correction matrices at
7. click on "Enable".
8. As you can see, the correction is done by the host ("Host Color Correction Controls").
Note
It is not possible to save the settings of the "Host Color Correction Controls" in the mvBlueFOX. Unlike
in the case of Figure 14, the buttons to write the "Device Color Correction Controls" to the mvBlueFOX
are not active.
1.16.3.1.1 Scenario The CMOS sensors used in mvBlueFOX cameras support the following trigger modes:
• Continuous
• OnLowLevel
• OnHighLevel
• OnHighExpose (only with mvBlueFOX-[Model]205 (5.0 Mpix [2592 x 1944]) (p. 214))
If an external trigger signal occurs (e.g. high), the sensor will start to expose and read out one image. Now, if the trigger signal is still high, the sensor will start to expose and read out the next image (see Figure 1, upper part). This will lead to an acquisition just like using continuous trigger.
If you want to avoid this effect, you have to adjust the trigger signal. As you can see in Figure 1 (lower part), the high time of the signal has to be smaller than the time an image needs (t_expose + t_readout).
1.16.3.1.2 Example
Note
Using the mvBlueFOX-MLC or mvBlueFOX-IGC, you have to select DigIn0 as the trigger source, because the camera has only one opto-coupled input. Only the TTL model of the mvBlueFOX-MLC has two I/Os.
• Trigger modes
– OnHighLevel:
The high level of the trigger has to be shorter than the frame time. In this case, the sensor will take exactly one image. If the high time is longer, images will be taken at the sensor's possible frequency for as long as the high level lasts. The first image will start with the low-high edge of the signal. The integration time of the exposure register will be used.
– OnLowLevel:
The first image will start with the high-low edge of the signal.
– OnHighExpose
This mode is like OnHighLevel, however, the high time of the signal defines the exposure time.
See also
Block diagrams with example circuits of the opto-isolated digital inputs and outputs can be found in Dimen-
sions and connectors (p. 53).
There are several use cases concerning High Dynamic Range Control:
1.16.4.1.1 Introduction The HDR (High Dynamic Range) mode of the -x00w sensor increases the usable contrast range. This is achieved by dividing the integration time into two or three phases. The exposure time proportion of the three phases can be set independently. Furthermore, you can set how much signal each phase contributes.
1.16.4.1.2 Functionality
1.16.4.1.2.1 Description
• "Phase 0"
– During T1 all pixels are integrated until they reach the defined signal level of Knee Point 1.
– If a pixel reaches this level, its integration is stopped.
– During T1 no pixel can reach a level higher than P1.
• "Phase 1"
– During T2 all pixels are integrated until they reach the defined signal level of Knee Point 2.
– T2 is always smaller than T1, so its percentage of the total exposure time is lower.
– Thus, the signal increase during T2 is lower than during T1.
– The max. signal level of Knee Point 2 is higher than that of Knee Point 1.
• "Phase 2"
For this reason, darker pixels can be integrated during the complete integration time and the sensor reaches its full sensitivity. Pixels which are limited at the Knee Points lose a part of their integration time - the more, the brighter they are.
In the diagram you can see the signal curves of three pixels of different brightness. The slope depends on the light intensity and is therefore constant per pixel here (granted that the light intensity is temporally constant). Given that the very bright pixel is limited early at the signal levels S1 and S2, its total integration time is lower compared to the dark pixel. In practice, the parts of the integration time are very different: T1, for example, is 95 % of T_total, T2 only 4 % and T3 only 1 %. Thus, a strong compression of the very bright pixels can be achieved. However, if you divide the signal thresholds into three equal parts, i.e. S2 = 2 x S1 and S3 = 3 x S1, a pixel needs a hundredfold brightness for the step from S2 to S3 compared to the step from 0 to S1.
1.16.4.1.3 Using HDR with mvBlueFOX-x00w Figure 3 shows the usage of the HDR mode. Here, an image sequence was created with integration times between 10 us and 100 ms. You can see three slopes of the HDR mode. The "waves" result from rounding during the three exposure phases, which can only be adjusted in steps of one line period of the sensor.
1.16.4.1.3.1 Notes about the usage of the HDR mode with mvBlueFOX-x00w
• In HDR mode, the basic amplification is reduced by a factor of approx. 0.7 to utilize the huge dynamic range of the sensor.
• Exposure times which are too low make no sense; a sensible lower limit is reached when the exposure time of the third phase reaches its possible minimum (one line period).
1.16.4.1.3.2 Possible settings using mvBlueFOX-x00w Possible settings of the mvBlueFOX-x00w in HDR
mode are:
"HDREnable":
-"HDRMode":
• "Fixed": Fixed setting with 2 Knee Points. modulation Phase 0 .. 33% / 1 .. 66% / 2 .. 100%
"User": Variable setting of the Knee Point (1..2), threshold and exposure time proportion
• "HDRKneePoints"
– "HDRKneePoint-0"
1.16.4.2.1 Introduction The HDR (High Dynamic Range) mode of the Aptina sensor increases the usable contrast range. This is achieved by dividing the integration time into three phases. The exposure time proportion of the three phases can be set independently.
1.16.4.2.2 Functionality To exceed the typical dynamic range, images are captured with 3 different exposure times at given ratios. The figure shows a multiple exposure capture using 3 different exposure times.
Note
The longest exposure time (T1) represents the Exposure_us parameter you can set in wxPropView.
Afterwards, the signal is fully linearized before going through a compander to be output as a piece-wise linear signal. The next figure shows this.
1.16.4.2.2.1 Description Exposure ratios can be controlled by the program. Two ratios are used: R1 = T1/T2 and R2 = T2/T3.
Increasing R1 and R2 will increase the dynamic range of the sensor at the cost of lower signal-to-noise ratio (and
vice versa).
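For example, with R1 = 8, R2 = 4 and T1 = 8000 us, this yields T2 = T1 / R1 = 1000 us and T3 = T2 / R2 = 250 us.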
1.16.4.2.2.2 Possible settings Possible settings of the mvBlueFOX-x02d in HDR mode are:
• "HDREnable":
* "HDRMode":
· "Fixed": Fixed setting with exposure-time-ratios: T1 -> T2 ratio / T2 -> T3 ratio
· "Fixed0": 8 / 4
· "Fixed1": 4 / 8
· "Fixed2": 8 / 8
· "Fixed3": 8 / 16
· "Fixed4": 16 / 16
· "Fixed5": 16 / 32
1.16.5.1.1 Introduction Look-Up-Tables (LUT) are used to transform input data into a desirable output format.
For example, if you want to invert an 8 bit image, a Look-Up-Table will look like the following:
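Each input value v is mapped to the output value 255 - v:
Input value 0 1 2 ... 253 254 255
Output value 255 254 253 ... 2 1 0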
I.e., a pixel which is white in the input image (value 255) will become black (value 0) in the output image.
All MATRIX VISION devices use a hardware based LUT which means that
Note
The mvBlueFOX cameras also feature a hardware-based LUT. Although you have to set the LUT via Setting -> Base -> ImageProcessing -> LUTOperations (p. 158), you can select where the processing takes place: for this purpose there is the parameter LUTImplementation. Just select either "Software" or "Hardware".
1.16.5.1.3 Setting the Host based LUTs via LUTOperations Host based LUTs are also available via "Setting
-> Base -> ImageProcessing -> LUTOperations"). Here, the changes will affect the 8 bit image data and the
processing needs the CPU of the host system.
• "Gamma"
You can use "Gamma" to lift darker image areas and to flatten the brighter ones. This compensates the
contrast of the object. The calculation is described here. It makes sense to set the "←-
GammaStartThreshold" higher than 0 to avoid a too extreme lift or noise in the darker areas.
• "Interpolated"
With "Interpolated" you can set the key points of a characteristic line. You can defined the number of key
points. The following figure shows the behavior of all 3 LUTInterpolationModes with 3 key points:
• "Direct"
With "Direct" you can set the LUT values directly.
1.16.5.1.3.1 Example 1: Inverting an Image To get an inverted 8 bit mono image like shown in Figure 1, you
can set the LUT using wxPropView (p. 69). After starting wxPropView (p. 69) and using the device,
1. Set "LUTEnable" to "On" in "Setting -> Base -> ImageProcessing -> LUTOperations".
3. Right-click on "LUTs -> LUT-0 -> DirectValues[256]" and select "Set Multiple Elements... -> Via A User
Defined Value Range".
4. Now you can set the range from 0 to 255 and the values from 255 to 0 as shown in Figure 2.
This is one way to get an inverted result. It is also possible to use the "LUTMode" - "Interpolated".
Note
As described in Storing and restoring settings (p. 81), it is also possible to save the settings as an XML file on the host system. You can find further information about, for example, the XML compatibilities of the different driver versions in the mvIMPACT Acquire SDK manuals and the according setting classes: https://www.matrix-vision.com/manuals/SDK_CPP/classmvIMPACT_1_1acquire_1_1FunctionInterface.html (C++)
1.16.6.1.1 Basics about user data It is possible to save arbitrary user-specific data in the hardware's non-volatile memory. The number of possible entries depends on the length of the individual entries as well as the size of the device's non-volatile memory reserved for storing them:
• mvBlueFOX,
• mvBlueFOX-M,
• mvBlueFOX-MLC,
• mvBlueFOX3, and
• mvBlueCOUGAR-X
currently offer 512 bytes of user accessible non-volatile memory of which 12 bytes are needed to store header
information leaving 500 bytes for user specific data.
as well as an optional:
1 + <length_of_password> bytes per entry if a password has been defined for this particular entry
It is possible to save either String or Binary data in the data property of each entry. When storing binary data, please note that this data will internally be stored in Base64 format, thus the amount of memory required is 4/3 times the binary data size.
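For example, 300 bytes of binary data will occupy 300 ∗ 4/3 = 400 of the 500 bytes available for user-specific data.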
The UserData can be accessed and created using wxPropView (p. 69) (the device has to be closed). In the section
"UserData" you will find the entries and following methods:
• "CreateUserDataEntry"
• "DeleteUserDataEntry"
• "WriteDataToHardware"
• In "Entries" click on the entry you want to adjust and modify the data fields.
To permanently commit a modification made with the keyboard the ENTER key must be pressed.
• To save the data on the device, you have to execute "WriteDataToHardware". Please have a look at
the "Output" tab in the lower right section of the screen as shown in Figure 2, to see if the write process
returned with no errors. If an error occurs a message box will pop up.
1.16.6.1.2 Coding sample If you e.g. want to use the UserData as a dongle mechanism (with binary data), it is not advisable to use wxPropView (p. 69). In this case you have to program the handling of the user data.
See also
mvIMPACT::acquire::UserDataEntry in mvIMPACT_Acquire_API_CPP_manual.chm.
1.16.7.1.1 Scenario If you want to have a synchronized stereo camera array (e.g. mvBlueFOX-MLC-202dG)
with a rolling shutter master camera (e.g. mvBlueFOX-MLC-202dC), you can solve this task as follows:
1. Please check, if all mvBlueFOX cameras are using firmware version 1.12.16 or newer.
2. Now, open wxPropView (p. 69) and set the master camera:
Figure 1: wxPropView - Master camera outputs at DigOut 0 a frame synchronous V-Sync pulse
Note
Alternatively, it is also possible to use the HRTC - Hardware Real-Time Controller (p. 118) to set up the master camera. The following sample shows the HRTC program which sets the trigger signal and the digital output.
The sample will lead to a constant frame rate of 16.7 fps (50000 us + 10000 us = 60000 us for one cycle; 1 / 60000 us ∗ 1000000 = 16.67 Hz).
Figure 2: wxPropView - HRTC program sets the trigger signal and the digital output
Do not forget to set HRTC as the trigger source for the master camera.
Figure 3: wxPropView - HRTC is the trigger source for the master camera
1.16.7.1.1.1 Connection using -UOW versions (opto-isolated inputs and outputs) The connection of the
mvBlueFOX cameras should be like this:
1.16.7.1.1.2 Connection using -UTW versions (TTL inputs and outputs) The connection of the mvBlueFOX
cameras should be like this:
See also
• Dimensions and connectors (p. 53) Table of connector pin out of "12-pin through-hole type shrouded
header (USB / Dig I/O)".
• Dimensions and connectors (p. 53) Electrical drawing "opto-isolated digital inputs" and "opto-isolated
digital outputs".
This can be achieved by connecting the same external trigger signal to one of the digital inputs of each camera, as shown in the following figure:
Each camera then has to be configured for external trigger, as in the image below:
This assumes that the image acquisition shall start with the rising edge of the trigger signal. Every camera must be
configured like this. Each rising edge of the external trigger signal then will start the exposure of a new image at
the same time on each camera. Every trigger signal that will occur during the exposure of an image will be silently
discarded.
Note
Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.
There are several use cases concerning the Hardware Real-Time Controller (HRTC):
– Delay the expose start of the following camera (HRTC) (p. 176)
Note
Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.
With the use of the HRTC, any feasible frequency with an accuracy of microseconds (us) is possible. The program to achieve this roughly must look like this (with the trigger mode set to ctmOnRisingEdge):
So to get e.g. exactly 10 images per second from the camera, the program would look like this (of course, the exposure time must then be smaller than or equal to the frame time in normal shutter mode):
0. WaitClocks 99000 (= 99 ms)
1. TriggerSet 1
2. WaitClocks 1000 (= 1 ms trigger pulse width)
3. TriggerReset
4. Jump 0
(99 ms + 1 ms = 100 ms per cycle, i.e. exactly 10 images per second)
See also
Download this sample as an rtp file: Frequency10Hz.rtp. To open the file in wxPropView (p. 69),
click on "Digital I/O -> HardwareRealTimeController -> Filename" and select the
downloaded file. Afterwards, click on "int Load( )" to load the HRTC program.
Note
To see a code sample (in C++) how this can be implemented in an application see the description of the class
mvIMPACT::acquire::RTCtrProgram (C++ developers)
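As a rough sketch of how the 10 Hz program above might be assembled in code (class, property and enum names follow the mvIMPACT Acquire C++ API as documented for RTCtrProgram; verify the exact names and signatures against your installed headers):
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
using namespace mvIMPACT::acquire;

void run10HzProgram( Device* pDev )
{
    CameraSettingsBlueFOX cs( pDev );
    cs.triggerMode.write( ctmOnRisingEdge );
    cs.triggerSource.write( ctsRTCtrl );            // the HRTC generates the trigger signal
    IOSubSystemBlueFOX ioss( pDev );
    RTCtrProgram* pProgram = ioss.getRTCtrProgram( 0 );
    pProgram->setProgramSize( 5 );
    RTCtrProgramStep* pStep = pProgram->programStep( 0 );
    pStep->opCode.write( rtctrlProgWaitClocks );    // 0. WaitClocks 99000
    pStep->clocks_us.write( 99000 );
    pStep = pProgram->programStep( 1 );
    pStep->opCode.write( rtctrlProgTriggerSet );    // 1. TriggerSet 1
    pStep->frameID.write( 1 );                      // assumption: property name for the frame ID
    pStep = pProgram->programStep( 2 );
    pStep->opCode.write( rtctrlProgWaitClocks );    // 2. WaitClocks 1000
    pStep->clocks_us.write( 1000 );
    pStep = pProgram->programStep( 3 );
    pStep->opCode.write( rtctrlProgTriggerReset );  // 3. TriggerReset
    pStep = pProgram->programStep( 4 );
    pStep->opCode.write( rtctrlProgJumpLoc );       // 4. Jump 0
    pStep->address.write( 0 );
    pProgram->mode.write( rtctrlModeRun );          // start executing the HRTC program
}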
Note
Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.
0. WaitDigin DigIn0->On
1. WaitClocks <delay time>
2. TriggerSet 0
3. WaitClocks <trigger pulse width>
4. TriggerReset
5. Jump 0
As soon as digital input 0 changes from low to high (0), the HRTC waits the <delay time> (1) and starts the image exposure. The exposure time is taken from the camera's exposure setting. Step (5) jumps back to the beginning to wait for the next incoming signal.
Note
When using WaitDigIn[On,Ignore] and WaitDigIn[Off,Ignore], the minimum pulse width which can be detected by the HRTC has to be at least 5 us.
Note
Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.
If you need a double acquisition, i.e. take two images in a very short time interval, you can achieve this by using the
HRTC.
• Now, you have to wait until the first image has been read out and then
0 WaitDigin DigitalInputs[0] - On
1 TriggerSet 1
2 WaitClocks 200
3 TriggerReset
4 WaitClocks 5
5 ExposeSet
6 WaitClocks 60000 (= 60 ms)
7 TriggerSet 2
8 WaitClocks 100
9 TriggerReset
10 ExposeReset
11 WaitClocks 60000 (= 60 ms)
12 Jump 0
Note
Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.
0. WaitDigin DigIn0->Off
1. TriggerSet 1
2. WaitClocks <trigger pulse width>
3. TriggerReset
4. WaitClocks <time between 2 acquisitions - 10us> (= WC1)
5. TriggerSet 2
6. WaitClocks <trigger pulse width>
7. TriggerReset
8. Jump 0
This program generates two internal trigger signals after digital input 0 goes low. The time between those internal trigger signals is defined by step (4). Each image gets a different frame ID: the first one has the number 1, defined in command (1), and the second image will have the number 2. The application can query the frame ID of each image, so it is well known which image is the first and which is the second one.
1.16.8.5 Take two images with different expose times after an external trigger (HRTC)
Note
Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.
The following code shows the solution in combination with a CCD model of the camera. With CCD models you have
to set the exposure time using the trigger width.
0. WaitDigin DigIn0->Off
1. ExposeSet
2. WaitClocks <expose time image1 - 10us> (= WC1)
3. TriggerSet 1
4. WaitClocks <trigger pulse width>
5. TriggerReset
6. ExposeReset
Figure 1: Take two images with different expose times after an external trigger
Note
Due to the internal loop to wait for a trigger signal, the WaitClocks call between "TriggerSet 1" and "TriggerReset" constitutes 100. For this reason, the trigger signal cannot be missed.
Before the ExposeReset, you have to call TriggerReset, otherwise the normal flow will continue and the image data will be lost!
The sensor exposure time after the TriggerSet is 0.
Using a CMOS model (e.g. the mvBlueFOX-MLC205), a sample with four consecutive exposure times (10ms /
20ms / 40ms / 80ms) triggered just by one hardware input signal would look like this:
0. WaitDigin DigIn0->On
1. TriggerSet
2. WaitClocks 10000 (= 10 ms)
3. TriggerReset
4. WaitClocks 1000000 (= 1 s)
5. TriggerSet
6. WaitClocks 20000 (= 20 ms)
7. TriggerReset
8. WaitClocks 1000000 (= 1 s)
9. TriggerSet
10. WaitClocks 40000 (= 40 ms)
11. TriggerReset
12. WaitClocks 1000000 (= 1 s)
13. TriggerSet
14. WaitClocks 80000 (= 80 ms)
15. TriggerReset
16. WaitClocks 1000000 (= 1 s)
17. Jump 0
Note
Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.
To achieve edge-controlled triggering, you can use the HRTC. Please follow these steps:
1. The HRTC program waits for a rising edge at the digital input 0 (step 1).
5. Now, the HRTC program waits for a falling edge at the digital input 0 (step 5).
6. If there is a falling edge, the trigger will jump to step 0 (step 6).
Note
The waiting time at step 0 is necessary to debounce the signal level at the input (the duration should be shorter
than the frame time).
See also
To see a code sample (in C++) how this can be implemented in an application see the description of the class
mvIMPACT::acquire::RTCtrProgram (C++ developers)
Note
Please have a look at the Hardware Real-Time Controller (HRTC) (p. 118) chapter for basic information.
The use case Synchronize the cameras to expose at the same time (p. 167) shows how you have to connect the cameras.
If a defined delay is necessary between the cameras, the HRTC can do the synchronization work.
In this case, one camera must be the master. The external trigger signal that will start the acquisition must be connected to one of the camera's digital inputs. One of the digital outputs will then be connected to the digital input of the next camera, so camera one uses its digital output to trigger camera two. How to connect the cameras to one another can also be seen in the following image:
Figure 1: Connection diagram for a defined delay from the exposure start of one camera relative to another
Assuming that the external trigger is connected to digital input 0 of camera one and digital output 0 is connected to digital input 0 of camera two, each additional camera will be connected to its predecessor like camera 2 is connected to camera 1. The HRTC of camera one then has to be programmed like this:
0. WaitDigin DigIn0->On
1. TriggerSet 0
2. WaitClocks <trigger pulse width>
3. TriggerReset
4. WaitClocks <delay time>
5. SetDigout DigOut0->On
6. WaitClocks 100 (= 100 us)
7. SetDigout DigOut0->Off
8. Jump 0
When the cameras are set up to start the exposure on the rising edge of the signal, <delay time> of course is the desired delay time minus <trigger pulse width>.
If more than two cameras shall be connected like this, every camera except the last one must run a program like
the one discussed above. The delay times of course can vary.
1.17.1.1.1 Introduction The CCD sensor is a highly programmable imaging module which will, for example, enable the following types of applications:
Industrial applications:
• triggered image acquisition with precise control of image exposure start by hardware trigger input.
– frame exposure, integrating all pixels at a time in contrast to CMOS imager which typically integrate
line-by-line.
– short shutter time, to get sharp images.
– flash control output to have enough light for short time.
Scientific applications:
1.17.1.1.2 Details of operation The process of getting an image from the CCD sensor can be separated into
three different phases.
1.17.1.1.2.1 Trigger When coming out of reset or ready with the last readout, the CCD controller is waiting for a trigger signal. The following trigger modes are available:
Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel Start an exposure of a frame as long as the trigger input is below the trigger threshold.
OnHighLevel Start an exposure of a frame as long as the trigger input is above the trigger threshold.
OnFallingEdge Each falling edge of trigger signal acquires one image.
OnRisingEdge Each rising edge of trigger signal acquires one image.
OnHighExpose Each rising edge of trigger signal acquires one image, exposure time corresponds to pulse
width.
See also
• C: TCameraTriggerMode
• C++: mvIMPACT::acquire::TCameraTriggerMode
1.17.1.1.2.2 Exposure aka Integration After an active trigger, the exposure phase starts with a maximum jitter of t_trig. If flash illumination is enabled in software, the flash output will be activated exactly while the sensor chip is integrating light. Exposure time is adjustable by software in increments of t_readline.
1.17.1.1.2.3 Readout When exposure is finished, the image is transferred to hidden storage cells on the CCD. Image data is then shifted out line-by-line and transferred to memory. Shifting out non-active lines takes t_vshift, while shifting out active lines consumes t_readline. The number of active pixels per line does not have any impact on readout speed.
1.17.1.1.3.1 Timings
Note
To calculate the maximum frames per second (FPS_max) you will need the following formula (ExposeMode: Standard):
FPS_max = 1
-----------------------------------------------
t_trig + t_readout + t_exposure + t_trans + t_wait
(ExposeMode: Overlapped):
Now, when we insert the values using an exposure time of, for example, 65 us, 100 lines and a 12 MHz pixel clock (ExposeMode: Standard):
FPS_max = 1
-----------------------------------------------------------------------------------
10 us + ((100 * 64 us) + ((510 - 100) * 4.85 us) + 3.15 us) + 65 us + 64 us + 64 us
= 0.0001266704667806700868 1 / us
= 126.7
Note
The calculator returns the max. frame rate supported by the sensor. Please keep in mind that whether this frame rate can be transferred depends on the interface and the used image format.
See also
To find out how to achieve any defined freq. below or equal to the achievable max. freq., please have a look
at Achieve a defined image frequency (HRTC) (p. 169).
1.17.1.1.4 Reprogramming CCD Timing Reprogramming the CCD controller will happen when one of the following settings changes: exposure time, capture window or trigger mode. The time needed for this consists of two parts:
1. Time needed to send data to the CCD controller, depending on what is changed:
exposure: approx. 2..3 ms
window: approx. 4..6 ms
trigger mode: 5..90 ms, varies with the old mode/new mode combination
2. Time to initialize (erase) the CCD chip after reprogramming; this is fixed, approx. 4.5 ms
So, for example, when reprogramming the capture window you will need (average values):
t_regprog = 5 ms + 4.5 ms = 9.5 ms
• Number of effective pixels: 659 (H) x 494 (V) approx. 330K pixels
• Total number of pixels: 692 (H) x 504 (V) approx. 350K pixels
• Optical black:
1.17.1.1.5.1 Characteristics These zone definitions apply to both the color and gray scale version of the sensor.
1.17.1.1.6 CCD Signal Processing The CCD signal is processed with an analog front-end and digitized by a 12 bit analog-to-digital converter (ADC). The analog front-end contains a programmable gain amplifier which is variable from 0 dB (gain = 0) to 30 dB (gain = 255).
The 8 most significant bits of the ADC are captured to the frame buffer. This gives the following transfer function (based on the 8 bit digital code): Digital_code [lsb] = ccd_signal [V] ∗ 256 [lsb/V] ∗ 10^(gain [dB]/20), where lsb is the least significant bit (smallest digital code change).
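For example, at gain = 0 dB a CCD signal of 0.5 V yields a digital code of 0.5 ∗ 256 = 128 lsb; the maximum gain of 30 dB scales the signal by a factor of 10^(30/20) ≈ 31.6.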
Device Feature And Property List (p. 182)
1.17.1.2.1 Introduction The CCD sensor is a highly programmable imaging module which will, for example, enable the following types of applications:
Industrial applications:
• triggered image acquisition with precise control of image integration start by hardware trigger input.
– frame integration, integrating all pixels at a time in contrast to CMOS imager which typically integrate
line-by-line.
– short shutter time, to get sharp images.
– flash control output to have enough light for short time.
Scientific applications:
1.17.1.2.2 Details of operation The process of getting an image from the CCD sensor can be separated into
three different phases.
1.17.1.2.2.1 Trigger When coming out of reset or ready with the last readout, the CCD controller is waiting for a trigger signal.
The following trigger modes are available:
Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.
OnFallingEdge Each falling edge of trigger signal acquires one image.
OnRisingEdge Each rising edge of trigger signal acquires one image.
OnHighExpose Each rising edge of trigger signal acquires one image, exposure time corresponds to pulse
width.
See also
• C: TCameraTriggerMode
• C++: mvIMPACT::acquire::TCameraTriggerMode
Note
Trigger modes which use an external input (ctmOnLowLevel, ctmOnHighLevel, ctmOnRisingEdge, ctmOnFallingEdge) will use digital input 0 as the input for the trigger signal. Input 0 is not restricted to the trigger function; it can always also be used as a general purpose digital input. The input switching threshold of all inputs can be programmed with write_dac(level_in_mV). It is best to set this to half of the input voltage. So, for example, if you apply a 24 V switching signal to the digital inputs, set the threshold to 12000 mV.
1.17.1.2.2.2 Exposure aka Integration After an active trigger, the integration phase starts with a maximum jitter of t_trig. If flash illumination is enabled in software, the flash output will be activated exactly while the sensor chip is integrating light. Exposure time is adjustable by software in increments of t_readline.
1.17.1.2.2.3 Readout When integration is finished, the image is transferred to hidden storage cells on the CCD. Image data is then shifted out line-by-line and transferred to memory. Shifting out non-active lines takes t_vshift, while shifting out active lines consumes t_readline. The number of active pixels per line does not have any impact on readout speed.
1.17.1.2.3.1 Timings
Note
To calculate the maximum frames per second (FPS_max) you will need the following formula (Expose mode: No overlap):
FPS_max = 1
--------------------------------------------------
t_trig + t_readout + t_exposure + t_trans + t_wait
1.17.1.2.3.2 Example: Frame rate as function of lines & exposure time Now, when we insert the values using an exposure time of, for example, 8000 us, 480 lines and a 40 MHz pixel clock (Expose mode: No overlap):
FPS_max = 1
-----------------------------------------------------------------------------------------------
1.8 us + ((480 * 19.525 us) + ((504 - 480) * 1.80 us) + 19.525 us) + 8000 us + 21.3 us + 3.6 us
= 0.0000572690945899318068 1 / us
= 57.3
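This arithmetic can be reproduced with a small standalone C++ helper (times in microseconds; the decomposition of t_readout into line terms mirrors the example above and should be treated as a sketch of the general formula, not as a driver API):
#include <cstdio>

// Readout time: active lines shift out in t_readline each, the remaining
// (non-active) lines in t_vshift each, plus a small constant overhead.
double readoutTime_us( int activeLines, int totalLines, double t_readline,
                       double t_vshift, double t_overhead )
{
    return activeLines * t_readline + ( totalLines - activeLines ) * t_vshift + t_overhead;
}

// FPS_max for expose mode 'No overlap' as given in the manual.
double fpsMax( double t_trig, double t_readout, double t_exposure,
               double t_trans, double t_wait )
{
    return 1.0e6 / ( t_trig + t_readout + t_exposure + t_trans + t_wait );
}

int main()
{
    // values from the example above: 480 of 504 lines, 19.525 us per line
    const double t_readout = readoutTime_us( 480, 504, 19.525, 1.80, 19.525 );
    std::printf( "FPS_max = %.1f\n", fpsMax( 1.8, t_readout, 8000.0, 21.3, 3.6 ) ); // prints 57.3
    return 0;
}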
Note
The calculator returns the max. frame rate supported by the sensor. Please keep in mind that whether this frame rate can be transferred depends on the interface and the used image format.
See also
To find out how to achieve any defined freq. below or equal to the achievable max. freq., please have a look
at Achieve a defined image frequency (HRTC) (p. 169).
1.17.1.2.4 Reprogramming CCD Timing Reprogramming the CCD controller will happen when one of the following settings changes: exposure time, capture window or trigger mode. The time needed for this consists of two parts:
1. Time needed to send data to the CCD controller, depending on what is changed:
exposure: approx. 2..3 ms
window: approx. 4..6 ms
trigger mode: 5..90 ms, varies with the old mode/new mode combination
2. Time to initialize (erase) the CCD chip after reprogramming; this is fixed, approx. 4.5 ms
So, for example, when reprogramming the capture window you will need (average values):
t_regprog = 5 ms + 4.5 ms = 9.5 ms
• Number of effective pixels: 659 (H) x 494 (V) approx. 330K pixels
• Total number of pixels: 692 (H) x 504 (V) approx. 350K pixels
• Optical black:
1.17.1.2.5.1 Characteristics These zone definitions apply to both the color and gray scale version of the sensor.
1.17.1.3.1 Introduction The CCD sensor is a highly programmable imaging module which will, for example, enable the following types of applications:
Industrial applications:
• triggered image acquisition with precise control of image exposure start by hardware trigger input.
– frame exposure, integrating all pixels at a time in contrast to CMOS imager which typically integrate
line-by-line.
– short shutter time, to get sharp images.
– flash control output to have enough light for short time.
Scientific applications:
1.17.1.3.2 Details of operation The process of getting an image from the CCD sensor can be separated into
three different phases.
1.17.1.3.2.1 Trigger When coming out of reset or ready with the last readout, the CCD controller is waiting for a trigger signal. The following trigger modes are available:
Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.
OnFallingEdge Each falling edge of trigger signal acquires one image.
OnRisingEdge Each rising edge of trigger signal acquires one image.
OnHighExpose Each rising edge of trigger signal acquires one image, exposure time corresponds to pulse width.
OnLowExpose Each falling edge of trigger signal acquires one image, exposure time corresponds to pulse width.
OnAnyEdge Start the exposure of a frame when the trigger input level changes from high to low or from low to high.
See also
• C: TCameraTriggerMode
• C++: mvIMPACT::acquire::TCameraTriggerMode
1.17.1.3.2.2 Exposure aka Integration After an active trigger, the exposure phase starts with a maximum jitter of t_trig. If flash illumination is enabled in software, the flash output will be activated exactly while the sensor chip is integrating light. Integration time is adjustable by software in increments of t_readline.
1.17.1.3.2.3 Readout When exposure is finished, the image is transferred to hidden storage cells on the CCD. Image data is then shifted out line-by-line and transferred to memory. Shifting out non-active lines takes t_vshift, while shifting out active lines consumes t_readline. The number of active pixels per line does not have any impact on readout speed.
1.17.1.3.3.1 Timings
Note
To calculate the maximum frames per second (FPS_max) you will need the following formula (Expose mode: Sequential):
FPS_max = 1
-----------------------------------------------
t_trig + t_readout + t_exposure + t_trans + t_wait
Now, when we insert the values using an exposure time of, for example, 8000 us, 768 lines and a 40 MHz pixel clock (Expose mode: Sequential):
FPS_max = 1
-------------------------------------------------------------------------------------------
4.85 us + ((768 * 32.7 us) + ((788 - 768) * 4.85 us) + 32.7 us) + 8000 us + 22.5 us + 58 us
= 0.000030004215592290717 1 / us
= 30
Note
The calculator returns the max. frame rate supported by the sensor. Please keep in mind that whether this frame rate can be transferred depends on the interface and the used image format.
See also
To find out how to achieve any defined freq. below or equal to the achievable max. freq., please have a look
at Achieve a defined image frequency (HRTC) (p. 169).
1.17.1.3.4 Reprogramming CCD Timing Reprogramming the CCD controller will happen when one of the following settings changes: exposure time, capture window or trigger mode. The time needed for this consists of two parts:
1. Time needed to send data to the CCD controller, depending on what is changed:
exposure: approx. 2..3 ms
window: approx. 4..6 ms
trigger mode: 5..90 ms, varies with the old mode/new mode combination
2. Time to initialize (erase) the CCD chip after reprogramming; this is fixed, approx. 4.5 ms
So, for example, when reprogramming the capture window you will need (average values):
t_regprog = 5 ms + 4.5 ms = 9.5 ms
• Number of effective pixels: 1025 (H) x 768 (V) approx. 790K pixels
• Total number of pixels: 1077 (H) x 788 (V) approx. 800K pixels
• Optical black:
1.17.1.3.5.1 Characteristics These zone definitions apply to both the color and gray scale version of the sensor.
1.17.1.3.6 CCD Signal Processing The CCD signal is processed with an analog front-end and digitized by a 12 bit analog-to-digital converter (ADC). The analog front-end contains a programmable gain amplifier which is variable from 0 dB (gain = 0) to 30 dB (gain = 255).
The 8 most significant bits of the ADC are captured to the frame buffer. This gives the following transfer function (based on the 8 bit digital code): Digital_code [lsb] = ccd_signal [V] ∗ 256 [lsb/V] ∗ 10^(gain [dB]/20), where lsb is the least significant bit (smallest digital code change).
1.17.1.4.1 Introduction The CCD sensor is a highly programmable imaging module which will, for example, enable the following types of applications:
Industrial applications:
• triggered image acquisition with precise control of image exposure start by hardware trigger input.
– frame exposure, integrating all pixels at a time in contrast to CMOS imager which typically integrate
line-by-line.
– short shutter time, to get sharp images.
– flash control output to have enough light for short time.
Scientific applications:
1.17.1.4.2 Details of operation The process of getting an image from the CCD sensor can be separated into
three different phases.
1.17.1.4.2.1 Trigger When coming out of reset or ready with the last readout, the CCD controller is waiting for a trigger signal. The following trigger modes are available:
Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.
OnFallingEdge Each falling edge of trigger signal acquires one image.
OnRisingEdge Each rising edge of trigger signal acquires one image.
OnHighExpose Each rising edge of trigger signal acquires one image, exposure time corresponds to pulse
width.
See also
• C: TCameraTriggerMode
• C++: mvIMPACT::acquire::TCameraTriggerMode
1.17.1.4.2.2 Exposure aka Integration After an active trigger, the exposure phase starts with a maximum jitter of t_trig. If flash illumination is enabled in software, the flash output will be activated exactly while the sensor chip is integrating light. Exposure time is adjustable by software in increments of t_readline.
1.17.1.4.2.3 Readout When exposure is finished, the image is transferred to hidden storage cells on the CCD. Image data is then shifted out line-by-line and transferred to memory. Shifting out non-active lines takes t_vshift, while shifting out active lines consumes t_readline. The number of active pixels per line does not have any impact on readout speed.
1.17.1.4.3.1 Timings
Note
To calculate the maximum frames per second (FPS_max) you will need the following formula (Expose mode: No overlap):
1.17.1.4.3.2 Example: Frame rate as function of lines & exposure time Now, when we insert the values using an exposure time of, for example, 8000 us, 1024 lines and a 56 MHz pixel clock (Expose mode: No overlap):
See also
To find out how to achieve any defined freq. below or equal to the achievable max. freq., please have a look
at Achieve a defined image frequency (HRTC) (p. 169).
1.17.1.4.4 Reprogramming CCD Timing Reprogramming the CCD controller will happen when one of the following settings changes: exposure time, capture window or trigger mode. The time needed for this consists of two parts:
1. Time needed to send data to the CCD controller, depending on what is changed:
exposure: approx. 2..3 ms
window: approx. 4..6 ms
trigger mode: 5..90 ms, varies with the old mode/new mode combination
2. Time to initialize (erase) the CCD chip after reprogramming; this is fixed, approx. 4.5 ms
So, for example, when reprogramming the capture window you will need (average values):
t_regprog = 5 ms + 4.5 ms = 9.5 ms
• Number of effective pixels: 1392 (H) x 1040 (V) approx. 1.45M pixels
• Total number of pixels: 1434 (H) x 1050 (V) approx. 1.5M pixels
• Optical black:
1.17.1.4.5.1 Characteristics These zone definitions apply to both the color and gray scale version of the sensor.
1.17.1.4.6 CCD Signal Processing The CCD signal is processed with an analog front-end and digitized by a 12 bit analog-to-digital converter (ADC). The analog front-end contains a programmable gain amplifier which is variable from 0 dB (gain = 0) to 30 dB (gain = 255).
The 8 most significant bits of the ADC are captured to the frame buffer. This gives the following transfer function (based on the 8 bit digital code): Digital_code [lsb] = ccd_signal [V] ∗ 256 [lsb/V] ∗ 10^(gain [dB]/20), where lsb is the least significant bit (smallest digital code change).
1.17.1.5.1 Introduction The CCD sensor is a highly programmable imaging module which will, for example, enable the following types of applications:
Industrial applications:
• triggered image acquisition with precise control of image exposure start by hardware trigger input.
– frame exposure, integrating all pixels at a time in contrast to CMOS imager which typically integrate
line-by-line.
– short shutter time, to get sharp images.
– flash control output to have enough light for short time.
Scientific applications:
1.17.1.5.2 Details of operation The process of getting an image from the CCD sensor can be separated into
three different phases.
1.17.1.5.2.1 Trigger When coming out of reset or ready with the last readout, the CCD controller is waiting for a trigger signal. The following trigger modes are available:
Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.
OnFallingEdge Each falling edge of trigger signal acquires one image.
OnRisingEdge Each rising edge of trigger signal acquires one image.
OnHighExpose Each rising edge of trigger signal acquires one image; the exposure time corresponds to the pulse width.
1.17.1.5.2.2 Timings
Note
To calculate the maximum frames per second (FPS_max) you will need the following formula (Expose mode: No overlap):
FPS_max = 1
--------------------------------------------------
t_trig + t_readout + t_exposure + t_trans + t_wait
1.17.1.5.2.3 Example: Frame rate as function of lines & exposure time Now, when we insert the values, using an
exposure time of, for example, 8000 us, 1200 lines and a 40 MHz pixel clock (Expose mode: No overlap):
FPS_max = 1
---------------------------------------------------------------------------------------
5.1 us + ((1200 * 48 us) + ((1248 - 1200) * 5.1 us) + 48 us) + 8000 us + 48 us + 158 us
= 0.0000151277 1/us
= 15.1
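The example above can be reproduced in a few lines of C++. This is a minimal sketch with hypothetical variable
names; the timing constants (in us) are exactly the ones inserted into the formula:

#include <cstdio>

int main( void )
{
    // 40 MHz pixel clock, 1200 active lines out of 1248 total, 8000 us exposure.
    const double t_trig     = 5.1;
    const double t_readout  = ( 1200 * 48.0 ) + ( ( 1248 - 1200 ) * 5.1 ) + 48.0;
    const double t_exposure = 8000.0;
    const double t_trans    = 48.0;
    const double t_wait     = 158.0;
    // Times are in us, so scale by 1e6 to get frames per second.
    const double fps_max = 1.0e6 / ( t_trig + t_readout + t_exposure + t_trans + t_wait );
    std::printf( "FPS_max = %.1f\n", fps_max ); // prints FPS_max = 15.1
    return 0;
}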
Note
The calculator returns the maximum frame rate supported by the sensor. Please keep in mind that whether this
frame rate can actually be transferred depends on the interface and the image format used.
See also
To find out how to achieve any defined frequency below or equal to the achievable maximum frequency, please
have a look at Achieve a defined image frequency (HRTC) (p. 169).
1.17.1.5.3 Reprogramming CCD Timing The CCD controller is reprogrammed whenever one of the following
changes: the exposure time, the capture window or the trigger mode. The time this takes is made up of:
1. The time needed to send data to the CCD controller, depending on what is changed:
– exposure: about 2..3 ms
– window: about 4..6 ms
– trigger mode: about 5..90 ms, varying with the old/new mode combination
2. The time to initialize (erase) the CCD chip after reprogramming; this is fixed at about 4.5 ms.
So, for example, reprogramming the capture window takes on average (about 5 ms programming plus the fixed
4.5 ms initialization):
t_regprog = 9.5 ms
• Number of effective pixels: 1600 (H) x 1200 (V), approx. 1.92 Mpixels
• Total number of pixels: 1688 (H) x 1248 (V), approx. 2.11 Mpixels
• Optical black:
1.17.1.5.4.1 Characteristics These zone definitions apply to both the color and gray scale version of the sensor.
1.17.1.5.5 CCD Signal Processing The CCD signal is processed by an analog front-end and digitized by a
12 bit analog-to-digital converter (ADC). The analog front-end contains a programmable gain amplifier which is
variable from 0 dB (gain=0) to 30 dB (gain=255).
The 8 most significant bits of the ADC are captured to the frame buffer. This gives the following transfer function
(based on the 8 bit digital code):
Digital_code [lsb] = ccd_signal [V] * 256 [lsb/V] * 10^(gain[dB]/20)
lsb: least significant bit (smallest digital code change)
1.17.2.1.1 Introduction The CMOS sensor module (MT9V034) incorporates the following features:
• programmable readout timing with free capture windows and partial scan
1.17.2.1.2 Details of operation The sensor uses a full frame shutter (ShutterMode = "FrameShutter"),
i.e. all pixels are reset at the same time and the exposure commences. The exposure ends when the pixel voltages
are sampled and their charges transferred to the storage nodes.
Furthermore, the sensor offers two different modes of operation:
1.17.2.1.2.1 Free running mode In free running mode, the sensor reaches its maximum frame rate. This is
done by overlapping erase, exposure and readout phase. The sensor timing in free running mode is fixed, so there
is no control when to start an acquisition. This mode is used with trigger mode Continuous.
To calculate the maximum frames per second (FPS_max) in free running mode you will need the following formula.
If the exposure time is less than or equal to the frame time, the readout limits the frame rate:
FPS_max = 1
----------------------
FrameTime
If the exposure time is greater than the frame time, the exposure limits the frame rate:
FPS_max = 1
----------------------
ExposureTime
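In other words, because erase, exposure and readout overlap in free running mode, whichever phase is longer
limits the frame rate. A minimal C++ sketch of this rule (hypothetical names, all times in us):

#include <algorithm>
#include <cstdio>

// Free running mode: exposure overlaps readout, so the longer phase dominates.
double fpsMaxFreeRunning( double frameTime_us, double exposureTime_us )
{
    return 1.0e6 / std::max( frameTime_us, exposureTime_us );
}

int main( void )
{
    // A 100 us exposure hides completely inside a 10000 us frame time...
    std::printf( "%.1f fps\n", fpsMaxFreeRunning( 10000.0, 100.0 ) );   // 100.0 fps
    // ...while a 25000 us exposure becomes the limiting factor.
    std::printf( "%.1f fps\n", fpsMaxFreeRunning( 10000.0, 25000.0 ) ); // 40.0 fps
    return 0;
}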
1.17.2.1.2.2 Snapshot mode In snapshot mode, the image acquisition process consists of several sequential
phases:
1.17.2.1.2.3 Trigger Snapshot mode starts with a trigger. This can be either a hardware or a software signal.
Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.
See also
• C: TCameraTriggerMode
• C++: mvIMPACT::acquire::TCameraTriggerMode
1.17.2.1.2.4 Erase, exposure and readout All pixels are light sensitive during the same period of time. The whole
pixel core is reset simultaneously, and after the exposure time all pixel values are sampled together on the storage
node inside each pixel. The pixel core is read out line-by-line after exposure.
Note
Exposure and readout are carried out serially, which means that no exposure is possible during readout.
To calculate the maximum frames per second (FPS_max) in snapshot mode you will need the following formula:
FPS_max = 1
-----------------------------------
FrameTime + ExposureTime
AOI PixelClock (MHz) Exposure Time (us) Maximal Frame Rate (fps) PixelFormat
Maximum 40 100 93.7 Mono8
W:608 x H:388 40 100 131.4 Mono8
W:492 x H:314 40 100 158.5 Mono8
W:398 x H:206 40 100 226.7 Mono8
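Because the snapshot phases run strictly one after another, frame time and exposure time simply add up. The
following C++ sketch (hypothetical names) also derives, as a cross-check, the frame time implied by the first table
row above (maximum AOI, 93.7 fps at 100 us exposure):

#include <cstdio>

// Snapshot mode: erase, exposure and readout run sequentially.
double fpsMaxSnapshot( double frameTime_us, double exposureTime_us )
{
    return 1.0e6 / ( frameTime_us + exposureTime_us );
}

int main( void )
{
    // First table row: 93.7 fps at 100 us exposure implies a frame time of
    // 1e6 / 93.7 - 100 ~= 10572 us.
    const double frameTime_us = 1.0e6 / 93.7 - 100.0;
    std::printf( "implied frame time: %.0f us\n", frameTime_us );
    std::printf( "FPS_max = %.1f\n", fpsMaxSnapshot( frameTime_us, 100.0 ) ); // 93.7
    return 0;
}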
1.17.2.1.4.1 Characteristics
1.17.2.2.1 Introduction The CMOS sensor module (MT9M001) incorporates the following features:
• rolling shutter
• programmable readout timing with free capture windows and partial scan
With the rolling shutter each line is exposed for the same duration, but starting at a slightly different point in time.
Note
With a rolling shutter, moving objects can appear sheared in the image.
1.17.2.2.2.1 Snapshot mode In snapshot mode, the image acquisition process consists of several sequential
phases:
1.17.2.2.2.2 Trigger Snapshot mode starts with a trigger. This can be either a hardware or a software signal.
Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.
OnFallingEdge Each falling edge of trigger signal acquires one image.
OnRisingEdge Each rising edge of trigger signal acquires one image.
OnHighExpose Each rising edge of trigger signal acquires one image; the exposure time corresponds to the pulse width.
OnLowExpose Each falling edge of trigger signal acquires one image; the exposure time corresponds to the pulse width.
OnAnyEdge Start the exposure of a frame when the trigger input level changes from high to low or from
low to high.
See also
• C: TCameraTriggerMode
• C++: mvIMPACT::acquire::TCameraTriggerMode
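As a rough illustration, a trigger mode can be selected through the C++ interface referenced above. The sketch
below assumes the mvIMPACT Acquire class and property names (CameraSettingsBlueFOX, triggerMode,
expose_us); please verify them against the API documentation. All error handling is omitted:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

int main( void )
{
    DeviceManager devMgr;
    Device* pDev = devMgr[0]; // first device found
    if( !pDev )
    {
        return 1;
    }
    pDev->open();
    // mvBlueFOX specific settings: trigger each frame on a rising edge
    // and expose for 8000 us.
    CameraSettingsBlueFOX settings( pDev );
    settings.triggerMode.write( ctmOnRisingEdge );
    settings.expose_us.write( 8000 );
    return 0;
}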
1.17.2.2.2.3 Erase, exposure and readout After the trigger pulse, the complete sensor array is erased. This
takes some time, so there is a fixed delay of about 285 us between the trigger pulse on digital input 0 and the start
of exposure of the first line.
The exact exposure start time of each line (except the first line) depends on the exposure time and the position
of the line. The exposure of a particular line N is finished when line N is ready for readout. Image data is read out
line-by-line and transferred to memory (see: http://www.matrix-vision.com/tl_files/mv11/Glossary/art_rolling_shutter_en.pdf).
Exposure time is adjustable by software and depends on the image width. To calculate the exposure step size you
will need the following formula:
LineDelay = 0
PixelClkPeriod = 1
--------
PixelClk
To calculate the maximum frames per second (FPS_max) in snapshot mode you will need the following formula:
FPS_max = 1
-----------------------------------
FrameTime + ExposureTime
1.17.2.2.4 Characteristics
1.17.2.3.1 Introduction The CMOS sensor module (MT9M021) incorporates the following features:
• programmable readout timing with free capture windows and partial scan
1.17.2.3.2 Details of operation The sensor uses a pipelined global snapshot shutter (ShutterMode =
"FrameShutter"), i.e. light exposure takes place on all pixels in parallel, although the subsequent readout is
sequential.
The sensor offers two different modes of operation:
1.17.2.3.2.1 Free running mode In free running mode, the sensor reaches its maximum frame rate. This is
done by overlapping erase, exposure and readout phase. The sensor timing in free running mode is fixed, so there
is no control when to start an acquisition. This mode is used with trigger mode Continuous.
To calculate the maximum frames per second (FPS_max) in free running mode you will need the following formula.
If the exposure time is less than or equal to the frame time, the readout limits the frame rate:
FPS_max = 1
----------------------
FrameTime
If the exposure time is greater than the frame time, the exposure limits the frame rate:
FPS_max = 1
----------------------
ExposureTime
1.17.2.3.2.2 Snapshot mode In snapshot mode, the image acquisition process consists of several sequential
phases:
1.17.2.3.2.3 Trigger Snapshot mode starts with a trigger. This can be either a hardware or a software signal.
Mode Description
Continuous Free running, no external trigger signal needed.
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.
See also
• C: TCameraTriggerMode
• C++: mvIMPACT::acquire::TCameraTriggerMode
1.17.2.3.2.4 Erase, exposure and readout All pixels are light sensitive during the same period of time. The whole
pixel core is reset simultaneously, and after the exposure time all pixel values are sampled together on the storage
node inside each pixel. The pixel core is read out line-by-line after exposure.
Note
Exposure and readout are carried out serially, which means that no exposure is possible during readout.
To calculate the maximum frames per second (FPS_max) in snapshot mode you will need the following formula:
FPS_max = 1
-----------------------------------
FrameTime + ExposureTime
AOI PixelClock (MHz) Exposure Time (us) Maximal Frame Rate (fps) PixelFormat
Maximum 40 100 24.6 Mono8
W:1036 x H:776 40 100 30.3 Mono8
W:838 x H:627 40 100 37.1 Mono8
W:678 x H:598 40 100 38.9 Mono8
W:550 x H:484 40 100 47.6 Mono8
1.17.2.3.4.1 Characteristics
1.17.2.4.1 Introduction The CMOS sensor module (MT9M024) incorporates the following features:
• high dynamic range (p. 154) of 115 dB (with the gray scale version)
• rolling shutter
• programmable readout timing with free capture windows and partial scan
With the rolling shutter each line is exposed for the same duration, but starting at a slightly different point in time.
Note
With a rolling shutter, moving objects can appear sheared in the image.
1.17.2.4.2.1 Free running mode In free running mode, the sensor reaches its maximum frame rate. This is
done by overlapping erase, exposure and readout phase. The sensor timing in free running mode is fixed, so there
is no control when to start an acquisition. This mode is used with trigger mode Continuous.
To calculate the maximum frames per second (FPS_max) in free running mode you will need the following formula.
If the exposure time is less than or equal to the frame time, the readout limits the frame rate:
FPS_max = 1
----------------------
FrameTime
If the exposure time is greater than the frame time, the exposure limits the frame rate:
FPS_max = 1
----------------------
ExposureTime
1.17.2.4.2.2 Snapshot mode In snapshot mode, the image acquisition process consists of several sequential
phases:
1.17.2.4.2.3 Trigger Snapshot mode starts with a trigger. This can be either a hardware or a software signal.
Mode Description
Continuous Free running, no external trigger signal needed.
OnLowLevel As long as trigger signal is Low camera acquires images with own timing.
OnHighLevel As long as trigger signal is High camera acquires images with own timing.
See also
• C: TCameraTriggerMode
• C++: mvIMPACT::acquire::TCameraTriggerMode
1.17.2.4.2.4 Erase, exposure and readout All pixels are light sensitive during the same period of time. The whole
pixel core is reset simultaneously, and after the exposure time all pixel values are sampled together on the storage
node inside each pixel. The pixel core is read out line-by-line after exposure.
Note
Exposure and readout are carried out serially, which means that no exposure is possible during readout.
To calculate the maximum frames per second (FPS_max) in snapshot mode you will need the following formula:
FPS_max = 1
-----------------------------------
FrameTime + ExposureTime
AOI PixelClock (MHz) Exposure Time (us) Maximal Frame Rate (fps) PixelFormat
Maximum 40 100 24.6 Mono8
W:1036 x H:776 40 100 30.3 Mono8
W:838 x H:627 40 100 37.1 Mono8
W:678 x H:598 40 100 38.9 Mono8
W:550 x H:484 40 100 47.6 Mono8
1.17.2.4.4.1 Characteristics
1.17.2.5.1 Introduction The CMOS sensor module (MT9P031) incorporates the following features:
• rolling shutter and global reset release shutter
• programmable readout timing with free capture windows and partial scan
With the rolling shutter each line is exposed for the same duration, but starting at a slightly different point in time.
Note
With a rolling shutter, moving objects can appear sheared in the image.
The global reset release shutter, which is only available in triggered operation, starts the exposure of all rows
simultaneously: the reset of each row is released at the same time. However, the lines are read out just as with
the rolling shutter: line by line.
Note
This means that the bottom lines of the sensor are exposed to light for longer! For this reason, this mode only
makes sense if there is no extraneous light and the flash duration is shorter than or equal to the exposure time.
1.17.2.5.2.1 Free running mode In free running mode, the sensor reaches its maximum frame rate. This is
done by overlapping erase, exposure and readout phase. The sensor timing in free running mode is fixed, so there
is no control when to start an acquisition. This mode is used with trigger mode Continuous.
To calculate the maximum frames per second (FPS_max) in free running mode you will need the following formula.
If the exposure time is less than or equal to the frame time, the readout limits the frame rate:
FPS_max = 1
----------------------
FrameTime
If the exposure time is greater than the frame time, the exposure limits the frame rate:
FPS_max = 1
----------------------
ExposureTime
1.17.2.5.2.2 Snapshot mode In snapshot mode, the image acquisition process consists of several sequential
phases:
1.17.2.5.2.3 Trigger Snapshot mode starts with a trigger. This can be either a hardware or a software signal.
Mode Description
Continuous Free running, no external trigger signal needed.
OnDemand Image acquisition triggered by command (software trigger).
OnLowLevel Start the exposure of a frame as long as the trigger input is below the trigger threshold.
OnHighLevel Start the exposure of a frame as long as the trigger input is above the trigger threshold.
OnHighExpose Each rising edge of trigger signal acquires one image; the exposure time corresponds to the pulse width.
See also
• C: TCameraTriggerMode
• C++: mvIMPACT::acquire::TCameraTriggerMode
1.17.2.5.2.4 Erase, exposure and readout All pixels are light sensitive during the same period of time. The whole
pixel core is reset simultaneously, and after the exposure time all pixel values are sampled together on the storage
node inside each pixel. The pixel core is read out line-by-line after exposure.
Note
Exposure and readout are carried out serially, which means that no exposure is possible during readout.
To calculate the maximum frames per second (FPS_max) in snapshot mode you will need the following formula:
FPS_max = 1
-----------------------------------
FrameTime + ExposureTime
1.17.2.5.2.5 Use Cases As mentioned before, "Global reset release" only makes sense if a flash is used which
is brighter than the ambient light. The settings in wxPropView (p. 69) will look like this:
In this case, DigOut0 carries a high signal for the duration of the exposure time (which is synchronized with the
GlobalResetRelease). This signal can trigger a flash light.
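A rough programmatic equivalent of this wxPropView setup is sketched below. The property and enumeration
names (flashMode, cfmDigout0) are taken from the mvIMPACT Acquire C++ interface referenced earlier but should
be verified against the API documentation; selecting the global reset release shutter itself is sensor specific and
is not shown here:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

int main( void )
{
    DeviceManager devMgr;
    Device* pDev = devMgr[0]; // first device found
    if( !pDev )
    {
        return 1;
    }
    pDev->open();
    CameraSettingsBlueFOX settings( pDev );
    // Drive digital output 0 high while the sensor is exposing, so the flash
    // is active exactly during the exposure started by the global reset release.
    settings.flashMode.write( cfmDigout0 );
    settings.expose_us.write( 5000 ); // exposure (and thus flash) duration in us
    return 0;
}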
AOI PixelClock (MHz) Exposure Time (us) Maximal Frame Rate (fps) PixelFormat
Maximum 40 100 5.9 Mono8
W:2098 x H:1574 40 100 8.4 Mono8
W:1696 x H:1272 40 100 12.0 Mono8
W:1376 x H:1032 40 100 16.9 Mono8
W:1104 x H:832 40 100 23.7 Mono8
W:800 x H:616 40 100 32 Mono8
1.17.2.5.4.1 Characteristics