VLT MAN ESO 14200 4038 - v0
Approved: O. Hainaut (Date, Signature)
Released: A. Kaufer (Date, Signature)
NACO data reduction cookbook VLT-MAN-ESO-14200-4038 ii
Change Record
Contents
1 Introduction
  1.1 Purpose
  1.2 Reference documents
  1.3 Abbreviations and acronyms
  1.4 Stylistic conventions
5 Zero Points
7 Coronagraphic Imaging
  7.1 Cosmetics and vignetted regions
  7.2 Dark subtraction
  7.3 Flat fielding
  7.4 Sky subtraction
  7.5 Image stacking
  7.6 Central star subtraction
  7.7 Coronography dedicated to astrometry
9 Spectroscopy
  9.1 Flat fielding
  9.2 Sky subtraction
  9.3 Slit curvature correction and wavelength calibration
  9.4 Combining 2d spectra
  9.5 Extraction
  9.6 Removing telluric lines
  9.7 Flux calibration
  9.8 How to use isaac spc jitter
1 Introduction
1.1 Purpose
The document is intended for astronomers who want to reduce NACO data. It describes
the various data formats delivered by NACO, observational scenarios and reduction proce-
dures. This document concentrates on the methodology rather than individual routines that
are available in either IRAF or MIDAS. The document describes the algorithms implemented
in the eclipse data reduction package. The ECLIPSE package is an integral part of the NACO
pipeline, which produces calibration products, quality control information and reduced data.
Although the pipeline produces reduced data, it is not meant to replace more general
reduction packages such as IRAF, MIDAS or IDL; it cannot replace interactive analysis
or make educated choices. Thus, the data that the NACO pipeline produces
should be considered as a way of quickly assessing the quality of the data (a quick look if you
like) or a first pass at the reduction of the data. Throughout this document we will list the
eclipse routines that are used to reduce NACO data and we will give a short description of
how they can be used. We will also list the shortcomings these routines have, so that users
can decide if they need to reduce their data more carefully. For completeness, we have also
included a description of the eclipse routines whose primary aim is to provide quality control
information. These routines are probably of little interest to astronomers. This document
does not describe the NACO instrument, its modes of operations, how to acquire data, the
offered templates, or the various issues attached to Phase II Proposal Preparation. The reader
is assumed to have read the NACO User’s Manual beforehand, and to have a basic knowledge
of infrared data reduction in imaging and spectroscopy. This is a living document: it
follows the evolution of the recipes, the implementation of new algorithms, enhancements,
new supported modes, etc. Everything described here should be understood as valid at the
date of writing.
Since this document was created, the pipeline has been updated and now
works under CPL (the Common Pipeline Language). The information contained in this
manual is nevertheless still valid and definitely useful, since the current implementation
is based on the same ’scientific data reduction recipes’. For details about the reduction
scripts (recipes) mentioned hereafter, we refer you to the pipeline user’s manual.
The NACO pipeline is an ESO development to provide reduced data to service mode users,
visitors at the telescope and operators who execute service mode observations. The pipeline
runs in two modes: on-line (for both service and visitor mode programs) and off-line (for
service mode programs). On-line data reduction happens in the control-room during the night
without any human intervention. The goal here is to provide enough information to assess the
quality of the acquired data in the shortest possible time. This quick feedback is often useful
e.g. to check the image quality in imaging modes. On-line reduced data are not automatically
distributed to observers. Off-line data reduction happens before the data are packed and
distributed. The main difference between the on-line and off-line versions is that calibration
frames, such as flats and darks, are used. In both cases, it should be understood that the
pipeline does not replace careful data analysis and the trained eye of the astronomer. The
pipeline implements recipes that are meant to work in most cases. Most astronomers take
the pipeline output as a first guess, then refine the points where the default method does
not give the best results for their purposes.
The NACO pipeline is entirely based on the eclipse library. You need to download the library to
compile the pipeline. The packages can be found at: http://www.eso.org/projects/aot/eclipse/
Installation instructions are provided with the package and on the Web site. The current ver-
sion (released Sept. 2005) is 5.0. You always need to download the eclipse-main package,
which contains all the libraries and documentation for main commands. After that, you only
need to download the package for the instrument or language binding you are interested in
(ADONIS, ISAAC, Python, Lua, CONICA, ...). However, some routines for ISAAC are also
usable for NACO, so it is better to install both. Please note that NACO is called CONICA...
After installation is complete, you should end up with a number of executables in eclipse/bin.
The main program you are looking for is called conicap; it contains the recipes for most of
the NACO data reduction modes, together with some basic documentation for the command
usage.
At the end of June 2007, the NACO pipeline (version 3.6.2) was publicly released. It can be
downloaded from http://www.eso.org/pipelines. This webpage contains detailed information
on what to download and how to install the pipeline, as well as a user’s manual describing
its functionalities in detail. We advise users to refer to it for the latest news. The present
document remains as an information basis for beginners.
In eclipse version 4.0 and later versions, most of the NACO pipeline is contained in a single
Unix command called conicap. Additional commands are jitter for imaging jitter, and
spjitter for spectroscopy. Once you have compiled conicap, you can launch it with
no argument to get a list of supported recipes. To get more information about the command
itself, try conicap man. To get more information about one recipe, try conicap man recipe,
where recipe is the name of the recipe you are interested in.
dfits dumps a FITS header on stdout. You can use it to dump the FITS headers of many
files and parse the output. Example: dfits *.fits | grep "TPL ID". Usually, you want to
get the values of a given list of FITS keywords in a list of FITS files.
fitsort reads the output from dfits, classifies the keywords into columns, and prints out in
a readable format the keyword values and file names. Example: dfits *.fits | fitsort NAXIS1
NAXIS2 BITPIX
fitsort also understands the shortFITS notation, where e.g. ESO TPL ID is shortened to
TPL.ID. A classification example could be (both commands are equivalent, since fitsort is
case-insensitive):
dfits *.fits | fitsort TPL.ID DPR.TYPE
dfits *.fits | fitsort tpl.id dpr.type
This helps make the output readable on a terminal, or by a spreadsheet program. See the
dfits manual page to get more information.
TPL.ID contains a unique identifier describing the template that was used to produce the
data. Frame selection in the pipeline is mostly based on this keyword value.
DPR.CATG Data Product category (SCIENCE, CALIB, ...).
DPR.TYPE Data Product type (OBJECT, SKY, ...).
DPR.TECH Data Product acquisition technique (e.g. IMAGE, SPECTRUM).
TPL.NEXP Number of scheduled exposures within the template.
TPL.EXPNO Exposure number within template.
A template may produce several different frame types. Frames are discriminated by the
value of the DPR keywords: DPR.CATG, DPR.TYPE, and DPR.TECH take different values
depending on the observed frame type.
The offsets sent to the telescope for jitter observations, both in imaging and spectroscopy,
are stored in 8 keywords. This applies to the AutoJitter, AutoJitterOffset, and GenericOffset
templates.
SEQ.CUMOFFSETX and SEQ.CUMOFFSETY for cumulative offsets in pixels.
SEQ.CUMOFFSETA and SEQ.CUMOFFSETD for cumulative offsets in arcseconds (alpha,
delta).
SEQ.RELOFFSETX and SEQ.RELOFFSETY for relative offsets in pixels.
SEQ.RELOFFSETA and SEQ.RELOFFSETD for relative offsets in arcseconds (alpha, delta).
Cumulative offsets are always relative to the first frame in the batch (TPL.EXPNO=1). Relative
offsets are always relative to the previous frame (TPL.EXPNO-1) in the batch. If the
same guide star is used before and after an offset, the offsetting accuracy is about 0.1
arcseconds. All recipes looking for offset information take this into account: they use the
header offsets as a first guess and refine them through cross-correlation techniques.
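As an illustration, the guess-then-refine step can be sketched with a brute-force cross-correlation. This is a toy numpy sketch, not the eclipse implementation; refine_offset and its search-window parameters are invented for this example.

```python
import numpy as np

def refine_offset(ref, img, guess=(0, 0), search=5):
    """Refine a first-guess shift of img relative to ref (e.g. derived
    from the SEQ.CUMOFFSET* keywords) by maximizing a simple
    cross-correlation score over a small search window.

    Returns the integer (dy, dx) shift to apply to img to align it
    with ref. A real implementation would use FFT-based correlation
    and sub-pixel interpolation.
    """
    best, best_score = guess, -np.inf
    for dy in range(guess[0] - search, guess[0] + search + 1):
        for dx in range(guess[1] - search, guess[1] + search + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            score = np.sum(ref * shifted)  # correlation at this trial shift
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

The header offset narrows the search window, so only a few pixels around the guess need to be explored.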
In AutoJitter mode, the jitter offsets are generated using a Poisson distribution. SEQ.POISSON
is an integer describing the Poisson homogeneity factor used for this distribution. See the
eclipse web page (http://www.eso.org/eclipse) for more information about this factor.
The jitter recipe from eclipse always expects offsets to be given in pixels, not in arcseconds.
If your headers do not mention the offsets in pixels, you must translate arcseconds to pixels
yourself and feed the information back to the jitter command. The input offsets are then
given in an ASCII file instead of being read from the FITS headers.
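A minimal conversion helper might look like the sketch below. Everything here is an assumption to be checked against your data: the pixel scale value, the plain "x y" file layout expected by jitter, and the sign conventions between (alpha, delta) and the detector axes (which this sketch deliberately ignores).

```python
# Hypothetical pixel scale; the S27 objective is about 0.027"/pixel, but
# check the FITS header or instrument manual for your actual setup.
PIXEL_SCALE = 0.027  # arcsec per pixel

def arcsec_to_pixels(offsets_arcsec, scale=PIXEL_SCALE):
    """Convert (alpha, delta) offsets in arcsec to pixel offsets.
    Sign conventions and field rotation are ignored in this sketch."""
    return [(a / scale, d / scale) for a, d in offsets_arcsec]

def write_offset_file(path, offsets_pix):
    """Write one 'x y' pair per line (assumed ASCII layout; check the
    jitter documentation for the exact format it expects)."""
    with open(path, "w") as f:
        for x, y in offsets_pix:
            f.write("%.3f %.3f\n" % (x, y))
```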
PRO.DID contains the version number of the dictionary used for all product keywords.
PRO.TYPE contains the type of the data product, one of TEMPORARY, PREPROCESSED,
REDUCED, or QCPARAM.
PRO.CATG is probably the most important product keyword, since it labels each frame with
a product ID unique to the recipe. It qualifies files with hopefully understandable product
labels, e.g. NACO IMG DARK AVG.
PRO.REC1.ID identifies the recipe that generated the file, with a unique name.
PRO.REC1.DRS.ID identifies the Data Reduction System that was used to produce the file.
PRO.DATANCOM specifies the number of raw frames that were combined to generate the
product. Its exact meaning depends on the recipe, see recipe documentation to learn what it
refers to.
Pipeline implementation: for each pixel on the detector, a curve is plotted of the median
plane value against the individual pixel value in this plane. This curve shows the pixel re-
sponse from which a robust linear regression provides a pixel gain. The image showing all
pixel gains (i.e. the flat-field) is normalized to have an average value of 1. By-products of this
routine are: a map of the zero-intercepts and an error map of the fit. Both can be used to
visually check the validity of the fit. A bad pixel map can also be produced by declaring all
pixels whose value after normalization is below 0.5 or above 2.0 as bad. The eclipse recipe is
called naco img twflat.
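The per-pixel regression can be sketched as follows. This is a simplified least-squares version for illustration only: the pipeline uses a robust fit, and twilight_flat is an invented name, not a pipeline routine.

```python
import numpy as np

def twilight_flat(cube):
    """Fit, for each pixel, a line of pixel value versus the median
    level of each plane; the slope is the pixel gain.

    cube : stack of twilight frames, shape (nframes, ny, nx).
    Returns (flat, intercepts, badpix): the gain map normalized to a
    mean of 1, the zero-intercept map, and a bad-pixel mask using the
    0.5 / 2.0 thresholds quoted in the text.
    """
    x = np.median(cube, axis=(1, 2))    # median level of each plane
    xm = x.mean()
    ym = cube.mean(axis=0)              # per-pixel mean value
    # Least-squares slope per pixel: cov(x, pixel) / var(x)
    cov = ((x - xm)[:, None, None] * (cube - ym)).mean(axis=0)
    gain = cov / ((x - xm) ** 2).mean()
    intercepts = ym - gain * xm         # zero-intercept map (by-product)
    flat = gain / gain.mean()           # normalize to an average of 1
    badpix = (flat < 0.5) | (flat > 2.0)
    return flat, intercepts, badpix
```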
For the specific case of sky flats taken with the L and M filters, the sky level is relatively
insensitive to twilight, so the method used for SW data cannot be applied. Instead, sky
flats are taken at 3 different airmasses (1.0, 1.5, 2.0) with the same exposure time. The
difference in airmass translates into a difference in flux level in the image. The flat can
then be created by subtracting one of the higher-airmass images from the image taken at
zenith. The resulting image is then normalised to 1.
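A minimal sketch of this differencing method (illustrative only; lm_sky_flat is not a pipeline routine):

```python
import numpy as np

def lm_sky_flat(zenith, high_airmass):
    """Build an L/M-band flat from two sky frames of equal exposure
    time: one taken at zenith (airmass 1.0) and one at higher airmass
    (1.5 or 2.0). The sky-level difference between them traces the
    per-pixel response; the result is normalised to a mean of 1."""
    diff = zenith - high_airmass
    return diff / np.mean(diff)
```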
Pipeline implementation: for each pair of images, the routine produces a normalized image
with median equal to 1 ADU. The eclipse recipe is called naco img lampflat. The resulting
flat field agrees with a sky flat to within 5%.
Pipeline implementation: the routine detects vertical arcs in a spectral image, models the
corresponding deformation (in the x direction only) and corrects it. Finally, a wavelength
calibration is made from the vertical arcs using a lamp spectrum catalog. For each pair of
images, the routine produces a table calibrated in wavelength. There is no specific recipe
to reduce NACO arcs, but one can use the ISAAC recipe (isaac spc arc); be careful that
the NACO arcs are orthogonal to the ISAAC ones.
5 Zero Points
Standard stars are observed every night in the J, H and Ks filters, S27 objective, visible
dichroic. For other combinations of filters, objective and dichroics, standards are observed as
required.
Standard stars are imaged over a grid of five positions, one just above the center of the array
and one in each quadrant. The recipe finds the standard (it assumes that the star in the first
image is near the center), computes the instrumental magnitude, and then uses the standard
star database to determine the ZP, which is uncorrected for extinction.
The standard star database contains about 1000 stars with magnitudes in the J, H, K, Ks,
L and M bands, although most stars only have magnitudes in a subset of these filters. Stars
are currently taken from the following catalogs: Arnica, ESO, Van der Bliek, LCO Palomar,
LCO Palomar NICMOS red stars, MSSSO Photometric, MSSSO Spectroscopic, SAAO Carter,
UKIRT extended, UKIRT fundamental.
The implemented recipe is the following. For each pair of consecutive images (image1,
image2):
1/ diff = image1 - image2
2/ Locate in diff the star around the expected pixel position (provided by the FITS header or
by an external offset list).
3/ Compute the background around the star, and the star flux.
4/ Store the flux result in an output table.
Apply steps 2 to 4 to the inverted difference image2 - image1. This yields 2(N-1) measurements for
N input frames. From this statistical set, the highest and lowest values are removed, then an
average and standard deviation are computed.
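The pairing and clipping logic might be sketched like this. It is toy code: the flux measurement is reduced to a sum of positive pixels, whereas the real recipe locates the star around the expected position and estimates the local background.

```python
import numpy as np

def pair_differences(frames):
    """Steps 1-4 above: for each consecutive pair, measure the star
    flux in image1 - image2 and in the inverted image2 - image1.
    Toy flux estimator: sum of positive pixels (no real photometry).
    Yields 2(N-1) measurements for N input frames."""
    measurements = []
    for im1, im2 in zip(frames[:-1], frames[1:]):
        for diff in (im1 - im2, im2 - im1):
            measurements.append(diff[diff > 0].sum())
    return measurements

def combine(measurements):
    """Drop the highest and lowest value, then average the rest."""
    vals = sorted(measurements)[1:-1]
    return float(np.mean(vals)), float(np.std(vals))
```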
The conversion formula from ADUs to magnitudes is:

zmag = mag + 2.5 * log10(flux) - 2.5 * log10(DIT)

where zmag is the computed zero-point, mag is the known magnitude of the standard star
in the observed band, flux is the measured flux in ADUs in the image, and DIT is the
detector integration time.
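In code, the formula is a direct transcription:

```python
import math

def zero_point(mag, flux, dit):
    """zmag = mag + 2.5*log10(flux) - 2.5*log10(DIT).
    No extinction or colour correction is applied, matching the
    pipeline calculation described in the text."""
    return mag + 2.5 * math.log10(flux) - 2.5 * math.log10(dit)
```

For instance, a standard star of known magnitude 12.0 measured at 10000 ADU with DIT = 10 s gives a zero point of 12 + 10 - 2.5 = 19.5.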
Note that neither the extinction nor the colour correction are included in the ZP pipeline
calculation. The average airmass is given in the output result file, together with individual
airmass values for each frame. The average extinction on Paranal for the J, H, Ks and NB M
filters, is available from the ISAAC web pages.
The QC parameters contain two filters: QC.FILTER.OBS, indicating the band in which the
observations were performed (e.g. Ks), and QC.FILTER.REF, the closest filter in the catalog
matching the observation (e.g. K). This correspondence completely ignores corrections due
to filter mismatch, and in some cases these corrections are substantial.
It is possible to use the naco img zpoint recipe to compute the zero point.
Remember that with an AO system, images suffer from anisoplanatism, the field dependence
of the PSF. It corresponds to the angular decorrelation of the wavefronts coming from two
angularly separated stars. This phenomenon affects the quality of the AO correction in the
direction of the target when the reference star is not on axis, but it can also affect other
parts of the field, depending on the conditions at the time of the observation.
to measure the sky. For deep exposures, the sky is computed from a subset of exposures
and there will be one sky frame for each object frame. For accurate photometry, it is very
important that the object frame is not included in the frames that are used to compute the
sky. This is a weakness of the current jitter recipe. For H and K band observations the sky
frame should be computed from frames that were taken 5-10 minutes either side of the object
frame. For J band observations, these numbers can be doubled. For conditions where the sky
background is varying rapidly (clouds or observations taken just after evening twilight) a more
rapid sampling of the sky is necessary. All sky frames contain objects, so one has to combine
them with suitable clipping. A robust method is to first scale frames to a common median
and then remove objects by rejecting the highest and lowest pixels. Rejecting the two highest
and two lowest pixels would produce even more robust results. The remaining pixels are then
averaged (the median can also be used, but it is a noisier statistic). The resulting sky frame
is then scaled (to match the median of the object frame) and subtracted.
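The scale-reject-average scheme described above can be sketched as follows. This is a minimal numpy version with invented function names; nrej=1 corresponds to rejecting the single highest and lowest pixel value per position.

```python
import numpy as np

def make_sky(frames, nrej=1):
    """Scale each frame to a common median, reject the nrej highest and
    nrej lowest values at each pixel, and average the rest."""
    meds = np.array([np.median(f) for f in frames])
    target = np.median(meds)
    stack = np.array([f * (target / m) for f, m in zip(frames, meds)])
    stack.sort(axis=0)                       # sort along the frame axis
    core = stack[nrej:len(frames) - nrej]    # clip the extremes per pixel
    return core.mean(axis=0)

def subtract_sky(obj, sky):
    """Scale the sky to match the object frame's median, then subtract."""
    return obj - sky * (np.median(obj) / np.median(sky))
```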
A more sophisticated approach is to do the sky subtraction in two steps. The first step reduces
the data as described above, produces the combined image and then locates all the objects. In
the second step, the data is rereduced with the knowledge of where the objects are. These ob-
jects are then excluded when the sky is estimated in the second pass. This is the approach used
by the XDIMSUM package in IRAF and for very deep imaging it is the recommended package.
Use the naco img jitter recipe to reduce your imaging data.
6.6 A few words about the Eclipse recipe jitter to reduce your imaging data
jitter reduces images taken in infrared jitter imaging mode. It makes a number of assumptions
on the input signal and has a list of several possible algorithms with associated parameters
for each reduction stage.
jitter has been developed to reduce jitter imaging data taken with infrared instruments,
e.g. IRAC2, SOFI, ISAAC or NACO. Although some features are specific to the latter two
instruments, it is reasonable to expect the same algorithms to work on similar data.
jitter is configured through an initialization file. The name of this file defaults to jitter.ini
but can be changed with the -f option. In the following documentation, this file is referred
to as the ini file.
The jitter data reduction process is divided into flat-fielding/dark subtraction/bad pixel cor-
rection, sky estimation and subtraction, frame offset detection, frame re-centering, frame
stacking to a single frame, and optional post-processing tasks. Some processes may be deac-
tivated depending on what you intend to do with the data. Describing all the algorithms in
this command is far beyond the scope of this manual page. Please refer to the pipeline user’s
manual for more details.
To set up the process, first generate a default jitter.ini file and then change parameters
according to your needs. This initialization file is self-documented.
7 Coronagraphic Imaging
7.1 Cosmetics and vignetted regions
Same as for imaging without chopping.
faint stellar and/or sub-stellar companions. A main step in the data reduction is the removal
of the scattered light remaining around the mask. This requires a reference single star
whose scattered light is scaled to that of the object of interest. The scaling factor is
generally estimated by azimuthally averaging the ratio of the star of interest to the
reference star. In the case of NACO, the instrument is mounted at the Nasmyth B focus of
an alt-azimuth telescope. For this reason the pupil rotates while the FoV is kept fixed. A
critical consequence is that the reference star must be chosen with great care so as to keep
the same pupil configuration (i.e. the same parallactic angle) for the science and reference
star observations. If not, the telescope aberrations will not be the same, causing
significant residuals when the re-centered and scaled reference star is subtracted from the
science image. No ESO tools are provided for this reduction/analysis step or for the
reference star selection.
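As a toy illustration of the azimuthal-averaging step: since no ESO tool exists for this, azimuthal_average and psf_scale are invented names, and a real analysis would mask the coronagraphic spot and restrict the radial range used for the fit.

```python
import numpy as np

def azimuthal_average(image, center, nbins=10):
    """Median of the image in concentric annuli around center,
    i.e. a crude radial profile."""
    y, x = np.indices(image.shape)
    r = np.hypot(y - center[0], x - center[1])
    edges = np.linspace(0.0, r.max(), nbins + 1)
    profile = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = (r >= lo) & (r < hi)
        profile.append(np.median(image[ring]) if ring.any() else np.nan)
    return np.array(profile)

def psf_scale(science, reference, center, nbins=10):
    """Scaling factor between the science star and the reference star:
    azimuthally average the ratio image, then summarise the radial
    profile by its median."""
    prof = azimuthal_average(science / reference, center, nbins)
    return float(np.nanmedian(prof))
```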
• Biller B. et al., 2006, Proc. IAU Colloquium #200, Aime C. & Vakili F., 571–576:
Suppressing speckle noise for simultaneous differential extrasolar planet imaging (SDI)
at the VLT and MMT
This paper gives a block diagram explaining a kind of pipeline reduction of the data.
9 Spectroscopy
The most basic way of taking IR spectra is to observe the target at two positions along the
slit. The sky is then removed by a process sometimes called double sky subtraction. The
basic steps of how to reduce this type of data are detailed hereafter. Note that there is no
chopping in spectroscopy. Prism data can be reduced like the other settings, but remember
that there are no arcs in that case. There is no specific recipe for NACO spectroscopic
data reduction in eclipse; one can use the ISAAC recipe isaac spc jitter instead.
One should remember that reducing spectroscopic data is difficult, and the pipeline may not
be accurate enough to produce final data ready for interpretation. So don’t use pipeline-reduced
data to send a paper to Nature claiming the detection of a QSO at z = 69 before you
are sure of what you actually did!
9.5 Extraction
For the classical ABBA technique, one should extract the spectrum without fitting the sky.
Fitting the sky only adds noise. For more complex cases, a low order fit to the sky may be
required.
10.1 Cosmetics
Using the Uncorrelated readout mode at extremely fast rates, many hot pixels appear. This is
a real concern. As usual, maps of bad pixels are produced by the naco img dark recipe.
- computing the offsets between frames. Initial estimates are taken from the FITS headers
and further refined through cross-correlation.
- registering the frames and stacking them into a single image.
oOo