ESM186L Lab Manual
Spring 2011 Syllabus
Tutorial 1: Getting Started with ENVI
Tutorial 2.1: Mosaicking Using ENVI
Tutorial 2.2: Image Georeferencing and Registration
Tutorial 3: Vector Overlay & GIS Analysis
Tutorial 4.1: N-D Visualizer
Tutorial 4.2: Data Reduction 1: Indexes
Tutorial 5: Data Reduction 2: Principal Components
Tutorial 6: Unsupervised and Supervised Classification
Tutorial 7: Change Detection
Tutorial 8: Map Composition in ENVI
Tutorial 9: Wildfire Exercise: Fire Detection Image Data
Tutorial 10.1: Spectral Mapping Methods
Tutorial 10.2: Spectral Mixture Analysis
Tutorial 11: LiDAR
ERS186L – Environmental Remote Sensing Lab
Spring 2011
Schedule
Date Lab* Topic
March 29 L1 Introduction to ENVI
March 31 L1, A1 Fieldwork Exercise, Introduction to ENVI, Image Exploration Assignment
April 5 A1 Fieldwork Exercise, Image Exploration Assignment
April 7 L2 Georegistration & Mosaicking
April 12 L3, A2 Vector Data, Georegistration Assignment
April 14 A2 Georegistration Assignment, cont.
April 19 L4 Data Reduction I: Indexes
April 21 L5 Data Reduction II: Principal Components
April 26 L6 Unsupervised and Supervised Classification
April 28 A3 Classification & Data Reduction Assignment
May 3 A3 Classification & Data Reduction Assignment, cont.
May 5 A3 Classification & Data Reduction Assignment, cont.
May 10 L7, A4 Change Detection Lab, Change Detection Assignment
May 12 L8, A4 Map Composition Lab, Change Detection Assignment, cont.
May 17 A4 Change Detection Assignment, cont.
May 19 A4 Change Detection Assignment, cont.
May 24 L9 Wildfire Exercise Lab
May 26 L10,A5 Spectral Mapping and Unmixing Lab & Assignment
May 31 L11, A5 LiDAR Lab Exercise, Spectral Mapping and Unmixing Assignment, cont.
June 2 A6 LiDAR Assignment
* LX = Lab exercise #X; AX = Lab Assignment #X.
Lab Exercises
You will complete 11 lab exercise tutorials in ERS186L. These tutorials have been designed to
familiarize you with common image processing tools and will provide you with the background
and skills necessary to complete your assignments. In addition, there will be two days of
fieldwork exercises to introduce you to the data collection techniques used in remote
sensing research.
Assignments
There will be 5 lab assignments in ERS186L, and each of these assignments will be worth 20% of
your grade for the quarter. If you are unable to complete an assignment during the time provided
in the lab sessions, check the computer lab's schedule and return to work on it when no classes
are meeting. All assignments should be submitted by 8am on the day they are due to the ERS186L
Smartsite page at smartsite.ucdavis.edu. Late work will be penalized.
All assignments must be submitted electronically in Microsoft Word format. Please remember that
your homework assignments must be clear, well written, and of professional quality (include your
name, titles/numbering, etc.). You will be required to include screenshots of your work in your
lab write-ups. These MUST be inserted into your Word document as JPEGs.
When submitting your assignments, please use the following file naming convention:
last name, first name, lab number, and the date submitted (e.g., Doe_John_Lab4_05242011).
Date Lecture # Lecture homework assigned or due Lab homework assigned or due
29-Mar Lecture 1 Homework 1 assigned
31-Mar Lecture 2 Homework 1 assigned
5-Apr Lecture 3
7-Apr Lecture 4 Homework 1 due; HW 2 assigned
12-Apr Lecture 5 Homework 1 due; HW 2 assigned
14-Apr Lecture 6
19-Apr Lecture 7 Homework 2 due
21-Apr 1st midterm
26-Apr Lecture 9 Homework 3 assigned Homework 2 due; HW 3 assigned
28-Apr Lecture 10
3-May Lecture 11
5-May Lecture 12 Homework 3 due; HW 4 assigned
10-May Lecture 13 Homework 3 due; HW 4 assigned
12-May Lecture 14
17-May Lecture 15 Homework 4 due
19-May 2nd midterm
24-May Lecture 17 Homework 5 assigned Homework 4 due
26-May Lecture 18 HW 5 assigned
31-May Lecture 19
2-Jun Lecture 20 Homework 5 due Lab 6 - in class exercise
3-Jun Homework 5 due
Final: 8-Jun, 1:00pm
ADAPTED FROM …
September, 2004 Edition
Copyright © Research Systems, Inc.
All Rights Reserved
ENVI Tutorials
0904ENV41TUT
Limitation of Warranty
Research Systems, Inc. makes no warranties, either express or implied, as to any matter not expressly set forth in the license agreement, including without
limitation the condition of the software, merchantability, or fitness for any particular purpose. Research Systems, Inc. shall not be liable for any direct,
consequential, or other damages suffered by the Licensee or any others resulting from use of the ENVI, IDL, and ION software packages or their
documentation.
Acknowledgments
ENVI® and IDL® are registered trademarks of Research Systems Inc., registered in the United States Patent and Trademark Office, for the computer
program described herein. ION™, ION Script™, ION Java™, Dancing Pixels, Pixel Purity Index, PPI, n-Dimensional Visualizer, Spectral Analyst, Spectral
Feature Fitting, SFF, Mixture-Tuned Matched Filtering, MTMF, 3D SurfaceView, Band Math, Spectral Math, ENVI Extension, Empirical Flat Field Optimal
Reflectance Transformation (EFFORT), Virtual Mosaic, and ENVI NITF Module are trademarks of Research Systems, Inc. Numerical Recipes™ is a trademark of
Numerical Recipes Software. Numerical Recipes routines are used by permission.
GRG2™ is a trademark of Windward Technologies, Inc. The GRG2 software for nonlinear optimization is used by permission. NCSA Hierarchical Data Format
(HDF) Software Library and Utilities
Copyright © 1988-1998 The Board of Trustees of the University of Illinois All rights reserved.
NCSA HDF5 (Hierarchical Data Format 5) Software Library and Utilities Copyright 1998, 1999, 2000, 2001, 2002 by the Board of Trustees of the University
of Illinois. All rights reserved. CDF Library
Copyright © 1999 National Space Science Data Center, NASA/Goddard Space Flight Center NetCDF Library Copyright © 1993-1996 University Corporation for
Atmospheric Research/Unidata HDF EOS Library Copyright © 1996 Hughes and Applied Research Corporation This software is based in part on the work of
the Independent JPEG Group. Portions of this software are copyrighted by INTERSOLV, Inc., 1991-1998.
Use of this software for providing LZW capability for any purpose is not authorized unless user first enters into a license agreement with Unisys under U.S.
Patent No. 4,558,302 and foreign counterparts. For information concerning licensing, please contact: Unisys Corporation, Welch Licensing Department -
C1SW19, Township Line & Union Meeting Roads, P.O. Box 500, Blue Bell, PA 19424. Portions of this computer program are copyright © 1995-1999
LizardTech, Inc. All rights reserved. MrSID is protected by U.S. Patent No. 5,710,835. Foreign Patents Pending.
This product includes software developed by the Apache Software Foundation (http://www.apache.org/)
Portions of ENVI were developed using Unisearch’s Kakadu software, for which RSI has a commercial license. Kakadu Software. Copyright © 2001. The
University of New South Wales, UNSW, Sydney NSW 2052, Australia, and Unisearch Ltd, Australia. MODTRAN is licensed from the United States of America
under U.S. Patent No. 5,315,513 and U.S. Patent No. 5,884,226. FLAASH is licensed from Spectral Sciences, Inc. under a U.S. Patent Pending. Other
trademarks and registered trademarks are the property of the respective trademark holders.
Tutorial 1: Getting Started with ENVI
The following topics are covered in this tutorial:
Overview of This Tutorial
Getting Started with ENVI
Images stored in BIP format have the first pixel for all bands, followed by the second pixel
for all bands, followed by the third pixel for all bands, etc., interleaved up to the number of
pixels. This format provides optimum performance for spectral (Z) access of the image data.
BIL format provides a compromise in performance between spatial and spectral processing and is
the recommended file format for most ENVI processing tasks. Images stored in BIL format have
the first line of the first band followed by the first line of the second band, followed by the first
line of the third band, interleaved up to the number of bands. Subsequent lines for each band are
interleaved in similar fashion.
ENVI also supports a variety of data types: byte, integer, unsigned integer, long integer, unsigned long
integer, floating-point, double-precision floating-point, complex, double-precision complex, 64-bit
integer, and unsigned 64-bit integer.
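To make the interleave schemes concrete, the sketch below computes the position of a given (line, band, sample) value within a raw data stream for each layout. This is an illustration, not part of the tutorial; the dimensions are made up, and BSQ (band sequential, ENVI's third interleave, where each band is stored as a complete image) is included for comparison even though it is not described above.

```python
# Element offsets into a raw image stream (header offset 0, one value per
# element) for the three ENVI interleaves. Dimensions are hypothetical.
samples, lines, bands = 4, 3, 2  # pixels per line, lines, spectral bands

def bil_offset(line, band, sample):
    """BIL: within each line, band 1's row, then band 2's row, etc."""
    return (line * bands + band) * samples + sample

def bip_offset(line, band, sample):
    """BIP: all band values for one pixel are stored adjacently."""
    return (line * samples + sample) * bands + band

def bsq_offset(line, band, sample):
    """BSQ: each band stored as a complete image, one after another."""
    return (band * lines + line) * samples + sample

# Spectral (Z) access at pixel (line 0, sample 0): adjacent in BIP,
# strided in BIL, and widely separated in BSQ.
print([bip_offset(0, b, 0) for b in range(bands)])  # [0, 1]
print([bil_offset(0, b, 0) for b in range(bands)])  # [0, 4]
print([bsq_offset(0, b, 0) for b in range(bands)])  # [0, 12]
```

The stride between a pixel's band values is what makes BIP fastest for spectral access and BIL a reasonable compromise between spatial and spectral processing.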
The separate text header file provides information to ENVI about the dimensions of the image, any
embedded header that may be present, the data format, and other pertinent information. The header file
is normally created (sometimes with your input) the first time a particular data file is read by ENVI.
You can view and edit it at a later time by selecting File → Edit ENVI Header from the ENVI main
menu bar, or by right-clicking on a file in the Available Bands List and selecting Edit Header. You
can also generate ENVI header files outside ENVI, using a text editor.
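For reference, a minimal hand-written header for a small multiband image might look like the following. All field values here are hypothetical (in ENVI's coding, `data type = 2` denotes 16-bit signed integer); consult the ENVI documentation for the full set of header fields.

```
ENVI
description = { Example header; all values are hypothetical }
samples = 512
lines   = 512
bands   = 6
header offset = 0
file type = ENVI Standard
data type = 2
interleave = bil
byte order = 0
```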
Tip: To load a single-band image, simply double-click on the band.
The File menu at the top of the Available Bands List dialog provides access
to file opening and closing, file information, and canceling the Available
Bands List. The Options menu provides a function to find the band closest to
a specific wavelength, shows the currently displayed bands, allows toggling
between full and shortened band names in the list, and provides the
capability to fold all of the bands in a single open image into just the image
name. Folding and unfolding the bands into single image names or lists of
bands can also be accomplished by clicking on the + (plus) or – (minus)
symbols to the left of the file name in the Available Bands List dialog.
The Scroll Window
The Scroll window displays the entire image at reduced resolution (subsampled). The subsampling factor
is listed in parentheses in the window Title Bar at the top of the image. The highlighted scroll control box
(red by default) indicates the area shown at full resolution in the Main Image window.
To reposition the portion of the image shown in the Main Image window, position the mouse
cursor inside the scroll control box, hold down the left mouse button, drag to the desired
location, and release. The Main Image window is updated automatically when the mouse
button is released.
You can also reposition the cursor anywhere within the Scroll window and click the left mouse
button to instantly move the selected Main Image window area. If you click, hold, and drag the
left mouse button in this fashion, the Image window will be updated as you drag (the speed
depends on your computer resources).
Finally, you can reposition the image by clicking in the Scroll window and pressing the arrow
keys on your keyboard. To move the image in larger increments, hold down the Shift key
while using the arrow keys.
Move the Zoom window by clicking in it and using the arrow keys on your keyboard. To
move several pixels at a time, hold down the Shift key while using the arrow keys.
Clicking and holding the left mouse button in the Zoom window while dragging causes the
Zoom window to pan within the Main Image display.
Clicking the left mouse button on the – (minus) graphic in the lower left corner of the Zoom
window zooms out by a factor of 1. Clicking the middle mouse button on this graphic zooms
out to half the current magnification. Clicking the right mouse button on the graphic returns
the zoom window to the default zoom factor.
Clicking the left mouse button on the + (plus) graphic in the lower left corner of the Zoom
window zooms in by a factor of 1. Clicking the middle mouse button doubles the current
magnification. Clicking the right mouse button on the graphic returns the Zoom window to the
default zoom factor.
Click the left mouse button on the right (third) graphics box in the lower left corner of the
Zoom window to toggle the Zoom window crosshair cursor. Click the middle mouse button on
this graphic to toggle the Main Image crosshair cursor. Click the right mouse button on this
graphic to toggle the Zoom control box in the Main Image window on or off.
Note: On Microsoft Windows systems with a two-button mouse, press the Ctrl key and click the left
mouse button simultaneously to emulate the middle mouse button.
The Zoom window can also have optional scroll bars, which provide an alternate method for
moving through the Zoom window. To add scroll bars to the Zoom window, right-click in the
Zoom window to display the shortcut menu and select Toggle → Zoom Scroll Bars.
Tip: To have scroll bars appear on the Zoom window by default, select File → Preferences from
the ENVI main menu. Select the Display Defaults tab, and set the Zoom Window Scroll
Bars toggle to Yes.
To dismiss the dialog, select File → Cancel from the menu at the top of the Cursor
Location/Value dialog.
Figure 1-3: The Cursor Location/Value Dialog
To hide/unhide the Cursor Location/Value dialog once it has been displayed, double-click using
the left mouse button in the Main Image window.
6. Select Tools → Profiles → X Profile from the Main Image display menu bar to display a window
plotting data values versus sample number for a selected line in the image (Figure 1-5).
7. Repeat the process, selecting Y Profile to display a plot of data value versus line number, and
selecting Z Profile to display a spectral plot (Figure 1-5).
Tip: You can also open a Z profile from the shortcut menu in any image window.
8. Select Window → Mouse Button Descriptions to view the descriptions of the mouse button
actions in the Profile displays.
9. Position the Profile plot windows so you can see all three at once. A red crosshair extends to the
top and bottom and to the sides of the Main Image window. The red lines indicate the line or
sample locations for the vertical or horizontal profiles.
10. Move the crosshair around the image (just as you move the zoom indicator box) to see how the
three image profile plots are updated to display data on the new location.
11. Close the profile plots by selecting File → Cancel from within each plot window.
Figure 1-5: The Horizontal (X) Profile (left) and Spectral (Z) Profile (right) Plots
Collecting Spectra
When collecting spectral profiles in your image, you can "drag and drop" spectra from the
Z Profile window into a new ENVI plot window.
1. In the Spectral Profile window, select
Options → Plot Key. Or you can right click
on the window and select plot key from the
shortcut menu. The plot key default name is
the x,y coordinates of the pixel you selected
(Figure 1-6).
2. To collect spectra in the Spectral Profile window, select Options → Collect Spectra.
Now navigate through your image. Each pixel you select will be plotted in the Spectral
Profile window.
Figure 1-6: The Spectral Profile Window
3. To edit plot parameters, select Edit→ Plot Parameters… You can edit the x- and y-axis scale,
names, and appearance of the plot.
4. To open a new ENVI plot window, select Options → New Window: Blank… to open a new
window without plots, or Options → New Window: With Plots…
5. Drag the plot key of a spectrum from the Spectral Profile window to the new blank ENVI Plot
Window.
6. To rename a spectrum, select Edit → Data Parameters. You can change the name and
appearance of the line in this dialog.
7. To save a spectral plot as a spectral library (or an image file), select File → Save Plot As →
Spectral Library, click Select All Items, click OK, and check Output Result to Memory. The spectral
library will show up in your Available Bands List and will be referred to later in this exercise.
Dynamic Overlays
ENVI's multiple Dynamic Overlay feature allows you to dynamically superimpose parts of one or
more linked images onto the other image. Dynamic overlays are turned on automatically when you
link two displays, and may appear in either the Main Image window or the Zoom window.
1. To start, click the left mouse button to see both displays completely overlaid on one another.
2. To create a smaller overlay area, position the mouse cursor anywhere in either Main Image
window (or either Zoom window) and hold down and drag with the middle mouse button.
Upon button release, the smaller overlay area is set and a small portion of the linked image
will be superimposed on the current image window.
3. Now click the left mouse button and drag the small overlay window around the image to see
the overlay effects.
4. You can resize the overlay area at any time by clicking and dragging the middle mouse button
until the overlay area is the desired size.
You can turn off the dynamic overlay by right clicking in the image window and choosing
Dynamic Overlay Off.
Editing ENVI Headers
Use Edit ENVI Header to edit existing header files. See Editing Header Files in ENVI Online Help for
steps to open the Header Info dialog and edit required header information. See the next section for details
about editing optional header information.
1. In the Header Info dialog, click Edit Attributes and select Bad Bands List. The Edit Bad Bands
List values dialog appears.
2. All bands in the list are highlighted by default as good. Deselect any desired bands in order to
designate them as bad bands.
3. To designate a range of bands, enter the beginning and ending band numbers in the fields next to
the Add Range button. Click Add Range.
4. Click OK.
1. In the Header Info dialog, click Edit Attributes and select Z Plot Information. The Edit Z Plot
Information dialog appears.
2. Enter the minimum range value in the left and maximum value in the Z Plot Range fields.
3. Enter the desired axes titles in the X Axis Title and Y Axis Title fields.
4. To specify the size (in pixels) of the box used to calculate an average spectrum, enter the
parameters into the Z Plot Average Box fields.
5. To specify an additional filename from which to extract Z profiles, click Default Additional Z
Profiles. The Default Additional Z Profiles dialog appears.
6. Click Add New File.
7. Select the desired filename and click OK. The filename appears in the list.
8. To remove a filename from the list, select the filename and click Remove Selected File.
9. Click OK, then click OK again.
6. Click OK. ENVI saves the stretch setting in the .hdr file. Whenever you display this image, this
stretch setting overrides the global default stretch given in the envi.cfg file.
Note: If the Default Stretch is set to None, ENVI uses the Default Stretch set in your ENVI preferences.
Figure 1-7: 2-D scatter plot of Landsat TM band 1 (x-axis) and band 4 (y-axis)
3. Move the cursor around the Main Image window, taking care not to click and drag the mouse
cursor inside the zoom box in the window. As you move the cursor, you will notice different
pixels are highlighted in the scatter plot, making the pixels appear to "dance." The dancing
pixels in the display are the highlighted 2-band pixel values found in a 10-pixel by 10-pixel
region centered on the cursor.
4. Define a region of interest (ROI) in the Scatter Plot window. To do this, click the left mouse
button several times in different areas in the Scatter Plot window. Doing this selects points to be
the vertices of a polygon. Click the right mouse button when you are done selecting vertices. This
closes the polygon. Pixels in the Main Image and Zoom windows whose values match the values
contained in the selected region of the scatter plot are highlighted.
5. To define a second ROI class, do one of the following:
Select Class → New from the Scatter Plot menu and repeat the actions described in step 4.
By default, the new ROI class is assigned the next unused color in the Items 1:20 color list.
OR
Select Class → Items #:# from the Scatter Plot menu, choose the color for your next class,
and repeat the actions described in step 4.
6. Select Options → Export All from the Scatter Plot window menu to export the regions of interest.
The ROI Tool dialog appears. The ROI Tool dialog can also be started from the Main Image
window by selecting Overlay → Region of Interest from the menu bar. By default, ENVI names
each exported region of interest Scatter Plot Export, followed by the color of the region and
the number of points it contains.
7. In the ROI Tool menu bar, select File → Cancel to dismiss the dialog. The region definition is
saved in memory for the duration of the ENVI session.
8. In the Scatter Plot window, close the scatter plot by selecting File → Cancel.
Classifying an Image
ENVI provides two types of unsupervised classification and several types of supervised classification.
The following example demonstrates one of the supervised classification methods.
1. From the ENVI main menu bar, select Classification → Supervised → Parallelepiped.
2. In the Classification Input File dialog, select Delta_LandsatTM_2008.img and click OK.
3. When the Parallelepiped Parameters dialog appears, select the regions of interest (ROIs) you just
created above, by clicking on the region name in the Select Classes from Regions list at the left of
the dialog.
4. Select Memory in the upper right corner of the dialog to output the result to memory.
5. Click on the small arrow button in the right-center of the Parallelepiped Parameters dialog to
toggle off Rule Image generation, and then click OK. The classification function then calculates
statistics, and a progress window appears during the classification. A new entry titled
Parallel (Delta_LandsatTM_2008.img) is added to the Available Bands List.
6. Select New Display from the Display #1 menu button in the Available Bands List.
7. In the Available Bands List, select the Gray Scale radio button, click on Parallel
(Delta_LandsatTM_2008.img), and select Load Band. A new display group is created,
containing the classified image.
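Conceptually, the parallelepiped classifier assigns a pixel to a class only if every band value falls inside that class's per-band min-max box, which ENVI derives from the training ROI statistics. The sketch below shows the decision rule only; it is not ENVI's implementation, and the class names, band ranges, and pixel values are all made up.

```python
# Simplified parallelepiped decision rule (illustration only, not ENVI's code).
# Each class is a list of per-band (low, high) limits built from ROI statistics.
classes = {
    "water":      [(10, 40), (5, 30), (2, 20)],     # hypothetical band ranges
    "vegetation": [(30, 80), (40, 90), (60, 140)],
}

def classify(pixel):
    """Return the first class whose box contains the pixel in every band."""
    for name, box in classes.items():
        if all(lo <= value <= hi for value, (lo, hi) in zip(pixel, box)):
            return name
    return "unclassified"

print(classify([20, 10, 5]))      # water
print(classify([50, 60, 100]))    # vegetation
print(classify([200, 200, 200]))  # unclassified
```

Pixels falling outside every box stay unclassified, which is why parallelepiped results often contain unlabeled pixels compared with methods such as minimum distance.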
Select Regions Of Interest
ENVI lets you define regions of interest (ROIs) in your images. ROIs are typically used to extract
statistics for classification, masking, and other operations.
1. From the Main Image Display menu, select Overlay → Region of Interest, or right-click in the
image to display the shortcut menu and select ROI Tool. The ROI Tool dialog for that display
will appear (Figure 1-8).
2. To draw a polygon that represents the region of interest:
Click the left mouse button in the Main Image window to
establish the first point of the ROI polygon.
Select further border points in sequence by clicking the
left button again, and close the polygon by clicking the
right mouse button. The middle mouse button deletes the
most recent point, or (if you have closed the polygon) the
entire polygon. Click the right mouse button a second
time to fix the polygon.
ROIs can also be defined in the Zoom and Scroll windows by selecting the appropriate window
radio button in the ROI Tool dialog.
Figure 1-8: ROI Tool
When you have finished defining an ROI, it is shown in the dialog table, with the name, region
color, number of pixels enclosed, and other ROI properties (Figure 1-8).
3. To define a new ROI, click the New Region button.
You can enter a name for the region and select the color and fill patterns for the region by
editing the values in the cells of the table.
Grow an ROI to its neighboring pixels within a specified threshold by selecting it and then
clicking the Grow button.
Pixelate polygon and polyline ROIs by selecting them in the table and then clicking the Pixel
button. Pixelated objects become a collection of editable points.
Delete ROIs by selecting them in the table and then clicking the Delete button.
The table also allows you to view and edit various ROI properties, such as name, color, and fill
pattern. The other options under the pull-down menus at the top of the ROI Tool dialog let you
perform various other tasks, such as calculate ROI means, save your ROI definitions, and load
saved definitions.
ROI definitions are retained in memory after the ROI Tool dialog is closed, unless you explicitly
delete them. ROIs are available to other ENVI functions even if they are not displayed.
Save and Output an Image
ENVI gives you several options for saving and outputting your filtered, annotated, gridded images. You
can save your work in ENVI's image file format, or in several popular graphics formats (including
PostScript) for printing or importing into other software packages. You can also output directly to a printer.
Saving your Image in ENVI Image Format
To save your work in ENVI's native format (as an RGB file):
1. From the Main Image window menu bar, select File → Save Image As → Image File. The
Output Display to Image File dialog appears.
2. Select 24-Bit color or 8-Bit grayscale output, graphics options (including annotation and
gridlines), and borders. If you have left your annotated and gridded color image on the display,
both the annotation and grid lines will be automatically listed in the graphics options. You can
also select other annotation files to be applied to the output image.
3. Select output to Memory or File using the desired radio button.
If output to File is selected, enter an output filename.
Note: If you select other graphics file formats from the Output File Type button which, by
default is set to ENVI, your choices will be slightly different.
4. Click OK to save the image.
Note: This process saves the current display values for the image, not the actual data values.
For the remainder of this course, we strongly recommend that you fill out the Excel
spreadsheets located in MyDocuments\ERS_186\Lab_Data\Documents, titled
Data_products_record_example.xls and Georeg_tracking_info_example.xls.
These spreadsheets have been partially filled out as examples of how you may
record your input and output files during or after completing each lab/tutorial.
Tutorial 2.1: Mosaicking Using ENVI
The following topics are covered in this tutorial:
Mosaicking in ENVI
Pixel-Based Mosaicking Example
Map-Based Mosaicking Example
Color Balancing During Mosaicking
Mosaicking in ENVI
Use mosaicking to overlay two or more images that have overlapping areas (typically georeferenced) or to
put together a variety of non-overlapping images and/or plots for presentation output (typically pixel-
based). For more information on pixel-based mosaicking, see ENVI Online help. You can mosaic
individual bands, entire files, and multi-resolution georeferenced images. You can use your mouse or
pixel- or map-based coordinates to place images in mosaics and you can apply a feathering technique to
blend image boundaries. You can save the mosaicked images as a virtual mosaic to avoid having to save
an additional copy of the data to a disk file. Mosaic templates can also be saved and restored for other
input files.
Virtual Mosaics
ENVI allows the use of the mosaic template file as a means of constructing a "Virtual Mosaic" (a
mosaic that can be displayed and used by ENVI without actually creating the mosaic output file).
Note: Feathering cannot be performed when creating a virtual mosaic in ENVI.
Note: Use the Cursor Location/Value indicator in an image display to determine what the background value is.
Create the Output Virtual Mosaic
1. In the Mosaic widget, select File → Save Template. In the Output Mosaic Template dialog, select
the appropriate output folder, and enter the output filename Delta_ortho_mos. Make sure the
"Open Template as Virtual Mosaic?" option is set to "Yes". Click OK to create the virtual mosaic.
2. Explore your mosaic and check for errors.
Tutorial 2.2: Image Georeferencing and Registration
The following topics are covered in this tutorial:
Georeferenced Images in ENVI
Georeferenced Data
Image-to-Image Registration
following sections provide examples of some of the map-based capabilities built into ENVI.
Consult the ENVI User's Guide for additional information.
Georeferenced Data
Open and Display HyMap and Reference Data
1. Open the orthophoto virtual mosaic file that will be used as the base or reference image,
Delta_ortho_mos and load it into display 1.
2. Open the HyMap file Delta_HyMap_2008.img from
My Documents\ERS_186\Lab_Data\Hyperspectral\ and load a true color image into display 2.
Reminder: To load this image in true color, in the Available Bands List, click on the
RGB Color radio button, then select bands 3, 2, and 1 consecutively (so R is band 3, G
is band 2, and B is band 1).
Cursor Location/Value
To open a dialog box that displays the location of the cursor
in the Main Image, Scroll, or Zoom windows, do the
following.
1. From the Main Image window menu bar, select Tools → Cursor Location/Value. You can also
open this dialog from the Main Image window menu bar by selecting
Window → Cursor Location/Value, or by right-clicking the image itself and choosing Cursor
Location/Value from the drop-down menu. Note that the coordinates are given in both pixels and
georeferenced coordinates for this georeferenced image.
Figure 2-2: The Cursor Location Dialog Displaying both the Pixel and Georeferenced Coordinates
2. Move the cursor around the image and examine the coordinates for specific locations and note the
relation between map coordinates and latitude/longitude (Figure 2-2).
3. Select File → Cancel to dismiss the dialog when finished.
Image-to-Image Registration
This section of the tutorial takes you step by step through an image-to-image registration. The
georeferenced virtual mosaic of the orthophotos, Delta_ortho_mos, will be used as the base image,
and the coarsely georeferenced HyMap Delta images will be warped to match the orthophoto mosaic.
Registration of multiple images can take several days, and you will often need to re-use ground
control points (GCPs) to register later image products. Therefore, in order to keep your work
organized, create a spreadsheet that records your work.
8. After you are done selecting the 20 pairs, click on individual GCPs in the Image to Image GCP List
dialog and examine the locations of the points in the two images, the actual and predicted coordinates,
and the RMS error. Resize the dialog to observe the total RMS Error listed in the Ground Control
Points Selection dialog. In the GCP list you can order points by error (Options→Order Points by
Error) to see which GCPs are contributing the most to your RMSE.
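The per-point errors and total RMS error in the GCP list measure the distance between where the warp model predicts each GCP should fall and where you actually placed it. A sketch of that computation, with made-up coordinates (this mirrors the standard RMSE definition; ENVI's internal bookkeeping may differ in detail):

```python
import math

# Hypothetical (actual, predicted) warp-image coordinates for three GCPs, in pixels.
gcps = [((100.0, 200.0), (100.6, 199.2)),
        ((350.0, 120.0), (349.5, 120.5)),
        ((80.0, 400.0), (81.0, 400.0))]

# Per-point error: Euclidean distance between actual and predicted position.
errors = [math.hypot(ax - px, ay - py) for (ax, ay), (px, py) in gcps]

# Total RMS error: square root of the mean squared per-point error.
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

for e in errors:
    print(round(e, 3))            # 1.0, 0.707, 1.0
print("total RMSE:", round(rmse, 3))
```

Because the errors are squared before averaging, one badly placed GCP inflates the total RMSE disproportionately, which is why ordering points by error is an effective way to find the worst offenders.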
Warp Images
Images can be warped from the displayed band, or all bands of multiband images can be warped at
once. We will warp only 3 bands to reduce computing demand.
1. In the Ground Control Points Selection dialog, select Options → Warp File (as Image to
Map...). Select Delta_Hymap_2008 as the warp image. Select a spectral subset with bands 14, 8,
and 2.
2. The Registration Parameters dialog appears (Figure 2-5). Use the Warp Method pulldown menu to
select RST, and the Resampling button menu to select Nearest Neighbor resampling.
3. Change the X and Y pixel size to 3 m. Press Enter after changing each pixel size to make sure
that the output X and Y sizes are adjusted.
4. Choose your output folder, enter the filename Delta_Hymap_2008_geo and click OK. The
warped image will be listed in the Available Bands List when the warp is completed.
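Nearest Neighbor resampling preserves the original data values: each output pixel is mapped back to a location in the source image and takes the single closest input pixel, with no interpolation (important when the values will later be used spectrally). The toy sketch below uses a scale-only mapping for clarity; the actual RST warp also includes rotation and translation.

```python
# Toy nearest-neighbor resampling for a scale-only warp (illustration only).
src = [[10, 20],
       [30, 40]]  # a 2x2 input band with hypothetical values

def resample_nn(src, out_rows, out_cols):
    in_rows, in_cols = len(src), len(src[0])
    out = []
    for r in range(out_rows):
        row = []
        for c in range(out_cols):
            # Map the center of the output pixel back to source coordinates
            # and take the nearest input pixel; no new values are created.
            sr = min(in_rows - 1, int((r + 0.5) * in_rows / out_rows))
            sc = min(in_cols - 1, int((c + 0.5) * in_cols / out_cols))
            row.append(src[sr][sc])
        out.append(row)
    return out

print(resample_nn(src, 4, 4))  # each input pixel becomes a 2x2 block
```

Methods such as bilinear or cubic convolution produce smoother output but blend neighboring values, which alters the spectra; nearest neighbor trades visual smoothness for radiometric fidelity.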
Tutorial 3.1: Vector Overlay & GIS Analysis
The following topics are covered in this tutorial:
Stand-alone vector GIS analysis, including input of shapefiles and associated DBF attribute files
Display in vector windows
Viewing and editing attribute data
Point and click spatial query
Vector Overlay and GIS Concepts
Capabilities
ENVI provides extensive vector overlay and GIS analysis capabilities. These include the
following:
Import support for industry-standard GIS file formats, including shapefiles and associated DBF
attribute files, ArcInfo interchange files (.e00, uncompressed), MapInfo vector files (.mif) and
attributes from associated .mid files, Microstation DGN vector files, DXF, and USGS DLG and
SDTS formats. ENVI uses an internal ENVI Vector Format (EVF) to maximize performance.
Vector and image/vector display groups provide a stand-alone vector plot window for displaying
vector data and composing vector maps. More importantly, ENVI provides vector overlays in
display groups (Image windows, Scroll windows, and Zoom windows).
You can generate world boundary vector layers, including low- and high-resolution political
boundaries, coastlines, and rivers, and USA state boundaries. You can display all of these in
vector windows or overlay them in image display groups.
You can perform heads-up (on-screen) digitizing in a vector or raster display group. Heads-up
digitizing provides an easy means of creating new vector layers by adding polygons, lines, or
points.
Image- and vector window-based vector editing allows you to modify individual polygons,
polylines, and points in vector layers using standard editing tools, taking full advantage of the
image backdrop provided by raster images in ENVI.
ROIs, specific image contour values, classification images, and other raster processing results can
be converted to vector format for use in GIS analysis.
Latitude/longitude and map coordinate information can be displayed and exported for image-to-
map registration. Attribute information can be displayed in real-time as each vector is selected.
ENVI supports linked vectors and attribute tables with point-and-click query for both vector and
raster displays. Click on a vector in the display group, and the corresponding vector and its
associated information is highlighted in the attribute table. Click on an attribute in the table, and
the display scrolls to and highlights the corresponding vector.
Scroll and pan through rows and columns of vector attribute data. Edit existing information or
replace attributes with constant values, or with data imported from ASCII files. Add or delete
attribute columns. Sort column information in either forward or reverse order. Export attribute
records as ASCII text.
Query vector database attributes directly to extract information that meets specific search criteria.
You can perform GIS analysis using simple mathematical functions and logical operators to
produce new information and layers. Results can either be output to memory or to a file for later
access.
You can set vector layer display characteristics and modify line types, fill types, colors, and
symbols. Use attributes to control labels and symbol sizes. Add custom vector symbols.
You can reproject vector data from any map projection to another.
You can convert vector data to raster ROIs for extraction of statistics, calculation of areas, and
use in ENVI's many raster analysis functions.
Generate maps using ENVI annotation in either vector or image windows. Set border widths and
background colors, and configure graphics colors. Automatically generate vector layer map keys.
Insert objects such as rectangles, ellipses, lines, arrows, symbols, text, and image insets. Select
and modify existing annotation objects. Save and restore annotation templates for specific map
compositions.
Create shapefiles and associated DBF attribute files and indices, or DXF files, from the internal
ENVI Vector Format (EVF). New vector layers generated using ENVI's image processing
capabilities, and changes made to vector layers in ENVI, can be exported to industry-standard
GIS formats.
Concepts
ENVI's vector overlay and GIS analysis functions generally follow the same paradigms as
ENVI's raster processing routines, including the same procedures for opening files and the use of
standard dialogs for output to memory or file. The following sections describe some of the basic
concepts.
Create World Boundaries
ENVI uses IDL map sets to generate low- and high-resolution world boundaries in EVF. Select
Options → Create World Boundaries from the Available Vectors List, or Vector → Create
World Boundaries from the ENVI main menu bar. You can also generate political boundaries,
coastlines, rivers, and USA state boundaries.
High-resolution format is available only if the IDL high-resolution maps are installed. If these are
not currently installed on your system, you can install them using the ENVI Installation CD,
modifying your installation to include the high-resolution maps.
Figure 3-2: The Vector Parameters Window and New Vector Window
ENVI Attributes
ENVI provides access to fully attributed GIS data in a shapefile DBF format. Attributes are listed
in an editable table, allowing point-and-click selection and editing.
Double-clicking in a particular cell selects that cell for editing. The table also supports full
column substitution using a uniform value and replacement with values from an ASCII file.
Options include adding and deleting individual columns and sorting data forward and backward
based on information within a column. You can save attributes to an ASCII file or to a DBF file.
Point-and-click spatial query is supported in ENVI attribute tables to help you locate key features
in images or in a vector window. Select specific records by clicking the label at the left edge of
the table for a specific row in the table. The corresponding vector is highlighted in a contrasting
color in the image display group or vector window. You can select multiple records, including
non-adjacent records, by holding down the <Ctrl> key as you click the additional row labels.
Open a Shapefile
1. From the ENVI main menu bar, select File → Open Vector File. A Select Vector Filenames
dialog appears.
2. Navigate to Lab_Data\vector. Click the Files of type drop-down list in the Select Vector
Filenames dialog, and select Shapefile (at the bottom right hand corner).
3. Select Bay_Delta_Preserves.shp. Click Open. The Import Vector Files Parameters
dialog appears. This dialog allows you to select file or memory output, enter an output
filename for the ENVI .evf file, and enter projection information if ENVI is unable to find the
projection information automatically.
4. Click the Output Results to file button. Accept the default values by clicking OK. A status
window indicates the number of vector vertices being read, and the Available Vectors List
appears when the data have been converted.
5. Select Bay_Delta_Preserves in the Available Vectors List and click Load Selected.
The Vector Window #1 dialog appears with regional Bay Delta preserves plotted. The default
mode (shown in the title bar or in the lower-right corner of the dialog) is Cursor Query.
Query Attributes
1. Ensure that Bay_Delta_Preserves.shp is still the active layer. From the Vector
Window #1 dialog menu bar, select Options → Select Active Layer → Layer:
Bay_Delta_Preserves.
2. From the Vector Window #1 dialog menu bar, select Edit → Query Attributes. A Layer
Attribute Query dialog appears.
3. Click Start. A Query Expression section appears at the top of the Layer Attribute Query
dialog.
4. Click the SITE drop-down list and select Site.
5. Click the > drop-down list and select ==.
6. In the String field, enter "Jasper Ridge Biological Preserve" (be sure to match this case).
7. Click the Memory radio button and click OK. ENVI creates a new vector layer and associated
DBF file based on the results of the query. The new layer appears in the Available Vectors
List and is loaded into Vector Window #1. Zoom to the selected vectors using the middle
mouse button to draw a box around Jasper Ridge Biological Preserve.
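An attribute query like the one above is just a filter over the records of the layer's DBF table. A minimal sketch of the idea (the records and field values here are hypothetical, for illustration only):

```python
# Records stand in for rows of a shapefile's DBF attribute table;
# a query is a (field, operator, value) expression, as in ENVI's
# Layer Attribute Query dialog.
import operator

OPS = {"==": operator.eq, "!=": operator.ne, ">": operator.gt, "<": operator.lt}

def query(records, field, op, value):
    """Return the subset of records matching the expression (case-sensitive)."""
    test = OPS[op]
    return [rec for rec in records if test(rec[field], value)]

# Hypothetical attribute table for illustration
preserves = [
    {"Site": "Jasper Ridge Biological Preserve", "Acres": 1198},
    {"Site": "Cosumnes River Preserve", "Acres": 50000},
]
subset = query(preserves, "Site", "==", "Jasper Ridge Biological Preserve")
```

As in ENVI, the result is a new subset layer containing only the matching records; string comparisons are case-sensitive, which is why the dialog asks you to match the case exactly.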
1. From the Display group menu bar, select Overlay → Vectors. A Vector Parameters dialog
appears.
2. From the Vector Parameters dialog menu bar, select File → Open Vector File. This menu
option is also accessible from the ENVI main menu bar. A Select Vector Filenames dialog
appears.
3. Click the Files of type: drop-down list and select Shapefile (at the bottom right corner).
Navigate to Lab_Data/vector and select both Bay_Delta_Preserves.shp and
2008_field_points.shp by holding down the shift key and selecting the files. Click
Open. An Import Vector Files Parameters dialog appears.
4. Select File or Memory output, and enter an output filename for the ENVI .evf file if you
selected File.
5. In the Native Projection list, select UTM (or ensure that it is already selected). Click Datum.
6. A Select Geographic Datum dialog appears. Select North America 1983 and click OK. Do
the same for the next vector file.
7. Select Memory output and click OK. A status window reports the number of vector vertices
being read. When the data have been converted, they are automatically loaded into the Vector
Parameters dialog and displayed in white on the image. The vectors.shp layer should be
highlighted in the Vector Parameters dialog.
8. Right click the Current Layer colored box to select a more visible color for the vector layer or
right-click on the box and select from the menu. Click Apply to update the vector color.
2. Select the Image, Scroll, or Zoom radio button in the Vector Parameters dialog to allow
vector tracking in the corresponding window. Select the "Off" radio button to allow normal
scrolling in the Scroll and Main windows and zooming in the Zoom window. Try different
zoom factors in the Zoom window to assess the accuracy of the vectors. You can only view
attribute information for the vector file highlighted in the Vector Parameters dialog.
3. Ensure that you are in Cursor Query mode by selecting Mode from the Vector Parameter
dialog menu bar.
4. From the Vector Parameters dialog menu bar, select Edit → View/Edit/Query
Attributes. A Layer Attributes table appears. Select random records by clicking the
numbered columns to highlight specific polygons on the image. You may want to change
the Current Highlight color in the Vector Parameters dialog to something that is more
visible in your display group.
12. Highlight the layer in the Vector Parameters dialog, and click the row labels in the
Layer Attributes table. The corresponding polygon is highlighted in the Image window.
13. From the Layer Attributes dialog menu bar, select File → Cancel. When you are
prompted to save the attribute table, click No.
To finish this section, select Window → Available Vectors List from the ENVI main menu bar
to display the Available Vectors List. Delete any new layers you have created by selecting them
in the Available Vectors List and clicking Remove Selected. Do not remove the
Bay_Delta_Preserves.shp or 2008_field_points.shp layer.
Query Operations
1. From the Vector Parameters dialog menu bar, select Mode → Cursor Query.
2. In the Vector Parameters dialog, highlight 2008_field_points.shp. Select
Edit → View/Edit/Query Attributes. A Layer Attributes table appears.
3. Examine the land_cover column and note the different land cover classes, including
several types of vegetation, soil, water, and non-photosynthetic vegetation (npv).
Close the attribute table by selecting File → Cancel.
4. From the Vector Parameters dialog menu bar, select Edit → Query Attributes. A
Layer Attribute Query dialog appears.
5. In the Query Layer Name field, check that field_points is entered. Click Start.
6. In the Query Expression section that appears at the top of the Vector Parameters
dialog, click the drop-down list and select land_cover.
7. Click the ID drop-down list and select land_cover. Then click the > drop-down list
and select ==.
8. In the String field, type "water". (Be sure to match the case in the attribute table.)
9. Select the Memory radio button and click OK. The selected layer (called a subset)
generated by the query appears in the Vector Parameters dialog.
10. In the Vector Parameters dialog, select the new subset[Layer: 2008_field_points.shp]
layer and select Edit → Edit Layer Properties from the menu bar to change layer
parameters. An Edit Vector Layers dialog appears.
11. Click the Point Symbol drop-down list and select Flag. Click OK. The water field
points as flags are highlighted as a new layer.
12. To examine the attributes for this layer, select subset [Layer: 2008_field_points.shp] in
the Vector Parameters dialog, and select Edit → View/Edit/Query Attributes from
the menu bar. A Layer Attributes table appears. Examine the query results.
13. Close the Layer Attributes table and repeat the query for the "levee_herbaceous" land
cover, highlighting it in a different color or symbol.
14. Try other queries on combinations of attributes by choosing one of the logical
operators in the Layer Attribute Query dialog.
Load Predefined ROIs
1. From the Display group menu bar, select Overlay → Region of Interest. An ROI
Tool dialog appears.
2. Your ROIs from the above exercise should reload. If not, from the ROI Tool dialog
menu bar, select File → Restore ROIs.
3. Navigate to your saved ROI file. Click Open. An ENVI Message dialog reports what
regions have been restored. Click OK. The predefined ROI is loaded into the ROI
Tool dialog and plotted on the image.
You can now use these polygons with query operations and GIS analysis with other vector data,
or you can export them to shapefiles by selecting File → Export Active Layer to Shapefile from
the Vector Window Parameters dialog.
Tutorial 4.1: The n-D Visualizer
The following topics are covered in this tutorial:
Exploration of feature space and land cover classes
Multispectral data
1. Start ENVI and open the image file Delta_LandsatTM_2008.img. Load the image
file to a true-color RGB display.
2. Overlay the Delta_classes_2008.roi regions of interest file on the image.
3. In the ROI Tool dialog, select File → Export ROIs to n-D Visualizer. Select
Delta_LandsatTM_2008.img.
4. In the n-D Visualizer Input ROIs dialog, click Select All Items and click OK. The n-D
Visualizer and n-D Controls dialogs appear (Figure 4-1).
Clicking on an individual band number in the n-D Controls dialog turns the band
number white and displays the corresponding band pixel data in the n-D scatter plot.
You must select at least two bands to view a scatter plot.
Clicking the same band number again turns it black and turns off the band pixel data
in the n-D scatter plot.
Selecting two bands in the n-D Controls dialog produces a 2-D scatter plot; selecting
three bands produces a 3-D scatter plot, and so on. You can select any combination of
bands at once.
Figure 4-1: n-D Visualizer (left) and n-D Controls dialog (right)
Selecting Dimensions and Rotating Data
Rotate data points by stepping between random projection views. You can control the speed and
stop the rotation at any time. You can move forward and backward step-by-step through the
projection views, which allows you to step back to a desired projection view after passing it.
1. In the n-D Controls dialog, click the band numbers (thus the number of dimensions) you want
to project in the n-D Visualizer. If you select only two dimensions, rotation is not possible. If
you select 3-D, you have the option of driving the axes, or initiating automatic rotation. If you
select more than 3-D, only automatic random rotation is available.
2. Select from the following options:
To drive the axes, select Options → 3D: Drive Axes from the n-D Controls
menu bar. Click and drag in the n-D Visualizer to manually spin the axes of the
3D scatter plot.
To display the axes themselves, select Options → Show Axes from the n-D
Controls menu bar.
To start or stop rotation, click Start or Stop in the n-D Controls dialog.
To control the rotation speed, enter a Speed value in the n-D Controls dialog.
Higher values cause faster rotation with fewer steps between views.
To move step-by-step through the projection views, click < to go backward and >
to go forward.
To display a new random projection view, click New in the n-D Controls dialog.
From the n-D Controls menu bar, select Options → Class Controls.
All of the defined classes appear in the dialog. The white class contains all of the unclustered or
unassigned points. The number of points in each class is shown in the fields next to the colored
squares.
To turn a class off in the n-D Visualizer, de-select the On check box for that class in the n-D
Class Controls dialog. Click again to turn it back on.
To turn all but one of the classes off in the n-D Visualizer, double-click the colored box at the
bottom of the n-D Class Controls dialog representing the class that you want to remain displayed.
Double-click again to turn the other classes back on.
To designate a class as the active class, click once on the colored square (at the bottom of the n-D
Class Controls dialog) corresponding to that class.
The color appears next to the Active Class label in the n-D Class Controls dialog, and any
functions you execute from the n-D Class Controls dialog affect only that class.
You may designate a class as the active class even though it is not enabled in the n-D Visualizer.
1. Click the Stats, Mean, or Plot button on the n-D Class Controls dialog. The Input File
Associated with n-D Data dialog appears.
o Stats: Display the mean, minimum, maximum, and standard deviation spectra of
the current class in one plot. These should be derived from the original
reflectance or radiance data file.
o Mean: Display the mean spectrum of the current class alone. This should be
derived from the original reflectance or radiance data file.
o Plot: Display the spectrum of each pixel in the class together in one plot. This
should be derived from the original reflectance or radiance data file.
2. Select the input file that you want to calculate the spectra from.
If you select a file with different spatial dimensions than the file you used as input into
the n-D visualizer, enter the x and y offset values for the n-D subset when prompted.
Note: If you select Plot for a class that contains hundreds of points, the spectra for all the points
will be plotted and the plot may be unreadable.
Clearing Classes
To remove all points from a class, click Clear on the n-D Class Controls Options dialog, or right-
click in the n-D Visualizer and select Clear Class or Clear All.
To include the statistics from a class when calculating the projection used to collapse the data,
select the Clp check box next to that class name in the n-D Class Controls dialog.
If the data are in a collapsed state, they will be recollapsed using the selected classes when you
select any of the Clp check boxes.
Collapsing Classes
You can collapse the classes by means or by variance to make class definition easier when the
dimensionality of a dataset is higher than four or five. With more than four or five dimensions,
interactively identifying and defining many classes becomes difficult. Both methods iteratively
collapse the data cloud based on the defined classes.
To collapse the data, calculate a projection (based either on class means or covariance) to
minimize or hide the space spanned by the pre-defined classes and to maximize or enhance the
remaining variation in the dataset. The data are subjected to this special projection and replace the
original data in the n-D Visualizer.
Additionally, an eigenvalue plot displays the residual spectral dimension of the collapsed data.
The collapsed classes should form a tight cluster so you can more readily examine the remaining
pixels. The dimensionality of the data, shown by the eigenvalue plot, should decrease with each
collapse.
1. From the n-D Controls menu bar, select Options → Collapse Classes by Means or
Collapse Classes by Variance (see the descriptions in the following sections).
An eigenvalue plot displays, showing the remaining dimensionality of the data and
suggesting the number of remaining classes to define. The n-D Selected Bands widget
changes color to red to indicate that collapsed data are displayed in the n-D Visualizer.
Collapse Classes by Means
You must define at least two classes before using this collapsing method. The space spanned by
the spectral mean of each class is derived through a modified Gram-Schmidt process. The
complementary, or null, space is also calculated. The dataset is projected onto the null space, and
the means of all classes are forced to have the same location in the scatter plot. For example, if
you have identified two classes in the data cloud and you collapse the classes by their mean
values, ENVI arranges the data cloud so that the two means of the identified classes appear on top
of each other in one place. As the scatter plot rotates, ENVI only uses the orientations where
these two corners appear to be on top of each other.
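The mean-collapsing projection described above can be sketched in a few lines of NumPy (an illustration of the idea, not ENVI's implementation; here QR factorization plays the role of the modified Gram-Schmidt process):

```python
import numpy as np

def collapse_by_means(data, class_means):
    """Project spectra onto the null space of the class-mean differences.

    data: (n_pixels, n_bands); class_means: (n_classes, n_bands).
    After projection, all class means land on the same point, hiding
    the space they span and emphasizing the remaining variation.
    """
    means = np.asarray(class_means, dtype=float)
    # Differences between class means span the subspace to hide
    diffs = means[1:] - means[0]
    # Orthonormal basis for that subspace via QR (a Gram-Schmidt process)
    q, _ = np.linalg.qr(diffs.T)              # (n_bands, n_classes-1)
    # Null-space projector: I - Q Q^T
    proj = np.eye(means.shape[1]) - q @ q.T
    return (np.asarray(data, dtype=float) - means[0]) @ proj.T

# Two classes in 3 bands: after collapsing, both class means sit at the origin
m1, m2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
collapsed = collapse_by_means(np.vstack([m1, m2]), [m1, m2])
```

Any pixel with variation outside the span of the class means keeps that variation after the projection, which is exactly what makes the remaining corners of the data cloud easier to see.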
Collapse Classes by Variance
With this method, ENVI calculates the band-by-band covariance matrix of the classified pixels
(lumped together regardless of class), along with eigenvectors and eigenvalues. A standard
principal components transformation is performed, packing the remaining unexplained variance
into the low-numbered bands of the collapsed data. At each iterative collapsing, this process is
repeated using all of the defined classes. The eigenvalue plot shows the dimensionality of the
transformed data, suggesting the number of remaining classes to define.
The full dataset is projected onto the eigenvectors of the classified pixels. Each of these projected
bands is divided by the square root of the associated eigenvalue. This transforms the classified
data into a space where they have zero covariance and unit standard deviation.
You should have at least nb * nb/2 pixels (where nb is the number of bands in the dataset)
classified so that ENVI can calculate the nb*nb covariance matrix.
ENVI calculates a whitening transform from the covariance matrix of the classified pixels, and it
applies the transform to all of the pixels. Whitening collapses the colored pixels into a fuzzy ball
in the center of the scatter plot, thereby hiding any corners they may form. If any of the
unclassified pixels contain mixtures of the endmembers included among the classified pixels,
those unclassified pixels also collapse to the center of the data cloud. Any unclassified pixels that
do not contain mixtures of endmembers defined so far will stick out of the data cloud much better
after class collapsing, making them easier to distinguish.
Collapsing by variance is often used for partial unmixing work. For example, if you are trying to
distinguish very similar (but distinct) endmembers, you can put all of the other pixels of the data
cloud into one class and collapse this class by variance. The subtle distinctions between the
unclassified pixels are greatly enhanced in the resulting scatter plot.
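The whitening step can be sketched as follows (a NumPy illustration of the idea, not ENVI's code): build the transform from the covariance of the classified pixels, then apply it to any data you like.

```python
import numpy as np

def whitening_transform(classified):
    """Whitening matrix from the covariance of the classified pixels.

    Projects onto the eigenvectors of the covariance matrix and divides
    each projected band by the square root of its eigenvalue, so the
    classified data end up with zero covariance and unit variance.
    """
    x = np.asarray(classified, dtype=float)
    mean = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    w = evecs / np.sqrt(evals)          # scale each eigenvector column
    return mean, w

def whiten(data, mean, w):
    return (np.asarray(data, dtype=float) - mean) @ w

rng = np.random.default_rng(0)
# Correlated 2-band toy data standing in for the classified pixels
raw = rng.normal(size=(500, 2)) @ np.array([[2.0, 1.0], [0.0, 1.0]])
mean, w = whitening_transform(raw)
white = whiten(raw, mean, w)
# Covariance of the whitened data is (numerically) the identity matrix
```

Applying the same `mean` and `w` to the full dataset collapses any pixel that mixes the classified endmembers toward the center, while pixels with unexplained variation stick out.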
UnCollapsing Classes
To uncollapse the data and return to the original dataset, select Options → UnCollapse from the
n-D Controls menu bar.
All defined classes are shown in the n-D Visualizer, and the band numbers return to a white color
in the n-D Controls menu bar.
Select Options from the n-D Controls menu bar to access the n-D Class Controls dialog, to
annotate the n-D Visualizer, to start a Z Profile window, to import, delete, and edit library
spectra, to collapse classes, to clear classes, to export classes to ROIs, to calculate mean spectra,
and to turn the axes graphics on or off.
Opening the Class Controls Dialog
To access the n-D Class Controls dialog, select Options → Class Controls from the n-D
Controls menu bar. For details, see Interacting with Classes.
Adding Annotation
To add an annotation to the n-D Visualizer window, select Options → Annotate Plot from the n-
D Controls menu bar. See Annotating Images and Plots for further details. You cannot add
borders to the n-D Visualizer.
Plotting Z Profiles
To open a plot window containing the spectrum of a point selected in the n-D Visualizer:
1. Select Options → Z Profile from the n-D Controls menu bar. The Input File Associated
with n-D Data dialog appears.
2. Select the data file associated with the n-D data. Typically, this file is the reflectance or
original data. If you select an input file with different spatial dimensions than the file
used for input into the n-D Visualizer, you will be prompted to enter the x and y offsets
that point to the n-D subset.
When the Z Profile plot window is open, the selected file is automatically used to
calculate the mean spectra when you select Options → Mean Class or Mean All from
the n-D Controls menu bar.
Saving Plots
To save the n-D Visualizer plot, select File → Save Plot As → PostScript or Image from the n-D Controls menu bar.
To print the n-D Visualizer window, select File → Print (see Printing in ENVI for details).
Saving States
To save the n-D Visualizer state, select File → Save State from the n-D Controls menu bar and
enter an output filename with the extension .ndv for consistency.
To restore a previously saved state, select File → Restore State and select the appropriate file.
You can also restore a previously saved state by selecting Spectral → n-Dimensional
Visualizer → Visualize with Previously Saved Data from the ENVI main menu bar.
Tutorial 4.2: Data Reduction 1 - Indexes
The following topics are covered in this tutorial:
Band-Math for Calculating Narrow-band Indexes
Continuum Removal
Data Reduction
Because of the enormous volume of data contained in a hyperspectral data set, data reduction
techniques are an important aspect of hyperspectral data analysis. Reducing the volume of
data, while maintaining the information content, is the goal of the data reduction techniques
covered in this section. The images created by the data reduction techniques can be used as
inputs to classification.
Narrow Band Indexes
Band Math
Here you will calculate vegetation indexes, covariance statistics, and correlation matrices with
ENVI's Band Math function.
The first vegetation index we will calculate, the Photochemical Reflectance Index (PRI), is a
measure of photosynthetic efficiency. The formula for PRI is
PRI = (R531 - R570) / (R531 + R570)
where R531 and R570 are the reflectance values at 531 nm and 570 nm, respectively. Since
Hymap does not sample at exactly these wavelengths, we will calculate PRI using the bands
closest to 531 and 570 nm.
SR, Simple Ratio: float(b1)/float(b2)
CAI, Cellulose Absorption Index: 0.5*float((b1+b2) - b3)
NDNI, Normalized Difference Nitrogen Index:
(alog10(float(b1)/float(b2)))/(alog10(1/(float(b1)*float(b2))))
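The same band-math expressions translate directly to NumPy, where each band is an array and the arithmetic applies per pixel (a sketch for illustration: np.log10 plays the role of IDL's alog10, and the CAI line mirrors the band-math string above as written):

```python
import numpy as np

def pri(r531, r570):
    """Photochemical Reflectance Index: (R531 - R570) / (R531 + R570)."""
    return (r531 - r570) / (r531 + r570)

def cai(b1, b2, b3):
    """Cellulose Absorption Index, mirroring the band-math string above."""
    return 0.5 * ((b1 + b2) - b3)

def ndni(b1, b2):
    """Normalized Difference Nitrogen Index, per the band-math string above."""
    return np.log10(b1 / b2) / np.log10(1.0 / (b1 * b2))

# Tiny 1x2 "images": the expressions operate on whole bands at once
r531 = np.array([[0.06, 0.05]])
r570 = np.array([[0.04, 0.05]])
pri_img = pri(r531, r570)
```

The float() casts in the Band Math strings guard against integer division on integer-typed bands; NumPy float arrays make that explicit cast unnecessary.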
To take the sum of a set of bands, a shortcut is to go to Basic Tools → Statistics → Sum
Data Bands, and choose the bands you wish to sum as a spectral subset.
This is just a sample of the many physiological indexes that have been developed to estimate
a wide variety of properties, including pigment contents, ratios between pigments, foliar
water content, and foliar dry matter content.
Table 4-1: Physiological indexes used in vegetation mapping (continued)
PI2, Pigment Index 2: R695/R760 - plant stress status - Zarco-Tejada (1998)
Water indexes
NDWI, Normalized Difference Water Index: (R860 - R1240)/(R860 + R1240) - leaf water
content - Gao (1996)
3. Choose a file path to My Documents\ERS_186\Lab_Data\Lab_Products.
4. Name the file Delta_Hymap_2008_indexstack.img and click Open.
5. Click OK.
6. In ENVI's main menu, select File → Edit ENVI Header.
7. Select Delta_Hymap_2008_indexstack.img.
8. Click Edit Attributes and select Band Names. Change the band names to the appropriate
index names you just wrote down.
9. Click the Display button and select New Display. Click Load RGB and select three
indexes from the New Stacked Layer into a new display.
Masking
Masking reduces the spatial extent of the analysis by masking out areas of the image
which do not contain data of interest. Masking reduces processing times by reducing the
number of pixels an analysis must consider. Masking may also improve results by
removing extraneous, confounding variation from the analysis. It is common for analysts
to mask out hydrologic features (streams, rivers, lakes), roads, or nonvegetated pixels, for
example, depending on the project goals.
During the masking process, the user compares the mask carefully to the original image
to verify that only non-essential data is removed. If there is any doubt whether data is
important, it is left in the data set.
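As a sketch of the mechanics, applying a 0/1 mask to every band of an image stack looks like this in NumPy (illustrative only; in ENVI you build and apply masks through the mask-band mechanism):

```python
import numpy as np

def apply_mask(stack, mask):
    """Zero out pixels where mask == 0 across every band of a stack.

    stack: (bands, rows, cols); mask: (rows, cols) of 0/1 values.
    Masked-out pixels are set to 0 so later statistics can ignore them.
    """
    return stack * (np.asarray(mask) != 0)

# Hypothetical 2-band stack with one masked-out pixel (e.g. water)
stack = np.arange(8, dtype=float).reshape(2, 2, 2)
mask = np.array([[1, 0], [1, 1]])
masked = apply_mask(stack, mask)
```

Broadcasting applies the single 2-D mask to every band at once, which is the per-pixel behavior you get when you supply a mask band to an ENVI analysis.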
Calculate Statistics and Covariance Image
1. In ENVI's main window, select Basic Tools → Statistics → Compute Statistics.
2. Select Delta_Hymap_2008_indexstack.img as the input file, click "Select Mask
Band", choose Delta_Hymap_2008_index_mask.img, and click OK.
3. Check Covariance in the Compute Statistics Parameters dialog and click OK.
4. Maximize the Statistics Results and scroll (if necessary) to the correlation matrix. A high
absolute value (close to 1 or negative 1) indicates that the two indexes are highly
correlated. What clusters of highly correlated indexes fall out? Which indexes are not
correlated to any others? If your correlation matrix contains many nonsensical values,
you did not successfully mask the image. See the troubleshooting guide on page 6.
You may use physiological indexes (instead of reflectance data) as the input to any classification
algorithm. Choose indexes to include as classification inputs using the following rules of thumb:
Inspect each of your index images. Indexes that are very noisy (i.e., those with a lot of
speckle and low spatial coherence) should be excluded from further analyses.
Use only one index from a set of highly correlated indexes (i.e., |r| > 0.9).
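The second rule of thumb can be automated with a greedy pass over the correlation matrix (an illustrative sketch; the threshold and the keep-first ordering are choices made here, not ENVI behavior):

```python
import numpy as np

def select_indexes(index_bands, threshold=0.9):
    """Greedy rule of thumb: keep one index from each correlated group.

    index_bands: (n_indexes, n_pixels) of index values (masked pixels
    already removed). Walks the indexes in order and drops any index
    with |r| > threshold against one already kept.
    """
    corr = np.corrcoef(index_bands)
    kept = []
    for i in range(corr.shape[0]):
        if all(abs(corr[i, j]) <= threshold for j in kept):
            kept.append(i)
    return kept, corr

# Index 1 is a rescaled copy of index 0 (r = 1), so it is dropped;
# index 2 is independent and survives
rng = np.random.default_rng(1)
a = rng.normal(size=200)
bands = np.vstack([a, 2.0 * a + 1.0, rng.normal(size=200)])
kept, corr = select_indexes(bands)
```

A linear rescaling leaves the correlation at exactly 1, which is why redundant indexes carry no extra information for a classifier.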
Continuum Removal
Many hyperspectral mapping and classification methods require that data be reduced to
reflectance and that a continuum be removed from the reflectance data prior to analysis. A
continuum is a mathematical function used to isolate a particular absorption feature for analysis
(Clark and Roush, 1984; Kruse et al, 1985; Green and Craig, 1985). It corresponds to a
background signal unrelated to specific absorption features of interest. Spectra are normalized to
a common reference using a continuum formed by defining high points of the spectrum (local
maxima) and fitting straight line segments between these points. The continuum is removed by
dividing the original spectrum by the continuum. In this way, the spectrum is normalized for
albedo in order to quantify the absorption feature.
Figure 4-3: Fitted Continuum and a Continuum-Removed Spectrum for the Mineral
Kaolinite
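The procedure translates directly to code: connect the high points of the spectrum with straight segments (an upper convex hull) and divide the spectrum by that continuum. A sketch in NumPy (illustrative, not ENVI's implementation):

```python
import numpy as np

def continuum_removed(wavelengths, reflectance):
    """Divide a spectrum by its upper convex hull ("continuum").

    The hull connects the high points of the spectrum with straight
    line segments; dividing by it normalizes for albedo so absorption
    features can be compared directly (values <= 1, with 1 on the hull).
    """
    w = np.asarray(wavelengths, dtype=float)
    r = np.asarray(reflectance, dtype=float)
    # Build the upper hull with a monotone-chain style scan
    hull = [0]
    for i in range(1, len(w)):
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            # Drop the middle point if it lies on or below the chord i1 -> i
            cross = (w[i2] - w[i1]) * (r[i] - r[i1]) - (w[i] - w[i1]) * (r[i2] - r[i1])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    continuum = np.interp(w, w[hull], r[hull])
    return r / continuum

# A flat spectrum with one absorption dip: the dip survives, the rest is 1
wl = np.array([2.0, 2.1, 2.2, 2.3, 2.4])
refl = np.array([0.5, 0.5, 0.3, 0.5, 0.5])
cr = continuum_removed(wl, refl)
```

Points on the hull come out exactly 1 after division, so the continuum-removed spectrum isolates the depth and shape of each absorption feature independent of overall brightness.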
Create Continuum-Removed Data
Continuum Removal in ENVI Plot Windows
1. Open the file Delta_Hymap_2008.img and display it as a color infrared by right-
clicking it in the Available Bands List and selecting Load CIR….
2. Right-click in the Image window and select Z Profile (Spectrum…)
3. Make sure that Options → Auto-scale Y Axis is checked in the Spectral Profile
window.
4. Select Edit → Plot Parameters….
5. Edit Range to display wavelengths from 2.0 µm to 2.41 µm and close the Plot
Parameters dialog.
6. Select Plot Function → Continuum Removed. The spectrum will be displayed after
continuum removal.
Navigate to soil pixels in your image and observe the spectra. Note the absorption at 2.2
µm for clay and at 2.3 µm for carbonates; if the pixel has dry vegetation, the spectrum will
also show a cellulose absorption at 2.1 µm. Click back and forth between Normal and
Continuum Removed in the Plot Function menu so that you can see how the shape of the
reflectance spectrum corresponds to the shape of the continuum-removed spectrum. Can
you see the absorption features in both?
used in the physiological indexes you calculated earlier. Can you see what spectral
features they're taking advantage of?
Note: You may wish to organize your Lab_Products folder using subfolders to appropriately
group your files together (i.e. index files vs continuum removal images), or transfer your files
to your appropriate personal lab folder(s).
Tutorial 5: Data Reduction 2 - Principal Components
The following topics are covered in this tutorial:
Masking
Principal Components Analysis
Minimum Noise Fraction Transform
Since PCA is a simple rotation and translation of the coordinate axes, PC bands are linear
combinations of the original spectral bands. You can calculate the same number of output PC
bands as input spectral bands. To reduce dimensionality using PCA, simply exclude those last
PC bands that contain very little variance and appear noisy. Unlike the original bands, PC
bands are uncorrelated to each other.
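The forward rotation can be sketched with NumPy: diagonalize the band-to-band covariance matrix and project the centered pixels onto the eigenvectors. This is an illustration of the underlying linear algebra, not ENVI's implementation.

```python
import numpy as np

def forward_pc_rotation(pixels):
    """Forward PC rotation. pixels: (n_pixels, n_bands) array.
    Returns (pc_bands, eigenvalues), largest-variance PC first."""
    centered = pixels - pixels.mean(axis=0)
    cov = np.cov(centered, rowvar=False)      # band-to-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]         # PC1 = largest variance
    return centered @ eigvecs[:, order], eigvals[order]
```

Because the eigenvectors are orthogonal, the output PC bands have zero covariance with each other, and each band's variance equals its eigenvalue.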
Principal Component bands produce more colorful color composite images than spectral color composite images because the data are uncorrelated. ENVI can perform both forward and inverse PC rotations.
Figure: Rotation of the original spectral axes (X2) to the principal component axes (U1, U2) of the data "cloud". Source: Richards, J.A., 1999. Remote Sensing Digital Image Analysis: An Introduction, Springer-Verlag, Berlin, Germany, p. 240.
Note: You can click Stats Subset to calculate the variance-covariance statistics based on a
spatial subset such as an area under an ROI. However, the default is for the statistics to be
calculated from the entire image.
4. Enter resize factors less than 1 in the Stats X/Y Resize Factor text boxes to sub-sample
the data when calculating the statistics. For example, using a resize factor of 0.1 will use
every 10th pixel in the statistics calculations. This will increase the speed of the statistics
calculations.
5. Output your statistics file to your Lab_Products folder using the filename:
Delta_Hymap_2008_pcastats.sta.
6. Select to calculate the PCs based on the Covariance Matrix using the arrow toggle button.
Note: Typically, use the covariance matrix when calculating the principal components.
Use the correlation matrix when the data range differs greatly between bands and
normalization is needed.
7. Save your PCA file to the Lab_Products folder, using the file name
Delta_Hymap_2008_pca.img.
8. From the Output Data Type menu, select the desired data type of the output file (we'll
stick with Floating Point).
9. Select the number of output PC bands as 30. You can limit the number of output PC bands by entering the desired number of output bands in the text box or by using the
arrow increment button next to the Number of Output PC Bands label. The default
number of output bands is equal to the number of input bands. Reducing the number of
output bands will increase processing speed and also reduce disk space requirements. It
is unlikely that PC bands past 30 will contain much variance.
10. Alternatively, you can choose to select the number of output PC bands using the
eigenvalues to ensure that you don't omit useful information. To do this, perform the
following steps.
Click the arrow toggle button next to the Select Subset from Eigenvalues label to
select Yes. Once the statistics are calculated the Select Output PC Bands dialog
appears with each band listed with its corresponding eigenvalue. Also listed is the
cumulative percentage of data variance contained in each PC band for all PC bands.
Select the number of bands to output by entering the desired number into the
Number of Output PC Bands box or by clicking on the arrow buttons. PC Bands
with large eigenvalues contain the largest amounts of data variance. Bands with
lower eigenvalues contain less data information and more noise. Sometimes, it is best
to output only those bands with large eigenvalues to save disk space.
Click OK in the Select Output PC Bands dialog. The output PC rotation will contain
only the number of bands that you selected. For example, if you chose "30" as the
number of output bands, only the first 30 PC bands will appear in your output file.
11. In the Forward PC Rotation Parameters dialog, click OK.
12. The PCA will take a few minutes. When ENVI has finished processing, the PC
Eigenvalues plot window appears and the PC bands are loaded into the Available Bands
List where you may access them for display. For information on editing and other options
in the eigenvalue plot window, see Using Interactive Plot Functions in ENVI Help.
Figure 5-2: PC Eigenvalues Plot Window
13. Load an RGB image of the top 3 PCA bands. Inspect the z-profile of the PCA image.
Link the PCA image to your CIR reflectance image. Are any features more readily
apparent in the PCA-transformed data? Are different land cover classes more distinctly
colored than in the CIR?
14. Load band 30 as a gray scale. How does it differ from the reflectance image and the top
3 PCA bands?
15. Inspect the variance structure of the image. Open the statistics file: Basic Tools → Statistics → View Statistics File. In the Enter Statistics Filename dialog, find your file Delta_Hymap_2008_pcastats.sta and click OK. Output the statistics to a text file by selecting File → Save results to text file in the Stats File window. Use the filename Delta_Hymap_2008_pcastats.txt.
You can now open this text file in Microsoft Excel. Specify that the file type is delimited
and that Excel should start import at line 4. Click Next and then Finish. Excel should
open a spreadsheet with the PC bands as the rows. For each PC band it includes the
minimum, maximum, and mean values, the standard deviation, and the eigenvalue. If
you scroll down further, you will see the variance-covariance matrix and the
eigenvectors.
Calculate the sum of all the eigenvalues. This is the total amount of variance in your
image. Now, next to your eigenvalue column make a new column entitled "% variation".
Calculate this as 100 * the eigenvalue of each band divided by the sum of all the
eigenvalues you just calculated. How well distributed is the variation in your PCA
bands?
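The spreadsheet arithmetic above is simple enough to check in a few lines. The eigenvalues here are hypothetical placeholders, not values from the Delta HyMap statistics file.

```python
import numpy as np

# Hypothetical eigenvalues read from a statistics text file
eigenvalues = np.array([512.0, 256.0, 128.0, 64.0, 32.0, 8.0])

# "% variation": each eigenvalue as a share of the total variance
pct_variation = 100.0 * eigenvalues / eigenvalues.sum()

# Running total of the shares; the last entry is always 100%
cumulative = np.cumsum(pct_variation)
```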
requirements for subsequent processing (See Boardman and Kruse, 1994). The MNF
transform, as modified from Green et al. (1988) and implemented in ENVI, is essentially two
cascaded Principal Components transformations. The first transformation, based on an
estimated noise covariance matrix, decorrelates and rescales the noise in the data. This first
step results in transformed data in which the noise has variance equal to one and no band-to-
band correlations. The second step is a standard Principal Components transformation of the
noise-whitened data. For the purposes of further spectral processing, the inherent
dimensionality of the data is determined by examination of the final eigenvalues and the
associated images. The data space can be divided into two parts: one part associated with
large eigenvalues and coherent eigenimages, and a complementary part with near-unity
eigenvalues and noise-dominated images. By using only the coherent portions, the noise is
separated from the data, thus improving spectral processing results.
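The two cascaded rotations can be sketched in NumPy as follows. This is a simplified illustration of the procedure, not ENVI's implementation; `noise_cov` is assumed to be an externally supplied noise covariance estimate (for example, from a dark current image or shift differences).

```python
import numpy as np

def mnf_transform(pixels, noise_cov):
    """MNF as two cascaded PC rotations. pixels: (n_pixels, n_bands);
    noise_cov: (n_bands, n_bands) noise covariance estimate."""
    # Step 1: noise whitening. Rotate into the noise eigenbasis and rescale
    # so the noise has unit variance and no band-to-band correlation.
    nvals, nvecs = np.linalg.eigh(noise_cov)
    whitener = nvecs / np.sqrt(nvals)          # scale each eigenvector column
    centered = pixels - pixels.mean(axis=0)
    whitened = centered @ whitener
    # Step 2: a standard PC rotation of the noise-whitened data.
    dvals, dvecs = np.linalg.eigh(np.cov(whitened, rowvar=False))
    order = np.argsort(dvals)[::-1]
    return whitened @ dvecs[:, order], dvals[order]
```

After the transform, bands dominated by noise have eigenvalues near 1 (the whitened noise variance), while signal-bearing bands have much larger eigenvalues.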
Figure 5-3 summarizes the MNF procedure in ENVI. The noise estimate can come from one
of three sources: from the dark current image acquired with the data (for example, AVIRIS),
from noise statistics calculated from the data themselves, or from statistics saved from a
previous transform. Both the eigenvalues and the MNF images (eigenimages) are used to
evaluate the dimensionality of the data. Eigenvalues for bands that contain information will
be an order of magnitude larger than those that contain only noise. The corresponding images
will be spatially coherent, while the noise images will not contain any spatial information.
1. Select Transforms → MNF Rotation → Forward MNF → Estimate Noise Statistics From Data or Spectral → MNF Rotation → Forward MNF → Estimate Noise Statistics From Data.
2. When the MNF Transform Input File dialog appears, select and subset your input file
using the standard ENVI file selection procedures (choose a spectral subset of the first 93
bands) and click OK.
You can perform an MNF on any subset of bands or all bands within an image. We are
limiting our analysis here to the first 93 bands to reduce processing times. The Forward
MNF Transform Parameters dialog appears.
Note: Click Shift Diff Subset if you wish to select a spatial subset or an area under an
ROI on which to calculate the statistics. You can then apply the calculated results to the
entire file (or to the file subset if you selected one when you selected the input file). For
instructions, see Using Statistics Subsetting. The default is for the statistics to be
calculated from the entire image.
Saving your MNF files to the Lab_Products folder:
3. In the Enter Output Noise Stats Filename [.sta] text box, enter a filename for the noise
statistics (e.g., Delta_Hymap_2008_mnf_noisestats.sta).
4. In the Enter Output MNF Stats Filename [.sta] text box, enter an output file for the MNF
statistics (e.g., Delta_Hymap_2008_mnf_stats.sta).
Note: Be sure that the MNF and noise statistics files have different names.
5. Select File output and give it the filename Delta_Hymap_2008_mnf.img.
6. Select the number of output MNF bands by using one of the following options:
A. Enter "40" in the Number of Output MNF Bands box, or
B. To select the number of output MNF bands by examining the eigenvalues, click the
arrow toggle button next to the Select Subset from Eigenvalues label to select Yes.
Click OK to calculate the noise statistics and perform the first rotation. Once the statistics
are calculated the Select Output MNF Bands dialog appears, with each band listed with
its corresponding eigenvalue. Also listed is the cumulative percentage of data variance
contained in each MNF band for all bands.
Click the arrow buttons next to the Number of Output MNF Bands label to set number of
output bands to the desired number, or enter the number into the box. Choose to include
only bands with large eigenvalues that contain nontrivial proportions of variation. As
you can see, by band 30, most of the variation is explained and the addition of each
successive band only adds additional information in very small increments.
Click OK in the Select Output MNF Bands dialog to complete the rotation.
Note: For the best results, and to save disk space, output only those bands with high eigenvalues; bands with eigenvalues close to 1 are mostly noise.
7. The MNF transform will take a few minutes. When ENVI has finished processing, it
loads the MNF bands into the Available Bands List and displays the MNF Eigenvalues
Plot Window. The output only contains the number of bands you selected for output. For
example, if your input data contained 224 bands, but you selected only 50 bands for
output, your output will only contain the first 50 calculated MNF bands from your input.
You can export and open the MNF statistics text file in Microsoft Excel, just as you did for the PCA statistics. Specify that the file type is delimited and that Excel should start import at line 4. Click Next and then Finish. Excel should open a spreadsheet with the MNF bands as the rows. For each MNF band it includes the minimum, maximum, and mean values, the standard deviation, and the eigenvalue. If you scroll down further, you will see the variance-covariance matrix and the eigenvectors.
Calculate the sum of all the eigenvalues. This is the total amount of variance in your image.
Now, next to your eigenvalue column make a new column entitled "% variation". Calculate
this as 100 * the eigenvalue of each band divided by the sum of all the eigenvalues you just
calculated. How well distributed is the variation in your MNF bands?
Now create another new column entitled "cumulative variation" and calculate the values. (A quick way to do this is to set the cumulative variation for the first band equal to its % variation. For the second band, enter a formula that adds that band's % variation to the preceding band's cumulative variation. Now copy that formula and paste it into the remaining rows.)
11. To perform dimensionality reduction of MNF (or PCA) bands, common rules of thumb
are to:
a. Exclude all bands occurring after a threshold of 80% cumulative variation.
b. Exclude all bands whose eigenvalue is less than the average eigenvalue.
c. Plot the eigenvalues vs. band number. This is called a "scree plot". Identify the band at which a kink occurs and the scree plot flattens out, and exclude all bands occurring after this one.
d. View the individual MNF bands and exclude those that are dominated by noise
and are not spatially coherent.
e. If you performed the MNF transform on a mosaic of several images, you should
inspect each MNF output band and discard those that show dramatic differences
between the individual images that make up the mosaic.
Figure 5-5: MNF Scatter Plot
7. Use linked windows, overlays, "dancing pixels", and Z-profiles to understand the reflectance spectra of the MNF corner pixels. Look for areas where the MNF data stop being "pointy" and begin being "fuzzy". Also notice the relationship between scatter plot pixel location and spectral mixing as determined from image color and individual reflectance spectra.
Tutorial 6: Unsupervised and Supervised Classification
The following topics are covered in this tutorial:
Unsupervised and Supervised Classification Techniques
K-Means
IsoData
Parallelepiped
Minimum distance
Mahalanobis distance
Maximum likelihood
Rule classifier
Post-classification Processing
Class statistics
Accuracy assessment
Classification generalization
Creating a class GIS
Delta_2008_class_mahdr.img Mahalanobis Distance rules file
Start ENVI
Start ENVI by double-clicking on the ENVI icon. The ENVI main menu appears when the
program has successfully loaded and executed.
wetlands. Urbanization is also readily apparent. The following figure shows the resulting
Main Image window for these bands.
Cursor Location/Value
Use ENVI's Cursor Location/Value dialog to view the data values of the displayed spectral bands at the cursor location.
1. Select Tools → Cursor Location/Value from the Main Image window menu bar.
Alternatively, double-click the left mouse button in the image display to toggle the
Cursor Location/Value dialog on and off. Or you can right-click in any window of the
display and choose Cursor Location/Value.
2. Move the cursor around the image and examine the data values in the dialog for specific
locations. Also note the relationship between image
color and data value.
3. Select File → Cancel in the Cursor Location/Value
dialog to dismiss it when finished.
clicking the left mouse button in any of the display group windows. The Spectral Profile
window will display the spectrum for the pixel you selected. Note the relations between
image color and spectral shape. Pay attention to the location of the displayed image bands
in the spectral profile, marked by the red, green, and blue bars in the plot.
3. Select File → Cancel in the Spectral Profile dialog to dismiss it.
Unsupervised Classification
Start ENVI's unsupervised classification routines from the ENVI main menu by choosing Classification → Unsupervised → K-Means or IsoData.
K-Means
Unsupervised classifications use statistical techniques to group n-dimensional data into
their natural spectral classes. The K-Means unsupervised classifier uses a cluster analysis
approach which requires the analyst to select the number of clusters to be located in the
data. The classifier arbitrarily locates this number of cluster centers, then iteratively
repositions them until optimal spectral separability is achieved.
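The assign-and-reposition loop described above can be sketched in NumPy. This is a minimal illustration only; ENVI's implementation adds change thresholds, standard deviation limits, and other stopping criteria.

```python
import numpy as np

def kmeans_classify(pixels, n_classes, iterations=20, seed=0):
    """Minimal K-Means. pixels: (n_pixels, n_bands). Returns (labels, centers)."""
    rng = np.random.default_rng(seed)
    # Arbitrary initial cluster centers drawn from the data
    centers = pixels[rng.choice(len(pixels), n_classes, replace=False)].astype(float)
    for _ in range(iterations):
        # Assign every pixel to its nearest center (Euclidean distance)
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Reposition each center at the mean of its assigned pixels
        for c in range(n_classes):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers
```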
Choose Classification → Unsupervised → K-Means, use all of the default values,
choose the Lab_Products output directory, give the name,
Delta_2008_class_km.img and click OK.
1. Load the file, Delta_2008_class_km.img. Highlight the band name for this
classification image in the available bands list, click on the Gray Scale radio button,
select New Display on the Display button pull-down menu, and then Load Band.
2. From the Main Image display menu, select Tools → Link → Link Displays and
click OK in the dialog to link the images.
3. Compare the K-Means classification result to the color composite image. You can
resize the portion of the image using the dynamic overlay by clicking the center
mouse button and defining a rectangle. Move the dynamic overlay around the image
by clicking and dragging with the left mouse button.
4. Try to identify the land cover associated with each class and write this down.
5. When finished, select Tools→ Link → Unlink Display to remove the link and
dynamic overlay.
If desired, experiment with different numbers of classes, change thresholds, standard
deviations, and maximum distance error values to determine their effect on the
classification.
IsoData
IsoData unsupervised classification calculates class means evenly distributed in the data
space and then iteratively clusters the remaining pixels using minimum distance
techniques. Each iteration recalculates means and reclassifies pixels with respect to the
new means. This process continues until the number of pixels in each class changes by
less than the selected pixel change threshold or the maximum number of iterations is
reached.
Choose Classification → Unsupervised → IsoData, use all of the default values, choose
your output directory, give the name, Delta_2008_class_id.img and click on OK.
1. Load the file Delta_2008_class_id.img. Highlight the band name for this
classification image in the available bands list, click on the Gray Scale radio button,
select New Display on the Display button pull-down menu, and then Load Band.
2. In the main image window, select Tools → Link→ Link Displays. Click OK to link
this image to the false-color CIR and the K-Means classification.
3. Compare the IsoData classification result to the color composite image using the
dynamic overlay as you did for the K-Means classification. Change the image that is
displayed by the dynamic overlay by holding the left mouse button down in an image
window and simultaneously clicking on the middle mouse button.
4. Try to identify the land cover associated with each class and write this down.
5. Compare the IsoData and K-Means classifications. Note that these two classifiers
will have assigned different colors to similar spectral classes. Do class boundaries
generally agree spatially between the two techniques? Look at your land cover
interpretations for the 2 classifications. Do they split the spectral data into similar
classes?
If desired, experiment with different numbers of classes, change thresholds, standard
deviations, maximum distance error, and class pixel characteristic values to determine
their effect on the classification.
Supervised Classification
Supervised classifications require that the user select training areas to define each class.
Pixels are then compared to the training data and assigned to the most appropriate class.
ENVI provides a broad range of different classification methods, including Parallelepiped,
Minimum Distance, Mahalanobis Distance, Maximum Likelihood, Spectral Angle Mapper,
Binary Encoding, and Neural Net. Examine the processing results below, or use the default
classification parameters for each of these classification methods to generate your own
classes and compare results.
To perform your own classifications, in the ENVI main menu select Classification →
Supervised→ [method], where [method] is one of the supervised classification methods in
the pull-down menu (Parallelepiped, Minimum Distance, Mahalanobis Distance, Maximum
Likelihood, Spectral Angle Mapper, Binary Encoding, or Neural Net). Use one of the two
methods below for selecting training areas, also known as regions of interest (ROIs).
3. The Enter ROI Filename dialog opens. Select Delta_classes_2008.roi as
the input file to restore.
You can check out these regions by selecting one in the ROI Tool dialog and clicking
Goto.
Parallelepiped
Parallelepiped classification uses a simple decision rule to classify multispectral data.
The decision boundaries form an n-dimensional parallelepiped in the image data
space. The dimensions of the parallelepiped are defined based upon a standard
deviation threshold from the mean of each selected class. Pixels are assigned to a
class when they occur within that class's parallelepiped. If they are outside all
parallelepipeds, they are left unclassified.
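The decision rule can be sketched in NumPy: a pixel belongs to a class when every band falls within that class's mean ± n standard deviations. An illustration only, with hypothetical training statistics; ENVI resolves overlapping boxes by its own rules.

```python
import numpy as np

def parallelepiped_classify(pixels, means, stds, n_std=2.0):
    """pixels: (n, bands); means/stds: (k, bands) per-class training stats.
    Returns labels with 0 = unclassified; first matching class wins."""
    labels = np.zeros(len(pixels), dtype=int)
    for c in range(len(means)):
        # Inside the box only if within the threshold on EVERY band
        inside = np.all(np.abs(pixels - means[c]) <= n_std * stds[c], axis=1)
        labels[(labels == 0) & inside] = c + 1
    return labels
```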
1. Perform a parallelepiped classification (Classification → Supervised→
Parallelepiped) on the image Delta_LandsatTM_2008.img using the
Delta_classes_2008.roi regions of interest or the ROIs you defined.
Run the classification using the default parameters. Save your results in the
Lab_Products folder as Delta_2008_class_pp.img. You may also output
a rules image if you like. Use the toggle switch to choose whether or not a rules
image is generated.
2. Use image linking and the dynamic overlay to compare this classification to the
color composite image. Do you see any pixels that are obviously misclassified?
(e.g., vegetated pixels assigned to the urban class)
Minimum Distance
The minimum distance classification (Classification → Supervised→ Minimum
Distance) uses the centroids (i.e., the mean spectral values) of each ROI and
calculates the Euclidean distance from each unknown pixel to the centroid for each
class. All pixels are classified to the closest ROI class unless the user specifies
standard deviation or distance thresholds, in which case some pixels may be
unclassified if they do not meet the selected criteria.
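The centroid rule above can be sketched in NumPy, including an optional distance threshold that leaves far-away pixels unclassified. An illustration only, not ENVI's implementation.

```python
import numpy as np

def minimum_distance_classify(pixels, centroids, max_dist=None):
    """Assign each pixel to the class with the nearest mean (Euclidean).
    Pixels farther than max_dist from every centroid are left 0 (unclassified)."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1) + 1
    if max_dist is not None:
        labels[d.min(axis=1) > max_dist] = 0
    return labels
```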
1. Perform a minimum distance classification of the Landsat scene using the
Delta_classes_2008.roi regions of interest or the ROIs you defined.
Run the classification using the default parameters. Save results as
Delta_2008_class_mind.img. You may also output a rules image if you
like. Use the toggle switch to choose whether or not a rules image is generated.
2. Use image linking and the dynamic overlay to compare this classification to the
color composite image and the parallelepiped classification. Do you see any
pixels that are obviously misclassified? How do the parallelepiped and minimum
distance results differ? Note especially the Aquatic Vegetation class if you used
the predefined ROIs.
Mahalanobis Distance
The Mahalanobis Distance classification (Classification → Supervised→
Mahalanobis Distance) is a direction sensitive distance classifier that uses
covariance statistics in addition to class means and standard deviations. It is similar
to the Maximum Likelihood classification but assumes all class covariances are equal
and therefore is a faster method. All pixels are classified to the closest ROI class
unless the user specifies a distance threshold, in which case some pixels may be
unclassified if they do not meet the threshold.
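The distance computation can be sketched in NumPy, assuming (as the classifier does) one pooled covariance matrix shared by all classes. An illustration only; the returned distances play the same role as a rule image.

```python
import numpy as np

def mahalanobis_classify(pixels, means, pooled_cov):
    """Mahalanobis distance classifier: one covariance shared by all classes.
    Returns (labels, distances); each pixel gets the nearest class."""
    inv = np.linalg.inv(pooled_cov)
    d2 = np.empty((len(pixels), len(means)))
    for c in range(len(means)):
        diff = pixels - means[c]
        # Quadratic form diff @ inv @ diff per pixel
        d2[:, c] = np.einsum('ij,jk,ik->i', diff, inv, diff)
    dist = np.sqrt(d2)
    return dist.argmin(axis=1), dist
```

With an identity covariance this reduces to the Euclidean (minimum distance) rule; a non-spherical covariance stretches the distance along correlated band directions.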
1. Perform a Mahalanobis distance classification using the
Delta_classes_2008.roi regions of interest or the ROIs you defined.
Run the classification using the default parameters. Save results as
Delta_2008_class_mahd.img. Choose to output a rules file and name it
Delta_2008_class_mahdr.img.
2. Use image linking and the dynamic overlay to compare this classification to the
color composite image and previous supervised classifications. Do you see any
pixels that are obviously misclassified? How do Mahalanobis results differ from
the other 2 supervised classifications? Note especially the Aquatic Vegetation
class if you used the predefined ROIs.
3. Load a band from the rule image and link it to the classification. The values in
the rule image are the calculated Mahalanobis distances from that pixel to the
training data for each class. Display the z-profile for the rule image. Notice the
relationship between the relative values of the rule bands and the classification
result. The pixel is assigned to the class for which it has the minimum
Mahalanobis distance.
Maximum Likelihood
Maximum likelihood classification (Classification → Supervised→ Maximum
Likelihood) assumes that the reflectance values for each class in each band are
normally distributed and calculates the probability that a given pixel belongs to each
class. Unless a probability threshold is selected, all pixels are classified. Each pixel is
assigned to the class for which it has the highest probability of membership (i.e., the
maximum likelihood).
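The Gaussian rule can be sketched in NumPy by comparing per-class log-likelihoods. An illustration only, not ENVI's implementation; here each class has its own covariance, which is what distinguishes it from the Mahalanobis classifier.

```python
import numpy as np

def maximum_likelihood_classify(pixels, means, covs):
    """Gaussian maximum likelihood: each pixel gets the class with the
    highest log-likelihood under that class's normal distribution."""
    loglik = np.empty((len(pixels), len(means)))
    for c in range(len(means)):
        diff = pixels - means[c]
        inv = np.linalg.inv(covs[c])
        mahal = np.einsum('ij,jk,ik->i', diff, inv, diff)
        # Constant terms drop out of the argmax across classes
        loglik[:, c] = -0.5 * (mahal + np.log(np.linalg.det(covs[c])))
    return loglik.argmax(axis=1)
```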
1. Perform a maximum likelihood classification using the
Delta_classes_2008.roi regions of interest or the ROIs you defined.
Run the classification using the default parameters. Save results as
Delta_2008_class_ml.img. You may also output a rules image if you
like. Use the toggle switch to choose whether or not a rules image is generated.
2. Use image linking and the dynamic overlay to compare this classification to the
color composite image and previous supervised classifications. Do you see any
pixels that are obviously misclassified? How do the maximum likelihood results differ from the other supervised classifications? Note especially the Urban class if you used the predefined ROIs.
3. You may now close all of your classification displays.
Note: You must select the algorithm BEFORE importing endmembers in order for ENVI to
calculate the correct statistics.
4. Close the Endmember Collection dialog.
Rule Images
ENVI creates images that show the pixel values used to create the classified image. These
optional images allow users to evaluate classification results and to reclassify as
necessary using different thresholds. These are gray scale images, one for each class in
the classification. The rule image pixel values represent different things for different
types of classifications, for example:
Classification Method: Rule Image Values
Parallelepiped: number of bands satisfying the parallelepiped criteria
Minimum Distance: Euclidean distance from the class mean
Maximum Likelihood: probability of pixel belonging to class (rescaled)
Mahalanobis Distance: Mahalanobis distance from the class mean
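For distance-based rule images (minimum distance, Mahalanobis), reclassifying by minimum value with a threshold amounts to the following sketch. This is an illustration of the idea, not ENVI's Rule Image Classifier Tool.

```python
import numpy as np

def rule_classify_minimum(rules, threshold):
    """rules: (n_pixels, n_classes) rule values where smaller = better match
    (e.g. Mahalanobis distances). Pixels whose best value still exceeds the
    threshold are left unclassified (0)."""
    labels = rules.argmin(axis=1) + 1
    labels[rules.min(axis=1) > threshold] = 0
    return labels
```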
1. For the Mahalanobis Distance classification above, load the classified image and the
rule image for one class into separate displays. Invert the rule images using Tools →
Color Mapping → ENVI Color Tables and drag the Stretch Bottom and Stretch
Top sliders to opposite ends of the dialog. Pixels closer to class means (i.e., those
with spectra more similar to the training ROI and thus shorter Mahalanobis distances)
now appear bright.
2. Link the classification and rule image displays. Use Z-profiles and the Cursor
Location/Value tool to determine if better thresholds could be used to obtain more
spatially coherent classifications. In particular, identify a better threshold value for
the Aquatic Vegetation class so that classified pixels include aquatic vegetation, but
exclude the Pacific Ocean and upland green and nonphotosynthetic vegetation. To
do so, find a Mahalanobis distance value that is greater than those exhibited by most pixels that truly contain aquatic vegetation, but lower than those of pixels that are erroneously classified as Aquatic Vegetation.
3. If you find better thresholds, select Classification→ Post Classification → Rule
Classifier from the ENVI main menu.
4. Choose the Delta_2008_class_mahdr.img input file as the rule image and
click OK to bring up the Rule Image Classifier Tool, then enter a threshold to create a
new classified image. Click on the radio button to classify by Minimum Value. This
lets ENVI know that smaller rule values represent better matches.
5. Click Quick Apply to have your reclassified image displayed in a new window.
6. Compare your new classification to the previous classifications. Since you have set
thresholds where there were none originally, you should now have some unclassified
pixels, displayed as black.
7. You may continue to adjust the rule classifier until you are satisfied with the results.
Click Save To File when you are happy with the results, and choose the filename
Delta_2008_class_mahd2.img.
Post Classification Processing
Classified images require post-processing to evaluate classification accuracy and to
generalize classes for export to image-maps and vector GIS. ENVI provides a series of tools
to satisfy these requirements.
Class Statistics
This function allows you to extract statistics from the image used to produce the
classification. Separate statistics consisting of basic statistics, histograms, and average
spectra are calculated for each class selected.
1. Choose Classification→ Post Classification → Class Statistics to start the process
and select a Classification Image (e.g.: Delta_2008_class_mahd2.img) and
click OK.
2. Select the image used to produce the
classification
(Delta_LandsatTM_2008.
img) and click OK.
3. Click Select All Items and then OK in
the Class Selection dialog.
4. Click the Histograms and Covariance
check boxes in the Compute Statistics
Parameters dialog to calculate all the
possible statistics.
5. Click OK at the bottom of the
Compute Statistics Parameters dialog.
The Class Statistics Results dialog
appears. The top of this window
displays the mean spectra for each
class. Do the mean spectra
correspond to expected reflectance
profiles for these land cover classes?
Summary statistics for each class by
band are displayed in the Statistics
Results dialog. You may close this
window. Figure 6-4: Sample Class Statistics Report
Confusion Matrix
ENVI's confusion matrix function allows comparison of two classified images (the classification and the "truth" image), or a classified image and ROIs. The truth image can be another classified image, or an image created from actual ground truth measurements.
We do not have ground reference data for this scene, so you will be comparing two of
your classifications to each other. You will also compare a classification to the training
ROIs, although this will not provide an unbiased measure of accuracy.
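The matrix itself is simple to construct: count how often each truth class is assigned to each predicted class. A NumPy sketch, with the standard accuracy summaries derived from it (not ENVI's report format):

```python
import numpy as np

def confusion_matrix(truth, predicted, n_classes):
    """Rows index the truth classes, columns the predicted classes."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(m, (truth, predicted), 1)   # tally each (truth, predicted) pair
    return m

def accuracy_summary(m):
    overall = np.trace(m) / m.sum()                 # fraction on the diagonal
    omission = 1.0 - np.diag(m) / m.sum(axis=1)     # truth pixels missed per class
    commission = 1.0 - np.diag(m) / m.sum(axis=0)   # wrong predictions per class
    return overall, omission, commission
```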
1. Select Classification → Post Classification → Confusion Matrix → [method],
where [method] is either Using Ground Truth Image, or Using Ground Truth ROIs.
2. For the Ground Truth Image Method, compare the Parallelepiped and Maximum
Likelihood images you previously created by choosing the two files,
Delta_2008_class_ml.img and Delta_2008_class_pp.img and
clicking OK (for the purposes of this exercise, we are using the
Delta_2008_class_pp.img file as the ground truth).
3. Use the Match Classes Parameters dialog to pair corresponding classes from the two
images and click OK. (If the classes have the same name in each image, ENVI will
pair them automatically.)
4. Answer "No" in the Confusion Matrix Parameters dialog where it asks "Output Error Images?".
5. Examine the confusion matrix. For which class do the classifiers agree the most? On
which do they disagree the most? Determine sources of error by comparing the
classified images to the original reflectance image using dynamic overlays, spectral
profiles, and Cursor Location/Value.
6. For the Using Ground Truth ROIs method, select the classified image
Delta_2008_class_ml.img to be evaluated.
7. Match the image classes to the ROIs loaded from Delta_classes.roi, and click
OK to calculate the confusion matrix.
8. Click OK in the Confusion Matrix Parameters dialog.
9. Examine the confusion matrix and determine sources of error by comparing the
classified image to the ROIs in the original reflectance image using spectral profiles,
and Cursor Location/Value. According to the confusion matrix, which classes have
the lowest commission and omission errors? Is this supported by your inspection of
the images?
Figure 6-5: Confusion Matrix using a Second Classification Image as Ground Truth
Clump and Sieve
Clump and Sieve provide methods for generalizing classification images. Sieve is usually
run first to remove the isolated pixels based on a size (number of pixels) threshold, and
then clump is run to add spatial coherency to existing classes by combining adjacent
similar classified areas. Illustrate what each of these tools does by performing the
following operations and comparing the results to your original classification.
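The sieve step can be sketched with SciPy's connected-component labeling: regions of a class smaller than the size threshold are set to unclassified. An illustration only; ENVI's tool exposes additional connectivity and grouping options, and SciPy is an assumed dependency here.

```python
import numpy as np
from scipy import ndimage

def sieve_classes(class_image, min_size):
    """Set connected class regions smaller than min_size pixels to 0
    (unclassified), one class at a time, using 4-neighbor connectivity."""
    out = class_image.copy()
    for c in np.unique(class_image):
        if c == 0:
            continue                      # skip the unclassified value
        mask = class_image == c
        labeled, n_regions = ndimage.label(mask)
        sizes = ndimage.sum(mask, labeled, index=range(1, n_regions + 1))
        for region_id, size in enumerate(sizes, start=1):
            if size < min_size:
                out[labeled == region_id] = 0
    return out
```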
1. To sieve, select Classification→ Post Classification → Sieve Classes, choose
Delta_2008_class_mahd2.img, choose your output folder, give filename
Delta_2008_class_mahd2_sieve.img and click OK.
2. Use the output of the sieve operation as the input for clumping. Choose
Classification → Post Classification → Clump Classes, choose
Delta_2008_class_mahd2_sieve.img and click OK.
3. Output as Delta_2008_class_mahd2_clump.img and click OK in the
Clump Parameters dialog.
4. Compare the three images. Do you see the effect of both sieving and clumping?
Reiterate if necessary with different thresholds to produce a generalized classification
image.
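The logic behind Sieve and Clump can be sketched outside ENVI. The snippet below is a rough Python analogue using scipy (the demo array, size threshold, and 3×3 structuring element are all assumptions, and ENVI's exact implementation may differ): sieving drops small connected groups of a class, and clumping applies a morphological closing (dilate then erode) that bridges small gaps within a class.

```python
import numpy as np
from scipy import ndimage

def sieve(class_img, min_size, class_value):
    """Set connected groups of class_value smaller than min_size pixels
    to 0 ('unclassified'), a rough analogue of ENVI's Sieve Classes."""
    mask = class_img == class_value
    labels, n = ndimage.label(mask)  # 4-connected components by default
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    small = np.isin(labels, np.flatnonzero(sizes < min_size) + 1)
    out = class_img.copy()
    out[small] = 0
    return out

def clump(class_img, class_value):
    """Morphological closing of one class, a rough analogue of Clump Classes."""
    mask = class_img == class_value
    closed = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    out = class_img.copy()
    out[closed] = class_value
    return out

demo = np.zeros((5, 7), dtype=int)
demo[2, [1, 2, 4, 5]] = 1      # two fragments of class 1 with a 1-pixel gap
demo[0, 6] = 1                 # an isolated single pixel

sieved = sieve(demo, min_size=2, class_value=1)   # removes the isolated pixel
clumped = clump(sieved, class_value=1)            # closing fills the gap at (2, 3)
```

As in ENVI, the order matters: sieving first keeps the closing step from "rescuing" isolated noise pixels.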
Combine Classes
The Combine Classes function provides an alternative method for classification
generalization. Similar classes can be combined into a single more generalized class.
1. Perform your own combinations as described below.
2. Select Classification→ Post Classification → Combine Classes.
3. Select the Delta_2008_class_mahd2.img file in the Combine Classes Input
File dialog and click OK.
4. Choose Urban (as the input class) to combine with Unclassified (as the output class),
click on Add Combination, and then OK in the Combine Classes Parameters dialog.
Choose “Yes” in response to the question “Remove Empty Classes?”. Output as
Delta_class_mahd2_comb.img and click OK.
5. Compare the combined class image to the classified images and the sieved and
clumped classification image using image linking and dynamic overlays.
76
Classes to Vector Layers
Execute the function to convert one of the classification images to vector layers that you
can use in a GIS.
1. Select Classification→ Post Classification → Classification to Vector and choose the
generalized image Delta_2008_class_mahd2_clump.img within the Raster to
Vector Input Band dialog. (It is wise to output sieved & clumped classifications rather
than the raw class outputs to vector. Sieved & clumped maps are more generalized and
less complex. This reduces computing time and the complexity of the resulting
polygons.)
2. In the Raster to Vector Parameters, you can choose which classes you wish to convert to
vectors and also whether you would like all classes to be in a single vector file or for a
separate vector file to be created for each class.
3. We will not convert our classification results to vectors because it can be very time
consuming, so click Cancel.
77
Tutorial 7: Change Detection
The following topics are covered in this tutorial:
Image Differencing
Change Detection
Change detection is a major remote sensing application. A change detection analyzes two or
more images acquired on different dates to identify regions that have undergone change and
to interpret the types and causes of that change. Several common methods are:
78
Image differencing – Image differencing change detection subtracts a reflectance band or
reflectance product of one image date from another. For example:
NDVIdiff = NDVI(t+1) − NDVI(t)
“Change” pixels are those with a large difference (positive or negative). They are typically
identified by setting thresholds. For example, pixels with values more than 3 standard
deviations from the average difference might be “change” pixels. Image differencing that has
been generalized to multiband situations is known as Change Vector Analysis (CVA). In
these cases, the magnitude of differences indicates whether or not change has occurred and
the direction of differences in multiband space provides information as to the type of change.
For example, CVA may be performed using indexes or linear spectral unmixing (LSU)
fractions as inputs.
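The differencing-and-thresholding idea can be sketched in a few lines of Python (all NDVI values below are made up; a real analysis would use full image bands):

```python
import numpy as np

# Hypothetical NDVI layers for two dates on the same grid
ndvi_old = np.array([[0.2, 0.6, 0.7, 0.4],
                     [0.1, 0.5, 0.8, 0.3]])
ndvi_new = np.array([[0.2, 0.6, 0.7, 0.4],
                     [0.1, 0.5, 0.2, 0.3]])   # one pixel lost vegetation

diff = ndvi_new - ndvi_old            # NDVIdiff = NDVI(t+1) - NDVI(t)
mu, sigma = diff.mean(), diff.std()

# Flag pixels beyond 2 standard deviations of the mean difference
decrease = diff < mu - 2 * sigma
increase = diff > mu + 2 * sigma
```

Only the pixel whose NDVI dropped sharply falls outside the mean ± 2σ band; the unchanged pixels are (correctly) not flagged.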
Principal components analysis (PCA) – Another change detection method is to stack image
dates into a single file and perform a PCA on the multidate image. The first few PC bands
typically represent unchanged areas (since change generally happens over only a small
portion of a scene). Higher order PC bands highlight change. As with the statistical data
reduction techniques, PCA change detections may be difficult to interpret. Furthermore, they
merely identify change but provide no information as to the type of change. Users must
interpret the change themselves by inspecting the original images.
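A toy Python illustration of why the PCs separate this way is shown below (synthetic values, one band per date for simplicity; a real case stacks all bands of both dates). Most pixels carry the same signal on both dates, so PC 1 captures that shared variance and PC 2 isolates the change:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Hypothetical single-band values for two dates: mostly unchanged pixels...
date1 = rng.normal(0.4, 0.1, n)
date2 = date1 + rng.normal(0.0, 0.01, n)
date2[:20] -= 0.3                       # ...plus 20 pixels that truly changed

X = np.column_stack([date1, date2])     # the stacked multidate "image"
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]       # sort PCs by descending variance
pcs = Xc @ eigvecs[:, order]

# Changed pixels score far from zero on PC 2; stable pixels do not
changed_score = np.abs(pcs[:20, 1]).mean()
stable_score = np.abs(pcs[20:, 1]).mean()
```

Note that PC 2 only says *that* these pixels differ between dates, not what kind of change occurred, which matches the interpretation caveat above.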
Post-classification change detection – In this method, the two image dates are classified
independently. The change detection then determines whether and how the class membership
of each pixel changed between the image dates. This technique provides detailed “from–to”
information about the type of change. However, it is hampered by the accuracy of the input
classifications. The accuracy of a post-classification change detection can never be higher
than the product of the individual classification accuracies.
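That bound is just the product of the two overall accuracies, assuming the errors in the two classifications are independent. As a quick sanity check (the accuracy values here are hypothetical):

```python
# If each date's classification is itself imperfect, errors compound
acc_1998 = 0.90   # hypothetical overall accuracy, 1998 classification
acc_2008 = 0.85   # hypothetical overall accuracy, 2008 classification

# Upper bound on change-detection accuracy with independent errors
max_change_accuracy = acc_1998 * acc_2008
```

Two individually "good" maps (90% and 85%) can therefore support a change map that is at best about 76.5% accurate.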
6. In the Output File Range section choose the “Exclusive: range encompasses file
overlap” option.
79
The information on the right half of the dialog has been imported from the input files
and should be correct.
7. Save your multi-date image to the Lab_Products folder and name your output file
Delta_LandsatTM_2date.img. Click OK. This will take a few minutes for
ENVI to process.
8. Load an RGB of the multidate image with 2008 Band 4 in both the red and the
green and 1998 Band 4 in the blue and geographically link it to the two single date
images.
In this multidate composite display, pixels that have brighter NIR reflectance in 2008
will appear yellow. Pixels that have brighter NIR reflectance in 1998 will appear
blue. Pixels with similar NIR reflectance in the two images will be displayed in
shades of gray.
Right-click in your multidate image and select the “Pixel Locator” tool, then click
around in the linked images to find areas of change.
Go to the following pixel coordinates, pressing Apply after entering each pair:
Sample: 3333, Line: 2301 – Changed water levels in the Los Vaqueros Reservoir
show up as blue in the multidate image because the reservoir had higher water levels
in 2008 than in 1998, and water is darker in the NIR.
Sample: 2081, Line: 3527 – This area of the South Bay shows up as yellow in the
multidate image because it appears to have been developed by 2008 but was flooded
in 1998, giving it brighter NIR reflectance in 2008.
Find a few more instances of change and see if you can intuit the cause.
You will notice a blue fringe along the entire northern edge of the image area,
because the 1998 Landsat TM image extends further north than the 2008 image.
9. Create a mask for the area of overlap from the two image dates:
Go to Basic Tools → Masking → Build Mask and choose the display number that
corresponds to your multidate image.
In the Mask Definition dialog choose Options → Selected Areas “Off”. Then
choose Options → Import Data Range.
Select the input file Delta_LandsatTM_2date.img
80
Go to Basic Tools → Masking → Apply Mask and choose
Delta_LandsatTM_2date.img.
Click Select Mask Band → Mask Band (under
Delta_LandsatTM_2date_msk.img)
Save your output file to the Lab_Products folder and name it
Delta_LandsatTM_2date_masked.img. Click OK.
Where in this scene has most of the change in NDVI occurred? Where has it remained
relatively constant?
8. Calculate NDVI change statistics. Go to Basic Tools → Statistics → Compute Statistics.
Choose the input file Delta_LandsatTM_NDVI_diff.img. Click OK and Click OK
again in the Compute Statistics Parameters dialog.
9. A Statistics Results window will open displaying the minimum, maximum, mean, and
standard deviations of the NDVI difference image. Write down these values. You will need
them to choose thresholds for identifying changed pixels.
10. Calculate threshold values of mean ± 2*st.dev. Load the NDVI difference image into
Display #1 and create ROIs of positive and negative change using these threshold values.
Starting in the image window, select Overlay → Region of Interest → Options → Band
Threshold to ROI → Delta_LandsatTM_NDVI_diff Band Math (b1-b2) and click OK.
In the Band Threshold to ROI Parameters dialog:
81
For areas that showed an increase in NDVI greater than 2 standard deviations, enter a
minimum threshold value of -9999 and a maximum threshold equal to your recorded mean
minus 2 standard deviations.
For areas that showed a decrease in NDVI, starting in your ROI Tool box, repeat the steps
Options → Band Threshold to ROI → Delta_LandsatTM_NDVI_diff Band Math (b1-b2)
→ OK, except this time enter a value of 2 standard deviations above your recorded mean for
your minimum threshold value, and 9999 for your maximum threshold value.
11. Do you think these thresholds do a good job of identifying changed pixels?
For example, PC band 1 seems to be highlighting areas that weren't vegetated in both images,
PC band 2 seems to highlight areas that were vegetated in both images, and PC band 3 has
bright values in areas with a change in vegetation cover.
Look at other PC bands and identify what they're telling you. What other PC bands are
sensitive to change?
Does the PCA change detection have similar results to the NDVI difference?
Note: You can change the colors of your ROIs by right-clicking them in the ROI Tool box. If you
wish to hide specific ROIs, select them individually (on the left border) and click Hide ROIs.
82
3. Click through each of the ROIs over the 1998 CIR display several times using the Goto
button of the ROI tool to make sure that they still contain the correct classes. If any have
changed, delete them by clicking on them with the center mouse button in the active window.
4. Perform a Mahalanobis Distance classification on the 1998 bands from the
Delta_LandsatTM_2date_masked.img (selecting the appropriate spectral subset of
just the 1998 bands). Save your classification as Delta_LandsatTM_1998_mahd.img.
Do not output a rule image.
NOTE: When doing a post-classification change detection, it is important that the identical
classification procedure be performed on each image date. Different classifications might have
different biases, which would falsely identify change.
5. Load both the 1998 classification you just created and the optimized Mahalanobis
classification from the 3rd homework assignment. (If you used the original ROIs, open the
original Mahalanobis classification from lab 4 instead of the improved one.)
6. Navigate to Classification → Post Classification → Change Detection Statistics. Select
Delta_LandsatTM_1998_mahd.img as the Initial State Image and click OK. Select
the 2008 classification as the Final State Image and click OK.
7. Pair the classes with their counterparts from each image date in the Define Equivalent Classes
dialog. (Leave unclassified and masked pixels unpaired.) Click OK. In the Change
Detection Statistics Output dialog, toggle “No” for both “Output Classification Mask Images?”
and “Save Auto-Coregistered Input Images?” and click OK.
8. A Change Detection Statistics window will open tabulating the amount of change that has
occurred between each class pair. Click on the tabs to see this output in terms of pixel
counts, %, and area in square meters. Go back to the pixel count display.
The “Class Total” row gives the total number of pixels assigned to that class in the 1998
image within the shared extent. The “Row Total” column gives the total number of pixels
assigned to that class in the 2008 image within the shared extent. (Row Total and Class Total
differ by the number of edge pixels in the 1998 image.)
The “Class Changes” column is the number of pixels of a class in 1998 that were no longer
that class in 2008, i.e., the sum of the off-diagonal elements of that column.
The “Image Difference” is the difference between the Class Total in 2008 and the Class Total
in 1998 for a class. It is thus an index of net change across the scene, not of pixel-by-pixel
change. Image Difference differs from Class Changes because change occurs in both
directions throughout the image; the two directions tend to balance out in the Image
Difference, despite large numbers of individually changing pixels.
NOTE: If you choose to look at the change detection matrix in terms of percentages, each cell is
calculated as the percentage of the pixels classified to the column's class in the initial state (the
1998 image) that were classified to the row's class in the final state (the 2008 image).
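These relationships are easier to see in a small worked example. The Python sketch below uses a made-up two-class change matrix (rows are the 2008 final-state classes, columns the 1998 initial-state classes, matching the ENVI layout described above) and reproduces the Class Total, Class Changes, and Image Difference arithmetic:

```python
import numpy as np

# Hypothetical change matrix
#                      1998: class A  class B
matrix = np.array([[900,  40],      # 2008: class A
                   [ 60, 800]])     # 2008: class B

class_total_1998 = matrix.sum(axis=0)   # column sums: class sizes in 1998
class_total_2008 = matrix.sum(axis=1)   # row sums: class sizes in 2008

# Class Changes: 1998 pixels of a class that ended up in a different class
class_changes = class_total_1998 - np.diag(matrix)

# Image Difference: net change in class size, not pixel-by-pixel change
image_difference = class_total_2008 - class_total_1998
```

Here 100 pixels changed class (60 one way, 40 the other), yet the Image Difference is only ±20, because the two directions of change nearly cancel, exactly the distinction drawn above.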
9. Click around your classifications using the geographic link and find areas that have changed.
Are these true changes or were the pixels wrongly classified in one image but correctly
classified in the other?
83
Compare the results of the three change detection techniques. How do the products differ? What
are the strengths and weaknesses of each? Which do you prefer?
84
Tutorial 8: Map Composition in ENVI
The following topics are covered in this tutorial:
Map Elements
Customizing Map Layout
Saving Results
QuickMap allows you to set the map scale and the output page size and orientation; to select the
image spatial subset to use for the map; and to add basic map components such as map grids,
scale bars, map titles, logos, projection information, and other basic map annotation. Other
custom annotation types include map keys, declination diagrams, arrows, images or plots, and
additional text. Using annotation or grid line overlays means you can modify QuickMap default
overlays and place all map elements in a custom manner.
You can save your map composition in a display group and restore it for future modification or
printing. Using annotation, you can build and save individual templates of common map objects.
85
Open and Display Landsat TM Data
1. From the ENVI main menu bar, select File → Open Image File. A file selection
dialog appears. Open and load a true color image (RGB) of
Delta_LandsatTM_2008.img (from the Multispectral folder)
86
8. Click Save Template at the bottom of the dialog. A Save QuickMap Template to File
dialog appears.
9. In the Enter Output Filename field, enter Delta_LandsatTM_2008_map.qm.
Click OK to save the QuickMap results as a QuickMap template file. You can recall
this template later and use it with any image of the same pixel size by displaying the
desired image and selecting File → QuickMap → from Previous Template from the
Display group menu bar.
10. Click Apply in the QuickMap Parameter dialog to display the QuickMap results in a
display group. If desired, you can modify the settings in the QuickMap Parameters
dialog and click Apply to change the displayed QuickMap.
11. At this stage, you can output the QuickMap to a printer or a Postscript file. Save or
print a copy if desired. Otherwise, continue with the next step.
12. Review the QuickMap results and observe the map grids, scale bars, north arrow, and
positioning of the default text.
Map Elements
ENVI offers many options for customizing your map composition. Options include virtual
borders, text annotation, grid lines, contour lines, plot insets, vector overlays, and classification
overlays. You can use the display group (Image window, Scroll window, or Zoom window) to
perform additional, custom map composition. (If you are working in the Scroll window, you may
want to enlarge it by dragging one of the corners to resize the display.) The following sections
describe the different elements and provide general instructions.
1. To change the default border, select Overlay → Grid Lines from the Display group
menu bar associated with the QuickMap. A Grid Line Parameters dialog appears.
2. From the Grid Line Parameters dialog menu bar, select Options → Set Display
Borders. A Display Borders dialog appears.
3. Enter values as shown in the following figure.
88
Using the Display Preferences
1. You can change virtual borders and other display settings using the Display
Preferences dialog.
2. From the Display group menu bar associated with the QuickMap, select File →
Preferences. A Display Parameters dialog appears with a Display Border section
similar to the above figure.
3. Enter the desired values and select the desired color for the border.
4. Click OK. The new borders are immediately applied to the image.
89
3. From the Annotation dialog menu bar, select Object and choose the desired annotation
object.
4. In the Annotation dialog, select the Image, Scroll, or Zoom radio button to indicate
where the annotation will appear.
5. Drag the object to a preferred location, then right-click to lock it in place.
6. To reselect and modify an existing annotation object, select Object → Selection/Edit
from the Annotation dialog menu bar. Then select the object by drawing a box around
it. You can move the selected object by clicking the associated handle and dragging
the object to a new location. You can delete or duplicate an object by choosing the
appropriate option from the selected menu. Right-click to relock the annotation in
place.
7. Remember to select the Off radio button in the Annotation dialog before attempting
non-annotation mouse functions in the display group.
8. Keep the Annotation dialog open for the following exercises.
Text:
1. Select Object → Text from the Annotation dialog menu bar.
2. Click Font and select a font.
3. Select the font size, color, and orientation using the appropriate buttons and fields in
the Annotation dialog. For information on adding additional fonts, see ―Using Other
TrueType Fonts with ENVI‖ in ENVI Help. TrueType fonts provide more flexibility.
Select one of the TrueType fonts available on your system by clicking Font, selecting
a TrueType option, and selecting the desired font.
4. Type your text in the empty field in the Annotation dialog.
5. Drag the text object to a preferred location in the image and right-click to lock it in
place.
Symbols:
1. Select Object → Symbol from the Annotation dialog menu bar.
2. Select the desired symbol from the table of symbols that appears in the Annotation
dialog.
3. Drag the symbol to a preferred location in the image and right-click to lock it in
place.
90
Polygon and Shape Annotation
You can draw rectangles, squares, ellipses, circles, and free-form polygons in an image. These
can be an outline only, or filled with a solid color or a pattern. Placement is interactive, with easy
rotation and scaling.
1. Select Object → Rectangle, Ellipse, or Polygon from the Annotation dialog menu bar.
2. Enter object parameters as desired in the Annotation dialog.
3. Drag the shapes to a preferred location in the image and right-click to lock them in
place. For polygons, use the left mouse button to define polygon vertices and the right mouse
button to close the polygon.
Arrows:
1. Select Object → Arrow from the Annotation dialog menu bar.
2. Enter object parameters as desired in the Annotation dialog.
3. To draw an arrow, click and hold the left mouse button and drag the cursor in the
image to define the length and orientation of the arrow. Release the left mouse button to complete
the arrow. You can move it by dragging the red diamond handle. Right-click to lock the arrow in
place.
Lines:
1. Select Object → Polyline from the Annotation dialog menu bar.
2. Enter object parameters as desired in the Annotation dialog.
3. To draw a free-form line, click and hold the left mouse button as you are drawing. To
draw a straight line, click repeatedly (without holding the left mouse button) to define the
vertices. Right-click to complete the line. You can move it by dragging the red diamond
handle. Right-click again to lock the line in place.
Declination Diagrams
ENVI generates declination diagrams based on your preferences. You can specify the size of the
diagram and enter azimuths for true north, grid north, and magnetic north in decimal degrees.
1. Select Object → Declination from the Annotation dialog menu bar.
2. Enter object parameters as desired in the Annotation dialog.
3. Click once in the image to show the declination diagram. Move it to a preferred
location by dragging the red diamond handle. Right-click to lock the diagram.
91
Map Key Annotation
Map keys are automatically generated for classification images and vector layers, but you can
manually add them for all other images. Following is an example of a map key:
1. Select Object → Map Key from the Annotation dialog menu bar.
2. Click Edit Map Key Items to add, delete, or modify individual map key items.
3. Click once in the image to show the map key. Move it to a preferred location by
dragging the red diamond handle. Right-click to lock the map key in place.
4. If you want a border and title for the map key, you must add these separately as
polygon and text annotations, respectively.
2. In the Annotation dialog, enter minimum and maximum values and intervals as
desired. Also set vertical or horizontal orientation.
3. Click once in the image to show the color ramp. Move it to a preferred location by
dragging the red diamond handle. Right-click to lock the color ramp in place.
Because 8-bit displays cannot easily assign a new color table to the inset image, ENVI only
shows a gray scale image in the display group. If your display has 24-bit color, a color image will
be displayed.
92
Plot Insets as Annotation
You can easily inset ENVI plots into an image during the map composition/annotation process.
These vector plots maintain their vector character (meaning they will not be rasterized) when
output to the printer or to a Postscript file. They will not appear when output to an image.
You must have a plot window open, such as an X Profile, Y Profile, Z Profile, spectral plot, or
arbitrary profile.
1. Select Object → Plot from the Annotation dialog menu bar.
2. Click Select New Plot. A Select Plot Window dialog appears.
3. Select the plot and enter the desired dimensions to set the plot size. Click OK.
4. Click once in the image to show the plot. Right-click to lock the plot in place.
Because 8-bit displays cannot easily assign a new color table to the inset plot, ENVI only shows a
representation of the plot in the display group. The actual plot is placed when the image is output
directly to the printer or to a Postscript file, and the annotation is burned in. Again, this option
does not produce a vector plot when output to “Image.”
93
Overlaying Vector Layers
ENVI can import shapefiles, MapInfo files, Microstation DGN files, DXF files, ArcInfo
interchange files, USGS DLG files, or ENVI vector files (.evf).
1. From the Display group menu bar associated with the map composition, select
Overlay → Vectors. A Vector Parameters dialog appears.
2. From the Vector Parameters dialog menu bar, select File → Open Vector File. A file
selection dialog appears.
3. Select a file and click Open. An Import Vector Files Parameters dialog appears.
4. Select the appropriate map projection, datum, and units for the vector layer.
5. Click OK. ENVI converts the input vectors into an ENVI vector format (.evf).
6. Load the vectors into the map composition by clicking Apply in the Vector
Parameters dialog.
7. In the Vector Parameters dialog, adjust the vector attributes to obtain the desired
colors, thickness, and line types. See the Vector Overlay and GIS Analysis tutorial or
see ENVI Help for additional information.
The QuickMap you created earlier for your Lab Assignment will be used in the following
exercises. If you already closed Delta_LandsatTM_2008.img, redisplay it as a true color
image.
94
11. Create a scale bar, and an object (your choice) indicating where
Delta_HyMap_2008.img is in the LandsatTM scene. Create text that indicates
what this object is referring to.
12. Click and drag the handles to move the annotation objects. Modify some parameters
for the selected objects. Right-click the objects to lock them in place. Be sure to save
any changes by selecting File → Save Annotation from the Annotation dialog menu
bar. See ENVI Help for further details.
Printing
You can also select direct printing of the ENVI map composition, in which case, the map
composition will be printed directly to your printer using system software drivers.
In all of the output options listed above, graphics and map composition objects are burned
into the image on output.
95
Tutorial 9: Wildfire Detection Exercise
The following topics are covered in this tutorial:
Exploration of Fire Imagery
Flames
Smoke
Influence of temperature on emitted radiance
Band Math for Calculating Fire Indexes
Index of potassium emission
Index of atmospheric CO2 absorption
Observe that smoke from the fire is readily apparent and obscures the underlying
ground cover. Flames are visible only in the areas burning most brightly that are
not covered by smoke. There is little contrast between vegetated areas and
charred areas.
3. Load a false-color CIR display of the image.
96
Smoke plumes are still obvious, but flames stand out more and vegetation is
more clearly distinguished in this display.
What spectral regions penetrate the smoke? Why is the ability to penetrate
smoke dependent on wavelength?
5. Now load a false-color RGB display using the 1682, 1107, and 655 nm channels
of the AVIRIS scene (displayed as red, green, and blue, respectively).
In this display, vegetated areas appear green and burned areas appear dark gray.
Smoke appears bright blue due to high reflectance in the 655 nm channel. Fire
varies from red (cooler fires) to yellow (hotter fires). Now that smoke is not
completely obscuring the display, a much greater burn area is visible. You
should be able to see both flaming and smoldering fires.
6. Display the spectral (Z) profile (by right-clicking the image) and inspect the reflected
radiance of pixels that a) have been burned, b) contain green vegetation, and c)
that are currently burning in the imagery. Note that this scene is of radiance data.
It has not been corrected to reflectance (can you think why not?). The shape of
non-burning spectra will be dominated by the solar emission spectrum and
influenced by land-cover and atmospheric
absorptions.
Can you see the influence of the smoke on the vegetation spectrum? How does
this relate to what you learned about the penetration of smoke by different
wavelengths in step 3?
8. Create another new plot window, this time leaving it blank to start with.
Navigate to burning pixels and collect a sample of burning spectra in your new
plot window using the same drag and drop technique as before.
97
How does fire temperature influence the spectral profile? Is this caused by
reflected radiation or emitted radiation?
Display spectra burning at different temperatures in the same plot window and
color-code them by temperature:
For example, in the plot above the black spectrum is not burning. All other
spectra are of pixels that are on fire. Spectra are colored so that cooler colors
represent cooler pixels. As fire temperature increases, radiance in the SWIR
increases. As pixels become even hotter and more dominated by fire, the
AVIRIS sensor is saturated. Even saturated pixels provide some indication of
fire temperature, however. Hotter pixels saturate more wavelengths. For
example, the orange spectrum above saturates only in the SWIR 2 region, the red
spectrum saturates both SWIR 1 and SWIR 2, the magenta spectrum saturates
into the NIR, and the maroon spectrum saturates throughout the NIR.
Continue to click around your image and investigate the effects of smoke and
burning on radiance spectra. Using sophisticated unmixing techniques (which
you will learn in a few weeks), it is possible to model the fire temperature of each
pixel, but here we will assess temperature only qualitatively.
9. You can now close the true-color, false-color CIR, and any gray-scale displays
you have open.
Figure 9-2: Spectral Profile of several different pixels. Notice the saturation of the
sensor at high temperatures.
98
spectral features, just as a lab spectrometer is. Here we will use two narrow spectral features (one
emission and one absorption) to detect burning pixels. We will calculate indexes based on these
features using ENVI's Band Math capabilities.
Potassium emission
Flaming combustion thermally excites potassium (K) at relatively low excitation energies; the
excited potassium then emits at the NIR wavelengths 766.5 and 769.9 nm. Potassium is an
essential mineral nutrient for plants and is present at detectable levels in soils and plants.
Potassium emission can be detected in hyperspectral data and can be used to identify actively
burning pixels (Vodacek et al. 2002).
Figure 9-3: Spectral profile of flaming and smoldering pixels. Note the potassium emission.
1. Load a false-color RGB of band 45 (769 nm) in red and band 46 (779 nm) in both blue and
green. These are the bands that we will use in our K emission index. Pixels that are not
undergoing flaming combustion will appear in shades of gray. Flaming pixels will display as
red to bright white. (You may need to adjust the display using a predefined stretch or
interactive stretching in Enhance.)
2. Inspect the spectral profile of flaming and non-flaming pixels. Focus on the region of the K
emission around 770 nm. Zoom into this spectral region either by drawing a box around it in
the Spectral Profile window while holding down the center mouse button or by adjusting the
axis ranges in Options → Plot Parameters. Plot flaming and non-flaming spectra in the
same window. Can you see the K emission feature?
3. Calculate the K emission index using band math (Basic Tools → Band Math in the ENVI
main menu), enter the expression: float(b1)/float(b2) and press OK.
In the Variables to Bands Pairing Dialog define band 1 to correspond with AVIRIS band 45
(769 nm) and band 2 to correspond with AVIRIS band 46 (779 nm). Enter the output
filename AVIRIS_simi_fire_Kindex.img , save it to the correct directory, and click OK.
4. Load the K emission index to a new display and link it to the false-color RGB of bands 45
and 46 and also the false-color RGB of the 1682, 1107, and 655 nm bands. You may need to
adjust the display using a predefined stretch or interactive stretching in Enhance (a good
approach is to position the zoom window over flaming pixels and select Enhance
→ [Zoom] Linear 2%).
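The band-math expression float(b1)/float(b2) is just a per-pixel ratio. The Python sketch below mimics it on two tiny arrays of invented radiance values (the real computation runs on the full AVIRIS bands inside ENVI):

```python
import numpy as np

# Hypothetical radiance values (DN) for AVIRIS band 45 (~769 nm, on the K
# emission line) and band 46 (~779 nm, just off it)
band45 = np.array([[510.0, 495.0, 1800.0],
                   [500.0, 505.0, 2400.0]])
band46 = np.array([[505.0, 500.0,  900.0],
                   [498.0, 510.0, 1000.0]])

# ENVI expression float(b1)/float(b2) as array arithmetic; casting to float
# avoids integer division when the inputs are integer DN data
k_index = band45.astype(float) / band46.astype(float)

# Non-flaming pixels ratio ~1; flaming pixels spike well above it
flaming = k_index > 1.5   # threshold chosen for this toy example only
```

The rightmost column plays the role of flaming pixels, where the on-line band is boosted by K emission, so the ratio rises well above 1.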
99
Do you see how the K emission index highlights burning pixels, which should appear bright
white? Toggle on the cursor location/value. What range of K index values do flaming,
smoldering, smoky, and not burning pixels exhibit?
3. Calculate the CO2 absorption index using band math (Basic Tools → Band Math in the
ENVI main menu), enter the expression: float(b1)/(0.666*float(b2)+0.334*float(b3)) and
press OK.
In the Variables to Bands Pairing Dialog define band 1 to correspond with AVIRIS band 173
(2000 nm), band 2 to correspond with AVIRIS band 171 (1980 nm), and band 3 to
correspond with AVIRIS band 177 (2041 nm). Enter the output filename
AVIRIS_simi_fire_CO2index.img, save it to the correct directory, and click OK.
(Note that our index is formulated such that pixels with less absorption by CO2 will have
higher index values.)
4. Load the CO2 absorption index to a new display and link it to the K emission index and also
the false-color RGB of the 1682, 1107, and 655 nm bands. You may need to adjust the
display using a predefined stretch or interactive stretching.
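The CO2 index expression float(b1)/(0.666*float(b2)+0.334*float(b3)) divides the 2000 nm band by a weighted continuum interpolated between the two shoulder bands. A minimal Python analogue (radiance values invented; one non-burning and one burning pixel) is:

```python
import numpy as np

# Hypothetical radiance at AVIRIS band 173 (2000 nm, inside the CO2
# absorption) and bands 171 (1980 nm) and 177 (2041 nm) on the shoulders;
# each array holds [non-burning pixel, burning pixel]
b173 = np.array([20.0,  95.0])
b171 = np.array([100.0, 100.0])
b177 = np.array([100.0, 100.0])

# Continuum at 2000 nm interpolated between the shoulders, using the
# weights from the band-math expression
continuum = 0.666 * b171 + 0.334 * b177
co2_index = b173 / continuum   # less absorption -> ratio closer to 1
```

The non-burning pixel keeps a deep absorption (index well below 1), while the burning pixel's emitted radiance fills in the feature and pushes the index up, which is why burning pixels appear bright.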
Do you see how the CO2 absorption index highlights burning pixels, which should appear
bright white? Toggle on the cursor location/value. What range of CO2 index values do
flaming, smoldering, smoky, and not burning pixels exhibit?
Compare the false-color RGB, K emission index, and CO2 absorption index. What are the
strengths and weaknesses of each? Is one able to detect fires that the others cannot and vice
versa? How are each of them influenced by sensor saturation by the brightest fires?
References
Dennison, P.E., Charoensiri, K., Roberts, D.A., Peterson, S.H., & Green, R.O. (2006). Wildfire
temperature and land cover modeling using hyperspectral data. Remote Sensing of
Environment. 100: 212-222.
Dennison, P.E. (2006). Fire detection in imaging spectrometer data using atmospheric carbon
dioxide absorption. International Journal of Remote Sensing. 27: 3049-3055.
Vodacek, A., Kremens, R.L., Fordham, A.J., Vangorden, S.C., Luisi, D., Schott, J.R., & Latham,
D.J. (2002). Remote optical detection of biomass burning using a potassium emission
signature. International Journal of Remote Sensing. 23: 2721-2726.
101
Tutorial 10.1: Spectral Mapping Methods
The following topics are covered in this tutorial:
Spectral Libraries
Spectral Angle Mapper
Spectral Libraries
Spectral Libraries are used to build and maintain personalized libraries of material spectra, and to
access several public-domain spectral libraries. ENVI provides spectral libraries developed at the
Jet Propulsion Laboratory for three different grain sizes of approximately 160 "pure" minerals,
covering 0.4 to 2.5 µm. ENVI also provides public-domain U.S. Geological Survey (USGS) spectral
libraries with nearly 500 spectra of well-characterized minerals and a few vegetation spectra,
covering 0.4 to 2.5 µm. Spectral libraries from Johns Hopkins University contain spectra
for materials from 0.4 to 14 µm. The IGCP 264 spectral libraries were collected as part of IGCP
Project 264 during 1990. They consist of five libraries measured on five different spectrometers
for 26 well-characterized samples. Spectral libraries of vegetation spectra, measured from 0.4 to
2.5 µm, were provided by Chris Elvidge.
ENVI spectral libraries are stored in ENVI's image format, with each line of the image
corresponding to an individual spectrum and each sample of the image corresponding to an
individual spectral measurement at a specific wavelength (see ENVI Spectral Libraries). You can
display and enhance ENVI spectral libraries.
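That layout is easy to picture as a plain 2-D array. A small numpy sketch, with made-up reflectance values and wavelengths standing in for a real library:

```python
import numpy as np

# An ENVI spectral library is stored as a 2-D image: one line (row) per
# spectrum, one sample (column) per wavelength. A library of 3 materials
# measured at 5 wavelengths is therefore a 3x5 array (values invented
# purely for illustration).
wavelengths = np.array([0.45, 0.55, 0.65, 1.65, 2.20])   # micrometers
library = np.array([
    [0.05, 0.08, 0.06, 0.30, 0.20],   # e.g. green vegetation
    [0.10, 0.12, 0.15, 0.35, 0.30],   # e.g. dry soil
    [0.07, 0.05, 0.03, 0.01, 0.01],   # e.g. clear water
])

# Extracting one material's spectrum is just a row slice:
veg = library[0, :]
```

This is why spectral libraries can be displayed and enhanced like any other ENVI image: each "line" of the display is a complete spectrum.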
7. Save your spectral library. Click File → Save Spectra as → Spectral library file.
8. In the Output Spectral Library window, set the Z plot range to 0–5000, the X-axis title to
Wavelength, the Y-axis title to Value, the Reflectance Scale Factor to 10,000, and the
Wavelength Units to Micrometers. Save your spectral library as Delta_HyMap_2008_spec_lib.sli in
your folder. Click OK.
The SAM algorithm implemented in ENVI takes as input a number of "training classes" or
reference spectra from ASCII files, ROIs, or spectral libraries. It calculates the angular distance
between each spectrum in the image and the reference spectra or "endmembers" in n dimensions.
The result is a classification image showing the best SAM match at each pixel and a "rule" image
for each endmember showing the actual angular distance in radians between each spectrum in the
image and the reference spectrum. Darker pixels in the rule images represent smaller spectral
angles, and thus spectra that are more similar to the reference spectrum. The rule images can be
used for subsequent classifications using different thresholds to decide which pixels are included
in the SAM classification image.
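The angular distance SAM computes is simply the angle between two spectra treated as n-dimensional vectors. A minimal numpy sketch (the function name and spectra are illustrative, not ENVI's implementation):

```python
import numpy as np

def spectral_angle(spectrum, reference):
    """Angle in radians between two spectra treated as n-dimensional
    vectors. Smaller angles mean more similar spectra, which appear
    darker in a SAM rule image."""
    s = np.asarray(spectrum, dtype=np.float64)
    r = np.asarray(reference, dtype=np.float64)
    cos_theta = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    # Clip guards against tiny floating-point excursions outside [-1, 1].
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Because only the angle between the vectors matters, SAM is insensitive
# to overall brightness: a spectrum and a scaled copy of it have angle ~0.
ref = np.array([0.1, 0.3, 0.5])
angle = spectral_angle(2.5 * ref, ref)
```

This brightness insensitivity is a key property of SAM: pixels of the same material under different illumination still match their endmember.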
8. Use the Stretch Bottom and Stretch Top sliders to adjust the SAM rule thresholds to
highlight those pixels with the greatest similarity to the selected endmember.
9. Pull the Stretch Bottom slider all the way to the right and the Stretch Top slider all the
way to the left. Now pixels most similar to the endmember appear bright.
10. Move the Stretch Bottom slider gradually to the left to reduce the number of highlighted
pixels and show only the best SAM matches in white. You can use a rule image color
composite or image animation if desired to compare individual rule images.
11. Repeat the process with each SAM rule image. Select File → Cancel when finished to
close the ENVI Color Tables dialog.
12. Select Window → Close All Display Windows from the ENVI main menu to close all
open displays.
Generate new SAM Classified Images Using Rule Classifier
Try generating new classified images based on different thresholds in the rule images.
1. Display the individual bands of the SAM rule image and choose a threshold for the
classification by browsing using the Cursor Location/Value dialog.
2. Now select Classification → Post Classification → Rule Classifier.
3. In the Rule Image Classifier dialog, select a rule file and click OK.
4. In the Rule Image Classifier Tool dialog, select "Minimum Value" in the Classify by
field, and enter the SAM threshold you decided on in step 1 (for instance, maybe 0.6 is a
better threshold for "Clear Water"). All of the pixels with values lower than the minimum
will be classified. Lower thresholds result in fewer pixels being classified.
5. Click either Quick Apply or Save to File to begin the processing. After a short wait, the
new classification image will appear.
6. Compare with previous classifications and observe the differences.
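The minimum-value thresholding in the steps above can be sketched in numpy. This is an illustrative stand-in for ENVI's Rule Classifier, not its actual implementation; the rule values and thresholds below are made up:

```python
import numpy as np

def classify_by_minimum_rule(rule_stack, thresholds):
    """Sketch of a "Minimum Value" rule classification: assign each pixel
    to the endmember with the smallest rule value (spectral angle), but
    only if that value falls below the class threshold; otherwise the
    pixel stays unclassified (0).
    rule_stack: (n_classes, rows, cols) array of SAM rule values.
    thresholds: one maximum acceptable angle per class."""
    best = np.argmin(rule_stack, axis=0)          # best-matching endmember
    best_val = np.min(rule_stack, axis=0)         # its rule value
    thr = np.asarray(thresholds)[best]            # that class's threshold
    return np.where(best_val < thr, best + 1, 0)  # class IDs start at 1

# Two rule bands over a 1x2 image (angles in radians, invented):
rules = np.array([[[0.1, 0.8]],
                  [[0.5, 0.2]]])
labels = classify_by_minimum_rule(rules, [0.3, 0.3])
# labels -> [[1, 2]]: each pixel matches a different endmember
```

Lowering a class's threshold removes its weaker matches first, which is why lower thresholds classify fewer pixels.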
Consider the following questions:
What ambiguities exist in the SAM classification based on the two different class results and
input spectra? Are there many areas that were not classified? Can you speculate why?
What factors could affect how well SAM matches the endmember spectra?
How could you determine which thresholds represent a true map of selected endmembers?
Tutorial 10.2: Spectral Mixture Analysis
The following topics are covered in this tutorial:
Linear Spectral Unmixing
Output Files                          Description
Delta_HyMap_2008_lsu.img              LSU fraction image of HyMap data
Delta_LandsatTM_2008_subset.img       Landsat TM image spatially subset to the extent of Delta_HyMap_12.img
Delta_LandsatTM_2008_subset_lsu       LSU fraction image of Landsat data
7. Load the RMS Error band into a new, single-band display. Examine the RMS Error image
and look for areas with high errors (bright areas in the image). Are there other endmembers
that could be used for iterative unmixing? How do you reconcile these results if the RMS
Error image does not have any high errors, yet there are negative abundances or abundances
greater than 1.0?
Note – refining your LSU: To improve your unmixing, you can extract spectra from regions
with high RMS error. Use these as new endmembers to replace old ones, or possibly add a new
one if it is spectrally distinct, and repeat the unmixing. If you include too many endmembers
that look similar to each other, the algorithm will make mistakes in the unmixing, so it is best
to keep the total number below 6.
When the RMS image no longer has any high errors, and all of the fraction values range
from zero to one (or not much outside), the unmixing is complete. This iterative method is
much more accurate than trying to artificially constrain the mixing, and it reduces the
computation time by several orders of magnitude compared to the constrained method.
Optionally, if you are confident that you have all of the endmembers, run the unmixing again
with Apply a unit sum constraint turned on, click OK, select a filename to save the file, then
look at the results and compare them to the unconstrained LSU.
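Unconstrained linear spectral unmixing amounts to a per-pixel least-squares fit, with the RMS error coming from the fit residual. A numpy sketch using hypothetical endmember spectra (not the Delta data):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Unconstrained linear spectral unmixing of one pixel.
    endmembers: (n_bands, n_endmembers) matrix of reference spectra.
    Returns (fractions, rms_error). Fractions are free to fall outside
    [0, 1]; values far outside that range, or a bright RMS error image,
    suggest a missing or inappropriate endmember."""
    fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    residual = pixel - endmembers @ fractions
    rms = np.sqrt(np.mean(residual ** 2))
    return fractions, rms

# Hypothetical 4-band spectra for two endmembers (one per column):
E = np.array([[0.1, 0.6],
              [0.2, 0.5],
              [0.3, 0.4],
              [0.4, 0.3]])
mixed = 0.7 * E[:, 0] + 0.3 * E[:, 1]   # a perfectly linear 70/30 mixture
f, rms = unmix(mixed, E)
# f recovers [0.7, 0.3] and rms is essentially zero
```

A pixel containing a material that is not in the endmember matrix cannot be fit well, which is exactly why bright RMS regions point you toward the missing endmembers.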
Locating Endmembers in a Spectral Data Cloud
When pixel data are plotted in a scatter plot that uses image bands as plot axes, the spectrally
purest pixels always occur in the corners of the data cloud, while spectrally mixed pixels always
occur on the inside of the data cloud.
Consider two pixels, where one is in a park with uniform grass, and the other is in a lake. Now,
consider another pixel that consists of 50 percent each of grass and lake. This pixel will plot
exactly between the previous two pixels. Now, if a pixel is 10 percent filled with grass and 90
percent filled with lake, the pixel should plot much closer to the pixel containing 100 percent
lake. This is shown in the following figure.
Figure 10-3: Scatter Plot Showing Pure Pixels and Mixing Endmembers
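The mixing arithmetic behind the figure is simple to verify. A numpy sketch with made-up two-band values for the grass and lake pixels:

```python
import numpy as np

# Two pure pixels (endmembers) in a 2-band scatter plot; the band
# values are illustrative only:
grass = np.array([0.40, 0.05])
lake  = np.array([0.02, 0.10])

# A linearly mixed pixel plots on the line segment between its
# endmembers, at the position given by its fractions:
half_half   = 0.5 * grass + 0.5 * lake   # plots exactly midway
mostly_lake = 0.1 * grass + 0.9 * lake   # plots much closer to the lake pixel
```

This is the geometric reason pure pixels sit at the corners of the data cloud: every linear mixture falls inside the hull spanned by the endmembers.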
Create ROIs from spectral library
1. Open a true-color display of Delta_HyMap_2008.img in Display #1 and bands 2,
3, and 4 of Delta_HyMap_2008_mnf.img as an RGB image in Display #2. Geographically
link the two displays.
3. Create an ROI at each of the pixel locations, and name each ROI after the
class represented by the endmember (e.g. "soil", "non-photosynthetic vegetation",
"water"). In Display #1, go to Overlay → Regions of Interest… and toggle the ROI
radio button to "Off". Once you have navigated to the corresponding pixel location of
the endmember, in the ROI Tool dialog, select ROI Type → Point. Toggle the radio
button to "Zoom" and click on that pixel in the zoom window. You will have created one
ROI. Change the ROI Name to the corresponding class name. Repeat this for all
endmembers, creating a new ROI for each endmember.
4. In the MNF image (Display #2), go to Tools → 2D Scatter Plots and create a scatter
plot with MNF band 1 and MNF band 3.
5. In the ROI Tool dialog, toggle the Image radio button on and Go To your first ROI. Hold your
right mouse button down over the ROI in the Zoom window. In the scatter plot, the
corresponding pixel, and pixels highly similar to it, will be highlighted in red.
Repeat this for all of the ROIs. Where are the endmembers in the data cloud? Are they at
the edges or the center?
6. In the Scatter Plot window, go to Options → Change Bands… and plot two different
MNF bands. Highlight the endmembers and look to see where they fall in the data cloud.
Do this for several combinations of the first 10 MNF bands. What can you conclude
about the appropriateness of the endmembers used for the linear spectral unmixing? Were
they spectrally pure? Do the positions of the endmembers explain some of the nonsensical
results in the abundance images? How could you use the data cloud to improve your
spectral unmixing results?
Tutorial 11: LiDAR
The following topics are covered in this tutorial:
Overview of This Tutorial
Exploration of lidar data
Ground model
Top-of-canopy model
Determining object heights
Hyperspectral-lidar data fusion
Using lidar-derived heights to interpret classification results
Including lidar data in hyperspectral classifications
Examine gridded LiDAR products
LiDAR (light detection and ranging) is a form of active remote sensing. The sensor emits a pulse
of EMR and measures the time it takes for that pulse to reflect off the surface and return to the
sensor, allowing the elevation of objects to be determined. Lidar sensors provide either full-
waveform or discrete return data. Full-waveform sensors record the intensity of pulse returns
over all heights present, creating a complete vertical profile of the land cover. They typically
have large footprints. Discrete return sensors bin returns into two or more classes; the most
common are first returns, the first reflected signals received by the sensor from a
footprint (i.e., signals reflected off the tops of trees), and last returns, the last reflected signals
received from a footprint (i.e., signals reflected from the ground). Lidar data are usually analyzed
as raw point clouds, which requires specialized software such as Terrascan. These points can
be classified, interpolated, and gridded to produce surface models such as digital elevation
models or top-of-canopy models. We will be exploring gridded lidar products today, since these
data can be processed in ENVI.
Note: The elevation of water-covered areas was not modeled; these pixels contain the default
value '************************'. This interferes with the histogram stretch applied
when displaying the data. Try centering your image or zoom windows on areas that contain
no water and then choosing Enhance → [Image] Linear 2% or Enhance → [Zoom] Linear
2% to produce a more meaningful display.
2. Calculate statistics for this file (under Basic Tools) to determine the highest, lowest, and
mean elevations in the scene. Where do pixels with these elevations occur?
You will first need to create a mask to exclude all the ************************ values.
Open the ROI tool and choose Options → Band Threshold to ROI. Select the ground
model file and click OK. Enter "************************" as both the min and max
values (you can copy that text from this tutorial and paste it into the Band Threshold to ROI
Parameters dialog) and click OK.
Go to Basic Tools → Masking → Build Mask and choose the display your mask will be
associated with. Choose Options → Selected Areas "Off". Then define your mask by
going to Options → Import ROIs, select the ROI you just created, and click OK. Save your
mask as Delta_12_bareearth_watermask.img.
Now calculate your statistics (Basic Tools → Statistics → Compute Statistics → Select
Mask Band) while applying this mask band. Click OK three times.
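The statistics-under-a-mask workflow amounts to the following numpy sketch, which uses a numeric stand-in FILL for the file's default value; the elevations are made up:

```python
import numpy as np

# Ground-model sketch: unmodeled water pixels hold a fill value that
# must be excluded before computing statistics.
FILL = -9999.0                       # stand-in for the file's default value
ground = np.array([[ 5.0,  7.5, FILL],
                   [10.0, FILL,  2.5]])

mask = ground != FILL                # True only where elevation was modeled
lowest  = ground[mask].min()
highest = ground[mask].max()
mean_el = ground[mask].mean()
```

Without the mask, the fill values would dominate both the minimum and the mean, which is exactly why the unmasked histogram stretch looks wrong.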
4. Calculate statistics for this file to determine the highest, lowest, and mean elevations of the
top of objects in the scene. How do these values compare to those for the ground model?
5. Open the file Delta_Hymap_12.img, load a CIR to a new display, and geographically link
it to the lidar displays. Explore the hyperspectral and lidar data together.
6. Calculate the height of objects using the band math (Basic Tools → Band Math) function
"b1-b2". You should subtract the ground model (set as b2) from the top-of-canopy model
(set as b1). Save this file as Delta_12_lidar_heights.img. Display your results and
geographically link it to the other displays. Compute statistics for this file to determine the
minimum, maximum, and mean object heights. You will need to apply the bare earth
watermask band again when you calculate statistics.
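The "b1-b2" band math is a plain array subtraction. A numpy sketch with small synthetic elevation grids:

```python
import numpy as np

# Object height = top-of-canopy model minus ground model ("b1 - b2").
# The 2x2 grids below are invented for illustration.
top_of_canopy = np.array([[25.0, 12.0],
                          [ 8.0,  5.0]])
ground_model  = np.array([[ 5.0, 10.0],
                          [ 8.0,  5.0]])

heights = top_of_canopy - ground_model   # b1 - b2
# e.g. a 20 m tree, a 2 m shrub, and two bare-ground pixels (0 m)
```

Pixels where both models agree (bare ground, pavement) come out at 0 m, which is a quick sanity check on the result.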
2. Create a data fusion file with both the MNF bands and the lidar heights: Go to Basic
Tools → Layer Stacking. Click the "Import File…" button and choose the file
Delta_Hymap_12_mnf with a spectral subset of the first 5 MNF bands only. Repeat this
process to import Delta_12_lidar_heights.img. Make sure the radio button for
"Exclusive: range encompasses file overlap" is selected. Leave all the other entries as they
are, enter the output file name Delta_12_fusion.img, and click OK.
3. Load a display with your fusion image and restore the ROI file
Delta_12_fusion_ROIs.roi.
4. Train your classification with the ROIs you restored. Do not output a rule image. Save your
classification as Delta_12_mnf_class.img.
5. View the output classification file. Note where it performs well and where it performs
poorly. What classes are especially poor?
6. Determine the average object height for each class. Go to Classification → Post
Classification → Class Statistics. Choose your classification file,
Delta_12_mnf_class.img, and click OK. Now choose your Statistics Input File,
Delta_12_fusion.img, and spectrally subset it to the lidar heights band. Click "Select
Mask Band" and choose the mask band Delta_12_fusion_mask.img. Click OK. In the
Class Selection window, choose all classes but "Unclassified" and "Masked Pixels" and click
OK. Click OK once more in the Compute Statistics Parameters dialog.
7. The Class Statistics Results window will appear, displaying a plot of the class means at the
top and, at the bottom, the number of pixels assigned to each class along with the basic
statistics for an individual class. Write down the min, max, mean, and standard deviation of
heights for each class. To change the class that is displayed, use the pulldown menu
underneath the toolbar of the statistics results window labeled "Stats for XXX", where XXX
is the class that is displayed.
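The per-class height statistics ENVI reports can be sketched in numpy. This is a stand-in for Class Statistics, with invented class labels and heights:

```python
import numpy as np

def class_height_stats(class_image, height_image, class_ids):
    """Per-class (min, max, mean, std) of lidar heights, the numbers
    you would read off the Class Statistics Results window."""
    stats = {}
    for cid in class_ids:
        h = height_image[class_image == cid]   # heights of this class's pixels
        stats[cid] = (h.min(), h.max(), h.mean(), h.std())
    return stats

classes = np.array([[1, 1], [2, 2]])            # 1 = forest, 2 = grassland
heights = np.array([[18.0, 22.0], [0.2, 0.6]])  # meters, made up
stats = class_height_stats(classes, heights, [1, 2])
```

Heights that make physical sense for each class (tall forests, near-zero grassland) are the "reasonableness" check steps 7 and 10 ask for.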
8. Now we will repeat the classification including the lidar height data with the MNF bands as a
classification input.
9. View the output classification file. Link it to the classification created with spectral data
only. Note where the classifier performs poorly and where it performs well. Does including
the lidar height data improve your classification? Have the problem classes from the original
spectral classification been improved?
10. Repeat steps 6 and 7 to determine the min, max, mean, and standard deviation of class heights
for the data fusion classification. Do these heights make sense for these classes? Are they
more reasonable than the mean and max class heights from the classification using only
spectral data?
11. Compare the two classifications using a confusion matrix to see which classes were changed
the most by inclusion of the lidar information. Go to Classification → Post Classification
→ Confusion Matrix → Using Ground Truth Image.
A confusion matrix displaying the classes from the fusion classification in the columns and
from the spectral-only classification in the rows should appear. Inspect this confusion matrix.
Which classes were relatively uninfluenced by the inclusion of structural (lidar) data? Which
classes lost many pixels when structural information was included? What were those pixels
classified as instead? Which classes gained many pixels when structural information was
included? What classes had those pixels been classified as?
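The cross-tabulation behind that confusion matrix can be sketched in numpy. This is an illustrative stand-in for ENVI's tool, using tiny synthetic maps:

```python
import numpy as np

def confusion_matrix(rows_map, cols_map, n_classes):
    """Cross-tabulate two classifications: rows from the first map
    (here the spectral-only result, treated as 'ground truth'), columns
    from the second (the fusion result). Off-diagonal cells count pixels
    that changed class when the lidar heights were added."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for r, c in zip(rows_map.ravel(), cols_map.ravel()):
        cm[r, c] += 1
    return cm

spectral_only = np.array([[0, 0], [1, 1]])
fusion        = np.array([[0, 1], [1, 1]])
cm = confusion_matrix(spectral_only, fusion, 2)
# one pixel moved from class 0 to class 1: cm == [[1, 1], [0, 2]]
```

Reading along a row shows where a spectral-only class's pixels went in the fusion map; reading down a column shows where a fusion class's pixels came from.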