
Exploring Sentinel-1 and Sentinel-2 diversity for flood inundation mapping using deep learning

Goutam Konapala*1,2, Sujay V. Kumar1, Shahryar Khalique Ahmad1,3


1 Hydrological Sciences Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD, USA
2 Universities Space Research Association, Greenbelt, MD, USA
3 Science Applications International Corporation, Greenbelt, MD, USA
*Corresponding author: [email protected]

Abstract

Identification of flood water extent from satellite images has historically relied on either synthetic aperture radar (SAR) or multi-spectral (MS) imagery. MS sensors are limited to cloud-free conditions, whereas SAR imagery is plagued by noise-like speckle. Prior studies that use combinations of MS and SAR data to overcome the individual limitations of these sensors have not fully examined the sensitivity of flood mapping performance to different combinations of SAR and MS derived spectral indices or band transformations in color space. This study explores the use of diverse bands of Sentinel-2 (S2) through well-established water indices and Sentinel-1 (S1) derived SAR imagery, along with their combinations, to assess their capability for generating accurate flood inundation maps. The robustness in performance of the S1 and S2 band combinations was evaluated using 446 hand-labeled flood inundation images spanning 11 flood events from the Sen1Floods11 dataset, which are highly diverse in terms of land cover as well as location. A modified K-fold cross-validation approach is used to evaluate the performance of 32 combinations of S1 and S2 bands using a fully convolutional deep neural network known as U-Net. Our results indicate that the use of elevation information improves the capability of S1 imagery to produce more accurate flood inundation maps. Compared to a median F1 score of 0.62 when using only S1 bands, the combined use of S1 and elevation information led to an improved median F1 score of 0.73. Water extraction indices based on S2 bands have a statistically significant superior performance in comparison to S1. Among all the band combinations, the HSV (Hue, Saturation, Value) transformation of S2 bands provides a median F1 score of 0.9, outperforming the commonly used water spectral indices owing to the HSV transformation's superior contrast-distinguishing abilities. Additionally, the U-Net algorithm was able to learn the relationship between the raw S2 based water extraction indices and their corresponding raw S2 bands, but not that of HSV, owing to the relatively complex computation involved in the latter. The results of the paper establish important benchmarks for the extension of S1 and S2 data-based flood inundation mapping efforts over large spatial extents.

Introduction

Reliable, accurate and near-real-time mapping of flood inundated areas is important for flood protection and preparedness (Mosavi et al., 2018). In this context, satellite remote sensing is invaluable in providing estimates of the location, extent, and severity of flooding over political boundaries and complex terrain. Attempts to map flood extents using remote sensing data have focused on using either spectral data from optical sensors or backscatter data from synthetic aperture radar (SAR) (see Huang et al., 2018 and Shen et al., 2019a for reviews). Spectral data from optical sensors are highly correlated with open water surfaces (Irwin et al., 2017). Therefore, spectral data is the preferred source for flood inundation mapping under cloud-free weather conditions (Pekel et al., 2014; Gevaert et al., 2015). However, one serious limitation of bands derived from optical imagery is their inability to penetrate the cloud cover that is usually present during large-scale floods caused by precipitation (McNairn et al., 2009; Clement et al., 2018). Synthetic Aperture Radar (SAR), which records data based on energy reflectance, can detect water irrespective of cloud interference and is also effective both during day and nighttime (Shen et al., 2019a). However, limitations such as the inability to differentiate between water and water-like surfaces, noise-like speckle, and geometric correction may challenge SAR's viability for global flood
inundation mapping applications (Shen et al., 2019b). Combinations of MS and SAR data have been found to perform better for urban area extents (Iannelli and Gamba, 2018), invasive plant detection (Rajah et al., 2018), soil moisture mapping (Gao et al., 2017), wildfire assessment (Colson et al., 2018), crop monitoring (Betbeder et al., 2014; Dusseux et al., 2014), land cover classification (Ienco et al., 2019) and also flood inundation mapping (Manakos et al., 2020; Bioresita et al., 2019; Slagter et al., 2020). Thus, combining both multi-spectral imagers and SARs may provide complementary information useful for flood mapping efforts.

Initial applications of flood inundation mapping involved the use of visible spectral bands and their combinations, such as the Normalized Difference Vegetation Index (NDVI), from coarse-scale VHRR (Very High Resolution Radiometer) and AVHRR (Advanced Very High-Resolution Radiometer) optical imagery at 1 km resolution (Wiesnet, 1974; Barton and Bathols, 1989). With the advent of MODIS and Landsat, not only did the spatial and temporal resolution increase, but infrared spectral bands were also made available, significantly improving flood monitoring capabilities. Therefore, several water-based indices, such as the Normalized Difference Water Index (NDWI; McFeeters, 1996), the Modified Normalized Difference Water Index (MNDWI; Xu et al., 2006), and the Automated Water Extraction Indices (AWEI; Feyisa et al., 2014), were derived based on the optical bands of Landsat and MODIS to delineate water extent. Consequently, SAR data from various satellites has also been widely used for flood mapping and management (Oberstadler et al., 1997; Matgen et al., 2007; Matgen et al., 2011; Martinis et al., 2013). Specifically, these studies indicate that dual-polarized SAR (VV, VH and HH polarization) data is suitable for flood mapping.

More recently, the European Space Agency's Sentinel-1 (S1) satellite mission, launched in October 2014, started providing Synthetic Aperture Radar (SAR) data at 10 m resolution with a 6-day revisit period. Subsequently, the Sentinel-2 (S2) satellite mission, which provides a multi-spectral (MS) optical sensor with a 5-10 day revisit period and 10 m - 60 m spatial resolution, was launched in June 2015. Since the spatial and temporal resolution of the Sentinel products is a significant improvement over previous satellite products, the majority of recent studies have typically used Sentinel products for flood mapping (Amitrano et al., 2019; Twele et al., 2016; Li et al., 2018; DeVries et al., 2020; Goffi et al., 2020; Jain et al., 2020). Since S1 and S2 carry different sensors, with S1 being an all-weather radar imaging mission and S2 a multispectral (MS) imaging satellite, the S1 and S2 measurements offer fundamentally different information sources. In addition, the high spatial and temporal resolution of S1 and S2 is of particular interest in the context of flood inundation mapping.

Most of the studies which investigate the combinations of S1 and S2 for flood inundation mapping have blended SAR sensor data using NDWI as the index. For instance, Manakos et al. (2020) used a pixel-based random forest classifier to identify the mean day difference between the target S1 image and available highly accurate S2-derived inundation maps. Decision-level fusion rules were used by Bioresita et al. (2019) to integrate S1 and S2 derived NDWI bands. More recent studies have introduced additional indices for flood mapping. For example, Slagter et al. (2020) used a multi-level random forest classification scheme to fuse S1 and S2 derived NDWI/MNDWI bands for flood inundation segmentation. Goffi et al. (2020) used a rule-based framework to weight different water indices, such as NDWI, MNDWI and AWEI, for water extent mapping. Even though the use of NDWI with SAR has been successful to some extent in flood mapping, fusion approaches have not systematically investigated the sensitivity of inundation mapping performance metrics to different combinations of S1 and S2 derived spectral indices or band transformations in color space. Understanding the differences in performance resulting from such choices is important because the S2 based spectral indices have their own set of limitations, such as their inability to capture shallow water surfaces, sensitivity to inundation of land cover classes, and misclassification due to shadows of clouds, mountains and buildings (Li et al., 2013; Boschetti et
al., 2014). Therefore, even though combinations of S1 and S2 imagery have been applied to map flood inundation, a systematic investigation of the optimal combination of S1 and S2 is still needed. Also, prior knowledge about an optimal combination will save time and computational resources when generating flood inundation maps in near-real-time settings.

Recently, there have been significant advancements in computer vision tasks such as classification and segmentation due to the growth of deep learning algorithms (Hinton and Salakhutdinov, 2006), and more particularly due to the introduction of Convolutional Neural Networks (CNNs; Jia et al., 2014). Unlike pixel-based learning approaches, CNNs can also leverage the spatial structure of the target segment (such as a flood inundation mask). Accordingly, CNNs have been applied successfully in various flood mapping studies (Gebrehiwot et al., 2019; Nemni et al., 2020; Peng et al., 2019; Wang et al., 2020; Li et al., 2020; Potnis et al., 2019; Mateo-Garcia et al., 2021; Rambour et al., 2020; Bonafilia et al., 2020). Given the relative success of CNNs across a range of domains, we argue that a systematic investigation of the application of CNNs for blending the diverse bands of S1 and S2 will likely help determine the optimal combination for improved flood inundation mapping. Combinations of optical and radar satellite images based on deep learning techniques have been proposed to address tasks such as land cover change detection (Liu et al., 2018), river discharge estimation (Tarpanelli et al., 2019) and land cover mapping (Ienco et al., 2019). However, the same opportunity has not yet been fully exploited in the context of flood inundation mapping tasks.

Therefore, in this study we apply a special type of CNN known as U-Net [Ronneberger et al., 2015], successful in binary segmentation tasks, to learn flood inundation mapping from S1 and S2 bands, utilizing hand-labeled flood inundation data covering 11 flood events across the globe [Bonafilia et al., 2020]. By training U-Net on different combinations of S1 and S2 on a dataset which spans multiple land cover regions, we develop a classification approach more robust than prior studies, which have trained on less extensive data and simpler classification algorithms. Overall, to the best of our knowledge, no other study has jointly evaluated 1) the sensitivity of flood inundation mapping performance to the combination of S1 and S2 bands in the context of deep learning, and 2) the robustness of the performance of the band combinations for flood inundation across a dataset with diverse land cover spanning 5 continents.

Through this study, we aim to address the following questions: 1) What is the optimal combination of S1 and S2 bands for flood inundation mapping through deep learning approaches? 2) Does the combination of S1 and S2 perform better than S1 and S2 individually? To address these questions, we structure the rest of the article as follows: the study sites and associated data are introduced in Section 2; Section 3 describes the deep learning architecture for flood inundation mapping from radar/optical data and the experimental settings; Section 4 discusses the evaluation results. Finally, Section 5 concludes the work.

Data:

For this study, we used a recently introduced georeferenced flood label dataset, Sen1Floods11 (Bonafilia et al., 2020), which provides flood inundation labels spanning 11 flood events across the world (Figure 1 and Table 1). This dataset contains human-annotated flood labels generated for 446 images at 10-meter resolution with 512 × 512 pixel dimensions. The dataset further provides the corresponding Sentinel-1 and Sentinel-2 bands for the labeled flood events. For our study, we use all 446 human-supervised images for our deep learning model evaluation. These 446 images were selected by stratified sampling from a larger pool of 4,385 images for the above-mentioned 11 flood events. As Sentinel-2 has bands at varying resolutions, all bands are resampled linearly to 10 m resolution for common comparison. Further details on the Sen1Floods11 dataset characteristics are provided in Bonafilia et al. (2020). In addition to the data from Sen1Floods11, we also use elevation data derived from the Shuttle Radar Topography Mission (SRTM) with a resolution of 30 meters. SRTM used two single-pass interferometers to produce a global digital elevation model (DEM) [Yang et al., 2011].
Figure 1: Locations of flood events sampled in Sen1Floods11 (adapted from Bonafilia et al., 2020)

Table 1: Flood events sampled in Sen1Floods11 dataset


ID Country S2 Date S1 Date Rel. orbit Orbit VH Threshold
1 BOLIVIA 2/15/2018 2/15/2018 156 Descending <-20.44
2 GHANA 9/19/2018 9/18/2018 147 Ascending <-22.81
3 INDIA 8/12/2016 8/12/2016 77 Descending <-21.56
4 VIETNAM 8/4/2018 8/5/2018 26 Ascending <-23.06
5 NIGERIA 9/20/2018 9/21/2018 103 Ascending <-21.94
6 PAKISTAN 6/28/2017 6/28/2017 5 Descending <-19.56
7 PARAGUAY 10/31/2018 10/31/2018 68 Ascending <-19.94
8 SOMALIA 5/5/2018 5/7/2018 116 Ascending <-21.06
9 SPAIN 9/18/2019 9/17/2019 110 Descending <-25.13
10 SRI LANKA 5/28/2017 5/30/2017 19 Descending <-21.69
11 USA 5/22/2019 5/22/2019 136 Ascending <-22.62
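The VH thresholds in the last column can be read as simple per-event backscatter cutoffs separating water from non-water pixels. A one-line sketch of applying one (the array name is hypothetical, and assumes VH backscatter in dB):

```python
# Hypothetical VH backscatter array in dB; pixels below the event's
# threshold (e.g. -20.44 dB for the Bolivia event) are treated as water.
water_mask = vh_db < -20.44
```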

The 30-meter SRTM DEM is void-filled using elevation data from various other sources. To match the resolution of the Sentinel bands (both S1 and S2), the SRTM DEM data was resampled linearly to 10-meter resolution. This resampled data is used as an ancillary input for our deep learning algorithm.

Methodology:

In this section, we describe the following aspects: (1) preprocessing of S2 data, (2) the experimental setup, (3) the architecture of the CNN model, and (4) evaluation of the deep learning model performance.

Preprocessing of S2 data:

As S1 SAR has only two available bands, with VV and VH polarizations, we use them directly in the machine learning setup without making any further combinations or indices.
Table 2: Sentinel-2 water spectral indices, their formulations, and corresponding references

Index                                                                  Reference
NDWI = (GREEN − NIR) / (GREEN + NIR)                                   McFeeters, 1996
MNDWI = (GREEN − SWIR1) / (GREEN + SWIR1)                              Xu, 2006
AWEInsh = 4 × (GREEN − SWIR1) − (0.25 × NIR + 2.75 × SWIR2)            Feyisa et al., 2014
AWEIsh = BLUE + 2.5 × GREEN − 1.5 × (NIR + SWIR1) − 0.25 × SWIR2       Feyisa et al., 2014

However, the 12 bands of S2 allow the possibility of specifying several combinations of bands. Boschetti et al. [2014] have indicated that spectral indices containing bands of the visible spectrum, with wavelengths between 0.5 and 0.7 µm, and of the near-infrared spectrum, with wavelengths between 0.7 and 1.1 µm, have been successful in delineating water on land. Therefore, we evaluated spectral indices based on a set of S2 bands which can detect water features within the specified wavelength ranges (Table 2). In addition to the spectral indices, the S2 Short Wave Infrared (SWIR2), Near Infrared (NIR) and red bands were transformed to the Hue, Saturation and Value (HSV) color space. Water can be effectively delineated by defining a relation between the H, S and V components (Pekel et al., 2016, 2014). The HSV transformation was initially developed for spectral bands of MODIS (Pekel et al., 2014). Subsequently, Landsat data was manually classified based on HSV values to map global water (Pekel et al., 2016). The SWIR2, NIR and red values are assigned to the red, green and blue colors (RGB), respectively, and transformed into the HSV color space using a standardized colorimetric transformation (Smith, 1978), as shown in Eqs. (1), (2), (3).

V = max(R, G, B) (1)

S = V − min(R, G, B) (2)

H = 0, if V = min(R, G, B)
H = (60° × (G − B) / (V − min(R, G, B)) + 360°) mod 360°, if V = R
H = 60° × (B − R) / (V − min(R, G, B)) + 120°, if V = G
H = 60° × (R − G) / (V − min(R, G, B)) + 240°, if V = B (3)

Hue (H) is expressed in degrees, with a range of 0°–360°, and represents the spectral composition of a color. Value (V) can be defined as the brightness of a color. Saturation (S) quantifies the distance of a color from a grey of equal brightness.
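As an illustration (not code from the original study), the Table 2 indices and the transformation of Eqs. (1)-(3) can be computed from NumPy arrays of the relevant S2 bands as sketched below; the band variable names are hypothetical reflectance rasters on the common 10 m grid.

```python
import numpy as np

def water_indices(blue, green, nir, swir1, swir2):
    # Table 2 formulations
    ndwi = (green - nir) / (green + nir)
    mndwi = (green - swir1) / (green + swir1)
    awei_nsh = 4 * (green - swir1) - (0.25 * nir + 2.75 * swir2)
    awei_sh = blue + 2.5 * green - 1.5 * (nir + swir1) - 0.25 * swir2
    return ndwi, mndwi, awei_sh, awei_nsh

def hsv_transform(swir2, nir, red):
    # SWIR2, NIR and red are assigned to R, G and B, then Eqs. (1)-(3)
    r, g, b = swir2, nir, red
    v = np.maximum(np.maximum(r, g), b)            # Eq. (1)
    mn = np.minimum(np.minimum(r, g), b)
    s = v - mn                                     # Eq. (2), as defined above
    denom = np.where(v > mn, v - mn, 1.0)          # avoid division by zero
    h = np.where(v == r, (60.0 * (g - b) / denom + 360.0) % 360.0, 0.0)
    h = np.where(v == g, 60.0 * (b - r) / denom + 120.0, h)
    h = np.where(v == b, 60.0 * (r - g) / denom + 240.0, h)
    h = np.where(v == mn, 0.0, h)                  # grey pixels: H = 0 (Eq. 3)
    return h, s, v
```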
Experimental setup:

The 446 images used in this study are derived from different events spanning 11 countries. For this study, we adopt a modified k-fold cross-validation approach to train and test our experimental setup. This modified k-fold approach is implemented as follows:

1. From the pool of eleven countries, randomly select nine countries.
2. The training dataset for each repetition in the k-fold approach is augmented fourfold by flipping the S1, S2 and ground truth images
up, down, right and left.
3. Train the deep learning algorithm on all the flood events for the selected nine countries.
4. Evaluate the performance of the trained deep learning algorithm on all the flood images from the remaining two countries.
5. Repeat the process for k = 10 folds by again randomly selecting nine countries for training and two for testing, with a different combination each time.
6. Summarize the skill of the approach by taking the median of the model evaluation scores across all the test samples for all k = 10 folds.

By using a testing subset from countries which are not used in training, we avoid the spatial autocorrelation which is likely in the context of geo-spatial segmentation [Schratz et al., 2019]. In addition, for each of the 10 folds, the deep learning model is trained and tested on a different subset, decreasing the overall bias. For all the combinations, a separate set of experiments with the DEM as an additional input to the combination bands of S1 and S2 is also run. By using the DEM data as an additional input, we can evaluate whether the S1 and S2 combinations through deep learning algorithms can distinguish the drainage areas without any elevation information. A sketch of this split and augmentation procedure is given below.
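A minimal sketch of steps 1-6, under the assumption that helper functions `load_event_images`, `train_unet` and `evaluate` exist (all hypothetical); the fourfold flip augmentation follows one reading of "up, down, right and left":

```python
import random
import numpy as np

COUNTRIES = ["Bolivia", "Ghana", "India", "Vietnam", "Nigeria", "Pakistan",
             "Paraguay", "Somalia", "Spain", "Sri Lanka", "USA"]

def flip_augment(x, y):
    # fourfold augmentation: original plus vertical, horizontal and both flips
    return [(x, y),
            (np.flipud(x), np.flipud(y)),
            (np.fliplr(x), np.fliplr(y)),
            (np.fliplr(np.flipud(x)), np.fliplr(np.flipud(y)))]

fold_scores = []
for fold in range(10):                        # k = 10 repetitions
    train_c = random.sample(COUNTRIES, 9)     # step 1
    test_c = [c for c in COUNTRIES if c not in train_c]
    train = [pair for c in train_c
             for x, y in load_event_images(c)     # hypothetical loader
             for pair in flip_augment(x, y)]      # step 2
    model = train_unet(train)                     # step 3 (hypothetical)
    fold_scores += [evaluate(model, x, y)         # step 4
                    for c in test_c for x, y in load_event_images(c)]
median_score = float(np.median(fold_scores))      # step 6
```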

Figure 2: U-net architecture adopted for segmentation of water extents

Firstly, in the case of Sentinel-1, we use both VV/VH bands to establish the benchmark performance of S1 for flood inundation mapping. In the case of S2, the combined spectral indices of NDWI and MNDWI (referred to as cNDWI), of AWEIsh and AWEInsh (cAWEI), and their combination are provided as input to our deep learning algorithm (Table 2). Also, the HSV bands and their combination with the spectral indices are used in our experiment. We also want to investigate whether the raw bands used in cNDWI, cAWEI and HSV can produce similar performance compared to their spectral index counterparts. Therefore, we also evaluate our deep learning algorithm using the raw bands, named rNDWI, rAWEI and rHSV, and their combinations. Finally, all the S2 band configurations are combined with S1 to evaluate the effectiveness of S1 and S2 combinations. Overall, a total of 32 combinations are formulated in our experimental setup (Table 3).
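Each experiment then amounts to stacking the selected channels into one multi-channel input array. A sketch for, e.g., S1+cAWEI+DEM (the arrays are hypothetical 10 m rasters):

```python
import numpy as np

# VV and VH from S1, the cAWEI pair from S2, and the resampled DEM
x = np.stack([vv, vh, awei_sh, awei_nsh, dem], axis=-1)  # shape (512, 512, 5)
```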
Table 3: Input experiment design and corresponding descriptions

Experiment ID   Input name                    Input description
1    S1                          VV and VH bands of S1
2    S1+DEM                      VV and VH bands of S1 + 10 m resampled SRTM 30 m DEM
3    cAWEI                       AWEIsh and AWEInsh indices of S2
4    cAWEI+DEM                   AWEIsh and AWEInsh indices of S2 + 10 m resampled SRTM 30 m DEM
5    cNDWI                       NDWI and MNDWI indices of S2
6    cNDWI+DEM                   NDWI and MNDWI indices of S2 + 10 m resampled SRTM 30 m DEM
7    HSV                         HSV transformation of S2 RGB bands
8    HSV+DEM                     HSV transformation of S2 RGB bands + 10 m resampled SRTM 30 m DEM
9    rNDWI                       Bands used for computing NDWI and MNDWI from S2
10   rNDWI+DEM                   Bands used for computing NDWI and MNDWI from S2 + 10 m resampled SRTM 30 m DEM
11   rAWEI                       Bands used for computing AWEIsh and AWEInsh from S2
12   rAWEI+DEM                   Bands used for computing AWEIsh and AWEInsh from S2 + 10 m resampled SRTM 30 m DEM
13   rHSV                        Bands used for computing HSV from S2
14   rHSV+DEM                    Bands used for computing HSV from S2 + 10 m resampled SRTM 30 m DEM
15   cAWEI+cNDWI                 NDWI, MNDWI, AWEIsh and AWEInsh indices of S2
16   cAWEI+cNDWI+DEM             NDWI, MNDWI, AWEIsh and AWEInsh indices of S2 + 10 m resampled SRTM 30 m DEM
17   HSV+cAWEI+cNDWI             HSV transformation of RGB bands, NDWI, MNDWI, AWEIsh and AWEInsh of S2
18   HSV+cAWEI+cNDWI+DEM         HSV transformation of RGB bands, NDWI, MNDWI, AWEIsh and AWEInsh of S2 + 10 m resampled SRTM 30 m DEM
19   rAWEI+rNDWI                 Bands used for computing NDWI, MNDWI, AWEIsh and AWEInsh indices of S2
20   rAWEI+rNDWI+DEM             Bands used for computing NDWI, MNDWI, AWEIsh and AWEInsh indices of S2 + 10 m resampled SRTM 30 m DEM
21   rHSV+rAWEI+rNDWI            Bands used for computing HSV transformation, NDWI, MNDWI, AWEIsh and AWEInsh of S2
22   rHSV+rAWEI+rNDWI+DEM        Bands used for computing HSV transformation, NDWI, MNDWI, AWEIsh and AWEInsh of S2 + 10 m resampled SRTM 30 m DEM
23   S1+cAWEI                    VV, VH (S1), AWEIsh and AWEInsh (S2)
24   S1+cAWEI+DEM                VV, VH (S1), AWEIsh and AWEInsh (S2) + 10 m resampled SRTM 30 m DEM
25   S1+cNDWI                    VV, VH (S1), NDWI and MNDWI (S2)
26   S1+cNDWI+DEM                VV, VH (S1), NDWI and MNDWI (S2) + 10 m resampled SRTM 30 m DEM
27   S1+cAWEI+cNDWI              VV, VH (S1), NDWI, MNDWI, AWEIsh and AWEInsh (S2)
28   S1+cAWEI+cNDWI+DEM          VV, VH (S1), NDWI, MNDWI, AWEIsh and AWEInsh (S2) + 10 m resampled SRTM 30 m DEM
29   S1+HSV                      VV, VH (S1) and HSV (S2)
30   S1+HSV+DEM                  VV, VH (S1) and HSV (S2) + 10 m resampled SRTM 30 m DEM
31   S1+cAWEI+cNDWI+HSV          VV, VH (S1), HSV, NDWI, MNDWI, AWEIsh and AWEInsh (S2)
32   S1+cAWEI+cNDWI+HSV+DEM      VV, VH (S1), HSV, NDWI, MNDWI, AWEIsh and AWEInsh (S2) + 10 m resampled SRTM 30 m DEM

U-Net Architecture:

The CNN architecture applied for flood segmentation is U-Net, as illustrated in Figure 2. U-Net was originally designed for the task of segmenting biomedical imagery [Ronneberger et al., 2015], targeted at applications which have a relatively small number of training images and which require segmentation at the same resolution as the input image. The U-Net architecture consists of two parts (i.e., encoder and decoder) (Figure 2). Important features are extracted through the encoder part of the U-Net model through a downsampling process. The decoder uses these features as an input and builds back the spatial information of the input through an upsampling process. This upsampling and downsampling process of the U-Net architecture can systematically combine low-level and high-level features while performing element-wise segmentation from multiple features. U-Net also incorporates skip connections to capture precise locations at every step of the decoder, as shown in Figure 2. These skip connections concatenate the output of the decoder layers with the feature maps from the encoder at the same level, making the segmentation of pixels more precise. More details on the U-Net architecture and its recent applications can be found in Du et al. (2020). A minimal sketch of this encoder-decoder pattern is given below.
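The following compact Keras sketch illustrates the encoder-decoder pattern with skip connections; the depth and filter counts are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative U-Net-style model: encoder (downsampling), decoder
# (upsampling) and skip connections that concatenate encoder features.
from tensorflow.keras import layers, Model

def build_unet(input_shape, base_filters=64):
    inp = layers.Input(shape=input_shape)
    # encoder: extract features while downsampling
    c1 = layers.Conv2D(base_filters, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = layers.Conv2D(base_filters * 2, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D(2)(c2)
    # bottleneck
    b = layers.Conv2D(base_filters * 4, 3, padding="same", activation="relu")(p2)
    # decoder: upsample and concatenate encoder features (skip connections)
    u2 = layers.Conv2DTranspose(base_filters * 2, 2, strides=2, padding="same")(b)
    c3 = layers.Conv2D(base_filters * 2, 3, padding="same",
                       activation="relu")(layers.concatenate([u2, c2]))
    u1 = layers.Conv2DTranspose(base_filters, 2, strides=2, padding="same")(c3)
    c4 = layers.Conv2D(base_filters, 3, padding="same",
                       activation="relu")(layers.concatenate([u1, c1]))
    out = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # water / non-water
    return Model(inp, out)
```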
We use Keras coupled with TensorFlow to train and test all our models. As our goal is to evaluate the S1 and S2 combinations rather than to train the best possible models, we do not perform an exhaustive hyperparameter search. We use the Adam optimizer with a base learning rate of 5e-4 and a weight decay coefficient of 1e-2 [Kingma and Ba, 2014]. Each of the 32 combinations in Table 3 is trained separately using a single NVIDIA V100 GPU. All models are trained for 500 epochs. Finally, the model obtained at the end of the final epoch is selected to represent model performance.
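The training setup described above could look like the following sketch (assuming a recent TensorFlow that provides AdamW for decoupled weight decay; the loss function is our assumption, since the text does not state it):

```python
import tensorflow as tf

# build_unet is the sketch above; the channel count varies per Table 3 combination
model = build_unet(input_shape=(512, 512, 3))

model.compile(
    optimizer=tf.keras.optimizers.AdamW(learning_rate=5e-4, weight_decay=1e-2),
    loss="binary_crossentropy",   # assumed; not specified in the text
    metrics=["accuracy"],
)
model.fit(train_dataset, epochs=500)  # train_dataset: augmented fold data
```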
Evaluation criteria:

In a binary segmentation study such as flood inundation mapping, two outcomes are possible, corresponding to water and non-water regions. The output can be classified as: (1) True Positive (TP), where water pixels are correctly classified as water; (2) True Negative (TN), where non-water pixels are correctly classified as non-water; (3) False Positive (FP), where non-water pixels are incorrectly classified as water; and (4) False Negative (FN), where water pixels are incorrectly classified as non-water.

Based on these outputs, pixel accuracy, which determines the percentage of pixels correctly classified, can be computed. However, as accuracy computes this percentage irrespective of class, it can be misleading when the class of interest (i.e., water) has a relatively low number of pixels. To avoid this, Precision, Recall, and F1 scores are commonly used. Precision and Recall are interdependent measures of over- and under-segmentation: low values of Precision and Recall indicate over-segmentation and under-segmentation, respectively. The F1 score is the harmonic mean of the Precision and Recall scores, capturing both aspects in a single metric.

Precision illustrates how many of the predicted water pixels matched the water pixels in the annotated labels. It can be calculated as

Precision = TP / (TP + FP) (4)

Recall denotes how many of the labeled water pixels were predicted as water by our deep learning model. It can be defined as

Recall = TP / (TP + FN) (5)

For an image to be classified accurately, both Precision and Recall should be high. For this purpose, the F1 score is often used as a tradeoff metric to quantify both over- and under-segmentation in one measure:

F1 Score = 2 × Precision × Recall / (Precision + Recall) (6)

We evaluate the performance of the 32 input combinations as follows:

1) For each of the 32 input combinations, we generate flood inundation maps for the test datasets belonging to all k = 10 folds.
2) Compute the F1 score, Precision and Recall metrics separately for each individual image in the test dataset across all 10 folds.
3) For all 10 folds, the median of all three metrics is computed and reported.
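A sketch of the per-image metric computation in Eqs. (4)-(6); `pred` and `truth` are binary water masks (1 = water), and the sketch assumes each image contains at least one predicted and one labeled water pixel:

```python
import numpy as np

def segmentation_scores(pred, truth):
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    precision = tp / (tp + fp)                          # Eq. (4)
    recall = tp / (tp + fn)                             # Eq. (5)
    f1 = 2 * precision * recall / (precision + recall)  # Eq. (6)
    return precision, recall, f1
```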
Results:

Performance metrics of input combinations:

As highlighted before, the ability of the S1 and S2 data inputs to map flood inundation with and without the DEM is measured by three metrics: F1 score, Precision and Recall (Table 4). The F1 score, Precision and Recall of each individual image in the test dataset across all 10 folds were first computed. The performance of each input combination is then determined as the median of the individual metrics across all the folds. Further, the non-parametric Kruskal–Wallis test is performed to determine whether there is a statistically significant difference between the medians of two band combinations. In our case, two band combinations are deemed to be significantly different if the p-value is less than 0.05. Figure 3 presents the results of the Kruskal–Wallis test in the form of a matrix. Statistically significant differences between the performance metrics on the rows and columns are indicated by the symbol ● in Figure 3. Green (red) color indicates that the performance of the input combination on the row is higher (lower) than that of the input combination on the column.
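The test itself is readily available in SciPy; a sketch, where `scores_a` and `scores_b` are hypothetical arrays of per-image F1 scores for two input combinations:

```python
from scipy.stats import kruskal

stat, p_value = kruskal(scores_a, scores_b)
significant = p_value < 0.05  # significance threshold used in this study
```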

Table 4: Modified K-fold based median performance metrics for S1 and S2 inputs

F1     Precision   Recall   Input data    Type
0.62   0.59        0.88     S1            Original bands
0.73   0.68        0.86     S1+DEM        Original bands
0.89   0.85        0.93     cAWEI         Feature engineered bands
0.88   0.86        0.93     cAWEI+DEM     Feature engineered bands
0.89   0.86        0.93     cNDWI         Feature engineered bands
0.88   0.86        0.93     cNDWI+DEM     Feature engineered bands
0.90   0.89        0.94     HSV           Feature engineered bands
0.90   0.89        0.93     HSV+DEM       Feature engineered bands
0.87   0.86        0.92     rNDWI         Original bands
0.88   0.87        0.91     rNDWI+DEM     Original bands
0.88   0.87        0.92     rAWEI         Original bands
0.88   0.85        0.93     rAWEI+DEM     Original bands
0.86   0.86        0.89     rHSV          Original bands
0.87   0.85        0.91     rHSV+DEM      Original bands

The performance metrics in Table 4 indicate that S1 imagery alone has the lowest F1 score, 0.62, among the individual S1 and S2 inputs to U-Net. The low F1 score of S1 is a case of under-segmentation, as expressed by a low Precision score of 0.59. However, S1's performance improved with the added use of the DEM, from a median F1 score of 0.62 to 0.73. This increase in performance of S1+DEM is due to an improvement in under-segmentation, as represented by a relatively better Precision score of 0.68.

Figure 3: Outcome of Kruskal-Wallis test between the input combinations in rows and columns. The statistically
significant differences between the performance metrics on the rows and columns are indicated by the symbol ●.
Green (red) color indicates the performance of the input combination on the row is higher (lower) than that of the
input combination on the column.
Our results indicating an improved performance with the added use of the DEM are not surprising, as elevation data distinguishes the potential flood plains from other regions and has been used previously in numerous flood inundation studies (Saksena & Merwade, 2015; Zheng et al., 2018; Musa et al., 2015). Note that the use of S1 SAR data without the DEM provides only moderate skill in mapping flood inundation. For instance, Figure 4 shows an instance of flooding over a location in southeastern Paraguay in S1 images acquired on October 31, 2018 (Table 1 and Figure 1). The U-Net trained with S1 SAR data without the DEM was unable to detect the flood extent in Paraguay despite the presence of a clear contrast in the VV and VH backscatter (Figure 4 (B), (C), (D), (F)). However, using the DEM (Figure 4 (A)) as an ancillary input dramatically improves the U-Net's ability to detect water extent, indicating no deficiency in the U-Net architecture (Figure 4 (E)). Further, speckle noise in S1 SAR images and its imperfect filtering can also lead to false positives in classified flood maps (Schmitt, 2020; Gulácsi & Kovács, 2020).

Figure 4: Flood inundation mapping inputs of (A) elevation, (B) VV and (C) VH backscatter for a location in Paraguay. The lower panel shows the flood inundation extent from (D) the ground truth and as produced by U-Net when using (E) S1+DEM as input and (F) S1 as input.
In the case of all S2 spectral indices, with and without the DEM, high F1 scores can be observed, with median F1 scores ranging between 0.88 and 0.9. These higher F1 scores are due to both high Precision and Recall values, indicating a balance between over- and under-segmentation in the case of the S2 spectral indices. There does not appear to be any significant advantage in combining the DEM with any of the spectral indices, unlike the case of S1 imagery. However, in comparison to S1, the performance of all the S2 based spectral indices shows a statistically significant increase in terms of F1, Precision as well as Recall scores (Figure 3). We must note, though, that the Recall scores increased by a lesser extent, indicating an improvement in over-segmentation in only a few cases. Also, in comparison to the water indices (cNDWI and cAWEI), the performance of flood inundation mapping using the HSV transformation of S2 bands is statistically significantly better, with a median F1 score of 0.9.

Interestingly, the U-Net based flood mapping performance using the raw bands of the S2 water indices (rNDWI and rAWEI) is similar to that of cNDWI and cAWEI (Figure 3 and Table 4). This indicates that the U-Net algorithm was able to learn features that are at least as meaningful as the spectral indices for flood segmentation. However, in the case of HSV, the raw bands' performance (rHSV) is comparatively lower, though close to the performance of the spectral indices. Unlike the case of the spectral indices, the U-Net algorithm was unable to capture the relation between the HSV transformation and its corresponding raw bands, leading to a relatively lower F1 score than its feature engineered counterpart. This differential performance of the deep learning model might be related to the degree of computational complexity involved in the considered spectral indices and the HSV transformation. Table 2 and Equations 1-3 indicate that the spectral index computations are relatively simpler than HSV. Therefore, based on our training data size and hyper-parameters, U-Net may have captured the representation of the spectral indices but not that of the HSV transformation.

Figure 5: Flood inundation mapping inputs of (A) elevation, (B) false color composite of S1 data and (C) false color composite of S2 for a location in India. The lower panel shows the flood inundation extent from (D) the ground truth and as produced by U-Net when using (E) S1+DEM as input and (F) HSV+DEM as input.
Distinct visual contrasts between water and non-water pixels in the S2 based indices, compared with S1, may be a plausible reason for the better performance of the S2 based indices. For instance, Figure 5 shows an instance of flooding in the northeastern part of India from S1 and S2 images acquired on August 12, 2016 (Figure 1 and Table 1). The false color composite (FCC) of the S1 bands (Figure 5 (B)) can clearly distinguish the meandering river in the southwest region and the wetland in the northeast region, but is unable to capture the flooded agricultural lands in the northern regions. As a result, the U-Net was able to produce flood extent only across those regions already identified in S1's FCC image (Figure 5 (E)). However, the FCC of the S2 bands (Figure 5 (C)) distinguishes the flooded agricultural land, the wetlands, as well as the meandering rivers, and the corresponding prediction (Figure 5 (F)) is nearly identical to the hand-labeled ground truth (Figure 5 (D)). Hence, the resulting prediction from the HSV based transformation of S2 data captured the flood area extents across all the land types. This superior performance of HSV may be due to the better spectral correlation of the band pixels with water characteristics (Huang et al., 2018).

Table 5: Modified K-fold based median performance metrics for combinations within S2 inputs

F1     Precision   Recall   Input data                Type
0.86   0.85        0.91     cAWEI+cNDWI               Feature engineered bands
0.88   0.86        0.91     cAWEI+cNDWI+DEM           Feature engineered bands
0.90   0.89        0.92     HSV+cAWEI+cNDWI           Feature engineered bands
0.90   0.88        0.93     HSV+cAWEI+cNDWI+DEM       Feature engineered bands
0.87   0.85        0.91     rAWEI+rNDWI               Original bands
0.87   0.86        0.90     rAWEI+rNDWI+DEM           Original bands
0.88   0.85        0.91     rHSV+rAWEI+rNDWI          Original bands
0.86   0.84        0.92     rHSV+rAWEI+rNDWI+DEM      Original bands

Table 6: Modified K-fold based median performance metrics for combinations of S1 and S2 inputs
F1 Precision Recall Input data
0.88 0.86 0.90 S1+cAWEI
0.87 0.82 0.93 S1+cAWEI+DEM
0.88 0.88 0.91 S1+cNDWI
0.89 0.86 0.92 S1+cNDWI+DEM
0.88 0.85 0.94 S1+cAWEI+cNDWI
0.89 0.85 0.94 S1+cAWEI+cNDWI+DEM
0.90 0.90 0.92 S1+HSV
0.90 0.90 0.93 S1+HSV+DEM
0.90 0.90 0.92 S1+cAWEI+cNDWI+HSV
0.90 0.90 0.93 S1+cAWEI+cNDWI+HSV+DEM
Combinations within S2 imagery:

The previous performance results of the individual spectral indices indicate that the HSV transformation has superior performance in mapping floods across the globe. However, across these three index sets there are some non-overlapping S2 spectral bands, and their combinations might result in a further improvement in performance. Therefore, it makes sense to investigate how combinations within S2 imagery, such as combining cAWEI, cNDWI and the HSV transformation, perform compared to the individual indices (Table 5). The combinations within the S2 bands indicate that there does not appear to be any significant advantage to the combination approach. Even though there is a slight decrease in the median F1 score for the combined spectral indices (cAWEI+cNDWI) when compared to the individual indices, it is still statistically insignificant (Figure 3). By combining all three S2 index approaches, we obtain a performance similar to that of HSV alone. Therefore, our results indicate that overall, even though there are non-overlapping bands in our experimental setup, combining the spectral indices and HSV has no added advantage in terms of performance.

Figure 6: Flood inundation mapping inputs of (A) elevation, (B) VV and (C) VH backscatter intensities of S1 data and (D) false color composite of S2 for a location in Ghana. The lower panel shows the flood inundation extent from (E) the ground truth and the U-Net produced flood masks when using (F) S1+DEM as input, (G) HSV+DEM as input and (H) the fusion of HSV and S1+DEM as input.
Combinations of S1-SAR and S2-spectral imagery:

We combined S1 SAR and S2 based spectral indices to test whether there is a significant advantage in performance due to their combination, and to examine whether there is an optimal combination of S1 and S2. Therefore, all the combinations of S1 and S2 spectral indices were tested (Table 6). Combining S1 and S2 indices indicated that their performance did not change significantly when compared to the performance of the individual S2 indices. For instance, the performance of flood mapping when using cAWEI, cNDWI and HSV is not significantly different from the performance of S1+cAWEI, S1+cNDWI and S1+HSV (Figure 3). All the best performing input combinations have HSV as a part of their combination. However, the performance of these input combinations is not significantly different from each other and is similar to the performance of HSV alone. Even though the combination of S1 and S2 data provides relatively no performance improvement in comparison to the individual S2 indices in detecting flood water extents, it is important to clarify that U-Net trained with cloud-obscured S2 images still cannot efficiently detect flood extents. An example instance is shown in Figure 6 for a flooding case over central Ghana using S1 and S2 images acquired on September 18-19, 2018 (Table 1 and Figure 1). The DEM shows a narrow strip of low elevation indicating a potentially narrow river (Figure 6 (A)). The VV (Figure 6 (B)) and VH (Figure 6 (C)) backscatters show a complete picture of the flood plain surrounding the river and other flood-inundated regions across the plains. The FCC of S2 indicates total cloud coverage over the flooded region in Ghana (Figure 6 (D)). Consequently, the S1 predicted flooded regions (Figure 6 (F)) capture a flooded extent similar to the ground truth (Figure 6 (E)), with some flood plain areas being overestimated. Despite the extensive cloud coverage, the HSV transformation of the S2 data could capture the flood plains in the southwest region and some other patches across the map (Figure 6 (G)), but it could not capture the river segment due to the dense cloud coverage. However, the fusion of both the S1 backscatter and S2's HSV transformation resulted in a more accurate representation of flood extent without any overestimation across the flood plains (Figure 6 (H)). Since the Sen1Floods11 dataset is curated to exclude the majority of satellite imagery with clouds, we could not find a significant advantage in our experiments. However, in the presence of clouds, the fusion of S1 and S2 will have an added advantage.

Discussion:

Our results indicate that even though SAR data is not affected by cloud cover, poor contrast between the VV and VH backscatter affects S1 data's flood inundation mapping performance. The F1 score for S1 based flood water mapping varied between 0.65 and 0.91 in previous studies, depending on the location and approach [Bioresita et al., 2019; Liang and Liu, 2020]. Therefore, the F1 score of 0.62 obtained in our case compares reasonably well to these studies. Our current configuration performs semantic segmentation with a single trained model for the entire dataset. Previous studies have suggested that this may lead to poor accuracy due to unclear backscatter conditions in S1 over inundated areas with different land cover features [Manakos et al., 2020]. Therefore, a land-cover-specific ensemble of deep learning models or pixel-centric approaches could be used to improve the performance [Huang et al., 2018; Pham-Duc et al., 2017]. Additionally, the 10-meter resampled SRTM 30 m DEM improves S1 data's flood mapping abilities. This can be attributed to the identification of low-lying flood plains (Manfreda et al., 2014; Samela et al., 2016) in the elevation data. Also, in urban areas, elevation data can help distinguish areas of urban ground surface which may not be visible due to radar shadowing and building layover, preventing misclassification of the flooded water extent [Soergel et al., 2003]. Further, DEMs can help distinguish roads and tarmac areas, which have a low backscatter similar to water and can create misclassification [Mason et al., 2014]. But given the dependence of flood inundation mapping on the elevation data's spatial resolution (Haile and Rientjes, 2005; Fereshtehpour and Karamouz, 2018), further studies should consider the sensitivity of a flood inundation map's performance to the DEM spatial resolution as well as the data source.
S2 based spectral indices are generally better than S1 based SAR data for mapping flood extents, due to the stronger correlations between the spectral features of the S2 bands and water surfaces in cloud-free S2 images (Boschetti et al., 2014; Klein et al., 2017; Bonafilia et al., 2020). Also, our result that S2 has better performance than S1 is similar to the conclusion obtained by Bonafilia et al. (2020). In particular, the HSV transformation of the RGB bands in S2 satellite data has a statistically significant superior performance in comparison to conventional water spectral indices, probably due to the improved contrast between water and non-water surfaces in the HSV components (Pekel et al., 2014, 2016). However, the difference in performance between HSV and the spectral indices is relatively small. The performance of the S1 and S2 combinations for flood water mapping was not significantly different from the individual S2 indices' performance. This is not surprising in our case, as the Sen1Floods11 dataset has a relatively small number of satellite images with clouds. But in satellite imagery with clouds, the fusion of S1 and S2 imagery performed significantly better than the individual S2 indices, as demonstrated in Figure 6. In addition, previous studies have also indicated that, in the fusion approach, the S2 bands' inability to penetrate clouds is complemented by SAR's ability to map water extent during cloud cover.

Interestingly, the U-Net algorithm's performance using cNDWI/cAWEI in segmenting water-inundated areas is similar to that of using the raw bands used for computing NDWI/AWEI, indicating the ability of U-Net to learn features with similar characteristics as the spectral indices as part of its encoder operations. However, the U-Net algorithm's performance using S2 bands transformed to HSV is superior compared to that of using the raw S2 bands. This may be due to the relatively short data record for training the U-Net algorithm, or the inability of U-Net to capture the relatively complex formulation of HSV. Therefore, relatively complex feature engineering such as the HSV transformation may be performed before training a deep learning algorithm for flood inundation mapping.

The U-Net architecture (Ronneberger et al., 2015) has previously proven to be efficient in the segmentation of binary classes. Even though our focus was to explore the diverse bands of S2 and S1 for flood mapping, our adopted configuration of U-Net performed well in delineating the flooded regions. In this work, we modified the U-Net configuration to accept S1 and S2 bands instead of the traditional RGB bands, to delineate floods using multi-spectral and SAR imagery. However, in the case of S1, our modified U-Net's inability to find a robust threshold sensitive enough to identify the contrast between water and non-water pixels based on the SAR VV and VH backscatter resulted in decreased performance in some flooded regions. This may be addressed through modification of the convolution configurations, activation functions and loss functions in U-Nets, paving the way for future research. In addition, benchmarking different deep learning architectures for flood inundation mapping using the Sen1Floods11 dataset can be investigated.

Conclusions

In this paper we explored the diverse bands of the S2 and S1 satellites, along with their combinations, for flood inundation mapping through a deep CNN model known as U-Net, trained, validated and tested against manually annotated pixel-level flood inundation images. Our results indicate that using a DEM as ancillary data can improve the performance of U-Net when using S1 imagery as input. However, the U-Net algorithm showed better performance when using S2 bands compared to S1 bands, likely due to the better spectral correlation between optical sensor output and water features. In addition, there is minimal influence of the ancillary DEM on the median performance of the S2 bands. Among the S2 inputs, U-Net with the HSV transformation of the RGB bands outperforms the established spectral indices such as AWEIsh, AWEInsh, NDWI and MNDWI, owing to its superior visual contrast segmentation. The U-Net algorithm was able to learn the relationship between the raw S2 bands and cNDWI and cAWEI, but not that of HSV, owing to the relatively complex computation involved in the latter. Therefore, based on our training data size and hyper-parameters, U-Net may have captured the representation of the spectral indices but not that of the HSV transformation.
These results also show that automatic flood detection is possible when an appropriate water index technique is used. The extension of our approach to benchmarking the performance of different deep learning architectures for flood water segmentation is left for future work.

Acknowledgements

This research was supported by the NASA Earth Science Technology Office sponsored New Observing System (NOS) project. Computing was supported by resources at the NASA Center for Climate Simulation. The authors also acknowledge Cloud to Street for developing the open-source flood label data, available for access through the Google Cloud Storage bucket at: gs://senfloods11/

Declaration of Competing Interest

None

References
Amitrano, D., Di Martino, G., Iodice, A., Riccio, D. and Ruello, G., 2018. Unsupervised rapid flood mapping using Sentinel-1 GRD SAR images. IEEE Transactions on Geoscience and Remote Sensing, 56(6), pp. 3290-3299, doi: 10.1109/TGRS.2018.2797536
Auynirundronkool, K., Chen, N., Peng, C., Yang, C., Gong, J., & Silapathong, C. (2012). Flood detection and mapping of the Thailand Central plain using RADARSAT and MODIS under a sensor web environment. International Journal of Applied Earth Observation and Geoinformation, 14(1), 245-255, doi: 10.1016/j.jag.2011.09.017
Barton, I.J. and Bathols, J.M., 1989. Monitoring floods with AVHRR. Remote Sensing of Environment, 30(1), pp. 89-94, doi: 10.1016/0034-4257(89)90050-3
Betbeder, J., Rapinel, S., Corpetti, T., Pottier, E., Corgne, S., & Hubert-Moy, L. (2014). Multitemporal classification of TerraSAR-X data for wetland vegetation mapping. Journal of Applied Remote Sensing, 8(1), 083648, doi: 10.1117/1.JRS.8.083648
Bioresita, F., Puissant, A., Stumpf, A., & Malet, J. P. (2019). Fusion of Sentinel-1 and Sentinel-2 image time series for permanent and temporary surface water mapping. International Journal of Remote Sensing, 40(23), 9026-9049, doi: 10.1080/01431161.2019.1624869
Bonafilia, D., Tellman, B., Anderson, T., & Issenberg, E. (2020). Sen1Floods11: a georeferenced dataset to train and test deep learning flood algorithms for Sentinel-1. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 210-211), doi: 10.1109/CVPRW50498.2020.00113
Boschetti, M., Nutini, F., Manfron, G., Brivio, P. A., & Nelson, A. (2014). Comparative analysis of normalised difference spectral indices derived from MODIS for detecting surface water in flooded rice cropping systems. PLoS ONE, 9(2), e88741, doi: 10.1371/journal.pone.0088741
Clement, M. A., Kilsby, C. G., & Moore, P. (2018). Multi-temporal synthetic aperture radar flood mapping using change detection. Journal of Flood Risk Management, 11(2), 152-168, doi: 10.1111/jfr3.12303
Colson, D., Petropoulos, G. P., & Ferentinos, K. P. (2018). Exploring the potential of Sentinels-1 & 2 of the Copernicus Mission in support of rapid and cost-effective wildfire assessment. International Journal of Applied Earth Observation and Geoinformation, 73, 262-276, doi: 10.1016/j.jag.2018.06.011
DeVries, B., Huang, C., Armston, J., Huang, W., Jones, J. W., & Lang, M. W. (2020). Rapid and robust monitoring of flood events using Sentinel-1 and Landsat data on the Google Earth Engine. Remote Sensing of Environment, 240, 111664, doi: 10.1016/j.rse.2020.111664
Du, G., Cao, X., Liang, J., Chen, X., & Zhan, Y. (2020). Medical image segmentation based on U-Net: A review. Journal of Imaging Science and Technology, 64(2), 20508-1, doi: 10.2352/J.ImagingSci.Technol.2020.64.2.020508
Dusseux, P., Corpetti, T., Hubert-Moy, L., & Corgne, S. (2014). Combined use of multi-temporal optical and radar satellite images for grassland monitoring. Remote Sensing, 6(7), 6163-6182, doi: 10.3390/rs6076163
Fereshtehpour, M., & Karamouz, M. (2018). DEM resolution effects on coastal flood vulnerability assessment: Deterministic and probabilistic approach. Water Resources Research, 54(7), 4965-4982, doi: 10.1029/2017WR022318
Feyisa, G. L., Meilby, H., Fensholt, R., & Proud, S. R. (2014). Automated Water Extraction Index: A new technique for surface water mapping using Landsat imagery. Remote Sensing of Environment, 140, 23-35, doi: 10.1016/j.rse.2013.08.029
Gao, Q., Zribi, M., Escorihuela, M. J., & Baghdadi, N. (2017). Synergetic use of Sentinel-1 and Sentinel-2 data for soil moisture mapping at 100 m resolution. Sensors, 17(9), 1966, doi: 10.3390/s17091966
Gebrehiwot, A., Hashemi-Beni, L., Thompson, G., Kordjamshidi, P., & Langan, T. E. (2019). Deep convolutional neural network for flood extent mapping using unmanned aerial vehicles data. Sensors, 19(7), 1486.
Gevaert, C. M., Suomalainen, J., Tang, J., & Kooistra, L. (2015). Generation of spectral–temporal response surfaces by combining multispectral satellite and hyperspectral UAV imagery for precision agriculture applications. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 8(6), 3140-3146, doi: 10.1109/JSTARS.2015.2406339
Goffi, A., Stroppiana, D., Brivio, P. A., Bordogna, G., & Boschetti, M. (2020). Towards an automated approach to map flooded areas from Sentinel-2 MSI data and soft integration of water spectral features. International Journal of Applied Earth Observation and Geoinformation, 84, 101951, doi: 10.1016/j.jag.2019.101951
Gulácsi, A., & Kovács, F. (2020). Sentinel-1-imagery-based high-resolution water cover detection on wetlands, aided by Google Earth Engine. Remote Sensing, 12(10), 1614, doi: 10.3390/rs12101614
Haile, A. T., & Rientjes, T. H. M. (2005). Effects of LiDAR DEM resolution in flood modelling: a model sensitivity study for the city of Tegucigalpa, Honduras. ISPRS WG III/3, III/4, 3, 12-14.
He, W., & Yokoya, N. (2018). Multi-temporal Sentinel-1 and -2 data fusion for optical image simulation. ISPRS International Journal of Geo-Information, 7(10), 389, doi: 10.3390/ijgi7100389
Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504-507, doi: 10.1126/science.1127647
Hornik, K., Stinchcombe, M., & White, H. (1990). Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Networks, 3(5), 551-560.
Huang, C., Chen, Y., Zhang, S., & Wu, J. (2018). Detecting, extracting, and monitoring surface water from space using optical sensors: A review. Reviews of Geophysics, 56(2), 333-360, doi: 10.1029/2018RG000598
Huang, G. B., Chen, L., & Siew, C. K. (2006). Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Transactions on Neural Networks, 17(4), 879-892.
Huang, W., DeVries, B., Huang, C., Lang, M.W., Jones, J.W., Creed, I.F., & Carroll, M.L. (2018). Automated extraction of surface water extent from Sentinel-1 data. Remote Sensing, 10(5), 797, doi: 10.3390/rs10050797
Iannelli, G. C., & Gamba, P. (2018, July). Jointly exploiting Sentinel-1 and Sentinel-2 for urban mapping. In IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium (pp. 8209-8212), doi: 10.1109/IGARSS.2018.8518172
Ienco, D., Interdonato, R., Gaetano, R., & Minh, D. H. T. (2019). Combining Sentinel-1 and Sentinel-2 Satellite Image Time Series for land cover mapping via a multi-source deep learning architecture. ISPRS Journal of Photogrammetry and Remote Sensing, 158, 11-22, doi: 10.1016/j.isprsjprs.2019.09.016
Irwin, K., Beaulne, D., Braun, A., & Fotopoulos, G. (2017). Fusion of SAR, optical imagery and airborne LiDAR for surface water detection. Remote Sensing, 9(9), 890, doi: 10.3390/rs9090890
Jain, P., Schoen-Phelan, B., & Ross, R. (2020, March). Automatic flood detection in Sentinel-2 images using deep convolutional neural networks. In Proceedings of the 35th Annual ACM Symposium on Applied Computing (pp. 617-623), doi: 10.1145/3341105.3374023
Jamali, A., Mahdianpari, M., Brisco, B., Granger, J., Mohammadimanesh, F., & Salehi, B. (2021). Comparing Solo Versus Ensemble Convolutional Neural Networks for Wetland Classification Using Multi-Spectral Satellite Imagery. Remote Sensing, 13(11), 2046.
Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., ... & Darrell, T. (2014, November). Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia (pp. 675-678), doi: 10.1145/2647868.2654889
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Klein, I., Gessner, U., Dietz, A. J., & Kuenzer, C. (2017). Global WaterPack – A 250 m resolution dataset revealing the daily dynamics of global inland water bodies. Remote Sensing of Environment, 198, 345-362, doi: 10.1016/j.rse.2017.06.045
Kruskal, W. H., & Wallis, W. A. (1952). Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association, 47(260), 583-621.
Li, S., Sun, D., & Yu, Y. (2013). Automatic cloud-shadow removal from flood/standing water maps using MSG/SEVIRI imagery. International Journal of Remote Sensing, 34(15), 5487-5502, doi: 10.1080/01431161.2013.792969
Li, Y., Martinis, S., & Wieland, M. (2019). Urban flood mapping with an active self-learning convolutional neural network based on TerraSAR-X intensity and interferometric coherence. ISPRS Journal of Photogrammetry and Remote Sensing, 152, 178-191.
Li, Y., Martinis, S., Plank, S., & Ludwig, R. (2018). An automatic change detection approach for rapid flood mapping in Sentinel-1 SAR data. International Journal of Applied Earth Observation and Geoinformation, 73, 123-135, doi: 10.1016/j.jag.2018.05.023
Liang, J., & Liu, D. (2020). A local thresholding approach to flood water delineation using Sentinel-1 SAR imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 159, 53-62.
Liu, J., Gong, M., Qin, K., & Zhang, P. (2016). A deep convolutional coupling network for change detection based on heterogeneous optical and radar images. IEEE Transactions on Neural Networks and Learning Systems, 29(3), 545-559, doi: 10.1109/TNNLS.2016.2636227
Manakos, I., Kordelas, G. A., & Marini, K. (2020). Fusion of Sentinel-1 data with Sentinel-2 products to overcome non-favourable atmospheric conditions for the delineation of inundation maps. European Journal of Remote Sensing, 53(sup2), 53-66, doi: 10.1080/22797254.2019.1596757
Manfreda, S., Nardi, F., Samela, C., Grimaldi, S., Taramasso, A. C., Roth, G., and Sole, A. (2014). Investigation on the Use of Geomorphic Approaches for the Delineation of Flood Prone Areas. Journal of Hydrology, 517, 863-876, doi: 10.1016/j.jhydrol.2014.06.009
Manjusree, P., Kumar, L. P., Bhatt, C. M., Rao, G. S., & Bhanumurthy, V. (2012). Optimization of threshold ranges for rapid flood inundation mapping by evaluating backscatter profiles of high incidence angle SAR images. International Journal of Disaster Risk Science, 3(2), 113-122, doi: 10.1007/s13753-012-0011-5
Martinis, S., Kersten, J., & Twele, A. (2015). A fully automated TerraSAR-X based flood service. ISPRS Journal of Photogrammetry and Remote Sensing, 104, 203-212, doi: 10.1016/j.isprsjprs.2014.07.014
Martinis, S., Twele, A., Strobl, C., Kersten, J. and Stein, E., 2013. A multi-scale flood monitoring system based on fully automatic MODIS and TerraSAR-X processing chains. Remote Sensing, 5(11), pp. 5598-5619, doi: 10.3390/rs5115598
Mason, D. C., Speck, R., Devereux, B., Schumann, G. J. P., Neal, J. C., & Bates, P. D. (2009). Flood detection in urban areas using TerraSAR-X. IEEE Transactions on Geoscience and Remote Sensing, 48(2), 882-894, doi: 10.1109/TGRS.2009.2029236
Mateo-Garcia, G., Veitch-Michaelis, J., Smith, L., Oprea, S. V., Schumann, G., Gal, Y., ... & Backes, D. (2021). Towards global flood mapping onboard low cost satellites with machine learning. Scientific Reports, 11(1), 1-12, doi: 10.1038/s41598-021-86650-z
Matgen, P., Hostache, R., Schumann, G., Pfister, L., Hoffmann, L. and Savenije, H.H.G., 2011. Towards an automated SAR-based flood monitoring system: Lessons learned from two case studies. Physics and Chemistry of the Earth, Parts A/B/C, 36(7-8), pp. 241-252, doi: 10.1016/j.pce.2010.12.009
Matgen, P., Schumann, G., Henry, J.B., Hoffmann, L. and Pfister, L., 2007. Integration of SAR-derived river inundation areas, high-precision topographic data and a river flow model toward near real-time flood management. International Journal of Applied Earth Observation and Geoinformation, 9(3), pp. 247-263, doi: 10.1016/j.jag.2006.03.003
McFeeters, S. K. (1996). The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. International Journal of Remote Sensing, 17(7), 1425-1432, doi: 10.1080/01431169608948714
McNairn, H., Champagne, C., Shang, J., Holmstrom, D., & Reichert, G. (2009). Integration of optical and Synthetic Aperture Radar (SAR) imagery for delivering operational annual crop inventories. ISPRS Journal of Photogrammetry and Remote Sensing, 64(5), 434-449, doi: 10.1016/j.isprsjprs.2008.07.006
Mosavi, A., Ozturk, P. and Chau, K.W., 2018. Flood prediction using machine learning models: Literature review. Water, 10(11), p. 1536, doi: 10.3390/w10111536
Musa, Z. N., Popescu, I., & Mynett, A. (2015). A review of applications of satellite SAR, optical, altimetry and DEM data for surface water modelling, mapping and parameter estimation. Hydrology and Earth System Sciences, 19(9), 3755, doi: 10.5194/hess-19-3755-2015
Nemni, E., Bullock, J., Belabbes, S., & Bromley, L. (2020). Fully convolutional neural network for rapid flood segmentation in synthetic aperture radar imagery. Remote Sensing, 12(16), 2532.
Oberstadler, R., Hönsch, H. and Huth, D., 1997. Assessment of the mapping capabilities of ERS-1
Ohki, M., Tadono, T., Itoh, T., Ishii, K., Yamanokuchi, T., Watanabe, M., & Shimada, M. (2019). Flood area detection using PALSAR-2 amplitude and coherence data: The case of the 2015 heavy rainfall in Japan. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(7), 2288-2298, doi: 10.1109/JSTARS.2019.2911596
Pekel, J. F., Cottam, A., Gorelick, N., & Belward, A. S. (2016). High-resolution mapping of global surface water and its long-term changes. Nature, 540(7633), 418-422, doi: 10.1038/nature20584
Pekel, J. F., Vancutsem, C., Bastin, L., Clerici, M., Vanbogaert, E., Bartholomé, E., & Defourny, P. (2014). A near real-time water surface detection method based on HSV transformation of MODIS multi-spectral time series data. Remote Sensing of Environment, 140, 704-716, doi: 10.1016/j.rse.2013.10.008
Peng, B., Meng, Z., Huang, Q., & Wang, C. (2019). Patch Similarity Convolutional Neural Network for Urban Flood Extent Mapping Using Bi-Temporal Satellite Multispectral Imagery. Remote Sensing, 11(21), 2492.
Pham-Duc, B., Prigent, C., & Aires, F. (2017). Surface water monitoring within Cambodia and the Vietnamese Mekong Delta over a year, with Sentinel-1 SAR observations. Water, 9(6), 366, doi: 10.3390/w9060366
Plank, S., Jüssi, M., Martinis, S., & Twele, A. (2017). Mapping of flooded vegetation by means of polarimetric Sentinel-1 and ALOS-2/PALSAR-2 imagery. International Journal of Remote Sensing, 38(13), 3831-3850, doi: 10.1080/01431161.2017.1306143
Potnis, A. V., Shinde, R. C., Durbha, S. S., & Kurte, K. R. (2019, July). Multi-class segmentation of urban floods from multispectral imagery using deep learning. In IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium (pp. 9741-9744). IEEE.
Rajah, P., Odindi, J., & Mutanga, O. (2018). Feature level image fusion of optical imagery and Synthetic
SAR data for flood mapping: a case study in Aperture Radar (SAR) for invasive alien plant
Germany. Hydrological processes, 11(10), pp.1415- species detection and mapping. Remote Sensing
1425.
Applications: Society and Environment, 10, 198- data: A case study in the St. Lucia wetlands, South
208, doi: 10.1016/j.rsase.2018.04.007 Africa. International Journal of Applied Earth
Rambour, C., Audebert, N., Koeniguer, E., Le Saux, Observation and Geoinformation, 86, 102009, doi:
B., Crucianu, M., & Datcu, M. (2020). Flood 10.1016/j.jag.2019.102009
Detection in Time Series of Optical and SAR Smith, A. R. (1978). Color gamut transform
Images. The International Archives of the pairs. ACM Siggraph Computer Graphics, 12(3), 12-
Photogrammetry, Remote Sensing and Spatial 19.
Information Sciences, 43(B2), 1343-1346. Soergel, U., Thoennessen, U., & Stilla, U. (2003,
Ronneberger, O., Fischer, P., & Brox, T. (2015, May). Visibility analysis of man-made objects in
October). U-net: Convolutional networks for SAR images. In 2003 2nd GRSS/ISPRS Joint
biomedical image segmentation. In International Workshop on Remote Sensing and Data Fusion over
Conference on Medical image computing and Urban Areas (pp. 120-124). IEEE.
computer-assisted intervention (pp. 234-241). Tanguy, M., Chokmani, K., Bernier, M., Poulin, J.,
Springer, Cham. doi: 10.1007/978-3-319-24574- & Raymond, S. (2017). River flood mapping in
4_28 urban areas combining Radarsat-2 data and flood
Saksena, Siddharth, and Venkatesh Merwade. return period data. Remote Sensing of
"Incorporating the effect of DEM resolution and Environment, 198, 442-459, doi:
accuracy for improved flood inundation 10.1016/j.rse.2017.06.042
mapping." Journal of Hydrology 530 (2015): 180- Tarpanelli, A., Santi, E., Tourian, M. J., Filippucci,
194, doi: 10.1016/j.jhydrol.2015.09.069 P., Amarnath, G., & Brocca, L. (2018). Daily river
Schmitt, M. (2020). Potential of Large-Scale Inland discharge estimates by merging satellite optical
Water Body Mapping from Sentinel-1/2 Data on the sensors and radar altimetry through artificial neural
Example of Bavaria’s Lakes and Rivers. PFG– network. IEEE Transactions on Geoscience and
Journal of Photogrammetry, Remote Sensing and Remote Sensing, 57(1), 329-341, doi:
Geoinformation Science, 88, 271-289, doi: 10.1109/TGRS.2018.2854625
10.1007/s41064-020-00111-2 Twele, A., Cao, W., Plank, S. and Martinis, S., 2016.
Schratz, Patrick, Jannes Muenchow, Eugenia Sentinel-1-based flood mapping: a fully automated
Iturritxa, Jakob Richter, and Alexander Brenning. processing chain. International Journal of Remote
2019. “Hyperparameter Tuning and Performance Sensing, 37(13), pp.2990-3004, doi:
Assessment of Statistical and Machine-Learning 10.1080/01431161.2016.1192304
Algorithms Using Spatial Data.” Ecological Wiesnet, D.R., McGinnis, D.F., and Pritchard, J.A.,
Modelling 406 (August): 109–20, doi: 1974. Mapping of the 1973 Mississippi River Floods
10.1016/j.ecolmodel.2019.06.002. by the NOAA-2 Satellite. JAWRA Journal of the
Shen, X., Anagnostou, E. N., Allen, G. H., American Water Resources Association, 10(5),
Brakenridge, G. R., & Kettner, A. J. (2019b). Near- pp.1040-1049, doi: 10.1111/j.1752-
real-time non-obstructed flood inundation mapping 1688.1974.tb00623.x
using synthetic aperture radar. Remote Sensing of Xu, H. (2006). Modification of normalised
Environment, 221, 302-315, doi: difference water index (NDWI) to enhance open
10.1016/j.rse.2018.11.008 water features in remotely sensed
Shen, X., Wang, D., Mao, K., Anagnostou, E., & imagery. International journal of remote
Hong, Y. (2019a). Inundation extent mapping by sensing, 27(14), 3025-3033, doi:
synthetic aperture radar: a review. Remote 10.1080/01431160600589179
Sensing, 11(7), 879, doi: 10.3390/rs11070879 Yang, L., Meng, X., & Zhang, X. (2011). SRTM
Slagter, B., Tsendbazar, N. E., Vollrath, A., & DEM and its application advances. International
Reiche, J. (2020). Mapping wetland characteristics Journal of Remote
using temporally dense Sentinel-1 and Sentinel-2
Resolution Terrain Analysis. Water Resources
Sensing, 32(14), 3875-3896, doi: Research, 54(12), 10-013, doi:
10.1080/01431161003786016 10.1029/2018WR023457
Yulianto, F., Sofan, P., Zubaidah, A., Sukowati, K.
A. D., Pasaribu, J. M., & Khomarudin, M. R. (2015).
Detecting areas affected by flood using multi-
temporal ALOS PALSAR remotely sensed data in
Karawang, West Java, Indonesia. Natural
Hazards, 77(2), 959-985, doi: 10.1007/s11069-015-
1633-x
Zheng, X., Maidment, D. R., Tarboton, D. G., Liu,
Y. Y., & Passalacqua, P. (2018). GeoFlood: Large‐
Scale Flood Inundation Mapping Based on High‐