Accepted Final
The 30-meter SRTM DEM is void-filled using elevation data from various other sources. To match the resolution of the Sentinel bands (both S1 and S2), the SRTM DEM data was resampled linearly to 10-meter resolution. This resampled data is used as an ancillary input for our deep learning algorithm.

Methodology:
In this section, we describe the following aspects: (1) preprocessing of S2 data, (2) the experimental setup, (3) the architecture of the CNN model, and (4) evaluation of the deep learning model performance.

Preprocessing of S2 data:
As S1 SAR has only two available bands, with VV and VH polarizations, we use them directly in the machine learning setup without making any modifications.
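The linear resampling of the 30 m SRTM DEM to the 10 m Sentinel grid can be sketched with plain NumPy bilinear interpolation; this is a minimal illustration only (the function name and toy array are ours, not the paper's preprocessing code):

```python
import numpy as np

def bilinear_resample(dem: np.ndarray, factor: int = 3) -> np.ndarray:
    """Linearly interpolate a DEM onto a grid `factor` times finer (30 m -> 10 m)."""
    h, w = dem.shape
    ys = np.linspace(0, h - 1, h * factor)  # target row coordinates
    xs = np.linspace(0, w - 1, w * factor)  # target column coordinates
    # Interpolate along columns first, then along rows.
    tmp = np.array([np.interp(xs, np.arange(w), row) for row in dem])
    return np.array([np.interp(ys, np.arange(h), col) for col in tmp.T]).T

dem_30m = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "30 m" tile
dem_10m = bilinear_resample(dem_30m)
print(dem_10m.shape)  # (12, 12)
```

Linear (order-1) interpolation keeps the resampled elevations within the range of the original cell values, which is desirable for terrain data.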
Table 2: Sentinel-2 water spectral indices, their formulations, and the corresponding references

Index | Formulation | Reference
MNDWI | (GREEN - SWIR1) / (GREEN + SWIR1) | Xu, 2006
NDWI | (GREEN - NIR) / (GREEN + NIR) | McFeeters, 1996
AWEInsh | 4 * (GREEN - SWIR1) - (0.25 * NIR + 2.75 * SWIR2) | Feyisa et al., 2014
AWEIsh | BLUE + 2.5 * GREEN - 1.5 * (NIR + SWIR1) - 0.25 * SWIR2 | Feyisa et al., 2014
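For concreteness, the four indices in Table 2 can be computed per pixel from the corresponding S2 bands. This is a sketch assuming band values already scaled to surface reflectance; the function name is ours:

```python
import numpy as np

def water_indices(blue, green, nir, swir1, swir2):
    """Water indices from Table 2; inputs are same-shape reflectance arrays (or floats)."""
    ndwi = (green - nir) / (green + nir)                          # McFeeters, 1996
    mndwi = (green - swir1) / (green + swir1)                     # Xu, 2006
    aweinsh = 4 * (green - swir1) - (0.25 * nir + 2.75 * swir2)   # Feyisa et al., 2014
    aweish = blue + 2.5 * green - 1.5 * (nir + swir1) - 0.25 * swir2
    return ndwi, mndwi, aweinsh, aweish

# A typical water pixel: green reflectance dominates NIR and SWIR.
ndwi, mndwi, aweinsh, aweish = water_indices(0.10, 0.20, 0.05, 0.02, 0.01)
print(ndwi, mndwi)  # 0.6 0.8181818181818181
```

All four indices are positive for this water-like spectrum, which is the behavior the threshold-free CNN inputs rely on.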
The water spectral indices computed from the S2 bands are provided as input to our deep learning algorithm (Table 2). The HSV bands and their combinations with spectral indices are also used in our experiments. We also want to investigate whether the raw bands used in cNDWI, cAWEI and HSV can produce performance similar to their spectral-index counterparts. Therefore, we also evaluate our deep learning algorithm using the raw bands, named rNDWI, rAWEI and rHSV, and their combinations. Finally, all the S2 band configurations are combined with S1 to evaluate the effectiveness of S1 and S2 combinations. Overall, a total of 32 combinations are formulated in our experimental setup (Table 3).
Table 3: Input experiment design and their corresponding descriptions

Experiment ID | Input name | Input description
1 | S1 | VV and VH bands of S1
2 | S1+DEM | VV and VH bands of S1 + 10 m resampled SRTM 30 m DEM
3 | cAWEI | AWEIsh and AWEInsh indices of S2
4 | cAWEI+DEM | AWEIsh and AWEInsh indices of S2 + 10 m resampled SRTM 30 m DEM
5 | cNDWI | NDWI and MNDWI indices of S2
6 | cNDWI+DEM | NDWI and MNDWI indices of S2 + 10 m resampled SRTM 30 m DEM
7 | HSV | HSV transformation of S2 RGB bands
8 | HSV+DEM | HSV transformation of S2 RGB bands + 10 m resampled SRTM 30 m DEM
9 | rNDWI | Bands used for computing NDWI and MNDWI from S2
10 | rNDWI+DEM | Bands used for computing NDWI and MNDWI from S2 + 10 m resampled SRTM 30 m DEM
11 | rAWEI | Bands used for computing AWEIsh and AWEInsh from S2
12 | rAWEI+DEM | Bands used for computing AWEIsh and AWEInsh from S2 + 10 m resampled SRTM 30 m DEM
13 | rHSV | Bands used for computing HSV from S2
14 | rHSV+DEM | Bands used for computing HSV from S2 + 10 m resampled SRTM 30 m DEM
15 | cAWEI+cNDWI | NDWI, MNDWI, AWEIsh and AWEInsh indices of S2
16 | cAWEI+cNDWI+DEM | NDWI, MNDWI, AWEIsh and AWEInsh indices of S2 + 10 m resampled SRTM 30 m DEM
17 | HSV+cAWEI+cNDWI | HSV transformation of RGB bands, NDWI, MNDWI, AWEIsh and AWEInsh of S2
18 | HSV+cAWEI+cNDWI+DEM | HSV transformation of RGB bands, NDWI, MNDWI, AWEIsh and AWEInsh of S2 + 10 m resampled SRTM 30 m DEM
19 | rAWEI+rNDWI | Bands used for computing NDWI, MNDWI, AWEIsh and AWEInsh indices of S2
20 | rAWEI+rNDWI+DEM | Bands used for computing NDWI, MNDWI, AWEIsh and AWEInsh indices of S2 + 10 m resampled SRTM 30 m DEM
21 | rHSV+rAWEI+rNDWI | Bands used for computing HSV transformation, NDWI, MNDWI, AWEIsh and AWEInsh of S2
22 | rHSV+rAWEI+rNDWI+DEM | Bands used for computing HSV transformation, NDWI, MNDWI, AWEIsh and AWEInsh of S2 + 10 m resampled SRTM 30 m DEM
23 | S1+cAWEI | VV, VH (S1), AWEIsh and AWEInsh (S2)
24 | S1+cAWEI+DEM | VV, VH (S1), AWEIsh and AWEInsh (S2) + 10 m resampled SRTM 30 m DEM
25 | S1+cNDWI | VV, VH (S1), NDWI and MNDWI (S2)
26 | S1+cNDWI+DEM | VV, VH (S1), NDWI and MNDWI (S2) + 10 m resampled SRTM 30 m DEM
27 | S1+cAWEI+cNDWI | VV, VH (S1), NDWI, MNDWI, AWEIsh and AWEInsh (S2)
28 | S1+cAWEI+cNDWI+DEM | VV, VH (S1), NDWI, MNDWI, AWEIsh and AWEInsh (S2) + 10 m resampled SRTM 30 m DEM
29 | S1+HSV | VV, VH (S1) and HSV (S2)
30 | S1+HSV+DEM | VV, VH (S1) and HSV (S2) + 10 m resampled SRTM 30 m DEM
31 | S1+cAWEI+cNDWI+HSV | VV, VH (S1), HSV, NDWI, MNDWI, AWEIsh and AWEInsh (S2)
32 | S1+cAWEI+cNDWI+HSV+DEM | VV, VH (S1), HSV, NDWI, MNDWI, AWEIsh and AWEInsh (S2) + 10 m resampled SRTM 30 m DEM
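The experiment design in Table 3 amounts to stacking different channel subsets into a single CNN input tensor. A minimal sketch (the channel names and dictionary layout are illustrative, not the paper's code):

```python
import numpy as np

H = W = 8  # toy tile size
rng = np.random.default_rng(0)
# Hypothetical per-scene channels (each an H x W array).
channels = {name: rng.random((H, W)) for name in
            ["VV", "VH", "NDWI", "MNDWI", "AWEIsh", "AWEInsh", "Hue", "Sat", "Val", "DEM"]}

# A few of the Table 3 experiments expressed as channel lists.
experiments = {
    "S1": ["VV", "VH"],
    "S1+DEM": ["VV", "VH", "DEM"],
    "cAWEI+cNDWI": ["AWEIsh", "AWEInsh", "NDWI", "MNDWI"],
    "S1+HSV+DEM": ["VV", "VH", "Hue", "Sat", "Val", "DEM"],
}

def build_input(name: str) -> np.ndarray:
    """Stack the requested channels into a (C, H, W) array for the network."""
    return np.stack([channels[c] for c in experiments[name]], axis=0)

print(build_input("S1+HSV+DEM").shape)  # (6, 8, 8)
```

Varying only the channel list, while keeping the network and training protocol fixed, is what makes the 32 configurations directly comparable.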
We evaluate the performance of the 32 input combinations as follows:
1) For each of the 32 input combinations, we generate flood inundation maps for the test datasets belonging to all K = 10 folds.
2) F1 score, Precision and Recall metrics are computed separately for each individual image in the test dataset across all 10 folds.
3) Then, for all 10 folds, the median of all three metrics is computed and reported.
The overall performance of each input combination is determined as the median of the individual metrics across all the folds. Further, the non-parametric Kruskal–Wallis test is performed to determine whether there is a statistically significant difference between the medians of two band combinations. In our case, two band combinations are deemed significantly different if the p-value is less than 0.05. Figure 3 presents the results of the Kruskal–Wallis test in the form of a matrix. Statistically significant differences between the performance metrics on the rows and columns are indicated by the symbol ● in Figure 3. Green (red) color indicates that the performance of the input combination on the row is higher (lower) than that of the input combination on the column.

Table 4: Modified K-fold based median performance metrics for S1 and S2 inputs

The performance metrics in Table 4 indicate that S1 imagery alone as input has the lowest F1 score, 0.62, among the individual S1 and S2 inputs to U-Net. The low F1 score of S1 is a case of under-segmentation, as expressed by a low precision score of 0.58. However, S1's performance improved with the added use of DEM, to a median F1 score of 0.73 from 0.62. This increase in performance of S1+DEM is due to an improvement in under-segmentation, as represented by a relatively better precision score of 0.68.
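Steps 2 and 3 above reduce to computing per-image precision, recall and F1 from the binary masks and taking medians across folds. A minimal NumPy sketch with toy masks (not the paper's evaluation code):

```python
import numpy as np

def prf(pred: np.ndarray, truth: np.ndarray):
    """Precision, recall and F1 for binary masks (1 = water)."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy image: one false positive lowers precision while recall stays perfect.
truth = np.array([[0, 1, 1], [0, 0, 1]])
pred  = np.array([[0, 1, 1], [1, 0, 1]])
p, r, f1 = prf(pred, truth)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.75 1.0 0.86
```

The per-fold medians of these scores are what Table 4 reports; `scipy.stats.kruskal` can then compare the score distributions of two input combinations, which is the Kruskal–Wallis test behind Figure 3.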
Figure 3: Outcome of Kruskal-Wallis test between the input combinations in rows and columns. The statistically
significant differences between the performance metrics on the rows and columns are indicated by the symbol ●.
Green (red) color indicates the performance of the input combination on the row is higher (lower) than that of the
input combination on the column.
Our results indicating an improved performance with the added use of DEM are not surprising, as elevation data distinguishes the potential flood plains from other regions and has been used previously in numerous flood inundation studies (Saksena & Merwade, 2015; Zheng et al., 2018; Musa et al., 2015). Note that the use of S1 SAR data without DEM only provides moderate skill in mapping the flood inundation. For instance, Figure 4 shows an instance of flood over a location in southeastern Paraguay in S1 images acquired on October 31, 2018 (Table 1 and Figure 1). The U-Net trained with S1 SAR data without DEM was unable to detect the flood extent in Paraguay despite the presence of a clear contrast in VV and VH backscatter (Figure 4 (B), (C), (D), (F)). However, using DEM (Figure 4 (A)) as an ancillary input dramatically improves the U-Net's ability to detect water extent, indicating no deficiency in the U-Net architecture (Figure 4 (E)). Further, speckle noise in S1 SAR images and its imperfect filtering can also lead to false positives in classified flood maps (Schmitt, 2020; Gulácsi & Kovács, 2020).
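The speckle filtering mentioned above is typically done with local adaptive filters. A basic Lee filter, written out naively for clarity (the window size and noise variance are illustrative; this is not the preprocessing used in the paper):

```python
import numpy as np

def lee_filter(img: np.ndarray, size: int = 3, noise_var: float = 0.05) -> np.ndarray:
    """Lee speckle filter: blend each pixel toward its local mean, preserving edges."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + size, j:j + size]
            mean, var = win.mean(), win.var()
            gain = var / (var + noise_var)  # ~0 in flat areas, ~1 on strong edges
            out[i, j] = mean + gain * (img[i, j] - mean)
    return out

flat = np.ones((5, 5))
print(np.allclose(lee_filter(flat), 1.0))  # True: homogeneous areas are unchanged
```

Because the gain collapses to zero in homogeneous regions, speckle is averaged out there, while high-variance (edge) pixels are left close to their original values, which is exactly why residual speckle near edges can still produce false positives.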
Figure 4: Flood inundation mapping inputs of (A) Elevation, (B) VV and (C) VH backscatters for a location in
Paraguay. The lower panel represents the flood inundation extent by (D) ground truth, produced by U-Net when
using (E) S1+DEM as input and (F) S1 as input.
In case of all S2 spectral indices, with and without DEM, high F1 scores can be observed, with median F1 scores ranging between 0.88 and 0.90. These higher F1 scores are due to both high Precision and Recall values, indicating a balance between over- and under-segmentation in case of the S2 spectral indices. There does not appear to be any significant advantage in combining DEM with any of the spectral indices, unlike the case of S1 imagery. However, in comparison to S1, the performance of all the S2-based spectral indices shows a statistically significant increase in terms of F1, precision as well as recall scores (Figure 3). However, we must note that the Recall scores have increased by a lesser extent, indicating an improvement in over-segmentation in a few cases. Also, in comparison to the water indices (cNDWI and cAWEI), the performance of flood inundation mapping using the HSV transformation of S2 bands is statistically significantly better, with a median F1 score value of 0.9.

Interestingly, U-Net based flood mapping performance using the raw bands of the S2 water indices (rNDWI and rAWEI) is similar to that of cNDWI and cAWEI (Figure 3 and Table 4). This indicates that the U-Net algorithm was able to learn features that are at least as meaningful as the spectral indices for flood segmentation. However, in case of HSV, the raw bands' performance (rHSV) is comparatively lower, though close to the performance of the spectral indices. Unlike the case of the spectral indices, the U-Net algorithm was unable to capture the relation between the HSV transformation and its corresponding raw bands, leading to a relatively lower F1 score than its feature-engineered counterpart. This differential performance of a deep learning model might be related to the degree of computational complexity involved in the considered spectral indices and the HSV transformation. Table 2 and equations 1-3 indicate that the spectral indices computation is relatively simpler than HSV. Therefore, based on our training data size and hyper-parameters, U-Net may have captured the representation of the spectral indices but not that of the HSV transformation.
Figure 5: Flood inundation mapping inputs of (A) Elevation, (B) False color composite of S1 data and (C) False
color composite of S2 for a location in India. The lower panel represents the flood inundation extent by (D) ground
truth, produced by U-Net when using (E) S1+DEM as input and (F) HSV + DEM as input.
Distinct visual contrast between water and non-water pixels in the S2-based indices, relative to S1, may be a plausible reason for the better performance of the S2-based indices. For instance, Figure 5 shows an instance of flood in the northeastern part of India from S1 and S2 images acquired on August 12, 2016 (Figure 1 and Table 1). The false color composite (FCC) of S1 bands (Figure 5 (A), (B)) can clearly distinguish the meandering river in the southwest region and the wetland in the northeast region, but was unable to capture the flooded agricultural lands in the northern regions. As a result, the U-Net was able to produce flood extent only across those regions previously identified by S1's FCC image (Figure 5 (E)). However, the FCC of S2 bands was able to distinguish the flooded agricultural land and wetlands as well as the meandering rivers (Figure 5 (F)), and produced a map nearly identical to the hand-labeled ground truth (Figure 5 (D)). Hence, the resulting prediction of the HSV-based transformation of S2 data has captured the flood area extents across all the land types. This superior performance of HSV may be due to the better spectral correlation of band pixels with water characteristics (Huang et al., 2018).
Table 5: Modified K-fold based median performance metrics for combinations within S2 inputs

F1 | Precision | Recall | Input | Data type
0.86 | 0.85 | 0.91 | cAWEI+cNDWI | Feature engineered bands
0.88 | 0.86 | 0.91 | cAWEI+cNDWI+DEM | Feature engineered bands
0.90 | 0.89 | 0.92 | HSV+cAWEI+cNDWI | Feature engineered bands
0.90 | 0.88 | 0.93 | HSV+cAWEI+cNDWI+DEM | Feature engineered bands
0.87 | 0.85 | 0.91 | rAWEI+rNDWI | Original bands
0.87 | 0.86 | 0.90 | rAWEI+rNDWI+DEM | Original bands
0.88 | 0.85 | 0.91 | rHSV+rAWEI+rNDWI | Original bands
0.86 | 0.84 | 0.92 | rHSV+rAWEI+rNDWI+DEM | Original bands
Table 6: Modified K-fold based median performance metrics for combinations of S1 and S2 inputs

F1 | Precision | Recall | Input data
0.88 | 0.86 | 0.90 | S1+cAWEI
0.87 | 0.82 | 0.93 | S1+cAWEI+DEM
0.88 | 0.88 | 0.91 | S1+cNDWI
0.89 | 0.86 | 0.92 | S1+cNDWI+DEM
0.88 | 0.85 | 0.94 | S1+cAWEI+cNDWI
0.89 | 0.85 | 0.94 | S1+cAWEI+cNDWI+DEM
0.90 | 0.90 | 0.92 | S1+HSV
0.90 | 0.90 | 0.93 | S1+HSV+DEM
0.90 | 0.90 | 0.92 | S1+cAWEI+cNDWI+HSV
0.90 | 0.90 | 0.93 | S1+cAWEI+cNDWI+HSV+DEM
Combinations within S2 imagery:
The previous performance results of the individual spectral indices indicate that the HSV transformation has superior performance in mapping floods across the globe. However, among these three indices there are some non-overlapping S2 spectral bands, and their combinations might result in a further improvement in performance. Therefore, it makes sense to investigate how combinations within S2 imagery, such as combining cAWEI, cNDWI and the HSV transformation, would perform compared to the individual indices (Table 5). The combinations within S2 bands indicate that there does not appear to be any significant advantage from the combination approach. Even though there is a slight decrease in the median F1 score for the combined spectral indices (cAWEI+cNDWI) when compared to the individual indices, it is still statistically insignificant (Figure 3). By combining all three S2 index approaches, we obtain a performance similar to that of HSV alone. Therefore, our results indicate that overall, even if there are non-overlapping bands in our experimental setup, combining spectral indices and HSV has no added advantage in terms of performance.
Figure 6: Flood inundation mapping inputs of (A) Elevation, (B) VV and (C)VH backscatter intensities of S1 data
and (D) False color composite of S2 for a location in Ghana. The lower panel represents the flood inundation
extent by (E) ground truth, U-Net produced flood masks when using (F) S1+DEM as input and (G) HSV + DEM
as input and (H) fusion of HSV and S1+DEM as input.
Combinations of S1-SAR and S2-spectral imagery:
We combined S1 SAR and S2-based spectral indices to test whether there is a significant advantage in performance due to their combination, and to examine whether there is an optimal combination of S1 and S2. Therefore, all the combinations of S1 and S2 spectral indices are tested (Table 6). Combining S1 and S2 indices indicated that their performance did not change significantly when compared to the performance of the individual S2 indices. For instance, the performance of flood mapping when using cAWEI, cNDWI and HSV is not significantly different when compared to the performance of S1+cAWEI, S1+cNDWI and S1+HSV (Figure 3). All the best performing input combinations have HSV as a part of their combination. However, the performance of these input combinations is not significantly different from each other and is similar to the performance of HSV alone. Even though the combination of S1 and S2 data yields relatively no performance improvement in comparison to the individual S2 indices in detecting flood water extents, it is important to clarify that the U-Net trained with cloud-obscured S2 images still cannot efficiently detect flood extents. An example instance is shown in Figure 6 for a flooding case over central Ghana using S1 and S2 images acquired on September 18-19, 2016 (Table 1 and Figure 1). The DEM shows a narrow strip of low elevation indicating a potentially narrow river (Figure 6 (A)). The VV (Figure 6 (B)) and VH (Figure 6 (C)) backscatters show a complete picture of the flood plain surrounding the river and other flood-inundated regions across the plains. The FCC of S2 indicated total cloud coverage over the flooded region in Ghana (Figure 6 (D)). Consequently, the S1-predicted flooded regions (Figure 6 (F)) have captured the flooded extent similar to the ground truth (Figure 6 (E)), with some flood plain areas being overestimated. Despite the huge cloud coverage, the HSV transformation of S2 data could capture the flood plains in the southwest region and some other patches across the map (Figure 6 (G)). But it could not capture the river segment due to the dense cloud coverage. However, the fusion of both the S1 backscatter and S2's HSV transformation resulted in a more accurate representation of the flood extent without any overestimation across the flood plains (Figure 6 (H)). Since the Sen1Floods11 dataset is curated to exclude the majority of satellite imagery with clouds, we could not find a significant advantage in our experiments. However, in the presence of clouds, the fusion of S1 and S2 will have an added advantage.

Discussion:
Our results indicate that even though SAR data is not affected by cloud cover, poor contrast between VV and VH backscatter affects S1 data's flood inundation mapping performance. The F1 score for S1-based flood water mapping varied between 0.65 and 0.91 in previous studies, depending on the location and approach (Bioresita et al., 2018; Liang and Liu, 2020). Therefore, the F1 score of 0.62 obtained in our case compares reasonably well to these studies. Our current configuration performs semantic segmentation with a single trained model for the entire dataset. Previous studies have suggested that this may lead to poor accuracy due to unclear backscatter conditions of inundated areas across different land cover features in case of S1 (Manakos et al., 2020). Therefore, a land cover specific ensemble of deep learning models, or pixel-centric approaches, can be used to improve the performance (Huang et al., 2018; Pham-Duc et al., 2017). Additionally, the 10-meter resampled SRTM 30 m DEM improves S1 data's flood mapping abilities. This can be attributed to the identification of low-lying flood plains in the elevation data (Manfreda et al., 2015; Samela et al., 2016). Also, in urban areas, elevation data can help us to distinguish areas of urban ground surface which may not be visible due to radar shadowing and building layovers, preventing misclassification of the flooded water extent (Soergel et al., 2003). Further, DEMs can help us distinguish roads and tarmac areas, which have a low backscatter similar to water and so create misclassification (Mason et al., 2014). But given the dependence of flood inundation mapping on the elevation data's spatial resolution (Haile and Rientjes, 2005; Fereshtehpour and Karamouz, 2018), further studies should consider the sensitivity of the flood inundation map's performance to the DEM spatial resolution as well as the data source.
S2-based spectral indices are generally better than S1-based SAR data for mapping flood extents due to stronger correlations between the spectral features of S2 bands and water surfaces in cloud-free S2 images (Boschetti et al., 2014; Klein et al., 2017; Bonafilia et al., 2020). Also, our result that S2 has better performance than S1 is similar to the conclusion obtained by Bonafilia et al., 2020. In particular, the HSV transformation of the RGB bands in S2 satellite data has a statistically significant superior performance in comparison to the conventional water spectral indices, probably due to the improved contrast between water and non-water surfaces in the HSV components (Pekel et al., 2014, 2016). However, the difference in performance between HSV and the spectral indices is relatively low. The performance of the S1 and S2 combinations for flood water mapping was not significantly different from the individual S2 indices' performance. This is not surprising in our case, as the Sen1Floods11 dataset has a relatively small number of satellite images with clouds. But, in satellite imagery with clouds, the fusion of S1 and S2 imagery performed significantly better than the individual S2 indices, as demonstrated in Figure 6. In addition, previous studies have also indicated that, in the fusion approach, the S2 bands' inability to penetrate clouds is complemented by SAR's ability to map water extent during cloud cover.

Interestingly, the U-Net algorithm's performance using cNDWI/cAWEI in segmenting water-inundated areas is similar to that of using the raw bands used for computing the NDWI/AWEI indices, indicating the ability of U-Net to learn features with characteristics similar to the spectral indices as a part of its encoder operations. However, the U-Net algorithm's performance using S2 bands transformed to HSV is superior compared to that of using the raw S2 bands. This may be due to the relatively shorter data record for training the U-Net algorithm, or the inability of the U-Net to capture the relatively complex formulation of HSV. Therefore, relatively complex feature engineering such as the HSV transformation may be performed before training the deep learning algorithm for flood inundation mapping.

The U-Net architecture (Ronneberger et al., 2015) has previously been proven to be efficient in the segmentation of binary classes. Even though our focus was to explore the diverse bands of S2 and S1 for flood mapping, our adopted configuration of U-Net has performed well in delineating the flooded regions. In this work, we modified the U-Net configuration to accept S1 and S2 bands instead of the traditional RGB bands, to delineate floods using multi-spectral and SAR imagery. However, in case of S1, our modified U-Net's inability to find a robust threshold sensitive enough to identify the contrast between water and non-water pixels based on the SAR VV and VH backscatter has resulted in decreased performance in some flooded regions. This may be addressed through modification of the convolution configurations, activation functions and loss functions in U-Nets, paving the way for future research. In addition, benchmarking different deep learning architectures for flood inundation mapping using the Sen1Floods11 dataset can be investigated.

Conclusions
In this paper we explored the diverse bands of the S2 and S1 satellites, along with their combinations, for flood inundation mapping through a deep CNN model known as U-Net, trained, validated and tested against manually annotated pixel-level flood inundation images. Our results indicate that using DEM as ancillary data can improve the performance of U-Net when using S1 imagery as input. However, the U-Net algorithm has shown a better performance when using S2 bands compared to S1 bands, likely due to the better spectral correlation between optical sensor output and water features. In addition, there is minimal influence of the ancillary DEM on the median performance of the S2 bands. Among the S2 inputs, the U-Net with the HSV transformation of RGB bands outperforms established spectral indices such as AWEIsh, AWEInsh, NDWI and MNDWI, owing to its superior visual contrast segmentation. The U-Net algorithm was able to learn the relationship between the raw S2 bands and cNDWI and cAWEI, but not that of HSV, owing to the relatively complex computation involved in the latter. Therefore, based on our training data size
and hyper-parameters, U-Net may have captured the representation of the spectral indices but not that of the HSV transformation. These results also show that automatic flood detection is possible when an appropriate water index technique is used. The extension of our approach to benchmark the performance of different deep learning architectures for flood water segmentation is left for future work.

Acknowledgements
This research was supported by the NASA Earth Science Technology Office sponsored New Observing System (NOS) project. Computing was supported by the resources at the NASA Center for Climate Simulation. The authors also acknowledge Cloud to Street for developing open-source flood label data available for access through the Google Cloud Storage bucket at: gs://senfloods11/

Declaration of Competing Interest
None

References
Amitrano, D., Di Martino, G., Iodice, A., Riccio, D. and Ruello, G. (2018). Unsupervised rapid flood mapping using Sentinel-1 GRD SAR images. IEEE Transactions on Geoscience and Remote Sensing, 56(6), 3290-3299, doi: 10.1109/TGRS.2018.2797536
Auynirundronkool, K., Chen, N., Peng, C., Yang, C., Gong, J., & Silapathong, C. (2012). Flood detection and mapping of the Thailand Central plain using RADARSAT and MODIS under a sensor web environment. International Journal of Applied Earth Observation and Geoinformation, 14(1), 245-255, doi: 10.1016/j.jag.2011.09.017
Barton, I. J. and Bathols, J. M. (1989). Monitoring floods with AVHRR. Remote Sensing of Environment, 30(1), 89-94, doi: 10.1016/0034-4257(89)90050-3
Betbeder, J., Rapinel, S., Corpetti, T., Pottier, E., Corgne, S., & Hubert-Moy, L. (2014). Multitemporal classification of TerraSAR-X data for wetland vegetation mapping. Journal of Applied Remote Sensing, 8(1), 083648, doi: 10.1117/1.JRS.8.083648
Bioresita, F., Puissant, A., Stumpf, A., & Malet, J. P. (2019). Fusion of Sentinel-1 and Sentinel-2 image time series for permanent and temporary surface water mapping. International Journal of Remote Sensing, 40(23), 9026-9049, doi: 10.1080/01431161.2019.1624869
Bonafilia, D., Tellman, B., Anderson, T., & Issenberg, E. (2020). Sen1Floods11: a georeferenced dataset to train and test deep learning flood algorithms for Sentinel-1. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 210-211), doi: 10.1109/CVPRW50498.2020.00113
Boschetti, M., Nutini, F., Manfron, G., Brivio, P. A., & Nelson, A. (2014). Comparative analysis of normalised difference spectral indices derived from MODIS for detecting surface water in flooded rice cropping systems. PLoS ONE, 9(2), e88741, doi: 10.1371/journal.pone.0088741
Clement, M. A., Kilsby, C. G., & Moore, P. (2018). Multi-temporal synthetic aperture radar flood mapping using change detection. Journal of Flood Risk Management, 11(2), 152-168, doi: 10.1111/jfr3.12303
Colson, D., Petropoulos, G. P., & Ferentinos, K. P. (2018). Exploring the potential of Sentinels-1 & 2 of the Copernicus Mission in support of rapid and cost-effective wildfire assessment. International Journal of Applied Earth Observation and Geoinformation, 73, 262-276, doi: 10.1016/j.jag.2018.06.011
DeVries, B., Huang, C., Armston, J., Huang, W., Jones, J. W., & Lang, M. W. (2020). Rapid and robust monitoring of flood events using Sentinel-1 and Landsat data on the Google Earth Engine. Remote Sensing of Environment, 240, 111664, doi: 10.1016/j.rse.2020.111664
Du, G., Cao, X., Liang, J., Chen, X., & Zhan, Y. (2020). Medical image segmentation based on U-Net: A review. Journal of Imaging Science and Technology, 64(2), 20508-1, doi: 10.2352/J.ImagingSci.Technol.2020.64.2.020508
Dusseux, P., Corpetti, T., Hubert-Moy, L., & Corgne, S. (2014). Combined use of multi-temporal optical and radar satellite images for grassland monitoring. Remote Sensing, 6(7), 6163-6182, doi: 10.3390/rs6076163
Fereshtehpour, M., & Karamouz, M. (2018). DEM resolution effects on coastal flood vulnerability assessment: Deterministic and probabilistic
approach. Water Resources Research, 54(7), 4965-4982, doi: 10.1029/2017WR022318
Feyisa, G. L., Meilby, H., Fensholt, R., & Proud, S. R. (2014). Automated Water Extraction Index: A new technique for surface water mapping using Landsat imagery. Remote Sensing of Environment, 140, 23-35, doi: 10.1016/j.rse.2013.08.029
Gao, Q., Zribi, M., Escorihuela, M. J., & Baghdadi, N. (2017). Synergetic use of Sentinel-1 and Sentinel-2 data for soil moisture mapping at 100 m resolution. Sensors, 17(9), 1966, doi: 10.3390/s17091966
Gebrehiwot, A., Hashemi-Beni, L., Thompson, G., Kordjamshidi, P., & Langan, T. E. (2019). Deep convolutional neural network for flood extent mapping using unmanned aerial vehicles data. Sensors, 19(7), 1486.
Gevaert, C. M., Suomalainen, J., Tang, J., & Kooistra, L. (2015). Generation of spectral–temporal response surfaces by combining multispectral satellite and hyperspectral UAV imagery for precision agriculture applications. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 8(6), 3140-3146, doi: 10.1109/JSTARS.2015.2406339
Goffi, A., Stroppiana, D., Brivio, P. A., Bordogna, G., & Boschetti, M. (2020). Towards an automated approach to map flooded areas from Sentinel-2 MSI data and soft integration of water spectral features. International Journal of Applied Earth Observation and Geoinformation, 84, 101951, doi: 10.1016/j.jag.2019.101951
Gulácsi, A., & Kovács, F. (2020). Sentinel-1-imagery-based high-resolution water cover detection on wetlands, aided by Google Earth Engine. Remote Sensing, 12(10), 1614, doi: 10.3390/rs12101614
Haile, A. T., & Rientjes, T. H. M. (2005). Effects of LiDAR DEM resolution in flood modelling: a model sensitivity study for the city of Tegucigalpa, Honduras. ISPRS WG III/3, III/4, 3, 12-14
He, W., & Yokoya, N. (2018). Multi-temporal Sentinel-1 and -2 data fusion for optical image simulation. ISPRS International Journal of Geo-Information, 7(10), 389, doi: 10.3390/ijgi7100389
Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504-507, doi: 10.1126/science.1127647
Hornik, K., Stinchcombe, M., & White, H. (1990). Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Networks, 3(5), 551-560.
Huang, C., Chen, Y., Zhang, S., & Wu, J. (2018). Detecting, extracting, and monitoring surface water from space using optical sensors: A review. Reviews of Geophysics, 56(2), 333-360, doi: 10.1029/2018RG000598
Huang, G. B., Chen, L., & Siew, C. K. (2006). Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Transactions on Neural Networks, 17(4), 879-892.
Huang, W., DeVries, B., Huang, C., Lang, M. W., Jones, J. W., Creed, I. F., & Carroll, M. L. (2018). Automated extraction of surface water extent from Sentinel-1 data. Remote Sensing, 10(5), 797, doi: 10.3390/rs10050797
Iannelli, G. C., & Gamba, P. (2018, July). Jointly exploiting Sentinel-1 and Sentinel-2 for urban mapping. In IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium (pp. 8209-8212), doi: 10.1109/IGARSS.2018.8518172
Ienco, D., Interdonato, R., Gaetano, R., & Minh, D. H. T. (2019). Combining Sentinel-1 and Sentinel-2 satellite image time series for land cover mapping via a multi-source deep learning architecture. ISPRS Journal of Photogrammetry and Remote Sensing, 158, 11-22, doi: 10.1016/j.isprsjprs.2019.09.016
Irwin, K., Beaulne, D., Braun, A., & Fotopoulos, G. (2017). Fusion of SAR, optical imagery and airborne LiDAR for surface water detection. Remote Sensing, 9(9), 890, doi: 10.3390/rs9090890
Jain, P., Schoen-Phelan, B., & Ross, R. (2020, March). Automatic flood detection in Sentinel-2 images using deep convolutional neural networks. In Proceedings of the 35th Annual ACM Symposium
on Applied Computing (pp. 617-623), doi: 10.1145/3341105.3374023
Jamali, A., Mahdianpari, M., Brisco, B., Granger, J., Mohammadimanesh, F., & Salehi, B. (2021). Comparing solo versus ensemble convolutional neural networks for wetland classification using multi-spectral satellite imagery. Remote Sensing, 13(11), 2046.
Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., ... & Darrell, T. (2014, November). Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia (pp. 675-678), doi: 10.1145/2647868.2654889
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Klein, I., Gessner, U., Dietz, A. J., & Kuenzer, C. (2017). Global WaterPack – A 250 m resolution dataset revealing the daily dynamics of global inland water bodies. Remote Sensing of Environment, 198, 345-362, doi: 10.1016/j.rse.2017.06.045
Kruskal, W. H., & Wallis, W. A. (1952). Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association, 47(260), 583-621.
Li, S., Sun, D., & Yu, Y. (2013). Automatic cloud-shadow removal from flood/standing water maps using MSG/SEVIRI imagery. International Journal of Remote Sensing, 34(15), 5487-5502, doi: 10.1080/01431161.2013.792969
Li, Y., Martinis, S., & Wieland, M. (2019). Urban flood mapping with an active self-learning convolutional neural network based on TerraSAR-X intensity and interferometric coherence. ISPRS Journal of Photogrammetry and Remote Sensing, 152, 178-191.
Li, Y., Martinis, S., Plank, S., & Ludwig, R. (2018). An automatic change detection approach for rapid
Liu, J., Gong, M., Qin, K., & Zhang, P. (2016). A deep convolutional coupling network for change detection based on heterogeneous optical and radar images. IEEE Transactions on Neural Networks and Learning Systems, 29(3), 545-559, doi: 10.1109/TNNLS.2016.2636227
Manakos, I., Kordelas, G. A., & Marini, K. (2020). Fusion of Sentinel-1 data with Sentinel-2 products to overcome non-favourable atmospheric conditions for the delineation of inundation maps. European Journal of Remote Sensing, 53(sup2), 53-66, doi: 10.1080/22797254.2019.1596757
Manfreda, S., Nardi, F., Samela, C., Grimaldi, S., Taramasso, A. C., Roth, G., and Sole, A. (2014). Investigation on the use of geomorphic approaches for the delineation of flood prone areas. Journal of Hydrology, 517, 863-876, doi: 10.1016/j.jhydrol.2014.06.009
Manjusree, P., Kumar, L. P., Bhatt, C. M., Rao, G. S., & Bhanumurthy, V. (2012). Optimization of threshold ranges for rapid flood inundation mapping by evaluating backscatter profiles of high incidence angle SAR images. International Journal of Disaster Risk Science, 3(2), 113-122, doi: 10.1007/s13753-012-0011-5
Martinis, S., Kersten, J., & Twele, A. (2015). A fully automated TerraSAR-X based flood service. ISPRS Journal of Photogrammetry and Remote Sensing, 104, 203-212, doi: 10.1016/j.isprsjprs.2014.07.014
Martinis, S., Twele, A., Strobl, C., Kersten, J. and Stein, E. (2013). A multi-scale flood monitoring system based on fully automatic MODIS and TerraSAR-X processing chains. Remote Sensing, 5(11), 5598-5619, doi: 10.3390/rs5115598
Mason, D. C., Speck, R., Devereux, B., Schumann, G. J. P., Neal, J. C., & Bates, P. D. (2009). Flood detection in urban areas using TerraSAR-X. IEEE
flood mapping in Sentinel-1 SAR data. International Transactions on Geoscience and Remote
journal of applied earth observation and Sensing, 48(2), 882-894, doi: r
geoinformation, 73, 123-135, doi: 10.1109/TGRS.2009.2029236
10.1016/j.jag.2018.05.023 Mateo-Garcia, G., Veitch-Michaelis, J., Smith, L.,
Liang, J., & Liu, D. (2020). A local thresholding Oprea, S. V., Schumann, G., Gal, Y., ... & Backes,
approach to flood water delineation using Sentinel-1 D. (2021). Towards global flood mapping onboard
SAR imagery. ISPRS Journal of Photogrammetry low cost satellites with machine learning. Scientific
and Remote Sensing, 159, 53-62.
reports, 11(1), 1-12. doi: 10.1038/s41598-021- Ohki, M., Tadono, T., Itoh, T., Ishii, K.,
86650-z Yamanokuchi, T., Watanabe, M., & Shimada, M.
Matgen, P., Hostache, R., Schumann, G., Pfister, L., (2019). Flood area detection using PALSAR-2
Hoffmann, L. and Savenije, H.H.G., 2011. Towards amplitude and coherence data: The case of the 2015
an automated SAR-based flood monitoring system: heavy rainfall in Japan. IEEE Journal of Selected
Lessons learned from two case studies. Physics and Topics in Applied Earth Observations and Remote
Chemistry of the Earth, Parts A/B/C, 36(7-8), Sensing, 12(7), 2288-2298,
pp.241-252, doi: 10.1016/j.pce.2010.12.009 doi: 10.1109/JSTARS.2019.2911596
Matgen, P., Schumann, G., Henry, J.B., Hoffmann, Pekel, J. F., Cottam, A., Gorelick, N., & Belward, A.
L. and Pfister, L., 2007. Integration of SAR-derived S. (2016). High-resolution mapping of global surface
river inundation areas, high-precision topographic water and its long-term changes. Nature, 540(7633),
data and a river flow model toward near real-time 418-422, doi:10.1038/nature20584
flood management. International Journal of Applied Pekel, J. F., Vancutsem, C., Bastin, L., Clerici, M.,
Earth Observation and Geoinformation, 9(3), Vanbogaert, E., Bartholomé, E., & Defourny, P.
pp.247-263, doi: 10.1016/j.jag.2006.03.003 (2014). A near real-time water surface detection
McFeeters, S. K. (1996). The use of the Normalized method based on HSV transformation of MODIS
Difference Water Index (NDWI) in the delineation multi-spectral time series data. Remote sensing of
of open water features. International journal of environment, 140, 704-716, doi:
remote sensing, 17(7), 1425-1432, doi: 10.1016/j.rse.2013.10.008
10.1080/01431169608948714 Peng, B., Meng, Z., Huang, Q., & Wang, C. (2019).
McNairn, H., Champagne, C., Shang, J., Holmstrom, Patch Similarity Convolutional Neural Network for
D., & Reichert, G. (2009). Integration of optical and Urban Flood Extent Mapping Using Bi-Temporal
Synthetic Aperture Radar (SAR) imagery for Satellite Multispectral Imagery. Remote
delivering operational annual crop Sensing, 11(21), 2492.
inventories. ISPRS Journal of Photogrammetry and Pham-Duc, B., Prigent, C.,
Remote Sensing, 64(5), 434-449. doi: & Aires, F. (2017). Surface water monitoring within
10.1016/j.isprsjprs.2008.07.006 Cambodia and the Vietnamese Mekong Delta over a
Mosavi, A., Ozturk, P. and Chau, K.W., 2018. Flood year, with Sentinel-1 SAR observations. Water,
prediction using machine learning models: Literature 9(6), 366. doi:10.3390/w9060366
review. Water, 10(11), p.1536, doi: Plank, S., Jüssi, M., Martinis, S., & Twele, A. (2017).
10.3390/w10111536 Mapping of flooded vegetation by means of
Musa, Z. N., Popescu, I., & Mynett, A. (2015). A polarimetric Sentinel-1 and ALOS-2/PALSAR-2
review of applications of satellite SAR, optical, imagery. International Journal of Remote
altimetry and DEM data for surface water modelling, Sensing, 38(13), 3831-3850, doi:
mapping and parameter estimation. Hydrology and 10.1080/01431161.2017.1306143
Earth System Sciences, 19(9), 3755, doi: Potnis, A. V., Shinde, R. C., Durbha, S. S., & Kurte,
10.5194/hess-19-3755-2015 K. R. (2019, July). Multi-class segmentation of urban
Nemni, E., Bullock, J., Belabbes, S., & Bromley, L. floods from multispectral imagery using deep
(2020). Fully convolutional neural network for rapid learning. In IGARSS 2019-2019 IEEE International
flood segmentation in synthetic aperture radar Geoscience and Remote Sensing Symposium (pp.
imagery. Remote Sensing, 12(16), 2532. 9741-9744). IEEE.
Oberstadler, R., Hönsch, H. and Huth, D., 1997. Rajah, P., Odindi, J., & Mutanga, O. (2018). Feature
Assessment of the mapping capabilities of ERS‐1 level image fusion of optical imagery and Synthetic
SAR data for flood mapping: a case study in Aperture Radar (SAR) for invasive alien plant
Germany. Hydrological processes, 11(10), pp.1415- species detection and mapping. Remote Sensing
1425.
Applications: Society and Environment, 10, 198- data: A case study in the St. Lucia wetlands, South
208, doi: 10.1016/j.rsase.2018.04.007 Africa. International Journal of Applied Earth
Rambour, C., Audebert, N., Koeniguer, E., Le Saux, Observation and Geoinformation, 86, 102009, doi:
B., Crucianu, M., & Datcu, M. (2020). Flood 10.1016/j.jag.2019.102009
Detection in Time Series of Optical and SAR Smith, A. R. (1978). Color gamut transform
Images. The International Archives of the pairs. ACM Siggraph Computer Graphics, 12(3), 12-
Photogrammetry, Remote Sensing and Spatial 19.
Information Sciences, 43(B2), 1343-1346. Soergel, U., Thoennessen, U., & Stilla, U. (2003,
Ronneberger, O., Fischer, P., & Brox, T. (2015, May). Visibility analysis of man-made objects in
October). U-net: Convolutional networks for SAR images. In 2003 2nd GRSS/ISPRS Joint
biomedical image segmentation. In International Workshop on Remote Sensing and Data Fusion over
Conference on Medical image computing and Urban Areas (pp. 120-124). IEEE.
computer-assisted intervention (pp. 234-241). Tanguy, M., Chokmani, K., Bernier, M., Poulin, J.,
Springer, Cham. doi: 10.1007/978-3-319-24574- & Raymond, S. (2017). River flood mapping in
4_28 urban areas combining Radarsat-2 data and flood
Saksena, Siddharth, and Venkatesh Merwade. return period data. Remote Sensing of
"Incorporating the effect of DEM resolution and Environment, 198, 442-459, doi:
accuracy for improved flood inundation 10.1016/j.rse.2017.06.042
mapping." Journal of Hydrology 530 (2015): 180- Tarpanelli, A., Santi, E., Tourian, M. J., Filippucci,
194, doi: 10.1016/j.jhydrol.2015.09.069 P., Amarnath, G., & Brocca, L. (2018). Daily river
Schmitt, M. (2020). Potential of Large-Scale Inland discharge estimates by merging satellite optical
Water Body Mapping from Sentinel-1/2 Data on the sensors and radar altimetry through artificial neural
Example of Bavaria’s Lakes and Rivers. PFG– network. IEEE Transactions on Geoscience and
Journal of Photogrammetry, Remote Sensing and Remote Sensing, 57(1), 329-341, doi:
Geoinformation Science, 88, 271-289, doi: 10.1109/TGRS.2018.2854625
10.1007/s41064-020-00111-2 Twele, A., Cao, W., Plank, S. and Martinis, S., 2016.
Schratz, Patrick, Jannes Muenchow, Eugenia Sentinel-1-based flood mapping: a fully automated
Iturritxa, Jakob Richter, and Alexander Brenning. processing chain. International Journal of Remote
2019. “Hyperparameter Tuning and Performance Sensing, 37(13), pp.2990-3004, doi:
Assessment of Statistical and Machine-Learning 10.1080/01431161.2016.1192304
Algorithms Using Spatial Data.” Ecological Wiesnet, D.R., McGinnis, D.F., and Pritchard, J.A.,
Modelling 406 (August): 109–20, doi: 1974. Mapping of the 1973 Mississippi River Floods
10.1016/j.ecolmodel.2019.06.002. by the NOAA-2 Satellite. JAWRA Journal of the
Shen, X., Anagnostou, E. N., Allen, G. H., American Water Resources Association, 10(5),
Brakenridge, G. R., & Kettner, A. J. (2019b). Near- pp.1040-1049, doi: 10.1111/j.1752-
real-time non-obstructed flood inundation mapping 1688.1974.tb00623.x
using synthetic aperture radar. Remote Sensing of Xu, H. (2006). Modification of normalised
Environment, 221, 302-315, doi: difference water index (NDWI) to enhance open
10.1016/j.rse.2018.11.008 water features in remotely sensed
Shen, X., Wang, D., Mao, K., Anagnostou, E., & imagery. International journal of remote
Hong, Y. (2019a). Inundation extent mapping by sensing, 27(14), 3025-3033, doi:
synthetic aperture radar: a review. Remote 10.1080/01431160600589179
Sensing, 11(7), 879, doi: 10.3390/rs11070879 Yang, L., Meng, X., & Zhang, X. (2011). SRTM
Slagter, B., Tsendbazar, N. E., Vollrath, A., & DEM and its application advances. International
Reiche, J. (2020). Mapping wetland characteristics Journal of Remote
using temporally dense Sentinel-1 and Sentinel-2
Resolution Terrain Analysis. Water Resources
Sensing, 32(14), 3875-3896, doi: Research, 54(12), 10-013, doi:
10.1080/01431161003786016 10.1029/2018WR023457
Yulianto, F., Sofan, P., Zubaidah, A., Sukowati, K.
A. D., Pasaribu, J. M., & Khomarudin, M. R. (2015).
Detecting areas affected by flood using multi-
temporal ALOS PALSAR remotely sensed data in
Karawang, West Java, Indonesia. Natural
Hazards, 77(2), 959-985, doi: 10.1007/s11069-015-
1633-x
Zheng, X., Maidment, D. R., Tarboton, D. G., Liu,
Y. Y., & Passalacqua, P. (2018). GeoFlood: Large‐
Scale Flood Inundation Mapping Based on High‐