Figure 1.
Overall architecture of the proposed method. First, multiple deep features are extracted from pre-trained ConvNets. Second, the deep local features are processed by the feature-map selection algorithm and the region descriptor. Finally, an addition descriptor fuses the two-stream features for classification.
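A minimal sketch of the two-stream pipeline described in the Figure 1 caption, assuming a VGG-16 backbone from torchvision; the tapped layers, the pooling, and the element-wise addition used for fusion are illustrative choices, not necessarily the exact configuration of the proposed MDFR method.

```python
# Illustrative two-stream feature extraction and addition-based fusion (assumed setup).
import torch
import torchvision.models as models

# Convolutional part of VGG-16; in practice the ImageNet-pretrained weights would be loaded.
backbone = models.vgg16().features.eval()

# Tap the last ReLU of blocks 3-5 (indices are specific to torchvision's VGG-16).
tap_layers = {15: "c_3", 22: "c_4", 29: "c_5"}
feature_maps = {}

def make_hook(name):
    def hook(module, inputs, output):
        feature_maps[name] = output
    return hook

for idx, name in tap_layers.items():
    backbone[idx].register_forward_hook(make_hook(name))

x = torch.randn(1, 3, 224, 224)                     # one RGB scene image
with torch.no_grad():
    backbone(x)

# Stream 1: global descriptor from the deepest maps (global average pooling).
global_desc = feature_maps["c_5"].mean(dim=(2, 3))  # [1, 512]
# Stream 2: local/region descriptor from an intermediate layer (placeholder pooling
# standing in for the feature-map selection and region descriptor of the paper).
local_desc = feature_maps["c_4"].mean(dim=(2, 3))   # [1, 512]

# Addition-based fusion of the two streams; a classifier would follow.
fused = global_desc + local_desc
print(fused.shape)                                  # torch.Size([1, 512])
```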
Figure 2.
Selection and fusion process of different feature points.
Figure 3.
Sample images of the UCM dataset: (1) Agricultural, (2) Airplane, (3) Baseball diamond, (4) Beach, (5) Building, (6) Chaparral, (7) Dense residential, (8) Forest, (9) Freeway, (10) Golf course, (11) Harbor, (12) Intersection, (13) Medium residential, (14) Mobile-homepark, (15) Overpass, (16) Parking lot, (17) River, (18) Runway, (19) Sparse residential, (20) Storage tanks, (21) Tennis court.
Figure 4.
Sample images of the AID dataset: (1) Airport, (2) Bare land, (3) Baseball field, (4) Beach, (5) Bridge, (6) Center, (7) Church, (8) Commercial, (9) Dense residential, (10) Desert, (11) Farmland, (12) Forest, (13) Industrial, (14) Meadow, (15) Medium residential, (16) Mountain, (17) Park, (18) Parking lot, (19) Playground, (20) Pond, (21) Port, (22) Railway station, (23) Resort, (24) River, (25) School, (26) Sparse residential, (27) Square, (28) Stadium, (29) Storage tanks, (30) Viaduct.
Figure 5.
Sample images of the NWPU dataset: (1) Airplane, (2) Bridge, (3) Church, (4) Circular_farmland, (5) Dense_residential, (6) Desert, (7) Forest, (8) Freeway, (9) Golf_course, (10) Ground_track_field, (11) Harbor, (12) Industrial_area, (13) Intersection, (14) Island, (15) Lake, (16) Meadow, (17) Mobile_home_park, (18) Mountain, (19) Overpass, (20) Palace, (21) Parking_lot, (22) Railway, (23) Railway_station, (24) Rectangular_farmland, (25) River, (26) Sea_ice, (27) Tennis_court, (28) Terrace, (29) Thermal_power_station, (30) Wetland, (31) Airport, (32) Baseball_diamond, (33) Basketball_court, (34) Beach, (35) Chaparral, (36) Cloud, (37) Commercial_area, (38) Medium_residential, (39) Roundabout, (40) Runway, (41) Ship, (42) Snowberg, (43) Sparse_residential, (44) Stadium, (45) Storage_tank.
Figure 6.
Classification performance of the proposed MDFR framework with different parameter values.
Figure 7.
Visualization results of different datasets. (a) UCM dataset. (b) AID dataset. The top figures show the features before selection and the bottom figures show the features after selection.
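As a rough illustration of the "before/after selection" views in Figure 7, the sketch below keeps only the feature maps with the largest mean activation; this energy-based criterion is an assumption made for demonstration and may differ from the paper's actual selection algorithm.

```python
# Hypothetical channel selection: keep the most strongly activated feature maps.
import torch

def select_feature_maps(fmaps: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """fmaps: [C, H, W]; keep the top keep_ratio fraction of channels by mean activation."""
    energy = fmaps.mean(dim=(1, 2))                  # one score per channel
    k = max(1, int(keep_ratio * fmaps.shape[0]))
    keep = torch.topk(energy, k).indices
    return fmaps[keep]

maps = torch.rand(512, 14, 14)        # e.g. conv5 maps of a 224 x 224 scene image
selected = select_feature_maps(maps, keep_ratio=0.25)
print(selected.shape)                 # torch.Size([128, 14, 14])
```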
Figure 8.
Confusion matrix and all misclassification examples for the UCM dataset using the proposed MDFR architecture. (a) Confusion matrix of the UCM dataset with OA = 98.57%. (b) Misclassified samples of the UCM dataset.
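For reference, confusion matrices and overall accuracy (OA) values such as those reported in Figures 8-10 can be computed as below; the label arrays are placeholders, not the actual predictions.

```python
# Confusion matrix and overall accuracy from ground-truth and predicted class indices.
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = [0, 0, 1, 2, 2, 2]           # ground-truth category indices (cf. Table 2)
y_pred = [0, 1, 1, 2, 2, 0]           # classifier predictions (placeholder values)

cm = confusion_matrix(y_true, y_pred) # rows: true class, columns: predicted class
oa = accuracy_score(y_true, y_pred)   # overall accuracy, reported as OA
print(cm)
print(f"OA = {oa:.2%}")
```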
Figure 9.
Confusion matrix and some misclassification examples for the AID dataset using the proposed MDFR architecture. (a) Confusion matrix of the AID dataset with OA = 93.64%. (b) Some misclassified samples of the AID dataset.
Figure 10.
Confusion matrix and some misclassification examples for the NWPU dataset using the proposed MDFR architecture. (a) Confusion matrix of the NWPU dataset with OA = 86.89%. (b) Some misclassified samples of the NWPU dataset.
Figure 11.
Box plots of different methods. From top to bottom: (a) UCM, (b) AID, (c) NWPU.
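A box plot like Figure 11 summarizes the spread of OA over repeated runs for each method; the sketch below shows one way to draw such a plot with matplotlib, using placeholder accuracies rather than the reported results.

```python
# Box plot of per-run overall accuracies for several methods (illustrative values only).
import matplotlib.pyplot as plt

runs = {
    "Method A": [94.8, 95.3, 95.0, 95.6, 94.9],
    "Method B": [96.1, 96.6, 96.3, 96.8, 96.0],
    "Method C": [97.9, 98.2, 98.6, 98.0, 98.4],
}
plt.boxplot(list(runs.values()), labels=list(runs.keys()))
plt.ylabel("Overall accuracy (%)")
plt.title("Repeated runs on one dataset (placeholder data)")
plt.show()
```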
Table 1.
Classification accuracy of different convolutional layers (%).
Feature Size | Method | OA | Feature Size | Method | OA | Feature Size | Method | OA |
---|---|---|---|---|---|---|---|---|
3k | c_1 | 66.32 | 12k | c_2_3 | 96.49 | 28k | c_1_3_4 | 97.12 |
3k | c_2 | 91.53 | 12k | c_2_4 | 96.50 | 28k | c_1_3_5 | 96.64 |
3k | c_3 | 96.40 | 12k | c_2_5 | 96.45 | 28k | c_1_4_5 | 97.15 |
3k | c_4 | 96.83 | 12k | c_3_4 | 97.10 | 28k | c_2_3_4 | 97.20 |
3k | c_5 | 96.11 | 12k | c_3_5 | 97.02 | 28k | c_2_3_5 | 96.27 |
12k | c_1_2 | 92.69 | 12k | c_4_5 | 97.05 | 28k | c_2_4_5 | 97.23 |
12k | c_1_3 | 95.57 | 28k | c_1_2_3 | 95.87 | 28k | c_3_4_5 | 97.25 |
12k | c_1_4 | 96.31 | 28k | c_1_2_4 | 96.51 | 28k | – | – |
12k | c_1_5 | 94.66 | 28k | c_1_2_5 | 95.76 | 28k | – | – |
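The layer labels in Table 1 (c_1 through c_5 and combinations such as c_3_4_5) indicate that per-layer descriptors are combined before classification; a minimal sketch of such a combination by concatenation follows, with random placeholder descriptors whose sizes are illustrative and do not reproduce the 3k/12k/28k figures in the table.

```python
# Combining descriptors from selected convolutional layers (assumed to be concatenation).
import numpy as np

rng = np.random.default_rng(0)
layer_descriptors = {                      # placeholder per-layer descriptors
    "c_3": rng.standard_normal(256),
    "c_4": rng.standard_normal(512),
    "c_5": rng.standard_normal(512),
}

def combine(layer_names):
    """Concatenate the descriptors of the chosen layers, e.g. c_3_4_5."""
    return np.concatenate([layer_descriptors[name] for name in layer_names])

c_3_4_5 = combine(["c_3", "c_4", "c_5"])   # combined feature fed to the classifier
print(c_3_4_5.shape)                       # (1280,)
```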
Table 2.
Index numbers and corresponding categories of the UCM dataset.
Number | Category | Number | Category | Number | Category |
---|---|---|---|---|---|
#1 | Agricultural | #8 | Forest | #15 | Overpass |
#2 | Airplane | #9 | Freeway | #16 | Parking lot |
#3 | Baseball diamond | #10 | Golf course | #17 | River |
#4 | Beach | #11 | Harbor | #18 | Runway |
#5 | Building | #12 | Intersection | #19 | Sparse residential |
#6 | Chaparral | #13 | Medium residential | #20 | Storage tanks |
#7 | Dense residential | #14 | Mobile-homepark | #21 | Tennis court |
Table 3.
Index numbers and corresponding categories of the AID dataset.
Number | Category | Number | Category | Number | Category |
---|---|---|---|---|---|
#1 | Airport | #11 | Farmland | #21 | Port |
#2 | Bare land | #12 | Forest | #22 | Railway station |
#3 | Baseball field | #13 | Industrial | #23 | Resort |
#4 | Beach | #14 | Meadow | #24 | River |
#5 | Bridge | #15 | Medium residential | #25 | School |
#6 | Center | #16 | Mountain | #26 | Sparse residential |
#7 | Church | #17 | Park | #27 | Square |
#8 | Commercial | #18 | Parking lot | #28 | Stadium |
#9 | Dense residential | #19 | Playground | #29 | Storage tanks |
#10 | Desert | #20 | Pond | #30 | Viaduct |
Table 4.
Classification accuracies of different features. The best result is in bold.
Feature | 1st_Feature | OA | 2nd_Feature | OA | 3rd_Feature | OA |
---|---|---|---|---|---|---|
Low-level | SIFT | 32.57% | CH | 46.30% | SIFT ⊕ CH | 62.55% |
Mid-level | IFK (SIFT) | 82.08% | IFK (CH) | 80.38% | IFK (SIFT) ⊕ IFK (CH) | 90.47% |
High-level | | 97.52% | | 95.83% | | **98.02%** |
Table 5.
Comparison of classification accuracy with other methods (UCM). The best result is in bold.
Method | Feature Size | OA (%) |
---|---|---|
CaffeNet [51] | 4k | 95.02 ± 0.81 |
Scenario(II) [39] | >50k | 96.90 ± 0.77 |
GLDFB [41] | 7k | 97.62 |
GLM16 [49] | 7k | 94.97 ± 1.16 |
AlexNet + MSCP [53] | 29k | 97.29 ± 0.63 |
MDFR | 10k | **98.02 ± 0.51** |
Table 6.
Comparison of overall accuracy (OA, %) with other methods on the AID and NWPU datasets. The best results are in bold.
Method | AID (50%) | AID (20%) | NWPU (20%) | NWPU (10%) |
---|---|---|---|---|
CaffeNet [51] | 89.53 ± 0.31 | 86.86 ± 0.47 | 81.08 ± 0.21 | 78.01 ± 0.27 |
AlexNet [1] | - | - | 79.85 ± 0.13 | 76.69 ± 0.21 |
VGG-VD-16 [51] | 89.64 ± 0.36 | 86.59 ± 0.29 | - | - |
Fine-tuned GoogLeNet [1] | - | - | 86.02 ± 0.18 | 82.57 ± 0.12 |
DCA [48] | 89.71 ± 0.33 | - | - | - |
AlexNet+MSCP [53] | 92.36 ± 0.21 | 88.99 ± 0.38 | 85.58 ± 0.16 | 81.70 ± 0.23 |
MDFR | **93.37 ± 0.29** | **90.62 ± 0.27** | **86.89 ± 0.17** | **83.37 ± 0.26** |