Few-Shot Learning for Crop Mapping from Satellite Image Time Series
Abstract
1. Introduction
- To address crop-type mapping in label-scarce environments, we apply eight few-shot learning (FSL) methods to crop mapping. To adapt these methods, originally designed for natural images, we employ a realistic query sampling strategy based on the Dirichlet distribution and a 1D CNN equipped with a temporal pyramid pooling (TPP) layer (see the sketches after this list). The TPP layer enables the use of varying-length time series while capturing information at different levels of granularity.
- We show that effectively integrating the query sets into the learning process through an unsupervised loss function improves performance (a sketch of such a loss also follows this list).
- The literature on few-shot crop-type mapping is very limited. To inspire further research in this direction and encourage the community to design and implement new few-shot crop mapping methods, we released the full implementation code and instructions for setting up a few-shot crop mapping benchmark at https://github.com/Sina-Mohammadi/FewShotCrop (accessed on 9 March 2024). We hope that researchers can build on our work in advancing FSL for crop mapping.
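For concreteness, the following is a minimal sketch of Dirichlet-based query sampling in the style of Veilleux et al.'s realistic transductive evaluation: instead of an even class split, the class proportions of each episode's query set are drawn from a symmetric Dirichlet distribution. The function name, the per-class index pools, and the concentration value `alpha` are illustrative assumptions, not the exact implementation in the released repository.

```python
import numpy as np

def sample_dirichlet_query_set(indices_by_class, n_query, alpha=2.0, rng=None):
    """Draw an imbalanced query set whose class proportions follow a
    symmetric Dirichlet distribution rather than a uniform split.

    indices_by_class: dict mapping each episode class to an array of
        candidate sample indices (assumed larger than the drawn counts).
    n_query: total number of query samples in the episode.
    alpha: concentration parameter; smaller values give more imbalance.
    """
    rng = rng or np.random.default_rng()
    classes = list(indices_by_class)
    # Class proportions for this episode, then integer per-class counts.
    proportions = rng.dirichlet(alpha * np.ones(len(classes)))
    counts = rng.multinomial(n_query, proportions)
    query_idx, query_lab = [], []
    for cls, cnt in zip(classes, counts):
        query_idx.extend(rng.choice(indices_by_class[cls], size=cnt, replace=False))
        query_lab.extend([cls] * cnt)
    return np.array(query_idx), np.array(query_lab)
```

The second bullet refers to an unsupervised loss over the query set. A minimal sketch in the spirit of the mutual-information loss of TIM (Boudiaf et al.) is given below: minimizing it encourages confident per-query predictions while keeping the class marginal spread over all classes. The weighting used by TIM and α-TIM differs; this is an illustration under stated assumptions, not the paper's implementation.

```python
import torch

def query_mutual_information_loss(query_logits, eps=1e-12):
    """Negative mutual information between query samples and predictions:
    minimizing it lowers the per-sample prediction entropy (confidence)
    while raising the entropy of the class marginal (class coverage)."""
    probs = query_logits.softmax(dim=-1)                     # (n_query, n_way)
    marginal = probs.mean(dim=0)                             # average prediction
    marginal_entropy = -(marginal * (marginal + eps).log()).sum()
    conditional_entropy = -(probs * (probs + eps).log()).sum(dim=-1).mean()
    return conditional_entropy - marginal_entropy
```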
2. The Method
2.1. Few-Shot Learning Problem Formulation
2.2. Transductive Methods
2.3. Inductive Methods
3. The Study Areas and the FewCrop Benchmark
3.1. The Study Areas
3.2. The FewCrop Benchmark
4. Experiments
4.1. Performance Comparison Using the Standard Five-Way Setting
4.2. Comparison of the Classification Performance Obtained Using the Number of Classes in the Target Datasets as the Number of Ways
4.3. Results Obtained When Removing Base Training on the Source Domain
4.4. Classification Results Obtained When Removing the Temporal Pyramid Pooling Layer
5. Discussion
5.1. Interpretation of the Reported Classification Results
5.2. Limitations and Future Work
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Dataset | Scenario 1 | Scenario 2 |
|---|---|---|
| Source dataset | 7,254,310 | 6,434,782 |
| Validation dataset | 6,465,945 | 494,954 |
| Target dataset | 1,164,896 | 324,574 |
Scenario 1: Ghana (Five-way)

| | Method | 1-shot | 5-shot | 10-shot | 20-shot |
|---|---|---|---|---|---|
| Induct. | Protonet | 41.5 (↑ 2.7) | 56.4 (↑ 3.2) | 61.6 (↑ 3.2) | 63.8 (↑ 2.9) |
| | MetaOptNet | 39.9 (↑ 2.7) | 54.4 (↑ 3.5) | 61.9 (↑ 3.4) | 67.7 (↑ 3.3) |
| | Baseline | 42.5 (↑ 2.7) | 57.3 (↑ 2.8) | 61.4 (↑ 2.8) | 64.7 (↑ 2.9) |
| | SimpleShot | 42.0 (↑ 2.9) | 56.8 (↑ 2.9) | 60.5 (↑ 2.9) | 63.0 (↑ 2.9) |
| Transduct. | MAML | 38.9 (↑ 3.2) | 57.8 (↑ 3.9) | 63.9 (↑ 3.8) | 66.9 (↑ 3.7) |
| | Entropy-min | 38.8 (↑ 1.2) | 57.5 (↑ 0.3) | 61.0 (↑ 1.2) | 62.8 (↓ 0.1) |
| | TIM | 42.9 (↑ 10.5) | 52.7 (↑ 13.8) | 56.5 (↑ 15.3) | 59.3 (↑ 16.7) |
| | α-TIM | 43.4 (↑ 3.7) | 60.4 (↑ 1.9) | 68.8 (↑ 2.1) | 75.2 (↑ 2.3) |
| | DTW | 40.3 (↑ 3.0) | 56.9 (↑ 3.3) | 64.2 (↑ 3.3) | 70.9 (↑ 3.1) |

Scenario 2: Infrequent Crop Types of France (Five-way)

| | Method | 1-shot | 5-shot | 10-shot | 20-shot |
|---|---|---|---|---|---|
| Induct. | Protonet | 47.0 (↑ 3.0) | 63.5 (↑ 3.9) | 69.8 (↑ 4.0) | 73.1 (↑ 4.0) |
| | MetaOptNet | 43.5 (↑ 2.9) | 55.3 (↑ 3.9) | 66.3 (↑ 4.0) | 71.1 (↑ 4.0) |
| | Baseline | 46.2 (↑ 3.1) | 68.4 (↑ 3.8) | 73.3 (↑ 3.9) | 76.1 (↑ 3.7) |
| | SimpleShot | 48.8 (↑ 3.4) | 68.4 (↑ 3.8) | 73.2 (↑ 3.9) | 75.9 (↑ 3.7) |
| Transduct. | MAML | 39.5 (↑ 3.4) | 60.0 (↑ 5.0) | 67.2 (↑ 5.1) | 71.2 (↑ 5.2) |
| | Entropy-min | 46.3 (↑ 4.1) | 67.6 (↑ 6.8) | 72.9 (↑ 5.5) | 75.6 (↑ 4.7) |
| | TIM | 48.8 (↑ 12.0) | 60.0 (↑ 16.7) | 62.8 (↑ 17.6) | 64.7 (↑ 18.3) |
| | α-TIM | 48.1 (↑ 3.9) | 66.5 (↑ 1.0) | 75.5 (↑ 2.1) | 80.8 (↑ 2.3) |
| | DTW | 26.2 (↑ 2.2) | 39.3 (↑ 3.4) | 46.3 (↑ 3.8) | 54.7 (↑ 4.3) |
Scenario 1: Ghana (24-way)

| | Method | 1-shot | 5-shot | 10-shot | 20-shot |
|---|---|---|---|---|---|
| Induct. | Protonet | 29.0 (↑ 2.0) | 38.9 (↑ 2.4) | 42.5 (↑ 2.3) | 43.6 (↑ 2.3) |
| | MetaOptNet | 26.3 (↑ 2.1) | 35.8 (↑ 3.2) | - | - |
| | Baseline | 29.4 (↑ 2.2) | 39.6 (↑ 2.2) | 42.1 (↑ 2.2) | 44.5 (↑ 2.4) |
| | SimpleShot | 28.9 (↑ 2.2) | 39.1 (↑ 2.2) | 41.2 (↑ 2.2) | 42.6 (↑ 2.2) |
| Transduct. | MAML | - | - | - | - |
| | Entropy-min | 27.4 (↑ 1.7) | 37.9 (↑ 1.2) | 39.1 (↑ 1.2) | 39.6 (↑ 1.3) |
| | TIM | 28.0 (↑ 7.1) | 38.1 (↑ 10.0) | 41.5 (↑ 11.3) | 44.0 (↑ 12.5) |
| | α-TIM | 28.8 (↑ 1.0) | 44.8 (↑ 2.2) | 52.6 (↑ 2.5) | 59.6 (↑ 2.7) |
| | DTW | 27.6 (↑ 2.3) | 42.7 (↑ 2.9) | 49.9 (↑ 3.1) | 56.9 (↑ 3.1) |

Scenario 2: Infrequent Crop Types of France (Seven-way)

| | Method | 1-shot | 5-shot | 10-shot | 20-shot |
|---|---|---|---|---|---|
| Induct. | Protonet | 39.3 (↑ 3.0) | 56.0 (↑ 4.0) | 63.2 (↑ 4.2) | 67.0 (↑ 4.3) |
| | MetaOptNet | 35.9 (↑ 2.6) | 49.0 (↑ 3.9) | 61.1 (↑ 4.0) | 66.3 (↑ 4.1) |
| | Baseline | 38.8 (↑ 2.7) | 61.4 (↑ 4.1) | 67.0 (↑ 4.2) | 70.2 (↑ 4.1) |
| | SimpleShot | 41.0 (↑ 3.1) | 61.4 (↑ 4.2) | 66.9 (↑ 4.2) | 69.8 (↑ 4.2) |
| Transduct. | MAML | - | - | - | - |
| | Entropy-min | 36.9 (↑ 4.3) | 60.4 (↑ 7.2) | 66.1 (↑ 6.2) | 68.9 (↓ 0.1) |
| | TIM | 41.4 (↑ 9.5) | 54.4 (↑ 15.2) | 57.9 (↑ 16.3) | 60.3 (↑ 17.3) |
| | α-TIM | 37.4 (↑ 2.5) | 58.4 (↑ 1.4) | 69.4 (↑ 2.2) | 75.9 (↑ 2.6) |
| | DTW | 21.0 (↑ 1.8) | 33.0 (↑ 3.0) | 40.1 (↑ 3.6) | 48.9 (↑ 4.3) |
Scenario 1: Ghana

| | Method | Five-way 5-shot | Five-way 20-shot | 24-way 5-shot | 24-way 20-shot |
|---|---|---|---|---|---|
| Induct. | Protonet | 47.6 (↑ 8.8) | 49.5 (↑ 14.3) | 29.4 (↑ 9.5) | 29.2 (↑ 14.4) |
| | MetaOptNet | 57.5 (↓ 3.1) | 72.1 (↓ 4.4) | 43.7 (↓ 7.9) | - |
| | Baseline | 48.0 (↑ 9.3) | 51.0 (↑ 13.7) | 29.1 (↑ 10.5) | 29.3 (↑ 15.2) |
| | SimpleShot | 47.1 (↑ 9.7) | 50.5 (↑ 12.5) | 28.8 (↑ 10.3) | 28.4 (↑ 14.2) |
| Transduct. | MAML | 34.0 (↑ 23.8) | 35.1 (↑ 31.8) | - | - |
| | Entropy-min | 57.5 | 61.4 (↑ 1.4) | 37.6 (↑ 0.3) | 39.1 (↑ 0.5) |
| | TIM | 51.7 (↑ 1.0) | 56.4 (↑ 2.9) | 33.6 (↑ 4.5) | 35.4 (↑ 8.6) |
| | α-TIM | 56.1 (↑ 4.3) | 67.4 (↑ 7.8) | 39.0 (↑ 5.8) | 43.9 (↑ 15.7) |

Scenario 2: Infrequent Crop Types of France

| | Method | Five-way 5-shot | Five-way 20-shot | Seven-way 5-shot | Seven-way 20-shot |
|---|---|---|---|---|---|
| Induct. | Protonet | 37.5 (↑ 26.0) | 46.2 (↑ 26.9) | 30.7 (↑ 25.3) | 35.4 (↑ 31.6) |
| | MetaOptNet | 47.1 (↑ 8.2) | 67.0 (↑ 4.1) | 42.2 (↑ 6.8) | 60.6 (↑ 5.7) |
| | Baseline | 37.0 (↑ 31.4) | 43.7 (↑ 32.4) | 29.9 (↑ 31.5) | 35.7 (↑ 34.5) |
| | SimpleShot | 38.5 (↑ 29.9) | 44.6 (↑ 31.3) | 31.4 (↑ 30.0) | 36.7 (↑ 33.1) |
| Transduct. | MAML | 36.9 (↑ 23.1) | 45.9 (↑ 25.3) | - | - |
| | Entropy-min | 61.0 (↑ 6.6) | 68.1 (↑ 7.5) | 55.3 (↑ 5.1) | 63.1 (↑ 5.8) |
| | TIM | 40.9 (↑ 19.1) | 50.5 (↑ 14.2) | 34.7 (↑ 19.7) | 43.8 (↑ 16.5) |
| | α-TIM | 46.7 (↑ 19.8) | 64.4 (↑ 16.4) | 40.4 (↑ 18.0) | 57.1 (↑ 18.8) |
Scenario 1: Ghana (24-way)

| Method | 1-shot | 5-shot | 10-shot | 20-shot |
|---|---|---|---|---|
| Entropy-min | 19.5 (↑ 7.9) | 20.3 (↑ 17.6) | 23.7 (↑ 15.4) | 22.7 (↑ 16.9) |
| TIM | 28.4 (↓ 0.4) | 34.1 (↑ 4.0) | 35.6 (↑ 5.9) | 36.8 (↑ 7.2) |
| α-TIM | 26.7 (↑ 2.1) | 39.0 (↑ 5.8) | 41.5 (↑ 11.1) | 43.5 (↑ 16.1) |

p-value < 0.05.

Scenario 2: Infrequent Crop Types of France (Seven-way)

| Method | 1-shot | 5-shot | 10-shot | 20-shot |
|---|---|---|---|---|
| Entropy-min | 27.4 (↑ 9.5) | 35.9 (↑ 24.5) | 37.3 (↑ 28.8) | 37.6 (↑ 31.3) |
| TIM | 37.4 (↑ 4.0) | 50.3 (↑ 4.1) | 54.2 (↑ 3.7) | 56.7 (↑ 3.6) |
| α-TIM | 35.7 (↑ 1.7) | 55.9 (↑ 2.5) | 62.1 (↑ 7.3) | 66.3 (↑ 9.6) |

p-value < 0.05.
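The tables above ablate the temporal pyramid pooling (TPP) layer described in the contributions. For reference, a minimal sketch of how such a layer can map variable-length 1D CNN feature maps to fixed-size vectors follows; the pyramid levels (1, 2, 4) and the choice of max pooling are assumptions for illustration, not necessarily the configuration used in the paper.

```python
import torch
import torch.nn.functional as F

def temporal_pyramid_pooling(features, levels=(1, 2, 4)):
    """Pool a variable-length temporal feature map to a fixed-size vector.

    features: (batch, channels, time) output of a 1D CNN; `time` may
        differ between input time series.
    levels: number of temporal bins at each pyramid level; level 1 is
        global pooling, finer levels keep more temporal detail.
    Returns a tensor of shape (batch, channels * sum(levels)).
    """
    pooled = [
        F.adaptive_max_pool1d(features, output_size=bins).flatten(start_dim=1)
        for bins in levels
    ]
    return torch.cat(pooled, dim=1)
```

Because the output size depends only on `levels` and the channel count, time series of different lengths yield embeddings of identical dimensionality, which is what allows a single classifier head to operate across regions with different acquisition schedules.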