Region Segmentation of Whole-Slide Images for Analyzing Histological Differentiation of Prostate Adenocarcinoma Using Ensemble EfficientNetB2 U-Net with Transfer Learning Mechanism
Simple Summary
Abstract
1. Introduction
2. Related Works
3. Materials and Methods
3.1. Dataset
3.2. Image Preprocessing
3.3. Tissue Region Segmentation
3.3.1. Transfer Learning
3.3.2. Network Architecture
- UNet Architecture: UNet [39] was proposed in 2015. The network consists of encoder and decoder convolutional blocks, with a bottleneck layer and skip connections that propagate encoded features to the decoder blocks. Each encoder block applies two 3 × 3 convolution layers, each followed by a rectified linear unit (ReLU), and a 2 × 2 max-pooling layer that down-samples the feature map by a factor of 2 for the next block. The resulting feature maps are passed through the bottleneck layer to the decoder blocks, which up-sample them with 2 × 2 up-convolutional layers to recover a segmented image. The bottleneck layer mediates between the contraction and expansion paths: context is captured in a compact feature map along the encoder's contraction route (a minimal U-Net sketch follows this list).
- ResNet-34 UNet Architecture: ResNet-34 [40] is a CNN architecture built from a stack of residual blocks. It differs from other CNNs in its use of shortcut connections [41], which allow residual building blocks to skip convolutional layers: the block output is computed by adding the input features to the output of the residual branch via the skip link. This alleviates the vanishing-gradient problem as the depth of the network increases; when introduced, ResNet easily won that year's ImageNet competition. Figure 5 shows ResNet-34 as a backbone model of UNet [42,43]. Each residual block is constructed from convolutional layers (Conv), batch normalization (BN), a ReLU activation function, and a shortcut (see the residual-block sketch after this list). The full ResNet-34 comprises 33 convolutional layers, a 3 × 3 max-pooling layer, and an average-pooling layer followed by a fully connected layer.
- ResNeXt50 UNet Architecture: ResNeXt, a variant of ResNet, was developed in 2017 by Xie et al. [44]. The primary distinction from ResNet is that, instead of simply stacking blocks one after another, ResNeXt introduces 'cardinality'—the size of the set of parallel transformations aggregated in each block—drawing inspiration from Inception/GoogLeNet. Notably, ResNeXt outperformed ResNet in the ImageNet challenge despite having fewer parameters. In this study, the U-Net design was enhanced with a ResNeXt50 backbone and applied to the same dataset. The idea behind ResNeXt is to aggregate several transformations with identical topology within repeated building blocks; experimenting with cardinality offers benefits beyond increasing depth and width alone, so network accuracy can be improved more efficiently by raising cardinality (see the grouped-convolution sketch after this list).
- InceptionV3 UNet Architecture: In 2014, Szegedy et al. introduced GoogLeNet, commonly known as Inception [45]. At the time, it was one of the largest and most effective classification networks. GoogLeNet/Inception is more computationally efficient than VGG in terms of parameters and resource costs, including memory, and it lowered the network's top-5 classification error rate to 6.67%. GoogLeNet/Inception has evolved through several versions, including Inception-v1, Inception-v2, Inception-v3, Inception-v4, and Inception-ResNet. The version employed in this study, Inception-v3, was applied to improve network accuracy while lowering computational costs. Inception-v3 consists of 42 layers and around 24 million parameters. The network relies on a multilevel feature extractor known as the inception block, which combines filters of various sizes (1 × 1, 3 × 3, and 5 × 5) in parallel branches (see the inception-block sketch after this list). A convolutional layer with a filter size of 1 × 1 is used within the network to reduce dimensionality, and global average pooling is used in place of fully connected layers.
- EfficientNetB2 UNet Architecture: EfficientNet [46] is a CNN family and scaling technique that uses a compound coefficient to scale network depth, width, and input resolution in a consistent way. In contrast to standard practice, which scales these variables arbitrarily, EfficientNet scales all three uniformly with a fixed set of predetermined scaling coefficients (the compound-scaling rule is illustrated after this list). The baseline network, EfficientNet-B0, was first obtained by Neural Architecture Search (NAS) using the AutoML MNAS framework [46]; this baseline was then scaled up to obtain the EfficientNet family (EfficientNet-B1 to EfficientNet-B7). In general, EfficientNet is constructed from the mobile inverted bottleneck (MBConv) building block [47].
- Ensemble EfficientNetB2 U-Net Architecture: The major sources of inspiration for this research were ensemble learning and UNet with an EfficientNet-B2 backbone. To construct the proposed ensemble UNet model, we employed a multi-head pretrained CNN (EfficientNet-B2) to encode feature maps from the input image and applied a fine-tuning technique, 'freezing,' to accelerate neural network training by progressively freezing hidden layers. Freezing a layer in the CNN controls how weights are updated during backpropagation: if a layer is frozen, its weights are not updated during model learning. In this study, we applied three freezing schemes for feature extraction in the encoder blocks, as shown in Figure 4b–d. The decoder blocks received the encoded feature maps, up-sampled them, and concatenated each up-sampling output with the output of the corresponding encoder stage. The outputs of the three fine-tuned EfficientNetB2-UNet architectures were concatenated to create the final output of the proposed ensemble model, as shown in Figure 6 (a simplified sketch of the ensemble follows this list). The encoder block (five down-sampling steps) contained a 3 × 3 convolutional layer, 22 MBConv structures, and a 1 × 1 convolutional layer (Table 1). The decoder block (four up-sampling steps) was constructed from up-sampling, concatenation, and 3 × 3 convolutional layers.
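To make the U-Net structure (first item above) concrete, the following is a minimal sketch in tf.keras; the input size, depth, and filter counts are illustrative assumptions rather than the exact configuration used in this study.

```python
# Minimal U-Net sketch (illustrative; not the paper's exact configuration).
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3 x 3 convolutions, each followed by ReLU
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 3), n_classes=3):
    inputs = layers.Input(input_shape)
    # Encoder: each block ends with 2 x 2 max-pooling that halves the size
    c1 = conv_block(inputs, 64)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, 128)
    p2 = layers.MaxPooling2D(2)(c2)
    # Bottleneck between the contraction and expansion paths
    b = conv_block(p2, 256)
    # Decoder: 2 x 2 up-convolution, then concatenation with the skip feature
    u2 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 128)
    u1 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 64)
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(c4)
    return Model(inputs, outputs)
```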
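The residual building block of the ResNet-34 backbone (second item above) can be sketched as follows (Conv–BN–ReLU with an additive shortcut); the filter count and stride are illustrative.

```python
from tensorflow.keras import layers

def residual_block(x, filters, stride=1):
    # Main branch: Conv -> BN -> ReLU -> Conv -> BN
    shortcut = x
    y = layers.Conv2D(filters, 3, strides=stride, padding="same",
                      use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    # Project the shortcut with a 1 x 1 convolution when shapes differ
    if stride != 1 or shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, strides=stride,
                                 use_bias=False)(shortcut)
        shortcut = layers.BatchNormalization()(shortcut)
    # Adding input and residual output lets gradients bypass the conv layers
    return layers.Activation("relu")(layers.Add()([y, shortcut]))
```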
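Cardinality in ResNeXt (third item above) is typically realized as a grouped 3 × 3 convolution between two 1 × 1 convolutions. A minimal sketch follows; the block widths are illustrative, and the `groups` argument of `Conv2D` requires TensorFlow 2.3 or later.

```python
from tensorflow.keras import layers

def resnext_block(x, filters=256, cardinality=32, width_per_group=4):
    # 1 x 1 reduce -> grouped 3 x 3 (cardinality parallel paths) -> 1 x 1 expand
    shortcut = x
    width = cardinality * width_per_group  # e.g., 32 groups x 4 channels
    y = layers.Conv2D(width, 1, padding="same", activation="relu")(x)
    # The `groups` argument splits the channels into `cardinality` paths
    y = layers.Conv2D(width, 3, padding="same", groups=cardinality,
                      activation="relu")(y)
    y = layers.Conv2D(filters, 1, padding="same")(y)
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))
```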
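The inception block (fourth item above) combines parallel branches with different filter sizes, using 1 × 1 convolutions to reduce dimensionality before the larger filters. A sketch with illustrative branch widths:

```python
from tensorflow.keras import layers

def inception_block(x, f1=64, f3=96, f5=32, fp=32):
    # Branch 1: plain 1 x 1 convolution
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    # Branch 2: 1 x 1 reduction followed by 3 x 3 convolution
    b3 = layers.Conv2D(f3, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(b3)
    # Branch 3: 1 x 1 reduction followed by 5 x 5 convolution
    b5 = layers.Conv2D(f5, 1, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(b5)
    # Branch 4: pooling followed by 1 x 1 projection
    bp = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    bp = layers.Conv2D(fp, 1, padding="same", activation="relu")(bp)
    # Channel-wise concatenation of all branches
    return layers.concatenate([b1, b3, b5, bp])
```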
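The compound-scaling rule of EfficientNet (fifth item above) sets depth = α^φ, width = β^φ, and resolution = γ^φ for a user-chosen coefficient φ, subject to α·β²·γ² ≈ 2 so that each unit increase in φ roughly doubles FLOPS. With the base coefficients reported in [46] (α = 1.2, β = 1.1, γ = 1.15), a small sketch (treating B2 as φ = 2 is an approximation for illustration only):

```python
# Compound scaling: depth, width, and resolution grow together with phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # base coefficients from [46]

def compound_scale(phi):
    depth_mult = ALPHA ** phi        # multiplier on the number of layers
    width_mult = BETA ** phi         # multiplier on the channel counts
    resolution_mult = GAMMA ** phi   # multiplier on the input resolution
    return depth_mult, width_mult, resolution_mult

d, w, r = compound_scale(2)  # roughly the EfficientNet-B2 operating point
print(f"depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```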
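The ensembling and freezing logic (last item above) can be sketched as follows. This is a simplified illustration only: the encoder–decoder skip connections of Figure 6 are omitted for brevity, and the freezing points (0, 100, and 300 layers) are hypothetical stand-ins for the three schemes of Figure 4b–d.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_CLASSES = 3  # e.g., stroma, benign, cancer

def make_head(freeze_until, inputs):
    # Pretrained EfficientNet-B2 encoder; frozen layers keep their weights
    # fixed during backpropagation
    base = tf.keras.applications.EfficientNetB2(
        include_top=False, weights="imagenet", input_shape=(256, 256, 3))
    for layer in base.layers[:freeze_until]:
        layer.trainable = False
    x = base(inputs)  # calling the sub-model keeps outer layer names unique
    # Lightweight decoder: five 2x up-sampling steps back to 256 x 256
    for filters in (256, 128, 64, 32, 16):
        x = layers.UpSampling2D(2)(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(N_CLASSES, 1, activation="softmax")(x)

inputs = layers.Input((256, 256, 3))
# Three heads with different (hypothetical) freezing points
heads = [make_head(n, inputs) for n in (0, 100, 300)]
# Concatenate the head outputs and fuse them into the final prediction
merged = layers.concatenate(heads)
outputs = layers.Conv2D(N_CLASSES, 1, activation="softmax")(merged)
ensemble = Model(inputs, outputs)
```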
4. Results and Discussion
4.1. Region Segmentation Results
4.2. Slide-Level Prediction
5. Conclusions
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Pernar, C.H.; Ebot, E.M.; Wilson, K.M.; Mucci, L.A. The Epidemiology of Prostate Cancer. Cold Spring Harb. Perspect. Med. 2018, 8, a030361.
2. Bashir, M.N. Epidemiology of Prostate Cancer. Asian Pac. J. Cancer Prev. 2015, 16, 5137–5141.
3. Gnanapragasam, V.J.; Bratt, O.; Muir, K.; Lee, L.S.; Huang, H.H.; Stattin, P.; Lophatananon, A. The Cambridge Prognostic Groups for Improved Prediction of Disease Mortality at Diagnosis in Primary Non-Metastatic Prostate Cancer: A Validation Study. BMC Med. 2018, 16, 31.
4. Daskivich, T.J.; Fan, K.-H.; Koyama, T.; Albertsen, P.C.; Goodman, M.; Hamilton, A.S.; Hoffman, R.M.; Stanford, J.L.; Stroup, A.M.; Litwin, M.S.; et al. Prediction of Long-Term Other-Cause Mortality in Men with Early-Stage Prostate Cancer: Results from the Prostate Cancer Outcomes Study. Urology 2015, 85, 92–100.
5. Ozkan, T.A.; Eruyar, A.T.; Cebeci, O.O.; Memik, O.; Ozcan, L.; Kuskonmaz, I. Interobserver Variability in Gleason Histological Grading of Prostate Cancer. Scand. J. Urol. 2016, 50, 420–424.
6. Rodriguez-Urrego, P.A.; Cronin, A.M.; Al-Ahmadie, H.A.; Gopalan, A.; Tickoo, S.K.; Reuter, V.E.; Fine, S.W. Interobserver and Intraobserver Reproducibility in Digital and Routine Microscopic Assessment of Prostate Needle Biopsies. Hum. Pathol. 2011, 42, 68–74.
7. Allsbrook, W.C.; Mangold, K.A.; Johnson, M.H.; Lane, R.B.; Lane, C.G.; Epstein, J.I. Interobserver Reproducibility of Gleason Grading of Prostatic Carcinoma: General Pathologist. Hum. Pathol. 2001, 32, 81–88.
8. Sadimin, E.T.; Khani, F.; Diolombi, M.; Meliti, A.; Epstein, J.I. Interobserver Reproducibility of Percent Gleason Pattern 4 in Prostatic Adenocarcinoma on Prostate Biopsies. Am. J. Surg. Pathol. 2016, 40, 1686–1692.
9. Zhou, M.; Li, J.; Cheng, L.; Egevad, L.; Deng, F.-M.; Kunju, L.P.; Magi-Galluzzi, C.; Melamed, J.; Mehra, R.; Mendrinos, S.; et al. Diagnosis of “Poorly Formed Glands” Gleason Pattern 4 Prostatic Adenocarcinoma on Needle Biopsy. Am. J. Surg. Pathol. 2015, 39, 1331–1339.
10. Meliti, A.; Sadimin, E.; Diolombi, M.; Khani, F.; Epstein, J.I. Accuracy of Grading Gleason Score 7 Prostatic Adenocarcinoma on Needle Biopsy: Influence of Percent Pattern 4 and Other Histological Factors. Prostate 2017, 77, 681–685.
11. Zietman, A.; Smith, J.; Klein, E.; Droller, M.; Dasgupta, P.; Catto, J. Consensus Guidelines for Reporting Prostate Cancer Gleason Grade. BJU Int. 2016, 117, 849.
12. Zietman, A.; Smith, J.; Klein, E.; Droller, M.; Dasgupta, P.; Catto, J. Describing the Grade of Prostate Cancer: Consistent Use of Contemporary Terminology Is Now Required. Eur. Urol. 2016, 70, 1.
13. Chen, C.; Huang, Y.; Fang, P.; Liang, C.; Chang, R. A Computer-aided Diagnosis System for Differentiation and Delineation of Malignant Regions on Whole-slide Prostate Histopathology Image Using Spatial Statistics and Multidimensional DenseNet. Med. Phys. 2020, 47, 1021–1033.
14. Rezaei, S.; Emami, A.; Zarrabi, H.; Rafiei, S.; Najarian, K.; Karimi, N.; Samavi, S.; Reza Soroushmehr, S.M. Gland Segmentation in Histopathology Images Using Deep Networks and Handcrafted Features. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; IEEE: Piscataway, NJ, USA; pp. 1031–1034.
15. Sirinukunwattana, K.; Pluim, J.P.W.; Chen, H.; Qi, X.; Heng, P.-A.; Guo, Y.B.; Wang, L.Y.; Matuszewski, B.J.; Bruni, E.; Sanchez, U.; et al. Gland Segmentation in Colon Histology Images: The Glas Challenge Contest. Med. Image Anal. 2017, 35, 489–502.
16. Qu, H.; Yan, Z.; Riedlinger, G.M.; De, S.; Metaxas, D.N. Improving Nuclei/Gland Instance Segmentation in Histopathology Images by Full Resolution Neural Network and Spatial Constrained Loss. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; pp. 378–386.
17. Farjam, R.; Soltanian-Zadeh, H.; Jafari-Khouzani, K.; Zoroofi, R.A. An Image Analysis Approach for Automatic Malignancy Determination of Prostate Pathological Images. Cytom. Part B Clin. Cytom. 2007, 72, 227–240.
18. Naik, S.; Doyle, S.; Agner, S.; Madabhushi, A.; Feldman, M.; Tomaszewski, J. Automated Gland and Nuclei Segmentation for Grading of Prostate and Breast Cancer Histopathology. In Proceedings of the 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Paris, France, 14–17 May 2008; IEEE: Piscataway, NJ, USA; pp. 284–287.
19. Peng, Y.; Jiang, Y.; Eisengart, L.; Healy, M.A.; Straus, F.H.; Yang, X.J. Computer-Aided Identification of Prostatic Adenocarcinoma: Segmentation of Glandular Structures. J. Pathol. Inf. 2011, 2, 33.
20. Nguyen, K.; Sabata, B.; Jain, A.K. Prostate Cancer Grading: Gland Segmentation and Structural Features. Pattern Recognit. Lett. 2012, 33, 951–961.
21. Singh, M.; Kalaw, E.M.; Giron, D.M.; Chong, K.-T.; Tan, C.L.; Lee, H.K. Gland Segmentation in Prostate Histopathological Images. J. Med. Imaging 2017, 4, 027501.
22. Madabhushi, A.; Lee, G. Image Analysis and Machine Learning in Digital Pathology: Challenges and Opportunities. Med. Image Anal. 2016, 33, 170–175.
23. Janowczyk, A.; Madabhushi, A. Deep Learning for Digital Pathology Image Analysis: A Comprehensive Tutorial with Selected Use Cases. J. Pathol. Inf. 2016, 7, 29.
24. Song, J.; Xiao, L.; Molaei, M.; Lian, Z. Multi-Layer Boosting Sparse Convolutional Model for Generalized Nuclear Segmentation from Histopathology Images. Knowl.-Based Syst. 2019, 176, 40–53.
25. Ren, J.; Sadimin, E.; Foran, D.J.; Qi, X. Computer Aided Analysis of Prostate Histopathology Images to Support a Refined Gleason Grading System. In Medical Imaging 2017: Image Processing; Styner, M.A., Angelini, E.D., Eds.; SPIE: Bellingham, WA, USA, 2017; Volume 10133, p. 101331V.
26. Xu, Y.; Li, Y.; Wang, Y.; Liu, M.; Fan, Y.; Lai, M.; Chang, E.I.C. Gland Instance Segmentation Using Deep Multichannel Neural Networks. IEEE Trans. Biomed. Eng. 2017, 64, 2901–2912.
27. Bulten, W.; Kartasalo, K.; Chen, P.-H.C.; Ström, P.; Pinckaers, H.; Nagpal, K.; Cai, Y.; Steiner, D.F.; van Boven, H.; Vink, R.; et al. Artificial Intelligence for Diagnosis and Gleason Grading of Prostate Cancer: The PANDA Challenge. Nat. Med. 2022, 28, 154–163.
28. Soerensen, S.J.C.; Fan, R.E.; Seetharaman, A.; Chen, L.; Shao, W.; Bhattacharya, I.; Kim, Y.; Sood, R.; Borre, M.; Chung, B.I.; et al. Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on Magnetic Resonance Imaging for Targeted Biopsy. J. Urol. 2021, 206, 604–612.
29. Bulten, W.; Pinckaers, H.; van Boven, H.; Vink, R.; de Bel, T.; van Ginneken, B.; van der Laak, J.; Hulsbergen-van de Kaa, C.; Litjens, G. Automated Deep-Learning System for Gleason Grading of Prostate Cancer Using Biopsies: A Diagnostic Study. Lancet Oncol. 2020, 21, 233–241.
30. Xu, H.; Berendt, R.; Jha, N.; Mandal, M. Automatic Measurement of Melanoma Depth of Invasion in Skin Histopathological Images. Micron 2017, 97, 56–67.
31. Cheremkhin, P.A.; Kurbatova, E.A. Comparative Appraisal of Global and Local Thresholding Methods for Binarisation of Off-Axis Digital Holograms. Opt. Lasers Eng. 2019, 115, 119–130.
32. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway, NJ, USA; pp. 248–255.
33. Shin, H.-C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298.
34. Kanavati, F.; Tsuneki, M. A Deep Learning Model for Gastric Diffuse-Type Adenocarcinoma Classification in Whole Slide Images. Sci. Rep. 2021, 11, 20486.
35. Ponzio, F.; Macii, E.; Ficarra, E.; Di Cataldo, S. Colorectal Cancer Classification Using Deep Convolutional Networks—An Experimental Study. In Proceedings of the 11th International Joint Conference on Biomedical Engineering Systems and Technologies, Funchal, Madeira, Portugal, 19–21 January 2018; SciTePress—Science and Technology Publications: Setúbal, Portugal; pp. 58–66.
36. Jain, P.K.; Sharma, N.; Giannopoulos, A.A.; Saba, L.; Nicolaides, A.; Suri, J.S. Hybrid Deep Learning Segmentation Models for Atherosclerotic Plaque in Internal Carotid Artery B-Mode Ultrasound. Comput. Biol. Med. 2021, 136, 104721.
37. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999.
38. Gómez-de-Mariscal, E.; Maška, M.; Kotrbová, A.; Pospíchalová, V.; Matula, P.; Muñoz-Barrutia, A. Deep-Learning-Based Segmentation of Small Extracellular Vesicles in Transmission Electron Microscopy Images. Sci. Rep. 2019, 9, 13211.
39. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; pp. 234–241.
40. Gao, M.; Qi, D.; Mu, H.; Chen, J. A Transfer Residual Neural Network Based on ResNet-34 for Detection of Wood Knot Defects. Forests 2021, 12, 212.
41. Wen, L.; Li, X.; Gao, L. A Transfer Convolutional Neural Network for Fault Diagnosis Based on ResNet-50. Neural Comput. Appl. 2020, 32, 6111–6124.
42. Zhang, Q.; Cui, Z.; Niu, X.; Geng, S.; Qiao, Y. Image Segmentation with Pyramid Dilated Convolution Based on ResNet and U-Net. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2017; pp. 364–372.
43. Le’Clerc Arrastia, J.; Heilenkötter, N.; Otero Baguer, D.; Hauberg-Lotte, L.; Boskamp, T.; Hetzer, S.; Duschner, N.; Schaller, J.; Maass, P. Deeply Supervised UNet for Semantic Segmentation to Assist Dermatopathological Assessment of Basal Cell Carcinoma. J. Imaging 2021, 7, 71.
44. Xie, S.; Girshick, R.; Dollar, P.; Tu, Z.; He, K. Aggregated Residual Transformations for Deep Neural Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA; pp. 5987–5995.
45. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; IEEE: Piscataway, NJ, USA; pp. 2818–2826.
46. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), Long Beach, CA, USA, 9–15 June 2019.
47. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; IEEE: Piscataway, NJ, USA; pp. 4510–4520.
48. Rahman, M.A.; Wang, Y. Optimizing Intersection-Over-Union in Deep Neural Networks for Image Segmentation. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; pp. 234–244.
49. Laxhammar, R.; Falkman, G. Sequential Conformal Anomaly Detection in Trajectories Based on Hausdorff Distance. In Proceedings of the 14th International Conference on Information Fusion, Chicago, IL, USA, 5–8 July 2011; pp. 1–8.
50. Qadri, S.F.; Shen, L.; Ahmad, M.; Qadri, S.; Zareen, S.S.; Khan, S. OP-ConvNet: A Patch Classification-Based Framework for CT Vertebrae Segmentation. IEEE Access 2021, 9, 158227–158240.
51. Ryu, H.S.; Jin, M.-S.; Park, J.H.; Lee, S.; Cho, J.; Oh, S.; Kwak, T.-Y.; Woo, J.I.; Mun, Y.; Kim, S.W.; et al. Automated Gleason Scoring and Tumor Quantification in Prostate Core Needle Biopsy Images Using Deep Neural Networks and Its Comparison with Pathologist-Based Assessment. Cancers 2019, 11, 1860.
Table 1. Structure of the EfficientNetB2 encoder (down-sampling path).

Steps | Operator | Resolution | Layers
---|---|---|---
Down-sampling (1) | Conv 3 × 3, Stride = 2 | 128 × 128 | 1
 | MBConv1, 3 × 3 | 128 × 128 | 2
Down-sampling (2) | MBConv6, 3 × 3 | 64 × 64 | 3
Down-sampling (3) | MBConv6, 5 × 5 | 32 × 32 | 3
Down-sampling (4) | MBConv6, 3 × 3 | 16 × 16 | 4
 | MBConv6, 5 × 5 | 16 × 16 | 4
Down-sampling (5) | MBConv6, 5 × 5 | 8 × 8 | 5
 | MBConv6, 3 × 3 | 8 × 8 | 2
 | Conv 1 × 1 | 8 × 8 | 1
Region | Metric | ResNet34 | ResNeXt50 | InceptionV3 | EfficientNetB2 | Proposed
---|---|---|---|---|---|---
Stroma | Dice coefficient | 0.935 | 0.949 | 0.947 | 0.949 | 0.956
 | IoU | 0.878 | 0.903 | 0.900 | 0.903 | 0.916
 | Hausdorff (mm) | 17.7 | 16.9 | 16.6 | 17.2 | 15.8
Benign | Dice coefficient | 0.624 | 0.775 | 0.705 | 0.778 | 0.802
 | IoU | 0.453 | 0.633 | 0.544 | 0.638 | 0.670
 | Hausdorff (mm) | 17.4 | 16.9 | 16.4 | 17.3 | 16.2
Cancer | Dice coefficient | 0.848 | 0.892 | 0.879 | 0.905 | 0.914
 | IoU | 0.736 | 0.805 | 0.785 | 0.827 | 0.843
 | Hausdorff (mm) | 17.5 | 16.9 | 16.3 | 16.9 | 15.9
Average | Dice coefficient | 0.802 | 0.872 | 0.843 | 0.877 | 0.891
 | IoU | 0.689 | 0.780 | 0.743 | 0.789 | 0.811
 | Hausdorff (mm) | 17.5 | 16.9 | 16.4 | 17.1 | 15.9
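For reference, the Dice coefficient and IoU reported above can be computed from binary masks as follows; this is a minimal NumPy/SciPy sketch, and the paper's Hausdorff values in millimetres additionally depend on the slide's pixel spacing, which is not modeled here.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, target):
    # 2|A ∩ B| / (|A| + |B|) for boolean masks
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

def iou(pred, target):
    # |A ∩ B| / |A ∪ B| for boolean masks
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

def hausdorff(pred_points, target_points):
    # Symmetric Hausdorff distance between two boundary point sets (in pixels)
    return max(directed_hausdorff(pred_points, target_points)[0],
               directed_hausdorff(target_points, pred_points)[0])
```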