Lecture Notes in Computer Science 11039
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison
Lancaster University, Lancaster, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Zurich, Switzerland
John C. Mitchell
Stanford University, Stanford, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology Madras, Chennai, India
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken, Germany
More information about this series at http://www.springer.com/series/7412
Danail Stoyanov · Zeike Taylor · Francesco Ciompi · Yanwu Xu et al. (Eds.)

Computational Pathology and Ophthalmic Medical Image Analysis

First International Workshop, COMPAY 2018, and 5th International Workshop, OMIA 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16–20, 2018, Proceedings
Editors

Danail Stoyanov, University College London, London, UK
Zeike Taylor, University of Leeds, Leeds, UK
Francesco Ciompi, Radboud University Medical Center, Nijmegen, The Netherlands
Yanwu Xu, Baidu, Beijing, China
LNCS Sublibrary: SL6 – Image Processing, Computer Vision, Pattern Recognition, and Graphics
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Additional Workshop Editors
Anne Martel
University of Toronto
Toronto, ON
Canada
Lena Maier-Hein
German Cancer Research Center (DKFZ)
Heidelberg
Germany
We were very excited to host the first MICCAI COMPAY workshop in the rapidly
emerging area of computational pathology, the study of disease using computational
analysis of digitized images of tissue slides. We believe this first event on computa-
tional pathology and its synergy with advanced image analysis and deep learning
provided a space for researchers in the MICCAI community to meet, discuss, and share
their advances in these fields. The MICCAI conference was the perfect venue and it
was the best time for this to happen. The aim of COMPAY was to bring together
scientific researchers, medical experts, and industry partners working in the field of
computational pathology, in order to push further innovative and clinically relevant
solutions for digital pathology. We strived to provide a platform for scientific dis-
cussion on computational pathology with a focus on artificial intelligence and deep
learning, which can help foster cooperative projects at an international level. We hope that you will find the contributions on state-of-the-art computational pathology stimulating and enjoyable. We are grateful to the MICCAI organizers for giving us this
opportunity. We also extend our sincere gratitude to all the reviewers who helped
ensure the high quality of papers presented at COMPAY 2018, the first of hopefully a
series of workshops at MICCAI.
Age-related macular degeneration, diabetic retinopathy, and glaucoma are leading causes of blindness. Blindness can often be avoided by early intervention, making
computer-assisted early diagnosis of retinal diseases a research priority. Related
research is exploring retinal biomarkers for systemic conditions such as dementia,
cardiovascular disease, and complications of diabetes. Significant challenges remain,
including reliability and validation, effective multimodal analysis (e.g., fundus pho-
tography, optical coherence tomography, and scanning laser ophthalmoscopy), more
powerful imaging technologies, and the effective deployment of cutting-edge computer
vision and machine learning techniques. The 5th International Workshop on Ophthalmic Medical Image Analysis (OMIA5) addressed all these aspects and more, this year in collaboration with the ReTOUCH retinal image challenge.
Localizing Optic Disc and Cup for Glaucoma Screening via Deep Object Detection Networks . . . 236
Xu Sun, Yanwu Xu, Mingkui Tan, Huazhu Fu, Wei Zhao,
Tianyuan You, and Jiang Liu
1 Introduction
Diagnoses made by pathologists using tissue biopsy images are central for many
tasks such as the detection of cancer and estimation of its current stage [2].
One routine yet important step within histological analyses is the segmentation
of nuclei. Nuclear morphology is an important indicator of the grade of cancer
and the stage of its progression [3]. It has also been shown to be a predictor of
cancer outcome [4]. Currently, histological analyses such as these are performed manually, with pathologists counting and evaluating cells by visual inspection. Developing
c Springer Nature Switzerland AG 2018
D. Stoyanov et al. (Eds.): COMPAY 2018/OMIA 2018, LNCS 11039, pp. 3–10, 2018.
https://doi.org/10.1007/978-3-030-00949-6_1
4 Y. H. Roohani and E. G. Kiss
automated methods to perform this analysis will help pathologists maintain consistent quality and allow for greater use of histological analysis by reducing cost and increasing throughput.
However, automating nuclei detection is not a trivial task and can be challenging for a number of reasons. One important challenge is the lack of stain standardization. Stain manufacturing and aging can lead to differences in applied color. Variability can also result from differences in tissue preparation (dye concentration, evenness of the cut, presence of foreign artifacts or damage to the tissue sample), stain reactivity, or image acquisition (image compression artifacts, presence of digital noise, specific features of the slide scanner). Each stain has different, sometimes overlapping, absorption characteristics, which impact the resulting slide color. Finally, storage of the slide samples can have aging effects that alter the color content [2,5]. Radiologists have established standards (such as DICOM) to ensure consistency between scans from different origins and times. Ideally, histopathology would also work within a framework like DICOM, in which images can be standardized against experimental conditions to ensure consistency across datasets.
Recently, there has been considerable interest in the application of novel machine learning tools, such as deep learning, to aid in routine tasks such as segmentation. These models generally operate on raw pixel values, but could achieve greater accuracy by reducing the variance contributed by slide- and stain-specific variables. However, the normalization approach must not be too aggressive, or else false positives will be introduced through altering the image signal [1,2].
The aim of this project is to address the impact of variability in histological
images on the accuracy of deep-learning-based segmentation algorithms. A Convolutional Neural Network (CNN) was trained to perform nuclei segmentation and tested to establish a baseline. Four stain normalization techniques (histogram equalization, Reinhard, Macenko, and spline mapping) were then applied to reduce the color variability of the raw images. The CNN was trained and tested again under each of these normalization conditions to measure segmentation
accuracy in each case. This paper is unique in that it employs a wide variety of
normalization methods, uses deep learning based nuclei segmentation accuracy
as a metric, and tests the model on a different dataset to understand model
generalizability.
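As a rough illustration of the simplest family of these techniques, Reinhard-style normalization matches per-channel color statistics of a source image to a target. The sketch below matches mean and standard deviation directly in RGB; note this is a simplification, since the published Reinhard method operates in the decorrelated lαβ color space:

```python
import numpy as np

def reinhard_like_normalize(source, target):
    """Match the per-channel mean and std of `source` to `target`.

    Simplified sketch of Reinhard color transfer: the published method
    works in the decorrelated l-alpha-beta space; here the statistics
    are matched directly in RGB for brevity.
    """
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(3):  # transfer one channel at a time
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - s_mean) / (s_std + 1e-8) * t_std + t_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

After this transfer, the source image carries the target's channel statistics, which is the essential effect all four normalization methods aim for by different means.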
This section describes the methods used to generate deep learning based image
segmentation models. The goal was to train a model using one dataset and to test
using another. We aimed to perform this approach using different normalization
strategies to narrow down on an approach that best reduced variability and
improved performance.
We first trained and validated the model on the same dataset to make sure that
our training procedure was working correctly. For this we used breast tissue slices
from [3]. We randomly split the dataset into 70% train and 30% validation. After
experimenting with different network architectures, the final architecture chosen after validation was (Conv-BNorm-ReLU)×6 - (Fully Convolutional) - Softmax [3].
3.2 Dataset
The training dataset consisted of 143 histological sections of breast tissue from
the Case Western Reserve digital pathology dataset. Each RGB image was 2000 × 2000 pixels at 20× magnification and was H&E stained. Manually annotated masks were provided for over 12,000 nuclei. We found that, for training,
a patch size of 64 × 64 (87.8% baseline validation accuracy) worked better for
training than 32 × 32 (82% accuracy). A total of 400,000 unique patches were
generated for each scenario.
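For orientation, non-overlapping 64 × 64 tiling of a 2000 × 2000 image can be sketched as below; the stride and edge handling here are assumptions, since the paper samples 400,000 patches per scenario rather than exhaustively tiling:

```python
import numpy as np

def extract_patches(image, patch=64):
    """Tile an H x W x C image into non-overlapping patch x patch crops,
    dropping partial patches at the right and bottom edges."""
    h, w = image.shape[:2]
    patches = [
        image[r:r + patch, c:c + patch]
        for r in range(0, h - patch + 1, patch)
        for c in range(0, w - patch + 1, patch)
    ]
    return np.stack(patches)
```

A 2000 × 2000 image yields a 31 × 31 grid of 64 × 64 tiles; random sampling and class balancing would be layered on top of such a tiling.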
However, it was not sufficient to randomly sample from non-nuclear regions as
defined by the hand annotations. There was a significant probability of sampling
unannotated nuclei while developing negative patches for the training set. To
address this problem, we used the approach outlined by [3]. Nuclei are known to
absorb greater levels of the eosin (red) stain and so the red channel in the images
was enhanced. A negative mask was thus generated defining regions outside
of these enhanced red zones that were deemed safe for non-nuclei class patch
selection. We also made sure to allocate a third of the non-nuclei patches to boundaries around the nuclei so that these would be clearly demarcated in the output. Moreover, positive and negative samples were equal in number even
after accounting for these changes. The model prediction accuracy was found to
benefit from these approaches.
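The negative-mask construction described above can be sketched as follows, under the assumption that "enhanced red" reduces to a simple threshold on the red channel; the threshold value is illustrative, not taken from the paper:

```python
import numpy as np

def negative_sampling_mask(rgb, annotated_nuclei, threshold=150):
    """Regions deemed safe for sampling non-nuclei patches.

    Following the text: flag likely nuclear regions via the (enhanced)
    red channel, then mark everything outside both those regions and
    the hand annotations as safe. The threshold of 150 is an
    illustrative assumption.
    """
    red = rgb[..., 0].astype(np.float64)
    nuclei_like = red > threshold  # bright-red zones treated as possible nuclei
    safe = ~(nuclei_like | annotated_nuclei.astype(bool))
    return safe
```

Negative patches would then be sampled only where this mask is true, avoiding unannotated nuclei.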
The test set was also composed of breast tissue slices, from a hand-annotated dataset provided by the BioImaging lab at UCSB [11]. Referred to as
the BioSegmentation benchmark, these were 58 H&E stained images at a much
smaller resolution (896 × 768). This dataset proved to be ideal for model testing
because the images were quite different from our training set both in terms of
image quality and resolution and also in terms of the staining used (more eosin
content). Patching was not required for the test set because we were using a fully
convolutional network.
¹ Code: https://github.com/yhr91/NucleiSegmentation/tree/master/BMI-260.
Improving Acc. of Nuclei Segmentation by Reducing Hist. Image Variability 7
3.3 Training
Once our model architecture and dataset generation approach had been finalized, we began to train a separate model for each of the normalization scenarios, as shown in Fig. 1. We used a batch size of 1000, which fit comfortably in our memory (P100 GPU, 16 GB × 2).
There were five models in total: four corresponding to the techniques outlined previously, Histogram Equalization (HE), Macenko (MM), Reinhard (RH), and Spline Mapping (SM), plus one for the unnormalized (Unnorm) case. Model performance would generally begin to plateau around 5–10 epochs. We did not notice overfitting in any model until 25 epochs, except in SM. However, we could not continue training much beyond that point due to time constraints.
Fig. 1. The first row shows the test image as fed into the model after stain normal-
ization (labelled using acronym). The normalization applied on the test image was the
same as that applied on the training dataset in that case. The bottom row shows the
model predicted output on the test images.
4 Results
The top row in Fig. 1 shows the original images after being transformed using the
four different stain normalization approaches. We can see that all four images
appear different in some respect. For example, HE and RH, which involve stain
normalization through working directly with the color values show a noticeable
blue tint. This is more pronounced in HE, where non-nuclear regions in the top
right of the cell get quite heavily stained with hematoxylin. On the other hand,
SM and MM, which both use stain vectors to map an input image to a target
space, don’t show a blue-ish tint and provide a much more robust transformation
that is true to the semantic information of the parent image (e.g.: treating nuclei
and background regions distinctly).
The bottom row looks at the class probability for nuclear regions as predicted
by models trained on datasets that were each stain normalized differently. Clearly
all four normalized sets perform far better than the unnormalized dataset where
almost no nuclei were detected due to the drastic change in staining as compared
to what the model had been trained on. HE does pick up most of the nuclei, but also a lot of background noise, due to its inability to differentiate clearly between different types of staining. RH is also sensitive to noise, but detects nuclei more clearly, as is visible in the sharp boundaries. SM clearly performs the best at segmenting nuclei while also being the most robust to false positives.
Fig. 2. ROC curve for models trained using different stain normalization schemes
This sensitivity to noise causes RH to pick up more false positives than SM. This trade-off between true and false positives is best captured by the ROC curve (Fig. 2). Here, we see that the unnormalized case adds no value at all, while all the normalization scenarios show improved prediction accuracy. SM is the clear winner, showing an excellent range of operation at a TPR of 80–90% while only allowing an FPR of 50%. This is impressive considering that the model was trained on a staining visually very different from the one in the test data. This difference is quantitatively captured by the AUC. Finally, the F-score is another attempt to capture segmentation accuracy without being dominated by the many true negatives. It computes the intersection of the pixels classified as nuclei in both the prediction and the ground truth, and divides it by the union of all pixels classified as nuclei by either. Again, SM is seen to be the best at improving the accuracy of the algorithm.
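The overlap metric described above (intersection over union of the predicted and ground-truth nuclei pixels) can be sketched as follows; note that this intersection-over-union form is closely related to the Dice/F-score via F = 2·IoU/(1 + IoU):

```python
import numpy as np

def overlap_score(pred, truth):
    """Intersection-over-union of two binary masks, as described in the
    text: pixels classified as nuclei by both, divided by pixels
    classified as nuclei by either."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    inter = np.logical_and(pred, truth).sum()
    return inter / union
```

Because true negatives appear in neither the intersection nor the union, the score is not inflated by the large background area of a slide.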
5 Discussion
Through this study, we have explored several stain normalization approaches
that were all shown to reduce inter slide variability. The results (particu-
larly AUC, F-score) clearly indicate that using a stain normalization approach
increases the performance of the deep learning based segmentation algorithm.
We found that SM performed better than all the other approaches. We believe this is because it uses a non-linear mapping function that is more accurate than the other approaches: it is able to delineate different regions and map them appropriately to the target space.
We also noticed that the model seems to perform more poorly in the case of normalizations that are biased more towards the eosin channel. In the future, it may
make sense to normalize the stain of the training dataset using two different
approaches. This would push the model to become robust to these subtle changes
and be less dependent on any one channel. Moreover, stain normalization could
also be looked at as a regularization approach to enhance generalizability of deep
learning based models in this space and prevent overfitting. On the other hand,
we must remain conscious of the fact that staining color is a very valuable source
of information in histological analyses and adopt a balanced approach towards
stain normalization.
6 Conclusion
In this study, we looked at the impact of stain normalization as a means of
improving the accuracy of segmentation algorithms across datasets. To the best
of our knowledge, this is the first study that compares the chosen four stain
normalization techniques through assessing their usability in the context of deep
learning based segmentation models. There is scope for expanding upon this
work with a deeper analysis of why certain normalization approaches or model
architectures are better suited for this task.
References
1. Ghaznavi, F.: Digital imaging in pathology: whole-slide imaging and beyond. Annu.
Rev. Pathol.: Mech. Dis. 8, 331–359 (2013)
2. Irshad, H.: Methods for nuclei detection, segmentation, and classification in digital histopathology: a review - current status and future potential. IEEE Rev. Biomed. Eng. 7, 97–114 (2014)
3. Janowczyk, A., Madabhushi, A.: Deep learning for digital pathology image analysis:
a comprehensive tutorial with selected use cases. J. Pathol. Inform. (2016)
4. Basavanhally, A., Feldman, M., Shih, N.: Multi-field-of-view strategy for image-
based outcome prediction of multi-parametric estrogen receptor-positive breast
cancer histopathology: comparison to oncotype DX. J. Pathol. Inform. 2, S1 (2011).
https://doi.org/10.4103/2153-3539.92027
5. Khan, K.M., et al.: A nonlinear mapping approach to stain normalization in dig-
ital histopathology images using image-specific color deconvolution. IEEE Trans.
Biomed. Eng. 61(6), 1729–1738 (2014)
6. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic
segmentation. In: Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, pp. 3431–3440 (2015)
7. Reinhard, E., Ashikhmin, M., Gooch, B., Shirley, P.: Color transfer between images.
IEEE Comput. Graph. Appl. 21(5), 34–41 (2001)
8. Macenko, M., et al.: A method for normalizing histology slides for quantitative
analysis. In: ISBI, vol. 9, pp. 1107–1110, June 2009
9. https://www.math.uci.edu/icamp/courses/math77c/demos/hist_eq.pdf. Accessed 24 Jul 2018
10. http://www2.warwick.ac.uk/fac/sci/dcs/research/tia/software/sntoolbox. Accessed 24 Jul 2018
11. http://bioimage.ucsb.edu/research/bio-segmentation. Accessed 24 Jul 2018
12. Rizzi, A., Gatta, C., Marini, D.: From retinex to automatic color equalization:
issues in developing a new algorithm for unsupervised color equalization. J. Elec-
tron. Imaging 13(1), 75–85 (2004)
13. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomed-
ical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F.
(eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015).
https://doi.org/10.1007/978-3-319-24574-4_28
14. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: 2017 IEEE Inter-
national Conference on Computer Vision (ICCV), pp. 2980–2988. IEEE, October
2017
Multi-resolution Networks for Semantic
Segmentation in Whole Slide Images
Feng Gu, Nikolay Burlutskiy, Mats Andersson, and Lena Kajland Wilén
1 Introduction
(image regions of a WSI at a given resolution) with the finest details should
probably be extracted at the highest resolution to utilize such a capability. It may also require the patch size to be considerably larger, making it infeasible for the VRAM of a modern GPU to fully exploit the multi-resolution information. As a result, approaches capable of learning from data and aggregating information efficiently and effectively at multiple resolutions are desirable.
In this paper, two novel multi-resolution networks are proposed to learn from
input patches extracted at multiple levels. These patches share the same centroid
and shape (size in pixels), but with an octave based increase of the pixel size,
micrometers per pixel (mpp). Only the central high resolution patch is segmented
at the output. The proposed methods are evaluated and compared with the
standard U-Net on a benchmark dataset of WSIs.
2 Related Work
Semantic segmentation problems were initially solved by traditional machine
learning approaches, where hand crafted features were engineered [15].
Researchers applied methods, such as predictive sparse decomposition and spa-
tial pyramid matching, to extract features of histopathological tissues [4]. How-
ever, deep learning approaches based on FCNs [11] showed significantly higher
performance and eventually have substituted them [7]. To overcome the so called
‘checkerboard artifacts’ of transposed convolutions, several approaches have been
proposed, e.g. SegNet [1], DeepLab-CRF [5], and upscaling using dilated convo-
lutions [18]. To increase localization of learned features, high resolution features
from the downsampling path can be aggregated with the upsampled output. Such
an operation is known as ‘skip connections’, which enables a successive convo-
lution layer to learn and assemble a more precise output based on the aggre-
gated information. Several researchers successfully demonstrated that architec-
tures with skip connections can result in better performance. Such networks
include U-Net, densely connected convolutional networks [9], and highway networks with skip connections [16]. Overall, U-Net has proven to be one of the most popular networks for biomedical segmentation tasks [3].
One limitation of standard FCNs is the fact that the networks are composed
of convolution layers with a set of filters that have the same receptive field size.
The receptive field size corresponds to the context that a network can learn from,
and eventually influences the network performance. Grais et al. [8] proposed a multi-resolution FCN with different receptive field sizes for each layer, for the audio source separation problem. Such a design allowed features of the same input to be extracted from multiple perspectives (determined by the receptive field sizes), and thus captured both global and local details. Fu et al. introduced a multi-scale M-Net [6] to tackle joint optic disc and cup segmentation, where the same image content at different input shapes or scales is passed through the network. However, both methods were designed to handle the same input audio or image content, while learning features from multiple perspectives, by either employing encoders with varied receptive fields or taking
inputs with multiple scales. Roullier et al. [14] proposed multi-resolution graph-
based analysis of whole slide images for mitotic cell segmentation. The approach
is based on domain specific knowledge, which cannot be easily transferred to
another problem domain. Recently, an approach using multi-resolution information in FCNs was described in [12]. However, the fusion of multi-resolution inputs was performed before the encoders, instead of within the network. In addition, the approach could only be applied to small regions of interest, rather than to entire WSIs.
Networks that incorporate inputs extracted at different resolutions of the same corresponding tissue area in a WSI are desired, to tackle the challenge of multi-resolution effectively. In addition, such networks should be scalable in the number of resolutions and VRAM-efficient for training and prediction. These considerations motivated us to develop the multi-resolution networks in this work.
3 Algorithmic Formulation
A common practice of handling a WSI with deep learning is to divide it into
multiple equally sized patches [17]. Here the deep learning task is formulated
as a binary semantic segmentation problem of patches, where each patch is
considered an image example. At prediction, a trained model first predicts each
patch individually, and then stitches predictions of all the patches, to form the
prediction of the entire slide (a probabilistic map indicating the probability of
each pixel belonging to the class of interest).
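The patch-then-stitch scheme can be sketched as follows, assuming non-overlapping patches laid out row-major on a regular grid; real pipelines often add overlap and blending, which are omitted here:

```python
import numpy as np

def stitch_predictions(patch_probs, grid_rows, grid_cols):
    """Reassemble per-patch probability maps of shape (N, p, p) into one
    slide-level map, with patches ordered row-major on a
    grid_rows x grid_cols grid."""
    n, p, _ = patch_probs.shape
    assert n == grid_rows * grid_cols
    out = np.zeros((grid_rows * p, grid_cols * p))
    for i in range(n):
        r, c = divmod(i, grid_cols)  # row-major patch position
        out[r * p:(r + 1) * p, c * p:(c + 1) * p] = patch_probs[i]
    return out
```

The resulting array is the slide-level probability map described above, with one probability per pixel.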
where the scalar λ determines the weighting between the two terms. One popular regularization function is ℓ2-regularization, such that R(W) = ‖W‖₂². The search for
Fig. 1. From left to right are the patches with the same central coordinates, where mpp = 0.5 is equivalent to 20× magnification and so forth. Increasing mpp values correspond to zooming out to enlarge the field of view, and the yellow squares represent the effective tissue areas at different magnifications. (Color figure online)
Fig. 2. An illustration of the proposed MRN methods when two resolutions are
involved. The dark blue boxes represent stacks of two 3 × 3 convolution layers with
ReLU activations; the red boxes are 2 × 2 max pooling layers; the light blue boxes are
1 × 1 convolution layers with identity activations; the green boxes are 2 × 2 transposed
convolution layers with stride = 2 and ReLU activations. (Color figure online)
the optimal set of parameters Ŵ = arg min_W E_W for the objective function is known as optimization in machine learning. Popular optimizers include stochastic gradient descent (SGD), adaptive gradient (AdaGrad), and root mean square propagation (RMSProp). Recently, adaptive moment estimation (Adam) [10] has become a particularly popular method for optimizing deep networks.
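For reference, a single Adam update on a parameter vector follows the standard rule from [10]; the sketch below uses the default hyperparameters quoted later in the implementation details:

```python
import numpy as np

def adam_step(w, grad, m, v, t, eta=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v), bias correction, then a per-coordinate
    scaled gradient step."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    w = w - eta * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

Iterating this step on a loss gradient drives the parameters toward a minimizer of the regularized objective.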
Here we propose two multi-resolution networks (MRN) that are based on the U-Net architecture [13]. A standard U-Net can be seen as two parts: an 'encoder' for downsampling and a 'decoder' for upsampling. The downsampled feature maps are concatenated with the corresponding layers of the decoder in the upsampling pathway. The proposed MRNs employ multiple, structurally identical encoders corresponding to the different resolutions for downsampling, and a single decoder for upsampling.
The input shapes of all resolutions are identical, and the examples share
the common central coordinates and effectively cover tissue areas in a pyramid
manner, as in Fig. 1. Let (x, y) be an example, where x = [x1, x2, ..., xJ] and y = y1, with the resolutions in descending order. The shapes of x and y are therefore h × w × 3 × J and h × w × 1, respectively. The rationale behind such an
arrangement is that the pixel correspondence is more cumbersome compared to
a standard U-Net. A key issue is to enable a sufficient receptive field for the low
resolution branches of the network to successfully convey the information from
the peripheral regions into the central parts.
To preserve the information relevant to the area of interest (i.e. the central
part) at a lower resolution, we center crop the output feature maps of each
encoder unit and then resize them back to the original resolutions via upscaling.
We can define a nested function u ◦ v such that

u : R^(w×h×c) → R^((w/γ)×(h/γ)×c) and v : R^((w/γ)×(h/γ)×c) → R^(w×h×c)    (2)

where the cropping factor is γ = 2^N, since resolutions at different levels of a WSI are usually downsampled by a factor of 2 in both height and width. The function u center crops a real-valued tensor of shape h × w × c (height, width, and channels) to the shape (w/γ) × (h/γ) × c, and the function v upscales the output of u back to the original shape. For upscaling, we present two options, namely 'MRN-bilinear' via bilinear interpolation and 'MRN-transposed' through transposed convolution.
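A minimal sketch of u ◦ v with γ = 2 follows; nearest-neighbour repetition stands in for the bilinear and transposed-convolution upscaling variants described above:

```python
import numpy as np

def center_crop(x, gamma=2):
    """u: crop the central (h/gamma) x (w/gamma) window of an
    h x w x c feature map."""
    h, w, _ = x.shape
    ch, cw = h // gamma, w // gamma
    r0, c0 = (h - ch) // 2, (w - cw) // 2
    return x[r0:r0 + ch, c0:c0 + cw, :]

def upscale(x, gamma=2):
    """v: resize back to the original spatial shape; nearest-neighbour
    repetition stands in for bilinear interpolation here."""
    return np.repeat(np.repeat(x, gamma, axis=0), gamma, axis=1)

def crop_and_upscale(x, gamma=2):
    """u composed with v: isolate the central region at a lower
    resolution and bring it back to the original spatial shape."""
    return upscale(center_crop(x, gamma), gamma)
```

The output has the same spatial shape as the input but contains only the information from the central region, which is the part relevant to the high-resolution branch.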
The outputs of u ◦ v are concatenated with the convolved feature maps of the corresponding layers in the encoder of the highest resolution. The concatenated feature maps are then passed through a 1 × 1 convolution layer with an identity activation, before being combined with layers in the decoder. The 1 × 1 convolution acts as a weighted sum to aggregate feature maps from all the resolutions, while keeping the number of feature maps in the decoder constant regardless of the number of resolutions involved. Figure 2 illustrates an example of such a network with two resolutions, and this is easily expandable to more resolutions.
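The pointwise aggregation can be sketched as an einsum over the channel axis (identity activation, bias omitted; the kernel values are arbitrary for illustration):

```python
import numpy as np

def conv1x1(features, kernel):
    """Pointwise (1 x 1) convolution: features (h, w, c_in), kernel
    (c_in, c_out). Each spatial position is mixed across channels
    independently, i.e. a weighted sum of the concatenated feature
    maps at that position."""
    return np.einsum('hwi,io->hwo', features, kernel)
```

With c_in the total channels of the concatenated multi-resolution maps and c_out fixed, the decoder width stays constant no matter how many resolutions are concatenated.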
4 Experimental Conditions
4.1 Implementation Details
We implemented all the networks in TensorFlow, with 'SAME' padding. Batch normalization and ℓ2-regularization with λ = 0.005 were applied to all the convolution and transposed convolution layers, to improve convergence rates and counter over-fitting. We employed the Adam optimizer with default parameters (η = 0.001, β1 = 0.9, β2 = 0.999, and ε = 10⁻⁸). The input shape is 512 in height and width, and the collection of resolutions is mpp ∈ {0.5, 1, 2, 4}, where the U-Net deals with one resolution at a time and the MRN methods handle all the resolutions simultaneously. The batch size is 16, limited by the VRAM of an NVIDIA Titan Xp. The maximum number of epochs is set to 500, after which training is terminated.