
Fully Automatic Segmentation of Anatomy and Scar from LGE-MRI

Vollautomatische Segmentierung von Anatomie und Narben in LGE-MRI

Submitted to the Faculty of Engineering
of Friedrich-Alexander-Universität Erlangen-Nürnberg
for the attainment of the doctoral degree Dr.-Ing.

presented by
Tanja Kurzendorfer
from Amberg

Approved as a dissertation by the Faculty of Engineering of
Friedrich-Alexander-Universität Erlangen-Nürnberg.

Date of the oral examination: 15.10.2018

Chair of the doctoral committee: Prof. Dr.-Ing. R. Lerch
Reviewers: Prof. Dr.-Ing. habil. A. Maier
           Prof. Dr. G. Greiner
Abstract
Cardiovascular diseases are the leading cause of death worldwide. In addition, the
number of patients suffering from heart failure is rising. The underlying cause of
heart failure is often a myocardial infarction. For diagnosis in clinical routine, cardiac
magnetic resonance imaging is used, as it provides information about morphology,
blood flow, perfusion, and tissue characterization. In more detail, the analysis of the
tissue viability is very important for diagnosis, procedure planning, and guidance,
e. g., for the implantation of a bi-ventricular pacemaker. The clinical gold standard for
the viability assessment is 2-D late gadolinium enhanced magnetic resonance imaging
(LGE-MRI). In recent years, the imaging quality has continuously improved and LGE-
MRI was extended to a 3-D whole heart scan. This scan enables an accurate
quantification of the myocardium and of the extent of myocardial scarring.
The main challenge arises in the accurate segmentation and analysis of such im-
ages. In this work, novel methods for the segmentation of the LGE-MRI data sets,
both 2-D and 3-D, are proposed. One important goal is to segment the LGE-MRI
directly, without relying on an additional anatomical scan, in order to avoid errors
introduced by contour propagation from the anatomical scan. For the 2-D LGE-MRI segmentation, the short
axis stack of the left ventricle (LV) is used. First, the blood pool is detected and a
rough outline is obtained by a morphological active contours without edges ap-
proach. Afterwards, the endocardial and epicardial boundary is estimated by either
a filter or learning based method in combination with a minimal cost path search
in polar space. For the endocardial contour refinement, an additional scar exclusion
step is added. For the 3-D LGE-MRI, the LV is detected within the whole heart scan.
In the next step, the short axis view is estimated using principal component analysis.
For the endocardial and epicardial boundary estimation, either a filter based or a learning
based approach can be applied in combination with dynamic programming in polar
space. Furthermore, because of the high resolution, the papillary muscles are also
segmented.
In addition to the fully automatic LV segmentation approaches, a generic semi-
automatic method based on Hermite radial basis function interpolation is introduced
in combination with a smart brush. Effective interactions and a reduced number of
equations accelerate the computation and thereby enable a real-time, intuitive, and
interactive segmentation of 3-D objects.
After the segmentation of the left ventricle’s myocardium, the scar tissue is quan-
tified. In this thesis, three approaches are investigated. The full-width-at-half-max
algorithm and the x-standard deviation methods are implemented in a fully automatic
manner. Furthermore, a texture based scar classification algorithm is introduced.
Subsequently, the scar tissue can be visualized, either in 3-D as a surface mesh or in
2-D projected onto the 16 segment bull’s eye plot of the American Heart Association.
However, for precise procedure planning and guidance, the information about the scar
transmurality is very important. Hence, a novel scar layer visualization is introduced.
Therefore, the scar tissue is divided into three layers depending on the location of
the scar within the myocardium. With this novel visualization, an easy distinction
between endocardial, mid-myocardial, and epicardial scar is possible. The scar layers
can also be visualized in 3-D as surface meshes or in 2-D projected onto the 16
segment bull’s eye plot.
Kurzübersicht
Cardiovascular diseases are the most common cause of death worldwide. Furthermore,
the number of patients suffering from heart failure is rising. The underlying cause of
heart failure is often a myocardial infarction. For diagnosis, magnetic resonance imaging
is frequently used, as it can provide information about the morphology, blood flow,
perfusion, and tissue characterization. Above all, the analysis of tissue viability is
important for diagnosis, procedure planning, and guidance during the intervention,
among others for the implantation of a bi-ventricular pacemaker. The gold standard
for the viability analysis of the myocardium is 2-D late gadolinium enhanced (LGE)
magnetic resonance imaging. In recent years, the image quality has been improved
continuously, and LGE-MRI was extended to a 3-D whole heart acquisition. This
acquisition enables an accurate analysis of the myocardium up to the full extent of
scar tissue.
The main challenge lies in the accurate segmentation and analysis of such data.
In this work, novel methods for the segmentation of 2-D and 3-D LGE-MRI data are
presented. One important goal is the direct segmentation of the LGE-MRI data without
the use of anatomical data, in order to avoid errors that can arise during contour
propagation. For the 2-D LGE-MRI segmentation, the short axis slices of the left
ventricle are used. First, the blood pool of the left ventricle is detected and a rough
outline is obtained by an active contours without edges approach. In the next step,
the endocardial and epicardial boundary is determined either by a filter based or a
learning based approach, with the aid of a minimal cost path search in polar
coordinates. For the endocardial boundary, a scar exclusion step is added. For the
3-D LGE-MRI segmentation, the left ventricle first has to be localized within the
whole heart scan. Afterwards, the short axis view is estimated by a principal component
analysis. The endocardial and epicardial boundary is again determined either by a
filter based or a learning based approach in combination with dynamic programming
in polar coordinates. In addition, because of the high resolution, the papillary muscles
can also be segmented.
Furthermore, a generic, semi-automatic method based on Hermite radial basis
functions in combination with a smart brush is presented. Effective interactions with
a reduced number of equations accelerate the performance. This enables a real-time,
intuitive, and interactive segmentation of 3-D objects.
After the segmentation of the myocardium, the scar tissue can be quantified. In
this work, three methods are considered. The full-width-at-half-maximum algorithm
and the x-fold standard deviation method are implemented in a fully automatic manner.
Furthermore, a texture based scar tissue classification is introduced.
Subsequently, the scar tissue can be visualized in 3-D as a surface mesh or projected
in 2-D onto the 16 segment bull’s eye plot of the American Heart Association. However,
this visualization does not allow for precise procedure planning and guidance, as the
important information about the transmurality of the scar is lost. Therefore, a novel
layer-wise scar visualization is presented. For this purpose, the scar tissue is divided
into three layers, depending on its position within the myocardium. With this new
visualization method, it is possible to distinguish between endocardial, mid-myocardial,
and epicardial scar tissue. The scar layers can then be visualized as 3-D surface meshes
or projected onto the 2-D bull’s eye plots.
Acknowledgment

I would like to thank a number of people who have helped and supported me to write
this thesis, both regarding the content and the motivation.

First of all, I am very grateful to my supervisor Prof. Dr.-Ing. habil. Andreas Maier.
I appreciate his outstanding confidence in me, his support and guidance over the
years, not only as a scientific mentor, but also his encouragement and the freedom he
allowed me regarding the contents of my work. Prof. Maier reviewed my work carefully and
provided valuable feedback. His input helped to improve the quality of my work.

Furthermore, I am deeply grateful to the head of the Segmentation Colloquium,
Dr.-Ing. habil. Stefan Steidl. I have learned much more from him regarding scientific
work than I could have ever expected. His precise revision and his feedback made a
significant contribution to this thesis.

I deeply appreciate the support from Dr.-Ing. Alexander Brost and Dr.-Ing. Christoph
Forman during the last years as my Siemens advisors. The discussions we had were
always encouraging and contributed to a continuous improvement of this work. They
spent a lot of time reviewing papers and giving valuable feedback regarding the
methods.

Many thanks to my colleagues at the Pattern Recognition Lab, for the pleasant and
friendly atmosphere at the lab, for the ongoing knowledge sharing, and for the joyful
times outside the working hours. In particular, let me thank Katharina Breininger
and Peter Fischer for the great time and inspiring scientific discussions. Among the
students I have supervised, let me acknowledge Sabrina Reiml and Negar Mirshazadeh
for their excellent work that contributed to several publications.

Thanks also go to my colleagues at Siemens Healthcare GmbH in Forchheim, who
provided a pleasant working environment and offered their overall support. I also want
to thank Dr. med. Christoph Tillmans and my colleagues in London for providing
the clinical data. Furthermore, I would like to thank Peter Mountney and Daniel
Toth for the suggestions during my thesis and Sana Hummady for the gold standard
annotations.

Last but not least, I want to thank my family and friends for their patience and
support over the years. In particular, I would like to thank my boyfriend for the
steady support, the patience, and the encouragement over all the years.

Tanja Kurzendorfer
Contents
I Introduction 1

Chapter 1 Introduction 3
1.1 Heart Failure and Treatment Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Scientific Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Organization of the Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Chapter 2 Clinical and Technical Background 13


2.1 Magnetic Resonance Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.1 Slice Selective 2-D Sequences vs. 3-D Sequences. . . . . . . . . . . . . . . . . 14
2.1.2 Late Gadolinium Enhanced MRI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2 Image Processing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.1 Image Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.2 Segmentation Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.3 Morphological Active Contours without Edges . . . . . . . . . . . . . . . . . . . . 21
2.3 Pattern Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.1 Decision Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.2 Random Forest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.3.3 Cross-Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

II Left Ventricle Segmentation 33

Chapter 3 Left Ventricle Segmentation in 2-D LGE-MRI 35


3.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3 Segmentation Pipeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.3.1 Left Ventricle Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3.2 Blood Pool Estimation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.3.3 Endocardial Contour Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.3.4 Epicardial Contour Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.4 Evaluation and Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.4.1 Data Description. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.4.2 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.5 Discussion and Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

Chapter 4 Left Ventricle Segmentation in 3-D LGE-MRI 59
4.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.3 Automatic Left Ventricle Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.3.1 Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.3.2 Short Axis Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.3.3 Endocardial Contour Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.3.4 Epicardial Contour Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.3.5 Papillary Muscle Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.3.6 Mesh Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.4 Semi-Automatic Left Ventricle Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.4.1 Smart Brush. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.4.2 Control Point Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.4.3 Control Point Merging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.4.4 3-D Interpolation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.4.5 Surface Reconstruction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.5 Evaluation and Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.5.1 Data Description. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.5.2 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.5.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.6 Discussion and Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

III Scar Segmentation and Visualization 109

Chapter 5 Scar Segmentation in LGE-MRI 111


5.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5.3 Scar Quantification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5.3.1 x-Fold Standard Deviation Scar Quantification . . . . . . . . . . . . . . . . . . . . 114
5.3.2 Full-Width-at-Half-Maximum Scar Quantification. . . . . . . . . . . . . . . . . . . 116
5.3.3 Classification Based Scar Quantification . . . . . . . . . . . . . . . . . . . . . . . . 116
5.4 Evaluation and Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
5.4.1 Data Description. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
5.4.2 Evaluation Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
5.4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
5.5 Discussion and Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

Chapter 6 Scar Visualization 129


6.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
6.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
6.3 Scar Layer Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
6.3.1 Scar Layer Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
6.3.2 3-D Layer Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
6.3.3 2-D Layer Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6.4 Evaluation and Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
6.5 Discussion and Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

IV Outlook and Summary 141

Chapter 7 Outlook 143


7.1 Left Ventricle Segmentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
7.2 Scar Segmentation and Visualization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

Chapter 8 Summary 145

Appendix 149

List of Abbreviations 153

List of Algorithms 155

List of Figures 157

List of Symbols 161


General Symbols. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Left Ventricle Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Scar Segmentation and Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

List of Tables 167

Bibliography 169

PART I

Introduction

CHAPTER 1

Introduction
1.1 Heart Failure and Treatment Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Scientific Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Organization of the Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

According to a study from the World Health Organization in 2015, the major cause
of death worldwide is cardiovascular disease¹. Cardiovascular disease is a generic
term that includes several diseases, such as high blood pressure, coronary heart
disease, myocardial infarction (MI), stroke, congenital cardiovascular defects, and
heart failure (HF) [Go 14]. For diagnosis in clinical routine, cardiac magnetic reso-
nance imaging (MRI) is used, as it can provide information on morphology, tissue
characterization, blood flow, or perfusion [Peti 11, Suin 14]. The technical progress in
recent years has given rise to advanced image analysis. One major interest is the viability
assessment of the myocardium and further analysis, which can be used for planning
and guidance of minimally invasive procedures. Late gadolinium enhanced (LGE)-MRI
(cf. Section 2.1.2) is clinically the gold standard for the assessment of myocardial vi-
ability [Rash 15]. The main research goal of this thesis is to develop
algorithms for robust segmentation of the left ventricle (LV), scar quantification, and
visualization, which can be used for planning and guidance during minimally invasive
procedures.
In this chapter, a short introduction on the medical background of congestive HF
and the development of myocardial scar is given. In addition, a treatment option
suitable for patients with advanced drug-refractory heart failure, systolic dysfunction,
and ventricular dyssynchrony is described. Furthermore, the achieved scientific con-
tributions regarding left ventricle segmentation, scar quantification, and visualization
are detailed. Finally, the outline of this thesis is presented.

1.1 Heart Failure and Treatment Options


In 2014, about 26 million people worldwide were affected by HF [Poni 14]. The
symptoms of HF typically include shortness of breath, signs of fluid retention, fatigue,
exercise intolerance, diminished appetite, and depression [Shea 03].

¹ www.who.int/mediacentre/factsheets/fs310/en/

Figure 1.1: The common causes of heart failure are among others, heart problems,
high blood pressure, lung problems, lifestyle, infections, or other medical conditions.
The graphic is based on Ponikowski et al. [Poni 14].

The most common cause of HF related to cardiac issues is a myocardial infarction [McMu 12].
According to Dickstein et al. [Dick 08] at least 51 % of patients suffering from HF have
an ischemic history. Myocardial infarction describes the ischemic cell death caused
by a disturbed perfusion of the heart cells [Tayl 05]. However, other factors like
viral infections, personal lifestyle, heart problems, high blood pressure, lung prob-
lems, or other medical conditions may cause heart failure [Poni 14]. An illustration
of potential causes for HF is provided in Figure 1.1.
In general, heart failure is defined as the inability to generate sufficient cardiac
output to meet the metabolic needs of the organs and the tissue of the human body.
This can include systolic, diastolic, and high-output failure. If there are symptoms of
systemic or pulmonary fluid volume overload, this is considered congestive heart
failure [Tayl 05].
The New York Heart Association (NYHA) classification is widely used to rate the
cardiac disease according to the symptoms in four different classes. Class I patients
have HF but no limitation of physical activity. Class II patients have mild symptoms,
which means they are comfortable at rest, but physical activity results in symptoms
like shortness of breath. Class III patients are comfortable at rest, but already show
symptoms at low level activity, for example walking for a short distance. Class IV
patients already have symptoms at rest. See Table 1.1 for a detailed classification of
the NYHA classes.

Class  NYHA functional classification

I      No limitation of physical activity.
II     Slight limitation of physical activity. Comfortable at rest, but ordinary
       physical activity results in symptoms.
III    Limitation of physical activity. Comfortable at rest, but low physical
       activity results in symptoms.
IV     Unable to carry on any physical activity without discomfort. Symptoms
       at rest. If any physical activity is undertaken, discomfort is increased.

Table 1.1: NYHA functional classification for heart failure based on physical activity
and symptoms. This table is based on the ESC Guidelines [Dick 08].

Figure 1.2: (a) Normal ECG signal with a normal QRS complex. (b) Abnormal
ECG signal with a wide QRS complex.

Patients suffering from HF are commonly asked to estimate the distance they are
able to walk before getting out of breath. This is a measure of exercise capacity
and helps to classify the patients into the NYHA classes [Dick 08]. Patients at risk
for chronic HF should be screened for symptoms of fatigue, activity intolerance,
congestive symptoms, edema, and shortness of breath (FACES) [Tayl 05].
All patients that show symptoms for chronic HF should undergo further diag-
nostic evaluation of the LV function. Most important is the left ventricular ejection
fraction, which can be measured by echocardiography or radionuclide ventriculog-
raphy [Tayl 05]. This information helps to distinguish systolic from diastolic dysfunc-
tion. If systolic dysfunction is verified, the duration of the QRS complex should be
investigated. If the duration is greater than 120 ms, it is often associated with a left
bundle branch block [Abra 03]. The delay of the electrical signal can be seen with an
electrocardiogram (ECG). In Figure 1.2 a normal ECG with a normal QRS complex
is compared to an abnormal ECG with a widened QRS complex.
The electrical signal is delayed by the left bundle branch, therefore the right ven-
tricle contracts earlier than the left ventricle. This leads to an asynchronous contraction
of the heart and reduces the pump efficiency of the left ventricle [Shea 03]. The left
ventricular ejection fraction can be less than 35 % [Shea 03, Abra 03, Blee 06]. Nor-
mally, the left ventricular ejection fraction is between 50 % and 70 % [Will 07]. In
general, about 30 % to 50 % of all patients that suffer from chronic heart failure also
have a preserved ejection fraction [Sant 14].

Figure 1.3: Illustration of a CRT pacemaker implanted in the chest, with the LV
lead placed in the coronary sinus (green). The other leads are placed in the right
atrium (blue) and right ventricle (red).

One important treatment option for symptoms associated with drug-refractory
congestive HF is cardiac resynchronization therapy (CRT). Candidates for CRT de-
vice implantations are moderately to severely symptomatic despite pharmacotherapy.
They are normally grouped as NYHA class III to IV. Furthermore, they can be diag-
nosed with cardiomyopathy, which results in a weakened and enlarged heart muscle,
and they show a significant electrical delay across the lower pumping chambers, which
results in widening of the QRS complex [Shea 03].
The goal of CRT is simple: restoration of the normal synchronized pumping
function of the ventricles by overcoming the delay in electrical conduction caused by
the left bundle branch block [Abra 03]. The resynchronization should improve the
symptoms of HF, exercise capacity, and ventricular function and structure, and reduce
heart failure-related hospitalizations and mortality risk. The resynchronization is accomplished
by a specialized type of cardiac pacemaker. This device continuously monitors the
heartbeat of the patient and delivers an electrical signal to stimulate a synchronous
heartbeat. The cardiac resynchronization device has three electrodes, one in the right
atrium, one in the right ventricle – both as in a regular pacemaker – and the third
one in a cardiac vein on the outer surface of the LV [Shea 03]. An illustration of an
implanted CRT device is provided in Figure 1.3. The leads in the right atrium and
in the right ventricle maintain the normal coordinated pumping relationship between
the top and bottom of the heart. The leads are connected to a pulse generator, simply
speaking a battery pack, which is placed under the skin in the upper left chest. The
third lead allows the left and right ventricles to be stimulated simultaneously and restores
the synchronous coordinated contraction pattern of the heart. This kind of pacing is
often referred to as bi-ventricular pacing, as the left and right ventricle are stimulated
at the same time. This bi-ventricular pacing should result in a narrower and more
normal QRS complex [Shea 03].

Figure 1.4: X-ray image acquired after a successful CRT implantation.

The problem, however, with cardiac resynchronization therapy is that about 30 %
to 40 % of the patients do not clinically respond to this therapy [Shet 14]. Ade-
quate patient selection is very important, as some patients do not benefit from CRT
implantation [Auri 11]. Furthermore, the number of non-responders may be related
to suboptimal lead positioning. Pacing the LV generally produces the best haemo-
dynamic response, but patients may not respond if the lead is placed on scar, as
it is not electrically conductive. Additionally, there is a significant variation in the
response when pacing different regions of the LV [Shet 12]. Finally, the therapy can
be optimized by adequate device programming and follow-up of the patient after the
implantation [Brig 13].
Fluoroscopic imaging is used to guide the placement of the three leads, but this
modality provides neither functional nor anatomical information with respect to the
heart [Ma 12]. A typical fluoroscopic image of a CRT procedure is shown in Figure 1.4.
Due to the complexity of the procedure, the implantation of the LV lead through the
coronary vein may take a long time. This results in a considerable amount of fluoroscopic
exposure and a repeated use of contrast agent to outline the cardiac chambers and
great vessels [Bram 10]. In particular, the optimal placement of the LV lead in a
sub-branch of the coronary sinus is one of the most challenging aspects of the CRT
implantation. In addition, the presence of myocardial scar at the LV placement area
is an important determinant of the response rate. If the lead is positioned in areas
with transmural scarring, the desired effect of CRT on clinical outcome and cardiac
performance may be reduced [Morg 09].
Integrating the knowledge of the LV’s anatomy, mechanical activation, and scar
tissue distribution into treatment planning and guidance is likely to improve the out-
come of the CRT implantation [Leyv 11, Beha 17, Moun 17]. Recent studies from
Shetty et al. [Shet 12] demonstrated that an overlay approach using MRI fused to

X-ray is a powerful combination for planning and guiding a CRT procedure. The
mechanical activation of the heart wall can be estimated from MR, ultrasound, CT,
or C-arm CT [Jian 14, Moun 17, Kape 05, Po 11, Mull 14].
Cardiac MRI is the clinical gold standard for assessing the functional parameters
of the heart [McMu 12]. To visualize fibrosis and scarring, LGE-MRI is used [Bako 13].
Besides technological improvements regarding image acquisition and the clear clinical
demand [Bilc 08], the challenge arises in providing automatic tools for image analysis.
Therefore, the main research goal of this thesis is to develop segmentation tools for
LGE-MRI sequences, scar quantification methods, and visualization options for 2-D
as well as 3-D LGE-MRI data sets, to support optimal procedure guidance.

1.2 Scientific Contributions


In recent years, there has been a significant improvement regarding the image acquisition.
However, the segmentation of the LGE-MRI and the further processing remains a
challenge. In this section, the contributions of this thesis to the progress of LGE-MRI
myocardium segmentation, scar quantification, and scar visualization are listed below
by the individual topics.

Left Ventricle Segmentation

An important goal of this thesis is to directly segment the left ventricle out of 2-D
and 3-D LGE-MRI to avoid errors from the cine MRI contour propagation. For the
2-D LGE-MRI, the short axis (SA) scan is used for the segmentation of the contours.
Therefore, the LV is automatically detected and a morphological active contours with-
out edges approach is used to estimate the blood pool. Afterwards, the endocardial
and epicardial boundary can be estimated either using a filter based or learning based
method in combination with dynamic programming. For the endocardial approach,
an additional scar exclusion step is added. The contributions have been presented at
three international conferences [Kurz 17a, Kurz 17e, Kurz 18a].
A different segmentation pipeline is developed for 3-D LGE-MRI. As this scan
is a whole heart scan, first the left ventricle has to be initialized using a two-stage
registration step. In the next step, the short axis orientation is estimated using
principal component analysis. Having the rough outline of the left ventricle, either
a learning based or a filter based approach can be applied for the segmentation of
the endocardial and epicardial border. For the endocardial boundary refinement, an
additional scar exclusion step is added. Furthermore, because of the high resolution,
the papillary muscles can also be segmented from the 3-D LGE-MRI scan. The
methods were presented at international conferences [Kurz 15, Kurz 17b, Kurz 16a,
Kurz 17c] and in one international peer-reviewed journal [Kurz 17f].
In addition to the fully automatic left ventricle segmentation, an intuitive, in-
teractive, and generic segmentation tool is developed based on radial basis function
interpolation for the segmentation of the left ventricle. However, as it is a generic
segmentation tool, any region of interest can be segmented. First, a smart brush is
implemented that allows fast 2-D segmentation of individual slices. In
the next step, from the segmented masks the contours are extracted and scattered
data points are computed. From these points the 2-D normal vectors are estimated
based on the curvature of the contour. Afterwards, our new formulation of Hermite
radial basis functions is applied to reconstruct the 3-D surface. Effective 2-D interac-
tions using the smart brush and a reduced number of equations accelerate the performance
of the 3-D interpolation. Thus, this method allows a real-time and intuitive seg-
mentation of 3-D objects and can handle any 3-D data set. The contributions have
been presented at one international conference [Mirs 17] and in one peer-reviewed
journal [Kurz 17d].

Scar Quantification

Having the segmented contours from the myocardium, the scar quantification can be
performed fully automatically. Therefore, the full-width-at-half-max (FWHM) and the
x-standard deviation (x-SD) algorithms are implemented in a fully automatic man-
ner, to be independent of any user interaction. Furthermore, a learning based scar
quantification is developed, which analyzes the texture of the myocardium instead of
relying only on its intensity distribution. These contribu-
tions were presented at two international conferences [Kurz 15, Kurz 18b] and in an
international scientific journal [Kurz 17f].

Scar Visualization

Having the segmented scar, different visualization methods can be applied. The
scar can be visualized as one 3-D surface mesh, however, with this visualization
it is not possible to distinguish between endocardial and epicardial scar. Therefore,
a novel scar layer visualization is developed. The scar is divided into several lay-
ers to allow the user to distinguish between endocardial, mid-cardial, and epicardial
scar. Furthermore, the transmurality of the scar can be evaluated by visual in-
spection. However, this method cannot just be applied to 3-D visualization, it can
also be used for the 2-D American Heart Association (AHA) bull’s eye plot (BEP)
(cf. Section 6.3.3). This novel visualization method has been presented at two con-
ferences [Reim 17a, Kurz 17g].

Other Contributions to Segmentation and Image Guided Therapy

A series of contributions to image guided therapy has been made, which are
closely related to this thesis.
One aim of the segmentation is to use the segmented objects for guidance dur-
ing the procedure. Therefore, the benefit of X-ray magnetic resonance fusion has
been investigated. Mainly three procedures are considered, guidance for congenital
heart disease catheterization procedures, guidance for bone biopsies, and guidance for
sclerotherapy [Kurz 14d, Kurz 14a, Gira 14, Kurz 14e, Hwan 14a, Hwan 14b, Kurz 14b,
Bao 14, Kurz 14c]. It has been shown that using an MRI overlay for interventional ra-
diology is sufficient. However, for more complex procedures, such as congenital heart
disease catheterization procedures, 3-D meshes are beneficial, as, e. g., the ostia of
a vessel can also be visualized.

An additional research focus has been on the guidance of atrial fibrillation pro-
cedures. The goal of this procedure is to electrically isolate the pulmonary veins
from the left atrium [Bros 13]. One common catheter for the pulmonary vein iso-
lation is a cryo-balloon catheter. However, this catheter is hardly visible under
fluoroscopy. Therefore, special detection and tracking algorithms have been devel-
oped [Bour 12, Kurz 12, Kurz 13, Kurz 16b, Kowa 15].

1.3 Organization of the Thesis


A chapter-wise overview is given in this section to provide a guide through the thesis.
A graphical overview of the different chapters is depicted in Figure 1.5.

Chapter 2 – Clinical and Technical Background

In Chapter 2, the medical and technical background for this thesis is explained.
First, the basic principles of magnetic resonance imaging are described. Afterwards,
the difference between 2-D slice selective sequences and 3-D MRI volumes is outlined.
Finally, the basic principle of a LGE-MRI acquisition is introduced. In the second
section, the image processing basics used in this thesis are outlined, consisting of
the polar transform, a short overview of segmentation methods, and the active con-
tours without edges approach. In the last section, the pattern recognition pipeline
is depicted and the background which is needed in this work is explained. First, the
principles of a decision tree are detailed. Afterwards, the random forest classifier is
introduced and finally, the idea of cross-validation is depicted.

Chapter 3 – Left Ventricle Segmentation in 2-D LGE-MRI

Late gadolinium enhanced MRI is the gold standard to visualize myocardial scar-
ring. The challenge of this MRI acquisition is the accurate segmentation of the my-
ocardium, as it is a prerequisite for most automatic scar quantification approaches.
In Chapter 3, segmentation approaches for 2-D LGE-MRI in SA orientation are in-
troduced. For the 2-D LGE-MRI, a filter based and a learning based approach are
described. For both approaches, the left ventricle is detected in the
short axis image stack. Afterwards, the endocardial boundary is estimated either
using a filter based detection or a learning based classification in combination with a
minimal cost path search in polar space. Having the endocardial contour, the epicar-
dial contour is refined. Afterwards, the data and the metrics used for the evaluation
are described. The proposed methods are evaluated on 100 clinical 3-D LGE-MRI
data sets.

Chapter 4 – Left Ventricle Segmentation in 3-D LGE-MRI

In recent years, the imaging quality has continuously improved and LGE-MRI was
extended to 3-D. This sequence allows for a more precise quantification of myocardial
scar.
However, little research has been done on the processing of these 3-D LGE-MRI
sequences. Therefore, in this work three novel methods, a filter based, a learning
based, and a semi-automatic approach, for the segmentation of the left ventricle in
3-D LGE-MRI are presented. For the filter based and learning based approach, the
left ventricle is detected using a two-stage registration approach. Second, the short
axis view of the left ventricle is estimated. Third, the endocardial contour is refined
by either using an edge detection or a machine learning approach in combination
with a minimal cost path search. Fourth, the epicardial contour is refined using the
information of the endocardium. Fifth, the papillary muscles (cf. Section 4.3.5) are
segmented. Finally, the contours are extracted as 3-D surface meshes.
In addition, a generic semi-automatic approach based on Hermite radial basis
function interpolation is used. First, individual slices are annotated in 2-D using the
smart brush. Second, the contours of the segmented masks are extracted, control
points are defined, and their normal vectors are computed. Third, the new formula-
tion of Hermite radial basis functions is used to reconstruct the 3-D surface.
The proposed methods are evaluated on clinical 3-D LGE-MRI data sets from two
clinical sites.
Chapter 5 – Scar Segmentation in LGE-MRI

The LGE-MRI sequences are acquired to visualize the viability of the myocardium.
A prerequisite for the scar quantification is the accurate segmentation of the my-
ocardium. Having the segmented myocardium, different methods can be applied for
a fully automatic scar quantification. In this work, different fully automatic meth-
ods are implemented, such as the x-SD or the FWHM algorithm, which are considered
as state-of-the-art scar segmentation approaches. Furthermore, a learning based ap-
proach using intensity and texture features is investigated. The scar quantification is
evaluated using 30 clinical 3-D LGE-MRI data sets.
Chapter 6 – Scar Visualization

The success rate of cardiac resynchronization therapy is very low, as 30 % to 50 % of
the patients do not respond to this therapy. One of the main issues is considered to
be the suboptimal placement of the left ventricular pacing lead. Pacing in areas of
myocardial scar has no effect, as the tissue is not electrically conductive. To improve
the success rate, precise scar quantification is important as presented in Chapter 5.
However, also the visualization of the scar tissue is very important for treatment
planning. Therefore, in this chapter a novel scar layer creation is introduced. Having
the layers, the user can distinguish between endocardial, mid-cardial, and epicardial
scar. Hence, the scar transmurality (cf. Section 6.1) can also be investigated inher-
ently by visual inspection. The layers can be visualized as 3-D surface meshes or
projected on a 16 segment BEP from the AHA.
Part IV – Outlook and Summary

Possible directions for future research and remaining challenges for clinical translation
of the presented methodology are presented in Chapter 7. The thesis concludes with
a summary of the medical and technical background and the scientific contributions
presented in this work.

Figure 1.5: Graphical overview of the organization of this thesis.


CHAPTER 2

Clinical and Technical Background
2.1 Magnetic Resonance Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 Image Processing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.3 Pattern Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

In this chapter, the basics of MRI and the technical fundamentals of this thesis are
summarized. In the first section, the physical basics of MRI are described. Then, a
differentiation between 2-D slice selective sequences and 3-D volumes is given. Af-
terwards, LGE-MRI is explained, which is the gold standard to visualize myocardial
viability. In the second section, the image processing basics, such as the polar trans-
form or the morphological active contours without edges approach are explained. In
the last section, the pattern recognition principles used in this thesis for the LV seg-
mentation are introduced, including decision trees, the random forest classifier, and
the evaluation using cross-validation.

2.1 Magnetic Resonance Imaging


MRI provides excellent soft-tissue contrast for morphological imaging, as well as a
range of possibilities for functional imaging, e. g., blood flow visualization, tissue
perfusion, or diffusion processes, without the use of ionizing radiation. The basic
principles for cardiac MRI are essentially the same as for other parts of the human
body. For the physical principles of MRI, the reader is referred to textbooks, such as
[Weis 08, Nish 10, Brow 14].
For the MRI scan, the patient is brought into a homogeneous high-strength mag-
netic field B0 pointing in the same direction, which will align all the nuclei in the
body along the long axis of the magnetic field. The interaction of the nuclei with the
magnetic field is called magnetic moment, or referred to as spin. The spin precesses
with the Larmor frequency ω0 around the magnetic field. The Larmor frequency is
proportional to the magnetic field strength and defined as

ω0 = γ ||B0 || ,                                                        (2.1)

where γ is the gyromagnetic ratio. The larger the field strength B0 of the magnetic
field, the higher will be the precessional frequency. The spins can then be divided
into two components, the longitudinal magnetization, which is aligned with the mag-
netic field, and a transverse magnetization, which is perpendicular to the magnetic
field [Weis 08].
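
To give an impression of the magnitudes involved, Equation (2.1) can be evaluated for hydrogen nuclei at typical clinical field strengths. The following short calculation is only an illustrative sketch and not part of the original work; the gyromagnetic ratio of ¹H (γ/2π ≈ 42.577 MHz/T) and the field strengths are standard textbook values.

```python
# Hypothetical worked example of Equation (2.1) for hydrogen nuclei.
# gamma / (2*pi) of 1H is approx. 42.577 MHz/T (textbook value, not from this thesis).
GAMMA_BAR_H1_MHZ_PER_T = 42.577

for b0 in (1.5, 3.0):                   # typical clinical field strengths in Tesla
    f0 = GAMMA_BAR_H1_MHZ_PER_T * b0    # precessional frequency f0 = omega0 / (2*pi)
    print(f"B0 = {b0:.1f} T  ->  f0 = {f0:.1f} MHz")

# Expected output:
# B0 = 1.5 T  ->  f0 = 63.9 MHz
# B0 = 3.0 T  ->  f0 = 127.7 MHz
```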
The alignment of the spins can be changed into a transverse magnetization by a
radio frequency pulse (RFP), whose frequency corresponds to the
Larmor frequency ω0 . After the excitation, the spin realigns back to the magnetic
field B0 . This state is the so-called resting state, as this is the energetically best
position for the spins. Considering the two different components of a spin, the lon-
gitudinal magnetization will increase and the transverse magnetization will vanish.
The recovery of the longitudinal magnetization over time is characterized by the T1 relaxation time.
T1 is tissue specific, i. e., a tissue with a long T1 recovers slowly after an RFP. The van-
ishing of the transversal magnetization is associated with T2 relaxation. Tissues with
a short T2 will lose their transverse magnetization very fast. The contrast of the MRI
sequence is determined by the timing of the excitation pulses, the successive gradient
magnetic field, and the T1 and T2 relaxation processes [Dyma 04]. The parameters
that control the weighting are the echo time (TE) and the repetition time (TR). TE
is defined as the time delay after each RFP at which the signal, i. e., the loss in transverse
magnetization, is measured. Hence, TE defines the T2 weighting of the images. TR is the period
of time until the RFP is repeated, as several similar measurements need to be per-
formed to encode multiple lines of an image. The RFP will flip the excited spins
to a more transversal magnetization. The relaxation will recover the longitudinal
magnetization, which is determined by T1 . Furthermore, a third type of contrast can
be applied, which is called proton density (PD) weighting, where T1 and T2 weight-
ing are minimized. Therefore, only the variations due to proton density itself are
left [Weis 08]. In Table 2.1 the effects of TE and TR are summarized. In Figure 2.1
example images of the brain with the different contrast are shown.

              short TE         long TE
short TR      T1 weighting     –
long TR       PD weighting     T2 weighting

Table 2.1: The influence of the repetition time (TR) and the echo time (TE) on the
weighting of the images and therefore, on the image contrast. The effects of TE and
TR result in different contrast for the MRI, T1 , T2 , or proton density (PD) weighted.
This table is based on Weishaupt et al. [Weis 08].

Figure 2.1: (a) T1 weighted image, where the contrast is based on a short TR and
a short TE. (b) T2 weighted image, where the contrast is based on a long TR and
a long TE. (c) PD weighted image, where the contrast is based on a long TR and
a short TE. The images are courtesy of the Siemens Healthcare GmbH, Erlangen,
Germany [Hend 03].

In the following subsections, the difference between 2-D and 3-D sequences is out-
scarring.

2.1.1 Slice Selective 2-D Sequences vs. 3-D Sequences


MRI is a tomographic technique and can generate cross-sectional images of the hu-
man body. The three gradient coils oriented in the three orthogonal directions are an
important component of an MRI system. They can impose a linear variation of the
otherwise homogeneous magnetic field B0 by applying a weighted combination of
the three coils. This combination allows imaging in any direction. The excitation of
a specific slice and the identification of the site of origin of a signal within the slice
relies on the fact that the Larmor frequency ω0 is proportional to the magnetic field
strength [Weis 08]. To facilitate slice selection, the magnetic field is made inhomo-
geneous in a linear fashion using the gradient coils. Therefore, excitation pulses that
match the Larmor frequency of the specific slice are applied to the slice that shall be
imaged. However, if the RFP only contains a single frequency, the correspond-
ing slice will be very thin. This means that not enough nuclei will resonate and the
induced signal will not be measurable. Furthermore, perfectly linear gradients are
also not possible due to hardware limitations. Therefore, a range of frequencies is
emitted. Depending on the wave size and the spatial resolution, the signal-to-noise
ratio varies [Weis 08]. The slice position is defined by the frequency of the RFP. The
spatial position of the signal is identified by spatial encoding, which can be decom-
posed into two steps, phase encoding and frequency encoding. However, a detailed
illustration of spatial encoding is out of scope of this thesis, therefore, please refer to
Weishaupt et al. [Weis 08] for a detailed explanation.
MRI can be distinguished into 2-D multi-slice acquisitions or 3-D volume-selective
sequences. Multi-slice selection is used to successively acquire and reconstruct a 2-D
stack of slices of the desired 3-D volume, using 2-D spatial encoding for localization
within each slice [Weis 08]. Therefore, different gradients are applied to excite the
spins in the respective slice. The spins outside the slice remain unaffected. Slice
selective 2-D sequences feature a high in-plane resolution, but with a high slice thick-
ness of several millimeters, due to the range of radio frequency pulses. The large slice
thickness and spacing between the slices may cause problems when distinguishing
small or flat structures like myocardial scarring.
In a 3-D volume-selective acquisition, the entire volume is excited –
without slice selection – and 3-D spatial encoding is used for localization. Therefore,
an additional phase-encoding gradient is applied, to encode the spatial position along
the z-direction. For more details about the additional phase encoding, please refer
to Weishaupt et al. [Weis 08]. The 3-D volume is computed by applying a 3-D recon-
struction after acquisition. In general, 3-D volume-selective sequences can achieve a
higher spatial isotropic resolution; however, the scan time increases [Weis 08].
In recent years, 3-D volume-selective acquisitions have been applied for whole
heart imaging, where the whole heart is covered by a single acquisition. However,
a long acquisition time was the drawback of the sequences. Therefore, special sam-
pling patterns, such as the Cartesian spiral phyllotaxis sampling pattern, have been
introduced for a faster data acquisition [Picc 11]. Furthermore, free breathing acqui-
sitions have been developed by applying respiratory gating [Picc 12, Grim 13, Form 13,
Form 14, Form 15, Wetz 17]. These technologies have the potential to bring 3-D car-
diac imaging into clinical routine.
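
As a rough, purely illustrative comparison (not an analysis from this thesis), the example sequence parameters quoted later in the caption of Figure 2.3 can be used to contrast the through-plane sampling of a 2-D stack and a 3-D whole heart scan; the 5 mm structure size below is an assumed value.

```python
# Hypothetical back-of-the-envelope comparison of the through-plane sampling,
# using the example sequence parameters quoted in the caption of Figure 2.3
# (2-D: 8 mm slices every 10 mm, 10 slices; 3-D: 1.30 mm isotropic, 120 slices).
slices_2d, thickness_2d, spacing_2d = 10, 8.0, 10.0   # mm
slices_3d, voxel_3d = 120, 1.30                        # mm, isotropic

coverage_2d = (slices_2d - 1) * spacing_2d + thickness_2d   # ~98 mm, with gaps
coverage_3d = slices_3d * voxel_3d                          # ~156 mm, gap-free

structure = 5.0  # assumed size of a small, flat enhancement in mm
print(f"2-D stack: {coverage_2d:.0f} mm coverage, "
      f"{structure / spacing_2d:.1f} samples across a {structure:.0f} mm structure")
print(f"3-D scan : {coverage_3d:.0f} mm coverage, "
      f"{structure / voxel_3d:.1f} samples across a {structure:.0f} mm structure")
```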

2.1.2 Late Gadolinium Enhanced MRI

Late gadolinium enhanced MRI is the clinical gold standard for non-invasive assess-
ment of myocardial viability [Kim 00, Kell 12, Shin 14, Rash 15]. The contrast of this
sequence is based on gadolinium (Gd), which belongs to the lanthanide series of
the chemical elements, which are often referred to as the rare earth elements [Weis 08].
Gadolinium in its pure form is a paramagnetic, toxic metal and therefore, an un-
favorable element in the human body. Hence, gadolinium is never injected into the
body in its pure form but bound to a carrier molecule. The purpose of the car-
rier molecule, called chelating agent, is to modify the distribution of the gadolinium
within the human body, overcoming the toxicity while maintaining the contrast prop-
erties. Different vendors use different chelating molecules to bind the paramagnetic
gadolinium Gd3+ [Rogo 16]. The contrast agent is injected intravenously and will be
eliminated later through the kidneys.
LGE-MRI uses the difference in contrast agent accumulation between viable and
damaged tissue, which leads to various enhancements within the myocardium. The
increased concentration of the contrast agent in the scar tissue is based on a larger
extra-vascular, extra-cellular volume, and slower washout. LGE-MRI images are
typically acquired 10 min to 20 min after a gadolinium based contrast agent injec-
tion [Kim 00, Kell 12]. The enhancement is based on T1 -shortening and the different
distribution of the contrast agent. GRE-based inversion recovery (IR) or phase sensi-
tive inversion recovery using ECG gating are widely used sequences for the acquisition
of LGE-MRI. The sequence normally has a 180◦ IR pre-pulse to flip the signals of all
tissues to the negative axis. The inversion time (TI) is different for each patient, as
it is dependent on the patient weight, contrast dose, renal function, and time-point
after the contrast agent injection. After the inversion pulse, the tissue recovers back
to the resting state. The signal should be measured exactly when the healthy my-
ocardium tissue crosses the zero line. Hence, the resulting image depicts the normal
myocardium as dark tissue, while scar tissue appears bright. Figure 2.2 depicts the
acquisition protocol of an LGE-MRI sequence. It can be seen that the data is ECG
gated to mid-diastole of every other cardiac cycle. Furthermore, the tissue specific
TI is visualized, which is normally around 300 ms [Dyma 04].
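
The magnitude of the TI can be illustrated with a small, hypothetical calculation that is not part of this thesis: assuming an ideal inversion recovery with full relaxation between inversions, Mz(t) = M0 (1 − 2 e^(−t/T1)), the signal of a tissue is nulled at TI = T1 · ln 2. The post-contrast T1 values below are assumed example values for healthy myocardium.

```python
# Hypothetical illustration of the TI null point for an ideal inversion recovery,
# Mz(t) = M0 * (1 - 2 * exp(-t / T1))  =>  Mz(TI) = 0  at  TI = T1 * ln(2).
# The post-contrast T1 values are assumed example values, not from this thesis.
import math

for t1_ms in (400.0, 450.0):
    ti_null = t1_ms * math.log(2.0)
    print(f"T1 = {t1_ms:.0f} ms  ->  signal nulled at TI = {ti_null:.0f} ms")

# T1 = 400 ms  ->  signal nulled at TI = 277 ms
# T1 = 450 ms  ->  signal nulled at TI = 312 ms
```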


Figure 2.2: An example of an LGE-MRI acquisition. The data is gated to mid-
diastole of every other cardiac cycle. It can be seen that the TI is adjusted patient-
specifically to the zero-crossing of the healthy myocardium tissue. The graphic is based on
Dymarkowski et al. [Dyma 04].

Recently, the use of IR with single shot steady state free precession, known as
true-FISP, has become more popular. This sequence has a fast multi-slice coverage
and provides good results for patients who suffer from arrhythmias or have difficulty
holding their breath [Kell 12]. In clinical routine, 2-D multi-slice LGE-MRI has proven
to provide diagnostic accuracy. A single slice is acquired during the
period of one breath hold. The slice selective breath hold scan has to be repeated up to
14 times, until the entire LV is covered in the desired orientation [Shin 14]. Commonly,
the SA view is used for the LV. In Figure 2.3 (a) the SA orientation and in (b) the
long axis (LA) orientation of the SA scan is visualized for a 2-D LGE-MRI sequence,
where the myocardial scarring is highlighted by an orange arrow.

Recently, LGE-MRI was extended to 3-D to continuously cover the whole heart
with a high resolution in a single acquisition. This scan enables an accurate
quantification of the myocardium and of the extent of myocardial scarring. However,
the acquisition time is longer, because it is a navigator gated free breathing scan.
On the other hand, no resting periods between the scans have to be included for
the patient’s comfort [Shin 14]. To shorten the acquisition time of the scan, the
acquisition is accelerated using compressed sensing [Form 13]. Figure 2.3 depicts a 3-D
LGE-MRI sequence, where (c) corresponds to the SA orientation and (d) to the LA
view, where the myocardial scar is highlighted by orange arrows.


Figure 2.3: Comparison of a 2-D and 3-D LGE-MRI of the left ventricle, showing
two individual patients. (a) 2-D LGE-MRI SA slice of the LV, with a myocardial
infarction (orange arrow). (b) LA view of the 2-D data, with a poor inter-slice
resolution because of the large slice thickness and distance between the slices. The 2-D
LGE-MRI sequence has the following parameters: pixel spacing (1.77 × 1.77) mm²,
slice thickness 8 mm, spacing between the slices 10 mm, number of slices 10. (c) 3-D
LGE-MRI reoriented in a pseudo SA view. (d) 3-D LGE-MRI in a pseudo LA view.
The 3-D LGE-MRI allows an accurate quantification of the extent of myocardial
scarring; the orange arrows point to the MI. The 3-D LGE-MRI sequence has the
following parameters: voxel size 1.30 mm³ isotropic, number of slices 120.

2.2 Image Processing

After describing the fundamentals for the image acquisition, the basic image process-
ing methods used in this thesis are described.

First, the polar transform is detailed, as the polar space is widely used for left
ventricle segmentation [Joll 11, Cord 11, Huan 11, Drei 13, Hu 13, Hu 14]. Second, a
general overview of segmentation methods is given. Third, a morphological active
contours without edges approach is described, which is applied for the 2-D left ven-
tricle segmentation for the rough estimation of the blood pool.

2.2.1 Image Transform


The refinement of the myocardium in SA orientation is often performed in polar
space. There are several advantages of using polar coordinates instead of Cartesian
coordinates. The endocardium and epicardium in the SA orientation have a roughly
circular shape, but with different diameters. In polar space, all circles have the same
horizontal length, regardless of their radii. Furthermore, the image size in polar space
is smaller, which allows for faster processing.
Through this mapping a point in Cartesian coordinates pc (x, y) is converted to
polar coordinates pp (r, ρ)
pp = (r, ρ)^T = ( √(x² + y²), arctan(y/x) )^T ,    (2.2)

where r ∈ R^+_0 is the radius and ρ ∈ [0°, 360°] is the angle in polar space. The
conversion is straight forward for continuous functions. However, if the function is
defined as a discrete and equally spaced grid, such as an image, an additional bilinear
interpolation is needed for the conversion [Park 07]. In this thesis, the origin of the
polar space image corresponds to the center of the endocardium, see Figure 2.4.
For the final result of the left ventricle segmentation, the polar coordinates are
transformed back to Cartesian coordinates
   
pc = (x, y)^T = ( r · cos(ρ), r · sin(ρ) )^T .    (2.3)
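As an illustration, the mapping of Equations (2.2) and (2.3) can be sketched in Python for a 2-D NumPy image with a known blood pool center; the bilinear interpolation on the discrete grid is provided by scipy.ndimage.map_coordinates with interpolation order 1. This is a minimal sketch with illustrative function names and sampling grid, not the implementation used in this thesis.

import numpy as np
from scipy.ndimage import map_coordinates


def to_polar(image, center, max_radius, n_radii=64, n_angles=360):
    # Resample the image onto an (r, rho) grid; every polar sample is looked
    # up at its Cartesian position (Equation 2.3) with bilinear interpolation.
    cy, cx = center
    r, rho = np.meshgrid(np.linspace(0, max_radius, n_radii),
                         np.deg2rad(np.arange(n_angles)), indexing="ij")
    ys = cy + r * np.sin(rho)
    xs = cx + r * np.cos(rho)
    return map_coordinates(image, [ys, xs], order=1, mode="nearest")


def polar_contour_to_cartesian(radii, angles_deg, center):
    # Map contour points (r, rho) back to Cartesian coordinates (Equation 2.3).
    cy, cx = center
    rho = np.deg2rad(angles_deg)
    return np.stack([cx + radii * np.cos(rho), cy + radii * np.sin(rho)], axis=-1)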

2.2.2 Segmentation Methods


For the processing of medical images, segmentation is a common task and often a pre-
requisite for the further image analysis, therapy planning, and guidance [Kurz 17f].
The scope of segmentation methods is very large, ranging from manual slice by slice
outlining to fully automatic segmentation methods. Manual segmentation is still
widely used for complex segmentation tasks. However, manual annotation of ev-
ery image slice can be very cumbersome and time consuming, considering the high
resolution of the 3-D image volumes [Mirs 17]. Automatic segmentation methods
require a large amount of a-priori knowledge about the structure or a large train-
ing data base and often fail in difficult cases. In particular, deep learning is known
to require huge amounts of annotated data. Most segmentation approaches can be
divided into the following categories: thresholding, region growing, classifier, clus-
tering, Markov random fields, neural networks, deformable models, and atlas-guided
approaches [Pham 00, Peti 11, Unbe 15, Chen 17, Heim 09, Kohl 11, Amre 17].
Therefore, several semi-automatic segmentation approaches have been proposed,
to provide a compromise between manual and fully automatic segmentation. In
interactive segmentation algorithms, parts of the foreground and background have
to be roughly identified by the user by placing seed points [Amre 16]. Well known
semi-automatic interactive segmentation algorithms are region growing [Adam 94],
watershed transform [Beuc 91], intelligent scissors [Mort 98], graph cuts [Boyk 01],
Gaussian mixture model [Farn 08], random walker [Grad 06], and GrowCut [Vezh 04].

Figure 2.4: (a) Image in Cartesian coordinates, where the center of the blood pool
is marked with an orange dot. (b) Polar image of (a), where the origin of the image
corresponds to the center of the blood pool in Cartesian space.

Model based methods have been successfully used in the last years for image
segmentation [Heim 09, Mont 01]. Deformable models are physically motivated, as
they deform a closed parametric curve under the influence of internal and external
energy functions [Pham 00]. Kass et al. [Kass 88] first introduced freely deformable
models as snakes using explicit deformable contours. The evolution of the curve is
driven by two forces, an external energy that adapts the model to the image and an
internal energy that stabilizes the shape based on smoothness criteria. The contour
S called active contour or snake is evolved by minimizing the energy functional F .
The energy functional F is given by
Z 1 Z 1 Z 1
0 2 00 2
F (S) = µ |S (p)| dp + ν |S (p)| dp − λ1 |∇I(S(p))|2 dp , (2.4)
0 0 0

where µ, ν, and λ1 ∈ R are positive constants that control the strength of each
term and ∇I is the gradient image of the input image I. The first two terms of
Equation (2.4) control the smoothness of the contour, i. e., the internal energy, and

the third term attracts the contour towards the boundary of the image, i. e., the
external energy [Chan 01].
However, parametric deformable models have two main limitations. First, the
initialization of the model is very crucial. Second, topological adaption such as split-
ting or merging of model parts is very difficult, as a new parametrization must be
constructed [Ange 05]. Osher and Sethian [Oshe 88] introduced the concept of geo-
metric deformable models. This formulation can be used as an implicit formulation
of the deformable contour, as it allows for cusps, corners, and automatic topological
changes [Chan 01]. Hence, geometric deformable models were introduced to address
these limitations [Case 93], using the mean curvature motion equation [Oshe 88]
 
∂u/∂t = |∇u| div( ∇u / |∇u| ) ,    (2.5)

where u : R^d → R is the level set function at time t. Therefore, the curve evolution
corresponding to the energy functional of Equation (2.4) is given by

∂u/∂t = |∇u| div( e(I) ∇u / |∇u| ) ,    (2.6)

where the image gradient ∇I is replaced by a general edge function e(I) [Crem 07].
This new formulation is also known as geodesic active contours, as the energy can be
interpreted as length of the contour [Case 97].
Nonetheless, the stopping of the curve is still depending on the image gradient. To
overcome this issue Chan and Vese [Chan 01] introduced a new formulation of active
contours, which is not dependent on the edge function, the so-called active contours
without edges (ACWE). The external energy function is based on the Mumford-Shah
segmentation [Mumf 89], where the image is modeled as a piecewise-smooth function.
The energy functional F takes the content of the interior Ω and exterior Ω̄ regions of
the curve S into account. The functional of the curve S depends on the image
I ∈ R^{N×M} and is defined by

F(c1, c2, S) = µ · L(S) + ν · A(S) + λ1 ∫_Ω ||I(p) − c1||² dp + λ2 ∫_Ω̄ ||I(p) − c2||² dp ,    (2.7)

where the non-negative parameters µ, ν, λ1, and λ2 ∈ R control the strength of each
term, L denotes the length of the curve S, A the area enclosed by the curve S, and
p ∈ R^D a point in the image I [Chan 01].

2.2.3 Morphological Active Contours without Edges


For the rough estimation of the blood pool in the 2-D LGE-MRI an active contours
approach is used. As the boundaries in the LGE-MRI between the blood pool and
the myocardium are commonly very smooth and the delineation is difficult because
of myocardial scarring, a morphological active contours without edges (MACWE)
approach is used in this thesis to get a rough estimate of the outline of the blood
pool. The MACWE algorithm does not need well defined borders and is less sensitive
to the initial configuration and to the model parameters [Chan 01]. The advantage

of MACWE is that the stopping of the curve evolution is not dependent on the edge
function, instead it is region-based, which means that it uses image statistics both
inside and outside of the contour. The difference to ACWE [Chan 01] is that this
approach is based on morphological discretization instead of partial differential equa-
tions, as they are computationally expensive and may have stability issues, depending
on the initialization [Marq 14].
The goal is to minimize the energy functional F of Equation (2.7),

min_{c1, c2, S} F(c1, c2, S) .    (2.8)

For a fixed contour S, c1 and c2 are the mean intensity values of the area inside Ω and
outside Ω̄ of the curve S. The third and the fourth term of the energy functional F
of Equation (2.7) minimize the intensity difference of the values inside and outside of
the contour S. The first and second term of the energy functional F of Equation (2.7)
are added for regularization purposes [Chan 01].
The Euler-Lagrange equation for the implicit version of the functional F in Equa-
tion (2.7) using a level set formulation is [Chan 01]
   
∂u/∂t = |∇u| ( µ · div(∇u/|∇u|) − ν − λ1 (I − c1)² + λ2 (I − c2)² ) ,    (2.9)

where u : R^d → R is the defined level set at time t. This equation specifies how the
curve S should evolve to minimize the functional F in a steepest descent manner. It
consists of the balloon term, the smoothing term, and the image attachment term.
The balloon force is given by the equation

∂u/∂t = ν |∇u| .    (2.10)
Depending on the sign of ν, the balloon term is equivalent to the classical morpho-
logical operators, erosion and dilation

u^{i+1}(p) = { D(u^i(p))   if ν > 0
             { E(u^i(p))   if ν < 0     (2.11)
             { u^i(p)      if ν = 0

where D is the dilation, E is the erosion, and i is the iteration step. The smoothing
term is given by the equation
 
∂u/∂t = |∇u| µ div( ∇u / |∇u| ) .    (2.12)

The smoothing term can be approximated using morphological curvature operators.


The number of smoothing iterations is given by µ. The image attachment term is
given by the equation

∂u/∂t = |∇u| ( −λ1 (I − c1)² + λ2 (I − c2)² ) .    (2.13)

Figure 2.5: (a) Initialization of the MACWE algorithm. (b) Result of the MACWE
evolution.

The attraction force term depends on the mean intensities c1 inside and c2 outside of
the curve S [Marq 14]

u^{i+1}(p) = { 1        if λ1 (I − c1)² < λ2 (I − c2)²
             { 0        if λ1 (I − c1)² > λ2 (I − c2)²     (2.14)
             { u^i(p)   otherwise

See Figure 2.5 for an exemplary result for the MACWE in a 2-D LGE-MRI sequence,
where the non-negative parameters are set as follows: µ = 1, ν = 1, λ1 = 1, and
λ2 = 2.
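As an illustration, a comparable evolution can be run with the morphological_chan_vese function of scikit-image, which implements a closely related morphological ACWE scheme (it has no explicit balloon term ν); the circular initialization and the parameter choice mirror the example of Figure 2.5. This is a minimal sketch rather than the implementation used in this thesis, and the exact keyword name of the iteration count differs between scikit-image versions, so it is passed positionally here.

import numpy as np
from skimage.segmentation import morphological_chan_vese


def run_macwe(image, center, radius=10, n_iterations=15):
    # Circular initial level set around the detected center.
    rows, cols = np.ogrid[:image.shape[0], :image.shape[1]]
    init = ((rows - center[0]) ** 2 + (cols - center[1]) ** 2) <= radius ** 2
    # smoothing corresponds to the number of curvature smoothing steps (mu),
    # lambda1/lambda2 weigh the inside/outside intensity terms of Eq. (2.7).
    return morphological_chan_vese(image.astype(float), n_iterations,
                                   init_level_set=init.astype(np.int8),
                                   smoothing=1, lambda1=1, lambda2=2)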

2.3 Pattern Recognition


The pattern recognition pipeline from Niemann [Niem 83, Niem 13] is depicted in
Figure 2.6. According to the pipeline, a pattern has to be recorded first. In our
case a pattern corresponds to an image I, which is digitized. After the image is
digitized, pre-processing is applied to increase the later classification performance.
For each image I a feature vector f is extracted. This feature vector contains distinct
information in order to classify the pattern. Finally, a classifier is used to assign each
feature vector f a class label c ∈ C, where C = {c1, ..., cN} is the set of all classes.
This whole process can be summarized as classification phase.
However, in order to assign each feature vector a class label c, the different classes
and the individual properties of each class have to be known. During the training
phase the properties are learned using a training set D ∈ RD , which consists of
representative features for each of the classes.
Depending on the type and amount of features, different classifiers can be used.
The different types of classifiers can be separated into:

• statistical classifier: assumes that the a-priori probabilities pc as well as the


probability density function p(f |c) of each class are known. One example for a
statistical classifier is the Bayes classifier.
Figure 2.6: Overview of the pattern recognition pipeline (recording, pre-processing, feature extraction, and classification in the classification phase; training in the learning phase), adapted according to Niemann [Niem 83].

• parametric classifier: summarizes the data by a parametric function and uses a
distance measure. One example for a parametric classifier is logistic regression.

• non-parametric classifier: does not require any parameter estimation, as such
classifiers most often store the whole training data set. One non-parametric
classifier is the nearest neighbor classifier.

• other classifiers: random forest, support vector machine, hidden Markov models,
neural networks, etc.

It is important to note that every classifier is only as good as the training samples
used. In addition, the more training samples are available, the better the classifier will
generalize. In the next two sections, the focus is on decision trees and the random
forest classifier, as they are used in this thesis for boundary classification.

In 1984, Breiman et al. [Brei 83] first introduced a tree model for classification
and regression, which is also known as classification and regression trees (CART).
From then on, decision trees became very popular and were widely used for different
machine learning problems. Reasons for their success are that they are fast and
scalable even for large data sets and they can be formulated in a probabilistic fashion.
Breiman [Brei 01] further developed the ensemble of decision trees and introduced in
2001 the injection of randomness in the learning phase of each individual tree. Since
then, the random forest has been successfully applied to many different machine
learning applications.
In this section, the basic machine learning methods used in this thesis are sum-
marized. First, the principles of decision trees are described and their application
for classification is introduced. Second, the concept of the random forest classifier is
explained. Afterwards, the idea of cross-validation is described and the optimization
of the hyper-parameters of the random forest classifier is explained.

Figure 2.7: A decision tree consists of a set of nodes, which are connected through
edges. The top node is referred to as root. The internal nodes or split nodes are
visualized as circles. The leaf or also called terminal nodes are depicted as rectangles.
In contrast to graphs, in a decision tree there are no loops.

2.3.1 Decision Trees


A decision tree performs predictions using a sequence of simple decisions. In our case
only binary decisions are considered, which means each node can only have two chil-
dren. One tree is a collection of nodes and edges which are organized in a hierarchical
structure, as visualized in Figure 2.7. In contrast to graphs, a decision tree has no
loops and the tree can only be traversed along a descending path. Therefore, every
node can have only one incoming edge. Furthermore, as we only consider simple,
binary trees, each internal node can have only two outgoing edges. The top node of
a tree is referred to as root, the nodes with no children are called leaf nodes. All the
nodes in the middle of the tree are split or internal nodes [Crim 12].
Let the input data for the root node N_0 be represented by the set D ∈ R^D of data
points {f }. For each node N a so-called splitting function s : f → {0, 1} needs to be
determined, whose role is to split the incoming observations or features denoted by D
into two subsets Dleft and Dright . The two subsets are disjoint, i. e., D = Dleft ∪ Dright
and Dleft ∩ Dright = ∅.
The binary split function s is associated with each split node i

s(f , θ i ) ∈ {0, 1} , (2.15)

where 0 and 1 represent the decision for the left and right node and θ is a set of
parameters for the split function. The parameters of θ can be further decomposed
into
θ = (φ, ψ, τ) ,    (2.16)

where φ : R^d → R^{d'}, with d' << d, is the feature selector function, which selects
some features of choice out of the vector f. The parameter ψ ∈ R^d defines the
geometric primitive used for separating the data, i. e., an axis-aligned hyper-plane or
an oblique hyper-plane. The parameter τ stores the threshold used in the binary

Figure 2.8: Examples of splitting functions, which are also referred to as weak
learners. Mostly, linear splitting functions are used, followed by a thresholding
operation. (a) An axis-aligned hyper-plane as splitting function. (b) General oriented
hyper-plane as splitting function. For visualization purposes the feature vector is in
2-D space, f = (x1, x2)^T ∈ R^2. Images based on Criminisi et al. [Crim 12].

test [Crim 13]. All the parameters will be optimized at each split node [Crim 12]. In
Figure 2.8 different examples of splitting functions are visualized. Commonly, linear
functions coupled with a threshold operation are applied

s(f , θ i ) = [τ > φ(f ) · ψ] . (2.17)

During the training of the decision tree, the optimal parameters θ_i for each of
the split nodes need to be computed. Considering classification, the aim is to reduce
the class uncertainty. The most popular objective functions are the information
gain and a variation of it called Gini impurity. The information gain needs to be
maximized as it measures the difference between the class uncertainty before and
after the splitting,
θ*_i = argmax_{θ_i} G_i(D_i, D_i^left, D_i^right, θ_i) ,    (2.18)

where G : D → R is the information gain for one node. The information gain after
the split is computed by
G = H(D) − Σ_{i∈{1,2}} ( |D_i| / |D| ) · H(D_i) ,    (2.19)

where H : D → R is e. g., the Shannon entropy HS : D → R which is defined as


H_S(D) = − Σ_c p(c|D) log(p(c|D)) ,    (2.20)

where c ∈ C indicates the class label, C = {c1, ..., cN} is the set of all classes, and p
is the empirical distribution of the training points of class c within the data D.

Figure 2.9: Information gain after splitting. (a) Data distribution before splitting,
where the different colors indicate different classes. (b) Class distribution before
splitting, with a uniform distribution, as for every class the same number of points is
given. (c) Data distribution after vertical splitting. (d) Class distribution after the
vertical split for the left part, only the red and blue class are left and the distribution
is uniform. (e) Class distribution after the vertical split for the right part, only the
green class is left. Graphic based on Criminisi et al. [Crim 12].

As mentioned before, an alternative measure is the Gini impurity H_G : D → R,
which is defined as

H_G(D) = Σ_c p(c|D) (1 − p(c|D)) = 1 − Σ_c p²(c|D) .    (2.21)
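The two impurity measures and the resulting information gain of Equations (2.19)–(2.21) can be written compactly in Python; this is a minimal sketch in which the labels are assumed to be integer class indices and the function names are illustrative.

import numpy as np


def shannon_entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))            # Eq. (2.20)


def gini_impurity(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)              # Eq. (2.21)


def information_gain(labels, left_mask, impurity=shannon_entropy):
    # Gain of splitting `labels` into the subsets selected by `left_mask` (Eq. 2.19).
    left, right = labels[left_mask], labels[~left_mask]
    weighted = sum(len(d) / len(labels) * impurity(d) for d in (left, right) if len(d))
    return impurity(labels) - weighted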

Figure 2.9 (a) depicts three different classes, where the color indicates the class
label. The distribution over the classes is uniform, because each class has the same
number of points, which can be seen in Figure 2.9 (b). In Figure 2.9 (c) the data
is split vertically, which produces two sets of data, Dleft and Dright . Each data set
after the split has higher information and therefore a lower entropy. Hence, the
information gain is maximized after this split, as one class is already separated.
The calculation of the information gain happens during the training phase of the
forest. The decision functions at each node are optimized individually to iteratively
split the training data until a stopping criterion is reached [Crim 12].
There are mainly three common stopping criteria: i) the maximal tree depth D ∈ N is
reached, ii) the minimum population per leaf is reached, or iii) the variation of the
information gain G falls below a minimum. If one of these criteria is fulfilled, the last
tree node is reached, the iterative splitting of the training data D stops, and the node
turns into a leaf node.
The function of the leaf node is to model the posterior distribution of the given
subset of D of the training data. Hence, each leaf node corresponds to a part of

Figure 2.10: (a) Partition of feature space into three classes. (b) Resulting decision
tree, where each leaf node represents one class.

the feature space, as depicted in Figure 2.10. The probabilistic leaf model can be
formulated as
p(c|f ) , (2.22)
where c ∈ C is indexing the class label and p ∈ [0, 1] is the probability [Crim 12].
To further speed up the decision process and avoid overfitting, leaf pruning can
be applied [Crim 13]. The approach of Gelfand et al. [Gelf 89] proposes to split
the training data set into two disjoint sets and to alternate the roles of the two sets.
First, the tree is grown using the first half and pruned using the error rate of the
second half. Afterwards, the tree is regrown from the pruned tree using the previous
test half and pruned using the previous training half. These steps are repeated until
the tree size does not change anymore.
To conclude this section, decision trees can generalize to unseen data if the
training data set is large enough. However, a single decision tree is prone to overfitting.
Therefore, in the next section the idea of the random forest classifier is introduced to
overcome overfitting.

2.3.2 Random Forest


The goal of ensemble methods, such as the random forest classifier, is to combine
the predictions of several base classifiers in order to improve the generalization and
robustness over a single estimator.
For a forest of size T ∈ N, individual decision trees t ∈ {1, ..., T} are combined, where
all trees are trained individually. During the testing phase, each new feature vector
f is passed through all trees in parallel, until a leaf node is reached. The parallelism
is computationally efficient and makes the testing very fast. Having the prediction of
each individual tree, the final prediction can be estimated by averaging over all tree
specific predictions
p(c|f) = (1/T) Σ_{t=1}^{T} p_t(c|f) .    (2.23)
If many trees are trained and averaged, the result is more robust, as the effect of noisy
tree distributions is reduced. This has also been demonstrated by Breiman [Brei 01].

Figure 2.11: (a) Randomly sampled feature space. (b) Three individual trees,
trained with a randomized subset of the feature space.

However, a pre-requisite for a well-performing forest model is that all trees are
randomly different from each other. The randomness is achieved during the train-
ing phase, where mainly two different approaches are used. i) The training data is
sampled randomly, which was defined as bagging by Breiman [Brei 01], as it is a com-
bination of bootstrap and aggregating. Given the training data set D a subset Dt is
extracted, where all the elements have been randomly sampled using a uniform distri-
bution. The splitting and randomization of the training data yields a good training
efficiency. In Figure 2.11 an example of the randomized splitting of the feature space
D is given, where each of the three trees is trained on a different subset Dt of the
training data. ii) A randomized node optimization is applied [Geur 06], where each
tree can be trained on the whole training data D. For the ith node of a tree, the set
of all possible parameters θ can be defined as P. However, during the training only
a small random subset P_i ⊂ P of the possible parameters is used. This leads to an
individually adapted optimization function for each node

θ_i = argmax_{θ ∈ P_i} G(D_i, θ) .    (2.24)

The injection of randomness is very important, as the degree of randomness increases


the difference between the individual trees and therefore, provides a better general-
ization of the whole model.
In general, the random forest is very flexible and can be designed very task specific.
The most important hyper-parameters that influence the random forest are
• the forest size T, i. e., the number of trees;

• the maximum allowed tree depth D;

• the amount and type of randomness;

• the choice of the weak learner model;

• the minimum number of features at a split node;

• the minimum number of samples at a leaf node;

• and the choice of the features.



Figure 2.12: The number of trees T and the tree depth D are the two most
important parameters regarding the optimization of the random forest. (a) Correlation
between the number of trees T and the convergence of the prediction error. (b) Cor-
relation between the depth of the tree D and the problem of over- or underfitting the
data.

However, the first two bullet points are the two most relevant parameters. The number
of trees T is important for noisy data to increase the prediction rate. Nevertheless,
the prediction error converges after a certain number of trees; therefore, a trade-off
between the number of trees and the runtime has to be made. In Figure 2.12 (a), the
convergence of the prediction error with an increasing number of trees T is outlined.
The maximal allowed tree depth D has a direct impact on the generalization of each
tree. If the tree is too deep, only few samples of the training data will be contained
in each leaf node, which leads to overfitting of the data [Crim 12]. Figure 2.12 (b)
depicts an example of the correlation between the tree depth D and overfitting of
the data.
In this thesis, the random forest is used for classification, where the aim is to
assign each feature vector f a discrete class label c ∈ C. Classification forests can
handle multiple classes, provide a probabilistic output for each feature vector, they
generalize well also to unseen data, and they are very efficient due to their parallelism.
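As a minimal sketch (not the implementation of this thesis), such a classification forest can be trained with scikit-learn, where n_estimators corresponds to the forest size T, max_depth to the maximum tree depth D, and bootstrapping plus a random feature subset per split inject the randomness described above; the feature matrix and labels below are synthetic placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))            # placeholder feature vectors f
y = rng.integers(0, 2, size=1000)          # placeholder class labels c

forest = RandomForestClassifier(
    n_estimators=100,      # forest size T
    max_depth=10,          # maximum tree depth D
    max_features="sqrt",   # randomized feature subset per split node
    bootstrap=True,        # bagging of the training data
    n_jobs=-1,             # trees are trained and evaluated in parallel
)
forest.fit(X, y)
proba = forest.predict_proba(X[:5])        # averaged tree posteriors p(c|f), Eq. (2.23)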

2.3.3 Cross-Validation
For the training of a classifier, parameters have to be learned. If the testing were
performed on the training data set, the model would achieve a very good prediction
score, e. g., for the nearest neighbor classifier the labels would just be repeated.
However, this is a methodological mistake and is called overfitting. To avoid such
situations, the data set D has to be split into a training set Dtrain and a test data set
Dtest , where D = Dtrain ∪ Dtest and Dtrain ∩ Dtest = ∅.
Furthermore, overfitting can be present when the hyper-parameters of a classifier
are optimized on the test set Dtest; then the evaluation metric no longer reports
the general performance of the classifier. Hence, another data set has to be extracted
from the training set Dtrain, the so-called validation set Dvalidation, where
Dtrain = D'train ∪ Dvalidation.

Figure 2.13: Illustration of different evaluation approaches. (a) Simple evaluation,


where the data set D is split into a training set (blue) and test set (red). (b) k-fold
cross validation, where k = 3. All the data is used for testing, as it is divided into k
disjoint subsets.

Algorithm 2.1 Cross Validation
Input: D
1: for k ∈ {1, ..., K} do
2:   D_k^training = D \ D_k
3:   D_k^test = D_k
4:   Train classifier using D_k^training
5:   Test classifier using D_k^test
6: end for
7: Compute average classification error

The training is then performed on the training set D'train; afterwards, the evaluation
of the hyper-parameters is done on the validation set Dvalidation.
After satisfying results are achieved with the training, the final evaluation can be
done using the test data set Dtest .
However, if there is just a limited amount of data available, the splitting of the
data drastically reduces the number of samples available in each set and the results
can depend on the choice of the data sets. There is a need for a large training,
validation, and test set at the same time to achieve a generic classifier which is not
sensitive to the training and validation data and to prove the generalization, i. e., the
subject independence, with a large test data set.
is to use cross-validation (CV), or also often called k-fold CV, where the data set is
split into k disjoint subsets of equal size. The model is trained on k − 1 subsets of the
data set D. Afterwards, the trained classifier is evaluated using the remaining part
of the data set D where the classification error is estimated. The steps are repeated
k times, until every subset was used for testing. In Figure 2.13 (b) an illustration
for a k-fold CV is given, where k = 3. Using a k-fold CV the classification error
can be estimated for the whole data set D, and more precise statements about the
performance of the classifier can be given [Cawl 10]. In the extreme case, where k
corresponds to the size of the data set D, the cross-validation is called leave-one-out
CV. In Algorithm 2.1, a pseudo algorithm for the cross-validation is presented.
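Algorithm 2.1 maps directly to a few lines of Python with scikit-learn's KFold; this is a minimal sketch in which the random forest is only a placeholder classifier and X, y stand for the feature matrix and class labels.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold


def k_fold_error(X, y, K=5):
    errors = []
    for train_idx, test_idx in KFold(n_splits=K, shuffle=True, random_state=0).split(X):
        clf = RandomForestClassifier(n_estimators=100)
        clf.fit(X[train_idx], y[train_idx])                        # train on D \ D_k
        errors.append(1.0 - clf.score(X[test_idx], y[test_idx]))   # test error on D_k
    return np.mean(errors)                                         # average classification error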
Cross-validation can be combined with a grid search for the optimal hyper-parameter
selection of the classifier, as the hyper-parameters, such as the tree depth D and the
number of trees T for the random forest, are not directly learned within the training
phase of the classifier. A grid search can be used to estimate the performance of all

Algorithm 2.2 Nested Cross Validation
Input: D
1: for k ∈ {1, ..., K} do
2:   D_k^test = D_k
3:   D_k^training = D \ D_k^test
4:   for i ∈ {1, ..., I} do
5:     D_i^validation = D_{k,i}^training
6:     D_i^training = D_k^training \ D_i^validation
7:     Train classifier using D_i^training
8:     Validate each parameter set using D_i^validation
9:   end for
10:  Determine best hyper-parameter set, where θ_k = argmax_{θ ∈ P_i} G(D_k^training, θ)
11:  Train classifier using D_k^training and θ_k
12:  Test classifier using D_k^test
13: end for
14: Compute average classification error

different parameter combinations for one classifier. Therefore, a range of the possible
parameters has to be specified. With a nested k-fold CV, the hyper-parameter
optimization and the evaluation of the classifier can be achieved at the same
time [Cawl 10]. For the nested cross-validation, two loops of cross-validation are
required. The outer loop splits the data D into a training data set Dtrain and a testing
data set Dtest. The inner loop splits the training data Dtrain of the outer loop into a
disjoint training set D'train and validation data set Dvalidation. Within the inner loop,
the hyper-parameters of the model are optimized by a cross-validation combined with
a grid search. After the optimal parameters have been found, the classifier of the outer
loop is trained with these hyper-parameters and evaluated using the test set Dtest. In
Algorithm 2.2, the individual steps of the nested cross-validation are detailed. If the
model is stable and the features used for training the model are well selected, the
hyper-parameters θ_k for each of the k folds should be similar. However, in real world
scenarios, the parameters are often different for each split. Then, for the final model,
e. g., the mean value of each parameter can be used for training.
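A minimal sketch of Algorithm 2.2 with scikit-learn combines GridSearchCV as the inner loop with cross_val_score as the outer loop; the parameter grid over T and D and the synthetic data are placeholders, not the settings used in this thesis.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))              # placeholder feature vectors f
y = rng.integers(0, 2, size=200)            # placeholder class labels c

param_grid = {"n_estimators": [50, 100, 200],   # candidate forest sizes T
              "max_depth": [5, 10, 20]}         # candidate tree depths D

inner = GridSearchCV(RandomForestClassifier(), param_grid, cv=3)   # inner loop (validation)
outer_scores = cross_val_score(inner, X, y, cv=5)                  # outer loop (testing)
print("mean accuracy over the outer folds:", outer_scores.mean())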
PART II

Left Ventricle Segmentation

CHAPTER 3

Left Ventricle Segmentation in 2-D LGE-MRI

3.1 Motivation
3.2 Related Work
3.3 Segmentation Pipeline
3.4 Evaluation and Results
3.5 Discussion and Conclusion

This chapter presents methods for the left ventricle segmentation in 2-D LGE-MRI.
In particular, filter based and learning based methods for the LV segmentation are
introduced. This chapter is organized as follows: The motivation is depicted in
Section 3.1. In Section 3.2 relevant literature is reviewed. The proposed segmentation
framework is introduced and described in Section 3.3. The evaluation and results
using 2-D LGE-MRI are shown in Section 3.4. In Section 3.5 the results are discussed
and a conclusion is drawn. Parts of this chapter have previously been published in
three conference publications [Kurz 17a, Kurz 17e, Kurz 18a].

[Kurz 17a] T. Kurzendorfer, A. Brost, C. Forman, and A. Maier. “Automated Left Ventricle Segmentation in 2-D LGE-MRI”. In: IEEE, Ed., Proceedings of the 2017 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 831–834, April 2017.

[Kurz 17e] T. Kurzendorfer, C. Forman, A. Brost, and A. Maier. “Random Forest Based Left Ventricle Segmentation in LGE-MRI”. In: International Conference on Functional Imaging and Modeling of the Heart, pp. 152–160, Springer, June 2017.

[Kurz 18a] T. Kurzendorfer, K. Breiniger, S. Steidl, A. Brost, C. Forman, and A. Maier. “Left Ventricle Segmentation in LGE-MRI: Filter Based vs. Learning Based”. In: IEEE Nuclear Science Symposium and Medical Imaging Conference, Nov. 2018.


Figure 3.1: Overview of the state-of-the-art LGE-MRI segmentation pipeline. First,


the LV is segmented in the cine MRI. In the next step, the cine MRI is registered to
the LGE-MRI. In the third step, the contours from the cine MRI are propagated to
the LGE-MRI.

3.1 Motivation
The clinical gold standard to visualize myocardial scarring is 2-D LGE-MRI [Born 95,
Shin 14, Rash 15]. In clinical routine, the left ventricle’s myocardium in 2-D LGE-MRI
is commonly segmented manually. However, manual delineation of the myocardial
boundary is complex, time consuming, and prone to intra- and inter-observer variabil-
ity. The main challenge with the segmentation of LGE-MRI is the non-homogeneous
contrast distribution within the myocardium due to the different contrast agent accu-
mulation in the healthy and damaged myocytes. Therefore, the challenge lies in the
accurate contour delineation of the endocardial and epicardial boundary of the left
ventricle. Hence, most 2-D LGE-MRI segmentation methods rely on the registration
of the cine MRI scans to the LGE-MRI. In Figure 3.1 the workflow of the cine MRI
registration to the LGE-MRI is depicted.
The registration and fitting from the cine MRI to the LGE-MRI have several
problems. The global position of the heart may change between the acquisitions due
to patient movement, as there is a large time gap between the acquisition of the two
sequences. The cardiac phases from the cine MRI may not precisely match the
LGE-MRI. Inter-slice shifts can arise from multiple breath-holds. Even though these
shifts may appear minor, they can lead to significant errors in the myocardial scar
quantification.

3.2 Related Work


Segmentation methods to extract the endocardium and epicardium of the left ventricle
using 2-D SA LGE-MRI acquisitions most often rely on the registration of the
LGE-MRI to an anatomical cine MRI scan [Diki 04, Wei 11, Ciof 08, Tao 14] or shape
priors [Alba 14]. Cine MRI scans have a uniform texture and the wall delineation is
better visible compared to LGE-MRI. Therefore, the segmentation of the LV from
cine MRI is a well studied problem in literature [Suin 14, Peti 11]. Little research has
been done for fully automatic myocardium segmentation solely using LGE-MRI.

Dikici et al. [Diki 04] use a non-rigid variational approach to register cine MRI
with LGE-MRI. Afterwards, active contours are used to optimize a parametric affine
transform of the propagated parameters. Ciofolo et al. [Ciof 08] propose to deform a
geometrical template to fit to the myocardial contour for each MRI slice. In addition,
the LV was divided into four quadrants and those likely to contain large areas of
myocardial scarring were treated differently. The contours from the cine MRI were
then aligned to the geometrical template of the LGE-MRI. Wei et al. [Wei 11] use a
model based approach for the LV segmentation that comprises several steps. Inter-
slice shifts in cine MRI images are corrected and the LGE-MRI is registered to the
cine MRI. The cine MRI contours are further deformed by features in the LGE short
or long axis images. Tao et al. [Tao 14] propose a four step algorithm to segment
the LV from LGE-MRI. The LGE-MRI is aligned to the cine MRI. The endo- and
epicardial borders from cine MRI are fitted to the LGE-MRI, optimized in 3-D and
refined based on the LGE-MRI.
Albà et al. [Alba 14] propose a modified graph cut approach to segment the LV
using 2-D LGE-MRI. Therefore, six rules are defined: (i) the blood pool appears
bright, (ii) the blood pool has a circular shape, (iii) the blood pool is continuous in
the longitudinal direction, (iv) the myocardium can include dark and bright voxels,
(v) the myocardial thickness changes smoothly, and (vi) the 3-D global shape of the
myocardium is smooth. The method from Albà et al. does not use information
from cine MRI. The direct segmentation using LGE-MRI avoids segmentation errors
caused by the use of the 2-D cine MRI and a misalignment between the sequences. In
addition, it allows for a faster segmentation approach. Hence, the direct segmentation
of the LV solely based on LGE-MRI is desirable. The main difference between Albà
et al. and the proposed method is that Albà et al. use a graph cut approach, in
combination with shape and interslice constraints, which require prior information
derived from the six rules. In the proposed algorithm, a morphological active contours
approach is used for the rough estimation of the blood pool. For the boundary
estimation either a filter based or learning based approach is used in combination
with a minimal cost path search in polar space.

3.3 Segmentation Pipeline

The segmentation of the LV can be divided into four major steps. First, the left ven-
tricle is detected using a combination of circular Hough transforms, Otsu’s thresh-
olding, and a circularity measure (Section 3.3.1). In the second step, a region of
interest is identified using a morphological active contours without edges (MACWE)
approach (Section 3.3.2). In the third step, the endocardial boundary is estimated
(Section 3.3.3). In this thesis two approaches are described, a filter based segmenta-
tion and a learning based segmentation [Kurz 17a, Kurz 17e, Kurz 18a]. In the last
step, the epicardial contour is obtained by considering the endocardial boundary
(Section 3.3.4). Figure 3.2 provides an overview of the segmentation pipeline.

Figure 3.2: Overview of the LV segmentation pipeline in 2-D LGE-MRI. First, the
LV is detected using circular Hough transforms and a circularity measure. In the
next step, the blood pool is segmented by applying a MACWE approach. In the
third step, the endocardial border is refined in polar space using a dynamic program-
ming approach. In the final step, the epicardial contour is extracted considering the
endocardial contour and the edge information.

3.3.1 Left Ventricle Detection

The LV is detected in the mid-slice of the 2-D LGE-MRI stack. First, the Canny
edge detector is used to extract the edges from the image [Cann 86], as depicted in
Figure 3.3 (a). In the next step, circular Hough transforms are applied [Duda 72].
The radii of the circular Hough transforms are in range of 17 mm to 35 mm, which
is defined according to the anatomical information in literature [Lang 06] and with
a step size of 2 mm due to performance. Figure 3.3 (b) illustrates the five most
prominent circles which are detected using the circular Hough transform. The most
prominent candidate is selected as potential LV blood pool candidate. To verify this
position, an additional roundness measure of different objects is estimated. Therefore,
Otsu’s thresholding is applied to the whole slice, to convert the image into a binary
mask [Otsu 79], see Figure 3.3 (c). Afterwards, binary erosion is applied and objects
that are smaller than a predefined threshold θo ∈ R, where θo = 25, are removed.
The resulting image is visualized in Figure 3.3 (d). The threshold for θo is defined
heuristically. From the remaining objects the eccentricity, i. e., the roundness R ∈ R,
is estimated as R = √(1 − b²/a²), where a is the semi-major axis and b is the semi-minor
axis of the object. If the object is circular, R = 0. In Figure 3.3 (e) the roundness
measure for the remaining three objects is calculated, where prior to the calculation
the convex hull of the objects is estimated. If the center point c1 of the roundest object
and the center point c2 resulting from the circular Hough transform are within a
distance θc ∈ R, i. e., ||c1 − c2||_2 ≤ θc, the LV has been accurately detected, and the
center point c of the blood pool is defined as c = ½ (c1 + c2). Otherwise, the user is
asked to verify the center of the LV by clicking into the center.
shown in Figure 3.4 (a).
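A minimal sketch of the circle detection step with scikit-image is given below, assuming mid_slice is the 2-D mid-slice and spacing its in-plane pixel spacing in mm; the peak selection is illustrative and the roundness verification described above is omitted here.

import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks


def detect_lv_candidate(mid_slice, spacing, r_min_mm=17, r_max_mm=35, step_mm=2):
    edges = canny(mid_slice)                                    # Canny edge map
    radii = (np.arange(r_min_mm, r_max_mm + 1, step_mm) / spacing).astype(int)
    accumulator = hough_circle(edges, radii)                    # one accumulator per radius
    # Keep the five most prominent circles and return the strongest candidate.
    _, cx, cy, rad = hough_circle_peaks(accumulator, radii, total_num_peaks=5)
    return (cy[0], cx[0]), rad[0]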

Figure 3.3: LV detection pipeline. (a) Canny edge image. (b) Result of the circular
Hough transform, showing the five most dominant circles. (c) Binary image after
Otsu’s threshold was applied. (d) Erosion and small object removal. (e) Results
of the roundness measure for the three remaining objects, where the convex hull is
applied for the computation.

3.3.2 Blood Pool Estimation

After the center of the LV is detected in the center slice of the LGE-MRI stack,
this information is used for the boundary estimation of the endocardium. The mid-
slice is a good slice to start with the segmentation, as the result is used to propa-
gate in basal and apical direction. Having the rough location of the LV, the center
point is used as initialization for a morphological active contours approach with-
out edges (MACWE) [Marq 14]. A detailed introduction of the MACWE method is
provided in Section 2.2.3.
The functional for the contour S ∈ R² depends on the image slice I ∈ R^{N×M}
and is defined by

F(c1, c2, S) = µ · L(S) + ν · A(S) + λ1 ∫_Ω ||I(p) − c1||² dp + λ2 ∫_Ω̄ ||I(p) − c2||² dp ,    (3.1)

where the non-negative parameters µ, ν, λ1 , and λ2 ∈ R control the strength of each


term, L denotes the length of the contour S, and A the area of the contour S. The
parameters are initialized as follows: µ = 1, ν = 1, λ1 = 1, and λ2 = 2. For a fixed
contour, c1 and c2 are the mean intensity values of the area inside Ω and outside Ω̄
of the contour S. Therefore, an implicit version of the functional F of Equation 3.1
can be defined using level sets [Marq 14], see Section 2.2.3 for more detail.
To start the contour evolution, a circle is initialized using the detected
center from the LV detection and a radius of r = 10 pixels. The MACWE is stopped
after 15 iterations, which is heuristically chosen. The MACWE curve evolution is not
stopped after a certain convergence level is reached, because for some cases the curve
would start oscillating. Therefore, the fixed stopping criterion is used, as it achieves
good results for all cases. See Figure 3.4 (b) for an example result from the MACWE
evolution.

Figure 3.4: Mid-slice of the LGE-MRI stack. (a) Detected center of the LV using
circular Hough transforms and circularity constraints. (b) Result of the morphological
active contours without edges approach (MACWE).

3.3.3 Endocardial Contour Extraction


The further refinement of the endocardial contour can be done in two ways, either
filter based or learning based. Both approaches will be detailed in the next two
paragraphs.

Filter Based Segmentation

After the blood pool approximation of the LV, the contour S is used for the refinement
of the endocardial border. As the rough outline of the LV is known, the image I is
cropped around the region of interest to perform further image processing steps on
the cropped image.
The polar image is calculated, where the origin of the polar image corresponds to
the center of the blood pool. The maximum radius is selected to cover all potential
myocardium boundaries. The estimated contour S from the MACWE approach is
also converted to polar coordinates and used to refine the endocardial boundary.
Figure 3.5 (a) depicts an example of the polar image and the converted contours.
In the polar image the edge information is extracted by applying the Canny edge
detection [Cann 86]. To extract minor edges, a Gaussian smoothing with a standard
deviation of σ = 1.5 is used, see Figure 3.5 (b) for an example. In addition, the
mean intensity µbp ∈ R and the standard deviation σbp ∈ R of the blood pool are
estimated using the intensity values inside the contour obtained from the MACWE
approach. In the next step, a scar threshold θst ∈ R is defined as θst = µbp + σbp . All
pixel intensities that are greater than θst and outside of the blood pool are defined as
potential myocardial scar and labeled with 1, the non-scar pixels are labeled with 0,
see Figure 3.5 (c). Afterwards, all pixels with increasing radius after the potential scar
candidates are labeled with 1, resulting in a scar map, as visualized in Figure 3.5 (d).
The scar map is combined with the edge image, which derives the cost array K ∈ Rr×ρ
for the minimal cost path search, as shown in Figure 3.5 (e). Six equally distributed
points are selected from the converted blood pool contours S and the MCP search is

Figure 3.5: Final steps of the endocardial boundary segmentation. (a) Polar image
with the converted contour S from the blood pool segmentation in red. (b) Edge
image using the Canny edge detection. (c) Scar classification, all pixels that are greater
than θst and outside of the blood pool are labeled with 1. (d) Scar map, where all
pixels with increasing radius after a scar candidate are labeled with 1. (e) The cost
array is derived from the edge image combined with the scar map. (f) Final result of
the minimal cost path search.

initialized to find the optimal contour [Dijk 59]. The MCP finds the distance weighted
minimal cost path through the cost array. The cost path is calculated as the sum
of the costs at each point of the path, where edge pixels have a cost of 0 and non-edge
pixels a cost of 1. The cost c ∈ R of one step from point p_i to p_{i+1} is calculated as follows:

c(p_i, p_{i+1}) = (d/2) · K(p_i) + (d/2) · K(p_{i+1}) ,    (3.2)
where d ∈ R is the distance between the two points pi and pi+1 . The diagonal
versus axial moves are of different length and therefore, the path costs are weighted
accordingly. The final contour C ∈ RN ×D of the MCP is illustrated in Figure 3.5 (f).
After the refinement, the contour C is converted back to Cartesian coordinates.
As the result might be frayed and papillary muscles close to the endocardial border
may be included, the convex hull is calculated for the estimated contour points C.
The final contour C is derived from the smallest convex polygon, as depicted in the
third box of Figure 3.2. This assumption is based on the fact that the left ventricle’s
cavity is convex in the SA view.
After the first contour is refined, the information of the location of the LV is used
for the initialization of the MACWE for the succeeding slices in basal and apical
direction. The refinement steps are repeated for every slice. Furthermore, the radius
and area of the previous curve are considered for the refinement of the lower and
upper slices.
The refinement is stopped in apical direction if either the last slice is reached, or
the radius of the endocardial contour r falls below a threshold θr ∈ R.
For the refinement in basal direction the iteration stops, if either the last slice is
reached, the difference of the areas of the previous contour C^{i−1} and the newly found
contour C^i is greater than a threshold θ_b^max ∈ R, or the left ventricular outflow tract
is detected. For the detection of the left ventricular outflow tract, the root mean
square error of the intensities of the current slice to the previous slice is calculated.
If the error is greater than θLVOT ∈ R, the left ventricular outflow tract is reached
and the refinement stops.

Learning Based Segmentation

As for the filter based approach, the outline of the MACWE approach is used to
extract potential endocardial boundary candidates using circular ray casting. There-
fore, the image is converted to polar coordinates, as for the filter based approach.
Potential boundary candidates are then selected for N ∈ N equidistant points along
the contour S, as depicted in Figure 3.6 (a) in Cartesian coordinates. Each potential
boundary candidate is then classified using a trained random forest (RF) classifier.
The result of the classification is illustrated in Figure 3.6 (b) and (c) as a cost map,
where green corresponds to 0 costs and red corresponds to 1.
The performance of any classifier is limited by the discriminative power of the
features used for training. Steerable features are used [Zhen 08], as they are compu-
tationally efficient and can capture the orientation and scale. In total, 16 features are
extracted for each boundary candidate, based on local intensity and gradient, which
result in a feature vector f ∈ R16 that is used for training and detection. For a given
boundary candidate p = (x, y)T with the intensity I ∈ R and the gradient g ∈ R2 ,

Figure 3.6: Learning based boundary classification using a random forest classifier.
(a) Potential boundary candidates extracted using circular ray casting. (b) Endocar-
dial boundary detection result obtained from the trained RF classifier. (c) Boundary
cost map in Cartesian coordinates.

where ∇I(p) = g = (g_x, g_y)^T, the following features are extracted: I, √I, ∛I, I², I³,
log I, ||g||, √||g||, ∛||g||, ||g||², ||g||³, log ||g||, g_x, g_y, √(g_x² + g_y²), and ∇²I(p).
Note that all the features are extracted in polar space, which is the steerable space.
The center position in Cartesian space, i. e., the origin in polar space, has no influence
on the classification result.
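For illustration, the 16-dimensional feature vector of one boundary candidate can be assembled as follows, assuming its intensity, gradient components, and Laplacian in the polar image have already been sampled; the small epsilon guarding the logarithms is an added assumption of this sketch, not part of the original feature definition.

import numpy as np


def candidate_features(intensity, gx, gy, laplacian, eps=1e-6):
    I = float(intensity) + eps          # intensity I (epsilon avoids log(0))
    g = np.hypot(gx, gy) + eps          # gradient magnitude ||g||
    return np.array([
        I, np.sqrt(I), np.cbrt(I), I ** 2, I ** 3, np.log(I),
        g, np.sqrt(g), np.cbrt(g), g ** 2, g ** 3, np.log(g),
        gx, gy, np.hypot(gx, gy), laplacian])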
The training of the RF is based on ground truth annotations from which positive
as well as negative samples are extracted. For the training pathologic as well as
healthy subjects are used to generate a broad range for the training data base. After
the training, the classifier predicts the endocardial boundary probability pendo (f ) ∈
[0, 1]. The endocardial boundary probability can be interpreted as costs c, where
c = 1 − pendo . If the boundary probability is very high, the costs are close to 0.
To improve the detection of the boundaries in areas of scarred myocardium, an
additional scar exclusion step is added. Given the mean intensity of the blood pool µbp
and the standard deviation σbp , the scar threshold θst is defined as θst = µbp + σbp .
All the pixels above this threshold and outside of the blood pool are defined as
potential scar candidates, see Figure 3.7 (c). The scar map is generated from the scar
candidates, where all pixels with increasing radius from potential scar candidates
are labeled with 1, as depicted in Figure 3.7 (d). In the next step, the boundary
probability is compared to the scar map, at locations where the scar map is labeled
with 1, the boundary map is also labeled with 1. Hence, the boundary potentials are
impaired, which results in the endocardial cost array K ∈ Rr×ρ , see Figure 3.7 (e).
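The scar exclusion can be sketched in NumPy as follows, assuming the polar image, the boundary cost map, a blood pool mask, and the blood pool statistics are given; propagating the candidates along the radius axis labels every pixel behind a scar candidate with 1. Variable and function names are illustrative.

import numpy as np


def scar_adjusted_costs(polar_image, boundary_costs, blood_pool_mask, mu_bp, sigma_bp):
    theta_st = mu_bp + sigma_bp                                 # scar threshold
    candidates = (polar_image > theta_st) & ~blood_pool_mask    # potential scar pixels
    # Propagate each candidate towards larger radii (axis 0 is the radius r).
    scar_map = np.maximum.accumulate(candidates.astype(float), axis=0)
    return np.maximum(boundary_costs, scar_map)                 # combined cost array K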
In the next step, the final segmentation result of the endocardial contour has to
be obtained from the endocardial cost map K. Therefore, a dynamic programming
approach is used in polar space, to compute the optimal endocardial contour from
left to right of the polar image [Qian 15]. The minimal cost path (MCP) search is
used [Dijk 59], which finds the distance weighted minimum path through the cost ar-
ray, as for the filter based approach. The result of the MCP is shown in Figure 3.7 (f).
After the optimal path is found, the contour is transferred back to Cartesian
coordinates, see Figure 3.8 (a). The convex hull is calculated from the contour, as

Figure 3.7: Final steps of the learning based endocardial boundary segmentation.
(a) Mid-slice image after polar transformation. (b) Boundary cost map obtained from
the trained RF classifier in polar coordinates. (c) Potential scar candidates which
have an intensity value greater than θst and are not within the blood pool. (d) Scar
map, where all scar candidates with increasing radius are labeled with 1. (e) Final
endocardial boundary cost map, resulting from the combined boundary detection
result and the scar map. (f) Final result of the minimal cost path (MCP) search in
polar coordinates.

Figure 3.8: Final steps of the endocardial boundary segmentation. (a) Result of
the minimal cost path search (MCP) in Cartesian coordinates. (b) Convex hull of
the result shown in (a).

papillary muscles close to the endocardial border might be included, as visualized in


Figure 3.8 (b).
After the contour is refined in the mid-slice, the information is used for the bound-
ary detection in apical and basal direction. The center is propagated to the succeeding
slices and the MACWE is initialized. The boundary detection using the RF classifier,
the scar map generation, and the MCP is repeated for all succeeding slices until the
base and apex are reached.
For the stopping criteria for the base and the apex, the same conditions are applied
as for the filter based segmentation.

3.3.4 Epicardial Contour Extraction


As for the endocardial boundary segmentation, the epicardial border delineation can
be performed filter based or learning based, as well. In the first paragraph, the filter
based approach is detailed. Afterwards, the learning based method is explained.

Filter Based Segmentation

For the epicardial contour extraction the previously found endocardial contour points
are used. The contour extraction is performed in polar space. As the epicardium
has to be greater than the endocardium, the radius is enlarged by θepi ∈ R, see Fig-
ure 3.9 (a) for an example, where the red contour corresponds to the final endocardial
contour and the yellow contour to the enlarged endocardial contour. The previously
calculated edge image from the endocardial refinement is used. The edges within
the enlarged endocardial segment are erased. Figure 3.9 (b) depicts the edge image
which is compared to the enlarged endocardial mask in Figure 3.9 (c) and results
in an image where only possible epicardial edges remain and all edges within the
endocardial mask are set to 0, see Figure 3.9 (d). Having the enlarged endocardial
contour, the closest edge with increased radius, in a certain margin is searched for, as

Figure 3.9: Final steps of the filter based epicardial boundary segmentation. (a) Polar image with the smoothed endocardial contour in red and the enlarged endocardial contour in yellow. (b) Edge image derived from the Canny edge detector. (c) Enlarged endocardial binary mask. (d) Epicardial edge image, obtained by comparing the edge image in (b) with the enlarged endocardial mask in (c); only the remaining possible epicardial edges are kept. (e) Starting from the enlarged endocardial contour, the next edge with increasing radius is searched for, resulting in the yellow contour.

(a) Final result (b) 3-D model

Figure 3.10: Final steps of the epicardial boundary segmentation. (a) The final
result of the boundary estimation for the endocardium in red and the epicardium in
yellow. (b) 3-D model of the endocardial and epicardial contour in red and yellow,
respectively.

illustrated in Figure 3.9 (e). Afterwards, the newly found edge points are converted
back to Cartesian coordinates. As the result is frayed, the convex hull is estimated
to smooth the contour and remove outliers.
After the first slice is refined, the steps are repeated until the base and apex are
reached, i. e., the segmentation of the epicardial contour stops at the same slices at
which the endocardial contour refinement stops.

Learning Based Segmentation

For the learning based approach, also the endocardial contour is used as a starting
point. From the enlarged contour, potential boundary candidates are extracted using
circular ray casting. Again, an RF classifier is trained for the epicardial boundary de-
tection using the same 16 features as for the endocardial border estimation, resulting
in an epicardial boundary probability pepi (f ). The result of the epicardial boundary
detection is used as cost array for the MCP search. The MCP is applied in polar
coordinates for the same reasons as mentioned before. The MCP finds the distance
weighted minimal path from the left to the right end of the polar image. The result
is then transferred back to Cartesian coordinates and the convex hull is taken. The
result is depicted in Figure 3.10 (a).
For all the segmentation approaches, the contours of the myocardium are extracted
as 3-D surface models using the marching cubes algorithm [Lore 87]. For the marching
cubes algorithm a standard scikit-image Python implementation is used. The output
is a list of vertices and faces, which are automatically saved as a standard surface
mesh format. Figure 3.10 (b) shows an example of a 3-D surface mesh, where the
endocardium is visualized in red and the epicardium in yellow.
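As a rough sketch of this export step, the snippet below runs the scikit-image marching cubes implementation on an assumed binary mask of stacked SA slices and writes a simple Wavefront OBJ file; the function name and the spacing values are illustrative only, not the thesis implementation.

```python
import numpy as np
from skimage import measure

def export_surface(binary_mask, filename, spacing=(10.0, 1.8, 1.8)):
    """Extract an iso-surface from a binary mask (assumed slice gap and
    pixel size in mm) and save it as a Wavefront OBJ surface mesh."""
    verts, faces, normals, values = measure.marching_cubes(
        binary_mask.astype(np.float32), level=0.5, spacing=spacing)

    with open(filename, "w") as f:
        for v in verts:
            f.write("v {:.3f} {:.3f} {:.3f}\n".format(*v))
        for tri in faces + 1:            # OBJ indices are 1-based
            f.write("f {} {} {}\n".format(*tri))
```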

3.4 Evaluation and Results


In this section, the evaluation of the filter based and learning based segmentation
are presented. First, the data used for the evaluation is described. In the second
section, the evaluation methods are presented. In the last section, the results are
shown for the two segmentation approaches and the two approaches are compared to
each other.

3.4.1 Data Description


The automatic segmentation of the LV endocardium and epicardium is evaluated on
100 clinical 2-D LGE-MRI data sets from individual patients. The inversion recovery
LGE-MRI sequences were acquired with a Siemens MAGNETOM Aera 1.5T scanner
(Siemens Healthcare GmbH, Erlangen, Germany). The slice thickness was set to
8 mm, with a pixel size of (1.59–2.08 × 1.59–2.08) mm2 and the spacing between
the slices was set to 10 mm. Each data set contained between 10 and 13 SA slices.
The parameter values were equal for all cases. Gold standard annotations of the
LV endo- and epicardium are provided by two clinical experts; therefore, two ground
truth annotations per data set are available. The annotations were performed using
ITKSnap. The clinical experts were asked to outline the endocardial and epicardial
border separately.

3.4.2 Evaluation
Given the gold standard annotations, the segmentation algorithms are evaluated us-
ing different measures: the volumetric Dice coefficient (DC) and the mean surface
distance (MSD). The DC is a quantitative score to estimate the segmentation qual-
ity, as it measures the proportion of true positives in the segmentation result. The
DC ranges from 0 to 1, with 1 corresponding to a perfect overlap. The metric is
defined as
DC(A, B) = 2|A ∩ B| / (|A| + |B|) ,   (3.3)

where A is the segmentation result, which corresponds to a set of pixels, and B is
the gold standard annotation from one of the clinical experts. The MSD calculates
the mean distance between the surface voxels of the binary mask A and their
nearest surface voxels of the gold standard mask B, averaged over all contour points.
Furthermore, the inter-observer variability between the two clinical experts is evaluated.
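A minimal sketch of the two measures, assuming A and B are binary masks given as NumPy arrays, is shown below; this is for illustration only and not the evaluation code used in the thesis.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(A, B):
    """Volumetric Dice coefficient, Eq. (3.3)."""
    return 2.0 * np.logical_and(A, B).sum() / (A.sum() + B.sum())

def mean_surface_distance(A, B, spacing):
    """Mean distance from the surface voxels of A to the closest
    surface voxels of B (voxel spacing in mm)."""
    surf_A = np.logical_xor(A, binary_erosion(A))     # surface of mask A
    surf_B = np.logical_xor(B, binary_erosion(B))     # surface of mask B
    dist_to_B = distance_transform_edt(~surf_B, sampling=spacing)
    return dist_to_B[surf_A].mean()
```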

3.4.3 Results
In the first paragraph, the results of the filter based segmentation are presented.
In the second paragraph, the results of the learning based segmentation are shown.
Finally, both results are compared against each other.

Filter Based Segmentation Results

The filter based segmentation is evaluated on 100 clinical data sets, which are detailed
in Section 3.4.1. The automatic segmentation of the endocardium results in an overlap
with the gold standard segmentation of 0.80 ± 0.11 considering the Dice coefficient.
The best segmentation result achieves a DC of 0.95 and the worst a DC of 0.47. For
the epicard, a mean DC of 0.81 ± 0.09 is achieved. The best segmentation result of

Endo Epi
Description Mean ± Std Min Max Mean ± Std Min Max
Mean 0.80 ± 0.11 0.47 0.95 0.81 ± 0.09 0.35 0.95
Observer 1 0.79 ± 0.11 0.47 0.95 0.80 ± 0.10 0.35 0.93
Observer 2 0.81 ± 0.11 0.47 0.94 0.82 ± 0.08 0.56 0.95
Inter-Observer 0.95 ± 0.06 0.61 0.99 0.96 ± 0.05 0.68 0.99

Table 3.2: Quantitative segmentation results of the filter based 2-D LGE-MRI
segmentation using the DC. The results are shown separately for the endocardial
(Endo) and epicardial (Epi) contour. Furthermore, the mean DC for both observers,
as well as the mean DC for each observer is presented. In addition, the inter-
observer variability between the two observers is investigated.

the epicardium yields a DC of 0.95 and the worst a DC of 0.35. As two gold standard
annotations are available, also the inter-observer variability is evaluated. The inter-
observer variability between the two observers results in a DC of 0.95 ± 0.06 for the
endocardium and 0.96 ± 0.05 for the epicardium. The best overlap for the inter-
observer variability achieves a DC of 0.99 for the endocard and epicard, respectively.
The worst inter-observer variability DC for the endocard results in 0.61 and in 0.68
for the epicard. In Table 3.2 the results for the Dice coefficient are summarized and
the two different observers are distinguished, as well as the inter-observer variability
is presented. In Figure 3.11 the mean Dice coefficient for every data set is visualized
separately for the endocard and epicard. The endocard is sorted in increasing order
according to the Dice coefficient. The epicard is sorted according to the endocard.
It can be seen that the DC of the endocard and epicard in general correlates well,
which results in a Pearson correlation coefficient of 0.82. The major difference between
the endocard and epicard is in the apex, as there the boundary delineation for the
epicard is sometimes hard to see.
Furthermore, the mean surface distance is evaluated. For the endocardium a mean
distance of 3.71 mm ± 2.57 mm is accomplished, with a minimum distance of 0.44 mm
and a maximum distance of 11.79 mm. For the epicard a mean distance of 4.33 mm
± 2.65 mm is achieved with a minimum distance of 0.5 mm and a maximum distance
of 16.34 mm. As previously, the MSD for the inter-observer variability is evaluated,
which results in a mean distance of 0.89 mm ± 1.15 mm between the two observers
for the endocardium and 0.92 mm ± 1.14 mm for the epicardium. The minimum
distance for the inter-observer variability results in 0.00 mm for the endocardium and
epicardium, respectively. The maximum inter-observer variability distance for the
endocardium is 6.01 mm and for the epicardium 5.55 mm. In Table 3.3 the results
for the mean surface distance are shown. In addition, the two different observers are
distinguished, and the inter-observer variability is summarized.

Learning Based Segmentation Results

For the evaluation of the learning based approach, a nested cross-validation is used,
as described in Section 2.3.3. The nested cross-validation is needed for the random
forest classifier, as its hyper-parameters need to be optimized and these parameters are

Figure 3.11: Individual Dice coefficients for the 2-D filter based segmentation for
each of the 100 data sets for the endocard and epicard, respectively. The DC for the
endocard is sorted in increasing order. The DC for the epicard is sorted according to
the endocard.

Endo [mm] Epi [mm]


Description Mean ± Std Min Max Mean ± Std Min Max
Mean 3.71 ± 2.57 0.44 11.79 4.33 ± 2.65 0.50 16.33
Observer 1 3.89 ± 2.59 0.44 11.79 4.56 ± 2.73 1.00 16.34
Observer 2 3.54 ± 2.56 0.59 11.24 4.10 ± 2.56 0.50 11.68
Inter-Observer 0.89 ± 1.15 0.00 6.01 0.92 ± 1.14 0.00 5.55

Table 3.3: Quantitative segmentation results of the filter based 2-D LGE-MRI
segmentation using the MSD in mm. The results are shown separately for the endo-
cardial (Endo) and epicardial (Epi) contour. Furthermore, the mean MSD for both
observers, as well as the mean MSD for each observer is presented. In addition,
the inter-observer variability between the two observers is investigated.

not learned during the normal training phase of a classifier. For this purpose, a grid-
search is used, as it exhaustively compares all parameter combinations. For the 100
clinical data sets a 5-fold nested cross-validation is used. Hence, 20 data sets are
used for the testing of the classifier and the rest is used to train and validate the
classifier. For the grid-search the following parameter sets are evaluated: number of
trees T ∈ {30, 40, 50, 60, 70, 80}, maximal tree depth D ∈ {5, 10, 15, 20, 25, 30}, mini-
mum number of samples for the split node S ∈ {15, 20, 25, 30, 35, 40}, and minimum
number of samples required at the leaf node F ∈ {2, 3, 4, 5, 6, 7, 8, 9}. The inner
loop of the nested cross-validation is also set to a 5-fold cross-validation. The op-
timal hyper-parameters for the random forest for the endocardium are summarized
in Table 3.4 (a) for each of the five folds. The optimal hyper-parameters for the
epicardium are summarized in Table 3.4 (b).
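To make the procedure concrete, a minimal sketch of such a nested cross-validation with scikit-learn is given below; the feature matrix X and labels y of the boundary candidates are assumed inputs, and the per-patient grouping of the samples is omitted for brevity.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Grid of the hyper-parameters described above.
PARAM_GRID = {
    "n_estimators": [30, 40, 50, 60, 70, 80],        # number of trees T
    "max_depth": [5, 10, 15, 20, 25, 30],            # maximal tree depth D
    "min_samples_split": [15, 20, 25, 30, 35, 40],   # samples per split node S
    "min_samples_leaf": [2, 3, 4, 5, 6, 7, 8, 9],    # samples per leaf node F
}

def nested_cv_scores(X, y):
    """Outer 5-fold loop for an unbiased estimate, inner 5-fold grid-search
    for hyper-parameter optimization (sketch)."""
    inner_cv = GridSearchCV(RandomForestClassifier(n_jobs=-1), PARAM_GRID, cv=5)
    return cross_val_score(inner_cv, X, y, cv=5)
```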
In Table 3.5 and Table 3.6, the results for the learning based approach are pre-
sented using the Dice coefficient and the mean surface distance, respectively. The
computed metrics are presented for the endocardium and epicardium, as well as for
the two different observers.
In Figure 3.12 the mean Dice coefficient for every data set is visualized separately
for the endocard and epicard. The endocard is sorted in increasing order according
to the Dice coefficient. The epicard is sorted according to the endocard. It can be
seen that the DC of the endocard and epicard in general correlates well, which results
in a Pearson correlation coefficient of 0.69.
Furthermore, the feature importance for the endocard and epicard is evaluated,
see Figure 3.13 (a) and (b), respectively. It can be seen that the importance of the
features for the endocard and the epicard corresponds. In general, the gradient fea-
tures are more important compared to the intensity features. This can be attributed
to the non-homogeneous intensity distribution of the LGE-MRI in case of myocardial
scarring.
As the hyper-parameters for each fold are different, as shown in Table 3.4 (a) and
Table 3.4 (b), an additional evaluation is performed to see the impact of the different
parameters on the final segmentation result. Hence, the evaluation is repeated using
a regular 5-fold cross-validation for each of the 5 optimal hyper-parameter sets Hi
for the endocard and epicard, respectively. The results are shown in Table 3.7 for
the endocard and in Table 3.8 for the epicard. Furthermore, the mean Dice coeffi-

Figure 3.12: Individual Dice coefficient for 2-D learning based segmentation for
each of the 100 data sets for the endocard and epicard, respectively. The DC for the
endocard is sorted in increasing order. The DC for the epicard is sorted according to
the endocard.

Description   T    D    S   F
H^1_endo      80   15   3   20
H^2_endo      60   15   4   40
H^3_endo      70   15   8   40
H^4_endo      80   15   4   30
H^5_endo      50   15   6   15
(a) Optimized hyper-parameters for endocard

Description   T    D    S   F
H^1_epi       70   15   5   35
H^2_epi       80   20   5   40
H^3_epi       70   15   9   30
H^4_epi       70   20   9   35
H^5_epi       50   15   3   40
(b) Optimized hyper-parameters for epicard

Table 3.4: Optimized hyper-parameters for 2-D random forest, for each individual
fold. For the grid-search the following parameter sets are optimized: number of trees
T , maximal tree depth D, minimum number of samples for the split node S, and
minimum number of samples required at the leaf node F.

Endo Epi
Description Mean ± Std Min Max Mean ± Std Min Max
Mean 0.82 ± 0.08 0.60 0.96 0.81 ± 0.08 0.36 0.95
Observer 1 0.81 ± 0.08 0.60 0.96 0.80 ± 0.09 0.36 0.94
Observer 2 0.82 ± 0.08 0.61 0.96 0.82 ± 0.07 0.58 0.95

Table 3.5: Quantitative segmentation results of the learning based 2-D LGE-MRI
segmentation using the DC. The results are shown separately for the endocardial
(Endo) and epicardial (Epi) contour. Furthermore, the mean DC for both observers,
as well as the mean DC for each observer is presented.

cient for each set of the hyper-parameters Hi for the endocardium and epicardium is
illustrated in Figure 3.14. It can be seen that changing the hyper-parameters has a
small influence on the final segmentation result, as the classification result is not the
final result of the boundary delineation: for the endocardium, an additional scar
exclusion step is added, and afterwards the boundary is delineated using a minimal
cost path search for both the endocardium and the epicardium.

Comparison between Filter and Learning Based Results

In Figure 3.15, the qualitative results of the segmentation approaches are presented.
Therefore, an average segmentation result is chosen. The first row depicts the raw
data from base to apex. The second row shows the gold standard annotation of
the first clinical expert i. e., observer 1, where the endocardial contour is orange and
the epicardial contour green. The third row illustrates the result of the filter based

Endo [mm] Epi [mm]


Description Mean ± Std Min Max Mean ± Std Min Max
Mean 3.73 ± 2.17 0.36 10.58 4.39 ± 2.19 0.61 10.85
Observer 1 3.96 ± 2.18 0.52 10.58 4.65 ± 2.21 0.93 10.85
Observer 2 3.50 ± 2.15 0.36 10.04 4.13 ± 2.15 0.61 9.17

Table 3.6: Quantitative segmentation results of the learning based 2-D LGE-MRI
segmentation using the MSD in mm. The results are shown separately for the endo-
cardial (Endo) and epicardial (Epi) contour. Furthermore, the mean MSD for both
observers, as well as the mean MSD for each observer is presented.


Figure 3.13: Feature importance of the random forest classifier for the 2-D
LGE-MRI classification. (a) Feature importance for the random forest classifier
trained for the endocardial boundary detection. (b) Feature importance for the
random forest classifier trained for the epicardial boundary detection. It can be seen
that the feature importance correlates for both classifiers.

segmentation algorithm, where the endocardium is red and the epicardium yellow.
The last row visualizes the results of the learning based approach using the same
colors as for the filter based approach. It can be seen that the presented results
match well with the gold standard annotation. The biggest differences occur in the
apex, especially for the epicard.
In Figure 3.16, a comparison of the DC for the filter based vs. the learning based
segmentation of the endocardium and epicardium is given, respectively. It can be
seen that the learning based segmentation outperforms the filter based segmentation
and is more robust with respect to outliers, especially for the endocardium. However,
the final difference in terms of the DC is small, as the post-processing steps
are similar for both methods.
Furthermore, the correlation between the filter based and the learning based seg-
mentation was investigated using a scatter plot. The results of the scatter plot are
depicted in Figure 3.17 (a) for the endocard and in Figure 3.17 (b) for the epicard.
The Pearson correlation coefficient between the two methods results

           all        D1         D2         D3         D4         D5
H^1_endo   0.82±0.08  0.82±0.08  0.82±0.09  0.80±0.10  0.81±0.07  0.80±0.06
H^2_endo   0.82±0.08  0.87±0.07  0.82±0.09  0.79±0.10  0.81±0.07  0.81±0.06
H^3_endo   0.81±0.09  0.85±0.07  0.82±0.09  0.76±0.11  0.81±0.07  0.80±0.06
H^4_endo   0.81±0.09  0.87±0.07  0.82±0.08  0.78±0.11  0.80±0.07  0.80±0.07
H^5_endo   0.81±0.09  0.86±0.08  0.82±0.09  0.78±0.10  0.81±0.07  0.80±0.07

Table 3.7: Influence of hyper-parameters for 2-D endocardial random forest, for
each individual fold, using the DC as measure.

          all          D1           D2           D3           D4           D5
H^1_epi   0.81 ± 0.08  0.84 ± 0.07  0.82 ± 0.07  0.75 ± 0.11  0.82 ± 0.05  0.80 ± 0.07
H^2_epi   0.81 ± 0.08  0.84 ± 0.07  0.82 ± 0.06  0.75 ± 0.11  0.82 ± 0.06  0.80 ± 0.07
H^3_epi   0.81 ± 0.08  0.84 ± 0.07  0.82 ± 0.06  0.74 ± 0.11  0.82 ± 0.05  0.81 ± 0.07
H^4_epi   0.80 ± 0.09  0.84 ± 0.07  0.83 ± 0.06  0.74 ± 0.12  0.82 ± 0.06  0.80 ± 0.08
H^5_epi   0.80 ± 0.09  0.84 ± 0.07  0.82 ± 0.07  0.74 ± 0.12  0.82 ± 0.05  0.80 ± 0.08

Table 3.8: Influence of hyper-parameters for 2-D epicardial random forest, for each
individual fold, using the DC as measure.


Figure 3.14: (a) Comparison of the influence of the different sets of hyper-parameters H^i_endo on the Dice coefficients for the endocardium, where the blue line represents the mean Dice coefficient. (b) Comparison of the influence of the different sets of hyper-parameters H^i_epi on the Dice coefficients for the epicardium, where the blue line represents the mean Dice coefficient.

in 0.75 for the endocard and 0.80 for the epicard. Thus, there is a good correlation between the two
segmentation methods.
The proposed approaches are implemented on an Intel i7-4810MQ with 2.80 GHz
CPU equipped with 16 GB of RAM in Python 2.7. The runtime for both approaches
is fast, at around 10 seconds per data set. However, the evaluation of the
random forest classifier using a nested cross-validation takes about 2 days. Solely
the training without optimization of the hyper-parameters takes about 1 minute,
depending on the forest size T , the tree depth D, and the parallelization of the
random forest.

(a) Native slices without any contours from base to apex

(b) Gold standard annotation from first clinical expert

(c) Filter based segmentation result

(d) Learning based segmentation result

Figure 3.15: Comparison of the segmentation result for one data set, which rep-
resents an average segmentation result. From top to bottom: native slices without
any contours, gold standard annotation from first clinical expert i. e., observer 1, seg-
mentation result of the filter based segmentation, and segmentation result for the
learning based method.

3.5 Discussion and Conclusion

The presented methods solely use 2-D LGE-MRI data for the segmentation of the
left ventricle, in contrast to most work reported in the literature, which makes use of cine
MRI and propagates the contours to the LGE-MRI [Diki 04, Wei 11, Ciof 08, Tao 14].
Alabà et al. [Alba 14] compute the contours directly from LGE-MRI. The presented
results are in the same range of the reported errors in literature. However, a direct
comparison to the method is not possible as the data sets differ.

Figure 3.16: (a) Comparison of the Dice coefficient between the filter based and the
learning based segmentation for the endocardium, where the blue line represents the
mean Dice coefficient. (b) Comparison of the Dice coefficient between the filter based
and the learning based segmentation for the epicardium, where the blue line repre-
sents the mean Dice coefficient. It can be seen that the learning based segmentation
performs better with less outliers, both for the endocardium and epicardium.


Figure 3.17: Scatter plot of the Dice coefficient for the endocard and epicard of
the filter based segmentation compared to the learning based segmentation. (a) The
Pearson correlation coefficient for the endocard results in 0.75. (b) The Pearson
correlation coefficient for the epicard results in 0.80.

The proposed method achieves a DC of around 0.82 for the endocardium and
epicardium considering the learning based approach. The biggest differences occur in
the basal region, as the delineation of the left ventricular outflow tract is not always
clear. The poor performance of the MSD is mainly due to the larger errors in the
apex and the left ventricular outflow tract. However, the results in the mid-cavity
are convincing, which is shown in Figure 3.15. It is expected that incorporating a
model will directly improve the segmentation result, especially in the apex and base.
In the course of this work, it has been shown that rather simple methods can be
used for the boundary detection of the endocardium and epicardium. In combination
with a minimal cost path search, accurate and consistent results can be achieved.
The clear benefits of the method are its independence from registration to cine MRI
and its speed.
CHAPTER 4

Left Ventricle Segmentation in 3-D LGE-MRI
4.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.3 Automatic Left Ventricle Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.4 Semi-Automatic Left Ventricle Segmentation . . . . . . . . . . . . . . . . . . . . . . . 76
4.5 Evaluation and Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.6 Discussion and Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

In the previous chapter, methods for left ventricle segmentation in 2-D LGE-MRI are
presented. In this chapter, the segmentation of the left ventricle in 3-D LGE-MRI
is detailed. This chapter is organized as follows: The motivation is illustrated in
Section 4.1. Related literature is reviewed in Section 4.2. The automatic pipeline
for the LV segmentation using 3-D LGE-MRI is described in Section 4.3. A semi-
automatic approach for the left ventricle segmentation based on Hermite radial ba-
sis function (HRBF) is outlined in Section 4.4. The experimental results for 3-D
LGE-MRI are shown in Section 4.5. The results are discussed in Section 4.6 and a
conclusion is given. Parts of this chapter have previously appeared in five conference
publications [Kurz 15, Kurz 16a, Kurz 17b, Kurz 17c, Mirs 17] and two peer-reviewed
journal publications [Kurz 17f, Kurz 17d].

[Kurz 15] T. Kurzendorfer, A. Brost, C. Forman, M. Schmidt, C. Tillmanns, and J. Hornegger. “Semi-Automatic Segmentation and Scar Quantification of the Left Ventricle in 3-D Late Gadolinium Enhanced MRI”. In: ESMRMB, Ed., 32nd Annual Scientific Meeting of the ESMRMB, pp. 318–319, October 2015.

[Kurz 16a] T. Kurzendorfer, C. Forman, M. Schmidt, C. Tillmanns, A. Maier, and A. Brost. “Fully Automatic Segmentation and Scar Quantification of the Left Ventricle in 3-D Late Gadolinium Enhanced MRI”. In: M. C. Weiss, Ed., Book of Abstracts, October 2016.

[Kurz 17b] T. Kurzendorfer, A. Brost, C. Forman, M. Schmidt, C. Tillmanns, and A. Maier. “Fully Automatic Segmentation of Papillary Muscles in 3-D LGE-MRI”. In: Bildverarbeitung für die Medizin (BVM 2017), March 2017.

[Kurz 17c] T. Kurzendorfer, A. Brost, C. Forman, M. Schmidt, C. Tillmanns, S. Steidl, and A. Maier. “3-D LGE-MRI Segmentation using a Random Forest Classifier and Dynamic Programming”. In: ESMRMB, Ed., 34th Annual Scientific Meeting of the ESMRMB, October 2017.

[Mirs 17] N. Mirshahzadeh, T. Kurzendorfer, P. Fischer, T. Pohl, A. Brost, S. Steidl, and A. Maier. “Radial Basis Function Interpolation for Rapid Interactive Segmentation of 3-D Medical Images”. In: Annual Conference on Medical Image Understanding and Analysis, pp. 651–660, Springer, July 2017.

[Kurz 17f] T. Kurzendorfer, C. Forman, M. Schmidt, C. Tillmanns, A. Maier, and A. Brost. “Fully Automatic Segmentation of the Left Ventricular Anatomy in 3-D LGE-MRI”. Journal of Computerized Medical Imaging and Graphics, Vol. 59, pp. 13–27, July 2017.

[Kurz 17d] T. Kurzendorfer, P. Fischer, N. Mirshahzadeh, T. Pohl, A. Brost, S. Steidl, and A. Maier. “Rapid Interactive and Intuitive Segmentation of 3-D Medical Images Using Radial Basis Function Interpolation”. Journal of Imaging, December 2017.

4.1 Motivation
The viability assessment of the myocardium after a myocardial infarction is very
important for diagnosis, therapy planning, success prediction of the therapy, and pa-
tient prognosis. In particular, the location and quantification of a patient’s scar burden
promises to increase the success rate of different therapies, such as ablation of ven-
tricular tachycardia or CRT [Kari 16, Diki 04, Moun 17]. LGE-MRI is the clinical
gold standard for non-invasive assessment of myocardial viability [Shin 14, Rash 15,
Kell 12].
Recently, LGE-MRI was extended to 3-D to continuously cover the whole heart
with a high resolution of up to (0.7 × 0.7 × 1.5) mm3 in a single acquisition. This
scan promises an accurate quantification of the myocardium to the extent of my-
ocardial infarction [Shin 14], see Figure 4.1 for an example. Besides the technological
improvements regarding image acquisition and the clear clinical demand [Bilc 08],
the challenge arises in providing automatic tools for fast image segmentation and
analysis.
The main issue with processing LGE-MRI data is the non-homogeneous intensity
distribution within the myocardium, resulting from the different accumulations of
contrast agent in the damaged tissue. Moreover, the 3-D whole heart acquisition
does not allow a direct use of any ring or circular shape prior, as commonly used for
2-D SA images. Furthermore, the left ventricle is covered now in more than 80 slices
compared to around 10 slices for the 2-D LGE-MRI acquisitions. To overcome these

(a) 3-D LGE-MRI SA view (b) 3-D LGE-MRI LA view

Figure 4.1: (a) 3-D LGE-MRI reorientated in a pseudo SA view. (b) 3-D LGE-MRI
in a pseudo LA view. With the 3-D LGE-MRI an accurate quantification to the extent
of myocardial scarring is possible. The 3-D LGE-MRI sequence has the following
parameters: (1.30 × 1.30) mm2 pixel spacing, 1.30 mm slice thickness, 120 slices.

issues, two novel approaches for fully automatic left ventricle segmentation using 3-D
LGE-MRI data are proposed. The two different approaches, filter and learning based,
are detailed in the following sections.

4.2 Related Work


There has been little research aimed at using 3-D LGE-MRI images for segmentation,
as the 3-D LGE-MRI is not yet widely used in clinical routine. The first semi-
automatic solution for myocardial segmentation using solely 3-D LGE-MRI is pro-
posed in [Kurz 15]. The segmentation comprises three steps. First, an initial seed-
point within the LV cavity has to be selected by the user. An approximation of the
endocardium is achieved using morphological active contours without edges. The SA
view is estimated using principal component analysis. Next, the endo- and epicardial
contours are refined in polar space using edge information.
The work proposed in this thesis is distinct from the aforementioned in several as-
pects. First, the method is fully automatic, where the initial seed-point and the active
contours segmentation is replaced by a two-step registration approach. Furthermore,
prior knowledge in terms of constraints for shape and inter-slice smoothness is used
for myocardial segmentation. Finally, solely the 3-D whole heart LGE-MRI data set is
used for the segmentation.

4.3 Automatic Left Ventricle Segmentation


The segmentation pipeline for 3-D LGE-MRI comprises five steps: First, a two-stage
registration is performed for an initialization of the LV. Second, the principal axes of
the LV are computed and a pseudo SA view is estimated. Third, the endocardium

LV Initialization (Section 4.3.1) → SA Estimation (Section 4.3.2) → Endocardial Refinement (Section 4.3.3) → Epicardial Refinement (Section 4.3.4) → Surface Extraction (Section 4.3.6)

Figure 4.2: Overview of the segmentation pipeline. First, the LV is initialized


using a two-stage registration based approach. In the next step, the SA view is
estimated with the help of principal component analysis (PCA). In the third step,
the endocardial contour is refined in polar space. In the fourth step, the epicardial
contour is extracted. In the final step, the contours are exported as surface meshes
and can be used for further processing.

is refined in polar space using a cost array and applying a minimal cost path search.
The cost array can either be derived filter based or learning based and always con-
siders potential scar candidates [Kurz 17f, Kurz 17c]. Fourth, the epicardium is com-
puted, starting from the endocardium and either considering the edge information
for the filter based approach or the cost array for the learning based approach. Prior
knowledge, such as shape and inter-slice smoothness constraints, is used during the
refinement step. Finally, the endocardial and epicardial contours are exported as 3-D
surface meshes using the marching cubes algorithm [Lore 87]. Figure 4.2 provides an
overview of the segmentation pipeline.

4.3.1 Registration
For the initialization of the LV within the 3-D whole heart scan, a two-stage registra-
tion to an atlas volume A ∈ R^{N×N×N} is performed [Unbe 15]. The atlas volume A has
a corresponding labeled mask L ∈ R^{N×N×N} showing the left ventricle’s endocardium,
which was segmented manually. The registration can be defined as a one-to-one map-
ping between coordinates in one space and those in another space such that points
in the two spaces that correspond to the same anatomical structure are mapped to
each other [Maur 93]. In the first step, a multi resolution affine registration is per-
formed to roughly align the atlas volume A to the input volume V ∈ R^{N×N×N}, see
Figure 4.3 (b) for an example registration result. The transformation T ∈ R^{N×N} can
be decomposed into scaling s ∈ R, rotation R ∈ R^{N×N}, and translation t ∈ R^D,

T (p) = sRp + t . (4.1)

However, to match the complex deformations and anatomical changes between indi-
vidual patients, more variations need to be allowed. Hence, a non-rigid multi resolu-
tion registration is applied. The checkerboard result of the non-rigid registration is
depicted in Figure 4.3 (c). Nevertheless, an affine transform has to be applied prior
to the non-rigid registration for an initialization, as non-rigid transformations most
often cannot handle large scaling, translation, or rotation.

(a) No registration (b) Affine registration (c) Non-rigid registration

Figure 4.3: (a) Checkerboard image before the registration, showing the thorax in a
sagittal viewing plane. (b) Checkerboard image showing the registration result after
the affine registration. (c) Final result after the non-rigid registration.

The matching of the transformation T is dependent on the similarity measure.


A mutual information based similarity measure is employed where the image inten-
sities are understood as random variables with a certain probability density func-
tion [Thev 08]. A random coordinate sampler is used, which is not limited to voxel
positions. During the registration a linear interpolation is utilized. Regarding the
non-rigid transformation, B-splines are applied for the parametrization to describe
the local deformation of the left ventricle [Ruec 99]. B-splines are locally controlled
which makes them computationally efficient even for a larger number of control points.
The spacing of the control points defines how local or global the non-rigid registration
will be. Therefore, the proposed approach from Rueckert et al. [Ruec 99] is applied,
using a hierarchical multi-resolution approach, where the number of control points
varies according to the scale-space pyramid.
In the last step, the transformation T of the two-stage registration is applied to
the labeled mask L of the atlas, resulting in a registered mask M ∈ R^{N×N×N}. The
two-stage registration is implemented using elastix [Klei 10]¹.
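The thesis implementation relies on elastix and its own parameter settings; purely as an illustration of the two-stage idea, the following sketch performs an analogous affine plus B-spline registration with a mutual information metric using SimpleITK (a swapped-in library, not the configuration used here) and then propagates the atlas label mask. All variable and function names are assumptions.

```python
import SimpleITK as sitk

def register_atlas(volume, atlas, atlas_mask):
    """Two-stage (affine, then B-spline) registration sketch with a mutual
    information metric; returns the atlas mask warped into the volume."""
    # Stage 1: multi-resolution affine registration.
    init = sitk.CenteredTransformInitializer(
        volume, atlas, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
    reg.SetMetricSamplingStrategy(reg.RANDOM)      # random coordinate sampling
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetShrinkFactorsPerLevel([4, 2, 1])        # multi-resolution pyramid
    reg.SetSmoothingSigmasPerLevel([2, 1, 0])
    reg.SetInitialTransform(init, inPlace=False)
    affine = reg.Execute(volume, atlas)

    # Stage 2: non-rigid B-spline refinement on top of the affine result.
    bspline_init = sitk.BSplineTransformInitializer(volume, [8, 8, 8])
    reg2 = sitk.ImageRegistrationMethod()
    reg2.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
    reg2.SetInterpolator(sitk.sitkLinear)
    reg2.SetOptimizerAsLBFGSB(numberOfIterations=100)
    reg2.SetShrinkFactorsPerLevel([4, 2, 1])
    reg2.SetSmoothingSigmasPerLevel([2, 1, 0])
    reg2.SetMovingInitialTransform(affine)
    reg2.SetInitialTransform(bspline_init, inPlace=False)
    bspline = reg2.Execute(volume, atlas)

    # Propagate the atlas label mask with nearest neighbour interpolation.
    mask = sitk.Resample(atlas_mask, volume, affine, sitk.sitkNearestNeighbor)
    mask = sitk.Resample(mask, volume, bspline, sitk.sitkNearestNeighbor)
    return mask
```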

4.3.2 Short Axis Estimation


After the initialization of the LV through the two-stage registration approach, the
SA view of the LV is estimated using principal component analysis (PCA) of the
transformed atlas mask M [Joll 86]. The contour points of the transformed mask
M are extracted using the marching cubes algorithm [Lore 87], resulting in a set of
vertices V = {v_1, ..., v_N}, where v_i ∈ R^3 is a single vertex. Afterwards, the covariance
matrix Σ ∈ R^{3×3} is calculated,

Σ = (1/N) ∑_{i=1}^{N} (v_i − v̄)(v_i − v̄)^T ,   (4.2)

¹ http://elastix.isi.uu.nl/

where v̄ is the mean vector of all vertices,

v̄ = (1/N) ∑_{i=1}^{N} v_i .   (4.3)

Having the covariance matrix Σ, the singular value decomposition (SVD) is applied
to perform principal component analysis,

Σ = U S U^T ,   (4.4)

where U ∈ R^{n×m} is a matrix of orthogonal eigenvectors and S ∈ R^{m×m} is a diagonal
matrix whose elements are the eigenvalues of the covariance matrix Σ. The eigen-
vectors of U are called the principal axes of the set of vertices V and U^T U = E, where
E ∈ R^{n×n} is the identity matrix.

Normally, the SVD is applied to the mean-free data matrix X ∈ R^{n×m},

X = U Λ W^T ,   (4.5)

where U is a column orthogonal matrix, Λ is a diagonal matrix of singular values, and
W is a column orthogonal matrix. To prove Equation (4.4), the covariance matrix
can be decomposed into

Σ = (1/N) X X^T .   (4.6)

If the SVD is applied to Equation (4.6), we get

(1/N) X X^T = (1/N) (U Λ W^T) (U Λ W^T)^T ,   (4.7)
(1/N) X X^T = (1/N) U Λ (W^T W) Λ^T U^T ,   with W^T W = E ,   (4.8)
(1/N) X X^T = (1/N) U Λ Λ^T U^T ,   (4.9)
Σ = U S U^T ,   (4.10)

with

S = (1/N) Λ Λ^T .   (4.11)
The projections of the data onto the principal axes are called principal components,
where the first column corresponds to the largest eigenvalue. Consequently, this vector
captures the largest variation of the transformed mask M, i. e., the SA orientation.
Furthermore, the offset o ∈ R^3 to the rotation center is calculated,

o = ( v̄^T − v̄^T U U^{−1} )^T .   (4.12)

Then, the affine transformation given by U, together with the offset o, is applied to the 3-D
DICOM volume V and to the registered mask M to rotate the volume along the first
principal axis of the LV. Figure 4.4 (b) depicts an example of the registered mask M
and the estimated principal components, with the first one highlighted in blue. The
SA view is commonly used for most segmentation approaches reported in literature,
as most algorithms rely on standard 2-D LGE-MRI SA acquisitions [Diki 04, Wei 11,
Ciof 08, Tao 14, Alba 14]. Having the SA view, prior knowledge such as circularity
and convexity of the contour points can be taken into account.
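A compact sketch of this estimation with NumPy and scikit-image is given below; the registered binary mask M and the variable names are assumptions for illustration, not the thesis implementation.

```python
import numpy as np
from skimage import measure

def short_axis_direction(M):
    """First principal axis of the registered atlas mask M (binary volume),
    which defines the orientation of the pseudo SA view."""
    verts, faces, normals, values = measure.marching_cubes(
        M.astype(np.float32), level=0.5)

    v_mean = verts.mean(axis=0)                 # mean vertex, Eq. (4.3)
    X = verts - v_mean                          # mean-free data matrix
    Sigma = (X.T @ X) / len(verts)              # covariance matrix, Eq. (4.2)

    U, S, _ = np.linalg.svd(Sigma)              # Eq. (4.4): Sigma = U S U^T
    return U[:, 0], v_mean                      # first principal axis and center
```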

(a) Input volume (b) PCA (c) SA view (d) SA view

Figure 4.4: (a) 3-D LGE-MRI of the three orthogonal imaging planes, sagittal, coro-
nal, and axial. (b) Approximated cavity of the LV through the registration to the atlas
data A. The principal components are visualized, with the first principal component
in blue and the second and third principal components in black. (c) Cropped pseudo
short axis (SA) view after PCA. (d) SA view with the contour of the transformed
mask M in red.

4.3.3 Endocardial Contour Extraction


As the rough outline of the LV is known, the volume V is cropped around the region of
interest and the further image processing steps are performed on the cropped volume.
The cropping of the volume V is performed due to computational efficiency.
For the endocardial contour estimation, either a filter based or learning based
approach can be used. Both approaches are detailed in the next two sections.

Filter Based Endocardial Contour Extraction

After a pseudo SA view of the LV is estimated, the algorithm starts with the slice that
corresponds to the center of mass of the transformed mask M . The contour points
v i ∈ R3 of the transformed atlas mask M are extracted in the SA orientation using
the marching cubes algorithm [Lore 87]. In Figure 4.4 (d) the slice that corresponds
to the center of mass is depicted, with the contour of the transformed mask M in
red.
In the next step, the polar image is calculated, where the origin of the polar
image corresponds to the mean of the found contours v̄, i. e., the center of mass.
For the particular slice l ∈ {1, 2, ..., N }, Figure 4.5 (a) is the analogue image of
Figure 4.4 (d) in polar space. The maximum radius r ∈ R_0^+ is selected to cover all
potential myocardium boundaries and the angle ρ ∈ [0°, 360°]. The contour points
v_i from the transformed mask are converted to polar coordinates and used to refine
the boundaries of the endocardium. In the polar image the edge information is
extracted by applying the Canny edge detection [Cann 86]. To extract minor edges,
a Gaussian smoothing with a standard deviation of σ = 2.5 is applied, as depicted
in Figure 4.5 (b). The lower bound of the hysteresis thresholding was set to 10 % of
the maximum and the upper bound to 20 %.
In addition, the scar probability is evaluated. Therefore, the mean µbp ∈ R and
the standard deviation σbp ∈ R of the blood pool intensity are estimated. In the
next step, a scar threshold θst ∈ R is defined as θst = µbp + σbp . All pixels that are
greater than θst are quantified as potential scar candidates. Knowing the position of

Figure 4.5: Filter based endocardial contour refinement in polar space. (a) Polar
image and the contour points in polar coordinates (red). (b) Edge information from
Canny edge detector with a Gaussian smoothing of σ = 2.5. (c) Scar map. (d) Cost
array for the MCP search, combining the scar map and the edge image. (e) Cost
array with the found contour from the MCP search. (f) Final result of the MCP in
the polar image.

(a) MCP results (b) Convex hull

Figure 4.6: Final steps of endocardial refinement. (a) Contour points after conver-
sion from polar to Cartesian coordinates. (b) Convex hull of the endocardial outline
in (a).

the scar candidates, all pixels with increasing radius are labeled with 1, resulting in
a scar map, as visualized in Figure 4.5 (c). The edge image and the scar map are
then combined, resulting in a cost array K ∈ Rr×ρ , see Figure 4.5 (d). The cost
array K is used to initialize a minimal cost path (MCP) search [Dijk 59]. Six equally
distributed contour points are selected from v li to initialize the MCP search, where
the index l corresponds to the current slice. The initialization points are visualized in
Figure 4.5 (d). The number of initialization points for the MCP is chosen heuristically.
The MCP finds the distance weighted minimal cost path through the cost array K.
The cost path is calculated as the sum of the costs at each point of the path, where
edge pixels have 0 costs and non-edge pixels have costs of 1. The result of the MCP
search is visualized in Figure 4.5 (e) and (f).
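For illustration only, the following sketch reproduces the main ingredients of this cost array for a single pseudo SA slice using scikit-image; the polar radius and the exact way edge and scar information are weighted are assumptions, not the thesis code.

```python
import numpy as np
from skimage import feature, transform

def endocardial_cost_array(sa_slice, center, mu_bp, sigma_bp, radius=54):
    """Polar cost array combining Canny edges and a scar map (sketch)."""
    # polar image with rows = radius and columns = angle
    polar = transform.warp_polar(sa_slice, center=center, radius=radius).T

    edges = feature.canny(polar, sigma=2.5)       # edge map in polar space

    theta_st = mu_bp + sigma_bp                   # scar threshold
    scar = polar > theta_st
    scar_map = np.cumsum(scar, axis=0) > 0        # label 1 with increasing radius

    # edge pixels get low cost, non-edge pixels high cost; scar regions are
    # additionally penalized (one possible combination rule)
    return np.where(edges, 0.0, 1.0) + scar_map.astype(float)
```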
After the refinement, the contour points v li are converted back to Cartesian coor-
dinates, as illustrated in Figure 4.6 (a). As the result is frayed and papillary muscles
close to the endocardial border may be included, the convex hull is calculated for the
estimated points. This assumption is based on the fact that the left ventricle’s cavity
is convex in the short axis orientation. The final endocardial contour V final is derived
from the smallest convex polygon of the contour points v li , as shown in Figure 4.6 (b).
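A minimal sketch of this convex hull step, assuming the refined contour points are given as an (N, 2) array of (row, column) coordinates and `slice_shape` is the shape of the SA slice:

```python
import numpy as np
from scipy.spatial import ConvexHull
from skimage.draw import polygon2mask

def convex_endocardium(points, slice_shape):
    """Smallest convex polygon of the refined contour points, rasterized
    into a binary mask of the SA slice (sketch)."""
    hull = ConvexHull(points)
    convex_contour = points[hull.vertices]        # hull vertices in order
    return polygon2mask(slice_shape, convex_contour), convex_contour
```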
After the first contour V^l_final = {v^l_1, ..., v^l_N} is refined, the result is used for further
improvement of the silhouettes in the lower and upper pseudo SA slices, till the base
and the apex are reached. The refinement in basal and apical direction is performed
in polar space for the same reasons as described above. The distance between the
previously found contour points V^{l−1}_final and the silhouette from the transformed atlas
mask V^l_final of the next slice is calculated. If the distance between two points is larger
than a margin θ_dist ∈ R, the contour point of the transformed mask is not considered
for the further improvement,

v^l_i = { v^{l−1}_i   if ||v^{l−1}_i − v^l_i|| < θ_dist ,
        { {}          otherwise .                                   (4.13)

In the next step, the set of contour points from the previous slice V^{l−1}_final and the
remaining set of contour points from the current slice V^l are merged, V^{l′} = V^{l−1}_final ∪
V^l, and afterwards sub-sampled with a sampling rate of ξ ∈ Z. The sub-sampling
is performed in order to obtain a smoother contour and to remove single outliers.
Afterwards, additional refinement steps for the endocardial contour V^{l′} approximation
are performed. The polar image is extended to the left (ρ < 0°) and right (ρ >
360°) with γ ∈ [30°, 60°, 90°], where γ defines the extension of the polar image.
From these enlarged polar images, the edge information is extracted using the Canny
edge detector with a Gaussian smoothing of σ = 3 to extract more dominant edges.
The Gaussian smoothing parameter σ was chosen heuristically. In addition, the
sub-sampled silhouette points V^{l′} are extended to the left and the right with the
corresponding γ. The enlargement of the polar image is done to avoid errors at the
border of the edge image, in particular if papillary muscles or myocardial infarctions
are present.
Furthermore, the scar map is calculated by applying θ_st. The cost array is derived
from the scar map combined with the edge image. As before, six equally distributed
points are selected from the extended contours V^{l′} for the initialization of the MCP
search, for all three cases with respect to γ. First, the shortest path for each of the
three cases is chosen. Second, the length of the path is normalized with respect to the
corresponding enlargement γ. The final contour V^l_final is picked from the results by
choosing the solution with the minimal costs. This outline is then transferred back
to Cartesian coordinates. The convex hull is computed for the refined points V^l_final.
A round shape of the LV in SA view is assumed [McLe 15, Duan 99]. For the
inter-slice smoothness criteria, the shape of the previous contour to the newly found
contour is compared. If the difference of the two contours is bigger than a certain
margin θdiff ∈ R, the previous contour is considered. Depending on the apical or basal
direction, erosion is applied to the contour, as the convex hull slightly enlarges the
contour and in apical direction the radius is decreasing. For further improvements
different constraints are considered depending on the apical or basal direction.
For the apical direction the center cpre of the final contour and the radius rpre
of the semi-axis from the previous slice are examined. As a high resolution is given
through the 3-D acquisition technique, the center of the new curve cnew has to be
within a certain distance θc ∈ R to the center of the previous curve cpre . To be
precise, a shift between two pseudo SA slices is not possible, as they are computed
from the same 3-D volume with isotropic voxel size. Furthermore, the radius rnew
should always be equal or smaller to the radius rpre from the previous refined slice
for the apical direction. If one of these constraints is not met, the previous contour
is taken and erosion is applied to achieve a slightly smaller radius,
V^l_final = { V^l_final           if ||c_pre − c_new|| < θ_c ∧ r_new ≤ r_pre ,
            { E(V^{l−1}_final)    otherwise ,                                   (4.14)

where E denotes binary erosion. The apical iteration ends if the radius rnew of the
semi-axis falls below a threshold θr ∈ R. See Figure 4.7 (b) for an illustration of the
apical refinement direction.
Similar conditions are applied for the basal direction. Here, the center cnew of
the new curve, the center cpre of the previous slice as well as the previous contour

(a) Basal direction

(b) Apical direction

Figure 4.7: (a) Refinement for the basal direction, considering the shape and center
of the previous contours to guarantee inter-slice smoothness. (b) Refinement for
the apical direction, considering the radius, shape, and center of the previous contour
to guarantee inter-slice smoothness.

shape and size are considered. If the distance between the centers is greater than a
threshold θc , the endocardial contour is approximated as follows,
V^l_final = { V^l                if ||c_pre − c_new|| < θ_c ∧ r_new ≤ r_pre ,
            { V^{l−1}_final      otherwise .                                    (4.15)

The basal iteration either stops if the maximum point from the transformed mask
is reached or the difference of the areas of the previous contour V^{l−1}_final and the newly
found contour V^l_final is greater than a threshold θ_bmax ∈ R. In this case, the outflow
tract is reached. See Figure 4.7 (a) for an illustration of the basal refinement direction.
All the thresholds for the contour refinement are summarized in Table 4.2.

Learning Based Endocardial Contour Extraction

As for the filter based approach, the learning based method starts with the mid-slice
of the LV, which corresponds to the center of mass of the transformed mask M .
Figure 4.8 (a) depicts the estimated SA view with the contour of the transformed
mask M . Potential boundary points are extracted by circular ray casting, using the
set of contour points V l as initialization, see Figure 4.8 (b).
The potential boundary candidates are classified using a trained RF classifier. The
performance of any classifier is limited to the discriminative power of the features used

Description Symbol
Scar threshold θst
Distance between contour points θdist
Sub-sampling rate ξ
Extension of polar image γ
Difference between areas θdiff
Difference between centers θc
Apex reached epicard θr
Base reached θbmax
Enlargement for epicard θepi

Table 4.2: Parameters for the left ventricle segmentation.

(a) SA view with transformed mask (b) Boundary candidates


1 .0

0 .8

0 .6

0 .4

0 .2

0 .0
(c) Boundary costs cendo (d) Boundary costs cendo

Figure 4.8: (a) Estimated SA view using PCA with the contour of the transformed
mask M in red. (b) Potential boundary candidates extracted by circular ray casting
and using the contour of the transformed mask as an initialization. (c) Boundary
costs cendo overlaid on the SA view. (d) Boundary costs cendo as cost array in Cartesian
coordinates.

for training. The RF is trained using 16 steerable features, based on local intensity
and gradient [Zhen 08]. Steerable features are used because they are computationally
efficient and can capture orientation and scale. For a given boundary candidate
p = (x, y)^T with the intensity I and the gradient ∇I(p) = g = (g_x, g_y)^T, the following
features are extracted: I, √I, ∛I, I², I³, log I, ||g||, √||g||, ∛||g||, ||g||², ||g||³,
log ||g||, g_x, g_y, √(g_x² + g_y²), ∇²I(p). Therefore, the polar image is calculated and
the features are extracted in polar space.
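The feature vector of a single boundary candidate could be assembled as in the following sketch; the candidate intensity, gradient components, and Laplacian are assumed to be sampled from the polar image beforehand, and the function name is illustrative.

```python
import numpy as np

def candidate_features(I, gx, gy, laplacian):
    """16 intensity and gradient based features of one boundary candidate
    (sketch); I is the intensity, (gx, gy) the gradient, and `laplacian`
    the Laplacian of the image at the candidate position."""
    g = np.hypot(gx, gy)                          # gradient magnitude ||g||
    eps = 1e-8                                    # avoid log(0)
    return np.array([
        I, np.sqrt(I), np.cbrt(I), I**2, I**3, np.log(I + eps),
        g, np.sqrt(g), np.cbrt(g), g**2, g**3, np.log(g + eps),
        gx, gy, np.sqrt(gx**2 + gy**2), laplacian,
    ])
```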
The training of the RF is based on gold standard annotations from which positive
and negative samples are extracted. For the training, pathologic as well as healthy
subjects were used to generate a broad range of training data. The RF classifier
assigns each boundary candidate a probability p ∈ [0, 1]. The classification result
is interpreted as costs cendo , where cendo = 1 − p, see Figure 4.8 (c) and (d) as an
example. However, the boundary costs are not sufficient to accurately detect the
endocardial contour. Therefore, an additional scar exclusion step is added. As for
the filter based approach, a scar threshold θst is defined based on the mean intensity
µbp and standard deviation σbp of the blood pool. Having the scar threshold θst , a
scar map is derived, as illustrated in Figure 4.9 (c).
The scar map is combined with the boundary costs, resulting in a cost array, see
Figure 4.9 (e). To find the final endocardial contour dynamic programming is used
to compute the optimal boundary from the left to the right hand side of the polar
image. Hence, a MCP search is used. The cost path is calculated as the sum of the
costs for each move and weighted by the length of the path. In Figure 4.9 (f) the
result of the MCP search is depicted.
After the contour is refined in the mid slice, this information is used for the suc-
ceeding slices in apical and basal direction. Here, the same steps are applied, the
possible boundary point extraction using circular ray casting, the boundary proba-
bility estimation using the trained RF classifier, the scar map generation, and the
MCP search. Furthermore, similar inter-slice smoothness constraints are applied as
for the filter based approach.

4.3.4 Epicardial Contour Extraction


For the epicardial contour extraction, the information of the previously estimated en-
docardial contour is used. As for the endocardium, two possible boundary estimation
methods are available, filter based or learning based, which are detailed in the next
two subsections.

Filter Based Epicardial Contour Extraction

The contour extraction of the epicardium is performed in polar space for the same
reason as mentioned for the endocardial refinement, see Section 4.3.3. The edge infor-
mation is extracted using the Canny edge detector [Cann 86]. The standard deviation
σ of the Gaussian filter kernel is set to σ = 2, where σ is chosen heuristically. As the
epicardium has to be greater then the endocardium, the radius of the endocardial
contour is enlarged by θepi ∈ R, as depicted in Figure 4.10 (a).
In addition, all edges that fall within the endocardial contour are erased from the
edge image, as visualized in Figure 4.10 (b). Having the enlarged endocardial contour,

Figure 4.9: (a) Polar image. (b) Scar probability. (c) Scar map derived from the
scar threshold θst . (d) Boundary costs cendo in polar space. (e) Cost array derived
from the combination of the scar map with the boundary cost array. (f) Final result
of the endocardial boundary estimation using a MCP search.

Figure 4.10: (a) The final endocardial boundary is visualized in red and the enlarged
epicardial contour in yellow. (b) Canny edge image showing only the edges that do
not fall within the endocardial contour.

the closest edge with increased radius is searched for in the epicardial edge image. A
polynomial is fitted through the found points to remove outliers. In the next step,
the extracted contour points are transformed back to Cartesian coordinates. As the
result might be frayed, the convex hull is estimated to smooth the contour. Due to
the smoothing operation, the epicardial contour is slightly enlarged, hence, erosion is
applied to shrink the outline. The results for the endocardial and epicardial contours
are shown in Figure 4.11.
For the basal direction, the iterations of the epicardial contour refinement stop
with the refinement of the endocardial contour, i. e., when the left ventricular outflow
tract (LVOT) is reached. For the apical direction, there is no endocardial contour
anymore, as the endocardial contour refinement stops when the semi-axis falls below
a certain threshold θr . Hence, the previously estimated epicardial contour is used
for the refinement with decreasing radius. The epicardial contour refinement for the
apex stops, when the semi-axis of the epicardium falls below a certain threshold θrepi .

Learning Based Epicardial Contour Extraction

As for the learning based endocardial contour detection, which is described in Sec-
tion 4.3.3, a RF classifier is used to determine possible boundary candidates. The
RF is trained with the same 16 steerable features as for the endocardial boundary
estimation. For the training, ground truth annotations of the epicardial boundary
are used, where positive as well as negative samples are extracted. To extract pos-
sible epicardial boundary candidates, the previously found endocardial boundary is
used and the radius is enlarged by θepi . From the enlarged contour, the boundary
candidates are extracted using circular ray casting, which is performed in polar space.
The boundary candidates are classified using the trained RF, resulting in a boundary
probability p ∈ [0, 1]. The result of the epicardial boundary classification is used as a
cost array for the MCP search. The MCP search finds the distance weighted minimal

Figure 4.11: Final result of the endocardial (red) and epicardial (yellow) contour
in Cartesian coordinates.

path through the cost array. The result of the MCP is transferred back to Cartesian
coordinates and the smallest convex polygon is estimated to achieve a smooth looking
contour.
The refinement in the basal and apical directions ends under the same conditions as for the filter based approach, which is described in the previous Section 4.3.4.
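To make the dynamic programming step concrete, the following is a minimal sketch of a distance-weighted minimal cost path search through a polar cost array (Python with NumPy). It is only an illustration under the assumption that the cost array is organized as radius × angle with low values marking likely boundary positions; it is not the exact implementation used in this thesis.

    import numpy as np

    def minimal_cost_path(cost, max_step=1):
        # cost: (n_radii, n_angles) array, low values mark likely boundary positions
        n_r, n_a = cost.shape
        acc = np.full((n_r, n_a), np.inf)        # accumulated costs
        back = np.zeros((n_r, n_a), dtype=int)   # back-pointers for the backtracking
        acc[:, 0] = cost[:, 0]
        for a in range(1, n_a):
            for r in range(n_r):
                lo, hi = max(0, r - max_step), min(n_r, r + max_step + 1)
                # distance weighting: penalize radial jumps between neighboring columns
                prev = acc[lo:hi, a - 1] + np.abs(np.arange(lo, hi) - r)
                j = int(np.argmin(prev))
                acc[r, a] = cost[r, a] + prev[j]
                back[r, a] = lo + j
        # backtrack from the cheapest end point to obtain one radius per angle
        path = np.zeros(n_a, dtype=int)
        path[-1] = int(np.argmin(acc[:, -1]))
        for a in range(n_a - 1, 0, -1):
            path[a - 1] = back[path[a], a]
        return path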

4.3.5 Papillary Muscle Segmentation


The papillary muscles are segmented after the whole endocardium contour is re-
fined [Kurz 17b]. The papillary muscles are located within the left ventricle, where
they are attached to the cusps of the mitral valve via chordae tendineae, as illus-
trated in Figure 4.12. The papillary muscles also contract during systole to prevent
the blood from flowing back into the left atrium [Robe 72].
For the segmentation of the papillary muscles, in the first step morphological erosion is applied to the endocardial contour in 3-D, using a sphere-shaped structuring element with a radius of r = 2. Afterwards, a mask array is generated containing the eroded endocardial contour, i. e., the blood pool. In the next step, Otsu's thresholding is applied, resulting in a threshold θO ∈ R [Otsu 79]. All pixels with intensities below the estimated threshold θO are defined as possible papillary muscle candidates. The connectivity of the candidates is evaluated. If fewer than 7 voxels are connected, the voxels are declared as noise and are not considered as potential papillary muscle candidates [Kurz 17b]. The final result of the papillary muscles segmentation
for one slice is depicted in Figure 4.13 (a), where the endocardial contour is red and
the papillary muscles are cyan. Figure 4.13 (b) visualizes the segmentation result in
3-D.
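A minimal sketch of this candidate extraction (Python, assuming SciPy and scikit-image; the function and variable names are illustrative only):

    import numpy as np
    from scipy import ndimage
    from skimage.filters import threshold_otsu
    from skimage.morphology import binary_erosion, ball

    def papillary_muscle_candidates(volume, endo_mask, min_voxels=7):
        pool = binary_erosion(endo_mask, ball(2))        # shrink the blood pool by r = 2
        theta_o = threshold_otsu(volume[pool])           # Otsu threshold inside the pool
        candidates = pool & (volume < theta_o)           # dark voxels are muscle candidates
        labels, n = ndimage.label(candidates)
        sizes = ndimage.sum(candidates, labels, range(1, n + 1))
        keep = 1 + np.flatnonzero(sizes >= min_voxels)   # discard components with < 7 voxels
        return np.isin(labels, keep)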

4.3.6 Mesh Generation


In the final step, the segmented endocardial, epicardial, and papillary muscles are
extracted as 3-D surface meshes.

Figure 4.12: Image of the human heart, where the right and left ventricles are opened. The papillary muscles are connected to the mitral valve via the chordae tendineae. The image "Heart normal short axis left ventricle view with papillary muscles" by Patrick J. Lynch is licensed under CC BY 2.5.

(a) 2-D papillary muscles (b) 3-D papillary muscles (c) 3-D surface mesh

Figure 4.13: (a) Final result showing the endocardial contour in red and the papillary muscles in cyan. (b) Papillary muscles visualized as a 3-D surface mesh in cyan and the endocardial surface in red. (c) Final segmentation result of the endocardial (red) and epicardial (yellow) surface, with a myocardial infarction visualized in purple.

For the mesh generation, the marching cubes algorithm is used to obtain the iso-surfaces of the contours [Lore 87]. The output of the
algorithm is a triangular mesh consisting of a set of vertices and connected faces. The
extracted vertices and faces, as well as the face normals, are then saved in standard surface mesh file formats. Figure 4.13 (c) illustrates an example of a 3-D surface mesh,
where the endocardial surface is red, the epicardial surface yellow, and a myocardial
infarction is visualized in purple.
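For illustration, such an iso-surface extraction could, for example, be realized with the marching cubes implementation of scikit-image; the following sketch assumes the segmentation is available as a binary volume and writes a simple OBJ-style mesh (names and parameters are illustrative, not the exact implementation of this work):

    import numpy as np
    from skimage import measure

    def mask_to_mesh(mask, spacing=(1.3, 1.3, 1.3)):
        # extract the iso-surface of a binary mask as a triangular mesh
        verts, faces, normals, _ = measure.marching_cubes(
            mask.astype(np.float32), level=0.5, spacing=spacing)
        return verts, faces, normals

    def save_obj(path, verts, faces):
        # minimal OBJ export: vertices and triangular faces (1-based indices)
        with open(path, "w") as f:
            for v in verts:
                f.write(f"v {v[0]} {v[1]} {v[2]}\n")
            for tri in faces + 1:
                f.write(f"f {tri[0]} {tri[1]} {tri[2]}\n")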

3-D Volume → Smart Brush → Contour Extraction → Control Points → HRBF Interpolation

Figure 4.14: Semi-automatic segmentation pipeline for the left ventricle segmenta-
tion. The first image from the left shows the 3-D volume as input. In the next step,
single slices are segmented using the smart brush functionality. Third, the control
points of the contours are extracted. Fourth, the 2-D and 3-D normal vectors are
computed for the HRBF interpolation. In the final image, the interpolated surface is
visualized.

4.4 Semi-Automatic Left Ventricle Segmentation


In this section, a semi-automatic approach for the left ventricle segmentation is pro-
posed [Kurz 17d, Mirs 17]. The challenge is to design a fast, generic, and easy-to-use semi-automatic segmentation tool that allows the generation of clinical segmentations in both 2-D and 3-D medical images and is able to handle intensity inhomogeneities.
For the 3-D surface reconstruction there is a vast literature which is mainly
grouped into direct meshing and implicit approaches. Nowadays, methods based
on implicit surface reconstruction are gaining more and more attention. For this
approach, first a signed scalar field f (·) is obtained. The value of this scalar field
is zero at all control points p, f (p) = 0, and negative/positive for inside/outside
of the surface [Mors 05]. The desired surface is reconstructed by extracting the
zero-level set of this field. The radial basis function (RBF) interpolation guarantees a smooth field from non-uniformly distributed data points [Duch 77, Carr 01, Turk 02, Ijir 13].
In previous related work [Ijir 13], this field f (·) is computed in a bilateral domain
where the spatial and intensity range domain are joined. The interpolation is done
using RBFs with Hermite data which incorporates normals and gradients of the scalar
field directly, ∇f (p) = n.
In this section, a new formulation of surface reconstruction is proposed, which is
independent of the 3-D intensity gradient information and makes use of both 2-D and
3-D normal vectors obtained from individually segmented 2-D slices. The approach
combines advantages of semi-automatic segmentation methods, as well as the user’s
high-level anatomical knowledge to generate segmentations quickly and accurately
with fewer interactions. Using our method, the user first segments a few slices with
the smart brush, then the scattered data points are extracted and the 2-D normal
information of the annotated slices is computed. Applying our new formulation of
HRBF which incorporates normals and gradients of the scalar field directly, the de-
sired surface is reconstructed [Mirs 17, Kurz 17d]. In Figure 4.14 the segmentation
pipeline is illustrated.

4.4.1 Smart Brush


The 2-D segmentation functionality classifies pixels into foreground and background
based on the intensity information. Prior to the segmentation, pre-processing of the MRI data set is required, as the intensity values are in arbitrary units and need to be normalized. For this aim, an intensity interval I = [I_min, I_max] is defined based on the minimum and maximum intensity of the whole DICOM data set V ∈ R^{N×N×N}. Then, for each volume, the values outside the interval I are clipped to the interval borders. To obtain a gray-scale image, the minimum value of the clipped image is set to zero and the maximum to 255. To reduce the remaining noise, an edge-preserving denoising filter, the bilateral filter, is applied to the normalized image. In this method, spatial closeness is measured by a Gaussian of the Euclidean distance between the pixel locations and radiometric similarity by a Gaussian of the intensity difference [Toma 98].
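A minimal sketch of this pre-processing for one slice (Python, assuming NumPy and OpenCV; the clipping interval and the bilateral filter parameters are illustrative assumptions, not the values used in this work):

    import numpy as np
    import cv2

    def preprocess_slice(slice_f, i_min, i_max):
        # clip to the intensity interval, rescale to [0, 255], and denoise edge-preservingly
        clipped = np.clip(slice_f, i_min, i_max)
        norm = (clipped - clipped.min()) / max(clipped.max() - clipped.min(), 1e-6) * 255.0
        return cv2.bilateralFilter(norm.astype(np.float32), d=5, sigmaColor=25, sigmaSpace=5)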
The smart brush segmentation is user driven and the interactions start with each
displacement of the mouse cursor. The segmentation scheme comprises the following
steps: (i) manually initializing a small part of the region of interest (ROI); (ii) im-
proving the segmentation using the smart brush functionality; (iii) post-processing
the segmentation result and extending the previously segmented region.
First, a small area of the region of interest has to be segmented manually by the
user. The mean intensity of this area is required for the initialization of the smart
brush functionality. When the user selects a new ROI with the brush, an adaptive threshold is computed using the mean intensity of the ROI, I_mean = (1/N) Σ_{i=1}^{N} I_i, where N is the number of pixels in the selected area F_0. Afterwards, the user progresses with
the smart brush and a new area is selected. For the new selected area the intensity
distribution is investigated and a threshold for pixel-wise classification is derived from the mean values. Pixels whose intensities are closer to the mean intensity of the initial area are classified as foreground. Finally, to reduce false positives, the
morphological connectivity of each pixel in the ROI to the initial ROI is checked using
a 4-connected structuring element. This way, pixels that have the same intensity value
but are not connected to the previous segmentation are removed. In Figure 4.15 (a)
the initialization of the smart brush is shown in red and the brush is moved over a new
area, illustrated by the yellow circle. In Figure 4.15 (b), the correctly segmented area is visualized, using the adaptive thresholding together with the propagation checking. If no propagation checking were applied in this example, the right ventricle would also be segmented, as it has the same intensity values. In the final step, to obtain a smooth contour, the holes are filled using binary hole filling.
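A minimal sketch of one brush update may help to illustrate the scheme (Python with NumPy/SciPy); the nearest-mean decision and the background estimate from the newly selected area are simplifying assumptions, and all names are illustrative:

    import numpy as np
    from scipy import ndimage

    def brush_update(image, seg, brush_mask, fg_mean):
        # classify the pixels under the brush and keep only those connected to the old ROI
        vals = image[brush_mask]
        bg_mean = vals.mean()                                   # crude background estimate
        fg = np.abs(vals - fg_mean) < np.abs(vals - bg_mean)    # nearest-mean decision
        new = np.zeros_like(seg)
        new[brush_mask] = fg
        candidate = seg | new
        four_conn = ndimage.generate_binary_structure(2, 1)     # 4-connected structuring element
        labels, _ = ndimage.label(candidate, structure=four_conn)
        keep = np.unique(labels[seg & (labels > 0)])             # components touching the old ROI
        connected = np.isin(labels, keep[keep > 0])
        return ndimage.binary_fill_holes(connected)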

4.4.2 Control Point Extraction


We assume that multiple slices are segmented in axial, sagittal, and coronal orienta-
tion using the smart brush functionality. First, the contours are extracted from the
segmentations of the individual slices. Given a structuring element, the morphological operation binary erosion is used to extract the boundary. The structuring element is a square with a connectivity equal to one. Then, by subtracting the eroded mask from the original mask and applying a threshold, a one-pixel-thick edge is extracted.

(a) Smart brush initialization (b) Smart brush result

Figure 4.15: (a) Initialization of the smart brush in red and the propagated smart brush illustrated as a yellow circle. (b) Segmented area under the brush using adaptive thresholding and propagation checking.

In the next step, the control points (CPs) are computed from the contours adaptively according to the shape of the object. The contour is sampled equidistantly with a predefined sampling size ξ ∈ Z. The number of control points n_e ∈ N is based on the contour length k ∈ R_0^+ and computed as

n_e = ⌊ k / ξ ⌋ .    (4.16)
Furthermore, n_c ∈ N convexity defect points, where the contour has the maximum distance to its convex hull, are added, see the blue points in Figure 4.16 (a). To
increase the accuracy of the 3-D interpolation for complex objects, the number of
CPs is increased at rough areas. Therefore, the local curvature κ ∈ R is checked for
all CPs and additional points are added in case of roughness. The curvature κ is
dependent on the derivatives of the curve k ∈ R² with k(t) = (x(t), y(t))^T,

κ = |x′ y″ − y′ x″| / (x′² + y′²)^{3/2} ,    (4.17)

where the primes refer to derivatives d/dt with respect to the parameter t. To compare curvature values, a reference quantity r ∈ R (global roughness) is defined, which is the ratio of the curve area A_c to the area of its convex hull A_h [Abbe 17],

r = A_c / A_h .    (4.18)

New CPs are added at a certain distance to the investigated CP if the criterion

κ / r > θ_κ    (4.19)
is fulfilled, where the threshold θκ ∈ R is obtained heuristically. The number of
additional CPs due to curvature is denoted as nκ . The total number of CPs is
N_p = n_e + n_c + n_κ. Figure 4.16 (b) depicts the total number of extracted control points, with the convexity defect points in blue and the additional rough surface points in green.

(a) Convexity defect points (b) Rough surface points

Figure 4.16: Control point extraction. (a) A rough surface with initial equidistant points in red and convexity defect points in blue. (b) A rough surface with increased number of points in green.
The subsequent interpolation requires Hermite data, i. e., function values and
their derivatives. In this case, we need the normal vector for each control point. The
first derivative of the contour approximates the tangent vector of the curve. Having
the 2-D tangent vector d = (dx , dy )T , the orthogonal normal vector is obtained by
n = (−dy , dx )T .
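As a small illustration, the curvature of Equation (4.17) and the 2-D normals can be approximated from a sampled contour by finite differences (Python/NumPy sketch; the boundary handling of a closed contour is simplified here):

    import numpy as np

    def curvature_and_normals(x, y):
        # finite-difference approximation of Equation (4.17) and the 2-D unit normals
        dx, dy = np.gradient(x), np.gradient(y)        # x', y'
        ddx, ddy = np.gradient(dx), np.gradient(dy)    # x'', y''
        kappa = np.abs(dx * ddy - dy * ddx) / ((dx**2 + dy**2) ** 1.5 + 1e-12)
        normals = np.stack([-dy, dx], axis=1)          # rotate the tangent by 90 degrees
        normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
        return kappa, normals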

4.4.3 Control Point Merging


The CPs are used to interpolate the 3-D surface belonging to one object of interest.
To gain more accuracy, it is always better to have cross sections from different orien-
tations of the desired object. Considering a 3-D object, the intersection of any two
non-parallel image planes will result in a line with at least two intersection points.
Figure 4.17 shows the location of these two points in yellow. This implies that joint points exist whenever the selected planes intersect.
However, the extracted contour needs to be identical for both planes at the point
of intersection to result in the same point in 3-D space. In practice, the contour of
the segmented mask may not be located precisely at the actual object border, as the
annotation is done manually by the user. Consequently, there is no intersection point
in 3-D. Hence, instead of having one point at each junction, there will be two points at
each intersection, corresponding to the annotated planes. These points can then lead
to incorrect 3-D interpolations, as they have conflicting gradient and zero-level set
information. As a result, unwanted holes may appear in the final interpolation result.
To prevent this artifact, all possible intersections of cross sections must be detected.
Therefore, points within a certain radius r_p ∈ R_0^+ are merged into a single 3-D point. Apart
from eliminating undesired artifacts, merging CPs has another advantage, because
the 3-D normal vector can be computed and this will further improve the accuracy

Figure 4.17: Two orthogonal planes are segmented in red and the resulting inter-
section points are depicted in yellow.

of the 3-D interpolation. The calculation of the intersection points and their normal
vectors is explained in the following section.

Contour Intersection

Since the volumetric 3-D image is segmented in arbitrary 2-D slices, intersections
between segmented slices from different orientations occur. Suppose that closed con-
tours are extracted from the intersecting slices, the intersection should result in two
3-D points. The computation of these intersection points is performed iteratively.
The iterations run over all unordered pairs of the N segmented slices and comprise two steps.
First, the existence of intersections is checked. In case of an intersection, the cor-
responding points are extracted. In some complex objects, i. e., non-convex shapes,
more than two intersection points or a set of neighboring points are extracted. Fig-
ure 4.18 shows two possible cases for extracted intersection points. In case of multiple
intersection candidates, classification is applied in order to distinguish between the
different groups of points. In the next step, the average of each group is taken as final
3-D intersection point. The classification of the point groups is described in more
detail in the following section.

Classification and Merging

Having found the intersection point candidates at each junction, the next step is to
merge close intersection points in order to decrease the redundancy. To obtain the
neighboring points, a nearest neighbor graph within a given radius rp is applied to the
set of intersection point candidates. According to the size and the complexity of the
desired object, the user can change the search radius rp for the neighboring points.
The algorithm does not merge close points from parallel planes, otherwise information
loss would occur. The merging of the 3-D points is simply done by averaging, resulting
in one 3-D point. The merging of the normal vectors n is performed such that the

(a) Multiple intersection points (b) Line intersection

Figure 4.18: Possible intersection points of annotated non-parallel image slices. (a) The intersection occurs in multiple points, see yellow points. (b) The intersection occurs in the form of one line, see yellow line.

final result is a unit vector again. Averaging of a collection of three-dimensional points is described as

p = (1/m) Σ_{i=1}^{m} p_i = ( (1/m) Σ_{i=1}^{m} x_i , (1/m) Σ_{i=1}^{m} y_i , (1/m) Σ_{i=1}^{m} z_i )^T ,    (4.20)

where p_i refers to the intersection point candidates in one neighborhood and m is the number of candidates within the search radius r_p. To obtain a normal unit vector, the merging is done as

n = ( (1/N(n_{i,x})) Σ_{i=1}^{m} n_{i,x} , (1/N(n_{i,y})) Σ_{i=1}^{m} n_{i,y} , (1/N(n_{i,z})) Σ_{i=1}^{m} n_{i,z} )^T ,    (4.21)

n_normalized = n / ‖n‖ ,    (4.22)

where operator N counts the number of valid elements, i. e., the intersection points
that have an in-plane normal for this dimension. In Figure 4.19, all the control points
extracted from the different image planes are shown, where the control points with
a 2-D normal vector are visualized in green and all control points with a 3-D normal
vector are shown in blue.
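A minimal sketch of the merging of one neighborhood (Python/NumPy), where missing in-plane components are marked as NaN so that the counting operator N of Equations (4.21) and (4.22) reduces to a NaN-aware mean; this representation is an assumption made for illustration only:

    import numpy as np

    def merge_candidates(points, normals):
        # points: (m, 3) candidates of one neighborhood; normals: (m, 3) with NaN
        # for components that are not defined in the respective image plane
        p = points.mean(axis=0)                              # Equation (4.20)
        n = np.nan_to_num(np.nanmean(normals, axis=0))       # Equation (4.21)
        n /= np.linalg.norm(n) + 1e-12                       # Equation (4.22)
        return p, n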

4.4.4 3-D Interpolation


For the 3-D interpolation of the scalar field, RBFs are used. The RBF interpolation depends only on the distance of the center c to a point p_i [Mors 05],

ϕ(c) = ϕ(‖c − p_i‖) ,    (4.23)

where ϕ ∈ R is a non-linear activation function and pi is an extracted control point.



Figure 4.19: Control points extracted from three different orientations, where N points have a 2-D normal vector in green and M points have a 3-D normal vector in blue.

If we consider N radial basis functions centered at every control point p_i, we end up with a system of linear equations

f(c) = Σ_{i=1}^{N} α_i ϕ(‖c − p_i‖) ,    (4.24)

where α_i ∈ R is a weighting factor for each control point. To make sure that the equation is always solvable, a low-degree polynomial g(c) is added,

f(c) = Σ_{i=1}^{N} α_i ϕ(‖c − p_i‖) + g(c) .    (4.25)

However, this simple RBF formulation requires the definition of inside and outside values. To address this issue, Hermite data is incorporated into the RBF, which directly uses derivatives. This method ensures the existence of a non-null implicit surface without the need for additional information [Mace 11]. Using the first order Hermite interpolation in combination with RBFs, the scalar field can be formulated as follows:

f(c) = Σ_{i=1}^{N} α_i ϕ(‖c − p_i‖) − β_i^T ∇ϕ(‖c − p_i‖) + g(c) ,    (4.26)

where α_i ∈ R and β_i ∈ R³ are weighting factors.


In this work, a new formulation of HRBF is introduced that allows reconstructing the 3-D surface based on scattered control points and their associated 2-D and 3-D normal vectors. Assume that N Hermite data points {(p_i, n_i^2D) | p_i ∈ R³, n_i^2D ∈ R², i = 1, ..., N} with a 2-D normal vector and M Hermite data points {(p_i, n_i) | p_i ∈ R³, n_i ∈ R³, i = N+1, ..., N+M} with a 3-D normal vector are generated. In RBF interpolation, the final segmentation is given as the zero level set of a scalar field. The scalar field f consists of two components, f = f^2D + f^3D. The scalar field f^2D for the 2-D normal vectors is formulated as

f^2D(c) = Σ_{i=1}^{N} α_i^2D ϕ(‖c − p_i‖) − (β_i^2D)^T h_i^2D(∇ϕ(‖c − p_i‖)) + g(c) ,    (4.27)

where g(c) is a low-degree polynomial, h_i^2D(c) is a function that selects the 2-D gradient directions that are available for control point p_i, and α_i^2D ∈ R, β_i^2D ∈ R² are the RBF coefficients. The scalar field f^3D for the 3-D normal vectors is formulated accordingly,

f^3D(c) = Σ_{i=N+1}^{N+M} α_i^3D ϕ(‖c − p_i‖) − (β_i^3D)^T ∇ϕ(‖c − p_i‖) + g(c) ,    (4.28)

where g(c) is a low-degree polynomial and α_i^3D ∈ R, β_i^3D ∈ R³ are the RBF coefficients. A 3-D gradient selection function similar to h_i^2D(c) is not necessary, since all dimensions are specified by the 3-D normals.

According to previous work from Ijiri et al. [Ijir 13], the commonly used tri-harmonic kernel ϕ(t) = t³, t ∈ R, with a linear polynomial g(c) = a^T c + b yields adequate results in terms of shape aesthetics. To determine the coefficients α_i^2D, α_i^3D, β_i^2D, and β_i^3D, constraints are derived from the CPs [Ijir 13]:

f(p_i) = 0    (4.29)
h_i^2D(∇f(p_i)) = n_i^2D    (4.30)
∇f(p_i) = n_i .    (4.31)
In addition, the orthogonality conditions
Σ_{i=1}^{N} α_i^2D = 0    (4.32)

Σ_{i=N+1}^{N+M} α_i^3D = 0    (4.33)

Σ_{i=1}^{N} α_i^2D h_i^2D(p_i) + β_i^2D = 0    (4.34)

Σ_{i=N+1}^{N+M} α_i^3D p_i + β_i^3D = 0 ,    (4.35)

have to be fulfilled [Ijir 13]. These constraints yield a linear system of equations
represented in the matrix form as
⎡ 0         O_1^T      ⋯  O_M^T      O_{M+1}^T    ⋯  O_{M+N}^T    ⎤ ⎡ s       ⎤   ⎡ 0       ⎤
⎢ O_1       G_{1,1}    ⋯  G_{1,M}    G_{1,M+1}    ⋯  G_{1,M+N}    ⎥ ⎢ w_1     ⎥   ⎢ b_1     ⎥
⎢ ⋮         ⋮          ⋱  ⋮          ⋮            ⋱  ⋮            ⎥ ⎢ ⋮       ⎥   ⎢ ⋮       ⎥
⎢ O_M       G_{M,1}    ⋯  G_{M,M}    G_{M,M+1}    ⋯  G_{M,M+N}    ⎥ ⎢ w_M     ⎥ = ⎢ b_M     ⎥ ,    (4.36)
⎢ O_{M+1}   G_{M+1,1}  ⋯  G_{M+1,M}  G_{M+1,M+1}  ⋯  G_{M+1,M+N}  ⎥ ⎢ w_{M+1} ⎥   ⎢ b_{M+1} ⎥
⎢ ⋮         ⋮          ⋱  ⋮          ⋮            ⋱  ⋮            ⎥ ⎢ ⋮       ⎥   ⎢ ⋮       ⎥
⎣ O_{M+N}   G_{M+N,1}  ⋯  G_{M+N,M}  G_{M+N,M+1}  ⋯  G_{M+N,M+N}  ⎦ ⎣ w_{M+N} ⎦   ⎣ b_{M+N} ⎦

where the different colors in the matrix indicate the points and the corresponding normal vectors with different dimensionality (3-D blue, 2-D green, mixed purple). The linear system of equations can also be written as DY = B, with D ∈ R^{(M+N)×(M+N)}, Y ∈ R^{M+N}, and B ∈ R^{M+N}.
The blue block describes the constraints on the 3-D variables α_i^3D, β_i^3D derived from the 3-D constraints and orthogonality conditions, Equations (4.29), (4.31), (4.33) and (4.35). Thus, the matrices G_{i,j}, O_i and the vectors s, w_i, and b_i are defined as:

G_{i,j} = ⎡ ϕ(‖p_i − p_j‖)    −∇ϕ(‖p_i − p_j‖)^T ⎤ ,   O_i = ⎡ p_i^T   1 ⎤ ,
          ⎣ ∇ϕ(‖p_i − p_j‖)   −Hϕ(‖p_i − p_j‖)   ⎦          ⎣ E       0 ⎦
                                                                              (4.37)
s = ⎡ a ⎤ ,   w_i = ⎡ α_i^3D ⎤ ,   b_i = ⎡ 0   ⎤ ,
    ⎣ b ⎦           ⎣ β_i^3D ⎦           ⎣ n_i ⎦

where E ∈ R^{3×3} is a unit matrix and Hϕ ∈ R^{3×3} is the Hessian matrix of the kernel ϕ, which arises due to the normal constraint Equation (4.31) applied to ∇ϕ. The green block describes the constraints on the 2-D variables α_i^2D, β_i^2D derived from the 2-D constraints and orthogonality conditions, Equations (4.29), (4.30), (4.32) and (4.34). Thus, the matrices G_{M+i,M+j} and O_{M+i} and the vectors w_{M+i} and b_{M+i} are defined as:

G_{M+i,M+j} = ⎡ ϕ(‖p_i − p_j‖)            −(h_i^2D(∇ϕ(‖p_i − p_j‖)))^T             ⎤ ,
              ⎣ h_i^2D(∇ϕ(‖p_i − p_j‖))   −h_i^2D(∇^T h_i^2D(∇ϕ(‖p_i − p_j‖)))     ⎦

O_{M+i} = ⎡ (h_i^2D(p_i))^T   1 ⎤ ,                                           (4.38)
          ⎣ E                 0 ⎦

w_{M+i} = ⎡ α_i^2D ⎤ ,   b_{M+i} = ⎡ 0      ⎤ .
          ⎣ β_i^2D ⎦               ⎣ n_i^2D ⎦

The mixed blocks are defined analogously. There is always a unique solution to the
system of equations, if the points pi are pairwise distinct [Braz 10, Ijir 13].
Considering the basis function of the tri-harmonic kernel ϕ(t) = t3 , the gradient
and the Hessian matrix of the kernel ϕ is denoted as follows:
∇ϕ(t) = 3 ‖t‖ t ,

Hϕ = ⎧ 0                              if ‖t‖ = 0
     ⎨                                                      (4.39)
     ⎩ 3 t t^T / ‖t‖ + 3 ‖t‖ E_k      otherwise

where E_k ∈ R^{k×k} is a unit matrix. To solve the linear system of equations, it is


assumed that all the M + N points have 3-D normal vectors. Therefore, the linear
system has the size of 3(M + N + 1) × 3(M + N + 1).
The unknown parameters α_i^2D, α_i^3D, β_i^2D, β_i^3D, a, and b can be obtained directly, as the matrix is square and non-singular,

DY = B ,   Y = D^{-1} B .    (4.40)
The parameter αi is the weight of each RBF at its center pi and β i is the weight of
the normal vector at the same center. The next step is to extract the 3-D surface
which is presented in the next section.
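To illustrate how such a system is assembled and solved, the following sketch (Python/NumPy) implements only the 3-D block, i. e., Equations (4.26), (4.29), (4.31), (4.33), and (4.35) with the tri-harmonic kernel; the mixed 2-D/3-D blocks with the selection function h_i^2D are omitted, and the code is not the implementation used in this work:

    import numpy as np

    def _grad_phi(d):                         # gradient of phi(t) = t^3, cf. Equation (4.39)
        return 3.0 * np.linalg.norm(d) * d

    def _hess_phi(d):                         # Hessian of the tri-harmonic kernel
        r = np.linalg.norm(d)
        if r == 0.0:
            return np.zeros((3, 3))
        return 3.0 * np.outer(d, d) / r + 3.0 * r * np.eye(3)

    def hrbf_fit(points, normals):
        # unknowns per point: alpha_i (1) and beta_i (3); plus the polynomial a (3) and b (1)
        n = len(points)
        m = 4 * n + 4
        A, rhs = np.zeros((m, m)), np.zeros(m)
        for j in range(n):
            rv, rg = 4 * j, 4 * j + 1         # value row and three gradient rows for p_j
            for i in range(n):
                d = points[j] - points[i]
                g, H = _grad_phi(d), _hess_phi(d)
                c = 4 * i
                A[rv, c] = np.linalg.norm(d) ** 3
                A[rv, c + 1:c + 4] = -g
                A[rg:rg + 3, c] = g
                A[rg:rg + 3, c + 1:c + 4] = -H
            A[rv, 4 * n:4 * n + 3], A[rv, 4 * n + 3] = points[j], 1.0   # a . p_j + b
            A[rg:rg + 3, 4 * n:4 * n + 3] = np.eye(3)
            rhs[rg:rg + 3] = normals[j]       # gradient constraint, Equation (4.31)
        for i in range(n):                    # orthogonality, Equations (4.33) and (4.35)
            c = 4 * i
            A[4 * n, c] = 1.0
            A[4 * n + 1:4 * n + 4, c] = points[i]
            A[4 * n + 1:4 * n + 4, c + 1:c + 4] = np.eye(3)
        return np.linalg.solve(A, rhs)

    def hrbf_eval(coeffs, points, c):
        # evaluate f(c) = sum_i alpha_i phi - beta_i . grad phi + a . c + b
        n = len(points)
        val = coeffs[4 * n:4 * n + 3] @ c + coeffs[4 * n + 3]
        for i in range(n):
            d = c - points[i]
            val += coeffs[4 * i] * np.linalg.norm(d) ** 3
            val -= coeffs[4 * i + 1:4 * i + 4] @ _grad_phi(d)
        return val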

(a) Point cloud (b) Interpolated surface

Figure 4.20: Surface reconstruction based on RBF interpolation. (a) Point cloud used for the 3-D interpolation based on RBFs. (b) Implicitly reconstructed surface based on the RBF.

4.4.5 Surface Reconstruction


As mentioned in Section 4.4, the RBF surface can be interpreted as an implicit sur-
face. The RBF interpolation guarantees a smooth field from non-uniformly distributed data points [Duch 77, Carr 01, Turk 02, Ijir 13]. Therefore, after interpo-
lating the scalar field, in order to get the 3-D surface, the zero level of the scalar
field has to be extracted. Using this method, no a-priori knowledge is required about
the topology of the reconstructed shape. In general, the level set u0 at time t of a
function ψ(x, y, t) is the set of arguments {(x, y) | ψ(x, y, t) = u0}. For the zero level set, the idea is to define a function ψ(x, y, t) such that at any time,

υ(t) = {(x, y), ψ(x, y, t) = 0} . (4.41)

The function ψ has many other level sets in addition to υ, but only υ is meaningful for the segmentation. A very commonly chosen
function ψ is the signed distance to the front υ(0) given as

ψ(x, y, 0) = ⎧ −d(x, y, 0)   if (x, y) inside the front
             ⎨ 0             if (x, y) on the front         (4.42)
             ⎩ d(x, y, 0)    if (x, y) outside the front

The level set method segments the surface iteratively. In the first step, the front υ(0)
is initialized at a certain position. The second step is to compute ψ(x, y, 0) and then
iterate until convergence,

ψ(x, y, t + 1) = ψ(x, y, t) + ∆ψ(x, y, t) , (4.43)

Lastly, υ(t_end) is taken as the desired extracted surface.
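In practice, instead of the iterative front evolution sketched above, the zero level set can also be obtained directly by sampling the interpolated field on a voxel grid and running marching cubes at level zero. The following brute-force sketch (Python, assuming scikit-image, the hrbf_eval function from the previous sketch, and control points given in voxel coordinates) is only an illustrative alternative, not the method of this work:

    import numpy as np
    from skimage import measure

    def reconstruct_surface(coeffs, points, grid_shape):
        # sample the scalar field on every voxel and extract its zero level set
        field = np.empty(grid_shape)
        for idx in np.ndindex(grid_shape):
            field[idx] = hrbf_eval(coeffs, points, np.asarray(idx, dtype=float))
        verts, faces, _, _ = measure.marching_cubes(field, level=0.0)
        return verts, faces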



(a) Cohort 1 SA view (b) Cohort 1 LA view (c) Cohort 2 SA view (d) Cohort 2 LA view

Figure 4.21: (a) and (b) show one example data set of the first cohort in SA and LA orientation. (c) and (d) depict an example data set of the second cohort.
Here the voxel spacing is not isotropic and no additional fat suppression was applied.
This results in further enhancements in the apical region of the left ventricle, see (d).

4.5 Evaluation and Results


In this section, the evaluation is detailed and the results are presented. In Sec-
tion 4.5.1, the data used for the evaluation is described. The different evaluation
metrics used and the assessment of the evaluation are explained in Section 4.5.2. The
results are shown in Section 4.5.3.

4.5.1 Data Description


The automatic segmentation of the LV endocardium and epicardium is evaluated on
30 clinical MRI data sets from individual patients acquired at two clinical sites. The
two clinical sites are distinguished in cohort 1 and cohort 2. The first cohort consists
of nine data sets and the second cohort contains 21 data sets.
From the first clinical site, the data is acquired using a MAGNETOM Skyra
3T scanner (Siemens Healthcare GmbH, Erlangen, Germany). The free-breathing
whole heart 3-D LGE-MRI is performed based on a protocol adapted from Forman
et al. [Form 14]. The main difference is that the sequence is inversion prepared. The
patient specific TI time is adjusted with a TI scout scan prior to the 3-D LGE-MRI ac-
quisition. For acceleration, the Cartesian variable-density spiral phyllotaxis sampling
pattern is set up for a net acceleration of 7.7 relative to the fully sampled k-space.
The acquisition utilizing a GRE sequence has the following parameters: TR/TE
4.0/1.5 ms, radio frequency excitation angle 20◦ , FOV 247-270 × 270 × 135-180 mm3 ,
voxel size 1.3 mm3 isotropic, and a receiver bandwidth of 401 Hz/Px. The acquisi-
tion window of the ECG-triggered sequence is adapted to the patient specific cardiac
resting phase in mid-diastole. Image reconstruction is fully integrated on the scanner
and performed with a regularized SENSE-type iterative reconstruction as described
in [Liu 12]. For all data sets, the regularization parameter is fixed to 0.002 and image
reconstruction is terminated after 20 iterations of the mFISTA algorithm.
From the second clinical site, the data is acquired using two different clinical
scanners, MAGNETOM Verio 3T and MAGNETOM Espree 1.5T (Siemens Health-
care GmbH, Erlangen, Germany). The acquisition utilizing a GRE sequence has
the following parameters: TR/TE 2.76–4.02 ms/1.38–2.01 ms, radio frequency excitation angle 13–14◦, FOV 379–384 × 379–384, voxel size (0.66–0.85 × 0.66–0.85 × 1.5–1.7) mm³, and a receiver bandwidth of 349–755 Hz/Px. Image reconstruction is performed using parallel imaging [Gris 02].

Description                         Symbol    Value
Distance between contour points     θdist     4
Difference between areas            θdiff     ±25 %
Sub-sampling rate                   ξ         4
Difference between centers          θc        5
Base reached                        θbmax     ≥188 %
Enlargement for epicard             θepi      6
Apex reached epicard                θr        <2

Table 4.3: Parameters for the left ventricle segmentation. These parameters are chosen heuristically.
Figure 4.21 shows an example data set for each cohort, (a) and (c) show the SA
view and (b) and (d) the LA view. In the second cohort, no additional fat suppression
is applied, which leads to various enhancements, especially in the apex, see Figure
4.21 (d). The different resolution of cohort 1 and 2 in the z-axis has no influence
on the algorithm. However, the data set of cohort 1 are preferable as the pixel
spacing is isotropic and the additional fat suppression is helpful for the correct scar
quantification.

4.5.2 Evaluation
Gold standard annotations of the LV endocardium and epicardium are provided by
two clinical experts and are performed using various open source segmentation tools
(Slicer [Fedo 12], Seg3D [CIBC 15], and MITK [Nold 13]). The three tools are equiv-
alent and have no influence on the segmentation result, as always the brush func-
tionality is used. The observers are asked to segment the endocardial and epicardial
contour.
For the automatic segmentation of the endo- and epicardium, several parameters
have to be chosen heuristically. A summary of all the parameters is provided in
Table 4.3. The influence of the parameter values is discussed in Section 4.5.3.

Metrics

Given the gold standard annotation, the segmentation is evaluated using two different
measures. First, the Dice coefficient (DC) as a quantitative score of the segmentation
quality is evaluated, as it measures the proportion of the true positives in the seg-
mentation. Dice scores range from 0 to 1, with 1 corresponding to a perfect overlap.
For the definition of the Dice coefficient please refer to Equation (3.3). For the DC,
the whole 3-D volume is considered.
The second evaluation method is the mean surface distance (MSD) between the
surface voxels of the binary mask A and their nearest surface voxels of the binary object B, averaged over all contour points and all slices of the volume.
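As an illustration, both measures can be computed from binary masks as follows (Python with NumPy/SciPy); the sketch uses a symmetric surface distance and a voxel spacing in mm, which are assumptions about the exact variant used here:

    import numpy as np
    from scipy import ndimage

    def dice(a, b):
        # Dice coefficient of two binary masks, 1 corresponds to a perfect overlap
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
        # symmetric mean distance between the surface voxels of two binary masks
        def surface(m):
            return m & ~ndimage.binary_erosion(m)
        sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
        dist_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
        dist_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
        return 0.5 * (dist_to_b[sa].mean() + dist_to_a[sb].mean())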

(a) Non-smoothed gold standard (b) Smoothed gold standard

Figure 4.22: (a) Example of the non-smoothed gold standard annotation in the
SA orientation. It can be seen that the contours look frayed. (b) Smoothed gold standard annotation in the SA orientation.

Assessment

The two different measures are applied to the whole volume, differentiating the endo-
and epicardium. Furthermore, the inter-observer variability between the observers is
investigated. In addition, the volume is separated in three parts, the base, mid-cavity,
and apex, by dividing the gold standard annotation into equal thirds, perpendicular
to its long axis [Ma 12].

Smoothing

The gold standard annotation is done in the sagittal, coronal, and axial plane, there-
fore, the results look frayed in the SA orientation, see Figure 4.22 (a) for an example.
The data was already annotated by the physician, therefore all the annotations are
performed in the standard image planes. However, to overcome this issue with the
frayed contours, the gold standard annotation is post-processed by applying the con-
vex hull to the contour points for every slice in the SA orientation. To be more
precise, the convex hull is estimated in 2-D for every slice without considering the
overall 3-D shape. An example of the smoothed contour is shown in Figure 4.22 (b).
Hence, the DC and MSD are computed for the gold standard annotations with and
without smoothing and also the difference between the gold standard annotation with
and without smoothing is evaluated.
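A minimal sketch of this per-slice smoothing (Python, assuming scikit-image; the slice axis and names are illustrative):

    import numpy as np
    from skimage.morphology import convex_hull_image

    def smooth_annotation(mask_3d):
        # 2-D convex hull of the annotation for every SA slice, without a 3-D constraint
        out = np.zeros_like(mask_3d, dtype=bool)
        for k in range(mask_3d.shape[0]):
            if mask_3d[k].any():
                out[k] = convex_hull_image(mask_3d[k])
        return out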

4.5.3 Results
First, the results for the filter based segmentation are presented. Then the effect of
the smoothing is evaluated and the parameter variability is investigated. Afterwards,
the results for the learning based segmentation are shown. Furthermore, the filter
based and learning based results are compared against each other. Finally, the semi-
automatic segmentation approach using the smart brush and the adapted HRBF
interpolation is evaluated.

                                   Endo                            Epi
            Description       Mean ± Std    Min    Max       Mean ± Std    Min    Max
Cohort 1    Mean              0.84 ± 0.06   0.73   0.93      0.85 ± 0.05   0.73   0.90
            Observer 1        0.84 ± 0.06   0.73   0.91      0.84 ± 0.05   0.73   0.89
            Observer 2        0.85 ± 0.06   0.74   0.93      0.85 ± 0.04   0.76   0.90
            Inter-Observer    0.94 ± 0.02   0.92   0.98      0.93 ± 0.02   0.95   0.95
Cohort 2    Mean              0.83 ± 0.03   0.76   0.89      0.78 ± 0.04   0.68   0.85
            Observer 1        0.83 ± 0.03   0.77   0.89      0.78 ± 0.03   0.70   0.82
            Observer 2        0.83 ± 0.03   0.76   0.89      0.78 ± 0.05   0.68   0.85
            Inter-Observer    0.90 ± 0.04   0.81   0.98      0.87 ± 0.05   0.75   0.96
(a) Dice coefficient without smoothing

                                   Endo                            Epi
            Description       Mean ± Std    Min    Max       Mean ± Std    Min    Max
Cohort 1    Mean              0.85 ± 0.06   0.72   0.91      0.85 ± 0.06   0.69   0.91
            Observer 1        0.84 ± 0.06   0.72   0.91      0.83 ± 0.06   0.70   0.90
            Observer 2        0.85 ± 0.06   0.72   0.91      0.86 ± 0.06   0.72   0.91
            Inter-Observer    0.96 ± 0.02   0.92   0.98      0.94 ± 0.02   0.92   0.97
Cohort 2    Mean              0.84 ± 0.03   0.76   0.90      0.78 ± 0.04   0.67   0.85
            Observer 1        0.83 ± 0.03   0.78   0.90      0.78 ± 0.04   0.70   0.83
            Observer 2        0.84 ± 0.03   0.76   0.89      0.78 ± 0.05   0.68   0.85
            Inter-Observer    0.91 ± 0.04   0.82   0.98      0.87 ± 0.05   0.77   0.95
(b) Dice coefficient with smoothing

Table 4.4: Quantitative results of filter based 3-D LV segmentation. The results are shown separately for the endocardial (Endo) and epicardial (Epi) contour. Furthermore, it is distinguished between the two different clinical cohorts. (a) DC without smoothing of the gold standard annotation. (b) DC with smoothing of the gold standard annotation.

Filter Based Segmentation Results

First, the gold standard without smoothing is considered. The automatic seg-
mentation of the endocardium results in an overlap to the gold standard annotation
of 0.83 ± 0.04 considering the DC. The best segmentation result has a DC of 0.93
and the worst a DC of 0.73 because of extensive myocardial scarring. For the epicard,
an overall DC of 0.80 ± 0.05 is achieved. The best segmentation of the epicardium
yields a DC of 0.90 and the worst a DC of 0.68. As two gold standard annotations
are available, the inter-observer variability is addressed. The inter-observer variabil-
ity between the two observers results in a DC of 0.92 ± 0.04. The best inter-observer
variability results in a DC of 0.98 and the worst in a DC of 0.81. In Table 4.4 (a) a
distinction between the two different clinical cohorts is made. It can be seen that the
results for the endocardial contour extraction are similar with a DC of around 0.83.
However, the results for the epicard are worse for cohort two. This correlates with a
reduced inter-observer variability of 0.87 for the second cohort’s epicardium.

(a) DC w/o smoothing (b) DC w/ smoothing

Figure 4.23: (a) The gold standard without smoothing is used. Considering the
endocardium and epicardium together, an average DC of 0.82 ± 0.07 is achieved.
Furthermore, the LV is divided into three parts, the base, mid-cavity, and apex.
(b) The gold standard with smoothing is used. Considering the endocardium and
epicardium together, an average DC of 0.82 ± 0.07 is achieved. Furthermore, the LV
is divided into three parts, the base, mid-cavity, and apex.

Furthermore, the DC of the basal, mid-cavity, and apex is evaluated separately


considering the endocardium and the epicardium together. The algorithm performs
best for the mid-cavity area, where a DC of 0.88 ± 0.07 is achieved. The error is
larger for the basal area and the apex, where a DC of 0.77 ± 0.15 and 0.74 ± 0.16 is
achieved. This distribution is illustrated in Figure 4.23 (a).
For the gold standard with smoothing, a DC of 0.84 ± 0.04 is achieved. The
best segmentation result has a DC of 0.91 and the worst a DC of 0.72, similar to the
previous results. For the epicard, an overall DC of 0.80 ± 0.06 is achieved. The best
segmentation of the epicardium yields a DC of 0.91 and the worst a DC of 0.68. The
inter-observer variability of the post-processed data results in a DC of 0.92 ± 0.04,
with a minimum DC of 0.82 and a maximum DC of 0.98. In Table 4.4 (b), a
distinction between the two different clinical cohorts is made. It can be seen that
the results for the endocardial contour extraction are similar with a DC of 0.84.
Furthermore, in Figure 4.24 the Dice coefficients for the individual data sets are
visualized for the endocard and epicard, respectively.
As before, the DC of the basal, mid-cavity, and apex is evaluated separately
considering the endocardium and the epicardium together regarding the smoothed
gold standard annotation. The algorithm performs best for the mid-cavity area,
where a DC of 0.89 ± 0.07 is achieved. The error is larger for the basal area and the
apex, where a DC of 0.78 ± 0.19 and 0.75 ± 0.16 is achieved. This is visualized in
Figure 4.23 (b).
It can be seen that there is no big difference in the Dice coefficient using the
gold standard annotation with or without smoothing. This can be attributed to the
fact that the DC compares the area of the segmentation result and not the contour.
Therefore, considering the area there is no big difference after the convex hull is
estimated. However, the results look visually appealing and a smooth surface is also


Figure 4.24: Individual Dice coefficient for 3-D filter based segmentation for each of
the 30 data sets for the endocard and epicard, respectively. The DC for the endocard
is sorted in increasing order. The DC for the epicard is sorted according to the
endocard.

required for the mesh generation. Furthermore, the results in the mid-cavity are very
convincing and are adequate for the scar quantification. However, the segmentation
of the apex for some cases is not sufficient especially for the second cohort. The
decreased quality is due to the hyper-enhancements in the apex as for the second
cohort no fat suppression is applied.
Now, the results for the mean surface distance are reported. First, the gold
standard without smoothing is considered. Using the MSD, the endocard has a
mean distance of 2.80 mm ± 0.80 mm, with a minimum of 1.14 mm and a maximum of
4.85 mm. The epicard has a mean distance of 4.17 mm ± 1.15 mm, with a minimum
average of 1.82 mm and a maximum average of 6.75 mm. The inter-observer variability
between the two observers results in a mean surface distance of 0.91 mm ± 0.04 mm,
with a minimum of 0.81 mm, and a maximum of 0.98 mm. In Table 4.5 (a) it is
differentiated between the two different clinical cohorts. The mean surface distance
of the epicard for the second cohort is worse compared to the first cohort. This
observation correlates with the MSD of the inter-observer variability for the epicard.
For the gold standard annotation with smoothing, the endocard has a mean
distance of 2.75 mm ± 0.73 mm, with a minimum of 1.49 mm and a maximum of
4.48 mm. The epicard has a mean distance of 4.29 mm ± 1.18 mm, with a mini-
mum average of 1.90 mm and a maximum of 6.72 mm. The inter-observer variability
between the two observers results in a MSD of 1.35 mm ± 0.73 mm, with a min-
imum mean distance of 0.30 mm, and a maximum mean distance of 3.05 mm. In
Table 4.5 (b) there is a distinction for the two different clinical cohorts. Here, the
same observations can be seen as for the DC with smoothing.

                                   Endo [mm]                       Epi [mm]
            Description       Mean ± Std    Min    Max       Mean ± Std    Min    Max
Cohort 1    Mean              2.63 ± 1.15   1.15   4.85      3.16 ± 0.10   1.82   5.37
            Observer 1        2.75 ± 1.13   1.51   4.85      3.39 ± 1.06   2.12   5.37
            Observer 2        2.52 ± 1.13   1.15   4.82      2.92 ± 0.93   1.82   4.66
            Inter-Observer    0.79 ± 0.40   0.32   1.40      1.22 ± 0.41   0.85   2.22
Cohort 2    Mean              2.87 ± 0.60   1.63   4.02      4.61 ± 0.92   2.53   6.75
            Observer 1        2.98 ± 0.60   1.90   4.02      4.89 ± 0.81   3.40   6.75
            Observer 2        2.75 ± 0.60   1.63   3.75      4.31 ± 0.96   2.53   6.40
            Inter-Observer    1.62 ± 0.78   0.37   3.55      2.82 ± 1.20   0.81   5.54
(a) MSD without smoothing

                                   Endo [mm]                       Epi [mm]
            Description       Mean ± Std    Min    Max       Mean ± Std    Min    Max
Cohort 1    Mean              2.65 ± 0.96   1.49   4.48      3.42 ± 1.20   1.90   6.47
            Observer 1        2.73 ± 0.90   1.76   4.48      3.72 ± 1.22   2.42   6.47
            Observer 2        2.58 ± 1.06   1.49   4.43      3.13 ± 1.17   1.90   5.86
            Inter-Observer    0.83 ± 0.50   0.65   0.87      1.42 ± 0.51   0.30   1.79
Cohort 2    Mean              2.79 ± 0.61   1.52   3.85      4.65 ± 0.96   2.69   6.72
            Observer 1        2.88 ± 0.60   1.78   3.75      4.90 ± 0.80   3.41   6.72
            Observer 2        2.70 ± 0.63   1.52   3.85      4.41 ± 1.06   2.69   6.64
            Inter-Observer    1.57 ± 0.71   0.38   3.05      2.97 ± 1.19   1.17   5.81
(b) MSD with smoothing

Table 4.5: Quantitative results of filter based 3-D LV segmentation, using the MSD. The results are shown separately for the endocardial (Endo) and epicardial (Epi) contour in mm. Furthermore, it is distinguished between the two different clinical cohorts. (a) MSD without smoothing of the gold standard annotation. (b) MSD with smoothing of the gold standard annotation.

Qualitative Results

Figure 4.25 shows an example for a qualitative evaluation for one clinical data set
of the first cohort. The first row shows the pseudo SA slices from basal to apical
direction without any contours. The second row depicts the gold standard annotation
without smoothing from the physician, where the endocardium is marked in orange
and the epicardium in green. The third row illustrates the gold standard annotation
with smoothing, where the endocardium is marked in orange and the epicardium
in green. The fourth row delineates the filter based segmentation result, where the
endocardial contour is red and the epicardial contour is yellow. It can be seen that the
presented result matches well with the smoothed gold standard annotation. However,
the manual annotation looks frayed. Furthermore, it can be observed that the biggest
variance of the annotations is in the apex, which is also shown in Figure 4.23 regarding
the DC.

(a) Pseudo SA slices from basal to apical direction

(b) Gold standard annotation from the physician without smoothing

(c) Gold standard annotation from the physician with smoothing

(d) Filter based segmentation result

Figure 4.25: Comparison of the segmentation result for sequence number seven from
the first cohort. The first row shows the pseudo SA slices from basal to apical direction
without any contours. The second row depicts the gold standard annotation from the
physician without smoothing, where the endocardium is marked in orange and the
epicardium in green. The third row depicts the smoothed gold standard annotation,
where the endocardium is marked in orange and the epicardium in green. The fourth
row delineates the filter based result of the presented segmentation algorithm, where
the endocardial contour is red and the epicardial contour is yellow.

(a) Endocardium
(b) Epicardium

Figure 4.26: Comparison of the segmentation result for the two observers. The first
row depicts the smoothed gold standard annotation from the endocardium, where
the lighter orange contour is from the physician and the darker contour from the
clinical expert. The second row shows the smoothed gold standard annotation of the
epicardium, where the light green contour is from the physician and the dark green
from the clinical expert.

Gold-Standard Annotation Comparison

Figure 4.26 compares the two different gold standard annotations obtained from the
clinical experts, from base to apex. In the first row, the smoothed gold standard
annotations from the endocardium are shown. In the second row, the smoothed
gold standard annotations from the epicardium are shown. In this data set, extensive myocardial scarring is present. It can be seen that the contours differ especially in
the apex and at the left ventricular outflow tract. These regions also show the largest
differences compared to the fully automatic segmentation.
In addition to the inter-observer variability, the Dice coefficient and mean sur-
face distance between the smoothed gold standard annotation and non-smoothed
annotations is evaluated. The endocard has a mean DC of 0.96 ± 0.01. The best
segmentation overlap has a DC of 0.97 and the worst a DC of 0.89. The epicard has
a mean DC of 0.96 ± 0.02. The best match yields a DC of 0.98 and the worst a DC
of 0.92.
Using the mean surface distance, the endocard has a mean distance of 0.68 mm ±
0.32 mm, with a minimum of 0.36 mm and a maximum of 2.41 mm. For the epicard, a
mean distance of 0.79 mm ± 0.40 mm is evaluated, with a minimum of 0.33 mm and a
maximum distance of 1.92 mm. The results for the smoothing impact are summarized
in Table 4.6.

Dice coefficient for (a) θdist ∈ {2, 4, 6}, (b) ξ ∈ {2, 4, 8}, (c) θdiff ∈ {20, 25, 30}, (d) θC ∈ {3, 5, 7}, (e) θbmax ∈ {178, 188, 198}, (f) θepi ∈ {4, 6, 8}

Figure 4.27: Parameter variability evaluation. (a) Variability evaluation of the


distance between points θdist . (b) Variability evaluation of the sub-sampling rate ξ.
(c) Variability evaluation of the distance between areas θdiff . (d) Variability evaluation
of the distance between centers θC . (e) Variability evaluation of base threshold θbmax .
(f) Variability evaluation of the enlargement of the epicard θepi .

Description    DC            MSD [mm]
Endo           0.94 ± 0.02   0.74 ± 0.22
Epi            0.96 ± 0.01   0.88 ± 0.33

Table 4.6: Comparison of the gold standard annotation without smoothing to the
smoothed gold standard annotation using the Dice coefficient (DC) and the mean
surface distance (MSD). The results are shown separately for the endocardial (Endo),
epicardial (Epi) contour.

Parameter Variability

As many parameters are set, as shown in Table 4.3, the sensitivity of these parameters
is evaluated, see Figure 4.27. It can be seen that the sensitivity of the distance between two points θdist, the distance between centers θC, and the base threshold θbmax is low and the mean value is always in the same range. The sensitivity for the sub-sampling rate ξ is higher, as the sub-sampled points are the input for the contour refinement. Furthermore, the sensitivity for the distance between two areas θdiff is higher, as this is an important criterion for deciding whether the newly estimated contour or the previously segmented contour is used. For θepi, it can be seen that the radius should be enlarged by 6 pixels, otherwise the enlargement is not big enough and thus the contour estimation using the filter based approach is not as accurate.

Learning Based Segmentation Results

As the smoothing has no big influence on the Dice coefficient as shown in Table 4.6
and the results seem more physiologically meaningful compared to the non-smoothed
results as seen in Figure 4.22, only the smoothed gold standard annotations are
considered for the further evaluation of the learning based approach. As the data sets
from the two cohorts differ significantly, only the first cohort is evaluated using the

               Endocardium         Epicardium
Description    T        D          T        D
H_endo^1       70       15         80       15
H_endo^2       60       15         70       15
H_endo^3       50       15         70       15
H_endo^4       60       15         70       15
H_endo^5       60       15         60       15
H_endo^6       50       15         70       15
H_endo^7       60       15         50       15
H_endo^8       50       70         4        30
H_endo^9       60       15         80       15

Table 4.7: Optimized hyper-parameters for the 3-D random forest, for each of the individual folds of the first cohort, for the endocardium and epicardium, respectively. For the grid-search the following parameter sets are optimized: number of trees T and maximal tree depth D, for the endocardium and epicardium, respectively.

            Description    Mean          Observer 1    Observer 2    Observer Overlap
Cohort 1    Endo           0.84 ± 0.07   0.85 ± 0.08   0.84 ± 0.07   0.95 ± 0.02
            Epi            0.85 ± 0.07   0.86 ± 0.07   0.84 ± 0.07   0.92 ± 0.01

Table 4.8: Quantitative results of the learning based 3-D LV segmentation using
the DC with smoothing of the gold standard annotations. The results are shown
separately for the endocardial (Endo) and epicardial (Epi) contour.


Figure 4.28: Individual Dice coefficient for 3-D learning based segmentation for
each of the 9 data sets of the first cohort for the endocard and epicard, respectively.

learning based segmentation. The first cohort is chosen, as this acquisition protocol
is the advanced imaging protocol with an isotropic resolution. Furthermore, for the
first cohort fat suppression is applied to remove the enhancements of the pericardial
fat, which could lead to inaccurate segmentation results especially in the apex.
For the evaluation of the first cohort, a leave-one-out cross-validation is performed
for the nine subjects. For the evaluation of the hyper-parameters, a grid-search with
the following parameter sets is evaluated: number of trees T ∈ {50, 60, 70, 80} and
maximal tree depth D ∈ {10, 15, 20, 25, 30}.
The optimal hyper-parameters of the random forest of the endocardium and epi-
cardium are summarized in Table 4.7 for each of the folds.
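For one fold, such a grid search could look as follows (Python, assuming scikit-learn); the feature matrices contain the 16 steerable features per boundary candidate, and the train/validation split as well as the scoring are simplified assumptions made for illustration:

    from itertools import product
    from sklearn.ensemble import RandomForestClassifier

    def grid_search_rf(x_train, y_train, x_val, y_val,
                       trees=(50, 60, 70, 80), depths=(10, 15, 20, 25, 30)):
        # exhaustive search over the number of trees T and the maximal tree depth D
        best_score, best_params = -1.0, None
        for T, D in product(trees, depths):
            rf = RandomForestClassifier(n_estimators=T, max_depth=D,
                                        n_jobs=-1, random_state=0)
            rf.fit(x_train, y_train)
            score = rf.score(x_val, y_val)    # boundary vs. non-boundary accuracy
            if score > best_score:
                best_score, best_params = score, (T, D)
        return best_params, best_score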
The learning based segmentation of the left ventricle results in a Dice coefficient
of 0.84 ± 0.07 for the endocardium. The best segmentation results in a DC of 0.91
and the worst in a DC of 0.67. For the epicardium, an overall DC of 0.85 ± 0.07 is
achieved. The best segmentation of the epicardium yields a DC of 0.90 and the worst
a DC of 0.65. The segmentation results for the individual observers are summarized
in Table 4.8. In Figure 4.28 the Dice coefficients for the individual data sets are
represented for the endocard and epicard, respectively.
In addition, also the mean surface distance is evaluated. Using the MSD, the
endocard has a mean distance of 2.96 mm ± 1.25 mm, with a minimum of 1.66 mm and
a maximum of 5.74 mm. The epicard has a mean distance of 3.25 mm ± 1.11 mm, with
a minimum distance of 1.95 mm and a maximum distance of 5.76 mm. In Table 4.9
it is differentiated between the two observers.

            Description    Mean          Observer 1    Observer 2    Observer Overlap
Cohort 1    Endo           2.96 ± 1.25   3.04 ± 1.40   2.87 ± 1.17   0.82 ± 0.33
            Epi            3.25 ± 1.11   3.15 ± 1.05   3.35 ± 1.22   1.77 ± 0.24

Table 4.9: Quantitative results of learning based 3-D LV segmentation, using the
MSD with smoothing of the gold standard annotation. The results are shown sepa-
rately for the endocardial (Endo) and epicardial (Epi) contour in mm.


(a) Endocardial feature importance (b) Epicardial feature importance

Figure 4.29: Feature importance of the random forest classifier for the 3-D
LGE-MRI classification. (a) Features importance for the random forest classifier
trained for the endocardial boundary detection. (b) Feature importance for the ran-
dom forest classifier trained for the epicardial boundary detection. It can be seen,
that the feature importance correlates for both trained random forest classifiers.

Furthermore, the feature importance for the endocard and epicard is evaluated,
see Figure 4.29 (a) and (b), respectively. It can be seen that the order of the features
with respect to their importance is identical for the endocard and epicard. In general,
the gradient features are more important compared to the intensity features. This
can be attributed to the fact of the non-homogeneous intensity distribution of the
LGE-MRI sequences in case of myocardial scarring.

Comparison between Filter and Learning Based Results

In this section, the results of the filter based approach are compared to the learning
based approach for the first cohort. In Figure 4.30 the filter based results vs. the
learning based segmentation are visualized for the endocardium and epicardium, re-
spectively.
Furthermore, the correlation between the filter based and learning based seg-
mentation is investigated using a scatter plot, as depicted in Figure 4.31 (a) for the
endocard and in Figure 4.31 (b) for the epicard. The Pearson correlation between
the endocardium segmentation results of the two methods is 0.86, and 0.84 for the epicard. Therefore, there is a good correlation between the two segmentation
methods.


Figure 4.30: (a) Comparison of the Dice coefficients between the filter based and
the learning based segmentation for the endocardium, where the blue line represents
the mean Dice coefficient. (b) Comparison of the Dice coefficients between the filter
based and the learning based segmentation for the epicardium, where the blue line
represents the mean Dice coefficient. (c) Individual DC of the endocard for the filter
based approach compared to the learning based approach. (d) Individual DC of the
epicard for the filter based approach compared to the learning based approach.


(a) Endocard (b) Epicard

Figure 4.31: Scatter plot of the Dice coefficient of the filter based segmentation
and the learning based segmentation. (a) The Pearson correlation for the endocard
results in 0.86. (b) The Pearson correlation for the epicard results in 0.84.

The qualitative results are compared in Figure 4.32 for sequence number one of the
first cohort. The first row shows the pseudo SA slices from basal to apical direction
without any contours. The second row shows the gold standard annotation from one
clinical expert. The third row delineates the final filter based segmentation result.
The fourth row depicts the results of the learning based segmentation.

Semi-Automatic Segmentation Results

As the smoothing has no big influence on the Dice coefficient, as shown in Table 4.6,
only the smoothed gold standard annotations are considered for the further evalua-
tion of the semi-automatic segmentation based approach. As for the learning based
approach, only the first cohort is used for the evaluation of the semi-automatic LV
segmentation. The 2-D ground truth annotations were used to assess the 2-D seg-
mentation and the complete 3-D ground truth for the 3-D interpolation scheme.

Smart Brush Evaluation


The main problem with evaluating the smart brush is that it inherently involves
human interaction. Furthermore, there are many parameters that affect the result
of the 2-D segmentation, such as the size of the brush or the initialization step.
Therefore, objective testing without human interaction is difficult. To address this,
we mimicked user interactions such as slice selection, mouse movement, brush size,
etc. Iteratively, a 2-D slice is selected and one patch of the ground truth annotation is
used for the initialization of the brush. The evaluation of the smart brush is performed
on a different patch by computing the Dice coefficient per patch. As it is difficult to
test all the parameters, we evaluate the performance of the proposed method with a
fixed brush size and constant morphological operations such as opening and closing.
For each data set, 5 slices per orientation and 5 different positions for each slice of the DICOM volume are evaluated, which leads to 75 patches per data set.
The results of the 2-D evaluation of our smart brush are depicted in Figure 4.33
for the first cohort. For most patients, an average Dice coefficient of 0.87 is achieved.
For the 2-D evaluation, the outliers occur mainly because of two reasons. First,
when there is no boundary or change in the intensity within the region of interest,
in contrast to background or an undesired object, i. e., two different regions have
the same intensities. For example, in heart segmentations the left ventricle and the left atrium have the same intensity and only experts can distinguish between these two objects. In this case, the fully automatic evaluation of the smart
brush fails as it considers both objects as a single one, see Figure 4.34 for an example.
In such cases, the accuracy of the smart brush decreases, as it operates based
on intensity thresholding. The second cause of a low Dice coefficient is the hole
filling applied to the patch. In real use of the smart brush, the user fills the holes at
the end of the segmentation. In the automatic evaluation, the filling is performed
each time after a patch is segmented; as it relies on morphological operations, e. g.,
opening and closing, it expands the region of interest. Because the evaluation is
performed patch-wise, the hole filling is applied to a single patch each time, and in
the end the ROI is larger than the ground truth.

(a) Pseudo SA slices from basal to apical direction

(b) Smoothed gold standard annotation

(c) Filter based segmentation result

(d) Learning based segmentation result

Figure 4.32: Comparison of the segmentation result of the filter based vs. the
learning based approach for sequence number one of the first cohort. The first row
shows the pseudo SA slices from basal to apical direction without any contours.
The second row shows the gold standard annotation from the physician, where the
endocardial contour is orange and the epicardial contour is green. The third row
delineates the final filter based segmentation result, where the endocardial contour
is red and the epicardial contour is yellow. The fourth row depicts the results of the
learning based segmentation, where the endocardial contour is visualized in red and
the epicardial contour in yellow.


Figure 4.33: The evaluation of the 2-D segmentation result using the smart brush
for the first cohort.

3-D Interpolation Evaluation


For the evaluation of the 3-D interpolation, the same data sets are used as for the
smart brush evaluation. The Dice coefficient is used to evaluate the quantitative score
of overlap of the 3-D interpolation compared to the gold standard segmentation. The
accuracy of the 2-D segmentations, the number of segmented slices, and the distri-
bution of these slices are all criteria which can directly change the result of the 3-D
interpolation. Therefore, the evaluation is done based on the ground truth segmen-
tation of the 3-D volume, where individual slices are extracted to initialize the 3-D
interpolation. For each data set, the evaluation is performed with a different number
of segmented slices per orientation. We evaluate 1, 3, and 5 slices per orientation,
which means to have a total number of 3, 9, and 15 segmented slices, respectively.
The slices are selected randomly; however, for the first three slices the center slice of
each orientation is chosen. The same method of control point extraction is used for
both methods.
The semi-automatic left ventricle segmentation using our adapted-Hermite radial
basis function (A-HRBF) interpolation achieved an average Dice coefficient for the
endocard of 0.90 ± 0.02, 0.94 ± 0.01, and 0.95 ± 0.01 for 1, 3, and 5 slices per
orientation, respectively. For the epicard, an average Dice coefficient of 0.90 ± 0.02,
0.94 ± 0.02, and 0.95 ± 0.01 for 1, 3, and 5 slices per orientation was achieved.
The results for each data set of the 3-D A-HRBF interpolation are depicted in Fig-
ure 4.35 (a) for the endocard and Figure 4.35 (b) for the epicard, respectively. It can
be seen that by increasing the number of slices the Dice coefficient increases slightly.
As previously, also the mean surface distance is evaluated. Using the MSD, the
endocard has a mean distance of 1.85 mm ± 0.36 mm, 1.06 mm ± 0.22 mm, and
0.90 mm ± 0.17 mm for 1, 3, and 5 slices per orientation, respectively. The epicard

(a) Ground truth (b) Extracted patch

(c) Initialization smart brush (d) Result smart brush

Figure 4.34: Smart brush outlier evaluation. (a) The overlaid ground truth shown
in red and the smart brush patch shown as a yellow rectangle. (b) The extracted
patch from the smart brush. (c) The pre-segmented mask, obtained by eroding the
extracted patch. (d) The segmentation result using the smart brush, which differs
from the ground truth patch due to similar intensity values.

has a MSD of 2.08 mm ± 0.40 mm, 1.39 mm ± 0.38 mm, and 1.14 mm ± 0.25 mm for
1, 3, and 5 slices per orientation, respectively.
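The mean surface distance reported above can be computed, for instance, as the symmetric average of nearest-neighbor distances between the two surface point sets. The following sketch illustrates this; it is not the exact implementation used in this work.

import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(points_a, points_b):
    # Symmetric mean surface distance between two surfaces, each given as
    # an (N, 3) array of surface points in millimeters.
    d_ab, _ = cKDTree(points_b).query(points_a)  # distances A -> B
    d_ba, _ = cKDTree(points_a).query(points_b)  # distances B -> A
    return 0.5 * (d_ab.mean() + d_ba.mean())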
The main difference between our proposed A-HRBF method and the HRBF [Ijir 13] is
that we use a combination of 2-D and 3-D gradients based on the extracted contour
of the 2-D segmentation, which makes the interpolation faster. The standard HRBF
method uses the 3-D intensity gradient for its 3-D interpolation. This can lead
to errors, especially in the case of ambiguous boundaries, such as the transition between
the left and the right ventricle or areas of myocardial scarring.
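The idea of orienting the normals from the extracted 2-D contour instead of the intensity gradient can be illustrated with the following simplified sketch, which estimates the local tangent of a closed, counter-clockwise ordered contour and rotates it by 90 degrees. It illustrates the principle only and is not the exact A-HRBF implementation.

import numpy as np

def contour_normals(contour):
    # In-plane normals for a closed 2-D contour given as an (N, 2) array of
    # points ordered counter-clockwise; the normal is perpendicular to the
    # local tangent and points outwards.
    tangent = np.roll(contour, -1, axis=0) - np.roll(contour, 1, axis=0)
    normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)  # rotate by -90 degrees
    length = np.linalg.norm(normal, axis=1, keepdims=True)
    return normal / np.maximum(length, 1e-12)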
Figure 4.36 depicts the qualitative results of the A-HRBF 3-D interpolation scheme
for one example data set, where the result is shown in red and the ground truth in
orange.
In contrast to previous implicit methods for 3-D interpolation [Ijir 13], this method
can not only be used for high-contrast images, but also for images with a high noise level
or other confounding factors due to the independence of intensity information for the
3-D interpolation. The main advantage happens when there is an ambiguous bound-
ary which only an expert can recognize (e. g., between left ventricle and left atrium

(a) A-HRBF endocard (b) A-HRBF epicard

Figure 4.35: The 3-D interpolation evaluation results: (a) The A-HRBF result for
the endocard with an average Dice coefficient of 0.90, 0.94, and 0.95 for 1, 3, and 5
slices per orientation, respectively. (b) The A-HRBF result for the epicard with an
average Dice coefficient of 0.90, 0.94 and 0.95 for 1, 3, and 5 slices per orientation,
respectively.

at the left ventricular outflow tract). In this case, the normal vector computation
fails based on the previous method [Ijir 13], whereas using our method, the normal
vectors are oriented based on the contour extracted from the 2-D segmentation mask,
see Figure 4.37. This leads to a better segmentation result compared to the standard
HRBF method.

4.6 Discussion and Conclusion

The runtime is an important factor for the employment in clinical practice. The
current implementation is single threaded only, without any optimization. The whole
segmentation pipeline needs less than 5 minutes, implemented with Python. The most
time-consuming step is the initialization of the left ventricle in the whole heart scan,
as a two-stage registration is applied.
The proposed filter based method achieves a Dice coefficient of 0.84 ± 0.04 for
the endocardium and 0.80 ± 0.06 for the epicardium. The learning based method
achieves a Dice coefficient of 0.84 ± 0.07 for the endocardium, and 0.85 ± 0.07 for
the epicardium. There is no big difference between the two proposed methods, as the
post-processing is similar for both methods.
The results for the base and the apical regions are worse compared to the mid-
cavity. For the base, this can be attributed to the fact that the transition from
the LV to the outflow tract is smooth. Therefore, the slice where the segmentation
algorithm stops is not defined by the volumetric data set, compared to regular 2-D
LGE-MRI. Furthermore, the clinical experts annotated the data in the axial, sagittal,
and coronal planes, so the transition of the outflow tract is even harder to delineate.

(a) 0.85 (b) 0.94 (c) 0.93 (d) 0.91

(e) 0.87 (f ) 0.95 (g) 0.98 (h) 0.92

(i) 0.89 (j) 0.94 (k) 0.94 (l) 0.85

Figure 4.36: The ground truth annotation in orange and the result of the 3-D inter-
polation in red are shown for the endocard. The interpolation is obtained based on
one reference slice per orientation, where an overall DC of 0.92 is achieved. Each
row depicts a different orientation (sagittal, axial, and coronal) including the Dice
coefficient for each individual slice. As expected, the closer a slice is to the reference
slice, the higher the Dice coefficient.

In the apex, the endocardium gets very small. If there is a myocardial infarction
it is even harder to delineate between myocardium and blood pool, especially if the
annotation is not done in the short axis orientation. This is another reason for the
reduced DC in the apical region. Furthermore, for the data of the second cohort
no additional fat suppression is applied, therefore various enhancements in the apex
appeared. The last two columns of Figure 4.25 depict the apical region without scar,
however it can be seen that there the biggest differences occur compared to the gold
standard annotation.
Approaches as presented by Peters et al. [Pete 07] or Zhuang et al. [Zhua 10]
are not suitable for our case. First of all, the data sets differ. Furthermore, the
optimization of such models is difficult as there is no clear boundary between the
myocardial infarction and the blood pool. Therefore, no model based approach was
applied. To improve the results for the base, a cutoff parameter could be defined to
make an identical decision for the outflow tract. Furthermore, it can be seen that the

(a) Normal vectors based on intensity

(b) Normal vectors based on contour

Figure 4.37: Normal vector orientation for left ventricle segmentation with an am-
biguous boundary: (a) Control points in yellow and associated normal vectors in
blue based on intensity gradients for the HRBF [Ijir 13] method. (b) Control points
in yellow and associated normal vectors in blue based on the drawn contour in red
for our proposed A-HRBF method.

results for the epicardium segmentation of cohort two is worse compared to cohort
one. This can be explained by the insufficient fat suppression of the MRI data.
For the learning based segmentation, a graph cut based approach can be applied,
instead of dynamic programming to achieve a global cut for all the slices and not
only for each individual slice. This could lead to more consistent results throughout
the slices.
A comparison across different studies is difficult to perform, as data sets differ.
Also, most of the work reported in literature was performed using 2-D LGE-MRI.
Nevertheless, the proposed algorithm achieves similar or better results, in particular
when compared with the results by Albà et al. [Alba 14]. They reported a Dice
coefficient of 0.81 ± 0.05, solely using 2-D LGE-MRI. In the following the advantages
of the presented approach compared to the reported methods are identified.
First, Dikici et al. [Diki 04] and Wei et al. [Wei 11] only consider the 2-D slices
without taking the longitudinal axis into account. This can lead to inter-slice shifts
and to discontinuous 3-D shapes. In comparison, in this work the center, radius, and
shape of the previously found slices are considered, which allows for an inter-slice
smoothness.

Second, Ciofolo et al. [Ciof 08] use a 3-D deformable mesh for the LV segmentation,
where the meshes are only attracted to features in the SA slices. As no features in
the other slices are considered, this approach can lead to inter-slice shifts.
Wei et al. [Wei 13], who also deform 3-D meshes, work in a 3-D framework and
consider the LA view of the image. This approach adds more features for the deformation
and results in a smoother shape. Our method, while computationally simple, uses the
pseudo SA slices and considers the propagation to the succeeding slices to deal with
potential misalignment and avoid segmentation errors, without the need to segment
any long axis or CINE images.
Although the gold standard annotation is obtained from a physician and a clini-
cal expert, it still involves inaccuracies regarding the manual delineation of the my-
ocardium. To some extent, this is due to a lack of good annotation tools. This issue is
also confirmed by the larger inter-observer variability. We have tried to compensate
for this by an additional smoothing of the gold standard annotations. The effect of the
smoothing on the Dice coefficient is rather minimal, as shown in Table 4.6.
The Dice coefficient changes by only about 0.04 and the MSD is not notably affected
(less than 1 mm). However, the visual impression is improved. It has to be noted
that the smoothing is applied only in the pseudo short-axis slices and therefore does not con-
sider the 3-D shape of the ventricle. Hence, this can lead to some stepping artifacts.
Furthermore, we could not compare our segmentation results to previous methods, as
stated in Section 4.2, since none is available for 3-D LGE-MRI.
For the precise scar quantification, a prerequisite is the accurate and reliable seg-
mentation of the left ventricle. However, the scar quantification heavily depends on
the technique applied [Kari 16, Rajc 14]. This is investigated in Chapter 5. Having
the segmentation of the scar, the myocardial infarction itself can be analyzed fur-
ther. For example, the infarct size and mass can be estimated, as these parameters
are good predictors for the success of a therapy. In addition to that, the type of
scarring is of interest. In clinical practice, three types of fibrosis are differentiated:
endocardial, epicardial, and transmural scar. Having the segmentation of the en-
docardium and epicardium, the transmurality of the scar can be classified precisely
[Reim 16, Reim 17a]. The different scar visualization methods are detailed in Chap-
ter 6.
The benefit of the filter based and the learning based method is the independence from
any user input. The algorithms require only the 3-D LGE-MRI volume. This results
in a more robust segmentation. However, a graphical user interface is provided in
case the user is not satisfied with the segmentation result, so that the contours can be
edited manually.
For the semi-automatic left ventricle segmentation, our experiments showed that
one slice per orientation is sufficient to obtain a good segmentation result. Furthermore,
in order to achieve more accurate interpolation results, the user should segment those
slices which have the maximum mismatch with the actual ground truth. In practice, for
the 3-D interpolation, the user selects slices which are a good representation of the
complete shape. Hence, the actual result of the interpolation is even better than the
evaluation suggests.
In contrast to previous implicit methods for 3-D interpolation [Ijir 13], this method
can not only be used for high-contrast images, but also for images with high noise level

or other confounding factors due to the independence of intensity information for the
3-D interpolation. The main advantage happens when there is an ambiguous bound-
ary which only an expert can recognize (e. g., between left ventricle and left atrium at
the left ventricular outflow tract). In this case, the normal vector computation fails
based on the previous method [Ijir 13], whereas using our method, the normal vectors
are oriented based on the contour extracted from the 2-D segmentation mask.
The purpose of this study was to provide an accurate and stable left ventricle
myocardial segmentation method for 3-D LGE-MRI sequences. Segmentation of the
LV endo- and epicardium has been studied in literature, but only a few methods
focused solely on LGE-MRI data. None of them considers 3-D LGE-MRI, to the best
of our knowledge. The presented work solely uses 3-D LGE-MRI for the segmen-
tation, unlike most related work which makes use of CINE MRI for the LGE-MRI
segmentation.
In the course of this work, two automatic and one semi-automatic segmentation
method for the left ventricular endo- and epicardium have been presented that provide
accurate and consistent results for 3-D LGE-MRI. The automatic method achieved
an overall Dice coefficient of 0.84 for the endocard and 0.80 for the epicard using the
simple filter based approach. Using the average surface distance, the endocard had a
mean error of 2.75 mm and the epicard had a mean error of 4.29 mm. A clear benefit
of the presented methods is the independence from an anatomical scan and from user
interaction.
Considering the semi-automatic segmentation, the method should be extended to
use arbitrary orientations of the 2-D slices, and not only axial, sagittal, and coronal
image slices for the 3-D interpolation. Having the possibility to annotate arbitrary
orientations, the 2-D annotations can be better adopted to the 3-D segmentation
problem. For the left ventricle for example, one would annotate the short axis ori-
entation, and the two long axis orientations to achieve excellent 3-D interpolation
results. Therefore, the 3-D interpolation with the 2-D gradient selector has to be
adopted. In addition, also the functionality of the 2-D smart brush can be improved,
as right now only the intensity distribution and the connectivity is considered. How-
ever, for medical images also the texture can play an important role. Incorporating
texture features for the classification of foreground and background could further
improve the 2-D segmentation result.
The benefit of the method is that the user can correct the segmentation result
easily by segmenting an additional slice with the maximum mismatch. Furthermore,
no prior knowledge is involved which leads to the ability to generate any arbitrary
segmentation of any 3-D data set, irrespectively of image modality, displayed organ,
or clinical application.
PART III

Scar Segmentation and Visualization

CHAPTER 5

Scar Segmentation in LGE-MRI


5.1 Motivation
5.2 Related Work
5.3 Scar Quantification
5.4 Evaluation and Results
5.5 Discussion and Conclusion

In the previous two chapters, segmentation methods for the myocardium of the left
ventricle are presented for 2-D and 3-D LGE-MRI. In this chapter, the scar quantifi-
cation of LGE-MRI images is described. In Section 5.1, a short motivation is given.
Related literature is reviewed in Section 5.2. In Section 5.3, three different scar
quantification methods are outlined, namely, the x-standard deviation (x-SD), the
full-width-at-half-max (FWHM), and a texture based approach. The evaluation and
results of the different approaches are presented in Section 5.4. In the last section,
the results are discussed and a conclusion is drawn.
Parts of this chapter have been published previously in three conference publica-
tions [Kurz 15, Kurz 16a, Kurz 18b] and one journal article [Kurz 17f].

[Kurz 15] T. Kurzendorfer, A. Brost, C. Forman, M. Schmidt, C. Tillmanns, and J. Hornegger. "Semi-Automatic Segmentation and Scar Quantification of the Left Ventricle in 3-D Late Gadolinium Enhanced MRI". In: ESMRMB, Ed., 32nd Annual Scientific Meeting of the ESMRMB, pp. 318–319, October 2015.

[Kurz 16a] T. Kurzendorfer, C. Forman, M. Schmidt, C. Tillmanns, A. Maier, and A. Brost. "Fully Automatic Segmentation and Scar Quantification of the Left Ventricle in 3-D Late Gadolinium Enhanced MRI". In: M. C. Weiss, Ed., Book of Abstracts, October 2016.

[Kurz 18b] T. Kurzendorfer, K. Breininger, S. Steidl, A. Brost, C. Forman, and A. Maier. "Myocardial Scar Segmentation in LGE-MRI using Fractal Analysis and Random Forest Classification". In: 2018 24th International Conference on Pattern Recognition (ICPR), Aug. 2018.


[Kurz 17f] T. Kurzendorfer, C. Forman, M. Schmidt, C. Tillmanns, A. Maier, and A. Brost. "Fully Automatic Segmentation of the Left Ventricular Anatomy in 3-D LGE-MRI". Journal of Computerized Medical Imaging and Graphics, Vol. 59, pp. 13–27, July 2017.

5.1 Motivation
As previously mentioned, LGE-MRI is the clinical gold standard to non-invasively
visualize the viability of the myocardium. The contrast agent accumulates in the
damaged cells because of a delayed wash-in and wash-out in regions with ruptured
cell membranes, as the extracellular space is increased there. Furthermore, gadolinium
can diffuse into ruptured cell membranes [Dolt 13]. Both of these effects lead to an
increase in the gadolinium concentration and are therefore responsible for the hyper-
enhanced regions.
The quantification of the scar is very important for diagnosis, treatment plan-
ning, and guidance. Recent studies have shown that the knowledge about infarct
size, shape, and location can be very helpful during the ablation of ventricular tachy-
cardia [Estn 11, Andr 11]. In clinical routine, the scar is often segmented manually,
where hyper-enhanced tissue is selected. However, manual segmentation is prone to
inter- and intra-observer variability and very time-consuming, especially for the 3-D
LGE-MRI acquisitions [Mirs 17]. Furthermore, there are also hyper-enhanced regions
at the location of the right ventricle insertion point due to the partial volume ef-
fect, especially at the left ventricular outflow tract, as well as hyper-enhancements due to
epicardial and pericardial fat [Kari 16].

5.2 Related Work


In the recent years, many approaches have been developed to segment the myocardial
scar in LGE-MRI [Amad 04, Henn 08, Pop 13, Lu 12, Kari 16, Larr 17]. The segmen-
tation approaches can be mainly divided into three categories: threshold based, clas-
sification methods, and a combination of both methods. The most common approach
is the x-fold standard deviation, where the mean µ and the standard deviation σ of
a healthy myocardium region are calculated [Flet 11]. The region of healthy my-
ocardium has to be manually specified by the user. Afterwards, the scar threshold
θx is estimated by
θx = µ + xσ , (5.1)

where the multiplication factor x for the standard deviation has to be defined by the
user, with x ∈ {2, 3, ..., 6}.
The second common approach is the full-width-at-half-maximum approach, where
half of the maximum intensity within a user-specified hyper-enhanced region is se-
lected as the scar threshold [Amad 04]

θFWHM = Imax / 2 . (5.2)


5.3 Scar Quantification 113

However, both of these approaches require user input and are therefore still prone
to inter- and intra-observer variability. Hence, a fully automatic scar quantification
is desirable to achieve comparable results.
Tao et al. [Tao 10] combine intensity and spatial information for the scar segmen-
tation. First, an automatic thresholding is applied using Otsu’s method [Otsu 79],
where the intensities of the blood pool and the myocardium are considered. It is
expected that the histogram has two modes, one for the blood pool and the hyper-
enhanced tissue and one for the healthy myocardium. The blood pool is included in
the histogram as Otsu's threshold produces more stable results if there is only
a small myocardial infarction. Afterwards, the connectivity of the pixels is investi-
gated; if fewer than three voxels are connected, these pixels are no longer considered
as potential scar tissue. In addition, connected regions that are very thin in the long
axis direction are excluded.
Pop et al. [Pop 13] use a Gaussian mixture model in combination with an expec-
tation maximization to classify the intensity histogram of the myocardium into three
classes, healthy myocardium, border zone, and scar tissue.
Hennemuth et al. [Henn 08] combine a histogram analysis with a constrained wa-
tershed segmentation. Hence, some prior constraints are considered: i) The inten-
sity values of the LGE-MRI image are distributed according to the Rice distribu-
tion [Henn 08]. ii) The scar tissue is most likely sub-endocardial. iii) Relevant scar
tissue is compact and has a crescent-shaped area. iv) Dark regions surrounded by
scar tissue are no-reflow areas and are therefore included in the scar tissue.
Lu et al. [Lu 12] propose a graph cut based approach to segment the scar tissue in
LGE-MRI. The two terminal nodes are defined as healthy myocardium and scarred
myocardium, i. e., background and foreground of the image. The weights of the edges
are obtained from a Gaussian mixture model. For the delineation of the gray zone
and infarct zone, the FWHM algorithm is applied to the segmented infarct area.
Larrazo et al. [Larr 17] propose a classification based approach to segment the
myocardial scar. To this end, texture features such as the run-length matrix, gray-level
co-occurrence matrix, and autoregressive model features are extracted. In a feature selection
step, the 17 most discriminant features are selected and a support vector machine is
trained. To obtain the final segmentation result after the classification, morphological
opening is applied.
In general, a fully automatic scar segmentation is desirable, as the results are then
reproducible and not prone to inter- and intra-observer variability.

5.3 Scar Quantification


In this thesis, three different methods are proposed for a fully automatic scar quan-
tification. First, the state-of-the-art methods, x-fold standard deviation and FWHM,
are implemented in a fully automatic manner, thus no user selection of the region
of interest is required, see Section 5.3.1 and Section 5.3.2. Second, a new machine
learning approach is introduced which uses texture as well as local intensity distribu-
tions as features (Section 5.3.3) [Kurz 18b]. For the classification, a random forest is
trained.


Figure 5.1: Intensity histogram of the myocardium and blood pool. The blue line
indicates the Otsu’s threshold to separate the two modes. The green line indicates
the mean value of the lower mode, which is used to calculate the x-SD.

5.3.1 x-Fold Standard Deviation Scar Quantification


One of the most widely used scar segmentation algorithms is the x-fold standard deviation
approach. Here, the intensities of the myocardium are thresholded at a fixed number
of standard deviations above the mean intensity of a healthy myocardium region. This
region has to be specified by the user, as well as the number of standard deviations.
As the selection of the healthy myocardium is user dependent, a fully automatic
selection is desirable.
Hence, for reference a fully automatic approach is implemented, where only the
number of folds has to be defined by the user. First, the intensity histogram of the my-
ocardium is investigated. If a large myocardial scar is present, it is expected
that the histogram has two modes: one mode for the healthy myocardium and one
mode for the scarred myocardium. However, if there is only a small myocardial scar,
the delineation of the two modes can be difficult. Thus, both the myocardium and
the blood pool are considered for the intensity histogram. This makes the distinction be-
tween the two modes easier, as the blood pool has a similar intensity distribution to
myocardial scarring. To find the optimal threshold between the two modes, Otsu's
thresholding is applied [Otsu 79]. In the next step, the mean value µ and the standard
deviation σ of the lower mode is calculated, where the lower mode corresponds to
healthy myocardium. In Figure 5.1, an intensity histogram of the myocardium and
the blood pool is depicted. The blue line defines Otsu’s threshold to separate the two
modes into healthy and scarred myocardium, where the lower mode corresponds to
healthy myocardium. The green line depicts the mean value µ of the lower mode. In
the next step, the scar threshold θxSD is calculated using Equation (5.1), where x is
defined by the user.
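A minimal sketch of this fully automatic x-SD thresholding is given below, using Otsu's method from scikit-image; the image and masks are assumed to be NumPy arrays, and the function is a simplified illustration of the procedure described above.

import numpy as np
from skimage.filters import threshold_otsu

def xsd_scar_threshold(image, myo_mask, blood_mask, x=4):
    # Otsu's threshold on the joint myocardium and blood pool histogram
    # separates the lower mode (healthy myocardium) from the upper mode
    # (blood pool and scar); the scar threshold is mean + x * std of the
    # lower mode, applied inside the myocardium only.
    samples = image[(myo_mask > 0) | (blood_mask > 0)]
    otsu = threshold_otsu(samples)
    healthy = samples[samples < otsu]
    theta = healthy.mean() + x * healthy.std()
    return theta, (myo_mask > 0) & (image > theta)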
Figure 5.2 depicts the results of different values for x ∈ {2, 3, ..., 6}. The first
column depicts the intensity histogram of the myocardium only and the red dashed

(a) 2-fold standard deviation
(b) 3-fold standard deviation
(c) 4-fold standard deviation
(d) 5-fold standard deviation
(e) 6-fold standard deviation

Figure 5.2: Segmentation result using the x-SD method for different values of x.
The first column, depicts the intensity histogram of the myocardium only and the
red dashed line indicates the x-SD threshold θxSD . The second column, depicts a
single slice, where the scar mask is overlaid in red. In the last column, the scar mask
is rendered in 3-D. It can be seen that with an increasing x value, the scar mask
decreases in size. The best segmentation result is achieved with x = 4.

(a) FWHM threshold (b) 2-D slice (c) 3-D scar mask

Figure 5.3: Segmentation result using the FWHM method. (a) Depicts the intensity
histogram of the myocardium only and the orange dashed line indicates the FWHM
threshold θFWHM . (b) Illustrates a single slice, where the scar mask is overlaid in
orange. (c) The scar mask is rendered in 3-D.

line indicates the x-SD threshold θxSD . The second column illustrates a single slice,
where the scar mask is overlaid in red. In the last column, the scar mask is rendered
in 3-D. It can be seen that with an increasing x value, the scar mask decreases in
size.

5.3.2 Full-Width-at-Half-Maximum Scar Quantification


The second very common threshold based method for scar segmentation is the full-
width-at-half-maximum approach. Here, the user has to select a hyper-enhanced
region in the myocardium. In this region, the scar threshold θFWHM is defined as
half of the maximum intensity, see Equation (5.2). However, the standard FWHM
depends on the user selection. Therefore, in this thesis a fully automatic approach is
implemented. First, the intensity histogram of the myocardium is analyzed and the
maximum intensity of the histogram is taken. The threshold θFWHM is then defined
as half of the maximum intensity value. Figure 5.3 (a) shows the intensity histogram
of the myocardium, where the orange dashed line indicates the FWHM threshold
θFWHM . In Figure 5.3 (b), a single slice with the overlaid scar mask in orange is
visualized. Figure 5.3 (c) shows the 3-D rendering of the scar mask.

5.3.3 Classification Based Scar Quantification


In this section, a new machine learning method for the quantification of myocardial
scar is introduced to overcome the limitations of solely intensity based methods. The
scar quantification is divided into four major steps. First, some pre-processing is
applied to the input image. Second, different features based on the texture and
intensity of the image are extracted. In the third step, a random forest classifier is
trained for the feature classification. In the last step, the final scar mask is obtained
by applying some post-processing. An overview of the scar segmentation pipeline is
given in Figure 5.4.
In the pre-processing step, the LGE-MRI images have to be normalized. The
data is normalized in a range between 0 and 255. In the next step, an additional

Classification phase: Image → Pre-processing → Feature Extraction → Scar Classification → Scar Quantification; Learning phase: Training

Figure 5.4: Overview of the scar quantification pipeline, adapted according


to [Niem 83]. First, a pre-processing step is performed, afterwards the features can be
extracted. In the third step, the pixels are classified into two classes, scar and no scar.
The last step is the scar quantification, where some post-processing is performed on
the classification result.

normalization step of the myocardium is added, where the intensities within the
myocardium are also distributed in a range between 0 and 255.
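This two-stage normalization can be written as a simple min-max rescaling, first of the whole volume and then of the voxels inside the myocardium mask; the following sketch assumes NumPy arrays as input.

import numpy as np

def rescale(values, low=0.0, high=255.0):
    # Min-max rescaling of an array to the range [low, high].
    v_min, v_max = values.min(), values.max()
    return (values - v_min) / max(v_max - v_min, 1e-12) * (high - low) + low

def normalize_lge(volume, myo_mask):
    # Normalize the whole LGE-MRI volume to [0, 255] and additionally
    # rescale the intensities inside the myocardium to the same range.
    normalized = rescale(volume.astype(np.float64))
    normalized[myo_mask > 0] = rescale(normalized[myo_mask > 0])
    return normalized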

Feature Extraction

In the next step, the features are extracted from the pre-processed image. For the
description of the scar within the myocardium, texture features are used. A fea-
ture vector f ∈ RD is created for every pixel within the myocardium, therefore, a
sliding-window approach is used. For each pixel also the surrounding neighborhood
is considered, with a pre-defined patch size of 7 × 7 pixels. For the feature extraction,
the segmentation-based fractal texture analysis (SFTA) algorithm is used [Cost 12].
The extraction algorithm is based on a set of binary images from which the fractal
dimensions of the regions’ borders are computed to describe the segmented texture
patterns. The fractal dimension provides an index of complexity comparing the de-
tails in a pattern by varying the scale where it is measured. The feature extraction
using the SFTA algorithm can be decomposed into two steps [Cost 12]. First, the
gray scale image is divided into a set of binary images. For the division, a technique
called two-threshold binary decomposition (TTBD) is applied. Second, for each of
the resulting binary images, the fractal dimension from the region boundaries is com-
puted. Furthermore, the mean gray level and the area of the remaining objects is
computed. These two steps are explained in more detail in the next two sections.

Two-Threshold Binary Decomposition The input of the TTBD algorithm is


a gray-scale image I. In the first step, a set of threshold values T = {θ1 , θ2 , ..., θNt }
with θ1 < θ2 < ... < θNt is computed, where θi ∈ R is an individual threshold value. The number of
thresholds Nt ∈ N is defined by the user. The individual threshold values are obtained
by applying the multi-level Otsu algorithm [Liao 01]. The aim of the multi-level Otsu

Figure 5.5: Input image for the SFTA algorithm.

algorithm is to find the thresholds that maximize the between-class variance. After the
first threshold is found, the Otsu algorithm is applied recursively to each sub-image,
until the desired number of thresholds is reached. After the Nt threshold values
are defined, the input image I is decomposed into a set of binary images. Therefore,
pairs of threshold values θi and θi+1 are selected from the set of thresholds T , where
θi < θi+1 . In the next step, the pair of thresholds is applied to the input image I to
achieve a two-threshold segmentation
Iθ (p) = { 1 if θl < I(p) ≤ θu ; 0 otherwise } , (5.3)

where θl and θu denote the lower and upper threshold, respectively. A set of binary
images is obtained by applying all possible pairs of thresholds to the input image I.
These pairs consist of consecutive thresholds from T and of each threshold θi paired
with Imax , where Imax is the maximum intensity of the image I. Hence,
the number of binary images is 2Nt . The image shown in Figure 5.5 is decomposed
using the TTBD algorithm and visualized in Figure 5.6, where Nt = 8.
One important property of the TTBD is that the set of binary images obtained
from the algorithm is a super-set of all binary images that would be obtained by apply-
ing a one threshold segmentation using the thresholds computed with the multi-level
Otsu algorithm [Cost 12]. The aim of using threshold pairs for the image segmen-
tation is to segment structures that would not have been segmented using regular
thresholding. This is the case for gray values that lie in the middle ranges of the
input image.
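A compact sketch of the TTBD step is given below. It approximates the recursive multi-level Otsu step with the threshold_multiotsu function of scikit-image, which implements the fast multi-level thresholding of Liao et al. [Liao 01] and returns Nt thresholds when called with Nt + 1 classes.

import numpy as np
from skimage.filters import threshold_multiotsu

def ttbd(image, n_thresholds=8):
    # Two-threshold binary decomposition: 2 * n_thresholds binary images,
    # one set from consecutive threshold pairs and one set from each
    # threshold paired with the maximum intensity of the image.
    thresholds = threshold_multiotsu(image, classes=n_thresholds + 1)
    i_max = image.max()
    binaries = []
    bounds = list(thresholds) + [i_max]
    for lower, upper in zip(bounds[:-1], bounds[1:]):  # (theta_i, theta_i+1)
        binaries.append((image > lower) & (image <= upper))
    for lower in thresholds:                           # (theta_i, I_max)
        binaries.append((image > lower) & (image <= i_max))
    return binaries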

SFTA Extraction Algorithm In the next step, the SFTA features are calculated
using the results of the two threshold binary decomposition. The SFTA feature
vector takes into account the size of the binary images, the mean gray value, and
the boundaries fractal dimension [Cost 12]. The fractal dimension is used to describe
the boundary complexity of the objects’ structure in the thresholded image I θ . The
boundaries of the thresholded image I θ are denoted as the border image ∆I. The
boundary image ∆I(x, y) has the value 1 if the corresponding thresholded image
Iθ has the value 1 at position (x, y) and at least one neighboring pixel has the value 0.

12 < I(p) ≤ I max 19 < I(p) ≤ I max 33 < I(p) ≤ I max 49 < I(p) ≤ I max

63 < I(p) ≤ I max 83 < I(p) ≤ I max 102 < I(p) ≤ I max 134 < I(p) ≤ I max

12 < I(p) ≤ 19 19 < I(p) ≤ 33 33 < I(p) ≤ 49 49 < I(p) ≤ 63

63 < I(p) ≤ 83 83 < I(p) ≤ 102 102 < I(p) ≤ 134 134 < I(p) ≤ 255

Figure 5.6: Decomposition of the image shown in Figure 5.5 using the TTBD
algorithm, where Nt = 8. The pair of thresholds is given below each image.


Figure 5.7: The fractal dimension F is estimated by the slope of the regression line
fitted to the box counting values. In this case, the fractal dimension is F = 1.46.

Otherwise, the boundary image ∆I(x, y) has the value 0. The aim of this procedure
is to obtain a one-pixel-thin boundary.
In the next step, the fractal dimension F ∈ R of ∆I(x, y) is computed using the
box counting algorithm, as binary images are used [Schr 92]. The input
image ∆I is divided into a grid consisting of boxes of size ι × ι. Then, the
boxes that contain at least one pixel of the object are counted, resulting in Nι ∈ N,
which depends on the box size ι ∈ N. The box size ι is varied until enough values
are obtained to estimate a curve defined by log Nι vs. log(1/ι). Finally, the fractal
dimension F is computed by fitting a line to the curve, e. g., using least-squares
fitting. In Figure 5.7, the curve estimated from the box counting values is visualized,
with a fractal dimension of F = 1.46 for the first image of Figure 5.6. The fractal
dimension F corresponds to the slope of the line.
The mean gray value Ī and the area A ∈ R≥0 of each binary image are extracted
without significantly increasing the computation time. Hence, the feature vector
for each binary image Iθ consists of three values: the fractal dimension F , the mean
gray value Ī, and the size of the binary image A. Thus, the final SFTA feature vector
has a length of three times the number of binary images obtained from the TTBD. In the
case of Nt = 8, 16 binary images are obtained from the TTBD, resulting in a final
SFTA feature vector of f ∈ R48 . The SFTA feature extraction pipeline is visualized
in Figure 5.8.
In addition to the SFTA features, the mean intensity Imean of the patch and
the local intensity I of the center pixel are extracted. Therefore, the final feature vector
used for the scar classification is of size f ∈ R50 .
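Putting the previous steps together, the per-patch SFTA feature vector can be sketched as follows: for each binary image of the TTBD, the border is extracted, its fractal dimension is estimated by box counting, and the mean gray value and the area are appended; the patch mean and the center intensity complete the 50-dimensional vector. The sketch simplifies some details of the actual implementation.

import numpy as np

def borders(binary):
    # One-pixel-thin border: foreground pixels with a background 4-neighbor.
    b = np.pad(binary.astype(bool), 1, constant_values=False)
    interior = b[:-2, 1:-1] & b[2:, 1:-1] & b[1:-1, :-2] & b[1:-1, 2:]
    return binary.astype(bool) & ~interior

def box_counting_dimension(border):
    # Fractal dimension as the slope of log(N_box) over log(1 / box size).
    sizes, counts = [], []
    box = 1
    while box <= max(border.shape):
        h = int(np.ceil(border.shape[0] / box))
        w = int(np.ceil(border.shape[1] / box))
        padded = np.zeros((h * box, w * box), dtype=bool)
        padded[:border.shape[0], :border.shape[1]] = border
        n = padded.reshape(h, box, w, box).any(axis=(1, 3)).sum()
        if n > 0:
            sizes.append(box)
            counts.append(n)
        box *= 2
    if len(counts) < 2:
        return 0.0
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def sfta_features(patch, binaries):
    # Feature vector: (fractal dimension, mean gray value, area) per binary
    # image, plus the patch mean and the center pixel intensity.
    feats = []
    for binary in binaries:
        feats.append(box_counting_dimension(borders(binary)))
        feats.append(patch[binary].mean() if binary.any() else 0.0)
        feats.append(float(binary.sum()))
    center = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    return np.array(feats + [patch.mean(), float(center)])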

Feature Classification and Scar Quantification

For the classification of the scar tissue, a random forest classifier is used. The train-
ing of the random forest classifier is based on ground truth annotations from which
scar and non-scar samples are extracted. However, only pixels whose surround-
ing neighborhood lies completely inside or outside the infarcted myocardium are used for
training. Otherwise, information corresponding to both regions would be added to
the classifier.


Figure 5.8: The SFTA feature extraction pipeline. First, the input image is decom-
posed into 2Nt binary images using the TTBD algorithm. Second, the SFTA features
are extracted, consisting of the fractal dimension F , the mean gray value Ī, and the
size of the binary image A for each binary image, respectively.

The RF classifier assigns to each pixel a probability of belonging to scar or healthy
tissue. The probability image is binarized by applying a threshold. After
the binary image is obtained, a connected component analysis is performed. All
components smaller than a specified size are removed, as they are considered noise;
the smallest allowable connected component size is 30. Finally, morphological
closing (dilation followed by erosion) is applied with a ball-shaped structuring element
with a radius of two.
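The classification and post-processing stage can be sketched with scikit-learn and scikit-image as follows. The inputs features, labels, voxel_coords (the coordinates of the myocardial voxels from which the feature vectors were sampled), and the probability threshold of 0.5 are assumptions made for the sketch; the minimum component size of 30 and the closing radius of two follow the description above.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from skimage.morphology import ball, binary_closing, remove_small_objects

def train_scar_classifier(features, labels, n_trees=70, max_depth=10):
    # Random forest on the per-pixel SFTA and intensity feature vectors.
    clf = RandomForestClassifier(n_estimators=n_trees, max_depth=max_depth)
    clf.fit(features, labels)
    return clf

def classify_scar(clf, features, voxel_coords, shape,
                  probability_threshold=0.5, min_size=30, closing_radius=2):
    # Predict scar probabilities, binarize them, remove small connected
    # components, and apply morphological closing with a ball element.
    prob = clf.predict_proba(features)[:, 1]  # probability of the scar class
    scar = np.zeros(shape, dtype=bool)
    scar[tuple(voxel_coords.T)] = prob > probability_threshold
    scar = remove_small_objects(scar, min_size=min_size)
    return binary_closing(scar, ball(closing_radius))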

5.4 Evaluation and Results

In this section, the evaluation of the three different scar quantification methods is
presented. First, the data used for the evaluation is described. In the second section,
the evaluation metrics are detailed. In the last section, the results are presented for
the three proposed methods.

(a) Axial (b) Coronal (c) Sagittal

Figure 5.9: Example data set used for the evaluation of the scar quantification.
(a) Axial view of the data set, showing a small myocardial infarction (orange arrow).
(b) Coronal view and (c) sagittal view.

5.4.1 Data Description


The automatic scar quantification approaches are evaluated on 30 clinical 3-D LGE-
MRI data sets from individual patients. These data sets differ from the data in Chap-
ter 4. The data is acquired using two different clinical scanners, MAGNETOM Verio
3T and MAGNETOM Espree 1.5T (Siemens Healthcare GmbH, Erlangen, Germany).
The GRE sequence has the following parameters: TR/TE 2.76–4.02 ms/1.38–2.01 ms,
radio frequency excitation angle 13–14◦ , FOV 379–384 × 379–384, voxel size (0.66–0.85
× 0.66–0.85 × 1.5–1.7) mm3 , and a receiver bandwidth of 349–755 Hz/Px. Image re-
construction is performed using parallel imaging [Gris 02]. An example data set is
depicted in Figure 5.9, with the axial (a), coronal (b), and sagittal (c) view and small
myocardial scarring visible in the axial view.
Gold standard annotations of the myocardium are provided by a clinical expert.
The annotations are performed using MITK [Nold 13]. For the scar quantification,
the clinical expert uses the result of the FWHM method as a starting point, for
subsequent manual post-processing. Therefore, there is a bias towards the FWHM
scar quantification method.

5.4.2 Evaluation Methods


For the evaluation of the algorithms, the segmentation result is compared to the
gold standard annotation. First, the overlap of the segmentation masks is evaluated
using the Dice coefficient, see Equation (3.3). This similarity metric measures the
proportion of true positives in the segmentation.
As a second metric, the total volume error (TVE) between the gold standard
segmentation and the segmentation result is evaluated:
TVE = |VR − VG | , (5.4)
where VR is the scar volume of the segmentation result and VG is the scar volume of
the gold standard annotation.
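Both metrics can be computed directly from the binary masks, as in the following sketch; the voxel_volume factor is an assumption used to convert voxel counts into the volume measure of Equation (5.4).

import numpy as np

def dice_coefficient(seg, gt):
    # Dice coefficient between the segmentation and the gold standard.
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

def total_volume_error(seg, gt, voxel_volume=1.0):
    # Total volume error |V_R - V_G| of Equation (5.4).
    return abs(float(seg.sum()) - float(gt.sum())) * voxel_volume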
As a learning based approach is used for the scar quantification, also the hyper-
parameters of the random forest have to be optimized. Therefore, a 6-fold nested

Description FWHM 2SD 3SD 4SD 5SD 6SD RF


Mean 0.71 0.45 0.48 0.49 0.47 0.44 0.66
Std 0.21 0.21 0.22 0.23 0.26 0.28 0.17
Min 0.12 0.10 0.04 0.00 0.00 0.00 0.11
Max 0.99 0.89 0.98 0.85 0.83 0.95 0.85
(a) Dice coefficient
Description FWHM 2SD 3SD 4SD 5SD 6SD RF
Mean 0.04 0.21 0.16 0.14 0.12 0.11 0.03
Std 0.05 0.17 0.15 0.14 0.12 0.12 0.03
Min 0.00 0.01 0.00 0.00 0.00 0.00 0.00
Max 0.16 0.58 0.51 0.47 0.43 0.41 0.10
(b) Total volume error

Table 5.2: Results of the scar quantification methods using the (a) Dice coefficient
and (b) the total volume error.

cross-validation is used, i. e., five data sets are used for testing and the remaining 25
are split into the training and validation data sets for the classifier. To optimize the
hyper-parameters, a grid-search is applied with the following parameters: number of
trees T ∈ {30, 40, 50, 60, 70, 80} and maximal tree depth D ∈ {5, 10, 15, 20, 25, 30}.
The inner loop of the nested cross-validation is set to a 5-fold cross-validation, i. e., five
data sets are used for the validation and the remaining 20 data sets are used for the
training of the classifier. The optimal hyper-parameters for the random forest are
T = 70 and D = 10.
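Such a nested grid search can be set up, for example, with scikit-learn as sketched below, grouping the samples by patient so that all pixels of one data set stay in the same fold. This is a sketch of the protocol, not the exact implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, GroupKFold

def nested_grid_search(features, labels, patient_ids):
    # Outer 6-fold split over patients; inner 5-fold grid search over the
    # number of trees and the maximal tree depth.
    param_grid = {"n_estimators": [30, 40, 50, 60, 70, 80],
                  "max_depth": [5, 10, 15, 20, 25, 30]}
    outer = GroupKFold(n_splits=6)
    scores = []
    for train_idx, test_idx in outer.split(features, labels, groups=patient_ids):
        search = GridSearchCV(RandomForestClassifier(), param_grid,
                              cv=GroupKFold(n_splits=5))
        search.fit(features[train_idx], labels[train_idx],
                   groups=patient_ids[train_idx])
        scores.append(search.score(features[test_idx], labels[test_idx]))
    return np.mean(scores)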

5.4.3 Results

In this section, the results of the scar quantification are presented. The segmentation
using the learning based approach results in a Dice coefficient of 0.66 ± 0.17. The
best segmentation results in a DC of 0.85 and the worst in a DC of 0.11.
The Dice coefficient is also evaluated for the FWHM and the x-SD methods. The
results using the Dice coefficient for all the methods are summarized in Table 5.2 (a).
The worst result for the learning based scar quantification correlates with the result
for the FWHM method.
Furthermore, the DC of the different methods is visualized using a box plot, see
Figure 5.10.
In addition, the total volume error is evaluated for the learning based scar seg-
mentation. For the total volume error, a mean error of 0.04 ± 0.04 is achieved, with
a minimum error of 0.00 and a maximum error of 0.18. The TVE for all of the scar
quantification methods is summarized in Table 5.2 (b).
Furthermore, in Figure 5.11 (a) the individual Dice coefficients for the learning
based scar quantification are depicted in increasing order. The total volume error for
each data set is shown in Figure 5.11 (b) and sorted according to the Dice coefficient.


Figure 5.10: Comparison of the Dice coefficient for all segmentation approaches,
where the blue line represents the mean Dice coefficient.


(a) Dice coefficient (b) Total volume error

Figure 5.11: Segmentation results for learning based scar quantification. (a) In-
dividual Dice coefficients for learning based scar quantification in increasing order.
(b) Total volume error for each data set using the results of the learning based scar
quantification, sorted according to the Dice coefficient.

In addition, the correlation between the total volume error and the Dice coefficient
is investigated using a scatter plot. The result is depicted in Figure 5.12. The Pearson
correlation results in -0.16.
Moreover, the feature importance is investigated, as shown in Figure 5.13. The
SFTA features are the most important. However, here 48 features are compared against
two single features. Therefore, the individual features of the SFTA are also consid-


Figure 5.12: Scatter plot between DC and TVE of the learning based scar quan-
tification result, which results in a Pearson correlation of -0.16.

Figure 5.13: Feature importance for SFTA features, mean intensity I mean , and the
local intensity I where the SFTA features are most important.

ered, see Figure 5.14. There it can be seen that the most important feature is the
mean intensity I mean followed by the mean gray value of different patches.
Besides the quantitative results, also the qualitative results of the proposed learn-
ing based method are evaluated. In Figure 5.15 the segmentation result is compared
to the gold standard annotation in 3-D and 2-D.

5.5 Discussion and Conclusion


The presented methods quantify myocardial scar without any user interaction. How-
ever, a prerequisite for all methods is the accurate segmentation of the myocardium.
The presented learning based method achieves a Dice coefficient of 0.66 ± 0.17. It
outperforms the x-SD methods, which can be seen in Figure 5.10. However, it cannot
be directly compared to the FWHM method, as the gold standard annotations are
based on the results of the FWHM. Therefore, the result of the FWHM has to be
considered separately.
Furthermore, the Pearson correlation coefficient between the Dice coefficient and
the total volume error shows that there is a high variance between the overlap and the


Figure 5.14: Detailed feature importance for all features in increasing order, where
the mean intensity I mean followed by the mean gray value of different patches are the
most important features.

total volume error. Normally, one would expect a low TVE for a high Dice coefficient.
However, for data set 4, there is a low Dice coefficient but also a low total volume
error. Analyzing this data set shows that the total amount of scar is
very small. Therefore, the overlap is poor, as there is only a small amount of
scar tissue within the myocardium, which is the reason for the low DC. At the same
time, this leads to a small total volume error, as the volumes themselves are similar
and only a small portion of the myocardium contains scar.
The benefit of all presented methods is the independence from any user interaction.
The algorithms can be applied to 2-D and 3-D LGE-MRI sequences. The purpose
of this study was to provide automatic, accurate, and stable tools for scar quantifi-
cation. The proposed learning based method achieves better results when compared
to state-of-the-art methods such as the x-SD method. In the course of this work, an
automatic scar quantification approach based on texture features has been presented
that provides accurate and consistent results for LGE-MRI sequences. Our method
achieves an overall Dice coefficient of 0.64 and a total volume error of 0.04. A clear
benefit of the presented methods is the independence from user interaction.

(a) 3-D scar result (b) 3-D ground truth

(c) 2-D scar result (d) 2-D ground truth

Figure 5.15: Qualitative segmentation results for learning based scar quantification
compared to the gold standard annotation. (a) 3-D scar segmentation result for
the learning based scar segmentation. (b) 3-D gold standard annotation of the scar
tissue. (c) 2-D scar segmentation result for an individual slice. (d) 2-D gold standard
annotation for an individual slice.
CHAPTER 6

Scar Visualization
6.1 Motivation
6.2 Related Work
6.3 Scar Layer Visualization
6.4 Evaluation and Results
6.5 Discussion and Conclusion

In the previous chapters, the segmentation of the left ventricle’s myocardium and
the scar quantification is detailed for 2-D and 3-D LGE-MRI. In this chapter, ad-
vanced scar visualization methods are described for both 2-D and 3-D, as an intuitive
visualization of the scar is helpful for planning and guiding the procedure.
In the first section, a motivation is given. In Section 6.2, an overview of related
work is provided. In Section 6.3, the two different methods for visualizing the scar
in 2-D and 3-D are outlined. The evaluation and the results for both methods are
presented in Section 6.4. In the last section, the results are discussed and a conclusion
is given.
Parts of this chapter previously appeared in two conference publications [Reim 17a,
Kurz 17g].

[Reim 17a] S. Reiml, T. Kurzendorfer, D. Toth, P. Mountney, K. Rhode, A. Maier, and A. Brost. "Automatic Layer Generation for Scar Transmurality Visualization". In: Bildverarbeitung für die Medizin (BVM 2017), March 2017.

[Kurz 17g] T. Kurzendorfer, S. Reiml, A. Brost, D. Toth, M. Panayiotou, P. Mountney, S. Steidl, and A. Maier. "2-D Interactive Scar Layer Visualization". In: Image-Guided Interventions Conferences (IGIC 2017), November 2017.


(a) 2-D scar slice (b) 3-D scar mesh

Figure 6.1: Two different scar visualization methods. (a) The scar is visualized as
a purple overlay on the MRI slice. (b) The scar is shown as a 3-D surface mesh in
purple and the endocardial mesh is shown in red.

6.1 Motivation
In 2014, heart failure affected about 26 million people [Poni 14]. Cardiac resyn-
chronization therapy is one of the most successful therapies to treat patients with
advanced drug-refractory heart failure, systolic dysfunction, and ventricular dyssyn-
chrony [Moun 17]. However, the problem with CRT is that 30 % to 50 % of the
patients do not respond clinically to this therapy [Daub 12]. One of the reasons for
non-response is considered to be the suboptimal placement of the left ventricular
pacing lead. Pacing on areas of myocardial infarction has little or no effect, as the scar
tissue is not electrically conductive. Therefore, the localization and quantification of
scar tissue in the LV is crucial to increase the success rate of CRT [Reim 17a].
Two important pieces of information are the scar burden and the scar transmurality. The
scar burden indicates the percentage of scar in the myocardium. If the scar burden is
high, it is important to know whether the scar is transmural, i. e., extends from
the endocardial wall to the epicardial wall. If the scar is only endocardial, it might still
be possible to place a lead in that segment. Hence, an intuitive scar visualization is
important to distinguish easily between endocardial, midcardial, and epicardial scar.

6.2 Related Work


Currently, there are two methods for the visualization of scar after the segmentation.
The first method is to visualize the scar as a 3-D mesh, see Figure 6.1 (b). The
advantage of this approach is the 3-D anatomical visualization. The disadvantage of
this method is that there is no information about scar burden or transmurality.
The second method maps the scar information to the so-called 16 segment bull’s
eye plot (BEP), as shown in Figure 6.2 where every segment is color coded with
an individual color. Therefore, the left ventricle’s surface has to be divided into
16 segments according to the American Heart Association [Ma 12]. In the BEP the
scar location is represented in polar coordinates, where the radius is defined by the

(a) Scar distribution (b) Scar burden (c) Scar transmurality

Figure 6.2: 2-D scar visualization. (a) The 3-D scar distribution in light gray is
projected on the BEP. (b) The scar burden is color coded depending on the percentage
of each segment. (c) The scar transmurality is color coded according to the thickness
of the scar in each segment.

distance from the apex along the principal axis of the LV as a proportion of the
height of the segmented area. The angle is defined by the right ventricle direction in
a plane perpendicular to the principal axis [Moun 17]. The problem with the
scar distribution, however, is that it does not represent the scar location or thickness
within one segment of the BEP. Figure 6.2 (a) depicts the scar distribution in light
gray, where the 3-D scar mesh is projected onto the segments of the BEP.
Hence, additional analysis has to be performed to obtain the scar burden
and the transmurality in each of the segments. The scar burden is the ratio between
the myocardial scar volume and the total myocardial volume in each segment [Moun 17]. For
an intuitive representation, the segments of the BEP are color coded according to
the scar burden percentage, as depicted in Figure 6.2 (b). Furthermore, the scar
transmurality can be computed as the extent of the scar as a percentage of the myocardial
wall thickness, where the median of the transmurality is reported. For an intuitive
representation, the BEP is color coded depending on the transmurality percentage,
as visualized in Figure 6.2 (c).
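Given a label map that assigns every myocardial voxel to one of the 16 AHA segments, the scar burden per segment reduces to a ratio of voxel counts, as the following minimal sketch shows; the segment label map is an assumed input.

import numpy as np

def scar_burden_per_segment(segment_labels, scar_mask, n_segments=16):
    # Scar burden per AHA segment: scar volume divided by the myocardial
    # volume of that segment, expressed as a percentage.
    burden = {}
    for segment in range(1, n_segments + 1):
        in_segment = segment_labels == segment
        myo_voxels = in_segment.sum()
        scar_voxels = np.logical_and(in_segment, scar_mask).sum()
        burden[segment] = 100.0 * scar_voxels / myo_voxels if myo_voxels else 0.0
    return burden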
The drawback of the 2-D representations is the non-anatomical visualization of
the LV as a BEP and the missing information about the scar's location within the
myocardium, as the transmurality gives no information on whether the scar is endocardial or
epicardial. The scar can either touch the epicardium, touch the endocardium, or
be located within the myocardium without touching either [Reim 16]. Furthermore,
no 3-D guidance is possible.
However, if the scar is only endocardial, it might still be possible to place the lead
at that segment. Hence, an interactive scar location and transmurality visualization
method is proposed, where the scar is divided into different layers [Reim 17a].

6.3 Scar Layer Visualization


The scar layer generation is divided into five main steps. First, the left ventricle's
myocardium and scar have to be segmented, e. g., using the approaches described in
the previous chapters. Second, the endocardial and epicardial contours have to be

MRI Segmentation → Anatomy Delineation → Layer Computation → Scar Layer Extraction → Layer Visualization

Figure 6.3: Overview of the scar layer visualization pipeline, which is divided into
five main steps. First, the left ventricle’s myocardium and the scar are segmented.
Second, the epicardium and endocardium are delineated. Third, the layers are com-
puted. Fourth, the scar layers are extracted. In the last step, the scar layers can be
visualized in 3-D as meshes or projected onto the BEP in 2-D.

delineated from the segmentation masks. In the third step, the layers are computed,
i. e., one layer for endocardial, midcardial, and epicardial scar. Fourth, the scar layers
are extracted depending on the previously computed layer. Finally, the scar layers can
be visualized in 2-D and 3-D [Reim 17a, Kurz 17g]. An overview of the scar layer
visualization pipeline is given in Figure 6.3.

6.3.1 Scar Layer Generation


The first step of the scar layer generation is the segmentation of the LV myocardium in
LGE-MRI. The segmentation of the myocardium can be achieved by the approaches
described in Chapter 3 and Chapter 4 depending on 2-D or 3-D LGE-MRI sequences.
Afterwards, the scar can be quantified using one of the approaches described in
Chapter 5. The final segmentation result is depicted in Figure 6.3 in the first box,
where the endocardial contour is visualized in red, the epicardial contour in yellow,
and the scar is colored purple. The output of the segmentation algorithm is a mask
containing the blood pool, the myocardium of the LV, and the segmented myocardial
scar, as shown in the second box of Figure 6.3.
In the second step, the endocardial and epicardial contours are extracted from the
segmentation mask for each of the slices. The extraction is based on the marching
squares algorithm, which finds iso-valued contours for a particular level value in the
segmentation mask [Lore 87]. The result is depicted in the second box of Figure 6.3,
where the endocardial contour is visualized in red and the epicardial contour yellow
overlaid on the segmentation mask.
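
As an illustration, such an iso-contour extraction can be sketched with scikit-image, whose find_contours routine implements marching squares; the label values of the segmentation mask below are assumptions made for this sketch.

import numpy as np
from skimage import measure

def extract_contours(mask_slice, blood_pool_label=1, myocardium_label=2):
    """Endocardial and epicardial contours of one slice via marching squares.

    mask_slice: 2-D integer segmentation mask; the label values are
    illustrative assumptions. Returns two (K, 2) arrays of (row, column)
    contour points at the 0.5 iso-level.
    """
    blood_pool = (mask_slice == blood_pool_label).astype(float)
    lv = np.isin(mask_slice, [blood_pool_label, myocardium_label]).astype(float)

    # find_contours implements marching squares; keep the longest contour
    # in case small spurious regions are present.
    endo = max(measure.find_contours(blood_pool, 0.5), key=len)
    epi = max(measure.find_contours(lv, 0.5), key=len)
    return endo, epi
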
In the third step, the layers are computed. In Cartesian coordinates, there are
more epicardial points than endocardial points. Hence, it is more difficult to divide the
area between the endocardium and the epicardium into multiple layers, as circular ray
casting has to be used. Therefore, the segmentation mask as well as the endocardial
and epicardial contour are converted to polar coordinates. This transformation has
several advantages as detailed in Section 2.2.1. After the transformation to polar
coordinates, the endocardial and epicardial contour points are interpolated to achieve
a smooth contour. From the interpolated contours, layers between the endocardium

[Axes of Figure 6.4: radius r [pixel] over the angle ρ from 0° to 360°]

Figure 6.4: Transformed scar mask in polar coordinates with computed layers (red
and orange) between the endocardium and epicardium.


Figure 6.5: Scar layer generation process illustrated for one slice. (a) Segmenta-
tion mask with detected endocardium (red) and epicardium (yellow). Within the
myocardium, the scar is shown in a lighter shade of gray and white. (b) Depicts
the calculated layers, where l = 3. (c) The scar layers are generated using logical
comparison. (d) The final scar layer masks, which are used for the surface mesh
generation and the projection on the BEP.

and epicardium are computed. Therefore, the distance between the endocardium and
epicardium is calculated and then divided into multiple layers. For l ∈ N layers, the
myocardium needs to be divided l − 1 times. For each angle ρ, l − 1 values within
the myocardium are computed. The result of the layer generation is visualized in
Figure 6.4 in polar coordinates.
After the delineation of the l layers in polar space, they are transformed back
to Cartesian coordinates, see Figure 6.5 (b). In this work, the number of layers l is
set to three, because for the left ventricular lead placement it is helpful to have
an epicardial, mid-myocardial, and endocardial scar layer to decide on the best lead
location.
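
Per angle, the subdivision reduces to a linear interpolation between the endocardial and epicardial radius. A minimal sketch, assuming the interpolated contour radii are given as arrays over the sampled angles:

import numpy as np

def subdivision_radii(r_endo, r_epi, num_layers=3):
    """Radii of the l-1 subdivision lines between endo- and epicardium.

    r_endo, r_epi: 1-D arrays with the interpolated contour radii per angle
    in polar space. Returns an array of shape (num_layers - 1, n_angles).
    """
    fractions = np.arange(1, num_layers) / num_layers  # e.g. 1/3 and 2/3 for l = 3
    return r_endo + fractions[:, None] * (r_epi - r_endo)

For l = 3, the two returned rows correspond to the subdivision lines shown in red and orange in Figure 6.4.
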
In the fourth step, the scar layers are extracted. Therefore, the previously defined
layers and the scar mask are compared using logical operations. For the three scar
layers, the myocardium is divided twice, see Figure 6.5 (b). The first defined line is
next to the endocardial contour and the second defined line is close to the epicardial
contour. The first filled layer B_1 ∈ R^{N×N×N} is defined as the area within the first
subdivision line. The second filled layer B_2 ∈ R^{N×N×N} is defined as the area within
the second subdivision line. The third layer B_3 ∈ R^{N×N×N} is the area within the
epicardium. Then, the filled layers are compared using logical operations with the
scar mask Z ∈ R^{N×N×N}, and the three individual scar layer masks Z_1, Z_2, and Z_3
are obtained, where
    Z_1 = B_1 ∧ Z ,    (6.1)

    Z_2 = ¬B_1 ∧ B_2 ∧ Z ,    (6.2)

and

    Z_3 = ¬B_1 ∧ ¬B_2 ∧ B_3 ∧ Z .    (6.3)
The result is depicted in Figure 6.5 (c) and (d).
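
Expressed with boolean numpy volumes, the three layer masks follow directly from the filled layers; a minimal sketch mirroring Equations (6.1) to (6.3):

import numpy as np

def split_scar_layers(B1, B2, B3, Z):
    """Scar layer masks according to Eqs. (6.1)-(6.3).

    B1, B2, B3: filled boolean volumes up to the first and second
    subdivision line and up to the epicardium; Z: boolean scar mask.
    """
    Z1 = B1 & Z               # endocardial scar layer, Eq. (6.1)
    Z2 = ~B1 & B2 & Z         # mid-myocardial scar layer, Eq. (6.2)
    Z3 = ~B1 & ~B2 & B3 & Z   # epicardial scar layer, Eq. (6.3)
    return Z1, Z2, Z3
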
Afterwards, the scar layers can be visualized in 2-D or 3-D, which is detailed in
Section 6.3.2 and Section 6.3.3.

6.3.2 3-D Layer Visualization


For the visualization in 3-D, a mesh needs to be generated. Therefore, the vertices
and faces of the individual scar layer masks are extracted using the marching cubes
algorithm [Lore 87]. The result is a set of vertices V = {v_1, ..., v_N}, where v_i ∈ R^3
is a single vertex, and a set of faces F = {e_1, ..., e_N}, where e_i ∈ R^3 is a single face.
Furthermore, the image coordinates have to be transformed to the patient coordinate
system, e. g., to account for the large slice thickness for the 2-D LGE-MRI. Therefore,
the transformation matrix from image to the patient coordinate system has to be
calculated.
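
A possible realization of the mesh extraction with scikit-image's marching cubes implementation is sketched below; the function name and interface are illustrative, not the exact implementation used here.

import numpy as np
from skimage import measure

def layer_to_mesh(layer_mask):
    """Triangle mesh of one binary scar layer volume via marching cubes.

    layer_mask: 3-D boolean array in voxel coordinates. The vertices still
    have to be mapped to the patient coordinate system with the affine
    matrix D of Eq. (6.5).
    """
    verts, faces, normals, _ = measure.marching_cubes(layer_mask.astype(np.uint8), level=0.5)
    return verts, faces, normals
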
For the calculation, several parameters from the DICOM header are needed: the
pixel spacing with the DICOM tag (0028,0030) for the spacing in row direction ∆r ∈ R
and in column direction ∆c ∈ R, as well as the slice thickness ∆s ∈ R with the DICOM tag
(0018,0050). To get the information about the orientation of the DICOM image,
the direction cosines are needed, which are found under the DICOM tag (0020,0037),
also known as Image Orientation Patient. The information is saved in the matrix
F ∈ R^{3×2}, where the first column corresponds to the column direction cosine
and the second column contains the row direction cosine. In addition, the image
patient position of the first slice s_1 ∈ R^3 and of the last slice s_N ∈ R^3 is needed, which can
be found under the DICOM tag (0020,0032), also known as Image Position Patient.
From the image patient position, the translation vector k ∈ R^3 is calculated by

    k = (s_1 − s_N) / (1 − N) .    (6.4)

The final affine matrix D ∈ R^{4×4} is then defined as

    D = [ F_11 ∆r   F_12 ∆c   k_1 ∆s   s_1,1 ]
        [ F_21 ∆r   F_22 ∆c   k_2 ∆s   s_1,2 ]
        [ F_31 ∆r   F_32 ∆c   k_3 ∆s   s_1,3 ]    (6.5)
        [    0         0         0       1   ] .

The 3-D patient coordinate point is obtained by multiplying the pixel coordinates
with the affine matrix,

    [ x_x ]       [ x_r ]
    [ y_y ]  =  D [ y_c ]  .    (6.6)
    [ z_z ]       [ z_s ]
    [  1  ]       [  1  ]
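
Put together, the matrix of Equation (6.5) and the mapping of Equation (6.6) can be assembled as in the following sketch; reading the actual DICOM tags is omitted, and the function interfaces are assumptions for illustration.

import numpy as np

def dicom_to_patient_affine(F, delta_r, delta_c, delta_s, s_first, s_last, n_slices):
    """Affine matrix D of Eq. (6.5).

    F            : (3, 2) direction cosines (Image Orientation Patient),
                   column direction cosine in F[:, 0], row direction in F[:, 1].
    delta_r/c/s  : pixel spacing in row and column direction, slice thickness.
    s_first/last : Image Position Patient of the first and the last slice.
    n_slices     : number of slices N.
    """
    F = np.asarray(F, dtype=float)
    s_first = np.asarray(s_first, dtype=float)
    s_last = np.asarray(s_last, dtype=float)
    k = (s_first - s_last) / (1 - n_slices)          # Eq. (6.4)

    D = np.eye(4)
    D[:3, 0] = F[:, 0] * delta_r
    D[:3, 1] = F[:, 1] * delta_c
    D[:3, 2] = k * delta_s
    D[:3, 3] = s_first
    return D

def voxel_to_patient(D, row, col, slc):
    """Map a voxel index (row, column, slice) to patient coordinates, Eq. (6.6)."""
    return (D @ np.array([row, col, slc, 1.0]))[:3]
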
The final result is shown in Figure 6.6, where in (a) all three layers are visualized,
in (b) the mid-myocardial layer and the endocardial layer, and in (c) only the en-
docardial layer. The subdivision of the scar mesh into several scar layers enables an

(a) Three scar layers (b) Two scar layers

(c) One scar layer (d) 3-D scar mesh

Figure 6.6: 3-D scar layer visualization. (a) The endocardial mesh is visualized in
dark red with the three scar layers, from endocardial to epicardial. (b) Endocardial
mesh with the endocardial and mid-myocardial scar layer. (c) Endocardial mesh with
only one scar layer, the endocardial scar layer. (d) Standard representation of the
scar as one mesh in purple.

interactive peeling of the scar in 3-D. It can be scrolled from epicardium to endo-
cardium and vice versa. With the help of the scar layer visualization, an intuitive
representation of the transmurality of the scar is possible. If the scar is fully trans-
mural, all scar layers add up. Figure 6.6 (d) depicts the scar as one single mesh.
The 3-D scar layers can be overlaid onto the fluoroscopic images, as depicted in
Figure 6.7. This visualization method can be used during the intervention. For the
overlay, the epicardial mesh of the LV is registered to the fluoroscopic image [Toth 18].
Then, the epicardium, the endocardium, the scar mesh, and the scar layers can
be visualized in different colors. The colors and opacity can be adapted manually.
Meshes, which the physician is not interested in, can be hidden. This supports the

Figure 6.7: Fluoroscopic image with overlaid endocardial mesh in red and the three
3-D scar layers.

physician during the intervention, as only the required and important information is
shown.

6.3.3 2-D Layer Visualization


For the 2-D layer representation, the scar layers are projected onto the 16 segment
BEP. For the projection, first the principal axis of the left ventricle has to be esti-
mated, see Figure 6.8 (a). Therefore, the vertices of the left ventricle’s endocardium
are extracted using the marching cubes algorithm [Lore 87], resulting in a set of
vertices V = {v_1, ..., v_N}, where v_i ∈ R^3 is a single vertex. Afterwards, the covariance
matrix Σ ∈ R^{3×3} is calculated,

    Σ = (1/N) ∑_{i=1}^{N} (v_i − v̄)(v_i − v̄)^T ,    (6.7)

where v̄ is the mean vector of all vertices v,

    v̄ = (1/N) ∑_{i=1}^{N} v_i .    (6.8)

Having the covariance matrix Σ, the SVD is applied to the covariance matrix for the
purpose of performing principal component analysis,

    Σ = U S U^T ,    (6.9)

where U ∈ R^{n×m} is a matrix of eigenvectors and S ∈ R^{m×m} is a diagonal matrix
whose elements are the eigenvalues of the covariance matrix Σ. For the mathemat-
ical proof please see Section 4.3.2. The columns of U are orthogonal unit vectors,
where the first column corresponds to the largest eigenvalue. Thus, the vector has

(a) PCA of LV (b) BEP projection (c) 16 segment BEP

Figure 6.8: Projection of the left ventricle onto the 16 segments BEP. (a) Estimation
of the principal axis of the left ventricle. The red point marks the mitral valve plane.
(b) The endocardial mesh of the left ventricle is projected to the 16 segment BEP.
(c) 16 segment BEP where the mitral valve plane is marked in red. Furthermore, for
orientation purposes, the septum, anterior, inferior, and lateral orientation is labeled.

the largest variation among the left ventricle’s endocardium, i. e., the short axis ori-
entation. Furthermore, the lowest and highest vertices, i. e., the LV apex and base,
are computed by projecting all the vertices v to the principal axis. In addition, the
mitral valve is automatically detected and excluded from the segment calculation, as
depicted in Figure 6.8 (b), where the red dot marks the mitral valve plane. The prin-
cipal axis is divided into three sections: the basal, the midcardial, and apical section.
Based on this, the mesh vertices in the basal and midcardial sections are divided into
six sub-sections, based on anatomical landmarks. The anatomical landmarks used
are the insertion points connecting the right ventricular and left ventricular wall. The
apical section is divided into four sub-sections to match the segments of the BEP.
The 16 segment BEP is depicted in Figure 6.8 (c).
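
A compact sketch of this principal axis estimation and of the apex and base detection is given below; the helper names are illustrative.

import numpy as np

def lv_principal_axis(vertices):
    """Principal axis of the LV from its endocardial mesh vertices.

    vertices: (N, 3) array of vertex positions. Returns the mean vertex
    (Eq. 6.8) and the unit eigenvector of the covariance matrix (Eq. 6.7)
    belonging to the largest eigenvalue, obtained via SVD (Eq. 6.9).
    """
    v = np.asarray(vertices, dtype=float)
    v_mean = v.mean(axis=0)                        # Eq. (6.8)
    centered = v - v_mean
    cov = centered.T @ centered / len(v)           # Eq. (6.7)
    U, _, _ = np.linalg.svd(cov)                   # Eq. (6.9)
    return v_mean, U[:, 0]                         # direction of largest variance

def apex_and_base(vertices, center, axis):
    """Extreme vertices along the principal axis, i.e., LV apex and base."""
    v = np.asarray(vertices, dtype=float)
    proj = (v - center) @ axis
    return v[np.argmin(proj)], v[np.argmax(proj)]
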
After the division of the endocardial mesh into the 16 segments, the scar mesh
is also divided into these 16 segments. However, if the whole 3-D mesh is projected
onto the BEP, a distinction between endocardial and epicardial scar is not possible, as
seen in Figure 6.9 (b). Also, the additional information about the scar transmurality
cannot help in distinguishing the location of the scar within the myocardium, as
shown in Figure 6.9 (a). Therefore, the previously generated scar layers are projected
onto the BEP. The scar layers can be projected on top of each other, as visualized
in Figure 6.9 (c). Alternatively, the individual layers can be shown separately, as
depicted in Figure 6.9 (d)-(f) for the endocardial, midcardial, and epicardial layer,
respectively.

6.4 Evaluation and Results


The scar layer visualization is evaluated using seven clinical data sets out of the 100
from Chapter 3. The data is acquired with a Siemens MAGNETOM Aera 1.5T scan-
ner (Siemens Healthcare GmbH, Erlangen, Germany), for a more detailed description

(a) Scar transmurality (b) Scar distribution (c) Scar layer

(d) Endocardial layer (e) Midcardial layer (f ) Epicardial layer

Figure 6.9: 2-D scar layer visualization. (a) Scar transmurality shown as percent-
age for each segment and color coded accordingly. (b) The 3-D scar distribution is
projected on the 16 segment BEP. (c) Three scar layers projected onto the 16 seg-
ment BEP, where a differentiation of endocardial, midcardial, and epicardial scar is
possible. (d) Endocardial scar layer. (e) Midcardial scar layer. (f) Epicardial scar
layer.

of the data please refer to Section 3.4.1. For the evaluation, two tests are created. In
the first test, nine physicians are shown four cases. For each case, two visualization
methods are presented: the segmented LV overlaid with the 3-D scar mesh and the
segmented LV overlaid with the scar layer visualization, similar to Figure 6.6. They
are asked to decide which visualization method they would prefer. In 80.6 % of the cases,
the clinical experts prefer the scar layer visualization, in 16.7 % of the cases they prefer the
3-D scar mesh, and in 2.8 % of the cases they do not have a preference.
In the second test, eight physicians are shown six 3-D scar meshes and six scar layer
meshes. For each visualization method, they should decide if the scar is epicardial or
endocardial. The results are shown in Table 6.2.
These two experiments show that, with the scar layer visualization, the clinicians
can easily choose an optimal lead placement location, as they can decide whether the
scar is epicardial or endocardial.

6.5 Discussion and Conclusion


In this chapter, novel methods for interactive visualization of the scar information are
presented. In CRT, the left ventricular lead is commonly placed on the epicardium
through a coronary vein. Precise information about the location and transmural-

                     3-D Scar Mesh    Scar Layer Meshes

 Correct                 18.75 %           93.75 %
 Wrong                    6.25 %            6.25 %
 No Determination        75.00 %            0.00 %

Table 6.2: Evaluation with eight clinical experts and twelve scar meshes. For each
mesh, they were asked to decide whether the scar is epicardial or endocardial, or to
state that they could not determine it.

ity of the scar within the myocardium is needed, as scar is electrically almost non-
conductive. The results show that the clinicians could more easily decide on the scar
location and transmurality using the layer visualization. The precise control over how
the scar transmurality is visualized, in 2-D and 3-D, allows the user to see the scar
location to the extent of transmurality. An interactive scrolling through scar layers
is realized, such that scar layers are added or removed from the visualization. The
endocardium and the scar meshes can be further overlaid onto fluoroscopic images.
The overlay of the meshes can be used to guide an intervention.
PART IV

Outlook and Summary

CHAPTER 7

Outlook
7.1 Left Ventricle Segmentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
7.2 Scar Segmentation and Visualization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

In this chapter, new possibilities of future work are outlined, based on the presented
methods and previous discussions.

7.1 Left Ventricle Segmentation


For the 2-D LGE-MRI segmentation, the detection of the left ventricular outflow tract
can be improved. Currently, only the root mean square error between two subsequent
slices is calculated. However, if the valve plane were detected in the long axis
acquisition, this information could be applied for the detection of the outflow tract in
the short axis orientation. Furthermore, a model of the left ventricle can be applied
to the contours for a better segmentation in the apex and base.
In addition, deep learning methods can be applied, such as the U-net [Ronn 15],
which has already been successfully applied to many medical image segmentation chal-
lenges. The problem with deep learning is that it requires a lot of labeled training
data.
The segmentation of the left ventricular anatomy using only 3-D LGE-MRI data
still poses many tasks that need to be solved. The segmentation result of the
endo- and epicardium can be improved by not only considering the pseudo short
axis slices for the segmentation but also the pseudo longitudinal axis of the left ventricle.
By considering the pseudo LA slices, further misalignments and segmentation errors
may be reduced. Furthermore, a graph-cut based approach in 3-D can be applied to
achieve consistent results across the slices for the learning based segmentation.
For the learning based segmentation, also different features can be evaluated. In
this thesis, only steerable features are considered, which are low level features based
on the local gradient and intensity. In addition, different classifiers can be evaluated,
such as neural networks or support vector machines.
In addition, for the 3-D LGE-MRI the detection of the left ventricular outflow
tract needs to be improved. A definition of a cut-off parameter can already improve
the segmentation result.


Finally, the runtime of the segmentation pipeline for the 3-D LGE-MRI needs to
be improved. First, the initialization of the left ventricle can be improved by using
marginal space deep learning [Ghes 16]. Regarding the contour extraction, as of now
the apical contours are refined and afterwards the basal contours. This could be
performed in parallel.
Considering the semi-automatic segmentation approach based on HRBF, the
method should be extended to use arbitrarily oriented slices for the 3-D interpolation.
For the left ventricle, the short axis and long axis orientations would be desirable to
achieve even better segmentation results. Therefore, the gradient selector has to be
adapted. In addition, the smart brush can be improved by considering not only
intensity features.

7.2 Scar Segmentation and Visualization


The quantification of the scar tissue requires further investigation. A pre-requisite for
the scar segmentation is the accurate segmentation of the left ventricle’s myocardium.
For the scar segmentation, currently a texture based method is used. For the learning
based scar segmentation, different texture features can be investigated as well as dif-
ferent classifiers. Deep learning could be applied for the scar segmentation; however,
more annotated data would be needed for this.
As a next step, the scar burden of the LV along with other parameters could
be analyzed automatically as well. The total scar burden is a good predictor for
the success of a therapy. In addition to that, the type of scarring is of interest.
Three types of fibrosis can be differentiated: endocardial, epicardial, and transmural
scar. Furthermore, a differentiation between the gray zone and the scar tissue
is important for procedure planning.
Having the segmentation of the left ventricle and the scar quantification, the
next step is the registration to the fluoroscopic images to guide the implantation of
the bi-ventricular pacemaker. Therefore, an automatic 2-D/3-D registration of the
MRI volume to the fluoroscopic images is desirable. Currently, there are three main
approaches to achieve the registration of the cardiac MRI to the fluoroscopic images:
i) fully manual alignment of the MRI volume to the fluoroscopic images [Kais 14],
ii) fiducial marker based registration [Dang 12], and iii) automatic soft tissue-based
registration based on contrast agent [Toth 16]. The problem with a fully manual
registration is that it is prone to user errors and intra- and inter-observer variability.
Using fiducial markers, the pre-operative MRI images and the fluoroscopic data have
to be acquired within a narrow time frame, because the markers have to remain on
the patient's body for the registration. Although the registration of soft tissue can
be fully automatic, the physician has to inject contrast agent, as in the fluoroscopic
images there is no contrast from the soft tissue. The goal of using contrast agent is
to make the structure and location of the veins – or similar structures – visible under
fluoroscopy.
To overcome these issues, the vertebrae or collar-bone in the fluoroscopic image
can be registered with the bones in the MRI. To see the bones in the MRI volume, a
special MRI scan needs to be acquired, which makes bones visible. The segmentation
of the vertebrae is already proposed by Reiml et al. [Reim 17b].
CHAPTER 8

Summary
Cardiovascular disease is the major cause of death worldwide. In more detail,
ischemic heart disease is the leading cause of death. This disease is closely related
to heart failure. In 2014, about 26 million people worldwide suffered from
heart failure. Patients that suffer from chronic heart failure can benefit from cardiac
resynchronization therapy (CRT). CRT is a successful treatment option for patients
who suffer from drug-refractory heart failure, have a wide QRS complex, and have a reduced
left ventricular ejection fraction of less than 35 %. The bi-ventricular pacemaker
synchronizes the contraction pattern of the heart. The CRT device has three leads,
one placed in the right atrium, one in the right ventricle, and the last one through the
coronary sinus on the left ventricle's myocardium. The problem, however, is that 30 %
to 40 % of the patients do not respond to this therapy. Therefore, for precise procedure planning,
information about the left ventricle's anatomy and the scar distribution is needed.
Late gadolinium enhanced MRI is the gold standard for non-invasive assessment
of the tissue viability. Recently, this technology was extended to 3-D for a more
precise quantification of the left ventricle's myocardium to the extent of myocardial
scarring. The principles of LGE-MRI and the difference between 2-D slice-selective
sequences and 3-D sequences is detailed in Chapter 2. In addition, a brief overview
is given on the image processing basics applied in this thesis, the pattern recognition
pipeline is outlined, and the random forest classifier is explained in more detail.
However, the challenge arises in the image analysis of these MRI sequences, because
if the LV lead is placed on fibrosis, there will likely be no impact from the
pacing, as scar tissue is electrically non-conductive. The focus of this work was on left
ventricle segmentation, scar quantification, and scar visualization for late gadolin-
ium enhanced magnetic resonance imaging to provide information for the procedure
planning and guidance.
In Chapter 3, segmentation methods for 2-D LGE-MRI are presented. The seg-
mentation pipeline consists of four major steps. First, the left ventricle is initialized
in the mid-slice of the short axis stack using circular Hough transforms and a cir-
cularity measure. Second, a morphological active contours approach is used for the
rough estimation of the blood pool. Third, the endocardial contour can be refined
using either a filter based approach or a learning based approach. For the filter based
approach, the edge information and the scar probabilities are used to generate a cost
map and a minimal cost path search is applied to get the final result. For the learning
based approach, steerable features are extracted in a ray casting fashion and classified


using a random forest. The final result is obtained by combining the classification
result with the scar probability and a dynamic programming approach. Finally, the
epicardial boundary is extracted by using the information of the endocardial contour
and applying also either a filter based or learning based approach for the generation
of the cost map and a minimal cost path search for the final endocardial contour.
The presented method is evaluated on 100 clinical data sets and achieves similar or
better results when compared to literature. For the learning based segmentation, a
5-fold nested cross-validation is used. Hence, 20 data sets are used for the testing
of the classifier and the rest is used for the training and validation of the classifier.
The learning based segmentation also slightly outperforms the filter based segmen-
tation, especially considering the endocardial contour segmentation, where a mean
Dice coefficient of 0.82 for the endocardium and 0.81 for the epicardium is achieved.
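
For reference, the Dice coefficient (DC) used as the overlap measure throughout this evaluation can be computed for two binary masks as in the following minimal sketch.

import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice coefficient DC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
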
In Chapter 4, segmentation methods for 3-D LGE-MRI are detailed. The auto-
matic segmentation is divided into five major steps. First, the left ventricle has to
be detected in the whole heart LGE-MRI scan. Therefore, a two-stage registration
based approach is used, consisting of a rigid registration followed by a non-rigid reg-
istration. Afterwards, the principal components of the left ventricle are estimated.
Knowing the principal axis of the left ventricle, the volume is rotated around this
axis to estimate the short axis view, which is commonly used for the segmentation
of the left ventricle. In the next step, the endocardial contour is refined using either
a learning based approach or a filter based approach. For the filter based approach,
the cost array for the boundary estimation is calculated from the edge information
and the scar probability. For the learning based approach, a trained random forest
classifier is used to estimate the boundary probability. This probability combined
with the scar probability results in the cost array. The final endocardial contour for
both approaches is obtained by a minimal cost path search in polar space. For the
succeeding slices, the center and the radius of the previous contour are considered
to achieve a smooth endocardial segmentation. Afterwards, the epicardial boundary
is estimated using the information from the endocardial boundary. For the filter
based approach, the closest edge to the endocardial boundary with increasing radius
is searched for. For the final result, the convex hull is estimated to achieve a smooth
looking contour. For the learning based approach, a trained random forest classifier
is applied to get the boundary probability for the possible boundary candidates. In
the next step, a dynamic programming approach in polar space is applied to obtain
the final result. The iteration stops if the apex or the left ventricular outflow tract
is reached. For the 3-D LGE-MRI, the papillary muscles are also segmented using
Otsu’s thresholding and a connectivity check. Finally, the contours are exported as
3-D surface meshes and used for further processing.
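
The idea of the papillary muscle segmentation can be sketched as follows; the minimum component size and the exact connectivity handling are assumptions for illustration, not the parameters used in this work.

import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label

def papillary_muscle_mask(volume, blood_pool_mask, min_voxels=20):
    """Rough papillary muscle mask inside the LV blood pool.

    Otsu's threshold separates the bright blood pool from the darker
    papillary muscles; small connected components are discarded.
    min_voxels is an illustrative value, not the thesis parameter.
    """
    t = threshold_otsu(volume[blood_pool_mask])
    dark = blood_pool_mask & (volume < t)

    labeled, n_labels = label(dark, return_num=True)
    keep = np.zeros_like(dark)
    for region in range(1, n_labels + 1):
        component = labeled == region
        if component.sum() >= min_voxels:
            keep |= component
    return keep
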
In addition to the fully automatic left ventricle segmentation approaches, a generic
semi-automatic segmentation method based on HRBF in combination with a smart
brush is introduced. First, the user segments some 2-D slices using the smart brush.
In the next step, the contours are extracted from the segmented masks and control
points are estimated. Based on the contour, the normal vectors of the control points
are calculated. For intersecting planes, close control points are merged and a 3-D
normal vector is estimated. Finally, our new formulation of HRBF is applied to
reconstruct the desired surface.

For the evaluation of the filter-based 3-D LGE-MRI segmentation, 30 clinical data
sets from two sites are used. Gold standard annotations of the endocardium and
epicardium are provided by two clinical experts for each of the data sets. The Dice
coefficient for the filter based segmentation resulted in an overlap of 0.83 ± 0.04 for
the endocardium and 0.80 ± 0.05 for the epicardium. As the gold standard annotations
are performed in the coronal, axial, and sagittal views, an additional smoothing step
is added in the short axis orientation for smoother looking results. The DC using
the smoothed gold standard annotations resulted in an overlap of 0.84 ± 0.04 for the
endocardium and 0.80 ± 0.06 for the epicardium. Furthermore, the left ventricle is divided
into three equal thirds, the base, the mid-cavity, and the apex. The Dice coefficient is
evaluated separately for these three individual parts. The best overlap is shown for the
mid-cavity with a mean DC of 0.88 ± 0.07. In addition, also the parameter variability
is evaluated to see the influence of all individual parameters. For the learning-based
segmentation, a nested cross-validation is used for the optimization of the hyper-
parameters of the random forest classifier. For the learning based segmentation only
the first cohort is considered. The DC resulted in an overlap of 0.84 ± 0.07 for the
endocardium and 0.85 ± 0.07 for the epicardium.
For the semi-automatic segmentation also the first cohort is used. First the smart
brush is evaluated. For each data set, 5 slices per orientation with 5 different positions
for each slice are extracted, leading to 75 patches for each data set. For most patients,
an average Dice coefficient of 0.87 is achieved. However, the main problem with the
automatic evaluation of the smart brush is that normally the smart brush inherently
involves human interaction. Hence, it is expected that manual annotation using the
smart brush would even achieve better results. For the 3-D interpolation the same
data sets are used. For each data set 1, 3, and 5 slices per orientation are extracted,
which means to have a total number of 3, 9, and 15 segmented slices, respectively. The
semi-automatic segmentation using our A-HRBF interpolation achieved an average
Dice coefficient for the endocardium of 0.90 ± 0.02, 0.94 ± 0.01, and 0.95 ± 0.01 for 1, 3,
and 5 slices per orientation, respectively. For the epicardium, an average Dice coefficient
of 0.90 ± 0.02, 0.94 ± 0.02, and 0.95 ± 0.01 for 1, 3, and 5 slices per orientation was
achieved.
Having segmented the left ventricle, the scar tissue within the myocardium is quan-
tified. The scar quantification is described in Chapter 5. First, state-of-the-art scar
segmentation methods are reviewed, such as the full-width-at-half-max algorithm or
the x-SD method. These methods are implemented in a fully automatic manner, to
eliminate inter- and intra observer variability errors. Furthermore, a texture based
scar segmentation algorithm is developed. Therefore, 50 features are extracted from
a scar patch using fractal analysis of the texture. The segmentation based fractal
texture analysis (SFTA) can be decomposed into two major steps. First, the gray
scale image is divided into a set of binary images using two-threshold binary decom-
position. Second, for each binary image, the fractal dimension, the mean gray level,
and the size is computed. For the final feature vector in addition to the SFTA fea-
tures, the mean intensity value of the extracted patch and the center intensity value
is added to the feature vector. For the training of the random forest classifier, only
patches which are completely inside or outside of the scar tissue are used. The RF
classifier assigns each pixel a probability of corresponding to scar or healthy tissue.

In the next step, the probability image is binarized, small components are removed,
and morphological closing is applied. The proposed scar quantification method is
evaluated on 30 clinical LGE-MRI data sets. A 10-fold nested cross-validation is
used for the evaluation of the texture based scar quantification. The DC results in
0.63 ± 0.17 for the texture based approach. In addition, the total volume error is
evaluated, which results in 0.04 ± 0.04. The results of the texture classification are also
compared to the FWHM and the x-SD methods. It can be seen that the texture based scar
quantification outperforms the x-SD method. However, the results cannot directly
be compared to the FWHM method as the gold standard annotations are based on
this method.
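
Both reference methods reduce to simple intensity thresholds over the segmented myocardium. A minimal sketch, assuming binary masks for the myocardium and for a remote (healthy) myocardial region; the value of x and the choice of the remote region are assumptions for illustration:

import numpy as np

def xsd_scar_mask(image, myo_mask, remote_mask, x=5.0):
    """x-SD thresholding: scar = myocardial voxels brighter than the mean of
    the remote (healthy) myocardium plus x standard deviations."""
    mu, sigma = image[remote_mask].mean(), image[remote_mask].std()
    return myo_mask & (image > mu + x * sigma)

def fwhm_scar_mask(image, myo_mask):
    """FWHM thresholding: scar = myocardial voxels above half of the maximal
    enhancement found within the myocardium."""
    return myo_mask & (image > 0.5 * image[myo_mask].max())
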
In Chapter 6, the scar visualization is described. As the scar transmurality plays
an important role for the CRT procedure planning, a new scar layer visualization is
introduced. The scar layer workflow consists of five major steps. A pre-requisite is
the prior segmentation of the myocardium and scar tissue. Afterwards, the anatomy
is delineated. In the next step, the layers between the endocardial and epicardial
boundary are computed. Having the separate layers, the scar is extracted for each
layer. Finally, the scar is visualized either in 3-D as surface mesh or in 2-D projected
onto the 16 segment bull’s eye plot of the American Heart Association. For the 3-D
visualization, the DICOM to patient transformation matrix has to be estimated. All
information needed for the calculation of the matrix is in the DICOM header. For
the 2-D layer visualization, the principal axis of the left ventricle has to be estimated.
The principal axis is divided into three sections: the basal, the mid-cardial, and the apical
section. Based on this, the layers in the basal and mid-cardial sections are divided
into six sub-sections, based on anatomical landmarks. The apical section is divided
into four sub-sections to match the segments of the BEP. The anatomical landmarks
used are the insertion points connecting the right ventricular and left ventricular wall.
With the scar layer visualization it is possible to distinguish between endocardial and
epicardial scar easily. Furthermore, the transmurality of the scar can also be inves-
tigated, and therefore, suitable pacing sites for the left ventricular lead of the CRT
device can be found.
In Chapter 7, ideas on future work are summarized to improve and further inves-
tigate the methods presented in this thesis. First, further improvement suggestions
for the left ventricle segmentation are given, such as the usage of a model, the detec-
tion of the left ventricular outflow tract, or the application of deep learning methods.
Afterwards, new ideas for the scar segmentation and visualization are proposed. In
addition, the next steps for the guidance of the CRT procedure are outlined, as the
segmented myocardium and scar have to be overlaid onto the fluoroscopic images.
Appendix
Contributions to Published Papers
In the course of this thesis, contributions were published in the form of conference
proceedings and journal articles. Being the first author, I was responsible for the
development of the methods, the implementation, the evaluation of the proposed
approaches, and the writing of the manuscripts. Thus I declare that the presented
work is my own, while gratefully acknowledging that I received valuable advice along
the way. This refers to the following publications which are part of this thesis:

[Kurz 17a] T. Kurzendorfer, A. Brost, C. Forman, and A. Maier. “Automated Left Ventricle Segmentation in 2-D LGE-MRI”. In: IEEE, Ed., Proceedings of the 2017 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 831–834, April 2017. (Chapter 3)

[Kurz 17e] T. Kurzendorfer, C. Forman, A. Brost, and A. Maier. “Random Forest Based Left Ventricle Segmentation in LGE-MRI”. In: International Conference on Functional Imaging and Modeling of the Heart, pp. 152–160, Springer, June 2017. (Chapter 3)

[Kurz 18a] T. Kurzendorfer, K. Breininger, S. Steidl, A. Brost, C. Forman, and A. Maier. “Left Ventricle Segmentation in LGE-MRI: Filter Based vs. Learning Based”. In: IEEE Nuclear Science Symposium and Medical Imaging Conference, Nov. 2018. (Chapter 3)

[Kurz 15] T. Kurzendorfer, A. Brost, C. Forman, M. Schmidt, C. Tillmanns, and J. Hornegger. “Semi-Automatic Segmentation and Scar Quantification of the Left Ventricle in 3-D Late Gadolinium Enhanced MRI”. In: ESMRMB, Ed., 32nd Annual Scientific Meeting of the ESMRMB, pp. 318–319, October 2015. (Chapter 4, Chapter 5)

[Kurz 16a] T. Kurzendorfer, C. Forman, M. Schmidt, C. Tillmanns, A. Maier, and A. Brost. “Fully Automatic Segmentation and Scar Quantification of the Left Ventricle in 3-D Late Gadolinium Enhanced MRI”. In: M. C. Weiss, Ed., Book of Abstracts, October 2016. (Chapter 4, Chapter 5)

[Kurz 17b] T. Kurzendorfer, A. Brost, C. Forman, M. Schmidt, C. Tillmanns, and A. Maier. “Fully Automatic Segmentation of Papillary Muscles in 3-D LGE-MRI”. In: Bildverarbeitung für die Medizin (BVM 2017), March 2017. (Chapter 4)

[Kurz 17c] T. Kurzendorfer, A. Brost, C. Forman, M. Schmidt, C. Tillmanns, S. Steidl, and A. Maier. “3-D LGE-MRI Segmentation using a Random Forest Classifier and Dynamic Programming”. In: ESMRMB, Ed., 34th Annual Scientific Meeting of the ESMRMB, October 2017. (Chapter 4)

[Kurz 17f] T. Kurzendorfer, C. Forman, M. Schmidt, C. Tillmanns, A. Maier, and A. Brost. “Fully Automatic Segmentation of the Left Ventricular Anatomy in 3-D LGE-MRI”. Journal of Computerized Medical Imaging and Graphics, Vol. 59, pp. 13–27, July 2017. (Chapter 4, Chapter 5)

[Kurz 17d] T. Kurzendorfer, P. Fischer, N. Mirshahzadeh, T. Pohl, A. Brost, S. Steidl, and A. Maier. “Rapid Interactive and Intuitive Segmentation of 3-D Medical Images Using Radial Basis Function Interpolation”. Journal of Imaging, December 2017. (Chapter 4)

[Kurz 18b] T. Kurzendorfer, K. Breininger, S. Steidl, A. Brost, C. Forman, and A. Maier. “Myocardial Scar Segmentation in LGE-MRI using Fractal Analysis and Random Forest Classification”. In: 2018 24th International Conference on Pattern Recognition (ICPR), Aug. 2018. (Chapter 5)

[Kurz 17g] T. Kurzendorfer, S. Reiml, A. Brost, D. Toth, M. Panayiotou, P. Mountney, S. Steidl, and A. Maier. “2-D Interactive Scar Layer Visualization”. In: Image-Guided Interventions Conference (IGIC 2017), November 2017. (Chapter 6)

Furthermore, for two publications on which I am the second author, I was responsible for
the supervision of the students. I provided major support in the development of
the methods, the implementation, and the evaluation of the proposed approaches. In
addition, I extensively assisted in the writing of the manuscripts, including several
iterations of proof-reading. This refers to the following publications, which are part
of this thesis:

[Mirs 17] N. Mirshahzadeh, T. Kurzendorfer, P. Fischer, T. Pohl, A. Brost, S. Steidl, and A. Maier. “Radial Basis Function Interpolation for Rapid Interactive Segmentation of 3-D Medical Images”. In: Annual Conference on Medical Image Understanding and Analysis, pp. 651–660, Springer, July 2017. (Chapter 4)

[Reim 17a] S. Reiml, T. Kurzendorfer, D. Toth, P. Mountney, K. Rhode, A. Maier, and A. Brost. “Automatic Layer Generation for Scar Transmurality Visualization”. In: Bildverarbeitung für die Medizin (BVM 2017), March 2017. (Chapter 6)
List of Abbreviations

ACWE Active Contours Without Edges

AHA American Heart Association

A-HRBF Adapted Hermite Radial Basis Function

BEP Bull’s Eye Plot

CART Classification and Regression Trees

cine Video of the heart motion during the cardiac cycle

CP Control Point

CRT Cardiac Resynchronization Therapy

CV Cross-Validation

DC Dice Coefficient

ECG Electrocardiogram

FWHM Full-Width at Half-Max

Gd Gadolinium

HF Heart Failure

HRBF Hermite Radial Basis Function

IR Inversion Recovery

LA Long Axis Orientation

LGE Late Gadolinium Enhanced

LV Left Ventricle

LVOT Left Ventricular Outflow Tract


MACWE Morphological Active Contours Without Edges

MCP Minimal Cost Path

MI Myocardial Infarction

MRI Magnetic Resonance Imaging

MSD Mean Surface Distance

NYHA New York Heart Association

PCA Principal Component Analysis

PD Proton Density

RBF Radial Basis Function

RF Random Forest

RFP Radio Frequency Pulse

ROI Region of Interest

SA Short Axis Orientation

SFTA Segmentation-based Fractal Texture Analysis

SVD Singular Value Decomposition

TE Echo Time

TI Inversion Time

TR Repetition Time

TTBD Two-Threshold Binary Decomposition

TVE Total Volume Error

x-SD x-Standard Deviation


List of Algorithms
2.1 Cross validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2 Nested cross validation . . . . . . . . . . . . . . . . . . . . . . . . . . 32

List of Figures
1.1 Causes of heart failure . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Normal and wide ECG signal . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Illustration of a CRT device implant. . . . . . . . . . . . . . . . . . . 6
1.4 X-ray image acquired after a CRT implant. . . . . . . . . . . . . . . . 7
1.5 Graphical overview of thesis structure . . . . . . . . . . . . . . . . . . 12

2.1 Contrast in MRI, T1 , T2 , and PD weighted images . . . . . . . . . . . 15


2.2 LGE-MRI sequence acquisition protocol . . . . . . . . . . . . . . . . 17
2.3 LGE-MRI data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4 Polar transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.5 Result MACWE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.6 Pattern recognition pipeline . . . . . . . . . . . . . . . . . . . . . . . . 24
2.7 Decision tree topology . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.8 Examples of splitting function . . . . . . . . . . . . . . . . . . . . . . 26
2.9 Information gain after splitting . . . . . . . . . . . . . . . . . . . . . 27
2.10 Partitioning of data in decision tree . . . . . . . . . . . . . . . . . . . 28
2.11 Randomly sampled data and decision trees . . . . . . . . . . . . . 29
2.12 Impact of forest parameters . . . . . . . . . . . . . . . . . . . . . . . 30
2.13 Illustration of different evaluation approaches . . . . . . . . . . . . . 31

3.1 Overview of standard LGE-MRI segmentation . . . . . . . . . . . . . 36


3.2 Overview of 2-D LV segmentation pipeline . . . . . . . . . . . . . . . 38
3.3 LV detection pipeline . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.4 LV detection and MACWE . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5 Filter based endocardial boundary estimation in 2-D LGE-MRI . . . 41
3.6 Random forest based endocardial boundary estimation . . . . . . . . 43
3.7 Learning based endocardial boundary estimation . . . . . . . . . . . . 44
3.8 Final steps of endocardial boundary estimation . . . . . . . . . . . . 45
3.9 Filter based epicardial boundary estimation in 2-D LGE-MRI . . . . 46
3.10 Epicardial boundary estimation . . . . . . . . . . . . . . . . . . . . . 47
3.11 Individual DC for 2-D filter based segmentation . . . . . . . . . . . . 50
3.12 Individual DC for 2-D learning based segmentation . . . . . . . . . . 52
3.13 Feature importance for 2-D LGE-MRI . . . . . . . . . . . . . . . . . 54
3.14 Comparison of DC for different hyper-parameters . . . . . . . . . . . 55
3.15 Qualitative segmentation results. . . . . . . . . . . . . . . . . . . . . 56
3.16 Comparison of DC filter vs. learning based segmentation . . . . . . . 57
3.17 Correlation of filter and learning based epicard segmentation . . . . . 57


4.1 3-D LGE-MRI data set . . . . . . . . . . . . . . . . . . . . . . . . . . 61


4.2 3-D LGE-MRI segmentation pipeline . . . . . . . . . . . . . . . . . . 62
4.3 Two-step registration . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.4 PCA and SA view . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.5 Filter based endocardial refinement . . . . . . . . . . . . . . . . . . . 66
4.6 Result of filter based endocardial refinement . . . . . . . . . . . . . . 67
4.7 Refinement in apical and basal direction . . . . . . . . . . . . . . . . 69
4.8 Boundary candidates in Cartesian coordinates . . . . . . . . . . . . . 70
4.9 Learning based endocardial boundary estimation . . . . . . . . . . . . 72
4.10 Filter based epicardial boundary estimation . . . . . . . . . . . . . . 73
4.11 Final result of endocardial and epicardial contour . . . . . . . . . . . 74
4.12 Papillary muscles and chordae tendineae . . . . . . . . . . . . . . . . 75
4.13 Papillary muscles segmentation and 3-D surface mesh . . . . . . . . . 75
4.14 Semi-automatic left ventricle segmentation pipeline . . . . . . . . . . 76
4.15 Initialization of the smart brush . . . . . . . . . . . . . . . . . . . . . 78
4.16 Control point extraction . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.17 Intersection of two orthogonal planes . . . . . . . . . . . . . . . . . . 80
4.18 Multiple intersections of annotated slices . . . . . . . . . . . . . . . . 81
4.19 Control points with 2-D and 3-D normal vector . . . . . . . . . . . . 82
4.20 RBF interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.21 3-D LGE-MRI from two different cohorts . . . . . . . . . . . . . . . . 86
4.22 Gold standard annotation . . . . . . . . . . . . . . . . . . . . . . . . 88
4.23 Average DC for different parts of LV . . . . . . . . . . . . . . . . . . . 90
4.24 Individual DC for 3-D filter based segmentation . . . . . . . . . . . . 91
4.25 Qualitative results of first cohort using the filter based segmentation . 93
4.26 Comparison between gold standard annotations . . . . . . . . . . . . 94
4.27 Parameter variability evaluation for the 3-D LV segmentation . . . . 95
4.28 Individual DC for 3-D learning based segmentation . . . . . . . . . . 97
4.29 Feature importance for 3-D LGE-MRI . . . . . . . . . . . . . . . . . 98
4.30 Filter based vs. learning based results for the 3-D LGE-MRI . . . . . 99
4.31 Correlation of filter and learning based segmentation . . . . . . . . . 99
4.32 Qualitative results for filter vs. the learning based segmentation . . . 101
4.33 2-D smart brush evaluation . . . . . . . . . . . . . . . . . . . . . . . 102
4.34 Smart brush outliers . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
4.35 3-D A-HRBF interpolation evaluation . . . . . . . . . . . . . . . . . . 104
4.36 Quantitative semi-automatic evaluation result . . . . . . . . . . . . . 105
4.37 Normal vector orientation . . . . . . . . . . . . . . . . . . . . . . . . 106

5.1 Intensity histogram of myocardium and blood pool . . . . . . . . . . 114


5.2 Segmentation results for x-SD method . . . . . . . . . . . . . . . . . 115
5.3 Segmentation results for FWHM method . . . . . . . . . . . . . . . . 116
5.4 Overview of the scar quantification pipeline. . . . . . . . . . . . . . . 117
5.5 Input image for SFTA . . . . . . . . . . . . . . . . . . . . . . . . . . 118
5.6 Resulting images using the TTBD algorithm . . . . . . . . . . . . . . 119
5.7 Box counting curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5.8 SFTA feature extraction . . . . . . . . . . . . . . . . . . . . . . . . . 121

5.9 LGE-MRI data set used for scar quantification . . . . . . . . . . . . . 122


5.10 Box plot of DC for all segmentation methods . . . . . . . . . . . . . . 124
5.11 Segmentation results for learning based scar quantification . . . . . . 124
5.12 Scatter plot between DC and TVE of scar quantification . . . . . . . 125
5.13 General feature importance for scar quantification . . . . . . . . . . . 125
5.14 Detailed feature importance for scar quantification . . . . . . . . . . . 126
5.15 Qualitative results for learning based scar quantification . . . . . . . 127

6.1 3-D scar visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . 130


6.2 2-D scar visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
6.3 Overview of scar layer visualization pipeline . . . . . . . . . . . . . . . 132
6.4 Layer generation in polar coordinates . . . . . . . . . . . . . . . . . . 133
6.5 Scar layer generation process . . . . . . . . . . . . . . . . . . . . . . 133
6.6 3-D scar layer visualization . . . . . . . . . . . . . . . . . . . . . . . . 135
6.7 3-D scar layer overlay . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6.8 BEP projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
6.9 2-D scar layer visualization . . . . . . . . . . . . . . . . . . . . . . . . 138
List of Symbols

General Symbols
c∈C Class label

r ∈ R^+_0    Radius

t ∈ {1, ..., T} Individual decision tree

x∈R x-Coordinate

y∈R y-Coordinate

z∈R z-Coordinate

D∈N Tree depth

N ∈N Node

T ∈N Size of forest

G:D→R Information gain

H:D→R Entropy

HS : D → R Shannon entropy

HG : D → R Gini impurity

p ∈ [0, 1] Probability

s : f → {0, 1} Split function

f ∈ RD D-dimensional feature vector

p ∈ RD D-dimensional point

θ ∈ R3 Set of split parameters


φ : R^d → R^{d′}    Feature selector

ψ ∈ Rd Geometric primitive


ρ ∈ [0◦ , 360◦ ] Angle

τ ∈ Rd Threshold inequality

C ∈ {c1 , ..., cN } Set of all classes set

D ∈ RD Set of data points

P ∈ {θ1 , ..., θT } Set of split parameters

Left Ventricle Segmentation


c1 ∈ R Mean intensity value inside contour

c2 ∈ R Mean intensity value outside contour

c∈R Cost of individual step

d∈R Distance between two points

dx ∈ R Tangent in x-direction

dy ∈ R Tangent in y-direction

e Edge function

f Radial basis function

g Low-degree polynomial function

gx ∈ R Gradient in x-direction

gy ∈ R Gradient in y-direction

k ∈ R^+_0    Length of contour

l ∈ {1, 2, ..., N } Slice index

n∈N Number of control points

p ∈ [0, 1] Boundary probability

r∈R Reference quantity

s∈R Scaling factor

u Level set function

h Gradient direction selector function

r_p ∈ R^+_0    Radius

A ∈ R^+_0    Area

F Energy functional

I∈R Intensity value

Np ∈ N Number of control points

R∈R Roundness measure

b ∈ R2 Point vector

c ∈ RD D-dimensional center point

d ∈ R2 Tangent vector

g ∈ R2 Gradient vector

k ∈ R2 Curve

n ∈ R2 Normal vector

o ∈ R3 3-D offset of center of rotation

s ∈ R2 Weight vector

t ∈ RD D-dimensional translation vector

vi ∈ R3 3-D vertex

w ∈ R2 Weight vector

A ∈ RN ×N ×N 3-D atlas volume

C ∈ RN ×D Set of N D-dimensional contours points

E ∈ Rn×n Identity matrix

F ∈ Rm×m Area of ROI

G ∈ Rm×m RBF matrix

H ∈ Rm×m Hessian matrix

I ∈ RN ×M Image slice

K ∈ Rr×ρ Cost array

L ∈ RN ×N ×N 3-D atlas mask

Λ ∈ Rm×m Singular matrix

M ∈ RN ×N ×N 3-D registered atlas mask

O ∈ Rm×m Point matrix

R ∈ RN ×N N − D rotation matrix

Σ ∈ R3×3 Covariance matrix

S ∈ Rm×m Eigenvalue matrix

T ∈ RN ×N N − D rotation matrix

U ∈ Rn×m Unitary matrix

V ∈ RN ×N ×N 3-D DICOM volume

W ∈ Rm×m Unitary matrix

X ∈ Rn×m Data matrix

α∈R Weighting factor

β ∈ R3 Weighting factor

γ∈Z Extension of polar image

κ∈R Curvature

λ1 ∈ N Weighting factor

λ2 ∈ N Weighting factor

µ∈N Weighting factor

µbp ∈ R Mean blood pool intensity

ν∈N Weighting factor

Ω∈N Inside the surface


ϕ∈R Non-linear activation function

ψ Function

σbp ∈ R Standard deviation of the blood pool intensity

θbmax ∈ R Threshold base reached

θc ∈ R Threshold distance between centers

θdiff ∈ R Threshold maximal distance between two contours

θdist ∈ R Threshold maximal distance between points

θepi ∈ R Threshold enlargement of epicard

θκ ∈ R Curvature threshold

θLVOT ∈ R Threshold if LVOT is reached



θo ∈ R Threshold minimal size of binary object

θO ∈ R Otsu’s threshold

θr ∈ R Threshold apex reached

θst ∈ R Scar threshold

υ Signed distance function

ξ∈Z Sub-sampling rate

S ∈ R2 Segmented contour from MACWE

V ∈ RV Set of vertices

Scar Segmentation and Visualization


l∈N Number of scar layers

A ∈ R^+_0    Area of binary image

F ∈R Fractal dimension

Nι ∈ N Number of boxes

Nt ∈ N Number of thresholds

V ∈R Scar volume

ei ∈ R3 Face indices

k ∈ R3 Translation vector

s ∈ R3 Image position patient

D ∈ R4×4 Affine image to patient transformation matrix

F ∈ R3×2 Image orientation patient

B ∈ RN ×N ×N Scar layer

Z ∈ RN ×N ×N Scar mask

∆c ∈ R Pixel spacing for column

∆r ∈ R Pixel spacing for row

∆s ∈ R Slice thickness

F ∈ RN Set of faces

ι∈N Box size



θi ∈ R Threshold

T ∈ {θ_1 < θ_2 < ... < θ_{N_t}}    Set of thresholds
List of Tables
1.1 NYHA classification . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2.1 Influence of TE and TR . . . . . . . . . . . . . . . . . . . . . . . . . 14

3.2 DC for filter based 2-D LGE-MRI . . . . . . . . . . . . . . . . . . . . 49


3.3 MSD for filter based 2-D LGE-MRI . . . . . . . . . . . . . . . . . . . 51
3.4 Optimized hyper-parameters for 2-D random forest . . . . . . . . . . 53
3.5 DC for filter based 2-D LGE-MRI . . . . . . . . . . . . . . . . . . . . 53
3.6 MSD for filter based 2-D LGE-MRI . . . . . . . . . . . . . . . . . . . 54
3.7 Influence of hyper-parameters for 2-D endocardial RF using DC . . . 55
3.8 Influence of hyper-parameters for 2-D epicardial RF using DC . . . . 55

4.2 Parameters for LV segmentation . . . . . . . . . . . . . . . . . . . . . 70


4.3 Parameter values for LV segmentation . . . . . . . . . . . . . . . . . 87
4.4 Results of filter based segmentation using DC . . . . . . . . . . . . . 89
4.5 Results of filter based segmentation using MSD . . . . . . . . . . . . 92
4.6 Quantitative results of smoothing effect . . . . . . . . . . . . . . . . . 96
4.7 Hyper-parameters for 3-D endocardial RF of the first cohort . . . . . 96
4.8 Results of learning based segmentation with smoothing using DC . . 97
4.9 Results of learning based segmentation with smoothing using MSD . 98

5.2 Results for scar quantification . . . . . . . . . . . . . . . . . . . . . . 123

6.2 Scar layer evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

Bibliography
[Abbe 17] E. Abbena, S. Salamon, and A. Gray. Modern differential geometry of
curves and surfaces with Mathematica. CRC press, September 2017.
[Abra 03] W. Abraham and D. Hayes. “Cardiac Resynchronization Therapy for
Heart Failure”. Circulation, Vol. 108, No. 21, pp. 2596–2603, November
2003.
[Adam 94] R. Adams and L. Bischof. “Seeded Region Growing”. IEEE Trans-
actions on Pattern Analysis and Machine Intelligence, Vol. 16, No. 6,
pp. 641–647, June 1994.
[Alba 14] X. Alba, F. i Ventura, M. Rosa, K. Lekadir, C. Tobon-Gomez, C. Hoogen-
doorn, and A. F. Frangi. “Automatic Cardiac LV Segmentation in MRI
Using Modified Graph Cuts with Smoothness and Interslice Constraints”.
Magnetic Resonance in Medicine, Vol. 72, No. 6, pp. 1775–1784, Decem-
ber 2014.
[Amad 04] L. C. Amado, B. L. Gerber, S. N. Gupta, D. W. Rettmann, G. Szarf,
R. Schock, K. Nasir, D. L. Kraitchman, and J. A. Lima. “Accurate
and Objective Infarct Sizing by Contrast-Enhanced Magnetic Resonance
Imaging in a Canine Myocardial Infarction Model”. Journal of the Amer-
ican College of Cardiology, Vol. 44, No. 12, pp. 2383–2389, December
2004.
[Amre 16] M. Amrehn, J. Glasbrenner, S. Steidl, and A. Maier. “Comparative Eval-
uation of Interactive Segmentation Approaches”. In: Bildverarbeitung für
die Medizin 2016, pp. 68–73, Springer, March 2016.
[Amre 17] M. Amrehn, S. Gaube, M. Unberath, F. Schebesch, T. Horz, M. Stru-
mia, S. Steidl, M. Kowarschik, and A. Maier. “UI-Net: Interactive Arti-
ficial Neural Networks for Iterative Image Segmentation Based on a User
Model”. In: C. Rieder, F. Ritter, I. Hotz, and D. Merhof, Eds., EG
VCBM 2017, pp. 143–147, September 2017.
[Andr 11] D. Andreu, A. Berruezo, J. T. Ortiz-Pérez, E. Silva, L. Mont, R. Bor-
ràs, T. M. de Caralt, R. J. Perea, J. Fernández-Armenta, H. Zeljko,
and J. Brugada. “Integration of 3D Electroanatomic Maps and Magnetic
Resonance Scar Characterization Into the Navigation System to Guide
Ventricular Tachycardia Ablation”. Circulation: Arrhythmia and Elec-
trophysiology, Vol. 4, No. 5, pp. 674–683, October 2011.
[Ange 05] E. Angelini, Y. Jin, and A. Laine. “State of the Art of Level Set Methods
in Segmentation and Registration of Medical Imaging Modalities”. In:
Handbook of Biomedical Image Analysis, pp. 47–101, Springer, 2005.
[Auri 11] A. Auricchio and F. W. Prinzen. “Non-Responders to Cardiac Resyn-
chronization Therapy – The Magnitude of the Problem and the Issues”.
Circulation, Vol. 75, No. 3, pp. 521–527, February 2011.


[Bako 13] Z. Bakos, H. Markstad, E. Ostenfeld, M. Carlsson, A. Roijer, and


R. Borgquist. “Combined preoperative information using a bullseye plot
from speckle tracking echocardiography, cardiac CT scan, and MRI scan:
targeted left ventricular lead implantation in patients receiving cardiac
resynchronization therapy”. European Heart Journal – Cardiovascular
Imaging, Vol. 15, No. 5, pp. 523–531, November 2013.

[Bao 14] J. Bao, T. Kurzendorfer, E. Girard, and A. M. Cahill. “Novel technique


using MRI/X-ray overlay to guide sclerotherapy for treatment of low-flow
vascular malformations in children”. In: S. 2014, Ed., Pediatric Radiology,
May 2014.

[Beha 17] J. M. Behar, P. Mountney, D. Toth, S. Reiml, M. Panayiotou, A. Brost,


B. Fahn, R. Karim, S. Claridge, T. Jackson, B. Sieniewicz, N. Patel,
M. O’Neill, R. Razavi, K. Rhode, and C. A. Rinaldi. “Real-Time X-
MRI-Guided Left Ventricular Lead Implantation for Targeted Delivery of
Cardiac Resynchronization Therapy”. JACC: Clinical Electrophysiology,
Vol. 3, No. 8, pp. 803–814, August 2017.

[Beuc 91] S. Beucher. “The Watershed Transformation
Applied to Image Segmentation”. Scanning Microscopy International,
pp. 299–299, 1991.

[Bilc 08] K. Bilchick, V. Dimaano, K. C. Wu, R. Helm, R. Weiss, J. Lima,


R. Berger, G. Tomaselli, D. Bluemke, H. Halperin, T. Abraham,
D. A. Kass, and A. Lardo. “Cardiac magnetic resonance assessment of
dyssynchrony and myocardial scar predicts function class improvement
following cardiac resynchronization therapy”. JACC: Cardiovascular
Imaging, Vol. 1, No. 5, pp. 561–568, September 2008.

[Blee 06] G. Bleeker, T. Kaandorp, H. Lamb, E. Boersma, P. Steendijk, A. de Roos,


E. van der Wall, M. Schalij, and J. Bax. “Effect of Posterolateral Scar
Tissue on Clinical and Echocardiographic Improvement After Cardiac
Resynchronization Therapy”. Circulation, Vol. 113, No. 7, pp. 969–976,
February 2006.

[Born 95] P. Börnert and D. Jensen. “Coronary Artery Imaging at 0.5 T Using
Segmented 3D Echo Planar Imaging”. Magnetic Resonance in Medicine,
Vol. 34, No. 6, pp. 779–785, December 1995.

[Bour 12] F. Bourier, A. Brost, A. Kleinoeder, T. Kurzendorfer, M. Koch, A. Kiraly,


H.-J. Schneider, J. Hornegger, N. Strobel, and K. Kurzidim. “Navigation
for Fluoroscopy-Guided Cryo-Balloon Ablation Procedures of Atrial Fib-
rillation”. In: S. M. Imaging, Ed., Proceedings of SPIE Medical Imaging
2012: Image-Guided Procedures, Robotic Interventions, and Modeling,
February 2012.

[Boyk 01] Y. Boykov, O. Veksler, and R. Zabih. “Fast Approximate Energy Mini-
mization via Graph Cuts”. IEEE Transactions on Pattern Analysis and
Machine Intelligence, Vol. 23, No. 11, pp. 1222–1239, November 2001.

[Bram 10] M. Brambilla, E. Occhetta, M. Ronconi, L. Plebani, A. Carriero, and


P. Marino. “Reducing operator radiation exposure during cardiac resyn-
chronization therapy”. Europace, Vol. 12, No. 12, pp. 1769–1773, October
2010.

[Braz 10] E. V. Brazil, I. Macedo, M. C. Sousa, L. H. de Figueiredo, and L. Velho.


“Sketching Variational Hermite-RBF Implicits”. In: Proceedings of the
Seventh Sketch-Based Interfaces and Modeling Symposium, pp. 1–8, Eu-
rographics Association, June 2010.
[Brei 01] L. Breiman. “Random Forests”. Machine Learning, Vol. 45, No. 1,
pp. 5–32, October 2001.
[Brei 83] L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen. Classification
and Regression Trees. Chapman and Hall/CRC, June 1983.
[Brig 13] M. Brignole, A. Auricchio, G. Baron-Esquivias, P. Bordachar, G. Bo-
riani, O.-A. Breithardt, J. Cleland, J.-C. Deharo, V. Delgado, P. El-
liott, B. Gorenek, C. Israel, C. Leclercq, C. Linde, L. Mont, L. Padeletti,
R. Sutton, and P. Vardas. “2013 ESC Guidelines on cardiac pacing and
cardiac resynchronization therapy”. European Heart Journal, Vol. 15,
No. 8, pp. 1070–1118, June 2013.
[Bros 13] A. Brost, J. Raab, A. Kleinoeder, T. Kurzendorfer, F. Bourier, M. Koch,
M. Hoffmann, N. Strobel, K. Kurzidim, and J. Hornegger. “Medizinis-
che Bildverarbeitung für die minimal-invasive Behandlung von Vorhof-
flimmern”. Deutsche Zeitschrift für klinische Forschung, Innovation und
Praxis (DZKF), Vol. 17, No. 6, pp. 36–41, June 2013.
[Brow 14] R. W. Brown, E. M. Haacke, Y.-C. N. Cheng, M. R. Thompson, and
R. Venkatesan. Magnetic Resonance Imaging: Physical Principles and
Sequence Design. John Wiley & Sons, May 2014.
[Cann 86] J. Canny. “A Computational Approach to Edge Detection”. IEEE Transac-
tions on Pattern Analysis and Machine Intelligence, Vol. 8, No. 6,
pp. 679–698, November 1986.
[Carr 01] J. C. Carr, R. K. Beatson, J. B. Cherrie, T. J. Mitchell, W. R. Fright,
B. C. McCallum, and T. R. Evans. “Reconstruction and Representa-
tion of 3D Objects with Radial Basis Functions”. In: Proceedings of the
28th annual conference on Computer graphics and interactive techniques,
pp. 67–76, ACM, August 2001.
[Case 93] V. Caselles, F. Catté, T. Coll, and F. Dibos. “A geometric model for
active contours in image processing”. Numerische Mathematik, Vol. 66,
No. 1, pp. 1–31, December 1993.
[Case 97] V. Caselles, R. Kimmel, and G. Sapiro. “Geodesic Active Contours”.
International Journal of Computer Vision, Vol. 22, No. 1, pp. 61–79,
February 1997.
[Cawl 10] G. C. Cawley and N. L. Talbot. “On Over-fitting in Model Selection
and Subsequent Selection Bias in Performance Evaluation”. Journal of
Machine Learning Research, Vol. 11, pp. 2079–2107, July 2010.
[Chan 01] T. Chan and L. Vese. “Active Contours Without Edges”. IEEE Transac-
tions on Image Processing, Vol. 10, No. 2, pp. 266–277, February 2001.
[Chen 17] S. Chen, J. Endres, S. Dorn, J. Maier, M. Lell, M. Kachelrieß, and
A. Maier. “A Feasibility Study of Automatic Multi-Organ Segmentation
Using Probabilistic Atlas”. In: Bildverarbeitung für die Medizin 2017,
pp. 218–223, Springer, March 2017.

[CIBC 15] CIBC. 2015. Seg3D: Volumetric Image Segmentation and Visualiza-
tion. Scientific Computing and Imaging Institute (SCI), Download from:
http://www.seg3d.org.
[Ciof 08] C. Ciofolo, M. Fradkin, B. Mory, G. Hautvast, and M. Breeuwer. “Au-
tomatic Myocardium Segmentation in Late-Enhancement MRI”. In:
Biomedical Imaging: From Nano to Macro, 2008. ISBI 2008. 5th IEEE
International Symposium on, pp. 225–228, IEEE, May 2008.
[Cord 11] L. Cordero-Grande, G. Vegas-Sánchez-Ferrero, P. Casaseca-de-la
Higuera, J. A. San-Román-Calvar, A. Revilla-Orodea, M. Martín-
Fernández, and C. Alberola-López. “Unsupervised 4D myocardium seg-
mentation with a Markov Random Field based deformable model”. Med-
ical Image Analysis, Vol. 15, No. 3, pp. 283–301, June 2011.
[Cost 12] A. F. Costa, G. Humpire-Mamani, and A. J. M. Traina. “An Efficient
Algorithm for Fractal Analysis of Textures”. In: Graphics, Patterns and
Images (SIBGRAPI), 2012 25th SIBGRAPI Conference on, pp. 39–46,
IEEE, August 2012.
[Crem 07] D. Cremers, M. Rousson, and R. Deriche. “A Review of Statistical Ap-
proaches to Level Set Segmentation: Integrating Color, Texture, Motion
and Shape”. International Journal of Computer Vision, Vol. 72, No. 2,
pp. 195–215, April 2007.
[Crim 12] A. Criminisi, J. Shotton, and E. Konukoglu. “Decision Forests: A Unified
Framework for Classification, Regression, Density Estimation, Manifold
Learning and Semi-Supervised Learning”. Foundations and Trends R in
Computer Graphics and Vision, Vol. 7, No. 2–3, pp. 81–227, March 2012.
[Crim 13] A. Criminisi and J. Shotton. Decision Forests for Computer Vision and
Medical Image Analysis. Springer Science & Business Media, January
2013.
[Dang 12] H. Dang, Y. Otake, S. Schafer, J. Stayman, G. Kleinszig, and J. Siew-
erdsen. “Robust methods for automatic image-to-world registration in
cone-beam CT interventional guidance”. Medical Physics, Vol. 39, No. 10,
pp. 6484–6498, October 2012.
[Daub 12] J.-C. Daubert, L. Saxon, P. B. Adamson, A. Auricchio, R. D. Berger,
J. F. Beshai, O. Breithardt, M. Brignole, J. Cleland, D. B. DeLurgio,
K. Dickstein, D. V. Exner, M. Gold, R. A. Grimm, D. L. Hayes, C. Israel,
C. Leclercq, C. Linde, J. Lindenfeld, B. Merkely, L. Mont, F. Murgatroyd,
F. Prinzen, S. F. Saba, J. S. Shinbane, J. Singh, A. S. Tang, P. E. Vardas,
B. L. Wilkoff, and J. L. Zamorano. “2012 EHRA/HRS expert consensus
statement on cardiac resynchronization therapy in heart failure: implant
and follow-up recommendations and management”. Heart Rhythm, Vol. 9,
No. 9, pp. 1524–1576, September 2012.
[Dick 08] K. Dickstein, A. Cohen-Solal, G. Filippatos, J. McMurray, P. Ponikowski,
P. A. Poole-Wilson, A. Strömberg, D. Veldhuisen, D. Atar, A. Hoes,
A. Keren, A. Mebazaa, M. Nieminen, S. G. Priori, and K. Swedberg.
“ESC Guidelines for the diagnosis and treatment of acute and chronic
heart failure 2008”. European Journal of Heart Failure, Vol. 10, No. 10,
pp. 933–989, October 2008.
[Dijk 59] E. Dijkstra. “A Note on Two Problems in Connexion with Graphs”.
Numerische Mathematik, Vol. 1, No. 1, pp. 269–271, June 1959.

[Diki 04] E. Dikici, T. O’Donnell, R. Setser, and R. White. “Quantification of


Delayed Enhancement MR Images”. In: Medical Image Computing and
Computer-Assisted Intervention–MICCAI 2004, pp. 250–257, Springer,
September 2004.
[Dolt 13] A. Doltra, B. Hoyem Amundsen, R. Gebker, E. Fleck, and S. Kelle.
“Emerging Concepts for Myocardial Late Gadolinium Enhancement
MRI”. Current Cardiology Reviews, Vol. 9, No. 3, pp. 185–190, August
2013.
[Drei 13] J. F. Dreijer, B. M. Herbst, and J. A. Du Preez. “Left ventricular seg-
mentation from MRI datasets with edge modelling conditional random
fields”. BMC Medical Imaging, Vol. 13, No. 1, p. 24, July 2013.
[Duan 99] J.-R. Duann, S.-H. Chiang, S.-B. Lin, C.-C. Lin, J.-H. Chen, and J.-L.
Su. “Assessment of left ventricular cardiac shape by the use of volumetric
curvature analysis from 3D echocardiography”. Computerized Medical
Imaging and Graphics, Vol. 23, No. 2, pp. 89–101, March 1999.
[Duch 77] J. Duchon. “Splines minimizing rotation-invariant semi-norms in Sobolev
spaces”. In: W. Schempp and K. Zeller, Eds., Constructive theory of
functions of several variables, pp. 85–100, Springer, April 1977.
[Duda 72] R. O. Duda and P. E. Hart. “Use of the Hough transformation to detect
lines and curves in pictures”. Communications of the ACM, Vol. 15, No. 1,
pp. 11–15, January 1972.
[Dyma 04] S. Dymarkowski and H. Bosmans. “Cardiac MRI physics”. In: Clinical
Cardiac MRI, pp. 1–31, Springer, November 2004.
[Estn 11] H. L. Estner, M. M. Zviman, D. Herzka, F. Miller, V. Castro, S. Nazarian,
H. Ashikaga, Y. Dori, R. D. Berger, H. Calkins, A. C. Lardo, and H. R.
Halperin. “The critical isthmus sites of ischemic ventricular tachycardia
are in zones of tissue heterogeneity, visualized by magnetic resonance
imaging”. Heart Rhythm, Vol. 8, No. 12, pp. 1942–1949, December 2011.
[Farn 08] R. Farnoosh and B. Zarpak. “Image Segmentation Using Gaussian Mix-
ture Model”. IUST international journal of engineering science, Vol. 19,
No. 1-2, pp. 29–32, April 2008.
[Fedo 12] A. Fedorov, R. Beichel, J. Kalpathy-Cramer, J. Finet, J.-C. Fillion-
Robin, S. Pujol, C. Bauer, D. Jennings, F. Fennessy, M. Sonka, J. Buatti,
S. Aylward, J. V. Miller, S. Pieper, and R. Kikinis. “3D Slicer as an image
computing platform for the Quantitative Imaging Network”. Magnetic
Resonance Imaging, Vol. 30, No. 9, pp. 1323–1341, November 2012.
[Flet 11] A. S. Flett, J. Hasleton, C. Cook, D. Hausenloy, G. Quarta, C. Ariti,
V. Muthurangu, and J. C. Moon. “Evaluation of Techniques for the
Quantification of Myocardial Scar of Differing Etiology Using Cardiac
Magnetic Resonance”. JACC: Cardiovascular Imaging, Vol. 4, No. 2,
pp. 150–156, February 2011.
[Form 13] C. Forman, R. Grimm, J. M. Hutter, A. Maier, J. Hornegger, and M. O.
Zenge. “Free-Breathing Whole-Heart Coronary MRA: Motion Compen-
sation Integrated into 3D Cartesian Compressed Sensing Reconstruc-
tion”. In: International Conference on Medical Image Computing and
Computer-Assisted Intervention, pp. 575–582, Springer, September 2013.

[Form 14] C. Forman, D. Piccini, R. Grimm, J. Hutter, J. Hornegger, and M. Zenge.


“High-resolution 3D whole-heart coronary MRA: a study on the combi-
nation of data acquisition in multiple breath-holds and 1D residual respi-
ratory motion compensation”. Magnetic Resonance Materials in Physics,
Biology and Medicine, Vol. 27, No. 5, pp. 435–443, October 2014.
[Form 15] C. Forman, D. Piccini, R. Grimm, J. Hutter, J. Hornegger, and M. O.
Zenge. “Reduction of Respiratory Motion Artifacts for Free-Breathing
Whole-Heart Coronary MRA by Weighted Iterative Reconstruction”.
Magnetic Resonance in Medicine, Vol. 73, No. 5, pp. 1885–1895, May
2015.
[Gelf 89] S. B. Gelfand, C. Ravishankar, and E. J. Delp. “An Iterative Growing and
Pruning Algorithm for Classification Tree Design”. In: Systems, Man and
Cybernetics, 1989. Conference Proceedings., IEEE International Confer-
ence on, pp. 818–823, IEEE, February 1989.
[Geur 06] P. Geurts, D. Ernst, and L. Wehenkel. “Extremely randomized trees”.
Machine Learning, Vol. 63, No. 1, pp. 3–42, April 2006.
[Ghes 16] F. C. Ghesu, E. Krubasik, B. Georgescu, V. Singh, Y. Zheng, J. Horneg-
ger, and D. Comaniciu. “Marginal Space Deep Learning: Efficient Archi-
tecture for Volumetric Image Parsing”. IEEE Transactions on Medical
Imaging (Special Issue), Vol. 35, No. 5, pp. 1217–1228, March 2016.
[Gira 14] E. Girard, T. Kurzendorfer, K. Gralewski, N. Strobel, and Y. Dori.
“MRI/X-ray Fusion for Overlay Guidance during Congenital Heart Dis-
ease Catheterization Procedures”. In: ISMRM, Ed., Proceedings of the
22nd Annual Meeting of ISMRM, May 2014.
[Go 14] A. Go, D. Mozaffarian, V. Roger, E. Benjamin, J. Berry,
M. J. Blaha, S. Dai, E. Ford, C. Fox, S. Franco, H. J. Fullerton,
C. Gillespie, S. M. Hailpern, J. A. Heit, V. J. Howard, M. D. Huffman,
S. E. Judd, B. M. Kissela, S. J. Kittner, D. T. Lackland, J. H. Lichtman,
L. D. Lisabeth, R. H. Mackey, D. J. Magid, G. M. Marcus, A. Marelli,
D. B. Matchar, D. K. McGuire, E. R. Mohler, C. S. Moy, M. E. Mussolino,
R. W. Neumar, G. Nichol, D. K. Pandey, N. P. Paynter, M. J. Reeves,
P. D. Sorlie, J. Stein, A. Towfighi, T. N. Turan, S. S. Virani, N. D. Wong,
D. Woo, and M. B. Turner. “Heart Disease and Stroke Statistics–2014
Update: A Report From the American Heart Association”. Circulation,
Vol. 129, No. 3, p. e28, January 2014.
[Grad 06] L. Grady. “Random Walks for Image Segmentation”. IEEE Transac-
tions on Pattern Analysis and Machine Intelligence, Vol. 28, No. 11,
pp. 1768–1783, November 2006.
[Grim 13] R. Grimm, S. Fürst, I. Dregely, C. Forman, J. M. Hutter, S. I. Ziegler,
S. Nekolla, B. Kiefer, M. Schwaiger, J. Hornegger, and T. Block. “Self-
Gated Radial MRI for Respiratory Motion Compensation on Hybrid
PET/MR Systems”. In: International Conference on Medical Image
Computing and Computer-Assisted Intervention, pp. 17–24, Springer,
September 2013.
[Gris 02] M. A. Griswold, P. M. Jakob, R. M. Heidemann, M. Nittka, V. Jellus,
J. Wang, B. Kiefer, and A. Haase. “Generalized autocalibrating partially
parallel acquisitions (GRAPPA)”. Magnetic Resonance in Medicine,
Vol. 47, No. 6, pp. 1202–1210, June 2002.

[Heim 09] T. Heimann and H.-P. Meinzer. “Statistical shape models for 3D medical
image segmentation: A review”. Medical Image Analysis, Vol. 13, No. 4,
pp. 543–563, August 2009.
[Hend 03] A. Hendrix and J. Krempe. “Magnete, Spins und Resonanzen-Eine Ein-
führung in die Grundlagen der Magnetresonanztomographie”. Erlangen:
Siemens AG, 2003.
[Henn 08] A. Hennemuth, A. Seeger, O. Friman, S. Miller, B. Klumpp, S. Oeltze,
and H.-O. Peitgen. “A comprehensive approach to the analysis of contrast
enhanced cardiac MR images”. IEEE Transactions on Medical Imaging,
Vol. 27, No. 11, pp. 1592–1610, November 2008.
[Hu 13] H. Hu, H. Liu, Z. Gao, and L. Huang. “Hybrid segmentation of left ven-
tricle in cardiac MRI using gaussian-mixture model and region restricted
dynamic programming”. Magnetic Resonance Imaging, Vol. 31, No. 4,
pp. 575–584, May 2013.
[Hu 14] H. Hu, Z. Gao, L. Liu, H. Liu, J. Gao, S. Xu, W. Li, and L. Huang. “Au-
tomatic Segmentation of the Left Ventricle in Cardiac MRI Using Local
Binary Fitting Model and Dynamic Programming Techniques”. PloS one,
Vol. 9, No. 12, p. e114760, December 2014.
[Huan 11] S. Huang, J. Liu, L. C. Lee, S. K. Venkatesh, L. L. San Teo, C. Au,
and W. L. Nowinski. “An Image-Based Comprehensive Approach for
Automatic Segmentation of Left Ventricle from Cardiac Short Axis Cine
MR Images”. Journal of Digital Imaging, Vol. 24, No. 4, pp. 598–608,
August 2011.
[Hwan 14a] T. Hwang, E. Girard, T. Kurzendorfer, X. Zhu, and A. M. Cahill.
“First Experience with iGuide Navigational Software Application for
Bone Biopsies in Pediatric Interventional Radiology”. Pediatric Radi-
ology, Vol. 44, No. 1, May 2014.
[Hwan 14b] T. Hwang, T. Kurzendorfer, E. Girard, X. Zhu, and A. M. Cahill. “First
Experience with iGuide Navigational Software Application for Bone Biop-
sies in Pediatric Interventional Radiology”. In: S. 2014, Ed., Journal of
Vascular and Interventional Radiology, March 2014.
[Ijir 13] T. Ijiri, S. Yoshizawa, Y. Sato, M. Ito, and H. Yokota. “Bilateral Her-
mite Radial Basis Functions for Contour-based Volume Segmentation”.
In: Computer Graphics Forum, pp. 123–132, Wiley Online Library, May
2013.
[Jian 14] K. Jiang and X. Yu. “Quantification of regional myocardial wall motion
by cardiovascular magnetic resonance”. Quantitative Imaging in Medicine
and Surgery, Vol. 4, No. 5, p. 345, September 2014.
[Joll 11] M.-P. Jolly, C. Guetter, X. Lu, H. Xue, and J. Guehring. “Automatic
Segmentation of the Myocardium in Cine MR Images Using Deformable
Registration”. In: STACOM, pp. 98–108, Springer, September 2011.
[Joll 86] I. Jolliffe. Principal Component Analysis. Wiley Online Library, 1986.
[Kais 14] M. Kaiser, M. John, T. Heimann, T. Neumuth, and G. Rose. “Improve-
ment of Manual 2D/3D Registration by Decoupling the Visual Influence
of the Six Degrees of Freedom”. In: Biomedical Imaging (ISBI), 2014
IEEE 11th International Symposium on, pp. 766–769, IEEE, April 2014.

[Kape 05] S. Kapetanakis, M. Kearney, A. Siva, N. Gall, M. Cooklin, and M. Mon-


aghan. “Real-Time Three-Dimensional Echocardiography”. Circulation,
Vol. 112, No. 7, pp. 992–1000, August 2005.

[Kari 16] R. Karim, P. Bhagirath, P. Claus, R. J. Housden, Z. Chen,


Z. Karimaghaloo, H.-M. Sohn, L. L. Rodríguez, S. Vera, X. Albà, A. Hen-
nemuth, H.-O. Peitgen, T. Arbel, M. A. G. Ballester, A. F. Frangi,
M. Götte, R. Razavi, T. Schaeffter, and K. Rhode. “Evaluation of state-
of-the-art segmentation algorithms for left ventricle infarct from late
Gadolinium enhancement MR images”. Medical Image Analysis, Vol. 30,
pp. 95–107, May 2016.

[Kass 88] M. Kass, A. Witkin, and D. Terzopoulos. “Snakes: Active Contour


Models”. International Journal of Computer Vision, Vol. 1, No. 4,
pp. 321–331, January 1988.

[Kell 12] P. Kellman and A. Arai. “Cardiac Imaging Techniques for Physicians:
Late Enhancement”. Journal of Magnetic Resonance Imaging, Vol. 36,
No. 3, pp. 529–542, September 2012.

[Kim 00] R. J. Kim, E. Wu, A. Rafael, E.-L. Chen, M. A. Parker, O. Simonetti,


F. J. Klocke, R. O. Bonow, and R. M. Judd. “The use of contrast-
enhanced magnetic resonance imaging to identify reversible myocardial
dysfunction”. New England Journal of Medicine, Vol. 343, No. 20,
pp. 1445–1453, November 2000.

[Klei 10] S. Klein, M. Staring, K. Murphy, M. A. Viergever, and J. P. Pluim.


“elastix: A Toolbox for Intensity-Based Medical Image Registration”.
IEEE Transactions on Medical Imaging, Vol. 29, No. 1, pp. 196–205,
January 2010.

[Kohl 11] T. Kohlberger, M. Sofka, J. Zhang, N. Birkbeck, J. Wetzl, J. Kaftan,


J. Declerck, and S. K. Zhou. “Automatic Multi-Organ Segmentation
Using Learning-Based Segmentation and Level Set Optimization”. In:
International Conference on Medical Image Computing and Computer-
Assisted Intervention, pp. 338–345, Springer, September 2011.

[Kowa 15] C. Kowalewski, F. Heissenhuber, D. Vukajlovic, N. Strobel, F. Bourier,


T. Kurzendorfer, A. Kiraly, W. Wu, A. Kleinöder, A. Brost, M. Hoff-
mann, and K. Kurzidim. “Evaluation of the First Software Tool Featuring
3D Visualization of Cryo-balloon Ablation Catheters in Atrial Fibrillation
Procedures”. In: HRS, Ed., 36th Annual Scientific Meeting, pp. 04–03,
May 2015.

[Kurz 12] T. Kurzendorfer, A. Brost, F. Bourier, M. Koch, K. Kurzidim, J. Horneg-


ger, and N. Strobel. “Cryo-Balloon Catheter Tracking in Atrial Fibril-
lation Ablation Procedures”. In: T. Tolxdorff, T. M. Deserno, H. Han-
dels, and H.-P. Meinzer, Eds., Bildverarbeitung für die Medizin 2012,
pp. 386–391, Berlin / Heidelberg, March 2012.

[Kurz 13] T. Kurzendorfer, A. Brost, C. Jakob, P. Mewes, F. Bourier, M. Koch,


K. Kurzidim, J. Hornegger, and N. Strobel. “Cryo-balloon catheter lo-
calization in fluoroscopic images”. In: D. R. Holmes and Z. R. Yaniv,
Eds., SPIE Medical Imaging 2013: Image-Guided Procedures, Robotic
Interventions, and Modeling, March 2013.

[Kurz 14a] T. Kurzendorfer, E. Girard, K. Gralewski, A. Kiraly, Y. Dori, and


N. Strobel. “Biplane X-Ray Magnetic Resonance Image Fusion Proto-
type for 3D Enhanced Guidance in Cardiac Catheterization Procedures”.
In: IGIC, Ed., 1st Conference on Image-Guided Interventions, October
2014.
[Kurz 14b] T. Kurzendorfer, E. Girard, K. Gralewski, A. Kleinoeder, A. Kiraly,
N. Strobel, and Y. Dori. “New biplane X-ray magnetic resonance image
fusion prototype for 3D enhanced cardiac catheterization in congenital
heart diseases”. Journal of Cardiovascular Magnetic Resonance, Vol. 16,
No. 1, p. O103, January 2014.
[Kurz 14c] T. Kurzendorfer, E. Girard, K. Gralewski, A. Kiraly, N. Strobel, and
Y. Dori. “New Biplane 3D Data Fusion Prototype with Multiple Vi-
sualization Techniques for 3D Enhanced Guidance in Congenital Heart
Disease Catheterizations”. Catheterization and Cardiovascular Interven-
tions, Vol. 83, No. 7, pp. 1192–1241, June 2014.
[Kurz 14d] T. Kurzendorfer, E. Girard, G. Krishnamurthy, and A. M. Cahill. “3D
Fusion of preprocedural MRI with intraprocedural C-arm CT for con-
firmation of bone biopsy location in pediatric interventional radiology”.
Journal of Vascular and Interventional Radiology, Vol. 25, No. 3, p. S130,
March 2014.
[Kurz 14e] T. Kurzendorfer, E. Girard, G. Krishnamurthy, and A. M. Cahill. “3D
Fusion of preprocedural MRI with intraprocedural C-arm CT for confir-
mation of bone biopsy location in pediatric interventional radiology”. In:
S. 2014, Ed., Pediatric Radiology, May 2014.
[Kurz 15] T. Kurzendorfer, A. Brost, C. Forman, M. Schmidt, C. Tillmanns,
and J. Hornegger. “Semi-Automatic Segmentation and Scar Quantifi-
cation of the Left Ventricle in 3-D Late Gadolinium Enhanced MRI”.
In: ESMRMB, Ed., 32nd Annual Scientific Meeting of the ESMRMB,
pp. 318–319, October 2015.
[Kurz 16a] T. Kurzendorfer, C. Forman, M. Schmidt, C. Tillmanns, A. Maier, and
A. Brost. “Fully Automatic Segmentation and Scar Quantification of the
Left Ventricle in 3-D Late Gadolinium Enhanced MRI”. In: M. C. Weiss,
Ed., Book of Abstracts, October 2016.
[Kurz 16b] T. Kurzendorfer, P. Mewes, A. Maier, N. Strobel, and A. Brost. “Cryo-
Balloon Catheter Localization Based on a Support-Vector-Machine Ap-
proach”. IEEE Transactions on Medical Imaging, Vol. 35, No. 8,
pp. 1892–1902, March 2016.
[Kurz 17a] T. Kurzendorfer, A. Brost, C. Forman, and A. Maier. “Automated Left
Ventricle Segmentation in 2-D LGE-MRI”. In: IEEE, Ed., Proceedings of
the 2017 IEEE International Symposium on Biomedical Imaging: From
Nano to Macro, pp. 831–834, April 2017.
[Kurz 17b] T. Kurzendorfer, A. Brost, C. Forman, M. Schmidt, C. Tillmanns, and
A. Maier. “Fully Automatic Segmentation of Papillary Muscles in 3-D
LGE-MRI.”. In: Bildverarbeitung für die Medizin (BVM 2017), March
2017.
[Kurz 17c] T. Kurzendorfer, A. Brost, C. Forman, M. Schmidt, C. Tillmanns,
S. Steidl, and A. Maier. “3-D LGE-MRI Segmentation using a Ran-
dom Forest Classifier and Dynamic Programming”. In: ESMRMB, Ed.,
34th Annual Scientific Meeting of the ESMRMB, October 2017.

[Kurz 17d] T. Kurzendorfer, P. Fischer, N. Mirshahzadeh, T. Pohl, A. Brost, S. Steidl,


and A. Maier. “Rapid Interactive and Intuitive Segmentation of 3-D
Medical Images Using Radial Basis Function Interpolation”. Journal of
Imaging, December 2017.
[Kurz 17e] T. Kurzendorfer, C. Forman, A. Brost, and A. Maier. “Random Forest
Based Left Ventricle Segmentation in LGE-MRI”. In: International Con-
ference on Functional Imaging and Modeling of the Heart, pp. 152–160,
Springer, June 2017.
[Kurz 17f] T. Kurzendorfer, C. Forman, M. Schmidt, C. Tillmanns, A. Maier,
and A. Brost. “Fully Automatic Segmentation of the Left Ventricular
Anatomy in 3-D LGE-MRI”. Journal of Computerized Medical Imaging
and Graphics, Vol. 59, pp. 13–27, July 2017.
[Kurz 17g] T. Kurzendorfer, S. Reiml, A. Brost, D. Toth, M. Panayiotou, P. Mount-
ney, S. Steidl, and A. Maier. “2-D Interactive Scar Layer Visualization”.
In: Image-Guided Interventions Conferences (IGIC 2017), November
2017.
[Kurz 18a] T. Kurzendorfer, K. Breininger, S. Steidl, A. Brost, C. Forman, and
A. Maier. “Left Ventricle Segmentation in LGE-MRI: Filter Based vs.
Learning Based”. In: IEEE Nuclear Science Symposium and Medical
Imaging Conference, Nov. 2018.
[Kurz 18b] T. Kurzendorfer, K. Breininger, S. Steidl, A. Brost, C. Forman, and
A. Maier. “Myocardial Scar Segmentation in LGE-MRI using Fractal
Analysis and Random Forest Classification”. In: 2018 24th International
Conference on Pattern Recognition (ICPR), Aug. 2018.
[Lang 06] R. M. Lang, M. Bierig, R. B. Devereux, F. A. Flachskampf, E. Foster,
P. A. Pellikka, M. H. Picard, M. J. Roman, J. Seward, J. Shanewise,
S. Solomon, K. T. Spencer, M. St. John Sutton, and W. Stewart. “Rec-
ommendations for chamber quantification”. European Journal of Echocar-
diography, Vol. 7, No. 2, pp. 79–108, February 2006.
[Larr 17] A. Larroza, M. P. López-Lereu, J. V. Monmeneu, V. Bodí, and
D. Moratal. “Texture Analysis for Infarcted Myocardium Detection on
Delayed Enhancement MRI”. In: Biomedical Imaging (ISBI 2017), 2017
IEEE 14th International Symposium on, pp. 1066–1069, IEEE, April
2017.
[Leyv 11] F. Leyva, P. Foley, S. Chalil, K. Ratib, R. Smith, F. Prinzen, and A. Au-
ricchio. “Cardiac resynchronization therapy guided by late gadolinium-
enhancement cardiovascular magnetic resonance”. Journal of Cardiovas-
cular Magnetic Resonance, Vol. 13, No. 1, pp. 29–35, June 2011.
[Liao 01] P.-S. Liao, T.-S. Chen, and P.-C. Chung. “A Fast Algorithm for Multilevel
Thresholding”. Journal of Information Science and Engineering, Vol. 17,
No. 5, pp. 713–727, September 2001.
[Liu 12] J. Liu, J. Rapin, T.-c. Chang, P. Schmidt, X. Bi, A. Lefebvre, M. Zenge,
E. Mueller, and M. Nadar. “Regularized reconstruction using redundant
Haar wavelets: A means to achieve high under-sampling factors in non-
contrast-enhanced 4D MRA”. In: Proc. ISMRM, May 2012.
[Lore 87] W. Lorensen and H. Cline. “Marching Cubes: A High Resolution 3D
Surface Construction Algorithm”. In: ACM Siggraph Computer Graphics,
pp. 163–169, ACM, July 1987.

[Lu 12] Y. Lu, Y. Yang, K. A. Connelly, G. A. Wright, and P. E. Radau. “Au-


tomated quantification of myocardial infarction using graph cuts on con-
trast delayed enhanced magnetic resonance images”. Quantitative Imag-
ing in Medicine and Surgery, Vol. 2, No. 2, p. 81, May 2012.
[Ma 12] Y. L. Ma, A. K. Shetty, S. Duckett, P. Etyngier, G. Gijsbers, R. Bullens,
T. Schaeffter, R. Razavi, C. A. Rinaldi, and K. S. Rhode. “An integrated
platform for image-guided cardiac resynchronization therapy”. Physics
in Medicine and Biology, Vol. 57, No. 10, p. 2953, April 2012.
[Mace 11] I. Macedo, J. P. Gois, and L. Velho. “Hermite Radial Basis Functions Im-
plicits”. In: Computer Graphics Forum, pp. 27–42, Wiley Online Library,
August 2011.
[Marq 14] P. Marquez-Neila, L. Baumela, and L. Alvarez. “A morphological ap-
proach to curvature-based evolution of curves and surfaces”. IEEE Trans-
actions on Pattern Analysis and Machine Intelligence, Vol. 36, No. 1,
pp. 2–17, January 2014.
[Maur 93] C. R. Maurer and J. M. Fitzpatrick. “A Review of Medical Image Regis-
tration”. Interactive Image-Guided Neurosurgery, Vol. 17, January 1993.
[McLe 15] K. McLeod, J. Saberniak, and K. Haugaa. “Statistical analysis of ventric-
ular shape of ARVC patients and correlation with clinical diagnostic in-
dices”. Journal of Cardiovascular Magnetic Resonance, Vol. 17, No. Suppl
1, p. P283, February 2015.
[McMu 12] J. McMurray, S. Adamopoulos, S. Anker, A. Auricchio, M. Böhm,
K. Dickstein, V. Falk, G. Filippatos, C. Fonseca, M. A. Gomez-Sanchez,
T. Jaarsma, L. Køber, G. Y. Lip, A. P. Maggioni, A. Parkhomenko,
B. M. Pieske, B. A. Popescu, P. K. Rønnevik, F. H. Rutten, J. Schwitter,
P. Seferovic, J. Stepinska, P. T. Trindade, A. A. Voors, F. Zannad, and
A. Zeiher. “ESC Guidelines for the diagnosis and treatment of acute and
chronic heart failure 2012”. European Journal of Heart Failure, Vol. 14,
No. 8, pp. 803–869, August 2012.
[Mirs 17] N. Mirshahzadeh, T. Kurzendorfer, P. Fischer, T. Pohl, A. Brost,
S. Steidl, and A. Maier. “Radial Basis Function Interpolation for Rapid
Interactive Segmentation of 3-D Medical Images”. In: Annual Conference
on Medical Image Understanding and Analysis, pp. 651–660, Springer,
July 2017.
[Mont 01] J. Montagnat, H. Delingette, and N. Ayache. “A review of deformable
surfaces: topology, geometry and deformation”. Image and Vision Com-
puting, Vol. 19, No. 14, pp. 1023–1040, December 2001.
[Morg 09] J. Morgan and V. Delgado. “Lead positioning for cardiac resynchroniza-
tion therapy: techniques and priorities”. Europace, Vol. 11, No. suppl 5,
pp. v22–v28, November 2009.
[Mors 05] B. S. Morse, T. S. Yoo, P. Rheingans, D. T. Chen, and K. R. Subrama-
nian. “Interpolating implicit surfaces from scattered surface data using
compactly supported radial basis functions”. In: ACM SIGGRAPH 2005
Courses, p. 78, ACM, 2005.
[Mort 98] E. N. Mortensen and W. A. Barrett. “Interactive Segmentation with
Intelligent Scissors”. Graphical Models and Image Processing, Vol. 60,
No. 5, pp. 349–384, September 1998.

[Moun 17] P. Mountney, J. M. Behar, D. Toth, M. Panayiotou, S. Reiml, M.-P. Jolly,


R. Karim, L. Zhang, A. Brost, C. A. Rinaldi, and K. Rhode. “A Planning
and Guidance Platform for Cardiac Resynchronization Therapy”. IEEE
Transactions on Medical Imaging, June 2017.
[Mull 14] K. Müller, A. K. Maier, Y. Zheng, Y. Wang, G. Lauritsch, C. Schwemmer,
C. Rohkohl, J. Hornegger, and R. Fahrig. “Interventional Heart Wall
Motion Analysis with Cardiac C-arm CT Systems”. Physics in Medicine
and Biology, Vol. 59, No. 9, p. 2265, May 2014.
[Mumf 89] D. Mumford and J. Shah. “Optimal Approximations by Piecewise Smooth
Functions and Associated Variational Problems”. Communications on
Pure and Applied Mathematics, Vol. 42, No. 5, pp. 577–685, July 1989.
[Niem 13] H. Niemann. Pattern Analysis and Understanding. Vol. 4, Springer Sci-
ence & Business Media, 2013.
[Niem 83] H. Niemann. Klassifikation von Mustern. Springer-Verlag, May 1983.
[Nish 10] D. G. Nishimura. Principles of Magnetic Resonance Imaging. Stanford
Univ., February 2010.
[Nold 13] M. Nolden, S. Zelzer, A. Seitel, D. Wald, M. Müller, A. M. Franz,
D. Maleike, M. Fangerau, M. Baumhauer, L. Maier-Hein, K. H. Maier-
Hein, H. P. Meinzer, and I. Wolf. “The Medical Imaging Interaction
Toolkit: challenges and advances”. International Journal of Computer
Assisted Radiology and Surgery, Vol. 8, No. 4, pp. 607–620, April 2013.
[Oshe 88] S. Osher and J. A. Sethian. “Fronts Propagating with Curvature-
Dependent Speed: Algorithms Based on Hamilton-Jacobi Formulations”.
Journal of Computational Physics, Vol. 79, No. 1, pp. 12–49, November
1988.
[Otsu 79] N. Otsu. “A Threshold Selection Method from Gray-Level Histograms”.
IEEE Transactions on Systems, Man, and Cybernetics, Vol. 9, No. 1, pp. 62–66, January 1979.
[Park 07] W. Park and G. S. Chirikjian. “Interconversion Between Truncated Carte-
sian and Polar Expansions of Images”. IEEE Transactions on Image
Processing, Vol. 16, No. 8, pp. 1946–1955, July 2007.
[Pete 07] J. Peters, O. Ecabert, C. Meyer, H. Schramm, R. Kneser, A. Groth, and
J. Weese. “Automatic Whole Heart Segmentation in Static Magnetic Res-
onance Image Volumes”. In: International Conference on Medical Image
Computing and Computer-Assisted Intervention – MICCAI, pp. 402–410,
Springer, October 2007.
[Peti 11] C. Petitjean and J.-N. Dacher. “A review of segmentation methods in
short axis cardiac MR images”. Medical Image Analysis, Vol. 15, No. 2,
pp. 169–184, April 2011.
[Pham 00] D. L. Pham, C. Xu, and J. L. Prince. “Current methods in medical image
segmentation”. Annual review of biomedical engineering, Vol. 2, No. 1,
pp. 315–337, August 2000.
[Picc 11] D. Piccini, A. Littmann, S. Nielles-Vallespin, and M. O. Zenge. “Spiral
phyllotaxis: The natural way to construct a 3D radial trajectory in MRI”.
Magnetic Resonance in Medicine, Vol. 66, No. 4, pp. 1049–1056, October
2011.

[Picc 12] D. Piccini, A. Littmann, S. Nielles-Vallespin, and M. O. Zenge. “Res-


piratory Self-Navigation for Whole-Heart Bright-Blood Coronary MRI:
Methods for Robust Isolation and Automatic Segmentation of the Blood
Pool”. Magnetic Resonance in Medicine, Vol. 68, No. 2, pp. 571–579,
August 2012.
[Po 11] M. J. Po, M. B. Srichai, and A. F. Laine. “Quantitative Detection of
Left Ventricular Dyssynchrony from Cardiac Computed Tomography An-
giography”. In: Biomedical Imaging: From Nano to Macro, 2011 IEEE
International Symposium on, pp. 1318–1321, IEEE, April 2011.
[Poni 14] P. Ponikowski, S. Anker, K. AlHabib, M. Cowie, T. Force, S. Hu,
T. Jaarsma, H. Krum, V. Rastogi, L. Rohde, U. C. Samal, H. Shimokawa,
B. B. Siswanto, K. Sliwa, and G. Filippatos. “Heart failure: preventing
disease and death worldwide”. ESC Heart Failure, Vol. 1, No. 1, pp. 4–25,
September 2014.
[Pop 13] M. Pop, N. R. Ghugre, V. Ramanan, L. Morikawa, G. Stanisz, A. J.
Dick, and G. A. Wright. “Quantification of fibrosis in infarcted swine
hearts by ex vivo late gadolinium-enhancement and diffusion-weighted
MRI methods”. Physics in Medicine and Biology, Vol. 58, No. 15, p. 5009,
July 2013.
[Qian 15] X. Qian, Y. Lin, Y. Zhao, J. Wang, J. Liu, and X. Zhuang. “Segmen-
tation of myocardium from cardiac MR images using a novel dynamic
programming based segmentation method”. Medical Physics, Vol. 42,
No. 3, pp. 1424–1435, March 2015.
[Rajc 14] M. Rajchl, J. Stirrat, M. Goubran, J. Yu, D. Scholl, T. M. Peters,
and J. A. White. “Comparison of semi-automated scar quantifica-
tion techniques using high-resolution, 3-dimensional late-gadolinium-
enhancement magnetic resonance imaging”. The International Journal
of Cardiovascular Imaging, Vol. 31, No. 2, pp. 349–357, February 2014.
[Rash 15] S. Rashid, S. Rapacchi, K. Shivkumar, A. Plotnik, P. Finn, and P. Hu.
“Modified wideband 3D late gadolinium enhancement (LGE) MRI for
patients with implantable cardiac devices”. Journal of Cardiovascular
Magnetic Resonance, Vol. 17, No. Suppl 1, p. Q26, February 2015.
[Reim 16] S. Reiml, D. Toth, M. Panayiotou, B. Fahn, R. Karim, J. M. Behar, C. A.
Rinaldi, R. Razavi, K. S. Rhode, A. Brost, and P. Mountney. “Interac-
tive Visualization for Scar Transmurality in Cardiac Resynchronization
Therapy”. In: SPIE Medical Imaging, pp. 97862S–97862S, International
Society for Optics and Photonics, March 2016.
[Reim 17a] S. Reiml, T. Kurzendorfer, D. Toth, P. Mountney, K. Rhode, A. Maier,
and A. Brost. “Automatic Layer Gerneration for Scar Transmurality
Visualization”. In: Bildverarbeitung für die Medizin (BVM 2017), March
2017.
[Reim 17b] S. Reiml, T. Kurzendorfer, D. Toth, P. Mountney, S. Steidl, A. Brost, and
A. Maier. “Automatic Vertebrae Segmentation in Fluoroscopic Images
for Electrophysiology”. In: 2017 IEEE Nuclear Science Symposium and
Medical Imaging Conference Record (NSS/MIC), October 2017.
[Robe 72] W. C. Roberts and L. S. Cohen. “Left Ventricular Papillary Muscles”.
Circulation, Vol. 46, No. 1, pp. 138–154, July 1972.

[Rogo 16] M. Rogosnitzky and S. Branch. “Gadolinium-based contrast agent toxi-


city: a review of known and proposed mechanisms”. Biometals, Vol. 29,
No. 3, pp. 365–376, April 2016.
[Ronn 15] O. Ronneberger, P. Fischer, and T. Brox. “U-Net: Convolutional Net-
works for Biomedical Image Segmentation”. In: International Confer-
ence on Medical Image Computing and Computer-Assisted Intervention
– MICCAI, pp. 234–241, Springer, November 2015.
[Ruec 99] D. Rueckert, L. I. Sonoda, C. Hayes, D. L. Hill, M. O. Leach, and D. J.
Hawkes. “Nonrigid Registration Using Free-Form Deformations: Appli-
cation to Breast MR Images”. IEEE Transactions on Medical Imaging,
Vol. 18, No. 8, pp. 712–721, August 1999.
[Sant 14] A. Santos, E. Kraigher-Krainer, N. Bello, B. Claggett, M. R. Zile,
B. Pieske, A. A. Voors, J. J. McMurray, M. Packer, T. Bransford,
M. Lefkowitz, A. A. Shah, and S. D. Solomon. “Left ventricular dyssyn-
chrony in patients with heart failure and preserved ejection fraction”.
European Heart Journal, Vol. 35, No. 1, pp. 42–47, January 2014.
[Schr 92] M. Schroeder. Fractals, Chaos, Power Laws: Minutes from an Infinite
Paradise. W.H. Freeman, July 1992.
[Shea 03] J. Shea and M. Sweeney. “Cardiac Resynchronization Therapy: A Pa-
tient’s Guide”. Circulation, Vol. 108, No. 9, pp. e64–e66, September
2003.
[Shet 12] A. Shetty, S. Duckett, M. Ginks, Y. Ma, M. Sohal, J. Bostock,
S. Kapetanakis, J. Singh, K. Rhode, M. Wright, M. D. O’Neill, J. S. Gill,
G. Carr-White, R. Razavi, and C. A. Rinaldi. “Cardiac magnetic resonance-
derived anatomy, scar, and dyssynchrony fused with fluoroscopy to guide
LV lead placement in cardiac resynchronization therapy: a comparison
with acute haemodynamic measures and echocardiographic reverse re-
modelling”. European Heart Journal – Cardiovascular Imaging, Vol. 14,
No. 7, pp. 692–699, November 2012.
[Shet 14] A. Shetty, M. Sohal, Z. Chen, M. Ginks, J. Bostock, S. Amraoui, K. Ryu,
S. Rosenberg, S. Niederer, J. Gill, G. Carr-White, R. Razavi, and C. Ri-
naldi. “A comparison of left ventricular endocardial, multisite, and multi-
polar epicardial cardiac resynchronization: an acute haemodynamic and
electroanatomical study”. Europace, Vol. 16, No. 6, pp. 873–879, February
2014.
[Shin 14] T. Shin, M. Lustig, D. Nishimura, and B. Hu. “Rapid single-breath-hold
3D late gadolinium enhancement cardiac MRI using a stack-of-spirals
acquisition”. Journal of Magnetic Resonance Imaging, Vol. 40, No. 6,
pp. 1496–1502, December 2014.
[Suin 14] A. Suinesiaputra, B. R. Cowan, A. O. Al-Agamy, M. A. Elattar, N. Ay-
ache, A. S. Fahmy, A. M. Khalifa, P. Medrano-Gracia, M.-P. Jolly, A. H.
Kadish, D. C. Lee, J. Margeta, S. K. Warfield, and A. A. Young. “A
collaborative resource to build consensus for automated left ventricular
segmentation of cardiac MR images”. Medical Image Analysis, Vol. 18,
No. 1, pp. 50–62, January 2014.
[Tao 10] Q. Tao, J. Milles, K. Zeppenfeld, H. J. Lamb, J. J. Bax, J. H. Reiber, and
R. J. van der Geest. “Automated Segmentation of Myocardial Scar in Late
Enhancement MRI Using Combined Intensity and Spatial Information”.
Magnetic Resonance in Medicine, Vol. 64, No. 2, pp. 586–594, May 2010.

[Tao 14] Q. Tao, S. R. Piers, H. J. Lamb, and R. J. van der Geest. “Automated Left
Ventricle Segmentation in Late Gadolinium-Enhanced MRI for Objective
Myocardial Scar Assessment”. Journal of Magnetic Resonance Imaging,
November 2014.

[Tayl 05] R. B. Taylor. Taylor’s Cardiovascular Diseases: A Handbook. Vol. 79,


Springer, 2005.

[Thev 08] P. Thévenaz, M. Bierlaire, and M. Unser. “Halton Sampling for Image
Registration Based on Mutual Information”. Sampling Theory in Signal
and Image Processing, Vol. 7, No. 2, March 2008.

[Toma 98] C. Tomasi and R. Manduchi. “Bilateral filtering for gray and color im-
ages”. In: Computer Vision, 1998. Sixth International Conference on,
pp. 839–846, January 1998.

[Toth 16] D. Toth, M. Panayiotou, A. Brost, J. M. Behar, C. A. Rinaldi, K. S.


Rhode, and P. Mountney. “Registration with Adjacent Anatomical Struc-
tures for Cardiac Resynchronization Therapy Guidance”. In: Interna-
tional Workshop on Statistical Atlases and Computational Models of the
Heart, pp. 127–134, Springer, October 2016.

[Toth 18] D. Toth, S. Miao, T. Kurzendorfer, C. A. Rinaldi, R. Liao, T. Mansi,


K. Rhode, and P. Mountney. “3D/2D model-to-image registration by im-
itation learning for cardiac procedures”. International Journal of Com-
puter Assisted Radiology and Surgery, Vol. 13, No. 8, pp. 1141–1149, Aug
2018.

[Turk 02] G. Turk and J. F. O’brien. “Modelling with Implicit Surfaces that Interpo-
late”. ACM Transactions on Graphics (TOG), Vol. 21, No. 4, pp. 855–873,
October 2002.

[Unbe 15] M. Unberath, A. Maier, D. Fleischmann, J. Hornegger, and R. Fahrig.


“Comparative Evaluation of Two Registration-based Segmentation Algo-
rithms: Application to Whole Heart Segmentation in CT”. In: S. Leon-
hardt, Ed., Proceedings of the GRC, pp. 5–8, June 2015.

[Vezh 04] V. Vezhnevets and V. Konouchine. “GrowCut: Interactive Multi-Label


N-D Image Segmentation by Cellular Automata”. In: Proc. of Graphicon,
pp. 150–156, November 2004.

[Wei 11] D. Wei, Y. Sun, P. Chai, A. Low, and S. H. Ong. “Myocardial Seg-
mentation of Late Gadolinium Enhanced MR Images by Propagation of
Contours from Cine MR Images”. In: Medical Image Computing and
Computer-Assisted Intervention–MICCAI 2011, pp. 428–435, Springer,
September 2011.

[Wei 13] D. Wei, Y. Sun, S.-H. Ong, P. Chai, L. L. Teo, and A. F. Low. “Three-
dimensional segmentation of the left ventricle in late gadolinium en-
hanced MR images of chronic infarction combining long-and short-axis
information”. Medical Image Analysis, Vol. 17, No. 6, pp. 685–697, Au-
gust 2013.

[Weis 08] D. Weishaupt, V. Köchli, and B. Marincek. How does MRI work?: An In-
troduction to the Physics and Function of Magnetic Resonance Imaging.
Springer Science & Business Media, October 2008.

[Wetz 17] J. Wetzl, F. Lugauer, R. Kroeker, M. Schmidt, A. Maier, and C. Forman.


“Free-Breathing Self-Navigated Isotropic 3-D CINE Imaging of the Whole
Heart using Adaptive Triggering and Retrospective Gating”. In: International Society for
Magnetic Resonance in Medicine, Ed., Proceedings of the 25th Annual
Meeting of the ISMRM, April 2017.
[Will 07] J. Willerson, J. Cohn, H. Wellens, and D. Holmes. Cardiovascular
Medicine. Vol. 3, Springer, March 2007.
[Zhen 08] Y. Zheng, A. Barbu, B. Georgescu, M. Scheuering, and D. Comani-
ciu. “Four-Chamber Heart Modeling and Automatic Segmentation for
3-D Cardiac CT Volumes Using Marginal Space Learning and Steer-
able Features”. IEEE Transactions on Medical Imaging, Vol. 27, No. 11,
pp. 1668–1681, November 2008.
[Zhua 10] X. Zhuang, K. S. Rhode, R. S. Razavi, D. J. Hawkes, and S. Ourselin. “A
Registration-Based Propagation Framework for Automatic Whole Heart
Segmentation of Cardiac MRI”. IEEE Transactions on Medical Imaging,
Vol. 29, No. 9, pp. 1612–1625, April 2010.