Full Chapter Practical Pediatric Urology An Evidence Based Approach 1St Edition Prasad Godbole PDF
Practical Pediatric
Urology
An Evidence-Based Approach
Prasad Godbole
Duncan T. Wilcox
Martin A. Koyle
Editors
Martin A. Koyle
Department of Surgery and IHPME
University of Toronto Paediatric Urology
The Hospital for Sick Children
Toronto, ON
Canada
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Foreword
Paediatric Urology is one of the youngest surgical specialties and much of its devel-
opment has occurred during the era of evidence based medicine. By promoting
clinical and experimental research in Pediatric Urology, specialist societies such as
the Society for Pediatric Urology and European Society for Paediatric Urology have
played valuable roles in ensuring that the new specialty has been built on firm sci-
entific foundations. The establishment of the Journal of Pediatric Urology in 2005
was another influential landmark in Pediatric Urology's development as an evidence
based specialty.
Opportunities to attain the levels of evidence achieved in other areas of medical
research are inevitably more limited in a small volume surgical specialty such as
Pediatric Urology. With certain exceptions (such as the RIVUR, PRIVENT and
Swedish Reflux trials) it has proved very difficult to design and conduct prospective
randomised controlled trials with the statistical power required to meet the highest
levels of evidence based medicine. However, to some extent this deficiency is now being addressed by systematic reviews and meta-analyses of case-controlled clinical
studies. Other obstacles to high quality research include the difficulty in attracting
funding and the length of time before long term outcomes can be reliably evaluated
in adolescence or adulthood. Against this challenging background the editors and
contributors have set themselves a daunting task in seeking to define the scientific
evidence underpinning best practice in clinical Pediatric Urology.
The opening chapter sets the scene by providing an authoritative account of the
history and development of evidence based medicine. This is followed by chapters
on the development of evidence based guidelines, antibiotic usage and the role of
effective pain management: important topics which rarely feature in standard
Pediatric Urology textbooks. For the most part, Pediatric Urologists are practical
clinicians for whom evidence based medicine exists primarily to assist them in car-
ing for their young patients to the highest standards. With this in mind, most of this
excellent book is devoted to enabling Pediatric Urologists to adopt an evidence
based approach to the management of a wide range of practical problems encoun-
tered in their day to day clinical practice. This book will undoubtedly serve as a
valuable resource for trainees—particularly those preparing for examinations. In
addition it will provide established Pediatric Urologists with an opportunity to
appraise their own specialist practice and decision-making in the context of the
latest published evidence. The editors and contributors have distilled a wealth of
valuable and clinically relevant evidence into this textbook and are to be com-
mended on their achievement.
David F. M. Thomas
Emeritus Consultant Paediatric Urologist
Leeds Teaching Hospitals & Professor of Paediatric Surgery
University of Leeds
Leeds, England
Preface
The world of pediatric urology is progressing at a rapid pace. With the advances in
pharmaceuticals, technologies and greater understanding of disease process, treat-
ment decisions can be based on evidence as opposed to anecdotal experience.
Furthermore, increasing patient/parental awareness of pediatric urological conditions mandates that the Pediatric Urologist discuss the various treatment options with the patient and family to enable a shared decision-making approach.
With this background, we are delighted to introduce this book ‘Practical Pediatric
Urology—An Evidence Based Approach’. The book is predominantly in a question and answer, scenario-based format to enable higher order thinking and decision making. Wherever evidence is available, this has been cited in the discussion. A small minority of chapters are descriptive in nature where the scenario-based
format would not have been suitable.
This book would be of use to all Pediatric Urologists, Adult Urologists practising
Pediatric Urology, Pediatric Urology and Pediatric Surgery trainees as well as
Trainers. The book can also be used as a self assessment tool for preparation of
Board exams.
We would like to thank the authors for their outstanding and timely contributions. A special thanks to Ms Madona Samuel, project coordinator, for her periodic prompting to ensure our consistent focus on the project, and to Melissa Morton, Executive Editor at Springer, for giving us the opportunity to publish this important book.
Finally it goes without saying that we are grateful to our families for their
patience and support throughout this process.
The Evolution of Evidence Based Clinical Medicine 1
P. Dimitri
Learning Objectives
• To understand the rationale and need for evidence based medicine for clinical
practice
• To recognise the hierarchies and systems designed to support the evaluation and
classification of clinical evidence
• To understand the challenges and controversies with current systems used in
evidence based medicine
1.1 Introduction
P. Dimitri (*)
Sheffield Children’s NHS Foundation Trust, Sheffield, UK
Sheffield Hallam University, Sheffield, UK
University of Sheffield, Sheffield, UK
e-mail: [email protected]
providing the foundations that have gone on to support the principles of EBM [2].
Whilst David Sackett is considered the father of EBM, it was not until nearly a
decade after the first principles of EBM were published that the term ‘evidence
based medicine’ was coined by Gordon Guyatt, the Program Director of Internal
Medicine and Professor of Epidemiology, Biostatistics, and Medicine at McMaster
University [3]. Sackett believed that the truth of medicine could only be identified
through randomised-controlled trials which eliminated the bias of clinical opinion
when conducted appropriately. Furthermore, Sackett distinguished the difference
between EBM and critical appraisal by defining the three principles of EBM (a)
consideration of the patient’s expectations; (b) clinical skills; and (c) the best evi-
dence available [4]. Thus whilst EBM is founded on robust clinical research evidence, there is a recognition that practitioners have clinical expertise, reflected in effective and efficient diagnosis, and that EBM incorporates the individual patient’s predicaments, rights, and preferences. In 1994 Sackett moved to Oxford, United Kingdom
where he worked as a clinician and Director of the Centre for Evidence-Based
Medicine. From here Sackett lectured widely across the UK and Europe on EBM. He
would begin his visits by doing a round on patients admitted the previous night with
young physicians and showing evidence based medicine in action. Junior doctors
learned how to challenge senior opinions encapsulated in expert based medicine
through evidence based medicine [5]. Based upon the growing support and recog-
nised need for EBM, in 1993 Iain Chalmers co-founded the Cochrane Centre which
has evolved to become an internationally renowned centre for the generation of
EBM. Thus the foundations of EBM had been laid to pave the way for a revolution
in interventional medical care, robust in quality, but subsequently open to challenge
from critics that believed that EBM had developed into an overly rigid system lim-
ited by generalisation.
Over the subsequent decade the popularity and recognition for EBM grew exponen-
tially. In 1992, only two article titles included the term EBM; within 5 years, more
than 1000 articles had used the term EBM [6]. A survey in 2004 identified 24 dedi-
cated textbooks, nine academic journals, four computer programs, and 62 internet
portals all dedicated to the teaching and development of EBM [7]. Evidence based
medicine derives its roots from clinical epidemiology. Epidemiology and its meth-
ods of quantification, surveillance, and control have been traced back to social pro-
cesses in eighteenth and nineteenth-century Europe. Toward the middle of the
twentieth century doctors began to apply these tools to the evaluation of clinical
treatment of individual patients [6]. The new field of clinical epidemiology was
established in 1938 by John R Paul. In 1928, Paul joined the faculty of the Yale
School of Medicine as a Professor of Internal Medicine and subsequently held the
position of Professor of Preventive Medicine from 1940 until his retirement. Paul
established the Yale Poliomyelitis Study Unit in 1931 together with James D. Trask.
It was through this work that the concept of ‘clinical epidemiology’ was established, in which the path of disease outbreaks in small communities was directly studied.
The concepts of clinical epidemiology were furthered by Alvan Feinstein, Professor
of Medicine and Epidemiology at Yale University School of Medicine from 1969.
Feinstein introduced the use of statistical research methods into the quantification of
clinical practices and study of the medical decision-making process. In 1967
Feinstein challenged the traditional process of clinical decision making based upon
belief and experience in his publication ‘Clinical Judgement’ [8], followed shortly
by Archie Cochrane’s publication ‘Effectiveness and Efficiency’ describing the lack
of controlled trials supporting many practices that had previously been assumed to
be effective [9]. In 1968, McMaster University opened its new medical school in Canada. The
new medical school introduced an integrative curriculum called ‘problem-based
learning’ combining the study of basic sciences and clinical medicine using clinical
problems in a tutorship system. The McMaster Medical School established the
world’s first department of clinical epidemiology and biostatistics, which was
directed by David Sackett. The process of problem-based learning led by Sackett
was fundamental to the curriculum; Alvan Feinstein was invited as a visiting
Professor for the first 2 years of the programme to combine clinical epidemiology
with the process of problem-based learning. Thus a new approach to clinical epidemiology arose, combining the methods of a problem-based learning curriculum, practical clinical problem solving and the analysis of medical decision making. In 1978
they developed a series of short courses at McMaster University based upon the use
of clinical problems as the platform for enquiry and discussion. This approach was
described in the Departmental Clinical Epidemiology and Biostatistics Annual
report 1979; ‘these courses consider the critical assessment of clinical information
pertaining to the selection and interpretation of diagnostic tests, the study of etiol-
ogy and causation, the interpretation of investigation of the clinical course and natu-
ral history of human disease, the assessment of therapeutic claims and the
interpretation of studies of the quality of clinical care’. The approach adopted in
these courses demonstrated that what we now know as EBM was practiced prior to
its formal introduction into the medical literature. These courses were the catalyst
for the landmark series of publications in the Canadian Medical Association Journal
in 1981 [2] describing the methodological approaches to critical appraisal, culmi-
nating in Guyatt’s publication in 1992 in JAMA (Journal of the American Medical
Association) popularising the term ‘Evidence Based Medicine’ [10]. Guyatt stated
‘a new paradigm for medical practice is emerging. Evidence-based medicine de-
emphasises intuition, unsystematic clinical experience, and pathophysiologic ratio-
nale as sufficient grounds for clinical decision making and stresses the examination
of evidence from clinical research’, thus challenging past medical knowledge,
established medical literature and practice formed by consensus and expertise based
upon knowledge derived from clinical research, epidemiology, statistics and bioin-
formatics. However, to ensure that the principles of EBM carried credibility and authority from consensus, this and other subsequent publications were written by an anonymous Evidence-Based Medicine Working Group to ensure the greatest impact. JAMA, under the editorial authority of Drummond Rennie, became one of the first and principal proponents of EBM; of 22 articles on EBM published in the
first 3 years, 12 were published by JAMA, with a further 32 published over the succeeding 8 years [6]. The terminology ‘evidence-based’ had been previously used by
David Eddy in the study of population policies from 1987 and subsequently pub-
lished in 1990 in JAMA, describing evidence-based guidelines and policies stating
that policy must be consistent with and supported by evidence [11, 12].
EBM is an approach to medical practice intended to optimise decision-making
by emphasising the use of evidence from well-designed research rather than the
beliefs of practitioners. The process of EBM adopts an epistemological and prag-
matic approach dictating that the strongest recommendations in clinical practice are
founded on robust clinical research approaches that include meta-analyses, system-
atic reviews, and randomised controlled trials. Conversely, recommendations
founded upon less robust research approaches (albeit well-recognised) such as the
case-control study result in clinical recommendations that are regarded as less
robust. Whilst the original framework of EBM was designed to improve the deci-
sion making process by clinicians for individual or groups of patients, the principles
of EBM have extended towards establishing guidelines, health service administra-
tion and policy known as evidence based policy and evidence based practice. More
recently there has been a recognition that clinical ‘interpretation’ of research and
clinical ‘judgement’ may also influence decisions on individual patients or small
groups of patients whereas policies applied to large populations need to be founded
on a robust evidence base that demonstrates effectiveness. Thus a modified defini-
tion of EBM embodies these two approaches—evidence-based medicine is a set of
principles and methods intended to ensure that to the greatest extent possible, medi-
cal decisions, guidelines, and other types of policies are based on and consistent
with good evidence of effectiveness and benefit [13]. Following the establishment of the National Institute for Clinical Excellence (NICE) in the UK in 1999, there was a recognition that evidence should be classified according to the rigour of its experimental design, and that the strength of a recommendation should depend on the strength of the evidence.
Archie Cochrane conducted his first trial whilst imprisoned during World War II, defining the principles of the randomised controlled trial. Through later work Cochrane demonstrated the
value of epidemiological studies and the threat of bias [15]. Cochrane’s most influ-
ential mark on healthcare was his 1971 publication, ‘Effectiveness and Efficiency’
strongly criticising the lack of reliable evidence behind many of the commonly
accepted healthcare interventions at the time, highlighting the need for evidence in
medicine [9]. His call for a collection of systematic reviews led to the creation of
The Cochrane Collaboration. The framework for the Cochrane Collaboration came
from preceding work by Iain Chalmers and Enkin through their development of the
Oxford Database of Perinatal Trials [16]. Through their work in this field, Chalmers
and Enkin uncovered practices that were unsupported by evidence and in some cases dangerous, thus acting as a catalyst for adopting the same approach to establish an evidence base across all medical specialities.
The Cochrane Collaboration has grown into a global independent network of
researchers, professionals, patients, carers and people interested in health from 130 countries, with a vision ‘to improve health by promoting the production, understanding and use of high quality research evidence by patients, healthcare professionals and those who organise and fund our healthcare services’ (uk.cochrane.org).
The Cochrane Library now provides a comprehensive resource of medical evidence
for clinicians and researchers across the globe. The aim of the Cochrane Library is
to prepare, maintain, and promote the accessibility of systematic reviews of the
effects of healthcare interventions. It contains four databases: the Cochrane Database
of Systematic Reviews (CDSR), the Database of Abstracts of Reviews of
Effectiveness (DARE), the Cochrane Controlled Trials Register (CCTR), and the
Cochrane Review Methodology Database (CRMD) [17].
based upon a systematic review of cohort studies. This is because prognosis may be determined by the impact of not introducing an intervention compared to the use of an intervention. Thus well powered prospective cohort analyses or systematic reviews would provide the best evidence (Table 1.2). The Oxford CEBM
state ‘The levels are not intended to provide you with a definitive judgment about the
quality of evidence. There will inevitably be cases where ‘lower level’ evidence—say
from an observational study with a dramatic effect—will provide stronger evidence
than a ‘higher level’ study—say a systematic review of few studies leading to an
inconclusive result’. Moreover, the Oxford CEBM website states that the levels
have not been established to provide a recommendation and will not determine
whether the correct question is being answered. The following questions need to be
considered to determine a recommendation [19].
1. Do you have good reason to believe that your patient is sufficiently similar to the patients in the studies you have examined? Information about the size of the variance of the treatment effects is often helpful here: the larger the variance, the greater the concern that the treatment might not be useful for an individual.

Table 1.2 Oxford Centre for Evidence-Based Medicine 2011 Levels of Evidence [19]

How common is the problem?
• Step 1 (Level 1): Local and current random sample surveys (or censuses)
• Step 2 (Level 2): Systematic review of surveys that allow matching to local circumstances
• Step 3 (Level 3): Local non-random sample
• Step 4 (Level 4): Case-series
• Step 5 (Level 5): n/a

Is this diagnostic or monitoring test accurate? (Diagnosis)
• Step 1 (Level 1): Systematic review of cross-sectional studies with consistently applied reference standard and blinding
• Step 2 (Level 2): Individual cross-sectional studies with consistently applied reference standard and blinding
• Step 3 (Level 3): Non-consecutive studies, or studies without consistently applied reference standards
• Step 4 (Level 4): Case-control studies, or studies with a poor or non-independent reference standard
• Step 5 (Level 5): Mechanism-based reasoning

What will happen if we do not add a therapy? (Prognosis)
• Step 1 (Level 1): Systematic review of inception cohort studies
• Step 2 (Level 2): Inception cohort studies
• Step 3 (Level 3): Cohort study or control arm of a randomised trial
• Step 4 (Level 4): Case-series or case-control studies, or poor quality prognostic cohort study
• Step 5 (Level 5): n/a

Does this intervention help? (Treatment Benefits)
• Step 1 (Level 1): Systematic review of randomised trials or n-of-1 trials
• Step 2 (Level 2): Randomised trial or observational study with dramatic effect
• Step 3 (Level 3): Non-randomised controlled cohort/follow-up study
• Step 4 (Level 4): Case-series, case-control studies, or historically controlled studies
• Step 5 (Level 5): Mechanism-based reasoning
(continued)
2. Does the treatment have a clinically relevant benefit that outweighs the harms? It
is important to review which outcomes are improved, as a statistically significant
difference (e.g. systolic blood pressure falling by 1 mmHg) may be clinically
irrelevant in a specific case. Moreover, any benefit must outweigh the harms.
Such decisions will inevitably involve patients’ value judgments, so discussion
with the patient about their views and circumstances is vital.
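The distinction between statistical and clinical significance drawn above can be made concrete. The sketch below follows the chapter's 1 mmHg systolic blood pressure example; the standard deviation, sample size and significance threshold are invented for illustration, and the two-sample z test is used only as a minimal worked example:

```python
import math

def two_sample_z(mean_a, mean_b, sd, n_per_arm):
    """Two-sample z test for equal-sized arms sharing a common SD."""
    se = sd * math.sqrt(2 / n_per_arm)        # standard error of the difference
    z = (mean_a - mean_b) / se
    p = math.erfc(abs(z) / math.sqrt(2))      # two-sided p-value
    return z, p

# Hypothetical trial: systolic BP falls by 1 mmHg (SD 15 mmHg), 20,000 per arm.
z, p = two_sample_z(121.0, 120.0, sd=15.0, n_per_arm=20_000)
print(f"z = {z:.2f}, p = {p:.1e}")
# The difference is highly 'significant' statistically, yet clinically trivial.
```

With a large enough sample, almost any difference crosses a significance threshold, which is precisely why the benefit must be weighed against clinical relevance and harms.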
Other frameworks and tools exist for the assessment of evidence. The PRISMA
statement is a checklist and flow diagram to help authors of systematic reviews and meta-analyses assess and report on the benefits and harms of a healthcare inter-
vention. The Scottish Intercollegiate Guidelines Network (SIGN) Methodology
provides checklists to appraise studies and develop guidelines for healthcare inter-
ventions. The CONsolidated Standards of Reporting Trials (CONSORT) is an
evidence-based tool to help researchers, editors and readers assess the quality of the
reports of trials and the PEDro scale considers two aspects of trial quality, namely
internal validity of the trial and the value of the statistical information.
1.3.3 Grading
• Risk of bias: a judgement made on the basis of the chance that bias in the included studies has influenced the estimate of effect.
• Imprecision: a judgement made on the basis of the chance that the observed estimate of effect could change completely.
• Indirectness: a judgement made on the basis of differences in the characteristics of how the study was conducted and how the results are actually going to be applied.
Objective tools may be used to assess each of the domains. For example, tools
exist for assessing the risk of bias in randomised and non-randomised trials [23–25].
The GRADE approach to rating imprecision focuses on the 95% confidence interval
around the best estimate of the absolute effect. Thus, certainty is lower if the clinical
decision is likely to be different if the true effect was at the upper versus the lower
end of the confidence interval. Indirectness is dictated by whether the population studied is different from those for whom the recommendation applies, or whether the outcomes studied are different from those which are required.
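As a concrete illustration of the imprecision judgement, the sketch below computes the 95% confidence interval around an absolute risk difference using the standard normal approximation. The event counts are hypothetical; if a clinician would act differently at the two ends of the interval, GRADE rates certainty down for imprecision:

```python
import math

def risk_difference_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Absolute risk difference with a 95% CI (normal approximation)."""
    p_t, p_c = events_t / n_t, events_c / n_c
    rd = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return rd, (rd - z * se, rd + z * se)

# Hypothetical trial: 30/200 events on treatment vs 45/200 on control.
rd, (low, high) = risk_difference_ci(30, 200, 45, 200)
# The interval runs from a sizeable benefit to essentially no effect, so the
# clinical decision could differ at either end: rate down for imprecision.
```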
The GRADE system also provides a framework for assessing observational stud-
ies but conversely utilises a positive approach to assessing the quality of the
evidence.
• Large effect: This is when methodologically strong studies show that the
observed effect is so large that the probability of it changing completely is
less likely.
• Plausible confounding would change the effect: This is when despite the pres-
ence of a possible confounding factor which is expected to reduce the observed
effect, the effect estimate is still significant.
• Dose response gradient: This is when the intervention used becomes more effec-
tive with increasing dose.
• High Quality Evidence: The authors are very confident that the estimate that is
presented lies very close to the true value.
• Moderate Quality Evidence: The authors are confident that the presented esti-
mate lies close to the true value, but it is also possible that it may be substantially
different.
• Low Quality Evidence: The authors are not confident in the effect estimate and
the true value may be substantially different.
• Very low quality Evidence: The authors do not have any confidence in the esti-
mate and it is likely that the true value is substantially different from it.
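The downgrading and upgrading logic described above can be summarised in a toy sketch. This shows only the direction of travel between the four certainty levels; the function, its starting levels and its simple counting are simplifications invented for illustration, not the actual GRADE procedure, which rests on structured judgement rather than arithmetic:

```python
LEVELS = ["very low", "low", "moderate", "high"]

def rate_certainty(randomised, downgrades=0, upgrades=0):
    """Start at 'high' for randomised evidence and 'low' for observational
    evidence, then step down for each serious concern (risk of bias,
    imprecision, indirectness) and up for each strength (large effect,
    dose-response gradient, plausible confounding that would reduce the
    observed effect)."""
    start = LEVELS.index("high") if randomised else LEVELS.index("low")
    idx = max(0, min(len(LEVELS) - 1, start - downgrades + upgrades))
    return LEVELS[idx]

print(rate_certainty(randomised=True, downgrades=1))  # RCT, serious risk of bias
print(rate_certainty(randomised=False, upgrades=1))   # observational, large effect
```

Both calls land on "moderate", illustrating how a flawed trial and a strong observational study can end up with comparable certainty ratings.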
Table 1.3 Cochrane’s table of evidence to guide evaluations of the internal and external validity (efficacy, effectiveness and cost-effectiveness) of medical interventions [9, 19]
• Efficacy (Can it work?): the extent to which an intervention does more good than harm under ideal circumstances.
• Effectiveness (Does it work in practice?): the extent to which an intervention does more good than harm under usual circumstances.
• Cost-effectiveness (Is it worth it?): the effect of an intervention in relation to the resources it consumes.
have clearly defined eligibility criteria and have minimal missing data. Some studies
may only be applicable to narrowly defined patient populations and may not be
generalisable in other clinical contexts. Studies also have to be sufficiently powered
to ensure that the number of patients studied is sufficient to determine a difference
between interventions and also need to run over a sufficient period of time to ensure
sustainable change. Randomised placebo controlled trials are considered the gold
standard in this respect providing they are sufficiently powered and have minimised
missing data points.
As early as 1972, Cochrane proposed a simple framework for evaluating medical
care that could be applied to treatment and policy in current-day medical practice
[9]. The questions posed test the internal and external validity of an intervention
(Table 1.3).
The fundamental importance in this approach lies in the extent to which the pro-
cess focusses on the external validity accounting for the application of an interven-
tion in clinical practice and the resulting financial impact.
Evidence Based Medicine has clearly revolutionised the practice of medicine, the
choice of investigations and treatments and has challenged therapies which had pre-
viously been built on limited evidence and opinion, but had gone unchallenged due
to the hierarchical constraints of the medical profession. However, there has been
criticism about the inherent weaknesses of EBM. Some have suggested that there is an over-reliance on data gathering which ignores experience, clinical acumen and data that may not have formed part of the clinical trial process, and that EBM does not adequately account for personalised medicine and the individual holistic needs of the patient. Thus, EBM does not seek to extend to more recent advances in stratified
medicine. Others have argued that the hierarchical approach to EBM places the
findings from basic science at a much lower level thus belittling the importance of
basic science in providing a means of understanding pathophysiological mecha-
nisms, providing a framework and justification for clinical interventions and an
explanation for inter-patient variability [27, 28]. Furthermore, EBM has been
regarded as overly generalisable, considering the treatment effect to large
populations, but not accounting for the severity of a disease, whereby a treatment
may offer significant effect to those who are seriously affected compared to little or
no impact for those who are mildly affected by the same condition. Thus within
analyses, sub-stratification of patient cohorts may overcome this issue. Although a doyen of EBM, Feinstein also argued that some of the greatest medical discoveries, for example the discovery of insulin and its use in diabetic ketoacidosis, have come about from single trials and would not stand up to the rigours of evidence based medicine [29]. Feinstein argued that there was too much emphasis placed
upon the randomised-control trial and the process simply tests one treatment against
another, with additional acumen needed to treat a patient in relation to presentation
and severity of symptoms. Thus there is a concern that practice that does not con-
form to EBM is marginalised as a consequence. EBM is also restricted in its use for
the defined patient population and does not consider alternative patient groups using
the same therapies and interventions. Evidence defined by the RCT should also be
challenged by observational and cohort studies in which supported treatments may
lead to adverse effects in certain patient populations. Meta-analyses often include
highly heterogeneous studies and ascribe conflicting results to random variability,
whereas different outcomes may reflect different patient populations, enrolment and
protocol characteristics [30]. Richardson and Doster proposed three dimensions in
the process of evidence-based decision making: baseline risk of poor outcomes
from an index disorder without treatment, responsiveness to the treatment option
and vulnerability to the adverse effects of treatment; whereas EBM is focused on the potential therapeutic benefits, it does not usually account for inter-patient variability in the latter two dimensions [31].
The GRADE approach described earlier attempts to overcome some of these
challenges by defining a system that provides a ‘quality control’ for evidence such
that powerful observational studies for example may be upgraded due to the dra-
matic observed effect. The use of meta-analyses and systematic reviews as a gold
standard are also scrutinised by GRADE for their inherent weaknesses. Heterogeneity
(clinical, methodological or statistical) has been recognised as an inherent limita-
tion of meta-analyses [32]. Different methodological and statistical approaches
used in systematic reviews can also lead to different outcomes [33]. To this extent
some have suggested that the approach to the evidence based pyramid should be
adapted to incorporate a more rational approach to the assessment of evidence and
with the use of systematic reviews at all levels of the evidence pyramid to determine
the quality of the evidence [34]. Others have argued that the rigidity of the
randomised-control trial has allowed an exploitation through selective reporting,
exaggeration of benefits and the misinterpretation of evidence [35, 36]. Greenhalgh
and colleagues state that through ‘overpowering trials to ensure that small differ-
ences will be statistically significant, setting inclusion criteria to select those most
likely to respond to treatment, manipulating the dose of both intervention and con-
trol drugs, using surrogate endpoints, and selectively publishing positive studies,
industry may manage to publish its outputs as unbiassed in high-ranking peer-
reviewed journals’ [37]. Fundamentally and most importantly, whilst Sackett
believed that the predicament of the patient formed part of the process of EBM, the
rigidity of the system has resulted in a paradigm shift away from this principle.
Some believe that EBM provides an oversimplified and reductionistic view of treat-
ment, failing to interpret the motivation of the patient, the value of clinical interac-
tion, co-morbidities, polypharmacy, expectations, environment and other
confounding and influential variables and demand a return to ‘real evidence based
medicine’ [37]. Others recognise that published evidence should also be presented
in a way that is readable and usable for patients and professionals [38].
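The heterogeneity concern raised above can be quantified. A common approach, sketched here with invented effect estimates, is Cochran's Q together with the derived I² statistic, which estimates the proportion of between-study variability attributable to heterogeneity rather than chance:

```python
def fixed_effect_meta(effects, variances):
    """Inverse-variance pooled estimate plus Cochran's Q and I^2 (as a %)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i2

# Hypothetical log odds ratios from three conflicting trials.
pooled, q, i2 = fixed_effect_meta([-0.5, -0.1, 0.3], [0.04, 0.05, 0.04])
# A high I^2 signals that the trials disagree by more than chance alone
# explains, so a single pooled estimate may be misleading.
```

In this invented example I² is substantial, which is exactly the situation in which ascribing conflicting results to random variability, rather than to genuine differences in populations or protocols, becomes questionable.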
1.5 Conclusion
References
1. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine:
what it is and what it isn’t. BMJ. 1996;312(7023):71–2.
2. Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical epidemiology: a basic science for
clinical medicine. 2nd ed. Boston: Little Brown; 1991.
3. Guyatt G. Evidence-based medicine. Ann Intern Med. 1991;114(Suppl 2):A-16.
4. Sackett D. How to read clinical journals: I. why to read them and how to start reading them
critically. Can Med Assoc J. 1981;124(5):555–8.
5. Smith R, Rennie D. Evidence based medicine—an oral history. BMJ. 2014;348(21):g371.
6. Zimerman A. Evidence-based medicine: a short history of a modern medical movement.
American Medical Association Journal of Ethics. 2013;15(1):71–6.
7. Haynes B. Advances in evidence-based information resources for clinical practice. ACP J
Club. 2000;132(1):A11–4.
8. Feinstein AR. Clinical Judgement. Baltimore, MD: Williams & Wilkins; 1967.
9. Cochrane AL. Effectiveness and efficiency: random reflections on health services. London:
Nuffield Provincial Hospitals Trust; 1972.
35. James J. Reviving Cochrane’s contribution to evidence-based medicine: bridging the gap
between evidence of efficacy and evidence of effectiveness and cost-effectiveness. Eur J Clin
Investig. 2017;47(9):617–21.
36. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2:e124.
37. Greenhalgh T, Howick J, Maskrey N. Evidence based medicine: a movement in crisis?
BMJ. 2014;348:g3725.
38. Lavis JN, Davies HT, Gruen RL, Walshe K, Farquhar CM. Working within and beyond the
Cochrane collaboration to make systematic reviews more useful to healthcare managers and
policy makers. Healthcare Policy. 2006;1:21–33.
2 Clinical Practice Guidelines: Choosing Wisely
Prasad Godbole
Learning Objectives
• To understand the process of developing guidelines
• To understand the process of critically reviewing guidelines
• To understand which guidelines should be implemented, and how
2.1 Introduction
There are several key steps when developing guidelines.
P. Godbole (*)
Department of Paediatric Surgery, Sheffield Children’s NHS Foundation Trust, Sheffield, UK
e-mail: [email protected]
The key consideration is to develop a guideline for areas that are prevalent in the local population or where improved outcomes can be achieved for the maximum number of patients. Examples include urinary tract infections in children, congenital obstructive uropathies, urinary tract calculi and nocturnal enuresis.
Once an area has been established, all stakeholders including patients/carers should
be involved in the guideline development process. For urinary tract infections this
may include paediatricians, paediatric urologists, general practitioners, nursing staff,
microbiologists, parents of infants and young children and older children. In essence
any stakeholder who may provide a clinical service for or who may benefit from the
area that the guideline is designed for should be included.
The initial chapter on Evidence Based Medicine has already highlighted the levels and hierarchy of evidence. As clinical guidelines are outcome focused and aim to be cost effective, the following levels of evidence and their implications for clinical decision making may be used to assess existing guidelines. A strategy to retrieve guidelines has to be agreed, e.g. search terms, language(s) and databases.
While the AGREE criteria may be used to determine the quality of the guideline, a quick screening process that has been advocated is to determine the rigour of development (item 7 of the AGREE criteria). Furthermore, the guideline should be current. The content of the guideline must also be considered. Where more than one guideline is being considered, a comparison between the guidelines and their recommendations should be undertaken.
Once the process above is completed, a decision must be made by the guideline
development group as to the robustness of the guideline for local use. The guideline
may be used unmodified or may be adapted for local use while maintaining the key principles within the guideline.
Once peer reviewed, the guideline has to pass through a formal process of ratifica-
tion, usually via a committee that approves the guideline for local use. In the author's
institution, this is the Clinical Audit and Effectiveness Committee. Guidelines for
approval are sent out in advance of the meeting and discussed in the meeting prior
to approval.
Once approved, the guidelines are adopted for local use. Guidelines are reviewed and updated at periodic intervals of 2–3 years.
While the process above describes best practice in developing guidelines and how
to determine which guidelines are robust, getting clinicians to adhere to the guide-
lines can be a different matter. In the past, surgical training was more paternalistic
in that the ‘doctor was always right’ and training was experience based rather than evidence based.
2.10 Conclusion