Abstract
Objective
While the judicious use of antibiotics takes past microbiological culture results into consideration, the typical format of these data in the electronic health record (EHR) may make them unwieldy to incorporate into clinical decision-making. We hypothesize that a visual representation of sensitivities may aid in their comprehension.
Materials and Methods
A prospective parallel unblinded randomized controlled trial was undertaken at an academic urban tertiary care center. Providers managing emergency department (ED) patients receiving antibiotics and having previous culture sensitivity testing were included. Providers were randomly selected to use standard EHR functionality or a visual representation of patients’ past culture data as they answered questions about previous sensitivities. Concordance between provider responses and past cultures was assessed using the kappa statistic. Providers were surveyed about their decision-making and the usability of the tool using Likert scales.
Results
From 3/5/2018 to 9/30/2018, 518 ED encounters were screened; providers from 144 visits were enrolled and analyzed in the intervention arm and 129 in the control arm. Providers using the visualization tool had a kappa of 0.69 (95% CI: 0.65–0.73) when asked about past culture results, while the control group had a kappa of 0.16 (95% CI: 0.12–0.20). Providers using the tool expressed improved understanding of previous cultures and found the tool easy to use (P < .001). Secondary outcomes showed no differences in prescribing practices.
Conclusion
A visual representation of culture sensitivities improves comprehension when compared to standard text-based representations.
Keywords: clinical decision support; data visualization; electronic health records; drug resistance; emergency medicine
INTRODUCTION
Background
Half of all patients receive antibiotics during their hospital stay.1 The timely use of appropriate antibiotics may improve mortality.2,3 However, the selection of a suitable antimicrobial agent can be challenging and may be further complicated by antibiotic resistance. More than 2 million people in the US develop resistant infections each year,4 resulting in an estimated more than 150 000 deaths,5 while antibiotic coverage may be inadequate in 22.4% to 29.8% of patients.6–9
Antibiotic treatment becomes more nuanced in those with severe infections,10 and, even in the outpatient setting, antibiotic selection requires increasing levels of sophistication as the modern microbial milieu demonstrates complex patterns of resistance.11,12
Importance
Antimicrobial stewardship programs facilitate the judicious use of these medications and have been a mainstay in minimizing misuse as relatively few novel therapeutic agents are in development,13 and tools such as molecular diagnostics are not yet practical for widespread clinical use.14 One avenue towards facilitating antibiotic prescribing may be to improve clinicians’ understanding of a patient’s microbiological culture history.
Past antibiotic sensitivities should inform prescribing; however, reviewing all prior resistance data and then synthesizing these into clinical decision-making can be cumbersome. Additionally, electronic health records (EHRs) have been noted to lack functionality to assist with antimicrobial stewardship goals.15 Given this difficulty in gathering information and the time constraints of clinical care, prescribing practices may fall short of optimal behavior, particularly when patients’ clinical situations are complex. While EHRs can increase access to large amounts of data, the data can be overwhelming16 and lead to idiosyncratic workarounds with resultant patient safety concerns.17,18
Visual representations of data may assist clinicians in synthesizing and understanding voluminous medical data.19,20 Data related to infectious diseases,21 specifically culture sensitivity data,22–24 may be particularly amenable to this approach. In contrast to discrete laboratory data such as a potassium level, microbiological data are inherently multidimensional: a sample from a specific location may reveal multiple species of bacteria with varying degrees of resistance to an array of antibiotics. Furthermore, these results vary over time and must be interpreted accordingly. Many EHR representations do not fully convey the depth of microbiological data, as they typically consist of time-sorted lists of individual culture samples that require “drilling down” to find the details of the sensitivities, and they offer little assistance in summarizing or comparing results across multiple samples and time points.
OBJECTIVE
We hypothesized that presenting an overview of sensitivities by utilizing a visual representation of various dimensions of culture sensitivity data would enhance clinician understanding of the sensitivity data compared to the standard text-based representation in our EHR.
MATERIALS AND METHODS
Study design and setting
We performed a single-center parallel unblinded randomized controlled trial to assess the effect of a culture result visualization tool on provider knowledge of previous culture sensitivities. The study hospital emergency department (ED) has approximately 55 000 visits per year and is a tertiary care adult-only academic hospital and Level I trauma center. The EHR in use was custom-developed at the institution.
Simple randomization was performed using a computerized random number generator prepared by the investigators, with a 1:1 allocation ratio at the level of the patient visit. Randomizations were assigned through REDCap, and research assistants carried out the assigned intervention. If a patient visit was randomized to the intervention group, the provider who had ordered antibiotics for that patient was shown a culture visualization tool by a research assistant on a laptop prior to answering questions regarding past sensitivity results. Cases in the control group did not have access to the visualization tool. Both groups had access to the standard EHR while responding to research assistants. Clinicians were presented with 4 antibiotics and asked whether any previous cultures had demonstrated resistance to each of them or whether all had been sensitive. Providers could be approached by research assistants at any time after ordering antibiotics until either the antibiotics were administered or the patient left the department.
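As a concrete illustration, the visit-level allocation described above amounts to an independent, equally weighted assignment for each eligible encounter. The sketch below is a minimal Python reconstruction with illustrative visit identifiers and a placeholder seed; the study pre-generated its assignments and delivered them through REDCap, so this is not the actual allocation code.

```python
import random

def allocate_visits(visit_ids, seed=2018):
    """Simple (unblocked) 1:1 randomization at the level of the patient visit."""
    rng = random.Random(seed)  # computerized random number generator
    return {visit: rng.choice(["control", "intervention"]) for visit in visit_ids}

# Example: pre-generate allocations for a batch of screened encounters.
allocations = allocate_visits(["visit-001", "visit-002", "visit-003"])
```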
Several months before the study, users were given a several-minute introduction to the study and later received a brief follow-up e-mail regarding use of the visualization. There was no further training regarding use of the tool.
Selection of participants
Antibiotic orders placed in the ED triggered a page to research assistants who then approached the ordering provider for eligibility screening and trial enrollment. Research assistants were available 8AM–11PM on weekdays and 11AM–8PM on weekends.
Patients met inclusion criteria if they had an order placed for an oral or parenteral antibiotic and also had a previous urine or blood culture with sensitivity results available in the EHR.
Patients were excluded if they were under the age of 18, they had no past urine or blood culture sensitivity results, they had the antibiotic orders cancelled at the time of screening, the antibiotic was administered prior to screening, the provider was unable to be reached before the visit concluded, or participation in the study would impede clinical care. In addition to the prespecified exclusion criteria, to minimize bias, cases were also excluded if the antibiotic in question was not being prescribed as the initial treatment of an acute infection (eg, prophylaxis and ongoing antibiotic courses were excluded) or if the participant was a study investigator.
Intervention
The software to visualize microbiological culture results was developed in an iterative fashion and refined in conjunction with input from informaticists, infectious disease physicians, infectious disease pharmacists, emergency physicians, internists, and surgeons. The tool was built directly into the production version of the EHR and written in Caché (InterSystems Corporation, Cambridge, MA), jQuery, HTML, and JavaScript by the authors (EYK, LAN, and SH).
Rows of the display are ordered by time, with a separate row for each organism that speciated from the past cultures (Figure 1). The remainder of the row is composed of the color-coded sensitivity results (eg, “sensitive,” “resistant”), with the corresponding antibiotic that the culture was tested against listed above the rows as column headers. Functionality to sort and filter based upon the source of the culture, date, and organism was provided. Please see the Supplemental Materials for a sample case and source code.
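Conceptually, the display is a pivot of flat susceptibility results into a time-ordered, organism-by-antibiotic grid with optional filtering by culture source. The sketch below illustrates that transformation in Python with illustrative field names and sample values; the production tool was written in Caché and JavaScript/jQuery, so this is a conceptual restatement rather than the authors' code.

```python
from collections import defaultdict

# Each element is one organism/antibiotic interpretation from one culture sample.
results = [
    {"collected": "2017-11-02", "source": "urine", "organism": "Escherichia coli",
     "antibiotic": "ciprofloxacin", "interpretation": "resistant"},
    {"collected": "2017-11-02", "source": "urine", "organism": "Escherichia coli",
     "antibiotic": "ceftriaxone", "interpretation": "sensitive"},
]

def build_grid(results, source=None):
    """Group results into one row per (collection date, source, organism),
    with a color-codable cell per antibiotic; optionally filter by source."""
    rows = defaultdict(dict)
    for r in results:
        if source is not None and r["source"] != source:
            continue
        key = (r["collected"], r["source"], r["organism"])
        rows[key][r["antibiotic"]] = r["interpretation"]
    return sorted(rows.items())  # rows ordered by collection date

for (collected, src, organism), cells in build_grid(results, source="urine"):
    print(collected, src, organism, cells)
```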
Methods of measurement
Study data were collected and managed using REDCap electronic data capture tools hosted at our institution.25 REDCap (Research Electronic Data Capture) is a secure, web-based application designed to support data capture for research studies, providing 1) an intuitive interface for validated data entry; 2) audit trails for tracking data manipulation and export procedures; 3) automated export procedures for seamless data downloads to common statistical packages; and 4) procedures for importing data from external sources.
Providers were asked questions by research assistants, with responses recorded directly onto iPads using REDCap. If a provider was randomly selected to have the tool available for use, it was provided on a laptop computer by the research assistant. Research assistants were not trained in use of the tool, and use of the EHR by the provider was not restricted.
Providers’ knowledge of the patients’ prior culture results was assessed by a questionnaire. The physician was presented with 4 antibiotics and asked whether the patient had previously had infections resistant to each of the antibiotics at the home institution. Providers could respond that past resistance, including intermediate resistance, had been noted to the antibiotic (“resistant”), that no past resistance had been noted (“sensitive”), or that they were “unsure.”
Both the intervention and control group were also asked to subjectively assess their own knowledge of previous cultures and their confidence in their decision-making. These responses were obtained using a 5-point Likert scale (1 strongly disagree, 5 strongly agree) with the option to respond “unsure” to each question. Providers in the intervention group were also asked about the usability of the tool.
Patient charts were reviewed by the authors to determine the suspected source of infection as per the ED providers, review antibiotic order details, and assess culture results.
Outcome measures
The primary outcome was provider knowledge of patients’ past culture sensitivities. A response of “sensitive” was interpreted as correct when past cultures in the medical record all demonstrated in vitro sensitivity to the antibiotic in question. A response of “resistant” was interpreted as correct if any previous culture data showed organisms with either resistance or intermediate resistance to the antibiotic.
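Read literally, this scoring rule reduces to checking whether any prior isolate was non-susceptible to the queried antibiotic. A minimal Python sketch under that reading follows, with illustrative data structures; how antibiotics never tested in prior cultures were handled is not specified here, so the fallback to “sensitive” is an assumption of the sketch.

```python
def chart_answer(past_results, antibiotic):
    """Reference answer from the EHR: 'resistant' if any prior isolate showed
    resistance or intermediate resistance to the antibiotic, otherwise 'sensitive'."""
    interpretations = [r["interpretation"] for r in past_results
                       if r["antibiotic"] == antibiotic]
    if any(i in ("resistant", "intermediate") for i in interpretations):
        return "resistant"
    return "sensitive"

def response_correct(provider_response, past_results, antibiotic):
    """Score a provider's 'resistant'/'sensitive' answer against the chart."""
    return provider_response == chart_answer(past_results, antibiotic)
```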
Prespecified secondary outcomes assessed the effectiveness of the antibiotics ordered by the provider. Adequacy of antibiotic choice was determined by comparing the antibiotics prescribed in the ED to cultures drawn during that same ED visit. Antibiotic regimens were judged to be adequate if all cultured isolates from that visit demonstrated in vitro sensitivity to the antibiotics prescribed. Prescribed antibiotics were deemed inadequate if there was demonstrated resistance or intermediate resistance to the antibiotics without otherwise adequate antimicrobial coverage. Patient encounters without cultures drawn or cultures without growth were excluded from this portion of the analysis, as were cultures where the antibiotic in question, or another antibiotic of the same class, was not tested. Charts were reviewed to ensure that the isolates being evaluated reflected the suspected infectious processes of primary concern and were not disregarded (eg, contaminants or isolates incidentally found that were not being considered as an infectious source). Other attributes of antibiotic ordering were not considered (eg, dose, appropriateness of empiric antibiotic regimens).
Additionally, previous resistance demonstrated in the medical record to antibiotics given during the present ED visit was tabulated in 3 time frames: cultures taken from the present visit, cultures from 2 years prior up until and excluding the present visit, and all available cultures preceding the present visit. In this analysis a prescribed antibiotic was noted to be resistant if any isolate within the noted timeframe had in vitro resistance or intermediate resistance to that antibiotic or 1 of the same class. No results were excluded in this analysis.
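Both secondary-outcome checks are essentially set-membership tests over culture isolates, as sketched below with illustrative structures (dates as datetime.date objects, sensitivities as antibiotic-to-interpretation maps). Matching antibiotics within the same class, which the study also performed, and the within-visit time frame are omitted here for brevity; this is a reconstruction, not the study code.

```python
from datetime import date, timedelta

def regimen_adequate(visit_isolates, prescribed):
    """Adequate if every evaluable isolate from the ED visit was sensitive
    in vitro to at least one of the prescribed antibiotics."""
    return all(
        any(iso["sensitivities"].get(abx) == "sensitive" for abx in prescribed)
        for iso in visit_isolates
    )

def prior_resistance(isolates, prescribed, visit_date, years_back=None):
    """True if any isolate in the look-back window (all prior cultures, or the
    2 years before the visit) shows resistance or intermediate resistance to a
    prescribed antibiotic."""
    start = visit_date - timedelta(days=365 * years_back) if years_back else date.min
    return any(
        start <= iso["collected"] < visit_date
        and iso["sensitivities"].get(abx) in ("resistant", "intermediate")
        for iso in isolates
        for abx in prescribed
    )
```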
Cases where antibiotic orders were either cancelled or added were assessed, and survey questions regarding usability of the tool and the providers’ decision-making used 5-point Likert scales.
Sample size calculation
Based on a prior randomized controlled trial that also assessed provider knowledge after an intervention, we assumed a conservative standard deviation of 1 for these calculations and a difference of 0.45 out of 4 questions answered correctly.26 With these assumptions, a sample size of 105 patients per arm was calculated to provide 90% power at α = .05.
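The reported sample size is consistent with a standard two-sample comparison of means under these assumptions. The arithmetic can be sketched as follows; this is a reconstruction rather than the authors' actual calculation, and minor rounding or software differences can yield 104 to 105 per arm.

```python
from scipy.stats import norm

alpha, power = 0.05, 0.90
sd, delta = 1.0, 0.45               # assumed SD and difference in questions correct

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_beta = norm.ppf(power)            # ~1.28
n_per_arm = 2 * ((z_alpha + z_beta) * sd / delta) ** 2
print(round(n_per_arm))             # ~104, in line with the 105 per arm used
```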
Primary data analysis
Data were analyzed using Stata 14.2 (StataCorp, College Station, TX). Chi-square and Fisher’s exact tests were used in the analysis of categorical variables, while Wilcoxon rank-sum tests were used for continuous data. Likert data were analyzed categorically using Fisher’s exact test when comparing the control group to the intervention group.
Provider knowledge of past sensitivities was evaluated using Cohen’s kappa, with providers’ responses assessed for agreement with the culture data in the EHR. Provider responses of “unsure” were weighted as half-correct using the built-in “w” weights in Stata. Kappas were considered significantly different if the 95% confidence interval in the intervention group excluded the values in the control group. Analysis of agreement was also stratified by residents versus attendings, and kappas were compared within and between groups. In addition to Cohen’s kappa, which included all provider responses, we also report the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, positive predictive value, and negative predictive value for all responses where the provider did not choose “unsure.”
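One way to reproduce the “unsure counts as half-correct” weighting is to place the three responses on an ordered scale and use linearly weighted kappa, which gives the middle category half credit relative to either extreme, analogous to Stata's built-in "w" weights of 1 − |i − j|/(k − 1). The sketch below uses scikit-learn with illustrative arrays and is not the authors' Stata code.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Ordered coding places "unsure" halfway between the two definite answers,
# so linear weights give it half credit relative to either extreme.
scale = {"resistant": 0, "unsure": 1, "sensitive": 2}

provider = np.array([scale[r] for r in ["resistant", "unsure", "sensitive"]])
ehr_truth = np.array([scale[r] for r in ["resistant", "sensitive", "sensitive"]])

kappa = cohen_kappa_score(provider, ehr_truth, labels=[0, 1, 2], weights="linear")
print(round(kappa, 2))
```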
Human subjects research statement
The trial protocol was registered on ClinicalTrials.gov (#NCT03580603) prior to study initiation and approved by the institution’s Committee on Clinical Investigation (#2017P-000432).
RESULTS
Characteristics of study subjects
From 3/5/2018 to 9/30/2018, 518 ED patient visits were assessed by research assistants for potential eligibility (Figure 2). All 273 enrolled patient encounters were included in this intention-to-treat analysis, with 129 encounters randomized to the control group and 144 encounters randomized to the intervention group. Patient demographics are reported in Table 1. Attending physicians were the ordering provider in 20 (7.3%) encounters; the remaining enrolled physicians were residents (see Supplementary Materials). In 23 (8.4%) cases, no cultures were drawn during the ED visit, and 166 (60.8%) visits had ED cultures with no growth.
Table 1.
| | Control (n = 129) | Intervention (n = 144) | P value |
|---|---|---|---|
| Age, median (IQR), years | 68 (55, 80) | 70 (59, 83) | .196 |
| Female sex, number (%) | 85 (66) | 95 (66) | .989 |
| Race, number (%)a | | | .893 |
| Caucasian | 85 (70) | 92 (71) | |
| African-American | 26 (21) | 29 (22) | |
| Asian | 5 (4) | 3 (2) | |
| Other | 5 (4) | 5 (4) | |
| Ethnicity, number (%) | | | .209 |
| Hispanic | 14 (11) | 11 (8) | |
| Not Hispanic | 107 (83) | 116 (81) | |
| Unknown | 8 (6) | 17 (12) | |
| Disposition, number (%) | | | .955 |
| Admitted, non-ICU | 83 (64) | 94 (65) | |
| Discharged | 28 (22) | 31 (22) | |
| ICU | 16 (12) | 18 (13) | |
| Transferred or eloped | 2 (2) | 1 (1) | |
| Source of infection, number (%) | | | |
| Urinary tract | 55 (43) | 61 (42) | .963 |
| Pulmonary | 22 (17) | 26 (18) | .828 |
| Skin and soft tissue | 23 (18) | 16 (11) | .113 |
| Intraabdominal | 14 (11) | 12 (8) | .479 |
| Hepatobiliary | 1 (1) | 4 (3) | .374 |
| Bloodstream | 4 (3) | 0 (0) | .049 |
| CNS | 2 (2) | 1 (1) | .604 |
| Otherb | 9 (7) | 9 (6) | .813 |
| N/Ac | 4 (3) | 18 (13) | .006 |

a: 23 values missing (15 in intervention [10%] and 8 in control [6%]).
b: Includes infectious sources that are not listed and empiric treatment for infections without a clear source.
c: Includes antibiotics given as continuation of treatment regimens initiated prior to arrival and antibiotics given for reasons other than an acute infection (eg, peri-instrumentation prophylaxis, hepatic encephalopathy).
Main results
Participants in the control group had agreement with previous culture results with a kappa of 0.16 (95% CI: 0.12–0.20), while those randomized to the intervention group had a kappa of 0.69 (95% CI: 0.65–0.73) (Table 2). The intervention group also had a higher AUC of 0.85 (95% CI: 0.83–0.87) compared to the control group with 0.59 (95% CI: 0.57–0.61) (Table 3). There were no significant differences in changes in antibiotic order or in vitro susceptibility to prescribed antibiotics between the groups (Table 4).
Table 2.
| | Kappa (95% CI) |
|---|---|
| Overall kappa between provider and EHR (n = 3036) | 0.42 (0.39–0.43) |
| Control (n = 1458) | 0.16 (0.12–0.20) |
| Intervention (n = 1578) | 0.69 (0.65–0.73) |
| Residents (n = 2802) | 0.42 (0.39–0.45) |
| Control (n = 1362) | 0.18 (0.14–0.22) |
| Intervention (n = 1380) | 0.68 (0.64–0.72) |
| Attendings (n = 234) | 0.33 (0.23–0.43) |
| Control (n = 84) | −0.05 (−0.14 to 0.04) |
| Intervention (n = 150) | 0.72 (0.61–0.83) |
Table 3.
| | AUC | Sensitivity | Specificity | PPV | NPV |
|---|---|---|---|---|---|
| Overall (n = 3036) | 0.72 (0.71–0.74) | 62 (59–64) | 83 (81–85) | 85 (83–87) | 59 (56–61) |
| Control (n = 1458) | 0.59 (0.57–0.61) | 35 (32–38) | 83 (79–86) | 76 (71–80) | 46 (42–49) |
| Intervention (n = 1578) | 0.85 (0.83–0.87) | 86 (84–88) | 83 (80–86) | 89 (86–91) | 80 (77–83) |
| Residents (n = 2802) | 0.73 (0.71–0.74) | 63 (61–65) | 82 (80–85) | 84 (82–86) | 59 (57–62) |
| Control (n = 1374) | 0.60 (0.58–0.62) | 37 (34–41) | 83 (79–86) | 76 (72–80) | 47 (44–51) |
| Intervention (n = 1428) | 0.84 (0.82–0.86) | 87 (84–89) | 82 (79–85) | 88 (86–90) | 80 (76–83) |
| Attendings (n = 234) | 0.69 (0.63–0.74) | 47 (38–55) | 90 (82–96) | 88 (78–94) | 53 (45–61) |
| Control (n = 84) | 0.45 (0.37–0.53) | 5 (1–13) | 86 (64–97) | 50 (12–88) | 23 (14–34) |
| Intervention (n = 150) | 0.86 (0.81–0.92) | 81 (70–89) | 92 (83–97) | 91 (82–97) | 82 (71–89) |

Abbreviations: AUC, area under the receiver operating characteristic curve; NPV, negative predictive value; PPV, positive predictive value. Sensitivity, specificity, PPV, and NPV are percentages; parentheses contain 95% confidence intervals.
Table 4.
| | Control | Intervention | P value |
|---|---|---|---|
| ED antibiotics did not adequately treat infectiona (n = 64), number (%) | 5 (16) | 6 (18) | >.999 |
| No in vitro resistance to antibiotics administered in the EDb based on: | | | |
| cultures from the present visit | 123 (95) | 138 (96) | >.999 |
| culture data from the 2 years before the present visit | 115 (89) | 133 (92) | .358 |
| all culture data before the present visit | 113 (88) | 131 (91) | .366 |
| Any antibiotics canceled while in ED | 12 (9) | 7 (5) | .162 |
| Any antibiotics added while in ED | 17 (13) | 21 (15) | .738 |

a: Antibiotics were deemed inadequate when cultures from the ED visit demonstrated in vitro resistance or intermediate sensitivity. Cultures were considered if drawn from clinically suspected sources of infection and the resulting organisms were not considered contaminants. Cases were excluded where cultures were not drawn, had no growth, or were not tested against the antibiotic class in question.
b: All cases within the timeframe noted were included in this analysis, including cases where no cultures were drawn.
Providers who were randomly selected to use the tool felt it was easy to use and helpful in understanding patients’ culture data. Compared with providers who did not use the tool, those who used it reported a better understanding of patients’ culture data (P < .001) and felt that they made a more informed choice (P < .001) (Figure 3 and Supplementary Materials).
DISCUSSION
Users expend a notable amount of effort in using the EHR: a 10-hour ED shift may require 4000 mouse clicks,27 and much of a physician’s time may be spent using the EHR.28–31 Despite the extent of physicians’ EHR work, our results show that providers’ knowledge of clinical data may be incomplete. We employed a visual approach to communicate test results and their trends. EHR usability has been noted to be lacking,32 and simply displaying unmodified text may be inadequate, as further effort is often required to synthesize the information into actionable and meaningful knowledge. Though we did not study such outcomes, we hope that interventions derived from providers’ workflows could ultimately improve clinical care and decrease burnout.
EHRs provide access to extensive amounts of patient data. Though details that are relevant to care can be found with diligent exploration of the medical record, in a practical sense data may at times be difficult to access or utilize.16 In 1976, Clement McDonald advocated for the use of computers to assist in “mindless and repetitive tasks” as mistakes by the clinician may be due to “man’s limitations as a data processor rather than to correctable human deficiencies.”33 The difficulties of mindful antibiotic prescribing may likewise be helped by electronic tools, rather than relying on providers’ individual ad hoc systems for tracking sensitivities by hand. Though computers have become commonplace in healthcare in the decades following McDonald’s publication, many of the “mindless and repetitive tasks” he discusses are still extant in digital form while clinicians’ desired relationship with technology has yet to be achieved.
In this work, we developed an electronic tool to help improve clinical decision-making by making data not only available, but accessible. It targets the information-gathering and synthesis stage, the earliest stage of clinical thinking, in contrast to other types of clinical decision support that target later stages and are designed to prevent incorrect clinical decisions rather than to promote the formation of correct ones. This type of passive, visual decision support has been effective in reducing duplicate orders in routine clinical use,34 and we hypothesize that this study's visual representation of culture sensitivity data would also be effective in routine clinical use.
The tool was developed for use with a custom-built EHR. One path toward wider distribution of applications such as this would be through the use of CDS Hooks, Fast Healthcare Interoperability Resources (FHIR), and Substitutable Medical Applications and Reusable Technologies (SMART).35 However, given the complex and varied representations of microbiological data, there remains a lack of standardized FHIR specifications for them. This presents a particular challenge to broad implementation of the tool using FHIR at this time, though we anticipate this to be temporary as the FHIR standard matures. Nevertheless, with provisions for interoperability and data sharing in the 21st Century Cures Act, there are decreasing barriers to developing innovative EHR software,36 though clinicians should also remain vigilant to use novel software in a safe and responsible manner.37
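For illustration only, a SMART on FHIR distribution path would typically involve the app retrieving laboratory Observation resources (under which susceptibility results are commonly, though not uniformly, represented) from the host EHR's FHIR server. The endpoint, access token, and patient identifier below are placeholders, and real-world microbiology representations vary by vendor, which is precisely the standardization gap noted above.

```python
import requests

FHIR_BASE = "https://fhir.example.org/r4"             # placeholder server
headers = {"Authorization": "Bearer <access-token>"}  # obtained via the SMART launch

# Standard FHIR search: laboratory Observations for one patient.
resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": "example-patient-id", "category": "laboratory", "_count": 100},
    headers=headers,
    timeout=30,
)
resp.raise_for_status()

for entry in resp.json().get("entry", []):
    obs = entry["resource"]
    code = obs.get("code", {}).get("text")
    interpretations = [i.get("text") for i in obs.get("interpretation", [])]
    print(code, interpretations)
```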
Interestingly, while the control group was significantly less accurate in identifying past antibiotic sensitivities, the majority of this group felt that they made an informed antibiotic choice. Furthermore, the attendings in the control group performed worse than chance when answering questions about sensitivities. Though the reasons for this are likely multifactorial and patient-oriented outcomes were not assessed, the relationship between subjective and objective evaluations of knowledge and its effect on antibiotic prescribing may bear further study.
Limitations
The study was conducted within a single department of a single institution as a convenience sample, limited to patients with previous data in the EHR. Also, our trial protocol enrolled the ordering physician, who at our institution is usually a junior physician. This environment may not be representative of other practice scenarios. Though providers are expected to take past microbiological results into account, treatment decisions may at times be deferred to other clinicians or treatment teams whose decision-making would not be captured in this study. In other cases, empiric therapy may be guided by institutional guidelines, though best practice would entail checking previous antibiotic resistance in these situations as well.
Many of the providers approached for the study were not available as the ED is a high-acuity setting and clinical responsibilities were given precedence over study enrollment. Ideally there would have been more extensive evaluation of providers, but this was precluded by the time-sensitive nature of ED care. Given our enrollment, we were not able to control for variation among providers or assess clinical outcomes, such as mortality.
There were no demonstrated differences in the secondary outcomes between the study groups, though the study was not powered or designed to assess these outcomes. Providers in the intervention group more accurately assessed antibiotic sensitivities, yet the benefit of this knowledge was not clearly reflected in antibiotic ordering. The difficulty of demonstrating clinical outcomes in randomized trials within informatics has been previously noted.38 Furthermore, due to technical constraints we were unable to present the visualization prior to antibiotic ordering or to make the tool freely available, thus significantly altering provider workflow. This is inconsistent with several tenets of clinical decision support design,39 likely making the intervention less useful to clinicians. For example, by the time of the intervention, providers had already chosen an antibiotic they were comfortable prescribing and may not have elected to validate this decision using the tool. Also, antibiotic selection is a sophisticated medical decision whose adequacy may be difficult to judge without expert opinion, but such review was outside the scope of this study. Additionally, deficiencies in antibiotic prescribing are a known problem in medicine, and the tool does not address factors such as knowledge gaps, faulty decision-making, or inappropriate prescribing habits that may contribute to antibiotic selection.
Though patient-centered outcomes are of paramount importance, the effect of the EHR on the provider was also a motivation for the development of the tool and we would have ideally captured “provider-centered outcomes” such as burnout, medical error, and time spent reviewing culture data. Though not all of these factors directly affect patient care, they may do so indirectly and clinicians may place significant importance on them.
CONCLUSION
Knowledge of patients’ prior antibiotic sensitivities may be suboptimal and can be enhanced by visualization tools, which can provide an expedient way to understand complex, multidimensional clinical data. As resistant infections become more commonplace, we must maximally utilize the data at hand to facilitate appropriate prescribing. Further work is needed to enable the development and sharing of tools on present EHRs and explore further methods to enhance the treatment of infections.
FUNDING
This work has been made possible by an Innovation Grant from the Center for Healthcare Delivery Science at Beth Israel Deaconess Medical Center. No funder had any role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
AUTHOR CONTRIBUTIONS
EYK, LAN, AVG, and SH conceived and designed the study. EYK, LAN, and SH designed the intervention. EYK and SH obtained grant funding. EYK collected the data. AVG performed the analysis. EYK and SH wrote the paper. All authors revised the paper. SH takes responsibility for the paper as a whole.
SUPPLEMENTARY MATERIAL
Supplementary material is available at Journal of the American Medical Informatics Association online.
DATA AVAILABILITY STATEMENT
Data available on request.
CONFLICT OF INTEREST STATEMENT
Dr. Horng reports grants from Philips Research, outside the submitted work.
REFERENCES
1. Baggs J, Fridkin SK, Pollack LA, et al. Estimating national trends in inpatient antibiotic use among US hospitals from 2006 to 2012. JAMA Intern Med 2016; 176 (11): 1639–48.
2. Ferrer R, Martin-Loeches I, Phillips G, et al. Empiric antibiotic treatment reduces mortality in severe sepsis and septic shock from the first hour: results from a guideline-based performance improvement program. Crit Care Med 2014; 42 (8): 1749–55.
3. Garnacho-Montero J, Gutiérrez-Pizarraya A, Escoresca-Ortega A, et al. Adequate antibiotic therapy prior to ICU admission in patients with severe sepsis and septic shock reduces hospital mortality. Crit Care 2015; 19 (1): 302.
4. Centers for Disease Control and Prevention. About Antimicrobial Resistance. https://www.cdc.gov/drugresistance/about.html. Accessed 18 March 2019.
5. Burnham JP, Olsen MA, Kollef MH. Re-estimating annual deaths due to multidrug-resistant organism infections. Infect Control Hosp Epidemiol 2019; 40 (1): 112–3.
6. Chen H-C, Lin W-L, Lin C-C, et al. Outcome of inadequate empirical antibiotic therapy in emergency department patients with community-onset bloodstream infections. J Antimicrob Chemother 2013; 68 (4): 947–53.
7. Savage RD, Fowler RA, Rishu AH, et al. The effect of inadequate initial empiric antimicrobial treatment on mortality in critically ill patients with bloodstream infections: a multi-centre retrospective cohort study. PLoS One 2016; 11 (5): e0154944.
8. Mettler J, Simcock M, Sendi P, et al. Empirical use of antibiotics and adjustment of empirical antibiotic therapies in a university hospital: a prospective observational study. BMC Infect Dis 2007; 7 (1): 21.
9. Honda H, Higuchi N, Shintani K, et al. Inadequate empiric antimicrobial therapy and mortality in geriatric patients with bloodstream infection: a target for antimicrobial stewardship. J Infect Chemother 2018; 24 (10): 807–11.
10. Kumar A, Roberts D, Wood KE, et al. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med 2006; 34: 1589–96.
11. Rossignol L, Vaux S, Maugat S, et al. Incidence of urinary tract infections and antibiotic resistance in the outpatient setting: a cross-sectional study. Infection 2017; 45 (1): 33–40.
12. Lim CJ, Kong DCM, Stuart RL. Reducing inappropriate antibiotic prescribing in the residential care setting: current perspectives. Clin Interv Aging 2014; 9: 165–77. doi:10.2147/CIA.S46058
13. World Health Organization. Antibacterial agents in clinical development. https://apps.who.int/medicinedocs/documents/s23299en/s23299en.pdf. Accessed 18 March 2019.
14. Flentie K, Spears BR, Chen F, et al. Microplate-based surface area assay for rapid phenotypic antibiotic susceptibility testing. Sci Rep 2019; 9 (1): 237.
15. Kullar R, Goff DA, Schulz LT, et al. The “epic” challenge of optimizing antimicrobial stewardship: the role of electronic medical records and technology. Clin Infect Dis 2013; 57 (7): 1005–13.
16. Singh H, Spitzmueller C, Petersen NJ, et al. Information overload and missed test results in electronic health record-based settings. JAMA Intern Med 2013; 173 (8): 702–4.
17. Menon S, Murphy DR, Singh H, et al. Workarounds and test results follow-up in electronic health record-based primary care. Appl Clin Inform 2016; 7 (2): 543–59.
18. Blijleven V, Koelemeijer K, Wetzels M, et al. Workarounds emerging from electronic health record system usage: consequences for patient safety, effectiveness of care, and efficiency of care. JMIR Hum Factors 2017; 4 (4): e27.
19. West VL, Borland D, Hammond WE. Innovative information visualization of electronic health record data: a systematic review. J Am Med Inform Assoc 2015; 22 (2): 330–9.
20. Khairat SS, Dukkipati A, Lauria HA, et al. The impact of visualization dashboards on quality of care and clinician satisfaction: integrative literature review. JMIR Hum Factors 2018; 5 (2): e22.
21. Luz CF, Berends MS, Dik J-WH, et al. Rapid Analysis of Diagnostic and Antimicrobial Patterns in R (RadaR): interactive open-source software app for infection management and antimicrobial stewardship. J Med Internet Res 2019; 21 (6): e12843. doi:10.2196/12843
22. Wright A, Neri PM, Aaron S, et al. Development and evaluation of a novel user interface for reviewing clinical microbiology results. J Am Med Inform Assoc 2018; 25 (8): 1064–8. doi:10.1093/jamia/ocy014
23. Wilson G, Badarudeen S, Godwin A. Real-time validation and presentation of the cumulative antibiogram and implications of presenting a standard format using a novel in-house software: ABSOFT. Am J Infect Control 2010; 38 (9): e25–e30.
24. Simpao AF, Ahumada LM, Larru Martinez B, et al. Design and implementation of a visual analytics electronic antibiogram within an electronic health record system at a tertiary pediatric hospital. Appl Clin Inform 2018; 9 (1): 37–45.
25. Harris PA, Taylor R, Thielke R, et al. Research electronic data capture (REDCap): a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform 2009; 42 (2): 377–81.
26. Hess EP, Knoedler MA, Shah ND, et al. The chest pain choice decision aid: a randomized trial. Circ Cardiovasc Qual Outcomes 2012; 5 (3): 251–9.
27. Hill RG, Sears LM, Melanson SW. 4000 clicks: a productivity analysis of electronic medical records in a community hospital ED. Am J Emerg Med 2013; 31 (11): 1591–4.
28. Sinsky C, Colligan L, Li L, et al. Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Ann Intern Med 2016; 165 (11): 753–60. doi:10.7326/M16-0961
29. Young RA, Burge SK, Kumar KA, et al. A time-motion study of primary care physicians’ work in the electronic health record era. Fam Med 2018; 50 (2): 91–9. doi:10.22454/FamMed.2018.184803
30. Mamykina L, Vawdrey DK, Hripcsak G. How do residents spend their shift time? A time and motion study with a particular focus on the use of computers. Acad Med 2016; 91 (6): 827–32. doi:10.1097/ACM.0000000000001148
31. Arndt BG, Beasley JW, Watkinson MD, et al. Tethered to the EHR: primary care physician workload assessment using EHR event log data and time-motion observations. Ann Fam Med 2017; 15 (5): 419–26. doi:10.1370/afm.2121
32. Melnick ER, Dyrbye LN, Sinsky CA, et al. The association between perceived electronic health record usability and professional burnout among US physicians. Mayo Clin Proc 2020; 95 (3): 476–87.
33. McDonald CJ. Protocol-based computer reminders, the quality of care and the non-perfectability of man. N Engl J Med 1976; 295 (24): 1351–5.
34. Horng S, Joseph JW, Calder S, et al. Assessment of unintentional duplicate orders by emergency department clinicians before and after implementation of a visual aid in the electronic health record ordering system. JAMA Netw Open 2019; 2 (12): e1916499.
35. Mandel JC, Kreda DA, Mandl KD, et al. SMART on FHIR: a standards-based, interoperable apps platform for electronic health records. J Am Med Inform Assoc 2016; 23 (5): 899–908.
36. Rucker DW. Implementing the Cures Act: bringing consumer computing to health care. N Engl J Med 2020; 382 (19): 1779–81.
37. Huckvale K, Torous J, Larsen ME. Assessment of the data sharing and privacy practices of smartphone apps for depression and smoking cessation. JAMA Netw Open 2019; 2 (4): e192542.
38. Gray JE, Safran C, Davis RB, et al. Baby CareLink: using the internet and telemedicine to improve care for high-risk infants. Pediatrics 2000; 106 (6): 1318–24.
39. Bates DW, Kuperman GJ, Wang S, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc 2003; 10 (6): 523–30.