Evaluating a clinical decision support point of care instrument in low resource setting
BMC Medical Informatics and Decision Making volume 23, Article number: 51 (2023)
Abstract
Background
Clinical pathways are one of the main tools for managing healthcare quality and are concerned with the standardization of care processes. They have been used to help frontline healthcare workers by presenting summarized evidence and generating clinical workflows involving a series of tasks performed by various people within and between work environments to deliver care. Integrating clinical pathways into Clinical Decision Support Systems (CDSSs) is common practice today. However, in a low-resource setting (LRS), such decision support systems are often not readily accessible or not available at all. To fill this gap, we developed a computer aided CDSS that swiftly identifies which cases require a referral and which may be managed locally. The computer aided CDSS is designed primarily for use in primary care settings for maternal and childcare services, namely for pregnant patients and antenatal and postnatal care. The purpose of this paper is to assess the user acceptance of the computer aided CDSS at the point of care in LRSs.
Methods
For the evaluation, we used a total of 22 parameters structured into six major categories, namely "ease of use, system quality, information quality, decision changes, process changes, and user acceptance." Based on these parameters, the caregivers of Jimma Health Center's Maternal and Child Health Service Unit evaluated the acceptability of the computer aided CDSS in a think-aloud approach. The evaluation was conducted in the caregivers' spare time after the clinical decisions had been made, and was based on eighteen cases over the course of two days. The respondents were asked to score their level of agreement with each statement on a five-point scale: strongly disagree, disagree, neutral, agree, and strongly agree.
Results
The CDSS received a favorable agreement score in all six categories by obtaining primarily strongly agree and agree responses. In contrast, a follow-up interview revealed a variety of reasons for disagreement based on the neutral, disagree, and strongly disagree responses.
Conclusions
Though the study had a positive outcome, it was limited to the Jimma Health Center Maternal and Childcare Unit. A wider-scale evaluation and longitudinal measurements, including computer aided CDSS usage frequency, speed of operation, and impact on intervention time, are therefore needed.
Background
Health informatics research has produced a variety of technologies that aid the use and production of health information. This includes automated clinical pathways (evidence-based recommendations and evidence-informed processes that integrate research evidence alongside practitioner expertise and the patient's experience), which can reduce cost and risk at the point of care (POC) [1]. Clinical pathways (CPs) have been used to bridge the evidence/practice divide and to aid frontline healthcare workers by providing summarized evidence [2, 3]. Many clinical decision support tools, however, have remained out of reach for low-income countries (LICs). The hurdles in care delivery for LRSs are complex, and study findings revealed that the national pyramidal health structure is "weakened at the bottom of the pyramid, and disproportionately favoring national hospitals" [4]. Furthermore, rising medical costs, scarcity of appropriate equipment, demographic challenges, inadequate infrastructure and coverage, lack of equitable health distribution, privacy and security concerns, resource constraints, and a low literacy rate are all issues that have yet to be addressed in the implementation of an efficient integrated health information system, clinical care, and guidelines [5,6,7]. Moreover, our case study illustrates that existing paper-based point-of-care instruments have the "disadvantage of being non-interactive and difficult to use for retrieval of relevant clinical information, summarizing the patient history, creating a patient flow diagram, diagnosing all potential underlying diseases, and ultimately delivering optimal clinical pathways" [8]. As a result, putting evidence into practice is very difficult.
To close this gap, we developed a computer aided CDSS to quickly identify cases that require referral and those that can be treated locally. To demonstrate the findings, we chose use cases of pregnant patients and routine antenatal and postnatal care, based on the Ethiopian primary healthcare workflow and guidelines [9]. Within this setting the computer aided CDSS was developed using a hybrid algorithm to generate clinical pathways; the details of the design, the development process, and the outcome are described in [10]. Our study is one of the rare initiatives to develop a computer aided CDSS specifically for an LRS, focusing on primary healthcare in particular. To deal with data readiness and infrastructure constraints, for example, it operates with limited input (clinical signs and symptoms) and progressively updates the generated clinical pathway when more information becomes available, and it was deployed on low-cost devices and is accessible from a smartphone via mobile data or wireless networks. The computer-aided CDSS offers an automated, interactive, dynamic, and data-driven solution to assist front-line workers operating under LRS conditions in the primary health setting.
The goal of this paper is to report the user acceptance of the computer aided CDSS at the POC in an LRS. To assess user acceptability, we employed an artificial intelligence-enabled clinical decision support system evaluation framework (Ji, Mengting, et al. 2021 [11]).
Methods
Research & development options
The framework for evaluating an artificial intelligence–enabled clinical decision support system was developed based on Ji, Mengting, et al. 2021 [11]. We customized it to our needs, and a research protocol was developed. Additional file 1: Appendix I contains the protocol's details. For reporting the computer-aided CDSS evaluation, the DECIDE-AI reporting guidelines were used [12].
Site selection
The evaluation was carried out at Jimma Health Center Maternal and Child Health Service Unit, Jimma, Ethiopia. In Ethiopia, health centers typically serve 15,000 to 25,000 people in rural settings and up to 40,000 people in urban settings [13]; Jimma Health Center is no exception. The health center acts as a focal point by handling both inpatient and outpatient cases. It accepts referral cases from community health posts, and it refers cases and assigns patients to the primary hospitals (Jimma University Specialized Hospital and Shanan Gibe General Hospital).
To manage frequently collected public health facility data in Ethiopia, the electronic Community Health Information System (eCHIS) and District Health Information Software (DHIS2) were introduced [14,15,16,17]. These tools are mainly used for collecting and reporting public health facility data. However, decision-support systems, electronic health records, the infrastructure for exchanging health information, and other similar technologies were not readily available to or used by frontline workers. During the need analysis [8, 18], we studied the clinical guidelines, the patient card-sheet, and the referral-out registration logbook at Jimma Health Center. Paper-based clinical guidelines, card sheets, and referral registration log sheets are the only readily available resources for assisting frontline workers and their decisions [8, 18]. However, none of them are automated, interactive, or dynamic. Capturing the required information from the existing paper-based instruments takes much time, and the instruments contain only limited information. Overall, it is challenging to "capture and summarize the required clinical data, process it in a consistent manner, construct a patient flow sheet to monitor and record the progress of care" [8, 18]. It was difficult to audit records and track changes because of the inconsistent handwriting and layout. Furthermore, the health information transformation plan states that there are no known or accessible decision-support tools that generate and promote evidence-based decisions, including a lack of decision support tools that incorporate program and clinical guidelines, a lack of automated condition-specific order sets and documentation to facilitate decisions, and a lack of knowledge management systems [19, 20].
Participants
Caregivers working in Jimma Health Center's maternal and child healthcare unit were eligible to participate in the evaluation experiment. The caregivers volunteered to participate and gave their consent after receiving a description of the study and the computer aided CDSS. A flexible organization of the evaluation was needed to attract enough participants. Because the number of health professionals at the health center's maternal and child healthcare unit is limited and they were fully occupied with their ordinary daily activities, it was not feasible to include the evaluation in their daily practice; the evaluation was therefore done during the participants' spare time. As a result, the evaluation of the computer aided CDSS was carried out after the clinical decision was made rather than during the real-time decision-making process. Furthermore, instead of an instant patient-by-patient evaluation, the evaluation was completed a posteriori over the course of a half-day, and the computer aided CDSS had no impact on the care given. After the goal was clarified and consent obtained, the participants were given a guide with detailed step-by-step instructions on how to use the computer aided CDSS to help them prepare for the activity.
Furthermore, the following factors were considered when determining the number of participants in the computer aided CDSS evaluation study:
- I. Ethiopia has a health workforce that is far below the minimum standard [21, 22]. The number of health professionals at the health center level is insufficient, and the number of maternal and child healthcare unit caregivers at the health center is very limited: five to seven caregivers on average, occasionally a few more.
- II. Consulting the literature, and given the small number of caregivers, we adopted the "rule of thumb 4 ± 1" suggested when there are financial restrictions and a small number of participants. The magic number of five (4 ± 1) evaluators effectively recognizes the majority of usability issues in a well-constrained context, as reported in [23, 24]; three to five evaluators, for example, identify 85 to 91% of usability issues [23, 24]. We therefore assumed that our experts could identify the vast majority of usability problems. Our computer aided CDSS evaluation was conducted with experts in primary care settings for maternal and childcare services, namely for pregnant patients and antenatal and postnatal care. Overall, we considered all caregivers who volunteered to take part in the computer aided CDSS evaluation at Jimma Health Center's maternal and child healthcare unit.
Computer aided clinical decision support POC
The computer aided CDSS was developed to promote high-quality care and assist healthcare workers in identifying referral and locally treatable cases. Integrating knowledge-based approaches with data-driven techniques is the core principle behind the computer aided CDSS development [10]. This delivers the flexibility to dynamically map and evaluate knowledge in the local context. It also supports the use of historical evidence to adjust or re-adjust the priority order of the knowledge-based CPs' decisions using a concordance table (a multi-criteria decision analysis). Bayesian probabilistic learning was combined with automated and dynamic knowledge-based approaches on the Jimma Health Center "pregnancy, childbearing, and family planning" dataset, providing a satisfying result [10]. The computer aided CDSS was then deployed in a low-cost fog computing architecture [25,26,27], using a Raspberry Pi 4 Model B with a quad-core 64-bit processor and 4 GB of RAM as the platform. The computer aided CDSS data entry and processing were designed in a wizard style in accordance with the clinical guidelines [9]. A multi-criteria decision analysis and concordance table were generated based on the measured symptoms. The overall process consists of five steps: (I) entering measured symptoms, (II) validating and checking the measured symptoms, (III) processing of clinical pathways to identify referral and locally treated cases, (IV) selecting and endorsing the clinical pathways that have been generated, and (V) saving the endorsed CP for future reference. Furthermore, automated antenatal (or postnatal) card plotting was done after the selection or endorsement of CPs. The computer aided CDSS can be accessed from a smartphone, tablet PC, or laptop running a web browser via a mobile data or wireless network. The architecture of the computer-aided CDSS and a sample screenshot of the postnatal care (PNC) clinical workflow and data processing are shown in Figs. 1 and 2, respectively.
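The five-step wizard flow described above can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the function names, the guideline ranges, and the single blood-pressure referral rule are all hypothetical stand-ins for the actual concordance-table analysis.

```python
from dataclasses import dataclass

# Hypothetical guideline ranges used for validation (Step II); illustrative only.
GUIDELINE_RANGES = {"systolic_bp": (70, 190), "diastolic_bp": (40, 120)}

@dataclass
class Encounter:
    symptoms: dict
    pathway: str = ""
    endorsed: bool = False

def validate(symptoms):
    """Step II: check each measured symptom against a guideline range."""
    for name, value in symptoms.items():
        low, high = GUIDELINE_RANGES[name]
        if not (low <= value <= high):
            raise ValueError(f"{name}={value} is outside the guideline range")
    return symptoms

def generate_pathway(symptoms):
    """Step III: toy rule standing in for the concordance-table analysis."""
    if symptoms["systolic_bp"] >= 140 or symptoms["diastolic_bp"] >= 90:
        return "refer"          # elevated BP -> referral pathway
    return "treat_locally"      # otherwise manageable at the health center

def run_wizard(symptoms, store):
    encounter = Encounter(validate(symptoms))                 # Steps I-II
    encounter.pathway = generate_pathway(encounter.symptoms)  # Step III
    encounter.endorsed = True                                 # Step IV: caregiver endorses
    store.append(encounter)                                   # Step V: save for reference
    return encounter

saved = []
case = run_wizard({"systolic_bp": 150, "diastolic_bp": 95}, saved)
```

In this sketch, an out-of-range entry fails fast at Step II, mirroring the wizard's validation before any pathway is generated.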
Implementation
To evaluate the computer aided CDSS, we adopted 22 parameters from the evaluation framework of Ji, Mengting, et al. 2021 [11]. We considered only 22 of the 28 parameters because outcome changes (i.e., change in clinical outcomes and change in patient-reported outcomes), service quality (i.e., operation and maintenance, and information updating to keep timeliness), and productivity (a process change parameter) were outside the scope of our study, and the computer aided CDSS was evaluated after the clinical decision was made. In addition, the variables satisfaction with system quality, satisfaction with information quality, and satisfaction with service quality were difficult to distinguish, and hence we aggregated them as "overall satisfaction".
The think-aloud protocol was followed while the participants used the computer aided CDSS. The system was evaluated after the clinical decision was made, using the concurrent think-aloud approach. In a thinking-aloud test (TA-Test), participants are asked to use the computer aided CDSS while continuously thinking out loud [28]. Prior to the evaluation, a presentation describing the purpose of the evaluation and a computer-aided CDSS demonstration were delivered. During the think-aloud-based computer aided CDSS evaluation, the participants were not audio recorded. However, if participants felt uncertain or uncomfortable, they would "think aloud," and the researcher would document their thoughts.
Next, the caregivers at Jimma Health Center Maternal and Child Health Service Unit completed the questionnaire, which is organized around a psychometric response scale [29] in which respondents express their level of agreement with a statement using five scores: (1) strongly disagree, (2) disagree, (3) neutral, (4) agree, and (5) strongly agree. In addition, when an evaluator responded "strongly disagree," "disagree," or "neutral," we followed up with an interview asking for more details on the reasons for the low score. The interview was recorded and later reviewed for further computer aided CDSS improvement. The audio recordings of the follow-up interviews were transcribed into verbatim text; the interviews were conducted in Amharic and then translated into English.
The questionnaire was structured into six sections with a total of 22 questions to validate and measure perceptions of the instrument's characteristics, in the following order: ease of use (6/22), system quality (2/22), information quality (2/22), decision changes (2/22), process changes (5/22), and user acceptance (5/22) [11]. The variables "learnability, operability, user interface, data entry, advice to display, and legibility" were used to assess ease of use. System quality relates to the performance of the computer aided CDSS and the needed functionality, as measured by "response time and stability". Information quality denotes the computer aided CDSS's capacity to conduct actions with suitable evidence within acceptable time frames, as well as data protection, expressed in two factors, namely "security and CP performance". Decision changes were evaluated based on the variables "change in order behavior and change in CP" to assess the computer aided CDSS's capability to allow real-time interactions, as well as its relevance. The variables "effectiveness, overall usefulness, adherence to standards, medical quality, and user knowledge and skills" were used to evaluate process changes. Finally, user acceptance was assessed using the variables "usage, expectation confirmation, satisfaction with quality, overall satisfaction, and intention to use". A more detailed description of the 22 evaluation parameters is given in Additional file 1: Appendix I. To summarize the responses in each of the six sections, an averaged agreement score was calculated. The agreement score was computed from the responses "Agree" and "Strongly agree"; the disagreement score was computed from the responses "Strongly disagree", "Disagree", and "Neutral" (see Table 1).
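Under one plausible reading of this scoring scheme (an assumption on our part; the paper does not give an explicit formula), a category's agreement score is the number of that category's parameters a respondent rated "agree" or "strongly agree", averaged over respondents. A minimal sketch with hypothetical responses:

```python
AGREE = {"agree", "strongly agree"}

def agreement_score(responses):
    """responses: one dict per respondent mapping parameter -> Likert label.
    Returns the mean count of agree/strongly-agree parameters per respondent."""
    counts = [sum(1 for label in r.values() if label in AGREE) for r in responses]
    return sum(counts) / len(counts)

# Hypothetical ease-of-use responses from two respondents over six parameters.
responses = [
    {"learnability": "strongly agree", "operability": "agree",
     "user interface": "agree", "data entry": "neutral",
     "advice to display": "agree", "legibility": "agree"},
    {"learnability": "agree", "operability": "agree",
     "user interface": "neutral", "data entry": "disagree",
     "advice to display": "agree", "legibility": "agree"},
]
score = agreement_score(responses)  # (5 + 4) / 2 = 4.5 out of 6
```

The disagreement score would be the complementary count over "strongly disagree", "disagree", and "neutral" labels.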
Furthermore, the questionnaire was translated into Amharic. A freelance, experienced translator then reviewed the translated questionnaire to resolve any discrepancies between the original English version and the Amharic translation. The questionnaire could be submitted via mobile phone, laptop, or in paper-based format; we preferred the mobile- and laptop-based formats, the paper-based format being used exceptionally. The English version of the questionnaire is included in Additional file 1: Appendix I. In addition, the automated version of the questionnaire is available on GitHub (Footnote 1). Python and the Streamlit framework were used to automate the questionnaire. Following submission, the automated questionnaire filters responses such as "strongly disagree," "disagree," and "neutral" for a follow-up interview.
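The post-submission filtering step can be sketched independently of the Streamlit user interface. The function and parameter names below are hypothetical, not taken from the authors' repository:

```python
# Likert labels that trigger a follow-up interview after submission.
FOLLOW_UP = {"strongly disagree", "disagree", "neutral"}

def flag_for_interview(submission):
    """submission: dict mapping evaluation parameter -> Likert label.
    Returns the sorted list of parameters whose responses warrant follow-up."""
    return sorted(p for p, label in submission.items() if label in FOLLOW_UP)

submission = {"stability": "disagree", "response time": "neutral",
              "security": "agree", "learnability": "strongly agree"}
flagged = flag_for_interview(submission)  # ['response time', 'stability']
```

In the deployed app, the same predicate would drive which questionnaire items are surfaced to the researcher for the recorded interview.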
Outcome
The primary outcome of the study was the evaluation of the user acceptance of the developed computer aided CDSS in a LRS. Computer-aided CDSS’s ease of use, system quality, information quality, decision changes, process changes, and user acceptance were explicitly addressed.
Safety, errors, and human factors
Since the evaluation was conducted after the clinical decision was made, there was no risk to patient safety. Furthermore, the assessment was carried out using an artificial intelligence-enabled clinical decision support system evaluation framework [11], with participation from the caregivers at the health center.
Analysis
The study aimed to evaluate the user acceptance of a computer aided CDSS in an LRS, and the findings were analyzed to gain insight and uncover common patterns to identify future actions. First, we analyzed the time taken to fill in the questionnaire, to cross-check the plausibility and credibility of the evaluation. Then, the analysis was conducted based on ease of use, system quality, information quality, decision changes, process changes, and user acceptance of the computer aided CDSS. We used a Python tool and Microsoft Excel for data processing and analysis.
This study reported verbatim text and participant comments in two ways: (I) for verbatim quotations from a single participant, direct quotation marks were used; and (II) if more than one participant made the same comment on a specific topic, the researcher paraphrased and summarized it without using quotation marks.
Ethics
Initially, we obtained ethical approval from Jimma University Institute of Health's Institutional Review Board. The data was then collected and processed anonymously, following the consent signed with Jimma Health Center during the need analysis and computer aided CDSS development. The clinical guideline was employed as a gold standard for validating the automated and data-driven generated clinical pathways, ensuring the fairness of the results. To assure the authenticity and integrity of the computer aided CDSS evaluation, we employed an artificial intelligence evaluation framework for computer-aided CDSS evaluation [11] and the DECIDE-AI reporting guidelines [12] for reporting the CDSS evaluation results. Finally, the computer-aided CDSS evaluation was conducted after the clinical decision was made, to ensure that the computer-aided CDSS had no impact on the real decision-making process. Moreover, personal information used exclusively for questionnaire verification does not appear in the reporting or the results. In general, we are committed to protecting personal information and respecting privacy as per the consent agreement.
Patient involvement
Since the primary purpose of the computer aided CDSS is to help caregivers and frontline workers identify referral and treatable cases at the health center, patients were not directly involved in the user-acceptance evaluation study. In addition, the option was taken to conduct the evaluation after the clinical decision for the patient had been made.
Results
Caregivers from Jimma Health Center's Maternal and Child Health Service Unit evaluated the acceptability of a developed computer aided CDSS at the POC for two days. The evaluation was carried out at Jimma Health Center between June 4 and 6, 2022. The caregivers assessed the computer aided CDSS by responding to a total of 22 questions divided into six categories, namely ease of use, system quality, information quality, decision changes, process changes, and user acceptance.
The maternal and child healthcare unit of the health center was limited to five active caregivers during the computer aided CDSS evaluation. Four of these caregivers participated in the evaluation; one was unavailable during the evaluation period. All respondents were female and had worked in the health center's maternal and childcare unit.
There were eighteen cases in the Maternal and Childcare Unit (ten cases on the first day, eight on the second). The longest duration to complete the questionnaire was 98 min and the shortest was 31 min, with an average time of 57.75 min. Though the participants used their spare time for the evaluation, we observed that they spent more time in the afternoon (or night) sessions than in the morning (or mid-day) sessions.
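For illustration, the reported timing statistics are consistent with, for example, the following per-respondent durations. Only the minimum (31 min), maximum (98 min), and mean (57.75 min) come from the study; the two middle values are invented to match the reported mean:

```python
from statistics import mean

# Minutes per respondent; 55 and 47 are hypothetical fill-ins, chosen so that
# the minimum, maximum, and mean match the values reported in the study.
durations_min = [98, 31, 55, 47]
average = mean(durations_min)  # (98 + 31 + 55 + 47) / 4 = 57.75
```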
Based on the responses, the computer aided CDSS received an average user acceptability score of 3.75 out of 5; an ease-of-use score of 4.25 out of 6; a process change score of 4 out of 5; and a system quality score of 1.25 out of 2. Information quality and decision change received a score of 1.5 out of 2. Table 1 shows the complete results, and a summary of the respondent agreement scores.
Overall, nurses were comfortable using the computer aided CDSS. Figures 3, 4, 5, 6, 7 and 8 depict the detailed result of the computer aided CDSS evaluation. Computer aided CDSS users' responses were categorized as Ease of use, System quality, Information quality, Decision change, Process Change, and User Acceptance.
Ease of use
Only one respondent strongly agreed with the computer aided CDSS's Learnability and Data Entry. 3/4 (75%) of the respondents agreed on Operability, User Interface, and Advice to Display. Learnability, Data Entry, and Legibility obtained a 2/4 (50%) agreement score from the respondents. Figure 3 depicts the respondents' scoring results for the ease-of-use category.
Generally, the participants gave the computer aided CDSS 4.25 out of 6 for Ease of Use. Respondents said the following:
- The first respondent stated, "I have two cases with Blood Pressure (BP) readings of 100/70 and 110/70, however, the two values were not available in the drop-down option for selecting values."
- We feel that further instruction and guidance are essential for us to feel completely confident with the computer-aided CDSS. [Respondents 2 and 3]
- Respondent 4 stated: "The entire output information in the table did not show until I made it full screen."
System quality
3/4 (75%) and 2/4 (50%) of the respondents were satisfied with the computer aided CDSS's response time and stability, respectively. In contrast, one respondent was neutral about the response time and stability, and one respondent disagreed with the computer aided CDSS's stability. Figure 4 depicts the respondents' System Quality outcomes.
When using the computer-aided CDSS, participants expressed the importance of System Quality.
- Since the computer aided CDSS lacked user-typed choices, we were concerned about its quality. [Respondents 1 and 2]
- "I wish it had an offline version; when the internet connection was lost or disrupted, the computer-aided CDSS was unresponsive." [Respondent 4]
Information quality
On Information Quality, 3/4 (75%) of the respondents were satisfied with Security and CP Performance. In contrast, one respondent was neutral. Figure 5 illustrates further details about the Information Quality responses.
Decision change
All respondents were satisfied with the Change in CP, and 2/4 (50%) were satisfied with the Change in order behavior. Figure 6 presents further details about the Decision Change responses.
Discussing the computer aided CDSS's capability to allow real-time interaction between the user and the system, as well as the diversity and importance of the evidence obtained from the CDSS, Respondents 2 and 4 stated:
- Our interaction with the computer-assisted CDSS seems to be restricted, and we were unable to enter cases outside of the drop-down options. [Respondents 2 and 4]
Process change
The respondents endorsed the adherence of the evidence generated by the computer aided CDSS to standards such as the Ethiopian primary healthcare guidelines. 3/4 (75%) of the respondents were satisfied with the effectiveness and overall usefulness. However, one respondent was neutral in terms of effectiveness, overall usefulness, and medical quality, while one disagreed with the outcome's consistency with existing user knowledge and skills. Figure 7 shows further information about the Process Change responses.
Some participants were concerned about the computer aided CDSS's process changes.
- Based on the input, the computer-aided CDSS successfully generated results. However, we expect other sources of evidence in addition to the guidelines. We wish there were user-typed alternatives and a more flexible data entry option. [Respondents 2, 3 and 4]
User acceptance
The overall quality, overall satisfaction, and intention to use the computer aided CDSS obtained a "strongly agree" score from 1/4 (25%) of the respondents; 3/4 (75%) of the respondents agreed on the usage and intention to use of the computer aided CDSS. Usage, expectation confirmation, satisfaction with overall quality, and overall satisfaction were all rated neutral by 1/4 (25%) of the respondents. Furthermore, one respondent disagreed with the expectation confirmation of the computer aided CDSS. Figure 8 shows further information about the User Acceptance responses.
Participants expressed an interest in using computer-assisted CDSS in their daily routine. Participants were positive about the "use, expectation confirmations, satisfaction with overall quality, and overall satisfaction" of the computer-aided CDSS. On the other hand,
- Respondents 2 and 3 stated that: we have minimal prior experience, but in order to completely accept the computer-aided CDSS, we need flexible data entry as well as more sources of evidence.
- Respondent 4 also stated: "I expect some type of advice to show and an offline support version to completely approve the computer aided CDSS."
Overall, the participants were positive on computer-aided CDSS “Intention to use, change in CP, and adherence to the standards”.
- "Despite the limitations noted above, I believe they will be addressed in the next version." [Respondent 1]
- We found the computer-aided CDSS useful and will use it again if we have access, since it corresponds to the standards and clinical guidelines, and the results were apparent to us. [Respondents 1, 2, 3 and 4]
In summary, the computer aided CDSS had a positive agreement score: User Acceptance achieved a 3.75 out of 5 agreement score. However, caregivers appear to have concerns, as evidenced by disagreement scores of 1.75 out of 6 on the computer aided CDSS's Ease of Use, 0.75 out of 2 on System Quality, 0.5 out of 2 on Information Quality, 0.5 out of 2 on Decision Change, 1 out of 4 on Process Change, and 1.25 out of 5 on User Acceptance. A variety of disagreement reasons was revealed during the follow-up interviews across each of the categories. With the exception of Change in CP, Adherence to standards, and Intention to use, most computer aided CDSS evaluation parameters received a disagreement score of 1/4 (25%) to 2/4 (50%) from the respondents. Table 2 summarizes the parameters and the respondents' reasons for disagreement. The letters R1, R2, R3, and R4 in Table 2 denote Respondent 1, Respondent 2, Respondent 3, and Respondent 4; R2, for example, disagreed with the computer aided CDSS's Perceived Ease of Use evaluation metrics based on the learnability parameter.
In all, Table 2 summarizes the respondents' reported disagreements, while Table 3 contains the reasons for disagreement, extracted and summarized from the follow-up interviews and generally transcribed verbatim from the recordings.
Discussion
Principal findings
The computer aided CDSS received a positive overall review, based on the average scores in all six categories. Even though this study evaluated the clinical decision support point-of-care instrument in a low-resource setting and obtained a favorable agreement score in terms of "Ease of use, System quality, Information quality, Decision change, Process Change, and User Acceptance," some respondents disagreed (see Table 3).
Ease of use
While 3/4 (75%) of the respondents agreed or strongly agreed with the computer aided CDSS's ease-of-use factors ("learnability, user interface, operability, data entry, and advice to display"), 1/4 (25%) disagreed.
During the follow-up interview, we identified the 1/4 (25%) disagreements in each category. The 1/4 (25%) disagreement on learnability was due to the fact that, although the system is straightforward and easy to use, "it still requires some help as well as guidance, the respondents are sometimes puzzled, particularly on the first day". When commenting on the computer aided CDSS's "operability", respondents continued to favor a neutral score on the amount of work and time necessary to use the computer aided CDSS and accomplish the tasks correctly. The computer aided CDSS's user interface cannot accept user-typed input options other than those proposed by the system for measured symptoms, and not all output parameters were visible when viewing the concordance table on a mobile device until the table was made full screen. The computer aided CDSS data entry lacks input options for cases treated at a health facility. For example, during the day-one morning evaluation, there were two cases with Blood Pressure (BP) values of 100/70 and 110/70, but those values were not available in the drop-down option for selecting values. Thus, to enable flexibility in the event of an unexpected scenario, it would be better to provide user-typed alternatives. Although the computer aided CDSS favors automated wizards and recommendations, the user also expects to be able to make his/her own decisions, including overruling computer aided CDSS recommendations that are sometimes considered unsuitable. Respondents emphasized the necessity of displaying advice and documentation based on local languages and setting options. The disagreement on legibility is 25% higher than in the other ease-of-use categories. According to the respondents, since the computer aided CDSS is based on an automated wizard, non-AI professionals would want some guidance and training to properly understand the system.
System quality
For the computer aided CDSS's System Quality, 1/4 (25%) of the scores were neutral about the computer aided CDSS response time and stability, while 25% disagreed with its stability. "There were no exceptions possible in the data entry, which hampers the flexibility because inputs were strictly based on clinical guidelines", which explains the 1/4 (25%) disagreement and 1/4 (25%) neutral scores on the stability of the computer aided CDSS reported in the follow-up interview. Furthermore, one respondent experienced mobile network instability during the computer aided CDSS evaluation submission, resulting in a 25% disagreement on response time.
Information quality
3/4 (75%) of the respondents were satisfied with the information quality, specifically security and CP performance, while 1/4 (25%) were neutral on this aspect. Even though there was a password-protected login, the respondents preferred to give neutral scores on security and were unable to make remarks on it, since this requires the opinion of the right experts. The respondents who gave neutral scores on CP performance believe that further research is needed to conclude that the computer aided CDSS is better than existing evidence-based resources such as paper-based clinical guidelines and card-sheets.
Decision change
Though all respondents were satisfied with the Change in CP, half of them disagreed with the Change in order behavior. According to the findings of the follow-up interview, the respondents feel that the computer aided CDSS lacks flexible real-time interaction between the user and the computer aided CDSS recommendations and is too restricted to automated wizard generation and combo-box based choices.
Process change
Except for standards adherence, each of the Process Change categories (in particular effectiveness, overall utility, medical quality, and user knowledge and abilities) left 1/4 (25%) of the respondents unsatisfied. Though the computer aided CDSS provides the required output workflow based on clinical guidelines, it needs to incorporate additional evidence beyond the clinical guidelines, because users are skilled at exploring exceptions for locally treatable cases. For example, it lacks flexible data entry and requires more historical evidence from the patient card-sheet, even though the majority of existing patient card-sheets lack documentation of comprehensive patient information.
User acceptance
In general, respondents were satisfied with user acceptability. Respondents were particularly interested in using the system for daily regular duties and in having access to the computer aided CDSS. However, 1/4 (25%) of the respondents disagreed with the computer aided CDSS user acceptability, specifically for the Usage, Satisfaction of overall quality, and Overall satisfaction parameters. The major point of disagreement was that the computer aided CDSS uses clinical guidelines as a standard, is oriented towards referral cases by nature, and so lacks data input options (and/or flexibility) for patients treated at the health center. Additionally, 50% of the respondents disagreed on the Expectation confirmation parameter. The computer aided CDSS's "Expectation confirmation" disagreement arose from the following remarks: (I) Respondent I: "Since I don't have prior experience, I prefer neutral", and (II) Respondent II: "Because I lack past experience, I prefer to disagree rather than agree or being neutral".
Follow-up interview summary
Table 3 presents the summary of disagreement reasons extracted during the follow-up interview. The symbol "✓" indicates that there is disagreement on the specific parameter. (See Additional file 1: Appendices II for details on each parameter's questionnaire).
In conclusion, the evaluation findings, including Decision Changes and Process Changes, revealed a variety of needs for the design of the next computer-aided CDSS iteration: (I) to improve data entry quality and manage exceptions, a user-typed option needs to be added to the drop-down options to provide a more flexible data entry system; (II) an offline version of the computer-aided CDSS needs to be designed to promote maximum real-time interaction between the user and the computer-aided CDSS; and (III) recent literature evidence needs to be included as a source of evidence, in addition to the clinical guideline and card sheet, to enhance the robustness of the evidence.
Overall strengths and limitations
This study developed a low-cost, automated, and symptom-based clinical workflow for low-resource settings and reported a promising agreement score during the evaluation. The novel aspect of our proposed computer-aided CDSS was the inclusion of a CP algorithm that dynamically maps and validates the knowledge-based CP using data-driven approaches, primarily using Bayesian learning and incremental learning to adjust and re-adjust the decision priority using multiple criteria decision analysis [10], and the implementation of the algorithm on low-cost alternatives, such as the Raspberry Pi 4 Model B, that are suitable for low-resource settings. Furthermore, the study evaluated CP outside of the hospital, a data-intensive and chronic healthcare setting, by designing and evaluating it in a primary healthcare context in low-resource settings. The computer aided CDSS also provides interactive data visualization and a clinical wizard for easy reference, appropriate clinical management, and data processing by identifying referral and locally treated cases. Despite some participants' disagreements, the computer aided CDSS received a score of 50% to 100% agreement in all evaluation parameters. The follow-up interview also revealed substantial remarks and improvement considerations for upgrading the computer aided CDSS based on the six categories of "Ease of use, System quality, Information quality, Decision Change, Process Change, and User Acceptance". Furthermore, a key strength of this study is its methodology. An artificial intelligence-enabled clinical decision support system framework was adopted for evaluation [11], and the DECIDE-AI framework was used for reporting [12], which minimizes response and reporting bias. We also tracked the beginning and end of the questionnaire filling process to measure how long it took; upon submission, the automated evaluation framework filtered the "Strongly disagree", "Disagree", and "Neutral" responses for discussion in the follow-up interviews.
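The automated step described above can be sketched in a few lines: record when a respondent starts and submits the questionnaire, then filter the answers scored below "Agree" so they can be raised in the follow-up interview. The field names and response format are illustrative assumptions, not the evaluation framework's actual schema.

```python
# Hypothetical sketch of the automated evaluation framework's post-submission
# step: compute completion time and collect the Likert answers that warrant
# discussion in the follow-up interview. Field names are assumptions.

from datetime import datetime

# Scores below "Agree" are queued for the follow-up interview.
DISCUSS = {"Strongly disagree", "Disagree", "Neutral"}

def summarize_submission(started: datetime, submitted: datetime,
                         answers: dict[str, str]) -> dict:
    """Return completion time and the parameters needing discussion."""
    return {
        "duration_seconds": (submitted - started).total_seconds(),
        "discuss_in_interview": sorted(
            param for param, score in answers.items() if score in DISCUSS
        ),
    }
```

For example, a questionnaire started at 09:00 and submitted at 09:05 with a "Neutral" score on stability would yield a five-minute duration and queue "stability" for the interview.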
There are limitations to this study. First, the evaluation was performed after the clinical decisions over the course of a half-day; hence, the real-time decision process was not considered. Second, since the participants were limited to the Jimma Health Center maternal and childcare unit, important considerations should be made before generalizing these findings to other contexts outside of the study site. Third, the number of participants was limited, and the evaluation was restricted to one study site, which may limit the diversity of perspectives. While self-reported computer aided CDSS evaluation can be used, it carries a risk of introducing bias. Longitudinal measurements are needed, including computer aided CDSS usage frequency, duration, and other important evaluation metrics. Finally, while this study used a standard AI-enabled evaluation framework with a think-aloud approach to evaluate the computer aided CDSS, the long-term use and impact of the computer aided CDSS could not be evaluated. Furthermore, it is important to also consider other suitable "discount usability methods" for evaluating the user acceptance of computer aided CDSSs in low-resource settings.
Conclusion
The user acceptance evaluation of a computer aided CDSS at the point of care in LRSs was carried out using an artificial intelligence-enabled clinical decision support system framework. Respondents were asked to express their level of agreement using 22 parameters in a think-aloud approach. The evaluation criteria were grouped into six categories: ease of use, system quality, information quality, decision changes, process changes, and user acceptance (see Additional file 1: Appendix 1). Despite considerable disagreement among participants, the computer aided CDSS achieved higher-than-average scores in all six categories: user acceptability (3.75 out of 5), ease of use (4.25 out of 6), process change (4 out of 5), system quality (1.25 out of 2), and information quality and decision change (each 1.5 out of 2). The evaluation, however, is limited to the Jimma Health Center Maternal and Childcare Unit. As a result, in addition to the self-reported computer aided CDSS evaluation, a larger-scale evaluation and longitudinal measurements are required, including computer aided CDSS usage frequency, duration, and so on.
Availability of data and materials
Although most of the data is already available in the manuscript, the de-identified participant-level data will be made available upon request in return for a signed data access agreement. Please email the corresponding author for access.
References
Kinsman L, Rotter T, James E, Snow P, Willis J. What is a clinical pathway? Development of a definition to inform the debate. BMC Med. 2010;8(1):1–3.
Deneckere S, Euwema M, Van Herck P, Lodewijckx C, Panella M, Sermeus W, Vanhaecht K. Care pathways lead to better teamwork: results of a systematic review. Soc Sci Med. 2012;75(2):264–8.
DiJerome L. The nursing case management computerized system: meeting the challenge of health care delivery through technology. Comput Nurs. 1992;10(6):250–8.
Azevedo MJ. The state of health system (s) in Africa: challenges and opportunities. Histor Perspect State Health Health Syst Africa. 2017;II:1–73.
Leslie HH, Sun Z, Kruk ME. Association between infrastructure and observed quality of care in 4 healthcare services: a cross-sectional study of 4,300 facilities in 8 countries. PLoS Med. 2017;14(12):e1002464. https://doi.org/10.1371/journal.pmed.1002464.
Central Statistical Agency (CSA). Ethiopia Demographic and Health Survey 2016. Addis Ababa and Rockville: CSA and ICF; 2016.
Tarekegn SM, Lieberman LS, Giedraitis V. Determinants of maternal health service utilization in Ethiopia: analysis of the 2011 Ethiopian Demographic and Health Survey. BMC Pregnancy Childbirth. 2014;14(1):1–3.
Tegenaw GS, Amenu D, Ketema G, Verbeke F, Cornelis J, Jansen B. Using clinical guidelines and card sheets for guiding the design of data-driven clinical pathways. J Health Inform Dev Ctries. 2021;15(2). Retrieved from https://www.jhidc.org/index.php/jhidc/article/view/343.
Federal Democratic Republic of Ethiopia Ministry of Health. Ethiopian primary health care clinical guidelines. Care of Children 5–14 years and Adults 15 years or older in Health Centers. Federal Democratic Republic of Ethiopia Ministry of Health. 2017.
Tegenaw GS, Amenu D, Ketema G, Verbeke F, Cornelis J, Jansen B. A hybrid approach for designing dynamic and data-driven clinical pathways point of care instruments in low resource settings. In MEDINFO 2021: One World, One Health–Global Partnership for Digital Innovation 2022. IOS Press. pp. 316–320
Ji M, Genchev GZ, Huang H, Xu T, Lu H, Yu G. Evaluation framework for successful artificial intelligence-enabled clinical decision support systems: mixed methods study. J Med Internet Res. 2021;23(6):e25929.
Vasey B, Nagendran M, Campbell B, Clifton DA, Collins GS, Denaxas S, ... , McCulloch P. Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. Nat Med. 2022;28(5):924–33.
World Health Organization. Primary health care systems (primasys): case study from Ethiopia: abridged version. World Health Organization; 2017.
Thangasamy P, Gebremichael M, Kebede M, Sileshi M, Elias N, Tesfaye B. A pilot study on district health information software 2: challenges and lessons learned in a developing country: an experience from Ethiopia. Int Res J Eng Technol. 2016;3(5):1646–51.
Mekonnen ZA, Chanyalew MA, Tilahun B, Gullslett MK, Mengiste SA. Lessons and Implementation Challenges of Community Health Information System in LMICs: A Scoping Review of Literature: CHIS. Online J Public Health Inform. 2022;14(1). https://doi.org/10.5210/ojphi.v14i1.12731.
World Health Organization. Support tool to strengthen health information systems: guidance for health information system assessment and strategy development. WHO. 2021.
Assefa Y, Gelaw YA, Hill PS, Taye BW, Van Damme W. Community health extension program of Ethiopia, 2003–2018: successes and challenges toward universal coverage for primary healthcare services. Glob Health. 2019;15(1):1–11.
Tegenaw GS, Amenu D, Ketema G, Verbeke F, Cornelis J, Jansen B. Analysis of low resource setting referral pathways to improve coordination and evidence-based services for maternal and child health in Ethiopia. PLoS ONE. 2022;17(8):e0273436.
World Health Organization (WHO). Ethiopian Health Sector Transformation Plan. 2015.
Biru A, Birhan D, Melkamu G, Gebeyehu A, Omer AM. Pathways to improve health information systems in Ethiopia: current maturity status and implications. Health Res Policy Syst. 2022;20(1):1–10.
Haileamlak A. How can Ethiopia mitigate the health workforce gap to meet universal health coverage? Ethiop J Health Sci. 2018;28(3):249.
Federal Ministry of Health of Ethiopia. Health Sector Transformation Plan 2015/16–2019/20. Federal Ministry of Health of Ethiopia. 2015 October. Available at: https://www.globalfinancingfacility.org/sites/gff_new/files/Ethiopia-health-system-transformation-plan.pdf. Access date: June 2022.
Cazañas A, de San MA, Parra E. Estimating sample size for usability testing. Enfoque UTE. 2017;8:172–85.
Alroobaea R, Mayhew PJ. How many participants are really enough for usability studies? In: 2014 Science and Information Conference. 2014. p. 48–56 IEEE.
Shi W, Cao J, Zhang Q, Li Y, Xu L. Edge computing: Vision and challenges. IEEE Internet Things J. 2016;3(5):637–46.
George A, Dhanasekaran H, Chittiappa JP, Challagundla LA, Nikkam SS, Abuzaghleh O. Internet of Things in health care using fog computing. In: 2018 IEEE Long Island Systems, Applications and Technology conference (LISAT). 2018. p. 1–6.
Cerina L, Notargiacomo S, Paccanit MG, Santambrogio MD. A fog-computing architecture for preventive healthcare and assisted living in smart ambients. In: 2017 IEEE 3rd International Forum on Research and Technologies for Society and Industry (RTSI). 2017. p. 1–6 IEEE.
Alhadreti O, Mayhew P. Rethinking thinking aloud: A comparison of three think-aloud protocols. In: Proceedings of the 2018 CHI conference on human factors in computing systems. 2018. p. 1–12.
Preedy VR. Handbook of disease burdens and quality of life measures. Watson RR, editor. New York: Springer; 2010. https://doi.org/10.1007/978-0-387-78665-0_6363.
Acknowledgements
The NASCERE (Network for Advancement of Sustainable Capacity in Education and Research in Ethiopia) program has assisted our work. We gratefully recognize Sister Aynalem Wubet from the Jimma Health Center Maternal and Childcare Unit, who offered to manage and arrange the necessary logistics for the computer aided CDSS evaluation from beginning to end. The computer aided CDSS evaluation would not have been possible without her participation and dedication.
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
GST, DM, GK, FV, JC, and BJ conceptualized the research goals and objectives, as well as the methodology. GST conducted the data curation, formal analysis, investigation, visualization, and drafting of the manuscript. DM, GK, FV, JC, and BJ were involved in the supervision, validation, visualization, and review and editing of the manuscript, as well as the final proofreading. The authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Ethical clearance was obtained from the Jimma University, Institute of Health, Institutional Review Board (IRB). IHRPGI/467/2019 is the reference number. Permission was granted by the Jimma health center, and the data was gathered and analyzed anonymously after the consent was obtained. Informed consent was obtained from all the participants and/or their legal guardians. All methods were carried out in accordance with relevant guidelines and regulations. Furthermore, the evaluation was conducted after the clinical decision was made in order to avoid and reduce risk.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article
Tegenaw, G.S., Amenu, D., Ketema, G. et al. Evaluating a clinical decision support point of care instrument in low resource setting. BMC Med Inform Decis Mak 23, 51 (2023). https://doi.org/10.1186/s12911-023-02144-0
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s12911-023-02144-0