Abstract
Virtual reality-based assessment and training platforms offer higher-dimensional stimulus presentations (dynamic; three-dimensional) than the low-dimensional presentations (static; two-dimensional) found in many pen-and-paper measures of cognition. Studies have investigated the psychometric validity and reliability of a virtual reality-based multiple errands task called the Virtual Environment Grocery Store (VEGS). While advances in virtual reality-based assessments offer the potential for enhanced evaluation of cognitive processes, less has been done to develop these simulations into adaptive virtual environments for improved cognitive assessment. Adaptive assessments can dynamically adjust the difficulty level of tasks to the user's knowledge or ability. Former iterations of the VEGS did not adapt to user performance. Therefore, this study aimed to develop performance classifiers from participants (N = 75) using three classification techniques: Support Vector Machines (SVM), Naive Bayes (NB), and k-Nearest Neighbors (kNN). Participants were categorized as either high performing or low performing based upon the number of items they were able to successfully find and add to their grocery cart. The predictors used for classification were the times to complete tasks in the virtual environment. Results revealed that the SVM (88% correct classification) was the most robust classifier for identifying cognitive performance, followed closely by kNN (86.7%); NB performed poorly (76%). Results suggest that participants' task completion times, in conjunction with SVM or kNN, can be used to adjust the difficulty level to best suit the user in the environment.
1 Introduction
Virtual environments offer a high-dimensional way to assess a user’s cognitive capacities for performing everyday activities. As such, these assessments typically present more life-like stimuli and situations when compared to lower two-dimensional assessments (i.e., surveys or computer-based tasks with static stimuli; Parsons and Duffield 2020). High-dimensional assessments provide benefits over lower-dimensional tasks such as enhanced stimulus presentations and may lead to more naturalistic user behavior. Virtual reality-based cognitive tasks balance higher-dimensional presentations of multiple sensory modalities with experimental control (Kothgassner and Felnhofer 2020). Virtual reality-based cognitive tasks can also be used to establish computational models with latent context variables that can be extracted using nonlinear modeling and utilized for adaptation to user performance (McMahan et al. 2021).
The Virtual Environment Grocery Store (VEGS) is an example of a high-dimensional virtual reality-based cognitive assessment developed via user-centered design principles (Parsons 2012; Parsons et al. 2017; Virtual Environment Grocery Store 2012). The VEGS incorporated user-centered design (UCD) throughout to ensure that the developers and clinicians understood the user’s needs and expectations throughout all phases of design. An important user-design component was the focus on the manipulation of cognitive load experienced by the user. Specifically, it was important that the user experience experimentally controlled measures of cognitive load (ecologically valid environmental distractors; strategy formation; problem-solving) instead of extraneous load (van Merriënboer and Ayres 2005).
While immersed in the VEGS, users interact with objects and virtual human avatars as they perform shopping tasks. Specific to user-centered design was the emphasis on cognitive measures that could be assessed by the VEGS, including learning, memory, navigation, and executive functions. As users navigate the virtual store, they perform a variety of tasks. First, users travel from the front of the store to the pharmacy, where they drop off a prescription with the virtual human pharmacist. The pharmacist gives the user a number for which the user must listen while shopping (see Fig. 1). There are also announcements broadcast over the public address system. To perform well, the user must pay attention to announcements, listen for their number, and ignore other numbers while shopping (cognitive inhibition: ignoring other numbers). The user is instructed to gather items from a shopping list learned prior to immersion. Users are also instructed to navigate to an automated teller machine (ATM) after two minutes (time-based prospective memory).
The VEGS also includes other tasks: navigating through the aisles; selecting and retrieving items from the shopping list; ignoring items that are not on the shopping list; and staying within budget. After hearing their prescription number broadcast, the user is to return to the virtual pharmacist and stand in line to pick up their prescription (event-based prospective memory). Following the immersive experience in the VEGS, the user performs delayed free and cued recall of the VEGS shopping items. Recent attempts at psychometric validation of the VEGS have found it to have construct validity for assessing both older and younger adults in both high and low distraction conditions (Barnett et al. 2022, 2023; Weitzner et al. 2021). Of note, in the lower distraction conditions, the VEGS appears to be primarily a memory (episodic and prospective) assessment (Parsons and Barnett 2017). The addition of environmental distractors (e.g., additional avatars; more announcements; cell phones ringing) revealed that users' performance was related to both memory and executive functioning measures (Parsons and McMahan 2017).
While these psychometric results are promising, there is a need for a virtual environment grocery store platform that adapts to the user’s performance. Adaptive virtual environments allow for a shift away from one-size-fits-all experiences toward individual user-centered designs (Scott et al. 2016; Shute and Towle 2018). Moreover, adaptive virtual learning environments can potentially lead to enhanced cognitive representations and better knowledge transfer to other contexts (Scott et al. 2016). Adaptive systems may infer user states to reduce cognitive load (Dorneich et al. 2016) and provide individualized training (Klasnja et al. 2015). Recent systematic reviews of adaptive VR-based training approaches suggest that optimized adaptive virtual environments will include user’s capabilities, performance, and needs (Vaughan et al. 2016; Zahabi and Abdul Razak 2020).
Adaptive algorithms can be developed to tailor assessment and training to the user’s strengths and weaknesses (Reise and Waller 2009; Gibbons et al. 2008, 2016). Recently, several virtual reality-based cognitive assessments have used machine learning to develop classifiers and adaptive algorithms that can be used for personalized cognitive assessments (Alcaniz Raya et al. 2020; Asbee et al. 2023; Belger et al. 2023; De Gaspari et al. 2023; Kerick et al. 2023; Marín-Morales et al. 2018; McMahan et al. 2021; Tsai et al. 2021). Once established, these algorithms can be used for developing an adaptive virtual shopping platform that dynamically adjusts the complexity of stimulus presentations relative to the performance of the user. An adaptive version of the VEGS will allow for the assessment of the user’s limits as well as adapt to the user’s performance in a dynamic manner. For an adaptive assessment to change in response to the user, the system must first determine the user’s state. User states are typically determined using metrics such as discrete behaviors (Scott et al. 2016). These user metrics are used to apply decision rules that classify the user’s state. However, before implementing decision rules, it is important to use machine learning to develop performance classifiers. An initial step in the creation of an adaptive VEGS assessment is an examination of the performance of the classifiers. Examination of classifiers can be used to create optimal decision rules and inform test administrators of the general accuracy of the classification of the user’s state. In this paper, we compare the predictive ability of three machine learning classifiers: the Support Vector Machine, K-Nearest Neighbors, and Naïve Bayes.
2 Methods
The study received approval by a university’s committee for the protection of human subjects.
2.1 Participants
Study data were gathered and analyzed from 75 college-age students from a large university in the southwestern USA. Mean age was 21.07 years (range 18–40); 53% of the participants were female. Education levels included high school degree, some college, and bachelor's degree. The ethnicity distribution was N = 16 African American, N = 5 Asian, N = 20 Hispanic, N = 28 Caucasian, and N = 6 Other. 86.6% of the participants were right-handed.
For all participants, the inclusion/exclusion criteria were: aged 18 years or older, with normal or corrected-to-normal vision. Participants were excluded if they had a history of acute psychiatric condition(s), attention-deficit/hyperactivity disorder, or other Axis I psychopathology (diagnosed or suspected). Moreover, participants were not included if they had a history of epilepsy, intellectual disability (IQ < 70), and/or neurological impairments impacting cognitive and/or motor function. No participants were excluded.
All participants reported that they were comfortable with computers and rated their technology competency as experienced. There were no significant differences in age, sex, estimated full-scale IQ, or computer comfort. Hence, the sample was considered homogeneous.
2.2 Apparatus and measures
2.2.1 Procedure
The protocol for gathering data and the experimental sessions took place over a 90-min period. After the participant (i.e., user) arrived at the laboratory, they were briefed on the study's procedures, potential risks, and benefits, and were told that they could choose not to participate. Before beginning the protocol (pre-immersion), participants signed a written informed consent form (approved by the university's institutional review board) designating their agreement to take part in testing and immersion in the virtual environment. Once informed consent was received, general demographic data were gathered, and participants responded to questions designed to assess their computer experience, comfort, and usage activities; perceived level of computer skill (Likert scale: 1 – not at all skilled to 5 – very skilled); and the types of games they played (e.g., role-playing, eSports).
2.2.2 Virtual environment grocery store
The VEGS was run on the Windows 10 operating system on a computer with an Intel Core i7 (16 GB RAM) and an NVIDIA GeForce GTX 1080, with DisplayPort 1.2 used for video output. While multiple head-mounted displays (HMDs) can be used with the VEGS, the HTC Vive (http://www.htcvive.com) was used in this study. The HTC Vive uses an organic light-emitting diode (OLED) display with a resolution of 2160 × 1200 and a refresh rate of 90 Hz. Participant head position was tracked using embedded inertial measurement units, while the external Lighthouse tracking system corrected for common tracking drift (60 Hz update rate). The VEGS includes a number of everyday shopping activities that have been found to be associated with cognitive performance on traditional (low-dimensional) neuropsychological assessments. For example, in both low and high distraction conditions, performance on the VEGS has been associated with performance on traditional measures of memory (Parsons and Barnett 2017). During high distraction conditions, performance on VEGS tasks is also associated with executive functioning (Parsons and McMahan 2017).
Prior to being immersed in the VEGS, participants took part in an encoding phase (i.e., learned a list of shopping items that they would shop for once immersed in the VEGS) and a familiarization phase (immersed into the virtual environment and experienced controllers). During the encoding phase, the participants (not immersed) were exposed to learning trials aimed at communicating the shopping items needed once immersed. Participants listened as the examiner read aloud 16 items (between each item reading there was an inter-stimulus interval of two seconds). Participants were not provided with a copy of the shopping list. Immediately following the examiner’s reading of the list, the participant was instructed to repeat the shopping items from the shopping list in any order. The participant’s immediate recall of items was recorded verbatim by a microphone and was logged for each of the immediate recall trials (Trials 1–3). Following the encoding phase (but before taking part in the actual VEGS tasks), participants took part in a familiarization phase, during which they were immersed in the virtual environment and learned the controls, navigated the environment, and made selections of items from the shelves. The duration of the familiarization phase was determined by the participant’s reported comfort and prior experience with virtual reality platforms (ranged from 3 to 5 min). Before moving onto the testing phase, examiners made sure that the participant was adept at using the controls and answered any participant questions. Next, the participant was informed of tasks needing completion during the testing phase: (1) the participant would need to travel to the pharmacy at the back of the store and click on the pharmacist to drop off a prescription. 
Once they clicked on the pharmacist, they would receive a number to remember and instructions; (2) participants were to listen for their number to be called (and ignore other numbers) as they shopped for items from the shopping list (learned during the encoding phase); (3) participants were instructed to watch the clock and go to the ATM machine after 2 min in the virtual environment (time-based prospective memory); and (4) once they heard their prescription “pick-up” number called, they were to return to the pharmacy and click on the pharmacist for pick-up (event-based prospective memory). Once the participant agreed that the instructions were understood the VEGS protocol began.
2.3 Data analytic considerations
MATLAB (version 9.2, MathWorks, Natick, MA, USA) was used for all analyses. Participant data that could serve as prediction variables for the machine learning algorithms were identified (see Table 1). Prediction variables were selected on the criterion that they could be used in real time in the adaptive environment to supply the machine learning algorithm with predictions of participant performance levels. Figure 2 shows the distribution of high performers and low performers for shopping item (learned during the encoding phase) pick-up times.
Knowing their performance levels allows the platform to adapt and optimize user experience. Once the prediction variables were identified, the descriptive statistics were calculated for each predictor (see Table 2) and box plots are presented in Fig. 3.
It is important to note when looking at Table 2 that the ranges of performance reflect the high- and low-performance categories. For example, "# of times looked shop list" ranges from high performance (looked at the shopping list in the VEGS one time) to low performance (looked at the shopping list 469 times, meaning that participant constantly consulted the list throughout the task). Likewise, for the timing variables, high performers completed tasks quickly, while low performers took greater amounts of time. These variables were included in the model to establish classifiers for high and low performance.
Each participant was categorized as either a high performer or a low performer. Using the number of items that each participant was able to find during the shopping phase, the sample mean was calculated (Mean = 7.5 items). A participant was assigned to the high-performing category if the number of items they found was larger than the mean and to the low-performing category if it was smaller than the mean. The category distribution was 37 high performers to 38 low performers.
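This mean-split labeling can be sketched in a few lines of Python (an illustrative sketch only; the study's analyses were conducted in MATLAB, and the function name here is hypothetical):

```python
def label_performance(items_found):
    """Assign 'high'/'low' labels by comparing each participant's item
    count to the sample mean (counts above the mean are 'high')."""
    mean = sum(items_found) / len(items_found)
    # Note: the paper does not specify the tie case (count exactly at the
    # mean); this sketch assigns ties to 'low'.
    return ["high" if n > mean else "low" for n in items_found]
```

For example, counts of 10, 9, 5, and 6 give a mean of 7.5 and labels high, high, low, low.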
(1) Support Vector Machine: The Support Vector Machine (SVM) uses a hyperplane to segment the data into two classes when classifying binary labeled data. The SVM trains using data belonging to both categories, transforming them from the input space into a higher-dimensional feature space. The goal of the SVM is to create a hyperplane with the maximum margin between the two categories; SVM algorithms can use different kernels (linear, polynomial, and radial basis function) to build different hyperplanes. Once trained, the SVM places test data into one of the two categories, determined by which side of the hyperplane the test data fall on. This study implemented a Type 1 classification using Nu = 0.5 with a radial basis function kernel (gamma = 0.016). The maximum number of iterations was set to 1000 with a stop error of 0.001. Tenfold cross-validation was employed, which randomly segmented the data into 90% training and 10% testing.
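The decision step can be illustrated with a minimal linear sketch: the predicted category depends only on which side of the hyperplane a point falls. (This is a simplification for clarity; the study's Nu-SVM with an RBF kernel computes the decision value from kernel evaluations against support vectors rather than an explicit weight vector, and the names below are hypothetical.)

```python
def svm_predict(w, b, x):
    """Classify x by the sign of the decision value w·x + b,
    i.e., by which side of the hyperplane w·x + b = 0 it falls on."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "high" if score >= 0 else "low"
```

With w = [1, -1] and b = 0, the point [2, 1] lands on the positive side ("high") and [1, 2] on the negative side ("low").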
(2) Naïve Bayes: Based upon Bayes' theorem, the Naïve Bayes (NB) classifier is well suited to circumstances in which the dimensionality of the inputs is high. One of the main advantages of NB is that it does not require a large training set. The NB classifier uses a calculated probability (see Eq. 1) that a set of data points belongs to a class, and chooses the class with the highest probability as its result. As a supervised learning algorithm, NB is efficient at calculating the probability that new data fit into a specific group. NB assumes that each predictor is independent of the other predictors. A feature vector is calculated for each category during the training phase; during the testing phase, the classifier uses maximum likelihood to place the data into the correct categories. In this study, tenfold cross-validation was used, segmenting the data into a training set comprising 90% of the sample and a testing set comprising 10% of the sample. A normal distribution was assumed for each predictor.
(3) k-Nearest Neighbor: k-Nearest Neighbor (kNN) is a supervised learning algorithm that uses location in feature space to determine data categorization. kNN stores each category's feature vectors during the training phase. When presented with new data, it uses Eq. 2 to calculate the shortest distance to one of the two categories. Uneven data distribution is one of the primary issues with kNN, as it can cause the algorithm to favor one category over the other. In this study, the kNN classifier used tenfold cross-validation, randomly segmenting the data into 90% training and 10% testing. Additionally, the distance measure was set to Cityblock (Manhattan).
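A minimal sketch of kNN using the Cityblock (Manhattan) distance named in the study, with a majority vote over the k nearest training points (function names are hypothetical; this is an illustration, not the implemented MATLAB classifier):

```python
def manhattan(a, b):
    """Cityblock (Manhattan) distance: sum of absolute coordinate differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_classify(query, train, k=3):
    """train: list of (feature_vector, label) pairs.
    Sort by distance to the query and take a majority vote over the k nearest."""
    nearest = sorted(train, key=lambda t: manhattan(query, t[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)
```

A query point near the cluster of low-performer vectors is voted "low"; one near the high-performer cluster is voted "high".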
3 Results
The selected predictors (see Table 1) from the randomly chosen participants were used to categorize participants into high performers and low performers using a Support Vector Machine (SVM), a Naïve Bayes (NB) classifier, and a k-Nearest Neighbor (kNN) classifier (see Table 3). With 75 participants, the data were randomly segmented into 67 training samples and eight testing samples. Each sample contained 20 data points that were used as predictors for the machine learning algorithms.
The strongest classifier was the SVM, which produced an accuracy rate of 88%. This was followed by kNN (86.7%). NB came in last, producing an accuracy of 76%. It is important to note, however, that the F-measures for kNN, SVM, and NB indicate that the dataset was symmetrical. kNN was better at correctly assigning low-performing participants, whereas the SVM was more balanced when assigning low- and high-performing participants but tended to favor high performers (see Fig. 4). This may be due to the kNN algorithm favoring low performers over high performers. NB performed the worst, producing a poor correct classification rate (76%), as seen in the confusion matrices in Fig. 5.
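For reference, the F-measure reported for each classifier is the harmonic mean of precision and recall, computable directly from the confusion-matrix counts. A minimal sketch (function name is hypothetical):

```python
def f_measure(tp, fp, fn):
    """F-measure (F1) from confusion-matrix counts:
    harmonic mean of precision tp/(tp+fp) and recall tp/(tp+fn)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, 8 true positives with 2 false positives and 2 false negatives gives precision = recall = 0.8, so F = 0.8.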
4 Discussion
This study developed machine learning classifiers for the Virtual Environment Grocery Store. While psychometric validity (Parsons and Barnett 2017; Parsons and McMahan 2017) and reliability (Barnett et al. 2022; Weitzner et al. 2021) of the VEGS have been shown, there is a need for a virtual environment grocery store platform that adapts to the user's performance. This study compared three machine learning algorithms: Support Vector Machine (SVM), Naïve Bayes (NB), and k-Nearest Neighbor (kNN). These classifiers were compared for determining when the VEGS environment would need to adapt for a user. Results revealed that the SVM (88% correct classification) was the most robust classifier for identifying cognitive performance, followed closely by kNN (86.7%) and then NB (76% correct classification). While the SVM was better at balancing between lower- and higher-performing participants, it tended to favor high performers; the kNN algorithm was better at assigning lower-performing participants. A hybrid model combining results from the SVM and kNN classifiers may therefore be best for the adaptive VEGS platform. These findings serve as an initial step toward developing decision rules that can be used to adapt the VEGS environment to the user in real time. These algorithms will be employed in a future version of an adaptive VEGS.
Based on data from the VEGS, the SVM classifier performed best, with a correct classification rate of 88%. When an SVM is used for classification, the algorithm attempts to maximize the margin (i.e., the distance between the hyperplane used for classification and the training data; Noble 2006). SVMs with greater margins are believed to perform better (Bhavsar and Panchal 2012). A hyperplane with a large margin may emerge when the data are transformed to a higher-dimensional space, leading to a high classification rate for the SVM algorithm.
The results from testing the classifiers indicate that the SVM was stable at assigning participants' performance but favored higher-performing participants, whereas the kNN algorithm was better at assigning lower-performing participants. One reason may be that higher performers had more consistent scores. Higher performers may have completed tasks in a more similar manner, for example, taking efficient routes and remembering a similar number of items. Low performers, however, may have retrieved items at more random intervals, creating a wider distribution of scores that overlapped with the higher performers' scores. This may have made it more difficult to accurately identify higher performers using kNN, which classifies points by their similarity to neighbors within a feature space.
The NB classifier did not perform well. A possible reason is that the dataset violates the naïve independence assumption: one of the assumptions of the NB classifier is that each predictor is independent (Arar and Ayan 2017). It is possible that the predictors are not completely independent, which lowers classification accuracy. For example, if a participant becomes distracted during the assessment, many of the time-based predictors would increase together.
Often a single algorithm is used for classification, but a hybrid model combining results from the SVM and kNN classifiers may be best for the adaptive VEGS platform. The hybrid system could compare the predicted classifications: if they agree, the system chooses that category; if they disagree, the system uses the category in which it has the highest confidence. A study by Mohan and colleagues (2019) found that a hybrid approach to machine learning outperformed standard approaches to prediction.
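The agree-or-defer-to-confidence rule described above can be sketched as follows (function, label, and confidence-score names are hypothetical; this is a sketch of the proposed rule, not the implemented system):

```python
def hybrid_decide(svm_label, svm_conf, knn_label, knn_conf):
    """If SVM and kNN agree, take the shared label; otherwise
    defer to the classifier with the higher confidence score."""
    if svm_label == knn_label:
        return svm_label
    return svm_label if svm_conf >= knn_conf else knn_label
```

So a disagreement with SVM at 0.6 confidence and kNN at 0.8 resolves to the kNN label.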
An important step in the creation of an adaptive system is the creation of a classifier to use decision rules to accurately determine the user’s state. Except for NB, the results suggest that the ML classifiers were able to accurately identify the users’ states (i.e., high or low performance). Psychologists and clinicians have had some success using various machine learning-based classifiers to detect physical and mental health issues such as traumatic brain injury (Mitra et al. 2016), autism spectrum disorder (Omar et al. 2019), and post-traumatic stress disorder (Galatzer-Levy et al. 2017).
Adapting the VEGS via machine learning classifiers allows the system to better identify participants' performance in real time and establish their ability level. This is important because participants perform tasks at various levels, with some users finding certain tasks easier or more difficult to perform; for example, some users have more experience or greater deficits than the average user. To perform this classification in real time, machine learning-based classifiers and decision rules were developed. Machine learning offers a tool for personalized assessment and training of users. The machine learning classifiers were able to accurately identify user performance, but the study was not without limitations. The current work focused on high versus low performance; additional categories for classification may be included in the future. For example, psychophysiological measures could be added to determine when participants may be experiencing frustration or high cognitive load. This could allow the adaptive system to provide assistance when these user states are identified.
The work presented in this research represents an initial step in the development of an adaptive virtual environment (AVE). The models implemented here use 20 predictors as an upper-level boundary to begin identifying the ideal predictors and the strongest classifier to implement within the AVE. However, not all of these predictors will be available at the start of the assessment; the adaptive environment would take this into account and continue to adjust as new data become available. Future work requires optimization of the framework to take delayed data into account in the decision-making process. Earlier approaches to psychometric validation of the VEGS used the general linear model (Barnett et al. 2022, 2023; Weitzner et al. 2021; Parsons and Barnett 2017; Parsons and McMahan 2017). While the current research moves beyond earlier VEGS validation studies with healthy aging cohorts (with the exception of Barnett et al. 2023), several recent studies have applied virtual reality and machine learning to aging clinical cohorts (Bayahya et al. 2022; Cavedoni et al. 2020; De Gaspari et al. 2023; Stasolla and Di Gioia 2023; Tsai et al. 2021). Hence, there is a need for a machine learning-based VEGS approach applied to clinical populations.
Using the classifiers identified herein, the VEGS can categorize user performance for use in a future adaptive iteration of the VEGS. The adaptive VEGS system will use a set of decision rules to inform the system how to process each category. Within the VEGS, rules can be defined for instances when a low performer is identified on the currently performed task. For example, if the task is dropping off the prescription and the user is having difficulty (i.e., a low performer), the system could suggest a path for the user to take to better navigate to the pharmacist. Additionally, during the shopping task users may struggle to find items; the adaptive system could highlight products in the store that users have yet to pick up. If a user is categorized as a high performer for an extended duration, the system would adapt to make the current task more difficult, continuing until the user becomes a low performer. In the VEGS, difficulty can be increased by adding items that the user must find, or by adding tasks. In sum, an adaptive VEGS can use machine learning-based classifiers and decision rules to personalize the assessment and training of users. The development of these machine learning classifiers is a first step toward developing concise item pools that provide equal or greater precision in establishing ability levels compared to norm-referenced paper-and-pencil tests (Gibbons et al. 2008). Adaptive virtual environments allow for a shift away from one-size-fits-all experiences toward individual user-centered designs (Scott et al. 2016; Shute and Towle 2018).
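One possible shape of such a difficulty-adjusting decision rule, sketched under assumed parameters (the step size, item bounds, and function name are all hypothetical, not values specified by the study):

```python
def adapt_difficulty(label, current_items, step=2, max_items=24, min_items=8):
    """Adjust the number of shopping items to find based on the
    classified performance label: high performers get more items
    (up to a cap), low performers get fewer (down to a floor)."""
    if label == "high":
        return min(current_items + step, max_items)
    return max(current_items - step, min_items)
```

For example, a sustained high performer currently shopping for 16 items would next be given 18, while a low performer would be dropped to 14.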
Data availability statement
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
Alcaniz Raya M, Marín-Morales J, Minissi ME, Teruel Garcia G, Abad L, ChicchiGiglioli IA (2020) Machine learning and virtual reality on body movements’ behaviors to classify children with autism spectrum disorder. J Clin Med 9(5):1260
Arar ÖF, Ayan K (2017) A feature dependent Naive Bayes approach and its application to the software defect prediction problem. Appl Soft Comput 59:197–209
Asbee J, Kelly K, McMahan T, Parsons TD (2023) Machine learning classification analysis for an adaptive virtual reality Stroop task. Virtual Real 27(2):1391–1407
Barnett MD, Chek CJ, Shorter SS, Parsons TD (2022) Comparison of traditional and virtual reality-based episodic memory performance in clinical and non-clinical cohorts. Brain Sci 12(8):1019
Barnett MD, Hardesty DR, Griffin RA, Parsons TD (2023) Performance on a virtual environment shopping task and adaptive functioning among older adults. J Clin Exp Neuropsychol 45:464–472
Bayahya AY, Alhalabi W, Alamri SH (2022) Older adults get lost in virtual reality: visuospatial disorder detection in dementia using a voting approach based on machine learning algorithms. Mathematics 10(12):1953
Belger J, Poppe S, Karnath HO, Villringer A, Thöne-Otto A (2023) The application of immersive virtual reality and machine learning for the assessment of unilateral spatial neglect. PRESENCE Virtual Augment Real. https://doi.org/10.1162/pres_a_00380
Bhavsar H, Panchal MH (2012) A review on support vector machine for data classification. Int J Adv Res Comput Eng Technol 1(10):185–189
Cavedoni S, Chirico A, Pedroli E, Cipresso P, Riva G (2020) Digital biomarkers for the early detection of mild cognitive impairment: artificial intelligence meets virtual reality. Front Hum Neurosci 14:245
De Gaspari S, Guillen-Sanz H, Di Lernia D, Riva G (2023) The Aged mind observed with a digital filter: detecting mild cognitive impairment through virtual reality and machine learning. Cyberpsychol Behav Soc Netw 26:798–801
Dorneich MC, Rogers W, Whitlow SD, DeMers R (2016) Human performance risks and benefits of adaptive systems on the flight deck. Int J Aviat Psychol 26(1–2):15–35
Galatzer-Levy IR, Ma S, Statnikov A, Yehuda R, Shalev AY (2017) Utilization of machine learning for prediction of post-traumatic stress: a re-examination of cortisol in the prediction and pathways to non-remitting PTSD. Transl Psychiatry 7(3):e1070–e1070
Gibbons RD, Weiss DJ, Kupfer DJ, Frank E, Fagiolini A, Grochocinski VJ et al (2008) Using computerized adaptive testing to reduce the burden of mental health assessment. Psychiatr Serv 59(4):361–368. https://doi.org/10.1176/ps.2008.59.4.361
Gibbons RD, Weiss DJ, Frank E, Kupfer D (2016) Computerized adaptive diagnosis and testing of mental health disorders. Annu Rev Clin Psychol 12:83–104. https://doi.org/10.1146/annurev-clinpsy-021815-093634
Kerick SE, Asbee J, Spangler DP, Brooks JB, Garcia JO, Parsons TD et al (2023) Neural and behavioral adaptations to frontal theta neurofeedback training: a proof of concept study. PLoS ONE 18(3):e0283418
Klasnja P, Hekler EB, Shiffman S, Boruvka A, Almirall D, Tewari A, Murphy SA (2015) Microrandomized trials: an experimental design for developing just-in-time adaptive interventions. Health Psychol 34(1):1220
Kothgassner OD, Felnhofer A (2020) Does virtual reality help to cut the Gordian knot between ecological validity and experimental control? Ann Int Commun Assoc 44(3):210–218
Marín-Morales J, Higuera-Trujillo JL, Greco A, Guixeres J, Llinares C, Scilingo EP et al (2018) Affective computing in virtual reality: emotion recognition from brain and heartbeat dynamics using wearable sensors. Sci Rep 8(1):13657
McMahan T, Duffield T, Parsons TD (2021) Feasibility study to identify machine learning predictors for a virtual school environment: virtual reality stroop task. Front Virtual Real 2:673191
Mitra J, Shen K, Ghose S, Bourgeat P, Fripp J, Salvado O, Pannek K, Taylor DJ, Mathias JL, Rose S (2016) Statistical machine learning to identify traumatic brain injury (TBI) from structural disconnections of white matter networks. Neuroimage 129:247–259
Mohan S, Thirumalai C, Srivastava G (2019) Effective heart disease prediction using hybrid machine learning techniques. IEEE Access 7:81542–81554
Noble WS (2006) What is a support vector machine? Nat Biotechnol 24(12):1565–1567
Omar KS, Mondal P, Khan NS, Rizvi MRK, Islam MN (2019) A machine learning approach to predict autism spectrum disorder. In: 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), pp 1–6
Parsons TD (2012) Virtual Environment Grocery Store: user manual. Computational Neuropsychology and Simulation Lab, Arizona State University, Tempe
Parsons TD, Barnett M (2017) Validity of a newly developed measure of memory: feasibility study of the virtual environment grocery store. J Alzheimers Dis 59(4):1227–1235
Parsons TD, Duffield T (2020) Paradigm shift toward digital neuropsychology and high-dimensional neuropsychological assessments. J Med Internet Res 22(12):e23777
Parsons TD, McMahan T (2017) An initial validation of the virtual environment grocery store. J Neurosci Methods 291:13–19
Parsons TD, McMahan T, Melugin P, Barnett M (2017) Virtual environment grocery store. In: Kane R, Parsons TD (eds) The role of technology in clinical neuropsychology. Oxford University Press, Oxford, pp 143–174
Reise SP, Waller NG (2009) Item response theory and clinical measurement. Annu Rev Clin Psychol 5(1):27–48
Scott E, Soria A, Campo M (2016) Adaptive 3D virtual learning environments—a review of the literature. IEEE Trans Learn Technol 10(3):262–276
Shute V, Towle B (2018) Adaptive e-learning. In: Educational psychologist. Routledge, London, pp 105–114
Stasolla F, Di Gioia M (2023) Combining reinforcement learning and virtual reality in mild neurocognitive impairment: a new usability assessment on patients and caregivers. Front Aging Neurosci 15:1189498
Tsai CF, Chen CC, Wu EHK, Chung CR, Huang CY, Tsai PY, Yeh SC (2021) A machine-learning-based assessment method for early-stage neurocognitive impairment by an immersive virtual supermarket. IEEE Trans Neural Syst Rehabil Eng 29:2124–2132
van Merriënboer JJG, Ayres P (2005) Research on cognitive load theory and its design implications for e-learning. Educ Technol Res Dev 53(3):5–13
Vaughan N, Gabrys B, Dubey VN (2016) An overview of self-adaptive technologies within virtual reality training. Comput Sci Rev 22:65–87
Virtual Environment Grocery Store [Computer software] (2012) Computational Neuropsychology and Simulation Lab, Tempe
Weitzner DS, Calamia M, Parsons TD (2021) Test-retest reliability and practice effects of the virtual environment grocery store (VEGS). J Clin Exp Neuropsychol 43(6):547–557
Zahabi M, Abdul Razak AM (2020) Adaptive virtual reality-based training: a systematic literature review and framework. Virtual Real 24(4):725–752
Acknowledgements
None.
Author information
Contributions
TP led the study conception and design. Analysis and initial draft were prepared by TP, JA, and TM. All authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Ethical approval
The authors state that there are no potential conflicts of interest. Research involving human participants was conducted with approval from the University's Institutional Review Board. Participants provided informed consent prior to the experimental procedures.
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Parsons, T.D., McMahan, T. & Asbee, J. Feasibility study to identify machine learning predictors for a Virtual Environment Grocery Store. Virtual Reality 28, 32 (2024). https://doi.org/10.1007/s10055-023-00927-4