Article

A Virtual Reality Soldier Simulator with Body Area Networks for Team Training

1 Simulator Systems Section, Aeronautical System Research Division, National Chung-Shan Institute of Science and Technology, Taichung 407, Taiwan
2 Department of Electrical Engineering, Innovation and Development Center of Sustainable Agriculture (IDCSA), National Chung Hsing University, Taichung 402, Taiwan
* Author to whom correspondence should be addressed.
Sensors 2019, 19(3), 451; https://doi.org/10.3390/s19030451
Submission received: 16 November 2018 / Revised: 16 January 2019 / Accepted: 18 January 2019 / Published: 22 January 2019
(This article belongs to the Special Issue Middleware Solutions for Wireless Internet of Things)

Abstract: Soldier-based simulators have been attracting increased attention recently, with the aim of making complex military tactics more effective, such that soldiers are able to respond rapidly and logically to battlespace situations and the commander’s decisions in the battlefield. Moreover, body area networks (BANs) can be applied to collect the training data in order to provide greater access to soldiers’ physical actions or postures as they occur in real routine training. Therefore, due to the limited physical space of training facilities, an efficient soldier-based training strategy is proposed that integrates a virtual reality (VR) simulation system with a BAN, which can capture body movements such as walking, running, shooting, and crouching in a virtual environment. The performance evaluation shows that the proposed VR simulation system is able to provide complete and substantial information throughout the training process, including detection, estimation, and monitoring capabilities.

1. Introduction

In recent years, since virtual reality (VR) training simulators allow soldiers to be trained with no risk of exposure to real situations, the development of a cost-effective virtual training environment is critical for training infantry squads [1,2,3]. This is because if soldiers do not develop and sustain tactical proficiency, they will not be able to react to a rapidly evolving battlefield. To create a VR military simulator that integrates immersion, interaction, and imagination, the issue of how to use VR factors (e.g., VR engine, software and database, input/output devices, and users and tasks) is an important one. The VR simulator is able to integrate the terrain of any real place into the training model, and the virtual simulation trains soldiers to engage targets while working as a team [4,5].
With a head-mounted display (HMD), the key feature used in VR technology, soldiers are immersed in a complex task environment that cannot be replicated in any training area. Visual telescopes are positioned in front of the eyes, and the movement of the head is tracked by microelectromechanical system (MEMS) inertial sensors. Gearing up with HMDs and an omnidirectional treadmill (ODT), soldiers can perform locomotive motions without risk of injury [6,7,8]. Note that the above systems do not offer any posture or gesture interactions between the soldiers and the virtual environment. To address this problem, the authors of [9,10,11,12,13,14,15,16,17] proposed action recognition methods based on multiple cameras. To tackle the occlusion problem, the authors mostly used Microsoft Kinect to capture color and depth data from the human body, which requires multiple devices arranged in a specific pattern for recognizing different human actions. Therefore, the above methods are not feasible in a real-time virtual environment for military training.
To overcome the limitations of the real-time training system, the dismounted soldier training system (DSTS) was developed [18,19,20,21]. In the DSTS, soldiers wear special training suits and HMDs, and stand on a rubber pad to create a virtual environment. However, the inertial sensors of training suits can only recognize simple human actions (e.g., standing and crouching). When soldiers would like to walk or run in the virtual environment, they have to control small joysticks and function keys on simulated rifles, which are not immersive for locomotion. Thus, in a training situation, soldiers will not be able to react quickly when suddenly encountering enemy fire. As a result, though the DSTS provides a multi-soldier training environment, it has weaknesses with respect to human action recognition. Therefore, in order to develop a training simulator that is capable of accurately capturing soldiers’ postures and movements and is less expensive and more effective for team training, body area networks (BANs) represent an appropriate bridge for connecting physical and virtual environments [22,23,24,25,26], which contain inertial sensors like accelerometers, e-compasses and gyroscopes to capture human movement.
Xsens has been developing inertial BANs to track human movement for several years. Although the inertial BAN, called the MVN system, is able to capture human body motions wirelessly without using optical cameras, the 60 Hz system update rate is too slow to record fast-moving human motion. The latest Xsens MVN suit solves this problem, reaching a system update rate of 240 Hz. However, the MVN suit is relatively expensive in consumer markets. In addition, it is difficult to support more than four soldiers in a room with the MVN system, since the wireless sensor nodes may have unstable connections and high latency. On the other hand, since the Xsens MVN system is not an open-source system, the core algorithm of the inertial sensors cannot be adapted for data acquisition by, or integration with, the proposed real-time program. Moreover, the MVN system is only a motion-capture BAN and is not a fully functional training simulator with inertial BANs [26].
For the above-mentioned reasons, this paper proposes a multi-player training simulator that immerses infantry soldiers in the battlefield. The paper presents a technological solution for training infantry soldiers. Fully immersive simulation training provides an effective way of imitating real situations that are generally dangerous, avoiding putting trainees at risk. To achieve this, the paper proposes an immersive system that can immediately identify human actions. The training effectiveness of this simulator is notable in several respects:
  • Cost-Effective Design: We designed and implemented our own sensor nodes, such that the fundamental operations of the inertial sensors could be adaptively adjusted for acquisition and integration. Therefore, it has a competitive advantage in terms of system cost.
  • System Capacity: Based on the proposed simulator, a six-man squad is able to conduct military exercises that are very similar to real missions. Soldiers hold mission rehearsals in the virtual environment such that leaders can conduct tactical operations, communicate with their team members, and coordinate with the chain of command.
  • Error Analysis: This work provides an analysis of the quaternion error and further explores the sensing measurement errors. Based on the quaternion-driven rotation, the measurement relation with the quaternion error between the earth frame and the sensor frame can be fully described.
  • System Delay Time: The update rate of inertial sensors is about 160 Hz (i.e., the refresh time of inertial sensors has a time delay of about 6 ms). The simulator is capable of training six men at a 60 Hz system update rate (i.e., the refresh time of the entire system needs about 16 ms), which is acceptable for human awareness with delay (≤40 ms).
  • System Feedback: Instructors can provide feedback after mission rehearsals using the visual after action review (AAR) function in the simulator, which provides different views of portions of the action and a complete digital playback of the scenario, allowing a squad to review details of the action. Furthermore, instructors can analyze data from digital records and make improvements to overcome the shortcomings in the action (Figure 1). Accordingly, in the immersive virtual environment, soldiers and leaders improve themselves with respect to situational awareness, decision-making, and coordination skills.

2. System Description

The proposed system, which is deployed in a five-square-meter area (Figure 2), is a robust training solution that can support up to six soldiers. The virtual reality environment is implemented in C++ and DirectX 11. In addition to the virtual system, each individual soldier stands in the center of an ODT. The ODTs are customized hardware designed for the soldier training simulator; each is equipped with a high-performance workstation that generates the visual graphics for the HMD and provides voice communication functions based on VoIP technology. The dimensions of the ODT are 47″ × 47″ × 78″, and it weighs 187 lbs. The computing workstation on the ODT is equipped with a 3.1 GHz Intel Core i7 processor, 8 GB RAM, and an NVidia GTX 980M graphics card. The HMD is an Oculus Rift DK2, which has two displays with 960 × 1080 resolution per eye. The VoIP software is implemented based on the Asterisk open-source PBX. Soldiers are not only outfitted with HMDs, but are also equipped with multiple inertial sensors, which consist of wearable sensor nodes (Figure 3) deployed over the full body (Figure 4). As depicted in Figure 3, a sensor node consists of an ARM-based microcontroller and nine-axis inertial measurement units (IMUs), which are equipped with ARM Cortex-M4 microprocessors and ST-MEMS chips. Note that these tiny wireless sensor nodes can work as a group to form a wireless BAN, which uses the ultra-low-power 2.4 GHz radio frequency band.

2.1. Sensor Modeling

In this subsection, the sensors are modeled in order to formulate the orientation estimation problem. The measurement models for the gyroscope, accelerometer, and magnetometer are briefly discussed in the following subsections [27].
(1) Gyroscope: The measured gyroscope signal ${}^{s}\omega_{t}^{m}$ can be represented in the sensor frame $s$ at time $t$ using

$${}^{s}\omega_{t}^{m} = {}^{s}\omega_{t} + b_{\omega,t} + e_{\omega,t} \tag{1}$$

where ${}^{s}\omega_{t}$ is the true angular velocity, $b_{\omega,t}$ is the bias caused by low-frequency offset fluctuations, and the measurement error $e_{\omega,t}$ is assumed to be zero-mean white noise. As shown in Figure 5, the raw data of the measured gyroscope signals in the x, y and z directions includes the true angular velocity, bias and error.

(2) Accelerometer: Similar to the gyroscope signal, the measured accelerometer signal ${}^{s}a_{t}^{m}$ can be represented using

$${}^{s}a_{t}^{m} \approx {}^{s}a_{b,t} + b_{a,t} + e_{a,t} \tag{2}$$

where ${}^{s}a_{b,t}$ is the linear acceleration of the body after gravity compensation, $b_{a,t}$ is the bias caused by low-frequency offset fluctuations, and the measurement error $e_{a,t}$ is assumed to be zero-mean white noise. As shown in Figure 6, the raw data of the measured accelerometer signals in the x, y and z directions includes the true linear acceleration, offset and error.

(3) Magnetometer: For the magnetometer, the measured signal ${}^{s}m_{t}^{m}$ is often modeled as the sum of the earth’s magnetic field ${}^{s}m_{t}$, the magnetic offset $b_{m,t}$, and noise $e_{m,t}$, which yields

$${}^{s}m_{t}^{m} \approx {}^{s}m_{t} + b_{m,t} + e_{m,t} \tag{3}$$

As shown in Figure 7, the raw data of the measured magnetometer signals in the x, y and z directions includes the earth’s magnetic field, offset and error.
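As a minimal illustration of the measurement model in Equation (1), the following C++ sketch synthesizes gyroscope samples as the sum of the true angular velocity, a slowly drifting bias, and zero-mean white noise. The bias value 0.08 degree/s reflects the average offset reported later in Section 2.2; the remaining constants and the struct name are illustrative assumptions, not values from the actual sensor nodes. The accelerometer and magnetometer models in Equations (2) and (3) follow the same pattern.

```cpp
// Hedged sketch (not the node firmware): one gyroscope sample per Eq. (1),
// w_meas = w_true + bias + noise, with a slow random-walk bias.
#include <array>
#include <random>

struct GyroModel {
    std::array<double, 3> bias{0.08, -0.05, 0.03};   // deg/s, low-frequency offset (illustrative)
    double noise_std = 0.2;                          // deg/s, zero-mean white noise (illustrative)
    double bias_walk_std = 0.001;                    // deg/s per sample, bias drift (illustrative)
    std::mt19937 rng{42};

    std::array<double, 3> measure(const std::array<double, 3>& true_rate) {
        std::normal_distribution<double> white(0.0, noise_std);
        std::normal_distribution<double> walk(0.0, bias_walk_std);
        std::array<double, 3> out{};
        for (int i = 0; i < 3; ++i) {
            bias[i] += walk(rng);                          // low-frequency offset fluctuation
            out[i] = true_rate[i] + bias[i] + white(rng);  // Eq. (1)
        }
        return out;
    }
};
```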

2.2. Sensor Calibration

Since the accuracy of the orientation estimate depends heavily on the measurements, the null point, and the scale factor of each axis of the sensor, a calibration procedure should be performed before each practical use. To this end, the sensor calibration is described as follows:
  • Step 1: Given a fixed gesture, we measure the sensing data (i.e., the raw data) and calculate the measurement offsets.
  • Step 2: Remove the offset and normalize the modified raw data to the maximum resolution of the sensor’s analog-to-digital converter. In this work, the calibrated results (CR) of the sensors are described by Equations (4)–(6).
Table 1 shows the means of the calibrated results in the x, y and z directions of the sensor nodes, which include the accelerometer, the magnetometer, and the gyroscope. Note that the inertial sensor signals (e.g., ${}^{s}a_{t}^{m}$, ${}^{s}m_{t}^{m}$, and ${}^{s}\omega_{t}^{m}$) are measured without movement, and a small bias or offset (e.g., $b_{a,t}$, $b_{m,t}$, or $b_{\omega,t}$) in the measured signals can therefore be observed. In general, the additive offset that needs to be corrected for before calculating the estimated orientation is very small. For instance, the additive offset may be positive or negative in the signal output (e.g., the average $b_{\omega,t} = 0.08$ degree/s while the true angular velocity ${}^{s}\omega_{t} = 0$ degree/s) without movement.
$$\mathrm{CR}_{acc} = \left({}^{s}a_{t}^{m} \pm b_{a,t}\right) / \max\!\left({}^{s}a_{t}^{m}\right) \tag{4}$$

$$\mathrm{CR}_{mag} = \left({}^{s}m_{t}^{m} \pm b_{m,t}\right) / \max\!\left({}^{s}m_{t}^{m}\right) \tag{5}$$

$$\mathrm{CR}_{gyro} = \left({}^{s}\omega_{t}^{m} \pm b_{\omega,t}\right) / \max\!\left({}^{s}\omega_{t}^{m}\right) \tag{6}$$
After bias error compensation for each sensor, the outputs of the sensors represented in (1)–(3) can be rewritten as ${}^{s}\omega_{t}^{m} = {}^{s}\omega_{t} + e_{\omega,t}$, ${}^{s}a_{t}^{m} \approx {}^{s}a_{b,t} + e_{a,t}$, and ${}^{s}m_{t}^{m} \approx {}^{s}m_{t} + e_{m,t}$, and they are then applied as inputs to the simulation system. In particular, the proposed algorithm handles zero-mean white noise well. After the modeling and calibration of the sensor measurements from the newly developed MARG are completed, all measurements are represented in the form of quaternions as inputs to the proposed algorithm.
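A minimal sketch of the two-step calibration above (Steps 1–2 and Equations (4)–(6)) is given below: a per-axis offset is estimated from a static capture, removed, and the result is normalized. The function names are illustrative; the normalization constant is passed in by the caller, standing in for the maximum measured value of Equations (4)–(6) or a fixed ADC full-scale count.

```cpp
// Hedged calibration sketch under the stated assumptions.
#include <cstddef>
#include <array>
#include <vector>

// Step 1: average a static capture (fixed gesture) to obtain the per-axis offset.
std::array<double, 3> estimateOffset(const std::vector<std::array<double, 3>>& staticSamples) {
    std::array<double, 3> offset{0.0, 0.0, 0.0};
    for (const auto& s : staticSamples)
        for (std::size_t i = 0; i < 3; ++i) offset[i] += s[i];
    for (std::size_t i = 0; i < 3; ++i) offset[i] /= static_cast<double>(staticSamples.size());
    return offset;
}

// Step 2: remove the offset and normalize (cf. Eqs. (4)-(6)); maxReading plays the
// role of the normalization term and is an assumption of this sketch.
std::array<double, 3> calibrate(const std::array<double, 3>& raw,
                                const std::array<double, 3>& offset,
                                double maxReading) {
    std::array<double, 3> cr{};
    for (std::size_t i = 0; i < 3; ++i)
        cr[i] = (raw[i] - offset[i]) / maxReading;
    return cr;
}
```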
As shown in Figure 8, a typical run of the attitude angles of a sensor node placed on the upper arm when standing in a T-pose is recorded. In the first step, we stand with the arms naturally at the sides. Then, we stretch the arms horizontally with thumbs forward, and then we move back to the first step. This experiment shows that the sensor node is capable of tracking the motion curve directly.

2.3. Information Processing

As mentioned above, human action recognition is crucial to developing an immersive training simulator. The wireless BAN is able to instantaneously track the action of a soldier’s skeleton. The microcontroller of the sensor node acquires the raw data from the accelerometer, gyroscope and magnetometer through SPI and I2C interfaces. The measurements of the accelerometer, gyroscope and magnetometer are contaminated by scale factor and bias errors, resulting in a large estimation error during sensor fusion. Thus, a procedure for normalization and error correction is needed for the sensor nodes to ensure the consistency of the nine-axis data (Figure 9). After that, the calibrated accelerometer and magnetometer data can be used to calculate the orientation and rotation. Estimation based on the accelerometer does not include the heading angle, which is perpendicular to the direction of gravity. Thus, the magnetometer is used to measure the heading angle. The Euler angles are defined as follows:
Roll Φ: rotation about the X-axis
Pitch θ: rotation about the Y-axis
Heading ψ: rotation about the Z-axis

Let $X_{acc}$, $Y_{acc}$, $Z_{acc}$ be the accelerometer components. We have:

$$\text{Pitch } \theta = \tan^{-1}\!\left(\frac{X_{acc}}{\sqrt{Z_{acc}^{2} + Y_{acc}^{2}}}\right) \tag{7}$$

$$\text{Roll } \Phi = \tan^{-1}\!\left(\frac{Y_{acc}}{Z_{acc}}\right) \tag{8}$$

Let $X_{mag}$, $Y_{mag}$, $Z_{mag}$ be the magnetometer components. We have:

$$X_{h} = X_{mag}\cos\theta + Y_{mag}\sin\theta\sin\Phi + Z_{mag}\sin\theta\cos\Phi \tag{9}$$

$$Y_{h} = Y_{mag}\cos\Phi + Z_{mag}\sin\Phi \tag{10}$$

$$\text{Heading } \psi = \tan^{-1}\!\left(\frac{Y_{h}}{X_{h}}\right) \tag{11}$$

Accordingly, the complementary filter output (using the gyroscope) is

$$Angle_{output} = W \times (Angle_{output} + Angle_{gyro}) + (1 - W) \times Angle_{acc+mag}, \quad 0 < W < 1 \tag{12}$$
To mitigate the accumulated drift from directly integrating the angular velocity of the gyroscope, the accelerometer and magnetometer are used as aiding sensors to provide the vertical and horizontal references for the Earth. Moreover, a complementary filter [28,29] is able to combine the measurement information of an accelerometer, a gyroscope and a magnetometer, offering advantages in terms of both long-term and short-term observation. For instance, an accelerometer does not drift over the course of a long-term observation, and a gyroscope is not susceptible to small forces during a short-term observation. The W value of the complementary filter in Equation (12) is the ratio in which the inertial sensor data are fused, which can then be used to compensate for gyroscope drift by using the accelerometer and magnetometer measurements. In this work, W is set to 0.9 or slightly higher. Therefore, the complementary filter provides an accurate orientation estimation.
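The sketch below strings Equations (7)–(12) together: tilt angles from the accelerometer, a tilt-compensated heading from the magnetometer, and a complementary-filter update that blends the integrated gyroscope rate with the accelerometer/magnetometer estimate. It is a hedged illustration rather than the simulator's implementation: the gyroscope term in Equation (12) is interpreted here as the angular increment over one sample period, `atan2` is used in place of $\tan^{-1}$ for quadrant handling, angles are in radians, and the names are assumptions.

```cpp
#include <cmath>

struct Attitude { double roll, pitch, heading; };

// Eqs. (7)-(11): attitude from calibrated accelerometer and magnetometer components.
Attitude accelMagAttitude(double ax, double ay, double az,
                          double mx, double my, double mz) {
    Attitude a{};
    a.pitch = std::atan2(ax, std::sqrt(az * az + ay * ay));      // Eq. (7)
    a.roll  = std::atan2(ay, az);                                // Eq. (8)
    double xh = mx * std::cos(a.pitch)
              + my * std::sin(a.pitch) * std::sin(a.roll)
              + mz * std::sin(a.pitch) * std::cos(a.roll);       // Eq. (9)
    double yh = my * std::cos(a.roll) + mz * std::sin(a.roll);   // Eq. (10)
    a.heading = std::atan2(yh, xh);                              // Eq. (11)
    return a;
}

// Eq. (12): one complementary-filter update for a single angle; gyroRate is the
// calibrated angular rate about the corresponding axis, dt the sample period.
double complementaryUpdate(double prevAngle, double gyroRate, double accMagAngle,
                           double dt, double W = 0.9) {
    return W * (prevAngle + gyroRate * dt) + (1.0 - W) * accMagAngle;
}
```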

2.4. Communication and Node Authentication Procedures

To tackle the problem of interference and to reduce the bit error rate (BER) of the wireless data transmission between the sensor nodes and the sink node, communication mechanisms can be applied to build a robust wireless communication system [30,31]. The technique employed for communication between the sensor nodes and the sink node is frequency-hopping spread spectrum (FHSS). The standard FHSS modulation technique is able to avoid interference between different sensor nodes in the worldwide ISM frequency band, which is important here because there are six BANs in total, all performing orientation updates in a small room. Moreover, the sensor nodes transmit data to the sink node based on packet communication.
In the beginning, sensor nodes send packets, which include orientation and rotation data. When the sink node receives a packet from a sensor node, an acknowledgement packet is transmitted immediately (Figure 10). If the acknowledgement fails to arrive, the sensor node retransmits the packet, unless the number of retries exceeds the retransmission limit. When the sink node receives packets in operation mode, it checks the payload of the packets. As shown in Figure 11, if the payload is equal to the sensor node’s unique identification information, the sink node accepts the packet and scans the next sensor node’s channel during the next time slot. The ten wireless sensor node channels operate in the frequency range of 2400–2525 MHz.
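The following sketch illustrates the exchange just described: a sensor node retransmits a packet until an acknowledgement arrives or a retry limit is reached, and the sink node hops to the next node's channel in each time slot. It is not the node firmware; the `radio*` functions are hypothetical stand-ins for the 2.4 GHz transceiver driver (stubbed here so the sketch compiles), and the retry limit is an assumed value.

```cpp
#include <cstdint>

constexpr int kRetryLimit  = 3;    // assumed retransmission limit
constexpr int kNumChannels = 10;   // one channel per wireless sensor node

bool radioSend(const uint8_t* /*payload*/, int /*len*/) { return true; }  // stub
bool radioWaitAck(int /*timeoutMs*/)                    { return true; }  // stub
void radioSetChannel(int /*channel*/)                   {}                // stub

// Sensor-node side: send one orientation packet with bounded retransmission.
bool sendWithRetry(const uint8_t* payload, int len) {
    for (int attempt = 0; attempt <= kRetryLimit; ++attempt) {
        radioSend(payload, len);
        if (radioWaitAck(/*timeoutMs=*/5)) return true;   // ACK received
    }
    return false;                                          // retry limit exceeded
}

// Sink-node side: move to the next sensor node's channel for the next time slot.
void sinkNextSlot(int& currentChannel) {
    currentChannel = (currentChannel + 1) % kNumChannels;
    radioSetChannel(currentChannel);
}
```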

2.5. System Initialization

The sink node is deployed on the back of the body and collects the streaming data from multiple wireless sensor nodes. Afterwards, the streaming data from the sensor nodes is transmitted to the workstation via Ethernet cables, modeling the skeleton of the human body along with the structure of the firearm (Figure 12). Table 2 describes the data structure of a packet, which is sent by a sensor node to the sink node in every ODT. Table 3 shows the integrated data that describes the unique features of each soldier in the virtual environment. The description provides the appearance characteristics, locomotion, and behaviors of each soldier [32,33].
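As a concrete reading of Table 2, the struct below sketches one possible in-memory layout of a sensor-node packet: an 8-bit header, a 144-bit bone payload, and an 8-bit tail. The table lists the payload fields but not their individual widths, so the field sizes, the reserved byte, and the fixed-point angle encoding are assumptions of this sketch.

```cpp
// Hedged packet layout following Table 2 (8-bit header, 144-bit payload, 8-bit tail).
#include <cstdint>

#pragma pack(push, 1)
struct BonePacket {
    uint8_t  header;           // packet number (8 bits)
    // ---- payload: 144 bits = 18 bytes (field widths are assumptions) ----
    uint8_t  totalDataLength;
    uint8_t  soldierNo;
    uint8_t  tPoseStatus;
    uint8_t  sensorNodeId;
    uint8_t  totalBones;       // total bones of a skeleton
    uint8_t  reserved;         // padding so the payload totals 18 bytes (assumption)
    int32_t  yaw;              // yaw value of the bone, fixed-point (assumption)
    int32_t  pitch;            // pitch value of the bone
    int32_t  roll;             // roll value of the bone
    // ----------------------------------------------------------------------
    uint8_t  tail;             // end-of-packet marker (8 bits)
};
#pragma pack(pop)

static_assert(sizeof(BonePacket) == 20, "8-bit header + 144-bit payload + 8-bit tail");
```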
For sensor nodes, the data stream reported to the sink node is interpreted as the movement-based data structures of a skeleton. Each wearable sensor node of the skeleton is assigned unique identification information during the initialization phase. As a result, the sink node can distinguish which sensor nodes are online. Accordingly, when the power is turned on, the HMDs on the soldiers are automatically connected to the workstations, and the voice communication group setting is immediately ready. Please note that the T-pose calibration is performed on sensor node n of a skeleton to initialize the root frame, which is given by
$$q_{1,n} = q_{T,n} \otimes q_{0,n} \otimes q_{T,n}, \tag{13}$$

where $q_{T,n}$ is the reading of sensor node $n$ in the modified T-pose and $q_{1,n}$ is the new body frame obtained from the initial root frame $q_{0,n}$.
Although sensor nodes are always worn on certain positions of a human body, the positions of sensor nodes may drift due to the movements occurring during the training process. Hence, the T-pose calibration procedure can be applied to estimate the orientation of the sensors. After that, the system is prepared to log in for simulation tasks. The system initialization flow diagram is shown in Figure 13.

3. Quaternion Representation

This section outlines a quaternion representation of the orientation of the sensor arrays. Quaternions provide a convenient mathematical notation for representing the orientation and rotation of 3D objects, because the quaternion representation is more numerically stable and efficient than rotation matrix and Euler angle representations. According to [34,35,36], a quaternion can be thought of as a vector with four components,
$$q = q_{0} + q_{x}i + q_{y}j + q_{z}k \tag{14}$$

as a composite of a scalar and an ordinary vector. The quaternion units $q_{x}$, $q_{y}$, $q_{z}$ are called the vector part of the quaternion, while $q_{0}$ is the scalar part. The quaternion can frequently be written as an ordered set of four real quantities,

$$q = [\,q_{0},\ q_{x},\ q_{y},\ q_{z}\,]. \tag{15}$$

Denote ${}^{s}_{e}q$ as the orientation of the earth frame $u_{e}$ with respect to the sensor frame $u_{s}$. The conjugate of a quaternion, denoted by the sign $*$, can be used to represent the orientation obtained by swapping the relative frames. Therefore, the conjugate of ${}^{s}_{e}q$ can be denoted as

$${}^{s}_{e}q^{*} = {}^{e}_{s}q = [\,q_{0},\ -q_{x},\ -q_{y},\ -q_{z}\,]. \tag{16}$$

Moreover, the quaternion product $\otimes$ can be used to describe compounded orientations, and its definition is based on the Hamilton rule in [37]. For example, the compounded orientation ${}^{h}_{s}q$ can be defined by

$${}^{h}_{s}q = {}^{e}_{s}q \otimes {}^{h}_{e}q, \tag{17}$$

where ${}^{h}_{e}q$ denotes the orientation of the earth frame $u_{e}$ with respect to the frame $u_{h}$.
A human body model consists of a set of body segments connected by joints. For upper limbs and lower limbs, kinematic chains are modeled that branch out around the torso. The kinematic chain describes the relationship between rigid body movements and the motions of joints. A forward kinematics technique, which was introduced for the purposes of robotic control, is used to configure each pair of adjacent segments. In the system, the aim of building human kinematic chains is to determine the transformation matrix of a human body from the first to the last segments and to find the independent configuration for each joint and the relationship with the root frame.
Thus, the relative rotation $q^{n}_{n-1}$ used for the orientation from sensor node $n-1$ to sensor node $n$ is given by

$$q^{n}_{n-1} = q_{n-1} \otimes q_{n}. \tag{18}$$

Figure 14 shows the simplified segment biomechanical model of the human body. The kinematics of segments on which no inertial sensors are attached (e.g., hands, feet, toes) are considered to be rigid connections between neighboring segments. The transformation matrix is defined as

$$Q^{n}_{n-1} = \begin{bmatrix} q^{n}_{n-1} & T^{n}_{n-1} \\ 0 & 1 \end{bmatrix}, \tag{19}$$

where $T^{n}_{n-1}$ is the translation matrix from the sensor frame to the body frame. According to [37,38], therefore, the transformation matrix of a human body from the first segment to the n-th segment is

$$Q^{n}_{1} = Q^{n}_{n-1}\, Q^{n-1}_{n-2}\, Q^{n-2}_{n-3} \cdots Q^{3}_{2}\, Q^{2}_{1}. \tag{20}$$
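The quaternion operations used in this section can be summarized in a few lines of code. The sketch below shows the Hamilton product (the composition operator of Equation (17)), the conjugate of Equation (16), and the relative segment rotation written in the product form of Equation (18); the struct and function names are illustrative, not the simulator's actual classes.

```cpp
#include <cmath>

struct Quat {
    double q0, qx, qy, qz;                       // scalar part, then vector part

    Quat conjugate() const { return {q0, -qx, -qy, -qz}; }   // Eq. (16)

    // Hamilton product: this ⊗ r
    Quat operator*(const Quat& r) const {
        return {
            q0 * r.q0 - qx * r.qx - qy * r.qy - qz * r.qz,
            q0 * r.qx + qx * r.q0 + qy * r.qz - qz * r.qy,
            q0 * r.qy - qx * r.qz + qy * r.q0 + qz * r.qx,
            q0 * r.qz + qx * r.qy - qy * r.qx + qz * r.q0
        };
    }
};

// Relative rotation between neighboring segments, following the product form of Eq. (18).
Quat relativeRotation(const Quat& q_prev, const Quat& q_next) {
    return q_prev * q_next;
}
```

Under this sketch, the T-pose re-initialization of Equation (13) would read `Quat q1 = qT * q0 * qT;`, with the same hedge as above that the product form follows the equation as printed.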

4. Performance Analysis

The analysis focuses on the quaternion error and further explores the sensing measurement errors. Based on the quaternion-driven rotation, the measurement relation with the quaternion error between the earth frame and sensor frame can be further described.

4.1. Rotation Matrix

According to [36], given a unit quaternion $q = q_{r} + q_{x}i + q_{y}j + q_{z}k$, the quaternion-driven rotation can be further described by the rotation matrix $R$, which yields

$$R = \begin{bmatrix} 1 - 2(q_{y}^{2} + q_{z}^{2}) & 2q_{x}q_{y} - 2q_{r}q_{z} & 2q_{x}q_{z} + 2q_{r}q_{y} \\ 2q_{x}q_{y} + 2q_{r}q_{z} & 1 - 2(q_{x}^{2} + q_{z}^{2}) & 2q_{y}q_{z} - 2q_{r}q_{x} \\ 2q_{x}q_{z} - 2q_{r}q_{y} & 2q_{y}q_{z} + 2q_{r}q_{x} & 1 - 2(q_{x}^{2} + q_{y}^{2}) \end{bmatrix}. \tag{21}$$

Let $\hat{q}$ be an estimate of the true attitude quaternion $q$. The small rotation from the estimated attitude, $\hat{q}$, to the true attitude is defined as $q_{err}$. The error quaternion is small but non-zero, due to errors in the various sensors. The relationship is expressed in terms of quaternion multiplication as follows:

$$q = \hat{q} \otimes q_{err}. \tag{22}$$

Assuming that the error quaternion $q_{err}$ represents a small rotation, it can be written as

$$q_{err} = [\,q_{r}\ \ q_{x(err)}\ \ q_{y(err)}\ \ q_{z(err)}\,]^{T} = [\,q_{r}\ \ \mathbf{q}_{err}\,]^{T}. \tag{23}$$

Noting that the error quaternion $q_{err}$ is a perturbation of the rotation matrix, and that the vector components of $\mathbf{q}_{err}$ are small, the perturbation of the rotation matrix $R$ in Equation (21) can be written as

$$R(q_{err}) \approx \begin{bmatrix} 1 & -2q_{r}q_{z(err)} & 2q_{r}q_{y(err)} \\ 2q_{r}q_{z(err)} & 1 & -2q_{r}q_{x(err)} \\ -2q_{r}q_{y(err)} & 2q_{r}q_{x(err)} & 1 \end{bmatrix} = I_{3\times3} + 2q_{r}[\mathbf{q}_{err}]_{\times}. \tag{24}$$

Equation (22), relating $\hat{q}$ and $q$, can be written as

$$R(q) = R(\hat{q})R(q_{err}) = R(\hat{q})\left[I_{3\times3} + 2q_{r}[\mathbf{q}_{err}]_{\times}\right]. \tag{25}$$

$R(\hat{q})$ is the estimate of the rotation matrix, or the equivalent of $\hat{q}$. Now, considering the sensor frame $u_{s}$ and the earth frame $u_{e}$, we have

$$u_{e} = R(\hat{q})\left[I_{3\times3} + 2q_{r}[\mathbf{q}_{err}]_{\times}\right]u_{s} = \hat{u}_{e} + 2q_{r}[\mathbf{q}_{err}]_{\times}u_{s}. \tag{26}$$

Thus, the measurement relation for the quaternion error is obtained:

$$\Delta u_{e} \equiv u_{e} - \hat{u}_{e} = 2q_{r}[\mathbf{q}_{err}]_{\times}u_{s}. \tag{27}$$

Accordingly, given the error quaternion $q_{err}$ and the sensor frame $u_{s}$, the perturbation of the earth frame $u_{e}$ can be described. The quantitative analysis of the error quaternion is detailed in Section 6.1.
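To make Equations (21) and (27) concrete, the sketch below builds the rotation matrix from a unit quaternion and evaluates the first-order earth-frame perturbation $2q_{r}[\mathbf{q}_{err}]_{\times}u_{s}$ for a given error-quaternion vector part. It is a hedged numerical companion under the stated notation; the type aliases and function names are assumptions.

```cpp
#include <array>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Eq. (21): rotation matrix from a unit quaternion (qr, qx, qy, qz).
Mat3 rotationMatrix(double qr, double qx, double qy, double qz) {
    return {{{1 - 2*(qy*qy + qz*qz), 2*qx*qy - 2*qr*qz,     2*qx*qz + 2*qr*qy},
             {2*qx*qy + 2*qr*qz,     1 - 2*(qx*qx + qz*qz), 2*qy*qz - 2*qr*qx},
             {2*qx*qz - 2*qr*qy,     2*qy*qz + 2*qr*qx,     1 - 2*(qx*qx + qy*qy)}}};
}

// Eq. (27): delta_u_e = 2*qr*[q_err]x * u_s, where [v]x u = v cross u.
Vec3 earthFramePerturbation(double qr, const Vec3& qErrVec, const Vec3& us) {
    Vec3 cross = {qErrVec[1]*us[2] - qErrVec[2]*us[1],
                  qErrVec[2]*us[0] - qErrVec[0]*us[2],
                  qErrVec[0]*us[1] - qErrVec[1]*us[0]};
    return {2*qr*cross[0], 2*qr*cross[1], 2*qr*cross[2]};
}
```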

4.2. Error Analysis

The analysis in Section 4.1 focuses on the quaternion error. Here we further explore the sensing measurement errors, which consist of the elements of the error quaternion.
Roll Φ: rotation about the X-axis
Pitch θ: rotation about the Y-axis
Heading ψ: rotation about the Z-axis
Now we associate a quaternion with Euler angles, which yields
$$\hat{q} = \begin{bmatrix} \sin\frac{\Phi}{2}\sin\frac{\theta}{2}\sin\frac{\psi}{2} + \cos\frac{\Phi}{2}\cos\frac{\theta}{2}\cos\frac{\psi}{2} \\ \sin\frac{\Phi}{2}\cos\frac{\theta}{2}\cos\frac{\psi}{2} + \cos\frac{\Phi}{2}\sin\frac{\theta}{2}\sin\frac{\psi}{2} \\ \sin\frac{\Phi}{2}\cos\frac{\theta}{2}\sin\frac{\psi}{2} + \cos\frac{\Phi}{2}\sin\frac{\theta}{2}\cos\frac{\psi}{2} \\ \sin\frac{\Phi}{2}\sin\frac{\theta}{2}\cos\frac{\psi}{2} + \cos\frac{\Phi}{2}\cos\frac{\theta}{2}\sin\frac{\psi}{2} \end{bmatrix} \tag{28}$$
Denote the pitch angle measurement as $\theta + \Delta\theta$, where $\theta$ is the true pitch angle and $\Delta\theta$ is the measurement error. To simplify the error analysis, assume that the rotation errors in the roll angle and heading angle measurements are negligible. Let $\sin(\Phi/2) = A$ and $\sin(\psi/2) = B$. Accordingly, considering the measurement error in the pitch angle, the quaternion can be rewritten as
$$q = \begin{bmatrix} AB\sin\frac{\theta+\Delta\theta}{2} + \sqrt{(1-A^{2})(1-B^{2})}\cos\frac{\theta+\Delta\theta}{2} \\ A\sqrt{1-B^{2}}\cos\frac{\theta+\Delta\theta}{2} + B\sqrt{1-A^{2}}\sin\frac{\theta+\Delta\theta}{2} \\ AB\cos\frac{\theta+\Delta\theta}{2} + \sqrt{(1-A^{2})(1-B^{2})}\sin\frac{\theta+\Delta\theta}{2} \\ A\sqrt{1-B^{2}}\sin\frac{\theta+\Delta\theta}{2} + B\sqrt{1-A^{2}}\cos\frac{\theta+\Delta\theta}{2} \end{bmatrix} \tag{29}$$
Assuming that the measurement error in the pitch angle is small, we obtain
$$\sin\frac{\theta+\Delta\theta}{2} = \sin\frac{\theta}{2}\cos\frac{\Delta\theta}{2} + \cos\frac{\theta}{2}\sin\frac{\Delta\theta}{2} \approx \sin\frac{\theta}{2} + \cos\frac{\theta}{2}\cdot\frac{\Delta\theta}{2} \tag{30}$$

$$\cos\frac{\theta+\Delta\theta}{2} = \cos\frac{\theta}{2}\cos\frac{\Delta\theta}{2} - \sin\frac{\theta}{2}\sin\frac{\Delta\theta}{2} \approx \cos\frac{\theta}{2} - \sin\frac{\theta}{2}\cdot\frac{\Delta\theta}{2}. \tag{31}$$
According to Equation (29), the quaternion with measurement error in the pitch angle can be further approximated by
$$q \approx \begin{bmatrix} AB\sin\frac{\theta}{2} + \sqrt{(1-A^{2})(1-B^{2})}\cos\frac{\theta}{2} + AB\cos\frac{\theta}{2}\cdot\frac{\Delta\theta}{2} - \sqrt{(1-A^{2})(1-B^{2})}\sin\frac{\theta}{2}\cdot\frac{\Delta\theta}{2} \\ A\sqrt{1-B^{2}}\cos\frac{\theta}{2} + B\sqrt{1-A^{2}}\sin\frac{\theta}{2} - A\sqrt{1-B^{2}}\sin\frac{\theta}{2}\cdot\frac{\Delta\theta}{2} + B\sqrt{1-A^{2}}\cos\frac{\theta}{2}\cdot\frac{\Delta\theta}{2} \\ AB\cos\frac{\theta}{2} + \sqrt{(1-A^{2})(1-B^{2})}\sin\frac{\theta}{2} - AB\sin\frac{\theta}{2}\cdot\frac{\Delta\theta}{2} + \sqrt{(1-A^{2})(1-B^{2})}\cos\frac{\theta}{2}\cdot\frac{\Delta\theta}{2} \\ A\sqrt{1-B^{2}}\sin\frac{\theta}{2} + B\sqrt{1-A^{2}}\cos\frac{\theta}{2} + A\sqrt{1-B^{2}}\cos\frac{\theta}{2}\cdot\frac{\Delta\theta}{2} - B\sqrt{1-A^{2}}\sin\frac{\theta}{2}\cdot\frac{\Delta\theta}{2} \end{bmatrix} = \hat{q} \otimes q_{err} \tag{32}$$
Note that, given a measurement error in the pitch angle along with the roll, pitch, and heading angle measurements, the error quaternion $q_{err}$ can be approximately derived from Equation (32). Therefore, the measurement relation with the quaternion error between the earth frame and the sensor frame can be further described using Equation (27).
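To make the derivation concrete, the sketch below numerically evaluates the error quaternion for a given pitch-angle error: it forms the nominal and perturbed quaternions from the Euler angles and extracts $q_{err} = \hat{q}^{*} \otimes q$ as in Equation (22). A standard ZYX (roll-pitch-yaw) Euler-to-quaternion conversion is assumed here, which may differ in sign convention from Equation (28) as printed; the function names are illustrative.

```cpp
#include <array>
#include <cmath>

using Quat4 = std::array<double, 4>;   // [q_r, q_x, q_y, q_z]

// Standard ZYX Euler-to-quaternion conversion (assumption; see lead-in).
Quat4 fromEuler(double roll, double pitch, double heading) {
    double sR = std::sin(roll / 2),    cR = std::cos(roll / 2);
    double sP = std::sin(pitch / 2),   cP = std::cos(pitch / 2);
    double sH = std::sin(heading / 2), cH = std::cos(heading / 2);
    return {cR * cP * cH + sR * sP * sH,
            sR * cP * cH - cR * sP * sH,
            cR * sP * cH + sR * cP * sH,
            cR * cP * sH - sR * sP * cH};
}

Quat4 multiply(const Quat4& a, const Quat4& b) {                 // Hamilton product
    return {a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
            a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
            a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
            a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]};
}

// q_err such that q = q_hat ⊗ q_err (Eq. 22), i.e., q_err = conj(q_hat) ⊗ q.
Quat4 errorQuaternion(double roll, double pitch, double heading, double dPitch) {
    Quat4 qHat = fromEuler(roll, pitch, heading);
    Quat4 q    = fromEuler(roll, pitch + dPitch, heading);
    Quat4 conj = {qHat[0], -qHat[1], -qHat[2], -qHat[3]};
    return multiply(conj, q);
}
```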

5. System Operating Procedures

To evaluate the effectiveness and capability of the virtual reality simulator for team training, we designed a between-subjects study. In the experiments, the impacts of three key factors on system performance are considered: training experience, the group size of the participants, and information exchange between the group members. The experimental results are detailed as follows.

5.1. Participants

The experiment involved six participants, none of whom had used the system before. Half of the participants were volunteers who had done military training in live situations, while the other half had never done live training. The participants’ ages ranged from 27 to 35.

5.2. Virtual Environment

The immersive environment was designed for a rescue mission in an enemy-held building (Figure 15). Three hostages were being guarded on the top floor of a three-story building controlled by 15 enemies. To ensure that the virtual environment was consistent with an actual training facility, we simulated a real military training site, including the urban street, the enemy-held building, and so on. All enemies, controlled automatically by the system, were capable of shooting, evading attacks, and striking as a team. When participants were immersed in the virtual environment, they could interact with other participants not only through gesture tracking, but also through VoIP communication technology.
As mentioned above, sensing measurement errors greatly affect the sensor nodes, which are attached to a human body. The integration of the inertial sensors, including sensor signals and drift errors, is performed on the basis of the kinematics of the human body. Therefore, sensing errors will be accumulated in quaternion form. As shown in Figure 16, the sensing measurement errors are calibrated when the T-pose calibration is performed. In the first step, we normalize the accelerometer, gyroscope and magnetometer in all of the sensor nodes and compensate for bias errors. In the following step, a complementary filter is used to mitigate the accumulated errors based on the advantages of long-term observation and short-term observation respectively. In the final step, T-pose calibration is performed to align the orientation of the sensor nodes with respect to the body segments, after which the sensor node is able to capture body movement accurately in the virtual environment.

5.3. Procedure

The purpose of the experiment is to evaluate the system performance. Participants follow the same path through the virtual environment in the experiment. The time-trial mission starts when the participants begin from the storm gate. Moreover, the time taken by the participants to kill all of the enemies who guard the three hostages on the top floor will be recorded. All participants control simulated M4 rifles, and the enemies control virtual AK47 rifles. All of the weapons have unlimited ammo. Under these experimental conditions, the participants’ death rate (hit rate) is recorded for data analysis by the experimenters. Moreover, if all participants are killed before they complete the mission, the rescue task is terminated, and the time will not be recorded for the experiment.

6. Experimental Results

In order to assess the system performance, four sets of experiments were performed to explore the impact of quaternion error and the training experience on mission execution and management.

6.1. Error Analysis

In the first set of simulations, we explored the characteristics of the error quaternion $q_{err}$. With reference to the analysis in Section 4.2, the rotation errors are assumed to be negligible in the roll angle and heading angle measurements, and the measurement error in the pitch angle is considered to be $\Delta\theta$. With the angle information (e.g., a heading angle of 60°, a roll angle of 30°, and a pitch angle of 15°) and $\Delta\theta$, Figure 17 presents the behavior of the error quaternion $q_{err}$ when varying the measurement error of the pitch angle. Note that, for $\Delta\theta$ ranging from 0° to 0.2°, the vector parts of $q_{err}$ (i.e., $q_{x(err)}$, $q_{y(err)}$, $q_{z(err)}$) are approximately linear with respect to $\Delta\theta$, which provides a sensible way of describing the error behaviors of rotation X, rotation Y, and rotation Z. According to Equation (27), given the quaternion error $q_{err}$ and the sensor frame $u_{s}$, the perturbation of the earth frame $u_{e}$ can be described. As shown in Figure 17, when the measurement error in the pitch angle is small, the small vector components of $q_{err}$ lead to a small perturbation of the earth frame $u_{e}$. In contrast, as the measurement error in the pitch angle increases, the perturbation in the Y-axis increases, which results in a larger error component in the Y-axis (e.g., with $\Delta\theta = 0.1°$, $q_{x(err)} = 0.006$, $q_{y(err)} = 0.05$, $q_{z(err)} = 0.0004$).

6.2. Simulated Training Performance

Figure 18 shows a snapshot of the proposed system. All three individual participants who had done the same military training at the actual training site successfully completed the rescue mission, with times of 22′16″, 25′40″, and 28′51″, respectively (mean = 25′36″, standard deviation = 3′18″). However, of the three participants who had never done the same training and started the mission individually, only one participant completed the rescue mission, with a time of 36′59″; the other two participants were killed by enemies before completing the rescue task. The three two-man teams who had been trained in the live situation completed the rescue mission with times of 11′28″, 18′19″, and 16′5″, respectively (mean = 15′17″, standard deviation = 3′30″). The three two-man teams who had never done the same training also completed the rescue mission, with times of 25′19″, 28′33″, and 26′12″, respectively (mean = 26′41″, standard deviation = 1′40″). Finally, the three-man team that had live training experience completed the rescue mission with a time of 7′49″. On the other hand, the three-man team that had never done the same training completed the rescue mission with a time of 13′47″. The mean times of experienced and inexperienced participants in the experiment are shown in Figure 19. We also evaluated another situation, in which the three-man groups completed the mission without the VoIP communication function. The mean time in this experiment increased by 1′26″ (Figure 20), which implies that communication and information processing can improve performance in rescue missions.
Finally, we evaluated a six-man team of all participants in the rescue task, because the standard deployment for a real live mission is a six-man entry team. The mission time decreased by 2′28″ with respect to the three-man experiment with experienced participants. The experimental results for mean times with different numbers of participants are shown in Figure 21. In addition, the death rate (hit rate) revealed another difference between single and multiple participants. From the results, the mean death rate (hit rate) was 1.5 shots per mission when a single participant interacted with the system. However, the mean death rate decreased to 0.38 shots per mission when multiple participants interacted with the system.

6.3. Discussion

When the participants executed the rescue mission, the activities involved in the experiment included detecting enemies, engaging enemies, moving inside the building and rescuing the hostages. The results reveal significant differences in several respects, including experience, group size, and communication, and show that, compared with the inexperienced participants, all experienced participants who had done the same training in a live situation took less time to complete the rescue mission. The wireless BANs worked accurately in the virtual environment for the experienced participants. Tactical skills (e.g., moving through built-up areas, reconnoitering an area, reacting to contact, assaulting, and clearing a room) absolutely require teamwork, demanding that the wireless BANs interact with each other reliably in terms of connection and accuracy. Without proper BANs, participants may experience mismatched interactions with their virtual avatars, and may feel uncomfortable or sick in the virtual environment.
The experimental results show that larger groups of participants took less time to complete the rescue mission than smaller groups. Moreover, a group of multiple participants had a lower death rate than a single participant. This is because, as the group size increases, team movement is more efficient and firepower is greater in the virtual environment, which is similar to a real-world mission. Furthermore, when the VoIP communication function was disabled, whether the participants were experienced or not, the rescue mission time increased. In a real rescue mission, team coordination on the battlefield is important. In the system, all participants are able to interact with each other through hand-signal tracking and voice communication. As a result, multiple-user training may become a key feature of the system.

7. Conclusions

In this paper, we have addressed problems arising when building an infantry training simulator for multiple players. The immersive approach we proposed is an appropriate solution for training soldiers in several VR scenarios. The proposed simulator is capable of training six men at a system update rate of 60 Hz (i.e., the refresh time of the entire system takes about 16 ms), which is acceptable for human awareness with delay (≤40 ms). Compared with the expensive Xsens MVN system, the proposed simulator has a competitive advantage in terms of system cost. For future work, we intend to develop improved algorithms to deal with accumulated sensing errors and environmental noise in the wireless BANs. Consequently, the system will be able to support finer gestures for military squad actions and richer scenario simulations for different uses in military training. The system is expected to be applied in different kinds of fields and situations.

Author Contributions

Y.-C.F. and C.-Y.W. conceived and designed the experiments; Y.-C.F. performed the experiments; Y.-C.F. analyzed the data; Y.-C.F. and C.-Y.W. wrote the paper.

Funding

This research was funded by the Ministry of Science and Technology of Taiwan under grant number MOST-108-2634-F-005-002, and by the “Innovation and Development Center of Sustainable Agriculture” from The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dimakis, N.; Filippoupolitis, A.; Gelenbe, E. Distributed Building Evacuation Simulator for Smart Emergency Management. Comput. J. 2010, 53, 1384–1400. [Google Scholar] [CrossRef] [Green Version]
  2. Knerr, B.W. Immersive Simulation Training for the Dismounted Soldier; No. ARI-SR-2007-01; Army Research Inst Field Unit: Orlando, FL, USA, 2007. [Google Scholar]
  3. Lele, A. Virtual reality and its military utility. J. Ambient Intell. Hum. Comput. 2011, 4, 17–26. [Google Scholar] [CrossRef]
  4. Zhang, Z.; Zhang, M.; Chang, Y.; Aziz, E.-S.; Esche, S.K.; Chassapis, C. Collaborative Virtual Laboratory Environments with Hardware in the Loop. In Cyber-Physical Laboratories in Engineering and Science Education; Springer: Berlin/Heidelberg, Germany, 2018; pp. 363–402. [Google Scholar]
  5. Stevens, J.; Mondesire, S.C.; Maraj, C.S.; Badillo-Urquiola, K.A. Workload Analysis of Virtual World Simulation for Military Training. In Proceedings of the MODSIM World, Virginia Beach, VA, USA, 26–28 April 2016; pp. 1–11. [Google Scholar]
  6. Frissen, I.; Campos, J.L.; Sreenivasa, M.; Ernst, M.O. Enabling Unconstrained Omnidirectional Walking through Virtual Environments: An Overview of the CyberWalk Project; Human Walking in Virtual Environments; Springer: New York, NY, USA, 2013; pp. 113–144. [Google Scholar]
  7. Turchet, L. Designing presence for real locomotion in immersive virtual environments: An affordance-based experiential approach. Virtual Real. 2015, 19, 277–290. [Google Scholar] [CrossRef]
  8. Park, S.Y.; Ju, H.J.; Lee, M.S.L.; Song, J.W.; Park, C.G. Pedestrian motion classification on omnidirectional treadmill. In Proceedings of the 15th International Conference on Control, Automation and Systems (ICCAS), Busan, Korea, 13–16 October 2015. [Google Scholar]
  9. Papadopoulos, G.T.; Axenopoulos, A.; Daras, P. Real-Time Skeleton-Tracking-Based Human Action Recognition Using Kinect Data. In Proceedings of the MMM 2014, Dublin, Ireland, 6–10 January 2014. [Google Scholar]
  10. Cheng, Z.; Qin, L.; Ye, Y.; Huang, Q.; Tian, Q. Human daily action analysis with multi-view and color-depth data. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  11. Kitsikidis, A.; Dimitropoulos, K.; Douka, S.; Grammalidis, N. Dance analysis using multiple kinect sensors. In Proceedings of the 2014 International Conference on Computer Vision Theory and Applications (VISAPP), Lisbon, Portugal, 5–8 January 2014; Volume 2. [Google Scholar]
  12. Kwon, B.; Kim, D.; Kim, J.; Lee, I.; Kim, J.; Oh, H.; Kim, H.; Lee, S. Implementation of human action recognition system using multiple Kinect sensors. In Proceedings of the Pacific Rim Conference on Multimedia, Gwangju, Korea, 16–18 September 2015. [Google Scholar]
  13. Beom, K.; Kim, J.; Lee, S. An enhanced multi-view human action recognition system for virtual training simulator. In Proceedings of the 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Jeju, Korea, 13–16 December 2016. [Google Scholar]
  14. Liu, T.; Song, Y.; Gu, Y.; Li, A. Human action recognition based on depth images from Microsoft Kinect. In Proceedings of the 2013 Fourth Global Congress on Intelligent Systems, Hong Kong, China, 3–4 December 2013. [Google Scholar]
  15. Berger, K.; Ruhl, K.; Schroeder, Y.; Bruemmer, C.; Scholz, A.; Magnor, M.A. Marker-less motion capture using multiple color-depth sensors. In Proceedings of the Vision, Modeling, and Visualization Workshop 2011, Berlin, Germany, 4–6 October 2011. [Google Scholar]
  16. Kaenchan, S.; Mongkolnam, P.; Watanapa, B.; Sathienpong, S. Automatic multiple kinect cameras setting for simple walking posture analysis. In Proceedings of the 2013 International Computer Science and Engineering Conference (ICSEC), Nakorn Pathom, Thailand, 4–6 September 2013. [Google Scholar]
  17. Kim, J.; Lee, I.; Kim, J.; Lee, S. Implementation of an Omnidirectional Human Motion Capture System Using Multiple Kinect Sensors. IEICE Trans. Fundam. 2015, 98, 2004–2008. [Google Scholar] [CrossRef]
  18. Taylor, G.S.; Barnett, J.S. Evaluation of Wearable Simulation Interface for Military Training. Hum Factors 2012, 55, 672–690. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Barnett, J.S.; Taylor, G.S. Usability of Wearable and Desktop Game-Based Simulations: A Heuristic Evaluation; Army Research Inst for the Behavioral and Social Sciences: Alexandria, VA, USA, 2010. [Google Scholar]
  20. Bink, M.L.; Injurgio, V.J.; James, D.R.; Miller, J.T., II. Training Capability Data for Dismounted Soldier Training System; No. ARI-RN-1986; Army Research Inst for the Behavioral and Social Sciences: Fort Belvoir, VA, USA, 2015. [Google Scholar]
  21. Cavallari, R.; Martelli, F.; Rosini, R.; Buratti, C.; Verdone, R. A Survey on Wireless Body Area Networks: Technologies and Design Challenges. IEEE Commun. Surv. Tutor. 2014, 16, 1635–1657. [Google Scholar] [CrossRef]
  22. Alam, M.M.; Ben Hamida, E. Surveying wearable human assistive technology for life and safety critical applications: Standards, challenges and opportunities. Sensors 2014, 14, 9153–9209. [Google Scholar] [CrossRef] [PubMed]
  23. Bukhari, S.H.R.; Rehmani, M.H.; Siraj, S. A Survey of Channel Bonding for Wireless Networks and Guidelines of Channel Bonding for Futuristic Cognitive Radio Sensor Networks. IEEE Commun. Surv. Tutor. 2016, 18, 924–948. [Google Scholar] [CrossRef]
  24. Ambroziak, S.J.; Correia, L.M.; Katulski, R.J.; Mackowiak, M.; Oliveira, C.; Sadowski, J.; Turbic, K. An Off-Body Channel Model for Body Area Networks in Indoor Environments. IEEE Trans. Antennas Propag. 2016, 64, 4022–4035. [Google Scholar] [CrossRef]
  25. Seo, S.; Bang, H.; Lee, H. Coloring-based scheduling for interactive game application with wireless body area networks. J. Supercomput. 2015, 72, 185–195. [Google Scholar] [CrossRef]
  26. Xsens MVN System. Available online: https://www.xsens.com/products/xsens-mvn-animate/ (accessed on 21 January 2019).
  27. Tian, Y.; Wei, H.X.; Tan, J.D. An Adaptive-Gain Complementary Filter for Real-Time Human Motion Tracking with MARG Sensors in Free-Living Environments. IEEE Trans. Neural Syst. Rehabil. Eng. 2013, 21, 254–264. [Google Scholar] [CrossRef] [PubMed]
  28. Euston, M.; Coote, P.; Mahony, R.; Kim, J.; Hamel, T. A complementary filter for attitude estimation of a fixed-wing UAV. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008. [Google Scholar]
  29. Yoo, T.S.; Hong, S.K.; Yoon, H.M.; Park, S. Gain-Scheduled Complementary Filter Design for a MEMS Based Attitude and Heading Reference System. Sensors 2011, 11, 3816–3830. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Wu, Y.; Liu, K.S.; Stankovic, J.A.; He, T.; Lin, S. Efficient Multichannel Communications in Wireless Sensor Networks. ACM Trans. Sens. Netw. 2016, 12, 1–23. [Google Scholar] [CrossRef]
  31. Fafoutis, X.; Marchegiani, L.; Papadopoulos, G.Z.; Piechocki, R.; Tryfonas, T.; Oikonomou, G.Z. Privacy Leakage of Physical Activity Levels in Wireless Embedded Wearable Systems. IEEE Signal Process. Lett. 2017, 24, 136–140. [Google Scholar] [CrossRef]
  32. Ozcan, K.; Velipasalar, S. Wearable Camera- and Accelerometer-based Fall Detection on Portable Devices. IEEE Embed. Syst. Lett. 2016, 8, 6–9. [Google Scholar] [CrossRef]
  33. Ferracani, A.; Pezzatini, D.; Bianchini, J.; Biscini, G.; Del Bimbo, A. Locomotion by Natural Gestures for Immersive Virtual Environments. In Proceedings of the 1st International Workshop on Multimedia Alternate Realities, Amsterdam, The Netherlands, 16 October 2016. [Google Scholar]
  34. Kuipers, J.B. Quaternions and Rotation Sequences; Princeton University Press: Princeton, NJ, USA, 1999; Volume 66. [Google Scholar]
  35. Karney, C.F. Quaternions in molecular modeling. J. Mol. Graph. Model. 2007, 25, 595–604. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Gebre-Egziabher, D.; Elkaim, G.H.; Powell, J.D.; Parkinson, B.W. A gyro-free quaternion-based attitude determination system suitable for implementation using low cost sensors. In Proceedings of the IEEE Position Location and Navigation Symposium, San Diego, CA, USA, 13–16 March 2000. [Google Scholar]
  37. Horn, B.K.P.; Hilden, H.M.; Negahdaripour, S. Closed-form solution of absolute orientation using orthonormal matrices. JOSA A 1988, 5, 1127–1135. [Google Scholar] [CrossRef]
  38. Craig, J.J. Introduction to Robotics: Mechanics and Control; Pearson/Prentice Hall: Upper Saddle River, NJ, USA, 2005; Volume 3. [Google Scholar]
Figure 1. System training effectiveness process.
Figure 2. System architecture: single-soldier layout (a); multi-soldier network (b).
Figure 3. Top view of sensor nodes (Left); the wearable sensor node (Right).
Figure 4. Deployment of sensor nodes and the sink node.
Figure 5. Mean and variance of the gyroscope in the x, y and z directions.
Figure 6. Mean and variance of the accelerometer in the x, y and z directions.
Figure 7. Mean and variance of the magnetometer in the x, y and z directions.
Figure 8. The attitude angles of a sensor node placed on the upper arm when standing in a T-pose.
Figure 9. The diagram shows the process of sensor fusion.
Figure 10. Operation mode during each time slot. (a) Step 1: automatic packet synchronization; (b) Step 2: identification check on valid packets by the sink node.
Figure 11. The diagram shows that sensor nodes communicate with the sink node in two domains.
Figure 12. A fully equipped soldier.
Figure 13. System initialization flow diagram.
Figure 14. The kinematic chain of a human body.
Figure 15. Virtual environment on a HMD. (a) An indoor view. (b) An outdoor view.
Figure 16. Sensing measurement errors of the sensor nodes were calibrated when the T-pose calibration was performed. (a) All sensor nodes were calibrated well during the T-pose procedure. (b) One sensor node attached to the right thigh was not calibrated well, and a sensing error was derived in the pitch direction.
Figure 17. The scalar part and the vector part of the quaternion error for a small rotation of measurement error of pitch angle.
Figure 18. Snapshot of the system.
Figure 19. Means of experienced participants and inexperienced participants under various experimental conditions (horizontal axis: single-man, two-man, three-man; vertical axis unit: seconds).
Figure 20. Mean of three-man teams with voice communication/without voice communication under the experimental conditions (horizontal axis: experienced/inexperienced participants; vertical axis unit: seconds).
Figure 21. Mean times with different numbers of participants under the experimental conditions (horizontal axis: number of participants; vertical axis unit: seconds).
Table 1. The calibrated results of the sensor nodes.

        CR_acc    CR_mag    CR_gyro
x       0.95      0.98      0.96
y       0.97      0.99      0.96
z       0.99      0.98      0.94
Table 2. Data structure of a packet.

Header   (8 bits)     Data to show packet number
Payload  (144 bits)   Bone data: total data length, soldier no., T-pose status, sensor node ID, total bones of a skeleton, and the yaw, pitch, and roll values of the bone
Tail     (8 bits)     Data to show end of packet
Table 3. Description of the features of each soldier.

DATA               UNIT          RANGE
Soldier no.        N/A           0–255
Friend or foe      N/A           0–2
External control   N/A           0/1
Team no.           N/A           0–255
Rank               N/A           0–15
Appearance         N/A           0–255
BMI                kg/m²         18–32
Health             Percentage    0–100
Weapon             N/A           0–28
Vehicle            N/A           0–5
Vehicle seat       N/A           0–5
Position X         Meter         N/A
Position Y         Meter         N/A
Position Z         Meter         N/A
Heading            Degree        −180–180
Movement           N/A           0–255
Behavior           N/A           0–255
