Article

A Study of Vicon System Positioning Performance

1 Normandie University, UNIROUEN, ESIGELEC, IRSEEM, 76000 Rouen, France
2 Department of Multimodal Transportation Infrastructure, Cerema, 76120 Le Grand Quevilly, France
3 Normandie University, UNIROUEN, UNIHAVRE, INSA Rouen, LITIS, 76000 Rouen, France
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2017, 17(7), 1591; https://doi.org/10.3390/s17071591
Submission received: 23 May 2017 / Revised: 4 July 2017 / Accepted: 5 July 2017 / Published: 7 July 2017
(This article belongs to the Section Physical Sensors)

Abstract

Motion capture setups are used in numerous fields. Studies based on motion capture data can be found in biomechanics, sport and animal science. Clinical science studies include gait analysis as well as balance, posture and motor control. Robotic applications encompass object tracking, and everyday applications include entertainment and augmented reality. Still, few studies investigate the positioning performance of motion capture setups. In this paper, we study the positioning performance of one of the major marker-based optoelectronic motion capture systems: the Vicon system. Our protocol includes evaluations of both static and dynamic performance. The mean error as well as the positioning variability are studied with calibrated ground truth setups that are not based on other motion capture modalities. We introduce a new setup that enables directly estimating the absolute positioning accuracy for dynamic experiments, contrary to state-of-the-art works that rely on inter-marker distances. The system performs well on static experiments, with a mean absolute error of 0.15 mm and a variability lower than 0.025 mm. Our dynamic experiments were carried out at speeds found in real applications. Our work suggests that the system error is less than 2 mm. We also found that the marker size and the Vicon sampling rate must be carefully chosen with respect to the speeds encountered in the application in order to reach the optimal positioning performance, which can go down to 0.3 mm in our dynamic study.

1. Introduction

Initially developed for gait analysis [1], Vicon (Oxford, UK) systems are now extensively used in robotics applications such as UAVs (Unmanned Aerial Vehicles). Numerous Vicon-equipped arenas exist worldwide, such as the one described in [2]. Studies on quadrotor UAVs [3,4,5,6] use Vicon-equipped arenas for purposes such as positioning ground truth, 3D reconstruction or real-time control given the position estimated by the motion capture system. Marker positions are available at frequencies above one hundred hertz.
Vicon is one of the key players in optoelectronic motion capture systems based on markers. The trademark is often used as a proprietary eponym for optoelectronic motion capture systems. Other manufacturers such as MotionAnalysis (Santa Rosa, CA, USA), Optitrack (Corvallis, OR, USA) or Qualisys (Göteborg, Sweden) also exist.
Still, few studies investigate the positioning performance of Vicon systems.
Barrows [7] used Vicon MX-F40 cameras while studying aeroelastic resonance in wind tunnels. Markers were placed on a 58 cm long rail. Using the rail, the researchers were able to precisely move the markers. A positioning error slightly larger than 1 mm is found. It corresponds to the error magnitude usually considered as a standard for this type of systems.
Yang et al. [8] evaluated the 3D positioning performance with a numerically controlled drilling machine. The expected positioning precision is given as 20 µm. Samples are taken in a space of 400 × 300 × 300 mm³. Four marker sizes were investigated; they conclude that the marker size does not impact the positioning performance. The main difference is explained by the Vicon camera sensor resolution. The performance is evaluated from 20 positions. Positioning data is collected for 1 s at each location with a motion capture device running at 100 Hz. The metric used is the mean absolute error over all samples. The position-wise performance is not investigated.
Manecy et al. [2] studied the 3D positioning performance in a Vicon-equipped arena used for UAVs. The arena is equipped with 17 cameras and represents a space of 6 × 6 × 8 m³. They did not exactly investigate the Vicon accuracy, as no other ground truth setup was used. Markers were manually placed in the arena, so the performance metric actually corresponds to the positioning measurement variability. This work demonstrates that the positioning variability is less than 1.5 mm.
Chiari et al. [9] propose a survey of gait analysis instrumental errors. Their survey shows that inter-marker distance measurements have extensively been used as accuracy and precision measures. Vicon dynamic performance evaluation is based on the ability to perform inter-distance measurements from several kinds of trajectories. They can be grouped into four categories as follows: pendulum test, full volume test, walking test and MAL (Movement Analysis Laboratory, introduced by [10]) test. There is no positioning ground truth system in these cited works.
Diaz et al. [11] proposed evaluating the positioning performance of dynamic objects. This work targets biomedical applications. Several markers were placed on a frame, and an electric motor was used to rotate the frame. Several experiment variants were introduced by using two distances between markers, applying two rotational speeds and three motion capture setups: Vicon Mcam-60, Vicon T160 and Hu-m-an Canon Zr300. The inter-marker distance variability was used to assess the performance of the motion capture devices, as no positioning ground truth is used. The maximum speed is 0.4 m·s⁻¹. The conclusion indicates that the closer the cameras are to the tracked object, the better the motion capture setup performs. Moreover, better performances are achieved for slow motion patterns on the Hu-m-an Canon Zr300.
To sum up (Table 1), the current state-of-the-art papers include either static or low speed experiments (<1 m/s). Contrary to state-of-the-art papers, we do not use inter-marker distances but propose a new framework that relies on marker positioning errors to evaluate optoelectronic motion capture devices. Marker-wise accuracies and precisions are not investigated in the literature (in either robotic or biomechanical sciences). The volumes found in Vicon evaluations are rather small compared to the environments found in robotics applications, especially for dynamic experiments.
The contributions of this work can be divided into two parts. First, we study the performance of a Vicon motion capture device in a framework corresponding to a mobile robotic environment, contrary to the state-of-the-art papers. The environment encompasses a space large enough to be used in mobile robot applications (UAVs and UGVs), and the object motions are close to those of mobile robotic applications. The system accuracy and precision are assessed for one marker at a time. Secondly, experiments are conducted with both static and dynamic objects with a ground truth positioning setup that does not use motion capture. The static study was performed with a robot operating 3D motions.

2. Experimental Setup and Evaluation Scheme

The Vicon performance was evaluated with three scenarios. First, we assessed the measurement repeatability, also known as the precision of the system. Secondly, a numerically controlled setup was used to estimate the static measurement accuracy. Finally, a setup mimicking a blade rotated by an electric motor was used to quantify the dynamic behavior of a motion capture system.

2.1. Static Measurement Precision

The electromagnetic compatibility (EMC) laboratory of IRSEEM (Embedded Electronic Systems Research Institute) has a numerically controlled 4-axis robot used to perform EMC near-field experiments (Figure 1). Its positioning specifications allow it to reach a position with a 10 µm resolution. The probe can be moved in a space of 2 × 1.5 × 1 m³. The robot was used to perform the repeatability measurement evaluation and the static measurement performance quantification.
A Vicon marker was placed on the probe monitored by eight T40S cameras. Marker positions are recorded in the robot and Vicon reference frames. Both frames are aligned as follows.
A list $P$ of at least eight 3D points $p_i$ is used. Each point $p = [x, y, z, 1]^T$ in the Vicon frame and its counterpart $p_r$ in the robot frame are related by a rigid transform, as both frames use a calibrated metric representation:

$$p = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \cdot p_r, \tag{1}$$

$$p = H \cdot p_r, \tag{2}$$

with:
$R$: a $3 \times 3$ rotation matrix with $\det R = 1$ and $R^T = R^{-1}$,
$t$: a $3 \times 1$ translation vector,
$0$: a $3 \times 1$ zero vector.

Estimating the rigid transform between both frames can be expressed as a least-squares problem as follows:

$$p = H \cdot p_r, \tag{3}$$

$$p = \begin{bmatrix} h_{1,1} & h_{1,2} & h_{1,3} & h_{1,4} \\ h_{2,1} & h_{2,2} & h_{2,3} & h_{2,4} \\ h_{3,1} & h_{3,2} & h_{3,3} & h_{3,4} \\ h_{4,1} & h_{4,2} & h_{4,3} & h_{4,4} \end{bmatrix} \cdot p_r, \tag{4}$$

$$p = \begin{bmatrix} p_r^T & 0^T & 0^T & 0^T \\ 0^T & p_r^T & 0^T & 0^T \\ 0^T & 0^T & p_r^T & 0^T \\ 0^T & 0^T & 0^T & p_r^T \end{bmatrix} \cdot \begin{bmatrix} h_1 \\ h_2 \\ h_3 \\ h_4 \end{bmatrix}, \tag{5}$$

$$\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} p_r^T & 0^T & 0^T & 0^T \\ 0^T & p_r^T & 0^T & 0^T \\ 0^T & 0^T & p_r^T & 0^T \\ 0^T & 0^T & 0^T & p_r^T \end{bmatrix} \cdot \begin{bmatrix} h_1 \\ h_2 \\ h_3 \\ h_4 \end{bmatrix}, \tag{6}$$

with:
$h_i$: the $i$-th row of $H$, written as a $4 \times 1$ column vector,
$R = H_{1:3, 1:3}$,
$t = H_{1:3, 4}$,
$0$: a $4 \times 1$ zero vector.
Equation (6) can be written in the form $A \cdot x = b$ with:
$n$: the number of points used in the rigid transform estimation,
$A$: a $4n \times 16$ matrix,
$b$: a $4n \times 1$ vector,
$x$: a $16 \times 1$ vector containing the rigid transform parameters.

$$\underbrace{\begin{bmatrix}
m_1^T & 0^T & 0^T & 0^T \\
0^T & m_1^T & 0^T & 0^T \\
0^T & 0^T & m_1^T & 0^T \\
0^T & 0^T & 0^T & m_1^T \\
\vdots & \vdots & \vdots & \vdots \\
m_n^T & 0^T & 0^T & 0^T \\
0^T & m_n^T & 0^T & 0^T \\
0^T & 0^T & m_n^T & 0^T \\
0^T & 0^T & 0^T & m_n^T
\end{bmatrix}}_{A} \cdot \underbrace{\begin{bmatrix} h_{1,1} \\ h_{1,2} \\ h_{1,3} \\ h_{1,4} \\ h_{2,1} \\ \vdots \\ h_{4,4} \end{bmatrix}}_{x} = \underbrace{\begin{bmatrix} x_{V_1} \\ y_{V_1} \\ z_{V_1} \\ 1 \\ \vdots \\ x_{V_n} \\ y_{V_n} \\ z_{V_n} \\ 1 \end{bmatrix}}_{b}$$

with:
$m_i = [x_{M_i}, y_{M_i}, z_{M_i}, 1]^T$: the $i$-th point in homogeneous coordinates,
$x_{M_i}$: the $x$ coordinate of point $p_i$ in the world frame,
$x_{V_i}$: the $x$ coordinate of point $p_i$ in the Vicon frame.
In order to reduce invertibility problems, $A^T A \cdot x = A^T b$ is solved instead. Once $x$ is found, the rotation matrix $R$ is forced to be orthogonal using an SVD:

$$U, S, V = \mathrm{svd}(R),$$

$$R_{\mathrm{ortho}} = V \cdot U^T.$$
The rotation matrix and translation between the Vicon and robot frames are now estimated. They can be used to measure the Vicon repeatability and accuracy in a static framework.
The Euclidean distance is used to quantify the Vicon performance.
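The alignment procedure above can be sketched as follows. This is our own illustrative implementation, not the authors' code (all names are ours): it builds $A$ and $b$, solves the normal equations, and then projects the $3 \times 3$ block of $H$ onto a proper rotation. Note that with NumPy's SVD convention ($M = U S V^T$), the nearest rotation is obtained as $U V^T$.

```python
import numpy as np

def estimate_rigid_transform(p_robot, p_vicon):
    """Estimate the 4x4 transform H mapping robot-frame points to
    Vicon-frame points: solve (A^T A) x = A^T b, then force the 3x3
    block of H to be a proper rotation.

    p_robot, p_vicon: (n, 3) arrays of corresponding points, n >= 8.
    """
    n = p_robot.shape[0]
    ph = np.hstack([p_robot, np.ones((n, 1))])   # homogeneous robot points
    A = np.zeros((4 * n, 16))
    b = np.zeros(4 * n)
    for i in range(n):
        for r in range(4):                       # row r of H <-> columns 4r..4r+3
            A[4 * i + r, 4 * r:4 * r + 4] = ph[i]
        b[4 * i:4 * i + 3] = p_vicon[i]
        b[4 * i + 3] = 1.0
    x = np.linalg.solve(A.T @ A, A.T @ b)        # normal equations
    H = x.reshape(4, 4)
    # Project H[0:3, 0:3] onto SO(3) via SVD (NumPy returns M = U S Vt).
    U, _, Vt = np.linalg.svd(H[:3, :3])
    R = U @ Vt
    if np.linalg.det(R) < 0:                     # enforce det(R) = +1
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    H[:3, :3] = R
    H[3] = [0.0, 0.0, 0.0, 1.0]
    return H
```

With noise-free correspondences, the recovered $H$ matches the true rigid transform; the Euclidean distance between $H \cdot p_r$ and the measured $p$ then serves as the performance metric.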

2.2. Dynamic Measurement Evaluation

Contrary to the state-of-the-art, dynamic measurements were performed at speeds encountered in mobile robotic environments. We aimed at finding a ground truth setup with a sampling rate higher than the Vicon sampling rate used in our experiments (100 Hz).
We created a setup able to reach high speeds, made of a blade and a rotor. As can be seen in Figure 2, the system consists of an electric motor, a gear box and a blade on which four markers are attached. Mechanical slack is controlled in order to reduce its impact on the ground truth precision. The engine torque is controlled with a vector control command in order to ensure torque stability. The ground truth position is expressed as the angular position about the rotor axis. The angular ground truth is obtained from a 500 pulses per revolution quadrature encoder installed on the motor axle. Synchronization with the motion capture system is ensured by plugging the resulting encoder signals into the Vicon Giganet. Encoder signals are sampled at 75 kHz.
A 0.045° angular resolution is achieved thanks to a gear reduction ratio of 4 and by exploiting both rising and falling edges. Markers A and D are placed at each end of the blade, 30 cm away from the motor axis. This results in a positioning resolution of 0.235 mm. The Vicon system accuracy is expected to be about one millimeter given reference [7], so the actual positioning resolution might be too coarse. As a result, we interpolated the angular position up to the encoder sampling rate, i.e., 75 kHz.
The resulting angular resolution now depends on the motor spinning speed. The expected accuracy for markers A and D lies between 10 µm and 102 µm for speeds of 100 rpm and 1000 rpm, respectively. These values are more suitable to evaluate a system with an expected accuracy of 1 mm.
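As a quick check of the resolution figures above (using the encoder, edge-decoding and gear parameters stated in the text; variable names are ours):

```python
import math

# Quadrature encoder: 500 pulses per revolution. Decoding both edges of
# both quadrature channels multiplies the count by 4, and the gear
# reduction ratio of 4 multiplies it again.
counts_per_blade_rev = 500 * 4 * 4
angular_res_deg = 360.0 / counts_per_blade_rev          # 0.045 degrees

# Markers A and D sit 30 cm (300 mm) from the rotation axis.
radius_mm = 300.0
arc_res_mm = radius_mm * math.radians(angular_res_deg)  # ~0.235 mm of arc
```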
As mentioned previously, we track the estimated angular position of each marker and compare it to the ground truth angular position. As a result, the transformation $T$ between the Vicon frame and the angular frame must be estimated, with $T$ defined as a similarity:

$$T = \begin{bmatrix} sR & t \\ 0^T & 1 \end{bmatrix}.$$
In order to avoid numerical instabilities or realignment issues, we used two approaches to project the measurements performed in the Vicon frame into the ground truth frame, i.e., into angular positions. Contrary to the static study, the marker location on the blade is considered unknown, so we estimate it through the scale parameter $s$.
The blade's initial position is unknown with respect to the world frame. Moreover, the marker locations can be roughly measured on the blade, but these measurements might not be accurate enough to ensure a proper transformation estimation.
In order to estimate the transformation between the Vicon frame and the angular frame, we must transform the angular coordinates into 3D points as follows:

$$x_t = [\cos\theta_t \;\; \sin\theta_t \;\; 0]^T,$$

with:

$$\theta_0 = 0, \qquad \theta_{t+1} = \theta_t + 0.045^\circ.$$

Consequently, $x_t$ moves on a unit circle lying in the $xy$ plane. Its counterpart $x'_t$ in the Vicon frame is related by the following equation:

$$x'_t = T \cdot x_t.$$
As corresponding points are known from the sensor synchronization, we employed:
  • Umeyama's method [12] to estimate $s$, $R$ and $t$. Let us define $X = \{x_0, x_1, \ldots, x_n\}$ and $X' = \{x'_0, x'_1, \ldots, x'_n\}$, the corresponding sets of three-dimensional points. Umeyama's solution minimizes:

    $$e = \frac{1}{n} \sum_{i=1}^{n} \left\| x'_i - (sRx_i + t) \right\|^2.$$

    It is performed by first computing the means of $X$ and $X'$:

    $$\mu_x = \frac{1}{n} \sum_{i=1}^{n} x_i, \qquad \mu_{x'} = \frac{1}{n} \sum_{i=1}^{n} x'_i.$$

    Secondly, the covariance matrix of $X$ and $X'$ is computed:

    $$\Sigma_{xx'} = \frac{1}{n} \sum_{i=1}^{n} (x'_i - \mu_{x'})(x_i - \mu_x)^T.$$

    Thirdly, a singular value decomposition is performed on $\Sigma_{xx'}$:

    $$USV^T = \mathrm{SVD}(\Sigma_{xx'}).$$

    Finally, let us consider $D$ defined as follows:

    $$D = \begin{cases} \mathrm{diag}(1, 1, \mathrm{sign}(\det \Sigma_{xx'})) & \text{if } \mathrm{rank}(\Sigma_{xx'}) = 3, \\ \mathrm{diag}(1, 1, \mathrm{sign}(\det U \cdot \det V^T)) & \text{if } \mathrm{rank}(\Sigma_{xx'}) = 2. \end{cases}$$

    $s$, $R$ and $t$ are computed as follows:

    $$R = UDV^T,$$

    $$s = \frac{1}{\sigma_x^2} \mathrm{tr}(SD),$$

    $$t = \mu_{x'} - sR\mu_x,$$

    where:

    $$\sigma_x^2 = \frac{1}{n} \sum_{i=1}^{n} \left\| x_i - \mu_x \right\|^2.$$
  • A second approach uses the fact that the markers are placed on a rotating blade. As a consequence, their trajectories are expected to lie on a 3D plane.
    The 3D plane equation can be estimated as follows. A point $p = [x \; y \; z \; 1]^T$ in the Vicon frame lying on the 3D plane $N = [a \; b \; c \; d]^T$ must fulfill the condition $p^T \cdot N = 0$. From at least four points, $N$ can be estimated: $N$ is the null space of $P$, with $P = [p_1, p_2, p_3, p_4]^T$. In practice, more than four points are used. As $\mathrm{null}(P^T \cdot P) = \mathrm{null}(P)$, the following operations are performed:

    $$U, S, V = \mathrm{svd}(P^T \cdot P),$$

    $$N = v_4, \text{ with } V = [v_1, v_2, v_3, v_4].$$

    Each point is then transformed into a new frame that aligns the $z$-axis with $N$. The final step fits a circle to the resulting point cloud: the circle radius corresponds to $s$ and the circle center to $t$.
    One last variable must be estimated: the constant angular offset between the marker positions extracted from the Vicon and the encoder location. Indeed, the angular position of the blade at the first timestamp is unknown because the encoder is incremental. Moreover, the reduction gear ratio brings a ground truth uncertainty that depends on the rotation speed. As a result, recordings are performed once the rotation speed is stable, which also yields a unique speed for each dataset. We find the angular offset by incrementally testing offset values $\beta$ that are multiples of the resulting angular resolution. $\beta$ is set as:

    $$\min_{\beta} \sum_i (\theta_i - \alpha_i + \beta)^2,$$

    with:
    $\theta_i$: the ground truth angle at timestamp $i$,
    $\alpha_i$: the Vicon angle at timestamp $i$,
    $\beta$: the offset angle to be estimated.
    Once $\beta$ is found, $N$ and the angle $\beta$ are used in an axis-angle to rotation matrix conversion to obtain $R$.
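The first alignment approach, Umeyama's method [12], can be sketched as follows. This is our own minimal implementation for noiseless correspondences stacked as 3 × n arrays; the function name and layout are ours.

```python
import numpy as np

def umeyama(X, Y):
    """Least-squares similarity (s, R, t) such that Y ~= s * R @ X + t,
    following Umeyama (1991). X, Y: (3, n) arrays of paired points."""
    n = X.shape[1]
    mu_x = X.mean(axis=1, keepdims=True)
    mu_y = Y.mean(axis=1, keepdims=True)
    Xc, Yc = X - mu_x, Y - mu_y
    sigma2_x = (Xc ** 2).sum() / n              # variance of the source set
    Sigma = Yc @ Xc.T / n                       # 3x3 cross-covariance matrix
    U, S, Vt = np.linalg.svd(Sigma)
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0                          # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / sigma2_x
    t = mu_y - s * R @ mu_x
    return s, R, t
```

Applied to the blade data, `X` would hold the unit-circle points built from the encoder angles and `Y` the corresponding Vicon measurements.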
The Euclidean distance is used as the performance metric of the Vicon positioning for both transformation estimations.
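The "Plane + Circle" approach can be sketched as follows: the plane normal comes from the null space of $P^T P$, and a circle is then fitted to the in-plane points. For the circle we use an algebraic (Kåsa) fit as one common choice, since the text does not name a specific circle-fitting method; function names are ours.

```python
import numpy as np

def fit_plane(points):
    """Fit N = [a, b, c, d] such that [x, y, z, 1] . N = 0 for points on
    the plane. N is the right singular vector of P^T P associated with
    the smallest singular value, i.e. the (approximate) null space of P."""
    P = np.hstack([points, np.ones((points.shape[0], 1))])
    _, _, Vt = np.linalg.svd(P.T @ P)
    return Vt[-1]

def fit_circle_2d(xy):
    """Algebraic (Kasa) circle fit: solve x^2 + y^2 + D x + E y + F = 0
    in the least-squares sense. xy: (n, 2) in-plane marker positions."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    center = np.array([-D / 2.0, -E / 2.0])
    radius = np.sqrt(center @ center - F)
    return center, radius
```

After rotating the points so that the $z$-axis aligns with $N$, the fitted radius estimates the scale $s$ and the fitted center the translation $t$.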

3. Results for Static Experiments

Figure 3 shows the Vicon markers represented in the robot frame thanks to the transformation H found with the least-squares method explained in Section 2.1. It can be seen that both point clouds align very well: the transformation is properly estimated, with a mean squared error of 0.224 mm after frame alignment. In Figure 4, it can be seen that the retro-projection error is less than 0.255 mm for all markers. The Mean Absolute Error (MAE) is 0.153 mm and the Root Mean Squared Error (RMSE) is 0.154 mm.
In Figure 5, we investigate the Vicon accuracy and precision. A marker is placed at a location with the robot arm. Its position is sampled at 100 Hz over 1 s.
For each marker, Figure 5 shows the position accuracy, i.e., the bar graph, as well as the precision, i.e., the whiskers. The Vicon accuracy is very high, with a worst-case accuracy of 25 µm. The precision is closely related to the magnitude of the accuracy. Marker 5 has an accuracy of 25 µm and a precision of 19 µm, while marker 6 has an accuracy of 7 µm and a precision of 4.5 µm. This results in coefficients of variation equal to 0.76 and 0.64, respectively, which could be regarded as quite high. Still, the accuracy is so high that the coefficient of variation does not have any significant impact.
The precision found in our study is better than that of [2]. Moreover, our marker-wise study shows relatively significant differences in accuracy, with values spanning from 7 µm to 25 µm. Still, those differences are not significant for robotic applications.

4. Results for Dynamic Experiments

4.1. Extrinsic Calibration Method Comparison

The blade setup was used to generate several datasets as it can be seen in Table 2. Two markers were placed at each end of the blade. A third marker was placed between the blade rotation axis and the final marker.
As it can be seen in Figure 6, eight Vicon cameras were placed around the blade in a convenient fashion. The camera poses are represented in 3D. The circles correspond to the marker trajectories.
Eight datasets were recorded. They encompass different rotation speeds as well as different Vicon sampling rates. The Vicon sampling rates investigated are 100 Hz and 200 Hz. For all experiments, the encoder sampling rate was set to 75 kHz. Dataset 8 includes only one marker: markers A and D flew off due to the high rotational speed.
First, we investigate the frame alignment performance. Two methods were investigated (c.f. Section 2.2): Umeyama's method and an approach that uses a plane estimation followed by a circle fitting, which we named “Plane + Circle”. As can be seen in Table 3, the resulting rotations are close. The marker positions on the blade are also very close, with no difference between the two models (c.f. Table 4). From Table 5, it can be seen that the model errors are close despite slight differences in the translation estimation (c.f. Table 6).
In Figure 7, we compare the average error achieved with both calibration methods. Each point corresponds to the mean positioning error of a given marker taken from an experiment. As can be seen, the mean errors and standard deviations of both models are linearly correlated. Both models lead to sub-millimeter mean errors on all datasets. The mean error spans from 0.15 mm to 0.58 mm for both models. The standard deviation ranges from 0.05 mm to 0.46 mm. Overall, the coefficient of variation varies from 0.33 to 0.79. The standard deviation is correlated with the mean error, as smaller coefficients of variation are reached for small mean errors and larger coefficients for larger mean errors.
The two models are very close to each other. As a result, for the rest of this paper, we report “Plane + Circle” positioning results.

4.2. In-Plane versus Out-of-Plane Errors

Figure 8 shows the ground truth trajectory and the Vicon-based trajectory of marker B in dataset 1, which has the lowest speed. As can be seen, there is an oscillation about the $xy$ plane; note that the scale of the $z$-axis is magnified compared to the other two dimensions. We can suspect either a measurement error or mechanical problems such as blade oscillations, displacement of the motor carrying structure, etc.
To check for a blade flexural oscillation problem, we have plotted the $z$-axis error of the outer marker A and the inner marker B in Figure 9a,b. The vertical axes of the plots are not at the same scale; still, the $z$ error amplitudes are close to each other, and similar to Figure 10. The blade was made of 4 mm thick 2017A aluminum. Given the blade's low speed and its mechanical robustness, it is unlikely that we are observing a mechanical oscillation of the blade.
To check the last hypothesis, we placed three Vicon markers on the static structure (Figure 2; the markers are placed on top of the brown panels). Figure 10 represents the $z$ displacements of the fixed and dynamic markers. The vertical motions of the static markers are rather small compared to those of the dynamic markers. The fixed marker motions are within the Vicon measurement noise, and the values are close to the repeatability error found in Figure 5. Consequently, a displacement of the ground truth static structure can be discarded.
Nevertheless, in Figure 9, the $z$-axis error is small compared to the $xy$ planar error. To confirm this fact, we plot an $xy$ view of marker A's ground truth and measured positions. The black lines between ground truth and measured positions represent the distances shown as blue lines in Figure 11.
All things considered, the observed errors are likely due to the measurement process rather than to mechanical uncertainties. The Vicon system is based on multi-view marker triangulation. Such methods can be influenced by the camera locations and distribution as well as by the calibration. To check that we did not evaluate the system in a degraded configuration, we plotted the marker trajectories in the unit plane of each Vicon camera (c.f. Figure 12). The marker retro-projection uses the camera poses shown in Figure 6. Figure 12 shows that the marker trajectories are mostly centered in the image given the lenses used in our experiments.
After investigating the extrinsic calibration and the ground truth performance, our comparison process can be considered correct. We now focus on the influence of the markers' speed on the measurement error and on the final results.

4.3. Error versus Speed

We experimented with different speeds in eight datasets (c.f. Table 2) and explored the accuracy with respect to the linear speed. Results are shown in Figure 13a,b.
Firstly, the accuracy is generally better as the speed increases; the error seems to be inversely proportional to the speed. The mean errors and their standard deviations follow the same trend.
Secondly, with the motion chosen in the experimental setup, i.e., a circular motion (Figure 2), the error of the inner marker (B) is fifty percent lower than that of the outer markers (A and D).
We have three hypotheses to explain this phenomenon :
  • The camera intrinsic calibration model may correct distortions better in the center of the image than on its borders. As shown in Figure 12, marker B is more often close to the image center than markers A and D.
  • In our experiments, we used 12.7 mm markers. For speeds lower than 1.3 m/s, the marker overlaps itself on two consecutive images at a 100 Hz acquisition rate. In Figure 13a,b, the error seems to flatten for speeds larger than 1.3 m/s for datasets 1 to 6. A new stage is then reached for datasets 7 and 8, where the Vicon sampling rate was set to 200 Hz: at 200 Hz, marker overlapping is avoided for speeds larger than 2.6 m/s.
  • The decrease of the error with speed could be due to the marker tracking algorithms having a higher measurement uncertainty at low speeds. The datasets recorded at 200 Hz in our experiments have linear speeds larger than the corresponding overlap threshold.
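The overlap threshold in the second hypothesis follows directly from the marker diameter and the sampling rate: a marker travelling less than one diameter per sample period overlaps its previous image. A small sanity check (names are ours):

```python
# Marker diameter used in the experiments (12.7 mm), in meters.
MARKER_DIAMETER_M = 0.0127

def overlap_speed_threshold(rate_hz, diameter_m=MARKER_DIAMETER_M):
    """Speed below which a marker image overlaps the previous one:
    the marker travels less than one diameter between two samples."""
    return diameter_m * rate_hz

# About 1.3 m/s at 100 Hz and about 2.6 m/s at 200 Hz, matching the
# rounded values quoted in the text.
threshold_100 = overlap_speed_threshold(100.0)
threshold_200 = overlap_speed_threshold(200.0)
```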
All things considered, the worst mean absolute error (dataset 1) is lower than 0.5 mm. In the static evaluation (Section 3), the largest error found was 0.25 mm with a standard deviation of 0.015 mm. This static error value is close to that of the inner marker B.

4.4. Final Results

Table 7 shows that the Z error is not significant compared to XY components. This outcome confirms the observations made in Section 4.2.
Figure 14 presents the cumulative distribution function of the error for each marker. As can be seen, the faster speeds and the inner marker yield lower errors and less variability. Markers A and D have the same shape due to their opposite locations on the rotor (Figure 2). On the one hand, in the worst case (dataset 1, markers A and D), the Vicon measurement is better than 2 mm at 3σ. On the other hand, in the best case (dataset 8, marker B), the Vicon measurement is better than 0.3 mm at 3σ.
Table 8 summarizes the final results for three linear speed ranges: less than 1 m/s, 1 to 3 m/s and more than 3 m/s. To compute these values, we averaged all the samples in the same range. We also considered the 1σ, 2σ and 3σ errors. It can be seen that sub-millimeter positioning error is reached for speeds larger than 3 m/s. For lower speeds, the error is still fair, remaining below 2 mm.

5. Conclusions

In this paper, we proposed an optoelectronic motion capture evaluation protocol. It includes a new mechanical setup encompassing sensors that provide ground truth positioning, which overcomes the limitations of existing works based solely on inter-marker distance studies [9]. We are able to provide marker-wise absolute positioning performance. We studied the device behavior for two distinct scenarios, namely static and dynamic. The chosen speeds and trajectories are compatible with mobile robotic or UAV applications as well as other real-life applications. We propose a circular trajectory that, to the best of our knowledge, has never been used to evaluate motion capture systems. Our evaluation was based on an external ground truth that does not rely on another motion capture device.
Static experiments have shown a mean absolute positioning error of 0.15 mm and a very low variability of 0.015 mm. Consequently, a Vicon system has an excellent precision and a high accuracy for static cases.
Dynamic experiments have highlighted interesting aspects. In fact, faster displacements of the marker lead to lower errors: the error can be reduced by 40% for the same object if it moves at higher speeds. This phenomenon might be related to tracking error and marker size. In fact, if the marker does not move sufficiently between two samples, the motion observed by the cameras is within the segmentation noise of each camera. Still, our study found that the Vicon positioning error can be considered lower than 2 mm from low- to high-speed experiments. Moreover, the positioning variability is better for static experiments.
As a result, the marker size and the Vicon sampling rate should be properly tuned with respect to the displacement speeds encountered in the monitored application in order to reach the Vicon's optimal performance.
Our future work will focus on performing other trajectories in order to evaluate the robustness of the positioning in a dynamic environment. First, we will perform linear motions at speeds similar to those of the circular trajectories studied in this work. Secondly, we will use a Kuka robot in order to mimic human or animal motion, which can be considered more complex than pure rotations or translations. Finally, the same setup will be used to mimic UAV acrobatic-like motions, which are faster and less constrained than the motions of living beings.

Author Contributions

P.M. and Y.D. conceived, designed and developed the proposed method and its experimental validation; All other authors contributed their expertise to validate the proposal, evaluate and question the experimental results. All authors contributed in framing the writing of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cappozzo, A.; Della Croce, U.; Leardini, A.; Chiari, L. Human movement analysis using stereophotogrammetry: Part 1: Theoretical background. Gait Posture 2005, 21, 186–196. [Google Scholar] [CrossRef] [PubMed]
  2. Manecy, A.; Marchand, N.; Ruffier, F.; Viollet, S. X4-MaG: A Low-Cost Open-Source Micro-Quadrotor and Its Linux-Based Controller. Int. J. Micro Air Veh. 2015, 7, 89–110. [Google Scholar] [CrossRef]
  3. Ducard, G.; Andrea, R.D. Autonomous quadrotor flight using a vision system and accommodating frames misalignment. In Proceedings of the IEEE International Symposium on Industrial Embedded Systems, Lausanne, Switzerland, 8–10 July 2009; pp. 261–264. [Google Scholar]
  4. Mellinger, D.; Michael, N.; Kumar, V. Trajectory generation and control for precise aggressive maneuvers with quadrotors. Int. J. Robot. Res. 2012, 31, 664–674. [Google Scholar] [CrossRef]
  5. Mueggler, E.; Huber, B.; Scaramuzza, D. Event-based, 6-DOF pose tracking for high-speed maneuvers. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), Chicago, IL, USA, 14–18 September 2014; pp. 2761–2768. [Google Scholar]
  6. Allen, R.; Pavone, M. A Real-Time Framework for Kinodynamic Planning with Application to Quadrotor Obstacle Avoidance. Proceeedings of the AIAA Conference on Guidance, Navigation and Control, San Diego, CA, USA, 4–8 January 2016. [Google Scholar]
  7. Barrows, D.A. Videogrammetric model deformation measurement technique for wind tunnel applications. AIAA Pap. 2007, 1163, 2007. [Google Scholar]
  8. Yang, P.F.; Sanno, M.; Brüggemann, G.P.; Rittweger, J. Evaluation of the performance of a motion capture system for small displacement recording and a discussion for its application potential in bone deformation in vivo measurements. Proc. Inst. Mech. Eng. Part H: J. Eng. Med. 2012, 226, 838–847. [Google Scholar] [CrossRef] [PubMed]
  9. Chiari, L.; Della Croce, U.; Leardini, A.; Cappozzo, A. Human movement analysis using stereophotogrammetry: Part 2: Instrumental errors. Gait Posture 2005, 21, 197–211. [Google Scholar] [CrossRef] [PubMed]
  10. Della Croce, U.; Cappozzo, A. A spot check for estimating stereophotogrammetric errors. Med. Biol. Eng. Comput. 2000, 38, 260–266. [Google Scholar] [CrossRef] [PubMed]
  11. Diaz Novo, C.; Alharbi, S.; Fox, M.; Ouellette, E.; Biden, E.; Tingley, M.; Chester, V. The impact of technical parameters such as video sensor technology, system configuration, marker size and speed on the accuracy of motion analysis systems. Ingeniería Mecánica, Tecnología y Desarrollo 2014, 5, 265–271. [Google Scholar]
  12. Umeyama, S. Least-squares estimation of transformation parameters between two point patterns. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 376–380. [Google Scholar] [CrossRef]
Figure 1. Near-field robot used in our static experiments. The probe can move within a 2 × 1.5 × 1 m³ volume with a 10 µm positioning sensitivity.
Figure 2. Rotor used in the dynamic experiments.
Figure 3. Eight static points after Vicon and robot frame alignment.
Figure 4. Static point positioning error.
Figure 5. Static point positioning variability. Bars indicate the standard deviation; red whiskers show the variability range.
Figure 6. Trajectories of markers A, B and D. Camera locations and orientations obtained from the Vicon calibration wand procedure.
Figure 7. Comparison of both extrinsic calibration models. The mean difference between both models is equal to 2.23%.
Figure 8. Marker B trajectory from Dataset 1 and ground truth (axes in millimeters).
Figure 9. In-plane and Out-of-plane positioning error from Dataset 1.
Figure 10. Out-of-plane error over time.
Figure 11. Planar errors, Dataset 1 (axes in millimeters).
Figure 12. Marker trajectories in the eight Vicon camera unit planes. Circles on the ellipses indicate the locations of the main outliers of Figure 9; a given color represents the same marker seen from different cameras. Marker B's trajectory corresponds to the small ellipse, marker A's to the large ellipse.
Figure 13. Error vs. linear speed for each dataset.
Figure 14. Position error cumulative distribution functions (x-axis in millimeters).
Table 1. Summary of existing Vicon evaluation studies.
| Reference | Space | Precision Evaluation | Static | Dynamic |
|---|---|---|---|---|
| [2] | 6 × 6 × 8 m³ | | | |
| [7] | 4.6 × 73 m² wind tunnel | 0.58 m displacements | | |
| [8] | 0.4 × 0.3 × 0.3 m³ | | | |
| [11] | 2.5 × 1 × 1 m³ | linear motion (maximum speed of 0.6 m/s) | | |
| Our work | 2 × 1.5 × 1 m³ | external ground truth (max. speed of 7.6 m/s) | | |
Table 2. Dataset collected with the rotor setup.
| Dataset Number | Motor Speed (rpm) | Blades Speed (rpm) | Usable Markers | Vicon Sample Rate (Hz) | Encoder Sample Rate (kHz) |
|---|---|---|---|---|---|
| 1 | 100 | 25 | A, B, D | 100 | 75 |
| 2 | 201 | 50.25 | A, B, D | 100 | 75 |
| 3 | 304 | 76 | A, B, D | 100 | 75 |
| 4 | 301 | 75.25 | A, B, D | 100 | 75 |
| 5 | 499 | 124.75 | A, B, D | 100 | 75 |
| 6 | 756 | 189 | A, B, D | 100 | 75 |
| 7 | 1000 | 250 | A, B, D | 200 | 75 |
| 8 | 1711 | 427.75 | B | 200 | 75 |
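The blade speeds above, combined with each marker's radial position on the rotor, determine the linear marker speeds reported in Table 7 through the standard relation v = 2πrn/60. A minimal sketch of this conversion (function name is ours, for illustration):

```python
import math

def marker_linear_speed(radius_mm: float, blade_rpm: float) -> float:
    """Tangential speed (m/s) of a marker mounted at radius_mm on a blade
    rotating at blade_rpm."""
    omega = 2.0 * math.pi * blade_rpm / 60.0  # angular speed in rad/s
    return (radius_mm / 1000.0) * omega       # radius converted to meters

# Dataset 1, marker D: radius 291.34 mm, blades at 25 rpm
print(round(marker_linear_speed(291.34, 25.0), 2))  # prints 0.76, matching Table 7
```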
Table 3. Reference frame rotation estimation.
| Model | Roll (°) | Pitch (°) | Yaw (°) |
|---|---|---|---|
| Umeyama Method | −1.33 | 2.38 | −65.00 |
| Plane + Circle | 1.39 | 2.58 | −64.88 |
| Model distances | 2.72 | 0.20 | 0.12 |
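The frame alignment behind these rotation estimates follows Umeyama's closed-form least-squares solution [12]. A minimal sketch of the rigid (rotation + translation, no scale) variant, not necessarily the authors' exact implementation:

```python
import numpy as np

def umeyama_rigid(src: np.ndarray, dst: np.ndarray):
    """Closed-form least-squares rigid transform (R, t) with dst ≈ R @ src + t.
    src, dst: (N, 3) arrays of corresponding points (e.g. robot vs. Vicon frame)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps the result a proper rotation (det = +1)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Feeding the marker positions expressed in both reference frames to such a routine yields the roll/pitch/yaw values of Table 3 and the translations of Table 6.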
Table 4. Marker radial position.
| Model | Marker A (mm) | Marker B (mm) | Marker D (mm) |
|---|---|---|---|
| Umeyama Method | 291.34 | 124.32 | 291.18 |
| Plane + Circle | 291.34 | 124.32 | 291.18 |
| Deviation | 0 | 0 | 0 |
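The "Plane + Circle" radii above can be obtained by fitting a plane to a marker's near-circular trajectory, projecting onto it, and fitting a circle in-plane. A sketch of one common way to do this (function name ours; the authors' exact procedure may differ):

```python
import numpy as np

def fit_plane_circle_radius(points: np.ndarray) -> float:
    """Estimate the radius of a roughly circular 3D trajectory:
    plane fit by SVD, in-plane projection, then linear least-squares circle fit.
    points: (N, 3) marker positions."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Last right-singular vector = plane normal; first two span the plane
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    xy = centered @ Vt[:2].T
    # Circle as a linear system: x² + y² = a·x + b·y + c
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    rhs = (xy ** 2).sum(axis=1)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    # Radius from completing the square: r² = c + (a/2)² + (b/2)²
    return float(np.sqrt(c + (a / 2.0) ** 2 + (b / 2.0) ** 2))
```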
Table 5. Model errors.
| Model | Mean Absolute Error (mm) |
|---|---|
| Umeyama Method | 0.5090 |
| Plane + Circle | 0.5093 |
Table 6. Reference frame translation estimation.
| Model | Tx (mm) | Ty (mm) | Tz (mm) |
|---|---|---|---|
| Umeyama Method | −31.75 | 116.98 | 246.26 |
| Plane + Circle | −28.60 | 133.19 | 246.75 |
| Model distances | 3.15 | 16.21 | 0.49 |
Table 7. Dynamic experimentation results in millimeters: 3D error and in-plane error (Error_XY). In Dataset 8, markers A and D are not usable: at that speed, the centrifugal force detached them from the rotor.
| Dataset | Marker | Estimated Radius (mm) | Linear Speed (m/s) | Error Mean | Error_XY Mean | Error Std | Error_XY Std |
|---|---|---|---|---|---|---|---|
| 1 | D | 291.34 | 0.76 | 0.58 | 0.57 | 0.46 | 0.46 |
| 1 | B | 124.32 | 0.32 | 0.28 | 0.25 | 0.19 | 0.20 |
| 1 | A | 291.18 | 0.76 | 0.57 | 0.57 | 0.46 | 0.46 |
| 2 | D | 291.34 | 1.53 | 0.51 | 0.50 | 0.33 | 0.34 |
| 2 | B | 124.32 | 0.65 | 0.25 | 0.22 | 0.14 | 0.15 |
| 2 | A | 291.19 | 1.53 | 0.50 | 0.49 | 0.33 | 0.34 |
| 3 | D | 291.35 | 2.32 | 0.43 | 0.42 | 0.29 | 0.30 |
| 3 | B | 124.33 | 0.99 | 0.22 | 0.19 | 0.11 | 0.12 |
| 3 | A | 291.19 | 2.32 | 0.42 | 0.41 | 0.28 | 0.29 |
| 4 | D | 291.35 | 2.29 | 0.44 | 0.42 | 0.29 | 0.30 |
| 4 | B | 124.33 | 0.98 | 0.22 | 0.19 | 0.11 | 0.12 |
| 4 | A | 291.19 | 2.29 | 0.42 | 0.42 | 0.27 | 0.28 |
| 5 | D | 291.35 | 3.81 | 0.36 | 0.34 | 0.23 | 0.24 |
| 5 | B | 124.35 | 1.62 | 0.19 | 0.16 | 0.09 | 0.10 |
| 5 | A | 291.19 | 3.80 | 0.35 | 0.34 | 0.23 | 0.23 |
| 6 | D | 291.36 | 5.77 | 0.35 | 0.33 | 0.22 | 0.23 |
| 6 | B | 124.38 | 2.46 | 0.18 | 0.15 | 0.08 | 0.09 |
| 6 | A | 291.20 | 5.77 | 0.34 | 0.33 | 0.21 | 0.21 |
| 7 | D | 291.42 | 7.63 | 0.28 | 0.25 | 0.17 | 0.18 |
| 7 | B | 124.62 | 3.26 | 0.16 | 0.13 | 0.06 | 0.07 |
| 7 | A | 291.26 | 7.63 | 0.27 | 0.25 | 0.16 | 0.17 |
| 8 | D | 180.09 | 8.07 | 229.38 | 229.38 | 110.89 | 110.89 |
| 8 | B | 125.06 | 5.60 | 0.15 | 0.13 | 0.06 | 0.06 |
| 8 | A | 315.53 | 14.13 | 401.63 | 401.63 | 193.92 | 193.92 |
Table 8. Dynamic experimentation synthetic results in millimeters.
| Speed Range | Mean Error | Std Error | RMSE | 1σ Error (68.27%) | 2σ Error (95.45%) | 3σ Error (99.73%) |
|---|---|---|---|---|---|---|
| <1 m/s | 0.3543 | 0.2439 | 0.5236 | 0.3861 | 0.9156 | 1.2073 |
| [1, 3] m/s | 0.3863 | 0.2474 | 0.4171 | 0.4745 | 0.8869 | 1.1583 |
| >3 m/s | 0.2823 | 0.1682 | 0.3293 | 0.3424 | 0.6165 | 0.8330 |
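Statistics of this kind are straightforward to reproduce from a vector of per-sample positioning errors. A sketch, assuming the nσ columns are empirical percentiles of the absolute error distribution (the paper does not spell out the exact computation):

```python
import numpy as np

def error_summary(errors: np.ndarray) -> dict:
    """Summarize positioning errors (mm) into the quantities of Table 8.
    The 1σ/2σ/3σ bounds are taken here as the 68.27/95.45/99.73 empirical
    percentiles of |error| -- an assumption about how the table was built."""
    errors = np.abs(np.asarray(errors, dtype=float))
    return {
        "mean": errors.mean(),
        "std": errors.std(ddof=0),
        "rmse": float(np.sqrt((errors ** 2).mean())),
        "sigma1": float(np.percentile(errors, 68.27)),
        "sigma2": float(np.percentile(errors, 95.45)),
        "sigma3": float(np.percentile(errors, 99.73)),
    }
```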

Share and Cite

MDPI and ACS Style

Merriaux, P.; Dupuis, Y.; Boutteau, R.; Vasseur, P.; Savatier, X. A Study of Vicon System Positioning Performance. Sensors 2017, 17, 1591. https://doi.org/10.3390/s17071591
