Article

Improvement of Robot Accuracy with an Optical Tracking System

1 Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 201900, China
2 Shanghai Robot Industrial Technology Research Institute, Shanghai 200062, China
* Authors to whom correspondence should be addressed.
Sensors 2020, 20(21), 6341; https://doi.org/10.3390/s20216341
Submission received: 21 September 2020 / Revised: 15 October 2020 / Accepted: 3 November 2020 / Published: 6 November 2020
(This article belongs to the Section Sensors and Robotics)

Abstract

Robot positioning accuracy plays an important role in industrial automation applications. In this paper, a method is proposed for the improvement of robot accuracy with an optical tracking system that integrates a least-squares numerical algorithm for the identification of kinematic parameters. In the process of establishing the system kinematics model, the positioning errors of the tool and the robot base, as well as the errors in the Denavit-Hartenberg parameters, are all considered. In addition, the linear dependence among the parameters is analyzed. A numerical simulation based on a 6-axis UR robot is performed to validate the effectiveness of the proposed method. The method is then implemented on the actual robot, and the experimental results show that the robot can reach desired poses with an accuracy of ±0.35 mm for position and ±0.07° for orientation. Benefitting from the optical tracking system, the proposed procedure can be easily automated to improve robot accuracy for applications requiring high positioning accuracy such as riveting, drilling, and precise assembly.

1. Introduction

Industrial robots have been widely applied for manufacturing automation in high-volume production due to their good task repeatability. In a common scenario, a human operator teaches the robot to move to a desired position; the robot records this position and then repeats the taught path to complete the task. However, robot teaching is usually time-consuming for low-volume applications. Although offline programming can significantly reduce the workload of robot teaching, the generated robot paths are based on the robot’s nominal kinematic model; therefore, whether the robot can successfully complete a task via offline programming depends on its absolute accuracy. Robot manufacturers provide the nominal values of the Denavit-Hartenberg (D-H) parameters of the robot. However, the actual values of these parameters can deviate from their nominal values due to errors in manufacturing, assembly, etc., which in turn cause positioning errors at the robot end-effector. As a result, the absolute accuracy of industrial robots is relatively low compared with many other types of manufacturing equipment such as CNC machine tools [1]. Consequently, industrial robots still face challenges in many low-volume applications where high absolute accuracy (with a positioning error of less than 0.50 mm for an industrial robot of medium to large size) is required, such as milling, drilling, and precise assembly.
Kinematic calibration is an important way to improve the absolute accuracy of robots [2]. Two types of calibration methods are available, depending on the measurement method. One is open-loop calibration, in which the absolute position and orientation of the robot end-effector are measured; the other is closed-loop calibration, in which the position and orientation of the end-effector are measured relative to another reference part or gauge.
For open-loop calibration, laser trackers have been adopted as the measurement device with different calibration algorithms. By using a laser tracker, as in [3,4,5,6,7], the absolute positioning accuracy of the robot can reach about 0.10–0.30 mm. Among these algorithms, the least-squares technique is the most often applied; it aims at minimizing the sum of squared residuals [4,5]. In [7], a new kinematic calibration method has been presented using the extended Kalman filter (EKF) and particle filter (PF) algorithms that can significantly improve the positioning accuracy of the robot. Thanks to the principle of data-driven modeling, artificial neural networks (ANNs) have promising applications in modeling complex systems such as calibration [3,6]. Nguyen [3] combined a model-based identification method for the robot geometric errors with an artificial neural network to obtain an effective solution for the correction of robot parameters. In [6], a back-propagation neural network (BPNN) and a particle swarm optimization (PSO) algorithm have been employed for the kinematic parameter identification of industrial robots with an enhanced convergence response. Coordinate measuring machines (CMMs) have also been used in open-loop robot calibration [8,9,10]. For example, Lightcap [10] determined the geometric and flexibility parameters of robots to achieve a significant reduction of systematic positioning errors. In [11], an optical CMM and a laser tracker have been combined to calibrate an ABB IRB 120 industrial robot, so that the mean and maximum position errors were reduced from more than 3.00 mm and 5.00 mm to about 0.15 mm and 0.50 mm, respectively.
For closed-loop calibration, the calibration models are established by incorporating different types of kinematic constraints induced by extra reference parts or gauges. By using gauges, as in [12,13,14], the absolute positioning accuracy of the robot can also reach about 0.10–0.30 mm. He [12] used point constraints to improve the accuracy of six-axis industrial robots; the robot parameters are calibrated by controlling the robot to reach the same location in different poses. In [13], a non-kinematic calibration method has been developed to improve the accuracy of a six-axis serial robot, using a linear optimization model based on the closed-loop calibration approach with multiple planar constraints. Joubair [14] presented a kinematic calibration method using distance and sphere constraints that effectively improves the positioning accuracy of the robot. In another category of closed-loop calibration methods, the kinematic constraints are induced by vision systems [15,16,17,18]. For example, Du [15] developed a vision-based robot calibration method that only requires several reference images, with the mean positioning error reduced from 5–7 mm to less than 2 mm. Zhang [18] proposed a stereo-vision-based self-calibration procedure with the maximum position and orientation errors reduced to less than 2.50 mm and 3.50°, respectively.
The calibration procedure is usually not specific to a certain task: it does not account for the influence of the robot end-effector, and it is difficult to correct the errors due to structural elastic deformation and other factors. Other research works have therefore focused on online positioning error compensation, i.e., correcting the positioning error directly with the integration of an external metrology system. Online positioning error compensation is usually task-oriented and does not require a precise kinematics model [19,20,21,22,23,24,25,26,27,28,29]. For example, Jiang [20] proposed an online iterative compensation method, combined with a feed-forward compensation method, to enhance the assembly accuracy of a robot system (MIRS) with the integration of a 6-DoF measurement system (T-Mac) to track the real-time robot movement. In [21], an online compensation method has been presented based on a laser tracker to increase robot accuracy without precise calibration. Yin [22] developed a real-time dynamic thermal error compensation method for a robotic visual inspection system; the method is designed to be applied on the production line and corrects the thermal error during the robotic system operation. In [23], an embedded real-time algorithm has been presented for the compensation of the lateral tool deviation in a robotic friction stir welding system. Shu [26] presented a dynamic path tracking scheme that can realize automatic preplanned tasks and improve the tracking accuracy with eye-to-hand photogrammetry measurement feedback. A dynamic pose correction scheme has been proposed by Gharaaty [27], which adopts a PID controller and generates commands to the FANUC robot controller. In [28,29], the authors adopted iterative learning control (ILC) to improve the tracking performance of an industrial robot.
In summary, different offline calibration and online compensation methods to improve the absolute accuracy of robots have been reported in the literature. However, to achieve high precision (0.30–0.50 mm or better), these methods have some drawbacks. For open-loop calibration, the methods usually require external metrology systems such as laser trackers and CMMs that are costly and lack flexibility for production automation. For closed-loop calibration, using gauges requires manual teaching and is difficult to automate, which affects the calibration efficiency; using vision systems can be quite flexible for autonomous calibration, but the positioning error can be as high as a few millimeters, limited by the precision of the vision systems. For online compensation, most methods adopt a laser tracker as the external metrology system with a laser target mounted on the end-effector, which is costly and can be affected by the visibility of the laser target. Moreover, most of the above methods do not take into account the influence of the tool on positioning accuracy. For these reasons, improving absolute accuracy remains a bottleneck for robotic applications in low-volume, high-precision tasks.
To overcome the above issues, we propose an approach to correct the values of the kinematic parameters through the measurement of the actual end-effector poses via an optical tracking system. This approach integrates an optical tracking system and a rigid-body marker (with multiple marker targets) mounted on the end-effector for online compensation of the robot positioning error. The optical tracking system enables online measurement of the 6-DOF motion of the robot end-effector. In the parameter identification process, the influence of the tool, the positioning error of the robot base, and the errors in the D-H parameters are all considered. A least-squares numerical algorithm is then used to correct the errors in these kinematic parameters of the robot. The cost of an optical tracking system is significantly lower than that of a laser tracking system. Also, the optical tracking system can simultaneously measure the poses of multiple markers attached to the measured object, which brings good visibility of the object and flexibility for integration. Compared with closed-loop calibration with gauges, the optical tracking system is convenient and easy to automate, improving the calibration efficiency. Compared with online compensation with laser trackers, this method is more cost-effective and provides better visibility of the robot tools. In summary, the contribution of this work is a comprehensive method that corrects not only the errors in the D-H parameters but also the positioning errors in the base and tool frames of the kinematic model, which, by adopting the optical tracking system, can lead to efficient and automatic calibration of the robotic system for robot users.
In the rest of the paper, the system setup is described in Section 2; the detailed theoretical development is presented in Section 3 and Section 4 for the robot kinematics and the calibration of the kinematic parameters, respectively; and the simulation and experimental results are given in Section 5 and Section 6, respectively.

2. System Description

2.1. System Setup

As illustrated in Figure 1, the system under study is composed of a serial robot, a tool mounted on the robot, and an optical tracking system. The robot can be programmed to drive the tool to move along a given path. The pose (position and orientation) of the tool can be obtained through the optical tracking system.
More specifically, a Camsense S optical tracking system (Camsense, Shenzhen, China) is adopted in this system. It is a high-precision dynamic tracking device customized for precision robotic applications, which can localize the 3-DOF positions of single-point markers and the 6-DOF poses (position and orientation) of rigid-body markers moving in space in real time. The optical tracking system has been calibrated by measuring an object mounted on a CMM. The measuring range of this optical tracking system is 2 to 5 m, as drawn in Figure 2. The positioning accuracies in the measuring ranges of 2 to 3 m, 3 to 4 m and 4 to 5 m are 0.08 mm, 0.14 mm and 0.21 mm, respectively. The optical tracking system uses two infrared CCD cameras working at a frame rate of 40 Hz. The cameras and the markers are synchronized through a 2.4 GHz wireless network, which enables the markers to work in pulse mode and further increases the signal-to-noise ratio. The markers are wireless and contain multiple active light sources with infrared diodes emitting light at prescribed frequencies; thus they are easily identifiable and insensitive to external light conditions. A power source is integrated with each marker, and wireless communication can be established between the marker and the controller of the optical tracking system. Moreover, the markers are light and compact; for example, each rigid-body marker has a mass of about 400 g, a diameter of 136 mm and a height of 16.5 mm.
Benefitting from the fact that the markers of the optical tracking system are light, compact and equipped with an internal power source, these markers can be easily attached on the robot end-effector during the measurement. By this means, the influence of the end-effector on the robot accuracy can be taken into account using this approach. Also, the optical tracking system can simultaneously measure the poses of multiple markers attached on the end-effector to guarantee the visibility of the end-effector by the tracking system.
Compared with laser tracking systems, optical tracking systems provide a smaller measurement volume and are slightly less accurate, but they are much less expensive and more flexible for automation. The optical tracking system can already provide sufficient accuracy for the machining and assembly tasks targeted in this work, which have a positioning accuracy requirement of about 0.30–0.50 mm. Moreover, since the repeatability of a medium to large-size industrial robot for the targeted tasks is about 0.10–0.20 mm, it would be difficult to improve the robot accuracy beyond its repeatability by calibration, even with a laser tracking system.

2.2. System Integration

The integration of the optical tracking system with the UR robot used in this work is illustrated in Figure 3. Through the TCP/IP communication protocol, a computer running the robot operating system (ROS) can communicate with the robot controller and send commands to move the robot in a position control mode. When the robot moves to a series of calibration points, the position and orientation of the rigid-body marker mounted on the end-effector are measured by the optical tracking system at each point, and the robot joint angles are recorded simultaneously. These data are then used to calculate the actual kinematic parameters of the robotic system. Finally, these actual parameters can be output and used to compute the robot kinematics for controlling the robot motion. The above procedure can be completed without human intervention, thus making autonomous calibration feasible with the tool mounted on the robot; a sketch of the data-collection loop is given below.
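To make the procedure concrete, the following is a minimal, hypothetical sketch of the automated data-collection loop described above. The interfaces robot.move_to_joints(), robot.get_joint_angles() and tracker.read_marker_pose() are placeholders for the TCP/IP robot command channel and the optical tracking system client, not a real API.

```python
# Hypothetical sketch of the calibration data-collection loop; the robot and
# tracker client objects are placeholders, not a real API.
import time

def collect_calibration_data(robot, tracker, joint_targets, settle_time=1.0):
    samples = []
    for q_target in joint_targets:
        robot.move_to_joints(q_target)       # position-control command over TCP/IP
        time.sleep(settle_time)              # let the arm settle before measuring
        q = robot.get_joint_angles()         # joint angles recorded at this point
        pose = tracker.read_marker_pose()    # 6-DOF rigid-body marker pose
        samples.append((q, pose))
    return samples                           # inputs to the identification step
```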

3. Kinematic Modeling

3.1. Errors in the D-H Parameters

The kinematic scheme of a typical serial robot with m links and m revolute joints is plotted in Figure 4. The position vector of the ith joint in the base frame $\{O_B X_B Y_B Z_B\}$ is denoted by $\mathbf{t}_i$. A local frame $\{o_i x_i y_i z_i\}$ is established on each link with its origin at the ith joint. The tip of the tool mounted on the last link is denoted as E, with its position vector in the base frame written as $\mathbf{t}_E$.
The homogeneous matrix of the ith body frame $\{o_i x_i y_i z_i\}$ relative to the (i−1)th body frame can be given as follows:
$$ {}^{i-1}_{\;\;i}T = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{bmatrix}, $$
where $a_i$, $d_i$, $\alpha_i$ and $\theta_i$ are the D-H parameters of the ith link as described in Figure 5, in which $a_i$ is the link length, $d_i$ is the link offset, $\alpha_i$ is the link twist angle, and $\theta_i$ is the joint angle. Then the homogeneous matrix of the end-effector frame $\{O_E X_E Y_E Z_E\}$ relative to the robot base frame $\{O_B X_B Y_B Z_B\}$ can be calculated as:
$$ {}^{B}_{E}T_n = g(\mathbf{x}_m, \mathbf{p}_n), $$
where the nonlinear function $g$ represents the robotic forward kinematics, subscript $n$ denotes the nominal value, and $\mathbf{x}_m \in \mathbb{R}^{m\times 1}$ is the vector of joint variables calculated with the nominal D-H parameters $\mathbf{p}_n$, i.e.:
$$ \mathbf{p}_n = \begin{bmatrix} \alpha_1 & a_1 & d_1 & \theta_1 & \cdots & \alpha_m & a_m & d_m & \theta_m \end{bmatrix}^T. $$
Vector $\mathbf{p}_n$ contains the $4m$ nominal D-H parameters, where $m$ is the number of links. For a 6-axis industrial robot, $\mathbf{p}_n \in \mathbb{R}^{24\times 1}$ contains 24 parameters in total.
If we consider the errors in the D-H parameters, then the forward kinematics of the robot can be re-written as:
$$ {}^{B}_{E}T = g(\mathbf{x}_m, \mathbf{p}_n + \Delta\mathbf{p}), \qquad \mathbf{p}_a = \mathbf{p}_n + \Delta\mathbf{p}. $$
Vector $\Delta\mathbf{p}$ contains the errors in the D-H parameters, and $\mathbf{p}_a$ contains the actual D-H parameters. The errors in the D-H parameters can be caused by many factors; for example, errors in the joint encoders affect the values of the $\theta_i$ parameters in the D-H description.
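As an illustration, a minimal sketch of Equations (1) and (2) in Python/NumPy is given below; the function and variable names are ours, and the joint variable is assumed to add to the nominal θ offset of each link.

```python
import numpy as np

def dh_transform(alpha, a, d, theta):
    """Homogeneous transform of frame i relative to frame i-1, Eq. (1)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joints, dh_params):
    """End-effector pose relative to the base, Eq. (2).

    dh_params holds one row (alpha_i, a_i, d_i, theta_i) per link; the joint
    variable q is added to the nominal theta offset of each link.
    """
    T = np.eye(4)
    for q, (alpha, a, d, theta) in zip(joints, dh_params):
        T = T @ dh_transform(alpha, a, d, theta + q)
    return T
```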

3.2. Errors in the Marker Frame

In order to determine the pose relationship of the local frame fixed on the rigid-body marker relative to the end-effector, the nominal pose of the marker frame with respect to the end-effector frame is written as:
$$ {}^{E}_{R}T_n = \begin{bmatrix} {}^{E}_{R}R_n & {}^{E}_{R}\mathbf{t}_n \\ \mathbf{0} & 1 \end{bmatrix}, $$
where ${}^{E}_{R}R_n$ and ${}^{E}_{R}\mathbf{t}_n$ represent the nominal rotation matrix and nominal position vector of the rigid-body marker frame relative to the end-effector tool frame, respectively. In reality, the actual marker frame can be inconsistent with the nominal marker frame ${}^{E}_{R}T_n$. We can use $\Delta{}^{E}_{R}T$ to represent the transformation between the actual and nominal frames, i.e.:
$$ {}^{E}_{R}T = {}^{E}_{R}T_n \, \Delta{}^{E}_{R}T, $$
$$ \Delta{}^{E}_{R}T = \mathrm{rot}_x(\Delta\alpha_{ER})\,\mathrm{rot}_y(\Delta\beta_{ER})\,\mathrm{rot}_z(\Delta\gamma_{ER})\,\mathrm{trans}(\Delta x_{ER}, \Delta y_{ER}, \Delta z_{ER}), $$
where $\Delta\alpha_{ER}$, $\Delta\beta_{ER}$ and $\Delta\gamma_{ER}$ represent the errors in the rotation angles of the marker frame about the x, y and z axes of the end-effector frame, and $\Delta x_{ER}$, $\Delta y_{ER}$ and $\Delta z_{ER}$ represent the errors in the origin coordinates of the marker frame with respect to the end-effector frame.

3.3. Errors in the Base Frame

The nominal pose of the robot base frame with respect to the measurement frame cannot be obtained directly. However, the nominal D-H parameters of the robot are known, and the nominal pose of the marker frame with respect to the end-effector is also known. For a given robot configuration, the pose of the marker frame with respect to the measurement frame can be measured by the optical tracking system within the measurement range. Therefore, the nominal pose of the robot base frame with respect to the measurement frame can be obtained as follows:
$$ {}^{M}_{B}T_n = {}^{M}_{R}T \left( {}^{B}_{E}T_n \, {}^{E}_{R}T_n \right)^{-1}. $$
Similarly, in order to determine the pose relationship of the actual robot base frame relative to the measurement frame, the actual pose of the base frame with respect to the measurement frame is written as:
$$ {}^{M}_{B}T = {}^{M}_{B}T_n \, \Delta{}^{M}_{B}T, $$
$$ \Delta{}^{M}_{B}T = \mathrm{rot}_x(\Delta\alpha_{MB})\,\mathrm{rot}_y(\Delta\beta_{MB})\,\mathrm{rot}_z(\Delta\gamma_{MB})\,\mathrm{trans}(\Delta x_{MB}, \Delta y_{MB}, \Delta z_{MB}), $$
where $\Delta\alpha_{MB}$, $\Delta\beta_{MB}$ and $\Delta\gamma_{MB}$ represent the errors in the rotation angles of the base frame about the x, y and z axes of the measurement frame, and $\Delta x_{MB}$, $\Delta y_{MB}$ and $\Delta z_{MB}$ represent the errors in the origin coordinates of the base frame with respect to the measurement frame.
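A small sketch of the error transforms of Equations (7) and (10), written with elementary rotation and translation matrices, may clarify the construction; the helper names are ours.

```python
import numpy as np

def rotx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def roty(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])

def rotz(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

def trans(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def error_transform(da, db, dg, dx, dy, dz):
    """Delta T = rotx(da) roty(db) rotz(dg) trans(dx, dy, dz), Eqs. (7)/(10)."""
    return rotx(da) @ roty(db) @ rotz(dg) @ trans(dx, dy, dz)
```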

3.4. Errors of the System

In order to clearly show the systematic error, we establish the coordinate frames for robot calibration shown in Figure 6, where $\{O_M X_M Y_M Z_M\}$ represents the measurement frame fixed on the optical tracking system, $\{O_{B,n} X_{B,n} Y_{B,n} Z_{B,n}\}$ represents the nominal robot base frame, $\{O_B X_B Y_B Z_B\}$ is the actual robot base frame, $\{O_{E,n} X_{E,n} Y_{E,n} Z_{E,n}\}$ is the end-effector frame obtained with the nominal D-H parameters, $\{O_E X_E Y_E Z_E\}$ is the actual end-effector frame of the robot, $\{O_{R,n} X_{R,n} Y_{R,n} Z_{R,n}\}$ represents the nominal frame fixed on the rigid-body marker, and $\{O_R X_R Y_R Z_R\}$ represents the actual marker frame. Introducing the error of each part into the system according to the above relationships, we obtain:
$$ {}^{M}_{R}T = {}^{M}_{B}T_n \, \Delta{}^{M}_{B}T \, g(\mathbf{x}_m, \mathbf{p}_n + \Delta\mathbf{p}) \, {}^{E}_{R}T_n \, \Delta{}^{E}_{R}T. $$
Not only the errors in the D-H parameters but also the positioning errors of the robot base and the end-effector tool are considered in the calibration model in Equation (11).
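A one-line sketch of the composition in Equation (11) follows, assuming the forward-kinematics and error-transform helpers sketched above, with p_n stored as an (m, 4) array of nominal D-H rows:

```python
def marker_pose(T_MB_n, dT_MB, forward_kinematics, joints, p_n, dp,
                T_ER_n, dT_ER):
    """T_MR = T_MB_n dT_MB g(x, p_n + dp) T_ER_n dT_ER, Eq. (11).

    All transform arguments are 4x4 NumPy arrays; p_n and dp are (m, 4)
    arrays of D-H parameters and their errors.
    """
    return T_MB_n @ dT_MB @ forward_kinematics(joints, p_n + dp) @ T_ER_n @ dT_ER
```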

4. Kinematic Parameter Identification

According to Equation (11), the pose of the marker frame relative to the measurement frame is a function of the kinematic parameters:
$$ \mathbf{y}_{MR} = f(x_{MB}, y_{MB}, z_{MB}, \alpha_{MB}, \beta_{MB}, \gamma_{MB}, \mathbf{p}_n, x_{ER}, y_{ER}, z_{ER}, \alpha_{ER}, \beta_{ER}, \gamma_{ER}), $$
where $\mathbf{y}_{MR} = [x_{MR}, y_{MR}, z_{MR}, \alpha_{MR}, \beta_{MR}, \gamma_{MR}]^T$ represents the position and orientation of the marker frame relative to the measurement frame. Applying a small disturbance to both sides of Equation (12), the model can be linearized as:
$$ \Delta\mathbf{y}_{MR} = J\,\Delta\mathbf{e}, $$
$$ \Delta\mathbf{e} = [\Delta x_{MB}, \Delta y_{MB}, \Delta z_{MB}, \Delta\alpha_{MB}, \Delta\beta_{MB}, \Delta\gamma_{MB}, \Delta\mathbf{p}^T, \Delta x_{ER}, \Delta y_{ER}, \Delta z_{ER}, \Delta\alpha_{ER}, \Delta\beta_{ER}, \Delta\gamma_{ER}]^T, $$
where $\Delta\mathbf{y}_{MR} \in \mathbb{R}^{6\times 1}$ represents the disturbance in the pose of the marker frame, $\Delta\mathbf{e} \in \mathbb{R}^{36\times 1}$ represents the disturbance in each parameter, and $J$ is the $6\times 36$ Jacobian matrix of the nonlinear kinematic model of Equation (12), defined as follows:
$$ J = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_{MB}} & \cdots & \dfrac{\partial f_1}{\partial \gamma_{ER}} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_6}{\partial x_{MB}} & \cdots & \dfrac{\partial f_6}{\partial \gamma_{ER}} \end{bmatrix}. $$
This matrix can be numerically calculated by finite difference. For example, its element on the first row and the first column can be obtained as:
$$ J_{11} = \frac{\partial f_1}{\partial x_{MB}} \approx \frac{f_1(x_{MB} + \Delta x_{MB}) - f_1(x_{MB})}{\Delta x_{MB}}, $$
where $\Delta x_{MB}$ is a small disturbance, for which we use a value of $1\times10^{-6}$ in this work. We can compare the column vectors of the Jacobian matrix $J$: if two column vectors are identical, the errors in the two corresponding kinematic parameters have an identical effect on the error in the end-effector pose.
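The following is a minimal sketch of the finite-difference Jacobian of Equations (15) and (16), together with the column comparison used to detect linearly dependent parameters; f stands for any callable mapping the parameter vector to the 6-vector pose, and the function names are ours.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian, Eq. (16), built column by column."""
    y0 = np.asarray(f(x), dtype=float)
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps                          # small disturbance of parameter j
        J[:, j] = (np.asarray(f(xp)) - y0) / eps
    return J

def identical_column_pairs(J, tol=1e-9):
    """Pairs of columns with identical effect on the end-effector pose."""
    pairs = []
    for a in range(J.shape[1]):
        for b in range(a + 1, J.shape[1]):
            if np.linalg.norm(J[:, a] - J[:, b]) < tol:
                pairs.append((a, b))
    return pairs
```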
According to Equation (11), matrix $\Delta{}^{M}_{B}T$ is generated from the 6 variables $\Delta\mathbf{y}_{MB} = [\Delta x_{MB}, \Delta y_{MB}, \Delta z_{MB}, \Delta\alpha_{MB}, \Delta\beta_{MB}, \Delta\gamma_{MB}]$, $\Delta{}^{E}_{R}T$ is generated from the 6 variables $\Delta\mathbf{y}_{ER} = [\Delta x_{ER}, \Delta y_{ER}, \Delta z_{ER}, \Delta\alpha_{ER}, \Delta\beta_{ER}, \Delta\gamma_{ER}]$, and vector $\Delta\mathbf{p}$ is generated from the $4m$ variables representing the errors in the D-H parameter values. In this paper, multiple measurement data points are sampled for the pose of the marker frame relative to the measurement frame, and the robot joint angles are recorded simultaneously for each point. For each point, we have:
$$ \mathbf{y}_{MR,k} = f(\Delta\mathbf{y}_{MB}, \Delta\mathbf{p}, \Delta\mathbf{y}_{ER}, \mathbf{x}_{m,k}), $$
where subscript k denotes the kth sample point, f is the forward kinematics of the calibration system, and x m , k denotes the robot joint angles. In this study, a least-square numerical algorithm is applied to solve for the errors in the kinematic parameters, so the objective equation can be established as:
$$ (\Delta\mathbf{y}_{MB}, \Delta\mathbf{p}, \Delta\mathbf{y}_{ER}) = \arg\min_{\Delta\mathbf{y}_{MB},\,\Delta\mathbf{p},\,\Delta\mathbf{y}_{ER}} \sum_{k} \left\| f(\Delta\mathbf{y}_{MB}, \Delta\mathbf{p}, \Delta\mathbf{y}_{ER}, \mathbf{x}_{m,k}) - \mathbf{y}_{MR,k} \right\|^2, $$
where $\|\cdot\|$ represents the 2-norm. In the optimization process, we use the fsolve function in MATLAB with the Levenberg-Marquardt algorithm for a numerical solution. Since the errors in the system kinematic parameters are small, we set the initial values of all the kinematic parameter errors to zero; the maximum number of iterations is set to 500; and the function tolerance and variable tolerance are both set to $1\times10^{-6}$. The optimal solution is obtained as the set of system kinematic parameter errors.
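As a self-contained illustration of the least-squares identification in Equation (18), the sketch below recovers prescribed parameter errors for a simplified 2-link planar arm. The paper itself uses MATLAB's fsolve with the Levenberg-Marquardt algorithm; SciPy's least_squares(method='lm') plays the same role in this sketch, and the toy model and names are ours.

```python
import numpy as np
from scipy.optimize import least_squares

def tip_position(q, lengths):
    """Planar forward kinematics: tip of a 2-link arm."""
    l1, l2 = lengths
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

nominal = np.array([400.0, 300.0])        # nominal link lengths (mm)
actual = nominal + np.array([0.8, -0.5])  # "unknown" true errors to recover

rng = np.random.default_rng(1)
joints = rng.uniform(-np.pi, np.pi, size=(30, 2))   # 30 sampled configurations
measured = np.array([tip_position(q, actual) for q in joints])

def residuals(dp):
    # Stack f(x_mk; p_n + dp) - y_MR,k over all sample points, as in Eq. (18).
    return np.concatenate(
        [tip_position(q, nominal + dp) - y for q, y in zip(joints, measured)])

sol = least_squares(residuals, x0=np.zeros(2), method='lm',
                    xtol=1e-6, ftol=1e-6, max_nfev=500)
print(sol.x)   # ~[0.8, -0.5]: the recovered parameter errors
```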
In summary, if we define a trajectory in the robot base frame, the specific process for the robot to accurately track the trajectory is shown in Figure 7. After establishing the kinematics model, we can select a series of robot configurations to calibrate the robot. In order to implement our method on a robot, the direct and inverse kinematic problems should be solved on its controller using the corrected values of the kinematic parameters. By using the optical tracking system to measure the pose errors, we can identify the kinematic parameters of the robot. According to the nominal parameter values, we can solve the joint angles corresponding to the target trajectory; substituting these joint angles into the calibrated model, we can then accurately reach the target position.
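One option for the inverse kinematics with corrected parameters is a numerical solution; below is a sketch, under the assumption that forward_kinematics() is the D-H model outlined in Section 3.1 and that the orientation residual is taken from the skew-symmetric part of the rotation difference. This is an illustrative scheme, not the controller implementation used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_ik(forward_kinematics, p_corrected, target_pose, q0):
    """Joint angles that drive the corrected model to a target 4x4 pose."""
    def residual(q):
        T = forward_kinematics(q, p_corrected)
        dR = T[:3, :3].T @ target_pose[:3, :3]
        rot_err = np.array([dR[2, 1] - dR[1, 2],     # vanishes when the
                            dR[0, 2] - dR[2, 0],     # orientations coincide
                            dR[1, 0] - dR[0, 1]])
        pos_err = T[:3, 3] - target_pose[:3, 3]
        return np.concatenate([pos_err, rot_err])
    return least_squares(residual, q0, method='lm').x
```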

5. Simulation Study

5.1. Nominal Values

A simulation study is carried out to verify the above theoretical model. The simulated system is drawn in Figure 1. We define the nominal pose of the base frame relative to the measurement frame as:
$$ {}^{M}_{B}T_n = \begin{bmatrix} 1 & 0 & 0 & 2000.00\ \text{mm} \\ 0 & 1 & 0 & 2000.00\ \text{mm} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}. $$
The nominal pose of the marker frame relative to the end-effector is defined as:
$$ {}^{E}_{R}T_n = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 100.00\ \text{mm} \\ 0 & 0 & 0 & 1 \end{bmatrix}. $$
In this simulation, we use a UR10 robot as an example. The nominal D-H parameters of the robot are shown in Table 1.

5.2. Prescribed Errors in the Kinematic Parameters

In this simulation case, we define the errors in the base frame relative to the measurement frame in Table 2. We define the errors in the D-H parameters in Table 3. Also, we define the errors in the marker frame relative to the end-effector in Table 4.

5.3. Simulation Result

In the calibration process, the position and orientation of the rigid-body marker are measured by the optical tracking system. In order to make the simulation closer to reality, we introduce random measurement errors into the measurement results: the position errors in the X, Y and Z directions are between −0.10 mm and +0.10 mm, and the orientation errors about the X, Y and Z axes are between −0.10° and +0.10°. A total of 30 sets of data are selected, with the joint angles shown in Appendix A. The corresponding actual poses of the marker frame with respect to the measurement frame are given in Appendix B. Each data point yields six equations; therefore, the errors in the required kinematic parameters can be obtained numerically. The pose errors of the base frame relative to the measurement frame are shown in Table 5, the errors in the D-H parameters in Table 6, and the pose errors of the marker frame relative to the end-effector in Table 7. The parameter errors after calibration can then be compared with the parameter errors given in Table 2, Table 3 and Table 4.
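A sketch of the measurement-noise injection used in the simulation follows, assuming poses stored as [x, y, z, α, β, γ] in millimetres and degrees; the function name is ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_measurement_noise(pose):
    """Uniform noise: +/-0.10 mm on position, +/-0.10 deg on orientation."""
    noise = np.concatenate([rng.uniform(-0.10, 0.10, 3),
                            rng.uniform(-0.10, 0.10, 3)])
    return np.asarray(pose, dtype=float) + noise
```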
The comparison is shown in Table 8, with the given parameter errors in the second column and the calculated parameter errors in the third column. It can be seen that the kinematic errors calculated from the theoretical model reasonably match the errors given in Section 5.2, which shows the effectiveness of the proposed method. A further investigation of the column vectors of the Jacobian matrix shows that some kinematic parameter errors cause identical effects on the end-effector pose. For instance, the third and the ninth column vectors of matrix $J$ are identical, which indicates that the kinematic parameter errors $\Delta z_{MB}$ and $\Delta d_1$ have an identical effect on the positioning error of the end-effector, since the $z_{MB}$ axis is parallel to the axis of the first joint. Such dependencies are related to the geometric characteristics of the UR10 robotic system. For these parameters, the calculated errors after calibration are not guaranteed to be close to their given errors individually, but the sum of the calculated errors is close to the sum of the given errors.
We proceed to demonstrate the improvement of the robot absolute accuracy in the simulation. According to the joint angles of the robot at the 30 data points, the positions of the marker frame relative to the measurement frame can be obtained from the forward kinematics with the nominal parameter values given in Section 5.1, which are called the nominal positions. After calibration, the positions of the marker frame relative to the measurement frame can be calculated by the corrected parameters, which are called the corrected positions. The actual positions of the marker frame relative to the measurement frame can be obtained from the forward kinematics with the actual parameter values given in Section 5.2, which are called the actual positions. The nominal positions and corrected positions are compared with the actual positions in Figure 8. It can be observed that the corrected positions are closer to the actual positions than the nominal positions. The differences between the nominal/corrected pose of the marker and the actual poses are displayed in Figure 9. It can be seen that the positioning accuracy of the robot is obviously improved after calibration.
The average error of marker poses at the 30 data points in each direction is used as the evaluation index, i.e.,:
$$ \Delta\bar{\mathbf{y}}_{MR} = \frac{1}{30}\sum_{k=1}^{30} \left| \mathbf{y}_{MR,k} - \mathbf{y}_{MR,k,n} \right|. $$
Before calibration, the average error of the marker poses in each direction is shown in Table 9. After calibration, the average error of the marker poses in each direction is shown in Table 10. It is obvious that the error is significantly reduced in each direction.
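The evaluation index of Equation (19) amounts to a per-direction mean absolute error; a short sketch, assuming the measured and nominal poses are stacked as 30-by-6 arrays:

```python
import numpy as np

def mean_abs_pose_error(y_measured, y_nominal):
    """Eq. (19): average absolute error per direction over the 30 points."""
    return np.mean(np.abs(np.asarray(y_measured) - np.asarray(y_nominal)), axis=0)
```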
We have also carried out a series of numerical simulation studies with different amounts of error in the D-H parameters. For the length variables, we set the error to vary from −1 to +1 mm with a step of 0.05 mm; for the angle variables, we set the error to vary from −1° to +1° with a step of 0.05°. For all the cases, the optimization process converges and recovers the predefined error values.

6. Experimental Demonstration

6.1. Experimental Setup

An experimental study is presented to demonstrate the validity of the proposed approach for pin-hole insertion at multiple points. It is expected that after the robot is taught to insert the pin at a limited number of points, it can complete the insertion automatically and smoothly at all the points. We will show that with the nominal kinematic parameters it is difficult for the robot to complete the task smoothly, while with the corrected parameters the insertion is performed much more smoothly.
As shown in Figure 10, the experimental setup consists of a six-DOF UR10 robot fixed on a workbench, an optical tracking system fixed on the ground, a rigid marker, an adaptor plate, and a tool mounted on the robot. Also, an aluminum rod with a diameter of 20.00 mm and an aluminum plate with a length of 800.00 mm, a width of 600.00 mm and 24 holes are machined for the assembly task. The rod and the plate are shown in Figure 10.

6.2. Calibration Results

It is preferable to move the robot throughout the workspace and within the measurement range during the calibration process. For this optical tracking system, the measuring range is 2 to 5 m with a nominal volumetric positioning accuracy of 0.20 mm. As mentioned in Section 2, the measurement accuracy in the range of 2 to 3 m is better than that in the range of 3 to 5 m; therefore, the measurement distance of the experimental platform is kept within 2 to 3 m. We have carried out a series of other tests in which the poses of the marker were measured at different data points. In these tests, the number of data points varied from 30 to 40, and very similar calibration results were found. Using the above method, the pose of the base frame relative to the measurement frame is initially determined as:
$$ {}^{M}_{B}T_n = \begin{bmatrix} 0.9996 & 0.0123 & 0.0231 & 42.46\ \text{mm} \\ 0.0229 & 0.0165 & -0.9996 & 341.45\ \text{mm} \\ -0.0127 & 0.9998 & 0.0162 & -3739.49\ \text{mm} \\ 0 & 0 & 0 & 1 \end{bmatrix}. $$
Also, the pose relationship between the marker frame and the end-effector can be initially determined from the CAD model, i.e.:
$$ {}^{E}_{R}T_n = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 98.50\ \text{mm} \\ 0 & -1 & 0 & 103.50\ \text{mm} \\ 0 & 0 & 0 & 1 \end{bmatrix}. $$
During the calibration process, we recorded the joint angles of the robot at 30 data points as shown in Appendix C. The poses of the marker relative to the measurement frame are obtained by the optical tracking system and listed in Appendix D. By using the proposed calibration method, the pose error of the base frame relative to the measurement frame is calculated and shown in Table 11. The D-H parameter error obtained by the solution is shown in Table 12. The pose error of the marker frame relative to the end-effector frame is shown in Table 13.
The nominal positions (calculated from the forward kinematics with the nominal kinematic parameters) and the corrected positions (calculated from the forward kinematics with the corrected kinematic parameters) are compared with the actual positions (measured by the optical tracking system) in Figure 11. It can be observed that the corrected positions are closer to the actual positions than the nominal positions. The pose errors before and after calibration are shown in Figure 12 for the calibration points. The average error at these points in each direction before and after calibration is shown in Table 14 and Table 15, respectively. It is seen that the positioning accuracy of the robot is dramatically improved in each direction at these calibration points. For example, the position error is reduced from 2.00–3.00 mm to less than 0.20 mm.
Furthermore, to verify the calibrated parameters, we randomly sampled 10 points distinct from the calibration points and measured the poses of the marker at these random points. The joint angles for these points are shown in Appendix E, and the measured poses of the marker relative to the measurement frame are shown in Appendix F. Substituting the joint angles of the robot into the forward kinematics of the calibration system of Equation (11) with the nominal and corrected parameters, we can obtain the pose of the marker frame relative to the measurement frame before and after calibration. The pose errors before and after calibration are compared in Figure 13 for these points. The average error of the marker pose in each direction before and after calibration is listed in Table 16 and Table 17, respectively. Again, we observe that the positioning accuracy of the robot is dramatically improved in each direction at these randomly sampled points. With the optical tracking system, the mean value of the position errors of the UR10 robot at the sampled points is improved to about 0.348 mm with a standard deviation of about 0.096 mm, and the mean value of the angular errors is improved to about 0.070° with a standard deviation of about 0.024°. The resulting position accuracy is compared with the results obtained in some previous works on robot calibration in Table 18. It is observed that the position accuracy obtained using the proposed method is slightly lower than or similar to the accuracy after open-loop calibration with a laser tracking system and CMM [4,7,11] and after closed-loop calibration with probes and gauges [13], and it is significantly higher than the accuracy after calibration with monocular and stereo vision cameras [15,18]. Note that the angular errors of the robots after calibration were not provided in these references. The robot accuracy improvement using the proposed method is limited by the measurement accuracy of the optical tracking system mentioned in Section 2.1, and it is also affected by the intrinsic characteristics of the robot, such as the errors induced by non-kinematic factors like structural deformations. However, the proposed method can already provide sufficient accuracy for many tasks targeted by this work with an accuracy requirement of about 0.30–0.50 mm. Moreover, benefiting from the compact size, light weight and good visibility of the markers of the optical tracking system, the proposed method makes it possible to correct the kinematic model of a whole robotic system with a specific tool mounted on the robot without human intervention, which is convenient and easy to automate.

6.3. Insertion Results

We proceed to perform the insertion task with the robot. In the experiment, the robot is programmed to perform multiple pin-hole insertion tasks in a position control mode. If the robot positioning error is small, the pin can be well aligned with the holes and successfully inserted. The holes on the aluminum plate are inspected to have a diameter of 20.15 mm, and the rod a diameter of 19.85 mm. To insert the aluminum rod into the holes, it is first necessary to determine the relationship between the base frame and the plate. For this purpose, three holes (not on the same line) are selected on the aluminum plate. The robot is then taught to insert the pin into these holes using a traditional teaching method, from which we obtain the positions of the teaching points relative to the base frame. As shown in Figure 14, in this experiment we select points H11, H41 and H16 for teaching the robot. After teaching, we can obtain the positions t11, t41 and t16 relative to the robot base through the forward kinematics with the joint angles recorded at these points. The position of the hole in the ith row and jth column of the plate relative to the base can then be obtained from these three points, i.e.:
$$ \mathbf{t}_{ij} = \mathbf{t}_{11} + \frac{\mathbf{t}_{16} - \mathbf{t}_{11}}{5}(j-1) + \frac{\mathbf{t}_{41} - \mathbf{t}_{11}}{3}(i-1). $$
When the robot reaches other positions, we maintain the same orientation for the end-effector as at the first teaching point H11. By performing inverse kinematics, the joint angles for the robot moving to each position can be obtained.
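A sketch of Equation (21) follows, with i = 1..4 indexing rows and j = 1..6 indexing columns of the hole grid, assuming the three taught positions are given as 3-vectors; the function name is ours.

```python
import numpy as np

def hole_position(i, j, t11, t16, t41):
    """Eq. (21): interpolate hole (i, j) from the three taught corner points."""
    t11, t16, t41 = (np.asarray(t, dtype=float) for t in (t11, t16, t41))
    return t11 + (t16 - t11) / 5.0 * (j - 1) + (t41 - t11) / 3.0 * (i - 1)
```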
It is found that the pin can be smoothly inserted into each hole after calibration. However, with the nominal D-H parameters, the insertion is not smooth and fails at some points. For example, a close observation at point H44 before and after calibration is shown in Figure 15: we can observe considerable misalignment between the tip of the pin and the centerline of the hole before calibration. To demonstrate the influence of the positioning error correction on the actual robot motion, the joint angles of the robot before and after calibration for reaching the six holes in the third row of the workpiece are listed in Appendix G and Appendix H. The difference between the joint angles before and after calibration is shown in Figure 16; considerable joint angle differences arise when the corrected kinematic parameters are used to solve the inverse kinematics. Accordingly, the difference between the end-effector poses before and after calibration is displayed in Figure 17. Again, we observe considerable corrections in the X and Y directions and in the α angle, which can significantly affect the smoothness of the insertion. We further analyze the corrected misalignment between the pin and the hole using the proposed method. As shown in Figure 18, we define the orientation deviation $R_{ab}$ and the position deviation $\mathbf{t}_{ab}$ of the marker frame before and after calibration. The position displacement of the tip of the rod before and after calibration is then:
$$ \Delta\mathbf{d} = R_{ab}\,\mathbf{t}_{tip} + \mathbf{t}_{ab} - \mathbf{t}_{tip}, $$
where $\mathbf{t}_{tip}$ is the position vector of the tip of the pin written in the marker frame. It is noted that the displacement along the direction of the hole does not affect the insertion task. Therefore, we can obtain the misalignment of the pin as follows:
$$ \Delta\mathbf{l} = \Delta\mathbf{d} - (\Delta\mathbf{d}\cdot\mathbf{n})\,\mathbf{n}, $$
where $\Delta\mathbf{l}$ is the misalignment, and $\mathbf{n}$ is the unit direction vector along the direction of the hole. Figure 19 shows the misalignment corresponding to each hole on the part. In general, the larger the misalignment, the more difficult the insertion. In this experiment, insertion at points H44, H45 and H46 was not successful before calibration. A further observation shows that the insertion forces acting on the robot in the vertical direction are reduced from over 100 N before correction to less than 2 N after correction for these holes.
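A short sketch of Equations (22) and (23), projecting the calibration-induced tip displacement onto the plane perpendicular to the hole axis; the function name is ours.

```python
import numpy as np

def pin_misalignment(R_ab, t_ab, t_tip, n):
    """Eqs. (22)-(23): tip displacement with the component along the hole removed."""
    d = R_ab @ np.asarray(t_tip) + np.asarray(t_ab) - np.asarray(t_tip)  # Eq. (22)
    n = np.asarray(n, dtype=float) / np.linalg.norm(n)
    return d - np.dot(d, n) * n                                          # Eq. (23)
```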

7. Conclusions

In this work, we propose a method for the improvement of robot accuracy with an optical tracking system. Compared with existing methods using laser trackers, the proposed method has lower cost and is more flexible due to the advantages of the optical tracking system. In both the simulation and the experiment, the influence of the tool on the robot accuracy is considered in this method, whereas most existing calibration methods are performed without the tool used in the automation tasks. Furthermore, the proposed calibration procedure can be easily automated, in which the errors in the D-H parameters, the robot base position, and the tool position are all corrected. Instead of updating the kinematic parameters directly on the robot controller, users can incorporate a separate controller to re-calculate the joint angles with the actual values of the kinematic parameters and then send these joint angles to the robot to complete the motion. The proposed method thus makes possible a comprehensive calibration of a whole robotic system with a specific tool mounted on the robot. The procedure can be completed without human intervention and thus can be easily automated by robot users.
Simulation and experimental studies are performed to demonstrate the effectiveness of the proposed method on a UR10 robot. It is shown that, having been taught only a very limited number of points, the robot cannot smoothly complete an insertion task for multiple holes with the nominal kinematic parameters; using the proposed method, the robot successfully completes the same task. This enables us to use paths generated from offline programming to complete complicated tasks over a large work envelope. Although the simulation and experimental demonstrations are performed on a UR10 robot, the proposed method is expected to extend to other industrial and collaborative serial robots. In future research, a theoretical analysis of the convergence of the optimization process is necessary to evaluate the reliability of the final solution, and calibrating the UR10 robot with other equipment such as a laser tracking system will be investigated for comparison with the proposed method.

Author Contributions

Conceptualization, Y.L. (Yuwen Li); methodology, Y.L. (Yuwen Li) and Y.L. (Ying Liu); software, Y.L. (Ying Liu); validation and investigation, Z.Z. and Y.L. (Ying Liu); resources, T.S.; writing—original draft preparation, Y.L. (Ying Liu) and Y.L. (Yuwen Li); writing—review and editing, Y.L. (Yuwen Li); visualization, Z.Z.; supervision, Y.L. (Yuwen Li) and T.S.; funding acquisition, Y.L. (Yuwen Li) and T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 51775322 and 61803251, by the National Key Research and Development Program of China under Grant 2019YFB1310003, by Siemens Ltd., China, and by Shanghai Robot Functional Platform.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Robot joint angles corresponding to 30 data points in the simulation.

Point  θ1 (°)  θ2 (°)  θ3 (°)  θ4 (°)  θ5 (°)  θ6 (°)
1  149.60  28.17  82.32  111.26  167.79  150.32
2  161.18  104.85  104.89  153.89  6.28  159.38
3  73.39  6.55  134.31  27.87  25.90  109.07
4  45.81  58.35  72.32  73.15  69.51  109.76
5  30.04  33.86  17.03  58.17  138.53  42.14
6  133.27  124.71  148.33  149.04  52.81  55.69
7  94.15  58.55  149.73  145.85  100.26  47.33
8  122.50  42.06  82.16  69.22  96.95  178.51
9  135.94  176.48  42.26  95.14  9.26  136.24
10  108.36  154.29  177.89  167.31  73.71  0.06
11  97.36  37.39  39.47  58.65  17.27  134.56
12  134.73  97.79  60.86  149.82  99.46  172.36
13  160.71  64.17  98.35  62.40  112.10  143.39
14  134.26  22.60  148.03  4.53  74.60  131.65
15  140.65  66.11  134.08  160.61  43.67  23.33
16  40.51  63.00  51.68  166.95  9.24  106.68
17  29.32  150.91  30.16  90.40  179.88  63.97
18  8.47  38.46  71.61  60.06  41.33  168.50
19  122.97  173.18  78.84  169.26  1.05  109.86
20  144.19  41.94  167.84  137.39  148.76  103.22
21  142.66  59.23  40.22  56.23  105.21  149.38
22  52.28  72.46  155.17  110.65  178.41  36.67
23  148.90  121.66  44.81  85.64  71.83  107.90
24  144.09  18.91  147.86  151.40  63.81  77.41
25  103.00  126.15  133.64  136.42  70.04  77.27
26  172.14  103.13  152.95  49.74  112.02  105.91
27  173.42  15.46  90.09  93.89  16.23  162.84
28  159.19  79.02  140.71  26.72  111.57  46.91
29  80.22  151.92  35.32  54.69  86.99  60.81
30  143.73  177.75  28.63  42.64  126.40  67.58

Appendix B

Table A2. Actual poses of the marker relative to the measurement frame in the simulation.

Point  x_MR (mm)  y_MR (mm)  z_MR (mm)  α_MR (°)  β_MR (°)  γ_MR (°)
1  2313.22  1792.92  −589.74  80.57  −37.86  −77.55
2  1497.11  2537.32  −281.22  −89.89  24.78  −14.51
3  2304.66  1860.18  −206.67  −118.39  78.74  129.42
4  2277.65  1984.56  −645.25  −44.79  57.39  80.46
5  1390.68  1636.51  −742.07  −128.19  −11.44  98.47
6  1958.15  2445.07  12.88  −165.94  42.65  −2.39
7  2125.99  1999.70  −220.18  86.06  −4.69  47.70
8  2088.60  2107.16  −591.93  −70.43  −40.07  −169.87
9  1558.05  2900.97  414.29  −78.69  49.63  −95.51
10  2121.44  2318.46  101.56  −126.13  0.87  −10.79
11  2399.72  1582.72  −771.25  −121.84  64.98  123.51
12  1790.61  2386.28  −600.30  37.03  19.17  −151.35
13  1716.83  2187.95  −375.82  14.30  −47.02  −129.10
14  2010.64  2288.81  −89.58  −93.86  −29.99  129.54
15  2058.98  2339.72  −347.13  −106.54  81.21  −137.63
16  2109.96  1653.80  −936.88  79.96  39.42  36.70
17  2842.36  2528.24  −155.19  −88.56  −30.02  −26.84
18  1916.72  1693.40  −698.29  101.65  50.94  −26.05
19  1813.53  2922.75  560.58  −91.35  57.05  −5.05
20  2064.34  1952.87  −87.92  83.72  −4.21  113.01
21  2047.82  2100.22  −927.90  −141.50  −61.20  108.64
22  2067.28  2144.44  −134.59  −87.03  −55.04  −119.65
23  1396.18  2608.17  −294.57  −23.45  −4.68  164.83
24  2326.61  2073.32  −175.35  2.66  54.81  144.27
25  2129.31  2428.49  1.43  132.37  30.59  67.68
26  1939.25  2095.88  164.03  31.15  30.34  173.43
27  2446.71  2306.63  −458.95  −83.78  −10.46  −175.26
28  1765.21  2180.81  110.93  14.76  −26.74  125.14
29  2340.17  3051.32  158.13  −25.52  7.65  53.46
30  1165.36  2649.96  568.58  23.49  −34.46  136.14

Appendix C

Table A3. Robot joint angles corresponding to 30 data points in the experiment.

Point  θ1 (°)  θ2 (°)  θ3 (°)  θ4 (°)  θ5 (°)  θ6 (°)
1  −152.81  −63.54  119.34  −143.66  −92.14  −61.12
2  −140.55  −47.11  103.77  −150.59  −85.43  −52.33
3  −130.90  −37.98  80.19  −131.04  −93.29  −41.23
4  −125.18  −37.92  75.17  −131.02  −92.88  −41.14
5  −120.02  −26.06  51.65  −117.04  −93.58  −21.81
6  −115.18  −24.36  47.40  −121.50  −90.64  −21.81
7  −109.32  −24.43  44.68  −113.53  −90.09  −21.82
8  −107.71  −35.02  62.02  −117.97  −90.30  −21.82
9  −109.97  −40.75  71.78  −124.13  −85.94  −21.82
10  −106.69  −49.33  86.08  −132.94  −89.67  −0.80
11  −108.77  −56.81  98.07  −132.92  −88.43  −0.94
12  −105.77  −55.93  104.87  −141.30  −92.75  −27.15
13  −107.46  −66.63  118.32  −141.23  −84.04  5.00
14  −109.04  −69.60  127.59  −141.44  −87.38  −25.52
15  −108.97  −81.65  139.60  −142.12  −95.35  −10.18
16  −99.83  −45.76  82.54  −143.31  −64.87  −10.33
17  −96.35  −24.46  59.62  −156.43  −92.18  −10.28
18  −96.30  6.48  −39.73  −53.90  −93.53  −10.27
19  −95.71  15.68  −68.46  −37.28  −88.28  −10.32
20  −94.43  10.49  −79.26  −11.56  −84.87  14.51
21  −88.27  4.61  −39.89  −53.28  −96.42  14.04
22  −88.09  −11.72  23.95  −96.94  −79.45  14.04
23  −84.94  −28.57  53.20  −119.15  −89.09  33.74
24  −85.48  −47.26  83.25  −135.46  −90.53  39.54
25  −89.56  −55.16  96.63  −133.57  −83.44  −14.82
26  −85.84  −68.78  118.18  −131.97  −85.53  20.92
27  −76.49  −60.10  114.03  −140.40  −90.71  −5.63
28  −77.15  −40.11  84.01  −141.30  −86.02  −5.62
29  −78.19  −28.77  56.12  −119.94  −86.48  30.33
30  −82.20  −41.33  82.48  −137.64  −93.58  43.06

Appendix D

Table A4. Pose of the marker relative to the measurement frame in the experiment.

Point  x_MR (mm)  y_MR (mm)  z_MR (mm)  α_MR (°)  β_MR (°)  γ_MR (°)
1  −507.55  −343.78  −3164.85  −178.45  0.79  177.85
2  −491.67  −415.23  −2940.29  172.68  −2.47  178.62
3  −490.11  −417.03  −2755.51  −178.28  −1.16  −179.88
4  −416.83  −359.45  −2667.84  177.49  −6.72  −177.23
5  −412.02  −372.38  −2526.80  178.95  7.42  −178.07
6  −316.45  −333.96  −2453.84  171.29  2.55  −177.91
7  −179.45  −324.76  −2429.54  175.82  −3.49  179.15
8  −116.95  −297.38  −2511.32  177.93  −4.99  178.67
9  −135.61  −273.98  −2561.53  174.39  −2.73  175.35
10  −92.98  −248.51  −2644.52  173.07  15.00  179.56
11  −96.20  −255.28  −2749.69  177.73  16.91  177.18
12  −1.96  −312.83  −2788.46  177.78  −12.18  −178.31
13  −20.34  −286.53  −2913.43  −179.25  21.53  172.09
14  29.23  −329.69  −3007.87  −176.35  −7.50  174.06
15  14.00  −301.70  −3133.29  −174.04  7.71  −178.03
16  124.89  −199.61  −2577.55  158.68  −0.28  156.08
17  91.92  −325.38  −2421.07  148.59  −4.48  −176.66
18  84.14  −155.01  −2444.09  −177.59  −5.29  −178.76
19  123.90  −99.59  −2585.09  178.55  −5.73  176.42
20  126.93  −6.18  −2754.79  −169.25  17.10  172.35
21  213.06  −107.96  −2468.28  179.51  10.97  −175.52
22  287.68  −401.17  −2400.00  −172.56  10.84  167.44
23  288.67  −317.20  −2491.16  176.64  28.07  176.14
24  245.85  −241.94  −2659.00  171.44  34.59  177.31
25  277.83  −254.16  −2751.42  174.90  −15.93  171.45
26  253.96  −285.60  −2983.85  −171.48  15.71  174.52
27  418.63  −335.72  −2967.80  −177.40  −20.09  −179.90
28  491.96  −357.18  −2701.12  170.71  −18.84  172.30
29  444.17  −346.15  −2547.34  179.59  17.74  173.77
30  294.38  −334.62  −2669.74  171.82  34.70  −179.11

Appendix E

Table A5. Robot joint angles for 10 random data points in the experiment.

Point  θ1 (°)  θ2 (°)  θ3 (°)  θ4 (°)  θ5 (°)  θ6 (°)
1  −84.55  −19.26  41.22  −116.82  −88.81  36.94
2  −91.87  −17.68  39.51  −116.13  −84.60  4.94
3  −93.04  −25.77  41.83  −101.52  −87.57  4.94
4  −78.61  −22.06  32.88  −108.27  −88.50  33.17
5  −86.40  −16.94  21.85  −94.48  −84.87  3.43
6  −89.80  −36.53  57.74  −111.28  −88.40  3.44
7  −91.80  −50.83  86.49  −129.45  −88.34  3.43
8  −109.42  −41.67  75.59  −124.28  −84.75  3.45
9  −92.66  −51.57  77.57  −106.80  −92.64  3.44
10  −96.30  −62.15  104.48  −129.26  −80.46  3.45

Appendix F

Table A6. Measured pose of the marker relative to the measurement frame for 10 sampled points in the experiment.

Point  x_MR (mm)  y_MR (mm)  z_MR (mm)  α_MR (°)  β_MR (°)  γ_MR (°)
1  299.41  −382.00  −2442.93  176.90  30.80  175.60
2  196.70  −396.14  −2394.80  175.60  6.14  172.62
3  160.22  −295.90  −2437.57  −175.82  6.90  175.46
4  442.53  −235.03  −2449.30  174.40  21.03  174.60
5  325.23  −255.80  −2393.16  −179.91  −1.13  172.97
6  231.40  −229.23  −2507.00  179.45  2.30  176.50
7  190.35  −234.70  −2662.45  175.70  4.42  176.69
8  −152.60  −305.32  −2594.70  179.66  21.90  172.80
9  159.96  −193.40  −2664.40  −171.67  4.71  −179.48
10  146.40  −246.80  −2815.70  −176.90  8.65  168.50

Appendix G

Table A7. Joint angles of the robot before calibration for insertion points in the third row.

Hole  θ1 (°)  θ2 (°)  θ3 (°)  θ4 (°)  θ5 (°)  θ6 (°)
t31  −137.79  −49.57  103.69  −144.25  −89.99  −46.56
t32  −133.18  −52.97  111.16  −148.32  −90.00  −41.96
t33  −127.75  −55.74  117.37  −151.76  −90.01  −36.52
t34  −121.40  −57.87  122.23  −154.49  −90.03  −30.17
t35  −114.12  −59.33  125.63  −156.43  −90.04  −22.90
t36  −106.04  −60.10  127.45  −157.46  −90.06  −14.82

Appendix H

Table A8. Joint angles of the robot after calibration for insertion points in the third row.

Hole  θ1 (°)  θ2 (°)  θ3 (°)  θ4 (°)  θ5 (°)  θ6 (°)
t31  −137.81  −49.58  103.67  −144.08  −89.84  −46.61
t32  −133.22  −52.99  111.16  −148.24  −89.78  −42.02
t33  −127.78  −55.75  117.38  −151.77  −89.73  −36.59
t34  −121.43  −57.87  122.26  −154.62  −89.71  −30.24
t35  −114.15  −59.32  125.67  −156.69  −89.71  −22.96
t36  −106.05  −60.06  127.49  −157.88  −89.75  −14.87

References

1. Lin, Y.; Zhao, H.; Ding, H. Posture optimization methodology of 6R industrial robots for machining using performance evaluation indexes. Robot. Comput. Integr. Manuf. 2017, 48, 59–72.
2. Judd, R.P.; Knasinski, A.B. A technique to calibrate industrial robots with experimental verification. IEEE Trans. Robot. Autom. 1990, 6, 20–30.
3. Nguyen, H.N.; Zhou, J.; Kang, H.J. A calibration method for enhancing robot accuracy through integration of an extended Kalman filter algorithm and an artificial neural network. Neurocomputing 2015, 151, 996–1005.
4. Nubiola, A.; Bonev, I.A. Absolute calibration of an ABB IRB 1600 robot using a laser tracker. Robot. Comput. Integr. Manuf. 2013, 29, 236–245.
5. Wu, Y.; Klimchik, A.; Caro, S.; Furet, B.; Pashkevich, A. Geometric calibration of industrial robots using enhanced partial pose measurements and design of experiments. Robot. Comput. Integr. Manuf. 2015, 35, 151–168.
6. Gao, G.; Liu, F.; San, H.; Wu, X.; Wang, W. Hybrid optimal kinematic parameter identification for an industrial robot based on BPNN-PSO. Complexity 2018, 2018, 4258676.
7. Jiang, Z.; Zhou, W.; Li, H.; Mo, Y.; Ni, W.; Huang, Q. A new kind of accurate calibration method for robotic kinematic parameters based on the extended Kalman and particle filter algorithm. IEEE Trans. Ind. Electron. 2017, 65, 3337–3345.
8. Borm, J.H.; Menq, C.H. Experimental study of observability of parameter errors in robot calibration. In Proceedings of the IEEE International Conference on Robotics and Automation, 1989; Volume 1, pp. 587–592.
9. Driels, M.R. Using passive end-point motion constraints to calibrate robot manipulators. J. Dyn. Syst. Meas. Control 1993, 115, 560–566.
10. Lightcap, C.; Hamner, S.; Schmitz, T.; Banks, S. Improved positioning accuracy of the PA10-6CE robot with geometric and flexibility calibration. IEEE Trans. Robot. 2008, 24, 452–456.
11. Nubiola, A.; Slamani, M.; Joubair, A.; Bonev, I.A. Comparison of two calibration methods for a small industrial robot based on an optical CMM and a laser tracker. Robotica 2014, 32, 447–466.
12. He, S.; Ma, L.; Yan, C.; Lee, C.H.; Hu, P. Multiple location constraints based industrial robot kinematic parameter calibration and accuracy assessment. Int. J. Adv. Manuf. Technol. 2019, 102, 1037–1050.
13. Joubair, A.; Bonev, I.A. Non-kinematic calibration of a six-axis serial robot using planar constraints. Precis. Eng. 2015, 40, 325–333.
14. Joubair, A.; Bonev, I.A. Kinematic calibration of a six-axis serial robot using distance and sphere constraints. Int. J. Adv. Manuf. Technol. 2015, 77, 515–523.
15. Du, G.; Zhang, P. Online robot calibration based on vision measurement. Robot. Comput. Integr. Manuf. 2013, 29, 484–492.
16. Wang, H.; Lu, X.; Hu, Z.; Li, Y. A vision-based fully-automatic calibration method for hand-eye serial robot. Ind. Robot Int. J. 2015, 42, 64–73.
17. Meng, Y.; Zhuang, H. Autonomous robot calibration using vision technology. Robot. Comput. Integr. Manuf. 2007, 23, 436–446.
18. Zhang, X.; Song, Y.; Yang, Y.; Pan, H. Stereo vision based autonomous robot calibration. Robot. Auton. Syst. 2017, 93, 43–51.
19. Saund, B.; Devlieg, R. High accuracy articulated robots with CNC control systems. SAE Int. J. Aerosp. 2013, 6, 780–784.
20. Jiang, Y.; Huang, X.; Li, S. An on-line compensation method of a metrology-integrated robot system for high-precision assembly. Ind. Robot Int. J. 2016, 43, 647–656.
21. Shi, X.; Zhang, F.; Qu, X.; Liu, B. An online real-time path compensation system for industrial robots based on laser tracker. Int. J. Adv. Robot. Syst. 2016, 13, 1–14.
22. Yin, S.; Guo, Y.; Ren, Y.; Zhu, J.; Yang, S.; Ye, S. Real-time thermal error compensation method for robotic visual inspection system. Int. J. Adv. Manuf. Technol. 2014, 75, 933–946.
23. Guillo, M.; Dubourg, L. Impact & improvement of tool deviation in friction stir welding: Weld quality & real-time compensation on an industrial robot. Robot. Comput. Integr. Manuf. 2016, 39, 22–31.
24. Schneider, U.; Drust, M.; Ansaloni, M.; Lehmann, C.; Pellicciari, M.; Leali, F.; Gunnink, J.W.; Verl, A. Improving robotic machining accuracy through experimental error investigation and modular compensation. Int. J. Adv. Manuf. Technol. 2016, 85, 3–15.
25. Zeng, Y.; Tian, W.; Liao, W. Positional error similarity analysis for error compensation of industrial robots. Robot. Comput. Integr. Manuf. 2016, 42, 113–120.
26. Shu, T.; Gharaaty, S.; Xie, W.F.; Joubair, A.; Bonev, I.A. Dynamic path tracking of industrial robots with high accuracy using photogrammetry sensor. IEEE/ASME Trans. Mechatron. 2018, 23, 1159–1170.
27. Gharaaty, S.; Shu, T.; Joubair, A.; Xie, W.F.; Bonev, I.A. Online pose correction of an industrial robot using an optical coordinate measure machine system. Int. J. Adv. Robot. Syst. 2018, 15, 1–16.
28. Stückelmaier, P.; Grotjahn, M.; Fräger, C. Iterative improvement of path accuracy of industrial robots using external measurements. In Proceedings of the 2017 IEEE International Conference on Advanced Intelligent Mechatronics (AIM), Munich, Germany, 3–7 July 2017; pp. 688–693.
29. Hsiao, T.; Huang, P.H. Iterative learning control for trajectory tracking of robot manipulators. Int. J. Autom. Smart Technol. 2017, 7, 133–139.
Figure 1. Robot calibration system configuration.
Figure 2. Measurement range of the optical tracking system used in this work (unit: mm).
Figure 3. System integration diagram.
Figure 4. Kinematic scheme of a serial robot.
Figure 5. Definition of the D-H parameters.
Figure 6. Nominal and actual coordinate frames.
Figure 7. Procedure for the robot to accurately track the trajectory.
Figure 8. Nominal and corrected positions.
Figure 9. Pose errors before and after calibration in the simulation: (a) X axis; (b) Y axis; (c) Z axis; (d) roll α; (e) pitch β; (f) yaw γ.
Figure 10. Experimental setup.
Figure 11. Nominal and corrected positions.
Figure 12. Pose errors before and after calibration in the experiment: (a) X axis; (b) Y axis; (c) Z axis; (d) roll α; (e) pitch β; (f) yaw γ.
Figure 13. Pose errors before and after calibration for 10 data points: (a) X axis; (b) Y axis; (c) Z axis; (d) roll α; (e) pitch β; (f) yaw γ.
Figure 14. Teaching the robot insertion at three points.
Figure 15. Alignment of rod and hole (a) before calibration and (b) after calibration.
Figure 16. Difference between the joint angles before and after calibration: (a) joint 1; (b) joint 2; (c) joint 3; (d) joint 4; (e) joint 5; (f) joint 6.
Figure 17. Difference between the end-effector pose before and after calibration: (a) X axis; (b) Y axis; (c) Z axis; (d) roll α; (e) pitch β; (f) yaw γ.
Figure 18. Misalignment of the tip.
Figure 19. Corrected misalignment for each hole.
Table 1. Nominal D-H parameters of the UR10 robot.

| i | α_i (°) | a_i (mm) | d_i (mm) | θ_i (°) |
|---|---------|----------|----------|---------|
| 1 | 0.00 | 0.00 | 127.30 | 0.00 |
| 2 | 90.00 | 0.00 | 0.00 | 0.00 |
| 3 | 0.00 | −612.00 | 0.00 | 0.00 |
| 4 | 0.00 | −572.30 | 163.90 | 0.00 |
| 5 | 90.00 | 0.00 | 115.70 | 0.00 |
| 6 | −90.00 | 0.00 | 92.20 | 0.00 |
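For readers who want to replay the nominal model, the sketch below builds the forward kinematics from Table 1. The use of the modified (Craig) D-H convention is our assumption, inferred from the indexing of the table (it reproduces the UR10 geometry), and the joint configuration in the last line is arbitrary; this is a minimal cross-check, not the authors' implementation.

```python
import numpy as np

# Nominal D-H parameters of the UR10 from Table 1.
# Each row: (alpha_i [deg], a_i [mm], d_i [mm], theta_offset_i [deg]).
DH = [
    (  0.0,    0.0, 127.3, 0.0),
    ( 90.0,    0.0,   0.0, 0.0),
    (  0.0, -612.0,   0.0, 0.0),
    (  0.0, -572.3, 163.9, 0.0),
    ( 90.0,    0.0, 115.7, 0.0),
    (-90.0,    0.0,  92.2, 0.0),
]

def link_transform(alpha, a, theta, d):
    """Homogeneous transform of one link in the modified D-H convention:
    T = Rot_x(alpha) * Trans_x(a) * Rot_z(theta) * Trans_z(d)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,     0.0,  a],
        [st * ca,  ct * ca, -sa, -d * sa],
        [st * sa,  ct * sa,  ca,  d * ca],
        [0.0,      0.0,     0.0,  1.0],
    ])

def forward_kinematics(q_deg):
    """Pose of the flange frame in the base frame for joint angles q (deg)."""
    T = np.eye(4)
    for (alpha, a, d, off), q in zip(DH, q_deg):
        T = T @ link_transform(np.radians(alpha), a, np.radians(q + off), d)
    return T

# Arbitrary example configuration; prints the flange position in mm.
print(forward_kinematics([0, -90, 0, -90, 0, 0])[:3, 3])
```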
Table 2. Given errors in the base frame relative to the measurement frame.

| Δx_MB (mm) | Δy_MB (mm) | Δz_MB (mm) | Δα_MB (°) | Δβ_MB (°) | Δγ_MB (°) |
|---|---|---|---|---|---|
| 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 |
Table 3. Given errors in the D-H parameters.

| i | Δα_i (°) | Δa_i (mm) | Δd_i (mm) | Δθ_i (°) |
|---|---|---|---|---|
| 1 | 0.30 | 0.30 | 0.30 | 0.30 |
| 2 | 0.30 | 0.30 | 0.30 | 0.30 |
| 3 | 0.30 | 0.30 | 0.30 | 0.30 |
| 4 | 0.30 | 0.30 | 0.30 | 0.30 |
| 5 | 0.30 | 0.30 | 0.30 | 0.30 |
| 6 | 0.30 | 0.30 | 0.30 | 0.30 |
Table 4. Given errors in the marker frame relative to the end-effector.

| Δx_ER (mm) | Δy_ER (mm) | Δz_ER (mm) | Δα_ER (°) | Δβ_ER (°) | Δγ_ER (°) |
|---|---|---|---|---|---|
| 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 |
Table 5. Pose error of the base frame relative to the measurement frame via calibration.

| Δx_MB (mm) | Δy_MB (mm) | Δz_MB (mm) | Δα_MB (°) | Δβ_MB (°) | Δγ_MB (°) |
|---|---|---|---|---|---|
| 0.39 | 0.52 | 0.40 | 0.41 | 0.50 | 0.40 |
Table 6. Calculated D-H parameter errors via calibration.

| i | Δα_i (°) | Δa_i (mm) | Δd_i (mm) | Δθ_i (°) |
|---|---|---|---|---|
| 1 | 0.39 | 0.39 | 0.40 | 0.40 |
| 2 | 0.30 | 0.27 | 0.31 | 0.30 |
| 3 | 0.30 | 0.29 | 0.31 | 0.30 |
| 4 | 0.30 | 0.28 | 0.31 | 0.30 |
| 5 | 0.30 | 0.30 | 0.21 | 0.31 |
| 6 | 0.33 | 0.32 | 0.41 | 0.43 |
Table 7. Pose error of the marker frame relative to the end-effector via calibration.

| Δx_ER (mm) | Δy_ER (mm) | Δz_ER (mm) | Δα_ER (°) | Δβ_ER (°) | Δγ_ER (°) |
|---|---|---|---|---|---|
| 0.55 | 0.57 | 0.42 | 0.57 | 0.56 | 0.43 |
Table 8. Given and calculated errors in the kinematic parameters.

| Parameters | Given Parameter Error | Calculated Parameter Error |
|---|---|---|
| Δx_MB + Δa_1 (mm) | 0.80 | 0.78 |
| Δy_MB (mm) | 0.50 | 0.52 |
| Δz_MB + Δd_1 (mm) | 0.80 | 0.80 |
| Δα_MB + Δα_1 (°) | 0.80 | 0.80 |
| Δβ_MB (°) | 0.50 | 0.50 |
| Δγ_MB + Δθ_1 (°) | 0.80 | 0.80 |
| Δα_2 (°) | 0.30 | 0.30 |
| Δα_3 (°) | 0.30 | 0.30 |
| Δα_4 (°) | 0.30 | 0.30 |
| Δα_5 (°) | 0.30 | 0.30 |
| Δα_6 (°) | 0.30 | 0.33 |
| Δa_2 (mm) | 0.30 | 0.27 |
| Δa_3 (mm) | 0.30 | 0.29 |
| Δa_4 (mm) | 0.30 | 0.28 |
| Δa_5 (mm) | 0.30 | 0.30 |
| Δa_6 (mm) | 0.30 | 0.32 |
| Δd_2 + Δd_3 + Δd_4 (mm) | 0.90 | 0.93 |
| Δd_5 (mm) | 0.30 | 0.21 |
| Δd_6 + Δz_ER (mm) | 0.80 | 0.84 |
| Δθ_2 (°) | 0.30 | 0.30 |
| Δθ_3 (°) | 0.30 | 0.30 |
| Δθ_4 (°) | 0.30 | 0.30 |
| Δθ_5 (°) | 0.30 | 0.31 |
| Δθ_6 + Δγ_ER (°) | 0.80 | 0.86 |
| Δx_ER (mm) | 0.50 | 0.55 |
| Δy_ER (mm) | 0.50 | 0.57 |
| Δα_ER (°) | 0.50 | 0.57 |
| Δβ_ER (°) | 0.50 | 0.56 |
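Because linearly dependent parameters enter the error model only through their sums (e.g., Δx_MB and Δa_1), only the grouped values in Table 8 are identifiable, and the individually reported values of Tables 5–7 should be compared group-wise. The following quick consistency check uses the numbers from those tables (the variable names are ours, introduced only for this illustration):

```python
# Consistency check on the linearly dependent parameter groups of Table 8:
# only the sums of these errors are identifiable.
given = {"dx_MB": 0.50, "da_1": 0.30, "dd_2": 0.30, "dd_3": 0.30, "dd_4": 0.30}
ident = {"dx_MB": 0.39, "da_1": 0.39, "dd_2": 0.31, "dd_3": 0.31, "dd_4": 0.31}

groups = [("dx_MB", "da_1"), ("dd_2", "dd_3", "dd_4")]
for g in groups:
    print("+".join(g),
          "given:", round(sum(given[k] for k in g), 2),
          "identified:", round(sum(ident[k] for k in g), 2))
# -> dx_MB+da_1 given: 0.8 identified: 0.78
#    dd_2+dd_3+dd_4 given: 0.9 identified: 0.93
```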
Table 9. Average error of 30 data points before calibration.

| Δx̄_MR (mm) | Δȳ_MR (mm) | Δz̄_MR (mm) | Δᾱ_MR (°) | Δβ̄_MR (°) | Δγ̄_MR (°) |
|---|---|---|---|---|---|
| 7.82 | 10.23 | 8.59 | 1.64 | 1.14 | 1.74 |

Table 10. Average error of 30 data points after calibration.

| Δx̄_MR (mm) | Δȳ_MR (mm) | Δz̄_MR (mm) | Δᾱ_MR (°) | Δβ̄_MR (°) | Δγ̄_MR (°) |
|---|---|---|---|---|---|
| 0.02 | 0.02 | 0.02 | 0.03 | 0.03 | 0.02 |
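The averages in Tables 9 and 10 are mean absolute deviations over the 30 validation poses. A sketch of that bookkeeping follows; the CSV file names are hypothetical placeholders standing in for the commanded poses and the optical-tracker measurements:

```python
import numpy as np

# Mean absolute pose error over the validation poses, as in Tables 9-10.
# Each hypothetical file holds one pose per row as
# [x, y, z, roll, pitch, yaw] in mm and degrees.
commanded = np.loadtxt("commanded_poses.csv", delimiter=",")  # shape (30, 6)
measured = np.loadtxt("measured_poses.csv", delimiter=",")    # shape (30, 6)

mean_abs_err = np.abs(measured - commanded).mean(axis=0)
labels = ["x (mm)", "y (mm)", "z (mm)",
          "roll (deg)", "pitch (deg)", "yaw (deg)"]
for name, err in zip(labels, mean_abs_err):
    print(f"mean |error| in {name}: {err:.2f}")
```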
Table 11. Pose error of the base frame relative to the measurement frame via calibration.

| Δx_MB (mm) | Δy_MB (mm) | Δz_MB (mm) | Δα_MB (°) | Δβ_MB (°) | Δγ_MB (°) |
|---|---|---|---|---|---|
| 2.52 | −8.17 | −1.06 | −0.25 | −0.28 | 0.07 |
Table 12. D-H parameter errors via calibration.

| i | Δα_i (°) | Δa_i (mm) | Δd_i (mm) | Δθ_i (°) |
|---|---|---|---|---|
| 1 | −0.27 | 2.49 | −1.15 | 0.22 |
| 2 | −0.02 | 0.34 | −0.10 | 0.02 |
| 3 | −0.51 | −0.36 | −0.34 | 0.01 |
| 4 | −0.64 | 0.80 | −0.19 | 0.14 |
| 5 | −0.13 | −0.64 | −0.53 | −0.04 |
| 6 | 0.15 | −0.04 | −0.59 | 0.60 |
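Applying the identification result amounts to adding the Table 12 errors to the nominal Table 1 values to obtain the calibrated kinematic model. A minimal sketch, where the column layout (α in °, a in mm, d in mm, θ offset in °) mirrors the tables; this is our illustration, not the authors' code:

```python
import numpy as np

# Calibrated D-H table: nominal values (Table 1) plus identified errors
# (Table 12). Columns: alpha (deg), a (mm), d (mm), theta offset (deg).
nominal = np.array([
    [  0.0,    0.0, 127.3, 0.0],
    [ 90.0,    0.0,   0.0, 0.0],
    [  0.0, -612.0,   0.0, 0.0],
    [  0.0, -572.3, 163.9, 0.0],
    [ 90.0,    0.0, 115.7, 0.0],
    [-90.0,    0.0,  92.2, 0.0],
])
errors = np.array([
    [-0.27,  2.49, -1.15,  0.22],
    [-0.02,  0.34, -0.10,  0.02],
    [-0.51, -0.36, -0.34,  0.01],
    [-0.64,  0.80, -0.19,  0.14],
    [-0.13, -0.64, -0.53, -0.04],
    [ 0.15, -0.04, -0.59,  0.60],
])
calibrated = nominal + errors  # feed this table to the forward kinematics
print(calibrated)
```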
Table 13. Pose error of the marker frame relative to the end-effector via calibration.

| Δx_ER (mm) | Δy_ER (mm) | Δz_ER (mm) | Δα_ER (°) | Δβ_ER (°) | Δγ_ER (°) |
|---|---|---|---|---|---|
| 0.58 | 0.68 | −0.37 | −0.06 | 0.28 | 0.02 |
Table 14. Average error of 30 data points before calibration.

| Δx̄_MR (mm) | Δȳ_MR (mm) | Δz̄_MR (mm) | Δᾱ_MR (°) | Δβ̄_MR (°) | Δγ̄_MR (°) |
|---|---|---|---|---|---|
| 2.20 | 3.24 | 3.24 | 0.50 | 0.16 | 0.49 |

Table 15. Average error of 30 data points after calibration.

| Δx̄_MR (mm) | Δȳ_MR (mm) | Δz̄_MR (mm) | Δᾱ_MR (°) | Δβ̄_MR (°) | Δγ̄_MR (°) |
|---|---|---|---|---|---|
| 0.08 | 0.12 | 0.13 | 0.06 | 0.05 | 0.04 |

Table 16. Average error of 10 data points before calibration.

| Δx̄_MR (mm) | Δȳ_MR (mm) | Δz̄_MR (mm) | Δᾱ_MR (°) | Δβ̄_MR (°) | Δγ̄_MR (°) |
|---|---|---|---|---|---|
| 6.60 | 9.73 | 9.71 | 0.74 | 0.16 | 0.67 |

Table 17. Average error of 10 data points after calibration.

| Δx̄_MR (mm) | Δȳ_MR (mm) | Δz̄_MR (mm) | Δᾱ_MR (°) | Δβ̄_MR (°) | Δγ̄_MR (°) |
|---|---|---|---|---|---|
| 0.06 | 0.20 | 0.19 | 0.04 | 0.04 | 0.02 |
Table 18. Positioning error result comparison with previous works.

| Reference | Methodology | Equipment | Robot | Mean (mm) | Std (mm) | Max (mm) |
|---|---|---|---|---|---|---|
| [4] | open-loop | laser tracking system | ABB IRB1600 | 0.364 | 0.130 | 0.696 |
| [7] | open-loop | laser tracking system | Kawasaki RS10N | 0.262 | 0.143 | 0.642 |
| [11] | open-loop | laser tracker and CMM | ABB IRB120 | 0.146 | 0.065 | 0.437 |
| [13] | closed-loop | probe and gauge | FANUC LR Mate 200iC | 0.153 | 0.070 | 0.274 |
| [15] | closed-loop | monocular vision | GOOGOL GRB3016 | 1.340 | N/A | N/A |
| [18] | closed-loop | stereo vision | UR5 | 2.500 | N/A | N/A |
| This work | open-loop | optical tracking system | UR10 | 0.348 | 0.096 | 0.467 |