Practice Report



Robótica Computacional
Cheuk Kelly Ng Pante ([email protected])
December 29, 2023
Contents

1 Forward kinematics

2 Inverse kinematics
2.1 Example execution

3 Localization
3.1 Localization code
3.2 Example execution
1 Forward kinematics
Forward kinematics is a branch of robotics and mechanics that deals with the relationship between the
movements of the links of a robot and the variables that control them. In other words, forward kinematics
is the problem of finding the position and orientation of the end of the robot, given the set of parameters
that define the positions and orientations of all the links.
To explain forward kinematics, the Denavit-Hartenberg (DH) convention will be used. It is a coordinate convention used to model the direct and inverse kinematics of articulated robots, and it is based on four parameters associated with each joint. These parameters are:
• di : Distance from Oi−1 to the intersection of the Zi−1 and Xi axes, measured along the Zi−1 axis.
• θi : Angle between the Xi−1 and Xi axes, measured around the Zi−1 axis.
• ai : Distance from the intersection of the Zi−1 and Xi axes to Oi , measured along the Xi axis.
• αi : Angle between the Zi−1 and Zi axes, measured around the Xi axis.
The DH parameters can be calculated according to the following table, where each parameter is measured from element A to element B, along or around axis C:

         A            B            C
di       Oi−1         Zi−1 ∩ Xi    Zi−1
θi       Xi−1         Xi           Zi−1
ai       Zi−1 ∩ Xi    Oi           Xi
αi       Zi−1         Zi           Xi

Table 1.1: Parameters of DH

For the explanation of forward kinematics, manipulator 3 will be used:

Figure 1.1: Example manipulator

To calculate the forward kinematics, we will first calculate the DH parameters:

Figure 1.2: Manipulator 3 DH parameters

As can be seen in figure 1.2, before calculating the DH parameters some input variables have been assigned. These correspond to the angles of the manipulator joints, and they are the values introduced when the program is executed. For example, if we run it as follows:
$ python3 ./cinematica_directa 5 0 5 90 45
the forward kinematics result would be:

Figure 1.3: Forward kinematics of manipulator 3

Then, the transformation matrix of each joint was calculated for this manipulator:

Figure 1.4: Transformation matrix of each joint of manipulator 3

Then, the transformation of each joint is obtained by substituting the data from the Denavit-Hartenberg table specified in figure 1.2:

Figure 1.5: Transformation of each joint of manipulator 3

And finally, the result of the forward kinematics of manipulator 3:

Figure 1.6: Result of the forward kinematics of manipulator 3
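The computation shown in figures 1.4 to 1.6 can be sketched in Python. This is a minimal sketch of the standard DH homogeneous transform and of chaining the per-joint matrices; it is not the exact practice script, and the joint values in the example are hypothetical:

```python
import numpy as np

def dh_matrix(d, theta, a, alpha):
    """Homogeneous transform of one joint from its DH parameters (angles in radians)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Chain the per-joint transforms: T = A_1 @ A_2 @ ... @ A_n."""
    T = np.eye(4)
    for d, theta, a, alpha in dh_params:
        T = T @ dh_matrix(d, theta, a, alpha)
    return T

# Hypothetical two-joint example: a revolute joint (theta = 90 degrees, link a = 1)
# followed by a prismatic extension d = 2 along the new z-axis.
T = forward_kinematics([(0.0, np.pi / 2, 1.0, 0.0),
                        (2.0, 0.0, 0.0, 0.0)])
print(T[:3, 3])  # position of the end of the chain
```

The last column of T gives the position of the end of the robot, and the upper-left 3×3 block its orientation.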

2 Inverse kinematics
Inverse kinematics is a branch of robotics and mechanics that deals with the relationship between the
movements of the links of a robot and the variables that control them. In other words, inverse kinematics
is the problem of finding the set of parameters that define the positions and orientations of all the links,
given the position and orientation of the end of the robot.
Inverse kinematics can be a complex problem due to the presence of physical constraints and limitations
of the robot, such as joint movement limits, collisions or singularities.
In the inverse kinematics practice, we have to calculate the distance that a prismatic joint at point Oi must be extended so that the end point of the robot, On, is as close as possible to the target position R.
The approach can only be made in the extension direction of the joint, Li, whose orientation with respect to the absolute x-axis is given by the angle ω:

ω = Σ_{j=0}^{i} θ_j

Using the scalar product, we can project the vector from On to R onto the extension direction of the joint to obtain the distance d:

d = (cos(ω), sin(ω)) · (R − On)
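As a quick numerical check of this projection (with made-up values for ω, On and R):

```python
import numpy as np

omega = np.pi / 4                  # extension direction of the joint (45 degrees)
O_n = np.array([1.0, 1.0])         # current end point of the robot
R = np.array([3.0, 2.0])           # target position
u = np.array([np.cos(omega), np.sin(omega)])
d = u @ (R - O_n)                  # scalar projection of (R - O_n) onto u
```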

Therefore, the value of Li after each iteration becomes:

Li ← Li + (cos(ω), sin(ω)) · (R − On),  with ω = Σ_{j=0}^{i} θ_j

The inverse kinematics in the Python script is computed with the Cyclic Coordinate Descent (CCD) algorithm. Before developing the CCD algorithm itself, we must first define the joint values for the forward kinematics. For this purpose, lists have been defined with the upper and lower limits of each joint, so that the robot does not exceed them, and another list distinguishes rotational joints from prismatic joints.

Figure 2.1: Lists of limits joints articulations
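Such limit lists might look like the following. The values and names here are hypothetical; the practice script defines its own:

```python
import numpy as np

# Hypothetical joint limits; the real script uses its own values.
limite_inferior = [-np.pi / 2, 0.0, -np.pi / 2]   # lower limit of each joint
limite_superior = [ np.pi / 2, 5.0,  np.pi / 2]   # upper limit of each joint
prismatica      = [False, True, False]            # True where the joint is prismatic

def clamp(valor, inferior, superior):
    """Keep a joint value inside its limits."""
    return min(max(valor, inferior), superior)
```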

Then, in the main loop, each joint is handled according to its type. For a rotational joint we compute the vectors V1 and V2: V1 goes from the joint of interest to the target, and V2 goes from the joint of interest to the end point of the robot. The difference between the angles of V1 and V2 is then added to the θ of the rotational joint. For a prismatic joint, ω is the sum of all the θ up to and including the current joint, and the scalar projection of the vector from the end point of the robot to the target onto the extension direction of the joint is added to its length.
Afterwards, we must ensure that the robot does not exceed the joint limits: the current value is checked against the upper and lower limits, and if it falls outside, it is clamped to the corresponding limit.

Figure 2.2: Inverse kinematics code with the CCD algorithm
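The loop described above can be sketched as follows. This is a minimal planar CCD sketch under the conventions stated in the text, not the exact practice code, and all names are hypothetical:

```python
import numpy as np

def forward(theta, lengths):
    """Points O_0..O_n of a planar chain from relative joint angles and link lengths."""
    pts = [np.zeros(2)]
    acc = 0.0
    for th, l in zip(theta, lengths):
        acc += th
        pts.append(pts[-1] + l * np.array([np.cos(acc), np.sin(acc)]))
    return pts

def ccd_iteration(theta, lengths, prismatic, limits, target):
    """One CCD pass, from the last joint back to the first."""
    target = np.asarray(target, dtype=float)
    for i in reversed(range(len(theta))):
        pts = forward(theta, lengths)
        if prismatic[i]:
            # Prismatic joint: omega is the sum of theta up to the current joint
            # (inclusive); extend by the projection of (target - end) on that direction.
            omega = sum(theta[: i + 1])
            u = np.array([np.cos(omega), np.sin(omega)])
            lengths[i] += u @ (target - pts[-1])
            lengths[i] = min(max(lengths[i], limits[i][0]), limits[i][1])
        else:
            # Rotational joint: rotate so the end point moves toward the target.
            v1 = target - pts[i]      # joint -> target
            v2 = pts[-1] - pts[i]     # joint -> end point
            theta[i] += np.arctan2(v1[1], v1[0]) - np.arctan2(v2[1], v2[0])
            theta[i] = min(max(theta[i], limits[i][0]), limits[i][1])
    return theta, lengths
```

For a simple two-link rotational arm, a handful of iterations is normally enough to bring the end point within a small tolerance of a reachable target.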

2.1 Example execution
An example of executing the inverse kinematics script is to move the robot to the point x = 5, y = 5:

Figure 2.3: Example of the execution of inverse kinematics in the first iteration

Figure 2.4: Example of inverse kinematics execution in the last iteration

3 Localization
Localization is the process of estimating the position of a robot in an unknown environment. Localization
is a fundamental problem in mobile robotics, as most mobile robotics tasks require the robot to know its
position in the environment. Localization is a difficult problem because the robot does not know its initial
position and the environment may be unknown and dynamic. In addition, the robot’s sensors may be
noisy and the robot may have inaccurate movement.
In the localization practice, the most probable position of the robot has to be calculated from its sensory system, within a square region of a given centre and side 2 · radius.
In the Python script, the task is to complete the function localizacion, which receives as parameters the beacons, the actual location of the robot, the ideal location of the robot, the centre and radius of the search region, and a show parameter to display the localization graph. This function aims to find the most probable location of a robot within the square region, using its sensory system and a set of reference beacons.
The localization process is performed by comparing the measurements of the real robot with the measure-
ments of the ideal robot at different points within the square region. The ideal robot is moved to each
point and the error between the measurements of the ideal robot and the measurements of the real robot
is calculated. The point with the smallest error is considered the most likely location of the real robot.
The code uses a for loop to iterate over the points within the square region. The function np.arange generates a sequence of values within the range of the radius of the region; the returned array is used to calculate the x and y coordinates of the current point.
Within the loop, the position of the ideal robot is updated to the current point and the error between the
measurements of the ideal robot and the measurements of the real robot is calculated using the function
real.measurement_prob(ideal.sense(balizas), balizas). The error is stored in an image matrix.
The code also keeps track of the minimum error found so far and the point corresponding to that minimum
error. If an error smaller than the current minimum error is found, the minimum error is updated and the
current point is saved as the best point.
Once the loop is completed, the ideal robot is placed at the best point found, which is considered the most likely location of the actual robot. Finally, the best point and the minimum error are printed.
Once the function localizacion has been developed, it is invoked before iterating over the list of target points, and again at the end of the loop to relocate the robot, computing the best position for the ideal robot based on the position of the real robot.
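The grid search described above can be sketched as follows. This is a self-contained sketch that stands in for the practice's robot objects: here the "measurement" is simply the vector of distances to the beacons, the error is the absolute difference of measurements, and the simplified parameter list and all values are hypothetical:

```python
import numpy as np

def medir(pose, balizas):
    """Ideal measurement at `pose`: distance to each beacon."""
    return np.array([np.hypot(bx - pose[0], by - pose[1]) for bx, by in balizas])

def localizacion(balizas, medida_real, centro, radio, paso=0.1):
    """Scan the square region of centre `centro` and side 2*radio, returning the
    point whose ideal measurement best matches the real robot's measurement."""
    mejor, menor_error = None, np.inf
    for dx in np.arange(-radio, radio + paso, paso):
        for dy in np.arange(-radio, radio + paso, paso):
            p = (centro[0] + dx, centro[1] + dy)
            error = float(np.sum(np.abs(medir(p, balizas) - medida_real)))
            if error < menor_error:
                menor_error, mejor = error, p
    return mejor, menor_error
```

The returned best point plays the role of the most likely location of the real robot; in the practice code the error comes from real.measurement_prob(ideal.sense(balizas), balizas) instead of this simplified distance comparison.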

3.1 Localization code

Figure 3.1: Localization code

Figure 3.2: Target localization code

3.2 Example execution

Figure 3.3: Example of localization execution

Figure 3.4: Example of localization execution
