1 Introduction

Most of the earth's terrain is uneven and full of different kinds of obstacles, which makes it difficult for traditional wheeled vehicles to reach their destination in natural environments where roads are not available [1]. Vehicles capable of traversing such terrain are in especially high demand for a variety of critical applications such as rescue [2], disaster management [3] or border security patrol [4]. New types of vehicles are therefore being developed for off-road environments. Researchers aim to design autonomously navigating robots that can acquire and benefit from data collected from their environment in order to overcome obstacles and navigate steadily in uncertain environments [5,6,7]. While image recognition of data received, for example, from Kinect RGB-D cameras can be used for localization and mapping in unknown environments [8], it still does not solve the problem of overcoming obstacles.

Nature often provides inspiration for the development of such vehicles [9], motivating the design of new and more versatile machines such as walking robots. The most prevalent locomotion type in nature is walking: no wheeled or tracked locomotion is found among animals, insects, birds or fish. Thus, biped, quadruped or hexapod walking robots are considered the most versatile robots, being able to adapt to all types of terrain [10]. However, while walking on a flat surface is not considered a complex task, the process becomes complicated when the surface is rough or contains obstacles. While performing tasks in a heterogeneous environment, legged robots can plan to step over, walk over or climb over obstacles based on the obstacles' characteristics [11]. Obstacles must be detected and climbed over or bypassed with minimum deviation from the path trajectory. This means that, in order to achieve stable movement in complex dynamic environments, sensors and obstacle overcoming methods are necessary [12].

Over the years, a considerable amount of research has been devoted to legged robots. The most studied topics are walking stability, energy efficiency [13], gait selection and transition, irregular terrain traversal, locomotion with leg faults and sensory systems for environmental recognition. Solving such problems has many applications, including navigation [14] and the Internet of Things [15].

In the field of robot stability, Hajiabadi [16] developed the foot force stability margin (FFSM) and modified foot force stability margin (MFFSM) methods to calculate the stability of a system based on sensing the foot forces. A bio-inspired reactive stability control method has been used to stabilize the foot force margin, helping legged robots to recover after a loss of stability and to maintain stability under external disturbances [17, 18]. Xin et al. [19] adopted a hierarchical control architecture with a high-level gait planner, a low-level trajectory planner and controllers. A Kalman filter is used to monitor the robot stability according to the change of the gait and trajectory in a dynamic environment. In another study, the LTL-A* algorithm was used to calculate a globally optimal path specified by linear temporal logic (LTL) and a weighted transition system [20]. On the other hand, neural network and deep learning-based methods have been proposed recently, such as a radial basis function neural network (RBFNN) applied for trajectory tracking of the industrial Manutec-r15 robot [21], while grid-based search on randomized maps has been adopted in [22]. Recently, a number of hybrid and nature-inspired algorithms have been suggested, such as the particle swarm optimization-modified frequency bat (PSO-MFB) algorithm for multi-target path planning [23], the firefly algorithm for trajectory planning in highly uncertain environments [24], the dragonfly algorithm [25], and a hybrid beetle antennae search (BAS) and artificial potential field (APF) algorithm [26].

Energy efficiency is very important, as it is necessary to minimize energy consumption in order to prolong the robot's operation in a potentially hazardous environment [27]. Walking robots consume a lot of energy due to their high number of degrees of freedom (DOF) [28] and because they have to carry their own weight in addition to the payload [29]. Energy consumption also depends on the type of terrain [30, 31], movement speed, friction [32] and other walking parameters (such as stride length and gait). This is why the locomotion gaits should be chosen optimally or adapted to the situation [33]. In order to minimize the energy consumption of walking robots, the distribution of foot forces during the motion should be estimated. Behmanesh et al. [34] minimized the foot forces and joint torques for better energy efficiency. Finally, kinematic and dynamic analysis should be taken into account when minimizing energy consumption [35,36,37].

In critical cases, the legs of the robot can break or get locked due to an electric fault. To complete the given task, the robot should be able to continue moving even with one or more legs disconnected. One solution to this problem uses a linear analysis for motion planning with fault-tolerant Jacobian matrices, which enables the robot to use broken legs to execute a desired movement [38]. Another study used a parallel–parallel six-legged robot to demonstrate that the robot is able to keep moving even with a locked leg or one uncontrollable leg fault [39].

The appropriate selection of gaits, gait switching and gait parameters (period, transition, step length and height, etc.) can reduce energy consumption and improve locomotion over irregular terrain. Thus, the ability to adapt the gait to a more efficient one according to the terrain irregularity is very important. For example, the metamorphic solution [40] showed that a larger stride length can be obtained using the insect-wave II gait or the mammal kick-off gait, while a smaller stride length is obtained using the insect-wave I gait. The optimal parameters depend on the movement speed; for example, with increasing speed, the stride length increased, the duty factor decreased, the stance phase duration decreased while the swing phase duration remained relatively constant, the step clearance remained comparatively constant, and the vertical hip velocities increased [41].

In particular, locomotion control and state estimation for multi-legged robots is a highly studied topic, covering different leg designs [42,43,44,45], reactive locomotion behaviors [46, 47], locomotion control over irregular terrain [48,49,50,51] and state estimation [52,53,54]. The methods presented in [48, 49] are particularly suited for the type of robot used in this study.

However, the field of obstacle overcoming for walking robots has not yet been addressed sufficiently. In [47], neuromechanical control based on animal locomotion was used for the design of nature-inspired legged robots travelling on different surfaces. The designed controller, composed of a modular neural network (MNN) and virtual agonist–antagonist muscle control, achieved variable compliant leg motions without feedback signals such as torque or position. In another study [55], path generation based on a hybrid force–position control algorithm with elements of impact dynamics allowed stabilizing the robot while walking over natural surfaces and obstacles. Path planning algorithms [56] or a foot planner method [57] can be useful for avoiding collisions with obstacles [58]; however, these methods do not solve the problem of walking over obstacles or irregular terrain. A decision-making algorithm for stepping over an obstacle instead of avoiding it allowed selecting the most suitable walking pattern for stable motion depending on the shape and location of the obstacle [59]. Similar work, mostly related to walking over irregular terrain, allows the robot not only to step over an obstacle but also onto it [30, 46, 60,61,62,63]. Still, most of these methods are slow because they require complex computations, making them hardly suitable for real-time critical applications.

For the acquisition of data required for obstacle detection and environment recognition, different types of sensors can be used [64,65,66]. Ultrasonic and infrared (IR) sensors can be used to overcome obstacles when a robot moves in a sideward–forward direction [67]. Although the most sophisticated sensory systems for observing the robot's surroundings are vision-based (stereo cameras, etc.) [56], image processing remains a complex problem that must be solved before such systems can be used in real-world robotic applications.

Here, we present a new method of obstacle overcoming using tactile sensors attached to each of the robot's legs. In most cases, legged robots use tactile sensors on the feet for leg placement indication. In our case, the robot was equipped with tactile sensors on the front side of each leg, indicating whether there is an obstacle in front of that leg. This method allows the robot to respond reactively to obstacles [5, 6] and is almost instantaneous compared to visual systems, which require processing time. Tactile sensors can also be used under low-light or no-light conditions (tunnels, night, underground, etc.), where there is not enough light for a computer vision-based solution. They can also serve as secondary (backup) feedback if the robot's vision-based system stops functioning properly. Finally, the tactile obstacle overcoming method can be used together with other environmental recognition sensors to obtain the best identification of the robot's surroundings. Compared to the other most common robotic perception systems, such as ultrasonic and infrared sensors, tactile sensing does not depend on the obstacle material, shape or texture, or on environmental conditions (light, sound, pollution, etc.).

The remainder of this paper is organized as follows. Section 2 formulates the problem addressed in this paper, explains how we distinguish rough terrain from obstacles, and describes the development of the tactile sensor, the leg trajectory and the error calculation method. Section 3 describes the experimental setup and how the experiments were carried out. Results and discussion are presented in Sect. 4, followed by conclusions.

2 Obstacle overcoming method

We aim to develop a method and sensors that allow a walking robot to overcome terrain obstacles and arrive at the target position successfully. Note that obstacles are not the same as terrain irregularities or a rough surface; the difference between them is described in detail in Sect. 2.2. The developed method can be used in an uncertain environment where the robot has no prior information about the positions of obstacles. We assume that the only known parameter is the maximum height of the obstacles. We introduce the obstacle overcoming method based on the use of tactile sensors and describe its implementation in a walking hexapod robot, along with the experimental setup for testing the method.

2.1 Problem formulation

In this paper, we investigate only the robot's movement between two adjacent points in its path. All points in the robot's path could be calculated using any path planning algorithm, as shown in Fig. 1. However, path planning is beyond the scope of this work; we assume that the robot has prior information about those points. If the distance between these points is rather small (1–2 m) or the terrain is flat with no obstacles along the robot's path, the robot can move straight from one point to another. However, in a real environment, a flat surface is rare: the terrain is expected to be irregular or rough, or to contain obstacles, which forces the robot to recalculate the path trajectory in order to overcome even the smallest obstacles and reach the goal position successfully (see Fig. 1c). During locomotion over the obstacles, unwanted collisions may occur, resulting in deviations from the robot's path. These errors accumulate and may cause the robot to miss the goal position or reach it with a large error.

Therefore, a new method is required that allows the robot to travel over unstructured terrain and overcome small obstacles [68, 69]. The tactile obstacle overcoming method proposed in this paper can be used to overcome obstacles that are smaller than the robot. The method does not require changing the body position or height; it only requires changing the step height and step length.

Fig. 1 Obstacle overcoming stages: a finding intermediate points; b calculating path trajectory; c correcting path trajectory to overcome small obstacles. G—goal position; x—intermediate points

2.2 Obstacle size and terrain

The main difference between obstacles and irregular terrain is that obstacles are separate objects, whereas irregular terrain has a continuous change in height. Although there is no formal definition of irregular terrain, in our case we describe it as uneven terrain with tiny obstacles that are close to each other.

We describe each obstacle as an ellipsoid shape with three parameters: length L, width W and height H. The obstacle length is measured along the robot's movement direction, while the width is measured along the direction perpendicular to the robot's movement. However, determining only the shape is not enough; the size of the obstacles also matters. We need to compare the obstacle size with the robot constraints, because the robot will not be able to overcome all obstacles. In some cases, the obstacles might be too big to step over, which means it is necessary to travel around them. In other cases, the obstacles may be very long and will require installing force sensors on the robot's feet and selecting a different walking method. The sizes of overcomable obstacles are constrained as follows:

$$\begin{aligned} L \le l_{i}; \ H \le h_{\max }; \ W \in [0, \ \infty ] \end{aligned}$$
(1)

where \(l_{i}\)—initial step length and \(h_{\max }\)—maximum step height.

We can see from Eq. 1 that the obstacle length must be less than or equal to the initial step length; the obstacle height must not exceed the maximum step height; the obstacle width can be of any size. All locomotion parameters depend on the type and size of the robot and must be determined individually. Thus, Eq. 1 can be used for any type of walking robot to determine the admissible small obstacle sizes when integrating the tactile obstacle overcoming method.
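As an illustration, the check in Eq. 1 can be written as a one-line predicate. The sketch below is not part of the robot's firmware; the function name is ours, and the example values correspond to the hexapod-specific limits given later in Eq. 16:

```python
# Minimal sketch of the Eq. (1) check: an obstacle (length L, height H, any width W,
# all in cm) is "small", i.e. overcomable by the tactile method, if L does not exceed
# the initial step length l_i and H does not exceed the maximum step height h_max.

def is_small_obstacle(L, H, l_i, h_max):
    return L <= l_i and H <= h_max

# Example with the hexapod-specific limits of Eq. (16): L <= 5 cm, H <= 5 cm.
print(is_small_obstacle(L=3, H=4, l_i=5, h_max=5))   # True  -> step over it
print(is_small_obstacle(L=7, H=4, l_i=5, h_max=5))   # False -> travel around it
```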

We used two parameters to categorize the terrain: density and height. Density describes the distance between the separate obstacles; height describes the minimum and maximum obstacle heights in the path. We divided both density and height into two subcategories, low density, high density, small height and large height, which in total gave four different terrain complexities (see Fig. 2). For low density, the distance R between the centers of each separate obstacle was 15 cm or more (\(R \ge \) 15 cm). For high density, the distance between obstacles was more than 10 cm but less than 15 cm (10 cm \(< R < \) 15 cm). The small height interval was between 3 and 5 cm (3 cm \(\le H \le \) 5 cm), while the large height interval was between 6 and 8 cm (6 cm \(\le H \le \) 8 cm). We did not use smaller obstacles because the final version of the tactile sensors did not reach the surface. However, it would be possible to detect both smaller and larger obstacles by extending the sensors.
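A minimal sketch of this categorization (the function and parameter names are ours, and only the ranges actually used in the experiments are handled):

```python
# Map the two terrain parameters used in the experiments, obstacle spacing R and
# obstacle height H (both in cm), onto the four terrain complexity classes of Fig. 2.

def terrain_class(R_cm, H_cm):
    if R_cm >= 15:
        density = "low density"
    elif 10 < R_cm < 15:
        density = "high density"
    else:
        raise ValueError("spacings below 10 cm were not used in the experiments")

    if 3 <= H_cm <= 5:
        height = "small height"
    elif 6 <= H_cm <= 8:
        height = "large height"
    else:
        raise ValueError("obstacle heights outside 3-8 cm were not used")

    return f"{density}, {height}"

print(terrain_class(16, 4))  # 'low density, small height'
print(terrain_class(12, 7))  # 'high density, large height'
```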

Fig. 2 Different terrain complexities used during the experiments: a low density, small height; b high density, small height; c low density, large height; d high density, large height

Fig. 3 Tactile obstacle overcoming sensor prototypes that were tested on real hexapod robot: a first version without using protection material; b second version with silicone protection; c third version with minimized protection material

2.3 Tactile sensors

In this research, we used a novel approach to tactile perception. We developed specific tactile sensors that fit on each of the robot's legs and allow a reactive response to upcoming obstacles. The idea of using tactile sensors was inspired by nature: there are almost no insects or animals that cannot sense their surroundings by touch. Most animals have skin that covers the majority of their bodies, arms and legs. Insects have small hairs on their legs and/or antennae [70], which can sense the environment when bent. If the skin or a skin hair feels something touching it, then we know for certain that there is an obstacle nearby. Moreover, light, sound, surface type or distance has no influence on this ability, unlike vision, hearing or other types of environmental recognition.

To detect obstacles that are directly in front of the robot, or even in front of each of the robot's legs, it is necessary to cover at least the front of each leg with a tactile sensor (or sensors). It is not important to have sensors on the leg sides or at the back, because the robot moves forward in most cases. Below, we describe the development of three front-leg tactile sensor prototypes that were tested on the real walking robot (see Fig. 3).

The first tactile sensor prototype was designed by combining a limit switch and polypropylene material (flexible plastic). The limit switch was positioned on top of the tibia part of the robot's leg, and a 2-mm polypropylene rectangle was attached to it to cover the leg's front side. After running several tests on terrain with random obstacles, this prototype was dismissed. The main problem was the gap between the robot's foot and the sensor. During locomotion, obstacles would quite often get stuck in this gap, which resulted in the robot getting stuck and being unable to move forward. In some cases, if the obstacle was touched by the corner of the sensor, the sensor would bend to the side and the obstacle would remain undetected. However, the limit switch and its position on the leg fulfilled our expectations (good sensitivity), so we decided to use this configuration for further development.

The second prototype was upgraded with silicone connecting the robot's foot and the bottom of the polypropylene plate. The silicone was designed to cover the gap and, partially, the sides of the leg (Fig. 3b). This version was much more successful, as the obstacles did not get stuck between the leg and the sensor as often, but the result was still not satisfactory, because the robot's legs would still get stuck on top of the obstacles (an example is shown in Fig. 4a). The final working version of the tactile obstacle overcoming sensor combined the first and the second versions (Fig. 3c). The silicone protection was minimized, which eliminated the problem of the robot's leg getting stuck on top of the obstacle, because the leg would slide off the obstacle (due to the minimized protection). Even though this sensor would still get stuck in different ways (see Fig. 4), most of the time the robot's leg would slide to the side. Moreover, the robot's leg would get stuck only about once in 30 experiment runs.

Fig. 4 Different cases of the robot's leg getting stuck: a on top of the obstacle; b obstacle between the leg and the sensor; c obstacle at the side of the sensor

The two main advantages of using limit switches for our tactile obstacle detection sensors are resistance to collisions and detection distance. First of all, during all of the experiments, not a single sensor broke. This shows that the developed sensors could be used for long operations and would work properly throughout their duration. Secondly, our obstacle detection sensors work quite similarly to the hairs of real insects: the obstacle is first touched by the sensor rather than by the leg. This provides some detection distance, thanks to which none of the legs were damaged during the experiments. With other tactile sensors, such as tactile skin [71, 72], the obstacle is detected by touching it with the robot's leg, which could lead to major robot damage.

Fig. 5 Change of leg coordinates over time: a x coordinate (leg motion forward and backward); b y coordinate (leg motion left and right); c z coordinate (leg motion up and down)

2.4 Calculation of leg trajectory

Obstacle overcoming requires changing the robot's leg trajectory, because the sensors only signal that there is an obstacle nearby. We use two different leg trajectories: the basic trajectory for neutral locomotion (even terrain) and the obstacle overcoming trajectory for overcoming the obstacles. The same specialization can be observed in the natural world when animals walk on irregular terrain [73]. Essentially, the leg trajectory transformation means changing the step height and/or step length. For our method, we integrated the adaptation of both parameters.

Both the basic and the obstacle overcoming leg trajectories are composed of the x(t), y(t), z(t) coordinates presented in Eqs. 2–4 [68]. A visual representation of the leg trajectory coordinates is shown in Fig. 5.

$$\begin{aligned} x(t) = \left\{ \begin{array}{ll} -\cos (\varepsilon ) \cdot \left( \frac{l \cdot (T - 2 \cdot \varphi _{i} + 2 \cdot t - 1)}{2 \cdot (T - 1)} + \frac{l}{2} \right) ,&{} \quad \hbox {if} \ t \le \varphi _{i}\\ \cos (\varepsilon ) \cdot l \cdot (t - \varphi _{i}), &{} \quad \hbox {if} \ \varphi _{i} < t \le \varphi _{i}+ 1\\ \cos (\varepsilon ) \cdot \left( \frac{l \cdot (T - 2 \cdot \varphi _{i} + 2 \cdot t - 1)}{2 \cdot (T - 1)} + \frac{l}{2} \right) ,&{} \quad \hbox {if} \ t > \varphi _{i} + 1 \end{array}\right. \end{aligned}$$
(2)
$$\begin{aligned} y(t) = \left\{ \begin{array}{ll} -\sin (\varepsilon ) \cdot \left( \frac{l \cdot (T - 2 \cdot \varphi _{i} + 2 \cdot t - 1)}{2 \cdot (T - 1)} + \frac{l}{2} \right) ,&{} \quad \hbox {if}\ t \le \varphi _{i}\\ \sin (\varepsilon ) \cdot l \cdot (t - \varphi _{i}),&{} \quad \hbox {if} \ \varphi _{i} < t \le \varphi _{i} + 1\\ \sin (\varepsilon ) \cdot \left( \frac{l \cdot (T - 2 \cdot \varphi _{i} + 2 \cdot t - 1)}{2 \cdot (T - 1)} + \frac{l}{2} \right) ,&{} \quad \hbox {if} \ t > \varphi _{i} + 1 \end{array}\right. \end{aligned}$$
(3)
$$\begin{aligned} z(t) = \left\{ \begin{array}{ll} 0,&{} \quad \hbox {if} \ t \le \varphi _{i}\\ h \cdot \sin \left( (t - \varphi _{i}) \cdot \pi \right) ,&{} \quad \hbox {if} \ \varphi _{i} < t \le \varphi _{i} + 1\\ 0,&{} \quad \hbox {if} \ t > \varphi _{i} + 1 \end{array}\right. \end{aligned}$$
(4)

where \(\varepsilon \)—movement direction, l—step length, T—gait period of one step, t—time, \(\varphi _{i}\)—ith leg phase, and h—step height.
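For clarity, Eqs. 2–4 can be transcribed directly into code. The following Python sketch is a literal transcription (the function name and the way the coordinates are returned are ours); it computes the foot coordinates of the ith leg at time t:

```python
# Direct, minimal transcription of Eqs. (2)-(4): foot coordinates x(t), y(t), z(t)
# of one leg, given movement direction epsilon, step length l, step height h,
# gait period T and the leg phase phi_i.

import math

def leg_xyz(t, epsilon, l, h, T, phi_i):
    support = (l * (T - 2 * phi_i + 2 * t - 1)) / (2 * (T - 1)) + l / 2

    if t <= phi_i:                      # support (stance) before the transfer
        x = -math.cos(epsilon) * support
        y = -math.sin(epsilon) * support
        z = 0.0
    elif t <= phi_i + 1:                # transfer (swing) phase
        x = math.cos(epsilon) * l * (t - phi_i)
        y = math.sin(epsilon) * l * (t - phi_i)
        z = h * math.sin((t - phi_i) * math.pi)
    else:                               # support (stance) after the transfer
        x = math.cos(epsilon) * support
        y = math.sin(epsilon) * support
        z = 0.0
    return x, y, z
```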

Fig. 6 Leg trajectories during robot locomotion: a basic roaming trajectory (used for moving over flat terrain); b obstacle overcoming trajectory (used to overcome the obstacles). The change of the leg trajectory is only initiated at the beginning of the new step

For obstacle overcoming, the robot's step length and step height are changed; otherwise, the robot's legs can get stuck on top of the obstacles more often. The step length and step height can each take two separate values:

$$\begin{aligned} l=\left\{ \begin{matrix} l_{i} \\ l_{k} \end{matrix}\right. , h=\left\{ \begin{matrix} h_{i} \\ h_{k} \end{matrix}\right. \end{aligned}$$
(5)

where \(l_{i}\)—initial step length, \(l_{k}\)—obstacle overcoming step length, \(h_{i}\)—initial step height, and \(h_{k}\)—obstacle overcoming step height. The initial step length and height are the parameters used for locomotion over even terrain and are usually set to a minimum. As for the obstacle overcoming parameters, we have established empirically that, for successful obstacle overcoming, the step length must be doubled:

$$\begin{aligned} l_{k}=2 \cdot l_{i} \end{aligned}$$
(6)

The step height must be equal to:

$$\begin{aligned} h_{k}=H+2\ \hbox {[cm]} \end{aligned}$$
(7)

By combining Eqs. 5, 6 and 7 with Eqs. 2–4, we derive two sets of equations which define the initial leg trajectory and the obstacle overcoming trajectory (see Fig. 6). Switching from the initial to the obstacle overcoming leg trajectory only occurs at the beginning of a new step and only if the tactile sensor is active, because switching trajectories during the leg transfer can result in poor locomotion, with the robot's legs getting stuck on top of the obstacle.
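As a purely illustrative example, assuming an initial step length \(l_{i}\) = 5 cm (consistent with Eq. 16 and the step counts reported in Sect. 3.3) and an obstacle of height H = 3 cm, Eqs. 6 and 7 give:

$$\begin{aligned} l_{k}=2 \cdot l_{i}=2 \cdot 5=10~\hbox {[cm]}, \quad h_{k}=H+2=3+2=5~\hbox {[cm]} \end{aligned}$$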

The obstacle overcoming algorithm for a single robot leg is summarized in Fig. 7.

Fig. 7 Diagram of the obstacle overcoming method for a single robot leg. Each leg consists of three servo motors and a tactile sensor indicating contact with an obstacle. When the leg touches the obstacle, a signal is transferred to the on-board controller and the obstacle overcoming method calculates different leg trajectory parameters (step length and step height). These parameters are used during the next step, when obstacle overcoming is executed. If the tactile sensor does not give any signal, the leg trajectory parameters are not changed
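The decision logic of Fig. 7 can be summarized by the following per-leg sketch. It is illustrative only: the actual controller runs in C on the STM32F411RE; sensor_active() and move_leg_to() are hypothetical placeholders for the tactile sensor read-out and the servo commands, and leg_xyz() is the helper sketched after Eqs. 2–4:

```python
# Per-leg control sketch of the method in Fig. 7: the trajectory parameters are
# re-evaluated only at the beginning of a new step, never during the leg transfer.

def execute_one_step(sensor_active, move_leg_to,
                     epsilon, T, phi_i, l_i, h_i, H_max, dt=0.05):
    # Decide the trajectory parameters for this step (Eqs. 5-7).
    if sensor_active():
        l, h = 2 * l_i, H_max + 2      # obstacle overcoming step length and height
    else:
        l, h = l_i, h_i                # basic roaming step length and height

    # Command the leg along the chosen trajectory (Eqs. 2-4) for one gait period.
    t = 0.0
    while t <= T:
        move_leg_to(*leg_xyz(t, epsilon, l, h, T, phi_i))
        t += dt
```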

2.5 Calculation of positional error

To evaluate the method, we need to calculate the traveling errors. We denote the longitudinal traveling distance error as \(\Delta x\) and the transverse (left/right) positional displacement error as \(\Delta y\).

First, we calculate the error in the x coordinate, \(\Delta x\), which represents the robot's forward and backward motion. This error depends on the number of obstacles N in the robot's path and the number of legs n the robot has. We assume that the robot has to overcome every obstacle n/2 times, each time losing some traveling distance, because only one side of the robot interacts with a single obstacle. When a single leg of the robot hits an obstacle, the robot is pushed backward and rotates to one side, causing the errors \(\Delta x\) and \(\Delta y\).

So, in order to calculate \(\Delta x\), we sum the lost traveling distance of each leg interacting with an obstacle, over every obstacle in the robot's motion path:

$$\begin{aligned} \Delta x=\sum _{j=1}^{N} \sum _{k=1}^{n/2} \Delta L \end{aligned}$$
(8)

where \(\Delta L\) is the positional error of a single leg. This parameter is the origin of both errors \(\Delta x\) and \(\Delta y\). The error \(\Delta L\) can be obtained by measuring how much the robot pushes the leg backward (Fig. 6). However, we do not know the exact positional error, because the robot can hit the obstacle at any time during the leg transfer phase. We do know that with every step the robot moves forward by its initial step length and that only one side of the robot interacts with a single obstacle. With this in mind, the positional error can be learned from the spatial coordinates of the robot's legs and their angle with respect to an obstacle.
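For intuition only (in practice \(\Delta L\) varies between interactions, which is why it is learned as described in Sect. 2.6): if \(\Delta L\) were the same for every interaction, Eq. 8 would reduce to a simple product. For example, for a hexapod (n = 6), N = 4 obstacles and \(\Delta L\) = 1 cm,

$$\begin{aligned} \Delta x = N \cdot \frac{n}{2} \cdot \Delta L = 4 \cdot 3 \cdot 1 = 12~\hbox {[cm]} \end{aligned}$$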

2.6 Neural network

For the prediction of the longitudinal positional error, we have adopted a generative recurrent neural network (GRNN) [74]. Given a set of known paths \(X = [X_1,X_2,...,X_n]\) and a known future path P of a known robot, we predict the future errors \(L=[L_1,L_2,...,L_m]\) for each leg of a walking robot in an unstructured environment. During training, the future positional errors E are known and used as the ground truth.

The architecture of the neural network is presented in Fig. 8. All GRNN units are made from a single long short-term memory (LSTM) cell, with the node RNNs having a hidden state size of 64. For all nonlinear network layers, a rectified linear unit (ReLU) transformation embeds the input into a 64-dimensional vector. The network was trained for 50 epochs with a starting learning rate of 0.001 using the Adam optimizer [75].
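For illustration, a minimal recurrent error predictor using the hyperparameters quoted above (hidden size 64, ReLU embedding into a 64-dimensional vector, Adam with a learning rate of 0.001) could be sketched in PyTorch as follows. This is not the authors' implementation; the input dimension (robot position plus planned position per time step) and the six-output head (one error per leg) are our assumptions:

```python
# Sketch of a recurrent positional-error predictor in the spirit of Sect. 2.6.
# Input x has shape (batch, time, in_dim); output is one predicted error per leg
# at every time step.

import torch
import torch.nn as nn

class ErrorPredictor(nn.Module):
    def __init__(self, in_dim=4, hidden=64, n_legs=6):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())  # ReLU embedding
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)              # hidden size 64
        self.head = nn.Linear(hidden, n_legs)                             # one error per leg

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)            # (batch, time, n_legs)

model = ErrorPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # trained for 50 epochs
loss_fn = nn.MSELoss()                 # regression loss against the measured errors
```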

Fig. 8 Architecture of the path travelling error prediction neural network. For each time step, the positions of the robot and the planned position are used as input. Dotted arrow denotes input. Dashed feedback connection denotes the inference

2.7 Calculation of transversal error

Deriving the error \(\Delta y\) is not as straightforward, because we have to calculate the angle \(\alpha \) by which the robot rotates at each obstacle. We also have to keep in mind that rotating to one side produces a positive angle \(+ \alpha \), while rotating to the other side produces a negative angle \(- \alpha \). So, in order to calculate the error \(\Delta y\), we have to know the final rotation angle \(\Delta \alpha \) after overcoming all the obstacles. This final angle is measured relative to the straight path of the original robot movement trajectory (see Fig. 9).

Fig. 9 Explanation of the origin of the error \(\Delta y\) (a) and the trigonometric problem for calculating the rotation angle \(\alpha \) (b). The robot's rotation point is the leg and obstacle interaction point. After rotating, the robot moves at a certain angle relative to the original trajectory. R—remaining travel distance; a—width of the robot at a certain configuration; \(\alpha \)—rotation angle at a single obstacle for a single leg; \(\Delta x\)—x coordinate error; GP—ideal goal position point

We can see from Fig. 9 that the error \(\Delta y\) can be calculated using the law of sines in the given triangle ABC, because we know all the angles of the triangle and one of its sides:

$$\begin{aligned} \Delta y = \frac{(R - \Delta x) \cdot \sin (\Delta \alpha )}{\sin (90^{\circ } - \Delta \alpha )} \end{aligned}$$
(9)

where R is the remaining travel distance after overcoming all the obstacles in the path. R can be calculated by subtracting the covered distance from the initial travel distance:

$$\begin{aligned} R = d - S \cdot l_{i} \end{aligned}$$
(10)

where d is the initial travel distance and S is the number of steps that the robot has already made. Multiplying the number of steps by the initial step length gives the covered distance, because in every step the robot moves forward by the initial step length.

The final rotation angle of the robot, \(\Delta \alpha \), can be calculated in the same way as \(\Delta x\), by using a double sum:

$$\begin{aligned} \Delta \alpha = \sum _{j=1}^{N} \sum _{k=1}^{n/2} \alpha _{k} \end{aligned}$$
(11)

where \(\alpha _{k}\) is the angle by which the robot rotates at a single obstacle with a single leg. This angle can be calculated from triangle EFD in Fig. 9b. First, we need to calculate the angle \(\beta \). We know that triangle EFD has two equal sides, so the angle \(\beta \) can be calculated as:

$$\begin{aligned} \beta = \hbox {arccos}\left( \frac{\Delta L}{2 \cdot a} \right) \end{aligned}$$
(12)

where a is the width of the robot at a certain configuration and \(\Delta L\) is the distance between points F and D. This is the same lost distance as used to calculate \(\Delta x\), because when one side of the robot touches the obstacle, the other side continues to move. Now that we have \(\beta \), we can derive \(\alpha _{1}\) and \(\alpha _{2}\), which are equal (see Fig. 9b):

$$\begin{aligned} \alpha _{1} = \alpha _{2} = 90^{\circ } - \beta = 90^{\circ } - \hbox {arccos}\left( \frac{\Delta L}{2 \cdot a} \right) \end{aligned}$$
(13)

Finally, we can see that \(\alpha _{k}\) is equal to:

$$\begin{aligned} \alpha _{k} = \alpha _{1} + \alpha _{2} = 180^{\circ } - 2 \cdot \beta \end{aligned}$$
(14)

By combining Eqs. 13, 14, 11, 10 and 9, we derive the final solution for calculating the error \(\Delta y\).
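The whole error model can be gathered into one routine. The following Python sketch is our illustration, not the paper's code; the argument names and the sign convention for \(\Delta L\) are assumptions. It combines Eq. 8 with Eqs. 9–14:

```python
# Illustrative sketch combining Eqs. (8)-(14). delta_L_list holds the lost distance
# Delta_L of every leg-obstacle interaction, signed by the side of the robot that hit
# the obstacle (+ for one side, - for the other); a is the robot width at the current
# configuration, d the planned travel distance, S the number of steps already made
# and l_i the initial step length (all lengths in the same units, e.g. cm).

import math

def travel_errors(delta_L_list, a, d, S, l_i):
    delta_x = sum(abs(dL) for dL in delta_L_list)        # Eq. (8): lost distance

    delta_alpha = 0.0                                    # Eq. (11): summed rotation
    for dL in delta_L_list:
        beta = math.acos(abs(dL) / (2 * a))              # Eq. (12)
        alpha_k = math.pi - 2 * beta                     # Eqs. (13)-(14), in radians
        delta_alpha += math.copysign(alpha_k, dL)        # sign encodes rotation side

    R = d - S * l_i                                      # Eq. (10): remaining distance
    delta_y = ((R - delta_x) * math.sin(delta_alpha)
               / math.sin(math.pi / 2 - delta_alpha))    # Eq. (9), law of sines
    return delta_x, delta_y, math.degrees(delta_alpha)
```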

3 Experiments

In this section, we present the results of the experimental evaluation of the tactile obstacle overcoming method. The method was implemented in the small hexapod robot described in Sect. 3.1 and tested in a series of experiments with different terrain complexities (Sect. 2.2) using two gaits. The errors of moving along a straight path are then compared with those of locomotion without the tactile obstacle overcoming method.

3.1 Hexapod robot and gaits

To test the tactile obstacle overcoming method, we used an 18-DOF hexapod walking robot (see Fig. 10) equipped with AX-12 servo motors and controlled by an STM32F411RE microcontroller. Each of the robot's legs was fitted with the developed tactile sensor. Each tactile sensor value is checked in every control program cycle, which takes around several milliseconds. A structural diagram of the hexapod robot is shown in Fig. 11.

Fig. 10 Hexapod walking robot used in the experiments: a front view; b top view

Fig. 11 Structural diagram of the hexapod robot. Only a single power supply unit was used, which was not mounted on the robot

The servo motors are supplied with 10–12 V, which is why the power source and the servo motors are connected directly, while between the microcontroller and the power source there is a 7805 voltage regulator, which converts the supply voltage to 5 V. The data line for controlling the AX-12 servo motors requires UART serial communication. Because the servo motors have a single data wire, we added a data direction converter built from 74HC126 and 74HC04 circuits, which enables switching between transmitting (TX) and receiving (RX) data. The tactile sensors only require a 3.3 V supply voltage and are connected directly to the microcontroller.

The hexapod robot can be considered small. The weight of the robot is approximately 1.5 kg; its length and width are 30 cm (including the angle offsets of the front and rear legs and the displacements of the middle legs). The robot constraints depend on the size of the legs, body elevation and body thickness (Fig. 12). The hexapod robot's legs are constructed from three parts: coxa—5 cm; femur—8.2 cm; and tibia—12 cm. Using these lengths, the robot's constraints were determined as follows:

$$\begin{aligned} l_{\max }=10~\hbox {[cm]}, \ h_{\max }=12~\hbox {[cm]} \end{aligned}$$
(15)
Fig. 12 Hexapod robot constraints in neutral stance. SM—servo motor. Coxa leg part length is 5 cm, femur length is 8.2 cm, and tibia length is 12 cm. Height from the surface to the robot's body bottom is 8 cm; h—step height

For all the experiments, we used two basic hexapod robot gaits—the tripod gait and the wave gait—because the tripod gait is the fastest gait and the wave gait is a more stable gait. The gaits differ in their leg transfer sequences (Fig. 13). During the tripod gait, three legs are transferred while the remaining three legs are in the support phase (i.e., holding the weight of the robot). In the first half of the period, the left middle, right front and right hind legs are transferred; in the second half of the period, the left front, left hind and right middle legs are transferred. During the wave gait, only one leg is transferred at a time, in the following sequence: left front, left middle, left hind, right front, right middle and right hind.
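One possible assignment of the corresponding leg phase offsets \(\varphi _{i}\) used with Eqs. 2–4 is sketched below. It assumes (our reading, not stated explicitly in the paper) that each leg transfer occupies one time unit of the gait period T:

```python
# Leg phase offsets phi_i for the two gaits of Fig. 13, assuming each transfer takes
# one time unit of the period T. Legs: LF, LM, LH, RF, RM, RH.

TRIPOD_GAIT = {
    "T": 2,
    "phi": {"LM": 0, "RF": 0, "RH": 0,    # transferred in the first half of the period
            "LF": 1, "LH": 1, "RM": 1},   # transferred in the second half of the period
}

WAVE_GAIT = {
    "T": 6,
    "phi": {"LF": 0, "LM": 1, "LH": 2,    # one leg transferred at a time,
            "RF": 3, "RM": 4, "RH": 5},   # in the sequence described above
}
```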

Fig. 13 Hexapod robot gaits used in the experiments. The gait is determined by each leg's transfer phase, which is related to the gait period T. Leg abbreviations: LF—left front; LM—left middle; LH—left hind; RF—right front; RM—right middle; RH—right hind. T—period

3.2 Terrain and obstacle description

Using Eq. 1 and the robot constraints described in Sect. 3.1 and Fig. 12, we determined the sizes of the obstacles that our hexapod robot can overcome using the tactile sensor-based obstacle overcoming method:

$$\begin{aligned} L \le 5~\hbox {[cm]}, \ H \le 5~\hbox {[cm]}, \ W \in [0, \ \infty ]~\hbox {[cm]} \end{aligned}$$
(16)

By comparing the robot constraints in Eq. 15 with the final obstacle sizes in Eq. 16, we can see that the obstacles must be smaller than the maximum step height of the robot in order to achieve successful locomotion over the obstacles without collision with the body. The obstacle lengths used were 2 or 3 cm. Several examples of the obstacles are shown in Fig. 14.

Fig. 14 Examples of the obstacles that were used to form an experimental terrain. H—obstacle height; L—obstacle length; W—obstacle width

In all the experiments, we divided the terrain into sub-terrains of different complexity for better evaluation of our method.

3.3 Experimental settings

Before performing the experiments with the tactile obstacle overcoming method, it is important to perform calibration tests and obtain locomotion data without the use of the proposed method. These data are then used as a baseline against which the obstacle overcoming quality of the proposed method is compared.

For all the following experiments, different speeds were set: 0.013 m/s for the wave gait and 0.036 m/s for the tripod gait. Each complexity was divided into two distances: 1 m and 2 m. Then, measurements were taken to obtain the number of steps needed to reach the goal position. For the wave gait, 20 steps were needed to travel the 1 m distance and 40 steps to travel the 2 m distance; for the tripod gait, 13 steps were needed for the 1 m distance and 26 steps for the 2 m distance.

The reference measurements consisted of experiments performed without any obstacles in the path of the robot and experiments with obstacles but without using the proposed obstacle overcoming method. During the experiments without the obstacles, after each trial, the positional errors \(\Delta x\) and \(\Delta y\) (Sects. 2.5 and 2.7) were measured at the goal position to estimate how accurately the robot reaches it. The results are presented in Table 1.

For the experiments with obstacles but without the obstacle overcoming method, only low-density terrain was used and the path distance was 1 m. In addition, the direction angle change \(\Delta \alpha \) was also measured at the goal position. The explanation of the data measurements is presented in Fig. 15. The results of locomotion without using the proposed obstacle overcoming method are presented in Fig. 16. The experimental data are presented as a vector field in the (x, y) plane with an angle, in order to represent the robot's end position and orientation.

Table 1 Calibration results: straight path errors during locomotion over the terrain without the obstacles
Fig. 15 Measuring straight path errors and direction angle change. I quadrant: (\(+\Delta x\), \(+\Delta y\)) and \(+\Delta \alpha \); II quadrant: (\(-\Delta x\), \(+\Delta y\)) and \(+\Delta \alpha \); III quadrant: (\(-\Delta x\), \(-\Delta y\)) and \(-\Delta \alpha \); IV quadrant: (\(+\Delta x\), \(-\Delta y\)) and \(-\Delta \alpha \)

Fig. 16 Errors of walking on straight path without using the proposed obstacle overcoming method. Low-density terrain; travel distance 1 m; goal position represented by (0, 0) coordinates. a Wave gait, obstacle height 3–5 cm; b wave gait, obstacle height 6–8 cm; c tripod gait, obstacle height 3–5 cm; d tripod gait, obstacle height 6–8 cm

The scale of the x and y coordinates in Fig. 16 was selected so that the goal position coordinates could be seen. The same scale is used in the subsequent figures.

The results of the experiments with the wave gait and the tripod gait are presented in Figs. 17 and 18. We performed 144 distinct experiments with two different gaits, two terrain complexities, two distances and two obstacle heights. An example of the walking robot's locomotion over high-obstacle-density terrain using the wave gait is shown in Fig. 19, and all results are discussed in Sect. 4.

Fig. 17 Results of wave gait experiments: a low density, travel distance 1 m; b low density, travel distance 2 m; c high density, travel distance 1 m; d high density, travel distance 2 m

Fig. 18 Results of tripod gait experiments: a low density, travel distance 1 m; b low density, travel distance 2 m; c high density, travel distance 1 m; d high density, travel distance 2 m

Fig. 19 An example of hexapod robot locomotion over high-density terrain with small height obstacles using the wave gait. At the beginning of the path, the robot moves along the black line; only at the end of the trajectory does the robot deviate to the side

4 Results and discussion

From the experimental results, we can see that the robot deviates from the target movement trajectory; this can also be seen in Fig. 19. When the robot detects an obstacle with its leg, a certain error is introduced, because the obstacle overcoming trajectory is initiated only with a new step. When the robot is pushed backward at an obstacle, the remaining legs continue moving forward. The same applies when the robot turns left or right. As a result, the robot loses distance and deviates to one side (depending on which side the obstacle is located on).

From Fig. 16, it is obvious that without using the tactile obstacle overcoming method, robot locomotion over terrain with obstacles is impossible, because the robot's legs get stuck behind the obstacles and the robot pushes itself backward, losing distance. This is also the reason why the \(\Delta x\) errors are usually larger than the \(\Delta y\) errors. During locomotion without the method, the average errors for the wave gait are (25.2, 11.4) cm and 17.4\(^{\circ }\) with small height obstacles and (22.4, 2.6) cm and 6.8\(^{\circ }\) with large height obstacles; for the tripod gait, they are (25.2, 12.6) cm and 22.8\(^{\circ }\) with small height obstacles and (22, 0.8) cm and 12\(^{\circ }\) with large height obstacles. The average errors are calculated using absolute values. We can see that the robot is unable to reach the goal position even when it is only 1 m away from the starting point, which shows that there was no point in performing the experiments over the 2 m distance.

The results of the experiments using the obstacle overcoming method are presented in Fig. 20. All obtained results were averaged and categorized in order to compare which gait is a better choice for traversing terrain with obstacles. First of all, we can see from Fig. 20 that the overall errors are smaller (4.16–16.5 cm) for both gaits when travelling the shorter distance, and increase (15.4–41.5 cm) when travelling the longer distance. Of course, there are several exceptions (tripod gait, 2 m distance, low density, large obstacles, etc.). Summarizing, in almost all cases the slower gait (wave) performs better than the faster gait (tripod).

Fig. 20 Comparison of tripod and wave gait errors during different experiments. Each bar represents the distance error in centimeters by which the robot deviated from the target point (shorter bars are better). The experiments were carried out with different obstacle sizes (small, large); different travel distances (1 m, 2 m); and different obstacle densities (low, high). The robot deviates less when using the wave gait (6 out of 8 times). This means that the slower gait is more accurate when following the path

Finally, we calculated the relative errors (with respect to the travel distance) for all cases observed during the research. The relative errors without using the obstacle overcoming method reached up to \(\sim \)24%. Using the obstacle overcoming method decreased the relative errors to \(\sim \)12%. These results show that, using the developed obstacle overcoming method, it is possible to reduce the errors by a factor of two.
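Assuming the relative error is defined as the deviation at the goal position divided by the planned travel distance (our reading of the text), a 12 cm deviation over a 1 m path, for example, corresponds to

$$\begin{aligned} \delta = \frac{\Delta }{d} \cdot 100\% = \frac{12~\hbox {[cm]}}{100~\hbox {[cm]}} \cdot 100\% = 12\% \end{aligned}$$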

When using the wave gait, it is possible to obtain a satisfactory result for locomotion over any type of terrain and distance. Even though the errors grow with longer distances and higher obstacles, the robot is still able to overcome all encountered obstacles. For the tripod gait, locomotion is also possible, but the errors are much more random due to the higher movement speed and the leg transfer sequence. During the tripod gait experiments, obstacles would get stuck between the sensor and the robot's legs more often. However, in most cases the legs would slide off the obstacles after a short time (due to the silicone protection) and the robot would continue moving forward. This always results in larger x coordinate errors, but the locomotion is still better than without the obstacle overcoming method. The robot movement errors also depend slightly on the obstacle positions, because the terrain was arranged randomly rather than symmetrically. Some obstacles would restore the robot's movement direction in the y coordinate, because different sides of the robot collided with separate obstacles. Finally, by comparing the wave and tripod gait errors, we notice that while slower gaits provide a better overall result, faster gaits can still be used for travelling short distances over terrain with smaller obstacles.

To further improve the proposed method, in the future we plan to add feedback to the robot's control system, such as positioning sensors (accelerometer, gyroscope, etc.). Using positioning sensors, the robot's body orientation and direction angle could be measured along with its local position. The measured position could be fed directly into the robot's control algorithm and used to correct the movement errors and the direction angle. We also intend to add more sophisticated sensors: the robot's surroundings and the obstacle positions could be observed using a camera. This way, it would be possible to overcome some of the obstacles, which would reduce the errors. Finally, the leg trajectory for obstacle overcoming can also be improved.

5 Conclusions

In this paper, we presented the development of a tactile obstacle overcoming method for walking robots. The method combines data from tactile sensors that cover the front sides of the robot's legs, the calculation of obstacle overcoming leg trajectories and the prediction of the longitudinal spatial error. This combination provided good results for obstacle detection and overcoming, reaching an error of only 12% in an artificial unstructured environment. We used a hexapod robot and ran a significant number of experiments on different types of terrain. The obtained results showed that the proposed hybrid obstacle overcoming method is successful for walking locomotion over terrain with various obstacles. The obstacle overcoming method eliminates the need to recalculate the path trajectory to overcome small terrain obstacles, because the robot is able to move straight between the intermediate points with sufficiently high precision. We also determined that slower gaits are more effective than faster gaits on most types of terrain, although faster gaits can be used for shorter distances with small height obstacles. The method proves to be a successful improvement in walking locomotion even without additional positioning sensors to eliminate movement errors or changes of the direction angle. As a result, the tactile obstacle overcoming method can be applied to any type of walking robot as a primary obstacle detection system or as a backup feedback, because it does not require high computing power and can withstand collisions with obstacles.