Abstract
We present a short-step interior-point algorithm (IPA) for sufficient linear complementarity problems (LCPs) based on a new search direction. An algebraically equivalent transformation (AET) is applied to the centrality equation of the central path system, and Newton's method is used on the modified system. This technique was proposed by Zsolt Darvay for linear optimization in 2002. Darvay first used the square root function in the AET, and in 2012 Darvay et al. applied the technique with another transformation, formed by the difference of the identity map and the square root function. We apply the AET technique with this new function to transform the central path equation of sufficient LCPs, which leads to new search directions for the problem. We introduce an IPA with full Newton steps and prove that its iteration bound coincides with the best known one for sufficient LCPs. We present numerical results that illustrate the performance of the proposed IPA on two significantly different sets of test problems and compare it with related, quite similar variants of IPAs.
1 Introduction
The standard LCP is a classical problem of optimization theory. Given a matrix \(M\in \mathbb {R}^{n\times n}\) and a vector \(\mathbf {q}\in \mathbb {R}^n\), the task is to find a vector pair \((\mathbf {x},\mathbf {s})\in \mathbb {R}^n\times \mathbb {R}^n\) satisfying
$$\begin{aligned} -M\mathbf {x}+ \mathbf {s}= \mathbf {q}, \quad \mathbf {x}\mathbf {s}= \mathbf {0}, \quad \mathbf {x}\ge \mathbf {0}, \ \mathbf {s}\ge \mathbf {0}, \end{aligned}$$
(1)
where \(\mathbf {x}\mathbf {s}\) stands for the Hadamard product (componentwise product) of the two vectors.
There are different versions of LCPs, such as the horizontal LCP, the mixed LCP and the geometric LCP. Anitescu et al. [4] proved that these versions of the LCP can be transformed into the standard LCP. The primal-dual pair of linear or convex quadratic optimization problems can be reformulated as an LCP. Furthermore, several important practical problems lead to LCPs. For a comprehensive treatment of LCP theory and applications, we refer the reader to the monographs of Cottle et al. [8] and Kojima et al. [35].
Cottle et al. [9] introduced the concept of sufficient matrix. A matrix \(M\in \mathbb {R}^{n\times n}\) is called column sufficient if for all \(\mathbf {x}\in \mathbb {R}^n\) we have
\(x_i(M\mathbf {x})_i\le 0\;\) for all \(i\in \mathcal {I}\; \Longrightarrow \; x_i(M\mathbf {x})_i= 0\;\) for all \(i\in \mathcal {I},\)
where \(\mathcal {I} = \{1,2,\ldots , n\}\) is an index set. Moreover, M is a row sufficient matrix if \(M^T\) is column sufficient. A matrix is called sufficient if it is both row and column sufficient.
The sufficient matrix class can be defined in a different way, through the \(\mathcal {P}_*\)-matrices. The concept of \(\mathcal {P}_*\)-matrices was introduced by Kojima et al. [35]. Let \(\kappa \ge 0\) be given. We call the matrix \(M\in \mathbb {R}^{n\times n}\) a \(\mathcal {P}_*(\kappa )\)-matrix if
$$\begin{aligned} (1+4\kappa )\sum _{i\in \mathcal {I}_+(\mathbf {x})} x_i(M\mathbf {x})_i + \sum _{i\in \mathcal {I}_-(\mathbf {x})} x_i(M\mathbf {x})_i \ge 0 \end{aligned}$$
(2)
holds for any \(\mathbf {x}\in \mathbb {R}^n\), where \(\mathcal {I}_+(\mathbf {x})=\{i\in \mathcal {I}\;|\; x_i(M\mathbf {x})_i>0\}\) and \(\mathcal {I}_-(\mathbf {x})=\{i\in \mathcal {I}\;|\;x_i(M\mathbf {x})_i<0\}\).
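For illustration (this sketch is not part of the original paper), the following Python snippet computes, for a given matrix M and a vector \(\mathbf {y}\), the smallest \(\kappa \ge 0\) for which inequality (2) holds at \(\mathbf {y}\). This is the quantity \(\kappa (\mathbf {y})\) that appears again in Sect. 4; the handicap of M is the supremum of \(\kappa (\mathbf {y})\) over all \(\mathbf {y}\), which is hard to compute in general [23, 43]. The function name kappa_of_vector is ours.

```python
import numpy as np

def kappa_of_vector(M, y):
    """Smallest kappa >= 0 such that inequality (2),
    (1 + 4*kappa) * sum_{I+} y_i (My)_i + sum_{I-} y_i (My)_i >= 0,
    holds for this particular vector y."""
    t = y * (M @ y)              # componentwise products y_i (My)_i
    pos = t[t > 0].sum()         # contribution of the index set I_+(y)
    neg = t[t < 0].sum()         # contribution of I_-(y), a nonpositive number
    if pos + neg >= 0:
        return 0.0               # (2) already holds with kappa = 0
    if pos == 0:
        return np.inf            # no kappa can repair it, so M is not P_*
    return -(pos + neg) / (4.0 * pos)

# example: a positive semidefinite matrix is P_*(0), so kappa(y) = 0 for every y
print(kappa_of_vector(np.eye(3), np.array([1.0, -2.0, 0.5])))
```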
We may define the set of \(\mathcal {P_*}\)-matrices as the union of the sets of \(\mathcal {P}_*(\kappa )\)-matrices in the following way:
$$\begin{aligned} \mathcal {P}_* = \bigcup _{\kappa \ge 0} \mathcal {P}_*(\kappa ). \end{aligned}$$
Kojima et al. [35] proved that a \(\mathcal {P_*}\)-matrix is also column sufficient. Subsequently, Guu and Cottle [28] proved that a \(\mathcal {P_*}\)-matrix is also row sufficient. Väliaho [42] proved the reverse inclusion; therefore the class of sufficient matrices and that of \(\mathcal {P_*}\)-matrices coincide. Considering the definition of the \(\mathcal {P}_*\)-matrix class, a matrix is sufficient if and only if it is a \(\mathcal {P}_*(\kappa )\)-matrix for some \(\kappa \ge 0\). The smallest \(\kappa \) with this property is called the handicap of the matrix [43].
If the matrix of the LCP has the \(\mathcal {P}_*(\kappa )\) property, the LCP is called a \(\mathcal {P}_*(\kappa )\)-LCP. There are several approaches to solve \(\mathcal {P}_*(\kappa )\)-LCPs. Among them, IPAs have received much more attention than pivot algorithms [10,11,12, 22, 26, 27]. Since the primal-dual linear optimization (LO) problem pair can be formulated as a special class of LCPs, it is still an interesting question whether a given IPA can be extended to solve \(\mathcal {P}_*(\kappa )\)-LCPs efficiently. Kojima et al. [35] were the first to follow this path and generalized a primal-dual path-following IPA to \(\mathcal {P}_*(\kappa )\)-LCPs. Their algorithm has \(O((1+\kappa )\sqrt{n}L)\) iteration complexity, where L is the input size of the problem with integer data. This complexity is still the best known one for solving \(\mathcal {P}_*(\kappa )\)-LCPs. Since then, many IPAs introduced for solving LO problems have been extended to \(\mathcal {P}_*(\kappa )\)-LCPs [32].
In 2002, Darvay proposed a new technique for finding search directions of IPAs for LO problems [13,14,15]. The main idea of Darvay's approach is simple: perform a one-to-one transformation on the nonlinear equation of the central path problem and apply Newton's method to the transformed system. In [13, 15] the transformation is achieved by the square root function. Since then, this technique has been extended to different types of optimization problems: to LCPs [1, 3, 5, 6, 34, 39, 44, 46], to convex quadratic programming (CQP) [2], to semidefinite programming (SDP) [7, 45], to second-order cone programming (SOCP) [46] and to symmetric optimization (SO) [33, 47].
Let us define the transformation that Darvay introduced and analysed for the central path of the LO problem as a continuously differentiable monotone increasing function
$$\begin{aligned} \varphi :(\xi , \infty )\rightarrow \mathbb {R}, \end{aligned}$$
where \(0 \le \xi < 1\) is a fixed value which depends on the function \(\varphi \). In view of [13, 18], we may say that the first interesting transformation of the nonlinear equation of the central path is
that has been used widely in the area of LO (for details see Roos et al. [40]). The first monotone function analysed by Darvay [13] was
$$\begin{aligned} \varphi (t) = \sqrt{t}. \end{aligned}$$
Recently, Darvay et al. [18] used the function
$$\begin{aligned} \varphi (t) = t - \sqrt{t}. \end{aligned}$$
(5)
Naturally, for the functions used in [13, 18], the transformed central path equation and hence the right-hand side of the Newton system become more complicated, with considerable effects on the computations and on the complexity analysis of the algorithm.
Kheirfam [33] and Pirhaji et al. [39] investigated the function \(\varphi (t) = \frac{\sqrt{t}}{2(1+\sqrt{t})}\) for \(\mathcal {P}_*(\kappa )\)-LCPs and circular cone optimization, respectively.
Since the function \(\varphi \) used by Darvay and others needs to be a continuously differentiable monotone increasing function, we refer to this transformation technique as the algebraically equivalent transformation (AET) of the central path equation.
Motivated by the works mentioned above, we propose a full-Newton step IPA for solving the \(\mathcal {P}_*(\kappa )\)-LCP based on the direction introduced by Darvay et al. [18]. We analyze the algorithm and obtain a complexity bound that coincides with the best known result for \(\mathcal {P}_*(\kappa )\)-LCPs, thereby showing that the direction presented in [18] can be used in IPAs for \(\mathcal {P}_*(\kappa )\)-LCPs. We mention that our approach is significantly different from [12], where criss-cross algorithms are analyzed. There are also important differences between our algorithm and the one presented in [32], which focuses on general LCPs, since we apply the AET technique, which is not considered in [32].
The outline of the rest of the paper is as follows. In Sect. 2 we recall some basic concepts and describe Darvay's method for LCPs. In Sect. 3 we present a full-Newton step feasible IPA for \(\mathcal {P}_*(\kappa )\)-LCPs using the direction induced by the function given in (5), and state two theorems about the convergence and complexity of the algorithm. In Sect. 4 we provide some numerical results to illustrate the performance of the proposed IPA and compare it with other variants. Finally, we give some conclusions in Sect. 5.
The notations used in this paper are presented in "Appendix A.1". Moreover, some important well-known results used in this paper are summarized in "Appendix A.2".
In the next section we consider the AET technique presented by Darvay [13, 15] in order to obtain new search directions.
2 Newton system for the transformed central path equation
Let \(0 \le \xi < 1\) and \(\varphi :(\xi , \infty )\rightarrow \mathbb {R}\) be a continuously differentiable and invertible function. For a vector \(\mathbf {v}\in \mathbb {R}^n\) we define \(\varphi (\mathbf {v})\) componentwise:
$$\begin{aligned} \varphi (\mathbf {v}) = \left( \varphi (v_1), \varphi (v_2), \ldots , \varphi (v_n)\right) ^T. \end{aligned}$$
We apply an AET to the centrality condition of (26), obtaining the following equivalent system
where \(\mu >0\). Furthermore, we apply Newton's method in order to approximate the solutions of (6). A simple calculation yields the following system
By Lemma 3, the above transformed Newton system has a unique solution if M is a \(\mathcal {P}_{*}(\kappa )\)-matrix.
This type of search direction was introduced by Zsolt Darvay for linear optimization. In [15] he used the function \(\varphi (t) = \sqrt{t}\), and later in [18] the function \(\varphi (t) = t - \sqrt{t}\) was applied. There are several generalizations of Darvay's method with the square root function [34, 46, 47].
In this paper we use the function \(\varphi (t) = t - \sqrt{t}\) to obtain a new search direction for a short-step IPA that solves \(\mathcal {P}_{*}(\kappa )\)-LCPs. We mention that this direction was applied for the first time to LCPs in [17] and to horizontal LCPs in [19].
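Since the displays of systems (6) and (7) are not reproduced above, the following Python sketch shows how the resulting search direction can be computed under our reading of the AET technique of [13, 18]: for \(\varphi (t) = t - \sqrt{t}\) the second equation of the Newton system takes the form \(\mathbf {s}\varDelta \mathbf {x}+ \mathbf {x}\varDelta \mathbf {s}= \mathbf {a}_{\varphi }\) with \(\mathbf {a}_{\varphi } = 2\mu \,\mathbf {v}(\mathbf {v}- \mathbf {v}^2)/(2\mathbf {v}- \mathbf {e})\) and \(\mathbf {v}= \sqrt{\mathbf {x}\mathbf {s}/\mu }\). This right-hand side is our reconstruction (it is consistent with Corollary 1 below), not a verbatim copy of system (7).

```python
import numpy as np

def newton_direction(M, x, s, mu):
    """Search direction from the AET of the centrality equation with
    phi(t) = t - sqrt(t)  (our reconstruction of system (7)):
        -M dx + ds = 0
         s dx + x ds = a_phi,  a_phi = 2*mu*v*(v - v^2)/(2v - e),  v = sqrt(xs/mu).
    Requires x, s > 0 and v^2 > e/4, so that 2v - e > 0."""
    n = len(x)
    v = np.sqrt(x * s / mu)
    a_phi = 2.0 * mu * v * (v - v**2) / (2.0 * v - 1.0)
    # coefficient matrix of the Newton system:  [ -M  I ] [dx]   [   0   ]
    #                                           [  S  X ] [ds] = [ a_phi ]
    A = np.block([[-M, np.eye(n)],
                  [np.diag(s), np.diag(x)]])
    sol = np.linalg.solve(A, np.concatenate([np.zeros(n), a_phi]))
    return sol[:n], sol[n:]
```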
2.1 Scaling the Newton system
Scaling is a usual technique in the area of IPAs (see [40]). Its aim is to simplify the analysis of IPAs. We use the following notations
Using (8) a simple computation yields the following scaled Newton system
where
For more details on obtaining the vector \(\mathbf {p_v}\) we refer to [13].
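A small sketch of the scaled quantities follows (again our reconstruction of the omitted displays (8)-(9), using the usual scaling \(\mathbf {d_x}= \mathbf {v}\varDelta \mathbf {x}/\mathbf {x}\), \(\mathbf {d_s}= \mathbf {v}\varDelta \mathbf {s}/\mathbf {s}\) and \(\mathbf {p_v}= 2(\mathbf {v}- \mathbf {v}^2)/(2\mathbf {v}- \mathbf {e})\), which is consistent with Corollary 1 below).

```python
import numpy as np

def scaled_quantities(x, s, mu, dx, ds):
    """Scaled Newton direction: d_x = v*dx/x, d_s = v*ds/s with v = sqrt(xs/mu).
    For phi(t) = t - sqrt(t) and (dx, ds) solving the reconstructed system (7),
    they satisfy d_x + d_s = p_v = 2(v - v^2)/(2v - e)."""
    v = np.sqrt(x * s / mu)
    d_x = v * dx / x
    d_s = v * ds / s
    p_v = 2.0 * v * (1.0 - v) / (2.0 * v - 1.0)
    assert np.allclose(d_x + d_s, p_v)   # scaled form of the second Newton equation
    delta = 0.5 * np.linalg.norm(p_v)    # assumed form of the centrality measure (10)
    return d_x, d_s, p_v, delta
```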
2.2 Centrality measure and some bounds
We use the following centrality measure
This measure meets our expectations: it is nonnegative, and it is zero if and only if the vector pair \((\mathbf {x},\mathbf {s})\) lies on the central path \(\mathcal {C}\). In this paper we use the function given by (5), which was investigated for the first time in [16]. This function is not invertible on \(\mathbb {R}_+\), but it is invertible on the interval \(\displaystyle \left[ \frac{1}{4},\infty \right) \). Thus, we can apply \(\varphi \) to \(\mathbf {v}^2 = \frac{\mathbf {x}\mathbf {s}}{\mu }\) if all coordinates of this vector are greater than or equal to \(\displaystyle \frac{1}{4}\). For the function (5) we get
and
Using Lemma 3 for the system (7) we can easily prove the following inequalities.
Corollary 1
Let \(M\in \mathcal {P}_*(\kappa )\), \((\mathbf {x},\mathbf {s})\in \mathcal {F^+}\) and \(\mu >0\) be given. Moreover, let \(\delta := \delta (\mathbf {x}, \mathbf {s}; \mu )\).
-
(i)
System (7) has a unique solution \((\varDelta \mathbf {x}, \varDelta \mathbf {s})\) that satisfies the following bounds
$$\begin{aligned}&||\varDelta \mathbf {x}\varDelta \mathbf {s}||_{\infty }\le \left( \frac{1}{4} + \kappa \right) 4\,\mu \,\delta ^2, \quad ||\varDelta \mathbf {x}\varDelta \mathbf {s}||_1\le \left( \frac{1}{2} + \kappa \right) 4\,\mu \,\delta ^2,\\&||\varDelta \mathbf {x}\varDelta \mathbf {s}||_2\le \sqrt{\left( \frac{1}{4} + \kappa \right) \left( \frac{1}{2} + \kappa \right) }4\,\mu \,\delta ^2. \end{aligned}$$ -
(ii)
System (9) has a unique solution \((\mathbf {d_x}, \mathbf {d_s})\) for which
$$\begin{aligned}&||\mathbf {d_x}\mathbf {d_s}||_{\infty }\le \left( 1 + 4\,\kappa \right) \delta ^2, \quad ||\mathbf {d_x}\mathbf {d_s}||_1\le \left( 2 + 4\,\kappa \right) \delta ^2,\\&||\mathbf {d_x}\mathbf {d_s}||_2\le \sqrt{\left( 1 + 4\,\kappa \right) \left( 2 + 4\,\kappa \right) }\delta ^2 \end{aligned}$$hold.
3 The IPA and its analysis
In this section, we present a full-Newton step IPA for the \(\mathcal {P}_*(\kappa )\)-LCP. It is based on the new search direction obtained from the system (7) using the function \(\varphi (t) = t - \sqrt{t}\).
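Algorithm 1 itself is displayed as a figure in the original article and is not reproduced in the text above. The following Python sketch shows the structure of such a full-Newton step, short-step IPA with the default parameter \(\theta = \frac{1}{(9\kappa + 8)\sqrt{n}}\); the stopping rule \(n\mu \le \varepsilon \) and the feasibility check are our assumptions, and the search direction is the reconstruction given in Sect. 2.

```python
import numpy as np

def short_step_ipa(M, q, x, s, kappa, eps=1e-8):
    """Sketch of a full-Newton step IPA based on phi(t) = t - sqrt(t).
    Assumes a strictly feasible starting point with delta(x,s;mu) < tau = 1/(4*kappa+2)
    and xs/mu > e/4, where mu = x^T s / n (an assumption of this sketch)."""
    n = len(x)
    assert np.allclose(-M @ x + s, q) and np.all(x > 0) and np.all(s > 0)
    theta = 1.0 / ((9.0 * kappa + 8.0) * np.sqrt(n))      # default barrier update parameter
    mu = x @ s / n
    while n * mu > eps:                                   # assumed stopping rule
        v = np.sqrt(x * s / mu)
        a_phi = 2.0 * mu * v * (v - v**2) / (2.0 * v - 1.0)   # AET right-hand side (reconstruction)
        A = np.block([[-M, np.eye(n)], [np.diag(s), np.diag(x)]])
        sol = np.linalg.solve(A, np.concatenate([np.zeros(n), a_phi]))
        dx, ds = sol[:n], sol[n:]
        x, s = x + dx, s + ds                             # full Newton step
        mu = (1.0 - theta) * mu                           # shrink the target value
    return x, s
```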
3.1 Analysis of the short step IPA
The next theorem states that Algorithm 1 is well defined. This means that the full Newton step is feasible in each iteration and that the conditions required at the start of an iteration, i.e. \(\delta := \delta (\mathbf {x},\mathbf {s};\mu )<\tau \) and \(\mathbf {v}^2>\frac{1}{4} \mathbf{e}\), also hold for the new \((\mathbf {x}, \mathbf {s})\in \mathcal {F}^+\) and the new \(\mu \).
Theorem 1
Let \((\mathbf {x},\mathbf {s})\in \mathcal {F}^+\) be given such that \(\delta (\mathbf {x}, \mathbf {s}; \mu ) < \tau \) and \(\mathbf {v}^2 > \frac{1}{4} \, \mathbf {e}\), where \(\tau =\frac{1}{4\kappa + 2}\) and \(\mathbf {v}^2 = \frac{\mathbf {x}\mathbf {s}}{\mu }\), and let \((\varDelta \mathbf {x},\varDelta \mathbf {s})\) be the unique solution of the system (7). Then \((\mathbf {x}^+,\mathbf {s}^+)\in \mathcal {F}^+\), \(\left( \mathbf {v}^+\right) ^2>\frac{1}{4}\,\mathbf {e}\) and \(\delta (\mathbf {x}^+,\mathbf {s}^+;\mu ^+) < \tau \), where \(\mathbf {x}^+ := \mathbf {x}+ \varDelta \mathbf {x}\), \(\mathbf {s}^+:=\mathbf {s}+ \varDelta \mathbf {s}\), \(\mu ^+ = (1-\theta )\mu \) and \(\mathbf {v}^+ := \sqrt{\frac{\mathbf {x}^+ \mathbf {s}^+}{\mu ^+}}\).
Proof
Let us first prove that \(\mathbf {x}(\alpha ) := \mathbf {x}+ \alpha \varDelta \mathbf {x}\) and \(\mathbf {s}(\alpha ) := \mathbf {s}+ \alpha \varDelta \mathbf {s}\) are strictly feasible solutions for any \(\alpha \in [0,1]\). Then,
Now \(\mathbf {x}(\alpha )\mathbf {s}(\alpha )>0\) if \(\mathbf {v}^2 + \mathbf {v}\mathbf {p_v}+ \alpha \mathbf {d_x}\mathbf {d_s}>0\). Bounding from below and using \(\mathbf {v}>\frac{1}{2}\mathbf {e}\) and Corollary 1 we get
The functions \(\alpha \rightarrow \mathbf {x}(\alpha )\) and \(\alpha \rightarrow \mathbf {s}(\alpha )\) are continuous, \(\mathbf {x}(0)> 0,\; \mathbf {s}(0) > 0\) and \(\mathbf {x}(\alpha )\, \mathbf {s}(\alpha ) > 0\) for each \(\alpha \in [0,1]\), thus we proved that \((\mathbf {x}^+,\mathbf {s}^+)\in \mathcal {F}^+\).
Now we need to show that \((\mathbf {v}^+)^2>\frac{1}{4}\mathbf {e}\). Actually we prove a stronger statement, namely that
where \(\eta = \sqrt{1-\theta }\) and \(\displaystyle \theta =\frac{1}{(8 + 9\kappa )\sqrt{n}}\).
We use a weaker condition for \(\delta \)
We have
since \(\displaystyle \frac{\mathbf {v}^2}{2\mathbf {v}-\mathbf {e}}\ge \mathbf {e}.\) Using inequalities (12) and (13) we get
Taking square roots we obtain (11); thus the inequality \(\displaystyle \mathbf {v}^+>\frac{1}{2}\,\mathbf {e}\) follows. Now we need to show that \(\delta (\mathbf {x}^+,\mathbf {s}^+;\mu ^+)<\tau \). We have
Based on the first term of the previous expression, we introduce the function
Using f we can write
The function f is continuous, monotonically decreasing and positive on the interval \(\left( \frac{1}{2},\infty \right) \). Combining the properties of the function f with the inequality (11) we obtain
Using the inequalities (14) and (15) we get
Bounding the last term yields
In the last inequality we used Corollary 1, the inequalities \(\mathbf {v}^2 \ge 2\mathbf {v}- \mathbf {e}\ge \mathbf {0}\) and the definition of \(\delta \) given in (10). Using (16) and the previous estimation we are ready to bound \(\delta (\mathbf {x}^+,\mathbf {s}^+;\mu ^+)\) as follows
where
Let us bound \(\theta \) from above as
Therefore,
Thus, using that the function g is decreasing we get
Using the inequality (17) we obtain the following bound
Substituting the values of the parameters \(\theta \) and \(\tau \) and using the fact that \(c(\kappa )\le \frac{3}{2} + 4\kappa \) we have
So far we have proved that Algorithm 1 is well defined. The iterations can be repeated, since the required conditions are inherited by the new iterates. \(\square \)
3.2 Complexity analysis of the short step IPA
In this subsection we prove a theorem which deals with the complexity of Algorithm 1. We show that this algorithm enjoys the best known theoretical polynomial complexity.
First we examine the duality gap. If the Newton-step took us exactly to the \(\mu \)-center then \(\displaystyle \frac{(\mathbf {x}^+)^T\mathbf {s}^+}{n} = \mu \,\) would hold. The next lemma examines the relationship between the terms \(\,\displaystyle \frac{(\mathbf {x}^+)^T\mathbf {s}^+}{n}\,\) and \(\,\mu \).
Lemma 1
Let \((\mathbf {x},\mathbf {s})\in \mathcal {F}^+\) be given such that \(\delta (\mathbf {x}, \mathbf {s}; \mu ) < \tau \) and \(\mathbf {v}^2 = \frac{\mathbf {x}\mathbf {s}}{\mu }> \frac{1}{4} \, \mathbf {e}\), where \(\tau =\frac{1}{4\kappa + 2}\). The solution of the Newton system (7) is denoted by \((\varDelta \mathbf {x},\varDelta \mathbf {s})\). Then,
where \(\mathbf {x}^+ = \mathbf {x}+ \varDelta \mathbf {x}\), \(\mathbf {s}^+=\mathbf {s}+ \varDelta \mathbf {s}\) and \(\delta =\delta (\mathbf {x}, \mathbf {s}; \mu )\).
Proof
Let us compute \(\mathbf {x}^+\mathbf {s}^+\). We have
The last inequality follows from \(\mathbf {v}\ge \frac{1}{2}\,\mathbf {e}\), which implies \(\displaystyle \frac{2\mathbf {v}-\mathbf {e}}{\mathbf {v}^2}\in [0,1]^n\). From (19) we get
We use the following simple estimation:
From (20) and (21) we obtain the statement of the lemma. \(\square \)
Based on the previous lemma it is easy to derive the complexity result of the algorithm.
Theorem 2
Let \(\theta = \frac{1}{(9\kappa + 8)\sqrt{n}}\) and \(\tau = \frac{1}{4\kappa + 2}\). If \(M\in \mathcal {P}_*(\kappa )\), \((\mathbf {x}^0, \mathbf {s}^0)\in \mathcal {F}^+\), \(\mu ^0>0\), \(\;\delta (\mathbf {x}^0,\mathbf {s}^0;\mu ^0)<\tau \) and \(\frac{\mathbf {x}^0\mathbf {s}^0}{\mu ^0}>\frac{1}{4} \mathbf {e}\) then the Algorithm 1 provides an \(\varepsilon \)-solution of the LCP after at most
iterations.
Proof
Let us assume that we are in the \((k+1)^{\text {th}}\) iteration of our algorithm. At the beginning of the iteration we have \(\mathbf {x}=\mathbf {x}_{k},\,\mathbf {s}=\mathbf {s}_{k}\), \(\mu = \mu _{k}\) and \(\delta _k = \delta (\mathbf {x}_k,\mathbf {s}_k;\mu _k)\). Using Lemma 1 we have
Using the equality \(\mu _{k}=(1-\theta )^{k}\mu _0\) and the stopping criterion of our IPA we conclude that the algorithm will surely stop if the inequality
holds. Taking logarithms in (23) we get
Using the inequality \(\log (1-\theta )\le -\theta \) and the fact that \(\theta =\frac{1}{(9\kappa + 8)\sqrt{n}}\) we get that the algorithm stops if
Now, using a simple computation, we have
which proves the theorem. \(\square \)
Corollary 2
Let \(\theta = \frac{1}{(9\kappa + 8)\sqrt{n}}\) and \(\tau = \frac{1}{4\kappa + 2}\). If \(M\in \mathcal {P}_*(\kappa )\), \((\mathbf {x}^0, \mathbf {s}^0)\in \mathcal {F}^+\), \(\mu ^0>0\), \(\;\delta (\mathbf {x}^0,\mathbf {s}^0;\mu ^0)<\tau \) and \(\frac{\mathbf {x}^0\mathbf {s}^0}{\mu ^0}>\frac{1}{4} \mathbf {e}\) then the Algorithm 1 provides an \(\varepsilon \)-solution of the LCP after at most
iterations.
Proof
Using \(n \ge 2\) and \(\mu ^0 \ge \varepsilon \) we obtain \(\frac{2n\mu ^0}{\varepsilon } \le \frac{n^2(\mu ^0)^2}{\varepsilon }\). Therefore, we have
which implies the corollary. \(\square \)
Using the \(\varepsilon \)-solution provided by Corollary 2 we can apply the rounding procedure presented in Illés et al. [30] to obtain a maximally complementary solution in strongly polynomial time.
4 Numerical results
We implemented our algorithm in the C++ programming language using the object-oriented code developed by Darvay and Takó [20]. The main difference between our implementation and the analysed theoretical version of the algorithm is that in the implemented version of the PD IPA the barrier update parameter \(\theta \) is a given constant between 0 and 1, while in the theoretical version \(\theta = \frac{1}{(9 \, \kappa + 8) \, \sqrt{n}}\). The reason for this modification is that the handicap is usually not known exactly or might be very (exponentially) large. In the latter case the barrier update parameter \(\theta \) is much smaller than necessary for a reliable implementation.
However, the price of this deviation from the theoretical version of the algorithm is that there is no guarantee that the full step can be performed. In each iteration we calculated the maximum step length \(\alpha _{\max }\) using the classical ratio test, which guarantees that the new iterates \(\mathbf {x}^+ = \mathbf {x}+ \alpha \, \varDelta \mathbf {x}\), \(\mathbf {s}^+=\mathbf {s}+ \alpha \, \varDelta \mathbf {s}\) remain nonnegative. To ensure the strict feasibility of the new iterates we used a factor \(\rho =0.95\) to shorten the step length; thus the step length used is \(\alpha ^+ = \rho \, \alpha _{\max }\).
Another problem caused by choosing an arbitrary \(\theta \in (0, 1)\) is the selection of the new value of \(\mu \). Since \(\delta (\mathbf {x}^+, \mathbf {s}^+; \mu ) < \tau \) may not hold for any \(0< \tau < 1\), the selection of the target value of \(\mu \) needs to be modified compared to the theoretical version of the PD IPA. Namely, the new target value of \(\mu \) is computed as \(\mu = (1 - \theta ) \, \frac{(\mathbf {x}^+)^T \mathbf {s}^+}{n}\), where \(\theta \in \{ 0.999, 0.1 \}\) in the computations presented in this paper.
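The safeguard described above can be sketched as follows (Python; this is an illustration of the classical ratio test, not the authors' C++ code [20]; the cap at the full Newton step \(\alpha = 1\) is our assumption, consistent with the observations in Sect. 4.2).

```python
import numpy as np

def damped_step_length(x, s, dx, ds, rho=0.95):
    """Classical ratio test: the largest alpha keeping x + alpha*dx >= 0 and
    s + alpha*ds >= 0, shortened by the factor rho to keep strict feasibility,
    and capped at the full Newton step alpha = 1."""
    alpha_max = np.inf
    for z, dz in ((x, dx), (s, ds)):
        neg = dz < 0
        if np.any(neg):
            alpha_max = min(alpha_max, np.min(-z[neg] / dz[neg]))
    return min(1.0, rho * alpha_max)

# after the damped step the implemented variant re-targets the barrier parameter:
#   mu = (1 - theta) * (x_new @ s_new) / n,   with theta in {0.999, 0.1}
```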
We compared three primal-dual algorithms based on the AET technique. The only difference between these methods was the selection of the function \(\varphi \). We considered the identity map, the square root function and our function, given in (5).
We tested the above mentioned algorithms on two different types of test problems. The first set of problems uses sufficient matrices with positive handicap provided by Illés and Morapitiye [29]. For these problems we set the initial interior point as \(\mathbf {x}^0 = \mathbf {s}^0 = \mathbf {e}\), because the right-hand side vector was computed as \(\mathbf {q}:= - M \, \mathbf {e}+ \mathbf {e}\).
The second test set is based on the P-matrix proposed by Zsolt Csizmadia and presented in [23]: \(A_n=\left( a_{ij}\right) _{1\le i, j \le n}\), where
The handicap satisfies \(\kappa (A_n) \ge 2^{2n-8}-\frac{1}{4}\), as de Klerk and E.-Nagy [23] proved. The right-hand side vector of the sufficient LCP with matrix \(A_n\) was computed as \(\mathbf {q}:= - A_n \, \mathbf {e}+ \mathbf {e}\); thus \(\mathbf {x}^0 = \mathbf {s}^0 = \mathbf {e}\) is an ideal initial interior point for starting the analysed IPAs.
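Since the display defining \(a_{ij}\) is not reproduced above, the sketch below builds \(A_n\) in the form we recall from [23] (unit diagonal, \(-1\) below the diagonal, 0 above); this reconstruction should be checked against the definition in [23]. The right-hand side is generated exactly as described above.

```python
import numpy as np

def csizmadia_matrix(n):
    """Lower-triangular matrix with 1 on the diagonal and -1 below it
    (our reconstruction of the omitted definition of A_n from [23])."""
    return np.eye(n) - np.tril(np.ones((n, n)), k=-1)

n = 10
A = csizmadia_matrix(n)
e = np.ones(n)
q = -A @ e + e   # right-hand side of the test LCP; x0 = s0 = e is then strictly feasible
```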
Based on the definition (2) of \(P_*(\kappa )\)-matrices, to every vector \({\mathbf {y}}\in \mathbb {R}^n\) with \({\mathbf {y}}^T M \, {\mathbf {y}}< 0\), where M is a \(P_*(\kappa )\)-matrix, there corresponds a parameter \(\kappa ({\mathbf {y}}) > 0\) such that (2) holds with equality for the given vector \({\mathbf {y}}\). Interestingly enough, \(\kappa (\varDelta \mathbf {x})\) may take large values when the handicap of the given \(P_*(\kappa )\)-matrix M is large, as for the Csizmadia matrix \(A_n\).
Before presenting and analysing our computational results, let us briefly discuss, from the complexity point of view, the three implemented algorithms that differ only in the AET function.
Observe that, independently of the AET function used for transforming the central path equation (for \(\varphi (t) = t\) see [35], for \(\varphi (t)=\sqrt{t}\) see [5], and for \(\varphi (t)=t - \sqrt{t}\) see Corollary 2 of this paper), the best complexity results for primal-dual IPAs for sufficient LCPs are the same, namely \(O((1+\kappa )\sqrt{n}\log \frac{n}{\varepsilon })\). Let us now discuss the computational performance of the PD IPAs, differing only in the function \(\varphi \) used, on the two sufficient LCP test sets described earlier.
4.1 Computational results on LCPs with positive handicap
The sufficient matrices generated by Illés and Morapitiye [29] have positive, but not very large, handicap. Furthermore, these matrices were generated using the same rules and similar starting matrices; therefore they do not differ significantly from each other. This observation was also confirmed by the behaviour of the different versions of the PD IPAs on the first test set of problems.
We tested the PD IPAs on sufficient LCPs of sizes \(n=10, 20, 50, 100, 200, 500\). For each of these sizes the test set contains 10 problems. Since for a given size the iteration numbers and the CPU times were very similar, instead of giving results for the separate problems we present a table containing the average number of iterations needed for the different sizes (Table 1).
4.2 Computational results on LCPs with Csizmadia-matrix
The handicap of the Csizmadia matrix satisfies \(\kappa (A_n) \ge 2^{2n-8}-\frac{1}{4}\) (see [23]). The worst-case complexity result with the default value of \(\theta \) (see Algorithm 1) ensures that the IPA has polynomial iteration complexity depending on \(\kappa \). Since we use much larger values of the update parameter \(\theta \) in our computations, no theoretical guarantee of a polynomial number of iterations of the IPA follows from Theorem 2.
On the other hand, the implemented variants of the theoretical IPAs for the different AETs, with several safeguard modifications, performed very well, as we expected (see Tables 2 and 3). From the obtained iteration numbers, no effect of the exponentially large handicap on the performance of the implemented IPAs can be observed. However, if we analyse the iterations one by one, we can observe an interesting behaviour of the implemented algorithm. Namely, there is a strong relation between \(\kappa (\varDelta \mathbf {x})\) and the corresponding step length \(\alpha \).
For instance, for \(A_{100}\) and \(\theta = 0.999\), \(\kappa (\varDelta \mathbf {x}_k)\) decreased monotonically from the very large value \(1.88905 \cdot 10^{34}\) to 0, while in the meantime the step length \(\alpha \) increased monotonically from the very small value \(7.49944 \cdot 10^{-18}\) to 1, that is, to a full Newton step (see [21]). Naturally, the gap decreases more when the step length is larger.
We tested the effect of the barrier update parameter \(\theta \) by using different values of it. The results with \(\theta = 0.1\) (see Table 3) show that the selection of the AET function can have a minor effect on the iteration number of the PD IPA, depending on the value of \(\theta \).
However, in the case of \(\theta = 0.1\) the influence of the handicap of the given matrix on the increase of the iteration numbers can also be observed.
5 Conclusions
In this article we have developed a new short-step path-following algorithm for solving LCPs. Our approach is a generalization of the results presented in [15]. We dealt with \(\mathcal {P}_*(\kappa )\)-LCPs and derived an algorithm with the currently best known complexity. We also provided some numerical results which illustrate the practical efficiency of the method.
Darvay and his coauthors, together with their followers, demonstrated that this approach can be used widely. Moreover, some other interesting questions can be raised which need further investigation. Can any Darvay-type AET be used to derive search directions for IPAs for different optimization problem classes? Is it possible to characterize those one-to-one transformations that are applicable for obtaining new search directions for IPAs of different problem classes? Can different (Darvay-type) search directions be ranked, in the sense that one generates significantly fewer iterations than another in a practical implementation? We think that there are possibilities to modify the complexity analysis of Darvay's IPAs for LO [13, 18] with careful applications of ideas presented in the book of Roos et al. [40]. As further research, we would like to extend the AET approach to the case of general LCPs analysed by Illés, Nagy and Terlaky [32]. Another question of interest is the following: is there a short-step IPA for \(\mathcal {P}_*(\kappa )\)-LCPs (or for a well-defined subclass of them) where the centrality parameter \(\tau \) does not depend on \(\kappa \)? We suspect that there is not. Understanding this phenomenon requires further research as well.
Notes
1. Since Illés et al. [31] did not publish their approach, the details of the proof can be found in the PhD thesis of M. Nagy [38], a former PhD student of T. Illés.
2. Interestingly enough, M. Nagy in her PhD thesis [38] gave an example of an LCP where the central path exists but is not unique (for details see pages 108-110). In this case the matrix was, of course, not a sufficient matrix.
References
Achache, M.: A weighted-path-following method for the linear complementarity problem. Stud. Univ. Babes-Bolyai. Ser. Inf. 49(1), 61–73 (2004)
Achache, M.: A new primal–dual path-following method for convex quadratic programming. Comput. Appl. Math. 25(1), 97–110 (2006)
Achache, M.: Complexity analysis and numerical implementation of a short-step primal-dual algorithm for linear complementarity problems. Appl. Math. Comput. 216(7), 1889–1895 (2010)
Anitescu, M., Lešaja, G., Potra, F.A.: Equivalence between different formulations of the linear complementarity problem. Optim. Methods Softw. 7(3–4), 265–290 (1997)
Asadi, S., Mansouri, H.: Polynomial interior-point algorithm for \({P}_*(\kappa )\) horizontal linear complementarity problems. Numer. Algorithms 63(2), 385–398 (2013)
Asadi, S., Mansouri, H.: A new full-Newton step \(O(n)\) infeasible interior-point algorithm for \(\cal{P}_*(\kappa )\)-horizontal linear complementarity problems. Comput. Sci. J. Moldova 22(1), 37–61 (2014)
Bai, Y.Q., Wang, F.Y., Luo, X.W.: A polynomial-time interior-point algorithm for convex quadratic semidefinite optimization. RAIRO-Oper. Res. 44(3), 251–265 (2010)
Cottle, R.W., Pang, J.-S., Stone, R.E.: The Linear Complementarity Problem. Computer Science and Scientific Computing. Academic Press, Boston (1992)
Cottle, R.W., Pang, J.-S., Venkateswaran, V.: Sufficient matrices and the linear complementarity problem. Linear Algebra Appl. 114, 231–249 (1989)
Csizmadia, Zs.: New pivot based methods in linear optimization, and an application in petroleum industry. Ph.D thesis, Eötvös Loránd University of Sciences, Institute of Mathematics, Budapest, Hungary (2007)
Csizmadia, Z., Illés, T.: New criss-cross type algorithms for linear complementarity problems with sufficient matrices. Optim. Methods Softw. 21(2), 247–266 (2006)
Csizmadia, Z., Illés, T., Nagy, A.: The s-monotone index selection rule for criss-cross algorithms of linear complementarity problems. Acta Univ. Sapientiae-Inf. 5(1), 103–139 (2013)
Darvay, Z.: A new algorithm for solving self-dual linear optimization problems. Studia Univ. Babeş-Bolyai Ser. Inf. 47(1), 15–26 (2002)
Darvay, Z.: A weighted-path-following method for linear optimization. Studia Univ. Babes-Bolyai Ser. Inf. 47(2), 3–12 (2002)
Darvay, Z.: New interior point algorithms in linear programming. Adv. Model. Optim. 5(1), 51–92 (2003)
Darvay, Zs., Felméri, Á., Forró, N., Papp, I.-M., Takács, P.-R.: A new interior-point algorithm for solving linear optimization problems. In: E. Bitay (ed.) FMTÜ XVII. Proceedings of the 27th international scientific conference of young engineers, pp. 87–90. Transylvanian Museum Society, Cluj-Napoca (2012) In Hungarian
Darvay, Zs., Papp, I.-M.: A new primal-dual interior-point algorithm for linear complementarity problems. In: E. Bitay (ed.) FMTÜ XIX. Proceedings of the 19th international scientific conference of young engineers, pp. 125–128. Transylvanian Museum Society, Cluj-Napoca (2014) In Hungarian
Darvay, Z., Papp, I.M., Takács, P.R.: Complexity analysis of a full-Newton step interior-point method for linear optimization. Period. Math. Hung. 73(1), 27–42 (2016)
Darvay, Zs., Takács, P.-R.: New short-step interior-point algorithm for horizontal linear complementarity problems. In: E. Bitay (ed.) FMTÜ XXI. Proceedings of the 21st international scientific conference of young engineers, Papers on Technical Science 5, 125–128. Transylvanian Museum Society, Cluj-Napoca (2016) In Hungarian
Darvay, Zs., Takó, I.: Computational comparison of primal-dual algorithms based on a new software, unpublished manuscript (2012)
Darvay, Zs.: Log file of the implemented primal-dual IPA for \(A_{100}\) with \(\theta = 0.999\), April, 2020. (http://www.cs.ubbcluj.ro/~darvay/ORR2018-02/cszs100log.pdf)
den Hertog, D., Roos, C., Terlaky, T.: The linear complementarity problem, sufficient matrices and the criss-cross method. Linear Algebra Appl. 187, 1–14 (1993)
de Klerk, E., E.-Nagy, M.: On the complexity of computing the handicap of a sufficient matrix. Math. Program. 129, 383–402 (2011)
Fiedler, M., Pták, V.: On matrices with non-positive off-diagonal elements and positive principal minors. Czech. Math. J. 12, 382–400 (1962)
Fiedler, M., Pták, V.: Some generalizations of positive definiteness and monotonicity. Numer. Math. 9, 163–172 (1966)
Fukuda, K., Namiki, M., Tamura, A.: Ep theorems and linear complementarity problems. Discrete Appl. Math. 84(1–3), 107–119 (1998)
Fukuda, K., Terlaky, T.: Criss-cross methods: a fresh view on pivot algorithms. Math. Progr. 79, 369–395 (1997)
Guu, S.-M., Cottle, R.W.: On a subclass of \({P}_0\). Linear Algebra Appl. 223(224), 325–335 (1995)
Illés, T., Morapitiye, S.: Generating sufficient matrices. In: Friedler, F. (ed.) Short Papers of the 8th VOCAL Optimization Conference: Advanced Algorithms, pp. 56–61. Published by Pázmány Péter Catholic University, Budapest (2018)
Illés, T., Peng, J., Roos, C., Terlaky, T.: Strongly polynomial rounding procedure yielding a maximally complementary solution for \(P_*(\kappa )\) linear complementarity problems. SIAM J. Optim. 11(2), 320–340 (2000)
Illés, T., Roos, C., Terlaky, T.: Simple approach for the interior point theory of linear complementarity problems with \(P_*\) matrices. Unpublished manuscript (1997)
Illés, T., Nagy, M., Terlaky, T.: A polynomial path-following interior point algorithm for general linear complementarity problems. J. Global. Optim. 47(3), 329–342 (2010)
Kheirfam, B.: A new infeasible interior-point method based on Darvay’s technique for symmetric optimization. Ann. Oper. Res. 211(1), 209–224 (2013)
Kheirfam, B.: A predictor-corrector interior-point algorithm for \({P}_*(\kappa )\)-horizontal linear complementarity problem. Numer. Algorithms 66(2), 349–361 (2014)
Kojima, M., Megiddo, N., Noma, T., Yoshise, A.: A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems. Lecture Notes in Computer Science 538, Springer, Berlin (1991)
Mehrotra, S.: On the implementation of a primal-dual interior point method. SIAM J. Optim. 2, 575–601 (1992)
Megiddo, N.: Pathways to the optimal set in linear programming. In: Megiddo, N. (ed.) Progress in Mathematical Programming: Interior Point and Related Methods, pp. 131–158. Springer, New York (1989)
Nagy, M.: Interior point algorithms for general linear complementarity problems. Ph.D thesis, Eötvös Loránd University of Sciences, Institute of Mathematics, Budapest, Hungary (2009)
Pirhaji, M., Zangiabadi, M., Mansouri, H.: A path following interior-point method for linear complementarity problems over circular cones. Jpn. J. Ind. Appl. Math. 35, 1103–1121 (2018)
Roos, C., Terlaky, T., Vial, J-Ph: Theory and Algorithms for Linear Optimization. Springer, New York (2005)
Sonnevend, Gy.: An ”analytic center” for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming. In: Prékopa A., Szelezsán J., Strazicky B. (eds.) System Modelling and Optimization: Proceedings of the 12th IFIP-Conference, Budapest, Hungary, Lecture Notes in Control and Information Sciences 84, 866–876. Springer Verlag, Berlin (1986)
Väliaho, H.: \({P}_*\)-matrices are just sufficient. Linear Algebra Appl. 239, 103–108 (1996)
Väliaho, H.: Determining the handicap of a sufficient matrix. Linear Algebra Appl. 253, 279–298 (1997)
Wang, G.Q.: A new polynomial interior-point algorithm for the monotone linear complementarity problem over symmetric cones with full NT-steps. Asia-Pac. J. Oper. Res. 29(2), 1250015 (2012)
Wang, G.Q., Bai, Y.Q.: A new primal–dual path-following interior-point algorithm for semidefinite optimization. J. Math. Anal. Appl. 353(1), 339–349 (2009)
Wang, G.Q., Bai, Y.Q.: A primal–dual interior-point algorithm for second-order cone optimization with full Nesterov–Todd step. Appl. Math. Comput. 215(3), 1047–1061 (2009)
Wang, G.Q., Bai, Y.Q.: A new full Nesterov–Todd step primal-dual path-following interior-point algorithm for symmetric optimization. J. Optim. Theory Appl. 154(3), 966–985 (2012)
Acknowledgements
Open access funding provided by Budapest University of Technology and Economics. The research reported in this paper has been partially supported by the Hungarian Research Fund, OTKA (Grant No. NKFIH 125700) and by the National Research, Development and Innovation Office (BME NC TKP2020 grant). The research of T. Illés has been partially supported by the BME Artificial Intelligence TKP2020 IE grant of NKFIH Hungary (BME IE-MI-SC TKP2020). Zs. Darvay acknowledges the support of Babeş-Bolyai University (Grant No. AGC32921/30.07.2019). The authors would like to thank the editor and the anonymous referees for their careful reading and valuable suggestions that improved the section on numerical results considerably.
A Appendix
1.1 A.1 Notations
We use the following notations throughout the paper:
-
\(\mathbb {R}_+^n\) the set of n dimensional positive vectors
-
\(\mathbb {R}_{\oplus }^n\) the set of n dimensional nonnegative vectors
-
\(\mathcal {I}\) the index set \(\{1,2,\ldots , n\}\)
-
\(\mathbf {x}\), \(x_i\) vectors are bold latin letters, scalars are normal latin letters
-
\(\mathbf {x}\,\mathbf {s}\) the Hadamard product of vectors
-
\(\frac{\mathbf {x}}{\mathbf {s}}\) component-wise division of \(\mathbf {x}\) and \(\mathbf {s}\), where \(s_i \ne 0, \forall \; i = 1, 2, \ldots , n\)
-
\(\varphi (\mathbf {x})\) the vector obtained by applying the function \(\varphi \) for each element of \(\mathbf {x}\)
-
\(\mathrm{diag}(\mathbf {x})\) the diagonal matrix composed by the elements of \(\mathbf {x}\)
-
\(\mathbf {e}\) the all-one vector of appropriate size
Related to the LCP we will use the following notations:
-
\(\mathcal {F}\) \(\{(\mathbf {x},\mathbf {s})\in \mathbb {R}_{\oplus }^{n}\times \mathbb {R}_{\oplus }^{n}\, |\, -M\mathbf {x}+ \mathbf {s}= \mathbf {q}\}\,\) for the set of feasible solutions,
-
\(\mathcal {F}^+\) \(\mathcal {F}\cap \left( \mathbb {R}_+^{n}\times \mathbb {R}_+^{n}\right) ,\) for the set of strictly feasible solutions,
-
\(\mathcal {F}^*\) \(\{(\mathbf {x},\mathbf {s})\in \mathcal {F} \,|\, \mathbf {x}\mathbf {s}= \mathbf {0}\},\) for the set of feasible complementary solutions.
1.2 A.2 Important results in the area of \(\mathcal {P}_*(\kappa )\)-LCPs
Throughout this paper we assume that M is a \(\mathcal {P}_*(\kappa )\)-matrix and (1) satisfies the interior-point condition (IPC), i.e., \(\mathcal {F}^+\ne \emptyset \).
For IPAs the complementarity condition \(\mathbf {x}\mathbf {s}= \mathbf {0}\) is replaced with the centrality condition \(\mathbf {x}\mathbf {s}= \mu \mathbf {e}\),
$$\begin{aligned} -M\mathbf {x}+ \mathbf {s}= \mathbf {q}, \quad \mathbf {x}\mathbf {s}= \mu \mathbf {e}, \quad \mathbf {x},\, \mathbf {s}> \mathbf {0}, \end{aligned}$$
(26)
defining the central path problem (CPP) for a given \(\mu >0\). Now we are ready to introduce the central path
that contains all strictly feasible solutions of the \(\mathcal {P}_*(\kappa )\)-LCP that solve (26) for some \(\mu >0\). The central path for the LO problem was introduced by Sonnevend [41] and Megiddo [37] independently. Illés et al. [31] gave an elementary proof of the following theorem:
Theorem 3
Let a \(\mathcal {P}_*(\kappa )\)-LCP be given. Then the following statements are equivalent:
-
i)
\(\mathcal {F}^+\ne \emptyset ,\)
-
ii)
\(\forall \;\mathbf {w}\in \mathbb {R}^n_+,\; \exists ! (\mathbf {x},\mathbf {s})\in \mathcal {F}^+:\;\mathbf {x}\mathbf {s}= \mathbf {w},\)
-
iii)
\(\exists ! (\mathbf {x},\mathbf {s})\in \mathcal {F}^+:\;\mathbf {x}\mathbf {s}= \mu \mathbf {e}.\)
Statement iii) of the previous theorem says that the central path of a \(\mathcal {P}_*(\kappa )\)-LCP is unique (see Footnote 1 in the Notes).
Illés et al. [31] (see M. Nagy [38], page 47) proved another important theorem under the IPC, i.e. \(\mathcal {F}^+\ne \emptyset \):
Theorem 4
Let a \(\mathcal {P}_*(\kappa )\)-LCP be given with the property \(\mathcal {F}^+\ne \emptyset \). Then there exists \((\mathbf {x}^*,\mathbf {s}^*)\in \mathbb {R}_{\oplus }^n\times \mathbb {R}_{\oplus }^n\) such that
-
i)
\(\displaystyle (\mathbf {x}^*,\mathbf {s}^*) = \lim _{\mu \rightarrow 0}(\mathbf {x}(\mu ),\mathbf {s}(\mu )),\)
-
ii)
\(\displaystyle (\mathbf {x}^*,\mathbf {s}^*)\in \mathcal {F}^*,\)
-
iii)
\(\displaystyle (\mathbf {x}^*,\mathbf {s}^*)\) is a maximally complementary solution.
Illés et al. [31] (see M. Nagy [38]) proved that, under the condition \(\mathcal {F}^+\ne \emptyset \), the set \(\mathcal {F}^*\) is compact as well. Finally, we state two important lemmas, since we use them later. Recall that \(P_0\) denotes the class of matrices with all principal minors nonnegative [24, 25].
Lemma 2
The matrix
$$\begin{aligned} \begin{pmatrix} -M & I \\ S & X \end{pmatrix} \end{aligned}$$
is nonsingular for any positive diagonal matrices \(X,S \in \mathbb {R}^{n \times n}\), where I is the n-dimensional identity matrix, if and only if M is a \(P_0\)-matrix.
Lemma 2 was proved by Kojima et al. [35]. The next lemma states some well-known results regarding the Newton system (see for example Corollary 7 and Lemma 9 in [32]).
Lemma 3
Let \(M\in P_*(\kappa )\), \(\mathbf {x},\mathbf {s}\in \mathbb {R}^n_+\) and \(\mathbf {a}\in \mathbb {R}^n\), and consider the following system:
$$\begin{aligned} -M\varDelta \mathbf {x}+ \varDelta \mathbf {s}&= \mathbf {0}, \\ \mathbf {s}\varDelta \mathbf {x}+ \mathbf {x}\varDelta \mathbf {s}&= \mathbf {a}. \end{aligned}$$
(27)
Then,
-
(i)
there exists a unique \((\varDelta \mathbf {x},\varDelta \mathbf {s})\) solution of system (27),
-
(ii)
for the \((\varDelta \mathbf {x},\varDelta \mathbf {s})\) solution of system (27) we have
$$\begin{aligned}&||\varDelta \mathbf {x}\varDelta \mathbf {s}||_{\infty }\le \left( \frac{1}{4} + \kappa \right) \left| \left| \frac{\mathbf {a}}{\sqrt{\mathbf {x}\mathbf {s}}}\right| \right| ^2, \quad ||\varDelta \mathbf {x}\varDelta \mathbf {s}||_1\le \left( \frac{1}{2} + \kappa \right) \left| \left| \frac{\mathbf {a}}{\sqrt{\mathbf {x}\mathbf {s}}}\right| \right| ^2\\&||\varDelta \mathbf {x}\varDelta \mathbf {s}||_2\le \sqrt{\left( \frac{1}{4} + \kappa \right) \left( \frac{1}{2} + \kappa \right) }\left| \left| \frac{\mathbf {a}}{\sqrt{\mathbf {x}\mathbf {s}}}\right| \right| ^2. \end{aligned}$$\(\square \)