Article

Advanced Algorithms and Common Solutions to Variational Inequalities

by Hasanen A. Hammad 1, Habib ur Rehman 2 and Manuel De la Sen 3,*
1 Department of Mathematics, Sohag University, Sohag 82524, Egypt
2 Department of Mathematics, King Mongkut’s University of Technology Thonburi (KMUTT), Bangkok 10140, Thailand
3 Institute of Research and Development of Processes IIDP, University of the Basque Country, 48940 Leioa, Spain
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(7), 1198; https://doi.org/10.3390/sym12071198
Submission received: 30 June 2020 / Revised: 16 July 2020 / Accepted: 17 July 2020 / Published: 20 July 2020
(This article belongs to the Special Issue Iterative Numerical Functional Analysis with Applications)

Abstract:
The paper presents advanced algorithms obtained by adding an inertial term and a shrinking projection step to the ordinary parallel and cyclic hybrid sub-gradient extra-gradient algorithms (for short, PCHISE algorithms). Via these algorithms, common solutions of variational inequality problems (CSVIP) and strong convergence results are obtained in Hilbert spaces. The problem is to find a common solution to a system of unrelated variational inequalities. Numerical experiments are incorporated to illustrate the acceleration, effectiveness, and performance of our parallel and cyclic algorithms. In this direction, our results unify and generalize some related results in the literature.

1. Introduction

In this manuscript, we discuss the problem of finding common solutions of variational inequalities (VIs) in a Hilbert space (Hs). Let ¥ be a nonempty closed convex subset (ccs) of a real Hilbert space ℸ equipped with the induced norm ‖·‖ and the inner product ⟨·,·⟩.
The variational inequality problem (VIP) was introduced by the authors of [1]: find ℘* ∈ ¥ such that
⟨ℷ(℘*), ℘ − ℘*⟩ ≥ 0 for all ℘ ∈ ¥,    (1)
where ℷ : ¥ → ℸ is a nonlinear mapping. The set of solutions of VIP (1) is denoted by VI(ℷ, ¥).
VIs arise in many interesting fields, such as transportation, economics, engineering mechanics, and mathematical programming, and are an indispensable tool in these areas (see, for example, [2,3,4,5,6,7,8]). VIs are also widespread in optimization problems (OPs), where various algorithms have been used to solve them; see [7,9].
Under suitable assumptions, there are two main approaches to VIPs: projection methods and regularization methods. Along these lines, many iterative schemes have been presented and discussed for solving VIPs. Here, we focus on the first type. One of the simplest is the gradient projection method, since each iteration needs only one projection onto the feasible set. However, its convergence requires rather strong assumptions, namely that the operator is strongly monotone or inverse strongly monotone. For Lipschitz continuous and monotone mappings, Korpelevich [10] presented another projection method, the extra-gradient method, for solving saddle point problems and general VIPs. It is built as follows:
y_n = P_¥(x_n − λ ℷ(x_n)),
x_{n+1} = P_¥(x_n − λ ℷ(y_n)),
for a suitable parameter λ > 0, where P_¥ is the metric projection onto ¥. The cost of computing the projections, and hence the simplicity of the method, depends on ¥: if ¥ is simple, the extra-gradient method is computable and very useful; otherwise, each iteration requires solving two distance optimization problems over the closed convex set ¥, which makes the extra-gradient method more complicated.
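To make the two-projection structure concrete, here is a minimal numerical sketch of Korpelevich's method (illustrative Python rather than the paper's MATLAB; the box constraint, the affine operator, and all names are our own choices):

```python
import numpy as np

def proj_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^m (componentwise clipping)."""
    return np.clip(x, lo, hi)

def extragradient(F, proj, x0, lam, iters=500):
    """Korpelevich's extra-gradient method: two projections per iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = proj(x - lam * F(x))   # predictor: project a forward step at x
        x = proj(x - lam * F(y))   # corrector: re-project using F evaluated at y
    return x

# Monotone affine operator F(x) = Ax + b (symmetric part of A is positive definite).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -1.0])
F = lambda z: A @ z + b

L = np.linalg.norm(A, 2)                     # Lipschitz constant of F
x_star = extragradient(F, lambda z: proj_box(z, 0.0, 1.0),
                       np.array([1.0, 1.0]), lam=0.5 / L)
# x_star approximates the VI solution (1/3, 1/3) on the box [0, 1]^2
```

Here the feasible set is a box, so each projection is a cheap clipping; for a general closed convex set each projection would itself be an optimization problem, which is exactly the cost issue discussed above.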
In a Hs, the weak convergence of the sub-gradient extra-gradient method [11] to a solution of the VIP was established for the following algorithm:
y_n = P_¥(x_n − λ ℷ(x_n)),
x_{n+1} = P_{ξ_n}(x_n − λ ℷ(y_n)),
where ξ n is a half-space defined as follows:
ξ_n = {θ ∈ ℸ : ⟨(x_n − λ ℷ(x_n)) − y_n, θ − y_n⟩ ≤ 0}.
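Since ξ_n is a half-space, the second projection never requires an optimization solver; it has the closed form sketched below (illustrative Python; the function name and example data are ours):

```python
import numpy as np

def proj_halfspace(theta, a, beta):
    """Project theta onto the half-space {v : <a, v> <= beta}.

    If theta already satisfies the constraint, it is its own projection;
    otherwise we step back along the normal a by the (positive) gap.
    """
    gap = a @ theta - beta
    if gap <= 0.0:
        return theta
    return theta - (gap / (a @ a)) * a

p = proj_halfspace(np.array([2.0, 3.0]), np.array([1.0, 0.0]), 0.0)
# p = (0, 3): only the violated coordinate is corrected
```

In the sub-gradient extra-gradient method one would take a = (x_n − λℷ(x_n)) − y_n and β = ⟨a, y_n⟩; replacing the second projection onto ¥ by this explicit formula is precisely what makes the method cheaper than the extra-gradient method.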
The authors of [12] accelerated the convergence of this algorithm and obtained strong convergence by building the following hybrid scheme:
y_n = P_¥(x_n − λ ℷ(x_n)),
Λ_n = η_n x_n + (1 − η_n) P_{ξ_n}(x_n − λ ℷ(y_n)),
¥_n = {Λ ∈ ℸ : ‖Λ_n − Λ‖ ≤ ‖x_n − Λ‖},
£_n = {Λ ∈ ℸ : ⟨x_n − Λ, x_0 − x_n⟩ ≥ 0},
x_{n+1} = P_{¥_n ∩ £_n}(x_0).
Our paper is concerned with finding common solutions of variational inequalities (CSVIP), i.e., finding a point ℘* ∈ ¥ = ∩_{i=1}^N ¥_i such that
⟨ℷ_i(℘*), ℘ − ℘*⟩ ≥ 0 for all ℘ ∈ ¥_i, i = 1, …, N,    (2)
where ℷ_i : ℸ → ℸ are nonlinear mappings and {¥_i} is a finite family of nonempty ccs of ℸ such that ∩_{i=1}^N ¥_i ≠ ∅. Please note that if N = 1, CSVIP (2) reduces to VIP (1). The CSVIP covers several special cases. Convex feasibility problem (CFP): if all ℷ_i = 0, then (2) amounts to finding a point ℘* ∈ ∩_{i=1}^N ¥_i in the nonempty intersection of a finite family of closed convex sets. Common fixed point problem (CFPP): take the sets ¥_i in the CFP to be fixed point sets. These problems have been studied in depth, and their numerous applications have become the focus of many researchers; see [13,14,15,16,17,18,19].
For multi-valued mappings ℷ_i : ℸ → 2^ℸ, i = 1, …, N, an algorithm for solving the CSVIP was given in [20]. For simplicity, we state it below for single-valued ℷ_i: choose x_1 ∈ ℸ and compute
y_n^i = P_{¥_i}(x_n − λ_n^i ℷ_i(x_n)),
Λ_n^i = P_{¥_i}(x_n − λ_n^i ℷ_i(y_n^i)),
¥_n^i = {Λ ∈ ℸ : ⟨x_n − Λ_n^i, Λ − x_n − γ_n^i (Λ_n^i − x_n)⟩ ≤ 0},
¥_n = ∩_{i=1}^N ¥_n^i,
£_n = {Λ ∈ ℸ : ⟨x_1 − x_n, Λ − x_n⟩ ≤ 0},
x_{n+1} = P_{¥_n ∩ £_n}(x_1).    (3)
The approximation x_{n+1} in algorithm (3) is found by constructing the N + 1 subsets ¥_n^1, ¥_n^2, …, ¥_n^N and £_n and solving the following minimization problem:
min_Λ (1/2)‖Λ − x_1‖², such that Λ ∈ ¥_n^1 ∩ ⋯ ∩ ¥_n^N ∩ £_n.    (4)
When N is large, this task can be very costly: the number of subcases in the explicit solution formula of problem (4) is two raised to the power of the number of half-spaces.
In Banach spaces, the authors of [21,22] derived two strongly convergent parallel hybrid iterative methods for finding a common element of the sets of fixed points of a family of asymptotically quasi-ϕ-nonexpansive mappings. Their algorithm can be formulated in Hilbert spaces as follows:
x_0 ∈ ¥,
y_n^i = η_n x_n + (1 − η_n) S_i x_n, i = 1, …, N,
i_n = arg max{‖y_n^i − x_n‖ : i = 1, …, N}, ȳ_n = y_n^{i_n},
¥_{n+1} = {v ∈ ¥_n : ‖ȳ_n − v‖ ≤ ‖x_n − v‖},
x_{n+1} = P_{¥_{n+1}}(x_0),
where η_n ∈ (0, 1) with lim sup_{n→∞} η_n < 1. In this algorithm, the approximation x_{n+1} is defined as the projection of x_0 onto ¥_{n+1}; finding the explicit form of the sets ¥_n and performing numerical experiments can be complicated. In the same spirit, Hieu [23] introduced two parallel and cyclic hybrid sub-gradient extra-gradient (PCHSE) algorithms for CSVIPs in Hilbert spaces and analyzed their convergence with numerical results.
Our main goal in this paper is to present iterative procedures, which we call PCHISE algorithms, for solving CSVIPs and to prove their strong convergence. Our algorithms generate a sequence that converges strongly to the nearest-point projection of the starting point onto the solution set of the CSVIP. To accelerate this convergence, we use the inertial technique and the shrinking projection method. Some numerical experiments supporting our results are also given.
The outline of this work is as follows: in the next section, we give a definition and the lemmas used in the strong convergence analysis. Strong convergence results for the proposed procedures are obtained in Section 3, and finally, in Section 4, two non-trivial computational examples are incorporated to discuss the performance of our algorithms and to support the theoretical results.

2. Definition and Necessary Lemmas

In this section, we recall some definitions and results which will be used later.
Definition 1.
[24] For all ℘, v ∈ ℸ, a nonlinear operator ℷ is called
(i)
monotone if
⟨ℷ(℘) − ℷ(v), ℘ − v⟩ ≥ 0;
(ii)
pseudomonotone if ⟨ℷ(℘), v − ℘⟩ ≥ 0 leads to
⟨ℷ(v), v − ℘⟩ ≥ 0;
(iii)
η-inverse strongly monotone (η-ism) if there exists η > 0 such that
⟨ℷ(℘) − ℷ(v), ℘ − v⟩ ≥ η ‖ℷ(℘) − ℷ(v)‖²;
(iv)
maximal monotone if it is monotone and its graph
G(ℷ) = {(℘, θ) : θ = ℷ(℘)}
is not a proper subset of the graph of any other monotone mapping;
(v)
L-Lipschitz continuous if there exists a positive constant L such that
‖ℷ(℘) − ℷ(v)‖ ≤ L ‖℘ − v‖;
(vi)
nonexpansive if
‖ℷ(℘) − ℷ(v)‖ ≤ ‖℘ − v‖.
Here, the set Θ(ℷ) = {℘ ∈ ℸ : ℷ℘ = ℘} is referred to as the set of all fixed points of a mapping ℷ.
It is obvious that a monotone mapping ℷ : ℸ → ℸ is maximal iff, for each (℘, θ) ∈ ℸ × ℸ such that ⟨℘ − u, θ − v⟩ ≥ 0 for all (u, v) ∈ G(ℷ), it follows that θ = ℷ(℘).
Lemma 1.
[25] Let ℸ be a real Hilbert space (rHs). Then for each ℘, v ∈ ℸ and τ ∈ [0, 1],
(i) ‖℘ − v‖² = ‖℘‖² + ‖v‖² − 2⟨℘, v⟩;
(ii) ‖℘ + v‖² ≤ ‖℘‖² + 2⟨v, ℘ + v⟩;
(iii) ‖τ℘ + (1 − τ)v‖² = τ‖℘‖² + (1 − τ)‖v‖² − τ(1 − τ)‖℘ − v‖².
For each ℘ ∈ ℸ, the projection P_¥℘ is defined by P_¥℘ = arg min{‖℘ − v‖ : v ∈ ¥}. Moreover, P_¥℘ exists and is unique because ¥ is a nonempty ccs of ℸ. The projection P_¥ : ℸ → ¥ has the following properties:
Lemma 2.
[24] Assume that P_¥ : ℸ → ¥ is the metric projection. Then
(i) P_¥ is 1-ism, i.e., for each ℘, v ∈ ℸ,
⟨P_¥℘ − P_¥v, ℘ − v⟩ ≥ ‖P_¥℘ − P_¥v‖²;
(ii) for all ℘ ∈ ℸ and v ∈ ¥,
‖℘ − P_¥℘‖² + ‖v − P_¥℘‖² ≤ ‖℘ − v‖²;
(iii) Λ = P_¥℘ if and only if
⟨℘ − Λ, Λ − v⟩ ≥ 0 for all v ∈ ¥.
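The three properties in Lemma 2 are easy to sanity-check numerically. The sketch below (our own illustrative Python, using the closed-form projection onto a Euclidean ball as the set ¥) verifies (i)–(iii) on random points:

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(p, r=1.0):
    """Metric projection onto the closed ball {v : ||v|| <= r}."""
    nrm = np.linalg.norm(p)
    return p if nrm <= r else (r / nrm) * p

for _ in range(200):
    p, v = 3.0 * rng.normal(size=4), 3.0 * rng.normal(size=4)
    Pp, Pv = proj_ball(p), proj_ball(v)
    # (i) P is 1-ism (firmly nonexpansive)
    assert (Pp - Pv) @ (p - v) >= np.linalg.norm(Pp - Pv) ** 2 - 1e-10
    # (ii) with the in-set point Pv playing the role of v in Lemma 2 (ii)
    assert (np.linalg.norm(p - Pp) ** 2 + np.linalg.norm(Pv - Pp) ** 2
            <= np.linalg.norm(p - Pv) ** 2 + 1e-10)
    # (iii) variational characterization of the projection Pp
    assert (p - Pp) @ (Pp - Pv) >= -1e-10
```

These inequalities are exactly the tools used repeatedly in the convergence proofs of Section 3.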
Lemma 3.
[25] Suppose that ℷ is a monotone, hemi-continuous mapping from ¥ into ℸ, where ¥ is a nonempty ccs of a Hs ℸ. Then
VI(ℷ, ¥) = {u ∈ ¥ : ⟨ℷ(v), v − u⟩ ≥ 0, ∀v ∈ ¥}.
Lemma 4.
[26] Suppose that ¥ is a ccs of a Hs ℸ. Given ℘, θ, Λ ∈ ℸ and ι ∈ ℝ, the set
{v ∈ ¥ : ‖v − ℘‖² ≤ ‖v − θ‖² + ⟨Λ, v⟩ + ι}
is closed and convex.
The normal cone N_¥(℘) to ¥ at a point ℘ ∈ ¥ is defined by
N_¥(℘) = {θ ∈ ℸ : ⟨θ, v − ℘⟩ ≤ 0, ∀v ∈ ¥}.
With this notion, the following result is very important.
Lemma 5.
[27] Suppose that ℷ is a monotone, hemi-continuous mapping from ¥ into ℸ, where ¥ is a nonempty ccs of a Hs ℸ, with D(ℷ) = ¥. Let ℑ be the mapping defined by
ℑ(℘) = ℷ℘ + N_¥(℘) if ℘ ∈ ¥, and ℑ(℘) = ∅ if ℘ ∉ ¥.
Then ℑ is maximal monotone and ℑ⁻¹(0) = VI(ℷ, ¥).

3. Main Theorems

This part is devoted to discussing the strong convergence of our proposed algorithms under the following assumptions. Each member of the collection {ℷ_i}_{i=1}^N is L_i-Lipschitz continuous; setting L = max{L_i : i = 1, …, N}, each ℷ_i is also L-Lipschitz continuous for i = 1, …, N. Finally, we assume that Θ = ∩_{i=1}^N VI(ℷ_i, ¥_i) is nonempty.
Theorem 1.
(PHISE algorithm)
Assume that ℸ is a rHs and ¥ = ∩_{i=1}^N ¥_i ≠ ∅, where the ¥_i, i = 1, …, N, are ccs of ℸ. Let {ℷ_i}_{i=1}^N : ℸ → ℸ be a finite collection of monotone and L-Lipschitz continuous mappings whose solution set Θ is nonempty. Let {x_n} be the sequence generated by x_0, x_1 ∈ ℸ, ¥_1^i = ¥_1 = ℸ for all i = 1, …, N, and
Υ_n = x_n + π_n(x_n − x_{n−1}),
y_n^i = P_{¥_i}(Υ_n − λ_n ℷ_i(Υ_n)),
Λ_n^i = P_{ξ_n^i}(Υ_n − λ_n ℷ_i(y_n^i)),
¥_{n+1}^i = {v ∈ ¥_n : ‖Λ_n^i − v‖² ≤ ‖x_n − v‖² + (1 − r) π_n² ‖x_n − x_{n−1}‖²},
¥_{n+1} = ∩_{i=1}^N ¥_{n+1}^i,
x_{n+1} = P_{¥_{n+1}}(x_1), n ≥ 1,    (5)
where y_n^i ∈ ¥_i, Λ_n^i ∈ ξ_n^i = {v ∈ ℸ : ⟨(Υ_n − λ_n ℷ_i(Υ_n)) − y_n^i, v − y_n^i⟩ ≤ 0}, λ_n ∈ (0, 1/(2L)) and π_n ∈ [0, 1). Assume that Σ_{n=1}^∞ π_n ‖x_n − x_{n−1}‖ < ∞. Then the sequence {x_n} converges strongly to ϖ = P_Θ(x_1).
Proof. 
The proof is divided into the following steps.
Step 1. Show that
‖Λ_n^i − u‖² ≤ ‖x_n − u‖² + (1 − r) π_n² ‖x_{n−1} − x_n‖² − r (‖Λ_n^i − y_n^i‖² + ‖y_n^i − x_n‖²),    (6)
where u ∈ Θ and r = 1 − λ_n L > 0.
Let u ∈ Θ; then by Lemma 1 (i), we can write
‖Υ_n − u‖² = ‖(x_n − u) − π_n(x_{n−1} − x_n)‖² = ‖x_n − u‖² + π_n² ‖x_{n−1} − x_n‖² − 2π_n ⟨x_n − u, x_{n−1} − x_n⟩ ≤ ‖x_n − u‖² + π_n² ‖x_{n−1} − x_n‖².    (7)
Also, by simple calculations, we can find
‖y_n^i − Υ_n‖² ≤ ‖y_n^i − x_n‖² + π_n² ‖x_{n−1} − x_n‖².    (8)
Similarly,
‖Λ_n^i − Υ_n‖² ≤ ‖Λ_n^i − x_n‖² + π_n² ‖x_{n−1} − x_n‖².
Since ℷ_i is monotone on ¥_i and y_n^i ∈ ¥_i, we get
⟨ℷ_i(y_n^i) − ℷ_i(u), y_n^i − u⟩ ≥ 0, for all u ∈ Θ.
This, together with u ∈ VI(ℷ_i, ¥_i), yields
⟨ℷ_i(y_n^i), y_n^i − u⟩ ≥ 0.    (9)
So
⟨ℷ_i(y_n^i), Λ_n^i − u⟩ ≥ ⟨ℷ_i(y_n^i), Λ_n^i − y_n^i⟩.
From the definition of the metric projection onto ξ_n^i, one obtains
⟨Λ_n^i − y_n^i, (Υ_n − λ_n ℷ_i(Υ_n)) − y_n^i⟩ ≤ 0.    (10)
Thus, by (10), we get
⟨Λ_n^i − y_n^i, (Υ_n − λ_n ℷ_i(y_n^i)) − y_n^i⟩ = ⟨Λ_n^i − y_n^i, (Υ_n − λ_n ℷ_i(Υ_n)) − y_n^i⟩ + λ_n ⟨Λ_n^i − y_n^i, ℷ_i(Υ_n) − ℷ_i(y_n^i)⟩ ≤ λ_n ⟨Λ_n^i − y_n^i, ℷ_i(Υ_n) − ℷ_i(y_n^i)⟩.    (11)
Put s_n^i = Υ_n − λ_n ℷ_i(y_n^i), so that Λ_n^i = P_{ξ_n^i}(s_n^i). From Lemma 2 (ii) and (9), one can write
‖Λ_n^i − u‖² ≤ ‖s_n^i − u‖² − ‖P_{ξ_n^i}(s_n^i) − s_n^i‖²
= ‖Υ_n − λ_n ℷ_i(y_n^i) − u‖² − ‖Λ_n^i − (Υ_n − λ_n ℷ_i(y_n^i))‖²
= ‖Υ_n − u‖² − ‖Λ_n^i − Υ_n‖² + 2λ_n ⟨u − Λ_n^i, ℷ_i(y_n^i)⟩
≤ ‖Υ_n − u‖² − ‖Λ_n^i − Υ_n‖² + 2λ_n ⟨y_n^i − Λ_n^i, ℷ_i(y_n^i)⟩.    (12)
From (11), we have
‖Λ_n^i − Υ_n‖² − 2λ_n ⟨y_n^i − Λ_n^i, ℷ_i(y_n^i)⟩
= ‖Λ_n^i − y_n^i + y_n^i − Υ_n‖² − 2λ_n ⟨y_n^i − Λ_n^i, ℷ_i(y_n^i)⟩
= ‖Λ_n^i − y_n^i‖² + ‖y_n^i − Υ_n‖² − 2⟨Λ_n^i − y_n^i, (Υ_n − λ_n ℷ_i(y_n^i)) − y_n^i⟩
≥ ‖Λ_n^i − y_n^i‖² + ‖y_n^i − Υ_n‖² − 2λ_n ⟨Λ_n^i − y_n^i, ℷ_i(Υ_n) − ℷ_i(y_n^i)⟩
≥ ‖Λ_n^i − y_n^i‖² + ‖y_n^i − Υ_n‖² − 2λ_n ‖Λ_n^i − y_n^i‖ ‖ℷ_i(Υ_n) − ℷ_i(y_n^i)‖
≥ ‖Λ_n^i − y_n^i‖² + ‖y_n^i − Υ_n‖² − 2λ_n L ‖Λ_n^i − y_n^i‖ ‖Υ_n − y_n^i‖
≥ ‖Λ_n^i − y_n^i‖² + ‖y_n^i − Υ_n‖² − λ_n L (‖Λ_n^i − y_n^i‖² + ‖Υ_n − y_n^i‖²)
= (1 − λ_n L)(‖Λ_n^i − y_n^i‖² + ‖y_n^i − Υ_n‖²).    (13)
Using (13) in (12) and applying (7) and (8), we get
‖Λ_n^i − u‖² ≤ ‖Υ_n − u‖² − (1 − λ_n L)(‖Λ_n^i − y_n^i‖² + ‖y_n^i − Υ_n‖²)
≤ ‖x_n − u‖² + π_n² ‖x_{n−1} − x_n‖² − (1 − λ_n L)(‖Λ_n^i − y_n^i‖² + ‖y_n^i − x_n‖² + π_n² ‖x_{n−1} − x_n‖²)
= ‖x_n − u‖² + (1 − r) π_n² ‖x_{n−1} − x_n‖² − r (‖Λ_n^i − y_n^i‖² + ‖y_n^i − x_n‖²).
Hence, we have the inequality (6).
Step 2. Show that x_{n+1} is well-defined for every n ≥ 1 and Θ ⊂ ¥_{n+1}. Since ℷ_i is Lipschitz continuous, Lemma 3 confirms that VI(ℷ_i, ¥_i) is closed and convex for every i = 1, …, N. Hence, Θ is closed and convex. It follows from the definition of ¥_{n+1} and Lemma 4 that ¥_{n+1} is closed and convex for each n ≥ 1.
Let v ∈ Θ; then Step 1 yields
‖Λ_n^i − v‖² ≤ ‖x_n − v‖² + (1 − r) π_n² ‖x_{n−1} − x_n‖² − r (‖Λ_n^i − y_n^i‖² + ‖y_n^i − x_n‖²) ≤ ‖x_n − v‖² + (1 − r) π_n² ‖x_{n−1} − x_n‖².
Therefore, v ∈ ¥_{n+1}. Thus, Θ ⊂ ¥_{n+1} and x_{n+1} = P_{¥_{n+1}}(x_1) is well-defined.
Step 3. Prove that lim_{n→∞} ‖x_n − x_1‖ exists. Since Θ is a nonempty ccs of ℸ, there is a unique u ∈ Θ such that
u = P_Θ(x_1).
From x_n = P_{¥_n}(x_1), ¥_{n+1} ⊂ ¥_n and x_{n+1} ∈ ¥_n, we get
‖x_n − x_1‖ ≤ ‖x_{n+1} − x_1‖, for all n ≥ 1.
On the other hand, as Θ ⊂ ¥_n, we have
‖x_n − x_1‖ ≤ ‖u − x_1‖, for all n ≥ 1.    (14)
This proves that {‖x_n − x_1‖} is bounded and non-decreasing. Hence, lim_{n→∞} ‖x_n − x_1‖ exists.
Step 4. Prove that for all i = 1, …, N the following relations hold:
lim_{n→∞} ‖x_{n+1} − x_n‖ = lim_{n→∞} ‖Λ_n^i − x_n‖ = lim_{n→∞} ‖y_n^i − x_n‖ = lim_{n→∞} ‖y_n^i − Υ_n‖ = 0.
From x_{n+1} ∈ ¥_{n+1} ⊂ ¥_n and x_n = P_{¥_n}(x_1), we get
‖x_n − x_{n+1}‖² ≤ ‖x_{n+1} − x_1‖² − ‖x_n − x_1‖².
Letting n → ∞ in this inequality and using Step 3, we find
lim_{n→∞} ‖x_{n+1} − x_n‖ = 0.    (15)
Since Σ_{n=1}^∞ π_n ‖x_n − x_{n−1}‖ < ∞, we have
lim_{n→∞} ‖Υ_n − x_n‖ = lim_{n→∞} π_n ‖x_n − x_{n−1}‖ = 0.    (16)
From (15), the definition of ¥_{n+1}^i and x_{n+1} ∈ ¥_{n+1}, we have
‖Λ_n^i − x_{n+1}‖² ≤ ‖x_n − x_{n+1}‖² + (1 − r) π_n² ‖x_n − x_{n−1}‖² → 0    (17)
as n → ∞ for all i = 1, …, N. By the triangle inequality, using (15) and (17), one obtains
‖Λ_n^i − x_n‖ ≤ ‖Λ_n^i − x_{n+1}‖ + ‖x_{n+1} − x_n‖ → 0    (18)
as n → ∞ for all i = 1, …, N. From Step 1 and the triangle inequality, for each v ∈ Θ, one has
r ‖y_n^i − x_n‖² ≤ ‖x_n − v‖² + (1 − r) π_n² ‖x_{n−1} − x_n‖² − r ‖Λ_n^i − y_n^i‖² − ‖Λ_n^i − v‖²
≤ ‖x_n − v‖² − ‖Λ_n^i − v‖² + (1 − r) π_n² ‖x_{n−1} − x_n‖²
≤ (‖x_n − v‖ + ‖Λ_n^i − v‖) ‖x_n − Λ_n^i‖ + (1 − r) π_n² ‖x_{n−1} − x_n‖².    (19)
Applying (16) and (18) in (19), together with the boundedness of {x_n} and {Λ_n^i}, yields
lim_{n→∞} ‖y_n^i − x_n‖ = 0, i = 1, …, N.    (20)
By the triangle inequality, using (16) and (20), one can write
‖y_n^i − Υ_n‖ ≤ ‖y_n^i − x_n‖ + ‖x_n − Υ_n‖ → 0    (21)
as n → ∞ for all i = 1, …, N.
Step 5. Show that the sequences {x_n}, {y_n^i} and {Λ_n^i} generated by (5) converge strongly to ϖ = P_Θ(x_1). Let ϖ be a weak cluster point of {x_n}, and let a subsequence (still denoted {x_n}) converge weakly to ϖ, i.e., x_n ⇀ ϖ; from (20), y_n^i ⇀ ϖ.
Now we show that ϖ ∈ ∩_{i=1}^N VI(ℷ_i, ¥_i). Lemma 5 ensures that the mapping
ℑ_i(℘) = ℷ_i℘ + N_{¥_i}(℘) if ℘ ∈ ¥_i, and ℑ_i(℘) = ∅ if ℘ ∉ ¥_i,
is maximal monotone, where N_{¥_i}(℘) is the normal cone to ¥_i at ℘ ∈ ¥_i. For all (℘, θ) ∈ G(ℑ_i), we have θ − ℷ_i(℘) ∈ N_{¥_i}(℘), where G(ℑ_i) is the graph of ℑ_i. By the definition of N_{¥_i}(℘), we find that
⟨℘ − Λ, θ − ℷ_i(℘)⟩ ≥ 0
for all Λ ∈ ¥_i. Since y_n^i ∈ ¥_i,
⟨℘ − y_n^i, θ − ℷ_i(℘)⟩ ≥ 0.    (22)
Therefore,
⟨℘ − y_n^i, θ⟩ ≥ ⟨℘ − y_n^i, ℷ_i(℘)⟩.
Considering y_n^i = P_{¥_i}(Υ_n − λ_n ℷ_i(Υ_n)) and Lemma 2 (iii), we get
⟨℘ − y_n^i, y_n^i − Υ_n + λ_n ℷ_i(Υ_n)⟩ ≥ 0,
or
⟨℘ − y_n^i, ℷ_i(Υ_n)⟩ ≥ ⟨℘ − y_n^i, (Υ_n − y_n^i) / λ_n⟩.    (23)
Therefore, from (22), (23) and the monotonicity of ℷ_i, we have
⟨℘ − y_n^i, θ⟩ ≥ ⟨℘ − y_n^i, ℷ_i(℘)⟩
= ⟨℘ − y_n^i, ℷ_i(℘) − ℷ_i(y_n^i)⟩ + ⟨℘ − y_n^i, ℷ_i(y_n^i) − ℷ_i(Υ_n)⟩ + ⟨℘ − y_n^i, ℷ_i(Υ_n)⟩
≥ ⟨℘ − y_n^i, ℷ_i(y_n^i) − ℷ_i(Υ_n)⟩ + ⟨℘ − y_n^i, (Υ_n − y_n^i) / λ_n⟩.    (24)
Applying (21) in (24), together with the L-Lipschitz continuity of ℷ_i, gives
lim_{n→∞} ‖ℷ_i(y_n^i) − ℷ_i(Υ_n)‖ = 0.    (25)
Taking the limit in (24) as n → ∞ and using (25) and y_n^i ⇀ ϖ, we obtain ⟨℘ − ϖ, θ⟩ ≥ 0 for all (℘, θ) ∈ G(ℑ_i). Taking into account the maximal monotonicity of ℑ_i, this implies that ϖ ∈ ℑ_i⁻¹(0) = VI(ℷ_i, ¥_i) for all i = 1, …, N.
Finally, we show that x_n → ϖ = ζ = P_Θ(x_1). From (14) and ζ ∈ Θ, we get
‖x_n − x_1‖ ≤ ‖ζ − x_1‖, for all n ≥ 1.    (26)
By (26) and the weak lower semi-continuity of the norm, we can write
‖ϖ − x_1‖ ≤ lim inf_{n→∞} ‖x_n − x_1‖ ≤ lim sup_{n→∞} ‖x_n − x_1‖ ≤ ‖ζ − x_1‖.
By the definition of ζ, we get ϖ = ζ and lim_{n→∞} ‖x_n − x_1‖ = ‖ζ − x_1‖. Thus, from x_n − x_1 ⇀ ζ − x_1 and the Kadec–Klee property of ℸ, one gets x_n − x_1 → ζ − x_1, and so x_n → ϖ. Also, Steps 2 and 4 ensure that the sequences {y_n^i}, {Λ_n^i} converge strongly to P_Θ(x_1). This completes the proof. □
Theorem 2.
(CHISE algorithm)
Suppose that all requirements of Theorem 1 are fulfilled. Let {x_n} be the sequence generated by x_0, x_1 ∈ ℸ, ¥_1 = ℸ, and
Υ_n = x_n + π_n(x_n − x_{n−1}),
y_n = P_{¥_{[n]}}(Υ_n − λ_n ℷ_{[n]}(Υ_n)),
Λ_n = P_{ξ_{[n]}}(Υ_n − λ_n ℷ_{[n]}(y_n)),
¥_{n+1} = {v ∈ ¥_n : ‖Λ_n − v‖² ≤ ‖x_n − v‖² + (1 − r) π_n² ‖x_n − x_{n−1}‖²},
x_{n+1} = P_{¥_{n+1}}(x_1), n ≥ 1,
where y_n ∈ ¥_{[n]}, Λ_n ∈ ξ_{[n]} = {v ∈ ℸ : ⟨(Υ_n − λ_n ℷ_{[n]}(Υ_n)) − y_n, v − y_n⟩ ≤ 0}, λ_n ∈ (0, 1/(2L)), [n] = n(mod N) + 1 with the mod function here taking values in {1, 2, …, N}, and π_n ∈ [0, 1). Assume that Σ_{n=1}^∞ π_n ‖x_n − x_{n−1}‖ < ∞. Then the sequence {x_n} converges strongly to ϖ = P_Θ(x_1).
Proof. 
By arguing similarly as in the proof of Theorem 1, we obtain that Θ and ¥_{n+1} are closed and convex and that Θ ⊂ ¥_{n+1} for all n ≥ 1. As before, the sequences {x_n}, {y_n} and {Λ_n} are bounded and
lim_{n→∞} ‖x_{n+1} − x_n‖ = lim_{n→∞} ‖Λ_n − x_n‖ = lim_{n→∞} ‖y_n − x_n‖ = lim_{n→∞} ‖y_n − Υ_n‖ = 0.    (27)
Now, let ϖ be a weak cluster point of {x_n} with subsequence {x_{n_k}}. Let i ∈ {1, 2, …, N} be fixed. Since the set of indexes is finite, we may pass to a further subsequence, again denoted {x_{n_k}}, such that x_{n_k} ⇀ ϖ and [n_k] = i for all k; then (27) gives y_{n_k} ⇀ ϖ as k → ∞. By the same scenario as in (22)–(25), one gets ϖ ∈ VI(ℷ_i, ¥_i) for all i, and hence ϖ ∈ Θ. The rest of the proof follows immediately from the proof of Theorem 1. □
Remark  1.
(i)
The projection x_{n+1} = P_{¥_{n+1}}(x_1) can be computed as in Theorem 1 because ¥_{n+1} is an intersection of half-spaces or the whole space ℸ.
(ii)
If ℷ is an η-ism mapping, then ℷ is (1/η)-Lipschitz continuous. Thus, for i = 1, …, N, our algorithms can be used to solve the CSVIP for η-ism mappings ℷ_i.
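To illustrate Remark 1 (i): each set of the form {v : ‖Λ − v‖² ≤ ‖x_n − v‖² + c} is in fact a half-space, because the quadratic terms in v cancel; the projection of x_1 onto the intersection of such half-spaces can then be computed by a small QP or, as sketched below, by Dykstra's alternating projection scheme (illustrative Python; all names are ours, and Dykstra is used here only as a stand-in for the exact quadratic-programming solve the paper employs):

```python
import numpy as np

def hybrid_halfspace(Lam, x, c):
    """Rewrite {v : ||Lam - v||^2 <= ||x - v||^2 + c} as {v : <a, v> <= beta}."""
    a = 2.0 * (x - Lam)                  # the ||v||^2 terms cancel on both sides
    beta = x @ x - Lam @ Lam + c
    return a, beta

def proj_intersection(halfspaces, z, sweeps=500):
    """Dykstra's algorithm for projecting z onto an intersection of half-spaces."""
    p = np.asarray(z, dtype=float).copy()
    corr = [np.zeros_like(p) for _ in halfspaces]
    for _ in range(sweeps):
        for k, (a, beta) in enumerate(halfspaces):
            y = p + corr[k]
            gap = a @ y - beta
            p = y if gap <= 0.0 else y - (gap / (a @ a)) * a
            corr[k] = y - p
    return p
```

For example, {v : ‖(1, 0) − v‖² ≤ ‖v‖²} reduces to the half-space {v : −2v₁ ≤ −1}, i.e., v₁ ≥ 1/2, as `hybrid_halfspace` reports.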

4. Numerical Experiments

In this section, we consider two numerical examples to illustrate the efficiency of the proposed algorithms. The MATLAB codes were run in MATLAB version 9.5 (R2018b) on a PC with an Intel(R) Core(TM) i5-6200 CPU @ 2.30 GHz (2.40 GHz) and 8.00 GB RAM. We use quadratic programming to solve the minimization subproblems.
(1)
For Van Hieu's Algorithm 3.1 in [23] (Alg. 1), we use λ = 1/(2L).
(2)
For our proposed algorithms (Alg. 2), we use λ_n = 1/(2L) and π_n = 0.2.
Example 1.
Let the operators ℷ_i be defined on a convex set ¥ ⊂ R^m as follows:
ℷ_i(℘) = (B_i B_i^T + S_i + D_i) ℘ + q_i, i = 1, …, N,
where q_i ∈ R^m, B_i is an m × m matrix, S_i is an m × m skew-symmetric matrix, and D_i is an m × m diagonal matrix whose diagonal entries are non-negative. All the matrices and vectors q_i above are randomly generated (B = rand(m), C = rand(m), S = 0.5C − 0.5C^T, D = diag(rand(m, 1))) with entries in (0, 1). The feasible set ¥_i = ¥ ⊂ R^m is a cc set defined as:
¥ = {℘ ∈ R^m : A℘ ≤ d},
where A is a 20 × m matrix and d is a non-negative vector. It is clear that each ℷ_i is monotone and L-Lipschitz continuous with L = max{‖B_i B_i^T + S_i + D_i‖ : i = 1, …, N}. In this example, we choose q_i = 0; thus, the solution set is Ω = {0}. Throughout Example 1, we use x_0 = x_1 = (1, 1, …, 1) and D_n = ‖x_n‖.
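The random test data of Example 1 can be generated as follows (an illustrative Python transcription of the MATLAB recipe above; the seed and the sizes m, N are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(7)
m, N = 5, 3

mats = []
for _ in range(N):
    B = rng.random((m, m))                 # B = rand(m)
    C = rng.random((m, m))                 # C = rand(m)
    S = 0.5 * C - 0.5 * C.T                # skew-symmetric part
    D = np.diag(rng.random(m))             # nonnegative diagonal
    mats.append(B @ B.T + S + D)           # operator matrix of F_i (here q_i = 0)

ops = [lambda x, M=M: M @ x for M in mats]
L = max(np.linalg.norm(M, 2) for M in mats)   # common Lipschitz constant
```

Monotonicity is immediate: the skew-symmetric part S contributes nothing to vᵀ(BBᵀ + S + D)v, and the symmetric part BBᵀ + D is positive semidefinite.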
Example 2.
Suppose that ℸ = L²([0, 1]) is a Hs with the norm
‖℘‖ = ( ∫₀¹ |℘(t)|² dt )^{1/2}
and the inner product
⟨℘, v⟩ = ∫₀¹ ℘(t) v(t) dt, for all ℘, v ∈ ℸ.
Assume that ¥_i := {℘ ∈ L²([0, 1]) : ‖℘‖ ≤ 1} is the unit ball. Let us define the operator ℷ_i : ¥_i → ℸ by
ℷ_i(℘)(t) = ∫₀¹ ( ℘(t) − H_i(t, s) f_i(℘(s)) ) ds + g_i(t)
for all ℘ ∈ ¥_i, t ∈ [0, 1] and i = 1, 2, where
H₁(t, s) = (2 t s e^{t+s}) / ( e √(e² − 1) ), f₁(℘) = cos ℘, g₁(t) = (2 t e^t) / ( e √(e² − 1) );
H₂(t, s) = (21/7)(t + s), f₂(℘) = exp(−℘²), g₂(t) = (21/7)(t + 0.5).
As shown in [14], each ℷ_i is monotone (hence pseudo-monotone) and L-Lipschitz continuous with L = 2. Moreover, the solution set of the CSVIP for the operators ℷ_i on ¥_i is Ω = {0}. Throughout Example 2, we use x₀ = x₁ = t and D_n = ‖x_n‖.
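As a consistency check on Example 2, the zero function should make the first operator vanish: since f₁(0) = cos 0 = 1 and ∫₀¹ s e^s ds = 1, we get ∫₀¹ H₁(t, s) ds = g₁(t). The sketch below verifies this numerically with a trapezoidal rule (illustrative Python; the grid size and all names are ours):

```python
import numpy as np

c = 1.0 / (np.e * np.sqrt(np.e**2 - 1.0))
H1 = lambda t, s: 2.0 * t * s * np.exp(t + s) * c   # kernel H_1(t, s)
g1 = lambda t: 2.0 * t * np.exp(t) * c              # g_1(t)

t = np.linspace(0.0, 1.0, 401)
h = t[1] - t[0]
w = np.full_like(t, h)
w[0] = w[-1] = h / 2.0                              # trapezoidal quadrature weights

def F1(xvals):
    """F_1(x)(t) = int_0^1 (x(t) - H_1(t,s) cos(x(s))) ds + g_1(t) on the grid."""
    integrand = xvals[:, None] - H1(t[:, None], t[None, :]) * np.cos(xvals)[None, :]
    return integrand @ w + g1(t)

residual = F1(np.zeros_like(t))   # at the zero function F_1 should vanish
```

Up to quadrature error, `residual` is the zero vector, which is consistent with the stated solution set Ω = {0}.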

5. Discussion

We have the following observations concerning the above-mentioned experiments:
(i)
Figure 1, Figure 2 and Table 1 demonstrate the behavior of both algorithms as the size m of the problem varies. We can see that the performance of the algorithms depends on the size of the problem: more time and a significantly larger number of iterations are required for high-dimensional problems. In this case, we can see that the inertial effect strengthens the efficiency of the algorithm and improves the convergence rate.
(ii)
Figure 3 and Table 2 display the behavior of both algorithms as the number of problems N varies. The performance of the algorithms also depends on the number of problems involved: roughly the same number of iterations is required, but the execution time depends strongly on N.
(iii)
Figure 4, Figure 5, Figure 6 and Table 3 show the behavior of both algorithms as the tolerance ϵ varies. In this case, we can see that, as ϵ gets closer to zero, the number of iterations and the elapsed time increase.
(iv)
Based on the numerical results, we find that our methods are effective and successful in finding solutions of VIPs, and our algorithms converge faster than the algorithms of Hieu [23].

6. Conclusions

In this manuscript, we propose two strongly convergent parallel and cyclic hybrid inertial CQ-sub-gradient extra-gradient algorithms for finding common solutions of variational inequalities (CSVIP). This problem consists of finding a common solution to a system of unrelated variational inequalities in a Hs. The algorithms presented in this article combine the inertial technique, the shrinking projection method, and CQ-terms with parallel and cyclic hybrid sub-gradient extra-gradient algorithms to develop practical numerical methods when the number of sub-problems is large. Finally, non-trivial numerical examples are given to verify the efficiency of the proposed parallel and cyclic algorithms.

Author Contributions

H.A.H. contributed in conceptualization, investigation, methodology, validation and writing the theoretical results; H.u.R. contributed in conceptualization, investigation and writing the numerical results; M.D.l.S. contributed in funding acquisition, methodology, project administration, supervision, validation, visualization, writing and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Basque Government under Grant IT1207-19.

Acknowledgments

The authors are grateful to the Spanish Government and the European Commission for Grant RTI2018-094336-B-I00 (MCIU/AEI/FEDER, UE).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hartman, P.; Stampacchia, G. On some non-linear elliptic differential-functional equations. Acta Math. 1966, 115, 271–310. [Google Scholar] [CrossRef]
  2. Aubin, J.P.; Ekeland, I. Applied Nonlinear Analysis; Wiley: New York, NY, USA, 1984. [Google Scholar]
  3. Baiocchi, C.; Capelo, A. Variational and Quasivariational Inequalities. In Applications to Free Boundary Problems; Wiley: New York, NY, USA, 1984. [Google Scholar]
  4. Glowinski, R.; Lions, J.L.; Trémolières, R. Numerical Analysis of Variational Inequalities; North-Holland: Amsterdam, The Netherlands, 1981. [Google Scholar]
  5. Konnov, I.V. Modification of the extragradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys. 1989, 27, 120–127. [Google Scholar]
  6. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Academic Press: New York, NY, USA, 1980. [Google Scholar]
  7. Konnov, I.V. Combined Relaxation Methods for Variational Inequalities; Springer: Berlin, Germany, 2001. [Google Scholar]
  8. Marcotte, P. Applications of Khobotov’s algorithm to variational and network equlibrium problems. Inf. Syst. Oper. Res. 1991, 29, 258–270. [Google Scholar]
  9. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Series in Operations Research; Springer: New York, NY, USA, 2003; Volume II. [Google Scholar]
  10. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metod. 1976, 12, 747–756. [Google Scholar]
  11. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Censor, Y.; Gibali, A.; Reich, S. Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 2011, 26, 827–845. [Google Scholar] [CrossRef]
  13. Censor, Y.; Chen, W.; Combettes, P.L.; Davidi, R.; Herman, G.T. On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints. Comput. Optim. Appl. 2011. [Google Scholar] [CrossRef]
  14. Hieu, D.V.; Anh, P.K.; Muu, L.D. Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 2017, 66, 75–96. [Google Scholar] [CrossRef]
  15. Hieu, D.V. Parallel hybrid methods for generalized equilibrium problems and asymptotically strictly pseudocontractive mappings. J. Appl. Math. Comput. 2016. [Google Scholar] [CrossRef] [Green Version]
  16. Yamada, I. The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; Butnariu, D., Censor, Y., Reich, S., Eds.; Elsevier: Amsterdam, The Netherlands, 2001; pp. 473–504. [Google Scholar]
  17. Yao, Y.; Liou, Y.C. Weak and strong convergence of Krasnoselski-Mann iteration for hierarchical fixed point problems. Inverse Probl. 2008. [Google Scholar] [CrossRef]
  18. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426. [Google Scholar] [CrossRef] [Green Version]
  19. Stark, H. Image Recovery Theory and Applications; Academic: Orlando, FL, USA, 1987. [Google Scholar]
  20. Censor, Y.; Gibali, A.; Reich, S.; Sabach, S. Common solutions to variational inequalities. Set Val. Var. Anal. 2012, 20, 229–247. [Google Scholar] [CrossRef]
  21. Anh, P.K.; Hieu, D.V. Parallel and sequential hybrid methods for a finite family of asymptotically quasi ϕ-nonexpansive mappings. J. Appl. Math. Comput. 2015, 48, 241–263. [Google Scholar] [CrossRef]
  22. Anh, P.K.; Hieu, D.V. Parallel hybrid methods for variational inequalities, equilibrium problems and common fixed point problems. Vietnam J. Math. 2015. [Google Scholar] [CrossRef]
  23. Hieu, D.V. Parallel and cyclic hybrid subgradient extragradient methods for variational inequalities. Afr. Mat. 2016. [Google Scholar] [CrossRef]
  24. Alber, Y.; Ryazantseva, I. Nonlinear Ill-Posed Problems of Monotone Type; Springer: Dordrecht, The Netherlands, 2006. [Google Scholar]
  25. Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000. [Google Scholar]
  26. Martinez-Yanes, C.; Xu, H.K. Strong convergence of the CQ method for fixed point iteration processes. Nonlinear Anal. 2006, 64, 2400–2411. [Google Scholar] [CrossRef]
  27. Rockafellar, R.T. On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149, 75–88. [Google Scholar] [CrossRef]
Figure 1. Example 1: Numerical comparison for the values of m = 2, 5 and N = 20.
Figure 2. Example 1: Numerical comparison for the values of m = 10, 20 and N = 20.
Figure 3. Example 1 for m = 10 and different values of N = 5, 10, 15.
Figure 4. Example 2: Numerical comparison by letting TOL = 10⁻².
Figure 5. Example 2: Numerical comparison by letting TOL = 10⁻³.
Figure 6. Example 2: Numerical comparison by letting TOL = 10⁻⁴.
Table 1. Numerical results for Figure 1 and Figure 2.

| N  | m  | Alg. 1 Iter. | Alg. 1 CPU (s) | Alg. 2 Iter. | Alg. 2 CPU (s) |
|----|----|--------------|----------------|--------------|----------------|
| 20 | 2  | 2694         | 3.0177         | 1185         | 28.6043        |
| 20 | 5  | 7881         | 21.0213        | 5298         | 3.1379         |
| 20 | 10 | 2493         | 391.9032       | 1066         | 245.6833       |
| 20 | 20 | 7780         | 765.5070       | 3038         | 485.9237       |
Table 2. Numerical results for Figure 3.

| N  | m  | Alg. 1 Iter. | Alg. 1 CPU (s) | Alg. 2 Iter. | Alg. 2 CPU (s) |
|----|----|--------------|----------------|--------------|----------------|
| 5  | 10 | 3101         | 172.4298       | 2267         | 105.7254       |
| 10 | 10 | 3009         | 233.6499       | 2340         | 159.1928       |
| 15 | 10 | 3254         | 353.0176       | 2109         | 235.8372       |
Table 3. Numerical results for Figure 4, Figure 5 and Figure 6.

| N | TOL  | Alg. 1 Iter. | Alg. 1 CPU (s) | Alg. 2 Iter. | Alg. 2 CPU (s) |
|---|------|--------------|----------------|--------------|----------------|
| 2 | 10⁻² | 161          | 0.1799         | 118          | 0.1483         |
| 2 | 10⁻³ | 208          | 0.3563         | 150          | 0.1985         |
| 2 | 10⁻⁴ | 5801         | 6.0405         | 4863         | 4.4441         |

