1. Introduction and Preliminaries
Differential equations, game theory, control theory, the variational inequality problem, the equilibrium problem, the fixed-point problem, the optimization problem and the split feasibility problem are some well-known examples of nonlinear problems to which nonlinear operator theory is applicable. Over the past few decades, the development of efficient, flexible, inexpensive and manageable approximation methods that are easy to test and debug for approximating the solutions of nonlinear operator equations and inclusions has become an active area of research. Continuing this line of research, we propose an efficient and flexible iterative algorithm for approximating the common solution of some generalized nonlinear problems.
Throughout this paper, the letters $\mathbb{R}$, $\mathbb{R}^{+}$ and $\mathbb{N}$ will denote the set of all real numbers, the set of all positive real numbers and the set of all natural numbers, respectively.
Let $H$ be a real Hilbert space, $C$ be a nonempty closed convex subset of $H$ and $T$ be a self-mapping on $C$. The set of all fixed points of $T$ is denoted by $F(T) = \{x \in C : Tx = x\}$. A mapping $T$ is called a Lipschitzian mapping if there exists a constant $L > 0$ such that $\|Tx - Ty\| \le L\|x - y\|$ holds for all $x, y \in C$. If, in the above inequality, we restrict $L$ to vary only in the interval $[0, 1)$, then the mapping $T$ is called a contraction. Furthermore, the mapping $T$ is called nonexpansive if we set $L = 1$ in the above inequality.
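To make the contraction condition concrete, the following minimal sketch (ours, not from the cited literature) runs the Picard scheme $x_{n+1} = Tx_n$ for the contraction $T(x) = \cos x$ on $[0, 1]$, where $|T'(x)| = |\sin x| \le \sin 1 < 1$:

```python
# Picard iteration x_{n+1} = T(x_n) for a contraction T.
# Illustrative assumption: T(x) = cos(x), a contraction on [0, 1] since
# |T'(x)| = |sin x| <= sin(1) < 1 there; its unique fixed point solves
# x = cos(x).
import math

def picard(T, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:   # successive iterates agree: stop
            return x_next
        x = x_next
    return x

fixed_point = picard(math.cos, 0.5)   # -> approximately 0.739085
```

By the Banach contraction principle, the iterates converge to the unique fixed point regardless of the starting point in $[0, 1]$.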
The study of nonexpansive mappings is significant mainly because of three reasons: (1) The existence of fixed points of such mappings relies on the geometric properties of the underlying Banach spaces/Hilbert spaces instead of compactness properties. (2) These mappings are used as the transition operators for certain initial value problems of differential inclusions involving accretive or dissipative operators. (3) Different problems appearing in areas like compressed sensing, economics, convex optimization theory, variational inequality problems, monotone inclusions, convex feasibility, image restoration and other applied sciences give rise to operator equations which involve nonexpansive mappings (see [1,2]). Another reason for studying nonexpansive mappings involves complex analysis, holomorphic mappings and the Hilbert ball (see, for example, [3,4]).
Let us recall that a multi-valued mapping $T: H \to 2^{H}$ is said to be monotone if $\langle x - y, u - v \rangle \ge 0$, where $x, y \in H$, $u \in Tx$ and $v \in Ty$.
A monotone mapping T is said to be maximal if the graph of T is not properly contained in the graph of any other monotone mapping.
An operator $T: H \to H$ is called $t$-inverse strongly monotone if for all $x, y \in H$, we have $\langle x - y, Tx - Ty \rangle \ge t\|Tx - Ty\|^{2}$ for some $t > 0$. When the constant $t$ is not emphasized, $T$ is simply called inverse strongly monotone.
Let $\lambda > 0$ be the given parameter and $I$ be the identity operator on $H$. If we set $J_{\lambda}^{T} = (I + \lambda T)^{-1}$, then $J_{\lambda}^{T}$ is called the resolvent of the mapping $T$. Note that $F(J_{\lambda}^{T}) = T^{-1}(0)$.
It is known that for each $x \in H$, there is a unique element $P_{C}x \in C$ such that $\|x - P_{C}x\| = \inf_{y \in C}\|x - y\|$. The mapping $P_{C}$ from $H$ onto $C$ is called the metric projection of $H$ onto $C$.
Recall that for any $x \in H$, $y = P_{C}x$ if and only if $\langle x - y, z - y \rangle \le 0$ for all $z \in C$. More information on metric projections can be found in Section 3 in [3]; also, we refer the reader to [5].
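As a quick finite-dimensional illustration (our own, not from the cited sources), the metric projection onto a box acts coordinate-wise by clipping, and the variational characterization $\langle x - P_{C}x, z - P_{C}x \rangle \le 0$ can be checked numerically:

```python
# Metric projection onto the box C = [0, 1]^3: clip each coordinate.
def project_box(x, lo, hi):
    return [min(max(xi, lo), hi) for xi in x]

def inner(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x = [2.0, -3.0, 0.25]
p = project_box(x, 0.0, 1.0)          # -> [1.0, 0.0, 0.25]
# Variational characterization: <x - P_C x, z - P_C x> <= 0 for all z in C.
z = [0.5, 0.9, 0.1]                   # an arbitrary point of C
gap = inner([a - b for a, b in zip(x, p)], [a - b for a, b in zip(z, p)])
```

Here `gap` is negative, as the characterization requires; any other choice of $z \in C$ gives the same sign.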
Throughout this manuscript, we denote the strong and weak convergence of a sequence $\{x_n\}$ to a point $x$ by $x_n \to x$ and $x_n \rightharpoonup x$, respectively. The set of all weak subsequential limits of $\{x_n\}$ is denoted by $\omega_{w}(x_n)$; that is, $x \in \omega_{w}(x_n)$ if there exists some subsequence $\{x_{n_k}\}$ of the sequence $\{x_n\}$ which converges weakly to $x$.
Definition 1. A mapping $T: H \to H$ is said to be firmly nonexpansive if for all $x, y \in H$, we have $\|Tx - Ty\|^{2} \le \langle x - y, Tx - Ty \rangle$. Note that $P_{C}$ is a well-known example of a firmly nonexpansive mapping. More information on firmly nonexpansive mappings can be found in Section 11 of [3]. Moreover, $\phi$ is called hemicontinuous on $H$ if it is continuous along each line segment in $H$.
Lemma 1 ([6]). Let $C$ be a nonempty closed convex subset of a Hilbert space $H$ and $T: C \to C$ be nonexpansive. Then, $I - T$ is demiclosed on $C$; that is, any sequence $\{x_n\}$ in $C$ with $x_n \rightharpoonup x$ and $x_n - Tx_n \to 0$ gives that $x = Tx$.
Definition 2. A mapping $\phi: H \to \mathbb{R}$ is weakly lower semicontinuous at $x \in H$ if for any sequence $\{x_n\}$ in $H$ with $x_n \rightharpoonup x$, we have $\phi(x) \le \liminf_{n \to \infty} \phi(x_n)$.
Lemma 2 ([7]). For any $x, y \in H$ and $\alpha \in [0, 1]$, the following results hold:
- (i) $\|x + y\|^{2} = \|x\|^{2} + 2\langle x, y \rangle + \|y\|^{2}$;
- (ii) $\|x + y\|^{2} \le \|x\|^{2} + 2\langle y, x + y \rangle$;
- (iii) $\|\alpha x + (1 - \alpha)y\|^{2} = \alpha\|x\|^{2} + (1 - \alpha)\|y\|^{2} - \alpha(1 - \alpha)\|x - y\|^{2}$;
- (iv) $\|x - y\|^{2} = \|x\|^{2} - 2\langle x, y \rangle + \|y\|^{2}$.
Lemma 3 ([8]). Let $x, y \in H$ and $\alpha \in (0, 1)$; then, the following holds:
Lemma 4 ([9]). Suppose that $\{a_n\}$ and $\{b_n\}$ are sequences of positive real numbers such that the following holds:
- (i) If for some , then $\{a_n\}$ is a bounded sequence.
- (ii) If and , then we have $\lim_{n \to \infty} a_n = 0$.
Lemma 5 ([10]). Suppose that $\{a_n\} \subset [0, \infty)$, $\{b_n\} \subset \mathbb{R}$ and $\{\gamma_n\} \subset (0, 1)$ with $\sum_{n=1}^{\infty} \gamma_n = \infty$ are such that $a_{n+1} \le (1 - \gamma_n)a_n + \gamma_n b_n$ for all $n \ge 1$. If $\limsup_{k \to \infty} b_{n_k} \le 0$ for every subsequence $\{a_{n_k}\}$ of $\{a_n\}$ with $\liminf_{k \to \infty}(a_{n_k + 1} - a_{n_k}) \ge 0$, then $\lim_{n \to \infty} a_n = 0$.
1.1. Some Nonlinear Problems
Throughout this paper, we suppose that $H_1$ and $H_2$ are real Hilbert spaces, $C$ and $Q$ are nonempty closed and convex subsets of $H_1$ and $H_2$, respectively, and $A: H_1 \to H_2$ is a bounded linear operator with $A^{*}$ as its adjoint operator.
Let $T: C \to C$. The fixed-point problem (FPP) can be formulated as: find $x \in C$ such that $Tx = x$.
For two multivalued mappings $S$ and $T$, if $x \in Sx \cap Tx$, then we say that $x$ is a common fixed point of $S$ and $T$.
Let $F: C \times C \to \mathbb{R}$ be a bifunction. An equilibrium problem (EP) involving $F$ and the set $C$ is defined as follows: find $x \in C$ such that $F(x, y) \ge 0$ for all $y \in C$.
Let $T: C \to H_1$. The variational inequality problem (VIP) associated with $T$ and $C$ is given as follows: find $x \in C$ such that $\langle Tx, y - x \rangle \ge 0$ for all $y \in C$.
Suppose that $F: C \times C \to \mathbb{R}$ and $T: C \to H_1$ are two mappings. The generalized equilibrium problem, GEP, of $F$ and $T$ is defined as follows: find $x \in C$ such that $F(x, y) + \langle Tx, y - x \rangle \ge 0$ for all $y \in C$. (2)
Note that if $T$ is a zero operator in (2), then the GEP reduces to the EP. If $F$ is a zero operator in (2), then the GEP becomes the VIP. The solution set of (2) is denoted by $\mathrm{GEP}(F, T)$.
The GEP unifies different problems such as the VIP, EP, complementarity problem, optimization problem, FPP and Nash equilibrium problem in noncooperative games (for instance, see [11,12,13,14]).
The split inverse problem (SIP) has gained a lot of attention from many researchers recently. The first version of the SIP was the split feasibility problem (SFP), which was proposed by Censor and Elfving in 1994 [15]. The SFP associated with a bounded linear operator $A: H_1 \to H_2$ is defined as follows: find $x^{*} \in C$ such that $Ax^{*} \in Q$.
That is, the SFP is a problem of finding a point of a closed convex subset whose image under a given bounded linear operator belongs to another closed convex subset. This problem has found several applications in real-world problems such as image recognition, signal processing, intensity-modulated radiation therapy and many others. For more results in this direction, we refer to [16,17,18,19,20,21].
For any operator $T: H_1 \to H_2$:
- (a) The direct problem is to determine $b = Tx$ for any given $x \in H_1$ (that is, from the cause to the consequence).
- (b) The inverse problem is to determine a point $x \in H_1$ such that $Tx = b$ for any given $b \in H_2$ (that is, from the consequence to the cause).
The split inverse problem (SIP) is defined as follows: find a point $x^{*} \in H_1$ that solves $\mathrm{IP}_1$, where $\mathrm{IP}_1$ is the inverse problem formulated in $H_1$, and such that $y^{*} = Ax^{*} \in H_2$ solves $\mathrm{IP}_2$, where $\mathrm{IP}_2$ is another inverse problem formulated in $H_2$.
Moudafi [22] proposed a new version of the SIP called the split monotone variational inclusion problem (SMVIP).
Suppose that $f: H_1 \to H_1$ and $g: H_2 \to H_2$ are inverse strongly monotone mappings, $B_1: H_1 \to 2^{H_1}$ and $B_2: H_2 \to 2^{H_2}$ are multivalued maximal monotone mappings and $A: H_1 \to H_2$ is a bounded linear operator. The SMVIP is defined as follows: Find a point $x^{*} \in H_1$ such that $0 \in f(x^{*}) + B_1(x^{*})$ and $y^{*} = Ax^{*} \in H_2$ such that $0 \in g(y^{*}) + B_2(y^{*})$.
If $f \equiv 0$ and $g \equiv 0$, then the SMVIP reduces to the following split variational inclusion problem (SVIP), which is defined as follows: Find a point $x^{*} \in H_1$ such that $0 \in B_1(x^{*})$ and $y^{*} = Ax^{*} \in H_2$ such that $0 \in B_2(y^{*})$. (3)
Moreover, Moudafi showed that the SFP is a special case of the SVIP. Many inverse problems arising in real-world problems can be modeled as an SVIP (for details, see [16,19]). We shall denote the solution set of the variational inclusion problem on $H_1$ by $\mathrm{SOLVIP}(B_1)$ and the solution set of the variational inclusion problem on $H_2$ by $\mathrm{SOLVIP}(B_2)$. The solution set of the SVIP is denoted by $\Gamma = \{x^{*} \in H_1 : x^{*} \in \mathrm{SOLVIP}(B_1) \text{ and } Ax^{*} \in \mathrm{SOLVIP}(B_2)\}$.
Remark 1. According to [23,24], the following hold. The mapping $T$ is maximal monotone if and only if the resolvent operator $J_{\lambda}^{T}$ is a single-valued mapping; $x \in F(J_{\lambda}^{T})$ if and only if $0 \in Tx$. The split variational inclusion problem given in (3) is equivalent to the following: find $x^{*} \in H_1$ such that $x^{*} = J_{\lambda}^{B_1}(x^{*})$ and $Ax^{*} = J_{\lambda}^{B_2}(Ax^{*})$ for some $\lambda > 0$.
1.2. Some Notable Iterative Algorithms
The problem of approximating fixed points of nonexpansive mappings with the help of different iterative processes has been studied extensively (see [9,13,25,26,27,28,29,30]).
There have been several iterative methods proposed in the literature for the solution of nonlinear problems. For instance, in 2022, Abbas et al. [31] proposed an iterative method known as the AA (Abbas–Asghar)-iteration. The sequence generated by the AA-iteration is defined as follows in Algorithm 1:
Algorithm 1: AA-iterative algorithm proposed in [31]. |
Initialization: Let , and be sequences of real numbers in . |
Choose any ; |
For , calculate as follows: |
It was shown that the AA-iteration method has a faster rate of convergence than other well-known iteration methods existing in the literature [31]. Note that the AA-iteration method has been successfully applied for obtaining the solutions of operator equations involving nonexpansive-type mappings; for instance, see [32,33,34].
Byrne et al. [35] proposed an iterative algorithm to solve the SVIP involving maximal monotone operators $B_1$ and $B_2$, which is as follows in Algorithm 2:
Algorithm 2: Proximal algorithm proposed in [35]. |
Initialization: Let be a sequence of real numbers in , and , where . |
Choose any ; |
For , calculate as follows:
where $J_{\lambda}^{B_1}$ and $J_{\lambda}^{B_2}$ are the resolvent operators of $B_1$ and $B_2$, respectively, and $\lambda > 0$. |
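A Byrne-type iteration of this kind can be sketched numerically. In the toy example below (our illustration, not from [35]), the maximal monotone operators are taken to be normal cones of intervals, so their resolvents reduce to metric projections and the SVIP becomes a one-dimensional split feasibility problem:

```python
# Sketch of a Byrne-type iteration for the SVIP,
#   x_{n+1} = J_{B1}( x_n + gamma * A^T (J_{B2} - I) A x_n ).
# Illustrative assumption: B1, B2 are the normal cones of C = [0, inf)
# and Q = [2, 4], so J_{B1}, J_{B2} are the metric projections onto C
# and Q. The SVIP then asks for x >= 0 with A x = 2x in [2, 4],
# i.e. any x in [1, 2].
def clip(x, lo, hi):
    return min(max(x, lo), hi)

A = 2.0                    # the bounded linear operator (a scalar here)
gamma = 0.4                # step size in (0, 2/||A||^2) = (0, 0.5)

x = -5.0                   # arbitrary starting point
for _ in range(200):
    y = A * x
    x = clip(x + gamma * A * (clip(y, 2.0, 4.0) - y), 0.0, float("inf"))
```

The iterate stabilizes inside the solution set $[1, 2]$; choosing $\gamma$ outside $(0, 2/\|A\|^{2})$ may destroy convergence, which is why operator-norm-free step sizes are attractive.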
The problem of finding the common solution of some nonlinear problems has gained a lot of attention from many authors. For example, Wangkeeree et al. [36] proposed the following iterative algorithm to obtain the common solution of the FPP and the SVIP for nonexpansive mappings. The proposed iterative method is given in Algorithm 3.
Algorithm 3: General iterative algorithm proposed in [36]. |
Initialization: Let be a sequence of real numbers in , and , where L is the spectral radius of operator . |
Choose any ; |
For , calculate as follows:
where is a contraction with contraction constant c, is a nonexpansive mapping, is a bounded linear operator with constant and , with , and and are multivalued maximal monotone operators. |
It was shown that under some appropriate conditions, the sequence defined in Algorithm 3 converges strongly to a common solution of the FPP and the SVIP.
The step size in any algorithm plays an important role as far as its computation and the rate of convergence are concerned. Indeed, the selection of an appropriate step size can help in approximating the solution in fewer steps, and hence the step size may affect the rate of convergence of any iterative algorithm. Note that the step sizes described in Algorithms 2 and 3 depend upon the operator norm, and hence these algorithms are not easily implementable, as the computation of the operator norm in each step makes the task difficult.
Later on, Tang [24] modified Algorithm 2 with a self-adaptive step size for approximating the solution of the SVIP. The proposed method is described in Algorithm 4.
Algorithm 4: Iterative algorithm proposed by Tang in [24]. |
Initialization: Let be a sequence such that with . |
Choose any ; |
For , calculate as follows: |
Compute
then compute
where , , and is a sequence with the conditions given in Algorithm 2. |
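A self-adaptive step size of this kind can be sketched as follows. The rule $\rho_n = g(x_n)/\|\nabla g(x_n)\|^{2}$ with $g(x) = \frac{1}{2}\|(I - P_Q)Ax\|^{2}$ is a common choice in this literature; the concrete operator $A$ and the box sets below are our own illustrative assumptions, and no knowledge of $\|A\|$ is needed:

```python
# Sketch of a self-adaptive step size in the spirit of Algorithm 4:
#   rho_n = g(x_n) / ||grad g(x_n)||^2,  g(x) = 0.5*||(I - P_Q)Ax||^2,
# computed from per-iteration quantities only, so ||A|| never appears.
# The operator A and the boxes C, Q are illustrative assumptions.
def proj_box(x, lo, hi):
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

A  = [[2.0, 0.0], [0.0, 3.0]]          # bounded linear operator on R^2
At = [[2.0, 0.0], [0.0, 3.0]]          # its adjoint (A is symmetric here)
C_lo, C_hi = [0.0, 0.0], [1.0, 1.0]    # C = [0,1] x [0,1]
Q_lo, Q_hi = [1.0, 2.0], [2.0, 3.0]    # Q = [1,2] x [2,3]

x = [5.0, -4.0]
for _ in range(500):
    Ax = matvec(A, x)
    r = [a - q for a, q in zip(Ax, proj_box(Ax, Q_lo, Q_hi))]  # (I-P_Q)Ax
    g = 0.5 * sum(ri * ri for ri in r)
    if g < 1e-18:                      # Ax is (numerically) in Q: stop
        break
    grad = matvec(At, r)               # grad g(x) = A*(I - P_Q)Ax
    rho = g / sum(gi * gi for gi in grad)
    x = proj_box([xi - rho * gi for xi, gi in zip(x, grad)], C_lo, C_hi)
```

After convergence, $x$ lies in $C$ with $Ax$ in $Q$; the step size adapts automatically as the residual shrinks.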
Under some suitable conditions, a strong convergence result was proven for Algorithm 4. Moreover, many researchers have worked on inertial-type algorithms, in which each iteration is defined using the previous two iterations. Many authors have proposed efficient algorithms combining the inertial process with self-adaptive step-size methods for approximating the solutions of certain nonlinear problems; for more details, we refer to [30,37,38,39,40]. Moreover, Rouhani et al. proposed different iterative algorithms to find the common solution of some important nonlinear problems in Hilbert and Banach spaces; for details, see [41,42,43,44].
Recently, Alakoya and Mewomo [45] proposed an inertial-type viscosity algorithm hybridized with the S-iteration [46] to approximate the common solution of certain nonlinear problems. They used a suitable step size in the proposed algorithm to approximate the solution without prior knowledge of the operator norm. A natural question arises: is it possible to develop a method which converges at a faster rate and approximates the solution of more general nonlinear problems?
Using the step size given in [45], we propose an efficient inertial viscosity algorithm hybridized with the AA-iteration for approximating the common solution of more generalized nonlinear problems. Indeed, finding common solutions to nonlinear problems, as opposed to solving them separately, is crucial because it offers a unified perspective on the interconnected variables. This approach provides a more comprehensive understanding of the system's behavior, ensuring consistency and enabling more robust modeling and analysis in complex scenarios. Using suitable control parameters, we prove a strong convergence result to approximate the common solution of a split variational inclusion problem, the GEP and the common FPP. These problems are of much importance in different fields, like network resources, signal processing, image processing and many others (for more details, we refer to [26,28]).
2. Convergence Analysis
Now, we present the following assumptions for the proposed algorithm.
Assumption 1. Let $F: C \times C \to \mathbb{R}$ and $T: C \to H_1$. For solving the GEP, we impose the following conditions on $F$ and $T$:
(C1) $F(x, x) = 0$, for all $x \in C$.
(C2) $F$ is monotone, that is, $F(x, y) + F(y, x) \le 0$, for all $x, y \in C$.
(C3) $\limsup_{t \downarrow 0} F(tz + (1 - t)x, y) \le F(x, y)$, for all $x, y, z \in C$.
(C4) For each $x \in C$, $y \mapsto F(x, y)$ is convex and lower semi-continuous.
Definition 3. For some $r > 0$, define the mapping $T_{r}: H_1 \to C$ as follows: $T_{r}(x) = \{z \in C : F(z, y) + \langle Tz, y - z \rangle + \frac{1}{r}\langle y - z, z - x \rangle \ge 0, \ \forall y \in C\}$. Lemma 6. Under the conditions (C1)–(C4), we have the following:
$T_{r}$ is firmly nonexpansive and single-valued.
$F(T_{r}) = \mathrm{GEP}(F, T)$.
$\mathrm{GEP}(F, T)$ is closed and convex.
Proof. For a given $x \in H_1$, take $z_1, z_2 \in T_{r}(x)$. Writing the defining inequality of $T_{r}(x)$ at $z_1$ (with $y = z_2$) and at $z_2$ (with $y = z_1$), adding the two inequalities and applying the monotonicity condition (C2), it follows that $\|z_1 - z_2\|^{2} \le 0$.
Hence, we get $z_1 = z_2$, which implies that $T_{r}$ is single-valued. The same argument applied to $z_1 = T_{r}(x_1)$ and $z_2 = T_{r}(x_2)$ gives $\|T_{r}(x_1) - T_{r}(x_2)\|^{2} \le \langle T_{r}(x_1) - T_{r}(x_2), x_1 - x_2 \rangle$; that is, $T_{r}$ is firmly nonexpansive.
Now, we show that $F(T_{r}) = \mathrm{GEP}(F, T)$. If $z \in F(T_{r})$, then $z = T_{r}(z)$, and the defining inequality of $T_{r}$ reduces to $F(z, y) + \langle Tz, y - z \rangle \ge 0$ for all $y \in C$; the converse holds similarly.
Since $T_{r}$ is firmly nonexpansive, and hence nonexpansive, and the set of fixed points of a nonexpansive map is closed and convex, $\mathrm{GEP}(F, T)$ is closed and convex. □
Proposed Algorithm
Here, we discuss our proposed algorithm. Initially, we describe some notations as follows.
Suppose that $B_1: H_1 \to 2^{H_1}$ and $B_2: H_2 \to 2^{H_2}$ are maximal monotone mappings, $F$ satisfies Assumption 1, $A: H_1 \to H_2$ is a bounded linear operator and the adjoint operator of $A$ is denoted by $A^{*}$. Let $S, T: H_1 \to H_1$ be nonexpansive mappings and $\phi: H_1 \to H_1$ be a contraction mapping with contraction constant $c$.
We define the following mappings as follows:
Note that $g$ and $h$ are weakly lower semi-continuous, convex and differentiable [47]. Furthermore, $G$ and $H$ are Lipschitz continuous [24].
We now present our proposed method, which is given in Algorithm 5; its flowchart diagram can be seen in Figure 1.
Algorithm 5: Proposed inertial-type AA-viscosity algorithm for variational inclusion problems, the GEP and the common FPP. |
Step 0. Suppose that and is a non-negative real number. Set . |
Step 1. Given the $(n-1)$th and $n$th iterations, set such that with given as
|
Step 2. Compute
|
Step 3. Find such that
|
Step 4. Compute
|
Step 5. Compute
where
|
Step 6. Evaluate
|
Step 7. Compute
|
Step 8. Set
|
Step 9. Find
Update: set $n := n + 1$ and return to Step 1. |
The control parameters are given and satisfy the following conditions:
(i) is a sequence in with and .
(ii) are sequences in such that all are in with satisfying the following: .
(iii) is fixed and is a sequence of positive real numbers such that
(iv) and are sequences of positive real numbers such that and
Remark 2. Note that by conditions (i) and (iii), one can easily verify from (7) that In addition, and are given.
Suppose that . The strong convergence result for the proposed algorithm is given as follows.
Theorem 1. Suppose that and ϕ are mappings as described above. If is a sequence generated by Algorithm 5 and fulfills the conditions – and –, then the sequence converges strongly to a fixed point of .
We divide our proof into the following lemmas.
Lemma 7. If is a sequence generated by Algorithm 5, then is bounded.
Proof. Since , and also noting that is a contraction, we can apply the Banach contraction principle, which says that there exists a such that and . This gives
As is nonexpansive for each n, then
By Remark 2, . Then, it follows that there exists a constant such that
So, by Equation (10), we obtain
Now, using the definition of and the firm nonexpansiveness of , we get
Now, by Lemma 2 and applying (13) together with the nonexpansiveness of , we have
and putting in the value of , we have
By using the assumption on , we obtain
Now, by using (15), we get
Now, by (15) and (16) and since S is nonexpansive, we have
Now, by using condition and (18), we have
where , if we put and . Then, by applying Lemma 4 along with the assumptions on the control parameters, we get that is bounded, and this implies that is bounded. Moreover, and are bounded. □
Lemma 8. Let be a sequence defined in Algorithm 5 and ; also, suppose the conditions given in Theorem 1 hold. Then, we have the following inequality: Proof. If ; then, using Lemma 2 and (9) and (14), we get
where . Now,
Now, using , we get
Hence, by the Cauchy–Schwarz inequality, we get
By (12), (15), (22) and (23), knowing that is a contraction, and by the Cauchy–Schwarz inequality, we have
Hence, we get
where □
Lemma 9. Suppose that is a sequence defined in Algorithm 5 and , and that the conditions given in Theorem 1 hold. Then, we have the following inequality: Proof. Let ; then, by (14), we have
Applying Lemma 2 and the firm nonexpansiveness of , we have
Hence, we have
where . Next, by Lemma 3 and (16), (17) and (24), we get
□
Lemma 10. Under the assumptions of Theorem 1, the sequence defined by Algorithm 5 converges strongly to , where .
Proof. Let . By Lemma 8, we have
We now show that converges to zero as . Set and in Lemma 5. We now show that for every subsequence of satisfying
Suppose that is a subsequence of such that
By (27) and , we obtain that
Following arguments similar to those given above, we have
Similarly, from Lemma 8, we obtain
and
As G and H are Lipschitz continuous, by , we have
and
By (26) and (33) with Remark 2 and using , we get
Similarly, by Lemma 9, we have
Applying (28) and (36), we obtain that
Similarly, by applying (29), (34), (35) and (37), we have
Furthermore, by (37) and (38), we get
Using (38) with , we get
Now, we show that . Note that . Indeed, is bounded, so . Let be any arbitrary element; then there is a subsequence of such that as . By (37), it follows that as . By the definition of , we get
By the monotonicity of F, we have
By (28) and the condition , we have
Let and . This implies that . Now, by (41) and applying the conditions –, we have
Taking and by condition , we have
This implies that . Further, we show that . By using the lower semi-continuity of g, it follows from (31) that
which implies that can be written as
or
On taking the limit as in the above Equation (43), applying (33), (34) and (38) and combining this with the fact that the graph of a maximal monotone mapping is weakly–strongly closed, we get . Combining this with (42), we have . Next, we show that . By (38) and (39), we get and as . Since S and T are nonexpansive and satisfy the demiclosedness principle, (39) gives . Hence, . Moreover, by (38) and (39), it follows that . Since is bounded, there exists a subsequence of such that and
As , we have
Now, by (40) and (44), we get
Applying Lemma 5 to (25) and using (45) with and , we conclude that and hence . □
3. Applications
In the following subsections, we use our proposed iterative scheme to approximate the solutions of some well-known nonlinear problems.
3.1. Split Feasibility Problem
Suppose that $C$ and $Q$ are given as in the previous section. The SFP is defined as follows: find $x^{*} \in C$ such that $Ax^{*} \in Q$. (46)
This problem was introduced by Censor and Elfving in 1994 [15] and is used to model problems arising in different fields such as image diagnosis and restoration, computed tomography and radiation therapy treatment. The set of solutions of the SFP (46) is denoted by $\Omega$. Suppose that $C$ is a nonempty closed and convex subset of a Hilbert space $H$ and $i_{C}$ is an indicator function which is defined as follows: $i_{C}(x) = 0$ if $x \in C$, and $i_{C}(x) = +\infty$ otherwise.
Define the normal cone $N_{C}(u)$ at $u \in C$ as follows: $N_{C}(u) = \{z \in H : \langle z, v - u \rangle \le 0, \ \forall v \in C\}$.
As $i_{C}$ is a proper, lower semicontinuous and convex function on $H$, the subdifferential $\partial i_{C}$ of $i_{C}$ is a maximal monotone operator. Note that the resolvent $J_{\lambda}^{\partial i_{C}}$ of $\partial i_{C}$ is given by $J_{\lambda}^{\partial i_{C}}(x) = P_{C}x$ for all $x \in H$. Furthermore, for each $u \in C$, we have $\partial i_{C}(u) = N_{C}(u)$.
As an application of Theorem 1, we obtain the approximation of the common solution of the SFP, the GEP and the common FPP involving nonexpansive mappings. We now present Algorithm 6, given below, which serves this purpose.
Algorithm 6: Proposed algorithm for the SFP, the GEP and the common FPP. |
Step 0. Let and be any non-negative real number. Set . |
Step 1. Given the $(n-1)$th and $n$th iterations, set such that with given as
|
Step 2. Compute
|
Step 3. Find such that
|
Step 4. Compute
|
Step 5. Compute
where
|
Step 6. Evaluate
|
Step 7. Compute
|
Step 8. Set
|
Step 9. Find
where
Update: set $n := n + 1$ and return to Step 1. |
We now present the following result.
Theorem 2. Suppose that S and T are nonexpansive self-mappings on and is a contraction with contraction constant c. If and the conditions – and – hold, then the sequence defined by Algorithm 6 converges strongly to , where
Proof. The proof follows from Theorem 1. □
3.2. Relaxed Split Feasibility Problem
The relaxed split feasibility problem (RSFP) is a special case of the SFP, which is defined as follows.
Let $c: H_1 \to \mathbb{R}$ and $q: H_2 \to \mathbb{R}$ be convex and lower semicontinuous functions with bounded subdifferentials on bounded domains. Take the sets $C$ and $Q$ as follows: $C = \{x \in H_1 : c(x) \le 0\}$ and $Q = \{y \in H_2 : q(y) \le 0\}$.
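The practical point of the relaxed formulation is that the level sets are replaced, at each iterate, by half-spaces built from a subgradient, and projecting onto a half-space has a closed form. The sketch below (our illustration) does this for the assumed constraint $c(x) = \|x\|_1 - 1$, using the sign vector as a subgradient:

```python
# Relaxed-projection sketch: the level set C = {x : c(x) <= 0} is
# replaced by the half-space H_n = {z : c(x_n) + <xi_n, z - x_n> <= 0},
# xi_n a subgradient of c at x_n, which contains C by convexity and
# admits a closed-form projection.
# Illustrative assumption: c(x) = ||x||_1 - 1, subgradient sign(x).
def c(x):                        # convex constraint: l1 norm minus 1
    return sum(abs(v) for v in x) - 1.0

def subgrad(x):                  # a subgradient of c at x
    return [1.0 if v > 0 else (-1.0 if v < 0 else 0.0) for v in x]

def halfspace_projection(x, x_n, c_val, xi):
    # project x onto {z : c_val + <xi, z - x_n> <= 0}
    slack = c_val + sum(g * (a - b) for g, a, b in zip(xi, x, x_n))
    if slack <= 0.0:             # already inside the half-space
        return list(x)
    nrm2 = sum(g * g for g in xi)
    return [a - (slack / nrm2) * g for a, g in zip(x, xi)]

x_n = [2.0, -1.0]
p = halfspace_projection(x_n, x_n, c(x_n), subgrad(x_n))   # -> [1.0, 0.0]
```

Here a single half-space projection already lands on the boundary of $C$; in general each relaxed step only moves closer to $C$, but it never requires an exact projection onto the level set.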
The solution set of the RSFP is denoted by . We now present an algorithm (Algorithm 7) to approximate the common solution of the RSFP, the GEP and the common FPP.
Algorithm 7: Proposed algorithm for the RSFP, the GEP and the common FPP. |
Step 0. Let and be any non-negative real number. Set . |
Step 1. Given the $(n-1)$th and $n$th iterations, set such that with given as
|
Step 2. Compute
|
Step 3. Find such that
|
Step 4. Compute
|
Step 5. Compute
where
and
|
Step 6. Evaluate
|
Step 7. Compute
|
Step 8. Set
|
Step 9. Find
where
Update: set $n := n + 1$ and return to Step 1. |
Now, using Theorem 2, we have the following result, which approximates the common solution of the RSFP, the GEP and the common FPP involving nonexpansive mappings.
Theorem 3. Suppose that S and T are nonexpansive self-mappings on and is a contraction mapping with the contraction constant c. If and the conditions – and – hold, then the sequence defined by Algorithm 7 converges strongly to , where
Proof. The proof follows from Theorem 1. □
3.3. Split Common Null Point Problem
The split common null point problem (SCNPP) for multi-valued maximal monotone mappings was introduced by Byrne et al. [35]. They also proposed iterative algorithms to solve this problem. The SCNPP includes the convex feasibility problem (CFP) [15], the VIP [22] and many constrained optimization problems as special cases; for more details about its applicability, we refer to [16,48].
For multivalued mappings $B_1: H_1 \to 2^{H_1}$ and $B_2: H_2 \to 2^{H_2}$, the SCNPP is formulated as: find $x^{*} \in H_1$ such that $0 \in B_1(x^{*})$ and $0 \in B_2(Ax^{*})$. (50)
We denote the solution set of the SCNPP (50) by . It is well known that for any $\lambda > 0$, the resolvent $J_{\lambda}^{T}$ is single-valued and nonexpansive if and only if $T$ is maximal and monotone. Let $T: H \to 2^{H}$ be a maximal monotone mapping; then the resolvent operator $J_{\lambda}^{T} = (I + \lambda T)^{-1}$ is a single-valued map associated with $T$, where $\lambda > 0$. Moreover, the resolvent operator $J_{\lambda}^{T}$ is firmly nonexpansive, and $x \in F(J_{\lambda}^{T})$ if and only if $0 \in Tx$. Moreover, Lemma 7.1 on page 392 of [49] shows that this fact is equivalent to the classical Kirszbraun–Valentine extension theorem. Now, we propose Algorithm 8 to approximate the common solution of the GEP, the variational inclusion problem and the SCNPP.
Algorithm 8: Proposed algorithm for the variational inclusion problem, the GEP and the SCNPP. |
Step 0. Let and be any non-negative real number. Set . |
Step 1. Given the $(n-1)$th and $n$th iterations, set such that with given as
|
Step 2. Compute
|
Step 3. Find such that
|
Step 4. Compute
|
Step 5. Compute
where
|
Step 6. Evaluate
|
Step 7. Compute
|
Step 8. Set
|
Step 9. Find
where
Update: set $n := n + 1$ and return to Step 1. |
We now present the following result.
Theorem 4. Suppose that S and T are maximal monotone multivalued mappings on and is a contraction mapping with contraction constant c. If , and the conditions – and – hold, then the sequence defined by Algorithm 8 converges strongly to , where
Proof. As the resolvent operators and are firmly nonexpansive and hence nonexpansive, the proof follows from Theorem 1. □
3.4. Split Minimization Problem
Let us recall the definition of a proximal operator. Let $H$ be a Hilbert space, $\lambda > 0$ and $f: H \to (-\infty, +\infty]$ be a convex, proper and lower semicontinuous function. The proximal operator of the mapping $f$ is defined as follows:
$\operatorname{prox}_{\lambda f}(x) = \arg\min_{y \in H} \left\{ f(y) + \frac{1}{2\lambda}\|x - y\|^{2} \right\}$.
It is known that $\operatorname{prox}_{\lambda f} = (I + \lambda \partial f)^{-1} = J_{\lambda}^{\partial f}$, where $\partial f$ denotes the subdifferential of $f$, which is given as:
$\partial f(x) = \{z \in H : f(y) \ge f(x) + \langle z, y - x \rangle, \ \forall y \in H\}$.
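For a concrete instance, the proximal operator of $f(x) = \lambda|x|$ on $\mathbb{R}$ is the well-known soft-thresholding map, which is exactly the resolvent of the maximal monotone operator $\partial f$. A minimal sketch (ours):

```python
# prox_{lam f} for f = |.| on R is soft-thresholding:
#   prox(x) = x - lam if x > lam,  x + lam if x < -lam,  0 otherwise.
# It coincides with the resolvent (I + lam * d|.|)^{-1} of the maximal
# monotone operator d|.| (the subdifferential of the absolute value).
def prox_abs(x, lam):
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

examples = [(3.0, 1.0), (-0.4, 1.0), (0.2, 0.5)]
out = [prox_abs(x, lam) for x, lam in examples]   # -> [2.0, 0.0, 0.0]
```

The resolvent identity can be read off directly: $x - \operatorname{prox}_{\lambda f}(x) \in \lambda\,\partial|\cdot|(\operatorname{prox}_{\lambda f}(x))$ in each case.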
The split minimization problem (SMP) introduced by Moudafi and Thakur [48] has been successfully applied in Fourier regularization, multi-resolution and sparse regularization, alternating projection signal synthesis problems and hard-constrained inconsistent feasibility (see [50]).
Suppose that $f: H_1 \to (-\infty, +\infty]$ and $g: H_2 \to (-\infty, +\infty]$ are convex, proper and lower semicontinuous functions. The split minimization problem (SMP) is defined as follows: find a point $x^{*} \in H_1$ such that $x^{*} \in \arg\min_{x \in H_1} f(x)$ and $Ax^{*} \in \arg\min_{y \in H_2} g(y)$. (53)
The solution set of the SMP (53) is denoted by .
Note that $\operatorname{prox}_{\lambda f}$ is firmly nonexpansive and $\partial f$ is a maximal monotone operator. Set $B_1 = \partial f$ and $B_2 = \partial g$ in Theorem 1 and use Algorithm 9 given below to approximate the common solution of the SMP, the GEP and the common FPP.
Algorithm 9: Proposed algorithm for the SMP, the GEP and the common FPP. |
Step 0. Suppose that and is any non-negative real number. Set . |
Step 1. Given the $(n-1)$th and $n$th iterations, set such that with given as
|
Step 2. Compute
|
Step 3. Find such that
|
Step 4. Compute
|
Step 5. Compute
where
|
Step 6. Evaluate
|
Step 7. Compute
|
Step 8. Set
|
Step 9. Find
where
Update: set $n := n + 1$ and return to Step 1. |
Finally, we present the following result.
Theorem 5. Suppose that S and T are nonexpansive self-mappings on , is a contraction with contraction constant c and and are convex, proper and lower semicontinuous functions. If and the conditions – and – hold, then the sequence generated by Algorithm 9 converges strongly to , where
Proof. The proof follows from Theorem 1. □