1 Introduction

Multiobjective optimization, in which there are multiple conflicting objective functions to be minimized simultaneously, has been studied extensively in the literature, with application areas ranging from engineering to the natural sciences. Vector optimization is a generalization in which values of the objective function are not necessarily compared element-wise. Stated in technical terms, the order relation on the objective space is determined by an ordering cone, which may differ from the positive orthant. Vector optimization plays an important role in application areas such as financial mathematics [13, 15], economics [25], and game theory [14], where the ordering cone of the problem is naturally different from the positive orthant.

There are various solution concepts and approaches regarding vector optimization problems in the literature, see for instance [16, 22]. In contrast to single-objective optimization, there is usually no unique optimal objective value. Instead, one is interested in Pareto optimal solutions, whose objective values are minimal with respect to the order relation of the problem. The set of objective values of all Pareto optimal points is referred to as the Pareto frontier. For technical reasons, we will instead work with the upper image: the closure of the sum of the image of the feasible set under the objective function and the ordering cone. Crucially, the Pareto frontier is contained in the boundary of the upper image.

In this paper, we focus on convex vector optimization problems (CVOPs), where the objective function and the feasible set are appropriately convex. Linear vector optimization problems (LVOPs) form an important subclass of CVOPs. An important solution concept for LVOPs (see [19]), motivated by set optimization, aims to generate the Pareto frontier. Since the upper image of an LVOP is known to be a polyhedron, it can be generated by finitely many extreme points (vertices) and extreme directions. Methods and algorithms exist for solving LVOPs in this sense; for more details see e.g., [2, 6, 15, 19, 27]. In the case of a CVOP, the upper image is a convex set, but not necessarily polyhedral. Hence, it is generally not possible to compute the Pareto frontier exactly. Instead, there are solution concepts that generate approximations of it.

Whether a CVOP is bounded or unbounded determines which solution concepts and solution methods are available. In vector optimization, a bounded problem is characterized by its upper image (and its Pareto frontier) being a subset of a shifted ordering cone. In the multiobjective case, this simplifies to each objective being bounded from below on the feasible set. Note that unbounded problems are encountered, for instance, in the computation of indifference prices under incomplete preferences [25] or in the implementation of the set-valued Bellman principle [18].

Bounded problems comprise the less challenging class: the Pareto frontier of a bounded LVOP can be generated by finding its extreme points. For a bounded CVOP, one aims to find finitely many Pareto optimal elements that generate both inner and outer polyhedral approximations of the Pareto frontier. There are solution algorithms, such as those of [1, 8, 9, 20], capable of solving bounded CVOPs in this sense.

Unbounded problems present an additional challenge: one also has to compute (or approximate) the recession directions of the upper image. One of the solution methods for unbounded LVOPs can be found in [19]. In the first phase, the recession cone of the upper image is computed by solving a modified LVOP (the so-called homogeneous problem), which is of the same dimension as the original problem but known to be bounded. In the second phase, the ordering cone is replaced by the computed recession cone, which transforms the original unbounded LVOP into an equivalent bounded LVOP. Recently, [30] proposed a solution concept and a solution algorithm for unbounded CVOPs. The solution approach is similar to the linear case and consists of two phases. In the first phase, an outer approximation of the recession cone is computed algorithmically. This outer approximation is then used to transform the problem into a bounded one so that existing algorithms can be applied in the second phase.

In this paper, we propose an alternative way of approximating the recession cone of the upper image, that is, an alternative to the first phase of [30]. In [30], this is done by solving Pascoletti-Serafini scalarizations (see [24]) and updating an approximation of the desired recession cone in an iterative manner. Instead of approximating the recession cone itself as in [30], we consider its dual cone. We use a characterization of the dual cone from [29] given in terms of the well-known weighted sum scalarizations. We observe that for some classes of CVOPs, it is possible to write the dual of the recession cone explicitly. Then, computing this set reduces to solving a bounded convex projection problem [18]. For the special case of LVOPs, it is possible to compute the recession cone exactly by solving a bounded polyhedral projection problem [21]. Moreover, in this case, it is possible to reduce the dimension of the projection problem by one.

When the dimension of the objective space is two, the procedure simplifies further: it reduces to solving two convex (or linear, if the corresponding problem is linear) scalar optimization problems. Compared to applying the algorithm from [30] or solving a two-dimensional LVOP as in [19], solving two convex or linear programs is simpler and more efficient.

The structure of the paper is as follows. In Sect. 2 we provide notation and preliminaries. In Sect. 3 we introduce the convex vector optimization problem and the relevant solution concepts. Section 4 introduces a method for approximating the recession cone of the upper image based on its connection to the set of weights for which the weighted sum scalarization is bounded. In Sect. 5 we discuss particular problem classes for which this method yields a representation in the form of a convex projection problem. Section 6 provides examples.

2 Preliminaries

Let \(q \in \mathbb {N}\) and \(\mathbb {R}^q\) be the q-dimensional Euclidean space. Throughout the paper, we primarily use the \(\ell _2\) norm \(\Vert y\Vert := \left\| y \right\| _2 = \left( \sum _{i=1}^q \left| y_i \right| ^2 \right) ^{\frac{1}{2}}\) on \(\mathbb {R}^q\). We will shortly remark on results under the \(\ell _p\) norm \(\left\| y \right\| _p = \left( \sum _{i=1}^q \left| y_i \right| ^p \right) ^{\frac{1}{p}}\) for \(p \in [1, \infty )\) and the \(\ell _\infty \) norm \(\left\| y \right\| _\infty = \max _{i\in \{1,\ldots ,q\}} \left| y_i \right| \). The (closed \(\ell _2\)) ball centered at point \(c \in \mathbb {R}^q\) with radius \(r > 0\) is denoted by \(B(c,r):=\{y \in \mathbb {R}^q \mid \left\| y-c \right\| \le r\}\).

The interior, closure, boundary, and convex hull of a set \(A \subseteq \mathbb {R}^q\) are denoted by \(\mathrm{int \,}A, \mathrm{cl \,}A, \mathrm{bd \,}A\), and \(\mathrm{conv \,}A\), respectively. The (convex) conic hull of A,

$$\begin{aligned} \mathrm{cone\,}A:= \left\{ \sum _{i=1}^n \lambda _ia_i \mid n\in \mathbb {N}, \lambda _1,\ldots ,\lambda _n \ge 0, a_1,\ldots ,a_n \in A\right\} , \end{aligned}$$

is the set of all conic combinations of points from A. The recession cone of a set A is

$$\begin{aligned} A_{\infty } = \left\{ d \in \mathbb {R}^q \mid a + \lambda d \in A \quad \forall a \in A, \lambda \ge 0 \right\} . \end{aligned}$$
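For example, \(A_\infty = \{0\}\) for any nonempty bounded set A, while \(A_\infty = A\) for any closed convex cone A.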

For two sets \(A,B\subseteq \mathbb {R}^q\), their sum is understood as their Minkowski sum

$$\begin{aligned} A+B:=\{a+b \in \mathbb {R}^q \mid a \in A, b\in B\}, \end{aligned}$$

and their distance is measured via the Hausdorff distance

$$\begin{aligned} d^H(A,B):= \max \left\{ \sup _{a\in A} \inf _{b\in B} \left\| a-b \right\| ,\sup _{b\in B} \inf _{a\in A} \left\| a-b \right\| \right\} . \end{aligned}$$

If a different norm is considered, the Hausdorff distance can be defined analogously. We denote by \(A-B\) the set \(A + (-1)\cdot B = \{a-b \mid a\in A, b\in B\}\).

A set \(A \subseteq \mathbb {R}^q\) is a polyhedron if it can be identified through finitely many vertices \(v_1, \dots , v_{k_v} \in \mathbb {R}^q, k_v \in \mathbb {N}\) and directions \(d_1, \dots , d_{k_d} \in \mathbb {R}^q {\setminus } \{0\}, k_d \in \mathbb {N}\cup \{0\}\) as

$$\begin{aligned} A = \mathrm{conv \,}\{v_1, \dots , v_{k_v}\} + \mathrm{cone\,}\{d_1, \dots , d_{k_d}\}. \end{aligned}$$
(1)

A polyhedron can also be represented as a finite intersection of halfspaces.

The dual cone of a set \(A \subseteq \mathbb {R}^q\) is \(A^+:= \{ w \in \mathbb {R}^q \mid \forall a \in A: w^\textsf{T}a \ge 0\}\). A cone \(C \subseteq \mathbb {R}^q\) is nontrivial if \(\{0\} \subsetneq C \subsetneq \mathbb {R}^q\). It is pointed if it does not contain any line through the origin. A cone \(C \subseteq \mathbb {R}^q\) generates an order on \(\mathbb {R}^q\) given through

$$\begin{aligned} x \le _C y \iff y \in \{x\} + C \end{aligned}$$

for \(x, y \in \mathbb {R}^q\). If C is a nontrivial, pointed, convex ordering cone, then \(\le _C\) is a partial order. A function \(f: \mathbb {R}^n \rightarrow \mathbb {R}^q\) is C-convex if for all \(x, y \in \mathbb {R}^n\), and all \(\lambda \in [0, 1]\) it holds

$$\begin{aligned} f(\lambda x + (1-\lambda )y ) \le _C \lambda f(x) + (1-\lambda )f(y). \end{aligned}$$

Convex projection is a problem of the form

$$\begin{aligned} \text {compute } Y = \left\{ y \in \mathbb {R}^m \mid \exists x \in \mathbb {R}^n: (x, y) \in S \right\} , \end{aligned}$$

where \(S \subseteq \mathbb {R}^n\times \mathbb {R}^m\) is a convex feasible set. If the feasible set S is a polyhedron, then the problem is a polyhedral projection. By solving a projection problem we mean computing the set Y (if polyhedral) or an approximation of it (otherwise), in the sense of finding a representation as in (1). More details on polyhedral projections can be found in [21], and on convex projections in [18, 28].
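For instance, for the convex set \(S = \{(x,y) \in \mathbb {R}\times \mathbb {R}\mid x^2 + y^2 \le 1\}\), the projection is \(Y = \{y \in \mathbb {R}\mid \exists x: x^2+y^2 \le 1\} = [-1,1] = \mathrm{conv \,}\{-1,1\}\), a representation as in (1) that needs no directions.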

3 Problem description

In this section, we introduce a convex vector optimization problem (CVOP) and its upper image. The main object of interest in this work is the recession cone of the upper image. Its importance can be seen by the role it plays in the boundedness properties of CVOPs and the appropriate solution concepts.

A convex vector optimization problem is

$$\begin{aligned} \text {minimize } f(x) \quad \text { with respect to} \ \le _C\quad \text { subject to } \ h(x) \le 0, \end{aligned}$$
(P)

where \(C \subseteq {\mathbb {R}}^q\) is a nontrivial, pointed, convex ordering cone with a non-empty interior and \(h:\mathbb {R}^n \rightarrow \mathbb {R}^m\) and \(f: \mathbb {R}^n \rightarrow \mathbb {R}^q\) are continuous functions that are \(\mathbb {R}^m_+\)- and C-convex, respectively. We denote the feasible region by \(\mathcal {X}:= \{x\in \mathbb {R}^n \mid h(x) \le 0\}\) and its image by \(f(\mathcal {X}):=\{f(x) \mid x \in \mathcal {X}\}\). The upper image of (P) is defined as

$$\begin{aligned} \mathcal {P}:=\mathrm{cl \,}\left( f(\mathcal {X})+C \right) . \end{aligned}$$

Here we are particularly interested in the recession cone of the upper image, that is, \(\mathcal {P}_\infty \). We encounter it within the boundedness notions for CVOPs recalled below.

Definition 3.1

[29, Definitions 4.2, 4.5] Problem (P) is called bounded if there is a point \(\hat{p} \in \mathbb {R}^q\) such that \(\mathcal {P}\subseteq \{\hat{p}\}+C\); and unbounded otherwise. Problem (P) is called self-bounded if \(\mathcal {P} \ne \mathbb {R}^q\) and there is a point \(\hat{p} \in \mathbb {R}^q\) such that \(\mathcal {P}\subseteq \{\hat{p}\}+\mathcal {P}_\infty \).

Note that for a bounded problem, it holds \(\mathcal {P}_\infty = \mathrm{cl \,}C\), see [18, Lemma 2.2]. A bounded problem is, in particular, always self-bounded. However, an unbounded problem can be self-bounded or not. An illustration is provided in Fig. 1. We refer readers to [29] for more examples and more in-depth discussion.

Fig. 1

The image \(f(\mathcal {X})\) and the upper image \(\mathcal {P}\) for the problem with \(f(x)= (x,x^2)^\textsf{T}\), \(\mathcal {X} = \mathbb {R}\) and \(C = \mathbb {R}^2_+\). The bold line indicates \(\mathrm{bd \,}\mathcal {P}\cap f(\mathcal {X})\) and the light region shows \(\{\hat{p}\}+C\) for \(\hat{p} = (-8,0)^\textsf{T}\). The recession cone of the upper image is \(\mathbb {R}^2_+\). This problem is neither bounded nor self-bounded

An appropriate solution concept for a CVOP depends on whether the problem is bounded or not. Solution concepts for bounded CVOPs are proposed in [1, 8, 20] and for self-bounded problems in [29]. According to these, a solution consists of finitely many minimal elements on the boundary of the upper image \(\mathcal {P}\) which generate both an inner and an outer approximation of it. The self-bounded case, however, contains challenges: In general, it is difficult to check if a CVOP is self-bounded. Moreover, the solution concept of [29] includes the recession cone \(\mathcal {P}_\infty \). However, computing \(\mathcal {P}_\infty \) exactly may not be possible if it is not polyhedral.

Recently, a generalized solution concept was proposed in [30], which includes an approximation of the recession cone \(\mathcal {P}_\infty \) of the upper image. This solution concept is tailored for unbounded problems, but it is applicable to any CVOP regardless of whether it is (self-)bounded. Similarly to the above, it also yields polyhedral approximations of the upper image. We provide this solution concept explicitly below, as it illustrates the importance of approximating the recession cone \(\mathcal {P}_\infty \).

First, we define approximations of a convex cone. Interested readers can also compare this to the definition in [7] for convex sets.

Definition 3.2

Let \(K\subseteq \mathbb {R}^q\) be a convex cone. A finite set \(\mathcal {Y}\subseteq \mathbb {R}^q\) is called a finite \(\delta \)-outer approximation of K if \(K \subseteq \mathrm{cone\,}\mathcal {Y}\) and \(d^H \left( K \cap B(0,1), \mathrm{cone\,}\mathcal {Y}\cap B(0,1) \right) \le \delta \). Similarly, a finite set \(\mathcal {Z}\subseteq \mathbb {R}^q\) is called a finite \(\delta \)-inner approximation of K if \(K \supseteq \mathrm{cone\,}\mathcal {Z}\) and \(d^H \left( K \cap B(0,1), \mathrm{cone\,}\mathcal {Z}\cap B(0,1) \right) \le \delta \).

Definition 3.2 differs slightly from [30, Definition 3.3]: Here the \(\ell _2\) norm (and the corresponding Hausdorff distance) is used, while [30] applied the \(\ell _1\) norm to measure distance. The \(\ell _1\) norm was chosen in [30] primarily for algorithmic reasons. Here we opt for the \(\ell _2\) norm for pragmatic reasons: Since we will work with dual cones, the \(\ell _2\) norm has the advantage of being self-dual. Alternatively, we could work with the pair of \(\ell _1\) and \(\ell _\infty \) norms, but this would create a cumbersome terminology. When the choice of the norm(s) impacts the results of the paper, we provide corresponding remarks.

Now we can define a solution of a CVOP, where \(c \in \mathrm{int \,}C\) with \(\Vert c \Vert = 1\) is a fixed element. First, recall that a point \(\bar{x} \in \mathcal {X}\) is called a minimizer for (P) if \(f(\bar{x})\) is a C-minimal element of \(f(\mathcal {X})\), that is, if \((\{f(\bar{x})\}-C\setminus \{0\}) \cap f(\mathcal {X}) = \emptyset \). Similarly, \(\bar{x} \in \mathcal {X}\) is called a weak minimizer for (P) if \(f(\bar{x})\) is a weakly C-minimal element of \(f(\mathcal {X})\), that is, if \((\{f(\bar{x})\}-\mathrm{int \,}C) \cap f(\mathcal {X}) = \emptyset \).

Definition 3.3

A pair \((\bar{\mathcal {X}},\mathcal {Y})\) is a (weak) \((\varepsilon ,\delta )\)-solution of (P) if \(\bar{\mathcal {X}} \ne \emptyset \) is a set of (weak) minimizers for (P), \(\mathcal {Y}\) is a \(\delta \)-outer approximation of \(\mathcal {P}_{\infty }\) and it holds

$$\begin{aligned} \mathcal {P} \subseteq \mathrm{conv \,}f(\bar{\mathcal {X}})+\mathrm{cone\,}\mathcal {Y}-\varepsilon \{c\}. \end{aligned}$$

A (weak) \((\varepsilon ,\delta )\)-solution \((\bar{\mathcal {X}},\mathcal {Y})\) of (P) is a finite (weak) \((\varepsilon ,\delta )\)-solution of (P) if the sets \(\bar{\mathcal {X}},\mathcal {Y}\) consist of finitely many elements.

An approach to compute a solution of a CVOP in the sense of Definition 3.3 is provided in [30]. It was shown that once an outer approximation of \(\mathcal {P}_\infty \) is available, the algorithms for bounded CVOPs can be used to find a solution in the sense of Definition 3.3. This is done by transforming the (unbounded) CVOP into a bounded one by replacing the ordering cone with the outer approximation of \(\mathcal {P}_\infty \). [30] also contains an algorithm for computing a finite \(\delta \)-outer approximation of \(\mathcal {P}_\infty \).

In this paper, we provide an alternative approach to compute a polyhedral approximation of \(\mathcal {P}_\infty \). We consider some special classes of CVOPs for which we can compute a finite \(\delta \)-outer approximation \(\mathcal {Y}\) of \(\mathcal {P}_\infty \) by solving a particular convex projection problem. For example, we will see that if we consider linear vector optimization problems, then we can compute the exact \(\mathcal {P}_\infty \) by solving a polyhedral projection problem.

4 Approximating \(\mathcal {P}_\infty \) via \(\mathcal {P}_\infty ^+\)

Let us propose an approach to compute an approximation of \(\mathcal {P}_\infty \) by approximating its dual \(\mathcal {P}_\infty ^+\). It is based on the known close connection between the dual of the recession cone \(\mathcal {P}_\infty ^+\) and the set of weights for which the weighted sum scalarization of the CVOP is a bounded problem. The boundedness of a scalar (weighted sum scalarization) problem can be verified through the feasibility of its dual problem, assuming strong duality. Expressing the cone \(\mathcal {P}_\infty ^+\) through a set of weights for which the dual problem is feasible can be interpreted through the lens of a projection problem. This interpretation should become clearer for the particular special cases considered in Sect. 5. Solving this projection problem provides an inner approximation of \(\mathcal {P}_\infty ^+\). We show that an inner approximation of \(\mathcal {P}_\infty ^+\) yields an outer approximation of \(\mathcal {P}_\infty \) with an appropriate tolerance.

Let us start by recalling the weighted sum scalarization of (P), which is given by

$$\begin{aligned} \text {minimize } w^\textsf{T}f(x) \quad \text { subject to } \ h(x) \le 0 \end{aligned}$$
(P\(_w\))

for \(w \in \mathbb {R}^q\setminus \{0\}\). It is well known that if \(w \in C^+{\setminus } \{0\},\) then an optimal solution of (P\(_w\)) is a weak minimizer of (P), see [16, Theorem 5.28]. On the other hand, for a weak minimizer \(\bar{x}\in \mathbb {R}^n\), there exists \(w\in C^+\) such that \(\bar{x}\) is an optimal solution to (P\(_w\)), see [16, Theorem 5.13]. This shows that for CVOPs one is interested in solving (P\(_w\)) for \(w \in C^+\). However, the weighted sum scalarization problem may be unbounded for some \(w \in C^+\) if (P) is not bounded. The set of weights for which the weighted sum scalarization is bounded, denoted by

$$\begin{aligned} W:= \{w \in C^+ \mid \inf _{x\in \mathcal {X}} w^\textsf{T}f(x) \in \mathbb {R}\}, \end{aligned}$$

will play an important role. The following proposition gives a relationship between the dual cone of \(\mathcal {P}_\infty \) and W.

Proposition 4.1

[29, Proposition 4.12 and Theorem 4.14] It holds true that \(\mathcal {P}_\infty ^+ = \mathrm{cl \,}W\). If (P) is self-bounded, then \(\mathcal {P}_\infty ^+ = W\). Furthermore, if \(\{0\} \ne \mathcal {P}_\infty ^+ = W\), then the problem is self-bounded.
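To illustrate, consider the problem of Fig. 1 with \(f(x) = (x, x^2)^\textsf{T}\), \(\mathcal {X} = \mathbb {R}\) and \(C = \mathbb {R}^2_+\). Here \(\inf _{x \in \mathbb {R}} (w_1 x + w_2 x^2)\) is finite if and only if \(w_2 > 0\) or \(w = 0\). Hence \(W = \{w \in \mathbb {R}^2_+ \mid w_2 > 0\} \cup \{0\}\) is not closed, and \(\mathcal {P}_\infty ^+ = \mathrm{cl \,}W = \mathbb {R}^2_+ \ne W\), consistent with the fact that this problem is not self-bounded.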

Recall that f is a C-convex function. Then, for all \(w\in C^+\), the function \(w^\textsf{T}f: \mathbb {R}^n \rightarrow \mathbb {R}\) is convex and hence (P\(_w\)) is a convex optimization problem. The Lagrangian \(\mathcal {L}: \mathbb {R}^n \times \mathbb {R}^m \rightarrow \mathbb {R}\) for (P\(_w\)) is given by

$$\begin{aligned} \mathcal {L}(x,\nu ):= w^\textsf{T}f(x) + \nu ^\textsf{T}h (x) \end{aligned}$$

and the dual problem is

$$\begin{aligned} \text {maximize } g(\nu ) \quad \text { subject to } \ \nu \ge 0, \end{aligned}$$
(D\(_w\))

where the dual objective function \(g:\mathbb {R}^m\rightarrow \mathbb {R}\cup \{\pm \infty \}\) is defined as \(g(\nu ):=\inf \limits _{{x \in \mathbb {R}^n}}\mathcal {L}(x,\nu )\). We say that the dual problem is feasible if the feasible region \(\mathbb {R}^m_+ \cap \mathrm{dom\,}g\) is nonempty, that is, \(\{\nu \in \mathbb {R}^m_+ \mid g(\nu ) > -\infty \} \ne \emptyset \). Weak duality holds between the primal and dual problems (P\(_w\)) and (D\(_w\)), that is,

$$\begin{aligned} p^w:= \inf _{x\in \mathcal {X}} w^\textsf{T}f(x) \ge \sup _{\nu \in \mathbb {R}^m_+} g(\nu ) =:d^w. \end{aligned}$$
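Indeed, for any \(x \in \mathcal {X}\) and any \(\nu \in \mathbb {R}^m_+\), we have \(g(\nu ) \le \mathcal {L}(x,\nu ) = w^\textsf{T}f(x) + \nu ^\textsf{T}h(x) \le w^\textsf{T}f(x)\), since \(h(x) \le 0\) and \(\nu \ge 0\).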

Moreover, we say that strong duality holds if the optimal values of the primal and dual problems coincide, that is, \(p^w = d^w\). From now on, we assume the following.

Assumption 4.2

The problem (P) is feasible and it satisfies a constraint qualification such that strong duality holds for the pair (P\(_w\)) and (D\(_w\)) for any \(w \in C^+\).

This assumption is satisfied, for example, if the problem has only affine constraints, or if the generalized Slater condition holds, that is, there exists \(\bar{x} \in \mathrm{ri\,}\mathcal {X}\) such that \(h(\bar{x}) < 0\). Strong duality gives us the following result.

Theorem 4.3

Suppose Assumption 4.2 holds. It holds true that

$$\begin{aligned} W = \{w \in C^{+} \mid (\text {D}_w) \text { is feasible}\}. \end{aligned}$$

Proof

Since (P) is feasible, (P\(_w\)) is also feasible for any \(w\in C^+\). Then, \(p^w < \infty \) holds and weak duality implies that the dual problem (D\(_w\)) is not unbounded. On the other hand, strong duality implies that (P\(_w\)) is bounded if and only if (D\(_w\)) is feasible. \(\square \)

For some classes of convex optimization problems, it is possible to write the constraints of the dual problem (D\(_w\)) explicitly. In the following section, we will consider these classes, for which we will express \(\mathcal {P}_\infty ^+\) explicitly. This will provide a way to compute \(\mathcal {P}_\infty ^+\) or an approximation of it.

Recall that the initial aim was to compute \(\mathcal {P}_\infty \) or its outer approximation. If \(\mathcal {P}_\infty ^+\) is determined by finitely many generators, it is easy to compute (the finitely many generators of) \(\mathcal {P}_\infty \). What if only an (inner) approximation of it is available instead? Will the dual cone of an approximation of \(\mathcal {P}_\infty ^+\) be an approximation of \(\mathcal {P}_\infty \)? The following proposition provides an answer.

Proposition 4.4

Let \(\mathcal {Z}\subseteq \mathbb {R}^q\) be a finite \(\delta \)-inner approximation of \(\mathcal {P}_\infty ^+\) and \(\mathcal {Y}\) be a finite set of generating vectors of \((\mathrm{cone\,}\mathcal {Z})^+\), that is, \((\mathrm{cone\,}\mathcal {Z})^+ = \mathrm{cone\,}\mathcal {Y}\). Then, \(\mathcal {Y}\) is a finite \(\delta \)-outer approximation of \(\mathcal {P}_\infty \).

Proof

Since \(\mathrm{cone\,}\mathcal {Z}\subseteq \mathcal {P}_\infty ^+\), we have \(\mathrm{cone\,}\mathcal {Y}= (\mathrm{cone\,}\mathcal {Z})^+ \supseteq \mathcal {P}_\infty \). Let \(y \in \mathrm{cone\,}\mathcal {Y}\cap B(0,1)\) and assume for contradiction that \((\{y\} + B(0,\delta )) \cap (\mathcal {P}_\infty \cap B(0,1)) = \emptyset \).

First, we show that \((\{y\} + B(0,\delta )) \cap (\mathcal {P}_\infty \cap B(0,1)) = \emptyset \) implies \((\{y\} + B(0,\delta )) \cap \mathcal {P}_\infty = \emptyset \). Suppose, also for contradiction, that this is not the case; then there exists \(b \in B(0,\delta )\) such that \(y + b \in \mathcal {P}_\infty \setminus B(0,1)\). Since \(\mathcal {P}_\infty \) is a cone, it holds \(\lambda (y + b) \in \mathcal {P}_\infty \) for all \(\lambda \ge 0\). Consider the convex quadratic optimization problem

$$\begin{aligned} \min _{\lambda \ge 0} \Vert \lambda (y + b) - y \Vert ^2 = \min _{\lambda \ge 0} \{\lambda ^2 \Vert y + b \Vert ^2 - 2\lambda y^\textsf{T}(y+b) + \Vert y \Vert ^2\}. \end{aligned}$$
(2)

The quadratic problem (2) is solved by \(\lambda ^* = \frac{y^\textsf{T}(y+b)}{\Vert y + b \Vert ^2}\) if \(y^\textsf{T}(y+b) \ge 0\) and by \(\lambda ^* = 0\) otherwise. First, consider the case \(y^{\textsf{T}}(y+b) < 0\): the quadratic objective is increasing on \([\frac{y^\textsf{T}(y+b)}{\Vert y+b\Vert ^2},\infty ) \supseteq [0,1]\), so comparing its values at \(\lambda = 0\) and \(\lambda = 1\) gives \(\left\| y \right\| \le \left\| b \right\| \le \delta \). Therefore, the vector \(0 \in (\{y\} + B(0,\delta )) \cap (\mathcal {P}_\infty \cap B(0,1))\) provides the desired contradiction for the case of \(y^{\textsf{T}}(y+b) < 0\). Second, consider the case \(y^{\textsf{T}}(y+b) \ge 0\), where the Cauchy-Schwarz inequality yields \(\lambda ^* (y+b) \in B(0,1)\) as

$$\begin{aligned} \lambda ^* = \frac{y^\textsf{T}(y+b)}{\Vert y + b \Vert ^2} \le \frac{\Vert y \Vert \Vert y + b \Vert }{\Vert y + b \Vert ^2} \le \frac{1}{\Vert y + b \Vert }. \end{aligned}$$

Since \(\lambda ^*\) is an optimal solution of (2), we also obtain \(\Vert \lambda ^* (y + b) - y \Vert \le \Vert (y + b) - y \Vert = \Vert b \Vert \le \delta \). Therefore, vector \(\lambda ^* (y+b) \in (\{y\} + B(0,\delta )) \cap (\mathcal {P}_\infty \cap B(0,1))\) provides the desired contradiction for the case of \(y^{\textsf{T}}(y+b) \ge 0\), and the implication is proven.

Second, we use \((\{y\} + B(0,\delta )) \cap \mathcal {P}_\infty = \emptyset \) to show that the initial assumption cannot hold. By separation arguments, there exists \(w \in \mathbb {R}^q \setminus \{0\}\) such that \(w^\textsf{T}(y - b)< w^\textsf{T}p\) for all \(b \in B(0,\delta ), p\in \mathcal {P}_\infty \). In particular, \(w \in \mathcal {P}_\infty ^+\) and \(w^\textsf{T}(y - b)< 0\) for all \(b \in B(0,\delta )\). Without loss of generality, we may assume \(\left\| w \right\| = 1\). The choice of \(\bar{b} = -\delta w\) shows that it holds \(w^\textsf{T}y < w^\textsf{T}\bar{b} = -\delta \). On the other hand, since \(w \in \mathcal {P}_\infty ^+ \cap B(0,1)\), there exists \(z \in \mathrm{cone\,}\mathcal {Z}\cap B(0,1)\) such that \(\left\| w-z \right\| \le \delta .\) Since \(z \in \mathrm{cone\,}\mathcal {Z}, y\in \mathrm{cone\,}\mathcal {Y}\), we have \(y^\textsf{T}z \ge 0. \) Then, using the Cauchy-Schwarz inequality, we obtain

$$\begin{aligned} 0 \le y^\textsf{T}z = y^\textsf{T}(z-w) + y^\textsf{T}w \le \left\| y \right\| \left\| z-w \right\| + y^\textsf{T}w < 0, \end{aligned}$$

which is a contradiction. \(\square \)

Let us now address the issue of the norm used. The above proposition holds for the (self-dual) \(\ell _2\) norm. Do we get a similar result for other (dual pairs of) norms? For computational purposes, the pair of \(\ell _1\) and \(\ell _\infty \) norms with polyhedral unit balls are particularly important. The following remark shows that, in the general case, the tolerance is increased, but by less than a factor of two.

Remark 4.5

Let \(p, r \in [1, \infty ]\) satisfy \(\frac{1}{p} + \frac{1}{r} = 1\) and consider the dual pair of \(\ell _p\) and \(\ell _{r}\) norms alongside an appropriately adapted Definition 3.2 of an approximation of a cone. The following can be shown: If \(\mathcal {Z}\subseteq \mathbb {R}^q\) is a finite \(\delta \)-inner approximation of \(\mathcal {P}_\infty ^+\) in \(\ell _p\) and \(\mathcal {Y}\) is a finite set of generating vectors of \((\mathrm{cone\,}\mathcal {Z})^+\), then \(\mathcal {Y}\) is a finite \(\frac{2\delta }{1+\delta }\)-outer approximation of \(\mathcal {P}_\infty \) in \(\ell _{r}\).

We sketch the proof of this claim. Let \(B_p\) and \(B_{r}\) denote the closed balls with respect to the \(\ell _p\) and \(\ell _{r}\) norms. Let \(y \in \mathrm{cone\,}\mathcal {Y}\cap B_{r}(0,1)\). First, we prove by contradiction that \((\{y\} + B_{r}(0, \frac{2\delta }{1+\delta })) \cap (\mathcal {P}_\infty \cap B_{r}(0,1)) = \emptyset \) implies \((\{y\} + B_{r}(0,\delta )) \cap \mathcal {P}_\infty = \emptyset \): Assume that \(y + b \in \mathcal {P}_\infty {\setminus } B_{r}(0,1)\) for some \(b \in B_{r}(0,\delta )\) and consider the convex optimization problem

$$\begin{aligned} \min _{\lambda \ge 0} \left\| \lambda (y + b) - y \right\| _{r} . \end{aligned}$$
(3)

Since a coercive function attains a minimum over a closed set, there exists an optimal solution \(\lambda ^*\) satisfying \(\left\| \lambda ^* (y+b) - y \right\| _{r} \le \left\| b \right\| _{r} \le \delta \). Hence, \(\left\| \lambda ^* (y+b) \right\| _{r} \le \left\| y \right\| _{r} + \left\| \lambda ^* (y+b) - y \right\| _{r} \le 1+ \delta \). The point \(\frac{1}{1+\delta }\lambda ^* (y+b) \in \mathcal {P}_\infty \cap B_{r}(0,1)\) provides the desired contradiction since \(\frac{1}{1+\delta }\lambda ^* (y+b) \in \{y\} + B_{r}(0, \frac{2\delta }{1+\delta })\) follows from

$$\begin{aligned} \left\| \frac{1}{1+\delta }\lambda ^* (y+b) - y \right\| _{r} \le \frac{1}{1+\delta } \left\| \lambda ^* (y+b) - y \right\| _{r} + \frac{\delta }{1+\delta } \left\| y \right\| _{r} \le \frac{\delta }{1+\delta } + \frac{\delta }{1+\delta }. \end{aligned}$$

Second, we show that \((\{y\} + B_{{r}}(0,\delta )) \cap \mathcal {P}_\infty = \emptyset \) leads to a contradiction: By a separation argument, there exists \(w \in \mathcal {P}_\infty ^+ \setminus \{0\}\) such that \(w^\textsf{T}(y+b) < 0\) for all \(b \in B_{{r}}(0, \delta )\) and, therefore, \(w^\textsf{T}y < -\delta \). Since we can without loss of generality assume \(w \in \mathcal {P}_\infty ^+ \cap B_p (0, 1)\), there must exist \(z \in \mathrm{cone\,}\mathcal {Z}\cap B_p (0,1)\) such that \(\left\| w - z \right\| _p \le \delta \). Since \((\mathrm{cone\,}\mathcal {Z})^+ = \mathrm{cone\,}\mathcal {Y}\), we get a contradiction

$$\begin{aligned} 0 \le z^\textsf{T}y = y^\textsf{T}(z-w) + y^\textsf{T}w \le \left\| y \right\| _{r} \left\| z-w \right\| _p + y^\textsf{T}w < 1 \cdot \delta - \delta = 0. \end{aligned}$$

Remark 4.6

In [31], a slightly different Hausdorff distance for closed convex cones \(K_1,K_2 \subseteq \mathbb {R}^q\) is defined as

$$\begin{aligned} d^W(K_1,K_2):=\max \{\sup _{k_1 \in K_1 \cap B_p(0,1)} \inf _{k_2\in K_2} \left\| k_1-k_2 \right\| _p, \sup _{k_2 \in K_2 \cap B_p(0,1)} \inf _{k_1\in K_1} \left\| k_1-k_2 \right\| _p\}, \end{aligned}$$

where \(p \in [1, \infty ]\). If this Hausdorff distance is used to define \(\delta \)-inner and -outer approximations of cones in Definition 3.2, then Proposition 4.4 holds, that is, the approximation tolerance is preserved, for any dual pairs of norms by [31, Theorem 1]. However, this result cannot be applied to our case since the two distance measures do not coincide in general. To see this, consider \(K_1 = \mathrm{cone\,}\{(0.2, 0.8)^\textsf{T}\}\) and \(K_2 = \mathrm{cone\,}\{(0.4, 0.6)^\textsf{T}\} \subseteq \mathbb {R}^2\). If we use the \(\ell _1\) norm in both measures, we obtain \(d^W (K_1,K_2) = \frac{1}{3} \), while \(d^H(K_1\cap B_1(0,1),K_2\cap B_1(0,1)) = 0.4\).

Proposition 4.4 suggests that by computing a \(\delta \)-inner approximation of \(\mathcal {P}^+_\infty \), we can generate a \(\delta \)-outer approximation of \(\mathcal {P}_\infty \), which can be used to compute a finite \((\varepsilon ,\delta )\)-solution to problem (P). Note that it is sufficient to consider the set W since, by Proposition 4.1, this set corresponds, up to closure, to the cone \(\mathcal {P}_\infty ^+\) of interest. What about the closure? The set W can be computed exactly if it is polyhedral (hence closed). Otherwise, it needs to be approximated, in which case an approximation of W is also an approximation of its closure.

From a practical point of view, instead of computing or approximating the cone W, we will compute or approximate the bounded convex set

$$\begin{aligned} W_c := W \cap \{ w \in \mathbb {R}^q \mid w^\textsf{T}c \le 1\} \end{aligned}$$
(4)

for a fixed \(c \in \mathrm{int \,}C\) with \(\left\| c \right\| = 1\). The next proposition shows that the set W can be approximated through an approximation of \(W_c\).

Proposition 4.7

Let \(W_c\) be as defined in (4) for some \(c \in \mathrm{int \,}C\) with \(\left\| c \right\| = 1\) and let \(\delta \in (0,1)\) be a tolerance. Assume that \(\bar{W}\) is a finite \(\delta \)-inner approximation of \(W_c\) in the sense that it holds

$$\begin{aligned} \bar{W} \subseteq W_c\quad \text {and} \quad d^H (W_c, \mathrm{conv \,}\bar{W})\le \delta . \end{aligned}$$

Then, \(\bar{W}\) is also a finite \(\delta \)-inner approximation of the cone W.

Proof

Consider an element \(w \in W \cap B(0,1)\). Note that the Cauchy-Schwarz inequality implies \(W \cap B(0,1) \subseteq W_c\), therefore, there exists \(\bar{w} \in \mathrm{conv \,}\bar{W}\) with \(\left\| w - \bar{w} \right\| \le \delta \). Our proof would be finished if \(\left\| \bar{w} \right\| \le 1\). We proceed with the case \(\left\| \bar{w} \right\| > 1\), where we show that the orthogonal projection \(\frac{w^\textsf{T}\bar{w}}{\bar{w}^\textsf{T}\bar{w}} \bar{w}\) provides the desired bound: Firstly, since \(w^\textsf{T}\bar{w} = \frac{1}{2} \left( \left\| w \right\| ^2 + \left\| \bar{w} \right\| ^2 - \left\| w - \bar{w} \right\| ^2 \right)> {\frac{1}{2}} (1 - \delta ^2) > 0\), we know that \(\frac{w^\textsf{T}\bar{w}}{\bar{w}^\textsf{T}\bar{w}} \bar{w} \in \mathrm{cone\,}\bar{W}\). Secondly, \(\frac{w^\textsf{T}\bar{w}}{\bar{w}^\textsf{T}\bar{w}} \bar{w} \in B(0,1)\) holds since \(\left\| \frac{w^\textsf{T}\bar{w}}{\bar{w}^\textsf{T}\bar{w}} \bar{w} \right\| = \frac{\vert w^\textsf{T}\bar{w}\vert }{\left\| \bar{w} \right\| } \le \frac{\left\| w \right\| \left\| \bar{w} \right\| }{\left\| \bar{w} \right\| } \le 1\). And thirdly, for \(\frac{w^\textsf{T}\bar{w}}{\bar{w}^\textsf{T}\bar{w}} = \mathop {\mathrm {arg\,min}}_{\alpha \in \mathbb {R}} \left\| w - \alpha \bar{w} \right\| \) it holds \(\left\| w - \frac{w^\textsf{T}\bar{w}}{\bar{w}^\textsf{T}\bar{w}} \bar{w} \right\| \le \left\| w - \bar{w} \right\| \le \delta \), which proves the claim. \(\square \)

In light of Remark 4.5, let us again address different norms in the context of Proposition 4.7. Keep in mind that the \(\ell _1\) and \(\ell _\infty \) norms are relevant for computational purposes.

Remark 4.8

Let \(p, {r} \in [1, \infty ]\) satisfy \(\frac{1}{p} + \frac{1}{{r}} = 1\) and use \(c \in \mathrm{int \,}C\) with \(\left\| c \right\| _{{r}} = 1\) to define the set \(W_c\). The following can be shown: If \(\bar{W}\) is a finite \(\delta \)-inner approximation of \(W_c\) in \(\ell _p\), then \(\bar{W}\) is a finite \(\frac{2\delta }{1+\delta }\)-inner approximation of the cone W in \(\ell _p\).

We sketch the proof again. Let \(w \in W \cap B_p (0,1)\). Since Hölder's inequality implies \(w \in W_c\), there exists \(\bar{w} \in \mathrm{conv \,}\bar{W}\) satisfying \(\left\| w - \bar{w} \right\| _p \le \delta \). Consider the convex optimization problem

$$\begin{aligned} \min \limits _{\alpha \ge 0} \left\| w - \alpha \bar{w} \right\| _p. \end{aligned}$$
(5)

Since a coercive function attains its minimum over a closed set, there exists an optimal solution \(\alpha ^*\) satisfying \(\left\| w - \alpha ^* \bar{w} \right\| _p \le \left\| w - \bar{w} \right\| _p \le \delta \) and \(\left\| \alpha ^* \bar{w} \right\| _p \le \left\| w \right\| _p + \left\| \alpha ^* \bar{w} - w \right\| _p \le 1 + \delta \). The claim follows from \(\frac{1}{1+\delta } \alpha ^* \bar{w} \in \mathrm{cone\,}\bar{W} \cap B_p (0,1)\) satisfying

$$\begin{aligned} \left\| w - \frac{1}{1+\delta } \alpha ^* \bar{w} \right\| _p \le \frac{\delta }{1+\delta } \left\| w \right\| _p + \frac{1}{1+\delta } \left\| w- \alpha ^* \bar{w} \right\| _p \le \frac{\delta }{1+\delta } + \frac{\delta }{1+\delta }. \end{aligned}$$

A cone is determined by its base, such as \(W \cap \{ w \in \mathbb {R}^q \mid w^\textsf{T}c = 1\}\). However, a full-dimensional set \(W_c\) is preferable for computational purposes. Alternatively, one could aim to replace the base with a \((q-1)\)-dimensional set generating it. Assume without loss of generality that \(c_q \ne 0\). Let \(c_{-q}\in \mathbb {R}^{q-1}\) denote the first \(q-1\) components of c and \(w:\mathbb {R}^{q-1} \rightarrow \mathbb {R}^q\) be defined as

$$\begin{aligned} w (\lambda ):= (\lambda ^\textsf{T}, \frac{1-\lambda ^\textsf{T}c_{-q}}{c_q})^\textsf{T}\end{aligned}$$

so that \(c^\textsf{T}w(\lambda ) = 1\) holds for all \(\lambda \in \mathbb {R}^{q-1}\). Then for the bounded set

$$\begin{aligned} \Lambda := \{\lambda \in \mathbb {R}^{q-1} \mid w(\lambda )\in C^+, \ (\text {D}_w) \text { is feasible for } w = w(\lambda )\}, \end{aligned}$$
(6)

we have \( W = \mathrm{cone\,}\{w(\lambda )\in \mathbb {R}^q \mid \lambda \in \Lambda \}\) by construction.

In particular, for the \(q=2\) case, the set \(\Lambda \) is a bounded interval and it suffices to solve two scalar problems to compute the bounds

$$\begin{aligned} \inf \{\lambda \in \mathbb {R}\mid w(\lambda )\in C^+, \ (\text {D}_w) \text { is feasible for } w = w(\lambda )\} \end{aligned}$$

and

$$\begin{aligned} \sup \{\lambda \in \mathbb {R}\mid w(\lambda )\in C^+, \ (\text {D}_w) \text { is feasible for } w = w(\lambda )\}. \end{aligned}$$

The drawback of considering the \((q-1)\)-dimensional set \(\Lambda \) arises if the set cannot be computed exactly, but has to be approximated: the approximation error for the set \(\Lambda \) is not preserved for the cone W, and the bound on the tolerance depends on the particular choice of the vector c. Nevertheless, we consider the approach through the set \(\Lambda \) useful in at least two cases: (1) if the set \(\Lambda \) can be computed exactly; (2) in the \(q=2\) case, when the interval \(\Lambda \) is approximated through two scalar problems, since solvers for scalar problems can in practice achieve significantly tighter precision than algorithms for multiobjective problems or for projection problems.

5 Computations for special cases

The solution approach presented in Sect. 4 is applicable to problems for which Assumption 4.2 holds and the set

$$\begin{aligned} W = \{w \in C^+ \mid (\text {D}_w) \text { is feasible}\} \end{aligned}$$

can be expressed explicitly through the constraints of the dual problem. In this section, we will discuss three cases for which we can write the dual problem, hence the set W, explicitly.

We start with the relatively wide class of semidefinite problems, which is well studied in the single-objective case and has many application areas; see for instance the review paper [3]. There are also studies in the literature that consider the class of semidefinite vector optimization problems, see [11, 12, 32]. As in the single-objective case, the arguments of [3] can be straightforwardly extended to show that linear vector optimization and convex quadratic vector optimization problems with polyhedral ordering cones are special cases of semidefinite vector problems. Nevertheless, we also address linear and quadratic problems individually and provide further observations.

For the problems we consider below, the set of weights W, and consequently also the sets \(W_c\) of (4) and \(\Lambda \) of (6), have the form of a convex (or polyhedral) projection. Methods for solving convex (or polyhedral) projections can, therefore, be used to approximate (or compute) the set \(W_c\) (or the set \(\Lambda \)). In the light of Proposition 4.7, we obtain an approximation of \(\mathcal {P}_\infty ^+\). Finally, the dual cone of this approximation provides the desired approximation of the recession cone \(\mathcal {P}_\infty \) of the upper image, as Proposition 4.4 shows.

An outer approximation of the recession cone is needed to solve a CVOP in the sense of Definition 3.3. The method proposed in this paper can be used to replace the first phase of the algorithm proposed in [30]. Keep in mind that if the problem is self-bounded, then the recession cone itself can also be used to solve the problem. If this is not the case, however, an outer approximation of it is needed even if it is possible to compute \(\mathcal {P}_\infty \) exactly. In the light of Proposition 4.1, unless the set W is known to be closed, we need to look for its inner approximation.

5.1 Semidefinite problems

The first class of problems we consider are the semidefinite problems. In the following, \(S^k\) denotes the set of symmetric \(k \times k\) matrices and \(S^k_+\) denotes the set of symmetric, positive semidefinite \(k \times k\) matrices. Consider a semidefinite vector program in inequality form,

$$\begin{aligned} \text {minimize }&\quad P^\textsf{T}x \quad \text { with respect to } \ \le _C \\ \text { subject to }&\quad x_1 F_1 + \ldots + x_n F_n + G \preceq 0, \end{aligned}$$
(SDVP)

for some \(P \in \mathbb {R}^{n\times q}, F_1,\ldots ,F_n, G \in S^k, k\ge 2\). The weighted sum scalarization for a weight \(w\in C^+\) is the scalar semidefinite program

$$\begin{aligned} \text {minimize }&\quad w^\textsf{T}P^\textsf{T}x \\ \text { subject to }&\quad x_1 F_1 + \ldots + x_n F_n + G \preceq 0 \end{aligned}$$

and its Lagrange dual is

$$\begin{aligned} \text {maximize }&\quad \textrm{tr}(GZ) \text { subject to } \quad \textrm{tr}(F_i Z) + e_i^T Pw = 0, \ i\in \{1,\ldots ,n\}, Z \succeq 0. \end{aligned}$$

We refer the reader interested in the derivation of the dual problem to [4, Example 5.11].

Assumption 4.2 on constraint qualification is satisfied if there exists \(x\in \mathbb {R}^n\) such that \(x_1 F_1 + \ldots + x_n F_n + G \prec 0\); see [4, Equation 5.27]. Then strong duality yields the set W in the convex projection form

$$\begin{aligned} W = \{w \in C^+ \mid \exists Z \succeq 0: \ \textrm{tr}(F_i Z) + e_i^T Pw = 0, \; i = 1,\ldots ,n \}, \end{aligned}$$

which can be computed by the method presented in [18].
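To make this concrete, the semidefinite system above gives a simple numerical membership test for W. The following Python sketch is our own illustration (it is not the projection method of [18]); it assumes the cvxpy modeling package with the SCS solver, and the function name in_W is hypothetical:

```python
import numpy as np
import cvxpy as cp

def in_W(w, F, P):
    """Test whether w belongs to W; the caller must ensure w lies in C^+.
    By strong duality, w is in W iff there exists a positive semidefinite Z
    with tr(F_i Z) + e_i^T P w = 0 for all i."""
    k = F[0].shape[0]
    Pw = P @ w                              # entries e_i^T P w
    Z = cp.Variable((k, k), PSD=True)       # dual variable Z >= 0
    constraints = [cp.trace(F[i] @ Z) + Pw[i] == 0 for i in range(len(F))]
    problem = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
    problem.solve(solver=cp.SCS)
    return problem.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)
```

Sampling such tests over a grid of weights would only give a rough picture of W; the method of [18] instead computes polyhedral inner and outer approximations with a guaranteed tolerance.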

In the following subsections, we consider two special cases of semidefinite problems for which further simplification and/or observation can be made.

5.2 Linear problems

The class of linear vector optimization problems is the most studied vector optimization problem class; many solution approaches are available and some of them are already mentioned in Sect. 1. Many existing methods are designed to solve bounded problems. Possibly unbounded LVOPs are considered for instance in [19, 26]. It has been shown that the recession cone can be computed via the homogeneous problem, which is again a q-dimensional LVOP, see [19, Section 4.6]. The parametric simplex method from [26] is a decision space algorithm and provides the recession directions of the upper image at the final stage of the algorithm together with its vertices.

In this section, we show that to compute the recession directions of an LVOP, it is sufficient to solve a polyhedral projection problem where the dimension of the problem can be decreased from q to \(q-1\). Note that the proposed method is an alternative to computing the recession cone via the homogeneous problem from [19] as both methods form the first phase of the solution approach considered in this paper.

Given matrices \(P\in \mathbb {R}^{n\times q}, A \in \mathbb {R}^{m\times n}\), a vector \(b\in \mathbb {R}^m\), and a polyhedral ordering cone C, we consider the linear vector optimization problem

$$\begin{aligned} \text {minimize } P^\textsf{T}x \quad \text { with respect to } \ \le _C\quad \text { subject to } A x \le b. \end{aligned}$$
(LVP)

For a weight vector \(w \in C^+\), the Lagrange dual (D\(_w\)) of the weighted sum scalarization problem (P\(_w\)) is given by

$$\begin{aligned} \text {maximize } -b^\textsf{T}y \quad \text { subject to } \quad A^\textsf{T}y = -P w, \quad y\ge 0. \end{aligned}$$

Applying Proposition 4.1 and Theorem 4.3, we obtain

$$\begin{aligned} \mathcal {P}_\infty ^+ = W = \{w \in C^+ \mid \exists y \ge 0 \ : -P w = A^\textsf{T}y\}. \end{aligned}$$
(7)

The problem of computing the set (7) is a polyhedral projection problem. Closure is not needed on the right-hand side of (7) since the set is a polyhedron. The polyhedral dual cone \(\mathcal {P}_\infty ^+\) can therefore be computed exactly, rather than approximated. This is appropriate: since W is closed, Proposition 4.1 yields \(\mathcal {P}_\infty ^+ = W\) and hence (for \(W \ne \{0\}\)) the linear problem is self-bounded.

As we suggested in Sect. 4, instead of computing the cone (7) in \(\mathbb {R}^q\), we can compute the \((q-1)\)-dimensional set

$$\begin{aligned} \Lambda = \{\lambda \in \mathbb {R}^{q-1} \mid w(\lambda ) \in C^+, \exists y \ge 0 \ : -P w(\lambda ) = A^\textsf{T}y\}, \end{aligned}$$

which also corresponds to solving a polyhedral projection problem. Moreover, we know that \(\Lambda \) is a closed interval if \(q=2\). In this case, to obtain the bounds of this interval, it suffices to solve the following two scalar linear problems

$$\begin{aligned} \text {minimize/maximize }&\quad \lambda \\ \text { subject to }&\quad w(\lambda ) \in C^+, \\&\quad -P w(\lambda )= A^\textsf{T}y,\\&\quad \lambda \in \mathbb {R}, y \ge 0. \end{aligned}$$

5.3 Convex quadratic problems

The last special case that we consider is the class of convex quadratic problems, which is a well-established area of mathematical programming in the scalar case. There are also recent papers that consider this class of problems in the multiobjective setting in different contexts, see [5, 10, 17, 23]. In this section, we show that if the convex quadratic vector optimization problem contains at least one quadratic constraint, then the problem is bounded. Moreover, below we identify several conditions under which it holds \(\mathcal {P}_\infty ^+ = C^+\).

We consider the following convex quadratic vector optimization problem

$$\begin{aligned} \text {minimize }&\quad f(x) \quad \text { with respect to}\ \le _C \\ \text { subject to }&\quad x^\textsf{T}Q_j x + c_j^\textsf{T}x + r_j \le 0, \quad j\in \{1,\ldots ,p \}, \\&\quad A x \le b, \end{aligned}$$
(QVP)

where \(Q_j \in S^{n}_+ \setminus \{0\}, c_j \in \mathbb {R}^n, r_j \in \mathbb {R}\) for \(j\in \{1,\ldots ,p\}\), \(A \in \mathbb {R}^{m\times n}, b\in \mathbb {R}^m\), and the C-convex objective function \(f = (f_1, \ldots ,f_q)^\textsf{T}:\mathbb {R}^n \rightarrow \mathbb {R}^q\) is given by \(f_i(x) = x^\textsf{T}P_i x + d_i^\textsf{T}x\) with \(P_i\in S^{n}, d_i \in \mathbb {R}^n\) for \(i = 1,\ldots ,q.\) Note that f is C-convex if and only if, for all \(w\in C^+\), the function \(w^\textsf{T}f\) is convex, or equivalently, \(\sum _{i=1}^q w_i P_i \succeq 0\). In particular, for \(C \supseteq \mathbb {R}^q_+\), convexity of each objective \(f_1, \dots , f_q\) implies C-convexity of f. For \(C=\mathbb {R}^q_+\), the converse also holds.
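Since the map \(w \mapsto \sum _{i=1}^q w_i P_i\) is linear and the positive semidefinite matrices form a convex cone, for a polyhedral ordering cone the criterion needs to be checked only at finitely many generators of \(C^+\). A minimal Python sketch of this check (the function name and the numerical tolerance are our own choices; the generators of \(C^+\) are assumed to be given):

```python
import numpy as np

def is_C_convex(P_list, Cplus_generators, tol=1e-9):
    """Certify C-convexity of the quadratic objective by verifying that
    sum_i w_i P_i is positive semidefinite at each generator w of C^+;
    by linearity in w, this extends to all of C^+."""
    for w in Cplus_generators:
        M = sum(wi * Pi for wi, Pi in zip(w, P_list))
        if np.linalg.eigvalsh(M).min() < -tol:   # smallest-eigenvalue PSD test
            return False
    return True
```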

Now let us look at what we can learn about the problem. The weighted sum scalarization for a weight vector \(w \in C^+\)

$$\begin{aligned} \text {minimize }&x^\textsf{T}\left( \sum _{i=1}^q w_i P_i\right) x + \left( \sum _{i=1}^q w_i d_i \right) ^\textsf{T}x \\ \text { subject to }&\quad x^\textsf{T}Q_j x + c_j^\textsf{T}x + r_j \le 0, \quad j\in \{1,\ldots ,p \}, \\&\quad A x \le b \end{aligned}$$

yields a dual function

$$\begin{aligned} g(\nu ,\mu ) =&\inf _{x \in \mathbb {R}^n} \left( x^\textsf{T}\left( \sum _{i=1}^q w_i P_i + \sum _{j=1}^p \nu _j Q_j \right) x + \left( \sum _{i=1}^q w_i d_i + \sum _{j=1}^p \nu _j c_j + A^\textsf{T}\mu \right) ^\textsf{T}x\right) \\ {}&+ \nu ^\textsf{T}r -\mu ^\textsf{T}b. \end{aligned}$$

Keeping Theorem 4.3 in mind, we are interested in the weights w for which the dual problem is feasible. Given the infimum term in the dual function, we have feasibility in two cases: if the quadratic expression in x is convex, or if the quadratic expression in x is constant. This yields the following form of the set W,

$$\begin{aligned}&W = \left\{ w \in C^+ \mid \exists \nu \in \mathbb {R}^p_+ : 0 \ne \sum _{i=1}^q w_i P_i + \sum _{j=1}^p \nu _j Q_j \succeq 0 \right\} \ \cup \nonumber \\&\left\{ w \in C^+ \mid \exists \nu \in \mathbb {R}^p_+, \mu \in \mathbb {R}^m_+ : \ \sum _{i=1}^q w_i P_i + \sum _{j=1}^p \nu _j Q_j = 0, \sum _{i=1}^q w_i d_i + \sum _{j=1}^p \nu _j c_j + A^\textsf{T}\mu = 0\right\} . \end{aligned}$$
(8)

Using the structure of W given by (8), we show in the following two propositions that either the set W itself or its closure is equal to \(C^+\) in some standard cases.

Proposition 5.1

Consider problem (QVP). In each of the following cases, \(W = \mathcal {P}_\infty ^+ = C^+\) holds; in particular, the problem is bounded.

(a) There is at least one nonlinear constraint, that is, \(p>0\).

(b) \(P_1,\ldots ,P_q \in S^n\) are linearly independent.

Proof

For each case, we will show \(W=C^+\). This implies \(W = \mathcal {P}_\infty ^+\) and, by Proposition 4.1, the problem is self-bounded. Indeed, it is bounded, as we also have \(\mathcal {P}_\infty ^+ = C^+\).

(a) By convexity, we have \(Q_1, \dots , Q_p \succeq 0\) and \(\sum _{i=1}^q w_i P_i \succeq 0\) for arbitrary \(w \in C^+\). If \(\sum _{i=1}^q w_i P_i \ne 0\), then the choice of \(\nu = 0\) gives \(0 \ne \sum _{i=1}^q w_i P_i + \sum _{j=1}^p \nu _j Q_j \succeq 0\). If \(\sum _{i=1}^q w_i P_i = 0\), then the choice of \(\nu _1 = 1, \nu _2 = \dots = \nu _p = 0\) gives \(0 \ne \sum _{i=1}^q w_i P_i + \sum _{j=1}^p \nu _j Q_j \succeq 0\). This shows that

    $$\begin{aligned} \left\{ w \in C^+ \mid \exists \nu \in \mathbb {R}^p_+: 0 \ne \sum _{i=1}^q w_i P_i + \sum _{j=1}^p \nu _j Q_j \succeq 0 \right\} = C^+. \end{aligned}$$

    Together with (8), this implies that \(W=C^+\).

(b) By (a), it is sufficient to consider problems without nonlinear constraints, that is, \(p=0\). In this case, the cone W given by (8) simplifies to

    $$\begin{aligned} \left\{ w \in C^+ \mid 0 \ne \sum _{i=1}^q w_i P_i \succeq 0 \right\} \cup \left\{ w \in C^+ \mid \sum _{i=1}^q w_i P_i =0, \exists \mu \ge 0 : \ \sum _{i=1}^q w_i d_i + A^\textsf{T}\mu = 0\right\} . \end{aligned}$$

    Since the C-convexity of the objective implies \(\sum _{i=1}^q w_i P_i \succeq 0\) for all \(w \in C^+\), we can write \(W = (C^+\setminus W_1) \cup W_2,\) where

    $$\begin{aligned} \begin{aligned} W_1&:= \left\{ w \in C^+ \mid \sum _{i=1}^q w_i P_i = 0 \right\} , \\ W_2&:= \left\{ w \in C^+ \mid \exists \mu \ge 0 : \ \sum _{i=1}^q w_i P_i =0, \sum _{i=1}^q w_i d_i + A^\textsf{T}\mu = 0\right\} . \end{aligned} \end{aligned}$$
    (9)

If the matrices \(P_1, \dots , P_q\) are linearly independent, then \(W_1 = \{0\}\), since \(\sum _{i=1}^q w_i P_i = 0\) occurs only for \(w=0\). Since \(0 \in W_2\) and \(W = (C^+{\setminus } W_1) \cup W_2\), we conclude \(W=C^+\). \(\square \)

Proposition 5.2

Consider problem (QVP) and assume that the problem is nonlinear, that is, there is at least one nonlinear constraint or objective function. If \(C = \mathbb {R}^q_+\), then \(\mathcal {P}_\infty ^+ = \mathbb {R}^q_+\).

Proof

By Proposition 5.1 (a), it is sufficient to consider problems without nonlinear constraints, that is, \(p=0\). In this case, \(W = (C^+{\setminus } W_1) \cup W_2,\) where \(W_1,W_2\) are as in (9). If \(W_1 = \{0\}\), then \(\mathcal {P}_\infty ^+ = C^+\) follows since \(\mathcal {P}_\infty ^+ = \mathrm{cl \,}W\). Assume \(w \in W_1 {\setminus }\{0\}\). Since \(C^+ = \mathbb {R}^q_+\), we have \(w_j>0\) for some \(j\in \{1,\ldots ,q\}\). Consider the diagonal elements of the matrix \(\sum _{i=1}^q w_i P_i = 0\). Since the matrices \(P_1, P_2, \dots , P_q\) are positive semidefinite for \(C=\mathbb {R}^q_+\), all of their diagonal elements are nonnegative. Then, \(\sum _{i=1}^q w_i P_i = 0\) implies that for every j with \(w_j > 0\), all diagonal elements of the matrix \(P_j\) are zero and, therefore, \(P_j\) is the zero matrix.

Since the problem is not linear, there exists \(i \in \{1, \dots , q\}\) with \(P_i \ne 0\). Then, for any \(w \in W_1\) we can construct a sequence of \(w^{(n)}:= w + \frac{1}{n}e^i \in \mathbb {R}^q_+ {\setminus } W_1\) converging to w. Hence, we conclude \(\mathcal {P}_\infty ^+ = \mathrm{cl \,}\left( \mathbb {R}^q_+ {\setminus } W_1 \right) = \mathbb {R}^q_+\). \(\square \)

We see that the computation of \(\mathcal {P}_\infty ^+\) is only relevant if \(C \ne \mathbb {R}^q_+\), (QVP) has only linear constraints and \(P_1, \dots , P_q\) are linearly dependent. In that case, it can be done via computing sets \(W_1, W_2\) given by (9) and setting \(\mathcal {P}_\infty ^+ = \mathrm{cl \,}\big ((C^+\setminus W_1) \cup W_2\big )\). As long as the ordering cone is polyhedral, \(W_2\) is in the form of a polyhedral projection, so \(\mathcal {P}_\infty ^+\) can be obtained through computations with polyhedra.

6 Numerical examples

In this section, we provide numerical examples to illustrate the proposed solution methodology. We consider a two-dimensional and a three-dimensional linear problem and two semidefinite programming problems with different objective functions minimized over the same feasible set.

Example 6.1

Consider the illustrative two-dimensional linear example

$$\begin{aligned} \min \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \text { w.r.t. } \le _{\mathbb {R}^2_+} \text { s.t. } \begin{pmatrix} -4 &{}\quad -1 \\ -2 &{}\quad -1 \\ -1 &{}\quad -1 \\ -1 &{}\quad -2 \\ -1 &{}\quad -4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \le \begin{pmatrix} -5 \\ -5 \\ -4 \\ -5 \\ -5 \end{pmatrix}. \end{aligned}$$

As outlined in Sect. 5.2, to identify the recession cone of the upper image, it suffices to solve two scalar linear problems,

$$\begin{aligned} \text {minimize / maximize } \; \lambda \quad \text { subject to } \begin{pmatrix} \lambda \\ 1- \lambda \end{pmatrix} \ge 0, \; y \ge 0, \; \begin{pmatrix} 4 &{}\quad 2 &{}\quad 1 &{}\quad 1 &{}\quad 1 \\ 1 &{}\quad 1 &{}\quad 1 &{}\quad 2 &{}\quad 4 \end{pmatrix} y = \begin{pmatrix} \lambda \\ 1- \lambda \end{pmatrix}. \end{aligned}$$

These yield the optimal values \(\lambda _{\min } = 0.2\) and \(\lambda _{\max } = 0.8\), which generate the dual cone \(W = \mathcal {P}_{\infty }^+ = \mathrm{cone\,}\left\{ \begin{pmatrix} 0.2 \\ 0.8 \end{pmatrix}, \begin{pmatrix} 0.8 \\ 0.2 \end{pmatrix} \right\} \) and, consequently, the recession cone of the upper image \(\mathcal {P}_{\infty } = \mathrm{cone\,}\left\{ \begin{pmatrix} -1 \\ 4 \end{pmatrix}, \begin{pmatrix} 4 \\ -1 \end{pmatrix} \right\} \). The cone of weights W, the recession cone \(\mathcal {P}_\infty \), and the upper image \(\mathcal {P}\) are depicted in Fig. 2.
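These two linear programs are small enough to verify directly. A Python sketch using scipy.optimize.linprog (the solver choice and the variable ordering \((\lambda , y_1, \ldots , y_5)\) are our own implementation details):

```python
import numpy as np
from scipy.optimize import linprog

B = np.array([[4., 2., 1., 1., 1.],      # B = -A^T for the constraint matrix above
              [1., 1., 1., 2., 4.]])
# variables z = (lambda, y); equalities encode B @ y = (lambda, 1 - lambda)
A_eq = np.hstack([np.array([[-1.], [1.]]), B])
b_eq = np.array([0., 1.])
bounds = [(0., 1.)] + [(0., None)] * 5   # lambda in [0,1] encodes w(lambda) >= 0
c = np.array([1., 0., 0., 0., 0., 0.])   # objective: lambda
lam_min = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds).x[0]
lam_max = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
print(lam_min, lam_max)                  # 0.2 and 0.8
```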

Fig. 2

Linear problem from Example 6.1. Left: Recession cone \(\mathcal {P}_\infty \) (dark purple) and the set of weights W (lighter yellow). The depicted line \(w_1 + w_2 = 1\) represents the choice of base of the cones, which is used for the two scalar problems solved. Right: Upper image with highlighted recession direction. (Color figure online)

Fig. 3

Linear problem from Example 6.2. Left: Recession cone \(\mathcal {P}_\infty \) (dark purple) and the set of weights W (lighter yellow). The depicted plane \(w_1 + w_2 + w_3 = 1\) represents the choice of the base of cone W. Right: The set \(\Lambda \). (Color figure online)

Example 6.2

Consider the three-dimensional linear problem

$$\begin{aligned} \min \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \text { w.r.t. } \le _{C} \text { s.t. } \begin{pmatrix} -1 &{}\quad -1 &{}\quad -1 \\ -4 &{}\quad -1 &{}\quad -1 \\ -1 &{}\quad -4 &{}\quad -1 \\ -1 &{}\quad -1 &{}\quad -4 \\ -1 &{}\quad -1 &{}\quad \,\,\, 0 \\ -1 &{}\quad \,\,\, 0 &{}\quad -1 \\ \,\,\,0 &{}\quad -1 &{}\quad -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \le \begin{pmatrix} -16 \\ -16 \\ -16 \\ -16 \\ -10 \\ -10 \\ -10 \end{pmatrix}, \end{aligned}$$

where the ordering cone is \(C = \mathrm{cone\,}\left\{ \begin{pmatrix} 4 \\ 2 \\ 2 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \\ 2 \end{pmatrix}, \begin{pmatrix} 4 \\ 0 \\ 2 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 0 \\ 4 \\ 2 \end{pmatrix} \right\} \). We take \(c = (1, 1,1)^\textsf{T}\in \mathrm{int \,}C\), that is, \(w: \mathbb {R}^2 \rightarrow \mathbb {R}^3\) is given by \(w(\lambda )=(\lambda _1,\lambda _2,1-\lambda _1-\lambda _2)^\textsf{T}\). As explained in Sect. 5.2, it is possible to compute the set \(\Lambda \subseteq \mathbb {R}^2\) by solving a bounded polyhedral projection problem; a numerical sketch follows below. The sets \(W, \mathcal {P}_{\infty },\) and \(\Lambda \) are displayed in Fig. 3.
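Computing the full set \(\Lambda \) requires a polyhedral projection solver in the sense of [21]; individual support points of \(\Lambda \), however, can be obtained with an ordinary LP solver. The following Python sketch is our own simplification (a projection solver would instead return the exact vertex representation of \(\Lambda \)); it computes one support point per sampled direction:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[-1., -1., -1.], [-4., -1., -1.], [-1., -4., -1.], [-1., -1., -4.],
              [-1., -1.,  0.], [-1.,  0., -1.], [ 0., -1., -1.]])
G = np.array([[4., 2., 2.], [2., 4., 2.], [4., 0., 2.],   # rows: generators of C,
              [1., 0., 2.], [0., 1., 2.], [0., 4., 2.]])  # so C^+ = {w : G w >= 0}
M = np.array([[1., 0.], [0., 1.], [-1., -1.]])            # w(lam) = M @ lam + m0
m0 = np.array([0., 0., 1.])

def support_point(d):
    """Maximize d^T lam over Lambda; variables z = (lam_1, lam_2, y_1, ..., y_7)."""
    c = np.concatenate([-d, np.zeros(7)])                 # linprog minimizes
    A_eq = np.hstack([M, A.T])                            # -w(lam) = A^T y  (P = I)
    b_eq = -m0
    A_ub = np.hstack([-G @ M, np.zeros((6, 7))])          # w(lam) in C^+
    b_ub = G @ m0
    bounds = [(None, None)] * 2 + [(0., None)] * 7
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:2]

# support points in sampled directions span an inner approximation of Lambda
pts = [support_point(np.array([np.cos(t), np.sin(t)]))
       for t in np.linspace(0., 2 * np.pi, 12, endpoint=False)]
```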

Fig. 4

Semidefinite problem from Example 6.3 with objective \(P_1\) solved for \(\epsilon = 0.08\) (top) and \(\epsilon =0.01\) (bottom). Displayed are inner and outer approximations of the set W (left) and the recession cone \(\mathcal {P}_\infty \) (right)

Example 6.3

We consider the semidefinite problem

$$\begin{aligned} \text {minimize }&\quad P^\textsf{T}x \quad \text { with respect to } \ \le _{\mathbb {R}^3_+} \\ \text { subject to }&\quad x_1 \begin{pmatrix} -1 &{}\quad 2 \\ 2 &{}\quad 4 \end{pmatrix} + x_2 \begin{pmatrix} 2 &{}\quad 1 \\ 1 &{}\quad -1 \end{pmatrix} + x_3 \begin{pmatrix} 2 &{}\quad 2 \\ 2 &{}\quad 2 \end{pmatrix} \preceq 0 \end{aligned}$$

for objectives given by matrices

$$\begin{aligned} P_1 = \begin{pmatrix} 0 &{}\quad 0 &{}\quad -1\\ -1 &{}\quad 1 &{}\quad 0\\ 1 &{}\quad 1 &{}\quad -1 \end{pmatrix} \text { and } P_2 = \begin{pmatrix} 1 &{}\quad 0 &{}\quad -1\\ -1 &{}\quad 1 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad -1 \end{pmatrix}. \end{aligned}$$

We find an approximation of the recession cone \(\mathcal {P}_\infty \) via the set of weights

$$\begin{aligned} W = \{w \in \mathbb {R}^3_+ \mid \exists Z \succeq 0: \ \textrm{tr}(F_i Z) + e_i^T Pw = 0, \; i = 1,2,3 \} \end{aligned}$$

through the convex projection problem of (approximately) computing the set

$$\begin{aligned} W_c = \{ w \in \mathbb {R}^3_+ \mid \exists Z \succeq 0: \ \textrm{tr}(F_i Z) + e_i^T Pw = 0, \; i = 1,2,3, \; w_1 + w_2 + w_3 \le \sqrt{3} \}. \end{aligned}$$

The convex projection yields both inner and outer approximations of the set \(W_c\), which in turn generate inner and outer approximations of the cones W and \(\mathcal {P}_\infty \). All of them are displayed in Fig. 4 for the problem with objective \(P_1\). The outer approximation of \(\mathcal {P}_\infty \), whose tolerance is guaranteed by Propositions 4.4 and 4.7, is the one needed as part of a solution. A numerical sketch of this computation is given below.
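A single support point of \(W_c\) in a given direction can be computed with an off-the-shelf SDP solver; sampling several directions yields a crude inner approximation (the convex hull of the support points) and outer approximation (the intersection of the supporting halfspaces) in the spirit of Fig. 4. A Python sketch with the data of objective \(P_1\), using cvxpy with the SCS solver (our own simplification of the projection algorithm of [18]):

```python
import numpy as np
import cvxpy as cp

F = [np.array([[-1., 2.], [2., 4.]]),
     np.array([[ 2., 1.], [1., -1.]]),
     np.array([[ 2., 2.], [2., 2.]])]
P = np.array([[ 0., 0., -1.],
              [-1., 1.,  0.],
              [ 1., 1., -1.]])          # objective P_1

def support_point(d):
    """Maximize d^T w over W_c; the optimizer is a boundary point of W_c."""
    w = cp.Variable(3, nonneg=True)     # w in C^+ = R^3_+
    Z = cp.Variable((2, 2), PSD=True)   # certificate of dual feasibility
    cons = [cp.trace(F[i] @ Z) + (P @ w)[i] == 0 for i in range(3)]
    cons.append(cp.sum(w) <= np.sqrt(3.))   # w^T c <= 1 for c = (1,1,1)/sqrt(3)
    cp.Problem(cp.Maximize(d @ w), cons).solve(solver=cp.SCS)
    return w.value
```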

In Fig. 5 we use the problem with objective \(P_2\) to compare the approximations obtained via the set \(W_c\) and via the set \(\Lambda \). Recall that we only have tolerance guarantees for the approach through the set \(W_c\).

Fig. 5

Recession cone of the semidefinite problem from Example 6.3 with objective \(P_2\): comparison of the approximations of \(\mathcal {P}_\infty \) obtained via the set \(W_c\) (left) and via the set \(\Lambda \) (right). Both convex projections were solved with tolerance \(\epsilon = 0.005\)