
Coalescence-avoiding joint probabilistic data association based on bias removal

Abstract

In order to deal with the track coalescence problem of the joint probabilistic data association (JPDA) algorithm, a novel approach based on state bias removal is developed in this paper. The factors that cause JPDA to produce a state bias are analyzed, and a direct equation for computing the bias in the ideal case is given. Then, based on the definitions of the target detection hypothesis and the target-to-target association hypothesis, the bias estimation is extended to the general and practical case. Finally, the estimated bias is removed from the state updated by JPDA to generate the unbiased state. Monte Carlo simulations show that the proposed method can handle track coalescence and performs better than the traditional methods.

1 Introduction

Target tracking is an important task for surveillance systems that employ one or more sensors, such as radar or sonar, together with computer subsystems, to interpret the environment. Since most surveillance systems need to track multiple targets, multi-target tracking (MTT) is one of the most important tracking applications [1-6]. Typical sensor systems have a target detection probability less than unity and report measurements from diverse sources: targets of interest, internal thermal noise, or clutter. For MTT in this setting with missed detections, clutter, and false alarms, the JPDA algorithm [7], a multi-target extension of the probabilistic data association (PDA) algorithm [8], has been shown to be very effective. JPDA has received considerable attention, and many variants have been developed, such as a series of methods for reducing computational complexity and extensions for special purposes [9]. However, JPDA has some undesirable characteristics, such as bias and coalescence, when used in a dense target environment. This is unfortunate, since such an environment is the main justification for using sophisticated MTT algorithms such as JPDA.

In response to the track coalescence problem, the exact nearest neighbor JPDA (ENNJPDA) and approximate nearest neighbor JPDA (ANNJPDA) methods have been proposed [10]. ENNJPDA computes measurement-to-target probabilities in the same manner as JPDA; however, after a measurement-to-target assignment is performed, each track is updated by a single measurement only. The assignment is obtained by solving the assignment matrix in the same manner as the global nearest neighbor method. ANNJPDA is similar to ENNJPDA, but its measurement-to-target probabilities are computed by an ad hoc formula. Another method specific to the track coalescence problem is JPDA* [11-13]. JPDA* is an improved version of JPDA with one key modification: for each set of detected targets and set of measurements, only the best joint association event is used in the calculation of the measurement-to-target probabilities. The other events that consist of the same sets of measurements and targets but with a different assignment are discarded. The developers argued that ENNJPDA has the drawback of being sensitive to clutter and missed detections and that this drawback could be avoided by JPDA*. A fast version of JPDA* has also been presented [14]. A scaled joint probabilistic data association (SJPDA) method that uses an arbitrary positive scaling factor to favor the most likely association hypothesis has been proposed in [15]. The main drawback of this method is the lack of theoretical guidance on how the factor value should be chosen. Furthermore, based on entropy value theory, Xu et al. presented a modified probabilistic data association method [16-19].

The existing methods discussed above all apply a hypothesis pruning or scaling strategy to prevent track coalescence. In essence, they are heuristics that can be considered compromises between the soft data association of JPDA and the hard data association of the global nearest neighbor method. Therefore, these methods deviate from the original definition of JPDA, which derives the target state as a weighted expectation over the space of feasible joint association hypotheses.

Recently, another two methods have been developed. One is the coalescence-avoiding optimal JPDAF (C-JPDAF) [20], and the other is the SetJPDAF [21,22]. The C-JPDAF minimizes a similarity index of the estimates to avoid track coalescence. Since the measurement model for this method is rather simple and the common clutter situation is not considered, its applications appear to be very limited. The SetJPDAF was derived with the objective of minimizing the mean optimal subpattern assignment (MOSPA) measure within the framework of finite set statistics. The SetJPDAF is only suitable for situations in which the identity (labeling) of the targets is not of great importance, and the computational complexity, or the convergence of its fast and suboptimal variant, appears to be a great obstacle to practical applications.

In this paper, we propose a coalescence-avoiding JPDA based on bias removal (BRJPDA). BRJPDA is the same as JPDA, except that a state bias estimation and removal procedure is embedded after the ordinary state updating procedure. The bias estimation is similar to that in [23]. In [23], the bias of JPDA was calculated in an ideal case of two stationary targets with known separation, known measurement origin, unity probability of detection, unity probability of gating, and neither false measurements nor measurement noise. These requirements can hardly be satisfied in practical applications. In this paper, these requirements are removed, and the bias estimation is thus extended to the general and practical case. The extension is based on the definitions of the target detection hypothesis and the target-to-target association hypothesis. The bias can then be estimated through its expectation over these hypotheses, so the impractical computation of the state bias in [23] is avoided. Finally, the bias is removed to prevent track coalescence and to improve the tracking performance.

The remainder of the paper is organized as follows. In Section 2, the system model and the fundamental principle of JPDA are described. The details of state bias estimation and removal are discussed in Section 3. The numerical simulation results, as well as comparisons with the traditional methods, are provided in Section 4. Finally, we summarize and conclude the paper in Section 5.

2 System model and fundamental principle of JPDA

2.1 The target model

We consider t k targets at scan k, and we assume that the state of the ith target is modeled as follows:

$$ {\boldsymbol{x}}_i(k)=\boldsymbol{\varPhi} {\boldsymbol{x}}_i\left(k-1\right)+{\boldsymbol{v}}_i(k),\kern1.8em i=1,\dots, {t}_k, $$
(1)

where x i (k) is the l-dimensional state vector of the ith target; Φ is the (l × l) state transition matrix; and v i (k) is the l-dimensional plant noise of the ith target. Further, v i (k) is a sequence of i.i.d. zero-mean Gaussian vectors with covariance matrix Q, with v i (k) and v j (k) independent for all i ≠ j.

2.2 The measurement model

A set of measurements consisting of two types of measurements, namely measurements originating from targets and measurements originating from clutter or false alarm, is considered. Assume the numbers of the two types of measurements are d k and f k , respectively. Therefore, the total number of the measurements is:

$$ {m}_k={d}_k+{f}_k. $$
(2)

A measurement originating from a target occurs with detection probability p d and is modeled as:

$$ {\boldsymbol{z}}_i(k)=\boldsymbol{H}{\boldsymbol{x}}_i(k)+{\boldsymbol{w}}_i(k),\kern1.7em i=1,\dots, {d}_k, $$
(3)

where z i (k) is the m-dimensional measurement vector, H is the (m × l) measurement matrix, and w i (k) is the m-dimensional measurement noise. Furthermore, w i (k) is a sequence of i.i.d. zero-mean Gaussian vectors with covariance matrix R, with w i (k) and w j (k) independent for all i ≠ j.

The false measurements are assumed to be uniformly distributed over the surveillance area. Moreover, f k is assumed to follow a Poisson distribution with spatial density λ.
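To make the model above concrete, the following Python sketch generates one scan of target states and measurements according to Equations 1 to 3 and the clutter assumptions. The constant-velocity matrices, the numeric noise levels, the detection probability, the clutter density, and the surveillance region size are illustrative assumptions, not the paper's simulation settings.

import numpy as np

rng = np.random.default_rng(0)
T = 1.0                                      # scan interval
Phi = np.array([[1, T, 0, 0],                # constant-velocity transition, state [x xdot y ydot]
                [0, 1, 0, 0],
                [0, 0, 1, T],
                [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                  # position-only measurement matrix
              [0, 0, 1, 0]], dtype=float)
Q = 0.01 * np.eye(4)                         # plant-noise covariance (illustrative)
R = 1.0 * np.eye(2)                          # measurement-noise covariance (illustrative)
p_d = 0.9                                    # detection probability (illustrative)
lam = 1e-4                                   # clutter spatial density (illustrative)
area = (200.0, 200.0)                        # surveillance region size (illustrative)

def step_targets(X):
    # Propagate every target state one scan: x_i(k) = Phi x_i(k-1) + v_i(k), Eq. (1)
    return X @ Phi.T + rng.multivariate_normal(np.zeros(4), Q, size=X.shape[0])

def generate_measurements(X):
    # Target-originated measurements with detection probability p_d, Eq. (3),
    # plus a Poisson-distributed number of uniformly scattered false measurements
    Z = [H @ x + rng.multivariate_normal(np.zeros(2), R) for x in X if rng.random() < p_d]
    n_false = rng.poisson(lam * area[0] * area[1])
    Z += [rng.uniform((0.0, 0.0), area) for _ in range(n_false)]
    return np.array(Z) if Z else np.empty((0, 2))

Here the number of false measurements is drawn from a Poisson distribution whose mean is the clutter density multiplied by the area of the surveillance region, matching the uniform-clutter assumption above.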

2.3 Fundamental principle of JPDA

The essence of JPDA is the computation of the association probabilities of each measurement with each track, followed by updating each track with a weighted average of the measurements, the weights being proportional to the probabilities. The most basic elements of JPDA are the state and covariance updating equations of the tracks considered:

$$ {\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right)=\boldsymbol{\varPhi} {\widehat{\boldsymbol{x}}}_i\left(k-1\Big|k-1\right)+{\boldsymbol{K}}_i{\displaystyle \sum_{j=1}^{m_k}\left[{p}_{ij}\left({\boldsymbol{z}}_j(k)-\boldsymbol{H}\boldsymbol{\varPhi } {\widehat{\boldsymbol{x}}}_i\left(k-1\Big|k-1\right)\right)\right]},\kern1.7em i=1,\dots, {t}_k, $$
(4)
$$ {\boldsymbol{P}}_i\left(k\Big|k\right)={\boldsymbol{P}}_i\left(k\Big|k-1\right)-\left(1-{p}_{i0}\right){\boldsymbol{K}}_i\left(\boldsymbol{H}{\boldsymbol{P}}_i\left(k\Big|k-1\right){\boldsymbol{H}}^{\hbox{'}}+\boldsymbol{R}\right){{\boldsymbol{K}}_i}^{\hbox{'}}+{\tilde{\boldsymbol{P}}}_i,\kern1.7em i=1,\dots, {t}_k, $$
(5)

where K i is the filter gain, P i (k|k − 1) is the state prediction covariance, and

$$ \begin{array}{l}{\tilde{\boldsymbol{P}}}_i=\left({\widehat{\boldsymbol{x}}}_i\left(k\Big|k-1\right){\left({\widehat{\boldsymbol{x}}}_i\left(k\Big|k-1\right)\right)}^{\hbox{'}}-{\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right){\left({\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right)\right)}^{\hbox{'}}\right){p}_{i0}+\\ {}\kern1.9em {\boldsymbol{K}}_i\left({\displaystyle \sum_{j=1}^{m_k}{p}_{ij}\left({\boldsymbol{z}}_j(k)-\boldsymbol{H}\boldsymbol{\varPhi } {\widehat{\boldsymbol{x}}}_i\left(k-1\Big|k-1\right)\right)\left({\boldsymbol{z}}_j(k)-\boldsymbol{H}\boldsymbol{\varPhi } {\widehat{\boldsymbol{x}}}_i\left(k-1\Big|k-1\right)\right)\hbox{'}}-{\tilde{\boldsymbol{z}}}_i(k){{\tilde{\boldsymbol{z}}}_i}^{\hbox{'}}(k)\right){{\boldsymbol{K}}_i}^{\hbox{'}},\end{array} $$
(6)
$$ {\tilde{\boldsymbol{z}}}_i(k)={\displaystyle \sum_{j=1}^{m_k}{p}_{ij}\left({\boldsymbol{z}}_j(k)-\boldsymbol{H}\boldsymbol{\varPhi } {\widehat{\boldsymbol{x}}}_i\left(k-1\Big|k-1\right)\right)}. $$
(7)

The main complexity of JPDA lies in the computation of p ij , which represents the probability of z j (k) originating from the ith target (i ≠ 0) or from clutter/false alarms (i = 0). JPDA computes p ij by enumerating all the feasible joint association hypotheses, normalizing the probability of each hypothesis, and summing the probabilities of the hypotheses in which z j (k) originates from the ith target.
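As an illustration of the update in Equations 4 to 7, the sketch below carries out the JPDA state update for one track, given association probabilities p ij that are assumed to have already been computed by the hypothesis enumeration described above. The covariance is updated in the standard PDA spread-of-the-means form, which plays the role of Equations 5 and 6; the function name and argument layout are illustrative assumptions.

import numpy as np

def jpda_update(x_prev, P_prev, Z, p, Phi, H, Q, R):
    # x_prev, P_prev: filtered state/covariance of track i at scan k-1
    # Z: (m_k, m) array of validated measurements; p: (m_k + 1,) association
    # probabilities with p[0] = p_i0 (no measurement originates from this track)
    x_pred = Phi @ x_prev                          # predicted state
    P_pred = Phi @ P_prev @ Phi.T + Q              # prediction covariance
    S = H @ P_pred @ H.T + R                       # residual covariance
    K = P_pred @ H.T @ np.linalg.inv(S)            # filter gain K_i
    nu = Z - H @ x_pred                            # residuals z_j(k) - H Phi xhat_i(k-1|k-1)
    nu_comb = (p[1:, None] * nu).sum(axis=0)       # combined residual of Eq. (7)
    x_upd = x_pred + K @ nu_comb                   # state update of Eq. (4)
    # Covariance update in the standard PDA spread-of-the-means form
    P_c = P_pred - K @ S @ K.T
    spread = (p[1:, None, None] * np.einsum('ji,jk->jik', nu, nu)).sum(axis=0) \
             - np.outer(nu_comb, nu_comb)
    P_upd = p[0] * P_pred + (1 - p[0]) * P_c + K @ spread @ K.T
    return x_upd, P_upd, K

When a track has no validated measurements, p reduces to [1.0] and the update collapses to the pure prediction.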

3 Bias estimation and removal

3.1 The source of bias

In order to reduce the computational complexity, gating, a technique for eliminating unlikely measurement-to-track pairings, is applied in JPDA. Thus, the validation region for each track is restricted to a symmetric region (generally an ellipse or ellipsoid) whose center coincides with the predicted measurement of the track. Therefore, one has:

$$ E\left({\boldsymbol{z}}_j(k)\left|{\boldsymbol{z}}_j(k)\in {\boldsymbol{V}}_i,\kern0.3em {\boldsymbol{z}}_j(k)\kern0.3em \mathrm{is}\kern0.3em \mathrm{a}\kern0.3em \mathrm{false}\kern0.3em \mathrm{measurement}\right.\right)=E\left(\boldsymbol{H}\boldsymbol{\varPhi } {\widehat{\boldsymbol{x}}}_i\left(k-1\Big|k-1\right)\right),\kern1.7em j=1,\dots, {m}_k, $$
(8)

where V i is the validation region for the ith track. Moreover, for a measurement originating from the true target, one has:

$$ E\left({\boldsymbol{z}}_j(k)\kern0.1em \left|\kern0.1em {\boldsymbol{z}}_j(k)\kern0.2em \mathrm{originates}\ \mathrm{from}\ \mathrm{the}\ i\mathrm{t}\mathrm{h}\ \mathrm{track}\right.\right)=E\left(\boldsymbol{H}\boldsymbol{\varPhi } {\widehat{\boldsymbol{x}}}_i\left(k-1\Big|k-1\right)\right),\kern1.7em j=1,\dots, {m}_k. $$
(9)

Denote {z j (k) | z j (k) ∈ V i , j = 1, …, m k } by \( {\overline{\boldsymbol{Z}}}_i(k) \). Then, \( {\overline{\boldsymbol{Z}}}_i(k) \) can be partitioned into three sets:

$$ {\overline{\boldsymbol{Z}}}_i(k)={\overline{\boldsymbol{Z}}}_{i,1}(k){\displaystyle \cup {\overline{\boldsymbol{Z}}}_{i,2}(k){\displaystyle \cup {\overline{\boldsymbol{Z}}}_{i,3}(k)}}, $$
(10)

where \( {\overline{\boldsymbol{Z}}}_{i,1}(k) \), \( {\overline{\boldsymbol{Z}}}_{i,2}(k) \), and \( {\overline{\boldsymbol{Z}}}_{i,3}(k) \) are given as:

$$ {\overline{\boldsymbol{Z}}}_{i,1}(k)\triangleq \left\{{\boldsymbol{z}}_j(k)\left|{\boldsymbol{z}}_j(k)\in {\boldsymbol{V}}_i,{\boldsymbol{z}}_j(k)\kern0.2em \mathrm{originates}\ \mathrm{from}\ \mathrm{the}\ i\mathrm{t}\mathrm{h}\ \mathrm{track}\right.\right\}, $$
(11)
$$ {\overline{\boldsymbol{Z}}}_{i,2}(k)\triangleq \left\{{\boldsymbol{z}}_j(k)\left|{\boldsymbol{z}}_j(k)\in {\boldsymbol{V}}_i,{\boldsymbol{z}}_j(k)\kern0.4em \mathrm{is}\kern0.3em \mathrm{a}\kern0.3em \mathrm{false}\kern0.3em \mathrm{measurement}\right.\right\}, $$
(12)
$$ {\overline{\boldsymbol{Z}}}_{i,3}(k)\triangleq \left\{{\boldsymbol{z}}_j(k)\left|{\boldsymbol{z}}_j(k)\in {\boldsymbol{V}}_i,{\boldsymbol{z}}_j(k)\kern0.2em \mathrm{originates}\ \mathrm{from}\ \mathrm{the}\ n\mathrm{t}\mathrm{h}\ \mathrm{track},\ n\ne i\right.\right\}. $$
(13)

Then, Equation 4 could be rewritten as:

$$ {\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right)=\boldsymbol{\varPhi} {\widehat{\boldsymbol{x}}}_i\left(k-1\Big|k-1\right)+{\boldsymbol{K}}_i{\displaystyle \sum_{\mu =1}^3\kern0.2em {\displaystyle \sum_{{\boldsymbol{z}}_j(k)\in {\overline{\boldsymbol{Z}}}_{i,\mu }(k)}\left\{{p}_{ij}\left({\boldsymbol{z}}_j(k)-\boldsymbol{H}\boldsymbol{\varPhi } {\widehat{\boldsymbol{x}}}_i\left(k-1\Big|k-1\right)\right)\right\}}},\kern1.7em i=1,\dots, {t}_k. $$
(14)

Therefore, the bias of \( {\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right) \) can be computed by the following equation (here, \( {\widehat{\boldsymbol{x}}}_i\left(k-1\Big|k-1\right) \) is assumed to be unbiased, which can be ensured by applying the bias removal procedure below at scan k − 1; this is the prerequisite for estimating the bias of \( {\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right) \)):

$$ \begin{array}{l}\overline{\boldsymbol{\varDelta} {\widehat{\boldsymbol{x}}}_i}\left(k\Big|k\right)=E\left({\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right)\right)-E\left(\boldsymbol{\varPhi} {\widehat{\boldsymbol{x}}}_i\left(k-1\Big|k-1\right)\right)\\ {}\kern3.5em ={\boldsymbol{K}}_i{\displaystyle \sum_{\mu =1}^3{\displaystyle \sum_{{\boldsymbol{z}}_j(k)\in {\overline{\boldsymbol{Z}}}_{i,\mu }(k)}\left\{{p}_{ij}\left(E\left({\boldsymbol{z}}_j(k)\right)-E\left(\boldsymbol{H}\boldsymbol{\varPhi } {\widehat{\boldsymbol{x}}}_i\left(k-1\Big|k-1\right)\right)\right)\right\}}},\\ {}\kern3.5em ={\boldsymbol{K}}_i{\displaystyle \sum_{{\boldsymbol{z}}_j(k)\in {\overline{\boldsymbol{Z}}}_{i,3}(k)}\left\{{p}_{ij}\left(E\left({\boldsymbol{z}}_j(k)\right)-E\left(\boldsymbol{H}\boldsymbol{\varPhi } {\widehat{\boldsymbol{x}}}_i\left(k-1\Big|k-1\right)\right)\right)\right\}},\kern1.7em i=1,\dots, {t}_k.\end{array} $$
(15)

From Equation 15, it is clear that the bias of \( {\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right) \) is caused only by \( {\boldsymbol{z}}_j(k)\in {\overline{\boldsymbol{Z}}}_{i,3}(k) \). However, this direct computation of the bias is impossible in practice, since it is unknown which measurements belong to \( {\overline{\boldsymbol{Z}}}_{i,3}(k) \).
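The elliptical validation region discussed above is commonly implemented as a chi-square test on the Mahalanobis distance between each measurement and the predicted measurement. The sketch below illustrates this gating step; the gate probability is an arbitrary illustrative choice, not a value from the paper.

import numpy as np
from scipy.stats import chi2

def validate(Z, x_pred, P_pred, H, R, gate_prob=0.99):
    # Return the indices of the measurements in Z that fall inside the validation
    # region V_i of the track whose predicted state is x_pred.
    S = H @ P_pred @ H.T + R                       # residual covariance
    gamma = chi2.ppf(gate_prob, df=H.shape[0])     # chi-square gate threshold
    nu = Z - H @ x_pred                            # residuals to the predicted measurement
    d2 = np.einsum('ij,jk,ik->i', nu, np.linalg.inv(S), nu)   # squared Mahalanobis distances
    return np.flatnonzero(d2 <= gamma)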

3.2 Bias estimation for a two target case

In this paper, we take a different approach to deriving an approximation of the JPDA bias. The basic idea is to first enumerate all the feasible joint association hypotheses of JPDA. Then, a set of target-to-target association hypotheses corresponding to these feasible joint association hypotheses is found. Finally, the bias of every track is calculated within the framework of this set of target-to-target association hypotheses.

A key concept here is the target-to-target association hypothesis. Unlike the feasible joint association hypothesis, which gives the association relationships between targets and measurements, the target-to-target association hypothesis is concerned with the origin of the measurement used to update each target. Consider a two-target case with unity probability of detection and gating and no false measurements. In this situation, at every scan k, JPDA produces two joint association hypotheses:

H 1: z 1(k) originates from the first target, and z 2(k) originates from the second target.

H 2: z 1(k) originates from the second target, and z 2(k) originates from the first target.

Then, the corresponding set of target-to-target association hypotheses is given as:

h 1: the origin of the measurement used to update the first target is the first target, and the origin of the measurement used to update the second target is the second target.

h 2: the origin of the measurement used to update the first target is the second target, and the origin of the measurement used to update the second target is the first target.

It is clear that every joint association hypothesis has one corresponding target-to-target association hypothesis, although the precise correspondence is not known. Therefore, from an overall point of view, the two sets of hypotheses describe the same thing in some sense. Since JPDA employs the whole set of joint association hypotheses in the state updating procedure, we believe that the bias of JPDA can be calculated within the framework of the corresponding set of target-to-target association hypotheses. We therefore develop a method for calculating the probability of each target-to-target association hypothesis and the bias of each target under that hypothesis. The probability and the bias can be considered approximations of those produced by the corresponding joint association hypothesis. Still considering the two-target example, the probabilities of the target-to-target association hypotheses are given as follows:

$$ p\left({h}_1\right)=\frac{1}{c}{G}_{1,1}{G}_{2,2}, $$
(16)
$$ p\left({h}_2\right)=\frac{1}{c}{G}_{1,2}{G}_{2,1}, $$
(17)

where c is a normalizing constant satisfying p(h 1) + p(h 2) = 1, and

$$ {G}_{i,j}=\frac{1}{{\left( \det \left(2\pi {\boldsymbol{C}}_i\right)\right)}^{1/2}}\cdot \exp \left(-\frac{1}{2}{\boldsymbol{r}}_{i,j}{\left({\boldsymbol{C}}_i\right)}^{-1}{\boldsymbol{r}}_{i,j}^{\hbox{'}}\right) $$
(18)

is the likelihood of the partial target-to-target hypothesis that the origin of the measurement used to update the ith target is the jth target. Here,

$$ {\boldsymbol{C}}_i=\boldsymbol{H}{\boldsymbol{P}}_i\left(k\Big|k-1\right){\boldsymbol{H}}^{\hbox{'}}+\boldsymbol{R} $$
(19)

represents the residual covariance of the ith target, and

$$ {\boldsymbol{r}}_{i,j}=\boldsymbol{H}\boldsymbol{\varPhi } \left({\widehat{\boldsymbol{x}}}_j\left(k-1\Big|k-1\right)-{\widehat{\boldsymbol{x}}}_i\left(k-1\Big|k-1\right)\right) $$
(20)

is an approximation of the residual under this partial hypothesis. The approximation is obtained by substituting the predicted measurement of the jth target (\( \boldsymbol{H}\boldsymbol{\varPhi } {\widehat{\boldsymbol{x}}}_j\left(k-1\Big|k-1\right) \)) for the observed measurement of the jth target. With this approximation, determining which observed measurement originates from the jth target is avoided. The approximation is reasonable since the predicted measurement and the observed measurement of the jth target have the same expectation. The state biases of the two targets are then given by a weighted average over the target-to-target hypotheses:

$$ \overline{\boldsymbol{\varDelta} {\widehat{\boldsymbol{x}}}_1}\left(k\Big|k\right)={\boldsymbol{K}}_1\left[p\left({h}_1\right){\boldsymbol{r}}_{1,1}+p\left({h}_2\right){\boldsymbol{r}}_{1,2}\right]={\boldsymbol{K}}_1p\left({h}_2\right){\boldsymbol{r}}_{1,2}, $$
(21)
$$ \overline{\boldsymbol{\varDelta} {\widehat{\boldsymbol{x}}}_2}\left(k\Big|k\right)={\boldsymbol{K}}_2\left[p\left({h}_1\right){\boldsymbol{r}}_{2,2}+p\left({h}_2\right){\boldsymbol{r}}_{2,1}\right]={\boldsymbol{K}}_2p\left({h}_2\right){\boldsymbol{r}}_{2,1}. $$
(22)
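A minimal sketch of the two-target bias estimate of Equations 16 to 22 follows. The inputs are the two tracks' predicted measurements, residual covariances C i , and filter gains K i ; the function name and data layout are illustrative assumptions, and the Gaussian likelihood of Equation 18 is evaluated with scipy for brevity.

import numpy as np
from scipy.stats import multivariate_normal

def two_target_bias(z_pred, C, K):
    # z_pred[i] = H Phi xhat_i(k-1|k-1); C[i]: residual covariance of track i;
    # K[i]: filter gain of track i.  Returns the bias estimates of Eqs. (21)-(22).
    m = len(z_pred[0])
    # r_{i,j}: approximate residual if track i were updated with target j's measurement, Eq. (20)
    r = {(i, j): z_pred[j] - z_pred[i] for i in range(2) for j in range(2)}
    # G_{i,j}: likelihood of the partial target-to-target hypothesis, Eq. (18)
    G = {(i, j): multivariate_normal.pdf(r[i, j], mean=np.zeros(m), cov=C[i]) for (i, j) in r}
    # Hypothesis probabilities of Eqs. (16)-(17), normalised so that p(h1) + p(h2) = 1
    w1, w2 = G[0, 0] * G[1, 1], G[0, 1] * G[1, 0]
    p_h2 = w2 / (w1 + w2)
    # Since r_{1,1} = r_{2,2} = 0, only the swapped hypothesis h2 contributes, Eqs. (21)-(22)
    bias_1 = K[0] @ (p_h2 * r[0, 1])
    bias_2 = K[1] @ (p_h2 * r[1, 0])
    return bias_1, bias_2

Because r 1,1 and r 2,2 are zero, only the swapped hypothesis h 2 contributes to the bias, exactly as in Equations 21 and 22.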

3.3 Bias estimation and removal for the general multi-target case

So far, only two targets have been considered. Next, we focus on the enumeration of the target-to-target hypotheses in the general multi-target case with false alarms and a detection probability less than unity.

First, we assume that JPDA produces a total of #H feasible joint association hypotheses at scan k, and δ i (H j ) is the detection indicator of the ith target in the jth hypothesis H j :

$$ {\delta}_i\left({H}_j\right)=\left\{\begin{array}{ll}1,\hfill & \mathrm{if}\kern0.1em \mathrm{the}\kern0.2em i\mathrm{t}\mathrm{h}\kern0.1em \mathrm{target}\ \mathrm{is}\ \mathrm{detected}\ \mathrm{in}\ \mathrm{h}\mathrm{ypothesis}\ {H}_j\hfill \\ {}0,\hfill & \mathrm{else},\hfill \end{array}\right. $$
(23)

Thus, the target detection indicator vector in H j could be defined as:

$$ \boldsymbol{\delta} \left({H}_j\right)\triangleq \left[{\delta}_1\left({H}_j\right)\kern0.3em {\delta}_2\left({H}_j\right)\kern0.3em \dots \kern0.3em {\delta}_{t_k}\left({H}_j\right)\right],\kern1.7em j=1,2,\dots, \#H. $$
(24)

Moreover, we assume that all the feasible joint association hypotheses generate a total of #δ different target detection indicator vectors, denoted δ 1, δ 2, …, δ #δ . A target detection hypothesis is then defined as a subset of feasible joint association hypotheses that produce the same target detection indicator vector:

$$ {\boldsymbol{\omega}}_i=\left\{{H}_j\left|\boldsymbol{\delta} \left({H}_j\right)={\boldsymbol{\delta}}_i,j=1,2,\dots, \#H\right.\right\},\kern1.7em i=1,2,\dots, \#\boldsymbol{\delta} . $$
(25)

Therefore,

$$ p\left({\boldsymbol{\omega}}_i\left|{\boldsymbol{Z}}^k\right.\right)={\displaystyle \sum_{H_j\in {\boldsymbol{\omega}}_i}p\left({H}_j\left|{\boldsymbol{Z}}^k\right.\right)},\kern1.8em i=1,2,\dots, \#\boldsymbol{\delta}, $$
(26)

where Z k denotes the accumulated measurements up to scan k and p(H j | Z k) is the a posteriori probability of H j , which was given in [7]. Then, numbering the jth target with the number j, the target number set of every target detection hypothesis ω i is defined as:

$$ {\boldsymbol{\xi}}_i=\left\{j\left|{\delta}_j(H)=1,j=1,2,\dots, {t}_k\right.\right\},\kern1.6em \forall H\in {\boldsymbol{\omega}}_i,\kern0.4em i=1,2,\dots, \#\boldsymbol{\delta} . $$
(27)

The definition of this target number set is very important. Based on every ξ i , one subset of target-to-target hypotheses is enumerated as the counterpart of the subset of joint association hypotheses ω i . The enumeration procedure based on ξ i is as follows. First, denote by \( {\boldsymbol{\varPi}}_{{\boldsymbol{\xi}}_i} \) the set of permutations on ξ i . Then, for \( \forall {\boldsymbol{\pi}}_j\in {\boldsymbol{\varPi}}_{{\boldsymbol{\xi}}_i},j=1,2,\dots, \mathrm{card}\left({\boldsymbol{\varPi}}_{{\boldsymbol{\xi}}_i}\right), \) let π j represent the target-to-target association hypothesis in which the measurement of the π j (n)th target is used to update the ξ i (n)th target, where π j (n) and ξ i (n) denote the nth elements of π j and ξ i , respectively. In this way, the subset of target-to-target association hypotheses corresponding to ω i is enumerated. One may argue that a gating probability less than unity could prune some joint association hypotheses in ω i . We agree with this viewpoint; however, since a pruned hypothesis generally has a very small a posteriori probability compared with those that are not pruned, we believe this effect can be ignored. It should also be noted that, for the same ξ i , more than one subset of observed measurements may be used to form the feasible joint association hypotheses in ω i . Therefore, the number of hypotheses in ω i is almost always larger than that in \( {\boldsymbol{\varPi}}_{{\boldsymbol{\xi}}_i} \). To account for this, we multiply the normalized probability of the hypotheses in \( {\boldsymbol{\varPi}}_{{\boldsymbol{\xi}}_i} \) by p(ω i |Z k), as sketched in the example below.
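The grouping and enumeration just described can be sketched as follows. A joint association hypothesis is represented here simply by its target detection indicator vector together with its a posteriori probability; this data layout is an illustrative assumption rather than the paper's implementation.

from itertools import permutations
from collections import defaultdict

def group_and_enumerate(joint_hypotheses):
    # joint_hypotheses: list of (delta, posterior) pairs, where delta is the tuple
    # of 0/1 detection indicators of Eq. (24) and posterior is p(H_j | Z^k).
    # Returns, for every distinct delta: p(omega_i | Z^k) of Eq. (26) and the set
    # of permutations Pi_{xi_i} on the detected-target index set xi_i of Eq. (27).
    groups = defaultdict(float)
    for delta, post in joint_hypotheses:
        groups[delta] += post
    out = {}
    for delta, p_omega in groups.items():
        xi = [j for j, d in enumerate(delta) if d == 1]
        out[delta] = (p_omega, list(permutations(xi)))
    return out

# Example with three targets: target 2 is missed in one group of hypotheses
example = [((1, 1, 1), 0.55), ((1, 1, 1), 0.15), ((1, 0, 1), 0.30)]
for delta, (p_omega, perms) in group_and_enumerate(example).items():
    print(delta, round(p_omega, 2), perms)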

Thereby, the bias of \( {\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right) \) could be computed by a weighted average over all the target-to-target hypotheses:

$$ \begin{array}{l}\overline{\boldsymbol{\varDelta} {\widehat{\boldsymbol{x}}}_i}\left(k\Big|k\right)=E\left(\boldsymbol{\varDelta} {\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right)\left|{\boldsymbol{Z}}^k\right.\right)\\ {}\kern4.8em ={\displaystyle \sum_{j=1}^{\#\boldsymbol{\delta}}E\left(\boldsymbol{\varDelta} {\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right)\left|{\boldsymbol{\omega}}_j,{\boldsymbol{Z}}^k\right.\right)p\left({\boldsymbol{\omega}}_j\left|{\boldsymbol{Z}}^k\right.\right)}\\ {}\kern4.8em \approx {\displaystyle \sum_{j=1}^{\#\boldsymbol{\delta}}{\displaystyle \sum_{{\boldsymbol{\pi}}_n\in {\boldsymbol{\varPi}}_{{\boldsymbol{\xi}}_j}}E\left(\boldsymbol{\varDelta} {\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right)\left|{\boldsymbol{\pi}}_n,{\boldsymbol{\omega}}_j,{\boldsymbol{Z}}^k\right.\right)p\left({\boldsymbol{\pi}}_n\left|{\boldsymbol{\omega}}_j,{\boldsymbol{Z}}^k\right.\right)p\left({\boldsymbol{\omega}}_j\left|{\boldsymbol{Z}}^k\right.\right)}},\kern0.3em i=1,\dots, {t}_k,\end{array} $$
(28)

where \( E\left(\boldsymbol{\varDelta} {\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right)\left|{\boldsymbol{\pi}}_n,{\boldsymbol{\omega}}_j,{\boldsymbol{Z}}^k\right.\right) \) is the bias of the ith target in hypothesis π n conditioned on ω j :

$$ E\left(\boldsymbol{\varDelta} {\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right)\left|{\boldsymbol{\pi}}_n,{\boldsymbol{\omega}}_j,{\boldsymbol{Z}}^k\right.\right)={\boldsymbol{K}}_i{\boldsymbol{r}}_{i,{\boldsymbol{\pi}}_n(i)}, $$
(29)

and p(π n |ω j , Z k) is the probability of hypothesis π n conditioned on ω j :

$$ p\left({\boldsymbol{\pi}}_n\left|{\boldsymbol{\omega}}_j,{\boldsymbol{Z}}^k\right.\right)=\frac{1}{c}p\left({\boldsymbol{\pi}}_n,{\boldsymbol{\omega}}_j,{\boldsymbol{Z}}^k\right), $$
(30)
$$ c={\displaystyle \sum_{n=1}^{\mathrm{card}\left({\boldsymbol{\varPi}}_{{\boldsymbol{\xi}}_j}\right)}p\left({\boldsymbol{\pi}}_n,{\boldsymbol{\omega}}_j,{\boldsymbol{Z}}^k\right)}, $$
(31)
$$ p\left({\boldsymbol{\pi}}_n,{\boldsymbol{\omega}}_j,{\boldsymbol{Z}}^k\right)={\displaystyle \prod_{\mu =1}^{\mathrm{card}\left({\boldsymbol{\xi}}_j\right)}{G}_{{\boldsymbol{\xi}}_j\left(\mu \right),{\boldsymbol{\pi}}_n\left(\mu \right)}}. $$
(32)

At every scan k, once the ordinary state update of JPDA has been performed, the filtered but biased state vector \( {\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right) \) of the ith track is obtained. The bias is removed by subtracting \( \overline{\boldsymbol{\varDelta} {\widehat{\boldsymbol{x}}}_i}\left(k\Big|k\right) \) from \( {\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right) \), giving the unbiased state:

$$ {\overset{\frown }{\boldsymbol{x}}}_i\left(k\Big|k\right)={\widehat{\boldsymbol{x}}}_i\left(k\Big|k\right)-\overline{\boldsymbol{\varDelta} {\widehat{\boldsymbol{x}}}_i}\left(k\Big|k\right). $$
(33)
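Putting the pieces together, the following sketch estimates the bias of Equations 28 to 32 and removes it according to Equation 33. Here z_pred, C, and K play the same roles as in the two-target sketch of Section 3.2, omega_groups maps each detection indicator vector to p(ω j |Z k), and undetected targets receive no bias contribution from the corresponding hypothesis; the data layout is again an illustrative assumption.

import numpy as np
from itertools import permutations
from scipy.stats import multivariate_normal

def estimate_and_remove_bias(x_upd, z_pred, C, K, omega_groups):
    # x_upd[i]: JPDA-updated state of track i; z_pred[i]: predicted measurement
    # H Phi xhat_i(k-1|k-1); C[i]: residual covariance; K[i]: filter gain;
    # omega_groups: {detection indicator tuple: p(omega_j | Z^k)}.
    n = len(x_upd)
    m = len(z_pred[0])
    bias = [np.zeros_like(x_upd[i]) for i in range(n)]
    for delta, p_omega in omega_groups.items():
        xi = [j for j, d in enumerate(delta) if d == 1]     # detected targets, Eq. (27)
        perms = list(permutations(xi))                      # Pi_{xi_j}
        # Unnormalised permutation probabilities, Eq. (32): products of G_{xi(mu), pi(mu)}
        weights = [np.prod([multivariate_normal.pdf(z_pred[pi[mu]] - z_pred[xi[mu]],
                                                    mean=np.zeros(m), cov=C[xi[mu]])
                            for mu in range(len(xi))])
                   for pi in perms]
        c = sum(weights)                                    # normaliser, Eq. (31)
        for pi, w in zip(perms, weights):
            for mu, i in enumerate(xi):
                r_ij = z_pred[pi[mu]] - z_pred[i]           # r_{i, pi(i)}, Eq. (20)
                bias[i] += (w / c) * p_omega * (K[i] @ r_ij)    # Eqs. (28)-(29)
    return [x_upd[i] - bias[i] for i in range(n)]           # unbiased states, Eq. (33)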

4 Simulations and discussions

In this section, the proposed BRJPDA is evaluated and compared with the existing methods, namely JPDA, ENNJPDA, SetJPDA, and JPDA*. Moreover, the filtering method with perfect data association, denoted CorrectAsso, is also included in the simulation for comparison. The implementation of SetJPDA retains at most the ten hypotheses with the largest a posteriori probabilities and performs the reordering procedure by brute force. The fast and suboptimal variant of SetJPDA is not applied, because its starting point is important but may be scenario dependent and is given only for a special scenario through empirical studies [22]. The aim of the hypothesis pruning for SetJPDA is to reduce the computational complexity to a level that makes its implementation feasible in the general case.

4.1 Scenarios and performance criteria

We applied two scenarios similar to those in [10,13,14,21,22]. For each scenario, the total number of scans is M = 80, and the scan interval is T = 1 s. The track state x is defined as \( \left[\begin{array}{cccc}\hfill x\hfill & \hfill \dot{x}\hfill & \hfill y\hfill & \hfill \dot{y}\hfill \end{array}\right] \). We applied perfect association for all the algorithms considered during the first M/4 scans and set the process noise level according to Equation 24 in [14]. To make the comparisons more meaningful, the same random measurement streams were used for all tracking methods.

Scenario 1: Two constant-speed, non-maneuvering targets whose paths cross. Tracking begins and ends far from the crossover point. Two parameters, s x (held constant) and s y (varied), are defined to denote the normalized target x and y speeds:

$$ \left\{\begin{array}{c}\hfill {s}_x=\frac{\left|{\dot{x}}_1\right|\cdot T}{\sqrt{{\boldsymbol{R}}_{11}}}=\frac{\left|{\dot{x}}_2\right|\cdot T}{\sqrt{{\boldsymbol{R}}_{11}}},\hfill \\ {}\hfill {s}_y=\frac{\left|{\dot{y}}_1\right|\cdot T}{\sqrt{{\boldsymbol{R}}_{22}}}=\frac{\left|{\dot{y}}_2\right|\cdot T}{\sqrt{{\boldsymbol{R}}_{22}}}.\hfill \end{array}\right. $$
(34)

Scenario 2: Two targets approach each other during the first M/4 scans, then travel in parallel at a close distance for M/2 scans, and finally separate from each other. The scenario is the same as that in [22], but with a variable minimum distance d 1 between the two targets to characterize the scenario.

Figures 1 and 2 illustrate the two scenarios with typical parameters.

Figure 1

The simulated scenario 1, R = I2 × 2, s x = 0.414, s y = 0.01.

Figure 2

The simulated scenario 2, R = 0.01 × I2 × 2, d 1 = 20.

Four evaluation criteria similar to those in [10,13,14,22] are considered to evaluate the anti-coalescing performance and the computational complexity.

The first performance criterion [13] is the probability of track coalescence. Tracks i and j are counted as coalescing at scan k if

$$ \left({\left\Vert \left[\begin{array}{c}\hfill {x}_i(k)-{x}_j(k)\hfill \\ {}\hfill {y}_i(k)-{y}_j(k)\hfill \end{array}\right]\right\Vert}_2>\sqrt{{\boldsymbol{R}}_{11}+{\boldsymbol{R}}_{22}}\right)\wedge \left({\left\Vert \left[\begin{array}{c}\hfill {\overset{\frown }{x}}_i\left(k\Big|k\right)-{\overset{\frown }{x}}_j\left(k\Big|k\right)\hfill \\ {}\hfill {\overset{\frown }{y}}_i\left(k\Big|k\right)-{\overset{\frown }{y}}_j\left(k\Big|k\right)\hfill \end{array}\right]\right\Vert}_2\le \sqrt{{\boldsymbol{R}}_{11}+{\boldsymbol{R}}_{22}}\right). $$
(35)

Denote this probability by p coalescing. In Equation 35, ‖·‖ denotes the 2-norm.
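Reading the norm in Equation 35 as the 2-norm, the coalescence test can be sketched as a simple predicate on the true and estimated track positions; the function name and argument layout are illustrative.

import numpy as np

def is_coalescing(true_i, true_j, est_i, est_j, R11, R22):
    # Tracks i and j are counted as coalescing at scan k when the true positions
    # are separated by more than sqrt(R11 + R22) while the estimated positions are not.
    threshold = np.sqrt(R11 + R22)
    true_sep = np.linalg.norm(np.asarray(true_i) - np.asarray(true_j))
    est_sep = np.linalg.norm(np.asarray(est_i) - np.asarray(est_j))
    return (true_sep > threshold) and (est_sep <= threshold)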

The second performance criterion [10] is the probability of a successful track crossover, and it is specifically designed for scenario 1. In detail, tracks i and j are counted as a 'successful crossover' if

$$ \left({\displaystyle \sum_{k=1}^3{\overset{\frown }{y}}_i\left(k\Big|k\right)}>{\displaystyle \sum_{k=1}^3{\overset{\frown }{y}}_j\left(k\Big|k\right)}\right)\oplus \left({\displaystyle \sum_{k=M-2}^M{\overset{\frown }{y}}_i\left(k\Big|k\right)}>{\displaystyle \sum_{k=M-2}^M{\overset{\frown }{y}}_j\left(k\Big|k\right)}\right). $$
(36)

Denote this probability by p success. In Equation 36, ⊕ denotes the exclusive OR operation.

The third performance criterion [14] is the optimal subpattern assignment (OSPA) statistic [14,22,24,25]. In our simulations, the 'cardinality error' of OSPA is not considered. The OSPA statistic between the set of estimated target states \( \overset{\frown }{\boldsymbol{X}}(k)\triangleq \left\{{\overset{\frown }{\boldsymbol{x}}}_i\left(k\Big|k\right)\left|i=1,2,\dots, {t}_k\right.\right\} \) and the set of true target states X(k) ≜ {x i (k)|i = 1, 2, …, t k } is given by:

$$ \overline{d}\left(\overset{\frown }{\boldsymbol{X}}(k),\boldsymbol{X}(k)\right)={\left(\frac{1}{t_k}\left(\underset{\boldsymbol{\pi} \in {\boldsymbol{\varPi}}_{\left\{1,2,\dots, {t}_k\right\}}}{ \min }{\displaystyle \sum_{i=1}^{t_k}d{\left({\overset{\frown }{\boldsymbol{x}}}_i\left(k\Big|k\right),{\boldsymbol{x}}_{\boldsymbol{\pi} (i)}(k)\right)}^2}\right)\right)}^{1/2}, $$
(37)

where \( d\left({\overset{\frown }{\boldsymbol{x}}}_i\left(k\Big|k\right),{\boldsymbol{x}}_{\boldsymbol{\pi} (i)}(k)\right) \) is the Euclidean distance between \( {\overset{\frown }{\boldsymbol{x}}}_i\left(k\Big|k\right) \) and x π(i)(k). Only the position components of the state are used in the calculation of the OSPA. Averaging these OSPA values over the Monte Carlo runs and the time indexes gives the mean OSPA (MOSPA), denoted ε τ (ς) as a function of the parameter of each scenario:

$$ {\varepsilon}_{\tau}\left(\boldsymbol{\varsigma} \right)=\frac{1}{N*\left(M-\left[M/4\right]\right)}{\displaystyle \sum_{j=1}^N{\displaystyle \sum_{k=\left[M/4\right]+1}^M{\overline{d}}_{\tau, \boldsymbol{\varsigma}, j}\left(\overset{\frown }{\boldsymbol{X}}(k),\boldsymbol{X}(k)\right)}}, $$
(38)

Here, \( {\overline{d}}_{\tau, \boldsymbol{\varsigma}, j}\left(\overset{\frown }{\boldsymbol{X}}(k),\boldsymbol{X}(k)\right) \) represents \( \overline{d}\left(\overset{\frown }{\boldsymbol{X}}(k),\boldsymbol{X}(k)\right) \) in scenario τ with parameter ς in the jth Monte Carlo run. Another MOSPA definition is the same as that in [21]:

$$ {\varepsilon}_{\tau, \boldsymbol{\varsigma}}(k)=\frac{1}{N}{\displaystyle \sum_{j=1}^N{\overline{d}}_{\tau, \boldsymbol{\varsigma}, j}\left(\overset{\frown }{\boldsymbol{X}}(k),\boldsymbol{X}(k)\right)}. $$
(39)
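The position-only OSPA of Equation 37 and the per-scan MOSPA of Equation 39 can be sketched as follows. The minimization over permutations is carried out with the Hungarian algorithm on squared Euclidean distances, which is equivalent to the explicit search in Equation 37; the function names and data layout are illustrative.

import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa_position(X_est, X_true):
    # X_est, X_true: (t_k, 2) arrays of estimated and true target positions.
    # Position-only OSPA of Eq. (37), without the cardinality error term.
    d2 = ((X_est[:, None, :] - X_true[None, :, :]) ** 2).sum(axis=-1)
    row, col = linear_sum_assignment(d2)          # optimal permutation pi
    return np.sqrt(d2[row, col].sum() / X_est.shape[0])

def mospa_per_scan(est_runs, true_runs):
    # Eq. (39): average the OSPA at each scan over the N Monte Carlo runs.
    # est_runs[j][k] and true_runs[j][k] hold the (t_k, 2) position arrays of run j, scan k.
    N, M = len(est_runs), len(est_runs[0])
    return np.array([np.mean([ospa_position(est_runs[j][k], true_runs[j][k])
                              for j in range(N)]) for k in range(M)])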

The fourth performance criterion (denoted T running) is the total running time of each method, in MATLAB, for 300 Monte Carlo runs, measured in seconds on an Intel Core i5 2.80 GHz.

4.2 Results and discussions

Figures 3 and 4 give the results of one Monte Carlo run for the two scenarios.

Figure 3

The results of one Monte Carlo run for scenario 1 with s y = 0.074.

Figure 4

The results of one Monte Carlo run for scenario 2 with d 1 = 0.6.

The p coalescing, p success, MOSPA, and T running curves are shown in Figures 5, 6, 7, 8, and 9.

Figure 5

A comparison of the p coalescing performance of the six filter methods. (a) Scenario 1; (b) scenario 2.

Figure 6

A comparison of the p success performance of the six filter methods in scenario 1.

Figure 7

A comparison of the MOSPA performance of the six filter methods. (a) Scenario 1; (b) scenario 2.

Figure 8

A comparison of the MOSPA performance of the six filter methods. (a) Scenario 1 with s y = 0.01; (b) scenario 2 with d 1 = 0.2.

Figure 9

A comparison of the T running performance of the six filter methods. (a) Scenario 1; (b) scenario 2.

The p coalescing plots in Figure 5 showed that the performance of every method varied with the scenario and the scenario parameter, and the best method in scenario 1 differed under different parameter values. Specifically, BRJPDA was the only method that performed better than JPDA in scenario 1. The performance improvements of all the anti-coalescing methods over JPDA appeared almost the same in scenario 2. Therefore, under the p coalescing criterion, the simulation results showed that BRJPDA could outperform all the other anti-coalescing methods. Note that CorrectAsso should not be considered an effective method in practice; it acted only as a performance reference.

Figure 6 showed that the p success performance of all the anti-coalescing methods was worse than that of JPDA, and that BRJPDA had the best performance among the anti-coalescing methods. SetJPDA was not considered under this criterion, since it does not maintain target identity and the criterion is therefore meaningless for it.

The results in Figure 7a showed that SetJPDA significantly outperformed the other three anti-coalescing methods. This is because SetJPDA is specifically designed to minimize the MOSPA measure at the cost of losing target identity. If SetJPDA is excluded from the comparison, BRJPDA appeared to be the best anti-coalescing method. For the results in Figure 7b, all the anti-coalescing methods appeared to have almost the same performance.

To give more detailed information, the MOSPA performance for the two scenarios with specific parameters is given in Figure 8. The results in Figure 8a were consistent with those in Figure 7a. However, an interesting phenomenon appeared in Figure 8b: during the period in which the two targets travel in parallel, BRJPDA showed the best performance. Although a performance decrease occurred when the targets began to separate, it lasted for only a few scans.

Figure 9 showed that SetJPDA had a much higher computational cost than the other anti-coalescing methods, and that all the anti-coalescing methods except SetJPDA had almost the same computational complexity.

There is no doubt that evaluating the performance of a multi-target tracking system is a difficult problem, since the performance of a method is highly criterion, scenario, and parameter dependent. It may be impossible for a practical algorithm to outperform all the other algorithms over all scenarios and all parameter values. For example, JPDA showed the best performance in Figure 6 but the worst performance in Figures 5b, 7b, and 8b. Similarly, SetJPDA showed the best performance in Figure 7b but the worst performance in Figures 5a and 9. This drastic variation of performance may also indicate that JPDA and SetJPDA are not robust. As for the other methods, BRJPDA was shown to outperform ENNJPDA and JPDA* in Figures 5a, 6, 7a, and 8a and to have almost the same performance as ENNJPDA and JPDA* in the other simulation results. Moreover, BRJPDA exhibited rather robust performance. By analyzing the simulation results across the different scenarios and performance criteria, BRJPDA appeared to perform better than JPDA and the other three anti-coalescing algorithms, and it can be taken as a good choice for an MTT system.

5 Conclusions

In this paper, a novel solution to the inherent track coalescence problem of the well-known JPDA algorithm, based on target state bias removal, is studied. First, the reason why JPDA causes bias is analyzed. Then, the bias is estimated in the general and practical case through its expectation over the space of target detection hypotheses and target-to-target association hypotheses. Finally, the bias is removed. Monte Carlo simulations are used to evaluate the performance of the proposed method and the existing methods. From an overall point of view, BRJPDA exhibits better performance than the traditional algorithms.

References

  1. S Blackman, R Popoli, Design and Analysis of Modern Tracking Systems (Artech House, Boston, MA, 1999)

  2. Y Bar-Shalom, XR Li, Multitarget-Multisensor Tracking: Principles and Techniques (YBS Publishing, Storrs, CT, 1995)

  3. DB Kim, SM Hong, Multiple-target tracking and track management for an FMCW radar network. EURASIP J. Adv. Signal Process. (2013). doi:10.1186/1687-6180-2013-159

  4. C Hu, T Zeng, C Zhou, Accurate three-dimensional tracking method in bistatic forward scatter radar. EURASIP J. Adv. Signal Process. (2013). doi:10.1186/1687-6180-2013-66

  5. SS Blackman, Multiple hypothesis tracking for multiple target tracking. IEEE Aero. Electron. Syst. Mag. 19(1), 5 (2004)

  6. D Vivet, P Checchin, R Chapuis, P Faure, A mobile ground-based radar sensor for detection and tracking of moving objects. EURASIP J. Adv. Signal Process. (2012). doi:10.1186/1687-6180-2012-45

  7. TE Fortmann, Y Bar-Shalom, M Scheffe, Sonar tracking of multiple targets using joint probabilistic data association. IEEE J. Ocean. Eng. 8(3), 173 (1983)

  8. Y Bar-Shalom, E Tse, Tracking in a cluttered environment with probabilistic data association. Automatica 11(5), 451 (1975)

  9. B Habtemariam, R Tharmarasa, T Thayaparan, M Mallick, T Kirubarajan, A multiple-detection joint probabilistic data association filter. IEEE J Sel. Topic Signal Process. 7(3), 461 (2013)

  10. RJ Fitzgerald, Development of practical PDA logic for multitarget tracking by microprocessor, in American Control Conference. (Seattle, 1986), p. 889–898

  11. HAP Blom, EA Bloem, Joint Probabilistic Data Association Avoiding Track Coalescence, in IEE Colloquium on Algorithms for Target Tracking, (London, 1995), p. 1–3

  12. EA Bloem, HAP Blom, Joint Probabilistic Data Association Methods Avoiding Track Coalescence, in Proc. of the 34th IEEE Conference on Decision and Control, (New Orleans, 1995), p. 2752–2757

  13. HAP Blom, EA Bloem, Probabilistic data association avoiding track coalescence. IEEE Trans. Automat. Contr. 45(2), 247 (2000)

  14. K Romeo, DF Crouse, Y Bar-Shalom, P Willett, A Fast Coalescence-Avoiding JPDAF, in Proc. SPIE 8339, Signal and Data Processing of Small Targets, (Baltimore, 2012)

  15. HL Kennedy, Controlling Track Coalescence with Scaled Joint Probabilistic Data Association, in 2008 International Conference on Radar, (Adelaide, 2008), p. 440–445

  16. SL Chen, YB Xu, Modified Joint Probability Data Association Algorithm Avoiding Track Coalescence. Materials Science and Information Technology, (2012), p. 2298

  17. YB Xu, GM Wang, M Zhu, SL Chen, A Scaled Joint Probabilistic Data Association Algorithm, in 2012 International Conference on Communication Systems and Network Technologies, (Rajkot, 2012), p. 238–242

  18. YB Xu, JS Ma, Y Wen, M Zhu, Comparing of Several Modified Joint Probabilistic Data Association Algorithms, in Proc. SPIE 8768, International Conference on Graphic and Image Processing, (Singapore, 2012)

  19. X Yibing, C Songlin, W Zhaohui, K Lianrui, Modified Joint Probability Data Association Algorithm Controlling Track Coalescence, in 2011 Fourth International Conference on Intelligent Computation Technology and Automation, (Shenzhen, 2011), p. 442–445

  20. E Kaufman, TA Lovell, T Lee, Optimal Joint Probabilistic Data Association Filter Avoiding Coalescence in Close Proximity, in 2014 European Control Conference, (Strasbourg, 2014), p. 2709–2714

  21. L Svensson, D Svensson, P Willett, Set JPDA Algorithm for Tracking Unordered Sets of Targets, in 12th International Conference on Information Fusion, (Seattle, 2009), p. 1187–1194

  22. L Svensson, D Svensson, M Guerriero, P Willett, Set JPDA filter for multitarget tracking. IEEE Trans. Signal Process. 59(10), 4677 (2011)

  23. RJ Fitzgerald, Track biases and coalescence with probabilistic data association. IEEE Trans. Aero. Electron. Syst. AES-21(6), 822 (1985)

  24. D Schuhmacher, BT Vo, BN Vo, A consistent metric for performance evaluation of multi-object filters. IEEE Trans. Signal Process. 56(8), 3447 (2008)

  25. B Ristic, BN Vo, D Clark, BT Vo, A metric for performance evaluation of multi-target tracking algorithms. IEEE Trans. Signal Process. 59(7), 3452 (2011)

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Jing, P., Xu, S., Li, X. et al. Coalescence-avoiding joint probabilistic data association based on bias removal. EURASIP J. Adv. Signal Process. 2015, 24 (2015). https://doi.org/10.1186/s13634-015-0205-2
