Article

Noise Enhanced Signal Detection of Variable Detectors under Certain Constraints

1 College of Communication Engineering, Chongqing University, Chongqing 400044, China
2 Chongqing Institution of Tuberculosis Treatment and Prevention, Chongqing 400050, China
3 GAC Automotive Engineering Institute, Guangzhou 510640, China
* Author to whom correspondence should be addressed.
Entropy 2018, 20(6), 470; https://doi.org/10.3390/e20060470
Submission received: 17 April 2018 / Revised: 4 June 2018 / Accepted: 12 June 2018 / Published: 17 June 2018
(This article belongs to the Special Issue Foundations of Statistics)

Abstract

In this paper, a noise enhanced binary hypothesis-testing problem is studied for a variable detector under certain constraints, in which the detection probability can be increased and the false-alarm probability decreased simultaneously. According to the constraints, three alternative cases are proposed: the first two concern the minimization of the false-alarm probability and the maximization of the detection probability, respectively, each without deterioration of the other, while the third is achieved by a randomization of the two optimal noise enhanced solutions obtained in the first two limit cases. Furthermore, the noise enhanced solutions that satisfy the three cases are determined both when randomization between different detectors is allowed and when it is not. In addition, the practicality of the third case is proven from the perspective of Bayes risk. Finally, numerical examples and conclusions are presented.

1. Introduction

Stochastic resonance (SR) is a physical phenomenon in which noise plays an active role in enhancing the performance of certain nonlinear systems under certain conditions. Since the concept of SR was first put forward by Benzi et al. in 1981 [1], the positive effect of SR has been widely investigated and applied in various research fields, such as physics, chemistry, biology and electronics [2,3,4,5,6,7,8,9,10]. In signal detection theory, it is also called noise enhanced detection [3]. The classical signature of noise enhanced detection can be an increase in output signal-to-noise ratio (SNR) [11,12,13] or mutual information (MI) [14,15,16,17,18], a decrease in Bayes risk [19,20,21] or probability of error [22], or an increase in detection probability without an increase in false-alarm probability [23,24,25,26,27,28].
Studies in recent years indicate that the detection performance of a nonlinear detector in a hypothesis-testing problem can be improved by adding additive noise to the system input or adjusting the background noise level based on the Bayesian [19], Minimax [20] or Neyman–Pearson [24,25,28] criteria. In [19], S. Bayram et al. analyzed the noise enhanced M-ary composite hypothesis-testing problem in a restricted Bayesian framework; specifically, the Minimax criterion can be used when the prior probabilities are unknown. The results showed that noise enhanced detection in the Minimax framework [20] can be viewed as a special case of the restricted Bayesian framework.
Numerous studies on how to increase the detection probability according to the Neyman–Pearson criterion have been conducted. In [23], S. Kay showed that, for the detection of a direct current (DC) signal in a Gaussian mixture noise background, the detection probability of a sign detector can be enhanced by adding suitable white Gaussian noise under certain conditions. A mathematical framework was established by H. Chen et al. in [24] to analyze the mechanism of the SR effect in the binary hypothesis-testing problem according to the Neyman–Pearson criterion for a fixed detector. The optimal additive noise that maximizes the detection probability without decreasing the false-alarm probability and its probability density function (pdf) were derived in detail, and sufficient conditions for improvability and non-improvability were given. In [27], Ashok Patel and Bart Kosko presented theorems and an algorithm to search for the optimal or near-optimal additive noise for the same problem as in [24] from another perspective. In [28], noise enhanced binary composite hypothesis-testing problems were investigated according to the Max-sum, Max-min and Max-max criteria, respectively. Furthermore, a noise enhanced detection problem for a variable detector according to the Neyman–Pearson criterion was investigated in [25]. Similar to [24,27,28], it only considers how to increase the detection probability and ignores the importance of decreasing the false-alarm probability.
Few researchers have focused on how to reduce the false-alarm probability, and there is no evidence that the false-alarm probability cannot be decreased by adding additive noise without deteriorating the detection probability. In fact, it is significant to decrease the false-alarm probability without decreasing the detection probability. In [2], a noise enhanced model for a fixed detector is proposed, which considers how to use additive noise to decrease the false-alarm probability and increase the detection probability simultaneously. Unfortunately, however, it does not take into account the case where the detector is variable. When no randomization between different detectors exists, we just need to find the most appropriate detector, since the optimum solution for each detector can be obtained straightforwardly by utilizing the results in [2]. On the other hand, if randomization between different detectors is allowed, new noise enhanced solutions can be introduced by the randomization of multiple detector and additive noise pairs. Therefore, the aim of this paper is to find the optimal noise enhanced solutions for the randomization case.
Actually, in many cases, although the structure of the detector cannot be altered, some of its parameters can be adjusted to obtain a better performance; in some particular situations, the structure can even be changed as well. In this paper, we consider the noise enhanced model established in [2] for a variable detector, where a candidate set of decision functions can be utilized. Instead of solving the model directly, three alternative cases are considered. The first two cases minimize the false-alarm probability and maximize the detection probability, respectively, each without deterioration of the other. When randomization between the detectors is not allowed, the first two cases can be realized by choosing a suitable detector and adding the corresponding optimal additive noise. When randomization between different detectors is allowed, the optimal noise enhanced solutions for the first two cases are suitable randomizations between two detector and additive noise pairs. Whether randomization between the detectors is allowed or not, the last case can be obtained by a convex combination of the optimal noise enhanced solutions for the first two cases with corresponding weights. In addition, the noise enhanced model in this paper also provides a solution to reduce the Bayes risk for the variable detector, which is different from the minimization of the Bayes risk under the Bayesian criterion in [19], where the false-alarm and detection probabilities are not of concern.
The remainder of this paper is organized as follows. In Section 2, a noise enhanced binary hypothesis-testing model for a variable detector is established, which is simplified into three different cases. In Section 3, the forms of the noise enhanced solutions are discussed. Furthermore, the exact parameter values of these noise enhanced solutions are determined in Section 4 when randomization between the detectors is allowed. Numerical results are presented in Section 5 and conclusions are provided in Section 6.
Notation: Lower-case bold letters denote vectors; x is a K-dimensional observation vector. Upper-case hollow letters denote sets, e.g., ℝ denotes the set of real numbers. p(·) denotes a pdf and p(·|·) its conditional counterpart. ϕ denotes a decision function and Φ a set of decision functions. δ(·) denotes the Dirac delta function. ∗, Σ, ∫, E{·}, min, max and arg denote the convolution, summation, integration, expectation, minimum, maximum and argument operators, respectively; inf{·} and sup{·} denote the infimum and supremum operators. φ(·; μ, σ²) denotes a Gaussian pdf with mean μ and variance σ².

2. Noise Enhanced Detection Model for Binary Hypothesis-Testing

2.1. Problem Formulation

A binary hypothesis-testing problem is considered as follows:
H_i : p_i(x), i = 0, 1,
where H_0 and H_1 denote the null and alternative hypotheses, respectively, x is a K-dimensional observation vector, i.e., x ∈ ℝ^K, ℝ denotes the set of real numbers, and p_i(x) is the pdf of x under H_i, i = 0, 1. Let ϕ(x) represent the decision function, which is also the probability of choosing H_1, with 0 ≤ ϕ(x) ≤ 1. For a given ϕ, the original false-alarm probability P_FA^x and detection probability P_D^x can be calculated as
P_FA^x = ∫_{ℝ^K} ϕ(x) p_0(x) dx,
P_D^x = ∫_{ℝ^K} ϕ(x) p_1(x) dx.
The new noise modified observation y is obtained by adding an independent additive noise n to the original observation x such that
y = x + n .
Then the pdf of y under H_i can be formulated as the convolution of p_n(·) and p_i(·),
p_y(y|H_i) = p_n(y) ∗ p_i(y) = ∫_{ℝ^K} p_n(n) p_i(y − n) dn.
The noise modified false-alarm probability P_{FA,ϕ}^y and detection probability P_{D,ϕ}^y for the given ϕ can be calculated by
P_{FA,ϕ}^y = ∫_{ℝ^K} ϕ(y) p_y(y|H_0) dy = ∫_{ℝ^K} p_n(n) (∫_{ℝ^K} ϕ(y) p_0(y − n) dy) dn = ∫_{ℝ^K} p_n(n) F_{0,ϕ}(n) dn = E_n{F_{0,ϕ}(n)},
P_{D,ϕ}^y = ∫_{ℝ^K} ϕ(y) p_y(y|H_1) dy = ∫_{ℝ^K} p_n(n) F_{1,ϕ}(n) dn = E_n{F_{1,ϕ}(n)},
where
F_{i,ϕ}(n) = ∫_{ℝ^K} ϕ(y) p_i(y − n) dy,  i = 0, 1.
From (6) and (7), P_{FA,ϕ}^y and P_{D,ϕ}^y are the respective expected values of F_{0,ϕ}(n) and F_{1,ϕ}(n) under the distribution p_n(·) of the additive noise. In particular, P_FA^x = F_{0,ϕ}(0) and P_D^x = F_{1,ϕ}(0) for the given ϕ according to (8).
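As a concrete numerical illustration of (6)–(8), the sketch below (Python) evaluates the noise modified probabilities as expectations of F_{i,ϕ}(n) over a discrete noise pdf. It assumes K = 1, Gaussian pdfs under both hypotheses and a simple threshold detector; the values of A, gamma and the noise atoms are purely illustrative, not taken from the paper.

```python
import math

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

A, gamma = 1.0, 0.5  # illustrative signal amplitude and decision threshold

def F0(n):  # F_{0,phi}(n) = P(decide H1 | H0) for x ~ N(0,1) shifted by n
    return Q(gamma - n)

def F1(n):  # F_{1,phi}(n) = P(decide H1 | H1) for x ~ N(A,1) shifted by n
    return Q(gamma - n - A)

# a discrete additive noise pdf: atom n_k with probability w_k
noise = [(-0.5, 0.3), (0.8, 0.7)]

# equations (6)-(7): the noise modified probabilities are expectations over p_n
P_FA_y = sum(w * F0(n) for n, w in noise)
P_D_y = sum(w * F1(n) for n, w in noise)

# n = 0 recovers the original detector, as noted after (8)
print(P_FA_y, P_D_y, F0(0.0), F1(0.0))
```

For a continuous noise pdf the sums become integrals; a discrete pdf keeps the sketch exact.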

2.2. Noise Enhanced Detection Model for a Variable Detector

Actually, although the detector cannot be substituted in many cases, some of its parameters, such as the decision threshold, can be adjusted to achieve a better detection performance; in some particular cases, the structure of the detector can even be altered. Instead of a strictly fixed ϕ(·), a candidate set of decision functions Φ is provided here. As a result, for a variable detector, the optimization of the detection performance can be achieved by adding a suitable noise and/or changing the detector. If randomization between the detectors is allowed, the optimal solution of the noise enhanced detection problem is a combination of multiple decision function and additive noise pairs.
Under the constraints that P_FA^y ≤ P_FA^x and P_D^y ≥ P_D^x, a noise enhanced detection model for a variable detector is established as follows:
P_FA^y = P_FA^x − z_1,  0 ≤ z_1 ≤ P_FA^x,
P_D^y = P_D^x + z_2,  0 ≤ z_2 ≤ 1 − P_D^x,
where z_1 and z_2 represent the improvements of the false-alarm and detection probabilities, respectively. Let z denote the overall improvement of the detectability, i.e., z = z_1 + z_2.
It is obvious that the ranges of z_1 and z_2 are limited and that the maximum values of z_1 and z_2 cannot be attained at the same time. In order to solve the noise enhanced detection model, we can first consider two limit cases, i.e., the noise enhanced optimization problems of maximizing z_1 and z_2, respectively, under the constraints that P_FA^y ≤ P_FA^x and P_D^y ≥ P_D^x. Then a new suitable solution of the noise enhanced detection model in (9) can be obtained by a convex combination of the two optimal noise enhanced solutions obtained in the two limit cases with corresponding weights. Consequently, the new noise enhanced solution always guarantees P_FA^y ≤ P_FA^x and P_D^y ≥ P_D^x, and the corresponding value of z lies between the values of z obtained in the two limit cases. The three cases discussed above can be formulated as below.
(i)
When P_D^y ≥ P_D^x, the minimization of P_FA^y is explored. The maximum achievable z_1 is denoted by z_1^o, and the corresponding z_2 is denoted by z_2^(i), where z_2^(i) ≥ 0. Thus, the corresponding false-alarm and detection probabilities can be written as
P_{FA,opt}^y = P_FA^x − z_1^o,
P_D^y = P_D^x + z_2^(i).
(ii)
When P_FA^y ≤ P_FA^x, the maximum P_D^y is sought. The corresponding z_1 and z_2 are denoted by z_1^(ii) and z_2^o, respectively, where z_1^(ii) ≥ 0 and z_2^o is the maximum achievable z_2. The corresponding false-alarm and detection probabilities can be expressed by
P_FA^y = P_FA^x − z_1^(ii),
P_{D,opt}^y = P_D^x + z_2^o.
(iii)
A noise enhanced solution obtained as a randomization between the two optimal solutions of case (i) and case (ii), with weights η and 1 − η, respectively, is applied in this case. Combining (10) and (11), the corresponding false-alarm and detection probabilities are calculated by
P_FA^y = P_FA^x − [η z_1^o + (1 − η) z_1^(ii)] < P_FA^x,
P_D^y = P_D^x + [η z_2^(i) + (1 − η) z_2^o] > P_D^x.
Naturally, z_1 = η z_1^o + (1 − η) z_1^(ii) and z_2 = η z_2^(i) + (1 − η) z_2^o. Obviously, case (iii) is identical to case (i) when η = 1, and to case (ii) when η = 0. In addition, when η ∈ (0, 1), further noise enhanced solutions can be obtained by adjusting the value of η to increase the detection probability and decrease the false-alarm probability simultaneously.
Remarkably, if P_FA^y < α and P_D^y > β are required, we only need to replace P_FA^x and P_D^x in this model with α and β, respectively.
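The bookkeeping in (12) can be checked with a few lines (Python). The probabilities and the improvements z_1^o, z_2^(i), z_1^(ii), z_2^o are hypothetical placeholder values, not results derived in the paper; the point is only that the convex combination preserves both constraints.

```python
# hypothetical improvements delivered by the two limit cases
z1_o, z2_i = 0.12, 0.0    # case (i): maximal false-alarm reduction, z_2^(i) = 0 here
z1_ii, z2_o = 0.0, 0.15   # case (ii): maximal detection gain, z_1^(ii) = 0 here

P_FA_x, P_D_x = 0.30, 0.60  # original probabilities (illustrative)
eta = 0.4                   # randomization weight toward the case (i) solution

# equation (12): convex combination of the two optimal solutions
P_FA_y = P_FA_x - (eta * z1_o + (1 - eta) * z1_ii)
P_D_y = P_D_x + (eta * z2_i + (1 - eta) * z2_o)

# both constraints hold simultaneously for any eta in (0, 1)
assert P_FA_y < P_FA_x and P_D_y > P_D_x
print(P_FA_y, P_D_y)
```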

3. Form of Noise Enhanced Solution under Different Situations

For the noise enhanced detection problem, when the detector is fixed, we only need to consider how to find the suitable or optimal additive noise. Nevertheless, when the detector is variable, we also need to consider how to choose a suitable detector. In this section, the noise enhanced detection problem for the variable detector is discussed for the cases where randomization between different detectors is allowed and where it is not.

3.1. No Randomization between Detectors

For the case where no randomization between detectors is allowed, only one detector can be applied for each decision, so the noise enhanced detection problem for the variable detector simplifies to that for a fixed detector ϕ_o. That is, the optimal noise enhanced solution is to find the optimal detector ϕ_o from Φ and add the corresponding optimal additive noise. The actual noise modified false-alarm and detection probabilities can be expressed by P_FA^y = P_{FA,ϕ_o}^y and P_D^y = P_{D,ϕ_o}^y.
For any ϕ ∈ Φ, the corresponding z_{1,ϕ}^o and z_{2,ϕ}^o can be obtained straightforwardly by utilizing the results in [2]. Then the optimal detectors corresponding to case (i) and case (ii) can be selected as ϕ_i = arg max_{ϕ∈Φ} z_{1,ϕ}^o and ϕ_ii = arg max_{ϕ∈Φ} z_{2,ϕ}^o, respectively. When ϕ_i = ϕ_ii, the optimal detector for case (iii) is ϕ_i. When ϕ_i ≠ ϕ_ii, if z_{ϕ_i} = z_{1,ϕ_i}^o + z_{2,ϕ_i}^(i) > z_{ϕ_ii} = z_{1,ϕ_ii}^(ii) + z_{2,ϕ_ii}^o, then ϕ_i is selected as the optimal detector for case (iii); otherwise, ϕ_ii is selected.

3.2. Randomization between Detectors

For the case where randomization between different detectors is allowed, multiple detector and additive noise pairs can be utilized for each decision, so the actual noise modified false-alarm and detection probabilities can be expressed as
P_FA^y = Σ_{i=1}^{L} ξ_i P_{FA,ϕ_i}^y,
P_D^y = Σ_{i=1}^{L} ξ_i P_{D,ϕ_i}^y,
where L ≥ 1 is the number of detectors involved in the noise enhanced solution, P_{FA,ϕ_i}^y and P_{D,ϕ_i}^y are the respective false-alarm and detection probabilities for ϕ_i ∈ Φ, and ξ_i is the probability of ϕ_i, with 0 ≤ ξ_i ≤ 1 and Σ_i ξ_i = 1.
Let f_1 = F_{1,ϕ}(n); then n = F_{1,ϕ}^{−1}(f_1) and f_0 = F_{0,ϕ}(n) = F_{0,ϕ}(F_{1,ϕ}^{−1}(f_1)), where F_{1,ϕ}^{−1} is the function that maps f_1 back to n based on F_{1,ϕ}. Thus f_0 can be a one-to-one or one-to-many function with respect to (w.r.t.) f_1, and vice versa. In addition, let U be the set of all pairs (f_0, f_1), i.e., U = {(f_0, f_1) | f_0 = F_{0,ϕ}(n), f_1 = F_{1,ϕ}(n), n ∈ ℝ^K, ϕ ∈ Φ}. On the basis of these definitions, the forms of the optimal noise enhanced solutions for case (i) and case (ii) are presented in the following theorem.
Theorem 1.
The optimal noise enhanced solution for case (i) (case (ii)) is a randomization of at most two detector and discrete vector pairs, i.e., [ϕ_1, n_1] and [ϕ_2, n_2], with corresponding probabilities. The proof is presented in Appendix A and omitted here.

4. Solutions of the Noise Enhanced Model with Randomization

In this section, we first find the optimal noise enhanced solutions corresponding to cases (i) and (ii), and then achieve case (iii) by utilizing these solutions. From (8), f_0 = F_{0,ϕ}(n) and f_1 = F_{1,ϕ}(n) can be treated as the false-alarm and detection probabilities, respectively, obtained by choosing a suitable ϕ ∈ Φ and adding a discrete vector n to x. Thus we can find the minimum f_0, denoted F_0^m, and the maximum f_1, denoted F_1^M, in the set U. Realizations of case (i) and case (ii) then start from F_0^m and F_1^M, respectively. The detailed solving process is given as follows.

4.1. The Optimal Noise Enhanced Solution for Case (i)

In this subsection, the main goal is to determine the exact values of the two detector and constant vector pairs in the optimal noise enhanced solution for case (i).
Define Γ_ϕ(f_1) = inf{f_0 : F_{1,ϕ}(n) = f_1} and Γ(f_1) = inf_{ϕ∈Φ} Γ_ϕ(f_1). Namely, Γ_ϕ(f_1) and Γ(f_1) are the minimum f_0 corresponding to a given f_1 for a fixed ϕ and over all ϕ ∈ Φ, respectively. According to the definitions of F_0^m and Γ(f_1), F_0^m can be rewritten as F_0^m = min(Γ(f_1)), and the maximum f_1 corresponding to F_0^m is denoted by f_1* = arg max{f_1 : Γ(f_1) = F_0^m}. Combining the location of f_1* with Theorem 1, we have the following theorem.
Theorem 2.
If f_1* ≥ P_D^x, then P_{FA,opt}^y = F_0^m < P_FA^x and P_D^y = f_1* ≥ P_D^x; the minimum achievable P_FA^y is obtained by choosing the detector ϕ_1^o and adding a discrete vector n_1^o. If f_1* < P_D^x, the optimal noise enhanced solution that minimizes P_FA^y is a randomization of two detector and discrete vector pairs, i.e., [ϕ_11, n_11] and [ϕ_12, n_12], with probabilities ζ and 1 − ζ, and the corresponding P_D^y = P_D^x. The proof is given in Appendix B.
Obviously, when f_1* ≥ P_D^x, the detector ϕ_1^o and constant vector n_1^o that minimize P_FA^y can be determined by F_{0,ϕ_1^o}(n_1^o) = F_0^m and F_{1,ϕ_1^o}(n_1^o) = f_1*. Moreover, z_1^o = P_FA^x − F_0^m and z_2^(i) = f_1* − P_D^x.
In order to determine the exact values of [ϕ_11, n_11], [ϕ_12, n_12] and ζ for the case of f_1* < P_D^x, an auxiliary function H(f_1, c) = Γ(f_1) − c f_1 is introduced. There exists at least one c_0 > 0 that makes H(f_1, c) attain the same minimum value, denoted υ, on the two intervals I_1 = [0, P_D^x] and I_2 = [P_D^x, 1]. The maximum f_1 values corresponding to υ in I_1 and I_2 are denoted by f_11(c_0) and f_12(c_0), respectively. As a result, the optimal false-alarm probability P_{FA,opt}^y and the corresponding detection probability P_D^y can be calculated as
P_{FA,opt}^y = ζ F_{0,ϕ_11}(n_11) + (1 − ζ) F_{0,ϕ_12}(n_12) < P_FA^x,
P_D^y = ζ F_{1,ϕ_11}(n_11) + (1 − ζ) F_{1,ϕ_12}(n_12) = P_D^x,
where ζ = (f_12(c_0) − P_D^x)/(f_12(c_0) − f_11(c_0)), ϕ_11 and n_11 are determined by F_{0,ϕ_11}(n_11) = υ + c_0 f_11(c_0) and F_{1,ϕ_11}(n_11) = f_11(c_0), and ϕ_12 and n_12 are determined by F_{0,ϕ_12}(n_12) = υ + c_0 f_12(c_0) and F_{1,ϕ_12}(n_12) = f_12(c_0). As a result, z_1^o = P_FA^x − P_{FA,opt}^y and z_2^(i) = 0.
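The search for c_0, f_11(c_0), f_12(c_0) and ζ can be carried out numerically. The sketch below (Python) does so for a single fixed detector, borrowing the Gaussian-mixture sign-detector example of Section 5 (μ = 3, A = 1, σ = 1, threshold 0) so that Γ is simply the detector's (f_0, f_1) curve; the grid sizes and the scan range for c are arbitrary implementation choices, not prescribed by the paper.

```python
import math

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Gaussian-mixture sign-detector example of Section 5
mu, A, sigma, gamma = 3.0, 1.0, 1.0, 0.0

def f0(n):  # F_{0,phi}(n): false-alarm probability after adding the constant n
    return 0.5 * Q((gamma - n - mu) / sigma) + 0.5 * Q((gamma - n + mu) / sigma)

def f1(n):  # F_{1,phi}(n): detection probability after adding the constant n
    return 0.5 * Q((gamma - n - mu - A) / sigma) + 0.5 * Q((gamma - n + mu - A) / sigma)

P_D_x = f1(0.0)  # original detection probability (no additive noise)

# discretize the (f0, f1) curve; with a single detector it plays the role of Gamma(f1)
ns = [0.01 * i - 10.0 for i in range(2001)]
curve = [(f0(n), f1(n)) for n in ns]

def min_H(c, region):
    """Minimize H(f1, c) = f0 - c*f1 over the part of the curve selected by `region`."""
    return min((g0 - c * g1, g1) for g0, g1 in curve if region(g1))

# scan c > 0 for the c0 at which the minima over I1 = [0, P_D^x] and I2 = [P_D^x, 1] coincide
cs = [0.5 + 0.02 * k for k in range(51)]
c0 = min(cs, key=lambda c: abs(min_H(c, lambda t: t <= P_D_x)[0]
                               - min_H(c, lambda t: t >= P_D_x)[0]))
f11 = min_H(c0, lambda t: t <= P_D_x)[1]
f12 = min_H(c0, lambda t: t >= P_D_x)[1]
zeta = (f12 - P_D_x) / (f12 - f11)  # randomization weight of equation (14)
print(c0, f11, f12, zeta)
```

With several detectors, `curve` would simply collect the (f_0, f_1) pairs of every ϕ ∈ Φ before the same scan is applied.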

4.2. The Optimal Noise Enhanced Solution for Case (ii)

The focus of this subsection is to determine the exact values of the parameters in the optimal noise enhanced solution for case (ii).
Define G_ϕ(f_0) = sup{f_1 : F_{0,ϕ}(n) = f_0} and G(f_0) = sup_{ϕ∈Φ} G_ϕ(f_0), such that G_ϕ(f_0) and G(f_0) are the maximum f_1 corresponding to a given f_0 for a fixed ϕ and over all ϕ ∈ Φ, respectively. In addition, F_1^M = max(G(f_0)) since F_1^M denotes the maximum f_1, and the minimum f_0 corresponding to F_1^M is denoted by f_0* = arg min{f_0 : G(f_0) = F_1^M}. Combining the location of f_0* with Theorem 1, the following theorem is obtained.
Theorem 3.
If f_0* ≤ P_FA^x, then P_{D,opt}^y = F_1^M > P_D^x and P_FA^y = f_0* ≤ P_FA^x; the maximum achievable P_D^y in case (ii) is obtained by choosing the detector ϕ_2^o and adding a constant vector n_2^o to x. Otherwise, the maximization of P_D^y in case (ii) is achieved by a randomization of the two pairs [ϕ_21, n_21] and [ϕ_22, n_22] with probabilities λ and 1 − λ, respectively, and the corresponding P_FA^y = P_FA^x. The proof is similar to that of Theorem 2 and omitted.
According to Theorem 3, when f_0* ≤ P_FA^x, the detector ϕ_2^o and constant vector n_2^o that maximize P_D^y in case (ii) are determined by F_{0,ϕ_2^o}(n_2^o) = f_0* and F_{1,ϕ_2^o}(n_2^o) = F_1^M. Also, z_1^(ii) = P_FA^x − f_0* and z_2^o = F_1^M − P_D^x.
In addition, in order to determine the exact values of [ϕ_21, n_21], [ϕ_22, n_22] and λ that maximize P_D^y when f_0* > P_FA^x, we define an auxiliary function J(f_0, k) = G(f_0) − k f_0. There exists at least one k_0 > 0 that makes J(f_0, k) attain the same maximum value, denoted ν, on the two intervals T_1 = [0, P_FA^x] and T_2 = [P_FA^x, 1]. The minimum f_0 values corresponding to ν in T_1 and T_2 are denoted by f_01(k_0) and f_02(k_0), respectively. As a consequence, the optimal detection probability P_{D,opt}^y and the corresponding false-alarm probability P_FA^y are calculated as
P_{D,opt}^y = λ F_{1,ϕ_21}(n_21) + (1 − λ) F_{1,ϕ_22}(n_22) > P_D^x,
P_FA^y = λ F_{0,ϕ_21}(n_21) + (1 − λ) F_{0,ϕ_22}(n_22) = P_FA^x,
where λ = (f_02(k_0) − P_FA^x)/(f_02(k_0) − f_01(k_0)), ϕ_21 and n_21 are determined by F_{0,ϕ_21}(n_21) = f_01(k_0) and F_{1,ϕ_21}(n_21) = ν + k_0 f_01(k_0), and ϕ_22 and n_22 are determined by F_{0,ϕ_22}(n_22) = f_02(k_0) and F_{1,ϕ_22}(n_22) = ν + k_0 f_02(k_0). Then we have z_1^(ii) = 0 and z_2^o = P_{D,opt}^y − P_D^x.

4.3. The Suitable Noise Solution for Case (iii)

According to the analyses in Section 4.1 and Section 4.2, the model in (9) can be achieved by choosing [ϕ_1^o, n_1^o] if f_1* > P_D^x and/or by choosing [ϕ_2^o, n_2^o] if f_0* < P_FA^x. When f_1* > P_D^x and f_0* < P_FA^x hold at the same time, the model can also be achieved by a randomization of the two detector and noise pairs [ϕ_1^o, n_1^o] and [ϕ_2^o, n_2^o] with probabilities η and 1 − η, respectively, where 0 ≤ η ≤ 1.
If f_1* ≤ P_D^x and f_0* ≥ P_FA^x, the model in (9), i.e., case (iii), can be achieved by the randomization of the two optimal noise enhanced solutions for case (i) and case (ii) with probabilities η and 1 − η, respectively, where 0 < η < 1. In other words, case (iii) can be achieved by a suitable randomization of [ϕ_11, n_11], [ϕ_12, n_12], [ϕ_21, n_21], and [ϕ_22, n_22] with probabilities ηζ, η(1 − ζ), (1 − η)λ, and (1 − η)(1 − λ), respectively, as shown in Table 1.
The corresponding false-alarm and detection probabilities are calculated as
P_FA^y = ηζ F_{0,ϕ_11}(n_11) + η(1 − ζ) F_{0,ϕ_12}(n_12) + (1 − η) P_FA^x < P_FA^x,
P_D^y = η P_D^x + (1 − η)λ F_{1,ϕ_21}(n_21) + (1 − η)(1 − λ) F_{1,ϕ_22}(n_22) > P_D^x,
where 0 ≤ ζ ≤ 1, 0 ≤ λ ≤ 1 and 0 < η < 1. In particular, η = 1 reduces to case (i) and η = 0 to case (ii). It is clear that different achievable false-alarm and detection probabilities can be obtained by adjusting the value of η under the constraints that P_FA^y < P_FA^x and P_D^y > P_D^x.
From the perspective of the Bayesian criterion, the noise modified Bayes risk can be expressed in terms of the false-alarm and detection probabilities as
R = p(H_0) C_00 + p(H_1) C_01 + p(H_0)(C_10 − C_00) P_FA^y − p(H_1)(C_01 − C_11) P_D^y,
where p(H_i) is the prior probability of H_i, C_ji is the cost of choosing H_j when H_i is true, i, j = 0, 1, and C_ji > C_ii if j ≠ i. According to case (iii), the improvement ΔR of the Bayes risk is obtained as
ΔR = R − R^x = −p(H_0)(C_10 − C_00) z_1 − p(H_1)(C_01 − C_11) z_2 < 0,
where R^x is the Bayes risk of the original detector. As a result, case (iii) provides a solution to decrease the Bayes risk.
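A quick numeric check of (20) and (21) (Python; the priors, costs, operating point and improvement values are illustrative placeholders, chosen only to show that any joint improvement (z_1, z_2) lowers the risk):

```python
p0, p1 = 0.45, 0.55                      # prior probabilities (illustrative)
C00, C11, C10, C01 = 0.0, 0.0, 1.0, 1.0  # uniform cost assignment

def bayes_risk(P_FA, P_D):
    """Equation (20): Bayes risk in terms of false-alarm and detection probabilities."""
    return p0 * C00 + p1 * C01 + p0 * (C10 - C00) * P_FA - p1 * (C01 - C11) * P_D

P_FA_x, P_D_x = 0.30, 0.60  # original operating point
z1, z2 = 0.05, 0.08         # improvements delivered by case (iii)

delta_R = bayes_risk(P_FA_x - z1, P_D_x + z2) - bayes_risk(P_FA_x, P_D_x)

# equation (21): the change equals -p0*(C10-C00)*z1 - p1*(C01-C11)*z2 < 0
assert abs(delta_R - (-p0 * (C10 - C00) * z1 - p1 * (C01 - C11) * z2)) < 1e-12
print(delta_R)
```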
When p(H_i) are unknown and C_ji are known, an alternative method considers the Minimax criterion, i.e., min{max(R_0, R_1)}, where R_0 = C_00 + (C_10 − C_00) P_FA^y and R_1 = C_01 − (C_01 − C_11) P_D^y are the conditional risks under H_0 and H_1, respectively, for the noise modified detector. Accordingly, case (i) and case (ii) also provide the optimal noise enhanced solutions to minimize R_0 and R_1, respectively, for the variable case.
The minimization of the Bayes risk for a variable detector has also been discussed in [25]. In contrast to this paper, the minimum Bayes risk in [25] is obtained without considering the false-alarm and detection probabilities, which is the biggest difference between [25] and our work. In addition, the minimization of the Bayes risk in [25] is studied only under the uniform cost assignment (UCA), i.e., C_10 = C_01 = 1 and C_00 = C_11 = 0, when p(H_i) is known.

5. Numerical Results

In this section, numerical detection examples are given to verify the theoretical conclusions presented in the previous sections. A binary hypothesis-testing problem is given by
H_0 : x[i] = ω[i],
H_1 : x[i] = A + ω[i],
where x is a K-dimensional observation vector, i = 0, …, K − 1, A > 0 is a known signal and the ω[i] are i.i.d. symmetric Gaussian mixture noise samples with the pdf
p_ω(ω) = 0.5 φ(ω; μ, σ²) + 0.5 φ(ω; −μ, σ²),
where φ(ω; μ, σ²) = (1/√(2πσ²)) exp(−(ω − μ)²/(2σ²)). Let μ = 3, A = 1, and σ = 1. A general decision process of a suboptimal detector is expressed as
T(x) ≷ γ, deciding H_1 if T(x) exceeds γ and H_0 otherwise,
where γ is the decision threshold.

5.1. A Detection Example for K = 1

In this subsection, suppose that K = 1 and
T ( x ) = x .
The corresponding decision function is
ϕ(x) = { 1, x ≥ γ; 0, x < γ }.
When we add an additive noise n to x , the noise modified decision function can be written as
ϕ(y = x + n) = { 1, x + n ≥ γ; 0, x + n < γ } = { 1, x + (n − γ) ≥ 0; 0, x + (n − γ) < 0 }.
It is obvious from (27) that the detection performance obtained by setting the threshold to γ and adding a noise n is identical to that achieved by keeping the threshold at zero and adding a noise n − γ. As a result, the obtained optimal noise enhanced performances are the same for different thresholds. That is to say, randomization between different thresholds cannot further improve the optimum performance, and only the non-randomization case needs to be considered in this example. According to (8), we have
F_{0,ϕ}(n) = ∫ ϕ(y) p_0(y − n) dy = ½ Q((γ − n − μ)/σ) + ½ Q((γ − n + μ)/σ),
F_{1,ϕ}(n) = ∫ ϕ(y) p_1(y − n) dy = ½ Q((γ − n − μ − A)/σ) + ½ Q((γ − n + μ − A)/σ),
where Q(x) = ∫_x^{+∞} (1/√(2π)) exp(−t²/2) dt. Based on the analysis of Equation (8), the original false-alarm and detection probabilities are P_{FA,ϕ}^x = F_{0,ϕ}(0) = ½ Q((γ − μ)/σ) + ½ Q((γ + μ)/σ) and P_{D,ϕ}^x = F_{1,ϕ}(0) = ½ Q((γ − μ − A)/σ) + ½ Q((γ + μ − A)/σ), respectively.
From the definition of the function Q, F_{1,ϕ}(n) and F_{0,ϕ}(n) are monotonically increasing in n, and F_{1,ϕ}(n) > F_{0,ϕ}(n) for any n. In addition, both F_{1,ϕ}(n) and F_{0,ϕ}(n) are one-to-one functions w.r.t. n. Therefore, Γ_ϕ(f_1) = F_{0,ϕ}(F_{1,ϕ}^{−1}(f_1)) = F_{0,ϕ}(n) = f_0 and G_ϕ(f_0) = F_{1,ϕ}(F_{0,ϕ}^{−1}(f_0)) = F_{1,ϕ}(n) = f_1 for any ϕ. Furthermore, Γ(f_1) = Γ_ϕ(f_1) and G(f_0) = G_ϕ(f_0). The relationship between f_01,ϕ(k_0) and f_11,ϕ(c_0) is one-to-one, as is that between f_02,ϕ(k_0) and f_12,ϕ(c_0). As a result, f_01,ϕ(k_0) = f_01(k_0), f_11,ϕ(c_0) = f_11(c_0), f_02,ϕ(k_0) = f_02(k_0), and f_12,ϕ(c_0) = f_12(c_0) for any ϕ. That is to say, n_11 = n_21 and n_12 = n_22, where (n_11, n_12) and (n_21, n_22) are the respective optimal noise components for case (i) and case (ii) for any γ. From [2,24], we have
n_11 = n_21 = γ − μ − 0.5A,
n_12 = n_22 = γ + μ − 0.5A.
Then the pdf of the optimal additive noise corresponding to case (i) and case (ii) for the detector given in (26) can be expressed as
p_{1,ϕ}^{opt}(n) = ζ δ(n − n_11) + (1 − ζ) δ(n − n_12),
p_{2,ϕ}^{opt}(n) = λ δ(n − n_21) + (1 − λ) δ(n − n_22),
where ζ = (F_{1,ϕ}(n_12) − P_{D,γ}^x)/(F_{1,ϕ}(n_12) − F_{1,ϕ}(n_11)) and λ = (F_{0,ϕ}(n_22) − P_{FA,γ}^x)/(F_{0,ϕ}(n_22) − F_{0,ϕ}(n_21)). Thus the suitable additive noise for case (iii) can be given by
p_{3,ϕ}(n) = η p_{1,ϕ}^{opt}(n) + (1 − η) p_{2,ϕ}^{opt}(n),
where 0 ≤ η ≤ 1. In this example, let η = 0.5. The false-alarm and detection probabilities for the three cases versus different γ are shown in Figure 1.
As plotted in Figure 1, with the increase of γ, the false-alarm and detection probabilities for the three cases and the original detector gradually decrease from 1 to 0, and the noise enhanced phenomenon only occurs when γ ∈ (−2.5, 3.5). Namely, the detection performance can be improved by adding additive noise when the value of γ is between −2.5 and 3.5. When γ ∈ (−2.5, 3.5), f_11(c_0) < P_{D,ϕ}^x < f_12(c_0) and f_01(k_0) < P_{FA,ϕ}^x < f_02(k_0), so 0 < ζ < 1 and 0 < λ < 1, and thus the additive noises given in (32) and (33) exist and improve the detection performance. Furthermore, the receiver operating characteristic (ROC) curves for the three cases and the original detector are plotted in Figure 2. The ROC curves for the three cases overlap with each other exactly, and the detection probability can be increased by adding additive noise only when the false-alarm probability is between 0.1543 and 0.6543.
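The corner points (30)–(31) and the boundary values 0.1543 and 0.6543 quoted above can be reproduced numerically (Python sketch; γ = 0.5 is an arbitrary threshold inside the improvable interval (−2.5, 3.5)):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

mu, A, sigma = 3.0, 1.0, 1.0  # parameters of the Section 5 example

def F0(gamma, n):  # equation (28)
    return 0.5 * Q((gamma - n - mu) / sigma) + 0.5 * Q((gamma - n + mu) / sigma)

def F1(gamma, n):  # equation (29)
    return 0.5 * Q((gamma - n - mu - A) / sigma) + 0.5 * Q((gamma - n + mu - A) / sigma)

gamma = 0.5
n11 = gamma - mu - 0.5 * A  # equation (30)
n12 = gamma + mu - 0.5 * A  # equation (31)

# each noise atom parks one mixture mode half a signal amplitude from the threshold,
# so these two false-alarm values do not depend on gamma
print(F0(gamma, n11), F0(gamma, n12))  # ~0.1543 and ~0.6543

# case (i): randomize between the atoms to hold P_D fixed while lowering P_FA
P_FA_x, P_D_x = F0(gamma, 0.0), F1(gamma, 0.0)
zeta = (F1(gamma, n12) - P_D_x) / (F1(gamma, n12) - F1(gamma, n11))
P_FA_opt = zeta * F0(gamma, n11) + (1 - zeta) * F0(gamma, n12)
assert 0.0 < zeta < 1.0 and P_FA_opt < P_FA_x
```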
Given C j i and the prior probability p ( H i ) , i , j = 0 , 1 , the noise enhanced Bayes risk obtained according to case (iii) is given by
R_{iii,ϕ} = R − [p(H_0)(C_10 − C_00) η z_{1,ϕ}^o + p(H_1)(C_01 − C_11)(1 − η) z_{2,ϕ}^o],
where z_{1,ϕ}^o = P_{FA,ϕ}^x − ∫ p_{1,ϕ}^{opt}(n) F_{0,ϕ}(n) dn and z_{2,ϕ}^o = ∫ p_{2,ϕ}^{opt}(n) F_{1,ϕ}(n) dn − P_{D,ϕ}^x. Let η = 0.5, C_10 = C_01 = 1 and C_00 = C_11 = 0; then the Bayes risk of the original detector is calculated as
R = p(H_0) P_FA^x + p(H_1)(1 − P_D^x).
Figure 3 and Figure 4 depict the Bayes risks of the noise enhanced and the original detectors versus different γ for p ( H 0 ) = 0.45 and 0.55, respectively.
From Figure 3 and Figure 4, we can see that when the decision threshold $\gamma$ is very small, the Bayes risks of the noise enhanced and the original detectors are close to $p(H_0)$. As illustrated in Figure 3 and Figure 4, only when $\gamma \in (-2.5, 3.5)$ can the Bayes risk be decreased by adding noise. As $\gamma$ increases, the difference between the Bayes risks of the noise enhanced detector and the original detector first increases and then decreases to zero, reaching its maximum value at $\gamma = 0.5$. If the decision threshold $\gamma$ is large enough, the Bayes risks of the two detectors are close to $p(H_1) = 1 - p(H_0)$. In addition, the value of $p(H_0)$ has no bearing on whether the detection performance can be improved via additive noise, which is consistent with (35).

5.2. A Detection Example for K = 2

In this example, suppose that
$$T(x) = \frac{1}{K} \sum_{i=0}^{K-1} S(x[i]),$$
where $S(x[i]) = 1$ if $x[i] \ge 0$ and $S(x[i]) = 0$ if $x[i] < 0$. When $K = 2$, we have
$$T(x) = \frac{1}{2} \sum_{i=0}^{1} S(x[i]) = \begin{cases} 1, & x[0] \ge 0 \text{ and } x[1] \ge 0, \\ 0.5, & \text{exactly one of } x[0], x[1] \ge 0, \\ 0, & x[0] < 0 \text{ and } x[1] < 0. \end{cases}$$
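The mean-of-signs statistic maps directly to code; a minimal sketch (declaring $H_1$ when $T(x) \ge \gamma$ is an assumed tie convention of this sketch, since the rule at equality is not restated in this excerpt):

```python
def S(v):
    """Sign indicator: 1 for a nonnegative sample, 0 otherwise."""
    return 1 if v >= 0 else 0

def T(x):
    """Fraction of nonnegative observations; takes values in {0, 1/K, ..., 1}."""
    return sum(S(xi) for xi in x) / len(x)

def decide(x, gamma):
    """Return 1 (declare H1) when T(x) >= gamma, else 0 (declare H0)."""
    return 1 if T(x) >= gamma else 0
```

With $K = 2$, $\gamma = 1$ fires only when both samples are nonnegative, while $\gamma = 0.5$ fires when at least one is, matching the three piecewise values of $T(x)$ above.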
It is obvious that $T(x) > \gamma$ always holds when $\gamma < 0$ and $T(x) < \gamma$ always holds when $\gamma > 1$, which implies the decision is trivial if $\gamma < 0$ or $\gamma > 1$. In addition, the detection performance is the same for all $\gamma \in (0, 0.5]$ (respectively, for all $\gamma \in (0.5, 1]$). Therefore, suppose that the two alternative thresholds are $\gamma = 1$ and $\gamma = 0.5$, with corresponding decision functions denoted by $\phi_1$ and $\phi_2$, respectively. Let $n = (n_1, n_2)^T$; then we have
$$F_{i,\phi_1}(n) = F_{i,\phi}(n_1) F_{i,\phi}(n_2),$$
$$F_{i,\phi_2}(n) = 1 - (1 - F_{i,\phi}(n_1))(1 - F_{i,\phi}(n_2)),$$
where $i = 0, 1$, and $\phi(\cdot)$ is the decision function given in (26) with $\gamma = 0$. Based on the theoretical analysis in Section 4, $\Gamma_{\phi_i}(f_1) = \min(F_{0,\phi_i}(n) : F_{1,\phi_i}(n) = f_1)$ and $G_{\phi_i}(f_0) = \max(F_{1,\phi_i}(n) : F_{0,\phi_i}(n) = f_0)$. Furthermore, $\Gamma(f_1) = \min(\Gamma_{\phi_1}(f_1), \Gamma_{\phi_2}(f_1))$ and $G(f_0) = \max(G_{\phi_1}(f_0), G_{\phi_2}(f_0))$.
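The product/complement structure of these compositions can be cross-checked by simulation: $\phi_1$ declares $H_1$ only if both shifted samples are nonnegative, $\phi_2$ if at least one is. A Monte Carlo sketch under an assumed unit-variance Gaussian marginal (the Gaussian model and the shift values are illustrative, not the paper's noise distribution):

```python
import math
import random

def gauss_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def F_single(n, m):
    """Assumed single-sample activation probability P(x + n >= 0), x ~ N(m, 1)."""
    return gauss_cdf(m + n)

def F_phi1(n1, n2, m):
    """AND structure (gamma = 1): both shifted samples nonnegative."""
    return F_single(n1, m) * F_single(n2, m)

def F_phi2(n1, n2, m):
    """OR structure (gamma = 0.5): at least one shifted sample nonnegative."""
    return 1.0 - (1.0 - F_single(n1, m)) * (1.0 - F_single(n2, m))

# Monte Carlo cross-check of the product/complement compositions.
rng = random.Random(1)
m, n1, n2, trials = 0.4, 0.3, -0.2, 100_000
hits_and = hits_or = 0
for _ in range(trials):
    a = rng.gauss(m, 1.0) + n1 >= 0
    b = rng.gauss(m, 1.0) + n2 >= 0
    hits_and += a and b
    hits_or += a or b
est_and, est_or = hits_and / trials, hits_or / trials
```

The independence of the two samples is what lets the joint activation probabilities factor into the marginal $F_{i,\phi}$ terms.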
Through a series of analyses and calculations, it is true that $\Gamma_{\phi_1}(f_1) = \min(F_{0,\phi}^2(n_1), F_{0,\phi}(n_2))$, where $n_1$ and $n_2$ are determined by $F_{1,\phi}^2(n_1) = f_1$ and $F_{1,\phi}(n_2) = f_1$, respectively. Similarly, $\Gamma_{\phi_2}(f_1) = \min(1 - (1 - F_{0,\phi}(n_1))^2, 1 - (1 - F_{0,\phi}(n_2)))$, where $n_1$ and $n_2$ are determined by $1 - (1 - F_{1,\phi}(n_1))^2 = f_1$ and $1 - (1 - F_{1,\phi}(n_2)) = f_1$, respectively. Moreover, $\Gamma(f_1) = \min(F_{0,\phi}^2(n_1), 1 - (1 - F_{0,\phi}(n_2))^2)$, where $F_{1,\phi}^2(n_1) = 1 - (1 - F_{1,\phi}(n_2))^2 = f_1$.
On the other hand, $G_{\phi_1}(f_0) = \max(F_{1,\phi}^2(n_1), F_{1,\phi}(n_2))$, where $n_1$ and $n_2$ are determined by $F_{0,\phi}^2(n_1) = f_0$ and $F_{0,\phi}(n_2) = f_0$, respectively. Moreover, $G_{\phi_2}(f_0) = \max(1 - (1 - F_{1,\phi}(n_1))^2, 1 - (1 - F_{1,\phi}(n_2)))$, where $1 - (1 - F_{0,\phi}(n_1))^2 = f_0$ and $1 - (1 - F_{0,\phi}(n_2)) = f_0$. As a consequence, $G(f_0) = \max(F_{1,\phi}^2(n_1), 1 - (1 - F_{1,\phi}(n_2))^2)$, where $F_{0,\phi}^2(n_1) = 1 - (1 - F_{0,\phi}(n_2))^2 = f_0$.
The minimum achievable false-alarm probabilities for $\phi_1$ and $\phi_2$, i.e., $P_{FA,\phi_1}^y$ and $P_{FA,\phi_2}^y$, can be obtained, respectively, by utilizing the relationships between $\Gamma_{\phi_1}(f_1)$, $\Gamma_{\phi_2}(f_1)$, and $\Gamma(f_1)$ depicted in Figure 5. Figure 6, Figure 7 and Figure 8 then illustrate the relationship between $P_{FA}$ and $P_D$ under the two decision thresholds. As illustrated in Figure 6, for the threshold $\gamma = 1$, the false-alarm probability can be decreased by adding noise when $0.2994 \le P_D \le 0.7153$. Correspondingly, as shown in Figure 7, the false-alarm probability can be decreased by adding noise only when $0.5416 \le P_D \le 0.8867$ for $\gamma = 0.5$. The minimum false-alarm probability for a given $P_D$ without threshold randomization is $P_{FA,m}^y = \min(P_{FA,\phi_1}^y, P_{FA,\phi_2}^y)$, which is plotted in Figure 8 and labeled "NRD".
As illustrated in Figure 8, when randomization between decision thresholds is allowed, the noise modified false-alarm probability can be decreased further, compared with the case where no randomization between decision thresholds is allowed, for $0.5091 \le P_D \le 0.7862$. Actually, the minimum achievable noise modified false-alarm probability is obtained by a suitable randomization between two threshold and discrete vector pairs, i.e., $\{\gamma = 0.5, n_{11} = [3.75, 3.75]\}$ and $\{\gamma = 1, n_{12} = [2.75, 2.75]\}$, with probabilities $\zeta$ and $1 - \zeta$, respectively, such that
$$\zeta = \frac{0.7862 - P_D}{0.2771}.$$
Remarkably, the minimum false-alarm probability obtained in the "NRD" case is always superior to the original false-alarm probability for any $P_D$.
If randomization between different decision thresholds is not allowed, the detection probability can be increased by adding noise when $0.1133 \le P_{FA} \le 0.4281$ and $0.2501 \le P_{FA} \le 0.7006$ for $\gamma = 1$ and $\gamma = 0.5$, respectively. When randomization is allowed, for $0.2164 \le P_{FA} \le 0.4909$, the maximum achievable detection probability can be obtained by a randomization of the two pairs $\{\gamma = 0.5, n_{21} = [3.75, 3.75]\}$ and $\{\gamma = 1, n_{22} = [2.75, 2.75]\}$ with the corresponding weights $\lambda = (0.4909 - P_{FA})/0.2745$ and $1 - \lambda$.
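Both $\zeta$ and $\lambda$ are instances of the same convex-combination weight: given the two endpoint operating values of a randomized pair, pick the weight that lands on a target value in between. A small helper, with the endpoint numbers read from this example (the target values are illustrative):

```python
def convex_weight(target, v_low, v_high):
    """Return w such that w*v_low + (1 - w)*v_high == target,
    valid for v_low <= target <= v_high."""
    return (v_high - target) / (v_high - v_low)

# zeta for a target detection probability (endpoints 0.5091 and 0.7862,
# so the denominator is 0.7862 - 0.5091 = 0.2771):
zeta = convex_weight(0.65, 0.5091, 0.7862)
# lambda for a target false-alarm probability (endpoints 0.2164 and 0.4909,
# so the denominator is 0.4909 - 0.2164 = 0.2745):
lam = convex_weight(0.30, 0.2164, 0.4909)
```

Because the achievable $(P_{FA}, P_D)$ region is convex, any target on the segment between the two endpoint pairs is attainable by time-sharing the two detector and noise pairs with these weights.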
The false-alarm and detection probabilities versus $\sigma$ for the original detector and cases (i), (ii), and (iii) with decision thresholds $\gamma = 1$ and $\gamma = 0.5$ are compared in Figure 9 and Figure 10, respectively. As shown in Figure 9a,b, the original $P_{FA}$ stays at 0.25 for any $\sigma$ and the original $P_D$ is between 0.25 and 0.3371 when $\gamma = 1$. As plotted in Figure 10a,b, the original $P_{FA}$ stays at 0.75 and the original $P_D$ is between 0.75 and 0.8242 when $\gamma = 0.5$.
According to the analyses above, the original $P_{FA}$ and $P_D$ obtained when $\gamma = 1$ and $\gamma = 0.5$ lie in the intervals where noise enhanced detection can occur. When randomization between the thresholds is not allowed, the theoretically optimal noise enhanced solutions for both cases (i) and (ii) are to choose a suitable threshold and add the corresponding optimal noise to the observation. After some comparisons, under the constraints $P_{FA}^y \le P_{FA}^x$ and $P_D^y \ge P_D^x$, the suitable threshold in this example is just that of the original detector. Naturally, case (iii) can be achieved by choosing the original detector and adding the noise obtained by a randomization between the two optimal additive noises of cases (i) and (ii). The details are plotted in Figure 9 and Figure 10 for the original thresholds $\gamma = 1$ and $0.5$, respectively.
From Figure 9 and Figure 10, it is clear that the smaller the $\sigma$, the smaller the false-alarm probability and the larger the detection probability. When $\sigma$ is close to 0, the false-alarm probability obtained in case (i) is close to 0 and the detection probability obtained in case (ii) is close to 1. As shown in Figure 9, compared to the original detector, the noise enhanced false-alarm probability and detection probability obtained in case (iii) are decreased by 0.125 and increased by 0.35, respectively, when $\sigma$ is close to 0, where $\gamma = 1$ and $\eta = 0.5$. As shown in Figure 10, compared to the original detector, the noise enhanced false-alarm probability obtained in case (iii) is decreased by 0.375 and the corresponding detection probability is increased by 0.125 when $\gamma = 0.5$ and $\eta = 0.5$. As $\sigma$ increases, the improvements in false-alarm and detection probabilities decrease gradually to zero, as shown in Figure 9 and Figure 10. When $\sigma > 2.81$, the pdf $p_\omega(\omega)$ gradually becomes unimodal, so the detection performance cannot be enhanced by adding any noise.
After some calculations, we know that under the constraints $P_{FA}^y \le P_{FA}^x$ and $P_D^y \ge P_D^x$, the false-alarm probability cannot be decreased further by allowing randomization between the two thresholds, compared to the non-randomization case, when the original threshold is $\gamma = 1$. That is, even if randomization is allowed, the minimum false-alarm probability in case (i) is obtained by choosing the threshold $\gamma = 1$ and adding the corresponding optimal additive noise, so the achievable minimum false-alarm probability in case (i) is the same as that plotted in Figure 9. On the contrary, the detection probability obtained under the same constraints when randomization between different thresholds exists is greater than that obtained in the non-randomization case for $\sigma < 2.3$, as shown in Figure 11. Based on the analysis in Section 3.2, the maximum detection probability in case (ii) can be achieved by a suitable randomization of the two decision threshold and noise pairs $\{\gamma = 0.5, n_{21} = [n_{21}, n_{21}]\}$ and $\{\gamma = 1, n_{22} = [n_{22}, n_{22}]\}$ with probabilities $\lambda$ and $1 - \lambda$, respectively, where $\lambda = (F_{0,\phi_1}(n_{22}) - F_{0,\phi_1}(0))/(F_{0,\phi_1}(n_{22}) - F_{0,\phi_2}(n_{21}))$. For instance, $n_{21} = [4.25, 4.25]$, $n_{22} = [2.15, 2.15]$, and $\lambda = 0.8966$ when $\sigma = 1.8$. In addition, Figure 11 also plots the $P_D^x$ under the likelihood ratio test (LRT) based on the original observation $x$. It is obvious that the $P_D$ obtained under the LRT is superior to that obtained in case (ii) for each $\sigma$; although the performance of the LRT is much better than the original and noise enhanced decision solutions, its implementation is much more complicated.
Naturally, case (iii) can also be achieved by randomization between the noise enhanced solution for case (i) and the new solution for case (ii) with probabilities $\eta$ and $1 - \eta$, respectively. Figure 12 compares the false-alarm and detection probabilities obtained by the original detector, the LRT, and case (iii) with and without randomization, where $\eta = 0.5$ and the original threshold is $\gamma = 1$. As shown in Figure 12, compared to the non-randomization case, the detection probability obtained in case (iii) is further improved for $\sigma < 2.3$ by allowing randomization between the two thresholds, while the false-alarm probability cannot be further decreased. Moreover, the $P_{FA}$ of the LRT increases as $\sigma$ increases and becomes greater than that obtained in case (iii) when $\sigma > 0.66$ and greater than that of the original detector when $\sigma > 1$.
Figure 13 illustrates the Bayes risks of the original detector, the LRT, and the noise enhanced decision solutions with and without randomization between detectors for different $\eta$, where $\eta = 1$, $\eta = 0$, and $\eta = 0.5$ denote case (i), case (ii), and case (iii), respectively. As plotted in Figure 13, the Bayes risks obtained in cases (i), (ii), and (iii) are smaller than that of the original detector, and the Bayes risk of the LRT is the smallest. Furthermore, the Bayes risk obtained in the randomization case is smaller than that obtained in the non-randomization case.
As shown in Figure 14, when the original threshold is $\gamma = 0.5$, under the constraints $P_{FA}^y \le P_{FA}^x$ and $P_D^y \ge P_D^x$, the false-alarm probability can be greatly decreased by allowing randomization between different thresholds, compared to the non-randomization case, when $\sigma < 1.1$. In addition, the LRT performs best in terms of $P_{FA}$. Accordingly, the minimum false-alarm probability in case (i) is obtained by a suitable randomization of $\{\gamma = 0.5, n_{21} = [n_{21}, n_{21}]\}$ and $\{\gamma = 1, n_{22} = [n_{22}, n_{22}]\}$ with probabilities $\zeta$ and $1 - \zeta$, respectively, where $\zeta = (F_{1,\phi_1}(n_{22}) - F_{1,\phi_1}(0))/(F_{1,\phi_1}(n_{22}) - F_{1,\phi_2}(n_{21}))$. Through some simple analyses, under the same constraints, the detection probability obtained when randomization between different thresholds exists cannot be greater than that obtained in the non-randomization case. Thus, the maximum detection probability in case (ii) is the same as that illustrated in Figure 10, which is achieved by choosing the threshold $\gamma = 0.5$ and adding the corresponding optimal additive noise to the observation.
Case (iii) can also be achieved by randomization between the noise enhanced solutions for cases (i) and (ii) with probabilities $\eta$ and $1 - \eta$, respectively. As shown in Figure 15, compared to the non-randomization case, the false-alarm probability obtained in case (iii) is greatly improved by allowing randomization between the two thresholds, while the detection probability cannot be increased, when the original threshold is $\gamma = 0.5$. Although the $P_{FA}$ of the LRT is always superior to that obtained in the other cases, the $P_D$ of the LRT falls below that obtained by the original detector and case (iii) when $\sigma$ increases to a certain extent.
Figure 16 illustrates the Bayes risks of the original detector, the LRT, and the noise enhanced decision solutions for different $\eta$; again, $\eta = 1$, $\eta = 0$, and $\eta = 0.5$ denote case (i), case (ii), and case (iii), respectively. As plotted in Figure 16, the Bayes risks obtained in the three cases are smaller than that of the original detector for $\sigma < 2.81$. When no randomization exists between the thresholds, the smallest of the three is achieved in case (i) if $\sigma < 0.7$ and in case (ii) if $0.7 < \sigma < 2.81$; when randomization between the thresholds is allowed, it is achieved in case (i) if $\sigma < 1.1$ and in case (ii) if $1.1 < \sigma < 2.81$. Obviously, the Bayes risk obtained in the randomization case is not greater than that obtained in the non-randomization case. In addition, the LRT achieves the minimum Bayes risk when $\sigma < 1.83$ and the maximum Bayes risk when $\sigma > 2.12$.
As analyzed in Section 5.1, if the structure of a detector does not change with the decision threshold, the optimal noise enhanced detection performances for different thresholds are the same and can be achieved by adding the corresponding optimal noise; in such a case, no improvement can be obtained by allowing randomization between different decision thresholds. On the other hand, if different thresholds correspond to different structures, as shown in (39) and (40), randomization between different decision thresholds can introduce new noise enhanced solutions that further improve the detection performance under certain conditions.

6. Conclusions

In this study, a noise enhanced binary hypothesis-testing problem for a variable detector was investigated. Specifically, a noise enhanced model that can increase the detection probability and decrease the false-alarm probability simultaneously was formulated for a variable detector. In order to solve the model, three alternative cases were considered, i.e., cases (i), (ii), and (iii). First, the minimization of the false-alarm probability without decreasing the detection probability was achieved in case (i). For the case where randomization between different detectors is allowed, the optimal noise enhanced solution of case (i) was proven to be a randomization of at most two detector and additive noise pairs. In particular, under certain conditions no improvement can be introduced by allowing randomization between different detectors. Furthermore, the maximization of the noise enhanced detection probability without increasing the false-alarm probability was considered in case (ii), and the corresponding optimal noise enhanced solution was explored regardless of whether randomization between detectors is allowed. In addition, case (iii) was achieved by randomization between the two optimal noise enhanced solutions of cases (i) and (ii) with the corresponding weights. Remarkably, numerous solutions that increase the detection probability and decrease the false-alarm probability simultaneously are provided by adjusting the weights.
Moreover, the minimization of the Bayes risk based on the noise enhanced model was discussed for the variable detector. The Bayes risk obtained in case (iii) lies between those obtained in cases (i) and (ii), and the noise enhanced Bayes risks obtained in all three cases are smaller than the original one without additive noise. It is worthwhile to investigate cases (i) and (iii), especially when the result of case (ii) is not ideal or does not even exist. As shown in Figure 16, the Bayes risks obtained in cases (i) and (iii) are smaller than that obtained in case (ii) under certain conditions. Studies on cases (i) and (iii) thus supplement case (ii) and provide a more general noise enhanced solution for signal detection. Finally, the theoretical analyses were verified by the simulation results.
As future work, the noise enhanced detection problem can be studied under the maximum likelihood (ML) or the maximum a posteriori (MAP) criterion. The theoretical results can also be extended to the Rao test, which is a simpler alternative to the generalized likelihood ratio test (GLRT) [29], and applied to stable systems, spectrum sensing in cognitive radio systems, and decentralized detection problems [30,31,32]. In addition, a generalized noise enhanced parameter estimation problem based on the minimum mean square error (MMSE) criterion is another issue worth studying.

Author Contributions

Conceptualization, T.Y. and S.L.; Methodology, W.L.; Software, P.W.; Validation, T.Y. and S.L.; Formal Analysis, W.L.; Investigation, T.Y.; Resources, S.L.; Data Curation, J.G.; Writing-Original Draft Preparation, P.W.; Writing-Review & Editing, T.Y. and S.L. All authors have read and approved the final manuscript.

Funding

This research was partly supported by the National Natural Science Foundation of China (Grant No. 61701055), graduate scientific research and innovation foundation of Chongqing (Grant No. CYB17041), and the Basic and Advanced Research Project in Chongqing (Grant No. cstc2016jcyjA0134, No. cstc2016jcyjA0043).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Theorem 1.
Due to A, $U$ is a two-dimensional linear space. Further, suppose that $B$ is the convex hull of $U$. It can be verified that $B$ is also the set of all possible pairs $(P_{FA}^y, P_D^y)$. Generally, the optimal pairs $(P_{FA}^y, P_D^y)$ for cases (i) and (ii) can only lie on the boundary of $B$. Furthermore, according to Carathéodory's theorem, an arbitrary point on this boundary can be expressed as a convex combination of no more than two elements of $U$. Consequently, cases (i) and (ii) can be achieved by a randomization of at most two detector and discrete vector pairs. ☐

Appendix B

Proof of Theorem 2.
If $f_1^* \ge P_D^x$, there must exist a detector $\phi_1^o$ and a constant vector $n_1^o$ satisfying $P_{FA,opt}^y = F_{0,\phi_1^o}(n_1^o) = F_0^m \le P_{FA}^y$ and $P_D^y = F_{1,\phi_1^o}(n_1^o) = f_1^* \ge P_D^x$. Thus, the minimum achievable $P_{FA}^y$ is obtained by choosing the detector $\phi_1^o$ and adding the discrete vector $n_1^o$ to $x$. For the case of $f_1^* < P_D^x$, directly according to Theorem 1, the optimal noise enhanced solution that minimizes $P_{FA}^y$ is a randomization of two detector and discrete vector pairs, i.e., $[\phi_{11}, n_{11}]$ and $[\phi_{12}, n_{12}]$, with probabilities $\zeta$ and $1 - \zeta$. In addition, proof by contradiction can be used here to show that $P_{D,opt}^y = P_D^x$. First, suppose the minimum false-alarm probability $P_{FA,opt}^y$ is obtained when $P_{D,opt}^y = d > P_D^x$ with a noise enhanced solution $\{E[\phi, n]\}_{opt}$. Then consider another valid solution that is a randomization of $\{E[\phi, n]\}_{opt}$ and $[\phi_1^o, n_1^o]$ with probabilities $1 - t$ and $t$, respectively, where $t = (P_D^x - d)/(f_1^* - d)$. The new noise modified detection and false-alarm probabilities can then be calculated as
$$P_D^y = \frac{P_D^x - d}{f_1^* - d}\, f_1^* + \frac{f_1^* - P_D^x}{f_1^* - d}\, d = P_D^x,$$
$$P_{FA}^y = \frac{P_D^x - d}{f_1^* - d}\, F_0^m + \frac{f_1^* - P_D^x}{f_1^* - d}\, P_{FA,opt}^y < P_{FA,opt}^y,$$
where the last inequality holds because $F_0^m < P_{FA,opt}^y$. Therefore, $P_{D,opt}^y = P_D^x$, since the result in (A2) contradicts the definition of $P_{FA,opt}^y$. ☐
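The identities (A1) and (A2) can be sanity-checked numerically. All values below are assumptions for the check, chosen so that $P_D^x$ lies strictly between $d$ and $f_1^*$ and $F_0^m < P_{FA,opt}^y$:

```python
# Illustrative values (assumptions, not from the paper).
f1_star, PDx, d = 0.9, 0.7, 0.5
F0m, PFA_opt = 0.2, 0.4

t = (PDx - d) / (f1_star - d)        # randomization weight in the proof
PDy = t * f1_star + (1 - t) * d      # (A1): equals PDx exactly
PFAy = t * F0m + (1 - t) * PFA_opt   # (A2): strictly below PFA_opt
```

The algebra is independent of the particular numbers: the weight $t$ is exactly the one that makes the detection probabilities average back to $P_D^x$, and any $t \in (0, 1)$ then pulls the false-alarm probability strictly below $P_{FA,opt}^y$.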

References

1. Benzi, R.; Sutera, A.; Vulpiani, A. The mechanism of stochastic resonance. J. Phys. A Math. Gen. 1981, 14, 453–457.
2. Liu, S.; Yang, T.; Zhang, X.; Hu, X.; Xu, L. Noise enhanced binary hypothesis-testing in a new framework. Digit. Signal Process. 2015, 41, 22–31.
3. Lee, I.; Liu, X.; Zhou, C.; Kosko, B. Noise-enhanced detection of subthreshold signals with carbon nanotubes. IEEE Trans. Nanotechnol. 2006, 5, 613–627.
4. Wannamaker, R.A.; Lipshitz, S.P.; Vanderkooy, J. Stochastic resonance as dithering. Phys. Rev. E 2000, 61, 233–236.
5. Dai, D.; He, Q. Multiscale noise tuning stochastic resonance enhances weak signal detection in a circuitry system. Meas. Sci. Technol. 2012, 23, 115001.
6. Kitajo, K.; Nozaki, D.; Ward, L.M.; Yamamoto, Y. Behavioral stochastic resonance within the human brain. Phys. Rev. Lett. 2003, 90, 218103.
7. Moss, F.; Ward, L.M.; Sannita, W.G. Stochastic resonance and sensory information processing: A tutorial and review of applications. Clin. Neurophys. 2004, 115, 267–281.
8. Patel, A.; Kosko, B. Stochastic resonance in continuous and spiking neuron models with Levy noise. IEEE Trans. Neural Netw. 2008, 19, 1993–2008.
9. McDonnell, M.D.; Stocks, N.G.; Abbott, D. Optimal stimulus and noise distributions for information transmission via suprathreshold stochastic resonance. Phys. Rev. E 2007, 75, 061105.
10. Gagrani, M.; Sharma, P.; Iyengar, S.; Nadendla, V.S.; Vempaty, A.; Chen, H.; Varshney, P.K. On Noise-Enhanced Distributed Inference in the Presence of Byzantines. In Proceedings of the Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 28–30 September 2011.
11. Gingl, Z.; Makra, P.; Vajtai, R. High signal-to-noise ratio gain by stochastic resonance in a double well. Fluctuat. Noise Lett. 2001, 1, L181–L188.
12. Makra, P.; Gingl, Z. Signal-to-noise ratio gain in non-dynamical and dynamical bistable stochastic resonators. Fluctuat. Noise Lett. 2002, 2, L145–L153.
13. Makra, P.; Gingl, Z.; Fulei, T. Signal-to-noise ratio gain in stochastic resonators driven by coloured noises. Phys. Lett. A 2003, 317, 228–232.
14. Stocks, N.G. Suprathreshold stochastic resonance in multilevel threshold systems. Phys. Rev. Lett. 2000, 84, 2310–2313.
15. Godivier, X.; Chapeau-Blondeau, F. Stochastic resonance in the information capacity of a nonlinear dynamic system. Int. J. Bifurc. Chaos 1998, 8, 581–589.
16. Kosko, B.; Mitaim, S. Stochastic resonance in noisy threshold neurons. Neural Netw. 2003, 16, 755–761.
17. Kosko, B.; Mitaim, S. Robust stochastic resonance for simple threshold neurons. Phys. Rev. E 2004, 70, 031911.
18. Mitaim, S.; Kosko, B. Adaptive stochastic resonance in noisy neurons based on mutual information. IEEE Trans. Neural Netw. 2004, 15, 1526–1540.
19. Bayram, S.; Gezici, S.; Poor, H.V. Noise enhanced hypothesis-testing in the restricted Bayesian framework. IEEE Trans. Signal Process. 2010, 58, 3972–3989.
20. Bayram, S.; Gezici, S. Noise-enhanced M-ary hypothesis-testing in the minimax framework. In Proceedings of the 3rd International Conference on Signal Processing and Communication Systems, Omaha, NE, USA, 28–30 September 2009; pp. 1–6.
21. Bayram, S.; Gezici, S. Noise enhanced M-ary composite hypothesis-testing in the presence of partial prior information. IEEE Trans. Signal Process. 2011, 59, 1292–1297.
22. Kay, S.M.; Michels, J.H.; Chen, H.; Varshney, P.K. Reducing probability of decision error using stochastic resonance. IEEE Signal Process. Lett. 2006, 13, 695–698.
23. Kay, S. Can detectability be improved by adding noise? IEEE Signal Process. Lett. 2000, 7, 8–10.
24. Chen, H.; Varshney, P.K.; Kay, S.M.; Michels, J.H. Theory of the stochastic resonance effect in signal detection: Part I—Fixed detectors. IEEE Trans. Signal Process. 2007, 55, 3172–3184.
25. Chen, H.; Varshney, P.K. Theory of the stochastic resonance effect in signal detection: Part II—Variable detectors. IEEE Trans. Signal Process. 2008, 56, 5031–5041.
26. Chen, H.; Varshney, P.K.; Kay, S.; Michels, J.H. Noise enhanced nonparametric detection. IEEE Trans. Inf. Theory 2009, 55, 499–506.
27. Patel, A.; Kosko, B. Optimal noise benefits in Neyman–Pearson and inequality constrained signal detection. IEEE Trans. Signal Process. 2009, 57, 1655–1669.
28. Bayram, S.; Gezici, S. Stochastic resonance in binary composite hypothesis-testing problems in the Neyman–Pearson framework. Digit. Signal Process. 2012, 22, 391–406.
29. Ciuonzo, D.; Papa, G.; Romano, G.; Rossi, P.S.; Willett, P. One-bit decentralized detection with a Rao test for multisensor fusion. IEEE Signal Process. Lett. 2013, 20, 861–864.
30. Ciuonzo, D.; Rossi, P.S.; Willett, P. Generalized Rao test for decentralized detection of an uncooperative target. IEEE Signal Process. Lett. 2017, 24, 678–682.
31. Wu, J.; Wu, C.; Wang, T.; Lee, T. Channel-aware decision fusion with unknown local sensor detection probability. IEEE Trans. Signal Process. 2010, 58, 1457–1463.
32. Ciuonzo, D.; Rossi, P.S. Decision fusion with unknown sensor detection probability. IEEE Signal Process. Lett. 2014, 21, 208–212.
Figure 1. $P_{FA}$ and $P_D$ as functions of $\gamma$ for the original detector and the three cases, where $\mu = 3$, $A = 1$, and $\sigma = 1$.
Figure 2. Receiver operating characteristic (ROC) curves for the original detector and the three cases, where $\mu = 3$, $A = 1$, and $\sigma = 1$.
Figure 3. Bayes risks of the noise enhanced and original detectors for $p(H_0) = 0.45$. "NED" denotes the noise enhanced detector.
Figure 4. Bayes risks of the noise enhanced and the original detectors for $p(H_0) = 0.55$.
Figure 5. The relationships between $\Gamma_{\phi_1}(f_1)$, $\Gamma_{\phi_2}(f_1)$, and $\Gamma(f_1)$.
Figure 6. $\Gamma_{\phi_1}(f_1)$ and the achievable minimum $P_{FA}$ obtained when $\gamma = 1$.
Figure 7. $\Gamma_{\phi_2}(f_1)$ and the achievable minimum $P_{FA}$ obtained when $\gamma = 0.5$.
Figure 8. Comparison of the minimum $P_{FA}$ achieved by the original detector and the noise enhanced decisions for the "NRD" and "RD" cases, which denote non-randomization and randomization between the thresholds, respectively.
Figure 9. $P_{FA}$ and $P_D$ as functions of $\sigma$ for the original detector and the three cases without randomization between thresholds when $\gamma = 1$, for $\mu = 3$, $A = 1$, and $\eta = 0.5$.
Figure 10. $P_{FA}$ and $P_D$ as functions of $\sigma$ for the original detector and the three cases without randomization between thresholds when $\gamma = 0.5$, for $\mu = 3$, $A = 1$, and $\eta = 0.5$.
Figure 11. Comparison of $P_D$ as a function of $\sigma$ for the original detector, the LRT, and case (ii) with or without randomization between thresholds, when $\mu = 3$, $A = 1$, and the original threshold $\gamma = 1$. "LRT" denotes the $P_D^x$ obtained under the likelihood ratio test (LRT) based on the original observation $x$.
Figure 12. Comparison of $P_{FA}$ and $P_D$ as functions of $\sigma$ for the original detector, the LRT, and case (iii) with or without randomization between thresholds, when $\mu = 3$, $A = 1$, and the original threshold $\gamma = 1$.
Figure 13. Bayes risks of the original detector, the LRT, and the noise enhanced decision solutions for different $\eta$ when $\mu = 3$, $A = 1$, and the original threshold $\gamma = 1$.
Figure 14. Comparison of $P_{FA}$ as a function of $\sigma$ for the original detector, the LRT, and case (i) with or without randomization between thresholds, when $\mu = 3$, $A = 1$, and the original threshold $\gamma = 0.5$.
Figure 15. Comparison of $P_{FA}$ and $P_D$ as functions of $\sigma$ for the original detector, the LRT, and case (iii) with or without randomization between thresholds, when $\mu = 3$, $A = 1$, and the original threshold $\gamma = 0.5$.
Figure 16. Bayes risks of the original detector, the LRT, and the noise enhanced decision solutions for different $\eta$ when $\mu = 3$, $A = 1$, and the original threshold $\gamma = 0.5$.
Table 1. The probability of each component in the suitable noise enhanced solution for case (iii).

$[\phi, n]$:    $[\phi_{11}, n_{11}]$    $[\phi_{12}, n_{12}]$    $[\phi_{21}, n_{21}]$    $[\phi_{22}, n_{22}]$
Probability:    $\eta\zeta$              $\eta(1-\zeta)$          $(1-\eta)\lambda$        $(1-\eta)(1-\lambda)$

Yang, T.; Liu, S.; Liu, W.; Guo, J.; Wang, P. Noise Enhanced Signal Detection of Variable Detectors under Certain Constraints. Entropy 2018, 20, 470. https://doi.org/10.3390/e20060470
