Article

An SAR Image Automatic Target Recognition Method Based on the Scattering Parameter Gaussian Mixture Model

National Key Laboratory of Radar Signal Processing, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(15), 3800; https://doi.org/10.3390/rs15153800
Submission received: 23 May 2023 / Revised: 26 July 2023 / Accepted: 28 July 2023 / Published: 30 July 2023

Abstract
General synthetic aperture radar (SAR) image automatic target recognition (ATR) methods perform well under standard operation conditions (SOCs). However, they are not effective in extended operation conditions (EOCs). To improve the robustness of the ATR system under various EOCs, an ATR method for SAR images based on the scattering parameter Gaussian mixture model (GMM) is proposed in this paper. First, an improved active contour model (ACM) is used for target–background segmentation, which is more robust against noise than the constant false alarm rate (CFAR) method. Then, as the extracted attributed scattering centers (ASCs) are sensitive to noise and resolution, the GMM is constructed using the extracted ASC set. Next, the weighted Gaussian quadratic form distance (WGQFD) is adopted to measure the similarity of GMMs for the recognition task, thereby avoiding the mismatches caused by false alarms, missed alarms, and the varying number of scattering centers. Moreover, adaptive aspect–frame division is employed to reduce the number of templates and improve recognition efficiency. Finally, based on the public measured MSTAR dataset, different EOCs are constructed under noise, resolution change, model change, depression angle change, and occlusion of different proportions. The experimental results under these EOCs demonstrate that the proposed method exhibits excellent robustness while maintaining low computation time.

1. Introduction

Synthetic aperture radar (SAR) [1] offers continuous monitoring of local scenes, and is not affected by external environmental factors such as light. Over the past 60 years, SAR technology has matured and found widespread applications in both civil and military fields [2,3,4,5]. In civil applications, SAR is utilized for geological surveys, forest and crop censuses, and emergency rescues. In the military domain, the reliable interpretation of massive SAR images to extract valuable intelligence has become crucial, leading to the emergence of automatic target recognition (ATR) technology [6]. ATR technology has evolved from theoretical research to systematic development and application worldwide. Typical SAR ATR systems include semi-automatic IMINT processing (SAIP) and the moving and stationary target acquisition and recognition (MSTAR) program [7,8,9,10]. These systems describe target characteristics based on templates and models, respectively. The release of the MSTAR database has provided ample experimental material for researchers and triggered a surge in research on SAR ATR technology.
ATR aims to achieve the automatic detection, discrimination, and recognition of potential regions of interest (ROIs) and to obtain the category and number of objects [11]. This paper mainly focuses on the recognition stage of the SAR target recognition process, that is, judging the target category. In realistic scenes, many uncertain operation conditions (OCs) can be present in an ROI; these can be divided into standard operation conditions (SOCs) and extended operation conditions (EOCs) [12]. The former refers to OCs included in the target feature library; those not included are called EOCs. Limited by the size and accuracy of the target feature library, the measured samples to be recognized are mostly derived from EOCs, such as strong noise, partial occlusion, depression angle differences, etc. [13], making the ATR system unstable.
To enhance the robustness of the ATR system under various EOCs, it is necessary to extract and select features carefully. SAR image ATR methods based on traditional features in recent studies can be roughly divided into three categories. The first category is based on geometry features, such as edge features [14] and region moment features [15]. These can intuitively describe a target; however, due to the existence of speckle noise, it is difficult to accurately extract these features. The second category is based on linear or nonlinear feature projection. For linear projection, principal component analysis (PCA), linear discriminant analysis (LDA) [16], and non-negative matrix factorization [17] are typical representatives. Nonlinear projection is based on kernel methods, e.g., kernel principal component analysis (KPCA) [18], and nonlinear manifold learning methods, e.g., local discriminant embedding (LDE) [19]. While these features are convenient to extract, they are unable to represent the local characteristics of the target, making it difficult to cope with occluded targets. The third category is based on scattering center (SC) features [20], which can reflect the target's global and local electromagnetic scattering characteristics. After decades of development, SC models have increasingly enhanced the ability to describe the target's characteristics [8,20,21,22,23,24,25]. Classical SC models include the ideal point SC model, the GTD model, the attributed scattering center (ASC) model, etc. [26,27,28]. Among these, the ASC model proposed by Potter and Moses has been successfully applied in SAR image feature extraction and ATR [29]. Thus, the ASC feature is a good candidate for the SAR image ATR task under EOCs.
The extracted ASC list is usually unordered, and may contain false alarms and missed alarms. Therefore, it is almost always unwise to use it directly to train classifiers such as support vector machines (SVMs) [30] and neural networks [15]. Instead, researchers typically choose to determine target categories by comparing the differences between ASC sets. Thus, a critical problem is to find an efficient and robust way to measure and assess the similarity between two sets. Current strategies for tackling this dilemma can be broadly divided into two categories. The first is to evaluate the similarity via a one-to-one correspondence between the two sets. In [8], the authors adopted this correspondence and evaluated the similarity using the Bayesian posterior probability. In [20], the authors constructed the Karhunen–Loeve (KL) decomposition and adopted the result-matching method to match two ASC sets. Tian et al. [21] reconstructed ASC features by using the World View Vector (WVV), then matched the feature set with the template through the weighted bipartite graph model (WBGM) to identify the target. Dungan et al. [22] carried out the SAR image recognition task through one-to-one matching using the Hausdorff distance. However, when noise pollution or occlusion is present in the image, the Hausdorff distance can easily cause false matching. Ding et al. [23] adopted a one-to-one ASC matching method based on the Hungarian algorithm, and improved the recognition performance using information on false alarms and missed alarms. Although these methods have achieved good results, they remain too complicated and cumbersome. The second strategy is to directly evaluate the difference between the two sets as wholes, thereby avoiding complex one-to-one matching. Such methods operate on the ASC point set directly, and good results can be achieved only when the number of point set elements is fixed and consistent; when issues with resolution, noise, etc., are encountered, these methods display certain drawbacks.
Deep features extracted by deep learning methods have been increasingly used for SAR ATR in recent years. Chen et al. [31] first applied convolutional neural networks (CNNs) to the SAR image target recognition field, obtaining excellent recognition accuracy. Li et al. [32] proposed a multiscale CNN based on component analysis for SAR ATR and improved the recognition accuracy. Furthermore, in our previous work we proposed a combined CNN and support vector machine (SVM) method to improve the robustness of the ATR system under limited data [33]. Guo et al. [34] proposed a target recognition method based on an SAR capsule network, which combines a traditional CNN with a capsule network. However, the capsule network used in this method is complex in design, leading to low generalization and a large number of parameters, making training difficult. Although they introduced vectorized fully-connected operations and variable folding crossover mechanisms to improve accuracy and robustness, these further increase the computational complexity and training time. Moreover, parameter selection for these mechanisms poses challenges and may affect network performance. Although these deep learning-based methods can achieve excellent recognition results, they do not cope well with the EOC problem. Therefore, scholars have combined neural networks with ASCs to improve the robustness of the recognition system. Feng et al. [35] adopted partial convolution and an improved bidirectional convolution recurrent network to extract local features of the target through a partial model based on ASCs. However, they targeted only the single EOC of partial occlusion, and did not extend their method to other EOCs. In [36], the authors proposed a target part attention network (PAN) based on the attributed scattering center (ASC) model, combining electromagnetic properties with a deep learning framework. However, this approach is complex in design and requires multiple CNN models, each of which needs to be trained well enough to extract useful features. Although deep learning approaches perform relatively well, in general they are black-box models with poor interpretability. Moreover, they tend to have a large number of adjustable parameters, requiring a large amount of data to complete training. However, it is impractical to obtain large amounts of labeled SAR image data, especially in the context of military applications.
Therefore, it is essential to explore a more suitable SAR image ATR method under EOCs. Taking into account the advantages and disadvantages of the above-mentioned methods, in this paper we propose an SAR image ATR method based on SC-GMM to accommodate different EOCs. The main contributions of this paper are as follows:
(1)
A robust SAR image ATR method is proposed that can adapt to different EOCs.
(2)
The method utilizes the Gaussian probability density function (PDF) to describe the statistical characteristics of the position and scattering coefficients of ASCs, ensuring resilience against noise and resolution. The PDFs of multiple ASC parameters are integrated using a Gaussian mixture model (GMM) to effectively represent the statistical characteristics of the SC sets.
(3)
To enhance calculation efficiency and robustness against noise and outliers, the Gaussian quadratic form distance (GQFD) is modified to a weighted GQFD (WGQFD). The WGQFD assigns a higher weight to position, facilitating similarity evaluation between GMMs and completing the recognition task.
(4)
To reduce the size of the feature template library and improve recognition efficiency, this paper proposes an adaptive aspect–frame division algorithm. While it is ideal to consider the orientation sensitivity of SAR images in the template library, including SC information for all attitude angles, this is impractical. Therefore, in this paper we divide the GMM into multiple aspect frames, achieving both high recognition accuracy and improved efficiency.
The remainder of the paper is organized as follows. Section 2 introduces related works on ASCs. Section 3 introduces the proposed ATR method in detail. In Section 4, experiments under EOCs are presented to verify the effectiveness and robustness of the proposed method. Finally, Section 5 provides the conclusion of the paper.

2. Related Works

In general, the electromagnetic scattering characteristics of radar targets in the high-frequency region can be approximated as the superposition of several local phenomena. These local phenomena are known as SCs [13]. As a parametric model, the ASC model [29] describes the electromagnetic scattering characteristics of complex targets in the high-frequency region, and is based on physical optics and the geometrical theory of diffraction. The specific ASC model can be expressed as follows:
$$E(f,\varphi;\Theta)=\sum_{i=1}^{p}E_i(f,\varphi;\theta_i)\tag{1}$$
In Equation (1), $E(f,\varphi;\Theta)$ represents the overall backscattering at frequency $f$ and azimuth $\varphi$; $p$ is the model order, namely the number of ASCs of the target; and $\Theta=\{\theta_i\},\ i=1,2,\dots,p$. The formula for calculating a single ASC is as follows:
$$E_i(f,\varphi;\theta_i)=A_i\cdot\left(j\frac{f}{f_c}\right)^{\alpha_i}\cdot\exp\!\left(-j\frac{4\pi f}{c}\left(x_i\cos\varphi+y_i\sin\varphi\right)\right)\cdot\operatorname{sinc}\!\left(\frac{2\pi f}{c}L_i\sin\left(\varphi-\bar{\varphi}_i\right)\right)\cdot\exp\!\left(-2\pi f\gamma_i\sin\varphi\right)\tag{2}$$
where $f_c$ represents the center frequency, $c$ is the speed of light, $\theta_i=[A_i,\alpha_i,x_i,y_i,L_i,\bar{\varphi}_i,\gamma_i]$ represents the ASC parameter set, $A_i$ represents the amplitude, $\alpha_i$ represents the frequency dependence factor, a discrete variable with values $[-1,-0.5,0,0.5,1]$, $(x_i,y_i)$ represents the physical position of the SC in the scene, $L_i$ and $\bar{\varphi}_i$ represent the length and orientation angle of a distributed SC, respectively, and $\gamma_i$ represents the dependence factor of the SC with respect to $\varphi$. When $L_i=\bar{\varphi}_i=0$ the SC is localized, while when $\gamma_i=0$ it is distributed.
The ASC parameters contain rich physical meaning and are closely related to the local characteristics of the target. Different combinations of α i and L i represent different geometric scattering types [37]. Therefore, the ASC can sense changes in the local physical structure of the target. The ASC model provides good physical reasoning ability, making it a feasible candidate to improve recognition performance under EOCs.
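For concreteness, the following Python sketch (ours, not the authors') evaluates Equations (1) and (2) on a frequency–azimuth grid; the center frequency, bandwidth, and ASC parameter values below are illustrative assumptions.

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def asc_response(f, phi, A, alpha, x, y, L=0.0, phi_bar=0.0, gamma=0.0, fc=9.6e9):
    """Backscattered field of one ASC, Equation (2).
    f: frequency grid (Hz), phi: azimuth grid (rad); both broadcastable arrays."""
    term_freq = (1j * f / fc) ** alpha                     # frequency dependence
    term_pos = np.exp(-1j * 4 * np.pi * f / C
                      * (x * np.cos(phi) + y * np.sin(phi)))
    # np.sinc is sin(pi*u)/(pi*u), so passing 2*f*L*sin(.)/c yields the
    # unnormalized sinc(2*pi*f*L*sin(.)/c) of Equation (2)
    term_len = np.sinc(2 * f / C * L * np.sin(phi - phi_bar))
    term_asp = np.exp(-2 * np.pi * f * gamma * np.sin(phi))
    return A * term_freq * term_pos * term_len * term_asp

def total_response(f, phi, thetas, fc=9.6e9):
    """Superposition over all p ASCs, Equation (1)."""
    return sum(asc_response(f, phi, fc=fc, **th) for th in thetas)

# Illustrative example: two localized SCs on a small frequency-azimuth grid
f = np.linspace(9.3e9, 9.9e9, 64)[:, None]
phi = np.deg2rad(np.linspace(-1.5, 1.5, 64))[None, :]
thetas = [dict(A=1.0, alpha=0.0, x=1.2, y=-0.5),
          dict(A=0.6, alpha=0.5, x=-0.8, y=0.3)]
E = total_response(f, phi, thetas)   # complex (64, 64) phase-history array
```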

3. The Proposed ATR Method

Figure 1 shows the proposed SAR image ATR method, which consists of two stages: the offline training stage and the online test stage. In the first stage, the SAR image training set undergoes preprocessing followed by extraction of the ASC for the construction of GMMs, then the GMMs are divided by aspect–frame division and the training template library is obtained. In the second stage, the aspect–frame division step is not essential after building the GMM; instead, the WGQFD-based measurement matching operation is performed between the GMM and the training template library and the minimum WGQFD corresponding to a certain category is obtained, indicating completion of the recognition task. The key steps in the approach are described below.

3.1. Preprocessing

To improve the extraction accuracy of SCs, data need to be processed in advance. Here, preprocessing is composed of amplitude normalization, target segmentation, and alignment.
Due to the diversity of the electromagnetic wave propagation environment, the sensor type, and the target distance, the echo intensity from the same target may be diverse, giving rise to certain amplitude differences in the SAR image. Therefore, maximum amplitude normalization is adopted to weaken the amplitude sensitivity.
Another operation that must be mentioned is target segmentation. The constant false-alarm rate (CFAR) method is usually used to detect and segment the target region from the background [38]. Nevertheless, this method often encounters modeling difficulties in non-uniform clutter regions, and its detection time is long. Moreover, the CFAR method relies heavily on the target–clutter contrast. Li et al. [39] proposed using the ACM to segment the target from the background. They introduced a modified region-scalable fitting (RSF) model that incorporates a statistical dissimilarity measurement to overcome the speckle noise in SAR images, then integrated this modified RSF model into the global minimization active contour (GMAC) framework to achieve robust and accurate segmentation. We found that the ACM is more robust against noise than CFAR, as shown in Figure 2; thus, we employ the ACM for target segmentation in this paper.
It can be seen that while both preprocessing methods can detect and extract the main target contour without noise, the target contour segmented by the CFAR method is relatively unsmooth. Additionally, as the signal-to-noise ratio (SNR) becomes smaller, the target segmented by CFAR method loses target information, indicating that the CFAR method is not robust to noise and will subsequently prompt inaccurate ASC feature extraction, thereby impairing the recognition effect.
Lastly, data alignment preprocessing cannot be ignored. Because the target position is not always in the center, it is necessary to align the input samples such that the position deviation is diminished. Centroid alignment is adopted here; after alignment, the target centroid is located in the center of the chip.
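A minimal sketch of this preprocessing chain is given below, assuming the binary target mask produced by the ACM segmentation is already available; the function name is ours.

```python
import numpy as np
from scipy.ndimage import center_of_mass, shift

def preprocess(img, mask):
    """Amplitude normalization, target segmentation, and centroid alignment
    for one SAR magnitude chip; `mask` is the ACM target mask (assumed given)."""
    img = img / img.max()                 # maximum amplitude normalization
    target = img * mask                   # keep the segmented target region
    cy, cx = center_of_mass(target)       # intensity-weighted target centroid
    h, w = img.shape
    # translate so the centroid lands on the chip center
    return shift(target, ((h - 1) / 2 - cy, (w - 1) / 2 - cx), order=1, cval=0.0)
```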

3.2. ASC Extraction

In essence, the extraction of ASCs is a parameter estimation process. Here, the classical approximate maximum likelihood (AML) algorithm [23] is used for the parameter estimation. As shown in Equation (3), measured data can be expressed as the sum of real data and noise:
$$D(f,\varphi)=E(f,\varphi;\Theta)+N(f,\varphi)\tag{3}$$
where D ( f , φ ) represents the data obtained by the actual measurement, E ( f , φ ; Θ ) represents the real scattering field of the target, and N ( f , φ ) represents the error caused by noise and model mismatch, which is modeled by the zero-mean Gaussian distribution. The purpose of estimating parameters is to determine the parameter set Θ on the basis of D ( f , φ ) and E ( f , φ ; Θ ) . This estimation process of the ASC parameter can be expressed as follows:
$$\hat{\Theta}_{\mathrm{AML}}=\arg\min_{\Theta}\left\|D(f,\varphi)-E(f,\varphi;\Theta)\right\|^{2}\tag{4}$$
Equation (4) provides the basic idea of parameter estimation. However, in actual operation a single SAR image may contain multiple SCs; thus, the parameter scale to be estimated can be very large. To solve this problem, in this paper we adopt the same image domain decoupling strategy as in [29], in which the SCs are separated one-by-one through image segmentation operations, then AML estimation is performed for the individual SCs.
Figure 3 displays the ASC extraction results for a vehicle. The extracted ASC set reflects the strong scattering point distribution of the original SAR image target, and includes a description of the geometric size and structure. The reconstructed SAR target image is very close to the original one. Moreover, the residual is essentially the error caused by the clutter and the algorithm itself, which occupies a small amount of energy. This further indicates that the extraction algorithm has high accuracy and can facilitate subsequent recognition.
As mentioned above, the ASC parameter set $\theta$ consists of seven items. Among these, the local SCs have five items, while the distributed SCs have six. The common items are $[A,\alpha,x,y]$; thus, it is reasonable to take $[A,\alpha,x,y]$ as the candidate parameter set. The frequency dependence factor $\alpha$ is associated with the specific structure of the SC; however, at the current technical level it is very difficult to obtain a relatively accurate $\alpha$ value. In addition, considering that $\alpha$ is a discrete variable, its estimation error can easily have a large impact on the similarity measure of the SC set. Previous researchers have decided to abandon it [23,40,41], and in this paper we do the same, obtaining the final SC parameter set $[A,x,y]$. Our subsequent experiments are conducted based on these three common parameters.
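The following sketch illustrates the residual-minimization idea of Equation (4) for a single segmented SC, simplified to the ideal point SC and the retained parameters $[A,x,y]$; the full AML estimator also handles the remaining ASC parameters.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299792458.0  # speed of light (m/s)

def point_sc(params, f, phi):
    """Ideal point SC (alpha = L = gamma = 0) from Equation (2)."""
    A, x, y = params
    return A * np.exp(-1j * 4 * np.pi * f / C
                      * (x * np.cos(phi) + y * np.sin(phi)))

def fit_single_sc(D, f, phi, p0=(1.0, 0.0, 0.0)):
    """Least-squares fit of [A, x, y] to the frequency-domain data D of one
    segmented SC region, i.e., a simplified version of Equation (4)."""
    def residual(params):
        r = (D - point_sc(params, f, phi)).ravel()
        return np.concatenate([r.real, r.imag])   # real residual for the solver
    return least_squares(residual, p0).x
```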

3.3. Scattering Parameter Gaussian Mixture Model

Due to the effects of resolution and noise, the estimated values of ASCs tend to have errors and fluctuate around certain values. Here, we attempt to handle this problem through statistical methods. We set a point target in the center of the SAR image scene, then performed 10,000 Monte Carlo experiments at a resolution of 0.3 m with white Gaussian noise at 5 dB, obtaining the scattering center estimates shown in Figure 4. As can be seen, these estimated values fluctuate around certain fixed values. Through fitting experiments, we found that the values of $[A,x,y]$ fit well with the Gaussian distribution.
The Gaussian Mixture Model (GMM) is a statistical model used to represent the probability distribution of a set of datapoints that are believed to be generated from a mixture of several Gaussian distributions. In the GMM, the datapoints are modeled as a mixture of multiple Gaussian distributions, each with its own mean and covariance matrix. The advantages of using the GMM include its flexibility in modeling complex data distributions, its ability to handle missing data, and its ability to estimate the number of components in the mixture model using information criteria. Several studies have utilized the GMM for a variety of applications, including image segmentation, speech recognition, and anomaly detection [42,43,44].
Based on the above experimental results, we used a three-dimensional Gaussian distribution to describe the relative position of individual ASCs and the statistical properties of the scattering coefficient. Then, we combined multiple ASCs to construct a GMM to characterize an SAR image. In this way, the discrete SC point set can be transformed into a continuous high-dimensional PDF, which is a key step in avoiding the need for one-to-one matching to recognize SC points. The GMM can be expressed as
$$p(x)=\sum_{i=1}^{K}w_i\,\phi(x\,|\,\mu_i,\Sigma_i)\tag{5}$$

$$\phi(x\,|\,\mu_i,\Sigma_i)=\frac{\exp\!\left(-\frac{1}{2}(x-\mu_i)^{T}\Sigma_i^{-1}(x-\mu_i)\right)}{\sqrt{(2\pi)^{d}\det(\Sigma_i)}}\tag{6}$$

where $K$ represents the number of Gaussian components, that is, the number of SCs, and $\mu_i$ is determined by the scattering coefficient and the relative position, i.e., $\mu_i=[A_i,x_i,y_i]^{T}$, which is the estimated three-dimensional mean vector of the $i$th SC; moreover, $w_i$ represents the $i$th Gaussian distribution weight, which has the value $A_i/\sum_{i=1}^{K}A_i$, while $\Sigma_i$ is the diagonal variance matrix whose diagonal elements are the estimated variances of $A_i$, $x_i$, and $y_i$, i.e., $\Sigma_i=\mathrm{diag}\left(\sigma_{A_i}^{2},\sigma_{x_i}^{2},\sigma_{y_i}^{2}\right)$.
To estimate the diagonal variance matrix more accurately, in this paper we simulated point-target SAR images at diverse SNRs and resolutions and performed ASC extraction under each condition. As shown in Figure 5, 10,000 Monte Carlo experiments were used to calculate the estimated standard deviations of the individual SCs.
With reference to Figure 5, the following points require further discussion: (1) the higher the SNR, the smaller the estimated standard deviations of $A_i$, $x_i$, and $y_i$, with $\sigma_A$, $\sigma_x$, and $\sigma_y$ changing approximately linearly with the SNR; (2) the lower the resolution, the smaller the estimated standard deviations of $A_i$, $x_i$, and $y_i$, with $\sigma_A$, $\sigma_x$, and $\sigma_y$ decreasing approximately exponentially as the resolution decreases. After experimental verification, we fit the following expressions for $\sigma_A$, $\sigma_x$, and $\sigma_y$:
$$\begin{aligned}\sigma_A&=(k_1\rho+k_2)\exp(k_3\Delta r)\\ \sigma_x&=(k_4\rho+k_5)\exp(k_6\Delta r)+k_7\Delta r\\ \sigma_y&=(k_8\rho+k_9)\exp(k_{10}\Delta r)+k_{11}\Delta r\end{aligned}\tag{7}$$
where the parameters $k_1,k_2,\dots,k_{11}\in\mathbb{R}^{+}$ can be calculated by the least squares method [43], while $\rho$ is the amplitude ratio of the SC to the noise, obtained from Equation (8):

$$\rho=10\log_{10}\frac{A_i^{2}}{\sigma_n^{2}},\tag{8}$$

where $\sigma_n^{2}$ indicates the noise power.
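The construction can be sketched in Python as follows; the $k$ values passed to sigma_model are placeholders for the least-squares fits of Equation (7), not the paper's fitted values.

```python
import numpy as np

def sigma_model(rho, dr, k):
    """Fitted standard deviations of Equation (7); k = (k1, ..., k11) come from
    least-squares fitting (the values passed in here are placeholders)."""
    sA = (k[0] * rho + k[1]) * np.exp(k[2] * dr)
    sx = (k[3] * rho + k[4]) * np.exp(k[5] * dr) + k[6] * dr
    sy = (k[7] * rho + k[8]) * np.exp(k[9] * dr) + k[10] * dr
    return np.array([sA, sx, sy])

def build_gmm(ascs, dr, sigma_n2, k):
    """Turn an extracted ASC list [[A, x, y], ...] into GMM parameters
    (weights, means, diagonal covariances) per Equations (5)-(8)."""
    ascs = np.asarray(ascs, dtype=float)
    A = ascs[:, 0]
    w = A / A.sum()                              # w_i = A_i / sum_i A_i
    mu = ascs                                    # mu_i = [A_i, x_i, y_i]
    rho = 10 * np.log10(A ** 2 / sigma_n2)       # SC-to-noise ratio, Equation (8)
    covs = np.stack([np.diag(sigma_model(r, dr, k) ** 2) for r in rho])
    return w, mu, covs
```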

3.4. Weighted GQFD

After establishing the GMMs, the problem of discrete ASC set matching recognition can be transformed into the similarity measurement between two GMMs. Usually, algorithms such as the Kullback–Leibler divergence, Mahalanobis distance, etc. [45,46] require the same number of components in the distribution to measure the similarity between them. However, the number of SCs extracted for each SAR image in this paper is different, resulting in inconsistent components in the GMMs; thus, these measurement algorithms cannot be used directly.
Fortunately, the GQFD is the characteristic quadratic distance between two GMMs and can be used for similarity measurement [43,44]. It has the following advantages: (1) there is no need to establish point-to-point correspondences between two point sets or to compensate for the presence of excess points (false alarms) and missing points (missed alarms); (2) it considers the respective overall structures of the point sets, rather than simply matching isolated points; (3) the GQFD computation between two point sets uses analytic expressions and has high computational efficiency; and (4) it can calculate the distance between point sets with different numbers of elements. It is not affected by differences in amplitude or relative position, or by the number of target SCs extracted in the SAR image.
Therefore, considering the practical situation of this paper, we use the GQFD to measure the similarity between GMMs. The specific definition of the GQFD is as follows:
Assume that $M_g$ and $T_g$ represent the GMMs of two ASC sets; their PDFs can be expressed by Equations (9) and (10) [43]:

$$M_g:\ p(x)=\sum_{i=1}^{K_m}w_i^{m}\,\phi_i^{m}(x\,|\,\mu_i^{m},\Sigma_i^{m})\tag{9}$$

$$T_g:\ p(y)=\sum_{i=1}^{K_t}w_i^{t}\,\phi_i^{t}(y\,|\,\mu_i^{t},\Sigma_i^{t})\tag{10}$$
Then, the expected similarity between the two GMMs is defined as follows [44]:
$$S_E(\phi_i^{m},\phi_j^{t})=\int_{x}\int_{y}\phi_i^{m}(x)\,\phi_j^{t}(y)\,f_s(x,y)\,dx\,dy\tag{11}$$
where f s ( x , y ) is a similarity function, defined as
$$f_s(x,y)=\prod_{k=1}^{d}\exp\!\left(-\alpha\left(x_k-y_k\right)^{2}\right)\tag{12}$$
where x k and y k are the k th dimensional elements of x and y , respectively, and d is the dimension of x and y .
Due to the integral in Equation (11), it is difficult to make use of this similarity distance in numerical calculations. In [44], it was shown that the expected similarity can be simplified as follows:
$$S_E(\phi_i^{m},\phi_j^{t})=\prod_{k=1}^{d}\frac{\exp\!\left(-\dfrac{\alpha\left(\mu_{ik}^{m}-\mu_{jk}^{t}\right)^{2}}{1+2\alpha\left((\sigma_{ik}^{m})^{2}+(\sigma_{jk}^{t})^{2}\right)}\right)}{\sqrt{1+2\alpha\left((\sigma_{ik}^{m})^{2}+(\sigma_{jk}^{t})^{2}\right)}}\tag{13}$$
where $\mu_{ik}^{m}$ and $\mu_{jk}^{t}$ are the $k$th dimensional elements of $\mu_i^{m}$ and $\mu_j^{t}$, respectively; accordingly, $\sigma_{ik}^{m}$ and $\sigma_{jk}^{t}$ are the $k$th diagonal elements of $\Sigma_i^{m}$ and $\Sigma_j^{t}$, respectively.
Based on the expected similarity between two Gaussian distributions, the similarity matrix $A_g=[a_{ij}]\in\mathbb{R}^{(K_m+K_t)\times(K_m+K_t)}$, where $a_{ij}$ is the element in the $i$th row and $j$th column of $A_g$, can be defined as follows:

$$a_{ij}=\begin{cases}S_E(\phi_i^{m},\phi_j^{m}) & 1\le i,j\le K_m\\ S_E(\phi_i^{m},\phi_{j-K_m}^{t}) & 1\le i\le K_m<j\le K_m+K_t\\ S_E(\phi_{i-K_m}^{t},\phi_j^{m}) & 1\le j\le K_m<i\le K_m+K_t\\ S_E(\phi_{i-K_m}^{t},\phi_{j-K_m}^{t}) & K_m<i,j\le K_m+K_t\end{cases}\tag{14}$$

where $S_E(\phi_i^{m},\phi_j^{m})$ and $S_E(\phi_{i-K_m}^{t},\phi_{j-K_m}^{t})$ are self-expected similarity values within $M_g$ and $T_g$, respectively, and $S_E(\phi_i^{m},\phi_{j-K_m}^{t})$ and $S_E(\phi_{i-K_m}^{t},\phi_j^{m})$ are co-expected similarity values between $M_g$ and $T_g$.
Therefore, the GQFD value of these two ASC sets can be calculated using Equation (15) [43]:

$$\mathrm{GQFD}(M_g,T_g)=\sqrt{(w^{m},-w^{t})\cdot A_g\cdot(w^{m},-w^{t})^{T}}\tag{15}$$

where $(w^{m},-w^{t})=(w_1^{m},w_2^{m},\dots,w_{K_m}^{m},-w_1^{t},-w_2^{t},\dots,-w_{K_t}^{t})$ denotes the concatenation of $w^{m}$ and $-w^{t}$.
For a stable SC, its relative position changes slowly with the target azimuth, while its amplitude varies dramatically. The position information is therefore more reliable for target recognition than the amplitude. Thus, we assign different weight values $\gamma_k$ to the relative position and amplitude in the function $f_s(x,y)$, and propose the weighted GQFD (WGQFD):
$$f_s(x,y)=\prod_{k=1}^{d}\exp\!\left(-\gamma_k\alpha\left(x_k-y_k\right)^{2}\right)\tag{16}$$
where γ k is obtained from the search method.
Substituting Equation (16) into Equation (11), we obtain the analytic expression of S E ( ϕ i m , ϕ j t ) :
$$S_E(\phi_i^{m},\phi_j^{t})=\prod_{k=1}^{d}\frac{\exp\!\left(-\dfrac{\gamma_k\alpha\left(\mu_{ik}^{m}-\mu_{jk}^{t}\right)^{2}}{1+2\gamma_k\alpha\left((\sigma_{ik}^{m})^{2}+(\sigma_{jk}^{t})^{2}\right)}\right)}{\sqrt{1+2\gamma_k\alpha\left((\sigma_{ik}^{m})^{2}+(\sigma_{jk}^{t})^{2}\right)}}\tag{17}$$
Similarly, the other analytical expressions in Equation (14) can be obtained, and by substituting these into Equation (15), the WGQFD is obtained.
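A compact sketch of the WGQFD computation is given below; the alpha and gamma values are illustrative, as the paper obtains the weights $\gamma_k$ by search.

```python
import numpy as np

def expected_similarity(mu_a, var_a, mu_b, var_b, alpha, gamma):
    """Weighted expected similarity of two diagonal Gaussians, Equation (17).
    mu_*: (d,) means; var_*: (d,) diagonal variances; gamma: (d,) weights."""
    denom = 1.0 + 2.0 * gamma * alpha * (var_a + var_b)
    return np.prod(np.exp(-gamma * alpha * (mu_a - mu_b) ** 2 / denom)
                   / np.sqrt(denom))

def wgqfd(gmm_m, gmm_t, alpha=1.0, gamma=np.array([0.2, 1.0, 1.0])):
    """WGQFD between two GMMs (w, mu, covs), Equations (14)-(15); the gamma
    values (position weighted above amplitude) are illustrative, not the
    searched values used in the paper."""
    w_m, mu_m, cov_m = gmm_m
    w_t, mu_t, cov_t = gmm_t
    mus = np.concatenate([mu_m, mu_t])
    vars_ = np.concatenate([np.diagonal(cov_m, axis1=1, axis2=2),
                            np.diagonal(cov_t, axis1=1, axis2=2)])
    n = len(mus)
    A = np.empty((n, n))                          # similarity matrix A_g
    for i in range(n):
        for j in range(n):
            A[i, j] = expected_similarity(mus[i], vars_[i], mus[j], vars_[j],
                                          alpha, gamma)
    w = np.concatenate([w_m, -w_t])               # signed weight concatenation
    return np.sqrt(max(float(w @ A @ w), 0.0))
```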

3.5. Aspect–Frame Division

There are problems with amplitude sensitivity and attitude sensitivity in radar ATR based on SCs. The amplitude sensitivity, as described in Section 3.1, can be reduced by normalizing the maximum amplitude value.
The attitude sensitivity refers to the azimuth attitude sensitivity. A common approach to this problem is to enrich the template library as far as possible: as long as the target ASC information at every azimuth is contained, the target recognition effect can be further enhanced. However, too many templates reduce the efficiency of target recognition. Therefore, in this paper we design an adaptive aspect–frame division algorithm to reduce the number of stored templates and increase recognition efficiency. First, depending on the WGQFD between the GMMs of the training samples, the training samples are divided into several aspect-frames; then, in each aspect-frame, the GMM with the smallest WGQFD to the other samples is selected as the template of the current aspect-frame. Through the same method, the aspect-frames of each target can be acquired. The detailed flow of the adaptive aspect–frame division algorithm is shown in Algorithm 1.
Algorithm 1 Adaptive aspect–frame division.
  • Input: the Gaussian mixture samples $\{ST_{m,c}\}_{m=1}^{M_c}$ constructed from the ASC sets of the SAR training samples of the $C$ target classes, and the aspect-frame threshold $T_d$, where $M_c$ is the number of training samples for the $c$th class target.
Initialization: the total aspect-frame number $N_c=1$ for each class.
1:
Let the Gaussian mixture index m = 1 .
2:
Let the aspect-frame index k = 1 .
3:
Assign S T m , c to the k th aspect-frame.
4:
Calculate the average WGQFD between the sample $ST_{m+1,c}$ and all Gaussian mixture samples in the $k$th aspect-frame.
5:
If the average WGQFD $<T_d$, this sample is drawn into the current aspect-frame; set $m=m+1$ and go to step 2. Otherwise, this sample does not belong to the current aspect-frame; set $k=k+1$. If $k>N_c$, add a new aspect-frame, set $N_c=N_c+1$ and $m=m+1$, and go to step 2; if $k\le N_c$, set $m=m+1$ and then go to step 3.
6:
Traverse all Gaussian mixture samples of the $C$ target classes through steps 1–5.
  • Output: the $N_c$ aspect-frames of the $C$ target classes.
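A minimal Python sketch of Algorithm 1 (and of the template selection within each aspect-frame) reads as follows; it implements the intent of the steps above rather than reproducing them literally.

```python
import numpy as np

def aspect_frame_division(samples, t_d, dist):
    """Minimal sketch of Algorithm 1 for one target class. `samples` is the
    azimuth-ordered list of per-image GMMs, `t_d` the aspect-frame threshold,
    and `dist` the WGQFD. Returns aspect-frames as lists of sample indices."""
    frames = [[0]]                                 # the first sample opens frame 1
    for m in range(1, len(samples)):
        for frame in frames:                       # try existing frames in order
            avg = np.mean([dist(samples[m], samples[i]) for i in frame])
            if avg < t_d:
                frame.append(m)
                break
        else:                                      # no frame accepted the sample
            frames.append([m])
    return frames

def frame_template(samples, frame, dist):
    """Template selection: the member with the smallest total WGQFD to the
    other members of its aspect-frame (Section 3.5)."""
    totals = [sum(dist(samples[i], samples[j]) for j in frame if j != i)
              for i in frame]
    return frame[int(np.argmin(totals))]
```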

3.6. The Training and Testing Processes

The training process includes the following steps: first, ASC extraction is performed for each class of preprocessed SAR image samples, then GMMs are constructed for each sample separately, and finally, all GMMs in each category are divided using Algorithm 1; in this way, a library of training templates is obtained. The specific details are shown in Algorithm 2.
Algorithm 2 Training process of the recognition method based on the GMM.
  • Input: the target SAR training samples $\{I_m^c\}_{c=1,m=1}^{C,M_c}$, where $C$ is the number of target classes of interest.
1:
for  c = 1 to C  do
2:
   Extract the ASC set of the c th class samples and construct the corresponding GMM.
3:
   for  m = 1 to M c  do
4:
Extract the ASC set $S(m,c)$ from the input SAR image sample $I_m^c$ by using the AML algorithm described in Section 3.2.
5:
Construct the GMM $ST_{m,c}$ on the extracted ASC set $S(m,c)$.
6:
   end for
7:
Divide the GMM set $\{ST_{m,c}\}_{m=1}^{M_c}$ into $N_c$ aspect-frames by using Algorithm 1.
8:
   for  k = 1 to N c  do
9:
In the $k$th aspect-frame, choose the GMM $HT(k,c)$ with the smallest WGQFD to the other GMMs as the template.
10:
  end for
11:
end for
  • Output: the template library for the training set: $\{HT(k,c)\}_{c=1,k=1}^{C,N_c}$.
Testing is performed by constructing GMMs for the ASCs of the test samples and then finding the minimum WGQFD through calculation using the GMMs in the template library to obtain the final recognition results. The testing process used for specific recognition is shown in Algorithm 3.
Algorithm 3 Test process of the recognition method based on the GMM.
  • Input: the target SAR test sample $I_{\mathrm{test}}$ and the training template library $\{HT(k,c)\}_{c=1,k=1}^{C,N_c}$.
1:
Preprocess the test sample I test .
2:
Extract the ASCs from the preprocessed test sample and form the ASC set Q .
3:
Construct a GMM S Q by using the ASC set Q .
4:
Find the smallest WGQFD $D_{\min}$ and the class $c^{*}$ to which the template achieving $D_{\min}$ belongs, i.e., $c^{*}=\arg\min_{k,c}D(S_Q,HT(k,c))$.
  • Output: the class $c^{*}$ of the test sample.
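A minimal sketch of the test process reads as follows, with `dist` standing in for the WGQFD of Section 3.4.

```python
def classify(test_gmm, template_library, dist):
    """Algorithm 3 in miniature: nearest-template classification by minimum
    WGQFD. `template_library` maps class label -> list of aspect-frame
    template GMMs HT(k, c); `dist` is the WGQFD of Section 3.4."""
    best_cls, d_min = None, float("inf")
    for cls, templates in template_library.items():
        for tmpl in templates:
            d = dist(test_gmm, tmpl)
            if d < d_min:
                best_cls, d_min = cls, d
    return best_cls, d_min
```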

4. Experimental Results

4.1. Data Introduction and Experimental Platform

In this paper, the proposed method is verified on the MSTAR dataset, which has been used extensively for verifying SAR target recognition algorithms since the 1990s. This dataset collects X-band airborne SAR images of various static ground military vehicle targets with horizontal polarization, azimuth and range resolution of 0.3 m, an azimuth interval of approximately 0.03°, and depression angles of 15°, 17°, 30°, and 45°. Figure 6 displays the optical images of the ten categories of targets in the MSTAR dataset, in the order 2S1, BMP2, BRDM2, BTR60, BTR70, D7, T62, T72, ZIL131, and ZSU23_4. Vehicle images of other models for certain categories are included in the MSTAR database as well.
In order to test the recognition performance of the proposed ATR method, the experiments described below constructed different EOCs based on these ten categories of vehicle targets and selected suitable target classes. Meanwhile, several existing SAR ATR methods were selected for comparison with the proposed method. The specific implementation of each method was as follows:
(1)
KNN: The original SAR image was described by its PCA feature vector, which was input to the k-nearest neighbor (KNN) classifier. The Euclidean distance was employed to measure the distance between PCA feature vectors [16].
(2)
SVM: The original SAR image was described by its PCA feature vector, which was input into the SVM classifier. The Gaussian kernel was used as the kernel function of the SVM [47].
(3)
SRC: The original SAR image was described by random projection features, and the sparse representation coefficient vector was solved using the orthogonal matching pursuit (OMP) algorithm [48].
(4)
Resnet: A neural network was constructed using a deep residual structure. In this paper, we chose Resnet18, which has an 18-layer network structure, for the comparison experiment [49].
(5)
VGG: All convolution kernels are of size 3 × 3. The network can be very deep, but has many parameters. In this paper, we chose VGG16, which has 16 layers, for the comparison [50].
(6)
Densenet: Compared to Resnet, it has a smaller number of parameters and a higher network depth. A 121-layer network structure was used here [51].
(7)
HD-ASC: The Hausdorff distance proposed in [52] was used to match and recognize the ASC set.
(8)
G-ASC: The recognition method proposed in this paper.
(9)
G-ASC1: The same recognition method as in G-ASC except without aspect–frame division.
All experiments were performed on a Dell Precision 5820 workstation (CPU: Intel i9-10920X, GPU: GeForce RTX 3090, RAM: 64 GB). The software was Matlab R2022a under Windows 10. All of the deep learning algorithms were implemented using the PyTorch library in Python 3.8.

4.2. Noise Experiments and Results

Due to the effects of the target background and radar sensor changes, the acquired SAR images are inevitably disturbed by different levels of noise. Fortunately, the chosen SAR target images from the MSTAR dataset possess a high SNR, which can reduce the difficulty of target recognition to an extent [53]. To verify the robustness of our method under different noise levels, we adopted the same method as in [23] to add Gaussian white noise to the original SAR target images, resulting in SAR images being constructed at different SNR levels. Considering the inability to completely remove the noise, we assume that the raw SAR images are noise-free.
As shown in Figure 7, the raw images were first processed by a two-dimensional fast Fourier transform (2D FFT), then the zero padding and window were removed to obtain the frequency-domain image. Next, Gaussian white noise was added to obtain the transformed image, following the formula

$$\mathrm{SNR}=10\log_{10}\frac{\sum_{h=1}^{H}\sum_{w=1}^{W}\left|I(h,w)\right|^{2}}{HW\sigma_n^{2}}.$$

Finally, the transformed image was transferred back to the image domain by the reverse process, thereby obtaining the final SAR image with noise. The results can be found in Figure 8. Obviously, as the SNR of the added noise decreases, the identifiability of the target against the noise background is gradually weakened. At an SNR of 0 dB in particular, the target area is almost wholly submerged in the noise.
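A simplified sketch of this noise-construction procedure (omitting the zero-padding and window removal of Figure 7) might look as follows.

```python
import numpy as np

def add_noise(img, snr_db, rng=None):
    """Simplified version of the noise-construction pipeline of Figure 7:
    2D FFT, add complex white Gaussian noise at the target SNR, inverse FFT
    (the zero-padding and window removal steps are omitted here)."""
    rng = rng or np.random.default_rng(0)
    F = np.fft.fft2(img)
    H, W = img.shape
    sig_p = np.sum(np.abs(img) ** 2) / (H * W)      # mean image-domain power
    noise_p = sig_p / 10 ** (snr_db / 10)           # sigma_n^2 from the SNR formula
    # frequency-domain noise scaled so the image-domain noise power is noise_p
    noise = (rng.standard_normal(F.shape) + 1j * rng.standard_normal(F.shape)) \
            * np.sqrt(noise_p * H * W / 2)
    return np.abs(np.fft.ifft2(F + noise))
```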
Here, the training set consisted of SAR images of the BMP2(SN_9566), BTR70(SN_C71), and T72(SN_132) at a 17° depression angle, and the test set was constituted by the same target images with noise of 10 dB, 5 dB, and 0 dB. The recognition results are shown in Table 1.
The recognition rates of all methods decreased to different degrees as the SNR decreased, while our method maintained the best rate. In addition, the six comparison methods based on global features or raw pixels did not adapt well to the interference caused by noise. A potential reason may be that the first three methods are based on global features and the next three are based directly on the pixels. After adding strong noise, the global intensity distribution of the SAR image changes notably, resulting in a large discrepancy between the test and training samples, causing the recognition performance to deteriorate. This difference due to noise can be visualized in Figure 8. By contrast, the three ASC-based methods are very robust to the noise. This is because their extracted ASCs retain the rich parametric features of the target, which is beneficial for target recognition. In addition, the proposed method considers noise during GMM construction and uses the WGQFD as the measure in the recognition process, which effectively accommodates a varying number of SCs caused by false alarms and missed alarms. Naturally, the recognition performance is superior in terms of robustness. Of course, it can be seen that the recognition method after aspect–frame division performs slightly worse than the method without aspect–frame division, although it still outperforms the other comparison methods.

4.3. Resolution Change Experiments and Results

In practice, the training samples often cover only a single or few resolutions. For instance, the resolution of all SAR images in the MSTAR dataset is 0.3 m. However, the sample image to be recognized is likely to be inconsistent with the training template at the resolution level, which will inevitably affect the recognition results. For the sake of testing the recognition performance of our method at various resolutions, the test set was composed of images with different resolutions constructed according to [3]. The distance and azimuthal resolution of SAR images are determined by the radar sensor bandwidth and the synthetic aperture size, respectively. Referring to the method used in Figure 7, the raw SAR target image is switched to the frequency domain. According to the SAR imaging mechanism, the specified proportion of data is extracted from the center of the frequency domain at the set resolution. Then, the extracted frequency domain data are converted to the image domain according to the reverse process. Finally, different resolutions of SAR images are obtained. It is worth noting that after the extraction operation, only low-resolution images can be obtained.
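A sketch of this construction, keeping only the central portion of the frequency-domain support, is given below; the rectangular extraction is a simplification of the described procedure.

```python
import numpy as np

def reduce_resolution(img, factor):
    """Reduced-resolution construction sketch: keep only the central 1/factor
    of the frequency-domain support (e.g., factor=2 degrades 0.3 m to roughly
    0.6 m) and transform back on the original grid."""
    F = np.fft.fftshift(np.fft.fft2(img))
    H, W = F.shape
    h, w = int(H / factor), int(W / factor)
    kept = np.zeros_like(F)
    r0, c0 = (H - h) // 2, (W - w) // 2
    kept[r0:r0 + h, c0:c0 + w] = F[r0:r0 + h, c0:c0 + w]   # central sub-band
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kept)))
```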
As shown in Figure 9, the target outline and details of the SAR image gradually become more blurred as the resolution declines, which is likely to cause issues during the recognition process.
The training samples were the same for this experiment as in Section 4.2, and the test samples were the same images at reduced resolutions (0.6 m, 0.9 m, and 1.2 m). The recognition performance results at distinct resolutions are shown in Table 2.
It is apparent that the recognition accuracy of the other seven methods declined to various degrees, while the G-ASC1 and G-ASC methods maintained stronger robustness at all times. The decrease in resolution resulted in pixel distribution changes across the whole SAR image, meaning that performance inevitably worsened for the methods based on global features and pixels. By considering the influence of image resolution in the ASC estimation and GMM construction process, the proposed ASC-based method adapts better to resolution changes than the others. Moreover, the proposed method uses WGQFD-based matching to assign different weights to the ASC parameters, further guaranteeing robust recognition performance.

4.4. Target Model Change Experiments and Results

In the target recognition field, the test targets are in most cases non-cooperative. Compared to the library targets, they have the same category but different models. Usually, they are homologous to the library targets in terms of their geometrical shapes, but have several differences in their local structure [54]. The MSTAR dataset contains target SAR images of multiple models for a given category; for instance, BMP2 includes SN_9563, SN_9566, and SN_C21, while T72 includes SN_132, SN_812, SN_S7, SN_A04, SN_A05, SN_A07, and SN_A10. Here, we selected only the first three models of the above-mentioned targets. In order to ensure the rationality of the experiment, we added an extra BTR70 (SN_C71) target to the training set. Specific samples from the training set and the test set are shown in Table 3.
Figure 10 shows T72 optical images of different models. It can be seen that while the overall structure of these targets is very similar, there are subtle differences locally, such as the fuel tank, the gun barrel, and the front armor.
Table 4 presents the recognition–confusion matrix for our method, while Table 5 presents the average recognition rates of the other methods. In general, the recognition rate of all methods exceeded 94%, indicating that the subtle changes in detail caused by the model changes did not exert a great impact on the overall recognition effect. Compared with the HD-ASC method, our approach has more robust and competitive recognition performance, which reveals that the WGQFD-based matching method has good performance in perceiving changes in local details.

4.5. Depression Angle Change Experiments and Results

SAR images have strong sensitivity to the depression angle; in general, the greater the discrepancy in the depression angle, the greater the difference between SAR images [55]. The MSTAR dataset contains multiple target SAR images at different depression angles. Usually, images at 17° are used as the training set and images at 30° and 45° are used as the test set. In this paper, we do the same in order to verify the recognition performance under different depression angles. The selected training set and test set data are displayed in Table 6. The SAR images at different depression angles are shown in Figure 11. It can be seen that the image at 30° is relatively similar to the image at 17°, while the image at 45° is quite unlike it.
Table 7 depicts the recognition results of our method for three selected targets; the average recognition rates are 96.41% and 77.56% at 30° and 45°, respectively. Table 8 exhibits the average recognition rates of various methods at 30° and 45°. It can be seen that the recognition rates of all methods are above 90% at 30°, indicating that the small difference in the depression angle has little effect on the different recognition methods. However, the recognition accuracy decreased sharply with a depression angle of 45°. This shows that an excessive depression angle disparity leads to the test samples varying from the training samples. This characteristic can be intuitively seen in Figure 11. In comparison, the proposed method maintains a higher recognition rate at both depression angles, only slightly lower than the HD-ASC method at 30°.
This phenomenon may be due to the viewing-angle sensitivity of radar sensors; the recognition performance of the global feature-based methods (e.g., KNN, SVM, SRC) and pixel-based deep learning methods (e.g., Resnet18, VGG16, Densenet121) decreased significantly when the difference in depression angles was large. However, local features (strong SCs) can be retained and used; thus, the recognition accuracy of our method remains highly robust to changes in the depression angle. This advantage is more obvious at 45°. The results for the proposed method (after aspect–frame division) are slightly lower than the HD-ASC method at 30°, though the model without aspect–frame division is higher than any comparison method. This shows that although aspect–frame division reduces the number of stored templates and improves recognition efficiency, it reduces the recognition accuracy to an extent. Nevertheless, the loss remains within an acceptable range.

4.6. Partial Occlusion Experiments and Results

Another very important EOC is partial occlusion. Whether artificial or not, occlusion blocks the radar line of sight and seriously affects the target recognition effect. However, the targets in the existing measured MSTAR dataset are usually completely exposed, making it necessary to construct partially occluded target images. The concrete construction method for occluded SAR target images follows [56], as shown in Figure 12. The SAR target segmentation method in Section 3.1 is used to obtain the target region; then, partial target regions are removed from different directions at a set proportion. Finally, original background pixels are randomly selected to fill the removed area. Figure 13 shows eight different occlusion directions, and Figure 14 shows the complete image and the images occluded at a 30% ratio from different directions. The same training set was used here as in Section 4.2 and Section 4.3. The test set comprised SAR images at different occlusion proportions (10–50%) from the eight occlusion directions. The obtained experimental results are shown in Figure 15.
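A simplified sketch of this occlusion construction is given below; the direction vector and random background filling follow the description above.

```python
import numpy as np

def occlude(img, mask, ratio, direction, rng=None):
    """Occluded-image construction per Figure 12 (simplified): remove `ratio`
    of the target pixels starting from one of the eight directions and fill
    the removed area with randomly drawn background pixels.
    `direction` is a (dy, dx) unit vector, e.g., (0, 1) occludes from the east."""
    rng = rng or np.random.default_rng(0)
    ys, xs = np.nonzero(mask)                       # target pixels from Section 3.1
    proj = ys * direction[0] + xs * direction[1]    # order pixels along direction
    order = np.argsort(proj)
    n_remove = int(ratio * len(ys))
    ry, rx = ys[order[:n_remove]], xs[order[:n_remove]]
    out = img.copy()
    out[ry, rx] = rng.choice(img[mask == 0], size=n_remove)  # random background fill
    return out
```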
As the occlusion ratio increases, the recognition performance of each method declines significantly. In contrast, the recognition performance of the ASC-based methods decreases more slowly, which can be attributed to their ability to accurately describe the target's characteristics using the ASCs in non-occluded regions, resulting in robust recognition performance even at high occlusion ratios. As expected, the recognition accuracy of the proposed method consistently outperforms all the comparison methods, remaining above 80%. This shows that when part of the SCs are missing, our method can make the best use of the remaining SC information to ensure recognition accuracy. Therefore, it remains robust under occlusion EOCs. Consistent with the experimental results for the previous EOCs, it is worth noting that the recognition accuracy always decreases slightly after aspect–frame division; regardless, the proposed method is able to maintain high recognition performance at all times.

4.7. Computational Time Analysis

To further demonstrate the advantages of the proposed method, we compared the average time required by each method to recognize a sample, with the results shown in Figure 16. It can be seen that the two classical methods, SVM and KNN, take the longest time; the proposed method without aspect–frame division takes less time than these, though more than the remaining comparison methods. In contrast, the proposed method with aspect–frame division yields significant savings in computation time. Without aspect–frame division, the samples in the template library need to be matched one-by-one, which is time-consuming; after division, the number of required matches is significantly reduced, resulting in a shorter matching time.
In combination with the recognition results for each EOC, our proposed algorithm is shown to save recognition time through aspect–frame division while maintaining a consistently strong level of robustness. While the HD-ASC method produces comparable results to our proposed algorithm in certain cases, it incurs a longer processing time overall. To summarize, our proposed method (after aspect–frame division) achieves a high recognition rate with a lower time cost, making it an efficient and effective solution that is able to maintain good robustness.

5. Conclusions

This paper proposes an SAR image ATR method based on the Scattering Parameter GMM. This approach takes into account the effects of resolution and noise by adopting GMMs to model the extracted ASC set. To address the issue of inconsistent numbers of scattering points, the proposed method uses the WGQFD to measure similarity and enable SAR image ATR. In addition, in order to improve the efficiency of the recognition, an adaptive frame division operation is used to reduce the number of templates, which reduces the time spent on recognition while ensuring that there is no excessive decrease in the recognition accuracy. Various EOCs are considered, including noise, resolution changes, model changes, depression angle changes, and partial occlusions; our recognition experiments show that the proposed method outperforms others in terms of both robustness and computation time. Moreover, our proposed method has strong engineering realizability.
However, because the ASC estimation method relies on the AML algorithm, which is computationally expensive, future research could focus on improving the efficiency and accuracy of ASC extraction. This could involve exploring ways to optimize the AML algorithm or introducing deep learning techniques to enhance the process. An additional next step is to consider combining the scattering center parameters with deep learning methods to organically combine the physical properties of SAR images with deep learning methods, which could further enhance the robustness and accuracy of SAR ATR.

Author Contributions

Conceptualization, J.Q.; Methodology, J.Q.; Validation, J.Q.; Investigation, J.Q. and H.Z.; Data curation, J.Q. and J.T.; Writing—review & editing, J.Q.; Project administration, Z.L. and R.X.; Funding acquisition, L.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded in part by the National Natural Science Foundation of China under Grant No. 62001346.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments to improve this paper’s quality.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pei, J.; Huo, W.; Wang, C.; Huang, Y.; Zhang, Y.; Wu, J.; Yang, J. Multiview deep feature learning network for SAR automatic target recognition. Remote Sens. 2021, 13, 1455.
  2. Ran, L.; Xie, R.; Liu, Z.; Zhang, L.; Li, T.; Wang, J. Simultaneous range and cross-range variant phase error estimation and compensation for highly squinted SAR imaging. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4448–4463.
  3. Maître, H. Processing of Synthetic Aperture Radar (SAR) Images; John Wiley & Sons: Hoboken, NJ, USA, 2013.
  4. Ran, L.; Liu, Z.; Xie, R. Ground Maneuvering Target Focusing via High-Order Phase Correction in High-Squint Synthetic Aperture Radar. Remote Sens. 2022, 14, 1514.
  5. Xu, W.; Wang, B.; Xiang, M.; Li, R.; Li, W. Image Defocus in an Airborne UWB VHR Microwave Photonic SAR: Analysis and Compensation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5213518.
  6. El-Darymli, K.; Gill, E.W.; Mcguire, P.; Power, D.; Moloney, C. Automatic target recognition in synthetic aperture radar imagery: A state-of-the-art review. IEEE Access 2016, 4, 6014–6058.
  7. Ding, Y. Multiset canonical correlations analysis of bidimensional intrinsic mode functions for automatic target recognition of SAR images. Comput. Intell. Neurosci. 2021, 2021, 4392702.
  8. Chiang, H.C.; Moses, R.L.; Potter, L.C. Model-based classification of radar images. IEEE Trans. Inf. Theory 2000, 46, 1842–1854.
  9. Diemunsch, J.R.; Wissinger, J. Moving and stationary target acquisition and recognition (MSTAR) model-based automatic target recognition: Search technology for a robust ATR. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery V, Orlando, FL, USA, 14–17 April 1998; Volume 3370, pp. 481–492.
  10. Amoon, M.; Rezai-rad, G.a. Automatic target recognition of synthetic aperture radar (SAR) images based on optimal selection of Zernike moments features. IET Comput. Vis. 2014, 8, 77–85.
  11. Mishra, A.K.; Motaung, T. Application of linear and nonlinear PCA to SAR ATR. In Proceedings of the 2015 25th International Conference Radioelektronika (RADIOELEKTRONIKA), Pardubice, Czech Republic, 21–22 April 2015; pp. 349–354.
  12. Zeng, Z.; Sun, J.; Han, Z.; Hong, W. SAR Automatic Target Recognition Method based on Multi-Stream Complex-Valued Networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5228618.
  13. Gerry, M.J.; Potter, L.C.; Gupta, I.J.; Van Der Merwe, A. A parametric model for synthetic aperture radar measurements. IEEE Trans. Antennas Propag. 1999, 47, 1179–1188.
  14. Liu, H.; Li, S. Decision fusion of sparse representation and support vector machine for SAR image target recognition. Neurocomputing 2013, 113, 97–104.
  15. Xu, Y.Y. Multiple-instance learning based decision neural networks for image retrieval and classification. Neurocomputing 2016, 171, 826–836.
  16. Mishra, A.K. Validation of PCA and LDA for SAR ATR. In Proceedings of the Tencon IEEE Region 10 Conference, Hyderabad, India, 23–26 November 2009.
  17. Cao, C.; Cao, Z.; Cui, Z.; Wang, L. Incremental robust non-negative matrix factorization for SAR image recognition. In Proceedings of the 2019 6th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Xiamen, China, 26–29 November 2019.
  18. Lin, C.; Peng, F.; Wang, B.H.; Sun, W.F.; Kong, X.J. Research on PCA and KPCA self-fusion based MSTAR SAR automatic target recognition algorithm. J. Electron. Sci. Technol. 2012, 10, 352–357.
  19. Huang, X.; Qiao, H.; Zhang, B.; Nie, X. Supervised polarimetric SAR image classification using tensor local discriminant embedding. IEEE Trans. Image Process. 2018, 27, 2966–2979.
  20. Tang, T.; Su, Y. Object recognition based on feature matching of scattering centers in SAR imagery. In Proceedings of the 2012 5th International Congress on Image and Signal Processing, Chongqing, China, 16–18 October 2012; pp. 1073–1076.
  21. Tian, S.; Yin, K.; Wang, C.; Zhang, H. An SAR ATR method based on scattering centre feature and bipartite graph matching. IETE Tech. Rev. 2015, 32, 364–375.
  22. Dungan, K.E.; Potter, L.C. Classifying transformation-variant attributed point patterns. Pattern Recognit. 2010, 43, 3805–3816.
  23. Ding, B.; Wen, G.; Zhong, J.; Ma, C.; Yang, X. A robust similarity measure for attributed scattering center sets with application to SAR ATR. Neurocomputing 2017, 219, 130–143.
  24. Bhanu, B.; Lin, Y. Stochastic models for recognition of occluded targets. Pattern Recognit. 2003, 36, 2855–2873.
  25. Jianxiong, Z.; Zhiguang, S.; Xiao, C.; Qiang, F. Automatic target recognition of SAR images based on global scattering center model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3713–3729.
  26. He, Y.; He, S.Y.; Zhang, Y.H.; Wen, G.J.; Yu, D.F.; Zhu, G.Q. A forward approach to establish parametric scattering center models for known complex radar targets applied to SAR ATR. IEEE Trans. Antennas Propag. 2014, 62, 6192–6205.
  27. Potter, L.C.; Chiang, D.M.; Carriere, R.; Gerry, M.J. A GTD-based parametric model for radar scattering. IEEE Trans. Antennas Propag. 1995, 43, 1058–1067.
  28. Zhiguang, S.; Jianxiong, Z.; Hongzhong, Z.; Qiang, F. Joint Model Selection and Parameter Estimation of GTD Model using RJ-MCMC Algorithm. In Proceedings of the 2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP'07, Honolulu, HI, USA, 16–20 April 2007; Volume 3, p. III-777.
  29. Potter, L.C.; Moses, R.L. Attributed scattering centers for SAR ATR. IEEE Trans. Image Process. 1997, 6, 79–91.
  30. Lardeux, C.; Frison, P.L.; Tison, C.; Souyris, J.C.; Stoll, B.; Fruneau, B.; Rudant, J.P. Support vector machine for multifrequency SAR polarimetric data classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 4143–4152.
  31. Chen, S.; Wang, H.; Xu, F.; Jin, Y.Q. Target classification using the deep convolutional networks for SAR images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4806–4817.
  32. Li, Y.; Du, L.; Wei, D. Multiscale CNN based on component analysis for SAR ATR. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5211212.
  33. Qin, J.; Liu, Z.; Ran, L.; Xie, R.; Tang, J.; Guo, Z. A target SAR image expansion method based on conditional Wasserstein deep convolutional GAN for automatic target recognition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 7153–7170.
  34. Guo, Y.; Pan, Z.; Wang, M.; Wang, J.; Yang, W. Learning capsules for SAR target recognition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4663–4673.
  35. Feng, S.; Ji, K.; Zhang, L.; Ma, X.; Kuang, G. SAR target classification based on integration of ASC parts model and deep learning algorithm. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 10213–10225.
  36. Feng, S.; Ji, K.; Wang, F.; Zhang, L.; Ma, X.; Kuang, G. PAN: Part Attention Network Integrating Electromagnetic Characteristics for Interpretable SAR Vehicle Target Recognition. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5204617.
  37. Liu, H.; Jiu, B.; Li, F.; Wang, Y. Attributed scattering center extraction algorithm based on sparse representation with dictionary refinement. IEEE Trans. Antennas Propag. 2017, 65, 2604–2614.
  38. Schou, J.; Skriver, H.; Nielsen, A.A.; Conradsen, K. CFAR edge detector for polarimetric SAR images. IEEE Trans. Geosci. Remote Sens. 2003, 41, 20–32.
  39. Li, T.; Liu, Z.; Xie, R.; Ran, L. Target detection in high-resolution SAR images based on modified active contour model. In Proceedings of the 2018 International Conference on Radar (RADAR), Brisbane, QLD, Australia, 27–31 August 2018.
  40. Zhang, X. Noise-robust target recognition of SAR images based on attribute scattering center matching. Remote Sens. Lett. 2019, 10, 186–194.
  41. Ding, B.; Wen, G.; Huang, X.; Ma, C.; Yang, X. Target recognition in synthetic aperture radar images via matching of attributed scattering centers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3334–3347.
  42. Li, L.; Yang, M.; Wang, C.; Wang, B. Gaussian mixture model-signature quadratic form distance based point set registration. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 998–1003. [Google Scholar]
  43. Wang, J.j.; Liu, Z.; Li, T.; Ran, L.; Xie, R. Radar HRRP target recognition via statistics-based scattering centre set registration. IET Radar Sonar Navig. 2019, 13, 1264–1271. [Google Scholar] [CrossRef]
  44. Beecks, C.; Ivanescu, A.M.; Kirchhoff, S.; Seidl, T. Modeling image similarity by gaussian mixture models and the signature quadratic form distance. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 1754–1761. [Google Scholar]
  45. Joyce, J.M. Kullback-leibler divergence. In International Encyclopedia of Statistical Science; Springer: Berlin/Heidelberg, Germany, 2011; pp. 720–722. [Google Scholar]
  46. McLachlan, G.J. Mahalanobis distance. Resonance 1999, 4, 20–26. [Google Scholar] [CrossRef]
  47. Chih-Chung, C.; Chih-Jen, L. A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2001, 2, 27. [Google Scholar]
  48. Thiagarajan, J.J.; Ramamurthy, K.N.; Knee, P.; Spanias, A.; Berisha, V. Sparse representations for automatic target classification in SAR images. In Proceedings of the 2010 4th International Symposium on Communications, Control and Signal Processing (ISCCSP), Limassol, Cyprus, 3–5 March 2010. [Google Scholar]
  49. Shafiq, M.; Gu, Z. Deep residual learning for image recognition: A survey. Appl. Sci. 2022, 12, 8972. [Google Scholar] [CrossRef]
  50. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  51. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  52. Yu, C.B.; Qin, H.F.; Cui, Y.Z.; Hu, X.Q. Finger-vein image recognition combining modified hausdorff distance with minutiae feature matching. Interdiscip. Sci. Comput. Life Sci. 2009, 1, 280–289. [Google Scholar] [CrossRef]
  53. Doo, S.H.; Smith, G.; Baker, C. Target classification performance as a function of measurement uncertainty. In Proceedings of the 2015 IEEE 5th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Singapore, 1–4 September 2015; pp. 587–590. [Google Scholar]
  54. Huang, X.; Qiao, H.; Zhang, B. SAR target configuration recognition using tensor global and local discriminant embedding. IEEE Geosci. Remote Sens. Lett. 2015, 13, 222–226. [Google Scholar] [CrossRef]
  55. Ravichandran, B.; Gandhe, A.; Smith, R.; Mehra, R. Robust automatic target recognition using learning classifier systems. Inf. Fusion 2007, 8, 252–265. [Google Scholar] [CrossRef]
  56. Jones, G.; Bhanu, B. Recognition of articulated and occluded objects. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 603–613. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The flow chart of the proposed ATR method.
Figure 2. Target segmentation results for the ACM method and CFAR detection method. The images in the first row are from the ACM method, while the images in the second row are from the CFAR method; the red curve indicates the extracted target edge.
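For readers who want to reproduce a segmentation of this kind, the sketch below uses the morphological Chan–Vese active contour from scikit-image as a stand-in for the improved ACM described in the paper; the normalization, iteration count, and smoothing weight are illustrative assumptions, not the paper's exact settings.

```python
# Minimal ACM-style target-background segmentation sketch (NOT the paper's
# improved ACM): a morphological Chan-Vese level set from scikit-image.
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_target(amplitude_image, n_iter=200):
    """Return a binary target mask for an SAR amplitude image."""
    img = amplitude_image / (amplitude_image.max() + 1e-12)  # normalize to [0, 1]
    # The level set evolves until it separates bright target pixels from clutter;
    # the smoothing weight trades contour regularity against detail.
    mask = morphological_chan_vese(img, n_iter, init_level_set="checkerboard",
                                   smoothing=3)
    return mask.astype(bool)
```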
Figure 3. ASC extraction and target reconstruction: (a) the scattering centers extracted from the original image (the red points are the extracted ASCs); (b) the reconstructed image; (c) the residual image.
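As a rough illustration of how a reconstructed image such as Figure 3b can be produced from an extracted ASC set, the sketch below synthesizes the frequency response of localized scatterers (the point case of the ASC model, with frequency dependence α = 0 and length L = 0) and maps it back to the image domain with a 2-D IFFT. The radar parameters and scatterer list are illustrative, MSTAR-like assumptions.

```python
# Reconstruct an SAR image from localized scattering centers (point ASC case).
import numpy as np

c = 3e8                     # propagation speed (m/s)
fc, B = 9.6e9, 0.591e9      # assumed X-band center frequency and bandwidth
n_f, n_phi = 128, 128
f = fc + np.linspace(-B / 2, B / 2, n_f)          # frequency samples
phi = np.deg2rad(np.linspace(-1.5, 1.5, n_phi))   # aspect-angle samples

def reconstruct(scatterers):
    """scatterers: list of (A, x, y) tuples -> complex image estimate."""
    F, P = np.meshgrid(f, phi, indexing="ij")
    E = np.zeros_like(F, dtype=complex)
    for A, x, y in scatterers:
        # A localized ASC contributes a pure phase ramp set by its (x, y) position.
        E += A * np.exp(-1j * 4 * np.pi * F / c * (x * np.cos(P) + y * np.sin(P)))
    # The 2-D IFFT maps the phase history back to the image domain.
    return np.fft.fftshift(np.fft.ifft2(E))

img = np.abs(reconstruct([(1.0, 2.0, -1.5), (0.7, -3.0, 0.5)]))
```

Subtracting such a reconstruction from the original image yields the residual image of Figure 3c.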
Figure 4. The fitting results of the parameters [A, x, y] based on the Gaussian distribution: (a) the fitting of the amplitude value A; (b) the fitting of the x coordinate; (c) the fitting of the y coordinate.
Figure 5. Estimated values of σ_A, σ_x, and σ_y at different SNRs and resolutions Δr: (a,c,e) show σ_A, σ_x, and σ_y versus the SNR level, respectively, while (b,d,f) show σ_A, σ_x, and σ_y versus the range resolution Δr. The dots are the variances estimated from 10,000 Monte Carlo experiments, and the lines are their fitted curves.
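A simplified illustration of the Monte Carlo procedure behind Figure 5, for the amplitude parameter only: complex white Gaussian noise at a prescribed SNR perturbs a scatterer's frequency samples, the amplitude is re-estimated in each trial, and the sample variance over all trials gives one dot on the curve. The coherent-mean estimator and sample counts below are placeholders, not the paper's ASC estimator.

```python
# Hedged Monte Carlo sketch: sample variance of an amplitude estimate vs. SNR.
import numpy as np

rng = np.random.default_rng(0)

def amp_variance(A=1.0, snr_db=10.0, n_samples=128, n_trials=10_000):
    snr = 10.0 ** (snr_db / 10.0)
    sigma = A / np.sqrt(snr)                    # per-sample noise std for this SNR
    signal = A * np.ones(n_samples, dtype=complex)
    est = np.empty(n_trials)
    for k in range(n_trials):
        noise = sigma / np.sqrt(2) * (rng.standard_normal(n_samples)
                                      + 1j * rng.standard_normal(n_samples))
        est[k] = np.abs(np.mean(signal + noise))  # simple coherent-mean estimate
    return est.var()

for snr_db in (0, 5, 10):
    print(snr_db, "dB ->", amp_variance(snr_db=snr_db))
```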
Figure 6. Target optical images of vehicles from the ten categories in the MSTAR dataset.
Figure 7. Basic process used for frequency domain transformation of the raw SAR images; the solid line denotes the forward conversion, while the dotted line denotes the reverse conversion.
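A minimal sketch of the forward and reverse conversions in Figure 7, assuming centered 2-D FFTs; the taper handling and support trimming applied to real MSTAR phase history are omitted here.

```python
# Round-trip conversion between a complex SAR image and its frequency-domain
# (phase-history) support using centered 2-D FFTs.
import numpy as np

def image_to_freq(complex_image):
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(complex_image)))

def freq_to_image(freq_data):
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(freq_data)))

img = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)
assert np.allclose(freq_to_image(image_to_freq(img)), img)  # lossless round trip
```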
Figure 8. SAR images at different SNRs: (a) original image, (b) 10 dB, (c) 5 dB, (d) 0 dB.
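Noisy test images such as those in Figure 8 can be simulated by adding complex white Gaussian noise in the frequency domain, scaled so that the signal-to-noise power ratio matches the target SNR. A sketch, reusing the conversion helpers above, follows.

```python
# Add complex white Gaussian noise to frequency-domain data at a target SNR.
import numpy as np

def add_noise(freq_data, snr_db, rng=np.random.default_rng()):
    p_signal = np.mean(np.abs(freq_data) ** 2)
    p_noise = p_signal / 10.0 ** (snr_db / 10.0)     # noise power for this SNR
    noise = np.sqrt(p_noise / 2) * (rng.standard_normal(freq_data.shape)
                                    + 1j * rng.standard_normal(freq_data.shape))
    return freq_data + noise

# noisy_img = freq_to_image(add_noise(image_to_freq(img), snr_db=5))
```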
Figure 9. SAR images at different resolutions: (a) 0.3 m × 0.3 m, (b) 0.6 m × 0.6 m, (c) 0.9 m × 0.9 m, (d) 1.2 m × 1.2 m.
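The reduced-resolution images in Figure 9 can be emulated by keeping only the central portion of the frequency support: resolution is inversely proportional to bandwidth, so keeping half of the band in each dimension degrades 0.3 m to 0.6 m. The sketch below zeroes the outer band; the cropping ratio is an illustrative parameter.

```python
# Degrade resolution by retaining only the central band of the frequency support.
import numpy as np

def degrade_resolution(freq_data, keep_ratio):
    """Zero all but the central keep_ratio band in both dimensions."""
    out = np.zeros_like(freq_data)
    rows, cols = freq_data.shape
    r0 = int(rows * (1 - keep_ratio) / 2)
    c0 = int(cols * (1 - keep_ratio) / 2)
    out[r0:rows - r0, c0:cols - c0] = freq_data[r0:rows - r0, c0:cols - c0]
    return out

# 0.3 m -> 0.6 m in both range and cross-range:
# coarse_img = freq_to_image(degrade_resolution(image_to_freq(img), 0.5))
```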
Figure 10. Optical images of different models of the T72 tank.
Figure 11. SAR images at different depression angles: (a) 17°, (b) 30°, (c) 45°.
Figure 12. The construction process of SAR target occlusion images.
Figure 13. Schematic diagram of eight different occlusion directions.
Figure 14. Complete SAR image and 30% occluded images: (e) shows the complete image, while the other panels show the same image occluded from directions 1–8; (a–d) correspond to directions 1–4 and (f–i) to directions 5–8.
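Occluded test images such as those in Figures 12–14 can be generated by sweeping across the target region from one of the eight directions and blanking pixels until the desired proportion is removed. In the sketch below, the mask convention (True = target) and the (dy, dx) direction encoding are assumptions for illustration, not the paper's exact construction.

```python
# Blank a given proportion of target pixels, sweeping from one direction.
import numpy as np

def occlude(image, target_mask, proportion, direction):
    """direction: (dy, dx), e.g., (0, 1) sweeps the target from left to right."""
    ys, xs = np.nonzero(target_mask)
    # Sort target pixels by their projection onto the sweep direction.
    order = np.argsort(direction[0] * ys + direction[1] * xs)
    n_remove = int(proportion * len(order))
    out = image.copy()
    out[ys[order[:n_remove]], xs[order[:n_remove]]] = 0.0  # blank occluded pixels
    return out

# 30% occlusion sweeping left-to-right:
# occluded = occlude(img, mask, proportion=0.3, direction=(0, 1))
```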
Figure 15. Performance comparison of various methods at different occlusion proportions.
Figure 16. Average time required to recognize a sample.
Table 1. Recognition results of nine methods under different SNR levels.

Recognition Method    Average Recognition Rate (%)
                      10 dB     5 dB      0 dB
KNN                   90.53     67.43     47.92
SVM                   84.56     60.27     50.56
SRC                   91.64     82.06     51.63
Resnet18              72.74     52.22     39.89
VGG16                 67.86     59.40     53.37
Densenet121           65.42     52.94     45.48
HD-ASC                95.43     91.24     70.49
G-ASC1                97.34     93.58     78.60
G-ASC                 96.68     92.15     75.42
Table 2. Recognition results of nine methods under different resolutions.

Method                Average Recognition Rate (%)
                      0.6 m     0.9 m     1.2 m
KNN                   80.93     70.62     63.43
SVM                   78.86     62.70     55.97
SRC                   83.29     72.38     60.65
Resnet18              86.22     78.62     68.29
VGG16                 85.67     68.58     53.80
Densenet121           63.56     55.79     53.08
HD-ASC                85.64     75.87     68.70
G-ASC1                87.39     80.63     73.24
G-ASC                 86.64     78.65     70.43
Table 3. The training set and test set under model changes.

Set            Depression Angle    BMP2             BTR70           T72
Training set   17°                 232 (SN_9566)    233 (SN_C71)    232 (SN_132)
Test set       17°                 233 (SN_9563)    /               231 (SN_812)
                                   233 (SN_C21)                     228 (SN_S7)
Table 4. The recognition confusion matrix of G-ASC under model changes.

Category    Model      BMP2    BTR70    T72    Recognition Rate (%)
BMP2        SN_9563    228     2        3      97.85
            SN_C21     229     0        4      98.28
T72         SN_812     2       4        225    97.40
            SN_S7      1       5        222    97.37
Average recognition rate (%): 97.73
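Each per-variant rate in Table 4 is the correctly classified count divided by the row total, e.g., 228/233 ≈ 97.85% for SN_9563, and the average rate pools all rows: (228 + 229 + 225 + 222)/(233 + 233 + 231 + 228) ≈ 97.73%. A quick check of this arithmetic:

```python
# Verify the per-variant and average recognition rates in Table 4.
rows = {  # variant: (counts over (BMP2, BTR70, T72), index of the correct class)
    "SN_9563": ([228, 2, 3], 0),
    "SN_C21":  ([229, 0, 4], 0),
    "SN_812":  ([2, 4, 225], 2),
    "SN_S7":   ([1, 5, 222], 2),
}
for name, (counts, k) in rows.items():
    print(name, round(100 * counts[k] / sum(counts), 2))   # 97.85, 98.28, 97.4, 97.37
correct = sum(c[k] for c, k in rows.values())
total = sum(sum(c) for c, _ in rows.values())
print("average", round(100 * correct / total, 2))          # 97.73
```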
Table 5. Recognition performance of different methods under model changes.

Method                          KNN      SVM      SRC      Resnet18    VGG16    Densenet121    HD-ASC    G-ASC1    G-ASC
Average recognition rate (%)    95.33    95.64    95.43    95.46       94.05    97.40          95.66     98.83     97.73
Table 6. The training set and test set under depression angle changes.

Set            Depression Angle    Number
                                   2S1    BRDM2    ZSU23_4
Training set   17°                 299    298      299
Test set       30°                 288    287      288
               45°                 303    303      303
Table 7. Recognition results of G-ASC under depression angle changes.

Depression Angle    Class      Recognition Result           Recognition Rate (%)    Average Recognition Rate (%)
                               2S1    BRDM2    ZSU23_4
30°                 2S1        279    6        3             96.88                   96.41
                    BRDM2      6      276      6             96.17
                    ZSU23_4    5      6        277           96.18
45°                 2S1        210    70       33            69.31                   77.56
                    BRDM2      13     242      48            79.87
                    ZSU23_4    27     28       253           83.50
Table 8. Recognition performance of different methods under depression angle changes.

Recognition Method    Average Recognition Rate (%)
                      30°       45°
KNN                   94.79     65.32
SVM                   95.67     68.53
SRC                   94.90     55.56
Resnet18              95.36     37.84
VGG16                 95.87     58.75
Densenet121           86.67     39.71
HD-ASC                96.50     73.71
G-ASC1                97.62     79.38
G-ASC                 96.41     77.56