Abstract
Inspired by their tremendous success in optical image detection and classification, convolutional neural networks (CNNs) have recently been applied to synthetic aperture radar automatic target recognition (SAR-ATR). Although CNN-based methods can achieve excellent recognition performance, it is difficult to collect a large number of real SAR images for training. In this paper, we introduce simulated SAR data to alleviate the problem of insufficient training data. To address the domain-shift and task-transfer problems caused by differences between simulated and real data, we propose a model that integrates meta-learning and adversarial domain adaptation. We pre-train the model with sufficient simulated data and a small amount of real data; after fine-tuning, the pre-trained model can quickly adapt to new tasks on real data. Extensive experimental results on a real SAR dataset demonstrate that our model effectively solves the cross-domain and cross-task transfer problem. Compared with conventional SAR-ATR methods, the proposed model achieves better recognition performance with a small amount of training data.
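As a rough, illustrative sketch of this pre-train/fine-tune workflow (not the authors' algorithm, whose meta-learning episodes and loss formulation are given in Section II), the following Python snippet assumes a toy CNN feature extractor, classifier head, and domain discriminator, and shows domain-adversarial pre-training on mixed simulated/real batches followed by few-sample fine-tuning on real data; the episodic meta-learning component is omitted here.

```python
# Illustrative sketch only: a toy domain-adversarial pre-train / fine-tune loop.
# Module sizes, loss weights, and the omission of meta-learning episodes are
# assumptions for illustration, not the formulation used in this paper.
import torch
import torch.nn as nn

feat = nn.Sequential(nn.Conv2d(1, 16, 5), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten())  # feature extractor
clf = nn.Linear(16, 10)   # classifier head (10 target classes assumed)
disc = nn.Linear(16, 2)   # domain discriminator: simulated vs. real

ce = nn.CrossEntropyLoss()
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
opt_f = torch.optim.Adam(list(feat.parameters()) + list(clf.parameters()), lr=1e-3)

def pretrain_step(x_sim, y_sim, x_real, adv_weight=0.1):
    """One pre-training step on a labeled simulated batch and an unlabeled real batch."""
    dom_y = torch.cat([torch.zeros(len(x_sim)), torch.ones(len(x_real))]).long()

    # 1) Train the discriminator to separate simulated from real features.
    with torch.no_grad():
        f_all = torch.cat([feat(x_sim), feat(x_real)])
    d_loss = ce(disc(f_all), dom_y)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train extractor + classifier: classify simulated data and fool the discriminator.
    f_sim, f_real = feat(x_sim), feat(x_real)
    cls_loss = ce(clf(f_sim), y_sim)
    adv_loss = -ce(disc(torch.cat([f_sim, f_real])), dom_y)  # maximize discriminator error
    loss = cls_loss + adv_weight * adv_loss
    opt_f.zero_grad(); loss.backward(); opt_f.step()
    return cls_loss.item()

def finetune(x_real_few, y_real_few, steps=20):
    """Adapt the pre-trained model to a new task using a handful of labeled real samples."""
    ft_opt = torch.optim.Adam(list(feat.parameters()) + list(clf.parameters()), lr=1e-4)
    for _ in range(steps):
        loss = ce(clf(feat(x_real_few)), y_real_few)
        ft_opt.zero_grad(); loss.backward(); ft_opt.step()

# Example with random placeholder tensors standing in for SAR image chips.
x_sim, y_sim = torch.randn(8, 1, 64, 64), torch.randint(0, 10, (8,))
x_real = torch.randn(8, 1, 64, 64)
pretrain_step(x_sim, y_sim, x_real)
finetune(torch.randn(4, 1, 64, 64), torch.randint(0, 10, (4,)))
```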
Introduction
Synthetic aperture radar (SAR) is an active sensor mounted on moving platforms such as aircraft, satellites, and spacecraft. SAR provides two-dimensional high-resolution images by receiving the electromagnetic echoes of targets. Benefiting from its unique imaging mechanism, SAR can operate day and night, independent of weather conditions, and has a certain surface-penetration capability. The SAR system therefore has unique advantages in many applications, ranging from disaster monitoring and resource exploration to military reconnaissance, and it plays an irreplaceable role in both military and civilian fields.

Automatic target recognition (ATR) is an essential topic in SAR application research. According to their implementation, classic ATR methods can be divided into feature-based and model-based approaches. Feature-based methods extract discriminative features from images, such as binary regions [1], target contours [2], monogenic signals [3], [4], projection features [5], [6], and tensor decomposition features [7]. Classifiers such as K-nearest neighbor (KNN) [8], the support vector machine (SVM) [9], the Bayesian classifier [10], and the sparse representation classifier [11] have been developed to classify the extracted features. Both the features and the classifier require careful selection by experienced researchers. Model-based methods [12]–[14] instead focus on the electromagnetic scattering features of a target, which are related to the target's physical characteristics.
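As a concrete, purely illustrative example of such a feature-plus-classifier pipeline, the sketch below pairs a PCA projection (standing in for the hand-crafted features cited above) with an SVM or KNN classifier; it is not the implementation of any of the referenced methods, and the data shapes and hyperparameters are assumptions.

```python
# Minimal sketch of a classic feature-based SAR-ATR pipeline (illustrative only).
# Assumptions: X_train / X_test hold flattened SAR image chips (n_samples, n_pixels),
# y_train / y_test hold integer class labels. PCA stands in for projection-type features;
# SVM and KNN are two of the classifiers commonly paired with such features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_feature_based_atr(classifier: str = "svm", n_components: int = 64):
    """Return a feature-extraction + classification pipeline."""
    if classifier == "svm":
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    else:
        clf = KNeighborsClassifier(n_neighbors=5)
    return make_pipeline(
        StandardScaler(),                 # normalize pixel intensities
        PCA(n_components=n_components),   # projection features (stand-in for hand-crafted features)
        clf,                              # discriminative classifier
    )

# Example usage with randomly generated placeholder data in place of real SAR chips.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 128 * 128)), rng.integers(0, 10, 200)
X_test, y_test = rng.normal(size=(50, 128 * 128)), rng.integers(0, 10, 50)

model = build_feature_based_atr("svm")
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```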