Artificial Neural Networks are a class of biologically inspired machine learning methods modeled as collections of interconnected artificial neurons. Each neuron receives one or more inputs, applies a weight to each input, sums the weighted inputs, and passes the result through a transfer (activation) function. Neural networks organize these neurons into layers: the first layer receives the initial input, the final layer produces the output (the target function), and in between are “hidden” layers. In a fully connected network, each neuron in one layer is connected to every neuron in the next layer by a set of weights, and these weights are adjusted during training to optimize a loss function.
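The neuron computation described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular library's implementation; the input values, weights, and the choice of a sigmoid transfer function are assumptions for the example.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: weight each input, sum, apply a transfer function."""
    z = np.dot(weights, inputs) + bias      # weighted sum of the inputs
    return 1.0 / (1.0 + np.exp(-z))         # sigmoid transfer function

# Hypothetical example: a neuron with three inputs and hand-picked weights.
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.4, 0.3, -0.2])
b = 0.1
y = neuron(x, w, b)   # a value between 0 and 1
```

A layer is just many such neurons applied to the same inputs (a matrix of weights instead of a vector), and training adjusts `w` and `b` to reduce the loss.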
Convolutional Neural Networks (CNNs) are similar to Artificial Neural Networks, except they make the explicit assumption that the inputs have spatially local structure, i.e., nearby input values are correlated. When applied to images, CNNs tile individual neurons so that they respond to overlapping regions of the visual field. The architecture is thus constrained so that each neuron in one layer is connected only to a small region of the previous layer, which reduces the number of free parameters in the model and allows the algorithm to scale to large images. CNNs have gained considerable popularity in image recognition, where they continue to surpass previous computer vision benchmarks.
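The local-connectivity idea can be made concrete with a small sketch of a valid 2D convolution in plain numpy. This is a simplified illustration (a single channel, no stride or padding); the 5×5 input and the vertical-edge kernel are assumptions for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation): each output neuron sees only a
    kernel-sized patch of the input, and all output neurons share one set of weights."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Only a small local region of the input feeds this output neuron.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
edge = np.array([[1.0, 0.0, -1.0]] * 3)   # simple 3x3 vertical-edge kernel
fmap = conv2d(img, edge)                  # 3x3 feature map
```

Note the parameter savings: this layer uses 9 shared weights, whereas a fully connected mapping from the 5×5 input to the 3×3 output would need 25 × 9 = 225 weights. That sharing is what lets CNNs scale to large images.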
In 2014, EMSI was an early adopter of Convolutional Neural Networks (CNNs) for Automatic Target Recognition (ATR) of Synthetic Aperture Radar (SAR) imagery. The Moving and Stationary Target Acquisition and Recognition (MSTAR) program was initiated by the U.S. Defense Advanced Research Projects Agency (DARPA) and the U.S. Air Force Research Laboratory (AFRL) in 1995. Our early 2014 classification results on the publicly available MSTAR data are shown in the confusion matrix above.
Although these results look exceptionally good, there are some subtle problems. The MSTAR collection was performed many years before deep convolutional networks became popular. The training and testing imagery in this public data set differ only by a slight grazing-angle difference, and the vehicles remained in the same positions throughout the entire data collection. From this public MSTAR data set alone, one cannot determine whether a CNN can generalize from these limited training conditions to realistic operational conditions.
EMSI has developed technology to train deep learning CNNs on synthetically generated data, and our CNNs have been demonstrated to generalize well from that synthetic training data to operational test imagery.