The ability to rapidly detect and identify both fixed and mobile potential targets from multiple sensor feeds is a critical function in network centric warfare. We describe the use of image differencing and 3D terrain database editing to fuse oblique aerial photos, IR sensor imagery, and other non-traditional data sources into battlefield metrics that support network centric operations. Such metrics include target detection, recognition, and location, and improved knowledge of the target environment. Key to our approach is the rapid generation of target and background signatures from high-resolution 1-meter object descriptor terrain databases. The technique uses the difference between measured and calculated sensor images 1) to update and correct knowledge of the terrain background, 2) to register multisensor imagery, 3) to identify potential/candidate targets based on residual image differencing, and 4) to measure and report target locations based on scene matching. The technique is especially suited to imagery from reconnaissance and remotely piloted vehicle sensors. It also holds promise for automation and real-time data reduction of battlefield sensor feeds and for improving now-time situational awareness. We present the algorithms and approach used in the image differencing technique, describe the software developed to implement it, and, lastly, present the results of experiments and benchmarks conducted to identify and measure target locations.
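The residual image-differencing step summarized above can be illustrated with a minimal sketch: subtract the image calculated from the terrain database from the measured sensor image, threshold the residual, and group the surviving pixels into connected clusters that serve as candidate targets. The function name `detect_candidates`, the threshold value, and the use of NumPy with a simple flood-fill labeling are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def detect_candidates(measured, predicted, threshold=0.2):
    """Flag candidate targets as connected clusters of residual pixels.

    measured  : 2D array, the sensor image as observed
    predicted : 2D array, the image calculated from the terrain database
    threshold : residual magnitude above which a pixel is flagged
                (illustrative value, not from the paper)
    Returns (labels, n) where labels marks each cluster 1..n.
    """
    # Residual: where the measured scene departs from the modeled background
    residual = np.abs(measured.astype(float) - predicted.astype(float))
    mask = residual > threshold

    # Label 4-connected components with an explicit-stack flood fill
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    rows, cols = mask.shape
    for i in range(rows):
        for j in range(cols):
            if mask[i, j] and labels[i, j] == 0:
                n += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and mask[y, x] and labels[y, x] == 0):
                        labels[y, x] = n
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return labels, n
```

In practice the registration step (item 2 in the abstract) must align `measured` and `predicted` before differencing, and each cluster's centroid would then be mapped through the sensor model to a ground coordinate for target location reporting.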
Target location and sensor fusion through calculated and measured image differencing
2003
9 pages, 9 references
Conference paper
English