EOIR Multi-Sensor Fusion Tracker Algorithm
Navy SBIR 2011.1 - Topic N111-026
NAVAIR - Mrs. Janet McGovern - [email protected]
Opens: December 13, 2010 - Closes: January 12, 2011

N111-026 TITLE: EOIR Multi-Sensor Fusion Tracker Algorithm

TECHNOLOGY AREAS: Information Systems, Sensors

ACQUISITION PROGRAM: PMA-290, Maritime Surveillance Aircraft

RESTRICTION ON PERFORMANCE BY FOREIGN CITIZENS (i.e., those holding non-U.S. passports): This topic is "ITAR Restricted." The information and materials provided pursuant to or resulting from this topic are restricted under the International Traffic in Arms Regulations (ITAR), 22 CFR Parts 120-130, which control the export of defense-related material and services, including the export of sensitive technical data. Foreign citizens may perform work under an award resulting from this topic only if they hold a "Permanent Resident Card" or are designated as "Protected Individuals" as defined by 8 U.S.C. 1324b(a)(3). If a proposal for this topic contains participation by a foreign citizen who is not in one of the above two categories, the proposal will be rejected.

OBJECTIVE: Develop robust tracking capabilities that fuse data from co-boresighted multi-sensor Electro-Optical/Infra-Red (EO/IR) turrets for near-real-time surveillance and targeting.

DESCRIPTION: Advances in airborne EO/IR turrets provide an ever-growing array of co-boresighted video-rate sensors that cover a wide variety of wavebands and fields of view (FOV). While each additional information source nominally increases situational awareness, the operator is still limited in the ability to assess the information. For real-time tactical missions where rapid response is critical, the operator must not only scan all video feeds for targets and hold the sensor on target, but also provide feedback to command and control and to ground forces.
In the case of unmanned air vehicles, this process is complicated by a limited-bandwidth downlink that causes compression artifacts, imagery lag, and limited access to simultaneous video feeds. To decrease operator workload and compensate for downlink limitations, turrets employ onboard automated tracking algorithms designed to lock onto an operator-specified target and provide feedback to the turret slewing mechanism to keep the object centered in the FOV. It is critical that these algorithms be robust when the target becomes temporarily obscured and, when necessary, lose track gracefully, since a loss of lock or a sudden large slew can cause a mission to be aborted.

Current tracking algorithms typically rely on linear (Kalman) methods. These approaches threshold input pixel regions into detections and non-detections and then perform data association. This hard-decision paradigm inherently degrades the Signal-to-Noise Ratio (SNR) and loses information: regions that just barely fail the threshold are thrown out, while regions that just barely pass it are given full credit. As a result, in the presence of clutter, confuser vehicles frequently pass the threshold, causing the tracker to lock onto an incorrect vehicle, and the desired track is lost for good. Furthermore, these algorithms mainly rely on just one of the modalities at a time, switching to other modalities when tracking becomes poor. This fails to exploit all of the information in the data stream at each time step.

Innovative real-time data fusion and tracking algorithms capable of improving on current operational implementations in EO/IR turrets on airborne platforms are sought. The algorithms should give significantly improved robustness in the presence of temporary obstructions (e.g., when a vehicle drives under an overpass) and improved performance in the presence of clutter.
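The hard-decision shortcoming described above can be sketched in a one-dimensional toy comparison. This is purely illustrative and not part of the topic requirements: the function names, the Gaussian target-response model, and all parameters are hypothetical, and a fielded tracker would operate on 2-D imagery across multiple bands.

```python
import numpy as np

def hard_decision_update(weights, positions, detections, gate=1.0):
    """Traditional paradigm: pixels were already thresholded into a
    detection list; a track hypothesis is credited only if a detection
    falls inside its gate, and every detection counts equally."""
    credited = [any(abs(x - d) < gate for d in detections) for x in positions]
    new_w = np.where(credited, weights, 0.0)
    total = new_w.sum()
    return new_w / total if total > 0 else weights  # no detections: keep prior

def soft_likelihood_update(weights, positions, pixel_grid, intensities, sigma=1.0):
    """Soft alternative: every raw pixel intensity contributes in
    proportion to its likelihood under a Gaussian target response,
    so near-threshold energy is never discarded."""
    lik = np.array([
        np.sum(intensities * np.exp(-0.5 * ((pixel_grid - x) / sigma) ** 2))
        for x in positions])
    new_w = weights * lik
    return new_w / new_w.sum()

# A dim target at pixel 5 whose response (0.4) sits just under a 0.5 threshold.
pixel_grid = np.arange(10.0)
intensities = np.full(10, 0.05)
intensities[5] = 0.4
detections = pixel_grid[intensities > 0.5]   # empty: the target was discarded
positions = np.array([2.0, 5.0, 8.0])        # candidate target locations
prior = np.full(3, 1.0 / 3.0)

hard = hard_decision_update(prior, positions, detections)
soft = soft_likelihood_update(prior, positions, pixel_grid, intensities)
```

Here the hard-decision update learns nothing (the sub-threshold target produced no detection, so the prior is returned unchanged), while the soft update concentrates weight on the correct hypothesis at pixel 5.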
It is expected that these gains will come from exploiting non-linear (non-Kalman) tracking methods, which admit both higher-fidelity kinematic and sensor modeling. Enhanced kinematic modeling may exploit effects such as known go/no-go areas (e.g., roadways and water) and vehicle-class-dependent preferred velocities and stopping times. Additionally, enhanced sensor statistical modeling will avoid the hard decision making (thresholding and association) of traditional methods, yielding improved information exploitation and ultimately superior performance in clutter. The algorithm must be capable of handling inputs from multiple large-format (640x480) video feeds spanning a variety of spectral bands and fields of view. The algorithm should assume co-boresighting to within tens of pixels, but must be capable of registering all feeds on the fly given knowledge of sensor-to-sensor pixel sizes and fields of view.

PHASE I: Define the architecture and concept of operation and identify the algorithm requirements to provide the capabilities described above. Through experiments, demonstrate the feasibility of the proposed approach.

PHASE II: Develop the new algorithms and integrate them into a hardware test bed and/or ground station. Demonstrate the new technology. For the Phase II demonstration, the candidate EO/IR system is the AN/AAS-52 Multi-Spectral Targeting System.

PHASE III: Transition the technology to appropriate platforms and systems.

PRIVATE SECTOR COMMERCIAL POTENTIAL/DUAL-USE APPLICATIONS: A successful outcome of this topic would benefit security and policing actions as well as urban monitoring applications such as traffic monitoring and flow analysis, emergency response, and urban planning. Wildlife management could also benefit from this technology.

REFERENCES:
2. Bar-Shalom, Y. & Li, X. (1995). Multitarget-Multisensor Tracking: Principles and Techniques. YBS Publishing.

KEYWORDS: Sensor; Electro-optical; Tracker; Airborne; Fusion; Multi-Band
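The on-the-fly registration requirement above, mapping pixels between co-boresighted feeds given their pixel sizes and fields of view, can be illustrated with a minimal geometric sketch. It assumes square pixels, identical aspect ratios, and perfectly aligned optical axes; the residual co-boresight error of tens of pixels that the topic allows would then have to be removed by an image-domain search, which is not shown. The function and dictionary fields are hypothetical names, not part of the topic.

```python
def register_pixel(u, v, src, dst):
    """Map a pixel (u, v) in the source feed to the corresponding pixel
    in a co-boresighted destination feed, using only each feed's angular
    pixel size derived from its field of view and image format."""
    # Instantaneous field of view: radians subtended by one pixel.
    src_ifov = src["fov_rad"] / src["width"]
    dst_ifov = dst["fov_rad"] / dst["width"]
    scale = src_ifov / dst_ifov
    # Scale about the shared boresight (image centre) and shift back.
    u_dst = (u - src["width"] / 2) * scale + dst["width"] / 2
    v_dst = (v - src["height"] / 2) * scale + dst["height"] / 2
    return u_dst, v_dst

# A wide-FOV feed and a narrow-FOV feed, both in 640x480 format.
wide = {"fov_rad": 0.50, "width": 640, "height": 480}
narrow = {"fov_rad": 0.10, "width": 640, "height": 480}

# 10 wide-feed pixels off boresight subtend the same angle as 50 narrow-feed pixels.
u, v = register_pixel(330, 240, wide, narrow)
```

The boresight maps to itself, and off-axis offsets scale by the ratio of angular pixel sizes; this is the piece a fusion tracker would compute per frame before combining the feeds.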