Distributed Synthetic Environment Correlation Assessment Architecture and Metrics
Navy SBIR 2014.1 - Topic N141-006
NAVAIR - Ms. Donna Moore - [email protected]
Opens: Dec 20, 2013 - Closes: Jan 22, 2014

N141-006 TITLE: Distributed Synthetic Environment Correlation Assessment Architecture and Metrics

TECHNOLOGY AREAS: Air Platform, Human Systems

ACQUISITION PROGRAM: PMA 205

OBJECTIVE: Develop an innovative and extensible distributed synthetic environment correlation assessment architecture that can verify correlation between flight simulator visual and sensor databases.

DESCRIPTION: Naval/Marine Corps flight simulators are often run in isolation; however, there are growing requirements for distributed networked simulation, such as the Aviation Distributed Virtual Training Environment (ADVTE). Communication, processing models, and synthetic environments are some of the simulation components affected by distributed system interoperability [1] [2] [3]. Interoperability of distributed systems is achieved only when different systems perceive the same events and models similarly, enabling the linked warfighters to work together toward a common goal. Working together depends on consistency between the synthetic environments, yet today aircraft simulators rely on manual inspection and limited testing to determine whether the simulated environments match between networked simulators.

Correlation assessments between terrain databases and interoperability of simulation models have been investigated over the years [1] [2] [3]. However, no clear solution currently exists for the automated assessment of correlation errors between large synthetic environments, in particular for the runtime formats used by Naval/Marine Corps flight simulators. Current approaches bypass or minimize correlation assessments between synthetic environments and include techniques such as generating the runtime databases from the same source data in real time [4] and dedicated facilities that generate static representations of the different environments based on a fixed set of simulators [3]. The former real-time regeneration technique requires co-located simulators and large network bandwidth, whereas the latter requires dedicated terrain database generation facilities that result in long production and integration times as well as large budgets. A gap exists in the automated assessment of correlation errors between large synthetic environments as it relates to visual and sensor simulation for Naval/Marine Corps flight simulators. Consideration of architecture, data collection, runtime read and write operations, and parameter prioritization will enable a robust and flexible solution that can be expanded to include other tests and formats.

Correlation errors occur between different simulation systems when environmental representation features are rendered differently between simulation applications [4]. Although some metrics and tools have been proposed in the past [2] [4], an automated, or even semi-automated, process for determining the degree of correlation between heterogeneous human-in-the-loop simulators in a distributed environment still does not exist. As a result, programs must spend many months determining the degree of similarity between simulators and its impact on a successful networked exercise. This analysis may have to be repeated if new simulation platforms are added, or if the training scenario or location changes.

Ideally, the distributed synthetic environment correlation assessment architecture for aviation platforms should be flexible and expandable so that it can perform comparisons between different formats, versions of the same databases, and the original geospatial source data. The correlation assessment should put particular emphasis on aircraft mission areas of interest such as airports, landing zones, confined area landings, low-level terrain flight areas, and ranges. The correlation assessment should be automated and should consider environment components of designated areas of interest that affect mission performance, such as avenues of approach, key landmarks, feature densities, and texture densities. Furthermore, the architecture should allow for the addition of new runtime and source formats, as well as new tests and analysis plug-in modules by third-party developers. The results of the correlation analysis should be displayed graphically in a way that allows easy understanding of the correlation differences and their impacts on distributed training.
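
The plug-in extensibility described above could take many forms; one minimal sketch, assuming hypothetical interface and registry names (none of these are part of any existing system), is a pair of abstract interfaces for format readers and tests with simple registries that third-party modules populate at load time:

```python
from abc import ABC, abstractmethod

class RuntimeFormatReader(ABC):
    """Illustrative plug-in interface for reading a (possibly proprietary)
    runtime terrain format. Method names are assumptions, not an existing API."""

    @abstractmethod
    def elevation_at(self, lat: float, lon: float) -> float:
        """Return terrain elevation (meters) at a geographic point."""

    @abstractmethod
    def features_in(self, bounds: tuple) -> list:
        """Return cultural features inside (min_lat, min_lon, max_lat, max_lon)."""

class CorrelationTest(ABC):
    """Illustrative plug-in interface for a third-party correlation test."""

    @abstractmethod
    def run(self, a: RuntimeFormatReader, b: RuntimeFormatReader,
            region: tuple) -> float:
        """Return a correlation score in [0, 1] for the given region."""

# Registries through which third-party modules add new formats and tests.
FORMAT_READERS: dict = {}
TESTS: dict = {}

def register_format(name: str, cls: type) -> None:
    FORMAT_READERS[name] = cls

def register_test(name: str, cls: type) -> None:
    TESTS[name] = cls
```

A third-party module would then call, for example, `register_format("npsi", MyNpsiReader)` at import time, and the assessment framework would discover the new format without changes to its own code.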

The correlation assessment component of the architecture should include parameters that consider visual and sensor simulation. Different weights should be used to account for the importance of the virtual simulation domain, the visual cues available on different platforms, and the training mission. The prototype should determine the degree of similarity between simulators for the simulation of sensors, including visual, radar, infrared, night vision devices, and others, by geospatial region. The prototype should accommodate the different models used in simulations, including rendering models, animations, sensor models, mobility models, damage assessment models, and explosions.
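
As a concrete illustration of per-region, per-sensor weighting, the following sketch combines hypothetical similarity scores (all region names, scores, and weights are invented for illustration) into a single weighted figure per region; a night mission would simply shift weight toward the night-vision and infrared channels:

```python
# Hypothetical per-region similarity scores by sensor channel, each in [0, 1].
region_scores = {
    "airport_example":  {"visual": 0.92, "radar": 0.81, "infrared": 0.77, "nvg": 0.85},
    "landing_zone_7":   {"visual": 0.64, "radar": 0.90, "infrared": 0.71, "nvg": 0.58},
}

# Mission-dependent sensor weights (sum to 1). These values are assumptions;
# an NVG-heavy night mission would weight "nvg" and "infrared" more strongly.
weights = {"visual": 0.4, "radar": 0.2, "infrared": 0.2, "nvg": 0.2}

def region_similarity(scores: dict, weights: dict) -> float:
    """Weighted similarity for one region: sum of weight * channel score."""
    return sum(weights[s] * scores[s] for s in weights)

for region, scores in region_scores.items():
    print(region, round(region_similarity(scores, weights), 3))
```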

The prototype should consider metrics such as terrain elevation, culture existence, feature size and attributes, surface materials and composition, and line of sight. Evaluation between two or more representations should be based on criteria that consider the most important parameters. Criteria for the correlation score should consider constraints such as domain, platform, and training mission and weigh them accordingly. Furthermore, the proposed prototype should consider how standards, such as the Common Image Generator Interface (CIGI) and the High Level Architecture (HLA), could be leveraged to collect data. The prototype concept should describe how runtime read/write and test Application Programming Interfaces (APIs) will be developed to allow access to proprietary runtime format data and to enable flexible testing modules that can be developed by third parties.
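
To make the metric-to-score mapping concrete, the sketch below (all function names, tolerances, sample values, and weights are illustrative assumptions, not prescribed by the topic) converts an RMS elevation difference and a line-of-sight agreement rate into [0, 1] scores and combines them with a feature-match placeholder under mission-dependent weights:

```python
import math

def elevation_score(elevs_a, elevs_b, tolerance_m=5.0):
    """Map the RMS elevation difference (sampled at identical points) into [0, 1]:
    0 m RMS difference -> 1.0; `tolerance_m` RMS difference -> about 1/e."""
    rms = math.sqrt(sum((a - b) ** 2 for a, b in zip(elevs_a, elevs_b)) / len(elevs_a))
    return math.exp(-rms / tolerance_m)

def los_agreement(los_a, los_b):
    """Fraction of sampled observer/target pairs where both databases agree
    on line-of-sight visibility (True = visible)."""
    return sum(a == b for a, b in zip(los_a, los_b)) / len(los_a)

def correlation_score(metric_scores, weights):
    """Weighted sum of per-metric scores; the weights encode the
    domain/platform/mission priorities described in the text."""
    return sum(weights[m] * metric_scores[m] for m in weights)

# Invented sample data for illustration only.
metrics = {
    "elevation": elevation_score([101.0, 250.5, 75.2], [100.2, 251.0, 76.0]),
    "line_of_sight": los_agreement([True, True, False, True],
                                   [True, False, False, True]),
    "feature_match": 0.88,  # placeholder: fraction of matched cultural features
}
weights = {"elevation": 0.5, "line_of_sight": 0.3, "feature_match": 0.2}
print(round(correlation_score(metrics, weights), 3))
```

The exponential mapping is only one reasonable choice; any monotone mapping from raw difference to a bounded score would serve, provided the thresholds are calibrated against the acceptable-correlation values sought in Phase I.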

The prototype should consider source dataset formats such as NAVAIR Portable Source Initiative (NPSI), Marine Corps ADVTE virtual simulations aviation platforms, and constructive simulations formats, such as Joint Semi-Automated Forces (JSAF)/Tactical Environment (TEn) Compact Terrain Database (CTDB) v8.

PHASE I: Investigate, further define, propose, design, and demonstrate the feasibility of an extensible distributed synthetic environment correlation assessment architecture for aviation platforms that incorporates the various criteria and source dataset formats described above, including: a set of correlation metrics and a weighting schema, a strategy for acquiring synthetic environment data to support the correlation metrics, and an API that supports the proposed collection of environment data. Propose a strategy or plan for determining acceptable values for correlation and how this correlation level maps or translates to an interoperability assessment.

PHASE II: Develop, demonstrate and validate a prototype system using selected flight simulation facilities and trainers. Include test techniques for the validation of the metrics.

PHASE III: Transition and apply the new technologies developed into standalone products/services, as enhancements to existing training systems, and appropriate military and commercial flight training simulators.

PRIVATE SECTOR COMMERCIAL POTENTIAL/DUAL-USE APPLICATIONS: The innovation would enhance the efficiency and reliability of all simulation-based training systems for civilian, law enforcement, and emergency response communities. Systems of systems, or distributed applications, which combine a variety of different environment representations can benefit from interoperability and synthetic environment correlation.

REFERENCES:
1. P. Wooddard, "Measuring fidelity differential in simulator networks," in I/ITSEC, 1992.

2. G. Schiavone, S. Sureshchandran and K. Hardis, "Terrain database interoperability issues in training with distributed interactive simulation," ACM Transactions on Modeling and Computer Simulation, vol. 7, no. 3, pp. 332-367, 1997.

3. J. Shufelt, "Vision for Future Virtual Training. Army Combined Arms Training Activity," Training Simulations Devices Management Division, Fort Leavenworth, KS, 2006.

4. R. Simons and M. Legace, "The Common Database: An All-Encompassing Environment Database for Networking Special Operations Simulation," in Image Society, Scottsdale, Arizona, 2004.

KEYWORDS: Interoperability; synthetic environment; terrain database; virtual; constructive; distributed

** TOPIC AUTHOR (TPOC) **
DoD Notice:  
Between November 20 and December 19 you may talk directly with the Topic Authors (TPOC) to ask technical questions about the topics. Their contact information is listed above. For reasons of competitive fairness, direct communication between proposers and topic authors is not allowed starting Dec 20, 2013, when DoD begins accepting proposals for this solicitation.
However, proposers may still submit written questions about solicitation topics through the DoD's SBIR/STTR Interactive Topic Information System (SITIS), in which the questioner and respondent remain anonymous and all questions and answers are posted electronically for general viewing until the solicitation closes. All proposers are advised to monitor SITIS (14.1 Q&A) during the solicitation period for questions and answers, and other significant information, relevant to the SBIR 14.1 topic under which they are proposing.

If you have general questions about DoD SBIR program, please contact the DoD SBIR Help Desk at (866) 724-7457 or email weblink.