Team Performance Metrics for Command and Control of Unmanned Systems
Navy SBIR 2012.2 - Topic N122-136
ONR - Ms. Tracy Frost - firstname.lastname@example.org
Opens: May 24, 2012 - Closes: June 27, 2012
N122-137 TITLE: Team Performance Metrics for Command and Control of Unmanned Systems
TECHNOLOGY AREAS: Information Systems, Human Systems
ACQUISITION PROGRAM: Capable Manpower FNC EC: Human Systems Integration
OBJECTIVE: Develop and demonstrate metrics for a team of decision makers, managing multiple autonomous vehicles, to assess transaction efficiencies given autonomy and control interventions.
DESCRIPTION: Current unmanned systems require increasingly large teams of operators, yet diagnostic metrics for these systems and for the teams of decision makers who support them largely do not exist. Autonomy now under development will transform the nature of these systems and the demands they place on the warfighters who support them. Tools and metrics to assess the impact of that autonomy, identify system performance issues, and evaluate the impact of interventions do not currently exist. This topic will provide tools to measure the impact of emerging autonomy on warfighter manning and training requirements and to ensure that the autonomous systems being developed meet the Navy's operational objectives and life-cycle cost targets.
Autonomous systems will require warfighters to interact with vehicles and with other warfighters using higher-level information related to missions and tasks. Alternative schemes for managing autonomous systems are under development to achieve appropriate manning levels. As unmanned systems increasingly involve multiple vehicles with differing levels of automation, the impact and utility of the autonomy to the warfighter and to the overall man-machine system must be evaluated. New metrics and assessment techniques are desired for evaluating these alternative schemes, given the autonomy, in terms of overall system effectiveness and performance. System-level metrics may include: efficiency of warfighter control and information transactions, bandwidth impact, scope of control, and timeliness. User-level metrics may include: warfighter-autonomy expectation convergence, situational awareness, information uncertainty, and operator workload. The metrics should feed a real-time, graphical model that aids visualization of transaction effectiveness and assists in diagnosing coordination issues in the human-machine system. This topic seeks innovative proposals for evaluating the users and usage of autonomous systems.
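As a purely illustrative sketch (the transaction-log schema, field names, and metric definitions below are assumptions for illustration, not part of this topic), system-level measures such as transaction efficiency, timeliness, and scope of control might be computed from a log of operator-vehicle control transactions:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    """One operator-vehicle control transaction (hypothetical log record)."""
    operator: str
    vehicle: str
    issued_at: float      # seconds into the mission when the command was issued
    completed_at: float   # when the vehicle acknowledged or acted
    succeeded: bool       # whether the intended effect was achieved

def transaction_efficiency(log):
    """Fraction of transactions that achieved their intended effect."""
    return sum(t.succeeded for t in log) / len(log)

def mean_timeliness(log):
    """Average latency (s) from command issue to vehicle action."""
    return sum(t.completed_at - t.issued_at for t in log) / len(log)

def scope_of_control(log):
    """Vehicles controlled per operator: a simple span-of-control ratio."""
    operators = {t.operator for t in log}
    vehicles = {t.vehicle for t in log}
    return len(vehicles) / len(operators)

# Small synthetic log: two operators managing three unmanned vehicles
log = [
    Transaction("op1", "uv1", 0.0, 2.5, True),
    Transaction("op1", "uv2", 5.0, 9.0, False),
    Transaction("op2", "uv3", 6.0, 7.0, True),
]
print(round(transaction_efficiency(log), 3))  # 0.667 (2 of 3 succeeded)
print(mean_timeliness(log))                   # 2.5 seconds
print(scope_of_control(log))                  # 1.5 vehicles per operator
```

In a fielded tool these scalars would be computed continuously and rendered graphically, as the topic describes; the point of the sketch is only that each proposed metric reduces to a well-defined function over observable transaction data.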
PHASE I: Conceptualize and design an innovative approach to assessing the performance of teams operating multiple autonomous vehicles. The proposed approach should describe human performance metrics and their application to evaluating the operational use of autonomous systems. Clearly state the anticipated diagnostic requirements and how the proposed approach would address them. Metrics should identify and quantify critical, anticipated human performance capabilities and limitations relevant to managing autonomous systems, as well as mission-based performance parameters. Metrics could include: operator information-processing effectiveness, speed of decision making, minimization and/or reduction of procedural errors, and tactical control performance. The proposed metrics should operate in real time and communicate with operators in "quick look" graphical formats. Successful metrics will support both overall assessment of the autonomy in operational contexts and diagnostic indicators of the critical factors contributing to observed performance. Phase I must identify a proposed operational context (i.e., application) for developing and demonstrating the proposed metrics during Phase II.
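One minimal sketch of a real-time, "quick look" metric of the kind described above: a rolling window over a scalar observation (decision latency is used here) that reports a running mean and a simple red/yellow/green status for graphical display. The class name, window size, and thresholds are illustrative assumptions, not requirements of the topic:

```python
from collections import deque

class QuickLookMetric:
    """Rolling-window metric for a 'quick look' status display (sketch).

    Keeps the last `window` observations of a scalar measure (e.g.,
    decision latency in seconds) and reports the current mean plus a
    red/yellow/green status against assumed thresholds."""

    def __init__(self, window=20, yellow=5.0, red=10.0):
        self.values = deque(maxlen=window)  # old observations roll off
        self.yellow, self.red = yellow, red

    def update(self, value):
        """Record a new observation as it arrives in real time."""
        self.values.append(value)

    @property
    def mean(self):
        return sum(self.values) / len(self.values)

    @property
    def status(self):
        m = self.mean
        if m >= self.red:
            return "red"
        if m >= self.yellow:
            return "yellow"
        return "green"

# Stream of hypothetical decision latencies (s); the last one is a spike
latency = QuickLookMetric(window=5)
for obs in [2.0, 3.0, 4.0, 6.0, 12.0]:
    latency.update(obs)
print(latency.mean, latency.status)  # 5.4 yellow
```

A status flip from green to yellow or red is the kind of diagnostic indicator the topic asks for: it flags degrading team performance while the windowed history localizes when the degradation began.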
PHASE II: Develop and demonstrate tools, techniques, and metrics to assess the management of highly autonomous systems. Conduct testbed-based validation in a laboratory or in a modeling and simulation environment. Initial demonstrations may use a notional scenario and synthetic data; however, evaluations with actual data are desired by the end of Phase II. Conduct one or more controlled experiments to validate the tools and quantitatively demonstrate their benefit in improved team decision-making performance. Prepare guidelines and documentation for transition of the tool to an operational setting. Use the results of the development phase to build and test a prototype that assesses the impact of system design changes (e.g., different autonomy designs) on team performance in a subsurface or similar tactical environment.
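A controlled experiment of the kind called for above might quantify the benefit of a design change with a standard effect-size statistic. The sketch below uses entirely hypothetical decision-time data and Cohen's d with a pooled standard deviation; neither the data nor the choice of statistic comes from the topic:

```python
import statistics

def cohens_d(a, b):
    """Effect size for the difference in means between two conditions
    (Cohen's d, pooled-standard-deviation variant)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

# Hypothetical team decision times (s) under a baseline autonomy design
# versus a candidate redesign; lower is better.
baseline = [12.1, 10.4, 14.2, 11.8, 13.0]
new_design = [9.2, 8.7, 10.1, 9.9, 8.4]

d = cohens_d(baseline, new_design)
print(round(d, 2))  # 2.7 -- a large effect favoring the new design
```

In practice such a comparison would be paired with an appropriate significance test and adequate sample sizes; the sketch only shows how "quantitatively demonstrate their benefit" reduces to a computation over the collected performance data.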
PHASE III: Conduct testing to validate, standardize, and document the metric evaluation and assessment software, and implement it in a field experiment. Collect performance data with an autonomous system to validate improved performance. Develop guidelines and documentation for transition to an operational setting, and field-test the tool in that setting to produce improved performance measures. Implement the tool as a comprehensive package that includes an intuitive graphical user interface.
PRIVATE SECTOR COMMERCIAL POTENTIAL/DUAL-USE APPLICATIONS: Private sector products could use these evaluation tools and techniques to assess complex human-machine systems with varying degrees of automation. Candidates include multi-robot applications and any team-performance situation involving high-volume data, rapid-response requirements, and significant coordination across person-machine systems. Applications could include state and local emergency intervention teams for crisis response and humanitarian aid.
KEYWORDS: Unmanned autonomous systems; metrics; command and control; decision making; autonomy; human performance