Autonomous Unmanned Aerial Vehicle (UAV) Flight Without Supervisory Control
Navy SBIR 2019.2 - Topic N192-062
NAVAIR - Ms. Donna Attick - [email protected]
Opens: May 31, 2019 - Closes: July 1, 2019 (8:00 PM ET)

N192-062

TITLE: Autonomous Unmanned Aerial Vehicle (UAV) Flight Without Supervisory Control

 

TECHNOLOGY AREA(S): Air Platform, Electronics, Information Systems


ACQUISITION PROGRAM: PMA268 Navy Unmanned Combat Air System Demonstration

 

OBJECTIVE: Provide unmanned aerial vehicles (UAVs) with the capability to autonomously conduct flight from takeoff to landing, modifiable in real time by a human-in-the-loop or an Operations Center Supervisor (OCS), without assuming a constant data link.

 

DESCRIPTION: UAVs cannot currently adapt to changing local conditions, broken data links, or dynamic mission objectives. Current Artificial Neural Network (ANN)-centric reinforcement learning (RL) algorithms are capable of solving these problems.

 

In autonomous systems, humans and machines require common understanding and shared perception to maximize the benefits of the human-machine team. Autonomous systems rely on models that consume real-time operational data to provide predictions, alerts, and recommendations. The ultimate goal of this SBIR topic is to provide ANN-centric RL algorithms that enhance UAV operator and machine performance in information management and knowledge management during the execution of UAV missions.

 

ANN-centric RL algorithms are needed to execute: (1) preset lost-link procedures to attempt to reacquire the link in the event of data link loss within data link range; (2) contingency flight plans in the case of failure to reacquire the data link, a last-minute change in the safety of the landing site, or a wave-off command by a human-in-the-loop; and (3) an abort if the UAV anticipates or detects a command/task that cannot be performed or an obstacle that cannot be avoided. In addition, UAVs using ANN-centric RL algorithms must be able to be terminally guided from a variety of fields and locations by users with no specialized UAV flight training. Users can be field personnel, medical personnel, supply personnel, and/or remote command center personnel. Terminal guidance consists of the following options at the destination location: (a) update the requested point of landing at any point in the landing sequence; (b) abort delivery to hold at a remote location; (c) abort the approach and commence again to either the same or an alternate location; (d) abort delivery to return to the launch location with the original load (or to any other location specified by an air operations supervisor at a remote command center); and (e) allow the user to specify different flight profiles for supply versus casualty evacuation missions.
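The lost-link, contingency, and abort behaviors enumerated above can be sketched as a simple mode-transition function. This is an illustrative sketch only: the mode names and input signals (link status, wave-off flag, feasibility flag) are hypothetical placeholders, not part of any real autopilot or Navy interface.

```python
from enum import Enum, auto

class Mode(Enum):
    MISSION = auto()          # nominal flight plan
    REACQUIRE_LINK = auto()   # (1) preset lost-link procedure
    CONTINGENCY = auto()      # (2) contingency flight plan
    ABORT = auto()            # (3) abort

def next_mode(mode, link_up, reacquire_attempts, max_attempts,
              landing_site_safe, wave_off, task_infeasible):
    """Illustrative transition logic for the topic's three required
    behaviors; all inputs are booleans/counters assumed to come from
    hypothetical onboard estimators."""
    if task_infeasible:
        # (3) a command/task cannot be performed or an obstacle avoided
        return Mode.ABORT
    if mode is Mode.MISSION and not link_up:
        # (1) data link lost within data link range: try to reacquire
        return Mode.REACQUIRE_LINK
    if mode is Mode.REACQUIRE_LINK:
        if link_up:
            return Mode.MISSION
        if reacquire_attempts >= max_attempts:
            # (2) reacquisition failed: fly the contingency plan
            return Mode.CONTINGENCY
        return mode
    if not landing_site_safe or wave_off:
        # (2) unsafe landing site or human-in-the-loop wave-off
        return Mode.CONTINGENCY
    return mode
```

The point of the sketch is that each of the three required behaviors maps to a deterministic fallback the RL policy can hand control to, so the learned component never has to improvise safety-critical logic.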

 

Field users could be beyond line-of-sight (BLOS) from the launch location and should be able to interact with the UAV via a hand controller using the Aerostack architecture's common language, which includes common language commands and common language data objects. To optimize this UAV-operator team, the ANN-centric RL algorithms should represent information in an optimal way that enables the human user to form associations, reason, and make effective decisions.
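A common-language data object for the terminal-guidance options might look like the following sketch. The field names and wire format are invented for illustration and are not part of the actual Aerostack message set.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GuidanceCommand:
    """Hypothetical common-language data object sent from a field
    user's hand controller; names are illustrative only."""
    verb: str                 # e.g. "UPDATE_LANDING_POINT", "HOLD", "ABORT_RETURN"
    lat: float = 0.0          # destination latitude, decimal degrees
    lon: float = 0.0          # destination longitude, decimal degrees
    profile: str = "SUPPLY"   # "SUPPLY" or "CASEVAC" flight profile

def encode(cmd: GuidanceCommand) -> str:
    """Serialize to a simple pipe-delimited line (illustrative wire
    format, chosen for readability over low-bandwidth links)."""
    return f"{cmd.verb}|{cmd.lat:.6f}|{cmd.lon:.6f}|{cmd.profile}"
```

A shared, typed vocabulary like this is what lets untrained field users and a remote OCS issue the same small set of commands without specialized flight training.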

 

Future UAV operations will require highly autonomous systems to operate without Global Positioning System (GPS) data, range data, or photo-realistic data, and without a constant data link to a ground station. While this is a DoD problem, it is related to similar problems outside DoD and thus has commercialization potential. In particular, as everything from automobiles to household appliances becomes connected in an "Internet of Things" (IoT), there are inherent bandwidth issues in being connected anytime and anywhere, ideally over any network and providing any service. The IoT concept allows UAVs to become an integral part of IoT infrastructure because UAVs are (1) dynamic, easy to deploy, and easy to reprogram at run time; (2) capable of measuring anything anywhere; and (3) capable of flying in controlled airspace with a high degree of autonomy. Urban areas may have adequate bandwidth with network support, but rural areas may not; in many cases, adequate bandwidth is unavailable just a few miles outside city limits. Thus, methods and techniques produced under this SBIR topic have the commercial potential to solve problems associated with a burgeoning IoT in rural areas and other situations with inadequate networking infrastructure.

 

PHASE I: Using the Aerostack architecture, which consists of a layered structure corresponding to the different abstraction levels in an unmanned aerial robotic system, and/or any combination of image sensors, acoustic sensors, laser sensors, or radar, design and develop UAV ANN-centric RL algorithms to be tested via analysis and simulation. Demonstrate ANN-centric RL performance gains over traditional supervised learning algorithms (e.g., Feedforward Neural Networks, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs)) and unsupervised learning algorithms (e.g., Deep Belief Networks) in object recognition and scene classification (accuracy, precision, recall), especially for tasks related to UAV planning and situational awareness. Quantify these performance gains against system parameters such as stop-command time, minimum distance from suddenly appearing obstacles, collision probability, onboard processing size, weight, and power consumption, and sensor resolution, targeting similar or better accuracy at a lower time/energy cost. Establish feasibility of the approach by comparing performance with sensing networks employing traditional signal-processing techniques rather than inferring patterns from raw inputs, such as images and LIDAR sensor data, which can lead to proper UAV behavior even in cluttered natural scenarios such as dense forests or trails.
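The three classification metrics named above reduce to counts from a confusion matrix. As a minimal sketch (assuming a binary object-present/object-absent decision per frame, which is a simplification of the actual multi-class task):

```python
def precision_recall_accuracy(tp, fp, fn, tn):
    """Compute the Phase I comparison metrics from binary
    confusion-matrix counts: true/false positives and negatives."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of detections, how many real
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of real objects, how many found
    accuracy = (tp + tn) / (tp + fp + fn + tn)        # overall fraction correct
    return precision, recall, accuracy
```

Reporting all three matters for the obstacle-avoidance use case: a detector tuned only for accuracy can still miss rare obstacles, which recall exposes.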

 

Deliver a concept for an interface design that enables shared perception and shared understanding between the human and machine, taking into account the way in which humans fuse information. Ensure the concept is applicable to a variety of autonomous systems. Provide a Phase II plan for the practical deployment of the proposed interface design approach as a prototype.

 

Include an analysis of the costs, benefits, and risks of applying specific ANN-centric RL algorithms.

 

PHASE II: Demonstrate the neural networks on a commercial-off-the-shelf (COTS) pico-size quadrotor UAV, approximately 2 cm across, with a mass of 0.0001 kilogram or greater, consuming approximately 0.1 watt of power. Provide data and video showing that a pico-size quadrotor UAV equipped with a neural network can autonomously and dynamically adjust its location and flight direction; brake in time to avoid collisions; and provide an optimal flight path to users who dynamically change navigation routes.

 

Produce a medium-fidelity simulation for testing the neural network algorithms and for validating and verifying: (1) that the memory footprint and computation fit within the UAV's available resources while exploiting architectural parallelism and meeting a given real-time deadline; (2) a fully autonomous vision-based navigation system, built on the selected neural network algorithms, for UAV operations within an allowed power budget; (3) a neural network that minimizes data transfers and communication overhead, processes all visual information concurrently, and directly produces control commands for flying the UAV; and (4) the ability of UAVs integrated with the neural network to perform terminal guidance and to communicate, in reaction time, shared perception and shared understanding with users regarding an unexpected obstacle.

 

PHASE III DUAL USE APPLICATIONS: Transition neural network technology to enable autonomous operations in the following UAVs: MQ-25, Triton, Fire Scout, RQ-21 Blackjack, RQ-23 TigerShark, Autonomous Aerial Distribution Family of Systems Unmanned Logistics Systems - Air (ULS-A), Marine Air Ground Task Force (MAGTF) Unmanned Aircraft System (UAS) Expeditionary (MUX), and commercial and civil UAVs engaged in surveying, surveillance, and natural disaster support.

 

Providing connectivity from the sky to ground wireless users is an emerging trend in wireless communications. High- and low-altitude UAVs are being considered as candidates for servicing wireless users and thus complementing the terrestrial communication infrastructure. Such communication from the sky is expected to be a major component of beyond-5G cellular networks. Compared to terrestrial communications, a wireless system built on low-altitude UAVs is faster to deploy, more flexible to reconfigure, and likely to have better communication channels due to the presence of short-range, line-of-sight (LoS) links. In a UAV-based wireless system, UAVs can serve three key functions: aerial base stations, aerial relays, and cellular-connected UAVs (i.e., user equipment (UE) UAVs) for information dissemination and data collection. Therefore, there is a need to investigate the optimal deployment of UAVs for coverage extension and capacity improvement. Moreover, UAVs can be used for data collection, delivery, and transmitting telematics. Hence, there is a need to develop intelligent self-organizing control algorithms to optimize the flying paths of UAVs.
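The coverage-deployment problem mentioned above is a set-cover-style optimization. A toy greedy sketch (purely illustrative, not an algorithm from the topic or any specific paper): given user locations and candidate hover points, repeatedly place an aerial base station at the candidate covering the most still-uncovered users.

```python
def greedy_coverage(users, candidates, radius, k):
    """Greedy placement of up to k aerial base stations.
    users, candidates: lists of (x, y) positions; radius: coverage
    radius of one UAV base station. Returns (chosen points, number
    of users covered). A stand-in for the deployment problem only."""
    uncovered = set(range(len(users)))
    chosen = []

    def newly_covered(c):
        # indices of still-uncovered users within radius of candidate c
        return {i for i in uncovered
                if (users[i][0] - c[0]) ** 2 + (users[i][1] - c[1]) ** 2
                <= radius ** 2}

    for _ in range(k):
        best = max(candidates, key=lambda c: len(newly_covered(c)),
                   default=None)
        if best is None or not newly_covered(best):
            break  # no candidate adds coverage
        uncovered -= newly_covered(best)
        chosen.append(best)
    return chosen, len(users) - len(uncovered)
```

Greedy set cover carries a well-known (1 - 1/e) approximation guarantee, which is why it is a common baseline before attempting learned or self-organizing placement policies.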

 

REFERENCES:

1.   CCSDS.org Publications. NASA Consultative Committee for Space Data Systems. https://public.ccsds.org/Publications/default.aspx

 

2.   Jenkins, M.P., Gross, G.A., Bisantz, A.M., & Nagi, R. "Towards context aware data fusion: Modeling and integration of situationally qualified human observations to manage uncertainty in a hard [plus] soft fusion process." Information Fusion, Volume 21, Issue 1, January 2015, pp. 130-144.


https://www.researchgate.net/publication/265128206_Towards_context_aware_data_fusion_Modeling_and_integration_of_situationally_qualified_human_observations_to_manage_uncertainty_in_a_hard_soft_fusion_process

 

3.   Hall, D. L., McNeese, M. D., Hellar, D. B., Panulla, B. J., & Shumaker, W. "A cyber infrastructure for evaluating the performance of human centered fusion." Proceedings of the 12th International Conference on Information Fusion, Seattle, WA, July 6-9, 2009, pp. 1257-1264. http://fusion.isif.org/proceedings/fusion09CD/data/papers/0377.pdf

 

4.   Wooley, Lt Gen. "Concepts for Today Visions for Tomorrow." Command Brief. http://www.ndiagulfcoast.com/events/archive/31st_symposium/day1/Wooley.pdf

 

KEYWORDS: Sensors; Video; Unmanned Aircraft System; UAS; Anti-Access Area Denial; A2AD; Datalinks; Data Links; Human Machine Interface; HMI; ANN; Artificial Neural Network; RL; Reinforcement Learning

 

 

** TOPIC NOTICE **

NOTICE: The data above is for casual reference only. The official DoD/Navy topic description and BAA information is available at https://www.defensesbirsttr.mil/

These Navy Topics are part of the overall DoD 2019.2 SBIR BAA. The DoD issued its 2019.2 BAA SBIR pre-release on May 2, 2019, which opens to receive proposals on May 31, 2019, and closes July 1, 2019 at 8:00 PM ET.

Between May 2, 2019 and May 30, 2019 you may communicate directly with the Topic Authors (TPOC) to ask technical questions about the topics. During these dates, their contact information is listed above. For reasons of competitive fairness, direct communication between proposers and topic authors is not allowed starting May 31, 2019, when DoD begins accepting proposals for this BAA.


Help: If you have general questions about the DoD SBIR program, please contact the DoD SBIR Help Desk at 800-348-0787 or via email at [email protected]