DIGITAL ENGINEERING - Software Incident Report Capture and Scripting

Navy SBIR 23.1 - Topic N231-029
NAVSEA - Naval Sea Systems Command
Pre-release 1/11/23   Opens to accept proposals 2/08/23   Closes 3/08/23 12:00pm ET

N231-029 TITLE: DIGITAL ENGINEERING - Software Incident Report Capture and Scripting

OUSD (R&E) CRITICAL TECHNOLOGY AREA(S): Artificial Intelligence (AI)/Machine Learning (ML); Autonomy; Cybersecurity

The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.

OBJECTIVE: Develop a continuous event recording and incident capture tool for software test teams to enable test scripts that recreate system conditions so fixes may be efficiently validated.

DESCRIPTION: Complex Naval Control Systems (NCSs), such as the AN/SQQ-89A(V)15 Surface Ship Undersea Warfare / Anti-Submarine Warfare Combat System, enable sailors to perform complex missions in support of achieving national security objectives. NCSs can involve millions of source lines of code (SLOC). Despite rigorous testing, an NCS may be fielded with numerous "low priority" software problem reports (SPRs), or bugs: software flaws that do not prevent successful mission execution, but which are at best irritating and at worst can extend the time required to achieve mission success.

The Navy seeks a method for capturing the specific underlying conditions associated with manifestations of a bug, to enhance the Navy's ability to diagnose the causes of the bug or at least to recreate it. Capturing the key conditions present during an observed software incident, and producing scripting that enables faithful re-creation of the bug, will substantially improve the Navy's ability to produce tactical code that better supports sailor use in pursuit of tactical objectives. This will reduce acquisition and maintenance costs. Currently, no solutions enable capture of these key conditions during observed incidents.

Many NCSs include extensive recording capability, intended to enable reconstruction of tactically significant exercises and operations. However, this recording capability is not geared towards identifying the key attributes of system operation contributing to observed software bugs. As any observed bug would be associated with previously unknown software conflicts and contributing factors, it is impossible to determine in advance which system attributes would need to be recorded to ensure a bug could be recreated. However, it appears possible to use Artificial Intelligence (AI) and Machine Learning (ML) on full recordings involving bugs to develop an ontology of bug categories and the subset of system attributes required to recreate and diagnose the bug.
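As an illustration of this AI/ML approach (not part of the official topic requirements), the following minimal Python sketch clusters feature vectors extracted from full recordings that contain known bugs, proposing candidate bug categories and ranking the system attributes that best separate them. Every name shown, including extract_features and the attribute list, is a hypothetical stand-in rather than an existing NCS interface.

    # Hypothetical sketch: cluster recordings containing known bugs to
    # propose an initial ontology of bug categories.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    def extract_features(recording):
        """Reduce one full recording to a fixed-length vector of system
        attributes (message rates, track counts, operator actions, etc.).
        The real attribute set would come from the NCS recorder segment."""
        return np.array([
            recording["msg_rate"],
            recording["active_tracks"],
            recording["operator_actions"],
            recording["cpu_load"],
        ])

    def propose_bug_ontology(recordings, n_categories=8):
        X = np.stack([extract_features(r) for r in recordings])
        X = StandardScaler().fit_transform(X)
        model = KMeans(n_clusters=n_categories, n_init=10, random_state=0)
        labels = model.fit_predict(X)
        # Attributes whose cluster centers spread the most are candidates
        # for the "subset of system attributes" worth recording.
        spread = model.cluster_centers_.max(axis=0) - model.cluster_centers_.min(axis=0)
        return labels, np.argsort(spread)[::-1]

In practice the feature set, the clustering algorithm, and the number of categories would all be driven by the recorded data itself; KMeans appears here only as a simple, well-known baseline.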

The desired solution will exist within the NCS and, upon a signal from a test engineer, will initiate analysis and capture of the conditions associated with a recent bug. The technology sought will involve concise capture of the nature of the bug, as observed by the user. The technology will also collect sufficient metadata regarding key system conditions to enable developers to diagnose the likely cause(s) of the bug, recreate the error, and validate that a fix has been successful.
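A minimal sketch of such a capture trigger follows, assuming a rolling in-memory event buffer; the IncidentRecorder class and its methods are illustrative assumptions, not an existing NCS API.

    # Hypothetical sketch: rolling event buffer with an engineer-initiated
    # incident capture that pairs recent system conditions with a concise
    # user description of the observed bug.
    import json
    import time
    from collections import deque

    class IncidentRecorder:
        def __init__(self, window=10_000):
            self.events = deque(maxlen=window)  # rolling window of events

        def log_event(self, source, payload):
            # Payloads are assumed to be JSON-serializable message summaries.
            self.events.append({"t": time.time(), "src": source, "data": payload})

        def capture_incident(self, user_description, path):
            """Called when the engineer flags a bug: freeze the recent
            window and pair it with the human-readable symptom description."""
            snapshot = {
                "description": user_description,
                "captured_at": time.time(),
                "events": list(self.events),
            }
            with open(path, "w") as f:
                json.dump(snapshot, f)
            return snapshot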

The technology will also have a capability available to tactical users, to capture information sufficient to diagnose and recreate "escaped bugs", software problems that do not manifest until after software has been released for tactical use.

The software incident report capture and scripting (SIRCS) technology should reduce the time required to find, fix, and repair (FFR) bugs by at least 25%. Improved FFR efficiency will enable the Navy either to reduce the time required to produce software with a set number of "low priority" bugs, or to substantially reduce the number of bugs present in software baselines produced under a standard release rate. The SIRCS technology will not need to capture all bugs accurately, but it will need to identify when a bug has not been properly captured. A key attribute of the technology will be the ability of the bug ontology, once developed, to accurately synopsize key system conditions to support bug diagnosis and recreation. A secondary attribute will be the ease with which a test engineer or other user can capture a concise definition of the bug as observed, since users are unlikely to adopt a tool that is too cumbersome, leaving bugs uncaptured.

The NCS will possess a mature logging function and ability to ingest scripts for automated testing. The Navy will have a clear definition of bug impact and likelihood to enable provisional categorization of bugs in advance of formal assessment by the configuration control board (CCB) associated with the NCS.
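To illustrate how a captured incident might feed the script-ingest capability described above, the following hypothetical sketch generates a replay script from a snapshot in the format sketched earlier; the inject() call and the script format are assumptions, not the real NCS scripting interface.

    # Hypothetical sketch: turn a captured incident snapshot into a replay
    # script that an automated test harness could ingest.
    import json

    def generate_replay_script(snapshot_path, script_path):
        with open(snapshot_path) as f:
            snapshot = json.load(f)
        t0 = snapshot["events"][0]["t"] if snapshot["events"] else 0.0
        lines = ["# Auto-generated replay for: " + snapshot["description"]]
        for ev in snapshot["events"]:
            # Re-inject each recorded message at its original relative time.
            lines.append(
                f"inject(at={ev['t'] - t0:.3f}, src={ev['src']!r}, data={ev['data']!r})"
            )
        with open(script_path, "w") as f:
            f.write("\n".join(lines))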

Work produced in Phase II may become classified. Note: The prospective contractor(s) must be U.S. owned and operated with no foreign influence as defined by DoD 5220.22-M, National Industrial Security Program Operating Manual, unless acceptable mitigating procedures have been implemented and approved by the Defense Counterintelligence and Security Agency (DCSA), formerly the Defense Security Service (DSS). The selected contractor must be able to acquire and maintain a secret-level facility clearance and Personnel Security Clearances, as set forth by DSS and NAVSEA, in order to perform on advanced phases of this contract and to gain access to classified information pertaining to the national defense of the United States and its allies; this will be an inherent requirement. The selected company will be required to safeguard classified material IAW DoD 5220.22-M during the advanced phases of this contract.

All DoD Information Systems (IS) and Platform Information Technology (PIT) systems will be categorized in accordance with Committee on National Security Systems Instruction (CNSSI) 1253, implemented using a corresponding set of security controls from National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53, and evaluated using assessment procedures from NIST SP 800-53A and DoD-specific assessment procedures (Information Assurance Technical Authority (IATA) Standards and Tools, available on the DoD Knowledge Service (KS)).

The Contractor shall support the Assessment and Authorization (A&A) of the system. The Contractor shall support the government's efforts to obtain an Authorization to Operate (ATO) in accordance with DoDI 8500.01 Cybersecurity; DoDI 8510.01 Risk Management Framework (RMF) for DoD Information Technology (IT); NIST SP 800-53; NAVSEA 9400.2-M (October 2016); and business rules set by the NAVSEA Echelon II and the Functional Authorizing Official (FAO). The Contractor shall design the tool to comply with the proposed RMF security controls necessary to obtain A&A. The Contractor shall provide technical support and design material for RMF assessment and authorization in accordance with NAVSEA Instruction 9400.2-M by delivering objective quality evidence (OQE) and documentation to support assessment and authorization package development.

Contractor Information Systems Security Requirements. The Contractor shall implement the security requirements set forth in the clause entitled DFARS 252.204-7012, "Safeguarding Covered Defense Information and Cyber Incident Reporting," and National Institute of Standards and Technology (NIST) Special Publication 800-171.

PHASE I: Develop a concept for an embedded software incident report capture and scripting (SIRCS) technology to meet the parameters of the Description. The concept should be compatible with multiple software languages operating within a Red Hat Linux operating system. Demonstrate feasibility using an unclassified system that allows the Government to understand how the concept is extensible to NCSs in general and to the AN/SQQ-89A (V)15 in particular. The Phase I Option, if exercised, will include the initial design specifications and capabilities description to build a prototype solution in Phase II.

PHASE II: Develop and deliver a prototype software incident report capture and scripting system based on the results of Phase I. The Phase II effort will involve use of the technology with the AN/SQQ-89A(V)15 system itself. The prototype software incident report capture and scripting capability will be evaluated by Navy subject matter experts (SMEs) familiar with both NCS prototype testing and NCS certification testing.

It is probable that the work under this effort will be classified under Phase II (see Description section for details).

PHASE III DUAL USE APPLICATIONS: Support the Navy in transitioning the technology to Navy use. The final SIRCS product will be an integrated capability to capture concise descriptions of bugs, together with sufficient metadata to enable each bug to be recreated and diagnosed. The technology arising from this research would initially be incorporated into systems undergoing test during both the development and certification stages of software maturation. The technology could be put into use as early as 2027, during development testing of the AN/SQQ-89A(V)15 Advanced Capability Build (ACB) prototype, likely the ACB-29 build. As this ACB matures, use of the SIRCS technology will expand to include certification testing and testing associated with installation and check-out aboard Navy combatants.

Throughout the envisioned use of the technology by Navy test personnel, the company would be funded to expand the bug classes to which the SIRCS technology reliably applies. A minimum viable product (MVP) would involve capture of 100% of concise user bug descriptions, appropriate bug severity assessments for 80% of bugs prior to formal CCB adjudication, and capture of correct metadata and generated scripts sufficient to more than offset the time spent attempting fixes based on incorrect metadata and scripts.
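As a worked example of the "more than offset" criterion, using entirely hypothetical numbers:

    # Hypothetical check of the MVP offset criterion: time saved by correct
    # captures must exceed time wasted chasing incorrect metadata or scripts.
    correct_captures = 90      # incidents with usable metadata/scripts
    hours_saved_each = 4.0     # developer hours saved per correct capture
    incorrect_captures = 10    # incidents with misleading metadata/scripts
    hours_wasted_each = 6.0    # hours lost per misleading capture

    net_benefit = (correct_captures * hours_saved_each
                   - incorrect_captures * hours_wasted_each)
    assert net_benefit > 0     # 360 - 60 = 300 hours net: this MVP passes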

There is potential for a SIRCS capability to apply beyond Naval control systems to other DoD control systems. Industrial applications would include complex control systems where failures can result in catastrophic consequences, such as control systems for nuclear power and information technology.

REFERENCES:

1. Florac, William A. "Software Quality Measurement: A Framework for Counting Problems and Defects." Carnegie Mellon University, Software Engineering Institute Technical Report CMU/SEI-92-TR-022, ESC-TR-92-022. https://resources.sei.cmu.edu/asset_files/TechnicalReport/1992_005_001_16088.pdf

2. Hanna, Milad et al. "A Review of Scripting Techniques Used in Automated Software Testing." International Journal of Advanced Computer Science and Applications (IJACSA), 5(1), 2014. https://thesai.org/Publications/ViewPaper?Volume=5&Issue=1&Code=IJACSA&SerialNo=28

3. Navy Fact File, "AN/SQQ-89(V) Undersea Warfare / Anti-Submarine Warfare Combat System." U.S. Navy Office of Information, 20 Sep 2021. https://www.navy.mil/Resources/Fact-Files/Display-FactFiles/Article/2166784/ansqq-89v-undersea-warfare-anti-submarine-warfare-combat-system

KEYWORDS: Software incident report; automated testing; naval control systems; find, fix, and repair; FFR; ontology of bug categories; AI/ML for bug characterization


** TOPIC NOTICE **

The Navy Topic above is an "unofficial" copy from the Navy Topics in the DoD 23.1 SBIR BAA. Please see the official DoD Topic website at www.defensesbirsttr.mil/SBIR-STTR/Opportunities/#announcements for any updates.

The DoD issued its Navy 23.1 SBIR Topics pre-release on January 11, 2023 which opens to receive proposals on February 8, 2023, and closes March 8, 2023 (12:00pm ET).

Direct Contact with Topic Authors: During the pre-release period (January 11, 2023 thru February 7, 2023) proposing firms have an opportunity to directly contact the Technical Point of Contact (TPOC) to ask technical questions about the specific BAA topic. Once DoD begins accepting proposals on February 8, 2023 no further direct contact between proposers and topic authors is allowed unless the Topic Author is responding to a question submitted during the Pre-release period.

SITIS Q&A System: After the pre-release period, and until February 22, 2023, (at 12:00 PM ET), proposers may submit written questions through SITIS (SBIR/STTR Interactive Topic Information System) at www.dodsbirsttr.mil/topics-app/, login and follow instructions. In SITIS, the questioner and respondent remain anonymous but all questions and answers are posted for general viewing.

Topics Search Engine: Visit the DoD Topic Search Tool at www.dodsbirsttr.mil/topics-app/ to find topics by keyword across all DoD Components participating in this BAA.

Help: If you have general questions about the DoD SBIR program, please contact the DoD SBIR Help Desk via email at [email protected]

Topic Q & A

2/27/23  Q. a) What does the simulator look like? What does the debug process look like?
b) What does the monitoring deployment environment look like, is it at deployment on device, in simulation, proxy device, etc?
c) What does the test engineer's debugging environment look like? Is it a deployed device, simulated, a proxy device, a debugging device, etc.?
d) What are the details about the existing "extensive recording capability"? There are lots of ways to hook into running binaries, including even services built into Linux like auditd. Is that being used now? If not, why not, and what is in use and at what scope?
e) Where are the solution(s) expected to reside on the system? Binary instrumentation? OS?
   A. 1. The system for which we seek this technology is not a simulator. It is a prototype of a future Naval Control System (NCS) update. We perform testing using either recorded data or signal injection, including high fidelity stimulation (the tactical code sees signals and isn't alerted that the signals aren't "real").
The find, fix, repair (FFR) process varies depending on the nature of the bug to be corrected. A key challenge this topic seeks to address is characterizing the bug well at the time it is first noticed, so that the job of a developer re-finding the bug is simplified. Developers change the code to fix the bug; then the conditions associated with the original bug are recreated to validate that the code fix has eliminated the bug without causing other problems.
2. For the AN/SQQ-89 system for which this technology will initially be used, there is a recorder functional segment (RecFS) that is embedded within the SQQ-89. Numerous sensor and system messages are recorded and if it is determined that more need to be added to the compendium for the success of the technology, that can be considered by the program office near the end of Phase II.
3. The test engineers are not doing the debugging. They test the system to confirm functionality and, in the process, find bugs. These bugs are identified in a bug-tracking database (e.g., Jira) and the developers recreate the bug, fix the bug (through changing code), then verify the fix.
4. See the answer to question 2 above: the recorder functional segment (RecFS) embedded within the SQQ-89 is the existing recording capability, and additions to the recorded message set can be considered by the program office near the end of Phase II.
5. While it will be useful for an instantiation of the solution to exist within the Naval Control System (NCS), it seems it would also be useful for an instantiation to run external to the system, passively tapping into the system. This external instantiation of the solution could then be used if the bug in question causes a catastrophic fault that crashes the NCS.
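As an illustration of the external instantiation described in answer 5, the following minimal sketch follows a recorder output file from a separate process so the capture survives a crash of the NCS itself; the file paths and one-record-per-line format are hypothetical.

    # Hypothetical sketch: passively tail the NCS recorder output from an
    # external process, persisting each record before the next read.
    import time

    def tail_recorder(stream_path, sink_path):
        with open(stream_path) as stream, open(sink_path, "a") as sink:
            while True:
                line = stream.readline()
                if not line:
                    time.sleep(0.1)  # wait for the recorder to append more
                    continue
                sink.write(line)
                sink.flush()         # durable even if the NCS crashes next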
2/21/23  Q. 1) "The concept should be compatible with multiple software languages operating within a Red Hat Linux" - Is Java a consideration? What about C++ or Python? What is the most prevalent language in the NCS environment?
2) When the programs are run, will we observe any screen outputs, or do we determine errors only from compiler errors or warnings?
There are two possible scenarios in which errors can be generated: a) during compilation and b) during run-time. During execution, programs may be nested (Program A calls Program B), or the call could be an API or function call. Will such situations be taken into consideration?
3) What are the major types of errors we will be tracking: run-time errors, compiler errors, memory overflow conditions, syntax errors, run-time warnings?
4) At the time of testing, are we testing the compiled code or source code? Will the testers modify any parts of the source code to trigger errors?
5) Will the programs be run/ tested on a stand-alone basis, one program at a time or will the programs be executed as part of a framework like Jenkins?
   A. 1. The three you mention are used. I'm not certain whether C++ or another language is the most prevalent.
2a. The purpose of the technology we seek is to capture "bugs" identified by test engineers. These are unlikely to be compiler errors or warnings. An example of a bug is when expected behavior does not manifest.
2b. The technology we seek is focused on run-time bugs. We seek a technology that is extensible to the full set of conditions that could result in run-time bugs.
3. The major errors this topic seeks to address are run-time bugs, where the system does not behave as it is expected to behave.
4. We seek technology that can both capture key information about the run-time environment at the time of bug identification and that makes it easy for the test engineers to quickly characterize the nature of the bug they have identified. The audience for this information is the developer community, which is tasked to re-find the bug, modify code to resolve the problem, and re-run the conditions associated with the bug to validate that the fix has worked.
5. The AN/SQQ-89 is a complex combat system consisting of tens of millions of source lines of code, much of which is running simultaneously during testing. There are some automated testing tools in use, but the key situation for which we seek this technology is human-conducted operations that result in a bug being identified. We seek technology that facilitates real-time characterization of the bug, both the system behavior observed by a human and the system state at the time of the problematic system behavior.
2/7/23  Q. Is there a TPOC we can reach out to for questions?
   A. Hello. The opportunity for direct communication with TPOCs of this BAA ended February 7. Technical questions related to a topic may be posted to this Topic Q&A page through February 22.
2/7/23  Q. 1. Does the captured information (for re-creating the bugs) have to be human readable?
2. Does the recreation/validation/fix process need to be fully or semi automated?
Thanks
   A. 1. There will be some aspects of the capture that should be human readable, but the metadata related to the underlying system state doesn't need to be human readable. It needs only to be understandable by the system, so that the bug conditions can be recreated via a script, ideally generated by the technology developed under this SBIR.
2. Initially, bug recreation could be semi-automated, and it seems reasonable that some bugs would always require some level of human interaction. But the more the technology makes it possible to recreate the bug conditions without significant human effort, the better.
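A minimal sketch of the semi-automated recreation flow described in answer 2 follows, assuming a hypothetical test-harness CLI (ncs-test-harness) for script ingestion; the human fallback covers bugs that cannot yet be verified automatically.

    # Hypothetical sketch: replay the captured conditions, then use an
    # automated symptom check when one exists, else ask a human.
    import subprocess

    def run_script(script_path):
        # Stand-in for the real NCS automated-test ingest interface.
        subprocess.run(["ncs-test-harness", "run", script_path], check=True)

    def recreate_bug(script_path, symptom_check=None):
        run_script(script_path)        # re-inject the captured conditions
        if symptom_check is not None:
            return symptom_check()     # automated verdict when available
        return input("Did the reported symptom reappear? [y/n] ") == "y"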

