Bot Detection in Novel Information Environments
Navy STTR 2015.A - Topic N15A-T020
ONR - Ms. Lore-Anne Ponirakis - [email protected]
Opens: January 15, 2015 - Closes: February 25, 2015 6:00am ET

N15A-T020 TITLE: Bot Detection in Novel Information Environments

TECHNOLOGY AREAS: Information Systems, Sensors, Human Systems

ACQUISITION PROGRAM: CMP-FY15-02 Environment Designed to Undertake Counter A2AD Tactics Training

OBJECTIVE: Detection of "social bots" in social media information streams, to determine their impact on social discourse and to characterize their sources, intents, and modes of operation.

DESCRIPTION: Social bots - automated systems for posting information, hijacking accounts, and auto-posting on behalf of others in social media, either through hijacked accounts or through "robot" (fake) accounts - can be used to influence and manipulate crowds in social media venues (web forums, Twitter, Facebook, etc.) and are currently used to spread messages for a variety of purposes. Many of these bots can be easily detected by existing bot-detection methods; others are more sophisticated. Social bots could be used to spread hate speech, incite crowds to spontaneous violence, and amplify messages for many purposes. Social message attacks on phones through Short Message Service (SMS) messages are another venue of concern. This effort will develop new methods and technologies for detecting social bots; develop methods and techniques for evaluating their impact on social discourse; and develop methods and technologies for evaluating their mode of operation (entirely scripted from one coordinating point, many actors using similar methods such as a common script and common goals, or many actors who use a variety of methods to manipulate crowds). Disaster and complex humanitarian operations increasingly take place in areas in which civil unrest, terrorist actions, mobbing, and other events influenced by disinformation, hate speech, rumor mongering, and hysteria propagation via social media pose dangers to human security. This effort will develop techniques to distinguish human from artificial propagation of messages, gauge the communicative reach of disruptive, deceptive messaging, and provide the basis for developing techniques for revealing, countering, and mitigating efforts to promote violent response in complex humanitarian crises.
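One detection signal implied by the description above - scripted accounts tend to post on an artificially regular cadence, while human posting is burstier - can be illustrated with a minimal sketch. The feature (coefficient of variation of inter-posting intervals) and the 0.1 threshold are illustrative assumptions for exposition, not a validated operating point or a method prescribed by this topic:

```python
from statistics import mean, pstdev

def interval_regularity(timestamps):
    """Coefficient of variation (CV) of inter-posting intervals.

    Timestamps are in seconds, sorted ascending. A fully scripted
    poster emitting messages at a fixed period has CV near zero;
    bursty human activity produces a much larger CV.
    """
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(intervals)
    return pstdev(intervals) / m if m > 0 else 0.0

def looks_scripted(timestamps, cv_threshold=0.1):
    """Flag an account whose posting cadence is suspiciously regular.

    The threshold value is an assumption for illustration only;
    a real detector would combine many such features and be
    calibrated against labeled bot and human accounts.
    """
    return len(timestamps) >= 3 and interval_regularity(timestamps) < cv_threshold
```

For example, an account posting exactly once per minute (`[0, 60, 120, 180, 240]`) is flagged, while an irregular human-like trace (`[0, 5, 300, 320, 4000]`) is not. A production system would fuse cadence with content, network, and language features rather than rely on any single heuristic.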

This affects affordability: current identification of orchestrated efforts to promote violence, hysteria, and deception is primarily done via expensive third-party providers whose technologies are insufficient to adequately discover and characterize these flows.

The Navy will only fund proposals that are innovative, address R&D, and involve technical risk.

PHASE I: Develop designs and prototype code for detecting social bots operating in a variety of information environments, characterizing their behavior, and estimating their impact on target audiences.

PHASE II: Refine initial products into a working prototype of software for detecting artificial and hybrid human-in-the-loop methods of artificially amplifying target messages in multiple strategic languages. Develop advanced techniques for characterizing their purpose, tactics, and typology; establish and test metrics for estimating impact on target audiences.

PHASE III: Incorporate methods to detect evolved and evolving strategies of deception, hate speech, and crowd polarization. Complete the development of technologies to find malicious social bots in high-scale, high-velocity, multi-language messaging. The final product should be capable of processing tweet streams from the Twitter API, the Gnip "firehose", etc., and should be incorporable into analytic workflows for a variety of military operational domains such as Humanitarian Assistance, Disaster Relief, influence operations, public affairs, and civil affairs.

PRIVATE SECTOR COMMERCIAL POTENTIAL/DUAL-USE APPLICATIONS: The detection of social bots is a growing area of concern in business communities and in government and non-governmental organizations.

REFERENCES:
1. arXiv. "The emerging threat from Twitter's social capitalists." MIT Technology Review (2 July 2014). www.technologyreview.com/view/528746/the-emerging-threat-from-twitters-social-capitalists/?utm_campaign=socialsync&utm_medium=social-post&utm_source=facebook

2. Bilton, Nick. "Friends, and Influence, for Sale Online." New York Times (Bits). (20 April 2014).
bits.blogs.nytimes.com/2014/04/20/friends-and-influence-for-sale-online/?_php=true&_type=blogs&_r=0

KEYWORDS: Information technology; threat detection; information operations; data mining; computational social science; social networks