Interactive Generative Manifold Learning
Navy SBIR 2012.2 - Topic N122-138
ONR - Ms. Tracy Frost - email@example.com
Opens: May 24, 2012 - Closes: June 27, 2012
N122-138 TITLE: Interactive Generative Manifold Learning
TECHNOLOGY AREAS: Information Systems, Battlespace
ACQUISITION PROGRAM: Network-Centric Sensor Analysis for Mine Warfare (NSAM), PMS-495, PEO-LCS
OBJECTIVE: Develop techniques or mechanisms whereby a human operator may describe and/or generate previously unseen realizations of target data based on a low-dimensional representation learned from a training dataset (i.e., without a physical model).
DESCRIPTION: Within the field of target recognition, the use of manifold learning has become increasingly popular and powerful (Ref. 1-3). Consider a classic example from facial recognition where a dataset is composed of a single face imaged at many rotations (e.g., profile, head-on, etc.). In this example, a learned manifold would be low dimensional (here, one-dimensional) and correspond to rotation angle. There would be a corresponding mapping from the high-dimensional space of the image (where the dimensionality equals the number of pixels) to the low-dimensional manifold. Therefore, any point on the manifold would correspond to an image of the face at that given rotation. Recent advances have developed generative models for manifolds as well as one-to-one mappings (Ref. 4). Therefore, it is now possible to pick any arbitrary point along a manifold and map it back into the high-dimensional space. This allows one to generate previously unseen data directly from the manifold without a physical model of the process that generated the data. The purpose is to enable the human to both "explore" and "describe" target characteristics beyond those represented by existing datasets.
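The generative idea above can be sketched in a few lines. The following illustrative example (not part of this solicitation) uses plain PCA as a stand-in for a nonlinear manifold learner: synthetic "images" are generated by a single rotation-like parameter, a low-dimensional embedding is learned, and an unseen point on the learned coordinates is mapped back to the high-dimensional pixel space. All variable names and the choice of PCA are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: 64-"pixel" signals generated by a single
# parameter theta -- the true one-dimensional manifold coordinate.
thetas = np.linspace(0.0, np.pi, 50)
grid = np.arange(64)
X = np.array([np.sin(0.2 * grid + t) for t in thetas])  # shape (50, 64)

# Learn a linear embedding (PCA via SVD) as a stand-in for a
# nonlinear manifold learner such as LLE or diffusion maps.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 2                                    # embedding dimensionality
Z = (X - mean) @ Vt[:k].T                # low-dimensional coordinates

# "Generative" step: pick a point between two training embeddings
# (an unseen manifold coordinate) and map it back to pixel space,
# producing a previously unseen high-dimensional sample.
z_new = 0.5 * (Z[10] + Z[11])
x_new = z_new @ Vt[:k] + mean            # shape (64,)

print(x_new.shape)
```

A nonlinear learner would replace the SVD with a method from Ref. 1-4, but the round trip (data to manifold coordinates and back) is the same.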
The goal of this effort is to investigate methods and develop techniques for a human operator to explore high-dimensional data by: 1) traversing along the low-dimensional manifold in a meaningful, intuitive, and efficient manner (i.e., interpolating along the existing manifold); and 2) exploring data that either lies on the existing manifold beyond the currently characterized regions or exists on an expanded manifold of larger dimensionality (i.e., extrapolating beyond the existing manifold). This latter focus on extrapolation is aimed at leveraging the expertise of the human operator to characterize manifestations in the data that have not been sufficiently sampled yet are well understood by the human. For example, in the facial recognition problem, manifestations due to head tilting, illumination effects, or facial gestures may be present to a minor degree. These manifestations could be brought out by the operator, characterized, and used to generate additional previously unseen data.
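The two exploration modes above can be made concrete with a small sketch. Here the learned generative mapping is replaced by a known analytic curve (a hypothetical `decode` function, an assumption for illustration); the operator traverses smoothly within the characterized region (interpolation) and then steps past its boundary (extrapolation).

```python
import numpy as np

def decode(z):
    """Hypothetical learned mapping: 1-D manifold coordinate -> 3-D data.
    A real system would use a learned generative model (cf. Ref. 4)."""
    return np.stack([np.cos(z), np.sin(z), 0.5 * z], axis=-1)

# Suppose training data characterized manifold coordinates in [0, 2].
z_train = np.linspace(0.0, 2.0, 20)

# 1) Interpolation: traverse between characterized points, generating
#    a dense, smooth path of previously unseen but in-region samples.
z_interp = np.linspace(z_train[0], z_train[-1], 100)
path_interp = decode(z_interp)           # shape (100, 3)

# 2) Extrapolation: step beyond the characterized region, guided by
#    the operator's understanding of what lies past the boundary.
z_extrap = z_train[-1] + np.linspace(0.1, 1.0, 10)
path_extrap = decode(z_extrap)           # shape (10, 3)

print(path_interp.shape, path_extrap.shape)
```

In practice the extrapolation step is the hard part: away from training data, the learned decoder is unconstrained, which is precisely where the human operator's expertise is meant to guide the generation.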
It is emphasized that facial recognition is used here as an example only; the high-dimensional data of interest to this effort may be imagery, video, etc., derived from electro-optic, sonar, radar, etc. Additionally, the data characteristics to be interactively explored by the human include any meaningful characteristics of the data (e.g., target shape, pose, appearance, motion, motion characteristics, background effects). Finally, recall that a low-dimensional manifold may still have dimensionality greater than three; therefore, a significant portion of this effort should focus on how the human may effectively interact with the low-dimensional data to explore the implications in the high-dimensional representation.
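One simple interaction pattern for a manifold of dimensionality greater than three is a "slider" sweep: vary one latent coordinate at a time while holding the others fixed, and inspect the resulting high-dimensional samples. The sketch below is illustrative only; the random linear-plus-`tanh` decoder stands in for a learned generative mapping, and all names are assumptions.

```python
import numpy as np

d, D = 5, 100                            # latent dim (> 3), data dim
rng = np.random.default_rng(1)
W = rng.standard_normal((d, D))          # stand-in for a learned decoder

def decode(z):
    """Hypothetical generative map from latent coordinates to data."""
    return np.tanh(z @ W)

anchor = np.zeros(d)                     # operator's current position
axis = 2                                 # the one latent axis being swept
sweep = np.linspace(-2.0, 2.0, 9)

frames = []
for v in sweep:
    z = anchor.copy()
    z[axis] = v                          # move along a single axis only
    frames.append(decode(z))

frames = np.stack(frames)                # (9, 100): one sample per step
print(frames.shape)
```

Presenting such sweeps one axis at a time is only one possible interface; evaluating which interactions are actually intuitive for a human operator is the research question this topic poses.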
PHASE I: Investigate methods and techniques for a human operator to generate meaningful, previously unseen data in a high-dimensional space by exploring a low-dimensional manifold in an intuitive and efficient manner (i.e., the interpolation goal). For Phase I, developers should use their own data, publicly available data, or data they acquire after approval by the technical point of contact (TPOC) (i.e., data will not be provided by the government for Phase I). The nature of this data is less important and may, in general, be anything that is intuitive to a human (e.g., camera images, video). Additionally, the data provided by the performers need not be from a military application.
The option period will ensure that the methods and techniques investigated in the base effort are amenable to manifolds of dimensionality greater than three. It will also begin investigating methods and techniques for exploring data beyond the currently characterized regions of the existing manifold and for extending the dimensionality of the existing manifold in a meaningful way (i.e., the extrapolation goal).
PHASE II: Develop a prototype software system, complete with a user interface, for both interpolation and extrapolation. Phase II will be initiated with data provided by the developer; however, the government may elect to provide additional data and/or sensing modalities as appropriate. Again, the nature of the data is secondary; the primary focus of this effort should be on the interaction between the human and the data.
The option period will be used to expand the robustness and effectiveness of the interface with the human. Emphasis here will be placed on the extrapolation goal.
PHASE III: Extend the software to operate effectively, robustly, and with fault tolerance across the full spectrum of government-provided data. This will involve significant coordination with a government laboratory to fully integrate and test in the program of record.
PRIVATE SECTOR COMMERCIAL POTENTIAL/DUAL-USE APPLICATIONS: This capability is applicable to any recognition system in any problem domain that requires gathering training data. Therefore, it has significant commercialization potential (e.g., medical, entertainment, web, etc.).
REFERENCES:
2. Roweis, Sam T. and Lawrence K. Saul. 2000. "Nonlinear Dimensionality Reduction by Locally Linear Embedding." Science 290: 2323-2326. Accessed December 2, 2011. doi: 10.1126/science.290.5500.2323.
3. Coifman, Ronald R. and Stephanie Lafon. 2006. "Diffusion Maps." Applied and Computational Harmonic Analysis: Special Issue on Diffusion Maps and Wavelets 21, no. 1: 5-30. Accessed December 2, 2011. doi: 10.1016/j.acha.2006.04.006.
4. Chen, Haojun, Jorge Silva, David Dunson, and Lawrence Carin. 2010. "Hierarchical Bayesian Embeddings for Analysis and Synthesis of Dynamic Data." 2010 AAAI Fall Symposium Series. Accessed December 2, 2011. http://people.ee.duke.edu/~lcarin/Haojun_TSP7.pdf.
KEYWORDS: Compression; machine learning; manifold learning; low-dimensional; sparse; target recognition