2015
Architecture Exploration in the META Toolchain
META Design Space Exploration Using Dynamics
This report (1) presents use cases and requirements for a vehicle information architecture platform (VIAP), (2) reviews and evaluates the Automotive Open System Architecture (AUTOSAR) and the Distributed Real-time Managed System (DREMS) architecture specifications, and (3) presents a preliminary architecture specification for the VIAP that addresses the needs of the DARPA Adaptive Vehicle Make program.
Contemporary engineering information system designs are generally interdisciplinary and exceedingly complex. As a result, managing and understanding these systems collaboratively poses significant challenges to end users. In this research, we studied and developed visualization and collaboration techniques to facilitate the comprehension and management of highly complex engineering information systems. Existing commercial and research visualization tools address only applications in specific domains. This paper introduces two techniques applicable to large-scale models across various domains and integrated within a web-based modeling platform, WebGME. The techniques presented are 1) domain-specific visualization, which represents components in each domain with conventional or meaningful icons, and 2) model connectivity abstraction, which provides domain-independent, context-aware abstraction of model connections.
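To make the second technique concrete, the following minimal Python sketch collapses port-level connections into component-level edges, which is the essence of connectivity abstraction: a large model can then be viewed at a coarser granularity while retaining how strongly components are coupled. The component and port names are invented for illustration and are not taken from the paper or from WebGME.

    from collections import defaultdict

    # Hypothetical port-level connections: (component, port) -> (component, port)
    port_connections = [
        (("Pump", "out"), ("Valve", "in")),
        (("Valve", "ctrl"), ("Controller", "cmd")),
        (("Controller", "sense"), ("Pump", "status")),
    ]

    def abstract_connections(connections):
        """Collapse port-to-port links into component-to-component edges,
        keeping a count so the abstraction stays context-aware."""
        edges = defaultdict(int)
        for (src_comp, _), (dst_comp, _) in connections:
            edges[(src_comp, dst_comp)] += 1
        return dict(edges)

    print(abstract_connections(port_connections))
    # {('Pump', 'Valve'): 1, ('Valve', 'Controller'): 1, ('Controller', 'Pump'): 1}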
The increasing complexity and heterogeneity of operations necessitate sophisticated planning tools and processes that allow the rapid evaluation of alternative Courses of Action (COA) and give timely feedback to planners. As the domain of Air Force operations extends across air, space, and cyberspace, the performance of existing processes and their supporting computational tools no longer meets operators' needs. COA evaluation should be rapid, allowing fast planning and change cycles. While raw computational power is becoming readily available and connectivity is improving, software tools that allow that power to be harnessed effectively are lagging behind. Simulation-based evaluation of COAs is complex, as it involves multiple, heterogeneous domains, each with its own tools and simulations. Configuring and integrating these simulations into a coherent framework, namely a federation of simulations, is a difficult, time-consuming, labor-intensive, and error-prone task. Consequently, COA evaluation cannot be done rapidly enough to provide timely answers to planners. Because each COA has to be tested against a number of scenarios and situations, typically at most one COA, with some minor variants, is analyzed. The problem becomes more acute as the focus shifts to integrated C2, where several COAs developed by different Command Centers (e.g., the command centers for Air, Space, and Cyber Operations) have to be integrated and the result evaluated. Designing and efficiently deploying such a computational resource on a high-performance computing platform is a major challenge. In the course of the work described in this report, our team devised, designed, constructed, and demonstrated a novel software framework and tool environment to support the evaluation of operational sequences that are prepared by human experts and derived from COAs.
The Internet of Things (IoT) paradigm has given rise to a new class of applications wherein complex data analytics must be performed in real-time on large volumes of fast-moving and heterogeneous sensor-generated data. Such data streams are often unbounded and must be processed in a distributed and parallel manner to ensure timely processing and delivery to interested subscribers. Dataflow architectures based on event-based design have served well in such applications because events support asynchrony and loose coupling, and help build resilient, responsive, and scalable applications. However, a unified programming model for event processing and distribution that can naturally compose the processing stages in a dataflow while exploiting the inherent parallelism available in the environment and computation is still lacking. To that end, we investigate the benefits of blending Reactive Programming with data distribution frameworks for building distributed, reactive, and high-performance stream-processing applications. Specifically, we present insights from our study integrating and evaluating Microsoft .NET Reactive Extensions (Rx) with the OMG Data Distribution Service (DDS), a standards-based publish/subscribe middleware suitable for demanding industrial IoT applications. We present several key insights from both qualitative and quantitative evaluations of our approach.
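The central idea, composing processing stages reactively over a publish/subscribe topic, can be illustrated with a small self-contained Python sketch. The Topic class below is a toy stand-in for a DDS topic, not the DDS or Rx API, and the pipeline stages are hypothetical.

    class Topic:
        """Toy stand-in for a publish/subscribe topic (not the DDS API)."""
        def __init__(self):
            self._subscribers = []

        def subscribe(self, callback):
            self._subscribers.append(callback)

        def publish(self, sample):
            for cb in self._subscribers:
                cb(sample)

    def rx_map(source, fn):
        """Derive a new topic whose samples are fn(sample) of the source."""
        out = Topic()
        source.subscribe(lambda s: out.publish(fn(s)))
        return out

    def rx_filter(source, pred):
        """Derive a new topic that forwards only samples satisfying pred."""
        out = Topic()
        def forward(sample):
            if pred(sample):
                out.publish(sample)
        source.subscribe(forward)
        return out

    # Compose a dataflow: raw sensor readings -> Celsius -> over-threshold alerts.
    raw = Topic()
    celsius = rx_map(raw, lambda f: (f - 32) * 5.0 / 9.0)
    alerts = rx_filter(celsius, lambda c: c > 40.0)
    alerts.subscribe(lambda c: print(f"ALERT: {c:.1f} C"))

    for reading in (98.6, 120.0, 75.2):   # Fahrenheit samples
        raw.publish(reading)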
From System Modeling to Formal Verification
Due to increasing design complexity, modern systems are modeled at a high level of abstraction. SystemC is widely accepted as a system-level language for modeling complex embedded systems. Verifying these SystemC designs prevents errors from propagating down to the hardware. Because SystemC lacks formal semantics, the verification of such designs is done mostly in an unsystematic manner. This paper provides a new modeling environment that enables the designer to simulate and formally verify designs by generating SystemC code. The generated SystemC code is automatically translated to timed automata for formal analysis.
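As a hedged illustration of the translation idea only (not the paper's code generator or its actual output format), the Python sketch below represents a trivial process with timed waits and converts it into a timed-automaton-like structure of clock-guarded locations and transitions.

    # Toy representation of a cyclic process with timed waits, and its translation
    # into a timed-automaton-like structure (locations, one clock, guarded edges).
    # The process description format here is hypothetical, not SystemC itself.
    process = [
        ("sample", {"wait_ns": 10}),   # wait 10 ns, then sample the input
        ("compute", {"wait_ns": 5}),   # wait 5 ns, then compute the output
    ]

    def to_timed_automaton(steps, clock="x"):
        locations, transitions = [], []
        for i, (name, attrs) in enumerate(steps):
            delay = attrs["wait_ns"]
            # The invariant keeps the automaton from lingering past the delay.
            locations.append({"id": f"L{i}_{name}", "invariant": f"{clock} <= {delay}"})
            nxt = (i + 1) % len(steps)
            transitions.append({
                "from": f"L{i}_{name}",
                "to": f"L{nxt}_{steps[nxt][0]}",
                "guard": f"{clock} == {delay}",
                "reset": [clock],
            })
        return {"locations": locations, "transitions": transitions}

    print(to_timed_automaton(process))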
Cyber-Physical Systems (CPS) are systems with seamless integration of physical, computational, and networking components. Because these systems can directly affect physical components, it is critical to safeguard them against a wide range of attacks. In this paper, it is argued that an effective approach to achieving this goal is to systematically identify potential threats at the design phase of building such systems, commonly achieved via threat modeling. In this context, a tool for systematic threat-modeling analysis of CPS is proposed. A real-world wireless railway temperature monitoring system is used as a case study to validate the proposed approach. The threats identified in the system are subsequently mitigated using National Institute of Standards and Technology (NIST) standards.
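A minimal Python sketch of systematic threat enumeration in the spirit described above, using STRIDE categories over data flows; the flows and the trust-boundary flags are illustrative assumptions, not the paper's analysis of the railway system.

    # STRIDE categories applied per data flow; the flows and the boundary flags
    # below are illustrative assumptions, not results from the paper.
    STRIDE = ["Spoofing", "Tampering", "Repudiation",
              "Information disclosure", "Denial of service",
              "Elevation of privilege"]

    data_flows = [
        {"name": "sensor -> gateway (wireless)", "crosses_trust_boundary": True},
        {"name": "gateway -> monitoring server",  "crosses_trust_boundary": True},
        {"name": "server -> operator dashboard",  "crosses_trust_boundary": False},
    ]

    def enumerate_threats(flows):
        """Flag every STRIDE category for flows that cross a trust boundary."""
        threats = []
        for flow in flows:
            if flow["crosses_trust_boundary"]:
                for category in STRIDE:
                    threats.append((flow["name"], category))
        return threats

    for flow_name, category in enumerate_threats(data_flows):
        print(f"{flow_name}: {category}")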
This paper discusses a distributed diagnosis approach in which each subsystem diagnoser operates independently, without a coordinator that combines local results, and still generates the correct global diagnosis. In addition, the distributed diagnosis algorithm is designed to minimize communication between the subsystems. A Minimal Structurally Overdetermined (MSO) set selection approach is formulated as a Binary Integer Linear Programming (BILP) optimization problem for subsystem diagnoser design. For cases where a complete global model of the system may not be available, we develop a heuristic approach in which individual subsystem diagnosers are designed incrementally, starting with the local subsystem MSOs and progressively extending the local set to include MSOs from the subsystem's immediate neighbors. Additional neighbors are included until the MSO set ensures correct global diagnosis results. A multi-tank system is used to demonstrate and validate the proposed methods.
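To make the selection step concrete, here is a toy Python brute-force stand-in for the BILP formulation: it picks the fewest MSOs whose combined fault signatures still detect and isolate every fault. The sensitivity data are invented for illustration and do not come from the paper.

    from itertools import combinations

    # Invented example: which faults each candidate MSO (residual) is sensitive to.
    mso_sensitivity = {
        "MSO1": {"f1", "f2"},
        "MSO2": {"f2", "f3"},
        "MSO3": {"f1", "f3"},
        "MSO4": {"f3"},
    }
    faults = {"f1", "f2", "f3"}

    def signature(selection, fault):
        """Fault signature: which selected MSOs respond to the fault."""
        return frozenset(m for m in selection if fault in mso_sensitivity[m])

    def isolates_all(selection):
        """Every fault must be detected and have a unique signature."""
        sigs = {f: signature(selection, f) for f in faults}
        detected = all(sigs[f] for f in faults)
        unique = len(set(sigs.values())) == len(faults)
        return detected and unique

    # Brute force in place of a BILP solver: the smallest feasible selection wins.
    best = None
    for k in range(1, len(mso_sensitivity) + 1):
        for selection in combinations(mso_sensitivity, k):
            if isolates_all(selection):
                best = selection
                break
        if best:
            break

    print("Minimal MSO selection:", best)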