Publications
Distributed real-time and embedded (DRE) systems executing mixed-criticality task sets are increasingly being deployed on mobile and embedded cloud computing platforms, including space applications. These DRE systems must not only operate over a range of temporal and spatial scales, but also provide stringent assurances of secure interaction among the system's tasks without violating their individual timing constraints. To address these challenges, this paper describes a novel distributed operating system, focusing on the scheduler design needed to support mixed-criticality task sets. Empirical results from experiments involving a case study of a cluster of satellites, emulated in a laboratory testbed, validate our claims.
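As a rough illustration of the scheduling concern described above, the sketch below shows a Vestal-style mixed-criticality mode switch in Python: when a high-criticality task overruns its optimistic budget, the scheduler sheds low-criticality work to protect high-criticality deadlines. The task names, fields, and switch rule are illustrative assumptions, not the paper's actual scheduler design.

```python
# A minimal sketch of a Vestal-style mixed-criticality mode switch; the
# paper's actual scheduler may differ. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    criticality: str   # "HI" or "LO"
    budget_lo: float   # optimistic (LO-mode) execution budget
    budget_hi: float   # pessimistic (HI-mode) execution budget

class MixedCriticalityScheduler:
    def __init__(self, tasks):
        self.tasks = list(tasks)
        self.mode = "LO"

    def on_budget_overrun(self, task: Task, observed: float):
        # If a HI task overruns its optimistic budget, switch modes and
        # shed LO work so HI deadlines stay protected.
        if task.criticality == "HI" and observed > task.budget_lo:
            self.mode = "HI"
            self.tasks = [t for t in self.tasks if t.criticality == "HI"]

    def runnable(self):
        return [t.name for t in self.tasks]

sched = MixedCriticalityScheduler([
    Task("attitude_control", "HI", 2.0, 5.0),
    Task("telemetry", "LO", 1.0, 1.0),
])
sched.on_budget_overrun(sched.tasks[0], observed=2.4)
print(sched.mode, sched.runnable())   # HI ['attitude_control']
```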
Cyber-Physical Vulnerability Analysis
Improvements in mobile networking, combined with the ubiquitous availability and adoption of low-cost development boards, have enabled the vision of mobile Cyber-Physical System (CPS) platforms, such as fractionated spacecraft and UAV swarms. These systems are characterized by computation and communication resources, sensors, and actuators that are shared among different applications. Their cyber-physical nature means that the physical environment can affect both resource availability and the software applications that depend on it. While many application development and management challenges associated with such systems have been described in the existing literature, resilient operation and execution have received less attention. This paper describes our work on improving runtime support for resilience in mobile CPS, with a special focus on our runtime infrastructure that provides autonomous resilience via self-reconfiguration. We also describe the interplay between this runtime infrastructure and our design-time tools, as the latter is used to statically determine the resilience properties of the former. Finally, we present a use case study to demonstrate and evaluate our design-time resilience analysis and runtime self-reconfiguration infrastructure.
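To make the self-reconfiguration idea concrete, here is a minimal monitor-and-redeploy loop in Python. The node names, data layout, and placement rule are hypothetical; the paper's runtime infrastructure is considerably richer.

```python
# A minimal sketch of the kind of monitor/reconfigure loop an autonomous
# resilience infrastructure might run; names and structure are illustrative,
# not the paper's actual API.
def monitor(deployment):
    """Return the set of nodes that have stopped responding (stubbed here)."""
    return {node for node in deployment if not deployment[node]["alive"]}

def reconfigure(deployment, failed):
    """Move applications off failed nodes onto the least-loaded survivors."""
    survivors = [n for n in deployment if n not in failed]
    for node in failed:
        for app in deployment[node]["apps"]:
            target = min(survivors, key=lambda n: len(deployment[n]["apps"]))
            deployment[target]["apps"].append(app)
        deployment[node]["apps"] = []
    return deployment

deployment = {
    "node_a": {"alive": False, "apps": ["imaging"]},
    "node_b": {"alive": True, "apps": ["comms"]},
    "node_c": {"alive": True, "apps": []},
}
deployment = reconfigure(deployment, monitor(deployment))
print(deployment["node_c"]["apps"])  # ['imaging']
```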
Reconciling the Tension Between Hardware Isolation and Data Sharing in Mixed-Criticality, Multicore Systems
Recent work involving a mixed-criticality framework called MC^2 has shown that, by combining hardware-management techniques and criticality-aware task provisioning, capacity loss can be significantly reduced when supporting real-time workloads on multicore platforms. However, as in most other prior research on multicore hardware management, tasks were assumed in that work to not share data. Data sharing is problematic in the context of hardware management because it can violate the isolation properties hardware-management techniques seek to ensure. Clearly, for research on such techniques to have any practical impact, data sharing must be permitted. Towards this goal, this paper presents a new version of MC^2 that permits tasks to share data within and across criticality levels through shared memory. Several techniques are presented for mitigating capacity loss due to data sharing. The effectiveness of these techniques is demonstrated by means of a large-scale, overhead-aware schedulability study driven by micro-benchmark data.
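One classic wait-free mechanism for sharing data without blocking (so one task cannot arbitrarily extend another's worst case) is a single-writer double buffer, sketched below in Python; the actual mitigation techniques evaluated in the paper may differ. Note that the sketch leans on the interpreter's atomic assignments; a real multicore implementation would need memory barriers.

```python
# A sketch of one standard wait-free sharing technique (a single-writer
# double buffer), given as background only; it is not presented as the
# paper's mechanism.
class DoubleBuffer:
    def __init__(self, initial):
        self.buffers = [initial, initial]
        self.active = 0          # index readers should use

    def write(self, value):
        # The writer fills the inactive buffer, then flips the index,
        # so a reader never observes a half-written value.
        inactive = 1 - self.active
        self.buffers[inactive] = value
        self.active = inactive

    def read(self):
        return self.buffers[self.active]

buf = DoubleBuffer(initial=0)
buf.write(42)
print(buf.read())   # 42
```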
The multicore revolution is having limited impact in safety-critical application domains. A key reason is the "one-out-of-m" problem: when validating real-time constraints on an m-core platform, excessive analysis pessimism can effectively negate the processing capacity of the additional m-1 cores, so that only "one core's worth" of capacity is available. Two approaches have been investigated previously to address this problem: mixed-criticality allocation techniques, which provision less-critical software components less pessimistically, and hardware-management techniques, which make the underlying platform itself more predictable. A better way forward may be to combine both approaches, but to show this, fundamentally new criticality-cognizant hardware-management tradeoffs must be explored. Such tradeoffs are investigated herein in the context of a large-scale, overhead-aware schedulability study. This study was guided by extensive trace data obtained by executing benchmark tasks on a new variant of the MC^2 framework that supports configurable criticality-based hardware management. The study shows that the two approaches mentioned above can be much more effective when applied together than alone.
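A toy calculation with made-up numbers shows why the problem earns its name: if cross-core interference forces each task's analyzed worst-case execution time to be inflated by a factor near m, then one core's worth of measured work fills the entire analyzed platform.

```python
# Illustrative numbers only; not data from the paper.
m = 4                   # cores on the platform
pessimism = 4.0         # assumed WCET inflation from cross-core interference

# A workload whose measured demand is 1.0 (one core's worth) ...
measured_util = 1.0
# ... analyzes as filling the entire platform once inflated:
analyzed_util = measured_util * pessimism
print(analyzed_util >= m)   # True: m cores admit only ~one core's worth
```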
Automatically reasoning about metamodeling
Metamodeling is foundational to many modeling frameworks, and so it is important to formalize and reason about it. Ideally, correctness proofs and test-case generation on the metamodeling framework should be automatic. However, it has yet to be shown that extensive automated reasoning on metamodeling frameworks can be achieved. In this paper, we present one approach to this problem: metamodeling frameworks are specified modularly using algebraic data types and constraint logic programming (CLP). Proofs and test-case generation are encoded as open-world query operations and solved automatically.
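The Python sketch below mirrors only the algebraic-data-type flavor of this encoding, with a metamodel as nested immutable values and conformance as a well-formedness query; the CLP solver that makes proofs and test-case generation automatic is not represented, and all names are illustrative.

```python
# ADT-style metamodel encoding via immutable dataclasses; the paper's CLP
# machinery for proofs and test generation is deliberately omitted.
from dataclasses import dataclass

@dataclass(frozen=True)
class Clazz:
    name: str

@dataclass(frozen=True)
class Edge:          # an association between two metamodel classes
    src: Clazz
    dst: Clazz

@dataclass(frozen=True)
class Metamodel:
    classes: frozenset
    edges: frozenset

def conforms(mm: Metamodel) -> bool:
    # Well-formedness query: every edge endpoint must be a declared class.
    return all(e.src in mm.classes and e.dst in mm.classes for e in mm.edges)

a, b = Clazz("State"), Clazz("Transition")
mm = Metamodel(frozenset({a, b}), frozenset({Edge(a, b)}))
print(conforms(mm))   # True
```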
Polyglot is a tool for the systematic analysis of systems integrated from components built using multiple Statechart formalisms. In Polyglot, Statechart models are translated into a common Java representation with pluggable semantics for different Statechart variants. Polyglot is tightly integrated with the Java Pathfinder verification tool-set, providing analysis and test-case generation capabilities. The tool has been applied in the context of safety-critical software systems whose interacting components were modeled using multiple Statechart formalisms.
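Polyglot's common representation is Java; the Python sketch below illustrates only the underlying idea of pluggable semantics, with one transition structure interpreted under two interchangeable transition-selection policies. The policies shown are simplified assumptions, not the actual semantics of any Statechart variant.

```python
# One state-machine structure, two pluggable semantic policies; both
# policies are invented for illustration.
class Statechart:
    def __init__(self, transitions, initial):
        self.transitions = transitions   # state -> list of (event, target)
        self.state = initial

    def step(self, event, semantics):
        self.state = semantics(self.transitions[self.state], event, self.state)

def first_match(transitions, event, current):
    # One variant: take the first enabled transition in document order.
    for ev, target in transitions:
        if ev == event:
            return target
    return current

def last_match(transitions, event, current):
    # Another variant: later declarations override earlier ones.
    result = current
    for ev, target in transitions:
        if ev == event:
            result = target
    return result

sc = Statechart({"idle": [("go", "run"), ("go", "halt")], "run": [], "halt": []}, "idle")
sc.step("go", first_match)
print(sc.state)   # run  (under last_match it would be halt)
```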
System integration is the elephant in the china store of large-scale cyber-physical system (CPS) design. It would be hard to find any other technology that is more undervalued scientifically and at the same time has a bigger impact on the present and future of engineered systems. The unique challenges in CPS integration emerge from the heterogeneity of components and interactions. This heterogeneity drives the need for modeling and analyzing cross-domain interactions among physical and computational/networking domains and demands a deep understanding of the effects of heterogeneous abstraction layers in the design flow. To address the challenges of CPS integration, significant progress needs to be made toward a new science and technology foundation that is model based, precise, and predictable. This paper presents a theory of composition for heterogeneous systems focusing on stability. Specifically, the paper presents a passivity-based design approach that decouples stability from timing uncertainties caused by networking and computation. In addition, the paper describes cross-domain abstractions that provide an effective solution for model-based, fully automated software synthesis and high-fidelity performance analysis. The design objectives demonstrated using the techniques presented in the paper are group coordination for networked unmanned air vehicles (UAVs) and high-confidence embedded control software design for a quadrotor UAV. Open problems in the area are also discussed, including the extension of the theory of compositional design to guarantee properties beyond stability, such as safety and performance.
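The passivity notion this design approach builds on is usually captured by a dissipation inequality; the textbook form below (for input u, output y, state x, and storage function V) is given for orientation and may differ from the paper's exact formulation.

```latex
% Standard passivity (dissipation) inequality; a textbook form, not
% necessarily the paper's exact formulation.
\int_{0}^{T} u(t)^{\top} y(t)\,dt \;\ge\; V\bigl(x(T)\bigr) - V\bigl(x(0)\bigr)
\quad \text{for all } T \ge 0 .
```

Because the inequality bounds how much energy a component can inject, interconnections of passive components remain stable even when networking and computation perturb timing, which is what lets the approach decouple stability from timing uncertainty.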
In model-based development, verification techniques can be used to check whether an abstract model satisfies a set of properties. Ideally, implementation code generated from these models can also be verified against similar properties. However, the distance between the property specification languages and the implementation makes verifying such generated code difficult. Optimizations and renamings can blur the correspondence between the two, further increasing the difficulty of specifying verification properties on the generated code. This paper describes methods for specifying verification properties on abstract models that are then checked on implementation level code. These properties are translated by an extended code generator into implementation code and special annotations that are used by a software model checker.
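A minimal Python sketch of the pipeline shape described above: a model-level invariant is emitted twice by the generator, once as a checker-readable annotation and once as an executable guard. The property tuple, annotation format, and helper function are invented for illustration, not the paper's actual notation.

```python
# Hypothetical translation of a model-level invariant into generated code
# plus a special annotation for a software model checker.
def generate(model_property):
    """Translate a model-level invariant into an annotation and a guard."""
    var, op, bound = model_property          # e.g. ("speed", "<=", 120)
    annotation = f"// @invariant {var} {op} {bound}"   # for the model checker
    guard = f"assert {var} {op} {bound}, 'model invariant violated'"
    return annotation, guard

annotation, guard = generate(("speed", "<=", 120))
print(annotation)   # // @invariant speed <= 120
print(guard)        # assert speed <= 120, 'model invariant violated'

speed = 90
exec(guard)   # passes: the generated assertion enforces the invariant at runtime
```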
The paper describes a model-integrated approach for embedded software development that is based on domain-specific, multiple-view models used in all phases of the development process. Models explicitly represent the embedded software and the environment it operates in, and capture the requirements and the design of the application, simultaneously. Models are descriptive, in the sense that they allow the formal analysis, verification, and validation of the embedded system at design time. Models are also generative, in the sense that they carry enough information for automatically generating embedded systems using the techniques of program generators. Because of the widely varying nature of embedded systems, a single modeling language may not be suitable for all domains; thus, modeling languages are often domain-specific. To decrease the cost of defining and integrating domain-specific modeling languages and corresponding analysis and synthesis tools, the model-integrated approach is applied in a metamodeling architecture, where formal models of domain-specific modeling languages, called metamodels, play a key role in customizing and connecting components of tool chains. This paper discusses the principles and techniques of model-integrated embedded software development in detail, as well as the capabilities of the tools supporting the process. Examples in terms of real systems will be given that illustrate how the model-integrated approach addresses the physical nature, the assurance issues, and the dynamic structure of embedded software.
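As a toy illustration of the "generative" role of models, the sketch below renders a tiny domain-specific model into skeleton code; it stands in for the paper's full model-integrated program-generation tool chain, and every name in it is hypothetical.

```python
# A toy program generator: a domain-specific model (here a plain dict)
# rendered into code by a template. Purely illustrative.
model = {
    "component": "ThermostatController",
    "inputs": ["temperature"],
    "setpoint": 21.5,
}

def synthesize(m):
    lines = [f"class {m['component']}:"]
    for inp in m["inputs"]:
        lines.append(f"    def read_{inp}(self): ...")
    lines.append(f"    SETPOINT = {m['setpoint']}")
    return "\n".join(lines)

print(synthesize(model))
```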