2015
Next generation large-scale distributed systems – such as smart cities – are dynamic, heterogeneous, and multi-domain in nature. The same is true for applications hosted on these systems. Application heterogeneity stems from their Unit of Composition (UoC): some applications might be coarse-grained and composed from processes or actors, whereas others might be fine-grained and composed from software components. Software components can further amplify heterogeneity, since there exist different component models for different domains. Lifecycle management of such distributed, heterogeneous applications is a considerable challenge.
In this paper, we address this problem by reasoning about these systems as a Software Product Line (SPL) in which individual dimensions of heterogeneity can be considered as product variants. To enable such reasoning, first, we present UMRELA (Universal feature-Model for distRibutEd appLicAtions), a conceptual feature model that identifies commonalities and variability points for capturing and representing distributed applications and their target system. This results in a product line of a family of distributed applications. UMRELA facilitates representation of the initial configuration point and of the configuration space of the system. The latter represents all possible states the system can reach and is used as an implicit encoding to calculate new configuration points at runtime. Second, we present a prototype Application Management Framework (AMF) as a proof-of-concept configuration management tool that uses UMRELA to manage heterogeneous distributed applications.
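As a rough illustration of the idea, the following Python sketch shows features with alternative variants and a configuration space enumerated under cross-tree constraints. All names here are invented for illustration; UMRELA's actual feature model is richer than this.

```python
# Hypothetical sketch of a feature model for distributed applications.
# Names (Feature, FeatureModel, configurations) are illustrative only
# and do not reflect UMRELA's actual schema.
from itertools import product

class Feature:
    def __init__(self, name, alternatives=None, optional=False):
        self.name = name                        # e.g., "UnitOfComposition"
        self.alternatives = alternatives or []  # mutually exclusive variants
        self.optional = optional

class FeatureModel:
    def __init__(self, features, constraints=None):
        self.features = features
        self.constraints = constraints or []    # cross-tree constraints

    def configurations(self):
        """Enumerate the configuration space: every assignment of one
        alternative per feature that satisfies all constraints."""
        domains = [f.alternatives + (["<absent>"] if f.optional else [])
                   for f in self.features]
        for choice in product(*domains):
            config = dict(zip((f.name for f in self.features), choice))
            if all(c(config) for c in self.constraints):
                yield config

# A toy product line: component-based apps cannot use the actor runtime.
fm = FeatureModel(
    features=[
        Feature("UnitOfComposition", ["component", "process", "actor"]),
        Feature("Runtime", ["actor-runtime", "container"]),
    ],
    constraints=[lambda c: not (c["UnitOfComposition"] == "component"
                                and c["Runtime"] == "actor-runtime")],
)
for cfg in fm.configurations():
    print(cfg)
```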
Automatically reasoning about metamodeling
Metamodeling is foundational to many modeling frameworks, and so it is important to formalize and reason about it. Ideally, correctness proofs and test-case generation on the metamodeling framework should be automatic. However, it has yet to be shown that extensive automated reasoning on metamodeling frameworks can be achieved. In this paper, we present one approach to this problem: metamodeling frameworks are specified modularly using algebraic data types and constraint logic programming (CLP). Proofs and test-case generation are encoded as open-world query operations and automatically solved.
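To make the encoding idea concrete, here is a loose Python stand-in: a metamodel fragment as algebraic data types, with test-case generation as a bounded generate-and-test search. The paper uses constraint logic programming; this search is only a simplified analogue, and all types and the well-formedness rule are invented.

```python
# Illustrative sketch only: a metamodel fragment as algebraic data types,
# with test-case generation as a bounded, generate-and-test search (a
# simplified stand-in for the paper's CLP encoding).
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Node:
    name: str

@dataclass(frozen=True)
class Edge:
    src: Node
    dst: Node

def conforms(nodes, edges):
    """A toy well-formedness rule from a hypothetical metamodel:
    every edge connects declared nodes, and no node points to itself."""
    return all(e.src in nodes and e.dst in nodes and e.src != e.dst
               for e in edges)

def generate_test_case(max_nodes=3):
    """'Open world' style query: search for SOME model instance that
    conforms to the metamodel, rather than checking a given one."""
    nodes = [Node(f"n{i}") for i in range(max_nodes)]
    for src, dst in product(nodes, nodes):
        candidate = [Edge(src, dst)]
        if conforms(nodes, candidate):
            return nodes, candidate
    return None

print(generate_test_case())
```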
Resilient traffic light control (Poster)
2014
DREMS: A Model-Driven Distributed Secure Information Architecture Platform for Managed Embedded Systems
Architecting the software for a cloud computing platform built from mobile and distributed embedded devices incurs many challenges not present in traditional cloud computing. Effective resource and application management that guarantees performance and security isolation is required. This paper describes a practical design and runtime solution that incorporates modern software development practices and technologies along with novel approaches to address these challenges. The patterns and principles exhibited by our design will serve as guidelines for practitioners in this field.
Recent increases in the industrial adoption of Model-Based Engineering have created demand for more advanced tools, environments, and infrastructures. In response to the Defense Advanced Research Projects Agency's (DARPA) initiative in the Adaptive Vehicle Make (AVM) program, we have designed and built VehicleFORGE, a collaborative environment tightly integrated with the AVM design tools. This paper describes VehicleFORGE concepts and the services facilitated by the cloud computing foundation of the infrastructure.
Virtual evaluation of complex Cyber-Physical Systems (CPS) [1], with a number of tightly integrated domains such as the physical, mechanical, electrical, thermal, and cyber domains, demands the use of heterogeneous simulation environments. Our previous effort, the C2 Wind Tunnel (C2WT) [2][3], attempted to solve the challenges of evaluating these complex systems as a whole by integrating multiple simulation platforms with varying semantics and by integrating and managing different simulation models and their interactions. Recently, great interest has developed in using the Functional Mockup Interface (FMI) [4] with a variety of dynamics simulation packages, particularly in the automotive industry. Leveraging the C2WT effort on effective integration of different simulation engines with different Models of Computation (MoCs), we propose in this paper to use the proven methods of High-Level Architecture (HLA)-based model and system integration. We identify the challenges of integrating Functional Mockup Units for Co-Simulation (FMU-CS) in general and via HLA [5] in particular, and present a novel model-based approach to rapidly synthesize an effective integration. The presented approach provides a unique opportunity to integrate readily available FMU-CS components with various specialized simulation packages in order to rapidly synthesize HLA-based integrated simulations of the overall composed Cyber-Physical System.
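For intuition, the sketch below shows a generic fixed-step co-simulation master over FMU-CS-like slaves. The FmuSlave class and its methods are invented placeholders, and the paper's approach additionally routes the exchanged signals through HLA's publish/subscribe and time-management services, which this sketch omits.

```python
# Minimal sketch of a fixed-step co-simulation master loop, assuming a
# hypothetical FmuSlave wrapper; NOT the paper's HLA-based integration.
class FmuSlave:
    """Placeholder for an FMU-CS instance (set inputs, step, read outputs)."""
    def __init__(self, name):
        self.name, self.inputs, self.outputs = name, {}, {}
    def set_inputs(self, values):
        self.inputs.update(values)
    def do_step(self, t, h):
        # A real FMU advances its internal solver from t to t + h here.
        self.outputs = {f"{self.name}.y": sum(self.inputs.values())}

def run_master(fmus, couplings, t_end, h):
    """Advance all FMUs in lock-step, exchanging signals between steps."""
    t = 0.0
    while t < t_end:
        for fmu in fmus:
            fmu.do_step(t, h)
        for (src_fmu, src_port), (dst_fmu, _) in couplings:
            dst_fmu.set_inputs({src_port: src_fmu.outputs[src_port]})
        t += h

engine = FmuSlave("engine"); ctrl = FmuSlave("controller")
engine.set_inputs({"throttle": 0.3}); ctrl.set_inputs({"engine.y": 0.0})
run_master([engine, ctrl], [((engine, "engine.y"), (ctrl, "engine.y"))],
           t_end=1.0, h=0.1)
print(ctrl.outputs)
```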
Cyber-Physical Systems (CPS) are engineered systems that require tight interaction between physical and computational components. Designing a CPS is highly challenging because these systems are inherently complex, need significant effort to describe and evaluate a vast set of cross-disciplinary interactions, and require seamless meshing of physical elements with corresponding software artifacts. Moreover, a large set of architectural and compositional alternatives must be systematically explored and evaluated in the context of a highly constrained design space. The constraints imposed on the selection of alternatives are derived from the system's functional, performance, dimensional, physical, and economical objectives. Furthermore, the design process of these systems is highly iterative and requires continuous integration of design generation with design selection and manipulation supported by design analyses. To enable the iterative design process for CPSs, we have developed a design toolchain, OpenMETA, built around a Domain-Specific Modeling Language (DSML) called the Cyber-Physical Modeling Language (CyPhyML). In this paper, we present the parts of OpenMETA that address the requirements of Design Space Exploration and Manipulation (DSEM) for CPSs.
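As a toy illustration of constrained design-space exploration, the snippet below enumerates component combinations and keeps only those meeting cross-domain limits. The component options and constraints are invented for illustration and are not taken from CyPhyML or OpenMETA.

```python
# A minimal sketch of design-space exploration under constraints, in the
# spirit of DSEM; all numbers and names below are invented.
from itertools import product

motors    = [{"name": "M1", "cost": 200, "mass": 4.0},
             {"name": "M2", "cost": 350, "mass": 2.5}]
batteries = [{"name": "B1", "cost": 100, "mass": 6.0},
             {"name": "B2", "cost": 180, "mass": 3.5}]

def feasible(motor, battery, max_cost=500, max_mass=8.0):
    """Cross-domain constraints: stay within budget and mass limits."""
    return (motor["cost"] + battery["cost"] <= max_cost and
            motor["mass"] + battery["mass"] <= max_mass)

designs = [(m["name"], b["name"])
           for m, b in product(motors, batteries) if feasible(m, b)]
print(designs)  # the surviving corner of the design space
```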
For outdoor navigation, GPS provides the most widely-used means of node localization; however, the level of accuracy provided by low-cost receivers is typically insufficient for use in high-precision applications such as land surveying, vehicle collision avoidance, and formation flying, among others. Additionally, these applications do not require precise absolute Earth coordinates, but rather rely on relative positioning to infer information about the geometric configuration of the constituent nodes in a system. This dissertation presents a novel approach that uses GPS to derive relative location information for a scalable network of single-frequency receivers. Nodes in a network share their raw satellite measurements and track the positions of neighboring nodes as opposed to computing their own absolute coordinates. Random and systematic errors are mitigated in novel ways, challenging long-standing beliefs that precision GPS systems require extensive stationary calibration times or complex equipment configurations. In addition to the mathematical basis for our technique, a working prototype is developed using a network of mobile devices with custom Bluetooth-enabled GPS sensors, enabling experimental evaluation of several real-world test scenarios. Our evaluation shows that sub-meter relative positioning accuracy at an update rate of 1 Hz is possible under various conditions with the presented technique. This is an order of magnitude more accurate than simply taking the difference of standalone receiver coordinates or using other simplistic approaches.
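The core enabler of such relative positioning is standard: differencing raw measurements between receivers cancels shared error sources. A textbook single-difference model (not necessarily the dissertation's exact formulation) is sketched below.

```latex
% Pseudorange from receiver r to satellite s, with geometric range \rho,
% receiver/satellite clock biases \delta t, ionospheric and tropospheric
% delays I and T, and noise \varepsilon:
%   P_r^s = \rho_r^s + c(\delta t_r - \delta t^s) + I^s + T^s + \varepsilon_r^s.
% Differencing two receivers a, b over the same satellite cancels the
% satellite clock error and, for short baselines, most of I and T:
\[
  \Delta P_{ab}^{s} \;=\; P_a^s - P_b^s
  \;\approx\; \rho_a^s - \rho_b^s \;+\; c\,(\delta t_a - \delta t_b)
  \;+\; \varepsilon_{ab}^{s},
\]
% leaving a quantity that constrains the relative position of the two
% receivers rather than their absolute Earth coordinates.
```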
Advanced RF Techniques for Wireless Sensor Networks: The Software-Defined Radio Approach
Traditional wireless sensor node designs follow a common architectural paradigm that connects a low-power integrated radio transceiver chip to a microcontroller. This approach facilitated research on communication protocols focused on the media access control layer and above, but the closed architecture of radio chips and the limited performance of microcontrollers prevented experimentation with novel communication protocols that require substantial physical-layer signal processing. Software-defined radios address these limitations through direct access to the baseband radio signals and an abundance of reconfigurable computing resources, but the power consumption of existing platforms of this kind renders them inapplicable to low-power wireless sensor networking.
This dissertation addresses this disparity by presenting a low-power wireless sensor platform with software-defined radio capabilities. The modular platform is built on a system-on-a-programmable-chip to provide sufficient reconfigurable computational resources for realizing complete physical layers, and it uses flash technology to reduce power consumption and support duty cycling. The direct access the platform provides to the baseband radio signals enables novel protocols and applications, a capability that is evaluated in two ways.
First, this is demonstrated by designing the physical layer of a spread-spectrum communication protocol. The protocol is optimized for data-gathering network traffic and leverages spectrum spreading both to enable an asynchronous multiple-access scheme and to increase the maximum hop distance between the sensor nodes and the base station. The performance of the communication protocol is evaluated through real-world experiments using the proposed wireless platform.
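To illustrate spectrum spreading in general terms, the toy example below spreads each data bit over a chip sequence and despreads by correlation; it is not the protocol's actual physical layer, and the chip sequence is invented.

```python
# A toy direct-sequence spread-spectrum example: each data bit is
# multiplied by a shared chip sequence, and the receiver despreads by
# correlating each chip-length window against the same sequence.
chips = [1, -1, 1, 1, -1, -1, 1, -1]           # shared spreading code
data  = [1, -1, 1]                              # bipolar data bits

tx = [bit * c for bit in data for c in chips]   # spread: one bit -> 8 chips

def despread(rx, code):
    """The sign of the correlation recovers each bit, and processing gain
    from the correlation is what buys noise margin (and hop distance)."""
    n = len(code)
    return [1 if sum(r * c for r, c in zip(rx[i:i + n], code)) > 0 else -1
            for i in range(0, len(rx), n)]

print(despread(tx, chips))  # -> [1, -1, 1]
```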
Second, a multi-carrier phase measurement method is developed for radio-frequency node localization. Compared to existing interferometric approaches, this method offers a measurement speedup of more than four orders of magnitude and requires no deliberately introduced carrier frequency offset. The operation of the multi-carrier approach is validated using the new platform in various experiments. The analysis of the collected phase measurement data led to a novel model for phase measurement-based distance estimation. This model is utilized to derive two maximum-likelihood distance estimators and their corresponding theoretical bounds in order to analyze and interpret the experimental results.
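For intuition, a common multi-carrier phase-ranging relation is sketched below; the dissertation's estimators follow from its own measurement model, so treat this as background rather than the method itself.

```latex
% The phase measured at carrier frequency f_i over distance d is ambiguous
% modulo a wavelength, with an unknown offset \varphi_0:
\[
  \varphi_i \;=\; \frac{2\pi f_i\, d}{c} \;+\; \varphi_0 \pmod{2\pi}.
\]
% Differencing the phases of two carriers f_1 > f_2 cancels \varphi_0 and
% extends the unambiguous range from c/f_i to c/(f_1 - f_2):
\[
  \varphi_1 - \varphi_2 \;=\; \frac{2\pi (f_1 - f_2)\, d}{c} \pmod{2\pi}
  \;\;\Longrightarrow\;\;
  d \;=\; \frac{c\,(\varphi_1 - \varphi_2)}{2\pi (f_1 - f_2)} \;+\; k\,\frac{c}{f_1 - f_2},
  \quad k \in \mathbb{Z}.
\]
```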
Molecular dynamics (MD) simulations play an important role in materials design. However, the effective use of the most widely used MD simulators requires significant expertise in the scientific domain and deep knowledge of the given software tool itself. In this paper, we present a tool that offers an intuitive, component-oriented approach to designing complex molecular systems and setting up the initial conditions of simulations. We integrate this tool into a web- and cloud-based software infrastructure, called MetaMDS, that lowers the barrier of entry into MD simulations for practitioners. The web interface makes it possible for experts to build a rich library of simulation components and for ordinary users to create full simulations by parameterizing and composing the components. A visual programming interface makes it possible to create optimization workflows in which the simulators are invoked multiple times with various parameter configurations based on the results of earlier simulation runs. Simulation configurations, including the various parameters, the versions of the tools used, and the results, are stored in a database to support searching and browsing of existing simulation outputs and to facilitate the reproducibility of scientific results.
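A hypothetical sketch of what component parameterization and composition might look like follows; the class names and API below are invented for illustration, not MetaMDS's actual interface.

```python
# Illustrative sketch of component-oriented simulation setup: parameterized
# components compose into a full simulator input script.
class SimulationComponent:
    def __init__(self, name, **params):
        self.name, self.params = name, params

    def render(self):
        """Emit the simulator input fragment this component contributes."""
        args = ", ".join(f"{k}={v}" for k, v in sorted(self.params.items()))
        return f"{self.name}({args})"

class Simulation:
    def __init__(self, *components):
        self.components = list(components)

    def to_input_script(self):
        # Composition: concatenate fragments; a real tool would also check
        # inter-component compatibility and record versions for provenance.
        return "\n".join(c.render() for c in self.components)

sim = Simulation(
    SimulationComponent("lennard_jones_box", n_atoms=1000, density=0.8),
    SimulationComponent("thermostat", kind="nose-hoover", T=300),
    SimulationComponent("run", steps=100_000, dt=0.002),
)
print(sim.to_input_script())
```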
Developers of information systems have always utilized various visual formalisms during the design process, albeit in an informal manner. Architecture diagrams, finite state machines, and signal flow graphs are just a few examples. Model Integrated Computing (MIC) is an approach that considers these design artifacts as first-class models and uses them to generate the system or subsystems automatically. Moreover, the same models can be used to analyze the system and to generate test cases and documentation. MIC advocates the formal definition of these formalisms, called domain-specific modeling languages (DSMLs), via metamodeling and the automatic configuration of modeling tools from the metamodels. However, current MIC infrastructures are based on desktop applications that support a limited number of platforms, discourage concurrent design collaboration, and do not scale. This paper presents WebGME, a cloud- and web-based cyberinfrastructure to support the collaborative modeling, analysis, and synthesis of complex, large-scale scientific and engineering information systems. It facilitates interfacing with existing external tools, such as simulators and analysis tools, provides custom domain-specific visualization support, and enables the creation of automatic code generators.
Molecular dynamics (MD) simulation is used increasingly often in materials design to reduce the costs associated with a purely experimental approach. Complex MD simulations are, however, notoriously hard to set up. Doing so requires expertise in several distinct areas, including the peculiarities of a particular simulator tool, the chemical properties of the family of materials being studied, and general C/C++ or Python programming. In this paper, we describe how MetaMDS, a web-based collaborative environment, allows experts from different domains to work together to create building blocks of MD simulations. These building blocks, capturing domain-specific knowledge at various levels of abstraction, are stored in a repository and shared with other users, who can reuse them to build complex simulation workflows. This approach has the potential to boost productivity in chemical and materials science research by separating concerns and promoting reuse in MD workflows.
In this paper we introduce a novel model-based reliability analysis methodology to guide the best maintenance practices for the different components in complex engineered systems. We have developed a tool that allows the system designer to explore the consequences of different design choices, and to assess the effects of faults and wear on critical components as a result of usage or age. The tool uses pre-computed simulations of usage scenarios for which performance metrics can be computed as functions of system configurations and faulty/worn components. These simulations make use of damage maps, which estimate component degradation as a function of usage or age. This allows the designer to determine the components and their respective fault modes that are critical with respect to the performance requirements of the design. Given a design configuration, the tool is capable of providing a ranked list of critical fault modes and their individual contributions to the likelihood of failing the different performance requirements. From this initial analysis it is possible to determine the components that have little to no effect on the probability of the system meeting its performance requirements. These components are likely candidates for reactive maintenance. Other component faults may affect the performance over the short or long run. Given a limit for allowable failure risk, it is possible to compute the Mean Time Between Failure (MTBF) for each of those fault modes. These time intervals, grouped by component or Line Replaceable Unit (LRU), are aggregated to develop a preventive maintenance schedule. The most critical faults may be candidates for Condition-Based Maintenance (CBM). For these cases, the specific fault modes considered for CBM also guide sensor selection and placement.
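For the step from a risk limit to a maintenance interval, one standard relation is sketched below; it assumes exponential lifetimes, which the paper's damage maps may refine or replace.

```latex
% If failures of a fault mode arrive with mean time MTBF, the probability
% of failing within an interval t is
\[
  r \;=\; 1 - e^{-t/\mathrm{MTBF}}
  \quad\Longrightarrow\quad
  t \;=\; -\,\mathrm{MTBF}\,\ln(1 - r),
\]
% so, for example, a fault mode with MTBF = 10{,}000 h and an allowable
% risk of r = 0.05 yields a preventive-maintenance interval of about 513 h.
```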
Designing embedded systems has become a complex and expensive task, and simulation and other analysis tools are taking on a bigger role in the overall design process. In an effort to speed up the design process, we present an algorithm for reducing the simulation time of large, complex models by creating a parallel schedule from a flattened set of equations that collectively capture the system behavior. The developed approach is evaluated on a multi-core desktop processor to estimate the speedup for a set of subsystem models.
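One common way to derive such a parallel schedule is to level a dependency graph of the flattened equations; the sketch below, which is not necessarily the paper's algorithm, groups equations into levels that can each be evaluated in parallel.

```python
# Level-based scheduling of a flattened equation set: each level depends
# only on earlier levels, so its equations can run concurrently.
def schedule_levels(deps):
    """deps maps each equation to the set of equations it reads from."""
    remaining = {eq: set(d) for eq, d in deps.items()}
    levels = []
    while remaining:
        ready = [eq for eq, d in remaining.items() if not d]
        if not ready:
            raise ValueError("cyclic dependencies; needs algebraic-loop handling")
        levels.append(ready)
        for eq in ready:
            del remaining[eq]
        for d in remaining.values():
            d.difference_update(ready)
    return levels

# e1 and e2 are independent; e3 needs both; e4 needs e3.
print(schedule_levels({"e1": [], "e2": [], "e3": ["e1", "e2"], "e4": ["e3"]}))
# -> [['e1', 'e2'], ['e3'], ['e4']]
```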
To minimize the design cost of a complex system and maximize performance, a design team ideally must be able to quantify reliability and mitigate risk at the earliest phases of the design process, where 80% of the cost is committed. This paper demonstrates the capabilities of a new System Reliability Exploration Tool based on the improved simulation capabilities of a system called Fault-Augmented Modelica Extension (FAME). This novel tool combines concepts from FMEA, traditional Reliability Analysis, and Quality Engineering to identify component failure modes and to quantify, and gain insight into, their impact over the time evolution of a system's lifecycle. We illustrate how to use the FAME System Reliability Exploration Tool through a vehicle design case study.