2017
With increasing advances in Internet-enabled devices, large cyber-physical
systems (CPS) are being realized by integrating several sub-systems
together. Analyzing and reasoning about different properties of such a CPS requires co-simulations that compose individual and heterogeneous simulators,
each of which addresses only certain aspects of the CPS. Often these
co-simulations are realized as point solutions or composed in an ad
hoc manner, which makes it hard to reuse, maintain and evolve these
co-simulations. Although our prior work on a model-based framework
called Command and Control Wind Tunnel (C2WT) supports distributed
co-simulations, many challenges remain unresolved. For instance,
evaluating these complex CPSs requires large amounts of computational
and I/O resources, for which the cloud is an attractive option; yet
there is a general lack of scientific approaches to deploying
co-simulations in the cloud. In this context, the key challenges
include (i) rapid provisioning and de-provisioning of experimental
resources in the cloud for different co-simulation workloads, (ii)
simulation incompatibilities and resource violations, (iii) reliable
execution of co-simulation experiments, and (iv) reproducible
experiments. Our solution builds upon the C2WT heterogeneous
simulation integration technology and leverages the Docker container
technology to provide a model-driven integrated tool-suite for
specifying experiment and resource requirements, and deploying
repeatable cloud-scale experiments. In this work, we present the core
concepts and architecture of our framework, and provide a summary of
our current work in addressing these challenges.
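
The container-based provisioning step can be illustrated with a minimal sketch using the Docker SDK for Python; the image name, network, resource limits, and helper functions below are hypothetical placeholders for illustration, not artifacts of the actual tool-suite.

    # Hedged sketch: provisioning and de-provisioning co-simulation containers
    # with the Docker SDK for Python. Image, network, and limits are
    # illustrative assumptions, not part of the actual framework.
    import docker

    client = docker.from_env()

    def provision_federate(name, image="cosim/federate:latest", cpus=1.0, mem="512m"):
        """Start one simulator federate in its own container with resource caps."""
        return client.containers.run(
            image,
            name=name,
            detach=True,
            nano_cpus=int(cpus * 1e9),   # CPU quota, in units of 1e-9 CPUs
            mem_limit=mem,
            network="cosim-net",         # assumed pre-created experiment network
            environment={"FEDERATE_NAME": name},
        )

    def deprovision(containers):
        """Stop and remove all experiment containers once the run completes."""
        for c in containers:
            c.stop()
            c.remove()

    federates = [provision_federate(f"federate-{i}") for i in range(3)]
    # ... run the co-simulation experiment and collect results ...
    deprovision(federates)

Because each federate is declared with explicit CPU and memory caps, the same experiment specification can be re-run with identical resource allocations, which is what makes the runs repeatable.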
The formation of microgrids has been proposed as a solution to improve grid reliability and enable smoother integration of renewables into the grid. Microgrids are sections of the grid that can operate in isolation from the main power system. Maintaining power balance within an islanded microgrid is a challenging task due to the low system inertia, which requires complex control to maintain stable and optimized operation. Many studies have demonstrated feasible distributed microgrid controllers that can maintain microgrid stability in grid-connected and islanded modes. However, there is little emphasis on how to implement these distributed algorithms on a computational platform that allows for fast and seamless deployment. This paper introduces a decentralized software platform called the Resilient Information Architecture Platform for Smart Systems (RIAPS) that runs on processors embedded with the microgrid components. As an example, we describe the implementation of distributed microgrid secondary control and resynchronization algorithms on the RIAPS platform. The controller developed on the RIAPS platform is validated on a real-time microgrid testbed.
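
Distributed secondary frequency control is commonly formulated as an integral correction on the local frequency error combined with a consensus term over neighboring units. The sketch below illustrates only that generic scheme, not the RIAPS controller itself; the gains, droop coefficients, loads, and communication graph are arbitrary assumptions.

    # Generic sketch of consensus-based secondary frequency control for an
    # islanded microgrid. Not the RIAPS implementation; all parameters are
    # illustrative assumptions.
    import numpy as np

    F_NOM = 60.0                               # nominal frequency (Hz)
    neighbors = {0: [1], 1: [0, 2], 2: [1]}    # assumed communication graph
    droop = np.array([0.04, 0.05, 0.06])       # droop gains (Hz per p.u. power)
    load = np.array([0.3, 0.5, 0.2])           # local loads (p.u.), arbitrary
    ki, kc, dt = 0.5, 0.2, 0.1                 # integral gain, consensus gain, time step

    omega_ref = np.full(3, F_NOM)              # secondary-control setpoints
    for _ in range(500):
        freq = omega_ref - droop * load        # primary (droop) response
        for i in range(3):
            consensus = sum(freq[j] - freq[i] for j in neighbors[i])
            # Integral action restores nominal frequency; the consensus term
            # keeps the units' corrections synchronized.
            omega_ref[i] += dt * (ki * (F_NOM - freq[i]) + kc * consensus)

    print("steady-state frequencies:", omega_ref - droop * load)

In steady state the frequencies converge back to the nominal value, which is the role of secondary control after the droop (primary) response has shared the load among the units.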
Cyber-Physical Systems (CPS) consist of embedded computers with sensing and actuation capability that are integrated into and tightly coupled with a physical system. Because the physical and cyber components of the system are tightly coupled, cybersecurity is important for ensuring the system functions properly and safely. However, the effects of a cyberattack on the whole system may be difficult to determine, analyze, and therefore detect and mitigate. This work presents a model-based software development framework integrated with a hardware-in-the-loop (HIL) testbed for rapidly deploying CPS attack experiments. The framework provides the ability to emulate low-level attacks and obtain platform-specific performance measurements that are difficult to obtain in a traditional simulation environment. The framework improves the cybersecurity design process, which can become more informed and better customized to the production environment of a CPS. The developed framework is illustrated with a case study of a railway transportation system.
Distributed real-time and embedded (DRE) systems executing mixed criticality task sets are increasingly being deployed in mobile and embedded cloud computing platforms, including space applications. These DRE systems must not only operate over a range of temporal and spatial scales, but also require stringent assurances for secure interactions between the system's tasks without violating their individual timing constraints. To address these challenges, this paper describes a novel distributed operating system focusing on the scheduler design to support the mixed criticality task sets. Empirical results from experiments involving a case study of a cluster of satellites emulated in a laboratory testbed validate our claims.
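
As a point of reference for how mixed-criticality scheduling is typically organized, the following is a generic adaptive mixed-criticality sketch in the spirit of mode-switched EDF; it is illustrative only and not the distributed scheduler described in the paper.

    # Generic mixed-criticality policy sketch (adaptive mixed-criticality style).
    # Illustrative only; not the paper's scheduler.
    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        deadline: float        # absolute deadline
        criticality: str       # "HI" or "LO"
        budget_lo: float       # execution budget assumed in LO mode
        executed: float = 0.0  # processor time already consumed

    def pick_next(ready, mode):
        """Earliest-deadline-first over the jobs admitted in the current mode."""
        eligible = [j for j in ready if mode == "LO" or j.criticality == "HI"]
        return min(eligible, key=lambda j: j.deadline, default=None)

    def on_tick(running, mode):
        """Switch to HI mode when a HI job exhausts its LO-mode budget."""
        running.executed += 1.0
        if mode == "LO" and running.criticality == "HI" and running.executed > running.budget_lo:
            return "HI"        # from here on, LO-criticality jobs are dropped
        return mode

The key property is that a budget overrun by a high-criticality job degrades service for low-criticality jobs rather than jeopardizing the high-criticality timing guarantees.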
This paper presents the time synchronization infrastructure for a low-cost run-time platform and application framework specifically targeting Smart Grid applications. Such distributed applications require the execution of reliable and accurate time-coordinated actions and observations both within islands of deployments and across geographically distant nodes. The time synchronization infrastructure is built on well-established technologies: GPS, NTP, PTP, PPS, and Linux with real-time extensions, running on low-cost BeagleBone Black hardware nodes. We describe the architecture, implementation, instrumentation approach, and performance results, and present an example from the application domain. We also discuss an important finding on the effect of the Linux RT_PREEMPT real-time patch on the accuracy of the PPS subsystem and its use for GPS-based time reference.
Providing Privacy, Safety, and Security in IoT-Based Transactive Energy Systems using Distributed Ledgers
Power grids are undergoing major changes due to rapid growth in renewable energy resources and improvements in battery technology. While these changes enhance sustainability and efficiency, they also create significant management challenges as the complexity of power systems increases. To tackle these challenges, decentralized Internet-of-Things (IoT) solutions are emerging, which arrange local communities into transactive microgrids. Within a transactive microgrid, "prosumers" (i.e., consumers with energy generation and storage capabilities) can trade energy with each other, thereby smoothing the load on the main grid using local supply. It is hard, however, to provide security, safety, and privacy in a decentralized and transactive energy system. On the one hand, prosumers' personal information must be protected from their trade partners and the system operator. On the other hand, the system must be protected from careless or malicious trading, which could destabilize the entire grid. This paper describes Privacy-preserving Energy Transactions (PETra), which is a secure and safe solution for transactive microgrids that enables consumers to trade energy without sacrificing their privacy. PETra builds on distributed ledgers, such as blockchains, and provides anonymity for communication, bidding, and trading.
Non-recurring traffic congestion is caused by temporary disruptions such as accidents, sports games, and adverse weather. We use data related to real-time traffic speed, jam factors (a traffic congestion indicator), and events collected over a year from Nashville, TN to train a multi-layered deep neural network. The traffic dataset contains over 900 million data records. The network is thereafter used to classify the real-time data and identify anomalous operations. Compared with traditional approaches using statistical or machine learning techniques, our model reaches an accuracy of 98.73 percent when identifying traffic congestion caused by football games. Our approach first encodes the traffic across a region as a scaled image. The image data from different timestamps is then fused with event- and time-related data. A crossover operator is then used as a data augmentation method to generate training datasets with more balanced classes. Finally, we use receiver operating characteristic (ROC) analysis to tune the sensitivity of the classifier. We present the analysis of the training time and the inference time separately.
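
Two of the steps named above, the crossover-based augmentation and the ROC-based sensitivity tuning, can be sketched as follows; the feature layout and the target true-positive rate are assumptions for illustration, not the paper's exact design.

    # Sketch of crossover-based class balancing and ROC threshold tuning.
    # Shapes, feature layout, and the target rate are illustrative assumptions.
    import numpy as np
    from sklearn.metrics import roc_curve

    def crossover(a, b, rng):
        """Create a synthetic minority-class sample by swapping a random
        contiguous block of features between two real samples."""
        cut1, cut2 = sorted(rng.choice(len(a), size=2, replace=False))
        child = a.copy()
        child[cut1:cut2] = b[cut1:cut2]
        return child

    def tune_threshold(y_true, y_score, min_tpr=0.95):
        """Pick the largest decision threshold (lowest false-alarm rate)
        whose true-positive rate still meets the target."""
        fpr, tpr, thresholds = roc_curve(y_true, y_score)
        ok = np.where(tpr >= min_tpr)[0]
        return thresholds[ok[0]] if len(ok) else thresholds[-1]

Raising or lowering min_tpr is the knob that trades missed congestion events against false alarms.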
A Simulation Testbed for Cascade Analysis
Electrical power systems are heavily instrumented with protection assemblies (relays and breakers) that detect anomalies and arrest failure propagation. However, failures in these discrete protection devices could have inadvertent consequences, including cascading failures resulting in blackouts. This paper aims to model the behavior of these discrete protection devices in nominal and faulty conditions and apply it towards simulation and contingency analysis of cascading failures in power transmission systems. The behavior under fault conditions is used to identify and explain conditions for blackout evolution which are not otherwise obvious. The results are demonstrated using the standard IEEE 14-bus system.
Data generated by GPS-equipped probe vehicles, especially public transit vehicles, can be a reliable source for traffic speed estimation. Traditionally, this estimation is done by learning the parameters of a model that describes the relationship between the speed of the probe vehicle and the actual traffic speed. However, such approaches typically suffer from data sparsity issues. Furthermore, most state-of-the-art approaches do not consider the effect of weather and the driver of the probe vehicle on the parameters of the learned model. In this paper, we describe a multivariate predictive multi-model approach called SpeedPro that (a) first identifies similar clusters of operation from the historic data, which includes the real-time position of the probe vehicle, the weather data, and an anonymized driver identifier, and then (b) uses these different models to estimate the traffic speed in real time as a function of current weather, driver, and probe vehicle speed. When the real-time information is not available, our approach uses a different model that relies on historical weather and traffic information for estimation. Our results show that the model using purely historical data is less accurate than the model that uses the real-time information.
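
The multi-model idea can be sketched as clustering historical operating conditions and fitting one regression model per cluster; the feature layout and model choices below are assumptions for illustration, not SpeedPro's exact design.

    # Sketch of a cluster-then-regress multi-model estimator.
    # Features and model choices are illustrative assumptions.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    def fit_multimodel(X, y, n_clusters=4):
        """X rows (numpy): [probe_speed, temperature, precipitation, driver_id];
        y: measured traffic speed for the same segment and time window."""
        clusterer = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
        models = {}
        for c in range(n_clusters):
            mask = clusterer.labels_ == c
            models[c] = LinearRegression().fit(X[mask], y[mask])
        return clusterer, models

    def estimate_speed(clusterer, models, x):
        """Route a new observation to its cluster's model."""
        c = int(clusterer.predict(x.reshape(1, -1))[0])
        return float(models[c].predict(x.reshape(1, -1))[0])

When the real-time features are unavailable, the same pattern applies with a separate model trained only on historical weather and traffic features.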
A smart public transportation decision support system with multi-timescale analytical services
Public transit is a critical component of a smart and connected community. As such, citizens expect and require accurate information about the real-time arrivals/departures of transportation assets. As transit agencies enable large-scale integration of real-time sensors and support back-end data-driven decision support systems, the Dynamic Data-Driven Applications Systems (DDDAS) paradigm becomes a promising approach to make the system smarter by providing online model learning and multi-time-scale analytics as part of the decision support system that is used in the DDDAS feedback loop. In this paper, we describe a system in use in Nashville and illustrate the analytic methods developed by our team. These methods use both historical as well as real-time streaming data for online bus arrival prediction. The historical data is used to build classifiers that enable us to create expected performance models as well as identify anomalies. These classifiers can be used to provide schedule-adjustment feedback to the metro transit authority. We also show how these analytics services can be packaged into modular, distributed, and resilient micro-services that can be deployed on both cloud back ends and edge computing resources.
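
A minimal sketch of the expected-performance and anomaly-flagging idea follows; the column names and percentiles are assumptions for illustration, not the deployed system's schema.

    # Sketch: per-route, per-hour delay statistics from historical data, used
    # to flag anomalous real-time arrivals and suggest schedule padding.
    # Column names and percentiles are illustrative assumptions.
    import pandas as pd

    def build_performance_model(history: pd.DataFrame):
        """history columns: route, hour, delay_seconds."""
        return (history.groupby(["route", "hour"])["delay_seconds"]
                       .quantile([0.5, 0.95])
                       .unstack())

    def is_anomalous(model, route, hour, delay_seconds):
        """Flag a real-time arrival whose delay exceeds the historical
        95th percentile for that route and hour."""
        return delay_seconds > model.loc[(route, hour), 0.95]

    def suggested_padding(model, route, hour):
        """Schedule-adjustment hint: the historical median delay."""
        return model.loc[(route, hour), 0.5]

Packaged behind a small service interface, each of these functions can run either on a cloud back end or on an edge node close to the data feed.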
2016
We investigate decoupling abstractions, by which we seek to simulate (i.e. abstract) a given system of ordinary differential equations (ODEs) by another system that features completely independent (i.e. uncoupled) sub-systems, which can be considered as separate systems in their own right. Beyond a purely mathematical interest as a tool for the qualitative analysis of ODEs, decoupling can be applied to verification problems arising in the fields of control and hybrid systems. Existing verification technology often scales poorly with dimension. Thus, reducing a verification problem to a number of independent verification problems for systems of smaller dimension may enable one to prove properties that are otherwise seen as too difficult. We show an interesting correspondence between Darboux polynomials and decoupling simulating abstractions of systems of polynomial ODEs and give a constructive procedure for automatically computing the latter.
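
As a minimal illustration of the correspondence, restricted to the special case of Darboux polynomials with constant cofactors (the paper's construction is more general): a polynomial $p$ is a Darboux polynomial of $\dot{x} = f(x)$ if $\nabla p \cdot f = \alpha\, p$ for some polynomial cofactor $\alpha$. For the linear system
\[
\dot{x}_1 = x_2, \qquad \dot{x}_2 = x_1,
\]
the polynomials $p_1 = x_1 + x_2$ and $p_2 = x_1 - x_2$ are Darboux with constant cofactors $+1$ and $-1$, since $\nabla p_1 \cdot f = p_1$ and $\nabla p_2 \cdot f = -p_2$. The change of variables $y_i = p_i(x)$ therefore yields the fully decoupled simulating abstraction
\[
\dot{y}_1 = y_1, \qquad \dot{y}_2 = -y_2,
\]
whose one-dimensional subsystems can be verified independently.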
Automotive control systems, such as modern Advanced Driver Assistance Systems (ADAS), are becoming more complex and prevalent in the automotive industry. Therefore, a highly efficient design and evaluation methodology for automotive control system development is required. In this paper, we propose a closed-loop simulation framework that improves ADAS design and evaluation. The proposed simulation framework consists of four tools: Dymola, Simulink, OpenMETA, and the Unity 3D game engine. Dymola simulates vehicle dynamics models written in Modelica. Simulink is used for vehicle control software modeling. OpenMETA provides horizontal integration between design tools, and it also improves design efficiency through its PET (Parametric Exploration Tool) and DSE (Design Space Exploration) tools. Unity provides the key functionality for interactive, closed-loop ADAS simulation: it contains ADAS sensor models and road environment models, and provides visualization.
Poster Abstract: Distributed Reasoning for Diagnosing Cascading Outages in Cyber Physical Energy Systems
The power grid incorporates a number of protection elements, such as distance relays, that detect faults and prevent failure effects from propagating to the rest of the system. However, the decision of these protection elements is influenced only by local information in the form of bus voltage/current (V-I) samples. Due to the lack of a system-wide perspective, erroneous settings, and latent failure modes, protection devices often mis-operate and cause cascading effects that ultimately lead to blackouts. Blackouts around the world have been triggered or worsened by circuit breakers tripping, including the 2003 blackout in North America, where secondary/remote protection relays incorrectly opened the breaker. Tools that aid the operators in finding the root cause of the problem online are required. However, high system complexity, the interdependencies between the cyber and physical elements of the system, and the mis-operation of protection devices make failure diagnosis a challenging problem.