The International Conference on Assured Autonomy (ICAA’24)
The International Conference on Assured Autonomy (ICAA’24) addresses the gap between theory-heavy artificially intelligent autonomous systems and the privacy, security, and safety of their real-world implementations. Advances in machine learning (ML) and artificial intelligence (AI) have shown great promise in algorithms and techniques for automating complex decision-making across the transportation, robotics, critical infrastructure, and cyber infrastructure domains. Practical implementations of these algorithms require significant systems engineering and integration support for safe, trusted, and assured operation, especially as they integrate with the physical world. This need for assurance is further challenged by issues of AI safety, security, privacy, responsibility, bias, and alignment.
The focus of this conference is: (1) Design of Assured and Safe Systems with AI and Autonomy; (2) Real-World Studies, Deployments, and Industry Uses of Assured AI and Autonomy; (3) Methods for Testing and Assuring AI and AI-Enabled Autonomous Systems; and (4) Security and Privacy of AI and Autonomous Systems, including methods to detect, respond to, mitigate, and resiliently recover from violations of safety, security, privacy, and trust.
Topics of Interest
ICAA seeks new methodologies and contributions as well as applications and studies spanning all aspects of AI safety, security, and assurance in autonomous systems. We invite papers that encourage the discussion and exchange of experimental and theoretical results, novel designs, real-world uses, case studies, and works in progress. Both full papers (up to 10 pages) and working papers (up to 4 pages) will be accepted.
Topics of interest include (but are not limited to):
Design for Assured and Safe Autonomy
● Safe-by-construction methods for autonomous systems
● Formally verified AI and autonomy
● Neuro-symbolic learning and reasoning for assured, resilient autonomy
● Systems that learn and adapt in the field
● Sim-to-real transfer for assured AI and autonomy
● Runtime assurance and monitoring
● Safe learning and control for autonomous and AI-enabled systems
● Real-world studies, uses, and design considerations of AI for autonomous systems
Methods for Testing and Assuring AI and Autonomy
● Explainable and interpretable AI-enabled systems
● Alignment and safety of AI and autonomous systems
● Standards, ethics, and policies for autonomy and AI to meet responsible AI principles
● Verification, validation, testing, and assurance of systems with AI and autonomy
● Evaluating the safety of autonomous systems according to their potential risks and vulnerabilities
● Test, evaluation, certification, and assurance of autonomous AI systems
● Modeling and simulation; live, virtual, and constructive testing challenges for AI and autonomous systems
● Safety, evaluation, and assurance of human-autonomy teaming
● Evaluation and safety of foundation models
● Lessons learned from deployments and industrial uses of AI and autonomy
Security and Privacy of AI and Autonomous Systems
● Detecting dataset anomalies that lead to autonomous system security and privacy violations
● Detecting data poisoning, model poisoning, and system attacks
● Differential privacy and privacy-preserving learning and generative models
● Adversarial attacks on AI and autonomy, and defenses against adversarial attacks
● Mitigation of attacks on AI and autonomous systems, and improved resiliency to various forms of attack
● Engineering trusted autonomous system and AI software architectures
● Red-teaming and stress testing of AI-enabled systems to identify vulnerabilities
● Real-world studies of security and privacy challenges for AI and autonomy