AI Training Seminar: Neural Network Verification

Ensuring safe, secure, and trustworthy artificial intelligence (AI), particularly within safety-critical systems like autonomous cyber-physical systems (CPS), is of paramount importance and crucial urgency for dependability research. One approach to establishing these properties is formal verification of machine learning (ML) components such as neural networks, proving that they meet formal specifications. The Neural Network Verification (NNV) software tool implements automated formal methods for this purpose, specifically reachability analysis, and this interactive tutorial will demonstrate how these methods formally verify specifications of neural networks. The tutorial begins with a brief lecture introducing the neural network verification problem, followed by an interactive walkthrough of the methods implemented in NNV. Examples will be shown from the security, medicine, and CPS domains.
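
To give a flavor of what reachability analysis computes, the sketch below propagates an interval over-approximation of an input set through a tiny ReLU network and checks an output specification. This is a minimal illustration only, not NNV's API: the weights, bounds, and property are invented for demonstration, and NNV itself employs richer set representations (such as star sets) to obtain tighter reachable sets.

```python
import numpy as np

# Illustrative sketch: interval-bound reachability for a tiny ReLU network.
# All weights, input bounds, and the specification are hypothetical.

def interval_affine(lb, ub, W, b):
    """Exactly propagate an input box [lb, ub] through y = W x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lb + W_neg @ ub + b, W_pos @ ub + W_neg @ lb + b

# A hypothetical 2-layer network: R^2 -> ReLU(R^2) -> R^1.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.5])

# Input set: a small box around a nominal input (e.g., a perturbation ball).
lb, ub = np.array([0.4, 0.4]), np.array([0.6, 0.6])

# Reachability: push the box through the network layer by layer.
# ReLU is monotone, so it maps interval bounds to interval bounds directly.
lb, ub = interval_affine(lb, ub, W1, b1)
lb, ub = np.maximum(lb, 0.0), np.maximum(ub, 0.0)
lb, ub = interval_affine(lb, ub, W2, b2)

# Specification check: does every reachable output satisfy y < 2.0?
print(f"output bounds: [{lb[0]:.3f}, {ub[0]:.3f}]")
print("property y < 2.0 verified:", ub[0] < 2.0)
```

Because the computed bounds over-approximate the true reachable set, a "verified" answer is sound, while a violated bound may be either a real counterexample or an artifact of the over-approximation, which is why tools like NNV use more precise set representations.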

Diego Manzanas Lopez is a Research Scientist at the Institute for Software Integrated Systems at Vanderbilt University. He received his PhD in Electrical Engineering from Vanderbilt University under the supervision of Dr. Taylor T. Johnson. His research lies at the intersection of deep learning, formal methods, and cyber-physical systems. Early in his career, he focused on developing assurance techniques for intelligent systems, specifically learning-enabled autonomous vehicles. More recently, his work has broadened to domains such as security and medicine, where formal methods are crucial for providing safety, trustworthiness, and security guarantees to AI systems operating in safety-critical settings.