CPS Events

Learning Safe Control Laws from Expert Demonstrations

Speaker Name: 
Lars Lindemann
Speaker Title: 
Assistant Professor
Speaker Organization: 
Department of Computer Science at the University of Southern California
Start Time: 
Thursday, May 9, 2024 - 2:00pm
End Time: 
Thursday, May 9, 2024 - 3:00pm
Location: 
https://ucsc.zoom.us/j/94560637937?pwd=bzNRWnVoUjBXN00ybUMyaEZrODdwdz09
Organizer: 
Ricardo Sanfelice

 

Abstract

Learning-enabled autonomous control systems promise to enable many future technologies such as autonomous driving, intelligent transportation, and robotics. Accelerated by algorithmic and computational advances in machine learning and the availability of data, there has been tremendous success in the design of learning-enabled controllers. However, these exciting developments are accompanied by new fundamental challenges regarding the safety of these increasingly complex control systems. In this talk, I will provide new insights and discuss exciting opportunities to learn verifiably safe control laws. Specifically, I will present an optimization framework to learn safe control laws from expert demonstrations in a setting where the system dynamics are at least partially known. In most safety-critical systems, expert demonstrations in the form of system trajectories that showcase safe system behavior are readily available or can easily be collected. I will propose a constrained optimization problem, with constraints on the expert demonstrations and the system model, to learn control barrier functions for safe control. Formal correctness guarantees are provided in terms of the density of the data and the smoothness of the system model and the learned control barrier function. As a next step, we will discuss how to account for model uncertainty and for hybrid system models in this framework. Finally, we will see how safe control laws can be learned from high-dimensional sensor data such as camera images. We provide two empirical case studies on a self-driving car and a bipedal robot to illustrate the method.
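To make the flavor of this formulation concrete, here is a minimal toy sketch (my own illustration, not the speaker's implementation): a quadratic control barrier function candidate h(x) = θ0 + θ1·x + θ2·x² for a 1-D single integrator, scored against the three kinds of constraints the abstract describes (positivity on safe expert states, negativity on unsafe samples, and the barrier condition along the demonstrated controller). The dynamics, the parameterization, and the margins gamma and alpha are all assumptions chosen for illustration.

```python
import numpy as np

# Toy setup: 1-D single integrator x' = u, true safe set |x| <= 1,
# demonstrated controller u(x) = -x. Constraints are scored as hinge
# penalties, so a feasible CBF candidate attains exactly zero loss:
#   (i)   h(x) >= gamma   on expert (safe) states,
#   (ii)  h(x) <= -gamma  on sampled unsafe states,
#   (iii) dh/dx * u(x) + alpha * h(x) >= 0 on expert states (CBF condition).
rng = np.random.default_rng(0)
expert_x = rng.uniform(-0.9, 0.9, size=200)                # safe demonstrations
unsafe_x = np.concatenate([rng.uniform(1.2, 2.0, 100),
                           rng.uniform(-2.0, -1.2, 100)])  # unsafe samples

def h(theta, x):
    return theta[0] + theta[1] * x + theta[2] * x ** 2

def constraint_loss(theta, gamma=0.1, alpha=1.0):
    dh = theta[1] + 2.0 * theta[2] * expert_x              # dh/dx on demos
    lie = dh * (-expert_x) + alpha * h(theta, expert_x)    # condition (iii)
    return (np.maximum(0.0, gamma - h(theta, expert_x)).mean()    # (i)
            + np.maximum(0.0, h(theta, unsafe_x) + gamma).mean()  # (ii)
            + np.maximum(0.0, -lie).mean())                       # (iii)

theta = np.array([1.0, 0.0, -1.0])   # candidate h(x) = 1 - x^2
print("loss:", constraint_loss(theta))                       # 0.0: feasible
print("bad guess penalized:", constraint_loss(np.zeros(3)) > 0)  # True
```

In the actual framework, a solver searches over the parameters subject to these constraints, and the formal guarantees relate feasibility on finitely many samples to safety everywhere via the data density and smoothness assumptions mentioned in the abstract.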

 

Bio

Lars Lindemann is an Assistant Professor in the Thomas Lord Department of Computer Science at the University of Southern California where he leads the Safe Autonomy and Intelligent Distributed Systems (SAIDS) lab. There, he is also a member of the Ming Hsieh Department of Electrical and Computer Engineering (by courtesy), the Robotics and Autonomous Systems Center, and the Center for Autonomy and Artificial Intelligence. Between 2020 and 2022, he was a Postdoctoral Fellow in the Department of Electrical and Systems Engineering at the University of Pennsylvania. He received the Ph.D. degree in Electrical Engineering from KTH Royal Institute of Technology in 2020. Prior to that, he received the M.Sc. degree in Systems, Control and Robotics from KTH in 2016 and two B.Sc. degrees in Electrical and Information Engineering and in Engineering Management from the Christian-Albrechts University of Kiel in 2014. His research interests include systems and control theory, formal methods, and autonomous systems. Professor Lindemann received the Outstanding Student Paper Award at the 58th IEEE Conference on Decision and Control and the Student Best Paper Award (as a co-advisor) at the 60th IEEE Conference on Decision and Control. He was a finalist for the Best Paper Award at the 2022 Conference on Hybrid Systems: Computation and Control and for the Best Student Paper Award at the 2018 American Control Conference.

Building Safe Autonomous Systems Using Imperfect Components

Speaker Name: 
Samarjit Chakraborty
Speaker Title: 
Kenan Distinguished Professor and Chair of the Department of Computer Science
Speaker Organization: 
University of North Carolina, Chapel Hill
Start Time: 
Thursday, March 14, 2024 - 2:00pm
End Time: 
Thursday, March 14, 2024 - 3:00pm
Location: 
E2-506 or https://ucsc.zoom.us/j/91500694770?pwd=RU1SeWQ3SkJHVWxXak5hKzNwZU9Sdz09
Organizer: 
Ricardo Sanfelice

 

Abstract

Modern autonomous systems are an ensemble of multiple components implementing machine learning, control, scheduling, and security. Current design flows aim for each of these components to work perfectly, and system design consists of composing these components together. As a result, research in machine learning aims toward near-perfect classification or estimation, scheduling techniques aim to meet all deadlines, and security algorithms aim toward fully secure systems. While such separation of concerns has served us well until now, as systems become more complex, this pursuit of perfection is becoming unreasonable. In this talk, we will argue that we can design safe autonomous systems without requiring their components to be perfect, as long as the imperfections of one component are balanced by suitable actions from other components. Such a design approach is potentially more reasonable and cost-effective, and we will provide examples of how it plays out.

 

Speaker's Bio

Samarjit Chakraborty is a Kenan Distinguished Professor and Chair of the Department of Computer Science at UNC Chapel Hill. Prior to coming here in 2019, he was a professor of Electrical Engineering at the Technical University of Munich in Germany, where he held the Chair of Real-Time Computer Systems for 11 years. Before that he was an assistant professor of Computer Science at the National University of Singapore for 5 years. He obtained his PhD from ETH Zurich in 2003. His research interests can be best described as a random walk through various aspects of designing hardware and software for embedded computers. He is a Fellow of the IEEE and received the 2023 Humboldt Professorship Award from Germany.

Building a Framework for Trustworthy Autonomous Agents: Autonomous Agents and Value Alignment

Speaker Name: 
Gabriel Nemirovsky
Speaker Title: 
Ph.D. candidate in the Philosophy Department
Speaker Organization: 
University of York, England, UK
Start Time: 
Thursday, February 29, 2024 - 2:00pm
End Time: 
Thursday, February 29, 2024 - 3:00pm
Location: 
E2-506 or https://ucsc.zoom.us/j/96915637177?pwd=SEF1TWFwSmxYWThOSmtYQzlZeURMZz09
Organizer: 
Ricardo Sanfelice

 

Abstract

With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that traditionally required human or social value-judgements or norms. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative sensitivity and compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural (‘SLEEC’) nature.

These norms that agents must comply with are generally discussed in the abstract as high-level principles such as “respect for human autonomy” or “non-maleficence.” However, realistically addressing these concerns requires taking these abstract principles and formulating them into concrete, particular rules that agents can follow. This can be tricky, as a norm such as privacy can have different, and potentially contradictory, requirements depending on whether, for example, its cultural or legal dimension is considered.

In my presentation, I will discuss research done by my colleagues and me to create a process for deriving specific rules from general norms. This proposed framework helps bridge the gap between abstract value-judgements about what is right and wrong and what agents actually do in practice, helping to resolve potential conflicts between norms and to develop actionable rules.

 

Speaker Bio

Gabriel Nemirovsky is a Ph.D. candidate within the philosophy department at the University of York. Previously, he served as a researcher at the UKRI Trustworthy Autonomous Systems, Resilience Node, collaborating closely with diverse stakeholders including industry, academia, government, and non-governmental organizations. As a researcher in the Resilience Node, Gabriel helped shape ethical frameworks for autonomous systems, underscoring his commitment to interdisciplinary excellence. Gabriel's academic pursuits are driven by a profound interest in the social impact of technological innovation, the economic dynamics of innovation, and political philosophy centered on justice and democratic engagement.

Towards Compositional Secure Autonomy: From Perception to Control

Speaker Name: 
Z. Berkay Celik
Speaker Title: 
Assistant Professor of Computer Science
Speaker Organization: 
Purdue University
Start Time: 
Thursday, February 15, 2024 - 2:00pm
End Time: 
Thursday, February 15, 2024 - 3:00pm
Location: 
E2-506 or https://ucsc.zoom.us/j/92628966055?pwd=RG82ZkpaZU9hOVZVMzBVZ3pCdHdCdz09
Organizer: 
Ricardo Sanfelice

 

Abstract

Autonomous systems, such as self-driving cars, drones, and mobile robots, are rapidly becoming ubiquitous in our society. These systems are composed of multiple individual software components for perception, prediction, planning, and control. While these systems are now blurring the lines between traditional computing systems and human intelligence and revolutionizing markets, a significant gap exists in developing theory and practice that indicates how the behavior of each component can be unified to reason about their system-wide security. This gap is exacerbated by the increasing use of learning-enabled components with inputs from diverse sensors and actuators that operate in open and uncontrolled physical environments.

In this talk, I present the challenges in compositional secure autonomy and principles from our recent efforts on vulnerability discovery and security enforcement to address these challenges. I illustrate these challenges and principles with examples and sample results by focusing on robotic vehicles and autonomous driving. I conclude with a discussion of the open problems and opportunities, and outline areas for defensive research in the future.

 

Speaker Bio

Z. Berkay Celik is an Assistant Professor of Computer Science at Purdue University, where he is the co-director of the Purdue Security (PurSec) laboratory and a member of the Center for Education and Research in Information Assurance and Security (CERIAS). His research investigates the design and evaluation of security for software and systems, specifically on emerging computing platforms and the complex physical environments in which they operate. Through systems design, program analysis, and formal methods, his research seeks to improve security and privacy guarantees in commodity computer systems. His research approach is best illustrated by his extensive work on the Internet of Things (IoT) and Cyber-Physical Systems (CPS), including robotic vehicles, automobiles, and autonomous vehicles. He has received the National Science Foundation CAREER Award in 2022 and Google's ASPIRE Research award in 2021-2023. More information about his research group and publication record is available at https://beerkay.github.io.

Control-theoretic Approaches towards Secure Industrial Control Systems

Speaker Name: 
Hampei Sasahara
Speaker Title: 
Assistant Professor
Speaker Organization: 
Tokyo Institute of Technology, Tokyo, Japan
Start Time: 
Thursday, February 1, 2024 - 2:00pm
End Time: 
Thursday, February 1, 2024 - 3:00pm
Location: 
E2-506 or https://ucsc.zoom.us/j/97501111669?pwd=bWo3VTJSWHF1L1hOWWk2NndIUzBOQT09
Organizer: 
Ricardo Sanfelice

 

Abstract

The term "Industrial Control System" (ICS) encompasses various control configurations, including Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS), and programmable logic controllers (PLCs). ICSs historically operated in isolation from the internet. However, recent technological developments have driven a convergence between ICSs and internet-based environments, such as cloud computing, breaking this isolation. The shift exposes ICSs to the same attack vectors prevalent in cyberattacks, yet ICS devices are inherently less well protected against advanced attack scenarios. A compromise of an ICS can result in substantial physical damage and pose threats to human lives.

The first half of this talk reviews fundamental topics in control-theoretic approaches to secure industrial control systems. Our exploration begins with traditional model-based anomaly detection and its adaptation to the security context. Subsequently, we discuss the zero-dynamics attack, which conceals its presence by exploiting the system's zero dynamics. In the latter half, the speaker presents recent results from his own work. In particular, we consider a model-based defense technique that performs not only detection but also counteraction based on Bayesian inference, and we mathematically analyze its fundamental properties using game theory. In addition, recent findings on vulnerabilities of data-driven control are also presented.
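The stealth of a zero-dynamics attack can be illustrated on a toy plant (a hypothetical sketch of my own, not material from the talk): the attacker excites a discrete-time SISO system along the direction of an unstable zero, so the measured output, and hence any output-residual anomaly detector, stays at zero while the internal state diverges. The specific plant, the zero location z0 = 2, and the attack scaling are all assumptions chosen for illustration.

```python
import numpy as np

# Plant in controllable canonical form with transfer function
#   H(z) = (z - 2) / (z^2 - 0.5 z + 0.06),  i.e. an unstable zero at z0 = 2.
A = np.array([[0.0, 1.0],
              [-0.06, 0.5]])
B = np.array([0.0, 1.0])
C = np.array([-2.0, 1.0])
z0 = 2.0

# Zero direction (x0, g): (z0*I - A) x0 = B*g and C x0 = 0. With this pair,
# the input u_k = g * z0^k starting from state x0 produces output y_k = 0.
x0 = np.array([1.0, 2.0])          # satisfies C @ x0 = 0
zdir = (z0 * np.eye(2) - A) @ x0   # equals B * g; first component is 0
g = zdir[1]                        # scalar input direction

eps = 1e-6                         # attack starts imperceptibly small
x = eps * x0
residuals, norms = [], []
for k in range(30):
    u_attack = eps * g * z0 ** k   # attacker input along the zero direction
    residuals.append(abs(C @ x))   # output residual seen by the detector
    norms.append(np.linalg.norm(x))
    x = A @ x + B * u_attack

print("max residual:", max(residuals))   # stays ~0: the attack is stealthy
print("final state norm:", norms[-1])    # grows like z0^k: physical damage
```

Any detector that thresholds the output residual never fires here, which is exactly why the talk's latter half moves beyond pure detection toward model-based counteractions.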

 

Bio

Hampei Sasahara received the Ph.D. degree in engineering from Tokyo Institute of Technology in 2019. He is currently an Assistant Professor with Tokyo Institute of Technology, Tokyo, Japan. From 2019 to 2021, he was a Postdoctoral Scholar with KTH Royal Institute of Technology, Stockholm, Sweden. His main interests include secure control system design and control of large-scale systems.

 
