CPS Events

Safe Planning and Control When Autonomy is Not the Only Driver

Speaker Name: 
Vishnu Desaraju
Speaker Title: 
Senior Research Scientist
Speaker Organization: 
Toyota Research Institute
Start Time: 
Thursday, May 13, 2021 - 2:00pm
End Time: 
Thursday, May 13, 2021 - 3:00pm
Location: 
https://ucsc.zoom.us/j/96891323311?pwd=L2IxdlJDdS9sL0U1aTNZZ0MvbDhKdz09
Organizer: 
Ricardo Sanfelice

  

Abstract

When developing planning and control algorithms for autonomous systems, we often assume that these algorithms will be the only significant source of inputs to the system. However, in some applications there may be an additional "driver", external to the autonomy pipeline, that has a large impact on the performance and safety of the overall system. In this talk, I will discuss two projects that investigate different aspects of this problem.

The first considers the effects of an external driver that acts alongside the autonomy but without any shared objective. For example, the aerodynamic forces acting on a micro aerial vehicle flying through a strong wind field can dramatically alter the vehicle’s motion, leading to violations of safety or operational constraints. Limited onboard computation also restricts how these nonlinear dynamics can be modeled and incorporated into the autonomy. This leads to the idea of Experience-driven Predictive Control (EPC). EPC builds on ideas from adaptive control and model predictive control, accumulating experience of how an external driver impacts the system while simultaneously leveraging that experience to ensure that constraints are met in a computationally efficient manner.
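The interplay the abstract describes can be illustrated with a toy sketch: a constrained receding-horizon controller whose model is augmented by a running estimate of an unmodeled wind. This is an illustrative caricature, not the EPC algorithm itself; the 1-D double-integrator plant, the three-input candidate set, and the blending gain are all invented for the example.

```python
# Toy sketch (not EPC itself): a 1-D double integrator picks thrust by
# brute-force receding-horizon search, while a running estimate of the
# model-error residual plays the role of accumulated "experience" of the wind.

DT = 0.1

def predict(x, v, u, wind_est):
    """One-step model with the learned wind offset folded in."""
    v_next = v + (u + wind_est) * DT
    x_next = x + v_next * DT
    return x_next, v_next

def plan(x, v, wind_est, x_ref, horizon=5, u_set=(-2.0, 0.0, 2.0)):
    """Return the first input of the best constant-input candidate that
    keeps the predicted position inside the |x| <= 2 constraint."""
    best_u, best_cost = 0.0, float("inf")
    for u in u_set:
        xi, vi, cost, feasible = x, v, 0.0, True
        for _ in range(horizon):
            xi, vi = predict(xi, vi, u, wind_est)
            if abs(xi) > 2.0:          # operational constraint
                feasible = False
                break
            cost += (xi - x_ref) ** 2
        if feasible and cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Closed loop: the true dynamics contain a wind the nominal model omits.
x, v, wind_est, true_wind = 0.0, 0.0, 0.0, 0.4
for _ in range(200):
    u = plan(x, v, wind_est, x_ref=1.0)
    x_pred, v_pred = predict(x, v, u, wind_est)
    v += (u + true_wind) * DT          # actual evolution under the wind
    x += v * DT
    # Accumulate experience: blend in the observed acceleration residual.
    wind_est += 0.2 * ((v - v_pred) / DT)
```

Because the residual is exactly the unmodeled wind minus the current estimate, the estimate converges geometrically, and the planner's constraint check becomes trustworthy as experience accumulates.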

The second project considers the case where the external driver is actually the primary operator of the system, while the autonomy aims to assist this driver to safely achieve a common objective. A prime example of this is the next generation of advanced driver assistance systems that will be able to employ techniques developed for fully autonomous driving but in the context of keeping a human driver safe. The Toyota Guardian system is TRI’s novel approach to this problem, building on ideas from a variety of domains, ranging from aircraft control to shared autonomy. I will show a few preliminary examples of how this combination of the human driver and the autonomy can achieve superhuman performance and safety.


Bio

Vishnu Desaraju is a Senior Research Scientist at the Toyota Research Institute, Ann Arbor, MI working on automated driving technologies. He received a B.S.E. in Electrical Engineering from the University of Michigan in 2008, an S.M. in Aeronautics and Astronautics from MIT in 2010, and an M.S. and Ph.D. in Robotics from Carnegie Mellon University in 2015 and 2017, respectively. He received the AIAA Guidance, Navigation, and Control Best Paper award from SciTech 2018. His research interests include developing computationally efficient motion planning and feedback control algorithms for agile autonomous systems, including autonomous cars, boats, and micro air vehicles, with a focus on mitigating the effects of uncertainty to achieve safe and reliable operation in the field.


Enabling Mutually Adaptive Autonomous Systems through Non-intrusive Performance Estimation

Speaker Name: 
Steve McGuire
Speaker Title: 
Assistant Professor of Electrical and Computer Engineering
Speaker Organization: 
University of California at Santa Cruz
Start Time: 
Thursday, May 6, 2021 - 2:00pm
End Time: 
Thursday, May 6, 2021 - 3:00pm
Location: 
https://ucsc.zoom.us/j/92846502542?pwd=TEVXbFU0QzhBQXZaelg1UWs1UkdhUT09
Organizer: 
Ricardo Sanfelice

  

Abstract

In many human-robot teaming applications, an autonomous system has minimal information regarding its human teammates, particularly with regard to a human's ability to interact with or respond to system demands. If a system were able to predict a human's performance given their current mental state, interactions between human and system could be tailored to match the human's current mental engagement. In an alerting/monitoring scenario, such tailoring could be used to avoid both normalization of deviance, in which a system's warnings are so common as to be ignored, and undernotification, in which a system is aware of a condition but does not alert the human to take action. In my work, we have used passive physiological sensors to estimate human performance directly, without intermediate labels such as cognitive states, via an online estimation approach based on reinforcement learning. In this talk, I will discuss our problem formulation, initial results, and planned experiments to investigate human-aware alerting strategies using immersive environments.
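The idea of tailoring alerts from outcomes alone, without modeling intermediate cognitive labels, can be sketched as a small tabular learner. Everything concrete below is a hypothetical stand-in for the speaker's formulation: the two operator states, the two alert styles, and the Bernoulli response probabilities are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical sketch: learn from response/no-response outcomes alone which
# alert style suits each coarsely sensed operator state -- no intermediate
# "cognitive state" label is ever estimated explicitly.
STATES = ("engaged", "distracted")
ACTIONS = ("subtle_cue", "loud_alarm")

# Hidden ground truth: probability the operator responds to each alert.
P_RESPOND = {
    ("engaged", "subtle_cue"): 0.9,
    ("engaged", "loud_alarm"): 0.6,   # over-alerting gets tuned out
    ("distracted", "subtle_cue"): 0.2,
    ("distracted", "loud_alarm"): 0.8,
}

Q = {sa: 0.0 for sa in P_RESPOND}     # estimated response rate
N = {sa: 0 for sa in P_RESPOND}       # visit counts
EPS = 0.2                             # exploration rate

for _ in range(5000):
    state = random.choice(STATES)     # stand-in for physiological sensing
    if random.random() < EPS:
        action = random.choice(ACTIONS)          # explore
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit
    reward = 1.0 if random.random() < P_RESPOND[(state, action)] else 0.0
    N[(state, action)] += 1
    Q[(state, action)] += (reward - Q[(state, action)]) / N[(state, action)]

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

The learned policy reserves the loud alarm for the distracted state, avoiding the normalization-of-deviance failure mode the abstract describes.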

 

Bio

Steve McGuire is an Assistant Professor of Electrical and Computer Engineering at the University of California, Santa Cruz. His lab, the Human-Aware Robotic Exploration (HARE) Group, explores human-robot teaming in the context of field robotic applications. Prior to joining UCSC in 2020, he earned his PhD in Aerospace Engineering Sciences from the University of Colorado at Boulder as a NASA Space Technology Research Fellow. He is a former Marine helicopter pilot and has been part of several top-level field robotics initiatives, including the DARPA Grand Challenge in 2004, the Google Lunar X Prize, and the ongoing DARPA Subterranean Challenge.


On the Role of Touch for Robustness and Generalisability in Robotic Manipulation

Speaker Name: 
Jeannette Bohg
Speaker Title: 
Assistant Professor of Computer Science
Speaker Organization: 
Stanford University
Start Time: 
Thursday, April 29, 2021 - 2:00pm
End Time: 
Thursday, April 29, 2021 - 3:00pm
Location: 
https://ucsc.zoom.us/j/95445782277?pwd=WDBLVGtiemRZODczWmxqcDh4dXZ0Zz09
Organizer: 
Ricardo Sanfelice

  

Abstract

Learning contact-rich robotic manipulation skills is a challenging problem due to the high dimensionality of the state and action spaces as well as uncertainty from noisy sensors and inaccurate motor control. In our research, we explore what representations of raw perceptual data enable a robot to better learn and perform these skills. For manipulation robots specifically, the sense of touch is essential, yet it is non-trivial to manually design a robot controller that combines sensing modalities with very different characteristics. I will present a set of research works that explore how to best fuse information from vision and touch for contact-rich manipulation tasks. While deep reinforcement learning has shown success in learning control policies for high-dimensional inputs, these algorithms are generally intractable to deploy on real robots due to their sample complexity. We use self-supervision to learn a compact, multimodal representation of visual and haptic sensory inputs, which can then be used to improve the sample efficiency of policy learning. I will present experiments on a peg-insertion task in which the learned policy generalises over different geometries, configurations, and clearances, while being robust to external perturbations. Although this work has shown very promising results on fusing vision and touch into a learned latent representation, that representation is not interpretable. In follow-up work, we present a multimodal fusion algorithm that exploits a differentiable filtering framework to track the state of manipulated objects, thereby facilitating longer-horizon planning. We also propose a framework in which a robot can exploit information from failed manipulation attempts to recover and re-try. Finally, we show how exploiting multiple modalities helps to compensate for corrupted sensory data in one of the modalities.

I will conclude this talk with a discussion of appropriate representations for multimodal sensory data.
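The benefit of fusing modalities with very different noise characteristics can be illustrated with the standard inverse-variance weighting rule at the heart of Gaussian filtering. The scalar setup and the specific noise levels below are invented for the sketch; they are not the speaker's learned representation or filter.

```python
import random

random.seed(1)

# Illustrative variance-weighted fusion of two "modalities" observing the
# same quantity: a precise vision reading and a coarser touch reading. The
# sensor models are invented; the weighting rule is the standard
# product-of-Gaussians (inverse-variance) formula.
def fuse(z_a, var_a, z_b, var_b):
    """Combine two Gaussian measurements of the same scalar."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    mean = (w_a * z_a + w_b * z_b) / (w_a + w_b)
    return mean, 1.0 / (w_a + w_b)

TRUE_POS = 0.5
SIG_VISION, SIG_TOUCH = 0.05, 0.10
N = 20000
err_vision = err_touch = err_fused = 0.0
for _ in range(N):
    z_v = random.gauss(TRUE_POS, SIG_VISION)
    z_t = random.gauss(TRUE_POS, SIG_TOUCH)
    z_f, _ = fuse(z_v, SIG_VISION ** 2, z_t, SIG_TOUCH ** 2)
    err_vision += (z_v - TRUE_POS) ** 2
    err_touch += (z_t - TRUE_POS) ** 2
    err_fused += (z_f - TRUE_POS) ** 2
# The fused estimate beats either modality alone; if one modality is known
# to be corrupted, inflating its variance smoothly down-weights it, which
# mirrors the compensation behavior described in the abstract.
```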

 

Bio

Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, Jeannette Bohg was a PhD student at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm. In her thesis, she proposed novel methods towards multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University of Dresden, where she received her Master's in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal, such that they can provide meaningful feedback for execution and learning. Jeannette Bohg has received several awards, most notably the 2019 IEEE International Conference on Robotics and Automation (ICRA) Best Paper Award, the 2019 IEEE Robotics and Automation Society Early Career Award, and the 2017 IEEE Robotics and Automation Letters (RA-L) Best Paper Award.


Fundamental Physics of Moving Clock Time Synchronization in a Weak Gravitational Field

Speaker Name: 
Dr. Steven R. Wilkinson
Speaker Title: 
Principal Engineering Fellow
Speaker Organization: 
Raytheon Technologies
Start Time: 
Thursday, April 15, 2021 - 2:00pm
End Time: 
Thursday, April 15, 2021 - 3:00pm
Location: 
https://ucsc.zoom.us/j/95323996680?pwd=MXdUWlp5V3h3dmdyUXBUUUJkcTFGQT09
Organizer: 
Ricardo Sanfelice

  

Abstract

This talk will present the fundamental physics of near-Earth dynamic clocks and time synchronization. We begin by establishing basic clock behavior and reviewing established synchronization approaches between two stationary clocks in separated ground-based laboratories. A discussion of the different clock technologies in the literature will include comparisons of short-term and long-term stability, including a relationship between stability and volume. Once clocks start moving in a gravitational field, we must use General Relativity to understand the behavior of time as compared to other clocks that are either stationary or moving. We start with one spatial and one time dimension to show how motion causes clocks to run at different rates and creates synchronization asymmetries that must be corrected. We then discuss the transformation of proper time to coordinate time, just as is done for GPS clocks. We conclude by investigating the fundamentals of 4-dimensional dynamic clock synchronization of coordinate time.
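The weak-field behavior the abstract refers to is commonly summarized by one formula, stated here as general background rather than as the speaker's derivation:

```latex
% Proper time \tau of a clock moving with speed v at gravitational
% potential \Phi, relative to coordinate time t (weak field, slow motion):
\frac{d\tau}{dt} \;\approx\; 1 + \frac{\Phi}{c^{2}} - \frac{v^{2}}{2c^{2}}
```

For a GPS satellite relative to a ground clock, the potential term contributes roughly +45 μs/day (the satellite sits higher in the potential well) and the velocity term roughly −7 μs/day, for a net offset of about +38 μs/day that must be corrected when converting proper time to coordinate time.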

Bio

Dr. Steven R. Wilkinson is a Principal Engineering Fellow on the Senior Technical Staff in RIS Engineering. Over his 24-year career at the company, he has mainly worked on development programs involving new mission-systems solutions spanning microwave and RF systems and electro-optical systems, and he is the company expert in time and frequency metrology. He has applied this expertise to position, navigation, and timing (PNT), communications, radar, and EO/IR sensing and imaging. He is currently the Principal Investigator on an effort that supports the National Radio Astronomy Observatory (NRAO) in two areas: he is the RIS planetary radar development technical lead, and he is involved with the NRAO's Next Generation Very Large Array (ngVLA) time and frequency effort. The ngVLA is the future radio astronomy observatory that will replace the Very Large Array and the Very Long Baseline Array. Planetary radar will be a new capability for the NRAO that will complement current systems (Goldstone) to enhance our global planetary defense mission and solar system research. He received his undergraduate degree in physics from UC Berkeley and a PhD from the University of New Mexico.


Modular Architectures for Autonomous Vehicle Guidance and Control

Speaker Name: 
Dr. Stefano Di Cairano
Speaker Title: 
Distinguished Researcher and Senior Team Leader
Speaker Organization: 
Mitsubishi Electric Research Laboratories
Start Time: 
Thursday, March 11, 2021 - 2:00pm
End Time: 
Thursday, March 11, 2021 - 3:00pm
Location: 
https://ucsc.zoom.us/j/94773491875?pwd=WlpHUjFWRTEvYUs0TWFTbmhOTDN5Zz09
Organizer: 
Ricardo Sanfelice

  

Abstract

Highly autonomous systems, such as autonomous vehicles, are expected to exhibit complex behaviors in a changing and often unpredictable environment. As such, they require an equally complex reasoning system to provide guidance and control (G&C). The overall decision problem involves continuous dynamics, related to the physical system, and discrete dynamics, related to rules such as traffic rules. Due to this hybrid nature and the different time scales involved, the overall problem is too computationally complex to be solved in real time as a whole on production-grade embedded platforms. In this talk, we describe modular architectures that decompose the G&C problem into tractable sequences of sub-problems while retaining safety properties for the integrated control architecture. The result is G&C architectures for autonomous vehicles that are flexible, expandable, and provably safe and robust, yet appropriate for the embedded platforms typical of automotive, aerospace, and robotics applications. Several tests on a scaled testbench for autonomous driving system development will be presented to demonstrate the concept.
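The decomposition into sub-problems running at different time scales can be sketched with a minimal two-layer loop: a slow planner that emits intermediate waypoints and a fast controller that tracks them. The 1-D plant, the rates, and the gains below are invented for illustration and are not the architecture presented in the talk.

```python
# Minimal two-layer G&C sketch: the slow layer reasons about where to go
# next; the fast layer handles the continuous dynamics.

def planner(pos, goal, step=1.0):
    """Slow layer: emit the next intermediate waypoint toward the goal."""
    if abs(goal - pos) <= step:
        return goal
    return pos + step * (1 if goal > pos else -1)

def controller(pos, vel, waypoint, kp=2.0, kd=1.5):
    """Fast layer: PD tracking of the current waypoint."""
    return kp * (waypoint - pos) - kd * vel

pos, vel, goal, dt = 0.0, 0.0, 5.0, 0.01
waypoint = pos
for tick in range(4000):
    if tick % 100 == 0:            # planner runs at 1/100 the control rate
        waypoint = planner(pos, goal)
    a = controller(pos, vel, waypoint)
    vel += a * dt                  # double-integrator plant
    pos += vel * dt
```

Each layer solves a tractable sub-problem at its own rate, which is the essence of the modular decomposition: the expensive discrete reasoning never has to run inside the fast control loop.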

Bio

Stefano Di Cairano received the Master's (Laurea) and PhD degrees in Information Engineering in 2004 and 2008, respectively, from the University of Siena, Italy. He has been a visiting student at the Technical University of Denmark and at the California Institute of Technology. During 2008–2011, he was with Powertrain Control R&A, Ford Research and Advanced Engineering, Dearborn, MI. Since 2011, he has been with Mitsubishi Electric Research Laboratories, Cambridge, MA, where he is now the Senior Team Leader of Control for Autonomy and a Distinguished Researcher. His research is on optimization-based control strategies for complex mechatronic systems in automotive, factory automation, transportation systems, and aerospace. His research interests include model predictive control, constrained control, particle filtering, hybrid systems, and optimization. Dr. Di Cairano is an author of more than 200 peer-reviewed papers in journals and conference proceedings and an inventor on more than 50 patents. He was the Chair of the IEEE CSS Technical Committee on Automotive Controls, the Chair of the IEEE CSS Standing Committee on Standards, and an Associate Editor of the IEEE Transactions on Control Systems Technology. He is currently the Vice-Chair of the IFAC Technical Committee on Optimal Control, an Executive Member of the IFAC Committee on Industry, and the Chair of the Technology Conferences Editorial Board.
