The Barycenter Method for Direct Optimization: an Overview with Applications
We will present properties of the recently developed barycenter method for direct optimization that make it particularly useful in control applications. Equivalence of the method's batch and recursive formulations can be used to show that it has descent-like properties, although no derivatives are used, and that it is robust to noisy measurements and lack of differentiability. As a relevant application example, the method can be employed in the joint estimation of parameters and switching times for hybrid linear systems, an important problem that can pose significant computational challenges due to the non-convex nature of the combined optimization.
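The recursive formulation mentioned above can be illustrated with a minimal numerical sketch (an illustrative toy, not the authors' implementation): each probe point is folded into a running cost-weighted average with weights exp(-λ f(x)), so the barycenter drifts toward low-cost regions without any derivative information. The sampling scheme, constants, and test function here are assumptions chosen for illustration.

```python
import numpy as np

def barycenter_step(xbar, mass, x_new, f_val, lam=2.0):
    """Fold one probe point into the running barycenter.

    Weights are exp(-lam * f), so low-cost probes pull the
    barycenter toward themselves; no derivatives are needed.
    """
    w = np.exp(-lam * f_val)
    return (mass * xbar + w * x_new) / (mass + w), mass + w

# Derivative-free minimisation of a noisy quadratic (x - 3)^2.
rng = np.random.default_rng(0)
f = lambda x: (x - 3.0) ** 2 + 0.1 * rng.standard_normal()

xbar, mass = 0.0, 1e-12
for _ in range(2000):
    probe = xbar + rng.normal(scale=0.5)   # explore around the current barycenter
    xbar, mass = barycenter_step(xbar, mass, probe, f(probe))

print(round(xbar, 1))   # settles near the true minimiser x = 3
```

Because better probes receive exponentially larger weights, the running average behaves like a descent method even though only (noisy) function values are ever evaluated, matching the descent-like property claimed in the abstract.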
Felipe Pait studied electrical engineering at the University of São Paulo and received a PhD from Yale University in 1993, advised by A. S. Morse. He has worked on adaptive control and its applications. He is currently interested in applying randomized optimization algorithms to classical open questions in adaptive control design, and in stability conditions for switched systems. He is an associate professor at the University of São Paulo, Brazil, and has dedicated substantial effort to curriculum reform and multidisciplinary engineering education initiatives.
RAPID: Robot Assisted Precision Irrigation Delivery
Agricultural irrigation consumes 70% of the world's freshwater. Emerging sensing technologies such as UAVs equipped with heterogeneous sensors can provide farmers with detailed maps of water use and ground conditions, but closing the sensing-actuation loop to adjust irrigation at the plant level remains an unsolved challenge. Some proposed solutions rely on networks of motorized wireless actuators that are costly and prone to failure in field conditions. RAPID (Robot-Assisted Precision Irrigation Delivery) explores an alternative approach in which humans and robots collaborate to adjust low-cost drip irrigation emitters at the plant level. Designed with cost-conscious farm managers in mind, RAPID can be retrofitted to existing irrigation systems and expanded incrementally, increasing irrigation precision, reducing water usage, and permitting thousands of emitters to be adjusted. The project involves the design, development, and field evaluation of robust co-robotic systems compatible with existing drip irrigation infrastructure in vineyards and orchards. After giving an overview of the project, I will illustrate a set of results on routing single and multiple robots in vineyards.
Stefano Carpin is Professor and founding chair of the Department of Computer Science and Engineering at UC Merced. He received his “Laurea” (MSc) and Ph.D. degrees in electrical engineering and computer science from the University of Padova (Italy) in 1999 and 2003, respectively. Since 2007 he has been with the School of Engineering at UC Merced, where he established and leads the UC Merced robotics laboratory. His research interests include mobile and cooperative robotics and robot algorithms. He is a Senior Member of the IEEE and has served as associate editor for the IEEE Transactions on Robotics (T-RO), the IEEE Transactions on Automation Science and Engineering (T-ASE), and the IEEE Robotics and Automation Letters (RA-L). Under his supervision, teams participating in the RoboCupRescue Virtual Robots competition won second place in 2006 and 2008, and first place in 2009. In 2018, he won the Best Conference Paper Award at the IEEE International Conference on Automation Science and Engineering (CASE). Since joining UC Merced, his research has been supported by the National Science Foundation, DARPA, USDA, the Office of Naval Research, the Army Research Lab, the Department of Commerce (NIST), the Center for Information Technology Research in the Interest of Society (CITRIS), Microsoft Research, and General Motors.
Safe Planning and Control When Autonomy is Not the Only Driver
When developing planning and control algorithms for autonomous systems, we often assume that these algorithms will be the only significant source of inputs to the system. However, in some applications there may be an additional "driver", external to the autonomy pipeline, that has a large impact on the performance and safety of the overall system. In this talk, I will discuss two projects that investigate different aspects of this problem.
The first considers the effects of an external driver that acts alongside the autonomy but without any shared objective. For example, the aerodynamic forces acting on a micro aerial vehicle flying through a strong wind field can dramatically alter the vehicle’s motion, leading to violations of safety or operational constraints. Limited onboard computation further restricts the ability to model these nonlinear dynamics or incorporate them into the autonomy pipeline. This leads to the idea of Experience-driven Predictive Control (EPC). EPC builds on ideas from adaptive control and model predictive control, accumulating experience of how an external driver impacts the system while simultaneously leveraging that experience to ensure that constraints are met in a computationally efficient manner.
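As a toy illustration of this general idea (my own sketch, not the EPC algorithm itself), the loop below estimates an unknown wind disturbance from prediction residuals and feeds the estimate back into a simple bounded one-step predictive controller. The 1-D model, gains, and constants are all assumptions chosen for illustration.

```python
import numpy as np

dt, wind = 0.1, -0.8                     # hypothetical wind acceleration (unknown to the controller)
pos, vel = 2.0, 0.0                      # vehicle state
d_hat, n = 0.0, 0                        # accumulated "experience": disturbance estimate

for _ in range(300):
    # Bounded control toward the origin, compensating the learned disturbance.
    u = np.clip(-2.0 * pos - 3.0 * vel - d_hat, -2.0, 2.0)
    vel_pred = vel + dt * (u + d_hat)    # model-based one-step prediction
    # True dynamics include the unknown wind.
    vel += dt * (u + wind)
    pos += dt * vel
    # Experience update: the residual reveals the unmodeled disturbance.
    d_obs = d_hat + (vel - vel_pred) / dt
    n += 1
    d_hat += (d_obs - d_hat) / n         # running average of observed disturbance

print(round(d_hat, 2), round(pos, 2))    # estimate converges to the wind; position to the origin
```

The point of the sketch is the interplay EPC exploits: the same prediction model used for control also generates residuals that teach the controller about the external driver, so compensation improves without ever modeling the wind explicitly offline.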
The second project considers the case where the external driver is actually the primary operator of the system, while the autonomy aims to assist this driver to safely achieve a common objective. A prime example is the next generation of advanced driver assistance systems, which will be able to employ techniques developed for fully autonomous driving in the context of keeping a human driver safe. The Toyota Guardian system is the Toyota Research Institute's (TRI's) novel approach to this problem, building on ideas from a variety of domains, ranging from aircraft control to shared autonomy. I will show a few preliminary examples of how this combination of the human driver and the autonomy can achieve superhuman performance and safety.
Vishnu Desaraju is a Senior Research Scientist at the Toyota Research Institute, Ann Arbor, MI working on automated driving technologies. He received a B.S.E. in Electrical Engineering from the University of Michigan in 2008, an S.M. in Aeronautics and Astronautics from MIT in 2010, and an M.S. and Ph.D. in Robotics from Carnegie Mellon University in 2015 and 2017, respectively. He received the AIAA Guidance, Navigation, and Control Best Paper award from SciTech 2018. His research interests include developing computationally efficient motion planning and feedback control algorithms for agile autonomous systems, including autonomous cars, boats, and micro air vehicles, with a focus on mitigating the effects of uncertainty to achieve safe and reliable operation in the field.
Enabling Mutually Adaptive Autonomous Systems through Non-intrusive Performance Estimation
In many human-robot teaming applications, an autonomous system has minimal information regarding its human teammates, particularly with regard to a human's ability to interact with or respond to system demands. If a system were able to predict a human's performance given their current mental state, interactions between human and system could be tailored to match the human's current mental engagement. In an alerting/monitoring scenario, such tailoring could be used to avoid both normalization of deviance, in which a system's warnings are so common as to be ignored, and undernotification, in which a system is aware of a condition but does not alert the human to take action. In my work, we have used passive physiological sensors to directly estimate human performance, without intermediate labels such as cognitive states, via an online estimation approach based on reinforcement learning. In this talk, I will discuss our problem formulation and initial results, as well as planned experiments investigating human-aware alerting strategies in immersive environments.
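As a rough sketch of what direct, label-free online estimation can look like (a hypothetical stand-in, not the talk's reinforcement-learning formulation), the loop below adapts a linear map from synthetic physiological features straight to observed task performance with stochastic gradient updates, skipping any intermediate cognitive-state labels. The feature model, dimensions, and constants are invented for illustration.

```python
import numpy as np

# Hypothetical data model: three physiological features (e.g. heart-rate,
# gaze, and skin-conductance statistics) linearly related to observed task
# performance through an unknown map w_true.
rng = np.random.default_rng(1)
w_true = np.array([0.6, -0.3, 0.1])
w = np.zeros(3)                          # online estimate; no cognitive-state labels used

for _ in range(5000):
    x = rng.standard_normal(3)           # current physiological feature vector
    perf = w_true @ x + 0.05 * rng.standard_normal()  # observed performance
    err = perf - w @ x                   # prediction error on this interaction
    w += 0.01 * err * x                  # stochastic gradient (LMS) update

print(np.round(w, 2))
```

Even this crude estimator shows the core appeal of the approach: the mapping from raw sensor data to performance is learned incrementally from outcomes alone, so it can adapt to an individual teammate during operation.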
Steve McGuire is an Assistant Professor of Electrical and Computer Engineering at the University of California Santa Cruz. His lab, the Human-Aware Robotic Exploration (HARE) Group explores human-robot teaming in the context of field robotic applications. Prior to joining UCSC in 2020, he earned his PhD as a NASA Space Technology Research Fellow from the University of Colorado at Boulder in Aerospace Engineering Sciences. He is a former Marine helicopter pilot and has been part of several top-level field robotics initiatives, including the DARPA Grand Challenge in 2004, the Google Lunar X Prize, and the ongoing DARPA Subterranean Challenge.
On the Role of Touch for Robustness and Generalisability in Robotic Manipulation
Learning contact-rich robotic manipulation skills is a challenging problem due to the high dimensionality of the state and action spaces as well as uncertainty from noisy sensors and inaccurate motor control. In our research, we explore which representations of raw perceptual data enable a robot to better learn and perform these skills. For manipulation robots specifically, the sense of touch is essential, yet it is non-trivial to manually design a robot controller that combines sensing modalities with very different characteristics. I will present a set of research works that explore how to best fuse information from vision and touch for contact-rich manipulation tasks. While deep reinforcement learning has shown success in learning control policies for high-dimensional inputs, these algorithms are generally intractable to deploy on real robots due to their sample complexity. We use self-supervision to learn a compact multimodal representation of visual and haptic sensory inputs, which can then be used to improve the sample efficiency of policy learning. I will present experiments on a peg-insertion task in which the learned policy generalises over different geometries, configurations, and clearances, while being robust to external perturbations. Although this work has shown very promising results on fusing vision and touch into a learned latent representation, that representation is not interpretable. In follow-up work, we present a multimodal fusion algorithm that exploits a differentiable filtering framework to track the state of manipulated objects, thereby facilitating longer-horizon planning. We also propose a framework in which a robot can exploit information from failed manipulation attempts to recover and re-try. Finally, we show how exploiting multiple modalities helps to compensate for corrupted sensory data in one of the modalities.
I will conclude this talk with a discussion of appropriate representations for multimodal sensory data.
Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, Jeannette Bohg was a PhD student at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm. In her thesis, she proposed novel methods for multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University in Dresden, where she received her Master's in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal, such that they can provide meaningful feedback for execution and learning. Jeannette Bohg has received several awards, most notably the 2019 IEEE International Conference on Robotics and Automation (ICRA) Best Paper Award, the 2019 IEEE Robotics and Automation Society Early Career Award, and the 2017 IEEE Robotics and Automation Letters (RA-L) Best Paper Award.