CPS Events

RAPID: Robot-Assisted Precision Irrigation Delivery

Speaker Name: 
Stefano Carpin
Speaker Title: 
Professor of Computer Science and Engineering
Speaker Organization: 
UC Merced
Start Time: 
Thursday, May 27, 2021 - 2:00pm
End Time: 
Thursday, May 27, 2021 - 3:00pm
Location: 
https://ucsc.zoom.us/j/99553677882?pwd=N3QyME52OWh4WFNtK05yeTVGS3BGUT09
Organizer: 
Ricardo Sanfelice

  

Abstract

Agricultural irrigation consumes 70% of the world's freshwater. Emerging sensing technologies such as UAVs equipped with heterogeneous sensors can provide farmers with detailed maps of water use and ground conditions, but closing the sensing-actuation loop to adjust irrigation at the plant level remains an unsolved challenge. Some proposed solutions rely on networks of motorized wireless actuators that are costly and prone to failure in field conditions. RAPID (Robot-Assisted Precision Irrigation Delivery) explores an alternative approach in which humans and robots collaborate to adjust low-cost drip irrigation emitters at the plant level. RAPID is designed so that cost-conscious farm managers can retrofit it to existing irrigation systems and expand it incrementally, increasing irrigation precision, reducing water usage, and allowing thousands of emitters to be adjusted. The project involves the design, development, and field evaluation of robust co-robotic systems compatible with existing drip irrigation infrastructure in vineyards and orchards. After giving an overview of the project, I will present a set of results on routing in vineyards for single and multiple robots.


Bio

Stefano Carpin is Professor and founding chair of the Department of Computer Science and Engineering at UC Merced. He received his “Laurea” (MSc) and Ph.D. degrees in electrical engineering and computer science from the University of Padova (Italy) in 1999 and 2003, respectively. Since 2007 he has been with the School of Engineering at UC Merced, where he established and leads the UC Merced robotics laboratory. His research interests include mobile and cooperative robotics and robot algorithms. He is a Senior Member of the IEEE and has served as associate editor for the IEEE Transactions on Robotics (T-RO), the IEEE Transactions on Automation Science and Engineering (T-ASE), and the IEEE Robotics and Automation Letters (RA-L). Under his supervision, teams participating in the RoboCupRescue Virtual Robots competition won second place in 2006 and 2008, and first place in 2009. In 2018 he won the Best Conference Paper Award at the annual IEEE International Conference on Automation Science and Engineering (CASE). Since joining UC Merced, his research has been supported by the National Science Foundation, DARPA, USDA, the Office of Naval Research, the Army Research Lab, the Department of Commerce (NIST), the Center for Information Technology Research in the Interest of Society (CITRIS), Microsoft Research, and General Motors.


Safe Planning and Control When Autonomy is Not the Only Driver

Speaker Name: 
Vishnu Desaraju
Speaker Title: 
Senior Research Scientist
Speaker Organization: 
Toyota Research Institute
Start Time: 
Thursday, May 13, 2021 - 2:00pm
End Time: 
Thursday, May 13, 2021 - 3:00pm
Location: 
https://ucsc.zoom.us/j/96891323311?pwd=L2IxdlJDdS9sL0U1aTNZZ0MvbDhKdz09
Organizer: 
Ricardo Sanfelice

  

Abstract

When developing planning and control algorithms for autonomous systems, we often assume that these algorithms will be the only significant source of inputs to the system. However, in some applications there may be an additional "driver", external to the autonomy pipeline, that has a large impact on the performance and safety of the overall system. In this talk, I will discuss two projects that investigate different aspects of this problem.

The first considers the effects of an external driver that acts alongside the autonomy but without any shared objective. For example, the aerodynamic forces acting on a micro aerial vehicle flying through a strong wind field can dramatically alter the vehicle's motion, leading to violations of safety or operational constraints, while limited onboard computation makes it difficult to model these nonlinear dynamics or incorporate them into the autonomy. This leads to the idea of Experience-driven Predictive Control (EPC). EPC builds on ideas from adaptive control and model predictive control, accumulating experience of how an external driver affects the system while simultaneously leveraging that experience to ensure that constraints are met in a computationally efficient manner.
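
As a purely illustrative sketch (the dynamics, cost, constraint, and update rule below are assumptions of mine, not the EPC algorithm presented in the talk), the following Python snippet shows the general pattern the paragraph describes: accumulating experience of an unknown external disturbance online and using both the estimate and its observed spread inside a short-horizon, constraint-aware controller.

# Illustrative sketch only: a toy receding-horizon controller that learns a
# constant wind disturbance online and tightens its position constraint by the
# observed spread of the estimates. The dynamics, cost, and bounds here are
# assumptions for illustration, not the EPC formulation from the talk.
import numpy as np

dt, horizon = 0.1, 10
u_candidates = np.linspace(-2.0, 2.0, 41)   # discretized accelerations
pos_limit = 1.0                             # safety constraint: |position| <= 1
true_wind = 0.6                             # unknown to the controller

def step(x, u, wind):
    """Double integrator with an additive wind acceleration."""
    pos, vel = x
    return np.array([pos + vel * dt, vel + (u + wind) * dt])

def plan(x, wind_hat, margin):
    """Pick the constant action minimizing a short-horizon cost while
    respecting the position constraint tightened by `margin`."""
    best_u, best_cost = 0.0, np.inf
    for u in u_candidates:
        xs, cost, feasible = x.copy(), 0.0, True
        for _ in range(horizon):
            xs = step(xs, u, wind_hat)
            cost += xs[0] ** 2 + 0.1 * u ** 2
            if abs(xs[0]) > pos_limit - margin:
                feasible = False
                break
        if feasible and cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

x = np.array([0.5, 0.0])
wind_hat, wind_samples = 0.0, []
for t in range(100):
    margin = 0.2 if len(wind_samples) < 3 else min(0.2, 2.0 * np.std(wind_samples))
    u = plan(x, wind_hat, margin)
    x_pred = step(x, u, wind_hat)            # model prediction
    x = step(x, u, true_wind)                # "real" system with the unknown wind
    resid = (x[1] - x_pred[1]) / dt          # acceleration mismatch vs. the model
    wind_samples.append(wind_hat + resid)    # experience: each sample estimates wind
    wind_hat = float(np.mean(wind_samples))
print(f"estimated wind: {wind_hat:.2f}, final position: {x[0]:.2f}")

The design choice worth noting is that the accumulated experience serves two roles: its mean improves the prediction model, and its spread sets how conservatively the constraint is enforced.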

The second project considers the case where the external driver is actually the primary operator of the system, while the autonomy aims to assist this driver to safely achieve a common objective. A prime example of this is the next generation of advanced driver assistance systems that will be able to employ techniques developed for fully autonomous driving but in the context of keeping a human driver safe. The Toyota Guardian system is TRI’s novel approach to this problem, building on ideas from a variety of domains, ranging from aircraft control to shared autonomy. I will show a few preliminary examples of how this combination of the human driver and the autonomy can achieve superhuman performance and safety.


Bio

Vishnu Desaraju is a Senior Research Scientist at the Toyota Research Institute, Ann Arbor, MI, working on automated driving technologies. He received a B.S.E. in Electrical Engineering from the University of Michigan in 2008, an S.M. in Aeronautics and Astronautics from MIT in 2010, and an M.S. and Ph.D. in Robotics from Carnegie Mellon University in 2015 and 2017, respectively. He received the AIAA Guidance, Navigation, and Control Best Paper Award at SciTech 2018. His research interests include developing computationally efficient motion planning and feedback control algorithms for agile autonomous systems, including autonomous cars, boats, and micro air vehicles, with a focus on mitigating the effects of uncertainty to achieve safe and reliable operation in the field.


Enabling Mutually Adaptive Autonomous Systems through Non-intrusive Performance Estimation

Speaker Name: 
Steve McGuire
Speaker Title: 
Assistant Professor of Electrical and Computer Engineering
Speaker Organization: 
University of California at Santa Cruz
Start Time: 
Thursday, May 6, 2021 - 2:00pm
End Time: 
Thursday, May 6, 2021 - 3:00pm
Location: 
https://ucsc.zoom.us/j/92846502542?pwd=TEVXbFU0QzhBQXZaelg1UWs1UkdhUT09
Organizer: 
Ricardo Sanfelice

  

Abstract

In many human-robot teaming applications, an autonomous system has minimal information about its human teammates, particularly with regard to a human's ability to interact with or respond to system demands. If a system could predict a human's performance given their current mental state, interactions between human and system could be tailored to match the human's current level of engagement. In an alerting / monitoring scenario, such tailoring could be used to avoid both normalization of deviance, in which a system's warnings become so common that they are ignored, and undernotification, in which a system is aware of a condition but does not alert the human to take action. In my work, we have used passive physiological sensors to estimate human performance directly, without intermediate labels such as cognitive states, using an online estimation approach based on reinforcement learning. In this talk, I will discuss our problem formulation and initial results, as well as planned experiments to investigate human-aware alerting strategies using immersive environments.
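
As a hypothetical illustration only (the feature names, reward signal, and update rule below are my assumptions, not the formulation used in this work), the sketch shows the general shape of an online, RL-style estimator that incrementally maps streaming physiological features to a predicted performance score and could drive an adaptive alerting policy.

# Illustrative only: an online linear estimator that predicts a performance
# score from streaming physiological features and updates incrementally
# whenever a (possibly sparse) performance outcome arrives. Feature names and
# the update rule are assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(0)
n_features = 3                        # e.g., heart rate, gaze dispersion, pupil size
weights = np.zeros(n_features)
alpha = 0.05                          # learning rate

def predict(features):
    return float(weights @ features)

def update(features, observed_performance):
    """Stochastic-gradient step toward the observed outcome."""
    global weights
    error = observed_performance - predict(features)
    weights += alpha * error * features
    return error

alerts = 0
for t in range(500):
    feats = rng.normal(size=n_features)
    # Hypothetical ground truth: performance degrades with the first feature.
    true_perf = 1.0 - 0.8 * feats[0] + 0.1 * rng.normal()
    if t % 5 == 0:                    # outcomes arrive only occasionally
        update(feats, true_perf)
    if predict(feats) < 0.0:          # predicted poor performance
        alerts += 1                   # e.g., escalate the alerting strategy
print("learned weights:", np.round(weights, 2), "| alerts raised:", alerts)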

 

Bio

Steve McGuire is an Assistant Professor of Electrical and Computer Engineering at the University of California, Santa Cruz. His lab, the Human-Aware Robotic Exploration (HARE) Group, explores human-robot teaming in the context of field robotics applications. Prior to joining UCSC in 2020, he earned his PhD in Aerospace Engineering Sciences from the University of Colorado Boulder as a NASA Space Technology Research Fellow. He is a former Marine helicopter pilot and has been part of several top-level field robotics initiatives, including the DARPA Grand Challenge in 2004, the Google Lunar X Prize, and the ongoing DARPA Subterranean Challenge.


On the Role of Touch for Robustness and Generalisability in Robotic Manipulation

Speaker Name: 
Jeannette Bohg
Speaker Title: 
Assistant Professor of Computer Science
Speaker Organization: 
Stanford University
Start Time: 
Thursday, April 29, 2021 - 2:00pm
End Time: 
Thursday, April 29, 2021 - 3:00pm
Location: 
https://ucsc.zoom.us/j/95445782277?pwd=WDBLVGtiemRZODczWmxqcDh4dXZ0Zz09
Organizer: 
Ricardo Sanfelice

  

Abstract

Learning contact-rich robotic manipulation skills is a challenging problem due to the high dimensionality of the state and action spaces as well as uncertainty from noisy sensors and inaccurate motor control. In our research, we explore what representations of raw perceptual data enable a robot to better learn and perform these skills. For manipulation robots in particular, the sense of touch is essential, yet it is non-trivial to manually design a robot controller that combines sensing modalities with very different characteristics. I will present a series of research works that explore how to best fuse information from vision and touch for contact-rich manipulation tasks. While deep reinforcement learning has shown success in learning control policies for high-dimensional inputs, these algorithms are generally intractable to deploy on real robots due to sample complexity. We use self-supervision to learn a compact, multimodal representation of visual and haptic sensory inputs, which can then be used to improve the sample efficiency of policy learning. I present experiments on a peg-insertion task in which the learned policy generalises over different geometries, configurations, and clearances while being robust to external perturbations. Although this work has shown very promising results on fusing vision and touch into a learned latent representation, that representation is not interpretable. In follow-up work, we present a multimodal fusion algorithm that exploits a differentiable filtering framework to track the state of manipulated objects and thereby facilitate longer-horizon planning. We also propose a framework in which a robot can exploit information from failed manipulation attempts to recover and retry. Finally, we show how exploiting multiple modalities helps compensate for corrupted sensory data in one of the modalities. I will conclude this talk with a discussion of appropriate representations for multimodal sensory data.
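
As a minimal sketch only (the dimensions, architecture, and usage below are assumptions; the work described uses self-supervised objectives and a differentiable filter not shown here), the snippet illustrates the general pattern of encoding each modality separately and fusing the features into a compact shared latent for a downstream policy.

# Minimal sketch of fusing visual and haptic inputs into a shared latent.
# Sizes and architecture are illustrative assumptions, not the published model.
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    def __init__(self, image_dim=512, haptic_dim=32, latent_dim=64):
        super().__init__()
        self.image_enc = nn.Sequential(nn.Linear(image_dim, 128), nn.ReLU(),
                                       nn.Linear(128, latent_dim))
        self.haptic_enc = nn.Sequential(nn.Linear(haptic_dim, 64), nn.ReLU(),
                                        nn.Linear(64, latent_dim))
        self.fusion = nn.Sequential(nn.Linear(2 * latent_dim, latent_dim), nn.ReLU())

    def forward(self, image_feat, haptic_feat):
        # Encode each modality, concatenate, and compress into one latent.
        z = torch.cat([self.image_enc(image_feat),
                       self.haptic_enc(haptic_feat)], dim=-1)
        return self.fusion(z)          # compact latent for a downstream policy

# Example usage with random stand-ins for image features and force/torque data.
enc = MultimodalEncoder()
latent = enc(torch.randn(8, 512), torch.randn(8, 32))
print(latent.shape)                    # torch.Size([8, 64])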

 

Bio

Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, she was a PhD student at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm. In her thesis, she proposed novel methods for multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University of Dresden, where she received her Master's in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal, so that they can provide meaningful feedback for execution and learning. She has received several awards, most notably the 2019 IEEE International Conference on Robotics and Automation (ICRA) Best Paper Award, the 2019 IEEE Robotics and Automation Society Early Career Award, and the 2017 IEEE Robotics and Automation Letters (RA-L) Best Paper Award.


Fundamental Physics of Moving Clock Time Synchronization in a Weak Gravitational Field

Speaker Name: 
Dr. Steven R. Wilkinson
Speaker Title: 
Principal Engineering Fellow
Speaker Organization: 
Raytheon Technologies
Start Time: 
Thursday, April 15, 2021 - 2:00pm
End Time: 
Thursday, April 15, 2021 - 3:00pm
Location: 
https://ucsc.zoom.us/j/95323996680?pwd=MXdUWlp5V3h3dmdyUXBUUUJkcTFGQT09
Organizer: 
Ricardo Sanfelice

  

Abstract

This talk will present the fundamental physics of near-Earth dynamic clocks and time synchronization. We begin by establishing basic clock behavior and reviewing established synchronization approaches between two stationary clocks in separated ground-based laboratories. A discussion of the different clock technologies in the literature will include comparisons of short-term and long-term stability, including a relationship between stability and volume. Once clocks start moving in a gravitational field, we must use general relativity to understand how their time behaves compared to other clocks, whether stationary or moving. We start with one spatial and one time dimension to show how motion causes clocks to run at different rates and introduces synchronization asymmetries that must be corrected. We then discuss the transformation of proper time to coordinate time, just as is done for GPS clocks. We conclude by investigating the fundamentals of four-dimensional dynamic clock synchronization in coordinate time.
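
For reference, the standard weak-field, slow-motion approximation relating a moving clock's proper time to coordinate time, which underlies the GPS corrections mentioned above, is, to first order,

\frac{d\tau}{dt} \approx 1 + \frac{\Phi}{c^{2}} - \frac{v^{2}}{2c^{2}},

where \Phi is the (negative) Newtonian gravitational potential at the clock, v is its coordinate speed, and c is the speed of light. The first correction term is the gravitational frequency shift and the second is time dilation due to motion; the talk may develop this relation in its own notation, but this is the textbook form of the weak-field rate relation.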

Bio

Dr. Steven R. Wilkinson is a Principal Engineering Fellow on the Senior Technical Staff in RIS Engineering. Over his 24-year career at the company, he has mainly worked on development programs involving new mission-system solutions covering microwave and RF systems and electro-optical systems, and he is the company expert in time and frequency metrology. He has applied this expertise to position, navigation, and timing (PNT), communications, radar, and EO/IR sensing and imaging. He is currently the Principal Investigator on an effort that supports the National Radio Astronomy Observatory (NRAO) in two areas: he is the RIS planetary radar development technical lead, and he is involved with the NRAO's Next Generation Very Large Array (ngVLA) time and frequency effort. The ngVLA is the future radio astronomy observatory that will replace the Very Large Array and the Very Long Baseline Array. Planetary radar will be a new capability for the NRAO and will complement current systems (Goldstone) to enhance our global planetary defense mission and solar system research. He received his undergraduate degree in physics from UC Berkeley and a PhD from the University of New Mexico.
