Plenary & Keynote Speakers

Day 1

Morning

Time | Speaker | Title | Room
08:30 - 09:30 | Ayanna Howard | Robots, Ethics, and Society: Mitigating the Bias in Emerging Technologies | Terrace Ballroom 2-4

Afternoon

Time | Speaker | Title | Room
14:00 - 15:00 | Joshua Tenenbaum | Scaling AI the human way: What do we start with, and how do we learn the rest? (Virtual, Asynchronous, on Infovaya) | Virtual on Infovaya
14:00 - 15:00 | Gavin Ananda | Detect and Avoid for National Scale Autonomous Instant Logistics Operations | 120 ABC
14:00 - 15:00 | Marco Hutter | Legged robots on the way from subterranean | 121 ABC

Day 2

Morning

Time | Speaker | Title | Room
08:30 - 09:30 | Julie Shah | Human-Machine Partnerships and Work of the Future | Terrace Ballroom 2-4

Afternoon

Time | Speaker | Title | Room
14:00 - 15:00 | Vandi Verma | NASA Robots on Mars and the Future Missions They Inspire | 121 ABC
14:00 - 15:00 | Michael Yu Wang | Adaptive Grasping with Touch Sensing and Dry-Adhesive Contact (Virtual, Asynchronous, on Infovaya) | Virtual on Infovaya
14:00 - 15:00 | Benjamin Rosman | Composition: a tale of robot behaviors and research communities | 120 ABC

Day 3

Morning

Time | Speaker | Title | Room
08:30 - 09:30 | Antonio Bicchi | The Embodied Intelligence Aporia (and what we can get out of it) | Terrace Ballroom 2-4

Afternoon

Time | Speaker | Title | Room
14:00 - 15:00 | Eiichi Yoshida | Humanoid and Digital Actor as Cyber-Physical Twins for Understanding and Synthesizing Human Behaviors | 119 AB
14:00 - 15:00 | Salah Sukkarieh | Next Gen Farm Robots: Extending their capabilities to include agronomy practice | 121 ABC
14:00 - 15:00 | Veronica Santos | The Role of Touch in Robotics for a Connected World | 120 ABC

Plenary Speakers

Ayanna Howard

Ayanna Howard, Ohio State University, USA

Robots, Ethics, and Society: Mitigating the Bias in Emerging Technologies

People tend to overtrust sophisticated robotic devices, especially those powered by AI. As these systems become more fully interactive with humans during the performance of day-to-day activities, ethical considerations in deploying these systems must be more carefully investigated. Bias, for example, has often been encoded in and can manifest itself through AI algorithms, which humans then take guidance from, resulting in the phenomenon of excessive trust. Bias further compounds this risk of overtrust in that these systems learn by mimicking our own thinking processes, inheriting our own implicit gender and racial biases, for example. These types of human-AI feedback loops may consequently have a direct impact on the overall quality of the interaction between humans and machines, whether the interaction is in the domains of healthcare, job placement, or other high-impact life scenarios. In this talk, we will discuss these types of ethical conundrums in robotics and possible ways to mitigate their impact on our quality of life.

Biography

Dr. Ayanna Howard is the Dean of Engineering at The Ohio State University and Monte Ahuja Endowed Dean's Chair.  She also holds a faculty appointment in the college’s Department of Electrical and Computer Engineering with a joint appointment in Computer Science and Engineering. Previously she was the Linda J. and Mark C. Smith Endowed Chair in Bioengineering and Chair of the School of Interactive Computing at the Georgia Institute of Technology. Prior to Georgia Tech, Dr. Howard was at NASA's Jet Propulsion Laboratory where she held the title of Senior Robotics Researcher and Deputy Manager in the Office of the Chief Scientist. Her research encompasses advancements in artificial intelligence (AI), assistive technologies, and robotics, and has resulted in over 275 peer-reviewed publications. In 2013, she founded Zyrobotics, an education technology startup, which designs AI-powered STEM tools and learning games to engage children with diverse abilities. She has also served as the Associate Director of Research for the Institute for Robotics and Intelligent Machines, Chair of the Robotics Ph.D. program, and the Associate Chair for Faculty Development in ECE at Georgia Tech. Dr. Howard is a Fellow of IEEE, AAAI, AAAS, and the National Academy of Inventors (NAI). She is also the recipient of the Anita Borg Institute Richard Newton Educator ABIE Award, CRA A. Nico Habermann Award, Richard A. Tapia Achievement Award, NSBE Janice Lumpkin Educator of the Year Award, and ACM Athena Lecturer Award. She also serves on the Board of Directors for Autodesk, Motorola Solutions, and the Partnership on AI. To date, Dr. Howard’s unique accomplishments have been highlighted through a number of other public recognitions, including highlights in Vanity Fair, USA Today, Upscale, Black Enterprise, and TIME Magazine, as well as being recognized as one of the 23 most powerful women engineers in the world by Business Insider and one of the Top 50 U.S. Women in Tech by Forbes.

Julie Shah

Julie Shah, MIT, USA

Human-Machine Partnerships and Work of the Future

More robots joined the U.S. workforce last year than ever before. What does this mean for workers? How do we build better jobs alongside intelligent machines? Our aim is to realize a future in which dramatic advances in automation and computation can go hand in hand with improved opportunities and economic security for workers. When engineers develop a new software tool or piece of equipment, we make decisions that have downstream consequences for workers. In manufacturing, for example, how much skill it requires to program a machine might affect the types of workers that can interact with the technology – and the wages they can demand. In this talk, I discuss what decision points are key for workers in a product development process, and how engineering research can incorporate worker context and worker consequences into the technology development process.

Biography

Julie Shah is a Professor of Aeronautics and Astronautics, Associate Dean of Social and Ethical Responsibilities of Computing at MIT, and Director of the Interactive Robotics Group, which aims to imagine the future of work by designing collaborative robot teammates that enhance human capability. She is expanding the use of human cognitive models for artificial intelligence and has translated her work to manufacturing assembly lines, healthcare applications, transportation, and defense. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. Prof. Shah has been recognized by the National Science Foundation with a Faculty Early Career Development (CAREER) award and by MIT Technology Review on its 35 Innovators Under 35 list. She was also the recipient of the 2018 IEEE RAS Academic Early Career Award for contributions to human-robot collaboration and the transition of results to real-world application. She has received international recognition in the form of best paper awards and nominations from the ACM/IEEE International Conference on Human-Robot Interaction, the American Institute of Aeronautics and Astronautics, the Human Factors and Ergonomics Society, the International Conference on Automated Planning and Scheduling, and the International Symposium on Robotics. She earned degrees in aeronautics and astronautics and in autonomous systems from MIT and is co-author of the book What to Expect When You're Expecting Robots: The Future of Human-Robot Collaboration (Basic Books, 2020).

Antonio Bicchi

Antonio Bicchi, University of Pisa, Italy

The Embodied Intelligence Aporia (and what we can get out of it)

The Embodied Intelligence philosophy sees cognition as determined and anticipated by the physics of the body, which mediates our interaction with the world. Bio-inspired robotics - and probably all robotics - looks at natural systems and tries to reproduce their functions in artifacts. But how can we learn any lesson on the intelligent control of machines made of silicon, steel, or polymers by studying a completely different body, made of neurons and muscles?
I will discuss this paradox through a few examples, showing how mathematical models of reality can help us abstract from the complexity of natural systems and bring these ideas to bear on novel robotic technologies. These innovations are today moving the research frontier from human-robot cooperation to human-robot integration. Examples will show how artificial hands and haptics, prostheses, and avatars can be conceived and realized to better serve humans by integrating with our bodies and minds.

Biography

Antonio Bicchi is a scientist interested in robotics and intelligent machines. After graduating in Pisa and receiving his Ph.D. from the University of Bologna, he was a scholar at the MIT AI Lab in Cambridge, MA, before becoming Professor of Robotics at the University of Pisa in 2000. In 2009 he founded the Soft Robotics Lab at the Italian Institute of Technology in Genoa. Since 2013 he has also been an Adjunct Professor at Arizona State University, Tempe, AZ.
He has coordinated several international projects, including four from the European Research Council (ERC), on haptics, collaborative robotics, soft robotics, artificial robot hands and prostheses. He has authored over 500 scientific papers cited more than 25,000 times. He supervised over 60 doctoral students and more than 20 postdocs, most of whom are now professors in universities and international research centers, or have launched their own companies. 
He has served the research community in several ways, including by initiating the World Haptics Conference and the IEEE Robotics and Automation Letters. He is currently the President of the Italian Institute of Robotics and Intelligent Machines. He has been a Fellow of IEEE since 2005 and received the IEEE Saridis Leadership Award in 2018. His students have received prestigious awards, including three first prizes and two nominations for the best Ph.D. thesis in Europe on robotics and haptics.

Keynote Speakers

Joshua Tenenbaum

Joshua Tenenbaum, MIT, USA

Scaling AI the human way: What do we start with, and how do we learn the rest? (Virtual, Asynchronous, on Infovaya)

What would it take to build a machine that grows into intelligence the way a person does — that starts like a baby and learns like a child? AI researchers have long debated the relative value of building systems with strongly pre-specified knowledge representations versus learning representations from scratch, driven by data. However, in cognitive science, it is now widely accepted that the analogous "nature versus nurture" question is a false choice: explaining the origins of human intelligence will most likely require both powerful learning mechanisms and a powerful foundation of built-in representational structure and inductive biases. I will introduce our efforts to build models of the starting state of the human mind, and the learning algorithms that grow knowledge through early childhood and beyond. I will focus on the setting of embodied intelligence, as babies experience it: seeing, acting and communicating in one's immediate physical environment, alone or with other agents. I will also talk about practical applications of our cognitive models to engineering more human-like robotic perception and planning systems.
Our models are expressed as probabilistic programs, defined on top of abstract simulation engines that capture the basic dynamics of objects and agents interacting in space and time. Modeling and inference rest on a hybrid of neural, symbolic and probabilistic methods. Learning algorithms draw on techniques from program synthesis and probabilistic program induction. I will show how these models are beginning to capture core aspects of human cognition and cognitive development, and are also starting to be deployed in real-world robotics applications. I will also talk about some of the challenges facing this approach as it aims to scale up, and promising ways forward.
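To make the "probabilistic program over a simulation engine" idea concrete, here is a minimal, hypothetical sketch (not the speaker's actual system; the simulator, prior range, and noise model are all invented for illustration): a prior over a latent physical parameter is pushed through a toy simulator, and the parameter is inferred from a noisy observation by importance sampling.

```python
# Illustrative sketch only: a probabilistic program defined on top of a toy
# simulation engine, with inference by importance sampling.
import random
import math

def simulate(mass, push_force=10.0, dt=0.1, steps=10):
    """Toy simulation engine: a block pushed across a frictionless surface."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        vel += (push_force / mass) * dt   # F = m * a
        pos += vel * dt
    return pos

def infer_mass(observed_pos, n_samples=10_000, noise_sd=0.2):
    """Weight prior draws by the likelihood of the noisy observation."""
    samples, weights = [], []
    for _ in range(n_samples):
        mass = random.uniform(0.5, 5.0)   # prior over the latent parameter
        pred = simulate(mass)             # run the simulator forward
        w = math.exp(-0.5 * ((observed_pos - pred) / noise_sd) ** 2)
        samples.append(mass)
        weights.append(w)
    total = sum(weights)
    return sum(m * w for m, w in zip(samples, weights)) / total

print(f"posterior mean mass: {infer_mass(observed_pos=3.0):.2f} kg")
```

Systems in this research line replace the toy simulator with a physics engine and the naive sampler with far more sophisticated neural, symbolic and probabilistic inference, but the structure (prior, simulator, conditioning on observations) is the same.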

Biography

Joshua Tenenbaum is Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences, the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds and Machines (CBMM). His long-term goal is to reverse-engineer intelligence in the human mind and brain, and to use these insights to engineer more human-like machine intelligence. In cognitive science, he is best known for developing theories of cognition as probabilistic inference in structured generative models, and applications to concept learning, causal reasoning, language acquisition, visual perception, intuitive physics, and theory of mind. In AI, he and his group have developed widely used models for nonlinear dimensionality reduction, probabilistic programming, and Bayesian unsupervised learning and structure discovery. His current research focuses on common-sense scene understanding and action planning, the development of common sense in infants and young children, and learning through probabilistic program induction and neuro-symbolic program synthesis. His work has been published in many leading journals and recognized with awards at conferences in Cognitive Science, Computer Vision, Neural Information Processing Systems, Reinforcement Learning and Decision Making, and Robotics. He is the recipient of the Distinguished Scientific Award for Early Career Contributions in Psychology from the American Psychological Association (2008), the Troland Research Award from the National Academy of Sciences (2011), the Howard Crosby Warren Medal from the Society of Experimental Psychologists (2016), the R&D Magazine Innovator of the Year award (2018), and a MacArthur Fellowship (2019). He is a fellow of the Cognitive Science Society, the Society for Experimental Psychologists, and a member of the American Academy of Arts and Sciences.

Gavin Ananda

Gavin Ananda, Zipline, USA

Detect and Avoid for National Scale Autonomous Instant Logistics Operations

Zipline is on a mission to transform the way goods move. Leveraging expertise in robotics and autonomy, Zipline designs, manufactures and operates the world's largest automated delivery system. Zipline serves tens of millions of people around the world and is making good on the promise of building an equitable and more resilient global supply chain. The technology is complex but the idea is simple: a teleportation service that delivers what you need, when you need it. Each day, Zipline's instant logistics system powers over 70,000 km of real-world BVLOS flight: day and night, rain or shine, from remote lands to dense urban sprawl. Our small (<55 lbs) fixed-wing vehicles can reach facilities over 80 kilometers away in under an hour. To date, we've made over 290,000 commercial deliveries of medical products and serve roughly 2,500 facilities in Rwanda, Ghana, and the United States.
With years of continuous national-scale commercial operations and more than 30 million autonomous commercial delivery miles flown, Zipline has built a unique perspective on scaling automated, on-demand delivery operations. Our fleet of aircraft, which we call "Zips", operates fully autonomously, with minimal human supervision. Operating in countries with busy and complex airspace has led us to build an onboard detect and avoid system that can perceive intruding aircraft up to 2 km away, predict their behavior and respond with avoidance maneuvers that mitigate collision risk while minimally impacting the delivery mission. We use a unique sensing modality that unlocks operation in poor visibility and inclement weather, while meeting non-negotiable size, weight and power constraints on the aircraft. This talk is a deep dive into the design of Zipline's tactical detect and avoid system, including the novel sensing architecture and what it takes to validate such a system for scaled operations.
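As an illustration of what one step of a tactical detect-and-avoid loop can look like (a hedged sketch, not Zipline's implementation; the threshold and lookahead values below are assumptions), a common formulation propagates a constant-velocity prediction of the intruder, computes the closest point of approach (CPA), and commands a maneuver only when the predicted miss distance violates a safety threshold, so the delivery mission is disturbed as little as possible.

```python
# Illustrative detect-and-avoid decision step: constant-velocity CPA check.
import numpy as np

SAFE_MISS_DISTANCE_M = 500.0   # hypothetical separation threshold
LOOKAHEAD_S = 60.0             # hypothetical prediction horizon

def closest_point_of_approach(rel_pos, rel_vel):
    """Time and distance of CPA under a constant-velocity assumption."""
    speed_sq = rel_vel @ rel_vel
    if speed_sq < 1e-9:                        # effectively co-moving
        return 0.0, float(np.linalg.norm(rel_pos))
    t_cpa = max(0.0, -(rel_pos @ rel_vel) / speed_sq)
    miss = float(np.linalg.norm(rel_pos + rel_vel * t_cpa))
    return t_cpa, miss

def avoidance_needed(own_pos, own_vel, intruder_pos, intruder_vel):
    """Trigger a maneuver only when the conflict is near-term and too close."""
    t_cpa, miss = closest_point_of_approach(intruder_pos - own_pos,
                                            intruder_vel - own_vel)
    return t_cpa <= LOOKAHEAD_S and miss < SAFE_MISS_DISTANCE_M

# Intruder detected 1.8 km ahead, converging head-on at 40 m/s:
own_p, own_v = np.zeros(3), np.array([30.0, 0.0, 0.0])
intr_p, intr_v = np.array([1800.0, 40.0, 0.0]), np.array([-40.0, 0.0, 0.0])
print(avoidance_needed(own_p, own_v, intr_p, intr_v))  # True: maneuver
```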

Biography

Gavin Ananda currently leads the Perception programs at Zipline for its current and next-generation vehicles. The team's primary responsibility is developing systems that enable the vehicles to perceive and contextualize the environment in real time, in order to support decision-making for safety and to open up new autonomous capabilities. He has been at Zipline for over three years, having started as an Aerodynamicist/Systems Engineer. Previously, Gavin was at the University of Illinois Urbana-Champaign, working on his Ph.D., which focused on aerodynamic flight modeling and simulation of aircraft in stall/upset conditions.

Marco Hutter

Marco Hutter, ETH Zurich, Switzerland

Legged robots on the way from subterranean

In recent years, we have seen tremendous progress in the field of legged robotics. Thanks to their superior mobility, quadrupedal robots in particular are starting to offer unprecedented potential for applications in unstructured and challenging environments. For example, the top six teams at the recent DARPA SubT challenge built upon legged robots as vehicles to autonomously explore environments like caves, tunnels, or urban infrastructure. Today, these machines are not only widely established in the research community but have also found their way into industrial products. In this presentation, I will talk about control, perception and autonomy of quadrupedal robots. I will provide insights into the development of the ANYmal robots, show how reinforcement learning has revolutionized locomotion performance, and present new mapping and planning approaches to enable versatile navigation. We will look at different application examples in the industrial sector, and I will give insights into what happened during the DARPA SubT challenge.

Biography

Marco is a professor of robotic systems at ETH Zurich and co-founder of several ETH startups, such as ANYbotics AG, a Zurich-based company developing legged robots for industrial applications. Marco's research interests are in the development of novel machines and actuation concepts, together with the underlying control, planning, and machine learning algorithms for locomotion and manipulation. Marco is part of the National Centre of Competence in Research (NCCR) Robotics and NCCR Digital Fabrication, and a PI in various international projects (e.g. EU Thing, NI) and challenges (e.g. DARPA SubT). His work in this field has been recognized with a number of awards and prestigious grants, such as the Branco Weiss Fellowship, the ETH medal, the IEEE/RAS Early Career Award, and an ERC Starting Grant.

Vandi Verma

Vandi Verma, NASA Jet Propulsion Laboratory, USA

NASA Robots on Mars and the Future Missions They Inspire

The Perseverance rover and Ingenuity helicopter are using more autonomous capability than any robot NASA has sent to another world. Operating robots on distant planetary bodies presents unique challenges. Key design choices have enabled the unprecedented utility of autonomous robotic capabilities such as autonomous surface and aerial navigation, science observations, and manipulation. I discuss key design choices, challenges encountered, and the new capabilities that will be necessary for NASA's daring missions of the future.

Biography

Dr. Vandi Verma is the Deputy Manager for Mobility and Robotics Systems at the NASA Jet Propulsion Laboratory, and the Chief Engineer of Robotic Operations for the Mars 2020 Perseverance rover and Ingenuity helicopter. Her areas of expertise include space robotics, autonomous robots, and human-robot interaction. She has worked on space robotics and AI research and technology development tasks, and has designed, developed, and operated rovers on Mars, in the Arctic, in Antarctica, and in the Atacama Desert. As Deputy Section Manager for Mobility and Robotics, she leads about 150 JPL roboticists developing new technology for future missions and working on a variety of JPL robotic missions. Robotics capabilities she has worked on are in regular use on the Perseverance and Curiosity rovers and in human spaceflight projects. Most recently she developed onboard robotic arm collision detection, autonomous robotic arm positioning, and autonomous science targeting on Perseverance. She has been engaged in robotic operations on Mars since 2008, with the Mars Exploration Rovers Spirit and Opportunity, the Curiosity rover, the Perseverance rover, and the Ingenuity helicopter. She received her Ph.D. in Robotics from Carnegie Mellon University in 2005.

Michael Yu Wang

Michael Yu Wang, Monash University, Australia

Adaptive Grasping with Touch Sensing and Dry-Adhesive Contact (Virtual, Asynchronous, on Infovaya)

**This talk is Virtual, Asynchronous, on Infovaya**

In the field of robotic manipulation, touch sensing and contact adhesion have been considered essential techniques for versatile adaptive grasping and manipulation capabilities. Thanks to respective advances in optical tactile sensors and in the scalable fabrication of gecko-inspired dry-adhesive skins, these distinctive techniques continue to mature. Moreover, the complementary sense of touch and adaptive contact can be integrated into a single robotic gripper: endowing a gecko-inspired gripper with touch sensing delivers on the promise of adaptive grasping. In this presentation, I will review our work on optical touch sensing and adhesive contact skins. Our deformable sensor provides high-resolution real-time measurements of contact area and contact shear force. Gecko-inspired dry-adhesive skins are readily integrated on the sensor surface, providing variable adhesion and friction. I will showcase the gripper's ability to adjust fingertip pose for better contact using sensor feedback, especially for top-side gripping onto a nearly flat surface (smooth or rough) of an object with firm attachment. I will show practical applications in industrial automation and discuss recent developments throughout the robotics community in this promising direction.
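The fingertip-pose adjustment described above can be pictured with a minimal control sketch (a hypothetical illustration, not the authors' implementation; `read_contact_image` and `tilt_fingertip` stand in for unspecified hardware interfaces): the centroid of the sensed contact area is driven toward the sensor center before the adhesive skin is loaded in shear.

```python
# Illustrative sketch: center the tactile contact patch by tilting the
# fingertip, using only the sensor's contact-area image as feedback.
import numpy as np

def contact_centroid_offset(contact_image: np.ndarray):
    """Offset of the contact-area centroid from the sensor center (pixels)."""
    ys, xs = np.nonzero(contact_image > 0.5)     # binarized contact mask
    if len(xs) == 0:
        raise RuntimeError("no contact detected")
    h, w = contact_image.shape
    return ys.mean() - h / 2.0, xs.mean() - w / 2.0

def center_contact(read_contact_image, tilt_fingertip,
                   gain=0.01, tol_px=2.0, max_iters=50):
    """Proportional adjustment of fingertip pitch/roll until contact centers."""
    for _ in range(max_iters):
        dy, dx = contact_centroid_offset(read_contact_image())
        if abs(dy) < tol_px and abs(dx) < tol_px:
            return True                          # contact is centered
        tilt_fingertip(pitch=-gain * dy, roll=-gain * dx)
    return False
```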

Biography

Michael Y. Wang is Professor and Head of the Department of Mechanical and Aerospace Engineering at Monash University. His numerous professional honors include the National Science Foundation Research Initiation Award; the Ralph R. Teetor Educational Award from the Society of Automotive Engineers; the LaRoux K. Gillespie Outstanding Young Manufacturing Engineer Award from the Society of Manufacturing Engineers; a Boeing A.D. Welliver Faculty Summer Fellowship; the Chang Jiang (Cheung Kong) Scholars Award from the Ministry of Education of China and the Li Ka Shing Foundation (Hong Kong); and the Research Excellence Award of CUHK. He was the Editor-in-Chief of IEEE Transactions on Automation Science and Engineering. His main research interests are in robotic manipulation, learning and autonomous systems, manufacturing automation, and additive manufacturing. Before joining Monash University in 2022, he was the Founding Director of the Cheng Kar-Shun Robotics Institute, the Director of the HKUST-BRIGHT DREAM ROBOTICS Joint Research Institute, and a Chair Professor of Mechanical and Aerospace Engineering as well as Electronic and Computer Engineering at the Hong Kong University of Science and Technology (HKUST). Previously, he also served on the engineering faculty at the University of Maryland, the Chinese University of Hong Kong, and the National University of Singapore. A recipient of the ASME Design Automation Award, Professor Wang is a fellow of ASME and IEEE.

Benjamin Rosman

Benjamin Rosman, University of the Witwatersrand, South Africa

Composition: a tale of robot behaviors and research communities

There are many possible ways that robotics could grow alongside society over the next decade. One desirable path is for robots to become increasingly accessible to larger numbers of people, who use them in innovative ways to solve a wide range of problems. For this to be achievable, robots must be able to learn new skills and reuse previously acquired ones. Reinforcement learning is one framework that has been successful in doing so, but as yet it has been unable to equip robots with the means to adapt and generalize to very large sets of tasks. Critical to achieving this is that robots become more flexible in how they reuse their existing skills, easier to instruct, and more reliable in their behavior. In this talk, I will present a framework for behavior composition within reinforcement learning that enables a robot to combine previously learned behaviors to solve a super-exponential number of tasks with no additional learning, to do so with human-specifiable goals, and in a way that is provably optimal. Coupled with these advances in learning frameworks is the imperative that different communities around the world possess the relevant expertise to adapt robots to their local contexts. This is challenging because the global distribution of robotics knowledge is heavily skewed toward developed countries. In the second part of this talk, I will discuss the recent growth of the African machine learning community, as a case study for how new technical communities can be established by bringing the right people together in the right environments. Our hope is that the combination of these robot frameworks and communities growing alongside each other will open the door to a desirable future.
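One concrete instance of such zero-shot behavior composition, from this group's published line of work (Tasse, James and Rosman, "A Boolean Task Algebra for Reinforcement Learning", NeurIPS 2020), can be sketched in a few lines: Q-functions learned for individual goal-reaching tasks are combined elementwise, max for disjunction and min for conjunction, and the robot acts greedily on the composed values. The tabular numbers below are invented purely for illustration, and the min-based conjunction is exact only under the conditions established in that work.

```python
# Illustrative sketch of Boolean value-function composition.
import numpy as np

def q_or(q1: np.ndarray, q2: np.ndarray) -> np.ndarray:
    """Disjunction: achieve task 1 OR task 2."""
    return np.maximum(q1, q2)

def q_and(q1: np.ndarray, q2: np.ndarray) -> np.ndarray:
    """Conjunction (approximate in general): achieve task 1 AND task 2."""
    return np.minimum(q1, q2)

def greedy_action(q_composed: np.ndarray, state: int) -> int:
    return int(np.argmax(q_composed[state]))

# Toy tabular example: 4 states x 2 actions, one learned Q-table per task.
q_task1 = np.array([[1.0, 0.2], [0.9, 0.1], [0.3, 0.8], [0.2, 1.0]])
q_task2 = np.array([[0.1, 0.9], [0.2, 1.0], [0.9, 0.3], [1.0, 0.1]])
print(greedy_action(q_or(q_task1, q_task2), state=0))   # best for either goal
print(greedy_action(q_and(q_task1, q_task2), state=0))  # best for both goals
```

With n base tasks, every Boolean expression over them yields a new behavior with no further learning, which is where the super-exponential task count in the abstract comes from.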

Biography

Benjamin Rosman is an Associate Professor in the School of Computer Science and Applied Mathematics at the University of the Witwatersrand, South Africa, where he runs the Robotics, Autonomous Intelligence and Learning (RAIL) Laboratory and is the Director of the National E-Science Postgraduate Teaching and Training Platform (NEPTTP). He received his Ph.D. in Informatics from the University of Edinburgh in the UK in 2014, and previously obtained his M.Sc. in Artificial Intelligence from the University of Edinburgh. He also has a B.Sc. (Hons) in Computer Science and a B.Sc. (Hons) in Applied Mathematics, both from the University of the Witwatersrand. His research interests focus primarily on reinforcement learning and decision making in autonomous systems, specifically on how learning can be accelerated through abstracting and generalising knowledge gained from solving related problems. He is a founder and organiser of the Deep Learning Indaba machine learning summer school, with a focus on strengthening African machine learning. He was a 2017 recipient of a Google Faculty Research Award in machine learning, and a 2021 recipient of a Google Africa Research Award. In 2020, he was made a Senior Member of the IEEE.

Eiichi Yoshida

Eiichi Yoshida, Tokyo University of Science, Japan

Humanoid and Digital Actor as Cyber-Physical Twins for Understanding and Synthesizing Human Behaviors

Humanoid robots can be used as a "physical twin" of humans to analyze and synthesize human motions, and furthermore behaviors, while those robots themselves are already useful for applications in industries like large-scale assembly. We intend to integrate humanoids and digital actors into "cyber-physical twins" in a complementary manner to understand, predict and synthesize the behavior of anthropomorphic systems in various aspects. Since it is difficult to measure the control output of humans, we may use humanoids to validate physical interactions with the real world, and digital actors to simulate control and interaction strategies using parameterized models like musculoskeletal systems. Optimization is one of the key techniques for tackling this challenge. A comprehensive framework is introduced for the efficient computation of derivatives of various physical quantities essential for optimization, allowing real-time motion retargeting and musculoskeletal analysis. We show some practical applications, such as the quantitative evaluation of wearable devices and the monitoring of human workload. We believe the human model in cyber-physical space will become important for symbiotic robotic systems that support humans naturally and efficiently in response to societal demands. Some future directions, such as remote perception and workspaces, are also discussed.
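The flavor of optimization-based motion retargeting can be shown with a toy sketch (an illustration under simplified assumptions, not the framework from the talk): fit the joint angles of a planar two-link limb, within joint limits, so that its forward kinematics matches captured marker positions. The real-time systems described above depend on efficiently computed analytic derivatives; this sketch simply lets scipy approximate them numerically.

```python
# Illustrative sketch: retarget captured marker positions onto a planar
# 2-link "limb" by constrained least-squares over joint angles.
import numpy as np
from scipy.optimize import minimize

L1, L2 = 0.3, 0.25                       # link lengths (m), illustrative

def fk(q):
    """Elbow and wrist positions of a planar 2-link chain."""
    elbow = np.array([L1 * np.cos(q[0]), L1 * np.sin(q[0])])
    wrist = elbow + np.array([L2 * np.cos(q[0] + q[1]),
                              L2 * np.sin(q[0] + q[1])])
    return elbow, wrist

def retarget(elbow_marker, wrist_marker, q0=np.zeros(2)):
    def cost(q):
        elbow, wrist = fk(q)
        return (np.sum((elbow - elbow_marker) ** 2)
                + np.sum((wrist - wrist_marker) ** 2))
    bounds = [(-np.pi, np.pi), (0.0, 2.8)]  # joint limits: elbow bends one way
    return minimize(cost, q0, bounds=bounds).x

q = retarget(np.array([0.21, 0.21]), np.array([0.21, 0.46]))
print(np.degrees(q))   # roughly [45, 45]: shoulder and elbow both flexed
```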

Biography

Eiichi Yoshida received M.E. and Ph.D. degrees in Precision Machinery Engineering from the Graduate School of Engineering, the University of Tokyo, in 1996. He then joined the former Mechanical Engineering Laboratory, reorganized in 2001 as the National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan. He served as Co-Director of the AIST-CNRS JRL (Joint Robotics Laboratory) at LAAS-CNRS, Toulouse, France, from 2004 to 2008, and at AIST, Tsukuba, Japan, from 2009 to 2021. He was also Director of the Industrial Cyber-Physical Systems Research Center and of the TICO-AIST Cooperative Research Laboratory for Advanced Logistics at AIST from 2020 to 2021. Since 2022, he has been a Professor at the Tokyo University of Science, in the Department of Applied Electronics, Faculty of Advanced Engineering. He was previously invited as a visiting professor at the Karlsruhe Institute of Technology and the University of Tsukuba. He is an IEEE Fellow, and a member of RSJ, SICE and JSME. He has published more than 200 scientific papers in journals and peer-reviewed international conferences and has co-edited several books. He has received several awards, including Best Paper Awards from the Advanced Robotics journal and the DARS conferences, and the honor of Chevalier de l'Ordre National du Mérite from the French government. He is currently serving as Senior Editor of the IEEE Transactions on Robotics and has joined editorial boards as Senior Editor or Program Committee Member for international conferences such as ICRA, IROS, RSS and Humanoids. His research interests include robot task and motion planning, human modeling, humanoid robotics and advanced logistics technology.

Salah Sukkarieh

Salah Sukkarieh, The University of Sydney, Australia

Next Gen Farm Robots: Extending their capabilities to include agronomy practice

On-farm robotics, both ground and air, have over the last decade demonstrated significant potential to ease labour concerns, reduce chemical use, and support farm operations for improving yield. Whilst the focus has been on the re-application of robotics and technologies from other industries, there is a growing interest in extending their capabilities to support, or even completely undertake, agronomy practice. This implies extending AI capabilities into real time and advancing toward higher levels of situational awareness on farm, while ensuring that these are tightly coupled to field robotic platforms working within a biophysical domain. In this talk I will present a historical capture of our on-farm robotics R&D at the ACFR, as well as current activity, with an eye to how we might extend these robots' capabilities, such as understanding crop growth and improving yield dynamically, or understanding the behaviour of livestock around pasture and how to improve their health.

Biography

Salah Sukkarieh is the Professor of Robotics and Intelligent Systems at the University of Sydney, and the CEO of Agerris, a new agtech startup company from the ACFR developing autonomous robotic solutions to improve agricultural productivity and environmental sustainability. He was the Director of Research and Innovation at the Australian Centre for Field Robotics from 2007 to 2018, where he led the strategic research and industry engagement program in the world's largest field robotics institute. He is an international expert in the research, development and commercialisation of field robotic systems and has led a number of robotics and intelligent systems R&D projects in logistics, commercial aviation, aerospace, education, environment monitoring, agriculture and mining. Salah was awarded the NSW Science and Engineering Award for Excellence in Engineering and Information and Communications Technologies in 2014 and the 2017 CSIRO Eureka Prize for Leadership in Innovation and Science, and was a 2019 NSW Australian of the Year nominee. Salah is a Fellow of the Australian Academy of Technological Sciences and Engineering (ATSE), and has over 500 academic and industry publications in robotics and intelligent systems.

Veronica Santos

Veronica Santos, UCLA, USA

The Role of Touch in Robotics for a Connected World

Compared to vision, the complementary sense of touch has yet to be broadly integrated into robotic systems. As such, the promise of touch has yet to leave its fingerprint on robotics in work and society. The subfields of tactile sensing and tactile perception continue to grow thanks to advances in electronics, robust and scalable sensor designs, and renewed excitement about optical tactile sensors. In this presentation, I will highlight our prior work on task-driven efforts to endow robots with tactile perception capabilities for human-robot interaction and remote work in harsh environments. With advances in haptic display technologies, interfaces with the human body, and networking capabilities, however, touch can be used for more than completing novel tasks. Touch can enhance social connections from afar, enable the inclusion of marginalized groups in community activities, and create new opportunities for remote work involving social and physical interactions. I will conclude with a snapshot of projects throughout our robotics community that are headed in this promising direction.

Biography

Veronica Santos is a Professor of Mechanical and Aerospace Engineering and Director of the UCLA Biomechatronics Lab (http://biomechatronics.ucla.edu). She currently serves as the Associate Dean of Equity, Diversity, and Inclusion and Faculty Affairs for the Samueli School of Engineering. Dr. Santos earned her B.S. in mechanical engineering (music minor) from UC Berkeley, was a Quality and R&D Engineer at Guidant Corporation, and earned her M.S. and Ph.D. in mechanical engineering (biometry minor) from Cornell University. As a postdoc at the University of Southern California, she contributed to the development of a bio-inspired tactile sensor for prosthetic hands before moving to Arizona State University as an Assistant Professor. Her research interests include hand biomechanics, human-machine systems, tactile sensing and perception, and prosthetics/robotics for grasp and manipulation. Dr. Santos was selected for an NSF CAREER Award, the U.S. Defense Science Study Group, numerous teaching awards, and a U.S. National Academy of Engineering Frontiers of Engineering Education Symposium. She has co-edited the book "The Human Hand as an Inspiration for Robot Hand Development", and her work has appeared in TechCrunch and Forbes, among others. Dr. Santos has served as an ICRA Editor, an Associate Editor for the ASME Journal of Mechanisms and Robotics and the ACM Transactions on Human-Robot Interaction, and as the 2020 IEEE Haptics Symposium Editor-in-Chief. She is currently a General Co-Chair for the 2022 IEEE Haptics Symposium.