Autonomous Navigation in Complex Outdoor Environments: Towards Companion Robots for Longevity | From Digital Humans to Safe Humanoids: Grounded Reasoning and Compliant Interaction
Location: Nvidia Auditorium
Attendance Link: https://tinyurl.com/robosem-win-26
Time: Friday, Jan 23rd, 3:00-4:00 PM
Jing Liang: Autonomous Navigation in Complex Outdoor Environments: Towards Companion Robots for Longevity
Abstract: Deploying mobile robots in unstructured outdoor environments remains a fundamental challenge, requiring the ability to robustly perceive complex terrain, pedestrian flows, and general traffic rules. To effectively serve humans, especially older adults, these robots must go beyond simple navigation to also understand human behavior and enhance personal mobility. In this talk, I will review our previous approaches for long-range outdoor navigation, with a focus on scene understanding and planning. Then, I will present a high-level overview of our current work, in which I aim to apply these navigation technologies to develop companion robots that support older adults.
Bio: Jing Liang is a postdoctoral scholar in the Department of Computer Science at Stanford University, where he is affiliated with the Stanford Robotics Center and the Stanford Center on Longevity. He received his Ph.D. in Computer Science from the University of Maryland, College Park. His doctoral research focused on robot navigation, including scene understanding, planning, and control. Currently, he works on companion robots and human behavior understanding, with a focus on how robotics can support longevity.
Yao Feng: From Digital Humans to Safe Humanoids: Grounded Reasoning and Compliant Interaction
Abstract: Humanoid robots are entering human-centric environments, where they must not only move well but also understand people and interact safely through physical contact. In this talk, I will present two complementary directions toward human-centered embodied intelligence. First, I will introduce GentleHumanoid, a whole-body control policy that combines motion tracking with compliant, tunable force regulation, enabling contact-rich behaviors such as gentle hugging, assistive support, and safe object interaction on the Unitree G1. Second, I will show how large language models can be grounded in 3D human motion for behavior understanding and planning, highlighting ChatPose and ChatHuman as steps toward systems that interpret actions, anticipate intent, and connect high-level reasoning to executable motion. I will close with future directions on scaling human–humanoid interaction data, developing vision-language-action models for long-horizon interaction, and incorporating muscle-driven modeling for more realistic and adaptive humanoids.
Bio: Yao Feng is a postdoctoral researcher at Stanford University, working with Karen Liu, Jennifer Hicks, and Scott L. Delp. She received her Ph.D. from ETH Zürich and the Max Planck Institute for Intelligent Systems, under the supervision of Michael J. Black and Marc Pollefeys. Her research focuses on large-scale capture and understanding of digital humans, with applications spanning computer vision, computer graphics, biomechanics, and robotics. She has received several recognitions, including an Honorable Mention for the Eurographics PhD Award, and was named an EECS Rising Star and a WiGRAPH Rising Star in Computer Graphics.
Please visit https://stanfordasl.github.io/robotics_seminar/ for this quarter’s lineup of speakers. Although we encourage live in-person attendance, recordings of the talks will also be posted.