Robotics at Stanford
ASL is developing methodologies for the analysis, design, and control of autonomous systems, in particular self-driving cars, aerospace vehicles, and future mobility systems. The lab’s expertise in control theory, robotics, optimization, and machine learning guides us in developing new methods for operation in uncertain, rapidly changing, and potentially adversarial environments, with the aim of producing practical, computationally efficient, and provably correct algorithms for field deployment.
The Assistive Robotics and Manipulation Lab develops robots that improve everyday life by anticipating and acting on the needs of their human counterparts. We specialize in developing intelligent robotic systems that can perceive and model environments, humans, and tasks, and leverage these models to predict system processes and understand their assistive role. The research addresses robotic assistants, connected devices, and intelligent wearables. We use a combination of tools in collaborative robotics, machine learning, computer vision, state estimation and prediction, dynamical systems analysis, and control theory.
The Biomechatronics Lab develops wearable robotic devices to improve the efficiency, speed, and balance of walking and running, especially for people with disabilities. With a focus on speed and systematization of the design process, we develop versatile prosthesis and exoskeleton hardware emulators for human-in-the-loop optimization. We are researching topics such as human balance measurement, joint pain reduction, and novel actuation mechanisms for efficient exoskeletons.
IRIS pursues scalable robotic intelligence by establishing algorithms for general intelligent behavior through learning and interaction. Using deep reinforcement learning over raw sensory inputs, meta-learning that accelerates the development of new skills from previous experience, and self-supervised learning from interaction without human supervision, we aim for robots that can perform a wide variety of learned tasks across diverse environments.
BDML draws insights from nature in developing bio-inspired technologies, including gecko-like adhesives, soft muscle actuators, under-actuated hands, and flexible tactile sensors that enable robots to climb, fly, perch, handle delicate objects, and interact responsively with humans. We work with biologists on the design principles underlying animal performance and with roboticists to control our solutions in environments ranging from under the sea and inside the human body to space.
MSL develops algorithms for collaboration, coordination, and competition among machine/human teams in unstructured natural environments, including operating in complex traffic while understanding and signaling intent, racing in 3D against human pilots at the edge of the dynamic envelope, and conducting large-scale aerial drone surveys. Building on optimization, control and game theory, and machine learning, we develop the essentials for robots entering the real world.
The CHARM Lab develops safe and intuitive haptic interfaces for enhanced physical connection in remote and virtual interaction. We couple users with teleoperation tasks, increase perceptual realism in virtual environments, and deliver intuitive robotic manipulation via soft, safe, deformable mechanisms. Our solutions assist doctors in robot-assisted surgery, students in extended-reality simulations, disaster recovery specialists in assessment and situation handling, and people with disabilities in achieving richer life experiences.
The Center for Integrated Facility Engineering (CIFE) is a community of researchers and industry members who together shape the future of the Architecture, Engineering, Construction, and Operations (AECO) industry. This collaboration promotes multidisciplinary approaches and thinking in the planning, design, construction, operation, and management of the built environment. CIFE’s goal is to develop engineering and management methods that increase the performance, innovation, and sustainability of the built environment.
ILIAD is developing methods to enable groups of autonomous systems and groups of humans to interact safely and reliably with each other. Employing methods from AI, control theory, machine learning, and optimization, we are establishing theory and algorithms for interaction in uncertain and safety-critical environments. By learning from and with humans, we are moving robot teams out of factories and safely into humans’ lives.
IPRL seeks to understand the underlying principles of robust sensorimotor coordination by implementing them on robots. We study autonomous robots that can plan and execute complex manipulation tasks in dynamic, uncertain, and unstructured environments. We develop algorithms for autonomous learning that exploit different sensory modalities for robustness and structural priors for scalability, and that adapt continuously. Our solutions will allow manipulation robots to escape the factory floor and move into unstructured environments such as warehouses, our homes, and disaster zones.
The Navigation and Autonomous Vehicles (NAV) Lab conducts research on robust and secure positioning, navigation, and timing technologies. We focus on navigation safety, cybersecurity, and resilience to errors and uncertainties using machine learning, advanced signal processing, and formal verification methods. Our research has a wide range of applications, including manned and unmanned aerial vehicles, self-driving cars, and space robotics.
We at REAL @ Stanford are developing algorithms that enable intelligent systems to learn from their interactions with the physical world to execute complex tasks and assist people.
The Robotics Lab develops mathematical control algorithms, hardware capabilities, and programming interfaces, and, by acquiring human-level skills through learning, advances rich, dexterous physical interaction with the environment. Our hierarchical feedback architecture coupling perception and action enables accommodation to scene dynamics, including people. Ocean One, the lab’s collaborative humanoid underwater robot, demonstrates how avatars can succeed at challenging tasks in inhospitable spaces.
The Movement Lab creates coordinated, functional, and efficient whole-body movements for digital agents and for real robots to interact with the world. We focus on holistic motor behaviors that involve fusing multiple modalities of perception to produce intelligent and natural movements. We study “motion intelligence” in the context of complex ecological environments, involving both high-level decision making and low-level physical execution. We develop computational approaches to modeling realistic human movements for computer graphics and biomechanics applications, learning complex control policies for humanoids and assistive robots, and advancing fundamental numerical simulation and optimal control algorithms.
The SHAPE Lab explores how we can interact with digital information in a more physical and tangible way. Towards our goal of more human-centered computing, we believe that interaction must be grounded in the physical world and leverage our innate abilities for spatial cognition and dexterous manipulation with our hands. We develop advanced technologies in robotics, mechatronics, and sensing to create interactive, dynamic physical 3D displays and haptic interfaces that allow 3D information to be touched as well as seen.
SVL develops methods for establishing rich geometric and semantic understanding of the environment. Aimed at enhancing a robot’s perception and action capabilities within the variability and uncertainty of the real world, we address tasks such as handheld-tool use, cooking and cleaning, and navigating crowded public spaces. We develop robust models of intelligent behavior, build these into general-purpose autonomy, and couple them with robots for complex operation.
SISL studies robust decision-making in complex and dynamic environments where safety and efficiency must be balanced. We apply our work to challenges including autonomous driving, route planning, deep reinforcement learning, and safety and validation. We develop algorithms that efficiently derive optimal decision strategies from high-dimensional, probabilistic representations and establish confidence in their safe and correct application in the real world.
SIMLab uses analytical, numerical, and experimental tools to study the functional responses of stimuli-responsive materials to external stimuli such as stress, temperature, light, chemicals, and electric or magnetic fields. Applications include soft actuators, soft robotics, flexible electronics, morphing structures, biomedical engineering, and sustainable energy.
CEE conducts cutting-edge engineering research that expands knowledge and creates new methods and designs needed in the natural and built environments to sustain people and nature. Our research focal areas are the built environment, the natural environment, and engineered civil and environmental systems.
The Salisbury Robotics Lab is currently conducting research on in-hand manipulation (non-anthropomorphic); physical Human-Robot Interaction (pHRI), currently in the context of a robotic Emergency Medical Technician (rEMT); patient-specific simulation of skull-base procedures such as cochlear implantation in a haptically enabled pre-operative planning environment; and the development of a low-impedance, high-dynamic-range manipulator concept. The lab is led by Prof. Ken Salisbury, with contributions from students such as Shenli Yuan and Connor Yako.
The T.E.C.I. Center designs and implements advanced fabrication and engineering methods for measuring performance in the mastery of surgical operations and bedside procedures. We aim to transform human health and welfare through advances in point-of-care sensor technology supported by data science and personalized, data-driven performance metrics for healthcare providers.
Frontier Technology Lab (FTL) is a resource for collaborative exploration of the far-reaching impact of emerging technologies, as well as their compound effects. FTL brings together experts and stakeholders in the latest frontier technologies—microelectronics, AI, blockchain, advanced manufacturing, and quantum sciences—to understand the potential of these technologies for a sustainable future. FTL is part of the School of Engineering and the Doerr School of Sustainability.
GSL focuses on developing quantitative and data-driven methods that learn from real-world visual data to generate, predict, and simulate new or renewed built environments that place the human at the center. Our mission is to create sustainable, inclusive, and adaptive built environments that can support our current and future physical and digital needs. We believe that by cross-pollinating the physical (real) and digital (virtual) domains, we can achieve higher immersion and view these spaces as a step toward more equitable living conditions.
NMBL investigators use their expertise in biomechanics, computer science, imaging, robotics, and neuroscience to analyze muscle function, study human movement, design medical technologies, and optimize human performance.
GCG addresses algorithmic problems in modeling physical objects and phenomena, studying computation, communication, and sensing as applied to the physical world. Interests include the analysis of shape or image collections, geometric modeling with point cloud data, deep architectures for geometric data, 3D reconstruction, deformations and contacts, sensor networks for lightweight distributed estimation/reasoning, the analysis of mobility data, and modeling the shape and motion of biological macromolecules and other biological structures.