Assistive Robotics and Manipulation (ARM)
The Assistive Robotics and Manipulation Lab develops robots that improve everyday life by anticipating and acting on the needs of their human counterparts. We specialize in developing intelligent robotic systems that can perceive and model environments, humans, and tasks, and leverage these models to predict how a task will unfold and determine the robot's assistive role. The research addresses robotic assistants, connected devices, and intelligent wearables. We use a combination of tools from collaborative robotics, machine learning, computer vision, state estimation and prediction, dynamical systems analysis, and control theory.
Intelligence through Robotic Interaction at Scale (IRIS)
IRIS pursues scalable robotic intelligence by developing algorithms for general intelligent behavior through learning and interaction. Drawing on deep reinforcement learning from raw sensory inputs, meta-learning that leverages prior experience to accelerate the acquisition of new skills, and self-supervised learning from interaction without human supervision, we aim to build robots that can perform a wide variety of learned tasks across diverse environments.
Intelligent and Interactive Autonomous Systems Group (ILIAD)
ILIAD is developing methods to enable groups of autonomous systems and groups of humans to interact safely and reliably with each other. Employing methods from AI, control theory, machine learning, and optimization, we are establishing theory and algorithms for interaction in uncertain and safety-critical environments. By learning from and with humans, we are moving robot teams out of factories and safely into humans’ lives.
Interactive Perception and Robot Learning (IPRL)
IPRL seeks to understand the underlying principles of robust sensorimotor coordination by implementing them on robots. We study autonomous robots that can plan and execute complex manipulation tasks in dynamic, uncertain, and unstructured environments. We develop algorithms for autonomous learning that exploit multiple sensory modalities for robustness and structural priors for scalability, and that adapt continuously. Our solutions will allow manipulation robots to escape the factory floor and move into unstructured environments such as warehouses, our homes, and disaster zones.
Robotics and Embodied Artificial Intelligence (REAL)
We at REAL @ Stanford are developing algorithms that enable intelligent systems to learn from their interactions with the physical world to execute complex tasks and assist people.
The Movement Lab
The Movement Lab creates coordinated, functional, and efficient whole-body movements for digital agents and for real robots to interact with the world. We focus on holistic motor behaviors that involve fusing multiple modalities of perception to produce intelligent and natural movements. We study “motion intelligence” in the context of complex ecological environments, involving both high-level decision making and low-level physical execution. We develop computational approaches to modeling realistic human movements for Computer Graphics and Biomechanics applications, learning complex control policies for humanoids and assistive robots, and advancing fundamental numerical simulation and optimal control algorithms.
Stanford Vision and Learning (SVL)
SVL develops methods for building rich geometric and semantic understanding of the environment. To enhance a robot’s perception and action capabilities amid the variability and uncertainty of the real world, we address tasks such as handheld-tool use, cooking and cleaning, and navigating crowded public spaces. We develop robust models of intelligent behavior, build them into general-purpose autonomy, and deploy them on robots to carry out complex operations.
Salisbury Robotics Lab
The Salisbury Robotics Lab conducts research on non-anthropomorphic in-hand manipulation; physical Human-Robot Interaction (pHRI), currently in the context of a robotic Emergency Medical Technician (rEMT); patient-specific simulation of skull-base procedures such as cochlear implantation within a haptically enabled pre-operative planning environment; and the development of a low-impedance, high-dynamic-range manipulator concept. The lab is led by Prof. Ken Salisbury, with contributions from students including Shenli Yuan and Connor Yako.
Geometric Computation Group (GCG)
GCG addresses algorithmic problems in modeling physical objects and phenomena, studying computation, communication, and sensing as applied to the physical world. Interests include the analysis of shape or image collections, geometric modeling with point cloud data, deep architectures for geometric data, 3D reconstruction, deformations and contacts, sensor networks for lightweight distributed estimation/reasoning, the analysis of mobility data, and modeling the shape and motion of biological macromolecules and other biological structures.