Assistive Robotics and Manipulation (ARM)
The Assistive Robotics and Manipulation Lab develops robots that improve everyday life by anticipating and acting on the needs of their human counterparts. We specialize in intelligent robotic systems that perceive and model environments, humans, and tasks, and that leverage these models to predict how a task will unfold and understand the robot's assistive role. Our research spans robotic assistants, connected devices, and intelligent wearables, drawing on tools from collaborative robotics, machine learning, computer vision, state estimation and prediction, dynamical systems analysis, and control theory.
Intelligent and Interactive Autonomous Systems Group (ILIAD)
ILIAD develops methods that enable groups of autonomous systems and groups of humans to interact safely and reliably with one another. Drawing on AI, control theory, machine learning, and optimization, we establish theory and algorithms for interaction in uncertain and safety-critical environments. By learning from and with humans, we aim to move robot teams out of factories and safely into people's lives.
Interactive Perception and Robot Learning (IPRL)
IPRL seeks to understand the underlying principles of robust sensorimotor coordination by implementing them on robots. We study autonomous robots that plan and execute complex manipulation tasks in dynamic, uncertain, and unstructured environments. We develop algorithms for autonomous learning that exploit multiple sensory modalities for robustness, use structural priors for scalability, and adapt continuously. Our solutions will allow manipulation robots to escape the factory floor and move into unstructured environments such as warehouses, our homes, and disaster zones.
Stanford Vision and Learning (SVL)
SVL develops methods for establishing rich geometric and semantic understanding of the environment. To enhance a robot's perception and action capabilities amid the variability and uncertainty of the real world, we address tasks such as using handheld tools, cooking and cleaning, and navigating crowded public spaces. We develop robust models of intelligent behavior, build them into general-purpose autonomy, and deploy them on robots for complex operations.