Introduction to Robot Learning

16-831, Spring 2024

Mon/Wed 9:30am-10:50am, BH A51

Course Description

Robots need to make sequential decisions to operate in the world and generalize to diverse environments. How can they learn to do so? This is what we call the "robot learning" problem, and it spans topics in machine learning, deep learning, visual learning, and reinforcement learning and control. In this course, we will cover the fundamentals of machine/deep/visual/reinforcement learning and how such approaches are applied to robot decision-making.

We will study the fundamentals of:

1) machine/deep learning, with an emphasis on approaches relevant to robotics;

2) reinforcement learning: model-based, model-free, on-policy (e.g., policy gradients), off-policy (e.g., Q-learning), offline, etc.;

3) imitation learning: behavior cloning, DAgger, inverse RL, etc.;

4) visual learning geared towards decision-making, including generative models and their use for robotics, learning from human videos, passive internet videos, language models, etc.; and

5) leveraging simulation, building differentiable simulators, and transferring policies from simulation to the real world.

We will also briefly touch on topics in neuroscience and psychology that provide cognitive motivations for several techniques in decision-making. Throughout the course, we will look at many examples of how such methods can be applied to real robotics tasks, as well as broader applications of decision-making beyond robotics (e.g., online dialogue agents). The course will provide an overview of relevant topics and open questions in the area, with a strong emphasis on bridging the gap between many different fields of AI. The goal is for students to gain both a high-level understanding of important problems and possible solutions, and a low-level understanding of technical solutions. We hope that this course will inspire you to approach problems in embodied intelligence from different perspectives in your research.
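To give a flavor of one of the topics above, here is a minimal, self-contained sketch of behavior cloning (imitation learning topic 3): treating imitation as supervised regression from states to expert actions. The "expert" here is a made-up linear controller for illustration only; it is not course material.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert: a fixed linear controller a = K s, plus small noise.
K = np.array([[1.0, -0.5], [0.2, 0.8]])
states = rng.normal(size=(200, 2))                          # states visited by the expert
actions = states @ K.T + 0.01 * rng.normal(size=(200, 2))   # expert's actions in those states

# Behavior cloning: fit a policy to (state, action) pairs by least squares.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(s):
    """Cloned linear policy: predict an action for state(s) s."""
    return s @ W

# On held-out states, the cloned policy should closely match the expert.
test_states = rng.normal(size=(50, 2))
err = np.max(np.abs(policy(test_states) - test_states @ K.T))
print(f"max action error on held-out states: {err:.4f}")
```

In practice the policy is a deep network trained by gradient descent rather than a linear least-squares fit, and methods like DAgger address the distribution shift that arises when the cloned policy visits states the expert never did.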

Staff