**Finite Horizon Problems**

| Lecture | Topics |
| --- | --- |
| Lecture 1 (PDF) | Introduction to Dynamic Programming; Examples of Dynamic Programming; Significance of Feedback |
| Lecture 2 (PDF) | The Basic Problem; Principle of Optimality; The General Dynamic Programming Algorithm (see the recursion after this table); State Augmentation |
| Lecture 3 (PDF) | Deterministic Finite-State Problems; Backward Shortest Path Algorithm (see the code sketch after this table); Forward Shortest Path Algorithm; Alternative Shortest Path Algorithms |
| Lecture 4 (PDF) | Examples of Stochastic Dynamic Programming Problems; Linear-Quadratic Problems; Inventory Control |
| Lecture 5 (PDF) | Stopping Problems; Scheduling Problems; Minimax Control |
| Lecture 6 (PDF) | Problems with Imperfect State Info; Reduction to the Perfect State Info Case; Linear Quadratic Problems; Separation of Estimation and Control |
| Lecture 7 (PDF) | Imperfect State Information; Sufficient Statistics; Conditional State Distribution as a Sufficient Statistic; Finite-State Analysis |
| Lecture 8 (PDF) | Suboptimal Control; Cost Approximation Methods: Classification; Certainty Equivalent Control; Limited Lookahead Policies; Performance Bounds; Problem Approximation Approach; Parametric Cost-to-Go Approximation |
| Lecture 9 (PDF) | Rollout Algorithms; Cost Improvement Property; Discrete Deterministic Problems; Approximations to Rollout Algorithms; Model Predictive Control (MPC); Discretization of Continuous Time; Discretization of Continuous Space; Other Suboptimal Approaches |
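For reference alongside Lecture 2, the general dynamic programming algorithm listed there is the standard backward recursion. The notation below ($f_k$ for the system function, $g_k$ for the stage cost, $w_k$ for the disturbance, $U_k(x_k)$ for the control constraint set) is the usual finite-horizon notation and is not spelled out in the outline itself:

$$
J_N(x_N) = g_N(x_N), \qquad
J_k(x_k) = \min_{u_k \in U_k(x_k)} \operatorname{E}_{w_k}\!\left[\, g_k(x_k, u_k, w_k) + J_{k+1}\big(f_k(x_k, u_k, w_k)\big) \right], \quad k = N-1, \ldots, 0.
$$

The optimal cost is $J_0(x_0)$, and an optimal policy is obtained by recording a minimizing $u_k$ at each $(k, x_k)$.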
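The backward shortest path algorithm of Lecture 3 applies this recursion to a deterministic finite-state problem. Below is a minimal Python sketch on a hypothetical two-stage graph; the state names, actions, and costs are invented for illustration, not taken from the slides.

```python
# Minimal sketch of the backward shortest path / DP recursion for a
# deterministic finite-state problem. The staged graph is hypothetical.
import math

# stages[k][(x, u)] = (next_state, cost): transitions available at stage k
stages = [
    {("s", "a"): ("x1", 2.0), ("s", "b"): ("x2", 4.0)},
    {("x1", "a"): ("t", 3.0), ("x2", "a"): ("t", 1.0)},
]
terminal_cost = {"t": 0.0}

def backward_dp(stages, terminal_cost):
    """Compute the cost-to-go J_k(x) by the backward recursion."""
    J = dict(terminal_cost)  # J_N
    for stage in reversed(stages):
        J_prev = {}
        for (x, u), (nxt, cost) in stage.items():
            total = cost + J.get(nxt, math.inf)
            if total < J_prev.get(x, math.inf):
                J_prev[x] = total  # keep the minimizing control's cost
        J = J_prev
    return J  # J_0: optimal cost from each initial state

print(backward_dp(stages, terminal_cost))  # {'s': 5.0}
```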
**Simple Infinite Horizon Problems**

| Lecture | Topics |
| --- | --- |
| Lecture 10 (PDF) | Infinite Horizon Problems; Stochastic Shortest Path (SSP) Problems; Bellman's Equation; Dynamic Programming: Value Iteration (see the sketch after this table); Discounted Problems as a Special Case of SSP |
| Lecture 11 (PDF) | Review of Stochastic Shortest Path Problems; Computational Methods for SSP; Computational Methods for Discounted Problems |
| Lecture 12 (PDF) | Average Cost Per Stage Problems; Connection with Stochastic Shortest Path Problems; Bellman's Equation; Value Iteration; Policy Iteration |
| Lecture 13 (PDF) | Control of Continuous-Time Markov Chains: Semi-Markov Problems; Problem Formulation: Equivalence to Discrete-Time Problems; Discounted Problems; Average Cost Problems |
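Value iteration, as covered in Lecture 10, repeatedly applies the Bellman operator $T$ until it reaches its fixed point $J^*$. Here is a minimal sketch on a hypothetical two-state, two-action discounted MDP; all of the transition probabilities and costs are invented for illustration.

```python
# A minimal value iteration sketch for a discounted finite-state MDP.
# The two-state, two-action MDP data below are hypothetical.
import numpy as np

# P[u] is the transition matrix under action u; g[u] is the stage cost vector.
P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),
     1: np.array([[0.5, 0.5], [0.1, 0.9]])}
g = {0: np.array([1.0, 2.0]), 1: np.array([0.5, 3.0])}
alpha = 0.95  # discount factor

def value_iteration(P, g, alpha, tol=1e-8):
    """Iterate J <- T J = min_u [g_u + alpha * P_u J] to a fixed point."""
    J = np.zeros(2)
    while True:
        TJ = np.min([g[u] + alpha * P[u] @ J for u in P], axis=0)
        if np.max(np.abs(TJ - J)) < tol:
            return TJ
        J = TJ

print(value_iteration(P, g, alpha))
```

Convergence here relies on $T$ being an $\alpha$-contraction for $\alpha < 1$, one of the contraction properties revisited in Lecture 15.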
**Advanced Infinite Horizon Problems**

| Lecture | Topics |
| --- | --- |
| Lecture 14 (PDF) | Introduction to Advanced Infinite Horizon Dynamic Programming and Approximation Methods |
| Lecture 15 (PDF) | Review of Basic Theory of Discounted Problems; Monotonicity and Contraction Properties; Contraction Mappings in Dynamic Programming; Discounted Problems: Countable State Space with Unbounded Costs; Generalized Discounted Dynamic Programming; An Introduction to Abstract Dynamic Programming |
| Lecture 16 (PDF) | Review of Computational Theory of Discounted Problems; Value Iteration (VI); Policy Iteration (PI) (see the policy iteration sketch after this table); Optimistic PI; Computational Methods for Generalized Discounted Dynamic Programming; Asynchronous Algorithms |
| Lecture 17 (PDF) | Undiscounted Problems; Stochastic Shortest Path Problems; Proper and Improper Policies; Analysis and Computational Methods for SSP; Pathologies of SSP; SSP Under Weak Conditions |
| Lecture 18 (PDF) | Undiscounted Total Cost Problems; Positive and Negative Cost Problems; Deterministic Optimal Cost Problems; Adaptive (Linear Quadratic) Dynamic Programming; Affine Monotonic and Risk-Sensitive Problems |
| Lecture 19 (PDF) | Introduction to Approximate Dynamic Programming; Approximation in Policy Space; Approximation in Value Space; Rollout / Simulation-Based Single Policy Iteration; Approximation in Value Space Using Problem Approximation |
| Lecture 20 (PDF) | Discounted Problems; Approximate (Fitted) VI; Approximate PI; The Projected Equation; Contraction Properties: Error Bounds; Matrix Form of the Projected Equation; Simulation-Based Implementation; LSTD, LSPE, and TD Methods |
| Lecture 21 (PDF) | Review of Approximate Policy Iteration; Projected Equation Methods for Policy Evaluation; Simulation-Based Implementation Issues; Multistep Projected Equation Methods; Bias-Variance Tradeoff; Exploration-Enhanced Implementations; Oscillations |
| Lecture 22 (PDF) | Aggregation as an Approximation Methodology; Aggregate Problem; Simulation-Based Aggregation; Q-Learning (see the Q-learning sketch after this table) |
| Lecture 23 (PDF) | Additional Topics in Advanced Dynamic Programming; Stochastic Shortest Path Problems; Average Cost Problems; Generalizations; Basis Function Adaptation; Gradient-Based Approximation in Policy Space; An Overview |
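Exact policy iteration, reviewed in Lecture 16, alternates policy evaluation with greedy policy improvement. The sketch below reuses the hypothetical two-state MDP data from the value iteration sketch above; for a finite MDP it terminates with an optimal policy in finitely many iterations.

```python
# A minimal exact policy iteration sketch. The MDP data are hypothetical;
# evaluation solves the linear system (I - alpha * P_mu) J = g_mu exactly.
import numpy as np

P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),
     1: np.array([[0.5, 0.5], [0.1, 0.9]])}
g = {0: np.array([1.0, 2.0]), 1: np.array([0.5, 3.0])}
alpha = 0.95
n = 2

def policy_iteration(P, g, alpha):
    mu = np.zeros(n, dtype=int)  # start from an arbitrary policy
    while True:
        # Policy evaluation: J_mu = (I - alpha * P_mu)^{-1} g_mu
        P_mu = np.array([P[mu[i]][i] for i in range(n)])
        g_mu = np.array([g[mu[i]][i] for i in range(n)])
        J = np.linalg.solve(np.eye(n) - alpha * P_mu, g_mu)
        # Policy improvement: greedy with respect to J_mu
        Q = np.array([g[u] + alpha * P[u] @ J for u in sorted(P)])
        mu_new = np.argmin(Q, axis=0)
        if np.array_equal(mu_new, mu):
            return mu, J  # policy stable, hence optimal
        mu = mu_new

print(policy_iteration(P, g, alpha))
```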
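Q-learning, covered in Lecture 22, is a simulation-based, model-free counterpart of value iteration: it updates sampled state-action costs toward a one-step Bellman target. Below is a minimal tabular sketch on a hypothetical three-state chain; the dynamics, costs, restart probability, and step-size schedule are all invented for illustration.

```python
# A minimal tabular Q-learning sketch on a hypothetical three-state chain.
import random

n_states, n_actions, discount = 3, 2, 0.9
Q = [[0.0] * n_actions for _ in range(n_states)]
counts = [[0] * n_actions for _ in range(n_states)]

def step(x, u):
    """Hypothetical dynamics: action 0 moves left, action 1 moves right."""
    nxt = max(0, x - 1) if u == 0 else min(n_states - 1, x + 1)
    cost = 0.0 if nxt == 0 else 1.0  # state 0 is the cost-free target
    return nxt, cost

random.seed(0)
x = 2
for _ in range(100_000):
    u = random.randrange(n_actions)           # exploratory (uniform) policy
    nxt, cost = step(x, u)
    counts[x][u] += 1
    gamma = 1.0 / counts[x][u]                # diminishing per-pair step size
    target = cost + discount * min(Q[nxt])    # sampled Bellman target
    Q[x][u] += gamma * (target - Q[x][u])     # Q-learning update
    x = nxt if random.random() > 0.05 else 2  # occasional restart for coverage

print([[round(q, 2) for q in row] for row in Q])
```

Because the updates are asynchronous and driven by simulation rather than a model, Q-learning sits naturally alongside the asynchronous algorithms of Lecture 16 and the aggregation methods of Lecture 22.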