David MacKay
Slides for Information Theory, Pattern Recognition, and Neural Networks Lectures by David MacKay

Note: I use the blackboard in lectures, and I give the audience problems to solve. These slides are therefore an incomplete record of the lectures.

Lecture 1 Introduction to information theory
Lecture 2 Introduction to compression. Information content.
Lecture 3 Source coding theorem, bent coin lottery
Lecture 4 Symbol codes
Lecture 5 Symbol codes and arithmetic coding
Lecture 6 Inference and information measures for noisy channels
Lecture 7 Capacity of a noisy channel
Lecture 8 Coding theorem
Lecture 9 Inference of parameters
Lecture 10 Inference of parameters and models
Lecture 11 Clustering as an example inference problem
Lecture 12 Monte Carlo methods
Lecture 13 Advanced Monte Carlo methods
Lecture 14 Variational methods
Lecture 15 Neural networks part 1: feedforward neural networks
Lecture 16 Neural networks part 2: content-addressable memories; and state-of-the-art error-correcting codes

Bonus Lecture
Counting trees (10 mins)

These slides are copyright (c) 2005, 2006, 2007, 2008, 2009, 2012 David J.C. MacKay. They may be used for any educational purpose.

See also The Information Theory, Inference, and Learning Algorithms website, which includes all the figures from the book for use in teaching.

Back to the Information Theory, Pattern Recognition, and Neural Networks course


Site last modified Sun Aug 31 18:51:05 BST 2014