Mathematics for Foundations of Machine Learning & Deep Learning Courses


Course aims

This course, taught in person over two days, covers the mathematics required for understanding the fundamentals of machine learning and deep learning. By the end of the course you will have a good understanding of several core mathematical concepts, and the tools required to understand the theory behind some of the most important machine learning and deep learning algorithms.

This is an intensive course ideally suited to people already in technical or quantitative roles: you might be a data scientist or software developer looking to deepen your knowledge of machine learning.

Summary of syllabus

Probability and statistics

  • Probability theory. Sum and product rules of probability. Joint, conditional and marginal probability. Bayes’ theorem.

  • Discrete and continuous probability distributions. Probability densities, cumulative distribution function.

  • Expectation, variance and covariance of random variables.

  • Likelihood functions, prior and posterior distributions. Statistical estimators. Maximum likelihood estimation (MLE), maximum a posteriori (MAP) estimation (see the estimation sketch after this list).

  • Standard statistical distributions: binomial, Bernoulli, categorical/multinoulli, Gaussian, logistic, etc.
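
The course itself is taught mathematically, but a short numerical illustration may help. Below is a minimal sketch, in Python with NumPy (an assumption, not course material), of MLE and MAP estimation for a Bernoulli parameter; the coin-flip scenario and the Beta prior are invented for illustration.

```python
import numpy as np

# Invented scenario: estimate the bias of a coin from observed flips.
rng = np.random.default_rng(0)
true_p = 0.7                          # assumed ground-truth bias
flips = rng.random(100) < true_p      # 100 Bernoulli(true_p) samples

# The maximum likelihood estimate of a Bernoulli parameter is the sample mean.
p_mle = flips.mean()

# MAP estimate under a Beta(a, b) prior (conjugate to the Bernoulli):
# the posterior is Beta(heads + a, tails + b), with mode
# (heads + a - 1) / (n + a + b - 2).
a, b = 2.0, 2.0                       # illustrative prior pseudo-counts
heads, n = flips.sum(), flips.size
p_map = (heads + a - 1) / (n + a + b - 2)

print(f"MLE: {p_mle:.3f}, MAP: {p_map:.3f}")
```

With a symmetric Beta(2, 2) prior the MAP estimate is pulled slightly towards 0.5, which is exactly the prior/likelihood trade-off that Bayes' theorem formalises.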

Linear algebra

  • Vectors and matrices. Matrix and vector operations.

  • Vector space, basis, linear independence. Change of basis.

  • Vector norms and inner products.

  • Orthogonality, orthonormal vectors. Orthogonal projections.

  • Gram-Schmidt procedure (see the numerical sketch after this list).

  • Linear transformations. Eigenvectors and eigenvalues.

  • Orthogonal matrices and rigid transformations.

  • Solving linear systems of equations. Matrix inverse and determinant.

  • Singular value decomposition.
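
As an informal illustration of the orthogonalisation material, here is a minimal sketch of the classical Gram-Schmidt procedure, again assuming Python with NumPy; the input vectors are invented for illustration.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalise linearly independent vectors by subtracting,
    from each vector, its projections onto the earlier basis vectors."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for q in basis:
            w -= np.dot(q, w) * q     # remove the component along q
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

# Three linearly independent vectors in R^3 (invented example).
V = [[1.0, 1.0, 0.0],
     [1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
Q = gram_schmidt(V)
print(np.round(Q @ Q.T, 10))          # identity matrix: the rows are orthonormal
```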

Calculus and optimisation

  • Function derivatives. Interpretation as gradient or slope. Product rule and chain rule of differentiation.

  • Multivariate calculus, partial differentiation. Local optima.

  • Jacobian matrix and the Hessian.

  • Gradient descent (illustrated in the sketch below).
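
The list above culminates in gradient descent. Here is a minimal sketch, assuming Python with NumPy, on an invented quadratic objective f(x) = 0.5 x^T A x - b^T x, whose gradient is Ax - b:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])            # symmetric positive definite (invented)
b = np.array([1.0, 1.0])

def grad(x):
    return A @ x - b                  # gradient of 0.5 x^T A x - b^T x

x = np.zeros(2)                       # starting point
lr = 0.1                              # learning rate (step size)
for _ in range(200):
    x -= lr * grad(x)                 # step against the gradient

print(x, np.linalg.solve(A, b))       # iterate vs. exact minimiser A^{-1} b
```

For a small enough learning rate the iterates converge to the exact minimiser; the same update rule, applied to a loss function, is the workhorse of deep learning training.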

Information theory

  • Information gain, bits and nats.

  • Entropy of a random variable. Average coding lengths.

  • Joint, conditional and cross entropy.

  • Relative entropy / Kullback-Leibler divergence and mutual information (see the sketch below).
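
As a brief illustration of these quantities, here is a minimal sketch, assuming Python with NumPy, computing entropy and KL divergence in bits for two invented discrete distributions:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log2 p_i, in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # convention: 0 * log 0 = 0
    return -np.sum(p * np.log2(p))

def kl_divergence(p, q):
    """Relative entropy D(p || q) = sum_i p_i log2(p_i / q_i), in bits."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# Invented distributions over four outcomes.
p = [0.5, 0.25, 0.125, 0.125]
q = [0.25, 0.25, 0.25, 0.25]
print(entropy(p))                     # 1.75 bits: optimal average code length
print(kl_divergence(p, q))            # 0.25 bits: extra cost of coding p with q
```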

Tutor

The FeedForward AI Academy programmes are led by Dr Kevin Webster, Honorary Research Fellow in Mathematics at Imperial College. Kevin taught the graduate-level course on Deep Learning in the Mathematics Department at Imperial College London in Autumn 2018.

Who is this course for?

This course is for anyone interested in Machine Learning & Deep Learning who has not previously learnt the core mathematical concepts, or has not revisited these topics for a long time. These core mathematical concepts will prepare you for taking the Foundations of Machine Learning and Foundations of Deep Learning courses.

Dates, times & location

Dates & times: Two-day course running from Thu 31 Jan to Fri 1 Feb 2019, 09.30 - 16.30

Location: The Dock, Tobacco Docks, Wapping, London.

If you have any questions about the course, please email academy@feedforwardai.com.