Deep probabilistic machine learning


Funded by EPSRC (EP/N014162/1)
Summary
This project aimed to develop scalable approaches to deep non-parametric probabilistic models that use approximate inference to learn the structure of the model. A key requirement was practical, interpretable models whose latent variables can be used meaningfully by non-academics. In this project, we developed scalable methods for applying Gaussian process (GP) regression to multi-output signals with non-linear dependencies. This combined physics-based techniques, including series representations and ordinary differential equation solvers, with machine learning techniques such as GPs and autoregressive flows, to infer latent variables and forces.
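Gaussian process regression underpins the models described above. As a minimal illustration of the basic single-output technique (not the project's multi-output, non-linear models), the following sketch computes an exact GP posterior with a squared-exponential kernel; all function names and hyperparameter values here are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Exact GP regression: posterior mean and variance at test inputs."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    # Cholesky factorisation for a stable solve of K^{-1} y.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

# Fit a sine curve and predict at an unseen input.
x = np.linspace(0, 2 * np.pi, 10)
y = np.sin(x)
mean, var = gp_posterior(x, y, np.array([np.pi / 2]))
```

The Cholesky-based solve costs O(n³) in the number of training points, which is what motivates the scalable approximations this project pursued.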
Project outcomes
  • A paper on Volterra series for non-linear multi-output Gaussian processes (NLMOGP), published and presented at AISTATS
  • A paper on normalising flows for inferring latent forces in non-linear models, published and presented at AISTATS
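Autoregressive flows, one of the machine learning techniques combined with GPs in this project, build an invertible transform in which each output dimension depends only on earlier dimensions. As a toy sketch of the general idea (with fixed lower-triangular linear conditioners standing in for the masked neural networks used in practice; this is not the project's latent force model), the following shows the forward transform, its log-determinant, and the inverse:

```python
import numpy as np

def forward(z, W_m, W_s):
    """Affine autoregressive transform x = f(z):
    x_i = z_i * exp(s_i) + m_i, where (m_i, s_i) depend only on x_{<i}
    via strictly lower-triangular weight matrices (toy conditioners)."""
    d = len(z)
    x = np.zeros(d)
    log_det = 0.0  # log |det df/dz| = sum of the log-scales s_i
    for i in range(d):
        m = W_m[i, :i] @ x[:i]
        s = W_s[i, :i] @ x[:i]
        x[i] = z[i] * np.exp(s) + m
        log_det += s
    return x, log_det

def inverse(x, W_m, W_s):
    """Inverse z = f^{-1}(x): computable without a loop because each
    (m_i, s_i) depends only on the already-observed x_{<i}."""
    m = W_m @ x  # strictly lower-triangular rows use only x_{<i}
    s = W_s @ x
    return (x - m) * np.exp(-s)

rng = np.random.default_rng(0)
d = 4
W_m = np.tril(rng.normal(size=(d, d)), k=-1)
W_s = np.tril(rng.normal(size=(d, d)), k=-1) * 0.1
z = rng.normal(size=d)
x, log_det = forward(z, W_m, W_s)
z_rec = inverse(x, W_m, W_s)
```

The triangular dependency structure makes the Jacobian determinant a simple product of per-dimension scales, which is what makes density evaluation tractable in flow models.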
machine-learning
deep-learning
spatio-temporal