CVPP - Gaussian Processes
A Gaussian process (GP) is a well-known non-parametric Bayesian regression technique. It uses a set of functions (mean, covariance and noise) to model the relationship between known input-output points, and extrapolates this information to infer the state of unknown regions of the input space. It is an attractive technique because it does not require an explicit model of the underlying function being learned, and because it naturally encodes uncertainty, providing not only its best guess given the available information but also the predicted accuracy of that estimate.
The CVPP library provides several implementations of GP models, ranging from the original derivation to extensions such as sparse, heteroscedastic, variational and stochastic. It also provides most of the mean, covariance and noise functions available in the literature, which can be readily applied to any available model on a plug-and-play basis. Finally, it maintains templates that make it simple to create custom functions or GP models based on new advancements in the area.
Below are some demos of GP models already available in the CVPP library, with brief explanations, references and videos.
Full GP is the term used by CVPP when referring to the standard GP derivation, in which there are no approximations or sparsity involved. This implementation is based on Chapter 2 of Gaussian Processes for Machine Learning, by Rasmussen and Williams (2006).
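The CVPP API itself is not reproduced here; as a rough sketch of what the Full GP model computes, the standard predictive equations from GPML Chapter 2 can be written in a few lines of NumPy (zero mean function, squared-exponential covariance, and Gaussian noise are illustrative choices, not CVPP defaults):

```python
import numpy as np

def rbf(a, b, length=1.0, signal=1.0):
    """Squared-exponential (RBF) covariance between 1-D input vectors."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return signal**2 * np.exp(-0.5 * d2 / length**2)

def full_gp_predict(X, y, Xs, noise=0.1):
    """Exact GP posterior mean and variance at test inputs Xs (zero-mean prior)."""
    K = rbf(X, X) + noise**2 * np.eye(len(X))   # noisy training covariance
    Ks = rbf(X, Xs)                             # train/test cross-covariance
    Kss = rbf(Xs, Xs)                           # test covariance
    L = np.linalg.cholesky(K)                   # O(n^3) factorisation: the cost sparse GPs avoid
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha                           # predictive mean
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)   # predictive (noise-free) variance
    return mu, var

# Toy regression problem: recover sin(x) from 20 noisy-free samples.
X = np.linspace(0.0, 5.0, 20)
y = np.sin(X)
mu, var = full_gp_predict(X, y, np.array([2.5]))
```

The Cholesky factorisation of the full n x n covariance matrix is what makes the exact model cubic in the number of training points, which motivates the sparse approximations below.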
Sparse GP is the term used by CVPP when referring to the Projected Process approximation, in which the training data is projected onto a subset of points to speed up covariance matrix inversion. This implementation is based on Chapter 8 of Gaussian Processes for Machine Learning, by Rasmussen and Williams (2006).
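Again setting CVPP's own interface aside, the Projected Process predictive equations (GPML Section 8.3.4) can be sketched in NumPy as follows; the inducing-point locations and RBF covariance here are illustrative assumptions, and the key point is that the linear solves involve only an m x m system with m much smaller than n:

```python
import numpy as np

def rbf(a, b, length=1.0, signal=1.0):
    """Squared-exponential (RBF) covariance between 1-D input vectors."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return signal**2 * np.exp(-0.5 * d2 / length**2)

def sparse_gp_predict(X, y, Xu, Xs, noise=0.1):
    """Projected Process posterior at test inputs Xs, using inducing points Xu."""
    Kuu = rbf(Xu, Xu) + 1e-8 * np.eye(len(Xu))  # inducing covariance (+ jitter)
    Kuf = rbf(Xu, X)                            # inducing/training cross-covariance
    A = noise**2 * Kuu + Kuf @ Kuf.T            # m x m system instead of n x n
    Kus = rbf(Xu, Xs)                           # inducing/test cross-covariance
    mu = Kus.T @ np.linalg.solve(A, Kuf @ y)    # predictive mean
    var = (np.diag(rbf(Xs, Xs))
           - np.sum(Kus * np.linalg.solve(Kuu, Kus), axis=0)
           + noise**2 * np.sum(Kus * np.linalg.solve(A, Kus), axis=0))
    return mu, var

# Toy problem: 100 training points projected onto 10 inducing points.
X = np.linspace(0.0, 5.0, 100)
y = np.sin(X)
Xu = np.linspace(0.0, 5.0, 10)
mu, var = sparse_gp_predict(X, y, Xu, np.array([2.5]))
```

With smooth data and well-spread inducing points, the projected prediction stays close to the exact GP posterior while the dominant cost drops from O(n^3) to O(n m^2).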