Generalization of Hamiltonian algorithms

Part of Advances in Neural Information Processing Systems 37 (NeurIPS 2024) Main Conference Track


Authors

Andreas Maurer

Abstract

A method to prove generalization results for a class of stochastic learning algorithms is presented. It applies whenever the algorithm generates a distribution that is absolutely continuous relative to some a priori measure and the logarithm of its density is exponentially concentrated about its mean. Applications include bounds for the Gibbs algorithm and for randomizations of stable deterministic algorithms, combinations thereof, and PAC-Bayesian bounds with data-dependent priors.
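As one concrete instance of the condition stated in the abstract (a sketch in illustrative notation, not the paper's own): given a sample S = (z_1, ..., z_n), the Gibbs algorithm at inverse temperature β returns the distribution P_S whose density relative to an a priori measure π is

\[
\frac{dP_S}{d\pi}(h)
  \;=\; \frac{e^{-\beta \hat L_S(h)}}{\int e^{-\beta \hat L_S(h')}\, d\pi(h')},
\qquad
\hat L_S(h) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell(h, z_i).
\]

Here P_S is absolutely continuous relative to π by construction, and its log density at a fixed h is, up to the normalizing constant, the sum −β Σ_i ℓ(h, z_i)/n of n independent terms. If the loss ℓ is bounded, standard concentration inequalities such as Hoeffding's give exponential concentration of this quantity about its mean, which is the kind of condition the abstract requires. The symbols P_S, π, β, L̂_S, and ℓ are assumptions of this sketch rather than the paper's notation.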