Part of Advances in Neural Information Processing Systems 21 (NIPS 2008)
Mohak Shah
We derive risk bounds for randomized classifiers in sample compression settings, where the classifier specification utilizes two sources of information, namely, the compression set and the message string. By extending the recently proposed Occam's Hammer principle to data-dependent settings, we derive point-wise versions of the bounds on stochastic sample-compressed classifiers and also recover the corresponding classical PAC-Bayes bound. We further show how these compare favorably with existing results.
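For reference, the classical PAC-Bayes bound that the abstract says is recovered can be stated, in the standard Langford–Seeger form, roughly as follows; the notation here (prior \(P\), posterior \(Q\), sample size \(m\), confidence \(\delta\)) is a sketch of the standard statement and is not taken from the paper itself. With probability at least \(1-\delta\) over the draw of the sample \(S\), simultaneously for all posteriors \(Q\),
\[
\mathrm{kl}\!\left( R_S(G_Q) \,\middle\|\, R(G_Q) \right) \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{m+1}{\delta}}{m},
\]
where \(G_Q\) is the Gibbs (stochastic) classifier that predicts by drawing a hypothesis from \(Q\), \(R_S\) and \(R\) are its empirical and true risks, and \(\mathrm{kl}(q\|p) = q\ln\frac{q}{p} + (1-q)\ln\frac{1-q}{1-p}\) is the binary KL divergence.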