NeurIPS 2019
Sunday, December 8 through Saturday, December 14, 2019, at the Vancouver Convention Center
It is a long-standing open question whether a preference-based teaching strategy can achieve teaching complexity linear in the VC dimension. This work shows that if a sequential aspect is added to the setting, in which the learner's preference function may change after each example from the teacher (constrained in a way that rules out extreme "coding tricks"), then the teaching complexity is bounded by the VC dimension. This represents a major advance in our understanding of the complexity of machine teaching, and the intermediate lemmas may be of independent interest. The reviewers all agree that this is solid work and clearly worth accepting.