Abstract:

We consider the relationship between learnability of a “base class” of functions on a set $X$ and learnability of a class of statistical functions derived from the base class. For example, we refine results showing that learnability of a family $\{h_p : p \in P\}$ of functions implies learnability of the family of functions $h_\mu = \lambda p \in P.\, E_\mu(h_p)$, where $E_\mu$ denotes expectation with respect to $\mu$, and $\mu$ ranges over probability distributions on $X$. We look at both Probably Approximately Correct (PAC) learning, where example inputs and outputs are chosen at random, and online learning, where the examples are chosen adversarially. For agnostic learning, we establish improved bounds on the sample complexity of learning for statistical classes, stated in terms of combinatorial dimensions of the base class. We connect these problems to techniques introduced in model theory for “randomizing a structure”. We also provide counterexamples for realizable learning, in both the PAC and online settings.
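
As a concrete illustration of the construction (ours, not taken from the paper): take the base class of threshold functions $h_p(x) = \mathbf{1}[x \le p]$ on $X = \mathbb{R}$, indexed by $p \in P = \mathbb{R}$. The statistical function attached to a probability distribution $\mu$ is then its cumulative distribution function:
\[
  h_\mu(p) \;=\; E_\mu(h_p) \;=\; \mu\big((-\infty, p]\big) \;=\; F_\mu(p),
\]
so results of the kind described above transfer learnability of the threshold class (which has VC dimension $1$) to learnability of the family $\{F_\mu : \mu \text{ a probability distribution on } \mathbb{R}\}$.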


Citation

Aaron Anderson and Michael Benedikt. 2025. “Logical perspectives on learning statistical objects.” arXiv preprint, https://arxiv.org/abs/2504.00847.

@unpublished{learningrandom,
      title={Logical perspectives on learning statistical objects}, 
      author={Aaron Anderson and Michael Benedikt},
      year={2025},
      eprint={2504.00847},
      archivePrefix={arXiv},
      primaryClass={cs.LO},
      url={https://arxiv.org/abs/2504.00847},
      note={preprint},
}