One can learn any finite hypothesis class H with O(log |H|) labeled examples. Alas, learning with so few examples requires storing them in memory, which takes |X|^(O(log |H|)) memory states, where X is the set of all labeled examples. This motivates the question of how many labeled examples are needed when memory is bounded. One might wonder whether there is a general combinatorial condition for unlearnability with bounded memory, analogous to the condition VCdim(H) = infinity for PAC unlearnability. In this talk we give such a condition.
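To see roughly where the memory bound comes from (a standard counting sketch, assuming the learner simply stores an ordered list of the examples it has seen): with m stored examples, each drawn from X, the learner can occupy at most

    |X| * |X| * ... * |X|  (m times)  =  |X|^m

distinct memory states, so storing m = O(log |H|) examples already forces |X|^(O(log |H|)) states.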
*No prior machine learning knowledge is needed (only basic probability theory is assumed).