We found that data structure fundamentally expands what Hopfield networks can do.
Hopfield networks consist of N binary neurons, densely connected through Hebbian couplings. Such networks are usually regarded as inefficient memory models, since they can store at most P ≈ 0.14N uncorrelated memories, and storing correlated memories reduces the capacity even further.
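For concreteness, here is a minimal sketch (not our actual code) of Hebbian storage and zero-temperature recall in a standard Hopfield network, with ±1 neurons and asynchronous updates; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 1000, 100                       # load P/N = 0.10, below the 0.14 capacity
xi = rng.choice([-1, 1], size=(P, N))  # P uncorrelated binary memories

# Hebbian couplings J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling
J = xi.T @ xi / N
np.fill_diagonal(J, 0.0)

def relax(sigma, J, sweeps=20):
    """Zero-temperature asynchronous dynamics: align each spin with its local field."""
    sigma = sigma.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(sigma)):
            sigma[i] = 1 if J[i] @ sigma >= 0 else -1
    return sigma

# Start from a noisy version of memory 0 and check retrieval via the overlap
noisy = xi[0] * rng.choice([1, -1], size=N, p=[0.9, 0.1])
m = relax(noisy, J) @ xi[0] / N
print(f"overlap with memory 0: {m:.2f}")   # close to 1 below capacity
```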
Instead, we considered memories that are combinations of random features (see the sketch after this list) and found that:
the features can become ground states of the model (learning transition) if enough data are provided;
any combination of features can become a local minimum of the energy, allowing the network to store any example generated from the same features as the memories (generalization transition).
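A hedged sketch of how such memories could be generated, assuming the schematic form ξ^μ = sign(Σ_k c_k^μ f^k) with D random binary features f^k and random coefficients c^μ (the exact distribution used in our model may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

N, D, P = 1000, 50, 200
f = rng.choice([-1, 1], size=(D, N))   # D random binary feature vectors
c = rng.standard_normal(size=(P, D))   # random coefficients of each memory

# Each memory is the sign of a random linear combination of the features,
# so different memories are correlated through the shared feature set.
xi = np.sign(c @ f)
xi[xi == 0] = 1                        # break rare ties, keep neurons binary
```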
This may serve as a paradigm for generalization mechanisms in more complex networks.
Training and test examples become fixed points once the features have been learned. The blue line is the magnetization of the hidden features, which grows to 1 when α is high enough (learning phase). The orange line is the magnetization of the training examples, which is ≃ 1 at low α and drops as α increases, as expected for an associative memory (storage phase). Surprisingly, it grows back to 1 at high values of α. Near this transition, test examples also have magnetization = 1, as shown by the red line (generalization phase).
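The magnetization plotted in the figure is the overlap between the configuration reached by the dynamics and a reference pattern (a feature, a training example, or a test example); a minimal way to compute it, with illustrative names:

```python
import numpy as np

def magnetization(sigma, pattern):
    """Overlap m = (1/N) * sum_i sigma_i * pattern_i; equals 1 when the
    relaxed configuration coincides with the reference pattern."""
    return float(sigma @ pattern) / len(pattern)
```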
We refined an unlearning (a.k.a. dreaming) algorithm and showed that it exploits the correlations in the data to build larger basins of attraction.
Our Daydreaming protocol works surprisingly well on real data too: it stores a large number of examples at low load, builds prototype-like attractors at very high load, and converges to meaningful minima even for test examples.
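As a rough illustration of the idea (not the exact Daydreaming protocol), a dreaming-style step can be sketched as: relax the network from an example, then reinforce the example and weaken the configuration the dynamics actually reached, J ← J + (λ/N)(ξξᵀ − σσᵀ). The update form, the learning rate, and the relaxation schedule below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def dreaming_step(J, xi_mu, lam=0.01, sweeps=10):
    """One schematic unlearning/dreaming update (assumed form, not the paper's code)."""
    N = len(xi_mu)
    sigma = xi_mu.copy()
    for _ in range(sweeps):                    # relax from the example
        for i in rng.permutation(N):
            sigma[i] = 1 if J[i] @ sigma >= 0 else -1
    # Reinforce the example, penalize the (possibly spurious) attractor it falls into
    J = J + (lam / N) * (np.outer(xi_mu, xi_mu) - np.outer(sigma, sigma))
    np.fill_diagonal(J, 0.0)
    return J
```

Iterating such steps over the training examples pushes the couplings away from spurious attractors and enlarges the basins around the examples, which is the effect described above.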