In natural systems, learning and adaptation occur at multiple levels and often involve interactions among many independent agents. Examples include cell-level self-organization, brain plasticity, and complex societies of biological organisms that operate without a system-wide objective. All of these systems exhibit remarkably similar patterns of learning through local interaction. Most existing approaches to AI, by contrast, though inspired by biological systems at the mechanistic level, largely ignore this collective aspect of learning and instead optimize a global, hand-designed, and usually fixed loss function in isolation.
We posit there is much to be learned from, and adopted from, natural systems regarding how learning arises through collective interactions across scales, from single cells, through complex organisms, up to groups and societies.
The goal of this workshop is to explore both natural and artificial systems and examine how they can (or already do) lead to new approaches to learning that go beyond the established optimization and game-theoretic views. The specific topics we plan to solicit include, but are not limited to:
Learning leveraged through collectives, biological and otherwise (emergence of learning, swarm intelligence, applying high-level brain features such as fast/slow thinking to AI systems, self-organization in AI systems, evolutionary approaches to AI systems, natural induction)
Social and cultural learning in AI (cultural ratchet, cumulative cultural evolution, formulation of corresponding meta-losses and objectives, new methods for loss-free learning)
In particular, we are interested in the following questions:
What are the benefits of collectives for learning, beyond scalability?
Are emergence and self-organization a good foundation for learning systems?
What are the limits of learned self-organizing systems (such as Neural Cellular Automata; see the sketch after this list) and how general are they?
What cannot be learned via optimization? Does collective learning offer a way to specify and solve problems that remain out of reach of traditional approaches?
How far can learned optimizers take us in the discovery of novel learning mechanisms?
Where do goals come from? How do we build systems that continually invent their own goals?
Does social and cultural learning offer insights into the creation of better continual lifelong learners?
What computational mechanisms give rise to innovation akin to that which differentiated humans from other animals?
How did self-organization and collective intelligence evolve in nature? Can we ever hope to replicate it? Should we?
What role do the environment and other agents play in shaping individual intelligence?
What role does interaction between parts of the brain (such as the left and right hemispheres, cortical columns, etc.) play in the emergence of intelligence?
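To make the question about learned self-organizing systems concrete, here is a minimal sketch of a Neural Cellular Automaton update step in Python/NumPy. The grid size, channel count, and random (untrained) weights are illustrative assumptions rather than a reference implementation; in practice the per-cell network would be trained end-to-end toward a target pattern or behavior.

```python
# Minimal sketch of a Neural Cellular Automaton (NCA) update step.
# Sizes and weights below are illustrative assumptions, not a reference implementation.
import numpy as np

H, W, C = 32, 32, 8       # grid size and per-cell state channels (assumed)
HIDDEN = 32               # hidden width of the shared per-cell network (assumed)

state = np.zeros((H, W, C))
state[H // 2, W // 2, :] = 1.0            # seed a single "living" cell

# Fixed perception filters: identity plus Sobel-x / Sobel-y gradients.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float) / 8.0
sobel_y = sobel_x.T
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
filters = [identity, sobel_x, sobel_y]

# Per-cell update rule: a tiny two-layer network shared by every cell.
# Random weights stand in for weights that would normally be learned;
# trained NCAs often zero-initialize the output layer.
rng = np.random.default_rng(0)
w1 = rng.normal(0, 0.1, (C * len(filters), HIDDEN))
w2 = rng.normal(0, 0.1, (HIDDEN, C))

def conv2d_same(img, kernel):
    """3x3 convolution with zero padding, applied channel-wise."""
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + H, dx:dx + W, :]
    return out

def nca_step(state):
    # 1. Each cell perceives only its local neighbourhood.
    perception = np.concatenate([conv2d_same(state, k) for k in filters], axis=-1)
    # 2. The same small network computes an update for every cell (a purely local rule).
    hidden = np.maximum(perception @ w1, 0.0)        # ReLU
    update = hidden @ w2
    # 3. Stochastic update: each cell fires with probability 0.5, mimicking asynchrony.
    fire = rng.random((H, W, 1)) < 0.5
    return state + update * fire

for _ in range(16):
    state = nca_step(state)
```

The property the sketch illustrates is locality: every cell runs the same small network on its own neighborhood, and any global structure has to emerge from repeated local updates rather than from a system-wide controller.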
Why are these questions important?
There are likely limits to the way we currently build and scale up our learning systems. Many issues persist with existing machine learning and artificial intelligence methods, especially when compared to the ways in which natural systems learn. Catastrophic forgetting, limited continual learning, lack of fast adaptation and extrapolation, lack of open-endedness, and many other issues do not seem to afflict most natural learners, yet plague most of the learning systems we build. Looking at natural and artificial systems not individually but collectively, across disciplines and across scales, should help reveal the underlying mechanisms that are perhaps shared and foundational to what makes learning in nature still significantly more effective than in most systems built today.