Thoughts

Sometimes, I wonder about the following things:

Regarding the third point, I am happy to see that my three lines of work are somewhat converging on the formulation of one single big problem. The works in AutocurriculaLab - whether multi-agent reinforcement learning or agent-based generative models - are attempts to simulate a society of goal-directed agents without touching the origin of agency or formulating the problem of relevance. My current project in NeuroAILab is to test a few hypotheses in vision neuroscience using prospective configuration as a learning rule. Prospective configuration originated from energy-based networks, which are equivalent to a physical machine called an energy machine, an instance of unconventional computing that might explain the emergence of agency or help formulate the problem of relevance. And my long-term goal in LangTechAI is to build and train from scratch a biologically inspired large language model, ideally one with emergent agency, able to realise relevance in the large world. As is clear, the problem of relevance is central to all three lines of work, and unconventional cybernetic computation might be able to formulate and computationally solve this problem. This is my ultimate goal: building artificial general intelligence on a par with biological intelligence.