Thoughts
Sometimes, I wonder about the following things:
I have an idea for a short story about my everyday life. I might write it at some point.
For a while now, I have wanted to celebrate two somewhat related Finnish masterpieces: the book Painful Intelligence: What AI Can Tell Us About Human Suffering by Hyvärinen, and the film The Man Without a Past by Kaurismäki. At some point, I might write about them.
Recently, I have been thinking about writing an essay on the following. I argue that Computational Functionalism in its classical definition (the Turing algorithmic framework) might not be necessary for a good theory of consciousness and cognition; thus Integrated Information Theory, as the only theory of consciousness that tries to explain experience on a physical ground, might still be a good theory of consciousness. Following the same line of argument, I think that the Embodied Turing Test of NeuroAI (capabilities common across animals and humans) and the Social Autotelic Artificial Agent framework (capabilities belonging to humans) might not be sufficient to solve the Problem of Relevance in the Large World. This problem might still be solved in a computational framework that borrows a fresh perspective from the Cybernetics era, such as the cybernetic mode of bottom-up physical computation; this is also why Unconventional Computing, such as Neuromorphic Computing, might be a valuable tool in the quest to build biologically-par artificial intelligence.
Regarding the third point, I am happy to see that my three lines of work are converging on the formulation of one single big problem. The works in AutocurriculaLab, whether multi-agent reinforcement learning or agent-based generative models, are attempts to simulate a society of goal-directed agents without touching the origin of agency or formulating the problem of relevance. My current ongoing project in NeuroAILab is to test a few hypotheses in vision neuroscience using prospective configuration as a learning rule. Prospective configuration originated from energy-based networks, which are equivalent to a physical machine called an energy machine, an instance of unconventional computing that might explain the emergence of agency or help formulate the problem of relevance. And my long-term goal in LangTechAI is to build and train from scratch a biologically-inspired large language model, ideally one with emergent agency, able to realise relevance in the large world. Clearly, the problem of relevance is central to all three lines of work, and unconventional cybernetic computation might be able to formulate and computationally solve it. This is my ultimate goal: building biologically-par artificial general intelligence.