Discover the Essence of Information
In an era where Large Language Models (LLMs) dominate, I explore an alternative approach to understanding intelligence and cognition. My project challenges data-heavy methods by focusing on rule-based information expression, inspired by the work of Hector Zenil and Stephen Wolfram. Here, I propose that:
Understanding equals compression, which leads to abstraction. (Chaitin, G. J., "Algorithmic Information Theory," Cambridge University Press, 1987, p. 62.)
Intelligence isn't just about holding information, but about how we manipulate and apply it.
By developing a system where information is compressed into simple, predictive rules, my work demonstrates that true cognitive advancement lies not in data accumulation but in elegant, rule-based synthesis.
Explore how simplicity can redefine complexity.
The code for this post is available here
Fig 1. Definition of a network by its connectivity matrix and dynamic
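Since the connectivity matrix and dynamic of Fig 1 are only shown as an image, here is a minimal sketch of how such a network can be defined in code, assuming 16 binary nodes, a 0/1 connectivity matrix `C`, and a simple threshold dynamic; the actual values of the figure are not reproduced here.

```python
import numpy as np

# Minimal sketch in the spirit of Fig 1 (assumed values, not the actual
# matrix of the figure): 16 binary nodes, a 0/1 connectivity matrix C,
# and a threshold dynamic.
N = 16
rng = np.random.default_rng(0)
C = rng.integers(0, 2, size=(N, N))           # C[i, j] = 1 if node j feeds node i

def step(state: np.ndarray) -> np.ndarray:
    """One application of the dynamic: a node turns on if at least
    half of its incoming connections are active."""
    drive = C @ state                          # number of active inputs per node
    k = C.sum(axis=1)                          # in-degree of each node
    return (2 * drive >= k).astype(np.uint8)
```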
To know the information this network is capable of generating, we would need to feed it all 2^16 = 65,536 possible inputs and record its 65,536 outputs. With my method of abstraction nodes, however, the total input-output repertoire of this network can be expressed as shown in Figure 2 (a brute-force sketch of the exhaustive enumeration follows the figure).
Fig 2. Abstraction of the repertoire of the network in Fig 1
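As a concrete illustration of the exhaustive approach described above, the sketch below feeds all 65,536 inputs through the assumed network and groups them by the output they produce; the point of an abstraction like Fig 2 is precisely that many inputs collapse onto far fewer distinct outputs, so the repertoire can be stated much more compactly.

```python
from collections import defaultdict

def int_to_state(x: int, n: int = N) -> np.ndarray:
    """Unpack an integer 0..2^n-1 into a binary state vector."""
    return np.array([(x >> i) & 1 for i in range(n)], dtype=np.uint8)

# Brute-force repertoire: every input mapped to the output it generates.
repertoire = defaultdict(list)                 # output -> inputs that produce it
for x in range(2 ** N):                        # all 65,536 possible inputs
    out = tuple(step(int_to_state(x)))
    repertoire[out].append(x)

print(f"{2 ** N} inputs collapse onto {len(repertoire)} distinct outputs")
```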
I have to say that the abstraction process never knows how the repertoire of outputs was generated; in other words, it never knows the dynamic that rules the network.
This is possible because, as Zenil claims, understanding = abstraction = pattern finding. And that is what we, the intelligent entities, in principle do to survive: we find behaviour patterns so we can adapt and make predictions in order to survive.
I found that information spreading shows a fractal distribution. This means it seems chaotic but actually respects an order (obviously, because it was generated by a specific rule or dynamic), and it is therefore possible to identify attractors of information.
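One way to make the "attractors of information" concrete, using the same assumed network as above, is to iterate the dynamic from a given input until a state repeats; the repeating cycle (or fixed point) is the attractor that input falls into. This is only a sketch of the idea, not the measurement behind the fractal-distribution claim.

```python
def find_attractor(x: int) -> tuple:
    """Iterate the dynamic from input x until a state repeats and
    return the cycle it settles into (a fixed point has length 1)."""
    state = int_to_state(x)
    seen, trajectory = {}, []
    while tuple(state) not in seen:
        seen[tuple(state)] = len(trajectory)
        trajectory.append(tuple(state))
        state = step(state)
    return tuple(trajectory[seen[tuple(state)]:])

# Sample a subset of inputs and count how many distinct attractors they reach.
attractors = {find_attractor(x) for x in range(0, 2 ** N, 997)}
print(f"{len(attractors)} distinct attractors in the sampled inputs")
```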
I also found that the information distribution of the output repertoire respects Holland's schemata (1975): as in a genome, there are positions of type * (don't care) where, no matter what the input is at that position, the same outputs are generated.
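A Holland-style * position can be detected directly from the dynamic: an input bit is a don't-care if flipping it never changes the output. The sketch below checks this on a random sample of inputs of the assumed network; an exhaustive check over all 65,536 inputs works the same way.

```python
def dont_care_positions(samples: int = 2000) -> list:
    """Return the input positions whose value never affects the output
    (the '*' genes of the schema), tested on a random sample of inputs."""
    xs = rng.integers(0, 2 ** N, size=samples)
    stars = []
    for bit in range(N):
        if all(
            np.array_equal(step(int_to_state(int(x))),
                           step(int_to_state(int(x) ^ (1 << bit))))
            for x in xs
        ):
            stars.append(bit)
    return stars

print("don't-care input positions:", dont_care_positions())
```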
FURTHER STEPS
I am applying the philosophical implications of this approach to finance, in an attempt to show that simple rules and the abstraction of information naturally work better than the massive amounts of information expressed, for example, in the calculation of indicators.