I like to contemplate... here are my 3am and 3pm thoughts. I'd probably need a different site for them.
I am currently launching my social presence on Bluesky at @amneuman.bsky.social and moving those thoughts over to that social front.
If you had asked me two months ago whether AI could ever replicate the uniquely human capacity for logic and reasoning, I would have said no. Ask me the same question now, and the answer is a highly probable “yes.” The more appropriate question has shifted from whether to when. When will we invent machine learners that don’t consume the energy budget of a small nation? When will they learn to be as “efficient” as a human brain? (And yes, I’m aware that human brains burn plenty of calories, but at least they don’t require a data center just to generate a short video.) If there is a will, there is a way. And if humanity, or a subset of humanity with enough resources and ambition, decides to make it happen, it probably will.

One of my research directions is spiking neural networks, and I’ve been curious whether they might contribute to a new generation of energy-friendly AI models. Or are they even a tiny, respectable imitation of the human brain? (No, but they’re trying their best.)

As I think more about “intelligence,” my real fear isn’t just that AI might eventually replicate human thought. It’s the possibility that AI might one day read my thoughts, anticipating not just my lunch order, but my desires, attractions, and inner doubts.
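(For anyone who hasn’t met a spiking neuron: below is a toy sketch in plain Python/NumPy, not my research code, with parameter values picked purely for illustration. It simulates a single leaky integrate-and-fire neuron; the membrane potential drifts back toward rest and only jumps when a discrete input spike arrives, which is the sparse, event-driven style of computation behind the energy-efficiency hopes.)

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) neuron; all parameter values are illustrative only.
tau, v_rest, v_thresh, v_reset = 20.0, 0.0, 1.0, 0.0  # leak time constant (ms) and arbitrary voltage levels
dt, steps = 1.0, 200

rng = np.random.default_rng(0)
input_spikes = rng.random(steps) < 0.1   # sparse incoming spike train (True = a spike arrived)
weight = 0.4                             # synaptic weight

v = v_rest
out_spikes = []
for t in range(steps):
    # Membrane potential leaks toward rest; it only jumps when an input spike arrives.
    v += dt * (v_rest - v) / tau + weight * input_spikes[t]
    if v >= v_thresh:                    # threshold crossing: emit a spike, then reset
        out_spikes.append(t)
        v = v_reset

print(f"{len(out_spikes)} output spikes out of {steps} time steps")
```

Whether that sparsity actually survives the trip from toy models to real hardware is, of course, the whole question.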
Anyway, this brings me to an article Google thoughtfully suggested. The piece discusses a new book by a life scientist and author who argues (I paraphrase) that “intelligence is an emergent property of social complexity.” Believable. Many of us have had those moments where we seem to sync brains with someone else. But the article left me unconvinced in a few places. Take this poetic sentence: “The way this explanation differs from others is by offering an incentive rather than simply means to achieve it: yes, free hands, meat diet, and many other factors made our brain possible, but the reason we needed it in the first place was to remember all our friends who helped us fight monsters.”

To me, the “meat diet” was driven by the push toward intelligence. “Free hands” arose through circumstantial evolution and were driven by that same push. Complex social structures sustain intelligence once human brain size reaches a certain plateau, after “free hands” and “meat diet” have done their work. Otherwise, would horses, when gathering in groups large enough to start their own social clubs, develop human intelligence? Something tells me they wouldn’t. And what about the Neanderthals and Denisovans? They also trended toward intelligence, and there’s no evidence that their group sizes were meaningfully smaller than those of early Homo sapiens. So why did they trend more slowly, so much so that, in the end, they were absorbed by us?
Disagreements aside, it’s a nice read. (The author later drifted into a discussion of human happiness and all that jazz.) What intrigued me, though, is that this short article hints at something I’ve suspected for a while is key to sustainable intelligence, and perhaps to resisting hallucinations: high-dimensional connectivity and sizable strongly connected components, enough to essentially invoke a Ramsey-type principle, where sufficient size and density force structure to appear. But then this drops me right back into a familiar loop: reaching that level of complexity requires enormous energy expenditure. I glimpse a light, it flickers, and then I’m lost in my thoughts all over again.
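To make the “sizable strongly connected components” bit less hand-wavy, here is a back-of-the-envelope sketch (Python with networkx; the graph size and edge probabilities are made up for illustration, and this is a statement about random directed graphs, not about brains or any particular model): once the edge probability crosses roughly 1/n, a giant strongly connected component snaps into existence.

```python
import networkx as nx

# In a random directed graph G(n, p), a giant strongly connected component
# appears once the expected out-degree n*p passes 1, i.e. around p = 1/n.
# Purely an illustration of the threshold effect, nothing more.
n = 2000
for p in [0.25 / n, 0.5 / n, 1.0 / n, 2.0 / n, 4.0 / n]:
    G = nx.gnp_random_graph(n, p, directed=True, seed=0)
    largest_scc = max(nx.strongly_connected_components(G), key=len)
    print(f"p = {p:.5f}: largest SCC has {len(largest_scc)} of {n} nodes")
```

That abrupt jump is the kind of threshold behavior I keep circling back to, and the energy cost of actually wiring up that many connections is exactly where my loop starts again.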
(I’d say ChatGPT isn’t funny. So that corner of human intelligence remains safe. Although, as someone who works in the mathematics of machine learning and is a comedy aficionado, I suppose I might contribute to the problem.
Oompa loompa.)