The Wake of AI
A nonspecialist blog on all things AI
Having just returned from ICLR 2025 in Singapore, I wanted to share some of the ideas and themes that were most prevalent during the conference and how they align with broader trends I've been observing, for example at the Winter Simulation Conference 2024.
One major takeaway is a growing (and even somewhat agreed-upon) recognition that bigger is not always better when it comes to artificial intelligence models. The field has long been riding a wave of ever-larger models, with training procedures that now require megawatts of GPU power. And yet, the biological system from which neural networks originally drew inspiration, the human brain, operates on roughly 20 watts. This contrast is becoming harder to ignore. Moreover, there are clear signs of diminishing returns: throwing more compute at the problem is no longer yielding the spectacular gains it once did. This observation is prompting a broader rethink: is the path forward through scaling, or through smarter, more efficient models?
This concern connects to a broader discussion about digital twins (a primary topic of conversation at the Winter Simulation Conference 2024 in Florida, USA). Digital twins can be viewed as massive surrogate models meant to replicate large physical or virtual systems. While promising in theory, these models are extremely data-hungry and computationally intensive to train well. One common opinion is that their complexity often obscures, rather than clarifies, the underlying systems and dynamics they aim to model. This prompts the question: are simpler models 'better'?
For me, one of the most memorable moments of the conference was the keynote address by Prof. Yi Ma of the University of Hong Kong. His talk offered several great taglines. One of them was:
"Intelligence is not a game for billionaires."
The pursuit of bigger models increasingly concentrates AI capabilities in a small number of private companies with the resources to afford massive compute budgets. Meanwhile, researchers and practitioners are starting to look for smarter, leaner approaches to machine learning; ones that rely more on mathematical insights and efficient algorithms than on brute-force scaling.
Prof. Ma also offered another memorable line:
"We developed mathematics to do science because natural language is ambiguous."
This comment is a useful reminder when thinking about large language models (LLMs), which dominated much of the conversation at ICLR this year. LLMs excel at language-related tasks, but whether they are the right path forward for domains that demand precision and rigor, such as mathematics and scientific modeling, remains a topic of debate. In fact, painting in broad strokes, ICLR this year often felt to me more like an LLM conference than a general deep learning conference.
As for my own research interests (smarter, more efficient sampling), diffusion samplers are currently very popular. These methods aim to transform noisy, random data into samples from a target distribution by gradually removing the noise.
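For a concrete picture of what "gradually removing the noise" means, here is a minimal toy sketch, deliberately not any particular published diffusion sampler: plain Langevin dynamics in NumPy, using the true score (gradient of the log-density) of a simple two-mode target. Real diffusion samplers replace this known score with one learned by a neural network across noise levels, but the basic idea of nudging noise toward the target is the same.

```python
import numpy as np

def score(x):
    """Gradient of the log-density (the 'score') of an equal two-component
    Gaussian mixture centred at -2 and +2; normalizing constants cancel."""
    d1 = np.exp(-0.5 * (x - 2.0) ** 2)
    d2 = np.exp(-0.5 * (x + 2.0) ** 2)
    return (-(x - 2.0) * d1 - (x + 2.0) * d2) / (d1 + d2)

def langevin_denoise(n_samples=2000, n_steps=500, step=0.05, seed=0):
    """Start from broad Gaussian noise and repeatedly nudge the samples
    toward the target with noisy gradient steps (Langevin dynamics)."""
    rng = np.random.default_rng(seed)
    x = 4.0 * rng.standard_normal(n_samples)        # pure noise to begin with
    for _ in range(n_steps):
        x += step * score(x) + np.sqrt(2.0 * step) * rng.standard_normal(n_samples)
    return x

samples = langevin_denoise()
print(samples.mean(), samples.std())  # mean near 0, samples split across the two modes
```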
My collaborators and I have recently approached the sampling problem from a different angle. We use graph neural networks to learn a discrete transformation: a mapping from an input node set to a low-discrepancy, representative node set with respect to a target distribution. This framework, part of our Message-Passing Monte Carlo (MPMC) method, emphasizes structure and efficiency.
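For readers unfamiliar with the term, "low-discrepancy" simply means the points cover the space far more evenly than random points do. As a toy illustration of the kind of objective such a method can target (not the exact training loss in the MPMC paper), here is a short NumPy implementation of one classical measure, the L2 star discrepancy on the unit cube via Warnock's closed-form formula; a learned model of this kind tries to output point sets that drive such a measure down.

```python
import numpy as np

def l2_star_discrepancy(points):
    """Warnock's closed-form expression for the L2 star discrepancy of a
    point set in the unit cube [0, 1]^d; lower means more evenly spread."""
    n, d = points.shape
    term1 = (1.0 / 3.0) ** d
    term2 = (2.0 / n) * np.prod((1.0 - points ** 2) / 2.0, axis=1).sum()
    pairwise_max = np.maximum(points[:, None, :], points[None, :, :])  # (n, n, d)
    term3 = np.prod(1.0 - pairwise_max, axis=2).sum() / n ** 2
    return np.sqrt(term1 - term2 + term3)

rng = np.random.default_rng(0)
random_points = rng.random((128, 2))        # i.i.d. uniform points for comparison
print(l2_star_discrepancy(random_points))   # a learned point set would aim for a smaller value
```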
At the Frontiers in Probabilistic Inference workshop at ICLR, we presented an extension of this work: Stein-MPMC, which incorporates Stein discrepancies into the message-passing framework in order to sample from nonuniform target densities that are known only up to a normalizing constant.
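For intuition on why "known only up to a normalizing constant" is enough, here is a toy NumPy sketch of a kernelized Stein discrepancy, a standard quantity of this kind: it measures how well a set of points matches a target density while only ever touching the gradient of the log-density, so the unknown constant drops out. The Gaussian RBF kernel and fixed bandwidth below are illustrative choices, not the exact construction used in Stein-MPMC.

```python
import numpy as np

def kernelized_stein_discrepancy(samples, score, bandwidth=1.0):
    """V-statistic estimate of the kernelized Stein discrepancy with a
    Gaussian RBF kernel; 'score' returns grad log p(x) row-wise, so the
    normalizing constant of p is never needed."""
    x = np.asarray(samples, dtype=float)
    n, d = x.shape
    s = score(x)                                    # (n, d) scores at the samples
    diff = x[:, None, :] - x[None, :, :]            # pairwise differences, (n, n, d)
    sq = (diff ** 2).sum(-1)                        # squared distances, (n, n)
    h2 = bandwidth ** 2
    k = np.exp(-sq / (2.0 * h2))                    # kernel matrix
    grad_x_k = -diff / h2 * k[..., None]            # gradient w.r.t. the first argument
    grad_y_k = -grad_x_k                            # gradient w.r.t. the second argument
    term1 = (s @ s.T) * k
    term2 = np.einsum('id,ijd->ij', s, grad_y_k)
    term3 = np.einsum('jd,ijd->ij', s, grad_x_k)
    term4 = (d / h2 - sq / h2 ** 2) * k             # trace of the mixed second derivative
    return np.sqrt((term1 + term2 + term3 + term4).mean())

# Toy check against a standard normal target, whose score is simply -x.
rng = np.random.default_rng(0)
good_samples = rng.standard_normal((200, 2))
bad_samples = good_samples + 2.0                    # clearly mismatched samples
normal_score = lambda x: -x
print(kernelized_stein_discrepancy(good_samples, normal_score),
      kernelized_stein_discrepancy(bad_samples, normal_score))  # the second should be larger
```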
While it's still an open question which sampling approaches — diffusion models, flows, our graph-based methods, or something else entirely — will ultimately prove most effective, it is clear that sampling and machine learning are converging in exciting ways.
And perhaps as a last takeaway, there is a renewed openness in the community to rethinking scale: to recognize that true intelligence, whether natural or artificial, might emerge not from scaling, but from smart design, robust mathematics, and creativity!
Artificial intelligence is moving fast. Published in April 2025, a scenario called "AI 2027" imagines where it might take us in just two years, how we can make the most of it, and the risks that should be mitigated.
_____________________________________________________________________________________________
In just a few short years, artificial intelligence has gone from a futuristic idea to something millions of people use every day in forms such as chatbots, coding assistants and image generators. But what happens if this progress doesn’t slow down...and instead speeds up?
That's the question behind AI 2027, a scenario published by a group of researchers in April 2025 that tries to imagine what the world might look like if AI continues advancing rapidly. Their conclusion: by 2027, we could have AI systems more capable than the best human experts, not just in writing or coding, but in research, decision-making, and learning.
While that might sound like science fiction, the creators of the scenario emphasize that this isn't a prediction; it's a possibility, and one worth taking seriously.
What might the scenario look like?
The scenario describes a world where AI agents become powerful collaborators for humans. These aren’t just tools, they’re partners, able to read documents, write code, discover new materials, test scientific hypotheses, and explain their thinking along the way.
Here are some of the possibilities AI 2027 explores:
In education, AI tutors help students learn at their own pace, adapting to each learner’s strengths and needs. Every child could have a “personal teaching assistant” available 24/7.
In business and commerce, AI systems handle market analysis, operate in financial markets and manage portfolios, automate entire supply chains (from marketing and sales through to manufacturing), and help companies launch new products that better address customer needs and demand. Strategic decisions are informed by AI that can simulate outcomes, flag risks, and optimize for long-term goals.
In science and research, AI accelerates discovery by generating new theories, analyzing huge datasets, and designing experiments. Fields like drug development, climate modelling, and materials science leap forward.
In everyday life, AI assistants become smarter and more useful, helping with everything from scheduling to personal medical monitoring to learning new skills.
It’s a vision filled with potential: a world where humans and machines collaborate to solve problems faster, learn more efficiently, and build more resilient systems.
It's not without risks...
With rapid progress come important questions. The AI 2027 scenario doesn't ignore risks; it highlights the need for careful planning, clear oversight, and shared responsibility.
Some areas that deserve particular attention:
In education, how do we ensure AI enhances learning rather than replacing human interaction? How do we teach students both with AI and about AI, so they grow up empowered by it?
In business and commerce, which industries will change first? How do we prepare these workforces for rapid transformation? What kinds of new jobs might emerge and what values should shape the companies of the future?
In science and research, if AI can generate hypotheses or design molecules, how do we maintain scientific integrity and reproducibility? How do we combine human insight with machine speed responsibly?
In governance, who decides how AI is used and who benefits? How do we avoid a world where access to powerful AI systems is limited to a few? How do we protect against misuse while encouraging positive innovation?
These aren't reasons to stop progress; they're reasons to proceed thoughtfully. The power of AI will be shaped not just by what it can do, but by the values, policies, and choices we build around it.
An AI future is inevitable, but that isn't necessarily a bad thing
As described in the AI 2027 scenario, we could soon be living in an AI-dominated world: not one filled with flying cars and robots, but one not all that different from the world we live in now. Hopefully, it is a world where children have better access to knowledge, where scientists make discoveries that save lives, where small businesses can compete globally with the help of smart tools, and where decisions are better informed.
It's not guaranteed, but it is possible.
_______________________________________________________________________________________________________________
Visit ai-2027.com for further reading.
Artificial intelligence is rapidly reshaping the world around us, from how we communicate and work to how we learn. This transformation has the potential to fundamentally change many areas of life, and one particularly important area is education.
For students and educators alike, AI offers unprecedented opportunities, but it also poses serious questions about integrity and the future of human connection in learning.
_______________________________________________________________________________________________
A Tutor That Never Sleeps
One of AI's most compelling promises is personalization. Tools like Microsoft Copilot and AI-powered chatbots give students instant, tailored support for their studies, whenever and wherever they need it. In a recent study, university students who used an AI assistant improved their exam scores by nearly 10% compared with their peers. One imagines that these tools help students brainstorm ideas, simplify complex material, and receive immediate feedback, acting like a 24/7 tutor.
Additionally, AI has been shown to boost student confidence and curiosity, particularly among those who may otherwise struggle to participate. For example, at Brisbane Catholic Education, at-risk students using Copilot Chat showed a 275% increase in learner agency.
Integrity in the Age of AI: Whose work is it, really?
Educators are becoming increasingly and understandably concerned about plagiarism and over-reliance. Students, on the other hand, worry about being falsely accused or losing the most rewarding aspects of learning, like the sense of achievement from solving a difficult problem on their own.
In my opinion, the key probably does not lie in banning AI completely, but rather in integrating it meaningfully. Students should learn to articulate an argument, outline a draft, and then refine it with AI as a thought partner, not as a ghostwriter. When used this way, AI doesn’t replace learning but acts as an assistant.
Lessons from Lockdown: The Human Element Matters
We (hopefully!) learned during the COVID-19 pandemic that education without human connection can be isolating. Online classes, while logistically efficient, often left students disengaged and unsupported. There is a real risk that poorly integrated AI systems could repeat this mistake by further distancing learners from teachers and peers.
Schools should therefore explore AI as a complement to learning, not a replacement. From personal experience, when used in groups or classrooms, AI can actually spark conversation and collaboration. At the same time, the need for thorough training and strict guidelines is clear. Despite the widespread use of AI in schools, fewer than half of educators and students report receiving any formal instruction on how to use it responsibly. As AI tools become more powerful, equipping students and teachers with the right skills and ethical frameworks will be essential.
What lies ahead?
When implemented correctly, I believe AI has the potential to transform the education experience into something more inclusive, dynamic, and personal than ever before. It can lighten workloads for both educators (giving them more face-to-face time with their pupils) and students (allowing more time to explore adjacent topics independently), support learners with diverse needs, and even turn classroom data into insights for teachers, for example by identifying students who are falling behind earlier. But it must be done with care.
I believe successful adoption will depend on more than just having the right AI tool. It will require clear guidance on ethical use of these tools, well-designed training for students and teachers, and thoughtful policy.
Ultimately, the goal isn't to automate education, but rather to enhance it.