July 2025
When people ask me how I became a statistician, I'm tempted to say it happened three times over. There was the formal training: the undergraduate degree, the PhD, the postdoc. Then there was the civil service apprenticeship, where I learned what statistics actually mean to people making policy decisions. And finally, there was industry, where I discovered that all my careful theory could be brought to its knees by a computer running out of numbers.
Each phase taught me something fundamental that no single institution could have provided alone. Together, they shaped not just my technical skills, but my understanding of what it means to be a professional statistician: someone who bridges the divide between expert knowledge and analytical application, rather than simply running analyses in isolation.
The six years after my undergraduate studies were when I became a statistician in the academic sense. I learned to develop novel statistical methods within a Bayesian framework. I became deeply specialised in a very niche area. This depth of expertise was essential. You cannot contribute meaningfully to statistical methodology without it. But specialism alone doesn't make you a professional statistician. It makes you a specialist.
The academic environment taught me precision, rigour and the importance of theoretical foundations. It gave me the tools I would need for everything that followed. But it couldn't teach me how those tools and philosophies would need to flex and adapt when confronted with messy reality.
When I joined the Central Science Laboratory as a civil servant, I entered a different world entirely. For three years, I worked on developing new methodologies to characterise and propagate uncertainties in risk assessments. But the real education wasn't in the methods themselves. It was in learning how to make those methods matter.
Working alongside other scientists and policy specialists, I began to develop what I now recognise as consultancy skills. These are extraordinarily difficult to teach within an undergraduate mathematics programme, perhaps impossible. They're not about equations or algorithms. They're about understanding what questions people are actually asking beneath the questions they think they're asking. About recognising when a technically perfect answer will be useless because it doesn't address the decision at hand.
I learned to speak multiple languages: the language of scientific evidence, the language of policy constraints and the language of public communication. This was my apprenticeship in making statistics useful.
After a decent stint at the University of Leeds, I spent two and a half years at JBA Risk Management Ltd, modelling large-scale flood events. If the civil service taught me about communication, industry taught me about humility.
Industry made the difficulties of handling large datasets crystal clear in ways that academic papers never could. Hours spent keeping a close watch on RAM usage and worrying about integer overflow change a person. Working alongside software engineers also taught me to appreciate the value of careful coding and extensive testing. Statisticians are often taught to write code that works once, for one analysis. Software engineers write code that works repeatedly, for anyone, under any reasonable conditions.
This discipline (version control, unit testing, documentation, defensive programming) transformed how I approached statistical work. It was yet another apprenticeship, this time in the craft of reliable, reproducible analysis.
Since joining Durham in 2022, I've been able to bring together these varied experiences to help industry tackle their uncertainty quantification problems. The combination of methodological expertise, consultancy skills and practical industrial experience means I can understand both the technical challenges and the organisational contexts in which solutions need to work.
None of my three apprenticeships could have prepared me for this alone. The academic phase gave me technical depth. The civil service phase gave me communication skills and an understanding of decision contexts. The industry phase gave me practical discipline and respect for computational reality.
We cannot expect undergraduate programmes, or even PhD programmes, to produce fully formed professional statisticians. The consultancy skills needed to work effectively with experts from other domains, the ability to translate between technical precision and practical utility, the discipline of writing reliable code at scale: these can only be learned through experience.
But that experience needs to be varied. A career spent entirely in academia, or entirely in industry, or entirely in government, will develop certain skills at the expense of others. The most effective statisticians I know have worked across different sectors, accumulating varying forms of expertise and learning to see problems from multiple perspectives.
My own journey took me from university to civil service to university to industry and back to academia, with consultancy work throughout. Each transition felt, at the time, like starting over. But in retrospect, each was a continuation of the same long apprenticeship: learning what it means to do statistics that matters, in a world where the problems are never as clean as the textbooks suggest.