Ethan S. Lee

Heilbronn Research Fellow at the University of Bristol

About Me

I received my PhD in Mathematics from the University of New South Wales in January 2023, where I was supervised by Prof. Timothy Trudgian. I am currently a Heilbronn Research Fellow at the University of Bristol, where I am continuing my research in Analytic Number Theory, with particular focus on:

Beyond my research, which will be summarised below, I am:

I also have experience in teaching Mathematics, Computer Science, and Cyber Security to diverse audiences.

Outside of academia, I support Everton FC and enjoy playing games and exploring the countryside.

See my Abridged CV for a summary of my activities, positions, talks, etc.

Contact Me!

If you wish to contact me for any reason, please email me at:

Published Work

The following articles have been published in peer-reviewed journals:

Pre-prints (available on the arXiv)

Other Articles

A Frequently Asked Question

Why do I care about explicit results? Because a result that is asymptotically "worse" can have unexpected benefits over its asymptotically "superior" counterpart once you account for the size of the implied constants and the range over which the bounds hold. Sure, if you only care about the long-term behaviour of an error term, then you shouldn't care about explicit results. But if you also care (like me) about the behaviour of that same error term for reasonable x, say x < 10^{10000}, then you definitely should care about results that completely describe the error term (i.e. explicit results), because these clarify any dependencies and reveal how practical that error is. In some cases, explicit results are so practical that they can be used to inform computations that verify long-standing conjectures up to some height (such as the Goldbach conjecture). On the other hand, some implied constants are so large that the result is only decent when x is abominably large; in that case, we need to refine the constant, or else use an asymptotically inferior result with a better-controlled (i.e. explicit) error to obtain the application at hand for smaller x.
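To see the trade-off in numbers, here is a minimal Python sketch with purely hypothetical bounds (the constant 10^{100} and the exponents 1/2 and 3/5 are invented for illustration, not taken from any theorem): it compares an asymptotically superior error bound against an asymptotically inferior one and locates the crossover.

```python
# Toy comparison of two hypothetical error bounds:
#   A(x) = 10^100 * x^(1/2)   (asymptotically superior, huge constant)
#   B(x) = x^(3/5)            (asymptotically inferior, constant 1)
# We work with base-10 logarithms so astronomically large x cause no
# overflow.

def log10_A(log10_x: float) -> float:
    """log10 of A(x) = 10^100 * x^(1/2)."""
    return 100 + 0.5 * log10_x

def log10_B(log10_x: float) -> float:
    """log10 of B(x) = x^(3/5)."""
    return 0.6 * log10_x

for log10_x in (10, 100, 999, 1001, 2000):
    smaller = "B" if log10_B(log10_x) < log10_A(log10_x) else "A"
    print(f"x = 10^{log10_x}: the smaller bound is {smaller}")

# Solving 100 + 0.5*log10(x) = 0.6*log10(x) gives log10(x) = 1000:
# the "worse" bound B beats A for every x below 10^1000.
```

The numbers are made up, but the shape of the trade-off is exactly this: beyond the crossover the better asymptotics win, and before it the explicit constant is everything.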

An illustrative example: A classic problem is to find the shortest interval that contains at least one prime, but the answer depends on what you take "shortest" to mean. An interval of the form [x, x+o(x)] that contains a prime is called a short interval, because it is shortest in the asymptotic sense (i.e. in the long run), whereas an interval of the form [x, x+Cx], for some constant 0<C<1, that contains a prime is called a long interval. Finding the smallest value of C such that the long interval contains a prime for all x>Y is a classic problem for an explicit analytic number theorist. So, are long intervals necessarily worse, given that we can do better asymptotically? No! In fact, for "small" x, where small can mean x<10^{1000}, the best long intervals are shorter than the best short intervals in the numerical (or physical) sense, because the strongest short-interval theorems are only proved to hold for x far beyond that range.
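The sketch below (Python) makes this concrete. The constants are illustrative assumptions with the right general shape, not the current records: a Schoenfeld-style long interval [x, x + x/16597], valid from modest x onwards, against a Dudek-style short interval [x, x + 3x^{2/3}] that I assume here is only proved for x >= exp(exp(34)), which is roughly 10^{2.5 * 10^{14}}.

```python
# Illustrative comparison; the constants are assumed stand-ins with the
# right general shape, not exact record values:
#   long interval:  [x, x + x/16597], taken as valid for all modest x;
#   short interval: [x, x + 3 x^(2/3)], taken as valid only for
#                   x >= exp(exp(34)), i.e. roughly x >= 10^(2.5e14).
# We work with base-10 logarithms so that huge x cause no overflow.
from math import exp, log10

LOG10_SHORT_VALID_FROM = exp(34.0) * log10(exp(1))  # log10(exp(exp(34)))

def log10_long_length(log10_x: float) -> float:
    """log10 of the long-interval length C*x with C = 1/16597."""
    return log10_x - log10(16597.0)

def log10_short_length(log10_x: float) -> float:
    """log10 of the short-interval length 3 * x^(2/3)."""
    return log10(3.0) + (2.0 / 3.0) * log10_x

for log10_x in (100, 1000, 10**15):
    if log10_x < LOG10_SHORT_VALID_FROM:
        # The short-interval theorem is not proved at this height, so
        # the long interval is the shortest one we can actually use.
        print(f"x = 10^{log10_x}: only the long interval is proved")
    else:
        winner = ("short" if log10_short_length(log10_x)
                  < log10_long_length(log10_x) else "long")
        print(f"x = 10^{log10_x}: the {winner} interval is shorter")
```

For every x below that colossal threshold (which comfortably includes all x < 10^{1000}), the long interval is the shortest interval actually proved to contain a prime; only far beyond it does the short interval's better exponent take over.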