This one-day workshop aims to bring together theoretical computer scientists whose research involves randomness and pseudorandomness. There are five invited talks and a session with contributed talks by junior researchers.
The venue is a 15-minute walk from Sheffield main train station.
Attendance is free of charge. However, registration is required (deadline 10th June 2026) to help us with logistics. You can register here.
We encourage junior researchers to submit a proposal for a short contributed talk on any topic in theoretical computer science; please use the registration form above to indicate your interest.
University of Warwick
TU Dortmund
Queen Mary University of London
University of Cambridge
TBC
Amin Coja-Oghlan - The random XORSAT decimation process
In this talk I present an analysis of Belief Propagation Guided Decimation, a physics-inspired message-passing algorithm, on random XORSAT. We derive an explicit threshold up to which the algorithm succeeds with strictly positive probability, but beyond which it fails with high probability. In addition, we analyse a thought experiment called the decimation process, for which we identify a (non-)reconstruction and a condensation phase transition. The main results of the present work confirm physics predictions from [J. Stat. Mech. 2009].
Mark Jerrum - Efficient sampling from high-dimensional distributions: perfect and local
Historically, the most important approach to sampling from complex high-dimensional distributions has been Markov chain simulation, which dates back at least to the 1950s. Such an approach provides a sample from a probability distribution arbitrarily close to the target one. Although the error in the approximation decays exponentially fast with the length of the simulation, it is still of interest to devise and analyse perfect sampling algorithms that match the target distribution exactly. Starting with Propp and Wilson's "Coupling From The Past" (CFTP), several such samplers are now known.
In this talk I seek to impose the condition of locality in addition to perfection. That is to say, given a large or even infinite problem instance, I aim to produce a portion of a perfect sample from the target distribution, as seen through a finite window onto the sample, in time linear in the size of the window. I will include joint work with Konrad Anand (Edinburgh).
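The CFTP idea mentioned above can be illustrated with a toy sketch (my own illustrative code, not from the talk): a monotone lazy random walk on {0, ..., n-1}, where coupled runs from the top and bottom states, restarted ever further into the past with the same reused randomness, yield an exact sample from the stationary (here uniform) distribution once they coalesce.

```python
import random

def step(x, u, n):
    # One lazy random-walk move on {0, ..., n-1}: down w.p. 1/4,
    # up w.p. 1/4, stay w.p. 1/2 (moves past the ends are suppressed).
    if u < 0.25:
        return max(x - 1, 0)
    if u < 0.5:
        return min(x + 1, n - 1)
    return x

def cftp_uniform(n, seed=0):
    # Coupling From The Past for the lazy walk above, whose stationary
    # distribution is uniform on {0, ..., n-1}.  The chain is monotone,
    # so it suffices to track the top and bottom starting states.
    rng = random.Random(seed)
    U = []          # randomness for past times; U[t] is reused every round
    T = 1
    while True:
        while len(U) < T:
            U.append(rng.random())
        lo, hi = 0, n - 1
        for t in range(T - 1, -1, -1):   # run from time -T up to time 0
            lo = step(lo, U[t], n)
            hi = step(hi, U[t], n)
        if lo == hi:                     # coalescence: an exact sample
            return lo
        T *= 2                           # otherwise look further into the past
```

The crucial point, and the reason the output is exact rather than approximate, is that each round reuses the same random numbers at the same past time steps, only extending further back.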
Thomas Sauerwald - Balanced Allocations: The Power of Choice versus Noise
In the balanced allocation problem we wish to allocate m balls (jobs) into n bins (servers) by allowing each ball to choose from some bins sampled uniformly at random. The goal is to maintain a small gap between the maximum load and average load. For the one-choice protocol, where each ball is allocated to a random bin, the gap diverges for large m. However, for the two-choice protocol, where each ball samples two bins and is placed in the least loaded of the two, it was shown more than 20 years ago that the gap is only O(log log n) for all m. This dramatic improvement is widely known as the "power of two choices", and similar effects have been observed in hashing and routing.
In this talk, we will present new results in settings where the load information is restricted or noisy. For example, the queried load information of bins might be (i) outdated, (ii) subject to some adversarial or random perturbation, or (iii) only retrievable by binary queries. We also exhibit settings with strong noise and demonstrate that having more choices can lead to worse performance.
This is based on joint work with Dimitrios Los and John Sylvester. More information and some illustrations of the results are available at: dimitrioslos.com/research/phd-thesis/index.html
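The one-choice versus two-choice contrast described in the abstract is easy to see empirically. The sketch below (my own illustrative code; function names are mine, and it only covers the noise-free setting) allocates m balls into n bins with d uniform choices per ball and reports the gap between the maximum and average load.

```python
import random

def allocate(m, n, d, seed=0):
    # Throw m balls into n bins; each ball samples d bins uniformly at
    # random (with replacement) and joins the least loaded of them.
    rng = random.Random(seed)
    loads = [0] * n
    for _ in range(m):
        i = min((rng.randrange(n) for _ in range(d)), key=lambda j: loads[j])
        loads[i] += 1
    return loads

def gap(loads, m, n):
    # Gap = maximum load minus average load m/n.
    return max(loads) - m / n

if __name__ == "__main__":
    m, n = 100_000, 100
    for d in (1, 2):
        g = gap(allocate(m, n, d, seed=1), m, n)
        print(f"d = {d}: gap = {g:.1f}")
```

With d = 1 the gap grows like the standard deviation of a bin's load, roughly on the order of sqrt((m/n) log n), whereas with d = 2 it stays O(log log n), independent of m, which is the "power of two choices" the talk builds on.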