About me
I am a research fellow in theoretical physics at the University of Oxford. The main focus of my research is quantum error correction and fault-tolerant quantum computing. Prior to this position I was a postdoctoral researcher at Station Q, Microsoft's quantum computing team; before that I spent four years as a PhD student in theoretical physics at the Institut de Physique Théorique, CEA Saclay, under the supervision of Hubert Saleur and Jesper Jacobsen. I am originally from Sweden, where I did my bachelor's studies at Umeå University before moving to France for my master's studies (M1 at Paris-Sud, M2 at École Normale Supérieure in Paris).
Introduction to my current work
Quantum computers promise to solve certain problems that are intractable for classical computers, such as simulating complex quantum systems for chemistry and materials science. Several types of hardware are currently being developed for quantum computers, but one thing they all have in common is that the qubits and gates are prone to errors. It is unlikely that hardware improvements alone will yield fidelities good enough to perform large-scale quantum computations.
Enter quantum error correction!
The idea is to encode the quantum information non-locally among many qubits, to protect against local errors. Error correction is a field that predates quantum computing -- it is also done for classical information. As a simple example: to send the bit string 010 over a noisy channel, we could instead send 000 111 000. This allows the receiver to detect and correct any single bit flip within each block of three through a majority vote (010 110 000 decodes correctly back to 010, for instance). With quantum information it gets slightly trickier, but it is still very doable. The real challenge is to devise quantum error-correcting codes that can operate at realistic noise levels, within the constraints of a given hardware, and with as low an overhead as possible.
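To make the classical example concrete, here is a minimal sketch in Python of the three-bit repetition code (the function names are my own, chosen purely for illustration):

```python
from collections import Counter

def encode(bits):
    """Repeat each bit three times: [0, 1, 0] -> 000 111 000."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Majority vote within each block of three, correcting any single flip per block."""
    blocks = [received[i:i + 3] for i in range(0, len(received), 3)]
    return [Counter(block).most_common(1)[0][0] for block in blocks]

# One bit flip in each of the first two blocks is corrected:
noisy = [0, 1, 0,  1, 1, 0,  0, 0, 0]
assert decode(noisy) == [0, 1, 0]
```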
If that goal can be achieved, logical qubits can be encoded in a large number of physical qubits using such a code, and algorithms are then written in terms of the logical qubits and the logical gates. By increasing the number of physical qubits, the fidelity of the logical qubits and logical gates can be boosted as much as necessary, and large-scale computations can be performed.
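To give a rough sense of the scaling (a standard heuristic, not specific to any one code): if the physical error rate p is below the code's threshold p_th, the logical error rate of a code with distance d is often modeled as

```latex
p_L \approx A \left( \frac{p}{p_{\mathrm{th}}} \right)^{\lfloor (d+1)/2 \rfloor},
```

where A is a constant prefactor. Since a larger distance d requires more physical qubits, adding qubits suppresses logical errors exponentially.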
We are currently at the stage of hardware development where interesting demonstrations of quantum error correction can be done, but not yet at the point where it can be used for large-scale computations. (Time of writing: 2024.) One such demonstration was recently achieved in a collaboration between Microsoft and Quantinuum that I have had the good fortune to be a part of -- the paper can be found at arXiv, while some of the reporting and discussion can be found at Microsoft's blog, Quantinuum's blog, Scott Aaronson's blog and The Quantum Insider.
Introduction to my PhD work
My PhD work concerns two-dimensional conformal field theories (CFTs) that are non-unitary. These are a particularly challenging class of CFTs, and they describe important problems such as percolation, polymers and topological insulators.
Conformal field theories are ubiquitous in theoretical physics due to the wide range of systems that feature scale invariance. Scale-invariant systems behave like a fractal, whose pattern repeats itself on every scale: zooming in or out, one sees exactly the same features. To capture the seemingly complex form of a fractal, one needs only a small part of the pattern and the rule for self-replication. Such self-similar behaviour (also called criticality or power-law behaviour) can be found in climate science, neurology, finance and especially in the critical systems and field theories important to condensed matter physics.
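To make "a small part of the pattern and the rule for self-replication" concrete, here is a toy sketch (my own illustrative example, not tied to any physical system) that builds the Cantor set by applying one simple rule over and over:

```python
def cantor(segments, depth):
    """One rule, applied repeatedly: replace each segment by its two outer thirds."""
    if depth == 0:
        return segments
    children = []
    for a, b in segments:
        third = (b - a) / 3
        children += [(a, a + third), (b - third, b)]
    return cantor(children, depth - 1)

# Three applications of the rule to the unit interval give 8 small segments,
# each a scaled-down copy of the whole construction:
print(cantor([(0.0, 1.0)], 3))
```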
Within quantum field theory, scale invariance often extends to invariance under any conformal, or angle-preserving, transformation. The theory is then called a conformal field theory. In two-dimensional systems the symmetry algebra of conformal transformations, the Virasoro algebra, is infinite-dimensional, placing strong constraints on the theory. These constraints make the algebra a powerful tool in the computation of correlation functions.
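For the mathematically inclined: the Virasoro algebra is spanned by generators L_n (one for each integer n) together with a central charge c, satisfying the commutation relations

```latex
[L_m, L_n] = (m - n)\, L_{m+n} + \frac{c}{12}\, m (m^2 - 1)\, \delta_{m+n,\,0}.
```

The infinitely many generators, rather than the finite handful available in higher dimensions, are what make the two-dimensional case so constrained.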
In the lattice models used in condensed matter physics, such as the Ising model of magnetism, there is a limit on how far one can zoom in -- as in real life. However, at the critical point of a phase transition, the microscopic details do not change the long-range behaviour of the correlation functions. The macroscopic behaviour of a critical lattice model is described by a conformal field theory. Different systems may even be described by the same conformal field theory, a feature called universality. This is why the exact same power laws can crop up in seemingly unrelated fields of science!
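As a concrete example of such a power law: at its critical point, the spin-spin correlation function of the two-dimensional Ising model decays as

```latex
\langle \sigma_0 \, \sigma_r \rangle \sim r^{-1/4}, \qquad r \to \infty,
```

where the exponent 1/4 = 2Δ is twice the scaling dimension Δ = 1/8 of the spin operator in the corresponding conformal field theory. Any other lattice model in the same universality class exhibits exactly the same exponent.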
Below is a picture exemplifying why we might want to think about non-unitary CFTs (percolation), and what I've spent much of the past few years doing (exploring representations of the Virasoro algebra).