FAQ on "Quantum theory of superresolution for two incoherent optical point sources"
Common questions about our recent papers on superresolution:
What's the big deal?
Imagine taking a picture of two stars close together through a telescope. Back in the 19th century, scientists discovered that the wave nature of light causes the image of each star to blur, and the size of the blurred spot sets the fundamental resolution of a telescope.
In a seminal 1879 paper, Lord Rayleigh further suggested that, to see the two stars clearly, the spots in the picture should be separated at least by such a spot size. Even the Lord himself admitted that his criterion isn't precise, but scientists have since found Rayleigh's criterion to be an excellent rule of thumb for both telescopes and microscopes.
Even with the modern advent of rigorous statistics and image processing, Rayleigh's criterion remains a curse. When the image is noisy, as it necessarily is owing to the quantum nature of light, and the criterion is violated, the distance between the spots becomes difficult to estimate accurately. This is a big problem both in astronomy for stars and in microscopy for fluorescent particles.
Using quantum optics and quantum information theory, we have invented new optical devices that can determine the distance between two close light sources accurately without regard to Rayleigh's criterion. One of our proposed methods is called SPAtial-mode DEmultiplexing, or SPADE, which separates, or "demultiplexes," the incoming light into different channels. Another method exploits the interference between the incoming light and its spatially inverted version, which we call Super-Localization by Image-inVERsion interferometry, or SLIVER. These processes turn out to be extremely sensitive to the distance between the two light sources.
With SPADE or SLIVER, scientists will be able to measure the distance between two stars more accurately than ever before for astrometry, or do the same for fluorescent particles in microscopy. Despite the theoretical nature of our work, our proposed devices require current technology only and have been demonstrated experimentally.
For more in-depth commentaries, see the well-written blog post by Kendra Redmond on the APS Physics Central website, the excellent news article by Edwin Cartlidge on the IOP Physics World website, and the perceptive Viewpoint article by Gabriel Durkin on the APS Physics website.
What's more, we have recently generalized our theory to an arbitrary number of sources (see our papers #7, #9, and #11), showing that SPADE can substantially enhance the imaging of any subdiffraction object, not just two point sources. SPADE will enable scientists to learn much more about any celestial body or fluorophore cluster that is poorly resolved by conventional imaging; both observational astronomy and fluorescence microscopy should benefit. For a more accessible summary, check out another fine blog post by Redmond on our paper #11.
What is "Rayleigh's curse" and how is it different from Rayleigh's criterion?
Rayleigh's criterion is a heuristic definition of resolution for two point sources, and modern imaging research recognizes that it can be beaten using image processing (also called "deconvolution," "deblurring," or "denoising"). When photon shot noise is present, however, Rayleigh's criterion remains a problem for image processing, in the sense that violation of the criterion makes the error of estimating the distance between the sources skyrocket, as shown by the orange dash-dotted curve in the plot on the right. This is a known statistical phenomenon, valid for any "unbiased" estimator, discovered by Tsai and Dunn (1979), Bettens et al. (1999), Van Aert et al. (2002), and Ram, Ward, and Ober (2006). To be specific and to distinguish this statistical phenomenon from the heuristic Rayleigh's criterion, we call it "Rayleigh's curse" in our papers.
Our study shows that, contrary to earlier claims, Rayleigh's curse is not a fundamental limit, and quantum mechanics permits a much lower error, as shown by the flat blue line in the plot.
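For the statistically inclined, the curse is easy to reproduce numerically. The following sketch (our own illustration, not from the papers; a hypothetical Gaussian PSF of width sigma = 1 is assumed) computes the per-photon Fisher information for estimating the separation d by ideal direct imaging and compares it with the quantum limit of 1/(4 sigma^2) per photon, which SPADE and SLIVER can approach:

```python
import numpy as np

def gauss(x, mu, sigma=1.0):
    """Intensity point-spread function |psi|^2, assumed Gaussian."""
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def direct_imaging_fisher(d, sigma=1.0, h=1e-4):
    """Per-photon Fisher information for the separation d of two
    equally bright incoherent point sources under ideal direct imaging."""
    x = np.linspace(-12 * sigma, 12 * sigma, 24001)
    dx = x[1] - x[0]
    p = lambda s: 0.5 * (gauss(x, s / 2, sigma) + gauss(x, -s / 2, sigma))
    dp = (p(d + h) - p(d - h)) / (2 * h)  # numerical derivative w.r.t. d
    return np.sum(dp**2 / p(d)) * dx

qfi = 0.25  # quantum limit 1/(4 sigma^2) per photon, independent of d
for d in (0.01, 0.5, 1.0, 6.0):
    print(d, direct_imaging_fisher(d))  # vanishes as d -> 0; approaches qfi at large d
```

The vanishing Fisher information at small d is exactly Rayleigh's curse; the flat quantum limit is what the curve for our measurements follows.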
How do I understand this if I know classical optics only?
Consider the optical field generated by one source on the image plane:
If it is displaced by a distance of +d/2, the field can be approximated as the sum of the two fields shown above using the Taylor-series approximation. That means that the amplitude of the first-order odd mode is sensitive to the displacement, whereas the zeroth-order mode is somewhat insensitive.
Consider now the optical field generated by the other source:
It is displaced the other way, so now the amplitude of the first-order mode has a minus sign instead.
Here comes the first crucial point: Because the two sources are incoherent, the energy in the first-order mode is going to be the incoherent sum of the contributions from the two sources, i.e. it's going to be proportional to (d/2)^2 + (-d/2)^2 = d^2/2, and the minus sign doesn't matter. This means that the energy in the first-order mode is going to be sensitive to the distance between the two sources.
The second crucial point concerns the zeroth-order mode. It's not very sensitive to d, and when one performs direct imaging, this mode contains almost no signal and just contributes a background noise. SPADE or SLIVER on the other hand can effectively filter it out and measure only the first-order mode, so that the noise is reduced and the signal-to-noise ratio is much improved.
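The two points above can be checked with a small numerical sketch (our own illustration, assuming a hypothetical Gaussian PSF with width sig = 1), which computes the energy coupled into the zeroth- and first-order Hermite-Gaussian modes from two incoherent sources at +d/2 and -d/2:

```python
import numpy as np

sig = 1.0
x = np.linspace(-12, 12, 24001)
dx = x[1] - x[0]

def psi(s):
    """Field amplitude of the (assumed Gaussian) PSF displaced by s."""
    return (2 * np.pi * sig**2)**-0.25 * np.exp(-(x - s)**2 / (4 * sig**2))

hg0 = psi(0.0)           # zeroth-order mode
hg1 = (x / sig) * hg0    # first-order odd mode (already normalized for this PSF)

def mode_energy(mode, d):
    """Energy coupled into a mode from an incoherent 50/50 mixture
    of two sources at +d/2 and -d/2."""
    return 0.5 * sum((np.sum(mode * psi(s)) * dx)**2 for s in (d / 2, -d / 2))

d = 0.1
print(mode_energy(hg1, d), d**2 / 16)  # first-order energy ~ d^2/16: sensitive to d
print(mode_energy(hg0, d))             # zeroth-order energy ~ 1: insensitive to d
```

The first-order mode carries essentially all the information about d, while the zeroth-order mode carries almost none and only adds background noise to direct imaging.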
For more technical details, if you know the semiclassical photodetection theory and statistics, see our paper #2 for an alternative analysis and discussion of SLIVER using the semiclassical theory of photodetection. Our papers #3 and #9 describe a semiclassical Poisson model that is a bit less general but leads to pretty much the same results as the quantum theory.
Now let's move on to what happens with multiple sources (papers #7 and #9). Each source with intensity I_n and displacement d_n makes a contribution ~I_n d_n^2 to the energy in the first-order mode, so the energy in the mode coming from the multiple incoherent sources ends up being ~Σ_n I_n d_n^2, which is the second moment of the source distribution. By separating out the zeroth-order mode, which contributes only background noise, the second moment can be estimated much more accurately.
To measure the first moment of the source distribution, consider a mode that is the sum of the zeroth-order mode and the first-order mode, with a wavefunction given by
Think about the overlap between this wavefunction and the displaced point-spread function ψ(x-d/2) sketched above, and convince yourself that the coupling efficiency for each source into this mode is ~|1 + a d_n|^2, where a is a constant. The total energy in this mode coming from multiple sources is then ~Σ_n I_n |1 + a d_n|^2.
Consider now the energy in another mode with this wavefunction:
Now the coupling efficiency from each source into this mode is ~|1 - a d_n|^2, and the total energy is ~Σ_n I_n |1 - a d_n|^2. Subtracting the energy in the minus mode ~Σ_n I_n |1 - a d_n|^2 from the energy in the plus mode ~Σ_n I_n |1 + a d_n|^2 gives ~Σ_n I_n d_n, which is the first moment of the distribution. The noise for these two measurements is still dominated by the zeroth-order mode, so you don't actually gain an advantage and direct imaging can estimate the first moment just as well, but generalizing this concept to higher-order odd moments does result in significant advantages.
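Here is a small numerical sketch of the moment recipe (our own illustration, assuming a hypothetical Gaussian PSF of unit width and made-up intensities and displacements), showing that the energy difference between the plus and minus modes recovers the first moment, while the first-order-mode energy recovers the second moment up to a constant:

```python
import numpy as np

x = np.linspace(-12, 12, 24001)
dx = x[1] - x[0]

def psi(s):
    """Field amplitude of the (assumed Gaussian, unit-width) PSF displaced by s."""
    return (2 * np.pi)**-0.25 * np.exp(-(x - s)**2 / 4)

hg0 = psi(0.0)
hg1 = x * hg0                      # first-order mode (normalized for this PSF)
plus = (hg0 + hg1) / np.sqrt(2)    # the "1 + a d_n" mode
minus = (hg0 - hg1) / np.sqrt(2)   # the "1 - a d_n" mode

# made-up incoherent sources: intensities I_n at displacements d_n
I = np.array([1.0, 0.5, 2.0])
d = np.array([0.05, -0.02, 0.03])

def energy(mode):
    """Total energy coupled into a mode from the incoherent sources."""
    return sum(In * (np.sum(mode * psi(dn)) * dx)**2 for In, dn in zip(I, d))

print(energy(plus) - energy(minus), np.sum(I * d))  # ~ first moment
print(energy(hg1), np.sum(I * d**2) / 4)            # ~ second moment / 4
```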
To access arbitrary moments, the displaced point-spread function should be expanded not just in the first order but in arbitrary order and the modes to be measured look more complicated, but the basic concept remains the same: a moment of the object distribution can be measured by projecting in the right modes, while the background noise for the measurement of each moment can be much reduced if we can separate out the irrelevant modes.
If this is just classical sources, linear optics, and photon counting, why do you need quantum mechanics at all?
- Our quantum bound serves as a fundamental limit for any measurement allowed by quantum mechanics.
- If a measurement method, such as ours, achieves the quantum bound, it is optimal and you can't do better than that.
- The quantum theory ensures that our assumptions are rigorous and the proposed measurements are physically realizable.
- We originally discovered everything through the quantum calculations; they are powerful theoretical tools for discovering what's fundamentally possible and what's not.
But your measurement methods can still be explained by a semiclassical theory. Are you sure people haven't thought of this in classical optics?
I have studied classical and quantum imaging for over ten years and never seen anything quite like our present work. The most relevant papers we have found in classical optics are Tsai and Dunn (1979), Bettens et al. (1999), Van Aert et al. (2002) (in the context of electron microscopy, although their mathematical model is applicable to optics as well), and Ram, Ward, and Ober (2006). They all studied the detrimental effect of Rayleigh's criterion on statistical estimation in conventional imaging, i.e., their results all suffer from Rayleigh's curse.
SLIVER is based on an image-inversion interferometer that was proposed and demonstrated by Wicker et al. in Rainer Heintzmann's group. Until our work, however, no one, including Heintzmann's group, had done any statistical analysis of that scheme or recognized its extraordinary accuracy in estimating the separation between two sources.
SPADE is a well studied technique in optical communication, but we are the first to recognize its usefulness for incoherent imaging.
How is this different from phase-contrast microscopes or holography?
Those techniques work on coherent sources only, meaning that you need a laser and coherent scattering of the laser light by the object. That is obviously impossible to do for astronomy, and even for microscopy, fluorescent particles are a lot more convenient to use as they can be attached to interesting stuff deep inside a biological sample. Starlight and fluorescence are incoherent sources, meaning that the phase is random at the source. Our techniques work for incoherent sources because the light after diffraction does pick up spatial coherence, as is well known in the context of the Van Cittert-Zernike theorem. The surprise here is that, even after focusing by a lens, there's still a bit of coherence on the image plane, and you can do a lot better than just measuring the intensity there.
How do your methods compare with PALM/STORM/STED?
For PALM/STORM/STED, you need to control the emission of the fluorophores to make sure that, in each image, only a sparse subset of fluorophores are emitting and they don't violate Rayleigh's criterion. These methods obviously don't work for astronomy or passive remote sensing, and even if you can do them in molecular microscopy, they are a bit slow. The biggest advantage of our methods is that they are pure far-field techniques and don't require control of the light sources or proximity to them.
How are your methods related to stellar interferometry?
Conventional wisdom suggests that stellar interferometers are useful for mitigating the effect of atmospheric turbulence, but they can't compete with direct imaging under the diffraction limit. To quote Joseph Goodman, Statistical Optics,
The reader may well wonder why the Fizeau stellar interferometer, which uses only a portion of the telescope aperture, is in any way preferred to the full telescope aperture in this task of measuring the angular diameter of a distant object. The answer lies in the effects of the random spatial and temporal fluctuations of the earth's atmosphere ("atmospheric seeing"), which are discussed in more detail in Chapter 8. For the present it suffices to say that it is easier to detect the vanishing of the contrast of a fringe in the presence of atmospheric fluctuations than it is to determine the diameter of an object from its highly blurred image.
However, it is important to remember that the imperfect beam patterns of sparse-aperture interferometers exact a sensitivity penalty as compared with filled-aperture telescopes, even after accounting for the differences in collecting areas. [Emphasis mine]
Our work, on the other hand, shows that linear optical methods can in fact be superior to direct imaging even on a fundamental level.
I have a problem with your definition of "resolution."
The word "resolution" means many different things to different people. We take it literally: the process or ability to resolve. Reducing the uncertainty and improving the accuracy of parameter estimation can certainly be regarded as an act of resolving. Related prior work uses similar terminology.
At the end of the day, the purpose of imaging is to learn more about the object. Statistics, by studying how close one can estimate the unknown parameters of the object, is the most rigorous and useful way of quantifying this knowledge. This is why we believe that a statistical definition of resolution is the right way, while other concepts, such as spatial frequency bandwidth and image sharpness, are beside the point, as they are properties of the optical waves and not directly related to the object itself.
What is the experimental progress so far?
Please see the following papers:
What source should I use in an experimental demo?
Any typical single-photon or thermal source, such as quantum dots, fluorescent molecules, or even SPDC, should do. If SPDC is used, the two sources must obviously be designed to satisfy our assumptions, i.e., they must not be entangled or otherwise correlated.
Our paper #3 shows that laser sources can also work, as long as the Poisson model is valid.
How about stars?
Our theory assumes a diffraction-limited imaging system, so atmospheric turbulence might be an issue for ground-based telescopes. You can, however, get close to the diffraction limit if the aperture size is small enough, you have a space telescope, or your adaptive optics is good enough. The Large Binocular Telescope (LBT) in Arizona, for example, can get pretty close to the diffraction limit, while the Giant Magellan Telescope (GMT), the Thirty Meter Telescope (TMT), and the European Extremely Large Telescope (E-ELT) will all be diffraction-limited.
What's the simplest setup you can think of?
For spatial-mode demultiplexing (SPADE), see Figure 7 of our PRX. It is absolutely essential that you count the photons in the leaky modes as well, or it's not going to work well. SLIVER can work similarly well and is probably even easier to implement, as it does not need to be tailored to the point-spread function. See also the experimental papers above for variations of our proposals.
How sensitive is this to the centroid if you don't know it exactly?
The performance will be less ideal if the center of the device is not aligned exactly with the centroid of the two sources. However, the centroid is a lot easier to locate using direct imaging and doesn't suffer from Rayleigh's curse, and our study (Appendix D of our PRX) suggests that, as long as the misalignment is small relative to the width of the point-spread function, there is still a substantial improvement in estimation accuracy over conventional imaging. The misalignment can be reduced by splitting part of the beam for conventional imaging and using the centroid estimate there for alignment control, or by scanning the device across the image plane to look for the centroid first.
Chrostowski et al. also studied this multi-parameter estimation problem. They essentially showed that you do need to invest some overhead photons to estimate the centroid first if the measurement is restricted to linear optics and photon counting, though the overhead is not severe in an asymptotic sense. More excitingly, they suggested that a general collective measurement over all the photons can estimate both the centroid and the separation simultaneously at the quantum limit we suggested, at least in principle. Parniak et al. (2018) have even done a proof-of-concept experiment based on Hong-Ou-Mandel interference.
Another scenario that favors our schemes is the measurement of the relative motion of binary stars, which is usually a lot faster than the motion of their center of mass. This means that one has a lot more time and a lot more photons to determine the centroid accurately for alignment.
How sensitive is your theory to the shape of the point-spread function (PSF)? It seems to assume a Gaussian PSF a lot.
The quantum bound is valid for any spatially invariant PSF with constant phase, and we know that a single-parameter QCRB is asymptotically achievable in principle courtesy of Nagaoka, Hayashi et al. and Fujiwara. Binary SPADE for other PSFs is discussed in Sec. VI of our PRX. Rehacek et al. (2016) and Kerviche et al. (2017) have proposed other ways to design SPADE for arbitrary PSFs. SLIVER works for any circularly-symmetric point-spread function in 2D. Our paper #9 shows how SPADE can be generalized for moment estimation with a large class of PSFs in 2D.
How about multiple sources or sources with unequal intensities (e.g., exoplanet and star)?
See our papers #7, #9, #11, and #14 for a generalization of our theory to any subdiffraction object that consists of an arbitrary number of incoherent point sources. These papers show that SPADE can enhance the estimation of the second or higher-order moments of the distribution and that it is close to quantum-optimal. See also Dutton et al. (2018) on object-size estimation and Zhou and Jiang (2018).
Rehacek et al. (2017) studied the case of two unequal sources here and here; see also Prasad (2019). They showed that it is still possible to obtain a significant enhancement for separation estimation via a suitable quantum measurement. Both the quantum performance and the direct-imaging performance take a hit when the two sources are unequal, however.
How sensitive is your method to intensity fluctuations, i.e. fluctuations in epsilon?
As long as you can count all or most of the photons from all the output channels of the device, you can use the total photon number as an estimate of the intensity and a normalization factor for the estimator, so it's robust against intensity fluctuations.
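A toy simulation of this normalization trick (our own illustration, assuming a hypothetical Gaussian PSF with unit width, two equally bright sources separated by d = 0.2, and ideal photon counting in the Hermite-Gaussian channels):

```python
import numpy as np

rng = np.random.default_rng(0)
sig, d = 1.0, 0.2
# per-photon probability of landing in the first-order (HG1) channel;
# for a Gaussian PSF this is (d/4sig)^2 * exp(-d^2/(16 sig^2))
q = (d / (4 * sig))**2 * np.exp(-d**2 / (16 * sig**2))

estimates = []
for _ in range(200):
    N = rng.poisson(rng.uniform(0.5, 2.0) * 1e6)  # total flux fluctuates wildly
    n1 = rng.binomial(N, q)                       # photons counted in HG1
    estimates.append(4 * sig * np.sqrt(n1 / N))   # normalize by the total count N
print(np.mean(estimates))  # close to the true d = 0.2 despite the fluctuations
```

Because the estimator depends only on the fraction n1/N, the wild fluctuations of the total intensity drop out.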
How about unpolarized light or fixed but unknown polarizations?
SLIVER should work for any polarization or random polarizations, as it relies on the interference of a photon with itself. For SPADE, use weakly guiding waveguides, which are less sensitive to polarization, or use a polarizing beam splitter and send the two polarized beams to two SPADEs.
How about non-paraxial effects?
We studied quantum bounds for vectorial electromagnetic fields here, mainly for coherent sources, and there's not much surprise in terms of nonparaxial effects, so we don't expect those to affect our theory too much.
How about coherent sources?
An earlier paper of ours shows that two coherent sources must still suffer from Rayleigh's curse if they have the same phase. If they are pi out of phase, their separation becomes easy to estimate but the centroid becomes difficult. In general, the Fisher information matrix depends crucially on the relative phase of the sources. Overcoming Rayleigh's curse for coherent sources in general requires control of this relative phase for sub-Rayleigh emitters, which is not easy in practical applications. If you can control the phase, however, the optimal measurement is much simpler: heterodyne/digital holography will do.
Larson and Saleh (2018) studied partially coherent sources, but reached a somewhat different conclusion. We do not agree with their analysis and conclusion, as explained in our paper #12. See also Lee and Ashok (2019), whose results agree with ours.
How about 2D/3D/4D imaging?
See our papers #6, #7, and #9 for the 2D theory. Quantum limits to the 3D localization of one point source were studied by Tsang (2015) for vectorial electromagnetic fields and by Backlund et al. (2018) with a scalar-wave approximation. Napoli et al. (2018) and Yu and Prasad (2018) (see also this and this) studied the quantum limits to the estimation of angular and axial positions of two sources. Donohue et al. (2018) did an experiment on time/frequency estimation.
Why do you have to use this complicated theory of statistics? There have been many superresolution proposals based on electromagnetism alone, such as metamaterials and superoscillation.
Without using statistical inference, there would be no objective, rigorous way to quantify the accuracy of your imaging protocol; anything else you do would just be a glorified version of Photoshop. Proper statistical analysis is especially important for astronomy and fluorescence microscopy, where the number of detected photons is so low and the signal is so weak. This is why the Cramer-Rao bound has become the standard precision measure in fluorescence microscopy [see, for example, Deschout et al., Nature Methods 11, 253 (2014) and Chao et al., JOSA A 33, B36 (2016)] and a proper statistical analysis has become an essential part of research by people who know what they are doing [see, for example, Shechtman et al., Phys. Rev. Lett. 113, 133902 (2014), Legant et al., Nature Methods 13, 359 (2016), and Balzarotti et al., Science (2016)].
To quote Goodman again,
The statistical approach is indeed somewhat more complex than the deterministic approach, for it requires knowledge of the elements of probability theory. In the long run, however, statistical models are far more powerful and useful than deterministic models in solving physical problems of genuine practical interest.
To quote Brad Efron,
Statistics has been the most successful information science. Those who ignore statistics are condemned to reinvent it.
And to quote this Zeiss advertisement,
Resolution is meaningless without good SNR.
This is also why metamaterials, superoscillation, and multiphoton-coincidence techniques (as well as gazillions of other superresolution proposals) don't really work in practice, as they lose too many photons or introduce too much noise to achieve a useful signal-to-noise ratio for non-laser sources. All their experimental demonstrations used lasers, which are way more intense. Even if you can see an apparent improvement in image quality, the biggest question is whether software deconvolution algorithms can do just as well or even better, and this can be answered only if you do a proper statistical analysis.
Why not study this in terms of binary hypothesis testing (one source vs two sources)?
Helstrom studied this binary hypothesis testing problem in terms of his quantum bound, but he didn't propose a concrete optical measurement setup to attain the bound. And he assumed a given separation in the two-source hypothesis, which is somewhat artificial. The separation is usually unknown and needs to be estimated in the first place. Krovi, Guha, and Shapiro also studied this problem recently using the quantum Chernoff bound, but they also assumed a given separation.
Our paper #8 shows that SPADE and SLIVER can also be used to perform this detection optimally and much more accurately than direct imaging without the separation being given. In the event of a successful detection of two sources, our measurements can also give an accurate estimate of the separation as a bonus.
In your simulations, why do the errors violate the Cramer-Rao bounds?
The Cramer-Rao bounds are valid only for "unbiased" estimators, and the maximum-likelihood estimator we used is actually biased for finite samples. It is possible to generalize our theory to deal with biased estimators as well if we adopt a Bayesian/minimax approach; please see our paper #5 for details. From the minimax perspective, there is still a significant performance gap between direct imaging and our techniques even if biased estimators are allowed. See also the discussion by Tham et al. on biased estimators.
Despite the unbiased-estimator assumption, the Cramer-Rao bound remains the standard precision measure in the microscopy and astronomy literature; it has nice asymptotic properties and is a decent approximation of the estimation errors in our case. That's why we focus on it and not on more advanced statistical concepts.
Hasn't Carl Helstrom already done this kind of theory?
No. He studied mostly one point source, and for two sources he studied them only in the context of binary hypothesis testing, assuming that the separation between the two sources is given. Helstrom himself was quite pessimistic about his proposed measurement:
The optimum strategies required in order to attain the minimum error probabilities calculated here require the measurement of certain complicated quantum-mechanical projection operators, which, though possible in principle, cannot be carried out by any known apparatus.
In reality, you usually don't know the separation to begin with and need to estimate it first. We were able to solve the parameter estimation problem because we managed to simplify the problem enough (Sec. II of our paper) to use the explicit formula for the quantum Cramer-Rao bound (see Chap. VIII.4 of Helstrom, Quantum Detection and Estimation Theory). Moreover, most people nowadays had simply assumed that nothing new could be done with classical sources.
How about the prior work on the quantum theory of superresolution by Kolobov and coworkers?
They never did anything about incoherent sources. Rayleigh's criterion is defined in terms of incoherent sources. All of their papers concern laser or squeezed light and are irrelevant to stars or fluorophores. The only place where they even mentioned incoherent sources is the last sentence in Vladislav N. Beskrovny and Mikhail I. Kolobov, Phys. Rev. A 78, 043824 (2008):
The second generalization of the quantum theory of superresolution presented in this paper is from coherent imaging into partially coherent and fully incoherent cases. This problem is very challenging and will be addressed in forthcoming publications. [Emphasis mine. They still haven't published anything on incoherent light.]
What about Kellerer and Ribak, Opt. Lett. 41, 3181 (2016)?
They claimed that an optical amplifier together with quantum non-demolition measurement and post-selection can enhance the estimation of the position of a thermal source beyond the diffraction limit, but their claim is bogus:
- First of all, it has been known since at least the 1960s that the root-mean-square error of estimating the location of one point source from direct imaging with Poisson noise is σ/√N, where σ is the width of the point-spread function and N is the received photon number. In other words, you can enhance the localization accuracy way beyond the diffraction limit simply by direct imaging (with image processing) and having more photons. This fact is especially well known in fluorescence microscopy. Kellerer and Ribak, on the other hand, ignored the possibility of enhancement by image processing, so their comparison is not fair.
- Second, Helstrom derived in 1970 the quantum limit to the localization of one thermal point source for any measurement, and it is also σ/√N. This means that direct imaging is already the best you can do for one point source, and no other measurement, including the proposal by Kellerer and Ribak, can do better.
- Our proposals have absolutely no relation to theirs. We have done much more rigorous calculations using proper statistics, quantum optics, and quantum information to ensure that our proposals are physical, consistent with the fundamental laws of quantum mechanics, and able to beat image processing, not to mention that our proposals, based on passive linear optics and photon counting, are way more feasible to implement experimentally than a QND measurement.
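The σ/√N scaling in the first point is easy to verify numerically; here's a minimal sketch (our own illustration, assuming a Gaussian PSF and using the sample mean of the photon positions as the location estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, N, trials = 1.0, 1000, 2000

# each trial: N photon positions drawn from the Gaussian PSF intensity,
# with the source location estimated by the sample mean
errors = [np.mean(rng.normal(0.0, sigma, N)) for _ in range(trials)]
rmse = np.sqrt(np.mean(np.square(errors)))
print(rmse, sigma / np.sqrt(N))  # both ~ 0.032
```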
What about Rafsanjani et al., Optica 4, 487 (2017)?
See this comment by Lantz, who shows that their claim is invalid. They also assumed a thermal state with a high photon number per mode, which is not what actually happens in optical astronomy (see below).
How good is the epsilon << 1 approximation in practice? (epsilon is the average received photon number per optical mode.)
Extremely good at optical frequencies and one of the best approximations you will ever make in your life. epsilon is at most given by the blackbody occupancy number for a thermal source (epsilon ~ 0.01 at the surface of the sun, at 6000 K and 500 nm wavelength), and a lot less for fluorescent particles, which typically have a very low photon flux (~10,000/s) and short coherence time (~10 fs), giving epsilon ~ 1e-10. epsilon is further limited by the fraction of the aperture size relative to the optical coherence area, and that fraction is necessarily minuscule for telescopes and far-away stars. Please read Chap. 13 of Mandel and Wolf and especially Chap. 9 of Goodman, Statistical Optics. To quote Goodman,
A physical understanding of this result can be gained from the following considerations. If the count degeneracy parameter is much less than 1, it is highly probable that there will be either zero or one counts in each separate coherence interval of the incident classical wave. In such a case the classical intensity fluctuations have a negligible "bunching" effect on the photo-events, for (with high probability) the light is simply too weak to generate multiple events in a single coherence cell. If negligible bunching of the events takes place, the count statistics will be indistinguishable from those produced by stabilized single-mode laser radiation, for which no bunching occurs.
Here's another quote from Leonard Mandel, Proc. Phys. Soc. 74 233 (1959):
When the degeneracy is very small, p(n,T) simplifies very considerably... which is the classical Poisson distribution... This situation will generally apply when stellar sources are being studied. The light from these sources is always so weak that n ξ/T << 1 and the degeneracy is unlikely to be detected in measurements on a single beam. The situation is, of course, improved when correlation measurements are undertaken on two or more coherent beams (Hanbury Brown and Twiss 1956), since these measurements single out the degenerate photons (Mandel 1958). Even so it is unlikely that any faint stars could be studied in this way.
It is well established that the photon counts registered by the detectors in an optical instrument follow statistically independent Poisson distributions, so that the fluctuations of the counts in different detectors are uncorrelated. To be more precise, this situation holds for the case of thermal emission (from the source, the atmosphere, the telescope, etc.) in which the mean photon occupation numbers of the modes incident on the detectors are low, n << 1. In the high occupancy limit, n >> 1, photon bunching becomes important in that it changes the counting statistics and can introduce correlations among the detectors. We will discuss only the first case, n << 1, which applies to most astronomical observations at optical and infrared wavelengths.
For fluorescent sources, check out Pawley, ed., Handbook of Biological Confocal Microscopy, Ram, Ward, and Ober, PNAS 103, 4457 (2006), etc., which all use the Poisson model; you need the epsilon << 1 condition (i.e., negligible bunching/antibunching) for the Poisson model to hold.
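The back-of-the-envelope numbers above (epsilon ~ 0.01 for the surface of the sun, ~1e-10 for a fluorophore) can be checked in a couple of lines:

```python
import math

# blackbody mode occupancy at the solar surface (T = 6000 K, 500 nm)
h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI values of Planck, c, Boltzmann
lam, T = 500e-9, 6000.0
eps_sun = 1.0 / (math.exp(h * c / (lam * k * T)) - 1.0)
print(eps_sun)  # ~ 0.008

# fluorophore: photons per coherence time = photon flux * coherence time
eps_fluo = 1e4 * 10e-15
print(eps_fluo)  # ~ 1e-10
```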
What if epsilon is large?
In our paper #4, Ranjith managed to derive the quantum bound for thermal sources with arbitrary epsilon, and it is consistent with our earlier result under the small-epsilon approximation. He also showed that SPADE and SLIVER still work quite well. This is more relevant to longer wavelengths, e.g., terahertz and microwave radiation, and to scattered laser sources. See also related work by Lupo and Pirandola.
Appendix B of our paper #11 shows that the quantum Fisher information in the epsilon << 1 limit also serves as a looser quantum bound for thermal states with arbitrary epsilon, so the quantum limits we've derived are in fact valid for any thermal state.
Can I do heterodyne/homodyne/digital holography for epsilon << 1?
You shouldn't, because the field is in the vacuum state most of the time and is incoherent, which leads to overwhelming vacuum noise in dyne measurements. Our paper #10 shows that homodyne and heterodyne detection are much worse than direct imaging when epsilon << 1. For large epsilon (> 2), however, dyne measurements can have an advantage.
How about multiphoton coincidence measurements? I read somewhere that quantum optics is all about multiphoton coincidence.
With epsilon << 1, multiphoton coincidence events are very rare for thermal optical sources or fluorescent particles. Because they are so rare, the information they can provide is relatively little if your goal is imaging. Our proposals, on the other hand, require only time-integrated photon counting at each output and do not require the detection or postselection of multiphoton coincidence events.
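The rarity claim is simple arithmetic under the Bose-Einstein model: the per-mode probability of finding two or more photons is eps^2/(1+eps)^2, which scales as eps^2, versus roughly eps for a single photon. A quick sketch with an illustrative occupation number:

```python
def thermal_pn(n, eps):
    """Bose-Einstein probability of n photons in a mode with mean occupation eps."""
    return eps ** n / (1.0 + eps) ** (n + 1)

eps = 1e-3  # illustrative occupation for optical thermal/fluorescent light
p1 = thermal_pn(1, eps)
p2_plus = 1.0 - thermal_pn(0, eps) - p1  # = eps^2 / (1+eps)^2, of order eps^2
print(f"P(1 photon)   ~ {p1:.2e}")
print(f"P(>=2 photons) ~ {p2_plus:.2e} ({p2_plus / p1:.1e} of the single-photon rate)")
```

So at eps ~ 0.001, coincidence events occur roughly a thousand times less often than single-photon events, which is why coincidence-based schemes gather information so slowly in this regime.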
But how about Hanbury-Brown-Twiss intensity interferometry? I learned from my quantum optics course that it's a big deal in astronomy.
Hanbury-Brown-Twiss interferometry has been obsolete for decades in astronomy. Fundamentally, the SNR of intensity (two-photon) interferometry is simply too low compared with amplitude (single-photon) interferometry; see Chap. 9, Goodman, Statistical Optics. For example, Davis and Tango (1986) reported that
~40h of observations were required with the Narrabri instrument [referring to the HBT intensity interferometer], whereas <1h was needed to obtain comparable accuracy with the new prototype interferometer [referring to their amplitude interferometer].
There is some recent effort to revive intensity interferometry for astronomy, but it is motivated by technical and practical considerations unrelated to quantum optics.
What if I have other questions?
Please email me at mankei at nus dot edu dot sg.
Updates
- Sep 16, 2019: Added a bunch of references [our paper #14, Zhou et al. (2019), Paur et al. (2019) in the Experiments section, Bisketzi et al. (2019), Grace et al. (2019), Prasad (2019), Lee and Ashok (2019)].
- Jun 6, 2019: Added paper #13 and a reference on SPLICE [Bonsma-Fisher et al. (2019)].
- Mar 22, 2019: Publication of paper #12.
- Jan 3, 2019: Publication of paper #11.
- Dec 18, 2018: Added a paper on object-size estimation to the Generalizations section [Dutton et al. (2018)].
- Dec 7, 2018: Publication of paper #8.
- Nov 1, 2018: Added two papers on partially coherent sources [Larson and Saleh (2018) and our paper #12].
- Sep 14, 2018: Added two more experiment papers in the Experiments section [Hassett et al. (2018) and Paur et al. (2018)].
- Jul 16, 2018: Added a question about coherent sources in the Generalizations section.
- Jun 8, 2018: Added paper #11 on the quantum limit to moment estimation.
- Jun 1, 2018: Migrated to the new Google Sites and added another reference on 3D two-source localization [Yu and Prasad (2018)] in the Generalizations section.
- May 15, 2018: Added a reference on arbitrary objects [Zhou and Jiang (2018)] and a reference on angular/axial resolution [Napoli et al. (2018)] in the Generalizations section.
- May 13, 2018: Added a reference on a time-frequency-estimation experiment [Donohue et al. (2018)].
- Mar 28, 2018: Added another reference on two unequal sources in the Generalizations section [Rehacek et al. (2017)].
- Mar 25, 2018: Added a couple of misc. references in the Quantum Optics section.
- Mar 21, 2018: Added a reference on simultaneous centroid-separation estimation in the Generalizations section [Parniak et al. (2018)].
- Mar 10, 2018: Added a reference on 3D localization in the Generalizations section [Backlund et al. (2018)].
- Feb 21, 2018: Publication of paper #9.
- Dec 20, 2017: Publication of paper #10.
- Oct 5, 2017: Added two recent references in the Generalizations section [Chrostowski et al. (2017) and Rehacek et al. (2017)].
- Jun 30, 2017: Publication of paper #6.
- Jun 28, 2017: Added paper #10.
- Apr 6, 2017: Added an early reference on the two-source localization problem [Tsai and Dunn (1979)].
- Mar 28, 2017: Added paper #9.
- Mar 1, 2017: Publication of paper #7.
- Feb 19, 2017: Added an explanation of how SPADE works for multiple sources.
- Feb 18, 2017: Added a question about phase-contrast microscopes and holography and a question about the definition of resolution. Put some questions in a new Optics section.
- Feb 16, 2017: Updated some references.
- Dec 20, 2016: Added some references on diffraction-limited ground-based telescopes and the use of the Cramer-Rao bound in fluorescence microscopy.
- Nov 10, 2016: Added a more intuitive explanation for Q5 in the Introduction section.
- Nov 6, 2016: Publication of papers #3 and #4.
- Sep 13, 2016: Added paper #8 on detection of one-vs-two sources.
- Aug 30, 2016: Publication of paper #1: Physical Review X 6, 031033 (2016).
- Aug 24, 2016: Added paper #7 on general imaging.
- Jul 2, 2016: Added a BibTeX file for our papers.
- Jun 28, 2016: Added a 4th reference on another experiment.
- Jun 15, 2016: Added paper #5 by Ang et al. on the 2D theory and 3 references on experimental demonstrations.
- May 7, 2016: Added our latest paper on the quantum bound for general thermal sources.
- Apr 24, 2016: Added a mention of the relevant work by Mikhail Kolobov, Claude Fabre, and co-workers.
- Feb 16, 2016: Our new calculation using a Poisson model shows that our measurement schemes also work for laser sources.
- Feb 13, 2016: Our work on SLIVER is covered by Laser Focus World and published in Optics Express 24, 3684 (2016). Links have been updated.
- Dec 30, 2015: We have discovered an alternative measurement scheme called SLIVER (Super-Localization by Image-inVERsion interferometry) that works for most point-spread functions; see e-print arXiv:1512.08304. FAQ is also updated.
- Nov 5, 2015: First version.