posted Nov 5, 2015, 4:45 AM by Mankei Tsang
[updated Oct 10, 2017, 5:14 AM]

Common questions about our recent papers on superresolution [BibTeX file]:


For each paper we list: Dimensions | Sources | Optics | Statistics | Experimental proposals | First version | Publication | Popular coverage.

1. Mankei Tsang, Ranjith Nair, and Xiao-Ming Lu, "Quantum Theory of Superresolution for Two Incoherent Optical Point Sources," Physical Review X 6, 031033 (2016)
   1D | Weak thermal (optical frequencies and above) | Quantum | Fisher | SPAtial-mode DEmultiplexing (SPADE) | 2 Nov 2015 | 29 Aug 2016 | Follow this link

2. Ranjith Nair and Mankei Tsang, "Interferometric superlocalization of two incoherent optical point sources," Optics Express 24, 3684 (2016)
   2D | Thermal (any frequency) | Semiclassical | Fisher | SuperLocalization via Image-inVERsion interferometry (SLIVER) | 28 Dec 2015 | 12 Feb 2016 | Laser Focus World

3. Mankei Tsang, Ranjith Nair, and Xiao-Ming Lu, "Quantum information for semiclassical optics," Proc. SPIE 10029, Quantum and Nonlinear Optics IV, 1002903 (2016) [PDF]
   N/A | Weak thermal, lasers | Semiclassical | Fisher, hypothesis testing | N/A | 15 Feb 2016 | 3 Nov 2016

4. Ranjith Nair and Mankei Tsang, "Far-field Superresolution of Thermal Electromagnetic Sources at the Quantum Limit," Physical Review Letters 117, 190801 (2016) [PDF, Supplemental Material]
   1D | Thermal | Quantum | Fisher | SPADE, SLIVER | 4 Apr 2016 | 4 Nov 2016 | Editors' Suggestion

5. Mankei Tsang, "Conservative classical and quantum resolution limits for incoherent imaging," Journal of Modern Optics (2017) [PDF]
   1D | Weak thermal | Quantum | Bayesian, minimax | SPADE | 12 May 2016 | 18 Sep 2017

6. Shan Zheng Ang, Ranjith Nair, and Mankei Tsang, "Quantum limit for two-dimensional resolution of two incoherent optical point sources," Physical Review A 95, 063847 (2017) [PDF]
   2D | Weak thermal | Quantum | Fisher | SPADE, SLIVER | 2 Jun 2016 | 29 Jun 2017

7. Mankei Tsang, "Subdiffraction incoherent optical imaging via spatial-mode demultiplexing," New Journal of Physics 19, 023054 (2017) [PDF]
   2D | Weak thermal, arbitrary distribution of multiple sources | Quantum | Fisher | SPADE | 9 Aug 2016 | 28 Feb 2017

8. Xiao-Ming Lu, Ranjith Nair, and Mankei Tsang, "Quantum-optimal detection of one-versus-two incoherent sources with arbitrary separation," e-print arXiv:1609.03025 [PDF]
   2D | Weak thermal | Quantum | Hypothesis testing | SPADE, SLIVER | 10 Sep 2016

9. Mankei Tsang, "Subdiffraction incoherent optical imaging via spatial-mode demultiplexing: semiclassical treatment," e-print arXiv:1703.08833 [PDF]
   2D | Weak thermal, arbitrary distribution of multiple sources | Semiclassical | Fisher | SPADE | 26 Mar 2017

10. Fan Yang, Ranjith Nair, Mankei Tsang, Christoph Simon, and Alexander I. Lvovsky, "Fisher information for far-field linear optical superresolution via homodyne or heterodyne detection in a higher-order local oscillator mode," e-print arXiv:1706.08633 [PDF]
   1D | Thermal | Quantum | Fisher | Homodyne, heterodyne | 27 Jun 2017

Introduction

What's the big deal?
Imagine taking a picture of two stars close together through a telescope. Back in the 19th century, scientists discovered that the wave nature of light causes the image of each star to blur, and the size of the blurred spot sets the fundamental resolution of a telescope.
In a seminal 1879 paper, Lord Rayleigh further suggested that, to see the two stars clearly, the spots in the picture should be separated by at least one spot size. Even the Lord himself admitted that his criterion isn't precise, but scientists have since found Rayleigh's criterion to be an excellent rule of thumb for both telescopes and microscopes.
Even with the modern advent of rigorous statistics and image processing, Rayleigh's criterion remains a curse. When the image is noisy, as it necessarily is owing to the quantum nature of light, and the criterion is violated, the distance between the spots becomes difficult to estimate accurately. This is a big problem both in astronomy (for stars) and in microscopy (for fluorescent particles).
Using quantum optics and quantum information theory, we have invented new optical devices that can determine the distance between two close light sources accurately without regard to Rayleigh's criterion. One of our proposed methods is called SPAtial-mode DEmultiplexing, or SPADE, which separates, or "demultiplexes," the incoming light into different channels. Another method exploits the interference between the incoming light and its spatially inverted version, which we call SuperLocalization by Image-inVERsion interferometry, or SLIVER. These processes turn out to be extremely sensitive to the distance between the two light sources.
With SPADE or SLIVER, scientists will be able to measure the distance between two stars more accurately than before for astrometry, or do the same for fluorescent particles in microscopy, potentially improving the imaging resolution by orders of magnitude. Despite the theoretical nature of our work, our proposed devices require only current technology and have been demonstrated experimentally. Our ideas should become useful for practical applications in the near future.
For more in-depth commentaries, see the well-written blog post by Kendra Redmond on the APS Physics Central website, the excellent news article by Edwin Cartlidge on the IoP Physics World website, and the perceptive Viewpoint article by Gabriel Durkin on the APS Physics website.
[Figure. Left: SPADE. Right: SLIVER.]
What is "Rayleigh's curse" and how is it different from Rayleigh's criterion?
Rayleigh's criterion is a heuristic definition of resolution, and modern imaging research recognizes that it can be beaten using image processing, also called "deconvolution," "deblurring," or "denoising." When photon shot noise is present, however, Rayleigh's criterion remains a problem for image processing, in the sense that violating the criterion makes the error of estimating the distance between the sources skyrocket, as shown by the orange dash-dotted curve in the plot below. This is a known statistical phenomenon, valid for any "unbiased" estimator, discovered by Tsai and Dunn (1979), Bettens et al. (1999), Van Aert et al. (2002), and Ram, Ward, and Ober (2006). To be specific and to distinguish this statistical phenomenon from the heuristic Rayleigh's criterion, we call it "Rayleigh's curse" in our papers.
Our study discovers that, contrary to earlier claims, Rayleigh's curse is not a fundamental limit, and quantum mechanics permits a much lower error, as shown by the flat blue curve in the plot above.
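If you want to see the curse for yourself, the comparison is easy to reproduce numerically. The sketch below is our own illustration, not taken from the papers: it assumes a Gaussian point-spread function of width sigma and computes the per-photon Fisher information of direct imaging about the separation d (with the centroid known), which collapses to zero as d shrinks, while the quantum Fisher information stays at 1/(4 sigma^2):

```python
import numpy as np

def psf(x, sigma=1.0):
    # Gaussian intensity point-spread function (normalized)
    return np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def direct_imaging_fi(d, sigma=1.0, h=1e-6):
    # Per-photon Fisher information about the separation d for direct imaging,
    # with the centroid at the origin: p(x|d) = [psf(x-d/2) + psf(x+d/2)] / 2
    x = np.linspace(-12 * sigma, 12 * sigma, 48001)
    dx = x[1] - x[0]
    p = lambda dd: 0.5 * (psf(x - dd / 2, sigma) + psf(x + dd / 2, sigma))
    dp = (p(d + h) - p(d - h)) / (2 * h)  # numerical derivative w.r.t. d
    return float(np.sum(dp**2 / p(d)) * dx)

sigma = 1.0
qfi = 1 / (4 * sigma**2)  # quantum Fisher information per photon: constant in d
for d in [3.0, 1.0, 0.5, 0.1, 0.01]:
    print(f"d/sigma = {d:4.2f}: direct-imaging FI = {direct_imaging_fi(d):.4f}"
          f" vs quantum FI = {qfi:.4f}")
```

At large separations direct imaging approaches the quantum limit, but below the PSF width its Fisher information sinks toward zero, which is precisely Rayleigh's curse.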
Optics

How do I understand this if I know classical optics only?
Consider the optical field generated by one source on the image plane:
If it is displaced by a distance of +d/2, the field can be approximated as the sum of the two fields shown above using a Taylor-series approximation. That means that the amplitude of the first-order odd mode is sensitive to the displacement, whereas the zeroth-order mode is somewhat insensitive.
Consider now the optical field generated by the other source:
It is displaced the other way, so now the amplitude of the first-order mode has a minus sign instead.
Here comes the first crucial point: because the two sources are incoherent, the energy in the first-order mode is the incoherent sum of the contributions from the two sources, i.e., it is proportional to (d/2)^{2} + (d/2)^{2} = d^{2}/2, and the minus sign doesn't matter. This means that the energy in the first-order mode is sensitive to the distance between the two sources.
The second crucial point concerns the zeroth-order mode. It is not very sensitive to d, so under direct imaging this mode contains almost no signal and just contributes background noise. SPADE or SLIVER, on the other hand, can effectively filter it out and measure only the first-order mode, so that the noise is reduced and the signal-to-noise ratio is much improved.
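The first crucial point is easy to check numerically. The sketch below is our own illustration (Gaussian amplitude point-spread function with |psi|^2 of variance sigma^2, first-order Hermite-Gauss mode): the amplitudes coupled by the two sources have opposite signs, yet the incoherently summed energy grows as d^2:

```python
import numpy as np

sigma = 1.0
x = np.linspace(-12, 12, 24001)
dx = x[1] - x[0]

def psi(u):
    # Gaussian amplitude point-spread function; |psi|^2 has variance sigma^2
    return (2 * np.pi * sigma**2) ** -0.25 * np.exp(-u**2 / (4 * sigma**2))

phi1 = (x / sigma) * psi(x)  # first-order (odd) Hermite-Gauss mode, normalized

for d in [0.5, 0.2, 0.1]:
    a_plus = np.sum(phi1 * psi(x - d / 2)) * dx   # amplitude from source at +d/2
    a_minus = np.sum(phi1 * psi(x + d / 2)) * dx  # amplitude from source at -d/2
    # incoherent sources: energies add, so the opposite signs do not cancel
    energy = 0.5 * a_plus**2 + 0.5 * a_minus**2
    print(f"d = {d}: amplitudes {a_plus:+.4f} / {a_minus:+.4f}, "
          f"HG1 energy = {energy:.6f} ~ d^2/(16 sigma^2) = {d**2 / 16:.6f}")
```

For a Gaussian PSF the energy in the first-order mode is (d^2/16 sigma^2) exp(-d^2/16 sigma^2), so for small d it is essentially proportional to d^2, as the argument above says.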
If you know the semiclassical theory of photodetection and statistics, see our paper #2 for an alternative analysis and discussion of SLIVER using that theory. Our paper #3 and paper #9 describe a semiclassical Poisson model that is a bit less general but proves pretty much the same things as the quantum theory.
Now let's move on to what happens with multiple sources (papers #7 and #9). Each source with intensity I_n and displacement d_n contributes ~I_n d_n^{2} to the energy in the first-order mode, so the energy in that mode coming from the multiple incoherent sources ends up being ~Σ_n I_n d_n^{2}, which is the second moment of the source distribution. By separating out the zeroth-order mode, which contributes only background noise, the second moment can be estimated much more accurately.
To measure the first moment of the source distribution, consider a mode that is the sum of the zeroth-order mode and the first-order mode, with a wavefunction given by
Think about the overlap between this wavefunction and the displaced point-spread function ψ(x − d/2) sketched above, and convince yourself that the coupling efficiency for each source into this mode is ~1 + a d_n, where a is a constant. The total energy in this mode coming from multiple sources is then ~Σ_n I_n (1 + a d_n).
Consider now the energy in another mode with this wavefunction:
Now the coupling efficiency from each source into this mode is ~1 − a d_n, and the total energy is ~Σ_n I_n (1 − a d_n). Subtracting the energy in the minus mode from the energy in the plus mode gives ~2a Σ_n I_n d_n, which is proportional to the first moment of the distribution. The noise for these two measurements is still dominated by the zeroth-order mode, so you don't actually gain an advantage and direct imaging can estimate the first moment just as well, but generalizing this concept to higher-order odd moments does result in significant advantages.
To access arbitrary moments, the displaced point-spread function should be expanded not just to first order but to arbitrary order, and the modes to be measured look more complicated, but the basic concept remains the same: a moment of the object distribution can be measured by projecting onto the right modes, while the background noise for the measurement of each moment can be much reduced if we can separate out the irrelevant modes.
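As a sanity check on the moment story above, here is a toy numerical example of our own making (Gaussian PSF; the intensities and positions are hypothetical values we chose, not from the papers). The difference of the energies in the sum and difference modes picks out the first moment, while the energy in the first-order mode alone picks out the second moment:

```python
import numpy as np

sigma = 1.0
x = np.linspace(-12, 12, 24001)
dx = x[1] - x[0]
psi = lambda u: (2 * np.pi * sigma**2) ** -0.25 * np.exp(-u**2 / (4 * sigma**2))
phi0, phi1 = psi(x), (x / sigma) * psi(x)  # zeroth- and first-order modes
plus, minus = (phi0 + phi1) / np.sqrt(2), (phi0 - phi1) / np.sqrt(2)

# hypothetical sub-diffraction source distribution (intensities sum to 1)
I = np.array([0.5, 0.3, 0.2])
pos = np.array([0.10, -0.05, 0.02])

def energy(mode):
    # incoherent sum over sources of the energy coupled into the given mode
    return sum(In * (np.sum(mode * psi(x - s)) * dx) ** 2 for In, s in zip(I, pos))

diff = energy(plus) - energy(minus)  # ~ first moment / sigma
second = energy(phi1)                # ~ second moment / (4 sigma^2)
print(f"plus-minus energy difference = {diff:.5f}, "
      f"first moment / sigma = {np.sum(I * pos) / sigma:.5f}")
print(f"HG1 energy = {second:.6f}, "
      f"second moment / (4 sigma^2) = {np.sum(I * pos**2) / (4 * sigma**2):.6f}")
```

In this Gaussian example the constant a in the text comes out as 1/(2 sigma), and both mode energies track the corresponding moments to within the small higher-order corrections.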

If this is just classical sources, linear optics, and photon counting, why do you need quantum mechanics at all?
- Our quantum bound serves as a fundamental limit for any measurement allowed by quantum mechanics.
- If a measurement method, such as ours, achieves the quantum bound, it is optimal and you can't do better than that.
- The quantum theory ensures that our assumptions are rigorous and the proposed measurements are physically realizable.
- We originally discovered everything through the quantum calculations, so they are just powerful theoretical tools to discover what's fundamentally possible and what's not.

But your measurement methods can still be explained by a semiclassical theory. Are you sure people haven't thought of this in classical optics?
I have studied classical and quantum imaging for over ten years and never seen anything quite like our present work. The most relevant papers we have found in classical optics are Tsai and Dunn (1979), Bettens et al. (1999), Van Aert et al. (2002) (in the context of electron microscopy, although their mathematical model is applicable to optics as well), and Ram, Ward, and Ober (2006). They all studied the detrimental effect of Rayleigh's criterion on statistical estimation in conventional imaging, i.e., their results all suffer from Rayleigh's curse.
SLIVER is based on an image-inversion interferometer that was proposed and demonstrated by Wicker et al. in Rainer Heintzmann's group, although no one had done any statistical analysis of that scheme or recognized its extraordinary accuracy in estimating the separation between two sources until our work. Please email me at mankei at nus dot edu dot sg if you know another relevant prior work.
How is this different from phase-contrast microscopes or holography?
Those techniques work on coherent sources only, meaning that you need a laser and coherent scattering of the laser light by the object. That is obviously impossible for astronomy, and even in microscopy, fluorescent particles are a lot more convenient to use, as they can be attached to interesting stuff deep inside a biological sample. Starlight and fluorescence are incoherent sources, meaning that the phase is random at the source. Our techniques work for incoherent sources because the light does pick up spatial coherence after diffraction, as is well known in the context of the van Cittert-Zernike theorem. The surprise here is that, even after focusing by a lens, there's still a bit of coherence on the image plane, and you can do a lot better than just measuring the intensity there.
 How do your methods compare with PALM/STORM/STED?
For PALM/STORM/STED, you need to control the emission of the fluorophores to make sure that, in each image, only a sparse subset of fluorophores are emitting and they don't violate Rayleigh's criterion. These methods obviously don't work for astronomy or passive remote sensing, and even if you can do them in molecular microscopy, they are a bit slow. The biggest advantage of our methods is that they are pure farfield techniques and don't require control of the light sources or proximity to them.
 How are your methods related to stellar interferometry?
Conventional wisdom suggests that stellar interferometers are useful for mitigating the effect of atmospheric turbulence, but they can't compete with direct imaging under the diffraction limit. To quote Joseph Goodman, Statistical Optics,
The reader may well wonder why the Fizeau stellar interferometer, which uses only a portion of the telescope aperture, is in any way preferred to the full telescope aperture in this task of measuring the angular diameter of a distant object. The answer lies in the effects of the random spatial and temporal fluctuations of the earth's atmosphere ("atmospheric seeing"), which are discussed in more detail in Chapter 8. For the present it suffices to say that it is easier to detect the vanishing of the contrast of a fringe in the presence of atmospheric fluctuations than it is to determine the diameter of an object from its highly blurred image.
And to quote Jonas Zmuidzinas, JOSA A 20, 218233 (2003),
However, it is important to remember that the imperfect beam patterns of sparseaperture interferometers extract a sensitivity penalty as compared with filledaperture telescopes, even after accounting for the differences in collecting areas. [Emphasis mine]
Our work, on the other hand, shows that linear optical methods can in fact be superior to direct imaging even on a fundamental level.
 I have a problem with your definition of "resolution."
The word "resolution" means many different things to different people. We take it literally: the process or ability to resolve. Reducing the uncertainty and improving the accuracy of parameter estimation can certainly be regarded as an act of resolving. Related prior work also uses a similar terminology.
At the end of the day, the purpose of imaging is to learn more about the object. Statistics, by studying how closely one can estimate the unknown parameters of the object, is the most rigorous and useful way of quantifying this knowledge. This is why we believe that a statistical definition of resolution is the right way, while other concepts, such as spatial-frequency bandwidth and image sharpness, are beside the point, as they are properties of the optical waves and not directly related to the object itself.
Quantum Optics

Hasn't Carl Helstrom already done this kind of theory?
No. He studied mostly one point source, and for two sources he studied them only in the context of binary hypothesis testing, assuming that the separation between the two sources is given. In reality, you usually don't know the separation to begin with and need to estimate it first.
We are able to solve this statistical estimation problem now because we managed to simplify the problem enough (Sec. II of our paper) to use the explicit formula for the quantum Cramér-Rao bound (see Chap. VIII.4 of Helstrom, Quantum Detection and Estimation Theory). Also, most people nowadays simply assumed that nothing new could be done with classical sources.

How is this related to the earlier quantum theory of superresolution by Kolobov and coworkers?
They never did anything about incoherent sources, and Rayleigh's criterion is defined in terms of incoherent sources. All of their papers concern laser or squeezed light and are irrelevant to stars or fluorophores. The only place where they even mentioned incoherent sources is the last sentence of Vladislav N. Beskrovny and Mikhail I. Kolobov, Phys. Rev. A 78, 043824 (2008):
The second generalization of the quantum theory of superresolution presented in this paper is from coherent imaging into partially coherent and fully incoherent cases. This problem is very challenging and will be addressed in forthcoming publications.
[Emphasis mine. They still haven't published anything on incoherent light.]

How good is the epsilon << 1 approximation in practice? (epsilon is the average received photon number per optical mode.)
Extremely good at optical frequencies, and one of the best approximations you will ever make in your life. epsilon is at most the blackbody occupancy number for a thermal source (epsilon ~ 0.01 at the surface of the sun, at 6000 K and 500 nm wavelength), and a lot less for fluorescent particles, which typically have a very low photon flux (~10,000/s) and a short coherence time (~10 fs), giving epsilon ~ 1e-10. epsilon is further limited by the ratio of the aperture size to the optical coherence area, and that ratio is necessarily minuscule for telescopes and faraway stars. Please read Chap. 13 of Mandel and Wolf and especially Chap. 9 of Goodman, Statistical Optics. To quote Goodman,
A physical understanding of this result can be gained from the following considerations. If the count degeneracy parameter is much less than 1, it is highly probable that there will be either zero or one counts in each separate coherence interval of the incident classical wave. In such a case the classical intensity fluctuations have a negligible "bunching" effect on the photoevents, for (with high probability) the light is simply too weak to generate multiple events in a single coherence cell. If negligible bunching of the events takes place, the count statistics will be indistinguishable from those produced by stabilized singlemode laser radiation, for which no bunching occurs.
Here's another quote from Leonard Mandel, Proc. Phys. Soc. 74 233 (1959):
When the degeneracy is very small, p(n,T) simplifies very considerably... which is the classical Poisson distribution... This situation will generally apply when stellar sources are being studied. The light from these sources is always so weak that n xi/T< 1 and the degeneracy is unlikely to be detected in measurements on a single beam. The situation is, of course, improved when correlation measurements are undertaken on two or more coherent beams (Hanbury Brown and Twiss 1956), since these measurements single out the degenerate photons (Mandel 1958). Even so it is unlikely that any faint stars could be studied in this way.
Here's a more recent quote from Zmuidzinas, JOSA A 20, 218233 (2003):
It is well established that the photon counts registered by the detectors in an optical instrument follow statistically independent Poisson distributions, so that the fluctuations of the counts in different detectors are uncorrelated. To be more precise, this situation holds for the case of thermal emission (from the source, the atmosphere, the telescope, etc.) in which the mean photon occupation numbers of the modes incident on the detectors are low, n << 1. In the high occupancy limit, n >> 1, photon bunching becomes important in that it changes the counting statistics and can introduce correlations among the detectors. We will discuss only the first case, n << 1, which applies to most astronomical observations at optical and infrared wavelengths.
For fluorescent sources, check out Pawley, ed., Handbook of Biological Confocal Microscopy, Ram, Ward, and Ober, PNAS 103, 4457 (2006), etc., which all use the Poisson model; the epsilon << 1 condition (i.e., bunching/antibunching is negligible) is needed for the Poisson model to hold.
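The back-of-the-envelope numbers quoted above are easy to reproduce. This is a quick sketch of our own: the Bose-Einstein occupancy for the thermal case, and photon flux times coherence time for the fluorophore case, using the rough values from the text:

```python
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI: Planck, speed of light, Boltzmann

# Thermal source: Bose-Einstein occupancy per mode at the sun's surface
T, lam = 6000.0, 500e-9
eps_sun = 1 / (math.exp(h * c / (lam * k * T)) - 1)
print(f"epsilon (sunlight, 500 nm, 6000 K) ~ {eps_sun:.4f}")  # ~ 0.01

# Fluorescent particle: photons emitted per coherence time
flux, t_coh = 1e4, 10e-15  # ~10,000 photons/s, ~10 fs coherence time
eps_fluor = flux * t_coh
print(f"epsilon (fluorophore) ~ {eps_fluor:.0e}")             # ~ 1e-10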

What if epsilon is large?
In our paper #4, Ranjith managed to derive the quantum bound for thermal sources with arbitrary epsilon, and it is consistent with our earlier result under the small-epsilon approximation. He has also shown that SPADE and SLIVER still work quite well. This is more relevant to longer wavelengths, e.g., terahertz and microwave radiation, and to scattered laser light. See also related work by Lupo and Pirandola.

Can I do heterodyne/homodyne/digital holography for epsilon << 1?
You shouldn't, because the state is vacuum most of the time and incoherent, which leads to huge vacuum noise in dyne measurements. Our paper #10 shows that spatial-mode homodyne/heterodyne detection is much worse than direct imaging if epsilon << 1. For large epsilon (> 2), however, dyne measurements can have an advantage.

How about multiphoton coincidence measurements? I read somewhere that quantum optics is all about multiphoton coincidence.
With epsilon << 1, multiphoton coincidence events are very rare for thermal optical sources or fluorescent particles. Because they are so rare, the information you can gain from them is comparatively little if your goal is imaging.

But how about Hanbury Brown-Twiss intensity interferometry? I learned from my quantum optics course that it's a big deal in astronomy.
Hanbury Brown-Twiss interferometry has been obsolete for decades in astronomy. Fundamentally, the SNR of intensity (two-photon) interferometry is simply too low compared with amplitude (single-photon) interferometry; see Chap. 9 of Goodman, Statistical Optics. For example, Davis and Tango (1986) reported that
~40h of observations were required with the Narrabri instrument [referring to the HBT intensity interferometer], whereas <1h was needed to obtain comparable accuracy with the new prototype interferometer [referring to their amplitude interferometer].
There is some recent effort to revive intensity interferometry for astronomy, but that is for technical and practical reasons that have little to do with quantum optics.
Statistics

Why do you have to use this complicated theory of statistical parameter estimation? There have been many superresolution proposals based on electromagnetism alone, such as metamaterials and superoscillation.
Without using statistical inference, there would be no objective, rigorous way to quantify the accuracy of your imaging protocol; anything else you do would just be a glorified version of Photoshop. Proper statistical analysis is especially important for astronomy and fluorescence microscopy, where the number of detected photons is low and the signal is weak. This is why the Cramér-Rao bound has become the standard precision measure in fluorescence microscopy [see, for example, Deschout et al., Nature Methods 11, 253 (2014) and Chao et al., JOSA A 33, B36 (2016)] and a proper statistical analysis has become an essential part of research by people who know what they are doing [see, for example, Shechtman et al., Phys. Rev. Lett. 113, 133902 (2014), Legant et al., Nature Methods 13, 359 (2016), and Balzarotti et al., Science (2016)].
To quote Goodman again,
The statistical approach is indeed somewhat more complex than the deterministic approach, for it requires knowledge of the elements of probability theory. In the long run, however, statistical models are far more powerful and useful than deterministic models in solving physical problems of genuine practical interest.
And to quote Brad Efron,
Statistics has been the most successful information science. Those who ignore statistics are condemned to reinvent it.
This is also why metamaterial, superoscillation, and multiphoton-coincidence techniques (as well as gazillions of other superresolution proposals) don't really work in practice: they lose too many photons or introduce too much noise to achieve a useful signal-to-noise ratio for non-laser sources. All their experimental demonstrations used lasers, which are far more intense. Even if you can see an apparent improvement in image quality, the biggest question is whether digital superresolution algorithms can do just as well or even better, and that can be answered only with a proper statistical analysis.

Why not study this in terms of binary hypothesis testing (one source vs two sources)?
Helstrom studied this binary hypothesis-testing problem in terms of his quantum bound, but he didn't propose a concrete optical measurement setup to attain the bound, and he assumed a given separation in the two-source hypothesis, which is somewhat artificial; the separation is usually unknown and needs to be estimated in the first place. Krovi, Guha, and Shapiro also studied this problem recently using the quantum Chernoff bound, but they too assumed a given separation.
Our paper #8 shows that SPADE and SLIVER can also be used to perform this detection optimally and much more accurately than direct imaging without the separation being given. In the event of a successful detection of two sources, our measurements can also give an accurate estimate of the separation as a bonus.

In your simulations, why do the errors violate the Cramér-Rao bounds?
The Cramér-Rao bounds are valid only for "unbiased" estimators, and the maximum-likelihood estimator we used is actually biased for finite samples. It is possible to generalize our theory to deal with biased estimators as well if we adopt a Bayesian/minimax approach; please see our paper #5 for details. From the minimax perspective, there is still a significant performance gap between direct imaging and our techniques even if biased estimators are allowed. See also the discussion by Tham et al. on biased estimators.
Despite the unbiased-estimator assumption, the Cramér-Rao bound remains the standard precision measure in the microscopy and astronomy literature; it has nice asymptotic properties and is a decent approximation of the estimation errors in our case, which is why we focus on it rather than the more advanced statistical concepts.
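To see how the bias arises, here is a toy Monte Carlo of our own (not from the papers): direct imaging with a Gaussian PSF and Gaussian (rather than Poisson) photon statistics, with a grid-search maximum-likelihood estimate of the separation. Below Rayleigh's criterion, a large fraction of the estimates collapse to exactly zero, so the estimator cannot be unbiased and the Cramér-Rao bound need not constrain its error:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, d_true = 1.0, 0.2          # separation well below the PSF width
n_photons, n_trials = 500, 200
d_grid = np.linspace(0.0, 2.0, 201)

def log_likelihood(x, d):
    # direct-imaging model: each photon lands in a 50/50 mixture of two
    # Gaussians centered at +/- d/2 (normalization is d-independent)
    p = 0.5 * (np.exp(-(x[:, None] - d / 2) ** 2 / (2 * sigma**2)) +
               np.exp(-(x[:, None] + d / 2) ** 2 / (2 * sigma**2)))
    return np.sum(np.log(p), axis=0)

estimates = np.empty(n_trials)
for t in range(n_trials):
    side = rng.choice([-0.5, 0.5], size=n_photons)   # which source emitted
    x = side * d_true + rng.normal(0.0, sigma, n_photons)
    estimates[t] = d_grid[np.argmax(log_likelihood(x, d_grid))]

print(f"true separation = {d_true}")
print(f"mean ML estimate = {estimates.mean():.3f}")
print(f"fraction of estimates stuck at exactly 0 = {(estimates == 0.0).mean():.2f}")
```

The pile-up of estimates at zero is the signature of the bias: an estimator that returns exactly zero with finite probability cannot be unbiased for any nonzero separation.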
Experiments

What is the experimental progress so far?
Please see the following papers:
- Zong Sheng Tang, Kadir Durak, and Alexander Ling, "Fault-tolerant and finite-error localization for point emitters within the diffraction limit," Optics Express 24, 22004 (2016) [CQT Highlight].
- Fan Yang, Arina Taschilina, E. S. Moiseev, Christoph Simon, and A. I. Lvovsky, "Far-field linear optical superresolution via heterodyne detection in a higher-order local oscillator mode," Optica 3, 1148 (2016).
- Weng Kian Tham, Hugo Ferretti, and Aephraim M. Steinberg, "Beating Rayleigh's Curse by Imaging Using Phase Information," Physical Review Letters 118, 070801 (2017) [e-print arXiv:1606.02666, press release, short summary].
- Martin Paur, Bohumil Stoklasa, Zdenek Hradil, Luis L. Sanchez-Soto, and Jaroslav Rehacek, "Achieving the ultimate optical resolution," Optica 3, 1144 (2016) [press release, OSA Optics & Photonics News].

What source should I use in an experimental demo?
Any typical single-photon or thermal source, such as quantum dots, fluorescent molecules, or even SPDC should do. If SPDC is used, the two sources must obviously be designed to satisfy our assumptions, i.e., they must not be entangled or otherwise correlated.
Our paper #3 shows that laser sources can also work, as long as the Poisson model is valid.

How about stars?
Our theory assumes a diffraction-limited imaging system, so atmospheric turbulence might be an issue for ground-based telescopes. You can, however, get close to the diffraction limit if the aperture size is small enough, you have a space telescope, or your adaptive optics is good enough. The Large Binocular Telescope (LBT) in Arizona, for example, can get pretty close to the diffraction limit, while the Giant Magellan Telescope (GMT), the Thirty Meter Telescope (TMT), and the European Extremely Large Telescope (E-ELT) will all be diffraction-limited.

What's the simplest setup you can think of?
For spatial-mode demultiplexing (SPADE), see Figure 7 of our PRX. It is absolutely essential that you count the photons in the leaky modes as well, or it's not going to work well. SLIVER can work similarly well and is probably even easier to implement, as it does not need to be tailored to the point-spread function. See also the experimental papers above for variations of our proposals.
Generalizations

How sensitive is this to the centroid if you don't know it exactly?
The performance will be less ideal if the center of the device is not aligned exactly with the centroid of the two sources. However, the centroid is a lot easier to locate using direct imaging and doesn't suffer from Rayleigh's curse, and our study (Appendix D of our PRX) suggests that, as long as the misalignment is small relative to the width of the point-spread function, there is still a substantial improvement in estimation accuracy over conventional imaging. The misalignment can be reduced by splitting off part of the beam for conventional imaging and using the centroid estimate there for alignment control, or by scanning the device across the image plane to look for the centroid first.
Chrostowski et al. also studied this multiparameter estimation problem. They essentially showed that you do need to invest some overhead photons to estimate the centroid first if the measurement is restricted to linear optics and photon counting, though the overhead is not severe in an asymptotic sense. More excitingly, they suggested that a general collective measurement over all the photons can estimate both the centroid and the separation simultaneously at the quantum limit we suggested, at least in principle.
Another scenario that favors our schemes is measuring the relative motion of binary stars, which usually changes a lot faster than their center of mass. This means that one has a lot more time and a lot more photons to determine the centroid accurately for alignment.

How sensitive is your theory to the shape of the point-spread function? It seems to assume a Gaussian PSF a lot.
The quantum bound is valid for any spatially invariant point-spread function with constant phase, and we know that a single-parameter QCRB is asymptotically achievable in principle, courtesy of Nagaoka, Hayashi et al., and Fujiwara. Binary SPADE for other point-spread functions is discussed in Sec. VI of our PRX. Rehacek et al. and Kerviche et al. have proposed other ways to design SPADE for arbitrary point-spread functions. SLIVER works for any circularly symmetric point-spread function in 2D.

How about sources with unequal intensities (e.g., exoplanet and star) or multiple sources?
Please see our paper #7 and paper #9 for a generalization of the theory to an arbitrary distribution of incoherent sources. They show that SPADE can enhance the estimation of the second and higher-order moments of the distribution.
Rehacek et al. studied the case of two unequal sources. They showed that it is still possible to obtain a significant enhancement for separation estimation via a suitable quantum measurement. Both the quantum performance and the directimaging performance take a hit when the two sources are unequal, however.

How sensitive is your method to intensity fluctuations, i.e. fluctuations in epsilon?
As long as you can count all or most of the photons from all the output channels of the device, you can use the total photon number as an estimate of the intensity and a normalization factor for the estimator, so it's robust against intensity fluctuations.

How about unpolarized light or fixed but unknown polarizations?
SLIVER should work for any polarization or random polarizations, as it relies on the interference of a photon with itself.
For SPADE, use weakly guiding waveguides, which are less sensitive to polarizations, or use a polarizing beam splitter and send the two polarized beams to two SPADEs.

How about nonparaxial effects?
We studied quantum bounds for vectorial electromagnetic fields here, mainly for coherent sources, and there's not much surprise in terms of nonparaxial effects, so we don't expect them to affect our theory too much.

How about 2D/3D/4D imaging?
See our paper #2, #6, #7, and #9 for 2D theory. No idea about what happens with more dimensions yet.
Miscellaneous

What if I have other questions?
Please email me at mankei at nus dot edu dot sg.
Changelog

Updating...
Attachment: tsang_superresolution.bib (14k), Mankei Tsang, Jun 29, 2017, 6:59 PM
