Methodology

The methodology is based on three fundamental steps:

STEP1

The PSF prediction method proposed with APPLY follows a pragmatic, yet very efficient, approach in which the AO PSFs are described by analytical models that will be calibrated against telemetry and telescope data. The substantial advantage of parametric PSFs is that they compress all the important information of the physical PSF into a few parameters, drastically simplifying the PSF prediction process. The first critical aspect is therefore the choice of an adequate analytical model. The literature already provides models of the AO-corrected PSF, often based on Gaussian, Lorentzian and/or Moffat functions. Those models are limited, however, by having to describe both the AO-corrected core and the turbulent halo with a restricted number of free parameters. Moreover, their parameters are purely mathematical, without direct physical meaning or units, which makes extrapolating the PSFs over wavelength or field position cumbersome.
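To make the notion of a parametric PSF concrete, the sketch below evaluates a generic circular Moffat profile on a pixel grid. It is only an illustration of the kind of model discussed above, not the APPLY model, and all parameter values are arbitrary.

```python
import numpy as np

def moffat_psf(shape, x0, y0, amplitude, alpha, beta):
    """Generic circular Moffat profile: I(r) = A * (1 + (r/alpha)**2)**(-beta).

    A classical parametric PSF shape; its parameters are purely mathematical
    (no direct physical units), which is the limitation discussed above.
    """
    y, x = np.indices(shape)
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return amplitude * (1.0 + r2 / alpha ** 2) ** (-beta)

# Illustrative values only: a 64x64 PSF with a ~2-pixel-wide core.
psf = moffat_psf((64, 64), x0=32.0, y0=32.0, amplitude=1.0, alpha=2.0, beta=2.5)
psf /= psf.sum()  # normalise to unit flux
```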

Our group has recently proposed a long-exposure AO-PSF model that accurately describes the PSF shape [Fétick et al.]. This model requires between 3 and 8 parameters depending on the AO PSF structure, and these parameters are all related to physical quantities, hence easily estimated from the AO telemetry.
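To illustrate why physically meaningful parameters ease extrapolation over wavelength, the sketch below applies standard atmospheric-optics scalings (Fried parameter varying as λ^(6/5), a diffraction-limited core of ~λ/D and a seeing-limited halo of ~λ/r0). The numerical values are illustrative and not taken from the APPLY calibration.

```python
import numpy as np

RAD_TO_ARCSEC = 180.0 / np.pi * 3600.0

def extrapolate_psf_scales(r0_500nm, wavelength, diameter):
    """Standard scalings tying physical parameters to PSF features.

    r0_500nm   : Fried parameter at 500 nm [m] (e.g. estimated from telemetry)
    wavelength : observing wavelength [m]
    diameter   : telescope diameter [m]
    """
    # The Fried parameter scales with wavelength as lambda^(6/5).
    r0 = r0_500nm * (wavelength / 500e-9) ** (6.0 / 5.0)
    core_fwhm = RAD_TO_ARCSEC * wavelength / diameter  # diffraction-limited core width
    halo_fwhm = RAD_TO_ARCSEC * wavelength / r0        # seeing-limited halo width
    return r0, core_fwhm, halo_fwhm

# Illustrative values only: r0(500 nm) = 0.13 m on an 8 m telescope,
# at the two wavelengths shown in the figure.
for lam in (476e-9, 776e-9):
    r0, core, halo = extrapolate_psf_scales(0.13, lam, 8.0)
    print(f"{lam * 1e9:.0f} nm: r0 = {r0:.2f} m, "
          f"core ~ {core * 1e3:.1f} mas, halo ~ {halo:.2f} arcsec")
```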

As an example, the inset in the figure on the right shows AO PSFs acquired with MUSE-NFM at two wavelengths (top left: 476 nm; top right: 776 nm), the associated PSF models (second row) and the corresponding residuals (third row). The circularly averaged PSFs (data, model and residuals) are plotted for the 476 nm case. Residuals are better than a few percent at all PSF positions (x, y) and at all wavelengths. The main differences are due to static aberrations not yet considered in our model, such as the telescope spiders.
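The residual figure of merit quoted above can be computed with a simple azimuthal (circular) average and a peak-normalised difference; the sketch below shows one possible implementation for generic numpy arrays (function names and normalisation choice are ours, not APPLY's).

```python
import numpy as np

def circular_average(image, center=None, bin_width=1.0):
    """Azimuthally averaged radial profile of a 2-D image."""
    y, x = np.indices(image.shape)
    if center is None:
        center = ((image.shape[1] - 1) / 2.0, (image.shape[0] - 1) / 2.0)
    r = np.hypot(x - center[0], y - center[1])
    bins = (r / bin_width).astype(int)
    flux = np.bincount(bins.ravel(), weights=image.ravel())
    npix = np.bincount(bins.ravel())
    return flux / npix

def relative_residual(psf_data, psf_model):
    """Pixel-wise (data - model) residual, normalised by the PSF peak, in percent."""
    return 100.0 * (psf_data - psf_model) / psf_data.max()
```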


Building on this, the goal is to calibrate this AO-PSF model and derive the correlation laws between the PSF parameters and the environmental data.

The procedure, summarized in the figure on the right, is then straightforward:

  • First, we use our analytical model to fit the PSFs and extract the model parameters. This is done on a large set of PSFs covering different operational and atmospheric conditions (see the fitting sketch after this list).
  • Those PSF parameters are then correlated with the telemetry archive, with the aim of confirming or extracting correlation laws. For this step, we will explore innovative methods based on machine-learning algorithms. Indeed, the relations between the inputs (hundreds of GB of telemetry) and the outputs (~8 parameters per PSF) can be non-linear, and algorithms such as multivariate regression are appropriate (see the regression sketch after this list).
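A minimal version of the fitting step in the first bullet, using scipy's least-squares solver and assuming a parametric model with the call signature of the illustrative Moffat profile above (the actual APPLY model and fitting strategy may differ):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_psf(psf_image, model, p0):
    """Least-squares fit of a parametric PSF model to an observed PSF image.

    model(shape, *params) must return an image with the same shape as psf_image;
    p0 is the initial guess for the model parameters.
    """
    def residuals(params):
        return (model(psf_image.shape, *params) - psf_image).ravel()

    result = least_squares(residuals, p0)
    return result.x, result.cost

# Example with the illustrative Moffat profile defined earlier:
# best_params, cost = fit_psf(psf, moffat_psf, p0=[32.0, 32.0, 0.01, 2.0, 2.5])
```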

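For the second bullet, a sketch of the regression step, assuming the telemetry has already been reduced to one feature vector per PSF and using a standard multi-output regressor from scikit-learn as a placeholder for the machine-learning methods to be explored (all data below are random placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical reduced data: one row of telemetry-derived features per PSF
# (e.g. seeing, wind speed, WFS flux, ...) and the corresponding fitted PSF parameters.
n_psf, n_features, n_params = 500, 12, 8
rng = np.random.default_rng(0)
telemetry_features = rng.random((n_psf, n_features))  # placeholder data
psf_parameters = rng.random((n_psf, n_params))        # placeholder data

X_train, X_test, y_train, y_test = train_test_split(
    telemetry_features, psf_parameters, test_size=0.25, random_state=0)

# Multivariate (multi-output) regression: telemetry features -> PSF parameters.
regressor = RandomForestRegressor(n_estimators=200, random_state=0)
regressor.fit(X_train, y_train)
print("R^2 on held-out PSFs:", regressor.score(X_test, y_test))
```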

With APPLY we will both improve the parameter estimation from the telemetry and optimize the AO-PSF model, with the objective of reducing the amount of telemetry data needed. This is a critical aspect, as saving all the AO telemetry data may be a show-stopper for the ELTs. For each {science case, AO system} pair, a dedicated optimization is carried out, trading off accuracy (for instance estimated from the fit residuals) against simplification, understood here as reducing the number of free parameters of the model.
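One simple way to formalise the accuracy-versus-simplification trade-off is an information criterion that penalises additional free parameters; the sketch below uses the Bayesian information criterion as an example, which is our illustration rather than a choice made in APPLY.

```python
import numpy as np

def bic(psf_data, psf_model, n_free_params):
    """Bayesian information criterion computed from the fit residuals.

    Lower is better; the k*log(n) term penalises additional free parameters,
    so a richer model variant is only retained if it reduces the residuals enough.
    """
    n_pix = psf_data.size
    rss = np.sum((psf_data - psf_model) ** 2)  # residual sum of squares
    return n_pix * np.log(rss / n_pix) + n_free_params * np.log(n_pix)

# Compare, e.g., a 3-parameter and an 8-parameter variant of the PSF model:
# keep_simple_model = bic(psf, model_3p, 3) <= bic(psf, model_8p, 8)
```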

STEP2

The major strength of APPLY lies in its integrated approach: not only is the PSF provided, but the estimated PSFs are coupled with dedicated reduction tools and validated against “PSF science verification” observations. The optimization of the whole data-processing chain is carried out with data-processing specialists, and the quantitative scientific impact and critical feedback on the process are evaluated with astronomers. For this, the project takes advantage of privileged access to large on-sky data sets. We will also make use of simulated data to demonstrate the pipeline accuracy in a controlled environment.
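On simulated data, where the true PSF is known, the scientific impact of a PSF error can be quantified directly. As an illustration (not the APPLY validation procedure), the sketch below estimates the photometric bias introduced when a single noiseless star is fitted with an estimated PSF instead of the true one:

```python
import numpy as np

def psf_photometry_flux(star_image, psf_model):
    """Best-fit flux of a single star for a given, flux-normalised PSF model
    (linear least squares, background and noise neglected for simplicity)."""
    return np.sum(star_image * psf_model) / np.sum(psf_model ** 2)

def photometric_bias(true_psf, estimated_psf, true_flux=1.0):
    """Relative flux error (in percent) when the estimated PSF is used
    instead of the true one on a noiseless simulated star."""
    star_image = true_flux * true_psf
    measured_flux = psf_photometry_flux(star_image, estimated_psf)
    return 100.0 * (measured_flux - true_flux) / true_flux
```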

STEP3

Once the tools are validated and documented, the last step of APPLY is their distribution to the community via publicly available platforms.