Using stellar models to predict the measured MAROON-X spectra has several applications: (a) Exposure time calculators, (b) RV content or uncertainty calculators, (c) Instrument throughput/efficiency measurements. The latter is of course an ingredient for the first two, but the same procedures apply.
First, we have to consider a number of loss factors. From the top of the atmosphere down, these are:
1.) Atmospheric Extinction: Dust and aerosols scatter the starlight on its way through the atmosphere and thus dilute the flux received by the telescope. Because this is a scattering process, it is much stronger at UV and blue visual wavelengths than in the red and NIR. Extinction losses are usually expressed as wavelength-dependent magnitudes per airmass and are unique to each observatory site due to factors such as altitude, distance to the ocean, soil type, etc. For Gemini North, we use the data from the CFHT Observers Manual published on the Gemini website.
2.) Telluric absorption: On top of the scattering losses from extinction, actual absorption by molecules in the Earth's atmosphere causes two distinct features: (a) narrow, often spectrally unresolved absorption lines, in the optical from water vapor (H2O) and oxygen (O2); and (b) a pseudo-continuum from ozone (O3) that expresses itself as strong absorption in the UV (Huggins band) and as moderately shallow absorption of ~5% between 500 and 700nm (Chappuis band). The latter is often neglected when using telluric models or when fitting stellar continua in between the telluric absorption lines. All telluric lines are airmass dependent. The depth of the water vapor lines also depends on the moisture content of the atmosphere, expressed in mm of precipitable water vapor (PWV), and sometimes loosely characterized as 'dry' or 'wet' conditions.
Telluric absorption model (from Kimeswenger et al. 2015) for airmass 1.0. This model is for a resolving power of R~10,000 and does not include extinction.
3.) Clouds: Clouds usually act as grey absorbers. Their influence is described as an airmass-independent magnitude. Gemini has a scheme to describe cloud cover conditions by the measured (cloud) extinction: 0 mag - photometric; <0.3mag - 70%ile; <1.0mag - 80%ile; >1mag - 'any'. A sketch combining the first three loss factors is given below.
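As a minimal sketch, the first three factors can be folded into a single transmission factor. The extinction law below is a hypothetical placeholder (in practice, use the tabulated CFHT coefficients), and the telluric airmass scaling follows the usual Beer-Lambert approximation:

```python
import numpy as np

def atmospheric_transmission(wave_nm, telluric_T, airmass=1.0, cloud_mag=0.0,
                             k_mag_per_airmass=None):
    """Combined atmospheric transmission factor (0..1).

    wave_nm          : wavelength grid [nm]
    telluric_T       : telluric transmission model at airmass 1.0 (0..1)
    airmass          : airmass of the observation
    cloud_mag        : grey cloud extinction [mag], airmass independent
    k_mag_per_airmass: extinction coefficient k(lambda) [mag/airmass];
                       the default is a made-up smooth scattering law,
                       NOT the CFHT data
    """
    if k_mag_per_airmass is None:
        k_mag_per_airmass = 0.10 * (550.0 / np.asarray(wave_nm)) ** 4
    extinction = 10 ** (-0.4 * k_mag_per_airmass * airmass)  # mag/airmass -> flux factor
    clouds = 10 ** (-0.4 * cloud_mag)                        # grey, airmass independent
    telluric = np.asarray(telluric_T) ** airmass             # Beer-Lambert airmass scaling
    return extinction * clouds * telluric
```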
4.) Seeing losses (aka slit or fiber injection losses): Depending on the seeing conditions, only a fraction of the light couples into the object fiber of MAROON-X. The fiber has a diameter (aka FOV) of 0.77" on sky. Gemini expresses the seeing conditions as the FWHM of a Gaussian profile observed at zenith. 50% of the flux is contained inside the FWHM of a 2D Gaussian profile. Under 70%ile seeing conditions, the FWHM in R band at zenith is 0.75", hence almost 50% of the light is lost. Seeing conditions vary with wavelength and zenith distance as described on the Gemini website. In reality, the image quality and coupling efficiency are not well characterized by the FWHM alone, as image jitter (e.g., telescope wind shake) can significantly contribute to coupling losses.
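A minimal sketch of the coupled flux fraction, assuming a purely Gaussian PSF (no image jitter) and the standard Kolmogorov-type scaling of the seeing FWHM with airmass and wavelength:

```python
import numpy as np

def fiber_coupling(fwhm_zenith_arcsec, airmass=1.0, wave_nm=650.0,
                   fiber_diameter=0.77, ref_wave_nm=650.0):
    """Fraction of a 2D Gaussian PSF coupled into a circular fiber.

    Assumes the seeing FWHM scales as airmass**0.6 and wavelength**-0.2;
    ignores image jitter and guiding errors.
    """
    fwhm = fwhm_zenith_arcsec * airmass**0.6 * (wave_nm / ref_wave_nm)**-0.2
    # encircled energy of a 2D Gaussian inside diameter d:
    # 1 - exp(-ln2 * (d/FWHM)**2); equals exactly 0.5 for d = FWHM
    return 1.0 - np.exp(-np.log(2) * (fiber_diameter / fwhm)**2)

# 70%ile seeing (0.75" FWHM in R at zenith) into the 0.77" fiber:
# fiber_coupling(0.75) ~ 0.52, i.e. almost 50% of the light is lost
```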
5.) Instrument throughput: The instrument throughput can be separated into two categories. (a) Reflection and transmission losses on optical surfaces (including the telescope), the broad efficiency profile of the VPH cross-dispersion prisms, and the quantum efficiency (QE) of the CCDs lead to a throughput loss (or, inversely expressed, an instrumental efficiency) that is slowly varying with wavelength. (b) The blaze function of the echelle grating shapes the throughput in each echelle order, its shape closely resembling a sinc()**2 function (see section on pseudo-blaze).
MAROON-X instrument throughput including seeing losses for typical observing conditions. Note the strong blaze function shaping the profile in each order.
6.) Instrument dispersion: In an echelle spectrograph, the spectral resolving power (lambda/delta_lambda) is roughly constant. The instrument dispersion, i.e. the wavelength coverage of each pixel in nm, is thus dependent on wavelength. In addition, due to anamorphism on the echelle grating, the optical magnification and hence the dispersion vary strongly along any spectral order (by about 60% at 500nm and up to 90% at 900nm). This can be seen in the stretching of the slit image along each order, its width varying from about 2.6 to 4.1 pixels from one side of an order to the other. With the dispersion changing in tandem, the resolving power (where delta_lambda is the width of the slit in nm) stays roughly constant. However, this leads to a skew in the measured distribution of counts along an order over the detector for an otherwise 'flat' source. While not a 'loss factor', it shapes the received flux distribution.
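As a rough illustration of why a constant resolving power implies a wavelength-dependent pixel coverage (the R value and sampling below are representative assumptions, not the measured dispersion solution):

```python
def pixel_dispersion_nm(wave_nm, resolving_power=85000.0, slit_width_pix=3.5):
    """Approximate wavelength coverage per pixel [nm], assuming the slit
    image (delta_lambda = lambda / R) spans 'slit_width_pix' pixels,
    which in practice varies from ~2.6 to ~4.1 along an order."""
    return wave_nm / resolving_power / slit_width_pix

# pixel_dispersion_nm(500.0) ~ 0.0017 nm; pixel_dispersion_nm(900.0) ~ 0.0030 nm
```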
The blaze function strongly shapes the flux distribution along each order, with differences from the edge to the center of the order of up to a factor of 10. In theory, the blaze function is described by a sinc(x)**2 function with x = pi * a * X, where a is a constant between 0.5 and 1.0 and X = m(1-w_c/w), where m is the echelle order, w_c is the central wavelength of the order, and w is the wavelength. w_c and m are linked through a grating constant k, with k = m*w_c. The resultant function is fairly symmetric around w_c.
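A minimal sketch of this theoretical profile in Python (the order number, grating constant, and a-value in the usage example are illustrative placeholders, not the measured MAROON-X values):

```python
import numpy as np

def blaze(wave, order_m, k_grating, a=0.8):
    """Theoretical blaze profile sinc(x)**2 with x = pi*a*X and
    X = m*(1 - w_c/w), where w_c = k/m via the grating constant k = m*w_c."""
    w_c = k_grating / order_m
    X = order_m * (1.0 - w_c / wave)
    # np.sinc(y) = sin(pi*y)/(pi*y), so np.sinc(a*X) equals sinc(x) as defined above
    return np.sinc(a * X) ** 2

# e.g. a hypothetical order m=100 with w_c=650nm (k=65000nm):
# blaze(np.linspace(647.0, 653.3, 1000), order_m=100, k_grating=65000.0)
```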
In reality, the shape of the blaze function is more complex, as it has a strong dependence on the polarization of the incoming light, imperfections of the grating, and the light distribution in the collimated beam.
We normally use the flatfield spectrum as a featureless spectrum and normalize the peak flux to unity in each order to describe the blaze function. However, this is not entirely correct: even after normalizing each order, the resultant spectrum contains the underlying spectral energy distribution of the source as well as the change of the instrument throughput along an order. The latter can be rather steep, particularly at the overlap of the blue and red arms, where the dichroic leads to steep drops in flux. In addition, it also includes the change in dispersion and thus the varying bin size (in nm) of each pixel.
The 'true' blaze function is thus strongly entangled with the dispersion and the (relative) instrument throughput. Separating these components is not easy but also not strictly required. We could simply use the flatfield as a representation of the overall instrument throughput but two problems bar us from doing so:
1.) Uncertainty of the underlying source spectrum. We use a Thorlabs SLS201L Tungsten-Halogen lamp with an FGT165 color balancing filter to provide the featureless flatfield spectrum. The source spectrum is poorly characterized by Thorlabs and then further shaped by the relative throughput of various components that are not shared by the stellar spectrum, such as fiber splitters, fiber switchers, etc.; see Fig 1 below.
2.) The absolute brightness of the source flatfield spectrum is unknown. This could in principle be rectified by measuring the brightness at the fiber exit. However, we don't know the losses arising from the double path through the frontend optics to form the fiber image and the coupling losses at the fiber entrance.
In conclusion: dividing a source spectrum by the flatfield as a representation of the pseudo-blaze imprints the unknown source spectrum of the flatfield onto the result.
Fig 1: Theoretical flatfield spectrum based on Thorlabs data for the SLS201L light source and the FGT165 color balancing filter, compared to the spectrum measured with MAROON-X. Here I used the instrumental response function based on the observations of KELT-9. There is a slight difference in overall slope between the theoretical and observed spectrum, which could be due either to residual uncorrected atmospheric effects in the KELT-9 spectrum (e.g. extinction and seeing) or to unaccounted throughput factors in fiber components unique to the flatfield source. Note the discontinuity around 840nm, which is due to the broad hydrogen lines in the spectrum of KELT-9, which are not correctly fitted.
Models (either simulations or empirical) typically provide flux densities at the top of the atmosphere, usually expressed in erg/s/cm^2/A. A number of factors then determine how much of this flux ends up as counts (or data numbers - DN) in each detector pixel. In the first section I have listed the loss factors that dilute the photon flux on its way to the detector. Atmospheric factors (1.-4. in the list above) are described numerically for different observing conditions. The tricky part is point 5. - the instrument throughput. As I described in the previous section on the pseudo-blaze function, the flatfield spectrum is not a good representation of the instrument throughput, as we know neither its spectrum nor its absolute brightness.
One way to practically determine the instrument throughput is to compare a stellar model and a measured spectrum. For example, a Vega-like A0V star has only a few absorption lines and a well determined continuum. KELT-9 is a rapidly rotating (vsini~110km/s) A0V star. Due to the high vsini, we could even use a low-resolution flux-calibrated model from the Pickles library. But for consistency's sake, I used a Phoenix model and applied the scaling factor determined by Jorge Sanchez to match the Phoenix flux (erg/s/cm^2/A at the emitter) to the flux received at the Earth (also erg/s/cm^2/A) for a V=0mag star. The transformation of the model requires the following steps (see the sketch after the list):
1.) flam->photlam conversion (erg/s/cm^2/A to photons/s/cm^2/A) using pysynphot
2.) Applying rotational broadening (PyAstronomy.pyasl) and instrumental broadening (convolution with a Gaussian)
3.) Re-binning onto the MAROON-X wavelength grid
4.) Multiplying by the dispersion (photons/s/cm^2/A -> photons/s/cm^2/pixel)
5.) Multiplying by the receiving area of the telescope and the exposure time (photons/s/cm^2/pixel -> photons/pixel)
6.) Multiplying by the scaling factor to adjust to V=0mag, then by the scaling factor for the true brightness of the star
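A minimal sketch of steps 1-6 in Python, using pysynphot and PyAstronomy as named above; the placeholder inputs, the R=85,000 resolving power, the limb-darkening coefficient epsilon, and the collecting area are illustrative assumptions:

```python
import numpy as np
import pysynphot as S
from PyAstronomy import pyasl

# placeholder inputs; the real ones come from the Phoenix model and the
# MAROON-X wavelength solution (pyasl requires an equidistant grid)
model_wave_A = np.linspace(4900.0, 9200.0, 100000)
model_flux_flam = np.ones_like(model_wave_A)        # erg/s/cm^2/A
mx_wave_A = np.linspace(5000.0, 9000.0, 50000)      # MAROON-X grid
Vmag, t_exp, scale_v0 = 7.6, 300.0, 1.0             # placeholder values

# 1.) flam -> photlam (erg/s/cm^2/A -> photons/s/cm^2/A)
sp = S.ArraySpectrum(wave=model_wave_A, flux=model_flux_flam,
                     waveunits='angstrom', fluxunits='flam')
sp.convert('photlam')

# 2.) rotational broadening, then instrumental broadening (Gaussian)
flux = pyasl.rotBroad(sp.wave, sp.flux, epsilon=0.6, vsini=110.0)
flux = pyasl.instrBroadGaussFast(sp.wave, flux, 85000,
                                 edgeHandling='firstlast')

# 3.) re-bin onto the MAROON-X wavelength grid (simple interpolation;
#     a flux-conserving rebinning would be more rigorous)
flux = np.interp(mx_wave_A, sp.wave, flux)

# 4.) multiply by the dispersion (photons/s/cm^2/A -> photons/s/cm^2/pixel)
flux *= np.gradient(mx_wave_A)          # pixel width in Angstrom

# 5.) multiply by collecting area and exposure time (-> photons/pixel)
flux *= 50.0e4 * t_exp                  # placeholder Gemini area in cm^2

# 6.) scale to V=0mag, then to the true brightness of the star
flux *= scale_v0 * 10 ** (-Vmag / 2.5)
```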
Comparing the resulting spectrum with the measured spectrum of KELT-9 yields the absolute throughput for a given observation of the star. Ignoring atmospheric factors (extinction, seeing losses, clouds) for now, the result would be the instrument throughput. The problem is that the stellar model and the KELT-9 spectrum don't match perfectly, and telluric absorption lines are still present, too. The continuum would need to be fitted to exclude the stellar and telluric lines. Because the instrumental throughput is strongly dominated by the blaze function, it makes more sense to multiply the model by the pseudo-blaze (i.e. the flatfield spectrum) and rather fit the result (essentially the difference between the stellar continuum and the spectrum of the flatfield). Again, this is just a practical approach, minimizing the amount of non-linearity in the continuum to minimize fitting errors. In the same vein, the dispersion can be left in the pseudo-blaze function. I used a manual approach to identify continuum points and then fit a 4th order polynomial to the points (see the sketch below). I would expect errors to be smaller than 10%.
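A minimal sketch of that polynomial fit, assuming the continuum points have already been picked by hand (all values and variable names are placeholders):

```python
import numpy as np

# hand-picked continuum points: ratio of the measured spectrum to
# (model x pseudo-blaze), sampled in between stellar/telluric lines
wave_cont = np.array([505.0, 520.0, 540.0, 560.0, 580.0, 600.0])   # nm
ratio_cont = np.array([0.98, 1.01, 1.00, 0.99, 1.02, 1.00])        # placeholder

coeffs = np.polyfit(wave_cont, ratio_cont, deg=4)   # 4th order polynomial
wave_order = np.linspace(500.0, 605.0, 4000)        # wavelength grid of an order
correction = np.polyval(coeffs, wave_order)         # the 'correction function'
```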
Multiplying this 4th order polynomial (what I would call the 'correction function') with the pseudo-blaze (the flatfield spectrum that includes the dispersion and the relative (per order) instrument throughput) results in a 'response function' that can be applied to any model spectrum:
F_maroonx = F'_model x response function x collecting area x exposure time x magnitude_scaling / gain
with
F'_model being the model spectrum in photons/s/cm^2/A, scaled to V=0mag, rotationally and instrumentally broadened and rebinned onto the MAROON-X wavelength grid;
gain being the CCD gain in e-/DN;
and
magnitude_scaling simply being the flux scaling factor for the stellar magnitude: 10 ** (-Vmag / 2.5)
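Put together as a minimal sketch (the function and argument names are placeholders, not an existing API):

```python
def predicted_counts(model_photlam_v0, response, area_cm2, t_exp, vmag, gain):
    """Predicted spectrum in DN/pixel, following the formula above.

    model_photlam_v0 : F'_model in photons/s/cm^2/A, scaled to V=0mag,
                       broadened and rebinned onto the MAROON-X grid
    response         : response function (pseudo-blaze x correction function),
                       which includes dispersion and relative throughput
    area_cm2         : collecting area of the telescope [cm^2]
    t_exp            : exposure time [s]
    gain             : CCD gain [e-/DN]
    """
    magnitude_scaling = 10 ** (-vmag / 2.5)
    return model_photlam_v0 * response * area_cm2 * t_exp * magnitude_scaling / gain
```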
It should be noted that the response function implicitly contains the absolute throughput for the conditions at the time of the observation of the template star. For KELT-9 on 2020/06/03, this resulted in a peak throughput of only 4.5% in the red at airmass 1.1. The absolute throughput and its dependence on observing conditions (clouds, seeing, airmass, etc.) would have to be backed out, assuming that at airmass 1.0 under IQ70 and CC50 conditions, the peak throughput is currently 8%.