2022-05-19 Andreas Seifahrt:
SERVAL was missing a mask for the sky emission lines. This affects two orders in the blue (near 557.8 nm and 589.2 nm) and several orders in the red. I built a sky emission line mask from the UVES and Keck line lists and checked it against observed data (LP791-18 and Wolf 359), selecting emission lines visible in both the object and sky fibers on a very 'active' night where the skyglow was highest. The line mask is on mxred: /home/maroonx/Repos/serval3_maroonx/lib/masks/sky_mask/maroonx_skymask_complete.txt
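For illustration, a minimal sketch of the selection criterion; the function name, window size, and SNR threshold are hypothetical placeholders, and the actual vetting was done by eye against the LP791-18 and Wolf 359 observations:

```python
import numpy as np

def select_sky_lines(line_list, wave, obj_flux, sky_flux,
                     halfwidth=0.01, snr_cut=5.0):
    """Keep only candidate lines (from the UVES/Keck sky line lists) that
    show a clear emission excess in BOTH the object and the sky fiber.
    Names, window size, and threshold are illustrative placeholders."""
    # Robust noise estimate of the sky fiber via the MAD.
    noise = 1.4826 * np.median(np.abs(sky_flux - np.median(sky_flux)))
    keep = []
    for lam in line_list:
        win = np.abs(wave - lam) < halfwidth   # window around the line [nm]
        if not win.any():
            continue
        obj_excess = obj_flux[win].max() - np.median(obj_flux)
        sky_excess = sky_flux[win].max() - np.median(sky_flux)
        if obj_excess > snr_cut * noise and sky_excess > snr_cut * noise:
            keep.append(lam)
    return np.asarray(keep)
```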
To use the mask, SERVAL has to be configured in serval_config.py by setting the path_skymask variable to servallibmasks + os.sep + "sky_mask" + os.sep + "maroonx_skymask_complete.txt".
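Concretely, the setting in serval_config.py looks like this (a sketch; servallibmasks already exists in serval_config.py, and the placeholder value below is only for illustration):

```python
import os

# 'servallibmasks' is assumed to already point to SERVAL's lib/masks directory.
servallibmasks = os.path.join("lib", "masks")  # placeholder; defined elsewhere

path_skymask = servallibmasks + os.sep + "sky_mask" + os.sep + "maroonx_skymask_complete.txt"
```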
The influence of proper masking of the sky emission lines is likely minimal and will mostly affect very faint targets, particularly when the stellar line density is low (i.e., solar-type stars).
Using the mask should still become our new default when using SERVAL.
2022-05-19 Andreas Seifahrt:
We have so far always bypassed the 'missing pixel' interpolation in SERVAL and instead done our own interpolation when building the combined flux of 'fiber 6' in our reduction pipeline. The interpolation used a cubic spline, and the uncertainties of the interpolated pixels were set to a high value so that they get downweighted in SERVAL.
This led to two issues:
1. The bad pixel mask missed a bad column in the red channel at pixel 1794. This led to a bad data point with significantly lower flux than the neighboring pixels, next to a group of bad pixels.
2. The cubic spline interpolation, particularly in combination with the issue above, often led to overshooting of the interpolated flux.
Solution: I replaced the cubic spline with a linear interpolation in the combine_science_fibers.py script and marked the bad pixel in the script as NaN. Eventually we need to update the bad pixel mask to make this permanent.
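A minimal sketch of the change (function and variable names are illustrative, not the actual combine_science_fibers.py code):

```python
import numpy as np

BAD_COLUMN_RED = 1794  # bad column missed by the current bad pixel mask

def interpolate_bad_pixels(flux, bad_mask):
    """Linearly interpolate over bad pixels. Unlike a cubic spline, a
    linear interpolation cannot overshoot the neighboring flux values."""
    flux = flux.copy()
    bad_mask = bad_mask.copy()
    bad_mask[BAD_COLUMN_RED] = True   # mark the missed column as bad
    flux[bad_mask] = np.nan           # flag first, then fill by interpolation
    x = np.arange(flux.size)
    good = ~bad_mask
    flux[bad_mask] = np.interp(x[bad_mask], x[good], flux[good])
    return flux, bad_mask
```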
The data have to be re-reduced (fiber combination and pandas export); I have started on this. Re-reduced data are marked in light blue in the MAROON-X data log Google spreadsheet.
I also chose to put a larger penalty on the bad/interpolated pixels by increasing the error of the bad pixels when reading the data into SERVAL. See read_spec.py on mxred. Note: when the penalty is too high (the downweighting too strong), overshooting issues arise in the template if the smoothing factor ("pspllam") is not set accordingly. This needs to be further optimized; see the entry under 'remaining issues'.
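Schematically, the read-in change amounts to something like the following (the inflation factor and all names are placeholders; the actual values live in read_spec.py on mxred):

```python
import numpy as np

def penalize_bad_pixels(err, bad_mask, inflation=10.0):
    """Inflate the uncertainty of bad/interpolated pixels (flagged via
    'bad_mask') so SERVAL downweights them. The factor is a placeholder:
    if set too high, the template spline overshoots unless the smoothing
    factor pspllam is raised accordingly."""
    err = err.copy()
    err[bad_mask] = np.nanmedian(err) * inflation
    return err
```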
2022-05-26 Andreas Seifahrt:
Major:
When combining data from different epochs with large barycentric velocity shifts, some orders in both channels can produce large RV offsets of several km/s and show 'skewed' chi2 profile shapes. This is particularly noticeable for Wolf 359 when combining the April 2021, November 2021, and April 2022 datasets. There is nothing special about the affected orders in terms of the number of knots, density of telluric lines, density of stellar lines, etc. Note that this problem is absent in the old SERVAL, indicating that a combination of the number of knots, the smoothing function, and potentially the background spline treatment might be the issue.
Solved: Increased the oversizing of the template to account for barycentric velocity shifts (hardcoded in maroonx_dictionary_complete() in serval_config.py).
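The idea, schematically (the margin, names, and velocity amplitude are illustrative; the real value is hardcoded in maroonx_dictionary_complete()):

```python
C_KMS = 299792.458   # speed of light [km/s]

def oversize_template(wmin, wmax, v_bc_max=30.0, safety=1.5):
    """Pad an order's template wavelength range so that spectra Doppler
    shifted by up to +/- v_bc_max (Earth's barycentric motion) still land
    on valid template flux. 'safety' adds headroom beyond the shift."""
    f = safety * v_bc_max / C_KMS
    return wmin * (1.0 - f), wmax * (1.0 + f)
```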
When running SERVAL on datasets with very little barycentric change, there are parts of the template that have no 'valid' flux at the positions of telluric and sky emission lines and at the positions of bad pixels. A strong downweighting of bad pixels then causes overshooting in the template interpolation at those pixels. While this seems to have no or only a minor influence on the RVs (given that the template uncertainties are likely equally high in those bad sections), it is an unwanted feature that hinders proper interpretation of the template and residual plots. The solution is to increase the spline smoothing factor "pspllam". This factor used to be controllable as a command-line parameter in the old SERVAL ("-pspline") but is now hardcoded in serval_config.py.
TODO: Reactivate -pspline as a command-line parameter to dynamically set the pspllam variable.
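The effect of the smoothing penalty can be illustrated with scipy's smoothing spline (SERVAL uses its own penalized B-spline code, so this is only an analogy to pspllam, not the actual implementation):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.02 * rng.standard_normal(x.size)

w = np.ones_like(x)
w[90:110] = 1e-3   # strongly downweighted 'bad' pixels, i.e. a template gap

# With little smoothing the fit is free to overshoot across the gap;
# a larger penalty (analogous to raising pspllam) keeps it stable.
overshooty = UnivariateSpline(x, y, w=w, s=0.05)
stable = UnivariateSpline(x, y, w=w, s=5.0)
```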
The default number of knots for the template is int(inst_config["osize"] / 4 * serval_config.ofac). Ofac (the oversampling factor) is 1.0 by default, and the order length ("osize") is 3000 (plus some extra margin for template building), hence the default number of knots is ~800. But the default value ("nK") is logged as 3579. Why is that? We need a better understanding of this moving forward.
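For reference, the formula as written (osize here excludes the extra template margin):

```python
osize = 3000                # order length in pixels (margin excluded here)
ofac = 1.0                  # default oversampling factor
nk = int(osize / 4 * ofac)
print(nk)                   # 750, i.e. ~800 once the margin is included
# This does not reproduce the logged nK = 3579, hence the open question.
```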
Minor: