BDF pipeline

The BDF pipeline is designed to take raw bias, dark, and flat files and combine them into "master" files. Conceptually, this involves taking all the files of each type taken during a single night and processing them into a "first-level" master file. It would then be possible to combine masters from different nights to obtain a "better" product, although whether that actually yields better results is TBD. There are (at least) two approaches to applying the masters in the RAW pipeline: one could simply take the newest "acceptable" bias, dark, and flat files, or one could combine files from multiple nights (e.g., by re-computing a larger set after new data are added, or by combining "master" files from different nights in some kind of (possibly weighted) average). Open questions include:

    1. Does combining results from different nights actually improve results?
    2. If combining masters (or re-computing data from multiple nights) does improve results, what is the optimal algorithm?
    3. Is any improvement worth the extra effort for "standard" pipeline processing?
    4. Is any improvement worth re-processing at least some datasets (e.g., for more demanding observing projects)?
    5. Will some projects benefit from re-processing raw data "by hand"? That is, how reliable is the "automatic" pipeline reduction at identifying and rejecting bad data?
    6. Will some projects benefit from re-processing raw data automatically but with tighter quality standards?
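
The two strategies described above could look roughly like the sketch below. This is a minimal illustration only: the frames for each night are assumed to already be loaded as 2-D numpy arrays, and the function names and the choice of median/weighted-average combination are assumptions, not the pipeline's actual implementation.

    import numpy as np

    def first_level_master(frames):
        """Combine one night's frames (a list of 2-D arrays) into a master."""
        return np.median(np.stack(frames), axis=0)

    def combined_master(nightly_masters, weights=None):
        """Optionally combine first-level masters from several nights.

        The weights could be, e.g., the number of frames per night; whether
        this actually beats the newest single-night master is question 1 above.
        """
        stack = np.stack(nightly_masters)
        if weights is None:
            return np.median(stack, axis=0)
        return np.average(stack, axis=0, weights=np.asarray(weights, dtype=float))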

Pipesteps

stepmasterbias
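
This step combines a night's raw bias frames into a single master bias. A minimal sketch of the core operation, assuming a simple median combine (function and variable names are illustrative, not the actual pipestep interface):

    import numpy as np
    from astropy.io import fits

    def make_master_bias(bias_paths):
        # Load each raw bias frame and median-combine along the stack axis
        frames = [fits.getdata(p).astype(float) for p in bias_paths]
        return np.median(np.stack(frames), axis=0)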

stepmasterdark
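
This step combines a night's raw dark frames into a master dark. A minimal sketch, assuming the master bias is subtracted from each dark before a median combine and that all darks share the same exposure time (names are illustrative):

    import numpy as np
    from astropy.io import fits

    def make_master_dark(dark_paths, master_bias):
        # Bias-subtract each raw dark, then median-combine; scaling to a
        # dark rate (counts/s) would additionally require dividing by EXPTIME
        frames = [fits.getdata(p).astype(float) - master_bias for p in dark_paths]
        return np.median(np.stack(frames), axis=0)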

stepmasterflat
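
This step combines a night's raw flat frames into a master flat. A minimal sketch, assuming bias (and optionally dark) removal followed by a median combine and normalization to unit median (names are illustrative):

    import numpy as np
    from astropy.io import fits

    def make_master_flat(flat_paths, master_bias, master_dark=None):
        frames = []
        for p in flat_paths:
            data = fits.getdata(p).astype(float) - master_bias
            if master_dark is not None:
                data -= master_dark  # assumes matching exposure times
            frames.append(data)
        master = np.median(np.stack(frames), axis=0)
        return master / np.median(master)  # normalize so the median is 1.0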

Header construction

Things that should be in the output FITS header

    • List of all input raw bias, dark, and flat files
    • Ambient Temperature
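
A sketch of how these items could be written into the output header with astropy.io.fits. The AMBTEMP keyword and the use of HISTORY cards for the input file list are assumptions, not an established convention of this pipeline:

    from astropy.io import fits

    def write_master(filename, data, input_paths, ambient_temp_c):
        hdu = fits.PrimaryHDU(data)
        # One HISTORY card per input raw bias/dark/flat file
        for p in input_paths:
            hdu.header['HISTORY'] = 'input: %s' % p
        # Ambient temperature at the time the raw frames were taken
        hdu.header['AMBTEMP'] = (ambient_temp_c, 'Ambient temperature [C]')
        hdu.writeto(filename, overwrite=True)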