Choices of a New Camera for MDM
The MDM website is still down, so all the information listed below is based on my own memory and may not be accurate!
The currently available CCDs and most of the filters are all quite old. In the past few years, several of them, including Wilbur, the 8K, TIFKAM, and the Red 4K, developed different problems and were retired. The most recent case is Echelle, which stopped working in Jan. 2022, but its problems were fixed and it has finally come back. Many of the filters are also degrading and have significant defects.
Right now the only usable general-use CCDs are the Blue 4K, Templeton, Echelle, Nellie, and Andor. Andor is a small CCD with a high frame rate (high time resolution); I don't think anyone else (except for me) has used it for general imaging in recent years. Andor does not need liquid nitrogen cooling, while all the others have to be filled with liquid nitrogen twice a day by either the staff or the observer. Parameters of the different CCDs are summarized in Table 1. Nellie is the only thick CCD that is still usable; it has a higher cosmic-ray (CR) rate but better near-IR performance (weaker fringing). Echelle went down in early Jan. 2022 but was recovered later in the semester. Based on my most recent tests, Templeton has better performance (smoother background, higher sensitivity) in the blue (typically ~5000 Å) than Echelle and Nellie, but I did not test the Blue 4K and the other CCDs on the same telescope. The most common combinations at MDM for imaging observations are 1.3m/Templeton and 2.4m/OSMOS/Blue 4K.
Table 1. Specific parameters of MDM CCDs.
According to Eric Galayda, MDM is hoping to acquire a 4K detector that was previously used in Chile. The detector is currently in Tucson, where the Steward Observatory staff are working to upgrade it to an Archon control system. This work was seriously affected by the pandemic, and I don't know its current status. This detector is expected to be the main workhorse for direct imaging at the 1.3m, but I don't know its detailed parameters or the schedule.
Largely from: https://www.princetoninstruments.com/learn/camera-fundamentals/scmos-the-basics
CMOS (Complementary Metal-Oxide Semiconductor) sensor technology differs from CCD (Charge-Coupled Device) technology in its readout architecture. Instead of feeding all sensor pixels through one output node, one amplifier, and one analog-to-digital converter (ADC), as in a CCD, a CMOS sensor works in parallel: there is a miniaturized capacitor and amplifier on every pixel, and an ADC for every column (Figure 1). The ADCs act simultaneously, reading out entire columns rather than individual pixels of the whole sensor. This makes the process much faster and requires ~100x less power than the CCD architecture (a rough illustration is sketched after Figure 1).
In 2009, scientific CMOS (sCMOS) technology was launched, with sCMOS cameras being commercially available in 2010-11. sCMOS cameras are able to provide low noise, high speed, and a large field of view.
Figure 1. Readout architectures of an interline transfer CCD and an sCMOS sensor. Left: Interline transfer CCD format, in which electrons are shifted across the sensor to a readout register and output node, amplified (by a capacitor (C) and an amplifier (A)), and converted into digital grey levels by an ADC before being sent to the computer. Right: Typical CMOS format, where every pixel has its own capacitor and amplifier. Photons hitting each pixel produce electrons, which are converted to a readable voltage on the pixel; the voltages from each column are sent to one of the per-column ADCs and then straight to the computer. This parallelism makes CMOS cameras much faster. Figure obtained from https://www.princetoninstruments.com/learn/camera-fundamentals/scmos-the-basics.
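To illustrate why column-parallel readout is so much faster, here is a minimal back-of-the-envelope sketch (Python). The sensor size and per-pixel conversion time are illustrative assumptions, not the specifications of any particular camera discussed in this document.

# Rough readout-time comparison: single-ADC (CCD-like) vs column-parallel (sCMOS-like).
# All numbers are illustrative assumptions, not measured values for any specific camera.
n_rows, n_cols = 4096, 4096        # assumed 4K x 4K sensor
t_pixel = 1e-6                     # assumed 1 microsecond per pixel conversion

# CCD-like: every pixel passes through one output node / ADC in series.
t_ccd = n_rows * n_cols * t_pixel

# sCMOS-like: one ADC per column, all columns converted in parallel,
# so each ADC only digitizes the pixels of its own column (row by row).
t_scmos = n_rows * t_pixel

print(f"Single-ADC (CCD-like)  : {t_ccd:5.1f} s per frame")          # ~16.8 s
print(f"Column-parallel (sCMOS): {t_scmos * 1e3:5.1f} ms per frame")  # ~4.1 ms

In practice the two architectures also differ in amplifier speed and noise trade-offs, but the scaling with the number of ADCs is the dominant effect.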
CCD vs sCMOS
The differences in CCD and sCMOS sensor architecture provide both advantages and disadvantages. Some of the advantages of sCMOS technology over CCD technology are as follows:
Low read noise: sCMOS features read noise of ~1 e-, compared to 5-6 e- for CCDs. ljt comments: the read noise of most of the products in Table 2 is in general higher than this value; I'm not sure whether this is caused by the larger format or not.
High speeds: sCMOS can achieve up to hundreds of fps, compared to ~20 fps for CCDs, because each column has its own ADC and therefore only a fraction of the data to process. ljt comments: both values are lower for a large-format sensor.
Large field of view and cheaper cost: sCMOS sensors range from 19-29 mm diagonal, compared to 11-16 mm for CCDs. ljt comments: I believe these numbers are a little conservative or out of date; it is now quite common to find both bigger sCMOS and bigger CCD sensors on the market. But it is in general true that it is easier to make a bigger sCMOS than a bigger CCD, and the cost is much lower.
Power efficiency: sCMOS uses 100x less power than CCDs due to parallelization.
Although sCMOS technology has advantages over CCD technology in a number of ways, it also has some disadvantages:
Increased temporal and fixed-pattern noise: as each pixel is read out individually, more temporal and fixed-pattern noise is introduced. sCMOS sensors have more active readout areas than CCDs, which increases these noise sources. This can be reduced by careful electronic design and calibration by the camera companies.
Rolling shutter artifacts: sCMOS sensors can use a rolling shutter to acquire images; however, if dynamic objects in the image move on a timescale similar to the roll of the shutter, distortion effects can be introduced. The staggered readout between the top and bottom rows of the sensor can also cause image artifacts and lost information, and significantly increases the effective minimum exposure time for some applications. ljt comments: however, some people argue that this is no longer the case with the newest models, which are mostly equipped with global shutters.
Maximum exposure time and dynamic range: it is sometimes argued that main advantages of CCDs are their long maximum exposure times and their 16-bit (or deeper) ADCs. ljt comments: I'm not entirely sure about this. Although it may be true in general, Table 2 shows that some sCMOS camera models have quite good dynamic range and maximum exposure times of up to ~1 hour (see the sketch below).
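As a quick sanity check on the dynamic-range argument, here is a small sketch comparing the dynamic range implied by full-well capacity and read noise. The full-well and read-noise values are placeholders chosen for illustration, not numbers taken from Table 2.

import math

# Dynamic range ~ full-well capacity / read noise.
# The numbers below are illustrative placeholders, not the specs of any camera in Table 2.
sensors = {
    "typical CCD":   {"full_well_e": 100_000, "read_noise_e": 5.0},
    "typical sCMOS": {"full_well_e": 40_000,  "read_noise_e": 1.6},
}

for name, s in sensors.items():
    dr = s["full_well_e"] / s["read_noise_e"]
    bits = math.log2(dr)               # ADC bits needed to sample the full range
    db = 20.0 * math.log10(dr)         # dynamic range in dB
    print(f"{name:14s}: DR = {dr:8.0f} ({bits:4.1f} bits, {db:5.1f} dB)")

With these (hypothetical) numbers, the lower read noise of the sCMOS roughly compensates for its smaller full well, so a 16-bit ADC is adequate in both cases.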
Back-Illuminated sCMOS
Some early sCMOS sensors had issues with background quality and noise, limiting the use of CMOS for more demanding applications. To overcome this, higher sensitivity was required. In 2016, back-illuminated sCMOS was introduced, offering a peak quantum efficiency (QE) of 95% without compromising pixel size, while also improving background quality. Figure 2 shows a diagram of back-illuminated sCMOS technology, alongside a QE curve comparing early sCMOS and back-illuminated sCMOS.
As with CCDs, the back-illuminated version of sCMOS needs to be made very thin. This reduces yields quite a bit, especially for larger sensors, and so drives up the price. Nevertheless, this is compensated by the light sensitivity of a back-thinned sCMOS at its peak absorption wavelength: up to 95% of incident photons are turned into usable signal.
Figure 2. Front- vs back-illuminated sCMOS technology. Top: Front-illuminated sensors (left) have lower QE because light is scattered within the pixel and sensor wiring before hitting the silicon substrate. In back-illuminated sensors (right) the light hits the silicon directly, resulting in a much higher QE. Bottom: QE curves for various front-illuminated sCMOS technologies (early, 72% and 82% sCMOS) compared with back-illuminated sCMOS technology (peaking at 95% QE with the KURO).
Advantages of a new thermoelectrically cooled sCMOS camera, which would mostly be used with the 1.3m:
(1) Maintenance cost. We would need less manpower to maintain the camera. Right now we need to fill the dewar twice a day, which limits the use of the telescope during holidays. A thermoelectrically cooled detector would also save us quite a bit of money on nitrogen, at least a few hundred dollars a month.
(2) Calibration time. The readout of the old CCDs is always slow (>2 min for a 4K). This is not a big issue for deep imaging or spectroscopy, but it does affect the efficiency of some calibration observations, especially if you need twilight flats in many filters (see the rough estimate after this list).
(3) Cost of filters. The 4" filters are always expensive, and many of the filters at MDM are quite old (we don't even have the transmission curves in digital format). If we want to purchase a series of new filters, 2" will be much less expensive.
(4) Design of new instruments. A smaller detector with a focal reducer is always cheaper than a larger-format CCD or a new secondary mirror. If we would like to make the 1.3m more efficient for multi-purpose observations, it would be better to have an imaging spectrograph like OSMOS, so people can change the observing setup at night without asking the staff to change the instrument during the daytime. I think it would be less expensive to design such an instrument around a smaller detector, with less expensive lenses, dispersers, slit masks, or image slicers. The size and weight of the instrument could also fit the mounting requirements of the 1.3m.
(5) Modern accessories. Some of the latest CMOS cameras have impressive accessories, like the AO system developed by SBIG. I don't really expect this cheap AO system, based on an off-axis guiding camera without laser guiding, to be very efficient at correcting the atmosphere, but I hope it helps to achieve real-time autofocus of the telescope. Right now both telescopes can only be focused manually; the focus differs between filters, which is always a pain if you need multi-band photometry. In many cases, we cannot really reach a resolution comparable to the seeing.
(6) Teaching, training, and outreach. A modern, easy-to-use camera will make the telescopes easier to use (either remotely or on-site). This is important for teaching, training new observers, and organizing outreach events, especially now that Magellan has become almost fully remote.
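To quantify point (2) above, here is a rough estimate of the readout overhead for a set of twilight flats. The numbers of filters, flats per filter, and readout times are assumptions for illustration only; the actual readout time of any new camera should be confirmed with the manufacturer.

# Rough readout overhead for twilight flats (all numbers are illustrative assumptions).
n_filters = 6            # assumed number of filters needing flats
flats_per_filter = 5     # assumed number of flats per filter
t_read_old = 120.0       # s, old 4K CCD (>2 min per frame, as quoted above)
t_read_new = 2.0         # s, assumed readout of a modern sCMOS

def readout_overhead(t_read):
    """Total time spent purely on readout, in minutes."""
    return n_filters * flats_per_filter * t_read / 60.0

print(f"Old CCD  : {readout_overhead(t_read_old):5.1f} min of readout overhead")  # 60.0 min
print(f"New sCMOS: {readout_overhead(t_read_new):5.1f} min of readout overhead")  #  1.0 min

Since the usable twilight window is short, saving most of that hour of overhead makes it much more realistic to obtain flats in all filters.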
There are a few manufacturers producing high-end astronomy cameras. The most common ones are SBIG, QHY, and FLI. In Table 2, I list a few products from these three manufacturers, especially the 4040 cameras, which all use the same GPIXEL sensor (this is the option I prefer), so you can compare the parameters of the products. The parameters are obtained from different sources, so they may not be perfectly uniform for comparison, but I believe they are OK as general references. I also list two 6K cameras, the QHY6060 and the FLI Kepler 6060, which are larger but also much more expensive models. A comparison of sensor formats is shown in Figure 3. I don't recommend purchasing the best and most expensive camera; instead, we should set a budget (e.g., $50,000) first and find something that fits it.
Table 2. Parameters of some high-end astronomy cameras on the market.
Figure 3. Size comparison of the sensors used by FLI. The other manufacturers also have products based on the same sensors. Note that the size of MDM4K is comparable to the 6K sCMOS sensor (the largest box in this figure).
The price of these high-end astronomy cameras is affected by a few issues:
(1) Grade of the sensor. This is mostly determined by the yield. The semiconductor manufacturing process is always imperfect, so all products have defects. Here is a nice introduction to the common defects of CCDs and how they are generally graded:
https://www.photometrics.com/learn/imaging-topics/ccd-grading
I don't really think a few more bad pixels would significantly affect our observations, yet the grade makes the price of a camera with a grade 1 sensor about twice that of one with a grade 2 sensor.
In principle, CMOS sensors are not generally graded: because of their architecture, they generally either work or they don't. In contrast, CCDs are graded, typically by the number of column defects (which CMOS sensors do not have because of their different readout architecture). I don't know exactly what the grades quoted for sCMOS sensors mean; I just guess they are similar to those for CCDs.
(2) Front- or back-illuminated (FSI vs BSI). As introduced in section 2.1, a BSI sensor has to be thinned, so it has a lower yield and a much higher price (the difference can be a factor of 2-3). However, the overall sensitivity is much better, with a peak QE of typically ~95% for a BSI versus 70-80% for an FSI (Figures 2, 4). This could make a huge difference in observations, so I prefer a BSI even if it is much more expensive. But note that the extra sensitivity of a BSI is mostly in the blue (and near-UV), so if most of our interest is in the red or near-IR (e.g., >7000 Å), a thick FSI sensor could be even better. In principle, because an FSI sensor is often thick, it should also have better control of fringing, at the cost of a higher CR rate, but I'm not exactly sure about these points for these consumer cameras.
(3) Coating of the camera. Coating makes a huge difference in the QE curve (Figure 5), but not a significant difference in price (Table 2). Therefore, we should choose the coating mainly based on our scientific interests, either a UV-VIS sensitive coating or a VIS-NIR sensitive coating (sorry, I did not find QE curves directly comparing the two; we should request them from the manufacturer).
(4) Accessories. Some manufacturers offer some really cool accessories (like an autofocuser, a filter wheel, an integrated off-axis autoguiding system that may be useless for MDM, and the simple AO systems from SBIG or PlaneWave), which add some additional cost. The real benefits of some of these accessories need to be further explored; unfortunately, for some of them there is little information online.
(5) Manufacturer and other issues. There are many other issues that may affect the price and are difficult to quantify. For example, new CMOS chips have different modes that the camera manufacturer may or may not be able to take advantage of (e.g., the maximum exposure times of cameras made by different manufacturers with the same chip can be different). Other issues, such as the electronics (e.g., the analog-to-digital converters), the isolation of the circuit design, the cooling system, etc., can all affect the price and quality of the camera and are hard to compare from the listed parameters. Furthermore, the country of the manufacturer is an additional factor, with Chinese manufacturers (e.g., QHY and ZWO) often giving the best prices for low-end products, though this is less significant for high-end products.
The first two issues are dominant, while the latter three are relatively minor.
Figure 4. Comparison of the QE curves of the FSI and BSI sensors of an SBIG AC4040 camera.
Figure 5. Comparison of the QE curves of different coatings (using the curves of a CCD camera: SBIG Aluma CCD 47-10).
36x24 mm is full frame in photography, while 60x60 mm is roughly medium format. This means we may be able to use full-frame or medium-format SLR lenses in the camera design, which is typically much cheaper than a professional astronomy focal reducer. These lenses typically have fairly good vignetting control, but I am a little worried about the sharpness at full aperture. The pixel size of a consumer SLR camera is typically much smaller than that of the astronomy cameras discussed here, so I hope sharpness is not a big issue. Note that a larger-format lens typically has better vignetting control but is less sharp, so we need a balance here. Testing different lenses should not be too expensive compared to the price of the camera.
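As a quick check on the vignetting concern, the sketch below compares the image circles of the two photographic formats with the footprint of a 4K sCMOS sensor. The 4096x4096, 9 micron sensor size is an assumption based on the GPIXEL 4040 format; the image-circle diameters are simply the diagonals of the formats quoted above.

import math

# Sensor footprint of an assumed 4096 x 4096, 9-micron sCMOS (GPIXEL 4040-like format).
n_pix, pix_um = 4096, 9.0
side_mm = n_pix * pix_um / 1000.0            # ~36.9 mm
diag_mm = math.hypot(side_mm, side_mm)       # ~52.1 mm

# Nominal image-circle diameters of the two photographic formats mentioned above.
image_circle = {
    "full-frame (36x24 mm) lens":    math.hypot(36.0, 24.0),   # ~43.3 mm
    "medium-format (60x60 mm) lens": math.hypot(60.0, 60.0),   # ~84.9 mm
}

print(f"Sensor: {side_mm:.1f} x {side_mm:.1f} mm, diagonal {diag_mm:.1f} mm")
for lens, circle in image_circle.items():
    status = "covers the corners" if circle >= diag_mm else "vignettes the corners"
    print(f"{lens}: image circle ~{circle:.1f} mm -> {status}")

With these numbers, a full-frame lens would illuminate most of a ~37x37 mm sensor but vignette the extreme corners, which supports also testing medium-format lenses.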
I prefer a 4K sCMOS camera with a grade 2 BSI sensor, which typically costs ~$35,000 (not including accessories). My favorite model is the SBIG Aluma AC4040, as it also has a package integrating the AO system. I don't really expect this simple tip-tilt AO system to be very helpful in correcting atmospheric fluctuations, but I would indeed like to use it as a real-time autofocuser, which could be quite helpful for our old telescopes at MDM. Personally I prefer the UV-VIS coating, but I would be happy to hear people's advice. In the first stage, I do not suggest including too much budget for a full imaging (or even imaging/spectrograph) system with an additional focal reducer, a secondary filter wheel, and many filters. I prefer to first do some test observations with the prototype camera on the 1.3m before designing a complete imaging system such as OSMOS. Some small budget for accessories, such as lenses used as focal reducers, may be needed in the first stage, but not for the complete imaging system, which should be considered in the second stage. Filters could be purchased by individual faculty members based on their specific research interests, or the department could purchase a series of medium- or narrow-band filters based on common interests.
Confirm the specific parameters, prices, and accessories of different products.
QE curves of different coatings.
Anti-dew system.
Prices of the standard-line and customized filters.
Send me your concerns ......
Brief description of the project: deep (typically no less than 4-5 hours per band per field) narrow-band imaging with the MDM 1.3m telescope, using a few narrow-band filters contiguous in wavelength, to identify LAE overdensity candidate fields over a continuous redshift range, in order to search for large-scale structures at high z (currently the redshifted [OIII] filters are used for LAEs at z~3.1-3.3). The sample is selected based on strong CIV absorbers identified in the spectra of bright background quasars (the HIERACHY project led by Jiangtao Li). The selected LAE overdensity candidate fields will then be confirmed with Magellan multi-object spectroscopy and other follow-up multi-wavelength observations.
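For reference, here is the simple redshift arithmetic behind the filter choice (the rest-frame Lya wavelength is standard; the redshift range is the one quoted above):

# Observed Lyman-alpha wavelength as a function of redshift.
LYA_REST_A = 1215.67           # rest-frame Lyman-alpha wavelength in Angstroms

def lya_observed(z):
    """Observed Lyman-alpha wavelength (Angstroms) at redshift z."""
    return LYA_REST_A * (1.0 + z)

for z in (3.1, 3.2, 3.3):
    print(f"z = {z:.1f}: Lya observed at {lya_observed(z):6.0f} A")
# z = 3.1 -> ~4984 A and z = 3.3 -> ~5227 A, i.e. wavelengths near and redward of
# [OIII] 5007 A, hence the use of the redshifted [OIII] filters for this redshift range.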
How could the new camera help?
1. All of the nebular narrow-band filters at MDM are very old, many with significant defects, and they do not cover some of the strong CIV absorbers identified over a broader redshift range in our sample. There are not even transmission curves available in digital format, which prevents an accurate calculation of the dropoff magnitude used to identify LAEs. We need new filters covering a broader redshift range, optimized for the different scientific goals. In particular, we need bluer filters covering lower redshifts.
2. Both the 1.3m and the 2.4m telescopes can only be focused manually, which is slow and less accurate. The focus varies between pointings and filters, so it needs to be adjusted many times over a single night, which takes a lot of time. In many cases, we cannot even reach the seeing-limited PSF because of inaccurate focus. A new camera with an autofocuser and/or a simple AO system would help us keep the focus accurate, greatly improving the resolution and detection limit of the images. The smaller pixel size (~0.2" on the 1.3m at f/7.5 for the sCMOS listed in Table 2; see the sketch after this list) also helps to better sample the PSF, which we aim to keep at the sub-arcsecond level.
3. The readout of the current MDM CCDs is always very slow. If multiple filters are needed in a single night, there is often not enough time to take sky flats and accurately focus the telescope because of the large readout overhead. The new camera would have a much shorter readout time, saving a lot of time for calibration.
4. The drawback of the new camera is its smaller FOV (12.5' on the 1.3m at f/7.5 for the 4K sCMOS listed in Table 2, without a focal reducer), but this FOV is in general large enough for this project (most of our high-z structures of interest have angular scales of ~3'-4').
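The pixel scale and FOV quoted in points 2 and 4 follow from the standard plate-scale formula. The 9 micron pixel, 4096x4096 format, and f/7.5 focal ratio are assumptions consistent with the sensors discussed in Table 2 and with the 1.3m; the small difference from the quoted 0.2"/pixel and 12.5' comes from rounding and the exact focal length adopted.

# Plate scale and field of view on the 1.3m at f/7.5 (assumed values).
ARCSEC_PER_RAD = 206265.0

aperture_m = 1.3          # assumed 1.3 m aperture
f_ratio = 7.5             # assumed f/7.5 focal ratio
pix_um = 9.0              # assumed 9-micron pixels (GPIXEL 4040-like sensor)
n_pix = 4096              # assumed 4096 x 4096 format

focal_length_m = aperture_m * f_ratio                            # 9.75 m
pixel_scale = ARCSEC_PER_RAD * pix_um * 1e-6 / focal_length_m    # arcsec per pixel
fov_arcmin = n_pix * pixel_scale / 60.0                          # field of view per side

print(f"Pixel scale: {pixel_scale:.2f} arcsec/pixel")    # ~0.19 "/pix
print(f"FOV        : {fov_arcmin:.1f} arcmin per side")  # ~13 arcmin

A hypothetical 0.5x focal reducer would roughly double the FOV (to ~26') while coarsening the sampling to ~0.38"/pixel, which is one of the trade-offs to consider in the stage 2 design.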
The current plan is to request MIRA funding (I'm thinking about the category of "injections"; cap $30,000) plus some department support to purchase a new camera for MDM (this may also include some accessories used for tests, which could be critical for the stage 2 design), most likely used mainly on the 1.3m. This is all we need for stage 1. If it works well, we may then think about building a complete imaging system (like OSMOS) around this camera in stage 2.
Please fill out this Google form to answer a few questions about the new camera and your research interests. Your feedback will be very important for me to complete a MIRA proposal requesting funding for a new camera!