Adding Zero–Spacing Information to Interferometry Data

You might think that adding zero–spacing information into interferometer data is a solved problem. The mathematical foundations of the problem are straightforward, and plenty of interferometer data reduction packages have been in use for decades. But if you ask around, you will not find solutions that work out of the box. In fact, the solutions that do work seem to require quite a bit of tweaking. I am part of three large collaborations that now face this problem (CLASSy, the CARMA Orion survey, and the SMA Legacy Survey of the Galactic Center), and all of these projects had to tackle the problem from scratch.

Here I collect some of the lessons I learned while being part of these projects. There is no perfect solution to the problem of bringing zero–spacing information back into the analysis, but there are approaches that produce acceptable results. I lay out some of these approaches below.

It is my experience that the best possible results are obtained when combining single–dish and interferometer data in the UV plane. I did this in MIRIAD for one project, and it worked beautifully. However, the MIRIAD script has a lot of error–prone intermediate steps. In addition, we found that MIRIAD results can differ between software versions and computers. At the moment it seems that CASA provides the easiest to understand and best documented avenue to produce maps combining single–dish and interferometer data. Keep in mind that this is an approximate procedure, which nevertheless appears to produce decent results.

Please feel free to contact me. The data reduction landscape evolves constantly, and these pages can only capture a current snapshot of the situation.

This webpage can be accessed at http://tinyurl.com/zero-spacing. Please use this link when referring to this website.

Combination in the UV Plane: Reduction in MIRIAD

MIRIAD makes it possible to combine single–dish and interferometer data in the UV plane in a mathematically correct way. This produces images that are largely free of artifacts, and the approach is probably superior to other approaches to data reduction. I based my pipeline on an initial script by Qizhou Zhang (Harvard–Smithsonian CfA), who has used this approach for various publications in the past.

However, this script is rather complex. We also found that MIRIAD results differ between software versions and individual computers. This makes MIRIAD a less than ideal package for data reduction, in particular for larger collaborations where the data reduction effort has to be shared. Still, MIRIAD produces very good results if one can fine–tune the reduction for a few sources and emission lines.

Note that other packages, such as IRAM's GILDAS, provide alternative ways to combine single–dish and interferometer data. I have not pursued these avenues and cannot comment on them.

CASA CLEAN and Feathering: An Overview

At this moment I advocate the use of CASA for the combination of single–dish and interferometer data. My choice is driven by two factors. First, CASA scripts are easy to read and modify (e.g., compared to much longer MIRIAD scripts). This is an important advantage when working in larger collaborations. Second, CASA is the obvious choice for ALMA data, which is important for future projects. That said, it is my experience that MIRIAD produces the better results.

Below I document an imaging pipeline. The CASA scripts below are still being tested, and it is conceivable that some of the steps contain quantitative bugs. You should thus be careful with the pipeline as given here.

With these caveats in mind you are welcome to use the code as provided here.

The process can be decomposed into a number of separate steps.

  1. Setup of relevant parameters. This includes, e.g., the source name and the requested velocity resolution.
  2. Read the visibilities into CASA.
  3. Prepare the single–dish data. This step ensures, for example, that the interferometer and single–dish data are gridded to the same velocity/frequency axis. Another important non–trivial step is to add a properly formatted Stokes axis to the single–dish data.
  4. Deconvolve the interferometer data via the "clean" algorithm, using a single–dish image as a first guess for the source structure on large spatial scales.
  5. Determine the interferometer–based clean components in excess of the model image. Convolve these components with the clean beam.
  6. Combine the convolved interferometer–induced clean components with the single–dish data via the "feather" algorithm to produce a new source model.
  7. Use this source model in another iteration with clean.

I also include a few more steps that can be used to compare the final result with the initial single–dish image.
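The iteration above can be sketched in a few lines of ordinary Python. Every function here is a placeholder standing in for the CASA tasks documented in the sections below (clean, immath, imsmooth, feather); none of these names is a real CASA call, and the sketch only tracks the data flow.

```python
# Runnable sketch of the data flow in the numbered steps above. All functions
# are hypothetical stand-ins for CASA tasks; they only record the order of
# operations and pass image names around.
calls = []

def clean_with_model(vis, model, label):
    calls.append('clean')                        # steps 4 and 7: deconvolve
    return label + '.model', label + '.image'

def excess_components(model, sd_model):
    calls.append('excess')                       # step 5: components beyond SD model
    return 'int_only.model'

def convolve_with_beam(model):
    calls.append('convolve')                     # step 5: smooth to the clean beam
    return 'int_only.image'

def feather_combine(int_image, sd_image):
    calls.append('feather')                      # step 6: merge in the UV plane
    return 'combined.model'

sd_image = 'sd.image'                            # step 3: prepared single-dish map
model, _ = clean_with_model('data.ms', sd_image, 'pass1')
int_only = convolve_with_beam(excess_components(model, sd_image))
combined = feather_combine(int_only, sd_image)
_, final_image = clean_with_model('data.ms', combined, 'pass2')
```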

Detailed Instructions

The script provided here reduces H2CO data from the SMA and combines the data with single–dish observations from APEX. The code is broken up into a number of sections for easier documentation.

Note that path names are specific to my directory tree. My file system is structured as follows. You will need to create a similar directory structure or change file names to make my script work.

   .
   |-data_sd
   |-data_uv
   |---process
   |---uv_casa
   |-----LineName_1
   |-----LineName_2
   |---uv_fits
   |-----LineName_1
   |-----LineName_2
   |---uv_miriad
   |-imaging

This assumes that you are working on two emission lines called "LineName_1" and "LineName_2".
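The whole tree can be created in one go. The directory names below are exactly those from the listing above; substitute your own line names as needed.

```shell
# create the directory tree shown above, for two lines LineName_1 and LineName_2
mkdir -p data_sd imaging \
         data_uv/process data_uv/uv_miriad \
         data_uv/uv_casa/LineName_1 data_uv/uv_casa/LineName_2 \
         data_uv/uv_fits/LineName_1 data_uv/uv_fits/LineName_2
```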

I suggest placing all of the lines below in a script. This script can then be executed as

execfile('ScriptName.py')

within CASA. The code below was tested on CASA version 4.2.2. The script assumes that it is run in the "imaging" subdirectory.

Initial Steps outside of CASA

The script starts by ingesting UVFITS data. In the test case these data are SMA visibilities generated via MIRIAD. You need to move these visibility files into the uv_fits directory in the tree given above.

You also must move single–dish data sets into the data_sd folder. The script below takes the specific file names as input parameters, so you may use any naming convention convenient for you.

Initial Input Parameters

The first part of the script sets up a number of parameters. This part should be largely self–explanatory.

You will need to adjust this part of the script for your own purposes.

#
# set up parameters
#

# load some libraries
import re


# provide line information

LineID = 'H2CO_1'
RestFrequency = '218.2222GHz'       # must be in GHz
SidebandID = 'lsb'


# provide source information

SourceID = 'G0.326-0.085'
SDImage = 'cutout_G0.34-0.09_217to219_bl.fits'
SourceLocation = 'J2000 17h46m44.351 -28d41m34.5'
ImageSize = [800, 800]
NumberIterations = 10000
CleaningThreshold = '100mJy'
StartVelocity = '0km/s'
NumberChannels = 70
FudgeScale = 1.        # adjustment for flux calibration errors


# target resolution
VelocityResolution = '2.0km/s'     # must be in km/s
CellSize = '1arcsec'


# information on single-dish telescope
SizeSingleDishMeter = 12.
SDEfficiency = 0.75                # main beam efficiency to scale to Jansky
SDCutoffScaleMeter = 8.            # UV cutoff scale for feathering


# some additional conversions
RestFrequencyGHz = float(RestFrequency.replace('GHz', ''))
VelocityResolutionKMS = float(VelocityResolution.replace('km/s', ''))
FrequencyResolution = str(333.564095*VelocityResolutionKMS*(RestFrequencyGHz/100.)) + 'kHz'
CellSizeArcsec = float(CellSize.replace('arcsec', ''))
SingleDishResolutionArcsec = 74.88 / (RestFrequencyGHz/100.) / \
                             (SizeSingleDishMeter/10.)
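The numerical constants in these conversions can be checked from first principles: the channel width follows from Δν/ν = Δv/c, and the single–dish beam from θ ≈ 1.2 λ/D. A standalone check (plain Python, run outside CASA, using the example values from this script):

```python
# Sanity check of the conversion constants above, using the example values
# 218.2222 GHz, 2 km/s, and a 12 m dish.
c_kms = 299792.458                     # speed of light in km/s

rest_ghz = 218.2222
dv_kms = 2.0
dish_m = 12.0

# channel width: dnu = nu * dv / c (the 1e6 factor converts GHz to kHz)
dnu_khz = rest_ghz * 1e6 * dv_kms / c_kms
dnu_script = 333.564095 * dv_kms * (rest_ghz / 100.0)
print(dnu_khz, dnu_script)             # both ~1456 kHz

# single-dish beam: the script's constant corresponds to theta ~ 1.21 lambda/D
theta_script = 74.88 / (rest_ghz / 100.0) / (dish_m / 10.0)
lam_m = 0.299792458 / rest_ghz
theta_lambda_over_d = 1.211 * lam_m / dish_m * 206264.8   # radians -> arcsec
print(theta_script, theta_lambda_over_d)                  # both ~28.6 arcsec
```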

Read the Visibility Data

This step first converts the UVFITS data into a measurement set that can be handled by CASA. The data are then imaged without cleaning (niter = 0) to obtain a dirty image that serves as a reference for later gridding steps.

#
# prepare visibilities
#

# ingest UVFITS file into CASA
UVFileNaming = LineID + '/' + SourceID + '_' + SidebandID
os.system('rm -rf ../data_uv/uv_casa/' + UVFileNaming + '.ms')
importuvfits('../data_uv/uv_fits/' + UVFileNaming + '.uvfits', \
             vis = '../data_uv/uv_casa/' + UVFileNaming + '.ms')


# create a dirty image cube (niter = 0) as a gridding reference
os.system('rm -rf deconvolved*')
clean(vis = '../data_uv/uv_casa/' + UVFileNaming + '.ms',
     imagename = 'deconvolved',
     spw='',
     restfreq = RestFrequency,
     imagermode = 'mosaic',
     mode = 'velocity',
     outframe='lsrk',
     width = VelocityResolution,
     start = StartVelocity,
     nchan = NumberChannels,
     interactive = False,
     imsize = ImageSize,
     cell = CellSize,
     phasecenter = SourceLocation,
     weighting = 'briggs',
     niter = 0,
     threshold = '20mJy',
     usescratch = True)

Prepare the Single–Dish Image

This procedure assumes that the single–dish data are in a FITS file. The telescope efficiency set in the initial parameters is used to convert the data to the main-beam temperature (Tmb) scale. The single–dish data are first converted into CASA's image format. This is followed by several less trivial steps.

  1. Smoothing of the single–dish data to the velocity resolution of the interferometer data.
  2. Transposing the smoothed single–dish image to have a correctly formatted Stokes (i.e., polarization) axis.
  3. Gridding of the transposed velocity–smoothed single–dish data into the velocity bins of the interferometer data.

Further steps (i) convert the single–dish image into the Jy/pixel scale and (ii) replace blanked values by zeros to have well–behaved Fourier transforms in later steps.

#
# prepare single-dish data
#

# convert APEX map
importfits(fitsimage = '../data_sd/' + SDImage,
           imagename='apex.image',
           overwrite=True)
imhead('apex.image',
       mode='put',
       hdkey='telescope', hdvalue='ALMA')


# smooth the velocity axis of the APEX data
SDCellSizeX = str(abs(imhead('apex.image', mode='get', hdkey='cdelt1')['value'] * \
                      180.*3600./pi)) + 'arcsec'
SDCellSizeY = str(abs(imhead('apex.image', mode='get', hdkey='cdelt2')['value'] * \
                      180.*3600./pi)) + 'arcsec'

os.system('rm -rf apex_vsmooth.image')
ia.open('apex.image')
im2 = ia.sepconvolve(outfile='apex_vsmooth.image',
                     axes=[0,1,2], types=['box','box','box'],
                     widths=[SDCellSizeX,SDCellSizeY,FrequencyResolution],
                     overwrite=True)
im2.done()
ia.close()


# beat the stokes axis into proper format
os.system('rm -rf apex_vsmooth_stokes.image')
ia.open('apex_vsmooth.image')
im2 = ia.regrid(dropdeg=True)
im3 = im2.adddegaxes(outfile='apex_vsmooth_stokes.image', overwrite=True,
                     stokes='I')
im2.done()
im3.done()
ia.close()

imhead('apex_vsmooth_stokes.image')

os.system('rm -rf apex_trans.image')
imtrans(imagename='apex_vsmooth_stokes.image',
        outfile='apex_trans.image',
        order='0132')

imhead('apex_trans.image')


# regrid single-dish image
os.system('rm -rf apex_trans_regrid.image')
ReferenceFrame = imregrid('deconvolved.image', template='get')
imregrid(imagename='apex_trans.image',
         template=ReferenceFrame,
         output='apex_trans_regrid.image',
         asvelocity=True,
         interpolation='linear',
         overwrite=True)

imhead('apex_trans_regrid.image')

imhead('deconvolved.image')


# flux conversion scaling for single-dish image
JyperKelvin = 0.817 * (RestFrequencyGHz/100.)**2. * \
              (SingleDishResolutionArcsec/10.)**2.
FactorJyperPixel = CellSizeArcsec**2. / \
                   (1.133 * SingleDishResolutionArcsec**2.)
ConversionFactor = FudgeScale * JyperKelvin * FactorJyperPixel / SDEfficiency

os.system('rm -rf apex_trans_regrid_scaled.image')
im1 = ia.imagecalc(outfile='apex_trans_regrid_scaled.image',
                   pixels=str(ConversionFactor)+'*apex_trans_regrid.image',
                   overwrite=True)
im1.done()
ia.close()

imhead('apex_trans_regrid_scaled.image',
       mode='put',
       hdkey='bunit', hdvalue='Jy/pixel')


# remove blanking for single-dish data
os.system('rm -rf apex_trans_regrid_scaled_unmasked.image')
os.system('cp -r apex_trans_regrid_scaled.image ' +
          'apex_trans_regrid_scaled_unmasked.image')
ia.open('apex_trans_regrid_scaled_unmasked.image')
ia.replacemaskedpixels(0., update=True)
ia.close()


# remove negative values in single-dish image
os.system('rm -rf apex_trans_regrid_scaled_unmasked_noneg.image')
immath(outfile='apex_trans_regrid_scaled_unmasked_noneg.image',
       imagename=['apex_trans_regrid_scaled_unmasked.image'],
       mode='evalexpr',expr='iif(IM0 >= 0.00, IM0, 0.0)')
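Both scaling factors in the flux conversion above can be sanity-checked independently: JyperKelvin should agree with the Rayleigh-Jeans relation S/T = 2 k ν² Ω / c², and the 1.133 in FactorJyperPixel is the volume of a Gaussian beam, π/(4 ln 2). A standalone check (plain NumPy, run outside CASA, with the numbers from the example above):

```python
import numpy as np

# check the Jy/K factor against S/T = 2 k nu^2 Omega / c^2
nu_hz = 218.2222e9
theta_arcsec = 28.59                   # single-dish beam from this script
k_b, c = 1.380649e-23, 2.99792458e8

theta_rad = theta_arcsec / 206264.8
omega_sr = np.pi / (4 * np.log(2)) * theta_rad**2    # Gaussian beam solid angle
jy_per_k = 2 * k_b * nu_hz**2 * omega_sr / c**2 / 1e-26

jy_per_k_script = 0.817 * (nu_hz / 1e9 / 100.)**2 * (theta_arcsec / 10.)**2
print(jy_per_k, jy_per_k_script)       # both ~31.8 Jy/K

# check the 1.133 beam-volume factor by summing a unit-peak Gaussian
fwhm_pix = 20.0
x = np.arange(-100, 101)
xx, yy = np.meshgrid(x, x)
sigma = fwhm_pix / (2 * np.sqrt(2 * np.log(2)))
beam_sum = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)).sum()
print(beam_sum / fwhm_pix**2)          # ~1.133 = pi / (4 ln 2)
```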

Produce a First Source Model

The procedure now uses the single–dish data as an input model for cleaning. The model based on single–dish data is subtracted from the clean components to obtain the new clean components implied by the interferometer data. Those components are then convolved with the restoring beam. The resulting image and the single–dish image are combined with feather to produce an updated source model.

#
# produce a first model of combined interferometer and SD data
#

# clean the UV data
os.system('rm -rf deconvolved-sdinput*')
clean(vis = '../data_uv/uv_casa/' + UVFileNaming + '.ms',
      imagename = 'deconvolved-sdinput',
      modelimage = 'apex_trans_regrid_scaled_unmasked_noneg.image',
      spw='',
      restfreq = RestFrequency,
      imagermode = 'mosaic',
      mode = 'velocity',
      outframe='lsrk',
      width = VelocityResolution,
      start = StartVelocity,
      nchan = NumberChannels,
      interactive = False,
      imsize = ImageSize,
      cell = CellSize,
      phasecenter = SourceLocation,
      weighting = 'briggs',
      niter = NumberIterations,
      cyclefactor = 5.,
      threshold = CleaningThreshold,
      usescratch = True)


# get the positive interferometer-only clean components
os.system('rm -rf deconvolved-sdinput.intmodel')
immath(outfile='deconvolved-sdinput.intmodel',
   imagename=['deconvolved-sdinput.model',
              'apex_trans_regrid_scaled_unmasked_noneg.image'],
   mode='evalexpr',
   expr='iif((IM0-IM1) >= 0.00, IM0-IM1, 0.0)')


# remove those components if they are at the map edge
os.system('rm -rf deconvolved-sdinput-masked.intmodel')
immath(outfile='deconvolved-sdinput-masked.intmodel',
   imagename=['deconvolved-sdinput.intmodel',
              'deconvolved-sdinput.flux.pbcoverage'],
   mode='evalexpr',
   expr='iif((IM1) >= 0.25, IM0, 0.0)')

imhead('deconvolved-sdinput-masked.intmodel',
       mode='put',
       hdkey='bunit', hdvalue='Jy/pixel')


# smooth the interferometer-only components to the synthesized beam
SynthBeamMaj = imhead('deconvolved-sdinput.image', mode='get', hdkey='bmaj')['value']
SynthBeamMin = imhead('deconvolved-sdinput.image', mode='get', hdkey='bmin')['value']
SynthBeamPA = imhead('deconvolved-sdinput.image', mode='get', hdkey='bpa')['value']

os.system('rm -rf deconvolved-sdinput.intimage')
imsmooth(imagename='deconvolved-sdinput-masked.intmodel',
         outfile='deconvolved-sdinput.intimage',
         kernel='gauss',
         major=str(SynthBeamMaj)+'arcsec',
         minor=str(SynthBeamMin)+'arcsec',
         pa=str(SynthBeamPA)+'deg')


# produce a non-negative single-dish map in Jy/beam
os.system('rm -rf apex_trans_regrid_scaled_unmasked_noneg_perbeam.image')
immath(outfile='apex_trans_regrid_scaled_unmasked_noneg_perbeam.image',
   imagename=['apex_trans_regrid_scaled_unmasked_noneg.image'],
   mode='evalexpr', expr='IM0/'+str(FactorJyperPixel))

imhead('apex_trans_regrid_scaled_unmasked_noneg_perbeam.image',
       mode='put',
       hdkey='bunit', hdvalue='Jy/beam')
imhead('apex_trans_regrid_scaled_unmasked_noneg_perbeam.image',
       mode='put',
       hdkey='bmaj', hdvalue=str(SingleDishResolutionArcsec)+'arcsec')
imhead('apex_trans_regrid_scaled_unmasked_noneg_perbeam.image',
       mode='put',
       hdkey='bmin', hdvalue=str(SingleDishResolutionArcsec)+'arcsec')
imhead('apex_trans_regrid_scaled_unmasked_noneg_perbeam.image',
       mode='put',
       hdkey='bpa', hdvalue='0deg')


# feather the data
os.system('rm -rf deconvolved-combi.model')
feather(imagename = 'deconvolved-combi.model',
        highres = 'deconvolved-sdinput.intimage',
        lowres = 'apex_trans_regrid_scaled_unmasked_noneg_perbeam.image',
        effdishdiam = SDCutoffScaleMeter,
        lowpassfiltersd = True)


# clean up and keep the final model
os.system('rm -rf combimodel.image')
os.system('mv deconvolved-combi.model combimodel.image')
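Conceptually, feather merges its two inputs in the Fourier domain: the single–dish data dominate at low spatial frequencies and the interferometer data at high ones. A toy one-dimensional illustration follows (plain NumPy; the weighting scheme is a simplified stand-in for what the CASA task does, not its exact algorithm). One property the real task shares with this toy: the total flux of the combined image is set by the single–dish data.

```python
import numpy as np

# Toy 1D "feather": a high-resolution image lacking large scales and a
# low-resolution image lacking small scales, merged in the Fourier domain.
n = 512
x = np.arange(n) - n // 2
sky = np.exp(-0.5 * (x / 3.0)**2) + 0.3 * np.exp(-0.5 * (x / 40.0)**2)

def beam(fwhm):
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    b = np.exp(-0.5 * (x / sigma)**2)
    return b / b.sum()                 # unit integral, so its FT is 1 at u = 0

lowres = np.convolve(sky, beam(30.0), mode='same')            # single-dish view
highres = sky - np.convolve(sky, beam(60.0), mode='same')     # misses large scales

# weight the high-res data down where the single-dish beam still responds
W = np.abs(np.fft.fft(np.fft.ifftshift(beam(30.0))))          # SD beam response
combined = np.fft.ifft(np.fft.fft(lowres)
                       + np.fft.fft(highres) * (1 - W)).real

# at u = 0 the weight on the high-res data vanishes, so the total flux
# of the combined image equals that of the single-dish image
print(lowres.sum(), combined.sum())
```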

Second and Final Cleaning

The visibilities are deconvolved a second time, now using the feathered source model from the previous step as input. The resulting image is scaled to remove the sensitivity pattern of the interferometer map.

#
# second and final cleaning with new model
#

# clean the UV data
os.system('rm -rf deconvolved-sdinput*')
clean(vis = '../data_uv/uv_casa/' + UVFileNaming + '.ms',
      imagename = 'deconvolved-sdinput',
      modelimage = 'combimodel.image',
      spw='',
      restfreq = RestFrequency,
      imagermode = 'mosaic',
      mode = 'velocity',
      outframe='lsrk',
      width = VelocityResolution,
      start = StartVelocity,
      nchan = NumberChannels,
      interactive = False,
      imsize = ImageSize,
      cell = CellSize,
      phasecenter = SourceLocation,
      weighting = 'briggs',
      niter = NumberIterations,
      cyclefactor = 5.,
      threshold = CleaningThreshold,
      usescratch = True)

viewer('deconvolved-sdinput.image')