• Parallel Computing and Software Development

I have a lot of enthusiasm for using high-performance computing methods in my astronomical research. My work on wide-field imaging processing often involves parallel computing on high-performance machines (I will try to parallelize anything if there is a chance to do so :-). Most recently, as part of my low-surface-brightness imaging studies, I have been running a lot of image processing code on the Open Science Grid (OSG) in a "scavenging" mode, utilizing thousands of parallel processes on spare machines available through the OSG network. At times I have dealt with issues such as generating too much data, or hitting the database's limit on parallel connections, but it has been a very exciting and rewarding experience.
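To give a flavor of what this scavenging workflow looks like in practice: opportunistic jobs on the OSG are typically described with an HTCondor submit file, and a sketch for fanning out many independent image-processing tasks might look something like the following. All file names, the wrapper script, and the resource requests here are hypothetical placeholders, not my actual pipeline.

```
# Hypothetical HTCondor submit file for opportunistic image processing on OSG.
universe             = vanilla
executable           = process_image.sh        # placeholder wrapper around the processing code
arguments            = $(Process)              # each job gets a distinct task index
transfer_input_files = tile_$(Process).fits    # placeholder per-job input image

request_cpus         = 1
request_memory       = 2 GB
request_disk         = 4 GB

output               = logs/job_$(Process).out
error                = logs/job_$(Process).err
log                  = logs/cluster.log

# Queue thousands of independent tasks; the OSG matches them
# to whatever spare machines are available at the moment.
queue 5000
```

Because each task is independent, jobs that get preempted on a borrowed machine can simply be rematched and rerun elsewhere, which is what makes this mode of computing work so well for embarrassingly parallel image processing.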

Efficient computing algorithms for astronomy and cosmology are another topic I am deeply curious about. I've undertaken a couple of software development efforts, including working on image processing methods and developing efficient cluster cosmology analysis pipelines. Efficiency, parallelizability, and reproducibility are my top concerns in these efforts. My most recent news on this front is that my collaborators and I have received a ~$498,000 grant from Fermilab to integrate simulation-based inference and machine learning methods into cosmology analysis software. Super excited to start on this new project and the journey of being a co-PI!

Related published papers:

Dark Energy Survey Year 3 Results: Measuring the Survey Transfer Function with Balrog, Everett, S. and 99 colleagues including Y. Zhang, submitted to ApJS, arXiv:2012.12825

Crowded Cluster Cores: Algorithms for Deblending in Dark Energy Survey Images, Y. Zhang, T. A. McKay, E. Bertin et al., 2015, PASP, 127, 1183, arXiv:1409.2885.