Our research group has been financially supported by grants from the National Aeronautics and Space Administration, the National Science Foundation, and the Air Force Office of Scientific Research. The group has ample computational resources, both through dedicated high-performance computing clusters located on campus and through facilities at the state and national levels. This access to a wide range of high-performance computing platforms enables our group members to pursue cutting-edge computational astrophysics research.
At the national level, our group has access to National Science Foundation Extreme Science and Engineering Discovery Environment (XSEDE) and Department of Energy supercomputers, including Stampede at the Texas Advanced Computing Center, through several active computer-time grants won by Fisher and collaborators. Our research group is one of the largest computational astrophysics users of supercomputer time in the US; Fisher has won well over 100 million CPU hours to date, both as a single principal investigator and through collaborative time awards.
In addition, at the state level, our group has been instrumental in planning the Massachusetts Green High Performance Computing Center (MGHPCC), which recently opened and is now available to our group members. MGHPCC is a $95M, 10 MW data center in Holyoke, Massachusetts, dedicated to supporting the growing research computing needs of five of the most research-intensive universities in Massachusetts: the five campuses of the University of Massachusetts, MIT, Boston University, Northeastern University, and Harvard University. Funding for the $95M project budget was provided by an innovative pooling of resources across public and private academic institutions, private industry, and state and federal governments: the university partners, Cisco, EMC, the Commonwealth of Massachusetts, and the Federal New Markets Tax Credit program.
The University of Massachusetts system has installed a large-scale federated cluster in Holyoke, shared across all University of Massachusetts campuses. The recently updated system consists of a shared high-performance computing cluster with over 15,000 cores available and 400 TB of high-performance EMC Isilon X-series storage. The High Performance Computing Cluster (HPCC) is built on an FDR-based InfiniBand (IB) network, with a 10 GbE network for the storage environment, and runs IBM LSF scheduling software for job management.
Finally, at the campus level, we have played a key role in establishing an extensive array of computing and information technology resources at UMass Dartmouth. The newest-generation cluster consists of 1300 CPU cores, 50 GPUs, 132 TB of storage, 3 TB of memory, and over 2 petaflops of sustained performance. The data center supports an existing CLARiiON CX-20C Storage Area Network (SAN) storage unit with both Fibre Channel (FC) and Internet Small Computer System Interface (iSCSI) interfaces. The system is backed up daily, with weekly offsite backups using virtual and physical Linear Tape-Open-3 (LTO3) libraries. The center also has dual cooling units, as well as battery and gas-generator backup power.