Facilities

Our research group has ample computational resources, both through dedicated high-performance computing clusters on campus and through shared systems at the state and national levels. This access to a wide range of high-performance computing platforms enables our group members to pursue cutting-edge computational astrophysics research.

At the national level, our group has access to National Science Foundation XSEDE and Department of Energy supercomputers, including Stampede at the Texas Advanced Computing Center, through several active computer-time grants won by Fisher and collaborators. Our research group is one of the largest computational astrophysics users of supercomputer time in the US; Fisher has won well over 100 million CPU-hours to date, both as a sole principal investigator and through collaborative time awards.

In addition, at the state level, our group has been instrumental in planning the Massachusetts Green High Performance Computing Center (MGHPCC), which recently opened and is now available to our group members. MGHPCC is a $95M, 10 MW data center in Holyoke, Massachusetts, dedicated to supporting the growing research computing needs of five of the most research-intensive universities in Massachusetts: the five campuses of the University of Massachusetts, MIT, Boston University, Northeastern University, and Harvard University. Funding for the $95M project budget was provided by an innovative pooling of resources across public and private academic institutions, private industry, and state and federal governments: the university partners, Cisco, EMC, the Commonwealth of Massachusetts, and the Federal New Markets Tax Credit program.

The University of Massachusetts has installed a large-scale federated cluster in Holyoke, shared across all University of Massachusetts campuses. The recently updated system consists of a shared high-performance computer cluster with over 15,000 cores available and 400 TB of high-performance EMC Isilon X-series storage. The High Performance Computing Cluster (HPCC) is interconnected by an FDR InfiniBand (IB) network, with a 10 GbE network serving the storage environment. The HPC environment runs the IBM LSF scheduling software for job management.
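
To give a concrete sense of how jobs are managed under LSF, the following is a minimal sketch of submitting a parallel job through LSF's bsub command, wrapped in Python. The job name, queue, core count, and script path are hypothetical placeholders rather than site-specific values; actual queues and resource limits depend on the cluster's configuration.

    import subprocess

    # Minimal sketch: submit a parallel job to IBM LSF via bsub.
    # The job name, queue, core count, and script path below are
    # hypothetical placeholders, not site-specific values.
    cmd = [
        "bsub",
        "-J", "astro_run",         # job name (hypothetical)
        "-q", "long",              # queue (hypothetical)
        "-n", "64",                # number of cores requested
        "-W", "24:00",             # wall-clock limit (hh:mm)
        "-o", "astro_run.%J.out",  # stdout file; %J expands to the LSF job ID
        "./run_simulation.sh",     # job script (hypothetical)
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout.strip())  # e.g. "Job <12345> is submitted to queue <long>."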

Finally, at the campus level, we have played a key role in establishing an extensive array of computing and information technology at UMass Dartmouth. Our campus data center houses an IBM iDataPlex GPGPU computer cluster acquired under the auspices of an NSF MRI award (PI Fisher, Co-PI Gottlieb) and an AFOSR DURIP award (PI Gottlieb, Co-PI Fisher). Thanks to additional acquisitions from the startup funds of faculty members Raessi and Tootkobani, the cluster has now grown to some 86 nodes (688 CPU cores) with 64 NVIDIA Tesla GPU cards, networked with QDR InfiniBand and providing over 50 TB of NAS storage. The data center also supports an existing CLARiiON CX-20C Storage Area Network (SAN) storage unit with both Fibre Channel (FC) and Internet Small Computer System Interface (iSCSI) interfaces. The system is backed up daily, with weekly offsite backups using virtual and physical Linear Tape-Open 3 (LTO-3) libraries. The center also has dual cooling units, as well as battery and gas generator backup power. In addition, our research group has exclusive access to a small-scale 88-core computer cluster acquired with Fisher's startup funds, also housed in the campus data center.
