This high-performance computing (HPC) cluster is a specialized environment engineered for computational condensed matter physics. By integrating the PKD Research Group's three compute nodes over a high-speed fabric, it provides a unified platform for investigating the electronic, magnetic, and structural properties of materials at the quantum level, with the core simulation software preinstalled.
Hardware Architecture
The cluster is built on a foundation of balanced compute power and high-velocity data transfer:
Compute: Each node features an Intel Xeon Silver 4314 processor with 16 physical cores (32 threads), for a cluster-wide total of 96 hardware threads. Parallel workloads exploit these cores through the MPI and OpenMP tooling in Intel Parallel Studio.
Memory: 256 GB of RAM per node (768 GB total) allows large supercells to be simulated and massive Hamiltonian matrices to be held entirely in memory, significantly reducing time spent on disk I/O. For scale, a dense complex double-precision matrix of dimension N = 100,000 occupies N² × 16 bytes ≈ 160 GB, which fits within a single node's RAM.
Storage: Each node contains a 4 TB HDD, offering a total of 12 TB for storing extensive charge density maps, wavefunction files, and historical simulation data.
Interconnect: The nodes are linked by a 100 Gbps switch. This high-bandwidth interconnect is the backbone of the cluster, minimizing latency during the heavy inter-node communication required for parallel Fast Fourier Transforms (FFTs); the sketch after this list illustrates the underlying all-to-all exchange.
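In a distributed 3-D FFT, the grid is transposed between 1-D transform passes via an all-to-all exchange, and it is this bandwidth-bound step that the 100 Gbps fabric accelerates. Below is a minimal sketch of that communication pattern; mpi4py and NumPy are assumed to be available on top of the cluster's MPI stack, and the rank count and buffer size are purely illustrative:

```python
# Minimal sketch of the all-to-all exchange that dominates parallel 3-D FFTs.
# Assumes mpi4py and NumPy are installed on top of the cluster's MPI library.
# Launch with, e.g.: mpirun -np 96 python alltoall_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank owns a slab of the grid; exchanging slabs transposes the grid,
# which is the communication step between successive 1-D FFT passes.
chunk = 1024  # elements sent to every other rank (illustrative size)
send = np.full(size * chunk, rank, dtype=np.complex128)
recv = np.empty_like(send)

comm.Alltoall(send, recv)  # the bandwidth-bound step the 100 Gbps fabric serves

if rank == 0:
    print(f"{size} ranks exchanged {send.nbytes / 1e6:.1f} MB each")
```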
Software & Simulation Environment
The cluster is pre-configured with a specialized software stack for quantum mechanical modeling:
Intel Parallel Studio (compilers, MPI, and math libraries), Phonopy, VESTA, and several post-processing tools are installed.
DFT Engines: Open-source Density Functional Theory (DFT) suites such as Quantum ESPRESSO, ABINIT, and CP2K are pre-installed. These codes are built to leverage the underlying Intel architecture and MPI libraries for efficient scaling (see the launch sketch after this list).
Workload Management: TORQUE manages job queuing, allowing PKD Research Group members to submit multiple simulations, declare resource requirements (cores, memory, walltime), and share the cluster's resources fairly across projects; a submission sketch follows below.
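As a concrete illustration of how a DFT engine is driven in parallel, the following snippet launches a Quantum ESPRESSO SCF calculation under MPI from Python. The pw.x executable and its -in flag are part of Quantum ESPRESSO itself, but the input/output file names and rank count are assumptions for illustration:

```python
# Hedged sketch: driving a Quantum ESPRESSO SCF run in parallel.
# 'scf.in' and 'scf.out' are hypothetical file names; 32 ranks fills one node.
import subprocess

cmd = ["mpirun", "-np", "32", "pw.x", "-in", "scf.in"]
with open("scf.out", "w") as out:
    subprocess.run(cmd, stdout=out, check=True)  # raise if pw.x fails
```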
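In day-to-day use, such runs go through TORQUE rather than being launched interactively. The sketch below writes a job script and submits it with qsub; the #PBS directives and $PBS_O_WORKDIR/$PBS_NODEFILE variables are standard TORQUE usage, while the job name, resource requests, and script contents are illustrative assumptions, not site policy:

```python
# Hedged sketch: writing and submitting a TORQUE job script.
# Resource values below are illustrative, not site policy.
import pathlib
import subprocess
import textwrap

script = textwrap.dedent("""\
    #!/bin/bash
    #PBS -N scf_run
    #PBS -l nodes=2:ppn=32          # request 2 nodes, 32 threads each
    #PBS -l walltime=24:00:00       # wall-clock limit
    #PBS -l mem=128gb               # total memory request
    cd $PBS_O_WORKDIR               # start where the job was submitted
    mpirun -np 64 -machinefile $PBS_NODEFILE pw.x -in scf.in > scf.out
    """)

pathlib.Path("job.pbs").write_text(script)
subprocess.run(["qsub", "job.pbs"], check=True)  # enqueue the job
```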