High Performance Computing at Berkeley Lab

Berkeley Lab provides Lawrencium, a 942-node (23,424-core) Linux cluster, to researchers who need computational resources to facilitate scientific discovery. The system, which consists of shared nodes and PI-contributed Condo nodes, is equipped with an InfiniBand interconnect and has access to a 1.8 PB parallel filesystem. Large-memory, GPU, and Intel Xeon Phi Knights Landing nodes are also available.

We offer comprehensive Linux cluster support, including pre-purchase consulting, procurement assistance, installation, and ongoing support for PI-owned clusters. Our HPC User Services consultants can also help you get your applications performing well. UC Berkeley PIs can make use of our services through the very successful Berkeley Research Computing (BRC) program, available through UC Berkeley Research IT. Altogether, the group manages over 70,000 compute cores and supports over 3,100 users across 532 research projects for the Lab and UC Berkeley.

Mineral Discovery Made Easier
Like a tiny needle in a sprawling hayfield, a single crystal grain measuring just tens of millionths of a meter – found in a borehole sample drilled in Central Siberia – had an unexpected chemical makeup. Using a specialized X-ray technique in use at the ALS and compute clusters managed by SCG, scientists confirmed the uniqueness of the sample and paved the way for its formal recognition as a newly discovered mineral: ognitite... Read More »
How Molecular Foundry Scientists Model at the Nanoscale
In six of the seven facilities of the Molecular Foundry, scientists at benches or instruments, in lab coats or clean room suits, are hard at work creating and characterizing nanoscale materials. Sandwiched in between those levels of laboratories, however, is a different kind of lab — one within a more traditional workspace of offices and cubicles. “Our computing is our lab space,” said David Prendergast, director of the Berkeley Lab’s Theory... Read More »

LBNL Singularity Wins HPCwire Editors' and Readers' Choice Awards. Again.
LBNL's Singularity container software won three highly coveted HPCwire awards at Supercomputing 2018. In addition to winning the Editors' Choice award for technologies to watch for the third year in a row, Singularity was recognized with the Editors' and Readers' Choice awards for Best HPC Programming Technology. Singularity is now available with commercial support from SyLabs.io.

Klaus Halbach Award for Innovative Instrumentation at the ALS
The Klaus Halbach Award for Innovative Instrumentation at the ALS was awarded to PI David Shapiro and his team, which included the Scientific Computing Group, for their work in developing X-ray ptychography at the ALS COSMIC beamline. SCG staffers Susan James and Krishna Muriki led the Science IT efforts for this project.

High-Energy Nuclear Collisions, Large-Scale Computing Aid Study of Early Universe
ALICE (A Large Ion Collider Experiment) is a detector in the Large Hadron Collider (LHC) ring designed to investigate quark-gluon plasma, the primitive matter that filled the early universe. Berkeley Lab’s Nuclear Science Division, in partnership with IT’s Scientific Computing Group, has recently established a new site on the Worldwide LHC Computing Grid to provide a significant amount of ALICE computing and data storage. More »

Materials Project Connects Computational, Experimental Materials Science
Thomas Edison tested thousands of materials before discovering the right one for his electric lightbulb. Materials scientists today are only now transitioning from the “Edisonian” way of discovery to data-driven “materials by design.” Using supercomputing, Materials Project researcher Shyam Dwaraknath and other Lab scientists are helping to bridge the gap from computer simulations to real-world applications. More »

HPC Enables Ptychographic Imaging at ALS
Scientists at the Advanced Light Source are using the new COSMIC Imaging beamline and a high-performance data pipeline implemented by the IT Division’s Scientific Computing Group to turn large datasets of X-ray diffraction data into high-resolution images. With a technique called ptychographic computed tomography, the researchers recently mapped locations of nanoscale reactions inside a lithium-ion battery in 3D. More »

How a climate scientist leverages supercomputing to study wild weather from winds to wildfires
What is the relationship between the frequency of lightning strikes and frequency of wildfires? That’s just one of the questions researcher David Romps is trying to answer. As a faculty scientist in Berkeley Lab's Climate and Ecosystem Sciences Division, Romps leverages Berkeley Lab Science IT resources like Lawrencium to better understand Earth’s climate. Read More »

Jackie Scoggins selected as 'Women at the Lab' honoree
The Scientific Computing Group's Jackie Scoggins is one of sixteen women being recognized for their dedication, talent, STEM contributions, and commitment to the Lab's mission. The awards ceremony will be held on July 9, 2018, at 3:00 pm.
Berkeley Lab Researchers Develop Platform for Hosting Science Data Analytics Competitions
NNSA turned to researchers and engineers in the Lawrence Berkeley National Laboratory’s (Berkeley Lab’s) Computational Research (CRD), Nuclear Science (NSD), and Information Technology (IT) divisions to help build a Kaggle-inspired platform. Now that the system has been developed and is hosted on the IT Division's new Scientific VM service, the team says it can also be used to host data analytics competitions for other scientific problems and disciplines. Read more.

Chemical Sciences Division We recently launched a 136-node Intel Cascade Lake and GPU Cluster for several CSD PIs:  Martin Head-Gordon, Phillip Geissler, David Prendergast, Teresa Head-Gordon, Kranthi Mandadapu, Eric Neuscamman, Bill McCurdy and Robert Lucchese.

ALICE (A Large Ion Collider Experiment) LBNL has recently become a Tier-2 computing site for the Worldwide LHC Computing Grid, providing computing and data storage for the ALICE detector project under project lead Jeff Porter.

Using AWS for the Dark Energy Spectroscopic Instrument (DESI) Cloud computing can be cost-effective for projects with burst needs. Ask us how our Science IT consultants helped DESI use an AWS micro instance with autoscaling to handle their high-demand data releases.

Secure Research and Data Computing We partner with our colleagues on campus to build an HPC and VM service for UC Berkeley researchers working with sensitive data that requires additional levels of protection.

Center for Financial Technology We partnered with PI John Wu of the Computational Research Division to build a 128-node, 3,072-core Linux cluster supporting a collaboration between the Lab and Delaware Life. The cluster is being used to investigate modeling of financial markets.

Globus for Google Drive Using Google Drive for storage can be an exercise in babysitting data transfers. We partnered with Globus to develop a connector that makes big-data transfers to and from Google Drive simple and painless.

The San Diego Supercomputer Center (SDSC) is making major high-performance computing resources available to the UC and Lab community through a new introductory program called HPC@UC. Researchers can apply for awards of up to 1 million core-hours on SDSC's new Comet supercomputer.

Big Data at the ALS
We built a data pipeline using a fast 400 MB/s CCD, a 78,392-core GPU cluster, and a 260 TB data transfer node with Globus Online for PI David Shapiro to perform X-ray diffraction 3D image reconstruction at the new COSMIC beamline 7.0.1. Read more here about how their project set a microscopy record by achieving the highest resolution ever.