David Bruhwiler (RadiaSoft)
Host: Daniel Winklehner
3rd of August 2021 at 13:30 EDT (19:30 CEST)
The development and implementation of algorithms is a core competency of universities and research labs; however, the resulting codes can be difficult to use and expensive to maintain. Professional software developers could help resolve the problem, but they are expensive to hire and difficult to retain. Retention difficulties and the associated career-pipeline problems are due in part to misaligned incentives. For example, making software sustainable and easy to use is orthogonal to publishing in academic journals and, perhaps more problematic, the scientific mission of national laboratories and university departments often does not motivate software developers and data scientists sufficiently to retain them. In contrast, software sustainability and ease of use are core competencies of software developers in industry. Hence, it is advantageous for national laboratories and universities to actively and routinely collaborate with industry. This more varied range of employment opportunities and institutional incentives would also open up more varied career paths and, perhaps, better retention of talented individuals within the community.
SCK•CEN is at the forefront of Heavy Liquid Metal (HLM) nuclear technology worldwide with the development of the MYRRHA accelerator driven system (ADS). Since the FP5 EURATOM framework programme, MYRRHA has served as the backbone of the European Commission's Partitioning and Transmutation (P&T) strategy, based on the "4 Building Blocks at Engineering Level", and has fostered EU R&D activities related to ADS and the associated HLM technology developments.
At the same time, MYRRHA is conceived as a flexible fast-spectrum pool-type research irradiation facility cooled by Lead-Bismuth Eutectic (LBE), and was identified by SNETP (www.snetp.eu) as the European Technology Pilot Plant for the Lead-cooled Fast Reactor. MYRRHA is proposed to the international nuclear-energy and nuclear-physics communities as a pan-European large research infrastructure serving as a multipurpose fast-spectrum irradiation facility for various fields of research, such as transmutation of High-Level Waste (HLW), material and fuel research for Gen. IV reactors, materials for fusion energy, development and production of innovative radioisotopes, and fundamental physics. As such, MYRRHA has been on the high-priority list of the ESFRI roadmap since 2010 (http://www.esfri.eu/roadmap-2016).
Since 1998, SCK•CEN has been developing the MYRRHA project as an accelerator driven system using lead-bismuth eutectic both as the reactor coolant and as the material for its spallation target. The nominal design power of the MYRRHA reactor is 100 MWth. It is driven in sub-critical mode (keff = 0.95) by a high-power proton accelerator based on LINAC technology, delivering a proton beam in Continuous Wave (CW) mode at 600 MeV energy and 4 mA intensity. The choice of LINAC technology is dictated by the unprecedented reliability level required by the ADS application: the MYRRHA requirements limit beam trips lasting more than 3 seconds to a maximum of 10 over a period of 3 months, corresponding to the operating cycle of the MYRRHA facility. In 2015, SCK•CEN and the Belgian government decided to implement the MYRRHA facility in three phases to minimize the technical risks associated with the required accelerator reliability.
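The headline numbers in this paragraph can be cross-checked with simple arithmetic. The sketch below is illustrative only (not part of any MYRRHA design tool): it computes the beam power on the spallation target, the sub-critical neutron multiplication implied by keff, and the overall thermal-power-to-beam-power gain.

```python
# Back-of-the-envelope arithmetic from the MYRRHA parameters quoted above.
E_proton_MeV = 600.0   # proton kinetic energy
I_beam_mA = 4.0        # CW beam current
k_eff = 0.95           # sub-critical multiplication factor
P_thermal_MW = 100.0   # nominal reactor thermal power

# Beam power: energy per proton (eV) times current (A) gives watts.
P_beam_W = (E_proton_MeV * 1e6) * (I_beam_mA * 1e-3)
P_beam_MW = P_beam_W / 1e6  # 2.4 MW on target

# Sub-critical neutron multiplication: M = 1 / (1 - k_eff), here ~20.
M = 1.0 / (1.0 - k_eff)

# Overall energy gain of the ADS: thermal power per unit beam power, ~42.
gain = P_thermal_MW / P_beam_MW

print(f"beam power: {P_beam_MW} MW, multiplication: {M:.0f}, gain: {gain:.1f}")
```

The factor-of-~42 gain illustrates why even a modest 2.4 MW proton beam can drive a 100 MWth core at keff = 0.95.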
On September 7, 2018, the Belgian federal government decided to build this large research infrastructure. In this lecture we will present the status of the MYRRHA project as a whole, stressing in particular the specific characteristics and requirements of the ADS application and how we are meeting them in the development of the MYRRHA accelerator.
One of the most challenging applications of plasma accelerators is the development of a plasma-based collider for high-energy physics studies. Fast and accurate simulation tools are essential to study the physics of configurations that enable the production and acceleration of very small beams with low energy spread and emittance preservation over long distances, as required for a collider. The Particle-In-Cell code WarpX is being developed by a team of the U.S. DOE Exascale Computing Project (with non-U.S. collaborators on part of the code) to enable the modeling of chains of tens of plasma accelerators on exascale supercomputers, for collider designs. The code combines the latest algorithmic advances (e.g., boosted frame, pseudo-spectral Maxwell solvers) with mesh refinement and runs on the latest CPU and GPU architectures. The application to the modeling of chains of successive multi-GeV stages will be discussed. The latest implementation on GPU architectures will also be reported, as well as novel algorithmic developments.
Supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of two U.S. Department of Energy organizations (Office of Science and the National Nuclear Security Administration).
We report on the development of machine learning models for the recognition, identification, and prediction of C100 superconducting radio-frequency (SRF) cavity faults in the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab. CEBAF is a continuous-wave recirculating linac utilizing SRF cavities to accelerate electrons up to 12 GeV. The C100 SRF cavities in CEBAF are designed with a digital low-level RF system configured to retain waveform recordings in the event of a cavity failure. Subject matter experts (SMEs) are able to analyze the collected time-series recordings and determine the type of fault and the offending cavity. This information is used to identify failure trends and apply corrective measures to the problematic cavity. However, manual analysis of large-scale RF data to identify the cavity and the fault type is laborious and time-consuming. Consequently, we have developed several machine learning and deep learning models to automate cavity and fault classification with near-real-time recognition capability. We discuss the performance of these models on an RF waveform dataset built from past CEBAF runs, and present a real-world performance analysis of a model deployed at CEBAF during a recent physics run. Additionally, we discuss research efforts into the potential discovery and categorization of fault types through unsupervised machine learning techniques, and present preliminary work on the feasibility of cavity and fault prediction using RF data collected prior to a failure event.
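To make the supervised-classification step concrete, here is a minimal, self-contained sketch of the waveform-to-fault-label pipeline. It is an assumption-laden stand-in, not the CEBAF system: the waveforms are synthetic, the three fault classes are hypothetical, and a generic scikit-learn random forest on hand-crafted features replaces the models described in the talk.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for C100 RF waveform recordings: each "event" is one
# short time series (real CEBAF data has many signals per cavity).
n_events, n_samples = 400, 128
X_raw = rng.normal(size=(n_events, n_samples))
y = rng.integers(0, 3, size=n_events)          # 3 hypothetical fault types
for i, label in enumerate(y):                  # inject a class-dependent signature
    X_raw[i] += np.sin(np.linspace(0.0, (label + 1) * np.pi, n_samples))

# Simple hand-crafted features (mean, std, peak, dominant FFT bin) --
# a placeholder for the feature engineering / deep models in the talk.
def features(w):
    spec = np.abs(np.fft.rfft(w))
    return [w.mean(), w.std(), w.max(), spec[1:].argmax()]

X = np.array([features(w) for w in X_raw])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

The same structure (record waveforms on a fault event, extract features, predict cavity and fault type) is what allows near-real-time classification once a model is trained on SME-labeled historical data.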
Georg Hoffstaetter (Cornell)
Host: Andreas Adelmann
1st of December 2020 at 13:30 EST (19:30 CET)
As accelerators become larger and their beams require more power, efficiency becomes an important paradigm. Energy Recovery Linacs (ERLs), superconducting RF (SRF) cavities, and permanent magnets address this concern. A collaboration between Cornell University and Brookhaven National Laboratory has designed, constructed, and commissioned CBETA, the Cornell-BNL ERL Test Accelerator at Cornell University, with the first 4-turn operation late in 2019. Energy Recovery Linacs decelerate a used beam in SRF cavities to capture its energy and use it for the acceleration of new beam. CBETA is the first SRF ERL with multiple acceleration and deceleration turns. Another first is the large-energy-acceptance return loop that simultaneously transports 7 beams of different energies through a Fixed-Field Alternating-gradient (FFA) lattice composed of permanent magnets.
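The multi-turn energy ladder behind the "7 beams" figure can be sketched numerically. The injection energy and per-pass linac gain below are nominal CBETA-like values assumed for illustration (they are not stated in the abstract): each accelerating pass adds the linac gain, and the decelerating passes retrace the same energies on the way back down, so the common FFA return loop must transport 7 beam passes at once.

```python
# Energy ladder of a 4-turn ERL; numbers are illustrative CBETA-like
# assumptions: ~6 MeV injection, ~36 MeV energy gain per linac pass.
E_inj_MeV, dE_MeV, n_turns = 6.0, 36.0, 4

# Accelerating passes: the beam leaves the linac at E_inj + k*dE.
energies_up = [E_inj_MeV + k * dE_MeV for k in range(1, n_turns + 1)]

# Decelerating passes retrace the same ladder (top energy traverses the
# return loop only once before deceleration begins).
energies_down = energies_up[-2::-1]

# Beams the single FFA return loop must transport simultaneously.
loop_passes = energies_up + energies_down
print(loop_passes, len(loop_passes))
```

Running this gives 7 passes (42 to 150 MeV and back), which is why the large energy acceptance of the permanent-magnet FFA lattice is essential: one beamline must accept roughly a factor of ~3.6 in energy.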
Successfully establishing 4-turn energy recovery at CBETA is especially relevant in light of the increasing importance that ERLs have obtained: ERLs are part of the hadron coolers for the EIC, they are part of the LHeC plans, they are an integral component of an FCC-ee design option, they can be drivers for low-energy nuclear physics experiments, and they have been investigated as drivers for compact Compton x-ray sources and for industrial lithography.
Starting from history and physics motivation, this colloquium will review the key concepts of particle colliders, and then survey the proposed next and next-next(-next) generation high-energy machines. The machine survey will include hadron colliders, both circular and linear electron-positron colliders, and muon colliders, along with some of their respective challenges and merits. A number of approaches could further boost collider energy efficiency, again drawing lessons from history. The presentation will conclude with approximate technical timelines.
For the IsoDAR experiment in neutrino physics, we have developed a very compact and cost-effective cyclotron-based driver to produce very high-intensity beams. The system will be able to deliver continuous-wave (cw) beam currents of >10 mA of protons on target at energies around 60 MeV. This is a factor of 4 higher than the current state-of-the-art for cyclotrons and a factor of 10 higher than what is commercially available. All areas of physics that call for high cw currents can greatly benefit from this result, e.g. particle physics, medical isotope production, and energy research. This increase in beam current is possible in part because the cyclotron is designed ab initio to include and utilize so-called vortex motion, which allows clean extraction. Such a design process is only possible with the help of high-fidelity particle-in-cell codes like OPAL.
In this seminar, I will focus on the design and simulation of the cyclotron driver. I will describe the pertinent physical processes, computational tools, and simulation results. Finally, I will describe how we plan to include machine learning in the simulation effort, for error analysis, sensitivity studies, and machine-tuning assistance.