The basic steps to building a model are as follows:
Create a case: for DCMIP-2025 you will clone existing configurations via the create_clone command as described on the Workflow page
For your information: this differs from the typical procedure for creating a new case from scratch, which we do not use at DCMIP-2025. The regular command is shown here for reference only (all XXX placeholders need to be replaced, see below):
~/CAM_XXX/cime/scripts/create_newcase --compset XXX --res XXX --case ~XXX/your_directory --run-unsupported --pecount XXX --project UMIC0107
Set simulation settings via ./xmlchange
Modify namelist user_nl_cam
Source Code Change (to force analytic initializations)
Build the model
qcmd -A UMIC0107 -- ./case.build
Submit Case
./case.submit
FV3/MPAS output needs postprocessing to be remapped onto a regular lat-lon grid
Play with the output files and transfer them!
Create a new case in a self-explanatory directory with relevant info (example below), with the CAM version identifier XXX
~/CAM_XXX/cime/scripts/create_newcase --compset FADIAB --res f09_f09_mg17 --case ~/CAM_XXX_cases/self.explanatory.name --run-unsupported --project UMIC0107 --pecount 128
COMPSET (--compset XXX):
FADIAB: Adiabatic (pure fluid flow, no moisture, no precipitation)
FKESSLER: Idealized moist configuration with a warm-rain scheme called Kessler physics (provides precipitation)
FHS94: Idealized dry configuration with simplified physics forcings for climate-like simulations (Held-Suarez forcing)
DYCORE (--res XXX):
Eulerian (EUL) spectral transform dynamical core
--res T42_T42_mg17 <- 2.8 degrees (312 km)
--res T85_T85_mg17 <- 1.4 degrees (156 km)
Finite-Volume (FV) dynamical core on a lat-lon grid
--res f19_f19_mg17 <- 1.9 x 2.5 (lat-lon) degrees (roughly 220 km)
--res f09_f09_mg17 <- 0.9x1.25 degrees (roughly 110 km)
--res f05_f05_mg17 <- 0.45x0.625 degrees (roughly 55 km)
Spectral Element (SE) dynamical core on a cubed-sphere grid:
--res ne16_ne16_mg17 <- 2 degrees (220 km)
--res ne30_ne30_mg16 <- 1 degree (110 km)
--res ne60_ne60_mg16 <- 0.5 degrees (55 km)
variable-resolution grid between 1 degree (110 km) - 1/8 degree (14 km over the continental U.S. domain), requires quite short time steps, check defaults
--res ne0CONUSne30x8_ne0CONUSne30x8_mg17
Finite-Volume dynamical core on a cubed-sphere grid (FV3):
--res C48_C48_mg17 <- 2 degrees
--res C96_C96_mg17 <- 1 degree
--res C192_C192_mg17 <- 0.5 degrees
Model for Predictions Across Scales (MPAS) on a hexagonal grid:
--res mpasa120_mpasa120 <- around 1 degree (120 km)
--res mpasa60_mpasa60 <- around 0.5 degrees (60 km)
PROCESSOR NUMBER (--pecount XXX):
Coarse resolution (1 degree or coarser) for up to 30 days:
use the defaults (do not specify this parameter); the defaults are likely --pecount 128 for 2 degrees and --pecount 512 for 1 degree (check)
MPAS at the 240 km resolution should use --pecount 108
0.5 degree resolution
let us check the defaults before we set these explicitly
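Derecho CPU nodes have 128 cores, so the node request implied by a given --pecount can be estimated as below. This is only a rough sketch (the helper name is ours); the actual processor layout is determined by CIME:

```python
import math

CORES_PER_NODE = 128  # Derecho CPU nodes

def nodes_needed(pecount: int) -> int:
    """Smallest number of Derecho nodes that provides pecount cores."""
    return math.ceil(pecount / CORES_PER_NODE)

print(nodes_needed(128))  # 2-degree default -> 1 node
print(nodes_needed(512))  # 1-degree default -> 4 nodes
print(nodes_needed(108))  # MPAS 240 km     -> 1 node
```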
Create CASE DIRECTORY (--case XXX):
case name placed in the ~/CAM_XXX_cases/ directory (use a longer name that starts with the CAM version XXX and lists an identifier for the dycore, its horizontal resolution, the number of levels, and a test case identifier)
Create OUTPUT DIRECTORY (--output-root XXX):
output placed in the /glade/scratch/$USER/XXX/ directory
Alternative pathname for the directory where case output is written. Make sure this is a directory in your scratch space
Navigate to ~/CAM_XXX_cases/your_new_case and configure the runtime settings (example below)
./xmlchange DOUT_S=FALSE,STOP_OPTION=ndays,STOP_N=NN
./xmlchange ATM_NCPL=XX
./xmlquery CAM_CONFIG_OPTS
./xmlchange --file env_build.xml --id CAM_CONFIG_OPTS --val "--phys adiabatic/kessler/held_suarez --analytic_ic --nlev=YY"
./xmlchange JOB_WALLCLOCK_TIME=01:00:00
./case.setup
DOUT_S=FALSE : Avoid the archiving of the model results
STOP_OPTION=ndays,STOP_N=NN: set the simulation length to NN days (the unit is given by ndays)
ATM_NCPL=XX: current 'physics' timestep. The physics time step is 86400 s / ATM_NCPL. A setting of ATM_NCPL=48 provides a physics time step of 1800 s (30 minutes), and a setting of ATM_NCPL=96 corresponds to 900 s (15 minutes). Use the following settings for the various dycores, and change ATM_NCPL if needed via ./xmlchange ATM_NCPL=XX
EUL: keep the defaults for the T42 and T85 (ATM_NCPL=144, 600 s time step) resolutions
FV:
the default for f19 and f09 is 48 which is correct (no changes needed)
for a f05 setting the default is also '48' and needs to be changed to ATM_NCPL=96 via
./xmlchange ATM_NCPL=96
SE: the defaults (check) are likely correct and might not need to be changed: for a ne16 or ne30 setting use ATM_NCPL=48, for a ne60 setting use ATM_NCPL=96
FV3: check the defaults, for a C48 or C96 setting use ATM_NCPL=48, for the C192 setting ATM_NCPL=96 should be used, change if needed.
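The relation between ATM_NCPL and the physics time step stated above can be checked quickly (a small sketch; the recommended values are taken from the list above):

```python
def physics_dt(atm_ncpl: int) -> float:
    """Physics time step in seconds: 86400 s per day divided by couplings per day."""
    return 86400.0 / atm_ncpl

# Recommended settings from the list above
for dycore, ncpl in [("EUL T42/T85", 144), ("FV f19/f09, SE ne16/ne30, FV3 C48/C96", 48),
                     ("FV f05, SE ne60, FV3 C192", 96)]:
    print(f"{dycore}: ATM_NCPL={ncpl} -> dt = {physics_dt(ncpl):.0f} s")
```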
./xmlquery CAM_CONFIG_OPTS: check current configuration defaults
./xmlchange --file env_build.xml --id CAM_CONFIG_OPTS --val "--phys adiabatic/kessler/held_suarez --analytic_ic --nadv_tt=XX --chem terminator --age_of_air_trcs --nlev=XX"
--phys adiabatic/kessler/held_suarez activates the adiabatic, Kessler, or Held-Suarez physics setting matching the FADIAB, FKESSLER, or FHS94 compset
--analytic_ic: adds the analytic initialization; the default initial data is the UMJS14 baroclinic wave initial condition
--nadv_tt=XX: adds XX test tracers
chem terminator: implements the toy solar terminator chemistry (toy chemistry is implemented in ~/CAM_Jan22/src/chemistry/pp_terminator/chemistry.F90)
age_of_air_trcs: Activate the age-of-air functionality
--nlev=XX: change to XX levels (default is 30). Note: all MPAS configurations need to be specified with 32 levels (--nlev=32 in CAM_CONFIG_OPTS)
JOB_WALLCLOCK_TIME=01:00:00: maximum job execution time (set to 1hr)
NTHRDS=3: number of parallel threads (only change for 0.5 degree FV)
./case.setup: activates the case configuration
Modify/add namelist entries in your case directory which determine the input variables and files: file user_nl_cam
Namelist examples will be made available.
Check whether the namelist settings are all correct:
./preview_namelists #if this hangs, something is wrong with user_nl_cam; check atm_in and drv_in in /glade/scratch/$USER/case_name/run
Check the settings for the actual simulations before submitting the run:
./preview_run
ALL USERS:
empty_htapes = .TRUE. #no default output should be produced
avgflag_pertape = 'I' #instantaneous, 'A' is averaged output
MFILT = 180, 180 #180 time samples are written before an h0/h1 file closes and a new one opens
NDENS = 2, 2 #single precision output for h0 and h1
Set Coriolis forces to zero (optional):
omega = 0
Redefine the Earth's radius (optional): e.g. reduce by a factor of 10 (in m)
radius = 6371.e2
Output Frequency
NHTFRQ = -24,-1440 #output frequency for h0 (-24 means every 24 hours) and h1 (-1440 means every 60 days); the h1 frequency is low because h1 output is rarely needed
For an initial run, use less frequent output and create a few time snapshot plots. To create a video, use more frequent output.
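The NHTFRQ sign convention and its interaction with MFILT can be sketched as follows (negative NHTFRQ values are hours, positive values are model time steps; the helper names are ours, for illustration only):

```python
def output_interval_hours(nhtfrq: int, atm_ncpl: int = 48) -> float:
    """Output interval in hours: negative NHTFRQ is in hours, positive is in time steps."""
    if nhtfrq < 0:
        return float(-nhtfrq)
    return nhtfrq * (86400.0 / atm_ncpl) / 3600.0

def file_span_days(nhtfrq: int, mfilt: int, atm_ncpl: int = 48) -> float:
    """Simulated days covered by one history file holding mfilt samples."""
    return mfilt * output_interval_hours(nhtfrq, atm_ncpl) / 24.0

print(output_interval_hours(-24))         # h0: daily output
print(output_interval_hours(-1440) / 24)  # h1: every 60 days
print(file_span_days(-24, 180))           # one h0 file covers 180 simulated days
```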
Add Test Tracers (see proj3 for more):
test_tracer_names = 'TT_SLOT1','TT_SLOT2','TT_SLOT3','TT_COSB','TT_CCOSB','TT_lCCOSB' #etc
Recommended Adiabatic Output (h0 files)
fincl1 = 'PS','T','U','V','OMEGA','T850','U850','V850','OMEGA850','T700','T500','OMEGA500','U500','V500','PHIS','PSL','Z3','Z700','Z500','Z300'
Recommended Kessler Output (h0 and h1 files):
fincl1 = 'PS','Q','T','U','V','OMEGA','PRECL','RELHUM','T850','U850','V850','OMEGA850','Q850','T700','T500','OMEGA500','U500','V500','CLDLIQ','RAINQM','PHIS','PSL','Z3','Z700','Z500','Z300','PTTEND','TMQ','TMCLDLIQ','TMRAINQM' #PRECL=precip rate, Q=specific humidity, CLDLIQ=cloud water mixing ratio, RAINQM=rain water mixing ratio, PTTEND=temp tendency due to physics, TMQ=vertically integrated precip water vapor, TMCLDLIQ=vert integrated cloud liquid water, TMRAINQM=vert integrated rain water
fincl2 = 'PHIS'
Recommended Kessler Output without toy chemistry (h0/h1 file):
fincl1 = 'PS','Q','T','U','V','OMEGA','PRECL','RELHUM','T850','U850','V850','OMEGA850','Q850','T700','T500','OMEGA500','U500','V500','CLDLIQ','RAINQM','PHIS','PSL','Z3','Z700','Z500','Z300','PTTEND','TMQ','TMCLDLIQ','TMRAINQM'
fincl2 = 'PHIS'
Set analytic initial conditions (choose one):
analytic_ic_type = 'dry_baroclinic_wave_jw2006' #JW06 baroclinic wave/mountain-triggered Rossby wave/pure advection test case
analytic_ic_type = 'dry_baroclinic_wave_dcmip2016' #dry UMJS14 baroclinic wave w/o topo
analytic_ic_type = 'moist_baroclinic_wave_dcmip2016' #moist UMJS14 baroclinic wave w/o topo
analytic_ic_type = 'held_suarez_1994' #Held-Suarez-like simulation
FHS94 compset for Held-Suarez simulations
aoa_read_from_ic_file = .FALSE. #activates age-of-air tracers
fv3_tau = 0. #only use for FV3 to deactivate sponge layer near model top
fv3_n_split = 3 #only use for FV3 to lengthen physics timestep
pertlim = 1.0D-5 #only needed for EUL and FV to break symmetry. This option adds random perturbations with the specified maximum magnitude (here 1e-5 K, default is 0.) to the initial isothermal temperature field
avgflag_pertape = 'A' #stores time averaged fields
MFILT = 400, 400 #holds monthly means for over 30 years
NHTFRQ = -720, -720 #snapshots every 30 days
fincl1 = 'PS','T','U','V','OMEGA','T850','AOA1','AOA2'
fincl2 = 'PS:A','T:A','U:A','V:A','OMEGA:A','AOA1','AOA2'
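As a sanity check, the Held-Suarez output settings above are consistent with the stated comments (a small sketch):

```python
# NHTFRQ = -720 -> one time sample every 720 hours = 30 days
interval_days = 720 / 24
# MFILT = 400 -> one history file holds 400 monthly-mean samples
file_span_years = 400 * interval_days / 365.25

print(interval_days)              # 30.0 days between samples
print(round(file_span_years, 1))  # roughly 33 years, i.e. "over 30 years"
```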
New Vertical Levels (only if not using the default of 30), choose one:
NCDATA = '/glade/p/cesmdata/cseg/inputdata/atm/cam/inic/mpas/mpasa120_L32_notopo_coords_c211118.nc' #120km grid MPAS w/ UMJS14 baroclinic or Held-Suarez with L32
NCDATA = '/glade/p/cesmdata/cseg/inputdata/atm/cam/inic/mpas/mpasa60_L32_notopo_coords_c211118.nc' #60km grid MPAS w/ UMJS14 baroclinic or Held-Suarez with L32
NCDATA = '/glade/u/home/cjablono/CESM_vertical_grids/cam_vcoords_L58_dz500m_low_top_43km.nc' #CESM low-top L58 model top at 2hPa (43km) and dz=500m or less in troposphere
NCDATA = '/glade/u/home/cjablono/CESM_vertical_grids/cam_vcoords_L93_dz500m_high_top_86km.nc' #CESM high-top L93 model top at 0.005hPa (86km) and dz=500m or less in troposphere
NCDATA = '/glade/u/home/cjablono/CESM_vertical_grids/cam_vcoords_L72_E3SM.nc' #DOE 72-level E3SM (model top 0.1hPa (70-75km))
NCDATA = '/glade/u/home/cjablono/CESM_vertical_grids/cam_vcoords_GFSv16_L127.p0_1000hPa.nc' #NOAA L127 for UFS (model top 0.01hPa or 80km)
NCDATA = '/glade/u/home/cjablono/CESM_vertical_grids/cam_vcoords_L30_dz400m_top_12km_iso_300K.nc' #equidistant 30-level isothermal configuration (300 K) with dz=400 m and a model top at 12 km
NCDATA = '/glade/u/home/cjablono/CESM_vertical_grids/cam_vcoords_L60_dz200m_top_12km_iso_300K.nc' #equidistant 60-level isothermal configuration (300 K) with dz=200 m and a model top at 12 km
NCDATA = '/glade/u/home/cjablono/CESM_vertical_grids/cam_vcoords_L120_dz100m_top_12km_iso_300K.nc' #equidistant 120-level isothermal configuration (300 K) with dz=100 m and a model top at 12 km
FV dycore:
fv_div24del2flag = 4 #4th order divergence damping
fv_nsplit = 10 #physics timestep subcycled 10 times
SE dycore:
interpolate_output = .true., .true. #enforces an interpolation to a regular lat-lon grid in the output
interpolate_nlon = 360 # number of longitudes for the interpolated grid
interpolate_nlat = 181 # number of latitudes for the interpolated grid
FV3 dycore:
fv3_hord_mt = 5
fv3_hord_vt = 5
fv3_hord_tm = 5
fv3_hord_dp = -5
fv3_hord_tr = 8
fv3_nord = 2
FV3 dycore (more diffusive):
fv3_hord_mt = 6
fv3_hord_vt = 6
fv3_hord_tm = 6
fv3_hord_dp = 6
fv3_hord_tr = 8
fv3_nord = 2
MPAS 60km resolution:
mpas_len_disp = 60000.0D0
mpas_dt = 450.D0
EUL
Order of the diffusion
eul_hdif_order = X
2, 4, 6 for 2nd, 4th or 6th-order diffusion
Strength of the diffusion coefficient:
eul_hdif_coef = X
see Eq. (13.20) in the Jablonowski and Williamson (2011) book chapter for how the diffusion coefficient connects the spectral resolution (e.g. n0=85 for T85) and the diffusion time scale tau
For EUL T85 the default 4th-order hyper-diffusion coefficient of 7.14e14 m^4/s corresponds to a diffusion time scale of 12 h
when using tau=12h, the 2nd-order diffusion coefficient for T85 becomes 1.29e5 m^2/s and the 6th-order hyper-diffusion coefficient becomes 3.96e24 m^6/s.
explore the impact of the order of the diffusion
pick the 4th-order diffusion to see the effects of an increased (e.g. by a factor of 10 or 100) or decreased (by a factor of 10, or set to 0.) diffusion coefficient. A run without diffusion will ultimately blow up, but you can give it a try.
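The tau-to-coefficient conversion quoted above can be reproduced from the Jablonowski and Williamson (2011) Eq. (13.20) relation, K = (a^2 / (n0 (n0+1)))^q / tau, with a the Earth's radius and q half the diffusion order. A small sketch (helper name is ours) that matches the T85 numbers quoted above:

```python
A_EARTH = 6.371e6  # Earth's radius in m

def hdif_coef(order: int, n0: int, tau_hours: float) -> float:
    """Diffusion coefficient K (in m^order / s) for diffusion of the given order,
    following Eq. (13.20) of Jablonowski and Williamson (2011):
    K = (a^2 / (n0*(n0+1)))**q / tau, with q = order/2."""
    q = order // 2
    tau = tau_hours * 3600.0
    return (A_EARTH**2 / (n0 * (n0 + 1)))**q / tau

# T85, tau = 12 h: reproduces the coefficients quoted above
print(f"{hdif_coef(2, 85, 12.0):.3e}")  # ~1.29e5  m^2/s
print(f"{hdif_coef(4, 85, 12.0):.3e}")  # ~7.14e14 m^4/s
print(f"{hdif_coef(6, 85, 12.0):.3e}")  # ~3.96e24 m^6/s
```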
FV
Order of the numerical scheme
fv_iord = X
fv_jord = X
where X can be set to 1 (first-order), 2 (second-order), or 3 or 4 (both 3rd-order, just different limiters)
Divergence damping
fv_div24del2flag = X
2 or 4 for 2nd / 4th-order divergence damping
FV3
Limiters
Highly non-diffusive
fv3_hord_mt = 5
fv3_hord_vt = 5
fv3_hord_tm = 5
fv3_hord_dp = -5
fv3_hord_tr = 8
fv3_nord = 2
More diffusive options are
fv3_hord_mt = 6
fv3_hord_vt = 6
fv3_hord_tm = 6
fv3_hord_dp = 6
fv3_hord_tr = 8
fv3_nord = 2
Another somewhat diffusive setting
fv3_hord_mt = 10
fv3_hord_vt = 10
fv3_hord_tm = 10
fv3_hord_dp = 10
fv3_hord_tr = 8
fv3_nord = 2
Divergence damping
fv3_nord = X
0 (2nd-order), 1 (4th-order), 2 (6th-order), 3 (8th-order)
Tracers
fv3_hord_tr = X
e.g. 5 for an unlimited scheme, -5 for a positive-definite constraint, 8 for a monotonicity constraint
Rayleigh friction in the sponge layer (with built-in frictional heating)
fv3_tau = X
fv3_rf_cutoff = Y
fv3_tau (in days) is the shortest time scale for the Rayleigh friction at the model top; the time scale follows a sin^2 profile in the vertical using a GFDL algorithm (as implemented in the source code change of the HS forcing). Select values between 1-10 days
fv3_rf_cutoff (in Pa) determines the vertical position where the Rayleigh friction activates: all levels with p < fv3_rf_cutoff apply RF to u and v
for L72 (70 km top) 100 Pa is a good value, try higher pressures
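The vertical shape of the Rayleigh damping described above can be illustrated as follows. The sin^2-in-log-pressure profile is our assumption based on the description above, not a copy of the GFDL source code, so treat it as a sketch of the behavior (zero below fv3_rf_cutoff, strongest at the model top), not the implementation:

```python
import math

def rayleigh_coeff(p: float, tau_days: float, rf_cutoff: float, p_top: float) -> float:
    """Illustrative Rayleigh damping rate (1/s): zero for p >= rf_cutoff,
    increasing with a sin^2 profile in log-pressure up to 1/tau at the model top.
    Assumed shape, mimicking (not copied from) the GFDL algorithm."""
    if p >= rf_cutoff:
        return 0.0
    frac = math.log(rf_cutoff / p) / math.log(rf_cutoff / p_top)
    return math.sin(0.5 * math.pi * frac)**2 / (tau_days * 86400.0)

# L72-like setup: rf_cutoff = 100 Pa, model top ~0.1 hPa (10 Pa), tau = 5 days
print(rayleigh_coeff(200.0, 5.0, 100.0, 10.0))  # 0.0 below the cutoff
print(rayleigh_coeff(10.0, 5.0, 100.0, 10.0))   # maximum rate 1/(5*86400 s)
```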
SE
4th-order hyperdiffusion
se_nu = X
se_nu_p = Y
se_nu_div = Z
se_nu is the 4th-order diffusion coefficient for vorticity and temperature, SEne30 default is 0.50E+15 m4/s
se_nu_p is the 4th-order diffusion coefficient for the pressure thickness, SEne30 default is 0.10E+16 m4/s
se_nu_div is the 4th-order diffusion coefficient for the horizontal divergence, SEne30 default is 0.25E+16 m4/s
Tracer settings
se_limiter_option = X
0: None
4: Sign-preserving limiter
8: Monotone limiter (default)
MPAS
Tracer settings
mpas_monotonic = X
mpas_positive_definite = X
where X is either .true. or .false.
The default is mpas_monotonic = .true. and mpas_positive_definite = .false., try the opposite
This forces analytic initializations; it is only needed if you want to override the defaults. Copy the appropriate file from a pre-existing directory and change it accordingly.
During DCMIP-2025: after cloning one of the DCMIP test case configurations, go to the SourceMods/src.cam directory (linked from your case directory). In case you would like to make changes to the analytic initial conditions, change the ic_* file in the SourceMods/src.cam directory.
Subject to change for DCMIP-2025:
Acid test (dry):
/glade/u/home/cjablono/CAM_6_3_45_cases/project_2_source_code/acid_test.F90
Dry mountain-generated Rossby wave
/glade/u/home/cjablono/CAM_6_3_45_cases/project_2_source_code/mountain_Rossby_wave_dry.F90
Moist mountain-generated Rossby wave
/glade/u/home/cjablono/CAM_6_3_45_cases/project_2_source_code/mountain_Rossby_wave_moist.F90
Moist baroclinic wave triggered by topography
/glade/u/home/cjablono/CAM_6_3_45_cases/project_2_source_code/ic_baroclinic.F90
Dry JW06 baroclinic wave with dynamic tracers: potential temperature and potential vorticity
/glade/u/home/cjablono/CAM_6_3_45_cases/project_3_source_code/JW06_dynamic_tracers.F90
Dry non-orographic gravity waves (adiabatic, not MPAS)
cp /glade/u/home/cjablono/CAM_6_3_45_cases/project_4_source_code/gravity_wave_N_0.01.F90 ~/CAM_6_3_45_cases/CAM_6_3_45.self_explanatory_name/SourceMods/src.cam/ic_baro_dry_jw06.F90
Use a 20-level configuration with dz=500 m and a model top at 10 km, set the Earth's rotation to zero, run the model for 6 days with an output frequency of 6 hours:
empty_htapes = .TRUE.
avgflag_pertape = 'I'
MFILT = 180
NDENS = 2
NCDATA = '/glade/u/home/cjablono/CESM_vertical_grids/cam_vcoords_L20_dz500m_top_10km_N_0.01.nc'
analytic_ic_type = 'dry_baroclinic_wave_jw2006'
omega = 0.d0
fincl1 = 'PS','T','U','V','OMEGA','T850','U850','V850','OMEGA850','T700','T500','OMEGA500','U500','V500','PHIS','PSL','Z3','Z700','Z500','Z300'
NHTFRQ = -6
Navigate to ~/CAM_XXX_cases/your_new_case and build the model (in a batch job). The build process takes a few minutes. The compilation is executed on a compute node. Wait until the compilation finishes. You should see the message MODEL BUILD HAS FINISHED SUCCESSFULLY
qcmd -A UMIC0107 -- ./case.build
If the build step failed, check the atm log file in /glade/scratch/$USER/CAM_XXX.self_explanatory_name/bld/ for the details (at the end of the log file), debug your code and build again until successful.
Navigate to ~/CAM_XXX_cases/your_new_case and submit the run
./case.submit
To check the status of your batch job (Q indicates that the job is waiting in the queue named 'regular'; an R appears for a running job):
qstat -u $USER
The output files should be in /glade/derecho/scratch/$USER/XXX/model_name/run/ with 'h0' in their names, where XXX was defined via --output-root at the beginning (the default location is the scratch directory with the case name). Remember to rename an output file before rerunning the case, because a rerun will otherwise overwrite it:
mv /glade/derecho/scratch/$USER/XXX/model_name/run/old_name.nc /glade/derecho/scratch/$USER/XXX/model_name/run/new_name.nc
In case you want to transfer files from Derecho or Casper to your local laptop or to a different server, this can be done in various ways. For the transfer to your laptop use the scp Unix command (secure copy). Open an Xwindow on your laptop, change into a local directory of your choice where the file(s) should be copied to (via the cd command) and enter:
scp user_name@derecho.hpc.ucar.edu:your_glade_directory/file_name .
or
scp user_name@casper.ucar.edu:your_glade_directory/file_name .
where user_name is your name on Derecho/Casper, your_glade_directory is the directory, file_name is the desired Derecho/Casper file, and the dot at the end states that the file will be placed into your local laptop directory. You will be asked for the CIT password and the DUO authentication.
The scp command is good if you only want to transfer a few files. Note that multiple files can also be bundled into a so-called tar archive file before the data transfer, like
tar cvf my_ncl_scripts.tar *.ncl
in the selected Derecho directory. This bundles all files with the extension .ncl into the new, single file my_ncl_scripts.tar which can then be transferred. Typically the tar file can be unpacked on the local Mac or Windows machine by double-clicking it, or via the command
tar xvf my_ncl_scripts.tar
If you need to transfer very big files or many files to a server (not your laptop), and assuming you have a Globus account on that server, it is better to use the 'Globus' application
https://www.globus.org/data-transfer
Sign into Globus with your credentials, and a GUI enables you to select the /glade/* directory on Derecho and a directory on your server. The file transfer can then start from Globus after the authentication on both machines.
Instructions for an example installation. Symbols and commands in red point to Unix or CESM configuration commands which need to be executed. This installs and runs an adiabatic baroclinic wave configuration.
Familiarize yourself with the NCAR CESM Tutorial notes (see links above). As a starting point, the 'Practical' lecture by Alice Bertini (from 2018) or the lecture from the 2021 tutorial can serve as a guide:
https://www.cesm.ucar.edu/events/tutorials/2018/files/Practical1-bertini.pdf
https://www2.cesm.ucar.edu/events/tutorials/2021/coursework.html, select the 'Practicals' by Kate Thayer-Calder, Christine Shields and Cecile Hannay (see also the direct links in the previous CESM Tutorials block)
Logon to Derecho (with either -Y on Mac laptops or -X on Windows laptops)
ssh -Y -l user_name derecho.hpc.ucar.edu
Enter the CIT password when asked for the Token and authenticate via DUO.
You will be in your home directory once you are on the machine, check via the command pwd which should show
/glade/u/home/user_name
The symbol ~ also stands for your home directory. You can always return to the home directory from any subdirectory when typing cd
Open 2-3 xterm windows via
xterm &
in the login window.
Copy a configuration file for NCL graphics application into your home directory (only needs to be done once):
cp /glade/u/home/cjablono/.hluresfile ~/.hluresfile
Download instructions
Get a recent cam_development branch; see also the instructions on the page https://github.com/ESCOMP/CAM. We slightly modify these instructions to download a specific tag (cam6_4_093 in the example below) of the development branch instead of a released branch; see the tagged versions here:
https://github.com/ESCOMP/CAM/tags
git clone https://github.com/ESCOMP/CAM CAM_6_4_093_20250519
cd CAM_6_4_093_20250519
git checkout cam6_4_093
./bin/git-fleximod update
Note that this is a download of the atmospheric component CAM, and not a full download of the coupled CESM model (which we save for later).
We are now ready to configure the CAM model. Start by creating a new case. We get some guidance from the CESM 'Simpler Model Configurations' web page https://www.cesm.ucar.edu/models/simpler-models/fkessler/index.html.
However, instead of configuring a moist baroclinic wave with Kessler microphysics (as explained on this web page, the FKESSLER compset) we select a dry adiabatic configuration (the FADIAB compset) which we start from the dry idealized baroclinic wave initial condition by Ullrich et al. (2014).
The first step is to create a new case (note that the double dashes are important, this is one line):
~/CAM_6_4_093_20250519/cime/scripts/create_newcase --compset FADIAB --res f09_f09_mg17 --pecount 256 --case ~/CAM_6_4_093_20250519_cases/CAM_6_4_093_20250519_fv09L30_dry_bw_dcmip2016 --run-unsupported --project UMIC0107
This creates a new case which we call 'CAM_6_4_093_20250519_fv09L30_dry_bw_dcmip2016'. It uses the compset FADIAB which describes an atmosphere-only simulation without any physical parameterizations. The resolution setting --res determines the choice of the dynamical core ('f' stands for the finite-volume dynamical core on a latitude-longitude grid) with a horizontal grid spacing of 0.9x1.25 (lat x lon) degrees. 30 vertical levels are used by default (this can be changed). The project number is our DCMIP account code for the computing resources. You can check out the links above that point to the CESM compsets and resolutions. The option --run-unsupported is always useful and sometimes even required. This configuration sets the number of processors to 256 (i.e. 2 Derecho nodes with 128 processors each) for the f09_f09_mg17 setting. A 15-day simulation will then only take 1min:33sec.
Change into the new case directory via
cd ~/CAM_6_4_093_20250519_cases/CAM_6_4_093_20250519_fv09L30_dry_bw_dcmip2016
Configure and check the runtime settings via xmlchange commands. Specify:
./xmlchange DEBUG=FALSE,DOUT_S=FALSE,STOP_OPTION=ndays,STOP_N=15
./xmlchange --file env_build.xml --id CAM_CONFIG_OPTS --val "--phys adiabatic --analytic_ic"
./xmlquery CAM_CONFIG_OPTS
./xmlchange JOB_WALLCLOCK_TIME=00:10:00
The first command sets the simulation period of the model to 15 days and we deactivated short-term archiving of the results (DOUT_S setting). The second command activates analytic initial conditions for the Ullrich et al. (2014) dry baroclinic wave (in combination with a namelist setting specified below).
The xmlquery command provides information about the configuration choices. We limit the maximum job execution time to 10 minutes.
Activate the simulation setting via
./case.setup
Edit the input namelist file 'user_nl_cam' in your case directory. Add the user settings:
analytic_ic_type = 'dry_baroclinic_wave_dcmip2016'
empty_htapes = .TRUE.
avgflag_pertape = 'I'
fincl1 = 'PS:I','T:I','U:I','V:I','OMEGA:I','T850:I','U850:I','V850:I','OMEGA850:I','PHIS:I','PSL:I'
MFILT = 90
NHTFRQ = -24
NDENS = 2
fv_nsplit = 10
fv_div24del2flag = 4
These settings (1) define the analytically prescribed initial conditions (analytic_ic_type), (2) state that no default output should be produced (empty_htapes), (3) select instantaneous output (avgflag_pertape), (4) select the output fields for the history file h0 (fincl1), (5) allow 90 time samples before the h0 output file is closed and a new one is opened (MFILT), (6) set the output frequency to 24 hours (NHTFRQ, negative values are in hours), (7) select single-precision output (NDENS), (8) determine that the physics time step is subcycled 10 times in the FV dynamical core (fv_nsplit), and (9) select a 4th-order divergence damping mechanism (fv_div24del2flag).
Preview the namelists (input parameters for the model run) via
./preview_namelists
and take a look at the namelist files in the scratch directory
/glade/derecho/scratch/$USER/CAM_6_4_093_20250519_fv09L30_dry_bw_dcmip2016/run
for example
more atm_in
more drv_in
You can also preview the jobscript settings:
./preview_run
Build the model (in a batch job) via
qcmd -A UMIC0107 -- ./case.build
The build process takes a few minutes. The compilation is executed on a compute node. Wait until the compilation finishes (about 5-10 minutes). You should see the message
MODEL BUILD HAS FINISHED SUCCESSFULLY
Submit the job to the batch queuing system via
./case.submit
This is a short simulation that should only take a few minutes to run (once the job started).
Check the status of your batch job via
qstat -u $USER
Output like:
Req'd Req'd Elap
Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
2379531.dadmi* cjablono regular run.CAM_J* 19382 3 108 -- 12:00 R 00:01
Q indicates that the job is in the queue with name 'regular', an R appears for a running job.
The model output will be generated in the
/glade/derecho/scratch/user_name/CAM_6_4_093_20250519_fv09L30_dry_bw_dcmip2016/run
directory. Look for the netcdf files (ending with .nc) that are named 'h0i', 'h1i', .... (you will only have h0i). These are so-called history files and contain the output data. Go to a different xterm window and change into your scratch directory via
cd /glade/derecho/scratch/$USER/CAM_6_4_093_20250519_fv09L30_dry_bw_dcmip2016/run
Look at the history output files with the visualization tools ncview or ncvis, e.g.
ncview your_file_name &
ncvis your_file_name &
and make yourself familiar with the ncview and ncvis functionality (play with the GUIs).
Run an example Python or NCL script to visualize the simulation.
When using NCL create an ncl directory in your home directory
mkdir ~/ncl
and copy the NCL scripts
cp /glade/u/home/cjablono/climate_589_WN2024/ncl/dry_lat_lon_single_file*.ncl ~/ncl
Go into the ncl directory via
cd ~/ncl
specify the output location of your data in the script (edit), and run the scripts (best in different xterm windows to have them side-by-side)
ncl dry_lat_lon_single_file.ncl
ncl dry_lat_lon_single_file_zoom.ncl
ncl dry_lat_lon_single_file_ps_t850.ncl
On some Mac laptops the graphics in the X11 window are unfortunately corrupt, which is a problem with the Mac's XQuartz window application (version 2.8.5). This problem apparently does not exist in older XQuartz versions; it went away when switching to the older XQuartz version 2.8.4.
By default, the scripts plot some selected fields (mostly at 850 hPa, day 9) to the screen. The first script plots the whole globe in an equidistant cylindrical projection, the second script zooms into a portion of the domain.
Create png files via
ncl level=850 day=9 'pfmt="png"' dry_lat_lon_single_file.ncl
ncl level=850 day=9 'pfmt="png"' dry_lat_lon_single_file_zoom.ncl
ncl day=9 'pfmt="png"' dry_lat_lon_single_file_ps_t850.ncl
The png file can be viewed via
display your_filename.png &
Other possible formats for the pfmt option are "pdf" or "eps". These files can be viewed with the viewer evince (on Casper), encapsulated postscript files with either gv (ghostview, Casper) or gs (ghostscript, Derecho or Casper). The ncl example plot is posted below. The ncl script can be customized or called with varying input parameters like day=7 (for the visualization of day 7), a different pressure level like level=700 (for 700 hPa), or a different output format like 'pfmt="eps"' or 'pfmt="pdf"' as mentioned above. The output format png is best for the posting to the Wiki webpage.
Consider installing the visualization tool Panoply on your laptop and become familiar with the GUI. This does not work on the NCAR machines.
In case you want to modify the initial conditions (via a code change):
Change the source code (the initial conditions), recompile the code, rename the first output file (otherwise it will be overwritten), re-run the simulation.
Copy the original initial data for the Ullrich et al. (2014) baroclinic wave routine into the SourceMods/src.cam directory
cp ~/CAM_6_4_093_20250519/src/dynamics/tests/initial_conditions/ic_baroclinic.F90 ~/CAM_6_4_093_20250519_cases/CAM_6_4_093_20250519_fv09L30_dry_bw_dcmip2016/SourceMods/src.cam
Edit the file with an editor like nano or vi (this requires you to learn a few vi basics; here is a webpage that explains a few essential vi commands: https://www.cs.colostate.edu/helpdocs/vi.html)
vi ~/CAM_6_4_093_20250519_cases/CAM_6_4_093_20250519_fv09L30_dry_bw_dcmip2016/SourceMods/src.cam/ic_baroclinic.F90
Change the initial conditions, e.g. change the temperature (in Kelvin) at the equator to T0E = 290. (default is T0E = 310.). This will impact the baroclinicity of the flow and the evolution of the baroclinic wave (slower evolution).
Rename the first output file (this is one line)
mv /glade/derecho/scratch/$USER/CAM_6_4_093_20250519_fv09L30_dry_bw_dcmip2016/run/CAM_6_4_093_20250519_fv09L30_dry_bw_dcmip2016.cam.h0i.0001-12-27-00000.nc /glade/derecho/scratch/$USER/CAM_6_4_093_20250519_fv09L30_dry_bw_dcmip2016/run/CAM_6_4_093_20250519_fv09L30_dry_bw_dcmip2016.cam.h0i.0001-12-27-00000.T0E_310.nc
Recompile the model (from ~/CAM_6_4_093_20250519_cases/CAM_6_4_093_20250519_fv09L30_dry_bw_dcmip2016)
qcmd -A UMIC0107 -- ./case.build
Rerun the model via
./case.submit
and analyze.
Example ncl plot of the control simulation with original parameter settings (figure will be posted soon)