Build Instructions¶
Note: If you are new to XGC or to a new platform, our Quickstart may be helpful: it provides a functioning example to work from.
XGC has several external dependencies. If you are building XGC at one of the following HPC facilities, indicate which one by setting the environment variable XGC_PLATFORM before configuring, e.g.:
export XGC_PLATFORM=frontier
System          XGC_PLATFORM
Aurora          aurora
Frontier        frontier
Greene          greene
Perlmutter GPU  perlmutter
Perlmutter CPU  perlmutter_cpu
Polaris         polaris
Stellar         stellar
Traverse        traverse
After that, set the environment variables and load the modules specified in the Environment at HPC facilities section at the end of this page.
If you are not using one of the above facilities or want custom-built dependencies, you have two options:
1. Install the libraries manually (see 3rd Party Software Installations).
2. Use Spack to install the libraries; see docker/rocm/spack.yaml for an example Spack environment file and the sketch after this list.
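As a minimal sketch of the Spack route (assuming Spack is already installed and the XGC repository is checked out; the environment name xgc-deps is only an example):
# Create a Spack environment from the example file and install the packages it describes
spack env create xgc-deps docker/rocm/spack.yaml
spack env activate xgc-deps
spack install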
Load modules and set environment variables
Compiling and running on the supported systems may require modules and environment variables. See Environment at HPC facilities at the bottom of this page for the ones commonly used on your system.
Create and enter a build directory.
mkdir build; cd build
Run CMake to configure a build of XGC.
cmake ..
Additional settings can be passed as -D flags, e.g.:
cmake -DBUILD_DEPENDENCIES=ON -DCONVERT_GRID2=ON -DSOLVERLU=OFF ..
To interactively edit configuration settings, use ccmake . for a CLI or cmake-gui for a GUI. For a full list of XGC configure options, see XGC Preprocessor Macros.
Build all available targets:
make -j
Or just the one you want, e.g.:
make -j xgc-es-cpp
The executables will be in build/bin. Currently available targets are xgc-es-cpp, xgc-es-cpp-gpu, xgca-cpp, xgca-cpp-gpu, xgc-eem-cpp, and xgc-eem-cpp-gpu, as well as kernels and tests (see Kernels and Tests).
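Putting the steps together, a minimal end-to-end build might look like the following sketch. Perlmutter GPU is used as an example; substitute your own XGC_PLATFORM value and the module/environment setup and configure options listed for your platform below.
export XGC_PLATFORM=perlmutter
# ...load the modules and set the environment variables listed for your platform below...
mkdir build; cd build
cmake -DCMAKE_CXX_COMPILER=CC -DCMAKE_C_COMPILER=cc -DCMAKE_Fortran_COMPILER=ftn -DCMAKE_BUILD_TYPE=Release ..
make -j xgc-es-cpp-gpu
ls bin/   # the built executables are placed here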
Environment at HPC facilities¶
Compiling and running on the supported systems may require modules and environment variables. The ones we typically use for our target platforms are found in the repository under quickstart/modules_and_env. A platform may also require specific CMake configuration options; these can be found in the directory quickstart/cmake_configs. For convenience, the contents of both files are reproduced below for each supported platform.
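As a sketch of how these files are typically used (the file names below are hypothetical placeholders; check the two directories for the actual names used for your platform):
source quickstart/modules_and_env/<platform>   # hypothetical name; loads modules and sets environment variables
# then run the configure command from quickstart/cmake_configs/<platform> inside your build directory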
Aurora¶
module restore
module use /soft/modulefiles   # make modules installed under /soft/modulefiles visible before loading them
module load cmake tmux
module load googletest fftw
module load kokkos cabana
module load adios2/2.10.0-cpu
module load petsc/3.21.4-cpu
export XGC_PLATFORM=aurora
# Runtime
export TZ='/usr/share/zoneinfo/US/Central'
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=16
export CPU_BIND="verbose,list:0-7,104-111:8-15,112-119:16-23,120-127:24-31,128-135:32-39,136-143:40-47,144-151:52-59,156-163:60-67,164-171:68-75,172-179:76-83,180-187:84-91,188-195:92-99,196-203"
unset OMP_PLACES
# Tweaks to make Aurora MPICH work:
export MPIR_CVAR_ENABLE_GPU=0 # disable gpu-aware mpich
export FI_MR_CACHE_MONITOR=memhooks
export MPIR_CVAR_ALLREDUCE_INTRA_ALGORITHM=recursive_doubling
unset MPIR_CVAR_CH4_COLL_SELECTION_TUNING_JSON_FILE
unset MPIR_CVAR_COLL_SELECTION_TUNING_JSON_FILE
unset MPIR_CVAR_CH4_POSIX_COLL_SELECTION_TUNING_JSON_FILE
# Other Aurora tweaks:
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=0
export FI_LOG_LEVEL=warn # to debug hangs
export ZES_ENABLE_SYSMAN=1 # enable GPU memory usage checking
export FI_CXI_DEFAULT_CQ_SIZE=131072 # try avoiding F90 MPI_BCAST errors
CXX="mpic++ -cxx=icpx" CC="mpicc -cc=icx" FC="mpifort -fc=ifx" I_MPI_CXX=icpx I_MPI_CC=icx I_MPI_F90=ifx \
cmake \
-DCMAKE_Fortran_FLAGS="-g -init=arrays,zero -fpp -O2 -fPIC -qopenmp -fp-model=precise -diag-disable=5462 -diag-disable=8291 -diag-disable=10448" \
-DCMAKE_C_FLAGS="-g -O2 -fPIC -qopenmp -ffp-model=precise" \
-DCMAKE_CXX_FLAGS="-g -O2 -fPIC -qopenmp -ffp-model=precise -Wno-tautological-constant-compare" \
-DCMAKE_EXE_LINKER_FLAGS="-g -ffp-model=precise -fsycl-max-parallel-link-jobs=20 -flink-huge-device-code -ftarget-register-alloc-mode=pvc:large -Xsycl-target-backend \"-device pvc\" -limf -lsvml -lintlc -lifcore" \
..
Frontier¶
module reset
module unload perftools-base
module load cmake
module load PrgEnv-amd
module swap amd amd/5.7.1
module load rocm/5.7.1
module swap cray-mpich cray-mpich/8.1.28
module load craype-accel-amd-gfx90a
export CRAYPE_LINK_TYPE=dynamic
export PATH=${CRAY_MPICH_PREFIX}/bin:${PATH}
export PATH=${ROCM_COMPILER_PATH}/bin:${PATH}
export MPICH_SMP_SINGLE_COPY_MODE=XPMEM
export MPICH_GPU_SUPPORT_ENABLED=1
export FI_CXI_RX_MATCH_MODE=software
export FI_MR_CACHE_MONITOR=kdreg2
export GTL_HSA_MAX_IPC_CACHE_SIZE=10
export ROCM_PATH=/opt/rocm-5.7.1
export OLCF_ROCM_ROOT=/opt/rocm-5.7.1
export LD_LIBRARY_PATH=$CRAY_LD_LIBRARY_PATH:$LD_LIBRARY_PATH
export MPICH_CXX=${OLCF_ROCM_ROOT}/bin/hipcc
export LLVM_PATH=${ROCM_PATH}/llvm
export HIP_CLANG_PATH=${ROCM_PATH}/llvm/bin
export HSA_PATH=${ROCM_PATH}
export ROCMINFO_PATH=${ROCM_PATH}
export DEVICE_LIB_PATH=${ROCM_PATH}/amdgcn/bitcode
export HIP_DEVICE_LIB_PATH=${ROCM_PATH}/amdgcn/bitcode
export HIP_PLATFORM=amd
export HIP_COMPILER=clang
export HIPCC_COMPILE_FLAGS_APPEND="$HIPCC_COMPILE_FLAGS_APPEND --rocm-path=${ROCM_PATH}"
export XGC_PLATFORM=frontier
export OMP_PROC_BIND=true
export OMP_NUM_THREADS=14
export PATH=/lustre/orion/world-shared/phy122/xgc-deps-frontier/amd_rocm_5.7.1_mpich_8.1.28/ADIOS2/bin:${PATH}
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DCMAKE_CXX_COMPILER=`which mpicxx` \
-DCMAKE_C_COMPILER=`which mpicc` \
-DCMAKE_Fortran_COMPILER=`which mpifort` \
-DCMAKE_CXX_FLAGS="-I${OLCF_ROCM_ROOT}/include -munsafe-fp-atomics" \
-DCMAKE_EXE_LINKER_FLAGS="-L${OLCF_ROCM_ROOT}/lib -lamdhip64" \
-DUSE_GPU_AWARE_MPI=On ..
Greene¶
source /p/xgc/Software/greene_config_gcc11_20230501
Perlmutter (GPU)¶
module unload gpu cray-libsci
module load cudatoolkit cmake cray-fftw
export XGC_PLATFORM=perlmutter
export CRAYPE_LINK_TYPE=dynamic
export NVCC_WRAPPER_DEFAULT_COMPILER=CC
# Runtime
export OMP_STACKSIZE=2G
export OMP_PLACES=cores
export OMP_PROC_BIND=spread
export OMP_NUM_THREADS=32
# Disable GPU-aware MPI for PETSc; set '-use_gpu_aware_mpi 1' to enable
export PETSC_OPTIONS='-use_gpu_aware_mpi 0'
cmake -DCMAKE_CXX_COMPILER=CC -DCMAKE_C_COMPILER=cc -DCMAKE_Fortran_COMPILER=ftn -DCMAKE_BUILD_TYPE=Release ..
Perlmutter (CPU)¶
module unload gpu
module load cmake
module load cray-fftw
module unload darshan
module unload cray-libsci
export XGC_PLATFORM=perlmutter_cpu
export CRAYPE_LINK_TYPE=dynamic
export FI_CXI_RX_MATCH_MODE=hybrid # prevents crash for large number of MPI processes, e.g. > 4096
export OMP_STACKSIZE=2G
# Perlmutter CPU-only nodes have dual-socket AMD EPYC, each with 64 cores (128 HT)
# For each CPU-only node, want (MPI ranks)*${OMP_NUM_THREADS}=256
# Recommend OMP_NUM_THREADS=8 or 16
export OMP_PLACES=cores
export OMP_PROC_BIND=close
export OMP_NUM_THREADS=8
cmake -DCMAKE_CXX_COMPILER=CC -DCMAKE_C_COMPILER=cc -DCMAKE_Fortran_COMPILER=ftn -DCMAKE_BUILD_TYPE=Release ..
Polaris¶
module load craype-x86-milan
module load craype-accel-nvidia80
module swap PrgEnv-nvhpc PrgEnv-gnu
module load cray-fftw cray-hdf5-parallel cray-libsci cray-netcdf-hdf5parallel
module use /soft/modulefiles
module load cuda-PrgEnv-nvidia/12.2.91
module load spack-pe-base cmake
module load kokkos/4.2.01/shared/PrgEnv-gnu/8.5.0/gnu/12.3/cuda_cudatoolkit_12.2.91
module load cabana/dev-9a1ad605/kokv/4.2.01/PrgEnv-gnu/8.5.0/gnu/12.3/cuda_cudatoolkit_12.2.91
export XGC_PLATFORM=polaris
# Runtime environment settings
export OMP_NUM_THREADS=16
export OMP_PROC_BIND=spread
export OMP_PLACES=threads
# For GPU-Aware MPI
#export MPICH_GPU_SUPPORT_ENABLED=1
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo \
  -DCMAKE_CXX_COMPILER=CC \
  -DCMAKE_C_COMPILER=cc \
  -DCMAKE_Fortran_COMPILER=ftn \
  -DCMAKE_CXX_STANDARD=17 \
  -DCMAKE_Fortran_FLAGS="-fallow-argument-mismatch" \
  -DCMAKE_EXE_LINKER_FLAGS=-no-gcc-rpath \
  ..
Stellar¶
source /projects/XGC/STELLAR/Software/bin/set_up_xgc.stellar
Traverse¶
source /projects/XGC/TRAVERSE/Software/bin/set_up_xgc.traverse
Build instructions for XGC-S¶
XGC-S can be compiled with GNU make.
Stellar¶
source /projects/XGC/STELLAR/Software/bin/set_up_xgc.stellar
cd XGC-Devel/XGC-S
make -f Makefile.Stellar
Perlmutter¶
The build works with the following modules loaded:
module list
Currently Loaded Modules:
  1) craype-x86-milan
  2) libfabric/1.15.2.0
  3) craype-network-ofi
  4) xpmem/2.6.2-2.5_2.33__gd067c3f.shasta
  5) PrgEnv-gnu/8.5.0
  6) cray-dsmml/0.2.2
  7) cray-mpich/8.1.28
  8) craype/2.7.30
  9) gcc-native/12.3
 10) perftools-base/23.12.0
 11) cpe/23.12
 12) cmake/3.24.3 (buildtools)
 13) cray-fftw/3.3.10.6 (math)
cd XGC-Devel/XGC-S
make -f Makefile.Perlmutter