Build Instructions

  • XGC has several external dependencies. If you are building XGC at one of the following HPC facilities, indicate which one by setting the environment variable XGC_PLATFORM before configuring, e.g.:

    export XGC_PLATFORM=summit
    

    System                   XGC_PLATFORM
    ---------------------    ---------------------
    Frontier                 frontier
    Greene                   greene
    JLSE                     jlse
    Perlmutter GCC           perlmutter_gcc
    Perlmutter CPU GCC       perlmutter_cpu_gcc
    Perlmutter Nvidia        perlmutter_nvhpc
    Perlmutter CPU Nvidia    perlmutter_cpu_nvidia
    Polaris                  polaris
    Stellar                  stellar
    Summit                   summit
    Sunspot                  sunspot
    Theta                    theta
    Traverse                 traverse

    After that, set the environment variables and load the modules specified in the Environment at HPC facilities section at the end of this page.

    If you are not using one of the above facilities or want custom-built dependencies, then you have two options:

    1. Install the libraries manually (see 3rd Party Software Installations)

    2. Use the Superbuild, which will download, build, and install its dependencies for you. To enable this feature, pass -DBUILD_DEPENDENCIES=ON to cmake when configuring the XGC build below.
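
    For example, a minimal Superbuild configure, run from the build directory created in the steps below, would be:

    cmake -DBUILD_DEPENDENCIES=ON ..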

  • Load modules and set environment variables

    Compiling and running on the supported systems may require modules and environment variables. See Environment at HPC facilities at the bottom of this page for the commonly used ones on your system.

  • Create and enter a build directory.

    mkdir build; cd build
    
  • Run CMake to configure a build of XGC.

    cmake ..
    

    Additional settings can be passed as -D flags, e.g.:

    cmake -DBUILD_DEPENDENCIES=ON -DCONVERT_GRID2=ON -DSOLVERLU=OFF ..
    

    To interactively edit configuration settings, use ccmake . for a CLI or cmake-gui for a GUI.

    For a full list of XGC configure options, see XGC Preprocessor Macros.
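
    As a non-interactive alternative to ccmake, plain CMake can list the cached configuration options together with their help strings (a standard CMake feature, not specific to XGC); run from the build directory:

    cmake -LH .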

  • Build all available targets:

    make -j
    

    Or just the one you want, e.g.:

    make -j xgc-es-cpp
    

    The executables will be in build/bin. Currently available targets are: xgc-es-cpp, xgc-es-cpp-gpu, xgca-cpp, xgca-cpp-gpu, xgc-eem-cpp, xgc-eem-cpp-gpu, as well as kernels and tests (see Kernels and Tests).
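
  • Putting the steps together, an end-to-end build looks like the following sketch (summit is used as the example platform; pick any target from the list above):

    export XGC_PLATFORM=summit
    # load modules / set environment as described in "Environment at HPC facilities" below
    mkdir build; cd build
    cmake ..
    make -j xgc-es-cpp-gpu
    # executables are placed in build/bin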

Environment at HPC facilities

Compiling and running on the supported systems may require modules and environment variables. Below are the commonly used sets.

Frontier

 module reset
 module unload perftools-base
 module load cmake
 module load PrgEnv-amd
 module swap amd amd/5.2.0
 module swap cray-mpich cray-mpich/8.1.25
 module load craype-accel-amd-gfx90a
 export CRAYPE_LINK_TYPE=dynamic
 export PATH=${CRAY_MPICH_PREFIX}/bin:${PATH}
 export PATH=${ROCM_COMPILER_PATH}/bin:${PATH}
 export MPICH_SMP_SINGLE_COPY_MODE=NONE
 export MPICH_GPU_SUPPORT_ENABLED=1
 export ROCM_PATH=/opt/rocm-5.2.0
 export OLCF_ROCM_ROOT=/opt/rocm-5.2.0
 export LD_LIBRARY_PATH=$CRAY_LD_LIBRARY_PATH:$LD_LIBRARY_PATH
 export MPICH_CXX=${OLCF_ROCM_ROOT}/bin/hipcc
 export LLVM_PATH=${ROCM_PATH}/llvm
 export HIP_CLANG_PATH=${ROCM_PATH}/llvm/bin
 export HSA_PATH=${ROCM_PATH}
 export ROCMINFO_PATH=${ROCM_PATH}
 export DEVICE_LIB_PATH=${ROCM_PATH}/amdgcn/bitcode
 export HIP_DEVICE_LIB_PATH=${ROCM_PATH}/amdgcn/bitcode
 export HIP_PLATFORM=amd
 export HIP_COMPILER=clang
 export HIPCC_COMPILE_FLAGS_APPEND="$HIPCC_COMPILE_FLAGS_APPEND --rocm-path=${ROCM_PATH}"
 export XGC_PLATFORM=frontier

Extra CMake variables needed:
-DCMAKE_BUILD_TYPE=RelWithDebInfo  # or Release
-DCMAKE_CXX_COMPILER=`which mpicxx`
-DCMAKE_C_COMPILER=`which mpicc`
-DCMAKE_Fortran_COMPILER=`which mpifort`
-DCMAKE_CXX_FLAGS="-I${OLCF_ROCM_ROOT}/include  -munsafe-fp-atomics"
-DCMAKE_EXE_LINKER_FLAGS="-L${OLCF_ROCM_ROOT}/lib -lamdhip64"
-DUSE_GPU_AWARE_MPI=On
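
Assembled into a single command, a Frontier configure could look like this sketch (assuming the environment above has been set and you are in a fresh build directory):

 cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo \
       -DCMAKE_CXX_COMPILER=`which mpicxx` \
       -DCMAKE_C_COMPILER=`which mpicc` \
       -DCMAKE_Fortran_COMPILER=`which mpifort` \
       -DCMAKE_CXX_FLAGS="-I${OLCF_ROCM_ROOT}/include -munsafe-fp-atomics" \
       -DCMAKE_EXE_LINKER_FLAGS="-L${OLCF_ROCM_ROOT}/lib -lamdhip64" \
       -DUSE_GPU_AWARE_MPI=On \
       ..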

Greene

source /p/xgc/Software/greene_config_gcc11_20230501

Perlmutter GCC

April 23, 2023 update: the GNU/GCC compilers are recommended for running on Perlmutter GPU compute nodes.

These instructions assume the default module list as the starting point, i.e. no module load/unload commands in the user’s .bashrc/.bash_profile/etc. before running the commands below.

module load cmake/3.24.3
module load cray-fftw
module unload darshan
module unload cray-libsci
export XGC_PLATFORM=perlmutter_gcc
export CRAYPE_LINK_TYPE=dynamic
export NVCC_WRAPPER_DEFAULT_COMPILER=CC

cmake -DCMAKE_CXX_COMPILER=CC -DCMAKE_C_COMPILER=cc -DCMAKE_Fortran_COMPILER=ftn -DCMAKE_BUILD_TYPE=Release ..

Perlmutter CPU GCC

For Perlmutter CPU-only compute nodes.

April 23, 2023 update: the GNU/GCC compilers are recommended for running on Perlmutter CPU-only compute nodes.

module unload gpu
module load cmake/3.24.3
module load cray-fftw
module unload darshan
module unload cray-libsci
export XGC_PLATFORM=perlmutter_cpu_gcc
export CRAYPE_LINK_TYPE=dynamic

cmake -DCMAKE_CXX_COMPILER=CC -DCMAKE_C_COMPILER=cc -DCMAKE_Fortran_COMPILER=ftn -DCMAKE_BUILD_TYPE=Release ..

Perlmutter Nvidia

April 23, 2023 update: the GCC compilers are recommended for building XGC on Perlmutter; some issues with nvidia 23.1 + cuda 12.0 are still being sorted out.

module unload gpu
module load cmake/3.24.3
module load PrgEnv-nvidia
module swap nvidia nvidia/23.1
module load cudatoolkit/12.0
module load cray-fftw
module unload darshan
module unload cray-libsci
export XGC_PLATFORM=perlmutter_nvhpc
export CRAYPE_LINK_TYPE=dynamic
export NVCC_WRAPPER_DEFAULT_COMPILER=CC

cmake -DCMAKE_CXX_COMPILER=CC -DCMAKE_C_COMPILER=cc -DCMAKE_Fortran_COMPILER=ftn ..

Perlmutter CPU Nvidia

For Perlmutter CPU-only compute nodes.

April 23, 2023 update: the recommendation is to use the GCC compilers for building XGC on Perlmutter.

module unload gpu
module load cmake/3.24.3
module load PrgEnv-nvidia
module swap nvidia nvidia/23.1
module load cray-fftw
module unload cray-libsci
module unload darshan
export XGC_PLATFORM=perlmutter_cpu_nvidia
export CRAYPE_LINK_TYPE=dynamic

cmake -DCMAKE_CXX_COMPILER=CC -DCMAKE_C_COMPILER=cc -DCMAKE_Fortran_COMPILER=ftn ..

Polaris

module load cmake kokkos cabana cray-fftw
export XGC_PLATFORM=polaris

cmake -DCMAKE_CXX_COMPILER=CC -DCMAKE_C_COMPILER=cc -DCMAKE_Fortran_COMPILER=ftn ..

Stellar

source /scratch/gpfs/xgc/STELLAR/Software/bin/set_up_xgc.stellar

Summit

module load nvhpc/22.5
module load spectrum-mpi
module load python
module load netlib-lapack
module load hypre
module load fftw
module load hdf5
module load cmake/3.20.2
module load libfabric/1.12.1-sysrdma
module swap cuda/nvhpc cuda/11.7.1
export XGC_PLATFORM=summit
export OMP_NUM_THREADS=14
export FC=mpifort
export CC=mpicc
export KOKKOS_SRC_DIR=/gpfs/alpine/world-shared/phy122/lib/install/summit/kokkos/Aug31_23_nvhpc22.5
export CXX=$KOKKOS_SRC_DIR/bin/nvcc_wrapper
export NVCC_WRAPPER_DEFAULT_COMPILER=mpiCC
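
CMake honors the FC, CC, and CXX environment variables exported above, so the configure needs no explicit compiler options; a minimal sketch (the Release build type is an assumption carried over from the other systems):

 cmake -DCMAKE_BUILD_TYPE=Release ..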

Sunspot

module load spack cmake
module load oneapi/eng-compiler/2023.05.15.006
module load kokkos/2023.05.15.006/eng-compiler/sycl_intel_aot
module load cabana/2023.05.15.006/eng-compiler/sycl_intel
export XGC_PLATFORM=sunspot
export OMP_NUM_THREADS=16
export ONEAPI_MPICH_GPU=NO_GPU
CXX="mpic++ -cxx=icpx" CC="mpicc -cc=icx" FC="mpifort -fc=ifort" I_MPI_CXX=icpx I_MPI_CC=icx I_MPI_F90=ifort\
    cmake\
    -DCMAKE_Fortran_FLAGS="-g -init=arrays,zero -fpp -O2 -fPIC -fopenmp -DCPP_PARTICLE_INIT=On -diag-disable=5462 -diag-disable=8291"\
    -DCMAKE_C_FLAGS="-g -O2 -fPIC -fopenmp -DCPP_PARTICLE_INIT=On"\
    -DCMAKE_CXX_FLAGS="-g -O2 -fPIC -fopenmp -Wno-tautological-constant-compare -DCPP_PARTICLE_INIT=On"\
    -DCMAKE_EXE_LINKER_FLAGS=" -fsycl-max-parallel-link-jobs=8 -limf -lsvml -lintlc -lifcore" ..

Theta

module load cray-libsci
module load cray-hdf5-parallel
module load cray-fftw
module use -a /projects/TokamakITER/Software/modulefiles
module load adios2/DEFAULT
module load cmake/3.20.4
export CRAYPE_LINK_TYPE=dynamic
export XGC_PLATFORM=theta
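
As on the other Cray systems above, the Cray compiler wrappers are the natural choice; a configure sketch under that assumption (no explicit configure line is given for Theta):

 cmake -DCMAKE_CXX_COMPILER=CC -DCMAKE_C_COMPILER=cc -DCMAKE_Fortran_COMPILER=ftn ..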

Traverse

source /home/rhager/Software/bin/set_up_xgc.traverse

CMake version 3.22.2 is available and can be used via xgc_cmake and xgc_ccmake.
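
Assuming xgc_cmake and xgc_ccmake are drop-in wrappers for cmake and ccmake (set up by the script sourced above), configuring would then look like:

 xgc_cmake ..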