XGC1
shift.cpp File Reference
#include <Cabana_Core.hpp>
#include <Kokkos_Core.hpp>
#include <algorithm>
#include <iostream>
#include <vector>
#include <mpi.h>
#include "timer_macro.hpp"
#include "domain_decomposition.hpp"
#include "sort.hpp"
#include "transpose_particles.hpp"
#include "shift.hpp"

Functions

template<int PTL_N_DBL, class DataType , class MPIDevice >
void copy_to_send_buffer (Cabana::AoSoA< DataType, DeviceType, VEC_LEN > &local_particles, int n_staying, Kokkos::View< OneParticle< PTL_N_DBL > *, MPIDevice > &buf)
 
template<class DataType >
void transpose_and_shift (Kokkos::View< int *, HostType > &sendcnt, Kokkos::View< int *, HostType > &sdispls, Kokkos::View< int *, HostType > &recvcnt, Kokkos::View< int *, HostType > &rdispls, Cabana::AoSoA< DataType, DeviceType, VEC_LEN > &local_particles, int n_staying, int n_leaving, int n_arriving, const MyMPI &mpi)
 
void shift (Species< DeviceType > &species, SortViews< DeviceType > &sort_views, const Grid< DeviceType > &grid, const MagneticField< DeviceType > &magnetic_field, const DomainDecomposition< DeviceType > &pol_decomp, int isp, bool send_phase0)
 
void shift_all_species (Plasma &plasma, const Grid< DeviceType > &grid, const MagneticField< DeviceType > &magnetic_field, const DomainDecomposition< DeviceType > &pol_decomp, Shift::ShiftPh0 shift_ph0)
 

Function Documentation

template<int PTL_N_DBL, class DataType , class MPIDevice >
void copy_to_send_buffer ( Cabana::AoSoA< DataType, DeviceType, VEC_LEN > &  local_particles,
int  n_staying,
Kokkos::View< OneParticle< PTL_N_DBL > *, MPIDevice > &  buf 
)


void shift ( Species< DeviceType > &  species,
SortViews< DeviceType > &  sort_views,
const Grid< DeviceType > &  grid,
const MagneticField< DeviceType > &  magnetic_field,
const DomainDecomposition< DeviceType > &  pol_decomp,
int  isp,
bool  send_phase0 
)

Shifts particles between MPI ranks. First, the particles are sorted by destination rank. They are then packed into a buffer and sent with an MPI all-to-all, and finally unpacked from the receive buffer back into the particle AoSoA. The same is done for phase0, which is needed for ions.

Parameters
[in,out]  species         contains the particles
[in]      sort_views      is needed for the sort
[in]      grid            is needed for the sort
[in]      magnetic_field  is needed for the sort
[in]      pol_decomp      is needed for the sort and shift
[in]      isp             is the species index
Returns
void
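
The exchange described above follows a standard all-to-all pattern: once the particles are sorted by destination rank, the per-rank send counts are known, the receive counts are obtained with an all-to-all of the counts, and the packed particles are exchanged with MPI_Alltoallv. The sketch below shows that pattern with a hypothetical flat particle type and plain std::vector buffers; it is not the XGC routine and omits the sort, the packing, and the phase0 exchange.

#include <mpi.h>
#include <vector>

// Hypothetical particle payload; the real code sends OneParticle<PTL_N_DBL>.
struct ParticleStub {
  double ph[6];
};

// Sketch: exchange already-packed particles between all ranks.
std::vector<ParticleStub> shift_sketch(const std::vector<ParticleStub> &send_buf,
                                       const std::vector<int> &sendcnt, // particles per destination rank
                                       MPI_Comm comm) {
  int nranks;
  MPI_Comm_size(comm, &nranks);

  // Every rank learns how many particles it will receive from each other rank.
  std::vector<int> recvcnt(nranks);
  MPI_Alltoall(sendcnt.data(), 1, MPI_INT, recvcnt.data(), 1, MPI_INT, comm);

  // Convert counts to bytes and build the send/recv displacements.
  std::vector<int> sbytes(nranks), rbytes(nranks), sdispls(nranks, 0), rdispls(nranks, 0);
  int n_arriving = 0;
  for (int r = 0; r < nranks; ++r) {
    sbytes[r] = sendcnt[r] * static_cast<int>(sizeof(ParticleStub));
    rbytes[r] = recvcnt[r] * static_cast<int>(sizeof(ParticleStub));
    n_arriving += recvcnt[r];
    if (r > 0) {
      sdispls[r] = sdispls[r - 1] + sbytes[r - 1];
      rdispls[r] = rdispls[r - 1] + rbytes[r - 1];
    }
  }

  // Exchange the packed particles as raw bytes.
  std::vector<ParticleStub> recv_buf(n_arriving);
  MPI_Alltoallv(send_buf.data(), sbytes.data(), sdispls.data(), MPI_BYTE,
                recv_buf.data(), rbytes.data(), rdispls.data(), MPI_BYTE, comm);
  return recv_buf;
}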


void shift_all_species ( Plasma &  plasma,
const Grid< DeviceType > &  grid,
const MagneticField< DeviceType > &  magnetic_field,
const DomainDecomposition< DeviceType > &  pol_decomp,
Shift::ShiftPh0  shift_ph0 
)

Shifts particles between MPI ranks. First, the particles are sorted by destination rank. They are then packed into a buffer and sent with an MPI all-to-all, and finally unpacked from the receive buffer back into the particle AoSoA. The same is done for phase0, which is needed for ions.

Parameters
[in,out]  plasma          contains all species info
[in]      grid            is needed for the sort
[in]      magnetic_field  is needed for the sort
[in]      pol_decomp      is needed for the sort
Returns
void
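
As in shift(), the last step described above is to unpack the arriving particles from the receive buffer back into the particle array, directly after the particles that stayed on this rank. A minimal sketch of that step, again using a hypothetical flat particle type and plain Kokkos views in place of the Cabana AoSoA:

#include <Kokkos_Core.hpp>

struct ParticleStub {
  double ph[6];
};

// Sketch: append the n_arriving received particles after the n_staying particles
// that never left this rank. The particle view is assumed to be large enough.
template <class ExecSpace>
void unpack_arrivals_sketch(Kokkos::View<ParticleStub *, ExecSpace> &particles,
                            const Kokkos::View<ParticleStub *, ExecSpace> &recv_buf,
                            int n_staying, int n_arriving) {
  Kokkos::parallel_for(
      "unpack_arrivals_sketch", Kokkos::RangePolicy<ExecSpace>(0, n_arriving),
      KOKKOS_LAMBDA(const int i) { particles(n_staying + i) = recv_buf(i); });
}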


template<class DataType >
void transpose_and_shift ( Kokkos::View< int *, HostType > &  sendcnt,
Kokkos::View< int *, HostType > &  sdispls,
Kokkos::View< int *, HostType > &  recvcnt,
Kokkos::View< int *, HostType > &  rdispls,
Cabana::AoSoA< DataType, DeviceType, VEC_LEN > &  local_particles,
int  n_staying,
int  n_leaving,
int  n_arriving,
const MyMPI &  mpi
)

Packs particles into a send buffer and shifts them between MPI ranks. If the particles are on the device and GPU-aware MPI is off, the send buffer resides on the host, and there is a receive buffer on the host as well. Otherwise, the send buffer stays on the device and there is no receive buffer, because the arriving particles are brought straight into the particle array. The function is templated on the size of the particle (in doubles) and on the particle data type, so that phase0 can be sent with the same routine.

Parameters
[in]      sendcnt          is the number of particles getting sent to each rank
[in]      sdispls          is the displacement in the send buffer of each rank
[in]      recvcnt          is the number of particles getting received from each rank
[in]      rdispls          is the displacement in the recv buffer of each rank
[in,out]  local_particles  is where the particles are located on device
[in]      n_staying        is the number of particles staying on this rank
[in]      n_leaving        is the number of particles leaving this rank
[in]      n_arriving       is the number of particles arriving to this rank
[in]      mpi              contains MPI and domain decomposition parameters
Returns
void
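
When GPU-aware MPI is off, the buffer placement described above amounts to staging the device send buffer on the host, exchanging through host buffers, and copying the result back to the device. The sketch below illustrates that host-staging path with Kokkos mirror views and an MPI_Alltoallv over raw bytes; the names and the byte-based count/displacement arrays are illustrative assumptions, not the actual XGC interface.

#include <Kokkos_Core.hpp>
#include <mpi.h>
#include <vector>

struct ParticleStub {
  double ph[6];
};

// Sketch: exchange particles via host buffers because MPI cannot read device
// memory in this configuration. d_recv must already be sized for the arrivals.
void exchange_via_host_sketch(const Kokkos::View<ParticleStub *> &d_send,
                              Kokkos::View<ParticleStub *> &d_recv,
                              const std::vector<int> &sendcnt_bytes,
                              const std::vector<int> &sdispls_bytes,
                              const std::vector<int> &recvcnt_bytes,
                              const std::vector<int> &rdispls_bytes,
                              MPI_Comm comm) {
  // Stage the packed send buffer on the host.
  auto h_send = Kokkos::create_mirror_view_and_copy(Kokkos::HostSpace{}, d_send);
  // Host-side receive buffer matching the device receive buffer.
  auto h_recv = Kokkos::create_mirror_view(d_recv);

  MPI_Alltoallv(h_send.data(), sendcnt_bytes.data(), sdispls_bytes.data(), MPI_BYTE,
                h_recv.data(), recvcnt_bytes.data(), rdispls_bytes.data(), MPI_BYTE, comm);

  // Bring the arriving particles back to the device.
  Kokkos::deep_copy(d_recv, h_recv);
}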
