DUNE PDELab (git)

Dune::Communication< MPI_Comm > Class Reference

Specialization of Communication for MPI. More...

#include <dune/common/parallel/mpicommunication.hh>

Public Member Functions

 Communication (const MPI_Comm &c=MPI_COMM_WORLD)
 Instantiation using an MPI communicator.
 
 Communication (const Communication< No_Comm > &)
 Converting constructor from Communication< No_Comm >; it is interpreted as MPI_COMM_SELF.
 
int rank () const
 Return the rank of this process; it is between 0 and size()-1. More...
 
int size () const
 Return the number of processes in the set; it is greater than 0. More...
 
template<class T >
int send (const T &data, int dest_rank, int tag) const
 Sends the data to the dest_rank. More...
 
template<class T >
MPIFuture< T > isend (T &&data, int dest_rank, int tag) const
 Sends the data to the dest_rank nonblocking. More...
 
template<class T >
T recv (T &&data, int source_rank, int tag, MPI_Status *status=MPI_STATUS_IGNORE) const
 Receives the data from the source_rank. More...
 
template<class T >
MPIFuture< T > irecv (T &&data, int source_rank, int tag) const
 Receives the data from the source_rank nonblocking. More...
 
template<typename T >
T sum (const T &in) const
 Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+. More...
 
template<typename T >
int sum (T *inout, int len) const
 Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+. More...
 
template<typename T >
T prod (const T &in) const
 Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*. More...
 
template<typename T >
int prod (T *inout, int len) const
 Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*. More...
 
template<typename T >
T min (const T &in) const
 Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<. More...
 
template<typename T >
int min (T *inout, int len) const
 Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<. More...
 
template<typename T >
T max (const T &in) const
 Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<. More...
 
template<typename T >
int max (T *inout, int len) const
 Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<. More...
 
int barrier () const
 Wait until all processes have arrived at this point in the program. More...
 
MPIFuture< void > ibarrier () const
 Nonblocking barrier. More...
 
template<typename T >
int broadcast (T *inout, int len, int root) const
 Distribute an array from the process with rank root to all other processes. More...
 
template<class T >
MPIFuture< T > ibroadcast (T &&data, int root) const
 Distribute an array from the process with rank root to all other processes nonblocking. More...
 
template<typename T >
int gather (const T *in, T *out, int len, int root) const
 Gather arrays on root task. More...
 
template<class TIN , class TOUT = std::vector<TIN>>
MPIFuture< TOUT, TIN > igather (TIN &&data_in, TOUT &&data_out, int root) const
 Gather arrays on root task nonblocking. More...
 
template<typename T >
int gatherv (const T *in, int sendDataLen, T *out, int *recvDataLen, int *displ, int root) const
 Gather arrays of variable size on root task. More...
 
template<typename T >
int scatter (const T *sendData, T *recvData, int len, int root) const
 Scatter an array from a root to all other tasks. More...
 
template<class TIN , class TOUT = TIN>
MPIFuture< TOUT, TIN > iscatter (TIN &&data_in, TOUT &&data_out, int root) const
 Scatter an array from a root to all other tasks, nonblocking. More...
 
template<typename T >
int scatterv (const T *sendData, int *sendDataLen, int *displ, T *recvData, int recvDataLen, int root) const
 Scatter arrays of variable length from a root to all other tasks. More...
 
template<typename T , typename T1 >
int allgather (const T *sbuf, int count, T1 *rbuf) const
 Gathers data from all tasks and distributes it to all. More...
 
template<class TIN , class TOUT = TIN>
MPIFuture< TOUT, TIN > iallgather (TIN &&data_in, TOUT &&data_out) const
 Gathers data from all tasks and distributes it to all, nonblocking. More...
 
template<typename T >
int allgatherv (const T *in, int sendDataLen, T *out, int *recvDataLen, int *displ) const
 Gathers data of variable length from all tasks and distributes it to all. More...
 
template<typename BinaryFunction , typename Type >
int allreduce (Type *inout, int len) const
 Compute a reduction, given by BinaryFunction, over all processes for each component of an array and return the result in every process. More...
 
template<class BinaryFunction , class TIN , class TOUT = TIN>
MPIFuture< TOUT, TIN > iallreduce (TIN &&data_in, TOUT &&data_out) const
 Compute a reduction over all processes, nonblocking. More...
 
template<class BinaryFunction , class T >
MPIFuture< T > iallreduce (T &&data) const
 Compute a reduction over all processes, nonblocking. More...
 
template<typename BinaryFunction , typename Type >
int allreduce (const Type *in, Type *out, int len) const
 

Detailed Description

Specialization of Communication for MPI.

Member Function Documentation

◆ allgather()

template<typename T , typename T1 >
int Dune::Communication< MPI_Comm >::allgather ( const T *  sbuf,
int  count,
T1 *  rbuf 
) const
inline

Gathers data from all tasks and distributes it to all.

The block of data sent from the jth process is received by every process and placed in the jth block of the buffer rbuf.

Parameters
[in]  sbuf   The buffer with the data to send. Has to be the same for each task.
[in]  count  The number of elements to send from each process.
[out] rbuf   The receive buffer for the data. Has to be of size notasks*count, with notasks being the number of tasks in the communicator.
Returns
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
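
Usage sketch (assumes comm is a Dune::Communication< MPI_Comm > on MPI_COMM_WORLD, MPI initialized e.g. via Dune::MPIHelper::instance, and <vector> included):

  // Every rank contributes one int; every rank receives all of them in rank order.
  std::vector<int> all(comm.size());        // notasks*count elements
  int myValue = comm.rank();                // one element per process
  comm.allgather(&myValue, 1, all.data());  // afterwards all[j] holds rank j's value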

◆ allgatherv()

template<typename T >
int Dune::Communication< MPI_Comm >::allgatherv ( const T *  in,
int  sendDataLen,
T *  out,
int *  recvDataLen,
int *  displ 
) const
inline

Gathers data of variable length from all tasks and distributes it to all.

The block of data sent from the jth process is received by every process and placed in the jth block of the buffer out.

Parameters
[in]  in           The send buffer with the data to send.
[in]  sendDataLen  The number of elements to send on each task.
[out] out          The buffer to store the received data in.
[in]  recvDataLen  An array with size equal to the number of processes containing the number of elements to receive from process i at position i, i.e. the number that is passed as sendDataLen argument to this function in process i.
[in]  displ        An array with size equal to the number of processes. Data received from process i will be written starting at out+displ[i].
Returns
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
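
Usage sketch (assumes comm is a Dune::Communication< MPI_Comm >; counts and displacements must be known on every rank):

  // Rank r contributes r+1 values; every rank receives the full concatenation.
  int n = comm.rank() + 1;
  std::vector<double> mine(n, double(comm.rank()));
  std::vector<int> counts(comm.size()), displ(comm.size(), 0);
  for (int i = 0; i < comm.size(); ++i) counts[i] = i + 1;
  for (int i = 1; i < comm.size(); ++i) displ[i] = displ[i-1] + counts[i-1];
  std::vector<double> all(displ.back() + counts.back());
  comm.allgatherv(mine.data(), n, all.data(), counts.data(), displ.data());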

◆ allreduce() [1/2]

template<typename BinaryFunction , typename Type >
int Dune::Communication< MPI_Comm >::allreduce ( const Type *  in,
Type *  out,
int  len 
) const
inline

◆ allreduce() [2/2]

template<typename BinaryFunction , typename Type >
int Dune::Communication< MPI_Comm >::allreduce ( Type *  inout,
int  len 
) const
inline

Compute a reduction over all processes for each component of an array and return the result in every process.

The template parameter BinaryFunction is the type of the binary function to use for the computation.

Parameters
inout  The array to compute on.
len    The number of components in the array.
Returns
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
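
Usage sketch (assumes comm is a Dune::Communication< MPI_Comm >, <functional> included, and that std::plus is an accepted BinaryFunction for the sum):

  // Element-wise global sum of a three-component array, result on every rank.
  double data[3] = {1.0, 2.0, 3.0};             // local contribution
  comm.allreduce<std::plus<double>>(data, 3);   // data now holds the global sums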

◆ barrier()

int Dune::Communication< MPI_Comm >::barrier ( ) const
inline

Wait until all processes have arrived at this point in the program.

Returns
MPI_SUCCESS (==0) if successful, an MPI error code otherwise

Referenced by Dune::graphRepartition().

◆ broadcast()

template<typename T >
int Dune::Communication< MPI_Comm >::broadcast ( T *  inout,
int  len,
int  root 
) const
inline

Distribute an array from the process with rank root to all other processes.

Returns
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
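
Usage sketch (assumes comm is a Dune::Communication< MPI_Comm > and <array> included):

  // Rank 0 fills the buffer; all other ranks receive a copy of it.
  std::array<double, 4> buf{};
  if (comm.rank() == 0)
    buf = {1.0, 2.0, 3.0, 4.0};
  comm.broadcast(buf.data(), static_cast<int>(buf.size()), 0);  // identical on all ranks afterwards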

◆ gather()

template<typename T >
int Dune::Communication< MPI_Comm >::gather ( const T *  in,
T *  out,
int  len,
int  root 
) const
inline

Gather arrays on root task.

Each process sends its in array of length len to the root process (including the root itself). In the root process these arrays are stored in rank order in the out array which must have size len * number of processes.

Parameters
[in]  in    The send buffer with the data to send.
[out] out   The buffer to store the received data in. Might have length zero on non-root tasks.
[in]  len   The number of elements to send on each task.
[in]  root  The root task that gathers the data.
Returns
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
Note
out must have space for P*len elements on the root task, where P is the number of processes
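
Usage sketch (assumes comm is a Dune::Communication< MPI_Comm > and <vector> included):

  // Every rank sends one int; rank 0 receives them in rank order.
  int myRank = comm.rank();
  std::vector<int> gathered;
  if (comm.rank() == 0)
    gathered.resize(comm.size());            // P*len elements, only needed on the root
  comm.gather(&myRank, gathered.data(), 1, 0);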

◆ gatherv()

template<typename T >
int Dune::Communication< MPI_Comm >::gatherv ( const T *  in,
int  sendDataLen,
T *  out,
int *  recvDataLen,
int *  displ,
int  root 
) const
inline

Gather arrays of variable size on root task.

Each process sends its in array of length sendDataLen to the root process (including the root itself). In the root process these arrays are stored in rank order in the out array.

Parameters
[in]  in           The send buffer with the data to be sent.
[in]  sendDataLen  The number of elements to send on each task.
[out] out          The buffer to store the received data in. May have length zero on non-root tasks.
[in]  recvDataLen  An array with size equal to the number of processes containing the number of elements to receive from process i at position i, i.e. the number that is passed as sendDataLen argument to this function in process i. May have length zero on non-root tasks.
[in]  displ        An array with size equal to the number of processes. Data received from process i will be written starting at out+displ[i] on the root process. May have length zero on non-root tasks.
[in]  root         The root task that gathers the data.
Returns
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
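
Usage sketch (assumes comm is a Dune::Communication< MPI_Comm >; counts and displacements only need meaningful contents on the root):

  // Rank r contributes r+1 values; rank 0 assembles them contiguously.
  int n = comm.rank() + 1;
  std::vector<double> mine(n, double(comm.rank()));
  std::vector<int> counts, offsets;
  std::vector<double> all;
  if (comm.rank() == 0) {
    counts.resize(comm.size());
    offsets.resize(comm.size(), 0);
    for (int i = 0; i < comm.size(); ++i) counts[i] = i + 1;
    for (int i = 1; i < comm.size(); ++i) offsets[i] = offsets[i-1] + counts[i-1];
    all.resize(offsets.back() + counts.back());
  }
  comm.gatherv(mine.data(), n, all.data(), counts.data(), offsets.data(), 0);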

◆ iallgather()

template<class TIN , class TOUT = TIN>
MPIFuture< TOUT, TIN > Dune::Communication< MPI_Comm >::iallgather ( TIN &&  data_in,
TOUT &&  data_out 
) const
inline

Gathers data from all tasks and distributes it to all, nonblocking.

Returns
Future<TOUT, TIN> containing the distributed data

◆ iallreduce() [1/2]

template<class BinaryFunction , class T >
MPIFuture< T > Dune::Communication< MPI_Comm >::iallreduce ( T &&  data) const
inline

Compute a reduction over all processes, nonblocking.

Returns
Future<T> containing the result of the reduction

◆ iallreduce() [2/2]

template<class BinaryFunction , class TIN , class TOUT = TIN>
MPIFuture< TOUT, TIN > Dune::Communication< MPI_Comm >::iallreduce ( TIN &&  data_in,
TOUT &&  data_out 
) const
inline

Compute a reduction over all processes, nonblocking.

Returns
Future<TOUT, TIN> containing the result of the reduction

◆ ibarrier()

MPIFuture< void > Dune::Communication< MPI_Comm >::ibarrier ( ) const
inline

Nonblocking barrier.

Returns
Future<void> which is complete when all processes have reached the barrier
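
Usage sketch (assumes comm is a Dune::Communication< MPI_Comm > and that MPIFuture provides wait(), as in the dune-common future concept):

  // The call returns immediately; the future completes once every rank has entered ibarrier().
  auto done = comm.ibarrier();
  // ... local work that does not depend on the other ranks ...
  done.wait();            // block until the barrier is complete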

◆ ibroadcast()

template<class T >
MPIFuture< T > Dune::Communication< MPI_Comm >::ibroadcast ( T &&  data,
int  root 
) const
inline

Distribute an array from the process with rank root to all other processes nonblocking.

Returns
Future<T> containing the distributed data

◆ igather()

template<class TIN , class TOUT = std::vector<TIN>>
MPIFuture< TOUT, TIN > Dune::Communication< MPI_Comm >::igather ( TIN &&  data_in,
TOUT &&  data_out,
int  root 
) const
inline

Gather arrays on root task nonblocking.

Returns
Future<TOUT, TIN> containing the gathered data

◆ irecv()

template<class T >
MPIFuture< T > Dune::Communication< MPI_Comm >::irecv ( T &&  data,
int  source_rank,
int  tag 
) const
inline

Receives the data from the source_rank nonblocking.

Returns
Future<T> containing the received data when complete

References DUNE_THROW.

◆ iscatter()

template<class TIN , class TOUT = TIN>
MPIFuture< TOUT, TIN > Dune::Communication< MPI_Comm >::iscatter ( TIN &&  data_in,
TOUT &&  data_out,
int  root 
) const
inline

Scatter an array from a root to all other tasks, nonblocking.

Returns
Future<TOUT, TIN> containing the scattered data

◆ isend()

template<class T >
MPIFuture< T > Dune::Communication< MPI_Comm >::isend ( T &&  data,
int  dest_rank,
int  tag 
) const
inline

Sends the data to the dest_rank nonblocking.

Returns
Future<T> containing the send buffer; it completes when the data has been sent
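
Usage sketch of a nonblocking exchange between rank 0 and rank 1 (assumes comm is a Dune::Communication< MPI_Comm >, at least two ranks, that std::vector<double> is a supported data type, and that MPIFuture provides wait() and get()):

  if (comm.rank() == 0) {
    auto sendDone = comm.isend(std::vector<double>{1.0, 2.0}, 1 /*dest_rank*/, 42 /*tag*/);
    sendDone.wait();                                 // send buffer may be released afterwards
  }
  else if (comm.rank() == 1) {
    auto msg = comm.irecv(std::vector<double>(2), 0 /*source_rank*/, 42 /*tag*/);
    std::vector<double> received = msg.get();        // blocks until the message has arrived
  }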

◆ max() [1/2]

template<typename T >
T Dune::Communication< MPI_Comm >::max ( const T &  in) const
inline

Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

Referenced by Dune::PDELab::SolverStatistics< T >::max().

◆ max() [2/2]

template<typename T >
int Dune::Communication< MPI_Comm >::max ( T *  inout,
int  len 
) const
inline

Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

◆ min() [1/2]

template<typename T >
T Dune::Communication< MPI_Comm >::min ( const T &  in) const
inline

Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

Referenced by Dune::PDELab::SolverStatistics< T >::min().

◆ min() [2/2]

template<typename T >
int Dune::Communication< MPI_Comm >::min ( T *  inout,
int  len 
) const
inline

Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

◆ prod() [1/2]

template<typename T >
T Dune::Communication< MPI_Comm >::prod ( const T &  in) const
inline

Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*.

◆ prod() [2/2]

template<typename T >
int Dune::Communication< MPI_Comm >::prod ( T *  inout,
int  len 
) const
inline

Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*.

◆ rank()

int Dune::Communication< MPI_Comm >::rank ( ) const
inline

Return the rank of this process; it is between 0 and size()-1.

Referenced by Dune::graphRepartition(), and Dune::storeMatrixMarket().

◆ recv()

template<class T >
T Dune::Communication< MPI_Comm >::recv ( T &&  data,
int  source_rank,
int  tag,
MPI_Status *  status = MPI_STATUS_IGNORE 
) const
inline

Receives the data from the source_rank.

Returns
The received data

◆ scatter()

template<typename T >
int Dune::Communication< MPI_Comm >::scatter ( const T *  sendData,
T *  recvData,
int  len,
int  root 
) const
inline

Scatter an array from a root to all other tasks.

The root process sends the elements with indices k*len to (k+1)*len-1 of its array to task k, which stores them at indices 0 to len-1.

Parameters
[in]  sendData  The array to scatter. Might have length zero on non-root tasks.
[out] recvData  The buffer to store the received data in. Upon completion each task holds the block of the root's sendData that corresponds to its rank.
[in]  len       The number of elements in the recv buffer.
[in]  root      The root task that scatters the data.
Returns
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
Note
sendData must have space for P*len elements on the root task, where P is the number of processes
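
Usage sketch (assumes comm is a Dune::Communication< MPI_Comm > and <vector> included):

  // Rank 0 distributes one value to each rank.
  std::vector<int> send;
  if (comm.rank() == 0) {
    send.resize(comm.size());                // P*len elements, only needed on the root
    for (int i = 0; i < comm.size(); ++i)
      send[i] = 100 + i;
  }
  int mine = 0;
  comm.scatter(send.data(), &mine, 1, 0);    // rank r now holds 100 + r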

◆ scatterv()

template<typename T >
int Dune::Communication< MPI_Comm >::scatterv ( const T *  sendData,
int *  sendDataLen,
int *  displ,
T *  recvData,
int  recvDataLen,
int  root 
) const
inline

Scatter arrays of variable length from a root to all other tasks.

The root process sends the elements with indices displ[k] to displ[k]+sendDataLen[k]-1 of its sendData array to task k, which stores them at indices 0 to recvDataLen-1.

Parameters
[in]  sendData     The array to scatter. May have length zero on non-root tasks.
[in]  sendDataLen  An array with size equal to the number of processes containing the number of elements to scatter to process i at position i, i.e. the number that is passed as recvDataLen argument to this function in process i.
[in]  displ        An array with size equal to the number of processes. Data scattered to process i will be read starting at sendData+displ[i] on the root process.
[out] recvData     The buffer to store the received data in. Upon completion each task holds its block of the root's sendData.
[in]  recvDataLen  The number of elements in the recvData buffer.
[in]  root         The root task that scatters the data.
Returns
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
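
Usage sketch (assumes comm is a Dune::Communication< MPI_Comm >; sendDataLen and displ only need meaningful contents on the root):

  // Rank 0 sends r+1 values to rank r.
  int n = comm.rank() + 1;
  std::vector<int> counts, offsets;
  std::vector<double> send;
  if (comm.rank() == 0) {
    counts.resize(comm.size());
    offsets.resize(comm.size(), 0);
    for (int i = 0; i < comm.size(); ++i) counts[i] = i + 1;
    for (int i = 1; i < comm.size(); ++i) offsets[i] = offsets[i-1] + counts[i-1];
    send.resize(offsets.back() + counts.back(), 3.14);
  }
  std::vector<double> recvBuf(n);
  comm.scatterv(send.data(), counts.data(), offsets.data(), recvBuf.data(), n, 0);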

◆ send()

template<class T >
int Dune::Communication< MPI_Comm >::send ( const T &  data,
int  dest_rank,
int  tag 
) const
inline

Sends the data to the dest_rank.

Returns
MPI_SUCCESS (==0) if successful, an MPI error code otherwise
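
Usage sketch of a blocking point-to-point exchange (assumes comm is a Dune::Communication< MPI_Comm >, at least two ranks, and that double is a supported data type):

  if (comm.rank() == 0) {
    double payload = 3.14;
    comm.send(payload, 1 /*dest_rank*/, 7 /*tag*/);
  }
  else if (comm.rank() == 1) {
    double payload = 0.0;
    comm.recv(payload, 0 /*source_rank*/, 7 /*tag*/);   // payload now holds 3.14
  }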

◆ size()

int Dune::Communication< MPI_Comm >::size ( ) const
inline

Return the number of processes in the set; it is greater than 0.

Referenced by Dune::graphRepartition().

◆ sum() [1/2]

template<typename T >
T Dune::Communication< MPI_Comm >::sum ( const T &  in) const
inline

Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+.

Referenced by Dune::PDELab::SolverStatistics< T >::avg(), Dune::OwnerOverlapCopyCommunication< GlobalIdType, LocalIdType >::dot(), Dune::PDELab::SolverStatistics< T >::size(), and Dune::PDELab::SolverStatistics< T >::stddev().
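
A complete minimal sketch, assuming the headers dune/common/parallel/mpihelper.hh and dune/common/parallel/mpicommunication.hh are available (Dune::MPIHelper takes care of MPI_Init/MPI_Finalize):

  #include <iostream>
  #include <dune/common/parallel/mpihelper.hh>
  #include <dune/common/parallel/mpicommunication.hh>

  int main(int argc, char** argv)
  {
    Dune::MPIHelper::instance(argc, argv);
    Dune::Communication<MPI_Comm> comm(MPI_COMM_WORLD);

    double local = 1.0;                      // each rank contributes 1.0
    double global = comm.sum(local);         // == comm.size() on every rank
    if (comm.rank() == 0)
      std::cout << "global sum = " << global << std::endl;
    return 0;
  }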

◆ sum() [2/2]

template<typename T >
int Dune::Communication< MPI_Comm >::sum ( T *  inout,
int  len 
) const
inline

Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+.


The documentation for this class was generated from the following file: dune/common/parallel/mpicommunication.hh