Dune Core Modules (2.3.1)

Dune::CollectiveCommunication< MPI_Comm > Class Reference

Specialization of CollectiveCommunication for MPI. More...

#include <dune/common/parallel/mpicollectivecommunication.hh>

Public Member Functions

 CollectiveCommunication (const MPI_Comm &c)
 Instantiation using an MPI communicator.
 
int rank () const
 Return rank; the value is between 0 and size()-1. More...
 
int size () const
 Number of processes in the set; the value is greater than 0. More...
 
template<typename T >
T sum (T &in) const
 Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+. More...
 
template<typename T >
int sum (T *inout, int len) const
 Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+. More...
 
template<typename T >
T prod (T &in) const
 Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*. More...
 
template<typename T >
int prod (T *inout, int len) const
 Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*. More...
 
template<typename T >
T min (T &in) const
 Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<. More...
 
template<typename T >
int min (T *inout, int len) const
 Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<. More...
 
template<typename T >
T max (T &in) const
 Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<. More...
 
template<typename T >
int max (T *inout, int len) const
 Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<. More...
 
int barrier () const
 Wait until all processes have arrived at this point in the program. More...
 
template<typename T >
int broadcast (T *inout, int len, int root) const
 Distribute an array from the process with rank root to all other processes. More...
 
template<typename T >
int gather (T *in, T *out, int len, int root) const
 Gather arrays on the root task. More...
 
template<typename T >
int scatter (T *send, T *recv, int len, int root) const
 Scatter an array from the root to all other tasks. More...
 
template<typename T , typename T1 >
int allgather (T *sbuf, int count, T1 *rbuf) const
 Gathers data from all tasks and distributes it to all. More...
 
template<typename BinaryFunction , typename Type >
int allreduce (Type *inout, int len) const
 Compute a reduction, given by the BinaryFunction template parameter, over all processes for each component of an array and return the result in every process. More...
 
template<typename BinaryFunction , typename Type >
int allreduce (Type *in, Type *out, int len) const
 Compute a reduction, given by the BinaryFunction template parameter, over all processes for each component of an array and return the result in every process. More...
 

Detailed Description

Specialization of CollectiveCommunication for MPI.

Member Function Documentation

◆ allgather()

template<typename T , typename T1 >
int Dune::CollectiveCommunication< MPI_Comm >::allgather (T * sbuf, int count, T1 * rbuf) const    [inline]

Gathers data from all tasks and distributes it to all.

The block of data sent from the jth process is received by every process and placed in the jth block of the buffer rbuf.

Parameters
    [in]  sbuf   The buffer with the data to send. Has to be the same size on each task.
    [in]  count  The number of elements sent by each process.
    [out] rbuf   The receive buffer for the data. Has to be of size notasks*count, with notasks being the number of tasks in the communicator.
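
A minimal usage sketch (not part of the generated documentation): every rank contributes one value and receives the values of all ranks. It assumes MPI has already been initialized elsewhere, e.g. via MPI_Init or Dune::MPIHelper.

#include <mpi.h>
#include <vector>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void allgatherRanks(MPI_Comm comm)
{
  Dune::CollectiveCommunication<MPI_Comm> cc(comm);
  int myValue = cc.rank();               // one element per process
  std::vector<int> all(cc.size());       // receive buffer of size notasks*count
  cc.allgather(&myValue, 1, &all[0]);    // afterwards all[j] holds the value of rank j
}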

◆ allreduce() [1/2]

template<typename BinaryFunction , typename Type >
int Dune::CollectiveCommunication< MPI_Comm >::allreduce (Type * in, Type * out, int len) const    [inline]

Compute a reduction over all processes for each component of an array and return the result in every process.

The template parameter BinaryFunction is the type of the binary function used for the computation.

Parameters
    in   The array to compute on.
    out  The array to store the results in.
    len  The number of components in the array.

◆ allreduce() [2/2]

template<typename BinaryFunction , typename Type >
int Dune::CollectiveCommunication< MPI_Comm >::allreduce (Type * inout, int len) const    [inline]

Compute a reduction over all processes for each component of an array and return the result in every process.

The template parameter BinaryFunction is the type of the binary function used for the computation.

Parameters
    inout  The array to compute on; it is overwritten with the result.
    len    The number of components in the array.
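
A sketch of the in-place overload (not from the reference page). It assumes that std::plus<double> is among the binary functions that dune-common maps to an MPI operation (MPI_SUM); MPI is assumed to be initialized by the caller.

#include <functional>
#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void sumArrayOverProcesses(MPI_Comm comm)
{
  Dune::CollectiveCommunication<MPI_Comm> cc(comm);
  double data[3] = {1.0, 2.0, 3.0};
  // Each component of data is replaced by its sum over all processes
  // (assuming std::plus<double> is a supported BinaryFunction).
  cc.allreduce<std::plus<double> >(data, 3);
}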

◆ barrier()

int Dune::CollectiveCommunication< MPI_Comm >::barrier () const    [inline]

Wait until all processes have arrived at this point in the program.

Referenced by Dune::graphRepartition().
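
A short sketch (not from the reference page) of the typical use: synchronize all processes before a phase that depends on every process having finished its local work. MPI is assumed to be initialized by the caller.

#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void synchronizedPhase(MPI_Comm comm)
{
  Dune::CollectiveCommunication<MPI_Comm> cc(comm);
  // ... independent local work ...
  cc.barrier();   // no process continues until all have reached this point
  // ... work that requires all processes to have finished the phase above ...
}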

◆ broadcast()

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::broadcast (T * inout, int len, int root) const    [inline]

Distribute an array from the process with rank root to all other processes.
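
A minimal sketch (not from the reference page): rank 0 determines a value and distributes it to all other ranks. MPI is assumed to be initialized by the caller; the value 42.0 is purely illustrative.

#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void broadcastParameter(MPI_Comm comm)
{
  Dune::CollectiveCommunication<MPI_Comm> cc(comm);
  double param[1] = {0.0};
  if (cc.rank() == 0)
    param[0] = 42.0;            // e.g. a value read from a config file on rank 0
  cc.broadcast(param, 1, 0);    // afterwards every rank holds param[0] == 42.0
}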

◆ gather()

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::gather (T * in, T * out, int len, int root) const    [inline]

Gather arrays on root task.

Each process (including the root itself) sends its in array of length len to the root process. On the root process these arrays are stored in rank order in the out array, which must have size len * number of processes.

Parameters
    [in]  in    The send buffer with the data to send.
    [out] out   The buffer to store the received data in. Might have length zero on non-root tasks.
    [in]  len   The number of elements to send on each task.
    [in]  root  The root task that gathers the data.
Note
    On the root task, out must have space for P*len elements, where P is the number of processes.
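
A usage sketch (not from the reference page): collect one value per rank on the root task. MPI is assumed to be initialized by the caller; for simplicity the receive buffer is allocated on every rank, although only the root needs it.

#include <mpi.h>
#include <vector>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void gatherOnRoot(MPI_Comm comm)
{
  const int root = 0;
  Dune::CollectiveCommunication<MPI_Comm> cc(comm);
  int local = cc.rank();
  std::vector<int> all(cc.size());       // P*len elements, only used on the root
  cc.gather(&local, &all[0], 1, root);   // on the root: all[j] == j
}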

◆ max() [1/2]

template<typename T >
T Dune::CollectiveCommunication< MPI_Comm >::max (T & in) const    [inline]

Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

Referenced by Dune::YaspGrid< dim >::preAdapt().

◆ max() [2/2]

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::max (T * inout, int len) const    [inline]

Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

◆ min() [1/2]

template<typename T >
T Dune::CollectiveCommunication< MPI_Comm >::min (T & in) const    [inline]

Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

◆ min() [2/2]

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::min (T * inout, int len) const    [inline]

Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<.
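
A short sketch (not from the reference page) using the scalar min and max overloads, e.g. to report global error bounds. MPI is assumed to be initialized by the caller.

#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void globalExtrema(MPI_Comm comm, double localError)
{
  Dune::CollectiveCommunication<MPI_Comm> cc(comm);
  double minError = cc.min(localError);   // smallest value over all processes
  double maxError = cc.max(localError);   // largest value over all processes
  (void)minError; (void)maxError;         // e.g. print on rank 0
}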

◆ prod() [1/2]

template<typename T >
T Dune::CollectiveCommunication< MPI_Comm >::prod (T & in) const    [inline]

Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*.

◆ prod() [2/2]

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::prod (T * inout, int len) const    [inline]

Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*.

◆ rank()

int Dune::CollectiveCommunication< MPI_Comm >::rank () const    [inline]

Return rank; the value is between 0 and size()-1.

Referenced by Dune::graphRepartition(), and Dune::storeMatrixMarket().
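
A sketch (not from the reference page) of how rank() and size() are typically combined, here to split a range of work items across processes. MPI is assumed to be initialized by the caller.

#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void myWorkRange(MPI_Comm comm, int nItems, int &begin, int &end)
{
  Dune::CollectiveCommunication<MPI_Comm> cc(comm);
  const int perRank = nItems / cc.size();
  begin = cc.rank() * perRank;
  // the last rank takes the remainder
  end = (cc.rank() == cc.size() - 1) ? nItems : begin + perRank;
}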

◆ scatter()

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::scatter (T * send, T * recv, int len, int root) const    [inline]

Scatter an array from the root to all other tasks.

The root process sends the elements with indices from k*len to (k+1)*len-1 in its array to task k, which stores them at indices 0 to len-1.

Parameters
    [in]  send  The array to scatter. Might have length zero on non-root tasks.
    [out] recv  The buffer to store the received data in. Upon completion of the method, each task holds the same data as in the corresponding block of the root task's send buffer.
    [in]  len   The number of elements in the recv buffer.
    [in]  root  The root task that scatters the data.
Note
    On the root task, send must have space for P*len elements, where P is the number of processes.
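
A usage sketch (not from the reference page): the root distributes one block of len elements to every task. MPI is assumed to be initialized by the caller; for simplicity the full send array is allocated on every rank, although only the root's content is used.

#include <mpi.h>
#include <vector>
#include <dune/common/parallel/mpicollectivecommunication.hh>

void scatterBlocks(MPI_Comm comm)
{
  const int root = 0, len = 2;
  Dune::CollectiveCommunication<MPI_Comm> cc(comm);
  std::vector<double> send(cc.size() * len, 0.0);  // P*len elements on the root
  if (cc.rank() == root)
    for (std::size_t i = 0; i < send.size(); ++i)
      send[i] = double(i);                         // block k holds the values for task k
  std::vector<double> recv(len);
  cc.scatter(&send[0], &recv[0], len, root);       // each task receives its block
}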

◆ size()

int Dune::CollectiveCommunication< MPI_Comm >::size () const    [inline]

Number of processes in the set; the value is greater than 0.

Referenced by Dune::graphRepartition().

◆ sum() [1/2]

template<typename T >
T Dune::CollectiveCommunication< MPI_Comm >::sum (T & in) const    [inline]

Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+.

Referenced by Dune::OwnerOverlapCopyCommunication< GlobalIdType, LocalIdType >::dot(), and Dune::OwnerOverlapCopyCommunication< GlobalIdType, LocalIdType >::norm().
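
A short sketch (not from the reference page), mirroring the dot-product use referenced above: each process computes a partial sum locally and obtains the global sum via sum(). MPI is assumed to be initialized by the caller.

#include <mpi.h>
#include <dune/common/parallel/mpicollectivecommunication.hh>

double globalDot(MPI_Comm comm, double localPartialDot)
{
  Dune::CollectiveCommunication<MPI_Comm> cc(comm);
  // every process passes its partial result and receives the global sum
  return cc.sum(localPartialDot);
}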

◆ sum() [2/2]

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::sum (T * inout, int len) const    [inline]

Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+.


The documentation for this class was generated from the following file:
dune/common/parallel/mpicollectivecommunication.hh