Dune Core Modules (2.4.2)

Dune::CollectiveCommunication< MPI_Comm > Class Reference

Specialization of CollectiveCommunication for MPI. More...

#include <dune/common/parallel/mpicollectivecommunication.hh>

Public Member Functions

 CollectiveCommunication (const MPI_Comm &c=MPI_COMM_WORLD)
 Instantiation using an MPI communicator.
 
int rank () const
 Return the rank; it is between 0 and size()-1. More...
 
int size () const
 Number of processes in the set; it is greater than 0. More...
 
template<typename T >
T sum (T &in) const
 Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+. More...
 
template<typename T >
int sum (T *inout, int len) const
 Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+. More...
 
template<typename T >
T prod (T &in) const
 Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*. More...
 
template<typename T >
int prod (T *inout, int len) const
 Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*. More...
 
template<typename T >
T min (T &in) const
 Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<. More...
 
template<typename T >
int min (T *inout, int len) const
 Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<. More...
 
template<typename T >
T max (T &in) const
 Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<. More...
 
template<typename T >
int max (T *inout, int len) const
 Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<. More...
 
int barrier () const
 Wait until all processes have arrived at this point in the program. More...
 
template<typename T >
int broadcast (T *inout, int len, int root) const
 Distribute an array from the process with rank root to all other processes. More...
 
template<typename T >
int gather (T *in, T *out, int len, int root) const
 Gather arrays on root task. More...
 
template<typename T >
int gatherv (T *in, int sendlen, T *out, int *recvlen, int *displ, int root) const
 Gather arrays of variable size on root task. More...
 
template<typename T >
int scatter (T *send, T *recv, int len, int root) const
 Scatter an array from the root to all other tasks. More...
 
template<typename T >
int scatterv (T *send, int *sendlen, int *displ, T *recv, int recvlen, int root) const
 Scatter arrays of variable length from a root to all other tasks. More...
 
template<typename T , typename T1 >
int allgather (T *sbuf, int count, T1 *rbuf) const
 Gathers data from all tasks and distributes it to all. More...
 
template<typename T >
int allgatherv (T *in, int sendlen, T *out, int *recvlen, int *displ) const
 Gathers data of variable length from all tasks and distributes it to all. More...
 
template<typename BinaryFunction , typename Type >
int allreduce (Type *inout, int len) const
 Compute something over all processes for each component of an array and return the result in every process. More...
 
template<typename BinaryFunction , typename Type >
int allreduce (Type *in, Type *out, int len) const
 Compute something over all processes for each component of an array and return the result in every process. More...
 

Detailed Description

Specialization of CollectiveCommunication for MPI.

Member Function Documentation

◆ allgather()

template<typename T , typename T1 >
int Dune::CollectiveCommunication< MPI_Comm >::allgather ( T *  sbuf,
int  count,
T1 *  rbuf 
) const
inline

Gathers data from all tasks and distributes it to all.

The block of data sent from the jth process is received by every process and placed in the jth block of the buffer rbuf.

Parameters
[in] sbuf The buffer with the data to send. Has to be the same for each task.
[in] count The number of elements to send by any process.
[out] rbuf The receive buffer for the data. Has to be of size notasks*count, with notasks being the number of tasks in the communicator.

◆ allgatherv()

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::allgatherv ( T *  in,
int  sendlen,
T *  out,
int *  recvlen,
int *  displ 
) const
inline

Gathers data of variable length from all tasks and distributes it to all.

The block of data sent from the jth process is received by every process and placed in the jth block of the buffer out.

Parameters
[in] in The send buffer with the data to send.
[in] sendlen The number of elements to send on each task.
[out] out The buffer to store the received data in.
[in] recvlen An array with size equal to the number of processes containing the number of elements to receive from process i at position i, i.e. the number that is passed as sendlen argument to this function in process i.
[in] displ An array with size equal to the number of processes. Data received from process i will be written starting at out+displ[i].

◆ allreduce() [1/2]

template<typename BinaryFunction , typename Type >
int Dune::CollectiveCommunication< MPI_Comm >::allreduce ( Type *  in,
Type *  out,
int  len 
) const
inline

Compute something over all processes for each component of an array and return the result in every process.

The template parameter BinaryFunction is the type of the binary function to use for the computation.

Parameters
in The array to compute on.
out The array to store the results in.
len The number of components in the array.

◆ allreduce() [2/2]

template<typename BinaryFunction , typename Type >
int Dune::CollectiveCommunication< MPI_Comm >::allreduce ( Type *  inout,
int  len 
) const
inline

Compute something over all processes for each component of an array and return the result in every process.

The template parameter BinaryFunction is the type of the binary function to use for the computation.

Parameters
inout The array to compute on.
len The number of components in the array.

◆ barrier()

int Dune::CollectiveCommunication< MPI_Comm >::barrier ( ) const
inline

Wait until all processes have arrived at this point in the program.

Referenced by Dune::graphRepartition().

◆ broadcast()

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::broadcast ( T *  inout,
int  len,
int  root 
) const
inline

Distribute an array from the process with rank root to all other processes.

◆ gather()

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::gather ( T *  in,
T *  out,
int  len,
int  root 
) const
inline

Gather arrays on root task.

Each process sends its in array of length len to the root process (including the root itself). In the root process these arrays are stored in rank order in the out array which must have size len * number of processes.

Parameters
[in] in The send buffer with the data to send.
[out] out The buffer to store the received data in. Might have length zero on non-root tasks.
[in] len The number of elements to send on each task.
[in] root The root task that gathers the data.
Note
out must have space for P*len elements on the root task.

◆ gatherv()

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::gatherv ( T *  in,
int  sendlen,
T *  out,
int *  recvlen,
int *  displ,
int  root 
) const
inline

Gather arrays of variable size on root task.

Each process sends its in array of length sendlen to the root process (including the root itself). In the root process these arrays are stored in rank order in the out array.

Parameters
[in] in The send buffer with the data to be sent.
[in] sendlen The number of elements to send on each task.
[out] out The buffer to store the received data in. May have length zero on non-root tasks.
[in] recvlen An array with size equal to the number of processes containing the number of elements to receive from process i at position i, i.e. the number that is passed as sendlen argument to this function in process i. May have length zero on non-root tasks.
[in] displ An array with size equal to the number of processes. Data received from process i will be written starting at out+displ[i] on the root process. May have length zero on non-root tasks.
[in] root The root task that gathers the data.

◆ max() [1/2]

template<typename T >
T Dune::CollectiveCommunication< MPI_Comm >::max ( T &  in) const
inline

Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

◆ max() [2/2]

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::max ( T *  inout,
int  len 
) const
inline

Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

◆ min() [1/2]

template<typename T >
T Dune::CollectiveCommunication< MPI_Comm >::min ( T &  in) const
inline

Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

◆ min() [2/2]

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::min ( T *  inout,
int  len 
) const
inline

Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

◆ prod() [1/2]

template<typename T >
T Dune::CollectiveCommunication< MPI_Comm >::prod ( T &  in) const
inline

Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*.

◆ prod() [2/2]

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::prod ( T *  inout,
int  len 
) const
inline

Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*.

◆ rank()

int Dune::CollectiveCommunication< MPI_Comm >::rank ( ) const
inline

Return the rank; it is between 0 and size()-1.

Referenced by Dune::graphRepartition(), and Dune::storeMatrixMarket().

◆ scatter()

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::scatter ( T *  send,
T *  recv,
int  len,
int  root 
) const
inline

Scatter an array from the root to all other tasks.

The root process sends the elements with indices k*len to (k+1)*len-1 of its array to task k, which stores them at indices 0 to len-1.

Parameters
[in] send The array to scatter. Might have length zero on non-root tasks.
[out] recv The buffer to store the received data in. Upon completion of the method each task will have the data of its block of the root task's send buffer stored there.
[in] len The number of elements in the recv buffer.
[in] root The root task that scatters the data.
Note
send must have space for P*len elements on the root task.

◆ scatterv()

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::scatterv ( T *  send,
int *  sendlen,
int *  displ,
T *  recv,
int  recvlen,
int  root 
) const
inline

Scatter arrays of variable length from a root to all other tasks.

The root process sends the sendlen[k] elements starting at send+displ[k] in its array to task k, which stores them at indices 0 to recvlen-1.

Parameters
[in] send The array to scatter. May have length zero on non-root tasks.
[in] sendlen An array with size equal to the number of processes containing the number of elements to scatter to process i at position i, i.e. the number that is passed as recvlen argument to this function in process i.
[in] displ An array with size equal to the number of processes. Data scattered to process i will be read starting at send+displ[i] on the root process.
[out] recv The buffer to store the received data in. Upon completion of the method each task will have its block of the root task's send buffer stored there.
[in] recvlen The number of elements in the recv buffer.
[in] root The root task that scatters the data.

◆ size()

int Dune::CollectiveCommunication< MPI_Comm >::size ( ) const
inline

Number of processes in the set; it is greater than 0.

Referenced by Dune::graphRepartition().

◆ sum() [1/2]

template<typename T >
T Dune::CollectiveCommunication< MPI_Comm >::sum ( T &  in) const
inline

Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+.

Referenced by Dune::OwnerOverlapCopyCommunication< GlobalIdType, LocalIdType >::dot(), and Dune::OwnerOverlapCopyCommunication< GlobalIdType, LocalIdType >::norm().

◆ sum() [2/2]

template<typename T >
int Dune::CollectiveCommunication< MPI_Comm >::sum ( T *  inout,
int  len 
) const
inline

Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+.


The documentation for this class was generated from the following file:
dune/common/parallel/mpicollectivecommunication.hh
Creative Commons License   |  Legal Statements / Impressum  |  Hosted by TU Dresden  |  generated with Hugo v0.111.3 (Dec 20, 23:31, 2024)