DUNE PDELab (git)
Specialization of Communication for MPI.
#include <dune/common/parallel/mpicommunication.hh>
Public Member Functions

    Communication (const MPI_Comm &c=MPI_COMM_WORLD)
        Instantiation using an MPI communicator.

    Communication (const Communication< No_Comm > &)
        Converting constructor for no-communication that is interpreted as MPI_COMM_SELF.

    int rank () const
        Return the rank; it is between 0 and size()-1.

    int size () const
        Number of processes in the set; it is greater than 0.

    template<class T> int send (const T &data, int dest_rank, int tag) const
        Sends the data to the dest_rank.

    template<class T> MPIFuture< T > isend (T &&data, int dest_rank, int tag) const
        Sends the data to the dest_rank nonblocking.

    template<class T> T recv (T &&data, int source_rank, int tag, MPI_Status *status=MPI_STATUS_IGNORE) const
        Receives the data from the source_rank.

    template<class T> MPIFuture< T > irecv (T &&data, int source_rank, int tag) const
        Receives the data from the source_rank nonblocking.

    template<typename T> T sum (const T &in) const
        Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+.

    template<typename T> int sum (T *inout, int len) const
        Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+.

    template<typename T> T prod (const T &in) const
        Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*.

    template<typename T> int prod (T *inout, int len) const
        Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*.

    template<typename T> T min (const T &in) const
        Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

    template<typename T> int min (T *inout, int len) const
        Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

    template<typename T> T max (const T &in) const
        Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

    template<typename T> int max (T *inout, int len) const
        Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<.

    int barrier () const
        Wait until all processes have arrived at this point in the program.

    MPIFuture< void > ibarrier () const
        Nonblocking barrier.

    template<typename T> int broadcast (T *inout, int len, int root) const
        Distribute an array from the process with rank root to all other processes.

    template<class T> MPIFuture< T > ibroadcast (T &&data, int root) const
        Distribute an array from the process with rank root to all other processes nonblocking.

    template<typename T> int gather (const T *in, T *out, int len, int root) const
        Gather arrays on root task.

    template<class TIN, class TOUT = std::vector<TIN>> MPIFuture< TOUT, TIN > igather (TIN &&data_in, TOUT &&data_out, int root) const
        Gather arrays on root task nonblocking.

    template<typename T> int gatherv (const T *in, int sendDataLen, T *out, int *recvDataLen, int *displ, int root) const
        Gather arrays of variable size on root task.

    template<typename T> int scatter (const T *sendData, T *recvData, int len, int root) const
        Scatter an array from the root to all other tasks.

    template<class TIN, class TOUT = TIN> MPIFuture< TOUT, TIN > iscatter (TIN &&data_in, TOUT &&data_out, int root) const
        Scatter an array from the root to all other tasks, nonblocking.

    template<typename T> int scatterv (const T *sendData, int *sendDataLen, int *displ, T *recvData, int recvDataLen, int root) const
        Scatter arrays of variable length from a root to all other tasks.

    template<typename T, typename T1> int allgather (const T *sbuf, int count, T1 *rbuf) const
        Gathers data from all tasks and distributes it to all.

    template<class TIN, class TOUT = TIN> MPIFuture< TOUT, TIN > iallgather (TIN &&data_in, TOUT &&data_out) const
        Gathers data from all tasks and distributes it to all, nonblocking.

    template<typename T> int allgatherv (const T *in, int sendDataLen, T *out, int *recvDataLen, int *displ) const
        Gathers data of variable length from all tasks and distributes it to all.

    template<typename BinaryFunction, typename Type> int allreduce (Type *inout, int len) const
        Compute something over all processes for each component of an array and return the result in every process.

    template<class BinaryFunction, class TIN, class TOUT = TIN> MPIFuture< TOUT, TIN > iallreduce (TIN &&data_in, TOUT &&data_out) const
        Compute something over all processes nonblocking.

    template<class BinaryFunction, class T> MPIFuture< T > iallreduce (T &&data) const
        Compute something over all processes nonblocking.

    template<typename BinaryFunction, typename Type> int allreduce (const Type *in, Type *out, int len) const
Detailed Description
Specialization of Communication for MPI.
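A minimal usage sketch, not part of the generated reference: it assumes a recent dune-common in which Dune::MPIHelper::getCommunication() returns this Communication specialization; alternatively the class can be constructed directly from an MPI_Comm as documented above.

    #include <iostream>
    #include <dune/common/parallel/mpihelper.hh>

    int main(int argc, char** argv)
    {
      // Initialize MPI (if available); it is finalized automatically at program exit.
      Dune::MPIHelper& helper = Dune::MPIHelper::instance(argc, argv);

      // Communication wrapping MPI_COMM_WORLD. It could also be constructed
      // directly: Dune::Communication<MPI_Comm> comm(MPI_COMM_WORLD);
      const auto& comm = helper.getCommunication();

      std::cout << "process " << comm.rank() << " of " << comm.size() << std::endl;
      return 0;
    }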
Member Function Documentation
◆ allgather()
Gathers data from all tasks and distributes it to all.
The block of data sent from the jth process is received by every process and placed in the jth block of the buffer rbuf.
- Parameters
    [in]  sbuf   The buffer with the data to send. Has to be the same for each task.
    [in]  count  The number of elements to send from each process.
    [out] rbuf   The receive buffer for the data. Has to be of size notasks*count, with notasks being the number of tasks in the communicator.
- Returns
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
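A sketch of a typical call; comm stands for a Communication obtained as in the example above, and the buffer names are illustrative:

    #include <vector>

    // Every rank contributes one int; afterwards every rank holds all of them.
    template<class Comm>
    std::vector<int> allRanks(const Comm& comm)
    {
      int myValue = comm.rank();            // sbuf, count = 1
      std::vector<int> all(comm.size());    // rbuf, needs size()*count elements
      comm.allgather(&myValue, 1, all.data());
      return all;                           // all[j] == j on every process
    }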
◆ allgatherv()
Gathers data of variable length from all tasks and distributes it to all.
The block of data sent from the jth process is received by every process and placed in the jth block of the buffer out.
- Parameters
    [in]  in           The send buffer with the data to send.
    [in]  sendDataLen  The number of elements to send on each task.
    [out] out          The buffer to store the received data in.
    [in]  recvDataLen  An array with size equal to the number of processes containing the number of elements to receive from process i at position i, i.e. the number that is passed as sendDataLen argument to this function in process i.
    [in]  displ        An array with size equal to the number of processes. Data received from process i will be written starting at out+displ[i].
- Returns
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
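An illustrative sketch in which rank r contributes r+1 values; the counts and displacements are computed on every rank, since all ranks need them here. Function and variable names are hypothetical:

    #include <numeric>
    #include <vector>

    template<class Comm>
    std::vector<double> allgatherVariable(const Comm& comm)
    {
      const int P = comm.size();
      std::vector<double> mine(comm.rank() + 1, double(comm.rank()));   // local block

      std::vector<int> counts(P), displ(P);            // recvDataLen and displacements
      for (int i = 0; i < P; ++i)
        counts[i] = i + 1;
      std::partial_sum(counts.begin(), counts.end() - 1, displ.begin() + 1);

      std::vector<double> out(std::accumulate(counts.begin(), counts.end(), 0));
      comm.allgatherv(mine.data(), int(mine.size()),
                      out.data(), counts.data(), displ.data());
      return out;                                      // rank j's block starts at displ[j]
    }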
◆ allreduce() [1/2]
◆ allreduce() [2/2]
Compute something over all processes for each component of an array and return the result in every process.
The template parameter BinaryFunction is the type of the binary function to use for the computation.
- Parameters
    inout  The array to compute on.
    len    The number of components in the array.
- Returns
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
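A sketch of the in-place overload; std::plus is used here as the BinaryFunction, which is one common choice, not the only one:

    #include <functional>

    // Element-wise global sum of a small array; the result is available on every rank.
    template<class Comm>
    void globalSumInPlace(const Comm& comm, double* values, int len)
    {
      comm.template allreduce<std::plus<double>>(values, len);
    }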
◆ barrier()
Wait until all processes have arrived at this point in the program.
- Returns
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
Referenced by Dune::graphRepartition().
◆ broadcast()
Distribute an array from the process with rank root to all other processes.
- Returns
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
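A sketch; the array contents and the choice of rank 0 as root are arbitrary:

    #include <array>

    // Rank 0 fills a small parameter array and sends it to everybody.
    template<class Comm>
    std::array<double, 3> broadcastParameters(const Comm& comm)
    {
      std::array<double, 3> p{};
      if (comm.rank() == 0)
        p = {1.0, 2.0, 3.0};             // only the root's values matter
      comm.broadcast(p.data(), 3, 0);    // inout buffer, len, root
      return p;                          // identical on every rank afterwards
    }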
◆ gather()
Gather arrays on root task.
Each process sends its in array of length len to the root process (including the root itself). In the root process these arrays are stored in rank order in the out array which must have size len * number of processes.
- Parameters
    [in]  in    The send buffer with the data to send.
    [out] out   The buffer to store the received data in. Might have length zero on non-root tasks.
    [in]  len   The number of elements to send on each task.
    [in]  root  The root task that gathers the data.
- Returns
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
- Note
- out must have space for P*len elements, where P is the number of processes
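A sketch gathering one value per rank on the root; buffer and function names are illustrative:

    #include <vector>

    template<class Comm>
    std::vector<int> gatherOnRoot(const Comm& comm, int root = 0)
    {
      int mine = comm.rank();
      std::vector<int> out;
      if (comm.rank() == root)
        out.resize(comm.size());               // space for P*len elements on the root
      comm.gather(&mine, out.data(), 1, root);
      return out;                              // filled in rank order, only on the root
    }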
◆ gatherv()
Gather arrays of variable size on root task.
Each process sends its in array of length sendDataLen to the root process (including the root itself). In the root process these arrays are stored in rank order in the out array.
- Parameters
    [in]  in           The send buffer with the data to be sent.
    [in]  sendDataLen  The number of elements to send on each task.
    [out] out          The buffer to store the received data in. May have length zero on non-root tasks.
    [in]  recvDataLen  An array with size equal to the number of processes containing the number of elements to receive from process i at position i, i.e. the number that is passed as sendDataLen argument to this function in process i. May have length zero on non-root tasks.
    [in]  displ        An array with size equal to the number of processes. Data received from process i will be written starting at out+displ[i] on the root process. May have length zero on non-root tasks.
    [in]  root         The root task that gathers the data.
- Returns
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
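A sketch where rank r sends r+1 values; the counts and displacements are only needed on the root. Names are hypothetical:

    #include <numeric>
    #include <vector>

    template<class Comm>
    std::vector<int> gathervOnRoot(const Comm& comm, int root = 0)
    {
      const int P = comm.size();
      std::vector<int> mine(comm.rank() + 1, comm.rank());    // local block

      std::vector<int> counts, displ, out;
      if (comm.rank() == root) {
        counts.resize(P);
        displ.resize(P);
        for (int i = 0; i < P; ++i)
          counts[i] = i + 1;                                   // recvDataLen
        std::partial_sum(counts.begin(), counts.end() - 1, displ.begin() + 1);
        out.resize(std::accumulate(counts.begin(), counts.end(), 0));
      }
      comm.gatherv(mine.data(), int(mine.size()),
                   out.data(), counts.data(), displ.data(), root);
      return out;                                              // meaningful only on the root
    }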
◆ iallgather()
Gathers data from all tasks and distributes it to all, nonblocking.
- Returns
- Future<TOUT, TIN> containing the distributed data
◆ iallreduce() [1/2]
Compute something over all processes nonblocking.
- Returns
- Future<TOUT, TIN> containing the result of the computation
◆ iallreduce() [2/2]
Compute something over all processes nonblocking.
- Returns
- Future<TOUT, TIN> containing the result of the computation
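A sketch of overlapping a reduction with local work. It assumes the returned MPIFuture follows the usual future interface with get(); doLocalWork is a hypothetical placeholder:

    #include <functional>
    #include <utility>

    inline void doLocalWork() { /* hypothetical computation to overlap */ }

    template<class Comm>
    double overlappedGlobalSum(const Comm& comm, double localValue)
    {
      // Start the nonblocking reduction; the buffer lives inside the returned future.
      auto future = comm.template iallreduce<std::plus<double>>(std::move(localValue));

      doLocalWork();          // overlap communication with computation

      return future.get();    // waits if necessary and returns the reduced value
    }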
◆ ibarrier()
Nonblocking barrier.
- Returns
- Future<void> which is complete when all processes have reached the barrier
◆ ibroadcast()
Distribute an array from the process with rank root to all other processes nonblocking.
- Returns
- Future<T> containing the distributed data
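An illustrative sketch; it assumes the returned MPIFuture supports get(), and broadcasts a single value for simplicity:

    #include <utility>

    // Nonblocking broadcast of a single value set on rank 0.
    template<class Comm>
    double broadcastAsync(const Comm& comm)
    {
      double value = (comm.rank() == 0) ? 42.0 : 0.0;
      auto future = comm.ibroadcast(std::move(value), 0);   // buffer moves into the future
      // ... independent work can be done here while the broadcast proceeds ...
      return future.get();   // 42.0 on every rank once the broadcast has completed
    }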
◆ igather()
Gather arrays on root task nonblocking.
- Returns
- Future<TOUT, TIN> containing the gathered data
◆ irecv()
Receives the data from the source_rank nonblocking.
- Returns
- Future<T> containing the received data when complete
References DUNE_THROW.
◆ iscatter()
Scatter an array from the root to all other tasks, nonblocking.
- Returns
- Future<TOUT, TIN> containing the scattered data
◆ isend()
Sends the data to the dest_rank nonblocking.
- Returns
- Future<T> containing the send buffer; it completes when the data has been sent
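A sketch of a nonblocking exchange between rank 0 and rank 1; the tag value is arbitrary, and wait()/get() are assumed from the usual future interface:

    // Rank 0 sends one double to rank 1 without blocking.
    template<class Comm>
    void pingAsync(const Comm& comm)
    {
      const int tag = 42;
      if (comm.rank() == 0) {
        auto f = comm.isend(3.14, 1, tag);    // send buffer is kept inside the future
        f.wait();                             // complete the send before the future dies
      }
      else if (comm.rank() == 1) {
        auto f = comm.irecv(0.0, 0, tag);     // receive into a fresh double
        double received = f.get();            // waits and yields the received value
        (void)received;                       // received == 3.14
      }
    }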
◆ max() [1/2]
Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<.
Referenced by Dune::PDELab::SolverStatistics< T >::max().
◆ max() [2/2]
Compute the maximum of the argument over all processes and return the result in every process. Assumes that T has an operator<.
◆ min() [1/2]
Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<.
Referenced by Dune::PDELab::SolverStatistics< T >::min().
◆ min() [2/2]
Compute the minimum of the argument over all processes and return the result in every process. Assumes that T has an operator<.
◆ prod() [1/2]
Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*.
◆ prod() [2/2]
Compute the product of the argument over all processes and return the result in every process. Assumes that T has an operator*.
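A sketch combining the scalar reduction helpers min(), max() and prod(); the names are illustrative:

    #include <iostream>

    // Global extrema and product of a per-rank scalar; every rank gets the results.
    template<class Comm>
    void printGlobalStats(const Comm& comm, double localValue)
    {
      double lo = comm.min(localValue);
      double hi = comm.max(localValue);
      double pr = comm.prod(localValue);

      if (comm.rank() == 0)
        std::cout << "min " << lo << "  max " << hi << "  prod " << pr << std::endl;
    }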
◆ rank()
Return the rank; it is between 0 and size()-1.
Referenced by Dune::graphRepartition(), and Dune::storeMatrixMarket().
◆ recv()
Receives the data from the source_rank.
- Returns
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
◆ scatter()
Scatter an array from the root to all other tasks.
The root process sends the elements with index from k*len to (k+1)*len-1 in its array to task k, which stores them at index 0 to len-1.
- Parameters
    [in]  sendData  The array to scatter. Might have length zero on non-root tasks.
    [out] recvData  The buffer to store the received data in. Upon completion, each task holds the block of the root's send buffer that corresponds to its rank.
    [in]  len       The number of elements in the recv buffer.
    [in]  root      The root task that scatters the data.
- Returns
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
- Note
- sendData on the root must have space for P*len elements, where P is the number of processes
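A sketch distributing one block of len values to each rank; buffer contents and names are illustrative:

    #include <vector>

    template<class Comm>
    std::vector<double> scatterBlocks(const Comm& comm, int len, int root = 0)
    {
      std::vector<double> sendData;
      if (comm.rank() == root)
        sendData.assign(comm.size() * len, 1.0);   // root holds P*len elements

      std::vector<double> recvData(len);           // every rank receives len elements
      comm.scatter(sendData.data(), recvData.data(), len, root);
      return recvData;                             // block k of the root's buffer on rank k
    }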
◆ scatterv()
Scatter arrays of variable length from a root to all other tasks.
The root process sends the elements with index displ[k] to displ[k]+sendDataLen[k]-1 of its sendData array to task k, which stores them at index 0 to recvDataLen-1.
- Parameters
    [in]  sendData     The array to scatter. May have length zero on non-root tasks.
    [in]  sendDataLen  An array with size equal to the number of processes containing the number of elements to scatter to process i at position i, i.e. the number that is passed as recvDataLen argument to this function in process i.
    [in]  displ        An array with size equal to the number of processes. Data scattered to process i will be read starting at sendData+displ[i] on the root process.
    [out] recvData     The buffer to store the received data in. Upon completion, each task holds the block of the root's send buffer intended for it.
    [in]  recvDataLen  The number of elements in the recvData buffer.
    [in]  root         The root task that scatters the data.
- Returns
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
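A sketch in which the root sends r+1 values to rank r; counts and displacements are only needed on the root, and all names are hypothetical:

    #include <numeric>
    #include <vector>

    template<class Comm>
    std::vector<int> scatterVariable(const Comm& comm, int root = 0)
    {
      const int P = comm.size();
      std::vector<int> sendData, counts, displ;
      if (comm.rank() == root) {
        counts.resize(P);
        displ.resize(P);
        for (int i = 0; i < P; ++i)
          counts[i] = i + 1;                        // sendDataLen
        std::partial_sum(counts.begin(), counts.end() - 1, displ.begin() + 1);
        sendData.assign(std::accumulate(counts.begin(), counts.end(), 0), 7);
      }

      std::vector<int> recvData(comm.rank() + 1);   // recvDataLen on this rank
      comm.scatterv(sendData.data(), counts.data(), displ.data(),
                    recvData.data(), int(recvData.size()), root);
      return recvData;
    }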
◆ send()
Sends the data to the dest_rank.
- Returns
- MPI_SUCCESS (==0) if successful, an MPI error code otherwise
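A sketch of the blocking counterpart to isend/irecv; the tag value is arbitrary:

    // Rank 0 sends one double to rank 1 and blocks until the send has completed.
    template<class Comm>
    void pingBlocking(const Comm& comm)
    {
      const int tag = 7;
      if (comm.rank() == 0) {
        comm.send(3.14, 1, tag);                 // returns MPI_SUCCESS on success
      }
      else if (comm.rank() == 1) {
        double value = comm.recv(0.0, 0, tag);   // returns the received data
        (void)value;                             // value == 3.14
      }
    }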
◆ size()
Number of processes in the set; it is greater than 0.
Referenced by Dune::graphRepartition().
◆ sum() [1/2]
Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+.
Referenced by Dune::PDELab::SolverStatistics< T >::avg(), Dune::OwnerOverlapCopyCommunication< GlobalIdType, LocalIdType >::dot(), Dune::PDELab::SolverStatistics< T >::size(), and Dune::PDELab::SolverStatistics< T >::stddev().
◆ sum() [2/2]
Compute the sum of the argument over all processes and return the result in every process. Assumes that T has an operator+.
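A sketch using both sum() overloads to compute a global average; names are illustrative:

    #include <vector>

    template<class Comm>
    double globalAverage(const Comm& comm, const std::vector<double>& local)
    {
      double localSum = 0.0;
      for (double v : local)
        localSum += v;

      // Scalar overload: every rank obtains the same global values.
      double globalSum   = comm.sum(localSum);
      double globalCount = comm.sum(double(local.size()));

      // In-place array overload: sums each component across all ranks.
      double pair[2] = { localSum, double(local.size()) };
      comm.sum(pair, 2);    // afterwards pair[0] == globalSum, pair[1] == globalCount

      return globalSum / globalCount;
    }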
The documentation for this class was generated from the following file:
- dune/common/parallel/mpicommunication.hh