DUNE PDELab (2.8)

Communication in parallel programs

This recipe explains two types of communication available through a GridView: neighbourwise communication, designed for domain-decomposition methods, and collective communication (CollectiveCommunication).

Parallel solvers in DUNE already use this communication internally, so it is often possible to run parallel models without communicating explicitly.

For a complete overview of communication, see section 4 of tutorial06.

Neighbourwise communication

This type of communication exchanges shared degrees of freedom between subdomains. Communication happens only between neighbouring subdomains, and only on the specified parts of the grid they have in common.

template <class DataHandleImp, class DataType>
void Dune::GridView<ViewTraits>::communicate(CommDataHandleIF<DataHandleImp,DataType> &dh, InterfaceType iftype, CommunicationDirection dir) const

The communicate function is accessible from the GridView object:

typedef Grid::LeafGridView GV;
GV gv=gridp->leafGridView();

To tell the method what data to communicate, we provide a data handle (dh) encapsulating the data vector,

using DH = Dune::PDELab::AddDataHandle<GFS,Z>;
DH dh(gfs,z);

and an InterfaceType describing which entities are sent and received.

switch (communicationType){
  case 4: gv.communicate(dh, Dune::Overlap_All_Interface, Dune::ForwardCommunication); break;
  default: gv.communicate(dh, Dune::All_All_Interface, Dune::ForwardCommunication);
}
Table of interface types
InteriorBorder_InteriorBorder_Interface send/receive interior and border entities
InteriorBorder_All_Interface send interior and border, receive all entities
Overlap_OverlapFront_Interface send overlap, receive overlap and front entities
Overlap_All_Interface send overlap, receive all entities
All_All_Interface send all and receive all entities

Collective communication

This type of communication shares data among all ranks. It offers many MPI methods, for example:

Table of collective communication functions
Method name        Description
rank               obtain the number (rank) of this process
size               obtain the number of processes
barrier            wait until all processes have arrived at the barrier
min                global minimum of local values
max                global maximum of local values
sum                global sum of local values
allreduce          compute an operation over all processes for each component of an array and return the result on every process
broadcast          broadcast from one process to all other processes
scatter            scatter individual data from the root process to all other tasks
gather, allgather  gather data on the root process (and distribute it to all other tasks)

The communication object is part of the GridView:

auto comm = gv.comm();

Most methods take a constant reference and return a value of the same type, so we must pass a variable as the argument and remember to store the result.

globmax = comm.max(sum);
globsum = comm.sum(sum);

Full example code: recipe-communication.cc
