Hi,
a lot of posts today :)

In a previous post, see here, I explained how to assemble a FEM scheme in parallel using threads. Starting from that code, I have now also added MPI support. At the following link

you can find the full code.

There are several differences compared to the thread-only code.

First of all, each thread computes its share of the element contributions to the stiffness matrix and the RHS. These values are then pushed into the process-local stiffness matrix and local RHS using a coloring scheme, so that no two threads ever write to the same entry at the same time. Up to this point we are working in shared memory only.
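Just to fix the idea, here is a small, self-contained sketch of what a colored, threaded assembly loop can look like. It is only an illustration on a toy 1D P1 Laplacian, not code taken from the repository: in 1D two colors are enough, because elements of the same color never share a node, so the threads can scatter their contributions without any locking.

```cpp
// Toy illustration of colored, threaded assembly (1D P1 Laplacian).
// All names and the discretization are illustrative, not from the repository.
#include <cstddef>
#include <thread>
#include <vector>

int main()
{
  const std::size_t nElems = 8, nNodes = nElems + 1, nThreads = 2;
  const double h = 1.0 / nElems;                   // uniform mesh width
  std::vector<double> A(nNodes * nNodes, 0.0);     // dense matrix, row-major
  std::vector<double> b(nNodes, 0.0);              // right-hand side

  // Two colors suffice in 1D: even and odd elements of the same color
  // never share a node, so no locks are needed while scattering.
  std::vector<std::vector<std::size_t>> colors(2);
  for (std::size_t e = 0; e < nElems; ++e)
    colors[e % 2].push_back(e);

  for (const auto& color : colors)                 // colors run one after the other
  {
    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < nThreads; ++t)
      workers.emplace_back([&, t] {
        // static partition of the elements of this color among the threads
        for (std::size_t i = t; i < color.size(); i += nThreads)
        {
          const std::size_t e = color[i];          // element e has nodes e and e+1
          const double Ae[2][2] = {{ 1.0/h, -1.0/h},  // local stiffness matrix
                                   {-1.0/h,  1.0/h}};
          const double be[2]    = { h/2.0, h/2.0 };   // local RHS for f = 1
          for (int r = 0; r < 2; ++r) {            // scatter without synchronization
            b[e + r] += be[r];
            for (int c = 0; c < 2; ++c)
              A[(e + r) * nNodes + (e + c)] += Ae[r][c];
          }
        }
      });
    for (auto& w : workers)
      w.join();                                    // barrier before the next color
  }
}
```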

After that, the communication is set up and performed; see my previous post here for more details about the DUNE facilities for parallel communication. Here we update the entries attached to the shared grid nodes by adding up the values computed on the different processes. This is done with a forward communication using an add policy, followed by a backward communication using a copy policy. Obviously, at this point the memory is distributed.
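For node-attached data (for instance the RHS entries), this step can be expressed with a DUNE data handle. The following is only a sketch under my own naming: VertexSumHandle and the mapper usage are hypothetical, only CommDataHandleIF, communicate() and the interface/direction enums come from dune-grid, and depending on the DUNE release the method is spelled fixedSize or fixedsize. The real code in the repository may organize this differently.

```cpp
// Sketch of a data handle for vertex-attached values (e.g. the RHS entries):
// the forward pass adds the partial contributions on the shared nodes, the
// backward pass copies the consistent result back.
#include <cstddef>
#include <vector>

#include <dune/grid/common/datahandleif.hh>
#include <dune/grid/common/gridenums.hh>

// Mapper is any vertex mapper providing index(entity), e.g. a
// MultipleCodimMultipleGeomTypeMapper with a vertex layout.
template <class Mapper, bool add>
class VertexSumHandle
  : public Dune::CommDataHandleIF<VertexSumHandle<Mapper, add>, double>
{
public:
  VertexSumHandle(const Mapper& mapper, std::vector<double>& data)
    : mapper_(mapper), data_(data) {}

  // we only communicate data attached to vertices (codim == dim)
  bool contains(int dim, int codim) const { return codim == dim; }

  // one double per vertex (spelled fixedsize in older DUNE releases)
  bool fixedSize(int, int) const { return true; }

  template <class Entity>
  std::size_t size(const Entity&) const { return 1; }

  template <class Buffer, class Entity>
  void gather(Buffer& buff, const Entity& e) const
  { buff.write(data_[mapper_.index(e)]); }

  template <class Buffer, class Entity>
  void scatter(Buffer& buff, const Entity& e, std::size_t)
  {
    double value;
    buff.read(value);
    if (add) data_[mapper_.index(e)] += value;  // forward pass: add partial sums
    else     data_[mapper_.index(e)]  = value;  // backward pass: copy the result
  }

private:
  const Mapper& mapper_;
  std::vector<double>& data_;
};

// Possible usage, with rhs holding one entry per vertex of the local grid:
//   VertexSumHandle<Mapper, true>  addHandle(mapper, rhs);
//   VertexSumHandle<Mapper, false> copyHandle(mapper, rhs);
//   gridView.communicate(addHandle,  Dune::InteriorBorder_InteriorBorder_Interface,
//                        Dune::ForwardCommunication);
//   gridView.communicate(copyHandle, Dune::InteriorBorder_InteriorBorder_Interface,
//                        Dune::BackwardCommunication);
```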

It is worth noticing that the global matrix A (and the global RHS) no longer exists: each process only stores a partition of it. These partitions are not disjoint, since they overlap at the shared grid nodes; for example, a node lying on the boundary between two subdomains has a corresponding row in the local matrix of both processes.

No more posts for today :)
Stay tuned!
Marco.