# Parallel runs with MPI
To compile with MPI parallelism enabled, you need to use something like:
~~~bash
CC99='mpicc -std=c99' qcc -Wall -O2 -D_MPI=1 example.c -o example -lm
~~~
where *mpicc* calls the MPI compiler on your system. The resulting
executable can then be run in parallel using something like
~~~bash
mpirun -np 8 ./example
~~~
The details may vary according to how the MPI compiler is set up on
your system.
## Using Makefiles
The "manual" way above is automated if you use the [standard
Makefiles](/Tutorial#using-makefiles) provided by Basilisk. You can
then compile and run the example above on eight processes using:
~~~bash
CC='mpicc -D_MPI=8' make example.tst
~~~
This assumes that *mpicc* and *mpirun* are available on your system.
## Running on supercomputers
A simple way to run Basilisk code on a supercomputer is to first
generate a portable (ISO C99) source code on a machine where *qcc* is
installed, i.e.
~~~bash
%localmachine: qcc -source -D_MPI=1 example.c
~~~
Then copy the portable source code *_example.c* (don't forget the
underscore!) to the supercomputer and compile it:
~~~bash
%localmachine: scp _example.c login@supercomputer.org:
%localmachine: ssh login@supercomputer.org
%supercomputer.org: mpicc -Wall -std=c99 -O2 -D_MPI=1 _example.c -o example -lm
~~~
where the *-std=c99* option sets the version of the language to C99. Note
that this option may change depending on the compiler (the options shown
above are valid for *gcc* or *icc*, the Intel compiler).
You will then need to use the job submission system of the
supercomputer to set the number of processes and run the
executable. See also the following examples:
* [Parallel scalability](test/mpi-laplacian.c#how-to-run-on-occigen)
* [Atomisation of a pulsed liquid jet](examples/atomisation.c#running-in-parallel)
* [Forced isotropic turbulence in a triply-periodic box](examples/isotropic.c#running-with-mpi-on-occigen)
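As an illustration, a batch script for a machine using the SLURM scheduler
could look like the following (the scheduler, directives and launcher are
assumptions which depend on your site; check its documentation):
~~~bash
#!/bin/bash
# Hypothetical SLURM job script: directives and launcher depend on
# how your supercomputer is configured.
#SBATCH --job-name=example
#SBATCH --ntasks=64          # number of MPI processes
#SBATCH --time=01:00:00      # walltime limit

# launch the MPI executable compiled as described above
srun ./example
~~~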
## Non-cubic domains
For the moment, the only way to combine non-cubic domains with MPI
parallelism is to use multigrids rather than tree grids (because
[masking](/Basilisk%20C#complex-domains) does not yet work together
with MPI). This also means that MPI-parallel, non-cubic and adaptive
simulations are not possible yet.
On multigrids, MPI subdomains are set up using Cartesian topologies,
i.e. the processes are arranged along a line (in 1D), a rectangle (in
2D) or a [cuboid](https://en.wikipedia.org/wiki/Cuboid) (in 3D). The
total number of processes $n_p$ thus verifies the relation $n_p = n_x
n_y n_z$, where $n_x$, $n_y$, $n_z$ are the numbers of processes along
each axis. Controlling the number of processes along each axis allows
one to change the aspect ratio of the (global) domain.
If $n_x$, $n_y$ and $n_z$ are not specified by the user, Basilisk sets
them automatically based on the value of $n_p$ (as set by the `mpirun
-np` command). To do so, it calls the
[`MPI_Dims_create()`](https://www.openmpi.org/doc/v1.8/man3/MPI_Dims_create.3.php)
MPI function. Some particular cases are:
* $n_p$ is a [square
number](https://en.wikipedia.org/wiki/Square_number) (in 2D) or a
[cubic number](https://en.wikipedia.org/wiki/Cubic_number) (in 3D):
in this case $n_x = n_y = \sqrt{n_p}$ (in 2D) or $n_x = n_y = n_z =
n_p^{1/3}$ (in 3D), i.e. the domain is a square or a cube.
* $n_p$ is a prime number: $n_x = n_p$, $n_y = n_z = 1$ i.e. the
domain is a long channel.
* $n_p$ is the product of two prime numbers: a 3D decomposition is not
possible and an MPI error is returned.
To explicitly control the number of processes along each axis, one can
call the `dimensions()` function. Any dimension which is not set is
computed automatically.
All this is also compatible with global [periodic
boundaries](/Basilisk%20C#periodic-boundaries) (but not with periodic
boundaries for individual fields).
### Physical dimensions and spatial resolution
As usual, the physical dimension of the domain is set by calling the
`size()` function. This sets the size along the x-axis of the entire
domain (not of the individual subdomains), which allows one to vary
the number of processes while keeping the physical size constant
(i.e. to run the same simulation on a different number of processes).
The spatial resolution ($N$) is handled in a similar way, with the
added constraint that it must be a multiple of $n_x\times 2^n$, with
$n$ an integer.
### Examples
A $[0:1]\times[0:1/3]\times[0:1/3]$ channel, periodic along the x-axis
(with slip walls), on 3 MPI processes, with 96 points along the
x-axis:
~~~literatec
#include "grid/multigrid3D.h"
...
int main() {
  periodic (right);
  init_grid (96);
  ...
}
~~~
The same simulation but on 24 processes:
~~~literatec
#include "grid/multigrid3D.h"
...
int main() {
  dimensions (nx = 6);
  periodic (right);
  init_grid (96);
  ...
}
~~~
A cube centered on the origin, of size 1.5, all-periodic, with 512
points along each axis, on 8 = 2^3^, 64 = 4^3^, 512 = 8^3^, 4096 =
16^3^, etc. processes:
~~~literatec
#include "grid/multigrid3D.h"
...
int main() {
  size (1.5);
  origin (-0.75, -0.75, -0.75);
  foreach_dimension()
    periodic (right);
  init_grid (512);
  ...
}
~~~
