PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
setup.h
#ifndef SETUP_H
#define SETUP_H

#include <petscpf.h>
#include <petscdmswarm.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>
#include <petsctime.h>
#include <petscsys.h>
#include <petscdmcomposite.h>
#include <petscsystypes.h>

// Include additional headers
#include "variables.h"          // Shared type definitions
#include "ParticleSwarm.h"      // Particle swarm functions
#include "walkingsearch.h"      // Particle location functions
#include "grid.h"               // Grid functions
#include "logging.h"            // Logging macros
#include "io.h"                 // Data Input and Output functions
#include "interpolation.h"      // Interpolation routines
#include "AnalyticalSolution.h" // Analytical Solution for testing
#include "ParticleMotion.h"     // Functions related to motion of particles
#include "Boundaries.h"         // Functions related to Boundary conditions

/* Macro to automatically select the correct allocation function */
#define Allocate3DArray(array, nz, ny, nx) \
    _Generic((array), \
        PetscReal ****: Allocate3DArrayScalar, \
        Cmpnts ****: Allocate3DArrayVector \
    )(array, nz, ny, nx)

/* Macro for deallocation */
#define Deallocate3DArray(array, nz, ny) \
    _Generic((array), \
        PetscReal ***: Deallocate3DArrayScalar, \
        Cmpnts ***: Deallocate3DArrayVector \
    )(array, nz, ny)

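/*
 * Usage sketch for the _Generic dispatch above (illustrative only; the local
 * variable names `field`, `vfield`, and `ierr` are assumptions, not part of
 * this API):
 *
 *   PetscReal    ***field  = NULL;   // scalar field
 *   Cmpnts       ***vfield = NULL;   // vector field
 *   PetscErrorCode ierr;
 *   ierr = Allocate3DArray(&field,  nz, ny, nx); CHKERRQ(ierr); // -> Allocate3DArrayScalar
 *   ierr = Allocate3DArray(&vfield, nz, ny, nx); CHKERRQ(ierr); // -> Allocate3DArrayVector
 *   // ... use field[k][j][i] and vfield[k][j][i] ...
 *   ierr = Deallocate3DArray(field,  nz, ny); CHKERRQ(ierr);    // -> Deallocate3DArrayScalar
 *   ierr = Deallocate3DArray(vfield, nz, ny); CHKERRQ(ierr);    // -> Deallocate3DArrayVector
 */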

/**
 * @brief Allocates and populates the master SimulationContext object.
 *
 * This function serves as the single, authoritative entry point for all
 * simulation configuration. It merges the setup logic from both the legacy
 * FSI/IBM solver and the modern particle solver into a unified, robust
 * process.
 *
 * The function follows a strict sequence:
 * 1. **Allocate Context & Set Defaults:** It first allocates the `SimulationContext`
 *    and populates every field with a sane, hardcoded default value. This
 *    ensures the simulation starts from a known, predictable state.
 * 2. **Configure Logging System:** It configures the custom logging framework. It
 *    parses the `-func_config_file` option to load a list of function names
 *    allowed to produce log output. This configuration (the file path and the
 *    list of function names) is stored within the `SimulationContext` for
 *    later reference and cleanup.
 * 3. **Parse All Options:** It performs a comprehensive pass of `PetscOptionsGet...`
 *    calls for every possible command-line flag, overriding the default
 *    values set in step 1.
 * 4. **Log Summary:** After all options are parsed, it uses the now-active
 *    logging system to print a summary of the key simulation parameters.
 *
 * @param[in]  argc     Argument count passed from `main`.
 * @param[in]  argv     Argument vector passed from `main`.
 * @param[out] p_simCtx On success, this will point to the newly created and
 *                      fully configured `SimulationContext` pointer. The caller
 *                      is responsible for eventually destroying this object by
 *                      calling `FinalizeSimulation()`.
 *
 * @return PetscErrorCode Returns 0 on success, or a non-zero PETSc error code on failure.
 */
PetscErrorCode CreateSimulationContext(int argc, char **argv, SimCtx **p_simCtx);

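/*
 * Minimal call-sequence sketch (illustrative; FinalizeSimulation() is referenced
 * in the note above but its exact signature is an assumption, and whether PETSc
 * initialization happens inside CreateSimulationContext or before it is not
 * specified here):
 *
 *   int main(int argc, char **argv) {
 *     SimCtx        *simCtx = NULL;
 *     PetscErrorCode ierr;
 *     ierr = CreateSimulationContext(argc, argv, &simCtx); if (ierr) return ierr;
 *     // ... grid setup, vector creation, boundary conditions, time loop ...
 *     ierr = FinalizeSimulation(simCtx); if (ierr) return ierr;  // hypothetical signature
 *     return 0;
 *   }
 */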

/**
 * @brief The main orchestrator for setting up all grid-related components.
 *
 * This function is the high-level driver for creating the entire computational
 * domain, including the multigrid hierarchy, PETSc DMDA and Vec objects, and
 * calculating all necessary grid metrics.
 *
 * @param simCtx The fully configured SimulationContext.
 * @return PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.
 */
PetscErrorCode SetupGridAndSolvers(SimCtx *simCtx);

/**
 * @brief Creates and initializes all PETSc Vec objects for all fields.
 *
 * This function iterates through every UserCtx in the multigrid and multi-block
 * hierarchy. For each context, it creates the comprehensive set of global and
 * local PETSc Vecs required by the flow solver (e.g., Ucont, P, Nvert, metrics,
 * turbulence fields, etc.). Each vector is initialized to zero.
 *
 * @param simCtx The master SimCtx, containing the configured UserCtx hierarchy.
 * @return PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.
 */
PetscErrorCode CreateAndInitializeAllVectors(SimCtx *simCtx);

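/*
 * For orientation, the per-field pattern described above corresponds to the
 * standard PETSc calls sketched here (illustrative; the fda/Ucat/lUcat member
 * names are taken from the documentation of other routines in this header):
 *
 *   ierr = DMCreateGlobalVector(user->fda, &user->Ucat);  CHKERRQ(ierr);
 *   ierr = DMCreateLocalVector(user->fda,  &user->lUcat); CHKERRQ(ierr);
 *   ierr = VecSet(user->Ucat, 0.0);                       CHKERRQ(ierr);
 */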
/**
 * @brief Updates the local vector (including ghost points) from its corresponding global vector.
 *
 * This function identifies the correct global vector, local vector, and DM based on the
 * provided fieldName and performs the standard PETSc DMGlobalToLocalBegin/End sequence.
 * Includes optional debugging output (max norms before/after).
 *
 * @param user      The UserCtx structure containing the vectors and DMs.
 * @param fieldName The name of the field to update ("Ucat", "Ucont", "P", "Nvert", etc.).
 *
 * @return PetscErrorCode 0 on success, non-zero on failure.
 *
 * @note This function assumes the global vector associated with fieldName has already
 *       been populated with the desired data (including any boundary conditions).
 */
PetscErrorCode UpdateLocalGhosts(UserCtx* user, const char *fieldName);

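/*
 * The helper wraps the standard PETSc ghost-update sequence, sketched here for
 * the "Ucat" field (illustrative; the fda/Ucat/lUcat members are referenced in
 * the documentation elsewhere in this header):
 *
 *   ierr = DMGlobalToLocalBegin(user->fda, user->Ucat, INSERT_VALUES, user->lUcat); CHKERRQ(ierr);
 *   ierr = DMGlobalToLocalEnd(user->fda, user->Ucat, INSERT_VALUES, user->lUcat);   CHKERRQ(ierr);
 */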
/**
 * @brief (Orchestrator) Sets up all boundary conditions for the simulation.
 */
PetscErrorCode SetupBoundaryConditions(SimCtx *simCtx);

/**
 * @brief Allocates a 3D array of PetscReal values using PetscCalloc.
 *
 * This function dynamically allocates memory for a 3D array of PetscReal values
 * with dimensions nz (layers) x ny (rows) x nx (columns). It uses PetscCalloc1
 * to ensure the memory is zero-initialized.
 *
 * The allocation is done in three steps:
 * 1. Allocate an array of nz pointers (one for each layer).
 * 2. Allocate a contiguous block for nz*ny row pointers and assign each layer’s row pointers.
 * 3. Allocate a contiguous block for all nz*ny*nx PetscReal values.
 *
 * This setup allows the array to be accessed as array[k][j][i], and the memory
 * for the data is contiguous, which improves cache efficiency.
 *
 * @param[out] array Pointer to the 3D array to be allocated.
 * @param[in]  nz    Number of layers (z-direction).
 * @param[in]  ny    Number of rows (y-direction).
 * @param[in]  nx    Number of columns (x-direction).
 *
 * @return PetscErrorCode 0 on success, nonzero on failure.
 */
PetscErrorCode Allocate3DArrayScalar(PetscReal ****array, PetscInt nz, PetscInt ny, PetscInt nx);

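/*
 * Layout sketch of the three-step allocation described above (an illustration,
 * not the exact implementation in setup.c; error checking abbreviated):
 *
 *   PetscReal ***a, **rows, *data;
 *   PetscCalloc1(nz, &a);              // step 1: nz layer pointers
 *   PetscCalloc1(nz * ny, &rows);      // step 2: nz*ny row pointers
 *   PetscCalloc1(nz * ny * nx, &data); // step 3: contiguous data block
 *   for (PetscInt k = 0; k < nz; k++) {
 *     a[k] = &rows[k * ny];
 *     for (PetscInt j = 0; j < ny; j++) a[k][j] = &data[((k * ny) + j) * nx];
 *   }
 *   // a[k][j][i] now indexes the contiguous block; finally *array = a;
 */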
/**
 * @brief Deallocates a 3D array of PetscReal values allocated by Allocate3DArrayScalar.
 *
 * This function frees the memory allocated for a 3D array of PetscReal values.
 * It assumes the memory was allocated using Allocate3DArrayScalar, which allocated
 * three separate memory blocks: one for the contiguous data, one for the row pointers,
 * and one for the layer pointers.
 *
 * @param[in] array Pointer to the 3D array to be deallocated.
 * @param[in] nz    Number of layers (z-direction).
 * @param[in] ny    Number of rows (y-direction).
 *
 * @return PetscErrorCode 0 on success, nonzero on failure.
 */
PetscErrorCode Deallocate3DArrayScalar(PetscReal ***array, PetscInt nz, PetscInt ny);

/**
 * @brief Allocates a 3D array of Cmpnts structures using PetscCalloc.
 *
 * This function dynamically allocates memory for a 3D array of Cmpnts structures
 * with dimensions nz (layers) x ny (rows) x nx (columns). It uses PetscCalloc1
 * to ensure the memory is zero-initialized.
 *
 * The allocation follows the same three-step scheme as Allocate3DArrayScalar
 * (layer pointers, row pointers, contiguous data block), so the array can be
 * accessed as array[k][j][i] with contiguous, cache-friendly storage.
 *
 * @param[out] array Pointer to the 3D array to be allocated.
 * @param[in]  nz    Number of layers (z-direction).
 * @param[in]  ny    Number of rows (y-direction).
 * @param[in]  nx    Number of columns (x-direction).
 *
 * @return PetscErrorCode 0 on success, nonzero on failure.
 */
PetscErrorCode Allocate3DArrayVector(Cmpnts ****array, PetscInt nz, PetscInt ny, PetscInt nx);

/**
 * @brief Deallocates a 3D array of Cmpnts structures allocated by Allocate3DArrayVector.
 *
 * This function frees the memory allocated for a 3D array of Cmpnts structures.
 * It assumes the memory was allocated using Allocate3DArrayVector, which created three
 * separate memory blocks: one for the contiguous vector data, one for the row pointers,
 * and one for the layer pointers.
 *
 * @param[in] array Pointer to the 3D array to be deallocated.
 * @param[in] nz    Number of layers in the z-direction.
 * @param[in] ny    Number of rows in the y-direction.
 *
 * @return PetscErrorCode 0 on success, nonzero on failure.
 */
PetscErrorCode Deallocate3DArrayVector(Cmpnts ***array, PetscInt nz, PetscInt ny);

/**
 * @brief Determines the global starting index and number of CELLS owned by the
 *        current processor in a specified dimension. Ownership is defined by the
 *        rank owning the cell's origin node (min i,j,k corner).
 *
 * @param[in]  info_nodes     Pointer to the DMDALocalInfo struct obtained from the NODE-based DMDA
 *                            (e.g., user->da or user->fda, assuming they have consistent nodal
 *                            partitioning for defining cell origins).
 * @param[in]  dim            The dimension to compute the range for (0 for x/i, 1 for y/j, 2 for z/k).
 * @param[out] xs_cell_global Pointer to store the starting GLOBAL CELL index owned by this process.
 *                            A cell C(i) is defined by nodes N(i) and N(i+1). Its global index is i.
 * @param[out] xm_cell_local  Pointer to store the NUMBER of CELLS owned by this process in this dimension.
 *
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode GetOwnedCellRange(const DMDALocalInfo *info_nodes,
                                 PetscInt dim,
                                 PetscInt *xs_cell_global,
                                 PetscInt *xm_cell_local);

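/*
 * Sketch of the node-to-cell conversion implied by the ownership rule above
 * (an assumption-laden illustration for the i-direction only, not the exact
 * code): a rank owning nodes [xs, xs+xm) owns cells starting at the same index,
 * but the last global node (i = mx-1) does not originate a cell.
 *
 *   xs_cell = info_nodes->xs;
 *   xm_cell = (info_nodes->xs + info_nodes->xm == info_nodes->mx)
 *               ? info_nodes->xm - 1
 *               : info_nodes->xm;
 */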
/**
 * @brief Gets the global cell range for a rank, including boundary cells.
 *
 * This function first calls GetOwnedCellRange to get the conservative range of
 * fully-contained cells. It then extends this range by applying the
 * "Lower-Rank-Owns-Boundary" principle. A rank claims ownership of the
 * boundary cells it shares with neighbors in the positive (+x, +y, +z)
 * directions.
 *
 * This results in a final cell range that is gap-free and suitable for building
 * the definitive particle ownership map.
 *
 * @param[in]  info_nodes         Pointer to the DMDALocalInfo struct.
 * @param[in]  neighbors          Pointer to the RankNeighbors struct containing neighbor info.
 * @param[in]  dim                The dimension (0 for i/x, 1 for j/y, 2 for k/z).
 * @param[out] xs_cell_global_out Pointer to store the final starting cell index.
 * @param[out] xm_cell_local_out  Pointer to store the final number of cells.
 *
 * @return PetscErrorCode 0 on success, or an error code on failure.
 */
PetscErrorCode GetGhostedCellRange(const DMDALocalInfo *info_nodes,
                                   const RankNeighbors *neighbors,
                                   PetscInt dim,
                                   PetscInt *xs_cell_global_out,
                                   PetscInt *xm_cell_local_out);

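/*
 * Illustration of the "Lower-Rank-Owns-Boundary" extension for the i-direction
 * (a sketch; the `xp` field name follows ComputeAndStoreNeighborRanks below,
 * and encoding a missing neighbor as a negative rank is an assumption here):
 *
 *   PetscInt xs_cell, xm_cell;
 *   GetOwnedCellRange(info_nodes, 0, &xs_cell, &xm_cell);
 *   if (neighbors->xp >= 0) xm_cell += 1;   // a +x neighbor exists: claim the shared boundary cell
 */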

/**
 * @brief Computes and stores the Cartesian neighbor ranks for the DMDA decomposition.
 *
 * This function retrieves the neighbor information from the primary DMDA (user->da)
 * and stores the face neighbors (xm, xp, ym, yp, zm, zp) in the user->neighbors structure.
 * It assumes a standard PETSc ordering for the neighbors array returned by DMDAGetNeighbors.
 * Logs warnings if the assumed indices seem incorrect (e.g., center rank mismatch).
 *
 * @param[in,out] user Pointer to the UserCtx structure where neighbor info will be stored.
 *
 * @return PetscErrorCode 0 on success, non-zero on failure.
 */
PetscErrorCode ComputeAndStoreNeighborRanks(UserCtx *user);

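/*
 * The ordering assumed above is PETSc's 3x3x3 natural ordering from
 * DMDAGetNeighbors, where index = i + 3*j + 9*k with i,j,k in {0,1,2} and the
 * calling rank at index 13. A sketch of the extraction (illustrative; the
 * member-access form of user->neighbors is an assumption):
 *
 *   const PetscMPIInt *nbrs;
 *   ierr = DMDAGetNeighbors(user->da, &nbrs); CHKERRQ(ierr);
 *   user->neighbors.xm = nbrs[12];  user->neighbors.xp = nbrs[14];
 *   user->neighbors.ym = nbrs[10];  user->neighbors.yp = nbrs[16];
 *   user->neighbors.zm = nbrs[4];   user->neighbors.zp = nbrs[22];
 */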
/**
 * @brief Sets the processor layout for a given DMDA based on PETSc options.
 *
 * Reads the desired number of processors in the x, y, and z directions using
 * PETSc options (e.g., -dm_processors_x, -dm_processors_y, -dm_processors_z).
 * If an option is not provided for a direction, PETSC_DECIDE is used for that direction.
 * Applies the layout using DMDASetNumProcs.
 *
 * Also stores the retrieved/decided values in user->procs_x/y/z if a user context is provided.
 *
 * @param dm   The DMDA object to configure the layout for.
 * @param user Pointer to the UserCtx structure (optional, used to store layout values).
 *
 * @return PetscErrorCode 0 on success, non-zero on failure.
 */
PetscErrorCode SetDMDAProcLayout(DM dm, UserCtx *user);

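/*
 * Sketch of the option-driven layout described above (illustrative; the flag
 * names follow the examples in the brief):
 *
 *   PetscInt  px = PETSC_DECIDE, py = PETSC_DECIDE, pz = PETSC_DECIDE;
 *   PetscBool flg;
 *   ierr = PetscOptionsGetInt(NULL, NULL, "-dm_processors_x", &px, &flg); CHKERRQ(ierr);
 *   ierr = PetscOptionsGetInt(NULL, NULL, "-dm_processors_y", &py, &flg); CHKERRQ(ierr);
 *   ierr = PetscOptionsGetInt(NULL, NULL, "-dm_processors_z", &pz, &flg); CHKERRQ(ierr);
 *   ierr = DMDASetNumProcs(dm, px, py, pz); CHKERRQ(ierr);
 */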
/**
 * @brief Sets up the full rank communication infrastructure, including neighbor ranks and bounding box exchange.
 *
 * This function orchestrates the following steps:
 * 1. Compute and store the neighbor ranks in the user context.
 * 2. Gather all local bounding boxes to rank 0.
 * 3. Broadcast the complete bounding box list to all ranks.
 *
 * The final result is that each rank has access to its immediate neighbors and the bounding box
 * information of all ranks.
 *
 * @param[in,out] simCtx Pointer to the master SimCtx structure (must be initialized); on return,
 *                       the neighbor ranks and the broadcast bounding box list are available
 *                       through it.
 *
 * @return PetscErrorCode Returns 0 on success or a non-zero PETSc error code.
 */
PetscErrorCode SetupDomainRankInfo(SimCtx *simCtx);

/**
 * @brief Reconstructs Cartesian velocity (Ucat) at cell centers from contravariant
 *        velocity (Ucont) defined on cell faces.
 *
 * This function performs the transformation from a contravariant velocity representation
 * (which is natural on a curvilinear grid) to a Cartesian (x,y,z) representation.
 * For each interior computational cell owned by the rank, it performs the following:
 *
 * 1. It averages the contravariant velocity components (U¹, U², U³) from the
 *    surrounding faces to get an estimate of the contravariant velocity at the cell center.
 * 2. It averages the metric vectors (Csi, Eta, Zet) from the surrounding faces
 *    to get an estimate of the metric tensor at the cell center. This tensor forms
 *    the transformation matrix.
 * 3. It solves the linear system `[MetricTensor] * [ucat] = [ucont]` for the
 *    Cartesian velocity vector `ucat = (u,v,w)` using Cramer's rule.
 * 4. The computed Cartesian velocity is stored in the global `user->Ucat` vector.
 *
 * The function operates on local, ghosted versions of the input vectors (`user->lUcont`,
 * `user->lCsi`, etc.) to ensure stencils are valid across processor boundaries.
 *
 * @param[in,out] user Pointer to the UserCtx structure. The function reads from
 *                     `user->lUcont`, `user->lCsi`, `user->lEta`, `user->lZet`, `user->lNvert`
 *                     and writes to the global `user->Ucat` vector.
 *
 * @return PetscErrorCode 0 on success.
 *
 * @note
 * - This function should be called AFTER `user->lUcont` and all local metric vectors
 *   (`user->lCsi`, etc.) have been populated with up-to-date ghost values via `UpdateLocalGhosts`.
 * - It only computes `Ucat` for interior cells (not on physical boundaries) and for
 *   cells not marked as solid/blanked by `user->lNvert`.
 * - The caller is responsible for subsequently applying boundary conditions to `user->Ucat`
 *   and calling `UpdateLocalGhosts(user, "Ucat")` to populate `user->lUcat`.
 */
PetscErrorCode Contra2Cart(UserCtx *user);

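/*
 * The per-cell system solved in step 3, written out (a sketch; csi/eta/zet
 * denote the face-averaged metric components at the cell center, written here
 * with assumed .x/.y/.z component naming):
 *
 *   | csi.x  csi.y  csi.z | | u |   | U¹ |
 *   | eta.x  eta.y  eta.z | | v | = | U² |
 *   | zet.x  zet.y  zet.z | | w |   | U³ |
 *
 * With det the determinant of the 3x3 metric matrix, Cramer's rule gives
 * u = det_u/det, v = det_v/det, w = det_w/det, where det_u, det_v, det_w are
 * the determinants obtained by replacing the corresponding column with the
 * right-hand side (U¹, U², U³).
 */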
/**
 * @brief Creates and distributes a map of the domain's cell decomposition to all ranks.
 * @ingroup DomainInfo
 *
 * This function is a critical part of the simulation setup. It determines the global
 * cell ownership for each MPI rank and makes this information available to all
 * other ranks. This "decomposition map" is essential for the robust "Walk and Handoff"
 * particle migration strategy, allowing any rank to quickly identify the owner of a
 * target cell.
 *
 * The process involves:
 * 1. Each rank gets its own node ownership information from the DMDA.
 * 2. It converts this node information into cell ownership ranges using the
 *    `GetOwnedCellRange` helper function.
 * 3. It participates in an `MPI_Allgather` collective operation to build a complete
 *    array (`user->RankCellInfoMap`) containing the ownership information for every rank.
 *
 * This function should be called once during initialization after the primary DMDA
 * (user->da) has been set up.
 *
 * @param[in,out] user Pointer to the UserCtx structure. The function will allocate and
 *                     populate `user->RankCellInfoMap` and set `user->num_ranks`.
 *
 * @return PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.
 *         Errors can occur if input pointers are NULL or if MPI communication fails.
 */
PetscErrorCode SetupDomainCellDecompositionMap(UserCtx *user);

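/*
 * Sketch of the collective exchange described in step 3 (illustrative; the
 * record type name `RankCellInfo` and its contents are assumptions inferred
 * from the `RankCellInfoMap` member documented above):
 *
 *   RankCellInfo my_info;   // filled from GetOwnedCellRange in each dimension
 *   ierr = MPI_Allgather(&my_info, (int)sizeof(RankCellInfo), MPI_BYTE,
 *                        user->RankCellInfoMap, (int)sizeof(RankCellInfo), MPI_BYTE,
 *                        PETSC_COMM_WORLD);
 *   CHKERRMPI(ierr);
 */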
/**
 * @brief Performs a binary search for a key in a sorted array of PetscInt64.
 *
 * This is a standard binary search algorithm implemented as a PETSc-style helper function.
 * It efficiently determines whether a given `key` exists within a sorted array.
 *
 * @param[in]  n     The number of elements in the array.
 * @param[in]  arr   A pointer to the sorted array of PetscInt64 values to be searched.
 * @param[in]  key   The PetscInt64 value to search for.
 * @param[out] found A pointer to a PetscBool that will be set to PETSC_TRUE if the key
 *                   is found, and PETSC_FALSE otherwise.
 *
 * @return PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.
 *
 * @note The input array `arr` **must** be sorted in ascending order for the algorithm
 *       to work correctly.
 */
PetscErrorCode BinarySearchInt64(PetscInt n, const PetscInt64 arr[], PetscInt64 key, PetscBool *found);

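/*
 * Usage sketch (illustrative; the array contents are arbitrary example data):
 *
 *   const PetscInt64 ids[] = {3, 8, 15, 42, 99};   // must be sorted ascending
 *   PetscBool        found = PETSC_FALSE;
 *   ierr = BinarySearchInt64(5, ids, 42, &found); CHKERRQ(ierr);
 *   // found is now PETSC_TRUE
 */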

#endif // SETUP_H