PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
setup.h
1#ifndef SETUP_H
2#define SETUP_H
3
4#include <petscpf.h>
5#include <petscdmswarm.h>
6#include <stdlib.h>
7#include <time.h>
8#include <math.h>
9#include <petsctime.h>
10#include <petscsys.h>
11#include <petscdmcomposite.h>
12#include <petscsystypes.h>
13
14// Include additional headers
15#include "variables.h" // Shared type definitions
16#include "ParticleSwarm.h" // Particle swarm functions
17#include "walkingsearch.h" // Particle location functions
18#include "grid.h" // Grid functions
19#include "logging.h" // Logging macros
20#include "io.h" // Data Input and Output functions
21#include "interpolation.h" // Interpolation routines
22#include "ParticleMotion.h" // Functions related to motion of particles
23#include "Boundaries.h" // Functions related to Boundary conditions
24
25
26/* Macro to automatically select the correct allocation function */
27#define Allocate3DArray(array, nz, ny, nx) \
28 _Generic((array), \
29 PetscReal ****: Allocate3DArrayScalar, \
30 Cmpnts ****: Allocate3DArrayVector \
31 )(array, nz, ny, nx)
32
33/* Macro for deallocation */
34#define Deallocate3DArray(array, nz, ny) \
35 _Generic((array), \
36 PetscReal ***: Deallocate3DArrayScalar, \
37 Cmpnts ***: Deallocate3DArrayVector \
38 )(array, nz, ny)
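The two macros above rely on C11 generic selection to pick an allocator/deallocator from the static type of the `array` argument. A minimal self-contained illustration of the same dispatch pattern, using `double` and a hypothetical `Vec3` struct as stand-ins for `PetscReal` and `Cmpnts`:

```c
#include <string.h>

typedef struct { double x, y, z; } Vec3;   /* stand-in for Cmpnts */

static const char *alloc_scalar_name(void) { return "scalar"; }
static const char *alloc_vector_name(void) { return "vector"; }

/* Dispatch on the static type of `array`, exactly as the header's
 * Allocate3DArray macro does with PetscReal**** vs. Cmpnts****. */
#define AllocName(array) _Generic((array),     \
    double ****: alloc_scalar_name,            \
    Vec3   ****: alloc_vector_name             \
    )()
```

Because `_Generic` resolves at compile time, passing a pointer type not listed in the association list is a compile error, which is exactly the safety the macro buys over a `void*`-based allocator.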
39
40
41/**
42 * @brief Allocates and populates the master SimulationContext object.
43 *
44 * This function serves as the single, authoritative entry point for all
45 * simulation configuration. It merges the setup logic from both the legacy
46 * FSI/IBM solver and the modern particle solver into a unified, robust
47 * process.
48 *
49 * The function follows a strict sequence:
50 * 1. **Allocate Context & Set Defaults:** It first allocates the `SimulationContext`
51 * and populates every field with a sane, hardcoded default value. This
52 * ensures the simulation starts from a known, predictable state.
53 * 2. **Configure Logging System:** It configures the custom logging framework. It
54 * parses the `-func_config_file` option to load a list of function names
55 * allowed to produce log output. This configuration (the file path and the
56 * list of function names) is stored within the `SimulationContext` for
57 * later reference and cleanup.
58 * 3. **Parse All Options:** It performs a comprehensive pass of `PetscOptionsGet...`
59 * calls for every possible command-line flag, overriding the default
60 * values set in step 1.
61 * 4. **Log Summary:** After all options are parsed, it uses the now-active
62 * logging system to print a summary of the key simulation parameters.
63 *
64 * @param[in] argc Argument count passed from `main`.
65 * @param[in] argv Argument vector passed from `main`.
66 * @param[out] p_simCtx On success, this will point to the newly created and
67 * fully configured `SimulationContext` pointer. The caller
68 * is responsible for eventually destroying this object by
69 * calling `FinalizeSimulation()`.
70 *
71 * @return PetscErrorCode Returns 0 on success, or a non-zero PETSc error code on failure.
72 */
73PetscErrorCode CreateSimulationContext(int argc, char **argv, SimCtx **p_simCtx);
74
75/**
76 * @brief Verifies and prepares the complete I/O environment for a simulation run.
77 *
78 * This function performs a comprehensive series of checks and setup actions to
79 * ensure a valid and clean environment. It is parallel-safe; all filesystem
80 * operations and checks are performed by Rank 0, with collective error handling.
81 *
82 * The function's responsibilities include:
83 * 1. **Checking Mandatory Inputs:** Verifies existence of grid and BCs files.
84 * 2. **Checking Optional Inputs:** Warns if optional config files (whitelist, profile) are missing.
85 * 3. **Validating Run Mode Paths:** Ensures `restart_dir` or post-processing source directories exist when needed.
86 * 4. **Preparing Log Directory:** Creates the log directory and cleans it of previous logs.
87 * 5. **Preparing Output Directories:** Creates the main output directory and its required subdirectories.
88 *
89 * @param[in] simCtx The fully configured SimulationContext object.
90 *
91 * @return PetscErrorCode Returns 0 on success, or a non-zero error code if a
92 * mandatory file/directory is missing or a critical operation fails.
93 */
94PetscErrorCode SetupSimulationEnvironment(SimCtx *simCtx);
95
96/**
97 * @brief The main orchestrator for setting up all grid-related components.
98 *
99 * This function is the high-level driver for creating the entire computational
100 * domain, including the multigrid hierarchy, PETSc DMDA and Vec objects, and
101 * calculating all necessary grid metrics.
102 *
103 * @param simCtx The fully configured SimulationContext.
104 * @return PetscErrorCode
105 */
106PetscErrorCode SetupGridAndSolvers(SimCtx *simCtx);
107
108/**
109 * @brief Creates and initializes all PETSc Vec objects for all fields.
110 *
111 * This function iterates through every UserCtx in the multigrid and multi-block
112 * hierarchy. For each context, it creates the comprehensive set of global and
113 * local PETSc Vecs required by the flow solver (e.g., Ucont, P, Nvert, metrics,
114 * turbulence fields, etc.). Each vector is initialized to zero.
115 *
116 * @param simCtx The master SimCtx, containing the configured UserCtx hierarchy.
117 * @return PetscErrorCode
118 */
119PetscErrorCode CreateAndInitializeAllVectors(SimCtx *simCtx);
120
121/**
122 * @brief Updates the local vector (including ghost points) from its corresponding global vector.
123 *
124 * This function identifies the correct global vector, local vector, and DM based on the
125 * provided fieldName and performs the standard PETSc DMGlobalToLocalBegin/End sequence.
126 * Includes optional debugging output (max norms before/after).
127 *
128 * @param user The UserCtx structure containing the vectors and DMs.
129 * @param fieldName The name of the field to update ("Ucat", "Ucont", "P", "Nvert", etc.).
130 *
131 * @return PetscErrorCode 0 on success, non-zero on failure.
132 *
133 * @note This function assumes the global vector associated with fieldName has already
134 * been populated with the desired data (including any boundary conditions).
135 */
136PetscErrorCode UpdateLocalGhosts(UserCtx* user, const char *fieldName);
137
138/**
139 * @brief (Orchestrator) Sets up all boundary conditions for the simulation.
140 */
141PetscErrorCode SetupBoundaryConditions(SimCtx *simCtx);
142
143/**
144 * @brief Allocates a 3D array of PetscReal values using PetscCalloc.
145 *
146 * This function dynamically allocates memory for a 3D array of PetscReal values
147 * with dimensions nz (layers) x ny (rows) x nx (columns). It uses PetscCalloc1
148 * to ensure the memory is zero-initialized.
149 *
150 * The allocation is done in three steps:
151 * 1. Allocate an array of nz pointers (one for each layer).
152 * 2. Allocate a contiguous block for nz*ny row pointers and assign each layer’s row pointers.
153 * 3. Allocate a contiguous block for all nz*ny*nx PetscReal values.
154 *
155 * This setup allows the array to be accessed as array[k][j][i], and the memory
156 * for the data is contiguous, which improves cache efficiency.
157 *
158 * @param[out] array Pointer to the 3D array to be allocated.
159 * @param[in] nz Number of layers (z-direction).
160 * @param[in] ny Number of rows (y-direction).
161 * @param[in] nx Number of columns (x-direction).
162 *
163 * @return PetscErrorCode 0 on success, nonzero on failure.
164 */
165 PetscErrorCode Allocate3DArrayScalar(PetscReal ****array, PetscInt nz, PetscInt ny, PetscInt nx);
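The three-step layout described above can be sketched in plain C (hypothetical names; the real routine uses `PetscCalloc1` and PETSc error handling rather than `malloc`/`calloc`):

```c
#include <stdlib.h>

/* Allocate a zero-initialized nz x ny x nx array accessible as a[k][j][i],
 * with all data in one contiguous block:
 *   1. nz layer pointers, 2. nz*ny row pointers, 3. nz*ny*nx values. */
static double ***alloc3d(int nz, int ny, int nx) {
    double ***a   = malloc((size_t)nz * sizeof *a);              /* layers  */
    double **rows = malloc((size_t)nz * ny * sizeof *rows);      /* rows    */
    double *data  = calloc((size_t)nz * ny * nx, sizeof *data);  /* values  */
    if (!a || !rows || !data) { free(a); free(rows); free(data); return NULL; }
    for (int k = 0; k < nz; k++) {
        a[k] = &rows[k * ny];
        for (int j = 0; j < ny; j++)
            a[k][j] = &data[((size_t)k * ny + j) * nx];
    }
    return a;
}

/* Free in the reverse of the three allocation steps. */
static void free3d(double ***a) {
    if (!a) return;
    free(a[0][0]);  /* contiguous data  */
    free(a[0]);     /* row pointers     */
    free(a);        /* layer pointers   */
}
```

Only three `free` calls are needed regardless of the array dimensions, which is why `Deallocate3DArrayScalar` takes `nz` and `ny` but never loops over individual rows.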
166
167/**
168 * @brief Deallocates a 3D array of PetscReal values allocated by Allocate3DArrayScalar.
169 *
170 * This function frees the memory allocated for a 3D array of PetscReal values.
171 * It assumes the memory was allocated using Allocate3DArrayScalar, which allocated
172 * three separate memory blocks: one for the contiguous data, one for the row pointers,
173 * and one for the layer pointers.
174 *
175 * @param[in] array Pointer to the 3D array to be deallocated.
176 * @param[in] nz Number of layers (z-direction).
177 * @param[in] ny Number of rows (y-direction).
178 *
179 * @return PetscErrorCode 0 on success, nonzero on failure.
180 */
181PetscErrorCode Deallocate3DArrayScalar(PetscReal ***array, PetscInt nz, PetscInt ny);
182
183/**
184 * @brief Allocates a 3D array of Cmpnts structures using PetscCalloc.
185 *
186 * This function dynamically allocates a zero-initialized 3D array of Cmpnts
187 * structures with dimensions nz (layers) x ny (rows) x nx (columns), using the
188 * same three-block layout as Allocate3DArrayScalar (layer pointers, row
189 * pointers, and one contiguous data block), accessible as array[k][j][i].
190 *
191 * @param[out] array Pointer to the 3D array to be allocated.
192 * @param[in] nz Number of layers (z-direction).
193 * @param[in] ny Number of rows (y-direction).
194 * @param[in] nx Number of columns (x-direction).
195 *
196 */
197PetscErrorCode Allocate3DArrayVector(Cmpnts ****array, PetscInt nz, PetscInt ny, PetscInt nx);
198
199/**
200 * @brief Deallocates a 3D array of Cmpnts structures allocated by Allocate3DArrayVector.
201 *
202 * This function frees the memory allocated for a 3D array of Cmpnts structures.
203 * It assumes the memory was allocated using Allocate3DArrayVector, which created three
204 * separate memory blocks: one for the contiguous vector data, one for the row pointers,
205 * and one for the layer pointers.
206 *
207 * @param[in] array Pointer to the 3D array to be deallocated.
208 * @param[in] nz Number of layers in the z-direction.
209 * @param[in] ny Number of rows in the y-direction.
210 *
211 * @return PetscErrorCode 0 on success, nonzero on failure.
212 */
213PetscErrorCode Deallocate3DArrayVector(Cmpnts ***array, PetscInt nz, PetscInt ny);
214
215/**
216 * @brief Determines the global starting index and number of CELLS owned by the
217 * current processor in a specified dimension. Ownership is defined by the
218 * rank owning the cell's origin node (min i,j,k corner).
219 *
220 * @param[in] info_nodes Pointer to the DMDALocalInfo struct obtained from the NODE-based DMDA
221 * (e.g., user->da or user->fda, assuming they have consistent nodal partitioning
222 * for defining cell origins).
223 * @param[in] dim The dimension to compute the range for (0 for x/i, 1 for y/j, 2 for z/k).
224 * @param[out] xs_cell_global Pointer to store the starting GLOBAL CELL index owned by this process.
225 * A cell C(i) is defined by nodes N(i) and N(i+1). Its global index is i.
226 * @param[out] xm_cell_local Pointer to store the NUMBER of CELLs owned by this process in this dimension.
227 *
228 * @return PetscErrorCode 0 on success.
229 */
230PetscErrorCode GetOwnedCellRange(const DMDALocalInfo *info_nodes,
231 PetscInt dim,
232 PetscInt *xs_cell_global,
233 PetscInt *xm_cell_local);
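The ownership rule stated above (a cell belongs to the rank owning its origin node) can be sketched with a hypothetical helper; the real `GetOwnedCellRange` reads the node range from a `DMDALocalInfo` and handles additional edge cases:

```c
/* Cell C(i) spans nodes N(i)..N(i+1) and is owned by the rank owning its
 * origin node N(i).  Given a rank's node range [xs, xs+xm) out of mx global
 * nodes, derive its owned cell range.  (Illustrative sketch only.) */
static void owned_cell_range(int xs, int xm, int mx,
                             int *cell_start, int *cell_count) {
    *cell_start = xs;
    *cell_count = xm;
    if (xs + xm == mx && xm > 0)  /* the last global node has no cell beyond it */
        *cell_count = xm - 1;
}
```

With `mx` nodes there are `mx - 1` cells in total, so only the rank owning the final node loses one cell relative to its node count.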
234
235/**
236 * @brief Computes and stores the Cartesian neighbor ranks for the DMDA decomposition.
237 *
238 * This function retrieves the neighbor information from the primary DMDA (user->da)
239 * and stores the face neighbors (xm, xp, ym, yp, zm, zp) in the user->neighbors structure.
240 * It assumes a standard PETSc ordering for the neighbors array returned by DMDAGetNeighbors.
241 * Logs warnings if the assumed indices seem incorrect (e.g., center rank mismatch).
242 *
243 * @param[in,out] user Pointer to the UserCtx structure where neighbor info will be stored.
244 *
245 * @return PetscErrorCode 0 on success, non-zero on failure.
246 */
247PetscErrorCode ComputeAndStoreNeighborRanks(UserCtx *user);
248
249/**
250 * @brief Sets the processor layout for a given DMDA based on PETSc options.
251 *
252 * Reads the desired number of processors in x, y, and z directions using
253 * PETSc options (e.g., -dm_processors_x, -dm_processors_y, -dm_processors_z).
254 * If an option is not provided for a direction, PETSC_DECIDE is used for that direction.
255 * Applies the layout using DMDASetNumProcs.
256 *
257 * Also stores the retrieved/decided values in user->procs_x/y/z if user context is provided.
258 *
259 * @param dm The DMDA object to configure the layout for.
260 * @param user Pointer to the UserCtx structure (optional, used to store layout values).
261 *
262 * @return PetscErrorCode 0 on success, non-zero on failure.
263 */
264PetscErrorCode SetDMDAProcLayout(DM dm, UserCtx *user);
265
266/**
267 * @brief Sets up the full rank communication infrastructure, including neighbor ranks and bounding box exchange.
268 *
269 * This function orchestrates the following steps:
270 * 1. Compute and store the neighbor ranks in the user context.
271 * 2. Gather all local bounding boxes to rank 0.
272 * 3. Broadcast the complete bounding box list to all ranks.
273 *
274 * The final result is that each rank has access to its immediate neighbors and the bounding box information of all ranks.
275 *
276 * @param[in,out] simCtx Pointer to the SimCtx whose UserCtx hierarchy and
277 * broadcasted bounding-box list (bboxlist) will be populated.
278 *
279 * @return PetscErrorCode Returns 0 on success or non-zero PETSc error code.
280 */
281PetscErrorCode SetupDomainRankInfo(SimCtx *simCtx);
282
283/**
284 * @brief Reconstructs Cartesian velocity (Ucat) at cell centers from contravariant
285 * velocity (Ucont) defined on cell faces.
286 *
287 * This function performs the transformation from a contravariant velocity representation
288 * (which is natural on a curvilinear grid) to a Cartesian (x,y,z) representation.
289 * For each interior computational cell owned by the rank, it performs the following:
290 *
291 * 1. It averages the contravariant velocity components (U¹, U², U³) from the
292 * surrounding faces to get an estimate of the contravariant velocity at the cell center.
293 * 2. It averages the metric vectors (Csi, Eta, Zet) from the surrounding faces
294 * to get an estimate of the metric tensor at the cell center. This tensor forms
295 * the transformation matrix.
296 * 3. It solves the linear system `[MetricTensor] * [ucat] = [ucont]` for the
297 * Cartesian velocity vector `ucat = (u,v,w)` using Cramer's rule.
298 * 4. The computed Cartesian velocity is stored in the global `user->Ucat` vector.
299 *
300 * The function operates on local, ghosted versions of the input vectors (`user->lUcont`,
301 * `user->lCsi`, etc.) to ensure stencils are valid across processor boundaries.
302 *
303 * @param[in,out] user Pointer to the UserCtx structure. The function reads from
304 * `user->lUcont`, `user->lCsi`, `user->lEta`, `user->lZet`, `user->lNvert`
305 * and writes to the global `user->Ucat` vector.
306 *
307 * @return PetscErrorCode 0 on success.
308 *
309 * @note
310 * - This function should be called AFTER `user->lUcont` and all local metric vectors
311 * (`user->lCsi`, etc.) have been populated with up-to-date ghost values via `UpdateLocalGhosts`.
312 * - It only computes `Ucat` for interior cells (not on physical boundaries) and for
313 * cells not marked as solid/blanked by `user->lNvert`.
314 * - The caller is responsible for subsequently applying boundary conditions to `user->Ucat`
315 * and calling `UpdateLocalGhosts(user, "Ucat")` to populate `user->lUcat`.
316 */
317PetscErrorCode Contra2Cart(UserCtx *user);
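Step 3 above, the Cramer's-rule solve of `[MetricTensor] * [ucat] = [ucont]`, can be shown in isolation. This is a plain-C sketch with hypothetical names; the real function additionally averages face values to cell centers and skips solid/blanked cells:

```c
#include <math.h>
#include <string.h>

/* 3x3 determinant by cofactor expansion along the first row. */
static double det3(double m[3][3]) {
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

/* Solve [M]{ucat} = {ucont} by Cramer's rule, where the rows of M are the
 * cell-centered metric vectors (csi, eta, zet).  Returns 0 on success,
 * -1 if the metric tensor is (near-)singular. */
static int contra_to_cart(const double csi[3], const double eta[3],
                          const double zet[3], const double ucont[3],
                          double ucat[3]) {
    double M[3][3] = {{csi[0], csi[1], csi[2]},
                      {eta[0], eta[1], eta[2]},
                      {zet[0], zet[1], zet[2]}};
    double d = det3(M);
    if (fabs(d) < 1e-30) return -1;
    for (int c = 0; c < 3; c++) {           /* replace column c with the RHS */
        double Mc[3][3];
        memcpy(Mc, M, sizeof M);
        for (int r = 0; r < 3; r++) Mc[r][c] = ucont[r];
        ucat[c] = det3(Mc) / d;
    }
    return 0;
}
```

On an orthogonal uniform grid the metric tensor reduces to (a scaling of) the identity and `ucat` equals `ucont`, which makes a convenient sanity check.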
318
319
320/**
321 * @brief Creates and distributes a map of the domain's cell decomposition to all ranks.
322 * @ingroup DomainInfo
323 *
324 * This function is a critical part of the simulation setup. It determines the global
325 * cell ownership for each MPI rank and makes this information available to all
326 * other ranks. This "decomposition map" is essential for the robust "Walk and Handoff"
327 * particle migration strategy, allowing any rank to quickly identify the owner of a
328 * target cell.
329 *
330 * The process involves:
331 * 1. Each rank gets its own node ownership information from the DMDA.
332 * 2. It converts this node information into cell ownership ranges using the
333 * `GetOwnedCellRange` helper function.
334 * 3. It participates in an `MPI_Allgather` collective operation to build a complete
335 * array (`user->RankCellInfoMap`) containing the ownership information for every rank.
336 *
337 * This function should be called once during initialization after the primary DMDA
338 * (user->da) has been set up.
339 *
340 * @param[in,out] user Pointer to the UserCtx structure. The function will allocate and
341 * populate `user->RankCellInfoMap` and set `user->num_ranks`.
342 *
343 * @return PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.
344 * Errors can occur if input pointers are NULL or if MPI communication fails.
345 */
346PetscErrorCode SetupDomainCellDecompositionMap(UserCtx *user);
347
348/**
349 * @brief Performs a binary search for a key in a sorted array of PetscInt64.
350 *
351 * This is a standard binary search algorithm implemented as a PETSc-style helper function.
352 * It efficiently determines if a given `key` exists within a `sorted` array.
353 *
354 * @param[in] n The number of elements in the array.
355 * @param[in] arr A pointer to the sorted array of PetscInt64 values to be searched.
356 * @param[in] key The PetscInt64 value to search for.
357 * @param[out] found A pointer to a PetscBool that will be set to PETSC_TRUE if the key
358 * is found, and PETSC_FALSE otherwise.
359 *
360 * @return PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.
361 *
362 * @note The input array `arr` **must** be sorted in ascending order for the algorithm
363 * to work correctly.
364 */
365PetscErrorCode BinarySearchInt64(PetscInt n, const PetscInt64 arr[], PetscInt64 key, PetscBool *found);
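The core of the search is the textbook algorithm; a self-contained plain-C sketch of `BinarySearchInt64`, minus the PETSc error handling and `PetscBool` plumbing:

```c
#include <stdint.h>
#include <stdbool.h>

/* Standard binary search over an ascending int64_t array.
 * Returns true iff `key` is present among the first n elements. */
static bool contains_i64(int n, const int64_t arr[], int64_t key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  /* avoids (lo + hi) overflow */
        if (arr[mid] == key) return true;
        if (arr[mid] < key)  lo = mid + 1;
        else                 hi = mid - 1;
    }
    return false;
}
```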
366
367
368PetscErrorCode ComputeDivergence(UserCtx *user);
369
370/**
371 * @brief Initializes random number generators for assigning particle properties.
372 *
373 * This function creates and configures separate PETSc random number generators for the x, y, and z coordinates.
374 *
375 * @param[in,out] user Pointer to the UserCtx structure containing simulation context.
376 * @param[out] randx Pointer to store the RNG for the x-coordinate.
377 * @param[out] randy Pointer to store the RNG for the y-coordinate.
378 * @param[out] randz Pointer to store the RNG for the z-coordinate.
379 *
380 * @return PetscErrorCode Returns 0 on success, non-zero on failure.
381 */
382PetscErrorCode InitializeRandomGenerators(UserCtx *user, PetscRandom *randx, PetscRandom *randy, PetscRandom *randz);
383
384/**
385 * @brief Initializes random number generators for logical space operations [0.0, 1.0).
386 *
387 * This function creates and configures three separate PETSc random number generators,
388 * one for each logical dimension (i, j, k or xi, eta, zeta equivalent).
389 * Each RNG is configured to produce uniformly distributed real numbers in the interval [0.0, 1.0).
390 * These are typically used for selecting owned cells or generating intra-cell logical coordinates.
391 *
392 * @param[out] rand_logic_i Pointer to store the RNG for the i-logical dimension.
393 * @param[out] rand_logic_j Pointer to store the RNG for the j-logical dimension.
394 * @param[out] rand_logic_k Pointer to store the RNG for the k-logical dimension.
395 *
396 * @return PetscErrorCode Returns 0 on success, non-zero on failure.
397 */
398PetscErrorCode InitializeLogicalSpaceRNGs(PetscRandom *rand_logic_i, PetscRandom *rand_logic_j, PetscRandom *rand_logic_k);
399
400/**
401 * @brief Initializes a single master RNG for time-stepping physics (Brownian motion).
402 * Configures it to produce uniform samples in [0, 1), as required by the Box-Muller transform.
403 *
404 * @param[in,out] simCtx Pointer to the Simulation Context.
405 * @return PetscErrorCode
406 */
407PetscErrorCode InitializeBrownianRNG(SimCtx *simCtx);
408
409/**
410 * @brief Transforms scalar derivatives from computational space to physical space
411 * using the chain rule.
412 *
413 * Formula: dPhi/dx = J * ( dPhi/dCsi * dCsi/dx + dPhi/dEta * dEta/dx + ... )
414 */
415 void TransformScalarDerivativesToPhysical(PetscReal jacobian,
416 Cmpnts csi_metrics,
417 Cmpnts eta_metrics,
418 Cmpnts zet_metrics,
419 PetscReal dPhi_dcsi,
420 PetscReal dPhi_deta,
421 PetscReal dPhi_dzet,
422 Cmpnts *gradPhi);
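The chain-rule formula above is a straight dot product per physical direction. A plain-C sketch, assuming (as in the real routine) that each metric `Cmpnts` stores the Jacobian-scaled derivatives of one computational coordinate with respect to (x, y, z):

```c
typedef struct { double x, y, z; } Vec3;  /* stand-in for Cmpnts */

/* dPhi/dx = J * (dPhi/dCsi * csi.x + dPhi/dEta * eta.x + dPhi/dZet * zet.x),
 * and analogously for y and z. */
static Vec3 grad_to_physical(double J, Vec3 csi, Vec3 eta, Vec3 zet,
                             double dp_dcsi, double dp_deta, double dp_dzet) {
    Vec3 g;
    g.x = J * (dp_dcsi * csi.x + dp_deta * eta.x + dp_dzet * zet.x);
    g.y = J * (dp_dcsi * csi.y + dp_deta * eta.y + dp_dzet * zet.y);
    g.z = J * (dp_dcsi * csi.z + dp_deta * eta.z + dp_dzet * zet.z);
    return g;
}
```

With identity metrics and `J = 1` (a Cartesian unit grid), the physical gradient equals the computational derivatives unchanged.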
423
424/**
425 * @brief Computes the gradient of a cell-centered SCALAR field at a specific grid point.
426 *
427 * @param user The user context.
428 * @param i, j, k The grid indices.
429 * @param field_data 3D array pointer to the scalar field (PetscReal***).
430 * @param grad Output: A Cmpnts struct storing [dPhi/dx, dPhi/dy, dPhi/dz].
431 * @return PetscErrorCode
432 */
433PetscErrorCode ComputeScalarFieldDerivatives(UserCtx *user, PetscInt i, PetscInt j, PetscInt k,
434 PetscReal ***field_data, Cmpnts *grad);
435
436/**
437 * @brief Computes the derivatives of a cell-centered vector field at a specific grid point.
438 *
439 * This function orchestrates the calculation of spatial derivatives. It first computes
440 * the derivatives in computational space (d/dcsi, d/deta, d/dzet) using a central
441 * difference scheme and then transforms them into physical space (d/dx, d/dy, d/dz).
442 *
443 * @param user The user context for the current computational block.
444 * @param i, j, k The grid indices of the cell center where derivatives are required.
445 * @param field_data A 3D array pointer to the raw local data of the vector field (e.g., from lUcat).
446 * @param dudx Output: A Cmpnts struct storing [du/dx, du/dy, du/dz].
447 * @param dvdx Output: A Cmpnts struct storing [dv/dx, dv/dy, dv/dz].
448 * @param dwdx Output: A Cmpnts struct storing [dw/dx, dw/dy, dw/dz].
449 * @return PetscErrorCode 0 on success.
450 */
451PetscErrorCode ComputeVectorFieldDerivatives(UserCtx *user, PetscInt i, PetscInt j, PetscInt k, Cmpnts ***field_data,
452 Cmpnts *dudx, Cmpnts *dvdx, Cmpnts *dwdx);
453
454/**
455 * @brief Destroys all PETSc Vec objects within a single UserCtx structure.
456 *
457 * This helper function systematically destroys all ~74 Vec objects stored in a UserCtx.
458 * The vectors are organized into 14 groups (A-N) for clarity:
459 * - Primary flow fields (Ucont, Ucat, P, Nvert)
460 * - Solver work vectors (Phi)
461 * - Time-stepping vectors (Ucont_o, Ucont_rm1, etc.)
462 * - Grid metrics (cell-centered, face-centered, coordinates)
463 * - Turbulence vectors (Nu_t, CS, lFriction_Velocity)
464 * - Particle vectors (ParticleCount, Psi)
465 * - Boundary condition vectors (Ubcs, Uch)
466 * - Post-processing vectors (P_nodal, Qcrit)
467 * - Statistical averaging vectors (Ucat_sum, etc.)
468 * - And more...
469 *
470 * All destroys are protected with NULL checks to handle conditional allocations safely.
471 *
472 * @param[in,out] user Pointer to the UserCtx containing the vectors to destroy.
473 *
474 * @return PetscErrorCode 0 on success.
475 */
476PetscErrorCode DestroyUserVectors(UserCtx *user);
477
478/**
479 * @brief Destroys all resources allocated within a single UserCtx structure.
480 *
481 * This function cleans up all memory and PETSc objects associated with a single
482 * UserCtx (grid level). It calls the helper functions and destroys remaining objects
483 * in the proper dependency order:
484 * 1. Boundary conditions (handlers and their data)
485 * 2. All PETSc vectors (via DestroyUserVectors)
486 * 3. Matrix and solver objects (A, C, MR, MP, ksp, nullsp)
487 * 4. Application ordering (AO)
488 * 5. Distributed mesh objects (DMs) - most derived first
489 * 6. Raw PetscMalloc'd arrays (RankCellInfoMap, KSKE)
490 *
491 * This function should be called for each UserCtx in the multigrid hierarchy.
492 *
493 * @param[in,out] user Pointer to the UserCtx to be destroyed.
494 *
495 * @return PetscErrorCode 0 on success.
496 */
497PetscErrorCode DestroyUserContext(UserCtx *user);
498
499/**
500 * @brief Main cleanup function for the entire simulation context.
501 *
502 * This function is responsible for destroying ALL memory and PETSc objects allocated
503 * during the simulation, including:
504 * - All UserCtx structures in the multigrid hierarchy (via DestroyUserContext)
505 * - The multigrid management structures (UserMG, MGCtx array)
506 * - All SimCtx-level objects (logviewer, dm_swarm, bboxlist, string arrays, etc.)
507 *
508 * This function should be called ONCE at the end of the simulation, after all
509 * computation is complete, but BEFORE PetscFinalize().
510 *
511 * Call order in main:
512 * 1. [Simulation runs]
513 * 2. ProfilingFinalize(simCtx);
514 * 3. FinalizeSimulation(simCtx); <- This function
515 * 4. PetscFinalize();
516 *
517 * @param[in,out] simCtx Pointer to the master SimulationContext to be destroyed.
518 *
519 * @return PetscErrorCode 0 on success.
520 */
521PetscErrorCode FinalizeSimulation(SimCtx *simCtx);
522
523
524 #endif // SETUP_H