PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
setup.h
1#ifndef SETUP_H
2#define SETUP_H
3
4#include <petscpf.h>
5#include <petscdmswarm.h>
6#include <stdlib.h>
7#include <time.h>
8#include <math.h>
9#include <petsctime.h>
10#include <petscsys.h>
11#include <petscdmcomposite.h>
12#include <petscsystypes.h>
13
14// Include additional headers
15#include "variables.h" // Shared type definitions
16#include "ParticleSwarm.h" // Particle swarm functions
17#include "walkingsearch.h" // Particle location functions
18#include "grid.h" // Grid functions
19#include "logging.h" // Logging macros
20#include "io.h" // Data Input and Output functions
21#include "interpolation.h" // Interpolation routines
22#include "ParticleMotion.h" // Functions related to motion of particles
23#include "Boundaries.h" // Functions related to Boundary conditions
24
25
26/* Macro to automatically select the correct allocation function */
27#define Allocate3DArray(array, nz, ny, nx) \
28 _Generic((array), \
29 PetscReal ****: Allocate3DArrayScalar, \
30 Cmpnts ****: Allocate3DArrayVector \
31 )(array, nz, ny, nx)
32
33/* Macro for deallocation */
34#define Deallocate3DArray(array, nz, ny) \
35 _Generic((array), \
36 PetscReal ***: Deallocate3DArrayScalar, \
37 Cmpnts ***: Deallocate3DArrayVector \
38 )(array, nz, ny)
39
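The type-based dispatch performed by these macros can be illustrated with a small self-contained C11 sketch. `Scalar` and `Vector3` below are hypothetical stand-ins for `PetscReal` and `Cmpnts`, and the allocator functions are dummies; only the `_Generic` selection mechanism mirrors the macros above.

```c
#include <stdio.h>

/* Hypothetical stand-ins for PetscReal and Cmpnts, used only in this sketch. */
typedef double Scalar;
typedef struct { double x, y, z; } Vector3;

static const char *alloc_scalar(void) { return "scalar allocator"; }
static const char *alloc_vector(void) { return "vector allocator"; }

/* C11 generic selection: the (unevaluated) controlling expression's type picks
 * the function, just as Allocate3DArray dispatches on PetscReal**** vs Cmpnts****. */
#define PickAllocator(array) _Generic((array), \
    Scalar  ****: alloc_scalar,                \
    Vector3 ****: alloc_vector)()
```

A caller never names the concrete allocator; the pointer type of the first macro argument selects it at compile time.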
40
41/**
42 * @brief Allocates and populates the master SimulationContext object.
43 *
44 * This function serves as the single, authoritative entry point for all
45 * simulation configuration. It merges the setup logic from both the legacy
46 * FSI/IBM solver and the modern particle solver into a unified, robust
47 * process.
48 *
49 * The function follows a strict sequence:
50 * 1. **Allocate Context & Set Defaults:** It first allocates the `SimulationContext`
51 * and populates every field with a sane, hardcoded default value. This
52 * ensures the simulation starts from a known, predictable state.
53 * 2. **Configure Logging System:** It configures the custom logging framework. It
54 * parses the `-func_config_file` option to load a list of function names
55 * allowed to produce log output. This configuration (the file path and the
56 * list of function names) is stored within the `SimulationContext` for
57 * later reference and cleanup.
58 * 3. **Parse All Options:** It performs a comprehensive pass of `PetscOptionsGet...`
59 * calls for every possible command-line flag, overriding the default
60 * values set in step 1.
61 * 4. **Log Summary:** After all options are parsed, it uses the now-active
62 * logging system to print a summary of the key simulation parameters.
63 *
64 * @param[in] argc Argument count passed from `main`.
65 * @param[in] argv Argument vector passed from `main`.
66 * @param[out] p_simCtx On success, this will point to the newly created and
67 * fully configured `SimulationContext` pointer. The caller
68 * is responsible for eventually destroying this object by
69 * calling `FinalizeSimulation()`.
70 *
71 * @return PetscErrorCode Returns 0 on success, or a non-zero PETSc error code on failure.
72 */
73PetscErrorCode CreateSimulationContext(int argc, char **argv, SimCtx **p_simCtx);
74
75/**
76 * @brief Parse a positive floating-point seconds value from runtime metadata.
77 *
78 * This helper is used for shell-exported walltime metadata such as
79 * `PICURV_JOB_START_EPOCH` and `PICURV_WALLTIME_LIMIT_SECONDS`.
80 *
81 * @param[in] text String to parse.
82 * @param[out] seconds_out Parsed positive seconds value when successful.
83 * @return PetscBool `PETSC_TRUE` when parsing succeeds, else `PETSC_FALSE`.
84 */
85PetscBool RuntimeWalltimeGuardParsePositiveSeconds(const char *text, PetscReal *seconds_out);
86
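The validation contract described here can be sketched in plain C with `strtod`. `parse_positive_seconds` is a hypothetical stand-in (returning `int` rather than `PetscBool`), not the PICurv implementation: it accepts a string only if it parses fully as a finite, strictly positive value.

```c
#include <stdlib.h>
#include <math.h>

/* Sketch of the parsing contract: full-string parse, finite, strictly positive. */
static int parse_positive_seconds(const char *text, double *seconds_out)
{
    char *end = NULL;
    double v;

    if (!text || !seconds_out) return 0;
    v = strtod(text, &end);
    if (end == text || *end != '\0') return 0;  /* no digits, or trailing junk */
    if (!isfinite(v) || v <= 0.0)    return 0;  /* reject inf/nan/zero/negative */
    *seconds_out = v;
    return 1;
}
```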
87/**
88 * @brief Verifies and prepares the complete I/O environment for a simulation run.
89 *
90 * This function performs a comprehensive series of checks and setup actions to
91 * ensure a valid and clean environment. It is parallel-safe; all filesystem
92 * operations and checks are performed by Rank 0, with collective error handling.
93 *
94 * The function's responsibilities include:
95 * 1. **Checking Mandatory Inputs:** Verifies existence of grid and BCs files.
96 * 2. **Checking Optional Inputs:** Warns if optional config files (whitelist, profile) are missing.
97 * 3. **Validating Run Mode Paths:** Ensures `restart_dir` or post-processing source directories exist when needed.
98 * 4. **Preparing Log Directory:** Creates the log directory and cleans it of previous logs.
99 * 5. **Preparing Output Directories:** Creates the main output directory and its required subdirectories.
100 *
101 * @param[in] simCtx The fully configured SimulationContext object.
102 *
103 * @return PetscErrorCode Returns 0 on success, or a non-zero error code if a
104 * mandatory file/directory is missing or a critical operation fails.
105 */
106PetscErrorCode SetupSimulationEnvironment(SimCtx *simCtx);
107
108/**
109 * @brief The main orchestrator for setting up all grid-related components.
110 *
111 * This function is the high-level driver for creating the entire computational
112 * domain, including the multigrid hierarchy, PETSc DMDA and Vec objects, and
113 * calculating all necessary grid metrics.
114 *
115 * @param simCtx The fully configured SimulationContext.
116 * @return PetscErrorCode
117 */
118PetscErrorCode SetupGridAndSolvers(SimCtx *simCtx);
119
120/**
121 * @brief Creates and initializes all PETSc Vec objects for all fields.
122 *
123 * This function iterates through every UserCtx in the multigrid and multi-block
124 * hierarchy. For each context, it creates the comprehensive set of global and
125 * local PETSc Vecs required by the flow solver (e.g., Ucont, P, Nvert, metrics,
126 * turbulence fields, etc.). Each vector is initialized to zero.
127 *
128 * @param simCtx The master SimCtx, containing the configured UserCtx hierarchy.
129 * @return PetscErrorCode
130 */
131PetscErrorCode CreateAndInitializeAllVectors(SimCtx *simCtx);
132
133/**
134 * @brief Updates the local vector (including ghost points) from its corresponding global vector.
135 *
136 * This function identifies the correct global vector, local vector, and DM based on the
137 * provided fieldName and performs the standard PETSc DMGlobalToLocalBegin/End sequence.
138 * Includes optional debugging output (max norms before/after).
139 *
140 * @param user The UserCtx structure containing the vectors and DMs.
141 * @param fieldName The name of the field to update ("Ucat", "Ucont", "P", "Nvert", etc.).
142 *
143 * @return PetscErrorCode 0 on success, non-zero on failure.
144 *
145 * @note This function assumes the global vector associated with fieldName has already
146 * been populated with the desired data (including any boundary conditions).
147 */
148PetscErrorCode UpdateLocalGhosts(UserCtx* user, const char *fieldName);
149
150/**
151 * @brief (Orchestrator) Sets up all boundary conditions for the simulation.
152 *
153 * @param simCtx Simulation context controlling the operation.
154 * @return PetscErrorCode 0 on success.
155 */
156PetscErrorCode SetupBoundaryConditions(SimCtx *simCtx);
157
158/**
159 * @brief Allocates a 3D array of PetscReal values using PetscCalloc.
160 *
161 * This function dynamically allocates memory for a 3D array of PetscReal values
162 * with dimensions nz (layers) x ny (rows) x nx (columns). It uses PetscCalloc1
163 * to ensure the memory is zero-initialized.
164 *
165 * The allocation is done in three steps:
166 * 1. Allocate an array of nz pointers (one for each layer).
167 * 2. Allocate a contiguous block for nz*ny row pointers and assign each layer’s row pointers.
168 * 3. Allocate a contiguous block for all nz*ny*nx PetscReal values.
169 *
170 * This setup allows the array to be accessed as array[k][j][i], and the memory
171 * for the data is contiguous, which improves cache efficiency.
172 *
173 * @param[out] array Pointer to the 3D array to be allocated.
174 * @param[in] nz Number of layers (z-direction).
175 * @param[in] ny Number of rows (y-direction).
176 * @param[in] nx Number of columns (x-direction).
177 *
178 * @return PetscErrorCode 0 on success, nonzero on failure.
179 */
180 PetscErrorCode Allocate3DArrayScalar(PetscReal ****array, PetscInt nz, PetscInt ny, PetscInt nx);
181
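The three-step layout described above can be sketched with plain `calloc`; the real routine uses `PetscCalloc1` and PETSc error handling, so this is an illustrative analogue only.

```c
#include <stdlib.h>

/* Three-step contiguous 3D allocation, mirroring the documented layout:
 * layer pointers -> row pointers -> one contiguous data block. */
static double ***alloc3d(size_t nz, size_t ny, size_t nx)
{
    double ***a   = calloc(nz, sizeof *a);              /* 1: nz layer pointers  */
    double **rows = calloc(nz * ny, sizeof *rows);      /* 2: nz*ny row pointers */
    double *data  = calloc(nz * ny * nx, sizeof *data); /* 3: contiguous payload */
    if (!a || !rows || !data) { free(a); free(rows); free(data); return NULL; }
    for (size_t k = 0; k < nz; k++) {
        a[k] = rows + k * ny;
        for (size_t j = 0; j < ny; j++)
            a[k][j] = data + (k * ny + j) * nx;   /* array[k][j][i] indexing */
    }
    return a;
}

/* Frees the three blocks in the reverse of the documented allocation order. */
static void free3d(double ***a)
{
    if (!a) return;
    free(a[0][0]);  /* data block     */
    free(a[0]);     /* row pointers   */
    free(a);        /* layer pointers */
}
```

Because the payload is one block, element `(k, j, i)` sits at flat offset `(k*ny + j)*nx + i`, which is what gives the cache-friendly layout the comment refers to.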
182/**
183 * @brief Deallocates a 3D array of PetscReal values allocated by Allocate3DArrayScalar.
184 *
185 * This function frees the memory allocated for a 3D array of PetscReal values.
186 * It assumes the memory was allocated using Allocate3DArrayScalar, which allocated
187 * three separate memory blocks: one for the contiguous data, one for the row pointers,
188 * and one for the layer pointers.
189 *
190 * @param[in] array Pointer to the 3D array to be deallocated.
191 * @param[in] nz Number of layers (z-direction).
192 * @param[in] ny Number of rows (y-direction).
193 *
194 * @return PetscErrorCode 0 on success, nonzero on failure.
195 */
196PetscErrorCode Deallocate3DArrayScalar(PetscReal ***array, PetscInt nz, PetscInt ny);
197
198/**
199 * @brief Allocates a contiguous 3D array of `Cmpnts` values.
200 *
201 * The memory layout mirrors `Allocate3DArrayScalar`: layer pointers, row pointers,
202 * and contiguous payload are allocated such that indexing as `array[k][j][i]` is valid
203 * while keeping payload data contiguous.
204 *
205 * @param[out] array Pointer to the 3D vector array to allocate.
206 * @param[in] nz Number of layers in the z-direction.
207 * @param[in] ny Number of rows in the y-direction.
208 * @param[in] nx Number of columns in the x-direction.
209 * @return PetscErrorCode 0 on success, nonzero on failure.
210 */
211 PetscErrorCode Allocate3DArrayVector(Cmpnts ****array, PetscInt nz, PetscInt ny, PetscInt nx);
212
213/**
214 * @brief Deallocates a 3D array of Cmpnts structures allocated by Allocate3DArrayVector.
215 *
216 * This function frees the memory allocated for a 3D array of Cmpnts structures.
217 * It assumes the memory was allocated using Allocate3DArrayVector, which created three
218 * separate memory blocks: one for the contiguous vector data, one for the row pointers,
219 * and one for the layer pointers.
220 *
221 * @param[in] array Pointer to the 3D array to be deallocated.
222 * @param[in] nz Number of layers in the z-direction.
223 * @param[in] ny Number of rows in the y-direction.
224 *
225 * @return PetscErrorCode 0 on success, nonzero on failure.
226 */
227PetscErrorCode Deallocate3DArrayVector(Cmpnts ***array, PetscInt nz, PetscInt ny);
228
229/**
230 * @brief Determines the global starting index and number of CELLS owned by the
231 * current processor in a specified dimension. Ownership is defined by the
232 * rank owning the cell's origin node (min i,j,k corner).
233 *
234 * @param[in] info_nodes Pointer to the DMDALocalInfo struct obtained from the NODE-based DMDA
235 * (e.g., user->da or user->fda, assuming they have consistent nodal partitioning
236 * for defining cell origins).
237 * @param[in] dim The dimension to compute the range for (0 for x/i, 1 for y/j, 2 for z/k).
238 * @param[out] xs_cell_global_out Pointer to store the starting global cell index owned by this process.
239 * @param[out] xm_cell_local_out Pointer to store the number of owned cells in this dimension.
240 *
241 * @return PetscErrorCode 0 on success.
242 */
243PetscErrorCode GetOwnedCellRange(const DMDALocalInfo *info_nodes,
244 PetscInt dim,
245 PetscInt *xs_cell_global_out,
246 PetscInt *xm_cell_local_out);
247
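One plausible reading of this ownership rule can be written as a plain-arithmetic sketch (an assumption about, not a copy of, the actual implementation): a rank owning nodes `[xs, xs+xm)` owns the cells whose origin nodes it holds, except that the globally last node starts no cell.

```c
/* Hypothetical sketch of node-ownership -> cell-ownership arithmetic.
 * xs_node/xm_node: this rank's nodal range in one dimension;
 * M_nodes: global node count in that dimension. */
static void owned_cell_range(int xs_node, int xm_node, int M_nodes,
                             int *xs_cell, int *xm_cell)
{
    *xs_cell = xs_node;
    *xm_cell = xm_node;
    if (xs_node + xm_node == M_nodes)  /* rank owns the last global node */
        *xm_cell = xm_node - 1;        /* that node is no cell's origin  */
    if (*xm_cell < 0) *xm_cell = 0;
}
```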
248/**
249 * @brief Computes and stores the Cartesian neighbor ranks for the DMDA decomposition.
250 *
251 * This function retrieves the neighbor information from the primary DMDA (user->da)
252 * and stores the face neighbors (xm, xp, ym, yp, zm, zp) in the user->neighbors structure.
253 * It assumes a standard PETSc ordering for the neighbors array returned by DMDAGetNeighbors.
254 * Logs warnings if the assumed indices seem incorrect (e.g., center rank mismatch).
255 *
256 * @param[in,out] user Pointer to the UserCtx structure where neighbor info will be stored.
257 *
258 * @return PetscErrorCode 0 on success, non-zero on failure.
259 */
260PetscErrorCode ComputeAndStoreNeighborRanks(UserCtx *user);
261
262/**
263 * @brief Sets the processor layout for a given DMDA based on PETSc options.
264 *
265 * Reads the desired number of processors in x, y, and z directions using
266 * PETSc options (e.g., -dm_processors_x, -dm_processors_y, -dm_processors_z).
267 * If an option is not provided for a direction, PETSC_DECIDE is used for that direction.
268 * Applies the layout using DMDASetNumProcs.
269 *
270 * Also stores the retrieved/decided values in user->procs_x/y/z if user context is provided.
271 *
272 * @param dm The DMDA object to configure the layout for.
273 * @param user Pointer to the UserCtx structure (optional, used to store layout values).
274 *
275 * @return PetscErrorCode 0 on success, non-zero on failure.
276 */
277PetscErrorCode SetDMDAProcLayout(DM dm, UserCtx *user);
278
279/**
280 * @brief Sets up the full rank communication infrastructure, including neighbor ranks and bounding box exchange.
281 *
282 * This function orchestrates the following steps:
283 * 1. Compute and store the neighbor ranks in the user context.
284 * 2. Gather all local bounding boxes to rank 0.
285 * 3. Broadcast the complete bounding box list to all ranks.
286 *
287 * The final result is that each rank has access to its immediate neighbors and the bounding box information of all ranks.
288 *
289 * @param[in,out] simCtx Pointer to initialized simulation context that owns all block UserCtx objects.
290 *
291 * @return PetscErrorCode Returns 0 on success or non-zero PETSc error code.
292 */
293PetscErrorCode SetupDomainRankInfo(SimCtx *simCtx);
294
295/**
296 * @brief Reconstructs Cartesian velocity (Ucat) at cell centers from contravariant
297 * velocity (Ucont) defined on cell faces.
298 *
299 * This function performs the transformation from a contravariant velocity representation
300 * (which is natural on a curvilinear grid) to a Cartesian (x,y,z) representation.
301 * For each interior computational cell owned by the rank, it performs the following:
302 *
303 * 1. It averages the contravariant velocity components (U¹, U², U³) from the
304 * surrounding faces to get an estimate of the contravariant velocity at the cell center.
305 * 2. It averages the metric vectors (Csi, Eta, Zet) from the surrounding faces
306 * to get an estimate of the metric tensor at the cell center. This tensor forms
307 * the transformation matrix.
308 * 3. It solves the linear system `[MetricTensor] * [ucat] = [ucont]` for the
309 * Cartesian velocity vector `ucat = (u,v,w)` using Cramer's rule.
310 * 4. The computed Cartesian velocity is stored in the global `user->Ucat` vector.
311 *
312 * The function operates on local, ghosted versions of the input vectors (`user->lUcont`,
313 * `user->lCsi`, etc.) to ensure stencils are valid across processor boundaries.
314 *
315 * @param[in,out] user Pointer to the UserCtx structure. The function reads from
316 * `user->lUcont`, `user->lCsi`, `user->lEta`, `user->lZet`, `user->lNvert`
317 * and writes to the global `user->Ucat` vector.
318 *
319 * @return PetscErrorCode 0 on success.
320 *
321 * @note
322 * - This function should be called AFTER `user->lUcont` and all local metric vectors
323 * (`user->lCsi`, etc.) have been populated with up-to-date ghost values via `UpdateLocalGhosts`.
324 * - It only computes `Ucat` for interior cells (not on physical boundaries) and for
325 * cells not marked as solid/blanked by `user->lNvert`.
326 * - The caller is responsible for subsequently applying boundary conditions to `user->Ucat`
327 * and calling `UpdateLocalGhosts(user, "Ucat")` to populate `user->lUcat`.
328 */
329PetscErrorCode Contra2Cart(UserCtx *user);
330
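Step 3, the Cramer's-rule solve of `[MetricTensor] * [ucat] = [ucont]`, can be sketched for a generic 3x3 system. `solve3x3` is illustrative only; conceptually, the rows of `A` would hold the averaged metric vectors (Csi, Eta, Zet) and `b` the cell-centered contravariant velocity.

```c
#include <math.h>

/* Cramer's rule for A x = b: x[c] = det(A with column c replaced by b) / det(A). */
static int solve3x3(const double A[3][3], const double b[3], double x[3])
{
    double det = A[0][0]*(A[1][1]*A[2][2] - A[1][2]*A[2][1])
               - A[0][1]*(A[1][0]*A[2][2] - A[1][2]*A[2][0])
               + A[0][2]*(A[1][0]*A[2][1] - A[1][1]*A[2][0]);
    if (fabs(det) < 1e-300) return 0;  /* singular transformation matrix */
    for (int c = 0; c < 3; c++) {
        double M[3][3];
        for (int r = 0; r < 3; r++)
            for (int cc = 0; cc < 3; cc++)
                M[r][cc] = (cc == c) ? b[r] : A[r][cc];
        x[c] = (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
              - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
              + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0])) / det;
    }
    return 1;
}
```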
331
332/**
333 * @brief Creates and distributes a map of the domain's cell decomposition to all ranks.
334 * @ingroup DomainInfo
335 *
336 * This function is a critical part of the simulation setup. It determines the global
337 * cell ownership for each MPI rank and makes this information available to all
338 * other ranks. This "decomposition map" is essential for the robust "Walk and Handoff"
339 * particle migration strategy, allowing any rank to quickly identify the owner of a
340 * target cell.
341 *
342 * The process involves:
343 * 1. Each rank gets its own node ownership information from the DMDA.
344 * 2. It converts this node information into cell ownership ranges using the
345 * `GetOwnedCellRange` helper function.
346 * 3. It participates in an `MPI_Allgather` collective operation to build a complete
347 * array (`user->RankCellInfoMap`) containing the ownership information for every rank.
348 *
349 * This function should be called once during initialization after the primary DMDA
350 * (user->da) has been set up.
351 *
352 * @param[in,out] user Pointer to the UserCtx structure. The function will allocate and
353 * populate `user->RankCellInfoMap` and set `user->num_ranks`.
354 *
355 * @return PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.
356 * Errors can occur if input pointers are NULL or if MPI communication fails.
357 */
358PetscErrorCode SetupDomainCellDecompositionMap(UserCtx *user);
359
360/**
361 * @brief Performs a binary search for a key in a sorted array of PetscInt64.
362 *
363 * This is a standard binary search algorithm implemented as a PETSc-style helper function.
364 * It efficiently determines if a given `key` exists within a `sorted` array.
365 *
366 * @param[in] n The number of elements in the array.
367 * @param[in] arr A pointer to the sorted array of PetscInt64 values to be searched.
368 * @param[in] key The PetscInt64 value to search for.
369 * @param[out] found A pointer to a PetscBool that will be set to PETSC_TRUE if the key
370 * is found, and PETSC_FALSE otherwise.
371 *
372 * @return PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.
373 *
374 * @note The input array `arr` **must** be sorted in ascending order for the algorithm
375 * to work correctly.
376 */
377PetscErrorCode BinarySearchInt64(PetscInt n, const PetscInt64 arr[], PetscInt64 key, PetscBool *found);
378
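The found-flag contract can be sketched in plain C, with `int64_t` standing in for `PetscInt64` and an `int` return standing in for the `PETSC_TRUE`/`PETSC_FALSE` output parameter.

```c
#include <stdint.h>

/* Standard binary search over an ascending int64_t array. */
static int binary_search_i64(int n, const int64_t arr[], int64_t key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids (lo + hi) overflow */
        if (arr[mid] == key) return 1;  /* PETSC_TRUE analogue  */
        if (arr[mid] < key)  lo = mid + 1;
        else                 hi = mid - 1;
    }
    return 0;                           /* PETSC_FALSE analogue */
}
```

As the note above stresses, the array must be sorted ascending; on unsorted input the search may miss keys that are present.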
379
380/**
381 * @brief Computes the discrete divergence of the contravariant velocity field.
382 *
383 * This diagnostic/kernel routine evaluates continuity residuals on the local block
384 * and writes the resulting divergence field into the configured output vector(s).
385 *
386 * @param[in,out] user Block-level context containing velocity and metric fields.
387 * @return PetscErrorCode 0 on success.
388 */
389PetscErrorCode ComputeDivergence(UserCtx *user);
390
391/**
392 * @brief Initializes random number generators for assigning particle properties.
393 *
394 * This function creates and configures separate PETSc random number generators for the x, y, and z coordinates.
395 *
396 * @param[in,out] user Pointer to the UserCtx structure containing simulation context.
397 * @param[out] randx Pointer to store the RNG for the x-coordinate.
398 * @param[out] randy Pointer to store the RNG for the y-coordinate.
399 * @param[out] randz Pointer to store the RNG for the z-coordinate.
400 *
401 * @return PetscErrorCode Returns 0 on success, non-zero on failure.
402 */
403PetscErrorCode InitializeRandomGenerators(UserCtx *user, PetscRandom *randx, PetscRandom *randy, PetscRandom *randz);
404
405/**
406 * @brief Initializes random number generators for logical space operations [0.0, 1.0).
407 *
408 * This function creates and configures three separate PETSc random number generators,
409 * one for each logical dimension (i, j, k or xi, eta, zeta equivalent).
410 * Each RNG is configured to produce uniformly distributed real numbers in the interval [0.0, 1.0).
411 * These are typically used for selecting owned cells or generating intra-cell logical coordinates.
412 *
413 * @param[out] rand_logic_i Pointer to store the RNG for the i-logical dimension.
414 * @param[out] rand_logic_j Pointer to store the RNG for the j-logical dimension.
415 * @param[out] rand_logic_k Pointer to store the RNG for the k-logical dimension.
416 *
417 * @return PetscErrorCode Returns 0 on success, non-zero on failure.
418 */
419PetscErrorCode InitializeLogicalSpaceRNGs(PetscRandom *rand_logic_i, PetscRandom *rand_logic_j, PetscRandom *rand_logic_k);
420
421/**
422 * @brief Initializes a single master RNG for time-stepping physics (Brownian motion).
423 * Configures it for Uniform [0, 1) which is required for Box-Muller transformation.
424 *
425 * @param[in,out] simCtx Pointer to the Simulation Context.
426 * @return PetscErrorCode
427 */
428PetscErrorCode InitializeBrownianRNG(SimCtx *simCtx);
429
430/**
431 * @brief Transforms scalar derivatives from computational space to
432 * physical space using the chain rule.
433 *
434 * Formula: dPhi/dx = J * ( dPhi/dCsi * dCsi/dx + dPhi/dEta * dEta/dx + ... )
435 *
436 * @param[in] jacobian Jacobian of the coordinate transformation at the point.
437 * @param[in] csi_metrics Metric components for the csi direction (dCsi/dx, dCsi/dy, dCsi/dz).
438 * @param[in] eta_metrics Metric components for the eta direction (dEta/dx, dEta/dy, dEta/dz).
439 * @param[in] zet_metrics Metric components for the zet direction (dZet/dx, dZet/dy, dZet/dz).
440 * @param[in] dPhi_dcsi Scalar derivative in the csi computational direction.
441 * @param[in] dPhi_deta Scalar derivative in the eta computational direction.
442 * @param[in] dPhi_dzet Scalar derivative in the zet computational direction.
443 * @param[out] gradPhi Resulting physical-space gradient (dPhi/dx, dPhi/dy, dPhi/dz).
444 */
445 void TransformScalarDerivativesToPhysical(PetscReal jacobian,
446 Cmpnts csi_metrics,
447 Cmpnts eta_metrics,
448 Cmpnts zet_metrics,
449 PetscReal dPhi_dcsi,
450 PetscReal dPhi_deta,
451 PetscReal dPhi_dzet,
452 Cmpnts *gradPhi);
453
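The documented formula can be sketched with plain doubles; `Vec3` stands in for `Cmpnts`, and with identity metrics and a unit Jacobian the physical gradient reduces to the computational derivatives unchanged.

```c
/* Chain rule: dPhi/dx_m = J * sum over computational directions q of
 * (dPhi/dxi_q) * (dxi_q/dx_m), with csi/eta/zet holding the metric rows. */
typedef struct { double x, y, z; } Vec3;

static Vec3 to_physical_gradient(double jacobian,
                                 Vec3 csi, Vec3 eta, Vec3 zet,
                                 double dcsi, double deta, double dzet)
{
    Vec3 g;
    g.x = jacobian * (dcsi * csi.x + deta * eta.x + dzet * zet.x);
    g.y = jacobian * (dcsi * csi.y + deta * eta.y + dzet * zet.y);
    g.z = jacobian * (dcsi * csi.z + deta * eta.z + dzet * zet.z);
    return g;
}
```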
454/**
455 * @brief Computes the gradient of a cell-centered SCALAR field at a specific grid point.
456 *
457 * @param[in] user Pointer to the UserCtx providing grid metrics and layout.
458 * @param[in] i Global i-index of the grid point.
459 * @param[in] j Global j-index of the grid point.
460 * @param[in] k Global k-index of the grid point.
461 * @param[in] field_data 3D array of cell-centered scalar values, indexed as [k][j][i].
462 * @param[out] grad Output: physical-space gradient (dPhi/dx, dPhi/dy, dPhi/dz).
463 * @return PetscErrorCode 0 on success.
464 */
465PetscErrorCode ComputeScalarFieldDerivatives(UserCtx *user, PetscInt i, PetscInt j, PetscInt k,
466 PetscReal ***field_data, Cmpnts *grad);
467
468/**
469 * @brief Computes the derivatives of a cell-centered vector field at a specific grid point.
470 *
471 * This function orchestrates the calculation of spatial derivatives. It first computes
472 * the derivatives in computational space (d/dcsi, d/deta, d/dzet) using a central
473 * difference scheme and then transforms them into physical space (d/dx, d/dy, d/dz).
474 *
475 * @param[in] user Pointer to the UserCtx providing grid metrics and layout.
476 * @param[in] i Global i-index of the grid point.
477 * @param[in] j Global j-index of the grid point.
478 * @param[in] k Global k-index of the grid point.
479 * @param[in] field_data 3D array of cell-centered Cmpnts values, indexed as [k][j][i].
480 * @param[out] dudx Output: physical-space gradient of the x-component (du/dx, du/dy, du/dz).
481 * @param[out] dvdx Output: physical-space gradient of the y-component (dv/dx, dv/dy, dv/dz).
482 * @param[out] dwdx Output: physical-space gradient of the z-component (dw/dx, dw/dy, dw/dz).
483 * @return PetscErrorCode 0 on success.
484 */
485PetscErrorCode ComputeVectorFieldDerivatives(UserCtx *user, PetscInt i, PetscInt j, PetscInt k, Cmpnts ***field_data,
486 Cmpnts *dudx, Cmpnts *dvdx, Cmpnts *dwdx);
487
488/**
489 * @brief Destroys all PETSc Vec objects within a single UserCtx structure.
490 *
491 * This helper function systematically destroys all ~74 Vec objects stored in a UserCtx.
492 * The vectors are organized into 14 groups (A-N) for clarity:
493 * - Primary flow fields (Ucont, Ucat, P, Nvert)
494 * - Solver work vectors (Phi)
495 * - Time-stepping vectors (Ucont_o, Ucont_rm1, etc.)
496 * - Grid metrics (cell-centered, face-centered, coordinates)
497 * - Turbulence vectors (Nu_t, CS, lFriction_Velocity)
498 * - Particle vectors (ParticleCount, Psi)
499 * - Boundary condition vectors (Ubcs, Uch)
500 * - Post-processing vectors (P_nodal, Qcrit)
501 * - Statistical averaging vectors (Ucat_sum, etc.)
502 * - And more...
503 *
504 * All destroys are protected with NULL checks to handle conditional allocations safely.
505 *
506 * @param[in,out] user Pointer to the UserCtx containing the vectors to destroy.
507 *
508 * @return PetscErrorCode 0 on success.
509 */
510PetscErrorCode DestroyUserVectors(UserCtx *user);
511
512/**
513 * @brief Destroys all resources allocated within a single UserCtx structure.
514 *
515 * This function cleans up all memory and PETSc objects associated with a single
516 * UserCtx (grid level). It calls the helper functions and destroys remaining objects
517 * in the proper dependency order:
518 * 1. Boundary conditions (handlers and their data)
519 * 2. All PETSc vectors (via DestroyUserVectors)
520 * 3. Matrix and solver objects (A, C, MR, MP, ksp, nullsp)
521 * 4. Application ordering (AO)
522 * 5. Distributed mesh objects (DMs) - most derived first
523 * 6. Raw PetscMalloc'd arrays (RankCellInfoMap, KSKE)
524 *
525 * This function should be called for each UserCtx in the multigrid hierarchy.
526 *
527 * @param[in,out] user Pointer to the UserCtx to be destroyed.
528 *
529 * @return PetscErrorCode 0 on success.
530 */
531PetscErrorCode DestroyUserContext(UserCtx *user);
532
533/**
534 * @brief Main cleanup function for the entire simulation context.
535 *
536 * This function is responsible for destroying ALL memory and PETSc objects allocated
537 * during the simulation, including:
538 * - All UserCtx structures in the multigrid hierarchy (via DestroyUserContext)
539 * - The multigrid management structures (UserMG, MGCtx array)
540 * - All SimCtx-level objects (logviewer, dm_swarm, bboxlist, string arrays, etc.)
541 *
542 * This function should be called ONCE at the end of the simulation, after all
543 * computation is complete, but BEFORE PetscFinalize().
544 *
545 * Call order in main:
546 * 1. [Simulation runs]
547 * 2. ProfilingFinalize(simCtx);
548 * 3. FinalizeSimulation(simCtx); <- This function
549 * 4. PetscFinalize();
550 *
551 * @param[in,out] simCtx Pointer to the master SimulationContext to be destroyed.
552 *
553 * @return PetscErrorCode 0 on success.
554 */
555PetscErrorCode FinalizeSimulation(SimCtx *simCtx);
556
557
558 #endif // SETUP_H