PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
grid.h
// in include/grid.h

#ifndef GRID_H
#define GRID_H

#include "variables.h"           // Essential for SimulationContext and UserCtx definitions
#include "logging.h"             // Logging macros and definitions
#include "io.h"
#include "setup.h"               // For SetDMDAProcLayout
#include "AnalyticalSolutions.h" // For SetAnalyticalGridInfo
/**
 * @file grid.h
 * @brief Public interface for grid, solver, and metric setup routines.
 */

/**
 * @brief Orchestrates the parsing and setting of grid dimensions for all blocks.
 *
 * This function serves as the high-level entry point for defining the geometric
 * properties of each grid block in the simulation. It iterates through every
 * block defined by `simCtx->block_number`.
 *
 * For each block, it performs two key actions:
 * 1. It explicitly sets the block's index (`_this`) in the corresponding `UserCtx`
 *    struct for the finest multigrid level. This makes the context "self-aware".
 * 2. It calls a helper function (`ParseAndSetGridInputs`) to handle the detailed
 *    work of parsing options or files to populate the rest of the geometric
 *    properties for that specific block (e.g., `IM`, `Min_X`, `rx`).
 *
 * @param simCtx The master SimCtx, which contains the number of blocks and the
 *               UserCtx hierarchy to be configured.
 * @return PetscErrorCode 0 on success, or a PETSc error code on failure.
 */
PetscErrorCode DefineAllGridDimensions(SimCtx *simCtx);

/**
 * @brief Orchestrates the creation of DMDA objects for every block and multigrid level.
 *
 * This function systematically builds the entire DMDA hierarchy. It first
 * calculates the dimensions (IM, JM, KM) for all coarse grids based on the
 * finest grid's dimensions and the semi-coarsening flags. It then iterates
 * from the coarsest to the finest level, calling a helper function
 * (`InitializeSingleGridDM`) to create the DMs for each block, ensuring that
 * finer grids are properly aligned with their coarser parents for multigrid efficiency.
 *
 * @param simCtx The master SimCtx, containing the configured UserCtx hierarchy.
 * @return PetscErrorCode 0 on success, or a PETSc error code on failure.
 */
PetscErrorCode InitializeAllGridDMs(SimCtx *simCtx);

/**
 * @brief Orchestrates the assignment of physical coordinates to all DMDA objects.
 *
 * This function manages the entire process of populating the coordinate vectors
 * for every DMDA across all multigrid levels and blocks. It follows a two-part
 * strategy that is essential for multigrid methods:
 *
 * 1. **Populate Finest Level:** It first loops through each block and calls a
 *    helper (`SetFinestLevelCoordinates`) to set the physical coordinates for
 *    the highest-resolution grid (the finest multigrid level).
 * 2. **Restrict to Coarser Levels:** It then iterates downwards from the finest
 *    level, calling a helper (`RestrictCoordinates`) to copy the coordinate
 *    values from the fine grid nodes to their corresponding parent nodes on the
 *    coarser grids. This ensures all levels represent the exact same geometry.
 *
 * @param simCtx The master SimCtx, containing the configured UserCtx hierarchy.
 * @return PetscErrorCode 0 on success, or a PETSc error code on failure.
 */
PetscErrorCode AssignAllGridCoordinates(SimCtx *simCtx);


/**
 * @brief Computes the local bounding box of the grid on the current process.
 *
 * This function calculates the minimum and maximum coordinates of the local grid points owned
 * by the current MPI process and stores the computed bounding box in the provided structure.
 *
 * @param[in]  user      Pointer to the user-defined context containing grid information.
 * @param[out] localBBox Pointer to the BoundingBox structure to store the computed bounding box.
 *
 * @return PetscErrorCode Returns 0 on success, non-zero on failure.
 */
PetscErrorCode ComputeLocalBoundingBox(UserCtx *user, BoundingBox *localBBox);

/**
 * @brief Gathers local bounding boxes from all MPI processes to rank 0.
 *
 * This function computes the local bounding box on each process, then collects all local
 * bounding boxes on the root process (rank 0) using MPI. The result is stored in an array
 * of BoundingBox structures on rank 0.
 *
 * @param[in]  user      Pointer to the user-defined context containing grid information.
 * @param[out] allBBoxes Pointer to a pointer where the array of gathered bounding boxes
 *                       will be stored on rank 0. The caller on rank 0 must free this array.
 *
 * @return PetscErrorCode Returns 0 on success, non-zero on failure.
 */
PetscErrorCode GatherAllBoundingBoxes(UserCtx *user, BoundingBox **allBBoxes);

/**
 * @brief Broadcasts the bounding box information collected on rank 0 to all other ranks.
 *
 * This function assumes that `GatherAllBoundingBoxes()` was previously called, so `bboxlist`
 * is allocated and populated on rank 0. All other ranks will allocate memory for `bboxlist`,
 * and this function will use MPI_Bcast to distribute the bounding box data to them.
 *
 * @param[in]     user     Pointer to the UserCtx structure. (Currently unused in this function, but kept for consistency.)
 * @param[in,out] bboxlist Pointer to the array of BoundingBoxes. On rank 0, this should point to
 *                         a valid array of `size` entries (where `size` is the number of MPI ranks).
 *                         On non-root ranks, this function will allocate memory for `bboxlist`.
 *
 * @return PetscErrorCode Returns 0 on success, non-zero on MPI or PETSc-related errors.
 */
PetscErrorCode BroadcastAllBoundingBoxes(UserCtx *user, BoundingBox **bboxlist);
115
116/**
117 * @brief Calculates the center and area of the primary INLET face.
118 *
119 * This function identifies the primary INLET face from the boundary face
120 * configurations, computes its geometric center and total area using a
121 * generic utility function, and stores these results in the simulation context.
122 *
123 * @param user Pointer to the UserCtx containing boundary face information.
124 * @return PetscErrorCode
125 */
126PetscErrorCode CalculateInletProperties(UserCtx *user);
127
128/**
129 * @brief Calculates the center and area of the primary OUTLET face.
130 *
131 * This function identifies the primary OUTLET face from the boundary face
132 * configurations, computes its geometric center and total area using a
133 * generic utility function, and stores these results in the simulation context.
134 * @param user Pointer to the UserCtx containing boundary face information.
135 * @return PetscErrorCode
136 */
137PetscErrorCode CalculateOutletProperties(UserCtx *user);
138
139/**
140 * @brief Calculates the geometric center and total area of a specified boundary face.
141 *
142 * This function computes two key properties of a boundary face in the computational domain:
143 * 1. **Geometric Center**: The average (x,y,z) position of all physical nodes on the face
144 * 2. **Total Area**: The sum of face area vector magnitudes from all non-solid cells adjacent to the face
145 *
146 * @section architecture Indexing Architecture
147 *
148 * The solver uses different indexing conventions for different field types:
149 *
150 * **Node-Centered Fields (Coordinates):**
151 * - Direct indexing: Node n stored at coor[n]
152 * - For mx=26: Physical nodes [0-24], Dummy at [25]
153 * - For mz=98: Physical nodes [0-96], Dummy at [97]
154 *
155 * **Face-Centered Fields (Metrics: csi, eta, zet):**
156 * - Direct indexing: Face n stored at csi/eta/zet[n]
157 * - For mx=26: Physical faces [0-24], Dummy at [25]
158 * - For mz=98: Physical faces [0-96], Dummy at [97]
159 * - Face at index k bounds cells k-1 and k
160 *
161 * **Cell-Centered Fields (nvert):**
162 * - Shifted indexing: Physical cell c stored at nvert[c+1]
163 * - For mx=26 (25 cells): Cell 0→nvert[1], Cell 23→nvert[24]
164 * - For mz=98 (96 cells): Cell 0→nvert[1], Cell 95→nvert[96]
165 * - nvert[0] and nvert[mx-1] are ghost values
 *
 * @section face_geometry Face-to-Index Mapping
 *
 * Example for a domain with mx=26, my=26, mz=98:
 *
 * | Face ID       | Node Index | Face Metric   | Adjacent Cell (shifted)   | Physical Extent    |
 * |---------------|------------|---------------|---------------------------|--------------------|
 * | BC_FACE_NEG_X | i=0        | csi[k][j][0]  | nvert[k][j][1] (Cell 0)   | j∈[0,24], k∈[0,96] |
 * | BC_FACE_POS_X | i=24       | csi[k][j][24] | nvert[k][j][24] (Cell 23) | j∈[0,24], k∈[0,96] |
 * | BC_FACE_NEG_Y | j=0        | eta[k][0][i]  | nvert[k][1][i] (Cell 0)   | i∈[0,24], k∈[0,96] |
 * | BC_FACE_POS_Y | j=24       | eta[k][24][i] | nvert[k][24][i] (Cell 23) | i∈[0,24], k∈[0,96] |
 * | BC_FACE_NEG_Z | k=0        | zet[0][j][i]  | nvert[1][j][i] (Cell 0)   | i∈[0,24], j∈[0,24] |
 * | BC_FACE_POS_Z | k=96       | zet[96][j][i] | nvert[96][j][i] (Cell 95) | i∈[0,24], j∈[0,24] |
 *
 * @section algorithm Algorithm
 *
 * The function performs two separate computations with different loop bounds:
 *
 * **1. Center Calculation (uses ALL physical nodes):**
 * - Loop over all physical nodes on the face (excluding dummy indices)
 * - Accumulate coordinate sums: Σx, Σy, Σz
 * - Count the number of nodes
 * - Average: center = (Σx/n, Σy/n, Σz/n)
 *
 * **2. Area Calculation (uses INTERIOR cells only):**
 * - Loop over the interior cell range to avoid accessing ghost values in nvert
 * - For each face adjacent to a fluid cell (nvert < 0.1):
 *   - Compute area magnitude: |csi/eta/zet| = √(x² + y² + z²)
 *   - Accumulate into the total area
 * @section loop_bounds Loop Bound Details
 *
 * **Why different bounds for center vs. area?**
 *
 * For BC_FACE_NEG_X at i=0 with my=26, mz=98:
 *
 * *Center calculation (coordinates):*
 * - j ∈ [ys, j_max): covers j = 0..24 (25 nodes), excludes dummy at j=25
 * - k ∈ [zs, k_max): covers k = 0..96 (97 nodes), excludes dummy at k=97
 * - Total: 25 × 97 = 2,425 nodes
 *
 * *Area calculation (nvert checks):*
 * - j ∈ [lys, lye): covers j = 1..24 (24 values), excludes boundaries
 * - k ∈ [lzs, lze): covers k = 1..96 (96 values), excludes boundaries
 * - Why restricted?
 *   - At j=0: nvert[k][0][1] is ghost (no cell at j=-1)
 *   - At j=25: nvert[k][25][1] is ghost (no cell at j=24, index 25 is dummy)
 *   - At k=0: nvert[0][j][1] is ghost (no cell at k=-1)
 *   - At k=97: nvert[97][j][1] is ghost (no cell at k=96, index 97 is dummy)
 * - Total: 24 × 96 = 2,304 interior cells adjacent to the face
 *
 * @section area_formulas Area Calculation Formulas
 *
 * Face area contributions are computed from metric tensor magnitudes:
 * - **i-faces (±Xi)**:   Area = |csi| = √(csi_x² + csi_y² + csi_z²)
 * - **j-faces (±Eta)**:  Area = |eta| = √(eta_x² + eta_y² + eta_z²)
 * - **k-faces (±Zeta)**: Area = |zet| = √(zet_x² + zet_y² + zet_z²)
 *
 * @param[in]  user        Pointer to UserCtx containing grid info, DMs, and field vectors
 * @param[in]  face_id     Enum identifying which boundary face to analyze (BC_FACE_NEG_X, etc.)
 * @param[out] face_center Pointer to Cmpnts structure to store the computed geometric center (x,y,z)
 * @param[out] face_area   Pointer to PetscReal to store the computed total face area
 *
 * @return PetscErrorCode Returns 0 on success, non-zero PETSc error code on failure
 *
 * @note This function uses MPI_Allreduce, so it must be called collectively by all ranks
 * @note Only ranks that own the specified boundary face contribute to the calculation
 * @note Center calculation includes ALL physical nodes on the face
 * @note Area calculation ONLY includes faces adjacent to fluid cells (nvert < 0.1)
 * @note Dummy/unused indices (e.g., k=97, j=25 for the standard test case) are excluded
 *
 * @warning Assumes grid and field arrays have been properly initialized
 * @warning Incorrect face_id values will result in zero contribution from all ranks
 *
 * @see CanRankServiceFace() for determining rank ownership of boundary faces
 * @see BCFace enum for valid face_id values
 * @see LOG_FIELD_ANATOMY() for debugging field indexing
 *
 * @par Example Usage:
 * @code
 * Cmpnts    inlet_center;
 * PetscReal inlet_area;
 * ierr = CalculateFaceCenterAndArea(user, BC_FACE_NEG_Z, &inlet_center, &inlet_area);
 * PetscPrintf(PETSC_COMM_WORLD, "Inlet center: (%.4f, %.4f, %.4f), Area: %.6f\n",
 *             inlet_center.x, inlet_center.y, inlet_center.z, inlet_area);
 * @endcode
 */
PetscErrorCode CalculateFaceCenterAndArea(UserCtx *user, BCFace face_id,
                                          Cmpnts *face_center, PetscReal *face_area);

#endif // GRID_H