PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
grid.h
// in include/grid.h

#ifndef GRID_H
#define GRID_H

#include "variables.h" // Essential for SimulationContext and UserCtx definitions
#include "logging.h"   // Logging macros and definitions
#include "io.h"        // Data input/output routines
#include "setup.h"     // For SetDMDAProcLayout

/**
 * @file grid.h
 * @brief Public interface for grid, solver, and metric setup routines.
 */

/**
 * @brief Orchestrates the parsing and setting of grid dimensions for all blocks.
 *
 * This function is the high-level entry point for defining the geometric
 * properties of each grid block in the simulation. It iterates through every
 * block defined by `simCtx->block_number`.
 *
 * For each block, it performs two key actions:
 * 1. It explicitly sets the block's index (`_this`) in the corresponding `UserCtx`
 *    struct for the finest multigrid level, making the context "self-aware".
 * 2. It calls a helper function (`ParseAndSetGridInputs`) to parse options or
 *    files and populate the remaining geometric properties for that block
 *    (e.g., `IM`, `Min_X`, `rx`).
 *
 * @param simCtx The master SimCtx, which contains the number of blocks and the
 *               UserCtx hierarchy to be configured.
 * @return PetscErrorCode 0 on success, or a PETSc error code on failure.
 */
PetscErrorCode DefineAllGridDimensions(SimCtx *simCtx);

/**
 * @brief Orchestrates the creation of DMDA objects for every block and multigrid level.
 *
 * This function builds the entire DMDA hierarchy. It first calculates the
 * dimensions (IM, JM, KM) of all coarse grids from the finest grid's
 * dimensions and the semi-coarsening flags. It then iterates from the
 * coarsest to the finest level, calling a helper function
 * (`InitializeSingleGridDM`) to create the DMs for each block, ensuring that
 * finer grids are properly aligned with their coarser parents for multigrid
 * efficiency.
 *
 * @param simCtx The master SimCtx, containing the configured UserCtx hierarchy.
 * @return PetscErrorCode 0 on success, or a PETSc error code on failure.
 */
PetscErrorCode InitializeAllGridDMs(SimCtx *simCtx);

/**
 * @brief Orchestrates the assignment of physical coordinates to all DMDA objects.
 *
 * This function manages the entire process of populating the coordinate vectors
 * for every DMDA across all multigrid levels and blocks. It follows a two-part
 * strategy that is essential for multigrid methods:
 *
 * 1. **Populate Finest Level:** It first loops through each block and calls a
 *    helper (`SetFinestLevelCoordinates`) to set the physical coordinates of
 *    the highest-resolution grid (the finest multigrid level).
 * 2. **Restrict to Coarser Levels:** It then iterates downward from the finest
 *    level, calling a helper (`RestrictCoordinates`) to copy coordinate values
 *    from fine-grid nodes to their corresponding parent nodes on the coarser
 *    grids. This ensures all levels represent exactly the same geometry.
 *
 * @param simCtx The master SimCtx, containing the configured UserCtx hierarchy.
 * @return PetscErrorCode 0 on success, or a PETSc error code on failure.
 */
PetscErrorCode AssignAllGridCoordinates(SimCtx *simCtx);


/**
 * @brief Computes the local bounding box of the grid on the current process.
 *
 * This function calculates the minimum and maximum coordinates of the grid
 * points owned by the current MPI process and stores the result in the
 * provided BoundingBox structure.
 *
 * @param[in]  user      Pointer to the user-defined context containing grid information.
 * @param[out] localBBox Pointer to the BoundingBox structure that receives the result.
 *
 * @return PetscErrorCode Returns 0 on success, non-zero on failure.
 */
PetscErrorCode ComputeLocalBoundingBox(UserCtx *user, BoundingBox *localBBox);

/**
 * @brief Gathers local bounding boxes from all MPI processes to rank 0.
 *
 * This function computes the local bounding box on each process, then collects
 * all local bounding boxes on the root process (rank 0) using MPI. The result
 * is stored in an array of BoundingBox structures on rank 0.
 *
 * @param[in]  user      Pointer to the user-defined context containing grid information.
 * @param[out] allBBoxes Pointer to a pointer where the array of gathered bounding
 *                       boxes is stored on rank 0. The caller on rank 0 must free
 *                       this array.
 *
 * @return PetscErrorCode Returns 0 on success, non-zero on failure.
 */
PetscErrorCode GatherAllBoundingBoxes(UserCtx *user, BoundingBox **allBBoxes);

/**
 * @brief Broadcasts the bounding box information collected on rank 0 to all other ranks.
 *
 * This function assumes that `GatherAllBoundingBoxes()` was called previously, so
 * `bboxlist` is allocated and populated on rank 0. All other ranks allocate memory
 * for `bboxlist`, and this function uses MPI_Bcast to distribute the bounding box
 * data to them.
 *
 * @param[in]     user     Pointer to the UserCtx structure. (Currently unused, but
 *                         kept for interface consistency.)
 * @param[in,out] bboxlist Pointer to the array of BoundingBoxes. On rank 0, this must
 *                         point to a valid array whose length equals the number of
 *                         MPI ranks. On non-root ranks, this function allocates the
 *                         array.
 *
 * @return PetscErrorCode Returns 0 on success, non-zero on MPI or PETSc-related errors.
 */
PetscErrorCode BroadcastAllBoundingBoxes(UserCtx *user, BoundingBox **bboxlist);


#endif // GRID_H