PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
grid.c File Reference
#include "grid.h"
#include "logging.h"


Macros

#define BBOX_TOLERANCE   1e-6
 
#define __FUNCT__   "InitializeSingleGridDM"
 
#define __FUNCT__   "InitializeAllGridDMs"
 
#define __FUNCT__   "AssignAllGridCoordinates"
 
#define __FUNCT__   "SetFinestLevelCoordinates"
 
#define __FUNCT__   "GenerateAndSetCoordinates"
 
#define __FUNCT__   "ReadAndSetCoordinates"
 
#define __FUNCT__   "RestrictCoordinates"
 

Functions

static PetscErrorCode ParseAndSetGridInputs (UserCtx *user)
 Determines the grid source and calls the appropriate parsing routine.
 
PetscErrorCode DefineAllGridDimensions (SimCtx *simCtx)
 Orchestrates the parsing and setting of grid dimensions for all blocks.
 
static PetscErrorCode InitializeSingleGridDM (UserCtx *user, UserCtx *coarse_user)
 Creates the DMDA objects (da and fda) for a single UserCtx.
 
PetscErrorCode InitializeAllGridDMs (SimCtx *simCtx)
 Orchestrates the creation of DMDA objects for every block and multigrid level.
 
static PetscErrorCode SetFinestLevelCoordinates (UserCtx *user)
 A router that populates the coordinates for a single finest-level DMDA.
 
static PetscErrorCode GenerateAndSetCoordinates (UserCtx *user)
 Programmatically generates and sets grid coordinates based on user parameters.
 
static PetscErrorCode ReadAndSetCoordinates (UserCtx *user, FILE *fd)
 Reads physical coordinates from a file and populates the DMDA for a specific block.
 
static PetscErrorCode RestrictCoordinates (UserCtx *coarse_user, UserCtx *fine_user)
 Populates coarse grid coordinates by restricting from a fine grid.
 
PetscErrorCode AssignAllGridCoordinates (SimCtx *simCtx)
 Orchestrates the assignment of physical coordinates to all DMDA objects.
 
static PetscReal ComputeStretchedCoord (PetscInt i, PetscInt N, PetscReal L, PetscReal r)
 Computes a stretched coordinate along one dimension.
 
PetscErrorCode ComputeLocalBoundingBox (UserCtx *user, BoundingBox *localBBox)
 Computes the local bounding box of the grid on the current process.
 
PetscErrorCode GatherAllBoundingBoxes (UserCtx *user, BoundingBox **allBBoxes)
 Gathers local bounding boxes from all MPI processes to rank 0.
 
PetscErrorCode BroadcastAllBoundingBoxes (UserCtx *user, BoundingBox **bboxlist)
 Broadcasts the bounding box list from rank 0 to all ranks.
 

Macro Definition Documentation

◆ BBOX_TOLERANCE

#define BBOX_TOLERANCE   1e-6

Definition at line 6 of file grid.c.

◆ __FUNCT__ [1/7]

#define __FUNCT__   "InitializeSingleGridDM"

Definition at line 90 of file grid.c.

◆ __FUNCT__ [2/7]

#define __FUNCT__   "InitializeAllGridDMs"

Definition at line 90 of file grid.c.

◆ __FUNCT__ [3/7]

#define __FUNCT__   "AssignAllGridCoordinates"

Definition at line 90 of file grid.c.

◆ __FUNCT__ [4/7]

#define __FUNCT__   "SetFinestLevelCoordinates"

Definition at line 90 of file grid.c.

◆ __FUNCT__ [5/7]

#define __FUNCT__   "GenerateAndSetCoordinates"

Definition at line 90 of file grid.c.

◆ __FUNCT__ [6/7]

#define __FUNCT__   "ReadAndSetCoordinates"

Definition at line 90 of file grid.c.

◆ __FUNCT__ [7/7]

#define __FUNCT__   "RestrictCoordinates"

Definition at line 90 of file grid.c.

Function Documentation

◆ ParseAndSetGridInputs()

static PetscErrorCode ParseAndSetGridInputs (UserCtx *user)
static

Determines the grid source and calls the appropriate parsing routine.

This function acts as a router. It checks the global simCtx->generate_grid flag (accessed via the user->simCtx back-pointer) to decide whether to call the parser for a programmatically generated grid or for a grid defined in a file.

Parameters
user: Pointer to the UserCtx for a specific block. The function will populate the geometric fields within this struct.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.

Definition at line 20 of file grid.c.

21{
22 PetscErrorCode ierr;
23 SimCtx *simCtx = user->simCtx; // Get the global context via the back-pointer
24
25 PetscFunctionBeginUser;
26
27 if (simCtx->generate_grid) {
28 LOG_ALLOW(LOCAL, LOG_DEBUG, "Rank %d: Block %d is programmatically generated. Calling generation parser.\n", simCtx->rank, user->_this);
29 ierr = ReadGridGenerationInputs(user); CHKERRQ(ierr);
30 } else {
31 LOG_ALLOW(LOCAL, LOG_DEBUG, "Rank %d: Block %d is file-based. Calling file parser.\n", simCtx->rank, user->_this);
32 ierr = ReadGridFile(user); CHKERRQ(ierr);
33 }
34
35 PetscFunctionReturn(0);
36}
PetscErrorCode ReadGridFile(UserCtx *user)
Sets grid dimensions from a file for a SINGLE block using a one-time read cache.
Definition io.c:145
PetscErrorCode ReadGridGenerationInputs(UserCtx *user)
Parses command-line options for a programmatically generated grid for a SINGLE block.
Definition io.c:47
#define LOCAL
Logging scope definitions for controlling message output.
Definition logging.h:44
#define LOG_ALLOW(scope, level, fmt,...)
Logging macro that checks both the log level and whether the calling function is in the allowed-funct...
Definition logging.h:207
@ LOG_DEBUG
Detailed debugging information.
Definition logging.h:33
PetscMPIInt rank
Definition variables.h:516
SimCtx * simCtx
Back-pointer to the master simulation context.
Definition variables.h:633
PetscInt _this
Definition variables.h:643
PetscBool generate_grid
Definition variables.h:564
The master context for the entire simulation.
Definition variables.h:513

◆ DefineAllGridDimensions()

PetscErrorCode DefineAllGridDimensions (SimCtx *simCtx)

Orchestrates the parsing and setting of grid dimensions for all blocks.

This function serves as the high-level entry point for defining the geometric properties of each grid block in the simulation. It iterates through every block defined by simCtx->block_number.

For each block, it performs two key actions:

  1. It explicitly sets the block's index (_this) in the corresponding UserCtx struct for the finest multigrid level. This makes the context "self-aware".
  2. It calls a helper function (ParseAndSetGridInputs) to handle the detailed work of parsing options or files to populate the rest of the geometric properties for that specific block (e.g., IM, Min_X, rx).
Parameters
simCtx: The master SimCtx, which contains the number of blocks and the UserCtx hierarchy to be configured.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.
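A minimal usage sketch of where this routine sits in the overall grid-setup sequence (a plausible ordering inferred from the descriptions on this page; simCtx and the multigrid hierarchy are assumed to be allocated already):

    ierr = DefineAllGridDimensions(simCtx); CHKERRQ(ierr);   /* parse IM/JM/KM and geometry per block */
    ierr = InitializeAllGridDMs(simCtx); CHKERRQ(ierr);       /* build the DMDA hierarchy (da/fda)     */
    ierr = AssignAllGridCoordinates(simCtx); CHKERRQ(ierr);   /* populate physical coordinates         */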

Definition at line 57 of file grid.c.

58{
59 PetscErrorCode ierr;
60 PetscInt nblk = simCtx->block_number;
61 UserCtx *fine_users;
62
63 PetscFunctionBeginUser;
64
65 if (simCtx->usermg.mglevels == 0) {
66 SETERRQ(PETSC_COMM_WORLD, PETSC_ERR_ARG_WRONGSTATE, "MG levels not set. Cannot get finest_users.");
67 }
68 // Get the UserCtx array for the finest grid level
69 fine_users = simCtx->usermg.mgctx[simCtx->usermg.mglevels - 1].user;
70
71 LOG_ALLOW(GLOBAL, LOG_INFO, "Defining grid dimensions for %d blocks...\n", nblk);
72
73 // Loop over each block to configure its grid dimensions and geometry.
74 for (PetscInt bi = 0; bi < nblk; bi++) {
75 LOG_ALLOW(LOCAL, LOG_DEBUG, "Rank %d: --- Configuring Geometry for Block %d ---\n", simCtx->rank, bi);
76
77 // Before calling any helpers, set the block index in the context.
78 // This makes the UserCtx self-aware of which block it represents.
79 fine_users[bi]._this = bi;
80
81 // Call the helper function for this specific block. It can now derive
82 // all necessary information from the UserCtx pointer it receives.
83 ierr = ParseAndSetGridInputs(&fine_users[bi]); CHKERRQ(ierr);
84 }
85
86 PetscFunctionReturn(0);
87}
static PetscErrorCode ParseAndSetGridInputs(UserCtx *user)
Determines the grid source and calls the appropriate parsing routine.
Definition grid.c:20
#define GLOBAL
Scope for global logging across all processes.
Definition logging.h:45
@ LOG_INFO
Informational messages about program execution.
Definition logging.h:32
UserCtx * user
Definition variables.h:418
PetscInt block_number
Definition variables.h:562
UserMG usermg
Definition variables.h:599
PetscInt mglevels
Definition variables.h:425
MGCtx * mgctx
Definition variables.h:428
User-defined context containing data specific to a single computational grid level.
Definition variables.h:630

◆ InitializeSingleGridDM()

static PetscErrorCode InitializeSingleGridDM (UserCtx *user, UserCtx *coarse_user)
static

Creates the DMDA objects (da and fda) for a single UserCtx.

This function is a direct adaptation of the core logic in MGDACreate. It creates the scalar (da) and vector (fda) DMs for a single grid level.

If a coarse_user context is provided, it performs the critical processor alignment calculation from the legacy code. This ensures the new (fine) DM aligns with the coarse DM for multigrid efficiency. If coarse_user is NULL, it creates the DM with a default PETSc decomposition, intended for the coarsest grid level.

Parameters
user: The UserCtx for which the DMs will be created. Its IM, JM, KM fields must be pre-populated.
coarse_user: The UserCtx of the next-coarser grid level, or NULL if user is the coarsest level.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.
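To make the alignment rule concrete, here is a small self-contained sketch with hypothetical numbers (a coarse grid of mx = 5 nodes split across m = 2 ranks and a fine grid of IM + 1 = 8 nodes); it mirrors only the X-direction branch of the listing below:

    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        int IM = 7;                         /* fine grid has IM + 1 = 8 nodes               */
        int mx = 5;                         /* coarse grid has 5 nodes                      */
        int xe0 = 3;                        /* rank 0 owns coarse nodes [0, 3), so xs == 0  */
        int xs1 = 3;                        /* rank 1 owns coarse nodes [3, 5), so xe == mx */

        int lx0 = 2 * xe0 - 1;              /* contribution of the rank at the low end      */
        int lx1 = (IM + 1) - (2 * xs1 - 1); /* contribution of the rank at the high end     */

        assert(lx0 + lx1 == IM + 1);        /* the 8 fine nodes are covered exactly once    */
        printf("fine-grid lx = {%d, %d}\n", lx0, lx1);   /* prints: fine-grid lx = {5, 3}   */
        return 0;
    }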

Definition at line 107 of file grid.c.

108{
109 PetscErrorCode ierr;
110 SimCtx *simCtx = user->simCtx;
111
112 DMBoundaryType xperiod = (simCtx->i_periodic) ? DM_BOUNDARY_PERIODIC : DM_BOUNDARY_NONE;
113 DMBoundaryType yperiod = (simCtx->j_periodic) ? DM_BOUNDARY_PERIODIC : DM_BOUNDARY_NONE;
114 DMBoundaryType zperiod = (simCtx->k_periodic) ? DM_BOUNDARY_PERIODIC : DM_BOUNDARY_NONE;
115 PetscInt stencil_width = 2; // Stencil width is 2 in the legacy code
116
117 PetscInt *lx = NULL, *ly = NULL, *lz = NULL;
118 PetscInt m, n, p;
119
120 PetscFunctionBeginUser;
121
122 if (coarse_user) {
123 // --- This is a FINE grid; it must be aligned with the COARSE grid ---
124 LOG_ALLOW_SYNC(LOCAL, LOG_DEBUG, "Rank %d: [Aligning DM] for block %d level %d (size %dx%dx%d) with level %d\n", simCtx->rank, user->_this, user->thislevel, user->IM, user->JM, user->KM, coarse_user->thislevel);
125
126 DMDAGetInfo(coarse_user->da, NULL, NULL, NULL, NULL, &m, &n, &p, NULL, NULL, NULL, NULL, NULL, NULL);
127 LOG_ALLOW_SYNC(LOCAL, LOG_DEBUG, "Rank %d: Coarse grid processor decomposition is %d x %d x %d\n", simCtx->rank, m, n, p);
128
129 // This is the core logic from MGDACreate to ensure processor alignment.
130 PetscInt *lx_contrib, *ly_contrib, *lz_contrib;
131 ierr = PetscMalloc3(m, &lx_contrib, n, &ly_contrib, p, &lz_contrib); CHKERRQ(ierr);
132 ierr = PetscMemzero(lx_contrib, m * sizeof(PetscInt)); CHKERRQ(ierr);
133 ierr = PetscMemzero(ly_contrib, n * sizeof(PetscInt)); CHKERRQ(ierr);
134 ierr = PetscMemzero(lz_contrib, p * sizeof(PetscInt)); CHKERRQ(ierr);
135
136 DMDALocalInfo info;
137 DMDAGetLocalInfo(coarse_user->da, &info);
138 PetscInt xs = info.xs, xe = info.xs + info.xm, mx = info.mx;
139 PetscInt ys = info.ys, ye = info.ys + info.ym, my = info.my;
140 PetscInt zs = info.zs, ze = info.zs + info.zm, mz = info.mz;
141
142 PetscMPIInt rank;
143 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
144 PetscInt proc_i = rank % m;
145 PetscInt proc_j = (rank / m) % n;
146 PetscInt proc_k = rank / (m * n);
147
148 // --- X-Direction Logic (Identical to MGDACreate) ---
149 if (user->isc) lx_contrib[proc_i] = (xe - xs);
150 else {
151 if (m == 1) lx_contrib[0] = user->IM + 1;
152 else if (xs == 0) lx_contrib[0] = 2 * xe - 1;
153 else if (xe == mx) lx_contrib[proc_i] = user->IM + 1 - (2 * xs - 1);
154 else lx_contrib[proc_i] = (xe - xs) * 2;
155 }
156
157 // --- Y-Direction Logic (Identical to MGDACreate) ---
158 if (user->jsc) ly_contrib[proc_j] = (ye - ys);
159 else {
160 if (n == 1) ly_contrib[0] = user->JM + 1;
161 else if (ys == 0) ly_contrib[0] = 2 * ye - 1;
162 else if (ye == my) ly_contrib[proc_j] = user->JM + 1 - (2 * ys - 1);
163 else ly_contrib[proc_j] = (ye - ys) * 2;
164 }
165
166 // --- Z-Direction Logic (Identical to MGDACreate) ---
167 if (user->ksc) lz_contrib[proc_k] = (ze - zs);
168 else {
169 if (p == 1) lz_contrib[0] = user->KM + 1;
170 else if (zs == 0) lz_contrib[0] = 2 * ze - 1;
171 else if (ze == mz) lz_contrib[proc_k] = user->KM + 1 - (2 * zs - 1);
172 else lz_contrib[proc_k] = (ze - zs) * 2;
173 }
174 LOG_ALLOW_SYNC(LOCAL, LOG_DEBUG, "Rank %d: Calculated this rank's node contribution to fine grid: lx=%d, ly=%d, lz=%d\n", simCtx->rank, lx_contrib[proc_i], ly_contrib[proc_j], lz_contrib[proc_k]);
175
176 // Allocate the final distribution arrays and Allreduce to get the global distribution
177 ierr = PetscMalloc3(m, &lx, n, &ly, p, &lz); CHKERRQ(ierr);
178 ierr = MPI_Allreduce(lx_contrib, lx, m, MPIU_INT, MPI_MAX, PETSC_COMM_WORLD); CHKERRQ(ierr);
179 ierr = MPI_Allreduce(ly_contrib, ly, n, MPIU_INT, MPI_MAX, PETSC_COMM_WORLD); CHKERRQ(ierr);
180 ierr = MPI_Allreduce(lz_contrib, lz, p, MPIU_INT, MPI_MAX, PETSC_COMM_WORLD); CHKERRQ(ierr);
181
182 ierr = PetscFree3(lx_contrib, ly_contrib, lz_contrib); CHKERRQ(ierr);
183
184 } else {
185 // --- CASE 2: This is the COARSEST grid; use default or user-specified decomposition ---
186 LOG_ALLOW_SYNC(LOCAL, LOG_DEBUG, "Rank %d: Creating coarsest DM for block %d level %d (size %dx%dx%d)\n", simCtx->rank, user->_this, user->thislevel, user->IM, user->JM, user->KM);
187 m = simCtx->da_procs_x;
188 n = simCtx->da_procs_y;
189 p = simCtx->da_procs_z;
190 // lx, ly, lz are NULL, so DMDACreate3d will use the m,n,p values.
191 }
192
193 // --- Create the DMDA for the current UserCtx ---
194 LOG_ALLOW_SYNC(LOCAL, LOG_DEBUG, "Rank %d: Calling DMDACreate3d...\n", simCtx->rank);
195 ierr = DMDACreate3d(PETSC_COMM_WORLD, xperiod, yperiod, zperiod, DMDA_STENCIL_BOX,
196 user->IM + 1, user->JM + 1, user->KM + 1,
197 m, n, p,
198 1, stencil_width, lx, ly, lz, &user->da); CHKERRQ(ierr);
199
200 if (coarse_user) {
201 ierr = PetscFree3(lx, ly, lz); CHKERRQ(ierr);
202 }
203
204 // --- Standard DM setup applicable to all levels ---
205 ierr = DMSetUp(user->da); CHKERRQ(ierr);
206 ierr = DMGetCoordinateDM(user->da, &user->fda); CHKERRQ(ierr);
207 ierr = DMDASetUniformCoordinates(user->da, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0); CHKERRQ(ierr);
208 ierr = DMDAGetLocalInfo(user->da, &user->info); CHKERRQ(ierr);
209 LOG_ALLOW_SYNC(LOCAL, LOG_DEBUG, "Rank %d: DM creation for block %d level %d complete.\n", simCtx->rank, user->_this, user->thislevel);
210
211 PetscFunctionReturn(0);
212}
#define LOG_ALLOW_SYNC(scope, level, fmt,...)
Synchronized variant of the LOG_ALLOW logging macro, with scope and level control.
Definition logging.h:274
PetscInt isc
Definition variables.h:643
PetscInt da_procs_z
Definition variables.h:568
PetscInt ksc
Definition variables.h:643
PetscInt KM
Definition variables.h:639
PetscInt da_procs_y
Definition variables.h:568
PetscInt k_periodic
Definition variables.h:563
PetscInt jsc
Definition variables.h:643
PetscInt thislevel
Definition variables.h:689
PetscInt JM
Definition variables.h:639
PetscInt da_procs_x
Definition variables.h:568
PetscInt i_periodic
Definition variables.h:563
DMDALocalInfo info
Definition variables.h:637
PetscInt IM
Definition variables.h:639
PetscInt j_periodic
Definition variables.h:563

◆ InitializeAllGridDMs()

PetscErrorCode InitializeAllGridDMs (SimCtx *simCtx)

Orchestrates the creation of DMDA objects for every block and multigrid level.

This function systematically builds the entire DMDA hierarchy. It first calculates the dimensions (IM, JM, KM) of all coarse grids from the finest grid's dimensions and the semi-coarsening flags. It then iterates from the coarsest to the finest level, calling a helper function (InitializeSingleGridDM) to create the DMs for each block, ensuring that finer grids are properly aligned with their coarser parents for multigrid efficiency.

Parameters
simCtx: The master SimCtx, containing the configured UserCtx hierarchy.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.
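As a quick illustration of the validation below (hypothetical dimensions, no semi-coarsening), a finest-level IM of 63 coarsens cleanly while 64 triggers the warning:

    int isc = 0;                                                  /* no semi-coarsening in i    */
    int IM_fine   = 63;
    int IM_coarse = (IM_fine + 1) / 2;                            /* = 32                       */
    int check_i   = IM_coarse * (2 - isc) - (IM_fine + 1 - isc);  /* = 64 - 64 = 0, coarsens OK */
    /* With IM_fine = 64: IM_coarse = 32 and check_i = 64 - 65 = -1, so the warning is logged. */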

Definition at line 230 of file grid.c.

231{
232 PetscErrorCode ierr;
233 UserMG *usermg = &simCtx->usermg;
234 MGCtx *mgctx = usermg->mgctx;
235 PetscInt nblk = simCtx->block_number;
236
237 PetscFunctionBeginUser;
238 LOG_ALLOW(GLOBAL, LOG_INFO, "Creating DMDA objects for all levels and blocks...\n");
239
240 // --- Part 1: Calculate Coarse Grid Dimensions & VALIDATE ---
241 LOG_ALLOW(GLOBAL, LOG_DEBUG, "Calculating and validating coarse grid dimensions...\n");
242 for (PetscInt level = usermg->mglevels - 2; level >= 0; level--) {
243 for (PetscInt bi = 0; bi < nblk; bi++) {
244 UserCtx *user_coarse = &mgctx[level].user[bi];
245 UserCtx *user_fine = &mgctx[level + 1].user[bi];
246
247 user_coarse->IM = user_fine->isc ? user_fine->IM : (user_fine->IM + 1) / 2;
248 user_coarse->JM = user_fine->jsc ? user_fine->JM : (user_fine->JM + 1) / 2;
249 user_coarse->KM = user_fine->ksc ? user_fine->KM : (user_fine->KM + 1) / 2;
250
251 LOG_ALLOW_SYNC(LOCAL, LOG_DEBUG, "Rank %d: Block %d, Level %d dims calculated: %d x %d x %d\n",
252 simCtx->rank, bi, level, user_coarse->IM, user_coarse->JM, user_coarse->KM);
253
254 // Validation check from legacy MGDACreate to ensure coarsening is possible
255 PetscInt check_i = user_coarse->IM * (2 - user_coarse->isc) - (user_fine->IM + 1 - user_coarse->isc);
256 PetscInt check_j = user_coarse->JM * (2 - user_coarse->jsc) - (user_fine->JM + 1 - user_coarse->jsc);
257 PetscInt check_k = user_coarse->KM * (2 - user_coarse->ksc) - (user_fine->KM + 1 - user_coarse->ksc);
258
259 if (check_i + check_j + check_k != 0) {
260 // SETERRQ(PETSC_COMM_WORLD, PETSC_ERR_ARG_WRONG,
261 // "Grid at level %d, block %d cannot be coarsened from %dx%dx%d to %dx%dx%d with the given semi-coarsening flags. Check grid dimensions.",
262 // level, bi, user_fine->IM, user_fine->JM, user_fine->KM, user_coarse->IM, user_coarse->JM, user_coarse->KM);
263 LOG(GLOBAL,LOG_WARNING,"WARNING: Grid at level %d, block %d can't be consistently coarsened further.\n", level, bi);
264 }
265 }
266 }
267
268 // --- Part 2: Create DMs from Coarse to Fine for each Block ---
269 for (PetscInt bi = 0; bi < nblk; bi++) {
270 LOG_ALLOW_SYNC(GLOBAL, LOG_DEBUG, "--- Creating DMs for Block %d ---\n", bi);
271
272 // Create the coarsest level DM first (passing NULL for the coarse_user)
273 ierr = InitializeSingleGridDM(&mgctx[0].user[bi], NULL); CHKERRQ(ierr);
274
275 // Create finer level DMs, passing the next-coarser context for alignment
276 for (PetscInt level = 1; level < usermg->mglevels; level++) {
277 ierr = InitializeSingleGridDM(&mgctx[level].user[bi], &mgctx[level-1].user[bi]); CHKERRQ(ierr);
278 }
279 }
280
281 // --- Optional: View the finest DM for debugging verification ---
282 if (get_log_level() >= LOG_DEBUG) {
283 LOG_ALLOW_SYNC(GLOBAL, LOG_INFO, "--- Viewing Finest DMDA (Level %d, Block 0) ---\n", usermg->mglevels - 1);
284 ierr = DMView(mgctx[usermg->mglevels - 1].user[0].da, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);
285 }
286
287 LOG_ALLOW(GLOBAL, LOG_INFO, "DMDA object creation complete.\n");
288 PetscFunctionReturn(0);
289}
static PetscErrorCode InitializeSingleGridDM(UserCtx *user, UserCtx *coarse_user)
Creates the DMDA objects (da and fda) for a single UserCtx.
Definition grid.c:107
#define LOG(scope, level, fmt,...)
Logging macro for PETSc-based applications with scope control.
Definition logging.h:91
LogLevel get_log_level()
Retrieves the current logging level from the environment variable LOG_LEVEL.
Definition logging.c:49
@ LOG_WARNING
Non-critical issues that warrant attention.
Definition logging.h:30
Context for Multigrid operations.
Definition variables.h:417
User-level context for managing the entire multigrid hierarchy.
Definition variables.h:424

◆ SetFinestLevelCoordinates()

static PetscErrorCode SetFinestLevelCoordinates (UserCtx *user)
static

A router that populates the coordinates for a single finest-level DMDA.

This function orchestrates the coordinate setting for one block. It checks the global generate_grid flag and calls the appropriate helper for either programmatic generation or reading from a file.

After the local coordinate vector is populated by a helper, this function performs the necessary DMLocalToGlobal and DMGlobalToLocal scatters to ensure that the ghost node coordinate values are correctly communicated and updated across all MPI ranks.

Parameters
user: The UserCtx for a specific block on the finest level.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.

Definition at line 367 of file grid.c.

368{
369 PetscErrorCode ierr;
370 SimCtx *simCtx = user->simCtx;
371
372 PetscFunctionBeginUser;
373
374 LOG_ALLOW(LOCAL, LOG_DEBUG, "Rank %d: Setting finest level coordinates for block %d...\n", simCtx->rank, user->_this);
375
376 if (simCtx->generate_grid) {
377 ierr = GenerateAndSetCoordinates(user); CHKERRQ(ierr);
378 } else {
379
380 FILE *grid_file_handle = NULL;
381 // Only Rank 0 opens the file.
382 if (simCtx->rank == 0) {
383 grid_file_handle = fopen(simCtx->grid_file, "r");
384 if (!grid_file_handle) SETERRQ(PETSC_COMM_SELF, PETSC_ERR_FILE_OPEN, "Cannot open file: %s", simCtx->grid_file);
385
386 // Now, on Rank 0, we skip the entire header section once.
387 // This is the logic from your modern code's AssignGridCoordinates.
388 PetscInt headerLines = simCtx->block_number + 2; // 1 for nblk, plus one for each block's dims
389 char dummy_buffer[2048];
390 for (PetscInt s = 0; s < headerLines; ++s) {
391 if (!fgets(dummy_buffer, sizeof(dummy_buffer), grid_file_handle)) {
392 SETERRQ(PETSC_COMM_SELF, PETSC_ERR_FILE_READ, "Unexpected EOF while skipping grid header");
393 }
394 }
395 LOG_ALLOW(LOCAL, LOG_DEBUG, "Rank 0: Skipped %d header lines, now at coordinate data.\n", headerLines);
396 }
397
398 // We now call the coordinate reader, passing the file handle.
399 // It's responsible for reading its block's data and broadcasting.
400 ierr = ReadAndSetCoordinates(user, grid_file_handle); CHKERRQ(ierr);
401
402 // Only Rank 0, which opened the file, should close it.
403 if (simCtx->rank == 0) {
404 fclose(grid_file_handle);
405 }
406 }
407
408 // After populating the local coordinate vector, we must perform a
409 // Local-to-Global and then Global-to-Local scatter to correctly
410 // populate the ghost node coordinates across process boundaries.
411 Vec gCoor, lCoor;
412 ierr = DMGetCoordinates(user->da, &gCoor); CHKERRQ(ierr);
413 ierr = DMGetCoordinatesLocal(user->da, &lCoor); CHKERRQ(ierr);
414
415 LOG_ALLOW(LOCAL, LOG_DEBUG, "Rank %d: Scattering coordinates to update ghost nodes for block %d...\n", simCtx->rank, user->_this);
416 ierr = DMLocalToGlobalBegin(user->fda, lCoor, INSERT_VALUES, gCoor); CHKERRQ(ierr);
417 ierr = DMLocalToGlobalEnd(user->fda, lCoor, INSERT_VALUES, gCoor); CHKERRQ(ierr);
418
419 ierr = DMGlobalToLocalBegin(user->fda, gCoor, INSERT_VALUES, lCoor); CHKERRQ(ierr);
420 ierr = DMGlobalToLocalEnd(user->fda, gCoor, INSERT_VALUES, lCoor); CHKERRQ(ierr);
421
422 PetscFunctionReturn(0);
423}
static PetscErrorCode GenerateAndSetCoordinates(UserCtx *user)
Programmatically generates and sets grid coordinates based on user parameters.
Definition grid.c:476
static PetscErrorCode ReadAndSetCoordinates(UserCtx *user, FILE *fd)
Reads physical coordinates from a file and populates the DMDA for a specific block.
Definition grid.c:535
char grid_file[PETSC_MAX_PATH_LEN]
Definition variables.h:567

◆ GenerateAndSetCoordinates()

static PetscErrorCode GenerateAndSetCoordinates (UserCtx *user)
static

Programmatically generates and sets grid coordinates based on user parameters.

This function populates the local coordinate vector of the provided UserCtx using the geometric properties (Min_X, Max_X, rx, etc.) that were parsed from command-line options. It supports non-linear grid stretching.

Parameters
user: The UserCtx for a specific block.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.

Definition at line 476 of file grid.c.

477{
478 PetscErrorCode ierr;
479 DMDALocalInfo info;
480 Cmpnts ***coor;
481 Vec lCoor;
482
483 PetscFunctionBeginUser;
484 LOG_ALLOW_SYNC(LOCAL, LOG_DEBUG, "Rank %d: Generating coordinates for block %d...\n", user->simCtx->rank, user->_this);
485
486 ierr = DMDAGetLocalInfo(user->da, &info); CHKERRQ(ierr);
487 ierr = DMGetCoordinatesLocal(user->da, &lCoor); CHKERRQ(ierr);
488 ierr = VecSet(lCoor, 0.0); CHKERRQ(ierr);
489 ierr = DMDAVecGetArray(user->fda, lCoor, &coor); CHKERRQ(ierr);
490
491 PetscReal Lx = user->Max_X - user->Min_X;
492 PetscReal Ly = user->Max_Y - user->Min_Y;
493 PetscReal Lz = user->Max_Z - user->Min_Z;
494
495 // Loop over the local nodes, including ghost nodes, owned by this process.
496 for (PetscInt k = info.zs; k < info.zs + info.zm; k++) {
497 for (PetscInt j = info.ys; j < info.ys + info.ym; j++) {
498 for (PetscInt i = info.xs; i < info.xs + info.xm; i++) {
499 if(k<user->KM && j<user->JM && i < user->IM){
500 coor[k][j][i].x = user->Min_X + ComputeStretchedCoord(i, user->IM, Lx, user->rx);
501 coor[k][j][i].y = user->Min_Y + ComputeStretchedCoord(j, user->JM, Ly, user->ry);
502 coor[k][j][i].z = user->Min_Z + ComputeStretchedCoord(k, user->KM, Lz, user->rz);
503 }
504 }
505 }
506 }
507
508 ierr = DMDAVecRestoreArray(user->fda, lCoor, &coor); CHKERRQ(ierr);
509 PetscFunctionReturn(0);
510}
static PetscReal ComputeStretchedCoord(PetscInt i, PetscInt N, PetscReal L, PetscReal r)
Computes a stretched coordinate along one dimension.
Definition grid.c:453
PetscReal Min_X
Definition variables.h:640
PetscReal ry
Definition variables.h:644
PetscReal Max_Y
Definition variables.h:640
PetscScalar x
Definition variables.h:100
PetscReal rz
Definition variables.h:644
PetscScalar z
Definition variables.h:100
PetscReal Min_Z
Definition variables.h:640
PetscReal Max_X
Definition variables.h:640
PetscReal Min_Y
Definition variables.h:640
PetscScalar y
Definition variables.h:100
PetscReal rx
Definition variables.h:644
PetscReal Max_Z
Definition variables.h:640
A 3D point or vector with PetscScalar components.
Definition variables.h:99

◆ ReadAndSetCoordinates()

static PetscErrorCode ReadAndSetCoordinates (UserCtx *user, FILE *fd)
static

Reads physical coordinates from a file and populates the DMDA for a specific block.

This function handles the collective read of an interleaved (X Y Z per line) multi-block grid file. It assumes the file header (nblk and the per-block dimensions) has already been parsed by ReadGridFile, and that the caller has opened the file on rank 0 and positioned the supplied handle past the header.

The process is robust for parallel execution:

  1. Rank 0 receives an open file handle from the caller (SetFinestLevelCoordinates), already positioned at the coordinate data.
  2. Rank 0 reads the entire coordinate data for the current block (user->_this) into a single contiguous buffer gc.
  3. This global buffer gc is broadcast to all other MPI ranks.
  4. Each rank then loops through its local (owned + ghost) node indices, calculates the corresponding index in the global gc array, and copies the (x,y,z) coordinates into its local PETSc coordinate vector.
Parameters
user: The UserCtx for a specific block. Its _this field must be set, and its IM, JM, KM fields must be correctly pre-populated.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.
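For reference, a minimal sketch (illustrative values, not taken from the repository) of the coordinate section this reader expects for a hypothetical block with IM = 2, JM = 2, KM = 1 nodes; i varies fastest, then j, then k, with one X Y Z triple per line:

    0.0 0.0 0.0
    1.0 0.0 0.0
    0.0 1.0 0.0
    1.0 1.0 0.0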

Definition at line 535 of file grid.c.

536{
537 PetscErrorCode ierr;
538 SimCtx *simCtx = user->simCtx;
539 PetscMPIInt rank = simCtx->rank;
540 PetscInt block_index = user->_this;
541 PetscInt IM = user->IM, JM = user->JM, KM = user->KM;
542 DMDALocalInfo info;
543 Cmpnts ***coor;
544 Vec lCoor;
545 PetscReal *gc = NULL; // Global coordinate buffer, allocated on all ranks
546
547 PetscFunctionBeginUser;
548 LOG_ALLOW(LOCAL, LOG_DEBUG, "Rank %d: Reading interleaved coordinates from file for block %d...\n",
549 simCtx->rank, block_index);
550
551 // 1. Allocate the buffer on ALL ranks to receive the broadcast data.
552 // PetscInt n_nodes = (IM + 1) * (JM + 1) * (KM + 1);
553 PetscInt n_nodes = (IM) * (JM) * (KM);
554 ierr = PetscMalloc1(3 * n_nodes, &gc); CHKERRQ(ierr);
555
556 // 2. Only Rank 0 opens the file and reads the data.
557 if (rank == 0) {
558 if (!fd) SETERRQ(PETSC_COMM_SELF, PETSC_ERR_FILE_OPEN, "Received a NULL file handle.\n");
559
560 // Read the coordinate data for the CURRENT block.
561 for (PetscInt k = 0; k < KM; k++) {
562 for (PetscInt j = 0; j < JM; j++) {
563 for (PetscInt i = 0; i < IM; i++) {
564 PetscInt base_index = 3 * ((k * (JM) + j) * (IM) + i);
565 if (fscanf(fd, "%le %le %le\n", &gc[base_index], &gc[base_index + 1], &gc[base_index + 2]) != 3) {
566 SETERRQ(PETSC_COMM_SELF, PETSC_ERR_FILE_READ, "Error reading coordinates for node (i,j,k)=(%d,%d,%d) in block %d", i, j, k, block_index);
567 }
568 }
569 }
570 }
571
572 }
573
574 // 3. Broadcast the coordinate block for the current block to all other processes.
575 ierr = MPI_Bcast(gc, 3 * n_nodes, MPIU_REAL, 0, PETSC_COMM_WORLD); CHKERRQ(ierr);
576
577 // 4. Each rank populates its local portion of the coordinate vector.
578 ierr = DMDAGetLocalInfo(user->da, &info); CHKERRQ(ierr);
579 ierr = DMGetCoordinatesLocal(user->da, &lCoor); CHKERRQ(ierr);
580 ierr = DMDAVecGetArray(user->fda, lCoor, &coor); CHKERRQ(ierr);
581
582 for (PetscInt k = info.zs; k < info.zs + info.zm; k++) {
583 for (PetscInt j = info.ys; j < info.ys + info.ym; j++) {
584 for (PetscInt i = info.xs; i < info.xs + info.xm; i++) {
585 if(k< KM && j < JM && i < IM){
586 //PetscInt base_idx = 3 * ((k * (JM + 1) + j) * (IM + 1) + i);
587 PetscInt base_idx = 3 * ((k * (JM) + j) * (IM) + i);
588 coor[k][j][i].x = gc[base_idx];
589 coor[k][j][i].y = gc[base_idx + 1];
590 coor[k][j][i].z = gc[base_idx + 2];
591 }
592 }
593 }
594 }
595
596 // 5. Clean up and restore.
597 ierr = DMDAVecRestoreArray(user->fda, lCoor, &coor); CHKERRQ(ierr);
598 ierr = PetscFree(gc); CHKERRQ(ierr);
599 PetscFunctionReturn(0);
600}

◆ RestrictCoordinates()

static PetscErrorCode RestrictCoordinates (UserCtx *coarse_user, UserCtx *fine_user)
static

Populates coarse grid coordinates by restricting from a fine grid.

This function is a direct adaptation of the coordinate restriction loop from the legacy MG_Initial function. It ensures that the physical location of a coarse grid node is identical to its corresponding parent node on the fine grid. The mapping from coarse to fine index (i -> ih) is determined by the semi-coarsening flags (isc, jsc, ksc) stored in the UserCtx.

Parameters
coarse_user: The UserCtx for the coarse grid (destination).
fine_user: The UserCtx for the fine grid (source).
Returns
PetscErrorCode
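For example, with full coarsening in all directions (isc = jsc = ksc = 0), coarse node (i, j, k) = (3, 2, 1) copies its coordinates from fine node (ih, jh, kh) = (6, 4, 2); with semi-coarsening in k (ksc = 1) the k index is passed through unchanged, so kh = 1.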

Definition at line 617 of file grid.c.

618{
619 PetscErrorCode ierr;
620 Vec c_lCoor, f_lCoor; // Coarse and Fine local coordinate vectors
621 Cmpnts ***c_coor;
622 const Cmpnts ***f_coor; // Use const for read-only access
623 DMDALocalInfo c_info;
624 PetscInt ih, jh, kh; // Fine-grid indices corresponding to coarse-grid i,j,k
625
626 PetscFunctionBeginUser;
627 LOG_ALLOW_SYNC(LOCAL, LOG_DEBUG, "Rank %d: Restricting coords from level %d to level %d for block %d\n",
628 fine_user->simCtx->rank, fine_user->thislevel, coarse_user->thislevel, coarse_user->_this);
629
630 ierr = DMDAGetLocalInfo(coarse_user->da, &c_info); CHKERRQ(ierr);
631
632 ierr = DMGetCoordinatesLocal(coarse_user->da, &c_lCoor); CHKERRQ(ierr);
633 ierr = DMGetCoordinatesLocal(fine_user->da, &f_lCoor); CHKERRQ(ierr);
634
635 ierr = DMDAVecGetArray(coarse_user->fda, c_lCoor, &c_coor); CHKERRQ(ierr);
636 ierr = DMDAVecGetArrayRead(fine_user->fda, f_lCoor, &f_coor); CHKERRQ(ierr);
637
638 // Get the local owned range of the coarse grid.
639 PetscInt xs = c_info.xs, xe = c_info.xs + c_info.xm;
640 PetscInt ys = c_info.ys, ye = c_info.ys + c_info.ym;
641 PetscInt zs = c_info.zs, ze = c_info.zs + c_info.zm;
642
643 // Get the global dimensions of the coarse grid.
644 PetscInt mx = c_info.mx, my = c_info.my, mz = c_info.mz;
645
646 // If this process owns the maximum boundary node, contract the loop by one
647 // to prevent the index doubling `2*i` from going out of bounds.
648 if (xe == mx) xe--;
649 if (ye == my) ye--;
650 if (ze == mz) ze--;
651
652 for (PetscInt k = zs; k < ze; k++) {
653 for (PetscInt j = ys; j < ye; j++) {
654 for (PetscInt i = xs; i < xe; i++) {
655 // Determine the corresponding parent node index on the FINE grid,
656 // respecting the semi-coarsening flags of the FINE grid's UserCtx.
657 ih = coarse_user->isc ? i : 2 * i;
658 jh = coarse_user->jsc ? j : 2 * j;
659 kh = coarse_user->ksc ? k : 2 * k;
660
661 // LOG_ALLOW(GLOBAL,LOG_DEBUG," [kh][ih][jh] = %d,%d,%d - k,j,i = %d,%d,%d.\n",kh,jh,ih,k,j,i);
662
663 c_coor[k][j][i] = f_coor[kh][jh][ih];
664 }
665 }
666 }
667
668 ierr = DMDAVecRestoreArray(coarse_user->fda, c_lCoor, &c_coor); CHKERRQ(ierr);
669 ierr = DMDAVecRestoreArrayRead(fine_user->fda, f_lCoor, &f_coor); CHKERRQ(ierr);
670
671 // After populating the local portion, scatter to update ghost regions.
672 Vec c_gCoor;
673 ierr = DMGetCoordinates(coarse_user->da, &c_gCoor); CHKERRQ(ierr);
674 ierr = DMLocalToGlobalBegin(coarse_user->fda, c_lCoor, INSERT_VALUES, c_gCoor); CHKERRQ(ierr);
675 ierr = DMLocalToGlobalEnd(coarse_user->fda, c_lCoor, INSERT_VALUES, c_gCoor); CHKERRQ(ierr);
676 ierr = DMGlobalToLocalBegin(coarse_user->fda, c_gCoor, INSERT_VALUES, c_lCoor); CHKERRQ(ierr);
677 ierr = DMGlobalToLocalEnd(coarse_user->fda, c_gCoor, INSERT_VALUES, c_lCoor); CHKERRQ(ierr);
678
679 PetscFunctionReturn(0);
680}

◆ AssignAllGridCoordinates()

PetscErrorCode AssignAllGridCoordinates (SimCtx *simCtx)

Orchestrates the assignment of physical coordinates to all DMDA objects.

This function manages the entire process of populating the coordinate vectors for every DMDA across all multigrid levels and blocks. It follows a two-part strategy that is essential for multigrid methods:

  1. Populate Finest Level: It first loops through each block and calls a helper (SetFinestLevelCoordinates) to set the physical coordinates for the highest-resolution grid (the finest multigrid level).
  2. Restrict to Coarser Levels: It then iterates downwards from the finest level, calling a helper (RestrictCoordinates) to copy the coordinate values from the fine grid nodes to their corresponding parent nodes on the coarser grids. This ensures all levels represent the exact same geometry.
Parameters
simCtx: The master SimCtx, containing the configured UserCtx hierarchy.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.

Definition at line 317 of file grid.c.

318{
319 PetscErrorCode ierr;
320 UserMG *usermg = &simCtx->usermg;
321 PetscInt nblk = simCtx->block_number;
322
323 PetscFunctionBeginUser;
324 LOG_ALLOW(GLOBAL, LOG_INFO, "Assigning physical coordinates to all grid DMs...\n");
325
326 // --- Part 1: Populate the Finest Grid Level ---
327 LOG_ALLOW(GLOBAL, LOG_DEBUG, "Setting coordinates for the finest grid level (%d)...\n", usermg->mglevels - 1);
328 for (PetscInt bi = 0; bi < nblk; bi++) {
329 UserCtx *fine_user = &usermg->mgctx[usermg->mglevels - 1].user[bi];
330 ierr = SetFinestLevelCoordinates(fine_user); CHKERRQ(ierr);
331 }
332 LOG_ALLOW(GLOBAL, LOG_DEBUG, "Finest level coordinates have been set for all blocks.\n");
333
334
335 // --- Part 2: Restrict Coordinates to Coarser Levels ---
336 LOG_ALLOW(GLOBAL, LOG_DEBUG, "Restricting coordinates to coarser grid levels...\n");
337 for (PetscInt level = usermg->mglevels - 2; level >= 0; level--) {
338 for (PetscInt bi = 0; bi < nblk; bi++) {
339 UserCtx *coarse_user = &usermg->mgctx[level].user[bi];
340 UserCtx *fine_user = &usermg->mgctx[level + 1].user[bi];
341 ierr = RestrictCoordinates(coarse_user, fine_user); CHKERRQ(ierr);
342 }
343 }
344
345 LOG_ALLOW(GLOBAL, LOG_INFO, "Physical coordinates assigned to all grid levels and blocks.\n");
346 PetscFunctionReturn(0);
347}
static PetscErrorCode RestrictCoordinates(UserCtx *coarse_user, UserCtx *fine_user)
Populates coarse grid coordinates by restricting from a fine grid.
Definition grid.c:617
static PetscErrorCode SetFinestLevelCoordinates(UserCtx *user)
A router that populates the coordinates for a single finest-level DMDA.
Definition grid.c:367

◆ ComputeStretchedCoord()

static PetscReal ComputeStretchedCoord (PetscInt i, PetscInt N, PetscReal L, PetscReal r)
inline static

Computes a stretched coordinate along one dimension.

This function computes a coordinate based on a geometric stretching ratio. If the ratio (r) is 1.0, a uniform distribution is used: x(i) = L * i / (N - 1)

If r != 1.0, a geometric stretching is applied: x(i) = L * [ ( r^(i/(N-1)) - 1 ) / (r - 1) ]

Here:

  • i : The current index along the dimension.
  • N : The total number of divisions along that dimension.
  • L : The length of the domain along that dimension.
  • r : The stretching ratio. r > 1.0 stretches the grid geometrically, increasing the spacing away from the start, whereas 0 < r < 1.0 clusters points near the start.
Parameters
[in] i: Current index (0 <= i <= N-1).
[in] N: Number of grid nodes along the dimension (the formula divides by N - 1, the number of segments).
[in] L: Total length of the domain.
[in] r: Stretching ratio.
Returns
PetscReal The computed coordinate at index i.
Note
This function does not return a PetscErrorCode because it does not allocate memory or call PETSc routines that can fail. It is just a helper calculation function.
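For example, with N = 5 nodes, L = 1.0 and r = 2.0, the fractions i/(N-1) are 0, 0.25, 0.5, 0.75 and 1, and the stretched formula places the nodes at approximately 0, 0.189, 0.414, 0.682 and 1.0, so the spacing grows away from the start; with r = 1.0 the same inputs give the uniform nodes 0, 0.25, 0.5, 0.75 and 1.0.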

Definition at line 453 of file grid.c.

454{
455 if (N <=1) return 0.0;
456 PetscReal fraction = (PetscReal)i / ((PetscReal)N - 1.0);
457 if (PetscAbsReal(r - 1.0) < 1.0e-9) { // Use a tolerance for float comparison
458 return L * fraction;
459 } else {
460 return L * (PetscPowReal(r, fraction) - 1.0) / (r - 1.0);
461 }
462}

◆ ComputeLocalBoundingBox()

PetscErrorCode ComputeLocalBoundingBox (UserCtx *user, BoundingBox *localBBox)

Computes the local bounding box of the grid on the current process.

This function calculates the minimum and maximum coordinates (x, y, z) of the local grid points owned by the current MPI process. It iterates over the local portion of the grid, examines each grid point's coordinates, and updates the minimum and maximum values accordingly.

The computed bounding box is stored in the provided localBBox structure, and the user->bbox field is also updated with this bounding box for consistency within the user context.

Parameters
[in] user: Pointer to the user-defined context containing grid information. This context must be properly initialized before calling this function.
[out] localBBox: Pointer to the BoundingBox structure where the computed local bounding box will be stored. The structure should be allocated by the caller.
Returns
PetscErrorCode Returns 0 on success, non-zero on failure.
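A minimal usage sketch (assuming the DMDA coordinates have already been populated, for example by AssignAllGridCoordinates):

    BoundingBox localBBox;
    ierr = ComputeLocalBoundingBox(user, &localBBox); CHKERRQ(ierr);
    LOG_ALLOW(LOCAL, LOG_INFO, "Local x-extent: [%.6f, %.6f]\n",
              localBBox.min_coords.x, localBBox.max_coords.x);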

Definition at line 702 of file grid.c.

703{
704 PetscErrorCode ierr;
705 PetscInt i, j, k;
706 PetscMPIInt rank;
707 PetscInt xs, ys, zs, xe, ye, ze;
708 DMDALocalInfo info;
709 Vec coordinates;
710 Cmpnts ***coordArray;
711 Cmpnts minCoords, maxCoords;
712
713 // Start of function execution
714 LOG_ALLOW(GLOBAL, LOG_INFO, "Entering the function.\n");
715
716 // Validate input Pointers
717 if (!user) {
718 LOG_ALLOW(LOCAL, LOG_ERROR, "Input 'user' Pointer is NULL.\n");
719 return PETSC_ERR_ARG_NULL;
720 }
721 if (!localBBox) {
722 LOG_ALLOW(LOCAL, LOG_ERROR, "Output 'localBBox' Pointer is NULL.\n");
723 return PETSC_ERR_ARG_NULL;
724 }
725
726 // Get MPI rank
727 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
728
729 // Get the local coordinates vector from the DMDA
730 ierr = DMGetCoordinatesLocal(user->da, &coordinates);
731 if (ierr) {
732 LOG_ALLOW(LOCAL, LOG_ERROR, "Error getting local coordinates vector.\n");
733 return ierr;
734 }
735
736 if (!coordinates) {
737 LOG_ALLOW(LOCAL, LOG_ERROR, "Coordinates vector is NULL.\n");
738 return PETSC_ERR_ARG_NULL;
739 }
740
741 // Access the coordinate array for reading
742 ierr = DMDAVecGetArrayRead(user->fda, coordinates, &coordArray);
743 if (ierr) {
744 LOG_ALLOW(LOCAL, LOG_ERROR, "Error accessing coordinate array.\n");
745 return ierr;
746 }
747
748 // Get the local grid information (indices and sizes)
749 ierr = DMDAGetLocalInfo(user->da, &info);
750 if (ierr) {
751 LOG_ALLOW(LOCAL, LOG_ERROR, "Error getting DMDA local info.\n");
752 return ierr;
753 }
754
755
756 xs = info.gxs; xe = xs + info.gxm;
757 ys = info.gys; ye = ys + info.gym;
758 zs = info.gzs; ze = zs + info.gzm;
759
760 /*
761 xs = info.xs; xe = xs + info.xm;
762 ys = info.ys; ye = ys + info.ym;
763 zs = info.zs; ze = zs + info.zm;
764 */
765
766 // Initialize min and max coordinates with extreme values
767 minCoords.x = minCoords.y = minCoords.z = PETSC_MAX_REAL;
768 maxCoords.x = maxCoords.y = maxCoords.z = PETSC_MIN_REAL;
769
770 LOG_ALLOW(LOCAL, LOG_DEBUG, "[Rank %d] Grid indices (Including Ghosts): xs=%d, xe=%d, ys=%d, ye=%d, zs=%d, ze=%d.\n",rank, xs, xe, ys, ye, zs, ze);
771
772 // Iterate over the local grid to find min and max coordinates
773 for (k = zs; k < ze; k++) {
774 for (j = ys; j < ye; j++) {
775 for (i = xs; i < xe; i++) {
776 Cmpnts coord = coordArray[k][j][i];
777
778 // Update min and max coordinates
779 if (coord.x < minCoords.x) minCoords.x = coord.x;
780 if (coord.y < minCoords.y) minCoords.y = coord.y;
781 if (coord.z < minCoords.z) minCoords.z = coord.z;
782
783 if (coord.x > maxCoords.x) maxCoords.x = coord.x;
784 if (coord.y > maxCoords.y) maxCoords.y = coord.y;
785 if (coord.z > maxCoords.z) maxCoords.z = coord.z;
786 }
787 }
788 }
789
790
791 // Add tolerance to bboxes.
792 minCoords.x = minCoords.x - BBOX_TOLERANCE;
793 minCoords.y = minCoords.y - BBOX_TOLERANCE;
794 minCoords.z = minCoords.z - BBOX_TOLERANCE;
795
796 maxCoords.x = maxCoords.x + BBOX_TOLERANCE;
797 maxCoords.y = maxCoords.y + BBOX_TOLERANCE;
798 maxCoords.z = maxCoords.z + BBOX_TOLERANCE;
799
800 LOG_ALLOW(LOCAL,LOG_INFO," Tolerance added to the limits: %.8e .\n",(PetscReal)BBOX_TOLERANCE);
801
802 // Log the computed min and max coordinates
803 LOG_ALLOW(LOCAL, LOG_INFO,"[Rank %d]minCoords=(%.6f, %.6f, %.6f), maxCoords=(%.6f, %.6f, %.6f).\n",rank,minCoords.x, minCoords.y, minCoords.z, maxCoords.x, maxCoords.y, maxCoords.z);
804
805
806
807 // Restore the coordinate array
808 ierr = DMDAVecRestoreArrayRead(user->fda, coordinates, &coordArray);
809 if (ierr) {
810 LOG_ALLOW(LOCAL, LOG_ERROR, "Error restoring coordinate array.\n");
811 return ierr;
812 }
813
814 // Set the local bounding box
815 localBBox->min_coords = minCoords;
816 localBBox->max_coords = maxCoords;
817
818 // Update the bounding box inside the UserCtx for consistency
819 user->bbox = *localBBox;
820
821 LOG_ALLOW(GLOBAL, LOG_INFO, "Exiting the function successfully.\n");
822 return 0;
823}
#define BBOX_TOLERANCE
Definition grid.c:6
@ LOG_ERROR
Critical errors that may halt the program.
Definition logging.h:29
Cmpnts max_coords
Maximum x, y, z coordinates of the bounding box.
Definition variables.h:155
Cmpnts min_coords
Minimum x, y, z coordinates of the bounding box.
Definition variables.h:154
BoundingBox bbox
Definition variables.h:641

◆ GatherAllBoundingBoxes()

PetscErrorCode GatherAllBoundingBoxes (UserCtx *user, BoundingBox **allBBoxes)

Gathers local bounding boxes from all MPI processes to rank 0.

Each rank computes its local bounding box, then all ranks participate in an MPI_Gather to send their BoundingBox to rank 0. Rank 0 allocates the result array and returns it via allBBoxes.

Parameters
[in] user: Pointer to UserCtx (must be non-NULL).
[out] allBBoxes: On rank 0, receives a malloc’d array with one BoundingBox per MPI rank. On other ranks, set to NULL.
Returns
PetscErrorCode

Definition at line 837 of file grid.c.

838{
839 PetscErrorCode ierr;
840 PetscMPIInt rank, size;
841 BoundingBox *bboxArray = NULL;
842 BoundingBox localBBox;
843
844 /* Validate */
845 if (!user || !allBBoxes) SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_NULL,
846 "GatherAllBoundingBoxes: NULL pointer");
847
848 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRMPI(ierr);
849 ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRMPI(ierr);
850
851 /* Compute local bbox */
852 ierr = ComputeLocalBoundingBox(user, &localBBox); CHKERRQ(ierr);
853
854 /* Ensure everyone is synchronized before the gather */
855 MPI_Barrier(PETSC_COMM_WORLD);
857 "Rank %d: about to MPI_Gather(localBBox)\n", rank);
858
859 /* Allocate on root */
860 if (rank == 0) {
861 bboxArray = (BoundingBox*)malloc(size * sizeof(BoundingBox));
862 if (!bboxArray) SETERRABORT(PETSC_COMM_WORLD, PETSC_ERR_MEM,
863 "GatherAllBoundingBoxes: malloc failed");
864 }
865
866 /* Collective: every rank must call */
867 ierr = MPI_Gather(&localBBox, sizeof(BoundingBox), MPI_BYTE,
868 bboxArray, sizeof(BoundingBox), MPI_BYTE,
869 0, PETSC_COMM_WORLD);
870 CHKERRMPI(ierr);
871
872 MPI_Barrier(PETSC_COMM_WORLD);
874 "Rank %d: completed MPI_Gather(localBBox)\n", rank);
875
876 /* Return result */
877 if (rank == 0) {
878 *allBBoxes = bboxArray;
879 } else {
880 *allBBoxes = NULL;
881 }
882
883 return 0;
884}
PetscErrorCode ComputeLocalBoundingBox(UserCtx *user, BoundingBox *localBBox)
Computes the local bounding box of the grid on the current process.
Definition grid.c:702
Defines a 3D axis-aligned bounding box.
Definition variables.h:153

◆ BroadcastAllBoundingBoxes()

PetscErrorCode BroadcastAllBoundingBoxes (UserCtx *user, BoundingBox **bboxlist)

Broadcasts the bounding box list from rank 0 to all ranks.

Broadcasts the bounding box information collected on rank 0 to all other ranks.

After GatherAllBoundingBoxes, rank 0 holds an array with one bounding box per MPI rank. This routine ensures every rank ends up with its own malloc’d copy.

Parameters
[in] user: Pointer to UserCtx (unused here, but kept for signature).
[in,out] bboxlist: On entry: rank 0’s array; on exit: every rank’s array.
Returns
PetscErrorCode
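A minimal sketch of the usual pairing with GatherAllBoundingBoxes, so that every rank ends up with the full list (one BoundingBox per rank); the free() matches the malloc() used inside these routines:

    BoundingBox *bboxlist = NULL;
    ierr = GatherAllBoundingBoxes(user, &bboxlist); CHKERRQ(ierr);    /* rank 0 now holds the array  */
    ierr = BroadcastAllBoundingBoxes(user, &bboxlist); CHKERRQ(ierr); /* every rank now holds a copy */
    /* ... use bboxlist[0 .. size-1] on any rank ... */
    free(bboxlist);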

Definition at line 896 of file grid.c.

897{
898 PetscErrorCode ierr;
899 PetscMPIInt rank, size;
900
901 if (!bboxlist) SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_NULL,
902 "BroadcastAllBoundingBoxes: NULL pointer");
903
904 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRMPI(ierr);
905 ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRMPI(ierr);
906
907 /* Non-root ranks must allocate before the Bcast */
908 if (rank != 0) {
909 *bboxlist = (BoundingBox*)malloc(size * sizeof(BoundingBox));
910 if (!*bboxlist) SETERRABORT(PETSC_COMM_WORLD, PETSC_ERR_MEM,
911 "BroadcastAllBoundingBoxes: malloc failed");
912 }
913
914 MPI_Barrier(PETSC_COMM_WORLD);
916 "Rank %d: about to MPI_Bcast(%d boxes)\n", rank, size);
917
918 /* Collective: every rank must call */
919 ierr = MPI_Bcast(*bboxlist, size * sizeof(BoundingBox), MPI_BYTE,
920 0, PETSC_COMM_WORLD);
921 CHKERRMPI(ierr);
922
923 MPI_Barrier(PETSC_COMM_WORLD);
925 "Rank %d: completed MPI_Bcast(%d boxes)\n", rank, size);
926
927 return 0;
928}