PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
grid.h File Reference

Public interface for grid, solver, and metric setup routines. More...

#include "variables.h"
#include "logging.h"
#include "io.h"
#include "setup.h"

Go to the source code of this file.

Functions

PetscErrorCode DefineAllGridDimensions (SimCtx *simCtx)
 Orchestrates the parsing and setting of grid dimensions for all blocks.
 
PetscErrorCode InitializeAllGridDMs (SimCtx *simCtx)
 Orchestrates the creation of DMDA objects for every block and multigrid level.
 
PetscErrorCode AssignAllGridCoordinates (SimCtx *simCtx)
 Orchestrates the assignment of physical coordinates to all DMDA objects.
 
PetscErrorCode ComputeLocalBoundingBox (UserCtx *user, BoundingBox *localBBox)
 Computes the local bounding box of the grid on the current process.
 
PetscErrorCode GatherAllBoundingBoxes (UserCtx *user, BoundingBox **allBBoxes)
 Gathers local bounding boxes from all MPI processes to rank 0.
 
PetscErrorCode BroadcastAllBoundingBoxes (UserCtx *user, BoundingBox **bboxlist)
 Broadcasts the bounding box information collected on rank 0 to all other ranks.
 
PetscErrorCode CalculateInletCenter (UserCtx *user)
 Calculates the geometric center of the primary inlet face.
 

Detailed Description

Public interface for grid, solver, and metric setup routines.

Definition in file grid.h.
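
The routines declared here are meant to be called in a fixed order during solver setup: grid dimensions are defined first, then the DMDA hierarchy is created, and finally physical coordinates are assigned and restricted to the coarser levels. The sketch below illustrates that sequence only; it assumes a SimCtx that has already been allocated and had its options parsed elsewhere, and the wrapper name SetupGridHierarchy_Example is hypothetical.

/* Minimal sketch (hypothetical wrapper): the expected ordering of the grid.h setup routines. */
static PetscErrorCode SetupGridHierarchy_Example(SimCtx *simCtx)
{
    PetscErrorCode ierr;
    PetscFunctionBeginUser;
    ierr = DefineAllGridDimensions(simCtx);  CHKERRQ(ierr); /* parse per-block dimensions and geometry */
    ierr = InitializeAllGridDMs(simCtx);     CHKERRQ(ierr); /* create DMDAs for every level and block */
    ierr = AssignAllGridCoordinates(simCtx); CHKERRQ(ierr); /* set finest-level coordinates, restrict to coarse levels */
    PetscFunctionReturn(0);
}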

Function Documentation

◆ DefineAllGridDimensions()

PetscErrorCode DefineAllGridDimensions ( SimCtx * simCtx )

Orchestrates the parsing and setting of grid dimensions for all blocks.

This function serves as the high-level entry point for defining the geometric properties of each grid block in the simulation. It iterates through every block defined by simCtx->block_number.

For each block, it performs two key actions:

  1. It explicitly sets the block's index (_this) in the corresponding UserCtx struct for the finest multigrid level. This makes the context "self-aware".
  2. It calls a helper function (ParseAndSetGridInputs) to handle the detailed work of parsing options or files to populate the rest of the geometric properties for that specific block (e.g., IM, Min_X, rx).
Parameters
simCtx - The master SimCtx, which contains the number of blocks and the UserCtx hierarchy to be configured.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.

Definition at line 65 of file grid.c.

66{
67 PetscErrorCode ierr;
68 PetscInt nblk = simCtx->block_number;
69 UserCtx *finest_users;
70
71 PetscFunctionBeginUser;
72
74
75 if (simCtx->usermg.mglevels == 0) {
76 SETERRQ(PETSC_COMM_WORLD, PETSC_ERR_ARG_WRONGSTATE, "MG levels not set. Cannot get finest_users.");
77 }
78 // Get the UserCtx array for the finest grid level
79 finest_users = simCtx->usermg.mgctx[simCtx->usermg.mglevels - 1].user;
80
81 LOG_ALLOW(GLOBAL, LOG_INFO, "Defining grid dimensions for %d blocks...\n", nblk);
82
83 // Loop over each block to configure its grid dimensions and geometry.
84 for (PetscInt bi = 0; bi < nblk; bi++) {
85 LOG_ALLOW_SYNC(GLOBAL, LOG_DEBUG, "Rank %d: --- Configuring Geometry for Block %d ---\n", simCtx->rank, bi);
86
87 // Before calling any helpers, set the block index in the context.
88 // This makes the UserCtx self-aware of which block it represents.
89 LOG_ALLOW(GLOBAL,LOG_DEBUG,"finest_users->_this = %d, bi = %d\n",finest_users[bi]._this,bi);
90 //finest_user[bi]._this = bi;
91
92 // Call the helper function for this specific block. It can now derive
93 // all necessary information from the UserCtx pointer it receives.
94 ierr = ParseAndSetGridInputs(&finest_users[bi]); CHKERRQ(ierr);
95 }
96
98
99 PetscFunctionReturn(0);
100}
static PetscErrorCode ParseAndSetGridInputs(UserCtx *user)
Determines the grid source and calls the appropriate parsing routine.
Definition grid.c:22
#define LOG_ALLOW_SYNC(scope, level, fmt,...)
Definition logging.h:268
#define GLOBAL
Scope for global logging across all processes.
Definition logging.h:47
#define LOG_ALLOW(scope, level, fmt,...)
Logging macro that checks both the log level and whether the calling function is in the allowed-function list.
Definition logging.h:201
#define PROFILE_FUNCTION_END
Marks the end of a profiled code block.
Definition logging.h:740
@ LOG_INFO
Informational messages about program execution.
Definition logging.h:32
@ LOG_DEBUG
Detailed debugging information.
Definition logging.h:33
#define PROFILE_FUNCTION_BEGIN
Marks the beginning of a profiled code block (typically a function).
Definition logging.h:731
UserCtx * user
Definition variables.h:427
PetscMPIInt rank
Definition variables.h:541
PetscInt block_number
Definition variables.h:593
UserMG usermg
Definition variables.h:631
PetscInt mglevels
Definition variables.h:434
MGCtx * mgctx
Definition variables.h:437
User-defined context containing data specific to a single computational grid level.
Definition variables.h:661

◆ InitializeAllGridDMs()

PetscErrorCode InitializeAllGridDMs ( SimCtx * simCtx )

Orchestrates the creation of DMDA objects for every block and multigrid level.

This function systematically builds the entire DMDA hierarchy. It first calculates the dimensions (IM, JM, KM) for all coarse grids based on the finest grid's dimensions and the semi-coarsening flags. It then iterates from the coarsest to the finest level, calling a powerful helper function (InitializeSingleGridDM) to create the DMs for each block, ensuring that finer grids are properly aligned with their coarser parents for multigrid efficiency.

Parameters
simCtx - The master SimCtx, containing the configured UserCtx hierarchy.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.
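
In each fully coarsened direction the coarse dimension is computed as IM_coarse = (IM_fine + 1) / 2 (integer division), and the legacy validation check in the listing below requires IM_coarse*(2 - isc) - (IM_fine + 1 - isc) == 0; with isc = 1 the dimension is copied unchanged and the check is trivially satisfied. A small worked example (an illustration only, assuming full coarsening, isc = 0):

/* Worked example of the coarse-dimension formula and its validation check (isc = 0):
 *   IM_fine = 65 : IM_coarse = (65 + 1) / 2 = 33,  check_i = 33*2 - (65 + 1) =  0  -> OK
 *   IM_fine = 64 : IM_coarse = (64 + 1) / 2 = 32,  check_i = 32*2 - (64 + 1) = -1  -> warning
 * i.e. a fully coarsened direction must satisfy IM_fine = 2*IM_coarse - 1 (odd node counts). */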

Definition at line 247 of file grid.c.

248{
249 PetscErrorCode ierr;
250 UserMG *usermg = &simCtx->usermg;
251 MGCtx *mgctx = usermg->mgctx;
252 PetscInt nblk = simCtx->block_number;
253
254 PetscFunctionBeginUser;
255
257
258 LOG_ALLOW(GLOBAL, LOG_INFO, "Creating DMDA objects for all levels and blocks...\n");
259
260 // --- Part 1: Calculate Coarse Grid Dimensions & VALIDATE ---
261 LOG_ALLOW(GLOBAL, LOG_DEBUG, "Calculating and validating coarse grid dimensions...\n");
262 for (PetscInt level = usermg->mglevels - 2; level >= 0; level--) {
263 for (PetscInt bi = 0; bi < nblk; bi++) {
264 UserCtx *user_coarse = &mgctx[level].user[bi];
265 UserCtx *user_fine = &mgctx[level + 1].user[bi];
266
267 user_coarse->IM = user_fine->isc ? user_fine->IM : (user_fine->IM + 1) / 2;
268 user_coarse->JM = user_fine->jsc ? user_fine->JM : (user_fine->JM + 1) / 2;
269 user_coarse->KM = user_fine->ksc ? user_fine->KM : (user_fine->KM + 1) / 2;
270
271 LOG_ALLOW_SYNC(LOCAL, LOG_TRACE, "Rank %d: Block %d, Level %d dims calculated: %d x %d x %d\n",
272 simCtx->rank, bi, level, user_coarse->IM, user_coarse->JM, user_coarse->KM);
273
274 // Validation check from legacy MGDACreate to ensure coarsening is possible
275 PetscInt check_i = user_coarse->IM * (2 - user_coarse->isc) - (user_fine->IM + 1 - user_coarse->isc);
276 PetscInt check_j = user_coarse->JM * (2 - user_coarse->jsc) - (user_fine->JM + 1 - user_coarse->jsc);
277 PetscInt check_k = user_coarse->KM * (2 - user_coarse->ksc) - (user_fine->KM + 1 - user_coarse->ksc);
278
279 if (check_i + check_j + check_k != 0) {
280 // SETERRQ(PETSC_COMM_WORLD, PETSC_ERR_ARG_WRONG,
281 // "Grid at level %d, block %d cannot be coarsened from %dx%dx%d to %dx%dx%d with the given semi-coarsening flags. Check grid dimensions.",
282 // level, bi, user_fine->IM, user_fine->JM, user_fine->KM, user_coarse->IM, user_coarse->JM, user_coarse->KM);
283 LOG(GLOBAL,LOG_WARNING,"WARNING: Grid at level %d, block %d can't be consistently coarsened further.\n", level, bi);
284 }
285 }
286 }
287
288 // --- Part 2: Create DMs from Coarse to Fine for each Block ---
289 for (PetscInt bi = 0; bi < nblk; bi++) {
290 LOG_ALLOW_SYNC(GLOBAL, LOG_DEBUG, "--- Creating DMs for Block %d ---\n", bi);
291
292 // Create the coarsest level DM first (passing NULL for the coarse_user)
293 ierr = InitializeSingleGridDM(&mgctx[0].user[bi], NULL); CHKERRQ(ierr);
294
295 // Create finer level DMs, passing the next-coarser context for alignment
296 for (PetscInt level = 1; level < usermg->mglevels; level++) {
297 ierr = InitializeSingleGridDM(&mgctx[level].user[bi], &mgctx[level-1].user[bi]); CHKERRQ(ierr);
298 }
299 }
300
301 // --- Optional: View the finest DM for debugging verification ---
302 if (get_log_level() >= LOG_DEBUG) {
303 LOG_ALLOW_SYNC(GLOBAL, LOG_INFO, "--- Viewing Finest DMDA (Level %d, Block 0) ---\n", usermg->mglevels - 1);
304 ierr = DMView(mgctx[usermg->mglevels - 1].user[0].da, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);
305 }
306
307 LOG_ALLOW(GLOBAL, LOG_INFO, "DMDA object creation complete.\n");
308
310
311 PetscFunctionReturn(0);
312}
static PetscErrorCode InitializeSingleGridDM(UserCtx *user, UserCtx *coarse_user)
Creates the DMDA objects (da and fda) for a single UserCtx.
Definition grid.c:120
#define LOCAL
Logging scope definitions for controlling message output.
Definition logging.h:46
#define LOG(scope, level, fmt,...)
Logging macro for PETSc-based applications with scope control.
Definition logging.h:85
LogLevel get_log_level()
Retrieves the current logging level from the environment variable LOG_LEVEL.
Definition logging.c:39
@ LOG_TRACE
Very fine-grained tracing information for in-depth debugging.
Definition logging.h:34
@ LOG_WARNING
Non-critical issues that warrant attention.
Definition logging.h:30
PetscInt isc
Definition variables.h:674
PetscInt ksc
Definition variables.h:674
PetscInt KM
Definition variables.h:670
PetscInt jsc
Definition variables.h:674
PetscInt JM
Definition variables.h:670
PetscInt IM
Definition variables.h:670
Context for Multigrid operations.
Definition variables.h:426
User-level context for managing the entire multigrid hierarchy.
Definition variables.h:433

◆ AssignAllGridCoordinates()

PetscErrorCode AssignAllGridCoordinates ( SimCtx * simCtx )

Orchestrates the assignment of physical coordinates to all DMDA objects.

This function manages the entire process of populating the coordinate vectors for every DMDA across all multigrid levels and blocks. It follows a two-part strategy that is essential for multigrid methods:

  1. Populate Finest Level: It first loops through each block and calls a helper (SetFinestLevelCoordinates) to set the physical coordinates for the highest-resolution grid (the finest multigrid level).
  2. Restrict to Coarser Levels: It then iterates downwards from the finest level, calling a helper (RestrictCoordinates) to copy the coordinate values from the fine grid nodes to their corresponding parent nodes on the coarser grids. This ensures all levels represent the exact same geometry.
Parameters
simCtx - The master SimCtx, containing the configured UserCtx hierarchy.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.
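
One way a caller can sanity-check the result is to query the global extent of a populated DMDA. The sketch below is an illustration only: it assumes a PETSc version that provides DMGetBoundingBox(), and the helper name is hypothetical.

/* Hypothetical helper: report the extent of the finest-level grid of block 0
 * after AssignAllGridCoordinates() has run. */
static PetscErrorCode ReportFinestGridExtent_Example(SimCtx *simCtx)
{
    PetscErrorCode ierr;
    PetscReal      gmin[3], gmax[3];
    UserCtx       *fine = &simCtx->usermg.mgctx[simCtx->usermg.mglevels - 1].user[0];

    PetscFunctionBeginUser;
    ierr = DMGetBoundingBox(fine->da, gmin, gmax); CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD, "Finest grid extent: x[%g, %g] y[%g, %g] z[%g, %g]\n",
                       (double)gmin[0], (double)gmax[0], (double)gmin[1], (double)gmax[1],
                       (double)gmin[2], (double)gmax[2]); CHKERRQ(ierr);
    PetscFunctionReturn(0);
}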

Definition at line 340 of file grid.c.

341{
342 PetscErrorCode ierr;
343 UserMG *usermg = &simCtx->usermg;
344 PetscInt nblk = simCtx->block_number;
345
346 PetscFunctionBeginUser;
347
349
350 LOG_ALLOW(GLOBAL, LOG_INFO, "Assigning physical coordinates to all grid DMs...\n");
351
352 // --- Part 1: Populate the Finest Grid Level ---
353 LOG_ALLOW(GLOBAL, LOG_DEBUG, "Setting coordinates for the finest grid level (%d)...\n", usermg->mglevels - 1);
354 for (PetscInt bi = 0; bi < nblk; bi++) {
355 UserCtx *fine_user = &usermg->mgctx[usermg->mglevels - 1].user[bi];
356 ierr = SetFinestLevelCoordinates(fine_user); CHKERRQ(ierr);
357 LOG_ALLOW(GLOBAL,LOG_TRACE,"The Finest level coordinates for block %d have been set.\n",bi);
359 ierr = LOG_FIELD_MIN_MAX(fine_user,"Coordinates");
360 }
361 }
362 LOG_ALLOW(GLOBAL, LOG_DEBUG, "Finest level coordinates have been set for all blocks.\n");
363
364 // --- Part 2: Restrict Coordinates to Coarser Levels ---
365 LOG_ALLOW(GLOBAL, LOG_DEBUG, "Restricting coordinates to coarser grid levels...\n");
366 for (PetscInt level = usermg->mglevels - 2; level >= 0; level--) {
367 for (PetscInt bi = 0; bi < nblk; bi++) {
368 UserCtx *coarse_user = &usermg->mgctx[level].user[bi];
369 UserCtx *fine_user = &usermg->mgctx[level + 1].user[bi];
370 ierr = RestrictCoordinates(coarse_user, fine_user); CHKERRQ(ierr);
371
372 LOG_ALLOW(GLOBAL,LOG_TRACE,"Coordinates restricted to block %d level %d.\n",bi,level);
374 ierr = LOG_FIELD_MIN_MAX(coarse_user,"Coordinates");
375 }
376 }
377 }
378
379 LOG_ALLOW(GLOBAL, LOG_INFO, "Physical coordinates assigned to all grid levels and blocks.\n");
380
382
383 PetscFunctionReturn(0);
384}
static PetscErrorCode RestrictCoordinates(UserCtx *coarse_user, UserCtx *fine_user)
Populates coarse grid coordinates by restricting from a fine grid.
Definition grid.c:688
static PetscErrorCode SetFinestLevelCoordinates(UserCtx *user)
A router that populates the coordinates for a single finest-level DMDA.
Definition grid.c:404
#define __FUNCT__
Definition grid.c:9
PetscBool is_function_allowed(const char *functionName)
Checks if a given function is in the allow-list.
Definition logging.c:157
PetscErrorCode LOG_FIELD_MIN_MAX(UserCtx *user, const char *fieldName)
Computes and logs the local and global min/max values of a 3-component vector field.
Definition logging.c:1273
@ LOG_VERBOSE
Extremely detailed logs, typically for development use only.
Definition logging.h:35

◆ ComputeLocalBoundingBox()

PetscErrorCode ComputeLocalBoundingBox ( UserCtx * user,
BoundingBox * localBBox 
)

Computes the local bounding box of the grid on the current process.

This function calculates the minimum and maximum coordinates of the local grid points owned by the current MPI process and stores the computed bounding box in the provided structure.

Parameters
[in] user - Pointer to the user-defined context containing grid information.
[out] localBBox - Pointer to the BoundingBox structure to store the computed bounding box.
Returns
PetscErrorCode Returns 0 on success, non-zero on failure.

This function calculates the minimum and maximum coordinates (x, y, z) of the local grid points owned by the current MPI process. It iterates over the local portion of the grid, examines each grid point's coordinates, and updates the minimum and maximum values accordingly.

The computed bounding box is stored in the provided localBBox structure, and the user->bbox field is also updated with this bounding box for consistency within the user context.

Parameters
[in] user - Pointer to the user-defined context containing grid information. This context must be properly initialized before calling this function.
[out] localBBox - Pointer to the BoundingBox structure where the computed local bounding box will be stored. The structure should be allocated by the caller.
Returns
PetscErrorCode Returns 0 on success, non-zero on failure.
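
A minimal usage sketch, assuming a fully initialized UserCtx whose DM already carries coordinates (the helper name is hypothetical):

static PetscErrorCode ShowLocalBBox_Example(UserCtx *user)
{
    PetscErrorCode ierr;
    BoundingBox    localBBox;

    PetscFunctionBeginUser;
    ierr = ComputeLocalBoundingBox(user, &localBBox); CHKERRQ(ierr);
    LOG_ALLOW(LOCAL, LOG_INFO, "Local box: x[%.6f, %.6f] y[%.6f, %.6f] z[%.6f, %.6f]\n",
              localBBox.min_coords.x, localBBox.max_coords.x,
              localBBox.min_coords.y, localBBox.max_coords.y,
              localBBox.min_coords.z, localBBox.max_coords.z);
    PetscFunctionReturn(0);
}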

Definition at line 780 of file grid.c.

781{
782 PetscErrorCode ierr;
783 PetscInt i, j, k;
784 PetscMPIInt rank;
785 PetscInt xs, ys, zs, xe, ye, ze;
786 DMDALocalInfo info;
787 Vec coordinates;
788 Cmpnts ***coordArray;
789 Cmpnts minCoords, maxCoords;
790
791 PetscFunctionBeginUser;
792
794
795 // Start of function execution
796 LOG_ALLOW(GLOBAL, LOG_INFO, "Entering the function.\n");
797
798 // Validate input Pointers
799 if (!user) {
800 LOG_ALLOW(LOCAL, LOG_ERROR, "Input 'user' Pointer is NULL.\n");
801 return PETSC_ERR_ARG_NULL;
802 }
803 if (!localBBox) {
804 LOG_ALLOW(LOCAL, LOG_ERROR, "Output 'localBBox' Pointer is NULL.\n");
805 return PETSC_ERR_ARG_NULL;
806 }
807
808 // Get MPI rank
809 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
810
811 // Get the local coordinates vector from the DMDA
812 ierr = DMGetCoordinatesLocal(user->da, &coordinates);
813 if (ierr) {
814 LOG_ALLOW(LOCAL, LOG_ERROR, "Error getting local coordinates vector.\n");
815 return ierr;
816 }
817
818 if (!coordinates) {
819 LOG_ALLOW(LOCAL, LOG_ERROR, "Coordinates vector is NULL.\n");
820 return PETSC_ERR_ARG_NULL;
821 }
822
823 // Access the coordinate array for reading
824 ierr = DMDAVecGetArrayRead(user->fda, coordinates, &coordArray);
825 if (ierr) {
826 LOG_ALLOW(LOCAL, LOG_ERROR, "Error accessing coordinate array.\n");
827 return ierr;
828 }
829
830 // Get the local grid information (indices and sizes)
831 ierr = DMDAGetLocalInfo(user->da, &info);
832 if (ierr) {
833 LOG_ALLOW(LOCAL, LOG_ERROR, "Error getting DMDA local info.\n");
834 return ierr;
835 }
836
837
838 xs = info.gxs; xe = xs + info.gxm;
839 ys = info.gys; ye = ys + info.gym;
840 zs = info.gzs; ze = zs + info.gzm;
841
842 /*
843 xs = info.xs; xe = xs + info.xm;
844 ys = info.ys; ye = ys + info.ym;
845 zs = info.zs; ze = zs + info.zm;
846 */
847
848 // Initialize min and max coordinates with extreme values
849 minCoords.x = minCoords.y = minCoords.z = PETSC_MAX_REAL;
850 maxCoords.x = maxCoords.y = maxCoords.z = PETSC_MIN_REAL;
851
852 LOG_ALLOW(LOCAL, LOG_TRACE, "[Rank %d] Grid indices (Including Ghosts): xs=%d, xe=%d, ys=%d, ye=%d, zs=%d, ze=%d.\n",rank, xs, xe, ys, ye, zs, ze);
853
854 // Iterate over the local grid to find min and max coordinates
855 for (k = zs; k < ze; k++) {
856 for (j = ys; j < ye; j++) {
857 for (i = xs; i < xe; i++) {
858 // Only consider nodes within the physical domain.
859 if(i < user->IM && j < user->JM && k < user->KM){
860 Cmpnts coord = coordArray[k][j][i];
861
862 // Update min and max coordinates
863 if (coord.x < minCoords.x) minCoords.x = coord.x;
864 if (coord.y < minCoords.y) minCoords.y = coord.y;
865 if (coord.z < minCoords.z) minCoords.z = coord.z;
866
867 if (coord.x > maxCoords.x) maxCoords.x = coord.x;
868 if (coord.y > maxCoords.y) maxCoords.y = coord.y;
869 if (coord.z > maxCoords.z) maxCoords.z = coord.z;
870 }
871 }
872 }
873 }
874
875
876 // Add tolerance to bboxes.
877 minCoords.x = minCoords.x - BBOX_TOLERANCE;
878 minCoords.y = minCoords.y - BBOX_TOLERANCE;
879 minCoords.z = minCoords.z - BBOX_TOLERANCE;
880
881 maxCoords.x = maxCoords.x + BBOX_TOLERANCE;
882 maxCoords.y = maxCoords.y + BBOX_TOLERANCE;
883 maxCoords.z = maxCoords.z + BBOX_TOLERANCE;
884
885 LOG_ALLOW(LOCAL,LOG_DEBUG," Tolerance added to the limits: %.8e .\n",(PetscReal)BBOX_TOLERANCE);
886
887 // Log the computed min and max coordinates
888 LOG_ALLOW(LOCAL, LOG_INFO,"[Rank %d] Bounding Box Ranges = X[%.6f, %.6f], Y[%.6f,%.6f], Z[%.6f, %.6f].\n",rank,minCoords.x, maxCoords.x,minCoords.y, maxCoords.y, minCoords.z, maxCoords.z);
889
890
891
892 // Restore the coordinate array
893 ierr = DMDAVecRestoreArrayRead(user->fda, coordinates, &coordArray);
894 if (ierr) {
895 LOG_ALLOW(LOCAL, LOG_ERROR, "Error restoring coordinate array.\n");
896 return ierr;
897 }
898
899 // Set the local bounding box
900 localBBox->min_coords = minCoords;
901 localBBox->max_coords = maxCoords;
902
903 // Update the bounding box inside the UserCtx for consistency
904 user->bbox = *localBBox;
905
906 LOG_ALLOW(GLOBAL, LOG_INFO, "Exiting the function successfully.\n");
907
909
910 PetscFunctionReturn(0);
911}
#define BBOX_TOLERANCE
Definition grid.c:6
@ LOG_ERROR
Critical errors that may halt the program.
Definition logging.h:29
Cmpnts max_coords
Maximum x, y, z coordinates of the bounding box.
Definition variables.h:156
Cmpnts min_coords
Minimum x, y, z coordinates of the bounding box.
Definition variables.h:155
PetscScalar x
Definition variables.h:101
PetscScalar z
Definition variables.h:101
PetscScalar y
Definition variables.h:101
BoundingBox bbox
Definition variables.h:672
A 3D point or vector with PetscScalar components.
Definition variables.h:100

◆ GatherAllBoundingBoxes()

PetscErrorCode GatherAllBoundingBoxes ( UserCtx * user,
BoundingBox **  allBBoxes 
)

Gathers local bounding boxes from all MPI processes to rank 0.

This function computes the local bounding box on each process, then collects all local bounding boxes on the root process (rank 0) using MPI. The result is stored in an array of BoundingBox structures on rank 0.

Parameters
[in] user - Pointer to the user-defined context containing grid information.
[out] allBBoxes - Pointer to a pointer where the array of gathered bounding boxes will be stored on rank 0. The caller on rank 0 must free this array.
Returns
PetscErrorCode Returns 0 on success, non-zero on failure.

Each rank computes its local bounding box, then all ranks participate in an MPI_Gather to send their BoundingBox to rank 0. Rank 0 allocates the result array and returns it via allBBoxes.

Parameters
[in] user - Pointer to UserCtx (must be non-NULL).
[out] allBBoxes - On rank 0, receives a malloc'd array with one entry per MPI rank. On other ranks, set to NULL.
Returns
PetscErrorCode
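
A minimal usage sketch (hypothetical helper name), showing that every rank must make the call while only rank 0 receives, and later frees, the gathered array:

static PetscErrorCode GatherAndInspectBBoxes_Example(UserCtx *user)
{
    PetscErrorCode ierr;
    PetscMPIInt    rank, size;
    BoundingBox   *allBBoxes = NULL;

    PetscFunctionBeginUser;
    ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRMPI(ierr);
    ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRMPI(ierr);
    ierr = GatherAllBoundingBoxes(user, &allBBoxes); CHKERRQ(ierr); /* collective: all ranks call */
    if (rank == 0) {
        for (PetscMPIInt r = 0; r < size; r++) {
            ierr = PetscPrintf(PETSC_COMM_SELF, "Rank %d box: x[%.6f, %.6f]\n", (int)r,
                               allBBoxes[r].min_coords.x, allBBoxes[r].max_coords.x); CHKERRQ(ierr);
        }
        free(allBBoxes); /* rank 0 owns the gathered array */
    }
    PetscFunctionReturn(0);
}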

Definition at line 928 of file grid.c.

929{
930 PetscErrorCode ierr;
931 PetscMPIInt rank, size;
932 BoundingBox *bboxArray = NULL;
933 BoundingBox localBBox;
934
935 PetscFunctionBeginUser;
936
938
939 /* Validate */
940 if (!user || !allBBoxes) SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_NULL,
941 "GatherAllBoundingBoxes: NULL pointer");
942
943 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRMPI(ierr);
944 ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRMPI(ierr);
945
946 /* Compute local bbox */
947 ierr = ComputeLocalBoundingBox(user, &localBBox); CHKERRQ(ierr);
948
949 /* Ensure everyone is synchronized before the gather */
950 MPI_Barrier(PETSC_COMM_WORLD);
952 "Rank %d: about to MPI_Gather(localBBox)\n", rank);
953
954 /* Allocate on root */
955 if (rank == 0) {
956 bboxArray = (BoundingBox*)malloc(size * sizeof(BoundingBox));
957 if (!bboxArray) SETERRABORT(PETSC_COMM_WORLD, PETSC_ERR_MEM,
958 "GatherAllBoundingBoxes: malloc failed");
959 }
960
961 /* Collective: every rank must call */
962 ierr = MPI_Gather(&localBBox, sizeof(BoundingBox), MPI_BYTE,
963 bboxArray, sizeof(BoundingBox), MPI_BYTE,
964 0, PETSC_COMM_WORLD);
965 CHKERRMPI(ierr);
966
967 MPI_Barrier(PETSC_COMM_WORLD);
969 "Rank %d: completed MPI_Gather(localBBox)\n", rank);
970
971 /* Return result */
972 if (rank == 0) {
973 *allBBoxes = bboxArray;
974 } else {
975 *allBBoxes = NULL;
976 }
977
979
980 PetscFunctionReturn(0);
981}
PetscErrorCode ComputeLocalBoundingBox(UserCtx *user, BoundingBox *localBBox)
Computes the local bounding box of the grid on the current process.
Definition grid.c:780
Defines a 3D axis-aligned bounding box.
Definition variables.h:154

◆ BroadcastAllBoundingBoxes()

PetscErrorCode BroadcastAllBoundingBoxes ( UserCtx * user,
BoundingBox **  bboxlist 
)

Broadcasts the bounding box information collected on rank 0 to all other ranks.

This function assumes that GatherAllBoundingBoxes() was previously called, so bboxlist is allocated and populated on rank 0. All other ranks will allocate memory for bboxlist, and this function will use MPI_Bcast to distribute the bounding box data to them.

Parameters
[in] user - Pointer to the UserCtx structure. (Currently unused in this function, but kept for consistency.)
[in,out] bboxlist - Pointer to the array of BoundingBoxes. On rank 0, this should point to a valid array of size 'size' (where size is the number of MPI ranks). On non-root ranks, this function will allocate memory for bboxlist.
Returns
PetscErrorCode Returns 0 on success, non-zero on MPI or PETSc-related errors.

After GatherAllBoundingBoxes, rank 0 holds an array with one bounding box per MPI rank. This routine makes sure every rank ends up with its own malloc'd copy of that array.

Parameters
[in] user - Pointer to UserCtx (unused here, but kept for signature).
[in,out] bboxlist - On entry: rank 0's array; on exit: every rank's array.
Returns
PetscErrorCode
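
The two routines are designed to be used as a pair: gather first, then broadcast, after which every rank owns its own copy of the full list (one BoundingBox per MPI rank). A minimal sketch, with a hypothetical wrapper name:

static PetscErrorCode DistributeAllBBoxes_Example(UserCtx *user, BoundingBox **bboxlist_out)
{
    PetscErrorCode ierr;
    BoundingBox   *bboxlist = NULL;

    PetscFunctionBeginUser;
    ierr = GatherAllBoundingBoxes(user, &bboxlist);    CHKERRQ(ierr); /* rank 0 allocates and fills */
    ierr = BroadcastAllBoundingBoxes(user, &bboxlist); CHKERRQ(ierr); /* other ranks allocate, then receive */
    *bboxlist_out = bboxlist; /* every rank now owns a copy; free() it when no longer needed */
    PetscFunctionReturn(0);
}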

Definition at line 996 of file grid.c.

997{
998 PetscErrorCode ierr;
999 PetscMPIInt rank, size;
1000
1001 PetscFunctionBeginUser;
1002
1004
1005 if (!bboxlist) SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_NULL,
1006 "BroadcastAllBoundingBoxes: NULL pointer");
1007
1008 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRMPI(ierr);
1009 ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRMPI(ierr);
1010
1011 /* Non-root ranks must allocate before the Bcast */
1012 if (rank != 0) {
1013 *bboxlist = (BoundingBox*)malloc(size * sizeof(BoundingBox));
1014 if (!*bboxlist) SETERRABORT(PETSC_COMM_WORLD, PETSC_ERR_MEM,
1015 "BroadcastAllBoundingBoxes: malloc failed");
1016 }
1017
1018 MPI_Barrier(PETSC_COMM_WORLD);
1020 "Rank %d: about to MPI_Bcast(%d boxes)\n", rank, size);
1021
1022 /* Collective: every rank must call */
1023 ierr = MPI_Bcast(*bboxlist, size * sizeof(BoundingBox), MPI_BYTE,
1024 0, PETSC_COMM_WORLD);
1025 CHKERRMPI(ierr);
1026
1027 MPI_Barrier(PETSC_COMM_WORLD);
1029 "Rank %d: completed MPI_Bcast(%d boxes)\n", rank, size);
1030
1031
1033
1034 PetscFunctionReturn(0);
1035}

◆ CalculateInletCenter()

PetscErrorCode CalculateInletCenter ( UserCtx * user )

Calculates the geometric center of the primary inlet face.

This function identifies the first face designated as an INLET in the boundary condition configuration. It then iterates over all grid nodes on that physical face across all MPI processes, calculates the average of their coordinates, and stores the result in the user's SimCtx (CMx_c, CMy_c, CMz_c).

This provides an automatic, robust way to determine the center for profiles like parabolic flow, removing the need for manual user input.

Parameters
user - The main UserCtx struct, containing BC config and the grid coordinate vector.
Returns
PetscErrorCode 0 on success.
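
A minimal usage sketch (hypothetical helper name), assuming coordinates and boundary-condition configuration are already in place; note the routine simply returns if no INLET face is configured:

static PetscErrorCode ReportInletCenter_Example(UserCtx *user)
{
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = CalculateInletCenter(user); CHKERRQ(ierr);
    LOG_ALLOW(GLOBAL, LOG_INFO, "Inlet center: (%.4f, %.4f, %.4f)\n",
              user->simCtx->CMx_c, user->simCtx->CMy_c, user->simCtx->CMz_c);
    PetscFunctionReturn(0);
}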

Definition at line 1054 of file grid.c.

1055{
1056 PetscErrorCode ierr;
1057 BCFace inlet_face_id = -1;
1058 PetscBool inlet_found = PETSC_FALSE;
1059
1060 PetscReal local_sum[3] = {0.0, 0.0, 0.0};
1061 PetscReal global_sum[3] = {0.0, 0.0, 0.0};
1062 PetscCount local_n_points = 0;
1063 PetscCount global_n_points = 0;
1064
1065 DM da = user->da;
1066 DMDALocalInfo info = user->info;
1067 PetscInt xs = info.xs, xe = info.xs + info.xm;
1068 PetscInt ys = info.ys, ye = info.ys + info.ym;
1069 PetscInt zs = info.zs, ze = info.zs + info.zm;
1070 PetscInt mx = info.mx, my = info.my, mz = info.mz;
1071 Vec lCoor;
1072 Cmpnts ***coor;
1073
1074
1075 PetscFunctionBeginUser;
1076
1078
1079 // 1. Identify the primary inlet face from the configuration
1080 for (int i = 0; i < 6; i++) {
1081 if (user->boundary_faces[i].mathematical_type == INLET) {
1082 inlet_face_id = user->boundary_faces[i].face_id;
1083 inlet_found = PETSC_TRUE;
1084 break; // Use the first inlet found
1085 }
1086 }
1087
1088 if (!inlet_found) {
1089 LOG_ALLOW(GLOBAL, LOG_INFO, "No INLET face found. Skipping inlet center calculation.\n");
1090 PetscFunctionReturn(0);
1091 }
1092
1093 // 2. Get the nodal coordinates
1094 ierr = DMGetCoordinatesLocal(user->da,&lCoor);
1095 ierr = DMDAVecGetArrayRead(user->fda, lCoor, &coor); CHKERRQ(ierr);
1096
1097 // 3. Loop over the identified inlet face and sum local coordinates
1098 switch (inlet_face_id) {
1099 case BC_FACE_NEG_X:
1100 if (xs == 0) {
1101 for (PetscInt k = zs; k < ze; k++) for (PetscInt j = ys; j < ye; j++) {
1102 if(j < user->JM && k < user->KM){ // Ensure within physical domain
1103 local_sum[0] += coor[k][j][0].x;
1104 local_sum[1] += coor[k][j][0].y;
1105 local_sum[2] += coor[k][j][0].z;
1106 local_n_points++;
1107 }
1108 }
1109 }
1110 break;
1111 case BC_FACE_POS_X:
1112 if (xe == mx) { // another check could be if (xe > user->IM - 1)
1113 for (PetscInt k = zs; k < ze; k++) for (PetscInt j = ys; j < ye; j++) {
1114 if(j < user->JM && k < user->KM){ // Ensure within physical domain
1115 local_sum[0] += coor[k][j][mx-2].x; // mx-1 is the ghost layer
1116 local_sum[1] += coor[k][j][mx-2].y; // mx-2 = IM - 1.
1117 local_sum[2] += coor[k][j][mx-2].z;
1118 local_n_points++;
1119 }
1120 }
1121 }
1122 break;
1123 case BC_FACE_NEG_Y:
1124 if (ys == 0) {
1125 for (PetscInt k = zs; k < ze; k++) for (PetscInt i = xs; i < xe; i++) {
1126 if(i < user->IM && k < user->KM){ // Ensure within physical domain
1127 local_sum[0] += coor[k][0][i].x;
1128 local_sum[1] += coor[k][0][i].y;
1129 local_sum[2] += coor[k][0][i].z;
1130 local_n_points++;
1131 }
1132 }
1133 }
1134 break;
1135 case BC_FACE_POS_Y:
1136 if (ye == my) { // another check could be if (ye > user->JM - 1)
1137 for (PetscInt k = zs; k < ze; k++) for (PetscInt i = xs; i < xe; i++) {
1138 local_sum[0] += coor[k][my-2][i].x; // my-1 is the ghost layer
1139 local_sum[1] += coor[k][my-2][i].y; // my-2 = JM - 1.
1140 local_sum[2] += coor[k][my-2][i].z;
1141 local_n_points++;
1142 }
1143 }
1144 break;
1145 case BC_FACE_NEG_Z:
1146 if (zs == 0) {
1147 for (PetscInt j = ys; j < ye; j++) for (PetscInt i = xs; i < xe; i++) {
1148 if(i < user->IM && j < user->JM){ // Ensure within physical domain
1149 local_sum[0] += coor[0][j][i].x;
1150 local_sum[1] += coor[0][j][i].y;
1151 local_sum[2] += coor[0][j][i].z;
1152 local_n_points++;
1153 }
1154 }
1155 }
1156 break;
1157 case BC_FACE_POS_Z:
1158 if (ze == mz) { // another check could be if (ze > user->KM - 1)
1159 for (PetscInt j = ys; j < ye; j++) for (PetscInt i = xs; i < xe; i++) {
1160 if(i < user->IM && j < user->JM){ // Ensure within physical domain
1161 local_sum[0] += coor[mz-2][j][i].x; // mz-1 is the ghost layer
1162 local_sum[1] += coor[mz-2][j][i].y; // mz-2 = KM - 1.
1163 local_sum[2] += coor[mz-2][j][i].z;
1164 local_n_points++;
1165 }
1166 }
1167 }
1168 break;
1169 }
1170
1171 ierr = DMDAVecRestoreArrayRead(user->fda, lCoor, &coor); CHKERRQ(ierr);
1172
1173 // 4. Perform MPI Allreduce to get global sums
1174 ierr = MPI_Allreduce(local_sum, global_sum, 3, MPI_DOUBLE, MPI_SUM, PETSC_COMM_WORLD); CHKERRQ(ierr);
1175 ierr = MPI_Allreduce(&local_n_points, &global_n_points, 1, MPI_INT, MPI_SUM, PETSC_COMM_WORLD); CHKERRQ(ierr);
1176
1177 // 5. Calculate average and store in SimCtx
1178 if (global_n_points > 0) {
1179 user->simCtx->CMx_c = global_sum[0] / global_n_points;
1180 user->simCtx->CMy_c = global_sum[1] / global_n_points;
1181 user->simCtx->CMz_c = global_sum[2] / global_n_points;
1182 LOG_ALLOW(GLOBAL, LOG_INFO, "Calculated inlet center for Face %s: (x=%.4f, y=%.4f, z=%.4f)\n",
1183 BCFaceToString((BCFace)inlet_face_id), user->simCtx->CMx_c, user->simCtx->CMy_c, user->simCtx->CMz_c);
1184 } else {
1185 LOG_ALLOW(GLOBAL, LOG_WARNING, "WARNING: Inlet face was identified but no grid points found on it. Center not calculated.\n");
1186 }
1187
1189
1190 PetscFunctionReturn(0);
1191}
const char * BCFaceToString(BCFace face)
Helper function to convert BCFace enum to a string representation.
Definition logging.c:643
@ INLET
Definition variables.h:214
BoundaryFaceConfig boundary_faces[6]
Definition variables.h:679
SimCtx * simCtx
Back-pointer to the master simulation context.
Definition variables.h:664
PetscReal CMy_c
Definition variables.h:589
PetscReal CMz_c
Definition variables.h:589
DMDALocalInfo info
Definition variables.h:668
BCType mathematical_type
Definition variables.h:273
PetscReal CMx_c
Definition variables.h:589
BCFace
Identifies the six logical faces of a structured computational block.
Definition variables.h:200
@ BC_FACE_NEG_X
Definition variables.h:201
@ BC_FACE_POS_Z
Definition variables.h:203
@ BC_FACE_POS_Y
Definition variables.h:202
@ BC_FACE_NEG_Z
Definition variables.h:203
@ BC_FACE_POS_X
Definition variables.h:201
@ BC_FACE_NEG_Y
Definition variables.h:202