PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
Functions
grid.h File Reference

Public interface for grid, solver, and metric setup routines.

#include "variables.h"
#include "logging.h"
#include "io.h"
#include "setup.h"
#include "AnalyticalSolutions.h"
Include dependency graph for grid.h: (graph not shown)
Included-by graph (files that directly or indirectly include grid.h): (graph not shown)


Functions

PetscErrorCode DefineAllGridDimensions (SimCtx *simCtx)
 Orchestrates the parsing and setting of grid dimensions for all blocks.
 
PetscErrorCode InitializeAllGridDMs (SimCtx *simCtx)
 Orchestrates the creation of DMDA objects for every block and multigrid level.
 
PetscErrorCode AssignAllGridCoordinates (SimCtx *simCtx)
 Orchestrates the assignment of physical coordinates to all DMDA objects.
 
PetscErrorCode ComputeLocalBoundingBox (UserCtx *user, BoundingBox *localBBox)
 Computes the local bounding box of the grid on the current process.
 
PetscErrorCode GatherAllBoundingBoxes (UserCtx *user, BoundingBox **allBBoxes)
 Gathers local bounding boxes from all MPI processes to rank 0.
 
PetscErrorCode BroadcastAllBoundingBoxes (UserCtx *user, BoundingBox **bboxlist)
 Broadcasts the bounding box information collected on rank 0 to all other ranks.
 
PetscErrorCode CalculateInletProperties (UserCtx *user)
 Calculates the center and area of the primary INLET face.
 
PetscErrorCode CalculateOutletProperties (UserCtx *user)
 Calculates the center and area of the primary OUTLET face.
 
PetscErrorCode CalculateFaceCenterAndArea (UserCtx *user, BCFace face_id, Cmpnts *face_center, PetscReal *face_area)
 Calculates the geometric center and total area of a specified boundary face.
 

Detailed Description

Public interface for grid, solver, and metric setup routines.

Definition in file grid.h.

Function Documentation

◆ DefineAllGridDimensions()

PetscErrorCode DefineAllGridDimensions (SimCtx *simCtx)

Orchestrates the parsing and setting of grid dimensions for all blocks.

This function serves as the high-level entry point for defining the geometric properties of each grid block in the simulation. It iterates through every block defined by simCtx->block_number.

For each block, it performs two key actions:

  1. It explicitly sets the block's index (_this) in the corresponding UserCtx struct for the finest multigrid level. This makes the context "self-aware".
  2. It calls a helper function (ParseAndSetGridInputs) to handle the detailed work of parsing options or files to populate the rest of the geometric properties for that specific block (e.g., IM, Min_X, rx).
Parameters
simCtx - The master SimCtx, which contains the number of blocks and the UserCtx hierarchy to be configured.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.

Orchestrates the parsing and setting of grid dimensions for all blocks.

Local to this translation unit.

Definition at line 57 of file grid.c.

58{
59 PetscErrorCode ierr;
60 PetscInt nblk = simCtx->block_number;
61 UserCtx *finest_users;
62
63 PetscFunctionBeginUser;
64
66 PROFILE_FUNCTION_BEGIN;
67 if (simCtx->usermg.mglevels == 0) {
68 SETERRQ(PETSC_COMM_WORLD, PETSC_ERR_ARG_WRONGSTATE, "MG levels not set. Cannot get finest_users.");
69 }
70 // Get the UserCtx array for the finest grid level
71 finest_users = simCtx->usermg.mgctx[simCtx->usermg.mglevels - 1].user;
72
73 LOG_ALLOW(GLOBAL, LOG_INFO, "Defining grid dimensions for %d blocks...\n", nblk);
74 if (strcmp(simCtx->eulerianSource, "analytical") == 0 &&
75 AnalyticalTypeRequiresCustomGeometry(simCtx->AnalyticalSolutionType)) {
76 LOG_ALLOW(GLOBAL, LOG_INFO,
77 "Analytical type '%s' requires custom geometry; preloading finest-grid IM/JM/KM once.\n",
78 simCtx->AnalyticalSolutionType);
79 ierr = PopulateFinestUserGridResolutionFromOptions(finest_users, nblk); CHKERRQ(ierr);
80 }
81
82 // Loop over each block to configure its grid dimensions and geometry.
83 for (PetscInt bi = 0; bi < nblk; bi++) {
84 LOG_ALLOW_SYNC(GLOBAL, LOG_DEBUG, "Rank %d: --- Configuring Geometry for Block %d ---\n", simCtx->rank, bi);
85
86 // Before calling any helpers, set the block index in the context.
87 // This makes the UserCtx self-aware of which block it represents.
88 LOG_ALLOW(GLOBAL,LOG_DEBUG,"finest_users->_this = %d, bi = %d\n",finest_users[bi]._this,bi);
89 //finest_user[bi]._this = bi;
90
91 // Call the helper function for this specific block. It can now derive
92 // all necessary information from the UserCtx pointer it receives.
93 ierr = ParseAndSetGridInputs(&finest_users[bi]); CHKERRQ(ierr);
94 }
95
97 PROFILE_FUNCTION_END;
98 PetscFunctionReturn(0);
99}
PetscBool AnalyticalTypeRequiresCustomGeometry(const char *analytical_type)
Reports whether an analytical type requires custom geometry/decomposition logic.
static PetscErrorCode ParseAndSetGridInputs(UserCtx *user)
Internal helper implementation: ParseAndSetGridInputs().
Definition grid.c:14
PetscErrorCode PopulateFinestUserGridResolutionFromOptions(UserCtx *finest_users, PetscInt nblk)
Parses grid resolution arrays (-im, -jm, -km) once and applies them to all finest-grid blocks.
Definition io.c:170
#define LOG_ALLOW_SYNC(scope, level, fmt,...)
Synchronized logging macro that checks both the log level and whether the calling function is in the ...
Definition logging.h:252
#define GLOBAL
Scope for global logging across all processes.
Definition logging.h:45
#define LOG_ALLOW(scope, level, fmt,...)
Logging macro that checks both the log level and whether the calling function is in the allowed-funct...
Definition logging.h:199
#define PROFILE_FUNCTION_END
Marks the end of a profiled code block.
Definition logging.h:779
@ LOG_INFO
Informational messages about program execution.
Definition logging.h:30
@ LOG_DEBUG
Detailed debugging information.
Definition logging.h:31
#define PROFILE_FUNCTION_BEGIN
Marks the beginning of a profiled code block (typically a function).
Definition logging.h:770
UserCtx * user
Definition variables.h:528
PetscMPIInt rank
Definition variables.h:646
PetscInt block_number
Definition variables.h:712
UserMG usermg
Definition variables.h:764
char eulerianSource[PETSC_MAX_PATH_LEN]
Definition variables.h:663
PetscInt mglevels
Definition variables.h:535
char AnalyticalSolutionType[PETSC_MAX_PATH_LEN]
Definition variables.h:676
MGCtx * mgctx
Definition variables.h:538
User-defined context containing data specific to a single computational grid level.
Definition variables.h:811

◆ InitializeAllGridDMs()

PetscErrorCode InitializeAllGridDMs (SimCtx *simCtx)

Orchestrates the creation of DMDA objects for every block and multigrid level.

This function systematically builds the entire DMDA hierarchy. It first calculates the dimensions (IM, JM, KM) of all coarse grids from the finest grid's dimensions and the semi-coarsening flags. It then iterates from the coarsest to the finest level, calling a helper function (InitializeSingleGridDM) to create the DMs for each block, ensuring that finer grids are properly aligned with their coarser parents for multigrid efficiency.

Parameters
simCtx - The master SimCtx, containing the configured UserCtx hierarchy.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.

Orchestrates the creation of DMDA objects for every block and multigrid level.

Local to this translation unit.

Definition at line 235 of file grid.c.

236{
237 PetscErrorCode ierr;
238 UserMG *usermg = &simCtx->usermg;
239 MGCtx *mgctx = usermg->mgctx;
240 PetscInt nblk = simCtx->block_number;
241
242 PetscFunctionBeginUser;
243
245 PROFILE_FUNCTION_BEGIN;
246 LOG_ALLOW(GLOBAL,LOG_INFO, "Pre-scanning BCs to identify domain periodicity.\n");
247 ierr = DeterminePeriodicity(simCtx); CHKERRQ(ierr);
248
249 LOG_ALLOW(GLOBAL, LOG_INFO, "Creating DMDA objects for all levels and blocks...\n");
250
251 // --- Part 1: Calculate Coarse Grid Dimensions & VALIDATE ---
252 LOG_ALLOW(GLOBAL, LOG_DEBUG, "Calculating and validating coarse grid dimensions...\n");
253 for (PetscInt level = usermg->mglevels - 2; level >= 0; level--) {
254 for (PetscInt bi = 0; bi < nblk; bi++) {
255 UserCtx *user_coarse = &mgctx[level].user[bi];
256 UserCtx *user_fine = &mgctx[level + 1].user[bi];
257
258 user_coarse->IM = user_fine->isc ? user_fine->IM : (user_fine->IM + 1) / 2;
259 user_coarse->JM = user_fine->jsc ? user_fine->JM : (user_fine->JM + 1) / 2;
260 user_coarse->KM = user_fine->ksc ? user_fine->KM : (user_fine->KM + 1) / 2;
261
262 LOG_ALLOW_SYNC(LOCAL, LOG_TRACE, "Rank %d: Block %d, Level %d dims calculated: %d x %d x %d\n",
263 simCtx->rank, bi, level, user_coarse->IM, user_coarse->JM, user_coarse->KM);
264
265 // Validation check from legacy MGDACreate to ensure coarsening is possible
266 PetscInt check_i = user_coarse->IM * (2 - user_coarse->isc) - (user_fine->IM + 1 - user_coarse->isc);
267 PetscInt check_j = user_coarse->JM * (2 - user_coarse->jsc) - (user_fine->JM + 1 - user_coarse->jsc);
268 PetscInt check_k = user_coarse->KM * (2 - user_coarse->ksc) - (user_fine->KM + 1 - user_coarse->ksc);
269
270 if (check_i + check_j + check_k != 0) {
271 // SETERRQ(PETSC_COMM_WORLD, PETSC_ERR_ARG_WRONG,
272 // "Grid at level %d, block %d cannot be coarsened from %dx%dx%d to %dx%dx%d with the given semi-coarsening flags. Check grid dimensions.",
273 // level, bi, user_fine->IM, user_fine->JM, user_fine->KM, user_coarse->IM, user_coarse->JM, user_coarse->KM);
274 LOG(GLOBAL,LOG_WARNING,"WARNING: Grid at level %d, block %d can't be consistently coarsened further.\n", level, bi);
275 }
276 }
277 }
278
279 // --- Part 2: Create DMs from Coarse to Fine for each Block ---
280 for (PetscInt bi = 0; bi < nblk; bi++) {
281 LOG_ALLOW_SYNC(GLOBAL, LOG_DEBUG, "--- Creating DMs for Block %d ---\n", bi);
282
283 // Create the coarsest level DM first (passing NULL for the coarse_user)
284 ierr = InitializeSingleGridDM(&mgctx[0].user[bi], NULL); CHKERRQ(ierr);
285
286 // Create finer level DMs, passing the next-coarser context for alignment
287 for (PetscInt level = 1; level < usermg->mglevels; level++) {
288 ierr = InitializeSingleGridDM(&mgctx[level].user[bi], &mgctx[level-1].user[bi]); CHKERRQ(ierr);
289 }
290 }
291
292 // --- Optional: View the finest DM for debugging verification ---
293 if (get_log_level() >= LOG_DEBUG) {
294 LOG_ALLOW_SYNC(GLOBAL, LOG_INFO, "--- Viewing Finest DMDA (Level %d, Block 0) ---\n", usermg->mglevels - 1);
295 ierr = DMView(mgctx[usermg->mglevels - 1].user[0].da, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);
296 }
297
298 LOG_ALLOW(GLOBAL, LOG_INFO, "DMDA object creation complete.\n");
299
301 PROFILE_FUNCTION_END;
302 PetscFunctionReturn(0);
303}
static PetscErrorCode InitializeSingleGridDM(UserCtx *user, UserCtx *coarse_user)
Internal helper implementation: InitializeSingleGridDM().
Definition grid.c:107
PetscErrorCode DeterminePeriodicity(SimCtx *simCtx)
Scans all block-specific boundary condition files to determine a globally consistent periodicity for ...
Definition io.c:634
#define LOCAL
Logging scope definitions for controlling message output.
Definition logging.h:44
#define LOG(scope, level, fmt,...)
Logging macro for PETSc-based applications with scope control.
Definition logging.h:83
LogLevel get_log_level()
Retrieves the current logging level from the environment variable LOG_LEVEL.
Definition logging.c:84
@ LOG_TRACE
Very fine-grained tracing information for in-depth debugging.
Definition logging.h:32
@ LOG_WARNING
Non-critical issues that warrant attention.
Definition logging.h:29
PetscInt isc
Definition variables.h:824
PetscInt ksc
Definition variables.h:824
PetscInt KM
Definition variables.h:820
PetscInt jsc
Definition variables.h:824
PetscInt JM
Definition variables.h:820
PetscInt IM
Definition variables.h:820
Context for Multigrid operations.
Definition variables.h:527
User-level context for managing the entire multigrid hierarchy.
Definition variables.h:534

◆ AssignAllGridCoordinates()

PetscErrorCode AssignAllGridCoordinates (SimCtx *simCtx)

Orchestrates the assignment of physical coordinates to all DMDA objects.

This function manages the entire process of populating the coordinate vectors for every DMDA across all multigrid levels and blocks. It follows a two-part strategy that is essential for multigrid methods:

  1. Populate Finest Level: It first loops through each block and calls a helper (SetFinestLevelCoordinates) to set the physical coordinates for the highest-resolution grid (the finest multigrid level).
  2. Restrict to Coarser Levels: It then iterates downwards from the finest level, calling a helper (RestrictCoordinates) to copy the coordinate values from the fine grid nodes to their corresponding parent nodes on the coarser grids. This ensures all levels represent the exact same geometry.
Parameters
simCtx - The master SimCtx, containing the configured UserCtx hierarchy.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.

Orchestrates the assignment of physical coordinates to all DMDA objects.

Local to this translation unit.

Definition at line 317 of file grid.c.

318{
319 PetscErrorCode ierr;
320 UserMG *usermg = &simCtx->usermg;
321 PetscInt nblk = simCtx->block_number;
322
323 PetscFunctionBeginUser;
324
326 PROFILE_FUNCTION_BEGIN;
327 LOG_ALLOW(GLOBAL, LOG_INFO, "Assigning physical coordinates to all grid DMs...\n");
328
329 // --- Part 1: Populate the Finest Grid Level ---
330 LOG_ALLOW(GLOBAL, LOG_DEBUG, "Setting coordinates for the finest grid level (%d)...\n", usermg->mglevels - 1);
331 for (PetscInt bi = 0; bi < nblk; bi++) {
332 UserCtx *fine_user = &usermg->mgctx[usermg->mglevels - 1].user[bi];
333 ierr = SetFinestLevelCoordinates(fine_user); CHKERRQ(ierr);
334 LOG_ALLOW(GLOBAL,LOG_TRACE,"The Finest level coordinates for block %d have been set.\n",bi);
335 if (get_log_level() >= LOG_VERBOSE && is_function_allowed(__FUNCT__)) {
336 ierr = LOG_FIELD_MIN_MAX(fine_user,"Coordinates");
337 }
338 }
339 LOG_ALLOW(GLOBAL, LOG_DEBUG, "Finest level coordinates have been set for all blocks.\n");
340
341 // --- Part 2: Restrict Coordinates to Coarser Levels ---
342 LOG_ALLOW(GLOBAL, LOG_DEBUG, "Restricting coordinates to coarser grid levels...\n");
343 for (PetscInt level = usermg->mglevels - 2; level >= 0; level--) {
344 for (PetscInt bi = 0; bi < nblk; bi++) {
345 UserCtx *coarse_user = &usermg->mgctx[level].user[bi];
346 UserCtx *fine_user = &usermg->mgctx[level + 1].user[bi];
347 ierr = RestrictCoordinates(coarse_user, fine_user); CHKERRQ(ierr);
348
349 LOG_ALLOW(GLOBAL,LOG_TRACE,"Coordinates restricted to block %d level %d.\n",bi,level);
350 if (get_log_level() >= LOG_VERBOSE && is_function_allowed(__FUNCT__)) {
351 ierr = LOG_FIELD_MIN_MAX(coarse_user,"Coordinates");
352 }
353 }
354 }
355
356 LOG_ALLOW(GLOBAL, LOG_INFO, "Physical coordinates assigned to all grid levels and blocks.\n");
357
359 PROFILE_FUNCTION_END;
360 PetscFunctionReturn(0);
361}
static PetscErrorCode RestrictCoordinates(UserCtx *coarse_user, UserCtx *fine_user)
Internal helper implementation: RestrictCoordinates().
Definition grid.c:600
static PetscErrorCode SetFinestLevelCoordinates(UserCtx *user)
Internal helper implementation: SetFinestLevelCoordinates().
Definition grid.c:370
#define __FUNCT__
Definition grid.c:9
PetscBool is_function_allowed(const char *functionName)
Checks if a given function is in the allow-list.
Definition logging.c:183
PetscErrorCode LOG_FIELD_MIN_MAX(UserCtx *user, const char *fieldName)
Computes and logs the local and global min/max values of a 3-component vector field.
Definition logging.c:1367
@ LOG_VERBOSE
Extremely detailed logs, typically for development use only.
Definition logging.h:33

◆ ComputeLocalBoundingBox()

PetscErrorCode ComputeLocalBoundingBox (UserCtx *user, BoundingBox *localBBox)

Computes the local bounding box of the grid on the current process.

This function calculates the minimum and maximum coordinates of the local grid points owned by the current MPI process and stores the computed bounding box in the provided structure.

Parameters
[in] user - Pointer to the user-defined context containing grid information.
[out] localBBox - Pointer to the BoundingBox structure to store the computed bounding box.
Returns
PetscErrorCode Returns 0 on success, non-zero on failure.

Computes the local bounding box of the grid on the current process.

Full API contract (arguments, ownership, side effects) is documented with the header declaration in include/grid.h.

See also
ComputeLocalBoundingBox()

Definition at line 679 of file grid.c.

680{
681 PetscErrorCode ierr;
682 PetscInt i, j, k;
683 PetscMPIInt rank;
684 PetscInt xs, ys, zs, xe, ye, ze;
685 DMDALocalInfo info;
686 Vec coordinates;
687 Cmpnts ***coordArray;
688 Cmpnts minCoords, maxCoords;
689
690 PetscFunctionBeginUser;
691
693 PROFILE_FUNCTION_BEGIN;
694 // Start of function execution
695 LOG_ALLOW(GLOBAL, LOG_INFO, "Entering the function.\n");
696
697 // Validate input Pointers
698 if (!user) {
699 LOG_ALLOW(LOCAL, LOG_ERROR, "Input 'user' Pointer is NULL.\n");
700 return PETSC_ERR_ARG_NULL;
701 }
702 if (!localBBox) {
703 LOG_ALLOW(LOCAL, LOG_ERROR, "Output 'localBBox' Pointer is NULL.\n");
704 return PETSC_ERR_ARG_NULL;
705 }
706
707 // Get MPI rank
708 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
709
710 // Get the local coordinates vector from the DMDA
711 ierr = DMGetCoordinatesLocal(user->da, &coordinates);
712 if (ierr) {
713 LOG_ALLOW(LOCAL, LOG_ERROR, "Error getting local coordinates vector.\n");
714 return ierr;
715 }
716
717 if (!coordinates) {
718 LOG_ALLOW(LOCAL, LOG_ERROR, "Coordinates vector is NULL.\n");
719 return PETSC_ERR_ARG_NULL;
720 }
721
722 // Access the coordinate array for reading
723 ierr = DMDAVecGetArrayRead(user->fda, coordinates, &coordArray);
724 if (ierr) {
725 LOG_ALLOW(LOCAL, LOG_ERROR, "Error accessing coordinate array.\n");
726 return ierr;
727 }
728
729 // Get the local grid information (indices and sizes)
730 ierr = DMDAGetLocalInfo(user->da, &info);
731 if (ierr) {
732 LOG_ALLOW(LOCAL, LOG_ERROR, "Error getting DMDA local info.\n");
733 return ierr;
734 }
735
736
737 xs = info.gxs; xe = xs + info.gxm;
738 ys = info.gys; ye = ys + info.gym;
739 zs = info.gzs; ze = zs + info.gzm;
740
741 /*
742 xs = info.xs; xe = xs + info.xm;
743 ys = info.ys; ye = ys + info.ym;
744 zs = info.zs; ze = zs + info.zm;
745 */
746
747 // Initialize min and max coordinates with extreme values
748 minCoords.x = minCoords.y = minCoords.z = PETSC_MAX_REAL;
749 maxCoords.x = maxCoords.y = maxCoords.z = PETSC_MIN_REAL;
750
751 LOG_ALLOW(LOCAL, LOG_TRACE, "[Rank %d] Grid indices (Including Ghosts): xs=%d, xe=%d, ys=%d, ye=%d, zs=%d, ze=%d.\n",rank, xs, xe, ys, ye, zs, ze);
752
753 // Iterate over the local grid to find min and max coordinates
754 for (k = zs; k < ze; k++) {
755 for (j = ys; j < ye; j++) {
756 for (i = xs; i < xe; i++) {
757 // Only consider nodes within the physical domain.
758 if(i < user->IM && j < user->JM && k < user->KM){
759 Cmpnts coord = coordArray[k][j][i];
760
761 // Update min and max coordinates
762 if (coord.x < minCoords.x) minCoords.x = coord.x;
763 if (coord.y < minCoords.y) minCoords.y = coord.y;
764 if (coord.z < minCoords.z) minCoords.z = coord.z;
765
766 if (coord.x > maxCoords.x) maxCoords.x = coord.x;
767 if (coord.y > maxCoords.y) maxCoords.y = coord.y;
768 if (coord.z > maxCoords.z) maxCoords.z = coord.z;
769 }
770 }
771 }
772 }
773
774
775 // Add tolerance to bboxes.
776 minCoords.x = minCoords.x - BBOX_TOLERANCE;
777 minCoords.y = minCoords.y - BBOX_TOLERANCE;
778 minCoords.z = minCoords.z - BBOX_TOLERANCE;
779
780 maxCoords.x = maxCoords.x + BBOX_TOLERANCE;
781 maxCoords.y = maxCoords.y + BBOX_TOLERANCE;
782 maxCoords.z = maxCoords.z + BBOX_TOLERANCE;
783
784 LOG_ALLOW(LOCAL,LOG_DEBUG," Tolerance added to the limits: %.8e .\n",(PetscReal)BBOX_TOLERANCE);
785
786 // Log the computed min and max coordinates
787 LOG_ALLOW(LOCAL, LOG_INFO,"[Rank %d] Bounding Box Ranges = X[%.6f, %.6f], Y[%.6f,%.6f], Z[%.6f, %.6f].\n",rank,minCoords.x, maxCoords.x,minCoords.y, maxCoords.y, minCoords.z, maxCoords.z);
788
789
790
791 // Restore the coordinate array
792 ierr = DMDAVecRestoreArrayRead(user->fda, coordinates, &coordArray);
793 if (ierr) {
794 LOG_ALLOW(LOCAL, LOG_ERROR, "Error restoring coordinate array.\n");
795 return ierr;
796 }
797
798 // Set the local bounding box
799 localBBox->min_coords = minCoords;
800 localBBox->max_coords = maxCoords;
801
802 // Update the bounding box inside the UserCtx for consistency
803 user->bbox = *localBBox;
804
805 LOG_ALLOW(GLOBAL, LOG_INFO, "Exiting the function successfully.\n");
806
808 PROFILE_FUNCTION_END;
809 PetscFunctionReturn(0);
810}
#define BBOX_TOLERANCE
Definition grid.c:6
@ LOG_ERROR
Critical errors that may halt the program.
Definition logging.h:28
Cmpnts max_coords
Maximum x, y, z coordinates of the bounding box.
Definition variables.h:156
Cmpnts min_coords
Minimum x, y, z coordinates of the bounding box.
Definition variables.h:155
PetscScalar x
Definition variables.h:101
PetscScalar z
Definition variables.h:101
PetscScalar y
Definition variables.h:101
BoundingBox bbox
Definition variables.h:822
A 3D point or vector with PetscScalar components.
Definition variables.h:100

◆ GatherAllBoundingBoxes()

PetscErrorCode GatherAllBoundingBoxes (UserCtx *user, BoundingBox **allBBoxes)

Gathers local bounding boxes from all MPI processes to rank 0.

This function computes the local bounding box on each process, then collects all local bounding boxes on the root process (rank 0) using MPI. The result is stored in an array of BoundingBox structures on rank 0.

Parameters
[in] user - Pointer to the user-defined context containing grid information.
[out] allBBoxes - Pointer to a pointer where the array of gathered bounding boxes will be stored on rank 0. The caller on rank 0 must free this array.
Returns
PetscErrorCode Returns 0 on success, non-zero on failure.

Gathers local bounding boxes from all MPI processes to rank 0.

Full API contract (arguments, ownership, side effects) is documented with the header declaration in include/grid.h.

See also
GatherAllBoundingBoxes()

Definition at line 821 of file grid.c.

822{
823 PetscErrorCode ierr;
824 PetscMPIInt rank, size;
825 BoundingBox *bboxArray = NULL;
826 BoundingBox localBBox;
827
828 PetscFunctionBeginUser;
829
831 PROFILE_FUNCTION_BEGIN;
832 /* Validate */
833 if (!user || !allBBoxes) SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_NULL,
834 "GatherAllBoundingBoxes: NULL pointer");
835
836 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRMPI(ierr);
837 ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRMPI(ierr);
838
839 /* Compute local bbox */
840 ierr = ComputeLocalBoundingBox(user, &localBBox); CHKERRQ(ierr);
841
842 /* Ensure everyone is synchronized before the gather */
843 MPI_Barrier(PETSC_COMM_WORLD);
844 LOG_ALLOW_SYNC(GLOBAL, LOG_DEBUG,
845 "Rank %d: about to MPI_Gather(localBBox)\n", rank);
846
847 /* Allocate on root */
848 if (rank == 0) {
849 bboxArray = (BoundingBox*)malloc(size * sizeof(BoundingBox));
850 if (!bboxArray) SETERRABORT(PETSC_COMM_WORLD, PETSC_ERR_MEM,
851 "GatherAllBoundingBoxes: malloc failed");
852 }
853
854 /* Collective: every rank must call */
855 ierr = MPI_Gather(&localBBox, sizeof(BoundingBox), MPI_BYTE,
856 bboxArray, sizeof(BoundingBox), MPI_BYTE,
857 0, PETSC_COMM_WORLD);
858 CHKERRMPI(ierr);
859
860 MPI_Barrier(PETSC_COMM_WORLD);
861 LOG_ALLOW_SYNC(GLOBAL, LOG_DEBUG,
862 "Rank %d: completed MPI_Gather(localBBox)\n", rank);
863
864 /* Return result */
865 if (rank == 0) {
866 *allBBoxes = bboxArray;
867 } else {
868 *allBBoxes = NULL;
869 }
870
872 PROFILE_FUNCTION_END;
873 PetscFunctionReturn(0);
874}
PetscErrorCode ComputeLocalBoundingBox(UserCtx *user, BoundingBox *localBBox)
Implementation of ComputeLocalBoundingBox().
Definition grid.c:679
Defines a 3D axis-aligned bounding box.
Definition variables.h:154

◆ BroadcastAllBoundingBoxes()

PetscErrorCode BroadcastAllBoundingBoxes (UserCtx *user, BoundingBox **bboxlist)

Broadcasts the bounding box information collected on rank 0 to all other ranks.

This function assumes that GatherAllBoundingBoxes() was previously called, so bboxlist is allocated and populated on rank 0. All other ranks will allocate memory for bboxlist, and this function will use MPI_Bcast to distribute the bounding box data to them.

Parameters
[in] user - Pointer to the UserCtx structure. (Currently unused in this function, but kept for consistency.)
[in,out] bboxlist - Pointer to the array of BoundingBoxes. On rank 0, this should point to a valid array of size 'size' (where size is the number of MPI ranks). On non-root ranks, this function will allocate memory for bboxlist.
Returns
PetscErrorCode Returns 0 on success, non-zero on MPI or PETSc-related errors.

Broadcasts the bounding box information collected on rank 0 to all other ranks.

Local to this translation unit.

Definition at line 883 of file grid.c.

884{
885 PetscErrorCode ierr;
886 (void)user;
887 PetscMPIInt rank, size;
888
889 PetscFunctionBeginUser;
890
892 PROFILE_FUNCTION_BEGIN;
893 if (!bboxlist) SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_NULL,
894 "BroadcastAllBoundingBoxes: NULL pointer");
895
896 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRMPI(ierr);
897 ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRMPI(ierr);
898
899 /* Non-root ranks must allocate before the Bcast */
900 if (rank != 0) {
901 *bboxlist = (BoundingBox*)malloc(size * sizeof(BoundingBox));
902 if (!*bboxlist) SETERRABORT(PETSC_COMM_WORLD, PETSC_ERR_MEM,
903 "BroadcastAllBoundingBoxes: malloc failed");
904 }
905
906 MPI_Barrier(PETSC_COMM_WORLD);
907 LOG_ALLOW_SYNC(GLOBAL, LOG_DEBUG,
908 "Rank %d: about to MPI_Bcast(%d boxes)\n", rank, size);
909
910 /* Collective: every rank must call */
911 ierr = MPI_Bcast(*bboxlist, size * sizeof(BoundingBox), MPI_BYTE,
912 0, PETSC_COMM_WORLD);
913 CHKERRMPI(ierr);
914
915 MPI_Barrier(PETSC_COMM_WORLD);
916 LOG_ALLOW_SYNC(GLOBAL, LOG_DEBUG,
917 "Rank %d: completed MPI_Bcast(%d boxes)\n", rank, size);
918
919
921 PROFILE_FUNCTION_END;
922 PetscFunctionReturn(0);
923}

◆ CalculateInletProperties()

PetscErrorCode CalculateInletProperties (UserCtx *user)

Calculates the center and area of the primary INLET face.

This function identifies the primary INLET face from the boundary face configurations, computes its geometric center and total area using a generic utility function, and stores these results in the simulation context.

Parameters
user - Pointer to the UserCtx containing boundary face information.
Returns
PetscErrorCode

Calculates the center and area of the primary INLET face.

Full API contract (arguments, ownership, side effects) is documented with the header declaration in include/grid.h.

See also
CalculateInletProperties()

Definition at line 933 of file grid.c.

934{
935 PetscErrorCode ierr;
936 BCFace inlet_face_id = -1;
937 PetscBool inlet_found = PETSC_FALSE;
938
939 PetscFunctionBeginUser;
941 PROFILE_FUNCTION_BEGIN;
942 // 1. Identify the primary inlet face from the configuration
943 for (int i = 0; i < 6; i++) {
944 if (user->boundary_faces[i].mathematical_type == INLET) {
945 inlet_face_id = user->boundary_faces[i].face_id;
946 inlet_found = PETSC_TRUE;
947 break; // Use the first inlet found
948 }
949 }
950
951 if (!inlet_found) {
952 LOG_ALLOW(GLOBAL, LOG_INFO, "No INLET face found. Skipping inlet center calculation.\n");
953 PROFILE_FUNCTION_END;
954 PetscFunctionReturn(0);
955 }
956
957 Cmpnts inlet_center;
958 PetscReal inlet_area;
959
960 // 2. Call the generic utility to compute the center and area of any face.
961 ierr = CalculateFaceCenterAndArea(user,inlet_face_id,&inlet_center,&inlet_area); CHKERRQ(ierr);
962
963 // 3. Store results in the SimCtx
964 user->simCtx->CMx_c = inlet_center.x;
965 user->simCtx->CMy_c = inlet_center.y;
966 user->simCtx->CMz_c = inlet_center.z;
967 user->simCtx->AreaInSum = inlet_area;
968
969 LOG_ALLOW(LOCAL, LOG_INFO,
970 "Rank[%d] Inlet Center: (%.6f, %.6f, %.6f), Area: %.6f\n",
971 user->simCtx->rank, inlet_center.x, inlet_center.y, inlet_center.z, inlet_area);
972
973 PROFILE_FUNCTION_END;
974 PetscFunctionReturn(0);
975
976}
PetscErrorCode CalculateFaceCenterAndArea(UserCtx *user, BCFace face_id, Cmpnts *face_center, PetscReal *face_area)
Implementation of CalculateFaceCenterAndArea().
Definition grid.c:1029
@ INLET
Definition variables.h:258
BoundaryFaceConfig boundary_faces[6]
Definition variables.h:829
SimCtx * simCtx
Back-pointer to the master simulation context.
Definition variables.h:814
PetscReal CMy_c
Definition variables.h:705
PetscReal CMz_c
Definition variables.h:705
BCType mathematical_type
Definition variables.h:336
PetscReal AreaInSum
Definition variables.h:726
PetscReal CMx_c
Definition variables.h:705
BCFace
Identifies the six logical faces of a structured computational block.
Definition variables.h:244

◆ CalculateOutletProperties()

PetscErrorCode CalculateOutletProperties (UserCtx *user)

Calculates the center and area of the primary OUTLET face.

This function identifies the primary OUTLET face from the boundary face configurations, computes its geometric center and total area using a generic utility function, and stores these results in the simulation context.

Parameters
user - Pointer to the UserCtx containing boundary face information.
Returns
PetscErrorCode

Calculates the center and area of the primary OUTLET face.

Full API contract (arguments, ownership, side effects) is documented with the header declaration in include/grid.h.

See also
CalculateOutletProperties()

Definition at line 986 of file grid.c.

987{
988 PetscErrorCode ierr;
989 BCFace outlet_face_id = -1;
990 PetscBool outlet_found = PETSC_FALSE;
991 PetscFunctionBeginUser;
992 PROFILE_FUNCTION_BEGIN;
993 // 1. Identify the primary outlet face from the configuration
994 for (int i = 0; i < 6; i++) {
995 if (user->boundary_faces[i].mathematical_type == OUTLET) {
996 outlet_face_id = user->boundary_faces[i].face_id;
997 outlet_found = PETSC_TRUE;
998 break; // Use the first outlet found
999 }
1000 }
1001 if (!outlet_found) {
1002 LOG_ALLOW(GLOBAL, LOG_INFO, "No OUTLET face found. Skipping outlet center calculation.\n");
1003 PROFILE_FUNCTION_END;
1004 PetscFunctionReturn(0);
1005 }
1006 PetscReal outlet_area;
1007 Cmpnts outlet_center;
1008 // 2. Call the generic utility to compute the center and area of any face
1009 ierr = CalculateFaceCenterAndArea(user,outlet_face_id,&outlet_center,&outlet_area); CHKERRQ(ierr);
1010 // 3. Store results in the SimCtx
1011 user->simCtx->AreaOutSum = outlet_area;
1012
1013 LOG_ALLOW(GLOBAL, LOG_INFO,
1014 "Outlet Center: (%.6f, %.6f, %.6f), Area: %.6f\n",
1015 outlet_center.x, outlet_center.y, outlet_center.z, outlet_area);
1016
1017 PROFILE_FUNCTION_END;
1018 PetscFunctionReturn(0);
1019}
@ OUTLET
Definition variables.h:257
PetscReal AreaOutSum
Definition variables.h:726

◆ CalculateFaceCenterAndArea()

PetscErrorCode CalculateFaceCenterAndArea (UserCtx *user, BCFace face_id, Cmpnts *face_center, PetscReal *face_area)

Calculates the geometric center and total area of a specified boundary face.

This function computes two key properties of a boundary face in the computational domain:

  1. Geometric Center: The average (x,y,z) position of all physical nodes on the face
  2. Total Area: The sum of face area vector magnitudes from all non-solid cells adjacent to the face

Indexing Architecture

The solver uses different indexing conventions for different field types:

Node-Centered Fields (Coordinates):

  • Direct indexing: Node n stored at coor[n]
  • For mx=26: Physical nodes [0-24], Dummy at [25]
  • For mz=98: Physical nodes [0-96], Dummy at [97]

Face-Centered Fields (Metrics: csi, eta, zet):

  • Direct indexing: Face n stored at csi/eta/zet[n]
  • For mx=26: Physical faces [0-24], Dummy at [25]
  • For mz=98: Physical faces [0-96], Dummy at [97]
  • Face at index k bounds cells k-1 and k

Cell-Centered Fields (nvert):

  • Shifted indexing: Physical cell c stored at nvert[c+1]
  • For mx=26 (24 cells): Cell 0→nvert[1], Cell 23→nvert[24]
  • For mz=98 (96 cells): Cell 0→nvert[1], Cell 95→nvert[96]
  • nvert[0] and nvert[mx-1] are ghost values

Face-to-Index Mapping

Example for a domain with mx=26, my=26, mz=98:

Face ID         Node Index   Face Metric    Adjacent Cell (shifted)     Physical Extent
BC_FACE_NEG_X   i=0          csi[k][j][0]   nvert[k][j][1]  (Cell 0)    j∈[0,24], k∈[0,96]
BC_FACE_POS_X   i=24         csi[k][j][24]  nvert[k][j][24] (Cell 23)   j∈[0,24], k∈[0,96]
BC_FACE_NEG_Y   j=0          eta[k][0][i]   nvert[k][1][i]  (Cell 0)    i∈[0,24], k∈[0,96]
BC_FACE_POS_Y   j=24         eta[k][24][i]  nvert[k][24][i] (Cell 23)   i∈[0,24], k∈[0,96]
BC_FACE_NEG_Z   k=0          zet[0][j][i]   nvert[1][j][i]  (Cell 0)    i∈[0,24], j∈[0,24]
BC_FACE_POS_Z   k=96         zet[96][j][i]  nvert[96][j][i] (Cell 95)   i∈[0,24], j∈[0,24]

Algorithm

The function performs two separate computations with different loop bounds:

1. Center Calculation (uses ALL physical nodes):

  • Loop over all physical nodes on the face (excluding dummy indices)
  • Accumulate coordinate sums: Σx, Σy, Σz
  • Count number of nodes
  • Average: center = (Σx/n, Σy/n, Σz/n)

2. Area Calculation (uses INTERIOR cells only):

  • Loop over interior cell range to avoid accessing ghost values in nvert
  • For each face adjacent to a fluid cell (nvert < 0.1):
    • Compute area magnitude: |csi/eta/zet| = √(x² + y² + z²)
    • Accumulate to total area

Loop Bound Details

Why different bounds for center vs. area?

For BC_FACE_NEG_X at i=0 with my=26, mz=98:

Center calculation (coordinates):

  • j ∈ [ys, j_max): Includes j=[0,24] (25 nodes), excludes dummy at j=25
  • k ∈ [zs, k_max): Includes k=[0,96] (97 nodes), excludes dummy at k=97
  • Total: 25 × 97 = 2,425 nodes

Area calculation (nvert checks):

  • j ∈ [lys, lye): j=[1,24] (24 values), excludes boundaries
  • k ∈ [lzs, lze): k=[1,96] (96 values), excludes boundaries
  • Why restricted?
    • At j=0: nvert[k][0][1] is ghost (no cell at j=-1)
    • At j=25: nvert[k][25][1] is ghost (no cell at j=24, index 25 is dummy)
    • At k=0: nvert[0][j][1] is ghost (no cell at k=-1)
    • At k=97: nvert[97][j][1] is ghost (no cell at k=96, index 97 is dummy)
  • Total: 24 × 96 = 2,304 interior cells adjacent to face

Area Calculation Formulas

Face area contributions are computed from metric tensor magnitudes:

  • i-faces (±Xi): Area = |csi| = √(csi_x² + csi_y² + csi_z²)
  • j-faces (±Eta): Area = |eta| = √(eta_x² + eta_y² + eta_z²)
  • k-faces (±Zeta): Area = |zet| = √(zet_x² + zet_y² + zet_z²)
Parameters
  [in]  user         Pointer to UserCtx containing grid info, DMs, and field vectors
  [in]  face_id      Enum identifying which boundary face to analyze (BC_FACE_NEG_X, etc.)
  [out] face_center  Pointer to Cmpnts structure to store the computed geometric center (x,y,z)
  [out] face_area    Pointer to PetscReal to store the computed total face area
Returns
  PetscErrorCode: 0 on success, non-zero PETSc error code on failure
Note
  • This function uses MPI_Allreduce, so it must be called collectively by all ranks
  • Only ranks that own the specified boundary face contribute to the calculation
  • Center calculation includes ALL physical nodes on the face
  • Area calculation ONLY includes faces adjacent to fluid cells (nvert < 0.1)
  • Dummy/unused indices (e.g., k=97, j=25 for the standard test case) are excluded
Warning
  • Assumes grid and field arrays have been properly initialized
  • An incorrect face_id value results in zero contribution from all ranks
See also
  • CanRankServiceFace() for determining rank ownership of boundary faces
  • BCFace enum for valid face_id values
  • LOG_FIELD_ANATOMY() for debugging field indexing
Example Usage:
Cmpnts inlet_center;
PetscReal inlet_area;
ierr = CalculateFaceCenterAndArea(user, BC_FACE_NEG_Z, &inlet_center, &inlet_area); CHKERRQ(ierr);
ierr = PetscPrintf(PETSC_COMM_WORLD, "Inlet center: (%.4f, %.4f, %.4f), Area: %.6f\n",
                   inlet_center.x, inlet_center.y, inlet_center.z, inlet_area); CHKERRQ(ierr);

Calculates the geometric center and total area of a specified boundary face.

Full API contract (arguments, ownership, side effects) is documented with the header declaration in include/grid.h.

See also
CalculateFaceCenterAndArea()

Definition at line 1029 of file grid.c.

1031{
1032 PetscErrorCode ierr;
1033 DMDALocalInfo info;
1034
1035 // ========================================================================
1036 // Local accumulators for this rank's contribution
1037 // ========================================================================
1038 PetscReal local_sum[3] = {0.0, 0.0, 0.0}; ///< Local sum of (x,y,z) coordinates
1039 PetscReal localAreaSum = 0.0; ///< Local sum of face area magnitudes
1040 PetscCount local_n_points = 0; ///< Local count of nodes
1041
1042 // ========================================================================
1043 // Global accumulators after MPI reduction
1044 // ========================================================================
1045 PetscReal global_sum[3] = {0.0, 0.0, 0.0}; ///< Global sum of coordinates
1046 PetscReal globalAreaSum = 0.0; ///< Global sum of areas
1047 PetscCount global_n_points = 0; ///< Global count of nodes
1048
1049 // ========================================================================
1050 // Grid information and array pointers
1051 // ========================================================================
1052 info = user->info;
1053
1054 // Rank's owned range in global indices
1055 PetscInt xs = info.xs, xe = info.xs + info.xm; ///< i-range: [xs, xe)
1056 PetscInt ys = info.ys, ye = info.ys + info.ym; ///< j-range: [ys, ye)
1057 PetscInt zs = info.zs, ze = info.zs + info.zm; ///< k-range: [zs, ze)
1058
1059 // Global domain dimensions (total allocated, includes dummy at end)
1060 PetscInt mx = info.mx, my = info.my, mz = info.mz;
1061 PetscInt IM = user->IM; ///< Physical domain size in i (exclude dummy)
1062 PetscInt JM = user->JM; ///< Physical domain size in j (exclude dummy)
1063 PetscInt KM = user->KM; ///< Physical domain size in k (exclude dummy)
1064
1065 // ========================================================================
1066 // Interior loop bounds (adjusted to avoid ghost/boundary cells)
1067 // These are used for nvert checks where we need valid cell indices
1068 // ========================================================================
1069 PetscInt lxs = xs; if(xs == 0) lxs = xs + 1; ///< Start at 1 if on -Xi boundary
1070 PetscInt lxe = xe; if(xe == mx) lxe = xe - 1; ///< End at mx-1 if on +Xi boundary
1071 PetscInt lys = ys; if(ys == 0) lys = ys + 1; ///< Start at 1 if on -Eta boundary
1072 PetscInt lye = ye; if(ye == my) lye = ye - 1; ///< End at my-1 if on +Eta boundary
1073 PetscInt lzs = zs; if(zs == 0) lzs = zs + 1; ///< Start at 1 if on -Zeta boundary
1074 PetscInt lze = ze; if(ze == mz) lze = ze - 1; ///< End at mz-1 if on +Zeta boundary
1075
1076 // ========================================================================
1077 // Physical node bounds (exclude dummy indices at mx-1, my-1, mz-1)
1078 // These are used for coordinate loops where we want ALL physical nodes
1079 // ========================================================================
1080 PetscInt i_max = (xe == mx) ? mx - 1 : xe; ///< Exclude dummy at i=mx-1 (e.g., i=25)
1081 PetscInt j_max = (ye == my) ? my - 1 : ye; ///< Exclude dummy at j=my-1 (e.g., j=25)
1082 PetscInt k_max = (ze == mz) ? mz - 1 : ze; ///< Exclude dummy at k=mz-1 (e.g., k=97)
1083
1084 // ========================================================================
1085 // Array pointers for field access
1086 // ========================================================================
1087 Vec lCoor; ///< Local ghosted coordinate vector
1088 Cmpnts ***coor; ///< Nodal coordinates [k][j][i]
1089 Cmpnts ***csi, ***eta, ***zet; ///< Face metric tensors [k][j][i]
1090 PetscReal ***nvert; ///< Cell blanking field [k][j][i] (shifted +1)
1091
1092 PetscFunctionBeginUser;
1094
1095 // ========================================================================
1096 // Step 1: Check if this rank owns the specified boundary face
1097 // ========================================================================
1098 PetscBool owns_face = PETSC_FALSE;
1099 ierr = CanRankServiceFace(&info,IM,JM,KM,face_id,&owns_face); CHKERRQ(ierr);
1100 if(owns_face){
1101 // ========================================================================
1102 // Step 2: Get read-only array access for all required fields
1103 // ========================================================================
1104 ierr = DMGetCoordinatesLocal(user->da, &lCoor); CHKERRQ(ierr);
1105 ierr = DMDAVecGetArrayRead(user->fda, lCoor, &coor); CHKERRQ(ierr);
1106 ierr = DMDAVecGetArrayRead(user->da, user->lNvert, &nvert); CHKERRQ(ierr);
1107 ierr = DMDAVecGetArrayRead(user->fda, user->lCsi, &csi); CHKERRQ(ierr);
1108 ierr = DMDAVecGetArrayRead(user->fda, user->lEta, &eta); CHKERRQ(ierr);
1109 ierr = DMDAVecGetArrayRead(user->fda, user->lZet, &zet); CHKERRQ(ierr);
1110
1111 // ========================================================================
1112 // Step 3: Loop over the specified face and accumulate center and area
1113 // ========================================================================
1114 switch (face_id) {
1115
1116 // ====================================================================
1117 // BC_FACE_NEG_X: Face at i=0 (bottom boundary in i-direction)
1118 // ====================================================================
1119 case BC_FACE_NEG_X:
1120 if (xs == 0) {
1121 PetscInt i = 0; // Face is at node index i=0
1122
1123 // ---- Part 1: Center calculation (ALL physical nodes) ----
1124 // Loop over ALL physical nodes on this face
1125 // For my=26, mz=98: j∈[0,24], k∈[0,96] → 25×97 = 2,425 nodes
1126 for (PetscInt k = zs; k < k_max; k++) {
1127 for (PetscInt j = ys; j < j_max; j++) {
1128 // Accumulate coordinates at node [k][j][0]
1129 local_sum[0] += coor[k][j][i].x;
1130 local_sum[1] += coor[k][j][i].y;
1131 local_sum[2] += coor[k][j][i].z;
1132 local_n_points++;
1133 }
1134 }
1135
1136 // ---- Part 2: Area calculation (INTERIOR cells only) ----
1137 // Loop over interior range where nvert checks are valid
1138 // For my=26, mz=98: j∈[1,24], k∈[1,96] → 24×96 = 2,304 cells
1139 for (PetscInt k = lzs; k < lze; k++) {
1140 for (PetscInt j = lys; j < lye; j++) {
1141 // Check if adjacent cell is fluid
1142 // nvert[k][j][i+1] = nvert[k][j][1] checks Cell 0
1143 // (Physical Cell 0 in j-k plane, stored at shifted index [1])
1144 if (nvert[k][j][i+1] < 0.1) {
1145 // Cell is fluid - add face area contribution
1146 // Face area = magnitude of csi metric at [k][j][0]
1147 localAreaSum += sqrt(csi[k][j][i].x * csi[k][j][i].x +
1148 csi[k][j][i].y * csi[k][j][i].y +
1149 csi[k][j][i].z * csi[k][j][i].z);
1150 }
1151 }
1152 }
1153 }
1154 break;
1155
1156 // ====================================================================
1157 // BC_FACE_POS_X: Face at i=IM-1 (top boundary in i-direction)
1158 // ====================================================================
1159 case BC_FACE_POS_X:
1160 if (xe == mx) {
1161 PetscInt i = mx - 2; // Last physical node (e.g., i=24 for mx=26)
1162
1163 // ---- Part 1: Center calculation (ALL physical nodes) ----
1164 for (PetscInt k = zs; k < k_max; k++) {
1165 for (PetscInt j = ys; j < j_max; j++) {
1166 local_sum[0] += coor[k][j][i].x;
1167 local_sum[1] += coor[k][j][i].y;
1168 local_sum[2] += coor[k][j][i].z;
1169 local_n_points++;
1170 }
1171 }
1172
1173 // ---- Part 2: Area calculation (INTERIOR cells only) ----
1174 for (PetscInt k = lzs; k < lze; k++) {
1175 for (PetscInt j = lys; j < lye; j++) {
1176 // Check if adjacent cell is fluid
1177 // nvert[k][j][i] = nvert[k][j][24] checks last cell (Cell 23)
1178 // (Physical Cell 23, stored at shifted index [24])
1179 if (nvert[k][j][i] < 0.1) {
1180 // Face area = magnitude of csi metric at [k][j][24]
1181 localAreaSum += sqrt(csi[k][j][i].x * csi[k][j][i].x +
1182 csi[k][j][i].y * csi[k][j][i].y +
1183 csi[k][j][i].z * csi[k][j][i].z);
1184 }
1185 }
1186 }
1187 }
1188 break;
1189
1190 // ====================================================================
1191 // BC_FACE_NEG_Y: Face at j=0 (bottom boundary in j-direction)
1192 // ====================================================================
1193 case BC_FACE_NEG_Y:
1194 if (ys == 0) {
1195 PetscInt j = 0; // Face is at node index j=0
1196
1197 // ---- Part 1: Center calculation (ALL physical nodes) ----
1198 // For mx=26, mz=98: i∈[0,24], k∈[0,96] → 25×97 = 2,425 nodes
1199 for (PetscInt k = zs; k < k_max; k++) {
1200 for (PetscInt i = xs; i < i_max; i++) {
1201 local_sum[0] += coor[k][j][i].x;
1202 local_sum[1] += coor[k][j][i].y;
1203 local_sum[2] += coor[k][j][i].z;
1204 local_n_points++;
1205 }
1206 }
1207
1208 // ---- Part 2: Area calculation (INTERIOR cells only) ----
1209 // For mx=26, mz=98: i∈[1,24], k∈[1,96] → 24×96 = 2,304 cells
1210 for (PetscInt k = lzs; k < lze; k++) {
1211 for (PetscInt i = lxs; i < lxe; i++) {
1212 // nvert[k][j+1][i] = nvert[k][1][i] checks Cell 0
1213 if (nvert[k][j+1][i] < 0.1) {
1214 // Face area = magnitude of eta metric at [k][0][i]
1215 localAreaSum += sqrt(eta[k][j][i].x * eta[k][j][i].x +
1216 eta[k][j][i].y * eta[k][j][i].y +
1217 eta[k][j][i].z * eta[k][j][i].z);
1218 }
1219 }
1220 }
1221 }
1222 break;
1223
1224 // ====================================================================
1225 // BC_FACE_POS_Y: Face at j=JM-1 (top boundary in j-direction)
1226 // ====================================================================
1227 case BC_FACE_POS_Y:
1228 if (ye == my) {
1229 PetscInt j = my - 2; // Last physical node (e.g., j=24 for my=26)
1230
1231 // ---- Part 1: Center calculation (ALL physical nodes) ----
1232 for (PetscInt k = zs; k < k_max; k++) {
1233 for (PetscInt i = xs; i < i_max; i++) {
1234 local_sum[0] += coor[k][j][i].x;
1235 local_sum[1] += coor[k][j][i].y;
1236 local_sum[2] += coor[k][j][i].z;
1237 local_n_points++;
1238 }
1239 }
1240
1241 // ---- Part 2: Area calculation (INTERIOR cells only) ----
1242 for (PetscInt k = lzs; k < lze; k++) {
1243 for (PetscInt i = lxs; i < lxe; i++) {
1244 // nvert[k][j][i] = nvert[k][24][i] checks last cell (Cell 23)
1245 if (nvert[k][j][i] < 0.1) {
1246 // Face area = magnitude of eta metric at [k][24][i]
1247 localAreaSum += sqrt(eta[k][j][i].x * eta[k][j][i].x +
1248 eta[k][j][i].y * eta[k][j][i].y +
1249 eta[k][j][i].z * eta[k][j][i].z);
1250 }
1251 }
1252 }
1253 }
1254 break;
1255
1256 // ====================================================================
1257 // BC_FACE_NEG_Z: Face at k=0 (inlet, bottom boundary in k-direction)
1258 // ====================================================================
1259 case BC_FACE_NEG_Z:
1260 if (zs == 0) {
1261 PetscInt k = 0; // Face is at node index k=0
1262
1263 // ---- Part 1: Center calculation (ALL physical nodes) ----
1264 // For mx=26, my=26: i∈[0,24], j∈[0,24] → 25×25 = 625 nodes
1265 for (PetscInt j = ys; j < j_max; j++) {
1266 for (PetscInt i = xs; i < i_max; i++) {
1267 local_sum[0] += coor[k][j][i].x;
1268 local_sum[1] += coor[k][j][i].y;
1269 local_sum[2] += coor[k][j][i].z;
1270 local_n_points++;
1271 }
1272 }
1273
1274 // ---- Part 2: Area calculation (INTERIOR cells only) ----
1275 // For mx=26, my=26: i∈[1,24], j∈[1,24] → 24×24 = 576 cells
1276 for (PetscInt j = lys; j < lye; j++) {
1277 for (PetscInt i = lxs; i < lxe; i++) {
1278 // nvert[k+1][j][i] = nvert[1][j][i] checks Cell 0
1279 // (Physical Cell 0 in i-j plane, stored at shifted index [1])
1280 if (nvert[k+1][j][i] < 0.1) {
1281 // Face area = magnitude of zet metric at [0][j][i]
1282 localAreaSum += sqrt(zet[k][j][i].x * zet[k][j][i].x +
1283 zet[k][j][i].y * zet[k][j][i].y +
1284 zet[k][j][i].z * zet[k][j][i].z);
1285 }
1286 }
1287 }
1288 }
1289 break;
1290
1291 // ====================================================================
1292 // BC_FACE_POS_Z: Face at k=KM-1 (outlet, top boundary in k-direction)
1293 // ====================================================================
1294 case BC_FACE_POS_Z:
1295 if (ze == mz) {
1296 PetscInt k = mz - 2; // Last physical node (e.g., k=96 for mz=98)
1297
1298 // ---- Part 1: Center calculation (ALL physical nodes) ----
1299 // For mx=26, my=26: i∈[0,24], j∈[0,24] → 25×25 = 625 nodes
1300 for (PetscInt j = ys; j < j_max; j++) {
1301 for (PetscInt i = xs; i < i_max; i++) {
1302 local_sum[0] += coor[k][j][i].x;
1303 local_sum[1] += coor[k][j][i].y;
1304 local_sum[2] += coor[k][j][i].z;
1305 local_n_points++;
1306 }
1307 }
1308
1309 // ---- Part 2: Area calculation (INTERIOR cells only) ----
1310 // For mx=26, my=26: i∈[1,24], j∈[1,24] → 24×24 = 576 cells
1311 for (PetscInt j = lys; j < lye; j++) {
1312 for (PetscInt i = lxs; i < lxe; i++) {
1313 // nvert[k][j][i] = nvert[96][j][i] checks last cell (Cell 95)
1314 // (Physical Cell 95, stored at shifted index [96])
1315 if (nvert[k][j][i] < 0.1) {
1316 // Face area = magnitude of zet metric at [96][j][i]
1317 localAreaSum += sqrt(zet[k][j][i].x * zet[k][j][i].x +
1318 zet[k][j][i].y * zet[k][j][i].y +
1319 zet[k][j][i].z * zet[k][j][i].z);
1320 }
1321 }
1322 }
1323 }
1324 break;
1325
1326 default:
1327 SETERRQ(PETSC_COMM_WORLD, PETSC_ERR_ARG_OUTOFRANGE,
1328 "Unknown face_id %d in CalculateFaceCenterAndArea", face_id);
1329 }
1330
1331 // ========================================================================
1332 // Step 4: Restore array access (release pointers)
1333 // ========================================================================
1334 ierr = DMDAVecRestoreArrayRead(user->fda, lCoor, &coor); CHKERRQ(ierr);
1335 ierr = DMDAVecRestoreArrayRead(user->da, user->lNvert, &nvert); CHKERRQ(ierr);
1336 ierr = DMDAVecRestoreArrayRead(user->fda, user->lCsi, &csi); CHKERRQ(ierr);
1337 ierr = DMDAVecRestoreArrayRead(user->fda, user->lEta, &eta); CHKERRQ(ierr);
1338 ierr = DMDAVecRestoreArrayRead(user->fda, user->lZet, &zet); CHKERRQ(ierr);
1339 }
1340 // ========================================================================
1341 // Step 5: Perform MPI reductions to get global sums
1342 // ========================================================================
1343 // Sum coordinate contributions from all ranks
1344 ierr = MPI_Allreduce(local_sum, global_sum, 3, MPI_DOUBLE, MPI_SUM,
1345 PETSC_COMM_WORLD); CHKERRQ(ierr);
1346
1347 // Sum node counts from all ranks
1348 ierr = MPI_Allreduce(&local_n_points, &global_n_points, 1, MPI_COUNT, MPI_SUM,
1349 PETSC_COMM_WORLD); CHKERRQ(ierr);
1350
1351 // Sum area contributions from all ranks
1352 ierr = MPI_Allreduce(&localAreaSum, &globalAreaSum, 1, MPI_DOUBLE, MPI_SUM,
1353 PETSC_COMM_WORLD); CHKERRQ(ierr);
1354
1355 // ========================================================================
1356 // Step 6: Calculate geometric center by averaging coordinates
1357 // ========================================================================
1358 if (global_n_points > 0) {
1359 face_center->x = global_sum[0] / global_n_points;
1360 face_center->y = global_sum[1] / global_n_points;
1361 face_center->z = global_sum[2] / global_n_points;
1363 "Calculated center for Face %s: (x=%.4f, y=%.4f, z=%.4f) from %lld nodes\n",
1364 BCFaceToString(face_id),
1365 face_center->x, face_center->y, face_center->z,
1366 (long long)global_n_points);
1367 } else {
1368 // No nodes found - this should not happen for a valid face
1370 "WARNING: Face %s identified but no grid points found. Center not calculated.\n",
1371 BCFaceToString(face_id));
1372 face_center->x = face_center->y = face_center->z = 0.0;
1373 }
1374
1375 // ========================================================================
1376 // Step 7: Return computed total area
1377 // ========================================================================
1378 *face_area = globalAreaSum;
1380 "Calculated area for Face %s: Area=%.6f\n",
1381 BCFaceToString(face_id), *face_area);
1382
1383 PetscFunctionReturn(0);
1384}