PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
grid.h File Reference

Public interface for grid, solver, and metric setup routines. More...

#include "variables.h"
#include "logging.h"
#include "io.h"
#include "setup.h"


Functions

PetscErrorCode DefineAllGridDimensions (SimCtx *simCtx)
 Orchestrates the parsing and setting of grid dimensions for all blocks.
 
PetscErrorCode InitializeAllGridDMs (SimCtx *simCtx)
 Orchestrates the creation of DMDA objects for every block and multigrid level.
 
PetscErrorCode AssignAllGridCoordinates (SimCtx *simCtx)
 Orchestrates the assignment of physical coordinates to all DMDA objects.
 
PetscErrorCode ComputeLocalBoundingBox (UserCtx *user, BoundingBox *localBBox)
 Computes the local bounding box of the grid on the current process.
 
PetscErrorCode GatherAllBoundingBoxes (UserCtx *user, BoundingBox **allBBoxes)
 Gathers local bounding boxes from all MPI processes to rank 0.
 
PetscErrorCode BroadcastAllBoundingBoxes (UserCtx *user, BoundingBox **bboxlist)
 Broadcasts the bounding box information collected on rank 0 to all other ranks.
 

Detailed Description

Public interface for grid, solver, and metric setup routines.

Definition in file grid.h.
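
The three orchestration routines declared here are designed to be called in sequence during solver setup. The sketch below is illustrative only: the driver name SetupGridHierarchy is hypothetical and not part of this API; only the three calls and their order come from this header.

#include "grid.h"

/* Hypothetical setup driver (name is illustrative, not part of grid.h). */
PetscErrorCode SetupGridHierarchy(SimCtx *simCtx)
{
    PetscErrorCode ierr;
    PetscFunctionBeginUser;

    ierr = DefineAllGridDimensions(simCtx);  CHKERRQ(ierr); /* parse per-block dimensions and geometry        */
    ierr = InitializeAllGridDMs(simCtx);     CHKERRQ(ierr); /* build the DMDA hierarchy for all levels/blocks */
    ierr = AssignAllGridCoordinates(simCtx); CHKERRQ(ierr); /* set physical coordinates, restrict to coarse   */

    PetscFunctionReturn(0);
}

The bounding-box helpers (ComputeLocalBoundingBox, GatherAllBoundingBoxes, BroadcastAllBoundingBoxes) are typically used after the grids have been set up; see the usage sketches in their entries below.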

Function Documentation

◆ DefineAllGridDimensions()

PetscErrorCode DefineAllGridDimensions(SimCtx *simCtx)

Orchestrates the parsing and setting of grid dimensions for all blocks.

This function serves as the high-level entry point for defining the geometric properties of each grid block in the simulation. It iterates through every block defined by simCtx->block_number.

For each block, it performs two key actions:

  1. It explicitly sets the block's index (_this) in the corresponding UserCtx struct for the finest multigrid level. This makes the context "self-aware".
  2. It calls a helper function (ParseAndSetGridInputs) to handle the detailed work of parsing options or files to populate the rest of the geometric properties for that specific block (e.g., IM, Min_X, rx).
Parameters
simCtx  The master SimCtx, which contains the number of blocks and the UserCtx hierarchy to be configured.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.

Definition at line 57 of file grid.c.

{
    PetscErrorCode ierr;
    PetscInt       nblk = simCtx->block_number;
    UserCtx       *fine_users;

    PetscFunctionBeginUser;

    if (simCtx->usermg.mglevels == 0) {
        SETERRQ(PETSC_COMM_WORLD, PETSC_ERR_ARG_WRONGSTATE, "MG levels not set. Cannot get finest_users.");
    }
    // Get the UserCtx array for the finest grid level
    fine_users = simCtx->usermg.mgctx[simCtx->usermg.mglevels - 1].user;

    LOG_ALLOW(GLOBAL, LOG_INFO, "Defining grid dimensions for %d blocks...\n", nblk);

    // Loop over each block to configure its grid dimensions and geometry.
    for (PetscInt bi = 0; bi < nblk; bi++) {
        LOG_ALLOW(LOCAL, LOG_DEBUG, "Rank %d: --- Configuring Geometry for Block %d ---\n", simCtx->rank, bi);

        // Before calling any helpers, set the block index in the context.
        // This makes the UserCtx self-aware of which block it represents.
        fine_users[bi]._this = bi;

        // Call the helper function for this specific block. It can now derive
        // all necessary information from the UserCtx pointer it receives.
        ierr = ParseAndSetGridInputs(&fine_users[bi]); CHKERRQ(ierr);
    }

    PetscFunctionReturn(0);
}
static PetscErrorCode ParseAndSetGridInputs(UserCtx *user)
Determines the grid source and calls the appropriate parsing routine.
Definition grid.c:20
#define LOCAL
Logging scope definitions for controlling message output.
Definition logging.h:44
#define GLOBAL
Scope for global logging across all processes.
Definition logging.h:45
#define LOG_ALLOW(scope, level, fmt,...)
Logging macro that checks both the log level and whether the calling function is in the allowed-funct...
Definition logging.h:207
@ LOG_INFO
Informational messages about program execution.
Definition logging.h:32
@ LOG_DEBUG
Detailed debugging information.
Definition logging.h:33
UserCtx * user
Definition variables.h:418
PetscMPIInt rank
Definition variables.h:516
PetscInt block_number
Definition variables.h:562
UserMG usermg
Definition variables.h:599
PetscInt _this
Definition variables.h:643
PetscInt mglevels
Definition variables.h:425
MGCtx * mgctx
Definition variables.h:428
User-defined context containing data specific to a single computational grid level.
Definition variables.h:630

◆ InitializeAllGridDMs()

PetscErrorCode InitializeAllGridDMs(SimCtx *simCtx)

Orchestrates the creation of DMDA objects for every block and multigrid level.

This function systematically builds the entire DMDA hierarchy. It first calculates the dimensions (IM, JM, KM) of all coarse grids from the finest grid's dimensions and the semi-coarsening flags. It then iterates from the coarsest to the finest level, calling a helper function (InitializeSingleGridDM) to create the DMs for each block, ensuring that finer grids are properly aligned with their coarser parents for multigrid efficiency.

Parameters
simCtx  The master SimCtx, containing the configured UserCtx hierarchy.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.
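
For intuition, the coarsening rule used in the implementation below (IM_coarse = isc ? IM_fine : (IM_fine + 1) / 2, and likewise for JM/KM) and its validation check can be verified by hand. The snippet is a worked illustration, not part of the library:

#include <petscsys.h>

/* Worked example of the coarsening rule and its validation check.
 * A 65-node direction with full coarsening (isc = 0) halves cleanly;
 * a 64-node direction does not, which triggers the warning path in the code below. */
static void CoarseningExample(void)
{
    PetscInt IM_fine = 65, isc = 0;
    PetscInt IM_coarse = isc ? IM_fine : (IM_fine + 1) / 2;            /* = 33 */
    PetscInt check_i   = IM_coarse * (2 - isc) - (IM_fine + 1 - isc);  /* 66 - 66 = 0: coarsenable */

    IM_fine   = 64;
    IM_coarse = isc ? IM_fine : (IM_fine + 1) / 2;                     /* = 32 */
    check_i   = IM_coarse * (2 - isc) - (IM_fine + 1 - isc);           /* 64 - 65 = -1: warning path */
    (void)check_i;
}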

Definition at line 230 of file grid.c.

{
    PetscErrorCode ierr;
    UserMG        *usermg = &simCtx->usermg;
    MGCtx         *mgctx  = usermg->mgctx;
    PetscInt       nblk   = simCtx->block_number;

    PetscFunctionBeginUser;
    LOG_ALLOW(GLOBAL, LOG_INFO, "Creating DMDA objects for all levels and blocks...\n");

    // --- Part 1: Calculate Coarse Grid Dimensions & VALIDATE ---
    LOG_ALLOW(GLOBAL, LOG_DEBUG, "Calculating and validating coarse grid dimensions...\n");
    for (PetscInt level = usermg->mglevels - 2; level >= 0; level--) {
        for (PetscInt bi = 0; bi < nblk; bi++) {
            UserCtx *user_coarse = &mgctx[level].user[bi];
            UserCtx *user_fine   = &mgctx[level + 1].user[bi];

            user_coarse->IM = user_fine->isc ? user_fine->IM : (user_fine->IM + 1) / 2;
            user_coarse->JM = user_fine->jsc ? user_fine->JM : (user_fine->JM + 1) / 2;
            user_coarse->KM = user_fine->ksc ? user_fine->KM : (user_fine->KM + 1) / 2;

            LOG_ALLOW_SYNC(LOCAL, LOG_DEBUG, "Rank %d: Block %d, Level %d dims calculated: %d x %d x %d\n",
                           simCtx->rank, bi, level, user_coarse->IM, user_coarse->JM, user_coarse->KM);

            // Validation check from legacy MGDACreate to ensure coarsening is possible
            PetscInt check_i = user_coarse->IM * (2 - user_coarse->isc) - (user_fine->IM + 1 - user_coarse->isc);
            PetscInt check_j = user_coarse->JM * (2 - user_coarse->jsc) - (user_fine->JM + 1 - user_coarse->jsc);
            PetscInt check_k = user_coarse->KM * (2 - user_coarse->ksc) - (user_fine->KM + 1 - user_coarse->ksc);

            if (check_i + check_j + check_k != 0) {
                // SETERRQ(PETSC_COMM_WORLD, PETSC_ERR_ARG_WRONG,
                //         "Grid at level %d, block %d cannot be coarsened from %dx%dx%d to %dx%dx%d with the given semi-coarsening flags. Check grid dimensions.",
                //         level, bi, user_fine->IM, user_fine->JM, user_fine->KM, user_coarse->IM, user_coarse->JM, user_coarse->KM);
                LOG(GLOBAL, LOG_WARNING, "WARNING: Grid at level %d, block %d can't be consistently coarsened further.\n", level, bi);
            }
        }
    }

    // --- Part 2: Create DMs from Coarse to Fine for each Block ---
    for (PetscInt bi = 0; bi < nblk; bi++) {
        LOG_ALLOW_SYNC(GLOBAL, LOG_DEBUG, "--- Creating DMs for Block %d ---\n", bi);

        // Create the coarsest level DM first (passing NULL for the coarse_user)
        ierr = InitializeSingleGridDM(&mgctx[0].user[bi], NULL); CHKERRQ(ierr);

        // Create finer level DMs, passing the next-coarser context for alignment
        for (PetscInt level = 1; level < usermg->mglevels; level++) {
            ierr = InitializeSingleGridDM(&mgctx[level].user[bi], &mgctx[level-1].user[bi]); CHKERRQ(ierr);
        }
    }

    // --- Optional: View the finest DM for debugging verification ---
    if (get_log_level() >= LOG_DEBUG) {
        LOG_ALLOW_SYNC(GLOBAL, LOG_INFO, "--- Viewing Finest DMDA (Level %d, Block 0) ---\n", usermg->mglevels - 1);
        ierr = DMView(mgctx[usermg->mglevels - 1].user[0].da, PETSC_VIEWER_STDOUT_WORLD); CHKERRQ(ierr);
    }

    LOG_ALLOW(GLOBAL, LOG_INFO, "DMDA object creation complete.\n");
    PetscFunctionReturn(0);
}
static PetscErrorCode InitializeSingleGridDM(UserCtx *user, UserCtx *coarse_user)
Creates the DMDA objects (da and fda) for a single UserCtx.
Definition grid.c:107
#define LOG_ALLOW_SYNC(scope, level, fmt,...)
Definition logging.h:274
#define LOG(scope, level, fmt,...)
Logging macro for PETSc-based applications with scope control.
Definition logging.h:91
LogLevel get_log_level()
Retrieves the current logging level from the environment variable LOG_LEVEL.
Definition logging.c:49
@ LOG_WARNING
Non-critical issues that warrant attention.
Definition logging.h:30
PetscInt isc
Definition variables.h:643
PetscInt ksc
Definition variables.h:643
PetscInt KM
Definition variables.h:639
PetscInt jsc
Definition variables.h:643
PetscInt JM
Definition variables.h:639
PetscInt IM
Definition variables.h:639
Context for Multigrid operations.
Definition variables.h:417
User-level context for managing the entire multigrid hierarchy.
Definition variables.h:424

◆ AssignAllGridCoordinates()

PetscErrorCode AssignAllGridCoordinates(SimCtx *simCtx)

Orchestrates the assignment of physical coordinates to all DMDA objects.

This function manages the entire process of populating the coordinate vectors for every DMDA across all multigrid levels and blocks. It follows a two-part strategy that is essential for multigrid methods:

  1. Populate Finest Level: It first loops through each block and calls a helper (SetFinestLevelCoordinates) to set the physical coordinates for the highest-resolution grid (the finest multigrid level).
  2. Restrict to Coarser Levels: It then iterates downwards from the finest level, calling a helper (RestrictCoordinates) to copy the coordinate values from the fine grid nodes to their corresponding parent nodes on the coarser grids. This ensures all levels represent the exact same geometry; see the index sketch after the parameter list below.
Parameters
simCtx  The master SimCtx, containing the configured UserCtx hierarchy.
Returns
PetscErrorCode 0 on success, or a PETSc error code on failure.
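
The restriction step relies on the usual parent/child index relationship implied by the coarsening rule in InitializeAllGridDMs. The sketch below illustrates that relationship only; it is an assumption-level illustration, not an excerpt of RestrictCoordinates (grid.c:617).

#include <petscsys.h>

/* Illustration only: with full coarsening in a direction, coarse node i
 * corresponds to fine node 2*i; with semi-coarsening in that direction
 * (isc/jsc/ksc set), the index is unchanged. */
static PetscInt FineIndexOfCoarseNode(PetscInt i_coarse, PetscInt semi_coarsened)
{
    return semi_coarsened ? i_coarse : 2 * i_coarse;
}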

Definition at line 317 of file grid.c.

{
    PetscErrorCode ierr;
    UserMG        *usermg = &simCtx->usermg;
    PetscInt       nblk   = simCtx->block_number;

    PetscFunctionBeginUser;
    LOG_ALLOW(GLOBAL, LOG_INFO, "Assigning physical coordinates to all grid DMs...\n");

    // --- Part 1: Populate the Finest Grid Level ---
    LOG_ALLOW(GLOBAL, LOG_DEBUG, "Setting coordinates for the finest grid level (%d)...\n", usermg->mglevels - 1);
    for (PetscInt bi = 0; bi < nblk; bi++) {
        UserCtx *fine_user = &usermg->mgctx[usermg->mglevels - 1].user[bi];
        ierr = SetFinestLevelCoordinates(fine_user); CHKERRQ(ierr);
    }
    LOG_ALLOW(GLOBAL, LOG_DEBUG, "Finest level coordinates have been set for all blocks.\n");

    // --- Part 2: Restrict Coordinates to Coarser Levels ---
    LOG_ALLOW(GLOBAL, LOG_DEBUG, "Restricting coordinates to coarser grid levels...\n");
    for (PetscInt level = usermg->mglevels - 2; level >= 0; level--) {
        for (PetscInt bi = 0; bi < nblk; bi++) {
            UserCtx *coarse_user = &usermg->mgctx[level].user[bi];
            UserCtx *fine_user   = &usermg->mgctx[level + 1].user[bi];
            ierr = RestrictCoordinates(coarse_user, fine_user); CHKERRQ(ierr);
        }
    }

    LOG_ALLOW(GLOBAL, LOG_INFO, "Physical coordinates assigned to all grid levels and blocks.\n");
    PetscFunctionReturn(0);
}
static PetscErrorCode RestrictCoordinates(UserCtx *coarse_user, UserCtx *fine_user)
Populates coarse grid coordinates by restricting from a fine grid.
Definition grid.c:617
static PetscErrorCode SetFinestLevelCoordinates(UserCtx *user)
A router that populates the coordinates for a single finest-level DMDA.
Definition grid.c:367

◆ ComputeLocalBoundingBox()

PetscErrorCode ComputeLocalBoundingBox(UserCtx *user, BoundingBox *localBBox)

Computes the local bounding box of the grid on the current process.

This function calculates the minimum and maximum coordinates (x, y, z) of the local grid points owned by the current MPI process. It iterates over the local (ghost-inclusive) portion of the grid, examines each grid point's coordinates, and updates the minimum and maximum values accordingly. The resulting box is padded by BBOX_TOLERANCE, stored in the provided localBBox structure, and also written to user->bbox for consistency within the user context.

Parameters
[in]   user       Pointer to the user-defined context containing grid information. This context must be properly initialized before calling this function.
[out]  localBBox  Pointer to the BoundingBox structure where the computed local bounding box will be stored. The structure must be allocated by the caller.
Returns
PetscErrorCode Returns 0 on success, non-zero on failure.
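
A minimal per-rank usage sketch (variable names are illustrative; a valid UserCtx *user and a PetscErrorCode ierr are assumed to be in scope):

BoundingBox localBox;
ierr = ComputeLocalBoundingBox(user, &localBox); CHKERRQ(ierr);

/* user->bbox now holds the same tolerance-padded box. */
LOG_ALLOW(LOCAL, LOG_INFO, "Local box: x in [%.6f, %.6f], y in [%.6f, %.6f], z in [%.6f, %.6f]\n",
          localBox.min_coords.x, localBox.max_coords.x,
          localBox.min_coords.y, localBox.max_coords.y,
          localBox.min_coords.z, localBox.max_coords.z);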

Definition at line 702 of file grid.c.

{
    PetscErrorCode ierr;
    PetscInt       i, j, k;
    PetscMPIInt    rank;
    PetscInt       xs, ys, zs, xe, ye, ze;
    DMDALocalInfo  info;
    Vec            coordinates;
    Cmpnts      ***coordArray;
    Cmpnts         minCoords, maxCoords;

    // Start of function execution
    LOG_ALLOW(GLOBAL, LOG_INFO, "Entering the function.\n");

    // Validate input Pointers
    if (!user) {
        LOG_ALLOW(LOCAL, LOG_ERROR, "Input 'user' Pointer is NULL.\n");
        return PETSC_ERR_ARG_NULL;
    }
    if (!localBBox) {
        LOG_ALLOW(LOCAL, LOG_ERROR, "Output 'localBBox' Pointer is NULL.\n");
        return PETSC_ERR_ARG_NULL;
    }

    // Get MPI rank
    ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);

    // Get the local coordinates vector from the DMDA
    ierr = DMGetCoordinatesLocal(user->da, &coordinates);
    if (ierr) {
        LOG_ALLOW(LOCAL, LOG_ERROR, "Error getting local coordinates vector.\n");
        return ierr;
    }

    if (!coordinates) {
        LOG_ALLOW(LOCAL, LOG_ERROR, "Coordinates vector is NULL.\n");
        return PETSC_ERR_ARG_NULL;
    }

    // Access the coordinate array for reading
    ierr = DMDAVecGetArrayRead(user->fda, coordinates, &coordArray);
    if (ierr) {
        LOG_ALLOW(LOCAL, LOG_ERROR, "Error accessing coordinate array.\n");
        return ierr;
    }

    // Get the local grid information (indices and sizes)
    ierr = DMDAGetLocalInfo(user->da, &info);
    if (ierr) {
        LOG_ALLOW(LOCAL, LOG_ERROR, "Error getting DMDA local info.\n");
        return ierr;
    }

    xs = info.gxs; xe = xs + info.gxm;
    ys = info.gys; ye = ys + info.gym;
    zs = info.gzs; ze = zs + info.gzm;

    /*
    xs = info.xs; xe = xs + info.xm;
    ys = info.ys; ye = ys + info.ym;
    zs = info.zs; ze = zs + info.zm;
    */

    // Initialize min and max coordinates with extreme values
    minCoords.x = minCoords.y = minCoords.z = PETSC_MAX_REAL;
    maxCoords.x = maxCoords.y = maxCoords.z = PETSC_MIN_REAL;

    LOG_ALLOW(LOCAL, LOG_DEBUG, "[Rank %d] Grid indices (Including Ghosts): xs=%d, xe=%d, ys=%d, ye=%d, zs=%d, ze=%d.\n", rank, xs, xe, ys, ye, zs, ze);

    // Iterate over the local grid to find min and max coordinates
    for (k = zs; k < ze; k++) {
        for (j = ys; j < ye; j++) {
            for (i = xs; i < xe; i++) {
                Cmpnts coord = coordArray[k][j][i];

                // Update min and max coordinates
                if (coord.x < minCoords.x) minCoords.x = coord.x;
                if (coord.y < minCoords.y) minCoords.y = coord.y;
                if (coord.z < minCoords.z) minCoords.z = coord.z;

                if (coord.x > maxCoords.x) maxCoords.x = coord.x;
                if (coord.y > maxCoords.y) maxCoords.y = coord.y;
                if (coord.z > maxCoords.z) maxCoords.z = coord.z;
            }
        }
    }

    // Add tolerance to bboxes.
    minCoords.x = minCoords.x - BBOX_TOLERANCE;
    minCoords.y = minCoords.y - BBOX_TOLERANCE;
    minCoords.z = minCoords.z - BBOX_TOLERANCE;

    maxCoords.x = maxCoords.x + BBOX_TOLERANCE;
    maxCoords.y = maxCoords.y + BBOX_TOLERANCE;
    maxCoords.z = maxCoords.z + BBOX_TOLERANCE;

    LOG_ALLOW(LOCAL, LOG_INFO, " Tolerance added to the limits: %.8e .\n", (PetscReal)BBOX_TOLERANCE);

    // Log the computed min and max coordinates
    LOG_ALLOW(LOCAL, LOG_INFO, "[Rank %d] minCoords=(%.6f, %.6f, %.6f), maxCoords=(%.6f, %.6f, %.6f).\n", rank, minCoords.x, minCoords.y, minCoords.z, maxCoords.x, maxCoords.y, maxCoords.z);

    // Restore the coordinate array
    ierr = DMDAVecRestoreArrayRead(user->fda, coordinates, &coordArray);
    if (ierr) {
        LOG_ALLOW(LOCAL, LOG_ERROR, "Error restoring coordinate array.\n");
        return ierr;
    }

    // Set the local bounding box
    localBBox->min_coords = minCoords;
    localBBox->max_coords = maxCoords;

    // Update the bounding box inside the UserCtx for consistency
    user->bbox = *localBBox;

    LOG_ALLOW(GLOBAL, LOG_INFO, "Exiting the function successfully.\n");
    return 0;
}
#define BBOX_TOLERANCE
Definition grid.c:6
@ LOG_ERROR
Critical errors that may halt the program.
Definition logging.h:29
Cmpnts max_coords
Maximum x, y, z coordinates of the bounding box.
Definition variables.h:155
Cmpnts min_coords
Minimum x, y, z coordinates of the bounding box.
Definition variables.h:154
PetscScalar x
Definition variables.h:100
PetscScalar z
Definition variables.h:100
PetscScalar y
Definition variables.h:100
BoundingBox bbox
Definition variables.h:641
A 3D point or vector with PetscScalar components.
Definition variables.h:99

◆ GatherAllBoundingBoxes()

PetscErrorCode GatherAllBoundingBoxes(UserCtx *user, BoundingBox **allBBoxes)

Gathers local bounding boxes from all MPI processes to rank 0.

Each rank computes its local bounding box via ComputeLocalBoundingBox, then all ranks participate in an MPI_Gather that collects the BoundingBox structures on rank 0. Rank 0 allocates the result array (one entry per MPI rank) and returns it via allBBoxes; the caller on rank 0 must free this array. On all other ranks, allBBoxes is set to NULL.

Parameters
[in]   user       Pointer to the user-defined context containing grid information (must be non-NULL).
[out]  allBBoxes  On rank 0, receives a malloc'd array with one BoundingBox per MPI rank. On other ranks, set to NULL.
Returns
PetscErrorCode Returns 0 on success, non-zero on failure.

Definition at line 837 of file grid.c.

{
    PetscErrorCode ierr;
    PetscMPIInt    rank, size;
    BoundingBox   *bboxArray = NULL;
    BoundingBox    localBBox;

    /* Validate */
    if (!user || !allBBoxes) SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_NULL,
                                     "GatherAllBoundingBoxes: NULL pointer");

    ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRMPI(ierr);
    ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRMPI(ierr);

    /* Compute local bbox */
    ierr = ComputeLocalBoundingBox(user, &localBBox); CHKERRQ(ierr);

    /* Ensure everyone is synchronized before the gather */
    MPI_Barrier(PETSC_COMM_WORLD);
    LOG_ALLOW_SYNC(LOCAL, LOG_DEBUG,
                   "Rank %d: about to MPI_Gather(localBBox)\n", rank);

    /* Allocate on root */
    if (rank == 0) {
        bboxArray = (BoundingBox*)malloc(size * sizeof(BoundingBox));
        if (!bboxArray) SETERRABORT(PETSC_COMM_WORLD, PETSC_ERR_MEM,
                                    "GatherAllBoundingBoxes: malloc failed");
    }

    /* Collective: every rank must call */
    ierr = MPI_Gather(&localBBox, sizeof(BoundingBox), MPI_BYTE,
                      bboxArray, sizeof(BoundingBox), MPI_BYTE,
                      0, PETSC_COMM_WORLD);
    CHKERRMPI(ierr);

    MPI_Barrier(PETSC_COMM_WORLD);
    LOG_ALLOW_SYNC(LOCAL, LOG_DEBUG,
                   "Rank %d: completed MPI_Gather(localBBox)\n", rank);

    /* Return result */
    if (rank == 0) {
        *allBBoxes = bboxArray;
    } else {
        *allBBoxes = NULL;
    }

    return 0;
}
PetscErrorCode ComputeLocalBoundingBox(UserCtx *user, BoundingBox *localBBox)
Computes the local bounding box of the grid on the current process.
Definition grid.c:702
Defines a 3D axis-aligned bounding box.
Definition variables.h:153

◆ BroadcastAllBoundingBoxes()

PetscErrorCode BroadcastAllBoundingBoxes(UserCtx *user, BoundingBox **bboxlist)

Broadcasts the bounding box information collected on rank 0 to all other ranks.

This function assumes that GatherAllBoundingBoxes() was called previously, so bboxlist is allocated and populated on rank 0. All other ranks allocate memory for bboxlist here, and MPI_Bcast then distributes the bounding box data so that every rank ends up with its own copy of the full array.

Parameters
[in]      user      Pointer to the UserCtx structure (currently unused in this function, but kept for signature consistency).
[in,out]  bboxlist  On entry: rank 0's array of size 'size' (the number of MPI ranks); on exit: every rank's own malloc'd copy of that array.
Returns
PetscErrorCode Returns 0 on success, non-zero on MPI or PETSc-related errors.
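
The two collectives are meant to be used back to back. A minimal sketch (a valid UserCtx *user and a PetscErrorCode ierr are assumed to be in scope):

BoundingBox *bboxlist = NULL;
ierr = GatherAllBoundingBoxes(user, &bboxlist);    CHKERRQ(ierr); /* allocated on rank 0 only */
ierr = BroadcastAllBoundingBoxes(user, &bboxlist); CHKERRQ(ierr); /* now valid on every rank  */

/* bboxlist[0 .. size-1] holds one bounding box per rank on every process. */
free(bboxlist); /* each rank frees its own copy */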

Definition at line 896 of file grid.c.

{
    PetscErrorCode ierr;
    PetscMPIInt    rank, size;

    if (!bboxlist) SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_NULL,
                           "BroadcastAllBoundingBoxes: NULL pointer");

    ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRMPI(ierr);
    ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRMPI(ierr);

    /* Non-root ranks must allocate before the Bcast */
    if (rank != 0) {
        *bboxlist = (BoundingBox*)malloc(size * sizeof(BoundingBox));
        if (!*bboxlist) SETERRABORT(PETSC_COMM_WORLD, PETSC_ERR_MEM,
                                    "BroadcastAllBoundingBoxes: malloc failed");
    }

    MPI_Barrier(PETSC_COMM_WORLD);
    LOG_ALLOW_SYNC(LOCAL, LOG_DEBUG,
                   "Rank %d: about to MPI_Bcast(%d boxes)\n", rank, size);

    /* Collective: every rank must call */
    ierr = MPI_Bcast(*bboxlist, size * sizeof(BoundingBox), MPI_BYTE,
                     0, PETSC_COMM_WORLD);
    CHKERRMPI(ierr);

    MPI_Barrier(PETSC_COMM_WORLD);
    LOG_ALLOW_SYNC(LOCAL, LOG_DEBUG,
                   "Rank %d: completed MPI_Bcast(%d boxes)\n", rank, size);

    return 0;
}