PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
ParticleMotion.c File Reference
#include "ParticleMotion.h"
Go to the source code of this file.

Macros

#define ERROR_MSG_BUFFER_SIZE   256
 
#define __FUNCT__   "UpdateAllParticlePositions"
 Sets the function name used by PETSc error reporting and logging within UpdateAllParticlePositions.
 
#define __FUNCT__   "CheckAndRemoveLostParticles"
 Sets the function name used by PETSc error reporting and logging within CheckAndRemoveLostParticles.
 
#define __FUNCT__   "LocateAllParticlesInGrid_TEST"
 Sets the function name used by PETSc error reporting and logging within LocateAllParticlesInGrid_TEST.
 

Functions

PetscErrorCode UpdateParticlePosition (UserCtx *user, Cmpnts *position, const Cmpnts *velocity)
 Updates a particle's position based on its velocity and the timestep dt (stored in user->simCtx->dt).
 
PetscErrorCode UpdateAllParticlePositions (UserCtx *user)
 Loops over all local particles in the DMSwarm, updating their positions based on velocity and the global timestep user->simCtx->dt.
 
static PetscBool IsParticleInBox (const BoundingBox *bbox, const Cmpnts *pos)
 Checks if a particle position is within the bounds of a given bounding box.
 
PetscErrorCode CheckAndRemoveOutOfBoundsParticles (UserCtx *user, PetscInt *removedCountLocal, PetscInt *removedCountGlobal, const BoundingBox *bboxlist)
 Checks for particles outside the global physical domain and removes them.
 
PetscErrorCode CheckAndRemoveLostParticles (UserCtx *user, PetscInt *removedCountLocal, PetscInt *removedCountGlobal)
 Removes particles that have been definitively flagged as LOST by the location algorithm.
 
PetscErrorCode SetMigrationRanks (UserCtx *user, const MigrationInfo *migrationList, PetscInt migrationCount)
 Sets the target rank field (DMSwarmPICField_rank) for particles scheduled for migration.
 
PetscErrorCode PerformMigration (UserCtx *user)
 Performs particle migration based on the pre-populated DMSwarmPICField_rank field.
 
PetscErrorCode CalculateParticleCountPerCell (UserCtx *user)
 Counts particles in each cell of the DMDA 'da' and stores the result in user->ParticleCount.
 
PetscErrorCode ResizeSwarmGlobally (DM swarm, PetscInt N_target)
 Resizes the swarm to exactly N_target particles globally, adding or removing particles as needed.
 
PetscErrorCode PreCheckAndResizeSwarm (UserCtx *user, PetscInt ti, const char *ext)
 Checks particle count from a saved file and resizes the swarm globally.
 
PetscErrorCode ReinitializeParticlesOnInletSurface (UserCtx *user, PetscReal currentTime, PetscInt step)
 Re-initializes the positions of particles currently on this rank if this rank owns part of the designated inlet surface.
 
static int compare_PetscInt64 (const void *a, const void *b)
 Comparison callback for sorting PetscInt64 values in ascending order (qsort-style).
 
PetscErrorCode GetLocalPIDSnapshot (const PetscInt64 pid_field[], PetscInt n_local, PetscInt64 **pids_snapshot_out)
 Creates a sorted snapshot of all Particle IDs (PIDs) from a raw data array.
 
PetscErrorCode AddToMigrationList (MigrationInfo **migration_list_p, PetscInt *capacity_p, PetscInt *count_p, PetscInt particle_local_idx, PetscMPIInt destination_rank)
 Safely adds a new migration task to a dynamically sized list.
 
PetscErrorCode FlagNewcomersForLocation (DM swarm, PetscInt n_local_before, const PetscInt64 pids_before[])
 Identifies newly arrived particles after migration and flags them for a location search.
 
PetscErrorCode GuessParticleOwnerWithBBox (UserCtx *user, const Particle *particle, const BoundingBox *bboxlist, PetscMPIInt *guess_rank_out)
 Provides a fast, heuristic-based guess for a particle's owner rank using bounding boxes.
 
PetscErrorCode LocateAllParticlesInGrid_TEST (UserCtx *user, BoundingBox *bboxlist)
 Orchestrates the complete particle location and migration process for one timestep.
 
PetscErrorCode ResetAllParticleStatuses (UserCtx *user)
 Resets the location status of all local particles; designed to be called at the end of a full timestep, after all particle-based calculations are complete.
 

Macro Definition Documentation

◆ ERROR_MSG_BUFFER_SIZE

#define ERROR_MSG_BUFFER_SIZE   256

Definition at line 7 of file ParticleMotion.c.

◆ __FUNCT__ [1/3]

#define __FUNCT__   "UpdateAllParticlePositions"

Sets the function name used by PETSc error reporting and logging within UpdateAllParticlePositions.

Definition at line 34 of file ParticleMotion.c.

◆ __FUNCT__ [2/3]

#define __FUNCT__   "CheckAndRemoveLostParticles"

Sets the function name used by PETSc error reporting and logging within CheckAndRemoveLostParticles. See CheckAndRemoveLostParticles() below for the full documentation of that routine.

◆ __FUNCT__ [3/3]

#define __FUNCT__   "LocateAllParticlesInGrid_TEST"

Sets the function name used by PETSc error reporting and logging within LocateAllParticlesInGrid_TEST. See LocateAllParticlesInGrid_TEST() for the full documentation of that routine.

Function Documentation

◆ UpdateParticlePosition()

PetscErrorCode UpdateParticlePosition ( UserCtx *  user,
Cmpnts *  position,
const Cmpnts *  velocity 
)

Updates a particle's position based on its velocity and the timestep dt (stored in user->simCtx->dt).

Parameters
[in]userPointer to your UserCtx (must contain user->simCtx->dt).
[in,out]positionPointer to the particle's current position (Cmpnts).
[in]velocityPointer to the particle's velocity (Cmpnts).
Returns
PetscErrorCode Returns 0 on success, or an error code on failure.

Definition at line 19 of file ParticleMotion.c.

20{
21 PetscFunctionBeginUser; // PETSc macro for error/stack tracing
22
23 PetscReal dt = user->simCtx->dt;
24
25 /* Update the position with velocity * dt */
26 position->x += velocity->x * dt;
27 position->y += velocity->y * dt;
28 position->z += velocity->z * dt;
29
30 PetscFunctionReturn(0);
31}
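In effect, each call performs one explicit (forward) Euler step per particle, x(t+dt) = x(t) + u(t)*dt, with dt read from user->simCtx->dt.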
SimCtx * simCtx
Back-pointer to the master simulation context.
Definition variables.h:633
PetscReal dt
Definition variables.h:527
PetscScalar x
Definition variables.h:100
PetscScalar z
Definition variables.h:100
PetscScalar y
Definition variables.h:100

◆ UpdateAllParticlePositions()

PetscErrorCode UpdateAllParticlePositions ( UserCtx *  user)

Loops over all local particles in the DMSwarm, updating their positions based on velocity and the global timestep user->simCtx->dt.

Parameters
[in,out]userPointer to UserCtx (must contain dt).
Returns
PetscErrorCode Returns 0 on success, or an error code on failure.

Definition at line 42 of file ParticleMotion.c.

43{
44 PetscErrorCode ierr;
45 DM swarm = user->swarm;
46 PetscInt nLocal, p;
47 PetscReal *pos = NULL;
48 PetscReal *vel = NULL;
49 PetscMPIInt rank;
50 Cmpnts temp_pos, temp_vel;
51
52 ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
53
54 PetscFunctionBeginUser; // PETSc macro for error/stack tracing
55
57 PROFILE_FUNCTION_BEGIN;
58 // 1) Get the number of local particles
59 ierr = DMSwarmGetLocalSize(swarm, &nLocal); CHKERRQ(ierr);
60 if (nLocal == 0) {
61 LOG_ALLOW(LOCAL,LOG_DEBUG,"[Rank %d] No particles to move/transport. \n",rank);
62 PetscFunctionReturn(0); // nothing to do, no fields held
63 }
64 // 2) Access the "position" and "velocity" fields
65 ierr = DMSwarmGetField(swarm, "position", NULL, NULL, (void**)&pos); CHKERRQ(ierr);
66 ierr = DMSwarmGetField(swarm, "velocity", NULL, NULL, (void**)&vel); CHKERRQ(ierr);
67
68 LOG_ALLOW(GLOBAL,LOG_DEBUG," [Rank %d] No.of Particles to update: %d.\n",rank,nLocal);
69
70 // 3) Loop over all local particles, updating each position by velocity * dt
71 for (p = 0; p < nLocal; p++) {
72 // update temporary position struct
73 temp_pos.x = pos[3*p];
74 temp_pos.y = pos[3*p + 1];
75 temp_pos.z = pos[3*p + 2];
76
77 // update temporary velocity struct
78 temp_vel.x = vel[3*p];
79 temp_vel.y = vel[3*p + 1];
80 temp_vel.z = vel[3*p + 2];
81
82 ierr = UpdateParticlePosition(user, &temp_pos, &temp_vel); CHKERRQ(ierr);
83
84 // update swarm from temporary position struct
85 pos[3*p] = temp_pos.x;
86 pos[3*p + 1] = temp_pos.y;
87 pos[3*p + 2] = temp_pos.z;
88
89 vel[3*p] = temp_vel.x;
90 vel[3*p + 1] = temp_vel.y;
91 vel[3*p + 2] = temp_vel.z;
92 }
93
94 // 4) Restore the fields
95 ierr = DMSwarmRestoreField(swarm, "position", NULL, NULL, (void**)&pos); CHKERRQ(ierr);
96 ierr = DMSwarmRestoreField(swarm, "velocity", NULL, NULL, (void**)&vel); CHKERRQ(ierr);
97
98 LOG_ALLOW(LOCAL,LOG_DEBUG,"Particle moved/transported successfully on Rank %d.\n",rank);
99
101 PROFILE_FUNCTION_END;
102 PetscFunctionReturn(0);
103}
PetscErrorCode UpdateParticlePosition(UserCtx *user, Cmpnts *position, const Cmpnts *velocity)
Updates a particle's position based on its velocity and the timestep dt (stored in user->simCtx->dt).
#define LOCAL
Logging scope definitions for controlling message output.
Definition logging.h:44
#define GLOBAL
Scope for global logging across all processes.
Definition logging.h:45
#define LOG_ALLOW(scope, level, fmt,...)
Logging macro that checks both the log level and whether the calling function is in the allowed-funct...
Definition logging.h:207
#define PROFILE_FUNCTION_END
Marks the end of a profiled code block.
Definition logging.h:767
@ LOG_DEBUG
Detailed debugging information.
Definition logging.h:33
#define PROFILE_FUNCTION_BEGIN
Marks the beginning of a profiled code block (typically a function).
Definition logging.h:758
A 3D point or vector with PetscScalar components.
Definition variables.h:99
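Note that the velocity write-back inside the loop restores unchanged values: UpdateParticlePosition receives the velocity through a const pointer and never modifies it, so only the position copy-back is strictly required.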

◆ IsParticleInBox()

static PetscBool IsParticleInBox ( const BoundingBox *  bbox,
const Cmpnts *  pos 
)
inline static

Checks if a particle position is within the bounds of a given bounding box.

Parameters
bboxPointer to the BoundingBox structure.
posPointer to the particle's position (Cmpnts).
Returns
PetscBool PETSC_TRUE if the particle is inside or on the boundary, PETSC_FALSE otherwise.

Definition at line 114 of file ParticleMotion.c.

114 {
115 return (pos->x >= bbox->min_coords.x && pos->x <= bbox->max_coords.x &&
116 pos->y >= bbox->min_coords.y && pos->y <= bbox->max_coords.y &&
117 pos->z >= bbox->min_coords.z && pos->z <= bbox->max_coords.z);
118}
Cmpnts max_coords
Maximum x, y, z coordinates of the bounding box.
Definition variables.h:155
Cmpnts min_coords
Minimum x, y, z coordinates of the bounding box.
Definition variables.h:154
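Because all comparisons are inclusive, a particle lying exactly on a face shared by two subdomains tests as inside both bounding boxes; callers such as CheckAndRemoveOutOfBoundsParticles simply accept the first box that matches.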

◆ CheckAndRemoveOutOfBoundsParticles()

PetscErrorCode CheckAndRemoveOutOfBoundsParticles ( UserCtx *  user,
PetscInt *  removedCountLocal,
PetscInt *  removedCountGlobal,
const BoundingBox *  bboxlist 
)

Checks for particles outside the global physical domain and removes them.

This function iterates through all particles local to the current MPI rank and determines whether each particle's physical position lies inside ANY of the bounding boxes that define the subdomains of the MPI ranks. The list of all bounding boxes must be up-to-date and available on every rank.

If a particle is not found within any of the subdomains, it is considered "out of bounds" and is permanently removed from the simulation using DMSwarmRemovePointAtIndex().

The function is carefully designed to handle modifications to the DMSwarm during iteration safely. It does this by:

  1. Iterating BACKWARDS through the local particle list, so removing an element at index 'p' does not affect the indices of subsequent elements to be checked (p-1, p-2, ...).
  2. Using a robust "Restore-Remove-Reacquire" pattern. When a particle is removed, all pointers to swarm data are first restored, the removal operation is performed, and then the pointers are re-acquired for the remainder of the loop. This is the safest way to modify the swarm's structure during iteration.
Note
This function can be redundant if a robust particle location scheme (e.g., one that sets a LOST status) and a corresponding cleanup function (e.g., CheckAndRemoveLostParticles) are already in use. Using both can introduce complexity.
Warning
This function contains a collective MPI operation (MPI_Allreduce) at the end. To avoid deadlocks, it MUST be called by ALL ranks in the communicator, even if a rank has no local particles. Do not place this call inside rank-specific conditional logic in your main loop.
Parameters
[in,out]userPointer to the UserCtx structure containing the swarm and MPI info.
[out]removedCountLocalPointer to an integer that will store the number of particles removed on THIS rank.
[out]removedCountGlobalPointer to an integer that will store the total number of particles removed ACROSS ALL ranks.
[in]bboxlistAn array of BoundingBox structures for ALL MPI ranks, indexed 0 to (size-1). This array must be up-to-date and available on every rank.
Returns
PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.

Definition at line 291 of file ParticleMotion.c.

295{
296 PetscErrorCode ierr;
297 DM swarm = user->swarm;
298 PetscInt nLocalInitial;
299 PetscReal *pos_p = NULL;
300 PetscInt64 *pid_p = NULL; // For better logging
301 PetscInt local_removed_count = 0;
302 PetscMPIInt global_removed_count_mpi = 0;
303 PetscMPIInt rank, size;
304
305 PetscFunctionBeginUser;
306 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
307 ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRQ(ierr);
308 LOG_ALLOW(LOCAL, LOG_INFO, "[Rank %d] Checking for out-of-bounds particles...", rank);
309
310 // Initialize output parameters to ensure clean state
311 *removedCountLocal = 0;
312 if (removedCountGlobal) *removedCountGlobal = 0;
313
314 ierr = DMSwarmGetLocalSize(swarm, &nLocalInitial); CHKERRQ(ierr);
315
316 // Only proceed if there are particles to check on this rank.
317 // All ranks will still participate in the final collective MPI_Allreduce.
318 if (nLocalInitial > 0) {
319 // Get access to swarm fields once before the loop begins.
320 ierr = DMSwarmGetField(swarm, "position", NULL, NULL, (void **)&pos_p); CHKERRQ(ierr);
321 ierr = DMSwarmGetField(swarm, "DMSwarm_pid", NULL, NULL, (void **)&pid_p); CHKERRQ(ierr);
322
323 // --- Iterate BACKWARDS to handle index changes safely during removal ---
324 for (PetscInt p = nLocalInitial - 1; p >= 0; p--) {
325 PetscBool isInsideAnyBox = PETSC_FALSE;
326 Cmpnts current_pos = {pos_p[3*p + 0], pos_p[3*p + 1], pos_p[3*p + 2]};
327
328 // Check if the particle is inside ANY of the rank bounding boxes
329 for (PetscMPIInt proc = 0; proc < size; proc++) {
330 if (IsParticleInBox(&bboxlist[proc], &current_pos)) {
331 isInsideAnyBox = PETSC_TRUE;
332 break; // Particle is inside a valid domain, stop checking.
333 }
334 }
335
336 if (!isInsideAnyBox) {
337 LOG_ALLOW(LOCAL, LOG_DEBUG, "Rank %d: Removing out-of-bounds particle [PID %lld] at local index %d. Pos: (%g, %g, %g)\n",
338 rank, (long long)pid_p[p], p, current_pos.x, current_pos.y, current_pos.z);
339
340 // --- Safe Removal Pattern: Restore -> Remove -> Reacquire ---
341 // This is the fix for the double-restore bug. Pointers are managed carefully
342 // within this block and then restored cleanly after the loop.
343
344 // 1. Restore all fields BEFORE modifying the swarm structure. This invalidates pos_p and pid_p.
345 ierr = DMSwarmRestoreField(swarm, "position", NULL, NULL, (void **)&pos_p); CHKERRQ(ierr);
346 ierr = DMSwarmRestoreField(swarm, "DMSwarm_pid", NULL, NULL, (void **)&pid_p); CHKERRQ(ierr);
347
348 // 2. Remove the particle at the current local index 'p'.
349 ierr = DMSwarmRemovePointAtIndex(swarm, p); CHKERRQ(ierr);
350 local_removed_count++;
351
352 // 3. After removal, re-acquire pointers ONLY if the loop is not finished.
353 PetscInt nLocalCurrent;
354 ierr = DMSwarmGetLocalSize(swarm, &nLocalCurrent); CHKERRQ(ierr);
355
356 if (nLocalCurrent > 0 && p > 0) { // Check if there are particles left AND iterations left
357 ierr = DMSwarmGetField(swarm, "position", NULL, NULL, (void **)&pos_p); CHKERRQ(ierr);
358 ierr = DMSwarmGetField(swarm, "DMSwarm_pid", NULL, NULL, (void **)&pid_p); CHKERRQ(ierr);
359 } else {
360 // All remaining particles were removed OR this was the last particle (p=0).
361 // Invalidate pointers to prevent the final restore call and exit the loop.
362 pos_p = NULL;
363 pid_p = NULL;
364 break;
365 }
366 }
367 } // End of backwards loop
368
369 // At the end, restore any valid pointers. This handles three cases:
370 // 1. No particles were removed: restores the original pointers.
371 // 2. Particles were removed mid-loop: restores the pointers from the last re-acquisition.
372 // 3. All particles were removed: pointers are NULL, so nothing is done.
373 if (pos_p) { ierr = DMSwarmRestoreField(swarm, "position", NULL, NULL, (void **)&pos_p); CHKERRQ(ierr); }
374 if (pid_p) { ierr = DMSwarmRestoreField(swarm, "DMSwarm_pid", NULL, NULL, (void **)&pid_p); CHKERRQ(ierr); }
375 } // End of if (nLocalInitial > 0)
376
377 PetscInt nLocalFinal;
378 ierr = DMSwarmGetLocalSize(swarm, &nLocalFinal); CHKERRQ(ierr);
379 LOG_ALLOW(LOCAL, LOG_INFO, "[Rank %d] Finished removing %d out-of-bounds particles. Final local size: %d.\n", rank, local_removed_count, nLocalFinal);
380
381 // --- Synchronize counts across all ranks ---
382 *removedCountLocal = local_removed_count;
383 if (removedCountGlobal) {
384 ierr = MPI_Allreduce(&local_removed_count, &global_removed_count_mpi, 1, MPI_INT, MPI_SUM, PetscObjectComm((PetscObject)swarm)); CHKERRQ(ierr);
385 *removedCountGlobal = global_removed_count_mpi;
386 // Use a synchronized log message so only one rank prints the global total.
387 LOG_ALLOW_SYNC(GLOBAL, LOG_INFO, "[Rank %d] Removed %d out-of-bounds particles globally.\n", rank, *removedCountGlobal);
388 }
389
390 PetscFunctionReturn(0);
391}
static PetscBool IsParticleInBox(const BoundingBox *bbox, const Cmpnts *pos)
Checks if a particle position is within the bounds of a given bounding box.
#define LOG_ALLOW_SYNC(scope, level, fmt,...)
Synchronized logging macro that coordinates output across MPI processes.
Definition logging.h:274
@ LOG_INFO
Informational messages about program execution.
Definition logging.h:32

◆ CheckAndRemoveLostParticles()

PetscErrorCode CheckAndRemoveLostParticles ( UserCtx *  user,
PetscInt *  removedCountLocal,
PetscInt *  removedCountGlobal 
)

Removes particles that have been definitively flagged as LOST by the location algorithm.

This function is the designated cleanup utility for particles that have exited the physical domain or could not be located for any other reason. It should be called after a particle location and migration phase is complete.

It iterates through all locally owned particles and checks their DMSwarm_location_status field. If a particle's status is LOST, it is permanently removed from the simulation using DMSwarmRemovePointAtIndex.

The function is carefully designed to handle modifications to the DMSwarm during iteration safely. It does this by:

  1. Iterating BACKWARDS through the local particle list, so removing an element at index 'p' does not affect the indices of subsequent elements to be checked (p-1, p-2, ...).
  2. Using a robust "Restore-Remove-Reacquire" pattern. When a particle is removed, all pointers to swarm data are first restored, the removal operation is performed, and then the pointers are re-acquired for the remainder of the loop.
Warning
This function contains a collective MPI operation (MPI_Allreduce) at the end. To avoid deadlocks, it MUST be called by ALL ranks in the communicator, even if a rank has no local particles. Do not place this call inside rank-specific conditional logic in your main loop.
Parameters
[in,out]userPointer to the UserCtx structure containing the swarm.
[out]removedCountLocalPointer to an integer that will store the number of particles removed on THIS rank.
[out]removedCountGlobalPointer to an integer that will store the total number of particles removed ACROSS ALL ranks.
Returns
PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.

Definition at line 542 of file ParticleMotion.c.

545{
546 PetscErrorCode ierr;
547 DM swarm = user->swarm;
548 PetscInt nLocalInitial;
549 PetscInt *status_p = NULL;
550 PetscInt64 *pid_p = NULL; // For better logging
551 PetscReal *pos_p = NULL; // For better logging
552 PetscInt local_removed_count = 0;
553 PetscMPIInt global_removed_count_mpi = 0;
554 PetscMPIInt rank;
555
556 PetscFunctionBeginUser;
557 PROFILE_FUNCTION_BEGIN;
558 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
559 LOG_ALLOW(LOCAL, LOG_INFO, "Rank %d: Checking for and removing LOST particles...", rank);
560
561 // Initialize output parameters to ensure clean state
562 *removedCountLocal = 0;
563 if (removedCountGlobal) *removedCountGlobal = 0;
564
565 ierr = DMSwarmGetLocalSize(swarm, &nLocalInitial); CHKERRQ(ierr);
566
567 // Only proceed if there are particles to check on this rank.
568 // All ranks will still participate in the final collective MPI_Allreduce.
569 if (nLocalInitial > 0) {
570 // Get access to all swarm fields once before the loop begins.
571 ierr = DMSwarmGetField(swarm, "DMSwarm_location_status", NULL, NULL, (void **)&status_p); CHKERRQ(ierr);
572 ierr = DMSwarmGetField(swarm, "DMSwarm_pid", NULL, NULL, (void **)&pid_p); CHKERRQ(ierr);
573 ierr = DMSwarmGetField(swarm, "position", NULL, NULL, (void **)&pos_p); CHKERRQ(ierr);
574
575 // --- Iterate BACKWARDS to handle index changes safely during removal ---
576 for (PetscInt p = nLocalInitial - 1; p >= 0; p--) {
577 if (status_p[p] == LOST) {
578 LOG_ALLOW(LOCAL, LOG_DEBUG, "Rank %d: Removing LOST particle [PID %lld] at local index %d. Position: (%.4f, %.4f, %.4f).\n",
579 rank, (long long)pid_p[p], p, pos_p[3*p], pos_p[3*p+1], pos_p[3*p+2]);
580
581 // --- Safe Removal Pattern: Restore -> Remove -> Reacquire ---
582 // This is the fix for the double-restore bug. Pointers are managed carefully
583 // within this block and then restored cleanly after the loop.
584
585 // 1. Restore all fields BEFORE modifying the swarm structure. This invalidates all pointers.
586 ierr = DMSwarmRestoreField(swarm, "DMSwarm_location_status", NULL, NULL, (void **)&status_p); CHKERRQ(ierr);
587 ierr = DMSwarmRestoreField(swarm, "DMSwarm_pid", NULL, NULL, (void **)&pid_p); CHKERRQ(ierr);
588 ierr = DMSwarmRestoreField(swarm, "position", NULL, NULL, (void **)&pos_p); CHKERRQ(ierr);
589
590 // 2. Remove the particle at the current local index 'p'.
591 ierr = DMSwarmRemovePointAtIndex(swarm, p); CHKERRQ(ierr);
592 local_removed_count++;
593
594 // 3. After removal, re-acquire pointers ONLY if the loop is not finished.
595 PetscInt nLocalCurrent;
596 ierr = DMSwarmGetLocalSize(swarm, &nLocalCurrent); CHKERRQ(ierr);
597
598 if (nLocalCurrent > 0 && p > 0) { // Check if there are particles left AND iterations left
599 ierr = DMSwarmGetField(swarm, "DMSwarm_location_status", NULL, NULL, (void **)&status_p); CHKERRQ(ierr);
600 ierr = DMSwarmGetField(swarm, "DMSwarm_pid", NULL, NULL, (void **)&pid_p); CHKERRQ(ierr);
601 ierr = DMSwarmGetField(swarm, "position", NULL, NULL, (void **)&pos_p); CHKERRQ(ierr);
602 } else {
603 // All remaining particles were removed OR this was the last particle (p=0).
604 // Invalidate pointers to prevent the final restore call and exit the loop.
605 status_p = NULL;
606 pid_p = NULL;
607 pos_p = NULL;
608 break;
609 }
610 }
611 } // End of backwards loop
612
613 // At the end, restore any valid pointers. This handles three cases:
614 // 1. No particles were removed: restores the original pointers.
615 // 2. Particles were removed mid-loop: restores the pointers from the last re-acquisition.
616 // 3. All particles were removed: pointers are NULL, so nothing is done.
617 if (status_p) { ierr = DMSwarmRestoreField(swarm, "DMSwarm_location_status", NULL, NULL, (void **)&status_p); CHKERRQ(ierr); }
618 if (pid_p) { ierr = DMSwarmRestoreField(swarm, "DMSwarm_pid", NULL, NULL, (void **)&pid_p); CHKERRQ(ierr); }
619 if (pos_p) { ierr = DMSwarmRestoreField(swarm, "position", NULL, NULL, (void **)&pos_p); CHKERRQ(ierr); }
620 } // End of if (nLocalInitial > 0)
621
622 PetscInt nLocalFinal;
623 ierr = DMSwarmGetLocalSize(swarm, &nLocalFinal); CHKERRQ(ierr);
624 LOG_ALLOW(LOCAL, LOG_INFO, "Rank %d: Finished removing %d LOST particles. Final local size: %d.\n", rank, local_removed_count, nLocalFinal);
625
626 // --- Synchronize counts across all ranks ---
627 *removedCountLocal = local_removed_count;
628 if (removedCountGlobal) {
629 ierr = MPI_Allreduce(&local_removed_count, &global_removed_count_mpi, 1, MPI_INT, MPI_SUM, PetscObjectComm((PetscObject)swarm)); CHKERRQ(ierr);
630 *removedCountGlobal = global_removed_count_mpi;
631 // Use a synchronized log message so only one rank prints the global total.
632 LOG_ALLOW_SYNC(GLOBAL, LOG_INFO, "[Rank %d] Removed %d LOST particles globally.\n", rank, *removedCountGlobal);
633 }
634
635 PROFILE_FUNCTION_END;
636 PetscFunctionReturn(0);
637}
@ LOST
Definition variables.h:138
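A minimal caller-side sketch (hypothetical driver code, not part of this file) showing the collective call made unconditionally on every rank after the location/migration phase; user and ierr are assumed to be in scope:

    PetscInt removedLocal = 0, removedGlobal = 0;
    /* Collective: must execute on ALL ranks, never inside a rank-specific
       branch, because of the MPI_Allreduce performed internally. */
    ierr = CheckAndRemoveLostParticles(user, &removedLocal, &removedGlobal); CHKERRQ(ierr);
    if (removedGlobal > 0) {
      LOG_ALLOW(GLOBAL, LOG_INFO, "Cleanup: %d particles removed globally this step.\n", removedGlobal);
    }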

◆ SetMigrationRanks()

PetscErrorCode SetMigrationRanks ( UserCtx *  user,
const MigrationInfo *  migrationList,
PetscInt  migrationCount 
)

Sets the target rank field (DMSwarmPICField_rank) for particles scheduled for migration.

Parameters
userPointer to UserCtx (contains swarm).
migrationListArray of MigrationInfo structs containing local indices and target ranks.
migrationCountNumber of particles in the migrationList.
Returns
PetscErrorCode 0 on success, non-zero on failure.

Definition at line 648 of file ParticleMotion.c.

649{
650 PetscErrorCode ierr;
651 DM swarm = user->swarm;
652 PetscInt p_idx;
653 PetscInt *rankField = NULL; // Field storing target rank
654
655 PetscFunctionBeginUser;
656
657 // Ensure the migration rank field exists
658 ierr = DMSwarmGetField(swarm, "DMSwarm_rank", NULL, NULL, (void **)&rankField); CHKERRQ(ierr);
659
660 // Set the target rank for migrating particles
661 for(p_idx = 0; p_idx < migrationCount; ++p_idx) {
662 rankField[migrationList[p_idx].local_index] = migrationList[p_idx].target_rank;
663 }
664
665 ierr = DMSwarmRestoreField(swarm, "DMSwarm_rank", NULL, NULL, (void **)&rankField); CHKERRQ(ierr);
666 PetscFunctionReturn(0);
667}
PetscInt local_index
Definition variables.h:189

◆ PerformMigration()

PetscErrorCode PerformMigration ( UserCtx *  user)

Performs particle migration based on the pre-populated DMSwarmPICField_rank field.

Assumes SetMigrationRanks has already been called to mark particles with their target ranks. Calls DMSwarmMigrate to execute the communication and removal of un-migrated particles.

Parameters
userPointer to the UserCtx structure containing the swarm.
Returns
PetscErrorCode 0 on success, non-zero on failure.

Definition at line 679 of file ParticleMotion.c.

680{
681 PetscErrorCode ierr;
682 DM swarm = user->swarm;
683 PetscMPIInt rank;
684
685 PetscFunctionBeginUser;
686 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
687 LOG_ALLOW(GLOBAL, LOG_INFO, "Rank %d: Starting DMSwarmMigrate...\n", rank);
688
689 // Perform the migration - PETSC_TRUE removes particles that fail to land
690 // in a valid cell on the target rank (or were marked with an invalid rank).
691 ierr = DMSwarmMigrate(swarm, PETSC_TRUE); CHKERRQ(ierr);
692
693 LOG_ALLOW(GLOBAL, LOG_INFO, "Rank %d: Migration complete.\n", rank);
694 PetscFunctionReturn(0);
695}
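A sketch of the intended two-step usage (assuming a migrationList of MigrationInfo entries has already been assembled, e.g., with AddToMigrationList):

    /* 1. Mark each migrating particle with its destination rank. */
    ierr = SetMigrationRanks(user, migrationList, migrationCount); CHKERRQ(ierr);
    /* 2. Execute the communication; marked particles move to their new owners. */
    ierr = PerformMigration(user); CHKERRQ(ierr);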

◆ ResizeSwarmGlobally()

PetscErrorCode ResizeSwarmGlobally ( DM  swarm,
PetscInt  N_target 
)

Definition at line 857 of file ParticleMotion.c.

858{
859 PetscErrorCode ierr;
860 PetscInt N_current, nlocal_current;
861 PetscMPIInt rank;
862 MPI_Comm comm;
863
864 PetscFunctionBeginUser;
865 ierr = PetscObjectGetComm((PetscObject)swarm, &comm); CHKERRQ(ierr);
866 ierr = MPI_Comm_rank(comm, &rank); CHKERRQ(ierr);
867 ierr = DMSwarmGetSize(swarm, &N_current); CHKERRQ(ierr);
868 ierr = DMSwarmGetLocalSize(swarm, &nlocal_current); CHKERRQ(ierr);
869
870 PetscInt delta = N_target - N_current;
871
872 if (delta == 0) {
873 PetscFunctionReturn(0); // Nothing to do
874 }
875
876 if (delta < 0) { // Remove particles
877 PetscInt num_to_remove_global = -delta;
878 LOG_ALLOW(GLOBAL, LOG_INFO, "ResizeSwarmGlobally: Current size %d > target size %d. Removing %d particles globally.\n", N_current, N_target, num_to_remove_global);
879
880 // --- Strategy: Remove the globally last 'num_to_remove_global' particles ---
881 // Each rank needs to determine how many of its *local* particles fall
882 // within the range of global indices [N_target, N_current - 1].
883
884 PetscInt rstart = 0;
885 PetscInt rend;
886 // Global range owned by this rank [rstart, rend)
887
888 ierr = MPI_Exscan(&nlocal_current, &rstart, 1, MPIU_INT, MPI_SUM, comm); CHKERRMPI(ierr); // Use CHKERRMPI for MPI calls
889
890 rend = rstart + nlocal_current;
891
892
893 // Calculate how many local particles have global indices >= N_target
894 PetscInt nlocal_remove_count = 0;
895 if (rend > N_target) { // If this rank owns any particles slated for removal
896 PetscInt start_remove_local_idx = (N_target > rstart) ? (N_target - rstart) : 0;
897 nlocal_remove_count = nlocal_current - start_remove_local_idx;
898 }
899
900 if (nlocal_remove_count < 0) nlocal_remove_count = 0; // Sanity check
901 if (nlocal_remove_count > nlocal_current) nlocal_remove_count = nlocal_current; // Sanity check
902
903 LOG_ALLOW(LOCAL, LOG_DEBUG, "Rank %d: Global range [%d, %d). Target size %d. Need to remove %d local particles (from end).\n", rank, rstart, rend, N_target, nlocal_remove_count);
904
905 // Remove the last 'nlocal_remove_count' particles *locally* by iterating backwards
906 PetscInt removal_ops_done = 0;
907 for (PetscInt p = nlocal_current - 1; p >= 0 && removal_ops_done < nlocal_remove_count; --p) {
908 ierr = DMSwarmRemovePointAtIndex(swarm, p); CHKERRQ(ierr);
909 removal_ops_done++;
910 }
911
912 if (removal_ops_done != nlocal_remove_count) {
913 SETERRQ(comm, PETSC_ERR_PLIB, "Rank %d: Failed to remove the expected number of local particles (%d != %d)", rank, removal_ops_done, nlocal_remove_count);
914 }
915 LOG_ALLOW(LOCAL, LOG_DEBUG, "Rank %d: Removed %d local particles.\n", rank, removal_ops_done);
916
917 // Barrier to ensure all removals are done before size check
918 ierr = MPI_Barrier(comm); CHKERRMPI(ierr);
919
920 } else { // delta > 0: Add particles
921 PetscInt num_to_add_global = delta;
922 LOG_ALLOW(GLOBAL, LOG_INFO, "ResizeSwarmGlobally: Current size %d < target size %d. Adding %d particles globally.\n", N_current, N_target, num_to_add_global);
923 ierr = DMSwarmAddNPoints(swarm, num_to_add_global); CHKERRQ(ierr);
924 // Note: Added particles will have uninitialized field data. Reading will overwrite.
925 }
926
927 // Verify final size
928 PetscInt N_final;
929 ierr = DMSwarmGetSize(swarm, &N_final); CHKERRQ(ierr);
930 if (N_final != N_target) {
931 SETERRQ(comm, PETSC_ERR_PLIB, "Failed to resize swarm: expected %d particles, got %d", N_target, N_final);
932 }
933 LOG_ALLOW(GLOBAL, LOG_DEBUG, "ResizeSwarmGlobally: Swarm successfully resized to %d particles.\n", N_final);
934 PetscFunctionReturn(0);
935}
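The MPI_Exscan call gives each rank the sum of nlocal_current over all lower-numbered ranks, i.e., the global index of its first local particle. Since MPI_Exscan leaves the receive buffer undefined on rank 0, rstart is pre-initialized to 0 to cover that case.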

◆ PreCheckAndResizeSwarm()

PetscErrorCode PreCheckAndResizeSwarm ( UserCtx *  user,
PetscInt  ti,
const char *  ext 
)

Checks particle count from a saved file and resizes the swarm globally.

This function uses a robust parallel pattern: only Rank 0 reads the reference position file to determine the total number of particles saved (N_file). This count is then broadcast to all other ranks. Finally, each rank compares N_file with the current swarm size and participates in resizing if necessary.

Parameters
[in,out]userPointer to the UserCtx structure containing the DMSwarm.
[in]tiTime index for constructing the file name.
[in]extFile extension (e.g., "dat").
Returns
PetscErrorCode 0 on success, non-zero on failure.

Definition at line 951 of file ParticleMotion.c.

954{
955 PetscErrorCode ierr;
956 char filename[PETSC_MAX_PATH_LEN];
957 PetscInt N_file = 0; // The number of particles determined from the file
958 PetscInt N_current = 0;
959 MPI_Comm comm;
960 PetscMPIInt rank;
961 const char *refFieldName = "position";
962 const PetscInt bs = 3;
963
964 // NOTE: Your filename format has a hardcoded "_0" which is typical for
965 // PETSc when writing a parallel object from a single rank.
966 // If you ever write in parallel, PETSc might create one file per rank.
967 // The current logic assumes a single file written by one process.
968 const int placeholder_int = 0;
969
970 PetscFunctionBeginUser;
971 ierr = PetscObjectGetComm((PetscObject)user->swarm, &comm); CHKERRQ(ierr);
972 ierr = MPI_Comm_rank(comm, &rank); CHKERRQ(ierr);
973
974 // --- Construct filename using the specified format ---
975 // results/%s%05<PetscInt_FMT>_%d.%s
976 ierr = PetscSNPrintf(filename, sizeof(filename), "results/%s%05" PetscInt_FMT "_%d.%s",
977 refFieldName, ti, placeholder_int, ext); CHKERRQ(ierr);
978 // Note: Make sure the "results" directory exists or handle directory creation elsewhere.
979
980 LOG_ALLOW(GLOBAL, LOG_INFO, "PreCheckAndResizeSwarm: Checking particle count for timestep %d using ref file '%s'.\n", ti, filename);
981
982 // --- Rank 0 reads the file to determine the size ---
983 if (rank == 0) {
984 PetscBool fileExists = PETSC_FALSE;
985 ierr = PetscTestFile(filename, 'r', &fileExists); CHKERRQ(ierr);
986
987 if (!fileExists) {
988 // Set a special value to indicate file not found, then broadcast it.
989 N_file = -1;
990 LOG_ALLOW(GLOBAL, LOG_ERROR, "Rank 0: Mandatory reference file '%s' not found for timestep %d.\n", filename, ti);
991 } else {
992 PetscViewer viewer;
993 Vec tmpVec;
994 PetscInt vecSize;
995
996 ierr = VecCreate(PETSC_COMM_SELF, &tmpVec); CHKERRQ(ierr); // Create a SEQUENTIAL vector
997 ierr = PetscViewerBinaryOpen(PETSC_COMM_SELF, filename, FILE_MODE_READ, &viewer); CHKERRQ(ierr);
998 ierr = VecLoad(tmpVec, viewer); CHKERRQ(ierr);
999 ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);
1000
1001 ierr = VecGetSize(tmpVec, &vecSize); CHKERRQ(ierr);
1002 ierr = VecDestroy(&tmpVec); CHKERRQ(ierr);
1003
1004 if (vecSize % bs != 0) {
1005 N_file = -2; // Special error code for bad file format
1006 LOG_ALLOW(GLOBAL, LOG_ERROR, "Rank 0: Vector size %d from file '%s' is not divisible by block size %d.\n", vecSize, filename, bs);
1007 } else {
1008 N_file = vecSize / bs;
1009 LOG_ALLOW(GLOBAL, LOG_DEBUG, "Rank 0: Found %d particles in file.\n", N_file);
1010 }
1011 }
1012 }
1013
1014 // --- Broadcast the particle count (or error code) from Rank 0 to all other ranks ---
1015 ierr = MPI_Bcast(&N_file, 1, MPIU_INT, 0, comm); CHKERRMPI(ierr);
1016
1017 // --- All ranks check for errors and abort if necessary ---
1018 if (N_file == -1) {
1019 SETERRQ(comm, PETSC_ERR_FILE_OPEN, "Mandatory reference file '%s' not found for timestep %d (as determined by Rank 0).", filename, ti);
1020 }
1021 if (N_file == -2) {
1022 SETERRQ(comm, PETSC_ERR_FILE_READ, "Reference file '%s' has incorrect format (as determined by Rank 0).", filename);
1023 }
1024 if (N_file < 0) {
1025 SETERRQ(comm, PETSC_ERR_PLIB, "Received invalid particle count %d from Rank 0.", N_file);
1026 }
1027
1028
1029 // --- Now all ranks have the correct N_file, compare and resize if needed ---
1030 ierr = DMSwarmGetSize(user->swarm, &N_current); CHKERRQ(ierr);
1031
1032 if (N_file != N_current) {
1033 LOG_ALLOW(GLOBAL, LOG_INFO, "Swarm size %d differs from file size %d. Resizing swarm globally.\n", N_current, N_file);
1034 ierr = ResizeSwarmGlobally(user->swarm, N_file); CHKERRQ(ierr);
1035 } else {
1036 LOG_ALLOW(GLOBAL, LOG_DEBUG, "Swarm size (%d) already matches file size. No resize needed.\n", N_current);
1037 }
1038
1039 // Also update the context
1040 user->simCtx->np = N_file;
1041
1042 PetscFunctionReturn(0);
1043}
PetscErrorCode ResizeSwarmGlobally(DM swarm, PetscInt N_target)
@ LOG_ERROR
Critical errors that may halt the program.
Definition logging.h:29
PetscInt np
Definition variables.h:585

◆ ReinitializeParticlesOnInletSurface()

PetscErrorCode ReinitializeParticlesOnInletSurface ( UserCtx *  user,
PetscReal  currentTime,
PetscInt  step 
)

Re-initializes the positions of particles currently on this rank if this rank owns part of the designated inlet surface.

This function is intended for user->simCtx->ParticleInitialization == 0 (Surface Initialization mode) and is typically called after an initial migration step (e.g., in PerformInitialSetup). It ensures that all particles that should originate from the inlet surface and are now on the correct MPI rank are properly distributed across that rank's portion of the inlet.

Parameters
userPointer to the UserCtx structure, containing simulation settings and grid information.
currentTimeCurrent simulation time (used for logging).
stepCurrent simulation step number (used for logging).
Returns
PetscErrorCode 0 on success, non-zero on failure.

Definition at line 1059 of file ParticleMotion.c.

1060{
1061 PetscErrorCode ierr;
1062 PetscMPIInt rank; // MPI rank of the current process
1063 DM swarm = user->swarm; // The particle swarm DM
1064 PetscReal *positions_field = NULL; // Pointer to swarm field for physical positions
1065 PetscReal *pos_phy_field = NULL; // Pointer to swarm field for physical positions (backup)
1066 PetscInt64 *particleIDs = NULL; // Pointer to swarm field for Particle IDs (for logging)
1067 PetscInt *cell_ID_field = NULL; // Pointer to swarm field for Cell IDs (for resetting after migration)
1068 const Cmpnts ***coor_nodes_local_array; // Read-only access to local node coordinates
1069 Vec Coor_local; // Local vector for node coordinates
1070 DMDALocalInfo info; // Local grid information (node-based) from user->da
1071 PetscInt xs_gnode_rank, ys_gnode_rank, zs_gnode_rank; // Local starting node indices (incl. ghosts) of rank's DA
1072 PetscInt IM_nodes_global, JM_nodes_global, KM_nodes_global; // Global node counts
1073
1074 PetscRandom rand_logic_reinit_i, rand_logic_reinit_j, rand_logic_reinit_k; // RNGs for re-placement
1075 PetscInt nlocal_current; // Number of particles currently on this rank
1076 PetscInt particles_actually_reinitialized_count = 0; // Counter for logging
1077 PetscBool can_this_rank_service_inlet = PETSC_FALSE; // Flag
1078
1079 PetscFunctionBeginUser;
1080
1081 // This function is only relevant for surface initialization mode and if an inlet face is defined.
1082 if (user->simCtx->ParticleInitialization != 0 || !user->inletFaceDefined) {
1083 PetscFunctionReturn(0);
1084 }
1085
1086 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
1087 ierr = DMSwarmGetLocalSize(swarm, &nlocal_current); CHKERRQ(ierr);
1088
1089 // If no particles on this rank, nothing to do.
1090 if (nlocal_current == 0) {
1091 LOG_ALLOW(LOCAL, LOG_DEBUG, "[T=%.4f, Step=%d] Rank %d has no local particles to re-initialize on inlet.\n", currentTime, step, rank);
1092 PetscFunctionReturn(0);
1093 }
1094
1095 // Get DMDA information for the node-centered coordinate grid (user->da)
1096 ierr = DMDAGetLocalInfo(user->da, &info); CHKERRQ(ierr);
1097 ierr = DMDAGetInfo(user->da, NULL, &IM_nodes_global, &JM_nodes_global, &KM_nodes_global, NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL); CHKERRQ(ierr);
1098 ierr = DMDAGetGhostCorners(user->da, &xs_gnode_rank, &ys_gnode_rank, &zs_gnode_rank, NULL, NULL, NULL); CHKERRQ(ierr);
1099
1100 // Check if this rank is responsible for (part of) the designated inlet surface
1101 ierr = CanRankServiceInletFace(user, &info, IM_nodes_global, JM_nodes_global, KM_nodes_global, &can_this_rank_service_inlet); CHKERRQ(ierr);
1102
1103 if (!can_this_rank_service_inlet) {
1104 LOG_ALLOW(LOCAL, LOG_DEBUG, "[T=%.4f, Step=%d] Rank %d cannot service inlet face %d. Skipping re-initialization of %d particles.\n", currentTime, step, rank, user->identifiedInletBCFace, nlocal_current);
1105 PetscFunctionReturn(0);
1106 }
1107
1108 LOG_ALLOW(GLOBAL, LOG_INFO, "[T=%.4f, Step=%d] Rank %d is on inlet face %d. Attempting to re-place %d local particles.\n", currentTime, step, rank, user->identifiedInletBCFace, nlocal_current);
1109
1110 // Get coordinate array and swarm fields for modification
1111 ierr = DMGetCoordinatesLocal(user->da, &Coor_local); CHKERRQ(ierr);
1112 ierr = DMDAVecGetArrayRead(user->fda, Coor_local, (void*)&coor_nodes_local_array); CHKERRQ(ierr);
1113 ierr = DMSwarmGetField(swarm, "position", NULL, NULL, (void**)&positions_field); CHKERRQ(ierr);
1114 ierr = DMSwarmGetField(swarm, "pos_phy", NULL, NULL, (void**)&pos_phy_field); CHKERRQ(ierr);
1115 ierr = DMSwarmGetField(swarm, "DMSwarm_pid", NULL, NULL, (void**)&particleIDs); CHKERRQ(ierr); // For logging
1116 ierr = DMSwarmGetField(swarm, "DMSwarm_CellID", NULL, NULL, (void**)&cell_ID_field); CHKERRQ(ierr);
1117
1118 // Initialize fresh RNGs for this re-placement to ensure good distribution
1119 ierr = InitializeLogicalSpaceRNGs(&rand_logic_reinit_i, &rand_logic_reinit_j, &rand_logic_reinit_k); CHKERRQ(ierr);
1120 // Optional: Seed RNGs for deterministic behavior if required, e.g., based on rank and step.
1121 // PetscRandomSetSeed(rand_logic_i, (unsigned long)rank*1000 + step + 100); PetscRandomSeed(rand_logic_i); // Example
1122
1123 // Loop over all particles currently local to this rank
1124 for (PetscInt p = 0; p < nlocal_current; p++) {
1125 PetscInt ci_metric_lnode, cj_metric_lnode, ck_metric_lnode; // Local node indices (of rank's DA patch) for cell origin
1126 PetscReal xi_metric_logic, eta_metric_logic, zta_metric_logic; // Intra-cell logical coordinates
1127 Cmpnts phys_coords; // To store newly calculated physical coordinates
1128
1129 // Get random cell on this rank's portion of the inlet and random logical coords within it
1130 ierr = GetRandomCellAndLogicOnInletFace(user, &info, xs_gnode_rank, ys_gnode_rank, zs_gnode_rank,
1131 IM_nodes_global, JM_nodes_global, KM_nodes_global,
1132 &rand_logic_reinit_i, &rand_logic_reinit_j, &rand_logic_reinit_k,
1133 &ci_metric_lnode, &cj_metric_lnode, &ck_metric_lnode,
1134 &xi_metric_logic, &eta_metric_logic, &zta_metric_logic); CHKERRQ(ierr);
1135
1136 // Convert these logical coordinates to physical coordinates
1137 ierr = MetricLogicalToPhysical(user, coor_nodes_local_array,
1138 ci_metric_lnode, cj_metric_lnode, ck_metric_lnode,
1139 xi_metric_logic, eta_metric_logic, zta_metric_logic,
1140 &phys_coords); CHKERRQ(ierr);
1141
1142 // Update the particle's position in the swarm fields
1143 positions_field[3*p+0] = phys_coords.x;
1144 positions_field[3*p+1] = phys_coords.y;
1145 positions_field[3*p+2] = phys_coords.z;
1146 pos_phy_field[3*p+0] = phys_coords.x;
1147 pos_phy_field[3*p+1] = phys_coords.y;
1148 pos_phy_field[3*p+2] = phys_coords.z;
1149 particles_actually_reinitialized_count++;
1150
1151 cell_ID_field[3*p+0] = -1;
1152 cell_ID_field[3*p+1] = -1;
1153 cell_ID_field[3*p+2] = -1;
1154
1155 LOG_LOOP_ALLOW(LOCAL, LOG_DEBUG, p, (nlocal_current > 20 ? nlocal_current/10 : 1), // Sampled logging
1156 "ReInit - Rank %d: PID %ld (idx %ld) RE-PLACED. CellOriginNode(locDAIdx):(%d,%d,%d). LogicCoords: (%.2e,%.2f,%.2f). PhysCoords: (%.6f,%.6f,%.6f).\n",
1157 rank, particleIDs[p], (long)p,
1158 ci_metric_lnode, cj_metric_lnode, ck_metric_lnode,
1159 xi_metric_logic, eta_metric_logic, zta_metric_logic,
1160 phys_coords.x, phys_coords.y, phys_coords.z);
1161 }
1162
1163 // Logging summary of re-initialization
1164 if (particles_actually_reinitialized_count > 0) {
1165 LOG_ALLOW(GLOBAL, LOG_INFO, "[T=%.4f, Step=%d] Rank %d (on inlet face %d) successfully re-initialized %d of %d local particles.\n", currentTime, step, rank, user->identifiedInletBCFace, particles_actually_reinitialized_count, nlocal_current);
1166 } else if (nlocal_current > 0) { // This case should ideally not be hit if can_this_rank_service_inlet was true and particles were present.
1167 LOG_ALLOW(GLOBAL, LOG_WARNING, "[T=%.4f, Step=%d] Rank %d claimed to service inlet face %d, but re-initialized 0 of %d local particles. This may indicate an issue if particles were expected to be re-placed.\n", currentTime, step, rank, user->identifiedInletBCFace, nlocal_current);
1168 }
1169
1170 // Cleanup: Destroy RNGs and restore swarm fields/coordinate array
1171 ierr = PetscRandomDestroy(&rand_logic_reinit_i); CHKERRQ(ierr);
1172 ierr = PetscRandomDestroy(&rand_logic_reinit_j); CHKERRQ(ierr);
1173 ierr = PetscRandomDestroy(&rand_logic_reinit_k); CHKERRQ(ierr);
1174
1175 ierr = DMSwarmRestoreField(swarm, "position", NULL, NULL, (void**)&positions_field); CHKERRQ(ierr);
1176 ierr = DMSwarmRestoreField(swarm, "pos_phy", NULL, NULL, (void**)&pos_phy_field); CHKERRQ(ierr);
1177 ierr = DMSwarmRestoreField(swarm, "DMSwarm_pid", NULL, NULL, (void**)&particleIDs); CHKERRQ(ierr);
1178 ierr = DMSwarmRestoreField(swarm, "DMSwarm_CellID", NULL, NULL, (void**)&cell_ID_field); CHKERRQ(ierr);
1179 ierr = DMDAVecRestoreArrayRead(user->fda, Coor_local, (void*)&coor_nodes_local_array); CHKERRQ(ierr);
1180
1181
1182 PetscFunctionReturn(0);
1183}
PetscErrorCode CanRankServiceInletFace(UserCtx *user, const DMDALocalInfo *info, PetscInt IM_nodes_global, PetscInt JM_nodes_global, PetscInt KM_nodes_global, PetscBool *can_service_inlet_out)
Determines if the current MPI rank owns any part of the globally defined inlet face,...
Definition Boundaries.c:24
PetscErrorCode GetRandomCellAndLogicOnInletFace(UserCtx *user, const DMDALocalInfo *info, PetscInt xs_gnode_rank, PetscInt ys_gnode_rank, PetscInt zs_gnode_rank, PetscInt IM_nodes_global, PetscInt JM_nodes_global, PetscInt KM_nodes_global, PetscRandom *rand_logic_i_ptr, PetscRandom *rand_logic_j_ptr, PetscRandom *rand_logic_k_ptr, PetscInt *ci_metric_lnode_out, PetscInt *cj_metric_lnode_out, PetscInt *ck_metric_lnode_out, PetscReal *xi_metric_logic_out, PetscReal *eta_metric_logic_out, PetscReal *zta_metric_logic_out)
Assuming the current rank services the inlet face, this function selects a random cell (owned by this...
Definition Boundaries.c:225
PetscErrorCode MetricLogicalToPhysical(UserCtx *user, const Cmpnts ***X, PetscInt i, PetscInt j, PetscInt k, PetscReal xi, PetscReal eta, PetscReal zta, Cmpnts *Xp)
Public wrapper: map (cell index, ξ,η,ζ) to (x,y,z).
Definition Metric.c:68
PetscErrorCode InitializeLogicalSpaceRNGs(PetscRandom *rand_logic_i, PetscRandom *rand_logic_j, PetscRandom *rand_logic_k)
Initializes random number generators for logical space operations [0.0, 1.0).
#define LOG_LOOP_ALLOW(scope, level, iterVar, interval, fmt,...)
Logs a message inside a loop, but only every interval iterations.
Definition logging.h:319
@ LOG_WARNING
Non-critical issues that warrant attention.
Definition logging.h:30
PetscBool inletFaceDefined
Definition variables.h:649
BCFace identifiedInletBCFace
Definition variables.h:650
PetscInt ParticleInitialization
Definition variables.h:589

◆ compare_PetscInt64()

static int compare_PetscInt64 ( const void *  a,
const void *  b 
)
static

Definition at line 1186 of file ParticleMotion.c.

1186 {
1187 PetscInt64 val_a = *(const PetscInt64*)a;
1188 PetscInt64 val_b = *(const PetscInt64*)b;
1189 if (val_a < val_b) return -1;
1190 if (val_a > val_b) return 1;
1191 return 0;
1192}
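Returning the sign through explicit comparisons, rather than the common val_a - val_b shortcut, avoids integer overflow and the truncation that occurs when a 64-bit difference does not fit in the int return type.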

◆ GetLocalPIDSnapshot()

PetscErrorCode GetLocalPIDSnapshot ( const PetscInt64  pid_field[],
PetscInt  n_local,
PetscInt64 **  pids_snapshot_out 
)

Creates a sorted snapshot of all Particle IDs (PIDs) from a raw data array.

This function is a crucial helper for the migration process. It captures the state of which particles are on the current MPI rank before migration occurs by taking a pointer to the swarm's raw PID data array. The resulting sorted array can then be used with an efficient binary search to quickly identify newcomer particles after migration.

This function does NOT call DMSwarmGetField/RestoreField. It is the caller's responsibility to acquire the pid_field pointer before calling and restore it afterward.

Parameters
[in]pid_fieldA read-only pointer to the raw array of PIDs for the local swarm.
[in]n_localThe number of particles currently on the local rank.
[out]pids_snapshot_outA pointer to a PetscInt64* array. This function will allocate memory for this array, and the caller is responsible for freeing it with PetscFree() when it is no longer needed.
Returns
PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.

Definition at line 1215 of file ParticleMotion.c.

1218{
1219 PetscErrorCode ierr;
1220 PetscMPIInt rank;
1221
1222 PetscFunctionBeginUser;
1223
1224 // --- 1. Input Validation ---
1225 if (!pids_snapshot_out) {
1226 SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_NULL, "Output pointer pids_snapshot_out is NULL.");
1227 }
1228 // If n_local > 0, pid_field must not be NULL.
1229 if (n_local > 0 && !pid_field) {
1230 SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_NULL, "Input pid_field pointer is NULL for n_local > 0.");
1231 }
1232
1233 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
1234 LOG_ALLOW(LOCAL, LOG_DEBUG, "[Rank %d]: Creating PID snapshot for %d local particles.\n", rank, n_local);
1235
1236 // If there are no local particles, the snapshot is empty (NULL).
1237 if (n_local == 0) {
1238 *pids_snapshot_out = NULL;
1239 PetscFunctionReturn(0);
1240 }
1241
1242 // --- 2. Allocate Memory for the Snapshot ---
1243 ierr = PetscMalloc1(n_local, pids_snapshot_out); CHKERRQ(ierr);
1244
1245 // --- 3. Copy Data ---
1246 // Perform a fast memory copy from the provided array to our new snapshot array.
1247 ierr = PetscMemcpy(*pids_snapshot_out, pid_field, n_local * sizeof(PetscInt64)); CHKERRQ(ierr);
1248 LOG_ALLOW(LOCAL, LOG_DEBUG, "[Rank %d]: Copied %d PIDs.\n", rank, n_local);
1249
1250 // --- 4. Sort the Snapshot Array ---
1251 // Sorting enables fast binary search lookups later.
1252 ierr = PetscSortInt64(n_local, *pids_snapshot_out); CHKERRQ(ierr);
1253 LOG_ALLOW(LOCAL, LOG_DEBUG, "[Rank %d]: PID snapshot sorted successfully.\n", rank);
1254
1255 PetscFunctionReturn(0);
1256}
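A minimal usage sketch (hypothetical caller code) pairing the snapshot with post-migration newcomer flagging; swarm and ierr are assumed to be in scope:

    PetscInt64 *pid_field = NULL, *pids_before = NULL;
    PetscInt n_before;

    /* Caller acquires and restores the PID field around the snapshot. */
    ierr = DMSwarmGetLocalSize(swarm, &n_before); CHKERRQ(ierr);
    ierr = DMSwarmGetField(swarm, "DMSwarm_pid", NULL, NULL, (void**)&pid_field); CHKERRQ(ierr);
    ierr = GetLocalPIDSnapshot(pid_field, n_before, &pids_before); CHKERRQ(ierr);
    ierr = DMSwarmRestoreField(swarm, "DMSwarm_pid", NULL, NULL, (void**)&pid_field); CHKERRQ(ierr);

    /* ... set migration ranks and migrate ... */

    ierr = FlagNewcomersForLocation(swarm, n_before, pids_before); CHKERRQ(ierr);
    ierr = PetscFree(pids_before); CHKERRQ(ierr); /* caller owns the snapshot */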

◆ AddToMigrationList()

PetscErrorCode AddToMigrationList ( MigrationInfo **  migration_list_p,
PetscInt *  capacity_p,
PetscInt *  count_p,
PetscInt  particle_local_idx,
PetscMPIInt  destination_rank 
)

Safely adds a new migration task to a dynamically sized list.

This utility function manages a dynamic array of MigrationInfo structs. It appends a new entry to the list and automatically doubles the array's capacity using PetscRealloc if the current capacity is exceeded. This prevents buffer overflows and avoids the need to know the number of migrating particles in advance.

Parameters
[in,out]migration_list_pA pointer to the MigrationInfo array pointer. The function will update this pointer if the array is reallocated.
[in,out]capacity_pA pointer to an integer holding the current allocated capacity of the list (in number of elements). This will be updated upon reallocation.
[in,out]count_pA pointer to an integer holding the current number of items in the list. This will be incremented by one.
[in]particle_local_idxThe local index (from 0 to nlocal-1) of the particle that needs to be migrated.
[in]destination_rankThe target MPI rank for the particle.
Returns
PetscErrorCode 0 on success, or a non-zero PETSc error code on failure (e.g., from memory allocation).

Definition at line 1280 of file ParticleMotion.c.

1285{
1286 PetscErrorCode ierr;
1287 PetscMPIInt rank;
1288
1289 PetscFunctionBeginUser;
1290
1291 // --- 1. Input Validation ---
1292 if (!migration_list_p || !capacity_p || !count_p) {
1293 SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_NULL, "Null pointer provided to AddToMigrationList for list management.");
1294 }
1295
1296 // --- 2. Check if the list needs to be resized ---
1297 if (*count_p >= *capacity_p) {
1298 PetscInt old_capacity = *capacity_p;
1299 // Start with a reasonable base capacity, then double for subsequent reallocations.
1300 PetscInt new_capacity = (old_capacity == 0) ? 16 : old_capacity * 2;
1301
1302 // Use PetscRealloc for safe memory reallocation.
1303 // It handles allocating new memory, copying old data, and freeing the old block.
1304 // The first argument to PetscRealloc is the new size in BYTES.
1305 ierr = PetscRealloc(new_capacity * sizeof(MigrationInfo), migration_list_p); CHKERRQ(ierr);
1306
1307 *capacity_p = new_capacity; // Update the capacity tracker
1308
1309 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
1310 LOG_ALLOW(LOCAL, LOG_DEBUG, "AddToMigrationList [Rank %d]: Reallocated migrationList capacity from %d to %d.\n",
1311 rank, old_capacity, new_capacity);
1312 }
1313
1314 // --- 3. Add the new migration data to the list ---
1315 // Dereference the pointer-to-a-pointer to get the actual array.
1316 MigrationInfo *list = *migration_list_p;
1317
1318 list[*count_p].local_index = particle_local_idx;
1319 list[*count_p].target_rank = destination_rank;
1320
1321 // --- 4. Increment the count of items in the list ---
1322 (*count_p)++;
1323
1324 PetscFunctionReturn(0);
1325}
Information needed to migrate a single particle between MPI ranks.
Definition variables.h:188

◆ FlagNewcomersForLocation()

PetscErrorCode FlagNewcomersForLocation ( DM  swarm,
PetscInt  n_local_before,
const PetscInt64  pids_before[] 
)

Identifies newly arrived particles after migration and flags them for a location search.

This function is a critical component of the iterative migration process managed by the main particle settlement orchestrator (e.g., SettleParticles). After a DMSwarmMigrate call, each rank's local particle list is a new mix of resident particles and newly received ones. This function's job is to efficiently identify these "newcomers" and set their DMSwarm_location_status field to NEEDS_LOCATION.

This ensures that in the subsequent pass of the migration do-while loop, only the newly arrived particles are processed by the expensive location algorithm, preventing redundant work on particles that are already settled on the current rank.

The identification is done by comparing the PIDs of particles currently on the rank against a "snapshot" of PIDs taken before the migration occurred.

Parameters
[in] swarm           The DMSwarm object, which has just completed a migration.
[in] n_local_before  The number of particles that were on this rank before the migration was performed.
[in] pids_before     A pre-sorted array of the PIDs that were on this rank before the migration. This is used for fast lookups.
Returns
PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.
Note
This function assumes the pids_before array is sorted in ascending order to enable the use of an efficient binary search.
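
The snapshot/migrate/flag sequence that this function completes, sketched with the same field names used elsewhere in this file (error handling abbreviated):

PetscErrorCode ierr;
PetscInt       n_before;
PetscInt64    *pid_p = NULL, *pids_before = NULL;

ierr = DMSwarmGetLocalSize(user->swarm, &n_before); CHKERRQ(ierr);
ierr = DMSwarmGetField(user->swarm, "DMSwarm_pid", NULL, NULL, (void**)&pid_p); CHKERRQ(ierr);
ierr = GetLocalPIDSnapshot(pid_p, n_before, &pids_before); CHKERRQ(ierr);  /* sorted snapshot */
ierr = DMSwarmRestoreField(user->swarm, "DMSwarm_pid", NULL, NULL, (void**)&pid_p); CHKERRQ(ierr);

ierr = PerformMigration(user); CHKERRQ(ierr);  /* local particle list changes here */

ierr = FlagNewcomersForLocation(user->swarm, n_before, pids_before); CHKERRQ(ierr);
ierr = PetscFree(pids_before); CHKERRQ(ierr);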

Definition at line 1355 of file ParticleMotion.c.

1358{
1359 PetscErrorCode ierr;
1360 PetscMPIInt rank;
1361 PetscInt n_local_after;
1362 PetscInt newcomer_count = 0;
1363
1364 // Pointers to the swarm data fields we will read and modify
1365 PetscInt64 *pid_field_after = NULL;
1366 PetscInt *status_field_after = NULL;
1367 PetscInt *cell_field_after = NULL;
1368
1369 PetscFunctionBeginUser;
1370
1371 // --- 1. Input Validation and Basic Setup ---
1372 if (!swarm) {
1373 SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_WRONGSTATE, "Input DMSwarm is NULL in FlagNewcomersForLocation.");
1374 }
1375 // If n_local_before > 0, the corresponding PID array must not be null.
1376 if (n_local_before > 0 && !pids_before) {
1377 SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_NULL, "Input pids_before array is NULL for n_local_before > 0.");
1378 }
1379
1380 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
1381
1382 // Get the number of particles on this rank *after* the migration.
1383 ierr = DMSwarmGetLocalSize(swarm, &n_local_after); CHKERRQ(ierr);
1384
1385 LOG_ALLOW(LOCAL, LOG_DEBUG, "FlagNewcomersForLocation [Rank %d]: Checking for newcomers. Size before: %d, Size after: %d\n",
1386 rank, n_local_before, n_local_after);
1387
1388 // If there are no particles now, there's nothing to do.
1389 if (n_local_after == 0) {
1390 PetscFunctionReturn(0);
1391 }
1392
1393 // --- 2. Access Swarm Data ---
1394 // Get read-only access to the PIDs and read-write access to the status field.
1395 ierr = DMSwarmGetField(swarm, "DMSwarm_pid", NULL, NULL, (void**)&pid_field_after); CHKERRQ(ierr);
1396 ierr = DMSwarmGetField(swarm, "DMSwarm_location_status", NULL, NULL, (void**)&status_field_after); CHKERRQ(ierr);
1397 ierr = DMSwarmGetField(swarm, "DMSwarm_CellID", NULL, NULL, (void**)&cell_field_after); CHKERRQ(ierr);
1398 if (!pid_field_after || !status_field_after || !cell_field_after) {
1399 SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_WRONGSTATE, "Failed to get required swarm fields in FlagNewcomersForLocation.");
1400 }
1401
1402 // --- 3. Identify and Flag Newcomers ---
1403 // Loop through all particles currently on this rank.
1404 for (PetscInt p_idx = 0; p_idx < n_local_after; ++p_idx) {
1405 PetscInt64 current_pid = pid_field_after[p_idx];
1406 PetscBool is_found_in_before_list;
1407
1408 // Use our custom, efficient helper function for the lookup.
1409 ierr = BinarySearchInt64(n_local_before, pids_before, current_pid, &is_found_in_before_list); CHKERRQ(ierr);
1410
1411 // If the PID was NOT found in the "before" list, it must be a newcomer.
1412 if (!is_found_in_before_list) {
1413 // Flag it for processing in the next pass of the migration loop.
1414 status_field_after[p_idx] = NEEDS_LOCATION;
1415 // Note: the carried-over host-cell index is intentionally left intact;
1416 // it gives the robust walk a starting cell when this newcomer is
1417 // located on the next pass (see CASE 1 in the orchestrator below).
1418 newcomer_count++;
1419
1420 LOG_ALLOW(LOCAL, LOG_DEBUG, "[Rank %d]: Flagged newcomer PID %ld at local index %d as NEEDS_LOCATION.\n",
1421 rank, current_pid, p_idx);
1422 }
1423 }
1424
1425 // --- 4. Restore Swarm Fields ---
1426 // Release the locks on the swarm data arrays.
1427 ierr = DMSwarmRestoreField(swarm, "DMSwarm_pid", NULL, NULL, (void**)&pid_field_after); CHKERRQ(ierr);
1428 ierr = DMSwarmRestoreField(swarm, "DMSwarm_location_status", NULL, NULL, (void**)&status_field_after); CHKERRQ(ierr);
1429 ierr = DMSwarmRestoreField(swarm, "DMSwarm_CellID", NULL, NULL, (void**)&cell_field_after); CHKERRQ(ierr);
1430
1431 if (newcomer_count > 0) {
1432 LOG_ALLOW(LOCAL, LOG_INFO, "[Rank %d]: Identified and flagged %d newcomers.\n", rank, newcomer_count);
1433 }
1434
1435 PetscFunctionReturn(0);
1436}
PetscErrorCode BinarySearchInt64(PetscInt n, const PetscInt64 arr[], PetscInt64 key, PetscBool *found)
Performs a binary search for a key in a sorted array of PetscInt64.
Definition setup.c:2017
@ NEEDS_LOCATION
Definition variables.h:135

◆ GuessParticleOwnerWithBBox()

PetscErrorCode GuessParticleOwnerWithBBox ( UserCtx *  user,
const Particle *  particle,
const BoundingBox *  bboxlist,
PetscMPIInt *  guess_rank_out 
)

Provides a fast, heuristic-based guess for a particle's owner rank using bounding boxes.

This function is part of the "Guess and Verify" strategy, called only for "lost" particles. It attempts to find a candidate owner by checking which rank's bounding box contains the particle's physical position.

To optimize the search, it uses the particle's position relative to the local bounding box to intelligently check the most likely neighboring ranks first. For example, if a particle's x-coordinate is less than the local minimum x, it will check the -X neighbor first. If no owner is found in the immediate neighbors, it performs a full search of all other ranks as a fallback.

Parameters
[in]  user            Pointer to the UserCtx, which must contain the pre-computed bbox (local), neighbors struct, and the global bboxlist.
[in]  particle        A pointer to the particle whose owner is being guessed.
[in]  bboxlist        An array of BoundingBox structures for ALL MPI ranks, indexed 0 to (size-1). This array must be up-to-date and available on all ranks.
[out] guess_rank_out  A pointer to a PetscMPIInt. Set to the candidate owner's rank if found, otherwise set to MPI_PROC_NULL.
Returns
PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.
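
A sketch of how the guess feeds the verification step, mirroring the CASE 2 logic of the orchestrator below; particle is an unpacked Particle struct and rank is the calling rank:

PetscMPIInt            guess  = MPI_PROC_NULL;
ParticleLocationStatus status = NEEDS_LOCATION;

ierr = GuessParticleOwnerWithBBox(user, &particle, bboxlist, &guess); CHKERRQ(ierr);
if (guess != MPI_PROC_NULL && guess != rank) {
    /* Fast path: another rank's box contains the particle; mark for migration. */
    particle.destination_rank = guess;
    status = MIGRATING_OUT;
} else {
    /* Guess failed or particle is local: fall back to the robust cell walk. */
    ierr = LocateParticleOrFindMigrationTarget_TEST(user, &particle, &status); CHKERRQ(ierr);
}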

Definition at line 1462 of file ParticleMotion.c.

1466{
1467 PetscErrorCode ierr;
1468 PetscMPIInt rank, size;
1469 const RankNeighbors *neighbors = &user->neighbors; // Use a direct pointer for clarity
1470 const BoundingBox *localBBox = &user->bbox;
1471
1472 PetscFunctionBeginUser;
1473
1474 // --- 1. Input Validation and Setup ---
1475 if (!user || !particle || !guess_rank_out || !bboxlist) {
1476 SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_NULL, "Null pointer provided to GuessParticleOwnerWithBBox.");
1477 }
1478 if (!localBBox || !neighbors) {
1479 SETERRQ(PETSC_COMM_SELF, PETSC_ERR_ARG_WRONGSTATE, "Required user->bbox or user->neighbors is not initialized.");
1480 }
1481
1482 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
1483 ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRQ(ierr);
1484
1485 *guess_rank_out = MPI_PROC_NULL; // Default to "not found"
1486
1487 LOG_ALLOW(LOCAL, LOG_DEBUG, "[PID %ld]: Starting guess for particle at (%.3f, %.3f, %.3f).\n",
1488 particle->PID, particle->loc.x, particle->loc.y, particle->loc.z);
1489
1490 // --- Step 0: Check if the particle is inside the CURRENT rank's bounding box FIRST. ---
1491 // This handles the common case of initial placement, where a particle is
1492 // flagged "lost" but is in fact physically local.
1493 if (IsParticleInBox(localBBox, &particle->loc)) {
1494 *guess_rank_out = rank;
1495 LOG_ALLOW(LOCAL, LOG_DEBUG, "[PID %ld]: Fast path guess SUCCESS. Particle is within the local (Rank %d) bounding box.\n",
1496 particle->PID, rank);
1497 PetscFunctionReturn(0); // Found it, we're done.
1498 }
1499 // --- 2. Fast Path: Check Immediate Neighbors Based on Exit Direction ---
1500
1501 // Determine likely exit direction(s) to prioritize neighbor check
1502 PetscBool exit_xm = particle->loc.x < localBBox->min_coords.x;
1503 PetscBool exit_xp = particle->loc.x > localBBox->max_coords.x;
1504 PetscBool exit_ym = particle->loc.y < localBBox->min_coords.y;
1505 PetscBool exit_yp = particle->loc.y > localBBox->max_coords.y;
1506 PetscBool exit_zm = particle->loc.z < localBBox->min_coords.z;
1507 PetscBool exit_zp = particle->loc.z > localBBox->max_coords.z;
1508
1509 if (exit_xm && neighbors->rank_xm != MPI_PROC_NULL && IsParticleInBox(&bboxlist[neighbors->rank_xm], &particle->loc)) {
1510 *guess_rank_out = neighbors->rank_xm;
1511 } else if (exit_xp && neighbors->rank_xp != MPI_PROC_NULL && IsParticleInBox(&bboxlist[neighbors->rank_xp], &particle->loc)) {
1512 *guess_rank_out = neighbors->rank_xp;
1513 } else if (exit_ym && neighbors->rank_ym != MPI_PROC_NULL && IsParticleInBox(&bboxlist[neighbors->rank_ym], &particle->loc)) {
1514 *guess_rank_out = neighbors->rank_ym;
1515 } else if (exit_yp && neighbors->rank_yp != MPI_PROC_NULL && IsParticleInBox(&bboxlist[neighbors->rank_yp], &particle->loc)) {
1516 *guess_rank_out = neighbors->rank_yp;
1517 } else if (exit_zm && neighbors->rank_zm != MPI_PROC_NULL && IsParticleInBox(&bboxlist[neighbors->rank_zm], &particle->loc)) {
1518 *guess_rank_out = neighbors->rank_zm;
1519 } else if (exit_zp && neighbors->rank_zp != MPI_PROC_NULL && IsParticleInBox(&bboxlist[neighbors->rank_zp], &particle->loc)) {
1520 *guess_rank_out = neighbors->rank_zp;
1521 }
1522 // Note: This does not handle corner/edge neighbors, which is why the fallback is essential.
1523
1524 if (*guess_rank_out != MPI_PROC_NULL) {
1525 LOG_ALLOW(LOCAL, LOG_DEBUG, "[PID %ld]: Fast path guess SUCCESS. Found in immediate neighbor Rank %d.\n",
1526 particle->PID, *guess_rank_out);
1527 PetscFunctionReturn(0); // Found it, we're done.
1528 }
1529
1530 // --- 3. Robust Fallback: Check All Other Ranks ---
1531 // If we get here, the particle was not in any of the immediate face neighbors' boxes.
1532 LOG_ALLOW(LOCAL, LOG_DEBUG, "[PID %ld]: Not in immediate face neighbors. Starting global fallback search.\n",
1533 particle->PID);
1534
1535 for (PetscMPIInt r = 0; r < size; ++r) {
1536 if (r == rank) continue; // Don't check ourselves.
1537
1538 if (IsParticleInBox(&bboxlist[r], &particle->loc)) {
1539 // Detailed trace of the successful containment test, useful when
1540 // diagnosing ownership handoffs between non-neighboring ranks.
1541 LOG_ALLOW(LOCAL,LOG_DEBUG, "[Guess BBox | Rank %d] PID %lld at (%.4f, %.4f, %.4f) is INSIDE Rank %d's box: [(%.4f, %.4f, %.4f) to (%.4f, %.4f, %.4f)].\n",
1542 (int)rank, (long long)particle->PID,
1543 particle->loc.x, particle->loc.y, particle->loc.z,
1544 (int)r,
1545 bboxlist[r].min_coords.x, bboxlist[r].min_coords.y, bboxlist[r].min_coords.z,
1546 bboxlist[r].max_coords.x, bboxlist[r].max_coords.y, bboxlist[r].max_coords.z);
1548
1549 *guess_rank_out = r;
1550 LOG_ALLOW(LOCAL, LOG_DEBUG, "[PID %ld]: Fallback search SUCCESS. Found in Rank %d.\n",
1551 particle->PID, *guess_rank_out);
1552 PetscFunctionReturn(0); // Found it, we're done.
1553 }
1554 }
1555
1556 // If the code reaches here, the particle was not found in any rank's bounding box.
1557 LOG_ALLOW(LOCAL, LOG_WARNING, "[PID %ld]: Guess FAILED. Particle not found in any rank's bounding box.\n",
1558 particle->PID);
1559
1560 // guess_rank_out remains MPI_PROC_NULL, signaling failure to the caller.
1561 PetscFunctionReturn(0);
1562}
PetscMPIInt rank_zm
Definition variables.h:178
PetscMPIInt rank_yp
Definition variables.h:177
PetscMPIInt rank_ym
Definition variables.h:177
PetscMPIInt rank_xp
Definition variables.h:176
RankNeighbors neighbors
Definition variables.h:642
Cmpnts loc
Definition variables.h:167
PetscMPIInt rank_xm
Definition variables.h:176
PetscMPIInt rank_zp
Definition variables.h:178
BoundingBox bbox
Definition variables.h:641
PetscInt64 PID
Definition variables.h:165
Defines a 3D axis-aligned bounding box.
Definition variables.h:153
Stores the MPI ranks of neighboring subdomains.
Definition variables.h:175

◆ LocateAllParticlesInGrid_TEST()

PetscErrorCode LocateAllParticlesInGrid_TEST ( UserCtx *  user,
BoundingBox *  bboxlist 
)

Orchestrates the complete particle location and migration process for one timestep.

This function is the master orchestrator for ensuring every particle is on its correct MPI rank and has a valid host cell index. It is designed to be called once per timestep after particle positions have been updated.

The function uses a robust, iterative "Guess and Verify" strategy within a do-while loop to handle complex particle motion across processor boundaries, especially on curvilinear grids.

  1. State Snapshot: At the start of each pass, it captures a list of all Particle IDs (PIDs) on the current rank.
  2. **"Guess" (Heuristic):** For particles that are "lost" (no valid host cell), it first attempts a fast, bounding-box-based guess to find a potential new owner rank.
  3. **"Verify" (Robust Walk):** For all other particles, or if the guess fails, it uses a robust cell-walking algorithm (LocateParticleOrFindMigrationTarget) that determines the particle's status: located locally, needs migration, or is lost.
  4. Migration: After identifying all migrating particles on a pass, it performs the MPI communication using the SetMigrationRanks and PerformMigration helpers.
  5. Newcomer Flagging: After migration, it uses the PID snapshot from step 1 to efficiently identify newly arrived particles and flag them for location on the next pass.
  6. Iteration: The process repeats in a do-while loop until a pass occurs where no particles migrate, ensuring the entire swarm is in a stable, consistent state.
Parameters
[in,out]userPointer to the UserCtx, containing the swarm and all necessary domain topology information (bboxlist, RankCellInfoMap, etc.).
[in]bboxlistAn array of BoundingBox structures for ALL MPI ranks, indexed 0 to (size-1). This array must be up-to-date and available on all ranks.
Returns
PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.
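
For context, a plausible per-timestep driver sequence built from the helpers documented in this file (illustrative only; the actual solver loop may differ, and the lost-particle counters are placeholders):

PetscInt lostLocal = 0, lostGlobal = 0;

ierr = UpdateAllParticlePositions(user); CHKERRQ(ierr);               /* advect by user->dt */
ierr = LocateAllParticlesInGrid_TEST(user, bboxlist); CHKERRQ(ierr);  /* settle owners + host cells */
ierr = CheckAndRemoveLostParticles(user, &lostLocal, &lostGlobal); CHKERRQ(ierr);
ierr = ResetAllParticleStatuses(user); CHKERRQ(ierr);                 /* prepare for the next step */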

Definition at line 1598 of file ParticleMotion.c.

1599{
1600 PetscErrorCode ierr;
1601 PetscInt passes = 0;
1602 const PetscInt MAX_MIGRATION_PASSES = 10; // Safety break for runaway loops
1603 PetscInt global_migrations_this_pass;
1604 PetscMPIInt rank;
1605
1606 PetscFunctionBeginUser;
1608 ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
1610 LOG_ALLOW(GLOBAL, LOG_INFO, "LocateAllParticlesInGrid (Orchestrator) - Beginning particle settlement process.\n");
1611
1612 // This loop ensures that particles that jump across multiple ranks are
1613 // handled correctly in successive, iterative handoffs.
1614 do {
1615 passes++;
1616 LOG_ALLOW_SYNC(GLOBAL, LOG_INFO, "[Rank %d] Starting migration pass %d.\n", rank, passes);
1617
1618 // --- STAGE 1: PER-PASS INITIALIZATION ---
1619 MigrationInfo *migrationList = NULL;
1620 PetscInt local_migration_count = 0;
1621 PetscInt migrationListCapacity = 0;
1622 PetscInt nlocal_before;
1623 PetscInt64 *pids_before_snapshot = NULL;
1624 PetscInt local_lost_count = 0;
1625
1626 ierr = DMSwarmGetLocalSize(user->swarm, &nlocal_before); CHKERRQ(ierr);
1627 LOG_ALLOW(LOCAL, LOG_DEBUG, "[Rank %d] Pass %d begins with %d local particles.\n", rank, passes, nlocal_before);
1628
1629
1630 // --- STAGE 2: PRE-MIGRATION SNAPSHOT & MAIN PROCESSING LOOP ---
1631 if (nlocal_before > 0) {
1632 // Get pointers to all fields needed for this pass
1633 PetscReal *pos_p, *weights_p, *vel_p;
1634 PetscInt *cell_p, *status_p;
1635 PetscInt64 *pid_p;
1636 ierr = DMSwarmGetField(user->swarm, "position", NULL, NULL, (void**)&pos_p); CHKERRQ(ierr);
1637 ierr = DMSwarmGetField(user->swarm, "velocity", NULL, NULL, (void**)&vel_p); CHKERRQ(ierr);
1638 ierr = DMSwarmGetField(user->swarm, "weight", NULL, NULL, (void**)&weights_p); CHKERRQ(ierr);
1639 ierr = DMSwarmGetField(user->swarm, "DMSwarm_CellID", NULL, NULL, (void**)&cell_p); CHKERRQ(ierr);
1640 ierr = DMSwarmGetField(user->swarm, "DMSwarm_pid", NULL, NULL, (void**)&pid_p); CHKERRQ(ierr);
1641 ierr = DMSwarmGetField(user->swarm, "DMSwarm_location_status", NULL, NULL, (void**)&status_p); CHKERRQ(ierr);
1642
1643 // Create a sorted snapshot of current PIDs to identify newcomers after migration.
1644 // This helper requires a raw pointer, which we just acquired.
1645 ierr = GetLocalPIDSnapshot(pid_p, nlocal_before, &pids_before_snapshot); CHKERRQ(ierr);
1646
1647 for (PetscInt p_idx = 0; p_idx < nlocal_before; p_idx++) {
1648
1649 // OPTIMIZATION: Skip particles already settled in a previous pass of this do-while loop.
1650
1652 "p_idx=%d, PID=%ld, status=%d, cell=(%d, %d, %d)\n",
1653 p_idx,
1654 (long)pid_p[p_idx],
1655 status_p[p_idx],
1656 cell_p[3*p_idx],
1657 cell_p[3*p_idx+1],
1658 cell_p[3*p_idx+2]);
1659
1660 if (status_p[p_idx] == ACTIVE_AND_LOCATED) {
1661 LOG_ALLOW(LOCAL,LOG_DEBUG," [rank %d][PID %ld] skipped in pass %d as it is already located at (%d,%d,%d).\n",rank,pid_p[p_idx],passes,cell_p[3*p_idx],cell_p[3*p_idx + 1],cell_p[3*p_idx + 2]);
1662 continue;
1663 }
1664
1665 // UNPACK: Create a temporary C struct for easier processing using our helper.
1666 Particle current_particle;
1667
1668 // LOG_ALLOW(LOCAL,LOG_DEBUG,"about to unpack p_idx=%d (PID=%ld)\n",p_idx, (long)pid_p[p_idx]);
1669
1670 ierr = UnpackSwarmFields(p_idx, pid_p, weights_p, pos_p, cell_p, vel_p, status_p, &current_particle); CHKERRQ(ierr);
1671
1672 // LOG_ALLOW(LOCAL,LOG_DEBUG,"unpacked p_idx=%d → cell[0]=%d, status=%d\n",p_idx, current_particle.cell[0], current_particle.location_status);
1673
1674 ParticleLocationStatus final_status = (ParticleLocationStatus)status_p[p_idx];
1675
1676
1677 // CASE 1: Particle has a valid prior cell index.
1678 // It has moved, so we only need to run the robust walk from its last known location.
1679 if (current_particle.cell[0] >= 0) {
1680 LOG_ALLOW(LOCAL, LOG_DEBUG, "[PID %ld] has valid prior cell. Strategy: Robust Walk from previous cell.\n", current_particle.PID);
1681 ierr = LocateParticleOrFindMigrationTarget_TEST(user, &current_particle, &final_status); CHKERRQ(ierr);
1682 }
1683
1718 // CASE 2: Particle is "lost" (cell = -1). Strategy: Guess -> Verify.
1719 else {
1720 LOG_ALLOW(LOCAL, LOG_DEBUG, "[PID %ld] has invalid cell. Strategy: Guess Owner -> Find Cell.\n",current_particle.PID);
1721
1722 PetscMPIInt guessed_owner_rank = MPI_PROC_NULL;
1723 ierr = GuessParticleOwnerWithBBox(user, &current_particle, bboxlist, &guessed_owner_rank); CHKERRQ(ierr);
1724
1725 // If the guess finds a DIFFERENT rank, we can mark for migration and skip the walk.
1726 if (guessed_owner_rank != MPI_PROC_NULL && guessed_owner_rank != rank) {
1727 LOG_ALLOW(LOCAL, LOG_DEBUG, "[PID %ld] Guess SUCCESS: Found migration target Rank %d. Finalizing.\n", current_particle.PID, guessed_owner_rank);
1728 final_status = MIGRATING_OUT;
1729 current_particle.destination_rank = guessed_owner_rank;
1730 }
1731 else {
1732
1733 // This block runs if the guess either failed (rank is NULL) or found the particle is local (rank is self).
1734 // In BOTH cases, the situation is unresolved, and we MUST fall back to the robust walk.
1735 if (guessed_owner_rank == rank) {
1736 LOG_ALLOW(LOCAL, LOG_DEBUG, "[PID %ld] Guess determined particle is local. Proceeding to robust walk to find cell.\n", current_particle.PID);
1737 } else { // guessed_owner_rank == MPI_PROC_NULL
1738 LOG_ALLOW(LOCAL, LOG_WARNING, "[PID %ld] Guess FAILED to find an owner. Proceeding to robust walk for definitive search.\n", current_particle.PID);
1739 }
1740
1741 ierr = LocateParticleOrFindMigrationTarget_TEST(user, &current_particle, &final_status); CHKERRQ(ierr);
1742 }
1743 }
1744
1745 // --- PROCESS THE FINAL, DEFINITIVE STATUS ---
1746 current_particle.location_status = final_status;
1747 ierr = UpdateSwarmFields(p_idx, &current_particle, weights_p, cell_p, status_p); CHKERRQ(ierr);
1748
1749 if (final_status == MIGRATING_OUT) {
1750 ierr = AddToMigrationList(&migrationList, &migrationListCapacity, &local_migration_count, p_idx, current_particle.destination_rank); CHKERRQ(ierr);
1751 } else if (final_status == LOST) {
1752 local_lost_count++;
1753 }
1754
1755 } // End of main particle processing loop
1756
1757 // Restore all the fields acquired for this pass.
1758 ierr = DMSwarmRestoreField(user->swarm, "position", NULL, NULL, (void**)&pos_p); CHKERRQ(ierr);
1759 ierr = DMSwarmRestoreField(user->swarm, "velocity", NULL, NULL, (void**)&vel_p); CHKERRQ(ierr);
1760 ierr = DMSwarmRestoreField(user->swarm, "weight", NULL, NULL, (void**)&weights_p); CHKERRQ(ierr);
1761 ierr = DMSwarmRestoreField(user->swarm, "DMSwarm_CellID", NULL, NULL, (void**)&cell_p); CHKERRQ(ierr);
1762 ierr = DMSwarmRestoreField(user->swarm, "DMSwarm_pid", NULL, NULL, (void**)&pid_p); CHKERRQ(ierr);
1763 ierr = DMSwarmRestoreField(user->swarm, "DMSwarm_location_status", NULL, NULL, (void**)&status_p); CHKERRQ(ierr);
1764 }
1765
1766 // --- STAGE 3: ACTION & MPI COMMUNICATION ---
1767 LOG_ALLOW(LOCAL, LOG_INFO, "[Rank %d] Pass %d: Identified %d particles to migrate out.\n", rank, passes, local_migration_count);
1768
1769 // First, determine if any rank wants to migrate. This collective call is
1770 // safe because all ranks have finished their local work and can
1771 // participate.
1772 ierr = MPI_Allreduce(&local_migration_count, &global_migrations_this_pass, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD); CHKERRQ(ierr);
1773
1774 if(global_migrations_this_pass > 0 ){
1775
1776 LOG_ALLOW(GLOBAL, LOG_INFO, "Pass %d: Migrating %d particles globally.\n", passes, global_migrations_this_pass);
1777
1778 ierr = SetMigrationRanks(user, migrationList, local_migration_count); CHKERRQ(ierr);
1779 ierr = PerformMigration(user); CHKERRQ(ierr);
1780
1781 // --- STAGE 4: POST-MIGRATION RESET ---
1782 // Identify newly arrived particles and flag them with NEEDS_LOCATION so they are
1783 // processed in the next pass. This uses the snapshot taken in STAGE 2.
1784 ierr = FlagNewcomersForLocation(user->swarm, nlocal_before, pids_before_snapshot); CHKERRQ(ierr);
1785 }
1786 // --- STAGE 5: LOOP SYNCHRONIZATION AND CLEANUP ---
1787
1788 ierr = PetscFree(pids_before_snapshot); CHKERRQ(ierr);
1789 ierr = PetscFree(migrationList); CHKERRQ(ierr);
1790
1791 LOG_ALLOW(GLOBAL, LOG_INFO, "End of LocateAllParticlesInGrid pass %d. Total particles migrated globally: %d.\n", passes, global_migrations_this_pass);
1792
1793 } while (global_migrations_this_pass > 0 && passes < MAX_MIGRATION_PASSES);
1794
1795 // --- FINAL CHECKS ---
1796 if (passes >= MAX_MIGRATION_PASSES && global_migrations_this_pass > 0) {
1797 SETERRQ(PETSC_COMM_WORLD, PETSC_ERR_CONV_FAILED, "Particle migration failed to converge after %d passes. Check for particles oscillating between ranks.", MAX_MIGRATION_PASSES);
1798 }
1799
1800 LOG_ALLOW(GLOBAL, LOG_INFO, "Particle Location completed in %d passes.\n", passes);
1803 PetscFunctionReturn(0);
1804}
PetscErrorCode GuessParticleOwnerWithBBox(UserCtx *user, const Particle *particle, const BoundingBox *bboxlist, PetscMPIInt *guess_rank_out)
Provides a fast, heuristic-based guess for a particle's owner rank using bounding boxes.
PetscErrorCode AddToMigrationList(MigrationInfo **migration_list_p, PetscInt *capacity_p, PetscInt *count_p, PetscInt particle_local_idx, PetscMPIInt destination_rank)
Safely adds a new migration task to a dynamically sized list.
PetscErrorCode SetMigrationRanks(UserCtx *user, const MigrationInfo *migrationList, PetscInt migrationCount)
Sets the target rank field (DMSwarmPICField_rank) for particles scheduled for migration.
PetscErrorCode GetLocalPIDSnapshot(const PetscInt64 pid_field[], PetscInt n_local, PetscInt64 **pids_snapshot_out)
Creates a sorted snapshot of all Particle IDs (PIDs) from a raw data array.
PetscErrorCode FlagNewcomersForLocation(DM swarm, PetscInt n_local_before, const PetscInt64 pids_before[])
Identifies newly arrived particles after migration and flags them for a location search.
PetscErrorCode PerformMigration(UserCtx *user)
Performs particle migration based on the pre-populated DMSwarmPICField_rank field.
PetscErrorCode UnpackSwarmFields(PetscInt i, const PetscInt64 *PIDs, const PetscReal *weights, const PetscReal *positions, const PetscInt *cellIndices, PetscReal *velocities, PetscInt *LocStatus, Particle *particle)
Initializes a Particle struct with data from DMSwarm fields.
PetscErrorCode UpdateSwarmFields(PetscInt i, const Particle *particle, PetscReal *weights, PetscInt *cellIndices, PetscInt *status_field)
Updates DMSwarm fields with data from a Particle struct.
#define LOG_FUNC_TIMER_END_EVENT(eventID, scope)
Ends timing a function by:
Definition logging.h:462
#define LOG_FUNC_TIMER_BEGIN_EVENT(eventID, scope)
Begins timing a function by:
Definition logging.h:436
PetscLogEvent EVENT_GlobalParticleLocation
Definition logging.c:35
PetscInt cell[3]
Definition variables.h:166
ParticleLocationStatus
Defines the state of a particle with respect to its location and migration status during the iterativ...
Definition variables.h:134
@ ACTIVE_AND_LOCATED
Definition variables.h:136
@ MIGRATING_OUT
Definition variables.h:137
PetscMPIInt destination_rank
Definition variables.h:171
ParticleLocationStatus location_status
Definition variables.h:170
Defines a particle's core properties for Lagrangian tracking.
Definition variables.h:164
PetscErrorCode LocateParticleOrFindMigrationTarget_TEST(UserCtx *user, Particle *particle, ParticleLocationStatus *status_out)
Locates a particle's host cell or identifies its migration target using a robust walk search.

◆ ResetAllParticleStatuses()

PetscErrorCode ResetAllParticleStatuses ( UserCtx *  user)

This function is designed to be called at the end of a full timestep, after all particle-based calculations are complete.

It prepares the swarm for the next timestep by ensuring that after the next position update, every particle will be re-evaluated by the LocateAllParticlesInGrid orchestrator.

It iterates through all locally owned particles and sets their DMSwarm_location_status field to NEEDS_LOCATION.

Parameters
[in,out] user  Pointer to the UserCtx containing the swarm.
Returns
PetscErrorCode 0 on success, or a non-zero PETSc error code on failure.

Definition at line 1818 of file ParticleMotion.c.

1819{
1820 PetscErrorCode ierr;
1821 PetscInt n_local;
1822 PetscInt *status_p;
1823
1824 PetscFunctionBeginUser;
1825
1826 ierr = DMSwarmGetLocalSize(user->swarm, &n_local); CHKERRQ(ierr);
1827
1828 if (n_local > 0) {
1829 // Get write access to the status field
1830 ierr = DMSwarmGetField(user->swarm, "DMSwarm_location_status", NULL, NULL, (void**)&status_p); CHKERRQ(ierr);
1831
1832 for (PetscInt p = 0; p < n_local; ++p) {
1833 // Only reset particles that are considered settled. This is a small optimization
1834 // to avoid changing the status of a LOST particle, though resetting all would also be fine.
1835 if (status_p[p] == ACTIVE_AND_LOCATED) {
1836 status_p[p] = NEEDS_LOCATION;
1837 }
1838 }
1839
1840 // Restore the field
1841 ierr = DMSwarmRestoreField(user->swarm, "DMSwarm_location_status", NULL, NULL, (void**)&status_p); CHKERRQ(ierr);
1842 }
1843
1844 PetscFunctionReturn(0);
1845}