PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
Boundaries.h
1#ifndef BOUNDARIES_H
2#define BOUNDARIES_H
3
4#include <petscpf.h>
5#include <petscdmswarm.h>
6#include <stdlib.h>
7#include <time.h>
8#include <math.h>
9#include <petsctime.h>
10#include <petscsys.h>
11#include <petscdmcomposite.h>
12#include <petscsystypes.h>
13
14// Include additional headers
15#include "variables.h" // Shared type definitions
16#include "ParticleSwarm.h" // Particle swarm functions
17#include "walkingsearch.h" // Particle location functions
18#include "grid.h" // Grid functions
19#include "logging.h" // Logging macros
20#include "io.h" // Data Input and Output functions
21#include "interpolation.h" // Interpolation routines
22#include "ParticleMotion.h" // Functions related to motion of particles
23#include "BC_Handlers.h" // Boundary Handlers
24#include "wallfunction.h" // wall functions for LES
25//================================================================================
26//
27// PUBLIC SYSTEM-LEVEL FUNCTIONS
28//
29// These are the main entry points for interacting with the boundary system.
30//
31//================================================================================
32
33/**
34 * @brief (Public) Validates the consistency and compatibility of the parsed boundary condition system.
35 *
36 * This function is the main entry point for all boundary condition validation. It should be
37 * called from the main setup sequence AFTER the configuration file has been parsed by
38 * `ParseAllBoundaryConditions` but BEFORE any `BoundaryCondition` handler objects are created.
39 *
40 * It acts as a dispatcher, calling specialized private sub-validators for different complex
41 * BC setups (like driven flow) to ensure the combination of `mathematical_type` and `handler_type`
42 * across all six faces is physically and numerically valid. This provides a "fail-fast"
43 * mechanism to prevent users from running improperly configured simulations.
44 *
45 * @param user The UserCtx for a single block, containing the populated `boundary_faces` configuration.
46 * @return PetscErrorCode 0 on success, non-zero PETSc error code on failure.
47 */
48PetscErrorCode BoundarySystem_Validate(UserCtx *user);
49
50/**
51 * @brief (Private) Creates and configures a specific BoundaryCondition handler object.
52 *
53 * This function acts as a factory. Based on the requested handler_type, it allocates
54 * a BoundaryCondition object and populates it with the correct set of function
55 * pointers corresponding to that specific behavior.
56 *
57 * @param handler_type The specific handler to create (e.g., BC_HANDLER_WALL_NOSLIP).
58 * @param[out] new_bc_ptr A pointer to where the newly created BoundaryCondition
59 * object's address will be stored.
60 * @return PetscErrorCode 0 on success.
61 */
62
63PetscErrorCode BoundaryCondition_Create(BCHandlerType handler_type, BoundaryCondition **new_bc_ptr);
64
65/**
66 * @brief Initializes the entire boundary system.
67 *
68 * @param user The UserCtx for the block whose boundary system is being set up.
69 * @param bcs_filename The path to the boundary conditions configuration file.
70 * @return PetscErrorCode 0 on success.
71 */
72PetscErrorCode BoundarySystem_Initialize(UserCtx *user, const char *bcs_filename);
73
74/**
75 * @brief Propagates boundary condition configuration from finest to all coarser multigrid levels.
76 *
77 * Coarser levels need BC type information for geometric operations (e.g., periodic corrections)
78 * but do NOT need full handler objects since timestepping only occurs at the finest level.
79 * This function copies the boundary_faces configuration down the hierarchy.
80 *
81 * @param simCtx The master SimCtx containing the multigrid hierarchy
82 * @return PetscErrorCode 0 on success
83 */
84PetscErrorCode PropagateBoundaryConfigToCoarserLevels(SimCtx *simCtx);
85
86/**
87 * @brief Executes one full boundary condition update cycle for a time step.
88 *
89 * @param user The UserCtx for the block whose boundary conditions are being updated.
90 * @return PetscErrorCode 0 on success.
91 */
92PetscErrorCode BoundarySystem_ExecuteStep(UserCtx *user);
93
94/**
95 * @brief (Private) A lightweight execution engine that calls the UpdateUbcs() method on all relevant handlers.
96 *
97 * This function's sole purpose is to re-evaluate the target boundary values (`ubcs`) for
98 * flow-dependent boundary conditions (e.g., Symmetry, Outlets) after the interior
99 * velocity field has changed, such as after the projection step.
100 *
101 * It operates based on a "pull" model: it iterates through all boundary handlers and
102 * executes their `UpdateUbcs` method only if the handler has provided one. This makes the
103 * system extensible, as new flow-dependent handlers can be added without changing this
104 * engine. Handlers for fixed boundary conditions (e.g., a wall with a constant velocity)
105 * will have their `UpdateUbcs` pointer set to `NULL` and will be skipped automatically.
106 *
107 * @note This function is a critical part of the post-projection refresh. It intentionally
108 * does NOT modify `ucont` and does NOT perform flux balancing.
109 *
110 * @param user The main UserCtx struct.
111 * @return PetscErrorCode 0 on success.
112 */
113PetscErrorCode BoundarySystem_RefreshUbcs(UserCtx *user);
114
115/**
116 * @brief Cleans up and destroys all boundary system resources.
117 *
118 * @param user The UserCtx whose boundary system resources will be freed.
119 * @return PetscErrorCode 0 on success.
120 */
121PetscErrorCode BoundarySystem_Destroy(UserCtx *user);
122
123/**
124 * @brief Determines if the current MPI rank owns any part of the globally defined inlet face,
125 * making it responsible for placing particles on that portion of the surface.
126 *
127 * The determination is based on the rank's owned nodes (from `DMDALocalInfo`) and
128 * the global node counts, in conjunction with the `user->identifiedInletBCFace`.
129 * A rank can service an inlet face if it owns the cells adjacent to that global boundary
130 * and has a non-zero extent (owns cells) in the tangential dimensions of that face.
131 *
132 * @param user Pointer to the UserCtx structure, containing `identifiedInletBCFace`.
133 * @param info Pointer to the DMDALocalInfo for the current rank's DA (node-based).
134 * @param IM_nodes_global Global number of nodes in the I-direction (e.g., user->IM + 1 if user->IM is cell count).
135 * @param JM_nodes_global Global number of nodes in the J-direction.
136 * @param KM_nodes_global Global number of nodes in the K-direction.
137 * @param[out] can_service_inlet_out Pointer to a PetscBool; set to PETSC_TRUE if the rank
138 * services (part of) the inlet, PETSC_FALSE otherwise.
139 * @return PetscErrorCode 0 on success, non-zero on failure.
140 */
141PetscErrorCode CanRankServiceInletFace(UserCtx *user, const DMDALocalInfo *info,
142 PetscInt IM_nodes_global, PetscInt JM_nodes_global, PetscInt KM_nodes_global,
143 PetscBool *can_service_inlet_out);
144
145/**
146 * @brief Determines if the current MPI rank owns any part of a specified global face.
147 *
148 * This function is a general utility for parallel boundary operations. It checks if the
149 * local domain of the current MPI rank is adjacent to a specified global boundary face.
150 * A rank "services" a face if it owns the cells adjacent to that global boundary and has
151 * a non-zero extent (i.e., owns at least one cell) in the tangential dimensions of that face.
152 *
153 * @param info Pointer to the DMDALocalInfo for the current rank's DA.
154 * @param IM_nodes_global Global number of nodes in the I-direction (e.g., user->IM + 1 if user->IM is cell count).
155 * @param JM_nodes_global Global number of nodes in the J-direction.
156 * @param KM_nodes_global Global number of nodes in the K-direction.
157 * @param face_id The specific global face (e.g., BC_FACE_NEG_Z) to check.
158 * @param[out] can_service_out Pointer to a PetscBool; set to PETSC_TRUE if the rank
159 * services the face, PETSC_FALSE otherwise.
160 * @return PetscErrorCode 0 on success.
161 */
162PetscErrorCode CanRankServiceFace(const DMDALocalInfo *info, PetscInt IM_nodes_global, PetscInt JM_nodes_global, PetscInt KM_nodes_global,
163 BCFace face_id, PetscBool *can_service_out);
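The ownership test described above can be sketched with plain C types. This is a minimal stand-in for `DMDALocalInfo` (the struct and function names here are illustrative, not the solver's API): a rank services a face if its owned range touches that global boundary and it has non-zero tangential extent.

```c
#include <stdbool.h>

/* Illustrative stand-in for the DMDA ownership fields used by the check. */
typedef struct { int xs, xm, ys, ym, zs, zm; } LocalExtent;

/* A rank "services" the -X face if its owned node range starts at the
   global i=0 boundary and it owns cells in both tangential directions. */
static bool services_neg_x(const LocalExtent *e) {
    return e->xs == 0 && e->ym > 0 && e->zm > 0;
}

/* For the +X face the rank's owned range must end at the global node
   count, i.e. it owns the node plane adjacent to that boundary. */
static bool services_pos_x(const LocalExtent *e, int IM_nodes_global) {
    return e->xs + e->xm == IM_nodes_global && e->ym > 0 && e->zm > 0;
}
```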
164
165/**
166 * @brief Places particles in a deterministic grid/raster pattern on a specified domain face.
167 *
168 * This function creates a set of equidistant, parallel lines of particles near the four
169 * edges of the face specified by user->identifiedInletBCFace. The number of lines drawn
170 * from each edge (grid_layers) is hardcoded within this function (default is 2).
171 * For example, if grid_layers=2 on face BC_FACE_NEG_X, the function will create particle lines at:
172 * - y ~ 0*dy, y ~ 1*dy (parallel to the Z-axis, starting from the J=0 edge)
173 * - y ~ y_max, y ~ y_max-dy (parallel to the Z-axis, starting from the J=max edge)
174 * - z ~ 0*dz, z ~ 1*dz (parallel to the Y-axis, starting from the K=0 edge)
175 * - z ~ z_max, z ~ z_max-dz (parallel to the Y-axis, starting from the K=max edge)
176 * The particle's final position is set just inside the target cell face to ensure it is
177 * correctly located. The total number of particles (simCtx->np) is distributed as evenly
178 * as possible among all generated lines.
179 * The function includes extensive validation to stop with an error if the requested grid
180 * placement is geometrically impossible (e.g., in a 2D domain or if layers would overlap).
181 * It also issues warnings for non-fatal but potentially unintended configurations.
182 *
183 * @param user Pointer to the UserCtx structure, containing `identifiedInletBCFace`.
184 * @param info Pointer to the DMDALocalInfo for the current rank's DA (node-based).
185 * @param xs_gnode_rank Local i-start node index (including ghosts) for this rank.
186 * @param ys_gnode_rank Local j-start node index (including ghosts) for this rank.
187 * @param zs_gnode_rank Local k-start node index (including ghosts) for this rank.
188 * @param IM_cells_global Global cell count in the I-direction.
189 * @param JM_cells_global Global cell count in the J-direction.
190 * @param KM_cells_global Global cell count in the K-direction.
191 * @param particle_global_id The global index of the particle being placed.
192 * @param[out] ci_metric_lnode_out Local i node index of the selected cell origin.
193 * @param[out] cj_metric_lnode_out Local j node index of the selected cell origin.
194 * @param[out] ck_metric_lnode_out Local k node index of the selected cell origin.
195 * @param[out] xi_metric_logic_out Logical xi coordinate within the selected cell.
196 * @param[out] eta_metric_logic_out Logical eta coordinate within the selected cell.
197 * @param[out] zta_metric_logic_out Logical zta coordinate within the selected cell.
198 * @param[out] placement_successful_out PETSC_TRUE if a position was assigned on this rank, PETSC_FALSE otherwise.
199 * @return PetscErrorCode 0 on success, non-zero on failure.
200 */
201PetscErrorCode GetDeterministicFaceGridLocation(
202 UserCtx *user, const DMDALocalInfo *info,
203 PetscInt xs_gnode_rank, PetscInt ys_gnode_rank, PetscInt zs_gnode_rank,
204 PetscInt IM_cells_global, PetscInt JM_cells_global, PetscInt KM_cells_global,
205 PetscInt64 particle_global_id,
206 PetscInt *ci_metric_lnode_out, PetscInt *cj_metric_lnode_out, PetscInt *ck_metric_lnode_out,
207 PetscReal *xi_metric_logic_out, PetscReal *eta_metric_logic_out, PetscReal *zta_metric_logic_out,
208 PetscBool *placement_successful_out);
209
210
211/**
212 * @brief Assuming the current rank services the inlet face, this function selects a random
213 * cell (owned by this rank on that face) and random logical coordinates within that cell,
214 * suitable for placing a particle on the inlet surface.
215 *
216 * It is the caller's responsibility to ensure CanRankServiceInletFace returned true.
217 *
218 * @param user Pointer to UserCtx.
219 * @param info Pointer to DMDALocalInfo for the current rank (node-based).
220 * @param xs_gnode_rank Local i-start node index (including ghosts) for this rank.
221 * @param ys_gnode_rank Local j-start node index (including ghosts) for this rank.
222 * @param zs_gnode_rank Local k-start node index (including ghosts) for this rank.
223 * @param IM_nodes_global Global node count in i.
224 * @param JM_nodes_global Global node count in j.
225 * @param KM_nodes_global Global node count in k.
226 * @param rand_logic_i_ptr RNG handle for sampling local logical xi.
227 * @param rand_logic_j_ptr RNG handle for sampling local logical eta.
228 * @param rand_logic_k_ptr RNG handle for sampling local logical zta.
229 * @param[out] ci_metric_lnode_out Local i node index of selected cell origin.
230 * @param[out] cj_metric_lnode_out Local j node index of selected cell origin.
231 * @param[out] ck_metric_lnode_out Local k node index of selected cell origin.
232 * @param[out] xi_metric_logic_out Logical xi coordinate in [0,1].
233 * @param[out] eta_metric_logic_out Logical eta coordinate in [0,1].
234 * @param[out] zta_metric_logic_out Logical zta coordinate in [0,1].
235 * @return PetscErrorCode
236 */
237PetscErrorCode GetRandomCellAndLogicalCoordsOnInletFace(
238 UserCtx *user, const DMDALocalInfo *info,
239 PetscInt xs_gnode_rank, PetscInt ys_gnode_rank, PetscInt zs_gnode_rank, // Local starting node index (with ghosts) of the rank's DA patch
240 PetscInt IM_nodes_global, PetscInt JM_nodes_global, PetscInt KM_nodes_global,
241 PetscRandom *rand_logic_i_ptr, PetscRandom *rand_logic_j_ptr, PetscRandom *rand_logic_k_ptr,
242 PetscInt *ci_metric_lnode_out, PetscInt *cj_metric_lnode_out, PetscInt *ck_metric_lnode_out,
243 PetscReal *xi_metric_logic_out, PetscReal *eta_metric_logic_out, PetscReal *zta_metric_logic_out);
244
245/**
246 * @brief Enforces boundary conditions on the momentum equation's Right-Hand-Side (RHS) vector.
247 *
248 * This function performs two critical roles based on the legacy implementation:
249 *
250 * 1. **Strong BC Enforcement for Physical Boundaries:** For non-periodic boundaries (e.g., walls, inlets),
251 * it zeroes the normal component of the RHS in a "buffer" layer of cells just inside the
252 * domain (e.g., at i=mx-2). This strongly enforces Dirichlet conditions on velocity by preventing
253 * the time-stepping scheme from altering the boundary values set by `ApplyBoundaryConditions`.
254 *
255 * 2. **Ghost Cell Sanitization:** For all boundary faces (`i=0`, `i=mx-1`, etc.), it zeroes out all
256 * components of the RHS. Since the RHS is a cell-centered quantity in this architecture, these
257 * locations correspond to ghost cells. This step sanitizes these unused locations, ensuring they
258 * do not contain garbage data that could affect diagnostics or other routines. This sanitization
259 * is performed for ALL boundary types, including periodic ones.
260 *
261 * This function should be called immediately after the RHS vector is fully assembled
262 * (spatial + temporal terms) and before it is used in a time-stepping update.
263 *
264 * @param user The UserCtx for the specific block being computed.
265 * @return PetscErrorCode 0 on success.
266 */
267PetscErrorCode EnforceRHSBoundaryConditions(UserCtx *user);
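The two roles above can be illustrated in 1-D with a plain array standing in for one i-line of the cell-centered RHS vector (the function name and `periodic` flag are illustrative; only the `i=mx-2` buffer named in the doc comment is enforced here):

```c
#include <stdbool.h>

/* 1-D sketch: rhs[0..mx-1] is one i-line of the cell-centered RHS. */
static void enforce_rhs_1d(double *rhs, int mx, bool periodic) {
    /* Role 2: sanitize the ghost locations at both ends for ALL BC types. */
    rhs[0]      = 0.0;
    rhs[mx - 1] = 0.0;
    /* Role 1: for non-periodic boundaries, zero the buffer cell just inside
       the domain (i = mx-2) so time stepping cannot alter boundary values. */
    if (!periodic)
        rhs[mx - 2] = 0.0;
}
```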
268
269/**
270 * @brief (Private Worker) Copies periodic data for a SINGLE field in a SINGLE direction.
271 *
272 * This is a low-level helper that performs the memory copy from the local ghost
273 * array to the global array for a specified field and direction ('i', 'j', or 'k').
274 * It contains NO communication logic; that is handled by the orchestrator.
275 *
276 * @param user The main UserCtx struct.
277 * @param field_name The string identifier for the field to transfer (e.g., "Ucat").
278 * @param direction The character 'i', 'j', or 'k' specifying the direction.
279 * @return PetscErrorCode 0 on success.
280 */
281PetscErrorCode TransferPeriodicFieldByDirection(UserCtx *user, const char *field_name, char direction);
282
283/**
284 * @brief (Orchestrator) Applies periodic transfer for one field across all i/j/k directions.
285 *
286 * This wrapper executes directional periodic transfers in the prescribed order with
287 * intermediate ghost synchronization where needed, so callers can request a complete
288 * periodic update for a single field without handling the directional details.
289 *
290 * @param user The main UserCtx struct.
291 * @param field_name The string identifier for the field to transfer.
292 * @return PetscErrorCode 0 on success.
293 */
294PetscErrorCode TransferPeriodicField(UserCtx *user, const char *field_name);
295
296/**
297 * @brief (Primitive) Copies periodic data from the interior to the local ghost cell region for a single field.
298 *
299 * This primitive function performs a direct memory copy for a specified field, updating
300 * all periodic ghost faces (i, j, and k). It reads data from just inside the periodic boundary
301 * and writes it to the corresponding local ghost cells.
302 *
303 * The copy is "two-cells deep" to support wider computational stencils.
304 *
305 * This function does NOT involve any MPI communication; it operates entirely on local PETSc vectors.
306 *
307 * @param user The main UserCtx struct.
308 * @param field_name The string identifier for the field to update (e.g., "Csi", "Ucont").
309 * @return PetscErrorCode 0 on success.
310 */
311PetscErrorCode TransferPeriodicFaceField(UserCtx *user, const char *field_name);
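The "two-cells deep" copy can be illustrated in 1-D under an assumed layout (n interior cells stored at `u[2..n+1]` with two ghost layers on each side; this is a sketch of the idea, not the solver's actual data layout):

```c
/* 1-D sketch: array of length n+4, interior cells at u[2..n+1]. Each ghost
   cell receives the value of its periodic image just inside the far boundary. */
static void periodic_ghosts_1d(double *u, int n) {
    u[0]     = u[n];     /* outer left ghost  <- second-to-last interior */
    u[1]     = u[n + 1]; /* inner left ghost  <- last interior cell      */
    u[n + 2] = u[2];     /* inner right ghost <- first interior cell     */
    u[n + 3] = u[3];     /* outer right ghost <- second interior cell    */
}
```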
312
313/**
314 * @brief (Orchestrator) Updates all metric-related fields in the local ghost cell regions for periodic boundaries.
315 *
316 * This function calls the TransferPeriodicFaceField primitive for each of the 16
317 * metric fields that require a 2-cell deep periodic ghost cell update.
318 * This is a direct replacement for the legacy Update_Metrics_PBC function.
319 *
320 * @param user The main UserCtx struct.
321 * @return PetscErrorCode 0 on success.
322 */
323PetscErrorCode ApplyMetricsPeriodicBCs(UserCtx *user);
324
325/**
326 * @brief Applies periodic boundary conditions by copying data across domain boundaries for all relevant fields.
327 *
328 * This is the canonical periodic orchestrator for geometric consistency. It updates
329 * `Ucat`, `P`, and `Nvert` through the generic field transfer helper and also updates
330 * staggered `Ucont` periodicity via `ApplyUcontPeriodicBCs()` and
331 * `EnforceUcontPeriodicity()`.
332 *
333 * Future extension rule: add new periodic variables by extending the existing field
334 * string dispatchers and invoking them from this orchestrator.
335 *
336 * @param user The main UserCtx struct.
337 * @return PetscErrorCode 0 on success.
338 */
339PetscErrorCode ApplyPeriodicBCs(UserCtx *user);
340
341/**
342 * @brief (Orchestrator) Updates the contravariant velocity field in the local ghost cell regions for periodic boundaries.
343 *
344 * This function calls the TransferPeriodicFaceField primitive for the Ucont field.
345 * This is a direct replacement for the legacy Update_U_Cont_PBC function.
346 *
347 * @param user The main UserCtx struct.
348 * @return PetscErrorCode 0 on success.
349 */
350PetscErrorCode ApplyUcontPeriodicBCs(UserCtx *user);
351
352/**
353 * @brief Enforces strict periodicity on the interior contravariant velocity field.
354 *
355 * This function is a "fix-up" routine for staggered grids. After a solver step,
356 * numerical inaccuracies can lead to small discrepancies between fluxes on opposing
357 * periodic boundaries. This function manually corrects this by copying the flux value
358 * from the first boundary face (retrieved from a ghost cell) to the last interior face.
359 *
360 * This routine involves MPI communication to synchronize the grid before and after the copy.
361 *
362 * @param user The main UserCtx struct.
363 * @return PetscErrorCode 0 on success.
364 */
365PetscErrorCode EnforceUcontPeriodicity(UserCtx *user);
366
367/**
368 * @brief Updates the dummy cells (ghost nodes) on the faces of the local domain for NON-PERIODIC boundaries.
369 *
370 * This function's role is to apply a second-order extrapolation to set the ghost
371 * cell values based on the boundary condition value (stored in `ubcs`) and the
372 * first interior cell.
373 *
374 * NOTE: This function deliberately IGNORES periodic boundaries. It is part of a
375 * larger workflow where `ApplyPeriodicBCs` handles periodic faces first.
376 *
377 * CRITICAL DETAIL: This function uses shrunken loop ranges (lxs, lxe, etc.) to
378 * intentionally update only the flat part of the faces, avoiding the edges and
379 * corners. The edges and corners are then handled separately by
380 * `UpdateCornerNodes`.
381 * This precisely replicates the logic of the original FormBCS function.
382 *
383 * @param user The main UserCtx struct containing all necessary data.
384 * @return PetscErrorCode 0 on success.
385 */
386PetscErrorCode UpdateDummyCells(UserCtx *user);
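The extrapolation can be sketched as follows (an assumed form: the target face value `ubcs` is the average of the ghost cell and the first interior cell, which makes the ghost value second-order accurate at the face; the helper name is illustrative):

```c
/* Ghost value chosen so that (ghost + u_interior)/2 equals the target
   boundary value ubcs (illustrative helper, not the solver's stencil). */
static double ghost_from_ubcs(double ubcs, double u_interior) {
    return 2.0 * ubcs - u_interior;
}
```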
387
388/**
389 * @brief Updates the corner and edge ghost nodes of the local domain by averaging.
390 *
391 * This function should be called AFTER the face ghost nodes are finalized by both
392 * `ApplyPeriodicBCs` and `UpdateDummyCells`. It resolves the values at shared
393 * edges and corners by averaging the values of adjacent, previously-computed
394 * ghost nodes.
395 *
396 * The logic is generic and works correctly regardless of the boundary types on
397 * the adjacent faces (e.g., it will correctly average a periodic face neighbor
398 * with a wall face neighbor).
399 *
400 * @param user The main UserCtx struct containing all necessary data.
401 * @return PetscErrorCode 0 on success.
402 */
403PetscErrorCode UpdateCornerNodes(UserCtx *user);
404
405/**
406 * @brief (Orchestrator) Performs a sequential, deterministic periodic update for a list of fields.
407 *
408 * This function orchestrates the resolution of ambiguous periodic corners and edges.
409 * It takes an array of field names and updates them in a strict i-sync-j-sync-k order
410 * by calling the low-level worker `TransferPeriodicFieldByDirection` and the
411 * communication routine `UpdateLocalGhosts`.
412 *
413 * @param user The main UserCtx struct.
414 * @param num_fields The number of fields in the field_names array.
415 * @param field_names An array of strings with the names of fields to update (e.g., ["Ucat", "P"]).
416 * @return PetscErrorCode 0 on success.
417 */
418PetscErrorCode UpdatePeriodicCornerNodes(UserCtx *user, PetscInt num_fields, const char* field_names[]);
419
420/**
421 * @brief Applies wall function modeling to near-wall velocities for all wall-type boundaries.
422 *
423 * This function implements log-law wall functions to model the near-wall velocity profile
424 * without fully resolving the viscous sublayer. It is applicable to ALL wall-type boundaries
425 * regardless of their specific boundary condition (no-slip, moving wall, slip, etc.), as
426 * determined by the mathematical_type being WALL.
427 *
428 * MATHEMATICAL BACKGROUND:
429 * Wall functions bridge the gap between the wall (y=0) and the first computational cell
430 * center by using empirical log-law relationships:
431 * - Viscous sublayer (y+ < 11.81): u+ = y+
432 * - Log-law region (y+ > 11.81): u+ = (1/κ) * ln(E * y+)
433 * where u+ = u/u_τ, y+ = y*u_τ/ν, κ = 0.41 (von Karman constant), E = exp(κB)
434 *
435 * IMPLEMENTATION DETAILS:
436 * Unlike standard boundary conditions that set ghost cell values, wall functions:
437 * 1. Read velocity from the SECOND interior cell (i±2, j±2, k±2)
438 * 2. Compute wall shear stress using log-law
439 * 3. Modify velocity at the FIRST interior cell (i±1, j±1, k±1)
440 * 4. Keep ghost cell boundary values (ubcs, ucont) at zero
441 *
442 * WORKFLOW:
443 * - Called from ApplyBoundaryConditions after standard BC application
444 * - Operates on ucat (Cartesian velocity)
445 * - Updates ustar (friction velocity field) for diagnostics/turbulence models
446 * - Ghost cells remain zero; UpdateDummyCells handles extrapolation afterward
447 *
448 * GEOMETRIC QUANTITIES:
449 * sb = wall-normal distance from wall to first interior cell center
450 * sc = wall-normal distance from wall to second interior cell center
451 * These are computed from cell Jacobians (aj) and face area vectors
452 *
453 * APPLICABILITY:
454 * - Requires simCtx->wallfunction = true
455 * - Only processes faces where mathematical_type == WALL
456 * - Skips solid-embedded cells (nvert >= 0.1)
457 *
458 * @param user The UserCtx containing all simulation state and geometry
459 * @return PetscErrorCode 0 on success
460 *
461 * @note This function modifies interior cell velocities, NOT ghost cells
462 * @note Wall roughness (ks) is currently set to 1e-16 (smooth wall)
463 * @see wall_function_loglaw() in wallfunction.c for the actual log-law implementation
464 * @see noslip() in wallfunction.c for the initial linear interpolation
465 */
466PetscErrorCode ApplyWallFunction(UserCtx *user);
467
468/**
469 * @brief (Public) Orchestrates the "light" refresh of all boundary ghost cells after the projection step.
470 *
471 * This function is the correct and complete replacement for the role that `GhostNodeVelocity`
472 * played when called from within the `Projection` function. Its purpose is to ensure that
473 * all ghost cells for `ucat` and `p` are made consistent with the final, divergence-free
474 * interior velocity field computed by the projection step.
475 *
476 * This function is fundamentally different from `ApplyBoundaryConditions` because it does
477 * NOT modify the physical flux field (`ucont`) and does NOT apply physical models like
478 * the wall function. It is a purely geometric and data-consistency operation.
479 *
480 * WORKFLOW:
481 * 1. Calls the lightweight `BoundarySystem_RefreshUbcs()` engine. This re-calculates the
482 * `ubcs` target values ONLY for flow-dependent boundary conditions (like Symmetry or Outlets)
483 * using the newly updated interior `ucat` field.
484 *
485 * 2. Calls the geometric updaters (`ApplyPeriodicBCs`, `UpdateDummyCells`, `UpdateCornerNodes`)
486 * in the correct, dependency-aware order to fill in all ghost cell values based on the now
487 * fully-refreshed `ubcs` targets.
488 *
489 * 3. Performs a final synchronization of local PETSc vectors to ensure all MPI ranks are
490 * consistent before proceeding to the next time step.
491 *
492 * @param user The main UserCtx struct, containing all simulation state.
493 * @return PetscErrorCode 0 on success.
494 */
495PetscErrorCode RefreshBoundaryGhostCells(UserCtx *user);
496
497/**
498 * @brief Main boundary-condition orchestrator executed during solver timestepping.
499 *
500 * This routine performs the full BC workflow for the current block, including
501 * dynamic boundary refresh, periodic transfer, dummy/corner updates, and optional
502 * wall-function corrections in the same order expected by the runtime solver.
503 * It may iterate boundary updates to enforce coupled boundary dependencies.
504 *
505 * @param user The main UserCtx struct containing field vectors and boundary system state.
506 * @return PetscErrorCode 0 on success.
507 */
508PetscErrorCode ApplyBoundaryConditions(UserCtx *user);
509
510#endif // BOUNDARIES_H