PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
Boundaries.h
#ifndef BOUNDARIES_H
#define BOUNDARIES_H

#include <petscpf.h>
#include <petscdmswarm.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>
#include <petsctime.h>
#include <petscsys.h>
#include <petscdmcomposite.h>
#include <petscsystypes.h>

// Include additional headers
#include "variables.h"      // Shared type definitions
#include "ParticleSwarm.h"  // Particle swarm functions
#include "walkingsearch.h"  // Particle location functions
#include "grid.h"           // Grid functions
#include "logging.h"        // Logging macros
#include "io.h"             // Data input and output functions
#include "interpolation.h"  // Interpolation routines
#include "ParticleMotion.h" // Functions related to particle motion
#include "BC_Handlers.h"    // Boundary handlers
#include "wallfunction.h"   // Wall functions for LES
//================================================================================
//
// PUBLIC SYSTEM-LEVEL FUNCTIONS
//
// These are the main entry points for interacting with the boundary system.
//
//================================================================================

/**
 * @brief (Public) Validates the consistency and compatibility of the parsed boundary condition system.
 *
 * This function is the main entry point for all boundary condition validation. It should be
 * called from the main setup sequence AFTER the configuration file has been parsed by
 * `ParseAllBoundaryConditions` but BEFORE any `BoundaryCondition` handler objects are created.
 *
 * It acts as a dispatcher, calling specialized private sub-validators for different complex
 * BC setups (like driven flow) to ensure the combination of `mathematical_type` and `handler_type`
 * across all six faces is physically and numerically valid. This provides a "fail-fast"
 * mechanism to prevent users from running improperly configured simulations.
 *
 * @param user The UserCtx for a single block, containing the populated `boundary_faces` configuration.
 * @return PetscErrorCode 0 on success, non-zero PETSc error code on failure.
 */
PetscErrorCode BoundarySystem_Validate(UserCtx *user);

/**
 * @brief (Private) Creates and configures a specific BoundaryCondition handler object.
 *
 * This function acts as a factory. Based on the requested handler_type, it allocates
 * a BoundaryCondition object and populates it with the correct set of function
 * pointers corresponding to that specific behavior.
 *
 * @param handler_type The specific handler to create (e.g., BC_HANDLER_WALL_NOSLIP).
 * @param[out] new_bc_ptr A pointer to where the newly created BoundaryCondition
 *                        object's address will be stored.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode BoundaryCondition_Create(BCHandlerType handler_type, BoundaryCondition **new_bc_ptr);

/**
 * @brief Initializes the entire boundary system.
 * @param user The main UserCtx struct.
 * @param bcs_filename The path to the boundary conditions configuration file.
 */
PetscErrorCode BoundarySystem_Initialize(UserCtx *user, const char *bcs_filename);

/**
 * @brief Propagates boundary condition configuration from finest to all coarser multigrid levels.
 *
 * Coarser levels need BC type information for geometric operations (e.g., periodic corrections)
 * but do NOT need full handler objects, since timestepping only occurs at the finest level.
 * This function copies the boundary_faces configuration down the hierarchy.
 *
 * @param simCtx The master SimCtx containing the multigrid hierarchy.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode PropagateBoundaryConfigToCoarserLevels(SimCtx *simCtx);

/**
 * @brief Executes one full boundary condition update cycle for a time step.
 * @param user The main UserCtx struct.
 */
PetscErrorCode BoundarySystem_ExecuteStep(UserCtx *user);

/**
 * @brief (Private) A lightweight execution engine that calls the UpdateUbcs() method on all relevant handlers.
 *
 * This function's sole purpose is to re-evaluate the target boundary values (`ubcs`) for
 * flow-dependent boundary conditions (e.g., symmetry planes, outlets) after the interior
 * velocity field has changed, such as after the projection step.
 *
 * It operates on a "pull" model: it iterates through all boundary handlers and
 * executes their `UpdateUbcs` method only if the handler has provided one. This makes the
 * system extensible, as new flow-dependent handlers can be added without changing this
 * engine. Handlers for fixed boundary conditions (e.g., a wall with a constant velocity)
 * leave their `UpdateUbcs` pointer set to `NULL` and are skipped automatically.
 *
 * @note This function is a critical part of the post-projection refresh. It intentionally
 *       does NOT modify `ucont` and does NOT perform flux balancing.
 *
 * @param user The main UserCtx struct.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode BoundarySystem_RefreshUbcs(UserCtx *user);

/**
 * @brief Cleans up and destroys all boundary system resources.
 * @param user The main UserCtx struct.
 */
PetscErrorCode BoundarySystem_Destroy(UserCtx *user);

/**
 * @brief Determines if the current MPI rank owns any part of the globally defined inlet face,
 *        making it responsible for placing particles on that portion of the surface.
 *
 * The determination is based on the rank's owned nodes (from `DMDALocalInfo`) and
 * the global node counts, in conjunction with `user->identifiedInletBCFace`.
 * A rank can service an inlet face if it owns the cells adjacent to that global boundary
 * and has a non-zero extent (owns cells) in the tangential dimensions of that face.
 *
 * @param user Pointer to the UserCtx structure, containing `identifiedInletBCFace`.
 * @param info Pointer to the DMDALocalInfo for the current rank's DA (node-based).
 * @param IM_nodes_global Global number of nodes in the I-direction (e.g., user->IM + 1 if user->IM is a cell count).
 * @param JM_nodes_global Global number of nodes in the J-direction.
 * @param KM_nodes_global Global number of nodes in the K-direction.
 * @param[out] can_service_inlet_out Pointer to a PetscBool; set to PETSC_TRUE if the rank
 *             services (part of) the inlet, PETSC_FALSE otherwise.
 * @return PetscErrorCode 0 on success, non-zero on failure.
 */
PetscErrorCode CanRankServiceInletFace(UserCtx *user, const DMDALocalInfo *info,
                                       PetscInt IM_nodes_global, PetscInt JM_nodes_global, PetscInt KM_nodes_global,
                                       PetscBool *can_service_inlet_out);

/**
 * @brief Determines if the current MPI rank owns any part of a specified global face.
 *
 * This function is a general utility for parallel boundary operations. It checks if the
 * local domain of the current MPI rank is adjacent to a specified global boundary face.
 * A rank "services" a face if it owns the cells adjacent to that global boundary and has
 * a non-zero extent (i.e., owns at least one cell) in the tangential dimensions of that face.
 *
 * @param info Pointer to the DMDALocalInfo for the current rank's DA.
 * @param IM_nodes_global Global number of nodes in the I-direction (e.g., user->IM + 1 if user->IM is a cell count).
 * @param JM_nodes_global Global number of nodes in the J-direction.
 * @param KM_nodes_global Global number of nodes in the K-direction.
 * @param face_id The specific global face (e.g., BC_FACE_NEG_Z) to check.
 * @param[out] can_service_out Pointer to a PetscBool; set to PETSC_TRUE if the rank
 *             services the face, PETSC_FALSE otherwise.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode CanRankServiceFace(const DMDALocalInfo *info, PetscInt IM_nodes_global, PetscInt JM_nodes_global, PetscInt KM_nodes_global,
                                  BCFace face_id, PetscBool *can_service_out);

/**
 * @brief Places particles in a deterministic grid/raster pattern on a specified domain face.
 *
 * This function creates a set of equidistant, parallel lines of particles near the four
 * edges of the face specified by user->identifiedInletBCFace. The number of lines drawn
 * from each edge is hardcoded within this function (default is 2).
 *
 * For example, with two layers on face BC_FACE_NEG_X, the function will create particle lines at:
 * - y ~ 0*dy, y ~ 1*dy (parallel to the Z-axis, starting from the J=0 edge)
 * - y ~ y_max, y ~ y_max-dy (parallel to the Z-axis, starting from the J=max edge)
 * - z ~ 0*dz, z ~ 1*dz (parallel to the Y-axis, starting from the K=0 edge)
 * - z ~ z_max, z ~ z_max-dz (parallel to the Y-axis, starting from the K=max edge)
 *
 * Each particle's final position is set just inside the target cell face to ensure it is
 * correctly located. The total number of particles (simCtx->np) is distributed as evenly
 * as possible among all generated lines.
 *
 * The function includes extensive validation and stops with an error if the requested grid
 * placement is geometrically impossible (e.g., in a 2D domain, or if layers would overlap).
 * It also issues warnings for non-fatal but potentially unintended configurations.
 *
 * @param user Pointer to UserCtx, which must contain a valid identifiedInletBCFace.
 * @param info Pointer to DMDALocalInfo for the current rank's grid layout.
 * @param xs_gnode_rank, ys_gnode_rank, zs_gnode_rank Local starting node indices (incl. ghosts) for the rank's DA.
 * @param IM_cells_global, JM_cells_global, KM_cells_global Global cell counts.
 * @param particle_global_id The unique global ID of the particle being placed (from 0 to np-1).
 * @param[out] ci_metric_lnode_out Local I-node index of the selected cell's origin.
 * @param[out] cj_metric_lnode_out Local J-node index of the selected cell's origin.
 * @param[out] ck_metric_lnode_out Local K-node index of the selected cell's origin.
 * @param[out] xi_metric_logic_out Logical xi-coordinate [0,1] within the cell.
 * @param[out] eta_metric_logic_out Logical eta-coordinate [0,1] within the cell.
 * @param[out] zta_metric_logic_out Logical zta-coordinate [0,1] within the cell.
 * @param[out] placement_successful_out PETSC_TRUE if the point belongs to this rank, PETSC_FALSE otherwise.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode GetDeterministicFaceGridLocation(
    UserCtx *user, const DMDALocalInfo *info,
    PetscInt xs_gnode_rank, PetscInt ys_gnode_rank, PetscInt zs_gnode_rank,
    PetscInt IM_cells_global, PetscInt JM_cells_global, PetscInt KM_cells_global,
    PetscInt64 particle_global_id,
    PetscInt *ci_metric_lnode_out, PetscInt *cj_metric_lnode_out, PetscInt *ck_metric_lnode_out,
    PetscReal *xi_metric_logic_out, PetscReal *eta_metric_logic_out, PetscReal *zta_metric_logic_out,
    PetscBool *placement_successful_out);


/**
 * @brief Assuming the current rank services the inlet face, this function selects a random
 *        cell (owned by this rank on that face) and random logical coordinates within that cell,
 *        suitable for placing a particle on the inlet surface.
 *
 * It is the caller's responsibility to ensure that CanRankServiceInletFace returned true.
 *
 * @param user Pointer to UserCtx.
 * @param info Pointer to DMDALocalInfo for the current rank (node-based).
 * @param xs_gnode_rank, ys_gnode_rank, zs_gnode_rank Local starting node indices (incl. ghosts) for the rank's DA.
 * @param IM_nodes_global, JM_nodes_global, KM_nodes_global Global node counts.
 * @param rand_logic_i_ptr, rand_logic_j_ptr, rand_logic_k_ptr Pointers to RNGs for logical coords.
 * @param[out] ci_metric_lnode_out, cj_metric_lnode_out, ck_metric_lnode_out Local node indices of the selected cell's origin (local to the rank's DA, including ghosts).
 * @param[out] xi_metric_logic_out, eta_metric_logic_out, zta_metric_logic_out Logical coords [0,1] within the cell.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode GetRandomCellAndLogicalCoordsOnInletFace(
    UserCtx *user, const DMDALocalInfo *info,
    PetscInt xs_gnode_rank, PetscInt ys_gnode_rank, PetscInt zs_gnode_rank,
    PetscInt IM_nodes_global, PetscInt JM_nodes_global, PetscInt KM_nodes_global,
    PetscRandom *rand_logic_i_ptr, PetscRandom *rand_logic_j_ptr, PetscRandom *rand_logic_k_ptr,
    PetscInt *ci_metric_lnode_out, PetscInt *cj_metric_lnode_out, PetscInt *ck_metric_lnode_out,
    PetscReal *xi_metric_logic_out, PetscReal *eta_metric_logic_out, PetscReal *zta_metric_logic_out);

PetscErrorCode TranslateModernBCsToLegacy(UserCtx *user);

/**
 * @brief (Private) A generic routine to copy data for a single, named field across periodic boundaries.
 *
 * This function encapsulates all logic for a periodic transfer. Given a field name (e.g., "P", "Ucat"),
 * it determines the field's data type (scalar/vector), retrieves the correct DMDA and Vecs from the
 * UserCtx, and then performs the memory copy from the local ghost array to the global array.
 *
 * This must be called AFTER the corresponding local ghost vector has been updated via DMGlobalToLocal.
 *
 * @param user The main UserCtx struct, containing all grid info and field data.
 * @param field_name A string identifier for the field to transfer.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode TransferPeriodicField(UserCtx *user, const char *field_name);

/**
 * @brief Enforces boundary conditions on the momentum equation's right-hand-side (RHS) vector.
 *
 * This function performs two critical roles, following the legacy implementation:
 *
 * 1. **Strong BC enforcement for physical boundaries:** For non-periodic boundaries (e.g., walls, inlets),
 *    it zeroes the normal component of the RHS in a "buffer" layer of cells just inside the
 *    domain (e.g., at i=mx-2). This strongly enforces Dirichlet conditions on velocity by preventing
 *    the time-stepping scheme from altering the boundary values set by `ApplyBoundaryConditions`.
 *
 * 2. **Ghost cell sanitization:** For all boundary faces (`i=0`, `i=mx-1`, etc.), it zeroes out all
 *    components of the RHS. Since the RHS is a cell-centered quantity in this architecture, these
 *    locations correspond to ghost cells. This step sanitizes these unused locations, ensuring they
 *    do not contain garbage data that could affect diagnostics or other routines. This sanitization
 *    is performed for ALL boundary types, including periodic ones.
 *
 * This function should be called immediately after the RHS vector is fully assembled
 * (spatial + temporal terms) and before it is used in a time-stepping update.
 *
 * @param user The UserCtx for the specific block being computed.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode EnforceRHSBoundaryConditions(UserCtx *user);

/**
 * @brief (Private Worker) Copies periodic data for a SINGLE field in a SINGLE direction.
 *
 * This is a low-level helper that performs the memory copy from the local ghost
 * array to the global array for a specified field and direction ('i', 'j', or 'k').
 * It contains NO communication logic; that is handled by the orchestrator.
 *
 * @param user The main UserCtx struct.
 * @param field_name The string identifier for the field to transfer (e.g., "Ucat").
 * @param direction The character 'i', 'j', or 'k' specifying the direction.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode TransferPeriodicFieldByDirection(UserCtx *user, const char *field_name, char direction);

PetscErrorCode TransferPeriodicField(UserCtx *user, const char *field_name);

/**
 * @brief (Primitive) Copies periodic data from the interior to the local ghost cell region for a single field.
 *
 * This primitive function performs a direct memory copy for a specified field, updating
 * all periodic ghost faces (i, j, and k). It reads data from just inside the periodic boundary
 * and writes it to the corresponding local ghost cells.
 *
 * The copy is two cells deep in order to support wider computational stencils.
 *
 * This function does NOT involve any MPI communication; it operates entirely on local PETSc vectors.
 *
 * @param user The main UserCtx struct.
 * @param field_name The string identifier for the field to update (e.g., "Csi", "Ucont").
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode TransferPeriodicFaceField(UserCtx *user, const char *field_name);

/**
 * @brief (Orchestrator) Updates all metric-related fields in the local ghost cell regions for periodic boundaries.
 *
 * This function calls the TransferPeriodicFaceField primitive for each of the 16
 * metric fields that require a two-cell-deep periodic ghost cell update.
 * It is a direct replacement for the legacy Update_Metrics_PBC function.
 *
 * @param user The main UserCtx struct.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode ApplyMetricsPeriodicBCs(UserCtx *user);

/**
 * @brief Applies periodic boundary conditions by copying data across domain boundaries for all relevant fields.
 *
 * This function orchestrates the periodic update. It first performs a single, collective
 * ghost-cell exchange for all fields. It then calls a generic helper routine to perform
 * the memory copy for each individual field by name.
 *
 * @param user The main UserCtx struct.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode ApplyPeriodicBCs(UserCtx *user);

/**
 * @brief (Orchestrator) Updates the contravariant velocity field in the local ghost cell regions for periodic boundaries.
 *
 * This function calls the TransferPeriodicFaceField primitive for the Ucont field.
 * It is a direct replacement for the legacy Update_U_Cont_PBC function.
 *
 * @param user The main UserCtx struct.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode ApplyUcontPeriodicBCs(UserCtx *user);

/**
 * @brief Enforces strict periodicity on the interior contravariant velocity field.
 *
 * This function is a "fix-up" routine for staggered grids. After a solver step,
 * numerical inaccuracies can lead to small discrepancies between fluxes on opposing
 * periodic boundaries. This function corrects them by copying the flux value
 * from the first boundary face (retrieved from a ghost cell) to the last interior face.
 *
 * This routine involves MPI communication to synchronize the grid before and after the copy.
 *
 * @param user The main UserCtx struct.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode EnforceUcontPeriodicity(UserCtx *user);

/**
 * @brief Updates the dummy cells (ghost nodes) on the faces of the local domain for NON-PERIODIC boundaries.
 *
 * This function applies a second-order extrapolation to set the ghost
 * cell values based on the boundary condition value (stored in `ubcs`) and the
 * first interior cell.
 *
 * NOTE: This function deliberately IGNORES periodic boundaries. It is part of a
 * larger workflow in which `ApplyPeriodicBCs` handles periodic faces first.
 *
 * CRITICAL DETAIL: This function uses shrunken loop ranges (lxs, lxe, etc.) to
 * intentionally update only the flat part of each face, avoiding the edges and
 * corners. The edges and corners are then handled separately by `UpdateCornerNodes`.
 * This precisely replicates the logic of the original FormBCS function.
 *
 * @param user The main UserCtx struct containing all necessary data.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode UpdateDummyCells(UserCtx *user);

/**
 * @brief Updates the corner and edge ghost nodes of the local domain by averaging.
 *
 * This function should be called AFTER the face ghost nodes have been finalized by both
 * `ApplyPeriodicBCs` and `UpdateDummyCells`. It resolves the values at shared
 * edges and corners by averaging the values of adjacent, previously computed
 * ghost nodes.
 *
 * The logic is generic and works correctly regardless of the boundary types on
 * the adjacent faces (e.g., it will correctly average a periodic face neighbor
 * with a wall face neighbor).
 *
 * @param user The main UserCtx struct containing all necessary data.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode UpdateCornerNodes(UserCtx *user);

/**
 * @brief (Orchestrator) Performs a sequential, deterministic periodic update for a list of fields.
 *
 * This function orchestrates the resolution of ambiguous periodic corners and edges.
 * It takes an array of field names and updates them in a strict i-sync, j-sync, k-sync order
 * by calling the low-level worker `TransferPeriodicFieldByDirection` and the
 * communication routine `UpdateLocalGhosts`.
 *
 * @param user The main UserCtx struct.
 * @param num_fields The number of fields in the field_names array.
 * @param field_names An array of strings with the names of the fields to update (e.g., ["Ucat", "P"]).
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode UpdatePeriodicCornerNodes(UserCtx *user, PetscInt num_fields, const char* field_names[]);

/**
 * @brief Applies wall function modeling to near-wall velocities for all wall-type boundaries.
 *
 * This function implements log-law wall functions to model the near-wall velocity profile
 * without fully resolving the viscous sublayer. It is applicable to ALL wall-type boundaries
 * regardless of their specific boundary condition (no-slip, moving wall, slip, etc.), as
 * determined by the mathematical_type being WALL.
 *
 * MATHEMATICAL BACKGROUND:
 * Wall functions bridge the gap between the wall (y=0) and the first computational cell
 * center by using empirical log-law relationships:
 * - Viscous sublayer (y+ < 11.81): u+ = y+
 * - Log-law region (y+ > 11.81): u+ = (1/κ) * ln(E * y+)
 * where u+ = u/u_τ, y+ = y*u_τ/ν, κ = 0.41 (von Karman constant), and E = exp(κB).
 *
 * IMPLEMENTATION DETAILS:
 * Unlike standard boundary conditions that set ghost cell values, wall functions:
 * 1. Read velocity from the SECOND interior cell (i±2, j±2, k±2)
 * 2. Compute wall shear stress using the log-law
 * 3. Modify velocity at the FIRST interior cell (i±1, j±1, k±1)
 * 4. Keep ghost cell boundary values (ubcs, ucont) at zero
 *
 * WORKFLOW:
 * - Called from ApplyBoundaryConditions after standard BC application
 * - Operates on ucat (Cartesian velocity)
 * - Updates ustar (friction velocity field) for diagnostics/turbulence models
 * - Ghost cells remain zero; UpdateDummyCells handles extrapolation afterward
 *
 * GEOMETRIC QUANTITIES:
 * sb = wall-normal distance from the wall to the first interior cell center
 * sc = wall-normal distance from the wall to the second interior cell center
 * These are computed from cell Jacobians (aj) and face area vectors.
 *
 * APPLICABILITY:
 * - Requires simCtx->wallfunction = true
 * - Only processes faces where mathematical_type == WALL
 * - Skips solid-embedded cells (nvert >= 0.1)
 *
 * @param user The UserCtx containing all simulation state and geometry.
 * @return PetscErrorCode 0 on success.
 *
 * @note This function modifies interior cell velocities, NOT ghost cells.
 * @note Wall roughness (ks) is currently set to 1e-16 (smooth wall).
 * @see wall_function_loglaw() in wallfunction.c for the actual log-law implementation.
 * @see noslip() in wallfunction.c for the initial linear interpolation.
 */
PetscErrorCode ApplyWallFunction(UserCtx *user);

/**
 * @brief (Public) Orchestrates the "light" refresh of all boundary ghost cells after the projection step.
 *
 * This function is the correct and complete replacement for the role that `GhostNodeVelocity`
 * played when called from within the `Projection` function. Its purpose is to ensure that
 * all ghost cells for `ucat` and `p` are made consistent with the final, divergence-free
 * interior velocity field computed by the projection step.
 *
 * This function is fundamentally different from `ApplyBoundaryConditions` because it does
 * NOT modify the physical flux field (`ucont`) and does NOT apply physical models like
 * the wall function. It is a purely geometric and data-consistency operation.
 *
 * WORKFLOW:
 * 1. Calls the lightweight `BoundarySystem_RefreshUbcs()` engine. This re-calculates the
 *    `ubcs` target values ONLY for flow-dependent boundary conditions (like symmetry planes
 *    or outlets), using the newly updated interior `ucat` field.
 *
 * 2. Calls the geometric updaters (`ApplyPeriodicBCs`, `UpdateDummyCells`, `UpdateCornerNodes`)
 *    in the correct, dependency-aware order to fill in all ghost cell values based on the now
 *    fully refreshed `ubcs` targets.
 *
 * 3. Performs a final synchronization of local PETSc vectors to ensure all MPI ranks are
 *    consistent before proceeding to the next time step.
 *
 * @param user The main UserCtx struct, containing all simulation state.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode RefreshBoundaryGhostCells(UserCtx *user);

/**
 * @brief Main master function to apply all boundary conditions for a time step.
 *
 * This function orchestrates the application of boundary conditions by calling
 * `BoundarySystem_ExecuteStep` multiple times to ensure convergence.
 *
 * @param user The main UserCtx struct containing the boundary system.
 * @return PetscErrorCode 0 on success.
 */
PetscErrorCode ApplyBoundaryConditions(UserCtx *user);

#endif // BOUNDARIES_H