PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
For the full commented template, see:
# ==============================================================================
# PICurv Master Case Configuration Template
# ==============================================================================
#
# PURPOSE:
# This file defines the complete physical setup for a single simulation experiment.
# It describes the geometry, fluid properties, run duration, physical models,
# and boundary conditions. It is the "digital lab notebook" for your run.
#
# SECTIONS OVERVIEW:
# 1. run_control: How long the simulation runs and at what time resolution.
# 2. properties: The core physical numbers (density, viscosity, scales).
# 3. grid: The geometric domain, either from a file or generated.
# 4. models: Switches to turn on/off different physics modules (LES, FSI, etc.).
# 5. boundary_conditions: What happens at the edges of the domain.
#
# YAML SYNTAX NOTE:
# - Indentation matters! Use spaces, not tabs.
# - For lists, ensure there is a space after the hyphen (e.g., "- item").
# - For more details: https://learnxinyminutes.com/docs/yaml/
#
# ==============================================================================
# ==============================================================================
# 1. SIMULATION CONTROL
# Defines the duration and temporal resolution of the experiment.
# ==============================================================================
run_control:
  # The simulation step number to start from.
  # 0 for a new run, or a specific step number for a restart.
  start_step: 0
  # Total number of timesteps to execute in this run.
  # Total simulation time = total_steps * dt_physical.
  total_steps: 2000
  # Restart: use the --restart-from CLI flag to point at a previous run
  # directory. See `picurv run --help` for details.
  # The physical time increment for each step, in seconds [s].
  dt_physical: 0.001
# ==============================================================================
# 2. PHYSICAL PROPERTIES
# Defines the non-dimensional scaling and material properties.
# ==============================================================================
properties:
  # --- Scaling ---
  # These reference values are used to non-dimensionalize the problem for the solver.
  scaling:
    length_ref: 0.05   # [m] Characteristic length of the domain (e.g., pipe diameter, chord length).
    velocity_ref: 1.5  # [m/s] Characteristic velocity (e.g., inlet velocity, freestream velocity).
  # --- Fluid ---
  # Properties of the fluid being simulated.
  fluid:
    density: 1000.0    # [kg/m^3]
    viscosity: 0.001   # [Pa.s or kg/(m.s)]
  # --- Initial Conditions ---
  # The state of the fluid at the start of the simulation (t=0).
  # Values are in physical units [m/s].
  initial_conditions:
    # --- Field Initialization Mode ---
    # Defines the type of initial velocity field.
    # Options:
    #   - "Zero": All velocity components are zero.
    #   - "Constant": A uniform velocity field is applied everywhere.
    #   - "Poiseuille": A parabolic velocity profile (for channel/pipe flow).
    mode: "Constant"   # This will be translated to -finit
    # --- Initial Velocity ---
    # For 'Constant': set u_physical/v_physical/w_physical.
    # For 'Poiseuille': prefer 'peak_velocity_physical' (scalar Vmax mapped to the
    # primary inlet axis), or use u_physical/v_physical/w_physical as an explicit component override.
    # peak_velocity_physical: 1.0
    # Values are in physical units [m/s].
    u_physical: 0.0
    v_physical: 0.0
    w_physical: 0.0
# ==============================================================================
# 3. GRID DEFINITION
# Defines the computational mesh for the simulation.
# ==============================================================================
grid:
  # --- Mode Selection ---
  # Determines how the grid is provided to the solver.
  # Options: 'file', 'programmatic_c', or 'grid_gen'.
  mode: programmatic_c
  # --- Optional Global DMDA Layout (applies to all grid modes) ---
  # Edit or remove these for parallel runs. The product must equal the total
  # MPI process count (-n) when all three are set.
  # NOTE: Per-block processor decomposition is not implemented on the C side.
  da_processors_x: 2
  da_processors_y: 2
  da_processors_z: 4
  # --- Option A: Grid from File ---
  # Use this if you have a pre-generated grid file.
  # The script validates and non-dimensionalizes coordinates using 'length_ref'.
  # source_file: "grids/my_premade_grid.picgrid"
  # --- Option B: Grid via Python Grid Generator (scripts/grid.gen) ---
  # Use this for complex curvilinear meshes generated before solver launch.
  # generator:
  #   config_file: "config/grids/coarse_square_tube_curved.cfg"  # Required today. Relative to case.yml or absolute.
  #   grid_type: "cpipe"   # Optional override: cpipe | pipe | warp
  #   cli_args:            # Optional raw CLI token list
  #     - "--ncells-i"
  #     - "96"
  #     - "--ncells-j"
  #     - "96"
  #   output_file: "config/grid.generated.picgrid"  # Optional; relative to run dir.
  #   stats_file: "config/grid.generated.info"      # Optional.
  #   vts_file: "config/grid.generated.vts"         # Optional.
  # --- Option C: Programmatic Grid Generation in C ---
  # Use this to have the C solver generate a structured Cartesian grid.
  # `im/jm/km` are cell counts in YAML; picurv converts them to node counts
  # before emitting `-im/-jm/-km` for the C runtime.
  # For MULTI-BLOCK cases, all values must be LISTS of the same length.
  programmatic_settings:
    # --- Cell Counts ---
    im: 64    # [Cells in i-direction] -> For 2 blocks: [64, 128]
    jm: 32    # [Cells in j-direction] -> For 2 blocks: [32, 32]
    km: 128   # [Cells in k-direction] -> For 2 blocks: [128, 256]
    # --- Domain Bounds (in physical units [m]) ---
    xMins: 0.0  # [Min x-coordinate] -> For 2 blocks: [0.0, 1.0]
    xMaxs: 1.0  # [Max x-coordinate] -> For 2 blocks: [1.0, 2.0]
    yMins: 0.0
    yMaxs: 0.5
    zMins: 0.0
    zMaxs: 2.0
    # --- Grid Stretching Ratios ---
    # 1.0 = uniform. >1.0 = stretching towards max coordinate.
    rxs: 1.0
    rys: 1.05
    rzs: 1.0
# ==============================================================================
# 4. MODEL SELECTION
# User-friendly switches to enable different physical models and features.
# If a section or key is omitted, the C-code's default will be used.
# ==============================================================================
models:
  # --- Domain and Block Configuration ---
  domain:
    blocks: 1            # [Integer] -> -nblk. CRITICAL for multi-block setups.
    i_periodic: false    # [true/false] -> -i_periodic
    j_periodic: false    # [true/false] -> -j_periodic
    k_periodic: false    # [true/false] -> -k_periodic
  # --- Core Physics Modules ---
  physics:
    dimensionality: "3D"   # Options: "3D" (default), "2D" (sets -TwoD 1)
    fsi:
      immersed: false      # [true/false] -> -imm (Immersed Boundary Method)
      moving_fsi: false    # [true/false] -> -fsi (Fluid-Structure Interaction)
    particles:
      count: 0             # [Integer] -> -numParticles
      init_mode: "Surface" # [string] -> -pinit. Options: "Surface", "Volume", "PointSource", "SurfaceEdges"
      restart_mode: "init" # [string] -> -particle_restart_mode. Options: "init", "load"
      point_source:        # Required only when init_mode = "PointSource" (maps to -psrc_x/-psrc_y/-psrc_z)
        x: 0.5
        y: 0.5
        z: 0.5
    turbulence:
      les: false           # [true/false] -> -les (Large Eddy Simulation)
      rans: false          # [true/false] -> -rans (Reynolds-Averaged Navier-Stokes)
      wall_function: false # [true/false] -> -wallfunction
  # --- Statistical Analysis ---
  statistics:
    time_averaging: false  # [true/false] -> -averaging. Enables running averages.
# ==============================================================================
# 5. BOUNDARY CONDITIONS
# Defines the behavior at each of the 6 faces of the computational domain(s).
# ==============================================================================
# --- For SINGLE-BLOCK cases: Provide a simple list of 6 face definitions. ---
# --- For MULTI-BLOCK cases: Provide a LIST OF LISTS. The outer list corresponds
# to the block index [0, 1, ...], and each inner list defines the 6 faces
# for that block. See the multi-block example commented out below. ---
boundary_conditions:
  # --- Example for a Single Block Case ---
  - face: "-Xi"
    type: WALL
    handler: noslip
  - face: "+Xi"
    type: WALL
    handler: noslip
  - face: "-Eta"
    type: INLET
    handler: constant_velocity
    params:
      vx: 1.5   # Physical velocity [m/s]
      vy: 0.0
      vz: 0.0
  - face: "+Eta"
    type: OUTLET
    handler: conservation
  - face: "-Zeta"
    type: WALL
    handler: noslip
  - face: "+Zeta"
    type: WALL
    handler: noslip
# --- Example for a 2-Block Case (syntax example; all handlers shown are currently supported) ---
# boundary_conditions:
#   # --- Block 0 Definitions ---
#   - - face: "-Xi"
#       type: INLET
#       handler: constant_velocity
#       params: { vx: 1.5, vy: 0.0, vz: 0.0 }
#     - face: "+Xi"
#       type: OUTLET
#       handler: conservation
#     - face: "-Eta"
#       type: WALL
#       handler: noslip
#     - face: "+Eta"
#       type: WALL
#       handler: noslip
#     - face: "-Zeta"
#       type: WALL
#       handler: noslip
#     - face: "+Zeta"
#       type: WALL
#       handler: noslip
#
#   # --- Block 1 Definitions ---
#   - - face: "-Xi"
#       type: INLET
#       handler: parabolic
#       params: { v_max: 1.5 }
#     - face: "+Xi"
#       type: OUTLET
#       handler: conservation
#     - face: "-Eta"
#       type: WALL
#       handler: noslip
#     - face: "+Eta"
#       type: WALL
#       handler: noslip
#     - face: "-Zeta"
#       type: WALL
#       handler: noslip
#     - face: "+Zeta"
#       type: WALL
#       handler: noslip
# ==============================================================================
# 6. ADVANCED PASSTHROUGH (OPTIONAL)
# ------------------------------------------------------------------------------
# Use this for C flags not yet exposed in the structured schema.
# Keys must be full C/PETSc-style flags (including leading '-').
# ==============================================================================
# solver_parameters:
#   -read_fields: true
#   -some_new_flag: 123
case.yml defines physical setup, grid source, domain topology, and boundary conditions. It is intentionally modular: the same case.yml can be paired with different solver.yml, monitor.yml, and post.yml profiles when the combination remains contract-compatible.
Key mappings:
- scaling.length_ref -> -scaling_L_ref
- scaling.velocity_ref -> -scaling_U_ref
- fluid.density and fluid.viscosity are used by picurv to compute the Reynolds number -> -ren
- initial_conditions.mode -> -finit (Zero, Constant, Poiseuille)
- u_physical/v_physical/w_physical -> -ucont_x/-ucont_y/-ucont_z
- peak_velocity_physical (Poiseuille only) -> mapped by picurv to the inlet-aligned -ucont_* component

For the scaling model and conversion logic, see Non-Dimensionalization Model. For detailed startup behavior of field initialization modes, see Initial Condition Modes.
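The scaling mappings above feed the solver's non-dimensionalization. As a minimal sketch, assuming the conventional definitions (Re = rho * U_ref * L_ref / mu, and a convective time scale t_ref = L_ref / U_ref) rather than a transcript of picurv's actual implementation:

```python
def reynolds_number(density, viscosity, length_ref, velocity_ref):
    # Conventional Reynolds number Re = rho * U_ref * L_ref / mu (emitted as -ren).
    return density * velocity_ref * length_ref / viscosity

def nondimensional_dt(dt_physical, length_ref, velocity_ref):
    # Assumed convective time scale t_ref = L_ref / U_ref, so dt* = dt_physical / t_ref.
    return dt_physical * velocity_ref / length_ref

# Template values: rho=1000 kg/m^3, mu=0.001 Pa.s, L_ref=0.05 m, U_ref=1.5 m/s
print(reynolds_number(1000.0, 0.001, 0.05, 1.5))  # 75000.0
print(nondimensional_dt(0.001, 0.05, 1.5))        # ~0.03
```

The exact conventions used by picurv are documented in Non-Dimensionalization Model; the formulas here only illustrate the order of magnitude you should expect in the emitted flags.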
Practical contract notes:
- initial_conditions.mode should be set explicitly. The launcher now requires the choice instead of silently inferring it.
- mode: "Zero" may omit velocity components entirely.
- mode: "Constant" requires explicit u_physical/v_physical/w_physical.
- mode: "Poiseuille" may use either:
  - peak_velocity_physical for the scalar centerline-speed input, or
  - u_physical/v_physical/w_physical for an explicit component override.

Run-control mappings:
- dt_physical -> -dt (non-dimensionalized)
- start_step -> -start_step
- total_steps -> -totalsteps

Restart is handled via CLI flags rather than case.yml keys:
- --restart-from <previous_run_dir> -> picurv resolves the prior run's actual restart source directory and emits -restart_dir <absolute_previous_source>
- --continue -> shorthand for resuming from the most recent run of the same case

Supported grid modes:
- programmatic_c
- file
- grid_gen

Mode compatibility note:
- In solve and load workflows, all three grid modes are supported.
- solver.yml -> operation_mode.eulerian_field_source: analytical, TGV3D requires grid.mode: programmatic_c.
- ZERO_FLOW and UNIFORM_FLOW support grid.mode: programmatic_c and grid.mode: file.

Optional global DMDA layout hints apply to all grid modes:
These are scalar global values, not per-block vectors. Legacy placement under grid.programmatic_settings.da_processors_* is still accepted for compatibility, but the shared top-level grid.da_processors_* form is preferred.
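The mode compatibility note above amounts to a small lookup table. A hypothetical validator distilled from it (the real checks live inside picurv and may differ in names and granularity):

```python
# Allowed grid modes per eulerian_field_source, per the compatibility note.
ALLOWED_GRID_MODES = {
    "analytical": {"programmatic_c"},
    "TGV3D": {"programmatic_c"},
    "ZERO_FLOW": {"programmatic_c", "file"},
    "UNIFORM_FLOW": {"programmatic_c", "file"},
}

def check_grid_mode(eulerian_field_source, grid_mode):
    # Unknown sources fall through; known sources are restricted to their set.
    allowed = ALLOWED_GRID_MODES.get(eulerian_field_source)
    if allowed is not None and grid_mode not in allowed:
        raise ValueError(
            f"{eulerian_field_source} requires grid.mode in {sorted(allowed)}, got {grid_mode}"
        )

check_grid_mode("TGV3D", "programmatic_c")  # passes
check_grid_mode("UNIFORM_FLOW", "file")     # passes
```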
programmatic_settings supports per-block lists for geometry arrays:
- im/jm/km
- xMins/xMaxs, yMins/yMaxs, zMins/zMaxs
- rxs/rys/rzs

Dimension contract:
- im/jm/km in YAML are cell counts.
- picurv converts them to node counts before emitting -im/-jm/-km for the C runtime.

Important constraint:
grid.da_processors_x/y/z are scalar integers only (global DMDA layout). Per-block processor decomposition is not implemented.

For grid.mode: file, picurv validates that the source file exists and prepares normalized grid data for C-side ingestion.
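The dimension contract and the processor-layout constraint can be sketched together. The +1 cells-to-nodes convention for a structured grid is an assumption here, and the function names are illustrative, not picurv's:

```python
def cells_to_nodes(cells):
    # Structured-grid convention assumed: nodes = cells + 1 per direction.
    if isinstance(cells, list):        # multi-block: per-block list of cell counts
        return [c + 1 for c in cells]
    return cells + 1                   # single block: scalar cell count

def validate_dmda_layout(px, py, pz, mpi_procs):
    # The product of the three layout hints must equal the MPI process count (-n).
    if px * py * pz != mpi_procs:
        raise ValueError(f"da_processors product {px * py * pz} != -n {mpi_procs}")

print(cells_to_nodes(64))          # 65
print(cells_to_nodes([64, 128]))   # [65, 129]
validate_dmda_layout(2, 2, 4, 16)  # template layout on 16 ranks: ok
```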
Optional legacy conversion path (headerless 1D-axis payloads):
When enabled, picurv first calls scripts/grid.gen legacy1d to create a canonical PICGRID file in the run config, then runs the normal validation/non-dimensionalization staging path.
The grid_gen mode runs scripts/grid.gen before solver launch and stages the generated grid artifacts into the run config. grid.generator.config_file is required today; picurv does not synthesize a temporary grid config. grid.gen accepts cell-count inputs (ncells_* / --ncells-*) and writes node counts into the generated .picgrid header.
For direct grid.gen usage, generator types, and config-file structure, see Grid Generator Guide: scripts/grid.gen.
Common mappings:
- domain.blocks -> -nblk
- domain.i_periodic/j_periodic/k_periodic -> -i_periodic/-j_periodic/-k_periodic
- physics.dimensionality: "2D" -> -TwoD 1
- physics.turbulence.les -> -les
- physics.particles.count -> -numParticles
- physics.particles.init_mode -> -pinit (Surface, Volume, PointSource, SurfaceEdges)
- physics.particles.restart_mode -> -particle_restart_mode
- physics.particles.point_source -> -psrc_x/-psrc_y/-psrc_z
If run_control.start_step > 0, particles are enabled, and restart_mode is omitted, picurv warns that the C runtime will default to load.

For mode-specific particle behavior and restart flow, see Particle Initialization and Restart Guide.
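The warning rule above can be reconstructed as a short predicate. Function and message wording are invented for this sketch, not taken from picurv:

```python
def particle_restart_warning(start_step, particle_count, restart_mode):
    # Warn only when all three restart-ambiguity conditions hold at once.
    if start_step > 0 and particle_count > 0 and restart_mode is None:
        return "restart_mode omitted; the C runtime will default to 'load'"
    return None

assert particle_restart_warning(1000, 50000, None) is not None  # ambiguous restart: warn
assert particle_restart_warning(0, 50000, None) is None         # fresh run: no warning
assert particle_restart_warning(1000, 50000, "load") is None    # explicit mode: no warning
```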
Single-block syntax: list of 6 face entries. Multi-block syntax: list-of-lists, one 6-face list per block.
Supported face names:
-Xi, +Xi, -Eta, +Eta, -Zeta, +Zeta

Supported type/handler combinations:
- INLET + constant_velocity (vx/vy/vz)
- INLET + parabolic (v_max)
- OUTLET + conservation
- WALL + noslip
- PERIODIC + geometric
- PERIODIC + constant_flux (target_flux, optional apply_trim)

All six faces must be explicitly provided for each block. For detailed handler semantics, validation constraints, and C dispatch path, see Boundary Conditions Guide.
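The face and handler contract above lends itself to a small validator. This is an illustrative sketch of the rules as stated, not picurv's actual validation code:

```python
# Supported (type, handler) pairs, per the combination list above.
SUPPORTED_BC = {
    ("INLET", "constant_velocity"),
    ("INLET", "parabolic"),
    ("OUTLET", "conservation"),
    ("WALL", "noslip"),
    ("PERIODIC", "geometric"),
    ("PERIODIC", "constant_flux"),
}
REQUIRED_FACES = {"-Xi", "+Xi", "-Eta", "+Eta", "-Zeta", "+Zeta"}

def validate_block_faces(faces):
    # All six faces must be present exactly once, each with a supported pair.
    seen = {f["face"] for f in faces}
    if seen != REQUIRED_FACES:
        raise ValueError(f"missing faces: {sorted(REQUIRED_FACES - seen)}")
    for f in faces:
        if (f["type"], f["handler"]) not in SUPPORTED_BC:
            raise ValueError(f"unsupported combination: {f['type']} + {f['handler']}")

validate_block_faces([
    {"face": "-Xi", "type": "WALL", "handler": "noslip"},
    {"face": "+Xi", "type": "WALL", "handler": "noslip"},
    {"face": "-Eta", "type": "INLET", "handler": "constant_velocity"},
    {"face": "+Eta", "type": "OUTLET", "handler": "conservation"},
    {"face": "-Zeta", "type": "WALL", "handler": "noslip"},
    {"face": "+Zeta", "type": "WALL", "handler": "noslip"},
])  # passes: matches the single-block template
```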
Optional escape hatch for flags not yet exposed in structured schema:
Use sparingly and prefer structured keys when available.
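One way to picture the passthrough contract is a translation from the YAML mapping into CLI tokens. This is purely illustrative; picurv's actual emission rules, especially for boolean flags, may differ:

```python
def passthrough_to_cli(solver_parameters):
    # Turn {'-flag': value} passthrough entries into a flat CLI token list.
    tokens = []
    for flag, value in solver_parameters.items():
        if not flag.startswith("-"):
            raise ValueError(f"passthrough keys need a leading '-': {flag}")
        tokens.append(flag)
        if value is not True:   # assumption: boolean true emits a bare flag
            tokens.append(str(value))
    return tokens

print(passthrough_to_cli({"-read_fields": True, "-some_new_flag": 123}))
# ['-read_fields', '-some_new_flag', '123']
```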
case.yml is designed to be combined with reusable profiles for the other config roles.
Common patterns:
- case.yml + multiple monitor.yml files (debug vs production output)
- case.yml + multiple post.yml recipes (quick scalar check vs heavy VTK/statistics)
- solver.yml reused across many case.yml files
- cluster.yml reused across many runs and sweeps

For worked combinations, see Workflow Recipes and Config Cookbook.
Proceed to Configuration Reference: Solver YAML.
This page describes Configuration Reference: Case YAML within the PICurv workflow. For CFD users, the most reliable reading strategy is to map the page content to a concrete run decision: what is configured, what runtime stage it influences, and which diagnostics should confirm expected behavior.
Treat this page as both a conceptual reference and a runbook. If you are debugging, pair the method/procedure described here with monitor output, generated runtime artifacts under runs/<run_id>/config, and the associated solver/post logs so numerical intent and implementation behavior stay aligned.