PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
This page is the user-facing source of truth for the configuration contract implemented by picurv. It describes the launcher-level contract, which may be stricter or more explicit than the raw C defaults because picurv validates and normalizes inputs before runtime.
picurv composes a standard single-run workflow from five logical inputs, with two additional files for cluster/sweep modes:
- case.yml: physics, grid, BC definitions, run control.
- solver.yml: numerical strategy and solver parameters.
- monitor.yml: I/O and logging/profiling controls.
- post.yml: post-processing recipe.
- Command-line options (-n, executable stage selection); -n/--num-procs sizes the solver stage launch.
- .picurv-execution.yml: shared runtime execution defaults.

For cluster/sweep modes:

- cluster.yml: scheduler/resource/launcher contract.
- study.yml: parameter matrix + metrics/plot contract.

You can name files however you want. File names are not hardcoded on the C side; picurv resolves paths and emits generated artifacts.
These roles are intentionally modular:
- case.yml describes the physical setup and geometry contract.
- solver.yml describes the numerical strategy.
- monitor.yml describes logging and I/O behavior.
- post.yml describes post-processing outputs.

In normal use, you reuse and mix these files instead of cloning one monolithic config for every run.
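As a sketch of this modular split, a minimal pair of inputs might look like the fragment below. File contents are illustrative only; the key paths (grid.mode, properties.initial_conditions.mode, strategy.momentum_solver, interpolation.method) are the ones documented on this page, but the values are examples, not defaults.

```yaml
# case.yml -- physical setup (illustrative values)
grid:
  mode: programmatic_c          # file | programmatic_c | grid_gen
properties:
  initial_conditions:
    mode: Zero                  # the launcher requires an explicit mode
---
# solver.yml -- numerical strategy (illustrative values)
strategy:
  momentum_solver: dual_time_picard_rk4
interpolation:
  method: Trilinear             # default; CornerAveraged selects the legacy path
```

Because the roles are orthogonal, the same solver.yml can be reused across several case.yml files (and vice versa) without edits.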
For each run, picurv generates:
- <run_id>.control: master PETSc/control flags for solver/post setup.
- bcs.run or bcs_block*.run: boundary condition definitions.
- whitelist.run: logging function allow-list.
- profile.run: selected per-step profiling function list (only when profiling.timestep_output.mode: selected).
- post.run: key=value post-processing recipe consumed by the C post parser.

case.yml contract:

- grid.mode supports: file, programmatic_c, grid_gen.
- With programmatic_c, per-block arrays are supported for geometry (im/jm/km, bounds, stretching).
- programmatic_c.im/jm/km are cell counts in YAML; picurv converts them to node counts before writing -im/-jm/-km.
- grid.da_processors_x/y/z optionally set the global DMDA layout for any grid mode; grid.programmatic_settings.da_processors_* is still accepted for compatibility.
- da_processors_x/y/z are scalar integers only (global DMDA layout). Per-block MPI decomposition is not currently supported.
- With grid_gen, grid.generator.config_file is required today. grid.gen consumes cell counts and writes node counts into .picgrid.
- With file, optional grid.legacy_conversion can call grid.gen legacy1d to convert headerless 1D-axis legacy payloads before standard validation/non-dimensionalization.
- boundary_conditions supports a single-block list or a multi-block list-of-lists.
- solver_parameters is an advanced passthrough map for raw flags not yet modeled in the schema.
- properties.initial_conditions.mode is required explicitly by the launcher.
- properties.initial_conditions.mode: Zero may omit velocity components.
- properties.initial_conditions.mode: Poiseuille supports either peak_velocity_physical (scalar centerline speed) or u_physical/v_physical/w_physical (explicit component override).

solver.yml contract:

- operation_mode.eulerian_field_source -> -euler_field_source
- operation_mode.analytical_type -> -analytical_type
- operation_mode.uniform_flow.{u,v,w} -> -analytical_uniform_u/-analytical_uniform_v/-analytical_uniform_w for UNIFORM_FLOW
- verification.sources.diffusivity.* -> -verification_diffusivity_*
- verification.sources.scalar.* -> -verification_scalar_*
- strategy.momentum_solver -> -mom_solver_type via normalized names (e.g. momentum_solver: dual_time_picard_rk4).
- interpolation.method -> -interpolation_method. Defaults to Trilinear (direct cell-center, second-order); set to CornerAveraged for the legacy two-stage path.
- petsc_passthrough_options remains the escape hatch for advanced PETSc/C flags.

Analytical-mode compatibility rule:
- When operation_mode.eulerian_field_source: analytical is selected, TGV3D still requires case.yml -> grid.mode: programmatic_c.
- ZERO_FLOW and UNIFORM_FLOW support both case.yml -> grid.mode: programmatic_c and case.yml -> grid.mode: file.
- grid_gen remains outside the current documented analytical contract.

Verification-pathway rule:
- solver.yml -> verification.sources.diffusivity and solver.yml -> verification.sources.scalar are reserved for verification-only source overrides when no cleaner end-to-end path exists.
- verification.sources.scalar prescribes particle Psi and drives the runtime diagnostic logs/scatter_metrics.csv, while leaving ordinary production runs unchanged when absent.
- These overrides are modeled as verification_sources.*, with production call sites kept as thin delegation points.

monitor.yml contract:

- io.data_output_frequency -> -tio
- io.particle_console_output_frequency -> -particle_console_output_freq (defaults to data_output_frequency when omitted)
- io.particle_log_interval -> -logfreq
- io.directories.output/restart/log -> -output_dir/-restart_dir/-log_dir
- io.directories.eulerian_subdir/particle_subdir -> -euler_subdir/-particle_subdir
- solver_monitoring maps raw flags directly into control output.

post.yml contract:

- io.eulerian_fields -> output_fields_instantaneous
- io.eulerian_fields_averaged -> output_fields_averaged (reserved/no-op in the current writer path)
- io.particle_fields -> particle_fields_instantaneous
- io.input_extensions.eulerian/particle -> eulerianExt/particleExt for post input readers
- source_data.directory -> source_directory

cluster.yml contract:

- scheduler.type currently supports slurm only.
- resources.account/nodes/ntasks_per_node/mem/time are required; resources.partition is optional.
- notifications.mail_user/mail_type are optional; email is validated when provided.
- execution.module_setup injects shell lines before launch.
- execution.launcher controls launch style (srun, mpirun, custom). A multi-word launcher string is accepted for site compatibility, but keeping the executable here and extra flags in execution.launcher_args is the preferred portable form.
- execution.launcher_args provides site-specific launch flags and is appended after any inline tokens parsed from execution.launcher.
- When execution.launcher / execution.launcher_args are omitted, picurv falls back to the nearest .picurv-execution.yml (cluster_execution, then default_execution) before using the built-in default srun.
- execution.walltime_guard optionally tunes the automatic runtime walltime estimator for generated Slurm solver jobs. When omitted, generated solver jobs still use the built-in default policy (enabled: true, warmup_steps: 10, multiplier: 2.0, min_seconds: 60, estimator_alpha: 0.35).
- execution.extra_sbatch supports scheduler-specific pass-through flags.
- cluster.yml does not currently define run naming; picurv derives run_id from <case_basename>_<timestamp> and uses that same run ID to name generated scheduler jobs.

Optional shared runtime execution file:
- picurv init writes .picurv-execution.yml into each new case with inert defaults.
- .picurv-execution.yml may define: default_execution, local_execution, cluster_execution.
- Local launcher resolution order: PICURV_MPI_LAUNCHER -> MPI_LAUNCHER -> .picurv-execution.yml -> legacy .picurv-local.yml -> default mpiexec.
- Cluster launcher resolution order: cluster.yml.execution -> .picurv-execution.yml cluster_execution -> .picurv-execution.yml default_execution -> default srun.

picurv run --cluster ... generates:
- runs/<run_id>/scheduler/solver.sbatch
- runs/<run_id>/scheduler/post.sbatch
- runs/<run_id>/scheduler/solver_<jobid>.out/.err and post_<jobid>.out/.err after submission
- runs/<run_id>/scheduler/submission.json

study.yml contract:

- base_configs provides case/solver/monitor/post template paths.
- study_type is one of: grid_independence, timestep_independence, sensitivity.
- parameters defines cartesian-product sweeps using keys of the form case.<yaml.path>, solver.<yaml.path>, monitor.<yaml.path>, or post.<yaml.path>.
- metrics defines CSV/log extractors for aggregate tables.
- plotting controls whether plots are generated and the output format.
- execution.max_concurrent_array_tasks maps to Slurm array throttling (the %N limit).

picurv sweep --study ... --cluster ... generates:
- studies/<study_id>/scheduler/case_index.tsv
- studies/<study_id>/scheduler/solver_array.sbatch
- studies/<study_id>/scheduler/post_array.sbatch
- studies/<study_id>/scheduler/solver_<array_jobid>_<taskid>.out/.err and post_<array_jobid>_<taskid>.out/.err after submission
- studies/<study_id>/results/metrics_table.csv
- studies/<study_id>/results/plots/*

Sweep keys may also target the passthrough maps case.solver_parameters, solver.petsc_passthrough_options, and monitor.solver_monitoring. Input file extensions default to dat unless overridden.

Launcher defaults vs C defaults:
Examples:
- Omitting properties.initial_conditions.mode is rejected at the launcher level,
- mode: Zero is accepted and defaults to zero velocity,
- omitting models.physics.particles.restart_mode on a particle restart emits a warning that C will default to load.

For workflow growth patterns (grid-generation orchestration, multi-run studies, and ML coupling paths), see the Workflow Extensibility Guide. For worked examples and profile-composition patterns, see the Workflow Recipes and Config Cookbook. For selector-specific contributor hook points, see the Modular Selector Extension Guide.
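The initial-condition defaults above can be sketched as a minimal case.yml fragment. Key paths come from this page; the commented values are illustrative examples, not shipped defaults.

```yaml
properties:
  initial_conditions:
    mode: Zero                     # accepted; velocity components may be omitted
    # mode: Poiseuille             # alternative mode, with either:
    # peak_velocity_physical: 1.0  #   scalar centerline speed, or
    # u_physical: 1.0              #   explicit components (with v_physical/w_physical)
```

Omitting the mode key entirely is the case the launcher rejects; the explicit Zero above is the smallest accepted form.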
This page describes the Configuration Contract (YAML -> Generated Artifacts -> Runtime) within the PICurv workflow. For CFD users, the most reliable reading strategy is to map the page content to a concrete run decision: what is configured, which runtime stage it influences, and which diagnostics should confirm the expected behavior.
Treat this page as both a conceptual reference and a runbook. If you are debugging, pair the method/procedure described here with monitor output, generated runtime artifacts under runs/<run_id>/config, and the associated solver/post logs so numerical intent and implementation behavior stay aligned.
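To close the YAML -> generated-flags loop described above, here is a hedged solver.yml sketch annotated with the control flags this page documents. The key paths and flag names are taken from the contract tables above; the values (and the ksp_rtol passthrough entry) are illustrative examples only.

```yaml
operation_mode:
  eulerian_field_source: analytical       # -> -euler_field_source
  analytical_type: UNIFORM_FLOW           # -> -analytical_type
  uniform_flow:
    u: 1.0                                # -> -analytical_uniform_u
    v: 0.0                                # -> -analytical_uniform_v
    w: 0.0                                # -> -analytical_uniform_w
strategy:
  momentum_solver: dual_time_picard_rk4   # -> -mom_solver_type (normalized name)
interpolation:
  method: Trilinear                       # -> -interpolation_method (default)
petsc_passthrough_options:                # escape hatch for raw PETSc/C flags
  ksp_rtol: 1.0e-8                        # illustrative PETSc option
```

After a run, the corresponding flags should appear in the generated <run_id>.control under runs/<run_id>/config, which is the quickest way to confirm that the launcher normalized your intent as expected.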