PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
User How-To Guides

This page provides operational recipes for common PICurv tasks. Each recipe includes what to change, why it matters, and a quick verification action.

1. Setup and Physics

1.1 Change Reynolds Number

What to change (in case.yml):

properties:
  scaling:
    length_ref: 0.1
    velocity_ref: 1.5
  fluid:
    density: 1000.0
    viscosity: 0.001

Why:

\[ Re = \frac{\rho U L}{\mu} \]

These values set the non-dimensional operating point consumed by solver controls.
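As a quick sanity check, the operating point implied by these values can be computed directly. A minimal sketch in plain Python (not part of PICurv):

```python
def reynolds_number(density, velocity_ref, length_ref, viscosity):
    """Re = rho * U * L / mu, using the case.yml values above."""
    return density * velocity_ref * length_ref / viscosity

# Values from the snippet above: Re = 1000 * 1.5 * 0.1 / 0.001
print(reynolds_number(1000.0, 1.5, 0.1, 0.001))  # 150000.0
```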

Quick check:

  • rerun picurv validate ...,
  • inspect generated .control file for updated values.

1.2 Run in 2D

models:
  physics:
    dimensionality: "2D"
grid:
  mode: programmatic_c
  programmatic_settings:
    km: [3]

Why:

  • 2D mode still uses a thin third dimension for structured-grid machinery,
  • small km is typically enough for planar scenarios.

Quick check:

  • confirm generated run uses expected Z resolution.

1.3 Increase Grid Resolution

  • programmatic_c: increase im/jm/km arrays,
  • file: use finer .picgrid,
  • grid_gen: increase generator resolution args.

Verification:

  • compare runtime memory/cost and key output metrics across resolutions.

2. Boundary Conditions

2.1 Set a Constant-Velocity Inlet and Walls

boundary_conditions:
  - face: "-Zeta"
    type: "INLET"
    handler: "constant_velocity"
    params: {vx: 0.0, vy: 0.0, vz: 1.5}
  - face: "-Eta"
    type: "WALL"
    handler: "noslip"
  # define all remaining faces explicitly

Why:

  • handler/type compatibility is validated,
  • all faces must be covered for each block.

Verification:

  • use validate first and check BC generation files under runs/<run_id>/config/.
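The coverage and compatibility rules above can be sketched as a standalone check. The face names and the handler table here are illustrative assumptions, not PICurv's validator:

```python
# Illustrative sketch of the kind of check `picurv validate` is described
# as performing; face naming (Xi/Eta/Zeta pairs) and the compatibility
# table are assumptions for this example only.
ALL_FACES = {"-Xi", "+Xi", "-Eta", "+Eta", "-Zeta", "+Zeta"}
COMPATIBLE = {"INLET": {"constant_velocity"}, "WALL": {"noslip"}}

def check_block_bcs(boundary_conditions):
    """Return faces still missing a BC and entries with an invalid
    type/handler pairing for one block."""
    covered = {bc["face"] for bc in boundary_conditions}
    missing = ALL_FACES - covered
    bad = [bc for bc in boundary_conditions
           if bc["handler"] not in COMPATIBLE.get(bc["type"], set())]
    return missing, bad

bcs = [
    {"face": "-Zeta", "type": "INLET", "handler": "constant_velocity"},
    {"face": "-Eta", "type": "WALL", "handler": "noslip"},
]
missing, bad = check_block_bcs(bcs)
print(sorted(missing))  # the four faces still left to define
```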

2.2 Enable Periodicity in One Direction

models:
  domain:
    i_periodic: true

Use the PERIODIC BC type on both paired faces with a supported handler:

  • geometric
  • constant_flux (requires target_flux)

Verification:

  • confirm paired-face consistency in validation output.

3. Running and Monitoring

3.1 Run in Parallel and Control DMDA Layout

./bin/picurv run -n 16 --solve ...

Optional partition hints:

grid:
  da_processors_x: 4
  da_processors_y: 2
  da_processors_z: 2

Note: da_processors_* are scalar globals, not per-block vectors.
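Before submitting, it is worth confirming that the partition hints multiply out to the rank count passed via -n. A minimal sketch of that check (not PICurv's own code):

```python
# The product of the da_processors_* hints must match the MPI rank
# count requested with -n (here 4 * 2 * 2 == 16).
def layout_matches(nx, ny, nz, nranks):
    return nx * ny * nz == nranks

print(layout_matches(4, 2, 2, 16))  # True
print(layout_matches(4, 2, 2, 12))  # False
```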

3.2 Run on Slurm (Generate and Submit)

./bin/picurv run --solve --post-process \
  --case my_case/case.yml \
  --solver my_case/solver.yml \
  --monitor my_case/monitor.yml \
  --post my_case/post.yml \
  --cluster my_case/cluster.yml

Generate-only mode:

./bin/picurv run --solve --post-process \
  --case my_case/case.yml \
  --solver my_case/solver.yml \
  --monitor my_case/monitor.yml \
  --post my_case/post.yml \
  --cluster my_case/cluster.yml \
  --no-submit

Submit an already staged run later:

./bin/picurv submit --run-dir runs/<run_id>

Cancel a submitted run by directory:

./bin/picurv cancel --run-dir runs/<run_id> --stage solve

Generated Slurm solver jobs already enable an automatic runtime walltime guard. Override it only when you need a different warmup/headroom policy:

execution:
  walltime_guard:
    enabled: true
    warmup_steps: 10
    multiplier: 2.0
    min_seconds: 60
    estimator_alpha: 0.35
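To build intuition for these knobs, here is a hypothetical sketch of how such a guard policy could combine them. The real guard's logic lives in the solver; this only illustrates how warmup_steps, multiplier, min_seconds, and estimator_alpha might interact:

```python
# Hypothetical walltime-guard policy sketch (not PICurv internals):
# track an exponential moving average of per-step cost, then stop early
# when the remaining walltime falls below a headroom estimate.
def should_stop(step, step_seconds, ema, remaining_seconds,
                warmup_steps=10, multiplier=2.0, min_seconds=60, alpha=0.35):
    """Update the EMA of step cost and decide whether to stop early."""
    ema = step_seconds if ema is None else alpha * step_seconds + (1 - alpha) * ema
    if step < warmup_steps:            # never trip during warmup
        return False, ema
    headroom = max(min_seconds, multiplier * ema)
    return remaining_seconds < headroom, ema

stop, ema = should_stop(step=50, step_seconds=30.0, ema=30.0,
                        remaining_seconds=45.0)
print(stop)  # True: 45 s left is below the 60 s headroom floor
```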

Ask Slurm for an early warning signal as fallback protection when you want PICurv to flush one last snapshot before walltime or preemption:

execution:
  extra_sbatch:
    signal: "USR1@300"

If the batch script launches mpirun directly, use signal: "B:USR1@300" and prefer exec mpirun ....

Verification:

  • inspect scheduler/*.sbatch and submission.json in run directory.
  • confirm the generated solver script exports PICURV_JOB_START_EPOCH and PICURV_WALLTIME_LIMIT_SECONDS.
  • confirm the generated cluster profile contains the intended signal fallback policy before submission.

3.3 Restart from a Saved Step

run_control:
  start_step: 500
  total_steps: 1000

Pass --restart-from on the CLI to point at the previous run:

./bin/picurv run --solve --post-process \
  --restart-from ../runs/flat_channel_20260303-120000 \
  --case restart_case/case.yml \
  --solver restart_case/solver.yml \
  --monitor restart_case/monitor.yml \
  --post restart_case/post.yml

Meaning:

  • If a run has completed through step 500, set start_step: 500.
  • The next run loads the saved state at step 500.
  • The first new step advanced is step 501.
  • total_steps is the number of additional steps to run.
  • In this example, the restarted run advances from step 501 through step 1500.
  • When --restart-from is given, picurv automatically resolves the previous run's restart directory from that run's config/monitor.yml and injects the correct -restart_dir into the new control file.
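The step arithmetic above can be written out explicitly; a minimal sketch of the rule (start_step is the last completed step, total_steps counts additional steps):

```python
# Restart-window arithmetic from the rules above.
def restart_window(start_step, total_steps):
    first_new = start_step + 1           # first step the restarted run advances
    last = start_step + total_steps      # final step after the additional steps
    return first_new, last

print(restart_window(500, 1000))  # (501, 1500)
```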

Typical full field restart (solver.yml):

operation_mode:
  eulerian_field_source: "load"

Particle restart choices (case.yml):

Full restart of the existing particle swarm:

models:
  physics:
    particles:
      restart_mode: "load"

Restart the flow field but reseed particles from scratch:

models:
  physics:
    particles:
      restart_mode: "init"

Common combinations:

  • Full restart: start_step > 0, eulerian_field_source: load, restart_mode: load
  • Flow restart + fresh particles: start_step > 0, eulerian_field_source: load, restart_mode: init
  • Analytical mode is different: eulerian_field_source: analytical regenerates the analytical field at the requested (t, step) instead of loading restart files.

How to think about this workflow:

  • restart uses the normal run --solve path; there is no separate restart command,
  • the new run directory is a fresh run artifact,
  • the saved field state is loaded from existing restart/output files referenced by the current restart path contract,
  • --restart-from on the CLI is the preferred way to point picurv at the old run automatically,
  • use --continue as a shorthand when resuming from the most recent run of the same case,
  • the old run directory is not mutated in place by picurv.

Before launching a restart, verify:

  • the previous run actually wrote solver outputs for the target start_step,
  • the --restart-from path points to the intended previous run directory,
  • monitor.yml -> io.directories.restart points to the intended restart directory name,
  • monitor.yml -> io.directories.output matches where the prior run wrote field data,
  • restart source files for the requested step exist,
  • start_step matches an actual saved timestep, not just a desired number.

Common restart mistakes:

  • Setting start_step: 501 after a run that ended at 500. Use start_step: 500.
  • Forgetting solver.yml -> operation_mode.eulerian_field_source: load for a true field restart.
  • Forgetting to choose particles.restart_mode: load or init explicitly.
  • Trying to restart from a step that was never written to disk.

Verification:

  • confirm restart directory and step indices in run logs,
  • confirm the banner/load path shows the expected restart step,
  • if particles are enabled, confirm the log shows the intended particle restart mode.

3.4 Enable Targeted Debug Logging

logging:
  verbosity: "DEBUG"
  enabled_functions:
    - Projection
    - UpdatePressure

Use this for local diagnosis of instability or boundary anomalies. Prefer narrow function lists to keep logs manageable.

4. Post-Processing Recipes

4.1 Postprocess an Existing Run

./bin/picurv run --post-process \
  --run-dir runs/flat_channel_20240401-153000 \
  --post my_study/standard_analysis.yml

Use this when solver outputs already exist and you are iterating only on the analysis pipeline.

To catch up the same recipe in batches while the solver is still running, add --continue and keep the full desired window in post.yml:

./bin/picurv run --post-process --continue \
  --run-dir runs/flat_channel_20240401-153000 \
  --post my_study/standard_analysis.yml

Behavior notes:

  • if the same recipe already produced steps 0..60 and post.yml still asks for 0..100, PICurv launches only 70..100.
  • if source files currently exist only through 420, PICurv launches only the fully available contiguous prefix and exits successfully; a later --continue run picks up the newer steps.
  • if you change the recipe itself, PICurv starts from that recipe's configured start_step instead of inheriting completion from the earlier recipe.
  • if you omit --continue, PICurv honors the requested window exactly and treats the run as an explicit rerun.
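The batching rule described in the notes above can be sketched as follows, assuming a uniform output cadence. Step numbering and cadence handling here are illustrative assumptions, not PICurv internals:

```python
# Sketch of the --continue batching rule: skip the already-completed
# prefix, cap at what exists on disk, and report the window to launch.
def steps_to_launch(requested, completed_through, available_through, cadence):
    start, end = requested
    first = max(start, completed_through + cadence)   # skip finished prefix
    last = min(end, available_through)                # cap at available steps
    return (first, last) if first <= last else None   # None: nothing to do

# Recipe already produced 0..60, post.yml asks for 0..100, cadence 10:
print(steps_to_launch((0, 100), completed_through=60,
                      available_through=100, cadence=10))  # (70, 100)
```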

For the broader run-directory lifecycle around restart, post-only reuse, and generated scheduler artifacts, see Run Lifecycle Guide.

4.2 Add Q-Criterion to Eulerian Pipeline

eulerian_pipeline:
  - task: nodal_average
    input_field: Ucat
    output_field: Ucat_nodal
  - task: q_criterion
io:
  eulerian_fields:
    - Ucat_nodal
    - Qcrit

Verification:

  • open VTK output and confirm Qcrit field is present.

4.3 Enable Statistics Output (MSD)

statistics_pipeline:
  output_prefix: "Stats"
  tasks:
    - task: msd

Verification:

  • check output/statistics/Stats_msd.csv by default, unless statistics_pipeline.output_prefix includes an explicit relative or absolute path.
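The prefix-to-path rule described above can be sketched as a small resolver. The function name and the exact filename convention are assumptions for illustration:

```python
from pathlib import Path

# Hypothetical sketch of the path rule above: a bare prefix lands under
# output/statistics/, while a prefix containing a directory part is used
# as-is. Not PICurv's own resolver.
def resolve_stats_csv(output_prefix, task, base_dir="output/statistics"):
    prefix = Path(output_prefix)
    if prefix.parent == Path("."):        # bare name, no directory part
        return Path(base_dir) / f"{output_prefix}_{task}.csv"
    return prefix.parent / f"{prefix.name}_{task}.csv"

print(resolve_stats_csv("Stats", "msd"))               # output/statistics/Stats_msd.csv
print(resolve_stats_csv("results/run1/Stats", "msd"))  # results/run1/Stats_msd.csv
```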

5. Case Initialization and Binary Management

5.1 Initialize a New Case

picurv init flat_channel --dest my_case

This copies template files and writes metadata. Runtime binaries (simulator, postprocessor) are resolved from the project bin/ directory via PATH — no copies are placed in the case.

5.2 Pin Binaries for Reproducibility

picurv init flat_channel --dest my_case --pin-binaries

Use --pin-binaries when you plan to submit Slurm jobs and may rebuild the repo before the job executes. Case-local copies take precedence over bin/ originals at runtime.

Equivalent manual step after init:

picurv sync-binaries --case-dir my_case

5.3 Rebuild Safety

  • picurv (the Python conductor) can be updated at any time — it only launches jobs, it does not run during solver execution.
  • simulator and postprocessor in bin/ are overwritten by make all. If a queued Slurm job references them by absolute path, the running binary may change.
  • Use --pin-binaries or sync-binaries before submission to protect running jobs.

6. Sweep Studies

./bin/picurv sweep \
  --study my_study/study.yml \
  --cluster my_study/cluster.yml

What you get:

  • expanded case matrix,
  • scheduler array scripts,
  • aggregated metrics table (auto-collected after jobs complete),
  • optional plots.

If a case is killed (e.g. walltime), continue the study:

./bin/picurv sweep --continue --study-dir studies/<study_id>

Re-aggregate metrics manually:

./bin/picurv sweep --reaggregate --study-dir studies/<study_id>

See Sweep and Study Guide for full contract details.

7. Next Steps

CFD Reader Guidance and Practical Use

This page collects user how-to recipes within the PICurv workflow. For CFD users, the most reliable reading strategy is to map each recipe to a concrete run decision: what is configured, which runtime stage it influences, and which diagnostics should confirm expected behavior.

Treat this page as both a conceptual reference and a runbook. If you are debugging, pair the method/procedure described here with monitor output, generated runtime artifacts under runs/<run_id>/config, and the associated solver/post logs so numerical intent and implementation behavior stay aligned.

What To Extract Before Changing A Case

  • Identify which YAML role or runtime stage this page governs.
  • List the primary control knobs (tolerances, cadence, paths, selectors, or mode flags).
  • Record expected success indicators (convergence trend, artifact presence, or stable derived metrics).
  • Record failure signals that require rollback or parameter isolation.

Practical CFD Troubleshooting Pattern

  1. Reproduce the issue on a tiny case or narrow timestep window.
  2. Change one control at a time and keep all other roles/configs fixed.
  3. Validate generated artifacts and logs after each change before scaling up.
  4. If behavior remains inconsistent, compare against a known-good baseline example and re-check grid/BC consistency.