PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
This page provides operational recipes for common PICurv tasks. Each recipe includes what to change, why it matters, and a quick verification action.
What to change (in case.yml):
Why:
\[ Re = \frac{\rho U L}{\mu} \]
These values set the non-dimensional operating point consumed by solver controls.
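As a quick sanity check on the operating point, the Reynolds number from the formula above can be computed directly. The function and variable names here are illustrative, not PICurv config keys:

```python
def reynolds_number(rho: float, U: float, L: float, mu: float) -> float:
    """Re = rho * U * L / mu, per the formula above."""
    return rho * U * L / mu

# Illustrative values only: water-like density/viscosity, 1 m/s over 0.1 m.
print(reynolds_number(rho=1000.0, U=1.0, L=0.1, mu=1e-3))  # 100000.0
```

Confirm that the Re implied by your chosen reference values matches the non-dimensional operating point you intend the solver controls to consume.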
Quick check:
Run picurv validate ..., then inspect the generated .control file for the updated values.

Why:
A small km is typically enough for planar scenarios.

Quick check:
- programmatic_c: increase the im/jm/km arrays,
- file: use a finer .picgrid,
- grid_gen: increase the generator resolution arguments.

Verification:
Why:
Verification:
Run validate first and check the generated BC files under runs/<run_id>/config/.

Use the PERIODIC BC type on both paired faces with supported handlers:
- geometric
- constant_flux (requires target_flux)

Verification:
Optional partition hints:
Note: da_processors_* are scalar globals, not per-block vectors.
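A sketch of what such hints might look like. The names follow the da_processors_* pattern quoted above, but the exact suffixes and values are assumptions for illustration:

```yaml
# hypothetical partition hints; suffixes are assumed
da_processors_x: 4   # a single scalar global, not a per-block vector
da_processors_y: 2
da_processors_z: 1
```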
Generate-only mode:
Submit an already staged run later:
Cancel a submitted run by directory:
Generated Slurm solver jobs already enable an automatic runtime walltime guard. Override it only when you need a different warmup/headroom policy:
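The guard's arithmetic can be sketched with the environment variables this page names (PICURV_JOB_START_EPOCH and PICURV_WALLTIME_LIMIT_SECONDS). The function and its headroom default are illustrative, not PICurv's actual implementation:

```python
import os
import time

def seconds_until_guard(headroom: float = 300.0) -> float:
    """Time left before a walltime guard with `headroom` seconds of margin
    should trigger a final flush, based on the exported job metadata."""
    start = float(os.environ["PICURV_JOB_START_EPOCH"])
    limit = float(os.environ["PICURV_WALLTIME_LIMIT_SECONDS"])
    elapsed = time.time() - start
    return limit - headroom - elapsed

# Simulate a job that started 1000 s ago with a 3600 s walltime limit.
os.environ["PICURV_JOB_START_EPOCH"] = str(time.time() - 1000.0)
os.environ["PICURV_WALLTIME_LIMIT_SECONDS"] = "3600"
print(round(seconds_until_guard()))  # roughly 2300
```

Overriding the guard then amounts to choosing a different headroom so the final snapshot flush starts earlier or later relative to the walltime.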
Ask Slurm for an early warning signal as fallback protection when you want PICurv to flush one last snapshot before walltime or preemption:
If the batch script launches mpirun directly, use signal: "B:USR1@300" and prefer exec mpirun ... so the signal reaches the MPI launcher rather than a wrapper shell.
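As a sketch of where this lives (the key placement is an assumption; only the signal string comes from this page):

```yaml
# hypothetical scheduler fragment; surrounding key names are illustrative
signal: "B:USR1@300"   # Slurm delivers USR1 to the batch shell 300 s before walltime
```

With the B: prefix, Slurm signals the batch shell itself, which is why exec'ing mpirun (so it replaces that shell) matters for delivery.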
Verification:
- inspect scheduler/*.sbatch and submission.json in the run directory,
- confirm PICURV_JOB_START_EPOCH and PICURV_WALLTIME_LIMIT_SECONDS are set,
- review the signal fallback policy before submission.

Pass --restart-from on the CLI to point at the previous run:
Meaning:
- the run resumes at start_step: 500,
- total_steps is the number of additional steps to run,
- when --restart-from is given, picurv automatically resolves the previous run's restart directory from that run's config/monitor.yml and injects the correct -restart_dir into the new control file.

Typical full field restart (solver.yml):
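A sketch of the snippet this heading refers to, assembled from keys named on this page; the solver.yml nesting is an assumption:

```yaml
# solver.yml (sketch; nesting assumed)
operation_mode:
  eulerian_field_source: load   # true field restart: load saved fields
start_step: 500                 # resume at the last saved step, not step + 1
```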
Particle restart choices (case.yml):
Full restart of the existing particle swarm:
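A sketch using the particles.restart_mode key named later on this page; the case.yml nesting is assumed:

```yaml
# case.yml (sketch)
particles:
  restart_mode: load   # restore the existing particle swarm
```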
Restart the flow field but reseed particles from scratch:
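Correspondingly, a sketch for reseeding (same assumed nesting):

```yaml
# case.yml (sketch)
particles:
  restart_mode: init   # discard saved particles and reseed from scratch
```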
Common combinations:
- start_step > 0, eulerian_field_source: load, restart_mode: load (restart the field and the existing particle swarm),
- start_step > 0, eulerian_field_source: load, restart_mode: init (restart the field, reseed particles),
- eulerian_field_source: analytical regenerates the analytical field at the requested (t, step) instead of loading restart files.

How to think about this workflow:
- restarts go through the normal run --solve path; there is no separate restart command,
- --restart-from on the CLI is the preferred way to point picurv at the old run automatically,
- --continue is a shorthand when resuming from the most recent run of the same case; the remaining plumbing is handled by picurv.

Before launching a restart, verify:
- the start_step value,
- the --restart-from path points to the intended previous run directory,
- monitor.yml -> io.directories.restart points to the intended restart directory name,
- monitor.yml -> io.directories.output matches where the prior run wrote field data,
- start_step matches an actual saved timestep, not just a desired number.

Common restart mistakes:
- Setting start_step: 501 after a run that ended at 500; use start_step: 500.
- Forgetting solver.yml -> operation_mode.eulerian_field_source: load for a true field restart.
- Not choosing particles.restart_mode: load or init explicitly.

Verification:
See also:
Use this for local diagnosis of instability or boundary anomalies. Prefer narrow function lists to keep logs manageable.
Use when solver outputs already exist and you are iterating only on the analysis pipeline.
To catch up the same recipe in batches while the solver is still running, add --continue and keep the full desired window in post.yml:
Behavior notes:
- If earlier batches already cover steps 0..60 and post.yml still asks for 0..100, PICurv launches only 70..100.
- If the solver has so far written only through step 420, PICurv launches only the fully available contiguous prefix and exits successfully; a later --continue run picks up the newer steps.
- A changed recipe starts from its own start_step instead of inheriting completion from the earlier recipe.
- Without --continue, PICurv honors the requested window exactly and treats the run as an explicit rerun.

For the broader run-directory lifecycle around restart, post-only reuse, and generated scheduler artifacts, see the Run Lifecycle Guide.
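The window selection described above can be sketched as follows; catch_up_window and its arguments are hypothetical names, not PICurv API:

```python
def catch_up_window(requested, processed, available):
    """Steps to launch now: the contiguous available prefix of the
    requested window, minus steps already processed in earlier batches."""
    contiguous = []
    for step in requested:
        if step not in available:
            break  # stop at the first gap in solver output
        contiguous.append(step)
    return [s for s in contiguous if s not in processed]

# Earlier batches covered 0..60 (snapshots every 10 steps); post.yml asks
# for 0..100, but the solver has only written through step 80 so far.
requested = list(range(0, 101, 10))
processed = set(range(0, 61, 10))
available = set(range(0, 81, 10))
print(catch_up_window(requested, processed, available))  # [70, 80]
```

A later --continue run repeats the same selection against the then-larger set of available steps, which is why it eventually converges on the full requested window.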
Verification:
Confirm the Qcrit field is present.

Verification:
Statistics are written to output/statistics/Stats_msd.csv by default, unless statistics_pipeline.output_prefix includes an explicit relative or absolute path.

This copies template files and writes metadata. Runtime binaries (simulator, postprocessor) are resolved from the project bin/ directory via PATH; no copies are placed in the case.
Use --pin-binaries when you plan to submit Slurm jobs and may rebuild the repo before the job executes. Case-local copies take precedence over bin/ originals at runtime.
Equivalent manual step after init:
- picurv (the Python conductor) can be updated at any time; it only launches jobs and does not run during solver execution.
- simulator and postprocessor in bin/ are overwritten by make all; if a queued Slurm job references them by absolute path, the running binary may change.
- Use --pin-binaries or sync-binaries before submission to protect running jobs.

What you get:
If a case is killed (e.g. walltime), continue the study:
Re-aggregate metrics manually:
See Sweep and Study Guide for full contract details.
This page collects user how-to guides for the PICurv workflow. For CFD users, the most reliable reading strategy is to map each recipe to a concrete run decision: what is configured, which runtime stage it influences, and which diagnostics should confirm the expected behavior.

Treat this page as both a conceptual reference and a runbook. When debugging, pair the procedure described here with monitor output, the generated runtime artifacts under runs/<run_id>/config, and the associated solver/post logs, so that numerical intent and implementation behavior stay aligned.