PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
The full commented template is reproduced below:
# ==============================================================================
# PICurv Master Post-Processing Profile
# ==============================================================================
#
# This file is the definitive template for post-processing tasks, derived
# directly from the capabilities of the C post-processor executable. All options
# here map to specific, implemented C functions and configuration keys.
#
# Use this as a guide to create specific analysis profiles (e.g., standard_analysis.yml).
#
# ==============================================================================
# SECTION 1: SOURCE DATA & RUN CONTROL
# ------------------------------------------------------------------------------
# Defines where to find the input data and which timesteps to process.
# This section directly configures the main processing loop in the C executable.
# ==============================================================================
source_data:
  # (Required) The directory containing the input solver data files (.dat).
  #
  # SPECIAL TOKEN: "<solver_output_dir>"
  # The picurv script replaces this with the solver's output directory.
  # This is the standard and recommended setting.
  directory: "<solver_output_dir>"
run_control:
  # (Required) The first timestep in the logical analysis window.
  # Maps to 'startTime' in the generated C-side post.run recipe.
  # Keep this at the true beginning of the recipe window you care about.
  # When you launch `picurv run --post-process --continue`, PICurv may
  # internally resume at a later effective step for the same recipe without
  # requiring you to edit this value.
  start_step: 0

  # (Required) The last timestep in the logical analysis window.
  # A value of -1 means "process up to the last available step".
  # On live solver runs, PICurv caps each launch to the highest fully
  # available contiguous source step before generating `post.run`.
  # Maps to 'endTime' in the generated C-side post.run recipe.
  end_step: -1

  # (Optional) The interval for processing steps (e.g., 10 means process
  # steps 0, 10, 20...). Defaults to 1.
  # Maps to 'timeStep' in the generated C-side post.run recipe.
  step_interval: 1
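The run_control keys map one-to-one onto the post.run entries named in the comments above. A minimal Python sketch of that mapping (the function name and the dict representation are illustrative; the real picurv serializer and the on-disk post.run syntax are not reproduced here):

```python
def run_control_to_postrun(run_control):
    """Map post.yml run_control keys to the documented post.run key names.

    Illustrative sketch only, assuming the key mapping stated in this page:
    start_step -> startTime, end_step -> endTime, step_interval -> timeStep.
    """
    return {
        "startTime": run_control.get("start_step", 0),
        "endTime": run_control.get("end_step", -1),
        "timeStep": run_control.get("step_interval", 1),
    }

# The template's defaults: full window, every step.
recipe = run_control_to_postrun({"start_step": 0, "end_step": -1})
```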
# ==============================================================================
# SECTION 2: GLOBAL OPERATIONS
# ------------------------------------------------------------------------------
# These are operations that apply to the entire dataset at the beginning of each
# timestep's processing.
# ==============================================================================
global_operations:
  # (Optional) If true, converts all loaded fields (grid and data) to physical,
  # dimensional units before any other processing.
  # Corresponds to adding `DimensionalizeAllLoadedFields` to the C-side pipeline.
  dimensionalize: true
# ==============================================================================
# SECTION 3: PROCESSING PIPELINE
# ------------------------------------------------------------------------------
# Defines a sequence of operations to perform on the loaded data. The picurv
# script will serialize these lists into semicolon-separated strings for the
# C executable's 'process_pipeline' and 'particle_pipeline' keys.
# ==============================================================================
eulerian_pipeline:
  # Computes the Q-Criterion for vortex identification.
  # Creates a new field named "Qcrit".
  # Invokes the C function: `ComputeQCriterion()`.
  - task: q_criterion

  # (Optional) Performs cell-to-node averaging on specified fields.
  # For each field in the list (e.g., 'P'), this computes a new field
  # named with a "_nodal" suffix (e.g., 'P_nodal').
  # Invokes the C function: `ComputeNodalAverage(user, "P", "P_nodal")`.
  - task: nodal_average
    input_field: 'P'
    output_field: 'P_nodal'

  - task: nodal_average
    input_field: 'Ucat'
    output_field: 'Ucat_nodal'

  # (Optional) Normalizes a relative field (like pressure) by subtracting
  # the value at a specific grid index.
  # Invokes the C function: `NormalizeRelativeField()`.
  - task: normalize_field
    # The field to normalize. Currently, only "P" is supported by the C code.
    field: 'P'
    # The [i, j, k] grid indices of the reference point. The Python script will
    # generate 'reference_ip', 'reference_jp', 'reference_kp' keys in post.run.
    reference_point: [1, 1, 1]
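picurv serializes this task list into the semicolon-separated process_pipeline string consumed by the C executable, using tokens such as ComputeQCriterion, CellToNodeAverage:&lt;in&gt;&gt;&lt;out&gt;, and NormalizeRelativeField:&lt;field&gt;. A minimal sketch of that serialization, assuming exactly those token spellings (the real picurv code may differ in details):

```python
def serialize_eulerian_pipeline(tasks):
    """Turn the YAML eulerian task list into the semicolon-separated
    C-side pipeline string. Token spellings follow the mapping notes on
    this page; this is an illustrative sketch, not the picurv source."""
    tokens = []
    for t in tasks:
        if t["task"] == "q_criterion":
            tokens.append("ComputeQCriterion")
        elif t["task"] == "nodal_average":
            tokens.append(f"CellToNodeAverage:{t['input_field']}>{t['output_field']}")
        elif t["task"] == "normalize_field":
            tokens.append(f"NormalizeRelativeField:{t['field']}")
        else:
            raise ValueError(f"unknown eulerian task: {t['task']}")
    return ";".join(tokens)

pipeline = serialize_eulerian_pipeline([
    {"task": "q_criterion"},
    {"task": "nodal_average", "input_field": "P", "output_field": "P_nodal"},
    {"task": "normalize_field", "field": "P"},
])
```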
# --- Lagrangian (Particle-Based) Pipeline ---
lagrangian_pipeline:
  # A list of operations performed on particle data.
  # Computes the specific kinetic energy (0.5 * |v|^2) for each particle.
  # Creates a new particle field named "specific_ke".
  # Invokes the C function: `ComputeSpecificKE()`.
  - task: specific_ke
    input_field: 'velocity'
    output_field: 'specific_ke'
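The specific_ke task evaluates 0.5 * |v|^2 per particle. As a sketch of the quantity itself (illustrative; the actual computation is done by the C function ComputeSpecificKE):

```python
def specific_kinetic_energy(velocity):
    """Specific kinetic energy 0.5 * |v|^2 for one particle velocity vector."""
    return 0.5 * sum(c * c for c in velocity)

# Example: two particle velocities -> their per-particle specific KE.
ske = [specific_kinetic_energy(v) for v in [(1.0, 0.0, 0.0), (2.0, 2.0, 1.0)]]
```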
# --- Global Particle Statistics Pipeline ---
# Optional reduction kernels that append CSV diagnostics per timestep.
# NOTE: Requires particle data in solver outputs (np > 0 at solve time).
statistics_pipeline:
  # Optional base name for CSV outputs (default: "Stats").
  output_prefix: "Stats"
  tasks:
    # Mean-square displacement based on the simCtx->psrc_x/y/z reference point.
    - task: msd
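The msd task reduces particle positions to a mean-square displacement about the simCtx->psrc_x/y/z reference point. A sketch of the quantity being reported (illustrative; not the C implementation in ComputeParticleMSD):

```python
def mean_square_displacement(positions, ref):
    """Mean over particles of |x_i - x_ref|^2, where x_ref is the
    fixed reference point (psrc_x/y/z in the C-side context)."""
    return sum(
        sum((xi - xr) ** 2 for xi, xr in zip(p, ref)) for p in positions
    ) / len(positions)

# Two particles, reference at the origin.
msd = mean_square_displacement([(1.0, 0.0, 0.0), (0.0, 3.0, 0.0)],
                               (0.0, 0.0, 0.0))
```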
# ==============================================================================
# SECTION 4: OUTPUT CONFIGURATION
# ------------------------------------------------------------------------------
# Controls what data is written to the final VTK files.
# ==============================================================================
io:
  # (Required) The directory where output files will be saved, relative to the
  # main run directory (e.g., 'runs/my_run_id/'). The picurv script will
  # create this directory if it does not exist.
  # picurv prepends this path onto the generated 'output_prefix' and
  # 'particle_output_prefix' entries in post.run.
  output_directory: "visualization/standard_analysis"

  # (Required) The base name for the output files. The C code will append
  # the timestep and extension, e.g., "eulerian_data_00100.vts".
  # Combined with output_directory to form 'output_prefix' in post.run.
  output_filename_prefix: "eulerian_data"

  # (Required) List of Eulerian fields to save to the output .vts files.
  # You can include original fields ('P', 'Ucat') and newly computed fields
  # from the pipeline ('Qcrit', 'P_nodal', 'Ucat_nodal').
  # Maps to 'output_fields_instantaneous' in post.run.
  eulerian_fields:
    - 'Ucat_nodal'
    - 'Qcrit'
    - 'P_nodal'

  # (Optional) Reserved for future averaged-field writer support.
  # Maps to 'output_fields_averaged' in post.run.
  eulerian_fields_averaged: []

  # (Optional) A master switch for all particle-related output. Defaults to false.
  # Maps to 'output_particles' in post.run.
  output_particles: true

  # (Optional) List of particle fields to save to the output .vtp files.
  # Can include original fields and newly computed fields like 'specific_ke'.
  # Standard swarm fields: 'position', 'velocity', 'pid', 'CellID', 'weight'.
  # Maps to 'particle_fields_instantaneous' in post.run.
  particle_fields:
    - 'velocity'
    - 'pid'
    - 'specific_ke'

  # (Optional) The frequency for subsampling particles for output.
  # A value of 1 saves every particle; a value of 10 saves every 10th particle.
  # Defaults to 1 if not specified.
  # Maps to 'particle_output_freq' in post.run.
  particle_subsampling_frequency: 1

  # (Optional) Input file extensions used by the C post reader (without a
  # leading dot). Maps to 'eulerianExt' and 'particleExt' in post.run.
  input_extensions:
    eulerian: "dat"
    particle: "dat"
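picurv joins output_directory with the filename prefixes to form the post.run output prefixes, and copies the remaining io keys through under their post.run names. A sketch of that assembly under the mappings documented on this page (particle_filename_prefix is a sibling key from the mapping notes, not shown in the template above; the function name is illustrative):

```python
import posixpath

def io_to_postrun(io):
    """Assemble post.run output keys from the io section.

    Illustrative mapping only; path joining mimics the documented
    'output_directory + prefix' composition."""
    out = {
        "output_prefix": posixpath.join(io["output_directory"],
                                        io["output_filename_prefix"]),
        "output_particles": io.get("output_particles", False),
        "particle_output_freq": io.get("particle_subsampling_frequency", 1),
        "output_fields_instantaneous": io.get("eulerian_fields", []),
        "particle_fields_instantaneous": io.get("particle_fields", []),
    }
    if "particle_filename_prefix" in io:
        out["particle_output_prefix"] = posixpath.join(
            io["output_directory"], io["particle_filename_prefix"])
    return out

keys = io_to_postrun({
    "output_directory": "visualization/standard_analysis",
    "output_filename_prefix": "eulerian_data",
    "output_particles": True,
})
```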
post.yml defines the post-processing input range, processing pipelines, statistics tasks, and VTK output selection.

Mappings in generated post.run:
- start_step -> startTime
- end_step -> endTime
- step_interval -> timeStep
- source_data.directory -> source_directory (<solver_output_dir> is a supported placeholder resolved by picurv)

Operational semantics when launched through picurv:
- Treat start_step and end_step as the full logical analysis window you want the recipe to represent.
- `picurv run --post-process --continue --run-dir ... --post post.yml` computes an internal effective start step for the same recipe lineage, so you do not need to keep editing start_step during batch catch-up.
- If you change step_interval, pipeline tasks, output prefixes, or selected fields, PICurv starts from the configured start_step instead of inheriting completion from the previous recipe.
- end_step: -1 still means "up to the last available step", but PICurv now caps each invocation to the highest fully available contiguous source prefix before generating post.run.

Eulerian tasks (eulerian_pipeline):
- q_criterion -> ComputeQCriterion
- nodal_average -> CellToNodeAverage:<in>><out>
- normalize_field -> NormalizeRelativeField:<field>

Global operation:
- global_operations.dimensionalize: true prepends DimensionalizeAllLoadedFields

Lagrangian tasks (lagrangian_pipeline):
- specific_ke -> ComputeSpecificKE:<in>><out>

statistics_pipeline supports a tasks list and an optional output_prefix. The currently supported statistics task is msd, which picurv serializes as the ComputeMSD pipeline token; the C dispatcher consumes this token and calls ComputeParticleMSD.

Statistics mappings:
- statistics_pipeline.output_prefix -> statistics_output_prefix

Output mappings:
- output_directory + output_filename_prefix -> output_prefix
- output_directory + particle_filename_prefix -> particle_output_prefix
- output_particles -> output_particles
- particle_subsampling_frequency -> particle_output_freq
- eulerian_fields -> output_fields_instantaneous
- eulerian_fields_averaged -> output_fields_averaged (reserved)
- particle_fields -> particle_fields_instantaneous
- input_extensions.eulerian -> eulerianExt
- input_extensions.particle -> particleExt

The default post input extension remains dat unless overridden. statistics_pipeline.output_prefix is independent of io.output_directory; bare basenames default under <monitor output>/statistics/, while explicit relative or absolute paths are preserved. When the same timestep is post-processed again, PICurv rewrites same-step VTK/VTP outputs and same-step statistics rows, so the final CSV still contains one row per step.
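The one-row-per-step guarantee for statistics CSVs can be pictured as a keep-latest-by-step rewrite: re-processing a step replaces its earlier row instead of appending a duplicate. A sketch of that behavior (illustrative; not the actual PICurv CSV code):

```python
def rewrite_stats_rows(rows):
    """Keep only the most recent row per timestep, so the final CSV
    contains one row per step even after repeated post-processing."""
    latest = {}
    for step, payload in rows:  # later rows overwrite earlier ones
        latest[step] = payload
    return sorted(latest.items())

# Step 10 was post-processed twice; only its latest row survives.
rows = rewrite_stats_rows([(0, 1.0), (10, 2.0), (10, 2.5)])
```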
Proceed to User How-To Guides for goal-oriented recipes.
This page is the configuration reference for the post-processor YAML within the PICurv workflow. The most reliable way to read it is to map each option to a concrete run decision: what is configured, which runtime stage it influences, and which diagnostics confirm the expected behavior.
Treat the page as both a conceptual reference and a runbook. When debugging, pair it with the monitor output, the generated runtime artifacts under runs/<run_id>/config, and the associated solver and post-processor logs, so that numerical intent and implementation behavior stay aligned.