PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
Configuration Reference: Postprocessor YAML

For the full commented template, see:

# ==============================================================================
#                 PICurv Master Post-Processing Profile
# ==============================================================================
#
# This file is the definitive template for post-processing tasks, derived
# directly from the capabilities of the C post-processor executable. All options
# here map to specific, implemented C functions and configuration keys.
#
# Use this as a guide to create specific analysis profiles (e.g., standard_analysis.yml).
#
# ==============================================================================
# SECTION 1: SOURCE DATA & RUN CONTROL
# ------------------------------------------------------------------------------
# Defines where to find the input data and which timesteps to process.
# This section directly configures the main processing loop in the C executable.
# ==============================================================================
source_data:
  # (Required) The directory containing the input solver data files (.dat).
  #
  # SPECIAL TOKEN: "<solver_output_dir>"
  #   The picurv script replaces this with the solver's output directory.
  #   This is the standard and recommended setting.
  directory: "<solver_output_dir>"

run_control:
  # (Required) The first timestep in the logical analysis window.
  # Maps to 'startTime' in the generated C-side post.run recipe.
  # Keep this at the true beginning of the recipe window you care about.
  # When you launch `picurv run --post-process --continue`, PICurv may
  # internally resume at a later effective step for the same recipe without
  # requiring you to edit this value.
  start_step: 0

  # (Required) The last timestep in the logical analysis window.
  # A value of -1 means "process up to the last available step".
  # On live solver runs, PICurv caps each launch to the highest fully
  # available contiguous source step before generating `post.run`.
  # Maps to 'endTime' in the generated C-side post.run recipe.
  end_step: -1

  # (Optional) The interval for processing steps (e.g., 10 means process
  # steps 0, 10, 20...). Defaults to 1.
  # Maps to 'timeStep' in the generated C-side post.run recipe.
  step_interval: 1

# ==============================================================================
# SECTION 2: GLOBAL OPERATIONS
# ------------------------------------------------------------------------------
# These are operations that apply to the entire dataset at the beginning of each
# timestep's processing.
# ==============================================================================
global_operations:
  # (Optional) If true, converts all loaded fields (grid and data) to physical,
  # dimensional units before any other processing.
  # Corresponds to adding `DimensionalizeAllLoadedFields` to the C-side pipeline.
  dimensionalize: true

# ==============================================================================
# SECTION 3: PROCESSING PIPELINE
# ------------------------------------------------------------------------------
# Defines a sequence of operations to perform on the loaded data. The picurv
# script will serialize these lists into semicolon-separated strings for the
# C executable's 'process_pipeline' and 'particle_pipeline' keys.
# ==============================================================================
eulerian_pipeline:

  # Computes the Q-Criterion for vortex identification.
  # Creates a new field named "Qcrit".
  # Invokes the C function: `ComputeQCriterion()`.
  - task: q_criterion

  # (Optional) Performs cell-to-node averaging on a specified field.
  # Each nodal_average task reads one input field (e.g., 'P') and writes a
  # new field under the given output name (e.g., 'P_nodal').
  # Invokes the C function: `ComputeNodalAverage(user, "P", "P_nodal")`.
  - task: nodal_average
    input_field:  'P'
    output_field: 'P_nodal'

  - task: nodal_average
    input_field:  'Ucat'
    output_field: 'Ucat_nodal'

  # (Optional) Normalizes a relative field (like pressure) by subtracting
  # the value at a specific grid index.
  # Invokes the C function: `NormalizeRelativeField()`.
  - task: normalize_field
    # The field to normalize. Currently, only "P" is supported by the C code.
    field: 'P'
    # The [i, j, k] grid indices of the reference point. The Python script will
    # generate 'reference_ip', 'reference_jp', 'reference_kp' keys in post.run.
    reference_point: [1, 1, 1]

# --- Lagrangian (Particle-Based) Pipeline ---
lagrangian_pipeline:
  # A list of operations performed on particle data.
  # Computes the specific kinetic energy (0.5 * |v|^2) for each particle.
  # Creates a new particle field under the configured output name (here 'ske').
  # Invokes the C function: `ComputeSpecificKE()`.
  - task: specific_ke
    input_field: 'velocity'
    output_field: 'ske'

# --- Global Particle Statistics Pipeline ---
# Optional reduction kernels that append CSV diagnostics per timestep.
# NOTE: Requires particle data in solver outputs (np > 0 at solve time).
statistics_pipeline:
  # Optional base name for CSV outputs (default: "Stats")
  output_prefix: "Stats"
  tasks:
    # Mean-square displacement based on simCtx->psrc_x/y/z reference point.
    - task: msd

# ==============================================================================
# SECTION 4: OUTPUT CONFIGURATION
# ------------------------------------------------------------------------------
# Controls what data is written to the final VTK files.
# ==============================================================================
io:
  # (Required) The directory where output files will be saved, relative to the
  # main run directory (e.g., 'runs/my_run_id/'). The picurv script will
  # create this directory if it does not exist.
  # picurv prepends this path onto the generated 'output_prefix' and
  # 'particle_output_prefix' entries in post.run.
  output_directory: "visualization/standard_analysis"

  # (Required) The base name for the output files. The C code will append
  # the timestep and extension, e.g., "eulerian_data_00100.vts".
  # Combined with output_directory to form 'output_prefix' in post.run.
  output_filename_prefix: "eulerian_data"

  # (Required) List of Eulerian fields to save to the output .vts files.
  # You can include original fields ('P', 'Ucat') and newly computed fields
  # from the pipeline ('Qcrit', 'P_nodal', 'Ucat_nodal').
  # Maps to 'output_fields_instantaneous' in post.run.
  eulerian_fields:
    - 'Ucat_nodal'
    - 'Qcrit'
    - 'P_nodal'

  # (Optional) Reserved for future averaged-field writer support.
  # Maps to 'output_fields_averaged' in post.run.
  eulerian_fields_averaged: []

  # (Optional) A master switch for all particle-related output. Defaults to false.
  # Maps to 'output_particles' in post.run.
  output_particles: true

  # (Optional) List of particle fields to save to the output .vtp files.
  # Can include original fields and newly computed fields like 'specific_ke'.
  # Standard Swarm Fields: 'position', 'velocity', 'pid', 'CellID', 'weight'
  # Maps to 'particle_fields_instantaneous' in post.run.
  particle_fields:
    - 'velocity'
    - 'pid'
    - 'specific_ke'

  # (Optional) The frequency for subsampling particles for output.
  # A value of 1 saves every particle. A value of 10 saves every 10th particle.
  # Defaults to 1 if not specified.
  # Maps to 'particle_output_freq' in post.run.
  particle_subsampling_frequency: 1

  # (Optional) Input file extensions used by the C post reader (without dot preferred).
  # Maps to 'eulerianExt' and 'particleExt' in post.run.
  input_extensions:
    eulerian: "dat"
    particle: "dat"

post.yml defines postprocessing input range, processing pipelines, statistics tasks, and VTK output selection.

1. File Structure

run_control:
  start_step: 100
  end_step: 1000
  step_interval: 100

source_data:
  directory: "<solver_output_dir>"

global_operations:
  dimensionalize: true

eulerian_pipeline:
  - task: nodal_average
    input_field: Ucat
    output_field: Ucat_nodal
  - task: q_criterion

lagrangian_pipeline:
  - task: specific_ke
    input_field: velocity
    output_field: SpecificKE

statistics_pipeline:
  output_prefix: "Stats"
  tasks:
    - task: msd

io:
  output_directory: "viz"
  output_filename_prefix: "Field"
  particle_filename_prefix: "Particle"
  output_particles: true
  eulerian_fields: [Ucat_nodal, Qcrit]
  particle_fields: [velocity, SpecificKE]

2. run_control

Mappings in generated post.run:

  • start_step -> startTime
  • end_step -> endTime
  • step_interval -> timeStep

Operational semantics when launched through picurv:

  • Keep start_step and end_step set to the full logical analysis window you want the recipe to represent.
  • `picurv run --post-process --continue --run-dir ... --post post.yml` computes an internal effective start step for the same recipe lineage, so you do not need to keep editing start_step during batch catch-up.
  • If the recipe changes in a way that affects generated outputs (e.g., step_interval, pipeline tasks, output prefixes, or selected fields), PICurv starts from the configured start_step instead of inheriting completion from the previous recipe.
  • On live solver runs, end_step: -1 still means "up to the last available step", but PICurv caps each invocation to the highest fully available contiguous source prefix before generating post.run.
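The window semantics above can be pictured with a small sketch. This is an illustrative helper, not part of picurv; it assumes (as the template's comments suggest) that start_step and end_step are inclusive and that -1 resolves against the last available step:

```python
def processed_steps(start_step, end_step, step_interval, last_available):
    """Illustrative only: expand run_control into the list of steps the
    post-processor would visit. end_step == -1 means "up to the last
    available step", resolved here against last_available."""
    end = last_available if end_step == -1 else end_step
    return list(range(start_step, end + 1, step_interval))

# With start_step=0, end_step=-1, step_interval=10 and 35 steps available,
# the visited steps are 0, 10, 20, 30.
```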

3. source_data

  • source_data.directory -> source_directory
  • <solver_output_dir> is a supported placeholder resolved by picurv.
  • For live post-processing while the solver is still running, PICurv treats a timestep as source-available only when the full required source set for the current recipe exists: for Eulerian-only recipes, the mandatory field files; for particle/statistics recipes, also the particle position file for that timestep.
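The "highest fully available contiguous source step" cap can be sketched as follows. This is a hypothetical illustration; the real check inspects the per-recipe required file set on disk, whereas here availability is abstracted into a precomputed set of steps:

```python
def highest_contiguous_step(available, start_step, step_interval):
    """Illustrative only: walk forward from start_step in step_interval
    increments and return the last step before the first gap whose full
    source set is present. 'available' is the set of steps for which all
    required files exist. Returns None if even start_step is missing."""
    last = None
    step = start_step
    while step in available:
        last = step
        step += step_interval
    return last

# A gap at step 30 caps the window at step 20 even if step 40 already exists.
```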

4. Processing Pipelines

Eulerian tasks (eulerian_pipeline):

  • q_criterion -> ComputeQCriterion
  • nodal_average -> CellToNodeAverage:<in>><out>
  • normalize_field -> NormalizeRelativeField:<field>

Global operation:

  • global_operations.dimensionalize: true prepends DimensionalizeAllLoadedFields

Lagrangian tasks (lagrangian_pipeline):

  • specific_ke -> ComputeSpecificKE:<in>><out>
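Putting the task-to-token mappings together, the serialization into the semicolon-separated pipeline string can be sketched like this. This is an illustrative sketch of the mapping rules above, not the actual picurv implementation:

```python
def serialize_eulerian_pipeline(tasks, dimensionalize=False):
    """Illustrative only: render global_operations and eulerian_pipeline
    entries into the semicolon-separated token string written to post.run."""
    tokens = []
    if dimensionalize:
        tokens.append("DimensionalizeAllLoadedFields")
    for t in tasks:
        if t["task"] == "q_criterion":
            tokens.append("ComputeQCriterion")
        elif t["task"] == "nodal_average":
            tokens.append(f"CellToNodeAverage:{t['input_field']}>{t['output_field']}")
        elif t["task"] == "normalize_field":
            tokens.append(f"NormalizeRelativeField:{t['field']}")
        else:
            raise ValueError(f"unknown eulerian task: {t['task']}")
    return ";".join(tokens)

# serialize_eulerian_pipeline(
#     [{"task": "nodal_average", "input_field": "Ucat", "output_field": "Ucat_nodal"},
#      {"task": "q_criterion"}],
#     dimensionalize=True)
# -> "DimensionalizeAllLoadedFields;CellToNodeAverage:Ucat>Ucat_nodal;ComputeQCriterion"
```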

5. Statistics Pipeline

statistics_pipeline supports either:

  • list form, or
  • mapping with tasks and optional output_prefix

The only currently supported statistics task is msd, which picurv serializes as the ComputeMSD pipeline token; the C dispatcher consumes this token and calls ComputeParticleMSD.

Mappings:

  • tasks -> statistics_pipeline
  • output_prefix -> statistics_output_prefix
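Accepting both the list form and the mapping form can be sketched as a small normalization step. This is an illustrative helper (the "Stats" default comes from the template above), not the actual implementation:

```python
def normalize_statistics_pipeline(section):
    """Illustrative only: accept either the bare list form or the mapping
    form with 'tasks' and optional 'output_prefix', returning a uniform
    (task_list, output_prefix) pair. "Stats" is the documented default."""
    if section is None:
        return [], "Stats"
    if isinstance(section, list):
        return section, "Stats"
    return section.get("tasks", []), section.get("output_prefix", "Stats")
```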

6. io

Mappings:

  • output_directory + output_filename_prefix -> output_prefix
  • output_directory + particle_filename_prefix -> particle_output_prefix
  • output_particles -> output_particles
  • particle_subsampling_frequency -> particle_output_freq
  • eulerian_fields -> output_fields_instantaneous
  • eulerian_fields_averaged -> output_fields_averaged (reserved)
  • particle_fields -> particle_fields_instantaneous
  • input_extensions.eulerian -> eulerianExt
  • input_extensions.particle -> particleExt
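The first two mappings are simple path joins, which can be sketched as follows. Illustrative only; the real picurv script may normalize paths differently:

```python
import os

def compose_output_prefixes(io_cfg):
    """Illustrative only: join output_directory with the filename prefixes
    to form the 'output_prefix' and 'particle_output_prefix' post.run keys.
    The C code appends the timestep and extension to each prefix."""
    d = io_cfg["output_directory"]
    return {
        "output_prefix": os.path.join(d, io_cfg["output_filename_prefix"]),
        "particle_output_prefix": os.path.join(d, io_cfg["particle_filename_prefix"]),
    }
```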

Notes:

  • The default post input extension remains dat unless overridden.
  • statistics_pipeline.output_prefix is independent of io.output_directory: bare basenames default under <monitor output>/statistics/, while explicit relative or absolute paths are preserved.
  • When the same timestep is post-processed again, PICurv rewrites same-step VTK/VTP outputs and same-step statistics rows, so the final CSV still contains one row per step.
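The statistics-prefix resolution rule (bare basename vs. explicit path) can be sketched like this. A hypothetical helper under stated assumptions: a "bare basename" is taken to mean a prefix containing no path separator, and monitor_dir stands in for the monitor output directory:

```python
import os

def resolve_statistics_prefix(output_prefix, monitor_dir):
    """Illustrative only: a bare basename (no path separator) defaults
    under <monitor output>/statistics/; any prefix containing a path
    separator, or an absolute path, is preserved as-is."""
    if os.path.isabs(output_prefix) or os.sep in output_prefix:
        return output_prefix
    return os.path.join(monitor_dir, "statistics", output_prefix)
```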

7. Next Steps

Proceed to User How-To Guides for goal-oriented recipes.

CFD Reader Guidance and Practical Use

This page documents the postprocessor YAML configuration within the PICurv workflow. For CFD users, the most reliable reading strategy is to map each setting to a concrete run decision: what is configured, which runtime stage it influences, and which diagnostics should confirm expected behavior.

Treat this page as both a conceptual reference and a runbook. If you are debugging, pair the method/procedure described here with monitor output, generated runtime artifacts under runs/<run_id>/config, and the associated solver/post logs so numerical intent and implementation behavior stay aligned.

What To Extract Before Changing A Case

  • Identify which YAML role or runtime stage this page governs.
  • List the primary control knobs (tolerances, cadence, paths, selectors, or mode flags).
  • Record expected success indicators (convergence trend, artifact presence, or stable derived metrics).
  • Record failure signals that require rollback or parameter isolation.

Practical CFD Troubleshooting Pattern

  1. Reproduce the issue on a tiny case or narrow timestep window.
  2. Change one control at a time and keep all other roles/configs fixed.
  3. Validate generated artifacts and logs after each change before scaling up.
  4. If behavior remains inconsistent, compare against a known-good baseline example and re-check grid/BC consistency.