PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
#include "io.h"
#include "variables.h"
#include "logging.h"
#include "ParticleSwarm.h"
#include "interpolation.h"
#include "grid.h"
#include "setup.h"
#include "Metric.h"
#include "postprocessing_kernels.h"
#include "vtk_io.h"
#include "particle_statistics.h"
Functions

PetscErrorCode SetupPostProcessSwarm (UserCtx *user, PostProcessParams *pps)
    Creates a new, dedicated DMSwarm for post-processing tasks.

PetscErrorCode WriteEulerianFile (UserCtx *user, PostProcessParams *pps, PetscInt ti)
    Orchestrates the writing of a combined, multi-field VTK file for a single time step.

PetscErrorCode EulerianDataProcessingPipeline (UserCtx *user, PostProcessParams *pps)
    Parses the processing pipeline string and executes the requested kernels.

PetscErrorCode ParticleDataProcessingPipeline (UserCtx *user, PostProcessParams *pps)
    Parses and executes the particle pipeline using a robust two-pass approach.

PetscErrorCode WriteParticleFile (UserCtx *user, PostProcessParams *pps, PetscInt ti)
    Writes particle data to a VTP file using the Prepare-Write-Cleanup pattern.

PetscErrorCode GlobalStatisticsPipeline (UserCtx *user, PostProcessParams *pps, PetscInt ti)
    Executes the global statistics pipeline, computing aggregate reductions over all particles.
PetscErrorCode SetupPostProcessSwarm (UserCtx *user, PostProcessParams *pps)
Creates a new, dedicated DMSwarm for post-processing tasks.

This function is called once at startup. It creates an empty DMSwarm, associates it with the same grid DM as the primary swarm, and registers all required fields.

Parameters:
    user  The UserCtx where user->post_swarm will be created.
    pps   The PostProcessParams containing the particle_pipeline string for field registration.

Definition at line 23 of file postprocessor.c.
PetscErrorCode WriteEulerianFile (UserCtx *user, PostProcessParams *pps, PetscInt ti)
Orchestrates the writing of a combined, multi-field VTK file for a single time step.

This function is the primary driver for generating output. It exports instantaneous Eulerian field data (such as pressure, velocity, Q-criterion, and particle concentration) to a VTK structured-grid file for the specified time step. The output includes subsampled interior grid coordinates and the point-data fields specified in the PostProcessParams configuration.

Only fields listed in pps->output_fields_instantaneous are written. If the field list is empty, the function returns immediately without creating a file.

Parameters:
    user  The UserCtx for the finest grid level, containing the simulation data and field vectors to be output.
    pps   The PostProcessParams struct containing the output configuration (field list, prefix).
    ti    The time index/step number used for file naming.

Definition at line 217 of file postprocessor.c.
PetscErrorCode EulerianDataProcessingPipeline (UserCtx *user, PostProcessParams *pps)
Parses the processing pipeline string and executes the requested kernels.

This function uses a general-purpose parser to handle a syntax of the form:

    "Keyword1:in1>out1; Keyword2:in1,in2>out2; Keyword3:arg1;"

It tokenizes the pipeline string and dispatches to the appropriate kernel function from processing_kernels.c with the specified field-name arguments.

Parameters:
    user  The UserCtx containing the data to be transformed.
    pps   The PostProcessParams struct containing the pipeline string.

Definition at line 110 of file postprocessor.c.
PetscErrorCode ParticleDataProcessingPipeline (UserCtx *user, PostProcessParams *pps)
Parses and executes the particle pipeline using a robust two-pass approach.

This function ensures correctness and efficiency by separating field registration from kernel execution:

  1. Registration (pass 1): the pipeline string is parsed to identify all new fields that will be created, and these fields are registered with the DMSwarm.
  2. Finalize: after pass 1, DMSwarmFinalizeFieldRegister is called exactly once if any new fields were added, preparing the swarm's memory layout.
  3. Execution (pass 2): the pipeline string is parsed again, and this time the actual compute kernels are executed, filling the now-valid fields.

Parameters:
    user  The UserCtx containing the DMSwarm.
    pps   The PostProcessParams struct containing the particle_pipeline string.

Definition at line 524 of file postprocessor.c.
PetscErrorCode WriteParticleFile (UserCtx *user, PostProcessParams *pps, PetscInt ti)
Writes particle data to a VTP file using the Prepare-Write-Cleanup pattern.
Definition at line 638 of file postprocessor.c.
PetscErrorCode GlobalStatisticsPipeline (UserCtx *user, PostProcessParams *pps, PetscInt ti)
Executes the global statistics pipeline, computing aggregate reductions over all particles.

Parses the semicolon-delimited pps->statistics_pipeline string and dispatches to the appropriate kernel in particle_statistics.c (e.g. ComputeParticleMSD). Each kernel performs its own MPI reductions, appends one row to its own CSV file, and logs a one-line summary via LOG_INFO. This pipeline is independent of the per-particle VTK pipeline and produces no .vtp output; the function is a no-op if the statistics_pipeline string is empty.

Parameters:
    user  The UserCtx containing the primary DMSwarm (user->swarm).
    pps   The PostProcessParams containing statistics_pipeline and statistics_output_prefix.
    ti    Current time-step index (passed through to kernels for time computation).

Definition at line 592 of file postprocessor.c.