PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
postprocessor.h
#ifndef POSTPROCESSOR_H
#define POSTPROCESSOR_H

#include "io.h"
#include "variables.h"
#include "logging.h"
#include "ParticleSwarm.h"
#include "interpolation.h"
#include "grid.h"
#include "setup.h"
#include "Metric.h"
#include "vtk_io.h"
#include "particle_statistics.h"
/* --------------------------------------------------------------------
   postprocessor.h

   This header declares the public interface of the post-processing
   executable/library: the setup routine for the dedicated
   post-processing swarm, the Eulerian and particle processing
   pipelines, the VTK file writers, and the global statistics
   pipeline. The main() entry point itself lives in postprocess.c.
*/

/**
 * @brief Creates a new, dedicated DMSwarm for post-processing tasks.
 *
 * This function is called once at startup. It creates an empty DMSwarm,
 * associates it with the same grid DM as the primary swarm, and registers
 * all the required fields.
 *
 * @param user The UserCtx where user->post_swarm will be created.
 * @param pps  The PostProcessParams containing the particle_pipeline string for field registration.
 * @return PetscErrorCode
 */
PetscErrorCode SetupPostProcessSwarm(UserCtx* user, PostProcessParams* pps);

/**
 * @brief Orchestrates the writing of a combined, multi-field VTK file for a single time step.
 *
 * This function is the primary driver for generating output. It performs these steps:
 *   1. Prepares the subsampled coordinate array required for the legacy grid format.
 *   2. Parses the user-requested list of fields from the configuration.
 *   3. For each field, prepares a corresponding subsampled data array.
 *   4. Assembles all prepared arrays into a single VTKMetaData struct.
 *   5. Calls the low-level VTK writer to generate the final .vts file.
 *   6. Frees all temporary memory allocated during the preparation phase.
 *
 * @param user The UserCtx for the finest grid level.
 * @param pps  The post-processing configuration struct.
 * @param ti   The current time step index.
 * @return PetscErrorCode
 */
PetscErrorCode WriteEulerianFile(UserCtx* user, PostProcessParams* pps, PetscInt ti);

/**
 * @brief Parses the processing pipeline string and executes the requested kernels.
 * @param user The UserCtx containing the data to be transformed.
 * @param pps  The PostProcessParams containing the pipeline string.
 * @return PetscErrorCode
 */
PetscErrorCode EulerianDataProcessingPipeline(UserCtx *user, PostProcessParams *pps);

/**
 * @brief Parses and executes the particle pipeline using a robust two-pass approach.
 *
 * This function ensures correctness and efficiency by separating field registration
 * from kernel execution:
 *
 *   PASS 1 (Registration): The pipeline string is parsed to identify all new fields
 *   that will be created. These fields are registered with the DMSwarm.
 *
 *   Finalize: After Pass 1, DMSwarmFinalizeFieldRegister is called exactly once if
 *   any new fields were added, preparing the swarm's memory layout.
 *
 *   PASS 2 (Execution): The pipeline string is parsed again, and this time the
 *   actual compute kernels are executed, filling the now-valid fields.
 *
 * @param user The UserCtx containing the DMSwarm.
 * @param pps  The PostProcessParams struct containing the particle_pipeline string.
 * @return PetscErrorCode
 */
PetscErrorCode ParticleDataProcessingPipeline(UserCtx *user, PostProcessParams *pps);

/**
 * @brief Writes particle data to a VTP file using the Prepare-Write-Cleanup pattern.
 */
PetscErrorCode WriteParticleFile(UserCtx* user, PostProcessParams* pps, PetscInt ti);

/**
 * @brief Executes the global statistics pipeline, computing aggregate reductions over all particles.
 *
 * Parses the semicolon-delimited `pps->statistics_pipeline` string and dispatches to the
 * appropriate kernel (e.g. ComputeParticleMSD). Each kernel appends one row to its own CSV
 * file and logs a summary via LOG_INFO. All MPI reductions happen inside each kernel.
 * This pipeline is independent of the per-particle VTK pipeline; it produces no .vtp output.
 *
 * @param user The UserCtx containing the primary DMSwarm (user->swarm).
 * @param pps  The PostProcessParams containing statistics_pipeline and statistics_output_prefix.
 * @param ti   Current time-step index (passed through to kernels for time computation).
 * @return PetscErrorCode
 */
PetscErrorCode GlobalStatisticsPipeline(UserCtx *user, PostProcessParams *pps, PetscInt ti);

#endif /* POSTPROCESSOR_H */