PICurv 0.1.0
A Parallel Particle-In-Cell Solver for Curvilinear LES
Code Architecture

Welcome to the Developer Portal. This document provides a high-level architectural overview of the PICurv C codebase. It is intended for developers who wish to understand, modify, or extend the solver's capabilities.

1. Program Flow: The Five Stages of main()

The entire simulation is orchestrated by the main() function in src/main.c. The program follows a clear, five-stage execution path, which provides a roadmap to the entire codebase.

High-Level Simulation Workflow
  1. **Initialize (PetscInitialize)**
    • Purpose: Sets up the fundamental parallel environment.
    • Action: Initializes MPI and the PETSc library.
  2. **Configure (CreateSimulationContext)**
    • Purpose: Gathers all simulation parameters into a single configuration object.
    • File: src/setup.c
    • Action: This monolithic function creates the master SimCtx struct and populates it by parsing the .control file generated by pic-flow. It reads every possible flag (e.g., -ren, -dt, -nblk) and sets the corresponding member in simCtx.
  3. **Setup (The Setup* Functions)**
    • Purpose: Builds everything the run needs before time-stepping: the grid hierarchy and metrics, the solvers, the boundary conditions, and the particle swarm (see the module overview in Section 3).
  4. **Execute (AdvanceSimulation)**
    • Purpose: Runs the main time-stepping loop.
    • File: src/simulation.c
    • Action: This function contains the for loop that advances the simulation from start_step to total_steps. See the Core Algorithms page for a detailed breakdown of this loop.
  5. **Finalize (PetscFinalize)**
    • Purpose: Cleans up all resources.
    • Action: Destroys PETSc objects and shuts down MPI.
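
The sketch below shows how these five stages might line up in src/main.c. It is illustrative only: the exact prototypes of CreateSimulationContext and AdvanceSimulation, and the names of the individual Setup* functions, are defined in the actual source and may differ from what is assumed here.

```c
/* Illustrative sketch of the five-stage flow in src/main.c.
 * The prototypes of CreateSimulationContext and AdvanceSimulation are
 * assumed; see src/setup.c and src/simulation.c for the real signatures. */
#include <petsc.h>
#include "variables.h"   /* SimCtx and UserCtx definitions */

int main(int argc, char **argv)
{
    SimCtx *simCtx = NULL;

    /* Stage 1: Initialize -- MPI and the PETSc library. */
    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

    /* Stage 2: Configure -- build the master SimCtx by parsing the
     * .control file (flags such as -ren, -dt, -nblk). */
    PetscCall(CreateSimulationContext(argc, argv, &simCtx));

    /* Stage 3: Setup -- the Setup* functions build the grid hierarchy,
     * solvers, boundary conditions, and the particle swarm (omitted here). */

    /* Stage 4: Execute -- the time-stepping loop from start_step to
     * total_steps. */
    PetscCall(AdvanceSimulation(simCtx));

    /* Stage 5: Finalize -- destroy PETSc objects and shut down MPI. */
    PetscCall(PetscFinalize());
    return 0;
}
```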

2. Core Data Structures

Understanding the two main context structs is essential to understanding the code. All data is passed explicitly through these structs—there are no global variables.

2.1. The Simulation Context (SimCtx)

  • Header: include/variables.h
  • Purpose: The single, master "source of truth" for the entire simulation's configuration. It is roughly equivalent to the contents of all your .yml files.
  • Content: Contains global parameters like Reynolds number (ren), time step (dt), number of blocks (block_number), pointers to top-level managers like the UserMG struct, and all physics/model flags.
  • Lifecycle: Created and populated once in CreateSimulationContext(). It is then passed down (by pointer) to almost every function in the solver.
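
As a mental model, the abridged excerpt below sketches the kind of members SimCtx carries. The field names follow the parameters listed above, but the authoritative (and much longer) definition lives in include/variables.h and may use different names and types.

```c
/* Abridged, illustrative excerpt of SimCtx; the real struct in
 * include/variables.h has many more members. */
typedef struct SimCtx {
    PetscReal ren;            /* Reynolds number         (-ren)  */
    PetscReal dt;             /* time-step size          (-dt)   */
    PetscInt  block_number;   /* number of grid blocks   (-nblk) */
    PetscInt  start_step;     /* first step of this run          */
    PetscInt  total_steps;    /* total number of time steps      */
    UserMG   *usermg;         /* top-level multigrid manager     */
    /* ... physics/model flags and all other global parameters ... */
} SimCtx;
```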

2.2. The User Context (UserCtx)

  • Header: include/variables.h
  • Purpose: Holds all data specific to one computational grid (i.e., one block at one multigrid level).
  • Content: Contains the PETSc DMDAs (da, fda), all PETSc Vecs for that grid's fields (Ucont, P, Csi, etc.), and pointers to its coarser/finer neighbors in the multigrid hierarchy.
  • Lifecycle: A 2D array of UserCtx structs is allocated in AllocateContextHierarchy: mgctx[level][block]. Each UserCtx contains a back-pointer (simCtx) to the master simulation context.
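
A matching sketch of UserCtx is shown below. The members mirror the list above; the simCtx back-pointer is what lets any grid-level routine reach the global configuration without global variables.

```c
/* Abridged, illustrative excerpt of UserCtx; see include/variables.h
 * for the authoritative definition. */
typedef struct UserCtx {
    DM   da;                  /* scalar DMDA for this grid          */
    DM   fda;                 /* vector (field) DMDA for this grid  */
    Vec  Ucont, P;            /* contravariant velocity, pressure   */
    Vec  Csi, Eta, Zet, Aj;   /* curvilinear metric terms           */
    /* ... pointers to the coarser/finer UserCtx in the MG hierarchy ... */
    SimCtx *simCtx;           /* back-pointer to the master context */
} UserCtx;

/* Once AllocateContextHierarchy has built mgctx[level][block], any routine
 * handed a UserCtx can reach global parameters through the back-pointer,
 * e.g. user->simCtx->dt or user->simCtx->ren. */
```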

3. Code Modules Overview

The codebase is organized into logical modules, typically as pairs of .c and .h files.

  • **Main Orchestration (main.c, simulation.c)**
    • Contains the high-level program flow and the main time-stepping loop.
  • **Setup & Configuration (setup.c, io.c)**
    • Handles command-line parsing, SimCtx creation, and orchestrates the setup of all other modules.
  • **Grid & Metrics (grid.c, Metric.c)**
    • Responsible for creating the DMDAs, assigning coordinates, and computing all curvilinear metric terms (Csi, Eta, Zet, Aj, etc.).
  • **Numerical Solvers (solvers.c, implicitsolvers.c, poisson.c, rhs.c)**
    • Contains the implementations of the momentum equation solvers (explicit/implicit), the pressure-Poisson solver, and the functions that compute the RHS terms (convection, diffusion, pressure gradient).
  • **Boundary Conditions (Boundaries.c, BC_Handlers.c)**
    • Implements the modern, modular boundary condition system. Boundaries.c contains the parser and high-level logic, while BC_Handlers.c contains the specific handler implementations (e.g., constant_velocity, noslip).
  • **Lagrangian Particles (ParticleSwarm.c, ParticleMotion.c, walkingsearch.c)**
    • All code related to the DMSwarm. This includes particle initialization, advection (UpdateAllParticlePositions), and the robust "walking search" algorithm for locating particles on the grid (LocateAllParticlesInGrid_TEST).
  • **Grid-Particle Communication (interpolation.c)**
    • Contains the kernels for transferring data between the grid and the particles:
      • Grid -> Particle: InterpolateAllFieldsToSwarm
      • Particle -> Grid: ScatterAllParticleFieldsToEulerFields
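
For orientation, the fragment below strings the particle kernels named above into one plausible per-step sequence inside a hypothetical wrapper (ParticleSubStep); the wrapper and its argument list are assumptions, and the authoritative ordering is the main loop in src/simulation.c, covered on the Core Algorithms page.

```c
/* One plausible ordering of the particle kernels within a time step.
 * ParticleSubStep and its argument list are illustrative; the real call
 * sites are in src/simulation.c. */
static PetscErrorCode ParticleSubStep(UserCtx *user)
{
    PetscFunctionBeginUser;
    PetscCall(LocateAllParticlesInGrid_TEST(user));          /* walking search    */
    PetscCall(InterpolateAllFieldsToSwarm(user));            /* grid -> particle  */
    PetscCall(UpdateAllParticlePositions(user));             /* advect the swarm  */
    PetscCall(ScatterAllParticleFieldsToEulerFields(user));  /* particle -> grid  */
    PetscFunctionReturn(PETSC_SUCCESS);
}
```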

4. Next Steps

This page provides the "map" to the codebase. To understand how these components work together during a time step, proceed to the next page.

  • Next: Dive into the 14_Core_Algorithms page for a detailed breakdown of the main simulation loop.
  • For function-level details: Consult the API Reference.