Abigail Berardi

Linked BS/PhD Student
Computer Engineering:
Modeling and Simulation Engineering

Structured LLM Prompting Framework for Simulation Software Development – GPE-A

  • Tools/Skills: Prompt Engineering, Software Development, Data Analysis
  • Date: 2025-02-11

Overview

As part of an ongoing research initiative at Old Dominion University, I have discovered, and am now experimenting with, the Goal–Performance–Exclusion Architecture (GPE-A), a structured prompt engineering framework designed to guide Large Language Models (LLMs) toward producing reliable, performant, and correct simulation software.
This work explores how structured, semantically precise prompts can constrain generative models to produce outputs that meet quantitative engineering requirements, treating the LLM as a black-box system that can be guided toward a desired outcome through its inputs alone.

Discovery Process

This framework emerged through iterative exchanges with an LLM while I was devising a method for generating simulation code that could meet a known high-performance baseline. Early experiments with role-based prompting, in which the LLM was assigned developer-like roles such as "software engineer" or "test engineer", consistently failed to achieve the desired performance metrics. Continued experimentation revealed that prompt structure, not role framing, was the key driver of code quality. The resulting prompt structure, termed GPE-A, is a concise, ordered framework that encodes goals, performance constraints, and exclusions directly into specific slots in the prompt.
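To make the slot-based structure concrete, here is a minimal illustrative sketch of how a GPE-A-style prompt might be assembled. The slot names, template wording, and example values are my own hypothetical illustrations, not the exact prompts used in the research.

```python
# Hypothetical sketch of a GPE-A-style prompt builder. The template text
# and slot contents are illustrative assumptions, not the study's prompts.

GPE_A_TEMPLATE = """\
GOAL: {goal}
PERFORMANCE: {performance}
EXCLUSIONS: {exclusions}
"""

def build_gpe_a_prompt(goal, performance_constraints, exclusions):
    """Fill the ordered GPE-A slots: the goal first, then quantitative
    performance constraints, then explicit exclusions."""
    return GPE_A_TEMPLATE.format(
        goal=goal,
        performance="; ".join(performance_constraints),
        exclusions="; ".join(exclusions),
    )

prompt = build_gpe_a_prompt(
    goal="Implement heapsort in C for arrays of 10^7 doubles.",
    performance_constraints=[
        "match or beat a reference qsort baseline on wall-clock time",
        "use O(1) auxiliary memory",
    ],
    exclusions=[
        "no recursion",
        "no third-party libraries",
    ],
)
print(prompt)
```

The point of the fixed slot order is that every run presents the model with the same structure, so only the slot contents vary between experiments.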

Current Work

Using GPE-A, I have generated and benchmarked heapsort, classical and quantum random number generators (RNGs), and fourth-order Runge–Kutta (RK4) integrators against established C, C++, and Q# implementations.
Early results are promising: GPE-A-generated implementations often meet or exceed the standard baselines on applicable metrics such as runtime performance. Our ongoing work focuses on expanding testing to a broader range of algorithms central to simulation software, across multiple LLMs and programming languages.
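As a rough illustration of the benchmarking step, the sketch below times a candidate sort against a built-in baseline using median wall-clock time. It is a simplified stand-in: the actual study benchmarks against established C, C++, and Q# implementations, and this `heapsort` is my own placeholder, not an LLM-generated artifact from the research.

```python
import random
import statistics
import time

def heapsort(data):
    """Iterative max-heap heapsort; a placeholder candidate implementation."""
    a = list(data)
    n = len(a)

    def sift_down(start, end):
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1  # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    # Build the heap, then repeatedly swap the max to the end and restore.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a

def benchmark(fn, data, repeats=5):
    """Median wall-clock time of fn(data) over several repeats."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(data)
        times.append(time.perf_counter() - t0)
    return statistics.median(times)

data = [random.random() for _ in range(10_000)]
print(f"candidate: {benchmark(heapsort, data):.4f}s, "
      f"baseline: {benchmark(sorted, data):.4f}s")
```

Taking the median over repeated runs is one simple way to reduce timing noise; the study's actual measurement methodology may differ.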

Future Vision

Through this research, we aim to establish a foundation for AI-supported software engineering in simulation, where LLMs serve as adaptable, metrics-driven code synthesis tools. We are working to determine whether GPE-A can be extended into a scalable, model-agnostic prompting architecture that bridges classical and quantum computational domains while preserving performance, correctness, and reproducibility. The long-term goal is to define an LLM-driven, metrics-constrained software development pipeline that accelerates the development of simulation software.