arXiv

Reporting LLM Prompting in Automated Software Engineering: A Guideline Based on Current Practices and Expectations

Alexander Korn, Lea Zaruchas, Chetan Arora, Andreas Metzger, Sven Smolka, Fanyu Wang, Andreas Vogelsang

Abstract

Large Language Models (LLMs), particularly decoder-only generative models such as GPT, are increasingly used to automate Software Engineering (SE) tasks. These models are primarily guided through natural language prompts, making prompt engineering a critical factor in system performance and behavior. Despite their growing role in SE research, prompt-related decisions are rarely documented in a systematic or transparent manner, hindering reproducibility and comparability across studies. To address this gap, we conducted a two-phase empirical study. First, we analyzed nearly 300 papers published at the top-3 SE conferences since 2022 to assess how prompt design, testing, and optimization are currently reported. Second, we surveyed 105 program committee members from these conferences to capture their expectations for prompt reporting in LLM-driven research. Based on the findings, we derived a structured guideline that distinguishes essential, desirable, and exceptional reporting elements. Our results reveal significant misalignment between current practices and reviewer expectations, particularly regarding version disclosure, prompt justification, and threats to validity. We present our guideline as a step toward improving transparency, reproducibility, and methodological rigor in LLM-based SE research.

Resources

Guidelines

Essential
Authors must name the LLMs used (e.g., GPT-4, Llama 3, Claude Opus 4).
Authors must state the precise version of each LLM used (e.g., GPT-4 2024-08-06).
Authors must provide the exact prompts used, word for word. They may shorten long prompts using templates.
Authors must describe the prompts used and how they are structured.
Authors must justify why a specific prompt structure or phrasing was chosen.
Authors must mention all prompt engineering techniques used (e.g., few-shot, chain-of-thought).
Authors must discuss their use of prompts as part of threats to validity.
Desirable
Authors should use different LLMs and compare the results.
Authors should report how they refined/iterated the prompts to improve performance.
Authors should test multiple prompt variations and report the results.
Exceptional
Authors may apply automated prompt tuning techniques.
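To illustrate how several of the essential elements can be made directly recoverable from an artifact, the following minimal Python sketch pins the model version, records the prompting technique, and keeps the verbatim prompt template as a single constant with one explicit variable slot. All names, the template text, and the model string are illustrative assumptions, not taken from the paper:

```python
# Hypothetical example of reportable prompt configuration: the exact
# template, the pinned model version, and the technique are explicit
# constants, so the exact prompt sent to the LLM can be reconstructed.

MODEL = "gpt-4-2024-08-06"   # precise, dated model version (guideline: version disclosure)
TECHNIQUE = "few-shot"       # prompt engineering technique used (guideline: name all techniques)

# Verbatim prompt template; {code} is the only variable slot.
PROMPT_TEMPLATE = """You are a code reviewer.

Example:
Input: def add(a, b): return a - b
Output: Bug: subtraction used instead of addition.

Input: {code}
Output:"""

def build_prompt(code: str) -> str:
    """Fill the template so the exact prompt is recoverable word for word."""
    return PROMPT_TEMPLATE.format(code=code)

if __name__ == "__main__":
    print(build_prompt("def mul(a, b): return a + b"))
```

Keeping the template as a constant (rather than assembling prompt strings inline) is one way to satisfy the word-for-word reporting requirement while still shortening the paper's presentation to a template with named placeholders.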

Citation

@inproceedings{kornLLMREI2025,
  title = {{LLMREI}: {Automating Requirements Elicitation Interviews} with {LLMs}},
  author = {Korn, Alexander and Gorsch, Samuel and Vogelsang, Andreas},
  booktitle = {2025 IEEE 33rd International Requirements Engineering Conference (RE)},
  pages = {19--30},
  year = 2025,
  doi = {10.1109/RE63999.2025.00013}
}