ISCAP Proceedings: Abstract Presentation
Generating a Single Step to Launch Experiments Using AI
Shawn Zwach
Dakota State University
Mark Spanier
Dakota State University
Abstract
Reproducibility is a cornerstone of trustworthy, valid research, yet many published experiments lack the configuration details and deployment instructions needed for replication. This work explores the use of large language models (including frontier systems such as GPT-5, GPT-5-mini, and GPT-4o, as well as leading open-source models) to automate the extraction of experimental design information from research papers and their associated codebases. The goal is to generate functional experiment configurations and deployment scripts that enable seamless replication and adaptation.
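As a minimal sketch of how such a generation step might be driven, assuming the OpenAI Python SDK with an API key in the environment; the model name, prompt wording, and function name are illustrative only and not prescribed by this work:

    # Sketch: ask a model to turn a paper's methods text and a repository README
    # into one shell script that launches the experiment. Illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate_launch_script(paper_excerpt: str, repo_readme: str,
                               model: str = "gpt-4o") -> str:
        """Extract the experimental configuration and produce a single launch script."""
        prompt = (
            "From the paper excerpt and repository README below, extract the "
            "experimental configuration (dependencies, datasets, hyperparameters, "
            "entry point) and produce ONE bash script that installs requirements "
            "and launches the experiment with a single command.\n\n"
            f"--- PAPER EXCERPT ---\n{paper_excerpt}\n\n"
            f"--- README ---\n{repo_readme}\n"
        )
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

In practice, the returned script would be saved alongside the cloned repository and exercised in a clean environment (for example, a container) to judge whether the configuration actually runs.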
Researchers will evaluate whether such models can successfully produce working configurations for a set of 10–30 publications, while also measuring the time and financial cost of each attempt. The focus is on freely available research that provides an accessible source code repository, such as a GitHub project, but lacks a streamlined execution pathway (a one- or two-command launch sequence). The anticipated impact of this work is twofold: first, enabling rapid re-testing and modernization of legacy research; and second, supporting the development of reproducible workflows for new research outputs.
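A minimal sketch of the per-attempt time and cost accounting follows, again assuming the OpenAI Python SDK; the per-million-token prices are placeholders rather than quoted rates, and the function name is hypothetical:

    # Sketch: run one generation attempt and record wall-clock time plus an
    # estimated dollar cost derived from the token usage reported by the API.
    import time
    from openai import OpenAI

    client = OpenAI()

    PRICE_PER_1M_INPUT_USD = 2.50    # placeholder rate, not a published price
    PRICE_PER_1M_OUTPUT_USD = 10.00  # placeholder rate, not a published price

    def timed_attempt(prompt: str, model: str = "gpt-4o"):
        """Return the model output plus elapsed seconds and estimated USD cost."""
        start = time.perf_counter()
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        elapsed = time.perf_counter() - start
        usage = response.usage
        cost = (usage.prompt_tokens / 1_000_000) * PRICE_PER_1M_INPUT_USD \
             + (usage.completion_tokens / 1_000_000) * PRICE_PER_1M_OUTPUT_USD
        return response.choices[0].message.content, {"seconds": elapsed,
                                                     "usd_estimate": cost}

Aggregating these per-attempt records across the 10–30 publications would yield the time and cost comparisons across models described above.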