International Journal on Science and Technology


Intelligent Test Automation: A Multi-Agent LLM Framework for Dynamic Test Case Generation and Validation

Author(s) Pragati Kumari
Country India
Abstract Automated software testing is essential in modern software development, ensuring stability and resilience. This study describes a novel technique that harnesses the capabilities of Large Language Models (LLMs) through a system of autonomous agents. These agents collaborate to dynamically generate, validate, and execute test cases based on specified requirements [1, 2]. By iteratively refining test cases through agent-to-agent communication, the system increases their accuracy and effectiveness. Our implementation, built on AutoGen and Python's unittest framework, shows how this approach helps maintain high software quality (an illustrative sketch of such a setup follows the article details below). Experimental evaluations across a variety of test scenarios demonstrate the versatility and efficiency of our framework, Intelligent Test Automation (ITA), underscoring its promise for advancing automated software testing [3, 4].
Keywords Intelligent Test Automation (ITA), Large Language Models (LLMs), Multi-Agent Systems, Automated Software Testing, Test Case Generation
Field Engineering
Published In Volume 16, Issue 1, January-March 2025
Published On 2025-03-05
Cite This Intelligent Test Automation: A Multi-Agent LLM Framework for Dynamic Test Case Generation and Validation - Pragati Kumari - IJSAT Volume 16, Issue 1, January-March 2025. DOI 10.71097/IJSAT.v16.i1.2232
DOI https://doi.org/10.71097/IJSAT.v16.i1.2232
Short DOI https://doi.org/g869w7
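
The abstract above mentions an implementation built on AutoGen and Python's unittest framework. The following is a minimal, hypothetical sketch of how such a generate-review-execute loop could be wired together with AutoGen's agent classes; the agent names, system prompts, sample requirement, and LLM configuration are assumptions for illustration, not the authors' actual implementation.

import autogen  # pyautogen: multi-agent conversation framework

# Assumed LLM configuration; replace the model and key with your own.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

# Agent that drafts Python unittest test cases from a natural-language requirement.
test_generator = autogen.AssistantAgent(
    name="test_generator",
    system_message="Write Python unittest test cases for the given requirement. "
                   "Reply with runnable Python code only.",
    llm_config=llm_config,
)

# Agent that reviews the generated tests and requests fixes where needed.
test_reviewer = autogen.AssistantAgent(
    name="test_reviewer",
    system_message="Review the proposed unittest code for correctness and coverage, "
                   "and suggest concrete improvements.",
    llm_config=llm_config,
)

# Proxy agent that executes the generated test code and reports the results.
executor = autogen.UserProxyAgent(
    name="executor",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "generated_tests", "use_docker": False},
)

# A sample requirement; in practice this would come from the specification.
requirement = "Function add(a, b) must return the sum of two integers."

# Group chat lets the agents iterate: generate, review, execute, refine.
groupchat = autogen.GroupChat(
    agents=[executor, test_generator, test_reviewer], messages=[], max_round=8
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

executor.initiate_chat(
    manager,
    message="Generate and validate unittest test cases for this requirement:\n" + requirement,
)

In this sketch the UserProxyAgent both orchestrates the conversation and runs the generated unittest code locally; the paper's actual agent roles and refinement loop may differ.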
