Title Large language models for OpenAPI definition autocompletion
Translation of Title Dideli kalbos modeliai „OpenAPI“ apibrėžimų automatiniam užbaigimui.
Authors Petryshyn, Bohdan
Pages 52
Keywords [eng] large language models ; code completion ; fine-tuning ; prompt engineering ; benchmarking
Abstract [eng] Recent advancements in Large Language Models (LLMs) and their utilization in code generation tasks have significantly reshaped the field of software development. Despite the remarkable efficacy of code completion solutions in mainstream programming languages, their performance lags when applied to less ubiquitous formats such as OpenAPI definitions. This study evaluates the OpenAPI completion performance of GitHub Copilot, a widely used commercial code completion tool, and proposes a set of task-specific optimizations built on Meta's open-source model, Code Llama. A semantics-aware OpenAPI completion benchmark proposed in this research is used to run a series of experiments analyzing the impact of various prompt engineering and fine-tuning techniques on the Code Llama model's performance. The fine-tuned Code Llama model reaches a peak correctness improvement of 55.2% over GitHub Copilot despite using 25 times fewer parameters than the commercial solution's underlying Codex model. Additionally, this research proposes an enhancement to a widely used code infilling training technique, addressing underperformance when the model is prompted with context sizes smaller than those used during training.
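The infilling setup summarized in the abstract can be made concrete with a short sketch. The following is a minimal illustration, not the thesis's actual setup: it assumes the Hugging Face transformers library and the public codellama/CodeLlama-7b-hf checkpoint, whose tokenizer expands a <FILL_ME> marker into the special prefix/suffix/middle tokens used for fill-in-the-middle completion. The study's fine-tuned model, prompt templates, and benchmark are not reproduced here, and the OpenAPI fragment is a hypothetical example.

    # A minimal sketch, assuming the transformers library and the public
    # codellama/CodeLlama-7b-hf checkpoint; the thesis's fine-tuned model
    # and prompt construction are not reproduced here.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
    model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

    # An OpenAPI fragment with a cursor position: text before <FILL_ME> is
    # the prefix, text after it is the suffix, and the tokenizer rewrites
    # the marker into the special tokens Code Llama saw during
    # fill-in-the-middle training.
    prompt = """openapi: 3.0.0
    info:
      title: Pet Store API
      version: 1.0.0
    paths:
      /pets:
        get:
          <FILL_ME>
          responses:
            '200':
              description: A list of pets.
    """

    input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
    output = model.generate(input_ids, max_new_tokens=64)
    # Decode only the newly generated tokens, i.e. the infilled middle span.
    completion = tokenizer.batch_decode(
        output[:, input_ids.shape[1]:], skip_special_tokens=True
    )[0]
    print(completion)

In this framing, evaluating a completion tool amounts to choosing cursor positions inside existing OpenAPI definitions, asking the model to infill the missing span, and checking the result against the ground truth, which is consistent with the semantics-aware benchmark the abstract describes.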
Dissertation Institution Kauno technologijos universitetas.
Type Master's thesis
Language English
Publication date 2024