
Evaluating SQL Understanding in Large Language Models

Abstract

The rise of large language models (LLMs) has significantly impacted various domains, including natural language processing (NLP) and image generation, by making complex computational tasks more accessible. While LLMs demonstrate impressive generative capabilities, there is an ongoing debate about their level of "understanding," particularly in structured domains like SQL. In this paper, we evaluate the extent to which LLMs "understand" SQL by testing them on a series of key SQL tasks. These tasks, such as syntax error detection, missing token identification, query performance prediction, query equivalence checking, and query explanation, assess the models' proficiency in recognition, context awareness, semantics, and coherence, skills essential for SQL understanding. We generate labeled datasets from well-known workloads and evaluate the latest LLMs, focusing on how query complexity and syntactic features influence performance. Our results indicate that while GPT-4 excels at tasks requiring recognition and context, all models struggle with deeper semantic understanding and coherence, especially in query equivalence and performance estimation, revealing the limitations of current LLMs in achieving full SQL comprehension.
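
As a rough illustration of how labeled probes for one of these tasks might be generated from an existing workload, the sketch below corrupts a valid SQL query by dropping a single token to form a "missing token identification" example. This is a minimal, hypothetical sketch using only the Python standard library; it is not the authors' actual dataset-generation pipeline, and the function and field names are illustrative assumptions.

```python
import random

def make_missing_token_example(query: str, seed: int = 0) -> dict:
    """Build a labeled 'missing token' probe from a well-formed SQL query.

    The input query is assumed to be valid; removing one whitespace-delimited
    token yields a corrupted variant the model is asked to flag and repair.
    (Hypothetical sketch, not the paper's actual pipeline.)
    """
    rng = random.Random(seed)
    tokens = query.split()
    drop_idx = rng.randrange(len(tokens))  # position of the token to remove
    corrupted = " ".join(tokens[:drop_idx] + tokens[drop_idx + 1:])
    return {
        "original": query,                   # ground-truth query
        "corrupted": corrupted,              # input shown to the LLM
        "missing_token": tokens[drop_idx],   # label the model should recover
        "missing_position": drop_idx,
    }

if __name__ == "__main__":
    example = make_missing_token_example(
        "SELECT name, salary FROM employees WHERE salary > 50000 ORDER BY salary DESC"
    )
    print(example["corrupted"])
    print("expected:", example["missing_token"])
```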

Authors

Rahaman A; Zheng A; Milani M; Chiang F; Pottinger R

Volume

28

Pagination

pp. 909-921

Publication Date

March 10, 2025

DOI

10.48786/edbt.2025.74

Conference proceedings

Advances in Database Technology (EDBT)

Issue

3