@online{Cao_2502.15835,
TITLE = {Pragmatic Reasoning improves {LLM} Code Generation},
AUTHOR = {Cao, Zhuchen and Apel, Sven and Singla, Adish and Demberg, Vera},
LANGUAGE = {eng},
URL = {https://arxiv.org/abs/2502.15835},
EPRINT = {2502.15835},
EPRINTTYPE = {arXiv},
YEAR = {2025},
MARGINALMARK = {$\bullet$},
ABSTRACT = {Large Language Models (LLMs) have demonstrated impressive potential in translating natural language (NL) instructions into program code. However, user instructions often contain inherent ambiguities, making it challenging for LLMs to generate code that accurately reflects the user's true intent. To address this challenge, researchers have proposed to produce multiple candidates of the program code and then rerank them to identify the best solution. In this paper, we propose CodeRSA, a novel code candidate reranking mechanism built upon the Rational Speech Act (RSA) framework, designed to guide LLMs toward more comprehensive pragmatic reasoning about user intent. We evaluate CodeRSA using one of the latest LLMs on a popular code generation dataset. Our experiment results show that CodeRSA consistently outperforms common baselines, surpasses the state-of-the-art approach in most cases, and demonstrates robust overall performance. These findings underscore the effectiveness of integrating pragmatic reasoning into code candidate reranking, offering a promising direction for enhancing code generation quality in LLMs.},
}