Thu 7 Dec 2023 14:45 - 15:00 at Golden Gate C1 - Models of Code and Documentation Chair(s): Gema Rodríguez-Pérez

Developers spend a significant amount of time editing code for a variety of reasons, such as fixing bugs or adding new features. Designing effective methods to predict code edits has been an active yet challenging area of research, due to the diversity of code edits and the difficulty of capturing the developer's intent. In this work, we address these challenges by endowing pre-trained large language models (LLMs) of code with knowledge of prior, relevant edits. The generative capability of the LLMs helps address the diversity of code changes, and conditioning code generation on prior edits helps capture the latent developer intent. We evaluate two well-known LLMs, Codex and CodeT5, in zero-shot and fine-tuning settings, respectively. In our experiments on two datasets, the knowledge of prior edits boosts the performance of the LLMs significantly, enabling them to generate 29% and 54% more correctly edited code in top-1 suggestions relative to the current state-of-the-art symbolic and neural approaches, respectively.
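The abstract describes conditioning code generation on prior, relevant edits. Purely as an illustrative sketch of that idea, and not the authors' implementation, the following Python snippet serializes a developer's earlier edits into a prompt for a zero-shot code LLM; the PriorEdit class, the build_prompt helper, and the prompt layout are all hypothetical assumptions.

# Hypothetical sketch only: one way to condition a zero-shot code LLM on prior
# edits by serializing them into the prompt. Names and prompt layout are
# illustrative assumptions, not the authors' implementation.
from dataclasses import dataclass

@dataclass
class PriorEdit:
    before: str  # code fragment before the earlier edit
    after: str   # code fragment after the earlier edit

def build_prompt(prior_edits: list[PriorEdit], target_before: str) -> str:
    """Serialize prior edits plus the code to be edited into one prompt string,
    so the model can infer the latent developer intent from the edit history."""
    parts = ["# Prior edits made in this session:"]
    for i, edit in enumerate(prior_edits, start=1):
        parts.append(f"# Edit {i}, before:\n{edit.before}")
        parts.append(f"# Edit {i}, after:\n{edit.after}")
    parts.append(f"# Code to edit next:\n{target_before}")
    parts.append("# Edited code:")
    return "\n".join(parts)

if __name__ == "__main__":
    history = [PriorEdit(before="total = price", after="total = price * quantity")]
    print(build_prompt(history, "subtotal = cost"))

In a fine-tuning setting, the same serialization could serve as the model input with the edited code as the training target; that pairing is likewise an assumption made for illustration.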

Thu 7 Dec

Displayed time zone: Pacific Time (US & Canada)

14:00 - 15:30
Models of Code and Documentation - Research Papers / Journal First / Ideas, Visions and Reflections at Golden Gate C1
Chair(s): Gema Rodríguez-Pérez, University of British Columbia (UBC)
14:00
15m
Talk
On the Usage of Continual Learning for Out-of-Distribution Generalization in Pre-trained Language Models of Code
Research Papers
Martin Weyssow (DIRO, Université de Montréal), Xin Zhou (Singapore Management University, Singapore), Kisub Kim (School of Computing and Information Systems, Singapore Management University), David Lo (School of Computing and Information Systems, Singapore Management University), Houari Sahraoui (DIRO, Université de Montréal)
Pre-print Media Attached
14:15
15m
Talk
A Vision on Intentions in Software Engineering
Ideas, Visions and Reflections
Jacob Krüger (Eindhoven University of Technology), Yi Li (Nanyang Technological University), Chenguang Zhu (Meta), Marsha Chechik (University of Toronto), Thorsten Berger (Ruhr University Bochum), Julia Rubin (University of British Columbia, Canada)
Media Attached
14:30
15m
Paper
Automated Identification of Toxic Code Reviews Using ToxiCR
Journal First
Jaydeb Sarker (Department of Computer Science, Wayne State University), Asif Kamal Turzo (Wayne State University), Amiangshu Bosu (Wayne State University), Ming Dong (Wayne State University)
Link to publication DOI Pre-print Media Attached
14:45
15m
Talk
GrACE: Language Models Meet Code Edits
Research Papers
Priyanshu Gupta (Microsoft), Avishree Khare (Microsoft), Yasharth Bajpai (Microsoft), Saikat Chakraborty (Microsoft Research), Sumit Gulwani (Microsoft), Aditya Kanade (Microsoft Research India), Arjun Radhakrishna (Microsoft), Gustavo Soares (Microsoft), Ashish Tiwari (Microsoft)
Media Attached
15:00
15m
Talk
Recommending Analogical APIs via Knowledge Graph Embedding
Research Papers
Mingwei Liu (Fudan University), Yanjun Yang (Fudan University), Yiling Lou (Fudan University), Xin Peng (Fudan University), Zhong Zhou (Fudan University), Xueying Du (Fudan University), Tianyong Yang (Fudan University)
Pre-print Media Attached
15:15
15m
Talk
[Remote] CCT5: A Code-Change-Oriented Pre-Trained Model
Research Papers
Bo Lin (National University of Defense Technology), Shangwen Wang (National University of Defense Technology), Zhongxin Liu (Zhejiang University), Yepang Liu (Southern University of Science and Technology), Xin Xia (Huawei Technologies), Xiaoguang Mao (National University of Defense Technology)
DOI Pre-print Media Attached