Thu 7 Dec 2023 14:00 - 14:15 at Golden Gate C1 - Models of Code and Documentation Chair(s): Gema Rodríguez-Pérez

Pre-trained language models (PLMs) have become a prevalent technique in deep learning for code, utilizing a two-stage pre-training and fine-tuning procedure to acquire general knowledge about code and specialize in a variety of downstream tasks. However, the dynamic nature of software codebases poses a challenge to the effectiveness and robustness of PLMs. In particular, real-world scenarios potentially lead to significant differences between the distribution of the pre-training and test data, i.e., distribution shift, resulting in a degradation of the PLM's performance on downstream tasks. In this paper, we stress the need to adapt PLMs of code to software data whose distribution changes over time, a crucial problem that has been overlooked in previous works. The motivation of this work is to consider the PLM in a non-stationary environment, where the fine-tuning data evolves over time according to a software evolution scenario. Specifically, we design a scenario in which the model must learn from a stream of programs containing new, unseen APIs over time. We study two widely used PLM architectures, i.e., a GPT2 decoder and a RoBERTa encoder, on two downstream tasks, API call and API usage prediction. We demonstrate that the most commonly used fine-tuning technique from prior work is not robust enough to handle the dynamic nature of APIs, leading to the loss of previously acquired knowledge, i.e., catastrophic forgetting. To address these issues, we implement five continual learning baselines, including replay-based and regularization-based methods. Our findings demonstrate that these straightforward baselines effectively mitigate catastrophic forgetting in PLMs across both downstream tasks while achieving comparable or superior performance.
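
As a rough illustration of the kind of replay-based continual learning baseline the abstract refers to, the following minimal sketch interleaves fine-tuning on the current data stream with updates on a bounded buffer of past examples. It is not the authors' implementation: the names (replay_fine_tune, task_streams) are hypothetical, and the sketch assumes a Hugging Face-style model whose forward pass returns an object with a .loss attribute when given input_ids and labels.

import random

def replay_fine_tune(model, task_streams, optimizer, buffer_size=1000, replay_batches=1):
    """Hypothetical sketch of replay-based continual fine-tuning, not the paper's exact code.

    task_streams: a list of DataLoaders, one per time step of the software-evolution
    scenario (each step introduces programs with new, unseen APIs).
    """
    replay_buffer = []  # bounded memory of batches seen at earlier time steps

    for loader in task_streams:
        model.train()
        for batch in loader:
            # Standard fine-tuning update on the current data distribution.
            loss = model(input_ids=batch["input_ids"], labels=batch["labels"]).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

            # Replay a few past batches to mitigate forgetting of earlier APIs.
            for _ in range(min(replay_batches, len(replay_buffer))):
                past = random.choice(replay_buffer)
                replay_loss = model(input_ids=past["input_ids"], labels=past["labels"]).loss
                replay_loss.backward()
                optimizer.step()
                optimizer.zero_grad()

            # Simple bounded-memory policy: once full, overwrite a random old batch.
            if len(replay_buffer) < buffer_size:
                replay_buffer.append(batch)
            else:
                replay_buffer[random.randrange(buffer_size)] = batch

    return model

The regularization-based alternatives mentioned in the abstract (e.g., EWC-style penalties) would instead add a term to the loss that discourages large changes to parameters important for earlier data, rather than storing past examples.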

Thu 7 Dec

Displayed time zone: Pacific Time (US & Canada)

14:00 - 15:30
Models of Code and Documentation (Research Papers / Journal First / Ideas, Visions and Reflections) at Golden Gate C1
Chair(s): Gema Rodríguez-Pérez, University of British Columbia (UBC)
14:00
15m
Talk
On the Usage of Continual Learning for Out-of-Distribution Generalization in Pre-trained Language Models of Code
Research Papers
Martin Weyssow (DIRO, Université de Montréal); Xin Zhou (Singapore Management University, Singapore); Kisub Kim (School of Computing and Information Systems, Singapore Management University); David Lo (School of Computing and Information Systems, Singapore Management University); Houari Sahraoui (DIRO, Université de Montréal)
Pre-print Media Attached
14:15
15m
Talk
A Vision on Intentions in Software Engineering
Ideas, Visions and Reflections
Jacob Krüger (Eindhoven University of Technology); Yi Li (Nanyang Technological University); Chenguang Zhu (Meta); Marsha Chechik (University of Toronto); Thorsten Berger (Ruhr University Bochum); Julia Rubin (University of British Columbia, Canada)
Media Attached
14:30
15m
Paper
Automated Identification of Toxic Code Reviews Using ToxiCR
Journal First
Jaydeb Sarker (Department of Computer Science, Wayne State University); Asif Kamal Turzo (Wayne State University); Amiangshu Bosu (Wayne State University); Ming Dong (Wayne State University)
Link to publication DOI Pre-print Media Attached
14:45
15m
Talk
GrACE: Language Models Meet Code Edits
Research Papers
Priyanshu Gupta (Microsoft); Avishree Khare (Microsoft); Yasharth Bajpai (Microsoft); Saikat Chakraborty (Microsoft Research); Sumit Gulwani (Microsoft); Aditya Kanade (Microsoft Research India); Arjun Radhakrishna (Microsoft); Gustavo Soares (Microsoft); Ashish Tiwari (Microsoft)
Media Attached
15:00
15m
Talk
Recommending Analogical APIs via Knowledge Graph Embedding
Research Papers
Mingwei Liu (Fudan University); Yanjun Yang (Fudan University); Yiling Lou (Fudan University); Xin Peng (Fudan University); Zhong Zhou (Fudan University); Xueying Du (Fudan University); Tianyong Yang (Fudan University)
Pre-print Media Attached
15:15
15m
Talk
[Remote] CCT5: A Code-Change-Oriented Pre-Trained Model
Research Papers
Bo Lin (National University of Defense Technology); Shangwen Wang (National University of Defense Technology); Zhongxin Liu (Zhejiang University); Yepang Liu (Southern University of Science and Technology); Xin Xia (Huawei Technologies); Xiaoguang Mao (National University of Defense Technology)
DOI Pre-print Media Attached