Evaluating Transfer Learning for Simplifying GitHub READMEs
Software documentation captures detailed knowledge about a software product, e.g., code, technologies, and design. It plays an important role in coordinating development teams and in conveying ideas to various stakeholders. However, software documentation can be hard to comprehend if it is written with jargon and complicated sentence structure. In this study, we explored the potential of text simplification techniques in the domain of software engineering to automatically simplify GitHub README files. We collected a software-related dataset of 14,588 pairs of GitHub README files, aligned difficult sentences with their simplified counterparts, and trained a Transformer-based model to automatically simplify the difficult versions. To mitigate the sparse and noisy nature of the software-related simplification dataset, we applied general text simplification knowledge to this field. Since many general-domain difficult-to-simple Wikipedia document pairs are already publicly available, we explored the potential of transfer learning by first training the model on the Wikipedia data and then fine-tuning it on the README data. Using automated BLEU scores and human evaluation, we compared the performance of different transfer learning schemes and of baseline models without transfer learning. The transfer learning model using the best checkpoint trained on a general-topic corpus achieved the best performance, with a BLEU score of 34.68 and statistically significantly higher human annotation scores than the remaining schemes and baselines. We conclude that transfer learning is a promising direction for circumventing the lack of data and the style drift problem in software README file simplification, achieving a better trade-off between simplification and preservation of meaning.
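The BLEU score used for the automated evaluation above can be illustrated with a minimal sketch. This is not the authors' evaluation code: published evaluations typically rely on a library implementation (e.g., sacrebleu), and the add-one smoothing used here is an assumption to keep short candidates with no higher-order n-gram matches from scoring zero.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against a single reference (sketch).

    Combines clipped n-gram precisions (n = 1..max_n) via a geometric
    mean, multiplied by a brevity penalty for short candidates.
    """
    cand = candidate.split()
    ref = reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # clipped matches: each candidate n-gram counts at most as
        # often as it appears in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # add-one smoothing (an assumption, not part of plain BLEU)
        precisions.append((overlap + 1) / (total + 1))
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

A score of 34.68 in the abstract corresponds to 0.3468 on this 0-to-1 scale (BLEU is conventionally reported multiplied by 100), computed at corpus level rather than per sentence.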
Thu 7 Dec | Displayed time zone: Pacific Time (US & Canada)
14:00 - 15:30 | Machine Learning V | Research Papers / Ideas, Visions and Reflections / Journal First at Golden Gate C2 | Chair(s): Prem Devanbu (University of California at Davis)
14:00 (15m) Talk | LExecutor: Learning-Guided Execution | Research Papers | Media Attached
14:15 (15m) Talk | Deeper Notions of Correctness in Image-based DNNs: Lifting Properties from Pixel to Entities | Ideas, Visions and Reflections | Felipe Toledo, David Shriver (University of Virginia), Sebastian Elbaum (University of Virginia), Matthew B Dwyer (University of Virginia) | Link to publication, DOI, Pre-print, Media Attached
14:30 (15m) Talk | Software Architecture Recovery with Information Fusion | Research Papers | Yiran Zhang (Nanyang Technological University), Zhengzi Xu (Nanyang Technological University), Chengwei Liu (Nanyang Technological University), Hongxu Chen (Huawei Technologies Co., Ltd.), Sun Jianwen (Huawei Technologies Co., Ltd.), Dong Qiu (Huawei Technologies Co., Ltd.), Yang Liu (Nanyang Technological University) | Media Attached
14:45 (15m) Talk | What Kinds of Contracts Do ML APIs Need? | Journal First | Samantha Syeda Khairunnesa (Bradley University), Shibbir Ahmed (Dept. of Computer Science, Iowa State University), Sayem Mohammad Imtiaz (Iowa State University), Hridesh Rajan (Dept. of Computer Science, Iowa State University), Gary T. Leavens (University of Central Florida) | Media Attached
15:00 (15m) Talk | Evaluating Transfer Learning for Simplifying GitHub READMEs | Research Papers | Haoyu Gao (The University of Melbourne), Christoph Treude (University of Melbourne), Mansooreh Zahedi (The University of Melbourne) | Pre-print, Media Attached
15:15 (15m) Talk | [Remote] CodeMark: Imperceptible Watermarking for Code Datasets against Neural Code Completion Models | Research Papers | Zhensu Sun (Singapore Management University), Xiaoning Du (Monash University, Australia), Fu Song (State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Beijing, China), Li Li (Beihang University) | Pre-print, Media Attached