Automated Identification of Toxic Code Reviews Using ToxiCR
Toxic conversations during software development interactions may have serious repercussions for a Free and Open Source Software (FOSS) project. For example, victims of toxic conversations may become afraid to express themselves, become demotivated, and eventually leave the project. Automated filtering of toxic conversations may help a FOSS community maintain healthy interactions among its members. However, off-the-shelf toxicity detectors perform poorly on software engineering datasets, such as one curated from code review comments. To address this challenge, we present ToxiCR, a supervised-learning-based toxicity identification tool for code review interactions. ToxiCR offers a choice of ten supervised learning algorithms, an option to select among text vectorization techniques, eight preprocessing steps (two of which are specific to the software engineering domain), and a large-scale labeled dataset of 19,651 code review comments. Through a rigorous evaluation of the models with various combinations of preprocessing steps and vectorization techniques, we identified the combination that performs best on our dataset, achieving 95.8% accuracy and an 88.9% F1-score in identifying toxic texts. ToxiCR significantly outperforms existing toxicity detectors on our dataset. We have publicly released our dataset, pre-trained models, evaluation results, and source code at https://github.com/WSU-SEAL/ToxiCR.
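The abstract describes a pipeline of text preprocessing, vectorization, and supervised classification. As a rough illustration of that style of approach, the sketch below trains a toy bag-of-words naive Bayes classifier on a handful of made-up review comments. Everything here (the tokenizer, the tiny dataset, and the `NaiveBayes` class) is a hypothetical stand-in, not ToxiCR's actual API, algorithms, or data.

```python
import math
import re
from collections import Counter

def preprocess(text):
    # Toy software-engineering-aware step: mask inline code spans
    # (e.g. `tmp`) so identifiers are not treated as natural language,
    # then lowercase and tokenize.
    text = re.sub(r"`[^`]*`", " CODE ", text.lower())
    return re.findall(r"[a-z']+|CODE", text)

class NaiveBayes:
    """Multinomial naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, texts, labels):
        self.word_counts = {0: Counter(), 1: Counter()}
        self.class_counts = Counter()
        for text, y in zip(texts, labels):
            self.class_counts[y] += 1
            self.word_counts[y].update(preprocess(text))
        self.vocab = set(self.word_counts[0]) | set(self.word_counts[1])
        return self

    def predict(self, text):
        tokens = preprocess(text)
        total = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for y in (0, 1):
            # Log prior plus smoothed log likelihood of each token.
            lp = math.log(self.class_counts[y] / total)
            denom = sum(self.word_counts[y].values()) + len(self.vocab)
            for t in tokens:
                lp += math.log((self.word_counts[y][t] + 1) / denom)
            if lp > best_lp:
                best, best_lp = y, lp
        return best

# Tiny illustrative dataset (1 = toxic, 0 = non-toxic).
comments = [
    "this patch is garbage, did you even test it?",
    "you are an idiot, revert this now",
    "nice catch, thanks for fixing the `null` check",
    "looks good to me, just rename `tmp` for clarity",
]
labels = [1, 1, 0, 0]

model = NaiveBayes().fit(comments, labels)
print(model.predict("you are an idiot"))          # → 1 (toxic)
print(model.predict("thanks, looks good to me"))  # → 0 (non-toxic)
```

ToxiCR itself evaluates much stronger vectorizers and ten supervised algorithms on 19,651 labeled comments; this sketch only shows the overall shape of such a supervised toxicity classifier.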
Thu 7 Dec (displayed time zone: Pacific Time, US & Canada)
14:00 - 15:30 | Models of Code and Documentation
Research Papers / Journal First / Ideas, Visions and Reflections at Golden Gate C1
Chair(s): Gema Rodríguez-Pérez (University of British Columbia)

14:00 (15m, Talk) On the Usage of Continual Learning for Out-of-Distribution Generalization in Pre-trained Language Models of Code [Research Papers]
Martin Weyssow (DIRO, Université de Montréal), Xin Zhou (Singapore Management University), Kisub Kim (School of Computing and Information Systems, Singapore Management University), David Lo (School of Computing and Information Systems, Singapore Management University), Houari Sahraoui (DIRO, Université de Montréal)
Pre-print, Media Attached

14:15 (15m, Talk) A Vision on Intentions in Software Engineering [Ideas, Visions and Reflections]
Jacob Krüger (Eindhoven University of Technology), Yi Li (Nanyang Technological University), Chenguang Zhu (Meta), Marsha Chechik (University of Toronto), Thorsten Berger (Ruhr University Bochum), Julia Rubin (University of British Columbia, Canada)
Media Attached

14:30 (15m, Paper) Automated Identification of Toxic Code Reviews Using ToxiCR [Journal First]
Jaydeb Sarker (Department of Computer Science, Wayne State University), Asif Kamal Turzo (Wayne State University), Amiangshu Bosu (Wayne State University), Ming Dong (Wayne State University)
Link to publication, DOI, Pre-print, Media Attached

14:45 (15m, Talk) GrACE: Language Models Meet Code Edits [Research Papers]
Priyanshu Gupta (Microsoft), Avishree Khare (Microsoft), Yasharth Bajpai (Microsoft), Saikat Chakraborty (Microsoft Research), Sumit Gulwani (Microsoft), Aditya Kanade (Microsoft Research India), Arjun Radhakrishna (Microsoft), Gustavo Soares (Microsoft), Ashish Tiwari (Microsoft)
Media Attached

15:00 (15m, Talk) Recommending Analogical APIs via Knowledge Graph Embedding [Research Papers]
Mingwei Liu (Fudan University), Yanjun Yang (Fudan University), Yiling Lou (Fudan University), Xin Peng (Fudan University), Zhong Zhou (Fudan University), Xueying Du (Fudan University), Tianyong Yang (Fudan University)
Pre-print, Media Attached

15:15 (15m, Talk) [Remote] CCT5: A Code-Change-Oriented Pre-Trained Model [Research Papers]
Bo Lin (National University of Defense Technology), Shangwen Wang (National University of Defense Technology), Zhongxin Liu (Zhejiang University), Yepang Liu (Southern University of Science and Technology), Xin Xia (Huawei Technologies), Xiaoguang Mao (National University of Defense Technology)
DOI, Pre-print, Media Attached