[Remote] An Extensive Study on Adversarial Attack against Pre-trained Models of Code
Transformer-based pre-trained models of code (PTMC) have been widely used and have achieved state-of-the-art performance in many mission-critical applications. However, they can be vulnerable to adversarial attacks via identifier substitution or coding style transformation, which can significantly degrade accuracy and may raise security concerns. Although several approaches have been proposed to generate adversarial examples for PTMC, their effectiveness and efficiency, especially across different code intelligence tasks, are not well understood. To bridge this gap, this study systematically analyzes five state-of-the-art adversarial attack approaches from three perspectives: effectiveness, efficiency, and the quality of the generated examples. The results show that none of the five approaches balances all three perspectives; in particular, approaches with a high attack success rate tend to be time-consuming, and the adversarial code they generate often lacks naturalness, and vice versa. To address this limitation, we explore the impact of perturbing identifiers in different contexts and find that identifier substitution within FOR and IF statements is the most effective. Based on these findings, we propose a new approach that prioritizes different types of statements for different tasks and utilizes beam search to generate adversarial examples. Evaluation results show that it outperforms the state-of-the-art approach ALERT in terms of effectiveness and efficiency while preserving the naturalness of the generated adversarial examples.
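The core idea described in the abstract, substituting identifiers that occur inside FOR and IF statements and searching candidate renamings with a beam, can be illustrated with a minimal sketch. This is not the paper's actual attack: it assumes Python source code, uses the standard ast module to locate identifiers inside for/if statements, and substitutes a toy victim_score function for a real query to the victim pre-trained model. The helper names (identifiers_in_for_if, beam_search_attack) are hypothetical.

# Illustrative sketch only (not the paper's attack): identifier substitution
# restricted to FOR/IF statements, explored with a small beam search.
# Assumes Python source; victim_score is a toy stand-in for a real PTMC query.
import ast
import re


def identifiers_in_for_if(source: str) -> set:
    """Collect identifier names that occur inside for/if statements."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.For, ast.If)):
            for child in ast.walk(node):
                if isinstance(child, ast.Name):
                    names.add(child.id)
    return names


def rename(source: str, old: str, new: str) -> str:
    """Naive whole-word rename; a real attack would rewrite via the AST."""
    return re.sub(rf"\b{re.escape(old)}\b", new, source)


def victim_score(source: str) -> float:
    """Stand-in for the victim model's confidence in the original label.
    Lower is better for the attacker; replace with a real model query."""
    return float(len(set(source.split())))  # toy proxy, not meaningful


def beam_search_attack(source: str, candidates, beam_width: int = 3):
    """Beam search over single-identifier substitutions in FOR/IF contexts."""
    beam = [(victim_score(source), source)]
    for name in sorted(identifiers_in_for_if(source)):
        expanded = list(beam)  # keeping an identifier unchanged is also allowed
        for _, code in beam:
            for cand in candidates:
                mutated = rename(code, name, cand)
                expanded.append((victim_score(mutated), mutated))
        beam = sorted(expanded, key=lambda t: t[0])[:beam_width]
    return beam[0]


if __name__ == "__main__":
    snippet = (
        "def count_evens(values):\n"
        "    total = 0\n"
        "    for v in values:\n"
        "        if v % 2 == 0:\n"
        "            total += 1\n"
        "    return total\n"
    )
    score, adversarial = beam_search_attack(snippet, ["tmp", "idx", "acc"])
    print(score)
    print(adversarial)

Note that this sketch does not check for name collisions among candidates or verify that substitutes look natural, constraints that real attacks such as ALERT enforce when selecting replacement identifiers.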
Tue 5 Dec (displayed time zone: Pacific Time, US & Canada)
16:00 - 18:00 | Machine Learning II | Research Papers / Ideas, Visions and Reflections | Golden Gate C2
Chair(s): Iftekhar Ahmed (University of California at Irvine)

16:00 | 15m | Talk | [Remote] Compatibility Issues in Deep Learning Systems: Problems and Opportunities | Research Papers
    Jun Wang (Nanjing University of Aeronautics and Astronautics, Nanjing, China), Guanping Xiao (Nanjing University of Aeronautics and Astronautics, China), Shuai Zhang (Nanjing University of Aeronautics and Astronautics, China), Huashan Lei (Nanjing University of Aeronautics and Astronautics, China), Yepang Liu (Southern University of Science and Technology), Yulei Sui (University of New South Wales, Australia)

16:15 | 15m | Talk | [Remote] An Extensive Study on Adversarial Attack against Pre-trained Models of Code | Research Papers
    Xiaohu Du (Huazhong University of Science and Technology), Ming Wen (Huazhong University of Science and Technology), Zichao Wei (Huazhong University of Science and Technology), Shangwen Wang (National University of Defense Technology), Hai Jin (Huazhong University of Science and Technology)

16:30 | 15m | Talk | Can Machine Learning Pipelines Be Better Configured? | Research Papers
    Yibo Wang (Northeastern University), Ying Wang (Northeastern University), Tingwei Zhang (Northeastern University), Yue Yu (National University of Defense Technology), Shing-Chi Cheung (Department of Computer Science and Engineering, The Hong Kong University of Science and Technology), Hai Yu (Software College, Northeastern University), Zhiliang Zhu (Software College, Northeastern University)

16:45 | 15m | Talk | Towards Feature-Based Analysis of the Machine Learning Development Lifecycle | Ideas, Visions and Reflections

17:00 | 15m | Talk | Fix Fairness, Don't Ruin Accuracy: Performance Aware Fairness Repair using AutoML | Research Papers
    Giang Nguyen (Dept. of Computer Science, Iowa State University), Sumon Biswas (Carnegie Mellon University), Hridesh Rajan (Dept. of Computer Science, Iowa State University)

17:15 | 15m | Talk | BiasAsker: Measuring the Bias in Conversational AI System | Research Papers
    Yuxuan Wan (The Chinese University of Hong Kong), Wenxuan Wang (Chinese University of Hong Kong), Pinjia He (The Chinese University of Hong Kong, Shenzhen), Jiazhen Gu (Chinese University of Hong Kong), Haonan Bai (The Chinese University of Hong Kong), Michael Lyu (The Chinese University of Hong Kong)

17:30 | 15m | Talk | Pitfalls in Experiments with DNN4SE: An Analysis of the State of the Practice | Research Papers

17:45 | 15m | Talk | DecompoVision: Reliability Analysis of Machine Vision Components Through Decomposition and Reuse | Research Papers
    Boyue Caroline Hu (University of Toronto), Lina Marsso (University of Toronto), Nikita Dvornik (Waabi), Huakun Shen (University of Toronto), Marsha Chechik (University of Toronto)