Tue 5 Dec 2023 16:30 - 16:45 at Golden Gate C2 - Machine Learning II Chair(s): Iftekhar Ahmed

A Machine Learning (ML) pipeline configures the workflow of a learning task using the APIs provided by ML libraries. However, a pipeline's performance can vary significantly across different configurations of ML library versions. Misconfigured pipelines can result in inferior performance, such as poor execution time and memory usage, numeric errors, and even crashes. A pipeline is subject to misconfiguration if it exhibits significantly inconsistent performance upon changes in the versions of its configured libraries or the combination of these libraries. We refer to such performance inconsistency as a pipeline configuration (PLC) issue.

There is no prior systematic study on the pervasiveness, impact, and root causes of PLC issues. A systematic understanding of these issues helps configure effective ML pipelines and identify misconfigured ones. In this paper, we conduct the first empirical study of PLC issues. To dig deeper into the problem, we propose Piecer, an infrastructure that automatically generates a set of pipeline variants by varying the version combinations of ML libraries and compares their performance inconsistencies. We apply Piecer to the 3,380 deployable pipelines out of the 11,363 ML pipelines collected from multiple ML competitions on the Kaggle platform. The empirical study results show that 1,092 (32.3%) of the 3,380 pipelines manifest significant performance inconsistencies on at least one variant. We find that 399, 243, and 440 pipelines can achieve better competition scores, execution time, and memory usage, respectively, by adopting a different configuration. Based on our empirical findings, we construct a repository containing 164 defective APIs and 106 API combinations from 418 library versions. The defective API repository facilitates future studies of automated detection techniques for PLC issues. Leveraging the repository, we captured PLC issues in 309 real-world ML pipelines.
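The core idea of comparing a pipeline's performance across library-version configurations can be sketched in plain Python. This is a minimal illustration, not Piecer's actual implementation: the variant functions, the version labels, and the 2x inconsistency threshold below are all assumptions made for the example.

```python
import time
import tracemalloc

def profile_pipeline(pipeline, *args):
    """Measure wall-clock time and peak traced memory of one pipeline run."""
    tracemalloc.start()
    start = time.perf_counter()
    result = pipeline(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

def flag_inconsistency(profiles, ratio=2.0):
    """Flag a PLC issue if time or memory across variants diverges
    by more than `ratio` (an illustrative threshold)."""
    times = [t for _, t, _ in profiles.values()]
    mems = [m for _, _, m in profiles.values()]
    return (max(times) / min(times) > ratio) or (max(mems) / min(mems) > ratio)

# Two toy "variants" standing in for the same pipeline executed under
# different library-version configurations (hypothetical version labels).
def variant_fast(n):
    return sum(range(n))

def variant_slow(n):
    total = 0
    data = list(range(n))  # extra allocation inflates peak memory
    for x in data:
        total += x
    return total

profiles = {
    "lib==2.0": profile_pipeline(variant_fast, 100_000),
    "lib==1.0": profile_pipeline(variant_slow, 100_000),
}
print(flag_inconsistency(profiles))  # prints True
```

In a real setting, each variant would be the same pipeline script executed in an isolated environment (e.g., a separate virtualenv or container) with a different combination of installed library versions, and the competition score would be compared alongside time and memory.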

Displayed time zone: Pacific Time (US & Canada)

16:00 - 18:00
Machine Learning II
Research Papers / Ideas, Visions and Reflections at Golden Gate C2
Chair(s): Iftekhar Ahmed (University of California at Irvine)
16:00
15m
Talk
[Remote] Compatibility Issues in Deep Learning Systems: Problems and Opportunities
Research Papers
Jun Wang (Nanjing University of Aeronautics and Astronautics, Nanjing, China), Guanping Xiao (Nanjing University of Aeronautics and Astronautics, China), Shuai Zhang (Nanjing University of Aeronautics and Astronautics, China), Huashan Lei (Nanjing University of Aeronautics and Astronautics, China), Yepang Liu (Southern University of Science and Technology), Yulei Sui (University of New South Wales, Australia)
DOI Pre-print Media Attached
16:15
15m
Talk
[Remote] An Extensive Study on Adversarial Attack against Pre-trained Models of Code
Research Papers
Xiaohu Du (Huazhong University of Science and Technology), Ming Wen (Huazhong University of Science and Technology), Zichao Wei (Huazhong University of Science and Technology), Shangwen Wang (National University of Defense Technology), Hai Jin (Huazhong University of Science and Technology)
Media Attached
16:30
15m
Talk
Can Machine Learning Pipelines Be Better Configured?
Research Papers
Yibo Wang (Northeastern University), Ying Wang (Northeastern University), Tingwei Zhang (Northeastern University), Yue Yu (National University of Defense Technology), Shing-Chi Cheung (Department of Computer Science and Engineering, The Hong Kong University of Science and Technology), Hai Yu (Software College, Northeastern University), Zhiliang Zhu (Software College, Northeastern University)
Media Attached
16:45
15m
Talk
Towards Feature-Based Analysis of the Machine Learning Development Lifecycle
Ideas, Visions and Reflections
Boyue Caroline Hu (University of Toronto), Marsha Chechik (University of Toronto)
Media Attached
17:00
15m
Talk
Fix Fairness, Don’t Ruin Accuracy: Performance Aware Fairness Repair using AutoML
Research Papers
Giang Nguyen (Dept. of Computer Science, Iowa State University), Sumon Biswas (Carnegie Mellon University), Hridesh Rajan (Dept. of Computer Science, Iowa State University)
Pre-print Media Attached
17:15
15m
Talk
BiasAsker: Measuring the Bias in Conversational AI System
Research Papers
Yuxuan Wan (The Chinese University of Hong Kong), Wenxuan Wang (Chinese University of Hong Kong), Pinjia He (The Chinese University of Hong Kong, Shenzhen), Jiazhen Gu (Chinese University of Hong Kong), Haonan Bai (The Chinese University of Hong Kong), Michael Lyu (The Chinese University of Hong Kong)
Media Attached
17:30
15m
Talk
Pitfalls in Experiments with DNN4SE: An Analysis of the State of the Practice
Research Papers
Sira Vegas (Universidad Politecnica de Madrid), Sebastian Elbaum (University of Virginia)
Media Attached
17:45
15m
Talk
DecompoVision: Reliability Analysis of Machine Vision Components Through Decomposition and Reuse
Research Papers
Boyue Caroline Hu (University of Toronto), Lina Marsso (University of Toronto), Nikita Dvornik (Waabi), Huakun Shen (University of Toronto), Marsha Chechik (University of Toronto)
Media Attached