Tue 5 Dec 2023, 14:00 - 14:15, at Golden Gate C1 - Testing II. Chair(s): Brittany Johnson

Static analyzers use rule checkers to verify the reliability, performance, and readability of programs. A key limitation of static analyzers is their failure to produce accurate analysis results (i.e., they generate too many spurious warnings or miss significant defects). To ensure the reliability of a static analyzer, developers usually write tests manually, each pairing an input program with its expected analysis results. Meanwhile, a rule checker may include example programs in its documentation to help users understand each rule. Our key insight is that we can reuse programs extracted from either the official test suite or the documentation and apply semantic-preserving transformations to them to generate variants. We studied the quality of input programs from these two sources and found that most rules in static analyzers are covered by at least one input program, implying the potential of using these programs as the basis for test generation. We present Statfier, a heuristic-based automated testing approach for static analyzers that generates program variants via semantic-preserving transformations and detects inconsistencies between the analysis results for the original program and those for its variants, which indicate inaccurate results in the static analyzer. To select variants that are more likely to reveal new bugs, Statfier leverages two key heuristics: (1) analysis-report-guided location selection, which uses the program locations in the reports produced by static analyzers to decide where to apply transformations, and (2) structure-diversity-driven variant selection, which chooses variants with different program contexts and diverse types of transformations. Our experiments with five popular static analyzers show that Statfier found 79 bugs in these analyzers, of which 46 have been confirmed.
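To make the core idea concrete, below is a minimal sketch, in Python, of this style of metamorphic testing: apply one semantic-preserving transformation to a seed program and flag any difference in the analyzer's warnings. The `analyzer` command, the one-warning-per-line output format, and the text-level transformation are illustrative assumptions, not Statfier's actual implementation; Statfier works on Java ASTs, supports many transformation types, and adds the two selection heuristics described above.

```python
"""Minimal sketch of metamorphic testing of a static analyzer.

Assumptions (not from the paper): a hypothetical command-line tool
named `analyzer` that prints one warning per line, and a single
text-level transformation instead of Statfier's AST-level ones.
"""
import os
import subprocess
import tempfile


def analyze(java_source: str) -> set[str]:
    """Run the (hypothetical) analyzer on a source file and return
    its warnings as a set of lines."""
    with tempfile.NamedTemporaryFile(
            "w", suffix=".java", delete=False) as f:
        f.write(java_source)
        path = f.name
    try:
        out = subprocess.run(["analyzer", path],
                             capture_output=True, text=True).stdout
    finally:
        os.unlink(path)
    # Normalize the temp-file path so warnings from the seed and the
    # variant stay comparable across runs.
    out = out.replace(path, "<file>")
    return {line.strip() for line in out.splitlines() if line.strip()}


def swap_loop_form(java_source: str) -> str:
    """One semantic-preserving transformation: 'while (true)' and
    'for (;;)' are equivalent infinite loops in Java, and the swap
    keeps line numbers intact."""
    return java_source.replace("while (true)", "for (;;)")


def check_seed(seed: str) -> None:
    """Report a candidate analyzer bug when a seed program and its
    semantically equivalent variant receive different warnings."""
    variant = swap_loop_form(seed)
    before, after = analyze(seed), analyze(variant)
    if before != after:
        print("Inconsistency found (possible analyzer bug):")
        print("  lost:  ", sorted(before - after))
        print("  gained:", sorted(after - before))
```

A full harness in this style would draw its seeds from the analyzer's official test suite and rule documentation, as the paper proposes, and prioritize variants using the report-guided and diversity-driven heuristics.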

Tue 5 Dec

Displayed time zone: Pacific Time (US & Canada)

14:00 - 15:30
14:00
15m
Talk
Statfier: Automated Testing of Static Analyzers via Semantic-preserving Program Transformations
Research Papers
Huaien Zhang Southern University of Science and Technology, The Hong Kong Polytechnic University, Yu Pei The Hong Kong Polytechnic University, Junjie Chen Tianjin University, Shin Hwei Tan Concordia University
14:15
15m
Talk
Towards Efficient Record and Replay: A Case Study in WeChat
Industry Papers
Sidong Feng Monash University, Haochuan Lu Tencent, Ting Xiong Tencent Inc., Yuetang Deng Tencent Inc., Chunyang Chen Monash University
14:30
15m
Talk
Contextual Predictive Mutation Testing
Research Papers
Kush Jain Carnegie Mellon University, Uri Alon Carnegie Mellon University, Alex Groce Northern Arizona University, Claire Le Goues Carnegie Mellon University
14:45
15m
Talk
Towards Automated Software Security Testing: Augmenting Penetration Testing through LLMs
Ideas, Visions and Reflections
Andreas Happe TU Wien, Jürgen Cito TU Wien
15:00
7m
Talk
LazyCow: A Lightweight Crowdsourced Testing Tool for Taming Android Fragmentation
Demonstrations
Xiaoyu Sun Australian National University, Xiao Chen Monash University, Yonghui Liu Monash University, John Grundy Monash University, Li Li Beihang University
15:08
7m
Talk
Rotten Green Tests in Google Test
Industry Papers
15:15
15m
Talk
MuAkka: Mutation Testing for Actor Concurrency in Akka Using Real-World Bugs
Research Papers
Mohsen Moradi Moghadam Oakland University, Mehdi Bagherzadeh Oakland University, Raffi Khatchadourian City University of New York (CUNY) Hunter College, Hamid Bagheri University of Nebraska-Lincoln