Tue 5 Dec 2023 14:30 - 14:45 at Golden Gate A - Empirical Studies I Chair(s): Cristian Cadar

Proponents of software verification have argued that simpler code is easier to verify: that is, that verification tools issue fewer false positives and require less human intervention when analyzing simpler code. We empirically validate this assumption by comparing the number of warnings produced by four state-of-the-art verification tools on 211 snippets of Java code with 20 metrics of code comprehensibility from human subjects in six prior studies.

Our experiments, based on a statistical (meta-)analysis, show that, in aggregate, there is a small correlation (r = 0.23) between understandability and verifiability. The results support the claim that easy-to-verify code is often easier to understand than code that requires more effort to verify. Our work has implications for the users and designers of verification tools and for future attempts to automatically measure code comprehensibility: verification tools may have ancillary benefits to understandability, and measuring understandability may require reasoning about semantic, not just syntactic, code properties.
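
As a concrete illustration of the kind of statistic reported above, the following minimal Python sketch computes a Pearson correlation between per-snippet warning counts and understandability scores. All data values are invented, and the use of scipy.stats.pearsonr is an assumption for illustration; this is not the paper's actual analysis pipeline, which aggregates results across four tools, 20 metrics, and six prior studies via a meta-analysis.

    # Minimal illustrative sketch, not the paper's analysis pipeline.
    # All numbers below are invented: fewer warnings = easier to verify,
    # higher score = easier to understand.
    from scipy import stats

    warning_counts = [0, 2, 5, 1, 8, 3, 0, 4]   # warnings from one verifier
    understandability = [0.9, 0.7, 0.4, 0.8, 0.2, 0.5, 0.95, 0.45]

    # Pearson's r measures linear association. A negative r between warning
    # counts and understandability corresponds to a positive correlation
    # between verifiability and understandability, in the direction of the
    # aggregate r = 0.23 the paper reports.
    r, p = stats.pearsonr(warning_counts, understandability)
    print(f"r = {r:.2f}, p = {p:.3f}")

In the paper's setting, such a correlation would be computed per tool and per comprehensibility metric and then combined statistically; the sketch shows only a single hypothetical pairing.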

Tue 5 Dec

Displayed time zone: Pacific Time (US & Canada)

14:00 - 15:30
Empirical Studies I
Ideas, Visions and Reflections / Research Papers / Industry Papers / Journal First at Golden Gate A
Chair(s): Cristian Cadar Imperial College London
14:00
15m
Talk
[Remote] Assess and Summarize: Improve Outage Understanding with Large Language Models
Industry Papers
Pengxiang Jin Nankai University, Shenglin Zhang Nankai University, Minghua Ma Microsoft Research, Haozhe Li Peking University, Yu Kang Microsoft Research, Liqun Li Microsoft Research, Yudong Liu Microsoft Research, Bo Qiao Microsoft Research, Chaoyun Zhang Microsoft, Pu Zhao Microsoft Research, Shilin He Microsoft Research, Federica Sarro University College London, Yingnong Dang Microsoft Azure, Saravan Rajmohan Microsoft 365, Qingwei Lin Microsoft, Dongmei Zhang Microsoft Research
14:15
15m
Talk
Open Source License Inconsistencies on GitHub
Journal First
Thomas Wolter Friedrich-Alexander University Erlangen-Nürnberg, Ann Barcomb Schulich School of Engineering, University of Calgary, Dirk Riehle Friedrich-Alexander University Erlangen-Nürnberg, Nikolay Harutyunyan Friedrich-Alexander University Erlangen-Nürnberg
14:30
15m
Talk
On the Relationship Between Code Verifiability and Understandability
Research Papers
Kobi Feldman William & Mary, Martin Kellogg New Jersey Institute of Technology, Oscar Chaparro William & Mary
14:45
15m
Talk
Lessons from the Long Tail: Analysing Unsafe Dependency Updates across Software Ecosystems
Ideas, Visions and Reflections
Supatsara Wattanakriengkrai Nara Institute of Science and Technology, Raula Gaikovina Kula Nara Institute of Science and Technology, Christoph Treude University of Melbourne, Kenichi Matsumoto Nara Institute of Science and Technology
15:00
15m
Talk
Towards Greener Yet Powerful Code Generation via Quantization: An Empirical Study
Research Papers
Xiaokai Wei AWS AI Labs, Sujan Kumar Gonugondla AWS AI Labs, Shiqi Wang AWS AI Labs, Wasi Ahmad AWS AI Labs, Baishakhi Ray Columbia University, Haifeng Qian AWS AI Labs, Xiaopeng Li AWS AI Labs, Varun Kumar AWS AI Labs, Zijian Wang AWS AI Labs, Yuchen Tian AWS, Qing Sun AWS AI Labs, Ben Athiwaratkun AWS AI Labs, Mingyue Shang AWS AI Labs, Murali Krishna Ramanathan AWS AI Labs, Parminder Bhatia AWS AI Labs, Bing Xiang AWS AI Labs
15:15
15m
Talk
Understanding Hackers' Work: An Empirical Study of Offensive Security Practitioners
Industry Papers
Andreas Happe TU Wien, Jürgen Cito TU Wien