ESEC/FSE 2021
Thu 19 - Sat 28 August 2021, Clowdr Platform
Wed 25 Aug 2021 17:20 - 17:30 - SE & AI—Software Engineering for Machine Learning 1 Chair(s): Na Meng
Thu 26 Aug 2021 05:20 - 05:30 - SE & AI—Software Engineering for Machine Learning 1 Chair(s): Lei Ma

Deep learning (DL) compilers are increasingly used to generate optimized code that boosts the runtime performance of DL models on specific hardware. Like their traditional counterparts, DL compilers can generate incorrect code, resulting in unexpected model behaviors that may cause catastrophic consequences in mission-critical systems. At the same time, the DL models processed by DL compilers differ fundamentally from imperative programs in that the program logic in DL models is implicit. As such, various characteristics of the bugs arising from traditional compilers need to be revisited in the context of DL compilers.

In this paper, we present the first systematic study of DL compiler bugs by analyzing 603 bugs arising in three popular DL compilers (i.e., TVM from Apache, Glow from Facebook, and nGraph from Intel). We analyze these bugs according to their root causes, symptoms, and the compilation stages at which they occur. We obtain 12 findings and provide a series of valuable guidelines for future work on DL compiler bug detection and debugging. For example, a large portion (nearly 20%) of DL compiler bugs are related to types, especially tensor types. Analyzing these bugs helps design new mutation operators (e.g., adding a type cast for a tensor to promote implicit type conversion in subsequent tensor computations) to facilitate type-related bug detection. Further, as a proof-of-concept application of our findings, we develop TVMfuzz to test the TVM DL compiler. It generates new tests based on TVM's original test suite, and these tests expose 8 TVM bugs that the original suite misses. The result demonstrates the usefulness of our findings.
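To make the type-cast mutation concrete, here is a minimal sketch of what such a mutated test could look like against TVM's Relay Python API. The specific program, dtypes, and build settings are illustrative assumptions rather than the authors' TVMfuzz implementation; the sketch only shows how casting a tensor forces an implicit type conversion that the compiler's type inference and lowering passes must then handle.

    # Hypothetical sketch of a type-cast mutation on a small Relay test program.
    # Uses TVM's public Python API (tvm.relay); this is not the authors' TVMfuzz code.
    import tvm
    from tvm import relay

    # Original test program: y = x + 1.0 with float32 tensors.
    x = relay.var("x", shape=(2, 3), dtype="float32")
    original = relay.add(x, relay.const(1.0, dtype="float32"))

    # Mutation: cast the input tensor to float16 before the addition, so the
    # subsequent tensor computation involves an implicit type conversion and
    # exercises the compiler's type inference and lowering paths.
    mutated = relay.add(relay.cast(x, "float16"), relay.const(1.0, dtype="float16"))

    mod = tvm.IRModule.from_expr(relay.Function([x], mutated))

    # Compiling the mutated program; a type-related compiler bug would typically
    # surface here as a crash, a type-checking error, or a wrong output dtype.
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm")
    # A real generated test would also execute the compiled module and compare
    # its output against a reference implementation.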

Wed 25 Aug

Displayed time zone: Athens

17:00 - 18:00
SE & AI—Software Engineering for Machine Learning 1 (Research Papers, +12h mirror)
Chair(s): Na Meng (Virginia Tech)
17:00
10m
Paper
Probing Model Signal-Awareness via Prediction-Preserving Input Minimization
Research Papers
Sahil Suneja, Yunhui Zheng (IBM Research), Yufan Zhuang (IBM Research), Jim A. Laredo (IBM Research), Alessandro Morari (IBM Research)
DOI
17:10
10m
Paper
Generating Efficient Solvers from Constraint Models
Research Papers
Shu Lin (Peking University), Na Meng (Virginia Tech), Wenxin Li (Peking University)
DOI
17:20
10m
Paper
A Comprehensive Study of Deep Learning Compiler Bugs [Artifacts Available]
Research Papers
Qingchao Shen (Tianjin University), Haoyang Ma (Tianjin University), Junjie Chen (Tianjin University), Yongqiang Tian (University of Waterloo), Shing-Chi Cheung (Hong Kong University of Science and Technology), Xiang Chen (Nantong University)
DOI
17:30
30m
Live Q&A
Q&A (SE & AI—Software Engineering for Machine Learning 1)
Research Papers

Thu 26 Aug

Displayed time zone: Athens

05:00 - 06:00
SE & AI—Software Engineering for Machine Learning 1 (Research Papers)
Chair(s): Lei Ma (University of Alberta)
05:00
10m
Paper
Probing Model Signal-Awareness via Prediction-Preserving Input Minimization
Research Papers
Sahil Suneja, Yunhui Zheng (IBM Research), Yufan Zhuang (IBM Research), Jim A. Laredo (IBM Research), Alessandro Morari (IBM Research)
DOI
05:10
10m
Paper
Generating Efficient Solvers from Constraint Models
Research Papers
Shu Lin (Peking University), Na Meng (Virginia Tech), Wenxin Li (Peking University)
DOI
05:20
10m
Paper
A Comprehensive Study of Deep Learning Compiler Bugs [Artifacts Available]
Research Papers
Qingchao Shen (Tianjin University), Haoyang Ma (Tianjin University), Junjie Chen (Tianjin University), Yongqiang Tian (University of Waterloo), Shing-Chi Cheung (Hong Kong University of Science and Technology), Xiang Chen (Nantong University)
DOI
05:30
30m
Live Q&A
Q&A (SE & AI—Software Engineering for Machine Learning 1)
Research Papers