ESEC/FSE 2021
Thu 19 - Sat 28 August 2021, Clowdr Platform

Modern software applications are increasingly configurable, which puts a burden on users to tune these configurations for their target hardware and workloads. To help users, machine learning techniques can model the complex relationships between software configuration parameters and performance. While powerful, these learners have two major drawbacks: (1) they rarely incorporate prior knowledge and (2) they produce outputs that are not interpretable by users. These limitations make it difficult to (1) leverage information a user has already collected (e.g., tuning for new hardware using the best configurations from old hardware) and (2) gain insights into the learner's behavior (e.g., understanding why the learner chose different configurations on different hardware or for different workloads). To address these issues, this paper presents two configuration optimization tools, GIL and GIL+, using the proposed generalizable and interpretable learning approaches. To incorporate prior knowledge, the proposed tools (1) start from known configurations, (2) iteratively construct a new linear model, (3) extrapolate better-performing configurations from that model, and (4) repeat. Since the base learners are linear models, these tools are inherently interpretable. We enhance this property with a graphical representation of how they arrived at the highest-performance configuration. We evaluate GIL and GIL+ by using them to configure Apache Spark workloads on different hardware platforms and find that, compared to prior work, GIL and GIL+ produce comparable, and sometimes even better, performance configurations, but with interpretable results.
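
The iterative loop described in the abstract can be pictured with a short sketch. The following is a minimal, hypothetical illustration of that idea, not the authors' implementation: it assumes a numeric configuration vector, a made-up `measure_performance` stand-in for running a workload (e.g., a Spark job), and an ordinary least-squares model whose coefficients suggest where to extrapolate next.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical stand-in for running a workload with a given configuration
# and measuring its performance (higher is better). In practice this would
# execute the actual application.
def measure_performance(config):
    optimum = np.array([0.7, 0.3, 0.9])
    return -np.sum((config - optimum) ** 2)

# Bounds of each (normalized) configuration parameter.
LOW, HIGH = 0.0, 1.0

def gil_style_search(initial_configs, iterations=10, step=0.1):
    """Sketch of the loop in the abstract: (1) start from known
    configurations, (2) fit a linear model, (3) extrapolate toward
    better-performing configurations, (4) repeat."""
    configs = [np.asarray(c, dtype=float) for c in initial_configs]
    scores = [measure_performance(c) for c in configs]
    for _ in range(iterations):
        model = LinearRegression().fit(np.vstack(configs), np.array(scores))
        # Extrapolate: nudge the best configuration found so far in the
        # direction the linear model predicts improves performance.
        best = configs[int(np.argmax(scores))]
        candidate = np.clip(best + step * np.sign(model.coef_), LOW, HIGH)
        configs.append(candidate)
        scores.append(measure_performance(candidate))
    best_idx = int(np.argmax(scores))
    return configs[best_idx], scores[best_idx], model

if __name__ == "__main__":
    # Known configurations, e.g., good settings carried over from old hardware.
    seeds = [np.random.rand(3) for _ in range(5)]
    best_config, best_score, final_model = gil_style_search(seeds)
    print("best configuration:", np.round(best_config, 3))
    print("measured performance:", round(best_score, 4))
    # The coefficients are directly interpretable: sign and magnitude show
    # how each parameter is believed to affect performance.
    print("linear model coefficients:", np.round(final_model.coef_, 3))
```

In this sketch the linear model's coefficients double as the interpretable output: their signs and magnitudes indicate which parameters the search currently believes matter and in which direction to move them.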

Wed 25 Aug

Displayed time zone: Athens

09:00 - 10:00
SE & AI—Machine Learning for Software Engineering 2 (Research Papers, mirrored +12h)
Chair(s): Michael Pradel (University of Stuttgart), Saikat Chakraborty (Columbia University)
09:00
10m
Paper
Empirical Study of Transformers for Source Code
Research Papers
Nadezhda Chirkova (HSE University), Sergey Troshin (HSE University)
09:10
10m
Paper
Explaining Mispredictions of Machine Learning Models using Rule Induction
Research Papers
Jürgen Cito (TU Vienna; Facebook), Işıl Dillig (University of Texas at Austin), Seohyun Kim (Facebook), Vijayaraghavan Murali (Facebook), Satish Chandra (Facebook)
09:20
10m
Paper
Generalizable and Interpretable Learning for Configuration Extrapolation
Research Papers
Yi Ding (Massachusetts Institute of Technology), Ahsan Pervaiz (University of Chicago), Michael Carbin (Massachusetts Institute of Technology), Henry Hoffmann (University of Chicago)
09:30
30m
Live Q&A
Q&A (SE & AI—Machine Learning for Software Engineering 2)
Research Papers

21:00 - 22:00
SE & AI—Machine Learning for Software Engineering 2 (Research Papers)
Chair(s): Kelly Lyons (University of Toronto), Phuong T. Nguyen (University of L’Aquila)
21:00
10m
Paper
Empirical Study of Transformers for Source Code
Research Papers
Nadezhda Chirkova (HSE University), Sergey Troshin (HSE University)
21:10
10m
Paper
Explaining Mispredictions of Machine Learning Models using Rule Induction
Research Papers
Jürgen Cito (TU Vienna; Facebook), Işıl Dillig (University of Texas at Austin), Seohyun Kim (Facebook), Vijayaraghavan Murali (Facebook), Satish Chandra (Facebook)
21:20
10m
Paper
Generalizable and Interpretable Learning for Configuration Extrapolation
Research Papers
Yi Ding (Massachusetts Institute of Technology), Ahsan Pervaiz (University of Chicago), Michael Carbin (Massachusetts Institute of Technology), Henry Hoffmann (University of Chicago)
21:30
30m
Live Q&A
Q&A (SE & AI—Machine Learning for Software Engineering 2)
Research Papers