ESEC/FSE 2021
Thu 19 - Sat 28 August 2021

Software defect prediction models are classifiers often built by setting a threshold t on a defect proneness model, i.e., a scoring function. For instance, they classify a software module as non-faulty if its defect proneness is below t and as faulty otherwise. Different values of t may lead to different defect prediction models, possibly with very different performance levels. Receiver Operating Characteristic (ROC) curves provide an overall assessment of a defect proneness model, by taking into account all possible values of t and thus all defect prediction models that can be built based on it.
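
To make the thresholding idea concrete, here is a minimal sketch in Python (plain NumPy; the roc_points helper and the six-module scores and labels are purely hypothetical toy data, not from the paper). It turns a defect proneness score into one defect prediction model per threshold t and records the (false positive rate, true positive rate) point that each of those models contributes to the ROC curve.

import numpy as np

def roc_points(scores, labels):
    """One ROC point per threshold t: a module is predicted faulty
    when its defect proneness score is >= t."""
    thresholds = np.unique(scores)[::-1]          # from strictest to most lenient
    positives = labels.sum()
    negatives = len(labels) - positives
    points = []
    for t in thresholds:
        predicted_faulty = scores >= t
        tpr = np.sum(predicted_faulty & (labels == 1)) / positives   # Recall
        fpr = np.sum(predicted_faulty & (labels == 0)) / negatives   # fall-out
        points.append((fpr, tpr))
    return points

# Hypothetical toy data: proneness scores and true fault labels for six modules
scores = np.array([0.9, 0.8, 0.55, 0.4, 0.3, 0.1])
labels = np.array([1, 1, 0, 1, 0, 0])
print(roc_points(scores, labels))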

However, using a defect proneness model with a given value of t is sensible only if the resulting defect prediction model performs at least as well as some minimal performance level, which depends on practitioners’ and researchers’ goals and needs.

We introduce a new approach and a new performance metric (the Ratio of Relevant Areas) for assessing a defect proneness model by taking into account only the parts of a ROC curve corresponding to values of t for which the resulting defect prediction models perform better than some reference value.
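
The precise definition of the Ratio of Relevant Areas is given in the paper; the sketch below only illustrates the general idea under stated assumptions. It keeps the ROC points whose thresholds reach a chosen Precision level and compares the area under that restricted portion of the curve with the area under the full curve. The use of Precision, the 0.7 reference value, the anchoring of both curves at (0, 0), and the helper names trapezoid_area and restricted_vs_full_area are all illustrative assumptions, not the paper's metric.

import numpy as np

def trapezoid_area(points):
    """Area under a piecewise-linear curve given as (x, y) points."""
    pts = sorted(points)
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def restricted_vs_full_area(scores, labels, min_precision=0.7):
    """Illustration only: ratio between the ROC area restricted to
    thresholds whose Precision reaches min_precision and the full ROC area."""
    positives = labels.sum()
    negatives = len(labels) - positives
    full_curve = [(0.0, 0.0)]        # (0, 0) anchor: nothing predicted faulty
    relevant_curve = [(0.0, 0.0)]    # simplification: keep the same anchor
    for t in np.unique(scores)[::-1]:
        predicted = scores >= t
        tp = np.sum(predicted & (labels == 1))
        fp = np.sum(predicted & (labels == 0))
        point = (fp / negatives, tp / positives)
        full_curve.append(point)
        if tp / (tp + fp) >= min_precision:      # threshold deemed "relevant"
            relevant_curve.append(point)
    return trapezoid_area(relevant_curve) / trapezoid_area(full_curve)

# Same hypothetical toy data as above
scores = np.array([0.9, 0.8, 0.55, 0.4, 0.3, 0.1])
labels = np.array([1, 1, 0, 1, 0, 0])
print(restricted_vs_full_area(scores, labels))   # 0.3125 on this toy data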

We provide the practical motivations and theoretical underpinnings for our approach, by: 1) showing how it addresses the shortcomings of existing performance metrics like the Area Under the Curve and Gini’s coefficient; 2) deriving reference values based on random defect prediction policies, in addition to deterministic ones; 3) showing how the approach works with several performance metrics (e.g., Precision and Recall) and their combinations; 4) studying misclassification costs and providing a general upper bound for the cost related to the use of any defect proneness model; 5) showing the relationships between misclassification costs and performance metrics.
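
As a quick reference for point 1) above, this self-contained sketch (same hypothetical toy data; the auc_and_gini helper is illustrative) computes the two existing summary metrics: the AUC, via its probabilistic (Mann-Whitney) formulation as the chance that a randomly chosen faulty module gets a higher proneness score than a randomly chosen non-faulty one, and Gini's coefficient, which equals 2*AUC - 1. Both summarize the whole ROC curve, including regions of no practical interest, which is the shortcoming the restricted assessment above is meant to address.

import numpy as np

def auc_and_gini(scores, labels):
    """AUC as P(score of a random faulty module > score of a random
    non-faulty module), with ties counted as one half; Gini = 2*AUC - 1,
    so a random ranking gives AUC = 0.5 and Gini = 0."""
    faulty = scores[labels == 1]
    non_faulty = scores[labels == 0]
    higher = np.sum(faulty[:, None] > non_faulty[None, :])
    ties = np.sum(faulty[:, None] == non_faulty[None, :])
    auc = (higher + 0.5 * ties) / (len(faulty) * len(non_faulty))
    return auc, 2 * auc - 1

# Same hypothetical toy data as above
scores = np.array([0.9, 0.8, 0.55, 0.4, 0.3, 0.1])
labels = np.array([1, 1, 0, 1, 0, 0])
print(auc_and_gini(scores, labels))   # AUC = 8/9 ~ 0.889, Gini = 7/9 ~ 0.778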

We also carried out a comprehensive empirical study on real-life data from the SEACRAFT repository, to show how our metric differs from the existing ones and how it can be more reliable and less misleading.

Wed 25 Aug

Displayed time zone: Athens

19:00 - 20:00 | Analytics & Software Evolution—Defect Prediction and Effort Estimation | Research Papers / Journal First | +12h
Chair(s): Davide Di Ruscio (University of L'Aquila)
19:00 (10m) | Paper | Journal First
Learning From Mistakes: Machine Learning Enhanced Human Expert Effort Estimates
Federica Sarro (University College London), Rebecca Moussa (University College London), Alessio Petrozziello (University College London), Mark Harman (University College London)
19:10 (10m) | Paper | Research Papers
Sound and Efficient Concurrency Bug Prediction (Artifacts Reusable)
Yan Cai (Institute of Software at Chinese Academy of Sciences), Hao Yun (Institute of Software at Chinese Academy of Sciences), Jinqiu Wang (Institute of Software at Chinese Academy of Sciences), Lei Qiao (Beijing Institute of Control Engineering), Jens Palsberg (University of California at Los Angeles)
19:20 (10m) | Paper | Journal First
On the Assessment of Software Defect Prediction Models via ROC Curves
Sandro Morasca (Università degli Studi dell'Insubria), Luigi Lavazza (Università degli Studi dell'Insubria)
19:30 (30m) | Live Q&A | Research Papers
Q&A (Analytics & Software Evolution—Defect Prediction and Effort Estimation)

Thu 26 Aug

Displayed time zone: Athens

07:00 - 08:00 | Analytics & Software Evolution—Defect Prediction and Effort Estimation | Journal First / Research Papers
Chair(s): Alexander Chatzigeorgiou (University of Macedonia)
07:00 (10m) | Paper | Journal First
Learning From Mistakes: Machine Learning Enhanced Human Expert Effort Estimates
Federica Sarro (University College London), Rebecca Moussa (University College London), Alessio Petrozziello (University College London), Mark Harman (University College London)
07:10 (10m) | Paper | Research Papers
Sound and Efficient Concurrency Bug Prediction (Artifacts Reusable)
Yan Cai (Institute of Software at Chinese Academy of Sciences), Hao Yun (Institute of Software at Chinese Academy of Sciences), Jinqiu Wang (Institute of Software at Chinese Academy of Sciences), Lei Qiao (Beijing Institute of Control Engineering), Jens Palsberg (University of California at Los Angeles)
07:20 (10m) | Paper | Journal First
On the Assessment of Software Defect Prediction Models via ROC Curves
Sandro Morasca (Università degli Studi dell'Insubria), Luigi Lavazza (Università degli Studi dell'Insubria)
07:30 (30m) | Live Q&A | Research Papers
Q&A (Analytics & Software Evolution—Defect Prediction and Effort Estimation)