Evaluating Search-Based Software Microbenchmark Prioritization

  • Christoph Laaber*
  • Tao Yue
  • Shaukat Ali

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Ensuring that software performance does not degrade after a code change is paramount. A solution is to regularly execute software microbenchmarks, a performance testing technique similar to (functional) unit tests, which, however, often becomes infeasible due to extensive runtimes. To address that challenge, research has investigated regression testing techniques, such as test case prioritization (TCP), which reorder the execution within a microbenchmark suite to detect larger performance changes sooner. Such techniques are either designed for unit tests and perform sub-par on microbenchmarks or require complex performance models, drastically reducing their potential application. In this paper, we empirically evaluate single- and multi-objective search-based microbenchmark prioritization techniques to understand whether they are more effective and efficient than greedy, coverage-based techniques. For this, we devise three search objectives, i.e., coverage to maximize, coverage overlap to minimize, and historical performance change detection to maximize. We find that search algorithms (SAs) are only competitive with but do not outperform the best greedy, coverage-based baselines. However, a simple greedy technique utilizing solely the performance change history (without coverage information) is equally or more effective than the best coverage-based techniques while being considerably more efficient, with a runtime overhead of less than 1%. These results show that simple, non-coverage-based techniques are a better fit for microbenchmarks than complex coverage-based techniques.
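The history-based greedy technique highlighted in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it only demonstrates the general idea of ordering benchmarks by the magnitude of performance changes observed in past versions, so that historically volatile benchmarks run first. The function name and the example history are hypothetical.

```python
from typing import Dict, List

def prioritize_by_history(benchmarks: List[str],
                          change_history: Dict[str, List[float]]) -> List[str]:
    """Order benchmarks so that those with the largest historical
    performance changes (absolute percent change across past versions)
    are executed first. Benchmarks without history score 0.0."""
    def change_score(bench: str) -> float:
        changes = change_history.get(bench, [])
        return max((abs(c) for c in changes), default=0.0)
    return sorted(benchmarks, key=change_score, reverse=True)

# Hypothetical history: percent changes observed in past versions.
history = {
    "parseJson": [0.5, -12.0, 1.1],  # once regressed by 12%
    "hashBytes": [0.2, 0.3],         # historically stable
    "sortList": [25.0],              # large past change
}
order = prioritize_by_history(list(history), history)
# → ["sortList", "parseJson", "hashBytes"]
```

Because this ranking needs no coverage instrumentation, its runtime cost is a single sort over the suite, which is consistent with the abstract's observation of under 1% overhead for the history-only technique.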

Original language: English
Pages (from-to): 1687-1703
Number of pages: 17
Journal: IEEE Transactions on Software Engineering
Volume: 50
Issue number: 7
DOIs
State: Published - 2024

Keywords

  • JMH
  • Software microbenchmarking
  • multi-objective optimization
  • performance testing
  • regression testing
  • search-based software engineering
  • test case prioritization
