author: Dennis de Weerdt
title: Can a tool learn its own optimal settings?
keywords: benchmarking, auto-tuning, learning, meta-heuristics
topics: Algorithms and Data Structures
committee: Jaco van de Pol, Jeroen Meijer
end: July 2016


With LTSmin we have created an enormously powerful toolset that can analyse large (software) systems for correct and interesting behaviour. Large-scale applications include railway safety systems, CERN's LHC control software, and biological cell research. LTSmin has won a challenging competition in the analysis of C programs.

LTSmin is a powerful collection of tools, all based on some form of graph traversal, which can run on sequential machines, but also on clusters or multi-core hardware. Like all high-performance engines, LTSmin comes with an impressive number of options and settings, which bewilder novice users and sometimes even its designers.

The goal of this project is to help select the best options and settings. Several possibilities come to mind, some obvious, some smart, and maybe you can contribute other smart ideas as well!


- provide a better, possibly graphical or wizard-like, user interface to guide the user in selecting optimal settings

- experiment and measure with several options in order to find out which ones work well in practice. We do have extensive benchmark suites.

- try to understand how the best options and settings depend on the input models, and let the tool suggest and/or select options based on the input. Here we enter the realm of heuristics.

- let the tool run for a while on a concrete instance, and adapt its settings based on measurements gathered during this initial run.

- let the tool learn how to set its own options by means of genetic algorithms or neural networks. Here we enter the world of meta-heuristics.
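To make the last idea concrete, here is a minimal sketch of a genetic algorithm evolving a configuration of boolean option flags. Everything in it is hypothetical: the flag names and the `fitness` function are stand-ins, since a real implementation would invoke LTSmin on a benchmark model and score a configuration by, for example, its negated runtime.

```python
import random

# Hypothetical option flags; a real tuner would draw these from LTSmin's
# actual option space.
OPTIONS = ["--saturation", "--por", "--cache", "--regroup"]

def fitness(config):
    # Stand-in for a real benchmark run: pretend some flags help, some hurt.
    # In practice: run the tool with the enabled flags and return -runtime.
    weights = {"--saturation": 3, "--por": 2, "--cache": 1, "--regroup": -1}
    return sum(weights[opt] for opt, on in zip(OPTIONS, config) if on)

def mutate(config, rate=0.2):
    # Flip each flag independently with probability `rate`.
    return [bit if random.random() > rate else not bit for bit in config]

def crossover(a, b):
    # Single-point crossover: take a prefix of one parent, suffix of the other.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=20, generations=40):
    # Random initial population of flag vectors.
    population = [[random.random() < 0.5 for _ in OPTIONS]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half unchanged (elitism), breed the rest from it.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print([opt for opt, on in zip(OPTIONS, best) if on])
```

The same loop structure carries over unchanged when the simulated fitness is replaced by timed tool runs; the expensive part then becomes the fitness evaluation, which is why benchmarking infrastructure matters for this project.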


  1. LTSmin web site
  2. Wikipedia on meta-heuristics

Additional Resources

  1. The paper
  2. BeAT Design project: a framework for systematic benchmarking and analysis