title: Can Machine Learning automatically choose your best Model Checking strategy?
keywords: Model checking, benchmark models, model features, machine learning, classifiers
topics: Algorithms and Data Structures, Case Studies and Applications
committee: Doina Bucur, Jaco van de Pol
type: BSc research project

Description

There is data in model checking, too: researchers design and implement ever more model checkers (and algorithmic variations for existing model checkers), and ever more benchmark models are published to test these variations. We therefore have, or can easily obtain, data that tells us how fast a particular model checker (when executed with certain options, i.e., algorithmic variations and strategies) is on a particular benchmark.
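
As a concrete illustration, such timing data could be collected by running the model checker on every benchmark under each combination of a few option values. The sketch below is hypothetical: the binary name `modelchecker` and the flags `--order` and `--state` are placeholders standing in for the real tool and its documented options.

```python
import csv
import itertools
import subprocess
import time

# Hypothetical option values; the real flag names and values depend on
# the model checker under study and must come from its documentation.
SEARCH_ORDERS = ["bfs", "dfs"]
STATE_STORAGE = ["table", "tree"]

def time_run(model_path, order, storage, timeout=600):
    """Run the model checker once; return wall-clock seconds,
    or None if the run failed or timed out."""
    cmd = ["modelchecker",        # placeholder binary name
           f"--order={order}",    # placeholder flags
           f"--state={storage}",
           model_path]
    start = time.perf_counter()
    try:
        subprocess.run(cmd, check=True, timeout=timeout,
                       capture_output=True)
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return None
    return time.perf_counter() - start

def collect(benchmarks, out_csv="runtimes.csv"):
    """Time every benchmark under every option combination."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["benchmark", "order", "storage", "seconds"])
        for model, order, storage in itertools.product(
                benchmarks, SEARCH_ORDERS, STATE_STORAGE):
            writer.writerow([model, order, storage,
                             time_run(model, order, storage)])
```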


The research question is the following: is it possible to train a classifier which, given a benchmark model to be model-checked, predicts which options should be selected when running the model checker, so that the run completes successfully and quickly?

If such a classifier can be obtained, the model checker under study would automatically adapt to the input model it is given to verify, and would gain an advantage in model-checking competitions such as the Model Checking Contest [1].
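
A minimal sketch of the learning step, assuming the timing data above was written to `runtimes.csv` and that each benchmark's structural features sit in a hypothetical `features.csv`; scikit-learn's random forest is one of many reasonable classifier choices:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# runtimes.csv: one row per (benchmark, option configuration) with the
# measured runtime; features.csv: one row of structural features per
# benchmark (e.g., place/transition counts for a Petri net).
runs = pd.read_csv("runtimes.csv")
features = pd.read_csv("features.csv", index_col="benchmark")

# Label each benchmark with the option configuration that finished
# fastest, ignoring runs that failed or timed out (empty "seconds").
runs["config"] = runs["order"] + "/" + runs["storage"]
fastest = (runs.dropna(subset=["seconds"])
               .loc[lambda d: d.groupby("benchmark")["seconds"].idxmin()])
labels = fastest.set_index("benchmark")["config"]

X = features.loc[labels.index]
y = labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```

The label for each benchmark is simply the configuration that finished fastest; cross-validation then estimates how often the classifier would pick that winning configuration for an unseen model.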

The project will focus on one model checker (LTSmin [2]) with its many possible arguments, and on hundreds of benchmarks of one model type (e.g., Petri nets).
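
For Petri nets, candidate features can be read directly off the net's structure. The sketch below assumes benchmarks in the PNML interchange format with the version-2009 namespace (older files may differ) and extracts only the simplest counts:

```python
import xml.etree.ElementTree as ET

# Namespace used by PNML version-2009 files; adjust for other variants.
NS = "{http://www.pnml.org/version-2009/grammar/pnml}"

def pnml_features(path):
    """Extract simple structural features from a PNML Petri net:
    counts of places, transitions and arcs."""
    root = ET.parse(path).getroot()
    return {
        "places":      len(root.findall(f".//{NS}place")),
        "transitions": len(root.findall(f".//{NS}transition")),
        "arcs":        len(root.findall(f".//{NS}arc")),
    }
```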


References

  1. MCC: the Model Checking Contest, https://mcc.lip6.fr/
  2. LTSmin toolset, http://ltsmin.utwente.nl, https://github.com/utwente-fmt/ltsmin