Symposium: Mini-symposium Model-Based Testing
When: Sept. 18, 2014, 9:30-16:00
Where: Horsttoren, T1300
Registration and cost
Registration can be done by sending an email to Joke Lammerink: J.M.W.Lammerink@utwente.nl. Please indicate whether you will stay for lunch and whether you will attend the reception.
Participation in the mini-symposium is free of charge.
Model-Based Testing for Embedded Systems: Observations and Challenges; or: What do Dykes, De Ruyter, and Wafer Scanners have in common?
High-tech systems increasingly depend on software: software controls, connects, and monitors almost every aspect of modern high-tech system operation. Consequently, overall system quality and reliability are increasingly determined by the quality of the embedded software. How can we assess and test the quality of high-tech embedded systems? This presentation will discuss some trends, issues, and challenges in testing high-tech embedded systems, and to what extent model-based testing can help alleviate these issues.
Model-based conformance test generation for timed systems
The talk presents the main principles of automatic test generation for timed automata models. We consider the model of timed automata with inputs/outputs (TAIOs), an extension of timed automata that is well suited for the specification of systems with both timing constraints and interaction with their environment. We review the underlying testing theory, tioco, which extends the classical ioco theory to the timed context. Test generation from TAIOs is then explained, in particular the underlying problem of partial observation for models that are a priori non-deterministic, or even non-determinizable. Reference paper: http://arxiv.org/pdf/1207.6267.pdf
The BEAT project: BEtter testing with gAme Theory
Testing is naturally phrased as a game, where the tester tries to find faults, playing against the system-under-test. The goal of the BEAT project is to improve test effectiveness and efficiency by using mathematical game theory, yielding higher system quality at lower costs.
Empirical Research Methods for Technology Validation
In this talk I will discuss methodological aspects of the role of testing when scaling up technology from the laboratory to real-world practice.
What's a good test case? Textbooks, practice, and intuition suggest: one that reveals a fault. But the thought experiment of a perfect program shows the deficiency of this definition: in that case, there would not be any good test cases. A more adequate definition is that a good test case reveals likely potential faults with good cost effectiveness. The model-based testing community tends to answer this question in one of two ways: good test cases are defined by coverage (because we can) and by explicit test purposes (because we sense that there must be more, but others should do the work). In this talk, we argue why coverage-based testing is inherently problematic, if not useless, and propose to complement explicit test purposes with fault models. These encode what typically goes wrong in a specific domain, technology, company, or application family, and describe what can potentially go wrong, thus catering to the above definition of good test cases. We discuss the nature of potential faults, formalize them, provide examples, and discuss their operationalization for test case generation, also outside the domain of model-based testing.
Machiel van der Bijl
Model-Based Testing: the Difference between Theory and Practice
Model-based testing has been around for at least 20 years, at least in academia. It solves a problem that is recognized by industry: thorough testing of (complex) software/hardware systems. When one explains model-based testing to an engineer from another engineering discipline, say construction, the question invariably arises why the technique is not used in practice. This talk is about what it takes to apply model-based testing to industry-grade systems.