author: Gijs Kant
title: Distributed State Space Generation for Graphs up to Isomorphism
keywords: <a href="http://groove.cs.utwente.nl/">Groove</a>
topics:
committee: Arend Rensink,
Eduardo Zambon,
Stefan Blom
end: August 2010

Abstract

In order to achieve reliability of systems, formal verification techniques are required. We model systems as graph transition systems, where states are represented by graphs and transitions between states are defined by graph transformation rules. For formal verification, often the state space of the system has to be generated, i.e., the set of all reachable states. The main problem with generating the state space is its size, which tends to grow exponentially with the size of the modelled system. This results in enormous computation time and memory usage even for relatively small systems. Recent developments in hardware design aim towards multi-core and multi-processor architectures. To benefit from these we require distributed tools: the task of state space generation needs to be split up into subtasks that can be distributed to multiple cores, called workers.
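The split into workers can be illustrated with a static hash partition of the state space, a common scheme in distributed model checkers. This is an illustrative sketch only; the function name and the choice of hash are our own, not LTSmin's API.

```python
import hashlib

def owner(state: bytes, num_workers: int) -> int:
    """Map a serialised state to the worker that owns it.

    Using a cryptographic hash gives every worker a well-balanced
    share of the states (hypothetical choice; any uniform hash works).
    """
    digest = hashlib.sha1(state).digest()
    return int.from_bytes(digest[:4], "big") % num_workers

# Each worker stores and explores only the states it owns; successor
# states owned by other workers are forwarded to their owners.
```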

In this report we present a tool for distributed state space generation for graph-based systems. The tool uses LTSmin for efficiently distributing the states over the workers and for storing the state space, and instances of Groove for computing successor states by applying graph transformation rules. Because LTSmin uses fixed sized state vectors to represent states, a serialisation from graphs to state vectors and its reverse are required. We define two such serialisation functions: one that partitions the graph by its nodes (node vector) and one that uses edge labels to partition the graph (label vector).
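To make the node-vector idea concrete, the following is a simplified sketch of a serialisation that gives each node one slot holding its (sorted) outgoing edges, so the vector length is fixed by the number of nodes. This is our own illustration, not the report's exact encoding.

```python
def node_vector(num_nodes, edges):
    """Serialise a graph, given as (src, label, tgt) triples,
    into a fixed-size vector with one slot per node."""
    vec = [[] for _ in range(num_nodes)]
    for src, label, tgt in edges:
        vec[src].append((label, tgt))
    # Sorting each slot makes the encoding deterministic.
    return tuple(tuple(sorted(slot)) for slot in vec)

def from_node_vector(vec):
    """Inverse serialisation: recover the edge set from the vector."""
    return {(src, label, tgt)
            for src, slot in enumerate(vec)
            for (label, tgt) in slot}

ring = {(0, "next", 1), (1, "next", 2), (2, "next", 0)}
assert from_node_vector(node_vector(3, ring)) == ring
```

The round trip at the end shows the key requirement on any such serialisation: it must be invertible, since workers need to reconstruct the graph from the vector to apply transformation rules.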

In graph transformation systems a powerful symmetry reduction can be achieved by using isomorphism checking [1]. Isomorphic states can be merged, resulting in a reduced transition system that is bisimilar to the transition system without reduction, which implies that they satisfy the same semantic properties [2]. In our distributed tool we compute a canonical form for each computed successor graph, which enables us to also distribute the isomorphism reduction to the workers. For computing canonical forms, Bliss is used together with a conversion that is needed because Groove uses a slightly different graph formalism (edge labelled graphs) than Bliss (coloured graphs) does.
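The conversion mentioned above can be sketched with the standard reduction from edge-labelled graphs to node-coloured graphs: each labelled edge becomes an extra node coloured by its label, connected by two unlabelled edges. This is a generic construction for illustration; the exact encoding used with Bliss in the report may differ, and it shows why the conversion blows up the graph size.

```python
def to_coloured(nodes, edges):
    """Reduce an edge-labelled graph to a node-coloured graph.

    nodes: set of integer node ids (assumption for this sketch);
    edges: (src, label, tgt) triples.
    Returns (colours, plain_edges), where every labelled edge has
    become one fresh node carrying the label as its colour, plus
    two unlabelled edges -- so the graph grows by one node and one
    edge per original edge.
    """
    colours = {n: "node" for n in nodes}
    plain = []
    next_id = max(nodes) + 1
    for src, label, tgt in sorted(edges):
        colours[next_id] = label
        plain.append((src, next_id))
        plain.append((next_id, tgt))
        next_id += 1
    return colours, plain

cols, plain = to_coloured({0, 1}, {(0, "a", 1)})
# the fresh node 2 carries colour "a" and sits between nodes 0 and 1
```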

We have performed experiments to investigate the time and memory performance of the distributed tool, based on LTSmin, with the two different serialisation functions, compared to the sequential version of Groove for three different case studies. The experiments show that the node vector encoding is better than the label vector encoding both in memory usage and execution time.

The distributed setup with the node vector serialisation results in orders of magnitude less memory usage than sequential Groove for the very symmetric cases. This is surprising because the storage in Groove is optimised for graphs, storing only the differences (deltas) between state graphs instead of the complete graphs themselves.

The execution time for LTSmin with one core is much worse than that of Groove, because Groove uses a canonical hashcode, which often makes full isomorphism checks unnecessary, and a kind of partial order reduction for graph transition systems that is not applicable in the distributed setting. Also, the conversion to coloured graphs, needed for computing canonical forms using Bliss, blows up the size of the graphs. However, for the larger models the distributed solution scales well. For all reported case studies there are start graphs for which Groove runs out of memory or is not capable of finishing within the time limit, while the distributed tool can still generate the state space. In one of the cases, a speedup of 16 is achieved with 64 workers for the largest start graph for which Groove could also generate the state space within the time limit. For the very symmetric cases most of the time is spent on isomorphism checking in Groove. In the distributed setting, the speedup is explained by the time spent on canonical form computation, which decreases linearly as the number of workers increases. As the number of workers increases, however, the communication overhead also grows.


References

  1. A. Rensink. Explicit State Model Checking for Graph Grammars. In: Concurrency, Graphs and Models. Lecture Notes in Computer Science 5065, Springer Verlag, Berlin, pp. 114-132. 2008.
  2. A. Rensink. Isomorphism Checking for Symmetry Reduction. Technical Report TR-CTIT-10-27, Centre for Telematics and Information Technology, University of Twente, Enschede. 2010.

Additional Resources

  1. The paper