iFM 2013 Accepted Papers with Abstracts

Thai Son Hoang and Simon Hudon. Progress Concerns as Design Guidelines
Abstract: We present Unit-B, a formal method inspired by Event-B and UNITY, for designing systems via step-wise refinement that preserves both safety and liveness properties. In particular, we introduce the notion of coarse and fine scheduling, a generalisation of weak and strong fairness assumptions. We propose proof rules for reasoning about progress properties. Furthermore, we develop techniques for refining systems by adapting their scheduling information accordingly. As a result, Unit-B is a development method driven by both safety and liveness requirements.
Christian Prehofer. Assume-Guarantee Specifications of State Transition Diagrams for Behavioral Refinement
Abstract: In this paper, we consider extending state transition diagrams (SDs) with new features that add new events, states and transitions. The main goal is to capture when the behavior of a state transition diagram is preserved under such an extension, which we call behavioral refinement. Our behavioral semantics is based on the observable traces of input and output events, respectively. We use assume/guarantee specifications to specify both the base SD and the extensions, where assumptions limit the permitted input streams. Based on this, we formalize behavioral refinement and also determine suitable assumptions on the input for the extended SD. Because existing techniques for behavioral refinement are limited in that they only abstract from existing events, we develop new concepts based on the elimination of the added behavior at the trace level and present new refinement concepts. We introduce new refinement relations and show that properties are preserved even when the new features are added.
Kenneth Lausdahl. Translating VDM to Alloy
Abstract: The Vienna Development Method is one of the longest-established formal methods. Initial software design is often best described using implicit specifications, but limited tool support exists to help with the difficult task of validating that such specifications capture their intended meaning. Traditionally, theorem provers are used to prove that specifications are correct, but this process is highly dependent on expert users. Alternatively, model finding has proved to be useful for the validation of specifications. The Alloy Analyzer is an automated model finder for checking and visualising Alloy specifications. However, to take advantage of the automated analysis of Alloy, the model-oriented VDM specifications must be translated into the constraint-based Alloy specifications. We describe how a subset of VDM can be translated into Alloy and how assertions can be expressed in VDM and checked by the Alloy Analyzer.
Dimitrios Vekris, Frederic Lang, Catalin Dima and Radu Mateescu. Verification of EB3 Specification using CADP
Abstract: EB3 is a specification language for information systems. The core of the EB3 language consists of process algebraic specifications describing the behaviour of the entity types in a system, and attribute function definitions describing the entity attribute types. The verification of EB3 specifications against temporal properties is of great interest to users of EB3. In this paper, we propose a translation of EB3 to LOTOS NT (LNT for short), a value-passing concurrent language with classical process algebra features. Our translation ensures a one-to-one correspondence between the states and transitions of the labelled transition systems corresponding to the EB3 and LNT specifications. We have automated this translation with the EB32LNT tool, which makes it possible to verify EB3 specifications using the CADP toolbox.
Murat Moran, James Heather and Steve Schneider. Automated Anonymity Verification of ThreeBallot Voting System
Abstract: In recent years, a large number of secure voting protocols have been proposed in the literature. Often these protocols contain flaws, but because of their complexity, rigorous formal analysis has proven hard to come by.

Rivest's ThreeBallot voting system is important because it aims to provide security (voter anonymity and voter verifiability) without requiring cryptography. In this paper, we construct a CSP model of ThreeBallot, and use it to produce the first automated formal analysis of its anonymity property.

Along the way, we discover that one of the crucial assumptions under which ThreeBallot (and many other voting systems) operates --- the Short Ballot Assumption --- is highly ambiguous in the literature. We give various plausible precise interpretations, and discover that in each case, the interpretation is either unrealistically strong or fails to ensure anonymity. Therefore, we give a version of the Short Ballot Assumption for ThreeBallot that is realistic but still provides a guarantee of anonymity.
Jean-Vivien Millo, Krishna S, Ganesh Narwane and S Ramesh. Compositional Verification of Software Product Lines
Abstract: This paper presents a novel approach to the design verification of Software Product Lines (SPLs). The proposed approach assumes that the requirements and designs are modeled as finite state machines with variability information. The variability information at the requirement and design levels is expressed differently and at different levels of abstraction. The proposed approach also supports verification of SPLs in which new features and variability may be added incrementally.
Given the design and requirements of an SPL, the proposed design verification method ensures that every product at the design level behaviorally conforms to a product at the requirement level. The conformance procedure is compositional in the sense that the verification of an entire SPL consisting of multiple features is reduced to the verification of the individual features.
The method has been implemented in a prototype tool, SPLEnD (SPL Engine for Design Verification), and demonstrated on a couple of fairly large case studies.
Frédéric Gava, Jean Fortin and Michael Guedj. Deductive Verification of State-space Algorithms
Abstract: Like any software, model checkers are subject to bugs. They can thus report false violations or validate a model that they should not. Different methods, such as theorem provers or Proof-Carrying Code, have been used to gain more confidence in the results of model checkers. In this paper, we focus on using a verification condition generator that takes annotated algorithms and ensures their termination and correctness. We study four state-space construction algorithms (three sequential and one distributed) as a first step towards mechanically assisted deductive verification of model checkers.
Daisuke Ishii, Guillaume Melquiond and Shin Nakajima. Inductive Verification of Hybrid Automata with Strongest Postcondition Calculus
Abstract: Safety verification of hybrid systems is a key technique in developing embedded systems that are strongly coupled with their physical environment. We propose an automated, logic-based analysis method for verifying a class of hybrid automata. The problems we handle are more general than those solved by existing model checkers: our method can verify models with symbolic parameters and nonlinear equations as well. First, we encode the execution trace of a hybrid automaton as an imperative program. Its safety property is then translated into proof obligations by strongest postcondition calculus. Finally, these logic formulas are discharged by state-of-the-art arithmetic solvers (e.g., Mathematica). The proposed algorithm performs inductive reasoning efficiently by unrolling the execution for some steps and generating loop invariants from verification failures. Our experimental results, along with examples taken from the literature, show that the proposed approach is feasible.
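
As general background for readers unfamiliar with strongest-postcondition calculi, one textbook formulation of the rule for assignment (stated here as a standard formula, not as the paper's particular encoding) is:

    \mathit{sp}(x := e,\ P) \;=\; \exists x_0.\; P[x_0/x] \,\wedge\, x = e[x_0/x]

That is, after the assignment there exists a previous value x_0 of x for which the precondition held and from which the new value of x was computed.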
Songzheng Song, Lin Gui, Jun Sun, Yang Liu and Jin Song Dong. Improved Reachability Analysis in DTMC via Divide and Conquer
Abstract: Discrete Time Markov Chains (DTMCs) are widely used to model probabilistic systems in many domains, such as biology, networking and communication protocols. There are two main approaches to probabilistic reachability analysis of DTMCs: solving linear equations or using value iteration. Both approaches have drawbacks. On the one hand, solving linear equations generates accurate results, but it can only be applied to relatively small models. On the other hand, value iteration is more scalable, but often suffers from slow convergence. Furthermore, it is unclear how to parallelize these two approaches (i.e., take advantage of multi-core or distributed computers). In this work, we propose a divide-and-conquer approach that eliminates loops in a DTMC and thereby speeds up probabilistic reachability analysis. A DTMC is separated into several partitions according to our proposed cutting criteria. Each partition is then solved effectively by Gauss-Jordan elimination and the state space is reduced afterwards. This divide-and-conquer algorithm continues until no loops remain in the system. Experiments demonstrate that our approach generates accurate results, avoids the slow-convergence problem and handles larger models.
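
As background for the two analysis styles contrasted in this abstract, the following minimal Python sketch (a toy four-state chain made up for illustration, not the authors' tool or their divide-and-conquer algorithm) computes the probability of reaching a target state both by solving the linear equation system exactly and by value iteration:

    import numpy as np

    # Toy DTMC over states 0..3; state 3 is the target, state 2 a non-target sink.
    P = np.array([
        [0.0, 0.5, 0.3, 0.2],
        [0.4, 0.0, 0.1, 0.5],
        [0.0, 0.0, 1.0, 0.0],   # non-target sink (absorbing)
        [0.0, 0.0, 0.0, 1.0],   # target (absorbing)
    ])
    target = 3
    transient = [0, 1]          # states whose reachability probabilities are unknown

    # Exact approach: the probabilities satisfy x = A x + b on the transient states,
    # so solve (I - A) x = b directly with a linear solver.
    A = P[np.ix_(transient, transient)]
    b = P[np.ix_(transient, [target])].flatten()
    x_exact = np.linalg.solve(np.eye(len(transient)) - A, b)

    # Value iteration: repeatedly apply x := A x + b until the values stabilise;
    # accurate in the limit, but convergence can be slow for "loopy" models.
    x = np.zeros(len(transient))
    for _ in range(10000):
        x_new = A @ x + b
        if np.max(np.abs(x_new - x)) < 1e-12:
            break
        x = x_new

    print("exact:", x_exact, " iterative:", x)

On larger, loop-heavy models the iterative variant needs many sweeps to converge, which is exactly the accuracy/scalability trade-off the paper's loop-elimination approach targets.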
Andreas Morgenstern, Manuel Gesell and Klaus Schneider. Solving Games Using Incremental Induction
Abstract: Recently, IC3 has been presented as a new algorithm for formal verification. Based on incremental induction, it is often much faster than the fixpoint-based model checking algorithms that are otherwise used. In this paper, we use the idea of incremental induction for solving two-player concurrent games. While formal verification requires proving that a given system satisfies a given specification, game solving aims at automatically synthesizing or completing a system so that it satisfies the specification. This involves both universal (player 1) and existential (player 2) quantification over the formulas that represent state transitions. Hence, algorithms for solving games are usually implemented with BDD packages that offer quantification procedures. In this paper, we show how to compute a solution of games using incremental induction. On the experiments we considered, our new algorithm significantly outperforms comparable BDD-based approaches.
Fu Song and Tayssir Touili. Model-Checking Software Library API Usage Rules
Abstract: Modern software increasingly relies on libraries that are accessed via Application Programming Interfaces (APIs). Libraries usually impose constraints on how API functions can be used (API usage rules), and programmers have to obey these rules. However, API usage rules are often not well documented, or are documented only informally. In this work, we show how to use the SCTPL logic to precisely specify API usage rules in libraries, where SCTPL can be seen as an extension of the branching-time temporal logic CTL with variables, quantifiers, and predicates over the stack. This allows library providers to formally describe API usage rules without knowing how their libraries will be used by programmers. We also propose an approach to automatically check whether programs using libraries violate the corresponding API usage rules. Our approach consists of modeling programs as pushdown systems (PDSs) and checking API usage rules on programs using SCTPL model checking for PDSs. To make the model-checking procedure more efficient, we propose an abstraction that drastically reduces the size of the program model. Moreover, we characterize a sub-logic rSCTPL of SCTPL that is preserved by the abstraction. rSCTPL is sufficient to precisely specify all the API usage rules we encountered. We implemented our techniques in a tool and applied it to check several API usage rules. Our tool detected several previously unknown errors in well-known programs, such as Nssl, Verbs, Acacia+, Walksat and Getafix. Our experimental results are encouraging.
Dominique Mery and Michael Poppleton. Formal Modelling and Verification of Population Protocols
Abstract: The design of a wireless sensor or ad-hoc mobile network (WSN/AHMN) for a given application requires demanding optimization against many parameters, e.g. node energy and transmission range limits, variable node and link reliability, and message latency and throughput, not to mention application requirements. The verification of such systems remains very challenging. A recent theoretical approach to this area is the population protocols of Angluin et al.: a population protocol is a finite population of simple agents interacting pairwise in a defined manner from some initial state. That work examines the classes of problems computable by various protocols, together with the associated complexity results. Notions of fairness - that key actions in these protocols are enabled infinitely often - are key to convergence arguments.

In this paper we consider two example population protocols. Experimental modelling of the mechanics (state transition behaviour), analysis and proof in the Event-B formal language and its Rodin toolkit are straightforward. The interesting questions concern the reachability and convergence properties of these protocols and the extent to which we can specify, reason about and prove such properties formally. We consider the applicability of Lamport's Temporal Logic of Actions (TLA) for such reasoning, since it includes elegant notions of, and proof rules for, fairness in a temporal framework for modelling and verifying software systems. As part of this we consider the extent to which TLA reasoning may be ``collapsed'' to first-order reasoning, and thus be implemented by the Rodin provers.
Aymerick Savary, Marc Frappier and Jean-Louis Lanet. Detecting Vulnerabilities in Java-Card Bytecode Verifiers using Model-Based Testing
Abstract: Java Card security is based on different elements among which the bytecode verifier is one of the most important. Finding vulnerabilities is a complex, tedious and error-prone task. In the case of the Java bytecode verifier, vulnerability tests are typically derived by hand. We propose a new approach to generate vulnerability test suites using model-based testing. Each instruction of the Java bytecode language is represented by an event of an Event-B machine, with a guard that denotes security conditions as defined in the virtual machine specification. We generate vulnerability tests by negating guards of events and generating traces with these faulty events using the ProB model checker. This approach has been applied to a subset of twelve instructions of the bytecode language and tested on five Java Card bytecode verifiers. Vulnerabilities have been found for each of them. We have developed a complete tool chain to support the approach and provide a proof of concept.
Rimvydas Rukšėnas, Paul Curzon and Michael D. Harrison. Integrating Formal Predictions of Interactive System Behaviour with User Evaluation
Abstract: It is well known that human error in the use of interactive devices can have severe safety or business consequences. It is therefore important that aspects of the design that compromise the usability of a device can be predicted before deployment. A range of techniques has been developed for identifying potential usability problems, including laboratory-based experiments with prototypes and paper-based evaluation techniques. This paper proposes a framework that integrates experimental techniques with formal models of the device, along with assumptions about how the device will be used. Abstract models of prototype designs and use assumptions are analysed using model checking techniques. As a result of the analysis, hypotheses are formulated about how a design will fail in terms of its usability. These hypotheses are then used in an experimental environment with potential users to test the predictions. Formal methods are therefore integrated with laboratory-based user evaluation to give increased confidence in the results of the usability evaluation process.

The approach is illustrated by exploring the design of an IV infusion pump designed for use in a hospital context.
Ramsay Taylor, Kirill Bogdanov and John Derrick. Automatic Inference of Erlang Module Behaviour
Abstract: Previous work has shown the benefits of using grammar inference techniques to infer models of software behaviour for systems whose specifications are not available. However, this inference has required considerable direction from an expert user, who needs to be familiar with the system's operation and must be actively involved in the inference process. This paper presents an approach that can be applied automatically to infer a model of the behaviour of Erlang modules with respect to their presented interface. It integrates the automated learning system StateChum with the automated refactoring tool Wrangler to allow both interface discovery and behaviour inference to proceed without human involvement.
Manamiary Bruno Andriamiarina, Dominique Mery and Neeraj-Kumar Singh. Integrating Proved State-Based Models for Constructing Correct Distributed Algorithms
Abstract: The verification of distributed algorithms is a challenge for formal techniques supported by tools, such as model checkers and proof assistants. The difficulties, even for powerful tools, lie in the derivation of proofs of required properties, such as safety and eventuality, for distributed algorithms. In this paper, we present a methodology based on the general concept of refinement, used for developing distributed algorithms that satisfy a given list of safety and liveness properties. The methodology is a recipe reusing old ingredients of the classical temporal approaches, and we illustrate it on examples (routing protocols, etc.). More precisely, we show how state-based models can be developed for specific problems and how they can be simply reused by controlling the composition of state-based models through the refinement relationship. Consequently, we obtain a correct-by-construction redevelopment of existing distributed algorithms and a framework for deriving new distributed algorithms (by integrating models) and for ensuring the correctness of the resulting distributed algorithms by construction. Whereas distributed algorithms are generally supposed to run on a fixed network, we consider a network with a changing topology.
Elvira Albert, Jesús Correas, German Puebla and Guillermo Román-Díez. Quantified Abstractions of Distributed Systems
Abstract: When reasoning about distributed systems, it is essential to have information about the different kinds of nodes which compose the system, how many instances of each kind exist, and how nodes communicate with other nodes. In this paper we present a static-analysis-based approach which is able to provide information about the questions above. In order to cope with an unbounded number of nodes and an unbounded number of calls among them, the analysis performs an abstraction of the system, producing a graph whose nodes may represent (infinitely) many concrete nodes and whose arcs may represent (infinitely) many calls among nodes. The crux of our approach is that the abstraction is enriched with upper bounds, inferred by a resource analysis, which limit the number of concrete instances that the nodes and arcs represent. The combined information provided by our approach has interesting applications such as debugging, optimizing and dimensioning distributed systems.
Kalou Cabrera Castillos, Frederic Dadeau, Jacques Julliand, Bilal Kanso and Safouan Taha. A Compositional Automata-based Semantics for Property Patterns
Abstract: Dwyer et al. define a language to specify dynamic properties based on predefined patterns and scopes. To define a property, the user has to choose a pattern and a scope among a limited number of them. Dwyer et al. define the semantics of these properties by translating each composition of a pattern and a scope into usual temporal logics (LTL, CTL, etc.). First, this translational semantics is not compositional and thus not easily extensible to other patterns/scopes. Second, it is not always faithful to the natural semantics of the informal definitions.

In this paper, we propose a compositional automata-based approach that defines the semantics of each pattern and each scope by an automaton. Then, we propose a composition operation such that the property semantics is defined by composing the automata. Hence, the semantics is compositional and easily extensible, as we show by handling many extensions to Dwyer et al.'s language. We compare our compositional semantics with Dwyer et al.'s translational semantics by checking whether our automata are equivalent to the Büchi automata of the LTL expressions given by Dwyer et al. In some cases, our semantics reveals a lack of homogeneity within Dwyer et al.'s semantics.
Shuang Liu, Yang Liu, Étienne André, Christine Choppy, Jun Sun, Bimlesh Wadhwa and Jin Song Dong. A Formal Semantics for Complete UML State Machines with Communications
Abstract: UML is a widely used formalism introduced by the OMG, and formalizing its semantics is an important issue. In this work, we concentrate on state machines, which are used to express dynamic behaviors. We propose an operational semantics covering all features of the latest version (2.4.1) of the UML state machine specification. We use Labeled Transition Systems (LTSs) as the semantic domain, which naturally coincides with the implementation of model checking tools. Our proposed semantics also includes synchronous and asynchronous communications between state machines. We implemented the semantics in USM2C, a model checker supporting editing and automatic verification of UML state machines. Experiments show the efficiency of our approach.
Stefan Ciobaca. From Small-step Semantics to Big-step Semantics, Automatically
Abstract: Small-step semantics and big-step semantics are two styles for operationally defining the meaning of programming languages. A small-step semantics is given as a relation between program configurations that denotes one computational step; a big-step semantics is given as a relation directly associating to each program configuration the corresponding final configuration. Small-step semantics are useful for precise reasoning about programs, but reasoning in big-step semantics is easier and more intuitive. When both the small-step and the big-step semantics are needed for a language, a proof of a theorem stating that the two semantics are equivalent should also be provided in order to trust that they define the same language. We show that the big-step semantics can be automatically obtained from the small-step semantics when the small-step semantics is given by inference rules satisfying certain assumptions that we identify. The transformation that we propose is very simple, and we show that when the identified assumptions are met, it is sound and complete in the sense that the two semantics are equivalent. We exemplify our transformation on a number of case studies.
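
To make the relationship between the two styles concrete, here is a minimal Python sketch for a toy arithmetic language (a generic illustration under simplifying assumptions, not the paper's transformation): the small-step semantics is a one-step reduction function on configurations, and a big-step evaluator is obtained by iterating that function until a final configuration is reached.

    # Tiny expression language: integers and nested additions.
    # A configuration is just an expression; integers are final configurations.

    def is_final(e):
        return isinstance(e, int)

    def step(e):
        """One small-step transition e -> e': reduce the leftmost redex."""
        op, left, right = e            # e is a tuple ('add', left, right)
        if not is_final(left):
            return (op, step(left), right)
        if not is_final(right):
            return (op, left, step(right))
        return left + right            # both operands are values: add them

    def big_step(e):
        """Big-step evaluation obtained by iterating the small-step relation."""
        while not is_final(e):
            e = step(e)
        return e

    print(big_step(('add', ('add', 1, 2), ('add', 3, 4))))   # prints 10

The sketch only illustrates how the two styles relate on concrete executions; the paper works at the level of the inference rules themselves and also establishes the equivalence of the two resulting semantics.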
Dorel Lucanu and Vlad Rusu. Program Equivalence by Circular Reasoning
Abstract: We propose a logic and a deductive system for stating and automatically proving the equivalence of programs in deterministic languages having a rewriting-based operational semantics. The deductive system is circular in nature and is proved sound and weakly complete; together, these results say that, when it terminates, our system correctly solves the program-equivalence problem as we state it. We show that our approach is suitable for proving the equivalence of both terminating and non-terminating programs, and also the equivalence of both concrete and symbolic programs. The latter are programs in which some statements or expressions are symbolic variables. By proving the equivalence between symbolic programs, one proves in one shot the equivalence of (possibly, infinitely) many concrete programs obtained by replacing the variables by concrete statements or expressions. We also report on a prototype implementation of the proposed deductive system in the K Framework.
Ernst-Ruediger Olderog and Mani Swaminathan. Structural Transformations for Data-Enriched Real-Time Systems
Abstract: We investigate structural transformations for easier verification of real-time systems with shared data variables, modelled as networks of extended timed automata (ETA). Our contributions to this end are: (1) an operator for layered composition of ETA that yields communication-closed equivalences under certain independence and/or precedence conditions; (2) two reachability-preserving transformations, separation and flattening, for reducing (under certain cycle conditions) the number of cycles of the ETA; and (3) the interplay of the three structural transformations (separation, flattening, and layering), illustrated on an enhanced version of Fischer's real-time mutual exclusion protocol.
Elena Giachino, Carlo A. Grazia, Cosimo Laneve, Michael Lienhardt and Peter Y. H. Wong. Deadlock Analysis of Concurrent Objects: Theory and Practice
Abstract: We present a framework for statically detecting deadlocks in a concurrent object language with asynchronous invocations and operations for getting values and releasing control. Our approach is based on the integration of two static analysis techniques: (i) an inference algorithm to extract abstract descriptions of methods' behaviors in the form of behavioral types (contracts), and (ii) an evaluator, which computes a fixpoint semantics, to return finite state models of contracts. A potential deadlock is detected when a circular dependency is found in some state of the model.

We discuss the theory and the prototype implementation of our framework. Our tool is validated on an industrial case study based on the Fredhopper Access Server (FAS) developed by SDL Fredhopper, an eCommerce software company. In particular, we verify that one of the core concurrent components of FAS is deadlock-free.
Roberto Vigo, Flemming Nielson and Hanne Riis Nielson. Broadcast, Denial-of-Service, and Secure Communication
Abstract: A main challenge in the design of wireless-based Cyber-Physical Systems lies in balancing the need for security and the effects of broadcast communication against the limited capabilities and reliability of sensor nodes. We present a calculus of broadcasting processes that enables reasoning about unsolicited messages and the absence of expected communication. Moreover, standard cryptographic mechanisms can be implemented in the calculus via term rewriting. The modelling framework is complemented by an executable specification of the semantics of the calculus in Maude, thereby facilitating the solution of a number of simple reachability problems.
Ramiro Demasi, Pablo Castro, Tom Maibaum and Nazareno Aguirre. Characterizing Fault-Tolerant Systems by Means of Simulation Relations
Abstract: In this paper, we study a formal characterization of fault tolerant behaviors of systems via simulation relations. This formalization makes use of particular notions of simulation and bisimulation in order to compare the executions of a system that exhibit faults with executions where no faults occur. By employing variations of standard (bi)simulation algorithms, our characterization enables us to algorithmically check fault tolerance, i.e., to verify that a system behaves in an acceptable way even under the occurrence of faults.

Our approach has the benefit of being simple and supporting an efficient automated treatment. We demonstrate the practical application of our formalization through some well-known case studies, which illustrate that the main ideas behind most fault-tolerance mechanisms are naturally captured in our setting.

News

  • 1.7.2013
    Susanne Graf's pictures from iFM 2013 are now available.
  • 27.6.2013
    Videos are now available.
  • 18.6.2013
    Photos from all days are now available.
  • 10.6.2013
    The detailed schedule of iFM 2013 is here.
  • 10.6.2013
    Maps of Turku city and the ICT building have been added.
  • 10.6.2013
    Social event program updated.
  • 6.6.2013
    The Rector of Åbo Akademi University, Professor Jorma Mattinen, will kindly open iFM 2013 on June 12, at 8:45 in Auditorium Alpha!
  • 4.6.2013
    The Åbo Akademi University magazine "Meddelanden" reported (in Swedish) about iFM 2013 here.
  • 23.05.2013
    The proceedings of iFM 2013 (LNCS 7940) are now accessible from Springer, here.
  • 22.4.2013
    The schedule and the list of accepted papers (with abstracts) have been uploaded.
  • 26.3.2013
    Registration has opened!
  • 11.1.2013
    We have extended the paper submission deadline to 27.1.2013!
  • 19.11.2012
    We are very happy to announce that the following TWO renowned Springer journals have kindly agreed to host special issues of iFM 2013: Formal Aspects of Computing, and Software and Systems Modeling.
  • 12.11.2012
    We have the pleasure of confirming the following invited speakers at iFM 2013: Jean-Raymond Abrial, Cosimo Laneve, Susanne Graf, and Kim Larsen.
  • 31.10.2012
    We have now booked the lovely venue of Herrankukkaro for the social event on 13.6.2013. We will go to a fishing village close to Turku, where we will walk around, take a (smoke) sauna, and enjoy a nice dinner. We will go there by bus and come back by boat, enjoying the light nights of June in Finland. You can check more about it here.
Archive