Here are some papers that cite the Parma Polyhedra Library and/or cite papers on which we have described innovations that are implemented in the library. If you are aware of a paper that should be mentioned in this bibliography, please let us know.
The corresponding BibTeX source is also available.
[AGC12] 
A. Albarghouthi, A. Gurfinkel, and M. Chechik.
Craig interpretation.
In A. Miné and D. Schmidt, editors, Static Analysis:
Proceedings of the 19th International Symposium (SAS 2012), volume 7460 of
Lecture Notes in Computer Science, pages 300–316, Deauville, France,
2012. Springer-Verlag, Berlin.
Abstract interpretation (AI) is one of the most scalable automated approaches to program verification available today. To achieve efficiency, many steps of the analysis, e.g., joins and widening, lose precision. As a result, AI often produces false alarms, coming from the inability to find a safe inductive invariant even when it exists in a chosen abstract domain.

[AAC^{+}08] 
E. Albert, P. Arenas, M. Codish, S. Genaim, G. Puebla, and D. Zanardini.
Termination analysis of Java bytecode.
In G. Barthe and F. S. de Boer, editors, Proceedings of the 10th
IFIP WG 6.1 International Conference on Formal Methods for Open Object-Based
Distributed Systems (FMOODS 2008), volume 5051 of Lecture Notes in
Computer Science, pages 2–18, Oslo, Norway, 2008. Springer-Verlag, Berlin.
Termination analysis has received considerable attention, traditionally in the context of declarative programming, and recently also for imperative languages. In existing approaches, termination analysis is performed on source programs. However, there are many situations, including mobile code, where only the compiled code is available. In this work we present an automatic termination analysis for sequential Java Bytecode programs. Such analysis presents all of the challenges of analyzing a low-level language as well as those introduced by object-oriented languages. Interestingly, given a bytecode program, we produce a constraint logic program, CLP, whose termination entails termination of the bytecode program. This allows applying the large body of work in termination of CLP programs to termination of Java bytecode. A prototype analyzer is described and initial experimentation is reported.

[AAG^{+}12] 
E. Albert, P. Arenas, S. Genaim, G. Puebla, and D. Zanardini.
Cost analysis of object-oriented bytecode programs.
Theoretical Computer Science, 413(1):142–159, 2012.
Quantitative Aspects of Programming Languages (QAPL 2010).
Cost analysis statically approximates the cost of programs in terms of their input data size. This paper presents, to the best of our knowledge, the first approach to the automatic cost analysis of object-oriented bytecode programs. In languages such as Java and C#, analyzing bytecode has a much wider application area than analyzing source code since the latter is often not available. Cost analysis in this context has to consider, among others, dynamic dispatch, jumps, the operand stack, and the heap. Our method takes a bytecode program and a cost model specifying the resource of interest, and generates cost relations which approximate the execution cost of the program with respect to such resource. We report on COSTA, an implementation for Java bytecode which can obtain upper bounds on cost for a large class of programs and complexity classes. Our basic techniques can be directly applied to infer cost relations for other object-oriented imperative languages, not necessarily in bytecode form.

[AAGP08] 
E. Albert, P. Arenas, S. Genaim, and G. Puebla.
Automatic inference of upper bounds for recurrence relations in cost
analysis.
In M. Alpuente and G. Vidal, editors, Static Analysis:
Proceedings of the 15th International Symposium, volume 5079 of Lecture
Notes in Computer Science, pages 221–237, Valencia, Spain, 2008.
Springer-Verlag, Berlin.
The classical approach to automatic cost analysis consists of two phases. Given a program and some measure of cost, we first produce recurrence relations (RRs) which capture the cost of our program in terms of the size of its input data. Second, we convert such RRs into closed form (i.e., without recurrences). Whereas the first phase has received considerable attention, with a number of cost analyses available for a variety of programming languages, the second phase has received comparatively little attention. In this paper we first study the features of RRs generated by automatic cost analysis and discuss why existing computer algebra systems are not appropriate for automatically obtaining closed form solutions nor upper bounds of them. Then we present, to our knowledge, the first practical framework for the fully automatic generation of reasonably accurate upper bounds of RRs originating from cost analysis of a wide range of programs. It is based on the inference of ranking functions and loop invariants and on partial evaluation.

[AAG^{+}08] 
E. Albert, P. Arenas, S. Genaim, G. Puebla, and D. Zanardini.
COSTA: Design and implementation of a cost and termination analyzer
for Java bytecode.
In F. S. de Boer, M. M. Bonsangue, S. Graf, and W. P. de
Roever, editors, Formal Methods for Components and Objects, volume
5382 of Lecture Notes in Computer Science, pages 113–132.
Springer-Verlag, Berlin, 2008.
Revised papers presented at the 6th International Symposium on Formal
Methods for Components and Objects (FMCO 2007), Amsterdam, The Netherlands,
October 24–26, 2007.
This paper describes the architecture of COSTA, an abstract interpretation based COST and termination Analyzer for Java bytecode. The system receives as input a bytecode program, (a choice of) a resource of interest and tries to obtain an upper bound of the resource consumption of the program. COSTA provides several non-trivial notions of cost, such as the consumption of the heap, the number of bytecode instructions executed and the number of calls to a specific method. Additionally, COSTA tries to prove termination of the bytecode program which implies the boundedness of any resource consumption. Having cost and termination together is interesting, as both analyses share most of the machinery to, respectively, infer cost upper bounds and to prove that the execution length is always finite (i.e., the program terminates). We report on experimental results which show that COSTA can deal with programs of realistic size and complexity, including programs which use Java libraries. To the best of our knowledge, this system provides for the first time evidence that resource usage analysis can be applied to a realistic object-oriented, bytecode programming language.

[AABP^{+}13] 
E. Albert, D. E. Alonso-Blas, P. Arenas, J. Correas, A. Flores-Montoya,
S. Genaim, M. Gómez-Zamalloa, A. N. Masud, G. Puebla, J. M. Rojas,
G. Román-Díez, and D. Zanardini.
Automatic inference of bounds on resource consumption.
In E. Giachino, R. Hähnle, F. S. de Boer, and M. M. Bonsangue,
editors, Formal Methods for Components and Objects: Proceedings of the
11th International Symposium (FMCO 2012), volume 7866 of Lecture Notes
in Computer Science, pages 119–144, Bertinoro, Italy, 2013.
Springer-Verlag, Berlin.
In this tutorial paper, we overview the techniques that underlie the automatic inference of resource consumption bounds. We first explain the basic techniques on a Java-like sequential language. Then, we describe the extensions that are required to apply our method on concurrent ABS programs. Finally, we discuss some advanced issues in resource analysis, including the inference of non-cumulative resources and the treatment of shared mutable data.

[AGM13] 
E. Albert, S. Genaim, and A. N. Masud.
On the inference of resource usage upper and lower bounds.
ACM Transactions on Computational Logic, 14(3), 2013.
Cost analysis aims at determining the amount of resources required to run a program in terms of its input data sizes. The most challenging step is to infer the cost of executing the loops in the program. This requires bounding the number of iterations of each loop and finding tight bounds for the cost of each of its iterations. This article presents a novel approach to infer upper and lower bounds from cost relations. These relations are an extended form of standard recurrence equations that can be non-deterministic, contain inexact size constraints and have multiple arguments that increase and/or decrease. We propose novel techniques to automatically transform cost relations into worst-case and best-case deterministic one-argument recurrence relations. The solution of each recursive relation provides a precise upper bound and lower bound for executing a corresponding loop in the program. Importantly, since the approach is developed at the level of the cost equations, our techniques are programming language independent.

[AAG11] 
D. Alonso, P. Arenas, and S. Genaim.
Handling non-linear operations in the value analysis of COSTA.
In Proceedings of the 6th Workshop on Bytecode Semantics,
Verification, Analysis and Transformation (Bytecode 2011), volume 279 of
Electronic Notes in Theoretical Computer Science, pages 3–17,
Saarbrücken, Germany, 2011. Elsevier Science B.V.
Inferring precise relations between (the values of) program variables at different program points is essential for termination and resource usage analysis. In both cases, this information is used to synthesize ranking functions that imply the program's termination and bound the number of iterations of its loops. For efficiency, it is common to base value analysis on non-disjunctive abstract domains such as Polyhedra, Octagon, etc. While these domains are efficient and able to infer complex relations for a wide class of programs, they are often not sufficient for modeling the effect of non-linear and bit arithmetic operations. Modeling such operations precisely can be done by using more sophisticated abstract domains, at the price of performance overhead. In this paper we report on the value analysis of COSTA that is based on the idea of encoding the disjunctive nature of non-linear operations into the (abstract) program itself, instead of using more sophisticated abstract domains. Our experiments demonstrate that COSTA is able to prove termination and infer bounds on resource consumption for programs that could not be handled before.

[AK12] 
M. Althoff and B. H. Krogh.
Avoiding geometric intersection operations in reachability analysis
of hybrid systems.
In Proceedings of the 15th ACM International Conference on
Hybrid Systems: Computation and Control (HSCC '12), pages 45–54, Beijing,
China, 2012. ACM Press, New York, USA.
Although a growing number of dynamical systems studied in various fields are hybrid in nature, the verification of properties, such as stability, safety, etc., is still a challenging problem. Reachability analysis is one of the promising methods for hybrid system verification, which together with all other verification techniques faces the challenge of making the analysis scale with respect to the number of continuous state variables. The bottleneck of many reachability analysis techniques for hybrid systems is the geometrically computed intersection with guard sets. In this work, we replace the intersection operation by a nonlinear mapping onto the guard, which is not only numerically stable, but also scalable, making it possible to verify systems which were previously out of reach. The approach can be applied to the fairly common class of hybrid systems with piecewise continuous solutions, guard sets modeled as half-spaces, and urgent semantics, i.e. discrete transitions are immediately taken when enabled by guard sets. We demonstrate the usefulness of the new approach by a mechanical system with backlash which has 101 continuous state variables.

[Alu11] 
R. Alur.
Formal verification of hybrid systems.
In S. Chakraborty, A. Jerraya, S. K. Baruah, and S. Fischmeister,
editors, Proceedings of the 11th International Conference on Embedded
Software (EMSOFT 2011), pages 273–278, Taipei, Taiwan, 2011. ACM Press.
In formal verification, a designer first constructs a model, with mathematically precise semantics, of the system under design, and performs extensive analysis with respect to correctness requirements. The appropriate mathematical model for embedded control systems is hybrid systems, which combine the traditional state-machine based models for discrete control with classical differential-equations based models for continuously evolving physical activities. In this article, we briefly review selected existing approaches to formal verification of hybrid systems, along with directions for future research.

[AKRS08] 
R. Alur, A. Kanade, S. Ramesh, and K. Shashidhar.
Symbolic analysis for improving simulation coverage of
Simulink/Stateflow models.
In L. de Alfaro and J. Palsberg, editors, Proceedings of the
8th ACM & IEEE International Conference on Embedded Software (EMSOFT
2008), pages 89–98, Atlanta, Georgia, USA, 2008. ACM Press.
Aimed at verifying safety properties and improving simulation coverage for hybrid systems models of embedded control software, we propose a technique that combines numerical simulation and symbolic methods for computing state sets. We consider systems with linear dynamics described in the commercial modeling tool Simulink/Stateflow. Given an initial state x, and a discrete-time simulation trajectory, our method computes a set of initial states that are guaranteed to be equivalent to x, where two initial states are considered to be equivalent if the resulting simulation trajectories contain the same discrete components at each step of the simulation. We illustrate the benefits of our method on two case studies. One case study is a benchmark proposed in the literature for hybrid systems verification and another is a Simulink demo model from Mathworks.

[APS10] 
G. Amato, M. Parton, and F. Scozzari.
A tool which mines partial execution traces to improve static
analysis.
In H. Barringer, Y. Falcone, B. Finkbeiner, K. Havelund, I. Lee,
G. Pace, G. Rosu, O. Sokolsky, and N. Tillmann, editors, Proceedings of
the 1st International Conference on Runtime Verification (RV 2010), volume
6418 of Lecture Notes in Computer Science, pages 475–479, Balluta Bay,
St Julians, Malta, 2010. Springer-Verlag, Berlin.
We present a tool which performs abstract interpretation based static analysis of numerical variables. The novelty is that the analysis is parametric, and parameters are chosen by applying a variant of principal component analysis to partial execution traces of programs.

[APS12] 
G. Amato, M. Parton, and F. Scozzari.
Discovering invariants via simple component analysis.
Journal of Symbolic Computation, 47(12):1533–1560, 2012.
We propose a new technique combining dynamic and static analysis of programs to find linear invariants. We use a statistical tool, called simple component analysis, to analyze partial execution traces of a given program. We get a new coordinate system in the vector space of program variables, which is used to specialize numerical abstract domains. As an application, we instantiate our technique to interval analysis of simple imperative programs and show some experimental evaluations.

[AS12] 
G. Amato and F. Scozzari.
Random: R-based analyzer for numerical domains.
In N. Bjørner and A. Voronkov, editors, Proceedings of the
18th International Conference on Logic for Programming, Artificial
Intelligence, and Reasoning (LPAR 2012), volume 7180 of Lecture Notes
in Computer Science, pages 375–382, Mérida, Venezuela, 2012.
Springer-Verlag, Berlin.
We present the tool Random (R-based Analyzer for Numerical DOMains) for static analysis of imperative programs. The tool is based on the theory of abstract interpretation and implements several abstract domains for detecting numerical properties, in particular integer loop invariants. The tool combines a statistical dynamic analysis with a static analysis on the new domain of parallelotopes. The tool has a graphical interface for tuning the parameters of the analysis and visualizing partial traces.

[AS13a] 
G. Amato and F. Scozzari.
Localizing widening and narrowing.
In F. Logozzo and M. Fähndrich, editors, Static Analysis:
Proceedings of the 20th International Symposium (SAS 2013), volume 7935 of
Lecture Notes in Computer Science, pages 25–42, Seattle, USA, 2013.
Springer-Verlag, Berlin.
We show two strategies which may be easily applied to standard abstract interpretation-based static analyzers. They consist in 1) restricting the scope of widening, and 2) intertwining the computation of ascending and descending chains. Using these optimizations it is possible to improve the precision of the analysis, without any change to the abstract domains.

[And10] 
É. André.
IMITATOR II: A tool for solving the good parameters problem in
timed automata.
In Y.-F. Chen and A. Rezine, editors, Proceedings of the 12th
International Workshop on Verification of Infinite State Systems
(INFINITY'10), volume 39 of Electronic Proceedings in Theoretical
Computer Science, pages 91–99, Singapore, 2010.
We present here Imitator II, a new version of Imitator, a tool implementing the “inverse method” for parametric timed automata: given a reference valuation of the parameters, it synthesizes a constraint such that, for any valuation satisfying this constraint, the system behaves the same as under the reference valuation in terms of traces, i.e., alternating sequences of locations and actions. Imitator II also implements the “behavioral cartography algorithm”, allowing us to solve the following good parameters problem: find a set of valuations within a given bounded parametric domain for which the system behaves well. We present new features and optimizations of the tool, and give results of applications to various examples of asynchronous circuits and communication protocols.

[AFS11] 
É. André, L. Fribourg, and R. Soulat.
Enhancing the inverse method with state merging.
Research report LSV1126, Laboratoire Spécification &
Vérification, École Normale Supérieure de Cachan, Paris, France,
December 2011.
Keeping the state space small is essential when verifying realtime systems using Timed Automata (TA). In the model-checker Uppaal, the merging operation has been used extensively in order to reduce the number of states. Actually, Uppaal's merging technique applies within the more general setting of Parametric Timed Automata (PTA). The Inverse Method (IM) for a PTA A is a procedure that synthesizes a zone around a given point π0 (parameter valuation) over which A is guaranteed to behave similarly. We show that the integration of merging into IM leads to the synthesis of larger zones around π0. It also often improves the performance of IM, both in terms of computational space and time, as shown by our experimental results.

[AFS12] 
É. André, L. Fribourg, and R. Soulat.
Enhancing the inverse method with state merging.
In A. E. Goodloe and S. Person, editors, NASA Formal Methods:
Proceedings of the 4th International Symposium (NFM 2012), volume 7226 of
Lecture Notes in Computer Science, pages 381–396, Norfolk, USA, 2012.
Springer-Verlag, Berlin.
Keeping the state space small is essential when verifying realtime systems using Timed Automata (TA). In the model-checker Uppaal, the merging operation has been used extensively in order to reduce the number of states. Actually, Uppaal's merging technique applies within the more general setting of Parametric Timed Automata (PTA). The Inverse Method (IM) for a PTA A is a procedure that synthesizes a zone around a given point π0 (parameter valuation) over which A is guaranteed to behave similarly. We show that the integration of merging into IM leads to the synthesis of larger zones around π0. It also often improves the performance of IM, both in terms of computational space and time, as shown by our experimental results.

[AFS13] 
É. André, L. Fribourg, and R. Soulat.
Merge and conquer: State merging in parametric timed automata.
In D. Van Hung and M. Ogawa, editors, Automated Technology for
Verification and Analysis: Proceedings of the 11th International Symposium
(ATVA 2013), volume 8172 of Lecture Notes in Computer Science, pages
381–396, Hanoi, Vietnam, 2013. Springer-Verlag, Berlin.
Parameter synthesis for real-time systems aims at synthesizing dense sets of valuations for the timing requirements, guaranteeing a good behavior. A popular formalism for modeling parameterized real-time systems is parametric timed automata (PTAs). Compacting the state space of PTAs as much as possible is fundamental. We present here a state merging reduction based on convex union, that reduces the state space, but yields an over-approximation of the executable paths. However, we show that it preserves the sets of reachable locations and executable actions. We also show that our merging technique associated with the inverse method, an algorithm for parameter synthesis, preserves locations as well, and outputs larger sets of parameter valuations.

[ALS^{+}13] 
É. André, Y. Liu, J. Sun, J. S. Dong, and S.-W. Lin.
PSyHCoS: Parameter synthesis for hierarchical concurrent real-time
systems.
In N. Sharygina and H. Veith, editors, Computer Aided
Verification: Proceedings of the 25th International Conference (CAV 2013),
volume 8044 of Lecture Notes in Computer Science, pages 984–989.
Springer-Verlag, Berlin, 2013.
Real-time systems are often hard to control, due to their complicated structures, quantitative time factors and even unknown delays. We present here PSyHCoS, a tool for analyzing parametric real-time systems specified using the hierarchical modeling language PSTCSP. PSyHCoS supports several algorithms for parameter synthesis and model checking, as well as state space reduction techniques. Its architecture favors reusability in terms of syntax, semantics, and algorithms. It comes with a friendly user interface that can be used to edit, simulate and verify PSTCSP models. Experiments show its efficiency and applicability.

[AS13b] 
É. André and R. Soulat.
The Inverse Method: Parametric Verification of Real-time
Embedded Systems.
FOCUS Series. John Wiley & Sons, 2013.
This book introduces state-of-the-art verification techniques for real-time embedded systems, based on the inverse method for parametric timed automata. It reviews popular formalisms for the specification and verification of timed concurrent systems and, in particular, timed automata as well as several extensions such as timed automata equipped with stopwatches, linear hybrid automata and affine hybrid automata. The inverse method is introduced, and its benefits for guaranteeing robustness in real-time systems are shown. Then, it is shown how an iteration of the inverse method can solve the good parameters problem for parametric timed automata by computing a behavioral cartography of the system. Different extensions are proposed particularly for hybrid systems and applications to scheduling problems using timed automata with stopwatches. Various examples, both from the literature and industry, illustrate the techniques throughout the book. Various parametric verifications are performed, in particular of abstractions of a memory circuit sold by the chipset manufacturer STMicroelectronics, as well as of the prospective flight control system of the next generation of spacecraft designed by ASTRIUM Space Transportation.

[ABM07] 
A. Armando, M. Benerecetti, and J. Mantovani.
Abstraction refinement of linear programs with arrays.
In O. Grumberg and M. Huth, editors, Tools and Algorithms for
the Construction and Analysis of Systems: Proceedings of the 13th
International Conference (TACAS 2007), volume 4424 of Lecture Notes in
Computer Science, pages 373–388, Braga, Portugal, 2007. Springer-Verlag,
Berlin.
In previous work we presented a model checking procedure for linear programs, i.e. programs in which variables range over a numeric domain and expressions involve linear combinations of the variables. In this paper we lift our model checking procedure for linear programs to deal with arrays via iterative abstraction refinement. While most approaches are based on predicate abstraction and therefore the abstraction is relative to sets of predicates, in our approach the abstraction is relative to sets of variables and array indexes, and the abstract program can express complex correlations between program variables and array elements. Thus, while arrays are problematic for most of the approaches based on predicate abstraction, our approach treats them in a precise way. This is an important feature as arrays are ubiquitous in programming. We provide a detailed account of both the abstraction and the refinement processes, discuss their implementation in the eureka tool, and present experimental results that confirm the effectiveness of our approach on a number of programs of interest.

[ADF^{+}06] 
E. Asarin, T. Dang, G. Frehse, A. Girard, C. Le Guernic, and O. Maler.
Recent progress in continuous and hybrid reachability analysis.
In Proceedings of the IEEE International Symposium on
Computer-Aided Control Systems Design, Technische Universität
München, Munich, Germany, 2006.
Set-based reachability analysis computes all possible states a system may attain, and in this sense provides knowledge about the system with a completeness, or coverage, that a finite number of simulation runs can not deliver. Due to its inherent complexity, the application of reachability analysis has been limited so far to simple systems, both in the continuous and the hybrid domain. In this paper we present recent advances that, in combination, significantly improve this applicability, and allow us to find better balance between computational cost and accuracy. The presentation covers, in a unified manner, a variety of methods handling increasingly complex types of continuous dynamics (constant derivative, linear, nonlinear). The improvements include new geometrical objects for representing sets, new approximation schemes, and more flexible combinations of graph-search algorithm and partition refinement. We report briefly some preliminary experiments that have enabled the analysis of systems previously beyond reach.

[Ave06] 
J. Avery.
Sizechange termination and bound analysis.
In M. Hagiya and P. Wadler, editors, Proceedings of the 8th
International Symposium on Functional and Logic Programming (FLOPS 2006),
volume 3945 of Lecture Notes in Computer Science, pages 192–207,
Fuji-Susono, Japan, 2006. Springer-Verlag, Berlin.
Despite its simplicity, the size-change termination principle, presented by Lee, Jones and Ben-Amram in [LJB01], is surprisingly strong and is able to show termination for a large class of programs. A significant limitation for its use, however, is the fact that SCT requires data types to be well-founded, and that all mechanisms used to determine termination must involve decreases in these global, well-founded partial orders. Following is an extension of the size-change principle that allows for non-well-founded data types, and a realization of this principle for integer data types. The extended size-change principle is realized through combining abstract interpretation over the domain of convex polyhedra with the use of size-change graphs. In the cases when data types are well founded, the method handles every case that is handled by LJB size-change termination. The method has been implemented in a subject language independent shared library, libesct (available at [Ave05a]), as well as for the ANSI C specializer CMix-II, handling a subset of its internal language Core-C.

[BRCZ05a] 
R. Bagnara, E. Rodríguez-Carbonell, and E. Zaffanella.
Generation of basic semi-algebraic invariants using convex polyhedra.
In C. Hankin and I. Siveroni, editors, Static Analysis:
Proceedings of the 12th International Symposium, volume 3672 of Lecture
Notes in Computer Science, pages 19–34, London, UK, 2005. Springer-Verlag,
Berlin.
A technique for generating invariant polynomial inequalities of bounded degree is presented using the abstract interpretation framework. It is based on over-approximating basic semi-algebraic sets, i.e., sets defined by conjunctions of polynomial inequalities, by means of convex polyhedra. While improving on the existing methods for generating invariant polynomial equalities, since polynomial inequalities are allowed in the guards of the transition system, the approach does not suffer from the prohibitive complexity of the methods based on quantifier elimination. The application of our implementation to benchmark programs shows that the method produces non-trivial invariants in reasonable time. In some cases the generated invariants are essential to verify safety properties that cannot be proved with classical linear invariants.

[BRCZ05b] 
R. Bagnara, E. Rodríguez-Carbonell, and E. Zaffanella.
Generation of basic semi-algebraic invariants using convex polyhedra.
Report de recerca LSI-05-14-R, Departament de Llenguatges i Sistemes
Informàtics, Universitat Politècnica de Catalunya, Barcelona, Spain, 2005.
Available at http://www.lsi.upc.edu/dept/techreps/techreps.html.
[BG09] 
G. Banda and J. P. Gallagher.
Analysis of linear hybrid systems in CLP.
In M. Hanus, editor, LogicBased Program Synthesis and
Transformation, volume 5438 of Lecture Notes in Computer Science,
pages 55–70. Springer-Verlag, Berlin, 2009.
Revised selected papers presented at the 18th International Symposium
on LogicBased Program Synthesis and Transformation (LOPSTR 2008), Valencia,
Spain, July 17–18, 2008.
In this paper we present a procedure for representing the semantics of linear hybrid automata (LHAs) as constraint logic programs (CLP); flexible and accurate analysis and verification of LHAs can then be performed using generic CLP analysis and transformation tools. LHAs provide an expressive notation for specifying realtime systems. The main contributions are (i) a technique for capturing the reachable states of the continuously changing state variables of the LHA as CLP constraints; (ii) a way of representing events in the LHA as constraints in CLP, along with a product construction on the CLP representation including synchronisation on shared events; (iii) a framework in which various kinds of reasoning about an LHA can be flexibly performed by combining standard CLP transformation and analysis techniques. We give experimental results to support the usefulness of the approach and argue that we contribute to the general field of using static analysis tools for verification.

[BAG13] 
A. M. BenAmram and S. Genaim.
On the linear ranking problem for integer linear-constraint loops.
In Proceedings of the 40th Annual ACM SIGPLAN-SIGACT Symposium
on Principles of Programming Languages (POPL 2013), pages 51–62, Rome,
Italy, 2013. ACM Press.
Also published in ACM SIGPLAN Notices, POPL '13, Volume 48, Issue 1.
In this paper we study the complexity of the Linear Ranking problem: given a loop, described by linear constraints over a finite set of integer variables, is there a linear ranking function for this loop? While existence of such a function implies termination, this problem is not equivalent to termination. When the variables range over the rationals or reals, the Linear Ranking problem is known to be PTIME decidable. However, when they range over the integers, whether for single-path or multi-path loops, the complexity of the Linear Ranking problem has not yet been determined. We show that it is coNP-complete. However, we point out some special cases of importance of PTIME complexity. We also present complete algorithms for synthesizing linear ranking functions, both for the general case and the special PTIME cases.

[BJV10] 
J. Berendsen, D. N. Jansen, and F. W. Vaandrager.
Fortuna: Model checking priced probabilistic timed automata.
In Proceedings of the 7th International Conference on the
Quantitative Evaluation of Systems (QEST 2010), pages 273–281,
Williamsburg, Virginia, USA, 2010. IEEE Computer Society.
Fortuna is the first tool for model checking priced probabilistic timed automata (PPTAs). PPTAs are an important model that can handle the combination of real-time, probabilistic and cost features. Only model checkers that incorporate all these features can address the key design trade-offs that arise in many practical applications such as: the Zeroconf, Bluetooth, IEEE 802.11 and Firewire protocols, protocols for sensor networks, and scheduling problems with failures. PPTAs are an extension of probabilistic timed automata (PTAs), by having cost rates and discrete cost increments on states. Fortuna is able to compute the maximal probability by which a class of states can be reached under a certain cost bound (and time bound). Although the problem is undecidable in general, there exists a semi-algorithm that produces a non-decreasing sequence of maximal probabilities. This paper enhances that algorithm. We compared the performance of Fortuna with existing approaches for PTAs. Surprisingly, although PPTAs are more general, our techniques exhibit superior performance.

[Leu11] 
A. Bey and S. Leue.
Modeling and analyzing spike timing dependent plasticity with linear
hybrid automata.
Technical Report soft-11-03, University of Konstanz, Germany, May
2011.
We propose a model for synaptic plasticity according to the Spike Timing Dependent Plasticity (STDP) theory using Linear Hybrid Automata (LHA). We first present a compositional LHA model in which each component corresponds to some process in STDP. We then abstract this model into a monolithic LHA model in order to enable formal analysis using hybrid model checking. We discuss how the availability of an LHA model as well as its formal analysis using the tool PHAVer can support a better understanding of the dynamics of STDP.

[BG11] 
M. Beyer and S. Glesner.
Static runtime mode extraction by state partitioning in synchronous
process networks.
In Proceedings of the 14th International Workshop on Software
and Compilers for Embedded Systems (SCOPES 2011), pages 28-37, New York,
NY, USA, 2011. ACM Press.
Process Networks (PNs) are used for modeling streaming-oriented applications with changing behavior, which must be mapped on a concurrent architecture to meet the performance and energy constraints of embedded devices. Finding an optimal mapping of Process Networks to the constrained architecture presumes that the behavior of the PN is statically known. In this paper we present a static analysis for synchronous PNs that partitions the state space to extract runtime modes based on a Data Augmented Control Flow Automaton (DACFA). The result is a mode automaton whose nodes describe identified program modes and whose edges represent transitions among them. Optimizing backends mapping from PNs to concurrent architectures can be guided by these analysis results.

[BG12] 
M. Beyer and S. Glesner.
Static analysis of runtime modes in synchronous process network.
In E. Clarke, I. Virbitskaite, and A. Voronkov, editors,
Perspectives of Systems Informatics: Proceedings of the 8th International
Andrei Ershov Memorial Conference, volume 7162 of Lecture Notes in
Computer Science, pages 55-67. Springer-Verlag, Berlin, 2012.
For modeling modern streaming-oriented applications, Process Networks (PNs) are used to describe systems with changing behavior, which must be mapped on a concurrent architecture to meet the performance and energy constraints of embedded devices. Finding an optimal mapping of Process Networks to the constrained architecture presumes that the behavior of the Process Network is statically known. In this paper we present a static analysis for synchronous PNs that extracts different runtime modes by using polyhedral abstraction. The result is a Mealy machine whose states describe different runtime modes and the edges among them represent transitions. This machine can be used to guide optimizing backend mappings from PNs to concurrent architectures.

[BFL08] 
F. Bouchy, A. Finkel, and J. Leroux.
Decomposition of decidable first-order logics over integers and
reals.
In Proceedings of the 15th International Symposium on Temporal
Representation and Reasoning (TIME '08), pages 147-155, Montreal, Canada,
2008. IEEE Computer Society Press.
We tackle the issue of representing infinite sets of real-valued vectors. This paper introduces an operator for combining integer and real sets. Using this operator, we decompose three well-known logics extending Presburger arithmetic with reals. Our decomposition splits a logic into two parts: one integer, and one decimal (i.e., on the interval [0, 1[). We also give a basis for an implementation of our representation.

[BGI09] 
M. Bozga, C. Gîrlea, and R. Iosif.
Iterating octagons.
In S. Kowalewski and A. Philippou, editors, Tools and Algorithms
for the Construction and Analysis of Systems: Proceedings of the 15th
International Conference (TACAS 2009), volume 5505 of Lecture Notes in
Computer Science, pages 337-351, York, UK, 2009. Springer-Verlag, Berlin.
In this paper we prove that the transitive closure of a non-deterministic octagonal relation using integer counters can be expressed in Presburger arithmetic. The direct consequence of this fact is that the reachability problem is decidable for flat counter automata with octagonal transition relations. This result improves the previous results of Comon and Jurski [Hubert Comon and Yan Jurski. Multiple counters automata, safety analysis and Presburger arithmetic. In CAV, LNCS 1427, pages 268-279, 1998] and Bozga, Iosif and Lakhnech [Marius Bozga, Radu Iosif, and Yassine Lakhnech. Flat parametric counter automata. In ICALP, LNCS 4052, pages 577-588. Springer-Verlag, 2006] concerning the computation of transitive closures for difference bound relations. The importance of this result is justified by the wide use of octagons for computing sound abstractions of real-life systems [A. Miné. The octagon abstract domain. Higher-Order and Symbolic Computation, 19(1):31-100, 2006]. We have implemented the octagonal transitive closure algorithm in a prototype system for the analysis of counter automata, called FLATA, and we have experimented with a number of test cases.

[BIK12] 
M. Bozga, R. Iosif, and F. Konečný.
Deciding conditional termination.
In C. Flanagan and B. König, editors, Tools and Algorithms
for the Construction and Analysis of Systems: Proceedings of the 18th
International Conference (TACAS 2012), volume 7214 of Lecture Notes in
Computer Science, pages 252-266, Tallinn, Estonia, 2012. Springer-Verlag,
Berlin.
This paper addresses the problem of conditional termination, which is that of defining the set of initial configurations from which a given program terminates. First we define the dual set of initial configurations, from which a non-terminating execution exists, as the greatest fixpoint of the pre-image of the transition relation. This definition enables the representation of this set, whenever the closed form of the relation of the loop is definable in a logic that has quantifier elimination. This entails the decidability of the termination problem for such loops. Second, we present effective ways to compute the weakest precondition for non-termination for difference bounds and octagonal (non-deterministic) relations, by avoiding complex quantifier eliminations. We also investigate the existence of linear ranking functions for such loops. Finally, we study the class of linear affine relations and give a method of under-approximating the termination precondition for a non-trivial subclass of affine relations. We have performed preliminary experiments on transition systems modeling real-life systems, and have obtained encouraging results.

[BM08] 
J. M. B. Braman and R. M. Murray.
Safety verification of fault-tolerant goal-based control programs
with estimation uncertainty.
In Proceedings of the 2008 American Control Conference, pages
27-32, Seattle, Washington, USA, 2008. IEEE Press.
Fault tolerance and safety verification of control systems that have state variable estimation uncertainty are essential for the success of autonomous robotic systems. A software control architecture called mission data system, developed at the Jet Propulsion Laboratory, uses goal networks as the control program for autonomous systems. Certain types of goal networks can be converted into linear hybrid systems and verified for safety using existing symbolic model checking software. A process for calculating the probability of failure of certain classes of verifiable goal networks due to state estimation uncertainty is presented. A verifiable example task is presented and the failure probability of the control program based on estimation uncertainty is found.

[Bra12] 
V. Braun.
Counting points and Hilbert series in string theory.
Technical Report arXiv:1206.2236v1 [hep-th], University of
Pennsylvania in Philadelphia, USA, June 2012.
Available from http://arxiv.org/.
The problem of counting points is revisited from the perspective of reflexive 4-dimensional polytopes. As an application, the Hilbert series of the 473,800,776 reflexive polytopes (equivalently, their Calabi-Yau hypersurfaces) are computed.

[BKK13] 
J. Brauer, A. King, and S. Kowalewski.
Abstract interpretation of microcontroller code: Intervals meet
congruences.
Science of Computer Programming, 78(7):862-883, 2013.
Bitwise instructions, loops and indirect data access present challenges to the verification of microcontroller programs. In particular, since registers are often memory mapped, it is necessary to show that an indirect store operation does not accidentally mutate a register. To prove this and related properties, this article advocates using the domain of bitwise linear congruences in conjunction with intervals to derive accurate range information. The paper argues that these two domains complement one another when reasoning about microcontroller code. The paper also explains how SAT solving, applied with dichotomic search, can be used to recover branching conditions from binary code, which, in turn, further improves interval analysis.

[BDG^{+}13] 
T. Brihaye, L. Doyen, G. Geeraerts, J. Ouaknine, J.-F. Raskin, and J. Worrell.
Time-bounded reachability for monotonic hybrid automata: Complexity
and fixed points.
In D. Van Hung and M. Ogawa, editors, Automated Technology for
Verification and Analysis: Proceedings of the 11th International Symposium
(ATVA 2013), volume 8172 of Lecture Notes in Computer Science, pages
55-70, Hanoi, Vietnam, 2013. Springer-Verlag, Berlin.
We study the time-bounded reachability problem for monotonic hybrid automata (MHA), i.e., rectangular hybrid automata for which the rate of each variable is either always non-negative or always non-positive. In this paper, we revisit the decidability results presented in [T. Brihaye and L. Doyen and G. Geeraerts and J. Ouaknine and J.-F. Raskin and J. Worrell: On reachability for hybrid automata over bounded time. In: ICALP 2011, Part II. LNCS, vol. 6756, pp. 416-427. Springer, Heidelberg (2011)] and show that the problem is NExpTime-complete. We also show that we can effectively compute fixed points that characterise the sets of states that are reachable (resp. co-reachable) within T time units from a given state.

[CMA05] 
D. Cachera and K. Morin-Allory.
Proving parameterized systems: The use of a widening operator and
pseudo-pipelines in polyhedral logic.
Research Report ISRN TIMA-RR-05/04-01-FR, TIMA Laboratory,
Grenoble, France, 2005.
We propose proof techniques and tools for the formal verification of regular parameterized systems. These systems are expressed in the polyhedral model, which combines affine recurrence equations with index sets of polyhedral shape. We extend a previously defined proof system based on a polyhedral logic with the detection of pseudo-pipelines, that are particular patterns in the variable definitions generalizing the notion of pipeline. The combination of pseudo-pipeline detection with the use of a simple widening operator greatly improves the effectiveness of our proof techniques.

[CPPSV06] 
L. P. Carloni, R. Passerone, A. Pinto, and A. L. Sangiovanni-Vincentelli.
Languages and tools for hybrid systems design.
Foundations and Trends in Electronic Design Automation,
1(1/2):1-193, 2006.
The explosive growth of embedded electronics is bringing information and control systems of increasing complexity to every aspect of our lives. The most challenging designs are safety-critical systems, such as transportation systems (e.g., airplanes, cars, and trains), industrial plants and health care monitoring. The difficulties reside in accommodating constraints both on functionality and implementation. The correct behavior must be guaranteed under diverse states of the environment and potential failures; implementation has to meet cost, size, and power consumption requirements. The design is therefore subject to extensive mathematical analysis and simulation. However, traditional models of information systems do not interface well to the continuously evolving nature of the environment in which these devices operate. Thus, in practice, different mathematical representations have to be mixed to analyze the overall behavior of the system. Hybrid systems are a particular class of mixed models that focus on the combination of discrete and continuous subsystems. There is a wealth of tools and languages that have been proposed over the years to handle hybrid systems. However, each tool makes different assumptions on the environment, resulting in somewhat different notions of hybrid system. This makes it difficult to share information among tools. Thus, the community cannot maximally leverage the substantial amount of work that has been directed to this important topic. In this paper, we review and compare hybrid system tools by highlighting their differences in terms of their underlying semantics, expressive power and mathematical mechanisms. We conclude our review with a comparative summary, which suggests the need for a unifying approach to hybrid systems design.
As a step in this direction, we make the case for a semantic-aware interchange format, which would enable the use of joint techniques, make a formal comparison between different approaches possible, and facilitate exporting and importing design representations.

[CPSV13] 
L. Carnevali, M. Paolieri, A. Santoni, and E. Vicario.
Non-Markovian analysis for model-driven engineering of real-time
software.
In Proceedings of the 4th ACM/SPEC International Conference on
Performance Engineering (ICPE '13), pages 113-124, Prague, Czech Republic,
2013. ACM, New York, USA.
Quantitative evaluation of models with stochastic timings can decisively support schedulability analysis and performance engineering of real-time concurrent systems. These tasks require modeling formalisms and solution techniques that can encompass stochastic temporal parameters firmly constrained within a bounded support, thus breaking the limits of Markovian approaches. The problem is further exacerbated by the need to represent suspension of timers, which results from common patterns of real-time programming. This poses relevant challenges both in the theoretical development of non-Markovian solution techniques and in their practical integration within a viable tailoring of industrial processes.

[CS13] 
A. Chakarov and S. Sankaranarayanan.
Probabilistic program analysis with martingales.
In N. Sharygina and H. Veith, editors, Computer Aided
Verification: Proceedings of the 25th International Conference (CAV 2013),
volume 8044 of Lecture Notes in Computer Science, pages 511-526.
Springer-Verlag, Berlin, 2013.
We present techniques for the analysis of infinite state probabilistic programs to synthesize probabilistic invariants and prove almost-sure termination. Our analysis is based on the notion of (super) martingales from probability theory. First, we define the concept of (super) martingales for loops in probabilistic programs. Next, we present the use of concentration of measure inequalities to bound the values of martingales with high probability. This directly allows us to infer probabilistic bounds on assertions involving the program variables. Next, we present the notion of a supermartingale ranking function (SMRF) to prove almost-sure termination of probabilistic programs. Finally, we extend constraint-based techniques to synthesize martingales and supermartingale ranking functions for probabilistic programs. We present some applications of our approach to reason about invariance and termination of small but complex probabilistic programs.

[CMS06] 
S. Chakraborty, J. Mekie, and D. K. Sharma.
Reasoning about synchronization in GALS systems.
Formal Methods in System Design, 28(2):153-169, 2006.
Correct design of interface circuits is crucial for the development of System-on-Chips (SoC) using off-the-shelf IP cores. For correct operation, an interface circuit must meet strict synchronization timing constraints, and also respect sequencing constraints between events dictated by interfacing protocols and rational clock relations. In this paper, we propose a technique for automatically analyzing the interaction between independently specified synchronization constraints and sequencing constraints between events. We show how this analysis can be used to derive delay constraints for correct operation of interface circuits in a GALS system. Our methodology allows an SoC designer to mix and match different interfacing protocols, rational clock relations and synchronization constraints for communication between a pair of modules, and automatically explore their implications on correct interface circuit design.

[CMC08] 
L. Chen, A. Miné, and P. Cousot.
A sound floating-point polyhedra abstract domain.
In G. Ramalingam, editor, Proceedings of the 6th Asian Symposium
on Programming Languages and Systems (APLAS 2008), volume 5356 of
Lecture Notes in Computer Science, pages 3-18, Bangalore, India, 2008.
Springer-Verlag, Berlin.
The polyhedra abstract domain is one of the most powerful and commonly used numerical abstract domains in the field of static program analysis based on abstract interpretation. In this paper, we present an implementation of the polyhedra domain using floating-point arithmetic without sacrificing soundness. Floating-point arithmetic allows a compact memory representation and an efficient implementation on current hardware, at the cost of some loss of precision due to rounding. Our domain is based on a constraint-only representation and employs sound floating-point variants of Fourier-Motzkin elimination and linear programming. The preliminary experimental results of our prototype are encouraging. To our knowledge, this is the first time that the polyhedra domain is adapted to floating-point arithmetic in a sound way.

[CKSW13] 
T. Chen, M. Kwiatkowska, A. Simaitis, and C. Wiltsche.
Synthesis for multi-objective stochastic games: An application to
autonomous urban driving.
In K. Joshi, M. Siegle, M. Stoelinga, and P. R. D'Argenio, editors,
Quantitative Evaluation of Systems: Proceedings of the 10th
International Conference (QEST 2013), volume 8054 of Lecture Notes in
Computer Science, pages 322-337, Buenos Aires, Argentina, 2013.
Springer-Verlag, Berlin.
We study strategy synthesis for stochastic two-player games with multiple objectives expressed as a conjunction of LTL and expected total reward goals. For stopping games, the strategies are constructed from the Pareto frontiers that we compute via value iteration. Since, in general, infinite memory is required for deterministic winning strategies in such games, our construction takes advantage of randomised memory updates in order to provide compact strategies. We implement our methods in PRISM-games, a model checker for stochastic multi-player games, and present a case study motivated by the DARPA Urban Challenge, illustrating how our methods can be used to synthesise strategies for high-level control of autonomous vehicles.

[CS11] 
M. Colón and S. Sankaranarayanan.
Generalizing the template polyhedral domain.
In G. Barthe, editor, Proceedings of the 20th European Symposium
on Programming (ESOP 2011), volume 6602 of Lecture Notes in Computer
Science, pages 176-195, Saarbrücken, Germany, 2011. Springer-Verlag,
Berlin.
Template polyhedra generalize weakly relational domains by specifying arbitrary fixed linear expressions on the left-hand sides of inequalities and undetermined constants on the right. The domain operations required for analysis over template polyhedra can be computed in polynomial time using linear programming. In this paper, we introduce the generalized template polyhedral domain that extends template polyhedra using fixed left-hand side expressions with bilinear forms involving program variables and unknown parameters to the right. We prove that the domain operations over generalized templates can be defined as the “best possible abstractions” of the corresponding polyhedral domain operations. The resulting analysis can straddle the entire space of linear relation analysis starting from the template domain to the full polyhedral domain. We show that analysis in the generalized template domain can be performed by dualizing the join, postcondition and widening operations. We also investigate the special case of template polyhedra wherein each bilinear form has at most two parameters. For this domain, we use the special properties of two-dimensional polyhedra and techniques from fractional linear programming to derive domain operations that can be implemented in polynomial time over the number of variables in the program and the size of the polyhedra. We present applications of generalized template polyhedra to strengthen previously obtained invariants by converting them into templates. We describe an experimental evaluation of an implementation over several benchmark systems.

[CFM13] 
G. Costantini, P. Ferrara, and G. Maggiore.
The domain of parametric hypercubes for static analysis of computer
games software.
In L. Groves and J. Sun, editors, Formal Methods and Software
Engineering: Proceedings of 15th International Conference (ICFEM 2013),
volume 8144 of Lecture Notes in Computer Science, pages 447-463,
Queenstown, New Zealand, 2013. Springer-Verlag, Berlin.
Computer Games Software deeply relies on physics simulations, which are particularly demanding to analyze because they manipulate a large amount of interleaving floating-point variables. Therefore, this application domain is an interesting workbench to stress the trade-off between accuracy and efficiency of abstract domains for static analysis.

[CFBV06] 
M. Cova, V. Felmetsger, G. Banks, and G. Vigna.
Static detection of vulnerabilities in x86 executables.
In Proceedings of the 22nd Annual Computer Security Applications
Conference (ACSAC 22), pages 269-278, Miami, Florida, USA, 2006. IEEE
Computer Society Press.
In the last few years, several approaches have been proposed to perform vulnerability analysis of applications written in high-level languages. However, little has been done to automatically identify security-relevant flaws in binary code.

[DT12] 
T. Dang and R. Testylier.
Reachability analysis for polynomial dynamical systems using the
Bernstein expansion.
Reliable Computing, 17(2):128-152, 2012.
This paper is concerned with the reachability computation problem for polynomial discrete-time dynamical systems. Such computations constitute a crucial component in algorithmic verification tools for hybrid systems and embedded software with polynomial dynamics, which have found applications in many engineering domains. We describe two methods for over-approximating the reachable sets of such systems; these methods are based on a combination of the Bernstein expansion of polynomial functions and a representation of reachable sets by template polyhedra. Using a prototype implementation, the performance of the methods was demonstrated on a number of examples of control systems and biological systems.

[DGD07] 
T. Denmat, A. Gotlieb, and M. Ducassé.
An abstract interpretation based combinator for modelling while loops
in constraint programming.
In C. Bessiere, editor, Proceedings of the 13th International
Conference on Principles and Practice of Constraint Programming (CP 2007),
volume 4741 of Lecture Notes in Computer Science, pages 241-255,
Providence, Rhode Island, USA, 2007. Springer-Verlag, Berlin.
We present the w constraint combinator that models while loops in Constraint Programming. Embedded in a finite domain constraint solver, it allows programmers to develop nontrivial arithmetical relations using loops, exactly as in an imperative language style. The deduction capabilities of this combinator come from abstract interpretation over the polyhedra abstract domain. This combinator has already demonstrated its utility in constraintbased verification and we argue that it also facilitates the rapid prototyping of arithmetic constraints (e.g. power, gcd or sum).

[DM05] 
D. Doose and Z. Mammeri.
Polyhedra-based approach for incremental validation of real-time
systems.
In L. T. Yang, M. Amamiya, Z. Liu, M. Guo, and F. J. Rammig, editors,
Proceedings of the International Conference on Embedded and Ubiquitous
Computing (EUC 2005), volume 3824 of Lecture Notes in Computer
Science, pages 184-193, Nagasaki, Japan, 2005. Springer-Verlag, Berlin.
Real-time embedded systems can be used in highly important or even vital tasks (avionic and medical systems, etc.), and thus have strict temporal constraints that need to be validated. Existing solutions use temporal logic, automata or scheduling techniques. However, scheduling techniques are often pessimistic and require an almost complete knowledge of the system, and formal methods can be ill-fitted to manipulate some of the concepts involved in real-time systems. In this article, we propose a method that gives the designer the advantages of formal methods and some simplicity in manipulating real-time systems notions. This method is able to model and validate all the classical features of real-time systems, without any pessimism, while guaranteeing the termination of the validation process. Moreover, its formalism enables the study of systems of which we have only partial knowledge, and thus the validation or invalidation of a system still under design. This last point is very important, since it greatly decreases the cost of design backtracks.

[DHR05a] 
L. Doyen, T. A. Henzinger, and J.-F. Raskin.
Automatic rectangular refinement of affine hybrid systems.
In P. Pettersson and W. Yi, editors, Proceedings of the 3rd
International Conference on Formal Modeling and Analysis of Timed Systems
(FORMATS 2005), volume 3829 of Lecture Notes in Computer Science,
pages 144-161, Uppsala, Sweden, 2005. Springer-Verlag, Berlin.
We show how to automatically construct and refine rectangular abstractions of systems of linear differential equations. From a hybrid automaton whose dynamics are given by a system of linear differential equations, our method computes automatically a sequence of rectangular hybrid automata that are increasingly precise over-approximations of the original hybrid automaton. We prove an optimality criterion for successive refinements. We also show that this method can take into account a safety property to be verified, refining only relevant parts of the state space. The practicability of the method is illustrated on a benchmark case study.

[DHR05b] 
L. Doyen, T. A. Henzinger, and J.-F. Raskin.
Automatic rectangular refinement of affine hybrid systems.
Technical Report 2005.47, Centre Fédéré en Vérification,
Université Libre de Bruxelles, Belgium, 2005.
We show how to automatically construct and refine rectangular abstractions of systems of linear differential equations. From a hybrid automaton whose dynamics are given by a system of linear differential equations, our method computes automatically a sequence of rectangular hybrid automata that are increasingly precise over-approximations of the original hybrid automaton. We prove an optimality criterion for successive refinements. We also show that this method can take into account a safety property to be verified, refining only relevant parts of the state space. The practicability of the method is illustrated on a benchmark case study.

[Doy06] 
L. Doyen.
Algorithmic Analysis of Complex Semantics for Timed and Hybrid
Automata.
PhD thesis, Université Libre de Bruxelles, Bruxelles, Belgium, June
2006.
In the field of formal verification of real-time systems, major developments have been recorded in the last fifteen years, concerning logics, automata, process algebra, programming languages, etc. From the beginning, a formalism has played an important role: timed automata and their natural extension, hybrid automata. Those models allow the definition of real-time constraints using real-valued clocks, or more generally analog variables whose evolution is governed by differential equations. They generalize finite automata in that their semantics defines timed words where each symbol is associated with an occurrence timestamp.

[Ell04] 
R. Ellenbogen.
Fully automatic verification of absence of errors via interprocedural
integer analysis.
Master's thesis, School of Computer Science, Tel-Aviv University,
Tel-Aviv, Israel, December 2004.
We present an interprocedural C String Static Verifier (iCSSV), a whole-program analysis algorithm for verifying the safety of string operations in C programs. The algorithm automatically proves linear relationships among pointer expressions. The algorithm is conservative, i.e., it infers only valid relationships, although it may fail to detect some of them. The algorithm is targeted at programs with “shallow” pointers and complex integer relationships. Therefore, the algorithm combines context-sensitive flow-insensitive pointer analysis of pointer updates with context-sensitive and flow-sensitive integer analysis of properties of allocation sites. Context-sensitivity is achieved by specializing pointer aliases to the context and functional integer analysis. The algorithm is powerful enough to verify the absence of string manipulation errors such as accesses beyond the buffer length and the null-terminating character. Here the interprocedural analysis guarantees that our algorithm is fully automatic, i.e., it does not require user annotations or any other intervention. A prototype of the algorithm was implemented. Several novel techniques are employed to make the interprocedural analysis of realistic programs feasible.

[FR09] 
F. Fages and A. Rizk.
From model-checking to temporal logic constraint solving.
In I. P. Gent, editor, Proceedings of the 15th International
Conference on Principles and Practice of Constraint Programming (CP 2009),
volume 5732 of Lecture Notes in Computer Science, pages 319-334,
Lisbon, Portugal, 2009. Springer-Verlag, Berlin.
In this paper, we show how model-checking can be generalized to temporal logic constraint solving, by considering temporal logic formulae with free variables over some domain D, and by computing a validity domain for the variables rather than a truth value for the formula. This allows us to define a continuous degree of satisfaction for a temporal logic formula in a given structure, opening up the field of model-checking to optimization. We illustrate this approach with reverse-engineering problems coming from systems biology, and provide some performance figures on parameter optimization problems with respect to temporal logic specifications.

[FGZS12] 
H. Fang, J. Guo, H. Zhu, and J. Shi.
Formal verification and simulation: Co-verification for subway
control systems.
In T. Margaria, Z. Qiu, and H. Yang, editors, Sixth
International Symposium on Theoretical Aspects of Software Engineering
(TASE), pages 145-152, Beijing, 2012. IEEE Xplore Digital Library.
For hybrid systems, hybrid automata based tools are capable of verification while Matlab Simulink/Stateflow is proficient in simulation. In this paper, a methodology is developed in which the formal verification tool PHAVer and simulation tool Matlab are integrated to analyze and verify hybrid systems. For application of this methodology, a Platform Screen Doors System (abbreviated as PSDS), a subsystem of the subway, is modeled with formal verification techniques based on hybrid automata and Matlab Simulink/Stateflow charts, respectively. The models of PSDS are simulated by Matlab and verified by PHAVer. It is verified that the sandwich situation can be avoided under time interval conditions. We conclude that this integration methodology is competent in verifying Platform Screen Doors System.

[FM11] 
D. Fenacci and K. MacKenzie.
Static resource analysis for Java bytecode using amortisation and
separation logic.
In Proceedings of the 6th Workshop on Bytecode Semantics,
Verification, Analysis and Transformation (Bytecode 2011), volume 279 of
Electronic Notes in Theoretical Computer Science, pages 19-32,
Saarbrücken, Germany, 2011. Elsevier Science B.V.
In this paper we describe a static analyser for Java bytecode which uses a combination of amortised analysis and Separation Logic due to Robert Atkey. With the help of Java annotations we are able to give precise resource utilisation constraints for Java methods which manipulate various heap-based data structures.

[FPPS12b] 
F. Fioravanti, A. Pettorossi, M. Proietti, and V. Senni.
Using real relaxations during program specialization.
In G. Vidal, editor, Logic Program Synthesis and Transformation:
Proceedings of the 21st International Symposium, volume 7225 of Lecture
Notes in Computer Science, pages 106-122, Odense, Denmark, 2012.
Springer-Verlag, Berlin.
We propose a program specialization technique for locally stratified CLP(Z) programs, that is, logic programs with linear constraints over the set Z of the integer numbers. For reasons of efficiency our technique makes use of a relaxation from integers to reals. We reformulate the familiar unfold/fold transformation rules for CLP programs so that: (i) the applicability conditions of the rules are based on the satisfiability or entailment of constraints over the set R of the real numbers, and (ii) every application of the rules transforms a given program into a new program with the same perfect model constructed over Z. Then, we introduce a strategy which applies the transformation rules for specializing CLP(Z) programs with respect to a given query. Finally, we show that our specialization strategy can be applied for verifying properties of infinite state reactive systems specified by constraints over Z.

[FPPS12a] 
F. Fioravanti, A. Pettorossi, M. Proietti, and V. Senni.
Improving reachability analysis of infinite state systems by
specialization.
Fundamenta Informaticae, 119(3-4):281-300, 2012.
We consider infinite state reactive systems specified by using linear constraints over the integers, and we address the problem of verifying safety properties of these systems by applying reachability analysis techniques. We propose a method based on program specialization, which improves the effectiveness of the backward and forward reachability analyses. For backward reachability our method consists in: (i) specializing the reactive system with respect to the initial states, and then (ii) applying to the specialized system the reachability analysis that works backwards from the unsafe states. For reasons of efficiency, during specialization we make use of a relaxation from integers to reals. In particular, we test the satisfiability or entailment of constraints over the real numbers, while preserving the reachability properties of the reactive systems when constraints are interpreted over the integers. For forward reachability our method works as for backward reachability, except that the roles of the initial states and the unsafe states are interchanged. We have implemented our method using the MAP transformation system and the ALV verification system. Through various experiments performed on several infinite state systems, we have shown that our specialization-based verification technique considerably increases the number of successful verifications without a significant degradation of the time performance.

[Fle05] 
A. Flexeder.
Interprozedurale Analyse linearer Ungleichungen.
Diploma thesis, Technische Universität München, München,
Germany, July 2005.
In German.
This thesis describes an intra- and interprocedural data-flow analysis that can statically determine, at each program point, the relations that hold between the program variables. The intraprocedural analysis, based on a model by Cousot [P. Cousot and N. Halbwachs. Automatic discovery of linear restraints among variables of a program. Conference Record of the 5th Annual ACM Symposium on Principles of Programming Languages, pages 84-96, 1978], interprets linear assignments and conditions and handles the non-linear constructs by means of non-linear assignments. With this abstraction, one attempts to discover linear equality and inequality relations between the program variables in the form of polyhedra. Since one wants to analyse not just a single function but whole programs, as the interplay of several functions, an interprocedural analysis is needed [M. Mueller-Olm and H. Seidl. Precise Interprocedural Analysis through Linear Algebra. POPL, 2004]. Using the tools of linear algebra, it is to detect the affine relations that hold between the program variables at a given program point. The treatment of procedure calls is its main focus.

[FMP13] 
A. Fouilhé, D. Monniaux, and M. Périn.
Efficient generation of correctness certificates for the abstract
domain of polyhedra.
In F. Logozzo and M. Fähndrich, editors, Static Analysis:
Proceedings of the 20th International Symposium (SAS 2013), volume 7935 of
Lecture Notes in Computer Science, pages 345-365, Seattle, USA, 2013.
Springer-Verlag, Berlin.
Polyhedra form an established abstract domain for inferring runtime properties of programs using abstract interpretation. Computations on them need to be certified for the whole static analysis results to be trusted. In this work, we look at how far we can get down the road of a posteriori verification to lower the overhead of certification of the abstract domain of polyhedra. We demonstrate methods for making the cost of inclusion certificate generation negligible. From a performance point of view, our single-representation, constraints-based implementation compares with state-of-the-art implementations.

[FM02] 
S. Frank and P. R. Mai.
Strategies for cooperating constraint solvers.
Diploma thesis, Technische Universität Berlin, Berlin, Germany,
July 2002.
Cooperative constraint solving has been investigated by several different research groups and individuals as it provides a comfortable mechanism to attack multi-domain constraint problems. The theoretical framework of Hofstedt [P. Hofstedt. Cooperation and Coordination of Constraint Solvers. PhD thesis, Technische Universität Dresden, March 2001. Shaker Verlag, Aachen] provided the basis for the prototypical implementation described in [E. Godehardt and D. Seifert. Kooperation und Koordination von Constraint Solvern - Implementierung eines Prototyps. Master's thesis, Technische Universität Berlin, January 2001]. Taking on board the lessons learned in the prototype, we introduce a revised implementation of the framework, to serve as a flexible basis for the conception and evaluation of advanced strategies for solver cooperation. Several additional enhancements and optimisations over the preceding implementation or the underlying theoretical framework are described, proving the correctness of those changes where necessary. Using the newly implemented framework, we propose and benchmark a set of new cooperation strategies, iteratively refining them to the point where we can offer a set of generally useful (i.e. non-problem-specific) strategies. Finally we introduce a strategy language that allows the user to define problem-specific strategies, either from scratch or derived from other strategies, and demonstrate the effectiveness of the language on a well-known example problem.

[Fre04] 
G. Frehse.
Compositional verification of hybrid systems with discrete
interaction using simulation relations.
In Proceedings of the IEEE Conference on Computer Aided Control
Systems Design (CACSD 2004), Taipei, Taiwan, 2004.
Simulation relations can be used to verify refinement between a system and its specification, or between models of different complexity. It is known that for the verification of safety properties, simulation between hybrid systems can be defined based on their labeled transition system semantics. We show that for hybrid systems without shared variables, which therefore only interact at discrete events, this simulation preorder is compositional, and present assumeguarantee rules that help to counter the state explosion problem. Some experimental results for simulation checking of linear hybrid automata are provided using a prototype tool with exact arithmetic and unlimited digits.

[FHK04] 
G. Frehse, Z. Han, and B. Krogh.
Assume-guarantee reasoning for hybrid I/O-automata by
over-approximation of continuous interaction.
In Proceedings of the 43rd IEEE Conference on Decision and
Control (CDC 2004), Atlantis, Paradise Island, Bahamas, 2004.
This paper extends assume-guarantee reasoning (AGR) to hybrid dynamic systems that interact through continuous state variables. We use simulation relations for timed transition systems to analyze compositions of hybrid I/O automata. This makes it possible to perform compositional reasoning that is conservative in the sense of over-approximating the composed behaviors. In contrast to previous approaches that require global receptivity conditions, circularity is broken in our approach by a state-based non-blocking condition that can be checked in the course of computing the AGR simulation relations. The proposed procedures for AGR are implemented in a computational tool for the case of linear hybrid I/O automata, and the approach is illustrated with a simple example.

[Fre05b] 
G. Frehse.
PHAVer: Algorithmic verification of hybrid systems past HyTech.
In M. Morari and L. Thiele, editors, Hybrid Systems: Computation
and Control: Proceedings of the 8th International Workshop (HSCC 2005),
volume 3414 of Lecture Notes in Computer Science, pages 258-273,
Zürich, Switzerland, 2005. Springer-Verlag, Berlin.
In 1995, HyTech broke new ground as a potentially powerful tool for verifying hybrid systems, yet it has remained severely limited in its applicability to more complex systems. We address the main problems of HyTech with PHAVer, a new tool for the exact verification of safety properties of hybrid systems with piecewise constant bounds on the derivatives. Affine dynamics are handled by on-the-fly over-approximation and by partitioning the state space based on user-definable constraints and the dynamics of the system. PHAVer's exact arithmetic is robust due to the use of the Parma Polyhedra Library, which supports arbitrarily large numbers. To manage the complexity of the polyhedral computations, we propose methods to conservatively limit the number of bits and constraints of polyhedra. Experimental results for a navigation benchmark and a tunnel diode circuit show the effectiveness of the approach.

[Fre05a] 
G. Frehse.
Compositional Verification of Hybrid Systems using Simulation
Relations.
PhD thesis, Radboud Universiteit Nijmegen, Nijmegen, The Netherlands,
October 2005.

[Fre08] 
G. Frehse.
PHAVer: Algorithmic verification of hybrid systems past HyTech.
Software Tools for Technology Transfer, 10(3):263-279, 2008.
In 1995, HyTech broke new ground as a potentially powerful tool for verifying hybrid systems, yet its applicability remains limited to relatively simple systems. We address the main problems of HyTech with PHAVer, a new tool for the exact verification of safety properties of hybrid systems with piecewise constant bounds on the derivatives. Affine dynamics are handled by on-the-fly over-approximation, partitioning the state space based on user-definable constraints and the dynamics of the system. PHAVer features exact arithmetic in a robust implementation that, based on the Parma Polyhedra Library, supports arbitrarily large numbers. To manage the complexity of the polyhedral computations, we propose methods to conservatively limit the number of bits and constraints of polyhedra. Experimental results for a navigation benchmark and a tunnel diode circuit show the effectiveness of the approach.

[FKRM06] 
G. Frehse, B. H. Krogh, R. A. Rutenbar, and O. Maler.
Time domain verification of oscillator circuit properties.
In Proceedings of the First Workshop on Formal Verification of
Analog Circuits (FAC 2005), volume 153 of Electronic Notes in
Theoretical Computer Science, pages 9-22, Edinburgh, Scotland, 2006.
Elsevier Science B.V.
The application of formal methods to analog and mixed signal circuits requires efficient methods for constructing abstractions of circuit behaviors. This paper concerns the verification of properties of oscillator circuits. Generic monitor automata are proposed to facilitate the application of hybrid system reachability computations to characterize time domain features of oscillatory behavior, such as bounds on the signal amplitude and jitter. The approach is illustrated for a nonlinear tunnel-diode circuit model using PHAVer, a hybrid system analysis tool that provides sound verification results based on linear hybrid automata approximations and infinite precision computations.

[FKR06] 
G. Frehse, B. H. Krogh, and R. A. Rutenbar.
Verifying analog oscillator circuits using forward/backward
refinement.
In Proceedings of the 9th Conference on Design, Automation and
Test in Europe (DATE 06), Munich, Germany, 2006. ACM SIGDA.
CD-ROM publication.
Properties of analog circuits can be verified formally by partitioning the continuous state space and applying hybrid system verification techniques to the resulting abstraction. To verify properties of oscillator circuits, cyclic invariants need to be computed. Methods based on forward reachability have proven to be inefficient and in some cases inadequate in constructing these invariant sets. In this paper we propose a novel approach combining forward and backward reachability while iteratively refining partitions at each step. The technique can yield dramatic memory and runtime reductions. We illustrate the effectiveness by verifying, for the first time, the limit cycle oscillation behavior of a third-order model of a differential VCO circuit.

[FK11] 
L. Fribourg and U. Kühne.
Parametric verification and test coverage for hybrid automata using
the inverse method.
In G. Delzanno and I. Potapov, editors, Proceedings of the
5th International Workshop on Reachability Problems (RP 2011), volume 6945
of Lecture Notes in Computer Science, pages 191-204, Genova, Italy,
2011. Springer-Verlag, Berlin.
Hybrid systems combine continuous and discrete behavior. Hybrid Automata are a powerful formalism for the modeling and verification of such systems. A common problem in hybrid system verification is the good parameters problem, which consists in identifying a set of parameter valuations which guarantee a certain behavior of a system. Recently, a method has been presented for attacking this problem for Timed Automata. In this paper, we show the extension of this methodology for hybrid automata with linear and affine dynamics. The method is demonstrated with a hybrid system benchmark from the literature.

[FK13] 
L. Fribourg and U. Kühne.
Parametric verification and test coverage for hybrid automata using
the inverse method.
International Journal of Foundations of Computer Science,
24(2):233-249, 2013.
Hybrid systems combine continuous and discrete behavior. Hybrid Automata are a powerful formalism for the modeling and verification of such systems. A common problem in hybrid system verification is the good parameters problem, which consists in identifying a set of parameter valuations which guarantee a certain behavior of a system. Recently, a method has been presented for attacking this problem for Timed Automata. In this paper, we show the extension of this methodology for hybrid automata with linear and affine dynamics. The method is demonstrated with a hybrid system benchmark from the literature.

[Fu13] 
Z. Fu.
Static Analysis of Numerical Properties in the Presence of
Pointers.
PhD thesis (“Docteur de l'Université de Rennes 1”), École
doctorale MATISSE, Université de Rennes 1 under the seal of the
Université Européenne de Bretagne, Rennes, France, December 2013.
The fast and furious pace of change in computing technology has become an article of faith for many. The reliability of computer-based systems crucially depends on the correctness of its computing. Can man, who created the computer, be capable of preventing machine-made misfortune? The theory of static analysis strives to achieve this ambition. The analysis of numerical properties of programs has been an essential research topic for static analysis. These kinds of properties are commonly modeled and handled by the concept of numerical abstract domains. Unfortunately, lifting these domains to heap-manipulating programs is not obvious. On the other hand, points-to analyses have been intensively studied to analyze pointer behaviors and some scale to very large programs but without inferring any numerical properties. We propose a framework based on the theory of abstract interpretation that is able to combine existing numerical domains and points-to analyses in a modular way. The static numerical analysis is prototyped using the SOOT framework for pointer analyses and the PPL library for numerical domains. The implementation is able to analyze large Java programs within several minutes. The second part of this thesis consists of a theoretical study of the combination of the points-to analysis with another pointer analysis providing information called must-alias. Two pointer variables must alias at some program control point if they hold equal references whenever the control point is reached. We have developed an algorithm of quadruple complexity that sharpens points-to analysis using must-alias information. The algorithm is proved correct following a semantics-based formalization and the concept of bisimulation borrowed from game theory, model checking, etc.

[Fu14a] 
Z. Fu.
Modularly combining numeric abstract domains with points-to analysis,
and a scalable static numeric analyzer for Java.
In K. McMillan and X. Rival, editors, Verification, Model
Checking, and Abstract Interpretation: Proceedings of the 15th International
Conference (VMCAI 2014), San Diego, US, 2014. SpringerVerlag, Berlin.
To appear.
This paper contributes a new abstract domain that combines static numeric analysis and points-to analysis. One particularity of this abstract domain lies in its high degree of modularity, in the sense that the domain is constructed by reusing its combined components as black boxes. This modularity dramatically eases the proof of its soundness and renders its algorithm intuitive. We have prototyped the abstract domain for analyzing real-world Java programs. Our experimental results show a tangible precision enhancement compared to what is possible by traditional static numeric analysis, and this at a cost that is comparable to the cost of running the numeric and pointer analyses separately.

[Fu14b] 
Z. Fu.
Targeted update - aggressive memory abstraction beyond common sense
and its application on static numeric analysis.
In Z. Shao, editor, Proceedings of the 23rd European Symposium
on Programming (ESOP'14), Grenoble, France, 2014. SpringerVerlag, Berlin.
To appear.
Summarizing techniques are widely used in reasoning about unbounded data structures. These techniques prohibit strong update unless certain restricted safety conditions are satisfied. We find that by setting and enforcing the analysis boundaries to a limited scope of program identifiers, called targets in this paper, more cases of strong update can be shown sound, not with regard to the entire heap, but with regard to the targets. We have implemented the analysis for inferring numeric properties in Java programs. The experimental results show a tangible precision enhancement compared with classical approaches while preserving high scalability.

[dMGP13] 
M. d. M. Gallardo and L. Panizo.
Extending model checkers for hybrid system verification: The case
study of SPIN.
Software Testing, Verification and Reliability, 2013.
Early View (Online Version of Record published before inclusion in an
issue).
A hybrid system is a system that evolves following a continuous dynamic, which may instantaneously change when certain internal or external events occur. Because of this combination of discrete and continuous dynamics, the behaviour of a hybrid system is, in general, difficult to model and analyse. Model checking techniques have been proven to be an excellent approach to analyse critical properties of complex systems. This paper presents a new methodology to extend explicit model checkers for hybrid systems analysis. The explicit model checker is integrated, in a non-intrusive way, with some external structures and existing abstraction libraries, which store and manipulate the abstraction of the continuous behaviour irrespective of the underlying model checker. The methodology is applied to SPIN using the Parma Polyhedra Library. In addition, the authors are currently working on the extension of other model checkers.

[GKT13] 
P.-L. Garoche, T. Kahsai, and C. Tinelli.
Incremental invariant generation using logic-based automatic abstract
transformers.
In G. Brat, N. Rungta, and A. Venet, editors, NASA Formal
Methods: Proceedings of the 5th International Symposium (NFM 2013), volume
7871 of Lecture Notes in Computer Science, pages 139-154, Moffett
Field, CA, USA, 2013. Springer-Verlag, Berlin.
Formal analysis tools for system models often require or benefit from the availability of auxiliary system invariants. Abstract interpretation is currently one of the best approaches for discovering useful invariants, in particular numerical ones. However, its application is limited by two orthogonal issues: (i) developing an abstract interpretation is often non-trivial; each transfer function of the system has to be represented at the abstract level, depending on the abstract domain used; (ii) with precise but costly abstract domains, the information computed by the abstract interpreter can be used only once a post-fixpoint has been reached; this may take a long time for large systems or when widening is delayed to improve precision. We propose a new, completely automatic, method to build abstract interpreters which, in addition, can provide sound invariants of the system under analysis before reaching the end of the post-fixpoint computation. In effect, such interpreters act as on-the-fly invariant generators and can be used by other tools such as logic-based model checkers. We present some experimental results that provide initial evidence of the practical usefulness of our method.

[GIBM12] 
K. Ghorbal, F. Ivančić, G. Balakrishnan, and N. Maeda.
Donut domains: Efficient non-convex domains for abstract
interpretation.
In V. Kuncak and A. Rybalchenko, editors, Verification, Model
Checking, and Abstract Interpretation: Proceedings of the 13th International
Conference (VMCAI 2012), volume 7148 of Lecture Notes in Computer
Science, pages 235-250, Philadelphia, PA, USA, 2012. Springer-Verlag, Berlin.
Program analysis using abstract interpretation has been successfully applied in practice to find runtime bugs or prove software correct. Most abstract domains that are used widely rely on convexity for their scalability. However, the ability to express non-convex properties is sometimes required in order to achieve a precise analysis of some numerical properties. This work combines already known abstract domains in a novel way in order to design new abstract domains that tackle some non-convex invariants. The abstract objects of interest are encoded as a pair of two convex abstract objects: the first abstract object defines an over-approximation of the possible reached values, as is done customarily. The second abstract object under-approximates the set of impossible values within the state space of the first abstract object. Therefore, the geometrical concretization of our objects is defined by a convex set minus another convex set (or hole). We thus call these domains donut domains.

[Gob07] 
F. Gobert.
Towards putting abstract interpretation of Prolog into
practice: design, implementation, and evaluation of a tool to verify and
optimise Prolog programs.
PhD thesis, Université catholique de Louvain, LouvainlaNeuve,
Belgium, December 2007.
Logic programming is an attractive paradigm that allows the programmer to concentrate on the meaning (the logic) of the problem to be solved - the declarative layer. An execution model is then used as the problem solver - the operational layer. In practice, for efficiency reasons, the semantics of the two layers do not always match. For instance, in Prolog, the computation of solutions is based on an incomplete depth-first search rule, unifications and negations may be unsound, some built-in language primitives are not multidirectional, and there exist extra-logical features like the cut or dynamic predicates. A large amount of work has been carried out to reconcile the declarative and operational features of Prolog. Methodologies have been proposed to construct operationally correct and efficient Prolog code. Researchers have designed methods to automate the verification of specific operational properties on which optimisation of logic programs can be based. A few tools have been implemented but there is a lack of a unified framework. The goal and topic of this thesis is the design, implementation, and evaluation of a static analyser of Prolog programs to integrate `state-of-the-art' techniques into a unified abstract interpretation framework. Abstract interpretation is an adequate methodology to design, justify, and combine complex analyses. The analyser that we present in this thesis is based on an original, previously unimplemented proposal. The original framework defines the notion of abstract sequence, which allows one to verify many desirable operational properties of a logic procedure. The properties include verifying type, mode, and sharing of terms, proving termination, sure success or failure, and determinacy of logic procedures, as well as linear relations between the size of input/output terms and the number of solutions to a call.
An abstract sequence maintains information about the input and output terms, as well as the non-failure conditions on input terms, and the number of solutions for such inputs. The domains of abstract sequences cooperate and improve each other. The abstract execution is performed during a single global analysis, and abstract sequences are derived at each program point (the information of the various domains is computed simultaneously). The intended operational properties of a procedure are written in formal specifications. The original framework is an interesting starting point for combining several analyses inside a unified framework. However, it is limited and inaccurate in many ways: it was not implemented, and therefore had not been validated by experiments; it accepts only a subset of Prolog (without negation, cut, conditional and disjunctive constructs); and some of the proposed domains are not precise enough. The basic framework is only oriented towards the verification of Prolog programs, and it cannot always prove the desirable properties. In this thesis, we implement and evaluate the basic framework, and, more importantly, we overcome its limitations to make it accurate and usable in practice: the improved framework accepts any Prolog program with modules, new abstract domains and operations are added, and the language of specifications is more expressive. We also design and implement an optimiser that generates specialised code. Optimisation is essential in Prolog, but it is not easy to perform by hand and it is error-prone. The optimiser uses the analysis information to safely apply source-to-source transformations. Code transformations include clause and literal reordering, introduction of cuts, and removal of redundant literals. The optimiser follows a precise strategy to choose the most rewarding transformations in the best order. This thesis shows the feasibility of a unified framework that integrates many complex analyses in a single global analysis.
Practically and theoretically, a single global analysis is more attractive than a combination of many separate analyses and frameworks. Many extensions have been made to obtain an accurate and usable tool devoted to the verification and optimisation of Prolog programs.

[GL07] 
F. Gobert and B. Le Charlier.
A system to check operational properties of logic programs.
In M.-L. Potet, P.-Y. Schobbens, H. Toussaint, and G. Saval, editors,
Approches Formelles dans l'Assistance au Développement de Logiciels:
Actes de la 8e conférence, pages 245-259. Université de Namur, Belgium,
2007.
An implemented static analyser of logic programs is presented. The system is based on a unified abstract interpretation framework, which allows the integration of several analyses devoted to verification and optimisation. The analyser is able to verify many desirable properties of logic programs executed with the search rule and other specific features of Prolog. Such operational properties include verifying type, mode, sharing, and linearity of terms, proving termination, occur-check freeness, sure success or failure, and determinacy of logic procedures, as well as linear relations between the size of input/output terms and the number of solutions to a call. It is emphasized how the analyses may contribute to each other.

[GH06] 
L. Gonnord and N. Halbwachs.
Combining widening and acceleration in linear relation analysis.
In K. Yi, editor, Static Analysis: Proceedings of the 13th
International Symposium, volume 4134 of Lecture Notes in Computer
Science, pages 144-160, Seoul, Korea, 2006. Springer-Verlag, Berlin.
Linear Relation Analysis [CH78,Hal79] is one of the first, but still one of the most powerful, abstract interpretations working in an infinite lattice. As such, it makes use of a widening operator to enforce the convergence of fixpoint computations. While the approximation due to widening can be arbitrarily refined by delaying the application of widening, the analysis quickly becomes too expensive with the increase of delay. Previous attempts at improving the precision of widening are not completely satisfactory, since none of them is guaranteed to improve the precision of the result, and they can nevertheless increase the cost of the analysis. In this paper, we investigate an improvement of Linear Relation Analysis consisting in computing, when possible, the exact (abstract) effect of a loop. This technique is fully compatible with the use of widening, and whenever it applies, it improves both the precision and the performance of the analysis.

[Gop07] 
D. Gopan.
Numeric Program Analysis Techniques with Applications to Array
Analysis and Library Summarization.
PhD thesis, University of Wisconsin, Madison, Wisconsin, USA, August
2007.
Numeric program analysis is of great importance for the areas of software engineering, software verification, and security: to identify many program errors, such as out-of-bounds array accesses and integer overflows, which constitute the lion's share of security vulnerabilities reported by CERT, an analyzer needs to establish numeric properties of program variables. Many important program analyses, such as low-level code analysis, memory-cleanness analysis, and shape analysis, rely in some ways on numeric-program-analysis techniques. However, existing numeric abstractions are complex (numeric abstract domains are typically non-distributive, and form infinite-height lattices); thus, obtaining precise numeric-analysis results is by no means a trivial undertaking. In this thesis, we develop a suite of techniques with the common goal of improving the precision and applicability of numeric program analysis. The techniques address various aspects of numeric analysis, such as handling dynamically allocated memory, dealing with programs that manipulate arrays, improving the precision of extrapolation (widening), and performing interprocedural analysis. The techniques use existing numeric abstractions as building blocks. The communication with existing abstractions is done strictly through a generic abstract-domain interface. The abstractions constructed by our techniques also expose that same interface, and thus are compatible with existing analysis engines. As a result, our techniques are independent from specific abstractions and specific analysis engines, can be easily incorporated into existing program-analysis tools, and should be readily compatible with new abstractions to be introduced in the future. A practical application of numeric analysis that we consider in this thesis is the automatic generation of summaries for library functions from their low-level implementation (that is, from a library's binary). The source code for library functions is typically not available. 
This poses a stumbling block for many source-level program analyses. Automatic generation of summary functions will both speed up and improve the accuracy of library modeling, a process that is currently carried out by hand. This thesis addresses the automatic generation of summaries for memory-safety analysis.

[GR06a] 
D. Gopan and T. W. Reps.
Lookahead widening.
In T. Ball and R. B. Jones, editors, Computer Aided
Verification: Proceedings of the 18th International Conference (CAV 2006),
volume 4144 of Lecture Notes in Computer Science, pages 452-466,
Seattle, Washington, USA, 2006. Springer-Verlag, Berlin.
We present lookahead widening, a novel technique for using existing widening and narrowing operators to improve the precision of static analysis. This technique is both self-contained and fully automatic in the sense that it does not rely on separate analyses or human involvement. We show how to integrate lookahead widening into existing analyzers with minimal effort. Experimental results indicate that the technique is able to achieve sizable precision improvements at reasonable costs.

[GR07b] 
D. Gopan and T. W. Reps.
Low-level library analysis and summarization.
In W. Damm and H. Hermanns, editors, Computer Aided Verification:
Proceedings of the 19th International Conference (CAV 2007), volume 4590 of
Lecture Notes in Computer Science, pages 68-81, Berlin, Germany, 2007.
Springer-Verlag, Berlin.
Programs typically make extensive use of libraries, including dynamically linked libraries, which are often not available in source-code form, and hence not analyzable by tools that work at the source level (i.e., that analyze intermediate representations created from source code). A common approach is to write library models by hand. A library model is a collection of function stubs and variable declarations that capture some aspect of the library code's behavior. Because these are hand-crafted, they are likely to contain errors, which may cause an analysis to return incorrect results.

[GR07a] 
D. Gopan and T. W. Reps.
Guided static analysis.
In G. Filé and H. R. Nielson, editors, Static Analysis:
Proceedings of the 14th International Symposium, volume 4634 of Lecture
Notes in Computer Science, pages 349-365, Kongens Lyngby, Denmark, 2007.
Springer-Verlag, Berlin.
In static analysis, the semantics of the program is expressed as a set of equations. The equations are solved iteratively over some abstract domain. If the abstract domain is distributive and satisfies the ascending-chain condition, an iterative technique yields the most precise solution for the equations. However, if the above properties are not satisfied, the solution obtained is typically imprecise. Moreover, due to the properties of widening operators, the precision loss is sensitive to the order in which the state-space is explored.

[GRS05] 
D. Gopan, T. W. Reps, and M. Sagiv.
A framework for numeric analysis of array operations.
In Proceedings of the 32nd ACM SIGPLAN-SIGACT Symposium on
Principles of Programming Languages, pages 338-350, Long Beach, California,
USA, 2005.
Automatic discovery of relationships among values of array elements is a challenging problem due to the unbounded nature of arrays. We present a framework for analyzing array operations that is capable of capturing numeric properties of array elements. In particular, the analysis is able to establish that all array elements are initialized by an array-initialization loop, as well as to discover numeric constraints on the values of initialized elements. The analysis is based on the combination of canonical abstraction and summarizing numeric domains. We describe a prototype implementation of the analysis and discuss our experience with applying the prototype to several examples, including the verification of correctness of an insertion-sort procedure.

[BG10b] 
G. Banda and J. P. Gallagher.
Constraintbased abstraction of a model checker for infinite state
systems.
In U. Geske and A. Wolf, editors, Proceedings of the 23rd
Workshop on (Constraint) Logic Programming (WLP 2009), pages 109-124,
Potsdam, Germany, 2010. Potsdam Universitätsverlag.
Abstract-interpretation-based model checking provides an approach to verifying properties of infinite-state systems. In practice, most previous work on abstract model checking is either restricted to verifying universal properties, or develops special techniques for temporal logics such as modal transition systems or other dual transition systems. By contrast, we apply completely standard techniques for constructing abstract interpretations to the abstraction of a CTL semantic function, without restricting the kind of properties that can be verified. Furthermore, we show that this leads directly to an implementation of abstract model checking algorithms for abstract domains based on constraints, making use of an SMT solver.

[BG10a] 
G. Banda and J. P. Gallagher.
Constraintbased abstract semantics for temporal logic: A direct
approach to design and implementation.
In E. Clarke and A. Voronkov, editors, Proceedings of the 17th
International Conference on Logic for Programming, Artificial Intelligence,
and Reasoning (LPAR 2010), volume 6355 of Lecture Notes in Computer
Science, pages 27-45, Yogyakarta, Indonesia, 2010. Springer-Verlag, Berlin.
Abstract interpretation provides a practical approach to verifying properties of infinite-state systems. We apply the framework of abstract interpretation to derive an abstract semantic function for the modal μ-calculus, which is the basis for abstract model checking. The abstract semantic function is constructed directly from the standard concrete semantics together with a Galois connection between the concrete state-space and an abstract domain. There is no need for mixed or modal transition systems to abstract arbitrary temporal properties, as in previous work in the area of abstract model checking. Using the modal μ-calculus to implement CTL, the abstract semantics gives an over-approximation of the set of states in which an arbitrary CTL formula holds. Then we show that this leads directly to an effective implementation of an abstract model checking algorithm for CTL using abstract domains based on linear constraints. The implementation of the abstract semantic function makes use of an SMT solver. We describe an implemented system for proving properties of linear hybrid automata and give some experimental results.

[Gro09] 
T. Grosser.
Optimization opportunities based on the polyhedral model in
GRAPHITE: How much impact has GRAPHITE already?
In Proceedings of the GCC Developers' Summit, pages 33-46,
Montreal, Quebec, Canada, June 2009.
The polytope model has been used for many years to describe standard loop optimizations like blocking, interchange, or fusion, but also advanced memory-access optimizations and automatic parallelization. Its exact mathematical description of memory accesses and loop iterations allows one to concentrate on the optimization problem and to take advantage of professional problem-solving tools developed for operational research. Until now, the polytope model was limited to research compilers or source-to-source transformations. Graphite generates a polytope description of all programs compiled by GCC. Therefore, polytope optimization techniques are no longer limited to hand-selected code pieces, but can actually be applied at large scale on real-world programs. By showing the impact of GRAPHITE on important benchmarks (“How much runtime is actually spent in code that can be optimized by polytope optimization techniques?”) we invite people to base their current polytope research on GRAPHITE, to make these optimizations available to the large set of GCC-compiled applications.

[GR06b] 
B. S. Gulavani and S. K. Rajamani.
Counterexample driven refinement for abstract interpretation.
In H. Hermanns and J. Palsberg, editors, Tools and Algorithms
for the Construction and Analysis of Systems: Proceedings of the 12th
International Conference (TACAS 2006), volume 3920 of Lecture Notes in
Computer Science, pages 474-488, Vienna, Austria, 2006. Springer-Verlag,
Berlin.
Abstract interpretation techniques prove properties of programs by computing abstract fixpoints. All such analyses suffer from the possibility of false errors. We present a new counterexample driven refinement technique to reduce false errors in abstract interpretations. Our technique keeps track of the precision losses during forward fixpoint computation, and does a precise backward propagation from the error to either confirm the error as a true error, or identify a refinement so as to avoid the false error. Our technique is quite simple, and is independent of the specific abstract domain used. An implementation of our technique for affine transition systems is able to prove invariants generated by the StInG tool [19] without doing any specialized analysis for linear relations. Thus, we hope that the technique can work for other abstract domains as well. We sketch how our technique can be used to perform shape analysis by simply defining an appropriate widening operator over shape graphs.

[GLAS09] 
S. Gulwani, T. LevAmi, and S. Sagiv.
A combination framework for tracking partition sizes.
In Z. Shao and B. C. Pierce, editors, Proceedings of the 36th
ACM SIGPLANSIGACT Symposium on Principles of Programming Languages (POPL
2009), pages 239-251, Savannah, Georgia, USA, 2009. ACM Press.
We describe an abstract-interpretation-based framework for proving relationships between sizes of memory partitions. Instances of this framework can prove traditional properties such as memory safety and program termination, but can also establish upper bounds on the usage of dynamically allocated memory. Our framework also stands out in its ability to prove properties of programs manipulating both the heap and arrays, which is considered a difficult task. Technically, we define an abstract domain that is parameterized by an abstract domain for tracking memory partitions (sets of memory locations) and by a numerical abstract domain for tracking relationships between cardinalities of the partitions. We describe algorithms to construct the transfer functions for the abstract domain in terms of the corresponding transfer functions of the parameterized abstract domains. A prototype of the framework was implemented and used to prove interesting properties of realistic programs, including programs that could not have been automatically analyzed before.

[HMG06] 
N. Halbwachs, D. Merchat, and L. Gonnord.
Some ways to reduce the space dimension in polyhedra computations.
Formal Methods in System Design, 29(1):79-95, 2006.
Convex polyhedra are often used to approximate sets of states of programs involving numerical variables. The manipulation of convex polyhedra relies on the so-called double description, consisting of viewing a polyhedron both as the set of solutions of a system of linear inequalities, and as the convex hull of a system of generators, i.e., a set of vertices and rays. The cost of these manipulations is highly dependent on the number of numerical variables, since the size of each representation can be exponential in the dimension of the space. In this paper, we investigate some ways of reducing the dimension: on one hand, when a polyhedron satisfies affine equations, these equations can obviously be used to eliminate some variables. On the other hand, when groups of variables are unrelated to each other, this means that the polyhedron is in fact a Cartesian product of polyhedra of lower dimensions. Detecting such Cartesian factoring is not very difficult, but we also adapt the operations to work on Cartesian products. Finally, we extend the applicability of Cartesian factoring by applying a suitable change of variables, in order to maximize the factoring.
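
The Cartesian-factoring detection described in this abstract reduces to a connectivity question: variables that never appear together in a constraint fall into independent groups. The sketch below is purely illustrative (the function name and representation are invented, and this is not the authors' algorithm or the PPL API):

```python
# Illustrative sketch: detect Cartesian factoring by grouping variables
# that co-occur in some constraint, using a union-find structure.

def factor_variables(constraints):
    """constraints: list of sets of variable names appearing together.
    Returns the independent variable groups, each sorted."""
    parent = {}

    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    def union(a, b):
        parent[find(a)] = find(b)

    for con in constraints:
        vs = sorted(con)          # deterministic order
        for v in vs:
            find(v)               # register every variable
        for v in vs[1:]:
            union(vs[0], v)

    groups = {}
    for v in parent:
        groups.setdefault(find(v), set()).add(v)
    return sorted(map(sorted, groups.values()))

# x and y are related; z and w are related; the two groups never interact,
# so the polyhedron factors into a product of two 2-dimensional polyhedra.
print(factor_variables([{"x", "y"}, {"z", "w"}, {"x"}, {"z", "w"}]))
# [['w', 'z'], ['x', 'y']]
```

Once the groups are known, each domain operation can be performed factor by factor, which is where the cost savings described in the paper come from.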

[HMPV03] 
N. Halbwachs, D. Merchat, and C. ParentVigouroux.
Cartesian factoring of polyhedra in linear relation analysis.
In R. Cousot, editor, Static Analysis: Proceedings of the 10th
International Symposium, volume 2694 of Lecture Notes in Computer
Science, pages 355-365, San Diego, California, USA, 2003. Springer-Verlag,
Berlin.
Linear Relation Analysis [CH78] suffers from the cost of operations on convex polyhedra, which can be exponential in the number of involved variables. In order to reduce this cost, we propose to detect when a polyhedron is a Cartesian product of polyhedra of lower dimensions, i.e., when groups of variables are unrelated to each other. Classical operations are adapted to work on such factored polyhedra. Our implementation shows encouraging experimental results.

[HG06] 
K. S. Henriksen and J. P. Gallagher.
Abstract interpretation of PIC programs through logic programming.
In Proceedings of the 6th IEEE International Workshop on Source
Code Analysis and Manipulation, pages 184-196, Sheraton Society Hill,
Philadelphia, PA, USA, 2006. IEEE Computer Society Press.
A general logic-based approach to abstract interpretation of low-level machine programs is reported. It is based on modelling the behavior of the machine as a logic program. General-purpose program analysis and transformation of logic programs, such as partial evaluation and convex hull analysis, are applied to the logic-based model of the machine.

[Hen07] 
K. S. Henriksen.
A Logic Programming Based Approach to Applying Abstract
Interpretation to Embedded Software.
PhD thesis, Computer Science, Roskilde University, Roskilde, Denmark,
October 2007.
Published as Computer Science Research Report #117.
Abstract interpretation is a general framework for static program analysis. In recent years this framework has been used outside academia for verification of embedded and real-time systems. Airbus and the European Space Agency are examples of organisations that have successfully adopted this analysis framework for verification of critical components. Logic programming is a programming paradigm with a sound mathematical foundation. One of its characteristics is the separation of logic (the meaning of a program) and control (how it is executed); hence logic programming, and in particular its extension with constraints, is a language comparatively well suited for program analysis. In this thesis logic programming is used to analyse software developed for embedded systems. The particular embedded system is modeled as an emulator written as a constraint logic program. The emulator is specialised with respect to some object program in order to obtain a constraint logic program isomorphic to this object program. Applying abstract-interpretation-based analysers to the specialised emulator will provide analysis results that can be directly related back to the object program, due to the isomorphism maintained between the object program and the specialised emulator. Two abstract-interpretation-based analysers for logic programs have been developed. The first is a convex polyhedron analyser for constraint logic programs implementing a set of widening techniques for improved precision of the analysis. The second analyser is a type analysis tool for logic programs that automatically derives a pre-interpretation from a regular type definition. Additionally, a framework for using a restricted form of logic programming, namely Datalog, to express and check program properties is described.
At the end of the thesis it is shown how instrumenting the semantics of the emulator can be used to obtain, for instance, a fully automatic Worst-Case Execution Time analysis by applying the convex polyhedron analyser to the instrumented and specialised emulator. The tools developed in this thesis have all been made available online for demonstration.

[HMM12] 
J. Henry, D. Monniaux, and M. Moy.
Succinct representations for abstract interpretation.
In A. Miné and D. Schmidt, editors, Static Analysis:
Proceedings of the 19th International Symposium (SAS 2012), volume 7460 of
Lecture Notes in Computer Science, pages 283-299, Deauville, France,
2012. Springer-Verlag, Berlin.
Abstract interpretation techniques can be made more precise by distinguishing paths inside loops, at the expense of possibly exponential complexity. SMT-solving techniques and sparse representations of paths and sets of paths avoid this pitfall.

[HK12] 
J. M. Howe and A. King.
Polyhedral analysis using parametric objectives.
In A. Miné and D. Schmidt, editors, Static Analysis:
Proceedings of the 19th International Symposium (SAS 2012), volume 7460 of
Lecture Notes in Computer Science, pages 41-57, Deauville, France,
2012. Springer-Verlag, Berlin.
The abstract domain of polyhedra lies at the heart of many program analysis techniques. However, its operations can be expensive, precluding their application to polyhedra that involve many variables. This paper describes a new approach to computing polyhedral domain operations. The core of this approach is an algorithm to calculate variable elimination (projection) based on parametric linear programming. The algorithm enumerates only non-redundant inequalities of the projection space, hence permits anytime approximation of the output.

[HU04] 
C. Hymans and E. Upton.
Static analysis of gated data dependence graphs.
In R. Giacobazzi, editor, Static Analysis: Proceedings of the
11th International Symposium, volume 3148 of Lecture Notes in Computer
Science, pages 197-211, Verona, Italy, 2004. Springer-Verlag, Berlin.
Several authors have advocated the use of the gated data dependence graph as a compiler intermediate representation. If this representation is to gain acceptance, it is important to show that we may construct static analyses which operate directly on it. In this paper we present the first example of such an analysis, developed using the methodology of abstract interpretation. The analysis is shown to be sound with respect to a concrete semantics for the representation. Experimental results are presented which indicate that the analysis performs well in comparison to conventional techniques.

[Jak12] 
K. Jakubczyk.
Sweeping in abstract interpretation.
Electronic Notes in Theoretical Computer Science, 288:25-36,
2012.
In this paper we present how sweeping-line techniques, which are very popular in computational geometry, can be adapted for static analysis of computer software by abstract interpretation. We show how the concept of a sweeping line can be used to represent elements of the numerical abstract domain of boxes, a disjunctive refinement of the well-known domain of intervals that allows a finite number of disjunctions. We provide a detailed description of the representation, along with algorithms for the standard domain operations. Furthermore, we introduce a very precise widening operator for the domain. Additionally, we show that the presented representation based on the sweeping-line technique is a generalisation of the representation that uses Linear Decision Diagrams (LDDs), which is one possible optimisation of our idea. We also show that the presented widening operator is often more precise than the previous one.

[JM09b] 
R. Jhala and R. Majumdar.
Software model checking.
ACM Computing Surveys, 41(4):1-54, 2009.
We survey recent progress in software model checking.

[Jea10] 
B. Jeannet.
Some experience on the software engineering of abstract
interpretation tools.
In Proceedings of Tools for Automatic Program AnalysiS (TAPAS
2010), volume 267 of Electronic Notes in Theoretical Computer Science,
pages 29-42, Perpignan, France, 2010. Elsevier Science B.V.
The “right” way of writing and structuring compilers is well-known. The situation is a bit less clear for static analysis tools. It seems to us that a static analysis tool is ideally decomposed into three building blocks: (1) a front-end, which parses programs, generates semantic equations, and supervises the analysis process; (2) a fixpoint equation solver, which takes equations and solves them; and (3) an abstract domain, on which equations are interpreted. The expected advantage of such a modular structure is the ability to share development efforts between analyzers for different languages, using common solvers and abstract domains. However, putting such ideal concepts into practice is not so easy, and some static analyzers merge, for instance, blocks (1) and (2).
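
The three-block decomposition advocated in this abstract can be sketched in a few lines; the names below are invented for illustration and do not come from the paper or from any real analyzer:

```python
# Illustrative sketch of the modular structure described above: a generic
# fixpoint solver (block 2) that talks to the abstract domain (block 3)
# only through a tiny interface (join, widen). A front-end (block 1) would
# supply the initial state and the transfer function.

class IntervalDomain:
    """Toy abstract domain: intervals as (lo, hi) pairs; None is bottom."""

    def join(self, a, b):
        if a is None: return b
        if b is None: return a
        return (min(a[0], b[0]), max(a[1], b[1]))

    def widen(self, a, b):
        # drop unstable bounds to infinity to force convergence
        if a is None: return b
        lo = a[0] if b[0] >= a[0] else float("-inf")
        hi = a[1] if b[1] <= a[1] else float("inf")
        return (lo, hi)

def solve(domain, init, transfer, max_joins=3):
    """Generic fixpoint solver: join a few rounds for precision,
    then switch to widening to guarantee termination."""
    state, n = init, 0
    while True:
        nxt = domain.join(state, transfer(state))
        n += 1
        if nxt == state:
            return state
        state = nxt if n < max_joins else domain.widen(state, nxt)

# Analyze "i = 0; loop: i = i + 1": the solver infers i in [0, +inf).
d = IntervalDomain()
inv = solve(d, (0, 0), lambda s: (s[0] + 1, s[1] + 1))
print(inv)  # (0, inf)
```

Because the solver only sees the domain through join and widen, the same solve function would work unchanged with any other abstraction exposing that interface, which is exactly the sharing argument the paper makes.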

[JM09a] 
B. Jeannet and A. Miné.
Apron: A library of numerical abstract domains for static analysis.
In A. Bouajjani and O. Maler, editors, Computer Aided
Verification: Proceedings of the 21st International Conference (CAV 2009),
volume 5643 of Lecture Notes in Computer Science, pages 661-667,
Grenoble, France, 2009. Springer-Verlag, Berlin.
This article describes Apron, a freely available library dedicated to the static analysis of the numerical variables of programs by abstract interpretation. Its goal is threefold: provide analysis implementers with ready-to-use numerical abstractions under a unified API, encourage the research in numerical abstract domains by providing a platform for integration and comparison, and provide teaching and demonstration tools to disseminate knowledge on abstract interpretation.

[KZH^{+}13] 
D. Kapur, Z. Zhang, M. Horbach, H. Zhao, Q. Lu, and T. Nguyen.
Geometric quantifier elimination heuristics for automatically
generating octagonal and maxplus invariants.
In M. P. Bonacina and M. E. Stickel, editors, Automated
Reasoning and Mathematics: Essays in Memory of William W. McCune, volume
7788 of Lecture Notes in Computer Science, pages 189-228.
Springer-Verlag, Berlin, 2013.
Geometric heuristics for the quantifier elimination approach presented by Kapur (2004) are investigated to automatically derive loop invariants expressing weakly relational numerical properties (such as l <= x <= h or l <= x - y <= h) for imperative programs. Such properties have been successfully used to analyze commercial software consisting of hundreds of thousands of lines of code (using, for example, the Astrée tool based on the abstract interpretation framework proposed by Cousot and his group). The main attraction of the proposed approach is its much lower complexity in contrast to the abstract interpretation approach (O(n^{2}) in contrast to O(n^{4}), where n is the number of variables), with the ability to still generate invariants of comparable strength. This approach has been generalized to consider disjunctive invariants of a similar form, expressed using the maximum function (such as max(x+a, y+b, z+c, d) <= max(x+e, y+f, z+g, h)), thus enabling automatic generation of a subclass of disjunctive invariants for imperative programs as well.

[KGP09] 
K. Ghorbal, E. Goubault, and S. Putot.
The zonotope abstract domain Taylor1+.
In A. Bouajjani and O. Maler, editors, Computer Aided
Verification: Proceedings of the 21st International Conference (CAV 2009),
volume 5643 of Lecture Notes in Computer Science, pages 627-633,
Grenoble, France, 2009. Springer-Verlag, Berlin.
Static analysis by abstract interpretation [1] aims at automatically inferring properties on the behaviour of programs. We focus here on a specific kind of numerical invariants: the set of values taken by numerical variables, with a real-number semantics, at each control point of a program.

[KGR07] 
D. Kim, G. Gupta, and S. V. Rajopadhye.
On control signals for multidimensional time.
In G. Almási, C. Cascaval, and P. Wu, editors, Languages
and Compilers for Parallel Computing, volume 4382 of Lecture Notes in
Computer Science, pages 141-155. Springer-Verlag, Berlin, 2007.
Revised papers presented at the 19th International Workshop on
Languages and Compilers for Parallel Computing (LCPC 2006), New Orleans,
Louisiana, USA, November 2-4, 2006.
Affine control loops (ACLs) comprise an important class of compute- and data-intensive computations. The theoretical framework for the automatic parallelization of ACLs is well established. However, the hardware compilation of arbitrary ACLs is still in its infancy. An important component for an efficient hardware implementation is a control mechanism that informs each processing element (PE) which computation needs to be performed and when. We formulate this control signal problem in the context of compiling arbitrary ACLs parallelized with a multidimensional schedule into hardware. We characterize the logical time instants when PEs need a control signal indicating which particular computations need to be performed. Finally, we present an algorithm to compute the minimal set of logical time instants for these control signals.

[KR13] 
B. Köpf and A. Rybalchenko.
Automation of quantitative informationflow analysis.
In M. Bernardo, E. de Vink, A. Di Pierro, and H. Wiklicky, editors,
Formal Methods for Dynamical Systems: 13th International School on
Formal Methods for the Design of Computer, Communication, and Software
Systems (SFM 2013), volume 7938 of Lecture Notes in Computer Science,
pages 1-28, Bertinoro, Italy, 2013. Springer-Verlag, Berlin.
Quantitative information-flow analysis (QIF) is an emerging technique for establishing information-theoretic confidentiality properties. Automation of QIF is an important step towards ensuring its practical applicability, since manual reasoning about program security has been shown to be a tedious and expensive task. In this chapter we describe how approximation and randomization techniques can be brought to bear on the challenge of sufficiently precise, yet efficient, computation of quantitative information-flow properties.

[KKM^{+}05] 
C. Kruegel, E. Kirda, D. Mutz, W. Robertson, and G. Vigna.
Automating mimicry attacks using static binary analysis.
In Proceedings of Security '05, the 14th USENIX Security
Symposium, pages 161-176, Baltimore, MD, USA, 2005.
Intrusion detection systems that monitor sequences of system calls have recently become more sophisticated in defining legitimate application behavior. In particular, additional information, such as the value of the program counter and the configuration of the program's call stack at each system call, has been used to achieve better characterization of program behavior. While there is common agreement that this additional information complicates the task for the attacker, it is less clear to which extent an intruder is constrained. In this paper, we present a novel technique to evade the extended detection features of state-of-the-art intrusion detection systems and reduce the task of the intruder to a traditional mimicry attack. Given a legitimate sequence of system calls, our technique allows the attacker to execute each system call in the correct execution context by obtaining and relinquishing the control of the application's execution flow through manipulation of code pointers. We have developed a static analysis tool for Intel x86 binaries that uses symbolic execution to automatically identify instructions that can be used to redirect control flow and to compute the necessary modifications to the environment of the process. We used our tool to successfully exploit three vulnerable programs and evade detection by existing state-of-the-art system call monitors. In addition, we analyzed three real-world applications to verify the general applicability of our techniques.

[LMS03] 
V. Lagoon, F. Mesnard, and P. J. Stuckey.
Termination analysis with types is more accurate.
In Logic Programming: Proceedings of the 19th International
Conference (ICLP 2003), volume 2916 of Lecture Notes in Computer
Science, pages 254-268, Mumbai, India, 2003. Springer-Verlag, Berlin.
In this paper we show how we can use size and groundness analyses lifted to regular and (polymorphic) Hindley/Milner typed programs to determine more accurate termination of (type correct) programs. Type information for programs may be either inferred automatically or declared by the programmer. The analysis of the typed logic programs is able to completely reuse a framework for termination analysis of untyped logic programs by using abstract compilation of the type abstraction. We show that our typed termination analysis is uniformly more accurate than untyped termination analysis for regularly typed programs, and demonstrate how it is able to prove termination of programs which the untyped analysis cannot.

[LLTW13] 
K. G. Larsen, A. Legay, L.-M. Traonouez, and A. Wasowski.
Robust synthesis for real-time systems.
Theoretical Computer Science, 515:96-122, 2013.
Specification theories for real-time systems allow reasoning about interfaces and their implementation models, using a set of operators that includes satisfaction, refinement, logical and parallel composition. To make such theories applicable throughout the entire design process from an abstract specification to an implementation, we need to reason about the possibility to effectively implement the theoretical specifications on physical systems, despite their limited precision. In the literature, this implementation problem has been linked to the robustness problem that analyzes the consequences of introducing small perturbations into formal models.

[LL09] 
V. Laviron and F. Logozzo.
Subpolyhedra: A (more) scalable approach to infer linear
inequalities.
In N. D. Jones and M. MüllerOlm, editors, Verification,
Model Checking, and Abstract Interpretation: Proceedings of the 10th
International Conference (VMCAI 2009), volume 5403 of Lecture Notes in
Computer Science, pages 229-244, Savannah, Georgia, USA, 2009.
Springer-Verlag, Berlin.
We introduce Subpolyhedra (SubPoly), a new numerical abstract domain to infer and propagate linear inequalities. SubPoly is as expressive as Polyhedra, but it drops some of the deductive power to achieve scalability. SubPoly is based on the insight that the reduced product of linear equalities and intervals produces powerful yet scalable analyses. Precision can be recovered using hints. Hints can be automatically generated or provided by the user in the form of annotations. We implemented SubPoly on top of Clousot, a generic abstract interpreter for .Net. Clousot with SubPoly analyzes very large and complex code bases in a few minutes. SubPoly can efficiently capture linear inequalities among hundreds of variables, a result well beyond state-of-the-art implementations of Polyhedra.

[LB06] 
M. Leconte and B. Berstel.
Extending a CP solver with congruences as domains for program
verification.
In B. Blanc, A. Gotlieb, and C. Michel, editors, Proceedings of
the 1st workshop on Constraints in Software Testing, Verification and
Analysis (CSTVA '06), pages 22-33, Nantes, France, 2006. IEEE Computer
Society Press.
Available at http://www.irisa.fr/manifestations/2006/CSTVA06/.
Constraints generated for program verification tasks very often involve integer variables ranging over all the machine-representable integer values. Thus, if propagation takes time linear in the size of the domains, it will not reach a fixpoint in practical time. Indeed, the propagation time needed to reduce the interval domains for equations as simple as x = 2y + 1 and x = 2z is proportional to the size of the initial domains of the variables. To avoid this slow-convergence phenomenon, we propose to enrich a Constraint Programming Solver (CP Solver) with congruence domains. This idea was introduced by [Granger, P.: Static analysis of arithmetic congruences. International Journal of Computer Math (1989) 165-199] in the abstract interpretation community, and we show how a CP Solver can benefit from it, for example by discovering immediately that 12x + y = 3 and 4z + 7y = 0 have no integer solution.
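
The closing claim of this abstract, that 12x + y = 3 and 4z + 7y = 0 have no integer solution, is exactly what congruence reasoning detects in one step: reducing both equations modulo 4 forces y ≡ 3 (mod 4) and y ≡ 0 (mod 4) simultaneously. A toy sketch of this check (illustrative only, not the solver's actual congruence domain):

```python
# Toy sketch: compute the residues a variable can take modulo m in a linear
# equation, letting every other variable range freely. An empty intersection
# of residue sets proves the system has no integer solution.

def residues(coeffs, rhs, var, modulus):
    """Possible residues of `var` mod `modulus` in sum(c*v) = rhs."""
    other = [v for v in coeffs if v != var]
    sols = set()
    for r in range(modulus):
        # is there an assignment of the other variables (mod m) that works?
        def feasible(i, acc):
            if i == len(other):
                return acc % modulus == rhs % modulus
            return any(feasible(i + 1, acc + coeffs[other[i]] * t)
                       for t in range(modulus))
        if feasible(0, coeffs[var] * r):
            sols.add(r)
    return sols

# 12x + y = 3  mod 4: 12x vanishes, so y must be 3 mod 4.
eq1 = residues({"x": 12, "y": 1}, 3, "y", 4)   # {3}
# 4z + 7y = 0  mod 4: 4z vanishes and 7y = 3y, so y must be 0 mod 4.
eq2 = residues({"z": 4, "y": 7}, 0, "y", 4)    # {0}
print(eq1 & eq2)  # set(): no common residue, hence no integer solution
```

Interval propagation would shrink the domains one unit at a time on such systems; the modular view rules them out immediately, which is the speedup the paper is after.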

[LT13] 
A. Legay and L.-M. Traonouez.
PyEcdar: Towards open source implementation for timed systems.
In D. Van Hung and M. Ogawa, editors, Automated Technology for
Verification and Analysis: Proceedings of the 11th International Symposium
(ATVA 2013), volume 8172 of Lecture Notes in Computer Science, pages
460-463, Hanoi, Vietnam, 2013. Springer-Verlag, Berlin.
PyEcdar is an open-source implementation for reasoning on timed systems. PyEcdar's main objective is not efficiency, but rather the flexibility to test and implement new results on timed systems.

[LRST09] 
D. Lime, O. H. Roux, C. Seidner, and L.-M. Traonouez.
Romeo: A parametric model-checker for Petri nets with stopwatches.
In S. Kowalewski and A. Philippou, editors, Tools and Algorithms
for the Construction and Analysis of Systems: Proceedings of the 15th
International Conference (TACAS 2009), volume 5505 of Lecture Notes in
Computer Science, pages 54-57, York, UK, 2009. Springer-Verlag, Berlin.
Last time we reported on Romeo, analyses with this tool were mostly based on translations to other tools. This new version provides an integrated TCTL model-checker and has gained in expressivity with the addition of parameters. Although there exist other tools to compute the state-space of stopwatch models, Romeo is the first one that performs TCTL model-checking on stopwatch models. Moreover, it is the first tool that performs TCTL model-checking on timed parametric models. Indeed, Romeo now features efficient model-checking of time Petri nets using the Uppaal DBM Library, the model-checking of stopwatch Petri nets and parametric stopwatch Petri nets using the Parma Polyhedra Library, and a graphical editor and simulator of these models. Furthermore, its audience has increased, leading to several industrial contracts. This paper reports on these recent developments of Romeo.

[LMR13] 
D. Lime, C. Martinez, and O. H. Roux.
Shrinking of time Petri nets.
Discrete Event Dynamic Systems, 23(4):419-438, 2013.
The problem of the synthesis of time bounds enforcing good properties for reactive systems has been much studied in the literature. These works mainly rely on dioid algebra theory and require important restrictions on the structure of the model (notably by restricting to timed event graphs). In this paper, we address the problems of existence and synthesis of shrinkings of the bounds of the time intervals of a time Petri net, such that a given property is verified. We show that this problem is decidable for CTL properties on bounded time Petri nets. We then propose a symbolic algorithm based on the state class graph for a fragment of CTL. If the desired property “includes” k-boundedness, the proposed algorithm terminates even if the net is unbounded. A prototype has been implemented in our tool Romeo and the method is illustrated on a small case study from the literature.

[LF08] 
F. Logozzo and M. Fähndrich.
Pentagons: A weakly relational abstract domain for the efficient
validation of array accesses.
In Proceedings of the 2008 ACM Symposium on Applied Computing
(SAC 2008), pages 184-188, Fortaleza, Ceará, Brazil, 2008. ACM Press.
We introduce Pentagons (Pntg), a weakly relational numerical abstract domain useful for the validation of array accesses in bytecode and intermediate languages (IL). This abstract domain captures properties of the form x ∈ [a, b] ∧ x < y. It is more precise than the well-known Interval domain, but it is less precise than the Octagon domain. The goal of Pntg is to be a lightweight numerical domain useful for adaptive static analysis, where Pntg is used to quickly prove the safety of most array accesses, restricting the use of more precise (but also more expensive) domains to only a small fraction of the code. We implemented the Pntg abstract domain in Clousot, a generic abstract interpreter for .NET assemblies. Using it, we were able to validate 83% of array accesses in the core runtime library mscorlib.dll in less than 8 minutes.
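The property x ∈ [a, b] ∧ x < y tracked by the Pentagon domain can be sketched in a few lines (hypothetical illustration, not Clousot's implementation): an interval per variable plus a set of strict symbolic inequalities is already enough to discharge the typical array-bounds check of a loop.

```python
# Hypothetical sketch of a Pentagon-style abstract state: intervals plus
# strict symbolic inequalities x < y. Not Clousot's actual code.

class Pentagon:
    def __init__(self):
        self.intervals = {}       # var -> (lo, hi)
        self.strict_less = set()  # pairs (x, y) meaning x < y

    def assume_interval(self, var, lo, hi):
        self.intervals[var] = (lo, hi)

    def assume_less(self, x, y):
        self.strict_less.add((x, y))

    def proves_in_bounds(self, index, length):
        """Can we validate arr[index] where length is len(arr)?"""
        lo, hi = self.intervals.get(index, (None, None))
        nonneg = lo is not None and lo >= 0
        # Upper bound either symbolically (index < length) ...
        below = (index, length) in self.strict_less
        # ... or numerically (hi < every possible value of length).
        if not below and hi is not None:
            llo, _ = self.intervals.get(length, (None, None))
            below = llo is not None and hi < llo
        return nonneg and below

p = Pentagon()
p.assume_interval("i", 0, 100)
p.assume_less("i", "len")              # loop guard i < len
print(p.proves_in_bounds("i", "len"))  # -> True
```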

[LMM^{+}12] 
Q. Lu, M. Madsen, M. Milata, S. Ravn, U. Fahrenberg, and K. G. Larsen.
Reachability analysis for timed automata using max-plus algebra.
Journal of Logic and Algebraic Programming, 81(3), 2012.
We show that max-plus polyhedra are usable as a data structure in reachability analysis of timed automata. Drawing inspiration from the extensive work that has been done on difference bound matrices, as well as previous work on max-plus polyhedra in other areas, we develop the algorithms needed to perform forward and backward reachability analysis using max-plus polyhedra. To show that the approach works in practice and theory alike, we have created a proof-of-concept implementation on top of the model checker opaal.

[MK06] 
I. B. Makhlouf and S. Kowalewski.
An evaluation of two recent reachability analysis tools for hybrid
systems.
In C. Cassandras, A. Giua, C. Seatzu, and J. Zaytoon, editors,
Proceedings of the 2nd IFAC Conference on Analysis and Design of Hybrid
Systems, Alghero, Italy, 2006. Elsevier Science B.V.
The hybrid systems community is still struggling to provide practically applicable verification tools. Recently, two new tools, PHAVer and Hsolver, were introduced which promise to be a further step in this direction. We evaluate and compare both tools with the help of several benchmark examples. The results show that both have their strengths and weaknesses, and that there still is no all-purpose reachability analysis tool for hybrid systems.

[MMHS13] 
P. Mardziel, S. Magill, M. Hicks, and M. Srivatsa.
Dynamic enforcement of knowledge-based security policies using
probabilistic abstract interpretation.
Journal of Computer Security, 21(4):463-532, 2013.
This paper explores the idea of knowledge-based security policies, which are used to decide whether to answer queries over secret data based on an estimation of the querier's (possibly increased) knowledge given the results. Limiting knowledge is the goal of existing information release policies that employ mechanisms such as noising, anonymization, and redaction. Knowledge-based policies are more general: they increase flexibility by not fixing the means to restrict information flow. We enforce a knowledge-based policy by explicitly tracking a model of a querier's belief about secret data, represented as a probability distribution, and denying any query that could increase knowledge above a given threshold. We implement query analysis and belief tracking via abstract interpretation, which allows us to trade off precision and performance through the use of abstraction. We have developed an approach to augment standard abstract domains to include probabilities, and thus define distributions. We focus on developing probabilistic polyhedra in particular, to support numeric programs. While probabilistic abstract interpretation has been considered before, our domain is the first whose design supports sound conditioning, which is required to ensure that estimates of a querier's knowledge are accurate. Experiments with our implementation show that several useful queries can be handled efficiently, particularly compared to exact (i.e., sound) inference involving sampling. We also show that, for our benchmarks, restricting constraints to octagons or intervals, rather than full polyhedra, can dramatically improve performance while incurring little to no loss in precision.

[MMHS11] 
P. Mardziel, S. Magill, M. Hicks, and M. Srivatsa.
Dynamic enforcement of knowledge-based security policies.
In M. Backes and S. Zdancewic, editors, Proceedings of the 24th
IEEE Computer Security Foundations Symposium (CSF 2011), pages 114-128,
Cernay-la-Ville, France, 2011. IEEE Xplore Digital Library.
This paper explores the idea of knowledge-based security policies, which are used to decide whether to answer queries over secret data based on an estimation of the querier's (possibly increased) knowledge given the results. Limiting knowledge is the goal of existing information release policies that employ mechanisms such as noising, anonymization, and redaction. Knowledge-based policies are more general: they increase flexibility by not fixing the means to restrict information flow. We enforce a knowledge-based policy by explicitly tracking a model of a querier's belief about secret data, represented as a probability distribution, and denying any query that could increase knowledge above a given threshold. We implement query analysis and belief tracking via abstract interpretation using a novel probabilistic polyhedral domain, whose design permits trading off precision with performance while ensuring estimates of a querier's knowledge are sound. Experiments with our implementation show that several useful queries can be handled efficiently, and performance scales far better than would more standard implementations of probabilistic computation based on sampling.

[MSRF04] 
R. Manevich, M. Sagiv, G. Ramalingam, and J. Field.
Partially disjunctive heap abstraction.
In R. Giacobazzi, editor, Static Analysis: Proceedings of the
11th International Symposium, volume 3148 of Lecture Notes in Computer
Science, pages 265-279, Verona, Italy, 2004. Springer-Verlag, Berlin.
One of the continuing challenges in abstract interpretation is the creation of abstractions that yield analyses that are both tractable and precise enough to prove interesting properties about real-world programs. One source of difficulty is the need to handle programs with different behaviors along different execution paths. Disjunctive (powerset) abstractions capture such distinctions in a natural way. However, in general, powerset abstractions increase the space and time by an exponential factor. Thus, powerset abstractions are generally perceived as being very costly. In this paper we partially address this challenge by presenting and empirically evaluating a new heap abstraction. The new heap abstraction works by merging shape descriptors according to a partial isomorphism similarity criterion, resulting in a partially disjunctive abstraction. We implemented this abstraction in TVLA, a generic system for implementing program analyses. We conducted an empirical evaluation of the new abstraction and compared it with the powerset heap abstraction. The experiments show that analyses based on the partially disjunctive abstraction are as precise as the ones based on the fully disjunctive abstraction. In terms of performance, analyses based on the partially disjunctive abstraction are often superior to analyses based on the fully disjunctive heap abstraction. The empirical results show considerable speedups, up to 2 orders of magnitude, enabling previously non-terminating analyses, such as verification of the Deutsch-Schorr-Waite marking algorithm, to terminate with no negative effect on the overall precision. Indeed, experience indicates that the partially disjunctive shape abstraction improves performance across all TVLA analyses uniformly, and in many cases is essential for making precise shape analysis feasible.

[MS09] 
B. McCloskey and M. Sagiv.
Combining quantified domains.
Technical Report EECS-2009-106, EECS Department, University of
California, Berkeley, USA, July 2009.
[ .pdf ]
We develop general algorithms for reasoning about numerical properties of programs manipulating the heap via pointers. We automatically infer quantified invariants regarding unbounded sets of memory locations and unbounded numeric values. As an example, we can infer that for every node in a data structure, the node's length field is less than its capacity field. We can also infer per-node statements about cardinality, such as that each node's count field is equal to the number of elements reachable from it. This additional power allows us to prove properties about reference counted data structures and B-trees that were previously unattainable. Besides the ability to verify more programs, we believe that our work sheds new light on the interaction between heap and numerical reasoning.

[Mei10] 
S. Meijer.
Transformations for Polyhedral Process Networks.
PhD thesis, Leiden Institute of Advanced Computer Science (LIACS),
Faculty of Science, Leiden University, Leiden, The Netherlands, 2010.
We use the polyhedral process network (PPN) model of computation to program and map streaming applications onto embedded Multi-Processor System-on-Chip (MPSoC) platforms. The PPNs, which can be automatically derived from sequential program applications, do not necessarily meet the performance/resource constraints. A designer can therefore apply process splitting transformations to increase program performance, and the process merging transformation to reduce the number of processes in a PPN. These transformations had been defined, but a designer has many possibilities to apply a particular transformation, and the transformations can also be ordered in many different ways. In this dissertation, we define compile-time solution approaches that assist the designer in evaluating and applying process splitting and merging transformations in the most effective way.

[MB05] 
F. Mesnard and R. Bagnara.
cTI: A constraint-based termination inference tool for
ISO-Prolog.
Theory and Practice of Logic Programming, 5(1&2):243-257,
2005.
We present cTI, the first system for universal left-termination inference of logic programs. Termination inference generalizes termination analysis and checking. Traditionally, a termination analyzer tries to prove that a given class of queries terminates. This class must be provided to the system, for instance by means of user annotations. Moreover, the analysis must be redone every time the class of queries of interest is updated. Termination inference, in contrast, requires neither user annotations nor recomputation. In this approach, terminating classes for all predicates are inferred at once. We describe the architecture of cTI and report an extensive experimental evaluation of the system covering many classical examples from the logic programming termination literature and several Prolog programs of respectable size and complexity.

[MSS13] 
B. Mihaila, A. Sepp, and A. Simon.
Widening as abstract domain.
In G. Brat, N. Rungta, and A. Venet, editors, NASA Formal
Methods: Proceedings of the 5th International Symposium (NFM 2013), volume
7871 of Lecture Notes in Computer Science, pages 170-174, Moffett
Field, CA, USA, 2013. Springer-Verlag, Berlin.
Verification using static analysis often hinges on precise numeric invariants. Numeric domains of infinite height can infer these invariants, but require widening/narrowing which complicates the fixpoint computation and is often too imprecise. As a consequence, several strategies have been proposed to prevent a precision loss during widening or to narrow in a smarter way. Most of these strategies are difficult to retrofit into an existing analysis as they either require a pre-analysis, an on-the-fly modification of the CFG, or modifications to the fixpoint algorithm. We propose to encode widening and its various refinements from the literature as cofibered abstract domains that wrap standard numeric domains, thereby providing a modular way to add numeric analysis to any static analysis, that is, without modifying the fixpoint engine. Since these domains cannot make any assumptions about the structure of the program, our approach is suitable to the analysis of executables, where the (potentially irreducible) CFG is reconstructed on-the-fly. Moreover, our domain-based approach not only mirrors the precision of more intrusive approaches in the literature but also requires fewer iterations to find a fixpoint of loops than many heuristics that merely aim for precision.

[ML11] 
D. Monniaux and J. Le Guen.
Stratified static analysis based on variable dependencies.
In Proceedings of the Third International Workshop on Numerical
and Symbolic Abstract Domains (NSAD 2011), Venice, Italy, 2011.
[ http ]
In static analysis by abstract interpretation, one often uses widening operators in order to enforce convergence within finite time to an inductive invariant. Certain widening operators, including the classical one over finite polyhedra, exhibit an unintuitive behavior: analyzing the program over a subset of its variables may lead to a more precise result than analyzing the original program! In this article, we present simple workarounds for such behavior.
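The unintuitive behavior of widening can already be seen on intervals (a textbook sketch, not the polyhedral widening discussed in the paper): the standard interval widening is not monotone in its second argument, so a more precise iterate can produce a less precise widened result.

```python
# Standard interval widening (textbook sketch): any unstable bound is
# pushed to infinity to force convergence.
def widen(a, b):
    lo = a[0] if b[0] >= a[0] else float("-inf")
    hi = a[1] if b[1] <= a[1] else float("inf")
    return (lo, hi)

# Non-monotonicity: starting from the *smaller* first argument (0, 1)
# gives a *larger* result than starting from (0, 2), one source of the
# unintuitive behavior described in the abstract.
print(widen((0, 2), (0, 2)))  # -> (0, 2): bound is stable, kept
print(widen((0, 1), (0, 2)))  # -> (0, inf): bound grew, pushed to infinity
```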

[ML12] 
D. Monniaux and J. Le Guen.
Stratified static analysis based on variable dependencies.
Electronic Notes in Theoretical Computer Science (ENTCS),
288:61-74, 2012.
In static analysis by abstract interpretation, one often uses widening operators in order to enforce convergence within finite time to an inductive invariant. Certain widening operators, including the classical one over finite polyhedra, exhibit an unintuitive behavior: analyzing the program over a subset of its variables may lead to a more precise result than analyzing the original program! In this article, we present simple workarounds for such behavior.

[MSvn12] 
M. Montenegro, O. Shkaravska, M. van Eekelen, and R. Peña.
Interpolation-based height analysis for improving a recurrence
solver.
In R. Peña, M. van Eekelen, and O. Shkaravska, editors,
Foundational and Practical Aspects of Resource Analysis: Second International
Workshop (FOPARA 2011), Revised Selected Papers, volume 7177 of Lecture
Notes in Computer Science, pages 36-53. Springer-Verlag, Berlin, Madrid,
Spain, 2012.
The COSTA system infers resource consumption bounds from Java bytecode using an internal recurrence solver PUBS. This paper suggests an improvement of the COSTA system, such that it can solve a larger number of recurrences. The idea is to replace one of its static analyses, the ranking function analysis, by another kind of analysis, height analysis, in such a way that polynomial bounds of any degree may be inferred instead of just linear expressions. The work can be seen as an application of some polynomial interpolation techniques used by some of the authors in prior analyses. Finding a way to choose proper test nodes is the key to the solution presented in this paper.

[MKK07] 
A. Moser, C. Krügel, and E. Kirda.
Exploring multiple execution paths for malware analysis.
In Proceedings of the 2007 IEEE Symposium on Security and
Privacy (S&P 2007), pages 231-245, Oakland, California, USA, 2007. IEEE
Computer Society Press.
Malicious code (or malware) is defined as software that fulfills the deliberately harmful intent of an attacker. Malware analysis is the process of determining the behavior and purpose of a given malware sample (such as a virus, worm, or Trojan horse). This process is a necessary step to be able to develop effective detection techniques and removal tools. Currently, malware analysis is mostly a manual process that is tedious and time-intensive. To mitigate this problem, a number of analysis tools have been proposed that automatically extract the behavior of an unknown program by executing it in a restricted environment and recording the operating system calls that are invoked. The problem of dynamic analysis tools is that only a single program execution is observed. Unfortunately, however, it is possible that certain malicious actions are only triggered under specific circumstances (e.g., on a particular day, when a certain file is present, or when a certain command is received). In this paper, we propose a system that allows us to explore multiple execution paths and identify malicious actions that are executed only when certain conditions are met. This enables us to automatically extract a more complete view of the program under analysis and identify under which circumstances suspicious actions are carried out. Our experimental results demonstrate that many malware samples show different behavior depending on input read from the environment. Thus, by exploring multiple execution paths, we can obtain a more complete picture of their actions.

[NMLH09] 
J. Navas, M. Méndez-Lojo, and M. V. Hermenegildo.
User-definable resource usage bounds analysis for Java bytecode.
In Proceedings of the 4th Workshop on Bytecode Semantics,
Verification, Analysis and Transformation (Bytecode 2009), volume 253 of
Electronic Notes in Theoretical Computer Science, pages 65-82, York,
UK, 2009. Elsevier Science B.V.
Automatic cost analysis of programs has been traditionally concentrated on a reduced number of resources such as execution steps, time, or memory. However, the increasing relevance of analysis applications such as static debugging and/or certification of user-level properties (including for mobile code) makes it interesting to develop analyses for resource notions that are actually application-dependent. This may include, for example, bytes sent or received by an application, number of files left open, number of SMSs sent or received, number of accesses to a database, money spent, energy consumption, etc. We present a fully automated analysis for inferring upper bounds on the usage that a Java bytecode program makes of a set of application programmer-definable resources. In our context, a resource is defined by programmer-provided annotations which state the basic consumption that certain program elements make of that resource. From these definitions our analysis derives functions which return an upper bound on the usage that the whole program (and individual blocks) make of that resource for any given set of input data sizes. The analysis proposed is independent of the particular resource. We also present some experimental results from a prototype implementation of the approach covering a significant set of interesting resources.

[PdMG12] 
L. Panizo and M. d. M. Gallardo.
An extension of Java PathFinder for hybrid systems.
ACM SIGSOFT Software Engineering Notes, 37(6):1-5, 2012.
Hybrid systems are characterized by combining discrete and continuous behaviors. Verification of hybrid systems is, in general, a difficult task due to the potential complexity of the continuous dynamics. Currently, there are different formalisms and tools which are able to analyze specific types of hybrid systems, model checking being one of the most used approaches. In this paper, we describe an extension of Java PathFinder in order to analyze hybrid systems. We apply a general methodology which has been successfully used to extend Spin. This methodology is non-intrusive, and uses external libraries, such as the Parma Polyhedra Library, to abstract the continuous behavior of the hybrid system.

[PY13] 
N. Partush and E. Yahav.
Abstract semantic differencing for numerical programs.
In F. Logozzo and M. Fähndrich, editors, Static Analysis:
Proceedings of the 20th International Symposium (SAS 2013), volume 7935 of
Lecture Notes in Computer Science, pages 238-258, Seattle, WA, USA,
2013. Springer-Verlag, Berlin.
We address the problem of computing semantic differences between a program and a patched version of the program. Our goal is to obtain a precise characterization of the difference between program versions, or establish their equivalence when no difference exists.

[CNVM12] 
B. Cuervo Parrino, J. Narboux, E. Violard, and N. Magaud.
Dealing with arithmetic overflows in the polyhedral model.
In U. Bondhugula and V. Loechner, editors, Proceedings of the
2nd International Workshop on Polyhedral Compilation Techniques (IMPACT
2012), Paris, France, 2012.
[ http ]
The polyhedral model provides techniques to optimize Static Control Programs (SCoP) using some complex transformations which improve data-locality and which can exhibit parallelism. These advanced transformations are now available in both GCC and LLVM. In this paper, we focus on the correctness of these transformations and in particular on the problem of integer overflows. Indeed, the strength of the polyhedral model is to produce an abstract mathematical representation of a loop nest which allows high-level transformations. But this abstract representation is valid only when we ignore the fact that our integers are only machine integers. In this paper, we present a method to deal with this problem of mismatch between the mathematical and concrete representations of loop nests. We assume the existence of polyhedral optimization transformations which are proved to be correct in a world without overflows and we provide a self-verifying compilation function. Rather than verifying the correctness of this function, we use an approach based on a validator, which is a tool that is run by the compiler after the transformation itself and which confirms that the code produced is equivalent to the original code. As we aim at the formal proof of the validator we implement this validator using the Coq proof assistant as a programming language [4].

[PS07] 
E. Payet and F. Spoto.
Magic-sets transformation for the analysis of Java bytecode.
In G. Filé and H. R. Nielson, editors, Static Analysis:
Proceedings of the 14th International Symposium, volume 4634 of Lecture
Notes in Computer Science, pages 452-467, Kongens Lyngby, Denmark, 2007.
Springer-Verlag, Berlin.
Denotational static analysis of Java bytecode has a nice and clean compositional definition and an efficient implementation with binary decision diagrams. But it models only the functional, i.e., input/output, behaviour of a program P, which is not enough if one needs P's internal behaviours, i.e., from the input to some internal program points. We overcome this limitation with a technique used up to now for logic programs only. It adds new magic blocks of code to P, whose functional behaviours are the internal behaviours of P. We prove this transformation correct with an operational semantics. We define an equivalent denotational semantics, whose denotations for the magic blocks are hence the internal behaviours of P. We implement our transformation and instantiate it with abstract domains modelling sharing of two variables and non-cyclicity of variables. We get a static analyser for full Java bytecode that is faster and scales better than another operational pair-sharing analyser and a constraint-based pointer analyser.

[NRS09] 
J. A. Navarro Pérez, A. Rybalchenko, and A. Singh.
Cardinality abstraction for declarative networking applications.
In A. Bouajjani and O. Maler, editors, Computer Aided
Verification: Proceedings of the 21st International Conference (CAV 2009),
volume 5643 of Lecture Notes in Computer Science, pages 584-598,
Grenoble, France, 2009. Springer.
Declarative Networking is a recent, viable approach to make distributed programming easier, which is becoming increasingly popular in the systems and networking community. It offers the programmer a declarative, rule-based language, called P2, for writing distributed applications in an abstract, yet expressive way. This approach, however, imposes new challenges on analysis and verification methods when they are applied to P2 programs. Reasoning about P2 computations is beyond the scope of existing tools since it requires handling of program states defined in terms of collections of relations, which store the application data, together with multisets of tuples, which represent communication events in flight. In this paper, we propose a cardinality abstraction technique that can be used to analyze and verify P2 programs. It keeps track of the size of relations (together with projections thereof) and multisets defining P2 states, and provides an appropriate treatment of declarative operations, e.g., indexing, unification, variable binding, and negation. Our cardinality abstraction-based verifier successfully proves critical safety properties of a P2 implementation of the Byzantine fault tolerance protocol Zyzzyva, which is a representative and complex declarative networking application.

[PTTC11] 
T.-H. Pham, M.-T. Trinh, A.-H. Truong, and W.-N. Chin.
FixBag: a fixpoint calculator for quantified bag constraints.
In G. Gopalakrishnan and S. Qadeer, editors, Computer Aided
Verification: Proceedings of the 23rd International Conference (CAV 2011),
volume 6806 of Lecture Notes in Computer Science, pages 656-662,
Snowbird, UT, USA, 2011. Springer-Verlag, Berlin.
Abstract interpretation techniques have played a major role in advancing the state-of-the-art in program analysis. Traditionally, standalone tools for these techniques have been developed for the numerical domains, which may be sufficient for lower levels of program correctness. To analyze a wider range of programs, we have developed a tool to compute symbolic fixpoints for the quantified bag domain. This domain is useful for programs that deal with collections of values. Our tool is able to derive both loop invariants and method pre/post conditions via fixpoint analysis of recursive bag constraints. To support better precision, we have allowed disjunctive formulae to be inferred, where appropriate. As a standalone tool, we have tested it on a range of small but challenging examples with acceptable precision and performance.

[PSC^{+}06] 
S. Pop, G.-A. Silber, A. Cohen, C. Bastoul, S. Girbal, and N. Vasilache.
GRAPHITE: Polyhedral analyses and optimizations for GCC.
Technical Report A/378/CRI, Centre de Recherche en Informatique,
École des Mines de Paris, Fontainebleau, France, 2006.
Contribution to the GNU Compilers Collection Developers Summit 2006
(GCC Summit 06), Ottawa, Canada, June 28-30, 2006.
We present a plan to add loop nest optimizations in GCC based on polyhedral representations of loop nests. We advocate a static analysis approach based on a hierarchy of interchangeable abstractions with solvers that range from the exact solvers such as OMEGA, to faster but less precise solvers based on more coarse abstractions. The intermediate representation GRAPHITE (GIMPLE Represented as Polyhedra with Interchangeable Envelopes), built on GIMPLE and the natural loops, hosts the high level loop transformations. We base this presentation on the WRaPIT project developed in the Alchemy group at INRIA Futurs and Paris-Sud University, on the PIPS compiler developed at École des mines de Paris, and on a joint work with several members of the static analysis and polyhedral compilation community in France.

[PC08] 
C. Popeea and W.N. Chin.
Inferring disjunctive postconditions.
In M. Okada and I. Satoh, editors, Advances in Computer Science
- ASIAN 2006. Secure Software and Related Issues, volume 4435 of
Lecture Notes in Computer Science, pages 331-345. Springer-Verlag, Berlin,
2008.
Revised selected papers presented at the 11th Asian Computing Science
Conference, Tokyo, Japan, December 6-8, 2006.
Polyhedral analysis [9] is an abstract interpretation used for automatic discovery of invariant linear inequalities among numerical variables of a program. Convexity of this abstract domain allows efficient analysis but also loses precision via convex-hull and widening operators. To selectively recover the loss of precision, sets of polyhedra (disjunctive elements) may be used to capture more precise invariants. However a balance must be struck between precision and cost. We introduce the notion of affinity to characterize how closely related is a pair of polyhedra. Finding related elements in the polyhedron (base) domain allows the formulation of precise hull and widening operators lifted to the disjunctive (powerset extension of the) polyhedron domain. We have implemented a modular static analyzer based on the disjunctive polyhedral analysis where the relational domain and the proposed operators can progressively enhance precision at a reasonable cost.

[RMHH13] 
A. Rastogi, P. Mardziel, M. Hammer, and M. Hicks.
Knowledge inference for optimizing secure multiparty computation.
In P. Naldurg, editor, Proceedings of the 2013 ACM SIGPLAN
Workshop on Programming Languages and Analysis for Security (PLAS'13), pages
3-14, Seattle, Washington, USA, 2013. ACM Press, New York, USA.
In secure multiparty computation, mutually distrusting parties cooperatively compute functions of their private data; in the process, they only learn certain results as per the protocol (e.g., the final output). The realization of these protocols uses cryptographic techniques to avoid leaking information between the parties. A protocol for a secure computation can sometimes be optimized without changing its security guarantee: when the parties can use their private data and the revealed output to infer the values of other data, then this other data need not be concealed from them via cryptography.

[RBFS09] 
A. Rizk, G. Batt, F. Fages, and S. Soliman.
A general computational method for robustness analysis with
applications to synthetic gene networks.
Bioinformatics, 25(12):i169-i178, 2009.
Paper accepted for presentation at the 2009 ISMB/ECCB Conference,
Stockholm, Sweden, June 27-July 2, 2009. Available at
http://bioinformatics.oxfordjournals.org/cgi/content/abstract/25/12/i169.
Motivation: Robustness is the capacity of a system to maintain a function in the face of perturbations. It is essential for the correct functioning of natural and engineered biological systems. Robustness is generally defined in an ad hoc, problem-dependent manner, thus hampering the fruitful development of a theory of biological robustness, recently advocated by Kitano.

[RBFS11] 
A. Rizk, G. Batt, F. Fages, and S. Soliman.
Continuous valuations of temporal logic specifications with
applications to parameter optimization and robustness measures.
Theoretical Computer Science, 412(26):2827-2839, 2011.
Finding mathematical models satisfying a specification built from the formalization of biological experiments is a common task of the modeler that techniques like model-checking help to solve, in the qualitative but also in the quantitative case. In this article we define a continuous degree of satisfaction of temporal logic formulae with constraints. We show how such a satisfaction measure can be used as a fitness function with state-of-the-art evolutionary optimization methods in order to find biochemical kinetic parameter values satisfying a set of biological properties formalized in temporal logic. We also show how it can be used to define a measure of robustness of a biological model with respect to some temporal specification. These methods are evaluated on models of the cell cycle and of the MAPK signaling cascade.

[SIG07] 
S. Sankaranarayanan, F. Ivancic, and A. Gupta.
Program analysis using symbolic ranges.
In G. Filé and H. R. Nielson, editors, Static Analysis:
Proceedings of the 14th International Symposium, volume 4634 of Lecture
Notes in Computer Science, pages 366-383, Kongens Lyngby, Denmark, 2007.
Springer-Verlag, Berlin.
Interval analysis seeks static lower and upper bounds on the values of program variables. These bounds are useful, especially for inferring invariants to prove buffer overflow checks. In practice, however, intervals by themselves are often inadequate as invariants due to the lack of relational information among program variables. In this paper, we present a technique for deriving symbolic bounds on variable values. We study a restricted class of polyhedra whose constraints are stratified with respect to some variable ordering provided by the user, or chosen heuristically. We define a notion of normalization for such constraints and demonstrate polynomial time domain operations on the resulting domain of symbolic range constraints. The abstract domain is intended to complement widely used domains such as intervals and octagons for use in buffer overflow analysis. Finally, we study the impact of our analysis on commercial software using an overflow analyzer for the C language.

[SSM04] 
S. Sankaranarayanan, H. B. Sipma, and Z. Manna.
Constraint-based linear-relations analysis.
In R. Giacobazzi, editor, Static Analysis: Proceedings of the
11th International Symposium, volume 3148 of Lecture Notes in Computer
Science, pages 53-68, Verona, Italy, 2004. Springer-Verlag, Berlin.
Linear-relations analysis of transition systems discovers linear invariant relationships among the variables of the system. These relationships help establish important safety and liveness properties. Efficient techniques for the analysis of systems using polyhedra have been explored, leading to the development of successful tools like HyTech. However, existing techniques rely on the use of approximations such as widening and extrapolation in order to ensure termination. In an earlier paper, we demonstrated the use of Farkas' Lemma to provide a translation from the linear-relations analysis problem into a system of constraints on the unknown coefficients of a candidate invariant. However, since the constraints in question are nonlinear, a naive application of the method does not scale. In this paper, we show that by some efficient simplifications and approximations to the quantifier elimination, not only does the method scale to higher dimensions, but also enjoys performance advantages for some larger examples.

[SSM05] 
S. Sankaranarayanan, H. B. Sipma, and Z. Manna.
Scalable analysis of linear systems using mathematical programming.
In R. Cousot, editor, Verification, Model Checking and Abstract
Interpretation: Proceedings of the 6th International Conference (VMCAI
2005), volume 3385 of Lecture Notes in Computer Science, pages 25-41,
Paris, France, 2005. Springer-Verlag, Berlin.
We present a method for generating linear invariants for large systems. The method performs forward propagation in an abstract domain consisting of arbitrary polyhedra of a predefined fixed shape. The basic operations on the domain like abstraction, intersection, join and inclusion tests are all posed as linear optimization queries, which can be solved efficiently by existing LP solvers. The number and dimensionality of the LP queries are polynomial in the program dimensionality, size and the number of target invariants. The method generalizes similar analyses in the interval, octagon, and octahedra domains, without resorting to polyhedral manipulations. We demonstrate the performance of our method on some benchmark programs.

[SCSM06] 
S. Sankaranarayanan, M. Colón, H. B. Sipma, and Z. Manna.
Efficient strongly relational polyhedral analysis.
In E. A. Emerson and K. S. Namjoshi, editors, Verification,
Model Checking and Abstract Interpretation: Proceedings of the 7th
International Conference (VMCAI 2006), volume 3855 of Lecture Notes in
Computer Science, pages 111-125, Charleston, SC, USA, 2006.
Springer-Verlag, Berlin.
Polyhedral analysis infers invariant linear equalities and inequalities of imperative programs. However, the exponential complexity of polyhedral operations such as image computation and convex hull limits the applicability of polyhedral analysis. Weakly relational domains such as intervals and octagons address the scalability issue by considering polyhedra whose constraints are drawn from a restricted, user-specified class. On the other hand, these domains rely solely on candidate expressions provided by the user. Therefore, they often fail to produce strong invariants. We propose a polynomial time approach to strongly relational analysis. We provide efficient implementations of join and postcondition operations, achieving a trade-off between performance and accuracy. We have implemented a strongly relational polyhedral analyzer for a subset of the C language. Initial experimental results on benchmark examples are encouraging.

[SSM06] 
S. Sankaranarayanan, H. B. Sipma, and Z. Manna.
Fixed point iteration for computing the time elapse operator.
In J. Hespanha and A. Tiwari, editors, Hybrid Systems:
Computation and Control: Proceedings of the 9th International Workshop (HSCC
2006), volume 3927 of Lecture Notes in Computer Science, pages
537-551, Santa Barbara, CA, USA, 2006. Springer-Verlag, Berlin.
We investigate techniques for automatically generating symbolic approximations to the time solution of a system of differential equations. This is an important primitive operation for the safety analysis of continuous and hybrid systems. In this paper we design a time elapse operator that computes a symbolic over-approximation of time solutions to a continuous system starting from a given initial region. Our approach is iterative over the cone of functions (drawn from a suitable universe) that are non-negative over the initial region. At each stage, we iteratively remove functions from the cone whose Lie derivatives do not lie inside the current iterate. If the iteration converges, the set of states defined by the final iterate is shown to contain all the time successors of the initial region. The convergence of the iteration can be forced using abstract interpretation operations such as widening and narrowing. We instantiate our technique to linear hybrid systems with piecewise-affine dynamics to compute polyhedral approximations to the time successors. Using our prototype implementation TimePass, we demonstrate the performance of our technique on benchmark examples.

[SISG06] 
S. Sankaranarayanan, F. Ivančić, I. Shlyakhter, and A. Gupta.
Static analysis in disjunctive numerical domains.
In K. Yi, editor, Static Analysis: Proceedings of the 13th
International Symposium, volume 4134 of Lecture Notes in Computer
Science, pages 3-17, Seoul, Korea, 2006. Springer-Verlag, Berlin.
The convexity of numerical domains such as polyhedra, octagons, intervals and linear equalities enables tractable analysis of software for buffer overflows, null pointer dereferences and floating point errors. However, convexity also causes the analysis to fail in many common cases. Powerset extensions can remedy this shortcoming by considering disjunctions of predicates. Unfortunately, analysis using powerset domains can be exponentially more expensive as compared to analysis on the base domain. In this paper, we prove structural properties of fixed points computed in commonly used powerset extensions. We show that a fixed point computed on a powerset extension is also a fixed point in the base domain computed on an “elaboration” of the program's CFG structure. Using this insight, we build analysis algorithms that approach path-sensitive static analysis algorithms by performing the fixed point computation on the base domain while discovering an “elaboration” on the fly. Using restrictions on the nature of the elaborations, we design algorithms that scale polynomially in terms of the number of disjuncts. We have implemented a lightweight static analyzer for C programs with encouraging initial results.

[SJ12] 
P. Schrammel and B. Jeannet.
Applying abstract acceleration to (co)reachability analysis of
reactive programs.
Journal of Symbolic Computation, 47(12):1512-1532, 2012.
We propose a new technique combining dynamic and static analysis of programs to find linear invariants. We use a statistical tool, called simple component analysis, to analyze partial execution traces of a given program. We get a new coordinate system in the vector space of program variables, which is used to specialize numerical abstract domains. As an application, we instantiate our technique to interval analysis of simple imperative programs and show some experimental evaluations.

[SGH^{+}13] 
R. Sharma, S. Gupta, B. Hariharan, A. Aiken, and A. V. Nori.
Verification as learning geometric concepts.
In F. Logozzo and M. Fähndrich, editors, Static Analysis:
Proceedings of the 20th International Symposium (SAS 2013), volume 7935 of
Lecture Notes in Computer Science, pages 388-411, Seattle, USA, 2013.
Springer-Verlag, Berlin.
We formalize the problem of program verification as a learning problem, showing that invariants in program verification can be regarded as geometric concepts in machine learning. Safety properties define bad states: states a program should not reach. Program verification explains why a program's set of reachable states is disjoint from the set of bad states. In Hoare Logic, these explanations are predicates that form inductive assertions. Using samples for reachable and bad states and by applying well-known machine learning algorithms for classification, we are able to generate inductive assertions. By relaxing the search for an exact proof to classifiers, we obtain complexity-theoretic improvements. Further, we extend the learning algorithm to obtain a sound procedure that can generate proofs containing invariants that are arbitrary boolean combinations of polynomial inequalities. We have evaluated our approach on a number of challenging benchmarks and the results are promising.

[Sim10a] 
A. Simon.
A note on the inversion join for polyhedral analysis.
In Proceedings of the 2nd International Workshop on Numerical
and Symbolic Abstract Domains (NSAD 2010), volume 267 of Electronic
Notes in Theoretical Computer Science, pages 115-126. Elsevier Science
B.V., 2010.
Linear invariants are essential in many optimization and verification tasks. The domain of convex polyhedra (sets of linear inequalities) has the potential to infer all linear relationships. Yet, it is rarely applied to larger problems due to the join operation whose most precise result is given by the convex hull of two polyhedra which, in turn, may be of exponential size. Recently, Sankaranarayanan et al. proposed an operation called inversion join to efficiently approximate the convex hull. While their proposal has an ad hoc flavour, we show that it is quite principled and, indeed, complete for planar polyhedra and, for general polyhedra, complete on over 70% of our benchmarks.

[Sim10b] 
A. Simon.
Speeding up polyhedral analysis by identifying common constraints.
In Proceedings of the 2nd International Workshop on Numerical
and Symbolic Abstract Domains (NSAD 2010), volume 267 of Electronic
Notes in Theoretical Computer Science, pages 127-138. Elsevier Science
B.V., 2010.
Sets of linear inequalities are an expressive reasoning tool for approximating the reachable states of a program. However, the most precise way to join two states is to calculate the convex hull of the two polyhedra that are represented by the inequality sets, an operation that is exponential in the dimension of the polyhedra. We investigate how similarities in the two input polyhedra can be exploited to improve the performance of this costly operation. In particular, we discuss how common equalities and certain inequalities can be omitted from the calculation without affecting the result. We expose a maximum of common equalities and inequalities by converting the polyhedra into a normal form and give experimental evidence of the merit of our method.

[SC10] 
A. Simon and L. Chen.
Simple and precise widenings for H-Polyhedra.
In K. Ueda, editor, Proceedings of the 8th Asian Symposium on
the Programming Languages and Systems (APLAS 2010), volume 6461 of
Lecture Notes in Computer Science, pages 139-155, Shanghai, China, 2010.
Springer-Verlag, Berlin.
While the revised widening for polyhedra is defined in terms of inequalities, most implementations use the double description method as a means to an efficient implementation. We show how standard widening can be implemented in a simple and efficient way using a normalized H-representation (constraint-only) which has become popular in recent approximations to polyhedral analysis. We then detail a novel heuristic for this representation that is tuned to capture linear transformations of the state space while ensuring quick convergence for non-linear transformations for which no precise linear invariants exist.

[SSSM07] 
M. Slanina, S. Sankaranarayanan, H. B. Sipma, and Z. Manna.
Controller synthesis of discrete linear plants using polyhedra.
Technical Report REACT-TR-2007-01, Computer Science Department,
Stanford University, Stanford, California, USA, 2007.
We study techniques for synthesizing synchronous controllers for affine plants with disturbances, based on safety specifications. Our plants are modeled in terms of discrete linear systems whose variables are partitioned into system, control, and disturbance variables. We synthesize nonblocking controllers that satisfy a user-provided safety specification by means of a fixed point iteration over the control precondition state transformer. Using convex polyhedra to represent sets of states, we present both precise and approximate algorithms for computing control preconditions and discuss strategies for forcing convergence of the iteration. We present a technique for automatically deriving controllers from the result of the analysis, and demonstrate our approach on examples.

[Sof08] 
S. Soffia.
Definition and implementation of a points-to analysis for C-like
languages.
Technical Report arXiv:cs.PL/0810.0753, Dipartimento di Matematica,
Università di Parma, Italy, 2008.
Available from http://arxiv.org/.
The points-to problem is the problem of determining the possible runtime targets of pointer variables and is usually considered part of the more general aliasing problem, which consists in establishing whether and when different expressions can refer to the same memory address. Aliasing information is essential to every tool that needs to reason about the semantics of programs. However, due to well-known undecidability results, for all interesting languages that admit aliasing, the exact solution of nontrivial aliasing problems is not generally computable. This work focuses on approximated solutions to this problem by presenting a store-based, flow-sensitive points-to analysis, for applications in the field of automated software verification. In contrast to software testing procedures, which heuristically check the program against a finite set of executions, the methods considered in this work are static analyses, where the computed results are valid for all the possible executions of the analyzed program. We present a simplified programming language and its execution model; then an approximated execution model is developed using the ideas of abstract interpretation theory. Finally, the soundness of the approximation is formally proved. The aim of developing a realistic points-to analysis is pursued by presenting some extensions to the initial simplified model and discussing the correctness of their formulation. This work contains original contributions to the issue of points-to analysis, as it provides a formulation of a filter operation on the points-to abstract domain and a formal proof of the soundness of the defined abstract operations: these, as far as we know, are lacking from the previous literature.

[Sta07] 
B. Starynkevitch.
Multistage construction of a global static analyzer.
In Proceedings of the 2007 GCC Developers' Summit, pages
143-151, Ottawa, Canada, 2007.
We describe ongoing work about global static analysis for GCC4 within the GlobalGCC European project, funded through the ITEA Programme. The aim of this work is to provide global (whole-program) static analysis, notably based upon abstract interpretation and knowledge-based techniques, within the GCC compiler, targeted for analysis of medium-sized C, Fortran or C++ programs. This will facilitate the adoption of GCC in the area of safety-critical software development, by providing features found in a few expensive commercial tools (PolySpace, AbsInt) or research prototypes (Astree). In this perspective, the emphasis is on the quality of analysis, at the expense of much bigger compilation times, without sacrificing scalability. Such analysis can be used for several purposes: statically compute some interesting properties of the program at most control points (possibly reporting them to the user); provide clever, contextual warnings about possible hazards in the user program (null pointer dereferences, zero divide, conversion loss, out-of-bound array access, ...) while avoiding too many false alarms; enable additional optimisations, like conditional contextual constant folding, C++ method call devirtualization, and other contextual optimizations. The compiler's rich program manipulation infrastructure facilitates the development of these advanced analysis capabilities. To facilitate the development of high-level semantic analyses, a domain-specific language has been developed, and is translated (through C) into dynamically loaded code. It uses the Parma Polyhedra Library (also used in the GRAPHITE project) for relational analysis on scalars and gives more expressivity to develop analysis algorithms. It permits multi-staged generation of the specific analysis tailored to the analyzed source code.
Presenting this work at the 2007 GCC summit will allow us to stress the importance of all outputs of the compiler, not only object code, and to expose the complementary contribution of static analyses and dynamic/instrumentation approaches like mudflap.

[SCR06] 
H. Song, K. J. Compton, and W. C. Rounds.
SPHIN: a model checker for reconfigurable hybrid systems based on
SPIN.
In R. Lazic and R. Nagarajan, editors, Proceedings of the 5th
International Workshop on Automated Verification of Critical Systems, volume
145 of Electronic Notes in Theoretical Computer Science, pages
167-183, University of Warwick, UK, 2006. Elsevier Science B.V.
We present SPHIN, a model checker for reconfigurable hybrid systems based on the model checker SPIN. We observe that physical (analog) mobility can be modeled in the same way as logical (discrete) mobility is modeled in the π-calculus by means of channel name passing. We chose SPIN because it supports channel name passing and can model reconfigurations. We extend the syntax of PROMELA and the verification algorithms based on the expected semantics. We demonstrate the tool's capabilities by modeling and verifying a reconfigurable hybrid system.

[TLR08] 
L.-M. Traonouez, D. Lime, and O. H. Roux.
Parametric model-checking of time Petri nets with stopwatches using
the stateclass graph.
In F. Cassez and C. Jard, editors, Proceedings of the 6th
International Conference on Formal Modeling and Analysis of Timed Systems
(FORMATS 2008), volume 5215 of Lecture Notes in Computer Science,
pages 280-294, Saint Malo, France, 2008. Springer-Verlag, Berlin.
In this paper, we propose a new framework for the parametric verification of time Petri nets with stopwatches controlled by inhibitor arcs. We first introduce an extension of time Petri nets with inhibitor arcs (ITPNs) with temporal parameters. Then, we define a symbolic representation of the parametric state space based on the classical state class graph method. The parameters of the model are embedded into the firing domains of the classes, that are represented by convex polyhedra. Finally, we propose semi-algorithms for the parametric model-checking of a subset of parametric TCTL formulae on ITPNs. We can thus generate the set of the parameter valuations that satisfy the formulae.

[TLR09] 
L.-M. Traonouez, D. Lime, and O. H. Roux.
Parametric model-checking of stopwatch Petri nets.
Journal of Universal Computer Science, 15(17):3273-3304, 2009.
At the border between control and verification, parametric verification can be used to synthesize constraints on the parameters to ensure that a system verifies given specifications. In this paper we propose a new framework for the parametric verification of time Petri nets with stopwatches. We first introduce a parametric extension of time Petri nets with inhibitor arcs (ITPNs) with temporal parameters and we define a symbolic representation of the parametric state-space based on the classical state-class graph method. Then, we propose semi-algorithms for the parametric model-checking of a subset of parametric TCTL formulae on ITPNs. These results have been implemented in the tool Romeo and we illustrate them in a case study based on a scheduling problem.

[TCE^{+}10] 
K. Trifunovic, A. Cohen, D. Edelsohn, L. Feng, T. Grosser, H. Jagasia,
R. Ladelsky, S. Pop, J. Sjödin, and R. Upadrasta.
GRAPHITE two years after: First lessons learned from real-world
polyhedral compilation.
In Proceedings of the 2nd International Workshop on GCC Research
Opportunities (GROW'10), pages 4-19, Pisa, Italy, 2010.
Modern compilers are responsible for adapting the semantics of source programs into a form that makes efficient use of a highly complex, heterogeneous machine. This adaptation amounts to solving an optimization problem in a huge and unstructured search space, while predicting the performance outcome of complex sequences of program transformations. The polyhedral model of compilation is aimed at these challenges. Its geometrical, non-inductive semantics enables the construction of better-structured optimization problems and precise analytical models. Recent work demonstrated the scalability of the main polyhedral algorithms to real-world programs. Its integration into production compilers is under way, pioneered by the graphite branch of the GNU Compiler Collection (GCC). Two years after the effective beginning of the project, this paper reports on original questions and innovative solutions that arose during the design and implementation of graphite.

[UFL^{+}06] 
E. Uchoa, R. Fukasawa, J. Lysgaard, A. Pessoa, M. Poggi de Aragão, and
D. Andrade.
Robust branch-cut-and-price for the capacitated minimum spanning tree
problem over a large extended formulation.
Technical Report RPEP, Vol. 6, No. 9, Universidade Federal
Fluminense, Engenharia de Produção, Niteroi, Brazil, 2006.
This paper presents a robust branch-cut-and-price algorithm for the Capacitated Minimum Spanning Tree Problem (CMST). The variables are associated to q-arbs, a structure that arises from a relaxation of the capacitated prize-collecting arborescence problem in order to make it solvable in pseudo-polynomial time. Traditional inequalities over the arc formulation, like Capacity Cuts, are also used. Moreover, a novel feature is introduced in such kind of algorithms. Powerful new cuts expressed over a very large set of variables could be added, without increasing the complexity of the pricing subproblem or the size of the LPs that are actually solved. Computational results on benchmark instances from the OR-Library show very significant improvements over previous algorithms. Several open instances could be solved to optimality.

[UC13] 
R. Upadrasta and A. Cohen.
Sub-polyhedral scheduling using (unit-)two-variable-per-inequality
polyhedra.
In Proceedings of the 40th Annual ACM SIGPLANSIGACT Symposium
on Principles of Programming Languages (POPL 2013), pages 483-496, Rome,
Italy, 2013. ACM Press, New York, USA.
Also published in SIGPLAN Notices, Volume 48, Number 1.
Polyhedral compilation has been successful in the design and implementation of complex loop nest optimizers and parallelizing compilers. The algorithmic complexity and scalability limitations remain one important weakness. We address it using sub-polyhedral under-approximations of the systems of constraints resulting from affine scheduling problems. We propose a sub-polyhedral scheduling technique using (Unit-)Two-Variable-Per-Inequality or (U)TVPI polyhedra. This technique relies on simple polynomial-time algorithms to under-approximate a general polyhedron into (U)TVPI polyhedra. We modify the state-of-the-art PLuTo compiler using our scheduling technique, and show that for a majority of the Polybench (2.0) kernels, the above under-approximations yield polyhedra that are non-empty. Solving the under-approximated system leads to asymptotic gains in complexity, and shows practically significant improvements when compared to a traditional LP solver. We also verify that code generated by our sub-polyhedral parallelization prototype matches the performance of PLuTo-optimized code when the under-approximation preserves feasibility.

[vOSV06] 
K. van Hee, O. Oanea, N. Sidorova, and M. Voorhoeve.
Verifying generalized soundness for workflow nets.
In I. Virbitskaite and A. Voronkov, editors, Perspectives of
System Informatics: Proceedings of the 6th International Andrei Ershov
Memorial Conference, volume 4378 of Lecture Notes in Computer Science,
pages 231-244, Akademgorodok, Novosibirsk, Russia, 2006. Springer-Verlag,
Berlin.
We improve the decision procedure from [K. van Hee, N. Sidorova, and M. Voorhoeve. Generalized soundness of workflow nets is decidable. In Proc. of ICATPN'2004, volume 3099 of LNCS, pages 197-216, 2004] for the problem of generalized soundness for workflow nets: “Every marking reachable from an initial marking with k tokens on the initial place terminates properly, i.e. it can reach a marking with k tokens on the final place, for an arbitrary natural number k”. Moreover, our new decision procedure returns a counterexample in case the workflow net is not generalized sound. We also report on experimental results obtained with the prototype we made and explain how the procedure can be used for the compositional verification of large workflows.

[Vas08] 
P. B. Vasconcelos.
Space Cost Analysis Using Sized Types.
PhD thesis, School of Computer Science, University of St Andrews, St
Andrews, UK, August 2008.
Programming resource-sensitive systems, such as real-time embedded systems, requires guaranteeing both the functional correctness of computations and also that time and space usage fit within constraints imposed by hardware limits or the environment. Functional programming languages have proved very good at meeting the former logical kind of guarantees but not the latter resource guarantees.

[Ver10] 
S. Verdoolaege.
An integer set library for the polyhedral model.
In K. Fukuda, J. Hoeven, M. Joswig, and N. Takayama, editors,
Proceedings of the 3rd International Congress on Mathematical Software (ICMS
2010), volume 6327 of Lecture Notes in Computer Science, pages
299-302, Kobe, Japan, 2010. Springer-Verlag, Berlin.
In compiler research, polytopes and related mathematical objects have been successfully used for several decades to represent and manipulate computer programs in an approach that has become known as the polyhedral model. The key insight is that the kernels of many compute-intensive applications are composed of loops with bounds that are affine combinations of symbolic constants and outer loop iterators. The iterations of a loop nest can then be represented as the integer points in a (parametric) polytope and manipulated as a whole, rather than as individual iterations. A similar reasoning holds for the elements of an array and for mappings between loop iterations and array elements.

[Vig07] 
G. Vigna.
Static disassembly and code analysis.
In M. Christodorescu, S. Jha, D. Maughan, D. Song, and C. Wang,
editors, Malware Detection, volume 27 of Advances in Information
Security. Springer-Verlag, Berlin, 2007.
The classification of an unknown binary program as malicious or benign requires two steps. In the first step, the stream of bytes that constitutes the program has to be transformed (or disassembled) into the corresponding sequence of machine instructions. In the second step, based on this machine code representation, static or dynamic code analysis techniques can be applied to determine the properties and function of the program. Both the disassembly and code analysis steps can be foiled by techniques that obfuscate the binary representation of a program. Thus, robust techniques are required that deliver reliable results under such adverse circumstances. In this chapter, we introduce a disassembly technique that can deal with obfuscated binaries. Also, we introduce a static code analysis approach that can identify high-level semantic properties of code that are difficult to conceal.

[YWGI06] 
Z. Yang, C. Wang, A. Gupta, and F. Ivančić.
Mixed symbolic representations for model checking software programs.
In Proceedings of the 4th ACM & IEEE International Conference
on Formal Methods and Models for Co-Design (MEMOCODE 2006), pages 17-26,
Embassy Suites, Napa, California, USA, 2006. IEEE Press.
We present an efficient symbolic search algorithm for software model checking. The algorithm combines multiple symbolic representations to efficiently represent the transition relation and reachable states and uses a combination of decision procedures for Boolean and integer representations. Our main contributions include: (1) mixed symbolic representations to model C programs with rich data types and complex expressions; and (2) new symbolic search strategies and optimization techniques specific to sequential programs that can significantly improve the scalability of model checking algorithms. Our controlled experiments on real-world software programs show that the new symbolic search algorithm can achieve several orders-of-magnitude improvements over existing methods. The proposed techniques are extremely competitive in handling sequential models of nontrivial sizes, and also compare favorably to popular Boolean-level model checking algorithms based on BDDs and SAT.

[YWGI09] 
Z. Yang, C. Wang, A. Gupta, and F. Ivančić.
Model checking sequential software programs via mixed symbolic
analysis.
ACM Transactions on Design Automation of Electronic Systems,
14(1):1–26, 2009.
We present an efficient symbolic search algorithm for software model checking. Our algorithms perform word-level reasoning by using a combination of decision procedures in Boolean and integer and real domains, and use novel symbolic search strategies optimized specifically for sequential programs to improve scalability. Experiments on real-world C programs show that the new symbolic search algorithms can achieve several orders-of-magnitude improvements over existing methods based on bit-level (Boolean) reasoning.

[ZC11] 
M. Zanioli and A. Cortesi.
Information leakage analysis by abstract interpretation.
In I. Černá, T. Gyimóthy, J. Hromkovič, K. Jeffery,
R. Královič, M. Vukolić, and S. Wolf, editors, SOFSEM 2011: Theory
and Practice of Computer Science, volume 6543 of Lecture Notes in
Computer Science, pages 545–557. Springer-Verlag, Berlin, 2011.
Protecting the confidentiality of information stored in a computer system or transmitted over a public network is a relevant problem in computer security. The approach of information flow analysis involves performing a static analysis of the program with the aim of proving that there will not be leaks of sensitive information. In this paper we propose a new domain that combines variable dependency analysis, based on propositional formulas, and variables' value analysis, based on polyhedra. The resulting analysis is strictly more accurate than the state-of-the-art abstract interpretation based analyses for information leakage detection. Its modular construction allows dealing with the trade-off between efficiency and accuracy by tuning the granularity of the abstraction and the complexity of the abstract operators.

[ZFC12] 
M. Zanioli, P. Ferrara, and A. Cortesi.
SAILS: Static analysis of information leakage with Sample.
In C.-C. Hung and J. Hong, editors, Proceedings of the 27th
Annual ACM Symposium on Applied Computing, pages 1308–1313, Trento, Italy,
2012. ACM, New York, USA.
In this paper, we introduce Sails, a new tool that combines Sample, a generic static analyzer, and a sophisticated domain for leakage analysis. This tool does not require modifying the original language, since it works with mainstream languages like Java, and it does not require any manual annotation. Sails can combine the information leakage analysis with different heap abstractions, inferring information leakage over programs dealing with complex data structures. We applied Sails to the analysis of the SecuriBench-micro suite. The experimental results show the effectiveness of our approach.

[ZWH12] 
D. Zufferey, T. Wies, and T. A. Henzinger.
Ideal abstractions for well-structured transition systems.
In V. Kuncak and A. Rybalchenko, editors, Verification, Model
Checking, and Abstract Interpretation: Proceedings of the 13th International
Conference (VMCAI 2012), volume 7148 of Lecture Notes in Computer
Science, pages 445–460. Springer-Verlag, Berlin, 2012.
Many infinite state systems can be seen as well-structured transition systems (WSTS), i.e., systems equipped with a well-quasi-ordering on states that is also a simulation relation. WSTS are an attractive target for formal analysis because there exist generic algorithms that decide interesting verification problems for this class. Among the most popular algorithms are acceleration-based forward analyses for computing the covering set. Termination of these algorithms can only be guaranteed for flattable WSTS. Yet, many WSTS of practical interest are not flattable and the question whether any given WSTS is flattable is itself undecidable. We therefore propose an analysis that computes the covering set and captures the essence of acceleration-based algorithms, but sacrifices precision for guaranteed termination. Our analysis is an abstract interpretation whose abstract domain builds on the ideal completion of the well-quasi-ordered state space, and a widening operator that mimics acceleration and controls the loss of precision of the analysis. We present instances of our framework for various classes of WSTS. Our experience with a prototype implementation indicates that, despite the inherent precision loss, our analysis often computes the precise covering set of the analyzed system.
