Proving Program Invariance and Termination by Parametric Abstraction, Lagrangian Relaxation and Semidefinite Programming

Patrick Cousot

École Normale Supérieure
45 rue d'Ulm, 75230 Paris cedex 05 (France)
Patrick.Cousot@ens.fr
www.di.ens.fr/~cousot

Abstract. In order to verify semialgebraic programs, we automatize the Floyd/Naur/Hoare proof method. The main task is to automatically infer valid invariants and rank functions.

First we express the program semantics in polynomial form. Then the unknown rank function and invariants are abstracted in parametric form. The implication in the Floyd/Naur/Hoare verification conditions is handled by abstraction into numerical constraints by Lagrangian relaxation. The remaining universal quantification is handled by semidefinite programming relaxation. Finally the parameters are computed using semidefinite programming solvers.

This new approach exploits the recent progress in the numerical resolution of linear or bilinear matrix inequalities by semidefinite programming using efficient polynomial primal/dual interior point methods generalizing those well-known in linear programming to convex optimization.

The framework is applied to invariance and termination proof of sequential, nondeterministic, concurrent, and fair parallel imperative polynomial programs and can easily be extended to other safety and liveness properties.

Keywords: Bilinear matrix inequality (BMI), Convex optimization, Invariance, Lagrangian relaxation, Linear matrix inequality (LMI), Liveness, Parametric abstraction, Polynomial optimization, Proof, Rank function, Safety, S-procedure, Semidefinite programming, Termination precondition, Termination, Program verification.

1 Introduction

Program verification is based on reasonings by induction (e.g. on program steps) which involve the discovery of unknown inductive arguments (e.g. rank functions, invariants) satisfying universally quantified verification conditions. For static analysis, the discovery of the inductive arguments must be automated, which consists in solving the constraints provided by the verification conditions. Several methods have been considered: recurrence/difference equation resolution; iteration, possibly with convergence acceleration; or direct methods (such as elimination). All these methods involve some form of simplification of the constraints by abstraction.

R. Cousot (Ed.): VMCAI 2005, LNCS 3385, pp. 1–24, 2005.
© Springer-Verlag Berlin Heidelberg 2005

In this paper, we explore parametric abstraction and direct resolution by Lagrangian relaxation into semidefinite programming. This is applied to termination (a typical liveness property) of semialgebraic programs. The extension to invariance (a typical safety property) is sketched.

The automatic determination of loop invariant/rank function can be summarized as follows:

1. Establish the relational semantics of the loop body (Sec. 2) (may be strengthened with correctness preconditions (Sec. 2.2), abstract invariants (Sec. 2.3), and/or simplified by relational abstraction (Sec. 2.4));
2. Set up the termination/invariance verification conditions (Sec. 3);
3. Choose a parametric abstraction (Sec. 4). The resolution of the abstract logical verification conditions by first-order quantifier elimination can be considered, but is very often too costly (Sec. 5);
4. Abstract further the abstract logical verification conditions into numerical constraints (Sec. 8) by Lagrangian relaxation (Sec. 6), obtaining Linear Matrix Inequalities for termination (Sec. 6.2) or Bilinear Matrix Inequalities for invariance (Sec. 12);
5. Solve the numerical constraints (Sec. 9) by semidefinite programming (Sec. 7).

After a series of examples (Sec. 10), we consider more complex language features including disjunctions in the loop test and conditionals in the loop body (Sec. 11.1), nested loops (Sec. 11.2), nondeterminism and concurrency (Sec. 11.3), bounded weakly fair parallelism (Sec. 11.4), and semi-algebraic/polynomial programs, for which a further relaxation into a sum of squares is applied (Sec. 11.5). The case of invariance is illustrated in Sec. 12. Potential problems with solvers are discussed in Sec. 13, before concluding (Sec. 14).

2 Relational Semantics of Programs

2.1 Semialgebraic Programs

We consider numerical iterative programs while B do C od, where B is a boolean condition and C is an imperative command (assignment, test or loop) on the global program variables. We assume that the operational semantics of the loop is given for an iteration as:

⟦B;C⟧(x0, x) = ⋀_{k=1}^{N} σk(x0, x) ≥ 0    (1)

where x0 is the line vector of values of the n program variables before an iteration, x is the line vector of their values after this iteration, and the relationship between x0 and x during a single iteration is expressed as a conjunction of N real valued positivity constraints with σk ∈ R^n × R^n ↦ R, k = 1, ..., N¹. Algorithmically interesting particular cases are when the constraints σk ≥ 0 can be expressed as linear constraints, quadratic forms or polynomial positivity. Equalities σk(x0, x) = 0 have to be written as σk(x0, x) ≥ 0 ∧ −σk(x0, x) ≥ 0.

Example 1 (Factorial). The program below computes the greatest factorial less than or equal to a given N, if any. The operational semantics of the loop body can be defined by the following constraints:

n := 0; f := 1;
while (f <= N) do
  n := n + 1; f := n * f
od

−f0 + N0 ≥ 0
n0 ≥ 0
f0 − 1 ≥ 0
−n0 + n − 1 = 0
−f0.n + f = 0
−N0 + N = 0

All constraints are linear but f − n.f0 = 0, which is quadratic (of the form [x0 x] A [x0 x]ᵀ + 2 [x0 x] q + r ≥ 0, where A is symmetric, q is a column vector, r is a constant, and ᵀ is transposition). With variables ordered (n0, f0, N0, n, f, N), it can be written with:

A = [ 0    0   0   0   0  0 ;
      0    0   0 -1/2  0  0 ;
      0    0   0   0   0  0 ;
      0  -1/2  0   0   0  0 ;
      0    0   0   0   0  0 ;
      0    0   0   0   0  0 ],   q = [0 0 0 0 1/2 0]ᵀ,   r = 0,

or equivalently as [x0 x 1] M [x0 x 1]ᵀ = 0, where M is symmetric, that is:

[n0 f0 N0 n f N 1] [ 0    0   0   0    0   0   0  ;
                     0    0   0 -1/2   0   0   0  ;
                     0    0   0   0    0   0   0  ;
                     0  -1/2  0   0    0   0   0  ;
                     0    0   0   0    0   0  1/2 ;
                     0    0   0   0    0   0   0  ;
                     0    0   0   0   1/2  0   0  ] [n0 f0 N0 n f N 1]ᵀ = 0.
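The loop-body constraints above can be sanity-checked by simulation. The sketch below (plain Python, not part of the paper's toolchain; the helper names are ours) runs the factorial loop and asserts that every iteration satisfies the six constraints, with (n0, f0, N0) the values before the iteration and (n, f, N) the values after it:

```python
# Simulation check of the loop-body constraints of Example 1: every
# iteration of the factorial loop must satisfy the three inequalities
# and the three equalities.

def constraints(n0, f0, N0, n, f, N):
    inequalities = [-f0 + N0,      # loop condition f0 <= N0
                    n0,            # n0 >= 0
                    f0 - 1]        # f0 >= 1
    equalities = [-n0 + n - 1,     # n = n0 + 1
                  -f0 * n + f,     # f = n.f0, the quadratic constraint
                  -N0 + N]         # N is unchanged
    return inequalities, equalities

def run_and_check(N):
    n, f = 0, 1
    while f <= N:
        n0, f0 = n, f
        n = n + 1
        f = n * f
        ineqs, eqs = constraints(n0, f0, N, n, f, N)
        assert all(c >= 0 for c in ineqs)
        assert all(c == 0 for c in eqs)
    return n, f

assert run_and_check(100) == (5, 120)   # 5! = 120 is the first factorial > 100
```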

2.2 Establishing Necessary Correctness Preconditions

Iterated Forward/Backward Static Analysis. Program verification may be unsuccessful when the program execution may fail under some circumstances (e.g. non termination, run-time errors). As part of the correctness proof, it is therefore mandatory to establish correctness preconditions excluding such misbehaviors. Such a necessary termination and absence of runtime errors precondition can be discovered automatically by an iterated forward/backward static analysis [8,13].

Discovering a Termination Precondition by the Auxiliary Termination Counter Method. Termination requirements can be incorporated into the iterated forward/backward static analysis in the form of an auxiliary termination counter k which is strictly decremented in the loop and is asserted to be zero on loop exit. For relational analyses, this strengthens the necessary termination precondition.

Example 2. The following example is from [4]. The analyzer uses the polyhedral abstract domain [15] implemented using the New Polka library [21].

¹ Any Boolean constraint on numerical variables can be written in that form using an appropriate numerical encoding of the boolean values and embedding of the numerical values into R.

Program:

while (x<>y) do
  x := x-1;
  y := y+1
od

Iterated forward/backward static analysis:

{x>=y}
while (x<>y) do
  {x>=y+2}
  x := x-1;
  {x>=y+1}
  y := y+1
  {x>=y}
od
{x=y}

Iterated forward/backward static analysis with termination counter:

{x=y+2k, x>=y}
while (x<>y) do
  {x=y+2k, x>=y+2}
  k := k-1;
  x := x-1;
  y := y+1
  {x=y+2k, x>=y}
od
{x=y, k=0}
assume (k=0)

The use of the auxiliary termination counter allows the iterated forward/backward polyhedral analysis to discover the necessary termination condition that the initial difference between the two variables should be even.
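The necessary termination condition found by the analysis (the initial difference x − y must be even, and nonnegative so that x = y stays reachable) can be illustrated by a small simulation of the analyzed program itself; the helper below is ours, not part of the analyzer:

```python
# The loop of Example 2: "while x <> y do x := x-1; y := y+1 od".
# Each iteration decreases x - y by 2, so the loop terminates iff the
# initial difference x - y is even and nonnegative; once x < y the exit
# condition x = y is unreachable, which the simulation uses to detect
# divergence.

def terminates(x, y):
    while x != y:
        if x < y:            # difference became negative: x = y unreachable
            return False
        x, y = x - 1, y + 1
    return True

assert all(terminates(y + 2 * k, y) for y in range(-3, 4) for k in range(5))
assert not terminates(5, 0)   # odd difference
assert not terminates(0, 4)   # negative difference
```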

2.3 Strengthening the Relational Semantics

Proofs can be made easier by strengthening the relational semantics through incorporation of known facts about the program runtime behavior. For example, an invariant may be needed, in addition to a termination precondition, to prove termination. Such a loop invariant can be determined automatically by a forward static analysis [9] assuming the initial termination precondition.

Example 3 (Ex. 2 continued). A forward polyhedral analysis [15] yields linear invariants:

Program:

assume (x=y+2*k) & (x>=y);
while (x<>y) do
  k := k-1;
  x := x-1;
  y := y+1
od

Forward polyhedral analysis:

assume (x=y+2*k) & (x>=y);
{x=y+2k, x>=y}
{loop invariant: x=y+2k}
while (x<>y) do
  {x=y+2k}
  k := k-1;
  x := x-1;
  y := y+1
  {x=y+2k}
od
{k=0, x=y}

2.4 Abstracting the Relational Semantics

Program verification may be made easier when the big-step operational semantics (1) of the loop body is simplified, e.g. through abstraction. To get such a simplified but sound semantics, one can use any relational abstraction. The technique consists in using auxiliary initial variables to denote the values of the program variables at the beginning of the loop iteration (whence satisfying the loop invariant and the loop condition). The invariant at the end of the loop body is then a sound approximation of the relational semantics (1) of the loop body². The polyhedral abstraction [15] will be particularly useful to derive automatically an approximate linear semantics.

Example 4 (Ex. 3 continued). Handling the operator <> (different) by case analysis, we get³:

Case (x<y), shown impossible:

assume (x=y+2*k) & (x>=y);
{x=y+2k, x>=y}
assume (x<y);
empty(6)
assume (x0=x) & (y0=y) & (k0=k);
k := k-1;
x := x-1;
y := y+1
empty(6)

Case (x>y):

assume (x=y+2*k) & (x>=y);
{x=y+2k, 1>=0, x>=y}
assume (x>y);
{x=y+2k, 1>=0, x>=y+1}
assume (x0=x) & (y0=y) & (k0=k);
k := k-1;
x := x-1;
y := y+1
{x+2=y+2k0, y=y0+1, x+1=x0, x=y+2k, x+1>=y}

3 Verification Condition Setup

3.1 Floyd/Naur/Hoare Invariance Proof Method

Given a loop precondition P(x) ≥ 0 which holds before loop entry, the invariance proof method [17,20,26] consists in proving that the invariant I(x) ≥ 0 is initial (that is to say, holds on loop entry) and inductive (i.e. remains true after each loop iteration):

∀x ∈ R^n: (P(x) ≥ 0) ⟹ (I(x) ≥ 0),    (2)
∀x0, x ∈ R^n: (I(x0) ≥ 0 ∧ ⋀_{k=1}^{N} σk(x0, x) ≥ 0) ⟹ (I(x) ≥ 0).    (3)

² The technique was first used in the context of static analysis for context-sensitive interprocedural analysis to compute summaries of recursive procedures [10].
³ empty(6) denotes the empty polyhedron in 6 variables, that is unreachability ⊥.

3.2 Floyd Rank Function Termination Proof Method

Floyd's method [17] for proving loop termination consists in discovering a rank function r ∈ R^n ↦ W of the values of the program variables into a well-founded set ⟨W, ≤⟩ which is strictly decreasing at each iteration of the loop. If the nondeterminism is bounded, one can choose ⟨W, ≤⟩ = ⟨N, ≤⟩. In what follows, we will often use real valued rank functions r which are nonnegative on loop body entry and strictly decrease at each iteration by a positive quantity bounded from below⁴. In such a case, and up to an isomorphism, the rank function r can be embedded into N.

In general a loop terminates only for some initial values of the variables, satisfying some loop termination precondition P(x) ≥ 0, so that the strict decrementation can be requested only for states reachable from this initial condition, as characterized by an invariant I(x) ≥ 0 in the sense of [17,20,26].

Floyd's verification conditions [17] for proving loop termination become:

∃r ∈ R^n ↦ R, ∃δ ∈ R:
∀x0 ∈ R^n: (I(x0) ≥ 0) ⟹ (r(x0) ≥ 0),    (4)
∀x0, x ∈ R^n: (I(x0) ≥ 0 ∧ ⋀_{k=1}^{N} σk(x0, x) ≥ 0) ⟹ (r(x0) − r(x) − δ ≥ 0),    (5)
δ > 0.    (6)

Remark 1. We can also choose δ = 1, but it is sometimes more flexible to let its value be computed by the solver (see later Rem. 4).

Remark 2. As proved in [12, Sec. 9, p. 290], the above choice of I and r not depending upon initial states is incomplete, so that, more generally, we may have to use I(x, x′) and r(x, x′), where x ∈ R^n denotes the initial value of the variables before loop entry and x′ ∈ R^n their current value, that is, x0 at the beginning of an iteration of the loop body and x at the end of that same iteration.

4 Parametric Abstraction

Fixing the form of the unknown invariant I(x) in (2) and (3), or of the rank function r in (4) and (5), in terms of p unknown parameters a ∈ R^p to be determined by the analysis is an abstraction [11]. An example is the affine abstraction [22]. More generally, a function f(x) can be abstracted in the form f_a(x), where a is a line vector of unknown parameters and x is the line vector of values of the loop variables⁵. For example, the linear case is f_a(x) = a.xᵀ and the affine case is f_a(x) = a.(x 1)ᵀ. A quadratic choice would be f_a(x) = x.a.xᵀ or f_a(x) = (x 1).a.(x 1)ᵀ, where a is a symmetric matrix of unknown parameters.

⁴ to avoid the Zeno phenomenon, that is, a strict decrease by 1, 1/2, 1/4, 1/8, ..., which could be infinite.

After parametric abstraction, it remains to compute the parameters a by solving the verification constraints. For example, the termination verification conditions (4), (5), and (6) become:

∃a ∈ R^p: ∃δ ∈ R:
∀x0 ∈ R^n: (I(x0) ≥ 0) ⟹ (r_a(x0) ≥ 0),    (7)
∀x0, x ∈ R^n: (I(x0) ≥ 0 ∧ ⋀_{k=1}^{N} σk(x0, x) ≥ 0) ⟹ (r_a(x0) − r_a(x) − δ ≥ 0),    (8)
δ > 0.    (9)

The resolution of these termination constraints in the case of linear programs and rank functions has been explored by [7], using a reduction of the constraints based on the construction of polars, intersection and projection of polyhedral cones (with limitations, such as that the loop test contains no disjunction and the body contains no test).

5 Solving the Abstract Verification Conditions by First-Order Quantifier Elimination

The Tarski-Seidenberg decision procedure for the first-order theory of real closed fields by quantifier elimination can be used to solve (7), (8), and (9), since it transforms a formula Q1 x1: ... Qn xn: F(x1, ..., xn) (where the Qi are first-order quantifiers ∀, ∃ and F is a logical combination of polynomial equations and inequalities in the variables x1, ..., xn) into an equivalent quantifier-free formula. However, Tarski's method cannot be bound by any tower of exponentials. The cylindrical algebraic decomposition method by Collins [6] has a worst-case time-complexity for real quantifier elimination which is "only" doubly exponential in the number of quantifier blocks. It is implemented in Mathematica® but cannot be expected to scale up to large problems. So we rely on another abstraction method, described below.

⁵ The sets of constraints f_a(x) ≥ 0 for all a may not be a Moore family for the pointwise ordering, in which case a concretization function γ may be used [14]. For simplicity we make no distinction between the representation of the constraint f_a(x) ≥ 0 and its value γ(f_a(x) ≥ 0) = {x ∈ R^n | f_a(x) ≥ 0}. In consequence, the use of a concretization function will remain implicit.

6 Abstraction of the Verification Conditions into Numerical Constraints by Lagrangian Relaxation

6.1 The Lagrangian Relaxation Method

Let V be a finite dimensional linear vector space, N > 0 and ∀k ∈ [0, N]: σk ∈ V ↦ R (not necessarily linear). Let R⁺ = {x ≥ 0 | x ∈ R}. To prove:

∀x ∈ V: (⋀_{k=1}^{N} σk(x) ≥ 0) ⟹ (σ0(x) ≥ 0),    (10)

the Lagrangian relaxation consists in proving that:

∃λ ∈ [1, N] ↦ R⁺: ∀x ∈ V: σ0(x) − Σ_{k=1}^{N} λk σk(x) ≥ 0,    (11)

where the λk are called Lagrange coefficients. The interest of Lagrangian relaxation is that the implication ⟹ and conjunction ⋀ in (10) are eliminated in (11).

The approach is obviously sound, since the hypothesis ⋀_{k=1}^{N} σk(x) ≥ 0 in (10) and the positivity of the Lagrange coefficients λ ∈ [1, N] ↦ R⁺ imply the positivity of Σ_{k=1}^{N} λk σk(x). Hence, by the antecedent of (10) and transitivity, σ0(x) ≥ 0 holds in the consequent of (10).

Observe that equality constraints can be handled in the same way by requesting the corresponding Lagrange coefficients to be reals (as opposed to nonnegative reals for inequality constraints). Indeed, for equality constraints σk(x) = 0, we can use (11) for both σk(x) ≥ 0 and −σk(x) ≥ 0 with respective coefficients λk ≥ 0 and λ′k ≥ 0, so that the terms λk σk(x) + λ′k (−σk(x)) can be grouped into a single term (λk − λ′k) σk(x) with no sign restriction on λk − λ′k. Since any real is equal to the difference of some nonnegative reals, we can equivalently use a single term λ″k σk(x) with λ″k ∈ R.
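As a tiny concrete instance of the soundness argument (our own toy constraints, not from the paper): to prove ∀x, y: (x − 1 ≥ 0 ∧ y − 1 ≥ 0) ⟹ (x + y − 2 ≥ 0), the Lagrange coefficients λ1 = λ2 = 1 make the relaxed certificate of (11) identically zero, hence nonnegative everywhere:

```python
# Toy instance of the relaxation (our own constraints, not the paper's):
# prove  (x - 1 >= 0  and  y - 1 >= 0)  ==>  (x + y - 2 >= 0)
# via (11) with lambda_1 = lambda_2 = 1: the certificate
# sigma_0 - lambda_1*sigma_1 - lambda_2*sigma_2 is identically 0,
# hence >= 0 for all x, y, so (10) follows by soundness.

LAM = (1.0, 1.0)

def certificate(x, y):
    s0 = x + y - 2          # sigma_0
    s1 = x - 1              # sigma_1
    s2 = y - 1              # sigma_2
    return s0 - LAM[0] * s1 - LAM[1] * s2

# (11): the certificate is >= 0 everywhere (here it is identically 0)
assert all(certificate(x, y) == 0.0
           for x in range(-5, 6) for y in range(-5, 6))
```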

Lagrangian relaxation is in general incomplete, that is (10) ⇏ (11), also called lossy. However it is complete (also called lossless) in the linear case (by the affine Farkas' lemma) and in the linear case with at most two quadratic constraints (by Yakubovitch's S-procedure [34, Th. 1]).

6.2 Lagrangian Relaxation of Floyd Termination Verification Conditions on Rank Functions

Relaxing Floyd's parametric verification conditions (7), (8), and (9), we get:

∃a ∈ R^p: ∃δ ∈ R: ∃μ ∈ R⁺: ∃λ ∈ [0, N] ↦ R⁺:
∀x0 ∈ R^n: r_a(x0) − μ.I(x0) ≥ 0,    (12)
∀x0, x ∈ R^n: r_a(x0) − r_a(x) − δ − λ0.I(x0) − Σ_{k=1}^{N} λk.σk(x0, x) ≥ 0,    (13)
δ > 0.    (14)

In [30], the constraints σk(x0, x) ≥ 0 are assumed to be linear, in which case the Lagrange coefficients can be eliminated by hand. Then the problem reduces to linear programming (with limitations, such as that the loop test contains no disjunction, the loop body contains no tests, and the method cannot identify the cases when the loop does not terminate). We can use semidefinite programming to overcome the linearity limitation.

7 Semidefinite Programming

The semidefinite programming optimization problem is to find a solution to the constraints:

∃x ∈ R^m: M(x) ⪰ 0,  minimizing cᵀx,

where c ∈ R^m is a given real vector and the linear matrix inequality (LMI) [3] M(x) ⪰ 0 is of the form:

M(x) = M0 + Σ_{k=1}^{m} xk.Mk

with symmetric matrices (Mkᵀ = Mk), and positive semidefiniteness is defined as:

M(x) ⪰ 0  ⟺  ∀X ∈ R^N: X M(x) Xᵀ ≥ 0.

The semidefinite programming feasibility problem consists in finding a solution to the constraints M(x) ⪰ 0. A feasibility problem can be converted into the optimization program min {−y ∈ R | ⋀_{i=1}^{N} Mi(x) − y ⪰ 0}.
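The membership test M(x) ⪰ 0 for a given candidate x can be checked numerically by inspecting the eigenvalues of the symmetric matrix M(x). The sketch below uses toy 2×2 matrices of our own choosing; it only checks one candidate x, it does not solve the SDP:

```python
# Numeric membership test for an LMI M(x) = M0 + sum_k x_k * M_k >= 0:
# a symmetric matrix is positive semidefinite iff its smallest
# eigenvalue is nonnegative.
import numpy as np

M0 = np.array([[1.0, 0.0],
               [0.0, 1.0]])
M1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])   # symmetric, as the definition requires

def M(x):
    return M0 + x[0] * M1

def is_feasible(x, tol=1e-9):
    # eigvalsh is NumPy's symmetric-matrix eigenvalue routine
    return bool(np.linalg.eigvalsh(M(x)).min() >= -tol)

assert is_feasible([0.5])       # eigenvalues 0.5 and 1.5
assert not is_feasible([2.0])   # eigenvalues -1 and 3
```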

8 LMI Constraint Setup for Termination

For programs whose invariant and operational semantics (1) can be expressed in the form:

I(x0) ∧ ⟦B;C⟧(x0, x) = ⋀_{k=1}^{N} (x0 x 1) Mk (x0 x 1)ᵀ ≥ 0,    (15)

the constraints (12), (13), and (14) become LMIs (in the unknowns a, μ, δ and the λk, k = 1, ..., N) by parametric abstraction (Sec. 4) of r_a in the form r_a(x) = (x 1) R (x 1)ᵀ, where R is a real (n+1)×(n+1)-symmetric matrix of unknown parameters.

The conjunction of LMIs M1(x) ⪰ 0 ∧ ... ∧ Mk(x) ⪰ 0 can be expressed as a single LMI diag(M1(x), ..., Mk(x)) ⪰ 0, where diag(M1(x), ..., Mk(x)) denotes the block-diagonal matrix with M1(x), ..., Mk(x) on its diagonal.

These LMIs can then be solved by semidefinite programming.

Example 5. To show this, we prove the linear termination of the linear example program below, considered as in general semidefinite form (so that the generalization to (15) is immediate). The semantics of the loop body can be determined by a forward symbolic analysis of the loop body, assuming the loop invariant (here the loop condition) and naming the values of the variables at the beginning of the loop body⁶:

Program:

while (x>=1) & (y>=1) do
  x := x - y
od

Semantics of the loop body:

assume (x0>0) & (y0>0);
{y0>=1, x0>=1}
assume (x=x0) & (y=y0);
{y0=y, x0=x, y0>=1, x0>=1}
x := x - y
{y0=y, x0=x+y, y0>=1, x0>=1}

The constraints σk(x0, x) are encoded as (x0 x 1) Mk (x0 x 1)ᵀ. For the above example (with variables ordered x0, y0, x, y, 1), we have:

Mk(:,:,1) = [  0   0   0   0  1/2 ;
               0   0   0   0   0  ;
               0   0   0   0   0  ;
               0   0   0   0   0  ;
              1/2  0   0   0  -1  ]

Mk(:,:,2) = [  0   0   0   0   0  ;
               0   0   0   0  1/2 ;
               0   0   0   0   0  ;
               0   0   0   0   0  ;
               0  1/2  0   0  -1  ]

Mk(:,:,3) = [  0   0    0    0   1/2 ;
               0   0    0    0    0  ;
               0   0    0    0  -1/2 ;
               0   0    0    0  -1/2 ;
              1/2  0  -1/2 -1/2   0  ]

Mk(:,:,4) = [  0    0   0   0    0  ;
               0    0   0   0  -1/2 ;
               0    0   0   0    0  ;
               0    0   0   0   1/2 ;
               0  -1/2  0  1/2   0  ]

that is, in symbolic form:

x0 − 1 ≥ 0:     (x0 y0 x y 1) Mk(:,:,1) (x0 y0 x y 1)ᵀ ≥ 0
y0 − 1 ≥ 0:     (x0 y0 x y 1) Mk(:,:,2) (x0 y0 x y 1)ᵀ ≥ 0
x0 − x − y = 0: (x0 y0 x y 1) Mk(:,:,3) (x0 y0 x y 1)ᵀ = 0
−y0 + y = 0:    (x0 y0 x y 1) Mk(:,:,4) (x0 y0 x y 1)ᵀ = 0.
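These encodings can be checked mechanically: for each Mk, the quadratic form (x0 y0 x y 1) Mk (x0 y0 x y 1)ᵀ must coincide with the corresponding symbolic constraint on every sample point. A plain-Python/NumPy sketch, with the matrices transcribed from the listing above and the variable ordering x0, y0, x, y, 1 assumed:

```python
# Mechanical check of the constraint encodings of Example 5.
import itertools
import numpy as np

Mk = [
    np.array([[0, 0, 0, 0, .5],      # x0 - 1 >= 0
              [0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0],
              [.5, 0, 0, 0, -1]]),
    np.array([[0, 0, 0, 0, 0],       # y0 - 1 >= 0
              [0, 0, 0, 0, .5],
              [0, 0, 0, 0, 0],
              [0, 0, 0, 0, 0],
              [0, .5, 0, 0, -1]]),
    np.array([[0, 0, 0, 0, .5],      # x0 - x - y = 0
              [0, 0, 0, 0, 0],
              [0, 0, 0, 0, -.5],
              [0, 0, 0, 0, -.5],
              [.5, 0, -.5, -.5, 0]]),
    np.array([[0, 0, 0, 0, 0],       # -y0 + y = 0
              [0, 0, 0, 0, -.5],
              [0, 0, 0, 0, 0],
              [0, 0, 0, 0, .5],
              [0, -.5, 0, .5, 0]]),
]

def quad(M, v):
    v = np.array(v, dtype=float)
    return float(v @ M @ v)

def symbolic(x0, y0, x, y):
    return [x0 - 1, y0 - 1, x0 - x - y, -y0 + y]

for x0, y0, x, y in itertools.product(range(-2, 3), repeat=4):
    v = (x0, y0, x, y, 1)
    for M, s in zip(Mk, symbolic(x0, y0, x, y)):
        assert abs(quad(M, v) - s) < 1e-9
```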

The termination constraints (12), (13), and (14) now become the following LMIs⁷:

⁶ as considered in Sec. 2.4, which is different from Rem. 2 where the values of variables were remembered before loop entry.
⁷ Notice that if (x 1) A (x 1)ᵀ ≥ 0 for all x, this is the same as (y t) A (y t)ᵀ ≥ 0 for all y and all t ≠ 0 (multiply the original inequality by t² and call x.t = y). Since the latter inequality holds true for all x and all t ≠ 0, by continuity it holds true for all x, t; that is, the original inequality is equivalent to positive semidefiniteness of A (thanks to Arkadi Nemirovski for this argument).

M0-l0(1,1)*Mk(:,:,1)-l0(2,1)*Mk(:,:,2)-l0(3,1)*Mk(:,:,3)-l0(4,1)*Mk(:,:,4)>=0
M0-M_0-delta-l(1,1)*Mk(:,:,1)-l(2,1)*Mk(:,:,2)-l(3,1)*Mk(:,:,3)-l(4,1)*Mk(:,:,4)>=0

(where >= is semidefinite positiveness), together with:

l0(1,1)>=0
l0(2,1)>=0
l(1,1)>=0
l(2,1)>=0

(where >= is the real comparison for elementwise constraints). The rank function r(x) = (x 1).R.(x 1)ᵀ appears in

M0 = [ R_{1:n,1:n}   0_{n×n}   R_{1:n,n+1} ;
       0_{n×n}       0_{n×n}   0_{n×1}     ;
       R_{n+1,1:n}   0_{1×n}   R_{n+1,n+1} ]

such that ∀x: r(x0) = (x0 x 1).M0.(x0 x 1)ᵀ, and in

M_0 = [ 0_{n×n}       0_{n×(n+1)} ;
        0_{(n+1)×n}   R           ]

such that ∀x0: r(x) = (x0 x 1).M_0.(x0 x 1)ᵀ, and

delta = [ 0_{2n×2n}   0_{2n×1} ;
          0_{1×2n}    δ        ]

so that ∀x0, x: (x0 x 1).delta.(x0 x 1)ᵀ = δ.

Remark 3. An affine (linear by abuse of language) rank function r_a(x) = a.(x 1)ᵀ, where x ∈ R^n and a ∈ R^{n+1}, can be enforced by choosing

R = [ 0_{n,n}       (a/2)ᵀ_{1:n} ;
      (a/2)_{1:n}   a_{n+1}      ].

9 Solving the Termination LMI Constraints

Following the extension of the interior point method for linear programming to convex cones [28], numerous solvers have been developed for semidefinite programming, such as bnb⁸ [24], CSDP [2], DSDP4 [1], lmilab [18], PenBMI⁹ [23], Sdplr [5], Sdpt3 [33], SeDuMi [32], with common interfaces under Matlab® such as YALMIP [24].

Example 6 (Ex. 5 continued). Choosing δ = 1 and a linear rank function as in Rem. 3, we can solve the LMI constraints of Ex. 5 using various solvers under YALMIP:

r(x,y) = +4.x +2.y -3                                      bnb
r(x,y) = +5.268942e+02.x +4.956309e+02.y -5.270981e+02     CSDP-4.9
r(x,y) = +2.040148e+07.x +2.222757e+07.y +9.096450e+06     DSDP4-4.7
r(x,y) = +2.767658e+11.x +2.265404e+11.y -1.311440e+11     lmilab
r(x,y) = +4.031146e+03.x +3.903684e+03.y +1.401577e+03     lmilab¹⁰
r(x,y) = +1.042725e+00.x +4.890035e-01.y +1.975391e-01     Sdplr-1.01
r(x,y) = +9.888097e+01.x +1.343247e+02.y -1.725408e+02     Sdpt3-3.02
r(x,y) = +1.291131e+00.x +4.498515e-01.y -1.316373e+00     SeDuMi-1.05

⁸ for integer semidefinite programming (bnb is a higher level module of YALMIP for solving integer programs over convex cones using the other referenced solvers).
⁹ to solve bilinear matrix inequalities.

Since different solvers use different resolution strategies, each one may provide a different solution. Moreover, since there are infinitely many different rank functions (e.g. just multiply by or add a positive constant), the solution may not be the one a human being would naturally think of. Indeed, in the above example, any r(x,y) = a.x + b.y + c with a ≥ 1, b ≥ 0 and a + b + c ≥ 0 will do.

Remark 4. It is also possible to let δ be an unknown parameter with the constraint δ > 0 as in (14). In this case, looking for a linear rank function with bnb, we get r(x,y) = +2.x -2 and δ = 8.193079e-01.

Remark 5. It is possible to check a rank function by fixing R as well as δ and then checking the feasibility of the constraints (12), (13), and (14), which returns the Lagrange coefficients. For example, to check r(x,y) = +1.x, we use R = [[0,0,1/2];[0,0,0];[1/2,0,0]] and δ = 1 while performing the feasibility check with bnb.

10 Examples

The examples illustrate different kinds of ranking functions.

10.1 Examples of Linear Termination of a Linear Loop

Example 7. Choosing δ = 1 and a linear rank function for the naïve Euclidean division:

assume (y >= 1);
q := 0; r := x;
while (y <= r) do
  r := r - y;
  q := q + 1
od

y − 1 ≥ 0
q − 1 ≥ 0
r ≥ 0
−q0 + q − 1 = 0
−x0 + x = 0
−y0 + y = 0
−r0 + y + r = 0
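The ranking function r'(x,y,q,r) = -2.y +2.q +6.r reported for this loop can be sanity-checked by simulation; the sketch below (plain Python, our own harness, a finite test rather than a proof) asserts nonnegativity on loop-body entry and decrease by at least δ = 1 at every iteration:

```python
# Simulation check of r'(x,y,q,r) = -2*y + 2*q + 6*r on the naive
# Euclidean division loop of Ex. 7.

def rank(y, q, r):
    return -2 * y + 2 * q + 6 * r

def check_division(x, y):
    assert x >= 0 and y >= 1
    q, r = 0, x
    while y <= r:
        before = rank(y, q, r)
        assert before >= 0                    # nonnegative on entry
        r, q = r - y, q + 1
        assert before - rank(y, q, r) >= 1    # decrease by >= delta = 1
    return q, r

for x in range(30):
    for y in range(1, 8):
        q, r = check_division(x, y)
        assert x == q * y + r and 0 <= r < y  # it really divides
```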

The linear semantics of the loop body (with polyhedral invariant) is given above. Solving the corresponding termination constraints with bnb, we automatically get the ranking function r'(x,y,q,r) = -2.y +2.q +6.r, which is certainly less intuitive than Floyd's proposal r''(x,y,q,r) = x − q [17] but has the advantage of not depending upon the nonlinear loop invariant x = r + q.y.

Example 8 (Ex. 4 continued). For the example Ex. 4 from [4] considered in Sec. 2.2, where the difference <> was handled as a disjunction and one case was shown to be impossible, we get:

assume (x=y+2*k) & (x>=y);
while (x<>y) do
  k := k-1;
  x := x-1;
  y := y+1
od

x − y + 1 ≥ 0
−2k0 + x − y + 2 = 0
−y0 + y − 1 = 0
−x0 + x + 1 = 0
x − y − 2k = 0

¹⁰ with a feasibility radius of ρ = 1.0e4, constraining the solution x to lie in the ball xᵀx < ρ², where ρ > 0.

With bnb, the proposed rank function is r(x,y,k) = +4.k, proving that the necessary termination precondition automatically determined by the auxiliary termination counter method of Sec. 2.2 is also sufficient.

10.2 Example of Quadratic Termination of a Linear Loop

Example 9. Let us consider the program below, which oddly simulates for i = n downto 1 do for j = n downto 1 do skip end end. The termination precondition has been automatically determined by iterated forward/backward polyhedral analysis. The loop invariant has been automatically determined by a forward polyhedral analysis, assuming the termination precondition. The analysis of the loop body involves a partitioning according to the test (j > 0), as later explained in Sec. 11.1. For each case, the polyhedral approximation of the semantics of the loop body (where initially (n0=n) & (i0=i) & (j0=j)) is given below:

assume (n >= 0);
i := n; j := n;
while (i <> 0) do
  assume ((j>=0) & (i>=0) & (n>=i) & (n>=j));
  if (j > 0) then
    j := j - 1
  else
    j := n; i := i - 1
  fi
od

Case (j0 > 0):
n − i ≥ 0
i − 1 ≥ 0
j ≥ 0
n − j − 1 ≥ 0
−j0 + j + 1 = 0
−i0 + i = 0
−n0 + n = 0

Case (j0 ≤ 0):
i ≥ 0
−i + j − 1 ≥ 0
−n0 + j = 0
j0 = 0
−i0 + i + 1 = 0
−n + j = 0

Choosing δ = 1 and a quadratic rank function, the resolution of the LMI constraints given in the next Sec. 11.1 by Sdplr-1.01 (with feasibility radius of 1.0e+3) yields the solution:

r(n,i,j) = +7.024176e-04.n^2 +4.394909e-05.n.i -2.809222e-03.n.j ...
           +1.533829e-02.n +1.569773e-03.i^2 +7.077127e-05.i.j ...
           +3.093629e+01.i ... +4.237694e+00.

[Figure omitted: successive values of r(n,i,j) during program execution, plotted for n = 10 on loop entry, strictly decrease along an inclined plane.]

Nested loops are better handled by induction on the nesting level, as shown in Sec. 11.2.

10.3 Example of Linear Termination of a Quadratic Loop

Example 10. The following program computes the least factorial strictly greater than a given integer N:

n := 0; f := 1;
while (f <= N) do
  n := n + 1; f := n * f
od

−f0 + N0 ≥ 0
n0 ≥ 0
f0 − 1 ≥ 0
−n0 + n − 1 = 0
−f0.n + f = 0
−N0 + N = 0

The non-linear semantics of the loop body (with polyhedral invariant) is given above. It has only one quadratic constraint, a case in which the Lagrangian relaxation is complete. The ranking function found by SeDuMi-1.05 (with feasibility radius of 1.0e+3) is r(n,f,N) = -9.993455e-01.n +4.346533e-04.f +2.689218e+02.N +8.744670e+02.

11 Extension to More Complex Language Features

11.1 Disjunctions in the Loop Test and Conditionals in the Loop Body

Disjunctions in the loop test and/or conditionals within the loop body can be analyzed by partitioning along the values of the boolean expressions [11, Sec. 10.2]. Equivalently, a case analysis of the boolean expressions yields an operational semantics of the loop body of the form:

⟦B;C⟧(x, x′) = ⋁_{j=1}^{M} ⋀_{k=1}^{Nj} σjk(x, x′) ≥ 0.    (16)

Whichever alternative is chosen, the rank function must strictly decrease while remaining nonnegative. Hence, we just have to consider the conjunction of all termination constraints for each of the possible alternatives. We have already seen Ex. 9. Here is another one.

Example 11. For the program below:

while (x < y) do
  if (i >= 0) then
    x := x + i + 1
  else
    y := y + i
  fi
od

Case (i0 ≥ 0):
−x0 + y0 − 1 ≥ 0
i0 ≥ 0
−i0 − x0 + x − 1 = 0
−y0 + y = 0
−i0 + i = 0

Case (i0 < 0):
−x0 + y0 − 1 ≥ 0
−i0 − 1 ≥ 0
−i0 − y0 + y = 0
−x0 + x = 0
−i0 + i = 0

the two cases are listed above¹¹. The termination constraints are given below (the P(j).Mk(:,:,k) corresponding to the k-th constraint in the j-th case, the corresponding Lagrange coefficients being l0(j).v(k,j) for the nonnegativity and l(j).v(k,j) for the decrementation by at least δ = 1; the matrices M0 and M_0 encapsulate the matrix R of the ranking function r(x) = (x 1).R.(x 1)ᵀ, while (x 1).delta.(x 1)ᵀ = δ, as explained in Sec. 8):

¹¹ Since the alternatives are considered on loop entry, a backward analysis may in general have to be used if some variable involved in a test of the loop body is modified in the body before that test.

M0-l0(1).v(1,1)*P(1).Mk(:,:,1)-l0(1).v(2,1)*P(1).Mk(:,:,2)-...
   l0(1).v(3,1)*P(1).Mk(:,:,3)-l0(1).v(4,1)*P(1).Mk(:,:,4)-...
   l0(1).v(5,1)*P(1).Mk(:,:,5)>=0
M0-M_0-delta-l(1).v(1,1)*P(1).Mk(:,:,1)-l(1).v(2,1)*P(1).Mk(:,:,2)-...
   l(1).v(3,1)*P(1).Mk(:,:,3)-l(1).v(4,1)*P(1).Mk(:,:,4)-...
   l(1).v(5,1)*P(1).Mk(:,:,5)>=0
l0(1).v(1,1)>=0
l0(1).v(2,1)>=0
l(1).v(1,1)>=0
l(1).v(2,1)>=0
M0-l0(2).v(1,1)*P(2).Mk(:,:,1)-l0(2).v(2,1)*P(2).Mk(:,:,2)-...
   l0(2).v(3,1)*P(2).Mk(:,:,3)-l0(2).v(4,1)*P(2).Mk(:,:,4)-...
   l0(2).v(5,1)*P(2).Mk(:,:,5)>=0
M0-M_0-delta-l(2).v(1,1)*P(2).Mk(:,:,1)-l(2).v(2,1)*P(2).Mk(:,:,2)-...
   l(2).v(3,1)*P(2).Mk(:,:,3)-l(2).v(4,1)*P(2).Mk(:,:,4)-...
   l(2).v(5,1)*P(2).Mk(:,:,5)>=0
l0(2).v(1,1)>=0
l0(2).v(2,1)>=0
l(2).v(1,1)>=0
l(2).v(2,1)>=0

Solving these LMI and elementwise constraints with bnb, we get r(i,x,y) = -4.x +4.y, that is essentially y − x, which corresponds to the intuition.
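This rank function can be sanity-checked by simulating the loop of Ex. 11 (a finite test, not a proof; since i is never modified, the branch taken is fixed by the initial sign of i):

```python
# Simulation check of r(i,x,y) = -4*x + 4*y for the loop of Ex. 11:
# "while x < y do if i >= 0 then x := x+i+1 else y := y+i fi od".

def rank(x, y):
    return -4 * x + 4 * y

def run(i, x, y):
    steps = 0
    while x < y:
        before = rank(x, y)
        assert before >= 4                    # nonnegative on entry (>= 4 for integers)
        if i >= 0:
            x = x + i + 1
        else:
            y = y + i
        assert before - rank(x, y) >= 4       # strict decrease
        steps += 1
    return steps

assert run(0, 0, 10) == 10    # x climbs by 1 per iteration
assert run(-2, 0, 10) == 5    # y falls by 2 per iteration
assert run(3, 0, 10) == 3     # x climbs by 4 per iteration
```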

11.2 Nested Loops

In the case of nested loops, the loops are handled one at a time, starting from the inner ones.

Example 12 (Manna's original bubble sort). For the bubble sort example below (taken literally from [25, p. 191]), the necessary termination precondition N >= 0 is automatically determined by the iterated forward/backward method of Sec. 2.2. A further automatic forward reachability analysis starting from this termination precondition yields loop invariants:

assume (N>=0);
n := N;
i := n;
loop invariant: {N=n, i>=0, n>=i}
while (i<>0) do
  j := 0;
  loop invariant: {N=n, j>=0, i>=j, i>=1, N>=i}
  while (j<>i) do
    j := j+1
  od;
  i := i-1
od

The result of this global analysis is used to determine the semantics of the inner loop body, as given by its forward analysis:

assume ((N=n) & (j>=0) & (i>=j) & (i>=1) & (N>=i));
assume (j<>i);
assume ((N0=N) & (n0=n) & (i0=i) & (j0=j));
j := j+1
{j=j0+1, i=i0, N=n0, N=N0, N=n, j>=1, N>=i, j<=i}

The termination of the inner loop is then proved by solving the corresponding termination constraints as shown in Sec. 8. The bnb solver yields the rank function r(N,n,i,j) = +2.n +4.i -4.j -4.

Next, the semantics of the outer loop body is given by its forward polyhedral analysis:

assume ((N=n) & (i>=0) & (n>=i));
assume (i<>0);
assume ((N0=N) & (n0=n) & (i0=i) & (j0=j));
j := 0;
while (j<>i) do
  j := j+1
od;
i := i-1
{i+1=j, i+1=i0, N=n0, N=N0, N=n, N>=i+1, i>=0}

The termination of the outer loop is then proved by solving the corresponding termination constraints as shown in Sec. 8. With bnb, we get the rank function r(N,n,i,j) = +2.n +4.i -3.

In case the program graph is irreducible, the program has to be considered as a whole, with different ranking functions attached to cutpoints (the choice of which may not be unique).
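Both rank functions found for Ex. 12 can be sanity-checked on a skeleton of the bubble sort loops (simulation only, not a proof; the harness name is ours):

```python
# Simulation check of the two rank functions of Ex. 12: inner loop
# r_in = 2n + 4i - 4j - 4 and outer loop r_out = 2n + 4i - 3.

def bubble_skeleton(N):
    assert N >= 0
    n = N
    i = n
    while i != 0:
        r_out = 2 * n + 4 * i - 3
        assert r_out >= 1                               # outer entry
        j = 0
        while j != i:
            r_in = 2 * n + 4 * i - 4 * j - 4
            assert r_in >= 1                            # inner entry
            j = j + 1
            assert r_in - (2 * n + 4 * i - 4 * j - 4) >= 4   # inner step
        i = i - 1
        assert r_out - (2 * n + 4 * i - 3) >= 4              # outer step
    return i

for N in range(8):
    assert bubble_skeleton(N) == 0
```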

11.3 Nondeterminism and Concurrency

Nondeterministic semantics are similar to (16) in Sec. 11.1. Nondeterminism can be used to handle concurrency by nondeterministic interleaving.

Example 13. The following concurrent program (where atomic actions are square bracketed) does terminate without any fairness hypothesis: if one process is never activated then the other process will terminate, and so the remaining one will then be activated.

[|
   while [x+2<y] do
     [x := x+1]
   od
||
   while [x+2<y] do
     [y := y-1]
   od
|]

By nondeterministic interleaving, this program is equivalent to the nondeterministic one:

while (x+2<y) do
  if (??) then
    x := x+1
  else if (??) then
    y := y-1
  else
    x := x+1;
    y := y-1
  fi fi
od

The conditionals in the loop body can be handled as explained in Sec. 11.1. An even simpler solution is to consider an abstract interpretation of the semantics of the loop body through a polyhedral approximation (the resulting constraints are given after the analysis below):

assume (x+2 < y);
assume ((x0=x)&(y0=y));
if (??) then
  x := x+1
else if (??) then
  y := y-1
else
  x := x+1;
  y := y-1
fi fi
{y+1>=y0, x<=x0+1, x+y0>=y+x0+1, x0+3<=y0}

−y0+y+1 ≥ 0
x0−x+1 ≥ 0
−x0+x+y0−y−1 ≥ 0
−x0+y0−3 ≥ 0

Establishing the termination constraints as explained in Sec. 8 and solving with bnb, we get the rank function r(x,y) = -4.x+4.y-9.
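This rank function can be checked directly against the three possible effects of the polyhedrally abstracted loop body (one step of either process, or one step of each); a Python sketch, not part of the paper's toolchain:

```python
# Check r(x,y) = -4.x + 4.y - 9 for the interleaved program: under the guard
# x+2 < y, the body either increments x, decrements y, or does both.

def r(x, y):
    return -4*x + 4*y - 9

for x in range(-15, 15):
    for y in range(-15, 15):
        if x + 2 < y:                          # loop guard
            assert r(x, y) >= 0                # here r = 4.(y-x) - 9 >= 3
            for nx, ny in ((x+1, y), (x, y-1), (x+1, y-1)):
                assert r(nx, ny) <= r(x, y) - 1   # decrease by at least 4

print("interleaving rank function checked")
```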

11.4 Bounded Weakly Fair Parallelism

One way of handling fair parallelism is to consider nondeterministic interleaving with a scheduler that ensures bounded weak fairness.

Example 14. The following weakly fair parallel program (where atomic actions are bracketed):

[[
while [(x>0)|(y>0)] do
  [ x := x-1 ]
od
||
while [(x>0)|(y>0)] do
  [ y := y-1 ]
od
]]

does not terminate when x and y are initially positive and one of the processes is never activated. Because of the bounded fairness hypothesis, the parallel program is semantically equivalent to the following nondeterministic program:

assume (m>=1);
t := ?;
assume ((0<=t)&(t<=1));
s := ?;
assume ((1<=s)&(s<=m));
while ((x>0)|(y>0)) do
  if (t=1) then
    x := x-1
  else
    y := y-1
  fi;
  if (s=1) then
    if (t=1) then
      t := 0
    else
      t := 1
    fi;
    s := ?;
    assume ((1<=s)&(s<=m))
  else
    s := s-1
  fi
od

The nondeterministic program incorporates an explicit scheduler of the two parallel processes, where the turn t indicates which process is running and s indicates the number of atomic steps remaining to run before activating another process. The nondeterminism is bounded by m, which ensures the existence of an integer-valued rank function. Notice however that, as found in practice, although the nondeterminism is known to be bounded, it is not known by how much (m can take any positive integer value, including very large ones). The theoretical notion of weak fairness corresponds to the case when m→∞, that is unbounded nondeterminism, which may require ordinal-valued rank functions. In practice one can use lexicographic orderings on N.
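The role of the bound m can also be observed by direct simulation: in any 2.m consecutive iterations each process is activated at least once (the current time slice has at most m steps left, and the next slice belongs to the other process), so every run exits the loop within roughly 2.m.(x+y) steps. A randomized Python sketch, where the cap is a conservative bound derived from this remark rather than a figure from the paper:

```python
import random

def run(x, y, m, rng):
    # Explicit bounded weakly fair scheduler: t is the turn, s the number of
    # atomic steps remaining before the turn must change (1 <= s <= m).
    t = rng.randint(0, 1)
    s = rng.randint(1, m)
    cap = 2 * m * (max(x, 0) + max(y, 0)) + 2 * m   # conservative bound
    steps = 0
    while x > 0 or y > 0:
        if t == 1:
            x -= 1
        else:
            y -= 1
        if s == 1:
            t = 1 - t                  # switch the turn
            s = rng.randint(1, m)      # fresh bounded time slice
        else:
            s -= 1
        steps += 1
        assert steps <= cap, "exceeded the fairness-derived bound"
    return steps

rng = random.Random(0)
for _ in range(200):
    run(rng.randint(0, 10), rng.randint(0, 10), rng.randint(1, 8), rng)
print("every simulated run terminates")
```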

A forward analysis of the program determines the loop invariant {t<=1, s<=m, s>=1, t>=0}. The disjunction in the loop test is handled by partitioning, see Sec. 11.1. There are two cases for the loop test, (x>0) or (y>0). In each case, the loop body is partitioned according to the value of s which, according to the invariant determined by the forward polyhedral analysis, is either (s=1) or (s>1). The case (x>0 ∧ s>1) is illustrated below (empty(10) stands for ⊥, that is unreachability):

assume ((t<=1)&(s<=m)&(s>=1)&(t>=0));
assume (x>0);
assume (s>1);
assume ((x0=x)&(y0=y)&(t0=t)&(s0=s)&(m0=m));
if (t=1) then x := x-1 else y := y-1 fi;
if (s=1) then
  if (t=1) then
    t := 0
  else
    t := 1
  fi;
  s := ?;
  assume ((1<=s)&(s<=m))
  {empty(10)}
else
  s := s-1
fi
{m=m0,s+1=s0,t=t0,t+y0=y+1,t+x=x0,s+1<=m,t<=1,t>=0,t+x>=1,s>=1}

The other three forward analyses are similar and yield the following affine operational semantics for each of the alternatives:

x>0 ∧ s>1: {m=m0,s+1=s0,t=t0,t+y0=y+1,t+x=x0,s+1<=m,t<=1,t>=0,t+x>=1,s>=1}
x>0 ∧ s=1: {m=m0,s0=1,t+t0=1,t+y=y0,t+x0=x+1,t<=1,s<=m,s>=1,t>=0,t<=x}
y>0 ∧ s>1: {m=m0,s+1=s0,t=t0,t+y0=y+1,t+x=x0,s+1<=m,t<=1,t>=0,t<=y,s>=1}
y>0 ∧ s=1: {m=m0,s0=1,t+t0=1,t+y=y0,t+x0=x+1,t<=1,s<=m,s>=1,t>=0,t+y>=1}

The LMI termination constraints can then be established, as explained in Sec. 11.1. Solving with SeDuMi-1.05 (with a feasibility radius of 1.0e+4), we get the quadratic rank function:

r(x,y,m,s,t) = +8.078228e-06.x^2 +8.889797e-10.x.y +2.061102e-10.x.m
  +2.360326e-11.x.s +2.763786e-09.x.t +9.998548e-01.x +9.770849e-07.y^2
  +7.219411e-07.y.m -1.091400e-07.y.s -2.098975e-06.y.t +6.158628e+02.y
  +4.044804e-06.m^2 -2.266154e-08.m.s +1.794800e-06.m.t +4.524134e-04.m
  +7.994478e-06.s^2 -1.899723e-08.s.t -3.197335e-05.s +2.450149e-06.t^2
  +3.556544e-04.t +9.696939e+03

11.5 Semi-algebraic/Polynomial Programs

The termination constraints (12), (13), and (14) for semi-algebraic/polynomial programs lead to polynomial inequalities. A necessary condition for ∀x : p(x) ≥ 0 is that the degree m = 2d of p be even. A sufficient condition for nonnegativity of p(x) is that p(x) ≥ q(x) where q(x) is a sum of squares (SOS) of other polynomials, q(x) = Σ_i r_i(x)^2 for some r_i(x) ∈ R[x] of degree d [27]. However, the condition is not necessary.
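The standard witness for this gap is the Motzkin polynomial x^4.y^2 + x^2.y^4 − 3.x^2.y^2 + 1, which is nonnegative everywhere (by the arithmetic–geometric mean inequality applied to x^4y^2, x^2y^4 and 1) yet is provably not a sum of squares of polynomials. A quick numeric sketch of its nonnegativity:

```python
# The Motzkin polynomial: nonnegative on all of R^2, but not an SOS.

def motzkin(x, y):
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

samples = [motzkin(i / 10.0, j / 10.0)
           for i in range(-30, 31) for j in range(-30, 31)]
assert min(samples) >= 0          # nonnegative on the whole sample grid
assert motzkin(1.0, 1.0) == 0.0   # its minimum 0 is attained at (+-1, +-1)

print("Motzkin polynomial is >= 0 on the sample grid")
```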

The Gram matrix method [29] consists in fixing a priori the form of the base polynomials r_i(x) in the sum of squares and in assuming that q(x) = z(x)ᵀQz(x), where z(x) is the vector of the N = (n+d choose d) monomials of the monomial basis B_{d,n} in any total monomial order (for example z(x) = [1, x_1, ..., x_n, x_1^2, x_1.x_2, ..., x_n^d]) and Q is a symmetric positive semidefinite matrix of reals. Since Q ≽ 0, Q has a Cholesky decomposition L, an upper triangular matrix such that Q = LᵀL. It follows that q(x) = z(x)ᵀQz(x) = z(x)ᵀLᵀLz(x) = (Lz(x))ᵀ(Lz(x)) = Σ_i (L_{i,:} · z(x))^2 (where · is the vector dot product x·y = Σ_i x_i.y_i), proving that q(x) is a sum of squares.

Finally, z(x)z(x)ᵀ contains all the monomials in x appearing in p(x), and so ∀x : p(x) − q(x) ≥ 0 can be expressed in the form ∀x : z(x)ᵀMz(x) ≥ 0, where M is a square symmetric matrix depending upon the coefficients of p(x) and the unknowns in Q. By letting X stand for z(x), the problem can be relaxed into the feasibility of ∀X : XᵀMX ≥ 0, which can be expressed as a semidefinite problem. If the problem is feasible, then the solution provides the value of Q, whence a proof that p(x) is positive.

The method is implemented both by sostools [31] and by an entirely independent built-in module of YALMIP [24] under Matlab®.

Example 15 (Logistic map). The deterministic logistic map f(x) = ax(1−x) with bifurcation parameter a such that 0 ≤ a < 1 has a sink at 0, and every initial condition between 0 and 1 is attracted to this sink. So the following program (where z > 0 is implemented as z ≥ ε with a small ε)

eps = 1.0e-10;
while (0<=a)&(a<=1-eps)&(eps<=x)&(x<=1) do
  x := a*x*(1-x)
od
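Termination of this loop can also be observed concretely: since x' = a.x.(1−x) ≤ a.x, x decays at least geometrically towards the sink at 0 and the guard eps ≤ x eventually fails. A simulation sketch in Python; for a close to 1−eps termination still holds but may take on the order of 1/eps iterations, so the sketch only samples a ≤ 0.9:

```python
# Simulate the logistic-map loop and count iterations until the guard fails.

eps = 1.0e-10

def iterations(a, x):
    steps = 0
    while 0 <= a <= 1 - eps and eps <= x <= 1:
        x = a * x * (1 - x)
        steps += 1
        assert steps < 10**4, "unexpectedly long run"   # a <= 0.9 => fast decay
    return steps

for a in (0.0, 0.3, 0.6, 0.9):
    for x in (eps, 0.1, 0.5, 0.999, 1.0):
        iterations(a, x)

print("logistic loop exits for all sampled (a, x)")
```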

[Figure: surface plot of a.x.(1-x) as a function of x and a]

The Matlab® program below establishes the termination conditions with Lagrangian relaxation (12), (13), and (14):


pvar a x0 x1 c0 d0 e0 l1 l2 l3 l4 l5 m1 m2 m3 m4 m5;
eps = 1.0e-10;
iv = [a;x0;x1];
uv = [c0;d0;l1;l2;l3;l4;l5;m1;m2;m3;m4;m5];
pb = sosprogram(iv,uv);
pb = sosineq(pb,l1); pb = sosineq(pb,l2);
pb = sosineq(pb,l3); pb = sosineq(pb,l4);
pb = sosineq(pb,c0*x0+d0-l1*a-l2*(1-eps-a)-l3*(x0-eps)-l4*(1-x0)...
             -l5*(x1-a*x0*(1-x0)));
pb = sosineq(pb,m1); pb = sosineq(pb,m2);
pb = sosineq(pb,m3); pb = sosineq(pb,m4);
pb = sosineq(pb,c0*x0-c0*x1-eps^2-m1*a-m2*(1-eps-a)-m3*(x0-eps)...
             -m4*(1-x0)-m5*(x1-a*x0*(1-x0)));
spb = sossolve(pb);
c = sosgetsol(spb,c0); d = sosgetsol(spb,d0);
disp(sprintf('r(x) = %i.x + %i', double(c), double(d)));

These polynomial constraints are relaxed by sostools v2.00 into a semidefinite program, which is then solved by SeDuMi-1.05. The result is:

r(x) = 1.222356e-13.x + 1.406392e+00

12 Invariance

In the same way, loop invariants can be generated automatically by parametric abstraction (Sec. 4) and resolution of the Lagrangian relaxation (Sec. 6.1) of Floyd's invariance verification conditions (2) and (3). We get:

∃a ∈ R^p : ∃μ ∈ R^+ : ∃λ ∈ [0,N] → R^+ :

∀x ∈ R^n : I_a(x) − μ.P(x) ≥ 0,   (17)

∀x0,x ∈ R^n : I_a(x) − λ0.I_a(x0) − Σ_{k=1}^{N} λk.σk(x0,x) ≥ 0.   (18)

There is an additional difficulty, however, since the appearance of the parametric abstraction of the invariant on the left of the implication in (3) yields, by Lagrangian relaxation, the term λ0.I_a(x0) in (18), which is bilinear in λ0 and a. For programs whose operational semantics has the form ⟦B;C⟧(x0,x) = ∧_{k=1}^{N} (x0 x 1)M_k(x0 x 1)ᵀ ≥ 0, constraint (18) is a bilinear matrix inequality (BMI), which can be solved by BMI solvers, which first appeared only recently, such as PenBMI [23] and bmibnb [24]. Contrary to iterative methods (at least when the ascending chain condition is satisfied), the invariant need not be the strongest one.

Example 16. This is illustrated by the following example from [15] where the invariant is obtained by forward polyhedral analysis:
