Question About Computational Complexity

Dear LtU,

My Google search didn't answer my question, so I am posting it here.

When we look at a program, especially in an imperative language, we are often very quick to determine its computational complexity. Other times the same job is very hard.
My question is about the logical process we follow in determining the computational complexity of a program/function.
(Of course, I mean the worst-case complexity only.)
I am looking for something very similar to Hoare logic:
a sort of formal logical system for deriving judgments like P ∈ O(n^2), where n is the length of the input.

Such a system should make explicit the informal steps we use every day in determining the complexity of a function (especially in imperative languages).

I don't know whether Hoare logic can be used directly for this purpose. Maybe it can, and we don't need a different formal system.

Thanks for any help.

Matteo

Undecidable in general, but

I expect there exist fairly natural algorithms to determine complexity in languages like this one:

Type Systems For Polynomial-time Computation

In fact, in a language like that you may even be able to tell the complexity from the type of the function? (I should actually read the paper, shouldn't I :)

Thanks for your hint! This

Thanks for your hint!

This paper seems very interesting, but it is not what I am looking for.
Roughly speaking, in this thesis the author imposes several syntactic restrictions so that all writable programs (or rather, all typable programs) are in P-time.
While this is very interesting, to me it feels like saying "if in a while-program we omit every while and use only for, we always obtain terminating programs".

Hoare logic, by contrast, offers a system for deducing the termination of a program.
It is a proof system, so "undecidable in general" is not the point here.

So I am looking for a proof system rather than syntactic restrictions.

Thank you again.

Matteo

What you're asking for seems

What you're asking for seems so reasonable and natural that I would be shocked if someone hadn't looked into it already, but casual Googling doesn't seem to find any references. So let's see if we can't cook one up.

First, programs are the usual, with pure expressions and commands that have imperative effects:

e ::= x | n | e + e | e < e | etc.
C ::= x := e | skip | if e then C else C | while e do C | C; C

Next, we need a semantics for this language. Usually, what we do is give a transition relation that says how a command transforms one environment (mapping of variables to values) to another one. However, to analyze performance, we need to know how long it takes, too. So we can give an instrumented semantics that also tells us how long it took. For simplicity, I'll assume that evaluating expressions is free:

C @ R ==> R'; k 

Read this as "command C transforms environment R to environment R', in k steps"

------------------
skip @ R ==> R; 0


R' = [R | x = R[e]]
--------------------
x := e @ R ==> R'; 1


R[e] == true   C1 @ R ==> R'; k
-------------------------------------
if e then C1 else C2 @ R ==> R'; k+1


R[e] == false   C2 @ R ==> R'; k
-------------------------------------
if e then C1 else C2 @ R ==> R'; k+1


C1 @ R ==> R'; k1   C2 @ R' ==> R''; k2
---------------------------------------
C1; C2 @ R ==> R''; k1 + k2


R[e] == false  
-------------------------
while e do C @ R ==> R; 1


R[e] == true  C @ R ==> R'; k'    while e do C @ R' ==> R''; k''
----------------------------------------------------------------
while e do C @ R ==> R''; k' + k''
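
To make the instrumented semantics concrete, here is a minimal interpreter sketch in Python (the tuple encoding of commands and the names eval_expr/run are my own, not part of the rules above). It takes a command and an environment and returns the final environment together with the step count k:

# Minimal sketch of the instrumented semantics above, in Python.
# Expressions are evaluated by a helper and cost 0 steps, matching the
# "expressions are free" assumption.

def eval_expr(e, env):
    if isinstance(e, int):           # numeral n
        return e
    if isinstance(e, str):           # variable x
        return env[e]
    op, a, b = e                     # ('+', e1, e2) or ('<', e1, e2)
    if op == '+':
        return eval_expr(a, env) + eval_expr(b, env)
    if op == '<':
        return eval_expr(a, env) < eval_expr(b, env)
    raise ValueError(op)

def run(cmd, env):
    """Return (env', k): the final environment and the number of steps."""
    tag = cmd[0]
    if tag == 'skip':                                # skip @ R ==> R; 0
        return env, 0
    if tag == ':=':                                  # x := e @ R ==> R'; 1
        _, x, e = cmd
        env2 = dict(env)
        env2[x] = eval_expr(e, env)
        return env2, 1
    if tag == 'if':                                  # chosen branch in k steps, plus 1
        _, e, c1, c2 = cmd
        env2, k = run(c1 if eval_expr(e, env) else c2, env)
        return env2, k + 1
    if tag == ';':                                   # k1 + k2 steps
        _, c1, c2 = cmd
        env1, k1 = run(c1, env)
        env2, k2 = run(c2, env1)
        return env2, k1 + k2
    if tag == 'while':
        _, e, c = cmd
        if not eval_expr(e, env):                    # guard false: 1 step
            return env, 1
        env1, k1 = run(c, env)                       # guard true: body, then loop again
        env2, k2 = run(cmd, env1)
        return env2, k1 + k2
    raise ValueError(tag)

# Example: while 0 < x do x := x + (-1), starting from x = 5.
loop = ('while', ('<', 0, 'x'), (':=', 'x', ('+', 'x', -1)))
print(run(loop, {'x': 5}))    # -> ({'x': 0}, 6), i.e. 1 + n steps for x = n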

Now, let's figure out what Hoare triples should be. Usually, they're of the form {P} C {Q}, where P and Q are assertions: P describes the state before the command runs, and Q the state after it finishes. A triple holds when any environment that makes P true is transformed by C into an environment in which Q is true:

{P} C {Q} is valid iff

forall R. if FV(C, P, Q) subset of dom(R) then 
  if R[P] holds then 
    there exist R', k such that
       C @ R ==> R'; k  and  R'[Q] holds

(I'm being a bit sloppy here about free variables, but I think it can
all be made correct!)

How can we extend this to say something about the execution time, as
well? Let's do the obvious thing, and turn the triple into a quadruple
-- {P} C {Q; T}. The idea is that the evaluation time of C
is bounded by the expression T.

{P} C {Q; T} is valid iff 

forall R. if FV(C, P, Q, T) subset of dom(R) then
  if R[P] holds, then 
    there exist R', k. 
      C @ R ==> R'; k  and  R'[Q] holds  and k <= R[T]
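
One way to get a feel for this definition is to test a candidate quadruple on sample environments: run C on each environment satisfying P and check that Q holds in the result and that the step count stays within T. A rough Python sketch (a test, not a proof; the name check_quadruple is mine, and cmd stands for the command's instrumented behaviour, a function from an environment to (env', steps)):

def check_quadruple(P, cmd, Q, T, samples):
    """P, Q: env -> bool;  T: env -> int;  cmd: env -> (env', steps)."""
    for env in samples:
        if not P(env):
            continue                      # only environments satisfying P matter
        env2, k = cmd(env)
        if not Q(env2) or k > T(env):
            return False, env             # counterexample found
    return True, None

# Example: {x >= 0} x := x + 1 {x >= 1; 1}
cmd = lambda env: ({**env, 'x': env['x'] + 1}, 1)
ok, _ = check_quadruple(lambda r: r['x'] >= 0, cmd,
                        lambda r: r['x'] >= 1, lambda r: 1,
                        [{'x': n} for n in range(10)])
print(ok)    # -> True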

Now, using this definition, I think you can prove that the following
inference rules are sound:

---------------
{P} skip {P; 0}


----------------------
{P[x/E]} x := E {P; 1}



{P} C {Q; T}   T <= T'
----------------------
    {P} C {Q; T'}  


{P} C1 {Q; T1}   {Q} C2 {R; T2}
-------------------------------
   {P} C1; C2 {R; T1 + T2}


{P and B} C1 {Q; T}  {P and not(B)} C2 {Q; T}
---------------------------------------------
    {P} if B then C1 else C2 {Q; T}


{P and B and i = n} C {P and i = f(n); T}   
f is a monotone decreasing function
f^j(n) == 0
-----------------------------------------
   {P} while B do C {Q; 1 + j*T}

Some doubts

Thank you, neelk!!

I was very surprised too that my googling didn't find something like your work.
What you have done is very similar to what I had in mind.

(** Sorry if my English is not very good **)

But there is something I would like to discuss.
With your system we always obtain a judgment {P} C {Q; k} with k constant.
This gives us an upper bound that is not related to the size of the input variables.

What I mean is that "P terminates in 34220 steps" doesn't tell us anything about its computational complexity.

A first modification of your system toward this goal could be to assume that evaluating expressions is NOT free.
So, for example, evaluating e = x + y would take time size(x) + size(y).

In the end we would have, in the position of k, some expression involving the sizes of the variables.
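
Just to illustrate what "not free" could look like, here is a rough sketch of a symbolic cost function over the expression grammar above, under my own simplifying assumptions that reading a variable x costs size(x) and literals cost nothing:

def cost(e):
    if isinstance(e, int):              # numeral: assumed free
        return '0'
    if isinstance(e, str):              # variable x: assumed to cost size(x)
        return 'size(%s)' % e
    _, a, b = e                         # ('+', e1, e2), ('<', e1, e2), ...
    return '%s + %s' % (cost(a), cost(b))

print(cost(('+', 'x', 'y')))    # -> size(x) + size(y)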

Suppose we have made this change...
How should the "while rule" change?


{P and B and i = n} C {P and i = f(n); T}
f is a monotone decreasing function
f^j(n) == 0
-----------------------------------------
{P} while B do C {Q; 1 + j*T}

I am not sure it needs to change, but I think that, written this way, it only increases our complexity counter by a constant factor.
In fact, we don't want, say, j = 1500; we want j = size(m)^2 + size(n).

What do you think about it?
Consider also that the other rules only SUM "computational complexity".
So we have the problem that the complexity doesn't grow!
size(n) + ... + size(n) = x * size(n) = O(size(n)),
even if x = size(n)^3.

I think this is because we are not accounting for the sizes of the variables in the right way.

What do you think?

Thank you again,

Matteo

...

First, I screwed up the postcondition in the conclusion of the while rule -- it should read:

{P and B and i = e} C {P and i = f(e); T}
f is a monotone decreasing function
f^j(e) == 0
------------------------------------------
{P} while B do C {P and not(B); j*T}

Secondly, T is an expression with free variables, just like P and Q. As a result, you can treat it as a function of its free variables, which is how you can read off the computational complexity -- it will be a function of the free variables of the program (which is where the input comes from). So a valid proof that decrementing by 1 will reduce a variable to 0 in linear time will look like:

{x = n and i = n-x}
while x > 0 do
  {x = n - i and x > 0 and i = (n-x)}
  x := x - 1
  {x = n - i and i = (n-x) - 1; 1}
  {true and i = (n-x) - 1; 1}
{true and not(x>0); 1 + n}
{x = 0; 1 + n}

So you can see that T here is the expression "1 + n", which lets us see that it is linear in the size of x.

Hehner's approach

You may be interested in Hehner's approach, where he embeds variables in the program to track whatever space/time property he is interested in, and then uses the same methods to handle those as he does any other property of the program.
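
Very roughly, the idea (my own illustration, not Hehner's actual notation) is to add an explicit time variable t to the program, incremented wherever a step is charged, and then reason about t with the same assertion machinery as any other variable:

{x = n and n >= 0 and t = 0}
while x > 0 do
  t := t + 1;
  x := x - 1
{x = 0 and t = n}

The running time is then just an ordinary fact about the final value of t.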

See Chapter 4 of A Practical Theory of Programming. The complexity discussion starts around page 46 of this pdf, but you'll probably have to read the earlier parts to understand his notation.

And there's a separate publication, Formalization of Time and Space.