Proving running times of algorithms

Back in university, I only learned to analyse the running times of simple imperative algorithms. I don't see how those techniques extend to higher-order algorithms, or to ones that rely on lazy evaluation. I've looked at papers describing lazy algorithms, but their running times are only analysed in hand-waving fashion. Is that the best anyone can do, or are there better ways? I haven't seen anything that even attempts to describe the running time of higher-order algorithms.

For my own interest, I'm exploring adding support for proofs of worst-case running time to a lazy language with first-class functions, possibly using dependent types, and I'm trying to find out what notation and techniques are already used for doing such proofs by hand.

So, got any papers or languages relevant to this?



Have you looked at Chris Okasaki's Purely Functional Data Structures? He discusses various techniques for analyzing the execution cost of lazily evaluated code.


This is an excellent starting point, and is reasonably thorough while remaining approachable. The implementations of the structures themselves (ML and Haskell, IIRC) are also readily ported to "YourLanguageOfChoice" for further study. In fact, many such ports are already available. The work covers several treatments of runtime analysis, both lazy and eager, and makes some interesting comparisons between them.

I'd be very interested to see similar work done under a dependent type system, as I believe it could have many practical applications.
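For a taste of the kind of structure the book analyzes, here's a minimal sketch (mine, not from the thread) of the classic two-list queue. Okasaki's banker's-method analysis concerns the lazy variant with incremental rotation; this simplified version just shows the invariant the analysis hangs off:

```haskell
-- Sketch of the classic two-list queue (simplified; Okasaki's lazy,
-- banker's-method version rotates incrementally). Elements are enqueued
-- onto a reversed rear list and dequeued from a front list; the expensive
-- 'reverse' happens rarely enough that operations are O(1) amortized.
data Queue a = Queue [a] [a]  -- front list, reversed rear list

empty :: Queue a
empty = Queue [] []

-- Restore the invariant: the front is empty only if the queue is.
check :: Queue a -> Queue a
check (Queue [] r) = Queue (reverse r) []
check q            = q

snoc :: Queue a -> a -> Queue a
snoc (Queue f r) x = check (Queue f (x:r))

uncons :: Queue a -> Maybe (a, Queue a)
uncons (Queue [] _)    = Nothing
uncons (Queue (x:f) r) = Just (x, check (Queue f r))
```

Under strict evaluation the amortized bound breaks down if you reuse an old queue version; the point of the book's banker's method is that laziness plus memoization recovers the bound even with persistence.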

Just to clarify: PFDS was

Just to clarify: PFDS was published as a book.

To further clarify, the

To further clarify, the book has a few additional chapters and revisions not in the thesis.

answers half

Thank you all for your suggestions! I've looked at the thesis version now. It tells me what I wanted to know about dealing with laziness, but as far as I can see it doesn't talk about the running times of higher-order functions.

I've got another paper of Okasaki's lined up to read, since it may deal with that. It's Even Higher-Order Functions for Parsing or Why Would Anyone Ever Want To Use a Sixth-Order Function?.

To clarify, I'm hoping for a way to describe the running time of things like map. Its running time depends on the running time of its first parameter applied to the elements of its second parameter. Map is simple, but what about sortBy, or, scarier still, iterate?
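One hand-rolled way to make that dependence explicit is to pair results with a step count, so map's cost is visibly a sum over the costs its argument incurs. A minimal sketch (the names `Costed`, `tick`, and `mapC` are mine, purely for illustration):

```haskell
-- Hypothetical cost-instrumentation sketch: a result paired with the
-- number of "steps" taken to produce it. Not a proof technique, just a
-- way to make map's cost equation concrete.
type Costed a = (a, Int)  -- (result, steps)

-- Lift a pure function, asserting a claimed per-call cost.
tick :: Int -> (a -> b) -> a -> Costed b
tick c f x = (f x, c)

-- map with explicit accounting: one step per cons, plus whatever the
-- function itself costs on each element.
mapC :: (a -> Costed b) -> [a] -> Costed [b]
mapC _ []     = ([], 0)
mapC f (x:xs) = let (y,  c1) = f x
                    (ys, c2) = mapC f xs
                in (y:ys, 1 + c1 + c2)
```

For example, `mapC (tick 2 (*10)) [1,2,3]` yields `([10,20,30], 9)`: three elements at one step each, plus three calls at two steps each. The hard part the thread is after is expressing that sum in map's *type*; for sortBy the count depends on how many comparisons the algorithm makes, and for iterate the output is infinite, so the cost has to be stated per-element-demanded rather than per-call.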

Some papers on LtU have

Some papers on LtU have demonstrated full inference of the time and space complexity of programs written in a first-order language. I can't find the paper I read, but it stated higher-order analyses were an open problem at the time.

This paper is relevant to complexity analysis in the context of Okasaki's work. This list of papers describes some complexity analysis work for higher-order programs.

Thank you!

Thank you! I haven't had time to read what you linked to, but the topics in that list of papers look like what I hoped to find.

I don't recall seeing that

"Some papers on LtU have demonstrated full inference of the time and space complexity of programs written in a first-order language."

But if you can dig up the papers I'd be very interested :)

If such inference is possible, I'm very surprised there isn't yet a language with a first-order subset where you can verify O(...) running times...

I remember it was a

I remember it was a low-level first-order language targeting embedded systems, but I can't find the right incantations for Google, so perhaps this aspect was not mentioned. This was all I could find. [Edit: there are plenty of results for space complexity analysis, though.]

Here's one candidate for

Here's one candidate for time complexity inference:

Automatic time-bound analysis for a higher-order language (2002)

Here's a much more recent