
Metrics for Programmatic Complexity?

Are there any current metrics for programmatic complexity? For example, there are computational complexity classes for different kinds of problems, and big-O notation for the runtime complexity of algorithms, but is there a measure of how hard it is to actually program a given task in a given system?

For example, let's say the task is converting a list of numbers representing miles into kilometers. In C++, the code would be something like:

float *miles_to_kilometers(float *mile_list, int length) {
  float *kilo_list = new float[length];  // caller owns the returned array
  for (int i = 0; i < length; i++) {
    kilo_list[i] = mile_list[i] * 1.61f;
  }
  return kilo_list;
}

In Perl, it would be something like:

sub miles_to_kilometers {
  my @result = map { $_ * 1.61 } @{$_[0]};
  return \@result;
}

My intuition would be that the Perl version would have a lower programmatic complexity because the loop is built-in rather than explicitly coded. Is there a metric available that evaluates such propositions?

Postfix lambdas

Here is a paper explaining the addition of lambdas to Forth.

In the non-academic space, my language Enchilada has postfix lambdas via the eager macro operator.

Interestingly, Enchilada avoids the name-capture problems that plague Lisp macros (although some Lisp hackers love name capture). Moreover, name capture is detected very efficiently in Enchilada.

Enchilada macros bind with numbers and lists but not with other macros or operators. But because Enchilada lists hold expressions, macros can be indirectly bound to other macros.

Here is a simple (postfix) Enchilada macro, reduced step by step:

5 3 {a b=b a - +}
5 {a=3 a - +}
3 5 - +
3 _5 +
_2

macro variables have lexical scope:

1 2 3 {a={b={c=a b c}}}
1 2 {b={c=3 b c}}
1 {c=3 2 c}
3 2 1

macros can be reduced eagerly:

[1] [2] [3] {a b c==[a] b [c] + +}
[1] [2] {a b==[a] b [[3]] + +}
[1] {a==[a;2;[3]]}
[[1];2;[3]]

and name clashes are avoided by automatic alpha-conversion:

1 2 {a==[a] {a b=a b}}
1 2 {a=={aa=aa [a]}}
1 {aa=aa [2]}
1 [2]

passing and lifting (^) a macro into another macro:

1 2 [{a b==b a}] {d e f==[d] [e] f ^}
1 2 {d e==[e] [d]}
1 {d==[2] [d]}
[2] [1]

Enchilada started as a language similar in design to the Joy language. But I think that, with macros on board, it rivals the expressive power of Lisp.

"Limits to implicit parallelism in functional application"

How much implicit parallelism is there in functional programs?

One answer is "lots (I'd imagine)." Functional languages (and especially non-strict functional languages) can place relatively few constraints on a program's order of evaluation, often allowing multiple evaluations to proceed in parallel--at least in principle. The resulting "implicit" parallelism may be easy and useful to locate and exploit.

Another answer is "not so much (in actual experience)." When you take an existing, non-trivial functional program and try to find practical ways to speed it up by judicious placement of `par` (or by strategies or whatever), a surprisingly large effort can produce a surprisingly small improvement. So maybe that implicit parallelism just isn't there.

Another is "well, maybe it's there but it's just hard to find." If people can't find much implicit parallelism in real software, maybe it's the fault of the people. Maybe `par` and strategies and such are just too hard for people to use. Maybe some automatic parallelization mechanisms, yet to be invented, could do much better. Or something.

One interesting new answer is "I don't know. Let's measure it." I've just written a short paper on measuring the implicit parallelism in real Haskell programs, such as the NHC98 compiler. These measurements are in the context of a limit study, so they let us say that there is _no more_ than some certain amount of implicit parallelism present; exploiting even that much might of course be impractical.
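
To make the notion of a limit concrete: one standard way to bound implicit parallelism (not necessarily the method used in the paper) is to model the computation as a graph of dependent tasks and compare total work against the critical path; their ratio bounds the speedup any scheduler could achieve. A toy sketch, with hypothetical names:

-- Toy limit-study illustration (mine, not the paper's method): model a
-- computation as a tree of tasks with abstract costs and dependences.
data Task = Task { cost :: Int, deps :: [Task] }

-- Total work: every task must be executed.
work :: Task -> Int
work t = cost t + sum (map work (deps t))

-- Critical path ("span"): the longest chain of dependences.
critPath :: Task -> Int
critPath t = cost t + maximum (0 : map critPath (deps t))

-- Work / span is an upper bound on achievable speedup,
-- no matter how cleverly the tasks are scheduled.
parallelismBound :: Task -> Double
parallelismBound t = fromIntegral (work t) / fromIntegral (critPath t)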

The paper is at http://www.detreville.org/papers/Limits.pdf. I'd appreciate comments, of course.

Cheers,
John